# Human Activity Prediction
### Libraries
```
## Modelling
library(caret); library(rattle); library(randomForest); library(e1071); library(forecast)
## Data processing/visualization
library(dplyr); library(ggplot2)
```
#### Getting the data
Downloading
```
if (!dir.exists("./data")) dir.create("./data")
train_URL = "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv"
test_URL = "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv"
download.file(train_URL, destfile = "./data/train.csv")
download.file(test_URL, destfile = "./data/test.csv")
```
Loading
```
train_raw = read.csv("./data/train.csv")
test_raw = read.csv("./data/test.csv")
head(train_raw)
dim(train_raw); dim(test_raw)
```
## Data cleaning
There are a lot of unnecessary variables, some of which are highly correlated.
The 'raw_timestamp' columns are used to derive the converted timestamp 'cvtd_timestamp'.
I remove X, user_name, new_window, and num_window to simplify the model further, and
raw_timestamp_part_1, raw_timestamp_part_2, and cvtd_timestamp because our goal is to identify the class of activity, and time-series information plays no significant role in assessing this.
Looking for **missing** data
```
missing = union(as.numeric(which(sapply(train_raw, function(x){mean(is.na(x))})>0.80)),
as.numeric(which(sapply(train_raw, function(x){mean(x=="", na.rm = T)})>0.80)))
missing = union(missing, c(1,2,3,4,5,6,7))
missing
```
Removing **zero** or **near-zero covariates**
```
train_pre_clean = train_raw[,-missing]
nzv = nearZeroVar(train_pre_clean)
nzv
```
An empty result suggests that all of the remaining covariates contribute significant information to the predictive model.
**Cleaned** dataset
```
train_clean = train_raw[,-missing]
test_clean = test_raw[,-missing]
dim(train_clean); dim(test_clean)
head(train_clean)
```
## Modelling a Random Forest classifier
Splitting the training dataset into train and validation sets
```
inTrain = createDataPartition(train_clean$classe, p = 0.75, list = F)
training = train_clean[inTrain,]
validation = train_clean[-inTrain,]
```
#### Building classifier
```
train_control_params = trainControl(method="repeatedcv", number=10, repeats= 5)
mdl_rf = train(classe~., method = "rf", trControl=train_control_params, ntree = 10, data = training)
```
#### Evaluating the model on validation set
```
val_preds = predict(mdl_rf, newdata = validation)
confusionMatrix(val_preds, validation$classe)
```
### Predicting test set
```
test_preds = predict(mdl_rf, test_clean)
test_preds
```
### Visualizing decision tree
```
mdl_rpart = train(classe~., method = "rpart", data = training)
fancyRpartPlot(mdl_rpart$finalModel, sub="")
```
# Gensim
> Gensim is designed to automatically extract semantic topics from documents, as efficiently (computer-wise) and painlessly (human-wise) as possible.
> Gensim is designed to process raw, unstructured digital texts (“plain text”). The algorithms in gensim, such as Latent Semantic Analysis, Latent Dirichlet Allocation and Random Projections discover semantic structure of documents by examining statistical co-occurrence patterns of the words within a corpus of training documents. These algorithms are unsupervised, which means no human input is necessary – you only need a corpus of plain text documents.
> Once these statistical patterns are found, any plain text documents can be succinctly expressed in the new, semantic representation and queried for topical similarity against other documents.
## From Strings to Vectors
Let’s start from documents represented as strings:
```
from gensim import corpora
# This is a tiny corpus of nine documents, each consisting of only a single sentence.
documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]
```
First, let’s tokenize the documents, remove common words (using a toy stoplist) as well as words that only appear once in the corpus:
```
# Remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]
print(texts)
# Remove words that appear only once
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
    for token in text:
        frequency[token] += 1
texts = [[token for token in text if frequency[token] > 1] for text in texts]
print(texts)
#pretty-printer
from pprint import pprint
pprint(texts)
```
To convert documents to vectors, we’ll use a document representation called **bag-of-words**. In this representation, each document is represented by one vector where a vector element i represents the number of times the ith word appears in the document.
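As a plain-Python illustration of the counting idea (independent of gensim's API), bag-of-words vectors can be built with `collections.Counter`; the tiny `vocab` mapping below is assumed for illustration only:

```python
from collections import Counter

# Toy vocabulary: word -> integer id (assumed for illustration only).
vocab = {"human": 0, "interface": 1, "computer": 2}

def bow(document, vocab):
    """Return a sparse (id, count) list for tokens present in the vocabulary."""
    counts = Counter(tok for tok in document.lower().split() if tok in vocab)
    return sorted((vocab[tok], n) for tok, n in counts.items())

print(bow("Human computer interaction", vocab))  # -> [(0, 1), (2, 1)]
```

Tokens absent from the vocabulary (like "interaction") simply never appear in the sparse result, exactly as with gensim's `doc2bow`.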
It is advantageous to represent the words only by their (integer) ids. The mapping between the words and their ids is called a dictionary:
```
dictionary = corpora.Dictionary(texts)
# we assign a unique integer ID to all words appearing in the processed corpus
# this sweeps across the texts, collecting word counts and relevant statistics.
# In the end, we see there are twelve distinct words in the processed corpus, which means each document will be represented by twelve numbers (ie., by a 12-D vector).
print(dictionary)
# To see the mapping between the words and their ids
print(dictionary.token2id)
# To convert tokenized documents to vectors
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
print(new_vec)
```
The function `doc2bow()` simply counts the number of occurrences of each distinct word, converts the word to its integer word id, and returns the result as a bag-of-words: a sparse vector of the form `[(word_id, word_count), ...]`.
As the token_id is 0 for "human" and 2 for "computer", the new document "Human computer interaction" will be transformed to [(0, 1), (2, 1)]. The words "computer" and "human" exist in the dictionary and appear once, so they become (0, 1) and (2, 1) respectively in the sparse vector. The word "interaction" doesn't exist in the dictionary and thus will not show up in the sparse vector. The other ten dictionary words, which appear (implicitly) zero times, will not show up either; there will never be an element in the sparse vector like (3, 0).
```
corpus = [dictionary.doc2bow(text) for text in texts]
for c in corpus:
    print(c)
```
# Demo of Ch2. Linear Classifier
----
This is the sample code of TU-ETP-AD1062 Machine Learning Fundamentals.
For more information, please refer to:
https://sites.google.com/view/tu-ad1062-mlfundamentals/
## Import packages
----
- `numpy`: Provide linear algebra related computation ability, with `norm` used to measure the l2-norm of matrices and vectors
- `sklearn`: Scikit-Learn, provides basic data analysis and machine learning methods functionality
- `mlfund`:
- `dataset`: Used to generate data in normal distribution
- `Plot2D`: Used to plot the figure, implemented by using `matplotlib`
```
import numpy as np
from numpy.linalg import norm
import sklearn.metrics
import sklearn.svm
import sklearn.linear_model
from mlfund.dataset import Gaussian, GaussianParam
from mlfund.plot import Plot2D
%matplotlib inline
```
## 2.2. Perceptron
### Demo 2.2.1. Implement Perceptron Algorithm by the Simplest Gradient Descent
----
#### Perceptron Algorithm Implementation
The demo here shows how to use the simplest gradient descent (which leverages fixed learning rate `self._mu`) to implement Perceptron algorithm. Here's the method details:
##### `__delta_x(self, X_augmented, y)`:
For each augmented sample $\mathbf{x}$ stored in `X_augmented`, denote each wrongly classified sample $\mathbf{x}\in Y$ by $\delta_{\mathbf{x}}$, defined as follows:
$$
\delta_{\mathbf{x}}=\left\{
\begin{array}{ll}
-1, \text{if } \mathbf{x} \in \omega_{1}, \\
+1, \text{if } \mathbf{x} \in \omega_{2}
\end{array}
\right.
$$
##### `__gradient_w(self, X_augmented, y)`:
For each augmented sample $\mathbf{x}$ stored in `X_augmented`, compute the gradient with respect to $\mathbf{w}$ (i.e., `self._w`) as follows:
$$
\nabla J\left(\mathbf{w}\right) = \sum_{\mathbf{x}\in Y}\delta_{\mathbf{x}}\mathbf{x}
$$
##### `decision_function(self, X)`:
For each sample $\mathbf{x}$ stored in `X`, compute the value of $\mathbf{w}^T\mathbf{x}$
##### `cost(self, X, y)`:
For each sample $\mathbf{x}$ stored in `X`, compute the value of the Perceptron cost function:
$$
J\left(\mathbf{w}\right)=\sum_{\mathbf{x}\in Y} \delta_{\mathbf{x}} \mathbf{w}^T\mathbf{x}
$$
##### `fit(self, X, y)`:
Train the Perceptron with the following steps:
> - $\mathbf{w}_0 =$ Random init()
> - while (iteration < max_iteration)
> - $\mathbf{w}_{t+1} = \mathbf{w}_{t} - \mu \cdot \nabla J\left(\mathbf{w}\right)$
> - if ( $||\nabla J\left(\mathbf{w}_{t+1}\right)||^2$ < tolerance value )
> - break
> - return $\mathbf{w}_{tlast}$
Notice:
1. For the purpose of visualization, we don't use a randomly initialized $\mathbf{w}_0$ here; we use the fixed vector `[-1, 2, 0]` instead.
2. We don't return $\mathbf{w}_{tlast}$ directly. Instead, we store it in `self._w` for object-oriented purposes.
##### `predict(self, X)`:
For each sample $\mathbf{x}$ stored in `X`, predict the label to `-1` or `+1` by using the trained parameter `self._w`.
```
class HandCraftedBinaryPerceptron:
    def __init__(self):
        self._w = None
        self._mu = 0.01
        self._max_itr = 50
        self._verbose_log = True

    def __log(self, title, cost, X, y):
        if self._verbose_log:
            print('%s, w = %s, cost: %2.5f' % (title, self._w.__str__(), cost))
            plot = Plot2D()
            plot.scatter(X, y)
            plot.classifierContour(X, y, self)
            plot.show()

    def __validate_data_type(self, X, y):
        assert isinstance(X, np.ndarray)
        assert isinstance(y, np.ndarray)
        assert len(np.unique(y)) == 2, '`%s` allows binary classification only, whereas input labels `y` contains %d different labels.' % (HandCraftedBinaryPerceptron.__name__, len(np.unique(y)))
        assert set(np.unique(y)) == set([1, -1]), 'Labels in `y` allows +1 and -1 only.'

    def __delta_x(self, X_augmented, y):
        err_indices = np.array(X_augmented.dot(self._w) * y < 0, dtype='int')
        return -1 * np.multiply(err_indices, y)

    def __gradient_w(self, X_augmented, y):
        delta_x = self.__delta_x(X_augmented, y)
        return np.sum(np.multiply(X_augmented, np.repeat(delta_x.reshape((len(y), 1)), X_augmented.shape[1], axis=1)), axis=0)

    def decision_function(self, X):
        X_augmented = np.hstack((X, np.ones((len(X), 1))))
        return X_augmented.dot(self._w.transpose())

    def cost(self, X, y):
        decision_values = self.decision_function(X)
        err_indices = decision_values * y < 0
        return np.sum(np.abs(decision_values[err_indices]))

    def fit(self, X, y):
        self.__validate_data_type(X, y)
        X_augmented = np.hstack((X, np.ones((len(X), 1))))
        self._w = np.array([-1, 2, 0])
        _cost = self.cost(X, y)
        self.__log('Initial', _cost, X, y)
        for i in range(self._max_itr):
            grad_w = self.__gradient_w(X_augmented, y)
            self._w = self._w - self._mu * grad_w
            _cost = self.cost(X, y)
            self.__log('Iteration %d' % i, _cost, X, y)
            if norm(grad_w, 2) < 1e-4:
                print('Converged at iteration %d, with cost = %2.3f' % (i, _cost))
                break

    def predict(self, X):
        assert isinstance(self._w, np.ndarray)
        assert isinstance(X, np.ndarray)
        decision_values = self.decision_function(X)
        ret = np.zeros(len(decision_values), dtype='int')
        ret[decision_values > 0.0] = 1
        ret[decision_values <= 0.0] = -1
        return ret
```
#### Demo of the Hand-crafted Perceptron Algorithm
- Generate two groups of normally distributed data
- Train with the `HandCraftedBinaryPerceptron`
```
# Generate Training data and plot it
np.random.seed(0)
params_train = []
param = GaussianParam()
param.mean = [-1, 2.5]
param.cov = [[1, 5], [0, 1]]
param.N = 100
params_train.append(param)
param = GaussianParam()
param.mean = [1, -2.5]
param.cov = [[1, 5], [0, 1]]
param.N = 100
params_train.append(param)
X_train, y_train = Gaussian.generate(params_train, label_type='positive_negative')
clf = HandCraftedBinaryPerceptron()
clf.fit(X_train, y_train)
plot = Plot2D()
plot.scatter(X_train, y_train)
plot.classifierContour(X_train, y_train, clf)
plot.show()
```
### Demo 2.2.2. Perceptron of Scikit-Learn
----
The demo here shows how to generate two normally distributed groups of data, then classify them with Scikit-learn's built-in Perceptron algorithm.
#### Data Generation
Here we generate the data as follows:
1. Generate 200 training samples `X_train`, with corresponding labels `y_train`
2. Generate 100 testing samples `X_test`, with corresponding labels `y_test`
```
# Generate Training data
np.random.seed(0)
params_train = []
param = GaussianParam()
param.mean = [-1, 2.5]
param.cov = [[1, 5], [0, 1]]
param.N = 100
params_train.append(param)
param = GaussianParam()
param.mean = [1, -2.5]
param.cov = [[1, 5], [0, 1]]
param.N = 100
params_train.append(param)
X_train, y_train = Gaussian.generate(params_train)
plot = Plot2D()
plot.title('Training data')
plot.scatter(X_train, y_train)
plot.show()
# Generate testing data
params_test = []
param = GaussianParam()
param.mean = [-1, 2.5]
param.cov = [[1, 5], [0, 1]]
param.N = 50
params_test.append(param)
param = GaussianParam()
param.mean = [1, -2.5]
param.cov = [[1, 5], [0, 1]]
param.N = 50
params_test.append(param)
X_test, y_test = Gaussian.generate(params_test)
plot = Plot2D()
plot.title('Test data')
plot.scatter(X_test, y_test)
plot.show()
```
#### Training and Predicting
Train a Perceptron model (built into Scikit-learn) on `X_train`, then predict the labels of `X_test` and compute the MCE.
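MCE here stands for the mean classification error (zero-one loss): the fraction of samples whose predicted label differs from the true one. A minimal sketch of the same quantity that `sklearn.metrics.zero_one_loss` reports:

```python
import numpy as np

def mce(y_true, y_pred):
    """Mean classification error: fraction of labels that differ."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(y_true != y_pred))

print(mce([1, -1, 1, 1], [1, 1, 1, -1]))  # -> 0.5
```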
```
clfPLA = sklearn.linear_model.Perceptron()
clfPLA.fit(X_train, y_train)
y_test_predict = clfPLA.predict(X_test)
print("Training data:")
plot = Plot2D()
plot.scatter(X_train, y_train)
plot.classifierContour(X_train, y_train, clfPLA)
plot.show()
print("Testing data:")
print("MCE = %2.3f" % sklearn.metrics.zero_one_loss(y_test, y_test_predict))
plot = Plot2D()
plot.scatter(X_test, y_test)
plot.classifierContour(X_test, y_test, clfPLA)
plot.show()
```
## 2.3. Support Vector Machine (SVM)
### Demo 2.3.1. c-Support Vector Machine (c-SVC)
----
The demo here trains an SVM model on `X_train`, then predicts on the testing data `X_test`.
Notice that:
1. The number of support vectors is available via the attribute `clfSVC.support_vectors_`
2. The support vectors are drawn via the wrapped function `mlfund.scatterSV`
```
clfSVC = sklearn.svm.SVC(C=1, kernel='linear')
clfSVC.fit(X_train, y_train)
y_test_predict = clfSVC.predict(X_test)
print("Training data:")
print("#SV = %d" % len(clfSVC.support_vectors_))
plot = Plot2D()
plot.scatter(X_train, y_train)
plot.scatterCSVC(clfSVC)
plot.classifierContour(X_train, y_train, clfSVC)
plot.show()
print("Testing data:")
print("MCE = %2.3f" % sklearn.metrics.zero_one_loss(y_test, y_test_predict))
plot = Plot2D()
plot.scatter(X_test, y_test)
plot.classifierContour(X_test, y_test, clfSVC)
plot.show()
```
### Demo 2.3.2. c-Support Vector Machine (c-SVC) - A More Crowded Case
----
The demo here uses the same settings for the c-SVC model, but learns from more crowded data. One can adjust the value of `C` to observe the support vectors being relaxed by slack variables:
* The larger `C`, the fewer support vectors (due to a higher penalty on $\xi_i$), but the smaller the margin
* The smaller `C`, the more support vectors (due to a lower penalty on $\xi_i$), but the larger the margin
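One way to observe this tradeoff directly is to sweep `C` and count the support vectors. Since `X_train` in this demo comes from the course's `mlfund` package, the sketch below substitutes two synthetic Gaussian blobs:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Two overlapping Gaussian blobs standing in for the demo's training data.
X = np.vstack([rng.randn(100, 2) + [0, 2], rng.randn(100, 2) + [0, -2]])
y = np.array([1] * 100 + [-1] * 100)

for C in (0.01, 1.0, 100.0):
    clf = SVC(C=C, kernel="linear").fit(X, y)
    print("C=%7.2f  #SV=%d" % (C, len(clf.support_vectors_)))
```

The support-vector count should shrink as `C` grows, matching the bullet points above.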
```
# Generate Training data and plot it
np.random.seed(0)
params_train = []
param = GaussianParam()
param.mean = [-0.3, 2]
param.cov = [[1, 5], [0, 1]]
param.N = 100
params_train.append(param)
param = GaussianParam()
param.mean = [0.3, -2]
param.cov = [[1, 5], [0, 1]]
param.N = 100
params_train.append(param)
X_train, y_train = Gaussian.generate(params_train)
# Generate testing data
params_test = []
param = GaussianParam()
param.mean = [-0.3, 2]
param.cov = [[1, 5], [0, 1]]
param.N = 50
params_test.append(param)
param = GaussianParam()
param.mean = [0.3, -2]
param.cov = [[1, 5], [0, 1]]
param.N = 50
params_test.append(param)
X_test, y_test = Gaussian.generate(params_test)
clfSVC = sklearn.svm.SVC(C=1000, kernel='linear')
clfSVC.fit(X_train, y_train)
y_test_predict = clfSVC.predict(X_test)
print("Training data:")
print("#SV = %d" % len(clfSVC.support_vectors_))
plot = Plot2D()
plot.scatter(X_train, y_train)
plot.scatterCSVC(clfSVC)
plot.classifierContour(X_train, y_train, clfSVC)
plot.show()
print("Testing data:")
print("MCE = %2.3f" % sklearn.metrics.zero_one_loss(y_test, y_test_predict))
plot = Plot2D()
plot.scatter(X_test, y_test)
plot.classifierContour(X_test, y_test, clfSVC)
plot.show()
```
### MEDC0106: Bioinformatics in Applied Biomedical Science
<p align="center">
<img src="../../resources/static/Banner.png" alt="MEDC0106 Banner" width="90%"/>
<br>
</p>
---------------------------------------------------------------
# 12 - Introduction to Biopython - Proteins Exercises
*Written by:* Mateusz Kaczyński
**This notebook contains the exercises covering the basic protein analysis and search.**
## Contents
1. [Plotting relative mutability](#Plotting-relative-mutability)
2. [BLAST and analyse](#BLAST-and-analyse)
-----
**Remember to save your results!**
#### Imports
Some imports you may, or may not, need to complete the tasks (run this before you attempt the exercises).
```
%matplotlib notebook
import matplotlib.pyplot as plt
from urllib.request import urlretrieve
from Bio import SeqIO
from Bio.Seq import Seq
```
## Plotting relative mutability
In this exercise, we will use the relative amino acid mutability scale as outlined in *Dayhoff M.O., Schwartz R.M., Orcutt B.C. In "Atlas of Protein Sequence and Structure", Vol.5, Suppl.3 (1978).* Their work includes a table presenting experimentally derived mutation probabilities relative to alanine.
1. Obtain a FASTA file for any protein of interest (e.g. using [Uniprot](https://uniprot.org)). *You can provide the sequence by hand if you find downloading too slow.*
2. Plot the relative (Ala=100) mutability of the protein regions, using a 15-residue sliding window.
```
aminoacid_relative_mutability = {
"A": 100, "C": 20, "D": 106, "E": 102, "F": 41,
"G": 49, "H": 66, "I": 96, "K": 56, "L": 40,
"M": 94, "N": 134, "P": 56, "Q": 102, "R": 65,
"S": 120, "T": 97, "V": 74, "W": 18, "Y": 41
}
# Write your solution here, adding more cells if necessary.
```
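One possible sketch of the windowed-averaging part of step 2 (plotting is left as in the exercise); it re-uses the `aminoacid_relative_mutability` table defined above:

```python
# Dayhoff relative mutability (Ala = 100), as defined in the exercise above.
aminoacid_relative_mutability = {
    "A": 100, "C": 20, "D": 106, "E": 102, "F": 41,
    "G": 49, "H": 66, "I": 96, "K": 56, "L": 40,
    "M": 94, "N": 134, "P": 56, "Q": 102, "R": 65,
    "S": 120, "T": 97, "V": 74, "W": 18, "Y": 41,
}

def windowed_mutability(sequence, window=15):
    """Mean relative mutability over a sliding window of the given width."""
    scores = [aminoacid_relative_mutability[aa] for aa in sequence]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

# A poly-alanine stretch should score exactly 100 everywhere.
print(windowed_mutability("A" * 20)[:3])  # -> [100.0, 100.0, 100.0]
```

The resulting list can be passed straight to `plt.plot` to visualize mutable regions.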
## BLAST and analyse
1. Run the provided protein sequence against the NCBI non-redundant protein sequence database.
2. Download the first available protein sequence hits (don't worry if you don't know how to do it with python, copy-paste the sequences by hand).
3. Calculate the molecular weight of the two proteins. Which one is heavier?
4. Using `aminoacid_relative_mutability` dictionary from the previous exercise, which of the two is more prone to mutation?
```
sequence = """
EVSIIQSMGYRNRAKRLLQSEPENPSLQETSLSVQLSNLGTVRTLRTKQRIQPQKTSVYI
ELGSDSSEDTVNKATYCSVGDQELLQITPQGTRDEISLDSAKKAACEFSETDVTNTEHHQ
PSNNDLNTTEKRAAERHPEKYQGSSVSNLHVEPCGTNTHASSLQHENSSLLLTKDRMNVE
KAEFCNKSKQPGLARSQHNRWAGSKETCNDRRTPSTEKKVDLNADPLCERKEWNKQKLPC
"""
# Write your solution here, adding more cells if necessary.
```
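For step 3, Biopython's `ProteinAnalysis` (in `Bio.SeqUtils.ProtParam`) can compute molecular weight directly. As a library-free sketch, the weight can also be approximated by summing average residue masses plus one water molecule; the mass values below are assumed from standard tables and are approximate:

```python
# Approximate average residue masses in daltons (assumed standard values),
# plus one water molecule for the free termini.
RESIDUE_MASS = {
    "A": 71.08, "R": 156.19, "N": 114.10, "D": 115.09, "C": 103.14,
    "E": 129.12, "Q": 128.13, "G": 57.05, "H": 137.14, "I": 113.16,
    "L": 113.16, "K": 128.17, "M": 131.19, "F": 147.18, "P": 97.12,
    "S": 87.08, "T": 101.10, "W": 186.21, "Y": 163.18, "V": 99.13,
}
WATER = 18.02

def molecular_weight(sequence):
    """Rough molecular weight (Da) of a protein sequence."""
    sequence = "".join(sequence.split())  # drop whitespace/newlines
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

print(round(molecular_weight("GG"), 2))  # -> 132.12
```

Applying this to the two sequences answers "which one is heavier"; the same per-residue loop pattern works for step 4 with `aminoacid_relative_mutability`.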
```
import json
import logging
from collections import OrderedDict
from typing import Optional
import coloredlogs
from ph4_walkingpad.profile import Profile, calories_walk2_minute, calories_rmrcb_minute
from ph4_walkingpad.analysis import StatsAnalysis
import scipy
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
logger = logging.getLogger(__name__)
coloredlogs.CHROOT_FILES = []
coloredlogs.install(level=logging.INFO)
an = StatsAnalysis()
an.profile_file = '/Users/dusanklinec/workspace/ph4-walkingpad/profile.json'
an.stats_file = '/Users/dusanklinec/walking.json'
an.load_profile()
an.load_stats(6, collect_details=True)
for ix, m in enumerate(an.loaded_margins):
    print('============================================= Margins: %s, segms: %s' % (ix, len(m)))
    print(json.dumps(an.remove_records([m])[0], indent=2))
res = an.load_last_stats()
print(res)
import itertools
mm = an.loaded_margins[0]
acc = []
for r in mm:
    if '_records' in r:
        acc += list(r['_records'])
    else:
        acc.append(r)
factor = 3*60
srter = lambda x: (x['time'], x['steps'])
acc.sort(key=srter)
times = []
step_size = []
for g, k in itertools.groupby(acc, key=lambda x: int(x['time'] / factor)):
    trecs = list(k)
    min_rec, max_rec = trecs[0], trecs[-1]
    dist_el = max_rec['dist'] - min_rec['dist']
    if dist_el <= 0:
        logger.warning('Elapsed distance is zero or negative for %s: %s' % (g, dist_el,))
        continue
    steps_el = max_rec['steps'] - min_rec['steps']
    cstep_size = dist_el * 10 / steps_el
    if cstep_size < 0.01:
        continue
    times.append(min_rec['time'])
    step_size.append(cstep_size)
plt.figure(figsize=(16, 12))
data_plot = pd.DataFrame({"time": times, "step_size": step_size})
print(data_plot)
sns.lineplot(x = "time", y = "step_size", data=data_plot)
plt.show()
# calorie burn depending on the speed
def compute_walk_burn(profile):
    cal_data = []
    for speed in range(5, 61, 5):
        el_time = 60 * 15  # 15 minutes, in seconds
        ccal = (el_time / 60) * calories_walk2_minute(speed / 10, profile.weight, 0.00)
        ccal_net = ccal - (el_time / 60) * calories_rmrcb_minute(profile.weight, profile.height,
                                                                 profile.age, profile.male)
        cal_data.append({'speed': speed / 10, 'cal': ccal, 'net': ccal_net})
        print('Speed: %4.1f, ccal: %7.2f, net: %7.2f' % (speed / 10, ccal, ccal_net))

    plt.figure(figsize=(16, 12))
    data_plot = pd.DataFrame(cal_data)
    sns.lineplot(x="speed", y="cal", data=data_plot)
    sns.lineplot(x="speed", y="net", data=data_plot)
    plt.show()
compute_walk_burn(an.profile)
```
# Logistic Regression
(before Multinomial logistic regression)
We want to predict the probability of an input belonging to one of two classes.
---
## Study case :
Classify the zero and one digits from the MNIST dataset
### a) Dataset !
- Input: images of size 28×28 containing a handwritten zero or a one
- Output: 0 if the input is a 0, 1 otherwise
Let's... load it, check it, and quickly rewrite it!
```
import keras
import utils
from importlib import reload  # Python 3: reload lives in importlib
reload(utils)
from utils import *
%pylab inline
# Get the input data (All the zeros and ones in the dataset)
(x, y), (x_, y_) = keras.datasets.mnist.load_data()
X = x[np.where(y<=1)]
Y = y[np.where(y<=1)]
Y = np.array(Y, dtype='int')
# Reshape the images to vectors
X = X.reshape(X.shape[0], -1)
X = X / 255. # Normalize inputs
# Visualize the digits
pylab.rcParams['figure.figsize'] = (15, 4)
for i in range(12):
    plt.subplot(2, 6, i + 1)
    plt.imshow(X[i].reshape([28, 28]))
plt.show()
```
---
### b) Classifier
We use a logistic regression
(you may want to read this: http://cs229.stanford.edu/notes/cs229-notes1.pdf).
The cost is a function of the true output $Y$ and the prediction $p$, which is itself a function of a linear activation $s(x)$:
- linear unit : $ s = (W^t \cdot X + b) $
- prediction : $ p(s) = \frac{1}{1 + e^{-s}} $
- Cost : $ C(y, p) = - y \ln(p) - (1-y)(\ln(1-p)) $
To use gradient descent, we have to compute the gradient of the cost with respect to w :
$ \frac{dC}{dW} $
We take advantage of the chain rule:
$ \frac{dC}{dW} = \frac{dC}{dp} \cdot \frac{dp}{ds} \cdot \frac{ds}{dw} $
---
We derive each term:
\begin{align}
\frac{dC}{dp} &= - \frac{y}{p} - (-1) \cdot \frac{1-y}{1-p} \\
&= - \frac{y}{p} + \frac{1-y}{1-p} \\
&= \frac{-y + y.p + p - y.p}{p \cdot (1-p)} \\
&= \frac{-y+p}{p \cdot (1-p)}
\end{align}
---
\begin{align}
\frac{dp}{ds} &= \frac{e^{-s}}{(1 + e^{-s})^2} \\
&= \frac{e^{-s} + 1 - 1}{(1 + e^{-s})^2} \\
&= \frac{e^{-s} + 1}{(1 + e^{-s})^2} - \frac{1}{(1 + e^{-s})^2} \\
&= \frac{1}{1 + e^{-s}} - \left(\frac{1}{1 + e^{-s}}\right)^2 \\
&= p - p^2 \\
&= p \cdot (1-p)
\end{align}
---
\begin{align}
\frac{ds}{dw} = x
\end{align}
---
Altogether, we have:
\begin{align}
\frac{dC}{dW} &= \frac{dC}{dp} \cdot \frac{dp}{ds} \cdot \frac{ds}{dw} \\
&= \frac{-y+p}{p \cdot (1-p)} \cdot p \cdot (1-p) \cdot x \\
&= (-y+p) \cdot x \\
&= (p-y) \cdot x
\end{align}
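The closed-form gradient $(p-y)\cdot x$ can be sanity-checked against a finite-difference approximation of the cost. A quick numerical sketch:

```python
import numpy as np

sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))
# Binary cross-entropy cost for a single sample, as derived above.
cost = lambda w, x, y: -y * np.log(sigmoid(w @ x)) - (1 - y) * np.log(1 - sigmoid(w @ x))

rng = np.random.RandomState(0)
w, x, y = rng.randn(3), rng.randn(3), 1

analytic = (sigmoid(w @ x) - y) * x  # (p - y) * x from the derivation

# Central finite differences, one coordinate at a time.
numeric = np.zeros_like(w)
eps = 1e-6
for i in range(3):
    dw = np.zeros_like(w)
    dw[i] = eps
    numeric[i] = (cost(w + dw, x, y) - cost(w - dw, x, y)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-5))  # -> True
```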
```
# Set-up the weights
W = np.random.random((784,))-.5
# Train
for _ in range(2):
    acc = []
    losses = []
    for x, y in zip(X, Y):
        pred = linear(x, W)
        pred = sigmoid(pred)
        acc.append(round(pred) == y)
        loss = nll(pred, y)
        losses.append(loss)
        update = (pred - y) * x
        W = W - .02 * update
    print(sum(acc) / len(acc), sum(losses) / len(losses))
gen = batch_generator(1)
valid_gen = batch_generator(100)
X_valid, Y_valid = next(valid_gen)
W = np.random.normal(size=IMG_SIZE * IMG_SIZE)
b = np.random.normal()
log = lambda x: np.log(x + 1e-8)
exp = lambda x: np.exp(x + 1e-8)
alph_ = 1.6732632423543772848170429916717
lambd_ = 1.0507009873554804934193349852946
linear = lambda x: np.dot(W.T, x) + b
sigm = lambda x: 1 / (1 + exp(-x))
elu = lambda x, alpha: np.maximum(x, alpha * (exp(x) - 1))
selu = lambda x: lambd_ * elu(x, alph_)
nll = lambda p, y: - y * log(p) - (1 - y) * log(1 - p)
def prob(X):
    return sigm(linear(X))

def loss(X, y):
    # loss = - y * ln(sigm(WT.X + b)) - (1 - y) * ln(1 - sigm(WT.X + b))
    p = prob(X)
    return nll(p, y)

def gradient_loss(X, y):
    # d.loss / d.W = (p - y) . X
    p = prob(X)
    return (p - y) * X

def evaluate():
    probs = np.array([prob(x) for x in X_valid])
    loss = nll(probs, Y_valid)
    loss = loss.mean()
    probs = np.round(probs)
    accuracy = np.sum(probs == Y_valid)
    return accuracy, loss
losses = []
alpha = 0.001
for epoch in range(60):
    _loss = 0
    alpha *= 0.95
    for _ in range(2000):
        X, Y = next(gen)
        X, Y = X[0], Y[0]
        _loss += loss(X, Y)
        W = W - alpha * gradient_loss(X, Y)
    losses.append(_loss / 2000)
    print(epoch, losses[-1], evaluate(), alpha)
plt.plot(losses)
plt.show()
def prob(X):
    return sigm(selu(linear(X)))

def loss(X, y):
    # loss = - y * ln(sigm(WT.X + b)) - (1 - y) * ln(1 - sigm(WT.X + b))
    p = prob(X)
    return nll(p, y)

def gradient_loss(X, y):
    # d.loss / d.W = (p - y) . X
    p = prob(X)
    if linear(X) <= 0:
        return X * (p - y) * (p + lambd_ * lambd_)
    else:
        return X * (p - y) * lambd_

def evaluate():
    probs = np.array([prob(x) for x in X_valid])
    loss = nll(probs, Y_valid)
    loss = loss.mean()
    probs = np.round(probs)
    accuracy = np.sum(probs == Y_valid)
    return accuracy, loss

losses = []
alpha = 0.001
for epoch in range(30):
    _loss = 0
    alpha *= 0.95
    for _ in range(2000):
        X, Y = next(gen)
        X, Y = X[0], Y[0]
        _loss += loss(X, Y)
        W = W - alpha * gradient_loss(X, Y)
    losses.append(_loss / 2000)
    print(epoch, losses[-1], evaluate(), alpha)
plt.plot(losses)
plt.show()
```
```
# HIDDEN
from datascience import *
import matplotlib
matplotlib.use('Agg', warn=False)
%matplotlib inline
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
import numpy as np
# Interaction
from IPython.display import display
from functools import partial
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import nbinteract as nbi
```
### Empirical Distributions ###
Much of data science involves data from large random samples. In this section we will examine some properties of such samples.
We will start with a simple experiment: rolling a die multiple times and keeping track of which face appears. The table `die` contains the numbers of spots on the faces of a die. All the numbers appear exactly once, as we are assuming that the die is fair.
```
die = Table().with_column('Face', np.arange(1, 7, 1))
die
```
### A Probability Distribution ###
The histogram below helps us visualize the fact that every face appears with probability 1/6. We say that the histogram shows the *distribution* of probabilities over all the possible faces. Since all the bars represent the same percent chance, the distribution is called *uniform on the integers 1 through 6.*
```
die_options = {'bins': 6, 'xlim': (1, 7), 'ylim': (0, 0.2)}
nbi.hist(die.column('Face'), options=die_options)
```
Variables whose successive values are separated by the same fixed amount, such as the values on rolls of a die (successive values separated by 1), are called *discrete*. The histogram above is called a *discrete* histogram. Its bins are specified by the `die_options` settings above, so that there is one bar for each face.
It is important to remember that the die can't show 1.3 spots, or 5.2 spots – it always shows an integer number of spots. But our visualization spreads the probability of each value over the area of a bar. While this might seem a bit arbitrary at this stage of the course, it will become important later when we overlay smooth curves over discrete histograms.
Before going further, let's make sure that the numbers on the axes make sense. The probability of each face is 1/6, which is 16.67% when rounded to two decimal places. The width of each bin is 1 unit. So the height of each bar is 16.67% per unit. This agrees with the horizontal and vertical scales of the graph.
### Empirical Distributions ###
The distribution above consists of the theoretical probability of each face. It is not based on data. It can be studied and understood without any dice being rolled.
*Empirical distributions,* on the other hand, are distributions of observed data. They can be visualized by *empirical histograms*.
Let us get some data by simulating rolls of a die. This can be done by sampling at random with replacement from the integers 1 through 6. To do this using Python, we will use the Table method `sample`, which draws at random with replacement from the rows of a table. Its argument is the sample size, and it returns a table consisting of the rows that were selected. An optional argument `with_replacement=False` specifies that the sample should be drawn without replacement, but that does not apply to rolling a die.
Here are some results of 10 rolls of a die. Drag the slider right
to see different samples.
```
def die_sample_10(sample_num=0):
    return die.sample(10)

_ = interact(die_sample_10, sample_num=(0, 20))
```
We can use the same method to simulate as many rolls as we like, and then draw empirical histograms of the results. Because we are going to do this repeatedly, we define a function `empirical_die_rolls` that takes as its argument the sample size; the function rolls the die as many times as its argument and then returns the die rolls in an array.
```
def empirical_die_rolls(n_rolls, sample_num=0):
    return die.sample(n_rolls).column('Face')
```
### Empirical Histograms ###
Here is an empirical histogram of 10 rolls. It doesn't look very much like the probability histogram above. Drag the slider to look at different sets of 10 rolls.
```
die_options = {'bins': 6, 'xlim': (1, 7), 'ylim': (0, 0.3)}
nbi.hist(empirical_die_rolls, options=die_options,
         n_rolls=fixed(10), sample_num=(0, 20))
```
When the sample size increases, the empirical histogram begins to look more like the histogram of theoretical probabilities. Try it out!
```
nbi.hist(empirical_die_rolls, options=die_options,
         n_rolls=(10, 1000, 10),
         sample_num=(0, 20))
```
As we increase the number of rolls in the simulation, the area of each bar gets closer to 16.67%, which is the area of each bar in the probability histogram.
What we have observed is an instance of a general rule:
### The Law of Averages ###
If a chance experiment is repeated independently and under identical conditions, then, in the long run, the proportion of times that an event occurs gets closer and closer to the theoretical probability of the event.
For example, in the long run, the proportion of times the face with four spots appears gets closer and closer to 1/6.
Here "independently and under identical conditions" means that every repetition is performed in the same way regardless of the results of all the other repetitions.
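The law of averages can also be checked directly by simulation: the proportion of rolls showing four approaches $1/6 \approx 0.1667$ as the number of rolls grows. A quick sketch with plain NumPy:

```python
import numpy as np

rng = np.random.RandomState(0)
for n in (100, 10_000, 1_000_000):
    rolls = rng.randint(1, 7, size=n)  # fair die: integers 1..6
    # Proportion of fours; drifts toward 1/6 as n grows.
    print(n, np.mean(rolls == 4))
```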
```
import numpy as np
import pandas as pd
import xarray as xr
import geojson
import geopandas as gpd
from shapely.geometry import Polygon as shpPolygon
from scipy.optimize import differential_evolution
from numba import jit
from hm import gr4j, gr4j_bounds
from bokeh.plotting import figure, show, output_file
from bokeh.io import save as save_html
import zipfile
import os
import matplotlib.pyplot as plt
%matplotlib inline
meta_data = pd.read_csv("../data/raw_shapes/ESP_meta.csv")[["STATION", "NAME", "SQ_KM"]]
#consistency
meta_data.at[314, "STATION"] = 38001
meta_data.at[315, "STATION"] = 39001
def get_boundaries(polygon):
    """
    Extract the bounding box of a watershed polygon
    from the provided .geojson file.
    Used for creating the grid polygons that are later
    intersected with the input .geojson polygon.
    Input:
        .geojson polygon
        You can create it using QGIS or
        the geojson.io web service.
    Output:
        [west, south, east, north] rounded outward to integer coordinates
    """
lons = [coords[0] for coords in polygon["geometry"]["coordinates"][0]]
lats = [coords[1] for coords in polygon["geometry"]["coordinates"][0]]
lon_min, lon_max = np.min(lons), np.max(lons)
lat_min, lat_max = np.min(lats), np.max(lats)
west = np.min([np.floor(lon_min), np.ceil(lon_min)])
east = np.max([np.floor(lon_max), np.ceil(lon_max)])
south = np.min([np.floor(lat_min), np.ceil(lat_min)])
north = np.max([np.floor(lat_max), np.ceil(lat_max)])
return [west, south, east, north]
def get_era5_polygons_geodf(boundaries, gridsize=0.25):
"""
    Function for obtaining ERA5 grid polygons
for further intersection with .geojson
watershed representation polygon
Input:
1. boundaries from get_boundaries() function
2. gridsize (default=0.25) of ERA5 dataset
Output:
Geodataframe (geopandas package data container)
with grid polygons cover watershed area
"""
# read in already prepared lats/lons
lons_c = np.load("../data/era5_uk_lons.npy")
lats_c = np.load("../data/era5_uk_lats.npy")
    # For consistency with geographical coordinates we keep
    # longitudes in [-180, 180]. Simply subtracting 180 is wrong,
    # since both conventions share the Greenwich meridian at 0:
    # only values in (180, 360] must be remapped to (-180, 0].
    # previous (wrong) calculation:
    # lons_c = lons_c - 180
lons_c = np.where(lons_c > 180, lons_c - 360, lons_c)
hg = gridsize / 2 # half gridsize
    # keep only longitudes/latitudes that fall within the boundaries;
    # a +-5 degree margin covers small catchments where no grid
    # centers fall strictly between the boundaries
lons_c = lons_c[(lons_c >=boundaries[0]-5)&(lons_c<=boundaries[2]+5)]
lats_c = lats_c[(lats_c>=boundaries[1]-5)&(lats_c<=boundaries[3]+5)]
polygons = [shpPolygon([ (x-hg, y-hg), (x-hg, y+hg), (x+hg, y+hg), (x+hg, y-hg), (x-hg, y-hg) ])
for x in lons_c for y in lats_c]
geodf = gpd.GeoDataFrame(geometry=polygons)
# for absolute consistency with grid cell centroids
geodf["lat_c"] = [ y for x in lons_c for y in lats_c]
geodf["lon_c"] = [ x for x in lons_c for y in lats_c]
return geodf
def get_me_ERA5_gridcell_weights(file, gridsize=0.25, full_data=False):
'''
Function converts .geojson file of basin delineation
to weight matrix for further meteo forcing averaging
for using as input in lumped hydrological models
Input:
1. file - .geojson polygon file.
You can import it using Qgis or
use geojson.io webservice.
        2. gridsize (default=0.25) of the ERA5
           global product resolution
3. full_data (default=False) return
geodataframe result (can be plotted)
Output:
1. Default (full_data=False): pandas dataframe
with ['lon', 'lat', 'weight'] columns
2. Extended (full_data=True): geopandas dataframe
with added "geometry" field
source:
https://gis.stackexchange.com/questions/178765/intersecting-two-shapefiles-from-python-or-command-line
'''
# make geoDataFrame from basin file
watershed = gpd.read_file(file)
# make geoDataFrame of grid cells matrix
with open(file) as f:
basin_json = geojson.loads(f.read())["features"][0]
    # retrieve the geodataframe representation of grid cell polygons
grid = get_era5_polygons_geodf(get_boundaries(basin_json), gridsize=gridsize)
data = []
for index, wtr in watershed.iterrows():
for index2, cell in grid.iterrows():
if wtr['geometry'].intersects(cell['geometry']):
data.append({'geometry': wtr['geometry'].intersection(cell['geometry']),
'lon': cell['lon_c'],
'lat': cell['lat_c'],
'area': wtr['geometry'].intersection(cell['geometry']).area})
df = gpd.GeoDataFrame(data,columns=['geometry', 'lon', 'lat', 'area'])
# add weight for each gridcell
df['weight'] = df['area'] / df['area'].sum()
    # Global ERA datasets (and CMIP5 alike) represent longitude
    # from 0 to 360 instead of the geographical -180 to +180,
    # so "lon" values need a backward conversion (mirroring the
    # forward conversion in get_*_polygons_geodf) before data
    # acquisition from the global grids.
    # link: https://software.ecmwf.int/wiki/display/CKB/ERA-Interim%3A+What+is+the+spatial+reference
    # previous (wrong) decision:
    # df['lon'] = df['lon'] + 180
    #df["lon"] = np.where(df["lon"] < 0, df["lon"] + 360, df["lon"])
# despite changes in df['lon'] values
# returned geometries with full_data=True are geographical (lon:[-180,180])!
if full_data:
return df
return df[['lon', 'lat', 'weight']]
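# A quick sketch (hypothetical numbers) of how these weights are
# used downstream: three grid cells overlapping the watershed with
# intersection areas 2, 1 and 1 get weights 0.5, 0.25 and 0.25,
# and the basin average of a variable is the weighted sum of the
# per-cell values:
_w = np.array([2.0, 1.0, 1.0]) / 4.0
assert np.isclose((_w * np.array([10.0, 8.0, 6.0])).sum(), 8.5)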
def oudin2005(df_row):
    # the climatic temperature was replaced by the observed one
tmean, lat, doy = df_row["t2m"], df_row["lat"], df_row["doy"]
# Reference: http://www.fao.org/docrep/x0490e/x0490e07.htm
# use with caution for latitudes out of range 0-67 degrees
# Convert latitude [degrees] to radians
latrad = np.deg2rad(lat)
    # Part 2. Extraterrestrial radiation calculation
# set solar constant (in W m-2)
Rsc = 1367
# calculate solar declination dt (in radians)
dt = 0.409 * np.sin(2 * np.pi / 365 * doy - 1.39)
    # calculate sunset hour angle ws (in radians),
    # according to sunset_hour_angle_func() in PyET0
cos_ws = -np.tan(latrad) * np.tan(dt)
ws = np.arccos(min(max(cos_ws, -1.0), 1.0))
# Calculate sunshine duration N (in hours)
N = 24 / np.pi * ws
# Calculate day angle j (in radians)
j = 2 * np.pi / 365.25 * doy
# Calculate relative distance to sun
dr = 1.0 + 0.03344 * np.cos(j - 0.048869)
# Calculate extraterrestrial radiation (J m-2 day-1)
Re = Rsc * 86400 / np.pi * dr * (ws * np.sin(latrad) * np.sin(dt) + np.sin(ws) * np.cos(latrad) * np.cos(dt))
# convert from J m-2 day-1 to MJ m-2 day-1
Re = Re/10**6
    # Part 3. Average daily temperatures calculation derived from long-term observations
#Ta = np.array([tmean[tmean.index.dayofyear == x].mean() for x in range(1, 367)])
# Part 4. PET main equation by (Oudin et al., 2005)
# lambda (latent heat flux const) = 2.45 MJ kg-1
# ro (density of water const) = 1000 kg m-3
# PE im m day -1 should be converted to mm/day (* 10**3)
# PE = ( Re / (2.45*1000) ) * ( (Ta+5) /100 ) * 10**3
    # threshold condition
# if Ta+5>0 - use Oudin formula, else set to zero
#PE = np.where(Ta+5 > 0, ( Re / (2.45*1000) ) * ( (Ta+5) /100 )*10**3, 0)
PE = np.where(tmean+5 > 0, ( Re / (2.45*1000) ) * ( (tmean+5) /100 )*10**3, 0)
return PE
def delineation2list(data_instance, delineation_instance, variable):
# handler for variable
    # list of pandas.Series which we will then convert to a dataframe
results_list = []
#print(len(delineation_instance), " nodes for extraction")
for idx, row in delineation_instance.iterrows():
#print(idx)
# obtain values from grid cells
point = data_instance[variable].sel(longitude=row["lon"],
latitude=row["lat"]).to_series() * row["weight"]
# append obtained series to a list
results_list.append(point)
# close data reference
# data_instance.close()
return results_list
def temp2pet(tmean_list, delineation_instance):
# handler for variable
    # list of pandas.Series which we will then convert to a dataframe
pet_list = []
for idx, row in delineation_instance.iterrows():
tmean = tmean_list[idx] / row["weight"]
df = pd.DataFrame(tmean)
#print(tmean)
df["lat"] = row["lat"]
#df.rename(index=str, columns={"t2m":"tmean", "lat": "lat"}, inplace=True)
df.index = pd.to_datetime(df.index)
df["doy"] = df.index.dayofyear
df["pet"] = df.apply(oudin2005, axis=1)
# append obtained series to a list
pet_list.append(df["pet"] * row["weight"])
return pet_list
def extract_ERA_reanalisys_data(geojson_file):
# calculate delineation
ERA_delineation = get_me_ERA5_gridcell_weights(file=geojson_file)
# read the data
ERA_precipitation = xr.open_dataset("../data/era5_uk_daily_prec.nc")
ERA_temp_mean = xr.open_dataset("../data/era5_uk_daily_temp.nc")
# ERA precipitation data
precipitation_list = delineation2list(data_instance=ERA_precipitation,
delineation_instance=ERA_delineation, variable="tp")
# ERA minimum temperature data
temperature_mean_list = delineation2list(data_instance=ERA_temp_mean,
delineation_instance=ERA_delineation, variable="t2m")
#return precipitation_list, temperature_min_list, temperature_max_list
evaporation_list = temp2pet(tmean_list=temperature_mean_list,
delineation_instance=ERA_delineation)
# create dataframes and store our averaged series
lumped_out = pd.DataFrame( {'Temp': pd.concat(temperature_mean_list, axis=1).sum(axis=1),
'Prec': pd.concat(precipitation_list, axis=1).sum(axis=1),
'Evap': pd.concat(evaporation_list, axis=1).sum(axis=1)} )
#lumped_out["Temp"] = lumped_out["Tmin"] + lumped_out["Tmax"] / 2.0
#return temperature_mean_list
return lumped_out
def hm_calibration(data_instance, model, bounds, seed=1010):
"""
Calibration meta function for hydrologic models
Input :
1. data_instance: pandas dataframe with data:
"Obs" : observed streamflow (mm/day)
"Temp" : mean daily temperature (Celsius)
"Prec" : daily sum of precipitation (mm)
"Evap" : daily potential evapotranspiration (mm)
2. model :
function for running hydrologic model
3. bounds :
function for deriving lower and upper bounds
of model parameters for calibration
4. seed :
random seed number for differential evolution algorithm
Output:
#1. runoff :
Simulated (Qsim) runoff
2. opt_par :
optimal set of model parameters
#3. NS :
Nash-Sutcliffe efficiency criteria
"""
_Q = data_instance["Obs"].values
_T = data_instance["Temp"].values
_P = data_instance["Prec"].values
_PET = data_instance["Evap"].values
    # add a warmup period by simply
    # repeating the full period twice
Qobs = np.concatenate([_Q, _Q])
T = np.concatenate([_T, _T])
P = np.concatenate([_P, _P])
PET = np.concatenate([_PET, _PET])
def NS_loss(params):
# simulate hydrograph
Qsim = model(T, P, PET, params)
Qm = np.nanmean(Qobs)
# calculate objective function value
return np.nansum((Qobs-Qsim)**2)/np.nansum((Qobs-Qm)**2)
result = differential_evolution(NS_loss,
bounds=bounds(),
maxiter=1000,
polish=True,
disp=False,
seed=seed)
opt_par = result.x
Qsim = model(T, P, PET, opt_par)
# cut the warmup period
Qsim = Qsim[len(_Q):].copy()
Qobs = Qobs[len(_Q):].copy()
NS = 1 - np.nansum((Qobs-Qsim)**2)/np.nansum((Qobs-np.nanmean(Qobs))**2)
#print("NS : {}".format(np.round(NS, 2)))
#runoff = pd.DataFrame({"Qobs": Qobs, "Qsim": Qsim}, index=data_instance.index)
return Qsim, opt_par, NS
def generate_future_data(data_instance):
observed_rainfall = data_instance["Prec"]
Tclim = np.array([data_instance["Temp"].loc[data_instance["Temp"].index.dayofyear == doy].mean() for doy in range(1, 367)])
PETclim = np.array([data_instance["Evap"].loc[data_instance["Evap"].index.dayofyear == doy].mean() for doy in range(1, 367)])
future_dates = pd.date_range("2016-01-01", "2020-12-2", freq="D")
    length = len(future_dates)
    df = pd.DataFrame(index=future_dates)
    np.random.seed(42)
    # 1. Constant median observed rain
    df["MeanRain"] = np.ones(length) * np.median(observed_rainfall)
    # 2. Constant max observed rain
    df["MaxRain"] = np.ones(length) * np.max(observed_rainfall)
    # 3. Random sampling scaled by the observed maximum
    df["RandomRain"] = np.random.random(length) * np.max(observed_rainfall)
df["Temp"] = np.array([Tclim[i.dayofyear-1] for i in df.index])
df["Evap"] = np.array([PETclim[i.dayofyear-1] for i in df.index])
return df
def df2html(df, df_future, station_number, station_name):
title = f"{station_name}"
p1 = figure(title=title, x_axis_type="datetime", plot_width=500, plot_height=300)
p1.grid.grid_line_alpha=0.5
p1.xaxis.axis_label = 'Date'
p1.yaxis.axis_label = 'Runoff, mm/day'
p1.line(df.index, df["Sim"], color='deepskyblue', legend='Historical')
p1.line(df_future.index, df_future["MeanRunoff"], color='seagreen', legend='MeanRainRunoff')
p1.line(df_future.index, df_future["MaxRunoff"], color='crimson', legend='MaxRainRunoff')
p1.line(df_future.index, df_future["RandomRunoff"], color='turquoise', legend='RandomRainRunoff')
p1.legend.location = "top_left"
output_file(f"../docs/{station_number}.html", title="4Y11M2D")
save_html(p1)
#show(p1)
def generate_html(station_id, station_name):
path_runoff = f"../data/runoff/{station_id}.pkl"
path_shape = f"../data/shapes/{station_id}.geojson"
runoff = pd.read_pickle(path_runoff)
shape = gpd.read_file(path_shape)
meteo = extract_ERA_reanalisys_data(path_shape)
data = pd.concat([runoff, meteo], axis=1)
qsim, opt_par, ns = hm_calibration(data, gr4j, gr4j_bounds, seed=42)
data["Sim"] = qsim
data_future = generate_future_data(data)
for rain in ["MeanRain", "MaxRain", "RandomRain"]:
data_future[rain[:-4]+"Runoff"] = gr4j(np.concatenate([data["Temp"], data_future["Temp"]]),
np.concatenate([data["Prec"], data_future[rain]]),
np.concatenate([data["Evap"], data_future["Evap"]]), opt_par)[len(data["Prec"]):]
df2html(data, data_future, station_id, station_name)
return ns
%%time
efficiencies = []
for idd, row in meta_data.iterrows():
idx = row["STATION"]
name = row["NAME"]
nss = generate_html(idx, name)
efficiencies.append( nss )
print(f"{idd}: {np.round(nss, 2)}")
efficiencies = np.array(efficiencies)
plt.figure(figsize=(8,8))
plt.boxplot(efficiencies)
#https://stackoverflow.com/questions/12998430/remove-xticks-in-a-matplotlib-plot
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False) # labels along the bottom edge are off
plt.xlabel("GR4J")
plt.ylabel("NS")
np.median(efficiencies)
```
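The calibration above minimizes `1 - NS`; as a standalone sketch (independent of the notebook data), the Nash-Sutcliffe efficiency itself behaves like this:
```
import numpy as np

def nash_sutcliffe(q_obs, q_sim):
    """1.0 for a perfect fit; 0.0 for a model no better than
    predicting the observed mean; negative for a worse one."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 1 - np.nansum((q_obs - q_sim) ** 2) / np.nansum((q_obs - np.nanmean(q_obs)) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0])
assert nash_sutcliffe(obs, obs) == 1.0                             # perfect simulation
assert nash_sutcliffe(obs, np.full_like(obs, obs.mean())) == 0.0   # mean predictor
```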
```
import numpy as np
```
**Module** is an abstract class which defines the fundamental methods necessary for training a neural network. You do not need to change anything here, just read the comments.
```
class Module(object):
def __init__ (self):
self.output = None
self.gradInput = None
self.training = True
"""
Basically, you can think of a module as of a something (black box)
    which can process `input` data and produce `output` data.
This is like applying a function which is called `forward`:
output = module.forward(input)
The module should be able to perform a backward pass: to differentiate the `forward` function.
    Moreover, it should be able to differentiate `forward` when the module is part of a chain (chain rule).
    The latter implies there is a gradient coming from the previous step of the chain rule.
gradInput = module.backward(input, gradOutput)
"""
def forward(self, input):
"""
Takes an input object, and computes the corresponding output of the module.
"""
return self.updateOutput(input)
def backward(self, input, gradOutput):
"""
Performs a backpropagation step through the module, with respect to the given input.
This includes
- computing a gradient w.r.t. `input` (is needed for further backprop),
- computing a gradient w.r.t. parameters (to update parameters while optimizing).
"""
self.updateGradInput(input, gradOutput)
self.accGradParameters(input, gradOutput)
return self.gradInput
def updateOutput(self, input):
"""
Computes the output using the current parameter set of the class and input.
This function returns the result which is stored in the `output` field.
Make sure to both store the data in `output` field and return it.
"""
# The easiest case:
# self.output = input
# return self.output
pass
def updateGradInput(self, input, gradOutput):
"""
Computing the gradient of the module with respect to its own input.
This is returned in `gradInput`. Also, the `gradInput` state variable is updated accordingly.
The shape of `gradInput` is always the same as the shape of `input`.
Make sure to both store the gradients in `gradInput` field and return it.
"""
# The easiest case:
# self.gradInput = gradOutput
# return self.gradInput
pass
def accGradParameters(self, input, gradOutput):
"""
Computing the gradient of the module with respect to its own parameters.
No need to override if module has no parameters (e.g. ReLU).
"""
pass
def zeroGradParameters(self):
"""
Zeroes `gradParams` variable if the module has params.
"""
pass
def getParameters(self):
"""
Returns a list with its parameters.
If the module does not have parameters return empty list.
"""
return []
def getGradParameters(self):
"""
Returns a list with gradients with respect to its parameters.
If the module does not have parameters return empty list.
"""
return []
    def train(self):
        """
        Sets training mode for the module.
        Training and testing behaviour differs for Dropout, BatchNorm.
        Named `train` rather than `training` to avoid clashing with the
        boolean `self.training` attribute set in `__init__`.
        """
        self.training = True
def evaluate(self):
"""
Sets evaluation mode for the module.
Training and testing behaviour differs for Dropout, BatchNorm.
"""
self.training = False
def __repr__(self):
"""
        Pretty printing. Should be overridden in every module if you want
        to have a readable description.
"""
return "Module"
```
# Sequential container
**Define** a forward and backward pass procedures.
```
class Sequential(Module):
"""
This class implements a container, which processes `input` data sequentially.
`input` is processed by each module (layer) in self.modules consecutively.
The resulting array is called `output`.
"""
def __init__ (self):
super(Sequential, self).__init__()
self.modules = []
def add(self, module):
"""
Adds a module to the container.
"""
self.modules.append(module)
def updateOutput(self, input):
"""
Basic workflow of FORWARD PASS:
y_0 = module[0].forward(input)
y_1 = module[1].forward(y_0)
...
output = module[n-1].forward(y_{n-2})
Just write a little loop.
"""
self.y = [input]
for i in range(len(self.modules)):
self.y.append(self.modules[i].forward(self.y[i]))
self.output = self.y[-1]
return self.output
def backward(self, input, gradOutput):
"""
Workflow of BACKWARD PASS:
g_{n-1} = module[n-1].backward(y_{n-2}, gradOutput)
g_{n-2} = module[n-2].backward(y_{n-3}, g_{n-1})
...
g_1 = module[1].backward(y_0, g_2)
gradInput = module[0].backward(input, g_1)
        !!!
        Each module must be given the input it saw during the forward pass;
        it is needed to compute the gradients.
        Make sure the input passed to the `i`-th layer is the output of `module[i-1]`
        (the same input as in the forward pass), and NOT the `input`
        to this Sequential module.
        !!!
"""
self.g = [gradOutput]
if not (input == self.y[0]).all():
self.updateOutput(input)
for i in range(len(self.modules)):
self.g.append(self.modules[-i - 1].backward(self.y[-i - 2],self.g[i]))
self.gradInput = self.g[-1]
return self.gradInput
def zeroGradParameters(self):
for module in self.modules:
module.zeroGradParameters()
def getParameters(self):
"""
Should gather all parameters in a list.
"""
return [x.getParameters() for x in self.modules]
def getGradParameters(self):
"""
Should gather all gradients w.r.t parameters in a list.
"""
return [x.getGradParameters() for x in self.modules]
def __repr__(self):
string = "".join([str(x) + '\n' for x in self.modules])
return string
def __getitem__(self,x):
return self.modules.__getitem__(x)
```
# Layers
- input: **`batch_size x n_feats1`**
- output: **`batch_size x n_feats2`**
```
class Linear(Module):
"""
A module which applies a linear transformation
A common name is fully-connected layer, InnerProductLayer in caffe.
The module should work with 2D input of shape (n_samples, n_feature).
"""
def __init__(self, n_in, n_out):
super(Linear, self).__init__()
# This is a nice initialization
stdv = 1./np.sqrt(n_in)
self.W = np.random.uniform(-stdv, stdv, size = (n_out, n_in))
self.b = np.random.uniform(-stdv, stdv, size = n_out)
self.gradW = np.zeros_like(self.W)
self.gradb = np.zeros_like(self.b)
def updateOutput(self, input):
self.output = np.transpose(self.W.dot(input.transpose()) + self.b.reshape((len(self.b), -1)))
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = gradOutput.dot(self.W)
return self.gradInput
def accGradParameters(self, input, gradOutput):
self.gradW = input.transpose().dot(gradOutput).transpose()
self.gradb = np.sum(gradOutput, axis=0)
def zeroGradParameters(self):
self.gradW.fill(0)
self.gradb.fill(0)
def getParameters(self):
return [self.W, self.b]
def getGradParameters(self):
return [self.gradW, self.gradb]
def __repr__(self):
s = self.W.shape
q = 'Linear %d -> %d' %(s[1],s[0])
return q
```
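A standalone shape check of these formulas (a sketch independent of the class above) can help catch transposition bugs:
```
import numpy as np

rng = np.random.RandomState(0)
n_in, n_out, batch = 3, 2, 4
W = rng.uniform(-1, 1, size=(n_out, n_in))   # weights, shaped as in Linear
b = rng.uniform(-1, 1, size=n_out)
x = rng.randn(batch, n_in)

out = x @ W.T + b                   # forward: (batch, n_out)
grad_out = rng.randn(batch, n_out)  # gradient arriving from the next layer
grad_x = grad_out @ W               # dL/dx:  (batch, n_in)
grad_W = grad_out.T @ x             # dL/dW:  (n_out, n_in)
grad_b = grad_out.sum(axis=0)       # dL/db:  (n_out,)

assert out.shape == (batch, n_out)
assert grad_x.shape == x.shape
assert grad_W.shape == W.shape and grad_b.shape == b.shape
```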
This one is probably the hardest, but like the others it only takes about 5 lines of code in total.
- input: **`batch_size x n_feats`**
- output: **`batch_size x n_feats`**
```
class SoftMax(Module):
def __init__(self):
super(SoftMax, self).__init__()
def updateOutput(self, input):
# start with normalization for numerical stability
self.output = np.subtract(input, input.max(axis=1, keepdims=True))
probs = np.exp(self.output)
self.output = probs/np.sum(probs, axis=1, keepdims=True)
return self.output
def updateGradInput(self, input, gradOutput):
#YEAAAHHH! TENSORS!!!
input = np.subtract(input, input.max(axis=1, keepdims=True))
probs = np.exp(input)
probs = probs/np.sum(probs, axis=1, keepdims=True)
probs_reshaped = probs.reshape(input.shape[0], input.shape[1], 1)
self.gradInput = np.einsum('lji, lj->lji', np.einsum('i, jk', np.ones(input.shape[0]), np.eye(input.shape[1])), probs) - \
np.einsum('...j,...k', probs_reshaped, np.einsum('...kj', probs_reshaped)).reshape(input.shape[0], input.shape[1], input.shape[1])
self.gradInput = np.einsum('...j,...jk', gradOutput, self.gradInput).reshape(input.shape[0], input.shape[1])
return self.gradInput
def __repr__(self):
return "SoftMax"
```
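The einsum version above builds the full Jacobian; an equivalent, cheaper formulation computes the Jacobian-vector product directly, and can be verified against finite differences. A standalone sketch (independent of the Module classes):
```
import numpy as np

def softmax(x):
    z = x - x.max(axis=1, keepdims=True)   # shift for numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def softmax_backward(x, grad_out):
    # Jacobian-vector product: dL/dx = p * (g - sum_j(g_j * p_j))
    p = softmax(x)
    return p * (grad_out - (grad_out * p).sum(axis=1, keepdims=True))

rng = np.random.RandomState(0)
x, g = rng.randn(4, 5), rng.randn(4, 5)
analytic = softmax_backward(x, g)

# finite-difference check of d(sum(softmax(x) * g)) / dx
eps = 1e-6
numeric = np.zeros_like(x)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        xp, xm = x.copy(), x.copy()
        xp[i, j] += eps
        xm[i, j] -= eps
        numeric[i, j] = ((softmax(xp) * g).sum() - (softmax(xm) * g).sum()) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-6)
```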
Implement [**dropout**](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf). The idea and implementation are really simple: just multiply the input by a $Bernoulli(p)$ mask.
This is a very effective regularizer: when you see your net overfitting, try adding more dropout.
While training (`self.training == True`) it should sample a mask on each iteration (for every batch). When testing this module should implement identity transform i.e. `self.output = input`.
- input: **`batch_size x n_feats`**
- output: **`batch_size x n_feats`**
```
class Dropout(Module):
def __init__(self, p=0.5):
super(Dropout, self).__init__()
self.p = p
self.mask = None
def updateOutput(self, input):
if self.training == True:
self.mask = np.random.binomial(1, self.p, size=input.shape)
self.output=np.multiply(self.mask, input)
else:
self.output = input
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = np.multiply(self.mask, gradOutput)
return self.gradInput
def __repr__(self):
return "Dropout"
```
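Note that the implementation above keeps each unit with probability `p` and applies no rescaling, so train-time activations are smaller in expectation than test-time ones. The common "inverted dropout" variant divides by the keep probability so no test-time change is needed; a minimal sketch:
```
import numpy as np

def inverted_dropout(x, keep_prob, training, rng):
    if not training:
        return x                     # identity at test time
    mask = rng.binomial(1, keep_prob, size=x.shape)
    return x * mask / keep_prob      # rescale so E[output] == x

rng = np.random.RandomState(0)
x = np.ones((10000, 1))
out = inverted_dropout(x, keep_prob=0.5, training=True, rng=rng)
assert abs(out.mean() - 1.0) < 0.05  # expectation is preserved
assert (inverted_dropout(x, 0.5, training=False, rng=rng) == x).all()
```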
# Activation functions
Here's the complete example for the **Rectified Linear Unit** non-linearity (aka **ReLU**):
```
class ReLU(Module):
def __init__(self):
super(ReLU, self).__init__()
def updateOutput(self, input):
self.output = np.maximum(input, 0)
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = np.multiply(gradOutput , input > 0)
return self.gradInput
def __repr__(self):
return "ReLU"
```
Implement [**Leaky Rectified Linear Unit**](http://en.wikipedia.org/wiki%2FRectifier_%28neural_networks%29%23Leaky_ReLUs). Experiment with the slope.
```
class LeakyReLU(Module):
def __init__(self, slope = 0.03):
super(LeakyReLU, self).__init__()
self.slope = slope
def updateOutput(self, input):
self.output = np.maximum(input, self.slope*input)
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = np.multiply(gradOutput, input > 0) + \
np.multiply(gradOutput, input <= 0) * self.slope
return self.gradInput
def __repr__(self):
return "LeakyReLU"
```
# Criterions
Criterions are used to score the models answers.
```
class Criterion(object):
def __init__ (self):
self.output = None
self.gradInput = None
def forward(self, input, target):
"""
Given an input and a target, compute the loss function
associated to the criterion and return the result.
        For consistency this function should not be overridden,
all the code goes in `updateOutput`.
"""
return self.updateOutput(input, target)
def backward(self, input, target):
"""
Given an input and a target, compute the gradients of the loss function
associated to the criterion and return the result.
        For consistency this function should not be overridden,
all the code goes in `updateGradInput`.
"""
return self.updateGradInput(input, target)
def updateOutput(self, input, target):
"""
Function to override.
"""
return self.output
def updateGradInput(self, input, target):
"""
Function to override.
"""
return self.gradInput
def __repr__(self):
"""
        Pretty printing. Should be overridden in every module if you want
        to have a readable description.
"""
return "Criterion"
```
The **MSECriterion**, which is basic L2 norm usually used for regression, is implemented here for you.
```
class MSECriterion(Criterion):
def __init__(self):
super(MSECriterion, self).__init__()
def updateOutput(self, input, target):
self.output = np.sum(np.power(input - target,2)) / input.shape[0]
return self.output
def updateGradInput(self, input, target):
self.gradInput = (input - target) * 2 / input.shape[0]
return self.gradInput
def __repr__(self):
return "MSECriterion"
```
Your task is to implement the **ClassNLLCriterion**. It should implement [multiclass log loss](http://scikit-learn.org/stable/modules/model_evaluation.html#log-loss). Although there is a sum over `y` (target) in that formula,
remember that targets are one-hot encoded; this fact simplifies the computations a lot. Note that criterions are the only places where you divide by batch size.
```
class ClassNLLCriterion(Criterion):
def __init__(self):
        super(ClassNLLCriterion, self).__init__()
def updateOutput(self, input, target):
# Use this trick to avoid numerical errors
eps = 1e-15
input_clamp = np.clip(input, eps, 1 - eps)
self.output = -1 * np.einsum('ik,ik', target, np.log(input_clamp)) / input.shape[0]
return self.output
def updateGradInput(self, input, target):
# Use this trick to avoid numerical errors
input_clamp = np.maximum(1e-15, np.minimum(input, 1 - 1e-15) )
self.gradInput = -1 * np.einsum('ik,ik->ik', target, 1.0/(input_clamp)) / input.shape[0]
return self.gradInput
def __repr__(self):
return "ClassNLLCriterion"
```
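A quick hand-check of the criterion's formula on a tiny batch (values chosen purely for illustration):
```
import numpy as np

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])   # model outputs (after SoftMax)
targets = np.array([[1, 0, 0],
                    [0, 1, 0]])       # one-hot targets
# one-hot targets pick out a single log-probability per sample
loss = -np.sum(targets * np.log(probs)) / probs.shape[0]
assert np.isclose(loss, -(np.log(0.7) + np.log(0.8)) / 2)
```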
```
import tensorflow as tf
from tensorflow import keras
print( 'Tensorflow : ',tf.__version__)
print( ' |-> Keras : ',keras.__version__)
```
# Text generation with LSTM
This notebook contains the code samples found in Chapter 8, Section 1 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
----
[...]
## Implementing character-level LSTM text generation
Let's put these ideas into practice in a Keras implementation. The first thing we need is a lot of text data that we can use to learn a
language model. You could use any sufficiently large text file or set of text files -- Wikipedia, the Lord of the Rings, etc. In this
example we will use some of the writings of Nietzsche, the late-19th century German philosopher (translated to English). The language model
we will learn will thus be specifically a model of Nietzsche's writing style and topics of choice, rather than a more generic model of the
English language.
## Preparing the data
Let's start by downloading the corpus and converting it to lowercase:
```
#import keras
import numpy as np
path = keras.utils.get_file(
'nietzsche.txt',
origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
text = open(path).read().lower()
print('Corpus length:', len(text))
```
Next, we will extract partially-overlapping sequences of length `maxlen`, one-hot encode them and pack them in a 3D Numpy array `x` of
shape `(sequences, maxlen, unique_characters)`. Simultaneously, we prepare an array `y` containing the corresponding targets: the one-hot
encoded characters that come right after each extracted sequence.
```
# Length of extracted character sequences
maxlen = 60
# We sample a new sequence every `step` characters
step = 3
# This holds our extracted sequences
sentences = []
# This holds the targets (the follow-up characters)
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('Number of sequences:', len(sentences))
# List of unique characters in the corpus
chars = sorted(list(set(text)))
print('Unique characters:', len(chars))
# Dictionary mapping unique characters to their index in `chars`
char_indices = dict((char, chars.index(char)) for char in chars)
# Next, one-hot encode the characters into binary arrays.
print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)  # np.bool is removed in recent NumPy
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
x[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
```
## Building the network
Our network is a single `LSTM` layer followed by a `Dense` classifier and softmax over all possible characters. But let us note that
recurrent neural networks are not the only way to do sequence data generation; 1D convnets also have proven extremely successful at it in
recent times.
```
#from keras import layers
model = keras.models.Sequential()
model.add(keras.layers.LSTM(128, input_shape=(maxlen, len(chars))))
model.add(keras.layers.Dense(len(chars), activation='softmax'))
```
Since our targets are one-hot encoded, we will use `categorical_crossentropy` as the loss to train the model:
```
optimizer = keras.optimizers.RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
```
## Training the language model and sampling from it
Given a trained model and a seed text snippet, we generate new text by repeatedly:
* 1) Drawing from the model a probability distribution over the next character given the text available so far
* 2) Reweighting the distribution to a certain "temperature"
* 3) Sampling the next character at random according to the reweighted distribution
* 4) Adding the new character at the end of the available text
This is the code we use to reweight the original probability distribution coming out of the model,
and draw a character index from it (the "sampling function"):
```
def sample(preds, temperature=1.0):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
```
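The effect of temperature can be seen without a trained model: reweighting a fixed distribution with a low temperature sharpens it, while a high temperature flattens it toward uniform. A small sketch using the same reweighting as `sample`:
```
import numpy as np

def reweight(preds, temperature):
    preds = np.asarray(preds, dtype='float64')
    exp_preds = np.exp(np.log(preds) / temperature)
    return exp_preds / exp_preds.sum()

base = np.array([0.5, 0.3, 0.15, 0.05])
low = reweight(base, 0.2)    # sharper: mass concentrates on the argmax
high = reweight(base, 1.5)   # flatter: closer to uniform
assert low[0] > base[0] > high[0]
assert np.abs(reweight(base, 1.0) - base).max() < 1e-12  # T=1 leaves it unchanged
```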
Finally, this is the loop where we repeatedly train the model and generate text. After every epoch, we generate text using a range of
different temperatures. This allows us to see how the generated text evolves as the model starts converging, as well as the impact of
temperature on the sampling strategy.
```
import random
import sys
for epoch in range(1, 60):
print('epoch', epoch)
# Fit the model for 1 epoch on the available training data
model.fit(x, y,
batch_size=128,
epochs=1)
# Select a text seed at random
start_index = random.randint(0, len(text) - maxlen - 1)
generated_text = text[start_index: start_index + maxlen]
print('--- Generating with seed: "' + generated_text + '"')
for temperature in [0.2, 0.5, 1.0, 1.2]:
print('------ temperature:', temperature)
sys.stdout.write(generated_text)
# We generate 400 characters
for i in range(400):
sampled = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(generated_text):
sampled[0, t, char_indices[char]] = 1.
preds = model.predict(sampled, verbose=0)[0]
next_index = sample(preds, temperature)
next_char = chars[next_index]
generated_text += next_char
generated_text = generated_text[1:]
sys.stdout.write(next_char)
sys.stdout.flush()
print()
```
As you can see, a low temperature results in extremely repetitive and predictable text, but where local structure is highly realistic: in
particular, all words (a word being a local pattern of characters) are real English words. With higher temperatures, the generated text
becomes more interesting, surprising, even creative; it may sometimes invent completely new words that sound somewhat plausible (such as
"eterned" or "troveration"). With a high temperature, the local structure starts breaking down and most words look like semi-random strings
of characters. Without a doubt, here 0.5 is the most interesting temperature for text generation in this specific setup. Always experiment
with multiple sampling strategies! A clever balance between learned structure and randomness is what makes generation interesting.
Note that by training a bigger model, longer, on more data, you can achieve generated samples that will look much more coherent and
realistic than ours. But of course, don't expect to ever generate any meaningful text, other than by random chance: all we are doing is
sampling data from a statistical model of which characters come after which characters. Language is a communication channel, and there is
a distinction between what communications are about, and the statistical structure of the messages in which communications are encoded. To
evidence this distinction, here is a thought experiment: what if human language did a better job at compressing communications, much like
our computers do with most of our digital communications? Then language would be no less meaningful, yet it would lack any intrinsic
statistical structure, thus making it impossible to learn a language model like we just did.
## Take aways
* We can generate discrete sequence data by training a model to predict the next token(s) given previous tokens.
* In the case of text, such a model is called a "language model" and could be based on either words or characters.
* Sampling the next token requires balance between adhering to what the model judges likely, and introducing randomness.
* One way to handle this is the notion of _softmax temperature_. Always experiment with different temperatures to find the "right" one.
<a href="https://colab.research.google.com/github/amir1m/learning-ml/blob/master/FCML_CoinToss.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from scipy.special import comb
from scipy.stats import beta
import matplotlib.pyplot as plt
import numpy as np
import time
def get_binomial_prob(N,y,r):
return (comb(N, y) * (r ** y) * (1 - r) ** (N - y) )
def get_binomial_prob_dist(N = 10,r = 0.5):
prob_y = []
for y in range(0,N+1,1):
prob_y.append(get_binomial_prob(N, y, r))
return prob_y
def plot_dist(prob_y):
N = len(prob_y)
plt.bar(range(0,N), prob_y)
plt.xticks(ticks=range(0,N))
plt.xlabel("y")
plt.ylabel("P(y)")
plt.plot(range(0,N), prob_y)
plot_dist(get_binomial_prob_dist(N = 10, r = 0.5))
plot_dist(get_binomial_prob_dist(N = 10, r = 0.9))
```
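As a quick sanity check (assuming `scipy` is available, as imported above), the hand-rolled binomial PMF can be compared against `scipy.stats.binom.pmf`:

```
from scipy.special import comb
from scipy.stats import binom

def get_binomial_prob(N, y, r):
    return comb(N, y) * (r ** y) * (1 - r) ** (N - y)

# The two computations should agree to floating-point precision
for y in range(11):
    assert abs(get_binomial_prob(10, y, 0.5) - binom.pmf(y, 10, 0.5)) < 1e-12
print("matches scipy.stats.binom.pmf")
```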
## Bayesian
```
N = 10
Y_N = 6
p_yn_given_r = []
for r in np.arange(0,1, 0.1):
p_yn_given_r.append(get_binomial_prob(N, Y_N, r))
plt.plot(np.arange(0,1, 0.1), p_yn_given_r, 'b')
N = 100
Y_N = 70
p_yn_given_r = []
for r in np.arange(0, 1, 0.1):
p_yn_given_r.append(get_binomial_prob(N, Y_N, r))
print(N, Y_N, r)
print(get_binomial_prob(N, Y_N, r))
plt.plot(np.arange(0,1, 0.1), p_yn_given_r, 'r')
```
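The likelihood curves above peak at the maximum-likelihood estimate r = yN / N; a quick numeric check over a finer grid (a sketch reusing the same PMF):

```
import numpy as np
from scipy.special import comb

def get_binomial_prob(N, y, r):
    return comb(N, y) * (r ** y) * (1 - r) ** (N - y)

r_grid = np.linspace(0.001, 0.999, 999)  # step of 0.001, avoiding the endpoints
for N, yN in [(10, 6), (100, 70)]:
    likelihood = get_binomial_prob(N, yN, r_grid)
    # the grid argmax should sit at (or next to) the analytic maximizer yN / N
    print(r_grid[np.argmax(likelihood)], yN / N)
```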
## Three Scenarios
### No prior knowledge (3.3.1)
```
plt.plot(beta.pdf(np.linspace(0.0, 1.0, 100),1,1))
plt.plot(beta.pdf(np.linspace(0.0, 1.0, num = 10), 2,1)) # For one Toss as Head
tosses = [0,1,2,3,4,10]
heads = [0,1,1,2,3,6]
a = 1.0
b = 1.0
prob_yn_r = []
r_values = np.linspace(0.0,1.0, 10)
expectations = []
variance = []
for i in range(0,6):
N = tosses[i]
yN = heads[i]
delta = yN + a
gamma = N - yN + b
expectations.append(delta / (delta + gamma))
variance.append((delta * gamma) / ((delta + gamma)**2 * (delta + gamma + 1)))
print("Toss %d: \n\t Heads = %f, delta = %f, gamma = %f, expectations = %f, variance = %f" %(N,yN, delta, gamma,expectations[i],variance[i]))
prob_yn_r.append(beta.pdf(r_values, delta, gamma ))
figure, axes = plt.subplots(nrows=3, ncols=2)
i = 0
for row in range(3):
for col in range(2):
axes[row, col].plot(r_values, prob_yn_r[i])
axes[row, col].set_title('Tosses=%d,Heads:%d'%(tosses[i], heads[i]))
i += 1
figure.tight_layout(pad=1.0)
figure, axes = plt.subplots(nrows=1, ncols=2)
axes[0].plot(tosses,expectations)
axes[0].set_title("No. of tosses Vs Expectations")
axes[0].set_xlabel("No. of Coin Tosses")
axes[0].set_ylabel("Expectations")
axes[1].plot(tosses,variance)
axes[1].set_title("No. of tosses Vs Variance")
axes[1].set_xlabel("No. of Coin Tosses")
axes[1].set_ylabel("Variance")
figure.tight_layout(pad=3.0)
```
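The closed-form posterior used above is Beta(yN + a, N - yN + b), whose mean and variance are computed in the loop; these can be cross-checked against `scipy.stats.beta` (a sketch for the N = 10, yN = 6 case):

```
from scipy.stats import beta

a, b = 1.0, 1.0                     # uniform prior, as in the "no prior knowledge" scenario
N, yN = 10, 6
delta, gamma = yN + a, N - yN + b   # posterior is Beta(delta, gamma)

# Closed-form mean and variance should match scipy's
assert abs(delta / (delta + gamma) - beta.mean(delta, gamma)) < 1e-12
assert abs((delta * gamma) / ((delta + gamma) ** 2 * (delta + gamma + 1))
           - beta.var(delta, gamma)) < 1e-12
print("posterior mean and variance match scipy")
```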
# Time Series Forecasting with Linear Learner
_**Using Linear Regression to Forecast Monthly Demand**_
---
---
## Contents
1. [Background](#Background)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Train](#Train)
1. [Host](#Host)
1. [Forecast](#Forecast)
1. [Extensions](#Extensions)
---
## Background
Forecasting is potentially the most broadly relevant machine learning topic there is. Whether predicting future sales in retail, housing prices in real estate, traffic in cities, or patient visits in healthcare, almost every industry could benefit from improvements in their forecasts. There are numerous statistical methodologies that have been developed to forecast time-series data, but still, the process for developing forecasts tends to be a mix of objective statistics and subjective interpretations.
Properly modeling time-series data takes a great deal of care. What's the right level of aggregation to model at? Too granular and the signal gets lost in the noise, too aggregate and important variation is missed. Also, what is the right cyclicality? Daily, weekly, monthly? Are there holiday peaks? How should we weight recent versus overall trends?
Linear regression with appropriate controls for trend, seasonality, and recent behavior remains a common method for forecasting stable time-series with reasonable volatility. This notebook will build a linear model to forecast weekly output for US gasoline products from 1991 to 2005. It will focus almost exclusively on the application. For a more in-depth treatment on forecasting in general, see [Forecasting: Principles & Practice](https://robjhyndman.com/uwafiles/fpp-notes.pdf). In addition, because our dataset is a single time-series, we'll stick with SageMaker's Linear Learner algorithm. If we had multiple, related time-series, we would use SageMaker's DeepAR algorithm, which is specifically designed for forecasting. See the [DeepAR Notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/deepar_synthetic/deepar_synthetic.ipynb) for more detail.
---
## Setup
_This notebook was created and tested on an ml.m4.xlarge notebook instance._
Let's start by specifying:
- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
```
import sagemaker
# Default S3 bucket created for training and storing model artifacts
# The bucket and prefix parameters can be modified to use an existing bucket
bucket = sagemaker.Session().default_bucket()
prefix = 'sagemaker/DEMO-linear-time-series-forecast'
# Define IAM role
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
```
Now we'll import the Python libraries we'll need.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import io
import os
import time
import json
import sagemaker.amazon.common as smac
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import JSONDeserializer
```
---
## Data
Let's download the data. More information about this dataset can be found [here](https://rdrr.io/github/robjhyndman/fpp/man/gasoline.html).
```
!wget http://robjhyndman.com/data/gasoline.csv
```
And take a look at it.
```
gas = pd.read_csv('gasoline.csv', header=None, names=['thousands_barrels'])
display(gas.head())
plt.plot(gas)
plt.show()
```
As we can see, there's a definite upward trend and some yearly seasonality, but sufficient volatility to make the problem non-trivial. There are several unexpected dips and years with more or less pronounced seasonality. These same characteristics are common in many topline time-series.
Next we'll transform the dataset to make it look a bit more like a standard prediction model. Our target variable is `thousands_barrels`. Let's create explanatory features, like:
- `thousands_barrels` for each of the 4 preceding weeks.
- Trend. The chart above suggests the trend is simply linear, but we'll create log and quadratic trends just in case.
- Indicator variables {0 or 1} that will help capture seasonality and key holiday weeks.
```
gas['thousands_barrels_lag1'] = gas['thousands_barrels'].shift(1)
gas['thousands_barrels_lag2'] = gas['thousands_barrels'].shift(2)
gas['thousands_barrels_lag3'] = gas['thousands_barrels'].shift(3)
gas['thousands_barrels_lag4'] = gas['thousands_barrels'].shift(4)
gas['trend'] = np.arange(len(gas))
gas['log_trend'] = np.log1p(np.arange(len(gas)))
gas['sq_trend'] = np.arange(len(gas)) ** 2
weeks = pd.get_dummies(np.array(list(range(52)) * 15)[:len(gas)], prefix='week')
gas = pd.concat([gas, weeks], axis=1)
```
Now, we'll:
- Clear out the first four rows where we don't have lagged information.
- Split the target off from the explanatory features.
- Split the data into training, validation, and test groups so that we can tune our model and then evaluate its accuracy on data it hasn't seen yet. Since this is time-series data, we'll use the first 60% for training, the second 20% for validation, and the final 20% for final test evaluation.
```
gas = gas.iloc[4:, ]
split_train = int(len(gas) * 0.6)
split_test = int(len(gas) * 0.8)
train_y = gas['thousands_barrels'][:split_train]
train_X = gas.drop('thousands_barrels', axis=1).iloc[:split_train, ].to_numpy()
validation_y = gas['thousands_barrels'][split_train:split_test]
validation_X = gas.drop('thousands_barrels', axis=1).iloc[split_train:split_test, ].to_numpy()
test_y = gas['thousands_barrels'][split_test:]
test_X = gas.drop('thousands_barrels', axis=1).iloc[split_test:, ].to_numpy()
```
Now, we'll convert the datasets to the recordIO-wrapped protobuf format used by the Amazon SageMaker algorithms and upload this data to S3. We'll start with training data.
```
buf = io.BytesIO()
smac.write_numpy_to_dense_tensor(buf, np.array(train_X).astype('float32'), np.array(train_y).astype('float32'))
buf.seek(0)
key = 'linear_train.data'
boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train', key)).upload_fileobj(buf)
s3_train_data = 's3://{}/{}/train/{}'.format(bucket, prefix, key)
print('uploaded training data location: {}'.format(s3_train_data))
```
Next we'll convert and upload the validation dataset.
```
buf = io.BytesIO()
smac.write_numpy_to_dense_tensor(buf, np.array(validation_X).astype('float32'), np.array(validation_y).astype('float32'))
buf.seek(0)
key = 'linear_validation.data'
boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation', key)).upload_fileobj(buf)
s3_validation_data = 's3://{}/{}/validation/{}'.format(bucket, prefix, key)
print('uploaded validation data location: {}'.format(s3_validation_data))
```
---
## Train
Now we can begin to specify our linear model. First, let's specify the containers for the Linear Learner algorithm. Since we want this notebook to run in all of Amazon SageMaker's regions, we'll use a convenience function to look up the container image name for our current region. More details on algorithm containers can be found in [AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html).
```
from sagemaker.amazon.amazon_estimator import image_uris
container = image_uris.retrieve(framework='linear-learner', region=boto3.Session().region_name)
```
Amazon SageMaker's Linear Learner actually fits many models in parallel, each with slightly different hyperparameters, and then returns the one with the best fit. This functionality is automatically enabled. We can influence this using parameters like:
- `num_models` to increase the total number of models run. The specified parameters will always be one of those models, but the algorithm also chooses models with nearby parameter values in order to find a solution nearby that may be more optimal. In this case, we're going to use the max of 32.
- `loss` which controls how we penalize mistakes in our model estimates. For this case, let's use absolute loss as we haven't spent much time cleaning the data, and absolute loss will adjust less to accommodate outliers.
- `wd` or `l1` which control regularization. Regularization can prevent model overfitting by preventing our estimates from becoming too finely tuned to the training data, which can actually hurt generalizability. In this case, we'll leave these parameters as their default "auto" though.
Let's kick off our training job in SageMaker's distributed, managed training. Because training is managed (AWS handles spinning up and spinning down hardware), we don't have to wait for our job to finish to continue, but for this case, we'll use the Python SDK to wait for the job and track our progress.
```
sess = sagemaker.Session()
linear = sagemaker.estimator.Estimator(container,
role,
instance_count=1,
instance_type='ml.c4.xlarge',
output_path='s3://{}/{}/output'.format(bucket, prefix),
sagemaker_session=sess)
linear.set_hyperparameters(feature_dim=59,
mini_batch_size=100,
predictor_type='regressor',
epochs=10,
num_models=32,
loss='absolute_loss')
linear.fit({'train': s3_train_data, 'validation': s3_validation_data})
```
---
## Host
Now that we've trained the linear algorithm on our data, let's create a model and deploy that to a hosted endpoint.
```
linear_predictor = linear.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge')
```
### Forecast
Now that we have our hosted endpoint, we can generate statistical forecasts from it. Let's forecast on our test dataset to understand how accurate our model may be.
There are many metrics to measure forecast error. Common examples include:
- Root Mean Square Error (RMSE)
- Mean Absolute Percent Error (MAPE)
- Geometric Mean of the Relative Absolute Error (GMRAE)
- Quantile forecast errors
- Errors that account for asymmetric loss in over or under-prediction
For our example we'll keep things simple and use Median Absolute Percent Error (MdAPE), but we'll also compare it to a naive benchmark forecast (last year's demand for that week, scaled by that week's year-over-year growth: last year's value squared divided by the value from two years ago).
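As a quick illustration of how MdAPE is computed (toy numbers, not from the gasoline dataset):

```
import numpy as np

actual = np.array([100.0, 110.0, 90.0, 105.0])
forecast = np.array([98.0, 115.0, 85.0, 104.0])
# median of the absolute percent errors; less sensitive to a single bad week than the mean
mdape = np.median(np.abs(actual - forecast) / actual)
print(mdape)
```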
There are also multiple ways to generate forecasts.
- One-step-ahead forecasts: When predicting for multiple data points, one-step-ahead forecasts update the history with the correct known value. These are common, easy to produce, and can give us some intuition of whether our model is performing as expected. However, they can also present an unnecessarily optimistic evaluation of the forecast. In most real-life cases, we want to predict out well into the future, because the actions we may take based on that forecast are not immediate. In these cases, we want to know what the time-periods in between will bring, so generating forecasts that assume knowledge of those intervening values can be misleading.
- Multi-step-ahead (or horizon) forecasts: In this case, when forecasting out of sample, each forecast builds off of the forecasted periods that precede it. So, errors early on in the test data can compound to create large deviations for observations late in the test data. Although this is more realistic, it can be difficult to create the forecasts, particularly as model complexity increases.
For our example, we'll calculate both, but focus on the multi-step forecast accuracy.
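A toy sketch of the difference (a hypothetical one-lag model, not the trained endpoint): one-step forecasts always use the true previous value, while multi-step forecasts feed their own predictions back in, so early errors compound:

```
import numpy as np

actual = np.array([10.0, 11.0, 12.5, 13.0, 15.0])
coef = 1.1                      # hypothetical model: y_hat[t] = coef * y[t-1]

one_step = coef * actual[:-1]   # each forecast uses the true previous value

multi_step = []
lag = actual[0]
for _ in range(len(actual) - 1):
    pred = coef * lag
    multi_step.append(pred)
    lag = pred                  # feed the forecast back in as the next lag
multi_step = np.array(multi_step)

print(one_step)
print(multi_step)
```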
Let's start by generating the naive forecast.
```
gas['thousands_barrels_lag52'] = gas['thousands_barrels'].shift(52)
gas['thousands_barrels_lag104'] = gas['thousands_barrels'].shift(104)
gas['thousands_barrels_naive_forecast'] = gas['thousands_barrels_lag52'] ** 2 / gas['thousands_barrels_lag104']
naive = gas[split_test:]['thousands_barrels_naive_forecast'].to_numpy()
```
And investigate its accuracy.
```
print('Naive MdAPE =', np.median(np.abs(test_y - naive) / test_y))
plt.plot(np.array(test_y), label='actual')
plt.plot(naive, label='naive')
plt.legend()
plt.show()
```
Now we'll generate the one-step-ahead forecast. First we need a function to convert our numpy arrays into a format that can be handled by the HTTP POST request we pass to the inference container. In this case that's a simple CSV string. The results will be published back as JSON. For these common formats we can use the Amazon SageMaker Python SDK's built-in `CSVSerializer` and `JSONDeserializer` functions.
```
linear_predictor.serializer = CSVSerializer()
linear_predictor.deserializer = JSONDeserializer()
```
Next, we'll invoke the endpoint to get predictions.
```
result = linear_predictor.predict(test_X)
one_step = np.array([r['score'] for r in result['predictions']])
```
Let's compare forecast errors.
```
print('One-step-ahead MdAPE = ', np.median(np.abs(test_y - one_step) / test_y))
plt.plot(np.array(test_y), label='actual')
plt.plot(one_step, label='forecast')
plt.legend()
plt.show()
```
As we can see our MdAPE is substantially better than the naive, and we actually swing from a forecast that's too volatile to one that under-represents the noise in our data. However, the overall shape of the statistical forecast does appear to better represent the actual data.
Next, let's generate multi-step-ahead forecast. To do this, we'll need to loop over invoking the endpoint one row at a time and make sure the lags in our model are updated appropriately.
```
multi_step = []
lags = test_X[0, 0:4]
for row in test_X:
row[0:4] = lags
result = linear_predictor.predict(row)
prediction = result['predictions'][0]['score']
multi_step.append(prediction)
lags[1:4] = lags[0:3]
lags[0] = prediction
multi_step = np.array(multi_step)
```
And now calculate the accuracy of these predictions.
```
print('Multi-step-ahead MdAPE =', np.median(np.abs(test_y - multi_step) / test_y))
plt.plot(np.array(test_y), label='actual')
plt.plot(multi_step, label='forecast')
plt.legend()
plt.show()
```
As we can see our multi-step ahead error performs worse than our one-step ahead forecast, but nevertheless remains substantially stronger than the naive benchmark forecast. This 1.5 percentage point difference may not seem particularly meaningful, but at the large scale of many topline forecasts can mean millions of dollars in excess supply or lost sales.
---
## Extensions
Our linear model does a good job of predicting gasoline demand, but of course, improvements could be made. The fact that statistical forecast actually underrepresents some of the volatility in the data could suggest that we have actually over-regularized the data. Or, perhaps our choice of absolute loss was incorrect. Rerunning the model with further tweaks to these hyperparameters may provide more accurate out of sample forecasts. We also did not do a large amount of feature engineering. Occasionally, the lagging time-periods have complex interrelationships with one another that should be explored. Finally, alternative forecasting algorithms could be explored. Less interpretable methods like ARIMA, and black-box methods like LSTM Recurrent Neural Networks have been shown to predict time-series very well. Balancing the simplicity of a linear model with predictive accuracy is an important subjective question where the right answer depends on the problem being solved, and its implications to the business.
### (Optional) Clean-up
If you're ready to be done with this notebook, please run the cell below. This will remove the hosted endpoint you created and avoid any charges from a stray instance being left on.
```
sagemaker.Session().delete_endpoint(linear_predictor.endpoint_name)
```
```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import h5py
import os
from preprocessor import Preprocessor
def save_data_set(x, y, data_type, path, s=''):
if not os.path.exists(path):
os.makedirs(path)
fname=os.path.join(path, f'x_{data_type}{s}.h5')
# print("Saving x_{} of shape {} in {}".format(data_type, x.shape, fname))
xf = h5py.File(fname, 'w')
xf.create_dataset('x_{}'.format(data_type), data=x)
xf.close()
# print("Saving y_{} of shape {} in {}".format(data_type, y.shape, fname))
yf = h5py.File(os.path.join(path, f'y_{data_type}{s}.h5'), 'w')
yf.create_dataset(f'y_{data_type}', data=y)
yf.close()
cols = {
'checking':str,
'duration':np.int64,
'credit-hist':str,
'purpose':str,
'credit-amount':np.int64,
'savings-account':str,
'employment-duration':str,
'installment-income-ratio':np.int64,
'marital-gender-status':str,
'debtors-guarantors':str,
'residence-since':str,
'property':str,
'age':np.int64,
'installment-plans':str,
'housing':str,
'num-existing-credits':np.int64,
'job':str,
'num-liable':np.int64,
'telephone':str,
'foreign-worker':str,
'is_good':np.int64
}
df = pd.read_csv('./datasets/loan_approval.generated', sep=" ", index_col=False, names=cols.keys(), header=None, dtype=cols)
X = df.drop(['is_good'], axis=1)
y = df['is_good'].replace([1, 2], [1, 0])
# Returned values type:
# x_train - pandas.core.frame.DataFrame
# y_train - pandas.core.series.Series
# x_test - pandas.core.frame.DataFrame
# y_test - pandas.core.series.Series
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=5, stratify=y)
prep = Preprocessor()
# Returned value type: Numpy NDArray
x_train = prep.fit_transform(x_train)
# Returned value type: Numpy NDArray
x_test = prep.transform(x_test)
# Returned values type:
# x_test - numpy.ndarray
# y_test - pandas.core.series.Series
# x_val - numpy.ndarray
# y_val - pandas.core.series.Series
x_test, x_val, y_test, y_val = train_test_split(x_test, y_test, test_size=4096, random_state=5, stratify=y_test)
test_idx = y_test[0:15].index
path = "samples/"
if not os.path.exists(path):
os.makedirs(path)
for i in range(test_idx.size):
out = f"Sample_Index: {test_idx[i]}\n"
for col in df.columns:
out = out + col + " : " + str(df.loc[test_idx[i]][col]) + "\n"
fname = os.path.join(path, "sample" + str(i) + ".raw")
# print("Saving raw data to: " + fname)
with open(fname, 'w') as f:
f.write(out)
save_data_set(np.reshape(x_test[i], [1, x_test[i].shape[0]]), y_test.to_numpy()[i], data_type='sample'+str(i), path=path)
```
```
import torch
import torch.nn as nn
import torch.nn.functional as f
import torch.optim as optim
import time
import random, argparse, logging, os
from collections import namedtuple
from minatar import Environment
import matplotlib.pyplot as plt
import numpy as np
# remove for game display
%matplotlib inline
NUM_FRAMES = 1000
MAX_EVALS = 5000
device = "cpu"
class Network(nn.Module):
def __init__(self, in_channels, num_actions):
super(Network, self).__init__()
# One hidden 2D convolution layer:
# in_channels: variable
# out_channels: 4
# kernel_size: 3 of a 3x3 filter matrix
# stride: 1
self.conv = nn.Conv2d(in_channels, 4, kernel_size=3, stride=1)
# Final fully connected hidden layer:
# the number of linear unit depends on the output of the conv
# the output consist 32 rectified units
def size_linear_unit(size, kernel_size=3, stride=1):
return (size - (kernel_size - 1) - 1) // stride + 1
num_linear_units = size_linear_unit(10) * size_linear_unit(10) * 4
self.fc_hidden = nn.Linear(in_features=num_linear_units, out_features=32)
# Output layer:
self.output = nn.Linear(in_features=32, out_features=num_actions)
def forward(self, x):
x = f.relu(self.conv(x))
x = f.relu(self.fc_hidden(x.view(x.size(0), -1)))
return self.output(x)
def set_params(self, params):
a = torch.tensor(params, device=device).float()
torch.nn.utils.vector_to_parameters(a, self.parameters())
def get_params(self):
with torch.no_grad():
params = self.parameters()
vec = torch.nn.utils.parameters_to_vector(params)
return vec.to(device).numpy()
def get_state(s):
return (torch.tensor(s, device=device).permute(2, 0, 1)).unsqueeze(0).float()
env = Environment("breakout", sticky_action_prob=0.0, random_seed=0)
env.reset()
in_channels = env.state_shape()[2]
num_actions = env.num_actions()
policy_net = Network(in_channels, num_actions).to(device)
print(env.state().shape)
plt.imshow(env.state()[:, :, 0]);
s = get_state(env.state())
print(s.shape)
plt.imshow(s[:, 0, :, :][0]);
with torch.no_grad():
outputs = policy_net(s)
print("outputs: ", outputs)
print("argmax: ", torch.argmax(outputs))
def play(policy_net, display=False):
env = Environment("breakout", sticky_action_prob=0.0, random_seed=0)
env.reset()
is_terminated = False
total_reward = 0.0
t = 0
while (not is_terminated) and t < NUM_FRAMES:
s = get_state(env.state())
with torch.no_grad():
action = torch.argmax(policy_net(s))
reward, is_terminated = env.act(action)
total_reward += reward
t += 1
if display:
env.display_state(1)
return total_reward
play(policy_net, display=False)
genes = policy_net.get_params()
n_genes = len(genes)
print(genes, n_genes)
genes = np.random.randn(n_genes)
policy_net.set_params(genes)
policy_net.get_params() - genes
np.random.seed(4321)
best_genes = genes
best_fit = 0.0
fits = np.zeros(MAX_EVALS)
for i in range(MAX_EVALS):
genes = np.random.randn(n_genes)
policy_net.set_params(genes)
fitness = play(policy_net)
if fitness > best_fit:
print(i, " ", fitness)
best_fit = fitness
best_genes = genes.copy()
fits[i] = best_fit
plt.plot(fits)
plt.xlabel("Evaluations")
plt.ylabel("Total reward")
plt.title(env.env_name);
policy_net.set_params(best_genes)
play(policy_net, display=False)
```
<table width="100%">
<tr style="border-bottom:solid 2pt #009EE3">
<td style="text-align:left" width="10%">
<a href="open_h5.dwipynb" download><img src="../../images/icons/download.png"></a>
</td>
<td style="text-align:left" width="10%">
<a><img class="not_active_img" src="../../images/icons/program.png" title="Will be released soon !"></a>
</td>
<td></td>
<td style="text-align:left" width="5%">
<a href="../MainFiles/opensignalsfactory.ipynb"><img src="../../images/icons/home.png"></a>
</td>
<td style="text-align:left" width="5%">
<a href="../MainFiles/contacts.ipynb"><img src="../../images/icons/contacts.png"></a>
</td>
<td style="text-align:left" width="5%">
<a href="https://github.com/opensignalsfactory/opensignalsfactory"><img src="../../images/icons/github.png"></a>
</td>
<td style="border-left:solid 2pt #009EE3" width="20%">
<img src="../../images/ost_logo.png">
</td>
</tr>
</table>
<link rel="stylesheet" href="../../styles/theme_style.css">
<!--link rel="stylesheet" href="../../styles/header_style.css"-->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<table width="100%">
<tr>
<td id="image_td" width="15%" class="header_image_color_1"><div id="image_img"
class="header_image_1"></div></td>
<!-- Available classes for "image_td" element:
- header_image_color_1 (For Notebooks of "Open" Area);
- header_image_color_2 (For Notebooks of "Acquire" Area);
- header_image_color_3 (For Notebooks of "Visualise" Area);
- header_image_color_4 (For Notebooks of "Process" Area);
- header_image_color_5 (For Notebooks of "Detect" Area);
- header_image_color_6 (For Notebooks of "Extract" Area);
- header_image_color_7 (For Notebooks of "Decide" Area);
- header_image_color_8 (For Notebooks of "Explain" Area);
Available classes for "image_img" element:
- header_image_1 (For Notebooks of "Open" Area);
- header_image_2 (For Notebooks of "Acquire" Area);
- header_image_3 (For Notebooks of "Visualise" Area);
- header_image_4 (For Notebooks of "Process" Area);
- header_image_5 (For Notebooks of "Detect" Area);
- header_image_6 (For Notebooks of "Extract" Area);
- header_image_7 (For Notebooks of "Decide" Area);
- header_image_8 (For Notebooks of "Explain" Area);-->
<td class="header_text">Load acquired data from .h5 file</td>
</tr>
</table>
<div id="flex-container">
<div id="diff_level" class="flex-item">
<strong>Difficulty Level:</strong> <span class="fa fa-star checked"></span>
<span class="fa fa-star checked"></span>
<span class="fa fa-star"></span>
<span class="fa fa-star"></span>
<span class="fa fa-star"></span>
</div>
<div id="tag" class="flex-item-tag">
<span id="tag_list">
<table id="tag_list_table">
<tr>
<td class="shield_left">Tags</td>
<td class="shield_right" id="tags">open☁load☁h5</td>
</tr>
</table>
</span>
<!-- [OR] Visit https://img.shields.io in order to create a tag badge-->
</div>
</div>
For storing large amounts of data, a .h5 (hierarchical data format) file is an interesting approach. It is one of the predefined OpenSignals output file formats.
This notebook explains how to load/convert the data inside a .h5 file into a Python list, which can easily be manipulated in processing operations.
<hr>
<p class="steps">1 - Needed packages importation</p>
```
# Package used for loading data from the input h5 file
from h5py import File
# Package intended to work with arrays
from numpy import array
# opensignalsfactory python package
import opensignalsfactory as osf
```
<p class="steps">2 - Creation of a h5py object from the file named "ecg_sample.h5"</p>
```
file_folder = "../../signal_samples"
file_name = "ecg_sample.h5"
file_path = file_folder + "/" + file_name
h5_object = File(file_path)
```
<p class="steps">3 - Inspection of .h5 file internal structure/groups (in this case the mac address list of the used devices for acquiring data)</p>
```
# Keys list (.h5 hierarchy ground level)
list(h5_object.keys())
```
<p class="steps">4 - Access to the second hierarchy level through group key "00:07:80:3B:46:61"</p>
```
h5_group = h5_object.get('00:07:80:3B:46:61')
print ("Second hierarchy level: " + str(list(h5_group)))
```
<p class="steps">5 - Identification of h5_group metadata attributes</p>
```
print ("Metadata of h5_group: \n" + str(list(h5_group.attrs.keys())))
```
<p class="steps">6 - Storage of acquisition sampling rate by accessing h5_group metadata (attributes)</p>
```
sampling_rate = h5_group.attrs.get("sampling rate")
print ("Sampling Rate: " + str(sampling_rate))
```
<p class="steps">7 - Access to the third level of data through group key "00:07:80:3B:46:61" and sub-group key "raw"</p>
```
h5_sub_group = h5_group.get("raw")
print("Third hierarchy level: " + str(list(h5_sub_group)))
```
<p class="steps">8 - Transposition of "channel_1" dataset to a Python list (the units are mV). This sub-group corresponds to the data acquired at channel 1</p>
```
h5_data = h5_sub_group.get("channel_1")
# Conversion of a nested list to a flatten list by list-comprehension
# The following line is equivalent to:
# for sublist in h5_data:
# for item in sublist:
# flat_list.append(item)
data_list = [item for sublist in h5_data for item in sublist]
time = osf.generate_time(data_list, sampling_rate)
# Signal data samples values and graphical representation.
print (array([item for sublist in h5_data for item in sublist]))
osf.plot([time], [data_list], x_axis_label="Time (s)", y_axis_label="Raw Data")
```
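The flattening pattern above is generic; a tiny standalone example with a plain numpy array of the same (n, 1) shape (no file needed):

```
import numpy as np

nested = np.arange(6).reshape(-1, 1)  # mimics an h5 dataset with one sample per row
flat = [item for sublist in nested for item in sublist]
print(flat)
```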
*This procedure can be done automatically by the **load** function of the **<span class="color2">opensignalsfactory</span>** package.*
With the described steps, the user acquires the ability to explore .h5 files, navigate through their hierarchy, and access data and metadata.
In order to access this information through a graphical user interface, there are interesting applications such as **<span class="color2">HDFView</span> <a href="https://support.hdfgroup.org/products/java/hdfview/"><img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a>**.
In the following animation the file hierarchy is presented together with the signal samples in channel 1 and the file metadata/attributes.
<video id="video_1" muted loop src="../../images/open/hdf_view_crop_video.mp4" class="video"></video>
```
%%javascript
document.getElementById("video_1").play()
```
<hr>
<table width="100%">
<tr>
<td style="border-right:solid 3px #009EE3" width="30%">
<img src="../../images/ost_logo.png">
</td>
<td width="35%" style="text-align:left">
<a href="https://github.com/opensignalsfactory/opensignalsfactory">☌ Project Presentation</a>
<br>
<a href="https://github.com/opensignalsfactory/opensignalsfactory">☌ GitHub Repository</a>
<br>
<a href="https://pypi.org/project/opensignalsfactory/">☌ How to install opensignalsfactory Python package ?</a>
<br>
<a href="../MainFiles/signal_samples.ipynb">☌ Signal Library</a>
</td>
<td width="35%" style="text-align:left">
<a href="../MainFiles/opensignalsfactory.ipynb">☌ Notebook Categories</a>
<br>
<a href="../MainFiles/by_diff.ipynb">☌ Notebooks by Difficulty</a>
<br>
<a href="../MainFiles/by_signal_type.ipynb">☌ Notebooks by Signal Type</a>
<br>
<a href="../MainFiles/by_tag.ipynb">☌ Notebooks by Tag</a>
</td>
</tr>
</table>
```
from opensignalsfactory.__notebook_support__ import css_style_apply
css_style_apply()
```
# 08. Pseudo-Random Numbers, Simulating from Some Discrete and Continuous Random Variables
## [Inference Theory 1](https://lamastex.github.io/scalable-data-science/infty/2018/01/)
©2018 Raazesh Sainudiin. [Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
- The $Uniform(0,1)$ RV
- The $Bernoulli(\theta)$ RV
- Simulating from the $Bernoulli(\theta)$ RV
- The Equi-Probable $de\,Moivre(k)$ RV
- Simulating from the Equi-Probable $de\,Moivre(k)$ RV
- The $Uniform(\theta_1, \theta_2)$ RV
- Simulating from the $Uniform(\theta_1, \theta_2)$ RV
- The $Exponential(\lambda)$ RV
- Simulating from the $Exponential(\lambda)$ RV
- The standard $Cauchy$ RV
- Simulating from the standard $Cauchy$ RV
- Investigating running means
- Replicable samples
- A simple simulation
In the last notebook, we started to look at how we can produce realisations from the most elementary $Uniform(0,1)$ random variable.
i.e., how can we produce samples $(x_1, x_2, \ldots, x_n)$ from $X_1, X_2, \ldots, X_n$ $\overset{IID}{\thicksim}$ $Uniform(0,1)$?
What is SageMath doing when we ask for random()?
```
random()
```
We looked at how Modular arithmetic and number theory gives us pseudo-random number generators.
We used linear congruential generators (LCG) as simple pseudo-random number generators.
Remember that "pseudo-random" means that the numbers are not really random. We saw that some linear congruential generators (LCG) have much shorter, more predictable, patterns than others and we learned what makes a good LCG.
We introduced the pseudo-random number generator (PRNG) called the Mersenne Twister that we will use for simulation purposes in this course. It is based on more sophisticated theory than that of LCG but the basic principles of recurrence relations are the same.
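As a reminder of that earlier material, here is a minimal LCG sketched in plain Python; the parameters below are deliberately bad ones chosen for illustration, showing how a poor choice of modulus and multiplier gives a very short, predictable period.

```python
def lcg(modulus, a, c, seed, n):
    """Generate n pseudo-random numbers in [0, 1) with the linear
    congruential recurrence x_{k+1} = (a*x_k + c) mod modulus."""
    xs = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % modulus
        xs.append(x / modulus)
    return xs

# A deliberately bad LCG: its period is only 8
bad = lcg(modulus=8, a=5, c=1, seed=0, n=16)
print(bad[:8] == bad[8:16])  # True -- the sequence repeats after 8 draws
```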
# The $Uniform(0,1)$ Random Variable
Recall that the $Uniform(0,1)$ random variable is the fundamental model as we can transform it to any other random variable, random vector or random structure. The PDF $f$ and DF $F$ of $X \sim Uniform(0,1)$ are:
$f(x) = \begin{cases} 0 & \text{if} \ x \notin [0,1] \\ 1 & \text{if} \ x \in [0,1] \end{cases}$
$F(x) = \begin{cases} 0 & \text{if} \ x < 0 \\ 1 & \text{if} \ x > 1 \\ x & \text{if} \ x \in [0,1] \end{cases}$
We use the Mersenne twister pseudo-random number generator to mimic independent and identically distributed draws from the $uniform(0,1)$ RV.
In Sage, we use the python random module to generate pseudo-random numbers for us. (We have already used it: remember randint?)
random() will give us one simulation from the $Uniform(0,1)$ RV:
```
random()
```
If we want a whole simulated sample we can use a list comprehension. We will be using this technique frequently so make sure you understand what is going on. "for i in range(3)" is acting like a counter to give us 3 simulated values in the list we are making
```
[random() for i in range(3)]
listOfUniformSamples = [random() for i in range(3) ]
listOfUniformSamples
```
If we do this again, we will get a different sample:
```
listOfUniformSamples2 = [random() for i in range(3) ]
listOfUniformSamples2
```
Often is it useful to be able to replicate the same random sample. For example, if we were writing some code to do some simulations using samples from a PRNG, and we "improved" the way that we were doing it, how would we want to test our improvement? If we could replicate the same samples then we could show that our new code was equivalent to our old code, just more efficient.
Remember when we were using the LCGs, and we could set the seed $x_0$? More sophisticated PRNGs like the Mersenne Twister also have a seed. By setting this seed to a specified value we can make sure that we can replicate samples.
```
?set_random_seed
set_random_seed(256526)
listOfUniformSamples = [random() for i in range(3) ]
listOfUniformSamples
initial_seed()
```
Now we can replicate the same sample again by setting the seed to the same value:
```
set_random_seed(256526)
listOfUniformSamples2 = [random() for i in range(3) ]
listOfUniformSamples2
initial_seed()
set_random_seed(2676676766)
listOfUniformSamples2 = [random() for i in range(3) ]
listOfUniformSamples2
initial_seed()
```
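The same replication idea works in plain Python with the standard `random` module (also a Mersenne Twister); this sketch is independent of Sage's `set_random_seed`:

```python
import random

random.seed(256526)
first = [random.random() for _ in range(3)]

random.seed(256526)          # reset to the same seed ...
second = [random.random() for _ in range(3)]

random.seed(2676676766)      # ... while a different seed gives a different sample
third = [random.random() for _ in range(3)]

print(first == second)   # True -- same seed, same sample
print(first == third)    # False (with overwhelming probability)
```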
We can compare some samples visually by plotting them:
```
set_random_seed(256526)
listOfUniformSamples = [(i,random()) for i in range(100)]
plotsSeed1 = points(listOfUniformSamples)
t1 = text('Seed 1 = 256526', (60,1.2), rgbcolor='blue',fontsize=10)
set_random_seed(2676676766)
plotsSeed2 = points([(i,random()) for i in range(100)],rgbcolor="red")
t2 = text('Seed 2 = 2676676766', (60,1.2), rgbcolor='red',fontsize=10)
bothSeeds = plotsSeed1 + plotsSeed2
t31 = text('Seed 1 and', (30,1.2), rgbcolor='blue',fontsize=10)
t32 = text('Seed 2', (65,1.2), rgbcolor='red',fontsize=10)
show(graphics_array( (plotsSeed1+t1,plotsSeed2+t2, bothSeeds+t31+t32)),figsize=[9,3])
```
### YouTry
Try looking at the more advanced documentation and play a bit.
```
?sage.misc.randstate
```
(end of You Try)
---
---
### Question:
What can we do with samples from a $Uniform(0,1)$ RV? Why bother?
### Answer:
We can use them to sample or simulate from other, more complex, random variables.
# The $Bernoulli(\theta)$ Random Variable
The $Bernoulli(\theta)$ RV $X$ with PMF $f(x;\theta)$ and DF $F(x;\theta)$ parameterised by some real $\theta\in [0,1]$ is a discrete random variable with only two possible outcomes.
$f(x;\theta)= \theta^x (1-\theta)^{1-x} \mathbf{1}_{\{0,1\}}(x) =
\begin{cases}
\theta & \text{if} \ x=1,\\
1-\theta & \text{if} \ x=0,\\
0 & \text{otherwise}
\end{cases}$
$F(x;\theta) =
\begin{cases}
1 & \text{if} \ 1 \leq x,\\
1-\theta & \text{if} \ 0 \leq x < 1,\\
0 & \text{otherwise}
\end{cases}$
Here are some functions for the PMF and DF for a $Bernoulli$ RV along with various useful functions for us in the sequel. Let's take a quick look at them.
```
def bernoulliPMF(x, theta):
    '''Probability mass function for Bernoulli(theta).
    Param x is the value to find the Bernoulli probability mass of.
    Param theta is the theta parameterising this Bernoulli RV.'''
    retValue = 0
    if x == 1:
        retValue = theta
    elif x == 0:
        retValue = 1 - theta
    return retValue

def bernoulliCDF(x, theta):
    '''DF for Bernoulli(theta).
    Param x is the value to find the Bernoulli cumulative density function of.
    Param theta is the theta parameterising this Bernoulli RV.'''
    retValue = 0
    if x >= 1:
        retValue = 1
    elif x >= 0:
        retValue = 1 - theta
    # in the case where x < 0, retValue is the default of 0
    return retValue

# PMF plot
def pmfPlot(outcomes, pmf_values):
    '''Returns a pmf plot for a discrete distribution.'''
    pmf = points(zip(outcomes, pmf_values), rgbcolor="blue", pointsize='20')
    for i in range(len(outcomes)):
        pmf += line([(outcomes[i], 0), (outcomes[i], pmf_values[i])], rgbcolor="blue", linestyle=":")
    # padding
    pmf += point((0,1), rgbcolor="black", pointsize="0")
    return pmf

# CDF plot
def cdfPlot(outcomes, cdf_values):
    '''Returns a DF plot for a discrete distribution.'''
    cdf_pairs = zip(outcomes, cdf_values)
    cdf = point(cdf_pairs, rgbcolor="red", faceted=false, pointsize="20")
    for k in range(len(cdf_pairs)):
        x, kheight = cdf_pairs[k]  # unpack tuple
        previous_x = 0
        previous_height = 0
        if k > 0:
            previous_x, previous_height = cdf_pairs[k-1]  # unpack previous tuple
        cdf += line([(previous_x, previous_height), (x, previous_height)], rgbcolor="grey")
        cdf += points((x, previous_height), rgbcolor="white", faceted=true, pointsize="20")
        cdf += line([(x, previous_height), (x, kheight)], rgbcolor="blue", linestyle=":")
    # padding
    max_index = len(outcomes) - 1
    cdf += line([(outcomes[0]-0.2, 0), (outcomes[0], 0)], rgbcolor="grey")
    cdf += line([(outcomes[max_index], cdf_values[max_index]), (outcomes[max_index]+0.2, cdf_values[max_index])], rgbcolor="grey")
    return cdf

def makeFreqDictHidden(myDataList):
    '''Make a frequency mapping out of a list of data.
    Param myDataList, a list of data.
    Return a dictionary mapping each data value from min to max in steps of 1 to its frequency count.'''
    freqDict = {}  # start with an empty dictionary
    sortedMyDataList = sorted(myDataList)
    for k in sortedMyDataList:
        freqDict[k] = myDataList.count(k)
    return freqDict  # return the dictionary created

def makeEMFHidden(myDataList):
    '''Make an empirical mass function from a data list.
    Param myDataList, list of data to make emf from.
    Return list of tuples comprising (data value, relative frequency) ordered by data value.'''
    freqs = makeFreqDictHidden(myDataList)  # make the frequency counts mapping
    totalCounts = sum(freqs.values())
    relFreqs = [fr/(1.0*totalCounts) for fr in freqs.values()]  # use a list comprehension
    numRelFreqPairs = zip(freqs.keys(), relFreqs)  # zip the keys and relative frequencies together
    numRelFreqPairs.sort()  # sort the list of tuples
    return numRelFreqPairs

from pylab import array

def makeEDFHidden(myDataList):
    '''Make an empirical distribution function from a data list.
    Param myDataList, list of data to make edf from.
    Return list of tuples comprising (data value, cumulative relative frequency) ordered by data value.'''
    freqs = makeFreqDictHidden(myDataList)  # make the frequency counts mapping
    totalCounts = sum(freqs.values())
    relFreqs = [fr/(1.0*totalCounts) for fr in freqs.values()]  # use a list comprehension
    relFreqsArray = array(relFreqs)
    cumFreqs = list(relFreqsArray.cumsum())
    numCumFreqPairs = zip(freqs.keys(), cumFreqs)  # zip the keys and cumulative relative frequencies together
    numCumFreqPairs.sort()  # sort the list of tuples
    return numCumFreqPairs

# EPMF plot
def epmfPlot(samples):
    '''Returns an empirical probability mass function plot from samples data.'''
    epmf_pairs = makeEMFHidden(samples)
    epmf = point(epmf_pairs, rgbcolor="blue", pointsize="20")
    for k in epmf_pairs:  # for each tuple in the list
        kkey, kheight = k  # unpack tuple
        epmf += line([(kkey, 0), (kkey, kheight)], rgbcolor="blue", linestyle=":")
    # padding
    epmf += point((0,1), rgbcolor="black", pointsize="0")
    return epmf

# ECDF plot
def ecdfPlot(samples):
    '''Returns an empirical cumulative distribution function plot from samples data.'''
    ecdf_pairs = makeEDFHidden(samples)
    ecdf = point(ecdf_pairs, rgbcolor="red", faceted=false, pointsize="20")
    for k in range(len(ecdf_pairs)):
        x, kheight = ecdf_pairs[k]  # unpack tuple
        previous_x = 0
        previous_height = 0
        if k > 0:
            previous_x, previous_height = ecdf_pairs[k-1]  # unpack previous tuple
        ecdf += line([(previous_x, previous_height), (x, previous_height)], rgbcolor="grey")
        ecdf += points((x, previous_height), rgbcolor="white", faceted=true, pointsize="20")
        ecdf += line([(x, previous_height), (x, kheight)], rgbcolor="blue", linestyle=":")
    # padding
    ecdf += line([(ecdf_pairs[0][0]-0.2, 0), (ecdf_pairs[0][0], 0)], rgbcolor="grey")
    max_index = len(ecdf_pairs) - 1
    ecdf += line([(ecdf_pairs[max_index][0], ecdf_pairs[max_index][1]), (ecdf_pairs[max_index][0]+0.2, ecdf_pairs[max_index][1])], rgbcolor="grey")
    return ecdf
```
We can see the effect of varying $\theta$ interactively:
```
@interact
def _(theta=(0.5)):
    '''Interactive function to plot the bernoulli pmf and cdf.'''
    if theta <= 1 and theta >= 0:
        outcomes = (0, 1)  # define the bernoulli outcomes
        print "Bernoulli (", RR(theta).n(digits=2), ") pmf and cdf"
        # pmf plot
        pmf_values = [bernoulliPMF(x, theta) for x in outcomes]
        pmf = pmfPlot(outcomes, pmf_values)  # this is one of our own, hidden, functions
        # cdf plot
        cdf_values = [bernoulliCDF(x, theta) for x in outcomes]
        cdf = cdfPlot(outcomes, cdf_values)  # this is one of our own, hidden, functions
        show(graphics_array([pmf, cdf]), figsize=[8,3])
    else:
        print "0 <= theta <= 1"
```
Don't worry about how these plots are done: you are not expected to be able to understand all of these details now.
Just use them to see the effect of varying $\theta$.
## Simulating a sample from the $Bernoulli(\theta)$ RV
We can simulate a sample from a $Bernoulli$ distribution by transforming input from a $Uniform(0,1)$ distribution using the floor() function in Sage. In maths, $\lfloor x \rfloor$, the 'floor of $x$' is the largest integer that is smaller than or equal to $x$. For example, $\lfloor 3.8 \rfloor = 3$.
```
z=3.8
floor(z)
```
Using floor, we can do inversion sampling from the $Bernoulli(\theta)$ RV using the $Uniform(0,1)$ random variable that we said is the fundamental model.
We will introduce inversion sampling more formally later. In general, inversion sampling means using the inverse of the CDF $F$, $F^{[-1]}$, to transform input from a $Uniform(0,1)$ distribution.
To simulate from the $Bernoulli(\theta)$, we can use the following algorithm:
### Input:
- $u \thicksim Uniform(0,1)$ from a PRNG ($\thicksim$ means "sampled from")
- $\theta$, the parameter
### Output:
$x \thicksim Bernoulli(\theta)$
### Steps:
- $u \leftarrow Uniform(0,1)$
- $x \leftarrow \lfloor u + \theta \rfloor$
- Return $x$
We can illustrate this with SageMath:
```
theta = 0.5 # theta must be such that 0 <= theta <= 1
u = random()
x = floor(u + theta)
x
```
To make a number of simulations, we can use list comprehensions again:
```
theta = 0.5
n = 20
randomUs = [random() for i in range(n)]
simulatedBs = [floor(u + theta) for u in randomUs]
simulatedBs
```
To make modular reusable code we can package up what we have done as functions.
The function `bernoulliFInverse(u, theta)` codes the inverse of the CDF of a Bernoulli distribution parameterised by `theta`. The function `bernoulliSample(n, theta)` uses `bernoulliFInverse(...)` in a list comprehension to simulate n samples from a Bernoulli distribution parameterised by theta, i.e., the distribution of our $Bernoulli(\theta)$ RV.
```
def bernoulliFInverse(u, theta):
    '''A function to evaluate the inverse CDF of a bernoulli.
    Param u is the value to evaluate the inverse CDF at.
    Param theta is the distribution parameter.
    Returns inverse CDF under theta evaluated at u'''
    return floor(u + theta)

def bernoulliSample(n, theta):
    '''A function to simulate samples from a bernoulli distribution.
    Param n is the number of samples to simulate.
    Param theta is the bernoulli distribution parameter.
    Returns a simulated Bernoulli sample as a list'''
    us = [random() for i in range(n)]
    return [bernoulliFInverse(u, theta) for u in us]  # use bernoulliFInverse in a list comprehension
```
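A plain-Python version of the same two functions is sketched below, using `math.floor` and the standard `random` module in place of the Sage built-ins; by the law of large numbers the proportion of 1s in a large sample should be close to `theta`.

```python
import math
import random

def bernoulli_f_inverse(u, theta):
    """Inverse CDF of Bernoulli(theta) evaluated at u in [0, 1)."""
    return math.floor(u + theta)

def bernoulli_sample(n, theta):
    """Simulate n draws from Bernoulli(theta) by inversion."""
    return [bernoulli_f_inverse(random.random(), theta) for _ in range(n)]

random.seed(0)  # fixed seed so the run is replicable
sample = bernoulli_sample(100000, 0.2)
proportion_of_ones = sum(sample) / len(sample)
print(proportion_of_ones)  # close to theta = 0.2
```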
Note that we are using a list comprehension and the built-in SageMath `random()` function to make a list of pseudo-random simulations from the $Uniform(0,1)$. The length of the list is determined by the value of n. Inside the body of the function we assign this list to a variable named `us` (i.e., u plural). We then use another list comprehension to make our simulated sample. This list comprehension works by calling our function `bernoulliFInverse(...)` and passing in values for theta together with each u in us in turn.
Let's try a small number of samples:
```
theta = 0.2
n = 10
samples = bernoulliSample(n, theta)
samples
```
Now lets explore the effect of interactively varying n and $\theta$:
```
@interact
def _(theta=(0.5), n=(10,(0..1000))):
    '''Interactive function to plot samples from bernoulli distribution.'''
    if theta >= 0 and theta <= 1:
        print "epmf and ecdf for ", n, " samples from Bernoulli (", theta, ")"
        samples = bernoulliSample(n, theta)
        # epmf plot
        epmf = epmfPlot(samples)  # this is one of our hidden functions
        # ecdf plot
        ecdf = ecdfPlot(samples)  # this is one of our hidden functions
        show(graphics_array([epmf, ecdf]), figsize=[8,3])
    else:
        print "0 <= theta <=1, n>0"
```
You can vary $\theta$ and $n$ on the interactive plot. You should be able to see that as $n$ increases, the empirical plots should get closer to the theoretical $f$ and $F$.
### YouTry
Check that you understand what `floor` is doing. We have put some extra print statements into our demonstration of floor so that you can see what is going on in each step. Try evaluating this cell several times so that you see what happens with different values of `u`.
```
theta = 0.5 # theta must be such that 0 <= theta <= 1
u = random()
print "u is", u
print "u + theta is", (u + theta)
print "floor(u + theta) is", floor(u + theta)
```
In the cell below we use floor to get 1's and 0's from the pseudo-random u's given by random(). It is effectively doing exactly the same thing as the functions above that we use to simulate a specified number of $Bernoulli(\theta)$ RVs, but the way that it is written may be easier to understand. If `floor` is doing what we want it to, then when `n` is sufficiently large, we'd expect our proportion of `1`s to be close to `theta` (remember Kolmogorov's axiomatic motivations for probability!). Try changing the value assigned to the variable `theta` and re-evaluating the cell to check this.
```
theta = 0.7  # theta must be such that 0 <= theta <= 1
listFloorResults = []  # an empty list to store results in
n = 100000  # how many iterations to do
for i in range(n):  # a for loop to do something n times
    u = random()  # generate u
    x = floor(u + theta)  # use floor
    listFloorResults.append(x)  # add x to the list of results
listFloorResults.count(1)*1.0/len(listFloorResults)  # proportion of 1s in the results
```
# The equi-probable $de~Moivre(\theta)$ Random Variable
The $de~Moivre(\theta_1,\theta_2,\ldots,\theta_k)$ RV is the natural generalisation of the $Bernoulli (\theta)$ RV to more than two outcomes. Take a die (i.e. one of a pair of dice): there are 6 possible outcomes from tossing a die if the die is a normal six-sided one (the outcome is which face is on top). To start with we can allow the possibility that the different faces could be loaded so that they have different probabilities of being the face on top if we throw the die. In this case, k=6 and the parameters $\theta_1$, $\theta_2$, ...$\theta_6$ specify how the die is loaded, and the number on the upper-most face if the die is tossed is a $de\,Moivre$ random variable parameterised by $\theta_1,\theta_2,\ldots,\theta_6$.
If $\theta_1=\theta_2=\ldots=\theta_6= \frac{1}{6}$ then we have a fair die.
Here are some functions for the equi-probable $de\, Moivre$ PMF and CDF where we code the possible outcomes as the numbers on the faces of a k-sided die, i.e., 1, 2, ..., k.
```
def deMoivrePMF(x, k):
    '''Probability mass function for equi-probable de Moivre(k).
    Param x is the value to evaluate the deMoivre pmf at.
    Param k is the k parameter for an equi-probable deMoivre.
    Returns the evaluation of the deMoivre(k) pmf at x.'''
    if (int(x) == x) & (x > 0) & (x <= k):
        return 1.0/k
    else:
        return 0

def deMoivreCDF(x, k):
    '''DF for equi-probable de Moivre(k).
    Param x is the value to evaluate the deMoivre cdf at.
    Param k is the k parameter for an equi-probable deMoivre.
    Returns the evaluation of the deMoivre(k) cdf at x.'''
    return 1.0*x/k

@interact
def _(k=(6)):
    '''Interactive function to plot the de Moivre pmf and cdf.'''
    if (int(k) == k) and (k >= 1):
        outcomes = range(1, k+1, 1)  # define the outcomes
        pmf_values = [deMoivrePMF(x, k) for x in outcomes]
        print "equi-probable de Moivre (", k, ") pmf and cdf"
        # pmf plot
        pmf = pmfPlot(outcomes, pmf_values)  # this is one of our hidden functions
        # cdf plot
        cdf_values = [deMoivreCDF(x, k) for x in outcomes]
        cdf = cdfPlot(outcomes, cdf_values)  # this is one of our hidden functions
        show(graphics_array([pmf, cdf]), figsize=[8,3])
    else:
        print "k must be an integer, k>0"
```
### YouTry
Try changing the value of k in the above interact.
## Simulating a sample from the equi-probable $de\,Moivre(k)$ random variable
We use floor ($\lfloor \, \rfloor$) again for simulating from the equi-probable $de \, Moivre(k)$ RV, but because we are defining our outcomes as 1, 2, ... k, we just add 1 to the result.
```
k = 6
u = random()
x = floor(u*k)+1
x
```
To simulate from the equi-probable $de\,Moivre(k)$, we can use the following algorithm:
#### Input:
- $u \thicksim Uniform(0,1)$ from a PRNG
- $k$, the parameter
#### Output:
- $x \thicksim \text{equi-probable } de \, Moivre(k)$
#### Steps:
- $u \leftarrow Uniform(0,1)$
- $x \leftarrow \lfloor uk \rfloor + 1$
- return $x$
We can illustrate this with SageMath:
```
def deMoivreFInverse(u, k):
    '''A function to evaluate the inverse CDF of an equi-probable de Moivre.
    Param u is the value to evaluate the inverse CDF at.
    Param k is the distribution parameter.
    Returns the inverse CDF for a de Moivre(k) distribution evaluated at u.'''
    return floor(k*u) + 1

def deMoivreSample(n, k):
    '''A function to simulate samples from an equi-probable de Moivre.
    Param n is the number of samples to simulate.
    Param k is the de Moivre distribution parameter.
    Returns a simulated sample of size n from an equi-probable de Moivre(k) distribution as a list.'''
    us = [random() for i in range(n)]
    return [deMoivreFInverse(u, k) for u in us]
```
A small sample:
```
deMoivreSample(15,6)
```
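The same pair of functions can be sketched in plain Python (using `math.floor` and the standard `random` module in place of the Sage built-ins), with a quick check that every simulated value lies in $\{1, \ldots, k\}$:

```python
import math
import random

def de_moivre_f_inverse(u, k):
    """Inverse CDF of the equi-probable de Moivre(k) at u in [0, 1)."""
    return math.floor(k * u) + 1

def de_moivre_sample(n, k):
    """Simulate n rolls of a fair k-sided die by inversion."""
    return [de_moivre_f_inverse(random.random(), k) for _ in range(n)]

random.seed(0)  # fixed seed so the run is replicable
rolls = de_moivre_sample(15, 6)
print(rolls)
print(all(1 <= x <= 6 for x in rolls))  # True
```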
You should understand the `deMoivreFInverse` and `deMoivreSample` functions and be able to write something like them if you were asked to.
You are not expected to be able to make the interactive plots below (but this is not too hard to do by syntactic mimicry and google searches!).
Now let's do some interactive sampling where you can vary $k$ and the sample size $n$:
```
@interact
def _(k=(6), n=(10,(0..500))):
    '''Interactive function to plot samples from equi-probable de Moivre distribution.'''
    if n > 0 and k >= 0 and int(k) == k:
        print "epmf and ecdf for ", n, " samples from equi-probable de Moivre (", k, ")"
        outcomes = range(1, k+1, 1)  # define the outcomes
        samples = deMoivreSample(n, k)  # get the samples
        epmf = epmfPlot(samples)  # this is one of our hidden functions
        ecdf = ecdfPlot(samples)  # this is one of our hidden functions
        show(graphics_array([epmf, ecdf]), figsize=[10,3])
    else:
        print "k>0 must be an integer, n>0"
```
Try changing $n$ and/or $k$. With $k = 40$ for example, you could be simulating the number on the first ball for $n$ Lotto draws.
### YouTry
A useful counterpart to the floor of a number is the ceiling, denoted $\lceil \, \rceil$. In maths, $\lceil x \rceil$, the 'ceiling of $x$' is the smallest integer that is larger than or equal to $x$. For example, $\lceil 3.8 \rceil = 4$. We can use the ceil function to do this in Sage:
```
ceil(3.8)
```
Try using `ceil` to check that you understand what it is doing. What would `ceil(0)` be?
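In plain Python the same operations live in the `math` module; a few checks, including the question just posed, can be sketched as:

```python
import math

print(math.floor(3.8))   # 3
print(math.ceil(3.8))    # 4
print(math.ceil(0))      # 0 -- the ceiling of an integer is the integer itself
print(math.floor(-3.8))  # -4 -- floor rounds towards minus infinity
```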
# Inversion Sampler for Continuous Random Variables
When we simulated from the discrete RVs above, the $Bernoulli(\theta)$ and the equi-probable $de\,Moivre(k)$, we transformed some $u \thicksim Uniform(0,1)$ into some value for the RV.
Now we will look at the formal idea of an inversion sampler for continuous random variables. Inversion sampling for continuous random variables is a way to simulate values for a continuous random variable $X$ using $u \thicksim Uniform(0,1)$.
The idea of the inversion sampler is to treat $u \thicksim Uniform(0,1)$ as some value taken by the CDF $F$ and find the value $x$ at which $F(x) = u$.
To find the $x$ where $F(x) = u$ we need to use the inverse of $F$, $F^{[-1]}$. This is why it is called an **inversion sampler**.
Formalising this,
### Proposition
Let $F(x) := \int_{- \infty}^{x} f(y) \,d y : \mathbb{R} \rightarrow [0,1]$ be a continuous DF with density $f$, and let its inverse $F^{[-1]} $ be:
$$ F^{[-1]}(u) := \inf \{ x : F(x) = u \} : [0,1] \rightarrow \mathbb{R} $$
Then, $F^{[-1]}(U)$ has the distribution function $F$, provided $U \thicksim Uniform(0,1)$ ($U$ is a $Uniform(0,1)$ RV).
Note:
The infimum of a set $A$ of real numbers, denoted by $\inf(A)$, is the greatest lower bound of $A$.
### Proof
The "one-line proof" of the proposition is due to the following equalities:
$$P(F^{[-1]}(U) \leq x) = P(\inf \{ y : F(y) = U \} \leq x ) = P(U \leq F(x)) = F(x), \quad \text{for all } x \in \mathbb{R} . $$
# Algorithm for Inversion Sampler
#### Input:
- A PRNG for $Uniform(0,1)$ samples
- A procedure to give us $F^{[-1]}(u)$, inverse of the DF of the target RV $X$ evaluated at $u$
#### Output:
- A sample $x$ from $X$ distributed according to $F$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u)$
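The algorithm above can be written once as a higher-order function that accepts any inverse CDF; this is a plain-Python sketch (the name `inversion_sample` is illustrative), checked here against the $Bernoulli(\theta)$ inverse CDF from earlier:

```python
import math
import random

def inversion_sample(n, f_inverse):
    """Inversion sampler: draw u ~ Uniform(0,1), return x = F^[-1](u),
    repeated n times."""
    return [f_inverse(random.random()) for _ in range(n)]

# Reuse the Bernoulli inverse CDF, floor(u + theta), as the target F^[-1]
theta = 0.3
random.seed(1)  # fixed seed so the run is replicable
sample = inversion_sample(10000, lambda u: math.floor(u + theta))
print(sum(sample) / len(sample))  # should be close to theta = 0.3
```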
# The $Uniform(\theta_1, \theta_2)$ RV
We have already met the $Uniform(\theta_1, \theta_2)$ RV.
Given two real parameters $\theta_1,\theta_2 \in \mathbb{R}$, such that $\theta_1 < \theta_2$, the PDF of the $Uniform(\theta_1,\theta_2)$ RV $X$ is:
$$f(x;\theta_1,\theta_2) =
\begin{cases}
\frac{1}{\theta_2 - \theta_1} & \text{if }\theta_1 \leq x \leq \theta_2\text{,}\\
0 & \text{otherwise}
\end{cases}
$$
and its DF given by $F(x;\theta_1,\theta_2) = \int_{- \infty}^x f(y; \theta_1,\theta_2) \, dy$ is:
$$
F(x; \theta_1,\theta_2) =
\begin{cases}
0 & \text{if } x < \theta_1, \\
\frac{x-\theta_1}{\theta_2-\theta_1} & \text{if } \theta_1 \leq x \leq \theta_2, \\
1 & \text{if } x > \theta_2
\end{cases}
$$
For example, here are the PDF, CDF and inverse CDF for the $Uniform(-1,1)$:
<img src="images/UniformMinus11ThreeCharts.png" width=800>
As usual, we can make some SageMath functions for the PDF and CDF:
```
# uniform pdf
def uniformPDF(x, theta1, theta2):
    '''Uniform(theta1, theta2) pdf function f(x; theta1, theta2).
    x is the value to evaluate the pdf at.
    theta1, theta2 are the distribution parameters.'''
    retvalue = 0  # default return value
    if x >= theta1 and x <= theta2:
        retvalue = 1.0/(theta2-theta1)
    return retvalue

# uniform cdf
def uniformCDF(x, theta1, theta2):
    '''Uniform(theta1, theta2) CDF or DF function F(x; theta1, theta2).
    x is the value to evaluate the cdf at.
    theta1, theta2 are the distribution parameters.'''
    retvalue = 0  # default return value
    if x > theta2:
        retvalue = 1
    elif x > theta1:  # else-if
        retvalue = (x - theta1)/(theta2 - theta1)
    # if x <= theta1, retvalue stays at the default of 0
    return retvalue
```
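A few spot checks of these functions against the formulas above, sketched as plain-Python ports (the lowercase names are just for this illustration), using the $Uniform(-1,1)$ case shown in the figure:

```python
def uniform_pdf(x, theta1, theta2):
    """Uniform(theta1, theta2) density f(x; theta1, theta2)."""
    return 1.0 / (theta2 - theta1) if theta1 <= x <= theta2 else 0.0

def uniform_cdf(x, theta1, theta2):
    """Uniform(theta1, theta2) distribution function F(x; theta1, theta2)."""
    if x > theta2:
        return 1.0
    elif x > theta1:
        return (x - theta1) / (theta2 - theta1)
    return 0.0

# Spot checks for Uniform(-1, 1)
print(uniform_pdf(0, -1, 1))   # 0.5 -- constant height 1/(theta2 - theta1)
print(uniform_cdf(-2, -1, 1))  # 0.0 -- below the support
print(uniform_cdf(0, -1, 1))   # 0.5 -- halfway through the support
print(uniform_cdf(2, -1, 1))   # 1.0 -- above the support
```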
Using these functions in an interactive plot, we can see the effect of changing the distribution parameters $\theta_1$ and $\theta_2$.
```
@interact
def InteractiveUniformPDFCDFPlots(theta1=0, theta2=1):
    if theta2 > theta1:
        print "Uniform(", RR(theta1).n(digits=2), ",", RR(theta2).n(digits=2), ") pdf and cdf"
        p1 = line([(theta1-1,0), (theta1,0)], rgbcolor='blue')
        p1 += line([(theta1,1/(theta2-theta1)), (theta2,1/(theta2-theta1))], rgbcolor='blue')
        p1 += line([(theta2,0), (theta2+1,0)], rgbcolor='blue')
        p2 = line([(theta1-1,0), (theta1,0)], rgbcolor='red')
        p2 += line([(theta1,0), (theta2,1)], rgbcolor='red')
        p2 += line([(theta2,1), (theta2+1,1)], rgbcolor='red')
        show(graphics_array([p1, p2]), figsize=[8,3])
    else:
        print "theta2 must be greater than theta1"
```
# Simulating from the $Uniform(\theta_1, \theta_2)$ RV
We can simulate from the $Uniform(\theta_1,\theta_2)$ using the inversion sampler, provided that we can get an expression for $F^{[-1]}$ that can be implemented as a procedure.
We can get this by solving for $x$ in terms of $u=F(x;\theta_1,\theta_2)$:
$$
u = \frac{x-\theta_1}{\theta_2-\theta_1} \quad \iff \quad x = (\theta_2-\theta_1)u+\theta_1 \quad \iff \quad F^{[-1]}(u;\theta_1,\theta_2) = \theta_1+(\theta_2-\theta_1)u
$$
<img src="images/Week7InverseUniformSampler.png" width=600>
## Algorithm for Inversion Sampler for the $Uniform(\theta_1, \theta_2)$ RV
#### Input:
- $u \thicksim Uniform(0,1)$
- $F^{[-1]}(u)$
- $\theta_1$, $\theta_2$
#### Output:
- A sample $x \thicksim Uniform(\theta_1, \theta_2)$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u) = (\theta_1 + u(\theta_2 - \theta_1))$
- Return $x$
We can illustrate this with SageMath by writing a function to calculate the inverse of the CDF of a uniform distribution parameterised by theta1 and theta2. Given a value between 0 and 1 for the parameter u, it returns the height of the inverse CDF at this point, i.e. the value in the range theta1 to theta2 where the CDF evaluates to u.
```
def uniformFInverse(u, theta1, theta2):
    '''A function to evaluate the inverse CDF of a uniform(theta1, theta2) distribution.
    u, 0 <= u <= 1, is the value to evaluate the inverse CDF at.
    theta1, theta2, theta2 > theta1, are the uniform distribution parameters.'''
    return theta1 + (theta2 - theta1)*u
```
This function transforms a single $u$ into a single simulated value from the $Uniform(\theta_1, \theta_2)$, for example:
```
u = random()
theta1, theta2 = 3, 6
uniformFInverse(u, theta1, theta2)
```
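A useful sanity check is that composing the CDF with its inverse returns the original $u$; this sketch uses plain-Python ports of the two functions (the lowercase names are illustrative):

```python
def uniform_f_inverse(u, theta1, theta2):
    """Inverse CDF of Uniform(theta1, theta2) at u in [0, 1]."""
    return theta1 + (theta2 - theta1) * u

def uniform_cdf(x, theta1, theta2):
    """CDF of Uniform(theta1, theta2)."""
    if x > theta2:
        return 1.0
    elif x > theta1:
        return (x - theta1) / (theta2 - theta1)
    return 0.0

theta1, theta2 = 3, 6
for u in [0.1, 0.5, 0.9]:
    x = uniform_f_inverse(u, theta1, theta2)
    round_trip = uniform_cdf(x, theta1, theta2)
    print(u, x, round_trip)  # round_trip equals u, up to floating-point error
```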
Then we can use this function inside another function to generate a number of samples:
```
def uniformSample(n, theta1, theta2):
    '''A function to simulate samples from a uniform distribution.
    n > 0 is the number of samples to simulate.
    theta1, theta2 (theta2 > theta1) are the uniform distribution parameters.'''
    us = [random() for i in range(n)]
    return [uniformFInverse(u, theta1, theta2) for u in us]
```
The basic strategy is the same as for simulating $Bernoulli$ and $de \, Moivre$ samples: we are using a list comprehension and the built-in SageMath `random()` function to make a list of pseudo-random simulations from the $Uniform(0,1)$. The length of the list is determined by the value of `n`. Inside the body of the function we assign this list to a variable named `us` (i.e., u plural). We then use another list comprehension to make our simulated sample. This list comprehension works by calling our function `uniformFInverse(...)` and passing in values for `theta1` and `theta2` together with each u in `us` in turn.
You should be able to write simple functions like `uniformFInverse` and `uniformSample` yourself.
Try this for a small sample:
```
param1 = -5
param2 = 5
nToGenerate = 30
myUniformSample = uniformSample(nToGenerate, param1, param2)
print(myUniformSample)
```
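A plain-Python version of the same experiment can check that every sample falls inside $[\theta_1, \theta_2]$ and that the sample mean approaches the midpoint of the interval (here a larger, seeded sample is used so the check is replicable):

```python
import random

def uniform_sample(n, theta1, theta2):
    """Simulate n draws from Uniform(theta1, theta2) by inversion."""
    return [theta1 + (theta2 - theta1) * random.random() for _ in range(n)]

random.seed(0)  # fixed seed so the run is replicable
sample = uniform_sample(100000, -5, 5)
print(min(sample) >= -5 and max(sample) <= 5)  # True -- all within the support
print(sum(sample) / len(sample))  # close to the midpoint, 0
```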
Much more fun, we can make an interactive plot which uses the uniformSample(...) function to generate and plot while you choose the parameters and number to generate (you are not expected to be able to make interactive plots like this):
```
@interact
def _(theta1=0, theta2=1, n=(1..5000)):
    '''Interactive function to plot samples from uniform distribution.'''
    if theta2 > theta1:
        if n == 1:
            print n, "uniform(", RR(theta1).n(digits=2), ",", RR(theta2).n(digits=2), ") sample"
        else:
            print n, "uniform(", RR(theta1).n(digits=2), ",", RR(theta2).n(digits=2), ") samples"
        sample = uniformSample(n, theta1, theta2)
        pts = zip(range(1, n+1, 1), sample)  # plot so that first sample is at x=1
        p = points(pts)
        p += text(str(theta1), (0, theta1), fontsize=10, color='black')  # add labels manually
        p += text(str(theta2), (0, theta2), fontsize=10, color='black')
        p.show(xmin=0, xmax=n+1, ymin=theta1, ymax=theta2, axes=false, gridlines=[[0,n+1],[theta1,theta2]], figsize=[7,3])
    else:
        print "Theta1 must be less than theta2"
```
We can get a better idea of the distribution of our sample using a histogram (the minimum sample size has been set to 50 here because the automatic histogram generation does not do a very good job with small samples).
```
import pylab
@interact
def _(theta1=0, theta2=1, n=(50..5000), Bins=5):
    '''Interactive function to plot samples from uniform distribution as a histogram.'''
    if theta2 > theta1:
        sample = uniformSample(n, theta1, theta2)
        pylab.clf() # clear current figure
        counts, bins, patches = pylab.hist(sample, Bins, density=true)
        pylab.ylabel('normalised count')
        pylab.title('Normalised histogram')
        pylab.savefig('myHist') # to actually display the figure
        pylab.show()
    else:
        print("theta1 must be less than theta2")
```
# The $Exponential(\lambda)$ Random Variable
For a given $\lambda$ > 0, an $Exponential(\lambda)$ Random Variable has the following PDF $f$ and DF $F$:
$$
f(x;\lambda) =\begin{cases}\lambda e^{-\lambda x} & \text{if }x \ge 0\text{,}\\ 0 & \text{otherwise}\end{cases}
$$
$$
F(x;\lambda) =\begin{cases}1 - e^{-\lambda x} & \text{if }x \ge 0\text{,}\\ 0 & \text{otherwise}\end{cases}
$$
An exponential distribution is useful because it can often be used to model inter-arrival times or other inter-event measurements (if you are familiar with the $Poisson$ distribution, a discrete distribution, you may have also met the $Exponential$ distribution as the time between $Poisson$ events). Here are some examples of random variables which are sometimes modelled with an exponential distribution:
- time between the arrival of buses at a bus stop
- distance between roadkills on a stretch of highway
In SageMath, we can use `exp(x)` to calculate $e^x$, for example:
```
x = 3.0
exp(x)
```
We can code some functions for the PDF and DF of an $Exponential$ parameterised by $\lambda$ like this.
**Note** that we should not use the name `lambda` for the parameter, because in SageMath (and Python) `lambda` is a reserved keyword with a special meaning. Do you recall lambda expressions?
```
def exponentialPDF(x, lam):
'''Exponential pdf function.
x is the value we want to evaluate the pdf at.
lam is the exponential distribution parameter.'''
return lam*exp(-lam*x)
def exponentialCDF(x, lam):
'''Exponential cdf or df function.
x is the value we want to evaluate the cdf at.
lam is the exponential distribution parameter.'''
return 1 - exp(-lam*x)
```
You should be able to write simple functions like `exponentialPDF` and `exponentialCDF` yourself, but you are not expected to be able to make the interactive plots.
You can see the shapes of the PDF and CDF for different values of $\lambda$ using the interactive plot below.
```
@interact
def _(lam=('lambda', 0.5), Xmax=(5..100)):
    '''Interactive function to plot the exponential pdf and cdf.'''
    if lam > 0:
        print("Exponential(", RR(lam).n(digits=2), ") pdf and cdf")
        from pylab import arange
        xvalues = list(arange(0.1, Xmax, 0.1))
        p1 = line(list(zip(xvalues, [exponentialPDF(y, lam) for y in xvalues])), rgbcolor='blue')
        p2 = line(list(zip(xvalues, [exponentialCDF(y, lam) for y in xvalues])), rgbcolor='red')
        show(graphics_array([p1, p2]), figsize=[8, 3])
    else:
        print("lambda must be greater than 0")
```
We are going to write some functions to help us to do inversion sampling from the $Exponential(\lambda)$ RV.
As before, we need an expression for $F^{[-1]}$ that can be implemented as a procedure.
We can get this by solving for $x$ in terms of $u=F(x;\lambda)$
### YouTry later
Show that
$$
F^{[-1]}(u;\lambda) =\frac{-1}{\lambda} \ln(1-u)
$$
where $\ln = \log_e$ is the natural logarithm.
(end of You try)
---
---
# Simulating from the $Exponential(\lambda)$ RV
Algorithm for Inversion Sampler for the $Exponential(\lambda)$ RV
#### Input:
- $u \thicksim Uniform(0,1)$
- $F^{[-1]}(u)$
- $\lambda$
### Output:
- sample $x \thicksim Exponential(\lambda)$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u) = \frac{-1}{\lambda}\ln(1-u)$
- Return $x$
The function `exponentialFInverse(u, lam)` codes the inverse of the CDF of an exponential distribution parameterised by `lam`. Given a value between 0 and 1 for the parameter `u`, it returns the height of the inverse CDF of the exponential distribution at this point, i.e. the value where the CDF evaluates to `u`. The function `exponentialSample(n, lam)` uses `exponentialFInverse(...)` to simulate `n` samples from an exponential distribution parameterised by `lam`.
```
def exponentialFInverse(u, lam):
'''A function to evaluate the inverse CDF of a exponential distribution.
u is the value to evaluate the inverse CDF at.
lam is the exponential distribution parameter.'''
# log without a base is the natural logarithm
return (-1.0/lam)*log(1 - u)
def exponentialSample(n, lam):
'''A function to simulate samples from an exponential distribution.
n is the number of samples to simulate.
lam is the exponential distribution parameter.'''
us = [random() for i in range(n)]
return [exponentialFInverse(u, lam) for u in us]
```
We can have a look at a small sample:
```
lam = 0.5
nToGenerate = 30
sample = exponentialSample(nToGenerate, lam)
print(sorted(sample)) # recall that sorted makes a new sorted list
```
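A slightly larger sanity check, in plain Python (standard `random` and `math` modules standing in for the Sage built-ins; the snake_case names are our own): the expectation of an $Exponential(\lambda)$ RV is $\frac{1}{\lambda}$, so with $\lambda = 0.5$ the sample mean of a large inversion-sampled batch should be close to 2.

```python
import math
import random

def exponential_f_inverse(u, lam):
    # inverse CDF of the Exponential(lam) distribution
    return (-1.0 / lam) * math.log(1 - u)

random.seed(1)  # fixed seed so the check is repeatable
lam = 0.5
big_sample = [exponential_f_inverse(random.random(), lam) for _ in range(100000)]
print(sum(big_sample) / len(big_sample))  # should be close to 1/lam = 2
```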
You should be able to write simple functions like `exponentialFInverse` and `exponentialSample` yourself by now.
The best way to visualise the results is to use a histogram. With this interactive plot you can explore the effect of varying lambda and n:
```
import pylab
@interact
def _(lam=('lambda', 0.5), n=(50,(10..10000)), Bins=(5,(1,1000))):
    '''Interactive function to plot samples from exponential distribution.'''
    if lam > 0:
        pylab.clf() # clear current figure
        counts, bins, patches = pylab.hist(exponentialSample(n, lam), Bins, density=true)
        pylab.ylabel('normalised count')
        pylab.title('Normalised histogram')
        pylab.savefig('myHist') # to actually display the figure
        pylab.show()
    else:
        print("lambda must be greater than 0")
```
# The Standard $Cauchy$ Random Variable
A standard $Cauchy$ Random Variable has the following PDF $f$ and DF $F$:
$$
f(x) =\frac{1}{\pi(1+x^2)}\text{,}\,\, -\infty < x < \infty
$$
$$
F(x) = \frac{1}{\pi}\tan^{-1}(x) + 0.5
$$
The $Cauchy$ distribution is an interesting distribution because the expectation does not exist:
$$
\int \left|x\right|\,dF(x) = \frac{2}{\pi} \int_0^{\infty} \frac{x}{1+x^2}\,dx = \frac{2}{\pi}\left( \left[x \tan^{-1}(x) \right]_0^{\infty} - \int_0^{\infty} \tan^{-1}(x)\, dx \right) = \infty \ .
$$
In SageMath, we can use the `arctan` function for $\tan^{-1}$ and `pi` for $\pi$, and code some functions for the PDF and DF of the standard $Cauchy$ as follows.
```
def cauchyPDF(x):
'''Standard Cauchy pdf function.
x is the value to evaluate the pdf at.'''
return 1.0/(pi.n()*(1+x^2))
def cauchyCDF(x):
'''Standard Cauchy cdf function.
x is the value to evaluate the cdf at.'''
return (1.0/pi.n())*arctan(x) + 0.5
```
You can see the shapes of the PDF and CDF using the plot below. Note from above that the PDF $f$ is defined for $-\infty < x < \infty$. This means we should set some arbitrary limits on the minimum and maximum values to use for the x-axis on the plots. You can change these limits interactively.
```
@interact
def _(lower=(-4), upper=(4)):
    '''Interactive function to plot the Cauchy pdf and cdf.'''
    if lower < upper:
        print("Standard Cauchy pdf and cdf")
        p1 = plot(cauchyPDF, lower, upper, rgbcolor='blue')
        p2 = plot(cauchyCDF, lower, upper, rgbcolor='red')
        show(graphics_array([p1, p2]), figsize=[8, 3])
    else:
        print("upper must be greater than lower")
```
#### Constructing a standard $Cauchy$ RV
- Place a double light sabre (i.e., one that can shoot its laser beam from both ends, like that of Darth Maul in Star Wars) on a Cartesian plane so that it is centred on the point $(1, 0)$.
- Randomly spin it (so that its spin angle to the x-axis is $\theta \thicksim Uniform (0, 2\pi)$).
- Let it come to rest.
- The y-coordinate of the point of intersection with the y-axis is a standard Cauchy RV.
You can see that we are equally likely to get positive and negative values (the density function of the standard $Cauchy$ RV is symmetrical about 0) and whenever the spin angle is close to $\frac{\pi}{2}$ ($90^{\circ}$) or $\frac{3\pi}{2}$ ($270^{\circ}$), the intersections will be a long way up or down the y-axis, i.e. very positive or very negative values. If the light sabre comes to rest exactly parallel to the y-axis there will be no intersection: a $Cauchy$ RV $X$ can take values $-\infty < x < \infty$.
<img src="images/Week7CauchyLightSabre.png" width=300>
## Simulating from the standard $Cauchy$
We can perform inversion sampling on the $Cauchy$ RV by transforming a $Uniform(0,1)$ random variable into a $Cauchy$ random variable using the inverse CDF.
We can get this by replacing $F(x)$ by $u$ in the expression for $F(x)$:
$$
\frac{1}{\pi}\tan^{-1}(x) + 0.5 = u
$$
and solving for $x$:
$$
\begin{array}{lcl} \frac{1}{\pi}\tan^{-1}(x) + 0.5 = u & \iff & \frac{1}{\pi} \tan^{-1}(x) = u - \frac{1}{2}\\ & \iff & \tan^{-1}(x) = \left(u - \frac{1}{2}\right)\pi\\ & \iff & \tan(\tan^{-1}(x)) = \tan\left(\left(u - \frac{1}{2}\right)\pi\right)\\ & \iff & x = \tan\left(\left(u - \frac{1}{2}\right)\pi\right) \end{array}
$$
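Before relying on this expression, we can check the algebra numerically by round-tripping a few values of $u$ through $F^{[-1]}$ and then $F$ (plain Python `math` module; the snake_case names are our own throwaway versions of the Sage functions):

```python
import math

def cauchy_cdf(x):
    # F(x) = (1/pi) * arctan(x) + 0.5
    return (1.0 / math.pi) * math.atan(x) + 0.5

def cauchy_f_inverse(u):
    # F^[-1](u) = tan((u - 1/2) * pi)
    return math.tan(math.pi * (u - 0.5))

for u in [0.01, 0.25, 0.5, 0.75, 0.99]:
    print(u, cauchy_cdf(cauchy_f_inverse(u)))  # second value should recover the first
```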
## Inversion Sampler for the standard $Cauchy$ RV
#### Input:
- $u \thicksim Uniform(0,1)$
- $F^{[-1]}(u)$
#### Output:
- A sample $x \thicksim \text{standard } Cauchy$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u) = \tan\left(\left(u - \frac{1}{2}\right)\pi\right)$
- Return $x$
The function `cauchyFInverse(u)` codes the inverse of the CDF of the standard $Cauchy$ distribution. Given a value between 0 and 1 for the parameter `u`, it returns the height of the inverse CDF of the standard $Cauchy$ at this point, i.e. the value where the CDF evaluates to `u`. The function `cauchySample(n)` uses `cauchyFInverse(...)` to simulate `n` samples from a standard $Cauchy$ distribution.
```
def cauchyFInverse(u):
'''A function to evaluate the inverse CDF of a standard Cauchy distribution.
u is the value to evaluate the inverse CDF at.'''
return RR(tan(pi*(u-0.5)))
def cauchySample(n):
'''A function to simulate samples from a standard Cauchy distribution.
n is the number of samples to simulate.'''
us = [random() for i in range(n)]
return [cauchyFInverse(u) for u in us]
```
And we can visualise these simulated samples with an interactive plot:
```
@interact
def _(n=(50,(0..5000))):
    '''Interactive function to plot samples from standard Cauchy distribution.'''
    if n == 1:
        print(n, "Standard Cauchy sample")
    else:
        print(n, "Standard Cauchy samples")
    sample = cauchySample(n)
    pts = list(zip(range(1, n+1, 1), sample))
    p = points(pts)
    p += text(str(floor(min(sample))), (0, floor(min(sample))), fontsize=10, color='black') # add labels manually
    p += text(str(ceil(max(sample))), (0, ceil(max(sample))), fontsize=10, color='black')
    p.show(xmin=0, xmax=n+1, ymin=floor(min(sample)), ymax=ceil(max(sample)), axes=false, gridlines=[[0, n+1], [floor(min(sample)), ceil(max(sample))]], figsize=[7, 3])
```
Notice how we can get some very extreme values. This is because of the 'thick tails' of the density function of the $Cauchy$ RV. Think about this in relation to the double light sabre visualisation. We can see the effect of the extreme values with a histogram visualisation as well. The interactive plot below will only use values between `lower` and `upper` in the histogram. Try increasing the sample size to something like 1000 and then gradually widening the limits:
```
import pylab
@interact
def _(n=(50,(0..5000)), lower=(-4), upper=(4), Bins=(5,(1,100))):
    '''Interactive function to plot samples from a standard Cauchy distribution.'''
    if lower < upper:
        if n == 1:
            print(n, "Standard Cauchy sample")
        else:
            print(n, "Standard Cauchy samples")
        sample = cauchySample(n) # the whole sample
        sampleToShow = [c for c in sample if (c >= lower and c <= upper)]
        pylab.clf() # clear current figure
        counts, bins, patches = pylab.hist(sampleToShow, Bins, density=true)
        pylab.ylabel('normalised count')
        pylab.title('Normalised histogram, values between ' + str(floor(lower)) + ' and ' + str(ceil(upper)))
        pylab.savefig('myHist') # to actually display the figure
        pylab.show()
    else:
        print("lower must be less than upper")
```
# Running means
When we introduced the $Cauchy$ distribution, we noted that the expectation of the $Cauchy$ RV does not exist. This means that attempts to estimate the mean of a $Cauchy$ RV by looking at a sample mean will not be successful: as you take larger and larger samples, the effect of the extreme values will still cause the sample mean to swing around wildly (we will cover estimation properly soon). You are going to investigate the sample mean of simulated $Cauchy$ samples of steadily increasing size and show how unstable this is. A convenient way of doing this is to look at a running mean. We will start by working through the process of calculating some running means for the $Uniform(0,10)$, which do stabilise. You will then do the same thing for the $Cauchy$ and be able to see the instability.
We will be using the `pylab.cumsum` function, so we make sure that we have it available. We then generate a sample from the $Uniform(0,10)$:
```
from pylab import cumsum
nToGenerate = 10 # sample size to generate
theta1, theta2 = 0, 10 # uniform parameters
uSample = uniformSample(nToGenerate, theta1, theta2)
print(uSample)
```
We are going to treat this sample as though it is actually 10 samples of increasing size:
- sample 1 is the first element in uSample
- sample 2 contains the first 2 elements in uSample
- sample 3 contains the first 3 elements in uSample
- ...
- sample 10 contains the first 10 elements in uSample
We know that a sample mean is the sum of the elements in the sample divided by the number of elements in the sample $n$:
$$
\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i
$$
We can get the sum of the elements in each of our 10 samples with the cumulative sum of `uSample`.
We use `cumsum` to get the cumulative sum. This will be a `pylab.array` (i.e., `numpy.array`) type, so we use the `list` function to turn it back into a list:
```
csUSample = list(cumsum(uSample))
print(csUSample)
```
What we have now is effectively a list
$$\left[\displaystyle\sum_{i=1}^1x_i, \sum_{i=1}^2x_i, \sum_{i=1}^3x_i, \ldots, \sum_{i=1}^{10}x_i\right]$$
So all we have to do is divide each element in `csUSample` by the number of elements that were summed to make it, and we have a list of running means
$$\left[\frac{1}{1}\displaystyle\sum_{i=1}^1x_i, \frac{1}{2}\sum_{i=1}^2x_i, \frac{1}{3}\sum_{i=1}^3x_i, \ldots, \frac{1}{10}\sum_{i=1}^{10}x_i\right]$$
We can get the running sample sizes using the `range` function:
```
samplesizes = range(1, len(uSample)+1,1)
samplesizes
```
And we can do the division with list comprehension:
```
runningMeans = [csUSample[i]/samplesizes[i] for i in range(nToGenerate)] # divide each cumulative sum by the matching sample size
print(runningMeans)
```
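The cumulative-sum trick is easy to check by hand on a tiny list (pure Python, no `pylab` needed):

```python
# sample [2.0, 4.0, 6.0]: cumulative sums are [2.0, 6.0, 12.0],
# so the running means are [2/1, 6/2, 12/3] = [2.0, 3.0, 4.0]
tiny = [2.0, 4.0, 6.0]
cs = []
total = 0.0
for x in tiny:
    total += x
    cs.append(total)        # build the cumulative sums
sizes = range(1, len(tiny) + 1)
running = [c / s for c, s in zip(cs, sizes)]
print(running)  # [2.0, 3.0, 4.0]
```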
We can pull all of this together into a function which produces a list of running means for sample sizes 1 to $n$.
```
def uniformRunningMeans(n, theta1, theta2):
'''Function to give a list of n running means from uniform(theta1, theta2).
n is the number of running means to generate.
theta1, theta2 are the uniform distribution parameters.
return a list of n running means.'''
sample = uniformSample(n, theta1, theta2)
from pylab import cumsum # we can import in the middle of code!
csSample = list(cumsum(sample))
samplesizes = range(1, n+1,1)
return [csSample[i]/samplesizes[i] for i in range(n)]
```
Have a look at the running means of 10 incrementally-sized samples:
```
nToGenerate = 10
theta1, theta2 = 0, 10
uRunningMeans = uniformRunningMeans(nToGenerate, theta1, theta2)
pts = zip(range(1, len(uRunningMeans)+1,1),uRunningMeans)
p = points(pts)
show(p, figsize=[5,3])
```
Recall that the expectation of $X \thicksim Uniform(\theta_1, \theta_2)$ is $E_{(\theta_1, \theta_2)}(X) = \frac{\theta_1 +\theta_2}{2}$.
In our simulations we are using $\theta_1 = 0$, $\theta_2 = 10$, so if $X \thicksim Uniform(0,10)$, then $E(X) = 5$.
To show that the running means of different simulations from a $Uniform$ distribution settle down to be close to the expectation, we can plot say 5 different groups of running means for sample sizes $1, \ldots, 1000$. We will use a line plot rather than plotting individual points.
```
nToGenerate = 1000
theta1, theta2 = 0, 10
iterations = 5
xvalues = range(1, nToGenerate+1,1)
for i in range(iterations):
redshade = 0.5*(iterations - 1 - i)/iterations # to get different colours for the lines
uRunningMeans = uniformRunningMeans(nToGenerate, theta1, theta2)
pts = zip(xvalues,uRunningMeans)
if (i == 0):
p = line(pts, rgbcolor = (redshade,0,1))
else:
p += line(pts, rgbcolor = (redshade,0,1))
show(p, figsize=[5,3])
```
### YouTry!
Your task is to now do the same thing for some standard Cauchy running means.
To start with, do not put everything into a function; just put statements into the cell(s) below to:
- Make a variable for the number of running means to generate; assign it a small value like 10 at this stage
- Use the `cauchySample` function to generate the sample from the standard $Cauchy$; have a look at your sample
- Make a named list of cumulative sums of your $Cauchy$ sample using `list` and `cumsum`, as we did above; have a look at your cumulative sums
- Make a named list of sample sizes, as we did above
- Use a list comprehension to turn the cumulative sums and sample sizes into a list of running means, as we did above
- Have a look at your running means; do they make sense to you given the individual sample values?
Add more cells as you need them.
When you are happy that you are doing the right things, **write a function**, parameterised by the number of running means to do, that returns a list of running means. Try to make your own function rather than copying and changing the one we used for the $Uniform$: you will learn more by trying to do it yourself. Please call your function `cauchyRunningMeans`, so that (if you have done everything else right), you'll be able to use some code we will supply you with to plot the results.
Try checking your function by using it to create a small list of running means. Check that the function does not report an error and gives you the kind of list you expect.
When you think that your function is working correctly, try evaluating the cell below: this will put the plot of 5 groups of $Uniform(0,10)$ running means beside a plot of 5 groups of standard $Cauchy$ running means produced by your function (as usual, you are not expected to be able to produce plots like this one).
```
nToGenerate = 10000
theta1, theta2 = 0, 10
iterations = 5
xvalues = range(1, nToGenerate+1,1)
for i in range(iterations):
shade = 0.5*(iterations - 1 - i)/iterations # to get different colours for the lines
uRunningMeans = uniformRunningMeans(nToGenerate, theta1, theta2)
problemStr="" # an empty string
    # use try to catch problems with the cauchyRunningMeans function
    try:
        cRunningMeans = cauchyRunningMeans(nToGenerate)
        ##cRunningMeans = hiddenCauchyRunningMeans(nToGenerate)
        cPts = list(zip(xvalues, cRunningMeans))
    except NameError:
        # cauchyRunningMeans is not defined
        cRunningMeans = [1 for c in range(nToGenerate)] # default value
        problemStr = "No "
    except Exception:
        # some other problem with cauchyRunningMeans
        cRunningMeans = [1 for c in range(nToGenerate)]
        problemStr = "Problem with "
uPts = zip(xvalues, uRunningMeans)
cPts = zip(xvalues, cRunningMeans)
if (i < 1):
p1 = line(uPts, rgbcolor = (shade, 0, 1))
p2 = line(cPts, rgbcolor = (1-shade, 0, shade))
cauchyTitleMax = max(cRunningMeans) # for placement of cauchy title
else:
p1 += line(uPts, rgbcolor = (shade, 0, 1))
p2 += line(cPts, rgbcolor = (1-shade, 0, shade))
if max(cRunningMeans) > cauchyTitleMax:
cauchyTitleMax = max(cRunningMeans)
titleText1 = "Uniform(" + str(theta1) + "," + str(theta2) + ") running means" # make title text
t1 = text(titleText1, (nToGenerate/2,theta2), rgbcolor='blue',fontsize=10)
titleText2 = problemStr + "standard Cauchy running means" # make title text
t2 = text(titleText2, (nToGenerate/2,ceil(cauchyTitleMax)+1), rgbcolor='red',fontsize=10)
show(graphics_array((p1+t1,p2+t2)),figsize=[10,5])
```
# Replicable samples
Remember that we know how to set the seed of the PRNG used by `random()` with `set_random_seed`? If we wanted our sampling functions to give repeatable samples, we could also pass the functions the seed to use. Try making a new version of `uniformSample` which has a parameter for a value to use as the random number generator seed. Call your new version `uniformSampleSeeded` to distinguish it from the original one.
Try out your new `uniformSampleSeeded` function: if you generate two samples using the same seed they should be exactly the same. You could try using a large sample and checking on sample statistics such as the mean, min, max, variance etc, rather than comparing small samples by eye.
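If you want something to check your approach against, here is one possible sketch in plain Python (`random.seed` stands in for Sage's `set_random_seed`, and the snake_case name `uniform_sample_seeded` is our own):

```python
import random

def uniform_sample_seeded(n, theta1, theta2, seed):
    '''Simulate n Uniform(theta1, theta2) samples, seeding the PRNG first.'''
    random.seed(seed)  # stand-in for Sage's set_random_seed(seed)
    us = [random.random() for _ in range(n)]
    return [theta1 + u * (theta2 - theta1) for u in us]

a = uniform_sample_seeded(5, 0, 10, seed=123)
b = uniform_sample_seeded(5, 0, 10, seed=123)
print(a == b)  # True: the same seed reproduces the same sample
```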
Recall that you can also give parameters default values in SageMath. Using a default value means that if no value is passed to the function for that parameter, the default value is used. Here is an example with a very simple function:
```
def simpleDefaultExample(x, y=0):
'''A simple function to demonstrate default parameter values.
x is the first parameter, with no default value.
y is the second parameter, defaulting to 0.'''
return x + y
```
Note that parameters with default values need to come after parameters without default values when we define the function.
Now you can try the function - evaluate the following cells to see what you get:
```
simpleDefaultExample (1,3) # specifying two arguments for the function
simpleDefaultExample (1) # specifying one argument for the function
# another way to specify one argument for the function
simpleDefaultExample (x=6)
# this will give an error because x has no default value
simpleDefaultExample()
# this will also give an error because x has no default value
simpleDefaultExample (y=9)
```
Try making yet another version of the uniform sampler which takes a value to be used as a random number generator seed, but defaults to `None` if no value is supplied for that parameter. `None` is a special Python value (its type is `NoneType`).
```
x = None
type(x)
```
Using `set_random_seed(None)` will mean that the random seed is actually reset to a new ('random') value. You can see this by testing what happens when you do this twice in succession and then check what seed is being used with `initial_seed`:
```
set_random_seed(None)
initial_seed()
set_random_seed(None)
initial_seed()
```
Do another version of the `uniformSampleSeeded` function with a default value for the seed of `None`.
Check your function again by testing it both when you supply a value for the seed and when you don't.
### YouTry
### A Simple Simulation
We could use the samplers we have made to do a very simple simulation. Suppose the inter-arrival times, in minutes, of Orbiter buses at an Orbiter stop follow an $Exponential(\lambda = 0.1)$ distribution. Also suppose that this is quite a popular bus stop and the arrival of people is very predictable: one new person will arrive in each whole minute. This means that the longer a bus takes in coming, the more people arrive to join the queue. Also suppose that the number of free seats available on any bus follows a $de\, Moivre(k=40)$ distribution, i.e., there are equally likely to be 1, or 2, or 3, ... or 40 spare seats. If there are more spare seats than people in the queue, everyone can get onto the bus and nobody is left waiting, but if there are not enough spare seats some people will be left waiting for the next bus. As they wait, more people arrive to join the queue....
This is not very realistic - we would want a better model for how many people arrive at the stop at least, and for the number of spare seats there will be on the bus. However, we are just using this as a simple example that you can do using the RVs you know how to simulate samples from.
Try to code this example yourself, using our suggested steps. We have put our version of the code into a cell below, but you will get more out of this example by trying to do it yourself first.
#### Suggested steps:
- Get a list of 100 $Exponential(\lambda = 0.1)$ samples using the `exponentialSample` function. Assign the list to a variable named something like `busTimes`. These are your 100 simulated bus inter-arrival times.
- Choose a value for the number of people who will be waiting at the bus stop when you start the simulation. Call this something like `waiting`.
- Make a list called something like `leftWaiting`, which to begin with contains just the value assigned to `waiting`.
- Make an empty list called something like `boardBus`.
- Start a for loop which takes each element in `busTimes` in turn, i.e. each bus inter-arrival time, and within the for loop:
- Calculate the number of people arriving at the stop as the floor of the time taken for that bus to arrive (i.e., one person for each whole minute until the bus arrives)
- Add this to the number of people waiting (e.g., if the number of arrivals is assigned to a variable `arrivals`, then `waiting = waiting + arrivals` will increment the value assigned to the `waiting` variable by the value of `arrivals`).
- Simulate a value for the number of seats available on the bus as one simulation from a $de \, Moivre(k=40)$ RV (it may be easier to use `deMoivreFInverse` rather than `deMoivreSample` because you only need one value - remember that you will have to pass a simulated $u \thicksim Uniform(0,1)$ to `deMoivreFInverse` as well as the value of the parameter $k$).
- The number of people who can get on the bus is the minimum of the number of people waiting in the queue and the number of seats on the bus. Calculate this value and assign it to a variable called something like `getOnBus`.
- Append `getOnBus` to the list `boardBus`.
- Subtract `getOnBus` from the number of people waiting (e.g., `waiting = waiting - getOnBus` will decrement `waiting` by the number of people who get on the bus).
- Append the new value of `waiting` to the list `leftWaiting`.
- That is the end of the for loop: you now have two lists, one for the number of people waiting at the stop and one for the number of people who can board each bus as it arrives.
### YouTry!
Here is our code to do the bus stop simulation.
Yours may be different - maybe it will be better!
```
buses = 100
lam = 0.1
busTimes = exponentialSample(buses,lam)
waiting = 0 # how many people are waiting at the start of the simulation
boardBus = [] # empty list
leftWaiting = [waiting] # list with just waiting in it
for time in busTimes: # for each bus inter-arrival time
arrivals = floor(time) # people who arrive at the stop before the bus gets there
waiting = waiting + arrivals # add them to the queue
busSeats = deMoivreFInverse(random(), 40) # how many seats available on the bus
getOnBus = min(waiting, busSeats) # how many people can get on the bus
boardBus.append(getOnBus) # add to the list
waiting = waiting - getOnBus # take the people who board the bus out of the queue
leftWaiting.append(waiting) # add to the list
print(leftWaiting) # look at the leftWaiting list
```
We could visualise this, showing the number of people able to board each bus and the number left waiting via the heights of lines on the plot.
```
p1 = line([(0.5,0),(0.5,leftWaiting[0])])
from pylab import cumsum
csBusTimes=list(cumsum(busTimes))
for i in range(1, len(leftWaiting), 1):
p1+= line([(csBusTimes[i-1],0),(csBusTimes[i-1],boardBus[i-1])], rgbcolor='green')
p1+= line([(csBusTimes[i-1]+.01,0),(csBusTimes[i-1]+.01,leftWaiting[i])], rgbcolor='red')
t1 = text("Boarding the bus", (csBusTimes[len(busTimes)-1]/3,max(max(boardBus),max(leftWaiting))+1), rgbcolor='green',fontsize=10)
t2 = text("Waiting", (csBusTimes[len(busTimes)-1]*(2/3),max(max(boardBus),max(leftWaiting))+1), rgbcolor='red',fontsize=10)
xaxislabel = text("Time", (csBusTimes[len(busTimes)-1],-10),fontsize=10,color='black')
yaxislabel = text("People", (-50,max(max(boardBus),max(leftWaiting))+1),fontsize=10,color='black')
show(p1+t1+t2+xaxislabel+yaxislabel,figsize=[8,5])
```
You could try the effect on your simulation of changing the $Exponential$ parameter $\lambda$, or some of the other factors involved.
#### Solution for `cauchyRunningMeans`
```
def hiddenCauchyRunningMeans(n):
'''Function to give a list of n running means from standardCauchy.
n is the number of running means to generate.'''
sample = cauchySample(n)
from pylab import cumsum
csSample = list(cumsum(sample))
samplesizes = range(1, n+1,1)
return [csSample[i]/samplesizes[i] for i in range(n)]
```
## Functions for TAA Component Preparation
This notebook complements the executable notebook on the same topic.
Functions are presented in an order closely following the executable notebook's steps:
#### Libraries
```
import os
import re
import numpy as np # basic numeric calculation
import pandas as pd # split-apply-combine operations on dataframe
import fiona
import geopandas as gpd
import rasterio
import rasterstats
import matplotlib as mpl
import matplotlib.pyplot as plt # plotting tool
from matplotlib.patches import RegularPolygon # drawing hexagons
import shapely # to attribute geometric properties for shapes
from shapely.geometry import Polygon, Point
```
#### 1. General functions
```
def get_agg_cols(data, from_position):
return list(data.columns)[from_position:]
def snake2camel(name):
return re.sub(r'(?:^|_)([a-z])', lambda x: x.group(1).upper(), name)
def camel2spaced(name):
return re.sub('([a-z])([A-Z])', '\\g<1> \\g<2>', name)
def read_geojson(path: str, file_name: str):
"""
A function to read a geojson file
Arguments:
path - address to directory to read from
file_name - name of file to read
Returns: GeoDataFrame as read from geojson file
"""
file = open(os.path.join(path, file_name))
df = gpd.read_file(file)
df = df.set_crs(epsg=4326)
return df
def read_geojson_(path: str, file_name: str):
"""
A function to read a geojson file
Arguments:
path - address to directory to read from
file_name - name of file to read
Returns: GeoDataFrame as read from geojson file
"""
file = open(os.path.join(path, file_name),encoding="utf-8")
df = gpd.read_file(file)
df = df.set_crs(epsg=4326)
return df
def _to_2d(x, y, z):
    # drop the z coordinate, e.g. for use with shapely.ops.transform
    # (the former tuple(filter(None, [x, y])) version also dropped zero-valued coordinates)
    return (x, y)
def construct_wmean_func(carto_data_name, cred_container, w_col):
"""
A function to create a weighted mean function object when given input specs.
Arguments:
carto_data_name {string} -- A string containing the name of a carto dataset (for live read)
cred_container {string} -- A string containing the file path and name of a json on carto credentials (for live read)
w_col {string} -- A string containing the name of a column to use for weighing
    Returns: function -- a weighted-mean function (named "wmean") suitable for use in aggregation
"""
carto_data = ingest_carto_data(carto_data_name, cred_container)
wmean = lambda x: np.average(x, weights=carto_data.loc[x.index, w_col])
wmean.__name__ = "wmean"
return wmean
def construct_wmean_func_from_set(data, w_col):
"""
A function to create a weighted mean function object when given input specs.
Arguments:
data {gpd.DataFrame} -- A dataframe to index on for custom wmean function
w_col {string} -- A string containing the name of a column to use for weighing
    Returns: function -- a weighted-mean function (named "wmean") suitable for use in aggregation
"""
wmean = lambda x: np.average(x, weights=data.loc[x.index, w_col])
wmean.__name__ = "wmean"
return wmean
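# Toy check of the weighted-mean pattern used by construct_wmean_func_from_set
# above; the constructor's lambda is restated inline so this snippet is
# self-contained. The frame and the column names ("region", "value", "pop")
# are illustrative only.
import numpy as np
import pandas as pd

_toy = pd.DataFrame({"region": ["a", "a", "b"],
                     "value": [1.0, 3.0, 5.0],
                     "pop": [1.0, 3.0, 2.0]})
_toy_wmean = lambda x: np.average(x, weights=_toy.loc[x.index, "pop"])
_toy_wmean.__name__ = "wmean"
_toy_means = _toy.groupby("region")["value"].agg(_toy_wmean)
# group "a": (1*1 + 3*3) / (1 + 3) = 2.5 ; group "b": 5.0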
def derive_outlets_by_chan(cmd, geo_data, id_var="CUSTOMER_ID"):
"""
A function to engineer features for the number of outlets - by type for geographical grouping (hexagons, or trade areas).
Arguments:
    cmd {SparkDataFrame} -- customer master data with channel and location columns
    geo_data {GeoDataFrame} -- geographical grouping polygons (hexagons or trade areas) with a hex_id column
id_var {str} -- Name of customer id variable
Returns: Pandas DataFrame with columns for the number of outlet types within each hexagon/trade area.
"""
# 1) Define channel subcategories
outlets_by_chan = cmd. \
select(*[id_var,"CHANNEL_CUSTOM", "LATITUDE", "LONGITUDE"]). \
toPandas()
outlets_by_chan['CHANNEL_COUNTS'] = outlets_by_chan['CHANNEL_CUSTOM']
outlets = outlets_by_chan["CHANNEL_COUNTS"].unique().tolist()
outlets_by_chan = outlets_by_chan.set_index([id_var, "LATITUDE", "LONGITUDE"])
outlets_by_chan = pd.get_dummies(outlets_by_chan['CHANNEL_COUNTS']).reset_index()
outlets_by_chan = gpd.GeoDataFrame(outlets_by_chan, geometry=gpd.points_from_xy(outlets_by_chan["LONGITUDE"], outlets_by_chan["LATITUDE"]), crs="epsg:4326")
# 2) Aggregate to hexagon level
# outlets_by_chan_ready_set = pd.merge(outlets_by_chan, geo_data, on=id_var, how="left")
outlets_by_chan_ready_set = gpd.overlay(outlets_by_chan, geo_data, how="intersection")
outlets_by_chan_ready_set = outlets_by_chan_ready_set. \
groupby("hex_id"). \
agg(dict(zip(outlets, ["sum"]*len(outlets)))). \
reset_index()
return outlets_by_chan_ready_set
```
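In `derive_outlets_by_chan`, summing one-hot channel dummies per hexagon is equivalent to counting outlets by channel. A pandas-free sketch of that equivalence (the channel labels are made up for illustration):

```python
from collections import Counter

# Channel labels of outlets falling inside one hexagon (hypothetical values)
channels = ["grocery", "kiosk", "grocery", "cafe"]

# pd.get_dummies(...).groupby(...).sum() yields the same counts as a Counter
counts = Counter(channels)
print(counts["grocery"], counts["kiosk"], counts["cafe"])  # 2 1 1
```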
#### 2. Functions on preparing & overlaying a hexagonal grid over a territory
```
def haversine_custom(coord1, coord2):
"""
A function to determine the great-circle distance between 2 points on Earth given their longitudes and latitudes
Arguments:
coord1 - (lon, lat) tuple for the first point
coord2 - (lon, lat) tuple for the second point
Returns: Distance in meters
"""
# Coordinates in decimal degrees (e.g. 43.60, -79.49)
lon1, lat1 = coord1
lon2, lat2 = coord2
# Radius of Earth in meters
R = 6371000
phi_1 = np.radians(lat1)
phi_2 = np.radians(lat2)
delta_phi = np.radians(lat2 - lat1)
delta_lambda = np.radians(lon2 - lon1)
a = np.sin(delta_phi / 2.0) ** 2 + np.cos(phi_1) * np.cos(phi_2) * np.sin(delta_lambda / 2.0) ** 2
c = 2 * np.arctan2(np.sqrt(a),np.sqrt(1 - a))
# Output distance in meters
meters = R * c
# Output distance in kilometers
km = meters / 1000.0
meters = round(meters)
km = round(km, 3)
#print(f"Distance: {meters} m")
#print(f"Distance: {km} km")
return meters
def form_hex_grid(territory, hex_diameter: int):
"""
A function to form a hexagonal grid
Arguments:
territory - GeoDataFrame for a territory
hex_diameter - integer, diameter size to use in defining hexagon dimensions
Returns: GeoDataFrame with geometry for hexagonal grid formed for a territory provided
"""
# 1) Define general hex parameters
# pad the territory bounds by 1% (note: with negative coordinates this shifts the box rather than padding it symmetrically)
xmin, ymin, xmax, ymax = [x * 1.01 for x in territory.total_bounds]
EW = haversine_custom((xmin,ymin),(xmax,ymin))
NS = haversine_custom((xmin,ymin),(xmin,ymax))
# diameter of each hexagon in the grid
d = hex_diameter
# horizontal width of hexagon = w = d* sin(60)
w = d*np.sin(np.pi/3)
# Approximate number of hexagons per row = EW/w
n_cols = int(EW/w) + 1
# Approximate number of hexagons per column = NS/w
n_rows = int(NS/w) + 1
# 2) Add hex params to territory
# ax = territory[["geometry"]].boundary.plot(edgecolor='black', figsize=(30, 60)) #
# width of hexagon
w = (xmax-xmin)/n_cols
# diameter of hexagon
d = w/np.sin(np.pi/3)
array_of_hexes = []
for rows in range(0,n_rows):
hcoord = np.arange(xmin,xmax,w) + (rows%2)*w/2
vcoord = [ymax- rows*d*0.75]*n_cols
for x, y in zip(hcoord, vcoord): #, colors):
hexes = RegularPolygon((x, y), numVertices=6, radius=d/2, alpha=0.2, edgecolor='k')
verts = hexes.get_path().vertices
trans = hexes.get_patch_transform()
points = trans.transform(verts)
array_of_hexes.append(Polygon(points))
# ax.add_patch(hexes) #
# ax.set_xlim([xmin, xmax]) #
# ax.set_ylim([ymin, ymax]) #
# plt.show() #
# 3) Form hex grid as gpd
hex_grid = gpd.GeoDataFrame({'geometry': array_of_hexes}, crs="EPSG:4326")
return hex_grid
def hexalize_territory(territory, hex_grid):
"""
A function to add hexagonal grid geometry to GeoDataFrame territory
Arguments:
territory - GeoDataFrame for a territory
hex_grid - GeoDataFrame, hexagonal grid geometry as prepared for specified territory
Returns: GeoDataFrame of a territory overlayed with a hexagonal grid
"""
territory_hex = gpd.overlay(hex_grid, territory)
territory_hex = gpd.GeoDataFrame(territory_hex, geometry='geometry', crs="EPSG:4326")
territory_hex = territory_hex.reset_index()
territory_hex.rename(columns={'index': 'hex_id'}, inplace=True)
return territory_hex
def derive_hex_area(data):
"""
A function to derive the square kilometer area per hexagon
Arguments:
data - GeoDataFrame to calculate the area for pre-specified polygons (hexagons)
Returns: GeoDataFrame expanded with a field on the square kilometer area per hexagon
"""
data['hexagon_area_sq_m'] = data.to_crs({'proj':'cea'})["geometry"].area
data['hexagon_area_sq_km'] = data['hexagon_area_sq_m'] / 1000000
# data = data.drop(['hexagon_area_share', 'hexagon_area'], axis=1)
data = data.to_crs(epsg=4326)
return data
def prep_territory_hex(hex_diameter: int = 2500,
territory_scope: str = '/dbfs/mnt/AA/ba008/assets/shape_files/morocco_l2_adm_areas.shp',
hex_set: str = None):
"""
A function to overlay a custom hexagonal grid over a territory
Arguments:
hex_diameter - integer, diameter size to use in defining hexagon dimensions
territory_scope - string, name of shape file to draw areas to keep for the analysis (i.e. only marrakech_eccbc), default is set to full country
hex_set - string, name of json file to export
Returns: GeoJSON file saved to fs - of a territory overlayed with a customizable hexagonal grid
"""
# 1) Load full territory data, i.e. from country shape file
territory = gpd.read_file(territory_scope)
territory = territory.to_crs(epsg=4326)
# 2) Form hex grid for the relevant territory - post area selection
hex_grid = form_hex_grid(territory, hex_diameter)
# 3) Add hexagonal grid to territory
territory_hex = hexalize_territory(territory, hex_grid)
# 4) Add a field for the area per hexagon
territory_hex = derive_hex_area(territory_hex)
# 5) Export
if hex_set is not None:
territory_hex.to_file(hex_set, driver='GeoJSON')
return territory_hex
```
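As a sanity check for `haversine_custom`, here is an equivalent pure-stdlib sketch of the same great-circle formula (the name `haversine_m` is ours, not part of the pipeline):

```python
import math

def haversine_m(coord1, coord2):
    # Mirrors haversine_custom above: (lon, lat) inputs, rounded meters out, R = 6371000
    lon1, lat1 = coord1
    lon2, lat2 = coord2
    R = 6371000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return round(2 * R * math.atan2(math.sqrt(a), math.sqrt(1 - a)))

# One degree of latitude spans roughly 111.2 km anywhere on the sphere
print(haversine_m((0.0, 0.0), (0.0, 1.0)))  # 111195
```

Distances are symmetric and zero for identical points, which makes the function easy to spot-check against the NumPy version.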
#### 3. Functions on preparing external data sources and performing hexagon-based aggregations
###### 3.1.1. Wrapper function on population data, source: Worldpop
```
def prep_pop_set(path_hex: str, hex_set: str, path_pop: str, pop_set: str, agg_funcs: list, export_agg_file: str):
"""
A function to derive aggregate population stats by hexagon
Arguments:
path_hex - string, name of fs directory to read hex data from
hex_set - string, name of shape file containing hexagon grid (read as gpd)
path_pop - string, name of fs directory to read population data from
pop_set - string, name of tif file with population data (read as raster)
agg_funcs - list of strings with aggregation operations to be performed over the population data
export_agg_file - file path to export aggregated data to
Returns: GeoDataFrame with population stats aggregated by hexagon
"""
set_prefix = 'pop_'
# 1) Derive aggregated stats: over the raster data
output = gpd.GeoDataFrame.from_features(
rasterstats.zonal_stats(
read_geojson(path_hex, hex_set),
os.path.join(path_pop, pop_set),
stats=agg_funcs,
prefix=set_prefix,
geojson_out=True))
# 2) Fix: a few NaN-s arise from the aggregation step -> replace with 0s
for i in range(0, len(agg_funcs), 1):
output[set_prefix+agg_funcs[i]].fillna(0, inplace=True)
# 3) Set the coordinate reference system
output = output.set_crs(epsg=4326)
# 4) Export to blob
output.to_file(export_agg_file, driver='GeoJSON')
return output
```
###### 3.1.2 Wrapper function on osm poi data, source: OSM
```
def prep_osm_set(territory_hex: gpd.GeoDataFrame, path_osm: str, osm_set: str, export_agg_file: str = None):
"""
A function to derive the point-of-interest number per hexagon (count)
Arguments:
path_hex - string, name of fs directory to read hex data from
hex_set - string, name of shape file containing hexagon grid (read as gpd)
path_osm - string, name of fs directory to read osm data from
osm_set - string, name of parquet file with osm data (read as pd)
export_agg_file - string, file path to export aggregated data to
Returns: GeoDataFrame with the number of poi-s per hexagon
"""
# 1) Read osm set
init = pd.read_parquet(os.path.join(path_osm, osm_set)).filter(["osm_id", "latitude", "longitude"], axis=1)
output = gpd.GeoDataFrame(init, geometry=gpd.points_from_xy(init.longitude, init.latitude))
output = output.set_crs(epsg=4326)
# 2) Add hex grid
output = gpd.sjoin(output, territory_hex, how="inner", op="intersects")
# 3) Derive aggregated stats: over the geopandas data
output = output.groupby(['hex_id'])[['osm_id']].count()
output.reset_index(level=0, inplace=True)
output.rename(columns={'osm_id': 'osm_count'}, inplace=True)
# 4) Export
if export_agg_file is not None:
output.to_parquet(export_agg_file)
return output
```
###### 3.1.3. Wrapper function on customer spend data, source: Carto/Experian
```
def prep_carto_cexpend(territory_hex: gpd.GeoDataFrame, carto_data):
"""
A function to derive carto aggregations per hexagon
Arguments:
territory_hex {gpd.GeoDataFrame} -- The hexagonal coverage of the territory
carto_data {pd.DataFrame} -- Carto data read in the notebook
Returns: GeoDataFrame with carto hexagon-level aggregations
"""
# 1) Set helper items
val_cols = ["wvce_01"] ## --- may need changing -> expecting a list of column names
aggs = [["sum"]] * len(val_cols) ## --- may need changing -> expecting a list of str-lists on function names (for naming @ step7)
# 2) Read input sets
# carto_data = ingest_carto_data(carto_data_name, cred_container)
# territory_hex = read_geojson(path_hex, hex_set)
# 3) Overlay carto and hexagonal grid
output = overlay_carto_hex(carto_data, territory_hex, carto_id_var="cartodb_id", hex_id_var="hex_id")
# 4) Update columns to reflect the carto value share within each hexagon
output = reflect_carto_share(output, val_cols)
# 5) Derive aggregated stats
output = output. \
groupby("hex_id"). \
agg(dict(zip(val_cols, aggs)))
# 6) Address naming
output = fix_agg_data_names(output, prefix="carto_", id_var="hex_id")
return output
```
###### 3.1.4. Wrapper function on socio-demographic data, source: Carto/Experian
```
def prep_carto_sociodemo(territory_hex: gpd.GeoDataFrame, carto_data, hex_geom_name = "territory_hex_geom", carto_geom_name = "carto_set_geom"):
"""
A function to derive carto aggregations per hexagon
Arguments:
territory_hex {gpd.GeoDataFrame} -- The hexagonal coverage of the territory
carto_data {pd.DataFrame} -- Carto data read in the notebook
Returns: GeoDataFrame with carto hexagon-level aggregations
"""
# 1) Set helper items
val_cols =['pop','hh','male','female','age_t0014', 'age_m0014', 'age_f0014', 'age_t1529', 'age_m1529','age_f1529','age_t3044','age_m3044','age_f3044','age_t4559','age_m4559','age_f4559','age_t60pl','age_m60pl','age_f60pl','di_mio','di_pc','di_ci']
# 2) Read input sets
# carto_data = ingest_carto_data(carto_data_name, cred_container)
# territory_hex = read_geojson(path_hex, hex_set)
# 3) Overlay carto and hexagonal grid
output = overlay_carto_hex(carto_data, territory_hex, carto_id_var="cartodb_id", hex_id_var="hex_id")
# Add aggregation functions - data needed for construct_wmean_func_from_set
aggs = [["sum"]]*20 + [["mean", construct_wmean_func_from_set(output, "pop")]]*2 ## --- may need changing -> expecting a list of str-lists on function names (for naming @ step7)
# 4) Update columns to reflect the carto value share within each hexagon
output = reflect_carto_share(output, val_cols, hex_geom_name, carto_geom_name)
# 5) Derive aggregated stats
output = output. \
groupby("hex_id"). \
agg(dict(zip(val_cols, aggs)))
# 6) Address naming
output = fix_agg_data_names(output, prefix="carto_", id_var="hex_id")
return output
```
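The `wmean` aggregator used for the `di_pc`/`di_ci` columns above is just a population-weighted average. A stdlib sketch with hypothetical per-polygon values:

```python
# Per-capita income (di_pc) and population (pop) for three carto polygons - made-up numbers
values = [10.0, 20.0, 40.0]
weights = [100, 300, 100]

# Same arithmetic as np.average(values, weights=weights)
wmean = sum(v * w for v, w in zip(values, weights)) / sum(weights)
print(wmean)  # 22.0
```

The populous middle polygon pulls the average toward 20, which is why the weighted mean differs from the plain mean (23.33).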
###### 3.1.5. Wrapper function on activity/mobility data, source: Unacast
```
def prep_carto_unacast(territory_hex: gpd.GeoDataFrame, carto_data):
"""
A function to derive carto aggregations per hexagon
Arguments:
territory_hex {gpd.GeoDataFrame} -- The hexagonal coverage of the territory
carto_data {pd.DataFrame} -- Carto data read in the notebook
Returns: GeoDataFrame with carto hexagon-level aggregations
"""
# 1) Read input sets
# 1.1) Territory hex set
# territory_hex = read_geojson(path_hex, hex_set)
# 1.2) Carto set
# carto_data = ingest_carto_data(carto_data_name, cred_container)
carto_data = init_prep_carto_unacast(carto_data, id_var="seen_in")
# 2) Set helper items
# val_cols = get_agg_cols(data=carto_data, from_position=2) ## --- may need changing -> expecting a list of column names (vestige)
vals_sum = list(carto_data[carto_data.filter(regex='staying').columns])
vals_mean = list(carto_data[carto_data.filter(regex='proportion_|return_').columns])
val_cols = vals_sum + vals_mean
aggs = [["sum"]]*len(vals_sum) + [["mean"]]*len(vals_mean) ## --- may need changing -> expecting a list of str-lists on function names (for naming @ step7)
# 3) Overlay carto and hexagonal grid
output = overlay_carto_hex(carto_data, territory_hex, carto_id_var="seen_in", hex_id_var="hex_id")
# 4) Update columns to reflect the carto value share within each hexagon
# AR: There's something wrong with this function
output = reflect_carto_share(output, val_cols)
# 5) Derive aggregated stats
output = output. \
groupby("hex_id"). \
agg(dict(zip(val_cols, aggs)))
# 6) Address naming
output = fix_agg_data_names(output, prefix="carto_", id_var="hex_id")
return output
```
###### 3.1.6. Wrapper function on poi data, source: Pitney Bowes
```
def prep_carto_poi(territory_hex: gpd.GeoDataFrame, carto_data, carto_val_cols):
"""
A function to derive the carto point-of-interest number per hexagon (count), item/s to count may be changed if needed.
Arguments:
territory_hex {gpd.GeoDataFrame} -- The hexagonal coverage of the territory
carto_data {pd.DataFrame} -- Carto data
carto_val_cols {str list} -- Names of carto columns to aggregate
Returns: GeoDataFrame with the number of poi-s per hexagon
"""
# 0) Helper items
aggs = [["sum"]]*len(carto_val_cols) # summing dummies -->> resulting in counts
# 1) Aggregate
output = \
gpd.sjoin(custom_group_carto_poi(carto_data), territory_hex, how="inner", op="intersects"). \
groupby("hex_id"). \
agg(dict(zip(carto_val_cols, aggs)))
# 2) Adjust names
output = fix_agg_data_names(output, prefix="carto_", id_var="hex_id")
output.columns = output.columns.str.replace("sum", "count")
return output
```
###### 3.2.1. Additional general function
```
def overlay_carto_hex(carto_data, territory_hex, carto_id_var, hex_id_var):
"""
A function to overlay carto data and hexagonal grid
Arguments:
carto_data {pd.DataFrame} -- Carto data
territory_hex {pd.DataFrame} -- Hexagonal grid data for the territory
carto_id_var {string} -- Name of carto_data id column
hex_id_var {string} -- Name of territory_hex id column
Returns: GeoDataFrame with carto hexagon-level aggregations
"""
# 1) Overlay hex grid onto carto data -->> NB! the default geometry becomes that of the intersection items
output = gpd.overlay(carto_data, territory_hex, how="intersection")
# 2) Add extra geometries - from full carto set and from full hex set (gives context for carto value weighing)
output = pd.merge(output, carto_data[[carto_id_var, "geometry"]].rename(columns={"geometry": "carto_set_geom"}), on = [carto_id_var], how="left")
output = pd.merge(output, territory_hex[[hex_id_var, "geometry"]].rename(columns={"geometry": "territory_hex_geom"}), on = [hex_id_var], how="left")
return output
```
###### 3.2.2. Additional general function
```
def reflect_carto_share(data, val_cols, hex_geom_name = "territory_hex_geom", carto_geom_name = "carto_set_geom"):
"""
A function to update carto value columns with their appropriate carto share - falling within each hexagon, prior to follow-up aggregation
Arguments:
data {gpd.DataFrame} -- Data for which to update the carto value columns
val_cols {str list} -- Names of the value columns to rescale by hexagon share
Returns: GeoDataFrame with carto hexagon-level aggregations
"""
# hex_geom_name = "territory_hex_geom"
# carto_geom_name = "carto_set_geom"
# "geometry" is the geom of the intersection from the overlay intersections
form_hex_share = lambda x: [y.area/z.area for y, z in zip(x["geometry"], x[carto_geom_name])]
for each in val_cols:
form_val_as_share = lambda x: [y*z for y, z in zip(x[each], x["hex_share"])]
new_col_spex = dict(zip(["hex_share"] + [each], [form_hex_share] + [form_val_as_share]))
data = data.assign(**(new_col_spex))
return data
```
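The rescaling in `reflect_carto_share` reduces to simple area arithmetic: each value is multiplied by the fraction of its carto polygon that falls inside the hexagon. A stdlib sketch with made-up areas:

```python
# Each row: area of the carto/hexagon intersection, area of the full carto polygon, a value column
rows = [
    {"inter_area": 2.0, "carto_area": 8.0, "pop": 400},  # hexagon covers 25% of the polygon
    {"inter_area": 8.0, "carto_area": 8.0, "pop": 400},  # hexagon covers the polygon entirely
]
for r in rows:
    r["hex_share"] = r["inter_area"] / r["carto_area"]  # mirrors y.area / z.area above
    r["pop"] = r["pop"] * r["hex_share"]                # value scaled to the in-hexagon share
print([r["pop"] for r in rows])  # [100.0, 400.0]
```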
###### 3.2.3. Additional general function
```
def fix_agg_data_names(data, prefix="carto_", id_var="hex_id"):
"""
A function to adjust column names of aggregated data -
concatenates index grouping to column names and adds an indicator prefix (for easier filtering @ later stages)
Arguments:
data {gpd.DataFrame} -- Post-pivot data to process
prefix {string} -- Prefix to add to column names, short source indicator (e.g. "carto_")
id_var {string} -- Name of DataFrame id var
Returns: gpd.DataFrame with adjusted column names.
"""
data.reset_index(level=0, inplace=True)
data.columns = ["_".join(x) for x in data.columns.ravel() if x != id_var]
data = data.add_prefix(prefix)
data = data.rename(columns={prefix+id_var+"_": id_var})
return data
```
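To see what `fix_agg_data_names` produces, the `"_".join` flattening can be traced on plain tuples standing in for the post-aggregation MultiIndex columns (the column names here are hypothetical):

```python
# After groupby().agg(), columns are (value, func) tuples; reset_index adds ("hex_id", "")
cols = [("hex_id", ""), ("pop", "sum"), ("di_pc", "mean"), ("di_pc", "wmean")]
flat = ["_".join(x) for x in cols]        # flatten the tuples
prefixed = ["carto_" + c for c in flat]   # as done by data.add_prefix("carto_")
# the final rename maps "carto_hex_id_" back to the plain id column
renamed = ["hex_id" if c == "carto_hex_id_" else c for c in prefixed]
print(renamed)  # ['hex_id', 'carto_pop_sum', 'carto_di_pc_mean', 'carto_di_pc_wmean']
```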
###### 3.2.4. Additional general function
```
def fix_pivot_data_names(data, suffix, id_var, id_trigger=False):
"""
A function to adjust column names of pivoted data -
concatenates index grouping to column names and adds an indicator suffix (for easier filtering @ later stages)
Arguments:
data {gpd.DataFrame} -- Post-pivot data to process
suffix {string} -- Suffix to add to column names, short indicator of pivot column setting (i.e. "dow" for "day_of_week")
id_var {string} -- Name of DataFrame id var
id_trigger {boolean} -- Flag for id_var treatment in renaming, False drops id_var from renaming functionality (defaults to False)
Returns: performs renaming in place, does not return an object
"""
if id_trigger:
data.columns = ["_".join(x) + suffix for x in data.columns.ravel()]
else:
data.columns = ["_".join(x) + suffix for x in data.columns.ravel() if x != id_var]
data.reset_index(level=0, inplace=True)
```
###### 3.2.5. Additional set-specific function: carto_poi (callable in wrapper 3.1.6 "prep_carto_poi")
```
def custom_group_carto_poi(carto_data, id_var="cartodb_id"):
"""
A function to create a custom grouping based on pb poi trade divisions - to leverage in the estimation of poi count by category.
Arguments:
carto_data {pd.GeoDataFrame} -- Carto poi data to process
id_var {string} -- Name of id var (key id) to use in indexing and df joining
Returns: GeoDataFrame on carto PB POI - with dummies for the different custom poi groups defined.
"""
# 1) Create subset with trade divisions only
data = carto_data.copy()[[id_var, "trade_division"]]
# 2) Define new column parameters - conditions and names,
conditions = [
(data["trade_division"].isin(["DIVISION C. - CONSTRUCTION"])),
(data["trade_division"].isin(["DIVISION D. - MANUFACTURING"])),
(data["trade_division"].isin(["DIVISION B. -MINING"])),
(data["trade_division"].isin(["DIVISION A. - AGRICULTURE, FORESTRY, AND FISHING"])),
(data["trade_division"].isin(["DIVISION K. - NONCLASSIFIABLE ESTABLISHMENTS"])),
(data["trade_division"].isin(["DIVISION N. - LABEL FEATURES"])),
(data["trade_division"].isin(["DIVISION E. - TRANSPORTATION AND PUBLIC UTILITIES"])),
(data["trade_division"].isin(["DIVISION I. - SERVICES", "DIVISION F. - WHOLESALE TRADE", "DIVISION G. - RETAIL TRADE", "DIVISION M. - SPORTS",
"DIVISION L. - TOURISM", "DIVISION H. - FINANCE, INSURANCE, AND REAL ESTATE", "DIVISION J. - PUBLIC ADMINISTRATION"]))
]
values = ["poi_" + x for x in ["construction", "manufacturing","mining", "life_sciences", "nonclassifiable", "terrain_features", "utility_transportation", "trade_services"] ]
# 3) Generate new columns
data = data. \
assign(poi_sic_grouping=np.select(conditions, values)). \
set_index(id_var)
business = pd.get_dummies(data["poi_sic_grouping"]). \
reset_index()
divisions = pd.get_dummies(data["trade_division"])
divisions.columns = [re.sub(r'^.*?-', "", x.lower()) for x in divisions.columns]
divisions.columns = [re.sub(",", "", x) for x in divisions.columns]
divisions.columns = ["trade_div" + re.sub(" ", "_", x) for x in divisions.columns]
divisions = divisions.reset_index()
# 4) Add to original carto data
carto_data = carto_data.merge(business, on=id_var, how="left").merge(divisions, on=id_var, how="left")
return carto_data
```
###### 3.2.6. Additional set-specific function: carto_unacast (callable in wrapper 3.1.5 "prep_carto_unacast")
```
def init_prep_carto_unacast(carto_data, id_var="seen_in"):
"""
A function to perform initial prep over the raw carto_unacast_data - spread to wide format.
Arguments:
carto_data {gpd.DataFrame} -- Raw Carto Unacast data to be processed
id_var {string} -- Name of dataset key id var/quadid (indicating unique rows, defaults to "seen_in")
Returns: GeoDataFrame with wider data.
"""
# 1) Spread carto set categorical columns
carto_unacast_wip = spread_carto_unacast(carto_data)
# 2) Transform original proportions to counts
carto_unacast_wip = reflect_unacast_proportions(carto_unacast_wip)
# 3) Aggregate carto set to geom level
output = agg_carto_unacast_to_geomid(carto_unacast_wip, id_var=id_var)
return output
```
###### 3.2.7. Additional set-specific function: carto_unacast (callable in addon 3.2.6 "init_prep_carto_unacast")
```
def spread_carto_unacast(carto_unacast):
"""
A function to spread categorical carto_unacast columns
Arguments:
carto_unacast {gpd.DataFrame} -- Carto Unacast data to be processed
Returns: GeoDataFrame with wider data.
"""
# Unfold return_rate column
try:
carto_unacast_wip = carto_unacast. \
assign(
rr_unfolded = lambda x: [parse_return_rate(string=y) for y in x["return_rate"]],
less_than_5_flag = carto_unacast["less_than_5_flag"].astype(int)
)
except Exception:  # e.g. "return_rate" column absent (data already spread) -> return input unchanged
return carto_unacast
# Spread into columns
carto_unacast_wip = pd.concat([carto_unacast_wip.drop(["rr_unfolded", "return_rate"], axis=1), carto_unacast_wip["rr_unfolded"].apply(pd.Series)], axis=1)
carto_unacast_wip.columns = ["return_rate_" + col if col in ["rare", "weekly", "multi_weekly", "monthly"] else col for col in carto_unacast_wip.columns]
# Adjust type
fix_cols = [col for col in carto_unacast_wip if col.startswith("return_rate_")]
carto_unacast_wip[fix_cols] = carto_unacast_wip[fix_cols].apply(pd.to_numeric, errors='coerce', axis=1)
return carto_unacast_wip
```
###### 3.2.8. Additional set-specific function: carto_unacast (callable in addon 3.2.7 "spread_carto_unacast")
```
def parse_return_rate(string):
"""
A function to parse return_rate column of the carto unacast data (originally a string).
Arguments:
string {string} -- String to parse
Returns: dict - key-value pairs to be split into DataFrame columns as the next prep step.
"""
output_string = string.split("), ")
output_string = [re.sub(r'[]()[{}]', '', x) for x in output_string]
output_string = [re.sub(r' ', '', x) for x in output_string]
output_string = [re.sub(r'frequency:', '', x) for x in output_string]
output_string = [re.sub(r'ratio:', '', x) for x in output_string]
output_string = [re.sub(r'-', '_', x) for x in output_string]
output_string = [re.sub(r',', '=', x) for x in output_string]
sep = ";"
output_string = sep.join(output_string)
output_string = dict(item.split("=") for item in output_string.split(";"))
return output_string
```
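To make the parsing concrete, here is a standalone rerun of the same regex pipeline on a hypothetical raw string — the exact Carto string format is an assumption, and `parse_return_rate_demo` simply mirrors the function above step by step:

```python
import re

def parse_return_rate_demo(string):
    # Mirrors parse_return_rate above, one substitution at a time
    parts = string.split("), ")
    parts = [re.sub(r'[]()[{}]', '', x) for x in parts]    # strip brackets/parens/braces
    parts = [re.sub(r' ', '', x) for x in parts]           # drop spaces
    parts = [re.sub(r'frequency:', '', x) for x in parts]  # drop key labels
    parts = [re.sub(r'ratio:', '', x) for x in parts]
    parts = [re.sub(r'-', '_', x) for x in parts]          # e.g. "multi-weekly" -> "multi_weekly"
    parts = [re.sub(r',', '=', x) for x in parts]          # "rare,0.1" -> "rare=0.1"
    joined = ";".join(parts)
    return dict(item.split("=") for item in joined.split(";"))

raw = "[(frequency: rare, ratio: 0.1), (frequency: multi-weekly, ratio: 0.2)]"
print(parse_return_rate_demo(raw))  # {'rare': '0.1', 'multi_weekly': '0.2'}
```

Note the values stay as strings at this stage; `spread_carto_unacast` later converts them with `pd.to_numeric`.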
###### 3.2.9. Additional set-specific function: carto_unacast (callable in addon 3.2.6 "init_prep_carto_unacast")
```
def reflect_unacast_proportions(carto_unacast_wip):
"""
A function to reflect proportions as counts in unacast data (relative to "staying" count) - prior to aggregation.
Arguments:
carto_unacast_wip {gpd.DataFrame} -- Spread Carto Unacast data to be processed
Returns: GeoDataFrame with original proportion columns transformed as counts.
"""
fix_cols = [col for col in carto_unacast_wip if col.startswith("proportion_")]
for col in fix_cols:
carto_unacast_wip[col] = carto_unacast_wip["staying"] * carto_unacast_wip[col]
return carto_unacast_wip
```
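`reflect_unacast_proportions` simply rescales each proportion by the `staying` count so that sums become meaningful after aggregation. A toy stdlib sketch (all numbers hypothetical):

```python
# One unacast row with a visitor count and proportion columns (made-up values)
row = {"staying": 200, "proportion_residents": 0.6, "proportion_workers": 0.3}
for key in [k for k in row if k.startswith("proportion_")]:
    row[key] = row["staying"] * row[key]  # proportion -> absolute count, as in the function above
print(row)  # {'staying': 200, 'proportion_residents': 120.0, 'proportion_workers': 60.0}
```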
###### 3.2.10. Additional set-specific function: carto_unacast (callable in addon 3.2.6 "init_prep_carto_unacast")
```
def agg_carto_unacast_to_geomid(carto_unacast_wip, id_var="seen_in"):
"""
A function to aggregate carto_unacast data to quadid level (same as geometry level).
Arguments:
carto_unacast {gpd.DataFrame} -- Spread Carto Unacast data to be processed
id_var {string} -- Name of dataset key id var/quadid (indicating unique rows, not a geometry)
Returns: GeoDataFrame with wider data.
"""
# 1) Helper items
geom_mapping = carto_unacast_wip[["geometry", id_var]].drop_duplicates()
val_cols = ['staying','proportion_residents','proportion_workers','proportion_others','proportion_same_state','proportion_other_state','proportion_unknown','return_rate_rare','return_rate_multi_weekly','return_rate_weekly','return_rate_monthly']
rr_cols = [col for col in val_cols if col.startswith("return_rate_")]
# 2) Derive wide set
# 2.1) aggregate return_rates to seen_in id -> independent of any breaks
# average (if spread by category the same result is observed across different categories)
out_rr_mean = carto_unacast_wip.groupby(id_var).agg(dict(zip(rr_cols, ["mean"]*len(rr_cols)))).reset_index()
# 2.2) aggregate day_of_week
day_of_week = \
wide_agg_carto_unacast(
carto_unacast_wip, val_cols, id_var, dimension_var="day_of_week", suffix_dim="_dow",
suffix_specs=["ALL", "Friday", "Monday", "Saturday", "Sunday", "Thursday", "Tuesday", "Wednesday"])
# 2.3) aggregate time_of_day
part_of_day = \
wide_agg_carto_unacast(
carto_unacast_wip, val_cols, id_var, dimension_var="part_of_day", suffix_dim="_pod",
suffix_specs=['ALL', 'Evening', 'Morning', 'Afternoon', 'Night'])
# 2.4) aggregate months
month = \
wide_agg_carto_unacast(
carto_unacast_wip, val_cols, id_var, dimension_var="month", suffix_dim="_mo",
suffix_specs=['September', 'May', 'August', 'June', 'July', 'October'])
# 2.5) pivot on combination of day_of_week, time_of_day & month (disabled) --> NB! results in 240 cols per value var
# time_combos = carto_unacast_wip. \
# assign(part_of_time = lambda x: [y + "_" + z + "_" + q for y, z, q in zip(x["month"], x["day_of_week"], x["part_of_day"])]). \
# pivot(index=id_var, columns="part_of_time", values=val_cols)
# fix_pivot_data_names(data=time_combos, suffix="_combo", id_var=id_var, id_trigger=True)
# 2.6) Combine into one
output = geom_mapping. \
merge(month, on=id_var, how="inner"). \
merge(day_of_week, on=id_var, how="inner"). \
merge(part_of_day, on=id_var, how="inner"). \
merge(out_rr_mean, on=id_var, how="inner") #. \
# merge(time_combos, on=id_var, how="inner")
return output
```
###### 3.2.11. Additional set-specific function: carto_unacast (callable in addon 3.2.10 "agg_carto_unacast_to_geomid")
```
def wide_agg_carto_unacast(carto_unacast_wip, val_cols, id_var, dimension_var, suffix_dim, suffix_specs):
"""
A function to aggregate unacast data to unique id level - prerequisite to hex id aggregation (accounting for proportions and sub categories of temporal columns).
Arguments:
carto_unacast_wip {gpd.DataFrame} -- Carto Unacast data post initial spread of return rate columns
val_cols {str list} -- Names of value columns to be aggregated
id_var {string} -- Name of dataset key id var/quadid (indicating unique rows, defaults to "seen_in")
dimension_var {string} -- Name of temporal column to split the categories of (i.e. "day_of_week", "month", "time_of_day")
suffix_dim {string} -- Name of suffix corresponding to dimension_var (marks source data in new cols)
suffix_specs {str list} -- Names of dimension_var categories (possible values to be spread into new variables)
Returns: Aggregated GeoDataFrame with additional variables derived from categories of temporal variables.
"""
suffixes = ["_" + each + suffix_dim for each in suffix_specs]
rr_cols = [col for col in val_cols if col.startswith("return_rate_")]
val_cols_pure = list(set(val_cols) - set(rr_cols))
# Aggregations:
# 1) reflect temporal categories by pivoting with sum of value columns
output = carto_unacast_wip.pivot_table(values=val_cols_pure, columns=dimension_var, index=id_var, aggfunc=dict(zip(val_cols_pure, ["sum"]*len(val_cols_pure))))
fix_pivot_data_names(data=output, suffix=suffix_dim, id_var=id_var)
# 1.1) keep sum of the [avg number of people] per subcategory
out_ppl_sum = output.copy().set_index(id_var)
out_ppl_sum = out_ppl_sum.loc[:, out_ppl_sum.columns.str.startswith("proportion_")].reset_index()
out_ppl_sum.columns = out_ppl_sum.columns.str.replace(r"proportion_", "sum_")
# 1.2) add columns reverted back to proportions
for suffix in suffixes:
columns_with_suffix = [col for col in output.columns if suffix in col]
denominator=[col for col in columns_with_suffix if col.startswith("staying_")]
numerators=list(set(columns_with_suffix) - set(denominator))
for numerator in numerators:
output[numerator] = output[numerator]/output[denominator[0]]
# 2) build combined output file
output = output. \
merge(out_ppl_sum, on=id_var, how="left")
return output
```
#### 4. Functions on combining the TAA components
```
def merge_taa_components_from_wppop(pop_stats, poi_stats, carto_cexpend_stats, carto_sociodemo_stats, carto_unacast_stats, carto_poi_stats,
id_var="hex_id", export_spec=None):
"""
A function to combine trade-area-analysis component datasets.
Arguments:
pop_stats {GeoDataFrame} -- WP population data - aggregated to hexagon level
poi_stats {GeoDataFrame} -- OSM poi data - aggregated to hexagon level
carto_cexpend_stats {GeoDataFrame} -- Carto customer spend data - aggregated to hexagon level
carto_sociodemo_stats {GeoDataFrame} -- Carto socio-demographic data - aggregated to hexagon level
carto_unacast_stats {GeoDataFrame} -- Carto Unacast data - aggregated to hexagon level
carto_poi_stats {GeoDataFrame} -- Carto PB poi data - aggregated to hexagon level
id_var {string} -- Name of id variable to merge on
export_spec {string} -- Name of file to export to blob (path and name)
Returns: GeoDataFrame with wider data.
"""
# need a dataset with geometry and hex_id-> to form a gpd
# currently using pop_stats
output = \
pop_stats. \
merge(poi_stats, on=id_var, how='left'). \
merge(carto_cexpend_stats, on=id_var, how='left'). \
merge(carto_sociodemo_stats, on=id_var, how='left'). \
merge(carto_unacast_stats, on=id_var, how='left'). \
merge(carto_poi_stats, on=id_var, how='left')
if export_spec is not None:
output.to_file(export_spec, driver="GeoJSON")
return output
def merge_taa_components_from_osm_carto(poi_stats, carto_cexpend_stats, carto_sociodemo_stats, carto_unacast_stats, carto_poi_stats,
id_var="hex_id", export_spec="/dbfs/mnt/AA/ba008/assets/trade_area_analysis/taa_combo_set.json"):
"""
A function to combine trade-area-analysis component datasets.
Arguments:
poi_stats {GeoDataFrame} -- OSM poi data - aggregated to hexagon level
carto_cexpend_stats {GeoDataFrame} -- Carto customer spend data - aggregated to hexagon level
carto_sociodemo_stats {GeoDataFrame} -- Carto socio-demographic data - aggregated to hexagon level
carto_unacast_stats {GeoDataFrame} -- Carto Unacast data - aggregated to hexagon level
carto_poi_stats {GeoDataFrame} -- Carto PB poi data - aggregated to hexagon level
id_var {string} -- Name of id variable to merge on
export_spec {string} -- Name of file to export to blob (path and name)
Returns: GeoDataFrame with wider data.
"""
# need a dataset with geometry and hex_id-> to form a gpd
# currently using poi_stats
output = \
poi_stats. \
merge(carto_cexpend_stats, on=id_var, how='left'). \
merge(carto_sociodemo_stats, on=id_var, how='left'). \
merge(carto_unacast_stats, on=id_var, how='left'). \
merge(carto_poi_stats, on=id_var, how='left')
output = gpd.GeoDataFrame(output)
output.to_file(export_spec, driver="GeoJSON")
return output
def merge_taa_components_from_hex(territory_hex, poi_stats, carto_cexpend_stats, carto_sociodemo_stats, carto_unacast_stats, carto_poi_stats,
id_var="hex_id", export_spec="/dbfs/mnt/AA/ba008/assets/trade_area_analysis/taa_combo_set.json"):
"""
A function to combine trade-area-analysis component datasets.
Arguments:
territory_hex {GeoDataFrame} -- Hexagonal grid for the territory
poi_stats {GeoDataFrame} -- OSM poi data - aggregated to hexagon level
carto_cexpend_stats {GeoDataFrame} -- Carto customer spend data - aggregated to hexagon level
carto_sociodemo_stats {GeoDataFrame} -- Carto socio-demographic data - aggregated to hexagon level
carto_unacast_stats {GeoDataFrame} -- Carto Unacast data - aggregated to hexagon level
carto_poi_stats {GeoDataFrame} -- Carto PB poi data - aggregated to hexagon level
id_var {string} -- Name of id variable to merge on
export_spec {string} -- Name of file to export to blob (path and name)
Returns: GeoDataFrame with wider data.
"""
# need a dataset with geometry and hex_id-> to form a gpd
# currently using territory_hex
output = \
territory_hex. \
merge(poi_stats, on=id_var, how='left'). \
merge(carto_cexpend_stats, on=id_var, how='left'). \
merge(carto_sociodemo_stats, on=id_var, how='left'). \
merge(carto_unacast_stats, on=id_var, how='left'). \
merge(carto_poi_stats, on=id_var, how='left')
output.to_file(export_spec, driver="GeoJSON")
return output
```
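The chained left joins above can be illustrated on toy frames (the ids and columns below are made up for illustration, not the real hexagon data):

```python
import pandas as pd

# Toy hexagon-level tables keyed on a shared id column (values are illustrative)
poi = pd.DataFrame({"hex_id": ["a", "b", "c"], "poi_count": [3, 1, 4]})
spend = pd.DataFrame({"hex_id": ["a", "c"], "avg_spend": [12.0, 7.5]})

# how='left' keeps every hexagon from the left table; unmatched rows become NaN
combined = poi.merge(spend, on="hex_id", how="left")
```

Left joins preserve the full hexagon grid, so hexagons without spend data still appear (with NaN) rather than being dropped.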
#### Data visualization functions
```
def plot_simple_hex(combo_set, show_var):
"""
A function to display data per hexagon - on regional and city level.
Arguments:
combo_set {GeoDataFrame} -- Complete data, on regional level
show_var {string} -- Name of variable to display
Returns: None; displays the chosen variable as a choropleth map.
"""
combo_set.plot(column=show_var, figsize=(10, 10))
```
# Empirical Approximation overview
For most models we use sampling MCMC algorithms like Metropolis or NUTS. In PyMC3 we are used to storing traces of MCMC samples and then doing analysis with them. There is a similar concept for the variational inference submodule in PyMC3: *Empirical*. This type of approximation stores particles for the SVGD sampler; there is no difference between independent SVGD particles and MCMC samples. *Empirical* acts as a bridge between MCMC sampling output and full-fledged VI utilities like `apply_replacements` or `sample_node`. For the interface description, see [variational_api_quickstart](variational_api_quickstart.ipynb). Here we will focus on `Empirical` and give an overview of things specific to the *Empirical* approximation
```
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
import theano
print(f"Running on PyMC3 v{pm.__version__}")
%config InlineBackend.figure_format = 'retina'
az.style.use("arviz-darkgrid")
np.random.seed(42)
pm.set_tt_rng(42)
```
## Multimodal density
Let's recall the problem from [variational_api_quickstart](variational_api_quickstart.ipynb) where we first got a NUTS trace
```
w = pm.floatX([0.2, 0.8])
mu = pm.floatX([-0.3, 0.5])
sd = pm.floatX([0.1, 0.1])
with pm.Model() as model:
x = pm.NormalMixture("x", w=w, mu=mu, sigma=sd, dtype=theano.config.floatX)
trace = pm.sample(50000)
pm.traceplot(trace);
```
Great. Given a trace, we can now create an `Empirical` approximation
```
print(pm.Empirical.__doc__)
with model:
approx = pm.Empirical(trace)
approx
```
This type of approximation has its own underlying storage for samples, which is itself a `theano.shared` variable
```
approx.histogram
approx.histogram.get_value()[:10]
approx.histogram.get_value().shape
```
It has exactly the same number of samples that you had in the trace before — in our particular case, 50k. Another thing to notice: if you have a multitrace with **more than one chain**, far **more samples** are stored at once, because all chains are flattened when creating `Empirical`.
This *histogram* describes *how* we store samples. The structure is simple: `(n_samples, n_dim)`. The ordering of the variables is stored internally in the class and in most cases is not needed by the end user
```
approx.ordering
```
Sampling from the posterior is done uniformly with replacement. Call `approx.sample(1000)` and you'll get the trace back, but in a shuffled order — there is no way to reconstruct the original trace from `approx.sample`.
```
new_trace = approx.sample(50000)
%timeit new_trace = approx.sample(50000)
```
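Conceptually, `approx.sample` draws row indices uniformly with replacement from the stored histogram. A plain numpy sketch of that idea (the array shapes here are illustrative, not PyMC3's internal code):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stands in for approx.histogram with shape (n_samples, n_dim)
histogram = rng.normal(size=(50_000, 1))

def sample(histogram, draws):
    # Draw row indices uniformly with replacement, as Empirical.sample does conceptually
    idx = rng.integers(0, len(histogram), size=draws)
    return histogram[idx]

new = sample(histogram, 1000)
```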
Once the sampling function is compiled, sampling becomes really fast
```
pm.traceplot(new_trace);
```
You can see that the order is gone, but the reconstructed density is the same.
## 2d density
```
mu = pm.floatX([0.0, 0.0])
cov = pm.floatX([[1, 0.5], [0.5, 1.0]])
with pm.Model() as model:
pm.MvNormal("x", mu=mu, cov=cov, shape=2)
trace = pm.sample(1000)
with model:
approx = pm.Empirical(trace)
pm.traceplot(approx.sample(10000));
import seaborn as sns
sns.kdeplot(approx.sample(1000)["x"])
```
Previously we had a `trace_cov` function
```
with model:
print(pm.trace_cov(trace))
```
Now we can estimate the same covariance using `Empirical`
```
print(approx.cov)
```
That's a tensor itself
```
print(approx.cov.eval())
```
The estimates are very close, differing only by numerical precision. We can get the mean in the same way
```
print(approx.mean.eval())
%load_ext watermark
%watermark -n -u -v -iv -w
```
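`approx.mean` and `approx.cov` shown above are just the sample mean and covariance of the stored particles. An equivalent numpy computation (the toy samples here are drawn directly from the target distribution, not taken from a real trace):

```python
import numpy as np

rng = np.random.default_rng(1)
true_cov = np.array([[1.0, 0.5], [0.5, 1.0]])
samples = rng.multivariate_normal([0.0, 0.0], true_cov, size=20_000)

emp_mean = samples.mean(axis=0)          # analogue of approx.mean.eval()
emp_cov = np.cov(samples, rowvar=False)  # analogue of approx.cov.eval()
```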
# Hill Climbing
---
In this notebook, we will train an agent via hill climbing with adaptive noise scaling in OpenAI Gym's CartPole environment.
### 1. Import the Necessary Packages
```
import gym
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
```
### 2. Define the Policy
```
env = gym.make('CartPole-v0')
print('observation space:', env.observation_space)
print('action space:', env.action_space)
class Policy():
def __init__(self, s_size=4, a_size=2):
self.w = 1e-4*np.random.rand(s_size, a_size) # weights for simple linear policy: state_space x action_space
def forward(self, state):
x = np.dot(state, self.w)
return np.exp(x)/sum(np.exp(x))
def act(self, state):
probs = self.forward(state)
#action = np.random.choice(2, p=probs) # option 1: stochastic policy
action = np.argmax(probs) # option 2: deterministic policy
return action
```
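`forward` above computes a softmax over the action scores. The naive `np.exp(x)/sum(np.exp(x))` can overflow for large inputs; a numerically stable variant (a standard trick, not part of the original notebook) subtracts the max first:

```python
import numpy as np

def stable_softmax(x):
    # Subtracting the max leaves the result unchanged but avoids exp() overflow
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

# The naive version would overflow on scores this large
probs = stable_softmax(np.array([1000.0, 1001.0]))
```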
### 3. Train the Agent with Stochastic Policy Search
```
env = gym.make('CartPole-v0')
env.seed(0)
np.random.seed(0)
policy = Policy()
def hill_climbing(n_episodes=1000, max_t=1000, gamma=1.0, print_every=100, noise_scale=1e-2):
"""Implementation of hill climbing with adaptive noise scaling.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
gamma (float): discount rate
print_every (int): how often to print average score (over last 100 episodes)
noise_scale (float): standard deviation of additive noise
"""
scores_deque = deque(maxlen=100)
scores = []
best_R = -np.Inf
best_w = policy.w
for i_episode in range(1, n_episodes+1):
rewards = []
state = env.reset()
for t in range(max_t):
action = policy.act(state)
state, reward, done, _ = env.step(action)
rewards.append(reward)
if done:
break
scores_deque.append(sum(rewards))
scores.append(sum(rewards))
discounts = [gamma**i for i in range(len(rewards)+1)]
R = sum([a*b for a,b in zip(discounts, rewards)])
if R >= best_R: # found better weights
best_R = R
best_w = policy.w
noise_scale = max(1e-3, noise_scale / 2)
policy.w += noise_scale * np.random.rand(*policy.w.shape)
else: # did not find better weights
noise_scale = min(2, noise_scale * 2)
policy.w = best_w + noise_scale * np.random.rand(*policy.w.shape)
if i_episode % print_every == 0:
print('Episode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
if np.mean(scores_deque)>=195.0:
print('Environment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_deque)))
policy.w = best_w
break
return scores
scores = hill_climbing()
```
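The adaptive noise-scaling rule above — halve the noise after an improvement, double it otherwise — can be isolated on a toy 1-D objective. Everything in this sketch (the objective, the constants, the use of gaussian instead of the notebook's uniform noise) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(w):
    return -(w - 3.0) ** 2  # maximised at w = 3

w = best_w = 0.0
best_R = -np.inf
noise = 1e-1
for _ in range(500):
    R = objective(w)
    if R >= best_R:            # improvement: keep weights, shrink noise
        best_R, best_w = R, w
        noise = max(1e-3, noise / 2)
    else:                      # no improvement: restart from best, grow noise
        noise = min(2.0, noise * 2)
    w = best_w + noise * rng.standard_normal()
```

Shrinking the noise exploits a good region; growing it after failures widens the search, mirroring the CartPole loop above.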
### 4. Plot the Scores
```
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
```
### 5. Watch a Smart Agent!
```
env = gym.make('CartPole-v0')
state = env.reset()
for t in range(2000): #Default of range(200) was way too short!
action = policy.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
```
# Project 1. Movie Review Sentiment Analysis
**Let's perform text sentiment analysis on the IMDB dataset using an RNN.**
The IMDB dataset — the first text dataset encountered in this book — consists of 50,000 movie reviews.
Each review consists of several English sentences; positive reviews (rated 7 or higher) are labeled 2, and negative reviews (rated 4 or lower) are labeled 1. The goal of this project is to feed review text into an RNN, compress the whole review into a representation, and build a simple classifier that judges whether the compressed review is positive or negative.
```
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchtext import data, datasets
# Hyperparameters
BATCH_SIZE = 64
lr = 0.001
EPOCHS = 10
USE_CUDA = torch.cuda.is_available()
DEVICE = torch.device("cuda" if USE_CUDA else "cpu")
print("Training on device:", DEVICE)
# Load the data
print("Loading data...")
TEXT = data.Field(sequential=True, batch_first=True, lower=True)
LABEL = data.Field(sequential=False, batch_first=True)
trainset, testset = datasets.IMDB.splits(TEXT, LABEL)
TEXT.build_vocab(trainset, min_freq=5)
LABEL.build_vocab(trainset)
# Split the training data into 80% train / 20% validation
trainset, valset = trainset.split(split_ratio=0.8)
train_iter, val_iter, test_iter = data.BucketIterator.splits(
(trainset, valset, testset), batch_size=BATCH_SIZE,
shuffle=True, repeat=False)
vocab_size = len(TEXT.vocab)
n_classes = 2
print("[train]: %d [validation]: %d [test]: %d [vocab]: %d [classes]: %d"
% (len(trainset),len(valset), len(testset), vocab_size, n_classes))
class BasicGRU(nn.Module):
def __init__(self, n_layers, hidden_dim, n_vocab, embed_dim, n_classes, dropout_p=0.2):
super(BasicGRU, self).__init__()
print("Building Basic GRU model...")
self.n_layers = n_layers
self.embed = nn.Embedding(n_vocab, embed_dim)
self.hidden_dim = hidden_dim
self.dropout = nn.Dropout(dropout_p)
self.gru = nn.GRU(embed_dim, self.hidden_dim,
num_layers=self.n_layers,
batch_first=True)
self.out = nn.Linear(self.hidden_dim, n_classes)
def forward(self, x):
x = self.embed(x)
h_0 = self._init_state(batch_size=x.size(0))
x, _ = self.gru(x, h_0) # [i, b, h]
h_t = x[:,-1,:]
h_t = self.dropout(h_t)  # note: dropout returns a new tensor; the result must be kept
logit = self.out(h_t) # [b, h] -> [b, o]
return logit
def _init_state(self, batch_size=1):
weight = next(self.parameters()).data
return weight.new(self.n_layers, batch_size, self.hidden_dim).zero_()
def train(model, optimizer, train_iter):
model.train()
for b, batch in enumerate(train_iter):
x, y = batch.text.to(DEVICE), batch.label.to(DEVICE)
y.data.sub_(1)  # convert labels to 0 and 1
optimizer.zero_grad()
logit = model(x)
loss = F.cross_entropy(logit, y)
loss.backward()
optimizer.step()
def evaluate(model, val_iter):
"""evaluate model"""
model.eval()
corrects, total_loss = 0, 0
for batch in val_iter:
x, y = batch.text.to(DEVICE), batch.label.to(DEVICE)
y.data.sub_(1)  # convert labels to 0 and 1
logit = model(x)
loss = F.cross_entropy(logit, y, reduction='sum')
total_loss += loss.item()
corrects += (logit.max(1)[1].view(y.size()).data == y.data).sum()
size = len(val_iter.dataset)
avg_loss = total_loss / size
avg_accuracy = 100.0 * corrects / size
return avg_loss, avg_accuracy
model = BasicGRU(1, 256, vocab_size, 128, n_classes, 0.5).to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
best_val_loss = None
for e in range(1, EPOCHS+1):
train(model, optimizer, train_iter)
val_loss, val_accuracy = evaluate(model, val_iter)
print("[Epoch: %d] val loss:%5.2f | val accuracy:%5.2f" % (e, val_loss, val_accuracy))
# Save the model with the lowest validation loss
if not best_val_loss or val_loss < best_val_loss:
if not os.path.isdir("snapshot"):
os.makedirs("snapshot")
torch.save(model.state_dict(), './snapshot/txtclassification.pt')
best_val_loss = val_loss
model.load_state_dict(torch.load('./snapshot/txtclassification.pt'))
test_loss, test_acc = evaluate(model, test_iter)
print('test loss: %5.2f | test accuracy: %5.2f' % (test_loss, test_acc))
```
# Decision Tree Algorithm 👨🏻💻
---
## SKlearn implementation
---
### `Imports`
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
```
### `Importing Dataset`
Next, we import the dataset from the CSV file to the Pandas dataframes.
```
col = [ 'Class Name','Left weight','Left distance','Right weight','Right distance']
df = pd.read_csv('./data/balance-scale.data',names=col,sep=',')
df.head()
```
### `Information About Dataset`
We can get the overall information of our data set by using the df.info function. From the output, we can see that it has 625 records with 5 fields.
```
df.info()
```
### `Exploratory Data Analysis (EDA)`
Let us do a bit of exploratory data analysis to understand our dataset better. We have plotted the classes using the countplot function. As the figure below shows, most of the class names fall under the labels R and L, which mean Right and Left respectively. Very few rows fall under B, which stands for Balanced.
```
sns.countplot(df['Class Name'])
sns.countplot(df['Left weight'],hue=df['Class Name'])
sns.countplot(df['Right weight'],hue=df['Class Name'])
```
### `Splitting the Dataset in Train-Test`
Before feeding the data into the model we first split it into train and test data using the train_test_split function.
```
from sklearn.model_selection import train_test_split
X = df.drop('Class Name',axis=1)
y = df[['Class Name']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3,random_state=42)
```
### `Training the Decision Tree Classifier`
We use the Gini index as the attribute-selection criterion when training the decision tree classifier with Sklearn's DecisionTreeClassifier().
We create the classifier by passing additional parameters such as random_state, max_depth, and min_samples_leaf to DecisionTreeClassifier().
Finally, we train it with the model.fit() method.
```
from sklearn.tree import DecisionTreeClassifier
clf_model = DecisionTreeClassifier(criterion="gini", random_state=42,max_depth=3, min_samples_leaf=5)
clf_model.fit(X_train,y_train)
```
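The Gini index named as the criterion above measures node impurity as `1 - sum(p_i**2)` over the class proportions `p_i`. A small standalone sketch of that formula (not sklearn's internal code):

```python
import numpy as np

def gini_impurity(labels):
    # 1 minus the sum of squared class proportions; 0 for a pure node, higher = more mixed
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

pure = gini_impurity(["L", "L", "L", "L"])
mixed = gini_impurity(["L", "R", "L", "R"])
```

The tree picks the split that most reduces this impurity, weighted by child-node sizes.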
### `Test Accuracy`
We will now test accuracy by using the classifier on the test data. For this we first use the model.predict function and pass X_test as its argument.
```
y_predict = clf_model.predict(X_test)
```
Next, we use the accuracy_score function from Sklearn to calculate the accuracy. We can see that we get a pretty good accuracy of 78.6% on our test data.
```
from sklearn.metrics import accuracy_score,classification_report,confusion_matrix
accuracy_score(y_test,y_predict)
```
### `Plotting Decision Tree`
We can plot our decision tree with the help of the Graphviz library by passing a handful of parameters such as the classifier model, the target values, and the feature names of our data.
```
target = list(df['Class Name'].unique())
feature_names = list(X.columns)
from sklearn import tree
import graphviz
dot_data = tree.export_graphviz(clf_model,
out_file=None,
feature_names=feature_names,
class_names=target,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
```
# PANDAS!!!
Pandas is made for working with data sets generally at or below about 1 GB, though this limit really varies with the memory of the machine you run it on. A good rule of thumb is to have at least five to ten times as much memory as your data set. Once a data set starts to exceed the single-digit gigabyte range, it's generally recommended to use a different library such as Vaex.
```
import pandas as pd
import numpy as np
import datetime as dt
import matplotlib.pyplot as plt
plt.rcParams.update({'figure.figsize': (14, 10),})
import json
account_info = pd.DataFrame({
"name": ["Bob", "Mary", "Mita"],
"account": [123846, 123972, 347209],
"balance": [123, 3972, 7209],
})
account_info
account_info[["name", "balance"]]
```
# The `iloc` method
A DataFrame’s rows can be accessed via the iloc method which uses a list-like syntax.
```
account_info.iloc[1]
account_info.iloc[:, [0, 2]]
account_info.iloc[account_info.index % 2 == 1]
account_info.index
list(range(0, 3, 1))
[0, 1, 2]%2
```
# The `loc` method
loc is similar to iloc, but it allows you to index into a DataFrame via column names or labels.
```
account_info.loc[1]
account_info.loc[1, "balance"]
```
# The `.iat` method
Get or set a single value by integer position.
```
account_info.iat[0,0]
```
# The `.at` method
Get or set a single value by index label.
```
account_info.at[0,'name']
```
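A quick side-by-side of the four accessors on the `account_info` frame (rebuilt here so the snippet is self-contained): `iloc`/`iat` take integer positions, `loc`/`at` take labels, and `iat`/`at` return single scalars.

```python
import pandas as pd

account_info = pd.DataFrame({
    "name": ["Bob", "Mary", "Mita"],
    "account": [123846, 123972, 347209],
    "balance": [123, 3972, 7209],
})

row_by_pos = account_info.iloc[1]               # positional: the second row
cell_by_label = account_info.loc[1, "balance"]  # label-based row/column lookup
fast_by_pos = account_info.iat[0, 0]            # scalar by integer position
fast_by_label = account_info.at[0, "name"]      # scalar by label
```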
# The `read_csv` and `read_json` methods
```
import io
data = io.StringIO(
"""
id,age,height,weight
129237,32,5.4,126
123083,20,6.1,145
"""
)
pd.read_csv(data)
pd.read_csv(data, usecols=["height", "age"])
data = io.StringIO(
"""
{
"columns": ["temp"],
"index": ["234unf923", "340inf351", "234abe045"],
"data": [[35.2],[32.5],[33.1]],
}
"""
)
data
temperatures = pd.read_json(
data,
orient="split",
)
temperatures
data = io.StringIO(
"""
[
{"location": "234unf923", "temp": 35.2},
{"location": "340inf351", "temp": 32.5},
{"location": "234abe045", "temp": 33.1},
]
"""
)
data
temperatures = pd.read_json(
data,
orient="records",
)
temperatures
np.random.rand(10,3)
df = pd.DataFrame(np.random.rand(10,3), columns = ['a', 'b', 'c'])
df
df.groupby('a').median()
df.groupby('a').b.pct_change()
df.axes
df.axes[0]
df.axes[1]
beatles = pd.DataFrame({
'name': ['Ringo', 'Paul', 'John', 'George'],
'surname': ['Starr', 'McCartney', 'Lennon', 'Harrison'],
'born': [1940, 1942, 1940, 1942],
'teacher': ['Martin', 'Epstein','Martin', 'Epstein']
})
beatles
beatles.groupby('teacher').median()
d = {
u'2012-06-08': 388,
u'2012-06-09': 388,
u'2012-06-10': 388,
u'2012-06-11': 389,
u'2012-06-12': 389,
u'2012-06-13': 389,
u'2012-06-14': 389,
u'2012-06-15': 389,
u'2012-06-16': 389,
u'2012-06-17': 389,
u'2012-06-18': 390,
u'2012-06-19': 390,
u'2012-06-20': 390,
u'2012-06-21': 390,
u'2012-06-22': 390,
u'2012-06-23': 390,
u'2012-06-24': 390,
u'2012-06-25': 391,
u'2012-06-26': 391,
u'2012-06-27': 391,
u'2012-06-28': 391,
u'2012-06-29': 391,
u'2012-06-30': 391,
u'2012-07-01': 391,
u'2012-07-02': 392,
u'2012-07-03': 392,
u'2012-07-04': 392,
u'2012-07-05': 392,
}
s = pd.Series(d, name='DateValue')
s.index
import pandas as pd
import numpy as np
import datetime as dt
fname = ["Paul", "John", "Richard", "George"]
lname = ["McCartney", "Lennon", "Starkey", "Harrison"]
birth = [1942, 1940, 1940, 1943]
group = {"first": fname, "last": lname, "birth": birth}
group
beatles = pd.DataFrame(group)
beatles
beatles.loc[0:2,'last']
d = {'col1': [1, 2], 'col2': [3, 4]}
pd.DataFrame(d)
[i for i in np.random.rand(10)]
np.random.rand(10).tolist()
# numpy.random.uniform(low=0.0, high=1.0, size=None)
np.random.uniform(-1.1,1.1,10).tolist()
SIZE = 50
df = pd.DataFrame({
'a': np.random.uniform(-1.1,1.1, SIZE),
'b': np.random.uniform(0,12.1, SIZE),
'c': np.random.uniform(-np.pi,np.pi, SIZE),
},pd.date_range(dt.datetime.today(), periods=SIZE, freq='S'))
df.info()
SIZE = 50
df = pd.DataFrame({
'dt': pd.date_range(dt.datetime.today(), periods=SIZE, freq='S'),
'a': np.random.uniform(-1.1,1.1, SIZE),
'b': np.random.uniform(0,12.1, SIZE),
'c': np.random.uniform(-np.pi,np.pi, SIZE),
})
df.set_index('dt', inplace=True)
df.info()
import matplotlib.pyplot as plt
plt.rcParams.update({'figure.figsize': (14, 10),})
df.plot()
import datetime as dt
dt.datetime.today()
dt.datetime.now().strftime("%s")
dt.datetime.fromtimestamp(1595066315.1000)
pd.date_range(dt.datetime.today(), periods=100, freq='S')
df = pd.read_json("https://raw.githubusercontent.com/domoritz/maps/master/data/iris.json")
df
df.groupby('species').sum()
s = pd.Series(data=np.arange(3))
pd.DataFrame(s,['A', 'B', 'C'])
portNames = df.groupby('portName').sum().index.to_list()
port_io = pd.DataFrame(df[(df['portName'] == portNames[0]) & (df.metricId == 854)]['current'].to_list()[0], columns=['x'])
port_io['time'] = port_io['x'].apply(lambda x: dt.datetime.fromtimestamp(x/1e3))
port_io.set_index('time', inplace=True)
port_io.drop(columns=['x'], inplace=True)
for portName in portNames:
port_io[portName] = pd.DataFrame(df[(df['portName'] == portName) & (df.metricId == 854)]['current'].to_list()[0], columns=['y'])
portName = portNames[1]
portName
import json
import requests
adr = 'https://dl.boxcloud.com/d/1/b1!M2krUimm8uPjg85XvBvHFKCQBa81J-Wftp8SnWgisAZJaQbZ0bFPTiE1eSyq0EqgJrJKXGIse4cvOP6RqmlRRJ9axbiO6w_nwKR1kmpzKUh99rmcS279QamNnVKe7YIvXYikhKdoUrLJJhmsb13qYoGzojIsmx3CphX69aveaP6NGMK25nibgKJZqnMmc4sxWV9yBDm7QgUiyrD4jFXZHG2nioUiqdT6fOvnX96ra7-o-07vAsfVRqtWYKdGpajdMiCarpqVUIDvruWRo1rDF2AD7C4DLQx858c0mIRiRWiJxWI-cfsbwE6-Tcgx4AKaJC-82Ytb-oEUWgT3rc11Otz-AinMGf_5uSVT4061ad6d1sAzaYZYu7BwKE1ljq_mOtUQ189t1r3e17F2-GBkCxoZRwfVZw4_-gsLPp6DCz99Mw8JpliV0S7CAYJmlbKin5k8ebblUMToYmv_IDiCkqCSoKY5Rs-f_QhpvQcEMZ6LinVipUsWMlMVNWATLCqW6kZNdZED0IhCcOTBsmJe74gkTkLUB-atHtjrCnG27Dh0GpPHt5Sn2w3FGmhajNZorJ4bQycLJQSuZUQWV6BcnsCGdhOiMlQ0_u0IOstOLBItz_HaJTvQ4rsMknHe0OIXM3EMtLddhF_zNA3yHDNWYxZMpGPd9LHq1_HQijQimyhfXLXUier1Vn-ta5Bf5ZxC9r5bfKBTbPczHnKlz2-va7_lLbu0g488uamFy47HPDK2sRNkf5LVEE2i8yQOw5Whbd3holerzjit3IiiKJg1qFlbQrjY1DgEh01TFbe5R7VNrvE17p4UWxV7siBdutdgegV_kj2El3m9hhjk6khABX-TYBKmMrzDIhQCiuuQZsfXUElypZy6qDMIvO0WdREoFZSiwf-wObNUAaLxbuxnHhye7Z_VSreyKGJ6th1-h0bnB5CwCkk04tx8v2vbqmVXMfbkmVL1zfIspubu73BSw-bsmmZy-nBWN2D7INhTwjbeuhYgd57GTiADvJmqEUfj74mbNTj9Ww173Rnhp8V5Y7Nxy-eSE0z_jQerOMf6479XKaGuHjkQW9XfrvLmg-_WMP075HYZh1QNLR5K2eabN3f5OrUcnXoptZNj_1e-ywzWCN_qDX-MvSnDoC4POfppZKKsUbvlq2L4xKLFgtqhNUr4TngUoBZOKEE6kCeLNUgnm9PLKeOpc8_iatlj58xxC0SdnGBXNo6ezssZ_AmAJnIQxBg763_kmH2zZW8_fEuqVW5QqvNRzjhJaItTUtN0-HPSvGb3yXJ-PDLIelNnN792SRz_pG8-qdJrlgnnGACt5xYGyZvCRiDfVX7EBYravt_EPeG0jtXJO4EJyBM6SZwPpr8sRSUuAdJJCDW9pDoipoz_SBSm7NZ2mHiZ-hAT2VgSzg9PL0hbRtkxwkfZiVsug6N1bTW8KIf6Z8jPFCQCwg../download'
r = requests.get(adr)
r
with open('data.json', 'w') as outfile:
json.dump(r.json(), outfile)
!curl -O $adr -o "data.json"
import io
data = io.StringIO(
"""
Country,Ranking,,Economy,US dollars,
USA,1,,United States," 21,427,700 ",
CHN,2,,China," 14,342,903 ",
JPN,3,,Japan," 5,081,770 ",
DEU,4,,Germany," 3,845,630 ",
IND,5,,India," 2,875,142 ",
GBR,6,,United Kingdom," 2,827,113 ",
FRA,7,,France," 2,715,518 ",
ITA,8,,Italy," 2,001,244 ",
BRA,9,,Brazil," 1,839,758 ",
CAN,10,,Canada," 1,736,426 ",
RUS,11,,Russian Federation," 1,699,877 ",a
KOR,12,,"Korea, Rep."," 1,642,383 ",
ESP,13,,Spain," 1,394,116 ",
AUS,14,,Australia," 1,392,681 ",
MEX,15,,Mexico," 1,258,287 ",
IDN,16,,Indonesia," 1,119,191 ",
NLD,17,,Netherlands," 909,070 ",
SAU,18,,Saudi Arabia," 792,967 ",
TUR,19,,Turkey," 754,412 ",
CHE,20,,Switzerland," 703,082 ",
POL,21,,Poland," 592,164 ",
THA,22,,Thailand," 543,650 ",
SWE,23,,Sweden," 530,833 ",
BEL,24,,Belgium," 529,607 ",
ARG,25,,Argentina," 449,663 ",b
NGA,26,,Nigeria," 448,120 ",
AUT,27,,Austria," 446,315 ",
IRN,28,,"Iran, Islamic Rep."," 445,345 ",
ARE,29,,United Arab Emirates," 421,142 ",
NOR,30,,Norway," 403,336 ",
ISR,31,,Israel," 395,099 ",
IRL,32,,Ireland," 388,699 ",
PHL,33,,Philippines," 376,796 ",
SGP,34,,Singapore," 372,063 ",
HKG,35,,"Hong Kong SAR, China"," 366,030 ",
MYS,36,,Malaysia," 364,702 ",
ZAF,37,,South Africa," 351,432 ",
DNK,38,,Denmark," 348,078 ",
COL,39,,Colombia," 323,803 ",
EGY,40,,"Egypt, Arab Rep."," 303,175 ",
BGD,41,,Bangladesh," 302,571 ",
CHL,42,,Chile," 282,318 ",
PAK,43,,Pakistan," 278,222 ",
FIN,44,,Finland," 268,761 ",
VNM,45,,Vietnam," 261,921 ",
ROU,46,,Romania," 250,077 ",
CZE,47,,Czech Republic," 246,489 ",
PRT,48,,Portugal," 237,686 ",
IRQ,49,,Iraq," 234,094 ",
PER,50,,Peru," 226,848 ",
GRC,51,,Greece," 209,853 ",
NZL,52,,New Zealand," 206,929 ",
QAT,53,,Qatar," 183,466 ",
KAZ,54,,Kazakhstan," 180,162 ",
DZA,55,,Algeria," 169,988 ",
HUN,56,,Hungary," 160,967 ",
UKR,57,,Ukraine," 153,781 ",a
KWT,58,,Kuwait," 134,761 ",
MAR,59,,Morocco," 118,725 ",c
ECU,60,,Ecuador," 107,436 ",
SVK,61,,Slovak Republic," 105,422 ",
PRI,62,,Puerto Rico," 104,989 ",
CUB,63,,Cuba," 100,023 ",
ETH,64,,Ethiopia," 96,108 ",
KEN,65,,Kenya," 95,503 ",
AGO,66,,Angola," 94,635 ",
DOM,67,,Dominican Republic," 88,941 ",
LKA,68,,Sri Lanka," 84,009 ",
OMN,69,,Oman," 76,983 ",
GTM,70,,Guatemala," 76,710 ",
MMR,71,,Myanmar," 76,086 ",
LUX,72,,Luxembourg," 71,105 ",
BGR,73,,Bulgaria," 67,927 ",
GHA,74,,Ghana," 66,984 ",
PAN,75,,Panama," 66,801 ",
TZA,76,,Tanzania," 63,177 ",d
BLR,77,,Belarus," 63,080 ",
CRI,78,,Costa Rica," 61,774 ",
HRV,79,,Croatia," 60,416 ",
CIV,80,,Côte d'Ivoire," 58,792 ",
UZB,81,,Uzbekistan," 57,921 ",
URY,82,,Uruguay," 56,046 ",
LTU,83,,Lithuania," 54,219 ",
MAC,84,,"Macao SAR, China"," 53,859 ",
SVN,85,,Slovenia," 53,742 ",
LBN,86,,Lebanon," 53,367 ",
LBY,87,,Libya," 52,076 ",
SRB,88,,Serbia," 51,409 ",
AZE,89,,Azerbaijan," 48,048 ",
COD,90,,"Congo, Dem. Rep."," 47,320 ",
JOR,91,,Jordan," 43,744 ",
BOL,92,,Bolivia," 40,895 ",
TKM,93,,Turkmenistan," 40,761 ",
TUN,94,,Tunisia," 38,798 ",
CMR,95,,Cameroon," 38,760 ",
BHR,96,,Bahrain," 38,574 ",
PRY,97,,Paraguay," 38,145 ",
UGA,98,,Uganda," 34,387 ",
LVA,99,,Latvia," 34,117 ",
EST,100,,Estonia," 31,387 ",
NPL,101,,Nepal," 30,641 ",
YEM,102,,"Yemen, Rep."," 27,591 ",
KHM,103,,Cambodia," 27,089 ",
SLV,104,,El Salvador," 27,023 ",
HND,105,,Honduras," 25,095 ",
PNG,106,,Papua New Guinea," 24,970 ",
CYP,107,,Cyprus," 24,565 ",e
ISL,108,,Iceland," 24,188 ",
TTO,109,,Trinidad and Tobago," 24,100 ",
SEN,110,,Senegal," 23,578 ",
ZMB,111,,Zambia," 23,065 ",
ZWE,112,,Zimbabwe," 21,441 ",
BIH,113,,Bosnia and Herzegovina," 20,048 ",
AFG,114,,Afghanistan," 19,101 ",
SDN,115,,Sudan," 18,902 ",
BWA,116,,Botswana," 18,341 ",
LAO,117,,Lao PDR," 18,174 ",
GEO,118,,Georgia," 17,743 ",f
MLI,119,,Mali," 17,510 ",
GAB,120,,Gabon," 16,658 ",
JAM,121,,Jamaica," 16,458 ",
BFA,122,,Burkina Faso," 15,746 ",
ALB,123,,Albania," 15,278 ",
MOZ,124,,Mozambique," 14,934 ",
MLT,125,,Malta," 14,786 ",
PSE,126,,West Bank and Gaza," 14,616 ",
BEN,127,,Benin," 14,391 ",
MUS,128,,Mauritius," 14,180 ",
MDG,129,,Madagascar," 14,084 ",
MNG,130,,Mongolia," 13,853 ",
ARM,131,,Armenia," 13,673 ",
GIN,132,,Guinea," 13,590 ",
BRN,133,,Brunei Darussalam," 13,469 ",
NER,134,,Niger," 12,928 ",
BHS,135,,"Bahamas, The"," 12,827 ",
MKD,136,,North Macedonia," 12,695 ",
NIC,137,,Nicaragua," 12,521 ",
NAM,138,,Namibia," 12,367 ",
MDA,139,,Moldova," 11,955 ",g
TCD,140,,Chad," 11,315 ",
GNQ,141,,Equatorial Guinea," 11,027 ",
COG,142,,"Congo, Rep."," 10,821 ",
RWA,143,,Rwanda," 10,122 ",
HTI,144,,Haiti," 8,499 ",
KGZ,145,,Kyrgyz Republic," 8,455 ",
TJK,146,,Tajikistan," 8,117 ",
XKX,147,,Kosovo," 7,926 ",
MWI,148,,Malawi," 7,667 ",
MRT,149,,Mauritania," 7,594 ",
MCO,150,,Monaco," 7,188 ",
IMN,151,,Isle of Man," 6,771 ",
LIE,152,,Liechtenstein," 6,553 ",
GUM,153,,Guam," 5,920 ",
MDV,154,,Maldives," 5,729 ",
FJI,155,,Fiji," 5,536 ",
MNE,156,,Montenegro," 5,495 ",
CYM,157,,Cayman Islands," 5,485 ",
TGO,158,,Togo," 5,460 ",
BRB,159,,Barbados," 5,209 ",
SWZ,160,,Eswatini," 4,405 ",
GUY,161,,Guyana," 4,280 ",
SUR,162,,Suriname," 3,985 ",
SLE,163,,Sierra Leone," 3,941 ",
VIR,164,,Virgin Islands (U.S.)," 3,855 ",
DJI,165,,Djibouti," 3,319 ",
AND,166,,Andorra," 3,154 ",
CUW,167,,Curaçao," 3,128 ",
LBR,168,,Liberia," 3,071 ",
ABW,169,,Aruba," 3,056 ",
GRL,170,,Greenland," 3,052 ",
BDI,171,,Burundi," 3,012 ",
FRO,172,,Faroe Islands," 2,833 ",
LSO,173,,Lesotho," 2,460 ",
BTN,174,,Bhutan," 2,447 ",
CAF,175,,Central African Republic," 2,220 ",
LCA,176,,St. Lucia," 2,122 ",
CPV,177,,Cabo Verde," 1,982 ",
BLZ,178,,Belize," 1,880 ",
GMB,179,,"Gambia, The"," 1,764 ",
ATG,180,,Antigua and Barbuda," 1,728 ",
SYC,181,,Seychelles," 1,699 ",
TLS,182,,Timor-Leste," 1,674 ",
SMR,183,,San Marino," 1,638 ",
SLB,184,,Solomon Islands," 1,425 ",
GNB,185,,Guinea-Bissau," 1,340 ",
MNP,186,,Northern Mariana Islands," 1,323 ",
GRD,187,,Grenada," 1,228 ",
COM,188,,Comoros," 1,186 ",
KNA,189,,St. Kitts and Nevis," 1,051 ",
TCA,190,,Turks and Caicos Islands," 1,022 ",
VUT,191,,Vanuatu, 917 ,
WSM,192,,Samoa, 851 ,
VCT,193,,St. Vincent and the Grenadines, 825 ,
ASM,194,,American Samoa, 636 ,
DMA,195,,Dominica, 596 ,
TON,196,,Tonga, 450 ,
STP,197,,São Tomé and Principe, 429 ,
FSM,198,,"Micronesia, Fed. Sts.", 402 ,
PLW,199,,Palau, 284 ,
MHL,200,,Marshall Islands, 221 ,
KIR,201,,Kiribati, 195 ,
NRU,202,,Nauru, 118 ,
TUV,203,,Tuvalu, 47 ,
"""
)
data
gdp = pd.read_csv(data, index_col='Ranking', usecols=[0,1,3,4])
gdp.head(50)
gdp.columns
gdp.rename(columns={' Country': 'Country', 'US dollars': 'Nominal' }, inplace=True)
gdp[gdp['Country'] == 'CZE']
```
# `applymap`
```
str.strip
gdp = gdp.applymap(str.strip)
gdp.Country[1]
gdp.Nominal[1] = gdp.Nominal[1].replace(',','')
gdp['Nominal'] = gdp['Nominal'].str.replace(',','')
gdp.Nominal[3]
import locale
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
locale.atoi(gdp.Nominal[1])
```
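`applymap` applies a function element-wise over every cell, which is how `str.strip` cleans the padded strings above. A toy version:

```python
import pandas as pd

df = pd.DataFrame({"a": [" x ", " y"], "b": ["z ", " w "]})
clean = df.applymap(str.strip)  # element-wise, unlike apply (column/row-wise)
```

Note that in pandas ≥ 2.1, `applymap` is deprecated in favour of `DataFrame.map`.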
# Convert number strings with commas in pandas DataFrame
```
gdp.dtypes
gdp['Nominal'] = gdp['Nominal'].astype(int)
gdp.dtypes
pd.to_numeric(gdp['Nominal'])
import locale
locale.setlocale(locale.LC_NUMERIC, '')
pd.to_numeric(gdp['Nominal'])
gdp.at[47,'Country'].strip()
gdp.at[47,'Economy']
type(gdp.at[47,'Nominal'])
gdp[gdp.Country == 'CZE']
gdp.head(25)
gdp[['Country','Nominal']].head(50)
gdp['Country'][1]
gdp.loc[1]
gdp.loc[1,'Country']
gdp.iloc[0,0]
gdp.index
for i in gdp.index:
print(gdp.Country[i], end=', ')
mtcars = pd.DataFrame({
'mpg':[21,21,22.8,21.4,18.7,18.1,18.3,24.4,22.8,19.2],
'cyl':[6,6,4,6,8,6,8,4,4,4],
'disp':[160,160,108,258,360,225,360,146.7,140.8,167.7],
'hp':[110,110,93,110,175,105,245,62,95,123],
'category':['SUV','Sedan','Sedan','Hatchback','SUV','Sedan','SUV','Hatchback','SUV','Sedan']
})
mtcars
pd.plotting.andrews_curves(mtcars,'category')
```
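The strip-commas-then-cast pipeline used on the GDP table, in miniature:

```python
import pandas as pd

gdp = pd.DataFrame({"Nominal": [" 21,427,700 ", " 14,342,903 "]})
# Strip padding, drop the thousands separators, then cast to integers
gdp["Nominal"] = gdp["Nominal"].str.strip().str.replace(",", "", regex=False)
gdp["Nominal"] = gdp["Nominal"].astype(int)
```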
# Boolean indexing
Another common operation is the use of boolean vectors to filter the data.
The operators are:
| pd | bool |
| :---- | :----: |
| \| | or |
| & | and |
| ~ | not |
```
from pandas_datareader import wb
df = pd.DataFrame(
data={
'ports': [
['I0232', 'I0302'],
['I0232', 'I0302', 'I0232', 'I0302'],
['I0232', 'I0302'],
['I0232', 'I0302'],
['I0232', 'I0302', 'I0232', 'I0302'],
['I0232', 'I0302', 'I0232', 'I0302'],
['I0232', 'I0302'],
]
}
)
df
df.index
df.columns
df = pd.DataFrame(df.ports.tolist(), columns=['io0', 'io1', 'io2', 'io3' ])
df.columns.tolist()
df['io0'] = df['io0'].str.lstrip('I')
df
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))
df
(df > 0).all()
(df > 0).any()
df.to_numpy()
```
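The operator table above in action — note the parentheses around each condition, since `&`, `|`, and `~` bind more tightly than comparisons:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, -2, 3, -4], "B": [-1, 2, 3, 4]})

both_pos = df[(df["A"] > 0) & (df["B"] > 0)]    # and
either_pos = df[(df["A"] > 0) | (df["B"] > 0)]  # or
a_not_pos = df[~(df["A"] > 0)]                  # not
```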
# Demonstration of the Metrics To-Date
For a complete list of metrics and their documentation, please see the API Metrics [documentation](../API/simulation_api.md#metrics-computation).
This demonstration will rely on the results produced in the "How To" notebook.
```
from pprint import pprint
import pandas as pd
from wombat.core import Simulation
from wombat.core.library import load_yaml
pd.set_option("display.float_format", '{:,.2f}'.format)
pd.set_option("display.max_rows", 1000)
pd.set_option("display.max_columns", 1000)
```
## Setup
The simulations from the How To notebook are rerun here because creating a Metrics class from scratch is not recommended: it requires a large number of inputs, and the initialization is already handled in the simulation API's run method.
```
simulation_name = "dinwoodie_base"
sim = Simulation(simulation_name, "DINWOODIE", "base.yaml")
sim.run()
# For convenience only
metrics = sim.metrics
```
## Availability
There are two methods to produce availability, which have their own function calls:
- energy: `production_based_availability`
- time: `time_based_availability`
Here, we will go through the various input definitions to get time-based availability data as both methods use the same inputs, and provide outputs in the same format.
`frequency` options:
- project: computed across the whole simulation
- annual: computed on a yearly basis
- monthly: computed across years on a monthly basis
- month-year: computed on a month-by-year basis
`by` options:
- windfarm: computed across all turbines
- turbine: computed for each turbine
```
# Project total at the whole windfarm level
total = metrics.time_based_availability(frequency="project", by="windfarm")
print(f"Project total: {total * 100:.1f}%")
# Project total at the turbine level
metrics.time_based_availability(frequency="project", by="turbine")
# Project annual totals at the windfarm level
metrics.time_based_availability(frequency="annual", by="windfarm")
# Project monthly totals at the windfarm level
metrics.time_based_availability(frequency="monthly", by="windfarm")
# Project month-by-year totals at the windfarm level
# NOTE: This is limited to the first two years for cleanliness of the notebook
metrics.time_based_availability(frequency="month-year", by="windfarm").head(24)
```
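Conceptually, time-based availability is the fraction of timesteps each turbine spends operating. A sketch of that idea (the flags and column names below are made up for illustration, not WOMBAT internals):

```python
import pandas as pd

# Hourly operating flags for two turbines (1 = operating, 0 = down); illustrative only
ops = pd.DataFrame(
    {"t1": [1, 1, 0, 1], "t2": [1, 0, 0, 1]},
    index=pd.date_range("2020-01-01", periods=4, freq="h"),
)

by_turbine = ops.mean()           # per-turbine availability ("by=turbine")
windfarm = ops.to_numpy().mean()  # pooled across all turbines ("by=windfarm")
```

The `frequency` options then amount to grouping these flags by year, month, or month-year before averaging.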
## Capacity Factor
Here, we will go through the various input definitions to get capacity factor data. The inputs are very similar to that of the availability calculation.
`which` options:
- net: net capacity factor, based on actual production
- gross: gross capacity factor, based on potential production
`frequency` options:
- project: computed across the whole simulation
- annual: computed on a yearly basis
- monthly: computed across years on a monthly basis
- month-year: computed on a month-by-year basis
`by` options:
- windfarm: computed across all turbines
- turbine: computed for each turbine
```
# Project total at the whole windfarm level
cf = metrics.capacity_factor(which="net", frequency="project", by="windfarm")
print(f" Net Capacity Factor: {cf:.2f}%")
cf = metrics.capacity_factor(which="gross", frequency="project", by="windfarm")
print(f"Gross Capacity Factor: {cf:.2f}%")
# Project total at the turbine level
metrics.capacity_factor(which="net", frequency="project", by="turbine")
# Project annual totals at the windfarm level
metrics.capacity_factor(which="net", frequency="annual", by="windfarm")
# Project monthly totals at the windfarm level
metrics.capacity_factor(which="net", frequency="monthly", by="windfarm")
# Project month-by-year totals at the windfarm level
# NOTE: This is limited to the first two years for cleanliness of the notebook
metrics.capacity_factor(which="net", frequency="month-year", by="windfarm").head(24)
```
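As with availability, capacity factor is a ratio: energy produced over the energy the plant would produce running at full capacity. A hedged sketch with invented numbers (the actual WOMBAT accounting may differ in detail):

```python
capacity_mw = 6.0            # hypothetical windfarm capacity
hours = 24 * 365             # one year of hourly operation
actual_mwh = 18_000.0        # made-up annual net production
potential_mwh = 21_000.0     # made-up production absent downtime

# Capacity factor = energy / (nameplate capacity * elapsed hours).
net_cf = actual_mwh / (capacity_mw * hours)
gross_cf = potential_mwh / (capacity_mw * hours)

print(f"  Net CF: {net_cf:.2%}")    #   Net CF: 34.25%
print(f"Gross CF: {gross_cf:.2%}")  # Gross CF: 39.95%
```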
## Task Completion Rate
Here, we will go through the various input definitions to get the task completion rates. The inputs are very similar to that of the availability calculation.
`which` options:
- scheduled: scheduled maintenance only (classified as maintenance tasks in the inputs)
- unscheduled: unscheduled maintenance only (classified as failure events in the inputs)
- both: scheduled and unscheduled maintenance combined
`frequency` options:
- project: computed across the whole simulation
- annual: computed on a yearly basis
- monthly: computed across years on a monthly basis
- month-year: computed on a month-by-year basis
```
# Project total at the whole windfarm level
total = metrics.task_completion_rate(which="scheduled", frequency="project")
print(f" Scheduled Task Completion Rate: {total * 100:.0f}%")
total = metrics.task_completion_rate(which="unscheduled", frequency="project")
print(f"Unscheduled Task Completion Rate: {total * 100:.0f}%")
total = metrics.task_completion_rate(which="both", frequency="project")
print(f" Overall Task Completion Rate: {total * 100:.0f}%")
# Project annual totals at the windfarm level
metrics.task_completion_rate(which="both", frequency="annual")
# Project monthly totals at the windfarm level
metrics.task_completion_rate(which="both", frequency="monthly")
# Project month-by-year totals at the windfarm level
# NOTE: This is limited to the first two years for cleanliness of the notebook
metrics.task_completion_rate(which="both", frequency="month-year").head(24)
```
## Equipment Costs
Here, we will go through the various input definitions to get the equipment cost data.
`frequency` options:
- project: computed across the whole simulation
- annual: computed on a yearly basis
- monthly: computed across years on a monthly basis
- month-year: computed on a month-by-year basis
`by_equipment` options:
- `True`: computed for each piece of equipment
- `False`: computed across all equipment used
```
# Project total at the whole windfarm level
total = metrics.equipment_costs(frequency="project", by_equipment=False)
print(f"Project total: ${total / metrics.project_capacity:,.2f}/MW")
# Project totals at the equipment level
metrics.equipment_costs(frequency="project", by_equipment=True)
# Project annual totals at the windfarm level
metrics.equipment_costs(frequency="annual", by_equipment=False)
# Project monthly totals at the equipment level
metrics.equipment_costs(frequency="monthly", by_equipment=True)
# Project month-by-year totals at the equipment level
# NOTE: This is limited to the first two years for cleanliness of the notebook
metrics.equipment_costs(frequency="month-year", by_equipment=True).head(24)
```
## Service Equipment Utilization Rate
Here, we will go through the various input definitions to get the service equipment utilization rates.
`frequency` options:
- project: computed across the whole simulation
- annual: computed on a yearly basis
```
# Project totals at the project level
total = metrics.service_equipment_utilization(frequency="project")
total
# Annualized project totals
total = metrics.service_equipment_utilization(frequency="annual")
total
```
## Labor Costs
Here, we will go through the various input definitions to get the labor cost data.
`frequency` options:
- project: computed across the whole simulation
- annual: computed on a yearly basis
- monthly: computed across years on a monthly basis
- month-year: computed on a month-by-year basis
`by_type` options:
- `True`: computed for each labor type
- `False`: computed across all labor types
```
# Project total at the whole windfarm level
total = metrics.labor_costs(frequency="project", by_type=False)
print(f"Project total: ${total / metrics.project_capacity:,.2f}/MW")
# Project totals for each type of labor
# NOTE: Only salaried labor was defined for these analyses
metrics.labor_costs(frequency="project", by_type=True)
# Project annual totals for all labor
metrics.labor_costs(frequency="annual", by_type=False)
# Project monthly totals for all labor
metrics.labor_costs(frequency="monthly", by_type=False)
# Project month-by-year totals for all labor
# NOTE: This is limited to the first two years only
metrics.labor_costs(frequency="month-year", by_type=False).head(24)
```
## Equipment and Labor Costs
Here, we will go through the various input definitions to get the equipment and labor cost data broken out by expense categories.
`frequency` options:
- project: computed across the whole simulation
- annual: computed on a yearly basis
- monthly: computed across years on a monthly basis
- month-year: computed on a month-by-year basis
`by_category` options:
- `True`: broken out by equipment cost, each labor type, and totals
- `False`: computed as a single total
##### **NOTE:** For this breakdown the expense category (reason) is distributed across the rows in addition to time.
`reason` definitions:
- Maintenance: routine maintenance
- Repair: unscheduled maintenance, ranging from inspections to replacements
- Weather Delay: Any delays caused by unsafe weather conditions
- No Requests: Equipment and labor are active, but there are no repairs or maintenance tasks to be completed
- Not in Shift: Any time outside of the operating hours of the windfarm
```
# Project totals
metrics.equipment_labor_cost_breakdowns(frequency="project", by_category=False)
# Project totals by each category
metrics.equipment_labor_cost_breakdowns(frequency="project", by_category=True)
# Project annual totals
# NOTE: This is limited to the first two years
metrics.equipment_labor_cost_breakdowns(frequency="annual", by_category=False).head(10)
# Project monthly totals
# NOTE: This is limited to the first two years
metrics.equipment_labor_cost_breakdowns(frequency="monthly", by_category=False).head(10)
# Project month-by-year totals
# NOTE: This is limited to the first two years
metrics.equipment_labor_cost_breakdowns(frequency="month-year", by_category=False).head(20)
```
## Component Costs
Here, we will go through the various input definitions to get the component cost data broken out by various categories.
**NOTE**: The component costs will not sum to the whole project's operations costs, because delays not associated with any repair or maintenance task (such as idle time with no requests to process) are excluded.
`frequency` options:
- project: computed across the whole simulation
- annual: computed on a yearly basis
- monthly: computed across years on a monthly basis
- month-year: computed on a month-by-year basis
`by_category` options:
- `True`: computed for each cost category (includes total)
- `False`: computed as a single total
`by_action` options:
- `True`: computed for each of "repair", "maintenance", and "delay"
- `False`: computed as a single total
##### **NOTE:** For this breakdown the expense category (reason) is distributed across the rows in addition to time.
`action` definitions:
- maintenance: routine maintenance
- repair: unscheduled maintenance, ranging from inspections to replacements
- delay: Any delays caused by unsafe weather conditions or not being able to finish a process within a single shift
```
# Project totals by component
metrics.component_costs(frequency="project", by_category=False, by_action=False)
# Project totals by each category and action type
metrics.component_costs(frequency="project", by_category=True, by_action=True)
# Project annual totals by category
# NOTE: This is limited to the first two years
metrics.component_costs(frequency="annual", by_category=True, by_action=False).head(28)
# Project monthly totals
# NOTE: This is limited to the first two months
metrics.component_costs(frequency="monthly", by_category=True, by_action=False).head(28)
# Project month-by-year totals
# NOTE: This is limited to the first two months
metrics.component_costs(frequency="month-year", by_category=True, by_action=False).head(28)
```
## Fixed Cost Impacts
Here, we will go through the various input definitions to get the fixed cost data.
`frequency` options:
- project: computed across the whole simulation
- annual: computed on a yearly basis
`resolution` options:
- high: computed across the lowest itemized cost levels
- medium: computed across overarching cost levels
- low: computed as single total
```
# The resolution hierarchy for fixed costs
pprint(metrics.fixed_costs.hierarchy)
# Project totals at the highest level
metrics.project_fixed_costs(frequency="project", resolution="low")
# Project totals at the medium level
metrics.project_fixed_costs(frequency="project", resolution="medium")
# Project totals at the lowest level
metrics.project_fixed_costs(frequency="project", resolution="high")
# Project annualized totals at the medium level
metrics.project_fixed_costs(frequency="annual", resolution="medium")
```
## Process Times
There are no inputs for the process timing because it is a slow calculation, so aggregation is left to the user for now. The results correspond to the number of hours required to complete each repair or maintenance activity.
```
# Project totals at the project level
total = metrics.process_times()
total
```
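Since aggregation is left to the user, here is one way it might be done with pandas. This is a hedged sketch: the column names below are assumptions for illustration, not the actual WOMBAT output schema.

```python
import pandas as pd

# Hypothetical per-task durations in hours (illustrative columns, not WOMBAT's).
times = pd.DataFrame({
    "category": ["repair", "repair", "maintenance", "maintenance"],
    "hours":    [12.5, 30.0, 4.0, 6.0],
})

# Aggregate total and mean hours per task category.
summary = times.groupby("category")["hours"].agg(["sum", "mean"])
print(summary)
```

The same `groupby`/`agg` pattern extends to any grouping column the real output provides.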
## Power Production
Here, we will go through the various input definitions to get the power production data.
`frequency` options:
- project: computed across the whole simulation
- annual: computed on a yearly basis
- monthly: computed across years on a monthly basis
- month-year: computed on a month-by-year basis
`by_turbine` options:
- `True`: computed for each turbine
- `False`: computed for the whole windfarm
```
# Project total at the whole windfarm level
total = metrics.power_production(frequency="project", by_turbine=False)
total
# Project totals at the turbine level
metrics.power_production(frequency="project", by_turbine=True)
# Project annual totals for the windfarm
metrics.power_production(frequency="annual", by_turbine=False)
# Project monthly totals for the windfarm
metrics.power_production(frequency="monthly", by_turbine=False)
# Project month-by-year totals for the windfarm
# NOTE: This is limited to the first two years only
metrics.power_production(frequency="month-year", by_turbine=False).head(24)
```
## PySAM-Powered Results
For a number of project financial metrics, the PySAM library is utilized.
<div class="alert alert-block alert-warning">
<b>NOTE:</b> If a "SAM_settings" file is not provided to the simulation, the following metrics cannot be calculated and will raise a `NotImplementedError`.
</div>
With the above warning in mind, the appropriate simulation outputs are provided as inputs to PySAM upon initialization to ensure all values are aligned.
### Net Present Value (NPV)
```
try:
npv = metrics.pysam_npv()
print(f"NPV: ${npv:,.0f}")
except NotImplementedError as e:
print(e)
```
### Real Levelized Cost of Energy (LCOE)
```
try:
lcoe = metrics.pysam_lcoe_real()
print(f"Real LCOE: ${lcoe:,.2f}/kW")
except NotImplementedError as e:
print(e)
```
### Nominal Levelized Cost of Energy (LCOE)
```
try:
lcoe = metrics.pysam_lcoe_nominal()
print(f"Nominal LCOE: ${lcoe:,.2f}/kW")
except NotImplementedError as e:
print(e)
```
### After-Tax Internal Rate of Return (IRR)
```
try:
irr = metrics.pysam_irr()
print(f"IRR: {irr:,.1f}%")
except NotImplementedError as e:
print(e)
```
### One Data Frame to Rule Them All
For this demonstration we will manually load a PySAM settings file and trigger the setup, though this practice should generally be avoided.
```
SAM_settings = "SAM_Singleowner_defaults.yaml"
metrics.sam_settings = load_yaml(sim.env.data_dir / "windfarm", SAM_settings)
metrics._setup_pysam()
metrics.pysam_all_outputs()
sim.env.cleanup_log_files(log_only=False)
```
# Tribolium embryo morphometry over time in Napari
Authors: Robert Haase, Daniela Vorkel, 2020
This is the pyclesperanto version of a workflow earlier [published for clij2](https://clij.github.io/clij2-docs/md/tribolium_morphometry/).
[ImageJ Macro original](https://github.com/clij/clij2-docs/tree/master/src/main/macro/tribolium_morphometry.ijm)
This script is an example of heavy GPU-accelerated processing. It is recommended to use a dedicated
graphics card with at least 8 GB of GDDR6 memory. Otherwise, it may be quite slow.
Let's start by checking that pyclesperanto is installed and which GPU it uses.
```
"""
import pyclesperanto_prototype as cle
import numpy as np
# show all graphics cards
#print(cle._tier0._pycl.filter_devices())
# show only GPU devices
print(cle._tier0._pycl.filter_devices(dev_type='gpu'))
# selecting an Nvidia RTX
cle.select_device("Quadro M2200")
print("Using OpenCL device " + cle.get_device().name)
"""
%gui qt
```
## Load a data set
The dataset shows a *Tribolium castaneum* embryo, imaged by a custom light sheet microscope, at a wavelength of 488nm (Imaging credits: Daniela Vorkel, Myers lab, MPI CBG).
The data set has been resampled to a voxel size of 1x1x1 microns. The embryo expresses nuclei-GFP. We will use the dataset to detect nuclei and to generate an estimated cell-segmentation.
All processing steps are performed in 3D space.
```
from aicspylibczi import CziFile
import imgfile_tools as imf
from aicsimageio import AICSImage
from skimage import data
import napari
import dask
import dask.array as da
from IPython.display import display, HTML
from dask import delayed
filename = r"c:\Testdata_Zeiss\LatticeLightSheet\LS_Mitosis_T=150-300.czi"
# get the metadata
md, addmd = imf.get_metadata(filename)
czi = CziFile(filename)
def load_image(czi, t=0):
zstack = czi.read_image(S=0, T=t)
return zstack
#lazy_imread = delayed(load_image)
#reader = lazy_imread(czi, t=0) # doesn't actually read the file
#array = reader.compute() # *now* it reads.
"""
sample = imread(filenames[0])
lazy_imread = delayed(imread) # lazy reader
lazy_arrays = [lazy_imread(fn) for fn in filenames]
dask_arrays = [
da.from_delayed(delayed_reader, shape=sample.shape, dtype=sample.dtype)
for delayed_reader in lazy_arrays
]
# Stack into one large dask.array
stack = da.stack(dask_arrays, axis=0)
stack.shape # (nfiles, nz, ny, nx)
# in jupyter notebook the repr of a dask stack provides a useful visual:
stack
"""
sp = [md['SizeC'], md['SizeZ'], md['SizeY'], md['SizeX']]
# create dask stack of lazy image readers
lazy_process_image = dask.delayed(load_image) # lazy reader
lazy_arrays = [lazy_process_image(czi, t=t) for t in range(0, md['SizeT'])]
dask_arrays = [
da.from_delayed(lazy_array, shape=sp, dtype=md['NumPy.dtype'])
for lazy_array in lazy_arrays
]
# Stack into one large dask.array
dask_stack = da.stack(dask_arrays, axis=0)
print(dask_stack.shape)
dask_stack
viewer = napari.Viewer()
# configure napari automatically based on metadata and show stack
layers = imf.show_napari(viewer, dask_stack, md,
blending='additive',
gamma=0.85,
add_mdtable=True,
rename_sliders=True)
from napari.utils import nbscreenshot
nbscreenshot(viewer)
```
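The lazy-reader pattern used above (a delayed function per timestep, materialized only on access) can be sketched without dask using plain closures. `make_lazy_reader` and the fabricated frames below are stand-ins for the CZI reads, not real library calls:

```python
def make_lazy_reader(t):
    """Return a zero-cost placeholder; the (fake) read happens only when called."""
    def read():
        # Stand-in for czi.read_image(S=0, T=t): fabricate a small 4x4 frame.
        return [[t] * 4 for _ in range(4)]
    return read

# Building the stack is instant; no frame has been "read" yet.
lazy_stack = [make_lazy_reader(t) for t in range(150)]

# Only now is timestep 42 materialized, analogous to dask's .compute().
frame = lazy_stack[42]()
print(len(lazy_stack), frame[0][0])  # 150 42
```

dask adds what this sketch lacks: known shapes/dtypes up front (via `da.from_delayed`), caching, and chunked parallel execution.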
# Practice Assignment: Understanding Distributions Through Sampling
***This assignment is optional, and I encourage you to share your solutions with me and your peers in the discussion forums!***
To complete this assignment, create a code cell that:
* Creates a number of subplots using the `pyplot subplots` or `matplotlib gridspec` functionality.
* Creates an animation, pulling between 100 and 1000 samples from each of the random variables (`x1`, `x2`, `x3`, `x4`) for each plot and plotting this as we did in the lecture on animation.
* **Bonus:** Go above and beyond and "wow" your classmates (and me!) by looking into matplotlib widgets and adding a widget which allows for parameterization of the distributions behind the sampling animations.
Tips:
* Before you start, think about the different ways you can create this visualization to be as interesting and effective as possible.
* Take a look at the histograms below to get an idea of what the random variables look like, as well as their positioning with respect to one another. This is just a guide, so be creative in how you lay things out!
* Try to keep the length of your animation reasonable (roughly between 10 and 30 seconds).
```
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
# generate 4 random variables from the random, gamma, exponential, and uniform distributions
x1 = np.random.normal(-2.5, 1, 10000)
x2 = np.random.gamma(2, 1.5, 10000)
x3 = np.random.exponential(2, 10000)+7
x4 = np.random.uniform(14,20, 10000)
# plot the histograms
plt.figure(figsize=(9,3))
# `normed` was removed in Matplotlib 3.1; `density` is the equivalent argument
plt.hist(x1, density=True, bins=20, alpha=0.5)
plt.hist(x2, density=True, bins=20, alpha=0.5)
plt.hist(x3, density=True, bins=20, alpha=0.5)
plt.hist(x4, density=True, bins=20, alpha=0.5);
plt.axis([-7,21,0,0.6])
plt.text(x1.mean()-1.5, 0.5, 'x1\nNormal')
plt.text(x2.mean()-1.5, 0.5, 'x2\nGamma')
plt.text(x3.mean()-1.5, 0.5, 'x3\nExponential')
plt.text(x4.mean()-1.5, 0.5, 'x4\nUniform')
fig , ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex=True, sharey=True)
for ax in [ax1, ax2, ax3, ax4]:
for label in ax.get_xticklabels() + ax.get_yticklabels():
label.set_visible(True)
for ax in fig.get_axes():
ax.clear()
x1 = np.random.normal(0, 1, 10000)
x2 = np.random.gamma(2, 1.5, 10000)
x3 = np.random.exponential(2, 10000)
x4 = np.random.uniform(0, 1, 10000)
ax1.hist(x1, density=True, bins=20, alpha=0.5)
ax2.hist(x2, density=True, bins=20, alpha=0.5)
ax3.hist(x3, density=True, bins=20, alpha=0.5)
ax4.hist(x4, density=True, bins=20, alpha=0.5);
# plt.text(x1.mean()-1.5, 0.5, 'x1\nNormal')
# ax2.text(x2.mean()-1.5, 0.5, 'x2\nGamma')
# ax3.text(x3.mean()-1.5, 0.5, 'x3\nExponential')
# ax4.text(x4.mean()-1.5, 0.5, 'x4\nUniform')
fig.get_axes()
```
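Before wiring up `FuncAnimation`, the per-frame logic of the assignment — reveal the first n samples of each variable, growing from 100 toward 1000 — can be sketched and checked on its own. The frame count and step size here are arbitrary choices, not part of the assignment spec:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(-2.5, 1, 1000)

def frame_samples(data, frame, step=100):
    """Samples visible at a given animation frame: step, 2*step, ... up to len(data)."""
    n = min((frame + 1) * step, len(data))
    return data[:n]

# Frame 0 shows 100 samples; frame 9 (and any later frame) shows all 1000.
print(len(frame_samples(x1, 0)), len(frame_samples(x1, 9)), len(frame_samples(x1, 50)))
# 100 1000 1000
```

Inside the animation callback you would clear each axes and re-plot `plt.hist(frame_samples(x, frame), ...)` for each of the four variables.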
##### Copyright 2019 DeepMind Technologies Limited.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Environments
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/deepmind/reverb/blob/master/examples/demo.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/deepmind/reverb/blob/master/examples/demo.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
# Introduction
This colab is a demonstration of how to use Reverb through examples.
# Setup
Installs the stable build of Reverb (dm-reverb) and TensorFlow (tf) to match.
```
!pip install dm-tree
!pip install dm-reverb[tensorflow]
import reverb
import tensorflow as tf
```
The code below defines a dummy RL environment for use in the examples below.
```
OBSERVATION_SPEC = tf.TensorSpec([10, 10], tf.uint8)
ACTION_SPEC = tf.TensorSpec([2], tf.float32)
def agent_step(unused_timestep) -> tf.Tensor:
return tf.cast(tf.random.uniform(ACTION_SPEC.shape) > .5,
ACTION_SPEC.dtype)
def environment_step(unused_action) -> tf.Tensor:
return tf.cast(tf.random.uniform(OBSERVATION_SPEC.shape, maxval=256),
OBSERVATION_SPEC.dtype)
```
# Creating a Server and Client
```
# Initialize the reverb server.
simple_server = reverb.Server(
tables=[
reverb.Table(
name='my_table',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
# Sets Rate Limiter to a low number for the examples.
# Read the Rate Limiters section for usage info.
rate_limiter=reverb.rate_limiters.MinSize(2),
# The signature is optional but it is good practice to set it as it
# enables data validation and easier dataset construction. Note that
# we prefix all shapes with a 3 as the trajectories we'll be writing
# consist of 3 timesteps.
signature={
'actions': tf.TensorSpec([3, *ACTION_SPEC.shape],
ACTION_SPEC.dtype),
'observations': tf.TensorSpec([3, *OBSERVATION_SPEC.shape],
OBSERVATION_SPEC.dtype),
},
)
],
# Sets the port to None to make the server pick one automatically.
port=None)
# Initializes the reverb client on the same port as the server.
client = reverb.Client(f'localhost:{simple_server.port}')
```
For details on customizing the sampler, remover, and rate limiter, see below.
# Example 1: Overlapping Trajectories
## Inserting Overlapping Trajectories
```
# Dynamically adds trajectories of length 3 to 'my_table' using a client writer.
with client.trajectory_writer(num_keep_alive_refs=3) as writer:
timestep = environment_step(None)
for step in range(4):
action = agent_step(timestep)
writer.append({'action': action, 'observation': timestep})
timestep = environment_step(action)
if step >= 2:
# In this example, the item consists of the 3 most recent timesteps that
# were added to the writer and has a priority of 1.5.
writer.create_item(
table='my_table',
priority=1.5,
trajectory={
'actions': writer.history['action'][-3:],
'observations': writer.history['observation'][-3:],
}
)
```
The animation illustrates the state of the server at each step in the
above code block. Although each item is being set to have the same
priority value of 1.5, items do not need to have the same priority values.
In real world scenarios, items would have differing and
dynamically-calculated priority values.
<img src="https://raw.githubusercontent.com/deepmind/reverb/master/docs/animations/diagram1.svg" />
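The overlapping-trajectory pattern — each step past the second emits an item covering the 3 most recent timesteps — can be sketched with a plain `deque`. The labels are made up and no Reverb server is involved; this only mirrors the windowing logic:

```python
from collections import deque

history = deque(maxlen=3)   # mirrors num_keep_alive_refs=3
items = []

for step in range(4):
    history.append(f"t{step}")
    if step >= 2:
        # Each "item" is the 3 most recent timesteps, overlapping its predecessor.
        items.append(list(history))

print(items)  # [['t0', 't1', 't2'], ['t1', 't2', 't3']]
```

Consecutive items share two of their three timesteps, exactly as in the animation above.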
## Sampling Overlapping Trajectories in TensorFlow
```
# Dataset samples sequences of length 3 and streams the timesteps one by one.
# This allows streaming large sequences that do not necessarily fit in memory.
dataset = reverb.TrajectoryDataset.from_table_signature(
server_address=f'localhost:{simple_server.port}',
table='my_table',
max_in_flight_samples_per_worker=10)
# Batches 2 sequences together.
# Shapes of items is now [2, 3, 10, 10].
batched_dataset = dataset.batch(2)
for sample in batched_dataset.take(1):
# Results in the following format.
print(sample.info.key) # ([2], uint64)
print(sample.info.probability) # ([2], float64)
print(sample.data['observations']) # ([2, 3, 10, 10], uint8)
print(sample.data['actions']) # ([2, 3, 2], float32)
```
# Example 2: Complete Episodes
Create a new server for this example to keep the elements of the priority table consistent.
```
EPISODE_LENGTH = 150
complete_episode_server = reverb.Server(
tables=[
reverb.Table(
name='my_table',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
# Sets Rate Limiter to a low number for the examples.
# Read the Rate Limiters section for usage info.
rate_limiter=reverb.rate_limiters.MinSize(2),
# The signature is optional but it is good practice to set it as it
# enables data validation and easier dataset construction. Note that
# the number of observations is larger than the number of actions.
# The extra observation is the terminal state where no action is
# taken.
signature={
'actions': tf.TensorSpec(
[EPISODE_LENGTH, *ACTION_SPEC.shape],
ACTION_SPEC.dtype),
'observations': tf.TensorSpec(
[EPISODE_LENGTH + 1, *OBSERVATION_SPEC.shape],
OBSERVATION_SPEC.dtype),
},
),
])
# Initializes the reverb client on the same port.
client = reverb.Client(f'localhost:{complete_episode_server.port}')
```
## Inserting Complete Episodes
```
# Writes whole episodes of varying length to a Reverb server.
NUM_EPISODES = 10
# We know that episodes are at most 150 steps so we set the writer buffer size
# to 151 (to capture the final observation).
with client.trajectory_writer(num_keep_alive_refs=151) as writer:
for _ in range(NUM_EPISODES):
timestep = environment_step(None)
for _ in range(EPISODE_LENGTH):
action = agent_step(timestep)
writer.append({'action': action, 'observation': timestep})
timestep = environment_step(action)
# The astute reader will recognize that the final timestep has not been
# appended to the writer. We'll go ahead and add it WITHOUT an action. The
# writer will automatically fill in the gap with `None` for the action
# column.
writer.append({'observation': timestep})
# Now that the entire episode has been added to the writer buffer, we can
# create an item with a trajectory that spans the entire episode. Note that
# the final action must not be included: it is None, and the trajectory
# would be rejected if we tried to include it.
writer.create_item(
table='my_table',
priority=1.5,
trajectory={
'actions': writer.history['action'][:-1],
'observations': writer.history['observation'][:],
})
# This call blocks until all the items (in this case only one) have been
# sent to the server, inserted into respective tables and confirmations
# received by the writer.
writer.end_episode(timeout_ms=1000)
# Ending the episode also clears the history property, which is why we were
# able to use `[:]` when defining the trajectory above.
assert len(writer.history['action']) == 0
assert len(writer.history['observation']) == 0
```
## Sampling Complete Episodes in TensorFlow
```
# Each sample is an entire episode.
# Adjusts the expected shapes to account for the whole episode length.
dataset = reverb.TrajectoryDataset.from_table_signature(
server_address=f'localhost:{complete_episode_server.port}',
table='my_table',
max_in_flight_samples_per_worker=10,
rate_limiter_timeout_ms=10)
# Batches 128 episodes together.
# Each item is an episode of the format (observations, actions) as above.
# Shape of items are now ([128, 151, 10, 10], [128, 150, 2]).
dataset = dataset.batch(128)
# Sample has type reverb.ReplaySample.
for sample in dataset.take(1):
# Results in the following format.
print(sample.info.key) # ([128], uint64)
print(sample.info.probability) # ([128], float64)
print(sample.data['observations']) # ([128, 151, 10, 10], uint8)
print(sample.data['actions']) # ([128, 150, 2], float32)
```
# Example 3: Multiple Priority Tables
Create a server that maintains multiple priority tables.
```
multitable_server = reverb.Server(
tables=[
reverb.Table(
name='my_table_a',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
# Sets Rate Limiter to a low number for the examples.
# Read the Rate Limiters section for usage info.
rate_limiter=reverb.rate_limiters.MinSize(1)),
reverb.Table(
name='my_table_b',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
# Sets Rate Limiter to a low number for the examples.
# Read the Rate Limiters section for usage info.
rate_limiter=reverb.rate_limiters.MinSize(1)),
],
port=None)
client = reverb.Client('localhost:{}'.format(multitable_server.port))
```
## Inserting Sequences of Varying Length into Multiple Priority Tables
```
with client.trajectory_writer(num_keep_alive_refs=3) as writer:
timestep = environment_step(None)
for step in range(4):
writer.append({'timestep': timestep})
action = agent_step(timestep)
timestep = environment_step(action)
if step >= 1:
writer.create_item(
table='my_table_b',
priority=4-step,
trajectory=writer.history['timestep'][-2:])
if step >= 2:
writer.create_item(
table='my_table_a',
priority=4-step,
trajectory=writer.history['timestep'][-3:])
```
This diagram shows the state of the server after executing the above cell.
<img src="https://raw.githubusercontent.com/deepmind/reverb/master/docs/animations/diagram2.svg" />
# Example 4: Samplers and Removers
## Creating a Server with a Prioritized Sampler and a FIFO Remover
```
reverb.Server(
tables=[
reverb.Table(
name='my_table',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
rate_limiter=reverb.rate_limiters.MinSize(100)),
],
port=None)
```
## Creating a Server with a MaxHeap Sampler and a MinHeap Remover
Setting `max_times_sampled=1` causes each item to be removed after it is
sampled once. The end result is a priority table that essentially functions
as a max priority queue.
```
max_size = 1000
reverb.Server(
tables=[
reverb.Table(
name='my_priority_queue',
sampler=reverb.selectors.MaxHeap(),
remover=reverb.selectors.MinHeap(),
max_size=max_size,
rate_limiter=reverb.rate_limiters.MinSize(int(0.95 * max_size)),
max_times_sampled=1,
)
],
port=None)
```
## Creating a Server with One Queue and One Circular Buffer
The behavior of canonical data structures such as a
[circular buffer](https://en.wikipedia.org/wiki/Circular_buffer) or a max
[priority queue](https://en.wikipedia.org/wiki/Priority_queue) can
be implemented in Reverb by modifying the `sampler` and `remover`
or by using the `Table.queue` initializer.
```
reverb.Server(
tables=[
reverb.Table.queue(name='my_queue', max_size=10000),
reverb.Table(
name='my_circular_buffer',
sampler=reverb.selectors.Fifo(),
remover=reverb.selectors.Fifo(),
max_size=10000,
max_times_sampled=1,
rate_limiter=reverb.rate_limiters.MinSize(1)),
],
port=None)
```
# Example 5: Rate Limiters
## Creating a Server with a SampleToInsertRatio Rate Limiter
```
reverb.Server(
tables=[
reverb.Table(
name='my_table',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
rate_limiter=reverb.rate_limiters.SampleToInsertRatio(
samples_per_insert=3.0, min_size_to_sample=3,
error_buffer=3.0)),
],
port=None)
```
This example is intended for a distributed or multi-threaded
environment, where blocked insertions are unblocked by sample calls from
an independent thread. In a single-threaded system, a blocked
insertion call will cause a deadlock.
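The bookkeeping behind `SampleToInsertRatio` can be sketched single-threaded. The class below is an illustrative model of the accounting only — names and methods are invented for this sketch, and real Reverb limiters block callers across threads rather than returning a flag:

```python
class SampleToInsertRatioSketch:
    """Track whether an insert may proceed given a target samples-per-insert ratio."""

    def __init__(self, samples_per_insert, min_size_to_sample, error_buffer):
        self.spi = samples_per_insert
        self.min_size = min_size_to_sample
        self.error_buffer = error_buffer
        self.inserts = 0
        self.samples = 0

    def can_insert(self):
        # Always allow inserts until the table is large enough to sample at all.
        if self.inserts < self.min_size:
            return True
        # Block when inserts have run too far ahead of the expected sample count.
        deficit = self.inserts * self.spi - self.samples
        return deficit <= self.error_buffer

    def insert(self):
        assert self.can_insert()
        self.inserts += 1

    def sample(self):
        self.samples += 1

limiter = SampleToInsertRatioSketch(samples_per_insert=3.0,
                                    min_size_to_sample=3, error_buffer=3.0)
for _ in range(3):
    limiter.insert()          # free inserts up to min_size_to_sample
print(limiter.can_insert())   # False: 3 inserts * 3.0 - 0 samples = 9 > 3
for _ in range(6):
    limiter.sample()
print(limiter.can_insert())   # True: deficit is now 9 - 6 = 3 <= 3
```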
```
import pandas as pd
import numpy as np
import seaborn as sns
# Date handling
import locale
locale.setlocale(locale.LC_ALL,'es_ES.UTF-8')
import dateparser
import datetime as datet
from datetime import datetime
sns.set()
date_fmt = '%b %Y'
# Plotting
import plotly
import plotly.express as px
from plotly.subplots import make_subplots
import plotly.graph_objs as go
import matplotlib.pyplot as plt
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
comments_genre_df = pd.read_csv('./datasets/cwg.csv')
print(len(comments_genre_df))
comments_genre_df.head()
```
# 1. Number of comments per day - historical
```
df_tot_by_day = comments_genre_df.groupby(['post_time']).size()
df_tot_by_day = df_tot_by_day.to_frame()
df_tot_by_day.reset_index(inplace=True)
df_tot_by_day[['post_date','post_time']] = df_tot_by_day['post_time'].str.split(' ',expand=True)
df_tot_by_day = df_tot_by_day.rename(columns={0: "count"})
df_tot_by_day = df_tot_by_day[['post_date','post_time', 'count']]
df_tot_by_day
h_comment_vals = list(df_tot_by_day['post_time'])
fig1 = go.Figure()
fig1.add_trace(go.Scatter(x=list(df_tot_by_day['post_date']), y=h_comment_vals,
mode='markers',
marker=dict(size=3),
name='Datetime inicio'))
fig1.update_layout(title='2013 - 2021: Distribución diaria de comentarios',
legend_title="Time marks",
xaxis_tickangle=-45,
yaxis=dict(
title='Hora',
titlefont_size=13,
tickfont_size=12
),
xaxis=dict(
title='Fecha',
titlefont_size=13,
tickfont_size=12
),
autosize=False,
width=950,
height=400,
margin=dict(l=0, r=0, t=50, b=0),
font=dict(size=10))
fig1.update_yaxes(categoryorder='category ascending')
fig1.show()
```
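The core of the cell above is a `groupby(...).size()` count followed by `str.split(' ', expand=True)` to separate the combined timestamp into date and time columns. The pattern can be checked on a few toy rows (the values below are invented):

```python
import pandas as pd

df = pd.DataFrame({'post_time': ['2020-01-06 10:15', '2020-01-06 22:40',
                                 '2020-01-07 09:05']})
# Count rows per unique timestamp, then split into date and time parts
counts = df.groupby('post_time').size().to_frame().reset_index()
counts[['post_date', 'post_time']] = counts['post_time'].str.split(' ', expand=True)
counts = counts.rename(columns={0: 'count'})[['post_date', 'post_time', 'count']]
# counts now has one row per unique timestamp, with separate date/time columns
```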
# 1.1. Number of comments per day of the week
```
df_tot_by_day_count = df_tot_by_day.groupby(['post_date'])['count'].sum().to_frame()
df_tot_by_day_count.reset_index(inplace=True)
df_tot_by_day_count['post_date'] = pd.to_datetime(df_tot_by_day_count['post_date'], errors='coerce')
df_tot_by_day_count['post_date'] = df_tot_by_day_count['post_date'].dt.normalize()
df_tot_by_day_count['day_of_week'] = df_tot_by_day_count['post_date'].dt.day_name()
df_tot_by_day_count = df_tot_by_day_count[['post_date','day_of_week', 'count']]
df_tot_by_day_count = df_tot_by_day_count.groupby(['day_of_week'])['count'].sum().to_frame()
df_tot_by_day_count.reset_index(inplace=True)
di = {"Monday":"Lunes", "Tuesday":"Martes", "Wednesday":"Miércoles", "Thursday":"Jueves","Friday":"Viernes",
"Saturday":"Sábado", "Sunday":"Domingo"}
df_tot_by_day_count = df_tot_by_day_count.replace({"day_of_week": di})
df_tot_by_day_count
labels = df_tot_by_day_count['day_of_week'].value_counts().index
values = df_tot_by_day_count['count'].values
fig2 = make_subplots(rows=1, cols=3, specs=[[{"type": "pie"}, {"type": "pie"}, {"type": "pie"}]])
fig2.add_trace(go.Pie(labels=labels, values=values), row=1, col=2)
# Use `hole` to create a donut-like pie chart
fig2.update_traces(hole=.4, hoverinfo="label+percent")
fig2.update_layout(
title_text="2013 - 2021: % Total de comentarios por día de la semana",
legend_title="Día",
# Add annotations in the center of the donut pies.
autosize=False,
width=950,
height=400,
margin=dict(l=0, r=0, t=50, b=0),
font=dict(size=10))
fig2.show()
```
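The weekday aggregation above relies on `dt.day_name()` (which returns English names) plus a translation dictionary. The same mapping can be reproduced with the standard library alone, without the `locale` setup from the first cell (the helper name is ours):

```python
from datetime import date

DIAS = {"Monday": "Lunes", "Tuesday": "Martes", "Wednesday": "Miércoles",
        "Thursday": "Jueves", "Friday": "Viernes",
        "Saturday": "Sábado", "Sunday": "Domingo"}
EN_DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
           "Friday", "Saturday", "Sunday"]

def spanish_day(d: date) -> str:
    # weekday() is locale-independent: 0 = Monday ... 6 = Sunday
    return DIAS[EN_DAYS[d.weekday()]]

print(spanish_day(date(2021, 1, 4)))  # -> Lunes (2021-01-04 was a Monday)
```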
# 2. Gender
```
comments_genre_df.head()
```
# Stacked bar - gender distribution per year
```
gen_month_tot = comments_genre_df[['post_time', 'user', 'SEXO']]
print(len(gen_month_tot))
gen_month_tot = gen_month_tot.drop_duplicates(subset='user', keep="first")
print(len(gen_month_tot))
gen_month_tot[['post_date','post_time']] = gen_month_tot['post_time'].str.split(' ',expand=True)
gen_month_tot[['post_date_year','post_date_month','post_date_day']] = gen_month_tot['post_date'].str.split('-',expand=True)
gen_month_tot = gen_month_tot[['post_date_year','post_date_month', 'SEXO']]
gen_month_tot_g = gen_month_tot.groupby(['post_date_year', 'post_date_month', 'SEXO'])['SEXO'].size().to_frame()
gen_month_tot_g = gen_month_tot_g.rename(columns={'SEXO': "count_genre"})
gen_month_tot_g = gen_month_tot_g.reset_index()
gen_month_tot_g
fig3 = px.bar(gen_month_tot_g, x="post_date_year", y="count_genre", color="SEXO", barmode='stack')
fig3.update_layout(legend_title="Género", xaxis_title='Año', yaxis_title='Total de usuarios')
fig3.update_layout(title='2013 - 2021: Distribución anual de género de comentadores',
legend_title="Género",
autosize=False,
width=950,
height=400,
margin=dict(l=0, r=0, t=50, b=0),
font=dict(size=10))
fig3.show()
comments_genre_df.columns
comments_genre_df.head()
cat_per_year = comments_genre_df[['post_time', 'post_id', 'text', 'user', 'categorie']]
cat_per_year[['post_date','post_time']] = cat_per_year['post_time'].str.split(' ',expand=True)
cat_per_year[['post_date_year','post_date_month','post_date_day']] = cat_per_year['post_date'].str.split('-',expand=True)
cat_per_year = cat_per_year.groupby(['post_date_year', 'post_date_month', 'categorie'])['categorie'].size().to_frame()
cat_per_year = cat_per_year.rename(columns={'categorie': "count_categorie"})
cat_per_year = cat_per_year.reset_index()
cat_per_year
fig4 = px.scatter(cat_per_year, x="post_date_year", y="post_date_month", color="categorie",
size='count_categorie')
fig4.update_layout(title='2013 - 2021: Distribución de posts por fecha y categoría',
legend_title="Categorías",
xaxis_tickangle=-45,
yaxis=dict(
title='Mes',
titlefont_size=13,
tickfont_size=12
),
xaxis=dict(
title='Año',
titlefont_size=13,
tickfont_size=12
),
autosize=False,
width=950,
height=400,
margin=dict(l=0, r=0, t=50, b=0),
font=dict(size=10))
fig4.update_yaxes(categoryorder='category ascending')
fig4.show()
def figures_to_html(figs, filename="dashboard_percepciondisc_dane.html"):
    # Write every figure's <body> contents into a single HTML dashboard;
    # the context manager ensures the file is flushed and closed.
    with open(filename, 'w') as dashboard:
        dashboard.write("<html><head></head><body>" + "\n")
        for fig in figs:
            inner_html = fig.to_html().split('<body>')[1].split('</body>')[0]
            dashboard.write(inner_html)
        dashboard.write("</body></html>" + "\n")
figures_to_html([fig1, fig2, fig3, fig4])
```
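`figures_to_html` works by keeping only what sits between each figure's `<body>` and `</body>` tags and concatenating the fragments into one page. The extraction step can be isolated and sanity-checked on a minimal document (the HTML below is a stand-in, not real plotly output); note it assumes a bare `<body>` tag with no attributes:

```python
def extract_body(html: str) -> str:
    # Keep only the content between <body> and </body>
    return html.split('<body>')[1].split('</body>')[0]

page = "<html><head></head><body><div>fig</div></body></html>"
print(extract_body(page))  # -> <div>fig</div>
```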
# 5. Compressing h5 Training/Validation Dataset
Attempting to compress the h5 dataset to allow for temporary storage of the dataset on a Compute Canada Cedar GPU node SSD. Compression was done using `create_compressed_h5.py` in the same directory.
```
import sys
import os
import random
import h5py
from collections import Counter
from progressbar import *
import re
import numpy as np
# Add the path to the parent directory to augment search for module
par_dir = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
if par_dir not in sys.path:
sys.path.append(par_dir)
trainval_path = '/fast_scratch/WatChMaL/data/IWCDmPMT_4pi_fulltank_9M_splits_CNN'
```
## Load h5 trainval file
```
# Import test events from h5 file
index_file = os.path.join(trainval_path,"IWCDmPMT_4pi_fulltank_9M_trainval_idxs.npz")
indices = np.load(index_file, allow_pickle=True)
train_indices = indices['train_idxs']
val_indices = indices['val_idxs']
original_data_path = os.path.join(trainval_path,"IWCDmPMT_4pi_fulltank_9M_trainval.h5")
f = h5py.File(original_data_path, "r")
hdf5_event_data = f["event_data"]
# Memory-map the event data instead of loading the full array into RAM;
# this also defines original_eventdata, which is used below.
original_eventdata = np.memmap(original_data_path, mode="r", shape=hdf5_event_data.shape,
                               offset=hdf5_event_data.id.get_offset(), dtype=hdf5_event_data.dtype)
original_eventids = np.array(f['event_ids'])
original_energies = np.array(f['energies'])
original_positions = np.array(f['positions'])
original_angles = np.array(f['angles'])
original_labels = np.array(f['labels'])
original_eventdata.shape
```
## Load compressed h5
```
compressed_data_path = os.path.join(trainval_path,'IWCDmPMT_4pi_fulltank_9M_trainval_compressed.h5')
compressed_h5 = h5py.File(compressed_data_path,'r')
compressed_event_data = compressed_h5["event_data"]
compressed_eventids = np.array(compressed_h5['event_ids'])
compressed_energies = np.array(compressed_h5['energies'])
compressed_positions = np.array(compressed_h5['positions'])
compressed_angles = np.array(compressed_h5['angles'])
compressed_labels = np.array(compressed_h5['labels'])
compressed_event_data.shape
```
## Check that the datasets are still identical
```
pbar = ProgressBar(widgets=['Verification Progress: ', Percentage(), ' ', Bar(marker='0',left='[',right=']'),
' ', ETA()], maxval=compressed_event_data.shape[0])
pbar.start()
for idx in range(compressed_event_data.shape[0]):
pbar.update(idx)
assert np.array_equal(compressed_event_data[idx],original_eventdata[idx])
assert compressed_eventids[idx] == original_eventids[idx]
assert compressed_energies[idx] == original_energies[idx]
assert np.array_equal(compressed_positions[idx],original_positions[idx])
assert np.array_equal(compressed_angles[idx],original_angles[idx])
assert compressed_labels[idx] == original_labels[idx]
pbar.finish()
print("Success! Compressed dataset contains the same data in the same order")
```
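Indexing the h5 dataset one event at a time keeps the check simple but is slow for ~9M events; comparing in slices amortizes the read overhead. A sketch of the same verification done chunk-wise (the function name is ours; it works on anything sliceable, such as h5py datasets or numpy arrays):

```python
import numpy as np

def datasets_equal(a, b, chunk_size=4096):
    """Compare two sliceable datasets chunk by chunk."""
    if a.shape != b.shape:
        return False
    for start in range(0, a.shape[0], chunk_size):
        stop = start + chunk_size
        if not np.array_equal(a[start:stop], b[start:stop]):
            return False
    return True
```

For the h5 case this would be called as `datasets_equal(compressed_event_data, original_eventdata)`, reading `chunk_size` events per iteration instead of one.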
# Tutorial 07: Networks from Custom Templates
In the previous tutorial, we discussed how OpenStreetMap files can be simulated in Flow. These networks, however, may at times be imperfect, as we can see in the toll section of the Bay Bridge (see the figure below). The simulators SUMO and Aimsun both possess methods for augmenting the network after it has been imported, and store the changes in their own versions of the initial template (whether it was generated via a custom network class or imported from OpenStreetMap). In this tutorial, we demonstrate how such simulator-generated template files can be imported when running a simulation in Flow.
<img src="img/osm_to_template.png">
<center> **Figure 1**: Example benefit of converting OpenStreetMap to a custom template </center>
The remainder of the tutorial is organized as follows. In section 1, we begin by importing the classic set of parameters. In section 2, we introduce the template files that are used as examples throughout the tutorial. In section 3, we present how custom SUMO network templates, i.e. the generated .net.xml files, can be modified and simulated in Flow for the purpose of improving network features. Finally, in section 4, we demonstrate how custom Aimsun network files can be simulated in Flow.
## 1. Importing Modules
Before we begin, let us import all relevant Flow parameters as we have done for previous tutorials. If you are unfamiliar with these parameters, you are encouraged to review tutorial 1.
```
# the TestEnv environment is used to simply simulate the network
from flow.envs import TestEnv
# the Experiment class is used for running simulations
from flow.core.experiment import Experiment
# the base network class
from flow.networks import Network
# all other imports are standard
from flow.core.params import VehicleParams
from flow.core.params import NetParams
from flow.core.params import InitialConfig
from flow.core.params import EnvParams
# create some default parameters
env_params = EnvParams()
initial_config = InitialConfig()
vehicles = VehicleParams()
vehicles.add('human', num_vehicles=1)
```
## 2. Example Network
In this tutorial, we use the [Luxembourg SUMO Traffic (LuST) Network](https://github.com/lcodeca/LuSTScenario) as an example use case. This example consists of a well-calibrated model of vehicles in Luxembourg. A representation of the simulation can be seen in the figure below.
<img src="img/LuST_network.png" width="500">
<center><b>Figure 2</b>: Simulation of the LuST network </center>
Before continuing with this tutorial, please clone the LuST scenario repository by running the following command.
```
git clone https://github.com/lcodeca/LuSTScenario.git
```
Once you have cloned the repository, modify the code snippet below to match the correct location of the repository's main directory.
```
LuST_dir = "/path/to/LuSTScenario"
```
## 3. Sumo Network Files
Sumo generates several network- and simulation-specific template files prior to starting a simulation. When creating custom networks or importing networks from OpenStreetMap, this procedure is handled by the network class. Three of these files (\*.net.xml, \*.rou.xml, and vtypes.add.xml) can be imported once again via the network class to recreate a previously designed network.
We start by creating the simulation parameters:
```
from flow.core.params import SumoParams
sim_params = SumoParams(render=True, sim_step=1)
```
### 3.1 Importing Network (\*.net.xml) Files
The \*.net.xml file covers the network geometry within a simulation, and can be imported independently of the SUMO route file (see section 1.2). This can be done through the `template` parameter within `NetParams` as follows:
```
import os
net_params = NetParams(
template=os.path.join(LuST_dir, "scenario/lust.net.xml"),
)
```
This network alone, similar to the OpenStreetMap file, does not cover the placement of vehicles or the routes vehicles can traverse. These, however, can be defined as they were in the previous tutorial for importing networks from OpenStreetMap. For the LuST network, this looks something like the following code snippet (note that the specific edges were not chosen for any particular reason).
```
# specify the edges vehicles can originate on
initial_config = InitialConfig(
edges_distribution=["-32410#3"]
)
# specify the routes for vehicles in the network
class TemplateNetwork(Network):
def specify_routes(self, net_params):
return {"-32410#3": ["-32410#3"]}
```
The simulation can then be executed as follows:
```
# create the network
network = TemplateNetwork(
name="template",
net_params=net_params,
initial_config=initial_config,
vehicles=vehicles
)
# create the environment
env = TestEnv(
env_params=env_params,
sim_params=sim_params,
network=network
)
# run the simulation for 1000 steps
exp = Experiment(env=env)
_ = exp.run(1, 1000)
```
### 3.2 Importing Additional Files
Sumo templates will at times contain files other than the network templates that can be used to specify the positions, speeds, and properties of vehicles at the start of a simulation, as well as the departure times of vehicles while the network is running and the routes that all these vehicles are meant to traverse. All these files can also be imported under the `template` attribute in order to recreate the simulation in its entirety.
When incorporating files other than the net.xml file into the simulation, the template attribute is treated as a dictionary instead, with a different element for each of the additional files that are meant to be imported. Starting with the net.xml file, it is added to the template attribute as follows:
```
new_net_params = NetParams(
template={
# network geometry features
"net": os.path.join(LuST_dir, "scenario/lust.net.xml")
}
)
```
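Since `template` may be either a single path (as in section 3.1) or a dictionary of files, downstream code has to handle both shapes. A hypothetical helper (not part of Flow's API) illustrating the normalization:

```python
def normalize_template(template):
    """Return the template as a dict; a bare string is treated as the
    network (.net.xml) file. Illustrative only -- not Flow's API."""
    if isinstance(template, str):
        return {"net": template}
    return dict(template)

print(normalize_template("lust.net.xml"))  # -> {'net': 'lust.net.xml'}
```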
#### 3.2.1 Vehicle Type (vtypes.add.xml)
The vehicle types file describes the properties of the different vehicle types in the network. These include parameters such as the maximum acceleration and comfortable deceleration of drivers. This file can be imported via the "vtype" attribute in template.
Note that, when vehicle information is being imported from a template file, the `VehicleParams` object does not need to be modified, unless you would like additional vehicles to enter the network as well.
```
new_net_params = NetParams(
template={
# network geometry features
"net": os.path.join(LuST_dir, "scenario/lust.net.xml"),
# features associated with the properties of drivers
        "vtype": os.path.join(LuST_dir, "scenario/vtypes.add.xml")
}
)
# we no longer need to specify anything in VehicleParams
new_vehicles = VehicleParams()
```
#### 3.2.2 Route (\*.rou.xml)
Next, the routes can be imported from the \*.rou.xml files that are generated by SUMO. These files help define which cars enter the network at which point in time, whether it be at the beginning of a simulation or some time during its run. The route files are passed to the "rou" key in the templates attribute. Moreover, since the vehicle routes can be spread over multiple files, the "rou" key accepts a *list* of string filenames.
```
new_net_params = NetParams(
template={
# network geometry features
"net": os.path.join(LuST_dir, "scenario/lust.net.xml"),
# features associated with the properties of drivers
"vtype": os.path.join(LuST_dir, "scenario/vtypes.add.xml"),
# features associated with the routes vehicles take
"rou": [os.path.join(LuST_dir, "scenario/DUARoutes/local.0.rou.xml"),
os.path.join(LuST_dir, "scenario/DUARoutes/local.1.rou.xml"),
os.path.join(LuST_dir, "scenario/DUARoutes/local.2.rou.xml")]
}
)
# we no longer need to specify anything in VehicleParams
new_vehicles = VehicleParams()
```
#### 3.2.3 Running the Modified Simulation
Finally, the fully imported simulation can be run as follows.
**Warning**: the network takes time to initialize while the departure positions and times of the vehicles are specified.
```
# create the network
network = Network(
name="template",
net_params=new_net_params,
vehicles=new_vehicles
)
# create the environment
env = TestEnv(
env_params=env_params,
sim_params=sim_params,
network=network
)
# run the simulation for 100000 steps
exp = Experiment(env=env)
_ = exp.run(1, 100000)
```
## 4. Aimsun Network Files
Flow can run templates that have been created in Aimsun and saved into an \*.ang file. Although it is possible to have control over the network, for instance to add vehicles and monitor them directly from Flow, this tutorial only covers how to run the network.
We will use the template located at `tutorials/networks/test_template.ang`, which looks like this:
<img src="img/test_template.png">
<center><b>Figure 2</b>: Simulation of <code>test_template.ang</code> in Aimsun</center>
It contains two input and three output centroids that define the centroid configuration `Centroid Configuration 910`. The inflows are defined by two OD matrices, one for the type `Car` (in blue), the other for the type `rl` (in red). Note that there is no learning in this tutorial so the two types both act as regular cars. The two OD matrices form the traffic demand `Traffic Demand 925` that is used by the network `Dynamic Scenario 927`. Finally, the experiment `Micro SRC Experiment 928` and the replication `Replication 930` are created, and we will run this replication in the following.
First, we create the Aimsun-specific simulation parameters:
```
from flow.core.params import AimsunParams
sim_params = AimsunParams(
sim_step=0.1,
render=True,
emission_path='data',
replication_name="Replication 930",
centroid_config_name="Centroid Configuration 910"
)
```
As you can see, we need to specify the name of the replication we want to run as well as the centroid configuration that is to be used. There is another optional parameter, `subnetwork_name`, that can be specified if only part of the network should be simulated. Please refer to the documentation for more information.
The template can then be imported as follows:
```
import os
import flow.config as config
net_params = NetParams(
template=os.path.join(config.PROJECT_PATH,
"tutorials/networks/test_template.ang")
)
```
Finally, we can run the simulation by specifying `'aimsun'` as the simulator to be used:
```
network = Network(
name="template",
net_params=net_params,
initial_config=initial_config,
vehicles=vehicles
)
env = TestEnv(
env_params,
sim_params,
network,
simulator='aimsun'
)
exp = Experiment(env)
exp.run(1, 1000)
```
```
from __future__ import print_function
import time
import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.stats import pearsonr
from scipy import stats
%matplotlib inline
plt.rcParams["font.family"] = "Times New Roman"
# [x4_source, x5_source, x4_target, x5_target]
#with open('Encoder_5_feature_map_x4s_x5s_x4t_x5t.pkl', 'rb') as f:
# list5 = pickle.load(f)
#for i in range(len(list5)):
# print(list5[i].shape)
# [x4_source, x4_target]
#with open('Encoder_4_feature_map_x4s_x4t.pkl', 'rb') as f:
# list4 = pickle.load(f)
#for i in range(len(list4)):
# print(list4[i].shape)
#feature_maps = list5[1]
########## 4Encoder_4FC ##########
#x_source,x1_source,x2_source,x3_source,x4_source,fc1_source,fc2_source,fc3_source,fc_last_source,x_target,x1_target,x2_target,x3_target,x4_target,fc1_target,fc2_target,fc3_target,fc_last_target = model(source_image, target_image)
########## 4RB 2FC ##########
#output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,x4_feature_map_source,x4_source,fc_source,out_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target,x4_feature_map_target,x4_target,fc_target,out_target = model(source_image, target_image)
########## 3RB 2FC ##########
# output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,fc_source,out_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target,fc_target,out_target = model(source_image, target_image)
#
#output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target
#output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,x4_feature_map_source,x4_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target,x4_feature_map_target,x4_target
#x_source,x1_source,x2_source,x3_source,x4_source,x_target,x1_target,x2_target,x3_target,x4_target
with open('Encoder_3RB_GAP_2FCLayer_feature_map.pkl', 'rb') as f:
RB_GAP3 = pickle.load(f)
with open('Encoder_4RB_GAP_2FCLayer_feature_map.pkl', 'rb') as f:
RB_GAP4 = pickle.load(f)
with open('Encoder_4Encoder_4FC_feature_map.pkl', 'rb') as f:
FC4 = pickle.load(f)
for i in range(len(RB_GAP3)):
print(RB_GAP3[i].shape)
for i in range(len(RB_GAP4)):
print(RB_GAP4[i].shape)
for i in range(len(FC4)):
print(FC4[i].shape)
def plotter(encoder_type,entity_number=-1,mode='Heatmap'):
feature_ids = []
if(encoder_type=="RB_GAP3"):
print("output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target")
print("enter entity number")
entity_number=int(input())
feature_maps = RB_GAP3[entity_number]
print("No. of features = "+str(feature_maps.shape[1]))
print("Enter feature ids for any 3 features")
if mode=='Heatmap':
for i in range(3):
feature_ids.append(int(input()))
elif(encoder_type=="RB_GAP4"):
print("output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,x4_feature_map_source,x4_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target,x4_feature_map_target,x4_target")
print("enter entity number")
entity_number=int(input())
feature_maps = RB_GAP4[entity_number]
print("No. of features = "+str(feature_maps.shape[1]))
print("Enter feature ids for any 3 features")
if mode=='Heatmap':
for i in range(3):
feature_ids.append(int(input()))
elif(encoder_type=="FC4"):
print("x_source,x1_source,x2_source,x3_source,x4_source,x_target,x1_target,x2_target,x3_target,x4_target")
print("enter entity number")
entity_number=int(input())
feature_maps = FC4[entity_number]
print("No. of features = "+str(feature_maps.shape[1]))
print("Enter feature ids for any 3 features")
if mode=='Heatmap':
for i in range(3):
feature_ids.append(int(input()))
if mode=='Heatmap':
fig = plt.figure(figsize=(24,10))
ax0 = fig.add_subplot(311, aspect='equal') #241
plt.title('Feature '+str(feature_ids[0]))
sns.heatmap(feature_maps[0, feature_ids[0], :, :], cbar=False, square = True, ax = ax0)
ax1 = fig.add_subplot(312, aspect='equal') #242
plt.title('Feature '+str(feature_ids[1]))
sns.heatmap(feature_maps[0, feature_ids[1], :, :], cbar=False, square = True, ax = ax1)
ax2 = fig.add_subplot(313, aspect='equal') #243
plt.title('Feature '+str(feature_ids[2]))
sns.heatmap(feature_maps[0, feature_ids[2], :, :], cbar=False, square = True, ax = ax2)
        # additional heatmap subplots (axes 244-248) omitted
plt.tight_layout()
fig.savefig('Heatmap'+encoder_type+'_'+str(entity_number)+'.png', dpi = 300)
elif mode=='t-SNE':
X = feature_maps[0]
X = X.reshape(X.shape[0],X.shape[1]*X.shape[2])
time_start = time.time()
tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300)
tsne_results = tsne.fit_transform(X)
print('t-SNE done! Time elapsed: {} seconds'.format(time.time()-time_start))
plt.figure(figsize=(5,5))
plt.title('tsne plot')
plt.xlabel('tsne-one')
plt.ylabel('tsne-two')
sns.scatterplot(tsne_results[:,0],tsne_results[:,1])
plt.savefig('t-SNE'+encoder_type+'_'+str(entity_number)+'.jpg')
elif mode=='PCA':
X = feature_maps[0]
X = X.reshape(X.shape[0],X.shape[1]*X.shape[2])
pca = PCA(n_components=2)
pca_result = pca.fit_transform(X)
print('Explained variation per principal component: {}'.format(pca.explained_variance_ratio_))
plt.figure(figsize=(5,5))
plt.title('PCA plot')
plt.xlabel('PCA-one')
plt.ylabel('PCA-two')
sns.scatterplot(pca_result[:,0],pca_result[:,1])
plt.savefig('PCA'+encoder_type+'_'+str(entity_number)+'.jpg')
def plotter_modified(encoder_type,entity_number=-1,mode='Heatmap'):
feature_ids = []
if(encoder_type=="RB_GAP3"):
print("output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target")
print("enter entity number")
entity_number=int(input())
feature_maps = RB_GAP3[entity_number]
print("No. of features = "+str(feature_maps.shape[1]))
print("Enter feature ids for any 3 features")
if mode=='Heatmap':
for i in range(3):
feature_ids.append(int(input()))
elif(encoder_type=="RB_GAP4"):
print("output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,x4_feature_map_source,x4_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target,x4_feature_map_target,x4_target")
print("enter entity number")
entity_number=int(input())
feature_maps = RB_GAP4[entity_number]
print("No. of features = "+str(feature_maps.shape[1]))
print("Enter feature ids for any 3 features")
if mode=='Heatmap':
for i in range(3):
feature_ids.append(int(input()))
elif(encoder_type=="FC4"):
print("x_source,x1_source,x2_source,x3_source,x4_source,x_target,x1_target,x2_target,x3_target,x4_target")
print("enter entity number")
entity_number=int(input())
feature_maps = FC4[entity_number]
print("No. of features = "+str(feature_maps.shape[1]))
print("Enter feature ids for any 3 features")
if mode=='Heatmap':
for i in range(3):
feature_ids.append(int(input()))
if mode=='Heatmap':
fig = plt.figure(figsize=(14,10))
ax0 = fig.add_subplot(311) #241
plt.title('Feature '+str(feature_ids[0]))
sns.distplot(np.ndarray.flatten(feature_maps[0, feature_ids[0], :, :]), bins=5, kde=False, rug=True);
ax1 = fig.add_subplot(312) #242
plt.title('Feature '+str(feature_ids[1]))
sns.distplot(np.ndarray.flatten(feature_maps[0, feature_ids[1], :, :]), bins=5, kde=False, rug=True);
ax2 = fig.add_subplot(313) #243
plt.title('Feature '+str(feature_ids[2]))
sns.distplot(np.ndarray.flatten(feature_maps[0, feature_ids[2], :, :]), bins=5, kde=False, rug=True);
        # additional heatmap subplots (axes 244-248) omitted
plt.tight_layout()
fig.savefig('Heatmap'+encoder_type+'_'+str(entity_number)+'.png', dpi = 300)
elif mode=='t-SNE':
X = feature_maps[0]
X = X.reshape(X.shape[0],X.shape[1]*X.shape[2])
time_start = time.time()
tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300)
tsne_results = tsne.fit_transform(X)
print('t-SNE done! Time elapsed: {} seconds'.format(time.time()-time_start))
plt.figure(figsize=(5,5))
plt.title('tsne plot')
plt.xlabel('tsne-one')
plt.ylabel('tsne-two')
sns.scatterplot(tsne_results[:,0],tsne_results[:,1])
plt.savefig('t-SNE'+encoder_type+'_'+str(entity_number)+'.jpg')
elif mode=='PCA':
X = feature_maps[0]
X = X.reshape(X.shape[0],X.shape[1]*X.shape[2])
pca = PCA(n_components=2)
pca_result = pca.fit_transform(X)
print('Explained variation per principal component: {}'.format(pca.explained_variance_ratio_))
plt.figure(figsize=(5,5))
plt.title('PCA plot')
plt.xlabel('PCA-one')
plt.ylabel('PCA-two')
sns.scatterplot(pca_result[:,0],pca_result[:,1])
plt.savefig('PCA'+encoder_type+'_'+str(entity_number)+'.jpg')
def plotter_histogram(encoder_type,entity_number=-1):
feature_ids = []
if(encoder_type=="RB_GAP3"):
print("output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target")
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source0 = RB_GAP3[entity_number_s]
feature_maps_target0 = RB_GAP3[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source1 = RB_GAP3[entity_number_s]
feature_maps_target1 = RB_GAP3[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source2 = RB_GAP3[entity_number_s]
feature_maps_target2 = RB_GAP3[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source3 = RB_GAP3[entity_number_s]
feature_maps_target3 = RB_GAP3[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source4 = RB_GAP3[entity_number_s]
feature_maps_target4 = RB_GAP3[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source5 = RB_GAP3[entity_number_s]
feature_maps_target5 = RB_GAP3[entity_number_t]
fig = plt.figure(figsize=(20,12))
ax0 = fig.add_subplot(321)
sns.distplot(np.ndarray.flatten(feature_maps_source0), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target0), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.title("first_conv_source / first_conv_target", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax0.get_legend().get_texts(), fontsize='22')
ax1 = fig.add_subplot(322)
sns.distplot(np.ndarray.flatten(feature_maps_source1), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target1), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.title("rb_1_source / rb_1_target", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax1.get_legend().get_texts(), fontsize='22')
ax2 = fig.add_subplot(323)
sns.distplot(np.ndarray.flatten(feature_maps_source2), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target2), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.ylabel("Probability distribution", fontsize=26)
plt.title("rb_2_source / rb_2_target", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax2.get_legend().get_texts(), fontsize='22')
ax3 = fig.add_subplot(324)
sns.distplot(np.ndarray.flatten(feature_maps_source3), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target3), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.title("rb_3_source / rb_3_target ", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax3.get_legend().get_texts(), fontsize='22')
ax4 = fig.add_subplot(325)
sns.distplot(np.ndarray.flatten(feature_maps_source4), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target4), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.xlabel("Feature Map distribution", fontsize=26)
plt.title("gap_source / gap_target ", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax4.get_legend().get_texts(), fontsize='22')
ax5 = fig.add_subplot(326)
sns.distplot(np.ndarray.flatten(feature_maps_source5), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target5), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.xlabel("Feature Map distribution", fontsize=26)
plt.title("fc1_source / fc1_target ", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax5.get_legend().get_texts(), fontsize='22')
#fig.subplots_adjust(left=2.5*0.125, bottom=0.1, right=2.5*0.9, top=2*0.9, wspace=0.2, hspace=0.2)
plt.tight_layout()
fig.savefig('Histogram'+encoder_type+'_'+str(entity_number)+'.png', dpi = 300)
elif(encoder_type=="RB_GAP4"):
print("output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,x4_feature_map_source,x4_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target,x4_feature_map_target,x4_target")
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source0 = RB_GAP4[entity_number_s]
feature_maps_target0 = RB_GAP4[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source1 = RB_GAP4[entity_number_s]
feature_maps_target1 = RB_GAP4[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source2 = RB_GAP4[entity_number_s]
feature_maps_target2 = RB_GAP4[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source3 = RB_GAP4[entity_number_s]
feature_maps_target3 = RB_GAP4[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source4 = RB_GAP4[entity_number_s]
feature_maps_target4 = RB_GAP4[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source5 = RB_GAP4[entity_number_s]
feature_maps_target5 = RB_GAP4[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source6 = RB_GAP4[entity_number_s]
feature_maps_target6 = RB_GAP4[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source7 = RB_GAP4[entity_number_s]
feature_maps_target7 = RB_GAP4[entity_number_t]
fig = plt.figure(figsize=(24,16))
ax0 = fig.add_subplot(421)
sns.distplot(np.ndarray.flatten(feature_maps_source0), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target0), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.title("first_conv_source / first_conv_target", fontsize=26)
plt.ylabel("Probability distribution", fontsize=26)
plt.setp(ax0.get_legend().get_texts(), fontsize='22')
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
ax1 = fig.add_subplot(422)
sns.distplot(np.ndarray.flatten(feature_maps_source1), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target1), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.title("rb_1_source / rb_1_target", fontsize=26)
plt.setp(ax1.get_legend().get_texts(), fontsize='22')
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
ax2 = fig.add_subplot(423)
sns.distplot(np.ndarray.flatten(feature_maps_source2), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target2), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.setp(ax2.get_legend().get_texts(), fontsize='22')
plt.title("rb_2_source / rb_2_target", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
ax3 = fig.add_subplot(424)
sns.distplot(np.ndarray.flatten(feature_maps_source3), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target3), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.setp(ax3.get_legend().get_texts(), fontsize='22')
plt.title("rb_3_source / rb_3_target", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
ax4 = fig.add_subplot(425)
sns.distplot(np.ndarray.flatten(feature_maps_source4), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target4), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.title("rb_4_source / rb_4_target", fontsize=26)
plt.ylabel("Probability distribution", fontsize=26)
plt.setp(ax4.get_legend().get_texts(), fontsize='22')
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
ax5 = fig.add_subplot(426)
sns.distplot(np.ndarray.flatten(feature_maps_source5), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target5), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.title("gap_source / gap_target ", fontsize=26)
plt.setp(ax5.get_legend().get_texts(), fontsize='22')
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
ax6 = fig.add_subplot(427)
sns.distplot(np.ndarray.flatten(feature_maps_source6), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target6), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.xlabel("Feature Map distribution", fontsize=26)
plt.title("fc1_source / fc1_target ", fontsize=26)
plt.setp(ax6.get_legend().get_texts(), fontsize='22')
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
#ax7 = fig.add_subplot(428)
#sns.distplot(np.ndarray.flatten(feature_maps_source7), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
#sns.distplot(np.ndarray.flatten(feature_maps_target7), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
#plt.xlabel("Feature Map distribution", fontsize=26)
#plt.setp(ax7.get_legend().get_texts(), fontsize='22')
#plt.title("fc_source / fc_target ", fontsize=26)
#fig.subplots_adjust(left=2.5*0.125, bottom=0.1, right=2.5*0.9, top=2*0.9, wspace=0.2, hspace=0.2)
plt.tight_layout()
fig.savefig('Histogram'+encoder_type+'_'+str(entity_number)+'.png', dpi = 300)
elif(encoder_type=="FC4"):
print("x_source,x1_source,x2_source,x3_source,x4_source,x_target,x1_target,x2_target,x3_target,x4_target")
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source0 = FC4[entity_number_s]
feature_maps_target0 = FC4[entity_number_t]
#np.save(r'feature_maps_source.npy', feature_maps_source)
#np.save(r'feature_maps_target.npy', feature_maps_target)
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source1 = FC4[entity_number_s]
feature_maps_target1 = FC4[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source2 = FC4[entity_number_s]
feature_maps_target2 = FC4[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source3 = FC4[entity_number_s]
feature_maps_target3 = FC4[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source4 = FC4[entity_number_s]
feature_maps_target4 = FC4[entity_number_t]
np.save(r'feature_maps_source4.npy', feature_maps_source4)
np.save(r'feature_maps_target4.npy', feature_maps_target4)
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source5 = FC4[entity_number_s]
feature_maps_target5 = FC4[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source6 = FC4[entity_number_s]
feature_maps_target6 = FC4[entity_number_t]
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source7 = FC4[entity_number_s]
feature_maps_target7 = FC4[entity_number_t]
print(max(np.ndarray.flatten(feature_maps_source7)))
print(max(np.ndarray.flatten(feature_maps_target7)))
fig = plt.figure(figsize=(24,16))
ax0 = fig.add_subplot(421)
sns.distplot(np.ndarray.flatten(feature_maps_source0), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target0), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.title("first_conv_source / first_conv_target", fontsize=26)
plt.ylabel("Probability distribution", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax0.get_legend().get_texts(), fontsize='22')
ax1 = fig.add_subplot(422)
sns.distplot(np.ndarray.flatten(feature_maps_source1), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target1), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.title("rb_1_source / rb_1_target", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax1.get_legend().get_texts(), fontsize='22')
ax2 = fig.add_subplot(423)
sns.distplot(np.ndarray.flatten(feature_maps_source2), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target2), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.title("rb_2_source / rb_2_target", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax2.get_legend().get_texts(), fontsize='22')
ax3 = fig.add_subplot(424)
sns.distplot(np.ndarray.flatten(feature_maps_source3), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target3), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.title("rb_3_source / rb_3_target", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax3.get_legend().get_texts(), fontsize='22')
ax4 = fig.add_subplot(425)
sns.distplot(np.ndarray.flatten(feature_maps_source4[np.where(feature_maps_source4!=0)]), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target4[np.where(feature_maps_target4!=0)]), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.title("rb_4_source / rb_4_target", fontsize=26)
plt.ylabel("Probability distribution", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax4.get_legend().get_texts(), fontsize='22')
ax5 = fig.add_subplot(426)
sns.distplot(np.ndarray.flatten(feature_maps_source5[np.where(feature_maps_source5!=0)]), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target5[np.where(feature_maps_target5!=0)]), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.title("fc1_source / fc1_target ", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax5.get_legend().get_texts(), fontsize='22')
ax6 = fig.add_subplot(427)
sns.distplot(np.ndarray.flatten(feature_maps_source6[np.where(feature_maps_source6!=0)]), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot(np.ndarray.flatten(feature_maps_target6[np.where(feature_maps_target6!=0)]), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
plt.xlabel("Feature Map distribution", fontsize=26)
plt.title("fc2_source / fc2_target ", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax6.get_legend().get_texts(), fontsize='22')
ax7 = fig.add_subplot(428)
feature_maps_source7_new = np.ndarray.flatten(feature_maps_source7)/max(np.ndarray.flatten(feature_maps_source7))
feature_maps_target7_new = np.ndarray.flatten(feature_maps_target7)/max(np.ndarray.flatten(feature_maps_target7))
sns.distplot((feature_maps_source7_new[np.where(feature_maps_source7_new!=0)]), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Netherlands"} );
sns.distplot((feature_maps_target7_new[np.where(feature_maps_target7_new!=0)]), bins=5, kde = True, hist=False, kde_kws={"lw": 3, "label": "Penobscot"} );
#plt.hist(feature_maps_source7, 10)
#plt.hist(feature_maps_target7, 10)
plt.xlabel("Feature Map distribution", fontsize=26)
plt.title("fc3_source / fc3_target ", fontsize=26)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax7.get_legend().get_texts(), fontsize='22')
#fig.subplots_adjust(left=2.5*0.125, bottom=0.1, right=2.5*0.9, top=2*0.9, wspace=0.2, hspace=0.2)
plt.tight_layout()
fig.savefig('Histogram'+encoder_type+'_'+str(entity_number)+'.png', dpi = 300)
#plotter("FC4")
#plotter("RB_GAP3",_,'Heatmap')
#plotter_modified("RB_GAP3",_,'H16eatmap')
plotter_histogram("RB_GAP3",_)
#plotter("FC4",_,'PCA')
```
```
# plotter("RB_GAP3")
plotter("RB_GAP3",_,'t-SNE')
#plotter("RB_GAP3",_,'PCA')
#plotter("RB_GAP4")
plotter("RB_GAP4",_,'t-SNE')
#plotter("RB_GAP4",_,'PCA')
def pearson_coeff(encoder_type,entity_number=-1):
feature_ids = []
# Pearson Coefficient
# model RB_GAP3
if(encoder_type=="RB_GAP3"):
print("output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target")
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source = RB_GAP3[entity_number_s]
feature_maps_target = RB_GAP3[entity_number_t]
elif(encoder_type=="RB_GAP4"):
print("output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target")
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source = RB_GAP4[entity_number_s]
feature_maps_target = RB_GAP4[entity_number_t]
elif(encoder_type=="FC4"):
print("output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target")
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source = FC4[entity_number_s]
feature_maps_target = FC4[entity_number_t]
print("Enter feature ids for any 3 features")
for i in range(3):
feature_ids.append(int(input()))
print(len(feature_ids))
for i in range(3):
data1 = np.ndarray.flatten(feature_maps_source[0, feature_ids[i], :, :]);
data2 = np.ndarray.flatten(feature_maps_target[0, feature_ids[i], :, :]);
# Pearson correlation coefficient between the flattened feature maps
covariance,_ = pearsonr(data1, data2)
print(covariance)
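# For reference, pearsonr above returns the Pearson correlation
# r = cov(x, y) / (std(x) * std(y)); a pure-Python sketch of the same
# quantity (illustrative only, not a replacement for scipy.stats.pearsonr):
def _pearson_sketch(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var_x = sum((a - mx) ** 2 for a in xs)
    var_y = sum((b - my) ** 2 for b in ys)
    return cov / (var_x * var_y) ** 0.5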
def pearson_coeff_modified(encoder_type,entity_number=-1):
feature_ids = []
# Pearson Coefficient
# model RB_GAP3
if(encoder_type=="RB_GAP3"):
print("output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target")
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source = RB_GAP3[entity_number_s]
feature_maps_target = RB_GAP3[entity_number_t]
elif(encoder_type=="RB_GAP4"):
print("output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target")
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source = RB_GAP4[entity_number_s]
feature_maps_target = RB_GAP4[entity_number_t]
elif(encoder_type=="FC4"):
print("output_source,x_source,x1_source,x2_source,x3_feature_map_source,x3_source,output_target,x_target,x1_target,x2_target,x3_feature_map_target,x3_target")
print("enter entity number source")
entity_number_s=int(input())
print("enter entity number target")
entity_number_t=int(input())
feature_maps_source = FC4[entity_number_s]
feature_maps_target = FC4[entity_number_t]
#print("Enter feature ids for any 3 features")
#for i in range(3):
# feature_ids.append(int(input()))
#print(feature_maps_source.shape)
#print(feature_maps_target.shape)
data1 = np.ndarray.flatten(feature_maps_source);
data2 = np.ndarray.flatten(feature_maps_target);
# two-sample Kolmogorov-Smirnov test between the flattened feature maps
#covariance,_ = pearsonr(data1, data2)
ks_result = stats.ks_2samp(data1, data2)
print(ks_result)
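# ks_2samp above returns (statistic, p-value); the KS statistic itself is
# just the largest gap between the two empirical CDFs. A stdlib-only sketch
# (illustrative; scipy's version also computes the p-value):
import bisect
def _ks_stat(a, b):
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in a + b:
        cdf_a = bisect.bisect_right(a, x) / len(a)
        cdf_b = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d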
#pearson_coeff("RB_GAP4",_)
pearson_coeff_modified("FC4",_)
X = np.load('source.npy')
#Xnew = X[np.where(X!=0)]
Y = np.load('target.npy')
#Ynew = Y[np.where(Y!=0)]
#print(len(Xnew), len(Ynew))
fig = plt.figure(figsize=(24,8))
ax0 = fig.add_subplot(121)
sns.distplot(np.ndarray.flatten(X), bins=30, kde = True, hist=True, color="k", kde_kws={"lw": 1.5, "label": "Netherlands", "color": "b"} );
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax0.get_legend().get_texts(), fontsize='22')
plt.ylabel("Frequency distribution", fontsize=26)
plt.xlabel("Feature Map distribution", fontsize=26)
ax1 = fig.add_subplot(122)
sns.distplot(np.ndarray.flatten(Y), bins=30, kde = True, hist=True, color="k", kde_kws={"lw": 1.5, "label": "Penobscot" , "color": "b"} );
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.setp(ax1.get_legend().get_texts(), fontsize='22')
plt.xlabel("Feature Map distribution", fontsize=26)
plt.tight_layout()
fig.savefig('Distribution_Initial.png', dpi = 300)
#plt.scatter(X,Y)
#plt.show()
```
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style("whitegrid")
```
# Model: xgboost
```
from sklearn.metrics import classification_report, confusion_matrix, plot_confusion_matrix, roc_auc_score, roc_curve, precision_recall_curve
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split, GridSearchCV
from xgboost.sklearn import XGBClassifier
```
## Data
Load the dataset, applying no major transformations to it.
```
data = pd.read_csv('../dataset/creditcard.csv')
data.head()
X = data.drop(columns=['Class'])
y = data['Class']
```
Since the data is highly imbalanced, we use stratified sampling to ensure both the negative and positive classes are represented in the train and test splits.
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0, stratify=y)
```
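To see what `stratify=y` buys us, here is a minimal pure-Python sketch of the idea (illustrative only, not sklearn's implementation): split each class separately and recombine, so a rare positive class appears in the test fold at roughly its overall rate.

```python
from collections import Counter

def stratified_test_indices(labels, test_frac):
    """Take the same fraction of indices from each class (illustrative sketch)."""
    by_class = {}
    for i, label in enumerate(labels):
        by_class.setdefault(label, []).append(i)
    test_idx = []
    for idxs in by_class.values():
        k = round(len(idxs) * test_frac)
        test_idx.extend(idxs[:k])
    return sorted(test_idx)

# ~3% positives, roughly like fraud data
labels = [0] * 97 + [1] * 3
test_idx = stratified_test_indices(labels, 1 / 3)
print(Counter(labels[i] for i in test_idx))  # both classes present
```

A plain random 1/3 split of 100 rows with only 3 positives can easily end up with zero positives in the test fold; the stratified split cannot.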
## Pipeline (build)
```
numeric_feature_indexes = slice(0, 30)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', XGBClassifier(objective= 'binary:logistic'))
])
num_features_type_map = {feature: 'float64' for feature in X_train.columns[numeric_feature_indexes]}
X_train = X_train.astype(num_features_type_map)
X_test = X_test.astype(num_features_type_map)
```
## Pipeline (train)
```
model = pipeline.fit(X_train, y_train, classifier__eval_metric='auc')
model
```
## Pipeline (evaluate)
```
y_pred = model.predict(X_test)
print(classification_report(y_test, y_pred))
disp = plot_confusion_matrix(model, X_test, y_test, display_labels=['normal', 'fraudulent'], cmap=plt.cm.Blues)
disp.ax_.grid(False)
```
Some great material is available here: https://machinelearningmastery.com/roc-curves-and-precision-recall-curves-for-classification-in-python/
```
y_pred_proba = pipeline.predict_proba(X_test)[::,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc = roc_auc_score(y_test, y_pred_proba)
fig, ax = plt.subplots(figsize=(5,5))
ax.plot(fpr,tpr,label=f"auc {auc:2.2f}")
ax.legend(loc=4)
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate');
precision, recall, _ = precision_recall_curve(y_test, y_pred_proba)
fig, ax = plt.subplots(figsize=(5,5))
no_skill = len(y_test[y_test==1]) / len(y_test)
ax.plot([0, 1], [no_skill, no_skill], linestyle='--', label='No Skill')
ax.plot(recall, precision)
ax.set_xlabel('Recall')
ax.set_ylabel('Precision');
```
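Conceptually, `roc_auc_score` is the area under the `(fpr, tpr)` curve plotted above; a minimal trapezoidal-rule sketch (illustrative only, not sklearn's implementation):

```python
def auc_trapezoid(fpr, tpr):
    """Area under an ROC curve given sorted fpr/tpr arrays (sketch)."""
    area = 0.0
    for i in range(1, len(fpr)):
        area += (fpr[i] - fpr[i - 1]) * (tpr[i] + tpr[i - 1]) / 2
    return area

print(auc_trapezoid([0.0, 1.0], [0.0, 1.0]))            # chance-level curve -> 0.5
print(auc_trapezoid([0.0, 0.0, 1.0], [0.0, 1.0, 1.0]))  # perfect curve -> 1.0
```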
## Tune the pipeline
```
parameters = {
    'classifier__max_depth': range(2, 10),
    'classifier__n_estimators': range(60, 220, 40),
    'classifier__learning_rate': [0.1, 0.01, 0.05]
}
grid_search = GridSearchCV(
estimator=pipeline,
param_grid=parameters,
scoring = 'roc_auc',
n_jobs = 3,
cv = 3,
verbose=True
)
grid_search.fit(X_train, y_train)
```
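It is worth estimating the cost of this search before running it: the grid is exhaustive, so the number of fits grows multiplicatively with each parameter (a quick stdlib sanity check on the grid above):

```python
from itertools import product

grid = {
    'max_depth': list(range(2, 10)),           # 8 values
    'n_estimators': list(range(60, 220, 40)),  # 4 values
    'learning_rate': [0.1, 0.01, 0.05],        # 3 values
}
n_candidates = len(list(product(*grid.values())))
print(n_candidates, n_candidates * 3)  # 96 candidates, 288 fits with cv=3
```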
Plot the outcome of the model search.
```
fig, ax = plt.subplots(figsize=(5, 5))
ax.plot(grid_search.cv_results_['mean_test_score'])
ax.set_ylabel("Average AUC score")
ax.set_xlabel("Model candidate")
sns.despine(offset=10)
```
```
import pandas as pd
import numpy as np
import nltk
from collections import Counter
from sklearn.metrics import log_loss
from scipy.optimize import minimize
import multiprocessing
import difflib
import time
import gc
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
def get_train():
keras_q1 = np.load('../../data/transformed/keras_tokenizer/train_q1_transformed.npy')
keras_q2 = np.load('../../data/transformed/keras_tokenizer/train_q2_transformed.npy')
xgb_feats = pd.read_csv('../../data/features/the_1owl/owl_train.csv')
abhishek_feats = pd.read_csv('../../data/features/abhishek/train_features.csv',
encoding = 'ISO-8859-1').iloc[:, 2:]
text_feats = pd.read_csv('../../data/features/other_features/text_features_train.csv',
encoding = 'ISO-8859-1')
img_feats = pd.read_csv('../../data/features/other_features/img_features_train.csv')
srk_feats = pd.read_csv('../../data/features/srk/SRK_grams_features_train.csv')
xgb_feats.drop(['z_len1', 'z_len2', 'z_word_len1', 'z_word_len2'], axis = 1, inplace = True)
y_train = xgb_feats['is_duplicate']
xgb_feats = xgb_feats.iloc[:, 8:]
X_train2 = np.concatenate([keras_q1, keras_q2, xgb_feats, abhishek_feats, text_feats, img_feats], axis = 1)
#X_train2 = np.concatenate([xgb_feats, abhishek_feats, text_feats, img_feats], axis = 1)
for i in range(X_train2.shape[1]):
if np.sum(X_train2[:, i] == y_train.values) == X_train2.shape[0]:
print('LEAK FOUND')
X_train2 = X_train2.astype('float32')
X_train2 = pd.DataFrame(X_train2)
X_train2['is_duplicate'] = y_train
print('Training data shape:', X_train2.shape)
return X_train2, y_train
def get_test():
keras_q1 = np.load('../../data/transformed/keras_tokenizer/test_q1_transformed.npy')
keras_q2 = np.load('../../data/transformed/keras_tokenizer/test_q2_transformed.npy')
xgb_feats = pd.read_csv('../../data/features/the_1owl/owl_test.csv')
abhishek_feats = pd.read_csv('../../data/features/abhishek/test_features.csv',
encoding = 'ISO-8859-1').iloc[:, 2:]
text_feats = pd.read_csv('../../data/features/other_features/text_features_test.csv',
encoding = 'ISO-8859-1')
img_feats = pd.read_csv('../../data/features/other_features/img_features_test.csv')
srk_feats = pd.read_csv('../../data/features/srk/SRK_grams_features_test.csv')
xgb_feats.drop(['z_len1', 'z_len2', 'z_word_len1', 'z_word_len2'], axis = 1, inplace = True)
xgb_feats = xgb_feats.iloc[:, 5:]
X_test2 = np.concatenate([keras_q1, keras_q2, xgb_feats, abhishek_feats, text_feats, img_feats], axis = 1)
#X_test2 = np.concatenate([keras_q1, keras_q2, xgb_feats, abhishek_feats, text_feats], axis = 1)
X_test2 = X_test2.astype('float32')
X_test2 = pd.DataFrame(X_test2)
print('Test data shape:', X_test2.shape)
return X_test2
def predict_test(model_name):
X_test = get_test()
gbm = xgb.Booster(model_file = 'saved_models/XGB/{}.txt'.format(model_name))
test_preds = gbm.predict(xgb.DMatrix(X_test))
sub_src = '/media/w/1c392724-ecf3-4615-8f3c-79368ec36380/DS Projects/Kaggle/Quora/submissions/'
sample_sub = pd.read_csv(sub_src + 'sample_submission.csv')
sample_sub['is_duplicate'] = test_preds
sample_sub.to_csv(sub_src + '{}.csv'.format(model_name), index = False)
return
def oversample():
print('Oversampling negative y according to anokas method')
X_train, y_train = get_train()
pos_train = X_train[X_train['is_duplicate'] == 1]
neg_train = X_train[X_train['is_duplicate'] == 0]
p = 0.165
scale = ((len(pos_train) / (len(pos_train) + len(neg_train))) / p) - 1
while scale > 1:
neg_train = pd.concat([neg_train, neg_train])
scale -=1
neg_train = pd.concat([neg_train, neg_train[:int(scale * len(neg_train))]])
X_train = pd.concat([pos_train, neg_train])
y_train = (np.zeros(len(pos_train)) + 1).tolist() + np.zeros(len(neg_train)).tolist()
X_train = X_train.astype('float32')
X_train.drop(['is_duplicate'], axis = 1, inplace = True)
return X_train, y_train
def oversample2(X_train):
print('Oversampling negative y according to SRK method')
y_train = np.array(X_train["is_duplicate"])
X_train.drop(['is_duplicate'], axis = 1, inplace = True)
X_train_dup = X_train[y_train==1]
X_train_non_dup = X_train[y_train==0]
X_train = np.vstack([X_train_non_dup, X_train_dup, X_train_non_dup, X_train_non_dup])
y_train = np.array([0]*X_train_non_dup.shape[0] + [1]*X_train_dup.shape[0] + [0]*X_train_non_dup.shape[0] + [0]*X_train_non_dup.shape[0])
del X_train_dup
del X_train_non_dup
print("Mean target rate : ",y_train.mean())
X_train = X_train.astype('float32')
return X_train, y_train
def kappa(preds, y):
score = []
a = 0.165 / 0.37
b = (1 - 0.165) / (1 - 0.37)
for pp,yy in zip(preds, y.get_label()):
score.append(a * yy * np.log(pp) + b * (1 - yy) * np.log(1 - pp))
score = -np.sum(score) / len(score)
return 'kappa', score
def get_temporal_pattern(df2):
df = df2.copy()
df["qmax"] = df.apply( lambda row: max(row["qid1"], row["qid2"]), axis=1 )
df = df.sort_values(by=["qmax"], ascending=True)
df["dupe_rate"] = df.is_duplicate.rolling(window=500, min_periods=500).mean()
df["timeline"] = np.arange(df.shape[0]) / float(df.shape[0])
return df
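# The rolling duplicate rate above is just a windowed mean over the sorted
# rows; an equivalent stdlib sketch of what pandas computes (illustrative):
def _rolling_mean(xs, window):
    return [sum(xs[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(xs))]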
def train_xgb(cv = False):
t = time.time()
params = {
'seed': 1337,
'colsample_bytree': 0.7,
'silent': 1,
'subsample': 0.7,
'eta': 0.05,
'objective': 'binary:logistic',
'eval_metric': 'logloss',
'max_depth': 4,
'min_child_weight': 1,
'nthread': 6,
'tree_method': 'hist',
}
X_train, y_train = get_train()
#X_train = X_train.astype('float32')
#X_train.drop(['is_duplicate'], axis = 1, inplace = True)
X_train, y_train = oversample2(X_train)
if cv:
dtrain = xgb.DMatrix(X_train, y_train)
hist = xgb.cv(params, dtrain, num_boost_round = 100000, nfold = 5,
stratified = True, early_stopping_rounds = 350, verbose_eval = 250,
seed = 1337)
del X_train, y_train
gc.collect()
print('Time it took to train in CV manner:', time.time() - t)
return hist
else:
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, stratify = y_train,
test_size = 0.2, random_state = 111)
del X_train, y_train
gc.collect()
dtrain = xgb.DMatrix(X_tr, label = y_tr)
dval = xgb.DMatrix(X_val, label = y_val)
watchlist = [(dtrain, 'train'), (dval, 'valid')]
print('Start training...')
gbm = xgb.train(params, dtrain, 100000, watchlist,
early_stopping_rounds = 350, verbose_eval = 250)
print('Start predicting...')
val_pred = gbm.predict(xgb.DMatrix(X_val), ntree_limit=gbm.best_ntree_limit)
score = log_loss(y_val, val_pred)
print('Final score:', score, '\n', 'Time it took to train and predict:', time.time() - t)
del X_tr, X_val, y_tr, y_val
gc.collect()
return gbm
def run_xgb(model_name, train = True, test = False, cv = False):
if cv:
gbm_hist = train_xgb(True)
return gbm_hist
if train:
gbm = train_xgb()
gbm.save_model('saved_models/XGB/{}.txt'.format(model_name))
if test:
predict_test('{}'.format(model_name))
return gbm
def get_train():
keras_q1 = np.load('../../data/transformed/keras_tokenizer/train_q1_transformed.npy')
keras_q2 = np.load('../../data/transformed/keras_tokenizer/train_q2_transformed.npy')
xgb_feats = pd.read_csv('../../data/features/the_1owl/owl_train.csv')
abhishek_feats = pd.read_csv('../../data/features/abhishek/train_features.csv',
encoding = 'ISO-8859-1').iloc[:, 2:]
text_feats = pd.read_csv('../../data/features/other_features/text_features_train.csv',
encoding = 'ISO-8859-1')
img_feats = pd.read_csv('../../data/features/other_features/img_features_train.csv')
srk_feats = pd.read_csv('../../data/features/srk/SRK_grams_features_train.csv')
xgb_feats.drop(['z_len1', 'z_len2', 'z_word_len1', 'z_word_len2'], axis = 1, inplace = True)
y_train = xgb_feats['is_duplicate']
xgb_feats = xgb_feats.iloc[:, 8:]
#X_train2 = np.concatenate([keras_q1, keras_q2, xgb_feats, abhishek_feats, text_feats, img_feats], axis = 1)
#X_train2 = np.concatenate([xgb_feats, abhishek_feats, text_feats, img_feats], axis = 1)
X_train2 = np.concatenate([keras_q1, keras_q2, xgb_feats, abhishek_feats, img_feats], axis = 1)
for i in range(X_train2.shape[1]):
if np.sum(X_train2[:, i] == y_train.values) == X_train2.shape[0]:
print('LEAK FOUND')
X_train2 = X_train2.astype('float32')
X_train2 = pd.DataFrame(X_train2)
X_train2['is_duplicate'] = y_train
print('Training data shape:', X_train2.shape)
return X_train2, y_train
input_folder = '/media/w/1c392724-ecf3-4615-8f3c-79368ec36380/DS Projects/Kaggle/Quora/data/'
df_train = pd.read_csv(input_folder + 'train.csv')
X_train, y_train = get_train()
X_train['qid1'] = df_train['qid1']
X_train['qid2'] = df_train['qid2']
X_traintemp = get_temporal_pattern(X_train)
X_tr = X_traintemp.iloc[:360000, :]
X_val = X_traintemp.iloc[360000:, :]
X_tr.drop(['qid1', 'qid2', 'qmax', 'dupe_rate'], axis = 1, inplace = True)
X_val.drop(['qid1', 'qid2', 'qmax', 'dupe_rate'], axis = 1, inplace = True)
y_tr = X_tr['is_duplicate']
X_tr.drop(['is_duplicate'], axis = 1, inplace = True)
y_val = X_val['is_duplicate']
X_val.drop(['is_duplicate'], axis = 1, inplace = True)
params = {
'seed': 1337,
'colsample_bytree': 0.7,
'silent': 1,
'subsample': 0.7,
'eta': 0.05,
'objective': 'binary:logistic',
'eval_metric': 'logloss',
'max_depth': 4,
'min_child_weight': 1,
'nthread': 6,
'tree_method': 'hist',
}
t = time.time()
dtrain = xgb.DMatrix(X_tr, label = y_tr)
dval = xgb.DMatrix(X_val, label = y_val)
watchlist = [(dtrain, 'train'), (dval, 'valid')]
print('Start training...')
gbm = xgb.train(params, dtrain, 100000, watchlist,
early_stopping_rounds = 350, verbose_eval = 250)
print('Start predicting...')
val_pred = gbm.predict(xgb.DMatrix(X_val), ntree_limit=gbm.best_ntree_limit)
score = log_loss(y_val, val_pred)
print('Final score:', score, '\n', 'Time it took to train and predict:', time.time() - t)
predict_test('XGB_nooversampling')
```
| github_jupyter |
# Tutorial Part 21: Introduction to Bioinformatics
So far in this tutorial, we've primarily worked on the problems of cheminformatics. We've been interested in seeing how we can use the techniques of machine learning to make predictions about the properties of molecules. In this tutorial, we're going to shift a bit and see how we can use classical computer science techniques and machine learning to tackle problems in bioinformatics.
For this, we're going to use the venerable [biopython](https://biopython.org/) library to do some basic bioinformatics. A lot of the material in this notebook is adapted from the extensive official [Biopython tutorial](http://biopython.org/DIST/docs/tutorial/Tutorial.html). We strongly recommend checking out the official tutorial after you work through this notebook!
## Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
[](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/21_Introduction_to_Bioinformatics.ipynb)
## Setup
To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment.
```
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
import deepchem
deepchem.__version__
```
We'll use `pip` to install `biopython`
```
!pip install biopython
import Bio
Bio.__version__
from Bio.Seq import Seq
my_seq = Seq("AGTACACATTG")
my_seq
my_seq.complement()
my_seq.reverse_complement()
```
## Parsing Sequence Records
We're going to download a sample `fasta` file from the Biopython tutorial to use in some exercises. This file is a set of hits for a sequence of lady slipper orchid genes.
```
!wget https://raw.githubusercontent.com/biopython/biopython/master/Doc/examples/ls_orchid.fasta
```
Let's take a look at what the contents of this file look like:
```
from Bio import SeqIO
for seq_record in SeqIO.parse('ls_orchid.fasta', 'fasta'):
print(seq_record.id)
print(repr(seq_record.seq))
print(len(seq_record))
```
## Sequence Objects
A large part of the biopython infrastructure deals with tools for handling sequences. These could be DNA sequences, RNA sequences, amino acid sequences, or even more exotic constructs. To tell biopython what type of sequence it's dealing with, you can specify the alphabet explicitly.
```
from Bio.Seq import Seq
from Bio.Alphabet import IUPAC
my_seq = Seq("ACAGTAGAC", IUPAC.unambiguous_dna)
my_seq
my_seq.alphabet
```
If we want to code a protein sequence, we can do that just as easily.
```
my_prot = Seq("AAAAA", IUPAC.protein) # Alanine pentapeptide
my_prot
my_prot.alphabet
```
We can take the length of sequences and index into them like strings.
```
print(len(my_prot))
my_prot[0]
```
You can also use slice notation on sequences to get subsequences.
```
my_prot[0:3]
```
You can concatenate sequences of the same type, so this works.
```
my_prot + my_prot
```
But this fails
```
my_prot + my_seq
```
## Transcription
Transcription is the process by which a DNA sequence is converted into messenger RNA. Remember that this is part of the "central dogma" of biology in which DNA engenders messenger RNA which engenders proteins. Here's a nice representation of this cycle borrowed from a Khan academy [lesson](https://cdn.kastatic.org/ka-perseus-images/20ce29384b2e7ff0cdea72acaa5b1dbd7287ab00.png).
<img src="https://cdn.kastatic.org/ka-perseus-images/20ce29384b2e7ff0cdea72acaa5b1dbd7287ab00.png">
Note from the image above that DNA has two strands. The top strand is typically called the coding strand, and the bottom the template strand. The template strand is used for the actual transcription process of conversion into messenger RNA, but in bioinformatics, it's more common to work with the coding strand. Let's now see how we can execute a transcription computationally using Biopython.
```
from Bio.Seq import Seq
from Bio.Alphabet import IUPAC
coding_dna = Seq("ATGATCTCGTAA", IUPAC.unambiguous_dna)
coding_dna
template_dna = coding_dna.reverse_complement()
template_dna
```
Note that these sequences match those in the image above. You might be confused about why the `template_dna` sequence is shown reversed. The reason is that by convention, the template strand is read in the reverse direction.
Let's now see how we can transcribe our `coding_dna` strand into messenger RNA. This will only swap 'T' for 'U' and change the alphabet for our object.
```
messenger_rna = coding_dna.transcribe()
messenger_rna
```
We can also perform a "back-transcription" to recover the original coding strand from the messenger RNA.
```
messenger_rna.back_transcribe()
```
## Translation
Translation is the next step in the process, whereby a messenger RNA is transformed into a protein sequence. Here's a beautiful diagram [from Wikipedia](https://en.wikipedia.org/wiki/Translation_(biology)#/media/File:Ribosome_mRNA_translation_en.svg) that lays out the basics of this process.
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/b/b1/Ribosome_mRNA_translation_en.svg/1000px-Ribosome_mRNA_translation_en.svg.png">
Note how 3 nucleotides at a time correspond to one new amino acid added to the growing protein chain. A set of 3 nucleotides which codes for a given amino acid is called a "codon." We can use the `translate()` method on the messenger rna to perform this transformation in code.
```
messenger_rna.translate()
```
The translation can also be performed directly from the coding sequence DNA
```
coding_dna.translate()
```
Let's now consider a longer genetic sequence that has some more interesting structure for us to look at.
```
coding_dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG", IUPAC.unambiguous_dna)
coding_dna.translate()
```
In both of the sequences above, '*' represents the [stop codon](https://en.wikipedia.org/wiki/Stop_codon). A stop codon is a sequence of 3 nucleotides that turns off the protein machinery. In DNA, the stop codons are 'TGA', 'TAA', 'TAG'. Note that this latest sequence has multiple stop codons. It's possible to run the machinery up to the first stop codon and pause too.
```
coding_dna.translate(to_stop=True)
```
We're going to introduce a bit of terminology here. A complete coding sequence (CDS) is a nucleotide sequence of messenger RNA which is made of a whole number of codons (that is, the length of the sequence is a multiple of 3), starts with a "start codon" and ends with a "stop codon". A start codon is basically the opposite of a stop codon and is most commonly the sequence "AUG", but can be different (especially if you're dealing with something like bacterial DNA).
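As a plain-Python illustration of this definition (not part of Biopython; it assumes the standard genetic code's start/stop codons and ignores internal stop codons and ambiguity codes), the three conditions can be checked directly:

```python
# Illustrative check of the complete-CDS definition above, using the
# standard genetic code's start codon (ATG) and stop codons.
STOP_CODONS = {"TGA", "TAA", "TAG"}

def is_complete_cds(dna):
    # Whole number of codons, starts with a start codon, ends with a stop codon.
    return (
        len(dna) % 3 == 0
        and len(dna) >= 6
        and dna.startswith("ATG")
        and dna[-3:] in STOP_CODONS
    )

print(is_complete_cds("ATGATCTCGTAA"))                             # True
print(is_complete_cds("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"))  # True
print(is_complete_cds("ATGATCTCGTA"))                              # False: 11 nt
```

Both example sequences used earlier in this notebook satisfy the definition; the truncated one fails the multiple-of-3 test.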
Let's see how we can translate a complete CDS of bacterial messenger RNA.
```
from Bio.Alphabet import generic_dna
gene = Seq("GTGAAAAAGATGCAATCTATCGTACTCGCACTTTCCCTGGTTCTGGTCGCTCCCATGGCA" + \
"GCACAGGCTGCGGAAATTACGTTAGTCCCGTCAGTAAAATTACAGATAGGCGATCGTGAT" + \
"AATCGTGGCTATTACTGGGATGGAGGTCACTGGCGCGACCACGGCTGGTGGAAACAACAT" + \
"TATGAATGGCGAGGCAATCGCTGGCACCTACACGGACCGCCGCCACCGCCGCGCCACCAT" + \
"AAGAAAGCTCCTCATGATCATCACGGCGGTCATGGTCCAGGCAAACATCACCGCTAA",
generic_dna)
# We specify a "table" to use a different translation table for bacterial proteins
gene.translate(table="Bacterial")
gene.translate(table="Bacterial", to_stop=True)
```
# Handling Annotated Sequences
Sometimes it will be useful for us to be able to handle annotated sequences where there's richer annotations, as in GenBank or EMBL files. For these purposes, we'll want to use the `SeqRecord` class.
```
from Bio.SeqRecord import SeqRecord
help(SeqRecord)
```
Let's write a bit of code involving `SeqRecord` and see how it comes out looking.
```
from Bio.SeqRecord import SeqRecord
simple_seq = Seq("GATC")
simple_seq_r = SeqRecord(simple_seq)
simple_seq_r.id = "AC12345"
simple_seq_r.description = "Made up sequence"
print(simple_seq_r.id)
print(simple_seq_r.description)
```
Let's now see how we can use `SeqRecord` to parse a large fasta file. We'll pull down a file hosted on the biopython site.
```
!wget https://raw.githubusercontent.com/biopython/biopython/master/Tests/GenBank/NC_005816.fna
from Bio import SeqIO
record = SeqIO.read("NC_005816.fna", "fasta")
record
```
Note how there's a number of annotations attached to the `SeqRecord` object!
Let's take a closer look.
```
record.id
record.name
record.description
```
Let's now look at the same sequence, but downloaded from GenBank. We'll download the hosted file from the biopython tutorial website as before.
```
!wget https://raw.githubusercontent.com/biopython/biopython/master/Tests/GenBank/NC_005816.gb
from Bio import SeqIO
record = SeqIO.read("NC_005816.gb", "genbank")
record
```
## SeqIO Objects
TODO(rbharath): Continue filling this up in future PRs.
| github_jupyter |
# Assignment 2.1 - Neural Networks
In this assignment you will implement and train a real neural network with your own hands!
In a sense, this is an extension of the previous assignment: we just need to stack several linear classifiers together!
<img src="https://i.redd.it/n9fgba8b0qr01.png" alt="Stack_more_layers" width="400px"/>
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
from dataset import load_svhn, random_split_train_val
from gradient_check import check_layer_gradient, check_layer_param_gradient, check_model_gradient
from layers import FullyConnectedLayer, ReLULayer
from model import TwoLayerNet
from trainer import Trainer, Dataset
from optim import SGD, MomentumSGD
from metrics import multiclass_accuracy
```
# Loading the data
And splitting it into training and validation sets.
```
def prepare_for_neural_network(train_X, test_X):
train_flat = train_X.reshape(train_X.shape[0], -1).astype(np.float64) / 255.0
test_flat = test_X.reshape(test_X.shape[0], -1).astype(np.float64) / 255.0
# Subtract mean
mean_image = np.mean(train_flat, axis = 0)
train_flat -= mean_image
test_flat -= mean_image
return train_flat, test_flat
train_X, train_y, test_X, test_y = load_svhn("data", max_train=10000, max_test=1000)
train_X, test_X = prepare_for_neural_network(train_X, test_X)
# Split train into train and val
train_X, train_y, val_X, val_y = random_split_train_val(train_X, train_y, num_val = 1000)
```
# As always, we start with the building blocks
We will implement the layers we need one by one. Each layer must implement:
- a forward pass, which produces the layer's output from its input and caches the data needed later
- a backward pass, which receives the gradient with respect to the layer's output and computes the gradients with respect to the input and the parameters
We start with ReLU, which has no parameters.
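For orientation, here is a minimal sketch of what such a layer does (the names and interface are illustrative; the actual class goes in layers.py and must match the course's layer interface):

```python
import numpy as np

class ReLULayerSketch:
    """Illustrative ReLU layer: forward caches the mask, backward applies it."""
    def forward(self, X):
        self.mask = X > 0          # remember which inputs were positive
        return np.where(self.mask, X, 0.0)

    def backward(self, d_out):
        # Gradient flows only through the positions that were positive.
        return d_out * self.mask

layer = ReLULayerSketch()
out = layer.forward(np.array([[1.0, -2.0], [3.0, -0.5]]))
grad = layer.backward(np.ones((2, 2)))
print(out)   # [[1. 0.] [3. 0.]]
print(grad)  # [[1. 0.] [1. 0.]]
```

Caching the mask in `forward` is what "remembers the necessary data" for the backward pass.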
```
# TODO: Implement ReLULayer layer in layers.py
# Note: you'll need to copy implementation of the gradient_check function from the previous assignment
X = np.array([[1,-2,3],
[-1, 2, 0.1]
])
assert check_layer_gradient(ReLULayer(), X)
```
Now let's implement a fully connected layer, which has two parameter arrays: W (weights) and B (bias).
All of our layers will store their parameters in a special `Param` class, which holds the parameter values and the gradients of those parameters computed during the backward pass.
This makes it possible to accumulate (sum) gradients from different parts of the loss function, for example from the cross-entropy loss and the regularization loss.
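A sketch of the idea, assuming a minimal `Param` holder (the real signatures live in layers.py and may differ):

```python
import numpy as np

class Param:
    # Holds a parameter value and an accumulated gradient, as described above.
    def __init__(self, value):
        self.value = value
        self.grad = np.zeros_like(value)

class FCLayerSketch:
    def __init__(self, n_input, n_output):
        self.W = Param(0.01 * np.random.randn(n_input, n_output))
        self.B = Param(0.01 * np.random.randn(1, n_output))

    def forward(self, X):
        self.X = X                       # cache input for the backward pass
        return X @ self.W.value + self.B.value

    def backward(self, d_out):
        # Accumulate (+=) parameter gradients; return gradient w.r.t. input.
        self.W.grad += self.X.T @ d_out
        self.B.grad += d_out.sum(axis=0, keepdims=True)
        return d_out @ self.W.value.T

np.random.seed(0)
layer = FCLayerSketch(3, 4)
X = np.random.randn(2, 3)
out = layer.forward(X)
d_in = layer.backward(np.ones((2, 4)))
print(out.shape, d_in.shape)  # (2, 4) (2, 3)
```

Note the `+=` on the gradients: that is exactly the accumulation from different loss terms mentioned above.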
```
# TODO: Implement FullyConnected layer forward and backward methods
assert check_layer_gradient(FullyConnectedLayer(3, 4), X)
# TODO: Implement storing gradients for W and B
assert check_layer_param_gradient(FullyConnectedLayer(3, 4), X, 'W')
assert check_layer_param_gradient(FullyConnectedLayer(3, 4), X, 'B')
```
## Building the neural network
Now we implement the simplest neural network with two fully connected layers and a ReLU nonlinearity. Implement the `compute_loss_and_gradients` function: it should run the forward and backward passes through both layers to compute the gradients.
Don't forget to zero the gradients at the start of the function.
```
# TODO: In model.py, implement compute_loss_and_gradients function
model = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 2, reg = 0)
loss = model.compute_loss_and_gradients(train_X[:10], train_y[:10])
# TODO Now implement backward pass and aggregate all of the params
check_model_gradient(model, train_X[:10], train_y[:10])
```
Now add regularization to the model: it should be added to the loss and make its contribution to the gradients.
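The standard L2 penalty can be sketched as follows (illustrative; the assignment's function may use a different name or a 0.5 scaling convention):

```python
import numpy as np

def l2_regularization(W, reg_strength):
    """Illustrative L2 penalty: loss = reg * sum(W^2), grad = 2 * reg * W."""
    loss = reg_strength * np.sum(W ** 2)
    grad = 2.0 * reg_strength * W
    return loss, grad

W = np.array([[1.0, -2.0], [0.5, 0.0]])
loss, grad = l2_regularization(W, 0.1)
print(loss)  # 0.1 * (1 + 4 + 0.25) = 0.525
print(grad)  # elementwise 0.2 * W
```

The returned `grad` is added into the corresponding `Param.grad`, which is why gradient accumulation matters.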
```
# TODO Now implement l2 regularization in the forward and backward pass
model_with_reg = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 3, reg = 1e-1)
loss_with_reg = model_with_reg.compute_loss_and_gradients(train_X[:10], train_y[:10])
assert loss_with_reg > loss and not np.isclose(loss_with_reg, loss), \
"Loss with regularization (%2.4f) should be higher than without it (%2.4f)!" % (loss, loss_with_reg)
check_model_gradient(model_with_reg, train_X[:10], train_y[:10])
```
We also implement a function that predicts (evaluates) the model on new data.
What accuracy do we expect to see before training starts?
```
# Finally, implement predict function!
# TODO: Implement predict function
# What would be the value we expect?
multiclass_accuracy(model_with_reg.predict(train_X[:30]), train_y[:30])
def display_history(train_history, val_history, loss_history):
plt.figure(figsize=(14, 5))
plt.subplot(121)
plt.title("Train/Validation accuracy")
plt.plot(train_history, label='train')
plt.plot(val_history, label='val')
plt.legend()
plt.subplot(122)
plt.title("Loss")
plt.plot(loss_history, label='loss')
plt.legend();
```
# Filling in the training loop code
```
model = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 100, reg = .25e-2)
dataset = Dataset(train_X, train_y, val_X, val_y)
trainer = Trainer(model, dataset, SGD(), learning_rate=.25e-1, num_epochs=30)
# TODO Implement missing pieces in Trainer.fit function
# You should expect loss to go down and train and val accuracy go up for every epoch
loss_history, train_history, val_history = trainer.fit()
display_history(train_history, val_history, loss_history)
```
# Improving the training process
We will implement several key optimizations needed to train modern neural networks.
## Learning rate decay
One essential optimization when training neural networks is gradually decreasing the learning rate over the course of training.
A standard approach is to reduce the learning rate every N epochs by a factor d (often called the decay). As always, N and d are hyperparameters and should be tuned based on performance on the validation data.
In our case N will be 1.
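Schematically, with N = 1 the schedule is just one multiplication per epoch (illustrative variable names; the assignment puts this inside `Trainer.fit`):

```python
# Illustrative per-epoch learning-rate decay (N = 1 epoch between decays).
learning_rate = 3e-2
learning_rate_decay = 0.99
num_epochs = 30

for epoch in range(num_epochs):
    # ... run one epoch of SGD with the current learning_rate ...
    learning_rate *= learning_rate_decay  # decay once per epoch

print(learning_rate)  # 3e-2 * 0.99**30, roughly 0.0222
```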
```
# TODO Implement learning rate decay inside Trainer.fit method
# Decay should happen once per epoch
model = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 100, reg = .25e-2)
dataset = Dataset(train_X, train_y, val_X, val_y)
trainer = Trainer(model, dataset, SGD(), learning_rate=.3e-1, learning_rate_decay=0.99, num_epochs=30)
initial_learning_rate = trainer.learning_rate
loss_history, train_history, val_history = trainer.fit()
assert trainer.learning_rate < initial_learning_rate, "Learning rate should've been reduced"
assert trainer.learning_rate > 0.5*initial_learning_rate, "Learning rate shouldn't have been reduced that much!"
# display_history(train_history, val_history, loss_history)
```
## Momentum SGD
Another big class of optimizations is using more efficient gradient descent methods. We will implement one of them: Momentum SGD.
This method maintains a velocity, uses the gradient to change that velocity at each step, and updates the weights proportionally to the velocity.
(Physical analogy: instead of setting the velocity directly, the gradients now set the acceleration, but a friction force is also present.)
```
velocity = momentum * velocity - learning_rate * gradient
w = w + velocity
```
Here `momentum` is the decay coefficient, which is also a hyperparameter (fortunately, it often has a good default value; the typical range is 0.8-0.99).
A few useful links where the method is covered in more detail:
http://cs231n.github.io/neural-networks-3/#sgd
https://distill.pub/2017/momentum/
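As a sketch, the pseudocode above maps onto an optimizer class like this (illustrative; the actual `MomentumSGD.update` signature in optim.py may differ):

```python
import numpy as np

class MomentumSGDSketch:
    """Illustrative momentum update matching the pseudocode above."""
    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.velocity = {}  # one velocity buffer per parameter name

    def update(self, name, w, d_w, learning_rate):
        v = self.velocity.get(name, np.zeros_like(w))
        v = self.momentum * v - learning_rate * d_w  # velocity update
        self.velocity[name] = v
        return w + v                                 # weight update

opt = MomentumSGDSketch(momentum=0.9)
w = np.array([1.0, 2.0])
grad = np.array([0.5, -0.5])
w = opt.update("W", w, grad, learning_rate=0.1)
print(w)  # first step with zero velocity: w - 0.1 * grad = [0.95, 2.05]
```

On the first step the update reduces to plain SGD; on later steps the accumulated velocity carries the weights further in consistent gradient directions.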
```
# TODO: Implement MomentumSGD.update function in optim.py
model = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 100, reg = 16e-4)
dataset = Dataset(train_X, train_y, val_X, val_y)
trainer = Trainer(model, dataset, MomentumSGD(momentum=.85), learning_rate=5e-3,
learning_rate_decay=0.99, num_epochs=20)
# You should see even better results than before!
loss_history, train_history, val_history = trainer.fit()
display_history(train_history, val_history, loss_history)
```
# Alright, time to actually train the network!
## One last test: overfitting on a small dataset
A good way to check that everything is implemented correctly is to overfit the network on a small dataset.
Our model has enough capacity to fit a small dataset perfectly, so we expect to quickly reach 100% accuracy on the training set.
If that doesn't happen, there is a bug somewhere!
```
data_size = 15
model = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 100, reg = 1e-1)
dataset = Dataset(train_X[:data_size], train_y[:data_size], val_X[:data_size], val_y[:data_size])
trainer = Trainer(model, dataset, SGD(), learning_rate=1e-1, num_epochs=150, batch_size=5)
# You should expect this to reach 1.0 training accuracy
loss_history, train_history, val_history = trainer.fit()
```
Now let's find hyperparameters for which this process converges faster.
If everything is implemented correctly, there are parameter settings for which the process converges in **20** epochs or fewer.
Find them!
```
# Now, tweak some hyper parameters and make it train to 1.0 accuracy in 20 epochs or less
model = TwoLayerNet(n_input = train_X.shape[1], n_output = 10, hidden_layer_size = 100, reg = .25e-2)
dataset = Dataset(train_X[:data_size], train_y[:data_size], val_X[:data_size], val_y[:data_size])
# TODO: Change any hyperparamers or optimizators to reach training accuracy in 20 epochs
trainer = Trainer(model, dataset, SGD(), learning_rate=3e-1, num_epochs=20, batch_size=5)
loss_history, train_history, val_history = trainer.fit()
```
# And now, the main event!
Train the best network you can! Feel free to add and change parameters, vary the number of neurons in the layers, and experiment in any way you like.
Achieve better than **40%** accuracy on the validation set.
```
from itertools import product
from tqdm import tqdm_notebook
# Let's train the best one-hidden-layer network we can
learning_rates = 5e-3 * 10**np.random.uniform(-.5, .5, 5)
reg_strength = 16e-4 * 10**np.random.uniform(-.5, .5, 5)
learning_rate_decay = 0.99
hidden_layer_size = 100
num_epochs = 100
best_classifier = None
best_trainer = None
best_val_accuracy = 0
# TODO find the best hyperparameters to train the network
# Don't hesitate to add new values to the arrays above, perform experiments, use any tricks you want
# You should expect to get to at least 40% of validation accuracy
# Save loss/train/history of the best classifier to the variables above
dataset = Dataset(train_X, train_y, val_X, val_y)
for lr, rs in tqdm_notebook(list(product(learning_rates, reg_strength))):
print('----------')
print('Learning Rate: %.6f, Regularization: %.6f' % (lr, rs))
print('----------')
model = TwoLayerNet(n_input = train_X.shape[1], n_output = 10,
hidden_layer_size = hidden_layer_size, reg = rs)
trainer = Trainer(model, dataset, MomentumSGD(), learning_rate=lr,
learning_rate_decay=learning_rate_decay, num_epochs=num_epochs)
max_val_accuracy = max(trainer.fit()[2])
if(max_val_accuracy > best_val_accuracy):
best_val_accuracy = max_val_accuracy
best_classifier = model
best_trainer = trainer
print('best validation accuracy achieved: %f' % best_val_accuracy)
```
# As usual, let's see how our best model performs on the test data
```
test_pred = best_classifier.predict(test_X)
test_accuracy = multiclass_accuracy(test_pred, test_y)
print('Neural net test set accuracy: %f' % (test_accuracy, ))
```
| github_jupyter |
# Evaluate likelihood ratio
```
import sys, os
sys.path.append('../')
import logging
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.colors import colorConverter
from scipy.stats import norm
from sklearn.metrics import roc_curve
from inference.utils import s_from_r, shuffle
import paper_settings
logging.basicConfig(
format='%(asctime)-5.5s %(name)-20.20s %(levelname)-7.7s %(message)s',
datefmt='%H:%M',
level=logging.INFO
)
paper_settings.setup()
```
## Setting
```
setting = "full"
filename = "alices_full_sgd1e2_grid" # "calibrated_alices_full_sgd1e2_grid"
#samples = [2, 16, 3,6]
samples = [3, 28, 21, 16]
```
## Data
```
xs = np.load("../data/samples/x_test_{}_point.npy".format(setting))[samples]
llrs = np.load("../data/results/llr_{}.npy".format(filename))[:,samples]
resolution = 25
f_sub_1d = np.linspace(0.001, 0.200, resolution)
beta_1d = np.linspace(-2.5, -1.5, resolution)
theta0, theta1 = np.meshgrid(f_sub_1d, beta_1d)
theta_grid = np.vstack((theta0.flatten(), theta1.flatten())).T
bin_size = f_sub_1d[1] - f_sub_1d[0]
alpha_edges = np.linspace(f_sub_1d[0] - bin_size/2, f_sub_1d[-1] + bin_size/2, resolution + 1)
bin_size = beta_1d[1] - beta_1d[0]
beta_edges = np.linspace(beta_1d[0] - bin_size/2, beta_1d[-1] + bin_size/2, resolution + 1)
```
## Plot
```
ncols = len(samples)
nrows = 2
gradmin, gradmax = -0.1, 0.1
gradrelmax = 0.002
xmin, xmax = 2.3, 3.15
llrmin, llrmax = 0., 25.
normalize_to_mle = True
fig, gs = paper_settings.grid2_width(ncols, nrows)
for i in range(ncols):
ax = plt.subplot(gs[i])
im = plt.imshow(
np.log10(xs[i]),
vmin=xmin,
vmax=xmax,
cmap='gist_gray',
extent=(-3.2,3.2,-3.2,3.2),
origin="lower",
alpha=1.
)
if i == 0:
plt.plot([-2.9, -1.9], [-2.9, -2.9], c="white", lw=1.5, ls="-")
plt.text(-2.4, -2.6, "$1''$", va="center", ha="center", color="white")
plt.xlim(-3.2,3.2)
plt.ylim(-3.2,3.2)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
if i == 0:
ax = plt.subplot(gs[ncols])
cbar = plt.colorbar(im, cax=ax)
cbar.set_label(r"$\log_{10} \; x$")
cbar.set_ticks([2.4,2.6,2.8,3.0])
ax.yaxis.set_label_coords(5.3, 0.5)
ax = plt.subplot(gs[ncols + 1 + i])
z = -2. * llrs[:, i]
z = z - np.min(z)
z_clip = np.clip(z, llrmin, llrmax)
pcm = ax.imshow(
(z_clip**0.5).reshape((resolution, resolution)),
extent=(alpha_edges[0],alpha_edges[-1],beta_edges[0], beta_edges[-1]),
cmap=paper_settings.CMAP1,
origin="lower",
aspect="auto",
norm=matplotlib.colors.Normalize(vmin=llrmin**0.5, vmax=llrmax**0.5),
)
cs = plt.contour(
0.5 * (alpha_edges[1:] + alpha_edges[:-1]),
0.5 * (beta_edges[1:] + beta_edges[:-1]),
z.reshape((resolution, resolution)),
[5.991464547107983],
colors="black",
linewidths=1.,
linestyles=["-"]
)
plt.scatter(0.05, -1.9, s=10., color='black', marker='*')
plt.xlim(0.,0.2)
plt.ylim(-2.5,-1.5)
plt.xlabel(r'$f_{\mathrm{sub}}$')
plt.xticks([0.,0.05,0.1,0.15])
if i == 0:
plt.ylabel(r'$\beta$')
plt.yticks([-1.6,-1.8,-2.0,-2.2,-2.4],["-0.6", "-0.8", "-1.0", "-1.2", "-1.4"])
ax.yaxis.set_label_coords(-0.24, 0.5)
else:
plt.ylabel(None)
ax.set_yticklabels(['' for item in ax.get_xticklabels()])
if i == 0:
ax = plt.subplot(gs[2*ncols + 1])
cbar = fig.colorbar(
pcm,
cax=ax,
ticks=[0.,1.,2.,3.,4.,5.],
format=matplotlib.ticker.FuncFormatter(lambda x, _ : "{:.0f}".format(x**2))
)
cbar.set_label(r'$-2 \Delta \log \hat{r}(x | \beta, f_{\mathrm{sub}})$')
ax.yaxis.set_label_coords(5., 0.5)
plt.savefig("../figures/individual_lens_predictions.pdf", dpi=300)
```
| github_jupyter |
*Copyright (c) Microsoft Corporation. All rights reserved.*
*Licensed under the MIT License.*
# Natural Language Inference on MultiNLI Dataset using Transformers
# Before You Start
It takes about 4 hours to fine-tune the `bert-large-cased` model on a Standard_NC24rs_v3 Azure Data Science Virtual Machine with 4 NVIDIA Tesla V100 GPUs.
> **Tip:** If you want to run through the notebook quickly, you can set the **`QUICK_RUN`** flag in the cell below to **`True`** to run the notebook on a small subset of the data and a smaller number of epochs.
If you run into CUDA out-of-memory error, try reducing the `BATCH_SIZE` and `MAX_SEQ_LENGTH`, but note that model performance will be compromised.
```
## Set QUICK_RUN = True to run the notebook on a small subset of data and a smaller number of epochs.
QUICK_RUN = False
```
## Summary
In this notebook, we demonstrate fine-tuning pretrained transformer models to perform Natural Language Inference (NLI). We use the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) dataset and the task is to classify sentence pairs into three classes: contradiction, entailment, and neutral.
To classify a sentence pair, we concatenate the tokens in both sentences and separate the sentences by the special [SEP] token. A [CLS] token is prepended to the token list and used as the aggregate sequence representation for the classification task. The NLI task essentially becomes a sequence classification task. For example, the figure below shows how [BERT](https://arxiv.org/abs/1810.04805) classifies sentence pairs.
<img src="https://nlpbp.blob.core.windows.net/images/bert_two_sentence.PNG">
We compare the training time and performance of bert-large-cased and xlnet-large-cased. The model used can be set in the **Configurations** section.
```
import sys, os
nlp_path = os.path.abspath('../../')
if nlp_path not in sys.path:
sys.path.insert(0, nlp_path)
import scrapbook as sb
from tempfile import TemporaryDirectory
import numpy as np
from sklearn.metrics import classification_report
from sklearn.preprocessing import LabelEncoder
import torch
from utils_nlp.models.transformers.sequence_classification import Processor, SequenceClassifier
from utils_nlp.dataset.multinli import load_pandas_df
from utils_nlp.common.pytorch_utils import dataloader_from_dataset
from utils_nlp.common.timer import Timer
```
To see all the model supported by `SequenceClassifier`, call the `list_supported_models` method.
**Note**: Although `SequenceClassifier` supports distilbert for single-sequence classification, distilbert doesn't support sentence pair classification and cannot be used in this notebook.
```
SequenceClassifier.list_supported_models()
```
## Configurations
```
MODEL_NAME = "bert-large-cased"
TO_LOWER = False
BATCH_SIZE = 16
# MODEL_NAME = "xlnet-large-cased"
# TO_LOWER = False
# BATCH_SIZE = 16
TRAIN_DATA_USED_FRACTION = 1
DEV_DATA_USED_FRACTION = 1
NUM_EPOCHS = 2
WARMUP_STEPS= 2500
if QUICK_RUN:
TRAIN_DATA_USED_FRACTION = 0.001
DEV_DATA_USED_FRACTION = 0.01
NUM_EPOCHS = 1
WARMUP_STEPS= 10
if not torch.cuda.is_available():
BATCH_SIZE = BATCH_SIZE // 2
RANDOM_SEED = 42
# model configurations
MAX_SEQ_LENGTH = 128
# optimizer configurations
LEARNING_RATE= 5e-5
# data configurations
TEXT_COL_1 = "sentence1"
TEXT_COL_2 = "sentence2"
LABEL_COL = "gold_label"
LABEL_COL_NUM = "gold_label_num"
CACHE_DIR = TemporaryDirectory().name
```
## Load Data
The MultiNLI dataset comes with three subsets: train, dev_matched, and dev_mismatched. The dev_matched subset is from the same genres as the train dataset, while the dev_mismatched subset is from genres not seen in the training data.
The `load_pandas_df` function downloads and extracts the zip files if they don't already exist in `local_cache_path` and returns the data subset specified by `file_split`.
```
train_df = load_pandas_df(local_cache_path=CACHE_DIR, file_split="train")
dev_df_matched = load_pandas_df(local_cache_path=CACHE_DIR, file_split="dev_matched")
dev_df_mismatched = load_pandas_df(local_cache_path=CACHE_DIR, file_split="dev_mismatched")
dev_df_matched = dev_df_matched.loc[dev_df_matched['gold_label'] != '-']
dev_df_mismatched = dev_df_mismatched.loc[dev_df_mismatched['gold_label'] != '-']
print("Training dataset size: {}".format(train_df.shape[0]))
print("Development (matched) dataset size: {}".format(dev_df_matched.shape[0]))
print("Development (mismatched) dataset size: {}".format(dev_df_mismatched.shape[0]))
print()
print(train_df[['gold_label', 'sentence1', 'sentence2']].head())
# sample
train_df = train_df.sample(frac=TRAIN_DATA_USED_FRACTION).reset_index(drop=True)
dev_df_matched = dev_df_matched.sample(frac=DEV_DATA_USED_FRACTION).reset_index(drop=True)
dev_df_mismatched = dev_df_mismatched.sample(frac=DEV_DATA_USED_FRACTION).reset_index(drop=True)
label_encoder = LabelEncoder()
train_labels = label_encoder.fit_transform(train_df[LABEL_COL])
train_df[LABEL_COL_NUM] = train_labels
num_labels = len(np.unique(train_labels))
```
## Tokenize and Preprocess
Before training, we tokenize and preprocess the sentence texts to convert them into the format required by transformer model classes.
The `dataset_from_dataframe` method of the `Processor` class performs the following preprocessing steps and returns a Pytorch `DataSet`
* Tokenize input texts using the tokenizer of the pre-trained model specified by `model_name`.
* Convert the tokens into token indices corresponding to the tokenizer's vocabulary.
* Pad or truncate the token lists to the specified max length.
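The last step, padding or truncating to a fixed length, is simple enough to sketch in plain Python (illustrative only; the real `Processor` also builds attention masks and token type ids):

```python
def pad_or_truncate(token_ids, max_len, pad_id=0):
    # Truncate to max_len, or right-pad with pad_id up to max_len.
    if len(token_ids) >= max_len:
        return token_ids[:max_len]
    return token_ids + [pad_id] * (max_len - len(token_ids))

print(pad_or_truncate([101, 7592, 102], 5))      # [101, 7592, 102, 0, 0]
print(pad_or_truncate([101, 1, 2, 3, 4, 5], 4))  # [101, 1, 2, 3]
```

Fixing the length this way is what lets variable-length sentence pairs be batched into rectangular tensors.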
```
processor = Processor(model_name=MODEL_NAME, cache_dir=CACHE_DIR, to_lower=TO_LOWER)
train_dataset = processor.dataset_from_dataframe(
df=train_df,
text_col=TEXT_COL_1,
label_col=LABEL_COL_NUM,
text2_col=TEXT_COL_2,
max_len=MAX_SEQ_LENGTH,
)
dev_dataset_matched = processor.dataset_from_dataframe(
df=dev_df_matched,
text_col=TEXT_COL_1,
text2_col=TEXT_COL_2,
max_len=MAX_SEQ_LENGTH,
)
dev_dataset_mismatched = processor.dataset_from_dataframe(
df=dev_df_mismatched,
text_col=TEXT_COL_1,
text2_col=TEXT_COL_2,
max_len=MAX_SEQ_LENGTH,
)
train_dataloader = dataloader_from_dataset(
train_dataset, batch_size=BATCH_SIZE, shuffle=True
)
dev_dataloader_matched = dataloader_from_dataset(
dev_dataset_matched, batch_size=BATCH_SIZE, shuffle=False
)
dev_dataloader_mismatched = dataloader_from_dataset(
dev_dataset_mismatched, batch_size=BATCH_SIZE, shuffle=False
)
```
## Train and Predict
### Create Classifier
```
classifier = SequenceClassifier(
model_name=MODEL_NAME, num_labels=num_labels, cache_dir=CACHE_DIR
)
```
### Train Classifier
```
with Timer() as t:
classifier.fit(
train_dataloader,
num_epochs=NUM_EPOCHS,
learning_rate=LEARNING_RATE,
warmup_steps=WARMUP_STEPS,
)
print("Training time : {:.3f} hrs".format(t.interval / 3600))
```
### Predict on Test Data
```
with Timer() as t:
predictions_matched = classifier.predict(dev_dataloader_matched)
print("Prediction time : {:.3f} hrs".format(t.interval / 3600))
with Timer() as t:
predictions_mismatched = classifier.predict(dev_dataloader_mismatched)
print("Prediction time : {:.3f} hrs".format(t.interval / 3600))
```
## Evaluate
```
predictions_matched = label_encoder.inverse_transform(predictions_matched)
print(classification_report(dev_df_matched[LABEL_COL], predictions_matched, digits=3))
predictions_mismatched = label_encoder.inverse_transform(predictions_mismatched)
print(classification_report(dev_df_mismatched[LABEL_COL], predictions_mismatched, digits=3))
```
## Compare Model Performance
|Model name|Training time|Scoring time|Matched F1|Mismatched F1|
|:--------:|:-----------:|:----------:|:--------:|:-----------:|
|xlnet-large-cased|5.15 hrs|0.11 hrs|0.887|0.890|
|bert-large-cased|4.01 hrs|0.08 hrs|0.867|0.867|
```
result_matched_dict = classification_report(dev_df_matched[LABEL_COL], predictions_matched, digits=3, output_dict=True)
result_mismatched_dict = classification_report(dev_df_mismatched[LABEL_COL], predictions_mismatched, digits=3, output_dict=True)
sb.glue("matched_precision", result_matched_dict["weighted avg"]["precision"])
sb.glue("matched_recall", result_matched_dict["weighted avg"]["recall"])
sb.glue("matched_f1", result_matched_dict["weighted avg"]["f1-score"])
sb.glue("mismatched_precision", result_mismatched_dict["weighted avg"]["precision"])
sb.glue("mismatched_recall", result_mismatched_dict["weighted avg"]["recall"])
sb.glue("mismatched_f1", result_mismatched_dict["weighted avg"]["f1-score"])
```
---
# This notebook refines the model using only playable songs:
### When generating new songs to play, we only want songs with a difficulty of 8. Therefore we will load the previously fine-tuned model and fine-tune it further on only the desired songs.
```
from pathlib import Path
import pandas as pd
import re
#Get the list of all of the step files
step_files = list(Path("C:/Users/brent/Desktop/StepMania 5").rglob("*.[dD][wW][iI]"))
#Get the list of all of the song audio files
song_files = list(Path("C:/Users/brent/Desktop/StepMania 5").rglob("*.[mM][pP][3]"))
def process_song(path, title):
#Open File
text_file = open(path, "r")
lines = text_file.readlines()
text_file.close()
#Combine all text into single line
song = "".join(lines)
#Remove newline characters
song = re.sub('\n', '', song)
#Split on semicolon and then add the semicolons back into the respective lines
song = song.split(';')
song = [line+';' for line in song][:-1]
    #Remove comment lines containing '//' (some files had these for some reason)
song = [line for line in song if (line.find('//') == -1)]
#Create a dataframe of the song
df = pd.DataFrame()
df[title] = song
return df
def pull_all_step_patterns(song, row):
song = song[row].str.split(":", n = 3, expand = True)
song = song[song[0].isin(["#SINGLE","#SOLO"])]
return song
def remove_leading_zeroes(songs):
"""Take a song step file and remove the leading zeroes"""
    songs[3] = songs[3].str.replace(r"^0+", "", regex=True)
return songs
def fastaiFormat(songs):
"""Take a list of step files and make it into a format for fastai NLP"""
songs = songs.reset_index()
songs = songs[[1,3]]
songs.columns = ['label','text']
#Split the song into characters with spaces
songs['text'] = songs['text'].apply(lambda x: " ".join(x))
#Remove the trailing semicolon as we can add it back in when we are done predicting songs
songs['text'] = songs['text'].apply(lambda x: x[:-1])
return songs
def selectedDifficulty(songs, low=1, high=10):
"""Filters the songs only within a specific difficulty range given by low and high (inclusive)"""
songs = songs[pd.to_numeric(songs[2]).between(low,high)]
return songs
def join_all_step_patterns(step_files):
"""Create a dataframe of all songs for a fastai training model."""
songs = pd.DataFrame()
for row, path in enumerate(step_files):
df = process_song(path, row)
df = pull_all_step_patterns(df, row)
songs = pd.concat([songs,df])
songs = remove_leading_zeroes(songs)
songs = selectedDifficulty(songs, low=8, high=8)
songs = fastaiFormat(songs)
return songs
songs = join_all_step_patterns(step_files)
songs.head()
songs.to_csv("songs_8.csv", index=False)
```
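As a quick sanity check, the core of the parsing pipeline above can be exercised on a tiny in-memory example. The DWI content below is hypothetical, and plain string operations stand in for the pandas steps:

```python
import re

# Hypothetical minimal DWI-style content: a title tag plus one #SINGLE chart
# with difficulty 8 and leading zeroes in the step data.
content = "#TITLE:Demo;\n#SINGLE:BASIC:8:000822001100;\n"

song = re.sub('\n', '', content)                  # drop newline characters
lines = [l + ';' for l in song.split(';')][:-1]   # re-attach the semicolons
lines = [l for l in lines if '//' not in l]       # drop comment lines
charts = [l.split(':', 3) for l in lines]         # split into at most 4 fields
charts = [c for c in charts if c[0] in ('#SINGLE', '#SOLO')]

# Strip leading zeroes, drop the trailing semicolon, and space out characters
steps = re.sub(r'^0+', '', charts[0][3]).rstrip(';')
spaced = ' '.join(steps)
print(steps)   # 822001100
print(spaced)  # 8 2 2 0 0 1 1 0 0
```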
# Refine our Language Model
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.text import *
import os
from pathlib import Path
import pandas as pd
import re
import string
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.text import *
import os
cwd = os.getcwd()
path = Path(cwd)
all_letters = list(string.printable + string.whitespace)
#We don't want to remove repetition in the DDR song as that is part of it
customtokenizer = Tokenizer(pre_rules= [], post_rules=[])
processors = [TokenizeProcessor(tokenizer=customtokenizer, mark_fields=False),
NumericalizeProcessor(vocab=Vocab.create(all_letters, max_vocab=1000, min_freq=0))]
data = (TextList.from_csv(path, "songs_8.csv", cols='text', processor=processors)
.split_by_rand_pct(0.2)
.label_for_lm()
.databunch(bs=96))
data.save('data_block_lm4.pkl')
data_lm = load_data(path, 'data_block_lm4.pkl',bs=96)
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
learn.load('fine_tuned_3')
learn.load_encoder('fine_tuned_enc_3')
learn.fit_one_cycle(4, 1e-2, moms=(0.8,0.7))
learn.unfreeze()
learn.fit_one_cycle(10, 1e-3, moms=(0.8,0.7))
learn.fit_one_cycle(10, 1e-4, moms=(0.8,0.7))
learn.save('fine_tuned_4')
learn.save_encoder('fine_tuned_enc_4')
TEXT = ""
N_WORDS = 200
N_SENTENCES = 1
print("\n".join(learn.predict(TEXT, N_WORDS, temperature=0.50) for _ in range(N_SENTENCES)))
```
# Now we work on the classifier
```
from pathlib import Path
import pandas as pd
import re
import string
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.text import *
import os
cwd = os.getcwd()
path = Path(cwd)
#Try out the datablock API to see if we can replicate and use either no tokenization or our custom tokenizer
all_letters = list(string.printable + string.whitespace)
#We don't want to remove repetition in the DDR song as that is part of it
customtokenizer = Tokenizer(pre_rules= [], post_rules=[])
processors = [TokenizeProcessor(tokenizer=customtokenizer, mark_fields=False),
NumericalizeProcessor(vocab=Vocab.create(all_letters, max_vocab=1000, min_freq=0))]
data_clas = (TextList.from_csv(path, 'songs_8.csv', cols='text', processor=processors)
.split_by_rand_pct(0.2)
.label_from_df('label')
.databunch(bs=12))
data_clas.save('data_clas_4.pkl')
data_clas = load_data(path, 'data_clas_4.pkl', bs=12)
learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.3)
learn.load_encoder('fine_tuned_enc_4')
learn.fit_one_cycle(5, 1e-2, moms=(0.8,0.7))
learn.unfreeze()
learn.fit_one_cycle(10, 1e-3, moms=(0.8,0.7))
learn.fit_one_cycle(10, 1e-4, moms=(0.8,0.7))
learn.save('fine_tuned_classifier_4')
learn.save_encoder('fine_tuned_enc_classifier_4')
```
## What are the most frequently misclassified?
```
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data_clas.valid_ds)==len(losses)==len(idxs)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
interp.most_confused(min_val=2)
```
---
# Getting Data Ready
Forecasting is used in a variety of applications and business use cases. For example:
* Retailers need to forecast the sales of their products to decide how much stock they need by location.
* Manufacturers need to estimate the number of parts required at their factories to optimize their supply chain.
* Businesses need to estimate their flexible workforce needs.
* Utilities need to forecast electricity consumption in order to maintain an efficient energy network.
* Enterprises need to estimate their cloud infrastructure needs.
<img src="https://amazon-forecast-samples.s3-us-west-2.amazonaws.com/common/images/forecast_overview_steps.png" width="98%">
In this notebook we will be walking through the first steps outlined in the left box above.
## Table Of Contents
* Step 1: [Setup Amazon Forecast](#setup)
* Step 2: [Prepare the Datasets](#DataPrep)
* Step 3: [Create the Dataset Group and Dataset](#DataSet)
* Step 4: [Create the Target Time Series Data Import Job](#DataImport)
* [Next Steps](#nextSteps)
For more information about the APIs, please check the [documentation](https://docs.aws.amazon.com/forecast/latest/dg/what-is-forecast.html)
## Step 1: Setup Amazon Forecast<a class="anchor" id="setup"></a>
This section sets up the permissions and relevant endpoints.
```
!pip install boto3 --upgrade
import sys
import os
import pandas as pd
# importing forecast notebook utility from notebooks/common directory
sys.path.insert( 0, os.path.abspath("../../common") )
import util
%reload_ext autoreload
import boto3
import s3fs
```
Configure the S3 bucket name and region name for this lesson.
- If you don't have an S3 bucket, create it first on S3.
- Although we have set the region to us-west-2 as a default value below, you can choose any of the regions that the service is available in.
```
region = 'us-west-2'
bucket_name = 'forecast-demo-uci-electricity'
# Connect API session
session = boto3.Session(region_name=region)
forecast = session.client(service_name='forecast')
forecastquery = session.client(service_name='forecastquery')
```
<b>Create IAM Role for Forecast</b> <br>
Like many AWS services, Forecast will need to assume an IAM role in order to interact with your S3 resources securely. In the sample notebooks, we use the get_or_create_iam_role() utility function to create an IAM role. Please refer to "notebooks/common/util/fcst_utils.py" for implementation.
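Under the hood, such a role needs a trust policy that allows the Forecast service principal to assume it. For illustration, a typical trust policy document looks like this (the helper may also attach S3 access policies):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "forecast.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```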
```
# Create the role to provide to Amazon Forecast.
role_name = "ForecastNotebookRole-Basic"
print(f"Creating Role {role_name} ...")
role_arn = util.get_or_create_iam_role( role_name = role_name )
# echo user inputs without account
print(f"Success! Created role arn = {role_arn.split('/')[1]}")
```
The last part of the setup process is to validate that your account can communicate with Amazon Forecast, the cell below does just that.
```
# check you can communicate with Forecast API session
forecast.list_predictors()
```
## Step 2: Prepare the Datasets<a class="anchor" id="DataPrep"></a>
For this exercise, we use the individual household electric power consumption dataset. (Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.) We aggregate the usage data hourly.
To begin, use Pandas to read the CSV and to show a sample of the data.
```
df = pd.read_csv("../../common/data/item-demand-time.csv", dtype = object, names=['timestamp','value','item'])
df.head(3)
```
Notice in the output above there are 3 columns of data:
1. The Timestamp
1. A Value
1. An Item ID
These are the 3 key required pieces of information to generate a forecast with Amazon Forecast. More can be added but these 3 must always remain present.
The dataset happens to span January 01, 2014 to December 31, 2014. We are only going to use January through October to train Amazon Forecast.
You may notice a variable named `df`: this is a popular naming convention for Pandas' dataframe object, which is similar to a table in a database. You can learn more here: https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html
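Note that the date-range filter in this notebook compares raw timestamp strings; this works because ISO-8601 style timestamps (`YYYY-MM-DD HH:mm:ss`) sort lexicographically in chronological order:

```python
# ISO-8601 style timestamps compare correctly as plain strings,
# so no datetime parsing is needed for a range filter.
assert "2014-01-01 00:00:00" < "2014-10-31 23:00:00" < "2014-11-01 00:00:00"
assert not ("2014-12-15 08:00:00" < "2014-11-01 00:00:00")
print("string comparison matches chronological order")
```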
```
# Select January to October for one dataframe.
jan_to_oct = df[(df['timestamp'] >= '2014-01-01') & (df['timestamp'] < '2014-11-01')]
print(f"min timestamp = {jan_to_oct.timestamp.min()}")
print(f"max timestamp = {jan_to_oct.timestamp.max()}")
# save an item_id for querying later
item_id = "client_12"
```
Now export them to CSV files and place them into your `data` folder.
```
jan_to_oct.to_csv("data/item-demand-time-train.csv", header=False, index=False)
```
At this time the data is ready to be sent to S3 where Forecast will use it later. The following cells will upload the data to S3.
```
key="elec_data/item-demand-time-train.csv"
boto3.Session().resource('s3').Bucket(bucket_name).Object(key).upload_file("data/item-demand-time-train.csv")
```
## Step 3: Create the Dataset Group and Dataset <a class="anchor" id="DataSet"></a>
In Amazon Forecast, a dataset is a collection of files which contain data relevant to a forecasting task. A dataset must conform to a schema provided by Amazon Forecast. Since data files are imported headerless, it is important to define a schema for your data.
More details about `Domain` and dataset types can be found in the [documentation](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-domains-ds-types.html). For this example, we are using the [CUSTOM](https://docs.aws.amazon.com/forecast/latest/dg/custom-domain.html) domain with the 3 required attributes `timestamp`, `target_value` and `item_id`.
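Because the files are imported headerless, every row must follow the schema's column order exactly. With the CUSTOM domain attributes used here, the rows of the import file look like this (the values shown are illustrative):

```
2014-01-01 01:00:00,38.35,client_12
2014-01-01 02:00:00,41.02,client_12
2014-01-01 03:00:00,39.89,client_12
```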
Next, you need to make some choices.
<ol>
    <li><b>How many time units do you want to forecast?</b> For example, if your time unit is Hour and you want to forecast out 1 week, that would be 24*7 = 168 hours, so the answer is 168. </li>
<li><b>What is the time granularity for your data?</b>. For example, if your time unit is Hour, answer = "H". </li>
<li><b>Think of a name you want to give this project (Dataset Group name)</b>, so all files will have the same names. You should also use this same name for your Forecast DatasetGroup name, to set yourself up for reproducibility. </li>
</ol>
```
# what is your forecast horizon in number time units you've selected?
# e.g. if you're forecasting in months, how many months out do you want a forecast?
FORECAST_LENGTH = 24
# What is your forecast time unit granularity?
# Choices are: ^Y|M|W|D|H|30min|15min|10min|5min|1min$
DATASET_FREQUENCY = "H"
TIMESTAMP_FORMAT = "yyyy-MM-dd hh:mm:ss"
# What name do you want to give this project?
# We will use this same name for your Forecast Dataset Group name.
PROJECT = 'util_power_demo'
DATA_VERSION = 1
```
### Create the Dataset Group
In this task, we define a container name or Dataset Group name, which will be used to keep track of Dataset import files, schema, and all Forecast results which go together.
```
dataset_group = f"{PROJECT}_{DATA_VERSION}"
print(f"Dataset Group Name = {dataset_group}")
dataset_arns = []
create_dataset_group_response = \
forecast.create_dataset_group(Domain="CUSTOM",
DatasetGroupName=dataset_group,
DatasetArns=dataset_arns)
dataset_group_arn = create_dataset_group_response['DatasetGroupArn']
forecast.describe_dataset_group(DatasetGroupArn=dataset_group_arn)
```
### Create the Schema
```
# Specify the schema of your dataset here. Make sure the order of columns matches the raw data files.
ts_schema ={
"Attributes":[
{
"AttributeName":"timestamp",
"AttributeType":"timestamp"
},
{
"AttributeName":"target_value",
"AttributeType":"float"
},
{
"AttributeName":"item_id",
"AttributeType":"string"
}
]
}
```
### Create the Dataset
```
ts_dataset_name = f"{PROJECT}_{DATA_VERSION}"
print(ts_dataset_name)
response = \
forecast.create_dataset(Domain="CUSTOM",
DatasetType='TARGET_TIME_SERIES',
DatasetName=ts_dataset_name,
DataFrequency=DATASET_FREQUENCY,
Schema=ts_schema
)
ts_dataset_arn = response['DatasetArn']
forecast.describe_dataset(DatasetArn=ts_dataset_arn)
```
### Update the dataset group with the datasets we created
You can have multiple datasets under the same dataset group. Update it with the datasets we created before.
```
dataset_arns = []
dataset_arns.append(ts_dataset_arn)
forecast.update_dataset_group(DatasetGroupArn=dataset_group_arn, DatasetArns=dataset_arns)
```
### Step 4: Create a Target Time Series Dataset Import Job <a class="anchor" id="DataImport"></a>
Now that Forecast knows how to interpret the CSV we are providing, the next step is to import the data from S3 into Amazon Forecast.
```
# Recall path to your data
ts_s3_data_path = "s3://"+bucket_name+"/"+key
print(f"S3 URI for your data file = {ts_s3_data_path}")
ts_dataset_import_job_response = \
forecast.create_dataset_import_job(DatasetImportJobName=dataset_group,
DatasetArn=ts_dataset_arn,
DataSource= {
"S3Config" : {
"Path": ts_s3_data_path,
"RoleArn": role_arn
}
},
TimestampFormat=TIMESTAMP_FORMAT)
ts_dataset_import_job_arn=ts_dataset_import_job_response['DatasetImportJobArn']
ts_dataset_import_job_arn
```
### Stop the data import
During iterative development, you may accidentally kick off a data import before you're ready. If you don't want to wait for the upload and processing to finish, there is a handy stop API call.
```
# StopResource
stop_ts_dataset_import_job_arn = forecast.stop_resource(ResourceArn=ts_dataset_import_job_arn)
# Delete the target time series dataset import job
util.wait_till_delete(lambda: forecast.delete_dataset_import_job(DatasetImportJobArn=ts_dataset_import_job_arn))
```
### Do the data import again
Maybe you fixed something you forgot before, and now you're ready to really upload the data for Forecast ingestion and processing.
```
ts_dataset_import_job_response = \
forecast.create_dataset_import_job(DatasetImportJobName=dataset_group,
DatasetArn=ts_dataset_arn,
DataSource= {
"S3Config" : {
"Path": ts_s3_data_path,
"RoleArn": role_arn
}
},
TimestampFormat=TIMESTAMP_FORMAT)
ts_dataset_import_job_arn=ts_dataset_import_job_response['DatasetImportJobArn']
ts_dataset_import_job_arn
```
Check the status of the dataset import job. When the status changes from **CREATE_IN_PROGRESS** to **ACTIVE**, we can continue to the next steps. Depending on the data size, this process can take 5 to 10 minutes.
```
status = util.wait(lambda: forecast.describe_dataset_import_job(DatasetImportJobArn=ts_dataset_import_job_arn))
assert status
forecast.describe_dataset_import_job(DatasetImportJobArn=ts_dataset_import_job_arn)
```
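The `util.wait` helper used here comes from the common utilities and isn't shown in this notebook. As a rough sketch (the function name, arguments, and status strings below are assumptions based on how it is called, not the actual implementation), such a polling helper might look like:

```python
import time

def wait(describe_callback, poll_seconds=10, max_polls=180):
    """Poll a describe_* callback until the resource reaches a terminal status.

    NOTE: this is a hypothetical sketch, not the real util.wait implementation.
    """
    for _ in range(max_polls):
        status = describe_callback()['Status']
        if status == 'ACTIVE':
            return True
        if status in ('CREATE_FAILED', 'DELETE_FAILED'):
            return False
        time.sleep(poll_seconds)
    return False

# Demonstration with canned responses instead of a live Forecast API call
responses = iter([{'Status': 'CREATE_IN_PROGRESS'}, {'Status': 'ACTIVE'}])
print(wait(lambda: next(responses), poll_seconds=0))  # True
```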
## Next Steps<a class="anchor" id="nextSteps"></a>
At this point you have successfully imported your data into Amazon Forecast, and it is time to build your first model in the next notebook. To continue, execute the cell below to store important variables for use in the next notebook, then open `2.Building_Your_Predictor.ipynb`.
```
# Now save your choices for the next notebook
%store item_id
%store PROJECT
%store DATA_VERSION
%store FORECAST_LENGTH
%store DATASET_FREQUENCY
%store TIMESTAMP_FORMAT
%store ts_dataset_import_job_arn
%store ts_dataset_arn
%store dataset_group_arn
%store role_arn
%store bucket_name
%store region
%store key
```
---
# Solving the Taxi Problem Using SARSA
### Goal:
Say our agent is driving the taxi. There are four locations in total, and the agent has to
pick up a passenger at one location and drop them off at another. The agent receives +20
points as a reward for a successful drop-off and loses 1 point for every time step it takes. The agent
also loses 10 points for illegal pickups and drop-offs. So the goal of our agent is to learn to
pick up and drop off passengers at the correct locations quickly, without making any illegal
pickups or drop-offs.
First, we import all necessary libraries and initialize the environment
```
import random
import gym
env = gym.make('Taxi-v2')
```
The environment is shown below, where the letters (R, G, Y, B) represent the different
locations and the small yellow rectangle is the taxi driven by our agent.
```
env.render()
```
Now we initialize the Q table: a dictionary that stores each state-action pair (s, a) along with the value of performing action a in state s.
```
Q = {}
for s in range(env.observation_space.n):
for a in range(env.action_space.n):
Q[(s,a)] = 0.0
```
Then, we define a function implementing the epsilon-greedy policy: we select the best action with probability 1-epsilon, or explore a random action with probability epsilon.
```
def epsilon_greedy(state, epsilon):
if random.uniform(0,1) < epsilon:
return env.action_space.sample()
else:
return max(list(range(env.action_space.n)), key = lambda x: Q[(state,x)])
```
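As a quick standalone check of the same policy logic (the toy Q table and action count here are hypothetical, not the Taxi environment), setting epsilon to 0 should always return the greedy action:

```python
import random

# Toy Q table for a single state with 6 actions; action 5 has the highest value.
Q = {(0, a): float(a) for a in range(6)}

def epsilon_greedy_demo(state, epsilon, n_actions=6):
    if random.uniform(0, 1) < epsilon:
        return random.randrange(n_actions)                         # explore
    return max(range(n_actions), key=lambda a: Q[(state, a)])      # exploit

# epsilon = 0 means pure exploitation, so the best action is always chosen
assert all(epsilon_greedy_demo(0, 0.0) == 5 for _ in range(100))
print("greedy action:", epsilon_greedy_demo(0, 0.0))
```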
Now we initialize the necessary variables:
* alpha: the TD learning rate
* gamma: the discount factor
* epsilon: the epsilon value in the epsilon-greedy policy
```
alpha = 0.85
gamma = 0.90
epsilon = 0.8
```
Now, we perform SARSA!!
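At every step, the loop below applies the standard SARSA update rule:

$$Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \, Q(s', a') - Q(s, a) \right]$$

where $s'$ and $a'$ are the next state and the next action chosen by the same epsilon-greedy policy (this on-policy choice of $a'$ is what distinguishes SARSA from Q-learning).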
```
for i in range(4000):
# we store cumulative reward of each episodes in r
r = 0
# initialize the state,
state = env.reset()
# select the action using epsilon-greedy policy
action = epsilon_greedy(state,epsilon)
while True:
# env.render()
# then we perform the action and move to the next state, and receive the reward
nextstate, reward, done, _ = env.step(action)
# again, we select the next action using epsilon greedy policy
nextaction = epsilon_greedy(nextstate,epsilon)
# we calculate the Q value of previous state using our update rule
Q[(state,action)] += alpha * (reward + gamma * Q[(nextstate,nextaction)]-Q[(state,action)])
# finally we update our state and action with next action and next state
action = nextaction
state = nextstate
# store the rewards
r += reward
# we will break the loop, if we are at the terminal state of the episode
if done:
break
print("total reward: ", r)
env.close()
```
---
```
# %pip install --upgrade pip --user
# %pip install zarr --user
# %pip install tables --user
# %pip install git+https://github.com/simpeg/simpeg.git@simulation-tdem --user
# %pip install dask dask_jobqueue --user
# %pip install git+https://github.com/simpeg-research/casingSimulations.git@simulation --user
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm as cmap
from matplotlib.colors import LogNorm, SymLogNorm, Normalize
import discretize
from scipy import sparse as sp
from scipy.constants import mu_0
from SimPEG.utils.SolverUtils import SolverWrapI
import pandas as pd
from pymatsolver import Pardiso, SolverCG
import os
import json
import dask
# import dask_jobqueue
# from dask.distributed import Client
import casingSimulations as casing_sim
import torch
# we are in the midst of upgrading the API, so this is
# more closely in-line with the upcoming changes
from SimPEG.electromagnetics import time_domain as tdem
%matplotlib inline
SolverICG = SolverWrapI(sp.linalg.cg, checkAccuracy=False)
Solver = Pardiso
solver_opts = {} #{"maxiter": 10}
data_directory = "./experiment1"
df = pd.read_hdf(f"{data_directory}/trial_data.h5", "data")
fig, ax = plt.subplots(1,len(df.keys()), figsize=(20, 4))
for i, key in enumerate(df.keys()):
ax[i].hist(df[key])
ax[i].set_title(f"{key}".replace("_", " "))
plt.tight_layout()
# pick a single model to try training on
trial_index = 5 # a 1200 m long well (relatively short --> fast simulations)
trial_directory = f"{data_directory}/trial_{trial_index}"
# generate the 2D model
with open(f"{trial_directory}/approx_casing.json") as f:
params = json.load(f)
model = casing_sim.model.CasingInHalfspace.deserialize(params, trusted=True)
with open(f"{trial_directory}/simulation_approx_casing.json") as f:
simulation_params = json.load(f)
sim3D = tdem.Problem3D_j.deserialize(simulation_params, trusted=True)
mesh3D = sim3D.mesh
# create a 2D simulation
mesh = discretize.CylMesh([mesh3D.hx, 1, mesh3D.hz], x0=mesh3D.x0)
sim = tdem.Problem3D_j(mesh=mesh, time_steps=sim3D.time_steps, solver=Solver, solver_opts=solver_opts, sigma=model.sigma(mesh))
fig, ax = plt.subplots(1, 1)
plt.colorbar(
mesh.plotImage(
sim.sigma, ax=ax, pcolorOpts={"norm":LogNorm()}, mirror=True
)[0], ax=ax
)
ax.set_xlim([-1, 1])
ax.set_ylim([-2000, 10])
sim.mesh.edgeCurl.shape
def getRHS(sim, src):
# full source term
# rhs = -1./dt * (s_e - s_en1) + C * MeMuI * s_m
# we are setting s_e to zero
rhs = sim.mesh.edgeCurl * (sim.MeMuI * src)
if sim._makeASymmetric:
return sim.MfRho.T * rhs
return rhs
def getRHS_deriv(sim, v=None, adjoint=False):
# full source term
# rhs = -1./dt * (s_e - s_en1) + C * MeMuI * s_m
# we are setting s_e to zero
mesh = sim.mesh
if adjoint:
if sim._makeASymmetric:
if v is not None:
rhs = sim.MfRho * v
else:
rhs = sim.MfRho
else:
rhs = v if v is not None else sp.eye(mesh.nF)
return sim.MeMuI.T * (mesh.edgeCurl.T * rhs)
if v is not None:
rhs = sim.mesh.edgeCurl * (sim.MeMuI * v)
else:
rhs = sim.mesh.edgeCurl * sim.MeMuI
if sim._makeASymmetric:
return sim.MfRho.T * rhs
return rhs
# solve the forward problem
def fields(sim, source):
f = np.zeros((sim.mesh.nF, sim.nT+1))
# this assumes the initial condition is zero.
# timestep to solve forward
Ainv = None
for tInd, dt in enumerate(sim.timeSteps):
# keep factors if dt is the same as previous step b/c A will be the
# same
if Ainv is not None and (
tInd > 0 and abs(dt-sim.timeSteps[tInd - 1]) >
sim.dt_threshold
):
Ainv.clean()
Ainv = None
if Ainv is None:
A = sim.getAdiag(tInd)
Ainv = Pardiso(A)
rhs = getRHS(sim, source[:, tInd+1]) # this is on the nodes of the time mesh
Asubdiag = sim.getAsubdiag(tInd)
# taking a step
sol = Ainv * (rhs - Asubdiag * f[:, tInd])
f[:, tInd+1] = sol
# clean factors and return
Ainv.clean()
return f
def fields_deriv(sim, v=None, adjoint=False):
if adjoint:
return fields_deriv_adjoint(sim, v=v)
df_dm_v = np.zeros((sim.mesh.nF, sim.nT+1))
# timestep to solve forward
Ainv = None
for tInd, dt in enumerate(sim.timeSteps):
# keep factors if dt is the same as previous step b/c A will be the
# same
if Ainv is not None and (
tInd > 0 and abs(dt-sim.timeSteps[tInd - 1]) > sim.dt_threshold
):
Ainv.clean()
Ainv = None
if Ainv is None:
A = sim.getAdiag(tInd)
Ainv = Pardiso(A)
rhs_deriv = getRHS_deriv(sim, v[:, tInd+1]) # this is on the nodes of the time mesh
Asubdiag = sim.getAsubdiag(tInd)
# taking a step
sol = Ainv * (rhs_deriv - Asubdiag * df_dm_v[:, tInd])
df_dm_v[:, tInd+1] = sol
# clean factors and return
Ainv.clean()
return df_dm_v
def fields_deriv_adjoint(sim, v=None):
df_dmT_v = np.zeros((sim.mesh.nE, sim.nT+1)) # the source is defined on edges
# timestep to solve forward
ATinv = None
for tInd in reversed(range(sim.nT)):
dt = sim.time_steps[tInd]
# keep factors if dt is the same as previous step b/c A will be the
# same
if ATinv is not None and (
tInd <= sim.nT and abs(dt-sim.timeSteps[tInd + 1]) > sim.dt_threshold
):
ATinv.clean()
ATinv = None
if ATinv is None:
AT = sim.getAdiag(tInd).T
ATinv = Pardiso(AT)
# ATinv_v = ATinv * v[:, tInd+1]
if tInd < sim.nT - 1:
AsubdiagT = sim.getAsubdiag(tInd+1).T
sol = ATinv * (v[:, tInd+1] - AsubdiagT * sol)
else:
sol = ATinv * v[:, tInd+1]
rhs_deriv = getRHS_deriv(sim, sol, adjoint=True) # this is on the nodes of the time mesh
df_dmT_v[:, tInd+1] = rhs_deriv
# clean factors and return
ATinv.clean()
return df_dmT_v
def create_source(sim, model, s, trial_directory):
# interpolate on to the spatial mesh (lets use exact time for now)
# z_source = np.load(f"{trial_directory}/z_currents.npy")
mesh = sim.mesh
src = np.zeros((mesh.nEy, sim.nT+1))
csx = mesh.hx.min()
xinds = (mesh.gridEy[:, 0] < model.casing_b + csx/2) & (mesh.gridEy[:, 0] > model.casing_b - csx/2)
zinds = (mesh.gridEy[:, 2] >= model.casing_z.min()) & (mesh.gridEy[:, 2] <= model.casing_z.max())
src_inds_bool = xinds & zinds
src_inds = np.where(src_inds_bool)[0]
# P = discretize.utils.interpmat(mesh.gridEy[src_inds, 2], z_source)
src[src_inds, :] = s
def grad(dy, adjoint=True):
if adjoint:
return dy[src_inds, :]
grd = np.zeros((mesh.nEy, sim.nT+1))
grd[src_inds, :] = dy
return grd
return src, grad
def load_trial(trial_directory):
# model parameters
with open(f"{trial_directory}/casing.json") as f:
params = json.load(f)
casing = casing_sim.model.CasingInHalfspace.deserialize(params, trusted=True)
with open(f"{trial_directory}/approx_casing.json") as f:
params = json.load(f)
approx_casing = casing_sim.model.CasingInHalfspace.deserialize(params, trusted=True)
model_dict = {
"casing": casing,
"approx_casing": approx_casing
}
with open(f"{trial_directory}/simulation_approx_casing.json") as f:
simulation_params = json.load(f)
sim = tdem.Problem3D_j.deserialize(simulation_params, trusted=True)
    sim.survey.source_list = sim.survey.source_list  # Hack to trigger the validator
mesh = sim.mesh
# load up the fields
fields_dict = {}
for key in model_dict.keys():
print(key)
sim.sigma = model_dict[key].sigma(mesh)
f = np.load(f"{trial_directory}/{key}_fields.npy")
fields_dict[key] = sim.fieldsPair(sim)
fields_dict[key][:, "jSolution", :] = f
return model_dict, fields_dict, sim, mesh
def get_j_inds(
mesh, sim, x_bounds=np.r_[1, 2000], z_bounds=np.r_[-2000, 0]
):
inds_Fx = (
(mesh.gridFx[:, 0] > x_bounds.min()) & (mesh.gridFx[:, 0] < x_bounds.max()) &
(mesh.gridFx[:, 2] > z_bounds.min()) & (mesh.gridFx[:, 2] < z_bounds.max())
)
inds_Fx = np.kron(np.ones(sim.nT+1, dtype=bool), inds_Fx)
inds_Fz = (
(mesh.gridFz[:, 0] > x_bounds.min()) & (mesh.gridFz[:, 0] < x_bounds.max()) &
(mesh.gridFz[:, 2] > z_bounds.min()) & (mesh.gridFz[:, 2] < z_bounds.max())
)
inds_Fz = np.kron(np.ones(sim.nT+1, dtype=bool), inds_Fz)
inds = np.hstack([inds_Fx, inds_Fz])
return inds
def run_forward(model, mesh, sim, source_vec):
# trial_directory = f"{data_directory}/trial_{trial_ind}"
# model, mesh, sim = load_trial(trial_directory)
source, source_grad = create_source(sim, model, source_vec, trial_directory)
f = fields(sim, source)
# P = get_j_interpolation_mat(trial_directory, mesh)
j_compare = discretize.utils.mkvc(f)
def grad(dy, adjoint=True):
if adjoint:
v = dy
v = v.reshape(mesh.nF, sim.nT+1, order="F")
f_deriv = fields_deriv_adjoint(sim, v)
return source_grad(f_deriv, adjoint=True)
f_deriv = fields_deriv(sim, source_grad(dy, adjoint=False))
return discretize.utils.mkvc(f_deriv)
return j_compare, grad
```
# set up a simple test example
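Several cells in this section validate gradients with the standard dot-product (adjoint) test: for a linear operator $J$ and random vectors $v$ and $w$, the identity

$$w^\top (J\, v) = v^\top (J^\top w)$$

should hold to near machine precision, and a large relative discrepancy indicates an inconsistent adjoint implementation.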
```
def waveform(t, t_peak=5e-3, width=10, amplitude=1):
t = np.log10(t)
t_peak = np.log10(t_peak)
width = np.log10(width)
return amplitude * np.exp(-(t - t_peak)**2/(2*width**2))
def sigmoid(x, x0=0, slope=1):
return np.arctan(slope * (x-x0))/np.pi + 0.5
def depth_distribution(z, dz=200, slope=1e-1):
return sigmoid(z, model.casing_z.min() + dz, slope) * sigmoid(-z, -(model.casing_z.max() - dz), slope)
def source_sm(mesh, t, z):
sm = np.zeros(mesh.nE)
sm = np.outer(depth_distribution(z), waveform(t))
return sm
# z = np.load(f"{trial_directory}/z_currents.npy")
csx = mesh.hx.min()
xinds = (mesh.gridEy[:, 0] < model.casing_b + csx/2) & (mesh.gridEy[:, 0] > model.casing_b - csx/2)
zinds = (mesh.gridEy[:, 2] >= model.casing_z.min()) & (mesh.gridEy[:, 2] <= model.casing_z.max())
src_inds_bool = xinds & zinds
src_inds = np.where(src_inds_bool)[0]
z = mesh.gridEy[src_inds, 2]
src_vec = source_sm(mesh, sim.times, z)
fig, ax = plt.subplots(1, 1)
plt.colorbar(ax.pcolormesh(sim.times, z, src_vec), ax=ax)
ax.set_xscale("log")
ax.set_xlim(1e-6, sim.times.max())
ax.set_xlabel("time (s)")
ax.set_ylabel("z")
def test_source(source):
source = source.reshape(len(z), sim.nT+1, order="F")
src, grad = create_source(sim, model, source, trial_directory)
def src_deriv(dy, adjoint=False):
if not adjoint:
dy = dy.reshape(len(z), sim.nT+1, order="F")
else:
dy = dy.reshape(mesh.nE, sim.nT+1, order="F")
return discretize.utils.mkvc(grad(dy, adjoint))
return discretize.utils.mkvc(src), src_deriv
src_vec[:, 0] = 0
x0 = discretize.utils.mkvc(src_vec)
discretize.Tests.checkDerivative(
test_source,
x0=x0,
num=4,
plotIt=False,
)
# adjoint test
src_vec = discretize.utils.mkvc(src_vec.reshape(len(z), sim.nT+1, order="F"))
src, src_deriv = test_source(src_vec)
v = np.random.rand(len(z)*(sim.nT+1))
w = np.random.rand(mesh.nE*(sim.nT+1))
a = w.T.dot(discretize.utils.mkvc(src_deriv(v.reshape(len(z), sim.nT+1, order="F"), adjoint=False)))
b = v.T.dot(discretize.utils.mkvc(src_deriv(w, adjoint=True)))
print(f"{np.linalg.norm(a):1.3e}, {np.linalg.norm(b):1.3e}, {np.linalg.norm(a-b):1.3e}")
def test_rhs(source):
source = source.reshape(len(z), (sim.nT+1), order="F")
src, grad_src = create_source(sim, model, source, trial_directory)
rhs = getRHS(sim, src)
def src_deriv(dy, adjoint=False):
if not adjoint:
dy = dy.reshape(len(z), (sim.nT+1), order="F")
return discretize.utils.mkvc(getRHS_deriv(sim, grad_src(dy, adjoint), adjoint))
else:
dy = dy.reshape(mesh.nF, (sim.nT+1), order="F")
return grad_src(getRHS_deriv(sim, dy, adjoint), adjoint)
return discretize.utils.mkvc(rhs), src_deriv
x0 = discretize.utils.mkvc(src_vec)
discretize.Tests.checkDerivative(
test_rhs,
x0=0*x0,
dx=x0,
num=8,
plotIt=False,
expectedOrder=1,
)
# adjoint test
src_vec = discretize.utils.mkvc(src_vec.reshape(len(z), sim.nT+1, order="F"))
rhs, rhs_deriv = test_rhs(src_vec)
v = np.random.rand(len(z)*(sim.nT+1))
w = np.random.rand(mesh.nF*(sim.nT+1))
a = w.T.dot(discretize.utils.mkvc(rhs_deriv(v.reshape(len(z), (sim.nT+1), order="F"), adjoint=False)))
b = v.T.dot(discretize.utils.mkvc(rhs_deriv(w, adjoint=True)))
print(f"{np.linalg.norm(a):1.3e}, {np.linalg.norm(b):1.3e}, {np.linalg.norm(a-b):1.3e}")
src_sm, _ = create_source(sim, model, src_vec.reshape(len(z), sim.nT+1, order="F"), trial_directory)
def test_forward(src_sm):
src_sm = src_sm.reshape(mesh.nEy, sim.nT+1, order="F")
j = fields(sim, src_sm)
def j_deriv(v, adjoint=False):
if not adjoint:
v = v.reshape(mesh.nEy, sim.nT+1, order="F")
return discretize.utils.mkvc(fields_deriv(sim, v, adjoint))
else:
v = v.reshape(mesh.nF, sim.nT+1, order="F")
return fields_deriv(sim, v, adjoint)
return discretize.utils.mkvc(j), j_deriv
x0 = discretize.utils.mkvc(src_sm)
discretize.Tests.checkDerivative(
test_forward,
x0=0*x0,
num=8,
plotIt=False,
expectedOrder=1,
)
# adjoint test
j, j_deriv = test_forward(src_sm)
v = np.random.rand(np.prod(src_sm.shape))
w = np.random.rand(np.prod(j.shape))
a = w.T.dot(discretize.utils.mkvc(j_deriv(v, adjoint=False)))
b = v.T.dot(discretize.utils.mkvc(j_deriv(w, adjoint=True)))
print(f"{np.linalg.norm(a):1.3e}, {np.linalg.norm(b):1.3e}, {np.linalg.norm(a-b):1.3e}")
model_dict, fields_dict, sim3D, mesh3D = load_trial(trial_directory)
mesh = discretize.CylMesh([mesh3D.hx, 1, mesh3D.hz], x0=mesh3D.x0)
sim = tdem.Problem3D_j(mesh=mesh, time_steps=sim3D.time_steps, solver=Solver, solver_opts=solver_opts, sigma=model.sigma(mesh))
def test_forward_full(src_vec):
src_vec = src_vec.reshape(len(z), sim.nT+1, order="F")
j, j_deriv = run_forward(model_dict["approx_casing"], mesh, sim, src_vec)
def grad(v):
v = v.reshape(len(z), sim.nT+1, order="F")
return discretize.utils.mkvc(j_deriv(v, adjoint=False))
return discretize.utils.mkvc(j), grad
x0 = discretize.utils.mkvc(src_vec)
discretize.Tests.checkDerivative(
test_forward_full,
x0=x0,
num=8,
plotIt=False,
expectedOrder=1,
)
# adjoint test
src_vec = src_vec.reshape(len(z), sim.nT+1, order="F")
j, j_deriv = run_forward(model_dict["approx_casing"], mesh, sim, src_vec)
v = np.random.rand(len(z)*(sim.nT+1))
w = np.random.rand(np.prod(j.shape))
a = w.T.dot(discretize.utils.mkvc(j_deriv(v.reshape(len(z), sim.nT+1, order="F"), adjoint=False)))
b = v.T.dot(discretize.utils.mkvc(j_deriv(w, adjoint=True)))
err = a-b
passing = np.linalg.norm(err) / np.linalg.norm(a) < 1e-10
print(
f"{np.linalg.norm(a):1.3e}, "
f"{np.linalg.norm(b):1.3e}, "
f"{np.linalg.norm(err):1.3e}, "
f"{'passing :)' if passing is True else 'failing :('}"
)
src_sm, _ = create_source(sim, model, src_vec, trial_directory)
src_sm = src_sm.reshape(mesh.nEy, sim.nT+1, order="F")
j = fields(sim, src_sm)
tind = 30
fig, ax = plt.subplots(1, 1)
out = mesh.plotImage(
mesh.aveF2CCV * j[:, tind],
view="vec",
vType="CCv",
ax=ax, mirror=True,
range_x=np.r_[-1000, 1000],
range_y=np.r_[-1500, 100],
sample_grid = np.r_[5., 5.],
pcolorOpts={"norm":LogNorm()},
clim = np.r_[1e-10, 1e-2],
stream_threshold = 1e-10
)
ax.set_aspect(1)
plt.colorbar(out[0])
ax.set_title(f"current density, t={sim.times[tind]*1e3:1.1e}ms")
tind = 10
fig, ax = plt.subplots(1, 1)
out = mesh.plotImage(
mesh.aveE2CC * src_sm[:, tind],
# view="vec",
# vType="CCv",
ax=ax, mirror=True,
range_x=0.15*np.r_[-1, 1],
range_y=np.r_[-1210, -1190], #10*np.r_[-1, 1],
# sample_grid = np.r_[5., 5.],
pcolorOpts={"norm":LogNorm()},
clim = np.r_[1e-13, 1e-2],
# stream_threshold = 1e-13
)
mesh.plotGrid(ax=ax)
# ax.set_aspect(1)
plt.colorbar(out[0])
ax.set_title(f"source term, t={sim.times[tind]*1e3:1.1e}ms")
class TmpForward:
def __init__(self, model, mesh, sim):
self.model = model
self.mesh = mesh
self.sim = sim
def forward(self, source_vec):
if len(source_vec.shape) <= 1:
            source_vec = source_vec.reshape(len(z), self.sim.nT + 1, order="F")
source, source_grad = create_source(self.sim, self.model, source_vec, trial_directory)
self.source_grad = source_grad
# compute fields
f = fields(self.sim, source)
if getattr(self, 'inds', None) is None:
inds = get_j_inds(
self.mesh, sim, x_bounds=x_bounds, z_bounds=z_bounds
)
self.inds = inds
# project data
j_compare = (discretize.utils.mkvc(f))[self.inds]
return j_compare
def grad(self, dy):
if len(dy.shape) <= 1:
dy = dy.reshape(len(z), self.sim.nT + 1, order="F")
src_grad = self.source_grad(dy, adjoint=False)
f_deriv = fields_deriv(self.sim, src_grad)
grad = discretize.utils.mkvc(f_deriv)[self.inds]
return grad
def adjoint(self, dy):
inds = self.inds
v = np.zeros(self.mesh.nF*(self.sim.nT+1))
v[inds] = dy
v = v.reshape(self.mesh.nF, self.sim.nT+1, order="F")
f_deriv = fields_deriv_adjoint(self.sim, v)
grad = self.source_grad(f_deriv, adjoint=True)
return grad
mesh = discretize.CylMesh([mesh3D.hx, 1, mesh3D.hz], x0=mesh3D.x0)
sim = tdem.Problem3D_j(mesh=mesh, time_steps=sim3D.time_steps, solver=Solver, solver_opts=solver_opts, sigma=model.sigma(mesh))
tmp_forward = TmpForward(model, mesh, sim)
def test_forward(src):
src = src.reshape(len(z), sim.nT+1, order="F")
data = tmp_forward.forward(src)
def grad(v):
v = v.reshape(len(z), sim.nT+1, order="F")
return tmp_forward.grad(v)
return data, grad
x0 = np.zeros(len(z)*(sim.nT+1))
discretize.Tests.checkDerivative(
test_forward,
x0=x0,
num=8,
plotIt=False,
expectedOrder=1,
)
# adjoint test
src_vec = src_vec.reshape(len(z), sim.nT+1, order="F")
j = tmp_forward.forward(src_vec)
v = np.random.rand(len(z)*(sim.nT+1))
w = np.random.rand(np.prod(j.shape))
a = w.T.dot(discretize.utils.mkvc(tmp_forward.grad(v.reshape(len(z), sim.nT+1, order="F"))))
b = v.T.dot(discretize.utils.mkvc(tmp_forward.adjoint(w)))
err = a-b
passing = np.linalg.norm(err) / np.linalg.norm(a) < 1e-10
print(
f"{np.linalg.norm(a):1.3e}, "
f"{np.linalg.norm(b):1.3e}, "
f"{np.linalg.norm(err):1.3e}, "
f"{'passing :)' if passing is True else 'failing :('}"
)
```
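Each adjoint test above checks the dot-product identity ⟨w, G v⟩ = ⟨Gᵀ w, v⟩ for random vectors v and w. A minimal, self-contained version of the same check, with a dense random matrix standing in for the simulation operators (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 25))  # stand-in for a linear forward operator

def op(v, adjoint=False):
    # forward: A @ v ; adjoint: A.T @ w
    return A.T @ v if adjoint else A @ v

v = rng.standard_normal(25)
w = rng.standard_normal(40)
a = w @ op(v)                  # <w, A v>
b = v @ op(w, adjoint=True)    # <A^T w, v>
rel_err = abs(a - b) / max(abs(a), 1.0)
passing = rel_err < 1e-10
print(f"{a:1.3e}, {b:1.3e}, rel err {rel_err:1.3e}, {'passing :)' if passing else 'failing :('}")
```

If the adjoint is implemented correctly, the two inner products agree to machine precision; a mismatch much larger than roundoff points to a bug in the `adjoint=True` branch.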
# Set up ML pipeline
```
dtype = torch.float64
device = torch.device("cpu")
nspatial = len(z)
ntimes = sim.nT + 1
nsrcz = len(z)
x_bounds = np.r_[5, 1000]
z_bounds = np.r_[-2000, 0]
class ForwardSimulation(torch.autograd.Function):
@staticmethod
def forward(ctx, source_vec): #, trial_ind):
# trial_ind = tri
# trial_directory = f"{data_directory}/trial_{trial_ind}"
# load up objects
# model, mesh, sim = load_trial(trial_directory)
ctx.model = model
ctx.mesh = mesh
ctx.sim = sim
# create source
source, source_grad = create_source(sim, model, source_vec.data.numpy(), trial_directory)
# rhs = getRHS(sim, source)
ctx.source_grad = source_grad
# compute fields
f = fields(sim, source)
if getattr(ctx, 'inds', None) is None:
inds = inds = get_j_inds(
mesh, sim, x_bounds=x_bounds, z_bounds=z_bounds
)
ctx.inds = inds
# project data
j_compare = (discretize.utils.mkvc(f))[ctx.inds]
if dtype == torch.float32:
return torch.from_numpy(j_compare).float()
return torch.from_numpy(j_compare).double()
@staticmethod
def backward(ctx, dy):
inds = ctx.inds
v = np.zeros(ctx.mesh.nF*(ctx.sim.nT+1))
v[inds] = dy.data.numpy()
v = v.reshape(ctx.mesh.nF, ctx.sim.nT+1, order="F")
f_deriv = fields_deriv_adjoint(ctx.sim, v)
grad = ctx.source_grad(f_deriv, adjoint=True)
if dtype == torch.float32:
return torch.from_numpy(grad).float()
return torch.from_numpy(grad).double()
# class CasingData(torch.utils.data.Dataset):
# def __init__(self, directory, trial_indices):
# self.directory = directory
# self.trial_indices = trial_indices
# def __len__(self):
# return len(self.trial_indices)
# def __getitem__(self, idx):
# if torch.is_tensor(idx):
# idx = idx.tolist()
# source, source_deriv = create_source(sim, model, src_vec, trial_directory)
# rhs = getRHS(sim, source)
# trial_ind = 10
# trials = [trial_ind]
# jd_numpy = np.load(f"{trial_directory}/j_difference.npy")
# plt.hist(np.log10(np.abs(jd_numpy)), 20);
jd_numpy3D = fields_dict["casing"][:, "j", :] - fields_dict["approx_casing"][:, "j", :]
jd_x = (jd_numpy3D[:mesh3D.nFx, :]).reshape(np.hstack([mesh3D.vnFx, np.r_[sim.nT+1]]), order="F")
jd_z = (jd_numpy3D[mesh3D.vnF[:2].sum():, :]).reshape(np.hstack([mesh3D.vnFz, np.r_[sim.nT+1]]), order="F")
# grab a slice through theta
theta_ind = 3
jd_numpy = np.hstack([
discretize.utils.mkvc(jd_x[:, theta_ind, :, :]),
discretize.utils.mkvc(jd_z[:, theta_ind, :, :]),
])
# select inds away from the well
inds = get_j_inds(
mesh, sim, x_bounds=x_bounds, z_bounds=z_bounds
)
jd_numpy = jd_numpy[inds]
def convert_to_torch_sparse(mat):
mat = mat.tocoo()
values = mat.data
indices = np.vstack((mat.row, mat.col))
# create pytorch sparse matrix
i = torch.LongTensor(indices)
if dtype == torch.float32:
v = torch.FloatTensor(values)
else:
v = torch.DoubleTensor(values)
shape = mat.shape
if dtype == torch.float32:
return torch.sparse.FloatTensor(i, v, torch.Size(shape))
return torch.sparse.DoubleTensor(i, v, torch.Size(shape))
Dtime = discretize.utils.sdiag(1./sim.time_mesh.hx) * discretize.utils.ddx(sim.nT)
Dtime_torch = convert_to_torch_sparse(Dtime)
# z_currents = np.load(f"{trial_directory}/z_currents.npy")
Dz = discretize.utils.sdiag(1./np.diff(z)) * discretize.utils.ddx(len(z)-1)
Dz_torch = convert_to_torch_sparse(Dz)
floor = 1e-10
print((np.abs(jd_numpy)>floor).sum() / len(jd_numpy))
jd = torch.from_numpy(jd_numpy)
std = 0.05
w = torch.from_numpy(1./(std * np.abs(jd_numpy) + floor))
forward = ForwardSimulation.apply
if dtype == torch.float64:
jd = jd.double()
w = w.double()
else:
jd = jd.float()
w = w.float()
s0_scaling = 1e-4
learning_rate = 1e-5
plt.hist(1./(std * np.abs(jd_numpy) + floor));
# optimizer = torch.optim.SGD(s0, lr=learning_rate)
torch.manual_seed(2019)
# %%time
max_iter = 20
beta = None
beta_factor = 0
beta_cooling = 0.5
beta_cool_every = np.inf
alpha_s = 1e-2
alpha_t = sim.time_mesh.hx.min()
alpha_z = 1
s0 = torch.zeros(nspatial, ntimes, dtype=dtype, device=device, requires_grad=True)
# estimate beta
if beta is None:
s0_tmp = torch.randn(nspatial, ntimes, dtype=dtype, device=device, requires_grad=True)
j_pred = forward(s0_scaling * s0_tmp)
dmisfit = 1./len(jd) * (w.mul(j_pred - jd)).pow(2).sum()
# dmisfit = ((j_pred - jd)).pow(2).sum()
regularization = (
alpha_s * s0_tmp.pow(2).sum() +
alpha_t * Dtime_torch.mm(s0_tmp.T).pow(2).sum() +
alpha_z * Dz_torch.mm(s0_tmp).pow(2).sum()
)
beta = beta_factor * dmisfit.item() / regularization.item()
# for i in range(max_iter):
i = 0
while dmisfit.data.numpy() > 1:
s_iter = s0_scaling * s0
j_pred = forward(s_iter)
dmisfit = 1./len(jd) *(w.mul(j_pred - jd)).pow(2).sum()
# dmisfit = ((j_pred - jd)).pow(2).sum()
smallness = alpha_s * s0.pow(2).sum()
smooth_time = alpha_t * Dtime_torch.mm(s0.T).pow(2).sum()
smooth_depth = alpha_z * Dz_torch.mm(s0).pow(2).sum()
regularization = (
smallness +
smooth_time +
smooth_depth
)
if np.mod(i, beta_cool_every) == 0:
beta *= beta_cooling
loss = dmisfit + beta * regularization
print(
f"{i}: "
f"dmisfit: {dmisfit.item():1.2e}, "
f"beta: {beta:1.2e}, "
# f"reg: {regularization.item():1.2e}, "
f"beta*reg: {beta * regularization.item():1.2e}, "
        f"small: {smallness.item():1.2e}, "
        f"smooth time: {smooth_time.item():1.2e}, "
        f"smooth depth: {smooth_depth.item():1.2e}, "
f"loss: {loss.item():1.2e}"
)
# optimizer.zero_grad()
loss.backward()
# optimizer.step()
with torch.no_grad():
if i < 2:
s0 -= 10 * learning_rate * s0.grad
else:
# maxIterCG = 20
# tolCG = 1e-1
# s0_numpy = s0.data.numpy()
# s0_grad_numpy = s0.grad.numpy()
# nbfgs = 10
# bfgs =
# H = sp.linalg.LinearOperator(
# (nspatial*ntimes, nspatial*ntimes), lambda v: 2*tmp_forward.adjoint(tmp_forward.grad(v)), dtype=s0_numpy.dtype
# )
# Hinv = SolverCG(H, tol=tolCG, maxiter=maxIterCG)
# step = (Hinv * (discretize.utils.mkvc(s0_grad_numpy))).reshape((nspatial, ntimes), order="F")
# s0 -= torch.from_numpy(step).double()
s0 -= learning_rate * s0.grad
s0.grad.zero_()
i += 1
if i >= max_iter:
break
# beta = beta_cooling * beta
fig, ax = plt.subplots(1, 1)
# z = np.load(f"{trial_directory}/z_currents.npy")
plotme = s0_scaling * s0.data.numpy()
clim = np.r_[1e-4, 1] * np.max(np.abs(plotme))
norm = Normalize(
# clim[0] if clim is not None else
# np.max([1e-20, np.min(np.absolute(plotme))]),
vmin = -clim[1], vmax=clim[1]
)
plt.colorbar(ax.pcolormesh(sim.times, z, plotme, cmap="BrBG_r", norm=norm), ax=ax)
ax.set_xscale("log")
ax.set_xlim(1e-6, sim.times.max())
ax.set_xlabel("time (s)")
ax.set_ylabel("z")
fig, ax = plt.subplots(1, 1)
cm = plt.get_cmap('viridis')
c_norm = LogNorm(vmin=sim.time_mesh.vectorCCx[0], vmax=sim.time_mesh.vectorCCx[-1])
scalar_map = cmap.ScalarMappable(norm=c_norm, cmap=cm)
scalar_map.set_array([])
plotme = s0_scaling * s0.data.numpy()
for time_ind in range(sim.nT)[::int(sim.nT/20)]:
color = scalar_map.to_rgba(sim.time_mesh.vectorCCx[time_ind])
ax.plot(z, plotme[:, time_ind], color=color)
ax.set_xlim(z.max(), z.min())
cbar_ax = fig.add_axes([1, 0.1, 0.02, 0.8])
cb = plt.colorbar(scalar_map, cbar_ax)
cb.set_label('time (s)')
cb.ax.invert_yaxis()
fig, ax = plt.subplots(1, 1)
cm = plt.get_cmap('viridis')
c_norm = Normalize(vmin=z.min(), vmax=z.max())
scalar_map = cmap.ScalarMappable(norm=c_norm, cmap=cm)
scalar_map.set_array([])
plotme = s0_scaling * s0.data.numpy()
for z_ind in range(len(z))[::int(len(z)/20)]:
color = scalar_map.to_rgba(z[z_ind])
ax.semilogx(sim.time_mesh.vectorNx, plotme[z_ind, :], color=color)
# ax.set_xlim(z.max(), z.min())
cbar_ax = fig.add_axes([1, 0.1, 0.02, 0.8])
cb = plt.colorbar(scalar_map, cbar_ax)
cb.set_label('z (m)')
cb.ax.invert_yaxis()
# load up objects
# model, mesh, sim = load_trial(trial_directory)
src, _ = create_source(sim, model, s0_scaling * s0.data.numpy(), trial_directory)
tind = 20
plotme = mesh.aveE2CC * src[:, tind]
clim = np.r_[1e-4, 1] * np.max(np.abs(plotme))
norm = Normalize(
# clim[0] if clim is not None else
# np.max([1e-20, np.min(np.absolute(plotme))]),
vmin = -clim[1], vmax=clim[1]
)
fig, ax = plt.subplots(1, 1)
plt.colorbar(mesh.plotImage(
plotme,
mirror=True,
mirror_data=-1*plotme,
pcolorOpts={"norm": norm, "cmap": "BrBG_r"},
ax=ax
)[0], ax=ax)
ax.set_title(f"t={sim.times[tind]*1e3:1.1e}ms")
ax.set_xlim(0.25*np.r_[-1, 1])
ax.set_ylim(np.r_[-2000, 50])
j_rec = fields(sim, src)
model_names = list(model_dict.keys())
# sim.survey.source_list = sim.survey.source_list
viewer = casing_sim.FieldsViewer(
mesh=mesh3D, model_parameters_dict=model_dict, survey_dict={key: sim3D.survey for key in model_names},
fields_dict=fields_dict, model_keys=model_names, primary_key="casing"
)
tind = 40
fig, ax = plt.subplots(1, 2, figsize=(12, 4))
xlim = np.r_[-1000, 1000]
zlim = np.r_[-1500, 100]
sample_grid = np.r_[5., 5.]
clim = np.r_[1e-12, 1e-6]
out = mesh.plotImage(
mesh.aveF2CCV * j_rec[:, tind],
view="vec",
vType="CCv",
ax=ax[0], mirror=True,
range_x=xlim,
range_y=zlim,
sample_grid = sample_grid,
pcolorOpts={"norm":LogNorm()},
clim=clim,
stream_threshold=clim.min()
)
plt.colorbar(out[0], ax=ax[0])
ax[0].set_title(f"recovered, t={sim.times[tind]*1e3:1.1e}ms")
out2 = viewer.plot_cross_section(
ax=ax[1], clim=clim, zlim=zlim,
xlim=xlim,
view='j', theta_ind=3, time_ind=tind,
model_key='approx_casing', show_cb=True, casing_outline=False,
prim_sec="secondary"
# stream_opts={"density":0.75, "color": "k", "arrowsize": 2}
)
# ax[1].set_ylim(np.r_[-max_depth, top])
ax[1].set_ylabel('z (m)')
ax[1].set_title(f"true, t={sim.times[tind]*1e3:1.1e}ms")
for a in ax:
a.set_aspect(1)
j_pred_plot = j_pred.data.numpy()
jd_plot = jd.data.numpy()
plt.plot(jd_plot)
plt.plot(j_pred_plot)
np.sum((j_pred_plot - jd_plot)**2)
```
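The objective minimized in the loop above is a weighted data misfit plus a Tikhonov-style regularization: a smallness term and first-difference smoothness in time and depth. A numpy-only sketch of the same objective with made-up sizes, where `first_diff` stands in for the `Dtime`/`Dz` operators:

```python
import numpy as np

rng = np.random.default_rng(2019)
nspatial, ntimes, ndata = 12, 8, 50

s = rng.standard_normal((nspatial, ntimes))        # source model
jd = rng.standard_normal(ndata)                    # "observed" data
j_pred = jd + 0.01 * rng.standard_normal(ndata)    # "predicted" data
w = 1.0 / (0.05 * np.abs(jd) + 1e-10)              # uncertainty-based weights

def first_diff(n):
    # forward-difference operator, shape (n-1, n)
    return np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

Dt = first_diff(ntimes)
Dz = first_diff(nspatial)

alpha_s, alpha_t, alpha_z, beta = 1e-2, 1.0, 1.0, 1e-3
dmisfit = np.mean((w * (j_pred - jd)) ** 2)
regularization = (
    alpha_s * np.sum(s**2)                 # smallness
    + alpha_t * np.sum((Dt @ s.T) ** 2)    # smoothness in time
    + alpha_z * np.sum((Dz @ s) ** 2)      # smoothness in depth
)
loss = dmisfit + beta * regularization
print(f"dmisfit {dmisfit:1.2e}, beta*reg {beta*regularization:1.2e}, loss {loss:1.2e}")
```

The trade-off parameter `beta` plays the same role as in the torch loop: it balances fitting the data against keeping the recovered source small and smooth.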
```
%%capture
!apt-get install cmake
!apt-get install zlib1g-dev
!pip install gym[atari]
!pip install JSAnimation
import numpy as np
# import cPickle as pickle
import matplotlib.pyplot as plt
from JSAnimation.IPython_display import display_animation
from matplotlib import animation
import gym
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Flatten
from keras.optimizers import rmsprop
import keras.backend as K
%matplotlib inline
env = gym.make("Pong-v0")
env.env.get_action_meanings()
action_space = [0,2,3] #[No-op, up, down]
def display_frames_as_gif(frames):
"""
Displays a list of frames as a gif, with controls
"""
#plt.figure(figsize=(frames[0].shape[1] / 72.0, frames[0].shape[0] / 72.0), dpi = 72)
patch = plt.imshow(frames[0])
plt.axis('off')
def animate(i):
patch.set_data(frames[i])
anim = animation.FuncAnimation(plt.gcf(), animate, frames = len(frames), interval=50)
display(display_animation(anim, default_mode='once'))
observation = env.reset()
cum_reward = 0
frames = []
r = []
for t in range(100):
# Render into buffer.
frames.append(env.render(mode = 'rgb_array'))
p = [0.5, 0.3, 0.2]
a = np.random.choice(3, p=p)
action = action_space[a]
observation, reward, done, info = env.step(action)
r.append(reward)
if done:
break
r = np.array(r)
# env.render(close=True)
display_frames_as_gif(frames)
print(t)
gamma = 0.99
def discount_rewards(r):
""" take 1D float array of rewards and compute discounted reward """
discounted_r = np.zeros_like(r)
running_add = 0
for t in reversed(range(len(discounted_r))):
if r[t] != 0: running_add = 0 # reset the sum, since this was a game boundary (pong specific!)
        running_add = r[t] + running_add * gamma # Bellman equation
discounted_r[t] = running_add
return discounted_r
def discount_n_standardise(r):
dr = discount_rewards(r)
dr = (dr - dr.mean()) / dr.std()
return dr
```
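The backwards discounting above can be checked on a tiny reward sequence: with `gamma = 0.99`, a single point scored at the end decays geometrically through the preceding frames (the reset at each non-zero reward is Pong-specific):

```python
import numpy as np

gamma = 0.99

def discount_rewards(r):
    # backwards pass: discounted return, resetting at each game boundary
    discounted_r = np.zeros_like(r, dtype=float)
    running_add = 0.0
    for t in reversed(range(len(r))):
        if r[t] != 0:
            running_add = 0.0  # reset the sum (Pong-specific game boundary)
        running_add = r[t] + running_add * gamma
        discounted_r[t] = running_add
    return discounted_r

out = discount_rewards(np.array([0.0, 0.0, 1.0]))
print(out)  # values: 0.9801, 0.99, 1.0
```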
## Preprocess image
```
def preprocess(I):
    """ Preprocess a 210x160x3 uint8 frame into an 80x80x1 binary float array """
I = I[35:195] # crop
I = I[::2,::2,0] # downsample by factor of 2
I[I == 144] = 0 # erase background (background type 1)
I[I == 109] = 0 # erase background (background type 2)
I[I != 0] = 1 # everything else (paddles, ball) just set to 1
    return I.astype(float)[:,:,None]
```
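A quick sanity check of the preprocessing shapes, with a random uint8 array standing in for a real Atari frame:

```python
import numpy as np

def preprocess(I):
    # crop, downsample, and binarize a 210x160x3 frame into 80x80x1
    I = I[35:195]         # crop
    I = I[::2, ::2, 0]    # downsample by factor of 2, keep one channel
    I[I == 144] = 0       # erase background (type 1)
    I[I == 109] = 0       # erase background (type 2)
    I[I != 0] = 1         # everything else set to 1
    return I.astype(float)[:, :, None]

frame = np.random.randint(0, 256, size=(210, 160, 3), dtype=np.uint8)
x = preprocess(frame)
print(x.shape)  # (80, 80, 1)
```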
## Reinforcement Learning
It turns out that action 2 makes the racket go up and action 3 makes it go down. The environment has six actions by default because it is an Atari game and the controller exposed six inputs. See [here](https://ai.stackexchange.com/questions/2449/what-are-different-actions-in-action-space-of-environment-of-pong-v0-game-from) for the source of this answer.
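The custom loss defined in the next cell packs the discounted reward into the first column of `y_true` and weights each sample's cross-entropy by it. The same computation in plain numpy (illustrative only, not the Keras graph version):

```python
import numpy as np

def policy_loss_np(adv_y_true, y_pred):
    # adv_y_true[:, 0] = discounted reward, adv_y_true[:, 1] = integer action
    reward = adv_y_true[:, 0]
    y_true = adv_y_true[:, 1].astype(int)
    # sparse categorical cross-entropy: -log probability of the taken action
    ce = -np.log(y_pred[np.arange(len(y_true)), y_true])
    return np.mean(reward * ce)

y_pred = np.array([[0.5, 0.3, 0.2],
                   [0.1, 0.8, 0.1]])
adv_y_true = np.array([[ 1.0, 0.0],    # reward +1, action 0
                       [-1.0, 1.0]])   # reward -1, action 1
loss = policy_loss_np(adv_y_true, y_pred)
print(loss)
```

Positive rewards push the policy towards the actions it took; negative rewards push it away, which is the core of the policy-gradient update.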
```
def policy_loss(adv_y_true, y_pred):
reward = adv_y_true[:,0]
y_true = adv_y_true[:,1:]
return K.mean(reward*
K.sparse_categorical_crossentropy(y_true, y_pred),
axis=-1)
model = Sequential()
model.add(Conv2D(4, kernel_size=(3,3), padding='same', activation='relu', input_shape = (80,80,1)))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Conv2D(8, kernel_size=(3,3), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Conv2D(12, kernel_size=(3,3), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Conv2D(16, kernel_size=(3,3), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(10))
model.add(Dense(3, activation='softmax'))
model.compile(optimizer='rmsprop', loss=policy_loss)
model.summary()
episodes = 0
n_episodes = 1000
reward_sums = np.zeros(n_episodes)
losses = np.zeros(n_episodes)
time_taken = np.zeros(n_episodes)
reward_sum = 0
prev_x = None
im_shape = (80, 80, 1)
prev_frame = None
buffer = 30000
xs = np.zeros((buffer,)+im_shape)
ys = np.zeros(buffer)
rs = np.zeros(buffer)
k = 0
observation = env.reset()
while episodes<n_episodes:
x = preprocess(observation)
xs[k] = x - prev_frame if prev_frame is not None else np.zeros(im_shape)
prev_frame = x
p = model.predict(xs[k][None,:,:,:])
a = np.random.choice(3, p=p[0])
action = action_space[a]
ys[k] = a
observation, reward, done, info = env.step(action)
reward_sum += reward
rs[k] = reward
k += 1
if done or k==buffer:
reward_sums[episodes] = reward_sum
reward_sum = 0
ep_x = xs[:k]
ep_y = ys[:k]
ep_r = rs[:k]
ep_r = discount_n_standardise(ep_r)
        model.fit(ep_x, np.stack([ep_r, ep_y], axis=1), batch_size=512, epochs=1, verbose=0)
time_taken[episodes] = k
k = 0
prev_frame = None
observation = env.reset()
        losses[episodes] = model.evaluate(ep_x,
                                          np.stack([ep_r, ep_y], axis=1),
                                          batch_size=len(ep_x),
                                          verbose=0)
episodes += 1
if episodes%(n_episodes//20) == 0:
ave_reward = np.mean(reward_sums[max(0,episodes-200):episodes])
ave_loss = np.mean(losses[max(0,episodes-200):episodes])
ave_time = np.mean(time_taken[max(0,episodes-200):episodes])
print('Episode: {0:d}, Average Loss: {1:.4f}, Average Reward: {2:.4f}, Average steps: {3:.4f}'
.format(episodes, ave_loss, ave_reward, ave_time))
plt.plot(losses[:episodes])
plt.plot(np.convolve(losses[:episodes], np.ones((100,))/100, mode='valid'))
plt.show()
plt.plot(reward_sums[:episodes])
plt.plot(np.convolve(reward_sums[:episodes], np.ones((100,))/100, mode='valid'))
plt.show()
```
## Result
```
observation = env.reset()
cum_reward = 0
frames = []
prev_frame = None
for t in range(5000):
x = preprocess(observation)
diff = x - prev_frame if prev_frame is not None else np.zeros(im_shape)
p = model.predict(diff[None,:,:,:])
prev_frame = x
a = np.random.choice(3, p=p[0])
action = action_space[a]
# Render into buffer.
frames.append(env.render(mode = 'rgb_array'))
observation, reward, done, info = env.step(action)
if done:
break
# env.render(close=True)
display_frames_as_gif(frames)
print(t)
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Datasets/Water/usgs_watersheds.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Water/usgs_watersheds.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Water/usgs_watersheds.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except ImportError:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC02')
styleParams = {
'fillColor': '000070',
'color': '0000be',
'width': 3.0,
}
regions = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 4)
Map.addLayer(regions, {}, 'USGS/WBD/2017/HUC02')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC04')
styleParams = {
'fillColor': '5885E3',
'color': '0000be',
'width': 3.0,
}
subregions = dataset.style(**styleParams)
Map.setCenter(-110.904, 36.677, 7)
Map.addLayer(subregions, {}, 'USGS/WBD/2017/HUC04')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC06')
styleParams = {
'fillColor': '588593',
'color': '587193',
'width': 3.0,
}
basins = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 7)
Map.addLayer(basins, {}, 'USGS/WBD/2017/HUC06')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC08')
styleParams = {
'fillColor': '2E8593',
'color': '587193',
'width': 2.0,
}
subbasins = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 8)
Map.addLayer(subbasins, {}, 'USGS/WBD/2017/HUC08')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC10')
styleParams = {
'fillColor': '2E85BB',
'color': '2E5D7E',
'width': 1.0,
}
watersheds = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 9)
Map.addLayer(watersheds, {}, 'USGS/WBD/2017/HUC10')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC12')
styleParams = {
'fillColor': '2E85BB',
'color': '2E5D7E',
'width': 0.1,
}
subwatersheds = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 10)
Map.addLayer(subwatersheds, {}, 'USGS/WBD/2017/HUC12')
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# Deploying Tensorflow models on Verta
Within Verta, a "Model" can be any arbitrary function: a traditional ML model (e.g., sklearn, PyTorch, TF, etc.); a function (e.g., squaring a number, making a DB call, etc.); or a mixture of the above (e.g., pre-processing code, a DB call, and then a model application). See more [here](https://docs.verta.ai/verta/registry/concepts).
This notebook provides an example of how to deploy a Tensorflow model on Verta as a Verta Standard Model either via convenience functions (for Keras) or by extending [VertaModelBase](https://verta.readthedocs.io/en/master/_autogen/verta.registry.VertaModelBase.html?highlight=VertaModelBase#verta.registry.VertaModelBase).
## 0. Imports
```
import os
import tensorflow as tf
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
import os
# Ensure credentials are set up, if not, use below
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
# os.environ['VERTA_HOST'] =
from verta import Client
client = Client(os.environ['VERTA_HOST'])
```
## 1. Model Training
```
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam',
loss=loss_fn,
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
```
## 2. Register Model
```
registered_model = client.get_or_create_registered_model(
name="mnist", labels=["computer-vision", "tensorflow"])
```
### 2.1 Register from the model object
#### If you are in the same file where you have the model object handy, use the code below to package the model
```
from verta.environment import Python
model_version_from_obj = registered_model.create_standard_model_from_keras(
model, environment=Python(requirements=["tensorflow"]), name="v1")
```
### 2.2 (OR) Register a serialized version of the model using the VertaModelBase
```
model.save("mnist.tf_saved_model")
from verta.registry import VertaModelBase
class MNISTModel(VertaModelBase):
def __init__(self, artifacts):
import tensorflow as tf
self.model = tf.keras.models.load_model(
artifacts["mnist_model"])
def predict(self, input_data):
output = []
for input_data_point in input_data:
reshaped_data = tf.reshape(input_data_point, (1, 28, 28))
output.append(self.model(reshaped_data).numpy().tolist())
return output
# test locally
mnist_model1 = MNISTModel({"mnist_model" : "mnist.tf_saved_model/"})
mnist_model1.predict([x_test[0]])
model_version_from_cls = registered_model.create_standard_model(
MNISTModel,
environment=Python(["tensorflow"]),
name="v2",
artifacts={"mnist_model" : "mnist.tf_saved_model/"}
)
```
### 2.3 (OR) Register a serialized version of the model using the VertaModelBase (Variation: take in a base64 encoded input vs. a tensor)
```
class MNISTModel2(VertaModelBase):
def __init__(self, artifacts):
import tensorflow as tf
import base64
self.model = tf.keras.models.load_model(artifacts["mnist_model"])
def predict(self, input_data):
# decode base64
import base64
output = []
for input_data_point in input_data:
decoded_data = base64.b64decode(input_data_point["img_bytes"])
decoded_data = tf.io.decode_image(decoded_data)
decoded_data = tf.reshape(decoded_data, (1, 28, 28))
output.append(self.model(decoded_data).numpy().tolist())
return output
# test locally
import base64
mnist_model2 = MNISTModel2({"mnist_model" : "mnist.tf_saved_model/"})
with open("2.png", "rb") as image_file:
encoded_string = base64.b64encode(image_file.read())
print(mnist_model2.predict([{"img_bytes" : encoded_string}]))
model_version_from_cls_base64 = registered_model.create_standard_model(
MNISTModel2,
environment=Python(["tensorflow"]),
name="v3",
artifacts={"mnist_model" : "mnist.tf_saved_model/"}
)
```
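The base64 handling in `MNISTModel2` is a plain stdlib round trip: the client encodes the raw image bytes, and the model decodes them back before handing them to `tf.io.decode_image`. In isolation:

```python
import base64

raw = b"\x89PNG\r\n\x1a\n fake image bytes"   # stand-in for real PNG bytes
encoded = base64.b64encode(raw)                # what the client sends (ASCII-safe)
decoded = base64.b64decode(encoded)            # what the model sees
print(decoded == raw)  # True
```

Encoding the bytes keeps the prediction payload JSON-safe, at the cost of roughly a 33% size increase.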
## 3. Deploy model to endpoint
```
mnist_endpoint = client.get_or_create_endpoint("mnist")
mnist_endpoint.update(model_version_from_obj, wait=True)
deployed_model = mnist_endpoint.get_deployed_model()
deployed_model.predict([x_test[0]])
mnist_endpoint = client.get_or_create_endpoint("mnist")
mnist_endpoint.update(model_version_from_cls, wait=True)
deployed_model = mnist_endpoint.get_deployed_model()
deployed_model.predict([x_test[0]])
mnist_endpoint = client.get_or_create_endpoint("mnist")
mnist_endpoint.update(model_version_from_cls_base64, wait=True)
deployed_model = mnist_endpoint.get_deployed_model()
with open("2.png", "rb") as image_file:
encoded_string = base64.b64encode(image_file.read())
print(deployed_model.predict([{"img_bytes" : encoded_string}]))
```
---
```
from osgeo import gdal,ogr,osr
raster = r'/usgs/data0/bathy/sandy/zip3/big.tif'
ofile = r'/usgs/data2/notebook/data/big.ncml'
def GetExtent(gt,cols,rows):
''' Return list of corner coordinates from a geotransform
@type gt: C{tuple/list}
@param gt: geotransform
@type cols: C{int}
@param cols: number of columns in the dataset
@type rows: C{int}
@param rows: number of rows in the dataset
@rtype: C{[float,...,float]}
@return: coordinates of each corner
'''
ext=[]
xarr=[0,cols]
yarr=[0,rows]
for px in xarr:
for py in yarr:
x=gt[0]+(px*gt[1])+(py*gt[2])
y=gt[3]+(px*gt[4])+(py*gt[5])
ext.append([x,y])
            print(x, y)
yarr.reverse()
return ext
def ReprojectCoords(coords,src_srs,tgt_srs):
''' Reproject a list of x,y coordinates.
@type geom: C{tuple/list}
@param geom: List of [[x,y],...[x,y]] coordinates
@type src_srs: C{osr.SpatialReference}
@param src_srs: OSR SpatialReference object
@type tgt_srs: C{osr.SpatialReference}
@param tgt_srs: OSR SpatialReference object
@rtype: C{tuple/list}
@return: List of transformed [[x,y],...[x,y]] coordinates
'''
trans_coords=[]
transform = osr.CoordinateTransformation( src_srs, tgt_srs)
for x,y in coords:
x,y,z = transform.TransformPoint(x,y)
trans_coords.append([x,y])
return trans_coords
ds=gdal.Open(raster)
gt=ds.GetGeoTransform()
cols = ds.RasterXSize
rows = ds.RasterYSize
ext=GetExtent(gt,cols,rows)
src_srs=osr.SpatialReference()
src_srs.ImportFromWkt(ds.GetProjection())
#tgt_srs=osr.SpatialReference()
#tgt_srs.ImportFromEPSG(4326)
tgt_srs = src_srs.CloneGeogCS()
geo_ext=ReprojectCoords(ext,src_srs,tgt_srs)
ext
ncml = '''<netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2"
location="/usgs/data0/bathy/srtm30plus_v1.nc">
<variable name="lon" shape="lon" type="double">
<attribute name="units" value="degrees_east"/>
<values start="-180.00" increment="+0.008333333333333333"/>
</variable>
<variable name="lat" shape="lat" type="double">
<attribute name="units" value="degrees_north"/>
<values start="90.00" increment="-0.008333333333333333"/>
</variable>
<variable name="topo">
<attribute name="units" value="meters"/>
<attribute name="long_name" value="Topography"/>
</variable>
<attribute name="Conventions" value="CF-1.0"/>
<attribute name="title" value="SRTM30_v1"/>
</netcdf>'''
print(ncml)
gt
#replace lon_min
ncml = ncml.replace('-180.00',str(gt[0]))
#replace lon_increment
ncml = ncml.replace('+0.008333333333333333',str(gt[1]))
#replace lat_max
ncml = ncml.replace('90.00',str(gt[3]))
#replace lat_increment
ncml = ncml.replace('-0.008333333333333333',str(gt[5]))
#replace file location
ncml = ncml.replace(r'/usgs/data0/bathy/srtm30plus_v1.nc',raster)
print(ncml)
```
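The script defines `ofile` at the top but never writes the NcML document out. A minimal sketch of that missing final step, where `ncml` and `ofile` stand in for the variables built above (the sample content here is only a placeholder):

```python
# Write the generated NcML document to the output path defined at the top
# of the script. 'ncml' and 'ofile' are placeholders standing in for the
# variables built in the cells above.
ncml = '<netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2"/>'
ofile = 'big.ncml'

with open(ofile, 'w') as f:
    f.write(ncml)
```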
# Simulated Sky Signal in time domain
In this lesson we will use the TOAST Operator `OpSimPySM` to create timestreams for an instrument given a sky model.
```
# Load common tools for all lessons
import sys
sys.path.insert(0, "..")
from lesson_tools import (
fake_focalplane
)
# Capture C++ output in the jupyter cells
%reload_ext wurlitzer
import toast
import healpy as hp
import numpy as np
env = toast.Environment.get()
env.set_log_level("DEBUG")
```
## Scanning strategy
Before being able to scan a map into a timestream we need to define a scanning strategy
and get pointing information for each channel.
We use the same **satellite** scanning used in lesson 2 about scanning strategies,
see the `02_Simulated_Scan_Strategies/simscan_satellite.ipynb` for more details.
```
focal_plane = fake_focalplane()
focal_plane.keys()
focal_plane["0A"]["fwhm_arcmin"]
# Scan parameters
alpha = 50.0 # precession opening angle, degrees
beta = 45.0 # spin opening angle, degrees
p_alpha = 25.0 # precession period, minutes
p_beta = 1.25 # spin period, minutes
samplerate = 0.5 # sample rate, Hz
hwprpm = 5.0 # HWP rotation in RPM
nside = 64 # Healpix NSIDE
# We will use one observation per day, with no gaps in between, and
# run for one year.
obs_samples = int(24 * 3600.0 * samplerate) - 1
nobs = 366
# Slew the precession axis so that it completes one circle
deg_per_day = 360.0 / nobs
from toast.todmap import TODSatellite, slew_precession_axis
detquat = {ch: focal_plane[ch]["quat"] for ch in focal_plane}
# Create distributed data
comm = toast.Comm()
data = toast.Data(comm)
# Append observations
for ob in range(nobs):
    obsname = "{:03d}".format(ob)
    obsfirst = ob * (obs_samples + 1)
    obsstart = ob * 24 * 3600.0
    tod = TODSatellite(
        comm.comm_group,
        detquat,
        obs_samples,
        firstsamp=obsfirst,
        firsttime=obsstart,
        rate=samplerate,
        spinperiod=p_beta,
        spinangle=beta,
        precperiod=p_alpha,
        precangle=alpha,
        coord="E",
        hwprpm=hwprpm
    )
    qprec = np.empty(4 * tod.local_samples[1], dtype=np.float64).reshape((-1, 4))
    slew_precession_axis(
        qprec,
        firstsamp=obsfirst,
        samplerate=samplerate,
        degday=deg_per_day,
    )
    tod.set_prec_axis(qprec=qprec)
    obs = dict()
    obs["tod"] = tod
    data.obs.append(obs)
from toast.todmap import (
get_submaps_nested,
OpPointingHpix,
OpAccumDiag
)
from toast.map import (
DistPixels
)
# Make a simple pointing matrix
pointing = OpPointingHpix(nside=nside, nest=True, mode="IQU")
pointing.exec(data)
# Compute the locally hit pixels
localpix, localsm, subnpix = get_submaps_nested(data, nside)
# Construct a distributed map to store the hit map
npix = 12 * nside**2
hits = DistPixels(
comm=data.comm.comm_world,
size=npix,
nnz=1,
dtype=np.int64,
submap=subnpix,
local=localsm,
)
hits.data.fill(0)
# Accumulate the hit map locally
build_hits = OpAccumDiag(hits=hits)
build_hits.exec(data)
# Reduce the map across processes (a No-op in this case)
hits.allreduce()
%matplotlib inline
hp.mollview(hits.data.flatten(), nest=True)
```
## Define PySM parameters and instrument bandpasses
Then we define the sky model parameters, choosing the desired set of `PySM` models, and specify the band center and bandwidth for a top-hat bandpass.
Currently, top-hat bandpasses are the only type supported by the operator; arbitrary bandpasses will be implemented in the future.
Then bandpass parameters can be added directly to the `focal_plane` dictionary:
```
for ch in focal_plane:
    focal_plane[ch]["bandcenter_ghz"] = 70
    focal_plane[ch]["bandwidth_ghz"] = 10
    focal_plane[ch]["fwhm"] = 60 * 2
pysm_sky_config = ["s1", "f1", "a1", "d1"]
```
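`OpSimPySM` builds the top-hat bandpass arrays internally from `bandcenter_ghz` and `bandwidth_ghz`. Purely as an illustration of what such a (frequencies, weights) pair looks like, here is a sketch; the 10-point sampling is an arbitrary choice for the example, not the operator's actual resolution:

```python
import numpy as np

def tophat_bandpass(center_ghz, width_ghz, n=10):
    # Frequency axis spanning the band, with uniform (top-hat) weights
    # normalized to sum to 1.
    freqs = np.linspace(center_ghz - width_ghz / 2,
                        center_ghz + width_ghz / 2, n)
    weights = np.ones(n) / n
    return freqs, weights

# The 70 GHz channel with 10 GHz bandwidth defined above.
freqs, weights = tophat_bandpass(70, 10)
```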
## Run the OpSimPySM operator
The `OpSimPySM` operator:
* Creates top-hat bandpass arrays (frequency axis and weights) as expected by `PySM`
* Loops over channels, and for each one:
    * Creates a `PySMSky` object with just 1 channel at a time
    * Executes `PySMSky` to evaluate the sky models and bandpass-integrate
    * Calls `PySM` to perform distributed smoothing with `libsharp`
    * Gathers the map on the first MPI process
    * Applies a coordinate transformation if necessary (not currently implemented in `libsharp`)
* Uses the `DistMap` object to communicate to each process the part of the sky it observes
* Calls `OpSimScan` to rescan the map to a timeline
```
from toast.todmap import OpSimPySM
OpSimPySM?
opsim_pysm = OpSimPySM(
comm=None,
pysm_model=pysm_sky_config,
nside=nside,
apply_beam=True,
debug=True,
focalplanes=[focal_plane],
subnpix=subnpix,
localsm=localsm
)
opsim_pysm.exec(data)
```
### Plot output timelines
```
%matplotlib inline
import matplotlib.pyplot as plt
tod = data.obs[0]['tod']
pix = tod.cache.reference("pixels_0A")
import toast.qarray as qa
theta, phi, pa = qa.to_angles(tod.read_pntg(detector="0A"))
pix
num = 10000
plt.figure(figsize=(7, 5))
plt.plot(np.degrees(theta[:num]), tod.cache.reference("signal_0A")[:num], ".")
plt.xlabel("Colatitude [deg]")
plt.ylabel(r"Signal [$\mu K_{RJ}$]");
```
### Bin the output to a map
```
from numba import njit
@njit
def just_make_me_a_map(output_map, signals):
    """Temperature-only binner.

    Bins a list of (pix, signal) tuples into an output map;
    it does not support polarization, so it just averages it out.

    Parameters
    ----------
    output_map : np.array
        already zeroed output map
    signals : numba.typed.List of (np.array[int64] pix, np.array[np.double] signal)

    Returns
    -------
    hits : np.array[np.int64]
        hitmap
    """
    hits = np.zeros(len(output_map), dtype=np.int64)
    for pix, signal in signals:
        for p, s in zip(pix, signal):
            output_map[p] += s
            hits[p] += 1
    output_map[hits != 0] /= hits[hits != 0]
    return hits
from numba.typed import List
signals = List()
for obs in data.obs:
    for ch in focal_plane:
        signals.append((obs["tod"].cache.reference("pixels_%s" % ch),
                        obs["tod"].cache.reference("signal_%s" % ch)))
output_map = np.zeros(npix, dtype=np.double)
h = just_make_me_a_map(output_map, signals)
hp.mollview(h, title="hitmap", nest=True)
hp.mollview(output_map, nest=True, min=0, max=1e-3, cmap="coolwarm")
hp.gnomview(output_map, rot=(0,0), xsize=5000, ysize=2000, cmap="coolwarm", nest=True, min=0, max=1e-2)
```
### Custom sky components
* `pysm_component_objects`: pass custom PySM component objects; see for example the [WebSkyCIB](https://so-pysm-models.readthedocs.io/en/latest/api/so_pysm_models.WebSkyCIB.html#so_pysm_models.WebSkyCIB) model in the [so_pysm_models](https://github.com/simonsobs/so_pysm_models) repository, which provides a Cosmic Infrared Background computed from
# PyEcharts
**Note**
- The examples in this document come from: https://github.com/pyecharts/pyecharts-gallery
- This document is meant to be copied directly to wherever it is needed... T_T
## Line chart
```
import pyecharts.options as opts
from pyecharts.charts import Line
x_data = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
y_data = [820, 932, 901, 934, 1290, 1330, 1320]
y_data2 = [i*1.5 for i in y_data]
l = (Line()
.add_xaxis(
xaxis_data=x_data,
)
.add_yaxis(
series_name="y1",
y_axis=y_data
)
.add_yaxis(
series_name="y2",
y_axis=y_data2,
)
.set_global_opts(
title_opts=opts.TitleOpts(title="折线图堆叠"),
tooltip_opts=opts.TooltipOpts(trigger="axis"),
toolbox_opts=opts.ToolboxOpts(is_show=True),
yaxis_opts=opts.AxisOpts(
type_="value",
axistick_opts=opts.AxisTickOpts(is_show=True),
splitline_opts=opts.SplitLineOpts(is_show=True),
),
xaxis_opts=opts.AxisOpts(type_="category", boundary_gap=False))
)
l.render_notebook()
```
## Bar chart
```
import pyecharts.options as opts
from pyecharts.charts import Bar
import numpy as np
category = ["类目{}".format(i) for i in range(0, 100)]
red_bar = list(np.random.random(100))
blue_bar = [i*1.5 for i in red_bar]
bar = (Bar(init_opts=opts.InitOpts(width="1600px", height="800px"))
.add_xaxis(xaxis_data=category)
.add_yaxis(
series_name="bar", yaxis_data=red_bar, label_opts=opts.LabelOpts(is_show=False)
)
.add_yaxis(
series_name="bar2",
yaxis_data=blue_bar,
label_opts=opts.LabelOpts(is_show=False),
)
.set_global_opts(
title_opts=opts.TitleOpts(title="柱状图动画延迟"),
xaxis_opts=opts.AxisOpts(splitline_opts=opts.SplitLineOpts(is_show=False)),
yaxis_opts=opts.AxisOpts(
axistick_opts=opts.AxisTickOpts(is_show=True),
splitline_opts=opts.SplitLineOpts(is_show=True),
),
))
bar.render_notebook()
```
### Bar chart + line chart
```
x_data = ["1月", "2月", "3月", "4月", "5月", "6月", "7月", "8月", "9月", "10月", "11月", "12月"]
bar = (
Bar(init_opts=opts.InitOpts(width="1600px", height="800px"))
.add_xaxis(xaxis_data=x_data)
.add_yaxis(
series_name="蒸发量",
yaxis_data=[
2.0,
4.9,
7.0,
23.2,
25.6,
76.7,
135.6,
162.2,
32.6,
20.0,
6.4,
3.3,
],
label_opts=opts.LabelOpts(is_show=False),
)
.add_yaxis(
series_name="降水量",
yaxis_data=[
2.6,
5.9,
9.0,
26.4,
28.7,
70.7,
175.6,
182.2,
48.7,
18.8,
6.0,
2.3,
],
label_opts=opts.LabelOpts(is_show=False),
)
.extend_axis(
yaxis=opts.AxisOpts(
name="温度",
type_="value",
min_=0,
max_=25,
interval=5,
axislabel_opts=opts.LabelOpts(formatter="{value} °C"),
)
)
.set_global_opts(
tooltip_opts=opts.TooltipOpts(
is_show=True, trigger="axis", axis_pointer_type="cross"
),
xaxis_opts=opts.AxisOpts(
type_="category",
axispointer_opts=opts.AxisPointerOpts(is_show=True, type_="shadow"),
),
yaxis_opts=opts.AxisOpts(
name="水量",
type_="value",
min_=0,
max_=250,
interval=50,
axislabel_opts=opts.LabelOpts(formatter="{value} ml"),
axistick_opts=opts.AxisTickOpts(is_show=True),
splitline_opts=opts.SplitLineOpts(is_show=True),
),
)
)
line = (
Line()
.add_xaxis(xaxis_data=x_data)
.add_yaxis(
series_name="平均温度",
yaxis_index=1,
y_axis=[2.0, 2.2, 3.3, 4.5, 6.3, 10.2, 20.3, 23.4, 23.0, 16.5, 12.0, 6.2],
label_opts=opts.LabelOpts(is_show=False),
)
)
#
bar.overlap(line).render_notebook()
```
## Pie chart
```
import pyecharts.options as opts
from pyecharts.charts import Pie
x_data = ["直接访问", "邮件营销", "联盟广告", "视频广告", "搜索引擎"]
y_data = [335, 310, 274, 235, 400]
data_pair = [list(z) for z in zip(x_data, y_data)]
data_pair.sort(key=lambda x: x[1])
pie = (
Pie(init_opts=opts.InitOpts(width="1600px", height="800px", bg_color="#2c343c"))
.add(
series_name="访问来源",
data_pair=data_pair,
rosetype="radius",
radius="55%",
center=["50%", "50%"],
label_opts=opts.LabelOpts(is_show=False, position="center"),
)
.set_global_opts(
title_opts=opts.TitleOpts(
title="Customized Pie",
pos_left="center",
pos_top="20",
title_textstyle_opts=opts.TextStyleOpts(color="#fff"),
),
legend_opts=opts.LegendOpts(is_show=False),
)
.set_series_opts(
tooltip_opts=opts.TooltipOpts(
trigger="item", formatter="{a} <br/>{b}: {c} ({d}%)"
),
label_opts=opts.LabelOpts(color="rgba(255, 255, 255, 0.3)"),
)
)
pie.render_notebook()
```
## Scatter chart
```
import pyecharts.options as opts
from pyecharts.charts import EffectScatter
data = [
[10.0, 8.04],
[8.0, 6.95],
[13.0, 7.58],
[9.0, 8.81],
[11.0, 8.33],
[14.0, 9.96],
[6.0, 7.24],
[4.0, 4.26],
[12.0, 10.84],
[7.0, 4.82],
[5.0, 5.68],
]
data.sort(key=lambda x: x[0])
x_data = [d[0] for d in data]
y_data = [d[1] for d in data]
scatter = ( EffectScatter(init_opts=opts.InitOpts(width="1600px", height="1000px"))
.add_xaxis(
xaxis_data=x_data,
)
.add_yaxis(
series_name="xc",
y_axis=y_data,
symbol_size=20,
label_opts=opts.LabelOpts(is_show=False),
)
.set_series_opts(
)
.set_global_opts(
title_opts=opts.TitleOpts(title="demo"),
xaxis_opts=opts.AxisOpts(
type_="value", splitline_opts=opts.SplitLineOpts(is_show=True)
),
yaxis_opts=opts.AxisOpts(
type_="value",
axistick_opts=opts.AxisTickOpts(is_show=True),
splitline_opts=opts.SplitLineOpts(is_show=True),
),
tooltip_opts=opts.TooltipOpts(is_show=False),
)
)
scatter.render_notebook()
```
## Radar chart
```
import pyecharts.options as opts
from pyecharts.charts import Radar
v1 = [[4300, 10000, 28000, 35000, 50000, 19000]]
v2 = [[5000, 14000, 28000, 31000, 42000, 21000]]
radar = (
Radar(init_opts=opts.InitOpts(width="1600px", height="1000px", bg_color="#CCCCCC"))
.add_schema(
schema=[
opts.RadarIndicatorItem(name="销售(sales)", max_=6500),
opts.RadarIndicatorItem(name="管理(Administration)", max_=16000),
opts.RadarIndicatorItem(name="信息技术(Information Technology)", max_=30000),
opts.RadarIndicatorItem(name="客服(Customer Support)", max_=38000),
opts.RadarIndicatorItem(name="研发(Development)", max_=52000),
opts.RadarIndicatorItem(name="市场(Marketing)", max_=25000),
],
splitarea_opt=opts.SplitAreaOpts(
is_show=True, areastyle_opts=opts.AreaStyleOpts(opacity=1)
),
textstyle_opts=opts.TextStyleOpts(color="#fff"),
)
.add(
series_name="预算分配(Allocated Budget)",
data=v1,
linestyle_opts=opts.LineStyleOpts(color="#CD0000"),
)
.add(
series_name="实际开销(Actual Spending)",
data=v2,
linestyle_opts=opts.LineStyleOpts(color="#5CACEE"),
)
.set_series_opts(label_opts=opts.LabelOpts(is_show=False))
.set_global_opts(
title_opts=opts.TitleOpts(title="基础雷达图"), legend_opts=opts.LegendOpts()
)
)
radar.render_notebook()
```
## Word cloud
```
import pyecharts.options as opts
from pyecharts.charts import WordCloud
data = [
("python", "999"),
("java", "888"),
("C\C++", "777"),
("js", "688"),
("node", "588"),
("C#", "516"),
]
word_cloud = (
WordCloud()
.add(series_name="热点分析", data_pair=data, word_size_range=[6, 66])
.set_global_opts(
title_opts=opts.TitleOpts(
title="热点分析", title_textstyle_opts=opts.TextStyleOpts(font_size=23)
),
tooltip_opts=opts.TooltipOpts(is_show=True),
)
)
word_cloud.render_notebook()
```
# Image Classification using Pre-trained model
## Step 1- Download the model
```
!omz_downloader --name inception-resnet-v2-tf
```
## Step 2 - Import the libraries
```
import cv2
import matplotlib.pyplot as plt
import numpy as np
from openvino.runtime import Core
from pathlib import Path
from IPython.display import Markdown
```
## Step 3 - Convert the model to IR
```
# The paths of the source and converted models
model_path = Path("/home/chetan/public/inception-resnet-v2-tf/inception_resnet_v2.pb")
ir_path = Path(model_path).with_suffix(".xml")
# Construct the command for Model Optimizer
mo_command = f"""mo
--input_model "{model_path}"
--input_shape "[1,299,299,3]"
--mean_values="[127.5,127.5,127.5]"
--scale_values="[127.5]"
--data_type FP16
--output_dir "{model_path.parent}"
"""
mo_command = " ".join(mo_command.split())
print("Model Optimizer command to convert TensorFlow to OpenVINO:")
display(Markdown(f"`{mo_command}`"))
# Run Model Optimizer
print("Exporting TensorFlow model to IR... This may take a few minutes.")
! $mo_command
```
## Step 4 - Load the model
```
# Load the converted model
ie = Core()
model = ie.read_model(model="/home/chetan/public/inception-resnet-v2-tf/inception_resnet_v2.xml")
compiled_model = ie.compile_model(model=model, device_name="CPU")
```
## Step 5 - Get Model Information
```
input_layer = next(iter(compiled_model.inputs))
output_layer = next(iter(compiled_model.outputs))
network_input_shape = input_layer.shape
```
## Step 6 - Load an Image
```
# The network expects images in RGB format
image = cv2.cvtColor(cv2.imread(filename="data/Bengal-tiger-1.jpg"), code=cv2.COLOR_BGR2RGB)
# Resize image to network input image shape
resized_image = cv2.resize(src=image, dsize=(299, 299))
# Add a batch dimension to match the network input shape
input_image = np.expand_dims(resized_image, 0)
plt.imshow(image);
```
## Step 7 - Inference
```
# Option 1
result = compiled_model([input_image])[output_layer]
result_index = np.argmax(result)
print('Result index', result_index)
# Convert the inference result to a class name.
imagenet_classes = open("/home/chetan/public/inception-resnet-v2-tf/labels.txt").read().splitlines()
print('Predicted class:', imagenet_classes[result_index])
# Option 2
request = compiled_model.create_infer_request()
request.infer(inputs={input_layer.any_name: input_image})
result = request.get_output_tensor(output_layer.index).data
result_index = np.argmax(result)
# Convert the inference result to a class name.
imagenet_classes = open("/home/chetan/public/inception-resnet-v2-tf/labels.txt").read().splitlines()
print('Predicted class:', imagenet_classes[result_index])
```
# Lesson 04. Python intro
**Udacity Full Stack Web Developer Nanodegree program**
Part 01. Programming fundamentals and the web
[Programming foundations with Python](https://www.udacity.com/course/programming-foundations-with-python--ud036)
Brendon Smith
br3ndonland
## 01. What will we create?
Kunal, Udacity instructor
1. project take a break
2. project profanity editor
3. project movie database
*I stopped here for a bit to review basic python and drill loops and if statements. See [Appendix](#appendix) below for some examples.*
## 02. Quiz: comfort level
* Kunal said that in his computer science classes, students often started out interested, but became frustrated after the difficulty level skyrocketed.
* I chose 3, somewhat comfortable, with writing computer programs.
## 03. What should I know?
*(viewed source HTML on the website, copied here. Remember [Markdown can interpret HTML](https://daringfireball.net/projects/markdown/syntax#html)).*
<div class="ltr"><div class="ureact-markdown--markdown--3IhZa ureact-markdown "><p>Need to brush up? Check out these helpful tutorials below:</p>
<h3 id="if-statements">If Statements</h3>
<ol>
<li><a target="_blank" href="http://www.tutorialspoint.com/python/python_if_else.htm">Tutorialspoint: If-Else Blocks</a></li>
<li><a target="_blank" href="https://classroom.udacity.com/courses/cs101/lessons/48753036/concepts/487343560923">CS 101: If Statements</a></li>
</ol>
<h3 id="while-loops">While Loops</h3>
<ol>
<li><a target="_blank" href="http://www.tutorialspoint.com/python/python_while_loop.htm">Tutorialspoint: Loops</a></li>
<li><a target="_blank" href="https://www.udacity.com/course/viewer#!/c-cs101/l-48753036/e-48686708/m-48480488">CS 101: While Loops</a></li>
</ol>
<h3 id="functions">Functions</h3>
<ol>
<li><a target="_blank" href="http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/functions.html">Loyola University: Defining Functions</a></li>
<li><a target="_blank" href="https://classroom.udacity.com/courses/cs101/lessons/48753036/concepts/487134840923">CS 101: Functions</a></li>
</ol>
</div></div>
## 04. Quiz: test for loops
They had a dance video shaking shoulders to the left and right three times.
**I didn't really agree with the answer, so I re-coded it myself:**
```
# Python for loops quiz
# I would actually code it like this:
num_1 = 1
while num_1 <= 3:
    print('shake shoulders to the left')
    print('shake shoulders to the right')
    # omission of the line below may result in an infinite loop.
    num_1 = num_1 + 1
print('Strike a pose')
```
## 05. Quiz: Test for If Statements
I drilled if-else statements for a while (see conditionals section below). I understood this immediately! Very happy with my syntax progress.
```
def determine_direction(message):
    if message == 'TAKE THE ROAD LESS TRAVELLED':
        print('Turn left')
    elif message == 'COMFORT IS DIVINE':
        print('Turn right')
    else:
        print('Stop here')

determine_direction('TAKE THE ROAD LESS TRAVELLED')
```
## 06. What Will We Learn?
Somewhat familiar ideas: lesson 0 and 1
* Functions like `print('Hello, World!')`
New ideas: lesson 2 and 3
* Classes
* Object-oriented programming
## Appendix
### Selection of jupyter notebook
* This nanodegree will use Python.
* Jupyter allows both markdown and inline code chunks in the same notebook.
* It is easier to write Markdown in Sublime, but jupyter is a better choice because of the coding functionality.
* The ability to run individual cells is useful for both code and markdown. Rendering Markdown is quicker than knitting an entire document with Rstudio, or using Markdown Preview with Sublime.
* Rstudio has some similar features, but jupyter has support for more coding languages.
* Jupyter is also useful for research because of checkpoint/version control and collaboration features. It is a good idea to get used to jupyter so I can use it for collaborative, reproducible research in the future.
See [GitHub](https://github.com/br3ndonland/general/blob/master/br3ndonland_terminal.md) for more info on my computer setup.
### Resources
#### Python
* [Python 3 documentation](https://docs.python.org/3/)
* [Python PEP8 style guide](https://www.python.org/dev/peps/pep-0008/)
* [Python PEP8 style checker](http://pep8online.com/)
* [Python Central](http://pythoncentral.io/)
* [jupyter/iPython documentation](http://nbviewer.jupyter.org/github/ipython/ipython/blob/3.x/examples/Index.ipynb)
#### Markdown
* [Markdown Guide](https://www.markdownguide.org/)
* [CommonMark](http://commonmark.org/)
* [GitHub-Flavored Markdown (GFM) Basic writing and formatting syntax](https://help.github.com/articles/basic-writing-and-formatting-syntax/)
* [Page from the original creator of Markdown](https://daringfireball.net/projects/markdown/syntax)
* [Dillinger](https://dillinger.io/), a helpful online Markdown editor with live preview.
### User input
I worked on this later, out of interest.
```
name = input("Please type your name: ")
# Old syntax
print('Yo yo yo', name + '!', 'My new album just dropped!')
# Python 3 string formatting
print("Yo yo yo {}! My new album just dropped!".format(name))
```
### Loops
```
# looping examples
## simple example from GA_python101
### this example prints each letter of the reference string `phrase` in turn, as uppercase
phrase = 'Happy Birthday!'  # establish a reference
for letter in phrase:  # perform the following for each letter in the phrase
    print(letter.upper())  # print the letter as uppercase

integers_2 = range(1, 16)
for x in integers_2:
    print(x)

# looping examples
## example from Harvard Computefest Intro to Python 2017 python-cf17-02-language.ipynb
colors = 'red green blue yellow white black pink'.split()
print(colors)
for c in colors:
    print('color is', c)
A `while` loop statement in Python repeatedly executes a target statement as long as a given condition is true.
```
num_1 = 1
while num_1 <= 3:
    print('shake shoulders to the left')
    print('shake shoulders to the right')
    num_1 = num_1 + 1  # omission of this line may result in an infinite loop.
else:
    print('Strike a pose.')
```
**To stop an infinite loop in jupyter, press `Esc` to enter command mode, and then `i,i` to interrupt the kernel.**
### Conditionals
```
# Conditionals (if statements)
## simple example from GA_python101
secret = 55
value = 55
if secret == value:
    print('they are equal')
else:
    print('they are not equal')

## more complicated example
###
### not sure how to do this with a range, like:
### integers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
### integers = range(1, 15)
integer = 15
if integer == 1:
    print('this number is one')
elif integer == 10:
    print('this number is ten')
elif integer == 15:
    print('Finally! This number is fifteen!')
else:
    print('End of line.')
```
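The comment in the cell above wonders how to apply the conditional across a range instead of a single integer. One simple sketch is to wrap the same if/elif chain in a loop; the messages are collected in a list here only to make the results easy to inspect:

```python
# Apply the conditional to every integer in the range, instead of one value.
messages = []
for integer in range(1, 16):
    if integer == 1:
        messages.append('this number is one')
    elif integer == 10:
        messages.append('this number is ten')
    elif integer == 15:
        messages.append('Finally! This number is fifteen!')

for m in messages:
    print(m)
```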
### Functions
```
# Functions
## factorial example from [repl.it](https://repl.it)
### try running it with 0 and 7 as the inputs in the last line.
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

factorial(7)
# Functions
## example from Harvard Computefest Intro to Python 2017
## python-cf17-02-language.ipynb
def say_hi(me):
    print("Yo {}!".format(me))

me = "br3ndonland"
say_hi(me)

def f(x):
    "A function f of x that calculates a second order function for y and also z"
    y = 3*x + 2*x**2
    z = 5*x**2 + 4*y**(0.5)
    return y, z  # tuple packing

f(4)
# Functions
## example from Harvard Computefest Intro to Python 2017
## python-cf17-02-language.ipynb
## Function exercise
## * Create a list of 10 numbers, your choice.
## * Use a `for` loop to iterate over it.
## * Think about what you'd need to do to calculate the average
## * Remember the `len()` function.
number_list = [1, -594, 90, 45, 7, 3, 59, 9087, 4, 2]
def myaverage(numlist):
    'calculate the mean from an iterable of numbers'
    print('given numlist:', numlist)
    total = 0
    for x in numlist:
        total += x
    mean = total / float(len(numlist))
    print("Got a mean of", mean)
    return mean

average = myaverage(number_list)
average
# Functions
## that was a very time-consuming way of calculating an average.
## (in human time, not computer time)
## how about this
import statistics
statistics.mean(number_list)
# numpy is also faster for humans, but imports a larger package
# so may take longer for the computer.
import numpy
numpy.mean(number_list)
```
---
This is one of the Objectiv example notebooks. For more examples visit the
[example notebooks](https://objectiv.io/docs/modeling/example-notebooks/) section of our docs. The notebooks can run with the demo data set that comes with our [quickstart](https://objectiv.io/docs/home/quickstart-guide/), but can also be run on your own collected data.
All example notebooks are also available in our [quickstart](https://objectiv.io/docs/home/quickstart-guide/). With the quickstart you can spin up a fully functional Objectiv demo pipeline in five minutes. This also allows you to run these notebooks and experiment with them on a demo data set.
With Objectiv you can do all your analysis and Machine Learning directly on the raw data in your SQL database.
This example shows in the simplest way possible how you can use Objectiv to create a basic feature set and use
sklearn to do machine learning on this data set. We also have an example that goes deeper into
feature engineering [here](https://objectiv.io/docs/modeling/example-notebooks/feature-engineering/).
### Import the required packages for this notebook
The open model hub package can be installed with `pip install objectiv-modelhub` (this installs Bach as well).
If you are running this notebook from our quickstart, the model hub and Bach are already installed, so you don't have to install it separately.
```
from modelhub import ModelHub
from sklearn import cluster
```
First, we instantiate the Objectiv DataFrame object and the model hub.
```
# instantiate the model hub
modelhub = ModelHub(time_aggregation='%Y-%m-%d')
# get the Bach DataFrame with Objectiv data
df = modelhub.get_objectiv_dataframe(start_date='2022-02-02')
```
If you are running this example on your own collected data, set up the db connection like this and replace the cell above:
```
# df = modelhub.get_objectiv_dataframe(db_url='postgresql://USER:PASSWORD@HOST:PORT/DATABASE',
# start_date='2022-06-01',
# end_date='2022-06-30',
# table_name='data')
```
We create a data set containing, per user, all the root locations that the user clicked on. For the ins and outs of feature engineering, see our [feature engineering example](https://objectiv.io/docs/modeling/example-notebooks/feature-engineering/).
```
df['root'] = df.location_stack.ls.get_from_context_with_type_series(type='RootLocationContext', key='id')
features = df[(df.event_type=='PressEvent')].groupby('user_id').root.value_counts()
features.head()
features_unstacked = features.unstack(fill_value=0)
# sample or not
kmeans_frame = features_unstacked
kmeans_frame = features_unstacked.get_sample(table_name='kmeans_test', sample_percentage=50, overwrite=True, seed=2224)
```
Now we have a basic feature set that is small enough to fit in memory. This can be used with sklearn, as we
demonstrate in this example.
```
# export to pandas now
pdf = kmeans_frame.to_pandas()
pdf
# do basic kmeans
est = cluster.KMeans(n_clusters=3)
est.fit(pdf)
pdf['cluster'] = est.labels_
```
Now you can use the created clusters on your entire data set again by adding them back to your DataFrame.
This is simple, as Bach and pandas cooperate nicely. Your original Objectiv data now has a 'cluster'
column.
```
kmeans_frame['cluster'] = pdf['cluster']
kmeans_frame.sort_values('cluster').head()
df_with_cluster = df.merge(kmeans_frame[['cluster']], on='user_id')
df_with_cluster.head()
```
You can use this column, just as any other. For example you can now use your created clusters to group models
from the model hub by:
```
modelhub.aggregate.session_duration(df_with_cluster, groupby='cluster').head()
```
# 5 - Creating, getting and visualizing Mesh groups and Mesh Fields
**This Notebook will introduce you to**:
1. what is a Mesh Group
2. the Mesh Group data model
3. how to create a mesh Group
4. the Mesh Field data model
5. how to add a field to a Mesh Group
6. how to get Mesh Groups and Mesh Fields
<div class="alert alert-info">
**Note**
Throughout this notebook, it will be assumed that the reader is familiar with the overview of the SampleData file format and data model presented in the [first notebook](./SampleData_Introduction.ipynb) of this User Guide.
</div>
## I - SampleData Mesh Groups <a class="anchor" id="sec1"></a>
**SampleData Grid Groups** were introduced in the [last tutorial](./3_SampleData_Image_groups.ipynb), which also presented in detail one of the two types of *Grids*, the *Image Groups*, representing regular grids (rectilinear meshes). This tutorial focuses on the second type of *Grid Groups*, the **Mesh Groups**.
### SampleData meshes
**Mesh Groups** represent unstructured grids, defined by a set of **Nodes** with their associated coordinates, a set of **Elements**, defined by a connectivity matrix indicating which *nodes* form each *element*, and a set of **Fields** describing data defined on this grid. They are typically used to store **Finite Element meshes** and **Finite Element fields**. For a precise mathematical definition of *Nodes* and *Elements*, the reader can refer to standard concepts of finite element theory, which are not in the scope of this tutorial.
Like for *Image Groups*, three types of *Mesh Groups* can be handled by *SampleData*:
1. `emptyMesh`: an empty group that can be used to set the organization and metadata of the dataset before adding actual data to it (similar to empty data arrays, see [tutorial 2, section IV](./2_SampleData_basic_data_items.ipynb))
2. `2DMesh`: a grid of Nodes defined by two coordinates $x$ and $y$
3. `3DMesh`: a grid of Nodes defined by three coordinates $x$, $y$ and $z$
A Mesh field is a data array with one value associated to each node, each element, or each integration point of the mesh.
For many applications, it may be useful to define specific sets of *Nodes* or sets of *Elements* (for instance, to define distinct components of a microstructure, or an area where specific boundary conditions are applied for numerical simulation). Therefore, *SampleData Mesh Groups* also store data defining those *Node* and *Element* sets.
Hence, *Mesh Groups* contain data and metadata defining the mesh Nodes, Node Sets, Elements, Element types and Sets, Mesh Fields, and metadata to synchronize with the XDMF Grid node associated to the Mesh. To explore in details this data model, we will once again open the reference test dataset of the *SampleData* unit tests.
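Purely to illustrate the concepts above — this is plain NumPy, not the SampleData API — a tiny 2D mesh with its nodes, connectivity matrix, a nodal field, and a node set could be represented as:

```python
import numpy as np

# Four nodes of a unit square, given by their (x, y) coordinates.
nodes = np.array([[0.0, 0.0],
                  [1.0, 0.0],
                  [1.0, 1.0],
                  [0.0, 1.0]])

# Connectivity matrix: each row lists the node indices of one triangle,
# so the square is split into two triangular elements.
triangles = np.array([[0, 1, 2],
                      [0, 2, 3]])

# A nodal field: one value per node (say, a temperature).
temperature = np.array([20.0, 21.0, 22.0, 21.5])

# A node set, as used e.g. to tag a region for boundary conditions:
# all nodes lying on the edge y = 0.
bottom_nodes = np.where(nodes[:, 1] == 0.0)[0]
```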
<div class="alert alert-warning">
**Warning 1:**
The SampleData class is not designed to create meshes, in particular the complex meshes of microstructures that are of interest for material science. It is designed mainly to serve as an I/O interface to load/store meshes from mesh data files, visualize them, and provide them to other numerical tools that may need them (finite element solvers for instance).
An automatic mesher that directly outputs a *SampleData Mesh Group* has been implemented in the *Pymicro* package. It relies on an automatic meshing code that is not open-source, and is the property of the laboratory *Centre des Matériaux* of *Mines ParisTech*. Please contact us if you want to know more and use this tool. If you are from *le Centre des Matériaux*, a dedicated tutorial for this tool exists. Please contact H. Proudhon, B. Marchand or L. Lacourt to get it.
If you need to create a meshing tool that automatically outputs results as a SampleData dataset, please contact us to see if we may help you or provide you with guidelines to develop your mesher interface.
</div>
<div class="alert alert-warning">
**Warning 2:**
The SampleData class currently relies on the BasicTools package to handle grids. For now, *BasicTools* integration within *SampleData* covers mainly the input and output of *BasicTools* mesh objects in/out of *SampleData* datasets.
**The *BasicTools* package offers many I/O functionalities with many classical mesh formats, as well as mesh creation tools. They are not all currently integrated within the *SampleData* class. They will be in the future, but for now, you will need to use *BasicTools* code directly to access those possibilities.** If you want to do so, feel free to contact us to get help.
*Basictools* is a required package to use Pymicro, so you should have it installed in your Python environment if you are using Pymicro.
</div>
## II - Mesh Groups Data Model <a class="anchor" id="sec2"></a>
This section will introduce you to the various data items constituting the *Mesh Group* data model, and to the *SampleData* methods that you can use to retrieve them.
```
# Import the Sample Data class
from pymicro.core.samples import SampleData as SD
from config import PYMICRO_EXAMPLES_DATA_DIR # import file directory path
import os
dataset_file = os.path.join(PYMICRO_EXAMPLES_DATA_DIR, 'test_sampledata_ref') # test dataset file path
data = SD(filename=dataset_file)
```
Let us print the content of the dataset to remember its composition:
```
data.print_index()
data.print_dataset_content(short=True)
```
The test dataset contains one *3DMesh* Group, the `test_mesh` group, with indexname `mesh`. It has four fields defined on it, `Test_field1`, `Test_field2`, `Test_field3` and `Test_field4`.
Let us print more information about this *Mesh Group*:
```
data.print_node_info('mesh')
```
Like for *Image Groups*, a lot of metadata is attached to a *Mesh Group*. All the attributes that you see attached to the `test_mesh` group are part of the *Mesh Group* data model, and are automatically created upon creation of *Mesh Groups*. Like for *Image Groups*, you can see a `group_type` attribute, indicating that our group stores data representing a three dimensional mesh.
### Mesh Group indexnames
Let us print the index in more detail:
```
data.print_index(local_root='test_mesh')
```
**As you can see, all the elements of the *Mesh Group* data model have an indexname constructed following the same pattern:
`mesh_indexname + _ + item_name`. Their `item_name` is also their Name in the dataset, as you can see in their paths.**
For our reference dataset, and for the *Mesh Group* `test_mesh`, the `mesh_indexname` is `mesh`.
The various elements of this data model and their `item_name` are all printed above in the `print_index` output, and will all be presented below.
### The Mesh Geometry Group
You can see in the output of the `print_dataset_content` method above that the mesh contains 3 *data array* nodes, and one *Group*, which has a lot of children. This group, **the Geometry Group, contains all the data necessary to define the grid topology.** This group's `item_name` is `Geometry`.
```
data.print_node_info('mesh_Geometry')
data.print_group_content('mesh_Geometry', short=True)
```
This Group contains a lot of data arrays, and 2 groups. You can see from their names that all of its children are related to the grid *Nodes* and *Elements*. We will now explore in detail how this essential data is stored.
### Mesh Nodes
**Mesh Nodes define the geometry of the grid. They are defined by an array of coordinates, of size $[N_{nodes},N_{coordinates}]$**.
You can see a few cells above that some attributes of the *Mesh Group* provide information on the mesh *Nodes*:
* `number_of_nodes`: $N_{nodes}$, the number of nodes in the grid
* `nodes_path`: path of a data array containing the node coordinates array
* `nodesID_path`: path of a data array containing Identification numbers for each node of the previous array
As you can see, the node coordinates and identification number arrays are stored in the *Geometry Group*. These two data items have respectively the *item name* `Nodes` and `Nodes_ID`. We can print more information about them:
```
data.print_node_info('mesh_Nodes')
data.print_node_info('mesh_Nodes_ID')
```
#### Nodes and Nodes ID arrays
You see that they are simple *Data array* items, with no specific metadata. You can see that the shape of the `Nodes` array is $[N_{nodes}, N_{coordinates}]$, which is $[6,3]$ in this case. This is consistent with the dimensionality of the mesh, which is 3 (tridimensional mesh). The `Nodes_ID` array has one value per node, which is also consistent.
Let us print the content of those arrays:
```
mesh_nodes = data['mesh_Nodes']
mesh_nodes_id = data['mesh_Nodes_ID']
for i in range(len(mesh_nodes_id)):
print(f'Mesh node number {mesh_nodes_id[i]} coordinates: {mesh_nodes[i,:]}')
```
You can see that the first 4 nodes form a square in the $(X,Y)$ plane, and that the two other nodes are symmetrically located above and below the center of this square. They are the vertices of a regular octahedron, which we will verify later by visualizing the mesh.
To get those arrays, the most convenient solution is to use the `get_mesh_nodes` and `get_mesh_nodesID` methods:
```
mesh_nodes = data.get_mesh_nodes('mesh')
mesh_nodes_id = data.get_mesh_nodesID('mesh')
for i in range(len(mesh_nodes_id)):
print(f'Mesh node number {mesh_nodes_id[i]} coordinates: {mesh_nodes[i,:]}')
```
#### Node Sets/Tags
You can also see that the Geometry Group contains a *String Array* labelled `Nodes_tag_list`, with *item_name* `NodeTagsList`. This item has the following content:
```
data.print_node_info('mesh_NodeTagsList')
```
You see that this node is a *String array* containing 2 elements. They are the names of the **NODE SETS**, also called **NODE TAGS** (*the notation used in SampleData's code*), defined on this mesh and stored in the dataset. You can access them easily (see the *String Array* section in [tutorial 2](./2_SampleData_basic_data_items.ipynb)):
```
for tag in data['mesh_NodeTagsList']:
print(tag.decode('utf-8'))
```
An easier way to get this information is to use the `get_mesh_node_tags_names` method:
```
data.get_mesh_node_tags_names('test_mesh')
```
The two node tags are named `Z0_plane` and `out_of_plane`. From the node coordinates printed a few cells above, we can expect that the Node Set `Z0_plane` contains the first 4 nodes, and that the `out_of_plane` set contains the last two.
To see their content, you can use the dedicated methods `get_mesh_node_tag` and `get_mesh_node_tag_coordinates`:
```
# Get the node IDs for the Node sets defined on the mesh
Z0_plane = data.get_mesh_node_tag(meshname='test_mesh',node_tag='Z0_plane')
out_of_plane = data.get_mesh_node_tag(meshname='test_mesh',node_tag='out_of_plane')
# Get the node coordinates for the Node sets defined on the mesh
Z0_plane_xyz = data.get_mesh_node_tag_coordinates(meshname='test_mesh',node_tag='Z0_plane')
out_of_plane_xyz = data.get_mesh_node_tag_coordinates(meshname='test_mesh',node_tag='out_of_plane')
# Print information
print(f'Node tag "Z0_plane" is composed of nodes {Z0_plane} with coordinates \n {Z0_plane_xyz}','\n')
print(f'Node tag "out_of_plane" is composed of nodes {out_of_plane} with coordinates \n {out_of_plane_xyz}')
```
The *Node Tags* are stored in the *Group* `NodeTags`, a child of the *Group* `Geometry`:
```
data.print_node_info('mesh_NodeTags')
data.print_group_content('mesh_NodeTags')
```
You can observe that the group contains four data items, two for each node set in our mesh group, as their names indicate.
Two of those data items have a name that starts with **`NT_`**, which stands for **NodeTag**. They are the data items associated to the *node tags*, that can be retrieved with the `get_mesh_node_tag` method (see above).
The two other data items have a name that starts with **`field_`**. They are *Field Arrays* data items. These fields are nodal value fields (see the section dedicated to *Mesh Fields* below), and have a value of **1** on the *node set*, and **0** outside of it.
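As an illustration, such a visualization field can be sketched with plain *NumPy*. This is only the underlying idea, not the SampleData implementation, assuming the 6-node octahedron mesh studied above:

```
import numpy as np

# Octahedron mesh: 6 nodes, node tag 'Z0_plane' contains nodes 0 to 3
n_nodes = 6
z0_plane_ids = np.array([0, 1, 2, 3])

# Nodal visualization field: 1 on the node set, 0 elsewhere
field_Z0_plane = np.zeros(n_nodes)
field_Z0_plane[z0_plane_ids] = 1
print(field_Z0_plane)  # [1. 1. 1. 1. 0. 0.]
```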
We have now covered all the elements of the *Mesh Group* data model related to mesh *nodes*.
### Mesh Elements
**Mesh Elements define the geometry of the geometrical domain represented by the grid. They are polylines, polygons or polyhedra whose vertices are mesh Nodes. They are defined by a connectivity array indicating, for each element, the IDs of the nodes defining the element.**
Let us print again the *Mesh Group* information to find out which data items relate to mesh elements:
```
data.print_node_info('mesh')
```
As shown by this print, many *Mesh Group* attributes are related to *Mesh Elements*.
The `elements_path` attribute indicates the path of the *Data Array* item containing the elements connectivity array.
The `Topology` attribute can have two values, **Uniform** (mesh composed of only one type of elements) or **Mixed** (mesh composed of several types of elements). Here the mesh is *uniform*. You will see an example of *Mixed* topology in the next section of this tutorial. All items of the *Mesh Group data Model* related to mesh elements will be reviewed again for the *Mixed* topology case.
*****
#### Element types
You can see 6 attributes that store *list* values. All those have one value for each type of element that composes the mesh. Here, as the mesh has a *Uniform* topology, they have only one value. Two of those attributes provide information on the type of elements:
* **`elements_type`**: a list of the types of elements used in the mesh. These type names consist of a geometrical prefix indicating the geometrical nature of the element, followed by a number indicating the number of nodes in the element. In the case of a *Mixed* topology, element types are stored by decreasing dimensionality: the first elements of the list are the element types with the highest dimensionality in the mesh (**bulk element types**), followed by the element types with a lower dimensionality (**boundary element types**). The supported element types and their dimensionality are:
| Element type | Description |
|---|---|
|`point1` | A simple point |
|`bar2`/`bar3` | A linear/quadratic line segment element (1D) |
|`tri3` and `tri6` | A linear/quadratic triangle element (2D) |
|`quad4` and `quad8` | A linear/quadratic quadrilateral element (2D) |
|`tet4` and `tet10` | A linear/quadratic tetrahedron element (3D) |
|`pyr5` and `pyr13` | A linear/quadratic pyramid (square basis) element (3D) |
|`wed6` and `wed15` | A linear/quadratic wedge element (3D) |
|`hex8` and `hex20` | A linear/quadratic hexahedron element (3D) |
* **`Xdmf_elements_code`**: same as `elements_type` but with the element type names of the XDMF data model, [that can be found here](https://www.xdmf.org/index.php/XDMF_Model_and_Format#Topology) (used for synchronization with XDMF file)
<div class="alert alert-info">
**Note**
You will find the geometric definition of these elements in the `BasicTools/Containers/ElementNames.py` file of the *BasicTools* package. They are not presented here for the sake of brevity.
</div>
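As the table shows, the number of nodes of an element can be read directly from the numeric suffix of its type name. This can be illustrated with a small helper (hypothetical, not part of the SampleData or BasicTools API):

```
import re

# Element type names encode the node count as a numeric suffix
# (e.g. 'tri3' -> 3 nodes, 'hex20' -> 20 nodes)
def nodes_per_element(element_type: str) -> int:
    """Extract the node count from an element type name."""
    match = re.search(r'(\d+)$', element_type)
    if match is None:
        raise ValueError(f'Unrecognized element type name: {element_type}')
    return int(match.group(1))

for eltype in ('bar2', 'tri3', 'quad8', 'tet10', 'hex20'):
    print(eltype, '->', nodes_per_element(eltype), 'nodes')
```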
*****
#### Elements count
Other attributes are used to keep track of the number of elements: `Number_of_elements`, `Number_of_bulk_elements` and `Number_of_boundary_elements`. They indicate respectively how many elements, bulk elements (*i.e.* elements that have the same dimensionality as the mesh, 3D elements for a 3D mesh for instance) and boundary elements (elements with a lower dimensionality than the mesh, 2D elements for a 3D mesh for instance) compose the mesh.
These three attributes contain lists with one value per element type, in the same order as the `elements_type` attribute list.
The `Elements_offset` attribute is also a list with one entry per element type in `elements_type`. It contains the position of the first element of each element type in the complete set of mesh elements. It is used by the class for consistency when interacting with *BasicTools* Elements objects, and is not relevant for *SampleData* users.
Using all this information, we can determine that our mesh contains 8 elements, which are linear triangle elements (`tri3`). They are the 8 faces of our octahedron. A quicker way to retrieve this is to use the `get_mesh_elem_types_and_number` method, which returns a dictionary whose keys are the element types in the mesh, and whose values are the number of elements of each type:
```
data.get_mesh_elem_types_and_number('mesh')
```
#### Elements connectivity array
Like for the *Mesh Nodes*, all the data describing the *Mesh elements* is also gathered in the *Mesh Geometry Group*. Let us print again the content of this group:
```
data.print_group_content('mesh_Geometry', short=True)
data.print_index(local_root='mesh_Geometry')
```
Like the node coordinates array, the element connectivity array is a *Data Array* node stored in the *Mesh Geometry Group*. Its *item name* is *Elements*. Let us print the content of this array:
```
print(f'The shape of the mesh Elements connectivity array is {data["mesh_Elements"].shape}','\n')
print(f'Its content is {data["mesh_Elements"]}')
```
This array contains, for each element, the IDs of the Nodes that compose it: 24 node IDs, three for each triangle. We can see that the first element of the mesh is composed of the Nodes 0, 1 and 4.
**The connectivity array is used as grid Topology array in the XDMF file of the dataset. Hence, it must comply with the XDMF format. For this reason, it is always stored as a 1D array. This format is detailed on the dedicated XDMF data format webpage, [here](https://www.xdmf.org/index.php/XDMF_Model_and_Format#Topology).**
**In the case of a Uniform topology, the connectivities of the elements are stored consecutively.** As the number of nodes of each element is identical, the interpretation of the array is unambiguous. **For the Mixed topology case** *(see example later in this tutorial)*, **the connectivity of each element must be preceded by a number that identifies the element type.** This number is the XDMF code for the element type.
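To make the Mixed-topology layout concrete, here is a small sketch that decodes such a flat connectivity array in plain Python, assuming the standard XDMF type codes 6 (tetrahedron) and 4 (triangle):

```
# Each element's node list is preceded by its XDMF element type code
XDMF_NODE_COUNT = {4: 3, 6: 4}  # XDMF type code -> number of nodes

def decode_mixed_connectivity(flat):
    """Split a flat Mixed connectivity array into (type_code, node_ids) pairs."""
    elements, i = [], 0
    while i < len(flat):
        code = flat[i]
        n = XDMF_NODE_COUNT[code]
        elements.append((code, flat[i + 1:i + 1 + n]))
        i += 1 + n
    return elements

# One tetrahedron followed by one triangle
flat = [6, 0, 1, 2, 4,   4, 0, 1, 2]
print(decode_mixed_connectivity(flat))
# [(6, [0, 1, 2, 4]), (4, [0, 1, 2])]
```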
The *SampleData* class offers more convenient ways to get the element connectivity array. The first is the `get_mesh_xdmf_connectivity` method, which returns the *Elements* Data array:
```
print(f'The mesh elements connectivity array is {data.get_mesh_xdmf_connectivity("mesh")}')
```
The second is the `get_mesh_elements` method. Given the mesh name and one of its element types as arguments, it returns the connectivity of that element type in a classical 2D array format:
```
data.get_mesh_elements(meshname='mesh', get_eltype_connectivity='tri3')
```
#### Elements Sets/Tags
Like *Node Tags*, *Mesh Groups* can store *Element Tags* (= element sets) in the *Mesh Geometry Group*. They are stored in the *ElementsTags Group*, and listed in the *ElTagsList* String array. A second string array is associated to element tags: the *ElTagsTypeList*, which contains the type of element constituting each tag (element tags can only be composed of a single type of elements). Like for node tags, there are explicit methods to retrieve information on element tags:
```
data.get_mesh_elem_tags_names('mesh')
```
Our *Mesh Group* contains 3 element tags, all containing `tri3` elements. The element IDs and the connectivity of an element tag can easily be accessed using its name and the *Mesh Group* name:
```
el_tag = data.get_mesh_elem_tag(meshname='mesh', element_tag='2D')
el_tag_connect = data.get_mesh_elem_tag_connectivity(meshname='mesh', element_tag='Bottom')
print(f'The element tag `2D` contains the elements: {el_tag}.')
print(f'The connectivity of the `Bottom` element tag is: \n {el_tag_connect}')
```
You can observe that the `get_mesh_elem_tag_connectivity` method reshapes the connectivity array to a more usual 2D format, each row of the array corresponding to the connectivity of one element.
Like for *Node Tags*, the arrays containing the *Element Tags* data are stored in a specific group, `ElementsTags`, a subgroup of the *Mesh Geometry Group*. They contain the ID numbers of the elements constituting the *Element Tag*, and are named following the pattern `ET_` + tag name (`ET` standing for element tag).
Like for Nodes, *Mesh Groups* can also store *Mesh fields* for *Element Tags*, to allow visualizing them. These fields have one boolean value per mesh element, equal to 1 on elements that are part of the *Element Tag*. The associated data arrays are also stored in the `ElementsTags` Group, and are named following the pattern `field_` + tag name.
Let us look at the content of this Group:
```
data.print_group_content('mesh_ElemTags')
```
We see that each of our 3 element tags has an associated field in the Mesh Group.
### Mesh fields
Like *Image Groups*, *Mesh Groups* can store fields. The list of the names of the fields defined on the mesh group is stored in the *Field_Index* string array, like for *Image Groups* (see [previous tutorial](./3_SampleData_Image_groups.ipynb)). It can be accessed easily using the `get_grid_field_list` method:
```
data.get_grid_field_list('mesh')
```
We see in this list the various Node/Element tag fields, but also additional fields called `mesh_test_fieldX`. A specific section is dedicated to Mesh fields in this tutorial, so no additional detail on them will be provided here.
### Mesh XDMF grids
We have now seen all the data contained in a *Mesh Group* in the HDF5 dataset of a *SampleData* object. To conclude this section on the *Mesh Group* data model, we will observe how this group is stored in the XDMF dataset:
```
data.print_xdmf()
```
You can see that `test_mesh` is stored as a Grid node. This node contains a *Geometry* node, defined by a data array, which is the *Node coordinates* data array stored in the HDF5 dataset. It also contains a *Topology* node, defined by the *Elements connectivity* data array stored in the HDF5 dataset. Finally, it contains one *Attribute* node for each field listed in the *Field Index* string array of the HDF5 dataset, defined by the associated data array in the HDF5 dataset.
As already explained in [tutorial 1 (sec. V)](./1_Getting_Information_from_SampleData_datasets.ipynb), the XDMF file allows to directly visualize the data described in it with the *Paraview* software. Before closing our reference dataset, let us try to visualize the Mesh Group that we have studied, with the `pause_for_visualization` method and the Paraview software.
You have to choose the **XdmfReader** to open the file, so that Paraview may properly read the data. In the **Property** panel, tick only the `test_mesh` block to only plot the content of the Mesh Group, and then click on the **Apply** button.
```
# Use the second code line if you want to specify the path of the paraview executable you want to use
# otherwise use the first line
#data.pause_for_visualization(Paraview=True)
#data.pause_for_visualization(Paraview=True, Paraview_path='path to paraview executable')
```
You should be able to produce visualization of the mesh geometry:
<img src="./Images/Tutorial_4/test_mesh_geometry.png" width="50%">
or the mesh fields:
<img src="./Images/Tutorial_4/test_mesh_top_field.png" width="50%">
<img src="./Images/Tutorial_4/out_of_plane_field.png" width="50%">
Before moving to the next section on the creation of *Mesh Groups*, let us close our reference dataset:
```
del data
```
## III - Creating Mesh Groups <a class="anchor" id="sec3"></a>
Like for *Image Groups* (see [tutorial 3 section III](./3_SampleData_Image_groups.ipynb)), *Mesh Groups* are created in *SampleData* instances through *BasicTools* **mesh objects**. Creating meshes is a complex geometrical and mathematical task that requires specific tools (meshers like *gmsh* for instance). The *SampleData* code has not been designed to create meshes, but rather to manipulate them. In typical applications of the class, the mesh data stored in *SampleData* datasets comes from external tools; the main tasks are then to load and manipulate this data. This tutorial will mainly focus on those aspects, rather than on mesh creation.
Hence, in this section, we will only review some quick ways to create mesh objects, from simple mesh data, mesh files or other Python packages; and how to add them to a *SampleData* dataset.
### Creating mesh objects with BasicTools
The *BasicTools* package implements a few methods that allow to create meshes with simple topologies. We will rely on them to recreate the mesh that we studied before, and introduce you to *mesh objects*.
#### BasicTools mesh creation tools
The mesh creation tools of the *BasicTools* package can be imported with the following code line:
```
import BasicTools.Containers.UnstructuredMeshCreationTools as UMCT
```
This module contains functions that allow to create uniform meshes of one type of element from a set of points and a connectivity matrix. Learning in detail how to use them is beyond the scope of this tutorial. Likewise, how to add data such as tags or fields to a mesh object will not be detailed here. The reader is referred to the *BasicTools* code to learn more about those methods/objects.
These methods include:
* `CreateUniformMeshOfBars`
* `CreateMeshOfTriangles`
* `CreateMeshOf`
* `CreateSquare`
* `CreateCube`
* `CreateMeshFromConstantRectilinearMesh`
The `UnstructuredMeshCreationTools` module contains test functions for each of these methods, which can be read to understand how to use them.
To recreate the octahedron mesh of the reference dataset studied in section II, we will use the `CreateMeshOfTriangles` method. For that, we will need to create two arrays, one for the node coordinates, one for the elements connectivity, and simply pass them to the *BasicTools* method:
```
import numpy as np
# Mesh node coordinates array
mesh_nodes = np.array([[-1.,-1., 0.],
[-1., 1., 0.],
[ 1., 1., 0.],
[ 1.,-1., 0.],
[ 0., 0., 1.],
[ 0., 0.,-1.]])
# Mesh connectivity array
mesh_elements = np.array([[0, 1, 4],
[0, 1, 5],
[1, 2, 4],
[1, 2, 5],
[2, 3, 4],
[2, 3, 5],
[3, 0, 4],
[3, 0, 5]])
mesh = UMCT.CreateMeshOfTriangles(mesh_nodes, mesh_elements)
```
#### Mesh objects
The method returns a *BasicTools* unstructured mesh object, a class dedicated to the manipulation of finite element mesh data. Printing this object provides information on the content of the mesh:
```
print(mesh)
```
We will now complete this mesh object with additional data, by defining fields and node/element tags:
```
# Here we recreate the Node tags of the reference dataset `test_mesh`
mesh.nodesTags.CreateTag('Z0_plane', False).SetIds([0,1,2,3])
mesh.nodesTags.CreateTag('out_of_plane', False).SetIds([4,5])
# Here we recreate the Element tags of the reference dataset `test_mesh`
mesh.GetElementsOfType('tri3').GetTag('Top').SetIds([0,2,4,6])
mesh.GetElementsOfType('tri3').GetTag('Bottom').SetIds([1,3,5,7])
# Here we add mesh node fields
mesh.nodeFields['Test_field1'] = np.array([0., 0., 0., 0., 1., 0.])
mesh.nodeFields['Test_field2'] = np.array([0., 0., 0., 0., 0., 1.])
# Here we add mesh element fields
mesh.elemFields['Test_field3'] = np.array([0., 1., 2., 3., 4., 5., 6., 7.])
mesh.elemFields['Test_field4'] = np.array([1., 1., -1., -1., 1., 1., -1., -1.])
print(mesh)
```
### Creating a Mesh Group from a mesh object
Our mesh object now contains all the data that is stored in the `test_mesh` Group of the reference dataset. We can now create a new *SampleData* instance to see how we can create a *Mesh Group* from this mesh object:
```
data = SD(filename='test_mesh_dataset', overwrite_hdf5=True, autodelete=True)
```
To add a *Mesh Group* to this dataset, use the `add_mesh` method. Its use is very straightforward: you just have to provide the name, indexname and location of the *Mesh Group* you want to create, and the mesh object from which the data must be loaded. Like all the *data item* creation methods, it also has a `replace` argument (see [tutorial 2](./2_SampleData_basic_data_items.ipynb)).
This method has one specific argument, `bin_fields_from_sets`, that allows you to control whether you want to simply store the Node/Element Tags as data arrays (in this case, set it to `False`), or to also store a visualization field for each Node/Element Tag in the mesh object.
Let us create our mesh group, with visualization fields:
```
data.add_mesh(mesh_object=mesh, meshname='test_mesh', indexname='mesh', location='/', bin_fields_from_sets=True)
print(data)
```
That is it! The *Mesh Group* has been created and all the data has been stored in the *SampleData* dataset. Let us retrieve some information to check the loaded data:
```
data.print_node_info('mesh_Geometry')
print('Mesh elements: ',data.get_mesh_elem_types_and_number('mesh'))
print('Mesh node tags: ', data.get_mesh_node_tags_names('mesh'))
print('Mesh element tags:',data.get_mesh_elem_tags_names('mesh'))
print('Element Tag "Bottom" connectivity: \n',data.get_mesh_elem_tag_connectivity('mesh', 'Bottom'))
```
### Creating meshes from files
In practice, creating relevant meshes for mechanical or material science applications through the *BasicTools* mesh creation tools is very limited. The most relevant use case consists of loading meshes from mesh files created by finite element or meshing software. Through *BasicTools*, the *SampleData* class allows to directly load *Mesh Groups* from various mesh file formats, and conversely, to write mesh files in various formats from *Mesh Groups*.
#### Load mesh files
We will use a mesh file used for *Pymicro* unit tests to illustrate the mesh file loading functionality. This file is a *.geof* file, the mesh file format used by the **Zset** finite element software.
To load a mesh file into a *SampleData* mesh group, use the `add_mesh` method, but instead of providing a *mesh object*, use the `file` argument and provide the mesh file name:
```
meshfile_name = os.path.join(PYMICRO_EXAMPLES_DATA_DIR, 'cube_ref.geof')
print(meshfile_name)
data.add_mesh(file=meshfile_name, meshname='loaded_geof_mesh', indexname='mesh_geof', location='/', bin_fields_from_sets=True)
print(data)
```
The mesh file has been loaded as a *Mesh Group* in the dataset. Its data can be explored, and visualized, as presented above:
```
data.print_node_info('mesh_geof')
print('Mesh elements: ',data.get_mesh_elem_types_and_number('mesh_geof'))
print('Mesh node tags: ', data.get_mesh_node_tags_names('mesh_geof'))
print('Mesh element tags:',data.get_mesh_elem_tags_names('mesh_geof'))
```
As you can see, this mesh has a mixed topology, with bulk (linear tetrahedra, `tet4`) and boundary (linear triangles, `tri3`) elements. This is the mesh of a cube, and it contains element tags defining its faces (`X0`, `X1`, `Y0` ...). As a training exercise, you may try to explore and visualize all the data contained in this mesh group!
This method should in principle work with the following file extensions:
`.geof` `.inp`, `.asc`, `.ansys`, `.geo`, `.msh`, `.mesh`, `.meshb`, `.sol`, `.solb`, `.gcode`, `.fem`, `.stl`, `.xdmf`, `.pxdmf`, `.PIPE`, `.odb`, `.ut`, `.utp`, `.vtu`, `.dat`, `.datt`
<div class="alert alert-info">
**Note: MeshIO Bridge**
If you use the *meshio* package to handle meshes, you can interface it with *SampleData* thanks to the *BasicTools* package. For that, you can use the `MeshToMeshIO` and `MeshIOToMesh` methods in the `BasicTools.Containers.MeshIOBridge` module of the *Basictools* package. These methods allow you to transform a *Basictools* mesh object into a *MeshIO* object. Hence, you can load a *meshio* mesh into a *SampleData* dataset by transforming it into a *Basictools* mesh object, and feeding it to the `add_mesh` method.
</div>
## IV - Mesh Field data model
In this new section, we will now focus on the definition of **mesh fields** data items. They are very similar to *image fields* data items, presented in the [previous tutorial, section IV](./3_SampleData_Image_groups.ipynb). They are data arrays enriched with metadata describing how they relate to the mesh they are associated to. Like *image fields*, these data arrays must follow some shape, ordering and indexing conventions to be added as mesh fields.
### SampleData Mesh fields conventions
#### Possible shapes and field types
In the following, $N_n$, $N_e$ and $N_{ip}$ represent respectively the number of nodes, elements and integration points of the mesh. Mesh fields can be added from *NumPy* arrays with the following shapes:
1. $(N_n, N_c)$ or $(N_n)$ (equivalent to $N_c=1$). In this case, the field is a **nodal field**, whose values are defined at the mesh nodes
1. $(N_e, N_c)$ or $(N_e)$ (equivalent to $N_c=1$). In this case, the field is an **element field**, whose values are defined on each mesh element (constant per element)
1. $(N_{ip},N_c)$ or $(N_{ip})$ (equivalent to $N_c=1$). In this case, the field is an **integration point field**, whose values are defined at the mesh integration points. In practice, $N_{ip}$ can be any multiple of $N_e$, $N_{ip} = k*N_e$
**If an array that does not have a compatible shape is added as a mesh field, the class will raise an exception.**
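The shape rules above can be sketched as a small classification helper (hypothetical, not part of the SampleData API), using the octahedron mesh dimensions ($N_n=6$, $N_e=8$):

```
import numpy as np

# Classify a candidate field array by its first dimension, following the
# shape conventions above (illustrative helper, not the SampleData API)
def field_type(shape, n_nodes, n_elements):
    n_values = shape[0]
    if n_values == n_nodes:
        return 'nodal field'
    if n_values == n_elements:
        return 'element field'
    if n_values % n_elements == 0:
        return 'integration point field'  # N_ip = k * N_e
    raise ValueError(f'Incompatible field shape {shape}')

# Octahedron mesh: 6 nodes, 8 elements
print(field_type(np.zeros(6).shape, 6, 8))       # nodal field
print(field_type(np.zeros((8, 3)).shape, 6, 8))  # element field (3 components)
print(field_type(np.zeros(32).shape, 6, 8))      # integration point field (4 ip/element)
```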
#### Field padding
Finite element software may output fields only on bulk elements or boundary elements (variable flux, pressure field...) depending on the nature of the field. If such a field must be loaded into a *SampleData* *Mesh Group* that has a mixed topology, with bulk and boundary elements, the *element* or *integration point* field array output by the software will not have a number of values compliant with the convention presented in the subsection above.
The *SampleData* class handles this particular issue automatically. **Element and integration point fields** may be added from field data defined only on *bulk* or *boundary* elements. The class will pad them with zero values for the elements over which they are not defined, to get the proper array size.
In practice, to add *element fields* / *integration point fields*, the input array can have the shape $(N_e, N_c)$ or $(N_e)$ / $(N_{ip}, N_c)$ or $(N_{ip})$, with $N_{ip} = k*N_e$ and $N_e$ being either the total number of elements in the mesh, the number of bulk elements in the mesh, or the number of boundary elements in the mesh.
#### Integration points ordering convention and visualization
For each component of the field, integration point field array values are ordered by increasing integration point index first, and then by increasing element index, as follows:
$Array[:,c] = [A_{ip1/e1}, A_{ip2/e1}, ..., A_{ip1/e2}, ...,A_{ipM/eN} ]$
where $A_{ipi/ej}$ denotes the value of the array $A$ for the integration point $i$ of the element $j$, and $M$ and $N$ are respectively the number of integration points per element and the number of elements in the mesh.
**The XDMF format does not allow to specify integration point values for grids; hence, these values cannot yet be directly visualized with Paraview from a SampleData dataset.** However, when adding an integration point field to a dataset, the user may choose to associate to it a visualization field that is an *element field* (1 value per element), equal to the minimum, maximum or mean of the field values over each element's integration points (see next section).
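Such an element-wise visualization field can be sketched with *NumPy*, using the ordering convention above. This mimics the idea, not the actual SampleData implementation:

```
import numpy as np

# Integration point field values are grouped by element, with the
# integration point index varying fastest (convention above)
n_elements, ip_per_element = 8, 4
ip_field = np.arange(n_elements * ip_per_element, dtype=float)  # dummy ip values

# Reshape to (N_e, M): row j holds the M integration point values of element j,
# then reduce each row to one value per element (here: the mean)
element_mean = ip_field.reshape(n_elements, ip_per_element).mean(axis=1)
print(element_mean)  # one value per element
```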
#### Field dimensionality
The size of the last dimension of the array determines the **number of field components** $N_c$, which defines the **field dimensionality**. If the array is 1D, then $N_c=1$.
*SampleData* will interpret the field dimensionality (scalar, vector, or tensor) depending on the grid topology and the array shape. All possibilities are listed hereafter:
* $N_c=1$: **scalar field**
* $N_c=3$: **vector field**
* $N_c=6$: **symmetric tensor field** (*2nd order tensor*)
* $N_c=9$: **tensor field** (*2nd order tensor*)
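A sketch of how this shape-based interpretation could work (this is an illustration of the convention, not the class's actual code):

```python
import numpy as np

def field_dimensionality(array):
    """Infer a field's dimensionality from its array shape,
    following the SampleData convention listed above."""
    n_c = 1 if array.ndim == 1 else array.shape[-1]
    return {1: 'scalar', 3: 'vector', 6: 'sym_tensor', 9: 'tensor'}.get(n_c, 'unknown')

print(field_dimensionality(np.zeros(100)))        # scalar
print(field_dimensionality(np.zeros((100, 3))))   # vector
print(field_dimensionality(np.zeros((100, 6))))   # sym_tensor
```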
The dimensionality of the field has the following implications:
* **XDMF**: the dimensionality of the field is one of the metadata stored in the XDMF file, for Fields (`Attribute` nodes)
* **visualization**: as the dimensionality is stored in the XDMF file, Paraview can correctly interpret the fields. It allows plotting each field component separately, as well as the field norm (magnitude). It also allows using the Paraview *Glyph* filter to plot vector fields with arrows
* **indexing**: the order of the values in the last dimension for non-scalar fields corresponds to a specific ordering of the field components, following conventions detailed in the next subsection.
* **compression**: Specific lossy compression options exist for fields. See dedicated tutorial.
* **interface with external tools**: when interfacing SampleData with external software, such as numerical simulation tools, it is very practical to have fields with the appropriate dimensionality accounted for (for instance, to use a displacement or temperature gradient vector field stored in a *SampleData* dataset as input for a finite element simulation).
#### Field components indexing
To detail the conventions, we will omit the spatial indexes $i,j$ or $k$ and only consider the last dimension of the field: $F[i,c] = F[c]$. The indexing conventions describing the field component order in data arrays are:
* For **vector fields** (3 components), the convention is $[F_0,F_1,F_2] = [F_x,F_y,F_z]$
* For **symmetric tensor fields** (2nd order, 6 components), the convention is
$[F_0,F_1,F_2,F_3,F_4,F_5] = [F_{xx},F_{yy},F_{zz},F_{xy},F_{yz},F_{zx}]$
* For **tensor fields** (2nd order, 9 components), the convention is
$[F_0,F_1,F_2,F_3,F_4,F_5,F_6,F_7,F_8] = [F_{xx},F_{yy},F_{zz},F_{xy},F_{yz},F_{zx},F_{yx},F_{zy},F_{xz}]$
<div class="alert alert-warning">
**Warning**
Field components (e.g. $x,y,xx,zy$...) are assumed to be defined in the same frame as the grid. However, this cannot be ensured simply by providing a data array, so the user must ensure that the field frame and the grid frame coincide. This is not mandatory, but not respecting this implicit convention may lead to confusion and geometrical misinterpretation of the data by other users or software. If you deviate from it, a good practice is to add attributes to the field that specify and explain the nature of the field components.
</div>
#### Components transposition
**Paraview's component order convention differs from the *SampleData* convention for tensors.** The convention in Paraview is:
* $[F_0,F_1,F_2,F_3,F_4,F_5] = [F_{xx},F_{xy},F_{xz},F_{yy},F_{yz},F_{zz}]$ for symmetric tensors
* $[F_0,F_1,F_2,F_3,F_4,F_5,F_6,F_7,F_8] = [F_{xx},F_{xy},F_{xz},F_{yx},F_{yy},F_{yz},F_{zx},F_{zy},F_{zz}]$ for non-symmetric tensors
For this reason, fields are stored with component transposition in *SampleData* datasets, so that their visualization with Paraview yields a rendering consistent with the ordering convention presented in the previous subsection. These transpositions are handled automatically, as is the back transposition when a field is retrieved, as you will see in the last section of this tutorial.
**An attribute `transpose_components` is added to field data items. Its value represents the order in which the components of the original array are stored in memory** (see examples below).
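The index mapping between the two conventions can be illustrated with a small numpy sketch for symmetric tensors. Reading Paraview's slots against the *SampleData* order gives the permutation `[0, 3, 5, 1, 4, 2]` in this sketch; the actual `transpose_components` value stored by the class should be checked on your own datasets:

```python
import numpy as np

# SampleData symmetric-tensor order: [xx, yy, zz, xy, yz, zx]
# Paraview symmetric-tensor order:   [xx, xy, xz, yy, yz, zz]
# (xz and zx coincide for a symmetric tensor)
transpose_components = [0, 3, 5, 1, 4, 2]

field = np.random.rand(10, 6)                       # field in SampleData order
stored = field[:, transpose_components]             # reordered for Paraview rendering
back = stored[:, np.argsort(transpose_components)]  # back transposition on retrieval
assert np.array_equal(back, field)
```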
<div class="alert alert-info">
**Note**
When visualizing a field in Paraview, you may choose which component (including the field magnitude) to plot in the box (*highlighted in red in the image below*) located between the box for the choice of the visualization mode (*Surface* in the image below) and the box for the data item choice (*tensor_field2D* in the image below).
In this box, you will have to choose between 9 components (0 to 8), even for symmetric tensors. In the latter case, the pairs of equal components (1 and 3: $xy$ & $yx$; 2 and 6: $xz$ & $zx$; 5 and 7: $yz$ & $zy$) will yield the same visualization.
<img src="./Images/Tutorial_3/Paraview_components.png" width="100%">
</div>
#### Field attributes
Let us look more closely at a *Mesh Group Field* data item, from the test mesh that we created earlier:
```
data.print_node_info('mesh_Test_field1')
data.print_node_info('mesh_Test_field3')
```
As you can see, *Field data item* attributes provide information on the field dimensionality, type and padding. Here, for instance, the two fields `Test_field1` and `Test_field3` are scalar fields. `Test_field3` is an *element field* that has been input with an array containing one value per bulk element of the mesh.
The `parent_grid_path` and `xdmf_gridname` attributes provide the path of the *Mesh Group* and the name of the XDMF Grid Node to which this field belongs.
The `xdmf_fieldname` provides the name of the XDMF Attribute Node associated to the field.
The `padding` attribute indicates over which elements (bulk or boundary) the field values are defined; its role is illustrated in Section V below.
## V - Adding Fields to Mesh Groups
Like for *Image Group Fields* (see [previous tutorial, section V](./3_SampleData_Image_groups.ipynb)), *Mesh Group Fields* can be created in a *SampleData* dataset using the `add_field` method.
To add a field to a grid from a *numpy* array, you just have to call `add_field` like you would have called `add_data_array`, with an additional argument `gridname`, that indicates the mesh group to which you want to add the field.
The analysis of the field dimensionality, nature, padding and required transpositions is automatically handled by the class.
If the added field is an *integration point field*, the value of the `visualisation_type` argument will control the creation of an associated visualization field. Its possible values are:
* `visualization_type='Elt_mean'`: a visualization field is created by taking the mean of integration point values in each element
* `visualization_type='Elt_max'`: a visualization field is created by taking the maximum of integration point values in each element
* `visualization_type='Elt_min'`: a visualization field is created by taking the minimum of integration point values in each element
* `visualization_type='None'`: no visualization field is created
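Assuming the integration point ordering convention presented earlier (and hypothetical sizes), the three element-wise visualization fields could be computed with a sketch like this:

```python
import numpy as np

M, n_elts, n_comp = 4, 6, 6   # integration points per element, elements, components
ip_field = np.random.rand(M * n_elts, n_comp)

# With the ordering convention above, grouping rows by element is a reshape:
per_elt = ip_field.reshape(n_elts, M, n_comp)

# Sketches of the three visualization fields (one value per element and component):
elt_mean = per_elt.mean(axis=1)   # 'Elt_mean'
elt_max = per_elt.max(axis=1)     # 'Elt_max'
elt_min = per_elt.min(axis=1)     # 'Elt_min'
print(elt_mean.shape)  # (6, 6)
```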
The `add_field` method has already been presented in the last [tutorial](./3_SampleData_Image_groups.ipynb). Please refer to it for examples of how to use this method, in particular to create Field time series. The rest of this section is dedicated to a few examples of *Field data item* creation specific to mesh groups.
The fields will be created on the mesh `mesh_geof`. Let us print again information on this mesh topology:
```
data.print_node_info('mesh_geof')
```
### Example 1: Creating a vector node field
Node vector fields are very common data in materials science and mechanics. They can be, for instance, a displacement or velocity field computed by a simulation solver or measured experimentally, or a magnetic field.
To create a node vector field, we need a field data array that has one value for each node in the mesh. As we can see above, the *Mesh Group* has a `number_of_nodes` attribute that provides us with this information ($N_n$). To create a vector field, each value of the field must be a 1D vector of 3 values.
We have to create a data array of shape $(N_n, 3)$. Let us do it by creating a random array:
```
# random array creation
array = np.random.rand(data.get_attribute('number_of_nodes','mesh_geof'),3) - 0.5
# creation of the mesh field
data.add_field(gridname='mesh_geof', fieldname='nodal_vectorF', array=array, indexname='vectF', replace=True)
# Printing information on our created field:
print(f'Is "vectF" in the field list of the mesh ? {"vectF" in data.get_grid_field_list("mesh_geof")}')
data.print_node_info('vectF')
```
We can verify from the field attributes that it is indeed a vector field, a nodal field (hence with no padding), defined on the `loaded_geof_mesh` *Mesh Group*.
You can also visualize the field with Paraview:
```
# Use the second code line if you want to specify the path of the paraview executable you want to use
# otherwise use the first line
#data.pause_for_visualization(Paraview=True)
#data.pause_for_visualization(Paraview=True, Paraview_path='path to paraview executable')
```
Open the XDMF dataset with the **XdmfReader** of Paraview, and then, in the *Property* panel, tick only the box associated to the `loaded_geof_mesh` block. Then you can use the **Glyph** filter, which allows you to plot vector fields with arrows. You should be able to get a visualization looking like the image below:
<img src="./Images/Tutorial_4/Nodal_vector_field.png" width="100%">
### Example 2: Creating a scalar element field for boundary elements
Scalar fields defined on boundary can correspond for instance to pressure fields applied on a body, or a surface temperature measurement. They are also a common type of data for material mechanics datasets.
To add one, we need to create a 1D array (as the field is scalar) that has one value per boundary element in the mesh. in the present case, the mesh has as many bulk as boundary elements. To solve the ambiguity, the add field method has a `bulk_padding` argument. If it is `True` (default value), the field is padded as a bulk element field. If it is set to `False`, the field is padded as a boundary element field.
```
# Random array creation. Warning ! 'Number_of_boundary_elements' attribute is a list !
array = np.random.rand(data.get_attribute('Number_of_boundary_elements','mesh_geof')[0])
# creation of the mesh field
data.add_field(gridname='mesh_geof', fieldname='boundary_scalarF', array=array, indexname='boundaryF', replace=True,
bulk_padding=False)
# Printing information on our created field:
print(f'Is "boundaryF" in the field list of the mesh ? {"boundaryF" in data.get_grid_field_list("mesh_geof")}')
data.print_node_info('boundaryF')
```
We can verify the field attributes to check that it is indeed a scalar element field, padded as a boundary element field, that is defined on the `loaded_geof_mesh` *Mesh Group*.
Once again, you can visualize your field with the Paraview software:
Open the XDMF dataset with the **XdmfReader** of Paraview, and then, in the *Property* panel, tick only the box associated to the `loaded_geof_mesh` block. Then you can use the **Clip** filter, which allows you to cut your grid with a customizable plane and hence plot data within the bulk of your mesh, as in the right picture below:
<img src="./Images/Tutorial_4/Field_scalar_boundary.png" width="100%">
As you can see, the visual rendering of the field displays the random values of the field on the boundary triangle elements of the external surface of the mesh, and 0 values within the bulk of the mesh: *SampleData* has automatically padded the field array with 0 at positions corresponding to bulk elements to ensure compatibility with the XDMF format, and hence visualization.
### Example 3: Creating a time series of a tensor integration point field for bulk elements
In this last example, we will add a field representing a uniaxial stress state, proportional to time, to the mesh. We need to create a symmetric tensor field with 6 components. We will suppose that this field comes from a finite element solver output that provided the full stress tensor at each integration point of the mesh bulk elements, that the mesh elements have 4 integration points each, and that the values have been output at three instants.
Hence, we have to create 3 arrays of shape $(4N_{be},6)$, where $N_{be}$ is the number of bulk elements in the mesh.
```
# Arrays creation
array_T0 = np.zeros((4*data.get_attribute('Number_of_bulk_elements','mesh_geof')[0],6))
array_T1 = np.zeros((4*data.get_attribute('Number_of_bulk_elements','mesh_geof')[0],6))
array_T2 = np.zeros((4*data.get_attribute('Number_of_bulk_elements','mesh_geof')[0],6))
# time values:
T0 = 0.
T1 = 1.
T2 = 10.
# Filling stress state values
# at second time increment --> stress = 100*t in direction Z/3 (see indexing conventions)
array_T1[:,2] = 100*T1
# at third time increment --> stress = 100*t in direction Z/3 (see indexing conventions)
array_T2[:,2] = 100*T2
# create fields
data.add_field(gridname='mesh_geof', fieldname='tensor_ipF', array=array_T0, indexname='tensorF', replace=True,
time=T0)
data.add_field(gridname='mesh_geof', fieldname='tensor_ipF', array=array_T1, indexname='tensorF', replace=True,
time=T1)
data.add_field(gridname='mesh_geof', fieldname='tensor_ipF', array=array_T2, indexname='tensorF', replace=True,
time=T2)
print(data.get_grid_field_list('mesh_geof'))
data.print_node_info('tensorF_T0')
data.print_node_info('tensorF_T2')
```
You can see that the fields have been automatically added as symmetric tensor integration point fields (`Tensor6` `IP_field`). You can also observe the appearance of the `transpose_components` attribute, which was already introduced in the [last tutorial](./3_SampleData_Image_groups.ipynb).
You can also observe that the fields have `visualisation_field_path` and `visualisation_type` attributes. They indicate, respectively, the path of the visualization field data item associated to the added field, and the type of operation used to create it (here `Elt_mean`: taking the mean of the integration point values within each element to create an element-wise constant field).
You can try to modify the calls to `add_field` in the previous cells by adding the `visualisation_type` argument, and setting its value to `Elt_max`, or `None`, and observe the differences in the field data item attributes, and the field visualization.
Let us now look at the XDMF dataset:
```
data.print_xdmf()
```
As expected, the *Attributes* of our grid time collection `loaded_geof_mesh` that contain the tensor field do not point to the added data arrays, but to the visualization field data arrays: `tensor_ipF_T0_Elt_mean`, `tensor_ipF_T1_Elt_mean` or `tensor_ipF_T2_Elt_mean`.
## VI - Getting Mesh Groups and Mesh Fields
### Getting mesh objects from Mesh groups
Like *Image groups*, you can easily retrieve a complete *SampleData Mesh Group* under the form of a *BasicTools* Mesh object, with the `get_mesh` method:
```
mesh = data.get_mesh('mesh_geof', with_fields=True, with_tags=True)
print(mesh)
```
This method has two arguments that allow you to control how rich the output is. By setting `with_fields` to `False`, the returned mesh object will not contain the field arrays stored on the *Mesh Group*. By setting `with_tags` to `False`, the returned mesh object will not contain the Node and Element tags stored in the *Mesh Group*. If both are `False`, the mesh object will only contain the mesh nodes and elements.
### Getting mesh Fields
As for image fields, retrieving mesh fields from a *SampleData* dataset is done with the `get_field` method. All padding, transpositions, etc. are automatically reverted to return the originally input data array.
In the case of mesh fields, you can prevent the reversion of field padding with the `unpad_field` argument (set it to `False`). You can also choose, when getting an integration point field, to get the complete data array, or on the contrary, only the associated visualization field data array, by setting the `get_visualisation_field` argument to `True`.
Let us show these various uses of the method by retrieving the tensor field created in the third example above.
```
# getting the inputed array --> no options
O_array = data.get_field('tensorF_T1')
# getting the visualization array
V_array = data.get_field('tensorF_T1', get_visualisation_field=True)
# getting the padded visualization array
UPV_array = data.get_field('tensorF_T1', unpad_field=False, get_visualisation_field=True)
print(f'Original array shape is {O_array.shape}, visualization array shape is {V_array.shape},'
      f' padded visualization array shape is {UPV_array.shape}')
print(f'\nIs it the original inputed array ? {np.all(O_array == array_T1)} \n')
```
As you can see, using `get_field` allows you to retrieve exactly the original array that you input, reverting all transformations ($1536 = 4*384$ here). Using the `get_visualisation_field` argument, you retrieve the unpadded visualization field, with values only for the mesh bulk elements, as input ($384$ values here); and adding the `unpad_field=False` argument, you get the field with the added zeros, which has one value per element in the mesh ($768$ values here).
****
**This is the end of this tutorial on SampleData Mesh Groups ! We can now close our dataset to conclude it.**
```
del data
```
<div style="text-align: right">Dino Konstantopoulos, 3 June 2021</div>
# Introducing sentence transformers
**sentence-transformers** is a Python package that has been specifically optimized for semantic textual similarity searches. The model creates a 1024-dimensional embedding for each sentence, and the similarity between two sentences can then be calculated as the cosine similarity between the corresponding two vectors.
A cosine similarity of 1 means the questions are identical (the angle is 0), and a cosine similarity of -1 means the questions are very different.
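As a quick illustration of the metric, in plain numpy and with made-up vectors:

```python
import numpy as np

def cosine_similarity(u, v):
    # cos(theta) = u . v / (|u| |v|)
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.1, 1.9, 3.2])   # hypothetical "similar" embedding
print(round(cosine_similarity(a, a), 4))   # 1.0  (identical direction)
print(round(cosine_similarity(a, -a), 4))  # -1.0 (opposite direction)
print(round(cosine_similarity(a, b), 4))   # close to 1 for similar vectors
```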
# ARC Classification dataset
The [ARC question classification dataset](https://allenai.org/data/arc-classification) is a dataset of 1700 questions that went offline last week, but I found it on Amazon.
We can use it as our testing ground to experiment with the affinity of our sentence embeddings.
**Approach 1**: The transformer model outputs a 1024-dimensional vector for each token in our sentence. Then, we can mean-pool the vectors to generate a single sentence-level vector.
**Approach 2**: We can also calculate the cosine distance between each token in our query and each token in the sentence-to-compare-with, and then mean-pool the cosine scores. Calculating the cosine similarity between all token embeddings lets us see the contribution of each token towards the final similarity score and explains what the model is doing.
>**Research Question**: Should we take the mean of all token embeddings ***prior*** to calculating cosine similarity between different sentence embeddings? Or should we see how each token embedding from the query is aligned against token embeddings in potentially matching questions? What is the best approach for our **belief models**?
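To make the contrast concrete, here is a small numpy sketch of both approaches on made-up token embeddings (the shapes and values are hypothetical, not taken from the model):

```python
import numpy as np

def l2_normalize(x):
    # divide each row (or the single vector) by its Euclidean norm
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Toy token embeddings for two short "sentences":
query_tokens = np.random.rand(5, 1024)   # 5 tokens
match_tokens = np.random.rand(7, 1024)   # 7 tokens

# Approach 1: mean-pool tokens first, then one sentence-level cosine similarity
score_1 = float(l2_normalize(query_tokens.mean(axis=0))
                @ l2_normalize(match_tokens.mean(axis=0)))

# Approach 2: cosine between every token pair, then mean-pool the pairwise scores
token_scores = l2_normalize(query_tokens) @ l2_normalize(match_tokens).T  # (5, 7)
score_2 = float(token_scores.mean())

print(score_1, score_2)   # the two approaches generally give different scores
```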
# Install libraries required
# Experiment
Let's pretend that the first question in our dataset is our original query, and try to find the closest matching entry from the rest of the questions, and contrast our approaches.
# Download ARC dataset
```
!wget https://s3-us-west-2.amazonaws.com/ai2-website/data/ARC-V1-Feb2018.zip
from zipfile import ZipFile
with ZipFile('ARC-V1-Feb2018.zip', "r") as zip_obj:
zip_obj.extractall("data")
```
# Import dataset into Pandas
```
import pandas as pd
import numpy as np
df = pd.read_csv("./data/ARC-V1-Feb2018-2/ARC-Easy/ARC-Easy-Train.csv")
```
# Load transformer model
```
from tqdm import tqdm
from transformers import AutoTokenizer, AutoModel
from itertools import zip_longest
import torch
def grouper(iterable, n, fillvalue=None):
"""Taken from: https://docs.python.org/3/library/itertools.html#itertools-recipes"""
args = [iter(iterable)] * n
return zip_longest(*args, fillvalue=fillvalue)
def mean_pooling(model_output, attention_mask):
"""
Mean pooling to get sentence embeddings. See:
https://huggingface.co/sentence-transformers/paraphrase-distilroberta-base-v1
"""
token_embeddings = model_output[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1) # Sum columns
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
# Sentences to embed
df = df[df.question.str.contains('\?')]
df.question = [s.split('?')[0] + '?' for s in df.question]
# Fetch the model & tokenizer from transformers library
model_name = 'sentence-transformers/stsb-roberta-large'
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
# Create sentence embeddings
```
sentence_embeddings = []
token_embeddings = []
# Embed 8 sentences at a time
for sentences in tqdm(grouper(df.question.tolist(), 8, None)):
# Ignore sentences with None
valid_sentences = [s for s in sentences if s]
# Tokenize input
encoded_input = tokenizer(valid_sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
# Create word embeddings
model_output = model(**encoded_input)
# For each sentence, store a list of token embeddings; i.e. a 1024-dimensional vector for each token
for i, sentence in enumerate(valid_sentences):
tokens = tokenizer.convert_ids_to_tokens(encoded_input['input_ids'][i])
embeddings = model_output[0][i]
token_embeddings.append(
[{"token": token, "embedding": embedding.detach().numpy()} for token, embedding in zip(tokens, embeddings)]
)
# Pool to get sentence embeddings; i.e. generate one 1024 vector for the entire sentence
sentence_embeddings.append(
mean_pooling(model_output, encoded_input['attention_mask']).detach().numpy()
)
# Concatenate all of the embeddings into one numpy array of shape (n_sentences, 1024)
sentence_embeddings = np.concatenate(sentence_embeddings)
```
# Perform Search & Show Search Context
```
from IPython.core.display import display, HTML
from sklearn.preprocessing import normalize
import matplotlib.ticker as ticker
import matplotlib.pyplot as plt
# Normalize the data
norm_data = normalize(sentence_embeddings, norm='l2')
# Set QUERY & BEST MATCH IDs
QUERY_ID = 0
scores = np.dot(norm_data, norm_data[QUERY_ID].T)
MATCH_ID = np.argsort(scores)[-2]
def get_token_embeddings(embeddings_word):
"""Returns a list of tokens and list of embeddings"""
tokens, embeddings = [], []
for word in embeddings_word:
if word['token'] not in ['<s>', '<pad>', '</pad>', '</s>']:
tokens.append(word['token'].replace('Ġ', ''))
embeddings.append(word['embedding'])
return tokens, normalize(embeddings, norm='l2')
# Get tokens & token embeddings
query_tokens, query_token_embeddings = get_token_embeddings(token_embeddings[QUERY_ID])
match_tokens, match_token_embeddings = get_token_embeddings(token_embeddings[MATCH_ID])
# Calculate cosine similarity between all tokens in query and match sentences
attention = (query_token_embeddings @ match_token_embeddings.T)
def plot_attention(src, trg, attention):
"""Plot 2D plot of cosine similarities"""
fig = plt.figure(dpi=150)
ax = fig.add_subplot(111)
cax = ax.matshow(attention, interpolation='nearest')
clb = fig.colorbar(cax)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xticklabels([''] + src, rotation=90)
ax.set_yticklabels([''] + trg)
plot_attention(match_tokens, query_tokens, attention)
attention.shape
```
# How to run
Since I have trouble loading conda environments on my Jupyter notebook, I run the code in a python file on the command line.
# To think about
Our first experiments should be to see which of the two approaches outlined herein produce best results with the ARC dataset.
Also for next week, think about how we can combine LDA with transformer sentence embeddings.
# Using `SentenceTransformer`
```
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('bert-base-nli-mean-tokens')
sentences = ['This framework generates embeddings for each input sentence',
'A package that maps sentences into embeddings.',
'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
# Loading `stsb-roberta-large`
Which I found [here](https://huggingface.co/models).
This takes 5 hours to run!
```
from sentence_transformers import SentenceTransformer
#model = SentenceTransformer('bert-base-nli-mean-tokens')
model = SentenceTransformer('stsb-roberta-large')
```
# Basic CNN based digit recognizer
In this tutorial we shall go through a Bangla digit recognizer model in detail. Our model is based on a convolutional neural network (CNN). The focus is to get familiar with the components of a Bangla digit recognizer framework. There are three steps in building this digit recognizer, <br>
**Step 1 : Process the data.<br>
Step 2 : Design the model.<br>
Step 3 : Train the model.**
```
# Importing necessary libraries
import numpy as np
import os
import glob
import cv2
import matplotlib.pyplot as plt
import pandas as pd
import pickle
from keras.utils import to_categorical
from keras.layers import Dense, Input, Conv2D, Flatten, MaxPool2D, Activation
from keras.models import Model
from keras.callbacks import ModelCheckpoint
from keras import backend as K
```
While writing the code, the files and folders were organized in the following way:
* Numta
* code
* data
* model
* Final_DB
The `code` folder contains this jupyter notebook, the processed images will be placed in the `data` folder, the trained model will be saved in the `model` folder, and the `Final_DB` folder has the raw image datasets.
## Step 1: Process the data
Our dataset comes from six different sources. For this tutorial we are using only dataset **A**.
```
#Declaring constants
FIG_WIDTH=16 # Width of figure
ROW_HEIGHT=3 # Height of each row when showing a figure which consists of multiple rows
RESIZE_DIM=28 # The images will be resized to 28x28 pixels
project_dir='..'
# We shall get all the filepaths by using the glob.glob() function
paths_train_a=glob.glob(os.path.join(project_dir,'Final_DB','training-a','*.png'))
paths_test_a=glob.glob(os.path.join(project_dir,'Final_DB','testing-a','*.png'))
path_label_train_a=os.path.join(project_dir,'Final_DB','training-a.csv')
path_label_test_a=os.path.join(project_dir,'Final_DB','testing-a.csv')
```
### Define some utility functions
We shall write some helper functions to process and visualize the images.
```
def get_key(path):
# seperates the key of an image from the filepath
key=path.split(sep=os.sep)[-1]
return key
def get_data(paths_img,path_label,resize_dim=None,rescale=True):
'''reads images from the filepaths, resizes them, and returns them in a numpy array
Args:
paths_img: image filepaths
path_label: image label filepath
Returns:
X: group of images
y: categorical true labels
'''
X=[] # initialize empty list for resized images
for i,path in enumerate(paths_img):
img=cv2.imread(path,cv2.IMREAD_GRAYSCALE) # read image, image size is 180x180
if resize_dim!=None:
img=cv2.resize(img,(resize_dim,resize_dim),interpolation=cv2.INTER_AREA) # resize image to 28x28
if rescale==True:
img=img/255
X.append(np.expand_dims(img,axis=2)) # expand image to 28x28x1 and append to the list.
# display progress
if i==len(paths_img)-1:
end='\n'
else: end='\r'
print('processed {}/{}'.format(i+1,len(paths_img)),end=end)
X=np.array(X) # transform list to numpy array
df = pd.read_csv(path_label) # read labels
df=df.set_index('filename')
y_label=[df.loc[get_key(path)]['digit'] for path in paths_img] # get the labels corresponding to the images
y=to_categorical(y_label,10) # transform integer value to categorical variable
return X, y
def imshow_group(X,y=None,y_pred=None,n_per_row=10,phase='processed'):
'''helper function to visualize a group of images along with their categorical true labels (y).
Args:
X: group of images
y: categorical true labels
y_pred: predicted class probabilities
n_per_row: number of images per row to be plotted
'''
n_sample=len(X)
img_dim=X.shape[1]
j=np.ceil(n_sample/n_per_row)
fig=plt.figure(figsize=(FIG_WIDTH,ROW_HEIGHT*j))
for i,img in enumerate(X):
plt.subplot(j,n_per_row,i+1)
img_sq=np.squeeze(img,axis=2)
plt.imshow(img_sq,cmap='gray')
if y is not None:
plt.title(np.argmax(y[i]))
if y_pred is not None:
top_n=3 # top 3 predictions with highest probabilities
ind_sorted=np.argsort(y_pred[i])[::-1]
h=img_dim+4
for k in range(top_n):
string='pred: {} ({:.0f}%)\n'.format(ind_sorted[k],y_pred[i,ind_sorted[k]]*100)
plt.text(img_dim/2, h, string, horizontalalignment='center',verticalalignment='center')
h+=4
plt.axis('off')
plt.show()
```
Next we are going to use the `get_data()` function to process all the images from dataset **A**
```
X_train_a,y_train_a=get_data(paths_train_a,path_label_train_a,resize_dim=RESIZE_DIM)
X_test_a,y_test_a=get_data(paths_test_a,path_label_test_a,resize_dim=RESIZE_DIM)
```
Let's see some samples of the processed data.
```
X_sample=X_train_a[:40]
y_sample=y_train_a[:40]
X_sample.shape
imshow_group(X=X_sample,y=y_sample)
```
Next, we are going to randomly choose 80% of the training data and use it to train our neural network. The remaining 20% images are going to be our validation data.
```
indices=list(range(len(X_train_a)))
np.random.shuffle(indices)
ind=int(len(indices)*0.80)
X_train=X_train_a[indices[:ind]] # train data
y_train=y_train_a[indices[:ind]]
X_val=X_train_a[indices[-(len(indices)-ind):]] # validation data
y_val=y_train_a[indices[-(len(indices)-ind):]]
```
## Step 2: Design the model
In this step we shall design our neural network model. We are going to build a small model based on the classic LeNet architecture. We shall use only three convolutional layers. Each convolution layer has rectified linear unit (ReLU) activation which is followed by a max pooling layer. The convolution layers are followed by two dense layers.
```
def get_model():
input_layer=Input(shape=(RESIZE_DIM,RESIZE_DIM,1))
x=Conv2D(filters=8,kernel_size=(5,5),padding='valid', activation='relu')(input_layer)
x=MaxPool2D(pool_size=(2,2),strides=2,padding='valid')(x)
x=Conv2D(filters=16,kernel_size=(3,3),padding='valid', activation='relu')(x)
x=MaxPool2D(pool_size=(2,2),strides=2,padding='valid')(x)
x=Conv2D(filters=32,kernel_size=(3,3),padding='valid', activation='relu')(x)
x=MaxPool2D(pool_size=(2,2),strides=2,padding='valid')(x)
x=Flatten()(x)
x=Dense(units=64)(x)
x=Dense(units=10)(x)
output_layer=Activation('softmax')(x)
model=Model(inputs=input_layer,outputs=output_layer)
model.compile(loss='categorical_crossentropy',metrics=['accuracy'],optimizer='adam')
return model
model=get_model()
model.summary()
```
## Step 3: Train the model
```
path_model=os.path.join(project_dir,'model','model_tutorial.h5') # save model at this location after each epoch
K.tensorflow_backend.clear_session() # destroys the current graph and builds a new one
model=get_model() # create the model
K.set_value(model.optimizer.lr,1e-3) # set the learning rate
# fit the model
h=model.fit(x=X_train,
y=y_train,
batch_size=512,
epochs=100,
verbose=1,
validation_data=(X_val,y_val),
shuffle=True,
callbacks=[
ModelCheckpoint(filepath=path_model),
]
)
```
After 100 epochs, training accuracy is 92% and validation accuracy is 90%.
Let's evaluate the model performance on the test set
```
model.evaluate(X_test_a,y_test_a)
```
The loss and accuracy is similar to the validation set.
## Result Analysis
Let's observe the images which are misclassified by our model.
```
predictions=model.predict(X_test_a) # get predictions for all the test data
# get the indice of the images which were incorrectly labeled
incorrect_ind=[]
for i,pred in enumerate(predictions):
if np.argmax(y_test_a[i])!=np.argmax(pred):
incorrect_ind.append(i)
# let's observe some samples of the incorrect data
X_inc=X_test_a[incorrect_ind[:40]]
y_inc=predictions[incorrect_ind[:40]]
y_true=y_test_a[incorrect_ind[:40]]
imshow_group(X=X_inc,y=y_true,y_pred=y_inc)
```
Our model often misclassifies '5' as '6' and '9' as '1', among other mistakes. Since the network used in this tutorial is shallow, architecturally simple, and not fine-tuned for this problem, its performance is not fully satisfactory. A deeper, state-of-the-art architecture should yield better results, which will be investigated in future notebooks.
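To go beyond eyeballing samples, the per-digit confusions can be tallied in a confusion matrix. A minimal NumPy-only sketch — the tiny `predictions` and `y_true_onehot` arrays below are illustrative stand-ins for the notebook's `predictions` and `y_test_a`:

```python
import numpy as np

# Hypothetical stand-ins for the notebook's arrays: in the notebook these
# would be `predictions` (softmax outputs) and the one-hot `y_test_a`.
predictions = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
y_true_onehot = np.array([[0, 1], [1, 0], [1, 0], [1, 0]])

y_pred = predictions.argmax(axis=1)
y_true = y_true_onehot.argmax(axis=1)

n_classes = y_true_onehot.shape[1]
conf = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(y_true, y_pred):
    conf[t, p] += 1  # rows: true class, columns: predicted class

print(conf)
```

Off-diagonal entries count specific confusions, so a large `conf[5, 6]` would quantify the '5'-as-'6' mistakes seen above.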
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU (this may not be needed on your computer)
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=1
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
print(gpu_devices)
tf.keras.backend.clear_session()
```
### load packages
```
from tfumap.umap import tfUMAP
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
import umap
import pandas as pd
```
### Load dataset
```
dataset = 'cifar10'
dims = (32,32,3)
n_components = 2
from tensorflow.keras.datasets import cifar10
# load dataset
(train_images, Y_train), (test_images, Y_test) = cifar10.load_data()
X_train = (train_images/255.).astype('float32')
X_test = (test_images/255.).astype('float32')
X_train = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
# subset a validation set
n_valid = 10000
X_valid = X_train[-n_valid:]
Y_valid = Y_train[-n_valid:]
X_train = X_train[:-n_valid]
Y_train = Y_train[:-n_valid]
# flatten X
X_train_flat = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test_flat = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
X_valid_flat= X_valid.reshape((len(X_valid), np.product(np.shape(X_valid)[1:])))
print(len(X_train), len(X_valid), len(X_test))
```
### define networks
```
from tfumap.vae import VAE, Sampling
n_components = 2
encoder_inputs = tf.keras.Input(shape=dims)
x = tf.keras.layers.Conv2D(
filters=32, kernel_size=3, strides=(2, 2), activation="relu"
)(encoder_inputs)
x = tf.keras.layers.Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation="relu"
)(x)
x = tf.keras.layers.Conv2D(
filters=128, kernel_size=3, strides=(2, 2), activation="relu"
)(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(units=512, activation="relu")(x)
z_mean = tf.keras.layers.Dense(n_components, name="z_mean")(x)
z_log_var = tf.keras.layers.Dense(n_components, name="z_log_var")(x)
z = Sampling()([z_mean, z_log_var])
encoder = tf.keras.Model(encoder_inputs, [z_mean, z_log_var, z], name="encoder")
encoder.summary()
latent_inputs = tf.keras.Input(shape=(n_components,))
x = tf.keras.layers.Dense(units=512, activation="relu")(latent_inputs)
x = tf.keras.layers.Dense(units=4 * 4 * 128, activation="relu")(x)
x = tf.keras.layers.Reshape(target_shape=(4, 4, 128))(x)
x = tf.keras.layers.Conv2DTranspose(
filters=128, kernel_size=3, strides=(2, 2), padding="SAME", activation="relu"
)(x)
x = tf.keras.layers.Conv2DTranspose(
filters=64, kernel_size=3, strides=(2, 2), padding="SAME", activation="relu"
)(x)
x = tf.keras.layers.Conv2DTranspose(
filters=32, kernel_size=3, strides=(2, 2), padding="SAME", activation="relu"
)(x)
decoder_outputs = tf.keras.layers.Conv2DTranspose(
filters=dims[2], kernel_size=3, strides=(1, 1), padding="SAME", activation="sigmoid"
)(x)
decoder = tf.keras.Model(latent_inputs, decoder_outputs, name="decoder")
decoder.summary()
```
### Create model and train
```
X_train.shape
X_train = X_train.reshape([len(X_train)]+ list(dims))
X_train.shape
vae = VAE(encoder, decoder)
vae.compile(optimizer=tf.keras.optimizers.Adam())
vae.fit(X_train, epochs=30, batch_size=128)
# z = embedder.fit_transform(X_train_flat)
z = vae.encoder.predict(X_train)[0]
```
### Plot model output
```
Y_train
fig, ax = plt.subplots( figsize=(8, 8))
sc = ax.scatter(
z[:, 0],
z[:, 1],
c=Y_train.astype(int)[:len(z)].flatten(),
cmap="tab10",
s=0.1,
alpha=0.5,
rasterized=True,
)
ax.axis('equal')
ax.set_title("VAE embedding", fontsize=20)
plt.colorbar(sc, ax=ax);
```
### View loss
```
from tfumap.umap import retrieve_tensors
import seaborn as sns
```
### Save output
```
from tfumap.paths import ensure_dir, MODEL_DIR
output_dir = MODEL_DIR/'projections'/ dataset / 'vae'
ensure_dir(output_dir)
#vae.save(output_dir)
vae.encoder.save(output_dir / 'encoder')
vae.decoder.save(output_dir / 'decoder')
#loss_df.to_pickle(output_dir / 'loss_df.pickle')
np.save(output_dir / 'z.npy', z)
```
### compute metrics
```
X_test.shape
z_test = encoder.predict(X_test.reshape((len(X_test), 32,32,3)))[0]
```
#### silhouette
```
from tfumap.silhouette import silhouette_score_block
ss, sil_samp = silhouette_score_block(z, Y_train, n_jobs = -1)
ss
ss_test, sil_samp_test = silhouette_score_block(z_test, Y_test, n_jobs = -1)
ss_test
fig, axs = plt.subplots(ncols = 2, figsize=(10, 5))
axs[0].scatter(z[:, 0], z[:, 1], s=0.1, alpha=0.5, c=sil_samp, cmap=plt.cm.viridis)
axs[1].scatter(z_test[:, 0], z_test[:, 1], s=1, alpha=0.5, c=sil_samp_test, cmap=plt.cm.viridis)
```
#### KNN
```
from sklearn.neighbors import KNeighborsClassifier
neigh5 = KNeighborsClassifier(n_neighbors=5)
neigh5.fit(z, Y_train)
score_5nn = neigh5.score(z_test, Y_test)
score_5nn
neigh1 = KNeighborsClassifier(n_neighbors=1)
neigh1.fit(z, Y_train)
score_1nn = neigh1.score(z_test, Y_test)
score_1nn
```
#### Trustworthiness
```
from sklearn.manifold import trustworthiness
tw = trustworthiness(X_train_flat[:10000], z[:10000])
tw_test = trustworthiness(X_test_flat[:10000], z_test[:10000])
tw, tw_test
```
### Save output metrics
```
from tfumap.paths import ensure_dir, MODEL_DIR, DATA_DIR
```
#### train
```
metrics_df = pd.DataFrame(
columns=[
"dataset",
"class_",
"dim",
"trustworthiness",
"silhouette_score",
"silhouette_samples",
]
)
metrics_df.loc[len(metrics_df)] = [dataset, 'vae', n_components, tw, ss, sil_samp]
metrics_df
save_loc = DATA_DIR / 'projection_metrics' / 'vae' / 'train' / str(n_components) / (dataset + '.pickle')
ensure_dir(save_loc)
metrics_df.to_pickle(save_loc)
```
#### test
```
metrics_df_test = pd.DataFrame(
columns=[
"dataset",
"class_",
"dim",
"trustworthiness",
"silhouette_score",
"silhouette_samples",
]
)
metrics_df_test.loc[len(metrics_df_test)] = [dataset, 'vae', n_components, tw_test, ss_test, sil_samp_test]
metrics_df_test
save_loc = DATA_DIR / 'projection_metrics' / 'vae' / 'test' / str(n_components) / (dataset + '.pickle')
ensure_dir(save_loc)
metrics_df_test.to_pickle(save_loc)
```
#### knn
```
nn_acc_df = pd.DataFrame(columns = ["method_","dimensions","dataset","1NN_acc","5NN_acc"])
nn_acc_df.loc[len(nn_acc_df)] = ['vae', n_components, dataset, score_1nn, score_5nn]
nn_acc_df
save_loc = DATA_DIR / 'knn_classifier' / 'vae' / 'train' / str(n_components) / (dataset + '.pickle')
ensure_dir(save_loc)
nn_acc_df.to_pickle(save_loc)
```
### Reconstruction
```
from sklearn.metrics import mean_squared_error, mean_absolute_error, median_absolute_error, r2_score
X_recon = vae.decoder.predict(vae.encoder.predict(X_test.reshape((len(X_test), 32, 32, 3)))[0])
X_real = X_test.reshape((len(X_test), 32, 32, 3))
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
reconstruction_acc_df = pd.DataFrame(
columns=["method_", "dimensions", "dataset", "MSE", "MAE", "MedAE", "R2"]
)
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['vae', 2, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
save_loc = DATA_DIR / 'reconstruction_acc' / 'vae' / str(n_components) / (dataset + '.pickle')
ensure_dir(save_loc)
reconstruction_acc_df.to_pickle(save_loc)
```
### Compute clustering quality
```
from sklearn.cluster import KMeans
from sklearn.metrics import homogeneity_completeness_v_measure
def get_cluster_metrics(row, n_init=5):
# load cluster information
save_loc = DATA_DIR / 'clustering_metric_df'/ ('_'.join([row.class_, str(row.dim), row.dataset]) + '.pickle')
print(save_loc)
if save_loc.exists() and save_loc.is_file():
cluster_df = pd.read_pickle(save_loc)
return cluster_df
# make cluster metric dataframe
cluster_df = pd.DataFrame(
columns=[
"dataset",
"class_",
"dim",
"silhouette",
"homogeneity",
"completeness",
"v_measure",
"init_",
"n_clusters",
"model",
]
)
y = row.train_label
z = row.train_z
n_labels = len(np.unique(y))
for n_clusters in tqdm(np.arange(n_labels - int(n_labels / 2), n_labels + int(n_labels / 2)), leave=False, desc = 'n_clusters'):
for init_ in tqdm(range(n_init), leave=False, desc='init'):
kmeans = KMeans(n_clusters=n_clusters, random_state=init_).fit(z)
clustered_y = kmeans.labels_
homogeneity, completeness, v_measure = homogeneity_completeness_v_measure(
y, clustered_y
)
ss, _ = silhouette_score_block(z, clustered_y)
cluster_df.loc[len(cluster_df)] = [
row.dataset,
row.class_,
row.dim,
ss,
homogeneity,
completeness,
v_measure,
init_,
n_clusters,
kmeans,
]
# save cluster df in case this fails somewhere
ensure_dir(save_loc)
cluster_df.to_pickle(save_loc)
return cluster_df
projection_df = pd.DataFrame(columns = ['dataset', 'class_', 'train_z', 'train_label', 'dim'])
projection_df.loc[len(projection_df)] = [dataset, 'vae', z, Y_train.flatten(), n_components]
projection_df
get_cluster_metrics(projection_df.iloc[0], n_init=5)
```
```
%pylab inline
import pandas as pd
import os
# Just use 1 GPU
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
import pandas as pd
from pyvirchow.io import WSIReader
from pyvirchow.morphology import TissuePatch
from matplotlib.patches import Polygon
from shapely.geometry import Point as shapelyPoint
from shapely.geometry import box as shapelyRectangle
from pyvirchow.io.operations import get_annotation_bounding_boxes, get_annotation_polygons, translate_and_scale_object
from pyvirchow.io.operations import translate_and_scale_polygon
from openslide.deepzoom import DeepZoomGenerator
from multiprocessing import Pool
import os
import glob
from skimage.filters import threshold_otsu
from skimage.color import rgb2gray, gray2rgb
from shapely.geometry import Polygon as shapelyPolygon
import openslide
from tqdm import tqdm_notebook, tqdm
import cv2
from pyvirchow.io.operations import get_annotation_bounding_boxes, get_annotation_polygons, \
poly2mask, translate_and_scale_polygon, read_as_rgb
from pyvirchow.morphology.patch_extractor import TissuePatch
from pyvirchow.morphology.mask import mpl_polygon_to_shapely_scaled, get_common_interior_polygons
from keras.models import Sequential
from keras.layers import Lambda, Dropout
from keras.layers.convolutional import Convolution2D, Conv2DTranspose
from keras.layers.pooling import MaxPooling2D
from keras.utils.np_utils import to_categorical
from sklearn.model_selection import StratifiedShuffleSplit
from keras.callbacks import ModelCheckpoint
from pyvirchow.io.tiling import generate_tiles, get_all_patches_from_slide
import matplotlib.gridspec as gridspec
from sklearn.metrics import confusion_matrix
from tqdm import tqdm_notebook
from matplotlib import cm
NUM_CLASSES = 2 # not_tumor, tumor
BATCH_SIZE = 32
model = Sequential()
model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(256, 256, 3)))
model.add(Convolution2D(100, (5, 5), strides=(2, 2), activation='elu', padding='same'))
model.add(MaxPooling2D())
model.add(Convolution2D(200, (5, 5), strides=(2, 2), activation='elu', padding='same'))
model.add(MaxPooling2D())
model.add(Convolution2D(300, (3, 3), activation='elu', padding='same'))
model.add(Convolution2D(400, (3, 3), activation='elu', padding='same'))
model.add(Dropout(0.1))
model.add(Convolution2D(400, (3, 3), activation='elu', padding='same'))
model.add(Convolution2D(300, (3, 3), activation='elu', padding='same'))
model.add(Dropout(0.1))
model.add(Convolution2D(2, (1, 1)))  # 1x1 conv producing per-pixel class scores (the "score"/upscore layer in FCN terminology)
model.add(Conv2DTranspose(2, (31, 31), strides=(16, 16), activation='softmax', padding='same'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.load_weights('weights-improvement-11-0.97.hdf')
# Let's try on tumor_076 samples
def predict_from_model(patch, model):
"""Predict which pixels are tumor.
input: patch: 256x256x3, rgb image
input: model: keras model
output: prediction: 256x256x1, per-pixel tumor probability
"""
prediction = model.predict(patch.reshape(1, 256, 256, 3))
prediction = prediction[:, :, :, 1].reshape(256, 256)
return prediction
def plot_blend(patch, prediction, ax, alpha=0.75):
"""alpha blend patch and prediction.
https://matplotlib.org/examples/pylab_examples/layer_images.html
input: patch: 256x256x3, rgb image
input: prediction: 256x256x1, per-pixel tumor probability
input: ax: maplotlib Axes object
input: alpha: alpha blend
"""
dx, dy = 0.05, 0.05
x = np.arange(0, patch.shape[1] - 1, dx)
y = np.arange(0, patch.shape[0] - 1, dy)
xmin, xmax, ymin, ymax = np.amin(x), np.amax(x), np.amin(y), np.amax(y)
extent = xmin, xmax, ymin, ymax
# fig = plt.figure(frameon=False, figsize=(10, 5))
Z1 = rgb2gray(patch)
Z2 = prediction
im1 = ax.imshow(Z1, cmap='gray', extent=extent)
im2 = ax.imshow(Z2, cmap='coolwarm', alpha=alpha, vmin=0.0, vmax=1.0,
extent=extent)
ax.axis('off');
def plot_patch_with_pred(patch, truth, prediction, title_str='', alpha=0.6):
"""
input: patch: 256x256x3, rgb image
input: truth: 256x256x2, onehot output classes (not_tumor, tumor)
input: prediction: 256x256x1, per-pixel tumor probability
"""
gs = gridspec.GridSpec(2, 4, width_ratios=[10, 10, 19, 1])
ax0 = plt.subplot(gs[0, 0])
ax1 = plt.subplot(gs[0, 1])
ax2 = plt.subplot(gs[1, 0])
ax3 = plt.subplot(gs[1, 1])
ax4 = plt.subplot(gs[:, 2])
axc = plt.subplot(gs[:, 3])
ax0.imshow(patch);
ax0.set_title('Original')
ax1.imshow(truth.argmax(axis=2), cmap='gray', vmin=0, vmax=1);
ax1.set_title('Truth mask (white=tumor, black=not_tumor)')
p = ax2.imshow(prediction, cmap='coolwarm', vmin=0, vmax=1);
ax2.set_title('Prediction heatmap')
    ax3.imshow((prediction > 0.5).astype(int), cmap='gray', vmin=0, vmax=1);
ax3.set_title('Prediction mask (white=tumor, black=not_tumor)')
plot_blend(patch, prediction, ax4, alpha)
ax4.set_title('Original+Prediction blend')
fig = plt.gcf()
fig.set_size_inches(20, 10)
fig.suptitle(title_str)
fig.colorbar(p, cax=axc, orientation="vertical")
axc.set_title('Probability pixel is tumor')
def predict_batch_from_model(patches, model):
"""Predict which pixels are tumor.
input: patch: `batch_size`x256x256x3, rgb image
input: model: keras model
    output: predictions: `batch_size`x256x256, per-pixel tumor probabilities
"""
predictions = model.predict(patches)
predictions = predictions[:, :, :, 1]
return predictions
"""
validation_samples = len(tumor_076.index)
validation_generator = generate_tiles(tumor_076, BATCH_SIZE)
validation_steps = np.ceil((validation_samples) / BATCH_SIZE)
confusion_mtx = np.zeros((2, 2))
for i in tqdm_notebook(range(int(validation_steps))):
X, y = next(validation_generator)
preds = predict_batch_from_model(X, model)
y_true = y[:, :, :, 1].ravel()
y_pred = np.uint8(preds > 0.5).ravel()
confusion_mtx += confusion_matrix(y_true, y_pred, labels=[0, 1])
"""
tumor_df = pd.read_table('/Z/personal-folders/interns/saket/histopath_data/patches_dataframe/training/tumor/master_df.tsv')
tumor_076 = tumor_df[tumor_df.uid=='tumor_076']
tumor_082 = tumor_df[tumor_df.uid=='tumor_082']
tumor_002 = tumor_df[tumor_df.uid=='tumor_002']
tumor_002.tile_loc = [eval(x) for x in tumor_002.tile_loc]
sample_gen = generate_tiles(tumor_002.sample(32, random_state=42), 32, shuffle=False)
example_X, example_y = next(sample_gen)
example_patch = example_X[1]
example_truth = example_y[1]
prediction = predict_from_model(example_patch, model)
plot_patch_with_pred(example_patch, example_truth, prediction, title_str='Example Tumor Patch')
output_dir = '/Z/personal-folders/interns/saket/histopath_data/prediction_heatmaps/tumor_002'
os.makedirs(output_dir, exist_ok=True)
alpha = 0.5
slide_path = '/Z/personal-folders/interns/saket/histopath_data/CAMELYON16/training/tumor/tumor_002.tif'
json_filepath = '/Z/personal-folders/interns/saket/histopath_data/CAMELYON16/training/lesion_annotations_json/tumor_002.json'
all_samples = get_all_patches_from_slide(slide_path, json_filepath, False, 256)
slide = WSIReader(slide_path, 40)
n_samples = len(all_samples)
n_cols = int(slide.dimensions[0] / 256)
n_rows = int(slide.dimensions[1] / 256)
#assert n_cols * n_rows == n_samples
thumbnail = slide.get_thumbnail((n_cols, n_rows))
thumbnail = np.array(thumbnail)
# batch_size = n_cols
batch_size = 32
output_thumbnail_preds = list()
for offset in tqdm_notebook(list(range(0, n_samples, batch_size))):
batch_samples = all_samples.iloc[offset:offset+batch_size]
#batch_samples.loc[: 'tile_loc'] = [eval(x) for x in batch_samples.tile_loc]
png_fnames = batch_samples.tile_loc.apply(lambda coord: os.path.join(output_dir,
'{}_{}.png'.format(coord[1], coord[0])))
X, _ = next(generate_tiles(batch_samples, batch_size, shuffle=False))
if batch_samples.is_tissue.nunique() == 1 and batch_samples.iloc[0].is_tissue == False:
# all patches in this row do not have tissue, skip them all
output_thumbnail_preds.append(np.zeros(batch_size, dtype=np.float32))
# output pngs
for i, png_fname in enumerate(png_fnames):
plt.imsave(png_fname, X[i])
else:
# make predictions
preds = predict_batch_from_model(X, model)
output_thumbnail_preds.append(preds.mean(axis=(1,2)))
# overlay preds
# save blended imgs
for i, png_fname in enumerate(png_fnames):
pred_i = preds[i]
X_i = X[i]
#output_img = rgb2gray(X_i)
#output_img2 = gray2rgb(output_img.copy())
#overlay = np.uint8(cm.viridis(pred_i) * 255)[:,:,:3]
#blended = overlay*alpha + output_img2 *(1-alpha) + 0
output_img = cv2.cvtColor(X_i, cv2.COLOR_RGB2GRAY)
output_img2 = cv2.cvtColor(output_img.copy(), cv2.COLOR_GRAY2RGB)
overlay = np.uint8(cm.viridis(pred_i) * 255)[:,:,:3]
blended = cv2.addWeighted(overlay, alpha, output_img2, 1-alpha, 0, output_img)
#blended = overlay*alpha + output_img2 *(1-alpha) + 0
#blended = np.clip(blended, 0, 255)
plt.imsave(png_fname, blended)
output_thumbnail_preds = np.array(output_thumbnail_preds)
# Drop the first and last batches before reshaping
output_thumbnail_preds_reshaped = output_thumbnail_preds[1:-1,:].reshape(836, 392)#reshape(n_rows, n_cols)
f, axes = plt.subplots(1, 2, figsize=(40, 18))
ax = axes.flatten()
plot_blend(thumbnail, output_thumbnail_preds, ax=ax[0])
output_dir = '/Z/personal-folders/interns/saket/histopath_data/prediction_heatmaps/tumor_009'
os.makedirs(output_dir, exist_ok=True)
alpha = 0.5
slide_path = '/Z/personal-folders/interns/saket/histopath_data/CAMELYON16/training/tumor/tumor_009.tif'
json_filepath = '/Z/personal-folders/interns/saket/histopath_data/CAMELYON16/training/lesion_annotations_json/tumor_009.json'
all_samples = get_all_patches_from_slide(slide_path, json_filepath, False, 256)
slide = WSIReader(slide_path, 40)
n_samples = len(all_samples)
n_cols = int(slide.dimensions[0] / 256)
n_rows = int(slide.dimensions[1] / 256)
assert n_cols * n_rows == n_samples
thumbnail = slide.get_thumbnail((n_cols, n_rows))
thumbnail = np.array(thumbnail)
# batch_size = n_cols
batch_size = 32
output_thumbnail_preds = list()
for offset in tqdm(list(range(0, n_samples, batch_size))):
batch_samples = all_samples.iloc[offset:offset+batch_size]
#batch_samples.loc[: 'tile_loc'] = [eval(x) for x in batch_samples.tile_loc]
png_fnames = batch_samples.tile_loc.apply(lambda coord: os.path.join(output_dir,
'{}_{}.png'.format(coord[1], coord[0])))
X, _ = next(generate_tiles(batch_samples, batch_size, shuffle=False))
if batch_samples.is_tissue.nunique() == 1 and batch_samples.iloc[0].is_tissue == False:
# all patches in this row do not have tissue, skip them all
output_thumbnail_preds.append(np.zeros(batch_size, dtype=np.float32))
# output pngs
for i, png_fname in enumerate(png_fnames):
plt.imsave(png_fname, X[i])
else:
# make predictions
preds = predict_batch_from_model(X, model)
output_thumbnail_preds.append(preds.mean(axis=(1,2)))
# overlay preds
# save blended imgs
for i, png_fname in enumerate(png_fnames):
pred_i = preds[i]
X_i = X[i]
#output_img = rgb2gray(X_i)
#output_img2 = gray2rgb(output_img.copy())
#overlay = np.uint8(cm.viridis(pred_i) * 255)[:,:,:3]
#blended = overlay*alpha + output_img2 *(1-alpha) + 0
output_img = cv2.cvtColor(X_i, cv2.COLOR_RGB2GRAY)
output_img2 = cv2.cvtColor(output_img.copy(), cv2.COLOR_GRAY2RGB)
overlay = np.uint8(cm.viridis(pred_i) * 255)[:,:,:3]
blended = cv2.addWeighted(overlay, alpha, output_img2, 1-alpha, 0, output_img)
#blended = overlay*alpha + output_img2 *(1-alpha) + 0
#blended = np.clip(blended, 0, 255)
plt.imsave(png_fname, blended)
output_thumbnail_preds = np.array(output_thumbnail_preds)
#def process_batch(batch_samples):
output_dir = '/Z/personal-folders/interns/saket/histopath_data/prediction_heatmaps/tumor_009'
os.makedirs(output_dir, exist_ok=True)
alpha = 0.5
slide_path = '/Z/personal-folders/interns/saket/histopath_data/CAMELYON16/training/tumor/tumor_009.tif'
json_filepath = '/Z/personal-folders/interns/saket/histopath_data/CAMELYON16/training/lesion_annotations_json/tumor_009.json'
all_samples = get_all_patches_from_slide(slide_path, json_filepath, False, 256)
slide = WSIReader(slide_path, 40)
n_samples = len(all_samples)
n_cols = int(slide.dimensions[0] / 256)
n_rows = int(slide.dimensions[1] / 256)
assert n_cols * n_rows == n_samples
thumbnail = slide.get_thumbnail((n_cols, n_rows))
thumbnail = np.array(thumbnail)
# batch_size = n_cols
batch_size = 32
output_thumbnail_preds = list()
def process_batch(args):
idx, batch_samples = args
png_fnames = batch_samples.tile_loc.apply(lambda coord: os.path.join(output_dir,
'{}_{}.png'.format(coord[1], coord[0])))
output_thumbnail_pred = None
X, _ = next(generate_tiles(batch_samples, batch_size, shuffle=False))
if batch_samples.is_tissue.nunique() == 1 and batch_samples.iloc[0].is_tissue == False:
# all patches in this row do not have tissue, skip them all
output_thumbnail_pred = np.zeros(batch_size, dtype=np.float32)
# output pngs
for i, png_fname in enumerate(png_fnames):
plt.imsave(png_fname, X[i])
else:
# make predictions
preds = predict_batch_from_model(X, model)
output_thumbnail_pred = preds.mean(axis=(1,2))
# overlay preds
# save blended imgs
for i, png_fname in enumerate(png_fnames):
pred_i = preds[i]
X_i = X[i]
#output_img = rgb2gray(X_i)
#output_img2 = gray2rgb(output_img.copy())
#overlay = np.uint8(cm.viridis(pred_i) * 255)[:,:,:3]
#blended = overlay*alpha + output_img2 *(1-alpha) + 0
output_img = cv2.cvtColor(X_i, cv2.COLOR_RGB2GRAY)
output_img2 = cv2.cvtColor(output_img.copy(), cv2.COLOR_GRAY2RGB)
overlay = np.uint8(cm.viridis(pred_i) * 255)[:,:,:3]
blended = cv2.addWeighted(overlay, alpha, output_img2, 1-alpha, 0, output_img)
#blended = overlay*alpha + output_img2 *(1-alpha) + 0
#blended = np.clip(blended, 0, 255)
plt.imsave(png_fname, blended)
return idx, output_thumbnail_pred
all_batch_samples = []
for offset in tqdm_notebook(list(range(0, n_samples, batch_size))):
all_batch_samples.append(all_samples.iloc[offset:offset+batch_size])
total = len(list(range(0, n_samples, batch_size)))
output_thumbnail_preds = []
output_thumbnail_idx = []
with tqdm_notebook(total=total) as pbar:
with Pool(processes=8) as p:
results = p.imap_unordered(process_batch, enumerate(all_batch_samples))
for idx, result in results:
output_thumbnail_preds.append(result)
output_thumbnail_idx.append(idx)
pbar.update()
#output_thumbnail_pred = list(tqdm_notebook(p.imap(process_batch, all_batch_samples), total=total))
#for i, output_thumbnail_pred in enumerate(p.imap(process_batch, all_batch_samples)):
# output_thumbnail_preds.append(output_thumbnail_pred)
# pbar.update()
output_thumbnail_preds = np.array(output_thumbnail_preds)
from functools import reduce
def factors(n):
return set(reduce(list.__add__,
([i, n//i] for i in range(1, int(pow(n, 0.5) + 1)) if n % i == 0)))
10241 * 32/392
```
<a href="https://colab.research.google.com/github/OUCTheoryGroup/colab_demo/blob/master/02_Unsupervised_Segmentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Unsupervised Image Segmentation. *ICASSP* 2018
**Unsupervised semantic segmentation of images**, by Asako Kanezaki of the University of Tokyo. The code used here is Zeng Yiyan's modified version.
GitHub: https://github.com/Yonv1943/Unsupervised-Segmentation/tree/master
Zhihu article: https://zhuanlan.zhihu.com/p/68528056
The original author's algorithm takes about 30 seconds to run; the code here achieves the same result in only about 5 seconds.
```
# First, download the image to process; here we use tiger.jpg
! wget https://raw.githubusercontent.com/Yonv1943/Unsupervised-Segmentation/master/image/tiger.jpg
import os
import time
import cv2
import numpy as np
from skimage import segmentation
import torch
import torch.nn as nn
from matplotlib import pyplot as plt
```
The overall framework of the paper is as follows:

The complete algorithm is as follows:

Here, $Net()$ is a fully convolutional network the author uses to extract features from the input image. It consists of three convolutional layers:
| | kernel | dim | stride | padding | activation |
|:--:|:--:|:--:|:--:|:--:|:--:|
|conv2d| 3x3 | 100 | 1 | 1 | ReLU, BatchNorm |
|conv2d| 3x3 | 100 | 1 | 1 | ReLU, BatchNorm |
|conv2d| 1x1 | 100 | 1 | 1 | BatchNorm |
To improve efficiency, Zeng Yiyan modified the network to four convolutional layers, modeled after SENet: alternating 3x3 and 1x1 kernels that expand to 64 channels and compress to 32. The network is implemented as follows:
```
class MyNet(nn.Module):
def __init__(self, inp_dim, mod_dim1, mod_dim2):
super(MyNet, self).__init__()
self.seq = nn.Sequential(
nn.Conv2d(inp_dim, mod_dim1, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(mod_dim1),
nn.ReLU(inplace=True),
nn.Conv2d(mod_dim1, mod_dim2, kernel_size=1, stride=1, padding=0),
nn.BatchNorm2d(mod_dim2),
nn.ReLU(inplace=True),
nn.Conv2d(mod_dim2, mod_dim1, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(mod_dim1),
nn.ReLU(inplace=True),
nn.Conv2d(mod_dim1, mod_dim2, kernel_size=1, stride=1, padding=0),
nn.BatchNorm2d(mod_dim2),
)
def forward(self, x):
return self.seq(x)
```
## 1. Initialize parameters
train_epoch sets a maximum of $2^6 = 64$ training epochs; inp_dim indicates the 3-channel input image; mod_dim1 and mod_dim2 are the network's alternating 64- and 32-channel widths. The `mod` prefix marks values modified from the original author's code.
```
input_image_path = 'tiger.jpg'
train_epoch = 2 ** 6
inp_dim = 3
mod_dim1 = 64
mod_dim2 = 32
gpu_id = 0
# stop the training loop once the number of labels drops below this
min_label_num = 4
# start rendering the result image once the number of labels drops below this
max_label_num = 256
start_time0 = time.time()
torch.cuda.manual_seed_all(1943)
np.random.seed(1943)
os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_id) # choose GPU:0
image = cv2.imread(input_image_path)
```
## 2. Superpixel segmentation
Here we use Felzenszwalb's graph-based superpixel segmentation algorithm ("Efficient Graph-Based Image Segmentation", Felzenszwalb, MIT, 2004), "Felz" for short; we will not go into its details. Two algorithms are commonly used for superpixel segmentation: Felz and SLIC. The paper's author used SLIC, while Zeng Yiyan recommends Felz instead; the reasons are explained in the Zhihu article linked above, so they are not repeated here.
```
seg_map = segmentation.felzenszwalb(image, scale=32, sigma=0.5, min_size=64)
plt.imshow(seg_map)
seg_map = seg_map.flatten()
seg_lab = [np.where(seg_map == u_label)[0]
for u_label in np.unique(seg_map)]
```
The code above first performs superpixel segmentation and stores the result in seg_map. It produces 616 regions in total; the pixel indices of each region are stored in the seg_lab list.
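The seg_map-to-seg_lab grouping used above can be illustrated on a toy label map (the 3x3 example below is made up for illustration):

```python
import numpy as np

# Toy flattened 3x3 "segmentation map" with three superpixel labels,
# mimicking the real seg_map produced by felzenszwalb + flatten above.
seg_map = np.array([0, 0, 1,
                    0, 1, 1,
                    2, 2, 2])

# For each unique label, collect the flat pixel indices belonging to it.
seg_lab = [np.where(seg_map == u_label)[0] for u_label in np.unique(seg_map)]

print(seg_lab)
```

Each entry of `seg_lab` is the index array of one superpixel, which is exactly what the training loop later iterates over.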
## 3. Training the algorithm
The superpixel result can be viewed as a **pre-classification**: pixels with similar color and texture receive the same label. For the tiger image in this example, superpixel segmentation yields 616 regions, assigned labels 0 through 615.
The CNN described above classifies the input image; the training objective is that, in the output segmentation, every pixel within a superpixel ends up with the same label. Training runs until convergence.
Concretely, feeding the image through the CNN produces an output map in which each pixel is assigned a label by taking the argmax over the last layer's 32 feature maps (so labels range from 0 to 31). Within each superpixel we count the pixel labels, set every pixel's target to the most frequent label to build a target map, compute the cross-entropy loss between output and target, and backpropagate.
Over many training iterations, the CNN gradually merges small regions that share the same semantic information into larger ones. (As configured in this code, iteration stops once only 4 regions remain.)
```
'''train init'''
device = torch.device("cuda" if torch.cuda.is_available() else 'cpu')
tensor = image.transpose((2, 0, 1))
tensor = tensor.astype(np.float32) / 255.0
tensor = tensor[np.newaxis, :, :, :]
tensor = torch.from_numpy(tensor).to(device)
model = MyNet(inp_dim, mod_dim1, mod_dim2).to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=5e-2, momentum=0.9)
image_flatten = image.reshape((-1, 3))
color_avg = np.random.randint(255, size=(max_label_num, 3))
show = image
'''train loop'''
start_time1 = time.time()
model.train()
for batch_idx in range(train_epoch):
'''forward'''
optimizer.zero_grad()
output = model(tensor)[0]
output = output.permute(1, 2, 0).view(-1, mod_dim2)
target = torch.argmax(output, 1)
im_target = target.data.cpu().numpy()
'''refine'''
for inds in seg_lab:
u_labels, hist = np.unique(im_target[inds], return_counts=True)
im_target[inds] = u_labels[np.argmax(hist)]
'''backward'''
target = torch.from_numpy(im_target)
target = target.to(device)
loss = criterion(output, target)
loss.backward()
optimizer.step()
'''show image'''
un_label, lab_inverse = np.unique(im_target, return_inverse=True, )
if un_label.shape[0] < max_label_num: # update show
img_flatten = image_flatten.copy()
if len(color_avg) != un_label.shape[0]:
color_avg = [np.mean(img_flatten[im_target == label], axis=0, dtype=np.int) for label in un_label]
for lab_id, color in enumerate(color_avg):
img_flatten[lab_inverse == lab_id] = color
show = img_flatten.reshape(image.shape)
print('Loss:', batch_idx, loss.item())
if len(un_label) < min_label_num:
break
'''save'''
time1 = time.time() - start_time1
print('TimeUsed: %.2f' % time1)
cv2.imwrite("seg_%s_%ds.jpg" % (input_image_path[6:-4], time1), show)
plt.imshow(show)
```
## 4. Summary
**Zeng Yiyan's understanding of the algorithm:** in this unsupervised semantic segmentation task, the CNN's role is to refine the fine-grained pre-classification produced by a classical unsupervised method, iteratively merging small regions until the result matches what a human would expect from a semantic segmentation.
However, the method also has a clear **drawback**: it is not very robust. The results are sensitive to the parameters (both the gradient-descent settings and the parameters of the pre-classification algorithm), and different random restarts can produce different segmentations.
# Monte Carlo Simulations with the Efficient Frontier
### Summary of Efficient Frontier
The efficient frontier is the set of optimal portfolios that offer the highest expected return for a defined level of risk. It provides a clear visualization of how to choose an optimal portfolio mathematically. _*Risk is defined as the asset's actual return differing from our expected return.*_
"The efficient frontier is the set of optimal portfolios that offer the highest expected return for a defined level of risk or the lowest risk for a given level of expected return. Portfolios that lie below the efficient frontier are sub-optimal because they do not provide enough return for the level of risk." - Investopedia
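In symbols (my summary of Markowitz's formulation, not taken from the notebook): for portfolio weights $w$ over assets with expected returns $\mu$ and covariance matrix $\Sigma$,

```latex
E[R_p] = \sum_i w_i \, E[R_i] = w^\top \mu,
\qquad
\sigma_p^2 = \sum_i \sum_j w_i w_j \,\operatorname{Cov}(R_i, R_j) = w^\top \Sigma w,
\qquad
\text{subject to } \sum_i w_i = 1 .
```

The efficient frontier is then the set of portfolios that maximize $E[R_p]$ for each attainable level of $\sigma_p$.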
# <center>Founder: Harry Markowitz</center>

Harry Markowitz introduced the efficient frontier in 1952 and later won the Nobel Memorial Prize in Economic Sciences in 1990 for Modern Portfolio Theory. The theory is widely taught in introductory finance courses throughout the United States and is laid out in detail in his paper *Portfolio Selection* (1952).
# Summary
I will simulate random weights over the individual companies in a given portfolio to understand the trade-off between return and risk.
I picked 10 companies spread across different industries so that they have relatively low correlation with each other.
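The simulation loop can be sketched as follows. This is a minimal self-contained version with synthetic inputs — in the notebook, `mu` and `cov` would be estimated from the downloaded adjusted-close data; the names and numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs: annualized mean returns and covariance for 3 assets
# (in the notebook these would be estimated from the price data).
mu = np.array([0.10, 0.12, 0.08])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.03]])

n_portfolios = 5000
results = np.zeros((n_portfolios, 2))  # columns: expected return, volatility
for k in range(n_portfolios):
    w = rng.random(len(mu))
    w /= w.sum()                          # long-only weights summing to 1
    results[k, 0] = w @ mu                # portfolio expected return
    results[k, 1] = np.sqrt(w @ cov @ w)  # portfolio volatility

# The efficient frontier is the upper-left envelope of this (vol, return) cloud.
best = results[results[:, 0].argmax()]
print(best)
```

Plotting `results[:, 1]` against `results[:, 0]` produces the familiar bullet-shaped cloud whose upper edge traces the frontier.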
# Companies
### Google | NVIDIA | Facebook
### Wells Fargo | Pfizer | COKE
### Disney | IMAX | Caterpillar
### Southwest Airlines
```
import re
from io import StringIO
from datetime import datetime, timedelta
import requests
import pandas as pd
import numpy as np
```
# Obtaining the Data
### Companies of Interest (with their associated ticker)
| Technology | Finance | Health | Consumer | Entertainment | Industrials | Transportation |
| --- | --- | --- |--- | --- | --- | --- |
| (GOOG) Google | (WFC) Wells Fargo | (PFE) Pfizer | (COKE) Coke |(DIS) Disney | (CAT) Caterpillar |(LUV) Southwest Airlines|
| (NVDA) NVIDIA | --- | --- | --- | (IMAX) IMAX | --- | --- |
| (FB) Facebook | --- | --- | --- | --- | --- | --- |
```
# Getting Data from 6 years back
# I will use the most recent year to determine how well I would have done if I had followed the efficient frontier.
# The market is open for about 252 trading days in a given year.
# I will get the adjusted close as my main data.
import pandas_datareader as pdr
from datetime import datetime
def get_historical_Data(tickers):
    """
    This function returns a pd dataframe with all of the adjusted closing information
    """
    data = pd.DataFrame()
    names = list()
    for i in tickers:
        data = pd.concat([data, pdr.get_data_yahoo(symbols=i, start=datetime(2013, 10, 11), end=datetime(2020, 10, 11)).iloc[:, 5]], axis=1)
        names.append(i)
    data.columns = names
    return data
# The ticker names of the companies that we will be looking at.
ticks = ["GOOG", "NVDA", "FB", "WFC","DIS", "IMAX", "LUV", "PFE", "COKE", "CAT"]
d = get_historical_Data(ticks)
print(d.shape)
# Most Recent Data
d.tail()
# Saving the most recent year data such that we can compare...
# Called dT (DataTest)
dT = d.iloc[d.shape[0] - 252:,:] # Data test
# Update the "Training" or "data full"
d = d.iloc[:d.shape[0] - 252,:] # Data Train for the Simulation
print("Testing Data dimensions: ", dT.shape)
print("Training Data dimensions:", d.shape)
dT # Test
d # Train
```
# Understanding Returns
```
from scipy import stats
expected_returns_a = d.pct_change() # Daily returns from trading day to day...
expected_returns_a.columns = ticks # Setting the Column names
expected_returns_aA = pd.DataFrame(expected_returns_a.mean()*250) # Annualizing the average rate of return
expected_returns_aA = expected_returns_aA.T # Transpose the values
dar = d.pct_change().iloc[1:,:]+1 # dar = portfolio returns for each period (in this case day to day)
# 6 is the number of years in the training data (recall that the most recent year was set aside earlier for testing.)
gar = pd.DataFrame(np.prod(dar)**(1/float(6)) - 1) # Geometric Average Rate of Return
# print(gar)
full_return_annual = (pd.concat([expected_returns_aA.T, gar], axis = 1))
# DO NOTE that the arithmetic average return is usually not an appropriate way
# to summarize and report a multi-period return...
# Example: Returns are the following (50%, 30%, -50%) on a yearly basis (Jan 1st to Dec 31st)
# Average: (50 + 30 - 50) / 3 = 10% average rate of return. This is not a great representation of how well you did.
# Example
# Start with initial value of $ 100 Dollars:
# First year becomes 150.
# Second year becomes 195.
# Third year becomes 97.5. You LOST money.
# Geometric Average: (also known as the Compounded annual growth rate)
# Using the example from above...
# ((1 + 0.5) * (1 + 0.3) * (1 - 0.5))^(1/3) - 1
# ((1.5)*(1.3)*(0.5))^(1/3) - 1
# .9916 - 1
# -0.0084
# or (-0.84) % average ANNUAL rate of return (more accurate gauge as to how well you've done.)
full_return_annual.columns = ["Average Arithmetic Returns", "Average Geometric Returns"]
print("Expected Annual Returns ", expected_returns_aA)
print("dar", dar)
print("Full Annual Return", full_return_annual)
```
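The arithmetic-vs-geometric point in the comments above is easy to verify numerically; this quick check reuses the hypothetical +50%, +30%, -50% return sequence:

```python
import numpy as np

# Hypothetical yearly returns from the comment above: +50%, +30%, -50%
returns = np.array([0.50, 0.30, -0.50])

arithmetic = returns.mean()                      # (0.5 + 0.3 - 0.5) / 3 = 0.10
geometric = np.prod(1 + returns) ** (1 / 3) - 1  # ((1.5)(1.3)(0.5))^(1/3) - 1
value = 100 * np.prod(1 + returns)               # $100 compounded through the three years

print(round(arithmetic, 4))  # 0.1
print(round(geometric, 4))   # -0.0084
print(round(value, 2))       # 97.5
```

The arithmetic mean says +10% per year, but the geometric mean correctly reports that the position lost money overall.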
# Equations Utilized
## Measuring the Adjusted Risk of Return
Measures the risk adjusted rate of return of a portfolio.
$$
\begin{aligned}
\text{Sharpe Ratio} = \frac{R_p - R_f}{\sigma_p}
\end{aligned}
$$
$\sigma_p$ = Standard Deviation of Portfolio \
$R_p$ = Return of Portfolio \
$R_f$ = Return of Risk Free Instrument
\
Rule of Thumb:
Sharpe Ratio < 1 is sub-optimal... there is most likely a better option \
Sharpe Ratio > 1 is acceptable \
Sharpe Ratio > 2 is VERY good \
Sharpe Ratio > 3 is EXCELLENT!
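As a quick sketch (with hypothetical numbers), the ratio can be wrapped in a small helper; note that the simulation below implicitly assumes a risk-free rate of zero:

```python
def sharpe_ratio(portfolio_return, portfolio_std, risk_free_rate=0.0):
    """Risk-adjusted return: (R_p - R_f) / sigma_p."""
    return (portfolio_return - risk_free_rate) / portfolio_std

# Hypothetical portfolio: 12% expected return, 10% volatility, 2% risk-free rate
print(round(sharpe_ratio(0.12, 0.10, 0.02), 2))  # 1.0
```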
# Volatility
$$
\begin{aligned}
\sum_{i=0}^N \sum_{j=0}^N {\sigma_{ij}}{X_i X_j}
\end{aligned}
$$
$X$ = Weights in Portfolio \
$\sigma_{ij}$ = Variance - Covariance Matrix
# Expected Return
$$
\begin{aligned}
\sum_{i=0}^N X_i \mu_i
\end{aligned}
$$
\
$X$ = Weights in Portfolio \
$\mu_i$ = Arithmetic Average Rate of Return for $i^{th}$ security
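The two sums above reduce to a dot product and a quadratic form in numpy. A minimal sketch with a made-up two-asset portfolio (the weights, returns, and covariances are hypothetical):

```python
import numpy as np

# Hypothetical two-asset example
X = np.array([0.6, 0.4])       # weights, sum to 1
mu = np.array([0.10, 0.05])    # expected returns
cov = np.array([[0.04, 0.01],  # variance-covariance matrix
                [0.01, 0.09]])

expected_return = X @ mu       # sum_i X_i mu_i
variance = X @ cov @ X         # sum_i sum_j sigma_ij X_i X_j
volatility = np.sqrt(variance)

print(round(expected_return, 3))  # 0.08
print(round(variance, 4))         # 0.0336
print(round(volatility, 4))       # 0.1833
```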
```
# Storing lists that retain returns, volatility, and weights of the Simulated portfolios
portfolio_returns = []
portfolio_volatility = []
sharpe_ratio = []
# This is what is going to be randomized
stock_weights = []
# Number of individual securities that will be a part of the portfolio
num_assets = len(ticks)
# Number of simulated iterations
num_portfolios = 100000
# Getting the covariance matrix
# Gets a percentage change one day to the next
daily_returns = d.pct_change()
# Converting daily returns to annual returns (standardizing to a year)
annual_returns = (daily_returns.mean() * 250) + 1
# Obtaining the covariance of annual
cov_daily = daily_returns.cov() # Covariance
cov_annual = cov_daily*250 # Covariance Annualized
print(annual_returns)
# Setting a seed for reproducibility
np.random.seed(3)
# Filling in the lists with a simulated return, risk, and a given weight
# num_portfolios
for i in range(num_portfolios):
    # Randomly assign weights
    weights = np.random.random(num_assets)
    # Standardize the weights so they sum to 1
    weights /= np.sum(weights)
    returns = np.dot(weights, annual_returns)
    volatility = np.sqrt(np.dot(weights.T, np.dot(cov_annual, weights)))
    # Sharpe ratio: the risk-adjusted return. It suggests that adding assets
    # with low correlation to a portfolio can decrease portfolio risk without
    # sacrificing return.
    sharpe = (returns - 1) / volatility
    sharpe_ratio.append(sharpe)
    portfolio_returns.append(returns - 1)
    portfolio_volatility.append(volatility)
    stock_weights.append(weights)
# Storing the portfolio values
portfolio = {'Returns': portfolio_returns,
             'Volatility': portfolio_volatility,
             'Sharpe Ratio': sharpe_ratio}
# Add an entry per company so that each individual weight is recorded under its corresponding ticker
for counter, symbol in enumerate(ticks):
    portfolio[symbol + ' Weight'] = [Weight[counter] for Weight in stock_weights]
# make a nice dataframe of the extended dictionary
df = pd.DataFrame(portfolio)
df
# Plotting the efficient frontier.
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
df.plot.scatter(x='Volatility', y='Returns', c='Sharpe Ratio',
                cmap='RdYlGn', edgecolors='black', figsize=(10, 8), grid=True)
plt.xlabel('Volatility (Std. Deviation)')
plt.ylabel('Expected Returns')
plt.title('Efficient Frontier')
plt.show()
# Finding the Optimal Portfolio
min_volatility = df['Volatility'].min()
max_sharpe = df['Sharpe Ratio'].max()
# use the min, max values to locate and create the two special portfolios
sharpe_portfolio = df.loc[df['Sharpe Ratio'] == max_sharpe]
min_variance_port = df.loc[df['Volatility'] == min_volatility]
# plot frontier, max sharpe & min Volatility values with a scatterplot
plt.style.use('fivethirtyeight')
df.plot.scatter(x='Volatility', y='Returns', c='Sharpe Ratio',
                cmap='RdYlGn', edgecolors='black', figsize=(10, 8), grid=True)
plt.scatter(x=sharpe_portfolio['Volatility'], y=sharpe_portfolio['Returns'], c='red', marker='D', s=200)
plt.scatter(x=min_variance_port['Volatility'], y=min_variance_port['Returns'], c='blue', marker='D', s=200 )
plt.xlabel('Volatility (Std. Deviation)')
plt.ylabel('Expected Returns')
plt.title('Efficient Frontier')
plt.show()
# Additional Details
r_ef = pd.concat([min_variance_port.T,sharpe_portfolio.T], axis = 1)
r_ef.columns = ["Minimum Risk Adjusted Values", "Max Risk Adjusted Values"]
print(r_ef)
```
# If I were to invest 1,000 USD last year... what would I have now?
```
amount_invest = 1000
expected_return = pd.DataFrame(amount_invest * (1+r_ef.iloc[0,:]))
print("----------------------------------------------------------------")
print(" Expected Returns on my Portfolio")
print("----------------------------------------------------------------")
print(expected_return.T)
print("")
print("----------------------------------------------------------------")
print("If I invested", amount_invest,"USD on |", dT.index[0],"| I would have...")
actual_return = (dT.iloc[dT.shape[0]-1,:] - dT.iloc[0,:]) / ( dT.iloc[0,:])
# Multiplying the weights by the price at the beginning of the year
beg_price = (dT.iloc[0,:])
end_price = dT.iloc[dT.shape[0]-1,:]
print("----------------------------------------------------------------")
# Weights derived from the Efficient Frontier Portfolio
# Weights for Minimum Risk
w = np.array(r_ef.iloc[3:,0])
percentage_change = (end_price - beg_price)/(beg_price)+1
print("Using the Portfolio Weights for Minimum Risk Return Portfolio")
money_left = sum(w * percentage_change* amount_invest)
print("")
print(" Starting balance $ 1000 : Ending with $ ",round(money_left, 2))
print("")
print("----------------------------------------------------------------")
print("Using the Portfolio Weights Maximized Risk-Return Portfolio")
# Weights for Maximum Risk
w1 = np.array(r_ef.iloc[3:,1])
money_left1 = sum(w1 * percentage_change* amount_invest)
print("")
print(" Starting balance $ 1000 : Ending with $ ", round(money_left1,2))
print("")
# Other models to take a look at...
# That try to predict a securities rate of return
# CAPM
# CCAPM
# ICAPM
# Fama French 3 factor, 4 factor, and 5 factor model.
```
# Think Bayes solutions: Chapter 4
This notebook presents solutions to exercises in Think Bayes.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
```
from __future__ import print_function, division
import numpy as np
import thinkbayes2
from thinkbayes2 import Pmf, Cdf, Suite
import thinkplot
%matplotlib inline
```
## The Euro problem
Here's a class that represents hypotheses about the probability a coin lands heads.
```
class Euro(Suite):
    def Likelihood(self, data, hypo):
        """Computes the likelihood of `data` given `hypo`.
        data: string 'H' or 'T'
        hypo: probability of heads, 0-100
        returns: float
        """
        x = hypo
        if data == 'H':
            return x/100
        else:
            return 1 - x/100
```
We can make a uniform prior and update it with 140 heads and 110 tails:
```
suite = Euro(range(0, 101))
dataset = 'H' * 140 + 'T' * 110
for data in dataset:
    suite.Update(data)
```
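For readers without `thinkbayes2` installed, the same grid update can be sketched with plain numpy (here the whole 250-toss dataset is applied in one batch rather than toss by toss; the result is the same posterior):

```python
import numpy as np

# Hypotheses: probability of heads x/100 for x = 0..100, with a uniform prior
xs = np.arange(101) / 100
prior = np.ones(101) / 101

heads, tails = 140, 110
likelihood = xs**heads * (1 - xs)**tails  # joint likelihood of all 250 tosses
posterior = prior * likelihood
posterior /= posterior.sum()              # normalize

print((xs * posterior).sum())  # posterior mean, roughly 0.56
```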
And here's what the posterior looks like.
```
thinkplot.Pdf(suite)
```
We can summarize the posterior several ways, including the mean:
```
suite.Mean()
```
Median:
```
suite.Percentile(50)
```
The peak of the posterior, known as the maximum a posteriori probability (MAP):
```
suite.MAP()
```
And a 90% credible interval
```
suite.CredibleInterval(90)
```
We can look up a particular value in the posterior PMF, but the result doesn't mean much, because we could have divided the range (0-100) into as many pieces as we like, and the result would be different.
```
suite.Prob(50)
```
## Different priors
Let's see how that looks with different priors.
Here's a function that makes a uniform prior:
```
def UniformPrior(label='uniform'):
    """Makes a Suite with a uniform prior."""
    suite = Euro(range(0, 101), label=label)
    return suite
```
And another that makes a triangular prior.
```
def TrianglePrior(label='triangle'):
    """Makes a Suite with a triangle prior."""
    suite = Euro(label=label)
    for x in range(0, 51):
        suite[x] = x
    for x in range(51, 101):
        suite[x] = 100 - x
    suite.Normalize()
    return suite
```
Here's what they look like:
```
triangle = TrianglePrior()
uniform = UniformPrior()
suites = [triangle, uniform]
thinkplot.Pdfs(suites)
thinkplot.Config(xlabel='x', ylabel='Probability')
```
If we update them both with the same data:
```
def RunUpdate(suite, heads=140, tails=110):
    """Updates the Suite with the given number of heads and tails.
    suite: Suite object
    heads: int
    tails: int
    """
    dataset = 'H' * heads + 'T' * tails
    for data in dataset:
        suite.Update(data)

for suite in suites:
    RunUpdate(suite)
```
The results are almost identical; the remaining difference is unlikely to matter in practice.
```
thinkplot.Pdfs(suites)
thinkplot.Config(xlabel='x', ylabel='Probability')
```
## The binomial likelihood function
We can make the Euro class more efficient by computing the likelihood of the entire dataset at once, rather than one coin toss at a time.
If the probability of heads is p, we can compute the probability of k=140 heads in n=250 tosses using the binomial PMF.
```
class Euro2(thinkbayes2.Suite):
    """Represents hypotheses about the probability of heads."""

    def Likelihood(self, data, hypo):
        """Computes the likelihood of the data under the hypothesis.
        hypo: integer value of x, the probability of heads (0-100)
        data: tuple of (number of heads, number of tails)
        """
        x = hypo / 100.0
        heads, tails = data
        like = x**heads * (1-x)**tails
        return like
```
I left out the binomial coefficient $\binom{n}{k}$ because it does not depend on `p`, so it's the same for all hypotheses.
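A quick numerical check of that claim: including or excluding the coefficient leaves the normalized posterior unchanged, because the constant cancels during normalization. A small standalone sketch using the standard-library `math.comb`:

```python
from math import comb

heads, tails = 140, 110
n = heads + tails
hypos = [x / 100 for x in range(1, 100)]  # skip the endpoints 0 and 1 for simplicity

def normalize(likes):
    total = sum(likes)
    return [like / total for like in likes]

with_coef = normalize([comb(n, heads) * p**heads * (1-p)**tails for p in hypos])
without = normalize([p**heads * (1-p)**tails for p in hypos])

# The constant comb(n, heads) cancels: both posteriors are identical
print(max(abs(a - b) for a, b in zip(with_coef, without)) < 1e-12)  # True
```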
```
suite = Euro2(range(0, 101))
dataset = 140, 110
suite.Update(dataset)
```
Here's what the posterior looks like.
```
thinkplot.Pdf(suite)
```
## The Beta distribution
The Beta distribution is a conjugate prior for the binomial likelihood function, which means that if you start with a Beta distribution and update with a binomial likelihood, the posterior is also Beta.
Also, given the parameters of the prior and the data, we can compute the parameters of the posterior directly. The following class represents a Beta distribution and provides a constant-time Update method.
```
from scipy import special
import random  # needed by Beta.Random below

class Beta:
    """Represents a Beta distribution.
    See http://en.wikipedia.org/wiki/Beta_distribution
    """
    def __init__(self, alpha=1, beta=1, label=None):
        """Initializes a Beta distribution."""
        self.alpha = alpha
        self.beta = beta
        self.label = label if label is not None else '_nolegend_'

    def Update(self, data):
        """Updates a Beta distribution.
        data: pair of int (heads, tails)
        """
        heads, tails = data
        self.alpha += heads
        self.beta += tails

    def Mean(self):
        """Computes the mean of this distribution."""
        return self.alpha / (self.alpha + self.beta)

    def MAP(self):
        """Computes the value with maximum a posteriori probability."""
        a = self.alpha - 1
        b = self.beta - 1
        return a / (a + b)

    def Random(self):
        """Generates a random variate from this distribution."""
        return random.betavariate(self.alpha, self.beta)

    def Sample(self, n):
        """Generates a random sample from this distribution.
        n: int sample size
        """
        size = n,
        return np.random.beta(self.alpha, self.beta, size)

    def EvalPdf(self, x):
        """Evaluates the PDF at x."""
        return x ** (self.alpha - 1) * (1 - x) ** (self.beta - 1)

    def MakePmf(self, steps=101, label=None):
        """Returns a Pmf of this distribution.
        Note: Normally, we just evaluate the PDF at a sequence
        of points and treat the probability density as a probability
        mass.
        But if alpha or beta is less than one, we have to be
        more careful because the PDF goes to infinity at x=0
        and x=1. In that case we evaluate the CDF and compute
        differences.
        The result is a little funny, because the values at 0 and 1
        are not symmetric. Nevertheless, it is a reasonable discrete
        model of the continuous distribution, and behaves well as
        the number of values increases.
        """
        if label is None and self.label is not None:
            label = self.label
        if self.alpha < 1 or self.beta < 1:
            cdf = self.MakeCdf()
            pmf = cdf.MakePmf()
            return pmf
        xs = [i / (steps - 1) for i in range(steps)]
        probs = [self.EvalPdf(x) for x in xs]
        pmf = Pmf(dict(zip(xs, probs)), label=label)
        return pmf

    def MakeCdf(self, steps=101):
        """Returns the CDF of this distribution."""
        xs = [i / (steps - 1) for i in range(steps)]
        ps = special.betainc(self.alpha, self.beta, xs)
        cdf = Cdf(xs, ps)
        return cdf

    def Percentile(self, ps):
        """Returns the given percentiles from this distribution.
        ps: scalar, array, or list of [0-100]
        """
        ps = np.asarray(ps) / 100
        xs = special.betaincinv(self.alpha, self.beta, ps)
        return xs
```
Here's how we use it.
```
beta = Beta()
beta.Update((140, 110))
beta.Mean()
```
And here's the posterior.
```
thinkplot.Pdf(beta.MakePmf())
```
Amazing, no?
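The constant-time update is easy to check analytically: a uniform Beta(1, 1) prior plus 140 heads and 110 tails gives a Beta(141, 111) posterior, whose mean matches the figure above. A standalone sketch, independent of the class:

```python
alpha, beta_param = 1, 1  # uniform Beta(1, 1) prior
heads, tails = 140, 110
alpha += heads            # conjugate update: just add the counts
beta_param += tails

posterior_mean = alpha / (alpha + beta_param)
print(alpha, beta_param)         # 141 111
print(round(posterior_mean, 4))  # 0.5595
```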
**Exercise:** One way to construct priors is to make a Beta distribution and adjust the parameters until it has the shape you want. Then when you do an update, the data get added to the parameters of the prior. Since the parameters of the prior play the same mathematical role as the data, they are sometimes called "precounts".
Suppose you believe that most coins are fair or unlikely to deviate from 50% by more than a few percentage points. Construct a prior that captures this belief and update it with the Euro data. How much effect does it have on the posterior, compared to the uniform prior?
Hint: A Beta distribution with parameters `(1, 1)` is uniform from 0 to 1.
```
# Solution
# Here's the uniform prior
uniform = Beta(1, 1, label='uniform')
thinkplot.Pdf(uniform.MakePmf())
# Solution
# And here's what it looks like after the update
uniform.Update(dataset)
thinkplot.Pdf(beta.MakePmf())
# Solution
# Here's a beta prior with precounts chosen to represent
# our background knowledge about coins.
beta = Beta(100, 100, label='beta')
thinkplot.Pdf(beta.MakePmf())
# Solution
# And here's what it looks like after the update
beta.Update(dataset)
thinkplot.Pdf(beta.MakePmf())
# Solution
# Comparing the two, we see that the (more) informative
# prior influences the location and spread of the
# posterior.
thinkplot.Pdf(beta.MakePmf())
thinkplot.Pdf(uniform.MakePmf())
thinkplot.Config(xlabel='x', ylabel='Probability')
```
**Exercise:** At the 2016 Summer Olympics in the Women's Skeet event, Kim Rhode faced Wei Meng in the bronze medal match. They each hit 15 of 25 skeets, sending the match into sudden death. In the first round, both hit 1 of 2 skeets. In the next two rounds, they each hit 2 skeets. Finally, in the fourth round, Rhode hit 2 and Wei hit 1, so Rhode won the bronze medal, making her the first Summer Olympian to win an individual medal at six consecutive summer games.
But after all that shooting, what is the probability that Rhode is actually a better shooter than Wei? If the same match were held again, what is the probability that Rhode would win?
As always, you will have to make some modeling decisions, but one approach is to estimate, for each shooter, the probability of hitting a skeet. Then, to estimate the probability that Rhode is a better shooter, you can draw samples from the two posterior distributions and compare them. To estimate the probability of winning a rematch, you could draw samples from the posterior distributions and simulate a round of 25 shots.
```
# Solution
# Here's a Beta distribution that represents Rhode's probability
# of hitting a skeet
rhode = Beta(1, 1, label='Rhode')
rhode.Update((22, 11))
# Solution
# And another Beta for Wei
wei = Beta(1, 1, label='Wei')
wei.Update((21, 12))
# Solution
# Here's what the posteriors look like
thinkplot.Pdf(rhode.MakePmf())
thinkplot.Pdf(wei.MakePmf())
thinkplot.Config(xlabel='x', ylabel='Probability')
# Solution
# To estimate the probability of superiority, we can
# draw samples from the posteriors and compare them
rhode_sample = rhode.MakeCdf(10001).Sample(10000)
wei_sample = wei.MakeCdf(10001).Sample(10000)
np.mean(rhode_sample > wei_sample)
# Solution
# The probability that Rhode is a better shooter is about 59%
np.mean(rhode_sample < wei_sample)
# Solution
# To simulate a rematch, we can draw `p` from the posterior
# distribution and then sample from a binomial distribution
# with parameters `p` and `n=25`.
rhode_rematch = np.random.binomial(25, rhode_sample)
thinkplot.Hist(Pmf(rhode_rematch))
# Solution
# The probability that Rhode wins a rematch (without going
# to sudden death) is about 52%
wei_rematch = np.random.binomial(25, wei_sample)
np.mean(rhode_rematch > wei_rematch)
# Solution
# The probability that Wei wins the rematch is 39%
np.mean(rhode_rematch < wei_rematch)
# Solution
# And the chance that the rematch also goes to sudden death is
# about 9%
# Assuming that sudden death is close to 50/50, the overall chance
# that Rhode wins is about 56%
np.mean(rhode_rematch == wei_rematch)
```
**Exercise** Suppose that instead of observing coin tosses directly, you measure the outcome using an instrument that is not always correct. Specifically, suppose there is a probability `y` that an actual heads is reported as tails, or actual tails reported as heads.
Write a class that estimates the bias of a coin given a series of outcomes and the value of `y`.
How does the spread of the posterior distribution depend on `y`?
```
# Solution
# Here's a class that models an unreliable coin
class UnreliableCoin(Suite):
    def __init__(self, prior, y):
        """
        prior: seq or map
        y: probability of accurate measurement
        """
        Suite.__init__(self, prior)
        self.y = y

    def Likelihood(self, data, hypo):
        """
        data: outcome of unreliable measurement, either 'H' or 'T'
        hypo: probability of heads, 0-100
        """
        x = hypo / 100
        y = self.y
        if data == 'H':
            return x*y + (1-x)*(1-y)
        else:
            return x*(1-y) + (1-x)*y
# Solution
# Now let's initialize one with `y=0.9`:
prior = range(0, 101)
suite = UnreliableCoin(prior, y=0.9)
thinkplot.Pdf(suite)
# Solution
# And update with 3 heads and 7 tails.
for outcome in 'HHHTTTTTTT':
    suite.Update(outcome)
thinkplot.Pdf(suite)
# Solution
# Now let's try it out with different values of `y`:
def compute_prior(y):
    prior = range(0, 101)
    suite = UnreliableCoin(prior, y=y)
    for outcome in 'HHHTTTTTTT':
        suite.Update(outcome)
    thinkplot.Pdf(suite, label='y=%g' % y)
# Solution
# The posterior distribution gets wider as the measurement gets less reliable.
compute_prior(1)
compute_prior(0.8)
compute_prior(0.6)
thinkplot.config(legend=True)
# Solution
# At `y=0.5`, the measurement provides no information, so the posterior equals the prior:
compute_prior(0.5)
thinkplot.config(legend=True)
# Solution
# As the coin gets less reliable (below `y=0.5`) the distribution gets narrower again.
# In fact, a measurement with `y=0` is just as good as one with `y=1`,
# provided that we know what `y` is.
compute_prior(0.4)
compute_prior(0.2)
compute_prior(0.0)
thinkplot.config(legend=True)
```
**Exercise** This exercise is inspired by a question posted by a “redditor” named dominosci on Reddit’s statistics “subreddit” at http://reddit.com/r/statistics.
Reddit is an online forum with many interest groups called subreddits. Users, called redditors, post links to online content and other web pages. Other redditors vote on the links, giving an “upvote” to high-quality links and a “downvote” to links that are bad or irrelevant.
A problem, identified by dominosci, is that some redditors are more reliable than others, and Reddit does not take this into account.
The challenge is to devise a system so that when a redditor casts a vote, the estimated quality of the link is updated in accordance with the reliability of the redditor, and the estimated reliability of the redditor is updated in accordance with the quality of the link.
One approach is to model the quality of the link as the probability of garnering an upvote, and to model the reliability of the redditor as the probability of correctly giving an upvote to a high-quality item.
Write class definitions for redditors and links and an update function that updates both objects whenever a redditor casts a vote.
```
# Solution
# Here's one possible model:
# Each article has a quality Q, which is the probability of
# eliciting an upvote from a completely reliable redditor.
# Each user has a reliability R, which is the probability of
# giving an upvote to an item with Q=1.
# The probability that a redditor with reliability R gives an
# upvote to an item with quality Q is `R*Q + (1-R) * (1-Q)`
# Now when a redditor votes on a item, we simultaneously update our
# belief about the redditor and the item.
class Redditor(Suite):
    """Represents hypotheses about the trustworthiness of a redditor."""

    def Likelihood(self, data, hypo):
        """Computes the likelihood of the data under the hypothesis.
        hypo: integer value of r, the prob of a correct vote (0-100)
        data: (vote, q) pair, where vote is 'up' or 'down' and
              q is the mean quality of the link
        """
        r = hypo / 100.0
        vote, q = data
        if vote == 'up':
            return r * q + (1-r) * (1-q)
        elif vote == 'down':
            return r * (1-q) + (1-r) * q
        else:
            return 0
# Solution
class Item(Suite):
    """Represents hypotheses about the quality of an item."""

    def Likelihood(self, data, hypo):
        """Computes the likelihood of the data under the hypothesis.
        hypo: integer value of x, the prob of garnering an upvote
        data: (vote, t) pair, where vote is 'up' or 'down' and
              t is the mean trustworthiness of the redditor
        """
        x = hypo / 100.0
        vote, r = data
        if vote == 'up':
            return x * r + (1-x) * (1-r)
        elif vote == 'down':
            return x * (1-r) + (1-x) * r
        else:
            return 0
# Solution
# Suppose we start with a redditor who has demonstrated some reliability.
from thinkbayes2 import Beta
redditor = Redditor(label='redditor')
beta = Beta(2, 1)
for val, prob in beta.MakePmf().Items():
    redditor.Set(val*100, prob)
thinkplot.Pdf(redditor)
mean_r = redditor.Mean() / 100.0
mean_r
# Solution
# And a completely unknown item.
item = Item(range(0, 101), label='item')
thinkplot.Pdf(item)
mean_q = item.Mean() / 100.0
mean_q
# Solution
# We update the priors simultaneously, each using the mean value of the other.
redditor.Update(('up', mean_q))
item.Update(('up', mean_r))
# Solution
# And here are the results. Since we knew nothing about the item,
# the vote provides no information about the redditor:
thinkplot.Pdf(redditor)
print(redditor.Mean(), redditor.CredibleInterval(90))
# Solution
# But since we think the redditor is reliable, the vote provides
# some information about the item:
thinkplot.Pdf(item)
print(item.Mean(), item.CredibleInterval(90))
# Solution
# After the upvote, the mean quality of the item increases to about 56%.
# The model I used to compute likelihoods is not the only choice.
# As an alternative, I could have used something like
# item response theory (https://en.wikipedia.org/wiki/Item_response_theory),
# which we'll see in Chapter 12.
```
# Templating and Jinja2
---
```html
<div class="simplelist">
<ul>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ul>
</div>
```
<div class="simplelist">
<ul>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ul>
</div>
```html
<table>
<thead>
<tr>
<th>S.No</th>
<th>Name</th>
<th>Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Team 1</td>
<td>30</td>
</tr>
<tr>
<td>2</td>
<td>Team 2</td>
<td>20</td>
</tr>
</tbody>
</table>
```
<table>
<thead>
<tr>
<th>S.No</th>
<th>Name</th>
<th>Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Team 1</td>
<td>30</td>
</tr>
<tr>
<td>2</td>
<td>Team 2</td>
<td>20</td>
</tr>
</tbody>
</table>
```
from jinja2 import Template
t = Template('Hello {{name}}!')
t
t.render(name='World')
t1 = Template('Hello {% if value > 5 %}{{ value }}{% endif %}!')
t1.render(value=4)
t1.render(value=6)
tpl ='''
<table>
<thead>
<tr>
<th>S.No</th>
<th>Name</th>
<th>Score</th>
</tr>
</thead>
<tbody>
{% for row in data %}
<tr>
<td>1</td>
<td>Team 1</td>
<td>30</td>
</tr>
{% endfor %}
</tbody>
</table>
'''
from IPython.display import HTML
Template(tpl).render(data=range(0, 5))
HTML(_)
tpl ='''
<table>
<thead>
<tr>
<th>S.No</th>
<th>Name</th>
<th>Score</th>
<th>Performance</th>
</tr>
</thead>
<tbody>
{% for row in data %}
<tr>
<td>{{row.id}}</td>
<td>{{row.name}}</td>
<td>{{row.score}}</td>
<td>
{% if row.score < 30 %}
Very Bad
{% elif row.score < 50 %}
Bad
{% elif row.score > 80 %}
Good
{% else %}
Very Good
{% endif %}
</td>
</tr>
{% endfor %}
</tbody>
</table>
'''
from random import randint
records = [{'id': i + 1, 'name': 'Team ' + str(i + 1), 'score': randint(0, 100)} for i in range(0, 10)]
records
Template(tpl).render(data=records)
HTML(_)
```
# Accessing Physical Quantities
In order to compute the synthetic spectrum, TARDIS must either be told
or must calculate many physical properties of the model. To understand and
test the code it can be important to look at these values. One
easy way to do this is to run TARDIS in an interactive mode and then
inspect the model properties.
### Running in an interactive Python session
```
# Download the atomic data
from tardis.io.atom_data.util import download_atom_data
download_atom_data('kurucz_cd23_chianti_H_He')
# Download the example configuration file
!curl -O https://raw.githubusercontent.com/tardis-sn/tardis/master/docs/tardis_example.yml
from tardis import run_tardis
simulation = run_tardis('tardis_example.yml')
```
If all goes well, the simulation should run as usual. Afterwards, the
information from the simulation will all exist in `Simulation` and
can be examined.
Some examples for useful/interesting quantities are
given below (but much more information is available: contact us via
[tardis-sn-users](http://groups.google.com/forum/#!forum/tardis-sn-users) if you need
further help).
### Examples of finding physical quantities
For example, two of our important quantities are the parameters of the
radiation field model, $T_{\rm rad}$ and $W$. These exist as `numpy.ndarray`
Thus `simulation.plasma.t_rad` will give you a list of the $T_{\rm rad}$-values for the model zones in cgs units.
```
simulation.plasma.t_rad
```
Similarly, the $W$-values can be accessed using `simulation.plasma.w`
```
simulation.plasma.w
```
Several important quantities that were set up when the model was defined by the configuration file are located in the `model` section of the simulation. For example, the inner and outer velocity boundaries of the zones in the model are given by `simulation.model.v_inner.cgs` and `simulation.model.v_outer.cgs` respectively. These exist as Astropy [Quantities](http://astropy.readthedocs.org/en/v0.2.1/_generated/astropy.units.quantity.Quantity.html).
```
simulation.model.v_inner.cgs
simulation.model.v_outer.cgs
```
The average density in the zones is given by `simulation.model.density.cgs`. These also exist as Astropy [Quantities](http://astropy.readthedocs.org/en/v0.2.1/_generated/astropy.units.quantity.Quantity.html).
```
simulation.model.density.cgs
```
Many other interesting quantities are stored in the `plasma`.
For example, the calculated ion populations and level populations are given by `simulation.plasma.ion_number_density` and `simulation.plasma.level_number_density` respectively.
```
simulation.plasma.ion_number_density
simulation.plasma.level_number_density
```
These are stored as Pandas `DataFrames`. An index can be supplied to obtain the population in a particular zone. E.g., for the ion populations of the innermost zone (index = 0), we will use
`simulation.plasma.ion_number_density[0]`
```
simulation.plasma.ion_number_density[0]
```
Ion populations for a particular ionization stage of a particular element can be accessed by specifying an appropriate tuple $(Z, C)$, which identifies the element (via atomic number $Z$) and the charge (via the ion charge $C$). Thus, `simulation.plasma.ion_number_density.loc[14,1]` will give the ion populations for Si II ($Z=14$, $C=1$) in all the zones.
```
simulation.plasma.ion_number_density.loc[14,1]
```
The above examples can be combined; e.g., the Si II population in the innermost zone is obtained by
`simulation.plasma.ion_number_density[0].loc[14,1]`
```
simulation.plasma.ion_number_density[0].loc[14,1]
```
The level populations are stored (and can be accessed) in a similar way - a third label can be used to pick out a particular atomic level. E.g., to pull out the population of the ground state (index 0) of Si II we can use `simulation.plasma.level_number_density.loc[14,1,0]`
```
simulation.plasma.level_number_density.loc[14,1,0]
```
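The `(Z, C, level)` lookups above are ordinary pandas MultiIndex selection, so the pattern can be tried without TARDIS installed. A standalone sketch with made-up numbers (using an explicit tuple plus a column slice, which keeps the row/column roles unambiguous):

```python
import pandas as pd

# Hypothetical populations indexed by (atomic_number, ion_number, level_number),
# with one column per zone -- the same layout the plasma DataFrames use.
index = pd.MultiIndex.from_tuples(
    [(14, 1, 0), (14, 1, 1), (14, 2, 0)],
    names=['atomic_number', 'ion_number', 'level_number'])
pops = pd.DataFrame([[1.0e5, 9.0e4],
                     [2.0e3, 1.5e3],
                     [4.0e2, 3.0e2]],
                    index=index, columns=[0, 1])

print(pops.loc[(14, 1), :])     # all levels of Si II, every zone
print(pops.loc[(14, 1, 0), 0])  # ground state of Si II in zone 0 -> 100000.0
```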
### Notes
- If you prefer to work in SI units, all the Astropy Quantities may instead be accessed with `.si`.
- Information that is not stored as Astropy Quantities (e.g. the ion and level populations used in the examples above) is usually stored in cgs units (i.e. cm$^{-3}$ for the populations).
# Optimization Methods
Until now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result.
Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this:
<img src="images/cost.jpg" style="width:650px;height:300px;">
<caption><center> <u> **Figure 1** </u>: **Minimizing the cost is like finding the lowest point in a hilly landscape**<br> At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. </center></caption>
**Notations**: As usual, $\frac{\partial J}{\partial a } = $ `da` for any variable `a`.
To get started, run the following code to import the libraries you will need.
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
## 1 - Gradient Descent
A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent.
**Warm-up exercise**: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{1}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{2}$$
where L is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
```
# GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
"""
Update parameters using one step of gradient descent
Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients to update each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
learning_rate -- the learning rate, scalar.
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads['dW' + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads['db' + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table>
<tr>
<td > **W1** </td>
<td > [[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.74604067]
[-0.75184921]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.88020257]
[ 0.02561572]
[ 0.57539477]] </td>
</tr>
</table>
A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent.
- **(Batch) Gradient Descent**:
``` python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
# Forward propagation
a, caches = forward_propagation(X, parameters)
# Compute cost.
cost = compute_cost(a, Y)
# Backward propagation.
grads = backward_propagation(a, caches, parameters)
# Update parameters.
parameters = update_parameters(parameters, grads)
```
- **Stochastic Gradient Descent**:
```python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
for j in range(0, m):
# Forward propagation
a, caches = forward_propagation(X[:,j], parameters)
# Compute cost
cost = compute_cost(a, Y[:,j])
# Backward propagation
grads = backward_propagation(a, caches, parameters)
# Update parameters.
parameters = update_parameters(parameters, grads)
```
In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this:
<img src="images/kiank_sgd.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : **SGD vs GD**<br> "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). </center></caption>
**Note** also that implementing SGD requires 3 for-loops in total:
1. Over the number of iterations
2. Over the $m$ training examples
3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)
In practice, you'll often get faster results if you use neither the whole training set nor a single training example to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples.
<img src="images/kiank_minibatch.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 3** </u>: <font color='purple'> **SGD vs Mini-Batch GD**<br> "+" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. </center></caption>
<font color='blue'>
**What you should remember**:
- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.
- You have to tune a learning rate hyperparameter $\alpha$.
- With a well-tuned mini-batch size, mini-batch gradient descent usually outperforms both gradient descent and stochastic gradient descent (particularly when the training set is large).
## 2 - Mini-Batch Gradient descent
Let's learn how to build mini-batches from the training set (X, Y).
There are two steps:
- **Shuffle**: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y, so that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches.
<img src="images/kiank_shuffle.png" style="width:550px;height:300px;">
- **Partition**: Partition the shuffled (X, Y) into mini-batches of size `mini_batch_size` (here 64). Note that the number of training examples is not always divisible by `mini_batch_size`. The last mini-batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full `mini_batch_size`, it will look like this:
<img src="images/kiank_partition.png" style="width:550px;height:300px;">
**Exercise**: Implement `random_mini_batches`. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:
```python
first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
...
```
Note that the last mini-batch might end up smaller than `mini_batch_size=64`. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is `math.floor(s)` in Python). If the total number of examples is not a multiple of `mini_batch_size=64`, then there will be $\lfloor \frac{m}{mini\_batch\_size}\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be $m - mini\_batch\_size \times \lfloor \frac{m}{mini\_batch\_size}\rfloor$.
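As a quick numeric check (the value of $m$ below is chosen just for illustration): with 148 examples and a mini-batch size of 64, the partitioning yields two full mini-batches and a final one of 20 examples.

```python
import math

m = 148                # number of examples (illustrative value)
mini_batch_size = 64

num_complete = math.floor(m / mini_batch_size)   # full mini-batches
last_size = m - mini_batch_size * num_complete   # size of the final mini-batch

print(num_complete, last_size)  # -> 2 20
```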
```
# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
"""
Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer
Returns:
mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
"""
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:, k * mini_batch_size : (k+1) * mini_batch_size]
mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k+1) * mini_batch_size]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
        mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : m]
        mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td > **shape of the 1st mini_batch_X** </td>
<td > (12288, 64) </td>
</tr>
<tr>
<td > **shape of the 2nd mini_batch_X** </td>
<td > (12288, 64) </td>
</tr>
<tr>
<td > **shape of the 3rd mini_batch_X** </td>
<td > (12288, 20) </td>
</tr>
<tr>
<td > **shape of the 1st mini_batch_Y** </td>
<td > (1, 64) </td>
</tr>
<tr>
<td > **shape of the 2nd mini_batch_Y** </td>
<td > (1, 64) </td>
</tr>
<tr>
<td > **shape of the 3rd mini_batch_Y** </td>
<td > (1, 20) </td>
</tr>
<tr>
<td > **mini batch sanity check** </td>
<td > [ 0.90085595 -0.7612069 0.2344157 ] </td>
</tr>
</table>
<font color='blue'>
**What you should remember**:
- Shuffling and Partitioning are the two steps required to build mini-batches
- Powers of two are often chosen for the mini-batch size, e.g., 16, 32, 64, 128.
## 3 - Momentum
Because mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations.
Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill.
<img src="images/opt_momentum.png" style="width:400px;height:250px;">
<caption><center> <u><font color='purple'>**Figure 4**</u><font color='purple'>: The red arrows show the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$.<br> <font color='black'> </center>
**Exercise**: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the `grads` dictionary, that is:
for $l =1,...,L$:
```python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
```
**Note** that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" on the superscript). This is why we are shifting l to l+1 in the `for` loop.
```
# GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
"""
Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
Returns:
v -- python dictionary containing the current velocity.
v['dW' + str(l)] = velocity of dWl
v['db' + str(l)] = velocity of dbl
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td > **v["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
</table>
**Exercise**: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$:
$$ \begin{cases}
v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\
W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}}
\end{cases}\tag{3}$$
$$\begin{cases}
v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\
b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}}
\end{cases}\tag{4}$$
where L is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a "one" on the superscript). So you will need to shift `l` to `l+1` when coding.
```
# GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
"""
Update parameters using Momentum
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- python dictionary containing the current velocity:
v['dW' + str(l)] = ...
v['db' + str(l)] = ...
beta -- the momentum hyperparameter, scalar
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
v -- python dictionary containing your updated velocities
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l+1)] = beta * v["dW" + str(l+1)] + (1-beta) * grads['dW' + str(l+1)]
v["db" + str(l+1)] = beta * v["db" + str(l+1)] + (1-beta) * grads['db' + str(l+1)]
# update parameters
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v["db" + str(l+1)]
### END CODE HERE ###
return parameters, v
parameters, grads, v = update_parameters_with_momentum_test_case()
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td > **W1** </td>
<td > [[ 1.62544598 -0.61290114 -0.52907334]
[-1.07347112 0.86450677 -2.30085497]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.74493465]
[-0.76027113]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.31930698 -0.24990073 1.4627996 ]
[-2.05974396 -0.32173003 -0.38320915]
[ 1.13444069 -1.0998786 -0.1713109 ]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.87809283]
[ 0.04055394]
[ 0.58207317]] </td>
</tr>
<tr>
<td > **v["dW1"]** </td>
<td > [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[-0.01228902]
[-0.09357694]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]</td>
</tr>
</table>
**Note** that:
- The velocity is initialized with zeros. So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps.
- If $\beta = 0$, then this just becomes standard gradient descent without momentum.
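The second point can be checked concretely. The sketch below re-implements the two update rules from the formulas above on a single toy parameter (it is not the graded functions) and verifies that momentum with $\beta = 0$ takes exactly the same step as plain gradient descent:

```python
import numpy as np

W = np.array([[1.0, -2.0]])
dW = np.array([[0.5, 0.1]])
alpha = 0.1

# Plain gradient descent step
W_gd = W - alpha * dW

# Momentum step with beta = 0: v = 0*v + (1-0)*dW = dW, so the step is identical
v = np.zeros_like(W)
beta = 0.0
v = beta * v + (1 - beta) * dW
W_momentum = W - alpha * v

print(np.allclose(W_gd, W_momentum))  # -> True
```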
**How do you choose $\beta$?**
- The larger the momentum $\beta$ is, the smoother the update, because it takes more of the past gradients into account. But if $\beta$ is too big, it could also smooth out the updates too much.
- Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default.
- Tuning the optimal $\beta$ for your model may require trying several values to see what works best in terms of reducing the value of the cost function $J$.
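The smoothing effect of $\beta$ can be illustrated on a synthetic noisy 1-D gradient sequence (the values below are made up for illustration; this is a standalone sketch, not part of the graded code):

```python
import numpy as np

rng = np.random.default_rng(0)
grads = 1.0 + 0.5 * rng.standard_normal(500)  # noisy 1-D gradients around 1.0

def ewa(values, beta):
    """Exponentially weighted average, as used for the velocity v above."""
    v, out = 0.0, []
    for g in values:
        v = beta * v + (1 - beta) * g
        out.append(v)
    return np.array(out)

# Larger beta -> smoother (lower-variance) sequence once warmed up
for beta in (0.5, 0.9, 0.98):
    print(beta, round(ewa(grads, beta)[200:].std(), 3))
```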
<font color='blue'>
**What you should remember**:
- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.
- You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$.
## 4 - Adam
Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum.
**How does Adam work?**
1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction).
2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction).
3. It updates parameters in a direction based on combining information from "1" and "2".
The update rule is, for $l = 1, ..., L$:
$$\begin{cases}
v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\
v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\
s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) (\frac{\partial \mathcal{J} }{\partial W^{[l]} })^2 \\
s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}
\end{cases}$$
where:
- $t$ counts the number of Adam update steps taken
- L is the number of layers
- $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages.
- $\alpha$ is the learning rate
- $\varepsilon$ is a very small number to avoid dividing by zero
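The role of the bias correction is easiest to see on the very first step: at $t = 1$, $v = (1-\beta_1)\,dW$, so dividing by $1-\beta_1^1$ recovers the raw gradient. The sketch below is a small standalone check of this, independent of the graded functions:

```python
import numpy as np

beta1 = 0.9
dW = np.array([[0.3, -0.2]])

v = np.zeros_like(dW)
t = 1
v = beta1 * v + (1 - beta1) * dW    # first moving-average step
v_corrected = v / (1 - beta1 ** t)  # bias correction at t = 1

print(np.allclose(v_corrected, dW))  # -> True
```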
As usual, we will store all parameters in the `parameters` dictionary.
**Exercise**: Initialize the Adam variables $v, s$ which keep track of the past information.
**Instruction**: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for `grads`, that is:
for $l = 1, ..., L$:
```python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
s["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
s["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
```
```
# GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
"""
Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl
Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
s["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
s["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td > **v["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **s["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **s["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **s["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **s["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
</table>
**Exercise**: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$:
$$\begin{cases}
v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\
v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\
s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) (\frac{\partial J }{\partial W^{[l]} })^2 \\
s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}}+\varepsilon}
\end{cases}$$
**Note** that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
```
# GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
"""
Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
"""
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = beta1 * v["dW" + str(l+1)] + (1-beta1) * grads['dW' + str(l+1)]
v["db" + str(l+1)] = beta1 * v["db" + str(l+1)] + (1-beta1) * grads['db' + str(l+1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)] / (1 - np.power(beta1,t))
v_corrected["db" + str(l+1)] = v["db" + str(l+1)] / (1 - np.power(beta1,t))
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l+1)] = beta2 * s["dW" + str(l+1)] + (1-beta2) * grads['dW' + str(l+1)]**2
s["db" + str(l+1)] = beta2 * s["db" + str(l+1)] + (1-beta2) * grads['db' + str(l+1)]**2
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)] / (1 - np.power(beta2,t))
s_corrected["db" + str(l+1)] = s["db" + str(l+1)] / (1 - np.power(beta2,t))
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v_corrected["dW" + str(l+1)] / (np.sqrt(s_corrected["dW" + str(l+1)]) + epsilon)
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v_corrected["db" + str(l+1)] / (np.sqrt(s_corrected["db" + str(l+1)]) + epsilon)
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
```
**Expected Output**:
<table>
<tr>
<td > **W1** </td>
<td > [[ 1.63178673 -0.61919778 -0.53561312]
[-1.08040999 0.85796626 -2.29409733]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.75225313]
[-0.75376553]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.32648046 -0.25681174 1.46954931]
[-2.05269934 -0.31497584 -0.37661299]
[ 1.14121081 -1.09245036 -0.16498684]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.88529978]
[ 0.03477238]
[ 0.57537385]] </td>
</tr>
<tr>
<td > **v["dW1"]** </td>
<td > [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[-0.01228902]
[-0.09357694]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]] </td>
</tr>
<tr>
<td > **s["dW1"]** </td>
<td > [[ 0.00121136 0.00131039 0.00081287]
[ 0.0002525 0.00081154 0.00046748]] </td>
</tr>
<tr>
<td > **s["db1"]** </td>
<td > [[ 1.51020075e-05]
[ 8.75664434e-04]] </td>
</tr>
<tr>
<td > **s["dW2"]** </td>
<td > [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]
[ 1.57413361e-04 4.72206320e-04 7.14372576e-04]
[ 4.50571368e-04 1.60392066e-07 1.24838242e-03]] </td>
</tr>
<tr>
<td > **s["db2"]** </td>
<td > [[ 5.49507194e-05]
[ 2.75494327e-03]
[ 5.50629536e-04]] </td>
</tr>
</table>
You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference.
## 5 - Model with different optimization algorithms
Let's use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.)
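The course's `load_dataset()` helper returns the data in the (features × examples) layout used throughout this notebook. If you want a comparable dataset outside the course environment, a similar one can be generated directly with scikit-learn (the sample count and noise level below are illustrative assumptions, not the course's exact settings):

```python
import numpy as np
import sklearn.datasets

# A comparable "moons" dataset generated directly with scikit-learn
np.random.seed(3)
X, Y = sklearn.datasets.make_moons(n_samples=300, noise=0.2)
train_X = X.T               # shape (2, 300): features x examples
train_Y = Y.reshape(1, -1)  # shape (1, 300)
print(train_X.shape, train_Y.shape)
```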
```
train_X, train_Y = load_dataset()
```
We have already implemented a 3-layer neural network. You will train it with:
- Mini-batch **Gradient Descent**: it will call your function:
- `update_parameters_with_gd()`
- Mini-batch **Momentum**: it will call your functions:
- `initialize_velocity()` and `update_parameters_with_momentum()`
- Mini-batch **Adam**: it will call your functions:
- `initialize_adam()` and `update_parameters_with_adam()`
```
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
"""
3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
layers_dims -- python list, containing the size of each layer
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(layers_dims) # number of layers in the neural networks
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
elif optimizer == "adam":
v, s = initialize_adam(parameters)
# Optimization loop
for i in range(num_epochs):
# Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost
cost = compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
t, learning_rate, beta1, beta2, epsilon)
# Print the cost every 1000 epochs
if print_cost and i % 1000 == 0:
print ("Cost after epoch %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
```
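The `random_mini_batches` helper the model relies on can be sketched as follows (a simplified version that assumes `X` has shape `(features, m)` and `Y` has shape `(1, m)`; the course's helper may differ in details):

```python
import numpy as np

def random_mini_batches_sketch(X, Y, mini_batch_size=64, seed=0):
    # Shuffle the m examples, then slice them into consecutive minibatches;
    # the last minibatch may be smaller than mini_batch_size.
    np.random.seed(seed)
    m = X.shape[1]
    permutation = np.random.permutation(m)
    X_shuf, Y_shuf = X[:, permutation], Y[:, permutation]
    return [(X_shuf[:, k:k + mini_batch_size], Y_shuf[:, k:k + mini_batch_size])
            for k in range(0, m, mini_batch_size)]
```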
You will now run this 3-layer neural network with each of the 3 optimization methods.
### 5.1 - Mini-batch Gradient descent
Run the following code to see how the model does with mini-batch gradient descent.
```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
### 5.2 - Mini-batch gradient descent with momentum
Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.
```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
### 5.3 - Mini-batch with Adam mode
Run the following code to see how the model does with Adam.
```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
### 5.4 - Summary
<table>
<tr>
<td>
**optimization method**
</td>
<td>
**accuracy**
</td>
<td>
**cost shape**
</td>
</tr>
<tr>
<td>
Gradient descent
</td>
<td>
79.7%
</td>
<td>
oscillations
</td>
</tr>
<tr>
<td>
Momentum
</td>
<td>
79.7%
</td>
<td>
oscillations
</td>
</tr>
<tr>
<td>
Adam
</td>
<td>
94%
</td>
<td>
smoother
</td>
</tr>
</table>
Momentum usually helps, but given the small learning rate and the simplistic dataset, its impact is almost negligible. Also, the huge oscillations you see in the cost come from the fact that some minibatches are more difficult than others for the optimization algorithm.
Adam, on the other hand, clearly outperforms mini-batch gradient descent and Momentum. If you run the model for more epochs on this simple dataset, all three methods will lead to very good results. However, you've seen that Adam converges a lot faster.
Some advantages of Adam include:
- Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum)
- Usually works well even with little tuning of hyperparameters (except $\alpha$)
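For reference, a single Adam update on one scalar parameter can be sketched as below, including the bias correction; the assignment's vectorized `update_parameters_with_adam()` applies the same formulas element-wise (the function name here is just for this sketch):

```python
import numpy as np

def adam_step_sketch(theta, grad, v, s, t, lr=0.001,
                     beta1=0.9, beta2=0.999, epsilon=1e-8):
    # Exponentially weighted averages of the gradient and its square.
    v = beta1 * v + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * grad ** 2
    # Bias-corrected estimates (matters most during the first few steps).
    v_hat = v / (1 - beta1 ** t)
    s_hat = s / (1 - beta2 ** t)
    theta = theta - lr * v_hat / (np.sqrt(s_hat) + epsilon)
    return theta, v, s
```

Note that at `t = 1` the bias-corrected step is roughly `lr * sign(grad)`, which is why Adam makes meaningful progress from the very first iteration.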
**References**:
- Adam paper: https://arxiv.org/pdf/1412.6980.pdf
**Copyright 2018 Google LLC.**
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Training a Simple Neural Network, with tensorflow/datasets Data Loading
[](https://colab.research.google.com/github/google/jax/blob/main/docs/notebooks/neural_network_with_tfds_data.ipynb)
_Forked from_ `neural_network_and_data_loading.ipynb`

Let's combine everything we showed in the [quickstart notebook](https://colab.research.google.com/github/google/jax/blob/main/notebooks/quickstart.ipynb) to train a simple neural network. We will first specify and train a simple MLP on MNIST using JAX for the computation. We will use the `tensorflow/datasets` data loading API to load images and labels (because it's pretty great, and the world doesn't need yet another data loading library :P).
Of course, you can use JAX with any API that is compatible with NumPy to make specifying the model a bit more plug-and-play. Here, just for explanatory purposes, we won't use any neural network libraries or special APIs for building our model.
```
import jax.numpy as jnp
from jax import grad, jit, vmap
from jax import random
```
## Hyperparameters
Let's get a few bookkeeping items out of the way.
```
# A helper function to randomly initialize weights and biases
# for a dense neural network layer
def random_layer_params(m, n, key, scale=1e-2):
w_key, b_key = random.split(key)
return scale * random.normal(w_key, (n, m)), scale * random.normal(b_key, (n,))
# Initialize all layers for a fully-connected neural network with sizes "sizes"
def init_network_params(sizes, key):
keys = random.split(key, len(sizes))
return [random_layer_params(m, n, k) for m, n, k in zip(sizes[:-1], sizes[1:], keys)]
layer_sizes = [784, 512, 512, 10]
param_scale = 0.1
step_size = 0.01
num_epochs = 10
batch_size = 128
n_targets = 10
params = init_network_params(layer_sizes, random.PRNGKey(0))
```
## Auto-batching predictions
Let us first define our prediction function. Note that we're defining this for a _single_ image example. We're going to use JAX's `vmap` function to automatically handle mini-batches, with no performance penalty.
```
from jax.scipy.special import logsumexp
def relu(x):
return jnp.maximum(0, x)
def predict(params, image):
# per-example predictions
activations = image
for w, b in params[:-1]:
outputs = jnp.dot(w, activations) + b
activations = relu(outputs)
final_w, final_b = params[-1]
logits = jnp.dot(final_w, activations) + final_b
return logits - logsumexp(logits)
```
Let's check that our prediction function only works on single images.
```
# This works on single examples
random_flattened_image = random.normal(random.PRNGKey(1), (28 * 28,))
preds = predict(params, random_flattened_image)
print(preds.shape)
# Doesn't work with a batch
random_flattened_images = random.normal(random.PRNGKey(1), (10, 28 * 28))
try:
preds = predict(params, random_flattened_images)
except TypeError:
print('Invalid shapes!')
# Let's upgrade it to handle batches using `vmap`
# Make a batched version of the `predict` function
batched_predict = vmap(predict, in_axes=(None, 0))
# `batched_predict` has the same call signature as `predict`
batched_preds = batched_predict(params, random_flattened_images)
print(batched_preds.shape)
```
At this point, we have all the ingredients we need to define our neural network and train it. We've built an auto-batched version of `predict`, which we should be able to use in a loss function. We should be able to use `grad` to take the derivative of the loss with respect to the neural network parameters. Last, we should be able to use `jit` to speed up everything.
## Utility and loss functions
```
def one_hot(x, k, dtype=jnp.float32):
"""Create a one-hot encoding of x of size k."""
return jnp.array(x[:, None] == jnp.arange(k), dtype)
def accuracy(params, images, targets):
target_class = jnp.argmax(targets, axis=1)
predicted_class = jnp.argmax(batched_predict(params, images), axis=1)
return jnp.mean(predicted_class == target_class)
def loss(params, images, targets):
preds = batched_predict(params, images)
return -jnp.mean(preds * targets)
@jit
def update(params, x, y):
grads = grad(loss)(params, x, y)
return [(w - step_size * dw, b - step_size * db)
for (w, b), (dw, db) in zip(params, grads)]
```
## Data Loading with `tensorflow/datasets`
JAX is laser-focused on program transformations and accelerator-backed NumPy, so we don't include data loading or munging in the JAX library. There are already a lot of great data loaders out there, so let's just use them instead of reinventing anything. We'll use the `tensorflow/datasets` data loader.
```
import tensorflow_datasets as tfds
data_dir = '/tmp/tfds'
# Fetch full datasets for evaluation
# tfds.load returns tf.Tensors (or tf.data.Datasets if batch_size != -1)
# You can convert them to NumPy arrays (or iterables of NumPy arrays) with tfds.dataset_as_numpy
mnist_data, info = tfds.load(name="mnist", batch_size=-1, data_dir=data_dir, with_info=True)
mnist_data = tfds.as_numpy(mnist_data)
train_data, test_data = mnist_data['train'], mnist_data['test']
num_labels = info.features['label'].num_classes
h, w, c = info.features['image'].shape
num_pixels = h * w * c
# Full train set
train_images, train_labels = train_data['image'], train_data['label']
train_images = jnp.reshape(train_images, (len(train_images), num_pixels))
train_labels = one_hot(train_labels, num_labels)
# Full test set
test_images, test_labels = test_data['image'], test_data['label']
test_images = jnp.reshape(test_images, (len(test_images), num_pixels))
test_labels = one_hot(test_labels, num_labels)
print('Train:', train_images.shape, train_labels.shape)
print('Test:', test_images.shape, test_labels.shape)
```
## Training Loop
```
import time
def get_train_batches():
# as_supervised=True gives us the (image, label) as a tuple instead of a dict
ds = tfds.load(name='mnist', split='train', as_supervised=True, data_dir=data_dir)
# You can build up an arbitrary tf.data input pipeline
ds = ds.batch(batch_size).prefetch(1)
# tfds.dataset_as_numpy converts the tf.data.Dataset into an iterable of NumPy arrays
return tfds.as_numpy(ds)
for epoch in range(num_epochs):
start_time = time.time()
for x, y in get_train_batches():
x = jnp.reshape(x, (len(x), num_pixels))
y = one_hot(y, num_labels)
params = update(params, x, y)
epoch_time = time.time() - start_time
train_acc = accuracy(params, train_images, train_labels)
test_acc = accuracy(params, test_images, test_labels)
print("Epoch {} in {:0.2f} sec".format(epoch, epoch_time))
print("Training set accuracy {}".format(train_acc))
print("Test set accuracy {}".format(test_acc))
```
We've now used the whole of the JAX API: `grad` for derivatives, `jit` for speedups and `vmap` for auto-vectorization.
We used NumPy to specify all of our computation, and borrowed the great data loaders from `tensorflow/datasets`, and ran the whole thing on the GPU.
# Building a Fast, Flexible, and Secure Machine Learning Platform with Amazon SageMaker and Amazon Redshift
Import the required Python packages.
```
# Import packages
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
import boto3
import json
```
## Obtain parameters from AWS CloudFormation
Retrieve the parameters that were configured in AWS CloudFormation.
```
# Please edit stack name
stack_name = 'SageMakerRedshift'
cfn = boto3.client('cloudformation')
response = cfn.describe_stacks(StackName=stack_name)['Stacks'][0]
for item in response['Parameters']:
if item['ParameterKey'] == 'MasterUsername':
db_user = item['ParameterValue']
elif item['ParameterKey'] == 'DatabaseName':
db_name = item['ParameterValue']
elif item['ParameterKey'] == 'PortNumber':
db_port = item['ParameterValue']
for item in response['Outputs']:
if item['OutputKey'] == 'ClusterEndpoint':
cluster_endpoint = item['OutputValue'].split(':')[0]
elif item['OutputKey'] == 'ClusterName':
cluster_name = item['OutputValue']
elif item['OutputKey'] == 'RedshiftBucketAccessRoleArn':
redshift_role = item['OutputValue']
# show parameters
print('stack_name: {}'.format(stack_name))
print('db_user: {}'.format(db_user))
print('db_name: {}'.format(db_name))
print('db_port: {}'.format(db_port))
print('cluster_endpoint: {}'.format(cluster_endpoint))
print('cluster_name: {}'.format(cluster_name))
print('redshift_role: {}'.format(redshift_role))
```
## Get temporary credentials and connect to Redshift
Obtain [temporary database user credentials](https://docs.aws.amazon.com/ja_jp/redshift/latest/mgmt/generating-iam-credentials-cli-api.html) for accessing Amazon Redshift.
```
# get temporal cluster credentials
redshift = boto3.client('redshift')
credentials = redshift.get_cluster_credentials(
DbUser=db_user,
DbName=db_name,
ClusterIdentifier=cluster_name,
DurationSeconds=3600,
AutoCreate=False
)
tmp_db_user = credentials['DbUser']
tmp_db_password = credentials['DbPassword']
```
Connect to Redshift using psycopg2, a PostgreSQL driver for Python.
```
# connect to Redshift
conn = psycopg2.connect(
host=cluster_endpoint,
port=db_port,
dbname=db_name,
user=tmp_db_user,
password=tmp_db_password
)
```
## Create tables and load [sample data](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-create-sample-db.html) from Amazon S3
Here we use the [dataset from the official Redshift documentation](https://docs.aws.amazon.com/ja_jp/redshift/latest/gsg/rs-gsg-create-sample-db.html).
First, create the tables.
```
sql_create_table = [
"""
create table users(
userid integer not null distkey sortkey,
username char(8),
firstname varchar(30),
lastname varchar(30),
city varchar(30),
state char(2),
email varchar(100),
phone char(14),
likesports boolean,
liketheatre boolean,
likeconcerts boolean,
likejazz boolean,
likeclassical boolean,
likeopera boolean,
likerock boolean,
likevegas boolean,
likebroadway boolean,
likemusicals boolean);
""",
"""
create table venue(
venueid smallint not null distkey sortkey,
venuename varchar(100),
venuecity varchar(30),
venuestate char(2),
venueseats integer);
""",
"""
create table category(
catid smallint not null distkey sortkey,
catgroup varchar(10),
catname varchar(10),
catdesc varchar(50));
""",
"""
create table date(
dateid smallint not null distkey sortkey,
caldate date not null,
day character(3) not null,
week smallint not null,
month character(5) not null,
qtr character(5) not null,
year smallint not null,
holiday boolean default('N'));
""",
"""
create table event(
eventid integer not null distkey,
venueid smallint not null,
catid smallint not null,
dateid smallint not null sortkey,
eventname varchar(200),
starttime timestamp);
""",
"""
create table listing(
listid integer not null distkey,
sellerid integer not null,
eventid integer not null,
dateid smallint not null sortkey,
numtickets smallint not null,
priceperticket decimal(8,2),
totalprice decimal(8,2),
listtime timestamp);
""",
"""
create table sales(
salesid integer not null,
listid integer not null distkey,
sellerid integer not null,
buyerid integer not null,
eventid integer not null,
dateid smallint not null sortkey,
qtysold smallint not null,
pricepaid decimal(8,2),
commission decimal(8,2),
saletime timestamp);
"""
]
with conn.cursor() as cur:
for sql in sql_create_table:
cur.execute(sql)
print('Done: ', sql)
```
Next, load the data from S3 using the COPY command.
When running the COPY command, you must provide credentials that allow the cluster to access the objects in S3. Here we use the recommended authentication method, an IAM role. For details, see
[Authorizing COPY, UNLOAD, and CREATE EXTERNAL SCHEMA operations using IAM roles](https://docs.aws.amazon.com/ja_jp/redshift/latest/mgmt/copy-unload-iam-role.html).
```
sql_copy=[
"""
copy users from 's3://awssampledbuswest2/tickit/allusers_pipe.txt'
credentials 'aws_iam_role={}'
delimiter '|' region 'us-west-2';
""",
"""
copy venue from 's3://awssampledbuswest2/tickit/venue_pipe.txt'
credentials 'aws_iam_role={}'
delimiter '|' region 'us-west-2';
""",
"""
copy category from 's3://awssampledbuswest2/tickit/category_pipe.txt'
credentials 'aws_iam_role={}'
delimiter '|' region 'us-west-2';
""",
"""
copy date from 's3://awssampledbuswest2/tickit/date2008_pipe.txt'
credentials 'aws_iam_role={}'
delimiter '|' region 'us-west-2';
""",
"""
copy event from 's3://awssampledbuswest2/tickit/allevents_pipe.txt'
credentials 'aws_iam_role={}'
delimiter '|' timeformat 'YYYY-MM-DD HH:MI:SS' region 'us-west-2';
""",
"""
copy listing from 's3://awssampledbuswest2/tickit/listings_pipe.txt'
credentials 'aws_iam_role={}'
delimiter '|' region 'us-west-2';
""",
"""
copy sales from 's3://awssampledbuswest2/tickit/sales_tab.txt'
credentials 'aws_iam_role={}'
delimiter '\t' timeformat 'MM/DD/YYYY HH:MI:SS' region 'us-west-2';
"""
]
%%time
with conn.cursor() as cur:
for sql in sql_copy:
cur.execute(sql.format(redshift_role))
print('Done: ', sql)
```
## Play with data
Run SQL queries and store only the required subset of the data in a pandas DataFrame.
```
# Get definition for the sales table.
sql="""
SELECT *
FROM pg_table_def
WHERE tablename = 'sales';
"""
%time pd.read_sql(sql=sql, con=conn)
# Find total sales on a given calendar date.
sql="""
SELECT sum(qtysold)
FROM sales, date
WHERE sales.dateid = date.dateid
AND caldate = '2008-01-05';
"""
%time pd.read_sql(sql=sql, con=conn)
# Find top 10 buyers by quantity.
sql="""
SELECT firstname, lastname, total_quantity
FROM (SELECT buyerid, sum(qtysold) total_quantity
FROM sales
GROUP BY buyerid
ORDER BY total_quantity desc limit 10) Q, users
WHERE Q.buyerid = userid
ORDER BY Q.total_quantity desc;
"""
%time df = pd.read_sql(sql=sql, con=conn)
df.shape
df
# Find events in the 99.9 percentile in terms of all time gross sales.
sql="""
SELECT eventname, total_price
FROM (SELECT eventid, total_price, ntile(1000) over(order by total_price desc) as percentile
FROM (SELECT eventid, sum(pricepaid) total_price
FROM sales
GROUP BY eventid)) Q, event E
WHERE Q.eventid = E.eventid
AND percentile = 1
ORDER BY total_price desc;
"""
%time df = pd.read_sql(sql=sql, con=conn)
df.shape
df
```
Visualize the DataFrame.
```
df.total_price.hist()
plt.xlabel('Total price')
plt.ylabel('Histogram')
```
Finally, close the psycopg2 connection.
```
conn.close()
```
# Intro
Notebook revolving around the use and concepts of [Tensorflow](https://www.tensorflow.org/).
```
import os
from os.path import join
import sys
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import tensorflow as tf
%matplotlib notebook
#%matplotlib inline
models_data_folder = "/Users/amartinelli/Documents/models/"
```
# [Save and Restore Variables](https://www.tensorflow.org/programmers_guide/saved_model)
```
# dummy variables
#v1 = tf.get_variable("v1", shape=[3], initializer=tf.zeros_initializer)
#v2 = tf.get_variable("v2", shape=[5], initializer=tf.zeros_initializer)
v1 = tf.Variable(tf.constant(0), name='v1')
v2 = tf.Variable(tf.constant(5), name='v2')
# dummy operations
inc_v1 = v1.assign(v1+1)
dec_v2 = v2.assign(v2-1)
# Save variables
# def init op and saver
init_op = tf.global_variables_initializer()
saver = tf.train.Saver()
# run some operations and save sessions
with tf.Session() as sess:
sess.run(init_op)
inc_v1.op.run()
dec_v2.op.run()
save_path = saver.save(sess,
join(models_data_folder, 'tmp', "model.ckpt"))
print("Model saved in {}".format(save_path))
# test behavior in new session (need to rerun initializer)
with tf.Session() as sess:
sess.run(init_op)
print(v1.eval())
print(inc_v1.eval())
print(v1.eval())
# Restore Variables
# need to redefine the variable
v1 = tf.Variable(tf.constant(0), name='v1')
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess,
join(models_data_folder, 'tmp', "model.ckpt"))
#now v1 should have the value we previously saved
print(v1.eval())
```
# [Save and Restore a Model](https://www.tensorflow.org/programmers_guide/saved_model)
Uses *SavedModelBuilder* instead of *Saver*. Should this be done only for serving? In what way can I reload a model saved with the former and retrain?
```
# directory where model will be exported
# include version info in model path as required by TF
version = 0
export_dir = join(models_data_folder, "tf_test_models_export", str(version))
# dummy model
x = tf.Variable(tf.constant(0), name='x')
y = tf.Variable(tf.constant(5), name='y')
f = tf.multiply(x, y, name='f')
# save model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
#consider difference between eval and run
#see: https://stackoverflow.com/questions/33610685/in-tensorflow-what-is-the-difference-between-session-run-and-tensor-eval
#sess.run(f, feed_dict={x:3.0, y:5.0})
fval = f.eval(feed_dict={x:3.0, y:5.0})
print(fval)
# Init builder
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
# Build info for inputs and outputs tensors
#??Is the key associated with the tensor name?
inputs = {
'x' : tf.saved_model.utils.build_tensor_info(x),
'y' : tf.saved_model.utils.build_tensor_info(y)
}
outputs = {
'f' : tf.saved_model.utils.build_tensor_info(f)
}
# Define signature (set of inputs and outputs for the graph)
prediction_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs=inputs,
outputs=outputs,
# method used for the inference
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
)
)
# Add meta-graph (dataflow graph, variables, assets, and signatures)
# to the builder
builder.add_meta_graph_and_variables(
sess=sess,
tags=[tf.saved_model.tag_constants.SERVING],
# ??
signature_def_map={
'predict' : prediction_signature
},
# ??
#legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
)
# Finally save builder
builder.save()
# Restore model
# redefine target
x = tf.Variable(tf.constant(1), name='x')
y = tf.Variable(tf.constant(5), name='y')
#f = tf.Operation(None, name='f')
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
#print(f.eval())
mg = tf.saved_model.loader.load(
sess=sess,
tags=[tf.saved_model.tag_constants.SERVING],
export_dir=export_dir
)
f = tf.get_default_graph().get_operation_by_name("f")
# ??Why session graph keeps getting new operations?
# isn't it clean every time we exit the "with" scope
#print(sess.graph.get_operations())
print(sess.run(f))
```
# Serving Client
Needs `pip install grpcio grpcio-tools`, plus the TensorFlow Serving API files.
```
from grpc.beta import implementations
# reference local copy of Tensorflow Serving API Files
sys.path.append(os.path.join(os.getcwd(), *[os.pardir]*2, 'ext_libs'))
import lib.predict_pb2 as predict_pb2
import lib.prediction_service_pb2 as prediction_service_pb2
host='127.0.0.1'
port=9000
channel = implementations.insecure_channel(host, int(port))
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
# build request
request = predict_pb2.PredictRequest()
request.model_spec.name = 'ed' # model name, as given to bazel script
request.model_spec.signature_name = 'predict' # as defined in ModelBuilder
# define inputs
x = 3
y = 4
x_tensor = tf.contrib.util.make_tensor_proto(x, dtype=tf.int32)
y_tensor = tf.contrib.util.make_tensor_proto(y, dtype=tf.int32)
request.inputs['x'].CopyFrom(x_tensor)
request.inputs['y'].CopyFrom(y_tensor)
# call prediction on the server
result = stub.Predict(request, timeout=10.0)
result
```
## Coding Exercise #0707
### 1. Convolutional Neural Network (color images):
```
import numpy as np
import pandas as pd
# import tensorflow as tf
# from keras.datasets.cifar10 import load_data
import tensorflow.compat.v1 as tf
from tensorflow.keras.datasets.cifar10 import load_data
import matplotlib.pyplot as plt
tf.disable_v2_behavior()
%matplotlib inline
```
#### 1.1. Download the data:
More information about the dataset can be found [here](https://www.cs.toronto.edu/~kriz/cifar.html).
```
(X_train, y_train), (X_test, y_test) = load_data()
n_train_size = X_train.shape[0]
```
#### 1.2. Take a look at the dataset:
```
# Images already reshaped as 32x32.
# 3 Color channels.
# y is not one-hot-encoded yet.
print("Training data X shape: {}".format(X_train.shape))
print("Training data y shape: {}".format(y_train.shape))
print("\n")
print("Testing data X shape: {}".format(X_test.shape))
print("Testing data y shape: {}".format(y_test.shape))
```
Visualization.
```
i_image= 123 # Image index. You can change it at will.
a_single_image= X_train[i_image,:,:,:]
plt.imshow(a_single_image) # Display as a color image.
plt.show()
# Check for the minimum and maximum pixel value.
print("MIN : {}".format(a_single_image.min()))
print("MAX : {}".format(a_single_image.max()))
```
#### 1.3. Data preprocessing:
```
# Scaling.
X_train = X_train/255
X_test = X_test/255
# One-Hot-Encoding.
y = np.concatenate([y_train[:,0],y_test[:,0]],axis=0)
y = np.array(pd.get_dummies(y, drop_first=False)) # drop_first = False for one-hot-encoding.
y_train = y[:n_train_size,:]
y_test = y[n_train_size:,:]
```
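The one-hot encoding via `pd.get_dummies` above is equivalent to indexing the rows of an identity matrix, which makes a handy sanity check (illustrative only, with a toy label array):

```python
import numpy as np
import pandas as pd

labels = np.array([0, 2, 1, 2])
# pd.get_dummies builds one indicator column per class value...
via_pandas = np.array(pd.get_dummies(labels))
# ...which matches selecting rows of a k x k identity matrix.
via_eye = np.eye(3)[labels]
```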
#### 1.4. Define the hyperparameters and placeholders:
```
batch_size = 8
n_epochs = 50001
learn_rate = 0.0001
drop_prob = 0.5 # For the dropout layer.
X_ph = tf.placeholder(tf.float32, [None, 32, 32, 3]) # 'None' means any number of rows (observations or batch_size)
y_ph = tf.placeholder(tf.float32,[None, 10])
drop_prob_ph = tf.placeholder(tf.float32) # The drop probability at the dropout layer is a hyperparameter.
```
#### 1.5. Define the Variables:
The configuration of the first convolution layer is as following:
- Kernel height = 7.
- Kernel width = 7.
- In_channels = **3 (color)**.
- Out_channels = 32 (number of feature maps).
We need Variables with the following shapes:
- Shape of the weight matrix = [kernel_height, kernel_width, in_channels, out_channels].
- Shape of the bias = [out_channels].
```
# Variables are defined according to the specifications mentioned above.
W1 = tf.Variable(initial_value=tf.random_normal([7,7,3,32], mean=0, stddev=0.1))
b1 = tf.Variable(initial_value=tf.fill([32], 0.1))
```
The configuration of the second convolution layer is as following:
- Kernel height = 7.
- Kernel width = 7.
- In_channels = 32 (out_channels from the previous convolution layer).
- Out_channels = 64 (number of feature maps).
Again, we need Variables with the following shapes:
- Shape of the weight matrix = [kernel_height, kernel_width, in_channels, out_channels].
- Shape of the bias = [out_channels].
```
# Variables are defined according to the specifications mentioned above.
W2 = tf.Variable(initial_value=tf.random_normal([7,7,32,64], mean=0, stddev=0.1))
b2 = tf.Variable(initial_value=tf.fill([64], 0.1))
```
We do the following considerations for the flattened fully connected layer:
- We will apply convolution twice with padding and there will be no image size reduction.
- We will also apply max pooling twice with stride = 2 (vertically and horizontally).
- At each max pooling with stride = 2, the image size is halved. Thus, **(32/2)/2 = 8** will be the size (vertical and horizontal) of the resulting final image.
- In the previous layer there were 64 output channels (feature maps).
- Considering all these facts, there should be **8x8x64 = 4096** nodes in the flattened layer.
- Finally, we will shrink the output from this layer to 1024.
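The size bookkeeping above can be verified with a few lines of arithmetic (a quick standalone check, not part of the model code; it assumes stride-2 pooling with 'SAME' padding on an even-sized input, where each pool exactly halves the spatial size):

```python
def pooled_size(size, n_pools, stride=2):
    # Each stride-2 max-pool halves the spatial size for even inputs.
    for _ in range(n_pools):
        size = size // stride
    return size

side = pooled_size(32, n_pools=2)   # 32 -> 16 -> 8
flattened = side * side * 64        # 64 feature maps in the last conv layer
```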
```
# Variables are defined according to the specifications mentioned above.
W3 = tf.Variable(initial_value=tf.random_normal([4096,1024], mean=0, stddev=0.1))
b3 = tf.Variable(initial_value=tf.fill([1024], 0.1))
```
We do the following considerations for the final output layer:
- There are 1024 nodes to match with the output from the previous layer.
- We should shrink the output once more because there are 10 different labels (digits 0~9).
```
# Variables are defined according to the specifications mentioned above.
W4 = tf.Variable(initial_value=tf.random_normal([1024,10], mean=0, stddev=0.1))
b4 = tf.Variable(initial_value=tf.fill([10], 0.1))
```
#### 1.6. Define the deep learning model (CNN):
Explanation of the arguments:
- padding = 'SAME' to apply a padding. padding = 'VALID' to apply no padding.
- ksize = [1, kernel_height, kernel_width, 1]
- strides = [1, stride_vertical, stride_horizontal,1]
```
# 1st Convolution layer.
y1 = tf.nn.conv2d(X_ph, W1, strides=[1, 1, 1, 1], padding='SAME') + b1
conv1 = tf.nn.relu(y1) # Apply the ReLu activation function.
# 1st Pooling layer.
pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# 2nd Convolution layer.
y2 = tf.nn.conv2d(pool1, W2, strides=[1, 1, 1, 1], padding='SAME') + b2
conv2 = tf.nn.relu(y2) # Apply the ReLu activation function.
# 2nd Pooling layer.
pool2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Flattened full layer.
conv2_flattened = tf.reshape(pool2, [-1,4096]) # 8x8x64 = 4096.
y3 = tf.matmul(conv2_flattened, W3) + b3
full_layer = tf.nn.relu(y3) # Apply the ReLu activation function.
# Dropout layer.
dropout_layer = tf.nn.dropout(full_layer, rate = drop_prob_ph)
# Output layer.
y_model = tf.matmul(dropout_layer, W4) + b4 # No activation function. Softmax at the output layer is optional.
```
#### 1.7. Define the loss function and the optimizer:
```
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_ph, logits=y_model))
optimizer = tf.train.AdamOptimizer(learning_rate = learn_rate)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
```
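To make the loss concrete: `softmax_cross_entropy_with_logits_v2` computes, per example, the negative log of the softmax probability assigned to the true class. A NumPy sketch of that computation (illustrative, not the TensorFlow kernel):

```python
import numpy as np

def softmax_xent_sketch(labels, logits):
    # Numerically stabilized log-softmax, then cross-entropy
    # against one-hot labels, one value per example (row).
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.sum(labels * log_probs, axis=1)
```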
#### 1.8. Training and Testing:
```
with tf.Session() as sess:
sess.run(init)
for i in range(n_epochs):
idx_rnd = np.random.choice(range(n_train_size),batch_size,replace=False) # Random sampling w/o replacement for the batch indices.
batch_X, batch_y = X_train[idx_rnd,:,:] , y_train[idx_rnd] # Sample a batch!
my_feed = {X_ph:batch_X, y_ph:batch_y, drop_prob_ph:drop_prob}
sess.run(train, feed_dict = my_feed)
if i % 500 == 0:
correct_predictions = tf.equal(tf.argmax(y_ph, axis=1), tf.argmax(y_model, axis=1)) # In argmax(), axis=1 means horizontal direction.
accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32)) # Recast the Boolean as float32 first. Then calculate the mean.
my_feed = {X_ph:X_test, y_ph:y_test, drop_prob_ph:0.0} # No dropout for testing.
accuracy_value = sess.run(accuracy, feed_dict = my_feed)
print("Step = {} , Accuracy = {:5.3f} \n".format(i, accuracy_value))
```
```
#Import python packages. Some may need to be installed using the Python Package Manager.
import os
import datetime
import exifread
from PIL import Image
import wikipedia
#This is the only variable that needs to be set. It is the path to the folder of images.
path = "C:\\1_projects\\138_fedgis2021\\images\\"
#This function converts the coordinates contained in the exif to decimal degrees.
def _convert_to_degress(value):
d = float(value.values[0].num) / float(value.values[0].den)
m = float(value.values[1].num) / float(value.values[1].den)
s = float(value.values[2].num) / float(value.values[2].den)
return d + (m / 60.0) + (s / 3600.0)
#This code reads the exif data, scrapes wikipedia, and plots each location in a geodatabase.
arcpy.env.workspace = path
current_time = datetime.datetime.now().strftime("%B_%d_%Y_%I_%M_%S%p")
arcpy.CreateFileGDB_management(path, current_time + '.gdb')
SR = arcpy.SpatialReference(4326)
new_point = arcpy.CreateFeatureclass_management(path + current_time + '.gdb', "Pictures", 'POINT', spatial_reference=SR)
fc = new_point[0]
arcpy.AddField_management(fc, "Name", "TEXT", "", "", 100, "Name")
arcpy.AddField_management(fc, "Image_Link", "TEXT", "", "", 100, "Image_Link")
arcpy.AddField_management(fc, "Wiki_Link", "TEXT", "", "", 200, "Wiki_link")
for filename in os.listdir(path):
if filename.endswith(".JPG") or filename.endswith(".PNG") or filename.endswith(".jpg") or filename.endswith(".png"):
im = Image.open(os.path.join(path, filename))
tags = {}
with open(os.path.join(path, filename), 'rb') as f:
tags = exifread.process_file(f, details=False)
if "GPS GPSLatitude" in tags.keys():
lat = _convert_to_degress(tags["GPS GPSLatitude"])
latRef = tags["GPS GPSLatitudeRef"]
lngRef = tags["GPS GPSLongitudeRef"]
if str(latRef) == 'S':
lat = -lat
lng = _convert_to_degress(tags["GPS GPSLongitude"])
if str(lngRef) == 'W':
lng = -lng
name_search = wikipedia.geosearch(lat, lng, results=1, radius=10000)
name = name_search[0]
wiki = "https://en.wikipedia.org/wiki/" + name.replace(" ", "_")
with arcpy.da.InsertCursor(fc, ['SHAPE@', 'Name', "Image_Link", "Wiki_link"]) as cursor:
coordinates = arcpy.Point(lng,lat)
cursor.insertRow((coordinates, str(name).replace(",",""), filename, wiki))
print("Export complete.")
```
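The `_convert_to_degress` helper above can be sanity-checked with plain (numerator, denominator) pairs. This standalone sketch (hypothetical coordinates, independent of exifread's `Ratio` objects) reproduces the same arithmetic:

```python
# Each GPS component is stored in EXIF as a rational (numerator, denominator) pair.
def dms_to_decimal(d, m, s):
    # degrees + minutes/60 + seconds/3600
    return d[0]/d[1] + (m[0]/m[1])/60.0 + (s[0]/s[1])/3600.0

# 38 deg 53' 23" converts to roughly 38.8897 decimal degrees
print(round(dms_to_decimal((38, 1), (53, 1), (23, 1)), 4))
```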
# Example: Regenerating Data from
# [R. Wu et al. / Elec Acta 54 25 (2010) 7394–7403](http://www.sciencedirect.com/science/article/pii/S0013468610009503)
Import the modules
```
import scipy as sp
import numpy as np
import openpnm as op
import matplotlib.pyplot as plt
import openpnm.models.geometry as gm
import openpnm.topotools as tt
%matplotlib inline
np.random.seed(10)
```
Set the workspace loglevel to not print anything
```
ws = op.Workspace()
ws.settings["loglevel"] = 50
%run shared_funcs.ipynb
```
Because the network sizes are randomly generated within a given range, we can run the simulation multiple times and obtain an average
```
x_values = []
y_values = []
for ensemble in range(10):
x_ensemble, y_ensemble = simulation(n=8)
x_values.append(x_ensemble)
y_values.append(y_ensemble)
x_values = np.asarray(x_values).flatten()
y_values = np.asarray(y_values).flatten()
plt.figure()
from matplotlib.font_manager import FontProperties
fontP = FontProperties()
fontP.set_size('small')
wu_average_x_values = [0.004, 0.021, 0.052, 0.081, 0.129, 0.162, 0.186, 0.219, 0.261,
0.286, 0.324, 0.363, 0.42, 0.478, 0.531, 0.586, 0.64, 0.698, 0.747, 0.802]
wu_average_y_values = [0.118, 0.113, 0.105, 0.096, 0.085, 0.078, 0.07, 0.062, 0.054, 0.049, 0.04,
0.033, 0.027, 0.02, 0.012, 0.006, 0.003, 0.002, 0.002, 0.002]
p1, = plt.plot(x_values, y_values, 'ko')
p2, = plt.plot(wu_average_x_values, wu_average_y_values, 'ro')
plt.title('normalized diffusivity versus saturation')
plt.xlabel('saturation')
plt.ylabel(r'$\frac{D_e}{D_b}$')
#plt.ylim([0, .15])
plt.xlim([0, 1])
plt.legend([p1, p2],
[r'$\frac{D_e}{D_b} = f(\epsilon, \phi)g(s, \phi)$' + '\n' + r'$X = 1.8$' +
'\n' + r'$Z_t = 2.0$' + '\n' + r'$Z_i = 4.0$' + '\n' + r'$\beta = 1.0$' + '\n' + r'$n = 14$', "Wu's results"])
plt.show()
```
And finally extract the g(S) function for relative diffusivity.
```
plt.figure()
normalize_factor = max(y_values)
g_values = y_values / normalize_factor
wu_saturation = [0.004, 0.066, 0.0930, .119, 0.14, 0.175, 0.209, 0.24, 0.282, 0.32, 0.371, 0.413,
0.464, 0.517, 0.605, 0.672, 0.761, 0.831, 0.898, 0.948, 0.996]
wu_g_values = [0.986, 0.838, 0.758, 0.701, 0.651, 0.576, 0.516, 0.456, 0.39, 0.335, 0.268, 0.221,
0.171, 0.111, 0.067, 0.04, 0.019, 0.007, 0.003, 0.003, 0.003]
p1, = plt.plot(x_values, g_values, 'ko')
p2, = plt.plot(wu_saturation, wu_g_values, 'ro')
plt.title('g(s) versus saturation')
plt.xlabel('saturation')
plt.ylabel('g(s)')
plt.legend([p1, p2],
["our values", "Wu's values (fitted curve)"], loc='center left', bbox_to_anchor=(1, 0.5), prop = fontP)
plt.show()
```
```
import pandas as pd
import numpy as np
```
# Free Cash Flow Valuation (DCF)
There are four models:
1. Zero-growth model
2. Constant-growth model
3. Two-stage model
4. Three-stage model

They differ in which free cash flows are used and how they are discounted.
**Calculation steps**:
1. Compute the free cash flows and discount them with the corresponding method ($\star\star\star\star\star$, the most important, this is what the code solves)
2. Equity value = result of step 1 + financial assets + long-term equity investments - company debt
3. Compute the minority-shareholder ratio
4. Value attributable to shareholders of the listed company = equity value $\times$ (1 - minority-shareholder ratio)
5. Intrinsic value per share = value attributable to shareholders of the listed company / number of shares
Where:
- Free cash flow from operating assets = the incremental cash inflow assuming the company maintains its current scale of operations = net cash flow from operating activities - maintenance capital expenditure = net cash flow from operating activities - depreciation of fixed assets - amortization of intangible assets and long-term deferred expenses - losses on disposal of long-term assets
- $WACC=k_d\times\frac{D}{D+E}\times(1-t)+k_e\times\frac{E}{D+E}$. The cost of debt $k_d$ = total debt cost / average debt capital $\times$ 100% = (finance expenses + exchange gains) / ((opening debt capital + closing debt capital) / 2); the cost of equity $k_e$ should exceed the contemporaneous government-bond yield plus an equity risk premium, and we generally set it to 8%; $t$ is the company's effective income-tax rate = 1 - net profit / pre-tax profit.
- Company debt = interest-bearing debt
- Minority-shareholder ratio = $\frac{\text{minority interests}}{\text{total shareholders' equity}}$
- Number of shares = market capitalization / share price
$$
\begin{aligned}
&\text{Zero-growth model: } V=\frac{FCF}{WACC}\\
&\text{Constant-growth model: } V=\frac{FCF(1+g)}{WACC-g}\\
&\text{Two-stage model: } V=\sum_{t=1}^n\frac{{FCF}_t}{(1+WACC)^t}+\frac{TV}{(1+WACC)^n},\ \ \text{where } TV=\frac{FCF_n(1+g_2)}{WACC-g_2}\\
&\text{Three-stage model: } V=\sum_{t=1}^n\frac{{FCF}_0(1+g_1)}{(1+WACC)^t}+\sum_{t=n+1}^m\frac{{FCF}_n(1+g_2)}{(1+WACC)^t}+\frac{FCF_{n+m}(1+g_3)}{(WACC-g_3)(1+WACC)^{n+m}}\\
\end{aligned}
$$
The zero-growth model suits mature, stable companies with no growth, whose annual free cash flow stays at a steady level, much like a perpetuity; if such a company pays out all of its free cash flow as cash dividends, the result is very close to that of the dividend discount model.
The constant-growth model suits mature companies whose future free cash flows grow very slowly.
In the two-stage model, the investor's required return (WACC) should at least exceed the overall economic growth rate; the perpetual growth rate $g_2$ is normally smaller than WACC, since otherwise the company would eventually outgrow the overall economy.
In the three-stage model, every company is assumed to pass through three phases: growth, transition, and stability. The growth rates decline from phase to phase, with the stable phase growing perpetually at a low constant rate.
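As a minimal sketch of the first three formulas above (hypothetical inputs, separate from the notebook's data pipeline below):

```python
def zero_growth_value(fcf, wacc):
    # V = FCF / WACC: a perpetuity of constant free cash flow
    return fcf / wacc

def constant_growth_value(fcf, wacc, g):
    # V = FCF * (1 + g) / (WACC - g); requires g < WACC
    return fcf * (1 + g) / (wacc - g)

def two_stage_value(fcf, wacc, g1, g2, n):
    # n years of explicit forecast at g1, then a terminal perpetuity growing at g2
    pv_stage1 = sum(fcf * (1 + g1) ** t / (1 + wacc) ** t for t in range(1, n + 1))
    terminal = fcf * (1 + g1) ** n * (1 + g2) / (wacc - g2)
    return pv_stage1 + terminal / (1 + wacc) ** n

fcf, wacc = 100.0, 0.09
print(zero_growth_value(fcf, wacc))
print(constant_growth_value(fcf, wacc, 0.03))
print(two_stage_value(fcf, wacc, 0.2, 0.03, 2))
```

The `fcf_discounted` function below applies the same formulas to real statement data.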
```
#=== Variables
file_name = 'sz000977' # stock ticker
time = 4 # use the most recent n periods (quarters) of data
zero_change = False # zero-growth model?
one_change = False # constant-growth model?
two_change = True # two-stage model?
three_change = False # three-stage model?
g1, g2, g3 = 0.2, 0.03, 0.01 # growth rates: the g for each of the last three models; with the constant-growth model the last two need not be changed
t1, t2 = np.arange(1, 3), np.arange(1,2) # years in each stage, needed by the two- and three-stage models; note the actual value is the maximum minus one
#=== functions
def read_file(file_name):
    # Read basic daily stock data
df = pd.read_csv(r'youraddress\%s.csv' % file_name, encoding='GBK', skiprows=1, parse_dates=['交易日期'])
df = df[['股票代码', '股票名称', '交易日期', '总市值', '净利润TTM', '收盘价']]
print(df.tail(5))
    # Read financial-statement data
finance_df = pd.read_csv(r'youraddress\%s.csv' % file_name, parse_dates=['财报日期', '财报发布日期'], skiprows=1, encoding='gbk')
finance_df = finance_df.resample('Q', on='财报日期').first()
del finance_df['财报日期']
finance_df.reset_index(inplace=True)
finance_df.dropna(subset=['财报发布日期'], inplace=True)
finance_df.sort_values(by='财报发布日期', inplace=True)
return df, finance_df
def merge_data(df, finance_df):
add_columns = ['B_货币资金',
'B_交易性金融资产',
'B_衍生金融资产',
'B_应收票据及应收账款',
'B_应收票据',
'B_应收账款',
'B_应收款项融资',
'B_应收利息',
'B_应收股利',
'B_其他应收款',
'B_买入返售金融资产',
'B_发放贷款及垫款',
'B_可供出售金融资产',
'B_持有至到期投资',
'B_长期应收款',
'B_长期股权投资',
'B_投资性房地产',
'B_所有者权益(或股东权益)合计',
'C_经营活动产生的现金流量净额',
'B_短期借款',
'B_交易性金融负债',
'B_应付利息',
'B_应付短期债券',
'B_一年内到期的非流动负债',
'B_长期借款',
'B_应付债券',
'B_租赁负债',
'B_长期应付款(合计)',
'R_财务费用',
'R_汇兑收益',
'R_四、利润总额',
'R_减:所得税费用',
'C_固定资产折旧、油气资产折耗、生产性物资折旧', 'C_无形资产摊销', 'C_长期待摊费用摊销', 'C_处置固定资产、无形资产和其他长期资产的损失',
'B_少数股东权益']
col = ['财报发布日期', '财报日期'] + add_columns
stock_df = pd.merge_asof(df, finance_df[col], left_on='交易日期', right_on='财报日期', direction='backward')
print(stock_df.columns)
return stock_df
def data_been_prepared(now_df, stock_df):
now_df[['股票代码', '股票名称', '交易日期', '总市值', '财报发布日期', '财报日期', '净利润TTM', '收盘价']] = stock_df[['股票代码', '股票名称', '交易日期', '总市值', '财报发布日期', '财报日期', '净利润TTM', '收盘价']]
now_df['金融资产'] = 0
now_df['公司债务'] = 0
for factor1 in ['B_货币资金',
'B_交易性金融资产',
'B_衍生金融资产',
'B_应收票据及应收账款',
'B_应收票据',
'B_应收账款',
'B_应收款项融资',
'B_应收利息',
'B_应收股利',
'B_其他应收款',
'B_买入返售金融资产',
'B_发放贷款及垫款',
'B_可供出售金融资产',
'B_持有至到期投资',
'B_长期应收款',
'B_投资性房地产',
'B_长期股权投资']:
now_df['金融资产'] += stock_df[factor1]
for factor2 in ['B_短期借款',
'B_交易性金融负债',
'B_应付利息',
'B_应付短期债券',
'B_一年内到期的非流动负债',
'B_长期借款',
'B_应付债券',
'B_租赁负债',
'B_长期应付款(合计)']:
now_df['公司债务'] += stock_df[factor2]
now_df['债务资本成本总额'] = stock_df['R_财务费用'] + stock_df['R_汇兑收益']
now_df['经营资产自由现金流'] = stock_df['C_经营活动产生的现金流量净额'] - stock_df['C_固定资产折旧、油气资产折耗、生产性物资折旧'] - stock_df['C_无形资产摊销'] - stock_df['C_长期待摊费用摊销'] - stock_df['C_处置固定资产、无形资产和其他长期资产的损失']
now_df['实际企业所得税税率'] = 1 - ((stock_df['R_四、利润总额'] - stock_df['R_减:所得税费用']) / stock_df['R_四、利润总额'])
now_df['少数股东权益比例'] = stock_df['B_少数股东权益'] / stock_df['B_所有者权益(或股东权益)合计']
now_df['债务占比'] = now_df['公司债务'] / (stock_df['B_所有者权益(或股东权益)合计'] + now_df['公司债务'])
now_df.drop_duplicates(subset=['财报日期'], inplace=True)
now_df.reset_index(inplace=True)
del now_df['index']
print(now_df.tail(10))
return now_df
def cal_WACC(now_df, time):
WACC = (now_df['债务资本成本总额'] / ((now_df['公司债务'] + now_df['公司债务'].shift(time)) / 2) * now_df['债务占比'] * (1-now_df['实际企业所得税税率'])) + (0.09 * (1-now_df['债务占比']))
return WACC.tolist()[-time]
def fcf_discounted(now_df, WACC, time, zero_change, one_change, two_change, three_change, g1, g2, g3, t1, t2):
value = (now_df.loc[: ,'金融资产'].tolist()[-time] - now_df.loc[: ,'公司债务'].tolist()[-time])
if zero_change == True:
FCF = now_df.loc[: ,'经营资产自由现金流'].tolist()[-time] / WACC
if one_change == True:
FCF = (now_df.loc[: ,'经营资产自由现金流'].tolist()[-time] * (1+g1)) / (WACC - g1)
if two_change == True:
temp_sum = 0
for _ in t1:
temp = now_df.loc[: ,'经营资产自由现金流'].tolist()[-time] * ((1+g1) ** _) / ((1+WACC) ** _)
temp_sum = temp + temp_sum
FCF = ((now_df.loc[: ,'经营资产自由现金流'].tolist()[-time] * ((1+g1) ** (t1[-1]-1)) * (1+g2)) / ((WACC-g2)*((1+WACC)**t1[-1]))) + temp_sum
if three_change == True:
temp_sum1, temp_sum2 = 0, 0
for _ in t1:
temp1 = now_df.loc[: ,'经营资产自由现金流'].tolist()[-time] * ((1+g1) ** _)
temp = temp1 / ((1+WACC) ** _)
temp_sum1 = temp + temp_sum1
for _ in t2:
temp = temp1 * ((1+g2) ** _) / ((1+WACC) ** (_+t1[-1]))
temp_sum2 = temp + temp_sum2
        FCF = (temp1 * ((1+g2) ** t2[-1]) * (1+g3)) / ((WACC-g3)*((1+WACC)**(t1[-1]+t2[-1]))) + temp_sum1 + temp_sum2
FCF_plus_value = (FCF + value) * (1 - now_df.loc[: ,'少数股东权益比例'].tolist()[-time])
    result = FCF_plus_value / (now_df.loc[: ,'总市值'].tolist()[-time] / now_df.loc[: ,'收盘价'].tolist()[-time]) # intrinsic value per share = value / number of shares
    print('Value attributable to listed-company shareholders:', FCF_plus_value, '\n', 'Intrinsic value per share:', result)
return FCF_plus_value, result
def statistics(now_df, time):
    PE1 = now_df.loc[: ,'总市值'].tolist()[-time] / now_df.loc[: ,'净利润TTM'].tolist()[-time] # trailing P/E ratio
    PE2 = FCF_plus_value / now_df.loc[: ,'净利润TTM'].tolist()[-time]
    print('Original P/E:', PE1, 'Valuation-implied P/E:', PE2)
    for time_n in [1, 2, 3, time, time+1, time+2, time+3]:
        print('Share price %s quarters ago:' % (time_n-1), now_df.loc[: ,'收盘价'].tolist()[-time_n]) # closing price
```
### Main program
Computes the intrinsic value of the stock
```
#=== main
df, finance_df = read_file(file_name)
stock_df = merge_data(df, finance_df)
now_df = pd.DataFrame()
now_df = data_been_prepared(now_df, stock_df)
WACC = cal_WACC(now_df, time)
print('=============================')
print('WACC is ', WACC)
FCF_plus_value, result = fcf_discounted(now_df, WACC, time, zero_change, one_change, two_change, three_change, g1, g2, g3, t1, t2)
statistics(now_df, time)
```
### Loop test
The following loops over values of g (and similar variables) to test their effect
```
a = []
for b in np.arange(0.01, 0.1, 0.01):
#=== main
g1 = b
df, finance_df = read_file(file_name)
stock_df = merge_data(df, finance_df)
now_df = pd.DataFrame()
now_df = data_been_prepared(now_df, stock_df)
print('=============================')
    FCF_plus_value, result = fcf_discounted(now_df, WACC, time, zero_change, one_change, two_change, three_change, g1, g2, g3, t1, t2)
statistics(now_df, time)
a.append(result)
a.append(b)
print(a)
# pd.set_option('display.max_columns', 100)
finance_df = pd.read_csv(r'C:\Users\xueli\python_file\stock_quant\sina_financial_data\%s.csv' % file_name, parse_dates=['财报日期', '财报发布日期'], skiprows=1, encoding='gbk')
finance_df.columns.values.tolist()
```
< [Classes](PythonIntroCh7.ipynb) | [Contents](PythonIntro.ipynb) | [File I/O](PythonIntroCh9.ipynb) >
# 8. Modules
## 8.1 Introduction
Last lesson we covered the killer topic of Classes. As you'll remember, classes are tidy combinations of variables and functions in a single package. Programming lingo calls this feature encapsulation, but regardless of what it is called, it's a really cool feature for keeping things together so the code can be used in many instances in lots of places. Of course, you've got to ask, "how do I get my classes to many places, in many programs?". The answer is to put them into a module, to be imported into other programs.
## 8.2 Module? What's a Module?
A module is a Python file that (generally) has only definitions of variables, functions, and classes. For example, a module might look like this, which we store in a file `moduletest.py`:
```Python
### EXAMPLE PYTHON MODULE
# Define some variables:
numberone = 1
ageofqueen = 78
# define some functions
def printhello():
print("hello")
def timesfour(input):
    print(input * 4)
# define a class
class Piano:
def __init__(self):
self.type = input("What type of piano? ")
self.height = input("What height (in feet)? ")
self.price = input("How much did it cost? ")
self.age = input("How old is it (in years)? ")
def printdetails(self):
print("This piano is a/an " + self.height + " foot", end=" ")
print(self.type, "piano, " + self.age, "years old and costing\
" + self.price + " dollars.")
```
As you see, a module looks pretty much like your normal Python program.
So what do we do with a module? We `import` bits of it (or all of it) into other programs.
To import all the variables, functions and classes from `moduletest.py` into another program you are writing, we use the `import` operator. For example, to import `moduletest.py` into your main program (`mainprogram.py`), you would have this:
```Python
### mainprogram.py
### IMPORTS ANOTHER MODULE
import moduletest
```
This assumes that the module is in the same directory as `mainprogram.py`, or is a default module that comes with Python. You leave out the `.py` at the end of the file name - it is ignored. You normally put all `import` statements at the beginning of the Python file, but technically they can be anywhere. In order to use the items in the module in your main program, you use the following:
```Python
### USING AN IMPORTED MODULE
# Use the form modulename.itemname
# Examples:
print(moduletest.ageofqueen)
cfcpiano = moduletest.Piano()
cfcpiano.printdetails()
```
As you see, the modules that you import act very much like the classes we looked at last lesson - anything inside them must be preceded with `modulename.` for it to work.
## 8.3 More module thingummyjigs (in lack of a better title)
Wish you could get rid of the `modulename.` part that you have to put before every item you use from a module? No? Never? Well, I'll teach it to you anyway.
One way to avoid this hassle is to import only the wanted objects from the module. To do this, you use the `from` operator. You use it in the form of `from modulename import itemname`. Here is an example:
```Python
### IMPORT ITEMS DIRECTLY INTO YOUR PROGRAM
# import them
from moduletest import ageofqueen
from moduletest import printhello
# now try using them
print(ageofqueen)
printhello()
```
What is the point of this? Well, maybe you could use it to make your code a little more readable. If we get into heaps of modules inside modules, it could also remove that extra layer of crypticness.
If you wanted to, you could import everything from a module in this way by using `from modulename import *`. Of course, this can be troublesome if there are objects in your program with the same name as some items in the module. With large modules, this can easily happen, and can cause many a headache. A better way to do this would be to import a module in the normal way (without the `from` operator) and then assign items to a local name:
```Python
### ASSIGNING ITEMS TO A LOCAL NAME
# Assigning to a local name
timesfour = moduletest.timesfour
# Using the local name
timesfour(565)
```
This way, you can remove some crypticness, AND have all of the items from a certain module.
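As a miniature of the name collision described above, here is a self-contained sketch; no real module is involved, and both definitions live in one file for illustration:

```python
def printhello():          # imagine this version arrived via `from moduletest import *`
    return "hello"

def printhello():          # a later definition with the same name silently replaces it
    return "something else"

print(printhello())  # prints "something else"; the first version is gone
```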
A final handy way to import modules is with an alias. Maybe you want to change a name because you've already used the same name for something else in your program, another module you imported uses the same name, or maybe you want to abbreviate a longer name that you use a lot. We can then use the `as` operator. That looks like this:
```Python
### IMPORT A MODULE WITH AN ALIAS
# import module
import moduletest as mt
# use module
print(mt.ageofqueen)
cfcpiano = mt.Piano()
cfcpiano.printdetails()
```
## 8.4 Conclusion
That's it! A very simple lesson, but now you can organise your programs very neatly. In fact, now it is incredibly easy to make programs that can grow in complexity without ending up with one cryptic file that is full of bugs.
Modules are great for importing code. Next lesson, we learn about file input and output, and the saving of information inside classes, to be retrieved later. Will be great!
< [Classes](PythonIntroCh7.ipynb) | [Contents](PythonIntro.ipynb) | [File I/O](PythonIntroCh9.ipynb) >
# Functions
If you find yourself doing the same thing over and over again in your code, it might be time to write a function.
Functions are blocks of reusable code -- little boxes that (usually) take inputs and return outputs. In Excel, `=SUM()` is a function. `print()` is one of Python's built-in functions.
You can also _define your own functions_. This can save you some typing, and it will help separate your code into logical, easy-to-read pieces.
### Syntax
Functions start with the `def` keyword -- short for _define_, because you're defining a function -- then the name of the function, then parentheses (sometimes with the names of any `arguments` your function requires inside the parentheses) and then a colon. The function's code sits inside an indented block immediately below that line. In most cases, a function will `return` a value at the end.
Here is a function that takes a number and returns that number multiplied by 10:
```
def times_ten(number):
return number * 10
```
The `number` variable is just a placeholder for the values we're going to hand the function as input. We could have called that argument name "banana" and things would be just fine, though it would be confusing for people reading your code.
### Calling a function
By itself, a function doesn't do anything. We have built a tiny machine to multiply a number by 10. But it's just sitting on the workshop bench, waiting for us to use it.
Let's use it.
```
two_times_10 = times_ten(2)
print(two_times_10)
```
### Arguments
Functions can accept _positional_ arguments or _keyword_ arguments.
If your function uses _positional_ arguments, the order in which you pass arguments to the function matters. Here is a function that prints out a message based on its input (a person's name and their hometown).
```
def greet(name, hometown):
return f'Hello, {name} from {hometown}!'
```
Now let's call it.
```
print(greet('Cody', 'Pavillion, WY'))
```
If we change the order of the arguments, nonsense ensues.
```
print(greet('Pavillion, WY', 'Cody'))
```
Using _keyword_ arguments requires us to specify what value belongs to what argument, and it allows us to set a default value for the argument -- values that the function will use if you fail to pass any arguments when you call it. We could rewrite our function like this:
```
def greet(name='Cody', hometown='Pavillion, WY'):
return f'Hello, {name} from {hometown}!'
```
And now it doesn't matter what order we pass in the arguments, because we're defining the keyword that they belong to:
```
print(greet(hometown='Pittsburgh, PA', name='Jacob'))
```
What happens if we call the `greet()` function without any arguments at all, now? It'll use the default arguments.
```
print(greet())
```
### Lambda expressions
Sometimes, you'll see code that looks like this:
```python
df['new_column'] = df['old_column'].apply(lambda x: x[0])
```
That stuff inside the `apply()` parentheses? That's called a [_lambda expression_](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions), a time-saving way to turn loose a simple function on some values without having to write out a function with `def`. (It's a Python thing, not specific to pandas, but for our purposes that's probably where you'll see them most often.)
This code is equivalent but takes longer to write:
```python
def take_first_char(value):
return value[0]
df['new_column'] = df['old_column'].apply(take_first_char)
```
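The same idea works outside pandas; here is a plain-Python sketch that maps a lambda over an ordinary list:

```python
# Apply an anonymous function to each element of a list
words = ['alpha', 'beta', 'gamma']
first_chars = list(map(lambda x: x[0], words))
print(first_chars)  # ['a', 'b', 'g']
```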
### More resources
- [TutorialsPoint post on functions](https://www.tutorialspoint.com/python/python_functions.htm)
- [LearnPython tutorial](https://www.learnpython.org/en/Functions)
- [Software Carpentry tutorial](https://swcarpentry.github.io/python-novice-inflammation/06-func/)
- [Hitchhiker's Guide to Python: Function Arguments](http://docs.python-guide.org/en/latest/writing/style/#function-arguments)
# Convolutional Neural Networks with Tensorflow
"Deep Learning" is a general term that usually refers to the use of neural networks with multiple layers that synthesize the way the human brain learns and makes decisions. A convolutional neural network is a kind of neural network that extracts *features* from matrices of numeric values (often images) by convolving multiple filters over the matrix values to apply weights and identify patterns, such as edges, corners, and so on in an image. The numeric representations of these patterns are then passed to a fully-connected neural network layer to map the features to specific classes.
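For intuition, the convolution step can be sketched in NumPy: a single 3x3 filter slid over a small image patch (a toy example, separate from the model built below):

```python
import numpy as np

# A 4x4 "image" with a vertical edge between columns 1 and 2
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
# A 3x3 vertical-edge filter: positive weights on the left, negative on the right
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)
# Slide the filter over every 3x3 window (stride 1, no padding)
out = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        out[i, j] = (image[i:i+3, j:j+3] * kernel).sum()
print(out)  # every window straddles the edge, so every response is -3
```

A convolutional layer learns many such filters from data rather than using hand-written ones.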
## Building a CNN
There are several commonly used frameworks for creating CNNs. In this notebook, we'll build a simple example CNN using Tensorflow. The example is a classification model that can classify an image as a circle, a triangle, or a square.
### Import framework
First, let's import the Tensorflow libraries we'll need.
```
import tensorflow
from tensorflow import keras
print('TensorFlow version:',tensorflow.__version__)
print('Keras version:',keras.__version__)
```
### Preparing the Data
Before we can train the model, we need to prepare the data. We'll divide the feature values by 255 to normalize them as floating point values between 0 and 1, and we'll split the data so that we can use 70% of it to train the model, and hold back 30% to validate it. When loading the data, the data generator will assign one-hot encoded numeric labels to indicate which class each image belongs to based on the subfolders in which the data is stored. In this case, there are three subfolders - *circle*, *square*, and *triangle*, so the labels will consist of three *0* or *1* values indicating which of these classes is associated with the image - for example the label [0 1 0] indicates that the image belongs to the second class (*square*).
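To picture a one-hot label concretely (a hypothetical array, independent of the generator used below):

```python
import numpy as np

classnames = ['circle', 'square', 'triangle']
label = np.array([0, 1, 0])               # one-hot: the 1 marks the true class
print(classnames[int(np.argmax(label))])  # prints "square"
```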
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
data_folder = 'data/shapes'
img_size = (128, 128)
batch_size = 30
print("Getting Data...")
datagen = ImageDataGenerator(rescale=1./255, # normalize pixel values
validation_split=0.3) # hold back 30% of the images for validation
print("Preparing training dataset...")
train_generator = datagen.flow_from_directory(
data_folder,
target_size=img_size,
batch_size=batch_size,
class_mode='categorical',
subset='training') # set as training data
print("Preparing validation dataset...")
validation_generator = datagen.flow_from_directory(
data_folder,
target_size=img_size,
batch_size=batch_size,
class_mode='categorical',
subset='validation') # set as validation data
classnames = list(train_generator.class_indices.keys())
print("class names: ", classnames)
```
### Defining the CNN
Now we're ready to create our model. This involves defining the layers for our CNN, and compiling them for multi-class classification.
```
# Define a CNN classifier network
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
# Define the model as a sequence of layers
model = Sequential()
# The input layer accepts an image and applies a convolution that uses 32 6x6 filters and a rectified linear unit activation function
model.add(Conv2D(32, (6, 6), input_shape=train_generator.image_shape, activation='relu'))
# Next we'll add a max pooling layer with a 2x2 patch
model.add(MaxPooling2D(pool_size=(2,2)))
# We can add as many layers as we think necessary - here we'll add another convolution, max pooling, and dropout layer
model.add(Conv2D(32, (6, 6), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# And another set
model.add(Conv2D(32, (6, 6), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# A dropout layer randomly drops some nodes to reduce inter-dependencies (which can cause over-fitting)
model.add(Dropout(0.2))
# Now we'll flatten the feature maps and generate an output layer with a predicted probability for each class
model.add(Flatten())
model.add(Dense(train_generator.num_classes, activation='softmax'))
# With the layers defined, we can now compile the model for categorical (multi-class) classification
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print(model.summary())
```
### Training the Model
With the layers of the CNN defined, we're ready to train the model using our image data. In the example below, we use 5 iterations (*epochs*) to train the model in 30-image batches, holding back 30% of the data for validation. After each epoch, the loss function measures the error (*loss*) in the model, and the optimizer adjusts the weights (which were randomly initialized for the first iteration) to try to improve accuracy.
> **Note**: We're only using 5 epochs to minimize the training time for this simple example. A real-world CNN is usually trained over more epochs than this. CNN model training is processor-intensive, involving a lot of matrix and vector-based operations; so it's recommended to perform this on a system that can leverage GPUs, which are optimized for these kinds of calculation. This will take a while to complete on a CPU-based system - status will be displayed as the training progresses.
```
# Train the model over 5 epochs using 30-image batches and using the validation holdout dataset for validation
num_epochs = 5
history = model.fit(
train_generator,
steps_per_epoch = train_generator.samples // batch_size,
validation_data = validation_generator,
validation_steps = validation_generator.samples // batch_size,
epochs = num_epochs)
```
### View the Loss History
We tracked average training and validation loss history for each epoch. We can plot these to verify that loss reduced as the model was trained, and to detect *overfitting* (which is indicated by a continued drop in training loss after validation loss has levelled out or started to increase).
```
%matplotlib inline
from matplotlib import pyplot as plt
epoch_nums = range(1,num_epochs+1)
training_loss = history.history["loss"]
validation_loss = history.history["val_loss"]
plt.plot(epoch_nums, training_loss)
plt.plot(epoch_nums, validation_loss)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['training', 'validation'], loc='upper right')
plt.show()
```
### Evaluate Model Performance
We can see the final accuracy based on the test data, but typically we'll want to explore performance metrics in a little more depth. Let's plot a confusion matrix to see how well the model is predicting each class.
```
# Tensorflow doesn't have a built-in confusion matrix metric, so we'll use SciKit-Learn
import numpy as np
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
print("Generating predictions from validation data...")
# Get the image and label arrays for the first batch of validation data
x_test = validation_generator[0][0]
y_test = validation_generator[0][1]
# Use the model to predict the class
class_probabilities = model.predict(x_test)
# The model returns a probability value for each class
# The one with the highest probability is the predicted class
predictions = np.argmax(class_probabilities, axis=1)
# The actual labels are one-hot encoded (e.g. [0 1 0]), so get the index of the element with the value 1
true_labels = np.argmax(y_test, axis=1)
# Plot the confusion matrix
cm = confusion_matrix(true_labels, predictions)
plt.imshow(cm, interpolation="nearest", cmap=plt.cm.Blues)
plt.colorbar()
tick_marks = np.arange(len(classnames))
plt.xticks(tick_marks, classnames, rotation=85)
plt.yticks(tick_marks, classnames)
plt.xlabel("Predicted Shape")
plt.ylabel("True Shape")
plt.show()
```
### Using the Trained Model
Now that we've trained the model, we can use it to predict the class of a new image.
```
from tensorflow.keras import models
from random import randint
import os
%matplotlib inline
# Function to create a random image (of a square, circle, or triangle)
def create_image (size, shape):
from random import randint
import numpy as np
from PIL import Image, ImageDraw
xy1 = randint(10,40)
xy2 = randint(60,100)
col = (randint(0,200), randint(0,200), randint(0,200))
img = Image.new("RGB", size, (255, 255, 255))
draw = ImageDraw.Draw(img)
if shape == 'circle':
draw.ellipse([(xy1,xy1), (xy2,xy2)], fill=col)
elif shape == 'triangle':
draw.polygon([(xy1,xy1), (xy2,xy2), (xy2,xy1)], fill=col)
else: # square
draw.rectangle([(xy1,xy1), (xy2,xy2)], fill=col)
del draw
return np.array(img)
# Save the trained model
modelFileName = 'models/shape_classifier.h5'
model.save(modelFileName)
del model # deletes the existing model variable
# Create a random test image
classnames = os.listdir(os.path.join('data', 'shapes'))
classnames.sort()
img = create_image ((128,128), classnames[randint(0, len(classnames)-1)])
plt.axis('off')
plt.imshow(img)
# The model expects a batch of images as input, so we'll create an array of 1 image
imgfeatures = img.reshape(1, img.shape[0], img.shape[1], img.shape[2])
# We need to format the input to match the training data
# The generator loaded the values as floating point numbers
# and normalized the pixel values, so...
imgfeatures = imgfeatures.astype('float32')
imgfeatures /= 255
# Use the classifier to predict the class
model = models.load_model(modelFileName) # loads the saved model
class_probabilities = model.predict(imgfeatures)
# Find the class predictions with the highest predicted probability
class_idx = np.argmax(class_probabilities, axis=1)
print (classnames[int(class_idx[0])])
```
In this notebook, you used Tensorflow to train an image classification model based on a convolutional neural network.
# Thematic Reports
Thematic reports run historical analyses on the exposure of a portfolio to various Goldman Sachs Flagship Thematic baskets over a specified date range.
### Prerequisite
To execute all the code in this tutorial, you will need the following application scopes:
- **read_product_data**
- **read_financial_data**
- **modify_financial_data** (must be requested)
- **run_analytics** (must be requested)
If you are not yet permissioned for these scopes, please request them on your [My Applications Page](https://developer.gs.com/go/apps/view).
If you have any other questions please reach out to the [Marquee sales team](mailto:gs-marquee-sales@gs.com).
## Step 1: Authenticate and Initialize Your Session
First you will import the necessary modules and add your client id and client secret.
```
import datetime as dt
from time import sleep
from gs_quant.markets.baskets import Basket
from gs_quant.markets.portfolio_manager import PortfolioManager
from gs_quant.markets.report import ThematicReport
from gs_quant.session import GsSession, Environment
client = None
secret = None
scopes = None
## External users must fill in their client ID and secret below and uncomment those lines
#client = 'ENTER CLIENT ID'
#secret = 'ENTER CLIENT SECRET'
#scopes = ('read_product_data read_financial_data modify_financial_data run_analytics',)
GsSession.use(
Environment.PROD,
client_id=client,
client_secret=secret,
scopes=scopes
)
print('GS Session initialized.')
```
## Step 2: Create a New Thematic Report
#### Already have a thematic report?
<i>If you want to skip creating a new report and continue this tutorial with an existing thematic report, run the following and skip to Step 3:</i>
```
portfolio_id = 'ENTER PORTFOLIO ID'
thematic_report = PortfolioManager(portfolio_id).get_thematic_report()
```
The only parameter necessary in creating a new thematic report is the unique Marquee identifier of the portfolio on which you would like to run thematic analytics.
```
portfolio_id = 'ENTER PORTFOLIO ID'
thematic_report = ThematicReport()
thematic_report.set_position_source(portfolio_id)
thematic_report.save()
print(f'A new thematic report for portfolio "{portfolio_id}" has been made with ID "{thematic_report.id}".')
```
## Step 3: Schedule the Report
When scheduling reports, you have two options:
- Backcast the report: take the earliest date with positions in the portfolio/basket and run the report on the positions held then, with a start date before the earliest position date and an end date of the earliest position date
- Do not backcast the report: set the start date to a date that has positions in the portfolio or basket and an end date after that (best practice is to set it to T-1). In this case the report will run on the positions held as of each day in the date range
In this case, let's try scheduling the report without backcasting:
```
start_date = dt.date(2021, 1, 4)
end_date = dt.date(2021, 8, 4)
thematic_report.schedule(
start_date=start_date,
end_date=end_date,
backcast=False
)
print(f'Report "{thematic_report.id}" has been scheduled.')
```
## Alternative Step 3: Run the Report
Depending on the size of your portfolio and the length of the schedule range, it usually takes anywhere from a couple of seconds to half a minute for your report to finish executing. Only after that can you successfully pull the results from that report. If you would rather run the report and pull the results immediately after they are ready, you can leverage the `run` function.
You can run a report synchronously or asynchronously.
- Synchronous: the Python script will stall at the `run` function line and wait for the report to finish. The `run` function will then return a dataframe with the report results.
- Asynchronous: the Python script will not stall at the `run` function line. The `run` function will return a `ReportJobFuture` object that will contain the report results when they are ready.
In this example, let's run the report asynchronously and wait for the results:
```
start_date = dt.date(2021, 1, 4)
end_date = dt.date(2021, 8, 4)
report_result_future = thematic_report.run(
start_date=start_date,
end_date=end_date,
backcast=False,
is_async=True
)
while not report_result_future.done():
    print('Waiting for report results...')
    sleep(5)
print('\nReport results done! Here they are...')
print(report_result_future.result())
```
## Step 4: Pull Report Results
Now that we have our thematic report, we can leverage the unique functionalities of the `ThematicReport` class to pull exposure and PnL data. Let's get the historical changes in thematic exposure and beta to the GS Asia Stay at Home basket:
```
basket = Basket.get('GSXASTAY')
thematic_exposures = thematic_report.get_thematic_data(
start_date=start_date,
end_date=end_date,
basket_ids=[basket.get_marquee_id()]
)
print(f'Thematic Exposures: \n{thematic_exposures}')
thematic_exposures.plot(title='Thematic Data Breakdown')
```
### You're all set; Congrats!
*Other questions? Reach out to the [Portfolio Analytics team](mailto:gs-marquee-analytics-support@gs.com)!*
# Pythonic APIs: the workshop notebook
## Tutorial overview
* Introduction
* A simple but full-featured Pythonic class
* **Exercise:** custom formatting and alternate constructor
* A Pythonic sequence
* **Exercise:** implementing sequence behavior
* *Coffee break*
* A Pythonic sequence (continued)
* **Exercise:** custom formatting
* Operator overloading
* **Exercise:** implement `@` for dot product
* Wrap-up
## What is *Pythonic*?
Pythonic code is concise and expressive. It leverages Python features and idioms to accomplish maximum effect with minimum effort, without being unreadable. It uses the language as it's designed to be used, so it is most readable to the fluent Pythonista.
### Real example 1: the `requests` API
`requests` is a pleasant HTTP client library. It's great, but it would be awesome if it were asynchronous (could it be pleasant **and** asynchronous at the same time?). The examples below are from Kenneth Reitz, the author of `requests` ([source](https://gist.github.com/kennethreitz/973705)).
#### Pythonic, using `requests`
```python
import requests
r = requests.get('https://api.github.com', auth=('user', 'pass'))
print r.status_code
print r.headers['content-type']
# ------
# 200
# 'application/json'
```
#### Unpythonic, using `urllib2`
```python
import urllib2
gh_url = 'https://api.github.com'
req = urllib2.Request(gh_url)
password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(None, gh_url, 'user', 'pass')
auth_manager = urllib2.HTTPBasicAuthHandler(password_manager)
opener = urllib2.build_opener(auth_manager)
urllib2.install_opener(opener)
handler = urllib2.urlopen(req)
print handler.getcode()
print handler.headers.getheader('content-type')
# ------
# 200
# 'application/json'
```
### Real example 2: classes are optional in `py.test` and `nosetests`
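`py.test` (and `nosetests`) will happily collect plain module-level functions named `test_*`, with no `TestCase` subclass required. A tiny hypothetical example (the function and file names here are ours, not from either project's docs):

```python
# test_inventory.py -- hypothetical example module for py.test.
# Plain functions named test_* are collected; no unittest.TestCase needed.

def add_item(inventory, name, qty):
    """Toy function under test: add qty of an item to an inventory dict."""
    inventory[name] = inventory.get(name, 0) + qty
    return inventory

def test_add_new_item():
    assert add_item({}, 'apple', 3) == {'apple': 3}

def test_add_existing_item():
    assert add_item({'apple': 3}, 'apple', 2) == {'apple': 5}
```

Running `py.test test_inventory.py` discovers and runs both tests; the equivalent `unittest` version would need a class, inheritance, and `self.assertEqual` calls.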
## Features of idiomatic Python APIs
* Let the user apply previous knowledge of the standard types and operations
* Make it easy to leverage existing libraries
* Come with “batteries included”
* Use duck typing for enhanced interoperation with user-defined types
* Provide ready to use objects (no instantiation needed)
* Don't require subclassing for basic usage
* Leverage standard language objects: containers, functions, classes, modules
* Make proper use of the Data Model (i.e. special methods)
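As a quick taste of the last two points, here is a small sketch (a variant of the classic card-deck example) that defines only `__init__`, `__len__` and `__getitem__`, yet gets `len()`, indexing, slicing, iteration and the `in` operator for free from the Data Model:

```python
import collections

Card = collections.namedtuple('Card', ['rank', 'suit'])

class Deck:
    """Implements __len__ and __getitem__; Python supplies the rest."""
    ranks = [str(n) for n in range(2, 11)] + list('JQKA')
    suits = 'spades diamonds clubs hearts'.split()

    def __init__(self):
        self._cards = [Card(r, s) for s in self.suits for r in self.ranks]

    def __len__(self):
        return len(self._cards)

    def __getitem__(self, position):
        return self._cards[position]

deck = Deck()
print(len(deck))                     # 52 -- len() just works
print(deck[0])                       # indexing works
print(Card('Q', 'hearts') in deck)   # True -- `in` falls back to __getitem__
```

No subclassing of any framework base class was needed, and a user of this class can apply everything they already know about sequences.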
## Introduction
One of the keys to consistent, *Pythonic*, behavior in Python is understanding and leveraging the **Data Model**. The Python Data Model defines standard APIs which enable...
### Iteration
```
s = 'Fluent'
L = [10, 20, 30, 40, 50]
print(list(s)) # list constructor iterates over its argument
a, b, *middle, c = L # tuple unpacking iterates over right side
print((a, b, c))
for i in L:
    print(i, end=' ')
```
### Sizing with `len()`
```
len(s), len(L)
s.__len__(), L.__len__()
```
### Arithmetic
```
a = 2
b = 3
a * b, a.__mul__(b)
L = [1, 2, 3]
L.append(L)
L
```
## A simple but full-featured Pythonic class
## String formatting mini-language
```
x = 2**.5
x
format(x, '.3f')
from datetime import datetime
agora = datetime.now()
print(agora)
print(format(agora, '%H:%M'))
'{1:%H}... {0:.3f}!'.format(x, agora)
```
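The same mini-language extends to user-defined types through the `__format__` special method. A small hypothetical class (ours, not from the workshop materials) that delegates the format spec to its components:

```python
# Hypothetical example: classes hook into format() and f-strings via __format__.
class Vector2d:
    def __init__(self, x, y):
        self.x, self.y = float(x), float(y)

    def __format__(self, fmt_spec=''):
        # delegate the spec (e.g. '.3f') to each numeric component
        components = (format(c, fmt_spec) for c in (self.x, self.y))
        return '({}, {})'.format(*components)

v = Vector2d(3, 4)
print(format(v))           # (3.0, 4.0)
print(format(v, '.2f'))    # (3.00, 4.00)
print('{:.1f}'.format(v))  # (3.0, 4.0)
```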
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression
import warnings
import numpy as np
from collections import OrderedDict
from lob_data_utils import lob, db_result
from lob_data_utils.svm_calculation import lob_svm
sns.set_style('whitegrid')
warnings.filterwarnings('ignore')
data_length = 15000
stock = '9064'
d, d_cv, d_test = lob.load_prepared_data(
stock, data_dir='../queue_imbalance/data/prepared', cv=True, length=data_length)
d.head()
d_test.head()
```
## Data visualization
```
lob.plot_density_imbalance_vs_mid(d, 0, len(d))
```
## Logistic
```
log_clf = lob.logistic_regression(d, 0, len(d))
pred = log_clf.predict(d_test['queue_imbalance'].values.reshape(-1, 1))
lob.plot_roc(d_test, log_clf, stock=stock, label='')
```
## SVM
```
gammas = [0.0005, 0.005, 1, 5, 50, 500, 5000]
cs = [0.0005, 0.005, 1, 5.0, 50, 500, 1000]
coef0s = [0, 0.0005, 0.005, 1, 5, 50, 500, 5000]
try:
    df_svm_res = pd.read_csv('res_svm_{}_{}.csv'.format(stock, data_length))
    print('Results read from file')
except FileNotFoundError:
    print('Results file does not exist yet')
    df_svm_res = pd.DataFrame(columns=['svm', 'c', 'gamma', 'coef0', 'roc_cv_score', 'roc_train_score'])
s = stock
svm_results = []
for c in cs:
    for g in gammas:
        for coef0 in coef0s:
            if np.any(df_svm_res[df_svm_res['c'] == c][df_svm_res['gamma'] == g][df_svm_res['coef0'] == coef0][df_svm_res['svm'] == 'sigmoid']):
                continue
            svm = lob_svm.SVMSigmoid(s, d, c=c, coef0=coef0, gamma=g, data_length=data_length)
            cv_score = svm.predict(d_cv, 'cv', check=False)
            train_score = svm.predict(d, 'train', check=False)
            svm_results.append({'svm': 'sigmoid', 'c': c, 'coef0': coef0, 'gamma': g,
                                'roc_cv_score': cv_score, 'roc_train_score': train_score})
        if np.any(df_svm_res[df_svm_res['c'] == c][df_svm_res['gamma'] == g][df_svm_res['svm'] == 'rbf']):
            continue
        svm = lob_svm.SVMRbf(s, d, c=c, gamma=g, data_length=data_length)
        cv_score = svm.predict(d_cv, 'cv', check=False)
        train_score = svm.predict(d, 'train', check=False)
        svm_results.append({'svm': 'rbf', 'c': c, 'gamma': g,
                            'roc_cv_score': cv_score, 'roc_train_score': train_score})
    if np.any(df_svm_res[df_svm_res['c'] == c][df_svm_res['svm'] == 'linear']):
        continue
    svm = lob_svm.SVMLinear(s, d, c=c, data_length=data_length)
    cv_score = svm.predict(d_cv, 'cv', check=False)
    train_score = svm.predict(d, 'train', check=False)
    svm_results.append({'svm': 'linear', 'c': c, 'roc_cv_score': cv_score, 'roc_train_score': train_score})
pd.DataFrame(svm_results).to_csv('res_svm_{}_{}.csv'.format(stock, data_length))
for svm_result in svm_results:
    df_svm_res = df_svm_res.append(svm_result, ignore_index=True)
df_svm_res.drop(columns=[c for c in df_svm_res.columns if 'Unnamed:' in c], inplace=True)
df_svm_res.sort_values(by='roc_cv_score', ascending=False)
df_svm_res.head()
df_svm_res.to_csv('res_svm_{}_{}.csv'.format(stock, data_length))
df_svm_res.sort_values(by='roc_cv_score')
```
## GDF
```
K = 50
def gdf_svm_classification(df, K, C=1000, gamma=1):
    clf = SVC(kernel='rbf', C=C, gamma=gamma)
    gdf_columns = ['gdf_' + str(i) for i in range(0, K)]
    X = df.loc[:, gdf_columns]
    y = df['mid_price_indicator'].values.reshape(-1, 1)
    y[0] = 0
    clf.fit(X, y)
    return clf
length = data_length
rr = [0.01, 0.05, 0.1, 0.5, 1]
ss = [0.01, 0.05, 0.1, 0.5, 1]
results = []
try:
    df_gdf_res = pd.read_csv('res_gdf_svm_{}_{}.csv'.format(stock, data_length))
    print('Results read from file')
except FileNotFoundError:
    print('Results file does not exist yet')
    df_gdf_res = pd.DataFrame(columns=['svm', 'c', 'gamma', 'roc_cv_score', 'roc_train_score',
                                       'K', 'r', 's'])
for r in rr:
    for s in ss:
        filename = 'gdf_{}_len{}_r{}_s{}_K{}'.format(stock, length, r, s, K)
        dfs, dfs_cv, dfs_test = lob.load_prepared_data(
            filename, data_dir='../gaussian_filter/data_gdf/', cv=True, length=length)
        gdf_columns = ['gdf_' + str(i) for i in range(0, K)]
        for C in [1, 10, 100, 1000, 10000]:
            for gamma in [1, 10, 100, 1000, 10000]:
                res = {}
                res['c'] = C
                res['gamma'] = gamma
                res['r'] = r
                res['s'] = s
                res['stock'] = stock
                res['K'] = K
                res['svm'] = 'rbf'
                if np.any(df_gdf_res[df_gdf_res['c'] == C][df_gdf_res['gamma'] == gamma][df_gdf_res['r'] == r][df_gdf_res['s'] == s][df_gdf_res['K'] == K][df_gdf_res['svm'] == 'rbf']):
                    continue
                clf = gdf_svm_classification(dfs, K, C=C, gamma=gamma)
                predictions = clf.predict(dfs.loc[:, gdf_columns])
                try:
                    roc_train = roc_auc_score(predictions, dfs['mid_price_indicator'])
                    res['roc_train_score'] = roc_train
                except Exception as e:
                    print(e, r, s, C, gamma)
                predictions = clf.predict(dfs_cv.loc[:, gdf_columns])
                try:
                    roc_cv = roc_auc_score(predictions, dfs_cv['mid_price_indicator'])
                    res['roc_cv_score'] = roc_cv
                except Exception as e:
                    print(e, r, s, C, gamma)
                results.append(res)
pd.DataFrame(results).to_csv('res_gdf_svm_{}_{}.csv'.format(stock, data_length))
for result in results:
    df_gdf_res = df_gdf_res.append(result, ignore_index=True)
df_gdf_res.drop(columns=[c for c in df_gdf_res.columns if 'Unnamed:' in c], inplace=True)
df_gdf_res.to_csv('res_gdf_svm_{}_{}.csv'.format(stock, data_length))
a = df_gdf_res[df_gdf_res['r'] == 0.1].sort_values(by='roc_cv_score', ascending=False).sort_values(by='s')
a[['s', 'roc_train_score', 'roc_cv_score']].plot(kind='bar', figsize=(16,16))
```
## GDF with logistic reg
```
K = 50
def gdf_log_classification(df, K, C=1000):
    gdf_columns = ['gdf_' + str(i) for i in range(0, K)]
    clf = LogisticRegression(C=C)
    X = df.loc[:, gdf_columns]
    y = df['mid_price_indicator'].values.reshape(-1, 1)
    y[0] = 0
    clf.fit(X, y)
    return clf
length = data_length
rr = [0.01, 0.05, 0.1, 0.5, 1]
ss = [0.01, 0.05, 0.1, 0.5, 1]
results = []
try:
    df_gdf_log_res = pd.read_csv('res_gdf_log_{}_{}.csv'.format(stock, data_length))
    print('Results read from file')
except FileNotFoundError:
    print('Results file does not exist yet')
    df_gdf_log_res = pd.DataFrame(columns=['c', 'roc_cv_score', 'roc_train_score', 'K', 'r', 's'])
for r in rr:
    for s in ss:
        filename = 'gdf_{}_len{}_r{}_s{}_K{}'.format(stock, length, r, s, K)
        dfs, dfs_cv, dfs_test = lob.load_prepared_data(
            filename, data_dir='../gaussian_filter/data_gdf/', cv=True, length=length)
        gdf_columns = ['gdf_' + str(i) for i in range(0, K)]
        for C in [1, 10, 100, 1000, 10000]:
            res = {}
            res['c'] = C
            res['r'] = r
            res['s'] = s
            res['stock'] = stock
            res['K'] = K
            if np.any(df_gdf_log_res[df_gdf_log_res['c'] == C][df_gdf_log_res['r'] == r][df_gdf_log_res['s'] == s][df_gdf_log_res['K'] == K]):
                continue
            clf = gdf_log_classification(dfs, K, C=C)
            predictions = clf.predict(dfs.loc[:, gdf_columns])
            try:
                roc_train = roc_auc_score(predictions, dfs['mid_price_indicator'])
                res['roc_train_score'] = roc_train
            except Exception as e:
                print(e, r, s, C)  # no gamma in the logistic-regression grid
            predictions = clf.predict(dfs_cv.loc[:, gdf_columns])
            try:
                roc_cv = roc_auc_score(predictions, dfs_cv['mid_price_indicator'])
                res['roc_cv_score'] = roc_cv
            except Exception as e:
                print(e, r, s, C)
            results.append(res)
pd.DataFrame(results).to_csv('res_gdf_log_{}_{}.csv'.format(stock, data_length))
for result in results:
    df_gdf_log_res = df_gdf_log_res.append(result, ignore_index=True)
df_gdf_log_res.drop(columns=[c for c in df_gdf_log_res.columns if 'Unnamed:' in c], inplace=True)
df_gdf_log_res.to_csv('res_gdf_log_{}_{}.csv'.format(stock, data_length))
df_gdf_log_res.sort_values(by='roc_cv_score', ascending=False).head(5)
df_gdf_res.sort_values(by='roc_cv_score', ascending=False).head(5)
```
## Results on test
```
best_gdf_res = df_gdf_res.sort_values(by='roc_cv_score', ascending=False).iloc[0]
best_gdf_log_res = df_gdf_log_res.sort_values(by='roc_cv_score', ascending=False).iloc[0]
best_svm_sig_res = df_svm_res[df_svm_res['svm'] == 'sigmoid'].sort_values(
by='roc_cv_score', ascending=False).iloc[0]
best_svm_rbf_res = df_svm_res[df_svm_res['svm'] == 'rbf'].sort_values(
by='roc_cv_score', ascending=False).iloc[0]
best_svm_lin_res = df_svm_res[df_svm_res['svm'] == 'linear'].sort_values(
by='roc_cv_score', ascending=False).iloc[0]
res_dict = OrderedDict({
'gdf_svm': best_gdf_res,
'gdf_log': best_gdf_log_res,
'svm_rbf': best_svm_rbf_res,
'svm_lin': best_svm_lin_res,
'svm_sig': best_svm_sig_res,
})
plt.bar(list(range(len(res_dict.keys()))), [v['roc_cv_score'] for v in res_dict.values()],)
d = plt.xticks(list(range(len(res_dict.keys()))), list(res_dict.keys()))
plt.title('CV score')
for k, v in res_dict.items():
    print(k, v['roc_cv_score'])
plt.bar(list(range(len(res_dict.keys()))), [v['roc_train_score'] for v in res_dict.values()],)
d = plt.xticks(list(range(len(res_dict.keys()))), list(res_dict.keys()))
plt.title('train score')
for k, v in res_dict.items():
    print(k, v['roc_train_score'])
list(res_dict.values())[0]
filename = 'gdf_{}_len{}_r{}_s{}_K{}'.format(stock, length, best_gdf_res['r'], best_gdf_res['s'],
int(best_gdf_res['K']))
dfs, dfs_cv, dfs_test = lob.load_prepared_data(
filename, data_dir='../gaussian_filter/data_gdf/', cv=True, length=length)
svm_gdf_clf = gdf_svm_classification(dfs, K, C=best_gdf_res['c'], gamma=best_gdf_res['gamma'])
gdf_columns = ['gdf_' + str(i) for i in range(0, K)]
pred_test = svm_gdf_clf.predict(dfs_test.loc[:, gdf_columns])
roc_test = roc_auc_score(pred_test, dfs_test['mid_price_indicator'])
best_gdf_res['roc_test_score'] = roc_test
roc_test
filename = 'gdf_{}_len{}_r{}_s{}_K{}'.format(
stock, length, best_gdf_log_res['r'], best_gdf_log_res['s'], int(best_gdf_log_res['K']))
dfs, dfs_cv, dfs_test = lob.load_prepared_data(
filename, data_dir='../gaussian_filter/data_gdf/', cv=True, length=length)
svm_gdf_clf = gdf_log_classification(dfs, K, C=best_gdf_res['c'])
gdf_columns = ['gdf_' + str(i) for i in range(0, K)]
pred_test = svm_gdf_clf.predict(dfs_test.loc[:, gdf_columns])
roc_test = roc_auc_score(pred_test, dfs_test['mid_price_indicator'])
best_gdf_log_res['roc_test_score'] = roc_test
roc_test
d, d_cv, d_test = lob.load_prepared_data(stock,
data_dir='../queue_imbalance/data/prepared', cv=True, length=10000)
svm = lob_svm.SVMRbf(stock, d, c=best_svm_rbf_res['c'], gamma=best_svm_rbf_res['gamma'], data_length=data_length)
roc_test = svm.predict(d_test, 'test', check=False)
best_svm_rbf_res['roc_test_score'] = roc_test
svm = lob_svm.SVMSigmoid(stock, d, c=best_svm_sig_res['c'],
gamma=best_svm_sig_res['gamma'], coef0=best_svm_sig_res['coef0'])
roc_test = svm.predict(d_test, 'test', check=False)
best_svm_sig_res['roc_test_score'] = roc_test
svm = lob_svm.SVMLinear(stock, d, c=best_svm_lin_res['c'])
roc_test = svm.predict(d_test, 'test', check=False)
best_svm_lin_res['roc_test_score'] = roc_test
plt.bar(list(range(len(res_dict.keys()))), [v['roc_train_score'] for v in res_dict.values()])
plt.bar(list(range(len(res_dict.keys()))), [v['roc_test_score'] for v in res_dict.values()])
d = plt.xticks(list(range(len(res_dict.keys()))), list(res_dict.keys()))
plt.title('test score')
for k, v in res_dict.items():
    print(k, v['roc_test_score'])
res = []
for k, v in res_dict.items():
    dd = v.to_dict()
    dd['type'] = k
    res.append(dd)
df_res = pd.DataFrame(res)
df_res[['roc_train_score', 'roc_cv_score', 'roc_test_score']].plot(kind='bar', figsize=(8, 8))
d = plt.xticks(list(range(len(res_dict.keys()))), list(res_dict.keys()))
plt.legend(loc='upper right')
```
# [Module 3.1] Model Deployment and Inference (usable both with and without a VPC)
This notebook performs the following tasks:
- Create an endpoint
    - Create a SageMaker Estimator
    - Attach a training job to the Estimator
    - Besides the method above, there are additional ways to create an endpoint; see:
        - https://docs.aws.amazon.com/ko_kr/sagemaker/latest/dg/ex1-deploy-model.html
- Run inference against the endpoint
    - Inference examples 1 and 2
---
```
import os
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
```
## Create a SageMaker Estimator
Create a SageMaker Estimator instance; with `estimator.attach()` you can easily load a previously trained model.
```
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(base_job_name='cifar10',
entry_point='cifar10_keras_sm_tf2.py', # change this to the name of your entry-point file
source_dir='training_script',
role=role,
framework_version='2.0.0', # specify the TensorFlow version
py_version='py3',
script_mode=True,
hyperparameters={'epochs': 5},
train_instance_count=1,
train_instance_type='ml.p2.xlarge')
```
## Attach the Training Job to the Estimator
- Load and use the `train_job_name` defined in the previous notebook.
- Alternatively, click `Training` > `Training jobs` in the left-hand menu of the SageMaker console and copy the training_job_name from the previous step.
- Note that you can also list training jobs easily from the Jupyter notebook, without opening the console, using the CLI command below.
```shell
!aws sagemaker list-training-jobs
```
```
%store -r train_job_name
#estimator = estimator.attach(training_job_name=) ## Configure with your previous cifar10 job name
estimator = estimator.attach(training_job_name=train_job_name) ## Configure with your previous cifar10 job name
```
## Create the Endpoint
The code cell below creates a SageMaker endpoint. Because it launches a deployment instance, this takes roughly 7 to 10 minutes.
```
%%time
predictor = estimator.deploy(initial_instance_count=1,instance_type='ml.m4.xlarge')
```
## Run Inference Against the Endpoint
To verify that the endpoint works correctly, you will generate random data and run a prediction.
### Inference on randomly generated (fake) data
```
# Creating fake prediction data
import numpy as np
data = np.random.randn(1, 32, 32, 3)
print("Predicted class is {}".format(np.argmax(predictor.predict(data)['predictions'])))
```
## CIFAR-10 Test Data Inference Example 1
The cells below first display the image and then show its actual label.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import utils.p_utils
from importlib import reload
utils.p_utils = reload(utils.p_utils)
from utils.p_utils import load_cfar10_batch, display_img
cifar10_dataset_file_path ='data/test/test_batch'
features, labels = load_cfar10_batch(cifar10_dataset_file_path)
sample_id = 30
feature, label = display_img(features,labels, sample_id )
```
Feed the input data to the predictor we created to get the scores for each of the 10 classes, then take the class with the largest score as the predicted label.
```
import numpy as np
r_feature = feature[np.newaxis, :, :, :]
print("shape: ", r_feature.shape)
pred_response = predictor.predict(r_feature)
print(pred_response)
print("Predicted class is {}".format(np.argmax(pred_response['predictions'])))
```
## CIFAR-10 Test Data Inference Example 2
```
sample_id = 50
feature, label = display_img(features,labels, sample_id )
import numpy as np
r_feature = feature[np.newaxis, :, :, :]
print("shape: ", r_feature.shape)
pred_response = predictor.predict(r_feature)
print(pred_response)
print("Predicted class is {}".format(np.argmax(pred_response['predictions'])))
```
# Clean Up Resources
To avoid charges to your AWS account for the resources used in this workshop, delete the SageMaker endpoint.
```
# sagemaker_session.delete_endpoint(predictor.endpoint)
```
This example notebook uses the averaging functions found in the diff_classifier msd module to find average MSD profiles over input MSD datasets using precision-weighted averaging. Precision is the inverse of the squared standard error, so this increases the contribution of videos that have many particles and of more homogeneous datasets to the final calculated MSD.
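The weighting scheme itself can be sketched in a few lines of NumPy (a simplified illustration of the idea, not diff_classifier's actual code): each profile's weight is 1/SEM², so low-error profiles dominate the average.

```python
import numpy as np

# Simplified sketch of precision-weighted averaging (not diff_classifier's code).
# Two MSD profiles with their standard errors; a smaller SEM means a larger weight.
msd_a = np.array([0.10, 0.20, 0.30])
sem_a = np.array([0.01, 0.01, 0.02])
msd_b = np.array([0.12, 0.26, 0.40])
sem_b = np.array([0.05, 0.05, 0.05])

weights = 1.0 / np.stack([sem_a, sem_b]) ** 2          # precision = 1 / SEM^2
values = np.stack([msd_a, msd_b])
weighted_mean = (weights * values).sum(axis=0) / weights.sum(axis=0)
print(weighted_mean)  # dominated by the low-SEM profile msd_a
```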
```
import numpy as np
import diff_classifier.aws as aws
import diff_classifier.msd as msd
to_track = []
result_futures = {}
start_knot = 31 #Must be unique number for every run on Cloudknot.
frames = 651
fps = 100.02
umppx = 0.07
folder = '09_26_18_tissue_study' #Folder in AWS S3 containing files to be analyzed
bucket = 'hpontes.data'
vids = 5
types = ['10K', '1K', '5K', 'COOH']
slices = [1, 2, 3]
for typ in types:
    for slic in slices:
        for num in range(1, vids+1):
            #to_track.append('100x_0_4_1_2_gel_{}_bulk_vid_{}'.format(vis, num))
            to_track.append('{}_tissue_S{}_XY{}'.format(typ, slic, num))
geomean = {}
gSEM = {}
for sample_name in to_track:
    # Users can toggle between using pre-calculated geomean files and calculating new
    # values by commenting out the relevant lines of code within the for loop.
    aws.download_s3('{}/geomean_{}.csv'.format(folder, sample_name), 'geomean_{}.csv'.format(sample_name), bucket_name=bucket)
    aws.download_s3('{}/geoSEM_{}.csv'.format(folder, sample_name), 'geoSEM_{}.csv'.format(sample_name), bucket_name=bucket)
    geomean[sample_name] = np.genfromtxt('geomean_{}.csv'.format(sample_name))
    gSEM[sample_name] = np.genfromtxt('geoSEM_{}.csv'.format(sample_name))
    #aws.download_s3('{}/msd_{}.csv'.format(folder, sample_name), 'msd_{}.csv'.format(sample_name), bucket_name=bucket)
    #geomean[sample_name], gSEM[sample_name] = msd.geomean_msdisp(sample_name, umppx=umppx, fps=fps,
    #                                                             remote_folder=folder, bucket=bucket)
    print('Done with {}'.format(sample_name))
geodata = {}
for typ in types:
    to_avg = []
    for sample in to_track:
        if typ in sample:
            to_avg.append(sample)
    weights, wh1 = msd.precision_weight(to_avg, gSEM)
    geodata[typ] = msd.precision_averaging(to_avg, geomean, gSEM, weights,
                                           bucket=bucket, folder=folder,
                                           experiment='{}_tissue'.format(typ))
for typ in types:
    print('{}: {} (95% CI: {} - {}) um2/s'.format(typ, np.exp(geodata[typ].geomean[100])/4,
                                                  np.exp(geodata[typ].geomean[100] - geodata[typ].geostd[100])/4,
                                                  np.exp(geodata[typ].geomean[100] + geodata[typ].geostd[100])/4))
    #print('{}: '.format(typ))
```
Note that in cases where two or more averaging steps are needed (for instance, if the user takes 5 videos per well with a total of four wells), the averaging steps can be performed consecutively. The `msd.binning` function is a helpful tool for such multi-step averaging, defining the bins over which to average.
```
msd.plot_all_experiments(to_track[45:60], yrange=(10**-3, 10**1), bucket=bucket, folder=folder)
msd.plot_all_experiments(['10K_tissue', '5K_tissue', '1K_tissue', 'COOH_tissue'],
yrange=(10**-3, 10**1), bucket=bucket, folder=folder)
msd.plot_all_experiments(to_track[0:15],
yrange=(10**-3, 10**1), bucket=bucket, folder=folder)
msd.plot_all_experiments(to_track[15:30],
yrange=(10**-3, 10**1), bucket=bucket, folder=folder)
msd.plot_all_experiments(to_track[30:45],
yrange=(10**-3, 10**1), bucket=bucket, folder=folder)
msd.plot_all_experiments(to_track[45:60],
yrange=(10**-3, 10**1), bucket=bucket, folder=folder)
```
### Autoencoder on the `MNIST` dataset.
```
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Reshape, Conv2DTranspose, MaxPooling2D, UpSampling2D, LeakyReLU
from tensorflow.keras.activations import relu
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
import tensorflow_datasets as tfds
import os  # needed later for os.makedirs('./models', ...)
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
from packaging.version import parse as parse_version
```
### Loading the `mnist` dataset.
```
(ds_train, ds_test_), ds_info = tfds.load('mnist',
split=['train', 'test'],
shuffle_files=True,
as_supervised=True,
with_info=True)
batch_size = 256
def preprocess(image, label):
    image = tf.cast(image, tf.float32)
    image = image/255.
    return image, image  # autoencoder: the target is the image itself
ds_train = ds_train.map(preprocess)
ds_train = ds_train.cache() # put dataset into memory
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(batch_size)
ds_test = ds_test_.map(preprocess).batch(batch_size).cache().prefetch(batch_size)
# return label for testing
def preprocess_with_label(image, label):
    image = tf.cast(image, tf.float32)
    image = tf.math.round(image/255.)
    return image, label
ds_test_label = ds_test_.map(preprocess_with_label).batch(1000)
def Encoder(z_dim):
    inputs = layers.Input(shape=[28,28,1])
    x = inputs
    x = Conv2D(filters=8, kernel_size=(3,3), strides=2, padding='same', activation='relu')(x)
    x = Conv2D(filters=8, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x)
    x = Conv2D(filters=8, kernel_size=(3,3), strides=2, padding='same', activation='relu')(x)
    x = Conv2D(filters=8, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x)
    x = Flatten()(x)
    out = Dense(z_dim)(x)
    return Model(inputs=inputs, outputs=out, name='encoder')
def Decoder(z_dim):
    inputs = layers.Input(shape=[z_dim])
    x = inputs
    x = Dense(7*7*64, activation='relu')(x)
    x = Reshape((7,7,64))(x)
    x = Conv2D(filters=64, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x)
    x = UpSampling2D((2,2))(x)
    x = Conv2D(filters=32, kernel_size=(3,3), strides=1, padding='same', activation='relu')(x)
    x = UpSampling2D((2,2))(x)
    out = Conv2D(filters=1, kernel_size=(3,3), strides=1, padding='same', activation='sigmoid')(x)
    return Model(inputs=inputs, outputs=out, name='decoder')
class Autoencoder:
    def __init__(self, z_dim):
        self.encoder = Encoder(z_dim)
        self.decoder = Decoder(z_dim)
        model_input = self.encoder.input
        model_output = self.decoder(self.encoder.output)
        self.model = Model(model_input, model_output)
autoencoder = Autoencoder(z_dim=10)
model_path = "./models/autoencoder.h5"
os.makedirs("./models", exist_ok=True)
checkpoint = ModelCheckpoint(model_path,
monitor= "val_loss",
verbose=1,
save_best_only=True,
mode= "auto",
save_weights_only = False)
early = EarlyStopping(monitor= "val_loss",
mode= "auto",
patience = 5)
callbacks_list = [checkpoint, early]
autoencoder.model.compile(
loss = "mse",
optimizer=tf.keras.optimizers.RMSprop(learning_rate=3e-4))
#metrics=[tf.keras.losses.BinaryCrossentropy()])
autoencoder.model.fit(ds_train, validation_data=ds_test,
epochs = 100, callbacks = callbacks_list)
images, labels = next(iter(ds_test))
autoencoder.model = load_model(model_path)
outputs = autoencoder.model.predict(images)
# Display
grid_col = 10
grid_row = 2
f, axarr = plt.subplots(grid_row, grid_col, figsize=(grid_col*1.1, grid_row))
i = 0
for row in range(0, grid_row, 2):
    for col in range(grid_col):
        axarr[row,col].imshow(images[i,:,:,0], cmap='gray')
        axarr[row,col].axis('off')
        axarr[row+1,col].imshow(outputs[i,:,:,0], cmap='gray')
        axarr[row+1,col].axis('off')
        i += 1
f.tight_layout(pad=0.1, h_pad=0.2, w_pad=0.1)
plt.show()
```
# Random Forests - Redux
From Fastai ML1 [Lesson 1 Intro to Random Forests](https://github.com/fastai/fastai/blob/master/courses/ml1/lesson1-rf.ipynb)
This notebook turned into a redux of my [first RF Code Along](https://github.com/WNoxchi/Kaukasos/blob/master/FAML1/Lesson1-RandomForests.ipynb) with notes.
---
## 1 Imports
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.structured import *
from pandas_summary import DataFrameSummary
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from IPython.display import display
from sklearn import metrics
PATH = "../../data/competitions/bluebook-for-bulldozers/"
!ls {PATH}
```
## 2. Data
`low_memory=False` tells Pandas to read more of the file to decide what the types are.
`parse_dates=[...]` is used for any columns that contain dates.
```
df_raw = pd.read_csv(f'{PATH}Train.csv', low_memory=False, parse_dates=['saledate'])
```
Entering a DataFrame to display it will truncate it if it's too long.
This function sets the truncation threshold to 1000 rows & cols.
```
def display_all(df):
    with pd.option_context("display.max_rows", 1000):
        with pd.option_context("display.max_columns", 1000):
            display(df)
```
`df_raw.tail()` will show the last few rows of the DataFrame. By default it shows the
cols at top and rows on side. There're a lot of cols, so using `.transpose()`
displays the table on its side.
```
# display_all(df_raw.tail().transpose())
# display_all(df_raw.describe(include='all').transpose())
# df_raw.head()
```
[RMSLE](https://www.kaggle.com/c/bluebook-for-bulldozers#evaluation) is used in the Kaggle competition. So by taking the log of all sale prices, we can just use RMSE later to calculate our loss. RMSLE: $\sqrt{\frac{1}{n}\sum_i\big(\log(\text{prediction}_i) - \log(\text{actual}_i)\big)^2}$; in other words it penalizes ratios, not absolute differences.
Here we also replace a column w/ a new column:
```
df_raw.SalePrice = np.log(df_raw.SalePrice)
```
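A quick numeric sanity check of that claim (purely illustrative, made-up prices): RMSE computed on the logged prices is exactly the RMSLE of the raw prices, and scaling all prices by a constant leaves it unchanged, confirming it scores ratios.

```python
import numpy as np

def rmse(x, y):
    return np.sqrt(((x - y) ** 2).mean())

preds = np.array([9500.0, 62000.0, 31000.0])
actuals = np.array([10000.0, 60000.0, 30000.0])

rmsle = rmse(np.log(preds), np.log(actuals))
# Scaling every price by 10x changes all absolute errors but leaves RMSLE untouched:
rmsle_scaled = rmse(np.log(preds * 10), np.log(actuals * 10))
print(rmsle, rmsle_scaled)  # identical values
```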
### 2.2.2 Initial Processing
A Random Forest is something of a universal machine learning technique. The target can be a category or a continuous variable, and it can predict with columns of almost any kind (pixel data, zip codes, etc). RFs generally don't overfit (and overfitting is easy to prevent). RFs generally don't require a validation set, and can tell you how well they generalize even with only one dataset. RFs have few if any statistical assumptions about your data, and require very few pieces of feature engineering.
`model.fit(`__`Independent Variables`__`, `__`Dependent Variable`__`)`
Indep: used to predict; Dep: predicted. `pandas.DataFrame.drop(..)` returns a new DataFrame w/ a list of rows/cols removed. So we use everything but the SalePrice to predict the SalePrice.
```
model = RandomForestRegressor(n_jobs=-1) # n_jobs: number of cores to use. -1 ==> all
model.fit(df_raw.drop('SalePrice', axis=1), df_raw.SalePrice) # fails for now: the data still contains strings
```
This dataset contains a mix of **continuous** and __categorical__ variables. Most ML models (incl. RFs) require numbers -- so we need to convert all our cols to numbers.
`sklearn.ensemble.RandomForestRegressor`: predict __continuous__ variables
`sklearn.ensemble.RandomForestClassifier`: predict __categorical__ variables
---
One issue is that `saledate` was parsed as a date, and a date still has to be turned into a number. If we look at it, it's a `datetime64` -- which is __not__ a number. So we need to do our first bit of feature engineering.
```
df_raw.saledate[:5]
```
Inside `fastai.structured` is a function called `add_datepart`, which we'll use to fix this.
__Overview of `add_datepart`:__
1. We pass in a dataframe and a field name (in this case `'saledate'`) to `add_datepart(df, fldname)`. We can't do `df.fldname` because that would look for a column literally named 'fldname'. `df[fldname]` is how we grab a column when that column's name is stored in the variable `fldname`. This gives us the field itself, the `pd.Series`.
2. `add_datepart` then goes through a list of date attribute strings ('Year', 'Month', 'Dayofyear', etc) and builds new columns by looking them up in `fld`'s datetime attributes (`fld.dt`).
3. It finally drops the original `fldname` column (`'saledate'` here) because it isn't numerical.
---
***NOTE***: `'saledate'` is a date type because we told Pandas to make it such via `parse_dates=["saledate"]`. That's why it has the relevant datetime attributes.
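A minimal sketch of the idea behind `add_datepart` (a simplified stand-in, not fastai's actual implementation; the column names mimic its `saleYear`-style output):

```python
import pandas as pd

def add_datepart_sketch(df, fldname):
    "Expand a datetime column into numeric date parts, then drop the original column."
    fld = df[fldname]  # df[fldname], not df.fldname: the name lives in a variable
    prefix = fldname[:-4] if fldname.endswith('date') else fldname
    for attr in ['year', 'month', 'day', 'dayofweek', 'dayofyear']:
        df[prefix + attr.capitalize()] = getattr(fld.dt, attr)
    df.drop(fldname, axis=1, inplace=True)

df = pd.DataFrame({'saledate': pd.to_datetime(['2006-11-16', '2004-03-26'])})
add_datepart_sketch(df, 'saledate')
```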
```
add_datepart(df_raw, 'saledate')
df_raw.saleYear.head()
```
Now the datatype for `'saledate'` is numerical (`int64`). If we check the columns of the DataFrame we'll see the new ones added by `add_datepart`:
```
df_raw.columns
```
This isn't enough. One more bit of feature engineering is needed: there are strings in the dataset (`'Low'`, `'High'`, etc). FastAI has a function, `train_cats`, to automatically create categorical variables for all strings, by creating a backing column that maps integers to strings.
FastAI also has an `apply_cats` function to preserve the training-set category mappings for use on the validation & test sets.
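A rough sketch of what these two functions do (simplified versions; the real fastai implementations handle more cases):

```python
import pandas as pd

def train_cats_sketch(df):
    "Turn string (object) columns into ordered pandas categoricals, in place."
    for name, col in df.items():
        if pd.api.types.is_object_dtype(col):
            df[name] = col.astype('category').cat.as_ordered()

def apply_cats_sketch(df, trn):
    "Reuse trn's category mappings on df so integer codes line up across datasets."
    for name, col in df.items():
        if name in trn.columns and isinstance(trn[name].dtype, pd.CategoricalDtype):
            df[name] = pd.Categorical(col, categories=trn[name].cat.categories,
                                      ordered=True)

train = pd.DataFrame({'UsageBand': ['Low', 'High', 'Medium']})
test  = pd.DataFrame({'UsageBand': ['High', 'Low']})
train_cats_sketch(train)
apply_cats_sketch(test, train)
```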
```
df_raw.head()
train_cats(df_raw)
```
Now we can access categorical variables via the `.cat` attribute, just as we could with `.dt` for datetimes:
```
df_raw.UsageBand.cat.categories
```
'High', 'Low', 'Medium' in `UsageBand` will be seen by the RF as cats `0`, `1`, `2`. It'll form a split first on either `0` vs `1, 2`, or `2` vs `0, 1`. That translates to 'High' vs 'Low' & 'Medium' or 'Medium' vs 'High' & 'Low'. That's a bit odd, and though the DT can get to a correct split regardless, by using a sensible ordering we can ensure it gets there in fewer splits - thus improving our model.
So we reorder 'High', 'Low', 'Medium' st. they're ordered wrt the category numbers, ie: so that any split starts by comparing 'High' and 'Low':
'High','Medium','Low' $\longrightarrow$ 0, 1, 2
`ordered=True` preserves the supplied order; `inplace=True` changes the DataFrame in place instead of returning a new one.
```
df_raw.UsageBand.cat.set_categories(['High','Medium','Low'], ordered=True, inplace=True)
print(df_raw.UsageBand[100:110])
print(df_raw.UsageBand.cat.codes[100:110])
```
### 2.2.3 Preprocessing
We still have a number of Null values. Here we display the fraction of Null values in each category:
```
display_all(df_raw.isnull().sum().sort_index()/len(df_raw))
```
First we'll save our dataframe to disk in feather format since we have a good starting point.
```
os.makedirs(PATH + 'tmp', exist_ok=True)
df_raw.to_feather(PATH + 'tmp/raw')
```
Now we want to replace the string categories with their numeric codes, handle missing continuous values, and pull out the dependent variable (`SalePrice`) into a separate variable. `fastai.structured.proc_df` is what we'll use to do this.
---
**Overview of `proc_df`:**
`df:` DataFrame | `y_fld`: name of dependent variable
• Makes copy of DataFrame. • Grabs y values. • Drops DepVar from DataFrame. • Then fixes missing via `fastai.structured.fix_missing`.
>**Overview of `fix_missing`:**
>
>• Check that the column has missing values (`pd.isnull(col).sum() != 0`). • Create a new column with the same name as the original + '_na': a boolean column that's **1** any time a value is missing, **0** otherwise. • Then replace all Null values with the column's median.
>
>ie: All NaNs replaced by col's median. New col added keeping track of NaNs.
That is done for numeric variables (cols) -- Pandas automatically handles categorical variables by setting them to `-1` if missing.
• Then call `fastai.structured.numericalize`.
>**Overview of `numericalize`:**
>
>• If the column is **not** numeric and **is** a categorical type: replace the column w/ its category codes (integers) + 1.
---
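Put together, the core of `proc_df` might be sketched like this (illustrative only; the real function handles more edge cases):

```python
import numpy as np
import pandas as pd

def proc_df_sketch(df, y_fld):
    "Split off y, median-fill numeric NaNs (tracking them), numericalize categoricals."
    df = df.copy()
    y = df.pop(y_fld).values                       # grab y and drop the DepVar
    for name, col in df.items():
        if pd.api.types.is_numeric_dtype(col):     # fix_missing: numeric cols only
            if col.isnull().sum():
                df[name + '_na'] = col.isnull()    # keep track of which rows were NaN
                df[name] = col.fillna(col.median())
        else:                                      # numericalize: category codes + 1
            df[name] = col.astype('category').cat.codes + 1   # NaN code -1 becomes 0
    return df, y

raw = pd.DataFrame({'MachineHours': [100., np.nan, 300.],
                    'UsageBand':    ['Low', None, 'High'],
                    'SalePrice':    [9.1, 9.5, 10.2]})
df, y = proc_df_sketch(raw, 'SalePrice')
```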
```
df, y, nans = proc_df(df_raw, 'SalePrice')
```
'SalePrice' is now absent from the DataFrame's columns, and all columns with a non-zero value for null-fractions have corresponding '_na' columns.
```
df.columns
```
If we check the DataFrame, we see that everything is now a number:
```
df.head()
```
Now we have something we can pass into a Random-Forest Regressor.
```
model = RandomForestRegressor(n_jobs=-1)
model.fit(df, y)
model.score(df, y)
```
***NOTE***: Random Forests are *trivially* parallelizable: computation time decreases more or less linearly with the number of CPUs.
The score is the R$^2$ value. Range is < 1. 1 is perfect. If your R$^2$ score is < 0 your model is worse than predicting the mean. [FastAI ML1 L2 bit on R2](https://youtu.be/blyXCk4sgEg?t=718). **Gist of R$^2$:** *the ratio between how good your model is (RMSE) vs. how good is the naïve mean model (RMSE)*.
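That ratio view of R$^2$ can be written out in a few lines (the numbers here are arbitrary):

```python
import numpy as np

def r2_sketch(y_true, y_pred):
    "1 minus (your model's squared error / the naive mean-model's squared error)."
    ss_res = np.sum((y_true - y_pred) ** 2)          # your model
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # always-predict-the-mean model
    return 1 - ss_res / ss_tot

y = np.array([9.0, 9.5, 10.0, 10.5])
perfect = r2_sketch(y, y)                        # 1.0: no error at all
mean_model = r2_sketch(y, np.full(4, y.mean()))  # 0.0: no better than the mean
```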
We'll create a simple validation set to test this. The dataset is sorted by date, so the most recent `n` rows will make up the validation set.
```
def split_vals(a, n): return a[:n].copy(), a[n:].copy()
n_valid = 12000  # same as Kaggle's test set size: 12000 rows for the val set
n_trn = len(df) - n_valid # all else in trn set
raw_train, raw_valid = split_vals(df_raw, n_trn)
X_train, X_valid = split_vals(df, n_trn)
y_train, y_valid = split_vals(y, n_trn)
X_train.shape, y_train.shape, X_valid.shape
```
## 3. Random Forests
### 3.1 Base Model
Now we'll run our model again, but with the separate training and validation sets:
```
def rmse(x,y): return math.sqrt(((x-y)**2).mean())
def print_score(m):
res = [rmse(m.predict(X_train), y_train), rmse(m.predict(X_valid), y_valid),
m.score(X_train, y_train), m.score(X_valid, y_valid)]
if hasattr(m, 'oob_score_'): res.append(m.oob_score_)
print(res)
model = RandomForestRegressor(n_jobs=-1)
%time model.fit(X_train, y_train)
print_score(model)
```
There was some overfitting going on -- but this 0.252 loss gets into the top 25% of the Kaggle public leaderboard.
---
[Fast.ai ML1 L2](https://youtu.be/blyXCk4sgEg?t=1114) p.much picks up from here.
Since the data/competition is a time-series prediction problem, you want your validation set to reflect that by being a range of consecutive dates (specifically some tail slice of the dataset).
### 3.2 Speeding things up
One way to speed up iteration time during model development is to use the `subset` parameter in `proc_df`, which returns a randomly sampled subset of the data to work on.
We need to make sure our train-subset doesn't overlap with our validation set. We also want to use our original val set, and **not** overwrite it, so of the 30k subset, we set the first 20k (this may overlap a bit..) to be training, and throw the rest away.
* Create `df_trn`, `y_trn` from a random 30k subset of `df_raw`.
* Set `X_train`, `y_train` to be the first 20k rows of `df_trn`, `y_trn`.
```
df_trn, y_trn, nans = proc_df(df_raw, 'SalePrice', subset=30000)
X_train, _ = split_vals(df_trn, 20000)
y_train, _ = split_vals(y_trn, 20000)
model = RandomForestRegressor(n_jobs=-1) ## initialize Model
%time model.fit(X_train, y_train) ## train Model
print_score(model) ## run predictions - still using orig valset
```
Output key: Train RMSE | Valid RMSE | Train R² | Valid R²
### 3.3 Single Tree
Scikit-Learn calls trees estimators. `max_depth` limits the depth of splits. `bootstrap` toggles the random row sampling that makes a Random Forest random; `bootstrap=False` gives a deterministic tree.
```
# A small Deterministic Decision Tree
model = RandomForestRegressor(n_estimators=1, max_depth=3, bootstrap=False, n_jobs=-1)
model.fit(X_train, y_train)
print_score(model)
```
`fastai.structured.draw_tree` lets us visualize Decision Trees. `model.estimators_[0]` returns the 1st estimator from an array.
```
# model.estimators_
draw_tree(model.estimators_[0], df_trn, precision=3)
```
We have 20k samples at the start of this tree - because that's the size we made the training set when we split our data.
Looking at the first node: in our whole training set (X_train) there're 20k rows, the average sale price is ~10.1, and if we built a model that always predicted that average, our MSE would be 0.452. In other words, that's the denominator of an R². This 1st node is the most basic model: a tree with zero splits - just predict the average.
The best single split the RF is able to make is based on whether the Coupler_System is ≤ 0.5 (True / False). If it does that, the MSE of Coupler_System > 0.5 (False) goes to 0.114: a large improvement. In the other group: Coupler_System ≤ 0.5 (True) improves slightly to 0.397. The False group is a small fraction: ~1,700/20,000.
---
**How to find the best possible split with a Single Decision Tree:**
* **For each** categorical variable:
* **For each** value of that variable:
* **Find** the split with the Minimum weighted-average MSE
* **Return** the category:value split w/ Minimum weighted-average MSE
Equivalently: take the MSE of a hypothetical model in which every value on each side of a binary split is set to that side's average.
---
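The search loop above can be sketched as a brute-force scan (real implementations sort each column and scan more efficiently; the toy data is made up):

```python
import numpy as np

def best_split(X, y):
    "Return (column, value, score) of the split minimizing weighted-average MSE."
    best = (None, None, np.inf)
    for col in range(X.shape[1]):
        for val in np.unique(X[:, col]):
            lhs = X[:, col] <= val
            rhs = ~lhs
            if lhs.sum() == 0 or rhs.sum() == 0:
                continue
            # weighted average of each side's MSE about its own mean
            score = (lhs.sum() * y[lhs].var() + rhs.sum() * y[rhs].var()) / len(y)
            if score < best[2]:
                best = (col, val, score)
    return best

X = np.array([[0., 7.], [0., 3.], [1., 7.], [1., 3.]])
y = np.array([1.0, 1.0, 5.0, 5.0])
col, val, score = best_split(X, y)   # splitting column 0 at 0.0 separates y perfectly
```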
Now we can improve this Decision Tree by leaving `max_depth=None` so splitting continues until each leaf node contains a single sample. If we do that (surprise) we get a model that perfectly overfits our training data. Our validation R² is not 1, but it is better than our shallow tree's.
```
model = RandomForestRegressor(n_estimators=1, bootstrap=False, n_jobs=-1)
model.fit(X_train, y_train)
print_score(model)
```
### 3.4 Bagging
#### 3.4.1 Intro to Bagging
We can improve these D.Trees by making forests of them. We create forests with a statistical technique called 'bagging'. Any kind of model can be bagged. A Random Forest is a way of bagging trees.
What if we created `N` different models, each of which was only somewhat predictive, but the models weren't at all correlated with each other? That means the `N` models would have had to find different insights into the relationships in the data. If you took the average of those `N` models, you're effectively bringing in the insights from each of them. This is Ensembling.
What if we made a bunch of these big, deep, strongly-overfit trees, but each one only gets a random 1/10th of the data. So each tree will be perfect on that subset but bad at the rest. So each of the trees will be better than nothing, and all overfit in different ways on different things because they use different random samples.
So they all have errors, but the errors are random. The average of a bunch of random errors is **Zero**.
So if we take the average of all these trees (ea. of which trained on a dfnt rand subset) the errors will average out to zero and what's left is the true relationship. *That* is a **Random Forest**. [Lecture 2](https://youtu.be/blyXCk4sgEg?t=2971)
---
1. Grab random subset of data.
2. Build a Decision Tree on it.
3. Put that D.Tree aside and repeat `N` times
4. For each D.Tree make predictions by running test data through tree to get to leaf node
5. Take average in that leaf node $\forall$ the trees
6. Average them all together.
To do that we call `RandomForestRegressor`. An estimator (count specified by `n_estimators`) is a D.Tree.
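Those steps can also be done by hand with scikit-learn's single trees, as a sketch of what `RandomForestRegressor` automates (the synthetic data below is made up):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
X = rng.uniform(size=(1000, 5))
y = 3 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)

trees = []
for _ in range(20):
    idx = rng.integers(0, len(X), len(X))                      # 1. random bootstrap subset
    trees.append(DecisionTreeRegressor().fit(X[idx], y[idx]))  # 2-3. fit, set aside

preds = np.stack([t.predict(X) for t in trees])                # 4. run data through each tree
bagged = preds.mean(axis=0)                                    # 5-6. average them together
```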
---
The key insight is to construct multiple models that are better than nothing and whose errors are as uncorrelated with each other as possible. If the errors are correlated this breaks down.
[Lecture 2:](https://youtu.be/blyXCk4sgEg?t=3146) For subsets, Scikit-Learn picks out `n` rows *with* replacement. This is called bootstrapping. So instead of picking out 1/10 of the rows of an `n`-row dataset, we pick out `n` rows with replacement; on average this represents ~63.2% of the distinct rows ($1 - 1/e$), with a bunch represented multiple times. ***(ie: ~63.2% of the distinct rows are used in any given tree).***
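The fraction of distinct rows in a bootstrap sample converges to $1 - 1/e \approx 63.2\%$, which is easy to verify empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sample = rng.integers(0, n, n)               # n rows drawn with replacement
frac_unique = len(np.unique(sample)) / n     # distinct rows any one tree sees

print(frac_unique, 1 - 1 / np.e)             # both about 0.632
```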
*Aside:* The whole point of Machine Learning modeling is to find a model that tells you which variables are important and how they interact together to drive your dependent variable. The difference between using 'Tree Space / Random-Forest Space' and 'Euclidean Space' to find nearest neighbors is the difference between a model that makes good predictions and one that makes meaningless predictions.
---
In **Bagging** you want each of your individual estimators / trees to be as predictive as possible and for their predictions to be as uncorrelated as possible. Leo Breiman, the inventor of RFs, wrote about this in the 1990s: trying to come up with predictive but poorly-correlated trees.
Recent research has shown correlation is more important than individual predictiveness: so recent methods focus on creating trees which are less accurate on their own, and aim to minimize correlation between trees. Scikit-Learn has `ExtraTrees`[Regressor/Classifier], with the exact same API as RandomForest[R/C] (it can be dropped in to replace it), which stands for "Extremely-Randomized Trees". Instead of trying every split of every variable, it randomly tries a few splits of a few variables. It's much faster to train and has much more randomness; in the time saved you can build more trees and get better generalization.
```
model = RandomForestRegressor(n_jobs=-1) # default is 10 estimators
model.fit(X_train, y_train)
print_score(model)
```
We'll grab predictions for each individual tree and look at one example. After you've built a RF, each tree is stored in the attribute: `.estimators_`
```
preds = np.stack([t.predict(X_valid) for t in model.estimators_])
preds[:,0], np.mean(preds[:,0]), y_valid[0] # each tree's prediction for the first row, their mean, and the actual
preds.shape # 12,000 predictions for each of 10 trees
```
Notice that most of the predictions were a bit off, but the mean of all of them is pretty good. 9.459 avg, 9.105 actual.
Here's a plot going through each of the 10 trees, taking the mean of all the predictions up to the i$^{th}$ tree (1st tree, 1st 2 trees, 1st 3 .. etc), and plot the R$^2$. Notice the final value on the plot matches the R$^2$ value of the model up above.
```
plt.plot([metrics.r2_score(y_valid, np.mean(preds[:i+1], axis=0)) for i in range(10)]);
```
Also note the plot's flattening out. (tested in original notebook and course nb): adding more trees won't improve the model much beyond this point.
The `n_estimators` hyperparameter is chosen based on:
1. amount of time you have for fitting
2. point of diminishing returns
More trees slows model fit/train time, but fewer trees can still offer valuable insight. J.Howard will often work through a day with a 20-tree RF, and at the end expand that to a 1000-tree model.
#### 3.4.2 Out-of-Bag (OoB) Score
Sometimes your dataset will be too small to create a validation set and a good model at the same time. There's a trick unique to Random Forests for this:
Recognize that for each tree, some dataset rows did not get used. So pass those rows through that tree as its validation set.
So you end up with a different validation set for each tree. Now to calculate our prediction we average all the trees where that row was not used for training. As long as you have enough trees every row will appear in the OoB sample for one of them at least.
So you create an OoB prediction by averaging all the trees you didn't use to train each individual row, then calculate your RMSE, R2, etc on that.
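Written out by hand on synthetic data, a sketch of what `oob_score=True` computes internally might look like this:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

n_trees = 50
preds = np.full((n_trees, len(y)), np.nan)
for i in range(n_trees):
    idx = rng.integers(0, len(y), len(y))            # bootstrap rows for this tree
    oob = np.setdiff1d(np.arange(len(y)), idx)       # rows this tree never saw
    tree = DecisionTreeRegressor().fit(X[idx], y[idx])
    preds[i, oob] = tree.predict(X[oob])             # predict only on held-out rows

oob_pred = np.nanmean(preds, axis=0)   # average only the trees that skipped each row
oob_r2 = 1 - np.mean((oob_pred - y) ** 2) / y.var()
```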
You can do this automatically by specifying the `oob_score=True` parameter in Scikit-Learn's `RandomForestRegressor`, creating a `.oob_score_` attribute in the resulting model.
```
model = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
model.fit(X_train, y_train)
print_score(model)
```
The OoB Score will usually slightly underestimate the generalizability of the model -- the more trees you have, the smaller that underestimate - but it works well enough anyway.
### 3.5 Reducing Over-Fitting
#### 3.5.1 Subsampling
One of the easiest ways to avoid over-fitting is also one of the best ways to speed up analysis: *subsampling*. Let's return to using our full dataset, so we can demonstrate the impact of this technique.
___
***NOTE***: before, we took a subset of 30k rows of the data and built every model on that, meaning every tree in the RF uses a different subset of that 30k subset. Why not pick a totally different subset of 30k each time? ie: leave the entire dataset of ~390k records as is, and if we want to make things faster, pick a different subset of 30k each time. So rather than bootstrapping the entire set of rows, let's just randomly sample a subset of the data.
---
So we'll do this by calling `proc_df` without our subset parameter:
```
df_trn, y_trn, nans = proc_df(df_raw, 'SalePrice')
X_train, X_valid = split_vals(df_trn, n_trn)
y_train, y_valid = split_vals(y_trn, n_trn)
```
The basic idea is this: rather than limit the total amount of data that our model can access, let's instead limit it to a *different* random subset per tree. That way, given enough trees, the model can still see *all* of the data, but for each individual tree, it'll be just as fast as if we had cut down our dataset as before.
Calling `fastai.structured.set_rf_samples(n)` will change Scikit-learn's random forests to give each tree a random sample of `n` random rows.
When we do this, now when we run a RF, it's not going to bootstrap an entire set of 391k rows (len(X_train)), it'll just grab a subset of 20k rows. So when we run `set_rf_samples(20000)` it'll still run just as quickly as if we'd've originally done a random sample of 20k, but now every tree can have access to the entire dataset.
So if we use enough D.Trees, the RF will eventually see everything.
```
set_rf_samples(20000)
model = RandomForestRegressor(n_jobs=-1, oob_score=True)
%time model.fit(X_train, y_train)
print_score(model)
```
Now with 10 estimators (default) we get an R2 of 0.858.
```
model = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
%time model.fit(X_train, y_train)
print_score(model)
```
Increasing to 40 estimators increases our R2 score from 0.858 to 0.877.
---
`set_rf_samples` will be very useful for working with massive structured datasets.
Unfortunately as of 31/10/2017 (and now 25/3/2018) it will not change how OoB is calculated (`set_rf_samples` is a hack that replaces scikit-learn's internal function call with a lambda function with the desired behavior). OoB should be turned off for now when `set_rf_samples` is used.
***NOTE*** to **reset** the RF sampling: call `fastai.structured.reset_rf_samples()`
---
When doing EDA (Exploratory Data Analysis) ie: when working and probing at a problem / doing interactive machine learning, [J.Howard](https://youtu.be/blyXCk4sgEg?t=4791) will use `set_rf_samples` (subsets) and reasonable small forests, because:
> all the insights that I'm going to get are exactly the same as the big ones, but I can run them in 3 or 4 seconds instead of hours.
> this is one of the biggest tips I can give you, and very very few people in industry or academia actually do this. Most people run all of their models on all of the data all of the time using their best possible parameters, which is just pointless.
> if you're trying to find out which features are important and how they're related to each other and so forth: having that 4th decimal place of accuracy isn't going to change any of your insights at all.
> do most of your models on a large enough sample size that your accuracy is reasonable (w/n a reasonable distance of the best accuracy you can get) and is taking a small number of seconds to train - so you can interactively do your analysis.
#### 3.5.2 Tree-building Parameters
We revert to using a full bootstrap sample in order to show the impact of other over-fitting avoidance methods.
```
reset_rf_samples()
```
Let's get a baseline for this full set to compare to.
```
model = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
%time model.fit(X_train, y_train)
print_score(model)
```
Each of the estimators will train all the way down until the leaf nodes have 1 sample in them. **NOTE** that our OoB score is better than our validation R2 score (.89278) because our validation set is **not** a random sample: it's a different time period, and it's much harder to predict an entirely different time period than it is to predict random dates.
---
Another way to reduce over-fitting is to grow our trees less deeply. We do this by specifying (with `min_samples_leaf`) that we require some minimum number of rows in every leaf node. This has 2 benefits:
* There are fewer decision rules for each leaf node; simpler models should generalize better
* The predictions are made by averaging more rows in the leaf node, resulting in less volatility
example: `min_samples_leaf=3`: stop splitting a node when any resulting leaf would hold fewer than 3 samples.
In practice this means there'll be 1 or 2 fewer levels of decisions being made, which means about half or a quarter the number of actual decision criteria we have to do -- so it'll train quicker. It means also when we look at an individual tree, rather than just taking 1 point, we're taking the average of at least 3 points -- so we expect each tree to generalize a bit better; but ea. tree is also likely to be less powerful on its own.
```
model = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, n_jobs=-1, oob_score=True)
%time model.fit(X_train, y_train)
print_score(model)
```
Values of **1, 3, 5, 10, 25** tend to work well for `min_samples_leaf`.
If working with a massive dataset without subsampling, you may need values of hundreds or thousands.
---
Here we see increasing `min_samples_leaf` from 1 to 3 has increased our Validation R$^2$ from 0.898 to 0.903. So it's a slight improvement and trains a bit faster.
---
We can also increase the amount of variation amongst the trees by not only using a sample of rows for each tree, but also using a sample of *columns* for each *split*. We do this by specifying `max_features`, which is the proportion of features to randomly select from at each split.
```
model = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
%time model.fit(X_train, y_train)
print_score(model)
```
Our model now has a validation R2 of 0.906. Our RMSE of log(price) has dropped from 0.233 to 0.229 as well. How good is that? Well on the [Kaggle public leaderboard](https://www.kaggle.com/c/bluebook-for-bulldozers/leaderboard) a loss of 0.2289 puts us in the top 20 of the competition. That's with a *"totally brainless random forest with some totally brainless minor hyperparameter tuning."*
> This is why the Random Forest is such an important - not just first step but often only step in Machine Learning. Because it's hard to screw it up (even when we didn't tune the hyperparameters we still got a good result), and a small amt of hypar tuning got a much better result.
>So any kind of model - specifically Linear-type models which have a whole bunch of statistical assumptions and require a bunch of prereqs to get right before they start to work at all - can really throw you off track because they can give you totally wrong answers about how accurate the predictions can be.
>The Random Forest generally-speaking tends to work on most datasets most of the time with most sets of hypars.
-- [J.Howard Fast.ai ML1 Lecture 2](https://youtu.be/blyXCk4sgEg?t=5370)
Random Forests work because the trees are p.much infinitely flexible. Even with a categorical variable - if there're particular categories which have different levels of price: it can gradually zoom in on those groups by using multiple splits.
You can help it by telling it the order of your CatVar, but even if you don't: it's okay, it'll just take a few more decisions to get there.
In a Linear model, or almost *any* other kind of model, especially non-tree models, encoding CatVars the way RFs do won't work - because there's no linear relationship between arbitrary identifiers.
---
>What does `max_features` do? The idea is that the less correlated your trees are w/ each other, the better. Imagine you had 1 column that was so much better than all the others at being predictive, that every single tree you built - regardless of which subset of rows - always started with that column. So the trees will all be pretty similar.
>But you can imagine there might be some interaction of variables where that interaction is more important than that individual column. So if every tree always fits on the same thing the 1st time, you're not going to get much variation in those trees.
>So what we do is in addition to just taking a subset of rows: we then at every single split point take a different subset of columns.
>This is slightly different than row sampling. In row-sampling each new tree is based on a random set of rows. For column sampling every individual binary split we choose from a different subset of columns.
>In other words: rather than looking at every possible level of every possible column: we look at every possible level of a random subset of columns. And each binary split / decision point we use a different random subset.
>How many? you get to pick. `max_features=0.5` means randomly choose half of them. The default is to use all of them. There are also some special parameters you can pass in (sqrt, log, etc).
>In practice I've found good values to be `1`, `0.5`, `'log2'`, or `'sqrt'` -- that'll give you a nice bit of variation.
-- [J.Howard Fast.ai ML1 Lecture 2](https://youtu.be/blyXCk4sgEg?t=5049)
---
As an example: here's what the Random Forest sees when it's making its split decisions:
```
df_raw.fiProductClassDesc.cat.codes
df_raw.fiProductClassDesc.cat.categories
```
$$V(x) = w\left(\frac{L}{2} - x\right)$$
$$M(x) = \frac{w}{2}\left(L x - x^2\right)$$
$$\theta(x) = \frac{- w}{2 EI}\left(\frac{L x^2}{2} - \frac{x^3}{3} +C\right)$$
$$\Delta(x) = \frac{- w}{2 EI}\left(\frac{L x^3}{6} - \frac{x^4}{12} +Cx + D \right)$$
$$\Delta(0) = \frac{-w}{2 EI}\left(\frac{L\cdot 0^3}{6} - \frac{0^4}{12} +C\cdot 0 + D \right) = 0 \therefore D = 0$$
$$\Delta(L) = \frac{-w}{2 EI}\left(\frac{L^4}{6} - \frac{L^4}{12} +CL \right) = 0 $$
$$\frac{L^4}{6} - \frac{L^4}{12} +CL = 0 $$
$$ CL = \frac{L^4}{12} - \frac{L^4}{6} $$
$$ CL = \frac{L^4}{12} - \frac{2 L^4}{12} $$
$$ CL = - \frac{L^4}{12} $$
$$ C = -\frac{L^3}{12}$$
$$\Delta(x) = \frac{- w}{2 EI}\left(\frac{L x^3}{6} - \frac{x^4}{12} -\frac{L^3}{12}x \right)$$
$$\theta(x) = \frac{-w}{2 EI}\left(\frac{L x^2}{2} - \frac{x^3}{3} - \frac{L^3}{12}\right)$$
$$\theta(0) = \frac{-w}{2 EI}\left(\frac{L \cdot 0^2}{2} - \frac{0^3}{3} - \frac{L^3}{12}\right)$$
$$\theta(0) = \frac{-w}{2 EI}\left(- \frac{L^3}{12}\right)$$
$$\theta(0) = \frac{w L^3}{24 EI}$$
$$\frac{\theta(0)}{\Delta(L/2)} = \frac{\frac{w L^3}{24 EI}}{\frac{5 w L^4}{384 E I}}$$
$$\frac{\theta(0)}{\Delta(L/2)} = \frac{384}{5\cdot 24\cdot L}$$
$$\frac{\theta(0)}{\Delta(L/2)} = \frac{16}{5 L}$$
$$\theta(0) = \frac{16}{5 L}\Delta(L/2)$$
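The derivation above can be checked symbolically with SymPy (same sign convention as the formulas):

```python
import sympy as sp

w, E, I, L, x = sp.symbols('w E I L x', positive=True)

# Deflection of a simply supported beam under uniform load w (from above)
delta = w / (2 * E * I) * (x**4 / 12 - L * x**3 / 6 + L**3 * x / 12)
theta = sp.diff(delta, x)  # slope is the derivative of deflection

theta0 = sp.simplify(theta.subs(x, 0))                    # end slope
mid = sp.simplify(delta.subs(x, sp.Rational(1, 2) * L))   # midspan deflection

assert sp.simplify(theta0 - w * L**3 / (24 * E * I)) == 0
assert sp.simplify(mid - 5 * w * L**4 / (384 * E * I)) == 0
assert sp.simplify(theta0 / mid - 16 / (5 * L)) == 0
```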
$$P_0 = (x_0,y_0)\text{, known}$$
$$P_1 = (x_1,y_1)\text{, known}$$
$$C_0 = (x_2,y_2)$$
$$C_1 = (x_3,y_3)$$
$$P_I = (x_m,y_m)\text{, known}$$
$$y - y_m = \frac{\frac{y_0+y_3}{2}-y_m}{\frac{x_0+x_3}{2}-x_m}(x-x_m)\text{, known}$$
$$\frac{\frac{y_0+y_3}{2}-y_m}{\frac{x_0+x_3}{2}-x_m} = -\frac{x_3-x_0}{y_3-y_0}\text{, known}$$
$$y - y_c= m_{perp} (x-x_c)$$
$$y - y_c= m_{perp} x-m_{perp} x_c$$
$$y = m_{perp} x-m_{perp} x_c + y_c$$
$$y = m_{perp} x + (-m_{perp} x_c + y_c)$$
$$y = m_{perp} x + b_{perp}$$
$$y=a x^4 + b x^3 +c x^2 +d x + e$$
$$ 0 = a x^4 + b x^3 +c x^2 +(d-m_{perp}) x + (e-b_{perp})$$
$$\Delta(x) = \frac{- w}{2 EI}\left(- \frac{x^4}{12}+\frac{L x^3}{6} -\frac{L^3}{12}x \right)$$
$$\Delta(x) = \frac{w}{2 EI}\left(\frac{x^4}{12} - \frac{L x^3}{6} +\frac{L^3}{12}x \right)$$
$$y_0 - m_0 x_0 = - m_0 x_1 +y_1 \tag{1}$$
$$y_3-m_1 x_3 = -m_1 x_2+y_2\tag{2}$$
$$x_m -\frac{x_0+x_3}{8}= \frac{3}{8}x_1+\frac{3}{8}x_2 \tag{3}$$
$$y_m - \frac{y_0+y_3}{8}=\frac{3}{8}y_1+\frac{3}{8}y_2 \tag{4}$$
$$Y = \begin{bmatrix}y_0 - m_0 x_0\\y_3-m_1 x_3\\x_m -\frac{x_0+x_3}{8}\\y_m - \frac{y_0+y_3}{8}\end{bmatrix}\qquad E = \begin{bmatrix}-m_0&0&1&0\\0&-m_1&0&1\\3/8&3/8&0&0\\0&0&3/8&3/8\end{bmatrix}\qquad C = \begin{bmatrix}x_1\\x_2\\y_1\\y_2\end{bmatrix}$$
$$[Y]=[E][C]$$
$$[C]=[E]^{-1}[Y]$$
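Numerically, the system can be solved with NumPy; the endpoint coordinates, slopes, and midpoint below are made-up example values:

```python
import numpy as np

# Known data (hypothetical example): endpoints P0, P3, end slopes m0, m1, midpoint Pm
x0, y0, x3, y3 = 0.0, 0.0, 4.0, 0.0
m0, m1 = 1.0, -1.0
xm, ym = 2.0, 1.5

Y = np.array([y0 - m0 * x0,
              y3 - m1 * x3,
              xm - (x0 + x3) / 8,
              ym - (y0 + y3) / 8])
E = np.array([[-m0,    0,   1,   0],
              [  0,  -m1,   0,   1],
              [3/8,  3/8,   0,   0],
              [  0,    0, 3/8, 3/8]])
x1, x2, y1, y2 = np.linalg.solve(E, Y)   # control points C0=(x1,y1), C1=(x2,y2)

# Check: the cubic Bezier at t=1/2 is (P0 + 3*C0 + 3*C1 + P3)/8, which must hit Pm
assert np.isclose((x0 + 3*x1 + 3*x2 + x3) / 8, xm)
assert np.isclose((y0 + 3*y1 + 3*y2 + y3) / 8, ym)
```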
```
# Path CurveManip.py
from IPython.display import SVG
from numpy import matrix
from numpy.linalg import inv, pinv
from numpy import transpose as T
from collections import namedtuple
from numpy import sin, cos, tan, array, pi
import numpy as np
# from SVG_lib import point
def rotate(point, base, angle, DEBUG = False):
    "Rotates the point about the base by the angle"
R = matrix(((cos(angle),-sin(angle)),(sin(angle),cos(angle))))
point = array(point)
base = array(base)
tmp = point - base
R_tmp = array(T(R*T(matrix(tmp)))).reshape((1,2))
R_point = array(R_tmp[0]+T(base))#.reshape((1,2))
if DEBUG:
Debug_rotate = namedtuple('Debug_rotate','point angle_deg tmp R_tmp_size R_tmp base R_point')
debug = Debug_rotate(point, angle/pi*180, tmp, R_tmp.size, R_tmp, base, R_point)
print(debug)
print()
return R_point
def translate(point, vector):
"Returns a point (list) that is displaced from the original point be the vector (list)"
new_point = [x0+dx for x0,dx in zip(point, vector)]
return new_point
def reflect_y_axis(point):
"returns a point mirrored about the y axis"
px, py = point
return [-px, py]
def reflect_x_axis(point):
"returns a point mirrored about the x axis"
px, py = point
return [px, -py]
def mirror(point, mirror_line = [(0,0),(0,-1)]):
"Mirror a point about a line defined by two points"
p0, p1 = mirror_line
# Find angle of mirror line
angle = np.arctan2((p1[1]-p0[1]),(p1[0]-p0[0]))
# Rotate all points to make mirror line parallel to y-axis
flip_angles = [-angle,-pi/2]
for flip_angle in flip_angles:
p0 = rotate(p0,[0,0],flip_angle)
p1 = rotate(p1,[0,0],flip_angle)
point = rotate(point,[0,0],flip_angle)
if round((p0[0]-p1[0])*10000)!=0: #check for errors
        er = "problem with flip_angle. post rotate x0, x1 = {}, {}".format(p0[0],p1[0])
raise(RuntimeError(er))
    # translate points so mirror line is on y-axis
point = translate(point,[-p0[0],0])
point = reflect_y_axis(point)
# translate back to original location
point = translate(point,[p0[0],0])
# rotate to original angle
flip_angles = [pi/2,angle]
for flip_angle in flip_angles:
point = rotate(point,[0,0],flip_angle)
p_x, p_y = float(point[0]), float(point[1])
return [p_x, p_y]
```
```
#hide
#skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
#export
from fastai.torch_basics import *
from fastai.data.all import *
#hide
from nbdev.showdoc import *
#default_exp text.core
#default_cls_lvl 3
```
# Text core
> Basic function to preprocess text before assembling it in a `DataLoaders`.
```
#export
import spacy,html
from spacy.symbols import ORTH
```
## Preprocessing rules
The following are rules applied to texts before or after they're tokenized.
```
#export
#special tokens
UNK, PAD, BOS, EOS, FLD, TK_REP, TK_WREP, TK_UP, TK_MAJ = "xxunk xxpad xxbos xxeos xxfld xxrep xxwrep xxup xxmaj".split()
#export
_all_ = ["UNK", "PAD", "BOS", "EOS", "FLD", "TK_REP", "TK_WREP", "TK_UP", "TK_MAJ"]
#export
_re_spec = re.compile(r'([/#\\])')
def spec_add_spaces(t):
"Add spaces around / and #"
return _re_spec.sub(r' \1 ', t)
test_eq(spec_add_spaces('#fastai'), ' # fastai')
test_eq(spec_add_spaces('/fastai'), ' / fastai')
test_eq(spec_add_spaces('\\fastai'), ' \\ fastai')
#export
_re_space = re.compile(' {2,}')
def rm_useless_spaces(t):
"Remove multiple spaces"
return _re_space.sub(' ', t)
test_eq(rm_useless_spaces('a b c'), 'a b c')
#export
_re_rep = re.compile(r'(\S)(\1{2,})')
def replace_rep(t):
"Replace repetitions at the character level: cccc -- TK_REP 4 c"
def _replace_rep(m):
c,cc = m.groups()
return f' {TK_REP} {len(cc)+1} {c} '
return _re_rep.sub(_replace_rep, t)
```
It starts replacing at 3 repetitions of the same character or more.
```
test_eq(replace_rep('aa'), 'aa')
test_eq(replace_rep('aaaa'), f' {TK_REP} 4 a ')
#export
_re_wrep = re.compile(r'(?:\s|^)(\w+)\s+((?:\1\s+)+)\1(\s|\W|$)')
#hide
"""
Matches any word repeated at least three times with spaces between them
(?:\s|^) Non-Capture either a whitespace character or the beginning of text
(\w+) Capture any alphanumeric character
\s+ One or more whitespace
((?:\1\s+)+) Capture a repetition of one or more times \1 followed by one or more whitespace
\1 Occurrence of \1
(\s|\W|$) Capture last whitespace, non alphanumeric character or end of text
""";
#export
def replace_wrep(t):
"Replace word repetitions: word word word word -- TK_WREP 4 word"
def _replace_wrep(m):
c,cc,e = m.groups()
return f' {TK_WREP} {len(cc.split())+2} {c} {e}'
return _re_wrep.sub(_replace_wrep, t)
```
It starts replacing at 3 repetitions of the same word or more.
```
test_eq(replace_wrep('ah ah'), 'ah ah')
test_eq(replace_wrep('ah ah ah'), f' {TK_WREP} 3 ah ')
test_eq(replace_wrep('ah ah ah ah'), f' {TK_WREP} 4 ah ')
test_eq(replace_wrep('ah ah ah ah '), f' {TK_WREP} 4 ah ')
test_eq(replace_wrep('ah ah ah ah.'), f' {TK_WREP} 4 ah .')
test_eq(replace_wrep('ah ah ahi'), f'ah ah ahi')
#export
def fix_html(x):
"Various messy things we've seen in documents"
x = x.replace('#39;', "'").replace('amp;', '&').replace('#146;', "'").replace('nbsp;', ' ').replace(
'#36;', '$').replace('\\n', "\n").replace('quot;', "'").replace('<br />', "\n").replace(
'\\"', '"').replace('<unk>',UNK).replace(' @.@ ','.').replace(' @-@ ','-').replace('...',' …')
return html.unescape(x)
test_eq(fix_html('#39;bli#146;'), "'bli'")
test_eq(fix_html('Sarah amp; Duck...'), 'Sarah & Duck …')
test_eq(fix_html('a nbsp; #36;'), 'a $')
test_eq(fix_html('\\" <unk>'), f'" {UNK}')
test_eq(fix_html('quot; @.@ @-@ '), "' .-")
test_eq(fix_html('<br />text\\n'), '\ntext\n')
#export
_re_all_caps = re.compile(r'(\s|^)([A-Z]+[^a-z\s]*)(?=(\s|$))')
#hide
"""
Catches any word in all caps, even with ' or - inside
(\s|^) Capture either a whitespace or the beginning of text
([A-Z]+ Capture one capitalized letter or more...
[^a-z\s]*) ...followed by anything that's non lowercase or whitespace
(?=(\s|$)) Look ahead for a space or end of text
""";
#export
def replace_all_caps(t):
"Replace tokens in ALL CAPS by their lower version and add `TK_UP` before."
def _replace_all_caps(m):
tok = f'{TK_UP} ' if len(m.groups()[1]) > 1 else ''
return f"{m.groups()[0]}{tok}{m.groups()[1].lower()}"
return _re_all_caps.sub(_replace_all_caps, t)
test_eq(replace_all_caps("I'M SHOUTING"), f"{TK_UP} i'm {TK_UP} shouting")
test_eq(replace_all_caps("I'm speaking normally"), "I'm speaking normally")
test_eq(replace_all_caps("I am speaking normally"), "i am speaking normally")
#export
_re_maj = re.compile(r'(\s|^)([A-Z][^A-Z\s]*)(?=(\s|$))')
#hide
"""
Catches any capitalized word
(\s|^) Capture either a whitespace or the beginning of text
([A-Z] Capture exactly one capitalized letter...
[^A-Z\s]*) ...followed by anything that's not uppercase or whitespace
(?=(\s|$)) Look ahead for a space or end of text
""";
#export
def replace_maj(t):
"Replace tokens in Sentence Case by their lower version and add `TK_MAJ` before."
def _replace_maj(m):
tok = f'{TK_MAJ} ' if len(m.groups()[1]) > 1 else ''
return f"{m.groups()[0]}{tok}{m.groups()[1].lower()}"
return _re_maj.sub(_replace_maj, t)
test_eq(replace_maj("Jeremy Howard"), f'{TK_MAJ} jeremy {TK_MAJ} howard')
test_eq(replace_maj("I don't think there is any maj here"), "i don't think there is any maj here")
#export
def lowercase(t, add_bos=True, add_eos=False):
"Converts `t` to lowercase"
return (f'{BOS} ' if add_bos else '') + t.lower().strip() + (f' {EOS}' if add_eos else '')
#export
def replace_space(t):
"Replace embedded spaces in a token with unicode line char to allow for split/join"
return t.replace(' ', '▁')
#export
defaults.text_spec_tok = [UNK, PAD, BOS, EOS, FLD, TK_REP, TK_WREP, TK_UP, TK_MAJ]
defaults.text_proc_rules = [fix_html, replace_rep, replace_wrep, spec_add_spaces, rm_useless_spaces,
replace_all_caps, replace_maj, lowercase]
defaults.text_postproc_rules = [replace_space]
```
## Tokenizing
A tokenizer is a class that must implement `__call__`. This method receives an iterator of texts and must return a generator with their tokenized versions. Here is the most basic example:
```
#export
class BaseTokenizer():
"Basic tokenizer that just splits on spaces"
def __init__(self, split_char=' ', **kwargs): self.split_char=split_char
def __call__(self, items): return (t.split(self.split_char) for t in items)
tok = BaseTokenizer()
test_eq(tok(["This is a text"]), [["This", "is", "a", "text"]])
tok = BaseTokenizer('x')
test_eq(tok(["This is a text"]), [["This is a te", "t"]])
#export
class SpacyTokenizer():
"Spacy tokenizer for `lang`"
def __init__(self, lang='en', special_toks=None, buf_sz=5000):
self.special_toks = ifnone(special_toks, defaults.text_spec_tok)
nlp = spacy.blank(lang, disable=["parser", "tagger", "ner"])
for w in self.special_toks: nlp.tokenizer.add_special_case(w, [{ORTH: w}])
self.pipe,self.buf_sz = nlp.pipe,buf_sz
def __call__(self, items):
return (L(doc).attrgot('text') for doc in self.pipe(map(str,items), batch_size=self.buf_sz))
#export
WordTokenizer = SpacyTokenizer
tok = SpacyTokenizer()
inp,exp = "This isn't the easiest text.",["This", "is", "n't", "the", "easiest", "text", "."]
test_eq(L(tok([inp,inp])), [exp,exp])
#export
class TokenizeWithRules:
"A wrapper around `tok` which applies `rules`, then tokenizes, then applies `post_rules`"
def __init__(self, tok, rules=None, post_rules=None):
self.rules = L(ifnone(rules, defaults.text_proc_rules))
self.post_f = compose(*L(ifnone(post_rules, defaults.text_postproc_rules)))
self.tok = tok
def __call__(self, batch):
return (L(o).map(self.post_f) for o in self.tok(maps(*self.rules, batch)))
f = TokenizeWithRules(BaseTokenizer(),rules=[replace_all_caps])
test_eq(f(["THIS isn't a problem"]), [[TK_UP, 'this', "isn't", 'a', 'problem']])
f = TokenizeWithRules(SpacyTokenizer())
test_eq(f(["This isn't a problem"]), [[BOS, TK_MAJ, 'this', 'is', "n't", 'a', 'problem']])
f = TokenizeWithRules(BaseTokenizer(split_char="'"), rules=[])
test_eq(f(["This isn't a problem"]), [['This▁isn', 't▁a▁problem']])
```
This is the main function that will be called during one of the processes handling tokenization. It iterates through the `batch` of texts, applies the `rules` to them, and tokenizes them.
```
texts = ["this is a text", "this is another text"]
tok = TokenizeWithRules(BaseTokenizer(), texts.__getitem__)
test_eq(tok([0,1]), [['this', 'is', 'a', 'text'],['this', 'is', 'another', 'text']])
#export
@delegates(TokenizeWithRules)
def tokenize1(text, tok, **kwargs):
"Call `TokenizeWithRules` with a single text"
return first(TokenizeWithRules(tok=tok, **kwargs)([text]))
test_eq(tokenize1("This isn't a problem", SpacyTokenizer()),
[BOS, TK_MAJ, 'this', 'is', "n't", 'a', 'problem'])
test_eq(tokenize1("This isn't a problem", tok=BaseTokenizer(), rules=[]),
['This',"isn't",'a','problem'])
#export
def parallel_tokenize(items, tok=None, rules=None, n_workers=defaults.cpus, **kwargs):
    "Calls optional `setup` on `tok` before launching `TokenizeWithRules` using `parallel_gen`"
if tok is None: tok = WordTokenizer()
if hasattr(tok, 'setup'): tok.setup(items, rules)
return parallel_gen(TokenizeWithRules, items, tok=tok, rules=rules, n_workers=n_workers, **kwargs)
```
Note that since this uses `parallel_gen` behind the scenes, the generator returned contains tuples of indices and results. There is no guarantee that the results are returned in order, so you should sort by the first item of the tuples (the indices) if you need them ordered.
```
res = parallel_tokenize(['0 1', '1 2'], rules=[], n_workers=2)
idxs,toks = zip(*L(res).sorted(itemgetter(0)))
test_eq(toks, [['0','1'],['1','2']])
#hide
res1 = parallel_tokenize(['0 1', '1 2'], tok=BaseTokenizer(), rules=[], n_workers=0)
idxs1,toks1 = zip(*L(res1).sorted(itemgetter(0)))
test_eq(toks, toks1)
```
### Tokenize texts in files
These functions preprocess the texts contained in files. Tokenized texts are saved with the same structure in a directory suffixed with `_tok` in the parent folder of `path` (override with `output_dir`), and that directory is the return value.
```
#export
fn_counter_pkl = 'counter.pkl'
fn_lengths_pkl = 'lengths.pkl'
#export
def _tokenize_files(func, files, path, output_dir=None, output_names=None, n_workers=defaults.cpus, rules=None, tok=None,
encoding='utf8', skip_if_exists=False):
"Tokenize text `files` in parallel using `n_workers`"
if tok is None: tok = WordTokenizer()
output_dir = Path(ifnone(output_dir, path.parent/f'{path.name}_tok'))
if skip_if_exists and output_dir.exists(): return output_dir
output_dir.mkdir(exist_ok=True)
if output_names is None: output_names = L(output_dir/f.relative_to(path) for f in files)
rules = partial(Path.read_text, encoding=encoding) + L(ifnone(rules, defaults.text_proc_rules.copy()))
lengths,counter = {},Counter()
for i,tok in parallel_tokenize(files, tok, rules, n_workers=n_workers):
out = func(i,output_dir)
out.mk_write(' '.join(tok))
lengths[str(files[i].relative_to(path))] = len(tok)
counter.update(tok)
save_pickle(output_dir/fn_lengths_pkl, lengths)
save_pickle(output_dir/fn_counter_pkl, counter)
return output_dir
#export
@delegates(_tokenize_files)
def tokenize_folder(path, extensions=None, folders=None, output_dir=None, skip_if_exists=True, **kwargs):
"Tokenize text files in `path` in parallel using `n_workers`"
path,extensions = Path(path),ifnone(extensions, ['.txt'])
files = get_files(path, extensions=extensions, recurse=True, folders=folders)
def _f(i,output_dir): return output_dir/files[i].relative_to(path)
return _tokenize_files(_f, files, path, skip_if_exists=skip_if_exists, **kwargs)
```
The result will be in `output_dir` (defaults to a folder in the same parent directory as `path`, with `_tok` added to `path.name`) with the same structure as in `path`. Tokenized texts for a given file will be in the file having the same name in `output_dir`. Additionally, the number of tokens in each file is stored in `output_dir/lengths.pkl` and the count of all words in `output_dir/counter.pkl`.
`extensions` defaults to `['.txt']`, and all text files in `path` are processed recursively unless you restrict the search with a list of subfolders in `folders`. `rules` (defaulting to `defaults.text_proc_rules`) are applied to each text before it goes into the tokenizer.
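For intuition, here is a minimal stdlib-only mimic of what `tokenize_folder` does on disk. Whitespace splitting stands in for the real tokenizer and rules, and `tokenize_folder_sketch` is our own illustrative name, not part of fastai:

```python
from pathlib import Path

def tokenize_folder_sketch(path, output_dir=None):
    # Walk every .txt file under `path`, "tokenize" it by splitting on
    # whitespace, and write the space-joined tokens to a mirror tree in
    # `<path>_tok` next to `path` (or in `output_dir`).
    path = Path(path)
    output_dir = Path(output_dir) if output_dir else path.parent / f'{path.name}_tok'
    for f in path.rglob('*.txt'):
        out = output_dir / f.relative_to(path)
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(' '.join(f.read_text().split()))
    return output_dir
```

The real function additionally applies the processing `rules`, parallelizes the work, and saves the `lengths.pkl` and `counter.pkl` bookkeeping files.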
```
#export
@delegates(_tokenize_files)
def tokenize_files(files, path, output_dir, output_names=None, **kwargs):
"Tokenize text `files` in parallel using `n_workers`"
if output_names is None: output_names = L(output_dir/f.relative_to(path) for f in files)
def _f(i,output_dir): return output_dir/output_names[i]
return _tokenize_files(_f, files, path, output_dir=output_dir, **kwargs)
```
### Tokenize texts in a dataframe
```
#export
def _join_texts(df, mark_fields=False):
"Join texts in row `idx` of `df`, marking each field with `FLD` if `mark_fields=True`"
text_col = (f'{FLD} {1} ' if mark_fields else '' ) + df.iloc[:,0].astype(str)
for i in range(1,len(df.columns)):
text_col += (f' {FLD} {i+1} ' if mark_fields else ' ') + df.iloc[:,i].astype(str)
return text_col.values
#hide
texts = [f"This is an example of text {i}" for i in range(10)]
df = pd.DataFrame({'text': texts, 'text1': texts}, columns=['text', 'text1'])
col = _join_texts(df, mark_fields=True)
for i in range(len(df)):
test_eq(col[i], f'{FLD} 1 This is an example of text {i} {FLD} 2 This is an example of text {i}')
#export
def tokenize_texts(texts, n_workers=defaults.cpus, rules=None, tok=None):
"Tokenize `texts` in parallel using `n_workers`"
rules = L(ifnone(rules, defaults.text_proc_rules.copy()))
outputs = L(parallel_tokenize(texts, tok=tok, rules=rules, n_workers=n_workers)
).sorted().itemgot(1)
return outputs
#export
def tokenize_df(df, text_cols, n_workers=defaults.cpus, rules=None, mark_fields=None,
tok=None, tok_text_col="text"):
"Tokenize texts in `df[text_cols]` in parallel using `n_workers` and stores them in `df[tok_text_col]`"
text_cols = [df.columns[c] if isinstance(c, int) else c for c in L(text_cols)]
#mark_fields defaults to False if there is one column of texts, True if there are multiple
if mark_fields is None: mark_fields = len(text_cols)>1
rules = L(ifnone(rules, defaults.text_proc_rules.copy()))
texts = _join_texts(df[text_cols], mark_fields=mark_fields)
outputs = L(parallel_tokenize(texts, tok, rules, n_workers=n_workers)
).sorted().itemgot(1)
other_cols = df.columns[~df.columns.isin(text_cols)]
res = df[other_cols].copy()
res[tok_text_col] = pd.Series(outputs, dtype=object)
res[f'{tok_text_col}_length'] = [len(o) for o in outputs]
return res,Counter(outputs.concat())
```
This function returns a new dataframe with the same non-text columns, a column (named by `tok_text_col`, `text` by default) that contains the tokenized texts, and a `text_length` column that contains their respective lengths. It also returns a counter of all seen words to quickly build a vocabulary afterward.
`rules` (defaulting to `defaults.text_proc_rules`) are applied to each text before it goes into the tokenizer. If `mark_fields` isn't specified, it defaults to `False` when there is a single text column, `True` when there are several. In that case, the texts in each of those columns are joined with `FLD` markers followed by the number of the field.
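Concretely, the joining step can be sketched without pandas. `FLD` is the `xxfld` marker defined at the top of the notebook, and `join_texts` here is an illustrative stand-in for the private `_join_texts`:

```python
FLD = 'xxfld'  # field marker token, as defined among the special tokens above

def join_texts(rows, mark_fields=False):
    # Each row is a tuple with one entry per text column; when
    # `mark_fields` is True, prefix field i with `xxfld i`.
    if mark_fields:
        return [' '.join(f'{FLD} {i} {t}' for i, t in enumerate(row, 1)) for row in rows]
    return [' '.join(str(t) for t in row) for row in rows]

print(join_texts([('Hello', 'World')], mark_fields=True))
# ['xxfld 1 Hello xxfld 2 World']
```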
```
#export
def tokenize_csv(fname, text_cols, outname=None, n_workers=4, rules=None, mark_fields=None,
tok=None, header='infer', chunksize=50000):
"Tokenize texts in the `text_cols` of the csv `fname` in parallel using `n_workers`"
df = pd.read_csv(fname, header=header, chunksize=chunksize)
outname = Path(ifnone(outname, fname.parent/f'{fname.stem}_tok.csv'))
cnt = Counter()
for i,dfp in enumerate(df):
out,c = tokenize_df(dfp, text_cols, n_workers=n_workers, rules=rules,
mark_fields=mark_fields, tok=tok)
out.text = out.text.str.join(' ')
out.to_csv(outname, header=(None,header)[i==0], index=False, mode=('a','w')[i==0])
cnt.update(c)
save_pickle(outname.with_suffix('.pkl'), cnt)
#export
def load_tokenized_csv(fname):
    "Utility function to quickly load a tokenized csv and the corresponding counter"
fname = Path(fname)
out = pd.read_csv(fname)
for txt_col in out.columns[1:-1]:
out[txt_col] = out[txt_col].str.split(' ')
return out,load_pickle(fname.with_suffix('.pkl'))
```
The result will be written in a new csv file in `outname` (defaults to the same as `fname` with the suffix `_tok.csv`) and will have the same header as the original file, the same non-text columns, and a `text` and a `text_length` column, as described in `tokenize_df`.
`rules` (that defaults to `defaults.text_proc_rules`) are applied to each text before going in the tokenizer. If `mark_fields` isn't specified, it defaults to `False` when there is a single text column, `True` when there are several. In that case, the texts in each of those columns are joined with `FLD` markers followed by the number of the field.
The csv file is read with `header`, in blocks of `chunksize` rows at a time. Each block is processed independently and appended to the output file, which keeps memory usage low.
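The chunked write pattern itself (header on the first block, append for the rest) can be sketched with the standard library. `process_csv_in_chunks` and `transform` are illustrative names of our own, not fastai API:

```python
import csv
import io

def process_csv_in_chunks(src, dst, transform, chunksize=2):
    # Stream rows from `src` in blocks of `chunksize`, transform each block
    # independently, and append it to `dst`, emitting the header only once.
    reader, writer = csv.reader(src), csv.writer(dst)
    writer.writerow(next(reader))  # header, written exactly once
    chunk = []
    for row in reader:
        chunk.append(row)
        if len(chunk) == chunksize:
            writer.writerows(transform(chunk))
            chunk = []
    if chunk:  # flush the final, possibly short, block
        writer.writerows(transform(chunk))

src = io.StringIO('text\na b\nc d\ne f\n')
dst = io.StringIO()
process_csv_in_chunks(src, dst, lambda rows: [[r[0].upper()] for r in rows])
print(dst.getvalue().splitlines())  # ['text', 'A B', 'C D', 'E F']
```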
```
def _prepare_texts(tmp_d):
    "Prepare texts in a folder structure and a csv file inside tmp_d; return the path, the dataframe, and the csv filename"
path = Path(tmp_d)/'tmp'
path.mkdir()
for d in ['a', 'b', 'c']:
(path/d).mkdir()
for i in range(5):
with open(path/d/f'text{i}.txt', 'w') as f: f.write(f"This is an example of text {d} {i}")
texts = [f"This is an example of text {d} {i}" for i in range(5) for d in ['a', 'b', 'c']]
df = pd.DataFrame({'text': texts, 'label': list(range(15))}, columns=['text', 'label'])
csv_fname = tmp_d/'input.csv'
df.to_csv(csv_fname, index=False)
return path,df,csv_fname
#hide
# integration test
with tempfile.TemporaryDirectory() as tmp_d:
path,df,csv_fname = _prepare_texts(Path(tmp_d))
#Tokenize as folders
tokenize_folder(path)
outp = Path(tmp_d)/'tmp_tok'
for d in ['a', 'b', 'c']:
p = outp/d
for i in range(5):
test_eq((p/f'text{i}.txt').read_text(), ' '.join([
BOS, TK_MAJ, 'this', 'is', 'an', 'example', 'of', 'text', d, str(i) ]))
cnt_a = load_pickle(outp/fn_counter_pkl)
test_eq(cnt_a['this'], 15)
test_eq(cnt_a['a'], 5)
test_eq(cnt_a['0'], 3)
#Tokenize as files
files = get_text_files(path)
tokenize_files(files, path, output_dir=path/'d')
for f in files:
test_eq((path/'d'/f.relative_to(path)).read_text(), ' '.join([
BOS, TK_MAJ, 'this', 'is', 'an', 'example', 'of', 'text', f.parent.name, f.name[4]]))
#Tokenize as individual texts
out = tokenize_texts(df['text'].values)
test_eq(out, [(outp/d/f'text{i}.txt').read_text().split(' ') for i in range(5) for d in ['a', 'b', 'c']])
#Tokenize as a dataframe
out,cnt_b = tokenize_df(df, text_cols='text')
test_eq(list(out.columns), ['label', 'text', 'text_length'])
test_eq(out['label'].values, df['label'].values)
test_eq(out['text'], [(outp/d/f'text{i}.txt').read_text().split(' ') for i in range(5) for d in ['a', 'b', 'c']])
test_eq(cnt_a, cnt_b)
#Tokenize as a csv
out_fname = Path(tmp_d)/'output.csv'
tokenize_csv(csv_fname, text_cols='text', outname=out_fname)
test_eq((out,cnt_b), load_tokenized_csv(out_fname))
```
## `Tokenizer` -
```
#export
class Tokenizer(Transform):
"Provides a consistent `Transform` interface to tokenizers operating on `DataFrame`s and folders"
input_types = (str, list, L, tuple, Path)
def __init__(self, tok, rules=None, counter=None, lengths=None, mode=None, sep=' '):
if isinstance(tok,type): tok=tok()
store_attr('tok,counter,lengths,mode,sep')
self.rules = defaults.text_proc_rules if rules is None else rules
@classmethod
@delegates(tokenize_df, keep=True)
def from_df(cls, text_cols, tok=None, rules=None, sep=' ', **kwargs):
if tok is None: tok = WordTokenizer()
res = cls(tok, rules=rules, mode='df')
res.kwargs,res.train_setup = merge({'tok': tok}, kwargs),False
res.text_cols,res.sep = text_cols,sep
return res
@classmethod
@delegates(tokenize_folder, keep=True)
def from_folder(cls, path, tok=None, rules=None, **kwargs):
path = Path(path)
if tok is None: tok = WordTokenizer()
output_dir = tokenize_folder(path, tok=tok, rules=rules, **kwargs)
res = cls(tok, counter=load_pickle(output_dir/fn_counter_pkl),
lengths=load_pickle(output_dir/fn_lengths_pkl), rules=rules, mode='folder')
res.path,res.output_dir = path,output_dir
return res
def setups(self, dsets):
if not self.mode == 'df' or not isinstance(dsets.items, pd.DataFrame): return
dsets.items,count = tokenize_df(dsets.items, self.text_cols, rules=self.rules, **self.kwargs)
if self.counter is None: self.counter = count
return dsets
def encodes(self, o:Path):
if self.mode=='folder' and str(o).startswith(str(self.path)):
tok = self.output_dir/o.relative_to(self.path)
return L(tok.read_text().split(' '))
else: return self._tokenize1(o.read_text())
def encodes(self, o:str): return self._tokenize1(o)
def _tokenize1(self, o): return first(self.tok([compose(*self.rules)(o)]))
def get_lengths(self, items):
if self.lengths is None: return None
if self.mode == 'df':
            if isinstance(items, pd.DataFrame) and 'text_length' in items.columns: return items['text_length'].values
if self.mode == 'folder':
try:
res = [self.lengths[str(Path(i).relative_to(self.path))] for i in items]
if len(res) == len(items): return res
except: return None
def decodes(self, o): return TitledStr(self.sep.join(o))
with tempfile.TemporaryDirectory() as tmp_d:
path,df,csv_fname = _prepare_texts(Path(tmp_d))
items = get_text_files(path)
splits = RandomSplitter()(items)
dsets = Datasets(items, [Tokenizer.from_folder(path)], splits=splits)
print(dsets.train[0])
dsets = Datasets(df, [Tokenizer.from_df('text')], splits=splits)
print(dsets.train[0][0].text)
tst = test_set(dsets, ['This is a test', 'this is another test'])
test_eq(tst, [(['xxbos', 'xxmaj', 'this','is','a','test'],),
(['xxbos','this','is','another','test'],)])
```
## Sentencepiece
```
#export
eu_langs = ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr", "hu",
"it","lt","lv","mt","nl","pl","pt","ro","sk","sl","sv"] # all European langs
#export
class SentencePieceTokenizer():#TODO: pass the special tokens symbol to sp
"SentencePiece tokenizer for `lang`"
def __init__(self, lang='en', special_toks=None, sp_model=None, vocab_sz=None, max_vocab_sz=30000,
model_type='unigram', char_coverage=None, cache_dir='tmp'):
try: from sentencepiece import SentencePieceTrainer,SentencePieceProcessor
except ImportError:
raise Exception('sentencepiece module is missing: run `pip install sentencepiece!=0.1.90,!=0.1.91`')
self.sp_model,self.cache_dir = sp_model,Path(cache_dir)
self.vocab_sz,self.max_vocab_sz,self.model_type = vocab_sz,max_vocab_sz,model_type
self.char_coverage = ifnone(char_coverage, 0.99999 if lang in eu_langs else 0.9998)
self.special_toks = ifnone(special_toks, defaults.text_spec_tok)
if sp_model is None: self.tok = None
else:
self.tok = SentencePieceProcessor()
self.tok.Load(str(sp_model))
os.makedirs(self.cache_dir, exist_ok=True)
def _get_vocab_sz(self, raw_text_path):
cnt = Counter()
with open(raw_text_path, 'r') as f:
for line in f.readlines():
cnt.update(line.split())
if len(cnt)//4 > self.max_vocab_sz: return self.max_vocab_sz
res = len(cnt)//4
while res%8 != 0: res+=1
return max(res,29)
def train(self, raw_text_path):
"Train a sentencepiece tokenizer on `texts` and save it in `path/tmp_dir`"
from sentencepiece import SentencePieceTrainer
vocab_sz = self._get_vocab_sz(raw_text_path) if self.vocab_sz is None else self.vocab_sz
spec_tokens = ['\u2581'+s for s in self.special_toks]
SentencePieceTrainer.Train(" ".join([
f"--input={raw_text_path} --vocab_size={vocab_sz} --model_prefix={self.cache_dir/'spm'}",
f"--character_coverage={self.char_coverage} --model_type={self.model_type}",
f"--unk_id={len(spec_tokens)} --pad_id=-1 --bos_id=-1 --eos_id=-1 --minloglevel=2",
f"--user_defined_symbols={','.join(spec_tokens)} --hard_vocab_limit=false"]))
raw_text_path.unlink()
return self.cache_dir/'spm.model'
def setup(self, items, rules=None):
from sentencepiece import SentencePieceProcessor
if rules is None: rules = []
if self.tok is not None: return {'sp_model': self.sp_model}
raw_text_path = self.cache_dir/'texts.out'
with open(raw_text_path, 'w') as f:
for t in progress_bar(maps(*rules, items), total=len(items), leave=False):
f.write(f'{t}\n')
sp_model = self.train(raw_text_path)
self.tok = SentencePieceProcessor()
self.tok.Load(str(sp_model))
return {'sp_model': sp_model}
def __call__(self, items):
if self.tok is None: self.setup(items)
for t in items: yield self.tok.EncodeAsPieces(t)
#export
SubwordTokenizer = SentencePieceTokenizer
texts = [f"This is an example of text {i}" for i in range(10)]
df = pd.DataFrame({'text': texts, 'label': list(range(10))}, columns=['text', 'label'])
out,cnt = tokenize_df(df, text_cols='text', tok=SentencePieceTokenizer(vocab_sz=34), n_workers=1)
with tempfile.TemporaryDirectory() as tmp_d:
path,df,csv_fname = _prepare_texts(Path(tmp_d))
items = get_text_files(path)
splits = RandomSplitter()(items)
tok = SentencePieceTokenizer(special_toks=[])
dsets = Datasets(items, [Tokenizer.from_folder(path, tok=tok)], splits=splits)
print(dsets.train[0][0])
with warnings.catch_warnings():
dsets = Datasets(df, [Tokenizer.from_df('text', tok=tok)], splits=splits)
print(dsets.train[0][0].text)
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
_Lambda School Data Science_
This sprint, your project is about water pumps in Tanzania. Can you predict which water pumps are faulty?
# Decision Trees
#### Objectives
- clean data with outliers
- impute missing values
- use scikit-learn for decision trees
- understand why decision trees are useful to model non-linear, non-monotonic relationships and feature interactions
- get and interpret feature importances of a tree-based model
#### Links
- A Visual Introduction to Machine Learning
- [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/)
- [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)
- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.html#advantages-2)
- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)
- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)
- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._
- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/)
### Libraries
#### category_encoders
You aren't required to use [category_encoders](https://github.com/scikit-learn-contrib/categorical-encoding), but it's recommended.
If you're working locally, you already installed it, probably with this shell command: `conda install -c conda-forge category_encoders`
If you're using Google Colab, you need to reinstall it every time you restart all runtimes: `pip install category_encoders`
#### scikit-learn version 0.21.2
Until recently, scikit-learn required graphviz to visualize decision trees, and it could be a pain to install. But sklearn's newest versions have a [plot_tree](https://scikit-learn.org/stable/modules/generated/sklearn.tree.plot_tree.html) function that uses matplotlib!
Google Colab already has version 0.21.2. But if you're running Anaconda locally, you may need to upgrade.
You can check your version with this Python code: `import sklearn; print(sklearn.__version__)`
If necessary, you can update your version with this shell command: `conda update scikit-learn`
This isn't required to do your assignment, but it's required to run this lecture notebook.
#### pdpbox
[PDPbox](https://github.com/SauceCat/PDPbox) stands for "Partial Dependence Plot toolbox." It's a tool for model interpretation & visualization.
You can install it on Colab or locally with this shell command: `pip install pdpbox`
This also isn't required to do your assignment, but it's used in the lecture notebook.
```
# !pip install pdpbox category_encoders
```
## Clean data with outliers, impute missing values (example solutions)
```
# !pip install category_encoders
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import RobustScaler
LOCAL = '../data/tanzania/'
WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/tanzania/'
train = pd.merge(pd.read_csv(WEB + 'train_features.csv'),
pd.read_csv(WEB + 'train_labels.csv'))
test = pd.read_csv(WEB + 'test_features.csv')
sample_submission = pd.read_csv(WEB + 'sample_submission.csv')
# Split train into train & val
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['status_group'], random_state=42)
train.shape, val.shape, test.shape
```
Some of the locations are at ["Null Island"](https://en.wikipedia.org/wiki/Null_Island) instead of Tanzania.
```
sns.jointplot(x='longitude', y='latitude', data=train);
```
#### Define a function to wrangle train, validate, and test sets in the same way.
Fix the location, and do more data cleaning and feature engineering.
```
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these values like zero.
X['latitude'] = X['latitude'].replace(-2e-08, 0)
# When columns have zeros and shouldn't, they are like null values.
# So we will replace them with the column mean.
cols_with_zeros = ['construction_year', 'longitude', 'latitude']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
X[col] = X[col].fillna(X[col].mean())
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract year from date_recorded
X['year_recorded'] = X['date_recorded'].dt.year
# quantity & quantity_group are duplicates, so drop one
X = X.drop(columns='quantity_group')
# for categoricals with missing values, fill with the category 'MISSING'
categoricals = X.select_dtypes(exclude='number').columns
for col in categoricals:
X[col] = X[col].fillna('MISSING')
return X
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
```
Now the locations look better.
```
sns.relplot(x='longitude', y='latitude', hue='status_group',
data=train, alpha=0.1);
```
#### Select features
```
# The status_group column is the target
target = 'status_group'
# Get a dataframe with all train columns except the target & id
train_features = train.drop(columns=[target, 'id'])
# Get a list of the numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
# Get a series with the cardinality of the nonnumeric features
cardinality = train_features.select_dtypes(exclude='number').nunique()
# Get a list of all categorical features with cardinality <= 50
categorical_features = cardinality[cardinality <= 50].index.tolist()
# Combine the lists
features = numeric_features + categorical_features
```
#### Encode categoricals, scale features, fit and score Logistic Regression model, make predictions
```
# Arrange data into X features matrix and y target vector
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
# Encoder: fit_transform on train, transform on val & test
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
X_test_encoded = encoder.transform(X_test)
# Scaler: fit_transform on train, transform on val & test
scaler = RobustScaler()
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_val_scaled = scaler.transform(X_val_encoded)
X_test_scaled = scaler.transform(X_test_encoded)
# Model: Fit on train, score on val, predict on test
model = LogisticRegression(solver='lbfgs', multi_class='auto', n_jobs=-1)
model.fit(X_train_scaled, y_train)
print('Validation Accuracy', model.score(X_val_scaled, y_val))
y_pred = model.predict(X_test_scaled)
# Write submission csv file
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('submission-02.csv', index=False)
```
#### Get and plot coefficients
```
coefficients = pd.Series(model.coef_[0], X_train_encoded.columns)
plt.figure(figsize=(10,30))
coefficients.sort_values().plot.barh(color='grey');
```
## Use scikit-learn for decision trees
### Compare a Logistic Regression with 2 features, longitude & latitude ...
### ... versus a Decision Tree Classifier with 2 features, longitude & latitude
https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
## Understand why decision trees are useful to model non-linear, non-monotonic relationships and feature interactions
#### What does _(non)monotonic_ mean?!?!
- See Figures 1-3 in Wikipedia's article, [Monotonic function](https://en.wikipedia.org/wiki/Monotonic_function)
- See [World Population Growth, 1700-2010](https://ourworldindata.org/world-population-growth-past-future). World Population is non-linear and monotonic. Annual growth rate is non-linear and non-monotonic.
- See [Accidents per Mile Driven, by Driver Age](http://howwedrive.com/2009/02/20/whats-the-real-risk-of-older-drivers/). This is non-linear and non-monotonic.
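A quick way to internalize the definition: a sequence is monotonic if it never changes direction. A tiny helper (our own, just for illustration) makes the two shapes above concrete:

```python
def is_monotonic(xs):
    # Monotonic = entirely non-decreasing or entirely non-increasing.
    pairs = list(zip(xs, xs[1:]))
    return all(a <= b for a, b in pairs) or all(a >= b for a, b in pairs)

print(is_monotonic([1, 2, 2, 7]))   # True: world-population-style growth
print(is_monotonic([9, 4, 3, 6]))   # False: accidents-by-driver-age shape
```

Linear models can only fit monotonic relationships; decision trees have no such restriction.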
#### What does _feature interactions_ mean?!?!
- See the explanation in [_Interpretable Machine Learning_, Chapter 5.4.1, Feature Interaction](https://christophm.github.io/interpretable-ml-book/interaction.html#feature-interaction).
- See the exploration in this notebook, under the heading ***Interlude #2: Simple housing***
### Visualize decision tree
https://scikit-learn.org/stable/modules/generated/sklearn.tree.plot_tree.html
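A minimal, self-contained sketch of `plot_tree` on toy data (in the notebook you would pass the fitted competition model and its real feature names):

```python
import matplotlib
matplotlib.use('Agg')  # headless-safe; drop this line inside a notebook
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier, plot_tree

X = [[0, 0], [1, 0], [0, 1], [1, 1]]
y = [0, 0, 1, 1]
clf = DecisionTreeClassifier(max_depth=2).fit(X, y)

plt.figure(figsize=(8, 4))
plot_tree(clf, feature_names=['longitude', 'latitude'], filled=True, rounded=True)
plt.show()
```

Unlike the graphviz route used later in this notebook, `plot_tree` needs no extra system dependency.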
### Make 3 heatmaps, with longitude & latitude
- Actual % of functional waterpumps
- Decision Tree predicted probability of functional waterpumps
- Logistic Regression predicted probability of functional waterpumps
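One way the first heatmap (actual share functional) could be built, sketched on synthetic coordinates (the column names and ranges here are made up for illustration): bin the two coordinates with `pd.cut`, average the 0/1 target per cell with `pivot_table`, and draw the grid.

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # headless-safe; drop this line inside a notebook
import matplotlib.pyplot as plt

rng = np.random.RandomState(0)
df = pd.DataFrame({'longitude': rng.uniform(29, 41, 2000),
                   'latitude': rng.uniform(-12, -1, 2000),
                   'functional': rng.randint(0, 2, 2000)})

df['lon_bin'] = pd.cut(df['longitude'], bins=20)
df['lat_bin'] = pd.cut(df['latitude'], bins=20)
grid = df.pivot_table(index='lat_bin', columns='lon_bin',
                      values='functional', aggfunc='mean', observed=False)

plt.imshow(grid, origin='lower', cmap='viridis')
plt.colorbar(label='share functional')
plt.show()
```

For the other two heatmaps, replace the `functional` column with the model's predicted probabilities before pivoting.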
### Interlude #1: predicting golf putts
(1 feature, non-linear, regression)
https://statmodeling.stat.columbia.edu/2008/12/04/the_golf_puttin/
```
columns = ['distance', 'tries', 'successes']
data = [[2, 1443, 1346],
[3, 694, 577],
[4, 455, 337],
[5, 353, 208],
[6, 272, 149],
[7, 256, 136],
[8, 240, 111],
[9, 217, 69],
[10, 200, 67],
[11, 237, 75],
[12, 202, 52],
[13, 192, 46],
[14, 174, 54],
[15, 167, 28],
[16, 201, 27],
[17, 195, 31],
[18, 191, 33],
[19, 147, 20],
[20, 152, 24]]
putts = pd.DataFrame(columns=columns, data=data)
putts['rate of success'] = putts['successes'] / putts['tries']
putts.plot('distance', 'rate of success', kind='scatter', title='Golf Putts');
```
#### Compare Linear Regression ...
```
from sklearn.linear_model import LinearRegression
putts_X = putts[['distance']]
putts_y = putts['rate of success']
lr = LinearRegression()
lr.fit(putts_X, putts_y)
print('R^2 Score', lr.score(putts_X, putts_y))
ax = putts.plot('distance', 'rate of success', kind='scatter', title='Golf Putts')
ax.plot(putts_X, lr.predict(putts_X));
```
#### ... versus a Decision Tree Regressor
https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html
```
import graphviz
from ipywidgets import interact
from sklearn.tree import DecisionTreeRegressor, export_graphviz
def viztree(decision_tree, feature_names):
dot_data = export_graphviz(decision_tree, out_file=None, feature_names=feature_names,
filled=True, rounded=True)
return graphviz.Source(dot_data)
def putts_tree(max_depth=1):
tree = DecisionTreeRegressor(max_depth=max_depth)
tree.fit(putts_X, putts_y)
print('R^2 Score', tree.score(putts_X, putts_y))
ax = putts.plot('distance', 'rate of success', kind='scatter', title='Golf Putts')
ax.step(putts_X, tree.predict(putts_X), where='mid')
plt.show()
display(viztree(tree, feature_names=['distance']))
interact(putts_tree, max_depth=(1,6,1));
```
### Interlude #2: Simple housing
(2 features, regression)
https://christophm.github.io/interpretable-ml-book/interaction.html#feature-interaction
```
columns = ['Price', 'Good Location', 'Big Size']
data = [[300000, 1, 1],
[200000, 1, 0],
[250000, 0, 1],
[150000, 0, 0]]
house = pd.DataFrame(columns=columns, data=data)
house
```
#### Compare Linear Regression ...
```
house_X = house.drop(columns='Price')
house_y = house['Price']
lr = LinearRegression()
lr.fit(house_X, house_y)
print('R^2', lr.score(house_X, house_y))
print('Intercept \t', lr.intercept_)
coefficients = pd.Series(lr.coef_, house_X.columns)
print(coefficients.to_string())
```
#### ... versus a Decision Tree Regressor
```
tree = DecisionTreeRegressor()
tree.fit(house_X, house_y)
print('R^2', tree.score(house_X, house_y))
viztree(tree, feature_names=house_X.columns)
```
### Simple housing, with a twist: _Feature Interaction_
```
house.loc[0, 'Price'] = 400000
house_X = house.drop(columns='Price')
house_y = house['Price']
house
```
#### Compare Linear Regression ...
```
lr = LinearRegression()
lr.fit(house_X, house_y)
print('R^2', lr.score(house_X, house_y))
print('Intercept \t', lr.intercept_)
coefficients = pd.Series(lr.coef_, house_X.columns)
print(coefficients.to_string())
```
#### ... versus a Decision Tree Regressor
```
tree = DecisionTreeRegressor()
tree.fit(house_X, house_y)
print('R^2', tree.score(house_X, house_y))
viztree(tree, feature_names=house_X.columns)
```
## Get and interpret feature importances of a tree-based model
# Assignment
- Start a clean notebook, or continue with yesterday's assignment notebook.
- Continue to participate in our Kaggle competition with the Tanzania Waterpumps data.
- Do more exploratory data analysis, data cleaning, feature engineering, and feature selection.
- Try a Decision Tree Classifier.
- Submit new predictions.
- Commit your notebook to your fork of the GitHub repo.
## Stretch Goals
- Create visualizations and share on Slack.
- Read more about decision trees and tree ensembles. You can start with the links at the top of this notebook.
- Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html):
> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:
> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.
> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.
> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors.
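A minimal sketch of the Pipeline idea on toy numbers (in the competition notebook, the category encoder would simply be an additional first step before the scaler):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import LogisticRegression

X = [[0.0, 1390.0], [25.0, 686.0], [50.0, 263.0], [250.0, 0.0]]  # toy numeric features
y = [1, 1, 0, 0]

pipe = Pipeline([('scale', RobustScaler()),
                 ('model', LogisticRegression())])
pipe.fit(X, y)           # one call fits the scaler, then the model
print(pipe.score(X, y))  # score/predict apply both steps in order
```

This removes the repetitive `fit_transform`-on-train / `transform`-on-test bookkeeping from the code near the top of this notebook, and is what makes cross-validation leak-free.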
```
from openeye import oechem, oedepict
import oenotebook as oenb
import pandas as pd
def depict_smiles(smiles):
mol = oechem.OEMol()
oechem.OESmilesToMol(mol,smiles)
return oenb.draw_mol(mol)
smiles = "c1ccccc1"  # example input (benzene)
depict_smiles(smiles)
```
## SM11
The initial molecule is the same structure as its tautomer: SM11_micro018 and SM11_micro020.
SM11 resonance structures:
('SM11_micro018', 'SM11_micro020')
```
mol_ID = "SM11"
path_to_input_microstates = "corrections_for_v1_3_1/"
input_file_name = path_to_input_microstates + mol_ID +"_correction.csv"
df_microstates = pd.read_csv(input_file_name)
smiles1 = df_microstates[df_microstates["microstate ID"] == "SM11_micro018"]["canonical isomeric SMILES"].values[0]
print(smiles1)
smiles2 = df_microstates[df_microstates["microstate ID"] == "SM11_micro020"]["canonical isomeric SMILES"].values[0]
print(smiles2)
depict_smiles(smiles1)
depict_smiles(smiles2)
```
These are the same structure, so I will deprecate SM11_micro018.
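Since the SMILES in these files are canonical, exact string duplicates in the `canonical isomeric SMILES` column flag identical structures automatically; resonance pairs that canonicalize to *different* SMILES still need the visual check used here. A sketch on toy rows shaped like the correction CSVs (the IDs and SMILES below are made up):

```python
import pandas as pd

toy = pd.DataFrame({
    "microstate ID": ["A_micro001", "A_micro002", "A_micro003"],
    "canonical isomeric SMILES": ["CCO", "CCO", "CC=O"],  # made-up SMILES
})
# keep="first" marks every later repeat of an earlier SMILES
dupes = toy[toy.duplicated(subset="canonical isomeric SMILES", keep="first")]
print(dupes["microstate ID"].tolist())  # candidates to deprecate
```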
## SM18
('SM18_micro008', 'SM18_micro023')
('SM18_micro008', 'SM18_micro024')
('SM18_micro008', 'SM18_micro036')
('SM18_micro023', 'SM18_micro024')
('SM18_micro023', 'SM18_micro036')
('SM18_micro024', 'SM18_micro036')
('SM18_micro002', 'SM18_micro018')
('SM18_micro002', 'SM18_micro022')
('SM18_micro018', 'SM18_micro022')
('SM18_micro004', 'SM18_micro006')
('SM18_micro004', 'SM18_micro014')
('SM18_micro006', 'SM18_micro014')
```
mol_ID = "SM18"
path_to_input_microstates = "corrections_for_v1_3_1/"
input_file_name = path_to_input_microstates + mol_ID +"_correction.csv"
df_microstates = pd.read_csv(input_file_name)
smiles008 = df_microstates[df_microstates["microstate ID"] == "SM18_micro008"]["canonical isomeric SMILES"].values[0]
smiles023 = df_microstates[df_microstates["microstate ID"] == "SM18_micro023"]["canonical isomeric SMILES"].values[0]
smiles024 = df_microstates[df_microstates["microstate ID"] == "SM18_micro024"]["canonical isomeric SMILES"].values[0]
smiles036 = df_microstates[df_microstates["microstate ID"] == "SM18_micro036"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles008)
depict_smiles(smiles023)
depict_smiles(smiles024)
depict_smiles(smiles036)
```
SM18_micro008, SM18_micro023, SM18_micro024, and SM18_micro036 are resonance structures. SM18_micro023, SM18_micro024, and SM18_micro036 will be deprecated.
```
smiles002 = df_microstates[df_microstates["microstate ID"] == "SM18_micro002"]["canonical isomeric SMILES"].values[0]
smiles018 = df_microstates[df_microstates["microstate ID"] == "SM18_micro018"]["canonical isomeric SMILES"].values[0]
smiles022 = df_microstates[df_microstates["microstate ID"] == "SM18_micro022"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles002)
depict_smiles(smiles018)
depict_smiles(smiles022)
```
SM18_micro002, SM18_micro018, and SM18_micro022 are resonance structures. SM18_micro018 and SM18_micro022 will be deprecated.
```
smiles004 = df_microstates[df_microstates["microstate ID"] == "SM18_micro004"]["canonical isomeric SMILES"].values[0]
smiles006 = df_microstates[df_microstates["microstate ID"] == "SM18_micro006"]["canonical isomeric SMILES"].values[0]
smiles014 = df_microstates[df_microstates["microstate ID"] == "SM18_micro014"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles004)
depict_smiles(smiles006)
depict_smiles(smiles014)
```
SM18_micro004, SM18_micro006, and SM18_micro014 are resonance structures. SM18_micro006 and SM18_micro014 will be deprecated.
## SM23
SM23 resonance structures:
('SM23_micro001', 'SM23_micro003')
('SM23_micro001', 'SM23_micro009')
('SM23_micro001', 'SM23_micro023')
('SM23_micro001', 'SM23_micro031')
('SM23_micro001', 'SM23_micro032')
('SM23_micro001', 'SM23_micro037')
('SM23_micro003', 'SM23_micro009')
('SM23_micro003', 'SM23_micro023')
('SM23_micro003', 'SM23_micro031')
('SM23_micro003', 'SM23_micro032')
('SM23_micro003', 'SM23_micro037')
('SM23_micro009', 'SM23_micro023')
('SM23_micro009', 'SM23_micro031')
('SM23_micro009', 'SM23_micro032')
('SM23_micro009', 'SM23_micro037')
('SM23_micro023', 'SM23_micro031')
('SM23_micro023', 'SM23_micro032')
('SM23_micro023', 'SM23_micro037')
('SM23_micro031', 'SM23_micro032')
('SM23_micro031', 'SM23_micro037')
('SM23_micro032', 'SM23_micro037')
```
mol_ID = "SM23"
path_to_input_microstates = "corrections_for_v1_3_1/"
input_file_name = path_to_input_microstates + mol_ID +"_correction.csv"
df_microstates = pd.read_csv(input_file_name)
smiles001 = df_microstates[df_microstates["microstate ID"] == "SM23_micro001"]["canonical isomeric SMILES"].values[0]
smiles003 = df_microstates[df_microstates["microstate ID"] == "SM23_micro003"]["canonical isomeric SMILES"].values[0]
smiles009 = df_microstates[df_microstates["microstate ID"] == "SM23_micro009"]["canonical isomeric SMILES"].values[0]
smiles023 = df_microstates[df_microstates["microstate ID"] == "SM23_micro023"]["canonical isomeric SMILES"].values[0]
smiles031 = df_microstates[df_microstates["microstate ID"] == "SM23_micro031"]["canonical isomeric SMILES"].values[0]
smiles032 = df_microstates[df_microstates["microstate ID"] == "SM23_micro032"]["canonical isomeric SMILES"].values[0]
smiles037 = df_microstates[df_microstates["microstate ID"] == "SM23_micro037"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles001)
depict_smiles(smiles003)
depict_smiles(smiles009)
depict_smiles(smiles023)
depict_smiles(smiles031)
depict_smiles(smiles032)
depict_smiles(smiles037)
```
These are all resonance structures of the same microstate:
"SM23_micro001", "SM23_micro003", "SM23_micro009", "SM23_micro023", "SM23_micro031", "SM23_micro032", "SM23_micro037"
The following will be deprecated:
"SM23_micro003", "SM23_micro009", "SM23_micro023", "SM23_micro031", "SM23_micro032", "SM23_micro037"
## SM24
SM24 resonance structures:
('SM24_micro001', 'SM24_micro012')
('SM24_micro001', 'SM24_micro018')
('SM24_micro007', 'SM24_micro019')
('SM24_micro007', 'SM24_micro021')
('SM24_micro011', 'SM24_micro015')
('SM24_micro012', 'SM24_micro018')
('SM24_micro019', 'SM24_micro021')
```
mol_ID = "SM24"
path_to_input_microstates = "corrections_for_v1_3_1/"
input_file_name = path_to_input_microstates + mol_ID +"_correction.csv"
df_microstates = pd.read_csv(input_file_name)
smiles001 = df_microstates[df_microstates["microstate ID"] == "SM24_micro001"]["canonical isomeric SMILES"].values[0]
smiles012 = df_microstates[df_microstates["microstate ID"] == "SM24_micro012"]["canonical isomeric SMILES"].values[0]
smiles018 = df_microstates[df_microstates["microstate ID"] == "SM24_micro018"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles001)
depict_smiles(smiles012)
depict_smiles(smiles018)
```
SM24_micro001, SM24_micro012 and SM24_micro018 are resonance structures. SM24_micro012 and SM24_micro018 will be deprecated.
```
smiles007 = df_microstates[df_microstates["microstate ID"] == "SM24_micro007"]["canonical isomeric SMILES"].values[0]
smiles019 = df_microstates[df_microstates["microstate ID"] == "SM24_micro019"]["canonical isomeric SMILES"].values[0]
smiles021 = df_microstates[df_microstates["microstate ID"] == "SM24_micro021"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles007)
depict_smiles(smiles019)
depict_smiles(smiles021)
```
SM24_micro007, SM24_micro019 and SM24_micro021 are resonance structures. SM24_micro019 and SM24_micro021 will be deprecated.
```
smiles011 = df_microstates[df_microstates["microstate ID"] == "SM24_micro011"]["canonical isomeric SMILES"].values[0]
smiles015 = df_microstates[df_microstates["microstate ID"] == "SM24_micro015"]["canonical isomeric SMILES"].values[0]
depict_smiles(smiles011)
depict_smiles(smiles015)
```
SM24_micro011 and SM24_micro015 are resonance structures. SM24_micro015 will be deprecated.
```
import numpy as np
import pandas as pd
pd.core.common.is_list_like = pd.api.types.is_list_like
import pandas_datareader.data as web
import datetime
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from matplotlib.backends.backend_pdf import PdfPages
import random
import math
import pingouin
def gen_pcorr(df, method = "pearson", sig = 0.01):
# Correlation type:
# 'pearson': Pearson r product-moment correlation
# 'spearman': Spearman ρ rank-order correlation
# 'kendall': Kendall’s τB correlation (for ordinal data)
# 'bicor': Biweight midcorrelation (robust)
# 'percbend': Percentage bend correlation (robust)
# 'shepherd': Shepherd’s pi correlation (robust)
# 'skipped': Skipped correlation (robust)
pcs_dct = {}
sig_corr_dct = {}
for x in df.keys():
sig_corr_dct[x] = []
pcs_dct[x]={}
for y in df.keys():
# control variables
# select variables that are not x or y
other_vars = [z for z in df.keys() if z != y and z != x ]
if x == y:
# No need to calculate if the variable is itself
pcs_dct[x][y] = 1
else:
pcs_dct[x][y] = df.partial_corr(x=x,y=y, covar=other_vars,
method=method).round(3)
if pcs_dct[x][y]["p-val"].values[0] < sig:
sig_corr_dct[x].append((y, pcs_dct[x][y]["r"].values[0]))
return pcs_dct, sig_corr_dct
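# Aside (added self-check, not part of the original analysis): for Pearson
# correlations, pingouin's partial_corr should match correlating the OLS
# residuals of x and y after regressing each on the control variables.
def partial_corr_via_residuals(data, x, y, controls):
    # Regress x and y on the controls (plus an intercept), then correlate residuals
    X = np.column_stack([data[c] for c in controls] + [np.ones(len(data))])
    rx = data[x] - X @ np.linalg.lstsq(X, data[x], rcond=None)[0]
    ry = data[y] - X @ np.linalg.lstsq(X, data[y], rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]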
mpl_colors = ["C" + str(i) for i in range(10)]
mpl_colors = mpl_colors + ["b", "m", "c", "y"]
#list(mpl.colors.cnames.values())
#random.shuffle(mpl_colors)
def gather_data(data_codes, start, end = datetime.datetime.today(), freq = "A"):
i = 0
# dct.items() calls key and value that key points to
for key, val in data_codes.items():
if i == 0:
# Create dataframe for first variable, then rename column
df = web.DataReader(val, "fred", start, end).resample(freq).mean()
df.rename(columns = {val:key}, inplace = True)
i = None
else:
# If dataframe already exists, add new column
df[key] = web.DataReader(val, "fred", start, end).resample(freq).mean()
return df
def plot_lines(df, plot_vars, linewidth = 1, logy = False, figsize = (40,20),
secondary_y = None, pp = None):
fig, ax = plt.subplots(figsize = figsize)
legend_scale = 20 / figsize[1]
# If no secondary_y (axis), plot all variables at once
if secondary_y == None:
df[plot_vars].plot.line(linewidth = linewidth, logy = logy, ax = ax, ls = "-")
ax.legend(bbox_to_anchor=(0, 1.035 + .045 * len(plot_vars) * legend_scale),
loc=2)
# Otherwise, create a new axis and plot each variable individually
else:
ax2 = ax.twinx()
for var in plot_vars:
if var == secondary_y:
df[var].plot.line(linewidth = linewidth, logy = logy, ax = ax2,
c = "C9",
label = var + " (right)")
else:
df[var].plot.line(linewidth = linewidth, logy = logy, ax = ax)
# If there are two axes, then gather lines from each axis
lines = ax.get_lines() + ax2.get_lines()
# then gather the label from each line
labels = [l.get_label() for l in lines]
# and use the lines and labels to create the legend
ax.legend(lines, labels, bbox_to_anchor=(0,
1.04 + .045 * len(plot_vars) * legend_scale), loc=2)
# Turn the text on the x-axis so that it reads vertically
ax.hlines(0, linestyle = "--", xmin = df.index[0], xmax = df.index[-1])
ax.tick_params(axis='x', rotation=90)
# Get rid of tick lines perpendicular to the axis for aesthetic
ax.tick_params('both', length=0, which='both')
plt.savefig(str(plot_vars).replace("[", "").replace("]","").replace(":", "").replace("$","").replace("'","")[:50] + " line.png",
bbox_inches = "tight")
plt.show()
# save image if PdfPages object was passed
if pp != None: pp.savefig(fig, bbox_inches = "tight")
plt.close()
def plot_stacked_lines(df, plot_vars, linewidth = 1, logy = False, figsize = (40,20),
pp = None, sep_var = False):
fig, ax = plt.subplots(figsize = figsize)
legend_scale = 20 / figsize[1]
# cmap = "Greys"
cmap = None
if sep_var == False:
# If no secondary_y (axis), plot all variables at once
df[plot_vars].plot.area(stacked = True, linewidth = linewidth, logy = logy,
cmap=cmap, ax = ax, color = mpl_colors)
ax.legend(bbox_to_anchor=(0, 1.035 + .045 * math.ceil(len(plot_vars) / 2) * legend_scale), loc = 2, ncol = 2)
else:
# If no secondary_y (axis), plot all variables at once
df[plot_vars].plot.area(stacked = True, linewidth = linewidth, logy = logy,
cmap = cmap, ax = ax, legend = False, label = plot_vars, color = mpl_colors)
df[sep_var].plot.line(linewidth = linewidth, logy = logy, ax = ax, c = "k",
label = sep_var, ls = "--")
# If there are two axes, then gather lines from each axis
# lines = ax.get_lines()
# # then gather the label from each line
# labels = [l.get_label() for l in lines]
# and use the lines and labels to create the legend
# ax.legend(lines, labels, bbox_to_anchor=(0,
# 1.04 + .045 * len(plot_vars) * legend_scale), loc=2)
ax.legend(bbox_to_anchor=(0, 1.035 + .045 * math.ceil((len(plot_vars) + 1) / 2) * legend_scale),
loc=2, ncol = 2)
# Turn the text on the x-axis so that it reads vertically
ax.tick_params(axis='x', rotation=90)
# Get rid of tick lines perpendicular to the axis for aesthetic
ax.tick_params('both', length=0, which='both')
# save image if PdfPages object was passed
plt.savefig(str(plot_vars).replace("[", "").replace("]","").replace(":", "").replace("$","").replace("'","")[:50] + " stack.png",
bbox_inches = "tight")
plt.show()
if pp != None: pp.savefig(fig, bbox_inches = "tight")
plt.close()
def plot_scatter(df, plot_vars, s = 75, figsize = (40, 20), pp = None):
# Create plot for every unique pair of variables
for var1 in plot_vars:
for var2 in plot_vars:
if var1 != var2:
fig, ax = plt.subplots(figsize = figsize)
# Create list of years from index
# Year will be represented by color
if "Year" not in df.keys():
df["Year"] = [int(str(ind)[:4]) for ind in df.index]
df.plot.scatter(x = var1, y = var2, s = s, ax = ax, c = "Year",
cmap = "viridis")
# Turn the text on the x-axis so that it reads vertically
ax.tick_params(axis='x', rotation=90)
# Get rid of tick lines perpendicular to the axis for aesthetic
ax.tick_params('both', length=0, which='both')
# save image if PdfPages object was passed
# plt.savefig(str(plot_vars).replace("[", "").replace("]","") + " scatter.png",
# bbox_inches = "tight")
if df[var1].min() < 0:
plt.axvline(0, ls = "--", c = "k")
if df[var2].min() < 0:
plt.axhline(0, ls = "--", c = "k")
plt.show()
if pp != None: pp.savefig(fig, bbox_inches = "tight")
plt.close()
# Create PDF that will hold visualizations
today = datetime.datetime.today()
# set default fontsize for text in plot
plt.rcParams.update({'font.size': 32})
plt.rcParams['axes.ymargin'] = .05
plt.rcParams['axes.xmargin'] = .05
# Choose data from FRED
# Keys will be used to name variable. Key points to FRED code
data_codes = {"Nominal GDP ($ Bil)":"GDP",
"Real GDP ($ Bil)":"GDPC1",
"GDP Deflator":"GDPDEF",
"CPI":"CPIAUCSL",
"Base: Total ($ Mil)": "BOGMBASEW",
"Base: Currency in Circulation ($ Bil)": "CURRCIR",
"1 Month Treasury Rate (%)": "DGS1MO",
"3 Month Treasury Rate (%)": "DGS3MO",
"1 Year Treasury Rate (%)": "DGS1",
"2 Year Treasury Rate (%)": "DGS2",
"10 Year Treasury Rate (%)": "DGS10",
"30 Year Treasury Rate (%)": "DGS30",
"Effective Federal Funds Rate (%)": "DFF",
"Federal Funds Target Rate (Pre-crisis)":"DFEDTAR",
"Federal Funds Upper Target":"DFEDTARU",
"Federal Funds Lower Target":"DFEDTARL",
"Interest on Excess Reserves (%)": "IOER"}
data_dict = {}
freq = "Q"
start = datetime.datetime(1975, 1, 1)
# end = datetime.datetime(1985, 12, 31)
end = today
# Select start and end dates
# end = datetime.datetime.today()
# Check if data has been gathered.
# If data needs to be gathered again, clear variables or restart kernel
if "data_gathered" not in locals():
df = gather_data(data_codes, start,
end = end, freq = freq)
ticker = "^GSPC"
freq = "Q"
df.fillna(0, inplace=True)
df["S&P"]= web.DataReader(ticker, start = start, end = end,
data_source = "yahoo").resample(freq).mean()["Close"]
df["S&P Growth Rate (%)"] = df["S&P"].pct_change(4)
df["S&P Growth Rate Change (%; Year-over-Year)"] = df["S&P Growth Rate (%)"].diff(4)
# Create new variables
df["Base: Currency in Circulation ($ Mil)"] = df["Base: Currency in Circulation ($ Bil)"].mul(1000)
df["Base: Currency not in Circulation ($ Mil)"] = df["Base: Total ($ Mil)"].sub(df["Base: Currency in Circulation ($ Mil)"])
df["Currency in Circulation Growth Rate (%)"] = df["Base: Currency in Circulation ($ Mil)"].pct_change(4) * 100
df["% Currency not in Circulation"] = df["Base: Currency not in Circulation ($ Mil)"].div(df["Base: Total ($ Mil)"]) * 100
df["% Currency in Circulation"] = df["Base: Currency in Circulation ($ Mil)"].div(df["Base: Total ($ Mil)"]) * 100
df["Base: Total Growth Rate (%)"] = df["Base: Total ($ Mil)"]
df["Change % Currency not in Circulation"] = df["% Currency not in Circulation"].diff(4)
df["Currency not in Circulation Growth Rate (%)"] = df["Base: Currency not in Circulation ($ Mil)"].pct_change(4) * 100
df["Inflation (CPI)"] = df["CPI"].pct_change(4) * 100
df["Effective Federal Funds Rate (%; Change Year-over-Year)"] = df["Effective Federal Funds Rate (%)"].diff(4)
df["1 Year Treasury Rate (%; Change Year-over-Year)"] = df["1 Year Treasury Rate (%)"].diff(4)
df["2 Year Treasury Rate (%; Change Year-over-Year)"] = df["2 Year Treasury Rate (%)"].diff(4)
df["10 Year Treasury Rate (%; Change Year-over-Year)"] = df["10 Year Treasury Rate (%)"].diff(4)
df["30 Year Treasury Rate (%; Change Year-over-Year)"] = df["30 Year Treasury Rate (%)"].diff(4)
df["Nominal GDP ($ Mil)"] = df["Nominal GDP ($ Bil)"].mul(1000)
df["Nominal GDP Growth Rate (%)"] = df["Nominal GDP ($ Bil)"].pct_change(4) * 100
df["Real GDP ($ Mil)"] = df["Real GDP ($ Bil)"].mul(1000)
df["Real GDP Growth Rate (%)"] = df["Real GDP ($ Bil)"].pct_change(4) * 100
df["Inflation (GDPDEF)"] = df["GDP Deflator"].pct_change(4) * 100
df["Real Currency in Circulation Growth Rate (%)"] = df["Currency in Circulation Growth Rate (%)"].sub(df["Inflation (GDPDEF)"])
df["Currency in Circulation Velocity"] = df["Nominal GDP ($ Mil)"].div(df["Base: Currency in Circulation ($ Mil)"])
df["Currency in Circulation % Change Velocity"] = df["Currency in Circulation Velocity"].pct_change(4)
df["Real S&P Growth Rate (%)"] = df["S&P Growth Rate (%)"].sub(df["Inflation (CPI)"])
df["Real 1 Year Treasury Rate"] = df["1 Year Treasury Rate (%)"].sub(df["Inflation (CPI)"])
df["Real 3 Month Treasury Rate"] = df["3 Month Treasury Rate (%)"].sub(df["Inflation (CPI)"])
df["Real 1 Month Treasury Rate"] = df["1 Month Treasury Rate (%)"].sub(df["Inflation (CPI)"])
df["Real Effective Federal Funds Rate"] = df['Effective Federal Funds Rate (%)'].sub(df["Inflation (CPI)"])
df["30 Year Minus 1 Year (%)"] = df["30 Year Treasury Rate (%)"].sub(df["1 Year Treasury Rate (%)"])
df["30 Year Minus 3 Month (%)"] = df["30 Year Treasury Rate (%)"].sub(df["3 Month Treasury Rate (%)"])
df["30 Year Minus 1 Month (%)"] = df["30 Year Treasury Rate (%)"].sub(df["1 Month Treasury Rate (%)"])
df["30 Year Minus Effective Federal Funds Rate"] = df["30 Year Treasury Rate (%)"].sub(df['Effective Federal Funds Rate (%)'])
df["10 Year Minus 2 Year (%)"] = df["10 Year Treasury Rate (%)"].sub(df["2 Year Treasury Rate (%)"])
df["10 Year Minus 1 Year (%)"] = df["10 Year Treasury Rate (%)"].sub(df["1 Year Treasury Rate (%)"])
df["10 Year Minus 3 Month (%)"] = df["10 Year Treasury Rate (%)"].sub(df["3 Month Treasury Rate (%)"])
df["10 Year Minus 1 Month (%)"] = df["10 Year Treasury Rate (%)"].sub(df["1 Month Treasury Rate (%)"])
df["10 Year Minus Effective Federal Funds Rate"] = df["10 Year Treasury Rate (%)"].sub(df['Effective Federal Funds Rate (%)'])
# After data is downloaded create new data and transform data
divisiaAggregates = pd.read_excel("http://centerforfinancialstability.org/amfm/Divisia.xlsx", header = [1], index_col = [0],
parse_dates=True).resample("Q").first()
dkeys = {'Divisia M4 level, normalized to equal 100 in Jan. 1967': "DM4",
'Divisia M4 year-over-year percentage growth rate':"DM4 YoY % Change",
'M4 interest-rate aggregate, percent per year': "DM4 Interest Agg",
'Divisia M4- level, normalized to equal 100 in Jan. 1967': "DM4-",
'Divisia M4- year-over-year percentage growth rate':"DM4- YoY % Change",
'M4- interest-rate aggregate, percent per year': "DM4- Interest Agg",
'Divisia M3 level, normalized to equal 100 in Jan. 1967':"DM3",
'Divisia M3 year-over-year percentage growth rate':"DM3 YoY Change",
'M3 interest-rate aggregate, percent per year':"DM3 Interest Agg"}
divisiaAggregates.rename(columns={key:val for key, val in dkeys.items()},
inplace = True)
for key, val in divisiaAggregates.items():
df[key] = val.loc["1970":]
df["DM4 Velocity"] = df["DM4"].div(df["Nominal GDP ($ Mil)"])
df["DM4 % Change Velocity"] = df["DM4 Velocity"].pct_change(4)
df["DM4 Velocity (normalized)"] = df["DM4 Velocity"].div(df["DM4 Velocity"].iloc[0])
df["DM4- Velocity"] = df["DM4-"].div(df["Nominal GDP ($ Mil)"])
df["DM4- % Change Velocity"] = df["DM4- Velocity"].pct_change(4)
df["DM4- Velocity (normalized)"] = df["DM4- Velocity"].div(df["DM4- Velocity"].iloc[0])
df["Currency in Circulation Velocity (normalized)"] = df["Currency in Circulation Velocity"].div(df["Currency in Circulation Velocity"].iloc[0])
log_vars = ["Nominal GDP ($ Mil)",
"Real GDP ($ Mil)",
"GDP Deflator",
"CPI",
"Base: Total ($ Mil)",
"Base: Currency in Circulation ($ Mil)",
"Base: Currency not in Circulation ($ Mil)",
"Currency in Circulation Velocity",
"DM4",
"DM4 Velocity",
"DM4-",
"DM4- Velocity",]
for log_var in log_vars:
df["Log " + log_var] = np.log(df[log_var])
data_gathered = True
import copy
def map_pcs(df, method = "pearson", sig = 0.01):
def check_naive_corr(pcs_dct, sig_corr_dct):
def check_remaining_controls(control_vars, p_dct, s_c_dct,x,y, controls_used):
for c_var in control_vars:
controls_used.append(c_var)
controls_used_string = str(controls_used)
corr = df.partial_corr(x=x,y=y, covar=controls_used,
method=method).round(3)
#p_dct[controls_used] = {}
#s_c_dct[c_var]={}
c_used = copy.copy(controls_used)
if corr["p-val"].values[0] < sig:
p_dct[controls_used_string] = corr
s_c_dct[controls_used_string] =(p_dct[controls_used_string]["r"].values[0])
remaining_controls = [x for x in control_vars if x not in controls_used]
if len(remaining_controls) > 0:
check_remaining_controls(remaining_controls, p_dct, s_c_dct, x, y, c_used)
for x in df.keys():
sig_corr_dct[x] = {}
pcs_dct[x]={}
for y in df.keys():
pcs_dct[x][y] = {}
sig_corr_dct[x][y] = {}
if x == y:
# No need to calculate if the variable is itself
pcs_dct[x][y]["[]"] = 1
else:
pcs_dct[x][y]["[]"] = df.partial_corr(x=x,y=y, covar=None,
method=method).round(3)
#other_vars = [z for z in df.keys() if z != y and z != x ]
if pcs_dct[x][y]["[]"]["p-val"].values[0] < sig:
sig_corr_dct[x][y]["[]"] =(y, pcs_dct[x][y]["[]"]["r"].values[0])
### Check controls to evaluate d-separate / d-connected
control_vars = [z for z in df.keys() if z != y and z != x]
check_remaining_controls(control_vars, pcs_dct[x][y], sig_corr_dct[x][y],x, y, controls_used = [])
pcs_dct = {}
sig_corr_dct = {}
check_naive_corr(pcs_dct, sig_corr_dct)
return pcs_dct, sig_corr_dct
#MV=Py
plt.rcParams.update({'font.size': 24})
plot_vars = ["Currency in Circulation Growth Rate (%)",
#"Log Currency in Circulation Velocity",# (normalized)",
#"Currency in Circulation % Change Velocity",
"% Currency not in Circulation",
"Inflation (GDPDEF)",
#"Currency not in Circulation Growth Rate (%)",
#"DM4 YoY % Change",
#"DM4 % Change Velocity",
"S&P Growth Rate (%)",
#"S&P Growth Rate Change (%; Year-over-Year)",
"Nominal GDP Growth Rate (%)",
#"3 Month Treasury Rate (%)"]
"1 Year Treasury Rate (%)"]
#"1 Year Treasury Rate (%; Change Year-over-Year)"]
#"30 Year Treasury Rate (%; Change Year-over-Year)"]#,
#'Effective Federal Funds Rate (%)',
#"30 Year Minus 3 Month (%)"]#,
#"30 Year Minus 1 Year (%)"]#,
#"30 Year Minus Effective Federal Funds Rate"]
#pcs_dct, sig_corr_dct = gen_pcorr(df[plot_vars].dropna()[:-1])
pcs_dct, sig_corr_dct = map_pcs(df[plot_vars].dropna()[:-1], method = "pearson", sig = .05)
sig_corr_dct
pcs_dct
import statsmodels.api as sm
residuals = {}
partial_corr = {}
reg_df = df[plot_vars].dropna()[:-1]
for y_var in plot_vars:
X_vars = [x for x in plot_vars if x != y_var]
X= reg_df[X_vars]
X["constant"] = 1
y = reg_df[y_var]
model = sm.OLS(y,X)
results = model.fit()
print(results.summary())
predict = results.predict()
reg_df["predict"] = predict
residuals[y_var] = results.resid
r2 = {}
for x in plot_vars:
partial_corr[x] = {}
r2[x] = {}
for y in plot_vars:
if x != y:
Y = pd.DataFrame(residuals[y])
X = pd.DataFrame(residuals[x])
model = sm.OLS(Y,X)
results = model.fit()
print(results.rsquared, results.pvalues)
partial_corr[x][y] = np.corrcoef(residuals[x], residuals[y])[0][1] * -1
print(partial_corr[x][y])
print(pcs_dct[x][y])
reg_df[plot_vars].pcorr().sort_index(axis=0, ascending=True).sort_index(axis=1, ascending = True)
pd.DataFrame(residuals).corr()
pd.DataFrame(partial_corr).sort_index(axis=0, ascending=True).sort_index(axis=1, ascending = True)
corr = reg_df[plot_vars].corr()
pcorr01 = corr[plot_vars[0]][plot_vars[1]]
```
## Create line plots and scatter plots of variables that you expect to be correlated
```
pp = PdfPages("Fed Data" + str(today)[:10] + ".pdf")
plot_vars =["Currency in Circulation Velocity",
"% Currency in Circulation",
"1 Year Treasury Rate (%)",
"Nominal GDP Growth Rate (%)"]
plot_lines(df, plot_vars, linewidth = 3, logy = False, pp = pp)
plot_scatter(df, plot_vars, pp = pp)
plot_vars =["Currency in Circulation Velocity (normalized)",
"DM4- YoY % Change",
"DM4- Velocity (normalized)",
"1 Year Treasury Rate (%)",
"Nominal GDP Growth Rate (%)"]
plot_lines(df, plot_vars, linewidth = 3, logy = False, pp = pp)
plot_scatter(df, plot_vars, pp = pp)
pp.close()
df['S&P Growth Rate (%)']
#MV=Py
plt.rcParams.update({'font.size': 24})
plot_vars = ["Currency in Circulation Growth Rate (%)",
#"Log Currency in Circulation Velocity",# (normalized)",
#"Currency in Circulation % Change Velocity",
"% Currency not in Circulation",
"Inflation (GDPDEF)",
#"Currency not in Circulation Growth Rate (%)",
#"DM4 YoY % Change",
#"DM4 % Change Velocity",
"S&P Growth Rate (%)",
#"S&P Growth Rate Change (%; Year-over-Year)",
"Nominal GDP Growth Rate (%)",
#"3 Month Treasury Rate (%)"]
"1 Year Treasury Rate (%)"]
#"1 Year Treasury Rate (%; Change Year-over-Year)"]
#"30 Year Treasury Rate (%; Change Year-over-Year)"]#,
#'Effective Federal Funds Rate (%)',
#"30 Year Minus 3 Month (%)"]#,
#"30 Year Minus 1 Year (%)"]#,
#"30 Year Minus Effective Federal Funds Rate"]
pcs_dct, sig_corr = gen_pcorr(df[plot_vars].dropna()[:-1], method = "pearson", sig = .05)
sig_corr
```
## I expect that velocity will be correlated with the interest rate and that growth of the money stock will be correlated with inflation and nominal interest rates
## So far I have not used the rate of change of velocity, but it would be worth re-checking these estimates using growth rates for that variable.
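The expectation above follows the equation of exchange, MV = PY: with money demand stable, velocity should co-move with the opportunity cost of holding money, i.e. the nominal interest rate. A toy sketch of that check (all numbers synthetic, not FRED data):

```python
import numpy as np

nominal_gdp = np.array([100.0, 110.0, 121.0, 133.0, 146.0])
money_stock = np.array([50.0, 52.0, 53.0, 53.5, 54.0])
interest_rate = np.array([2.0, 3.0, 4.5, 6.0, 7.5])

velocity = nominal_gdp / money_stock  # V = PY / M
corr = np.corrcoef(velocity, interest_rate)[0, 1]
print(round(corr, 3))
```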
```
plt.rcParams['axes.ymargin'] = .1
plt.rcParams['axes.xmargin'] = .1
import networkx as nx
def graph_pcorr(sig_corr, title = "Macro Partial Correlations"):
graph = nx.Graph()
edges = []
edge_labels = {}
for key in sig_corr:
for key2 in sig_corr[key]:
if (key2, key) not in edges:
edge = (key.replace(" ","\n"), key2[0].replace(" ","\n"))
edges.append(edge)
edge_labels[edge] = str(key2[1])
# edge format: ("i", "j") --> from node i to node j
graph.add_edges_from(edges)
color_map = ["C0" for g in graph]
fig, ax = plt.subplots(figsize = (20,12))
graph.nodes()
plt.tight_layout()
pos = nx.spring_layout(graph)#, k = 5/(len(sig_corr.keys())**.5))
plt.title(title, fontsize = 30)
nx.draw_networkx(graph, pos, node_color=color_map,
with_labels=True, arrows=False,
font_size = 20, alpha = .8,
ax = ax)
nx.draw_networkx_edge_labels(graph,pos,
edge_labels=edge_labels,
font_color='green',
font_size=20)
plt.axis("off")
plt.savefig("g1.png", format="PNG")
# tell matplotlib you're done with the plot: https://stackoverflow.com/questions/741877/how-do-i-tell-matplotlib-that-i-am-done-with-a-plot
plt.show()
graph_pcorr(sig_corr, "Choice Control: Currency in Circulation Growth Rate (%),\nInflation: GDP Deflator")
#MV=Py
plot_vars = ["Currency in Circulation Growth Rate (%)",
#"Log Currency in Circulation Velocity",# (normalized)",
#"Currency in Circulation % Change Velocity",
"% Currency not in Circulation",
"Inflation (CPI)",
#"Currency not in Circulation Growth Rate (%)",
#"DM4 YoY % Change",
#"DM4 % Change Velocity",
"S&P Growth Rate (%)",
#"S&P Growth Rate Change (%; Year-over-Year)",
"Nominal GDP Growth Rate (%)",
#"3 Month Treasury Rate (%)"]
"1 Year Treasury Rate (%)"]
#"1 Year Treasury Rate (%; Change Year-over-Year)"]
#"30 Year Treasury Rate (%; Change Year-over-Year)"]#,
#'Effective Federal Funds Rate (%)',
#"30 Year Minus 3 Month (%)"]#,
#"30 Year Minus 1 Year (%)"]#,
#"30 Year Minus Effective Federal Funds Rate"]
pcs_dct, sig_corr = gen_pcorr(df[plot_vars].dropna()[:-1], method = "pearson", sig = .05)
graph_pcorr(sig_corr, "Choice Control: Currency in Circulation Growth Rate (%),\nInflation: CPI")
#MV=Py
plot_vars = [#"Currency in Circulation Growth Rate (%)",
"Log Currency in Circulation Velocity",# (normalized)",
#"Currency in Circulation % Change Velocity",
"% Currency not in Circulation",
"Inflation (GDPDEF)",
#"Currency not in Circulation Growth Rate (%)",
#"DM4 YoY % Change",
#"DM4 % Change Velocity",
"S&P Growth Rate (%)",
#"S&P Growth Rate Change (%; Year-over-Year)",
"Nominal GDP Growth Rate (%)",
#"3 Month Treasury Rate (%)"]
"1 Year Treasury Rate (%)"]
#"1 Year Treasury Rate (%; Change Year-over-Year)"]
#"30 Year Treasury Rate (%; Change Year-over-Year)"]#,
#'Effective Federal Funds Rate (%)',
#"30 Year Minus 3 Month (%)"]#,
#"30 Year Minus 1 Year (%)"]#,
#"30 Year Minus Effective Federal Funds Rate"]
pcs_dct, sig_corr = gen_pcorr(df[plot_vars].dropna()[:-1], method = "pearson", sig = .05)
graph_pcorr(sig_corr, "Choice Control: Log Currency in Circulation Velocity,\nInflation: GDP Deflator")
#MV=Py
plot_vars = [#"Currency in Circulation Growth Rate (%)",
"Log Currency in Circulation Velocity",# (normalized)",
#"Currency in Circulation % Change Velocity",
"% Currency not in Circulation",
"Inflation (CPI)",
#"Currency not in Circulation Growth Rate (%)",
#"DM4 YoY % Change",
#"DM4 % Change Velocity",
"S&P Growth Rate (%)",
#"S&P Growth Rate Change (%; Year-over-Year)",
"Nominal GDP Growth Rate (%)",
#"3 Month Treasury Rate (%)"]
"1 Year Treasury Rate (%)"]
#"1 Year Treasury Rate (%; Change Year-over-Year)"]
#"30 Year Treasury Rate (%; Change Year-over-Year)"]#,
#'Effective Federal Funds Rate (%)',
#"30 Year Minus 3 Month (%)"]#,
#"30 Year Minus 1 Year (%)"]#,
#"30 Year Minus Effective Federal Funds Rate"]
pcs_dct, sig_corr = gen_pcorr(df[plot_vars].dropna()[:-1], method = "pearson", sig = .05)
graph_pcorr(sig_corr, "Choice Control: Log Currency in Circulation Velocity,\nInflation: CPI")
fig, ax = plt.subplots(figsize = (16,10))
df[["Currency in Circulation Velocity", "1 Year Treasury Rate (%)"]].plot(ax = ax,
secondary_y = "1 Year Treasury Rate", legend = False)
plt.legend(loc= "upper right", fontsize = 16)
plot_vars = [#"Currency in Circulation Growth Rate (%)",
#"Currency in Circulation % Change Velocity",# (normalized)",
"Log Currency in Circulation Velocity",
#"% Currency not in Circulation",
#"Currency not in Circulation Growth Rate (%)",
#"DM4 YoY % Change",
#"DM4 % Change Velocity",
"Real GDP Growth Rate (%)",
"Inflation (GDP)",
"Real 1 Year Treasury Rate",
"10 Year Minus 1 Year (%)"]
pcs_dct, sig_corr = gen_pcorr(df[plot_vars].dropna(), method = "pearson", sig = .01)
graph_pcorr(sig_corr, "Choice Control: Log Currency in Circulation Velocity,\nInflation: GDP")
# calculate correlation table
df[plot_vars].corr().to_csv("MacroVariableCorr.csv")
df[plot_vars].corr().to_excel("MacroVariableCorr.xlsx")
df[plot_vars].corr()
```
## No success so far; I probably don't understand the Gomme data...
```
gomme_data = pd.read_csv("GommeData.csv", parse_dates = True,
index_col = [0]).resample(freq).mean()
for key, val in gomme_data.items():
df[key] = val.loc["1975":]
df[key +" (%; Change Year-over-Year)"] = df[key].diff(4)
list(df.keys())
#MV=Py
plt.rcParams.update({'font.size': 24})
plot_vars = ["Currency in Circulation Growth Rate (%)",
#"Log Currency in Circulation Velocity",# (normalized)",
#"Currency in Circulation % Change Velocity",
"% Currency not in Circulation",
"Inflation (GDPDEF)",
#"Currency not in Circulation Growth Rate (%)",
#"DM4 YoY % Change",
#"DM4 % Change Velocity",
"S&P Growth Rate (%)",
#"S&P Growth Rate Change (%; Year-over-Year)",
"Nominal GDP Growth Rate (%)",
#"3 Month Treasury Rate (%)"]
"1 Year Treasury Rate (%)"]
#"1 Year Treasury Rate (%; Change Year-over-Year)",
#"30 Year Treasury Rate (%; Change Year-over-Year)"]#,
#'Effective Federal Funds Rate (%)',
#"30 Year Minus 3 Month (%)"]#,
#"30 Year Minus 1 Year (%)"]#,
#"30 Year Minus Effective Federal Funds Rate"]
#'Return to business capital, pre-tax, no capital gain (%; Change Year-over-Year)',
#'Return to business capital, pre-tax, no capital gain']
#"Return to all capital, after--tax"]
#"Solow residual (%; Change Year-over-Year)"]
pcs_dct, sig_corr = gen_pcorr(df[plot_vars].dropna()[:-1], method = "pearson", sig = .05)
graph_pcorr(sig_corr)
```
# Translate demand file format: from .dat to flow.csv
```
import sys
import pandas as pd
import xml.etree.ElementTree as ET
import datetime
import re
import nltk
import numpy
import os
from IPython.display import display, HTML
low_memory=False
PATH="data/OD_MADRID_v2"
# Pick one scenario (the last assignment wins)
SCENARIO="madrid_barrio_salamanca_od"
SCENARIO="madrid_las_tablas_od"
SCENARIO="madrid_retiro_od"
file_nodes="{}/{}_nodes.csv".format(PATH,SCENARIO)
file_edges="{}/{}_edgeids.csv".format(PATH,SCENARIO)
file_weights="{}/{}_weights.csv".format(PATH,SCENARIO)
file_speeds="{}/{}_speeds.csv".format(PATH,SCENARIO)
file_lengths="{}/{}_lengths.csv".format(PATH,SCENARIO)
file_demands="{}/{}_source_demand.dat".format(PATH,SCENARIO)
DEST_DIR=PATH
out_file_flows="{}/{}_flows_NEW.csv".format(DEST_DIR,SCENARIO)
def isNaN(num):
return num != num
# https://www.saltycrane.com/blog/2008/01/how-to-invert-dict-in-python/
def invert_dict(d):
return dict([(value, key) for key, value in d.items()])
def demand_dat_2_csv( file_nodes, file_demands, out_file_flows ):
low_memory=False
sep=","
demsep=";"
#---------------------------------------------------
# Load sources
#---------------------------------------------------
# Load nodes
print("Reading nodes file: "+file_nodes)
dfn = pd.read_csv(file_nodes, encoding='utf-8', sep=sep )
dfn.columns.values[0] = "node_name"
# Load DEMANDS
print("Reading DEMAND file: "+file_demands)
dfd = pd.read_csv(file_demands, encoding='utf-8', sep=demsep )
#---------------------------------------------------
# Generating flows file
#---------------------------------------------------
print("Generating flows file: "+out_file_flows)
df2 = dfn[['node_name']]
for col in dfn['node_name'].tolist():
df2[str(col)]=0
nr = len(df2.columns)
nnodes={}
inodes={}
for x in range(1,nr):
nname='{}'.format(df2.columns[x])
nnodes[nname]=x-1
inodes[x-1]=nname
for x, rowData in dfd.iterrows():
nfrom = str(rowData[0])
if( nfrom[0] == 'N'):
nfrom = nfrom[1:]
nto = str(rowData[1])
if( nto[0] == 'N'):
nto = nto[1:]
demand = rowData[2]
df2.iloc[nnodes[nfrom],nnodes[nto]+1]=demand # +1 skips the leading node_name column
df2.to_csv(out_file_flows, encoding='utf-8', index=False, sep=sep)
return df2
df= demand_dat_2_csv( file_nodes, file_demands, out_file_flows)
```
# BACKUP
```
sep=','
demsep=';'
dfd = pd.read_csv(file_demands, encoding='utf-8', sep=demsep )
dfn = pd.read_csv(file_nodes, encoding='utf-8', sep=sep)
dfn.columns.values[0] = "node_name"
df2 = dfn[['node_name']]
dfd.head()
for col in dfn['node_name'].tolist():
df2[str(col)]=0
def cast(x):
return str(x)
df2['node_name']=df2['node_name'].apply(str)
df2.head()
nr = len(df2.columns)
nnodes={}
inodes={}
for x in range(1,nr):
nname='{}'.format(df2.columns[x])
nnodes[nname]=x-1
inodes[x-1]=nname
nnodes['1209332387']
df2.iloc[nnodes['1209330272'],nnodes['1209332387']]=999
df2
df2.loc['1209330272','1209332387']=999
df2
dfn['node_name'].tolist()
rows = invert_dict(dfw[dfw.columns[0]].apply(str).to_dict())
"SI" if '1209330272' in rows else "NO"
#rows[0]
#rows
dfw.head()
for col in dfw.columns[1:]:
dfw[col].values[:] = 0
dfw[dfw.columns[0:3]].head().to_csv
cols = dfw.columns
for x, rowData in dfd.iterrows():
if( rowData[0][0] == 'N'):
nfrom = rowData[0][1:]
else:
nfrom = rowData[0]
if( rowData[1][0] == 'N'):
nto = rowData[1][1:]
else:
nto = rowData[1]
nto_idx = rows[nto]
# display(HTML("nfrom={} >>> {}".format(nfrom,"True" if nfrom in cols else "False")))
# display(HTML("nto ={} >>> {}".format(nto,"True" if nto in rows else "False")))
prev = dfw.loc[nto_idx,nfrom]
demand = numpy.float64(rowData[2])
dfw.loc[nfrom,nto_idx] = demand
display(HTML("[{},{}] >>> {} >>> {}".format(nfrom,nto,prev,demand)))
# prev= dfw.loc[nfrom,nto_idx]
# demand = numpy.float64(777.0)
# dfw.loc[nfrom,nto_idx] = demand
# display(HTML("[{},{}] >>> {} >>> {}".format(nfrom,nto,prev,demand)))
dfw.head()
```
## **MODULE 3: Fundamental analysis using Regression**
###**3.1**
```
# Import pandas library
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv('/content/GOLD.csv') #data of the last 2 years price action of Indian (MCX) gold standard.
df.head(5)
df.tail(5) #EXPLORATION OF DATASET
df.describe() #EXPLORATION OF DATASET
df.shape #EXPLORATION OF DATASET
df.dtypes
df.dropna(inplace=True)
df
```
*USING LINEAR REGRESSION AND POLYNOMIAL REGRESSION*
```
#importing the needed libraries
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import PolynomialFeatures
```
*Using a polynomial function of the "Open" price column*
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
X = df["Open"].values
y = df["Pred"].values
X = X.reshape(-1, 1)
poly = PolynomialFeatures(degree=2)
poly_data = poly.fit_transform(X)
poly.fit(poly_data, y)
model = LinearRegression()
model.fit(poly_data,y)
coef = model.coef_
print(coef)
intercept = model.intercept_
print(intercept)
y3_pred=model.predict(poly.fit_transform(X))
plt.scatter(X,y,color='red')
plt.plot(X, y3_pred,color='blue')
plt.legend(['Prediction','Original'])
plt.show()
rmse = np.sqrt(mean_squared_error(y,y3_pred))
r2 = r2_score(y,y3_pred)
print(rmse)
print(r2)
x = df[['Price','Open','High','Low']]
y = df['new'].values.reshape(-1,1)
X_train, X_test, y_train, y_test = train_test_split(x, y,
test_size=0.2, random_state=42)
from sklearn.pipeline import make_pipeline
model=make_pipeline(PolynomialFeatures(),LinearRegression())
model.fit(X_train, y_train)
predict=model.predict(x)
print("THE FIRST TEN PREDICTIONS ARE : ",end=" ")
df_1 = pd.DataFrame()
df_1['ACTUAL'] = df['new']
df_1['PREDICTED'] = predict
df_1.head(10)
import seaborn as sns
ax1 = sns.distplot(y, hist=False, color="r", label="Actual Value of Pred")
sns.distplot(predict, hist=False, color="b", label="Fitted Values of Pred" , ax=ax1)
```
*Using linear regression on the "Open" price column*
```
x = df['Open'].values.reshape(-1,1)
y = df['Pred'].values.reshape(-1,1)
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
reg = LinearRegression()
reg.fit(X_train, y_train) #training the algorithm
print("INTERCEPT IS :",end=" ")
print(reg.intercept_)
print("COEFFICIENT IS :",end=" ")
print(reg.coef_)
y_pred = reg.predict(x)
print("THE FIRST TEN PREDICTIONS ARE : ")
print(y_pred[0:10])
plt.scatter(x,y,color='green',label="dataset")
plt.plot(x,y_pred, color='red',label="REGRESSION LINE")
plt.title('predicted Vs actual', fontsize=14)
plt.legend()
plt.grid(True)
plt.show()
```
*Using linear regression on the "High" price column*
```
x = df['High'].values.reshape(-1,1)
y = df['Pred'].values.reshape(-1,1)
X_train, X_test, y_train, y_test = train_test_split(x, y,
test_size=0.2, random_state=42)
reg1 = LinearRegression()
reg1.fit(X_train, y_train) #training the algorithm
print("INTERCEPT IS :",end=" ")
print(reg1.intercept_)
print("COEFFICIENT IS :",end=" ")
print(reg1.coef_)
y1_pred = reg1.predict(x)
print("THE FIRST TEN PREDICTIONS ARE : ",end=" ")
print(y1_pred[0:10])
plt.scatter(x,y,color='brown',label="dataset")
plt.plot(x,y1_pred, color='orange',label="REGRESSION LINE")
plt.title('predicted Vs actual', fontsize=10)
plt.legend()
plt.grid(True)
plt.show()
```
*Using linear regression on the "Low" price column*
```
x = df['Low'].values.reshape(-1,1)
y = df['Pred'].values.reshape(-1,1)
X_train, X_test, y_train, y_test = train_test_split(x, y,
test_size=0.2, random_state=42)
reg1 = LinearRegression()
reg1.fit(X_train, y_train) #training the algorithm
print("INTERCEPT IS :",end=" ")
print(reg1.intercept_)
print("COEFFICIENT IS :",end=" ")
print(reg1.coef_)
y3_pred = reg1.predict(x)
print("THE FIRST TEN PREDICTIONS ARE : ",end=" ")
print(y3_pred[0:10])
plt.scatter(x,y,color='pink',label="dataset")
plt.plot(x,y3_pred, color='black',label="REGRESSION LINE")
plt.title('predicted Vs actual', fontsize=10)
plt.legend()
plt.grid(True)
plt.show()
```
MULTIPLE LINEAR REGRESSION:
```
x = df[['Price','Open','High','Low']]
y = df['Pred'].values.reshape(-1,1)
X_train, X_test, y_train, y_test = train_test_split(x, y,
test_size=0.2, random_state=42)
reg1 = LinearRegression()
reg1.fit(X_train, y_train) #training the algorithm
#Using linear regression, find the coefficients of the inputs
print("INTERCEPT IS :",end=" ")
print(reg1.intercept_)
print("COEFFICIENT IS :",end=" ")
print(reg1.coef_)
y3_pred = reg1.predict(x)
print("THE FIRST TEN PREDICTIONS ARE : ",end=" ")
d = {'col1': df['Pred']}
df1 = pd.DataFrame()
df1['ACTUAL'] = df['Pred']
df1['PREDICTED'] = y3_pred
df1.head(10)
import seaborn as sns
ax1 = sns.distplot(y, hist=False, color="r", label="Actual Value of Pred")
sns.distplot(y3_pred, hist=False, color="b", label="Fitted Values of Pred" , ax=ax1)
df['Pred'] = y3_pred
df
```
**THUS WE CAN CONCLUDE THAT :**
1. **PRED COLUMN IS A LINEAR FUNCTION OF THE INPUT COLUMNS**
2. **NEW COLUMN IS A POLYNOMIAL FUNCTION OF THE INPUT COLUMNS**
***Distplots***
Plotting distplots to recognize the discrepancies
```
import seaborn as sns
sns.distplot(df['Pred'],kde_kws={"color": "k", "lw": 2, "label": "KDE"},
hist_kws={"histtype": "step", "linewidth": 3,
"alpha": 1, "color": "g"})
sns.distplot(df['new'],kde_kws={"color": "k", "lw": 2, "label": "KDE"},
hist_kws={"histtype": "step", "linewidth": 3,
"alpha": 1, "color": "g"})
```
###**3.2**
***CAPM Analysis and Beta Calculation using regression***
* Beta is a measure of a stock's volatility in relation to the overall market
* The formula for calculating beta is the covariance of the return of an asset with the return of the benchmark divided by the variance of the return of the benchmark over a certain period.
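The two definitions coincide: the slope of the least-squares regression of asset returns on benchmark returns equals the covariance divided by the variance. A quick sketch with synthetic returns (the numbers below are invented, not taken from the stock/Nifty data used later):

```python
import numpy as np

rng = np.random.default_rng(1)
market = rng.normal(0.0005, 0.01, 1000)             # benchmark daily returns
stock = 0.4 * market + rng.normal(0, 0.005, 1000)   # asset with true beta = 0.4

# Definition: beta = Cov(asset, benchmark) / Var(benchmark)
beta_formula = np.cov(stock, market)[0, 1] / np.var(market, ddof=1)

# Equivalent: slope of the least-squares regression of stock on market
beta_slope = np.polyfit(market, stock, 1)[0]
```

On the full sample the two estimates agree to numerical precision; below, the regression is fit on a train split, so its slope can differ slightly from the full-sample covariance ratio.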
```
df1=pd.read_csv('/content/week2 (1).csv')
df1
df_nifty=pd.read_csv('/content/Nifty50.csv')
df_nifty
```
CREATE A NEW COLUMN CONSISTING OF THE RETURNS
```
df1['returns1']=df1['Close Price'].pct_change()
df1['returns1'][0]=0
df1
```
CREATE A NEW COLUMN CONSISTING OF RETURNS OF NIFTY
```
#creating a column in nifty which contains the daily returns
df_nifty['returns']=df_nifty['Close'].pct_change()
df_nifty['returns'][0]=0
df_nifty
```
CALCULATING BETA FOR LAST 3 MONTHS(90 DAYS)
```
#The daily Beta value for the past 3 months. (Daily= Daily returns)
y = df1['returns1'].tail(90).values.reshape(-1,1)
x = df_nifty['returns'].tail(90).values.reshape(-1,1)
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
reg1 = LinearRegression()
reg1.fit(X_train, y_train) #training the algorithm
print("INTERCEPT (ALPHA) IS :",end=" ")
print(reg1.intercept_)
print("COEFFICIENT (BETA) IS :",end=" ")
print(reg1.coef_)
```
* *THE BETA VALUE IS: 0.40389234 FOR DAILY RETURNS.*
* *THE ALPHA VALUE IS: -0.00070961 FOR DAILY RETURNS.*
```
y3_pred = reg1.predict(x)
print("THE LAST TEN PREDICTIONS ARE : ",end=" ")
d = {'col1': df['Pred']}
df2 = pd.DataFrame()
df2['ACTUAL'] =(df1['Day_Perc_Change'].tail(90))/100
df2['PREDICTED'] = y3_pred
df2.tail(10)
plt.scatter(x,y,color='brown',label="dataset")
plt.plot(x,y3_pred, color='orange',label="REGRESSION LINE")
plt.title('predicted Vs actual', fontsize=10)
plt.legend()
plt.grid(True)
plt.show()
```
MONTHLY BETA VALUE PREDICTION
```
df2=pd.DataFrame()
print("MONTHLY RETURNS OF THE STOCK HINDUSTAN UNILEVER")
df2['Monthly returns']=df1.groupby(['Year','Month']).apply(lambda x: np.average(x['Close Price']))
df2
print("MONTHLY RETURNS OF THE STOCK HINDUSTAN UNILEVER")
print(" ")
df2['perc_monthly_return']=df2.pct_change()
df2['perc_monthly_return'][0]=0
print(df2) #returns of the month
df_nifty.astype({'Date': 'datetime64[ns]'}).dtypes
import datetime
df_nifty['Month'] = pd.DatetimeIndex(df_nifty['Date']).month #to extract month from date
df_nifty['Year'] = pd.DatetimeIndex(df_nifty['Date']).year #to extract year from the date
df_nifty
df3=pd.DataFrame()
print("MONTHLY RETURNS OF NIFTY")
print(" ")
df3['Monthly return']=df_nifty.groupby(['Year','Month']).apply(lambda x: np.average(x['Close']))
df3
df3['perc_monthly_returns']=df3.pct_change()
df3['perc_monthly_returns'][0]=0
print(df3)
#The monthly Beta value. (Monthly= Monthly returns)
y = df2['perc_monthly_return'].values.reshape(-1,1)
x = df3['perc_monthly_returns'].values.reshape(-1,1)
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
reg1 = LinearRegression()
reg1.fit(X_train, y_train) #training the algorithm
print("INTERCEPT (ALPHA) IS :",end=" ")
print(reg1.intercept_)
print("COEFFICIENT (BETA) IS :",end=" ")
print(reg1.coef_)
```
* *THE BETA VALUE IS: 0.66850356 FOR MONTHLY RETURNS.*
* *THE ALPHA VALUE IS: 0.02003885 FOR MONTHLY RETURNS.*
```
y3_pred = reg1.predict(x)
print("THE LAST TEN PREDICTIONS ARE : ",end=" ")
df4 = pd.DataFrame()
df4['ACTUAL'] =df2['perc_monthly_return']
df4['PREDICTED'] = y3_pred
df4
plt.scatter(x,y,color='brown',label="dataset")
plt.plot(x,y3_pred, color='orange',label="REGRESSION LINE")
plt.title('predicted Vs actual', fontsize=10)
plt.legend()
plt.grid(True)
plt.show()
rmse = np.sqrt(mean_squared_error(y,y3_pred))
r2 = r2_score(y,y3_pred)
print("RMSE IS ",end="")
print(rmse)
print("R^2 IS ",end="")
print(r2)
```
###**INFERENCE**
1. Beta can be referred to as a measure of the sensitivity of stock returns to market returns.
2. A stock with a beta less than one tends to be less volatile.
3. A stock with a beta more than one tends to be more volatile.
4. Here the beta values are positive and less than one, so the stock is less volatile than the market.
5. This means that for every +1% move in the Nifty, our portfolio will go up about 0.4% in value. [DAILY RETURNS]
6. Thus we can conclude that buying the stock is less risky, but it will offer lower returns as well.
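Point 5 is the beta relation applied directly; as a quick sketch (0.404 is the rounded daily beta estimated earlier):

```python
beta = 0.404        # rounded daily beta from the regression above
nifty_move = 1.0    # a +1% move in the Nifty benchmark
portfolio_move = beta * nifty_move   # expected portfolio move, in percent
```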
```
# Copyright 2020 IITK EE604A Image Processing. All Rights Reserved.
#
# Licensed under the MIT License. Use and/or modification of this code outside of EE604 must reference:
#
# © IITK EE604A Image Processing
# https://github.com/ee604/ee604_assignments
#
# Author: Shashi Kant Gupta and Prof K. S. Venkatesh, Department of Electrical Engineering, IIT Kanpur
```
## Prepare environment
```
%%bash
pip install git+https://github.com/ee604/ee604_plugins
pip install scikit-video
from ee604_plugins import download_dataset
download_dataset(assignment_no=0, task_no=3)
```
## Getting started with OpenCV
OpenCV is a widely used package for image processing and computer vision applications.
### Images as numpy array
```
import numpy as np # import numpy package
import cv2 # import opencv package
import matplotlib.pyplot as plt # we will use this to display images
img_orig = cv2.imread("data/lena_color.jpg") # this is how you load an image in openCV
# You will note here that the size is (512, 512, 3). This means image width and height are 512.
# And 3 corresponds to color channels. Therefore it's color image.
# Note: OpenCV loads images in 'BGR' format. i.e.
# img[:, :, 0] -- corresponds to blue parts of the image
# img[:, :, 1] -- corresponds to green parts of the image
# img[:, :, 2] -- corresponds to red parts of the image
print(img_orig.shape) # print the image size
# displaying images: use plt.imshow(img)
# but before we display images using "plt.imshow()"
# we will convert BGR to RGB.
img = img_orig[:, :, [2, 1, 0]] # change to rgb
# alternate:
# img = cv2.cvtColor(img_orig, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.axis("off") # disable axis
plt.show()
img = cv2.imread("data/lena_color.jpg", 0) # loads a greyscale version of the image
print(img.shape) # no color channel
plt.imshow(img, cmap='gray') # you need to inform pyplot that you want a grayscaled image
plt.axis("off")
plt.show()
# Normally if you do not mention cmap="gray". plt.imshow()
# will display a thermal plot for intensity values. see the example below
plt.imshow(img)
plt.axis("off")
plt.colorbar() # display the colorbar
plt.show()
# Another example of thermal plot
plt.imshow(img, cmap="plasma")
plt.axis("off")
plt.colorbar() # display the colorbar
plt.show()
# Alternatively you can use cv2.imshow() to display images, but only if you aren't using Jupyter notebooks
# If you are on Google Colab you can use the following patch
# We will stick to using pyplot in most places, as it can be used independent of the platform you are using.
from google.colab.patches import cv2_imshow
print("GreyScaled")
cv2_imshow(img)
print("Colored")
cv2_imshow(img_orig) # No need to convert to RGB
# With pyplot you can even create image grids!!
# We will create an example of 3 x 3 grid.
# Load images
img_orig = cv2.imread("data/lena_color.jpg")
img_rgb = cv2.cvtColor(img_orig, cv2.COLOR_BGR2RGB) # convert to rgb
img_gray = cv2.cvtColor(img_orig, cv2.COLOR_BGR2GRAY) # convert to greyscale
img_blue_channel = img_orig[:, :, 0] # blue channel
img_green_channel = img_orig[:, :, 1] # green channel
img_red_channel = img_orig[:, :, 2] # red channel
img_blue, img_green, img_red = np.copy(img_rgb), np.copy(img_rgb), np.copy(img_rgb)
img_blue[:, :, [0, 1]] = 0 # set values of rest of the channel to be zero so that we see color corresponding to single channel
img_green[:, :, [0, 2]] = 0 # set values of rest of the channel to be zero so that we see color corresponding to single channel
img_red[:, :, [1, 2]] = 0 # set values of rest of the channel to be zero so that we see color corresponding to single channel
plt.figure(figsize=(12, 12)) #Initiate figure
plt.subplot(3, 3, 1) #3, 3 --> 3 x 3 grid | 1 --> first cell
plt.imshow(img_rgb)
plt.axis("off")
plt.title("RGB")
plt.subplot(3, 3, 2)
plt.imshow(img_gray, cmap="gray")
plt.axis("off")
plt.title("Grayscale")
plt.subplot(3, 3, 3)
plt.imshow(img_gray, cmap="plasma")
plt.axis("off")
plt.title("Thermal")
plt.subplot(3, 3, 4)
plt.imshow(img_red)
plt.axis("off")
plt.title("Red part")
plt.subplot(3, 3, 5)
plt.imshow(img_green)
plt.axis("off")
plt.title("Green part")
plt.subplot(3, 3, 6)
plt.imshow(img_blue)
plt.axis("off")
plt.title("Blue part")
plt.subplot(3, 3, 7)
plt.imshow(img_red_channel, cmap="gray")
plt.axis("off")
plt.title("Red channel as grayscale")
plt.subplot(3, 3, 8)
plt.imshow(img_green_channel, cmap="gray")
plt.axis("off")
plt.title("Green channel as grayscale")
plt.subplot(3, 3, 9)
plt.imshow(img_blue_channel, cmap="gray")
plt.axis("off")
plt.title("Blue channel as grayscale")
# uncomment below line to save the image grid
# plt.savefig("img_grid.png", dpi=150)
plt.show()
# PS: This code can be easily simplified using a function or a loop.
# We intentionally didn't use them, for better understanding.
# Saving images.
cv2.imwrite("lena_gray.jpg", img_gray)
# PS: To save the image grid generated using pyplot. Use this
# plt.savefig("filename.png", dpi=150) # dpi is dots per inch higher dpi ==> better resolution
# supported formats for plt.savefig() ==> eps, pdf, pgf, png, ps, raw, rgba, svg, svgz
```
---
### Loading images from video
Recall that videos are nothing more than a sequence of images. You can easily extract those image sequences using OpenCV's `cv2.VideoCapture()` module.
```
# Loading video file
cap = cv2.VideoCapture("data/bunny_video.mp4")
if not cap.isOpened(): # check if video file is opened
cap.open("data/bunny_video.mp4")
video_frames = [] # we will save frames into this python list
while True:
# Capture frame-by-frame, everytime you call cap.read() it will return you
# next frame in sequence, ret -> 'True' if cap.read() successfully returns an image frame.
ret, frame = cap.read()
if ret:
video_frames.append(frame)
else:
break
cap.release() # to close the video file
# Note 1: You can use cv2.VideoCapture(0) to capture videos from webcam
# Note 2: cv2.VideoCapture(0) won't work on Google Colab
print("Number of frames used in the video:", len(video_frames))
print("Frames per second:", int(len(video_frames)/29)) # video length is 29 sec
# Lets plot video frames recorded at [0, 1, 2, ...., 15] sec.
fps = int(len(video_frames)/29)
def plot_frame(subplot_id, img, name):
plt.subplot(4, 4, 1 + int(subplot_id))
plt.imshow(img[:, :, [2, 1, 0]])
plt.axis("off")
plt.title(name)
plt.figure(figsize=(16, 11))
for i in range(16):
plot_frame(i, video_frames[i*fps], "t = " + str(i) + " secs")
plt.show()
```
### Saving a video
We can also create a video file from multiple images. Let's convert our "bunny_video" from a colored video to grayscale and save it as `bunny_gray.mp4`. OpenCV has its own support for this via `cv2.VideoWriter()`, but we will use `skvideo.io.FFmpegWriter`.
```
from skvideo.io import FFmpegWriter as VideoWriter
# create VideoWriter
out = VideoWriter('bunny_gray.mp4', inputdict={'-r': str(fps)}, outputdict={'-r': str(fps)})
for frame in video_frames:
new_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
out.writeFrame(new_frame) # write frames to video file
out.close() # close when done
# Lets create a fast forward video
from skvideo.io import FFmpegWriter as VideoWriter
# create VideoWriter
out = VideoWriter('bunny_ff.mp4', inputdict={'-r': str(100)}, outputdict={'-r': str(fps)})
for frame in video_frames:
out.writeFrame(frame[:, :, [2, 1, 0]]) # write frames to video file
out.close() # close when done
```
---
### Playing video files in google colab
We will use this package: `https://github.com/shashikg/google_colab_plugins` to play video files in colab.
```
%%bash
pip install git+https://github.com/shashikg/google_colab_plugins
from google_colab_plugins import playVideo
print("Original")
playVideo(filename="data/bunny_video.mp4", width=640, height=360)
print("Grayscale")
playVideo(filename="bunny_gray.mp4", width=640, height=360)
print("Fast forward")
playVideo(filename="bunny_ff.mp4", width=640, height=360)
```
---
### Capturing images from webcam in google colab
You can use `https://github.com/shashikg/google_colab_plugins` package to capture image frames from webcam in colab.
```
import numpy as np
from google.colab.patches import cv2_imshow
from google_colab_plugins import cameraCapture
cap = cameraCapture() # start cameraCapture module
frame = cap.read() # take snapshot
cv2_imshow(frame) # show image
cap.release() # close cameraCapture module
```
---
**Reference:** https://opencv-python-tutroals.readthedocs.io/
# Emojify!
Welcome to the second assignment of Week 2. You are going to use word vector representations to build an Emojifier.
Have you ever wanted to make your text messages more expressive? Your emojifier app will help you do that.
So rather than writing:
>"Congratulations on the promotion! Let's get coffee and talk. Love you!"
The emojifier can automatically turn this into:
>"Congratulations on the promotion! 👍 Let's get coffee and talk. ☕️ Love you! ❤️"
* You will implement a model which inputs a sentence (such as "Let's go see the baseball game tonight!") and finds the most appropriate emoji to be used with this sentence (⚾️).
#### Using word vectors to improve emoji lookups
* In many emoji interfaces, you need to remember that ❤️ is the "heart" symbol rather than the "love" symbol.
* In other words, you'll have to remember to type "heart" to find the desired emoji, and typing "love" won't bring up that symbol.
* We can make a more flexible emoji interface by using word vectors!
* When using word vectors, you'll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate additional words in the test set to the same emoji.
* This works even if those additional words don't even appear in the training set.
* This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set.
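This generalization works because similarity is measured in the embedding space, not by string matching. A toy sketch with made-up 3-dimensional vectors (real GloVe vectors are 50-dimensional; the numbers and the keyword mapping below are invented for illustration):

```python
import numpy as np

# Hypothetical miniature embedding space -- all numbers invented for illustration
word_to_vec = {
    "heart": np.array([0.9, 0.1, 0.0]),
    "love":  np.array([0.8, 0.2, 0.1]),   # never paired with an emoji, but close to "heart"
    "ball":  np.array([0.0, 0.9, 0.2]),
}
emoji_keyword = {"❤️": "heart", "⚾": "ball"}   # the only word-to-emoji pairs we "trained" on

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def best_emoji(word):
    # pick the emoji whose keyword embedding is most similar to the query word
    return max(emoji_keyword,
               key=lambda e: cosine(word_to_vec[word], word_to_vec[emoji_keyword[e]]))
```

Here "love" maps to ❤️ even though only "heart" was ever associated with that emoji.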
#### What you'll build
1. In this exercise, you'll start with a baseline model (Emojifier-V1) using word embeddings.
2. Then you will build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM.
## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "2a".
* You can find your original work saved in the notebook with the previous version name ("v2")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of updates
* sentence_to_avg
* Updated instructions.
* Use separate variables to store the total and the average (instead of just `avg`).
* Additional hint about how to initialize the shape of `avg` vector.
* sentences_to_indices
* Updated preceding text and instructions, added additional hints.
* pretrained_embedding_layer
* Additional instructions to explain how to implement each step.
* Emoify_V2
* Modifies instructions to specify which parameters are needed for each Keras layer.
* Remind users of Keras syntax.
* Explanation of how to use the layer object that is returned by `pretrained_embedding_layer`.
* Provides sample Keras code.
* Spelling, grammar and wording corrections.
Let's get started! Run the following cell to load the package you are going to use.
```
import numpy as np
from emo_utils import *
import emoji
import matplotlib.pyplot as plt
%matplotlib inline
```
## 1 - Baseline model: Emojifier-V1
### 1.1 - Dataset EMOJISET
Let's start by building a simple baseline classifier.
You have a tiny dataset (X, Y) where:
- X contains 127 sentences (strings).
- Y contains an integer label between 0 and 4 corresponding to an emoji for each sentence.
<img src="images/data_set.png" style="width:700px;height:300px;">
<caption><center> **Figure 1**: EMOJISET - a classification problem with 5 classes. A few examples of sentences are given here. </center></caption>
Let's load the dataset using the code below. We split the dataset between training (127 examples) and testing (56 examples).
```
X_train, Y_train = read_csv('data/train_emoji.csv')
X_test, Y_test = read_csv('data/tesss.csv')
maxLen = len(max(X_train, key=len).split())
```
Run the following cell to print sentences from X_train and corresponding labels from Y_train.
* Change `idx` to see different examples.
* Note that due to the font used by iPython notebook, the heart emoji may be colored black rather than red.
```
for idx in range(10):
print(X_train[idx], label_to_emoji(Y_train[idx]))
```
### 1.2 - Overview of the Emojifier-V1
In this part, you are going to implement a baseline model called "Emojifier-v1".
<center>
<img src="images/image_1.png" style="width:900px;height:300px;">
<caption><center> **Figure 2**: Baseline model (Emojifier-V1).</center></caption>
</center>
#### Inputs and outputs
* The input of the model is a string corresponding to a sentence (e.g. "I love you").
* The output will be a probability vector of shape (1,5), (there are 5 emojis to choose from).
* The (1,5) probability vector is passed to an argmax layer, which extracts the index of the emoji with the highest probability.
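As a sanity check on this description, the whole Emojifier-V1 forward pass can be sketched in a few lines of NumPy. This is only an illustrative sketch: `predict_one_sketch` is a hypothetical name, and `sentence_to_avg` is the function you will implement later in this notebook.

```Python
import numpy as np

def predict_one_sketch(sentence, W, b, sentence_to_avg, word_to_vec_map):
    # average word vectors -> linear layer -> softmax -> argmax
    avg = sentence_to_avg(sentence, word_to_vec_map)
    z = W @ avg + b           # shape (n_y,)
    e = np.exp(z - z.max())   # numerically stable softmax
    probs = e / e.sum()       # probability vector over the emoji classes
    return int(np.argmax(probs))  # index of the most likely emoji
```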
#### One-hot encoding
* To get our labels into a format suitable for training a softmax classifier, let's convert $Y$ from its current shape $(m, 1)$ into a "one-hot representation" $(m, 5)$.
* Each row is a one-hot vector giving the label of one example.
* Here, `Y_oh` stands for "Y-one-hot" in the variable names `Y_oh_train` and `Y_oh_test`:
```
Y_oh_train = convert_to_one_hot(Y_train, C = 5)
Y_oh_test = convert_to_one_hot(Y_test, C = 5)
```
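`convert_to_one_hot` is provided in `emo_utils`; a minimal NumPy equivalent (an illustrative sketch, not necessarily the provided implementation) looks like this:

```Python
import numpy as np

def convert_to_one_hot_sketch(Y, C):
    # Each label index selects the corresponding row of the C x C identity matrix.
    return np.eye(C)[Y.reshape(-1)]
```

For example, label 3 with C = 5 becomes the row `[0., 0., 0., 1., 0.]`.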
Let's see what `convert_to_one_hot()` did. Feel free to change `index` to print out different values.
```
idx = 50
print(f"Sentence '{X_train[idx]}' has label index {Y_train[idx]}, which is emoji {label_to_emoji(Y_train[idx])}")
print(f"Label index {Y_train[idx]} in one-hot encoding format is {Y_oh_train[idx]}")
```
All the data is now ready to be fed into the Emojify-V1 model. Let's implement the model!
### 1.3 - Implementing Emojifier-V1
As shown in Figure 2 (above), the first step is to:
* Convert each word in the input sentence into their word vector representations.
* Then take an average of the word vectors.
* Similar to the previous exercise, we will use pre-trained 50-dimensional GloVe embeddings.
Run the following cell to load the `word_to_vec_map`, which contains all the vector representations.
```
word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('../../readonly/glove.6B.50d.txt')
```
You've loaded:
- `word_to_index`: dictionary mapping from words to their indices in the vocabulary
- (400,001 words, with the valid indices ranging from 0 to 400,000)
- `index_to_word`: dictionary mapping from indices to their corresponding words in the vocabulary
- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.
Run the following cell to check if it works.
```
word = "cucumber"
idx = 289846
print("the index of", word, "in the vocabulary is", word_to_index[word])
print("the", str(idx) + "th word in the vocabulary is", index_to_word[idx])
```
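In case you're curious how these dictionaries get built, here is a rough sketch of what a loader like `read_glove_vecs` might do (the provided helper may differ in file handling and index ordering):

```Python
import numpy as np

def read_glove_vecs_sketch(path):
    # Each line of a GloVe file is: word v_1 v_2 ... v_50
    word_to_vec_map = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split()
            word_to_vec_map[parts[0]] = np.array(parts[1:], dtype=np.float64)
    words = sorted(word_to_vec_map)
    word_to_index = {w: i for i, w in enumerate(words)}
    index_to_word = {i: w for i, w in enumerate(words)}
    return word_to_index, index_to_word, word_to_vec_map
```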
**Exercise**: Implement `sentence_to_avg()`. You will need to carry out two steps:
1. Convert every sentence to lower-case, then split the sentence into a list of words.
* `X.lower()` and `X.split()` might be useful.
2. For each word in the sentence, access its GloVe representation.
* Then take the average of all of these word vectors.
* You might use `numpy.zeros()`.
#### Additional Hints
* When creating the `avg` array of zeros, you'll want it to be a vector of the same shape as the other word vectors in the `word_to_vec_map`.
* You can choose a word that exists in the `word_to_vec_map` and access its `.shape` field.
* Be careful not to hard code the word that you access. In other words, don't assume that if you see the word 'the' in the `word_to_vec_map` within this notebook, that this word will be in the `word_to_vec_map` when the function is being called by the automatic grader.
* Hint: you can use any one of the word vectors that you retrieved from the input `sentence` to find the shape of a word vector.
```
# GRADED FUNCTION: sentence_to_avg
def sentence_to_avg(sentence, word_to_vec_map):
"""
Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word
and averages its value into a single vector encoding the meaning of the sentence.
Arguments:
sentence -- string, one training example from X
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
Returns:
avg -- average vector encoding information about the sentence, numpy-array of shape (50,)
"""
### START CODE HERE ###
# Step 1: Split sentence into list of lower case words (≈ 1 line)
words = sentence.lower().split()
# Initialize the average word vector, should have the same shape as your word vectors.
avg = np.zeros(word_to_vec_map[words[0]].shape)
# Step 2: average the word vectors. You can loop over the words in the list "words".
total = 0
for w in words:
total = total + word_to_vec_map[w]
avg = total/len(words)
### END CODE HERE ###
return avg
avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map)
print("avg = \n", avg)
```
**Expected Output**:
```Python
avg =
[-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983
-0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867
0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767
0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061
0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265
1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925
-0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333
-0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433
0.1445417 0.09808667]
```
#### Model
You now have all the pieces to finish implementing the `model()` function.
After using `sentence_to_avg()` you need to:
* Pass the average through forward propagation
* Compute the cost
* Backpropagate to update the softmax parameters
**Exercise**: Implement the `model()` function described in Figure (2).
* The equations you need to implement in the forward pass and to compute the cross-entropy cost are below:
* The variable $Y_{oh}$ ("Y one hot") is the one-hot encoding of the output labels.
$$ z^{(i)} = W . avg^{(i)} + b$$
$$ a^{(i)} = softmax(z^{(i)})$$
$$ \mathcal{L}^{(i)} = - \sum_{k = 0}^{n_y - 1} Y_{oh,k}^{(i)} * log(a^{(i)}_k)$$
**Note** It is possible to come up with a more efficient vectorized implementation. For now, let's use nested for loops to better understand the algorithm, and for easier debugging.
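Though the backward pass is provided for you in the starter code, it may help to see where it comes from: differentiating $\mathcal{L}^{(i)}$ through the softmax gives

$$ \frac{\partial \mathcal{L}^{(i)}}{\partial z^{(i)}} = a^{(i)} - Y_{oh}^{(i)}, \qquad \frac{\partial \mathcal{L}^{(i)}}{\partial W} = \left(a^{(i)} - Y_{oh}^{(i)}\right) \left(avg^{(i)}\right)^T, \qquad \frac{\partial \mathcal{L}^{(i)}}{\partial b} = a^{(i)} - Y_{oh}^{(i)}, $$

which is exactly what the `dz`, `dW`, and `db` lines in the starter code compute.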
We provided the function `softmax()`, which was imported earlier.
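For reference, a numerically stable `softmax` can be sketched as follows (the provided implementation may differ slightly):

```Python
import numpy as np

def softmax_sketch(z):
    # Subtracting the max before exponentiating avoids overflow
    # and does not change the result.
    e = np.exp(z - np.max(z))
    return e / e.sum()
```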
```
# GRADED FUNCTION: model
def model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400):
"""
Model to train word vector representations in numpy.
Arguments:
X -- input data, numpy array of sentences as strings, of shape (m, 1)
Y -- labels, numpy array of integers between 0 and 4, numpy-array of shape (m, 1)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
learning_rate -- learning_rate for the stochastic gradient descent algorithm
num_iterations -- number of iterations
Returns:
pred -- vector of predictions, numpy-array of shape (m, 1)
W -- weight matrix of the softmax layer, of shape (n_y, n_h)
b -- bias of the softmax layer, of shape (n_y,)
"""
np.random.seed(1)
# Define number of training examples
m = Y.shape[0] # number of training examples
n_y = 5 # number of classes
n_h = 50 # dimensions of the GloVe vectors
# Initialize parameters using Xavier initialization
W = np.random.randn(n_y, n_h) / np.sqrt(n_h)
b = np.zeros((n_y,))
# Convert Y to Y_onehot with n_y classes
Y_oh = convert_to_one_hot(Y, C = n_y)
# Optimization loop
for t in range(num_iterations): # Loop over the number of iterations
for i in range(m): # Loop over the training examples
### START CODE HERE ### (≈ 4 lines of code)
# Average the word vectors of the words from the i'th training example
avg = sentence_to_avg(X[i], word_to_vec_map)
# Forward propagate the avg through the softmax layer
z = np.dot(W, avg) + b
a = softmax(z)
# Compute cost using the i'th training label's one hot representation and "A" (the output of the softmax)
cost = -np.sum(np.multiply(Y_oh[i], np.log(a)))
### END CODE HERE ###
# Compute gradients
dz = a - Y_oh[i]
dW = np.dot(dz.reshape(n_y,1), avg.reshape(1, n_h))
db = dz
# Update parameters with Stochastic Gradient Descent
W = W - learning_rate * dW
b = b - learning_rate * db
if t % 100 == 0:
print("Epoch: " + str(t) + " --- cost = " + str(cost))
pred = predict(X, Y, W, b, word_to_vec_map) #predict is defined in emo_utils.py
return pred, W, b
print(X_train.shape)
print(Y_train.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(X_train[0])
print(type(X_train))
Y = np.asarray([5,0,0,5, 4, 4, 4, 6, 6, 4, 1, 1, 5, 6, 6, 3, 6, 3, 4, 4])
print(Y.shape)
X = np.asarray(['I am going to the bar tonight', 'I love you', 'miss you my dear',
'Lets go party and drinks','Congrats on the new job','Congratulations',
'I am so happy for you', 'Why are you feeling bad', 'What is wrong with you',
'You totally deserve this prize', 'Let us go play football',
'Are you down for football this afternoon', 'Work hard play harder',
'It is suprising how people can be dumb sometimes',
'I am very disappointed','It is the best day in my life',
'I think I will end up alone','My life is so boring','Good job',
'Great so awesome'])
print(X.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(type(X_train))
```
Run the next cell to train your model and learn the softmax parameters (W,b).
```
pred, W, b = model(X_train, Y_train, word_to_vec_map)
print(pred)
```
**Expected Output** (on a subset of iterations):
<table>
<tr>
<td>
**Epoch: 0**
</td>
<td>
cost = 1.95204988128
</td>
<td>
Accuracy: 0.348484848485
</td>
</tr>
<tr>
<td>
**Epoch: 100**
</td>
<td>
cost = 0.0797181872601
</td>
<td>
Accuracy: 0.931818181818
</td>
</tr>
<tr>
<td>
**Epoch: 200**
</td>
<td>
cost = 0.0445636924368
</td>
<td>
Accuracy: 0.954545454545
</td>
</tr>
<tr>
<td>
**Epoch: 300**
</td>
<td>
cost = 0.0343226737879
</td>
<td>
Accuracy: 0.969696969697
</td>
</tr>
</table>
Great! Your model has pretty high accuracy on the training set. Let's now see how it does on the test set.
### 1.4 - Examining test set performance
* Note that the `predict` function used here is defined in emo_utils.py.
```
print("Training set:")
pred_train = predict(X_train, Y_train, W, b, word_to_vec_map)
print('Test set:')
pred_test = predict(X_test, Y_test, W, b, word_to_vec_map)
```
**Expected Output**:
<table>
<tr>
<td>
**Train set accuracy**
</td>
<td>
97.7
</td>
</tr>
<tr>
<td>
**Test set accuracy**
</td>
<td>
85.7
</td>
</tr>
</table>
* Random guessing would have had 20% accuracy given that there are 5 classes. (1/5 = 20%).
* This is pretty good performance after training on only 127 examples.
#### The model matches emojis to relevant words
In the training set, the algorithm saw the sentence
>"*I love you*"
with the label ❤️.
* You can check that the word "adore" does not appear in the training set.
* Nonetheless, let's see what happens if you write "*I adore you*."
```
X_my_sentences = np.array(["i adore you", "i love you", "funny lol", "lets play with a ball", "food is ready", "not feeling happy"])
Y_my_labels = np.array([[0], [0], [2], [1], [4],[3]])
pred = predict(X_my_sentences, Y_my_labels , W, b, word_to_vec_map)
print_predictions(X_my_sentences, pred)
```
Amazing!
* Because *adore* has a similar embedding as *love*, the algorithm has generalized correctly even to a word it has never seen before.
* Words such as *heart*, *dear*, *beloved* or *adore* have embedding vectors similar to *love*.
* Feel free to modify the inputs above and try out a variety of input sentences.
* How well does it work?
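You can make "similar embedding" precise with cosine similarity, as in the previous assignment. A minimal sketch is below; you could then compare, say, `word_to_vec_map["adore"]` and `word_to_vec_map["love"]`.

```Python
import numpy as np

def cosine_similarity(u, v):
    # 1.0 for vectors pointing in the same direction, 0.0 for orthogonal vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
```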
#### Word ordering isn't considered in this model
* Note that the model doesn't get the following sentence correct:
>"not feeling happy"
* This algorithm ignores word ordering, so is not good at understanding phrases like "not happy."
#### Confusion matrix
* Printing the confusion matrix can also help understand which classes are more difficult for your model.
* A confusion matrix shows how often an example whose label is one class ("actual" class) is mislabeled by the algorithm with a different class ("predicted" class).
```
print(Y_test.shape)
print(' '+ label_to_emoji(0)+ ' ' + label_to_emoji(1) + ' ' + label_to_emoji(2)+ ' ' + label_to_emoji(3)+' ' + label_to_emoji(4))
print(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True))
plot_confusion_matrix(Y_test, pred_test)
```
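Under the hood, a confusion matrix is just a table of counts; a minimal NumPy sketch (not the provided `plot_confusion_matrix` helper) is:

```Python
import numpy as np

def confusion_matrix_sketch(y_true, y_pred, num_classes):
    # cm[a, p] counts examples whose actual class is a and predicted class is p.
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for a, p in zip(y_true, y_pred):
        cm[int(a), int(p)] += 1
    return cm
```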
## What you should remember from this section
- Even with only 127 training examples, you can get a reasonably good model for emojifying.
- This is due to the generalization power that word vectors give you.
- Emojify-V1 will perform poorly on sentences such as *"This movie is not good and not enjoyable"*
- It doesn't understand combinations of words.
- It just averages all the words' embedding vectors together, without considering the ordering of words.
**You will build a better algorithm in the next section!**
## 2 - Emojifier-V2: Using LSTMs in Keras
Let's build an LSTM model that takes word **sequences** as input!
* This model will be able to account for the word ordering.
* Emojifier-V2 will continue to use pre-trained word embeddings to represent words.
* We will feed word embeddings into an LSTM.
* The LSTM will learn to predict the most appropriate emoji.
Run the following cell to load the Keras packages.
```
import numpy as np
np.random.seed(0)
from keras.models import Model
from keras.layers import Dense, Input, Dropout, LSTM, Activation
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.initializers import glorot_uniform
np.random.seed(1)
```
### 2.1 - Overview of the model
Here is the Emojifier-v2 you will implement:
<img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br>
<caption><center> **Figure 3**: Emojifier-V2. A 2-layer LSTM sequence classifier. </center></caption>
### 2.2 Keras and mini-batching
* In this exercise, we want to train Keras using mini-batches.
* However, most deep learning frameworks require that all sequences in the same mini-batch have the **same length**.
* This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it's just not possible to do them both at the same time.
#### Padding handles sequences of varying length
* The common solution to handling sequences of **different length** is to use padding. Specifically:
* Set a maximum sequence length
* Pad all sequences to have the same length.
##### Example of padding
* Given a maximum sequence length of 20, we could pad every sentence with "0"s so that each input sentence is of length 20.
* Thus, the sentence "I love you" would be represented as $(e_{I}, e_{love}, e_{you}, \vec{0}, \vec{0}, \ldots, \vec{0})$.
* In this example, any sentences longer than 20 words would have to be truncated.
* One way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set.
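As a concrete illustration of the padding steps above (essentially what you will do inside `sentences_to_indices` below, shown here for lists of word indices):

```Python
import numpy as np

def pad_indices_sketch(index_lists, max_len):
    # Zero-pad (and, if necessary, truncate) each list of word indices to max_len.
    padded = np.zeros((len(index_lists), max_len), dtype=int)
    for i, idx_list in enumerate(index_lists):
        for j, idx in enumerate(idx_list[:max_len]):
            padded[i, j] = idx
    return padded
```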
### 2.3 - The Embedding layer
* In Keras, the embedding matrix is represented as a "layer".
* The embedding matrix maps word indices to embedding vectors.
* The word indices are positive integers.
* The embedding vectors are dense vectors of fixed size.
* When we say a vector is "dense", in this context, it means that most of the values are non-zero. As a counter-example, a one-hot encoded vector is not "dense."
* The embedding matrix can be derived in two ways:
* Training a model to derive the embeddings from scratch.
* Using a pretrained embedding
#### Using and updating pre-trained embeddings
* In this part, you will learn how to create an [Embedding()](https://keras.io/layers/embeddings/) layer in Keras
* You will initialize the Embedding layer with the GloVe 50-dimensional vectors.
* In the code below, we'll show you how Keras allows you to either train or leave fixed this layer.
* Because our training set is quite small, we will leave the GloVe embeddings fixed instead of updating them.
#### Inputs and outputs to the embedding layer
* The `Embedding()` layer's input is an integer matrix of size **(batch size, max input length)**.
* This input corresponds to sentences converted into lists of indices (integers).
* The largest integer (the highest word index) in the input should be no larger than the vocabulary size.
* The embedding layer outputs an array of shape (batch size, max input length, dimension of word vectors).
* The figure shows the propagation of two example sentences through the embedding layer.
* Both examples have been zero-padded to a length of `max_len=5`.
* The word embeddings are 50 units in length.
* The final dimension of the representation is `(2,max_len,50)`.
<img src="images/embedding1.png" style="width:700px;height:250px;">
<caption><center> **Figure 4**: Embedding layer</center></caption>
#### Prepare the input sentences
**Exercise**:
* Implement `sentences_to_indices`, which processes an array of sentences (X) and returns inputs to the embedding layer:
* Convert each training sentence into a list of indices (the indices correspond to each word in the sentence)
* Zero-pad all these lists so that their length is the length of the longest sentence.
##### Additional Hints
* Note that you may have considered using the `enumerate()` function in the for loop, but for the purposes of passing the autograder, please follow the starter code by initializing and incrementing `j` explicitly.
```
for idx, val in enumerate(["I", "like", "learning"]):
print(idx,val)
# GRADED FUNCTION: sentences_to_indices
def sentences_to_indices(X, word_to_index, max_len):
"""
Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.
The output shape should be such that it can be given to `Embedding()` (described in Figure 4).
Arguments:
X -- array of sentences (strings), of shape (m, 1)
word_to_index -- a dictionary mapping each word to its index
max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this.
Returns:
X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)
"""
m = X.shape[0] # number of training examples
### START CODE HERE ###
# Initialize X_indices as a numpy matrix of zeros and the correct shape (≈ 1 line)
X_indices = np.zeros((m, max_len))
for i in range(m): # loop over training examples
# Convert the ith training sentence to lower case and split it into words. You should get a list of words.
sentence_words = X[i].lower().split()
# Initialize j to 0
j = 0
# Loop over the words of sentence_words
for w in sentence_words:
# Set the (i,j)th entry of X_indices to the index of the correct word.
X_indices[i, j] = word_to_index[w]
# Increment j to j + 1
j = j+1
### END CODE HERE ###
return X_indices
```
Run the following cell to check what `sentences_to_indices()` does, and check your results.
```
X1 = np.array(["funny lol", "lets play baseball", "food is ready for you"])
X1_indices = sentences_to_indices(X1,word_to_index, max_len = 5)
print("X1 =", X1)
print("X1_indices =\n", X1_indices)
```
**Expected Output**:
```Python
X1 = ['funny lol' 'lets play baseball' 'food is ready for you']
X1_indices =
[[ 155345. 225122. 0. 0. 0.]
[ 220930. 286375. 69714. 0. 0.]
[ 151204. 192973. 302254. 151349. 394475.]]
```
#### Build embedding layer
* Let's build the `Embedding()` layer in Keras, using pre-trained word vectors.
* The embedding layer takes as input a list of word indices.
* `sentences_to_indices()` creates these word indices.
* The embedding layer will return the word embeddings for a sentence.
**Exercise**: Implement `pretrained_embedding_layer()` with these steps:
1. Initialize the embedding matrix as a numpy array of zeros.
* The embedding matrix has a row for each unique word in the vocabulary.
* There is one additional row to handle "unknown" words.
* So vocab_len is the number of unique words plus one.
* Each row will store the vector representation of one word.
* For example, one row may be 50 positions long if using GloVe word vectors.
* In the code below, `emb_dim` represents the length of a word embedding.
2. Fill in each row of the embedding matrix with the vector representation of a word
* Each word in `word_to_index` is a string.
* word_to_vec_map is a dictionary where the keys are strings and the values are the word vectors.
3. Define the Keras embedding layer.
* Use [Embedding()](https://keras.io/layers/embeddings/).
* The input dimension is equal to the vocabulary length (number of unique words plus one).
* The output dimension is equal to the number of positions in a word embedding.
* Make this layer's embeddings fixed.
* If you were to set `trainable = True`, then it will allow the optimization algorithm to modify the values of the word embeddings.
* In this case, we don't want the model to modify the word embeddings.
4. Set the embedding weights to be equal to the embedding matrix.
* Note that this part of the code is already completed for you and does not need to be modified.
```
# GRADED FUNCTION: pretrained_embedding_layer
def pretrained_embedding_layer(word_to_vec_map, word_to_index):
"""
Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors.
Arguments:
word_to_vec_map -- dictionary mapping words to their GloVe vector representation.
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
embedding_layer -- pretrained layer Keras instance
"""
vocab_len = len(word_to_index) + 1 # adding 1 to fit Keras embedding (requirement)
emb_dim = word_to_vec_map["cucumber"].shape[0] # define dimensionality of your GloVe word vectors (= 50)
### START CODE HERE ###
# Step 1
# Initialize the embedding matrix as a numpy array of zeros.
# See instructions above to choose the correct shape.
emb_matrix = np.zeros((vocab_len, emb_dim))
# Step 2
# Set each row "idx" of the embedding matrix to be
# the word vector representation of the idx'th word of the vocabulary
for word, idx in word_to_index.items():
emb_matrix[idx, :] = word_to_vec_map[word]
# Step 3
# Define Keras embedding layer with the correct input and output sizes
# Make it non-trainable.
embedding_layer = Embedding(vocab_len, emb_dim, trainable=False)
### END CODE HERE ###
# Step 4 (already done for you; please do not modify)
# Build the embedding layer, it is required before setting the weights of the embedding layer.
embedding_layer.build((None,)) # Do not modify the "None". This line of code is complete as-is.
# Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained.
embedding_layer.set_weights([emb_matrix])
return embedding_layer
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
print("weights[0][1][3] =", embedding_layer.get_weights()[0][1][3])
```
**Expected Output**:
```Python
weights[0][1][3] = -0.3403
```
### 2.4 - Building the Emojifier-V2
Let's now build the Emojifier-V2 model.
* You feed the embedding layer's output to an LSTM network.
<img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br>
<caption><center> **Figure 3**: Emojifier-v2. A 2-layer LSTM sequence classifier. </center></caption>
**Exercise:** Implement `Emojify_V2()`, which builds a Keras graph of the architecture shown in Figure 3.
* The model takes as input an array of sentences of shape (`m`, `max_len`, ) defined by `input_shape`.
* The model outputs a softmax probability vector of shape (`m`, `C = 5`).
* You may need to use the following Keras layers:
* [Input()](https://keras.io/layers/core/#input)
* Set the `shape` and `dtype` parameters.
* The inputs are integers, so you can specify the data type as a string, 'int32'.
* [LSTM()](https://keras.io/layers/recurrent/#lstm)
* Set the `units` and `return_sequences` parameters.
* [Dropout()](https://keras.io/layers/core/#dropout)
* Set the `rate` parameter.
* [Dense()](https://keras.io/layers/core/#dense)
* Set the `units` parameter.
* Note that `Dense()` has an `activation` parameter. For the purposes of passing the autograder, please do not set the activation within `Dense()`. Use the separate `Activation` layer to do so.
* [Activation()](https://keras.io/activations/).
* You can pass in the activation of your choice as a lowercase string.
* [Model](https://keras.io/models/model/)
* Set the `inputs` and `outputs` parameters.
#### Additional Hints
* Remember that these Keras layers return an object, and you will feed in the outputs of the previous layer as the input arguments to that object. The returned object can be created and called in the same line.
```Python
# How to use Keras layers in two lines of code
dense_object = Dense(units = ...)
X = dense_object(inputs)
# How to use Keras layers in one line of code
X = Dense(units = ...)(inputs)
```
* The `embedding_layer` that is returned by `pretrained_embedding_layer` is a layer object that can be called as a function, passing in a single argument (sentence indices).
* Here is some sample code in case you're stuck
```Python
raw_inputs = Input(shape=(maxLen,), dtype='int32')
preprocessed_inputs = ... # some pre-processing
X = LSTM(units = ..., return_sequences= ...)(preprocessed_inputs)
X = Dropout(rate = ..., )(X)
...
X = Dense(units = ...)(X)
X = Activation(...)(X)
model = Model(inputs=..., outputs=...)
...
```
```
# GRADED FUNCTION: Emojify_V2
def Emojify_V2(input_shape, word_to_vec_map, word_to_index):
"""
Function creating the Emojify-v2 model's graph.
Arguments:
input_shape -- shape of the input, usually (max_len,)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
model -- a model instance in Keras
"""
### START CODE HERE ###
# Define sentence_indices as the input of the graph, it should be of shape input_shape and dtype 'int32' (as it contains indices).
sentence_indices = Input(input_shape, dtype='int32')
# Create the embedding layer pretrained with GloVe Vectors (≈1 line)
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
# Propagate sentence_indices through your embedding layer, you get back the embeddings
embeddings = embedding_layer(sentence_indices)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a batch of sequences.
X = LSTM(128, return_sequences=True)(embeddings)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X through another LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a single hidden state, not a batch of sequences.
X = LSTM(128, return_sequences=False)(X)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X through a Dense layer with softmax activation to get back a batch of 5-dimensional vectors.
X = Dense(5)(X)
# Add a softmax activation
X = Activation('softmax')(X)
# Create Model instance which converts sentence_indices into X.
model = Model(inputs=sentence_indices, outputs=X)
### END CODE HERE ###
return model
```
Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words long, we chose `max_len = 10`. You should see that the architecture uses 20,223,927 parameters, of which 20,000,050 (the word embeddings) are non-trainable and the remaining 223,877 are trainable. Because the vocabulary has 400,001 words (with valid indices from 0 to 400,000), there are 400,001\*50 = 20,000,050 non-trainable parameters.
```
model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index)
model.summary()
```
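The parameter counts quoted above can be checked by hand with the standard Keras formulas (an LSTM has 4 gates, each with a kernel, a recurrent kernel, and a bias; a Dense layer has a kernel and a bias):

```Python
emb = 400001 * 50                      # non-trainable embedding weights
lstm1 = 4 * ((50 + 128) * 128 + 128)   # first LSTM: input dim 50, 128 units
lstm2 = 4 * ((128 + 128) * 128 + 128)  # second LSTM: input dim 128, 128 units
dense = 128 * 5 + 5                    # softmax layer: 5 classes
print(lstm1 + lstm2 + dense)           # 223877 trainable parameters
print(emb + lstm1 + lstm2 + dense)     # 20223927 parameters in total
```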
As usual, after creating your model in Keras, you need to compile it and define the loss, optimizer and metrics you want to use. Compile your model using the `categorical_crossentropy` loss, the `adam` optimizer and `['accuracy']` metrics:
```
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```
It's time to train your model. Your Emojifier-V2 `model` takes as input an array of shape (`m`, `max_len`) and outputs probability vectors of shape (`m`, `number of classes`). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors).
```
X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen)
Y_train_oh = convert_to_one_hot(Y_train, C = 5)
```
Fit the Keras model on `X_train_indices` and `Y_train_oh`. We will use `epochs = 50` and `batch_size = 32`.
```
model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True)
```
Your model should achieve around **90% to 100% accuracy** on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set.
```
X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen)
Y_test_oh = convert_to_one_hot(Y_test, C = 5)
loss, acc = model.evaluate(X_test_indices, Y_test_oh)
print()
print("Test accuracy = ", acc)
```
You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples.
```
# This code allows you to see the mislabelled examples
C = 5
y_test_oh = np.eye(C)[Y_test.reshape(-1)]
X_test_indices = sentences_to_indices(X_test, word_to_index, maxLen)
pred = model.predict(X_test_indices)
for i in range(len(X_test)):
x = X_test_indices
num = np.argmax(pred[i])
if(num != Y_test[i]):
print('Expected emoji:'+ label_to_emoji(Y_test[i]) + ' prediction: '+ X_test[i] + label_to_emoji(num).strip())
```
Now you can try it on your own example. Write your own sentence below.
```
# Change the sentence below to see your prediction. Make sure all the words are in the Glove embeddings.
x_test = np.array(['not feeling happy'])
X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)
print(x_test[0] +' '+ label_to_emoji(np.argmax(model.predict(X_test_indices))))
```
## LSTM version accounts for word order
* Previously, the Emojify-V1 model did not correctly label "not feeling happy," but our implementation of Emojify-V2 got it right.
* (Keras' outputs are slightly random each time, so you may not have obtained the same result.)
* The current model still isn't very robust at understanding negation (such as "not happy")
* This is because the training set is small and doesn't have a lot of examples of negation.
* But if the training set were larger, the LSTM model would be much better than the Emojify-V1 model at understanding such complex sentences.
### Congratulations!
You have completed this notebook! ❤️❤️❤️
## What you should remember
- If you have an NLP task where the training set is small, using word embeddings can help your algorithm significantly.
- Word embeddings allow your model to work on words in the test set that may not even appear in the training set.
- Training sequence models in Keras (and in most other deep learning frameworks) requires a few important details:
- To use mini-batches, the sequences need to be **padded** so that all the examples in a mini-batch have the **same length**.
- An `Embedding()` layer can be initialized with pretrained values.
- These values can be either fixed or trained further on your dataset.
- If, however, your labeled dataset is small, it's usually not worth trying to further train a large pre-trained set of embeddings.
- `LSTM()` has a flag called `return_sequences` to decide if you would like to return every hidden state or only the last one.
- You can use `Dropout()` right after `LSTM()` to regularize your network.
#### Input sentences:
```Python
"Congratulations on finishing this assignment and building an Emojifier."
"We hope you're happy with what you've accomplished in this notebook!"
```
#### Output emojis:
# 😀😀😀😀😀😀
## Acknowledgments
Thanks to Alison Darcy and the Woebot team for their advice on the creation of this assignment.
* Woebot is a chatbot friend that is ready to speak with you 24/7.
* Part of Woebot's technology uses word embeddings to understand the emotions of what you say.
* You can chat with Woebot by going to http://woebot.io
<img src="images/woebot.png" style="width:600px;height:300px;">
# Bayesian Estimation of Orbital Scaling Parameters
## Introduction
These notes briefly outline a Bayesian approach to estimating the statistical distribution of the orbital scaling (or $\lambda$) parameters from NIST data and their associated experimental error bars. The atomic structure calculation can be viewed as a mapping $f$ from the true orbital scaling parameter $\lambda \in \mathbb{R}^m$ to a set of observable quantities of interest, $f(\lambda) \in \mathbb{R}^n$. The output of this mapping is measured experimentally, albeit with some error.
The simplest "observation model" is to assume that a _random_ error $E = [E_1,...,E_n]\in \mathbb{R}^n$ is added to the output $f(\lambda)$, yielding a _random_ observation
\begin{equation}
Y = f(\lambda) + E. \label{eq:observation_model}
\end{equation}
Other observation models are also possible.
Now, suppose we don't know the true value of $\lambda$, but that we have a vector $\vec y_{\text{nist}} = [y_1, y_2, ..., y_n]^T$ of actual NIST measurements of the energies and A-values. Due to the random measurement noise $E$, we cannot be certain that the observed values are the actual outputs $f(\lambda)$. To reflect our uncertainty, we estimate the true $\lambda$ by a random quantity $\Lambda$ whose distribution is consistent with the observations $y_{\text{nist}}$ and the statistical distribution of the error $E$. More specifically, we seek to estimate the conditional density function $\pi_{\Lambda|y_{\text{nist}}}$ given the NIST observations. Bayes formula allows us to express this density in terms of the likelihood $\pi_{Y|\Lambda}$ and a prior density $\pi_\Lambda$:
\begin{equation}\label{eq:bayes_formula}
\pi_{\Lambda|Y}(\lambda|y_{\text{nist}}) = \frac{\pi_\Lambda(\lambda)\pi_{Y|\Lambda}(y_{\text{nist}}|\lambda)}{\pi_{Y}(y_{\text{nist}})},
\end{equation}
provided $\pi_{Y}(y_{\text{nist}})\neq 0$.
### The Likelihood Function
If we have a density function for the measurement noise $\pi_E(e)$, then our given observation model \eqref{eq:observation_model} makes it easy to determine the likelihood. Indeed,
$$
\pi_{Y|\Lambda}(y_{\text{nist}}|\lambda)= \mathbb{P}(Y= y_{\text{nist}}|\Lambda=\lambda) = \mathbb{P}(E = y_{\text{nist}}-f(\lambda)) = \pi_E(y_{\text{nist}}-f(\lambda))
$$
#### The Noise Density
We now turn to estimating $\pi_E$. To this end we make the following simplifying assumption - it can be relaxed in principle.
> _Assumption:_ The measurement errors of the various energies and A-values are statistically independent.
The above assumption allows us to write the joint density function $\pi_{E}$ of the absolute errors $E = [E_1,...,E_n]$ as the product of univariate densities, i.e.
$$
\pi_{E}(e) = \prod_{i=1}^n \pi_{E_i}(e_i), \ \text{for } e = [e_1,...,e_n] \in \mathbb{R}^n.
$$
Let $\vec \eta = [\eta_1, ..., \eta_n]$ be the relative errors for each quantity of interest, reported in the NIST database. We use these to specify the statistical distribution of the absolute errors.
##### Uniform Errors
If we assume that the NIST errors are uniformly distributed, we can use $\eta_i$ to find the range of the error. Since the error is centered at zero with half-width $\eta_i y_i$, let $a_i = -\eta_i y_i$ and $b_i = \eta_i y_i$, so that
$$
\pi_{E_i}(e_i) = \left\{
\begin{array}{ll}
\frac{1}{b_i-a_i}, & \text{if } a_i \leq e_i \leq b_i \\
0, &\text{otherwise}\end{array} \right. .
$$
The joint distribution is therefore
$$
\pi_{E}(e) = \prod_{i=1}^n \pi_{E_i}(e_i) = \left\{
\begin{array}{ll}
\prod_{i=1}^n \frac{1}{b_i-a_i}, & \text{if } a_i \leq e_i \leq b_i \text{ for } i=1,...,n \\
0, &\text{otherwise}\end{array} \right. .
$$
and its logarithm is
$$
\ln \pi_E(e) = \left\{ \begin{array}{ll} -\sum_{i=1}^n \ln(b_i-a_i), & \text{ if } a_i \leq e_i \leq b_i \text{ for } i=1,...,n \\
-\infty, &\text{otherwise}\end{array} \right. .
$$
##### Gaussian Errors
We can also assume that the NIST errors are Gaussian, in which case $\eta_i$ can be used to determine the standard deviations. Specifically, if we assume that the error range $[y_i-\eta_i y_i, y_i + \eta_i y_i]$ represents a 99.7\% confidence interval (corresponding to 3 standard deviations $\sigma_i$ on either side of the mean), we can compute $\sigma_i=\frac{1}{3}\eta_iy_i$. The densities are then given by
$$
\pi_{E_i}(e_i) = \frac{1}{\sqrt{2\pi}\sigma_i} \exp\left(-\frac{e_i^2}{2\sigma_i^2}\right).
$$
The joint distribution is
$$
\pi_E(e) = \prod_{i=1}^n \pi_{E_i}(e_i) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi}\sigma_i} \exp\left(-\frac{e_i^2}{2\sigma_i^2}\right) = (2\pi)^{-n/2} \left(\prod_{i=1}^n \frac{1}{\sigma_i}\right) \exp\left(-\frac{1}{2}\sum_{i=1}^n \frac{e_i^2}{\sigma_i^2}\right)
$$
and its logarithm is
$$
\ln \pi_E(e) = -\frac{n}{2}\ln(2\pi) - \sum_{i=1}^n \ln(\sigma_i) - \frac{1}{2}\sum_{i=1}^n \left(\frac{e_i}{\sigma_i}\right)^2
$$
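The Gaussian log-density above is straightforward to code directly. The following sketch (the function name is illustrative) evaluates it with NumPy and could later serve as the core of a likelihood function:

```python
import numpy as np

def ln_gaussian_noise_density(e, sgm):
    """Log of the independent-Gaussian noise density:
    -n/2*ln(2*pi) - sum_i ln(sigma_i) - 1/2 * sum_i (e_i/sigma_i)^2."""
    e, sgm = np.asarray(e, dtype=float), np.asarray(sgm, dtype=float)
    n = e.size
    return (-0.5 * n * np.log(2 * np.pi)
            - np.sum(np.log(sgm))
            - 0.5 * np.sum((e / sgm) ** 2))

# Sanity check: at e = 0 with sigma = 2, this is ln(1/(sqrt(2*pi)*2))
val = ln_gaussian_noise_density([0.0], [2.0])
print(val)
```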
### The emcee package
Once we have the prior density and the likelihood function, we can use them to generate samples from the posterior density. This is usually achieved by means of Markov chain Monte Carlo (MCMC), which generates a Markov chain whose stationary (long-run) distribution equals the desired posterior. The software library [emcee](http://dfm.io/emcee/current/) implements one form of the MCMC algorithm.
We first import the library:
```
import emcee
import numpy as np
```
To test this package, we use a very simple example. Let $\lambda = [\lambda_1, \lambda_2]$ be two input parameters and consider the mapping
$$
f(\lambda) = \left[\begin{array}{c} \lambda_1^2 + \lambda_2^2 \\ \lambda_1 + 3\lambda_2 \end{array} \right].
$$
Here we define the mapping in python.
```
def f(lmd):
"""
Forward Mapping
Inputs:
lmd: (2,) numpy array [lmd1, lmd2]
Outputs:
flmd: (2,) numpy array [lmd1^2 + lmd2^2, lmd1 + 3lmd2]
"""
lmd1, lmd2 = lmd
return np.array([lmd1**2 + lmd2**2, lmd1+3*lmd2])
```
Let's say the parameter's true value is $\lambda = [1,2]^T$. This results in the output $[5, 7]^T$, which we can use as our measurement. Suppose further that the measurements $Y_1, Y_2$ of the two output quantities of interest have associated errors $E_i$, $i=1,2$, that are independent and normally distributed with standard deviations $\sigma_1 = 4$ and $\sigma_2 = 1$.
The `emcee` package requires logarithms of density functions as inputs. In our case, we can use the expression for the Gaussian errors above to show that the log-likelihood takes the form
$$
\ln \pi_{Y|\Lambda}(y_{\text{obs}}|\lambda) = -\ln(2\pi) - \ln(4) -\ln(1) - \frac{1}{2} \left(\frac{5-f_1(\lambda)}{4}\right)^2 - \frac{1}{2}\left(\frac{7-f_2(\lambda)}{1}\right)^2
$$
This is how we define the log likelihood in Python.
```
def ln_likelihood(lmd, y_obs, f):
"""
Returns the log-likelihood function for the given observations and forward mapping
Inputs:
lmd: (2,) numpy array - input parameter
y_obs: (2, ) numpy array - measurements
f: function, mapping from lambdas to output
Output: double, log(P(y_obs|lmd))
"""
f1, f2 = f(lmd)
return -np.log(8*np.pi) - 0.5*( (y_obs[0]-f1)**2/16 + (y_obs[1]-f2)**2/1 )
```
Let's specify the prior density for $\lambda$. We use a uniform prior for each component, with $\lambda_1 \sim U([0,3])$ and $\lambda_2 \sim U([1.5,2.5])$. The joint log-prior then takes the form
$$
\ln \pi_\Lambda(\lambda) = \left\{ \begin{array}{ll} -\ln(3), & \text{ if } \lambda \in [0,3]\times [1.5,2.5]\\
-\infty, & \text{otherwise} \end{array} \right.
$$
In Python it looks like this:
```
def ln_prior(lmd):
"""
Compute the log prior density function at a parameter value lmd
"""
lmd1, lmd2 = lmd
if 0<=lmd1 and lmd1<=3 and 1.5<=lmd2 and lmd2<=2.5:
return -np.log(3)
else:
return -np.infty
#
# Try it out
#
# Point inside region
lmd = np.array([0.5,2.5])
print(ln_prior(lmd))
# Point outside region
lmd = np.array([-1,2.4])
print(ln_prior(lmd))
```
Recall that the posterior density is (up to a scaling constant) the product of the prior and the likelihood, so the log-posterior is the sum of their logarithms:
```
def ln_posterior(lmd, y_obs, f):
"""
Evaluate the log-posterior density function at a given lmd
Inputs:
lmd: input variable
y_obs: observed output
f: function, forward mapping
Output:
ln_prior + ln_likelihood
"""
return ln_prior(lmd) + ln_likelihood(lmd, y_obs, f)
```
We now run the MCMC algorithm
```
# Specify the observation vector
y_obs = np.array([5,7])
# Specify the dimension of the input space and the number of starting points
n_dim, n_walkers = 2, 100
# Specify starting points for each Markov chain (in a tight ball around optimum)
pos = [np.array([1,2]) + 1e-4*np.random.randn(n_dim) for i in range(n_walkers)]
# Initialize the sampler
sampler = emcee.EnsembleSampler(n_walkers, n_dim, ln_posterior, args=(y_obs, f))
# Run the MCMC routine
sampler.run_mcmc(pos, 1000);
# The sampler.chain has shape (n_walkers, n_steps, n_dim) = (100, 1000, 2)
# Discard the first 50 steps of each walker as burn-in, then reshape into a (95'000, 2) array of samples
samples = sampler.chain[:, 50:, :].reshape((-1, n_dim))
```
We plot the results using the package ```corner.py```
```
import corner
#
# Plot samples
#
corner.corner(samples, labels=["$\lambda_1$", "$\lambda_2$"], truths=[1, 2]);
```
## O6+
Here we try to estimate the $\lambda$ parameters for O6+, in particular those corresponding to the 1s, 2s and 2p orbitals, i.e. $\lambda = [\lambda_{1s}, \lambda_{2s}, \lambda_{2p}]$. Our estimation is based on observations of various computable energies and/or A-values.
### Automating the Structure Calculations
The computation of each observable quantity for a single $\lambda$-value is achieved by means of an R-matrix structure calculation, encoded in the Perl script ```adas803.testern.pl```. Its inputs include, among others, an ```input.dat``` file in which the $\lambda$-values are stored.
To facilitate the evaluation of multiple such structure calculations corresponding to different $\lambda$-values, we have written a Python class ```LambdaPdf```, contained in the module ```scaling_parameters.py```. This class allows users to
- Run the structure calculations for a specific $\lambda$-value, extracting specified observable quantities from the ```adf04ic``` output file.
- Generate a linear interpolant, by running the structure calculations for every $\lambda$-point in a pre-specified grid, recording the resulting quantities of interest, and fitting a piecewise linear function to the data. The interpolant can be used as an approximation for the exact structure calculation, leading to much faster sampling rates.
- Ultimately, this class is designed to store the statistical distribution of the lambda parameters.
### The NIST Data
The NIST data used for this calibration consists of energies 2-7 (to be found [here](https://physics.nist.gov/cgi-bin/ASD/energy1.pl)) and A-values 2,5, and 7 (to be found [here](https://physics.nist.gov/cgi-bin/ASD/lines1.pl?spectra=O6&limits_type=0&low_w=&upp_w=&unit=1&submit=Retrieve+Data&de=0&format=0&line_out=0&en_unit=0&output=0&bibrefs=1&page_size=15&show_obs_wl=1&show_calc_wl=1&unc_out=1&order_out=0&max_low_enrg=&show_av=2&max_upp_enrg=&tsb_value=0&min_str=&A_out=0&intens_out=on&max_str=&allowed_out=1&forbid_out=1&min_accur=&min_intens=&conf_out=on&term_out=on&enrg_out=on&J_out=on)). The NIST database entries for the energy levels list no error bounds; in our computations, we assign them a nominal rating of 'AAA'.
The NIST error ratings correspond to the following relative errors
| Rating | Relative Error |
| ---| ---|
|'AAA'| 0.003|
|'AA' | 0.01 |
|'A+' | 0.02 |
|'A' | 0.03 |
|'B+' | 0.07 |
|'B' | 0.1 |
|'C+' | 0.18 |
|'C' | 0.25 |
|'D+' | 0.4 |
|'D' | 0.5 |
|'E' | 0.5 |
In the code, each quantity of interest is instantiated as a ```Qoi``` object, a class that stores the NIST value, the NIST error rating, a search label for locating it in the output file, etc.
> __Example__ Here we initialize the A-value 2.
```
#
# Import modules
#
import sys
sys.path.append('../src/')
from scaling_parameters import Qoi, LmdPdf
#
# Initialize Quantity of Interest
#
a2 = Qoi(category='A-value', tag=2,
search_label=' 2 1',
nist_value=1.04e3, nist_rating='AA')
```
> __Example:__ We initialize a new $\lambda$-object with one output quantity: ```a2```.
```
#
# Initialize lmd object
#
tags = ['1s', '2s', '2p'] # names
rng = np.array([[0.8, 1.2], [0.8, 1.2], [0.8, 1.2]]) # lower and upper bounds for each lambda
resolution = (2,2,2) # resolution of the grid in each direction
path_to_input = '/home/hans-werner/Dropbox/work/projects'+\
'/atomic_data_uncertainty/code/icft/o_6/'
output_qois = [a2]
lmd = LmdPdf(tags, rng, resolution, path_to_input, output_qois)
```
For the O6+ atom, all three $\lambda$-parameters are constrained within the interval $[0.8,1.2]$. We used a resolution of 20 sub-intervals in each direction, amounting to $20^3 = 8000$ forward calculations. We recorded this data and used it to construct a piecewise linear interpolant.
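The interpolation step can be sketched as follows, with a cheap toy function standing in for the expensive structure calculation. SciPy's `RegularGridInterpolator` is one way to fit a piecewise-linear function on such a grid; the production code may do this differently, and the coarse 4-interval grid here is purely illustrative:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Grid over [0.8, 1.2]^3 (coarse here; the production run used 20 intervals)
axes = [np.linspace(0.8, 1.2, 5) for _ in range(3)]
grid = np.meshgrid(*axes, indexing='ij')

# Toy stand-in for one quantity of interest from the structure calculation
def toy_qoi(l1s, l2s, l2p):
    return 2.0 * l1s + l2s * l2p

values = toy_qoi(*grid)
interp = RegularGridInterpolator(axes, values)

# Evaluate the interpolant at an off-grid lambda point
point = np.array([0.93, 1.07, 1.11])
print(interp(point), toy_qoi(*point))
```

In the real workflow, `values` would be filled by running the Perl script once per grid point, and one interpolant would be stored per quantity of interest.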
To prevent having to recompute the interpolant during every run, we use the ```pickle``` library, which allows us to store all sorts of objects. We load the saved $\lambda$-object as follows
```
import pickle
with open('lmd_o6.pickle', 'rb') as f:
lmd = pickle.load(f)
```
The forward mapping
```
def f_o6(point, lmd, qoi_indices):
"""
Returns the interpolated forward map
Inputs:
point: double, (3,) array of lambda values
qoi_indices: int, n-list of indices of output quantities
we want to use to estimate the lambda's
Outputs:
fi: double, (n,) array of outputs for the given input point
"""
# Compute the output for each listed output quantity
fv = [lmd.interpolants[i](point) for i in qoi_indices]
# Turn it into an array
fv = np.array(fv).ravel()
return fv
# Display the output quantity names
for i in range(9):
print(lmd.qois[i].category, lmd.qois[i].tag, lmd.qois[i].nist_value,lmd.qois[i].nist_rating)
#
# Specify the indices of the desired output
#
# All
# qoi_indices = [i for i in range(9)]
# A-values
qoi_indices = [0,1,2,3,4,5]
#
# The observation vector
#
y_obs = np.array([lmd.qois[i].nist_value for i in qoi_indices])
a2.nist_value
```
The Log likelihood
```
def ln_likelihood_gauss(point, lmd, y_obs, f_o6, sgm, qoi_indices):
    """
    Evaluate the Gaussian log-likelihood of the observations y_obs
    at the given lambda point, using the interpolated forward map
    and standard deviations sgm.
    """
fv = f_o6(point, lmd, qoi_indices)
n = len(qoi_indices)
ln_p = -0.5*n*np.log(2*np.pi) - np.sum(np.log(sgm))
for i in range(n):
ln_p -= 0.5*((y_obs[i]-fv[i])/sgm[i])**2
return ln_p
#
# Get the standard deviations
#
sgm = []
rating_table = {'AAA': 0.003,
'AA' : 0.01,
'A+' : 0.02,
'A' : 0.03,
'B+' : 0.07,
'B' : 0.1,
'C+' : 0.18,
'C' : 0.25,
'D+' : 0.4,
'D' : 0.5,
'E' : 0.5}
for i in qoi_indices:
qoi = lmd.qois[i]
rel_error = rating_table[qoi.nist_rating]
# sgm.append(2/3*rel_error*qoi.nist_value)
sgm.append(rel_error*qoi.nist_value)
sgm = np.array(sgm)
#
# Check
#
point = np.array([0.8,0.9,1.1])
ln_likelihood_gauss(point, lmd, y_obs, f_o6, sgm, qoi_indices)
def ln_prior(point):
"""
Compute the log prior density function at a parameter value lmd
"""
x, y, z = point
if 0.8<=x and x<=1.2 and 0.8<=y and y<=1.2 and 0.8<=z and z<=1.2:
return -3*np.log(0.4)
else:
return -np.infty
def ln_posterior(point, lmd, y_obs, f_o6, sgm, qoi_indices):
"""
Evaluate the log-posterior density function at a given lmd
Inputs:
lmd: input variable
y_obs: observed output
f: function, forward mapping
Output:
ln_prior + ln_likelihood
"""
lp = ln_prior(point)
if not np.isfinite(lp):
return -np.infty
else:
return lp + ln_likelihood_gauss(point, lmd, y_obs, f_o6, sgm, qoi_indices)
# Specify the dimension of the input space and the number of starting points
n_dim, n_walkers = 3, 100
# Specify starting points for each Markov chain (in a tight ball around optimum)
pos = [np.array([1,1,1]) + 1e-4*np.random.randn(n_dim) for i in range(n_walkers)]
# Initialize the sampler
sampler = emcee.EnsembleSampler(n_walkers, n_dim, ln_posterior, args=(lmd, y_obs, f_o6, sgm, qoi_indices))
# Run the MCMC routine
sampler.run_mcmc(pos, 1000);
# The sampler.chain has shape (n_walkers, n_steps, n_dim) = (100, 1000, 3)
# Discard the first 50 steps of each walker as burn-in, then reshape into a (95'000, 3) array of samples
samples = sampler.chain[:, 50:, :].reshape((-1, n_dim))
#
# Plot samples
#
corner.corner(samples, labels=["$\lambda_{1s}$", "$\lambda_{2s}$", "$\lambda_{2p}$"]);
```
##### Copyright 2021 The Cirq Developers
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<table class="tfo-notebook-buttons" align="left">
<td>
    <a target="_blank" href="https://quantumai.google/cirq/qcvv/xeb_coherent_noise"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/qcvv/xeb_coherent_noise.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/qcvv/xeb_coherent_noise.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/qcvv/xeb_coherent_noise.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
```
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
```
# XEB and Coherent Error
```
import numpy as np
import cirq
from cirq.contrib.svg import SVGCircuit
```
## Set up Random Circuits
We create a set of 10 random two-qubit `circuits` that use `SINGLE_QUBIT_GATES` for the randomized single-qubit layers and `SQRT_ISWAP` as the entangling gate. We will ultimately truncate each of these circuits according to `cycle_depths`. Please see [the XEB Theory notebook](./xeb_theory.ipynb) for more details.
```
exponents = np.linspace(0, 7/4, 8)
exponents
import itertools
SINGLE_QUBIT_GATES = [
cirq.PhasedXZGate(x_exponent=0.5, z_exponent=z, axis_phase_exponent=a)
for a, z in itertools.product(exponents, repeat=2)
]
SINGLE_QUBIT_GATES[:10], '...'
import cirq_google as cg
from cirq.experiments import random_quantum_circuit_generation as rqcg
q0, q1 = cirq.LineQubit.range(2)
# Make long circuits (which we will truncate)
n_circuits = 10
circuits = [
rqcg.random_rotations_between_two_qubit_circuit(
q0, q1,
depth=100,
two_qubit_op_factory=lambda a, b, _: cirq.SQRT_ISWAP(a, b),
single_qubit_gates=SINGLE_QUBIT_GATES)
for _ in range(n_circuits)
]
# We will truncate to these lengths
max_depth = 100
cycle_depths = np.arange(3, max_depth, 9)
cycle_depths
```
## Emulate coherent error
We request a $\sqrt{i\mathrm{SWAP}}$ gate, but the quantum hardware may execute something subtly different. Therefore, we move to a more general 5-parameter two qubit gate, `cirq.PhasedFSimGate`.
This is the general excitation-preserving two-qubit gate, and the unitary matrix of PhasedFSimGate(θ, ζ, χ, γ, φ) is:
$$
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & e^{-i\gamma - i\zeta} \cos\theta & -i\, e^{-i\gamma + i\chi} \sin\theta & 0 \\
0 & -i\, e^{-i\gamma - i\chi} \sin\theta & e^{-i\gamma + i\zeta} \cos\theta & 0 \\
0 & 0 & 0 & e^{-2i\gamma - i\varphi}
\end{bmatrix}
$$
This parametrization follows eq (18) in https://arxiv.org/abs/2010.07965. Please read the docstring for `cirq.PhasedFSimGate` for more information.
With the following code, we show how `SQRT_ISWAP` can be written as a specific `cirq.PhasedFSimGate`.
```
sqrt_iswap_as_phased_fsim = cirq.PhasedFSimGate.from_fsim_rz(
theta=-np.pi/4, phi=0,
rz_angles_before=(0,0), rz_angles_after=(0,0))
np.testing.assert_allclose(
cirq.unitary(sqrt_iswap_as_phased_fsim),
cirq.unitary(cirq.SQRT_ISWAP),
atol=1e-8
)
```
We'll also create a perturbed version. Note the $\pi/16$ `phi` angle:
```
perturbed_sqrt_iswap = cirq.PhasedFSimGate.from_fsim_rz(theta=-np.pi/4, phi=np.pi/16,
rz_angles_before=(0,0), rz_angles_after=(0,0))
np.round(cirq.unitary(perturbed_sqrt_iswap), 3)
```
We'll use this perturbed gate along with the `GateSubstitutionNoiseModel` to create a simulator with a constant coherent error: each `SQRT_ISWAP` will be replaced with our perturbed version.
```
def _sub_iswap(op):
if op.gate == cirq.SQRT_ISWAP:
return perturbed_sqrt_iswap.on(*op.qubits)
return op
noise = cirq.devices.noise_model.GateSubstitutionNoiseModel(_sub_iswap)
noisy_sim = cirq.DensityMatrixSimulator(noise=noise)
```
## Run the benchmark circuits
We use the function `sample_2q_xeb_circuits` to execute all of our circuits at the requested `cycle_depths`.
```
from cirq.experiments.xeb_sampling import sample_2q_xeb_circuits
sampled_df = sample_2q_xeb_circuits(sampler=noisy_sim, circuits=circuits,
cycle_depths=cycle_depths, repetitions=10_000)
sampled_df.head()
```
## Compute fidelity assuming `SQRT_ISWAP`
In contrast to the XEB Theory notebook, here we have added only coherent error (not depolarizing). Nevertheless, the random, scrambling nature of the circuits shows the circuit fidelity decaying with depth (at least when we assume that we were trying to use a pure `SQRT_ISWAP` gate).
```
from cirq.experiments.xeb_fitting import benchmark_2q_xeb_fidelities
fids = benchmark_2q_xeb_fidelities(sampled_df, circuits, cycle_depths)
fids.head()
%matplotlib inline
from matplotlib import pyplot as plt
xx = np.linspace(0, fids['cycle_depth'].max())
plt.plot(xx, (1-5e-3)**(4*xx), label=r'Exponential Reference')
plt.plot(fids['cycle_depth'], fids['fidelity'], 'o-', label='Perturbed fSim')
plt.ylabel('Circuit fidelity')
plt.xlabel('Cycle Depth $d$')
plt.legend(loc='best')
```
## Optimize `PhasedFSimGate` parameters
We know what circuits we requested, and in this simulated example, we know what coherent error has happened. But in a real experiment, there is likely unknown coherent error that you would like to characterize. Therefore, we make the five angles in `PhasedFSimGate` free parameters and use a classical optimizer to find which set of parameters best describes the data we collected from the noisy simulator (or device, if this was a real experiment).
```
import multiprocessing
pool = multiprocessing.get_context('spawn').Pool()
from cirq.experiments.xeb_fitting import \
parameterize_circuit, characterize_phased_fsim_parameters_with_xeb, SqrtISwapXEBOptions
options = SqrtISwapXEBOptions(
characterize_theta=True,
characterize_phi=True,
characterize_chi=False,
characterize_gamma=False,
characterize_zeta=False
)
p_circuits = [parameterize_circuit(circuit, options) for circuit in circuits]
res = characterize_phased_fsim_parameters_with_xeb(
sampled_df,
p_circuits,
cycle_depths,
options,
pool=pool,
xatol=1e-3,
fatol=1e-3)
xx = np.linspace(0, fids['cycle_depth'].max())
p_depol = 5e-3 # from above
plt.plot(xx, (1-p_depol)**(4*xx), label=r'Exponential Reference')
plt.axhline(1, color='grey', ls='--')
plt.plot(fids['cycle_depth'], fids['fidelity'], 'o-', label='Perturbed fSim')
plt.plot(res.fidelities_df['cycle_depth'], res.fidelities_df['fidelity'], 'o-', label='Refit fSim')
plt.ylabel('Circuit fidelity')
plt.xlabel('Cycle Depth')
plt.legend(loc='best')
plt.tight_layout()
```
```
import open3d as o3d
import numpy as np
import sys
# monkey patches visualization and provides helpers to load geometries
sys.path.append('..')
import open3d_tutorial as o3dtut
# change to True if you want to interact with the visualization windows
o3dtut.interactive = False
```
# File IO
This tutorial shows how basic geometries are read and written by Open3D.
## Point cloud
The code below reads and writes a point cloud.
```
print("Testing IO for point cloud ...")
pcd = o3d.io.read_point_cloud("../../TestData/fragment.pcd")
print(pcd)
o3d.io.write_point_cloud("copy_of_fragment.pcd", pcd)
```
`print()` can be used to display a summary of `pcd`.
By default, Open3D tries to infer the file type by the filename extension. Below is a list of supported point cloud file types.
Format | Description
---------|---------------
`xyz` | Each line contains `[x, y, z]`, where `x`, `y`, `z` are the 3D coordinates
`xyzn` | Each line contains `[x, y, z, nx, ny, nz]`, where `nx`, `ny`, `nz` are the normals
`xyzrgb` | Each line contains `[x, y, z, r, g, b]`, where `r`, `g`, `b` are in floats of range `[0, 1]`
`pts` | The first line is an integer representing the number of points. Each subsequent line contains `[x, y, z, i, r, g, b]`, where `r`, `g`, `b` are in `uint8`
`ply` | See [Polygon File Format](http://paulbourke.net/dataformats/ply), the ply file can contain both point cloud and mesh data
`pcd` | See [Point Cloud Data](http://pointclouds.org/documentation/tutorials/pcd_file_format.php)
It’s also possible to specify the file type explicitly. In this case, the file extension will be ignored.
```
pcd = o3d.io.read_point_cloud("../../TestData/my_points.txt", format='xyz')
```
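As a quick illustration of the simplest of these formats, the snippet below writes and re-reads a tiny `xyz` file using plain NumPy (no Open3D required), one `[x, y, z]` triple per line as described in the table above. The filename is arbitrary:

```python
import numpy as np

# Three sample points, one [x, y, z] triple per line
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.5],
                   [0.0, 2.0, 1.5]])
np.savetxt("tiny_cloud.xyz", points, fmt="%.6f")

# Read it back; o3d.io.read_point_cloud("tiny_cloud.xyz") would parse the same file
loaded = np.loadtxt("tiny_cloud.xyz")
print(loaded.shape)
```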
## Mesh
The code below reads and writes a mesh.
```
print("Testing IO for meshes ...")
mesh = o3d.io.read_triangle_mesh("../../TestData/knot.ply")
print(mesh)
o3d.io.write_triangle_mesh("copy_of_knot.ply", mesh)
```
Compared to the data structure of point cloud, mesh has triangles that define the 3D surface.
By default, Open3D tries to infer the file type by the filename extension. Below is a list of supported triangle mesh file types.
Format | Description
---------|---------------
`ply` | See [Polygon File Format](http://paulbourke.net/dataformats/ply/), the ply file can contain both point cloud and mesh data
`stl` | See [StereoLithography](http://www.fabbers.com/tech/STL_Format)
`obj` | See [Object Files](http://paulbourke.net/dataformats/obj/)
`off` | See [Object File Format](http://www.geomview.org/docs/html/OFF.html)
`gltf` | See [GL Transmission Format](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0)
## Image
The code below reads and writes an image.
```
print("Testing IO for images ...")
img = o3d.io.read_image("../../TestData/lena_color.jpg")
print(img)
o3d.io.write_image("copy_of_lena_color.jpg", img)
```
The size of the image is readily displayed using `print(img)`.
```
%matplotlib inline
```
What is PyTorch?
================
A Python-based scientific computing package targeted at two sets of audiences:
- A replacement for NumPy that can use the power of GPUs
- A deep learning research platform that provides maximum flexibility and speed
Getting Started
---------------
Tensors
^^^^^^^
Tensors are similar to NumPy's ndarrays, but in PyTorch
tensors can also be used on a GPU to accelerate computing.
```
from __future__ import print_function
import torch
```
Construct a 5x3 matrix, uninitialized:
```
x = torch.empty(5, 3)
print(x)
```
Construct a randomly initialized matrix:
```
x = torch.rand(5, 3)
print(x)
```
Construct a matrix filled with zeros, of dtype long:
```
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
```
Construct a tensor directly from existing data:
```
x = torch.tensor([5.5, 3])
print(x)
```
Create a tensor based on an existing tensor. These methods reuse properties of the input tensor, e.g. dtype, unless new values are provided to override them.
```
x = x.new_ones(5, 3, dtype=torch.double)      # new_* methods create a new tensor
print(x)
x = torch.randn_like(x, dtype=torch.float)    # override dtype!
print(x)                                      # the result has the same size; only the values and dtype changed
```
Get its size
***Translator's note: the `size` method returns the same information as NumPy's `shape` attribute; tensors also support a `shape` attribute, covered in detail later***
```
print(x.size())
```
<div class="alert alert-info"><h4>Note</h4><p>``torch.Size`` is in fact a tuple, so it supports all tuple operations.</p></div>
Operations
^^^^^^^^^^
There are multiple syntaxes for operations.
We will take a look at the addition operation.
Addition: syntax 1
```
y = torch.rand(5, 3)
print(x + y)
```
Addition: syntax 2
```
print(torch.add(x, y))
```
Addition: providing an output tensor as an argument
```
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
```
Addition: in-place
```
# adds x to y
y.add_(x)
print(y)
```
<div class="alert alert-info"><h4>Note</h4><p>Any operation that mutates a tensor in place is post-fixed with an ``_``.
For example: ``x.copy_(y)``, ``x.t_()`` will change ``x``.</p></div>
You can use standard NumPy-like indexing to operate on tensors
```
print(x[:, 1])
```
``torch.view``: resize/reshape a tensor
***Translator's note: torch.view is similar to NumPy's reshape***
```
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from the other dimensions
print(x.size(), y.size(), z.size())
```
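A NumPy counterpart of this reshape behaviour, since the translator's note compares ``view`` to NumPy's ``reshape`` (this comparison snippet is an addition, not part of the original tutorial):

```python
import numpy as np

x = np.random.randn(4, 4)
y = x.reshape(16)
z = x.reshape(-1, 8)  # the -1 dimension is inferred: 16 / 8 = 2
print(x.shape, y.shape, z.shape)
```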
If you have a one-element tensor, use ``.item()`` to get its value as a Python number
```
x = torch.randn(1)
print(x)
print(x.item())
```
**Read later:**
100+ Tensor operations, including transposing, indexing, slicing,
mathematical operations, linear algebra, random numbers, etc.,
are described
`here <https://pytorch.org/docs/torch>`_.
NumPy Bridge
------------
Converting a Torch Tensor to a NumPy array and vice versa is a breeze.
The Torch Tensor and NumPy array will share their underlying memory
locations, and changing one will change the other.
Converting a Torch Tensor to a NumPy Array
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
a = torch.ones(5)
print(a)
b = a.numpy()
print(b)
```
See how the numpy array changed in value.
```
a.add_(1)
print(a)
print(b)
```
Converting a NumPy Array to a Torch Tensor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use ``torch.from_numpy``; the conversion happens automatically
```
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
```
All Tensor types are CPU-based by default; the CharTensor type does not support
conversion to NumPy.
CUDA Tensors
------------
Tensors can be moved onto any device using the ``.to`` method
```
# use is_available() to check whether CUDA is available
# a ``torch.device`` object is used to move tensors onto a device
if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on the GPU
    x = x.to(device)                       # or simply use ``.to("cuda")`` to move the tensor to the GPU
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # ``.to`` can also change the dtype at the same time
```
# Getting started with machine learning <br> using scikit-learn
## James Bourbeau
### Big Data Madison Meetup
April 24, 2018
### GitHub repo with materials:
https://github.com/jrbourbeau/big-data-madison-ml-sklearn <br>
### Slides:
https://jrbourbeau.github.io/big-data-madison-ml-sklearn
### Contact:
E-mail: james@jamesbourbeau.com
GitHub: [jrbourbeau](https://github.com/jrbourbeau)
Twitter: [\__jrbourbeau__](https://twitter.com/__jrbourbeau__)
LinkedIn: [jrbourbeau](https://www.linkedin.com/in/jrbourbeau/)
Source code for `plotting` Python module can be found on GitHub with the rest of the materials for this talk
```
import plotting
import numpy as np
np.random.seed(2)
%matplotlib inline
```
## Supervised machine learning workflow

<p style="font-size:14px">
Image source: <a href="https://sebastianraschka.com/blog/2016/model-evaluation-selection-part3.html">Model evaluation, model selection, and algorithm selection in machine learning</a> by Sebastian Raschka
</p>
## Outline
- What is machine learning?
- Classical programming vs. machine learning
- Supervised machine learning
- scikit-learn:
- Data representation
- Estimator API
- Example algorithm: decision tree classifier
- Model validation
- Cross validation
- Validation curves
# Machine learning vs. classical programming
## Classical programming
- Devise a set of rules (an algorithm) that are used to accomplish a task
- For example, labeling e-mails as either "spam" or "not spam"
```
def spam_filter(email):
"""Function that labels an email as 'spam' or 'not spam'
"""
if 'Act now!' in email.contents:
label = 'spam'
elif 'hotmail.com' in email.sender:
label = 'spam'
elif email.contents.count('$') > 20:
label = 'spam'
else:
label = 'not spam'
return label
```
## Machine learning
- "Field of study that gives computers the ability to learn without being explicitly programmed" — Arthur Samuel (1959)
- "A machine-learning system is trained rather than explicitly programmed. It’s presented with many examples relevant to a task, and it finds statistical structure in these examples that eventually allows the system to come up with rules for automating the task." — Francois Chollet, _Deep Learning with Python_
## Supervised machine learning
- From a labeled dataset, an algorithm learns a mapping between input data and the desired output label
- Goal is to have model generalize well to future, yet unseen, data
- Supervised machine learning is further divided into two types of problems:
- Classification — Labels are discrete. E.g. determine if a picture is of a cat, dog, or person.
- Regression — Labels are continuous. E.g. predict home prices.
```
plotting.plot_classification_vs_regression()
```
# Machine learning in Python with scikit-learn
## scikit-learn
- Popular Python machine learning library
- Designed to be [well documented](http://scikit-learn.org/stable/) and approachable for non-specialists
- Built on top of NumPy and SciPy
- scikit-learn can be easily installed with `pip` or `conda`
- `pip install scikit-learn`
- `conda install scikit-learn`
## Data representation in scikit-learn
- Training dataset is described by a pair of matrices, one for the input data and one for the output
- Most commonly used data formats are a NumPy `ndarray` or a Pandas `DataFrame` / `Series`
- Each row of these matrices corresponds to one sample of the dataset
- Each column represents a quantitative piece of information that is used to describe each sample (called "features")
```
plotting.plot_data_representation()
```
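As a concrete illustration of this layout (with made-up numbers, not the iris data), a dataset of three samples with two features each would look like:

```python
import numpy as np

# Input matrix X: one row per sample, one column per feature
X = np.array([[5.1, 3.5],
              [4.9, 3.0],
              [6.2, 2.8]])
# Output vector y: one label per sample
y = np.array(['setosa', 'setosa', 'virginica'])

print(X.shape)  # (n_samples, n_features) -> (3, 2)
print(y.shape)  # (n_samples,) -> (3,)
```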
## Iris dataset
- Dataset consists of 150 samples (individual flowers) that have 4 features: sepal length, sepal width, petal length, and petal width (all in cm)
- Each sample is labeled by its species: Iris Setosa, Iris Versicolour, Iris Virginica
- Task is to develop a model that predicts iris species
- Iris dataset is freely available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/iris)

## Loading the iris dataset
```
import pandas as pd
iris = pd.read_csv('iris.csv')
iris = iris.sample(frac=1, random_state=2).reset_index(drop=True)
iris.head()
# Only include first two training features (sepal length and sepal width)
feature_columns = ['sepal_length', 'sepal_width']
X = iris[feature_columns].values
y = iris['species'].values
print(f'First 5 samples in X: \n{X[:5]}')
print(f'First 5 labels in y: \n{y[:5]}')
plotting.plot_2D_iris()
```
## Estimators in scikit-learn
- Algorithms are implemented as estimator classes in scikit-learn
- Each estimator in scikit-learn is extensively documented (e.g. the [KNeighborsClassifier documentation](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html)) with API documentation, user guides, and example usages.
```
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.ensemble import RandomForestClassifier, GradientBoostingRegressor
from sklearn.svm import SVC, SVR
from sklearn.linear_model import LinearRegression, LogisticRegression
```
- A model is an instance of one of these estimator classes
```
model = KNeighborsClassifier(n_neighbors=5)
print(model)
```
## Estimator API
<br>
```python
class Estimator(BaseClass):
    def __init__(self, **hyperparameters):
        # Setup Estimator here
        ...

    def fit(self, X, y):
        # Implement algorithm here
        return self

    def predict(self, X):
        # Get predicted target from trained model
        # Note: fit must be called before predict
        return y_pred
```
<br>
See [API design for machine learning software:
experiences from the scikit-learn project](https://arxiv.org/pdf/1309.0238.pdf) for a discussion of the API design choices for scikit-learn
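The contract above can be made concrete with a toy estimator. This is only a sketch of the fit/predict interface — a real scikit-learn estimator would also inherit from `BaseEstimator` and do input validation:

```python
import numpy as np

class MajorityClassifier:
    """Toy estimator following the fit/predict contract:
    always predicts the most common training label."""

    def fit(self, X, y):
        labels, counts = np.unique(y, return_counts=True)
        self.majority_ = labels[np.argmax(counts)]
        return self  # returning self enables clf.fit(X, y).predict(X)

    def predict(self, X):
        # One prediction per input row
        return np.array([self.majority_] * len(X))

clf = MajorityClassifier()
preds = clf.fit([[0], [1], [2]], ['a', 'a', 'b']).predict([[5], [6]])
print(preds)  # ['a' 'a']
```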
## Training a model — fit then predict
```
# Create the model
model = KNeighborsClassifier(n_neighbors=5)
# Fit the model
model.fit(X, y)
# Get model predictions
y_pred = model.predict(X)
y_pred[:10]
```
# Example algorithm: decision tree classifier
## Decision tree classifier
Idea behind the decision tree algorithm is to sequentially partition a training dataset by asking a series of questions.

<p style="font-size:14px">
Image source: Raschka, Sebastian, and Vahid Mirjalili. <a href="https://www.amazon.com/Python-Machine-Learning-scikit-learn-TensorFlow/dp/1787125939">Python Machine Learning</a>, 2nd Ed. Packt Publishing, 2017.
</p>
## Node splitting to maximize purity

## Features of decision tree classifier
- Easy to understand and interpretable model
- Requires little data preparation
- Can model non-linear relationships
- Building block for more advanced models (e.g. random forests, boosted decision trees)
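A common purity measure behind node splitting is Gini impurity. As a quick sketch of how a candidate split could be scored (the impurity formula only — not scikit-learn's internal implementation):

```python
def gini(labels):
    """Gini impurity: 1 - sum(p_k^2) over class proportions p_k."""
    n = len(labels)
    props = [labels.count(c) / n for c in set(labels)]
    return 1.0 - sum(p**2 for p in props)

print(gini(['cat', 'cat', 'cat', 'cat']))  # 0.0 -> perfectly pure node
print(gini(['cat', 'dog', 'cat', 'dog']))  # 0.5 -> maximally impure for 2 classes

# A split is scored by the weighted impurity of its child nodes
left, right = ['cat', 'cat', 'cat'], ['dog']
n = len(left) + len(right)
split_impurity = len(left)/n * gini(left) + len(right)/n * gini(right)
print(split_impurity)  # 0.0 -> this split separates the classes perfectly
```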
## Decision tree classifier in scikit-learn
```
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(max_depth=2)
clf.fit(X, y)
```
## Visualizing decision trees — tree graph
```
plotting.plot_decision_tree(clf)
```
## Visualizing decision trees — decision regions
```
plotting.plot_tree_decision_regions(clf)
```
# Model validation
## Model performance metrics
- There are many different performance metrics for classification and regression problems. Which metric you should use depends on the particular problem you are working on
- Many commonly used performance metrics are built into the `metrics` subpackage in scikit-learn
- Custom user-defined scoring functions can be created using the `sklearn.metrics.make_scorer` function
```
# Classification metrics
from sklearn.metrics import (accuracy_score, precision_score,
recall_score, f1_score, log_loss)
# Regression metrics
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
y_true = [0, 1, 1, 3, 2]
y_pred = [0, 2, 1, 3, 1]
accuracy_score(y_true, y_pred)
mean_squared_error(y_true, y_pred)
```
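Worked by hand for the toy labels above, both metrics reduce to simple averages — this reproduces what `accuracy_score` and `mean_squared_error` compute for that example:

```python
y_true = [0, 1, 1, 3, 2]
y_pred = [0, 2, 1, 3, 1]

# accuracy: fraction of exact matches -> 3 of 5 correct
acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(acc)  # 0.6

# mean squared error: average of squared differences -> (0+1+0+0+1)/5
mse = sum((t - p)**2 for t, p in zip(y_true, y_pred)) / len(y_true)
print(mse)  # 0.4
```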
## Separate training & testing sets
- A trained model will generally perform better on data that was used to train it
- Want to measure how well a model generalizes to new, unseen data
- Need to have two separate datasets. One for training models and one for evaluating model performance
- scikit-learn has a convenient `train_test_split` function that randomly splits a dataset into a testing and training set
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
random_state=2)
print(f'X.shape = {X.shape}')
print(f'X_test.shape = {X_test.shape}')
print(f'X_train.shape = {X_train.shape}')
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
print(f'training accuracy = {accuracy_score(y_train, clf.predict(X_train))}')
print(f'testing accuracy = {accuracy_score(y_test, clf.predict(X_test))}')
```
## Model selection — hyperparameter optimization
- Choose model hyperparameter values to avoid under- and over-fitting
- Under-fitting — model isn't complex enough to properly model the dataset at hand
- Over-fitting — model is too complex and begins to learn the noise in the training dataset

<p style="font-size:14px">
Image source: <a href="http://scikit-learn.org/stable/auto_examples/model_selection/plot_underfitting_overfitting.html">Underfitting vs. Overfitting</a> in scikit-learn examples
</p>
## $k$-fold cross validation diagram

<p style="font-size:14px">
Image source: Raschka, Sebastian, and Vahid Mirjalili. <a href="https://www.amazon.com/Python-Machine-Learning-scikit-learn-TensorFlow/dp/1787125939">Python Machine Learning</a>, 2nd Ed. Packt Publishing, 2017.
</p>
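The mechanics behind the diagram can be sketched without scikit-learn: split the sample indices into $k$ disjoint folds and hold each fold out for validation exactly once (a simplified sketch — it ignores shuffling, stratification, and uneven fold sizes):

```python
def kfold_indices(n_samples, k):
    """Yield (train_idx, val_idx) pairs; each fold is held out exactly once."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        val = indices[i*fold_size:(i+1)*fold_size]
        train = indices[:i*fold_size] + indices[(i+1)*fold_size:]
        yield train, val

folds = list(kfold_indices(10, 5))
print(len(folds))    # 5 folds
print(folds[0][1])   # first validation fold: [0, 1]
# every sample appears in exactly one validation fold
all_val = [i for _, val in folds for i in val]
print(sorted(all_val) == list(range(10)))  # True
```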
## Cross validation in scikit-learn
```
from sklearn.model_selection import cross_validate
clf = DecisionTreeClassifier(max_depth=2)
scores = cross_validate(clf, X_train, y_train,
scoring='accuracy', cv=10,
return_train_score=True)
print(scores.keys())
test_scores = scores['test_score']
train_scores = scores['train_score']
print(test_scores)
print(train_scores)
print('\n10-fold CV scores:')
print(f'training score = {np.mean(train_scores)} +/- {np.std(train_scores)}')
print(f'validation score = {np.mean(test_scores)} +/- {np.std(test_scores)}')
```
## Validation curves
Validation curves are a good way to diagnose if a model is under- or over-fitting
```
plotting.plot_validation_curve()
plotting.plot_max_depth_validation(clf, X_train, y_train)
```
## Hyperparameter tuning via GridSearchCV
- In practice, you'll want to optimize many different hyperparameter values simultaneously
- The `GridSearchCV` object in scikit-learn's `model_selection` subpackage can be used to scan over many different hyperparameter combinations
- Calculates cross-validated training and testing scores for each hyperparameter combination
- The combination that maximizes the testing score is deemed to be the "best estimator"
```
from sklearn.model_selection import GridSearchCV
# Instantiate a model
clf = DecisionTreeClassifier()
# Specify hyperparameter values to test
parameters = {'max_depth': range(1, 20),
'criterion': ['gini', 'entropy']}
# Run grid search
gridsearch = GridSearchCV(clf, parameters, scoring='accuracy', cv=10)
gridsearch.fit(X_train, y_train)
# Get best model
print(f'gridsearch.best_params_ = {gridsearch.best_params_}')
print(gridsearch.best_estimator_)
```
## Supervised machine learning workflow

<p style="font-size:14px">
Image source: <a href="https://sebastianraschka.com/blog/2016/model-evaluation-selection-part3.html">Model evaluation, model selection, and algorithm selection in machine learning</a> by Sebastian Raschka
</p>
## Step 1 — Separate training and testing datasets

<p style="font-size:14px">
Image source: <a href="https://sebastianraschka.com/blog/2016/model-evaluation-selection-part3.html">Model evaluation, model selection, and algorithm selection in machine learning</a> by Sebastian Raschka
</p>
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
random_state=2)
```
## Steps 2 & 3 — Optimize hyperparameters via cross validation

<p style="font-size:14px">
Image source: <a href="https://sebastianraschka.com/blog/2016/model-evaluation-selection-part3.html">Model evaluation, model selection, and algorithm selection in machine learning</a> by Sebastian Raschka
</p>
```
clf = DecisionTreeClassifier()
parameters = {'max_depth': range(1, 20),
'criterion': ['gini', 'entropy']}
gridsearch = GridSearchCV(clf, parameters, scoring='accuracy', cv=10)
gridsearch.fit(X_train, y_train)
print(f'gridsearch.best_params_ = {gridsearch.best_params_}')
best_clf = gridsearch.best_estimator_
best_clf
```
## Step 4 — Model performance

<p style="font-size:14px">
Image source: <a href="https://sebastianraschka.com/blog/2016/model-evaluation-selection-part3.html">Model evaluation, model selection, and algorithm selection in machine learning</a> by Sebastian Raschka
</p>
```
y_pred = best_clf.predict(X_test)
test_acc = accuracy_score(y_test, y_pred)
print(f'test_acc = {test_acc}')
```
## Step 5 — Train final model on full dataset

<p style="font-size:14px">
Image source: <a href="https://sebastianraschka.com/blog/2016/model-evaluation-selection-part3.html">Model evaluation, model selection, and algorithm selection in machine learning</a> by Sebastian Raschka
</p>
```
final_model = DecisionTreeClassifier(**gridsearch.best_params_)
final_model.fit(X, y)
```
## Iris classification problem
```
# Step 1: Get training and testing datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
random_state=2)
# Step 2: Use GridSearchCV to find optimal hyperparameter values
clf = DecisionTreeClassifier(random_state=2)
parameters = {'max_depth': range(1, 20),
'criterion': ['gini', 'entropy']}
gridsearch = GridSearchCV(clf, parameters, scoring='accuracy', cv=10)
gridsearch.fit(X_train, y_train)
print(f'gridsearch.best_params_ = {gridsearch.best_params_}')
# Step 3: Get model with best hyperparameters
best_clf = gridsearch.best_estimator_
# Step 4: Get best model performance from testing set
y_pred = best_clf.predict(X_test)
test_acc = accuracy_score(y_test, y_pred)
print(f'test_acc = {test_acc}')
# Step 5: Train final model on full dataset
final_model = DecisionTreeClassifier(random_state=2, **gridsearch.best_params_)
final_model.fit(X, y);
```
## Additional Resources
- _Python Machine Learning_ by Sebastian Raschka [[GitHub](https://github.com/rasbt/python-machine-learning-book-2nd-edition)][[Amazon](https://www.amazon.com/Python-Machine-Learning-scikit-learn-TensorFlow/dp/1787125939)]
- _Data Science Handbook_ by Jake VanderPlas [[GitHub](https://github.com/jakevdp/PythonDataScienceHandbook)][[Amazon](https://www.amazon.com/_/dp/1491912057?tag=oreilly20-20)]
- _The Elements of Statistical Learning_ by Hastie, Tibshirani and Friedman [[Free book!](https://web.stanford.edu/~hastie/ElemStatLearn/)]
- _Deep Learning_ by Ian Goodfellow, Yoshua Bengio, and Aaron Courville [[Amazon](https://www.amazon.com/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618)]
# Thank you
## Any questions?
# Generating 3D People in Scenes without People
Here we give a frontend demo of how to generate body meshes in a scene without people.
+ First, we use a pre-trained conditional VAE model to generate body meshes. Here we only show the one-stage model without scene loss.
+ Second, we perform scene geometry-aware fitting.
The code in this demo is slightly different from the code in other places. __To efficiently generate a large number of body meshes for various scenes, we recommend using the frontend sh scripts.__
## (1) Loading dependencies and models, and setting up the environment
```
from __future__ import absolute_import
from __future__ import print_function
from __future__ import division
import sys, os, glob
import json
import argparse
import numpy as np
import scipy.io as sio
import open3d as o3d
# proj_path = '/is/ps2/yzhang/workspaces/PSI-internal'
proj_path = '/home/yzhang/workspaces/smpl-env-gen-3d-internal'
sys.path.append(proj_path)
sys.path.append(proj_path+'/source')
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import init
import torch.optim as optim
from torch.optim import lr_scheduler
import smplx
from human_body_prior.tools.model_loader import load_vposer
from cvae import HumanCVAES1, HumanCVAES2, ContinousRotReprDecoder
import time
import chamfer_pytorch.dist_chamfer as ext
```
We put some auxiliary functions here, mainly for coordinate transforms and file parsing.
```
def recover_global_T(x_batch, cam_intrisic, max_depth):
    xt_batch = x_batch[:,:3]
    xr_batch = x_batch[:,3:]
    fx_batch = cam_intrisic[:,0,0]
    fy_batch = cam_intrisic[:,1,1]
    px_batch = cam_intrisic[:,0,2]
    py_batch = cam_intrisic[:,1,2]
    s_ = 1.0 / torch.max(px_batch, py_batch)
    z = (xt_batch[:, 2]+1.0)/2.0 * max_depth
    x = xt_batch[:,0] * z / s_ / fx_batch
    y = xt_batch[:,1] * z / s_ / fy_batch
    xt_batch_recovered = torch.stack([x,y,z],dim=-1)
    return torch.cat([xt_batch_recovered, xr_batch],dim=-1)
def convert_to_3D_rot(x_batch):
    xt = x_batch[:,:3]
    xr = x_batch[:,3:9]
    xb = x_batch[:,9:]
    xr_mat = ContinousRotReprDecoder.decode(xr)        # returns [:,3,3]
    xr_aa = ContinousRotReprDecoder.matrot2aa(xr_mat)  # returns [:,3]
    return torch.cat([xt, xr_aa, xb], dim=-1)
def body_params_encapsulate(x_body_rec, to_numpy=True, batched=False):
    if to_numpy:
        x_body_rec_np = x_body_rec.detach().cpu().numpy()
    else:
        x_body_rec_np = x_body_rec

    if batched:
        body_params_batch_rec = {}
        body_params_batch_rec['transl'] = x_body_rec_np[:,:3]
        body_params_batch_rec['global_orient'] = x_body_rec_np[:,3:6]
        body_params_batch_rec['betas'] = x_body_rec_np[:,6:16]
        body_params_batch_rec['body_pose'] = x_body_rec_np[:,16:48]
        body_params_batch_rec['left_hand_pose'] = x_body_rec_np[:,48:60]
        body_params_batch_rec['right_hand_pose'] = x_body_rec_np[:,60:]
        return body_params_batch_rec
    else:
        n_batch = x_body_rec_np.shape[0]
        rec_list = []
        for b in range(n_batch):
            body_params_batch_rec = {}
            body_params_batch_rec['transl'] = x_body_rec_np[b:b+1,:3]
            body_params_batch_rec['global_orient'] = x_body_rec_np[b:b+1,3:6]
            body_params_batch_rec['betas'] = x_body_rec_np[b:b+1,6:16]
            body_params_batch_rec['body_pose'] = x_body_rec_np[b:b+1,16:48]
            body_params_batch_rec['left_hand_pose'] = x_body_rec_np[b:b+1,48:60]
            body_params_batch_rec['right_hand_pose'] = x_body_rec_np[b:b+1,60:]
            rec_list.append(body_params_batch_rec)
        return rec_list
def data_preprocessing(img, modality, target_domain_size=[128, 128]):
    """
    input:
        - img (depth map or semantic map): [height, width].
        - modality: 'depth' or 'seg'
    output:
        - canvas: tensor with shape target_domain_size, with the input
          placed tightly in the center
        - factor: the resizing factor
        - max_val: the maximum value used for rescaling
    """
    # prepare the canvas
    img_shape_o = img.shape
    canvas = torch.zeros([1,1]+target_domain_size, dtype=torch.float32,
                         device=torch.device("cuda"))

    # filter out unavailable values
    if modality == 'depth':
        img[img>6.0] = 6.0
    if modality == 'seg':
        img[img>41] = 41

    ## rescale to [-1,1]
    max_val = torch.max(img)
    _img = 2 * img / max_val - 1.0

    ## put _img on the canvas
    if img_shape_o[0] >= img_shape_o[1]:
        factor = float(target_domain_size[0]) / img_shape_o[0]
        target_height = target_domain_size[0]
        target_width = int(img_shape_o[1] * factor) // 2 * 2
        # for depth maps we use bilinear interpolation in resizing,
        # and for segmentation maps we use bilinear interpolation as well.
        # note that a float semantic label is not real in practice, but
        # helpful in our work
        target_size = [target_height, target_width]
        _img = _img.view(1,1,img_shape_o[0],img_shape_o[1])
        img_resize = F.interpolate(_img, size=target_size, mode='bilinear',
                                   align_corners=False)
        na = target_width
        nb = target_domain_size[1]
        lower = (nb // 2) - (na // 2)
        upper = (nb // 2) + (na // 2)
        canvas[:,:,:, lower:upper] = img_resize
    else:
        factor = float(target_domain_size[1]) / img_shape_o[1]
        target_height = int(factor*img_shape_o[0]) // 2 * 2
        target_width = target_domain_size[1]
        target_size = [target_height, target_width]
        _img = _img.view(1,1,img_shape_o[0],img_shape_o[1])
        img_resize = F.interpolate(_img, size=target_size, mode='bilinear',
                                   align_corners=False)
        na = target_height
        nb = target_domain_size[0]
        lower = (nb // 2) - (na // 2)
        upper = (nb // 2) + (na // 2)
        canvas[:,:,lower:upper, :] = img_resize
    return canvas, factor, max_val
def scipy_matfile_parse(filename):
    '''
    parse data from a file and move it to the GPU.
    Note that this function is for the demo, and differs from the ones
    used in other places.
    '''
    data = sio.loadmat(filename)
    depth0_np = data['depth']
    seg0_np = data['seg']

    ## convert to torch tensors
    depth0 = torch.tensor(depth0_np, dtype=torch.float32, device=torch.device("cuda"))
    seg0 = torch.tensor(seg0_np, dtype=torch.float32, device=torch.device("cuda"))

    ## pre-processing
    depth, factor_d, max_d = data_preprocessing(depth0, 'depth', target_domain_size=[128, 128])
    seg, factor_s, _ = data_preprocessing(seg0, 'seg', target_domain_size=[128, 128])

    cam_intrinsic_np = data['cam'][0][0]['intrinsic']
    cam_intrinsic = torch.tensor(cam_intrinsic_np, dtype=torch.float32, device=torch.device("cuda")).unsqueeze(0)
    cam_extrinsic_np = data['cam'][0][0]['extrinsic']
    cam_extrinsic_np = np.linalg.inv(cam_extrinsic_np)
    cam_extrinsic = torch.tensor(cam_extrinsic_np, dtype=torch.float32, device=torch.device("cuda")).unsqueeze(0)
    return depth, seg, max_d.view(1), cam_intrinsic, cam_extrinsic
```
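The 6-D rotation representation decoded by `ContinousRotReprDecoder` follows, to our understanding, the continuous representation of Zhou et al. ("On the Continuity of Rotation Representations in Neural Networks"): a Gram–Schmidt orthonormalization of two 3-D vectors. A numpy sketch of that decoding, shown for intuition only — the class internals are an assumption here:

```python
import numpy as np

def decode_rot6d(x6):
    """Decode a 6-D vector into a rotation matrix via Gram-Schmidt."""
    a1, a2 = x6[:3], x6[3:]
    b1 = a1 / np.linalg.norm(a1)            # first column: normalize
    a2_proj = a2 - np.dot(b1, a2) * b1      # remove the b1 component
    b2 = a2_proj / np.linalg.norm(a2_proj)  # second column: orthonormal
    b3 = np.cross(b1, b2)                   # third column: cross product
    return np.stack([b1, b2, b3], axis=-1)

R = decode_rot6d(np.array([1.0, 2.0, 0.0, 0.0, 1.0, 3.0]))
print(np.allclose(R.T @ R, np.eye(3)))    # True: orthonormal columns
print(np.isclose(np.linalg.det(R), 1.0))  # True: a proper rotation
```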
## (2) Prepare the scene without people
Our method requires the following data about a scene:
+ depth map
+ semantic segmentation
+ the camera parameters (extrinsic and intrinsic)
+ the scene signed distance function (SDF)
+ the scene mesh
Note that SDF and scene mesh are only used for scene-geometry aware fitting. For generating body meshes with the CVAE model, only the first three attributes are sufficient.
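The SDF is stored on a regular grid over the scene's bounding box; to query it at a 3-D point, the point is first mapped into the grid's normalized [-1, 1] coordinates (the same normalization used later in the collision loss). A small numpy sketch with a made-up bounding box:

```python
import numpy as np

# hypothetical scene bounding box (metres), standing in for sdf_data['min'/'max']
grid_min = np.array([-2.0, -2.0, 0.0])
grid_max = np.array([ 2.0,  2.0, 3.0])

def normalize_to_grid(p):
    """Map a world-space point into the SDF grid's [-1, 1] range."""
    return (p - grid_min) / (grid_max - grid_min) * 2 - 1

center = (grid_min + grid_max) / 2
print(normalize_to_grid(center))    # [0. 0. 0.] -> grid centre
print(normalize_to_grid(grid_min))  # [-1. -1. -1.]
print(normalize_to_grid(grid_max))  # [1. 1. 1.]
```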
Here we use the 'MPH16' scene in the __PROXE__ dataset.
```
scenename = 'MPH16'
proxe_path = '/home/yzhang/Videos/PROXE'
## read the depth and semantics
scene_matfile_path = os.path.join(proxe_path, 'snapshot_for_testing/MPH16_00157_01/rec_000000.mat')
depth, seg, max_d, cam_intrinsic, cam_extrinsic = scipy_matfile_parse(scene_matfile_path)
## read the sdf
with open(os.path.join(proxe_path, 'scenes_sdf', scenename+'.json')) as f:
    sdf_data = json.load(f)
grid_min = np.array(sdf_data['min'])
grid_max = np.array(sdf_data['max'])
grid_dim = sdf_data['dim']
sdf = np.load(os.path.join(proxe_path, 'scenes_sdf', scenename + '_sdf.npy')).reshape(grid_dim, grid_dim, grid_dim)
## read the scene mesh
scene_mesh = o3d.io.read_triangle_mesh(os.path.join(proxe_path, 'scenes_downsampled', scenename+'.ply'))
scene_verts = np.asarray(scene_mesh.vertices)
scene_faces = np.asarray(scene_mesh.triangles)
## We could visualize the scene data, or skip this step.
import matplotlib.pyplot as plt
plt.subplot(1,2,1)
depth_processed_np = depth.detach().cpu().squeeze().numpy()
plt.imshow(depth_processed_np)
plt.subplot(1,2,2)
seg_processed_np = seg.detach().cpu().squeeze().numpy()
plt.imshow(seg_processed_np)
# # we use webGL to visualize 3D, which is a different case from running locally.
# # only works for point cloud visualization
# # note that visualizing 3D here may cause slow responses.
# pcd = o3d.geometry.PointCloud()
# pcd.points = scene_mesh.vertices
# pcd.colors = scene_mesh.vertex_colors
# from open3d import JVisualizer
# visualizer = JVisualizer()
# visualizer.add_geometry(pcd)
# visualizer.show()
```
## (3) Generating body meshes using the pre-trained conditional VAE model
For demonstration purposes, we only use the **one-stage model without scene loss**. For other models, the pipeline is the same.
```
testconfig={
'smplx_model_path': '/home/yzhang/body_models/VPoser',
'scene_model_ckpt': '/home/yzhang/workspaces/smpl-env-gen-3d-internal/data/resnet18.pth',
'vposer_ckpt_path': '/home/yzhang/body_models/VPoser/vposer_v1_0',
'device': torch.device("cuda" if torch.cuda.is_available() else "cpu"),
'ckpt_dir': 'checkpoints_v2/checkpoints_proxtrain_models1_batch32_epoch30_LR0.0003_LossVposer0.001_LossKL0.1_LossContact0.000001_LossCollision0.000001',
'n_samples': 5
}
### our conditional vae model
model_h = HumanCVAES1(latentD=256, # default value in our checkpoints
n_dim_body=75,# global T(3d) + global R(6d) + shape (10d) + pose (32d) + hand (24d)
scene_model_ckpt=None,
test=True)
# model_h = HumanCVAES2(latentD_g=256, # default value in our checkpoints
# latentD_l=256, # default value in our checkpoints
# n_dim_body=75,# global T(3d) + global R(6d) + shape (10d) + pose (32d) + hand (24d)
# scene_model_ckpt=None,
# test=True)
### VPoser
vposer, _ = load_vposer(testconfig['vposer_ckpt_path'], vp_model='snapshot')
### smplx
body_mesh_model = smplx.create(testconfig['smplx_model_path'],
model_type='smplx',
gender='neutral', ext='npz',
num_pca_comps=12,
create_global_orient=True,
create_body_pose=True,
create_betas=True,
create_left_hand_pose=True,
create_right_hand_pose=True,
create_expression=True,
create_jaw_pose=True,
create_leye_pose=True,
create_reye_pose=True,
create_transl=True,
batch_size=testconfig['n_samples']
)
## setup models and load checkpoints
model_h.eval()
model_h.to(testconfig['device'])
vposer.to(testconfig['device'])
body_mesh_model.to(testconfig['device'])
ckp_path = sorted(glob.glob(os.path.join(testconfig['ckpt_dir'],'epoch-*.ckp')),
key=os.path.getmtime)[-1]
checkpoint = torch.load(ckp_path)
print('[INFO] load checkpoints: ' + ckp_path)
model_h.load_state_dict(checkpoint['model_h_state_dict'])
```
Run the following code block to sample body configurations.
```
## generating body configurations
### concatenate depth and seg
xs = torch.cat([depth, seg],dim=1)
xs_n = xs.repeat(testconfig['n_samples'], 1,1,1)
### model inference
xhnr_gen= model_h.sample(xs_n)
### recover to the original translation/orientation range
xhn_gen = convert_to_3D_rot(xhnr_gen)
xh_gen = recover_global_T(xhn_gen, cam_intrinsic.repeat(testconfig['n_samples'],1,1),
max_d.repeat(testconfig['n_samples']))
```
In the following, we visualize the generated body configurations.
```
## visualizing a body mesh. Note that we use WebGL, which may cause slow responses or even get stuck.
body_params = body_params_encapsulate(xh_gen, to_numpy=False, batched=True)
body_params['body_pose'] = vposer.decode(body_params['body_pose'], output_type='aa').view(testconfig['n_samples'],-1)
smplx_out = body_mesh_model(**body_params)
smplx_verts = smplx_out.vertices.detach().cpu().numpy().squeeze()
cam_ext = cam_extrinsic.squeeze().detach().cpu().numpy()
### create a body point cloud
pcd_body_list = []
for body_index in range(testconfig['n_samples']):
    pcd_body = o3d.geometry.PointCloud()
    pcd_body.points = o3d.utility.Vector3dVector(smplx_verts[body_index])
    pcd_body = pcd_body.uniform_down_sample(every_k_points=10)
    ### perform transformation
    pcd_body.transform(cam_ext)
    pcd_body_list.append(pcd_body)
### create a scene point cloud
pcd_scene = o3d.geometry.PointCloud()
pcd_scene.points = scene_mesh.vertices
pcd_scene.colors = scene_mesh.vertex_colors
pcd_scene = pcd_scene.uniform_down_sample(every_k_points=10)
### create coord frame
mesh_frame = o3d.geometry.TriangleMesh.create_coordinate_frame(
size=0.6, origin=[0, 0, 0])
pcd_coord = o3d.geometry.PointCloud()
pcd_coord.points = mesh_frame.vertices
pcd_coord.colors = mesh_frame.vertex_colors
pcd_coord.transform(cam_ext)
### visualize in WebGL
from open3d import JVisualizer
visualizer = JVisualizer()
visualizer.add_geometry(pcd_scene)
visualizer.add_geometry(pcd_coord)
for body_index in range(testconfig['n_samples']):
    visualizer.add_geometry(pcd_body_list[body_index])
visualizer.show()
```
## (4) Scene geometry-aware fitting
One can see that some generated body meshes are not physically plausible: they either float in the air or penetrate the scene mesh. The scene geometry-aware fitting overcomes these problems.
```
import torch.optim as optim
from torch.autograd import Variable
import chamfer_pytorch.dist_chamfer as ext
def get_contact_id(body_segments_folder, contact_body_parts=['L_Hand', 'R_Hand']):
    contact_verts_ids = []
    contact_faces_ids = []
    for part in contact_body_parts:
        with open(os.path.join(body_segments_folder, part + '.json'), 'r') as f:
            data = json.load(f)
            contact_verts_ids.append(list(set(data["verts_ind"])))
            contact_faces_ids.append(list(set(data["faces_ind"])))
    contact_verts_ids = np.concatenate(contact_verts_ids)
    contact_faces_ids = np.concatenate(contact_faces_ids)
    return contact_verts_ids, contact_faces_ids
def verts_transform(verts_batch, cam_ext_batch):
    verts_batch_homo = F.pad(verts_batch, (0,1), mode='constant', value=1)
    verts_batch_homo_transformed = torch.matmul(verts_batch_homo,
                                                cam_ext_batch.permute(0,2,1))
    verts_batch_transformed = verts_batch_homo_transformed[:,:,:-1]
    return verts_batch_transformed
def cal_loss(xhr, xhr_rec, cam_ext_batch, s_verts_batch,
             s_sdf_batch, s_grid_min_batch, s_grid_max_batch,
             lossconfig, fittingconfig):
    ### reconstruction loss
    loss_rec = lossconfig['weight_loss_rec'] * F.l1_loss(xhr, xhr_rec)

    xh_rec = convert_to_3D_rot(xhr_rec)

    ### vposer loss
    vposer_pose = xh_rec[:,16:48]
    loss_vposer = lossconfig['weight_loss_vposer'] * torch.mean(vposer_pose**2)

    ### contact loss
    body_param_rec = body_params_encapsulate(xh_rec, to_numpy=False, batched=True)
    body_param_rec['body_pose'] = vposer.decode(body_param_rec['body_pose'],
                                                output_type='aa').view(xhr.shape[0], -1)
    smplx_output = body_mesh_model(return_verts=True, **body_param_rec)
    body_verts_batch = smplx_output.vertices  # [b, 10475, 3]
    body_verts_batch = verts_transform(body_verts_batch, cam_ext_batch)
    vid, fid = get_contact_id(body_segments_folder=fittingconfig['body_segments_folder'],
                              contact_body_parts=fittingconfig['contact_part'])
    body_verts_contact_batch = body_verts_batch[:, vid, :]
    dist_chamfer_contact = ext.chamferDist()
    contact_dist, _ = dist_chamfer_contact(body_verts_contact_batch.contiguous(),
                                           s_verts_batch.contiguous())
    loss_contact = lossconfig['weight_contact'] * torch.mean(torch.sqrt(contact_dist+1e-4)
                                                             /(torch.sqrt(contact_dist+1e-4)+0.01))

    ### sdf collision loss
    s_grid_min_batch = s_grid_min_batch.unsqueeze(1)
    s_grid_max_batch = s_grid_max_batch.unsqueeze(1)
    norm_verts_batch = (body_verts_batch - s_grid_min_batch) / (s_grid_max_batch - s_grid_min_batch) * 2 - 1
    n_verts = norm_verts_batch.shape[1]
    body_sdf_batch = F.grid_sample(s_sdf_batch.unsqueeze(1),
                                   norm_verts_batch[:,:,[2,1,0]].view(-1, n_verts,1,1,3),
                                   padding_mode='border')
    # if there are no penetrating vertices then set the penetration loss to 0
    if body_sdf_batch.lt(0).sum().item() < 1:
        loss_sdf_pene = torch.tensor(0.0, dtype=torch.float32, device=xhr.device)
    else:
        loss_sdf_pene = body_sdf_batch[body_sdf_batch < 0].abs().mean()
    loss_collision = lossconfig['weight_collision'] * loss_sdf_pene

    return loss_rec, loss_vposer, loss_contact, loss_collision
def fitting(xhr_in, cam_extrinsic,
            s_verts, s_sdf, s_grid_min, s_grid_max, max_d,
            fittingconfig, lossconfig):
    batch_size = xhr_in.shape[0]
    xhr_rec = Variable(torch.randn(batch_size,75).cuda(), requires_grad=True)
    optimizer = optim.Adam([xhr_rec], lr=fittingconfig['init_lr_h'])
    xhr_rec.data = xhr_in.clone()

    cam_ext_batch = cam_extrinsic.repeat(batch_size, 1,1)
    max_d_batch = max_d.repeat(batch_size)
    s_verts_batch = s_verts.repeat(batch_size, 1,1)
    s_sdf_batch = s_sdf.repeat(batch_size, 1,1,1)
    s_grid_min_batch = s_grid_min.repeat(batch_size, 1)
    s_grid_max_batch = s_grid_max.repeat(batch_size, 1)

    for ii in range(fittingconfig['num_iter']):
        optimizer.zero_grad()
        loss_rec, loss_vposer, loss_contact, loss_collision = cal_loss(
            xhr_in, xhr_rec, cam_ext_batch, s_verts_batch,
            s_sdf_batch, s_grid_min_batch, s_grid_max_batch,
            lossconfig, fittingconfig)
        loss = loss_rec + loss_vposer + loss_contact + loss_collision
        if fittingconfig['verbose']:
            print('[INFO][fitting] iter={:d}, l_rec={:f}, l_vposer={:f}, l_contact={:f}, l_collision={:f}'.format(
                ii, loss_rec.item(), loss_vposer.item(),
                loss_contact.item(), loss_collision.item()))
        loss.backward(retain_graph=True)
        optimizer.step()

    ### recover global translation and orientation
    xh_rec = convert_to_3D_rot(xhr_rec)
    return xh_rec
fittingconfig={'init_lr_h': 0.05,
'num_iter': 50,
'contact_part': ['back','butt','L_Hand','R_Hand','L_Leg',
'R_Leg','thighs'],
'body_segments_folder': os.path.join(proxe_path,'body_segments'),
'verbose': True
}
lossconfig={
'weight_loss_rec': 1,
'weight_loss_vposer':0.01,
'weight_contact': 0.1,
'weight_collision' : 0.5
}
### put scene to tensors
s_verts = torch.tensor(scene_verts, dtype=torch.float32).cuda().unsqueeze(0)
s_grid_min = torch.tensor(grid_min, dtype=torch.float32).cuda().unsqueeze(0)
s_grid_max = torch.tensor(grid_max, dtype=torch.float32).cuda().unsqueeze(0)
s_sdf = torch.tensor(sdf, dtype=torch.float32).cuda().unsqueeze(0)
xhr_gen = recover_global_T(xhnr_gen, cam_intrinsic.repeat(testconfig['n_samples'],1,1),
max_d.repeat(testconfig['n_samples']))
xh_fitting = fitting(xhr_gen, cam_extrinsic,
s_verts, s_sdf, s_grid_min, s_grid_max, max_d,
fittingconfig, lossconfig)
## visualizing a body mesh. Note that we use WebGL, which may cause slow responses or even get stuck.
body_params = body_params_encapsulate(xh_fitting, to_numpy=False, batched=True)
body_params['body_pose'] = vposer.decode(body_params['body_pose'], output_type='aa').view(testconfig['n_samples'],-1)
smplx_out = body_mesh_model(**body_params)
smplx_verts = smplx_out.vertices.detach().cpu().numpy().squeeze()
cam_ext = cam_extrinsic.squeeze().detach().cpu().numpy()
### create a body point cloud
pcd_body_list = []
for body_index in range(testconfig['n_samples']):
    pcd_body = o3d.geometry.PointCloud()
    pcd_body.points = o3d.utility.Vector3dVector(smplx_verts[body_index])
    pcd_body = pcd_body.uniform_down_sample(every_k_points=10)
    ### perform transformation
    pcd_body.transform(cam_ext)
    pcd_body_list.append(pcd_body)
### create a scene point cloud
pcd_scene = o3d.geometry.PointCloud()
pcd_scene.points = scene_mesh.vertices
pcd_scene.colors = scene_mesh.vertex_colors
pcd_scene = pcd_scene.uniform_down_sample(every_k_points=10)
### create coord frame
mesh_frame = o3d.geometry.TriangleMesh.create_coordinate_frame(
size=0.6, origin=[0, 0, 0])
pcd_coord = o3d.geometry.PointCloud()
pcd_coord.points = mesh_frame.vertices
pcd_coord.colors = mesh_frame.vertex_colors
pcd_coord.transform(cam_ext)
### visualize in WebGL
from open3d import JVisualizer
visualizer = JVisualizer()
visualizer.add_geometry(pcd_scene)
visualizer.add_geometry(pcd_coord)
for body_index in range(testconfig['n_samples']):
    visualizer.add_geometry(pcd_body_list[body_index])
visualizer.show()
```
# 02 - XOR Model with TensorFlow
```
# see https://aimatters.wordpress.com/2016/01/16/solving-xor-with-a-neural-network-in-tensorflow/
import tensorflow as tf
import time
```
#### Training and test data
```
XOR_X = [[0,0],[0,1],[1,0],[1,1]]
XOR_Y = [[0],[1],[1],[0]]
```
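As a quick sanity check (a sketch, not part of the original notebook), every label in `XOR_Y` is the logical XOR of its input pair:

```python
XOR_X = [[0, 0], [0, 1], [1, 0], [1, 1]]
XOR_Y = [[0], [1], [1], [0]]

# each target equals a XOR b for its input pair (a, b)
for (a, b), (y,) in zip(XOR_X, XOR_Y):
    assert (a ^ b) == y
print("labels consistent with XOR")
```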
#### Define weights and biases
```
x_ = tf.placeholder(tf.float32, shape=[4,2], name = 'x-input')
y_ = tf.placeholder(tf.float32, shape=[4,1], name = 'y-input')
Weight1 = tf.Variable(tf.random_uniform([2,2], -1, 1, seed=80636), name = "Weight1")
Weight2 = tf.Variable(tf.random_uniform([2,1], -1, 1, seed=80636), name = "Weight2")
Bias1 = tf.Variable(tf.zeros([2]), name = "Bias1")
Bias2 = tf.Variable(tf.zeros([1]), name = "Bias2")
```
#### Define layers

```
with tf.name_scope("layer2") as scope:
A2 = tf.sigmoid(tf.matmul(x_, Weight1) + Bias1)
with tf.name_scope("layer3") as scope:
Hypothesis = tf.sigmoid(tf.matmul(A2, Weight2) + Bias2)
with tf.name_scope("cost") as scope:
cost = tf.reduce_mean(( (y_ * tf.log(Hypothesis)) +
((1 - y_) * tf.log(1.0 - Hypothesis)) ) * -1)
with tf.name_scope("train") as scope:
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
```
#### Initialize
```
init = tf.global_variables_initializer()
sess = tf.Session()
```
#### TensorBoard
```
writer = tf.summary.FileWriter("./logs/xor_logs/xor_tf", sess.graph)
```
#### Training
```
sess.run(init)
t_start = time.perf_counter()  # time.clock() was removed in Python 3.8
for i in range(100001):
sess.run(train_step, feed_dict={x_: XOR_X, y_: XOR_Y})
if i % 10000 == 0:
print('Epoch ', i)
print('Hypothesis ', sess.run(Hypothesis, feed_dict={x_: XOR_X, y_: XOR_Y}))
print('cost ', sess.run(cost, feed_dict={x_: XOR_X, y_: XOR_Y}))
print('Weight1 ', sess.run(Weight1))
print('Bias1 ', sess.run(Bias1))
print('Weight2 ', sess.run(Weight2))
print('Bias2 ', sess.run(Bias2))
t_end = time.perf_counter()
print('Elapsed time ', t_end - t_start)
```
#### Result
```
print(sess.run(Hypothesis, feed_dict={x_: XOR_X, y_: XOR_Y}))
```
#### Freeze the model and save it as a TensorFlow file
```
freeze_var_names = list(set(v.op.name for v in tf.global_variables()))
print(freeze_var_names)
output_names = [Hypothesis.op.name]
print(output_names)
from tensorflow.python.framework.graph_util import remove_training_nodes
sub_graph_def = remove_training_nodes(sess.graph_def)
from tensorflow.python.framework import graph_util
frozen_graph = graph_util.convert_variables_to_constants(sess,
sub_graph_def,
output_names,
freeze_var_names)
graph_path = tf.train.write_graph(frozen_graph, "models", "xor_tf.pb", as_text=False)
print('%s written' % graph_path)
```
## uTensor
```
utensor-cli convert models/xor2n.pb --output-nodes=layer3_3/Sigmoid
unsupported op type in uTensor: Sigmoid
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import os
from datetime import datetime,timedelta,date,time
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.cluster import KMeans
from collections import Counter
column_names = ['DateTime', 'y']
area = 'C:/Users/home/Desktop/Smart-Meter/ratiodata/'
dir_path = os.getcwd()
# Read in all user data and store it in dt
# dt is a dict -- key: user name, value: DataFrame of all of that user's data
dt = {}
for d_name in region[region_name].values.flatten():
#print(d_name)
dpath = area + d_name + '.csv'
dt[d_name] = pd.read_csv(dpath)
dt[d_name].columns = column_names
dt[d_name]['DateTime'] = pd.to_datetime(dt[d_name]['DateTime'])
# Given a DataFrame and a start and end month, return all records within that range
def get_DT_between_month(dataT, start_month, end_month):
return dataT[[start_month<=dtime.month<=end_month for dtime in dataT['DateTime']]]
#get_DT_between_month(dt['M00001'], 6, 9)
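# (Sketch, hypothetical helper) an equivalent vectorized filter using the pandas
# .dt accessor, which avoids the Python-level loop over every timestamp:
def get_DT_between_month_fast(dataT, start_month, end_month):
    return dataT[dataT['DateTime'].dt.month.between(start_month, end_month)]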
# Given a DataFrame, return all weekday (non-holiday) records
def get_weekday_DT(dataT):
return dataT[[0<=dtime.weekday()<=4 for dtime in dataT['DateTime']]]
#get_weekday_DT(dt['M00001.csv'])
# Filter out all weekday data from June through September
summer_dt = {}
for k in dt.keys():
summer_dt[k] = get_weekday_DT(get_DT_between_month(dt[k], 6, 9))
# all_times holds the datetime.time of every hourly time point: [01:00, 02:00, ..., 23:00, 00:00]
all_times = [time(hour=1, minute=0, second=0)]
while True:
next_time = datetime.combine(date.today(), all_times[-1]) + timedelta(hours=1)
#next_time = all_times[-1] + time.timedelta(minutes=15)
if next_time.time() == all_times[0]:
break
all_times.append(next_time.time())
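# (Sketch) the loop above can equivalently be written as one comprehension --
# 24 hourly datetime.time points starting at 01:00 and wrapping around to 00:00:
all_times_alt = [time(hour=(h + 1) % 24) for h in range(24)]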
# Organize each user's electricity usage at every time point
# summer_tdt is a dict -- key: user name
# , value: that user's usage at each time point, itself a dict
#   per-user dict -- key: time point, e.g. 01:00
#   , value: list of all of that user's usage records at that time, e.g. [0.2, 0.3, 0.4]
summer_tdt = {}
for k in summer_dt.keys():
#print(k)
summer_tdt[k] = {}
for t in all_times:
summer_tdt[k][t] = []
for i in range(len(summer_dt[k])):
if np.isnan(summer_dt[k]['y'].iloc[i]):
continue
summer_tdt[k][summer_dt[k]['DateTime'].iloc[i].time()].append(summer_dt[k]['y'].iloc[i])
# Average every user's usage at each time point and store it in summer_avgdt
# summer_avgdt is a dict -- key: each time point
# , value: list of each user's average usage in that time slot
summer_avgdt = {}
for t in all_times:
summer_avgdt[t] = []
for k in summer_dt.keys():
summer_avgdt[t].append(np.mean(summer_tdt[k][t]))
# Boxplot per time point, to see how usage differs across users in each time slot overall
summer_avgdt_list = []
for i in range(len(all_times)):
summer_avgdt_list.append(summer_avgdt[all_times[i]])
# with outliers drawn
plt.figure(figsize=(15,10))
plt.boxplot(summer_avgdt_list, labels=all_times)
plt.xticks(rotation=90)
plt.show()
# without outliers drawn
plt.figure(figsize=(15,10))
plt.boxplot(summer_avgdt_list, 0, '', labels=all_times)
plt.xticks(rotation=90)
plt.show()
# Compute the standard deviation for each time slot and store it in std_summer_avgdt
# std_summer_avgdt is a dict -- key: each time point
# , value: the std of that time slot
std_summer_avgdt = {}
for k in summer_avgdt.keys():
std_summer_avgdt[k] = np.std(summer_avgdt[k])
# Next comes the clustering ~
x = [summer_avgdt[k] for k in summer_avgdt.keys()]
X = [list(i) for i in zip(*x)]
ss=[]
for n in range(1,20):
kmeans = KMeans(n_clusters=n, random_state=0).fit(X)
ss.append(kmeans.inertia_)
plt.style.use('ggplot')
plt.figure(figsize=(10,10))
plt.plot(range(1,20),ss)
plt.xlabel("n_clusters")
plt.xticks( np.arange(20) )
plt.ylabel("within-cluster sum-of-squares")
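# A rough automatic elbow heuristic (sketch, hypothetical helper -- not part of
# the original analysis): pick the k at which the inertia curve bends most
# sharply, i.e. the largest second difference. With ss starting at k = 1,
# index j of np.diff(ss, 2) is centered at k = j + 2, e.g. suggest_elbow(ss).
def suggest_elbow(inertias, k_start=1):
    return int(np.argmax(np.diff(inertias, 2))) + k_start + 1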
# Pick the point with the sharpest bend (elbow) as the number of clusters
n_c = 3
kmeans = KMeans(n_clusters=n_c, random_state=0).fit(X)
print(Counter(kmeans.labels_))
groups = []
for i in range(n_c):
groups.append({})
for t in all_times:
groups[i][t] = []
user_keys = [ k for k in summer_tdt.keys()]
for i in range(len(kmeans.labels_)):
u = user_keys[i]
g = kmeans.labels_[i]
for t in all_times:
groups[g][t].append(np.mean(summer_tdt[u][t]))
# Boxplots to see the variability within each group
plt.figure(figsize=(15,10))
for i in range(n_c):
yy = [groups[i][t] for t in all_times]
pyy = yy[3::4]
pxx = all_times[3::4]
plt.subplot(int(np.ceil(n_c / 2)), 2, i + 1)
plt.title("Group"+str(i+1))
plt.boxplot(pyy, 0, '', labels=pxx)
plt.xticks(rotation=90)
plt.tight_layout()
plt.show()
# Line plots to see the similarity within each group
plt.figure(figsize=(15,10))
for i in range(n_c):
yy = [groups[i][t] for t in all_times[:-1]]
y = [list(i) for i in zip(*yy)]
plt.subplot(int(np.ceil(n_c / 2)), 2, i + 1)
plt.title("Group"+str(i+1))
#plt.ylim(0,2)
for num in range(len(y)):
plt.plot(all_times[:-1],y[num],'b--')
#plt.scatter(all_times,y[num])
plt.xticks(rotation=90)
pyy = [np.mean(groups[i][t]) for t in all_times[:-1]]
plt.plot(all_times[:-1], pyy,'r-',linewidth=5.0)
plt.xticks(all_times[:-1])
#plt.scatter(all_times[:-1], pyy,'r-',linewidth=5.0)
plt.tight_layout()
plt.savefig(region_name+'_clustering.png')
plt.show()
# Scatter plots to see the similarity within each group
plt.figure(figsize=(15,10))
for i in range(n_c):
yy = [groups[i][t] for t in all_times[:-1]]
y = [list(i) for i in zip(*yy)]
plt.subplot(int(np.ceil(n_c / 2)), 2, i + 1)
plt.title("Group"+str(i+1))
#plt.ylim(0,2)
for num in range(len(y)):
#plt.plot(all_times[:-1],y[num],'b--')
plt.scatter(all_times[:-1],y[num],color='black')
plt.xticks(rotation=90)
pyy = [np.mean(groups[i][t]) for t in all_times[:-1]]
#plt.plot(all_times[:-1], pyy,'r-',linewidth=5.0)
plt.scatter(all_times[:-1], pyy,color='red')
plt.xticks(all_times[:-1])
plt.tight_layout()
plt.show()
plt.figure(figsize=(15,10))
for i in range(n_c):
yy = [np.mean(groups[i][t]) for t in all_times[:-1]]
xx = all_times[:-1]
plt.plot(xx, yy,label="Group"+str(i+1)+"_mean")
plt.xticks(xx, xx, rotation=90)
plt.legend()
plt.title(str(n_c)+' clusters mean curve')
plt.savefig(region_name+'_clustering_mean_curve.png')
plt.show()
```
| github_jupyter |
```
import json
import uuid
from pymongo import MongoClient
db_dict = {
"id": "evmirna",
"title": "EVmiRNA",
"url": "http://bioinfo.life.hust.edu.cn/EVmiRNA/",
"description": "EVmiRNA is a database of miRNA profiling in extracellular vesicles",
"basicInfo": "Extracellular vesicles (EVs) released by living cells include exosomes and microvesicles (MVs), which contain various molecules from parental cells and are potential sources for disease diagnostic and therapeutic biomarkers. miRNA is the most well-studied molecule type in EVs because of its important functions and biomarker properties. Thus, we build the Extracellular Vesicles miRNA database (EVmiRNA) to collect comprehensive miRNA expression profiles in EVs. In EVmiRNA database, we analyzed 462 smRNA sequencing datasets of EVs from 17 tissues/diseases. The miRNA expression profiles, miRNA regulated pathways, miRNA function, miRNA related drugs and publications are showed to support the miRNA biomarker discovery.",
"categories": ["miRNA", "extracellular vesicles", "disease"],
"species": ["Homo Sapiens"],
"updatedAt": "2019-08-30 11:11:11"
}
class MongoMir:
__mongo = MongoClient("mongodb://username:passwd@ip:port/dbname")
def __init__(self, col_name = 'mirinfo'):
self.__col_name = col_name
def get_data(self, output={}, condition={}):
output['_id'] = 0
mcur = self.__mongo.mirnasnp[self.__col_name].find(
condition, output, no_cursor_timeout=True
)
return mcur.count()
def get_mirnas(self):
mcur = self.__mongo.EVmiRNA.mir_annotation.find(
{}, {'_id': 0, 'miRNA_id': 1}
)
res = [item['miRNA_id'] for item in mcur]
return res
class ENTRY(object):
def __init__(self, type, title, url):
self.id = str(uuid.uuid4())
self.type = type
self.title = title
self.url = url
self.dbId = "evmirna"
self.updatedAt = "2019-08-30 11:11:11"
self.description = ""
self.basicInfo = ""
self.species = ["Homo Sapiens"]
self.attrs = {
"symbol": title,
}
def __getattr__(self, attr):
    # an ENTRY is not subscriptable, so self[attr] would raise TypeError;
    # look unknown attributes up in the instance dict instead
    return self.__dict__[attr]
def get_entry(it, type = 'miRNA ID'):
url = f'http://bioinfo.life.hust.edu.cn/EVmiRNA#!/miRNA_info?miRNA={it}'
e = ENTRY(type, it, url)
return json.dumps(e.__dict__)
mongo_mirnasnp = MongoMir()
mirna_ids = mongo_mirnasnp.get_mirnas()
with open('/home/liucj/index.bs', 'w') as fh:
header = 'DB' + '\t' + json.dumps(db_dict) + '\n'
fh.write(header)
for it in mirna_ids:
line = 'ENTRY' + '\t' + get_entry(it = it, type = 'miRNA ID') + '\n'
fh.write(line)
```
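The `ENTRY` → tab-separated JSON line format written above can be exercised standalone (a sketch with a hypothetical miRNA id; no MongoDB connection needed):

```python
import json
import uuid

class ENTRY:
    """Minimal stand-in mirroring the fields serialized above."""
    def __init__(self, type, title, url):
        self.id = str(uuid.uuid4())
        self.type = type
        self.title = title
        self.url = url
        self.dbId = "evmirna"

line = 'ENTRY' + '\t' + json.dumps(ENTRY('miRNA ID', 'hsa-mir-21', 'http://example.org').__dict__)
record = json.loads(line.split('\t', 1)[1])  # the JSON payload round-trips
print(record['title'])  # hsa-mir-21
```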
| github_jupyter |
# Checkpoints
Sometimes it might be useful to store checkpoints while executing an algorithm, in particular if a run is very time-consuming.
**pymoo** offers to resume a run by serializing the algorithm object and loading it back in. Resuming runs from checkpoints is possible in three ways:
- the functional way, by calling the `minimize` method,
- the object-oriented way, by repeatedly calling the `next()` method, or
- from a text file ([Biased Initialization](../customization/initialization.ipynb) from a `Population`).
## Functional
```
import numpy as np
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
problem = get_problem("zdt1", n_var=5)
algorithm = NSGA2(pop_size=100)
res = minimize(problem,
algorithm,
('n_gen', 5),
seed=1,
copy_algorithm=False,
verbose=True)
np.save("checkpoint", algorithm)
checkpoint, = np.load("checkpoint.npy", allow_pickle=True).flatten()
print("Loaded Checkpoint:", checkpoint)
# only necessary if for the checkpoint the termination criterion has been met
checkpoint.has_terminated = False
res = minimize(problem,
checkpoint,
('n_gen', 20),
seed=1,
copy_algorithm=False,
verbose=True)
```
## Object Oriented
```
import numpy as np
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.factory import get_termination
from pymoo.optimize import minimize
from pymoo.visualization.scatter import Scatter
problem = get_problem("zdt1", n_var=5)
algorithm = NSGA2(pop_size=100)
algorithm.setup(problem, seed=1, termination=('n_gen', 20))
for k in range(5):
algorithm.next()
print(algorithm.n_gen)
np.save("checkpoint", algorithm)
checkpoint, = np.load("checkpoint.npy", allow_pickle=True).flatten()
print("Loaded Checkpoint:", checkpoint)
while checkpoint.has_next():
checkpoint.next()
print(checkpoint.n_gen)
```
## From a Text File
First, load the data from a file. Usually, this will include the variables `X`, the objective values `F` (and the constraints `G`). Here, they are created randomly. Always make sure the `Problem` you are solving would return the same values for the given `X` values. Otherwise the data might be misleading for the algorithm.
(This is not the case here. It is really JUST for illustration purposes)
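In practice, loading the design variables from a text file might look like this (a sketch with a hypothetical temp file; the shape `(300, 13)` mirrors `N` rows by `n_var` columns):

```python
import os
import tempfile
import numpy as np

# write a hypothetical design-variable file, then read it back
path = os.path.join(tempfile.mkdtemp(), "X.csv")
np.savetxt(path, np.random.random((300, 13)), delimiter=",")

X = np.loadtxt(path, delimiter=",")
print(X.shape)  # (300, 13)
```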
```
import numpy as np
from pymoo.factory import G1
problem = G1()
N = 300
np.random.seed(1)
X = np.random.random((N, problem.n_var))
# here F and G is re-evaluated - in practice you want to load them from files too
F, G = problem.evaluate(X, return_values_of=["F", "G"])
```
Then, create a population object using your data:
```
from pymoo.model.evaluator import Evaluator
from pymoo.model.population import Population
from pymoo.model.problem import StaticProblem
# now the population object with all its attributes is created (CV, feasible, ...)
pop = Population.new("X", X)
pop = Evaluator().eval(StaticProblem(problem, F=F, G=G), pop)
```
And finally run it with a non-random initial population `sampling=pop`:
```
from pymoo.algorithms.so_genetic_algorithm import GA
from pymoo.optimize import minimize
# the algorithm is now called with the population - biased initialization
algorithm = GA(pop_size=100, sampling=pop)
res = minimize(problem,
algorithm,
('n_gen', 10),
seed=1,
verbose=True)
```
| github_jupyter |
# Introduction
This notebook demonstrates the prediction pipeline for the trained classifiers. With the 3 pretrained classifiers, you can easily classify a new structure that is not included in the original training set.
**Note**:
- For easier readability, you can change the fontsize of this notebook by navigating to `Settings` -> `JupyterLab Theme` and increasing or decreasing the fontsize from the dropdown menu.
- For first-time JupyterLab users, you can use your mouse to select and run individual code cell by pressing <kbd>⇧ Shift</kbd> + <kbd>↩ Return</kbd> (it is recommended that you run the cells in order).
- To run all the cells, you can navigate to `Run` -> `Run All Cells`. You can also clear all outputs and rerun all cells by clicking on ▶▶.
- To properly display the progress bar (you may see it as `HBox`) and the Plotly interactive figure, you need to run all the cells first!
# Import packages and functions
```
import sys
# force the notebook to look for files in the upper level directory
sys.path.insert(1, '../')
import numpy as np
import pandas as pd
import xgboost as xgb
import seaborn as sns
from glob import glob
from tqdm import tqdm
import plotly.express as px
import matplotlib.pyplot as plt
from IPython.display import IFrame
from sklearn.impute import KNNImputer
from model.model_building import load_data
from data.data_cleaning import abbreviate_features
from data.compound_featurizer import read_new_struct, composition_featurizer, structure_featurizer, handbuilt_featurizer
```
# Set up constants
The `REDUCED_PATH` contains the reduced feature set used to train the classifiers. The `FULL_PATH` contains the full feature set used to impute missing values in the uploaded new structure should there be any. In terms of the number of predictors, the reduced feature dataset is a subset of the full feature dataset, with features selected according to SHAP feature importance and domain knowledge.
The `NEW_STRUCT_PATH` contains the demo CIF structure file and you can _**test your own structure**_ by uploading the CIF file to the "user_defined_structures" folder and changing the `NEW_STRUCT_PATH`. You can save time by pressing <kbd>⇥ Tab</kbd> for auto-completion after typing the first few words.
**Note**: If you choose to upload your own cif structure file, it is preferable that the structure already has an oxidation state assigned to each site. If not, the featurizer will try to guess the oxidation states using the [oxi_state_guesses()](https://pymatgen.org/pymatgen.core.composition.html?highlight=oxi_state_guesses#pymatgen.core.composition.Composition.oxi_state_guesses) function from Pymatgen. There is no guarantee that the guessed oxidation states will be correct and the script will also ask for user input if it is unable to guess the oxidation states. In addition, the uploaded structure has to have at least **2 different elements** (i.e., at least a binary compound). A single element structure such as Si will lead to an error in the script.
There are 3 demo structures: CuNiO$_2$, Ca$_2$CoN$_2$ and Ga(MoSe$_2$)$_4$ for you to try out and they are not present in the [training database](../data/processed/csv_version/IMT_Classification_Dataset_Reduced_Feature_Set_v10.csv).
```
REDUCED_PATH = "../data/processed/IMT_Classification_Dataset_Reduced_Feature_Set_v10.xlsx"
FULL_PATH = "../data/processed/IMT_Classification_Dataset_Full_Feature_Set_v10.xlsx"
NEW_STRUCT_PATH = "./user_defined_structures/CuNiO2_mp-1178372_primitive.cif"
```
# Define some helper functions
```
def assign_oxi_state(elem_symbol):
"""Allow the user to assign oxidation state to each element."""
oxi_state = input("{}:".format(elem_symbol))
return float(oxi_state)
def check_all_zero_oxi_state(structure):
"""Check if all the oxidation states in the structure are all zero"""
try:
# get all the oxidation states by specie
oxi_states = [specie.oxi_state for specie in structure.composition.elements]
except AttributeError:
# if there are no species but elements present, then return True
return True
# if the species all have zero oxidation state, also return True
if np.sum(np.array(oxi_states) == 0) == len(oxi_states):
return True
return False
def check_oxi_state(structure):
"""Check if the structure has no oxidation states assigned and the guessed oxidation states are all zero. If so, trigger user input."""
if (check_all_zero_oxi_state(structure)) and (not structure.composition.oxi_state_guesses()):
# get all the elements in the input structure
elem_lst = [element.symbol for element in structure.composition.element_composition.elements]
# get the reduced formula
reduced_formula = structure.composition.reduced_formula
print("Unable to guess oxidation states for {}. Please manually assign oxidation states by element".format(reduced_formula))
# get a dictionary to overwrite the default guessed oxidation states
elem_oxi_states = {elem_symbol: [assign_oxi_state(elem_symbol)] for elem_symbol in elem_lst}
return elem_oxi_states
return None
def featurizer_wrapper(df_input):
"""A wrapper function around the composition, structure and handbuilt featurizers."""
# get the structure from the initialized dataframe
new_struct = df_input.at[0, "structure"]
# check if the guessed oxidation states are all zeros and allow user-overwrite if true
oxi_states_by_element = check_oxi_state(new_struct.get_primitive_structure())
# featurize the given structure using 3 predefined featurizers
df_output = composition_featurizer(df_input, oxi_states_override=oxi_states_by_element)
df_output = structure_featurizer(df_output, oxi_states_override=oxi_states_by_element)
df_output = handbuilt_featurizer(df_output)
return df_output
def process_new_struct_df(df_new, df_full_set):
"""Process the newly featurized structure(s) and impute any missing values with KNNImputer"""
new_struct_df_with_name = abbreviate_features(df_new)
# check if the dataframe contains missing values: if not, then return immediately
if new_struct_df_with_name.isna().sum(axis=1).sum() == 0:
return new_struct_df_with_name.drop(columns="Compound"), new_struct_df_with_name
# select the same features as the full feature set
new_struct_df = new_struct_df_with_name.filter(items=df_full_set.columns).drop(columns="Compound")
# combine the full feature set with the new structure's features
df_with_new_struct = pd.concat([df_full_set.drop(columns=["Compound", "Label", "struct_file_path"]),
new_struct_df], ignore_index=True)
# impute the missing values with the values from the 5 nearest neighbors
# weighted by their distances to the new structures' non-missing values
knn_imputer = KNNImputer(n_neighbors=5, weights="distance")
# get the imputed dataframe for the new structure
new_struct_df_imputed = knn_imputer.fit_transform(df_with_new_struct)[-df_new.shape[0]:]
# add back the column names
new_struct_df = pd.DataFrame(new_struct_df_imputed, columns=new_struct_df.columns)
# get the new structure name and create a copy of new_struct_df with the compound name
new_struct_name = new_struct_df_with_name["Compound"].to_list()
new_struct_df_with_name = new_struct_df.copy()
new_struct_df_with_name["Compound"] = new_struct_name
return new_struct_df, new_struct_df_with_name
```
# Read in the reduced dataset
This is a quick overview of the training dataset. It will be used later on to select the relevant features from the raw output of the featurizer.
```
df = pd.read_excel(REDUCED_PATH)
df
```
As we can see here, the reduced dataset used for training the classifier has 343 unique compounds and 13 columns. Excluding `Compound`, `Label` and `struct_file_path`, there are 10 features used for prediction (i.e., 10 predictors).
# Make a prediction on a never-before-seen structure
## 1. Load the three trained models
```
# load the metal vs. non_metal classifier
metal_model = xgb.XGBClassifier()
metal_model.load_model("../model/saved_models/new_models/metal_reduced.model")
# load the insulator vs. non_insulator classifier
insulator_model = xgb.XGBClassifier()
insulator_model.load_model("../model/saved_models/new_models/insulator_reduced.model")
# load the mit vs. non_mit classifier
mit_model = xgb.XGBClassifier()
mit_model.load_model("../model/saved_models/new_models/mit_reduced.model")
```
## 2. Read in and featurize the new structure
If you find the featurization process to be slow, please set the `supercell_matrix` argument to <span style="background-color:yellow">None</span>.
```
new_struct_df = read_new_struct(NEW_STRUCT_PATH, supercell_matrix=[2, 2, 2])
new_struct_df = featurizer_wrapper(new_struct_df)
```
Here is the raw output from the featurizer.
```
new_struct_df
```
## 3. Only select predictors that are in the reduced feature set
We need to load in the full feature set to help impute the missing values in the newly featurized structure, should there be any. The imputing process utilizes the [KNNImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.KNNImputer.html) from scikit-learn.
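A minimal sketch of how `KNNImputer` fills a missing value from the nearest rows (toy data, not the actual feature set):

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0],
              [1.1, np.nan],
              [8.0, 9.0]])

# with a single neighbor, the NaN is filled from the closest row by distance
imputer = KNNImputer(n_neighbors=1)
X_filled = imputer.fit_transform(X)
print(X_filled[1, 1])  # 2.0 -- copied from the nearest row [1.0, 2.0]
```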
```
df_full = pd.read_excel(FULL_PATH)
df_full
new_struct_df, new_struct_df_with_name = process_new_struct_df(new_struct_df, df_full)
new_struct_df = new_struct_df.filter(items=df.columns)
new_struct_df
```
Here is a printout of the number of predictors in the newly featurized structure
```
new_struct_df.shape[1]
```
Compare the number of predictors with the training data loaded into the MIT classifier
```
train_x, _ = load_data(df, "MIT")
train_x.shape[1]
```
## 4. Print out the prediction label and probability
After selecting the relevant features, we are now ready to make a prediction for the given structure. Below, you will see the outputs from the metal _vs._ non_metal, insulator _vs._ non_insulator and mit _vs._ non_mit classifiers. `1` means the structure is predicted to the positive class and `0` means it is predicted to be the negative class.
**Note**: It is possible for the classifier to classify a structure as multiple classes. (e.g., as both a metal and an MIT). We've provided you with the probability of each prediction and we'll let you be the final judge.
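When a structure gets multiple positive labels, one simple way to adjudicate (a sketch with made-up probabilities, not part of the original pipeline) is to take the class with the highest predicted probability:

```python
# hypothetical probabilities from the three one-vs-rest classifiers
proba = {"metal": 0.62, "insulator": 0.15, "mit": 0.71}
best = max(proba, key=proba.get)  # class with the highest probability
print(best)  # mit
```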
```
print("Is a metal: {}, and the probability of being a metal is: {:0.4f}\n".format(metal_model.predict(new_struct_df)[0], metal_model.predict_proba(new_struct_df)[0][1]))
print("Is an insulator: {}, and the probability of being an insulator is: {:0.4f}\n".format(insulator_model.predict(new_struct_df)[0],
insulator_model.predict_proba(new_struct_df)[0][1]))
print("Is an MIT: {}, and the probability of being an MIT is: {:0.4f}".format(mit_model.predict(new_struct_df)[0], mit_model.predict_proba(new_struct_df)[0][1]))
```
## 5. Plot the data using the Range of the Mendeleev Number & the Average Deviation of the Covalent Radius
These two features are identified with high feature importance for the MIT _vs._ non-MIT classifier and have been shown to separate MITs from the non-MITs quite well.
We can plot all the data points (training set + the new structure) on a 2D scatter plot of `range MendeleevNumber` vs. `avg_dev CovalentRadius`.
```
x_plot = "range_MendeleevNumber"
y_plot = "avg_dev_CovalentRadius"
# get the relevant columns from the training set
df_plot = df[["Compound", "Label", x_plot, y_plot]]
# get the relevant columns from the new structure dataframe
new_struct_plot = new_struct_df_with_name[["Compound", x_plot, y_plot]].copy()
# assign "New_struct" as the new structure's label to distinguish it from the original training set
new_struct_plot["Label"] = "New_struct"
# combine the two datasets
combined_df_plot = pd.concat([df_plot, new_struct_plot], ignore_index=True)
# change the numeric label into string format to allow discrete color scale for plotting
combined_df_plot = combined_df_plot.replace({"Label": {0: "Metal", 1: "Insulator", 2: "MIT"}})
combined_df_plot
```
Create a scatter plot.
```
with sns.axes_style("ticks"):
plt.figure(figsize=(5,5), dpi=200)
sns.scatterplot(data=combined_df_plot, x=x_plot, y=y_plot, hue="Label", alpha=0.7)
```
### Here is an interactive version with Plotly
Things you can do
1. Hover your cursor over data points to display the compound name, feature values and class label.
2. Drag your cursor to zoom in on a specific region and double left click to zoom out.
3. Click once on one of the class labels on the legend to hide the points within that class.
4. Click twice on one of the class labels on the legend to show only the points within that class.
```
mendeleev_cov_radius_fig = px.scatter(combined_df_plot, x=x_plot, y=y_plot, hover_name="Compound",
height=1000, width=1000, color="Label", template="simple_white"
)
mendeleev_cov_radius_fig.update_traces(mode='markers', marker_line_width=1, marker_size=10, marker_line_color="white")
mendeleev_cov_radius_fig.write_html("../plots/new_struct_mendeleev_cov_radius.html")
IFrame(src='../plots/new_struct_mendeleev_cov_radius.html', width=1100, height=1100)
```
## 6. Batch processing
The following section allows a user to upload a folder containing several CIFs and classify all of them at the same time.
First, specify the folder path. **If you want to upload a folder of your own and carry out the classification for all the compounds at once, all you have to do is to change the `batch_folder_path` variable.**
```
batch_folder_path = "./user_defined_structures/"
```
Then, get all the CIF paths and read in the structures as a dataframe.
**Note**: By default, any structure is read in as a supercell (a'=2a, b'=2b, c'=2c), which might lead to prolonged featurization time. Again, if you wish not to read in the structures as supercells or find the featurization process too slow, please specify the `supercell_matrix` argument as <span style="background-color:yellow">None</span>.
```
# initialize an empty list of dataframes
df_lst = []
# get the file paths of all the cif files
cif_file_paths = glob(batch_folder_path + "*.cif")
# iterate over all files and read in the structure
for struct_file_path in cif_file_paths:
# add the newly read in dataframe to the list
df_lst.append(read_new_struct(struct_file_path, supercell_matrix=[2, 2, 2]))
# concatenate all the dataframes in the list
df_batch = pd.concat(df_lst, ignore_index=True)
df_batch
```
Check the oxidation states of the structures read in.
```
with tqdm(df_batch.index) as t:
for row_index in t:
# print out a progress bar
t.set_description("Checking %s" % df_batch.at[row_index, "Compound"])
# access the structure and create a copy
struct_to_check = df_batch.at[row_index, "structure"].copy()
# check the oxidation states and ask for input if there is a need to add the oxidation state by hand
oxi_states_by_element = check_oxi_state(struct_to_check.get_primitive_structure())
# if there is a need to overwrite the original structure
if oxi_states_by_element:
# extract the number from the list of oxidation states for each element
oxi_states_by_element = {element: oxi_state_lst[0] for element, oxi_state_lst in oxi_states_by_element.items()}
# add the oxidation states by hand
struct_to_check.add_oxidation_state_by_element(oxidation_states=oxi_states_by_element)
# overwrite the original structure in the dataframe
df_batch.at[row_index, "structure"] = struct_to_check
```
Next, we can featurize the new structures in the batch.
```
df_batch_output = composition_featurizer(df_batch)
df_batch_output = structure_featurizer(df_batch_output)
df_batch_output = handbuilt_featurizer(df_batch_output)
df_batch_output
```
Just like before, we also need to process the newly featurized structures by imputing the missing values with KNNImputer if there is any, as well as selecting the features in the reduced feature set.
```
new_batch_df, new_batch_df_with_name = process_new_struct_df(df_batch_output, df_full)
new_batch_df = new_batch_df.filter(items=df.columns)
new_batch_df
```
We are ready to make the classification for all the structures.
```
# get the number of compounds in the batch
num_compounds = df_batch.shape[0]
# initialize an empty list to store all the classification result
classification_lst = []
# iterate through all the models
for model in [metal_model, insulator_model, mit_model]:
# get the binary classification as 0 or 1
classification = np.reshape(model.predict(new_batch_df), (num_compounds, 1))
# get the classification probability for the positive class
classification_proba = np.reshape(model.predict_proba(new_batch_df)[:, 1], (num_compounds, 1))
# for each model, concatenate the binary classification and classification probability
classification_lst.append(np.concatenate((classification, classification_proba), axis=1))
# create a dataframe to store the classification result
classification_result_df = pd.DataFrame(np.concatenate(classification_lst, axis=1), columns=["is_metal", "is_metal_proba",
"is_insulator", "is_insulator_proba",
"is_mit", "is_mit_proba"])
# add back the compound formula
classification_result_df = pd.concat([new_batch_df_with_name[["Compound"]], classification_result_df], axis=1)
classification_result_df = classification_result_df.sort_values(by="Compound", ignore_index=True)
classification_result_df
def highlight_one(s):
"""Define a function to highlight 1 with yellow in a pandas series"""
is_one = s == 1
return ['background-color: yellow' if v else '' for v in is_one]
def highlight_training_data(s):
"""Define a function to highlight 1 with red in a pandas series"""
is_one = s == 1
return ['background-color: red' if v else '' for v in is_one]
def retrieve_classification(row):
"""Retrieve the original classification label if the compound is found in the training set"""
if row["in_training_set"] == 1:
compound_name = row["Compound"]
training_label = df_full[df_full.Compound == compound_name].reset_index().at[0, "Label"]
if training_label == 0:
return "metal"
elif training_label == 1:
return "insulator"
else:
return "mit"
return "N/A"
# get a list of all the compounds in training data
training_compounds = df_full["Compound"].to_list()
# create a new column where if the compound is in training set, it will have a value of 1 and 0 otherwise
classification_result_df["in_training_set"] = classification_result_df["Compound"].apply(lambda compound: 1 if compound in training_compounds else 0)
# create a new column that contains the original classification label if a compound is found in the training set
classification_result_df["training_set_label"] = classification_result_df.apply(retrieve_classification, axis=1)
```
Print the classification result table with each row showing the result for one compound. If a compound is classified as any of the three classes, the class classified will be highlighted with yellow. If any compound is found in the training set, it will be highlighted with red in the `in_training_set` column and the original classification label assigned in the training set will also be shown.
**Note**: the code determines whether a compound is in the training set by comparing compound formulas as strings; no actual structures are compared. For example, if a compound's name is read in as "A(BX)2" but the corresponding formula in the training set is "AB2X2", no match will be returned.
```
classification_result_df.style.format("{:n}", subset=["is_metal", "is_insulator", "is_mit"])\
.apply(highlight_one, subset=["is_metal", "is_insulator", "is_mit"])\
.apply(highlight_training_data, subset=["in_training_set"])\
.format("{:.4f}", subset=["is_metal_proba", "is_insulator_proba", "is_mit_proba"])
```
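As noted above, training-set membership is decided by plain string equality on compound names. A minimal, self-contained sketch of that check, using hypothetical compound names rather than the actual training data:

```python
# Hypothetical training-set names; the real notebook uses df_full["Compound"].
training_compounds = ["AB2X2", "VO2", "NiS2"]

def in_training_set(compound):
    """Mirror of the lambda above: 1 if the name matches verbatim, else 0."""
    return 1 if compound in training_compounds else 0

print(in_training_set("VO2"))     # exact string match
print(in_training_set("A(BX)2"))  # same stoichiometry as "AB2X2", but no match
```

Because the comparison is purely lexical, normalizing formulas to a canonical form before comparison would be needed to catch equivalent notations.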
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# 05. Train in Spark
* Create Workspace
* Create Experiment
* Copy relevant files to the script folder
* Configure and Run
## Prerequisites
Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't.
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
```
## Initialize Workspace
Initialize a workspace object from persisted configuration.
```
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
```
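`Workspace.from_config()` looks for a workspace configuration file (by default `.azureml/config.json` or `config.json` in the current directory or its parents). A typical file, with placeholder values, looks like:

```json
{
    "subscription_id": "<subscription-id>",
    "resource_group": "<resource-group>",
    "workspace_name": "<workspace-name>"
}
```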
## Create Experiment
```
experiment_name = 'train-on-remote-vm'
from azureml.core import Experiment
exp = Experiment(workspace = ws, name = experiment_name)
```
## View `train-spark.py`
For convenience, we created a training script for you. It is printed below as text, but you can also run `%pycat ./train-spark.py` in a cell to show the file.
```
with open('train-spark.py', 'r') as training_script:
print(training_script.read())
```
## Configure & Run
### Attach an HDI cluster
To use an HDI compute target:
1. Create a Spark HDInsight cluster in Azure. Here are some [quick instructions](https://docs.microsoft.com/en-us/azure/machine-learning/desktop-workbench/how-to-create-dsvm-hdi). Make sure you use the Ubuntu flavor, NOT CentOS.
2. Enter the IP address, username, and password below.
```
from azureml.core.compute import HDInsightCompute
from azureml.exceptions import UserErrorException

try:
    # To connect using an SSH key instead of username/password, provide the
    # parameters private_key_file and private_key_passphrase instead.
    hdi_compute_new = HDInsightCompute.attach(ws,
                                              name="hdi-attach",
                                              address="hdi-ignite-demo-ssh.azurehdinsight.net",
                                              ssh_port=22,
                                              username='<username>',
                                              password='<password>')
except UserErrorException as e:
    print("Caught = {}".format(e.message))
    print("Compute config already attached.")

hdi_compute_new.wait_for_completion(show_output=True)
```
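Re-running the attach cell against an already-attached cluster raises an error that the `except` branch absorbs, so the cell is safe to execute repeatedly. The control flow can be sketched with a plain-Python stand-in (the names here are illustrative, not part of the Azure ML SDK):

```python
# Schematic analog of the idempotent attach pattern above.
registry = {}

class AlreadyAttachedError(Exception):
    """Stand-in for the SDK's UserErrorException on duplicate attach."""
    pass

def attach(name, address):
    """Register a compute target once; raise if the name is already taken."""
    if name in registry:
        raise AlreadyAttachedError("{} is already attached".format(name))
    registry[name] = {"address": address}
    return registry[name]

for _ in range(2):  # the second pass hits the except branch, like re-running the cell
    try:
        target = attach("hdi-attach", "hdi-ignite-demo-ssh.azurehdinsight.net")
    except AlreadyAttachedError as e:
        print("Caught =", e)
        target = registry["hdi-attach"]
```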
### Configure HDI run
```
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies

# Create a new run configuration for a Python-based run
run_config = RunConfiguration(framework = "python")

# Set the compute target to the attached HDI cluster
run_config.target = hdi_compute_new.name

# Ask the system to provision the environment from the conda dependencies
run_config.environment.python.user_managed_dependencies = False

# Prepare the environment automatically when executing for the first time
# run_config.prepare_environment = True

# Specify a CondaDependencies object if additional packages are needed
# run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])
```
### Submit the script to HDI
```
from azureml.core import ScriptRunConfig

script_run_config = ScriptRunConfig(source_directory = '.',
                                    script = 'train-spark.py',
                                    run_config = run_config)

run = exp.submit(script_run_config)

# get the URL of the run history web page
run

run.wait_for_completion(show_output = True)

# get all metrics logged in the run
metrics = run.get_metrics()
print(metrics)
```
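`get_metrics()` returns a plain dictionary mapping metric names to logged values; a metric logged multiple times comes back as a list. A small stand-in showing how that printed result might be handled (the metric names and values here are made up for illustration):

```python
# Hypothetical shape of run.get_metrics() output, for illustration only.
metrics = {"mse": 0.023, "alpha": [0.1, 0.2, 0.3]}

for name, value in metrics.items():
    if isinstance(value, list):
        # Metric logged on every iteration: report the final value
        print("{}: last = {} over {} logged values".format(name, value[-1], len(value)))
    else:
        # Metric logged once
        print("{}: {}".format(name, value))
```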
XBfC/B8DPt5X5r9FCABaBayKlubGY1F+rd1l8YwuRHgIFlIylV6TTKXnoNJ+mii8qMRQhftRPRyuS6a6i19EsKA0oYqHLEZlVfyspbnx/1qaG+v6uTpg8PoMyv+2OOYLA8b4gQ/dActQOfML45kpm1UnjEaN6gujzyU4CPM/T7TaA0KY3V7J/dJP7HYySXqXE84g3SO1KP/gtQ5VQOd04z+3NTSlHc23/zF6ZmasAJ4yGb4eCyDM/zhUbYQjE8DirnqebN+WlsyHnrF7DGaPLhA4lr0L8L/GvdsB1/W9ZXLOUSFM/SXX94K04QrHsq8Dfh4y1X9zfe9w1/dW6dkBxcKwFQAMKXQEKiLURwXCvI2qDldwsxjDx/1uMpU+GZWHfQ/K/zTcoA1V6OYr0rxpYSGlnLXYgBdRhUUek3V3PjBPii/RX+Whg4YxC+bNvg+Vv/xqzB8GVhAQfCxBFZ15Mp6Vsuzh1SEWgam5tP5CaK1mYa0BDm5pbnwIlXVjBuHdBnwkmUq/J6y+NoQZZhqa0gtDNPNaVODot+npqvp3Q1P687rAIApHrXbOMlMACPoIzJ/RmJg/o3Hn+TMaf48qZjcK2LQyU/f8o+3bbxbNP4BHzefXTP+1qHiWQ4xTHnF97zfa5+PpmV6+HMlWcCx7G1Sq4pdCcDg3yPHvi9Y/7AUAI990gpiLrgqQAZySTKXv6asJWrvPY8lU+tMild4sUu9Qhw5UzvTPgcOSqfS1+mYvRFPQhIBVouldJ1Lx2cC1Lc2N++aqPV5GzfPfKPPzv2MeMSiEgDWoWIDrUalNMZQApBT7+yECwKdKofUHzdWkz8AlqE6cHzNOX4LKFjotmUqv174Pai2YzO5DDV2Y9AjR/H+sjR/gYeBzhpVgN7ob0QUWhYUNTenl+nnyfhKqMdJ98grKMnjR3Zt2+U57pqK2olvWaHV9b2mORjufA8zUvhWongmBsDACOIye8XfLgKckMPBe4OPGNZYDX3d9b04pmf+wFAB0BtXS3LiXMJugFvyNKN/1fLOFcZEby9wI/0qm0qeJIPBD4AFUhO1QY/z/Eg19hjRwWtPXnH5NCNgkUvJPUXEBxwI3tTQ3flrDW38KAQtQKUNzZTwxDKwQsAFVHOY7KFNsDCVQhkQAMGvWH9zS3NhQiAVOP0ejf0eLtn+baOe1xt/uBk5PptJu0G8gRBjRoV3X0oX5X0rvanpPAf+voSm91kjrS9HTrdCGcl/q1xw5f0bjl1Alo39Hd8De/cDZDU3pn1cluqYkesbEbjQ1f8eyE6L9fwpVgtiEWa7vvacJDYcawgmoglT7oQKrU8ZvbwBnu753TamZf9jED6eF3wD8FtV+FJEcr0ym0mvLVaDGFChamhu3B6agXATHAQcnKjLU1naR6KeZ7+pM0NZWEaVa4CpUydQ7geZkKr0s7JlKNUdScvlU2TSjgEXABYVUFisFGDXqpwuROWYL5RcdwMcWzJv9+EANwCjHux/Kn3rmFszDv7Bg3uxbSrDv9kDl1R9kaMaPAmcmU+m3otI1+W5rVKDbCcK8tg/7K6pi3V+SKVV5z7zW/BmNE1CpfCcYAsAODU3pZfNnNI5FNY860+BXLwNfbGhKP2vedP6Mxh/Tsw34OmBaQ1P6zfkzGmtQlscvCm8IAhM3ARcD1zY0pT/4nmVXJxQt+Jlx+ZNd32syBIHPiBBhzsFcVEnoTi2qfw49fftdqNiXEfRO1XwOFenfXA7mP+wEAI25HI6qIb2d/HQ68NewQjXlkrgNQaAGqK+szIxd+Fry9EcfHP+TFcuqSSTKm3GVySSYstdGPnnsUkbUhRoiMsLw/yimtPV6cF+55srwGR6Oqp8/GtXI4qJkKn3ZQAgB8n4Mykd3SRaiNtwFgOkL5s1+dCAHYeCjDlUrwAMO3gIFgD6VAjb23S9Q0fgmDVgoytINyVR6RY7/7ydC8vGo7q+jyF446lZUg5+3dK3f3M/zZzQmUQ1tzjHG9F9UHYCj
RWvWedX7wMkNTemn5Rp6sOBYVNtpvR7+6yiXxDdQZvqdjHE/i2rK82RDUzrIw69EWSp/ZTzXarFGXCVj+i4wC2kVrMHfgK+5vrdCixEYL8LOiaaeRm9r/L3ATNf3VpeL+Q8rAUBj/p9GBXJUoapJzQTuGYi2tOb9vv8ZuyaTYY5Ihf0CmzdVcNasD9h9z3X6AvaBv8oiawkRWMo+T4YQsDMqOHA7YUI/Bi4Na0/aj0yoBmWK/iGqZveWAG3A5AXzZi8a6IHobZXlfaVYZi6m7/XshwqsA2YsmDf7/hLtuX1RWUtTsigDCZQ/+g2hEx0imO+IKmldEYFvvIgquPVgMpXuikJP5s9o/Dqqi2y2MemwCji6oSm9wGT+8nlfET72DrmWOfY2ocVeQ1O6S7sGDU1pHMs+jJCgP3rXyjDH+BTwxZCa/gfK2KbkQdVvXd87V/tfWZj/sBAANMafENPOdfLTm6jI9QcGy1gdy64TCfAL/XXPzo4Ehxy5+r6jj1t2U1V15j/JVPqNfOa9AcLfWFQwXuCy+QmqvWhLf48xpFPf50UYOFBMdSOG2DYJ0sDatdd2VCOcl4RovwQ8smDe7OX91TK3D/g5TDSvwzQtdKjRsgAXOl5Wo7JSXpbjPwvmzX6rxPvsbNG4x5VwbbWi/NgXJ1PpP0elLVqA30dRwdOT89xrMWBJlb/QYj7zZzQehio1vHWO8W4UJfG8hibl6gy7nmPZE1A9Tgrp5fEccKbre8+bjNux7E/K2EZmI9fAD13fu6QUKX7DWgAwIlDrUL7CwF8zHzgvmUo/MZgaFQ2EAACwaVPFd698wL1isDD9PELAH1EmRoBfAD9LptKrB2LMIYLATkIMPika0QQhNJWDYBq7UB3t9CMo77oYlfr6lhwLF8yb/XaUZx5kjN/Ex1igEVXTY0+6O/XVDJIhbxQc6PhoRUXEv6PhZKHgpLOcOGlpnjQL7AAAIABJREFUbkyIJfQ0lG9+ah94QAsqjfq/wI3JVDrdF/oyf0bjj0Qbz7aXHgNOb2hKv52N+ct1PipMdmIWAeJR4PKGJlUBMUI3v4NQDeHyWZ3aUJkEs1zfeztMa3cs+xgZW5jbZBHwA9f3/tifC3RICgAG8x8rjD+I9L8bOD+ZSr862LoUigBwHSr4rV9vDVzh+l7HEMDteJS/92xZnzejGg0tHkB3QC8CPG3W3B1REbv7oCKId0C5MMrBhDajarS3yLFOPq9DxU2sRaUbrUClDC2TY+mCebNbcz0XMKi1/ajjFmFgGqrWxBSUn3dbVMnuremdatYX6BA8BALWWsHHWsHJapSpeoXgYblYW5YumDd77QDTzUAI2AUVWHe4CE87kjsrbJ0wqbdR5XQXAA8H1UD7wPgDK0AdKvPji/TsWvgWymx+SRDtn435y/W2QbkTTtG+XoBKTWxqaEo/FJFWf6iBO5adQmVFWfQuAd+BMvn/FbjK9b22bCZ7x7J3Q2WifdQQ3O8CLnN97yFd+IgFgGjM4tcoP39CmOucZCq9ZDC2KJYc0Hn0LvJQbvgKqglF1xAR6upRLoDvyM//Bs4NihANJE6zaWPTZs0NrAFj5RhFdw/5UcKAtpJ1OlJj7B2iPWzWNMRWjdlvkiNUu8/F4IeCVl9GPI0RQWw8qiRsveBjHMqvnUSlq5mCWkbmt0M0+E0GPjYKrjbI+w0m3rJp8oPN4ibvt0GZ3reR+RlDd456Rp55jRzLgUVBhlCpLIpGEN8BIlCPl7l/EXgmF9MPud6uqFz6USKwvNzQlH4l7H4FCAFboVyAU2W+KlGF314F/uv63tv5mLdj2QkRUoMqtGtR7ZWfdH1vaX8z/yEpAGjm4npU3mSQsjUXFTi2drCZuLUFUCHCSn+2pe0CDnZ975khZtmpRjXUuEh+fgL4UjKVfmWwCHdRGOu0WXOrhKBW0m3erNRwk9FeO1B+wM4F82ZnSj2WLYTx55yHabPmVhr4CKOBnYKPTsFN54J5s7uGEz5y
NUfT5ySZSmcK+X8phIBsvwMFnTN/RmOioal7/FEZf5ggYPjya2WO2nSlKg/z1xv8BDShI7DKlsLnf9GtDwIwZ+b0vOcF5wwpAUBj/luhKvpNk58uQAWMbRyszF9bCNViptyD8nde60T56Ba6vpcZaniW9+egUpQAnkEVFHl1sOO5P4WMGPLPIZTG1TFc8RG1K2p/QzFMu1hGX6ggEIXxl/L/OuOOIAhsj3JLbo9y72yPCjadN2fm9A+GlAXAYAhbiza4G8p0ehEqUIyhwBQcy65HdYvaTwSAcjHmhGgvDwH3u77XNoStPV9EVexCBJrPBu6AwWAJiCGGGGLoC9OWcxNCtyuyvCZQbprtsxwBw89Wm2ETcN6cmdOvGTICgMH890AFTOyG8k1dkkylLxpICbVA5l+JSie7qp9vfaTre48MZc2kpbnxDFT8RBLlN5uZTKVfjoWAGGKIYTAzdjm/EhVvUgNUa++Dz3WomJWJqEDDSagg1m3kdVtCujeGwGY52kKOVwFnzszpC4eEAGAw/4OBG1BBGOtRZWOvHCrMXwSAOpQ5+4x+vvUs4Heu73UO5U3X0tx4JnAFKkDodeALyVT6mVgIiCGGGAZIawcV1JtEBfkmtc8j5ahHBQiPE9qlHxNEq88HHcL31qMCT9fTMzh1PSrrJMg+WSnHcmDVnJnTV4Y9X9VgRoTG/A9D5WJORUVOzkqm0jcNJeavQfUA3HMUQ7zmgzQK+mNLc+MmEaJ2B25uaW78cjKVfrgUzZ1iiCGGmKkH511064NB5khwjBEmPkY+jxbamtRe6+UYJb9Habi3Rpj1Knm/SvtuNSoFcx3dKad6KvD6OTOnbyx2HgYtU9BMvwej6vrvLA99SjKVvmcoMv+BKgSEampx+VCoA5BvPcj741G1tqtROcnnJFPph2JLQAwxxMw9wvmB5r0N3XU7JoqWvg3ddSNqtWMEylxfJ++jKM9rUfUflmmvwfugXsdmlG8+9HXOzOllTd0elBYAjfnvi6pbP04m6zSkreMQ1vYGIhe/hfJnHJQV9CDPZCr9D+n5cD8qm+L3Lc2NZyVT6cfL3Uo4hhhiGHwa+0W3PlgrTDzwlwc+9ImowLiJcmwlWnkFKg1Uf62KqLGvRFUVXCTHEnldjKqOuES08x6ppMb7rjkzp2dKPReFwqCyABha3lRUSd+tRGL6UjKV9ofyopb8z28Bl/UTQw7wOz2oMjUcQBMQjxIhoAIVGPj5oE5AbAWIIYbBy9xD+E/Y+0rRxrcT5r2Dxti3k/fBa5Ry3BnjVX/fJkpmwMQXGe8XAR/MmTl9bdRnLBfTHpYCgMH8DwQeRPlSFqPq+t8+HEy8jmWPQ5nkp4o02FUGYSCQcBNiQbne9b2Nw4mIaELAscBtKP9bM3BiMpV+OyazMcTQf1q49p+gwJJeAMt8DSLegyj3SfI6UXu/Lcr0ng+6hHkHRbQ6tKMTZUoPSjEvltcl8rpImP6iOTOntw8nxj6kBACD+R8hBH2SIOqbyVT6tmHC/MMaRGyLCmgrFXQB813f25Tv3sNIGPgCKrVyLHBTMpU+PSbfMcRQMqaeoLsLZp0w5jrjuxGowDc92j14P47ussz1EW+7ie5Sy8H7Tcb3QaS7GfX+4XdRzezDkbkPKQuAxvxvQOX5LwW+mkyl7xzOwV2OZT9Bz+YQpYBfuL737eG+eA3B8WxUw46bkqn0pTGpjyGG3oxcZ4gX3frgSGHa9SgLWtAjoV77Xn8/KuR9EAkfBTpR0e3rtdcgwt18HzRWMhstrZszc3pLuYWeWADof2K+D6qj0l5ilpmZTKUfHObMv1qk1foSX/pp1/cO3hIWsOYKqEK16H0vmUpvjrd2DDGEMsNPoIqRjTa0d/39VmTvWR8GgfYd5KCvpjsnfZX8FjRS2mho9EFTpU1RzfAxYy8dDKYsgAnC/FsC5h/8MIwDuqpFKi41VGwpC1jLDugA3jAtAzHEEEMP+AYwI8J5esraMmHoi1Em
9uXyfg3Z/e8f+uHLlcoWM//hJQA8DhwtC+7FLYGQu763IegCVWLYsCUtYnONxMw/hhiywu+Eaa+hOyguCIwLguP0viGZbK8F5t7HDDsWAMJBGP1m4D7ju2E78VpQ3u3AyYSnpkQFM4XmhnhpxxBDDCFM+G7g7v5m1jHzH5yQiKdgQIWAhOt7GceyR2iMP6hEFdVslhHJfS3K9N/m+l5ncO14lmOIIYYYYhi0FoAtGDIAesqeY9knAA49zXC5BLi1wGzX9/5lWBdi5h9DDDHEEEMsAAxGyJKXvwiVzx41M+B14N0I140hhhhiiCGGD6EinoJBB2lUo5uoFoSrXd97eUuesDIFUhZ174EcS75x6mMbrOMcKuOLIYaBWqNh9433Sx8RNRgmMBiDY9mVjmVf5Vh2JsfR4Vj214baAihknMU+U3/MRS7GP5hwETLOmlKPs5zPK/0zhvz6NgWwGIYuLwnZU9X9tBdKvu8SwxVJgRncseygy1MtkAJ2RdUcqEVVlnof+C/K9N4JdA6kCT1Aqut7OJa9D6p50NGo2thdwEKxEvzC9b0V+vmDmSDK8wS4SAL7AZNRVcRqUdW+WlABjW+h0pHaxcrRJXjJGNceAdwFfFz7eqbre3/pp2f7M6CXHf4pMNf1vbZBhoPTUKWSx8hXy4C9XN9b3dc9JvitFhxOQfW4GC33Wo9KSV2OclO9g6rN3qUf+toVYvptwNVud5vre58fQvQnBfwPypXXBdzn+t61w4CuBiWBs8UXdbi+1xG17LhcL1e9/07X99oH6VychCpcFzQhWgxMdn1vc5nvWyX7qoZuC/7Bru89Xcz1hl0MgMH8xwHHA18Bjszz10Wo9LmbHct+WSLp+52xamPH9b0XgXOiPOsQwMV44FhUIZJDIl7ifeAVVKe/u4F7jN8rgH2N7z4G/KWfHvFA4/PeqCpqbYMMFRM05o8IYAcAD/SR+dcB04BvoorL1ET4+1rgZcHrA6i+H3qjqkoRdnX4+BAjQ/8LnKF9nupY9i2u77UOcfLaADwl61tnzAEz+iXwnQJo0pGopm8ZYw0ggsathoA9mGAsPTsQbiv7f0GZ75uUudFhT8eyn3F9r+CCS8MuBkBjOIcBdwB/iMD8QbWVnA08BHzdsexqTcMZyOfIal4e7Mxfe3+AMOU/F8D8QbX/PFo0wm9l0xKMz+/142N2hDC3zkGIDrPjZKeMtS/MfzxwIfAoMDMi80esA4cAZwPfQ3WA0yETomGuHGJkKGE8Q9swobU7aQx/pHZUCzM8xbHsPQugDefJXFUY1xsp1xs/iOcizAqyZIDuu44iO8oOKwuAtrAOBm5E1YYvFMYBV4pE94PBItAMBaYfNnbHsvcTISzVx8sti3jemgF85NZBKgCEEZH1fdT8LwfO6uM4AhdBlPOGElTR071aQ/S6HoMZ1kUQ7o4AXs1lnZQ1lAA+E0FwHUqwvJ+ES0IUj1gAEKgDrs7C/O9FmR3fQZmwxglj+qxoIhltgr/vWPaLru/9eTi30i2XICabvB6wszD/N1H++zdRzUMCLWAEylw9TnAStBF9d5Bqevk251DSYKIKo18DTstCiO4CmlFNYDYLTmsEp+MFr9ui2n1/EIGpDEW4W+jPSJSV6M5hYP6PwpDrRfm6NgK9PBLlLhtO0B8Cy4ZS7OVhKQAI0zkb5ZfUoV2kzSeEKLVrQWm1wMUoc+bZxv+uEN9dR8zWi4KPinClQyfwM7GyBL7ETmGegTmwAmUC1I/B2OGvZUtDqGPZk4HPo8y+OtyFiu9YquE0I7hMhOC0Ss7ZNAyn6WZUoG4Fmn97mCoSrfTsHLiXY9nbub63KJdyAJwaYl0YQXRX0mCEsisAru+1ldItPRwtAE7Id590fe9hcwNK0MRG4H3Hsr8nGsoJ2v/GowIIrzYC2kI3cpQNXkCELEZ0dJgmRrYsgFzZAcWOoUBBbATKf29K+Ve5vvezkPFlNAGh3y0Whc6hJliW7H755jxX/Ec/
MpeDgY8Y3/0XOMn1vfaQcXSWE2/55qqY9V/oXBqZLkgWSFs23EW5X641Wcgz6dkaRazvKHAcKjg3YNx7oDJBFuWxIp1k7KMfAP8POKgv+M9FJ8tJ8/oy1gHcy8MuBmBfYGfj6z8CT+UjFq7vLXcs+1rgGLqjLCtEUr3aEBwCBhf49hLABskcCNLcRmjz2ymCxvoguyAfcZJ7VIpZrVbutRWwybHsNtGcWoOUsxDhJrjeCNHWuujuFbBZrB8jUS6TGnmGLiFcLa7vbcxGOCLCWGC68d164PwSEJ0+bUB59gBH1TIPnY5lbxRitFHmoCgToGPZW2nzWilz2ylWjPXZcGaMsUrDe7WspSp53+5YdofgqrUEuIo6h1Wo7IEqg8FfmIX5lx2fWipZjTZXFYLbhOC0Q/bLetf3uvKNUcPBSNlzespVF92tbtu0/aTThmrBf2AS7nR9b2M+IUXWTXC/WqDGsewWuc9G1/da9MDkEJpUKfcNNNG2IC1Nu0e10JQa1/dKEbT2KvCcCIagXDtT0Rq7hTz3AaieJwG8hnLL1hSCe20uAjxVa/uu1bHszYL3Ftf3OqMIjRFxv1nuU5BSpOE4CJwcAVQJjtuFh7T2x15GY3DDCayQ725xfW9zxMl8XhazDlNkMZjwQ1Sg4dXyeoxj2QeiIlv/jvJtvyfHm6iMhK+KeSxUKzAW4K4oX+vtssneRaVOvS3HXYDjWPa+jmVXhGUsyOdzgT8B81DBeOc7lr0zKk3pennm9+T676PaMl/iWPbBfcRFvTALHW6X5kelxHlnVIIh76eIpvFn0Vzfl/l9A+WTflpweopj2Vvn094MmIRyeVwO3C9ETZ/b+4DZjmXvpWuMpmDpWPZ0VObDr4E7ZU1+INd7Q17fB/4DeI5lH+pYdmU/ZK2MQ9Vv0KHF9b3b+1uok2fdCxWIeBFwE/Awqk7G+zJPr8v7l1ApZWc7lj0hG061QlwjHMs+HvgN8Ixc410Njy8LLv8A/MSx7KTx7KfInrsauBaYExQ0ymI1GOlY9ieBK2T/fSA04xW537PAbx3LPtGx7PHZ6AewO3CNjOsPwI+1+yVlT9uAL2u+FDBClKweViLHskfnsGCdZJjLX5KjosC9PEH2229Q7t0PBO8vyR55GZVm+j+y78m2R/S4JceyPy308ukQ3L8o8/c/hViHHMuucyz7WOAX2lgXCh4CujPPsewTHMuu768MtOEmABxhfF4szDKSdCaLxjy/ht655ogmezwqT/UEIfr/kk18lCEhjkDlp/8GuMGx7Km5zD6OZX8GVWTiKuATqOhac0yHoHzpdwNnOZZdFbJoxgCNwInAmahc7QtQ/skbZPNsY1x7N1Re9x2OZX+mmIUo50+id5GP+0vMKDJEiyIPxvU5VDri7wVnk0JO21lw+hfgesey9y5gDj4nxPBcVOCj7idPAPsLUU4LkQmDycA/gUtRMSmH0zsvPrjeVODrQpDO6getYbQwGR2eK1BIKiX8VoTY74gpeq8smtk44NPCjK91LHtc2DwJnmtl7u8QIXnHLNat/WU/fT+EPnxB9t3pqHiJE0P2cHC/rUWAuVsE/ikh99sBFXT5d+DXjmXvkQXPO6FSMmfI8QPHsrd3LPtEYWj3yL0OBkaZQkmRUCNMVofDQuiKvjaP1XhPuwhZK4jgQ9do5H6C+78KnnbNgqdjgF8JPZthFLAymf8Ewf3dQi93DrnmeKG9+xVgpRolfOFO4KtZxrqz3DMtON62P4SAYSEAaJNkapxvUEAKkZiJzDzyWsIzCszgpb1RbXzzwTHAhYEkH6K1f1II27SIw95eNMXjpQWwySRMP/VIoqXkTQKucyx7bBFMpUKYkwnNZUB/JuImnFHgvCIC3q2OZW8dcTNGJai7A5cGgqBx3cDVUyhj/gkq6LKcRGMk3bngPXA6QMFtY4r4zwkiOGerr3GhWPEKoY3mHjODhjdgxAQEmj+qeuS3Clg7p4rVZ2LI2gmzht0igs8Z
hhCSoWfwXtE8xPW9ZcCThhKxS5b9OMVgrMtQtVeqC6D324ul47gCxrk38DsRPsLcJxWC+3NLaaUSuE6uW0W0iP0zRVGsyBYHFgsA4RM9wfhpCb0rTOUTIt4z/lNF72Il+YjBvwXpf0GlQ5lwomj2PQJ0HMueKJaFHYzzfdFg9gY+JeZFHbYCfi6aTlR4WyTdwFz4eMg5E1FBkMWsqzDp+Y2BWBtStOanIfPzKjBXNK/zhKiYBXL2FQ2iEPhA5vZ6sQi8FHJOSrQ1k3muQLmU7pL//lLGeAHwXdF2Lxbrlg47IjEXZWTGW9G7CtnCAdz6N4vV7Tax6lwu++B7MldOiHYKyhVXHcIEpsr/TOGyHWXyv1mY6Y0o19y9YklrDrHO5BRUJXbhKGCWcd5yYA4qgPZosRiZ+eWNYlmIgusjCC+ok3F9b20JcfFn4/PHs/SdOJqenU6XuL73RBQBQIqi1ch+PThEIfuV/PZlEfLeMs7ZGvieY9mTQsZ1HL0zlkC5ES6WNTVXhIi7yZPzr5n+v25cN4GyNM8SK9JBqMqR7xiXOFr2uukuKmnc3rBxAYgpzXyeFiKWZdU20gpDu68SU1I+eFG0i9GolMMgV3p7IeYZ45rTg9gC7d4Hidapw2Wu730GuFe6/v0b5fc0Ccc+wKFCWAJoC3n+tcAZru9NFkLyFTk+LgzJlFA/X4T0mQgzAZLFXB/12kW6IkCZZHczfv4LKpr9AmH880RKP5zecSDHO5a9WwRi+yCwp9yrEVXG+cuu7+2DinI2UxkPF21Gh0Uo0/9JQsi+C/xIiNAvUOmTc4R4mLnlu0vthXJZ2cL2waK+XrcPmo0ne+502Q+OWEKukLm6TDRf8wY1aNHm2v2/adCQhKyT7VHxRWfKGjlb1tRxwFeDAEgNWnNpenJujSFsADwia+cSlLvsftFK96J3lbkTHMversD52iwC09wQa2lf4e/GZwsYERKw+DHN8tAh2n82oSmbEGpWBX1XLA7ny16+QQSAKUCTKZgAqZBxfcKw4HaKsLcfKt7rCtmHX5c1d0FEfjLb+OkJlBX4atf3ml3fe9b1vcuBkw1FoQIVs2JadUeWEmnDKQtgTMgi2kjhaUjrQ0x6UQSl3wL/DILcNM2+07HsU2QD68T5QNFIW4Uo1AVWAQ2ed33PlsWakYUVBNFdjQpmazA0A13Y6AohRItd3wuk9XZtcXU4ln2PmAy/YJjOitEqTZ/nujxaeo0wNVNgqZZN/47re+8UaRk6ip5m9XdRkevrTZ+g63svOpZ9qWz+wJ9ci/KpXpbnls8B7+kNQbTrXiLrQO8fsAfKv/+BFtmcCTEh92IgkrXygCEw1hdiSi0QElmIT0uOcVai3B1mA5mE7NeVwIvF1DCXOe3Mtb9lnjY7lv2kCPZbG9Ytc52cZFxiPvA91/eWF7DWyEdzZL2NC9nvZ+lrUnuGVZKmrAfbHSJrJ58A1orysf8B1VRpvTE/pVofq4DHUP5/RMAY7/reOu1+O9AzxmEzKtaiEGveoagGYjqc5vre0rBUbceyZwpN1+ORjhPBY7Ocu6PQHtN6fI7re11ZUvjy1iWRoL9xhvJ1qet7r4fEIDzrWPaNYvGp0SwWR6CCW8sCwykIsCaL2a3QKkltFFfRKShw8uFi0YSADcDfjPOnGkwyCRxqnPN/YQRG3idE2tXhoxHGuTLb5nd9b40Qix5rJCyiNwKzqMhlAg2Bj6IajTxnHE/LZv1jkRrmtqg+Dz00Btf3XjBxZRD+VwxBOYp/vcIUQg0/7a3G+TuQJW4k1320sS4K0Y76W6jPVQxplGi1CwycLkBV5Py7aLdlAW2e1tM7Fsg0yW8Xwlj+VajQWQAcbXx+EliZLZ3X9b0/GYJFPb1dnmFwuut7R7m+d10gXJSpg2hbiBWg0VjLB9AzNmCV63v/KfA+ZmOo94HnwuZNvmsPETIOMfbJtiKM63BHNuZfgOXx48Z9npf9kA3+aVh6RhIeSxVbAEIg
TGsqhpHXlFIw0hbP3fSsnT5amL5OvPc2/r4g20aVxflf4+sd85jSMoHFIUcWwlK6awYEzHwChTWPCdNiKyOY9vIRmGJggiFodQbzmo3YClFZbgg0uxRLNLX/PB4iMIzVz9FznOVzlWjL9bJekmKZqKSwgMZSQNh+qouwFnIJD0WXTw2po5DU9lWS7piFBvLHx+wSsnfeLoOmHIDpw14IrM1zn3dRWSIfMi4JFMs1h805LBWlFLY6HMt+yPh6JnClds9phtDyzyJuZQqMT6PiGXLtuycRV6bA7qj8++Cc8Sg3jw73lmC/72/w2BWBNSkL3XmNnm7SGsIzlWIBIATCTMy1RTDzMC2qFDWenw/5bry2CKvpHdU8x7Hst7Iw9QS900mimH8TeRbtBjmS2vmjinheU2BI6u6REChXSdikIVx0IcFBOZqVbHQs21xPdRI41pf+5GG51+PEz5cxTJiHofyouwnzGqUJAUExkZEhQnBZ6pHL+MLq2Y/uw2W7yOPuyMH8gzkbK8T9QJRZf7TMUz3dBWJGRRBAxxu0Yi2qR0W5mKaZTtkAXClFvrIJUhND9ns+33l/0vhFwmwDS+ShkkW0WlwepsD6x3x0KQS2CRGK8rl5zeDjsajsBf1zFHpdKJj42t+x7Muz7NGMrNNJxpxUlxNhw0YAcH1vSYjZdIwIAYU04tianpHOnZSm5ntYS9OgWlm2lJzPFXiPUnTCC/OrFlqfOyOWBJNo1+s+yBBrx+9E+BiJ8rlvXYLn2Yqe/r9MQNjzaJVhFow6iiz/KxCWERK0Pu3QKkxegIrvKFT6L9Z91Zf1lWuMG1DBWPvI+j81RGgtuH664CjjWPbRqLiMKRQeHJXIQwvbKW//CVPY34PeZuh8sJrB1X1yGSoQVndFNqLiD3aiZ/zLCtf3Hi9i7W4dsubzuReXZsO/BE2PziLMFA0S/2KuqclIZH9E6MhFq0oBw60QkElgJxCxZKMmPOxgaIwdRG9FW+hcJ/rAZLMt9EwZ5jVZhGb3dsj3k3PM/TpUFLaNStl7v0RjD2Mw1YNovwRNcwJz/9dQkcOTchCF5SjTrkkc6iJoun2BDSHC9JQc529GBaue7/reHJQLpM8CilYF8BqUmTUb89+EMqu+GLIv6geYVpViLS0fTM2FpMT1fENIDgKKd6NnavBfDbrbXuTayBRJA/LhorMMdKdQaM+iOMYCQBYwfeI7RNUMtI20YwgRe7sEYwvTZlvp9l+tzbIAgo55+Q5QpU/LAYUu5i56BtEFkDI2/YdzL8dm6bzYXkJBZoOhySXIkdaprYOaEMbb1770YfUkWjRiExSGMdefi/IZj3B9r9r1vW1c39sflaJm4qmcHcla6d2Wed9cc+n6Xrvre5tKRFR1OD9krz6KSqfaFahyfa/O9b09Uel77+RZ06ZlpzZQBMpUWGlTyJ5pj3h0yLoZjC2GX6ZnOtvHJTX1Y8Z5Nxr7rVgr1OgIfGzrbIKDZN1sjPifQoShMHdcIThuF/wuLieyhls3wMdQBXMC2F00qdeiNIGQGvmmlroReKEEYwuLdl6hp+HJotQJ01GGmWxIgMzlUpk73QJzFL0LhpQbWg1iWxHgOEcHtboQq8fmoKFIH2CfPGvAzAwB+Jzre/8w1+oACfBrRcjUI5MPyjK2soGU693PeP5ngM8YaWfBeGojWH1WGgS7HgkcLNMzmSbmO13fO6mIuRhsLYZfFYvL/hqP+SJwpMHEnyhy7MtChOqKPNcyrVSrDGF0VRZ6vaiPc2EKeb7re8cPJhwPNwvAPSHfnZqvUYo2uSl613h+2/W9KH6YTDbkBZKw8dNyQ4LfTO8yxLuVUQMpBIrRxluQaHsNZjiWXddfjS4CBmtYVyoDRpyjleiO9Aw2ygQaZDHj1v4zPURoueXpAAALxElEQVS4XJODUG0AHsrRBjjTz+tgDb2rGialzDL9uFa3oXdgZ7Pre+uyzFUn+YMN3wr5bpegsFYZnutl85mkEU3B
wvZgAWFUHSg3wGbN0hL0xgjgtj6sXbPy5AHZ+Jg2l4eGXKNT+30NveO8Di/Bfn/TFColaLWg65YTx8NNAHiC3iUav4pWNlPPgzWKMYxHVRMzI95vioiwihyWhTpU+V+TAKw1mMGLxjnn6OOOsODKRYA3FvGfVahcbx3GI8V0cnQ0CxhbSZib63vv09uMtptj2fsH9w8ObaMdQM/KgR2iYebbjJkwvGgtYk0N7x16pxvqUI3h0zcq55luiq5yCgWSAfEsPV0qlcAPgy6X5h4zcFyqsSVCPlfn2Bdd9I6N6DKebQm93XCfEmEwa+vuPuy5x4zPhwJTw+avn/d6KeARuvPZE8L89fm/HegqkrGZdQN2AfbP0eCnjp5W4cBa1KHdf6kwax1OMmmvMd9tERj2c4alYSqqBkFOmt6fOB5OpYCDib/YIDZVqO5KFzmWPUnPsdaY0FGoOt/HGpddgqrpH0UKmyzSb4+FILnJ19C7o9tTBvFfF7K4j3Qs+/JAa44iIYacVwqiW3CqlgQE3RdiXvuKY9nXOpadq8JgBSUIitQ2z+OGOW5b4EdB200j735PVG3uemMOm6JYSqTok7m+dkL1BdgzRAjUfervhggA0038yrraKtBSNBhTLsKhXe8RVO61DtOAW6SLZY/5NHA8gtLEKKwy8JkApgb4DLnvASH7b3zIsz0Q8lyeY9m75Np3RVq0nqF3lsF1jmUfZs5frr0+mIQAbT6eRvXDCINlwEviey+GTj0SMm9XO5Y9OWRuEqgKraZb7V8GA38/xCKzn2PZPwrBc9Kx7ONQbo188A/juSYAPw1arUfBcch+aSslzoZTGmDw9gZUwR29tOM4VHT52Y5lLxJzX7sQzN2FGIQVCvma63ttEX0wX5Fc13eEmSdRJt2P0LszVgvwiDDJYPwdjmU/Qu+SpeehTOdBb/A2bWGMkWM7eYajgLe0sVZS3qjwfMLYw6jqd3rv7BpULfUTHMteLExvjTzPaLqLcmxbQiH2OhnDZG3uTgQecSw7jQpYHC+4OobeucYPSB+GfHCmCDabtbFNRAWjTjDG2gbcL9UXAwiLNbnWseyUjLFeW1N70Ltp1BHA1xzL/oXrey3l2F+u7y12LPs6YY4jtfk8ElVjfTkqaHaZaNnVMp+TCC+2k9eSEjKWFseyzSyRA4C/OZZ9C8p1sh0qQLEBlYJm7oNvOJb9qOt7z2j75Qp6lsEG1cjlKMeyg57wG+SZRss9xgEHB2nIcq2KCMLUem1dBrAPqlX0m4LvFs2qM1LuuTUqmv5mVDnrDYOJDmtz8B9Zp6Zl5l7CA55rIs7bSlT3U72Pwr7Ao1KIKMhC2F32sulWexZ4VhdApELi02IpTmjC92zHsk8V4aBDrIKTBBf1EfbMi45lP0XPNvUNwN1CzxdqtA9UvNRouccE4HHX987uqzK2RQgA2uJb7Vj22aia+Nsa2tR2chyURbrSYY7re+kCbl+Hav6Tz1wJquezb5qJRVP9MyrCWR/3ZDkyea6/Ez19mXVETIMsE7PocCz7AlRcxREG054gR0p7rkK1w4DB5hvHSsey56AKj1RpwlEQ86EHXyZChLUzDeKWDcaJ2TgKPBxYlzQtrlUI+xeMa/4kzxj1+bhQiIhtENdEifYXru9d71j2gcK8dMY6Wo4p9K79XwhOo6Sd3i6m3XoNn0cDn4w4V7sDtzmWfXoQaOv63nzHsm9FujRq150oR0OWZ9oZ1dVOx1kiz7psdyz7l8J0xhuWifHGvcL2++6ytzcMUnL8D+AbIQLAfXqgpgaj8ln9ZN7aHMu+GtVwbaph1Ztp4M7EwSbAc33v3RCB5XZU4yhdcayVe+wVcR1nQq57Dj3jZoIspEMRd0AOHIdp++NLiaRhFQOgEfznZIE8nWUSs6VLdYpE9nXX9+YWUTM7EXLosBnV4eu7umVBbxwkBPyOPOPOdv11IVJ1jfH/0RGeoaJI4h3GMFYKLu4ge0GlKOlr7Vn+Vx8iiIWN
42axpqwPYU4VIWPoRAVlHuL63rIszL+YudkkZszTXN/boK8BSZebS++YhbAxttFd4z4Tco9cc5Iodn9pe+I8VOOStYSn9yUi4LUzZOxhlScTIWO5RUy57XnmKiNMcm2WNWRqVGejitl0FfBM5nybZa2rw9al63uvoDpxLo+41/X7FtPorK9QXQANfhRV/nyh0OHXBF8PG0JvACOJ2BLY9b03gC+Jxawrx5zpjHk58ANZN2Z8Dq7vvS3r+f0I9GkT4SmYo829IpbDzxIeR5ULxxnCq9uOLiVCh1saoI7Y5xzLPkI29OfErDxaNmegEXUIYlrEtPQk8EtZYIVGXwaRpHVy/UrZoJuFSC9D+ZG9sCYT2oJZJVHVtiycicLkamXMnUL8Nwlha5F7Pwa8FtJb/hmxegQEY36e51iBit4fpZkf1/WFYUj1v5Mcy24UfOxu4KJS7tNuPFur3Ps9IR5hxPufcr02lI/5zSzjSLi+d7Vj2Y+iCu3sL2MICE9GcNUic3AvcLHre2tzaP6viEm2UvBTq5kyq+mOPt8oa2AJcLvre7/KZlFwfe8Fx7JPRNUDCNICawQXG4WRLUH5qx+QPXyuWLW2kvGYWSsvokqbVmvPuLYvOJX3F4nGfL5oNGNEe9fT7oKc5k0y/lY5lqLy9peHCHqPo7rKBQLjM1ksEac4ln0xqj3reMFlhayFYE+/Jha3gBAfLxp6veBkg3HdTY5lHyP77yRt/wVld9uN53id3mVjH6HbjZcRRrU+yzP83bHs94Dviyl7rNCQKrpLJW/W7rlOmNS1hvsooEELNMGnhtJUBw1glcxni6z5ijBGqK2PU/K4CXR4W/Ac0LcuwktnB/9/UipB/i8qu2Ybbe0FeGqVMT+P6knwZNj9Nfpwh2PZ76Da/u4juKiVudwkc/kBqunRGlRb7g1yvy5zT2n072+OZR+uXXe00KoAxwHdC2jeepmPa0Ie/115nhEypjpgdbGZAgmGKYQ0CtlJTJPBhq6QCV8pG+pVrWBJXnOvY9krDHPMr1Ed/8YJgmtl464WpC0ImnZEqUkg72tR5TN3kjFXCdLXy8JeAnwQlvusvW4l/28NxpOrnr0EzgQaWBeq+MzyvuSihuBiojDtiXKvWiF0G4TArRSBaZHrexv1sZnBQxJdv6M8W3sObd2c221RzZe2E8bRJZv6PVRKWWuE56oTN0Y93fEYARHaStvUy1HtjN/Itb6M8SVQfvZAWGqX67whmqP535SYorcH7nF9702DIO8s62cd0CmCZp9SjELW6j5y/zEypxlNSF0l41/i+t6qPOujQvbqZlnvq/V4mZB7byMC0CR5xvVCqF8MuVfQknY7wc8/jTWmX3eECIo7aZamVlmfi1Gtn9eZGqVGb7rkHq2u77WGPKf5nymoQNHxsn465H5rRShdJFkt2eYhcJ8EFryKUuDZoCsTBCdVAZMNCegr9tr1spc/ZIQikOWbtyTKnbej7JWE/H8JKuBwUdj/wmhf8CyOZe8hpv8xso5XiYL1urY/xwmOKoE2iU3JxzemyHXHC/MOcLxG9seiYLwhYwrmKYj7Wi2C4bpiW2oPeygkSjZX+k2YAOBYdkY7vlHqsfT1OaOkmUS9ZymijYu9RinmotQ4KOezRD0nSrpYrt8HI05LuWajPqv5W6FzEyVtq5RrKWx8fZmfgdoLpaBJBdDrsu/PUuO41Ptgi7IA9INgYVoAvg382kwFjCGGGGKIIYbBCBXxFMQQQwwxxBBDLADEEEMMMcQQQwyxABCDCZqvpTqejRhiiCGGGGIBYAsBLcIzb65yDDHEEEMMMcQCwPADM02sjf7vzhZDDDHEEEMMsQDQH6C5AD4GzEA1EDoa+Cv9X5krhhhiiCGGGGIYACEghhhiiCGGGGKIIYYYYoghhhhiiCGGGGKIIYYYBin8f6QOdVsixkBdAAAAAElFTkSuQmCC" align="left">
# Basin-scale effects
We have seen how changes in slope or temperature can affect a glacier. In many places, there is not just one glacier, but many.

Suppose we have a river that gets water from 3 different glaciers. How can we use our model to find out the changes in *all* of those glaciers?
First, we import our modules as usual and configure what will be the same throughout our basin.
```
# import modules, constants and set plotting defaults
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (9, 6) # Default plot size
# Scientific packages
import numpy as np
# Constants
from oggm import cfg
cfg.initialize()
# OGGM models
from oggm.core.massbalance import LinearMassBalance
from oggm.core.flowline import FluxBasedModel, RectangularBedFlowline, TrapezoidalBedFlowline, ParabolicBedFlowline
# There are several numerical implementations in OGGM core. We use the "FluxBasedModel"
from functools import partial
FlowlineModel = partial(FluxBasedModel, min_dt=0, cfl_number=0.01)
# import oggm-edu helper package
import oggm_edu as edu
import CdeC as cdec
# define horizontal resolution of the model:
# nx: number of grid points
# map_dx: grid point spacing in meters
nx = 200
map_dx = 100
# calculate the distance from the top to the bottom of the glacier in km
distance_along_glacier = edu.distance_along_glacier(nx, map_dx)
# ELA at 3000 m a.s.l., gradient 4 mm m-1
initial_ELA = 3000 #equilibrium line altitude in meters above sea level
altgrad = 4 #altitude gradient in mm/m
mb_model = LinearMassBalance(initial_ELA, grad=altgrad)
```
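With this linear model, every meter of altitude above the ELA adds `grad` millimeters of ice per year, and every meter below it removes the same amount. As a quick plain-Python check of what the gradient means (a sketch independent of OGGM):

```python
def annual_mass_balance(altitude, ela=3000, grad=4):
    """Linear mass balance in mm of ice per year: grad * (height above the ELA)."""
    return grad * (altitude - ela)

print(annual_mass_balance(3400))  # at the basin peak: 1600 mm/yr of accumulation
print(annual_mass_balance(3000))  # exactly at the ELA: 0 mm/yr
print(annual_mass_balance(2500))  # well below the ELA: -2000 mm/yr (melt)
```

So the 400 m between the ELA and the peak translates into 1.6 m of ice gained per year at the very top.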
## Our mix of glaciers
We will make a set of 3 glaciers, each with a different slope and a different accumulation-zone width.
```
# define characteristics of glaciers in our basin
top = 3400 #m, peak elevation shared by all the glaciers in the basin
initial_width = 300 #width in meters
# the lists below define three different glaciers. If you modify them, make sure the lists are the same length
slopes = [0.1, 0.1, 0.15]
upper_widths = [300, 400, 300]
```
We ask for a summary of the characteristics of the basin.
```
# What does our basin look like?
print('There are {} glaciers in our basin.'.format(len(slopes)))
print('The basin peak elevation is {} m.'.format(top))
print('The basin climate gives an ELA of {} m a.s.l. with altitude gradient of {} mm/m.'.format(initial_ELA, altgrad))
for k in range(len(slopes)):
    print('Glacier {} has a slope of {} and an accumulation area width of {} m.'.format(k+1, slopes[k], upper_widths[k]))
```
### Initialization
First, we simulate the growth of the glaciers at equilibrium.
```
# Set up glaciers in our basin, currently in equilibrium
models = []
beds = []
distances_along = []
# Colors for the graphs
colors = ['C1', 'C3', 'C5']
for k, slope in enumerate(slopes):
    # create a linear bedrock profile from top to bottom
    bottom = top - nx * map_dx * slope #m, elevation of the bottom of the incline based on the slope we defined
    bed_h, surface_h = edu.define_linear_bed(top, bottom, nx)
    beds.append(bed_h)
    widths = np.zeros(nx) + initial_width/map_dx
    widths[0:15] = upper_widths[k]/map_dx #adjust the upstream width to the value we've chosen
    # ask the model to calculate the distance from the top to the bottom of the glacier in km
    distances_along.append(edu.distance_along_glacier(nx, map_dx))
    flowline = RectangularBedFlowline(surface_h=surface_h, bed_h=bed_h, widths=widths, map_dx=map_dx)
    # The models require the initial glacier bed, a mass balance model, and an initial time (the year y0)
    models.append(FlowlineModel(flowline, mb_model=mb_model, y0=0.))
    # run each glacier forward until it reaches equilibrium
    models[k].run_until_equilibrium(rate=0.006)
```
Let's see what our three glaciers look like at equilibrium.
```
f1, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(21,5))
ax_list = (ax1, ax2, ax3)
for k in range(len(slopes)):
    ax = ax_list[k]
    ax.plot(distances_along[k], models[k].fls[-1].surface_h, label='Slope = {}'.format(slopes[k]), color=colors[k])
    cdec.plot_xz_bed(distances_along[k], beds[k], ax=ax, ylim=(1500, 3700))
```
We can add up all the ice stored in our glaciers to see how much is in the basin as a whole.
```
# report the equilibrium volume of our basin
glacier_vols = [models[k].volume_km3 for k in range(len(slopes))] # a list of the volumes of each glacier
basin_volume = np.sum(glacier_vols) #sum of the list
for k, v in enumerate(glacier_vols):
    print('Equilibrium volume of Glacier {} is {:.3f} km3.'.format(k+1, v))
print('In total, basin ice storage is {:.3f} km3, equivalent to {:.3E} liters fresh water'.format(basin_volume, cdec.ice_to_freshwater(basin_volume)))
```
**What combination of `slopes` and `upper_widths` above gives you the basin with the largest volume of ice at equilibrium?**
## Our basin in a different climate
We are going to introduce a climate change as we did in [notebook 4, "A changing climate"](4_a_changing_climate.ipynb), and [notebook 5, "Differing reactions"](5_differing_reactions.ipynb).
```
# Time
yrs = np.arange(0, 201, 5, dtype=np.float32)
nsteps = len(yrs)
change_time = 50 #when to apply the step change
# Output containers
elas = [np.zeros(nsteps) for i in range(len(slopes))]
lengths = [np.zeros(nsteps) for i in range(len(slopes))]
areas = [np.zeros(nsteps) for i in range(len(slopes))]
volumes = [np.zeros(nsteps) for i in range(len(slopes))]
# Loop
current_ELA = initial_ELA
change = 100 #m, the amount by which we want the initial ELA to change
for k, m in enumerate(models):
    ela_arr = elas[k]
    length_arr = lengths[k]
    area_arr = areas[k]
    volume_arr = volumes[k]
    current_ELA = initial_ELA #reset the ELA before simulating each glacier
    # establish the glacier in equilibrium first
    m_perturb = FlowlineModel(m.fls, mb_model=mb_model, y0=0.)
    m_perturb.run_until_equilibrium(rate=0.006)
    initial_time = m_perturb.yr
    for i, yr in enumerate(yrs):
        m_perturb.run_until(initial_time + yr)
        if yr >= change_time:
            current_ELA = initial_ELA + change
            m_perturb.mb_model = LinearMassBalance(current_ELA, grad=altgrad)
        ela_arr[i] = current_ELA
        length_arr[i] = m_perturb.length_m
        area_arr[i] = m_perturb.area_km2
        volume_arr[i] = m_perturb.volume_km3
    print(m_perturb.yr)
# summing up basin volume
basin_vol_t = np.sum(volumes, axis=0)
```
Now, we graph the total change in the volume of ice in the basin:
```
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.plot(yrs, basin_vol_t, color='k')
ax1.set_xlabel('Year of simulation')
ax1.set_ylabel('Ice volume [km3]')
ax2.plot(yrs, basin_vol_t/(basin_vol_t[0]), color='b')
ax2.set_xlabel('Year of simulation')
ax2.set_ylabel('Fraction of initial volume')
```
Try different basin configurations and compare your results.
**What characteristics of the basin lead to the largest loss of ice volume with this climate change?**
### Advanced activity
As we learned in Notebooks 4 and 5, several factors influence the _response time_ of each glacier. But to plan our water resources, we would like to know the response time of our entire basin. How would you summarize the response time of the basin that you have simulated?
Hint: You can use the built-in function to calculate the response time for each individual glacier:
```
# With the following function you can calculate the response time in years
# (the reference model has to be in equilibrium state)
for k, m in enumerate(models):
    response_time, model_eq = edu.response_time_vol(m, mb_model)
    print('The response time of glacier {} is {} years.'.format(k+1, response_time))
```
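One possible summary (a sketch of one option, not the only reasonable answer) is a volume-weighted average of the individual response times, so that the glaciers storing the most ice dominate the basin-wide figure. The numbers below are illustrative placeholders, not output from the models above:

```python
def basin_response_time(response_times, volumes):
    """Volume-weighted average response time: larger glaciers count for more."""
    total = sum(volumes)
    return sum(t * v for t, v in zip(response_times, volumes)) / total

# hypothetical per-glacier values, for illustration only
times = [140, 90, 60]   # years, e.g. from edu.response_time_vol
vols = [0.5, 0.3, 0.2]  # km3 at equilibrium
print('Basin response time: {:.0f} years'.format(basin_response_time(times, vols)))
```

With these placeholder values the basin as a whole responds in about 109 years, closer to the biggest glacier than to the smallest.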
## Real glaciers at last!
Now that we have learned a lot of theory about glaciers, we are ready to study real glaciers. [Go to notebook 7!](7_simulating_real_glaciers.ipynb)
##### Copyright 2020 The TensorFlow IO Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Audio Data Preparation and Augmentation
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/io/tutorials/audio"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/audio.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/audio.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/io/tutorials/audio.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
</table>
## Overview
One of the biggest challenges in automatic speech recognition is the preparation and augmentation of audio data. Audio data analysis could be in the time or frequency domain, which adds additional complexity compared with other data sources such as images.
As part of the TensorFlow ecosystem, the `tensorflow-io` package provides quite a few useful audio-related APIs that help ease the preparation and augmentation of audio data.
## Setup
### Install required packages, and restart runtime
```
!pip install tensorflow-io
```
## Usage
### Read an audio file
In TensorFlow IO, the class `tfio.audio.AudioIOTensor` allows you to read an audio file into a lazy-loaded `IOTensor`:
```
import tensorflow as tf
import tensorflow_io as tfio
audio = tfio.audio.AudioIOTensor('gs://cloud-samples-tests/speech/brooklyn.flac')
print(audio)
```
In the above example, the Flac file `brooklyn.flac` is from a publicly accessible audio clip in [Google Cloud](https://cloud.google.com/speech-to-text/docs/quickstart-gcloud).
The GCS address `gs://cloud-samples-tests/speech/brooklyn.flac` is used directly because GCS is a supported file system in TensorFlow. In addition to the `Flac` format, `WAV`, `Ogg`, `MP3`, and `MP4A` are also supported by `AudioIOTensor` with automatic file format detection.
`AudioIOTensor` is lazy-loaded, so only the shape, dtype, and sample rate are shown initially. The shape of an `AudioIOTensor` is represented as `[samples, channels]`, which means the audio clip loaded here is mono channel with `28979` samples in `int16`.
The content of the audio clip will only be read as needed: either by converting the `AudioIOTensor` to a `Tensor` through `to_tensor()`, or through slicing. Slicing is especially useful when only a small portion of a large audio clip is needed:
```
audio_slice = audio[100:]
# remove last dimension
audio_tensor = tf.squeeze(audio_slice, axis=[-1])
print(audio_tensor)
```
The audio can be played through:
```
from IPython.display import Audio
Audio(audio_tensor.numpy(), rate=audio.rate.numpy())
```
A more convenient way is to convert the tensor into float numbers and show the audio clip in a graph:
```
import matplotlib.pyplot as plt
tensor = tf.cast(audio_tensor, tf.float32) / 32768.0
plt.figure()
plt.plot(tensor.numpy())
```
### Trim the noise
Sometimes it makes sense to trim the noise from the audio, which could be done through the API `tfio.experimental.audio.trim`. Returned from the API is a pair of `[start, stop]` positions of the segment:
```
position = tfio.experimental.audio.trim(tensor, axis=0, epsilon=0.1)
print(position)
start = position[0]
stop = position[1]
print(start, stop)
processed = tensor[start:stop]
plt.figure()
plt.plot(processed.numpy())
```
### Fade In and Fade Out
One useful audio engineering technique is fade, which gradually increases or decreases audio signals. This can be done through `tfio.experimental.audio.fade`, which supports different shapes of fades such as `linear`, `logarithmic`, or `exponential`:
```
fade = tfio.experimental.audio.fade(
processed, fade_in=1000, fade_out=2000, mode="logarithmic")
plt.figure()
plt.plot(fade.numpy())
```
### Spectrogram
Advanced audio processing often works on frequency changes over time. In `tensorflow-io`, a waveform can be converted to a spectrogram through `tfio.experimental.audio.spectrogram`:
```
# Convert to spectrogram
spectrogram = tfio.experimental.audio.spectrogram(
fade, nfft=512, window=512, stride=256)
plt.figure()
plt.imshow(tf.math.log(spectrogram).numpy())
```
Additional transformations to different scales are also possible:
```
# Convert to mel-spectrogram
mel_spectrogram = tfio.experimental.audio.melscale(
spectrogram, rate=16000, mels=128, fmin=0, fmax=8000)
plt.figure()
plt.imshow(tf.math.log(mel_spectrogram).numpy())
# Convert to db scale mel-spectrogram
dbscale_mel_spectrogram = tfio.experimental.audio.dbscale(
mel_spectrogram, top_db=80)
plt.figure()
plt.imshow(dbscale_mel_spectrogram.numpy())
```
### SpecAugment
In addition to the above data preparation and augmentation APIs, the `tensorflow-io` package also provides advanced spectrogram augmentations, most notably the Frequency and Time Masking discussed in [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition (Park et al., 2019)](https://arxiv.org/pdf/1904.08779.pdf).
#### Frequency Masking
In frequency masking, frequency channels `[f0, f0 + f)` are masked, where `f` is chosen from a uniform distribution from `0` to the frequency mask parameter `F`, and `f0` is chosen from `(0, ν − f)` where `ν` is the number of frequency channels.
```
# Freq masking
freq_mask = tfio.experimental.audio.freq_mask(dbscale_mel_spectrogram, param=10)
plt.figure()
plt.imshow(freq_mask.numpy())
```
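To see the masking rule itself, the sketch below restates it in plain NumPy. This is an independent illustration, not `tfio`'s implementation, and it assumes the spectrogram is laid out as `[frequency, time]` (the tfio functions above have their own axis convention):

```python
import numpy as np

def freq_mask_np(spec, F, rng=None):
    """Zero out a random band of frequency channels [f0, f0 + f)."""
    rng = np.random.default_rng(rng)
    nu = spec.shape[0]                 # number of frequency channels
    f = int(rng.integers(0, F + 1))    # f ~ Uniform{0, ..., F}
    f0 = int(rng.integers(0, nu - f))  # f0 ~ Uniform{0, ..., nu - f - 1}
    masked = spec.copy()
    masked[f0:f0 + f, :] = 0.0
    return masked

spec = np.ones((128, 100))             # stand-in for a 128-mel spectrogram
masked = freq_mask_np(spec, F=10, rng=0)
n_masked = int((masked == 0).all(axis=1).sum())
print(n_masked)                        # number of fully masked channels, at most 10
```

Time masking works the same way, just along the time axis instead of the frequency axis.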
#### Time Masking
In time masking, `t` consecutive time steps `[t0, t0 + t)` are masked, where `t` is chosen from a uniform distribution from `0` to the time mask parameter `T`, and `t0` is chosen from `[0, τ − t)` where `τ` is the number of time steps.
```
# Time masking
time_mask = tfio.experimental.audio.time_mask(dbscale_mel_spectrogram, param=10)
plt.figure()
plt.imshow(time_mask.numpy())
```