# Tutorial Title
Your name
Tutorial Date
---
# Overview
If you have an introductory paragraph, lead with it here! Then continue into the required list of topics below:
1. Ideally, these should map approximately to the main sections of your content
2. Or to each second-level (`##`) header in this tutorial notebook
3. Keep the size and scope of your seminar in check
4. Let the attendee know up front the important concepts they'll be leaving with
# Prerequisites
What concepts, packages, or other background information does your audience need before learning your material? Is this tutorial a continuation in a seminar series? Link to past required or recommended tutorials here.
Here is a table you can use to present the prerequisite concepts and label each as helpful or necessary.
|Concepts | Importance | Notes |
| --- | --- | --- |
| Pandas Part 1 | Necessary | - |
- Time to learn: 50 minutes.
---
# Imports
```
import numpy
import pandas
```
# Your first content section
Be sure to make your examples and lessons targeted to an atmospheric and oceanic science audience. We want to keep things as relevant and interesting as possible to our attendees. Keep in mind that many scientists will want to copy and paste code from these lessons, and then make small edits so it works for their particular analysis.
```
print("Hello world!")
```
## A content subsection
Consider interspersing exercises where the audience can attempt some coding similar to what you have shown them, so that they can test their understanding or simply explore. Then show the answer. For instance:
In the next cell, can you have Python print out the sum of 1 and 1?
Answer:
```
print(1 + 1)
```
The amount of explanatory Markdown you include in your notebook is up to you. These tutorials are designed to be followed either with you live, or with you in the recording. Your commentary will complement the notebook skeleton regardless of the style of presentation you are most comfortable with.
---
# Summary
Conclude your presentation with a brief summary of the key pieces that were learned and how they tied back to your objectives. Reiterate the most important takeaways. If you are coming back for a follow-up tutorial seminar soon, let the audience know!
## Resources and references
List your citations and references as necessary. Give credit where credit is due. Also, feel free to link to relevant external material, further reading, documentation, etc. Only include what you're legally allowed to: no copyright infringement or plagiarism.
Thank you for your contribution! And good luck on your tutorial!
# Basic Condorcet
For a quick test, let's look at basic Condorcet voting. Recall that Condorcet looks for the option that wins all pairwise majority elections against every other option. Consider the set of agents $N = \{ A, B, C, D\}$ voting over solutions $\{1, 2, 3, 4\}$ and the following preference profile:
<div>
<img src="preference_profile.png" width="300"/>
</div>
Investigating all of the pairwise duels in a matrix, we'd get (for example, the three agents $A, B, C$ like 1 better than 2, only $D$ prefers 2 to 1, so 1 wins 3:1 over 2) :
<div>
<img src="duels.png" width="500"/>
</div>
And option 4 is the Condorcet winner that we'd hope to find. I've used numbers for options to make the transition to MiniZinc a bit easier (maybe we turn to enums a,b,c for better illustration).
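Before switching to MiniZinc, the pairwise-duel check is easy to sketch in plain Python (using the same profile that the `prefs` array below encodes; rows are agents $A$–$D$, most preferred option first):

```python
# Preference profile: one row per agent (A-D), options listed best-first.
prefs = [
    [1, 2, 4, 3],  # agent A
    [4, 1, 3, 2],  # agent B
    [3, 4, 1, 2],  # agent C
    [4, 2, 3, 1],  # agent D
]

def condorcet_winner(prefs):
    options = sorted(prefs[0])
    for cand in options:
        # cand is the Condorcet winner iff it beats every other option
        # in a pairwise duel by a strict majority of agents
        if all(
            sum(p.index(cand) < p.index(other) for p in prefs) > len(prefs) / 2
            for other in options
            if other != cand
        ):
            return cand
    return None  # no Condorcet winner exists

print(condorcet_winner(prefs))  # → 4
```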
## MiniZinc basic toy example
I wanted to abstract away some of the complications from directly considering solutions, but also did not want to vote over the value of only a single integer variable. So only some combinations of variable assignments $x$ and $y$ are valid, and these get arbitrarily numbered (indicated by the variable `control`):
<div>
<img src="sols.png" width="300"/>
</div>
That at least keeps the illusion of voting over solutions to the CSP.
```
constraint if control = 1 then x = 1 /\ y = 3 endif;
constraint if control = 2 then x = 2 /\ y = 2 endif;
constraint if control = 3 then x = 1 /\ y = 2 endif;
constraint if control = 4 then x = 3 /\ y = 1 endif;
```
Now the above encoding of agents' preferences can simply be written as an array of integers denoting the ordering of control values.
<div>
<img src="preference_profile.png" width="200"/>
</div>
```
array[AGENTS,CHOICES] of CHOICES: prefs =
[| 1, 2, 4, 3 | 4, 1, 3, 2 | 3, 4, 1, 2 | 4, 2, 3, 1 |];
```
The full [MiniZinc model](base_model_with_prefs.mzn)
does little more than additionally provide, for every agent, a rank stating how they perceive the current solution according to `prefs`. So, if `control = 2`, then `rank[1] = 2`, `rank[2] = 4`, etc.
```
array[AGENTS] of var 1..n_options: rank;
constraint forall(a in AGENTS) (
prefs[a,rank[a]] = control
);
```
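As a quick plain-Python sanity check of this rank definition (1-based positions, with `control = 2` as in the example above):

```python
# Same preference profile as the MiniZinc `prefs` array, rows = agents A-D
prefs = [[1, 2, 4, 3], [4, 1, 3, 2], [3, 4, 1, 2], [4, 2, 3, 1]]
control = 2
# rank[a] = 1-based position of the current solution in agent a's ordering
rank = [row.index(control) + 1 for row in prefs]
print(rank)  # → [2, 4, 4, 2]
```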
which we will use to post the Condorcet-improvement criterion during custom Branch-and-Bound.
## The Meta-Search
Basically, everything works just as in normal branch-and-bound for now, but instead of posting
```
child.add_string(f"constraint obj > {res['obj']}")
```
I would post the following (now in prose):
```
Given the current solution that we're at, give me the next solution that has better ranks (is liked better) by more than (or equal to) half the agents:
```
Let's implement this directly:
```
import minizinc
from minizinc import Instance, Model, Result, Solver, Status
import nest_asyncio
nest_asyncio.apply()
gecode = Solver.lookup("gecode")
m = Model("base_model_with_prefs.mzn")
inst = Instance(gecode, m)
res: Result = inst.solve()
print(res.solution)
while res.status == Status.SATISFIED:
    with inst.branch() as child:
        child.add_string("array[AGENTS] of 1..n_options+1: old_rank;")
        child["old_rank"] = res["rank"]  # copy the current ranks
        child.add_string("constraint sum(a in AGENTS) ( bool2int(rank[a] < old_rank[a]) ) > win_thresh;")
        res = child.solve()
        if res.solution is not None:
            print(res.solution)
```
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
import numpy as np
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from scipy.spatial.distance import pdist, squareform
sns.set()
from sklearn import preprocessing
from scipy.special import kl_div
from scipy import stats
TRAINING_DATA = pd.read_csv(r'stratigraphic_geometry_dataset.csv', index_col=[0])
TRUNCATION_COLOR = "#ffffbf"
ONLAP_COLOR = "#2c7bb6"
HORIZ_COLOR = "#d7191c"
def flatten(container):
    """Flattens arbitrarily nested lists and tuples."""
    for i in container:
        if isinstance(i, (list, tuple)):
            for j in flatten(i):
                yield j
        else:
            yield i
def feature_list(no_of_neighbors):
    """
    Creates a list of features given the number of adjacent wells.
    :param no_of_neighbors: number of adjacent wells used in feature engineering
    """
    print("Getting the features")
    initial = ["thickness", "thickness natural log", "thickness power"]
    features = []
    for item in initial:
        features.append(item)
        for i in range(1, no_of_neighbors + 1):
            features.append(item + " neighbor " + str(i))
    features.append(["x location", "y location", "class"])
    return list(flatten(features))
wells_in_vicinity = 300
flat_features = feature_list(wells_in_vicinity)
subset = TRAINING_DATA[flat_features].copy()  # explicit copy avoids SettingWithCopyWarning below
le = preprocessing.LabelEncoder()
le_class = le.fit_transform(subset['class'])
subset.loc[:,'le_class'] = le_class
subset.drop('class', inplace=True, axis=1)
X_train, X_test, y_train, y_test = train_test_split(
subset.drop("le_class", axis=1), subset["le_class"], test_size=0.2, random_state=86,
)
subset = TRAINING_DATA[flat_features]
trunc = subset[subset['class'] == 'truncation']
onlap = subset[subset['class'] == 'onlap']
horiz = subset[subset['class'] == 'horizontal']
ft = list(stats.relfreq(onlap['thickness natural log'].values, numbins=10, defaultreallimits=(0,1))[0])
sam = stats.relfreq(onlap.iloc[3600,wells_in_vicinity+1:wells_in_vicinity+2+wells_in_vicinity].values, numbins=10, defaultreallimits=(0,1))[0]
print(sum(kl_div(sam, ft)))
g = sns.histplot(onlap.iloc[3600,wells_in_vicinity+1:wells_in_vicinity+2+wells_in_vicinity].values,label='Nat. Log Single Sample', stat='probability', binwidth=0.1, color='k',binrange=(0,1))
plt.xlim(0,1)
plt.ylim(0,0.75)
sns.histplot(onlap['thickness natural log'].values,label='Nat. Log All Samples', stat='probability', binwidth=0.1, color=ONLAP_COLOR,binrange=(0,1))
plt.xlim(0,1)
plt.ylim(0,0.75)
plt.legend()
g.text(0.55, 0.55, f'KL={sum(kl_div(sam, ft)).round(2)}')
# plt.savefig('onlap_KL.pdf')
ft = list(stats.relfreq(trunc['thickness natural log'].values, numbins=10, defaultreallimits=(0,1))[0])
sam = stats.relfreq(trunc.iloc[3600,wells_in_vicinity+1:wells_in_vicinity+2+wells_in_vicinity].values, numbins=10, defaultreallimits=(0,1))[0]
print(sum(kl_div(sam, ft)))
g = sns.histplot(trunc.iloc[3600,wells_in_vicinity+1:wells_in_vicinity+2+wells_in_vicinity].values,label='Nat. Log Single Sample', stat='probability', binwidth=0.1, color='k',binrange=(0,1))
plt.xlim(0,1)
plt.ylim(0,0.75)
sns.histplot(trunc['thickness natural log'].values,label='Nat. Log All Samples', stat='probability', binwidth=0.1, color=TRUNCATION_COLOR,binrange=(0,1))
plt.xlim(0,1)
plt.ylim(0,0.75)
plt.legend()
g.text(0.55, 0.55, f'KL={sum(kl_div(sam, ft)).round(2)}')
# plt.savefig('trunc_KL.pdf')
ft = list(stats.relfreq(horiz['thickness natural log'].values, numbins=10, defaultreallimits=(0,1))[0])
sam = stats.relfreq(horiz.iloc[3600,wells_in_vicinity+1:wells_in_vicinity+2+wells_in_vicinity].values, numbins=10, defaultreallimits=(0,1))[0]
print(sum(kl_div(sam, ft)))
g = sns.histplot(horiz.iloc[9599,wells_in_vicinity+1:wells_in_vicinity+2+wells_in_vicinity].values,label='Nat. Log Single Sample', stat='probability', binwidth=0.1, color='k',binrange=(0,1))
plt.xlim(0,1)
plt.ylim(0,0.75)
sns.histplot(horiz['thickness natural log'].values,label='Nat. Log All Samples', stat='probability', binwidth=0.1, color=HORIZ_COLOR,binrange=(0,1))
plt.xlim(0,1)
plt.ylim(0,0.75)
plt.legend()
g.text(0.55, 0.55, f'KL={sum(kl_div(sam, ft)).round(2)}')
# plt.savefig('horiz_KL.pdf')
kldiv = []
fullkdv = []
ft = list(stats.relfreq(trunc['thickness natural log'].values, numbins=10, defaultreallimits=(0,1))[0])
for i in range(wells_in_vicinity):
    kldiv = []
    for sample in range(100):
        sam = stats.relfreq(trunc.iloc[sample,wells_in_vicinity+1:wells_in_vicinity+3+i].values, numbins=10, defaultreallimits=(0,1))[0]
        kldiv.append(sum(kl_div(sam, ft)))
    fullkdv.append(kldiv)
plt.plot(fullkdv, c='k', alpha=0.1)
plt.plot(np.mean(fullkdv, axis=1))
kldiv = []
fullkdv = []
ft = list(stats.relfreq(onlap['thickness natural log'].values, numbins=10, defaultreallimits=(0,1))[0])
for i in range(wells_in_vicinity):
    kldiv = []
    for sample in range(500):
        sam = stats.relfreq(onlap.iloc[sample,wells_in_vicinity+1:wells_in_vicinity+3+i].values, numbins=10, defaultreallimits=(0,1))[0]
        kldiv.append(sum(kl_div(sam, ft)))
    fullkdv.append(kldiv)
#plt.plot(fullkdv, c='k', alpha=0.1)
plt.plot(np.mean(fullkdv, axis=1))
kldiv = []
fullkdv = []
ft = list(stats.relfreq(horiz['thickness natural log'].values, numbins=10, defaultreallimits=(0,1))[0])
for i in range(wells_in_vicinity):
    kldiv = []
    for sample in range(500):
        sam = stats.relfreq(horiz.iloc[sample,wells_in_vicinity+1:wells_in_vicinity+3+i].values, numbins=10, defaultreallimits=(0,1))[0]
        kldiv.append(sum(kl_div(sam, ft)))
    fullkdv.append(kldiv)
#plt.plot(fullkdv, c='k', alpha=0.1)
plt.plot(np.mean(fullkdv, axis=1))
TRAINING_DATA
```
```
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import csv
import os
import re
from matplotlib.ticker import MultipleLocator, FormatStrFormatter, AutoMinorLocator
FOLDER = "logs"
files = os.listdir(FOLDER)
len(files)
def label_replace(attn_name, update_rule):
    model_name = attn_name + "_" + update_rule
    if model_name == "tanh_fwm":
        return "Schlag (2021)"
    elif model_name == "dpfp1_fwm":
        return "Schlag (2021) with DPFP-1"
    else:
        return model_name
# NOTE: the redefinition below shadows the function above, so only this
# attention-name-only mapping is actually used.
def label_replace(attn_name, update_rule):
    if attn_name == "softmax":
        return "Softmax"
    elif attn_name == "linear":
        return "Linear"
    elif attn_name == "favor64":
        return "FAVOR+ (64)"
    elif attn_name == "favor128":
        return "FAVOR+ (128)"
    elif attn_name == "favor512":
        return "FAVOR+ (512)"
    elif attn_name == "dpfp1":
        return "DPFP-1"
    elif attn_name == "dpfp2":
        return "DPFP-2"
    elif attn_name == "dpfp3":
        return "DPFP-3"
    else:
        return attn_name
df = pd.DataFrame(
columns=["dataloader","attn_name","hidden_size","seq_len",
"n_keys","with_replace","csvfile", "steps", "loss", "label"])
# filename example: d64_dpfp1_sum_AssocSeq80N40R1.csv
pattern = re.compile(r"d(\d+)_(.+)_(.+)_AssocSeq(\d+)N(\d+)R(\d+)")
for idx, f in enumerate(files):
    m = pattern.match(f)
    csv_df = pd.read_csv(os.path.join(FOLDER, f))
    df = df.append({
        "dataloader": "AssocSeq",
        "attn_name": m.group(2),
        "update_rule": m.group(3),
        "hidden_size": int(m.group(1)),
        "seq_len": int(m.group(4)),
        "n_keys": int(m.group(5)),
        "with_replace": int(m.group(6)),
        "csvfile": f,
        "steps": int(csv_df.iloc[-1].step),
        "loss": csv_df.iloc[-1]["eval-loss"],
        "label": f"{label_replace(m.group(2), m.group(3))}"
    }, ignore_index=True)
```
## Plot loss over number of keys
```
dfq = df.copy()
dfq = dfq[dfq.with_replace == 0]
set(dfq.label)
fig, ax = plt.subplots(figsize=(13,8))
scat = sns.scatterplot(x="n_keys",
y="loss",
style="label",
hue="label",
data=dfq,
s=130,
alpha=0.8)
#scat.axes.set_title("setting 1", fontsize=16)
scat.axes.set_xlabel("number of unique keys / sequence length", fontsize=17)
scat.axes.set_ylabel("loss", fontsize=18)
scat.tick_params(labelsize=15)
plt.xticks(np.arange(0, 620, 40))
# sort both labels and handles by labels
handles, labels = scat.axes.get_legend_handles_labels()
labels, handles = zip(*sorted(zip(labels, handles), key=lambda t: t[0], reverse=True))
scat.axes.legend(handles, labels, loc="center right", bbox_to_anchor=(0.33,0.70), prop={'size': 13})
# major and minor ticks
scat.axes.xaxis.set_major_locator(MultipleLocator(40))
scat.axes.xaxis.set_major_formatter(FormatStrFormatter('%d'))
scat.axes.xaxis.set_minor_locator(MultipleLocator(20))
scat.axes.yaxis.set_major_locator(MultipleLocator(0.1))
scat.axes.yaxis.set_minor_locator(MultipleLocator(0.05))
# grid
scat.axes.xaxis.grid(True, which='both')
scat.axes.yaxis.grid(True, which='both')
fig, ax = plt.subplots(figsize=(13,8))
line = sns.lineplot(x="n_keys",
y="loss",
hue="label",
data=dfq,
alpha=0.7,
linewidth=3)
#line.axes.set_title("setting 1", fontsize=20)
line.axes.set_xlabel("number of unique keys / sequence length", fontsize=15)
line.axes.set_ylabel("loss", fontsize=15)
line.tick_params(labelsize=12)
line.axes.legend(handles, labels, loc="center right", bbox_to_anchor=(0.33,0.70), prop={'size': 13})
plt.xticks(np.arange(20, 601, 20))
plt.grid()
```
## Plot loss curve for specific experiments
```
N_KEYS = 600
dfq = df.copy()
dfq = dfq[dfq.steps > 0]
dfq = dfq[dfq.n_keys == N_KEYS]
dfq = dfq[dfq.with_replace == 0]
dfq
df_losses = pd.DataFrame(columns=["label", "loss", "step"])
for _, row in dfq.iterrows():
    df_csv = pd.read_csv(os.path.join(FOLDER, row.csvfile))
    for _, csv_row in df_csv.iterrows():
        df_losses = df_losses.append({
            "label": row.label,
            "loss": csv_row["eval-loss"],
            "step": csv_row.step
        }, ignore_index=True)
fig, ax = plt.subplots(figsize=(13,8))
line = sns.lineplot(x="step",
y="loss",
hue="label",
data=df_losses,
alpha=0.7,
linewidth=3)
line.axes.set_title(f"Setting 1 with {N_KEYS} unique keys", fontsize=20)
line.axes.set_xlabel("step", fontsize=14)
line.axes.set_ylabel("loss", fontsize=14)
line.tick_params(labelsize=14)
plt.grid()
```
# Custom Mini-Batch and Training loop
### Imports
```
import Python
let request = Python.import("urllib.request")
let pickle = Python.import("pickle")
let gzip = Python.import("gzip")
let np = Python.import("numpy")
let plt = Python.import("matplotlib.pyplot")
import TensorFlow
```
### MNIST
Data
```
let result = request.urlretrieve(
"https://github.com/mnielsen/neural-networks-and-deep-learning/raw/master/data/mnist.pkl.gz",
"mnist.pkl.gz")
let filename = result[0]; filename
let mnist = pickle.load(gzip.open(filename), encoding:"latin-1")
// read train, validation and test datasets
let train_mnist = mnist[0]
let valid_mnist = mnist[1]
let test_mnist = mnist[2]
func unsqueeze(_ array: PythonObject, _ dtype: PythonObject = np.float32) -> PythonObject {
    return np.expand_dims(array, axis: -1).astype(dtype)
}
// read training tuple into separate variables
let pyobj_train_x = train_mnist[0]
let pyobj_train_y = train_mnist[1].astype(np.int32) // expand dimension
// read validation tuple into separate variables
let pyobj_valid_x = valid_mnist[0]
let pyobj_valid_y = valid_mnist[1].astype(np.int32) // expand dimension
// read test tuple into separate variables
let pyobj_test_x = test_mnist[0]
let pyobj_test_y = test_mnist[1].astype(np.int32) // expand dimension
// read tensorflow arrays into Tensors
let X_train = Tensor<Float32>(numpy: pyobj_train_x)! // ! to unwrap optionals
let y_train = Tensor<Int32>(numpy: pyobj_train_y)! // ! to unwrap optionals
X_train.shape
```
Model
```
let m : Int = Int(X_train.shape[0]) // number of samples
let n_in: Int = Int(X_train.shape[1]) // number of features
let nh: Int = 50 // number of hidden units
let n_out: Int = 10 //number of classes
print("\(n_in) -> \(nh) -> \(n_out)")
struct Model: Layer {
    var layer1 = Dense<Float>(inputSize: n_in, outputSize: nh, activation: relu)
    var layer2 = Dense<Float>(inputSize: nh, outputSize: n_out)
    @differentiable
    func applied(to input: Tensor<Float>, in context: Context) -> Tensor<Float> {
        return input.sequenced(in: context, through: layer1, layer2)
    }
    var description: String {
        return "description here"
    }
}
var model = Model()
let ctx = Context(learningPhase: .training)
// Apply the model to a batch of features.
let preds = model.applied(to: X_train, in: ctx)
preds[0..<2]
// test helper functions
func test_near_zero(_ val: Float32, _ msg: String) -> Void {
    assert(abs(val) < 1e-3, msg)  // use abs so negative differences also fail
}
func test_almost_eq(_ t1: Tensor<Float32>, _ t2: Tensor<Float32>, _ msg: String, _ epsilon: Float32 = 1e-3) -> Void {
    // compare element-wise and reduce to a scalar for the assertion
    assert(abs(t1 - t2).max().scalarized() < epsilon, msg)
}
```
### Custom loss function
We need to compute the softmax of our activations, then apply a log:
$$ \mathrm{softmax}(x)_i = \frac{e^{x_i}}{\sum_{0 \leq j \leq n-1} e^{x_j}} $$
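As a quick numerical cross-check (done here in Python rather than Swift), this naive formula agrees with SciPy's reference `log_softmax`:

```python
import numpy as np
from scipy.special import log_softmax  # reference implementation

x = np.array([[1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]])
# naive version: log of (exp / row-wise sum of exp)
naive = np.log(np.exp(x) / np.exp(x).sum(axis=-1, keepdims=True))
print(np.allclose(naive, log_softmax(x, axis=-1)))  # → True
```

Note that the naive form can overflow `exp` for large inputs; the usual stabilization subtracts `max(x)` from each row first.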
```
func log_softmax(_ x: Tensor<Float>) -> Tensor<Float> {
    let softmax = exp(x) / (exp(x).sum(alongAxes: -1))
    return log(softmax)
}
```
with a sample check that our implementation matches the TensorFlow implementation:
```
let x: Tensor<Float> = Tensor<Float>(arrayLiteral: [1, 2, 3, 4], [4, 3, 2, 1])
log_softmax(x)
logSoftmax(x)
test_almost_eq(log_softmax(x), logSoftmax(x), "Our impl should be same as Tensorflow impl")
let y_hat: Tensor<Float> = log_softmax(preds)
```
Given $x$ and its prediction $p(x)$, the **Cross Entropy** loss is:
$$ - \sum x \log p(x) $$
Now, as the target is a one-hot encoded array, we can rewrite the formula for the index $i$ of the desired target as follows:
$$-\log(p_{i})$$
Technically, if the predictions are of shape (m, 10) and target is (m, 1) then result should be `predictions[:, target]`.
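In NumPy terms (a Python aside with illustrative data), picking each row's target log-probability is integer-array indexing:

```python
import numpy as np

log_probs = np.log(np.array([[0.7, 0.2, 0.1],
                             [0.1, 0.6, 0.3]]))  # shape (m, k) log-probabilities
target = np.array([0, 1])                        # desired class index per row
picked = log_probs[np.arange(len(target)), target]
nll = -picked.mean()  # negative log-likelihood over the batch
print(nll)
```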
```
let x1: Tensor<Float> = Tensor<Float>(arrayLiteral: [2], [3])
let x2: Tensor<Float> = log_softmax(x)
print("\(x1.shape) \(x2.shape)")
x2[1..<2]
let i: Int32 = 0
let pos: Int32 = Int32(x1[i][0].scalar!)
x2[i][pos].scalar!
```
Finally, a manually calculated loss looks like:
```
func nll(labels: Tensor<Int32>, logits: Tensor<Float>) -> Float {
    let size = labels.shape[0]
    var sum: Float = 0
    for i in 0..<size {
        let pos: Int32 = labels[i][0].scalar!
        sum += logits[i][pos].scalar!
    }
    // negate: the inputs are log-probabilities, so the loss is -mean(log p)
    return -sum / Float(size)
}
// our way
let loss1: Float = nll(labels: y_train, logits: y_hat)
// tensorflow-way
let loss2: Float = softmaxCrossEntropy(logits: preds, labels: y_train).scalar!
test_near_zero(loss1-loss2, "Loss manually calculated should be similar to Tensorflow-way")
```
Accuracy function:
```
func accuracy(_ logits: Tensor<Float>, _ labels: Tensor<Int32>) -> Float {
    return Tensor<Float>(logits.argmax(squeezingAxis: -1) .== labels).mean().scalarized()
}
accuracy(preds, y_train)
```
### Basic training loop
- Grab a batch from the dataset
- Do a forward pass to get the output of the model on this batch
- Compute a loss by comparing the output with the labels
- Do a backward pass to calculate the gradients of the loss
- Update the model parameters with the gradients
```
let bs: Int32 = 64
// grab a batch
let X_batch: Tensor<Float> = X_train[0..<bs]
let y_batch: Tensor<Int32> = y_train[0..<bs]
let ctx = Context(learningPhase: .training)
let (loss, grads) = model.valueWithGradient { model -> Tensor<Float> in
    // forward pass
    let preds = model.applied(to: X_batch, in: ctx)
    // compute loss
    return softmaxCrossEntropy(logits: preds, labels: y_batch)
}
// backward pass
/**
print("Current loss: \(loss)")
print("Current accuracy: \(accuracy(preds, y_batch))")
Continue from 47:00
*/
for l in model {
    print(l)
}
```
# Example 1
Trying out the example codes in https://github.com/marinkaz/scikit-fusion
```
import pylab as plt
import matplotlib
from IPython.display import display, HTML
import numpy as np
import pandas as pd
from skfusion import fusion
%matplotlib inline
R12 = np.random.rand(50, 100)
R13 = np.random.rand(50, 40)
R23 = np.random.rand(100, 40)
t1 = fusion.ObjectType('Type 1', 10)
t2 = fusion.ObjectType('Type 2', 20)
t3 = fusion.ObjectType('Type 3', 30)
relations = [fusion.Relation(R12, t1, t2),
fusion.Relation(R13, t1, t3),
fusion.Relation(R23, t2, t3)]
fusion_graph = fusion.FusionGraph()
fusion_graph.add_relations_from(relations)
fuser = fusion.Dfmf()
fuser.fuse(fusion_graph)
m1 = fuser.factor(t1)
print(m1.shape)
plt.imshow(m1)
m2 = fuser.factor(t2)
print(m2.shape)
plt.imshow(m2)
from skfusion import datasets
pharma = datasets.load_pharma()
action = pharma.get_object_type('Action')
pmid = pharma.get_object_type('PMID')
depositor = pharma.get_object_type('Depositor')
fingerprint = pharma.get_object_type('Fingerprint')
depo_cat = pharma.get_object_type('Depositor category')
chemical = pharma.get_object_type('Chemical')
type(pharma)
action = pharma.get_object_type('Action')
pharma.relations
```
# Example 2
More example codes from https://github.com/marinkaz/scikit-fusion/blob/master/examples/dicty_association.py
Fusion of three data sources for gene function prediction in Dictyostelium
Fuse three data sets: gene expression data (Miranda et al., 2013, PLoS One),
slim gene annotations from Gene Ontology and protein-protein interaction
network from STRING database.
Learnt latent matrix factors are utilized for the prediction of slim GO
terms in Dictyostelium genes that are unavailable in the training phase.
This example demonstrates how latent matrices estimated by data fusion
can be utilized for association prediction.
```
from sklearn import cross_validation, metrics
import numpy as np
from skfusion import datasets
from skfusion import fusion as skf
dicty = datasets.load_dicty()
gene = dicty.get_object_type("Gene")
go_term = dicty.get_object_type("GO term")
exp_cond = dicty.get_object_type("Experimental condition")
print(dicty)
print(dicty.object_types)
print(dicty.relations)
n_folds = 10
n_genes = dicty[gene][go_term][0].data.shape[0]
cv = cross_validation.KFold(n_genes, n_folds=n_folds)
fold_mse = np.zeros(n_folds)
ann_mask = np.zeros_like(dicty[gene][go_term][0].data).astype('bool')
relations = [
skf.Relation(dicty[gene][go_term][0].data, gene, go_term),
skf.Relation(dicty[gene][exp_cond][0].data, gene, exp_cond),
skf.Relation(dicty[gene][gene][0].data, gene, gene)]
fusion_graph = skf.FusionGraph(relations)
fuser = skf.Dfmc(max_iter=30, n_run=1, init_type='random', random_state=0)
for i, (train_idx, test_idx) in enumerate(cv):
    ann_mask[:] = False
    ann_mask[test_idx, :] = True
    fusion_graph[gene][go_term][0].mask = ann_mask
    fuser.fuse(fusion_graph)
    pred_ann = fuser.complete(fuser.fusion_graph[gene][go_term][0])[test_idx]
    true_ann = dicty[gene][go_term][0].data[test_idx]
    fold_mse[i] = metrics.mean_squared_error(pred_ann, true_ann)
print("MSE: %5.4f" % np.mean(fold_mse))
```
# Working with Unknown Dataset Sizes
This notebook demonstrates the features built into OpenDP to handle unknown or private dataset sizes.
### Load exemplar dataset
```
import os
data_path = os.path.join('.', 'data', 'PUMS_california_demographics_1000', 'data.csv')
with open(data_path) as data_file:
data = data_file.read()
```
By looking at the private data, we see this dataset has 1000 observations (rows).
Oftentimes the number of observations is public information.
For example, a researcher might run a random poll of 1000 respondents and publicly announce the sample size.
However, there are cases where simply the number of observations itself can leak private information.
For example, if a dataset contained all the individuals with a rare disease in a community,
then knowing the size of the dataset would reveal how many people in the community had that condition.
In general, any given dataset may be some well-defined subset of a population.
The given dataset's size is equivalent to a count query on that subset,
so we should protect the dataset size just as we would protect any other query we want to provide privacy guarantees for.
OpenDP assumes the sample size is private information.
If you know the dataset size (or any other parameter) is publicly available,
then you are free to make use of such information while building your measurement.
OpenDP will not assume you truthfully or correctly know the size of the dataset.
Moreover, OpenDP cannot respond with an error message if you get the size incorrect;
doing so would permit an attack whereby an analyst could repeatedly guess different dataset sizes until the error message went away,
thereby leaking the exact dataset size.
If we know the dataset size, we can incorporate it into the analysis as below,
where we provide `size` as an argument to the release of a sum on age.
While the "sum of ages" is not a particularly useful statistic, it's plenty capable of demonstrating the concept.
```
from opendp.trans import *
from opendp.meas import make_base_geometric
from opendp.mod import enable_features
enable_features("contrib")
# Define parameters up-front
# Each parameter is either a guess, a DP release, or public information
var_names = ["age", "sex", "educ", "race", "income", "married"] # public information
size = 1000 # public information
age_bounds = (0, 100) # an educated guess
constant = 38 # average age for entire US population (public information)
dp_sum = (
# Load data into a dataframe of string columns
make_split_dataframe(separator=",", col_names=var_names) >>
# Selects a column of df, Vec<str>
make_select_column(key="age", TOA=str) >>
# Cast the column as Vec<Int>
make_cast(TIA=str, TOA=int) >>
# Impute missing values to 0
make_impute_constant(constant) >>
# Clamp age values
make_clamp(bounds=age_bounds) >>
# Resize with the known `size`
make_bounded_resize(size=size, bounds=age_bounds, constant=constant) >>
# Aggregate
make_sized_bounded_sum(size=size, bounds=age_bounds) >>
# Noise
make_base_geometric(scale=1.)
)
release = dp_sum(data)
print("DP sum:", release)
```
### Providing incorrect dataset size values
However, if we provide an incorrect value of `n` we still receive an answer.
`make_sum_measurement` is just a convenience constructor for building a sum measurement from a `size` argument.
```
preprocessor = (
make_split_dataframe(separator=",", col_names=var_names) >>
make_select_column(key="age", TOA=str) >>
make_cast_default(TIA=str, TOA=int) >>
make_clamp(age_bounds)
)
def make_sum_measurement(size):
    return make_bounded_resize(size=size, bounds=age_bounds, constant=constant) >> \
        make_sized_bounded_sum(size=size, bounds=age_bounds) >> \
        make_base_geometric(scale=1.0)
lower_n = (preprocessor >> make_sum_measurement(size=200))(data)
real_n = (preprocessor >> make_sum_measurement(size=1000))(data)
higher_n = (preprocessor >> make_sum_measurement(size=2000))(data)
print("DP sum (n=200): {0}".format(lower_n))
print("DP sum (n=1000): {0}".format(real_n))
print("DP sum (n=2000): {0}".format(higher_n))
```
### Analysis with no provided dataset size
If we do not believe we have an accurate estimate for `size` we can instead pay some of our privacy budget
to estimate the dataset size.
Then we can use that estimate in the rest of the analysis.
Here is an example:
```
# First, make the measurement
dp_count = (
make_split_dataframe(separator=",", col_names=var_names) >>
make_select_column(key="age", TOA=str) >>
make_count(TIA=str) >>
make_base_geometric(scale=1.)
)
dp_count_release = dp_count(data)
print("DP count: {0}".format(dp_count_release))
dp_sum = preprocessor >> make_sum_measurement(dp_count_release)
dp_sum_release = dp_sum(data)
print("DP sum: {0}".format(dp_sum_release))
```
Note that our privacy usage has increased, because we apportioned some epsilon for both the released count of the dataset and the sum itself.
### OpenDP `resize` vs. other approaches
The standard formula for the mean of a variable is:
$\bar{x} = \frac{\sum{x}}{n}$
The conventional, and simpler, approach in the differential privacy literature, is to:
1. compute a DP sum of the variable for the numerator
2. compute a DP count of the dataset rows for the denominator
3. take their ratio
This is sometimes called a 'plug-in' approach, as we are plugging-in differentially private answers for each of the
terms in the original formula, without any additional modifications, and using the resulting answer as our
estimate while ignoring the noise processes of differential privacy. While this 'plug-in' approach does result in a
differentially private value, the utility here is generally lower than the solution in OpenDP. Because the number of
terms summed in the numerator does not agree with the value in the denominator, the variance is increased and the
resulting distribution becomes both biased and asymmetrical, which is visually noticeable in smaller samples.
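To make the plug-in approach concrete, here is a hedged sketch using synthetic data and Laplace noise (this is not the OpenDP API; the data, budget split, and sensitivity-based scales are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
ages = rng.integers(18, 80, size=1000)  # synthetic stand-in for the age column
bounds = (0, 100)
eps_sum, eps_count = 0.5, 0.5           # split a total budget of epsilon = 1

# Laplace scales derived from the (illustrative) sensitivity of each query
sum_scale = (bounds[1] - bounds[0]) / eps_sum
count_scale = 1 / eps_count

dp_sum = ages.sum() + rng.laplace(scale=sum_scale)
dp_count = len(ages) + rng.laplace(scale=count_scale)
plug_in_mean = dp_sum / dp_count  # the plug-in ratio estimator
print(plug_in_mean)
```

Because the noisy numerator and denominator are drawn independently, repeating this many times produces the biased, asymmetric distribution described above.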
We have noticed that for the same privacy loss,
the distribution of answers from OpenDP's resizing approach to the mean is tighter around the true dataset value (thus lower in error) than the conventional plug-in approach.
*Note, in these simulations, we've shown equal division of the epsilon for all constituent releases,
but higher utility (lower error) can be generally gained by moving more of the epsilon into the sum,
and using less in the count of the dataset rows, as in earlier examples.*
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import sys, os
import matplotlib.pyplot as plt
sys.path.append(os.path.join('..'))
from FACT.helper import *
from FACT.fairness import *
from FACT.data_util import *
from FACT.plot import *
from FACT.lin_opt import *
# Fair Data
X_train, y_train, X_test, y_test, X_train_removed, X_test_removed, dtypes, dtypes_, sens_idc, race_idx, sex_idx = get_dataset('synth', corr_sens=False)
plot_synth_data(X_train_removed, y_train, X_train[:,2])
clf = sklearn.linear_model.LogisticRegression(solver='lbfgs')
clf.fit(X_train_removed, y_train)
print(clf.score(X_test_removed, y_test))
fm = FairnessMeasures(X_train, y_train, X_test, y_test, X_train_removed, X_test_removed, clf, 2)
mats, mats_dict, M_const, b_const = get_fairness_mats(fm)
print(mats_dict.keys())
print('TPRs : %f\t%f'%(fm.pos_group_stats['TPR'], fm.neg_group_stats['TPR']))
print('FPRs : %f\t%f'%(fm.pos_group_stats['FPR'], fm.neg_group_stats['FPR']))
print('Base rates: %f\t%f'%(b_const[1] / b_const[0], b_const[3] / b_const[2]))
print(fm.FOR, fm.PPV)
fm.pos_base_rate, fm.neg_base_rate
fm.group_parity_diff(), fm.pos_class_balance()
fm.equalized_odds_diff()
# Test all possible combinations of fairness measures ..
result = test_all_enumerations(fm, mats)
# Get fairness trade-off table from the list of fairness names
some_names = [['PosClassBal', 'ClassBal'],
['PredEqual', 'NegClassBal'],
['EqOdd', 'DemoParity'],
['EqOdd', 'PosClassBal', 'DemoParity'],
['EqOdd', 'ClassBal', 'PredEqual', 'DemoParity'],
['EqOdd', 'ClassBal', 'PredEqual', 'EqOpp', 'DemoParity'],
['PosClassBal', 'DemoParity'],
['Calibration', 'ClassBal', 'EqOpp', 'DemoParity'],
['PosClassBal', 'NegClassBal', 'Calibration'],
]
dd = test_some_names(fm, some_names)
res2text(dd)
# First get (eps, delta) pairs for optimizing over different lambdas
lmbds_used = get_eps_delta_over_lambdas(mats_dict, M_const, b_const, some_names)
# Plot eps-delta curve
# NOTE manually set up the group for colors
groups = [0, 0, 1, 1, 1, 1, 2, 3, 4]
colors = ['m', 'b', 'g', 'r', 'k']
plot_eps_delta_curves(fm,
some_names,
lmbds_used,
groups=groups,
colors=colors,
data_name='S(U)',
save=True)
# Multi-dimensional regularization for fairness definitions:
name = ['EqOdd', 'DemoParity']
plot_accuracy_contours(mats_dict,
name,
M_const,
b_const,
bound=(4,4),
data_name='S(U)',
save=True)
## Orderings
# NOTE place in the order of adding to the rest.
list_name = ['PosClassBal', 'DemoParity', 'EqOdd']
plot_slices(mats_dict, list_name, M_const, b_const, save=True, data_name='SU')
```
---
```
import glob
import os
import pickle
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import numpy as np
import datetime as dt
from ta import add_all_ta_features
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
```
#### Requirements
- pandas==0.25.1
- ta==0.4.7
- scikit-learn==0.21.3
#### Background on Trade Recommender Models
Trade recommender models were created with the goal of predicting whether the price of a cryptocurrency will go up or down over the next time period (the period length is set per model). For example, if a model's period is 6 hours and it predicts that the price will go up, then the price 6 hours after the prediction time (the timestamp of the data point the model predicts from) should be higher than it was at prediction time.
Hundreds of model iterations were generated in this notebook, and the best one for each exchange/trading pair was selected based on which iteration returned the highest net profit. When training the random forest classifiers, performance varied widely across periods and parameters, so there was no one-size-fits-all model; as a result, each final model has its own period and parameters. The data was obtained from the respective exchanges via their APIs, and models were trained on 1-hour candlestick data from 2015 through Oct 2018. The test set covered Jan 2019 - Oct 2019, with a two-month gap left between the train and test sets to prevent data leakage. The models output 0 (sell) or 1 (buy), and profit was calculated by backtesting on the 2019 test set. The profit calculation incorporated real-world fees and treated any consecutive "buy" prediction as a "hold" trade so that fees would not be paid on those transactions. The final models were all profitable, with gains anywhere from 40% to 95% over the Jan 1, 2019 to Oct 30, 2019 period. Visualizations of how these models performed given a $10K portfolio can be viewed at https://github.com/Lambda-School-Labs/cryptolytic-ds/blob/master/finalized_notebooks/visualization/tr_performance_visualization.ipynb
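The consecutive-buy-as-hold accounting described above can be sketched in a few lines. This is a toy illustration with hypothetical predictions and returns (the notebook's `performance` function below is the real implementation; the 0.35% fee rate matches it):

```python
def backtest(preds, period_returns, stake=10000, fee_rate=0.0035):
    """Apply each period's return while the previous prediction was 'buy';
    charge round-trip fees only when a new position is opened, so a run of
    consecutive buy predictions is treated as a single held trade."""
    gross, entries, in_position = 0.0, 0, False
    for prev_pred, ret in zip(preds, period_returns[1:]):
        if prev_pred:                # act on the *previous* period's signal
            gross += stake * ret
            if not in_position:
                entries += 1         # only a fresh entry incurs fees
            in_position = True
        else:
            in_position = False
    fees = entries * 2 * fee_rate * stake   # entry + exit fee per position
    return gross - fees, entries

# buy, buy (hold), sell, buy -> only two fee-paying entries
net, entries = backtest([1, 1, 0, 1], [0.00, 0.02, 0.01, -0.01, 0.03])
print(net, entries)  # 460.0 2
```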
The separate models created for each exchange/trading pair combination were:
- Bitfinex BTC/USD
- Bitfinex ETH/USD
- Bitfinex LTC/USD
- Coinbase Pro BTC/USD
- Coinbase Pro ETH/USD
- Coinbase Pro LTC/USD
- HitBTC BTC/USD
- HitBTC ETH/USD
- HitBTC LTC/USD
##### Folder Structure:
├── trade_recommender/ <-- The top-level directory for all trade recommender work
│ │
│ ├── trade_rec_models.ipynb <-- Notebook for trade recommender models
│ │
│ ├── data/ <-- Directory for csv files of 1 hr candle data
│ │ └── data.csv
│ │
│ ├── pickles/ <-- Directory for all trade rec models
│ │ └── models.pkl
│ │
│ ├── tr_pickles/ <-- Directory for best trade rec models
│ │ └── models.pkl
### Get all csv filenames into a variable - 1 hr candles
```
csv_filenames = glob.glob('data/*.csv') # modify to your filepath for data
print(len(csv_filenames))
csv_filenames
```
# Functions
#### OHLCV Data Resampling
```
def resample_ohlcv(df, period):
""" Changes the time period on cryptocurrency ohlcv data.
Period is a string denoted by '{time_in_minutes}T'(ex: '1T', '5T', '60T')."""
# Set date as the index. This is needed for the function to run
df = df.set_index(['date'])
# Aggregation function
ohlc_dict = {'open':'first',
'high':'max',
'low':'min',
'close': 'last',
'base_volume': 'sum'}
# Apply resampling
df = df.resample(period, closed='left', label='left').agg(ohlc_dict)  # the `how=` keyword was removed from pandas
return df
```
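For reference, here is a self-contained example of the same aggregation pattern on synthetic data, using the `.resample(...).agg(...)` form (the legacy `how=` keyword was removed in newer pandas):

```python
import pandas as pd

# four 1-hour candles of synthetic data ('60T' = 60 minutes)
idx = pd.date_range('2019-01-01', periods=4, freq='60T')
df = pd.DataFrame({'open':  [1.0, 2.0, 3.0, 4.0],
                   'high':  [2.0, 3.0, 4.0, 5.0],
                   'low':   [0.5, 1.5, 2.5, 3.5],
                   'close': [1.5, 2.5, 3.5, 4.5],
                   'base_volume': [10, 20, 30, 40]}, index=idx)
ohlc_dict = {'open': 'first', 'high': 'max', 'low': 'min',
             'close': 'last', 'base_volume': 'sum'}
two_hour = df.resample('120T', closed='left', label='left').agg(ohlc_dict)
print(two_hour)
# first 2h candle: open=1.0, high=3.0, low=0.5, close=2.5, volume=30
```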
#### Filling NaNs
```
# resample_ohlcv function will create NaNs in df where there were gaps in the data.
# The gaps could be caused by exchanges being down, errors from cryptowatch or the
# exchanges themselves
def fill_nan(df):
"""Iterates through a dataframe and fills NaNs with appropriate
open, high, low, close values."""
# Forward fill close column.
df['close'] = df['close'].ffill()
# Backward fill the open, high, low rows with the close value.
df = df.bfill(axis=1)
return df
```
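A quick self-contained check of the row-wise backfill trick on a synthetic gap row. Note that `bfill(axis=1)` copies the close into open/high/low only because close sits to their right in column order:

```python
import numpy as np
import pandas as pd

# one good candle followed by a gap row of NaNs
df = pd.DataFrame({'open': [1.0, np.nan], 'high': [2.0, np.nan],
                   'low':  [0.5, np.nan], 'close': [1.5, np.nan]})
df['close'] = df['close'].ffill()   # carry the last close forward
df = df.bfill(axis=1)               # each row: fill NaNs from the next column to the right
print(df.iloc[1].tolist())  # [1.5, 1.5, 1.5, 1.5]
```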
#### Feature Engineering
```
def feature_engineering(df, period):
"""Takes in a dataframe of 1 hour cryptocurrency trading data
and returns a new dataframe with selected period, new technical analysis features,
and a target.
"""
# Add a datetime column to df
df['date'] = pd.to_datetime(df['closing_time'], unit='s')
# Convert df to selected period
df = resample_ohlcv(df, period)
# Add feature to indicate gaps in the data
df['nan_ohlc'] = df['close'].apply(lambda x: 1 if pd.isnull(x) else 0)
# Fill in missing values using fill function
df = fill_nan(df)
# Reset index
df = df.reset_index()
# Create additional date features
df['year'] = df['date'].dt.year
df['month'] = df['date'].dt.month
df['day'] = df['date'].dt.day
# Add technical analysis features
df = add_all_ta_features(df, "open", "high", "low", "close", "base_volume")
# Replace infinite values with NaNs
df = df.replace([np.inf, -np.inf], np.nan)
# Drop any features whose mean of missing values is greater than 20%
df = df[df.columns[df.isnull().mean() < .2]]
# Replace remaining NaN values with the mean of each respective column and reset index
df = df.apply(lambda x: x.fillna(x.mean()),axis=0)
# Create a feature for close price difference
df['close_diff'] = (df['close'] - df['close'].shift(1))/df['close'].shift(1)
# Function to create target
def price_increase(x):
if (x-(.70/100)) > 0:
return True
else:
return False
# Create target
target = df['close_diff'].apply(price_increase)
# To make the prediction before it happens, put target on the next observation
target = target[1:].values
df = df[:-1]
# Create target column
df['target'] = target
# Remove first row of dataframe bc it has a null target
df = df[1:]
# Pick features
features = ['open', 'high', 'low', 'close', 'base_volume', 'nan_ohlc',
'year', 'month', 'day', 'volume_adi', 'volume_obv', 'volume_cmf',
'volume_fi', 'volume_em', 'volume_vpt', 'volume_nvi', 'volatility_atr',
'volatility_bbh', 'volatility_bbl', 'volatility_bbm', 'volatility_bbhi',
'volatility_bbli', 'volatility_kcc', 'volatility_kch', 'volatility_kcl',
'volatility_kchi', 'volatility_kcli', 'volatility_dch', 'volatility_dcl',
'volatility_dchi', 'volatility_dcli', 'trend_macd', 'trend_macd_signal',
'trend_macd_diff', 'trend_ema_fast', 'trend_ema_slow',
'trend_adx_pos', 'trend_adx_neg', 'trend_vortex_ind_pos',
'trend_vortex_ind_neg', 'trend_vortex_diff', 'trend_trix',
'trend_mass_index', 'trend_cci', 'trend_dpo', 'trend_kst',
'trend_kst_sig', 'trend_kst_diff', 'trend_ichimoku_a',
'trend_ichimoku_b', 'trend_visual_ichimoku_a', 'trend_visual_ichimoku_b',
'trend_aroon_up', 'trend_aroon_down', 'trend_aroon_ind', 'momentum_rsi',
'momentum_mfi', 'momentum_tsi', 'momentum_uo', 'momentum_stoch',
'momentum_stoch_signal', 'momentum_wr', 'momentum_ao',
'others_dr', 'others_dlr', 'others_cr', 'close_diff', 'date', 'target']
df = df[features]
return df
```
#### Profit and Loss function
```
def performance(X_test, y_preds):
""" Takes in a test dataset and a model's predictions, calculates and returns
the profit or loss. When the model generates consecutive buy predictions,
anything after the first one are considered a hold and fees are not added
for the hold trades. """
fee_rate = 0.35
# creates dataframe for features and predictions
df_preds = X_test.copy()  # copy so the caller's test set is not mutated
df_preds['y_preds'] = y_preds
# shift predictions one period (trades execute on the prior signal) and encode True as 1, False as 0
df_preds['binary_y_preds'] = df_preds['y_preds'].shift(1).apply(lambda x: 1 if x == True else 0)
# performance results from adding the closing difference percentage of the rows where trades were executed
performance = ((10000 * df_preds['binary_y_preds']*df_preds['close_diff']).sum())
# calculate fees; consecutive buys count as holds, so fees apply only to entries
# creates a count list for when trades were triggered
df_preds['preds_count'] = df_preds['binary_y_preds'].cumsum()
# feature that determines the instance of whether the list increased
df_preds['increase_count'] = df_preds['preds_count'].diff(1)
# feature that creates signal of when to buy(1), hold(0), or sell(-1)
df_preds['trade_trig'] = df_preds['increase_count'].diff(1)
# number of total entries(1s)
number_of_entries = (df_preds.trade_trig.values==1).sum()
# performance takes into account fees given the rate at the beginning of this function
pct_performance = ((df_preds['binary_y_preds']*df_preds['close_diff']).sum())
# calculate the percentage paid in fees
fees_pct = number_of_entries * 2 * fee_rate/100
# calculate fees in USD
fees = number_of_entries * 2 * fee_rate / 100 * 10000
# calculate net profit in USD
performance_net = performance - fees
# calculate net profit percent
performance_net_pct = performance_net/10000
return pct_performance, performance, fees, performance_net, performance_net_pct
```
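The entry-counting trick in `performance` (a cumsum followed by two diffs) can be checked on a toy prediction series. A `fillna(0)` is added here so that a buy run starting at the head of the series is also counted, since the raw diff chain leaves NaN in the first positions:

```python
import pandas as pd

preds = pd.Series([0, 1, 1, 0, 1, 0, 1, 1, 1])   # three separate buy runs
counts = preds.cumsum()              # running number of buy periods
increase = counts.diff(1)            # 1 where this period is a buy, else 0
trig = increase.fillna(0).diff(1)    # +1 where a buy run starts, -1 where it ends
entries = int((trig == 1).sum())
print(entries)  # 3
```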
#### Modeling Pipeline
```
def modeling_pipeline(csv_filenames, periods=['360T','720T','960T','1440T']):
"""Takes csv file paths of data for modeling, performs feature engineering,
train/test split, creates a model, reports train/test score, and saves
a pickle file of the model in a directory called /pickles. The best models
are moved to a directory called tr_pickles at the end"""
line = '------------'
performance_list = []
for file in csv_filenames:
# model name from the file path, e.g. 'data/bitfinex_btc_usd_3600.csv' -> 'bitfinex_btc_usd'
name = file.split('/')[1][:-9]
# read csv
csv = pd.read_csv(file, index_col=0)
for period in periods:
max_depth_list = [17]
# max_depth_list = [17, 20, 25, 27]
for max_depth in max_depth_list:
max_features_list = [40]
# max_features_list = [40, 45, 50, 55, 60]
for max_features in max_features_list:
print(line + name + ' ' + period + ' ' + str(max_depth) + ' ' + str(max_features) + line)
# create a copy of the csv
df = csv.copy()
# engineer features
df = feature_engineering(df, period)
# train test split
train = df[df['date'] < '2018-10-30 23:00:00'] # cutoff oct 30 2018
test = df[df['date'] > '2019-01-01 23:00:00'] # cutoff jan 01 2019
print('train and test shape ({model}):'.format(model=name), train.shape, test.shape)
# features and target
features = df.drop(columns=['target', 'date']).columns.tolist()
target = 'target'
# define X, y vectors
X_train = train[features]
X_test = test[features]
y_train = train[target]
y_test = test[target]
# instantiate model
model = RandomForestClassifier(max_features=max_features,
max_depth=max_depth,
n_estimators=100,
n_jobs=-1,
random_state=42)
try:
# filter out datasets that are too small
if X_test.shape[0] > 500:
# fit model
model.fit(X_train, y_train)
print('model fitted')
# train accuracy
train_score = model.score(X_train, y_train)
print('train accuracy:', train_score)
# make predictions
y_preds = model.predict(X_test)
print('predictions made')
# test accuracy
score = accuracy_score(y_test, y_preds)
print('test accuracy:', score)
# get profit and loss
a, b, c, d, e = performance(X_test, y_preds)
print(f'net profits: {str(round(d,2))}')
# formatting for filename
t = period[:-1]
# download pickle
(pickle.dump(model, open('pickles/{model}_{t}_{max_features}_{max_depth}.pkl'
.format(model=name, t=t,
max_features=str(max_features),
max_depth=str(max_depth)), 'wb')))
print('{model} pickle saved!\n'.format(model=name))
# save net performance to list
performance_list.append([f'{name}', period, max_features, max_depth, a, b, c , d, e])
else:
print('{model} does not have enough data!\n'.format(model=name))
except Exception as e:
print('error with model:', e)
# create dataframe for model performance
df = pd.DataFrame(performance_list, columns = ['ex_tp', 'period', 'max_features',
'max_depth','pct_gain','gain', 'fees',
'net_profit', 'pct_net_profit'])
# sort by net profit descending and drop duplicates
df2 = df.sort_values(by='net_profit', ascending=False).drop_duplicates(subset='ex_tp')
# get the names, periods, max_features, max_depth for best models
models = df2['ex_tp'].values
periods = df2['period'].values
max_features = df2['max_features'].values
max_depth = df2['max_depth'].values
# save the best models in a new directory /tr_pickles
for i in range(len(models)):
model_name = models[i] + '_' + periods[i][:-1] + '_' + str(max_features[i]) + '_' + str(max_depth[i])
os.rename(f'pickles/{model_name}.pkl', f'tr_pickles/{models[i]}.pkl')
# returning the dataframes for model performance
# df1 contains performance for all models trained
# df2 contains performance for best models
return df, df2
periods=['360T']
df, df2 = modeling_pipeline(csv_filenames, periods)
```
## training models with specific parameters
This part is not necessary if you ran the search above. It is for when you already know the best parameters and want to train only the best models, so you don't have to train hundreds of them.
```
def modeling_pipeline(csv_filenames, param_dict):
"""Takes csv file paths of data for modeling and parameters, performs feature engineering,
train/test split, creates a model, reports train/test score, and saves
a pickle file of the model in a directory called /pickles."""
line = '------------'
performance_list = []
for file in csv_filenames:
# define model name
name = file.split('/')[1][:-9]
# read csv
df = pd.read_csv(file, index_col=0)
params = param_dict[name]
print(params)
period = params['period']
print(period)
max_features = params['max_features']
max_depth = params['max_depth']
print(line + name + ' ' + period + line)
# engineer features
df = feature_engineering(df, period)
# train test split
train = df[df['date'] < '2018-10-30 23:00:00'] # cutoff oct 30 2018
test = df[df['date'] > '2019-01-01 23:00:00'] # cutoff jan 01 2019
print('train and test shape ({model}):'.format(model=name), train.shape, test.shape)
# features and target
features = df.drop(columns=['target', 'date']).columns.tolist()
target = 'target'
# define X, y vectors
X_train = train[features]
X_test = test[features]
y_train = train[target]
y_test = test[target]
# instantiate model
model = RandomForestClassifier(max_features=max_features,
max_depth=max_depth,
n_estimators=100,
n_jobs=-1,
random_state=42)
# fit model
if X_train.shape[0] > 500:
model.fit(X_train, y_train)
print('model fitted')
# train accuracy
train_score = model.score(X_train, y_train)
print('train accuracy:', train_score)
# make predictions
y_preds = model.predict(X_test)
print('predictions made')
# test accuracy
score = accuracy_score(y_test, y_preds)
print('test accuracy:', score)
# get profit and loss
a, b, c, d, e = performance(X_test, y_preds)
print(f'net profits: {str(round(d,2))}')
# formatting for filename
t = period[:-1]
# download pickle
pickle.dump(model, open('pickles/{model}_{t}.pkl'.format(model=name, t=t,), 'wb'))
print('{model} pickle saved!\n'.format(model=name))
# save net performance to list
performance_list.append([f'{name}', period, a, b, c , d, e])
else:
print('{model} does not have enough data!\n'.format(model=name))
# create df of model performance
df = pd.DataFrame(performance_list, columns = ['ex_tp', 'period', 'pct_gain',
'gain', 'fees', 'net_profit', 'pct_net_profit'])
# sort performance by net_profit and drop duplicates
df2 = df.sort_values(by='net_profit', ascending=False).drop_duplicates(subset='ex_tp')
models = df2['ex_tp'].values
periods = df2['period'].values
# move models to new dir tr_pickles
for i in range(len(models)):
model_name = models[i] + '_' + periods[i][:-1]
os.rename(f'pickles/{model_name}.pkl', f'tr_pickles/{models[i]}.pkl')
# returning the dataframes for model performance
# df1 contains performance for all models trained
# df2 contains performance for best models
return df, df2
param_dict = {'bitfinex_ltc_usd': {'period': '1440T', 'max_features': 50, 'max_depth': 20},
'hitbtc_ltc_usdt': {'period': '1440T', 'max_features': 45, 'max_depth': 27},
'coinbase_pro_ltc_usd': {'period': '960T', 'max_features': 50, 'max_depth': 17},
'hitbtc_btc_usdt': {'period': '360T', 'max_features': 40, 'max_depth': 17},
'coinbase_pro_btc_usd': {'period': '960T', 'max_features': 55, 'max_depth': 25},
'coinbase_pro_eth_usd': {'period': '960T', 'max_features': 50, 'max_depth': 27},
'bitfinex_btc_usd': {'period': '1200T', 'max_features': 55, 'max_depth': 25},
'bitfinex_eth_usd': {'period': '1200T', 'max_features': 60, 'max_depth': 20}
}
# 'hitbtc_eth_usdt': {'period': '1440T', 'max_depth': 50}
# ^ this can't go in param_dict because it's trained differently (see the next section)
csv_paths = csv_filenames.copy()
del csv_paths[4]
print(csv_paths)
print(len(csv_paths))
len(csv_filenames)
df, df2 = modeling_pipeline(csv_paths, param_dict)
```
#### train hitbtc eth_usdt model separately - a special case that performed better with fewer parameters
```
# for the hitbtc eth usdt model
def modeling_pipeline(csv_filenames):
"""Takes csv file paths of data for modeling, performs feature engineering,
train/test split, creates a model, reports train/test score, and saves
a pickle file of the model in a directory called /pickles."""
line = '------------'
performance_list = []
for file in csv_filenames:
# define model name
name = file.split('/')[1][:-9]
# read csv
df = pd.read_csv(file, index_col=0)
period = '1440T'
print(period)
print(line + name + ' ' + period + line)
# engineer features
df = feature_engineering(df, period)
# train test split
train = df[df['date'] < '2018-10-30 23:00:00'] # cutoff oct 30 2018
test = df[df['date'] > '2019-01-01 23:00:00'] # cutoff jan 01 2019
print('train and test shape ({model}):'.format(model=name), train.shape, test.shape)
# features and target
features = df.drop(columns=['target', 'date']).columns.tolist()
target = 'target'
# define X, y vectors
X_train = train[features]
X_test = test[features]
y_train = train[target]
y_test = test[target]
# instantiate model
model = RandomForestClassifier(max_depth=50,
n_estimators=100,
n_jobs=-1,
random_state=42)
# filter out datasets that are too small
if X_train.shape[0] > 500:
# fit model
model.fit(X_train, y_train)
print('model fitted')
# train accuracy
train_score = model.score(X_train, y_train)
print('train accuracy:', train_score)
# make predictions
y_preds = model.predict(X_test)
print('predictions made')
# test accuracy
score = accuracy_score(y_test, y_preds)
print('test accuracy:', score)
# get profit and loss
a, b, c, d, e = performance(X_test, y_preds)
print(f'net profits: {str(round(d,2))}')
# formatting for filename
t = period[:-1]
# download pickle
pickle.dump(model, open('pickles/{model}_{t}.pkl'.format(model=name, t=t,), 'wb'))
print('{model} pickle saved!\n'.format(model=name))
# save net performance to list
performance_list.append([f'{name}', period, a, b, c , d, e])
else:
print('{model} does not have enough data!\n'.format(model=name))
# create df of model performance
df = pd.DataFrame(performance_list, columns = ['ex_tp', 'period', 'pct_gain',
'gain', 'fees', 'net_profit', 'pct_net_profit'])
df2 = df.sort_values(by='net_profit', ascending=False).drop_duplicates(subset='ex_tp')
models = df2['ex_tp'].values
periods = df2['period'].values
# move model to new dir tr_pickles
for i in range(len(models)):
model_name = models[i] + '_' + periods[i][:-1]
os.rename(f'pickles/{model_name}.pkl', f'tr_pickles/{models[i]}.pkl')
# returning the dataframes for model performance
# df1 contains performance for all models trained
# df2 contains performance for best models
return df, df2
filepath = ['data/hitbtc_eth_usdt_3600.csv']
df, df2 = modeling_pipeline(filepath)
```
## What's next?
- neural networks
- implement NLP with data scraped from twitter to see how frequency of crypto discussion affects the predictions
- more exchange/trading pair support
---
```
import numpy as np
import sklearn.datasets as sk_dataset
from sklearn.model_selection import train_test_split, KFold
from scipy.io import loadmat
n_node = 10 # num of nodes in hidden layer
lam = 1 # regularization parameter, lambda
weight_range = [-1, 1] # range of random weights
bias_range = [0, 1] # range of random biases
class RVFL:
""" RVFL Classifier """
def __init__(self, n_node, lam, w_range, b_range, activation='relu', same_feature=False):
self.n_node = n_node
self.lam = lam
self.w_range = w_range
self.b_range = b_range
self.weight = None
self.bias = None
self.beta = None
a = Activation()
self.activation_function = getattr(a, activation)
self.std = None
self.mean = None
self.same_feature = same_feature
def train(self, data, label, n_class):
assert len(data.shape) > 1
assert len(data) == len(label)
assert len(label.shape) == 1
data = self.standardize(data) # Normalize
n_sample = len(data)
n_feature = len(data[0])
self.weight = (self.w_range[1] - self.w_range[0]) * np.random.random([n_feature, self.n_node]) + self.w_range[0]
self.bias = (self.b_range[1] - self.b_range[0]) * np.random.random([1, self.n_node]) + self.b_range[0]
h = self.activation_function(np.dot(data, self.weight) + np.dot(np.ones([n_sample, 1]), self.bias))
d = np.concatenate([h, data], axis=1)
# d = np.concatenate([d, np.ones_like(d[:, 0:1])], axis=1) # concat column of 1s
y = self.one_hot_encoding(label, n_class)
# Minimize training complexity
if n_sample > (self.n_node + n_feature):
self.beta = np.linalg.inv((self.lam * np.identity(d.shape[1]) + np.dot(d.T, d))).dot(d.T).dot(y)
else:
self.beta = d.T.dot(np.linalg.inv(self.lam * np.identity(n_sample) + np.dot(d, d.T))).dot(y)
def predict(self, data, raw_output=False):
data = self.standardize(data) # Normalize
h = self.activation_function(np.dot(data, self.weight) + self.bias)
d = np.concatenate([h, data], axis=1)
# d = np.concatenate([d, np.ones_like(d[:, 0:1])], axis=1)
result = self.softmax(np.dot(d, self.beta))
if not raw_output:
result = np.argmax(result, axis=1)
return result
def eval(self, data, label):
assert len(data.shape) > 1
assert len(data) == len(label)
assert len(label.shape) == 1
data = self.standardize(data) # Normalize
h = self.activation_function(np.dot(data, self.weight) + self.bias)
d = np.concatenate([h, data], axis=1)
# d = np.concatenate([d, np.ones_like(d[:, 0:1])], axis=1)
result = np.dot(d, self.beta)
result = np.argmax(result, axis=1)
acc = np.sum(np.equal(result, label))/len(label)
return acc
def one_hot_encoding(self, label, n_class):
y = np.zeros([len(label), n_class])
for i in range(len(label)):
y[i, label[i]] = 1
return y
def standardize(self, x):
if self.same_feature is True:
if self.std is None:
self.std = np.maximum(np.std(x), 1/np.sqrt(len(x)))
if self.mean is None:
self.mean = np.mean(x)
return (x - self.mean) / self.std
else:
if self.std is None:
self.std = np.maximum(np.std(x, axis=0), 1/np.sqrt(len(x)))
if self.mean is None:
self.mean = np.mean(x, axis=0)
return (x - self.mean) / self.std
def softmax(self, x):
e_x = np.exp(x - np.max(x, axis=1, keepdims=True))  # subtract the row max for numerical stability
return e_x / np.sum(e_x, axis=1, keepdims=True)
class Activation:
def sigmoid(self, x):
return 1 / (1 + np.e ** (-x))
def sine(self, x):
return np.sin(x)
def sign(self, x):
return np.sign(x)
def relu(self, x):
return np.maximum(0, x)
if __name__=="__main__":
dataset = loadmat('coil20.mat')
label = np.array([dataset['Y'][i][0] - 1 for i in range(len(dataset['Y']))])
data = dataset['X']
n_class = 20
# train-test-split
X_train, X_test, y_train, y_test = train_test_split(data, label, test_size=0.2, random_state=42)
kf = KFold(n_splits=10, shuffle=True, random_state=1)
val_acc = []
max_index = -1
for i, kf_values in enumerate(kf.split(X_train, y_train)):
# print(f'train: {train_index}, val: {val_index}')
print('Validation: {}'.format(i + 1))
train_index, val_index = kf_values
X_val_train, X_val_test = X_train[train_index], X_train[val_index]
y_val_train, y_val_test = y_train[train_index], y_train[val_index]
rvfl = RVFL(n_node, lam, weight_range, bias_range, 'relu', False)
rvfl.train(X_val_train, y_val_train, n_class)
prediction = rvfl.predict(X_val_test, True)
acc = rvfl.eval(X_val_test, y_val_test)
print(f'Validation accuracy: {acc}')
val_acc.append(acc)
if acc >= max(val_acc):
max_index = train_index
X_train, y_train = X_train[max_index], y_train[max_index]
rvfl = RVFL(n_node, lam, weight_range, bias_range, 'relu', False)
rvfl.train(X_train, y_train, n_class)
prediction = rvfl.predict(X_test, True)
acc = rvfl.eval(X_test, y_test)
print(f'Accuracy: {acc}')
```
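The `if n_sample > (self.n_node + n_feature)` branch in `train` above is not just a heuristic: the two expressions are the primal and dual closed forms of the same ridge-regression solution, and the branch simply inverts the smaller matrix. A quick numerical check on random data (not the COIL-20 set) that they agree:

```python
import numpy as np

rng = np.random.default_rng(0)
d = rng.standard_normal((8, 5))    # n_sample x (n_node + n_feature) design matrix
y = rng.standard_normal((8, 3))    # one-hot-style targets
lam = 1.0

# primal form: inverts a (n_feature x n_feature) matrix
beta_primal = np.linalg.inv(lam * np.identity(d.shape[1]) + d.T @ d) @ d.T @ y
# dual form: inverts an (n_sample x n_sample) matrix instead
beta_dual = d.T @ np.linalg.inv(lam * np.identity(d.shape[0]) + d @ d.T) @ y

print(np.allclose(beta_primal, beta_dual))  # True
```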
---
```
import numpy as np
import pandas as pd
import pathlib
import os
os.chdir('..')
import warnings
warnings.simplefilter('ignore')
from fp.traindata_samplers import CompleteData
from fp.missingvalue_handlers import CompleteCaseAnalysis
from fp.dataset_experiments import GermanCreditDatasetSexExperiment
from fp.scalers import NamedStandardScaler
from fp.learners import LogisticRegression, DecisionTree
from fp.post_processors import NoPostProcessing, RejectOptionPostProcessing
from fp.pre_processors import NoPreProcessing, DIRemover
import matplotlib.pyplot as plt
import seaborn as sns
# creating list of parameters that we will alter to observe variations
seeds = [0xbeef, 0xcafe, 0xdead]
learners = [LogisticRegression()]
processors = [(NoPreProcessing(), NoPostProcessing()), (DIRemover(1.0), NoPostProcessing()), (NoPreProcessing(), RejectOptionPostProcessing())]
# Specify the strategy for selecting the optimal result on the validation set from all the processor settings above.
# E.g. if a list ['accuracy', 'selection_rate', 'false_discovery_rate'] is given, the optimal setting is chosen by a
# skyline (lexicographic) order: highest accuracy first, with ties broken by selection rate, then by false discovery rate.
# E.g. if a dict {'accuracy': 0.5, 'selection_rate': 0.3, 'false_discovery_rate': 0.2} is given, the optimal setting is
# the one maximizing accuracy*0.5 + selection_rate*0.3 + false_discovery_rate*0.2.
# If more than one setting attains the best value under either strategy, all of them are returned as optimal.
filter_res_on_val_by_order = ['accuracy', 'selection_rate', 'false_discovery_rate']
filter_res_on_val_by_weight_sum = {'accuracy': 0.5, 'selection_rate': 0.3, 'false_discovery_rate': 0.2}
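# A toy illustration of the two selection strategies (hypothetical metric values;
# the experiment classes compute the real metrics internally):
toy_results = [
    {'name': 'A', 'accuracy': 0.80, 'selection_rate': 0.40, 'false_discovery_rate': 0.10},
    {'name': 'B', 'accuracy': 0.80, 'selection_rate': 0.55, 'false_discovery_rate': 0.05},
    {'name': 'C', 'accuracy': 0.78, 'selection_rate': 0.60, 'false_discovery_rate': 0.20},
]
def pick_by_order(rows, order):
    # skyline (lexicographic) order: highest first metric, ties broken by the next one
    return max(rows, key=lambda r: tuple(r[m] for m in order))
def pick_by_weighted_sum(rows, weights):
    # single weighted-sum score
    return max(rows, key=lambda r: sum(r[m] * w for m, w in weights.items()))
toy_order = ['accuracy', 'selection_rate', 'false_discovery_rate']
toy_weights = {'accuracy': 0.5, 'selection_rate': 0.3, 'false_discovery_rate': 0.2}
print(pick_by_order(toy_results, toy_order)['name'])           # 'B' (accuracy tie broken by selection rate)
print(pick_by_weighted_sum(toy_results, toy_weights)['name'])  # 'C' (highest weighted score)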
def calculate_metrics(seed, learners, pre_processors, post_processors, filter_val_strategy):
'''
Experiment function to run the experiments with multiple combinations of learners and processors in the input
'''
exp = GermanCreditDatasetSexExperiment(
fixed_random_seed=seed,
train_data_sampler=CompleteData(),
missing_value_handler=CompleteCaseAnalysis(),
numeric_attribute_scaler=NamedStandardScaler(),
learners=learners,
pre_processors=pre_processors,
post_processors=post_processors,
optimal_validation_strategy=filter_val_strategy)
exp.run()
return exp.generate_file_path()
def run_exp(seeds, learners, processors, filter_val_strategy):
'''
This is the main driver function; it calls calculate_metrics to compute metrics for combinations of the various learners and pre-/post-processing techniques.
'''
skyline_res_folder = {}
for seed in seeds:
input_preprocessors = [x[0] for x in processors]
input_postprocessors = [x[1] for x in processors]
skyline_res_folder[seed] = calculate_metrics(seed, learners, input_preprocessors, input_postprocessors, filter_val_strategy)
return skyline_res_folder
# running experiments using above parameters
filter_order_results = run_exp(seeds, learners, processors, filter_res_on_val_by_order)
print (filter_order_results)
filter_formula_results = run_exp(seeds, learners, processors, filter_res_on_val_by_weight_sum)
print (filter_formula_results)
```
## Visualize the result of skyline selection for a single trial
```
def get_skyline_candidates(seed_path_map, focus_seed):
'''
Prepare the skyline candidates data for visualization
'''
setting_labels = {'reject_option': 'RO', 'diremover-1.0': 'DI1.0',
'no_pre_processing': 'NoPre', 'no_post_processing': 'NoPost',
'DecisionTree': 'DT', 'LogisticRegression': 'LR'}
skyline_df = pd.read_csv(seed_path_map[focus_seed] + "skyline_options.csv")
# only keep the name of preprocessor (idx 1), learner (idx 5), and postprocessor (idx 6) in the settings for visualization purpose
skyline_df['setting'] = skyline_df['setting'].apply(lambda x: "__".join([x.split('__')[i] for i in range(len(x.split('__'))) if i in [1, 5, 6]]))
# remove the seed name in the setting for visualization purpose
skyline_df['setting'] = skyline_df['setting'].apply(lambda x: x.replace('-' + str(focus_seed), ''))
# rename (shorten) the settings' names for visualization purpose
skyline_df['setting'] = skyline_df['setting'].apply(lambda x: '_'.join([setting_labels[stepi] for stepi in x.split('__')]))
# show the candidates using only one fairness intervention method
return skyline_df[skyline_df['setting'].apply(lambda x: 'NoP' in x)]
# read the skyline options for current seed
focus_seed = 0xdead
filter_order_options = get_skyline_candidates(filter_order_results, focus_seed)
filter_order_options.head(5)
filter_formula_options = get_skyline_candidates(filter_formula_results, focus_seed)
filter_formula_options.head(5)
def output_scatter_plot(f_name, df, x_col, y_col, hue_col='setting', color_p='Set2'):
'''
Visualization of the skyline options w.r.t. two metrics (X and Y axis) in the skyline inputs.
'''
sns.set(style='whitegrid', font_scale=1.5)
# add jitters for x to account for ties in the values
data = df.copy()
noise_param = 100
data[x_col] += np.random.random(data.shape[0]) / noise_param - 1 / noise_param / 2
fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(13, 6))
sns.scatterplot(x_col, y_col, hue_col, data=data.query("data == 'val'"), ax=ax1, style='optimal', s=100)
sns.scatterplot(x_col, y_col, hue_col, data=data.query("data == 'test'"), ax=ax2, style='optimal', s=100)
ax1.set_title('validation')
ax2.set_title('test')
plt.tight_layout()
# save plot into the disc
cur_f_path = f_name[0:f_name.rfind("/") + 1]
if not os.path.exists(cur_f_path):
directory = os.path.dirname(cur_f_path)
pathlib.Path(directory).mkdir(parents=True, exist_ok=True)
plt.savefig(f_name + '.png')
filter_order_options
filter_formula_options
x_col = 'selection_rate'
y_col = 'accuracy'
output_fname = "_".join(['examples/skyline_plots/Order', 's'+str(focus_seed), x_col, y_col])
output_scatter_plot(output_fname, filter_order_options, x_col, y_col)
x_col = 'selection_rate'
y_col = 'accuracy'
output_fname = "_".join(['examples/skyline_plots/Formula', 's'+str(focus_seed), x_col, y_col])
output_scatter_plot(output_fname, filter_formula_options, x_col, y_col)
```
---
```
import os
import sys
module_path = os.path.abspath(os.path.join('../../src'))
print(module_path)
if module_path not in sys.path:
sys.path.append(module_path)
import csv
from pathlib import Path
from os import listdir
import pickle
from labeling_utils import load_labels
import numpy as np
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split
import pandas as pd
tags=["Songbird","Water Bird","Insect","Running Water","Rain","Cable","Wind","Aircraft"]
from tabulate import tabulate
tag_set=tags[:]
import torch
%matplotlib inline
from torch.utils.data import Dataset, DataLoader
```
* Count only the highest-ranking tag (what if both of them exist?)
* The Bird tag is ambiguous: it could also be a waterbird, so how should we handle it? The same question applies to Animal.
```
#LOAD MODEL predictions
splits_path= Path('/files/scratch/enis/data/nna/labeling/splits/')
import csv
from os import listdir
from pathlib import Path
# LOAD LABELS by human
labelsbyhumanpath=Path('/scratch/enis/data/nna/labeling/results/')
# keep only the csv files
labelsbyhuman=[i for i in listdir(labelsbyhumanpath) if (".csv" in i)]
humanresults={}
counter=0
for apath in labelsbyhuman:
    with open(labelsbyhumanpath / apath, newline='') as f:
        reader = csv.reader(f)
        for row in reader:
            counter += 1
            humanresults[row[0]] = row[1:]
print("unique files:",len(humanresults),"\ntotal files",counter)
# Join Vehicle and Aircraft: relabel "Vehicle" tags as "Aircraft"
for file_name, tagshere in humanresults.items():
    humanresults[file_name] = ["Aircraft" if tag == "Vehicle" else tag
                               for tag in tagshere]
# load name of the labels
labels=load_labels()
# returns a dictionary: keys are the tags from tag_set, values are binary arrays
def vectorized_y_true(humanresults, tag_set):
    y_true = {tag: np.zeros(len(humanresults)) for tag in tag_set}
    for i, tags in enumerate(humanresults.values()):
        # we only look for tags in tag_set
        for tag in tag_set:
            if tag in tags:
                y_true[tag][i] = 1
            else:
                y_true[tag][i] = 0
    return y_true
y_true_dict = vectorized_y_true(humanresults,tags)
y_true_all = pd.DataFrame(y_true_dict)
y_true = np.array(y_true_all["Songbird"]).astype("int64")
def map_reduce(X, func_type):
    if func_type == "Average":
        return np.mean(X, axis=1)
    elif func_type == "Concat":
        return np.reshape(X, (-1, 1280))
    else:
        raise Exception("ERROR with embed type")
def pick_embed(embed_type):
    X = np.empty((len(humanresults), 10, 128))
    for index, i in enumerate(humanresults):
        if embed_type == "Raw":
            file_name = i.replace(".mp3", "_rawembed.npy")
        elif embed_type == "Normalized":
            file_name = i.replace(".mp3", "_embed.npy")
        elif embed_type == "Unsupervised":
            file_name = i.replace(".mp3", "_superembed.npy")
        else:
            raise Exception("ERROR with embed type")
        an_x = np.load(split_path / file_name)
        X[index, :, :] = an_x[:]
    return X
split_path=Path('/scratch/enis/data/nna/labeling/split_embeddings/')
# # filter by username
# split_embeds=[i for i in listdir(split_path) ]
# raw_embeds = [i for i in split_embeds if "rawembed" in i]
# proc_embeds = [i for i in split_embeds if "_embed" in i]
# super_embed = [i for i in split_embeds if "_superembed" in i]
X=pick_embed("Unsupervised")
# X=pick_embed("Raw")
# X=map_reduce(X,map_reduce_type)
humanresults_keys=list(humanresults.keys())
X=X.astype("float32")
from sklearn.metrics import average_precision_score
from sklearn.metrics import roc_auc_score
def cal_metrics(y_true_dict, y_pred_dict):
    results = {}
    for tag in tag_set:
        y_true = y_true_dict[tag]
        y_pred = y_pred_dict[tag]
        metrics = precision_recall_fscore_support(y_true, y_pred, pos_label=1, average="binary")
        results[tag] = metrics
    return results
def cal_auc(y_true_dict, y_pred_dict_prob):
    results = {}
    for tag in tag_set:
        y_true = y_true_dict[tag]
        y_pred = y_pred_dict_prob[tag]
        metrics_auc = roc_auc_score(y_true, y_pred)  # or average_precision_score
        results[tag] = metrics_auc
    return results
def print_results(results, y_true_dict, prob_threshold=0.5):
    headers = ["Label", "Positive", "Precision", "Recall", "Fscore"]
    table = []
    sample_count = len(next(iter(y_true_dict.values())))
    print("Total samples:", sample_count, "and threshold is", prob_threshold)
    for tag in tag_set:
        positive_count = sum(y_true_dict[tag])
        table.append([tag, positive_count, *results[tag][:-1]])
    print(tabulate(table, headers=headers))
device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
# device="cpu"
def augment_data(data, augmentad_size):
    if data.shape[0] >= augmentad_size:
        return data[:]
    # augment samples by stitching the first half of one randomly chosen
    # sample to the second half of another
    augmentad = data[:]
    while augmentad.shape[0] < augmentad_size:
        left = augmentad_size - augmentad.shape[0]
        left = augmentad.shape[0] if left > augmentad.shape[0] else left
        new = np.empty((left, augmentad.shape[1], augmentad.shape[2]))
        first = augmentad[torch.randperm(augmentad.shape[0])[:left], :, :].reshape(-1, 10, 128)
        second = augmentad[torch.randperm(augmentad.shape[0])[:left], :, :].reshape(-1, 10, 128)
        new[:, 0:5, :], new[:, 5:10, :] = first[:, 0:5, :], second[:, 5:10, :]
        augmentad = np.concatenate([augmentad, new])
        # drop any duplicates created by the random pairing
        augmentad = np.unique(augmentad, axis=0)
    return augmentad
def nogradloss(X_test, y_test):
    with torch.no_grad():
        outputs_test = net(X_test)
        loss = criterion(outputs_test, y_test)
    return loss.item()
def nogradmetrics(X_test, y_test, net, multi_segment=False):
    with torch.no_grad():
        if not multi_segment:
            y_pred = net(X_test)
            loss = criterion(y_pred, y_test)
            # forward() returns log-probabilities, so exponentiate
            y_pred = torch.exp(y_pred)
            train_auc = roc_auc_score(y_test.cpu().numpy(),
                                      y_pred[:, 1].cpu().numpy())
            return loss.item(), train_auc
        else:
            # each 10-second sample was split into ten 1-second segments;
            # score a sample by its most confident positive segment
            y_pred = net(X_test)
            y_pred_10 = y_pred.reshape(-1, 10, 2)
            indices = torch.max(y_pred[:, 1].reshape(-1, 10), dim=1).indices
            y_pred_10 = y_pred_10[range(y_pred_10.shape[0]), indices, :].reshape(-1, 2)
            y_test_10 = torch.max(y_test.reshape(-1, 10), dim=1).values
            loss = criterion(y_pred_10, y_test_10)
            train_auc = roc_auc_score(y_test_10.cpu().numpy(),
                                      y_pred_10[:, 1].cpu().numpy())
            return loss.item(), train_auc
```
#### Run only one of the following cells; each one prepares the train/validation/test split differently
```
# MEAN, AUGMENTED: augment the entire dataset, including the test and validation sets
pos_index= (y_true==1)
neg_index= (y_true==0)
X_shuffled = X[:,torch.randperm(X.shape[1]),:]
X_shuffled_pos=X_shuffled[pos_index,:,:]
X_shuffled_neg=X_shuffled[neg_index,:,:]
augmentad_pos=augment_data(X_shuffled_pos,2000)
augmentad_neg=augment_data(X_shuffled_neg,2000)
X_augmented=np.concatenate([augmentad_pos,augmentad_neg]).astype("float32")
y_true_aug=np.concatenate([np.ones(augmentad_pos.shape[0]),np.zeros(augmentad_neg.shape[0])]).astype("int64")
MULTI_SEGMENT = False
X_augmented_mean=X_augmented.mean(axis=1)
X_train, X_test, y_train, y_test = train_test_split(
X_augmented_mean, y_true_aug, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.25, random_state=42)
# X_train=X_train.reshape(X_train.shape[0],-1,128)
# X_test=X_test.reshape(X_test.shape[0],-1,128)
# X_val=X_val.reshape(X_test.shape[0],-1,128)
# AUGMENTATION Experiment
# from "Shuffling and Mixing Data Augmentation for Environmental Sound Classification" by Tadanobu Inoue et al.
MULTI_SEGMENT = False
FLAT=False
X_train, X_test, y_train, y_test = train_test_split(
X.reshape(X.shape[0],-1), y_true, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.25, random_state=42)
X_train=X_train.reshape(X_train.shape[0],-1,128)
X_test=X_test.reshape(X_test.shape[0],-1,128)
X_val=X_val.reshape(X_val.shape[0],-1,128)
pos_index= (y_train==1)
neg_index= (y_train==0)
# shuffle each sample within itself: granularity is 1 second and samples are
# 10 seconds long, so this changes the order of the seconds
X_shuffled = X_train[:,torch.randperm(X_train.shape[1]),:]
# no shuffle
# X_shuffled = X_train[:,:,:]
X_shuffled_pos=X_shuffled[pos_index,:,:]
X_shuffled_neg=X_shuffled[neg_index,:,:]
augmentation_ratio=1.2
augmentation_ratio=(1/augmentation_ratio)
augmentad_pos=augment_data(X_shuffled_pos,int(X_shuffled_pos.shape[0]//augmentation_ratio))
# augmentad_neg=augment_data(X_shuffled_neg,int(X_shuffled_neg.shape[0]//augmentation_ratio))
augmentad_neg=augment_data(X_shuffled_neg,X_shuffled_neg.shape[0])
X_train_augmented=np.concatenate([augmentad_pos,augmentad_neg]).astype("float32")
y_train_aug=np.concatenate([np.ones(augmentad_pos.shape[0]),np.zeros(augmentad_neg.shape[0])]).astype("int64")
X_train=X_train_augmented[:]
y_train=y_train_aug[:]
X_train=X_train.mean(axis=1)
X_test=X_test.mean(axis=1)
X_val=X_val.mean(axis=1)
# concat inputs
MULTI_SEGMENT = False
FLAT=True
X_train, X_test, y_train, y_test = train_test_split(
X.reshape(X.shape[0],-1), y_true, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.25, random_state=42)
X_train=X_train.reshape(X_train.shape[0],-1,128)
X_test=X_test.reshape(X_test.shape[0],-1,128)
X_val=X_val.reshape(X_val.shape[0],-1,128)
# MEAN
MULTI_SEGMENT = False
FLAT=False
X_train, X_test, y_train, y_test = train_test_split(
X_mean, y_true, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.25, random_state=42)
# X_train=X_train.reshape(X_train.shape[0],-1,128)
# X_test=X_test.reshape(X_test.shape[0],-1,128)
# X_val=X_val.reshape(X_test.shape[0],-1,128)
# BEST model came from this cell
# separate: treat each 1-second segment as its own sample, with a different
# AUC calculation to be fair (take the max of the 10 per-sample predictions)
MULTI_SEGMENT = True
FLAT=False
X_train, X_test, y_train, y_test = train_test_split(
X.reshape(X.shape[0],-1), y_true, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.25, random_state=42)
# use 1 second as samples
X_train=X_train.reshape(-1,128)
X_test=X_test.reshape(-1,128)
X_val=X_val.reshape(-1,128)
# repeat labels
y_train=np.repeat(y_train,10)
y_test=np.repeat(y_test,10)
y_val=np.repeat(y_val,10)
```
### From here, run all cells: moving data to device, model creation and training
```
X_train=torch.from_numpy(X_train).to(device)
X_test=torch.from_numpy(X_test).to(device)
X_val=torch.from_numpy(X_val).to(device)
# birds
y_val=torch.from_numpy(y_val).to(device)
y_test=torch.from_numpy(y_test).to(device)
y_train=torch.from_numpy(y_train).to(device)
class audioDataset(Dataset):
    """Dataset of audio embeddings and their labels."""

    def __init__(self, X, y, transform=None):
        """
        Args:
            X: tensor of input embeddings.
            y: tensor of labels.
            transform (callable, optional): Optional transform to be applied
                on a sample.
        """
        self.X = X
        self.y = y
        self.transform = transform

    def __len__(self):
        return self.X.shape[0]

    def __getitem__(self, idx):
        sample = self.X[idx], self.y[idx]
        if self.transform:
            sample = self.transform(sample)
        return sample
params = {'batch_size': 200,
'shuffle': True,
'num_workers': 0}
training_set=audioDataset(X_train,y_train)
training_generator = DataLoader(training_set, **params)
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        if FLAT:
            self.fc1 = nn.Linear(1280, 100)
        else:
            self.fc1 = nn.Linear(128, 100)
        torch.nn.init.xavier_normal_(self.fc1.weight)
        self.fc1_bn = nn.BatchNorm1d(100)
        # fc2 and fc3 are defined but not used in the current forward pass
        self.fc2 = nn.Linear(32, 32)
        torch.nn.init.xavier_normal_(self.fc2.weight)
        self.fc2_bn = nn.BatchNorm1d(32)
        self.fc3 = nn.Linear(32, 100)
        torch.nn.init.xavier_normal_(self.fc3.weight)
        self.fc3_bn = nn.BatchNorm1d(100)
        self.fc4 = nn.Linear(100, 2)
        torch.nn.init.xavier_normal_(self.fc4.weight)
        self.drop = nn.Dropout(p=0.2)

    def forward(self, x):
        if FLAT:
            x = x.view(-1, 1280)
        else:
            x = x.view(-1, 128)
        x = F.relu(self.fc1_bn(self.fc1(x)))
        x = self.drop(x)
        x = self.fc4(x)
        # return log-probabilities
        x = F.log_softmax(x, dim=1)
        return x
net = Net().to(device)
loss_values={"val":[],"train":[],"train_auc":[],"val_auc":[]}
import torch.optim as optim
# forward() returns log-probabilities (log_softmax), so NLLLoss is the matching
# criterion; CrossEntropyLoss would apply log_softmax a second time
# criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0,5.0]).to(device))
criterion = nn.NLLLoss()
optimizer = optim.Adam(net.parameters(), weight_decay=0.001)
import copy
from IPython import display
import time
best_acc1 = 0
for epoch in range(100):  # loop over the dataset multiple times
    for i, data in enumerate(training_generator, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
    # evaluate on the validation and training sets after each epoch
    net.eval()
    test_loss, test_auc = nogradmetrics(X_val, y_val, net, multi_segment=MULTI_SEGMENT)
    train_loss, train_auc = nogradmetrics(X_train, y_train, net, multi_segment=MULTI_SEGMENT)
    net.train()
    loss_values["val"].append(test_loss)
    loss_values["train"].append(train_loss)
    loss_values["val_auc"].append(test_auc)
    loss_values["train_auc"].append(train_auc)
    if epoch % 20 == 0:  # print every 20 epochs
        print('[%d] test : %.3f train: %.3f test auc %.3f train auc %.3f' %
              (epoch, test_loss, train_loss, test_auc, train_auc))
    # keep a copy of the model with the best validation AUC so far
    is_best = test_auc > best_acc1
    best_acc1 = max(test_auc, best_acc1)
    if is_best:
        best_model = copy.deepcopy(net)
print('Finished Training')
best_acc1
# save best model, change name accordingly, by adding validation accuracy
import time
timestr = time.strftime("%Y%m%d-%H%M%S")
torch.save(best_model.state_dict(), "../../data/models/bird_FC_089valid_"+timestr+".pth")
# visualize results
results=pd.DataFrame(loss_values)
first=results[["val","train"]].plot()
second = results[["val_auc","train_auc"]].plot()
fig=second.get_figure()
fig.savefig("unsupervisedresults.png")
# results on test dataset, loss and AUC
nogradmetrics(X_test,y_test,best_model,multi_segment=MULTI_SEGMENT)
test_set=audioDataset(X_test,y_test)
test_generator = DataLoader(test_set, **params)
```
The following lines compare the sklearn MLP with this model.
```
X_last=X[:]
# X_last=X_last.reshape(X_last.shape[0],-1)
X_last=X_last.reshape(-1,128)
X_last_sklearn=X_last[:]
X_last=torch.from_numpy(X_last).to(device)
X_last.shape
y_pred_10.shape,y_test_10.shape
X_val.shape,y_val.shape
best_model.eval()
with torch.no_grad():
    y_pred = best_model(X_val)
    y_pred_10 = y_pred.reshape(-1, 10, 2)
    indices = torch.max(y_pred[:, 1].reshape(-1, 10), dim=1).indices
    y_pred_10 = y_pred_10[range(y_pred_10.shape[0]), indices, :].reshape(-1, 2)
    y_val_10 = torch.max(y_val.reshape(-1, 10), dim=1).values
    # loss = criterion(y_pred_10, y_true)
    train_auc = roc_auc_score(y_val_10.cpu().numpy(),
                              y_pred_10[:, 1].cpu().numpy())
print(train_auc)
y_pred=y_pred_10.cpu()
y_pred=torch.exp(y_pred[:,1])
y_pred[y_pred>=0.5]=1
y_pred[y_pred<0.5]=0
from sklearn.metrics import confusion_matrix
tn, fp, fn, tp=confusion_matrix(y_val_10.cpu().numpy(), y_pred).ravel()
tn, fp, fn, tp
# example output: (164, 23, 21, 52)
#
# comparison of confusion-matrix entries:
#       sklearn  pytorch
# tp       52       68
# fp       23       67
# tn      164      120
# fn       21        5
total = 0
for i, mm in enumerate(y_true):
    predict = 1 if y_pred[i] > 0.5 else 0
    if mm != predict:
        total += 1
        print(humanresults_keys[i])
print(total)
```
#### compare with sklearn
```
import pickle
# and later you can load it
with open('../Visualizations/raw_many2one_NN.pkl', 'rb') as f:
    clf = pickle.load(f)

def many2one_predict(X, clf):
    result_count = (X.shape[0] // 10) if X.shape[0] % 10 == 0 else (X.shape[0] // 10) + 1
    results = np.empty(result_count)
    for i in range(0, X.shape[0], 10):
        result10 = clf.predict(X[i:i+10, :])
        results[(i // 10)] = np.max(result10)
    return results
X_last_sklearn.shape
#TEST
samples=np.ones((200,128))
y_pred_sklearn=many2one_predict(X_val.cpu().numpy(),clf['Neural Net_Songbird'])
from sklearn.metrics import confusion_matrix
tn, fp, fn, tp=confusion_matrix(y_val_10.cpu().numpy(), y_pred_sklearn).ravel()
tn, fp, fn, tp
total = 0
for i, mm in enumerate(y_true):
    predict = 1 if y_pred[i] > 0.5 else 0
    predict_sklearn = int(y_pred_sklearn[i])
    if mm != predict and mm == 1:
        total += 1
        print(humanresults_keys[i], mm, predict, predict_sklearn)
print(total)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_true, y_pred)
np.exp([-6.7866, -4.4634])
F.softmax(torch.Tensor([0.5403, 0.4597]),dim=0),torch.exp(F.log_softmax(torch.Tensor([ 0.1147, -0.0307]),dim=0))
```
# 3 Branches
So far we have concentrated mainly on sequential programs with a single pathway through them, where the flow of control proceeds through the program statements in linear sequence, except when it encounters a loop element. If a loop is encountered, then the control flow is redirected back ‘up’ the program to the start of a loop block.
However, in several of the notebooks, you have also seen how a conditional `if...` statement can be used to optionally pass control to a set of instructions in the sequential program *if* a particular condition is met.
The `if...` statement fits the sequential program model by redirecting control flow, albeit briefly, to a set of ‘extra’ commands if the conditional test evaluates to true.
A sequential program will always follow the same sequentially ordered path. But to be useful, a robot program will often need to make decisions and behave differently in different circumstances. To do this, the program has to have alternative *branches* in the program flow where it can follow different courses of action depending on some conditional test.
Although the `while` command does appear to offer some sort of branch-like behaviour, we still think of it as a sequential-style operator because the flow of control keeps trying to move in the same forwards direction.
*In other programming languages, this construct may be referred to as an `if...then...else...`. In Python, the ‘then’ is assumed.*
We'll be trying out some conditional statements using `nbtutor` to step through some simple programs as we execute them, so let's load it in now:
```
%load_ext nbtutor
```
## 3.1 An `if...` without an `else...`
It is sometimes useful to have just a single branch to the `if` statement. Python provides a simple `if...` statement for this purpose, which you may recall from previous notebooks.
### 3.1.1 Activity – An `if` on its own
Run the following code cell as it stands, with the `x` variable taking the initial value `1` (`x = 1`). Can you predict what will happen?
```
#%%nbtutor --reset --force
x = 1
print("Are you ready?")
if x == 1:
    print("x equals 1")
print("All done...")
```
Try to predict what will happen if you change the initial value and run the cell again.
*Double-click this cell to edit it and record your own prediction*
Run the code cell above again with a modified initial value. Was your prediction about what would happen correct? Make a note in the cell below about how successful your prediction was. If your prediction was incorrect, try to explain why you think your prediction was different to how the program actually behaved.
*Double-click this cell to edit it to record notes on the success or otherwise of your own prediction.*
Uncomment the *%%nbtutor* magic and run the code cell using different values of `x`, observing how the program flow progresses in each case.
#### Discussion
*Click on the arrow in the sidebar or run this cell to reveal my observations.*
With the initial value of the variable `x` set to `1` (`x = 1`) the program displayed the messages *Are you ready?*, *x equals 1* and *All done...* because the `if...` statement evaluated the `x == 1` test condition as `True` and passed control *into* the `if...` block.
When `x` was initialised to a different value, for example as `x = 2`, only the messages *Are you ready?* and *All done...* were displayed because the `if...` conditional test failed and redirected control flow to the first statement *after* the `if...` block.
## 3.2 A single branch — `if..else..`
On its own, an `if` statement tests a condition and, if the condition evaluates as `True`, executes the code block within the `if` statement before passing control to the next program statement.
If we wanted to choose alternative actions depending on the evaluation of a particular condition, we *could* create multiple `if` statements, one to handle each outcome:
```python
if raining==True:
    print("Take your coat")
if raining==False:
    print("Looks like a nice day.")
```
Alternatively, we can use an `if..else` statement to take one path if the condition evaluates as `True`, otherwise (`else`) perform the alternative action:
```python
if raining==True:
    print("Take your coat")
else:
    print("Looks like a nice day.")
```
In the branching `if...else...` operator, the program control flow takes one of two different ‘forward-flowing’ paths depending on whether the conditional statement evaluated as part of the `if...` statement evaluates to true or false. If it evaluates to `True`, then the statements in the first if block of code are evaluated; if the condition evaluates to `False`, then the statements in the else block are evaluated. In both cases, control then flows forwards to the next statement after the `if...else...` block.
### 3.2.1 Activity – Stepping through an `if...else...` statement
In this activity we will look at a simple branching program to explore how `if...else...` works in more detail.
If you were to run the following code in a code cell, what do you think will happen?
```python
x = 1
if x == 1:
    print("x equals 1")
else:
    print("x does not equal 1")
print("All done...")
```
*Double-click this cell to edit it and make your prediction here.*
Once you have made your prediction, run the following cell. In the cell beneath it, record what happened and how it compared to your prediction.
*You may find it informative to use `nbtutor` to step through each line of code in turn to see how the program flow progresses. To do this, uncomment the `%%nbtutor` magic in the first line of the code cell by deleting the `#` at the start of the line before running the code cell.*
```
#%%nbtutor --reset --force
x = 1
if x == 1:
    print("x equals 1")
else:
    print("x does not equal 1")
print("All done...")
```
*Double-click this cell to edit it to record here what happened when you ran the code in the above cell. Did its behaviour match your prediction?*
What do you think will happen when you run the following code cell?
Run the cell and use *nbtutor* to step through the program. How does the program flow differ from the case where `x` had the value `1`?
```
%%nbtutor --reset --force
x = 2
if x == 1:
    print("x equals 1")
else:
    print("x does not equal 1")
print("All done...")
```
#### Discussion
*Click the arrow in the sidebar or run this cell to reveal my observations.*
In the cell where `x = 1`, I predicted that the program would print the message *'x equals 1'* and then the message *'All done...'*.
Viewing the trace, I could see how the program started by initialising the `x` variable to the value `1`, then checked whether `x == 1` (that is, whether `x` was equal to `1`); because it was, the program then moved onto the `print("x equals 1")` statement and printed the first message. Then the program flow continued to the first instruction after the `if...else...` block, which was the statement that printed the *'All done...'* message.
When I ran the program with a value of `x` other than `1`, the control passed from the `if...` statement, where the conditional test evaluated as `False`, to the first line in the `else...` block, which printed the message *'x does not equal 1'*, before moving onto the first line after the `if...else...` block as before.
## 3.3 Combining loops and branching statements
It is important to be clear that the condition in a branching statement (`if...` or `if...else...`) is checked only when execution reaches that part of the program.
In the examples above, you stepped through the programs and saw that execution passed through the `if` statement only once. When creating useful robot programs, we often want conditions to be checked repeatedly. For example, the robot may need to repeatedly check that it has not bumped into an obstacle, or whether it has found a bright or dark area.
You have already seen how the `while...` loop tests a condition at the start of a loop and then passes control to the statements inside the loop before looping back to test the `while...` condition again.
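As a minimal sketch of that repeated checking, using a simple counter in place of a real sensor reading, notice that the condition is re-tested on every pass round the loop:

```python
# A stand-in 'sensor' value; 100 corresponds to the white background
reading = 100

while reading == 100:
    # take a 'new reading' each time round the loop
    reading = reading - 25

# control only reaches here once the condition fails
print("Left the loop with a reading of", reading)
```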
### 3.3.1 Using conditional statements inside a loop
One commonly used robot programming design pattern is to embed conditional statements within an outer infinite control loop:
```python
# Loop forever
while True:
    if condition:
        do_this()
    else:
        do_that()

    if another_condition:
        do_something_else()
```
You may recall from an earlier notebook that we also used an `if...` statement to return the control flow back to the top of a loop before all the statements in the loop body had been executed, or to break out of a loop early and pass control to the first statement after the loop block.
This ability to combine loop and branching statements is very powerful and even a very simple program can produce quite complex robot behaviour.
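As a reminder, the following short sketch (not a robot program) shows both behaviours: `continue` passes control back to the top of the loop, and `break` leaves the loop early:

```python
for number in range(6):
    if number % 2 == 0:
        # return control to the top of the loop for the next number
        continue
    if number > 4:
        # leave the loop, passing control to the statement after it
        break
    print(number)

print("All done...")
```

This prints `1`, then `3`, and finally *All done...*.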
### 3.3.2 Nested `if` statements
Sometimes you may want to develop quite a complicated reasoning path.
In some cases, testing a compound logical statement using Boolean logic operators may suit our purposes, but we are still limited by the fact that the conditional test must return a single `True` or `False` value:
```
weather = 'rain'
temperature = 'cold'
if (weather=='rain') or (temperature=='cold'):
    print("Wear a coat")
if (weather=='rain') and (temperature=='cold'):
    print("...and maybe a scarf too...")
```
In other cases, we may need to make use of a so-called __nested if__ statement, where we build up a ladder of `if` statements, one inside the other.
For example, with the specified weather conditions, what does the program recommend you do?
```
weather = 'rain'
temperature = 'warm'
windy = False
if temperature=='warm':
    print("It's warm today...")
    if weather=="rain" and not windy:
        print("...but take a brolly")
    if windy:
        print("...and windy enough to fly a kite.")
```
What does it suggest if you change `windy = False` to `windy = True`?
*Other more elaborate variants of compounded or nested branching statements are supported in other languages, for example in the form of `case` or `switch` statements that can select from multiple courses of action based on the element that is evaluated at the start of the statement.*
## 3.4 Multiple conditions using `if...elif...else...`
The `if...else...` statement allows us to creating a branching control flow statement that performs a conditional test and then chooses between two alternative outcomes depending on the result of the test.
Python also supports a yet more complex branch construction in the form of an `if...elif...else...` statement that allows us to make multiple conditional tests. Run the following code cell and use `nbtutor` to explore the flow through the program.
```
%%nbtutor --reset --force
days_of_week = ['Monday', 'Tuesday', 'Wednesday', 'Thursday',
'Friday', 'Saturday', 'Sunday']
for day in days_of_week:
    print(f'Today is {day}...')
    if day == 'Wednesday':
        print('...half day closing')
    elif day in ['Saturday', 'Sunday']:
        print('...the weekend')
    else:
        print('...a weekday')
print("And that's all the days of the week.")
```
We can also have multiple `elif` statements between the opening `if...` and the closing `else`.
Read through the code in the following code cell. Try to work out what the program will do and how the control flow will pass though the program as it executes before you run the cell and step though the code using `nbtutor`.
```
%%nbtutor --reset --force
days_of_week = ['Monday', 'Tuesday', 'Wednesday', 'Thursday',
'Friday', 'Saturday', 'Sunday']
for day in days_of_week:
    message = f'Today is {day}...'
    print(message)
    if day == 'Monday':
        feeling = "...I don't like Mondays..."
    elif day == 'Tuesday':
        feeling = '...Ruby Tuesday'
    elif day == 'Friday':
        feeling = "...Friday I'm In Love"
    else:
        feeling = "...I don't know a song title for that day"
    print(feeling)
```
Run the previous code cell and step through its execution using *nbtutor*; observe how the control flow steps increasingly through the stack of `...elif...` tests as the `for...` loop iterates through the items in the `days_of_week` list.
Note that there is no requirement that you test the same variable in each step. The different steps could test a different variable or range of variables.
For example, in the following program we might decide what to take out with us on a walk based on a variety of conditions:
```
raining = False
temperature = 'warm'
if raining:
    print("Wear boots")
elif temperature == 'warm':
    print("Wear sandals")
else:
    print("Wear shoes")
```
Also note that there is an *order* in which we test the various conditions as the control passes through the `if...elif...` conditional tests. We can use this as an informal way of prioritising one behaviour over another:
```python
if this_really_important_thing:
    ...
elif this_less_important_thing:
    ...
elif this_minor_thing:
    ...
else:
    ...
```
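As a concrete sketch with made-up condition names, when more than one condition is true, only the first matching, highest-priority branch runs:

```python
battery_low = True
obstacle_ahead = True

if battery_low:
    # the most important behaviour is tested first
    print("Return to base")
elif obstacle_ahead:
    # this is only ever checked if the battery is not low
    print("Turn away from the obstacle")
else:
    print("Keep driving")
```

Even though `obstacle_ahead` is `True`, the program prints *Return to base* because that branch is tested first.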
## 3.5 Using control flow statements in robot programs
Branching operators, when combined with loop-based control flow operators, mean we can construct a wide range of control strategies and behaviours for a mobile robot. In the following activities, you will see several such approaches.
In order to get started, we need to load in the RoboLab simulator in the normal way:
```
from nbev3devsim.load_nbev3devwidget import roboSim, eds
%load_ext nbev3devsim
```
### 3.5.1 Activity – Detecting black and grey
Load the *Grey\_and\_black* background into the simulator, download the program to the simulator, and then run it several times with the robot moved to different starting positions.
What does the program cause the robot to do?
```
%%sim_magic_preloaded -b Grey_and_black -x 400 -y 200
# Start the robot driving forwards
tank_drive.on(SpeedPercent(50), SpeedPercent(50))
#Sample the light sensor reading
sensor_value = colorLeft.reflected_light_intensity_pc
#Check the light sensor reading
while sensor_value == 100:
    # Whilst we are on the white background
    # update the reading
    sensor_value = colorLeft.reflected_light_intensity_pc
    # and optionally display it
    #print(sensor_value)
    # We also (implicitly) keep driving forwards
# When the reading is below 100
# we have started to see something.
# Drive a little way onto the band to get a good reading
tank_drive.on_for_rotations(SpeedPercent(50),
SpeedPercent(50), 0.2)
#Check the sensor reading
sensor_value = colorLeft.reflected_light_intensity_pc
# and optionally display it
#print(sensor_value)
# Now make a decision about what we see
if sensor_value < 50:
    say("I see black")
else:
    say("I see grey")
```
*Double click this cell to edit it and add your description here.*
#### Example solution
*Click the arrow in the sidebar or run this cell to reveal an example solution.*
The robot moves forward over the white background until it reaches the grey or black area. If the background is black, then the robot says *black*; otherwise, it says *grey*.
The program works by driving the robot forwards and continues in that direction while it is over the white background (a reflected light sensor reading of 100). When the light sensor reading goes below the white background value of 100, control passes out of the while loop and on to the statement that drives the robot forwards a short distance further (0.2 wheel rotations) to ensure the sensor is fully over the band. The robot then checks its sensor reading again, and makes a decision about what to say based on the value of the sensor reading.
### 3.5.2 Activity – Combining loops and branching statements
Can you predict what the following program will cause the robot to do when it is downloaded and run in the simulator using the *Loop* background, with the robot initially placed inside the loop?
*Double click this cell to edit it and record your prediction.*
```
%%sim_magic_preloaded --background Loop -x 500
tank_drive.on(SpeedPercent(30), SpeedPercent(30))

while True:
    if colorLeft.reflected_light_intensity < 100:
        tank_drive.on_for_rotations(SpeedPercent(-30),
                                    SpeedPercent(-30), 2)
        tank_turn.on_for_rotations(-100, SpeedPercent(75), 2)
        tank_drive.on(SpeedPercent(30), SpeedPercent(30))
```
Download the program to the simulator and run it there to check your prediction. After a minute or two, stop the program from executing.
How does the behaviour of the program lead to the robot’s emergent behaviour in the simulator?
#### Example discussion
*Click on the arrow in the sidebar or run this cell to reveal an example discussion.*
When the program runs, the robot will explore the inside of the black oval, remaining inside it and reversing direction each time it encounters the black line.
The program is constructed from an `if` statement inside a *forever* loop. The `if` statement checks the light sensor reading; when this is low (which it will be when the black line is reached) the motor direction is reversed.
The `while True:` loop is a so-called *infinite loop* that will run indefinitely. In this case it is useful because we want the robot to keep behaving in the same way for as long as the program runs.
In other circumstances, we might want the loop to continue only while some condition holds true. In such cases, using the `while` statement to test the truth of a conditional statement is more useful.
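The two forms can be sketched in plain Python; here a simple counter stands in for a falling sensor reading, and nothing depends on the simulator API:

```python
# Form 1: loop only while a condition holds true
sensor_value = 100
while sensor_value >= 50:
    sensor_value -= 10          # the "reading" falls as the robot advances
print(sensor_value)             # 40: the first value that fails the test

# Form 2: an infinite loop that we leave explicitly with break
sensor_value = 100
while True:
    sensor_value -= 10
    if sensor_value < 50:
        break
print(sensor_value)             # 40 again
```

Both loops stop at the same point; the difference is whether the exit test sits in the loop header or in the loop body.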
### 3.5.3 Challenge – Three shades of grey
An earlier activity provided an example of a program that used an `if...else...` statement to distinguish between black and grey areas. The background (loaded into the simulator as the *Grey\_and\_black* background) actually contains four different shades: black, dark grey, medium grey and light grey. Can you construct a program that will report which shade the robot encounters?
A copy of the original program is provided below as a starting point. You will need to extend the code so that it can decide between three grey alternatives as well as the black band and say which band it saw.
```
%%sim_magic_preloaded -b Grey_and_black
# Start the robot driving forwards
tank_drive.on(SpeedPercent(50), SpeedPercent(50))
#Sample the light sensor reading
sensor_value = colorLeft.reflected_light_intensity_pc
#Check the light sensor reading
while sensor_value == 100:
    # Whilst we are on the white background
    # update the reading
    sensor_value = colorLeft.reflected_light_intensity_pc
    # and display it
    print(sensor_value)

# When the reading is below 100
# we have started to see something.
# Drive a little way onto the band to get a good reading
tank_drive.on_for_rotations(SpeedPercent(50), SpeedPercent(50), 0.2)

#Check the sensor reading
sensor_value = colorLeft.reflected_light_intensity_pc
# and display it
print(sensor_value)

# Now make a decision about what we see
if sensor_value < 50:
    say("I see black")
else:
    say("I see grey")
```
When you have modified the code, run the cell to download it to the simulator, ensure the *Grey\_and\_black* background is loaded, and then run the program in the simulator for various starting positions of the robot. Does it behave as you intended?
*Use this cell to record your own notes and predictions related to this challenge.*
##### Hint: click the arrow in the sidebar or run this cell to reveal a hint.
The original program uses an `if...else...` statement to distinguish between black and grey reflected light readings. An `elif...` clause lets you test alternative values within the same `if...else...` block.
To identify the values to use in the condition statements, inspect the simulator output window messages to see what sensor values are reported when the robot goes over different bands.
#### Example solution
*Click the arrow in the sidebar or run this cell to display an example solution.*
The robot sees the following values over each of the grey bands:
- light grey: ~86
- medium grey: ~82
- dark grey: ~50
- black: 0
Generally, when readings include lots of decimal places, we are unlikely ever to see exactly the same number twice. So rather than testing for an exact match, we use one or more threshold tests to check whether the number lies within a particular *range* of values, or above a certain minimum value.
If we assume those sensor readings are reliable, and that the same value is always reported for each band, then we can make the following decisions:
```python
if sensor_value > 86:
print('light grey')
elif sensor_value > 82:
print('medium grey')
elif sensor_value > 50:
print('dark grey')
else:
print('black')
```
We can make the test even more reliable by setting the threshold values halfway between the expected readings for adjacent bands. For example, 84 rather than 86 for distinguishing between light and medium grey; 66 rather than 82 for distinguishing between medium and dark grey; and 25 rather than 50 for distinguishing between dark grey and black.
This means that if there is a slight error in the reading, our thresholded test is likely to make the right decision about which side of the threshold value the (noisy) reading actually falls on.
For example, if we have a value of `sensor_value = 86` exactly, the conditional test `sensor_value > 86` will evaluate as `False` because the variable value 86 is not strictly greater than threshold value 86. But if the sensor value is `sensor_value = 86.00000000000000000001`, the test will evaluate as `True`.
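Both points can be checked directly, using the band readings listed above to compute the halfway thresholds:

```python
# Strict comparison at the boundary
print(86 > 86)                  # False: 86 is not strictly greater than 86
print(86.00000000000001 > 86)   # True: a slightly noisy reading crosses the threshold

# Midpoints between the expected band readings give more robust thresholds
bands = [86, 82, 50, 0]   # light grey, medium grey, dark grey, black
thresholds = [(a + b) / 2 for a, b in zip(bands, bands[1:])]
print(thresholds)         # [84.0, 66.0, 25.0]
```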
```
%%sim_magic_preloaded -b Grey_and_black
# Start the robot driving forwards
tank_drive.on(SpeedPercent(50), SpeedPercent(50))
#Sample the light sensor reading
sensor_value = colorLeft.reflected_light_intensity
#Check the light sensor reading
while sensor_value == 100:
    # Whilst we are on the white background
    # update the reading
    sensor_value = colorLeft.reflected_light_intensity
    # and display it
    print(sensor_value)

# When the reading is below 100
# we have started to see something.
# Drive onto the band to get a good reading
tank_drive.on_for_rotations(SpeedPercent(50), SpeedPercent(50), 0.2)

#Check the sensor reading
sensor_value = colorLeft.reflected_light_intensity
# and display it
print(sensor_value)

# Now make a decision about what we see
if sensor_value > 86:
    say("I see light grey")
elif sensor_value > 82:
    say("I see medium grey")
elif sensor_value > 50:
    say("I see dark grey")
else:
    say("I see black")
```
Other solutions are possible.
## 3.6 Noise and variation in real and simulated robots
One thing you might notice is that the robot may sometimes appear to give the "wrong answer", or a value that diverges from the one you expect, when taking a particular measurement. For example, if the sensor is not completely over a band, it will give a reading that does not exactly match a value you used in your conditional tests.
In a real robot, the sensors are also likely to be subject to various forms of environmental *noise*, such as electrical noise in the sensor, or perturbations in light readings caused by vibrations as the robot moves, which slightly change the height of the light sensor above the floor. (Shadows and variations in illumination can also affect light sensor readings.)
If you modify the position of the *Light sensor noise* slider in the simulator, you can see how the sensor readings are perturbed according to the amount of noise added.
In the idealised world of a robot simulator, it may at first seem as if we don't have to cope with the messiness of the physical world noise if we don't want to. But even in a simulator, we find there are issues relating to precision in the way numbers are represented. For example, even if we think we have set a variable to a specific value, it may not actually be represented as that value at the machine level, as the following example shows:
```
point_one = 0.1
# The `format()` function lets us control
# the output display of a variable
# In the following case, we can display
# the represented value 0.1 to 20 significant digits
format(point_one, '.20g')
```
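Because values like 0.1 are not stored exactly, testing floats for exact equality can fail unexpectedly; a tolerance-based comparison using the standard library's `math.isclose()` is usually safer:

```python
import math

# 0.1 + 0.2 is not stored exactly as 0.3
print(0.1 + 0.2 == 0.3)               # False
print(format(0.1 + 0.2, '.20g'))      # 0.30000000000000004441

# Compare within a tolerance instead
print(math.isclose(0.1 + 0.2, 0.3))   # True
```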
You will learn a little bit more about noise, and how to deal with it, later in the module.
## 3.7 Summary
In this notebook, you have seen how `if...` statements can be used to make a variety of decisions and trigger a range of different actions based on one or more tested conditions. In particular:
- a simple `if...` statement lets us perform one or more actions once and once only if a single conditional test evaluates to true
- an `if...else...` statement allows us to *branch* between two possible futures based on whether a single conditional test evaluates to true: if it is true, do one action; if not, do the other
- an `if...elif...else...` construction lets us run multiple different conditional tests. If the first test is true, do one thing; otherwise run the next test, and if that is true, do something else; and so on. If all the `elif...` tests evaluate to false, then run the final `else` block.
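The three constructs above can be sketched together in plain Python (the reading value here is illustrative):

```python
reading = 65

# Simple if: act only when the test is true
if reading < 100:
    print("left the white background")

# if...else: branch between two actions
if reading < 50:
    print("black")
else:
    print("grey")

# if...elif...else: test alternatives in turn
if reading > 86:
    band = "light grey"
elif reading > 82:
    band = "medium grey"
elif reading > 50:
    band = "dark grey"
else:
    band = "black"
print(band)
```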
In the next notebook, you will have an opportunity to explore a few more ways of using control-flow statements in the context of a robot control program.
# Amazon sentiment analysis: Structural correspondence learning
Data downloaded from `processed_acl.tar.gz`, processed for: John Blitzer, Mark Dredze and Fernando Pereira, *Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification*, Association for Computational Linguistics (ACL), 2007.
The method is based on the above paper and the [original SCL paper](http://john.blitzer.com/papers/emnlp06.pdf).
```
import numpy as np
import matplotlib.pyplot as plt
from read_funcs import organise_data, vectorise_data, select_high_freq_data
%matplotlib inline
from sklearn.preprocessing import Binarizer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import mutual_info_score
src = 'dvd'
tgt = 'kitchen'
XB, Y_src, XD, Y_tgt = organise_data(src, tgt)
# Vectorise the raw data
X_src, X_tgt, features = vectorise_data(XB, XD)
# Reduce the no. of features
N = 10000
X_src, X_tgt, features = select_high_freq_data(X_src, X_tgt, features, N)
# Visualise the difference in the frequent features
B_count = np.sum(X_src,0)
D_count = np.sum(X_tgt,0)
plt.plot(B_count[-3:-100:-1])
plt.plot(D_count[-3:-100:-1])
plt.legend([src, tgt])
```
## 1. Select pivot features
```
def compute_mutual_info(X, Y):
    N = X.shape[1]
    mutual_info = []
    for i in range(N):
        mutual_info.append(mutual_info_score(X[:,i], Y))
    return mutual_info
mutual_info_src = compute_mutual_info(X_src, Y_src)
sort_idx = np.argsort(mutual_info_src)
m = 50
pivot_features = [features[i] for i in sort_idx[-m:]]
print(np.asarray(pivot_features[-1:-20:-1]))
```
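For reference, the quantity that `mutual_info_score` computes can also be obtained directly from joint counts. A minimal pure-Python sketch (in nats), separate from the notebook's sklearn-based version:

```python
from collections import Counter
from math import log

def mutual_information(xs, ys):
    """Mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        mi += p_xy * log(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# A feature identical to the labels carries their full entropy
# (ln 2 for a balanced binary variable); an independent feature carries none.
print(round(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]), 4))  # 0.6931
print(round(mutual_information([0, 1, 0, 1], [0, 0, 1, 1]), 4))  # 0.0
```

Features with high mutual information against the source labels are exactly the pivot candidates selected above.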
## 2. Pivot predictor
```
# Binarise the data
X = np.r_[X_src, X_tgt]
binarizer = Binarizer().fit(X)
X_bin = binarizer.transform(X)
plt.plot(X_bin[:,10])
W = np.zeros((X.shape[1], m))
for i in range(m):
    # index from the end with -(i + 1): sort_idx[-0] would wrap round
    # to the lowest-scoring feature rather than a pivot
    Y_pivot = X_bin[:, sort_idx[-(i + 1)]]
    model = LogisticRegression(C = 1)
    model.fit(X_bin, Y_pivot)
    W[:, i] = model.coef_[0]  # coef_ has shape (1, n_features)
```
## 3. Low-dimensional feature space
```
u, s, vh = np.linalg.svd(W, full_matrices=False)
# Visualise low-dimensional space
u1 = u[:,2]
u1_sorted = sorted(range(len(u1)), key=lambda i: u1[i])
u1_pos_subspace = [features[i] for i in u1_sorted[-1:-21:-1]]
u1_neg_subspace = [features[i] for i in u1_sorted[:20]]
print(np.asarray(u1_pos_subspace))
print(np.asarray(u1_neg_subspace))
plt.plot(s[:10])
# The low-dimensional subspace from the third component shows confusing features
l = 50
theta = u[:,:l]
theta.shape
```
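The projection step can be seen in miniature with random data; the sizes below are hypothetical stand-ins for the notebook's, and only the shapes matter:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((100, 8))    # hypothetical: 100 features x 8 pivot predictors

# SVD of the pivot-predictor weights; keep the top-l left singular vectors
u, s, vh = np.linalg.svd(W, full_matrices=False)
l = 4
theta = u[:, :l]                     # projection matrix: features -> l dimensions

X = rng.standard_normal((5, 100))    # 5 hypothetical documents
X_low = X @ theta                    # low-dimensional representation
print(X_low.shape)                   # (5, 4)
```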
## 4. Prediction using enhanced feature space
```
# Baseline Classifier
model_BL= LogisticRegression(C = 1) # Regularisation parameter C
model_BL.fit(X_src, Y_src)
print('train {:s} acc: {:.3f}, test {:s} acc: {:.3f}'\
.format(src, model_BL.score(X_src, Y_src), tgt, model_BL.score(X_tgt,Y_tgt)))
def tune_reg_param_unsupervised(C_test, X_src_SCL, Y_src, X_tgt, Y_tgt, dev_size):
    X_train, X_dev, Y_train, Y_dev = train_test_split(X_src_SCL, Y_src, test_size = dev_size, random_state = 3)
    acc_train = []
    acc_dev = []
    for C in C_test:
        model_SCL = LogisticRegression(C = C)
        model_SCL.fit(X_train, Y_train)
        acc_train.append(model_SCL.score(X_train, Y_train))
        acc_dev.append(model_SCL.score(X_dev, Y_dev))
    C_opt = C_test[np.argmax(acc_dev)]
    model_SCL = LogisticRegression(C = C_opt)
    model_SCL.fit(X_train, Y_train)
    print('optimal C', C_opt, 'max dev acc', max(acc_dev), 'test acc', model_SCL.score(X_tgt, Y_tgt))
    plt.plot(C_test, acc_train)
    plt.plot(C_test, acc_dev)
# Enhanced feature space
scale_factor = 1
# scale_factor = X_mean/X_SCL_mean*5
X_src_SCL = np.c_[X_src, scale_factor*np.dot(X_src, theta)]
X_tgt_SCL = np.c_[X_tgt, scale_factor*np.dot(X_tgt, theta)]
C_test = np.linspace(0.01,0.5,20)
tune_reg_param_unsupervised(C_test, X_src_SCL, Y_src, X_tgt_SCL, Y_tgt, 0.1)
```
## Scaling of enhanced feature in SCL
**The 2006 work scaled the enhanced features so that their $\ell_1$ norm is five times that of the original features.**
```
X_l1 = np.sum(np.abs(X_src))
X_SCL_l1 = np.sum(np.abs(np.dot(X_src, theta)))
print(X_SCL_l1/X_l1)
scale_factor = X_l1/X_SCL_l1*5
print('scaling factor for 5 times l1 norm in enhanced feature space', scale_factor)
X_src_SCL = np.c_[X_src, scale_factor*np.dot(X_src, theta)]
X_tgt_SCL = np.c_[X_tgt, scale_factor*np.dot(X_tgt, theta)]
C_test = np.linspace(0.01,1,20)
tune_reg_param_unsupervised(C_test, X_src_SCL, Y_src, X_tgt_SCL, Y_tgt, 200)
# Try using only the low-dimensional space
X_src_SCL2 = np.dot(X_src, theta)
X_tgt_SCL2 = np.dot(X_tgt, theta)
C_test = np.linspace(0.01,1,20)
tune_reg_param_unsupervised(C_test, X_src_SCL2, Y_src, X_tgt_SCL2, Y_tgt, 200)
# Try out different scale factors
scale_list = [1] + [5*(i+1) for i in range(10)]
scale_acc = []
for i in range(11):
    scale_factor = scale_list[i]
    X_src_SCL = np.c_[X_src, scale_factor*np.dot(X_src, theta)]
    X_tgt_SCL = np.c_[X_tgt, scale_factor*np.dot(X_tgt, theta)]
    model_SCL = LogisticRegression(C = 0.06) # Regularisation parameter C
    model_SCL.fit(X_src_SCL, Y_src)
    scale_acc.append(model_SCL.score(X_tgt_SCL, Y_tgt))
plt.plot(scale_list, scale_acc)
# Weights of the pivot features
u_pivot = u[sort_idx[-m:],:]
u_non_pivot = u[sort_idx[:-m],:]
print('pivot mean/Non pivot mean: ', np.mean(abs(u_pivot))/np.mean(abs(u_non_pivot)))
```
## Finding the subspace of non-pivot feature only
Given the results above, the pivot features have 16 times as much weight as the non-pivot ones. Is SCL simply enhancing the pivot features? Would a subspace trained on non-pivot features work too?
```
# Select non-pivot features
X_bin_non = X_bin[:, sort_idx[:-m]]
X_bin_non.shape
from sklearn.linear_model import LogisticRegression
W_non = np.zeros((X_bin_non.shape[1], m))
for i in range(m):
    # index from the end with -(i + 1) so that i = 0 picks the top pivot
    Y_pivot = X_bin[:, sort_idx[-(i + 1)]]
    model = LogisticRegression(C = 1)
    model.fit(X_bin_non, Y_pivot)
    W_non[:, i] = model.coef_[0]  # coef_ has shape (1, n_features)
u_non, s, vh = np.linalg.svd(W_non, full_matrices=False)
non_pivot_features = [features[i] for i in sort_idx[:-m]]
u1 = u_non[:,3]
u1_sorted = sorted(range(len(u1)), key=lambda i: u1[i])
u1_pos_subspace = [non_pivot_features[i] for i in u1_sorted[-1:-21:-1]]
u1_neg_subspace = [non_pivot_features[i] for i in u1_sorted[:20]]
print(np.asarray(u1_pos_subspace))
print(np.asarray(u1_neg_subspace))
l = 50
theta = u_non[:,:l]
theta.shape
# Train a classifier with enhanced subspace
X_src_SCL_non = np.c_[X_src, np.dot(X_src[:, sort_idx[:-m]], theta)]
X_tgt_SCL_non = np.c_[X_tgt, np.dot(X_tgt[:, sort_idx[:-m]], theta)]
C_test = np.linspace(0.01,1,20)
tune_reg_param_unsupervised(C_test, X_src_SCL_non, Y_src, X_tgt_SCL_non, Y_tgt, 200)
```
# Understanding Deepfakes with Keras
```
!pip3 install tensorflow==2.1.0 pillow matplotlib
!pip3 install git+https://github.com/am1tyadav/tfutils.git
%matplotlib notebook
import tensorflow as tf
import numpy as np
import os
import tfutils
from matplotlib import pyplot as plt
from tensorflow.keras.layers import Dense, Flatten, Conv2D, BatchNormalization
from tensorflow.keras.layers import Conv2DTranspose, Reshape, LeakyReLU
from tensorflow.keras.models import Model, Sequential
from PIL import Image
print('TensorFlow version:', tf.__version__)
```
# Importing and Plotting the Data
```
# download the MNIST dataset, unzip the data, pre-process and normalize it
# this is done by the code in https://github.com/am1tyadav/tfutils/tree/master/tfutils
(x_train, y_train), (x_test, y_test) = tfutils.datasets.mnist.load_data(one_hot=False)
# each image is 28x28; we use only the zeros
x_train = tfutils.datasets.mnist.load_subset([0], x_train, y_train)
x_test = tfutils.datasets.mnist.load_subset([0], x_test, y_test)
# combine both the training and testing sets
x = np.concatenate([x_train, x_test], axis=0)
# plot some training samples
tfutils.datasets.mnist.plot_ten_random_examples(plt, x, np.zeros((x.shape[0], 1))).show()
```
# Discriminator
```
# Using the same idea as mentioned in https://arxiv.org/pdf/1503.03832.pdf
size = 28
noise_dim = 1
discriminator = Sequential([
    Conv2D(64, 3, strides=2, input_shape=(28, 28, 1)),
    LeakyReLU(),
    BatchNormalization(),
    Conv2D(128, 5, strides=2),
    LeakyReLU(),
    BatchNormalization(),
    Conv2D(256, 5, strides=2),
    LeakyReLU(),
    BatchNormalization(),
    Flatten(),
    Dense(1, activation='sigmoid')
])
opt = tf.keras.optimizers.Adam(lr=2e-4, beta_1=0.5)
discriminator.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
discriminator.summary()
```
# Generator
```
generator = Sequential([
    Dense(256, activation='relu', input_shape=(noise_dim,)),
    Reshape((1, 1, 256)),
    Conv2DTranspose(256, 5, activation='relu'),
    BatchNormalization(),
    Conv2DTranspose(128, 5, activation='relu'),
    BatchNormalization(),
    Conv2DTranspose(64, 5, strides=2, activation='relu'),
    BatchNormalization(),
    Conv2DTranspose(32, 5, activation='relu'),
    BatchNormalization(),
    Conv2DTranspose(1, 4, activation='sigmoid')
])
generator.summary()
# visualize the generated image without training
noise = np.random.randn(1, noise_dim)  # a single random noise vector
generated_images = generator.predict(noise)  # a single prediction
gen_image = generated_images[0]  # shape (28, 28, 1), ready for plotting
# gen_image = generator.predict(noise)[0]
plt.figure()
plt.imshow(np.reshape(gen_image, (28, 28)), cmap='binary')
```
# Generative Adversarial Network (GAN)
```
# We have discriminator and generator networks. The following connects the two
input_layer = tf.keras.layers.Input(shape=(noise_dim,))  # noise input
gen_out = generator(input_layer)  # generate an image from the noise
disc_out = discriminator(gen_out)  # ask the discriminator whether the image is real or fake
gan = Model(
input_layer,
disc_out
)
discriminator.trainable = False  # freeze the discriminator to train the generator;
                                 # set it back to True when training the discriminator.
                                 # Train either the generator or the discriminator
                                 # at any one time, never both together.
gan.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
gan.summary()
```
# Training the GAN
```
%%time
epochs = 25
batch_size = 128
steps_per_epoch = int(2 * x.shape[0]/batch_size)
print('Steps per epoch=', steps_per_epoch)
dp = tfutils.plotting.DynamicPlot(plt, 5, 5, (8, 8))
for e in range(0, epochs):
    dp.start_of_epoch(e)
    for step in range(0, steps_per_epoch):
        true_examples = x[int(batch_size/2)*step: int(batch_size/2)*(step + 1)]
        true_examples = np.reshape(true_examples, (true_examples.shape[0], 28, 28, 1))

        noise = np.random.randn(int(batch_size/2), noise_dim)
        generated_examples = generator.predict(noise)

        x_batch = np.concatenate([generated_examples, true_examples], axis=0)
        y_batch = np.array([0] * int(batch_size/2) + [1] * int(batch_size/2))

        indices = np.random.choice(range(batch_size), batch_size, replace=False)
        x_batch = x_batch[indices]
        y_batch = y_batch[indices]

        # train the discriminator
        discriminator.trainable = True
        discriminator.train_on_batch(x_batch, y_batch)  # first train disc, then freeze it again
        discriminator.trainable = False

        # train the generator (via the combined GAN model)
        loss, _ = gan.train_on_batch(noise, np.ones((int(batch_size/2), 1)))
        _, acc = discriminator.evaluate(x_batch, y_batch, verbose=False)  # high acc: discriminator is doing a good job

    noise = np.random.randn(1, noise_dim)
    generated_example = generator.predict(noise)[0]

    dp.end_of_epoch(np.reshape(generated_example, (28, 28)), 'binary',
                    'DiscAcc:{:.2f}'.format(acc), 'GANLoss:{:.2f}'.format(loss))
```
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<table align="left">
<td>
<a href="https://colab.research.google.com/github/amygdala/code-snippets/blob/master/ml/vertex_pipelines/pytorch/cifar/pytorch_cifar10_vertex_pipelines.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/amygdala/code-snippets/blob/master/ml/vertex_pipelines/pytorch/cifar/pytorch_cifar10_vertex_pipelines.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://raw.githubusercontent.com/amygdala/code-snippets/master/ml/vertex_pipelines/pytorch/cifar/pytorch_cifar10_vertex_pipelines.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
# Vertex Pipelines: PyTorch ResNet CIFAR10 end-to-end example
## Overview
This notebook shows two variants of a PyTorch resnet [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) end-to-end example using [Vertex Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines). The example is in GitHub [here](https://github.com/amygdala/code-snippets/tree/master/ml/vertex_pipelines/pytorch/cifar).
Thanks to the PyTorch team at Facebook for some of the underlying code and much helpful advice.
The first variant trains the model directly as a Vertex Pipelines step, using one GPU. The second variant trains the model using Vertex AI custom training, with (by default) two GPUs.
In both cases, after training, the model is uploaded to Vertex AI and deployed to an endpoint, so that it can be used for prediction.
### Set up your local development environment
**If you are using Colab or Google Cloud Notebooks**, your environment already meets
all the requirements to run this notebook. You can skip this step.
**Otherwise**, make sure your environment meets this notebook's requirements.
You need the following:
* The Google Cloud SDK
* Git
* Python 3
* virtualenv
* Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to [Setting up a Python development
environment](https://cloud.google.com/python/setup) and the [Jupyter
installation guide](https://jupyter.org/install) provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)
1. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)
1. [Install
virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv)
and create a virtual environment that uses Python 3. Activate the virtual environment.
1. To install Jupyter, run `pip install jupyter` on the
command-line in a terminal shell.
1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.
1. Open this notebook in the Jupyter Notebook Dashboard.
### Install additional packages
```
PROJECT_ID = 'your-project-id' # <---CHANGE THIS
!gcloud config set project {PROJECT_ID}
```
On Colab, authenticate first:
```
import sys
if 'google.colab' in sys.modules:
    from google.colab import auth
    auth.authenticate_user()
```
Then, install the libraries.
```
import sys
if 'google.colab' in sys.modules:
    USER_FLAG = ''
else:
    USER_FLAG = '--user'
!python3 -m pip install {USER_FLAG} torch scikit-learn webdataset torchvision pytorch-lightning boto3 google-cloud-build --upgrade
!pip3 install {USER_FLAG} google-cloud-aiplatform --upgrade
!pip3 install {USER_FLAG} google-cloud-pipeline-components==0.1.7 kfp --upgrade
```
### Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
```
# Automatically restart kernel after installs
import os

if not os.getenv("IS_TESTING"):
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
```
Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
```
!python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
!python3 -c "import google_cloud_pipeline_components; print('components version: {}'.format(google_cloud_pipeline_components.__version__))"
```
## Before you begin
This notebook does not require a GPU runtime.
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).
1. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).
Also [enable the Cloud Build API](https://console.cloud.google.com/flows/enableapi?apiid=cloudbuild.googleapis.com).
1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
1. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
#### Set your project ID
**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
```
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
```
Otherwise, set your project ID here.
```
if PROJECT_ID == "" or PROJECT_ID is None:
    PROJECT_ID = "python-docs-samples-tests"  # @param {type:"string"}
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebooks**, your environment is already
authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
**Otherwise**, follow these steps:
1. In the Cloud Console, go to the [**Create service account key**
page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).
2. Click **Create service account**.
3. In the **Service account name** field, enter a name, and
click **Create**.
4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI"
into the filter box, and select
**Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
5. Click *Create*. A JSON file that contains your key downloads to your
local environment.
6. Enter the path to your service account key as the
`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
```
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AI Platform, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth

        google_auth.authenticate_user()

    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket as necessary
You will need a Cloud Storage bucket for this example. If you don't have one that you want to use, you can make one now.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Make sure to [choose a region where Vertex AI services are
available](https://cloud.google.com/vertex-ai/docs/general/locations#available_regions). You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
**Change the bucket name below** before running the next cell.
```
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "us-central1" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    from datetime import datetime

    TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
    BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al $BUCKET_NAME
```
### Import libraries and define constants
Define some constants. The `USER` variable is useful when there is more than one person on your team running a pipeline. Setting this will ensure pipeline artifacts are written to a subdirectory with your username. You can set it as any identifying string you'd like.
```
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
USER = 'your-user-name' # <---CHANGE THIS
PIPELINE_ROOT = '{}/pipeline_root/{}'.format(BUCKET_NAME, USER)
PIPELINE_ROOT
```
Do some imports:
```
import json
from typing import NamedTuple

from kfp.v2 import compiler, dsl
from kfp.v2.dsl import (
    component,
    InputPath,
    OutputPath,
    Input,
    Output,
    Artifact,
    Dataset,
    Model,
    ClassificationMetrics,
    Metrics,
)
from kfp.v2.google.client import AIPlatformClient

from google_cloud_pipeline_components import aiplatform as gcc_aip
from google.cloud import aiplatform

aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
```
## Define the pipeline **components**
This notebook shows two variants of an end-to-end PyTorch pipeline. They differ in the training *component* (that is, pipeline step).
Some of the components used in these pipelines are drawn from the prebuilt set of components defined in [`google_cloud_pipeline_components`](https://github.com/kubeflow/pipelines/tree/master/components/google-cloud). These make it easy to access Vertex AI services.
Others are 'custom' components defined directly in this notebook, as Python-function-based components. Lightweight Python function-based components make it easier to iterate quickly by letting you build your component code as a Python function and generating the component specification for you.
You will notice a `@component` decorator arg named `output_component_file`. When the components are evaluated, a component `yaml` spec file is generated. While we don't show it in this example, the component yaml files can be shared and placed under version control, and used later to define a pipeline step.
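For illustration only, a generated spec for a component like the preprocessing one below has roughly this shape. The field names follow the KFP component schema, but the exact content is produced by the SDK and varies between versions, so treat this as a sketch rather than the actual file:

```yaml
name: cifar_preproc
outputs:
- {name: cifar_dataset, type: Dataset}
implementation:
  container:
    image: gcr.io/google-samples/pytorch-pl:v2
    command: [python3, -c, '...']   # SDK-generated wrapper around the Python function
```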
All of the custom components are defined in this section, with the exception of the second version of the training step, which is defined in a section below.
We'll start by setting the container images that we'll use for some of the components. You can find the Dockerfiles for these images in the example repo: [Dockerfile](https://github.com/amygdala/code-snippets/blob/master/ml/vertex_pipelines/pytorch/cifar/Dockerfile) and [Dockerfile-gpu](https://github.com/amygdala/code-snippets/blob/master/ml/vertex_pipelines/pytorch/cifar/Dockerfile-gpu), respectively.
```
CONTAINER_URI = "gcr.io/google-samples/pytorch-pl:v2"
GPU_CONTAINER_URI = "gcr.io/google-samples/pytorch-pl-gpu:v5"
```
### Define the 'preprocess' component
This component fetches the CIFAR-10 dataset, splits the training data into train and validation subsets, and writes each split as WebDataset shards.
```
@component(
base_image=CONTAINER_URI,
output_component_file="cifar_preproc.yaml",
)
def cifar_preproc(
cifar_dataset: Output[Dataset],
):
import subprocess
import logging
from pathlib import Path
import torchvision
import webdataset as wds
from sklearn.model_selection import train_test_split
logging.getLogger().setLevel(logging.INFO)
logging.info("Dataset path is: %s", cifar_dataset.path)
output_pth = cifar_dataset.path
Path(output_pth).mkdir(parents=True, exist_ok=True)
trainset = torchvision.datasets.CIFAR10(
root="./", train=True, download=True
)
testset = torchvision.datasets.CIFAR10(
root="./", train=False, download=True
)
Path(output_pth + "/train").mkdir(parents=True, exist_ok=True)
Path(output_pth + "/val").mkdir(parents=True, exist_ok=True)
Path(output_pth + "/test").mkdir(parents=True, exist_ok=True)
random_seed = 25
y = trainset.targets
trainset, valset, y_train, y_val = train_test_split(
trainset,
y,
stratify=y,
shuffle=True,
test_size=0.2,
random_state=random_seed,
)
for name in [(trainset, "train"), (valset, "val"), (testset, "test")]:
with wds.ShardWriter(
output_pth + "/" + str(name[1]) + "/" + str(name[1]) + "-%d.tar",
maxcount=1000,
) as sink:
for index, (image, cls) in enumerate(name[0]):
sink.write(
{"__key__": "%06d" % index, "ppm": image, "cls": cls}
)
entry_point = ["ls", "-R", output_pth]
run_code = subprocess.run(entry_point, stdout=subprocess.PIPE)
print(run_code.stdout)
```
### Define a component to create torchserve `Dockerfile` and `config.properties` files from the pipeline params
This component creates configuration files that will be used to deploy the trained model. It can be run concurrently with other work.
The `config.properties` file will be used to create the model archive after training.
For this example, the torchserve-based container is using a GPU base image, and we will serve the model using a GPU-enabled instance.
```
@component(
output_component_file="cifar_config.yaml",
)
def cifar_config(
mar_model_name: str,
version: str,
port: int,
cifar_config: Output[Artifact],
):
import os
from pathlib import Path
Path(cifar_config.path).mkdir(parents=True, exist_ok=True)
config_properties = f"""inference_address=http://0.0.0.0:{port}
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
enable_metrics_api=true
metrics_format=prometheus
number_of_netty_threads=4
job_queue_size=10
service_envelope=kfserving
model_store=/home/model-server/model-store
model_snapshot={{"name":"startup.cfg","modelCount":1,"models":{{"{mar_model_name}":{{"{version}":{{"defaultVersion":true,"marName":"{mar_model_name}.mar","minWorkers":1,"maxWorkers":5,"batchSize":1,"maxBatchDelay":5000,"responseTimeout":120}}}}}}}}
"""
# write to artifact dir
properties_path = os.path.join(cifar_config.path, "config.properties")
with open(properties_path, "w") as f:
f.write(config_properties)
torchserve_dockerfile_str = f"""FROM pytorch/torchserve:0.4.0-gpu
RUN pip install --upgrade pip
RUN pip install grpcio==1.32.0
RUN pip install pytorch-lightning
COPY config.properties /home/model-server/config.properties
COPY {mar_model_name}.mar /home/model-server/model-store/
"""
# write to artifact dir
dockerfile_path = os.path.join(cifar_config.path, "Dockerfile")
with open(dockerfile_path, "w") as f:
f.write(torchserve_dockerfile_str)
```
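The doubled braces in the `model_snapshot` f-string above escape to literal JSON braces. As a sanity check, here is a sketch (using hypothetical `cifar10`/`1.0` values in place of the pipeline parameters) showing that the rendered line parses as valid JSON:

```python
import json

# Hypothetical values for illustration; the component fills these in
# from its pipeline parameters.
mar_model_name, version = "cifar10", "1.0"

# Same f-string pattern as in the component above: {{ and }} render as { and }.
snapshot = (
    f'{{"name":"startup.cfg","modelCount":1,"models":{{"{mar_model_name}":'
    f'{{"{version}":{{"defaultVersion":true,"marName":"{mar_model_name}.mar",'
    f'"minWorkers":1,"maxWorkers":5,"batchSize":1,"maxBatchDelay":5000,'
    f'"responseTimeout":120}}}}}}}}'
)

parsed = json.loads(snapshot)  # the rendered line is valid JSON
print(parsed["models"][mar_model_name][version]["marName"])  # cifar10.mar
```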
### Define Version 1 of the `train` component: train on the pipeline step node
The train component takes as input the `Dataset` artifact generated by the preproc component above, using it as the data source, and writes the trained model to the `Model` artifact's GCS Fuse path. This means the trained model information ends up in GCS.
This component is configured to train on 1 GPU. If you want to train on CPU, remove the `gpus` arg from the `trainer_args` definition. You'll also need to edit the pipeline definition below to remove the requirement that the training step run on a GPU-enabled instance.
> Note: For this variant of the training step, you cannot use more than one GPU. (This constraint is tied to how the pipeline steps are launched, and will probably change in the future.)
See the second variant of the training step below, which uses Vertex AI custom training, for a scenario that allows multiple GPUs.
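Pipeline artifact paths are mounted via GCS Fuse under `/gcs/`, so (as the components in this notebook do) you can derive the corresponding `gs://` URI with a simple prefix swap. A minimal sketch, with a hypothetical bucket name:

```python
def to_gs_uri(fuse_path: str) -> str:
    # GCS Fuse mounts gs://<bucket>/... at /gcs/<bucket>/..., so a prefix
    # swap converts a mounted artifact path to its gs:// URI.
    return fuse_path.replace("/gcs/", "gs://", 1)

# 'my-bucket' is a hypothetical bucket name.
print(to_gs_uri("/gcs/my-bucket/pipeline_root/cifar_model/tensorboard"))
# gs://my-bucket/pipeline_root/cifar_model/tensorboard
```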
```
@component(
base_image=GPU_CONTAINER_URI,
output_component_file="cifar_train.yaml",
)
def cifar_train(
model_name: str,
max_epochs: int,
model_display_name: str,
tensorboard_instance:str,
cifar_dataset: Input[Dataset],
cifar_model: Output[Model],
):
import pytorch_lightning as pl
import logging
import os
from subprocess import Popen, DEVNULL
import sys
from pytorch_pipeline.components.trainer.component import Trainer
from argparse import ArgumentParser
from pytorch_lightning.loggers import TensorBoardLogger
from pytorch_lightning.callbacks import (
EarlyStopping,
LearningRateMonitor,
ModelCheckpoint,
)
logging.getLogger().setLevel(logging.INFO)
logging.info("dataset root path: %s", cifar_dataset.path)
logging.info("model root path: %s", cifar_model.path)
model_output_root = cifar_model.path
# Argument parser for user defined paths
parser = ArgumentParser()
parser.add_argument(
"--tensorboard_root",
type=str,
default=f"{model_output_root}/tensorboard",
help="Tensorboard Root path (default: output/tensorboard)",
)
parser.add_argument(
"--checkpoint_dir",
type=str,
default=f"{model_output_root}/train/models",
help="Path to save model checkpoints ",
)
parser.add_argument(
"--dataset_path",
type=str,
default=cifar_dataset.path,
help="Cifar10 Dataset path (default: output/processing)",
)
parser.add_argument(
"--model_name",
type=str,
default="resnet.pth",
help="Name of the model to be saved as (default: resnet.pth)",
)
sys.argv = sys.argv[:1]
parser = pl.Trainer.add_argparse_args(parent_parser=parser)
args = vars(parser.parse_args())
# Enabling Tensorboard Logger, ModelCheckpoint, Earlystopping
lr_logger = LearningRateMonitor()
tboard = TensorBoardLogger(f"{model_output_root}/tensorboard")
early_stopping = EarlyStopping(
monitor="val_loss", mode="min", patience=5, verbose=True
)
checkpoint_callback = ModelCheckpoint(
dirpath=f"{model_output_root}/train/models",
filename="cifar10_{epoch:02d}",
save_top_k=1,
verbose=True,
monitor="val_loss",
mode="min",
)
# Setting the trainer-specific arguments
trainer_args = {
"logger": tboard,
"profiler": "pytorch",
"checkpoint_callback": True,
"max_epochs": max_epochs,
"callbacks": [lr_logger, early_stopping, checkpoint_callback],
"gpus": 1,
}
# Setting the datamodule specific arguments
data_module_args = {"train_glob": cifar_dataset.path}
if tensorboard_instance:
try:
logging.warning('setting up Vertex tensorboard experiment')
tb_gs = f"{model_output_root}/tensorboard".replace("/gcs/", "gs://")
logging.info('tb gs path: %s', tb_gs)
tb_args = ["/opt/conda/bin/tb-gcp-uploader", "--tensorboard_resource_name", tensorboard_instance,
"--logdir", tb_gs, "--experiment_name", model_display_name,
# '--one_shot=True'
]
logging.warning('tb args: %s', tb_args)
Popen(tb_args, stdout=DEVNULL, stderr=DEVNULL)
except Exception as e:
logging.warning(e)
# Initiating the training process
logging.info("about to call the Trainer...")
trainer = Trainer(
module_file="cifar10_train.py",
data_module_file="cifar10_datamodule.py",
module_file_args=parser,
data_module_args=data_module_args,
trainer_args=trainer_args,
)
```
### Define the 'mar' component
This component generates the [model archive file](https://github.com/pytorch/serve/blob/master/model-archiver/README.md) from the training results.
```
@component(
base_image=CONTAINER_URI,
output_component_file="mar.yaml",
)
def generate_mar_file(
model_name: str,
mar_model_name: str,
handler: str,
version: str,
cifar_model: Input[Model],
cifar_mar: Output[Model],
):
import logging
import pytorch_lightning as pl
import os
import subprocess
from pathlib import Path
def _validate_mar_config(mar_config):
mandatory_args = [
"MODEL_NAME",
"SERIALIZED_FILE",
"MODEL_FILE",
"HANDLER",
"VERSION",
]
missing_list = []
for key in mandatory_args:
if key not in mar_config:
missing_list.append(key)
if missing_list:
logging.warning(
"The following Mandatory keys are missing in the config file {} ".format(
missing_list
)
)
raise Exception(
"Following Mandatory keys are missing in the config file {} ".format(
missing_list
)
)
logging.getLogger().setLevel(logging.INFO)
model_output_root = cifar_model.path
mar_output_root = cifar_mar.path
export_path = f"{mar_output_root}/model-store"
try:
Path(export_path).mkdir(parents=True, exist_ok=True)
except Exception as e:
logging.warning(e)
# retry after pause
import time
time.sleep(2)
Path(export_path).mkdir(parents=True, exist_ok=True)
mar_config = {
"MODEL_NAME": mar_model_name,
"MODEL_FILE": "pytorch_pipeline/examples/cifar10/cifar10_train.py",
"HANDLER": handler,
"SERIALIZED_FILE": os.path.join(
f"{model_output_root}/train/models",
model_name,
),
"VERSION": version,
"EXPORT_PATH": f"{cifar_mar.path}/model-store",
}
logging.warning("mar_config: %s", mar_config)
print(f"mar_config: {mar_config}")
try:
logging.info("validating config")
_validate_mar_config(mar_config)
except Exception as e:
logging.warning(e)
archiver_cmd = "torch-model-archiver --force --model-name {MODEL_NAME} --serialized-file {SERIALIZED_FILE} --model-file {MODEL_FILE} --handler {HANDLER} -v {VERSION}".format(
MODEL_NAME=mar_config["MODEL_NAME"],
SERIALIZED_FILE=mar_config["SERIALIZED_FILE"],
MODEL_FILE=mar_config["MODEL_FILE"],
HANDLER=mar_config["HANDLER"],
VERSION=mar_config["VERSION"],
)
if "EXPORT_PATH" in mar_config:
archiver_cmd += " --export-path {EXPORT_PATH}".format(
EXPORT_PATH=mar_config["EXPORT_PATH"]
)
if "EXTRA_FILES" in mar_config:
archiver_cmd += " --extra_files {EXTRA_FILES}".format(
EXTRA_FILES=mar_config["EXTRA_FILES"]
)
if "REQUIREMENTS_FILE" in mar_config:
archiver_cmd += " -r {REQUIREMENTS_FILE}".format(
REQUIREMENTS_FILE=mar_config["REQUIREMENTS_FILE"]
)
print("Running Archiver cmd: ", archiver_cmd)
logging.warning("archiver command: %s", archiver_cmd)
try:
return_code = subprocess.Popen(archiver_cmd, shell=True).wait()
if return_code != 0:
error_msg = (
"Error running command {archiver_cmd} {return_code}".format(
archiver_cmd=archiver_cmd, return_code=return_code
)
)
print(error_msg)
except Exception as e:
logging.warning(e)
```
### Define the component to build a torchserve docker image
This component uses the results of the 'config' component as well as the model archive file. It builds a torchserve image using [Cloud Build](https://cloud.google.com/build/docs).
```
@component(
base_image="gcr.io/deeplearning-platform-release/tf2-gpu.2-3:latest",
output_component_file="build_image.yaml",
)
def build_torchserve_image(
model_name: str,
cifar_mar: Input[Model],
cifar_config: Input[Artifact],
project: str,
) -> NamedTuple("Outputs", [("serving_container_uri", str),],):
from datetime import datetime
import logging
import os
import google.auth
from google.cloud.devtools import cloudbuild_v1
logging.getLogger().setLevel(logging.INFO)
credentials, project_id = google.auth.default()
client = cloudbuild_v1.services.cloud_build.CloudBuildClient()
mar_model_name = f"{model_name}.mar"
build_version = datetime.now().strftime("%Y%m%d%H%M%S")
dockerfile_path = os.path.join(cifar_config.path, "Dockerfile")
gs_dockerfile_path = dockerfile_path.replace("/gcs/", "gs://")
config_prop_path = os.path.join(cifar_config.path, "config.properties")
gs_config_prop_path = config_prop_path.replace("/gcs/", "gs://")
export_path = f"{cifar_mar.path}/model-store"
model_path = os.path.join(export_path, mar_model_name)
gs_model_path = model_path.replace("/gcs/", "gs://")
logging.warning("gs_model_path: %s", gs_model_path)
image_uri = f"gcr.io/{project}/torchservetest:{build_version}"
logging.info("image uri: %s", image_uri)
build = cloudbuild_v1.Build(images=[image_uri])
build.steps = [
{
"name": "gcr.io/cloud-builders/gsutil",
"args": [
"cp",
gs_config_prop_path,
"config.properties",
],
},
{
"name": "gcr.io/cloud-builders/gsutil",
"args": ["cp", f"{gs_model_path}", f"{mar_model_name}"],
},
{
"name": "gcr.io/cloud-builders/gsutil",
"args": [
"cp",
gs_dockerfile_path,
"Dockerfile",
],
},
{
"name": "gcr.io/cloud-builders/docker",
"args": ["build", "-t", image_uri, "."],
},
]
operation = client.create_build(project_id=project, build=build)
print("IN PROGRESS:")
print(operation.metadata)
result = operation.result()
# Print the completed status
print("RESULT:", result.status)
return (image_uri,)
```
## Optional: Create a Vertex Tensorboard instance
If you like, you can configure the pipeline to upload the training logs to the Vertex TensorBoard service. To do this, you will need to pre-create a Vertex TensorBoard instance. Follow the [instructions here](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-overview#create_a_instance).
As described in the docs, you will need the Vertex TensorBoard instance name, which is a string that includes your project and instance ID. It will look something like: `projects/123/locations/us-central1/tensorboards/456`. If you create your TensorBoard instance from the Vertex AI console, you can get the ID by running `gcloud beta ai tensorboards list` with the gcloud CLI.
Make note of that instance name, and you will use it to set a parameter when submitting the pipeline run.
## Define and run the Pipeline Version 1
Define a pipeline that uses these components.
Before you evaluate the pipeline, **edit the GPU type** for both the `cifar_train_task` and the `model_deploy_op` depending upon what GPU quota you have available. **You may need to request more GPU quota first**.
The pipeline will look like this:
<a href="https://storage.googleapis.com/amy-jo/images/mp/pytorch_train1.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/pytorch_train1.png" width="95%"/></a>
Define some constants:
```
from datetime import datetime
ts = datetime.now().strftime("%Y%m%d%H%M%S")
MODEL_NAME = f'resnet{ts}'
PORT = 8080
MAR_MODEL_NAME = 'cifar10'
print(MODEL_NAME)
@dsl.pipeline(
name="pytorch-cifar-pipeline",
pipeline_root=PIPELINE_ROOT,
)
def pytorch_cifar_pipeline(
project: str = PROJECT_ID,
model_name: str = "resnet.pth",
model_display_name: str = MODEL_NAME,
max_epochs: int = 1,
mar_model_name: str = MAR_MODEL_NAME,
handler: str = "image_classifier",
version: str = "1.0",
port: int = PORT,
tensorboard_instance: str = ''
):
cifar_config_task = cifar_config(mar_model_name, version, port)
cifar_preproc_task = cifar_preproc()
cifar_train_task = cifar_train(
model_name=model_name,
max_epochs=max_epochs,
model_display_name=model_display_name,
tensorboard_instance=tensorboard_instance,
cifar_dataset=cifar_preproc_task.outputs["cifar_dataset"],
).set_gpu_limit(1).set_memory_limit('32G')
cifar_train_task.add_node_selector_constraint(
# You can change this to use a different accelerator. Ensure you have quota for it.
"cloud.google.com/gke-accelerator", "nvidia-tesla-v100"
)
cifar_mar_task = generate_mar_file(
model_name,
mar_model_name,
handler,
version,
cifar_train_task.outputs["cifar_model"],
)
build_image_task = build_torchserve_image(
mar_model_name, cifar_mar_task.outputs["cifar_mar"],
cifar_config_task.outputs['cifar_config'],
project
)
gcc_aip.ModelUploadOp.component_spec.implementation.container.image = "gcr.io/ml-pipeline/google-cloud-pipeline-components:0.1.7"
model_upload_op = gcc_aip.ModelUploadOp(
project=project,
display_name=model_display_name,
serving_container_image_uri=build_image_task.outputs['serving_container_uri'],
serving_container_predict_route="/predictions/{}".format(MAR_MODEL_NAME),
serving_container_health_route="/ping",
serving_container_ports=[PORT]
)
gcc_aip.EndpointCreateOp.component_spec.implementation.container.image = "gcr.io/ml-pipeline/google-cloud-pipeline-components:0.1.7"
endpoint_create_op = gcc_aip.EndpointCreateOp(
project=project,
display_name=model_display_name,
)
gcc_aip.ModelDeployOp.component_spec.implementation.container.image = "gcr.io/ml-pipeline/google-cloud-pipeline-components:0.1.7"
model_deploy_op = gcc_aip.ModelDeployOp(
project=project,
endpoint=endpoint_create_op.outputs["endpoint"],
model=model_upload_op.outputs["model"],
deployed_model_display_name=model_display_name,
machine_type="n1-standard-4",
accelerator_type='NVIDIA_TESLA_P100', # CHANGE THIS as necessary
accelerator_count=1
)
```
Compile the pipeline:
```
from kfp.v2 import compiler as v2compiler
v2compiler.Compiler().compile(pipeline_func=pytorch_cifar_pipeline,
package_path='pytorch_pipeline_spec.json')
```
**Edit the following cell** if you would like to upload training logs to a Vertex Tensorboard instance. You can get this by running `gcloud beta ai tensorboards list`.
```
TENSORBOARD_INSTANCE = 'projects/123/locations/us-central1/tensorboards/456' # CHANGE THIS TO YOUR INSTANCE NAME
```
Run the pipeline. If you set up a tensorboard instance, **edit the cell above to your instance name, then uncomment the `tensorboard_instance` line below before evaluating the cell.**
```
job = aiplatform.PipelineJob(
display_name=MODEL_NAME,
template_path="pytorch_pipeline_spec.json",
pipeline_root=PIPELINE_ROOT,
parameter_values={
"model_name": "resnet.pth", "max_epochs": 5,
"project": PROJECT_ID, "model_display_name": MODEL_NAME,
# "tensorboard_instance": TENSORBOARD_INSTANCE
},
)
job.run(sync=False)
```
You can view the running pipeline in the Cloud Console by clicking the generated link above.
### Viewing model training information using TensorBoard
If you set up a Vertex TensorBoard instance and configured the pipeline to use it, then once the pipeline training step is underway, you can view the TensorBoard server by navigating to 'Vertex AI > Experiments' in the Cloud Console. Click on 'OPEN TENSORBOARD' next to the newly created "experiment", which will use the MODEL_NAME generated above. You will see something like this:

### Using the PyTorch profiler with TensorBoard
In addition to the TensorBoard data we can see in Vertex AI, the training code is writing profiler information, which can be viewed in TensorBoard by installing a plugin. The Vertex TensorBoard service does not support adding arbitrary plugins, but you can view this information as follows (you will need the `gcloud` SDK installed):
- on your local machine, ideally within a virtual environment, run: `pip install -U torch_tb_profiler` and `pip install -U tensorboard`.
- Find the link to the TensorBoard logs produced during training. You can do this by navigating to the pipeline in the Cloud Console and clicking on the Model Artifact produced as output by the training step. In the right panel you will see a `URI` link that starts with `gs://` and ends with `cifar_model`. Append `tensorboard` to that URI, which should result in a URI like this:
`gs://<your-bucket>/.../cifar_model/tensorboard`.
- Copy the TensorBoard logs to your local machine, e.g. (replacing with your URI):
`gsutil cp -r gs://<your-bucket>/.../cifar_model/tensorboard /tmp`
- Run the TensorBoard server: `tensorboard --logdir=/tmp/tensorboard`
- Visit the TensorBoard server at the given localhost port.
You should see something like this if you click on the `PYTORCH_PROFILER` tab:

## Run the pipeline on an OSS KFP installation using 'V2 compatibility mode'
You can also run this pipeline in ['v2 compatibility mode'](https://www.kubeflow.org/docs/components/pipelines/sdk/v2/v2-compatibility/#compiling-and-running-pipelines-in-v2-compatibility-mode) on an OSS KFP installation of version >= 1.7.0.
See the [README](https://github.com/amygdala/code-snippets/blob/master/ml/vertex_pipelines/pytorch/cifar/README.md) for setup instructions.
```
import kfp
# CHANGE THIS to use your host URL (see the README)
client = kfp.Client(host='https://xxxxxxxxx-dot-us-central1.pipelines.googleusercontent.com')
# run the pipeline in v2 compatibility mode
client.create_run_from_pipeline_func(
pytorch_cifar_pipeline,
arguments={"model_name": "resnet.pth", "max_epochs": 5,
"project": PROJECT_ID, "model_display_name": MODEL_NAME,
},
mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE,
# enable_caching=False
)
```
## Using your deployed model to get predictions
First, [download this](https://github.com/amygdala/code-snippets/blob/master/ml/vertex_pipelines/pytorch/cifar/input.json) `input.json` file.
Then, find the 'endpoint' artifact output by the 'endpoint-create' step in the pipeline and click on it. Look for the endpoint URI in the right-hand panel. Copy the last part of that URI (a long number). That is the endpoint ID. (You can also find this information in the 'Vertex AI > Endpoints' panel in the Cloud Console.)
**Change the cell below to use your endpoint ID before you run it.**
```
ENDPOINT_ID = 'xxxxxxxxxxxx' # <---- CHANGE THIS
!gcloud ai endpoints predict {ENDPOINT_ID} --json-request=input.json
```
## Define Version 2 of the training component
This version of the training component uses Vertex AI Custom Training. It does single-node, multi-GPU training, using 2 GPUs by default.
A prebuilt custom training container is used, which includes the training code. You can view the code and [Dockerfile definition](https://github.com/amygdala/code-snippets/blob/master/ml/vertex_pipelines/pytorch/cifar/Dockerfile-gpu-ct) in the [example repo](https://github.com/amygdala/code-snippets/tree/master/ml/vertex_pipelines/pytorch/cifar).
This component uses the Vertex AI SDK to define and launch the custom training job, then waits for it to complete.
```
@component(
base_image="gcr.io/deeplearning-platform-release/tf2-cpu.2-3:latest",
output_component_file="cifar_vertex_train.yaml",
packages_to_install=["google-cloud-aiplatform"],
)
def cifar_vertex_train(
project: str,
region: str,
staging_bucket: str,
custom_container_uri: str,
display_name: str,
model_name: str,
max_epochs: int,
num_gpus: int,
accelerator_type: str,
tensorboard_instance: str,
cifar_dataset: Input[Dataset],
cifar_model: Output[Model],
):
import logging
import os
import sys
import subprocess
from google.cloud import aiplatform
logging.getLogger().setLevel(logging.INFO)
gs_dataset_path = cifar_dataset.path
gs_model_path = cifar_model.path
gs_dataset_path = gs_dataset_path.replace("/gcs/", "gs://")
gs_model_path = gs_model_path.replace("/gcs/", "gs://")
logging.info('dataset root path: %s', gs_dataset_path)
logging.info('model root path: %s', gs_model_path)
aiplatform.init(
project=project, location=region,
staging_bucket=staging_bucket,
)
custom_job = aiplatform.CustomContainerTrainingJob(
display_name=display_name,
container_uri=custom_container_uri,
)
if not tensorboard_instance:
tensorboard_instance = ''
trainer_args = ['--gcs_tensorboard_root', f"{gs_model_path}/tensorboard", '--gcs_checkpoint_dir',
f"{gs_model_path}/train/models", '--gcs_dataset_path',
gs_dataset_path, '--gcs_mar_dir', f"{gs_model_path}/model-store",
'--vertex_num_gpus', num_gpus, '--vertex_max_epochs', max_epochs,
'--gcs_tensorboard_instance', tensorboard_instance]
logging.info('trainer_args: %s', trainer_args)
custom_model = custom_job.run(
replica_count=1,
args=trainer_args,
sync=False,
machine_type="n1-standard-8",
# accelerator_type='NVIDIA_TESLA_P100',
accelerator_type=accelerator_type,
accelerator_count=int(num_gpus)
)
```
## Define and run Pipeline Version 2
This pipeline differs from Version 1 only in the training component; the other components are the same.
The pipeline will look like this:
<a href="https://storage.googleapis.com/amy-jo/images/mp/pytorch_train2.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/pytorch_train2.png" width="95%"/></a>
```
from datetime import datetime
ts = datetime.now().strftime("%Y%m%d%H%M%S")
MODEL_NAME = f'resnet{ts}'
PORT = 8080
MAR_MODEL_NAME = 'cifar10'
print(MODEL_NAME)
@dsl.pipeline(
name="pytorch-cifar-customtrain-pipeline",
pipeline_root=PIPELINE_ROOT,
)
def pytorch_cifar_pipeline2(
project: str = PROJECT_ID,
region: str = REGION,
staging_bucket: str = PIPELINE_ROOT,
model_name: str = "resnet.pth",
model_display_name: str = MODEL_NAME,
max_epochs: int = 1,
mar_model_name: str = MAR_MODEL_NAME,
handler: str = "image_classifier",
version: str = "1.0",
port: int = PORT,
num_train_gpus: int = 2,
accelerator_train_type: str = 'NVIDIA_TESLA_P100',
tensorboard_instance: str = '',
custom_container_uri: str = 'gcr.io/google-samples/pytorch-pl-gpu-ct:v4',
):
cifar_config_task = cifar_config(mar_model_name, version, port)
cifar_preproc_task = cifar_preproc()
cifar_train_task = cifar_vertex_train(
project=project,
region=region,
staging_bucket=staging_bucket,
custom_container_uri=custom_container_uri,
display_name = model_display_name,
model_name=model_name,
max_epochs=max_epochs,
num_gpus=num_train_gpus,
accelerator_type=accelerator_train_type,
tensorboard_instance=tensorboard_instance,
cifar_dataset=cifar_preproc_task.outputs["cifar_dataset"],
)
cifar_mar_task = generate_mar_file(
model_name,
mar_model_name,
handler,
version,
cifar_train_task.outputs["cifar_model"],
)
build_image_task = build_torchserve_image(
mar_model_name, cifar_mar_task.outputs["cifar_mar"],
cifar_config_task.outputs['cifar_config'],
project
)
gcc_aip.ModelUploadOp.component_spec.implementation.container.image = "gcr.io/ml-pipeline/google-cloud-pipeline-components:0.1.7"
model_upload_op = gcc_aip.ModelUploadOp(
project=project,
display_name=model_display_name,
serving_container_image_uri=build_image_task.outputs['serving_container_uri'],
serving_container_predict_route="/predictions/{}".format(MAR_MODEL_NAME),
serving_container_health_route="/ping",
serving_container_ports=[PORT]
)
gcc_aip.EndpointCreateOp.component_spec.implementation.container.image = "gcr.io/ml-pipeline/google-cloud-pipeline-components:0.1.7"
endpoint_create_op = gcc_aip.EndpointCreateOp(
project=project,
display_name=model_display_name,
)
gcc_aip.ModelDeployOp.component_spec.implementation.container.image = "gcr.io/ml-pipeline/google-cloud-pipeline-components:0.1.7"
model_deploy_op = gcc_aip.ModelDeployOp(
project=project,
endpoint=endpoint_create_op.outputs["endpoint"],
model=model_upload_op.outputs["model"],
deployed_model_display_name=model_display_name,
machine_type="n1-standard-4",
accelerator_type='NVIDIA_TESLA_P100',
accelerator_count=1
)
```
Compile the pipeline:
```
from kfp.v2 import compiler as v2compiler
v2compiler.Compiler().compile(pipeline_func=pytorch_cifar_pipeline2,
package_path='pytorch_ct_pipeline_spec.json')
```
**Edit the following cell** if you would like to upload training logs to a Vertex Tensorboard instance. See the "Pipeline Version 1" section on Vertex TensorBoard for more information.
```
TENSORBOARD_INSTANCE = 'projects/123/locations/us-central1/tensorboards/456' # CHANGE THIS TO YOUR INSTANCE NAME
```
Run the pipeline. If you set up a tensorboard instance, **edit the cell above to your instance name, then uncomment the `tensorboard_instance` line below before evaluating the cell.**
```
job = aiplatform.PipelineJob(
display_name=MODEL_NAME,
template_path="pytorch_ct_pipeline_spec.json",
pipeline_root=PIPELINE_ROOT,
parameter_values={
"model_name": "resnet.pth", "max_epochs": 5,
"project": PROJECT_ID, "model_display_name": MODEL_NAME, "num_train_gpus": 2,
"custom_container_uri": 'gcr.io/google-samples/pytorch-pl-gpu-ct:v4',
# "tensorboard_instance": TENSORBOARD_INSTANCE
},
)
job.run(sync=False)
```
You can view the running pipeline in the Cloud Console by following the generated link above.
You can send prediction requests to the deployed model in the same way as described above for the 'Version 1' pipeline.
### Viewing model training information using TensorBoard
See the TensorBoard sections above for 'Pipeline Version 1', for information on using TensorBoard. For this version, the TensorBoard *experiment* won't be created until training has completed.
## Run the pipeline on an OSS KFP installation using 'V2 compatibility mode'
You can also run this pipeline in ['v2 compatibility mode'](https://www.kubeflow.org/docs/components/pipelines/sdk/v2/v2-compatibility/#compiling-and-running-pipelines-in-v2-compatibility-mode) on an OSS KFP installation of version >= 1.7.0.
See the [README](https://github.com/amygdala/code-snippets/blob/master/ml/vertex_pipelines/pytorch/cifar/README.md) for setup instructions.
```
import kfp
# CHANGE THIS to use your host URL (see the README)
client = kfp.Client(host='https://xxxxxxxxx-dot-us-central1.pipelines.googleusercontent.com')
# run the pipeline in v2 compatibility mode
client.create_run_from_pipeline_func(
pytorch_cifar_pipeline2,
arguments={"model_name": "resnet.pth", "max_epochs": 5,
"project": PROJECT_ID, "model_display_name": MODEL_NAME,
},
mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE,
# enable_caching=False
)
```
## Cleanup
When you're done with the example, you may want to undeploy your model. One way to do this is via the Cloud Console: visit the 'Vertex AI > Endpoints' panel, click on the endpoint(s) to which your model(s) were deployed, and delete those models. Once the models are deleted, you can delete the endpoints as well.
Then, visit the 'Vertex AI > Notebooks' panel and remove the "tensorboard notebook" created by the pipeline.
You may also want to do other cleanup by removing the GCS artifacts used by the pipeline and by removing the GCR image builds.
## Provenance
```
!pip freeze
```
---
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Backtest a Single Model
The way to gauge the performance of a time-series model is to re-train it on different historical periods and check its forecasts over a given number of steps. This is similar to time-based cross-validation; in time-series modeling it is more often called `backtest`.
The purpose of this notebook is to illustrate how to 'backtest' a single model using `BackTester`.
`BackTester` composes a `TimeSeriesSplitter` internally, but `TimeSeriesSplitter` is also useful as a standalone tool, in case there are other tasks to perform that require splitting but not backtesting. You can also retrieve the composed `TimeSeriesSplitter` object from `BackTester` to use its additional methods.
Currently, there are two schemes supported for the back-testing engine: expanding window and rolling window.
* expanding window: for each back-testing model training, the train start date is fixed, while the train end date is extended forward.
* rolling window: for each back-testing model training, the training window length is fixed but the window is moving forward.
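The two schemes can be sketched in a few lines of plain Python. This is an illustrative sketch only, not orbit's actual `TimeSeriesSplitter` implementation, which may handle boundary cases differently; split boundaries are half-open index ranges over a series of length `n`.

```python
def expanding_splits(n, min_train_len, incremental_len, forecast_len):
    # Expanding window: train start stays at 0, train end grows each split.
    splits, train_end = [], min_train_len
    while train_end + forecast_len <= n:
        splits.append(((0, train_end), (train_end, train_end + forecast_len)))
        train_end += incremental_len
    return splits

def rolling_splits(n, min_train_len, incremental_len, forecast_len):
    # Rolling window: train window length stays fixed, window slides forward.
    splits, start = [], 0
    while start + min_train_len + forecast_len <= n:
        train_end = start + min_train_len
        splits.append(((start, train_end), (train_end, train_end + forecast_len)))
        start += incremental_len
    return splits

print(expanding_splits(300, 100, 100, 20))  # train start fixed at 0
print(rolling_splits(300, 100, 100, 20))    # train window length fixed at 100
```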
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from orbit.models import DLT
from orbit.diagnostics.backtest import BackTester, TimeSeriesSplitter
from orbit.diagnostics.plot import plot_bt_predictions
from orbit.diagnostics.metrics import smape, wmape
from orbit.utils.dataset import load_iclaims
from orbit.utils.plot import get_orbit_style
%load_ext autoreload
%autoreload 2
%reload_ext autoreload
```
## Load data
```
raw_data = load_iclaims()
data = raw_data.copy()
print(data.shape)
data.head(5)
```
## Create a BackTester
```
# instantiate a model
dlt = DLT(date_col='week',
response_col='claims',
regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'],
seasonality=52,
estimator='stan-map')
bt = BackTester(model=dlt,
df=data,
min_train_len=100,
incremental_len=100,
forecast_len=20)
```
## Backtest Fit and Predict
The most expensive portion of backtesting is fitting the model iteratively. Thus, we separate the API calls for `fit_predict` and `score` to avoid redundant computation for multiple metrics or scoring methods.
```
bt.fit_predict();
```
Once `fit_predict()` is called, the fitted models and predictions can easily be retrieved from `BackTester`. Here the data is grouped by date, the split key, and whether the observation is part of the training or test data.
```
predicted_df = bt.get_predicted_df()
predicted_df.head()
```
We also provide a plotting utility to visualize the predictions against the actuals for each split.
```
plot_bt_predictions(predicted_df, metrics=smape, ncol=2, include_vline=True);
```
Users might find this useful for any custom computations that may need to be performed on the set of predicted data. Note that the columns are renamed to generic and consistent names.
Sometimes it might be useful to match the data back to the original dataset for ad-hoc diagnostics. This can easily be done by merging back to the original dataset:
```
predicted_df.merge(data, left_on='date', right_on='week')
```
## Backtest Scoring
The main purpose of `BackTester` is the evaluation metrics. Some of the most widely used metrics are implemented and built into the `BackTester` API.
The default metric list is **smape, wmape, mape, mse, mae, rmsse**.
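For intuition, here is how two of these could be computed by hand in NumPy. This is a sketch of the common textbook definitions; orbit's built-in versions may differ in scaling and edge-case handling:

```python
import numpy as np

def smape_sketch(actual, predicted):
    # symmetric MAPE: mean of 2|y_hat - y| / (|y| + |y_hat|)
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(2.0 * np.abs(predicted - actual)
                   / (np.abs(actual) + np.abs(predicted)))

def wmape_sketch(actual, predicted):
    # weighted MAPE: total absolute error over total absolute actuals
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sum(np.abs(predicted - actual)) / np.sum(np.abs(actual))

actual = np.array([10.0, 20.0, 30.0])
predicted = np.array([12.0, 18.0, 33.0])
print(round(wmape_sketch(actual, predicted), 4))  # 0.1167
```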
```
bt.score()
```
It is possible to filter for only specific metrics of interest, or even implement your own callable and pass it into the `score()` method. For example, see this function that uses the last observed value as a predictor and computes the `mse`, or `naive_error`, which computes the error as the delta between the predicted values and the training period mean.
Note these are not really useful error metrics, just showing some examples of callables you can use ;)
```
def mse_naive(test_actual):
actual = test_actual[1:]
predicted = test_actual[:-1]
return np.mean(np.square(actual - predicted))
def naive_error(train_actual, test_predicted):
train_mean = np.mean(train_actual)
return np.mean(np.abs(test_predicted - train_mean))
bt.score(metrics=[mse_naive, naive_error])
```
Re-scoring takes no additional time to refit and predict, since the results are stored when `fit_predict()` is called. Check the docstrings for the criteria a custom metric function must meet to be supported by this API.
In some cases, we may want to evaluate our metrics on both train and test data. To do this, call `score()` again with the following indicator:
```
bt.score(include_training_metrics=True)
```
## Backtest Get Models
In cases where `BackTester` doesn't cut it or for more custom use-cases, there's an interface to export the `TimeSeriesSplitter` and predicted data, as shown earlier. It's also possible to get each of the fitted models for deeper diving.
```
fitted_models = bt.get_fitted_models()
model_1 = fitted_models[0]
model_1.get_regression_coefs()
```
### Get TimeSeriesSplitter
BackTester composes a TimeSeriesSplitter within it, but TimeSeriesSplitter can also be created on its own as a standalone object. See section below on TimeSeriesSplitter for more details on how to use the splitter.
All of the additional `TimeSeriesSplitter` arguments can also be passed to `BackTester` on instantiation.
```
ts_splitter = bt.get_splitter()
ts_splitter.plot()
plt.grid();
```
## Appendix
### Create a TimeSeriesSplitter
#### Expanding window
```
min_train_len = 380
forecast_len = 20
incremental_len = 20
ex_splitter = TimeSeriesSplitter(df=data,
min_train_len=min_train_len,
incremental_len=incremental_len,
forecast_len=forecast_len,
window_type='expanding',
date_col='week')
print(ex_splitter)
ex_splitter.plot()
plt.grid();
```
#### Rolling window
```
roll_splitter = TimeSeriesSplitter(df=data,
min_train_len=min_train_len,
incremental_len=incremental_len,
forecast_len=forecast_len,
window_type='rolling',
date_col='week')
roll_splitter.plot()
plt.grid();
```
#### Specifying number of splits
Users can also define the number of splits using `n_splits` instead of specifying the minimum training length; the minimum training length is then calculated automatically.
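The arithmetic behind this is straightforward. Here is a plausible sketch (an assumed formula for illustration; check orbit's source for the exact rule):

```python
# Sketch of deriving min_train_len from n_splits; this is an assumed
# formula for illustration, not necessarily orbit's exact rule.
def min_train_len_from_splits(n_obs, n_splits, incremental_len, forecast_len):
    # the last (n_splits-th) window must still leave room for the forecast
    return n_obs - forecast_len - (n_splits - 1) * incremental_len

# e.g. for 443 weekly observations, 5 splits, stepping 20 weeks each time
print(min_train_len_from_splits(443, n_splits=5, incremental_len=20,
                                forecast_len=20))  # 343
```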
```
ex_splitter2 = TimeSeriesSplitter(df=data,
min_train_len=min_train_len,
incremental_len=incremental_len,
forecast_len=forecast_len,
n_splits=5,
window_type='expanding',
date_col='week')
ex_splitter2.plot()
plt.grid();
```
---
# Making a new material file
**Optional**: build protocol buffer package
```
!protoc --python_out=. -I=../proto ../proto/material.proto
```
Import the library. Note: if you get an error that says "no module named google," make sure you have the protobuf Python library installed (try `pip install protobuf`)
```
import material_pb2 as mat
energy_groups = [2e7, 1.353e6, 9.119e3, 3.928, 0.6251, 0.1457, 0.0569, 0]
```
Moderator material (for the MOX 2005 benchmark)
```
moderator = mat.Material()
moderator.full_name = "Moderator"
moderator.id = "mod"
moderator.abbreviation = "mod"
moderator.is_fissionable = False
moderator.number_of_groups = 7
e_max = 1e6
energy_groups = [e_max, 1e3, 1e2, 1e1, 0.625, 0.625/2, 0.625/4, 0]
eg = mat.Material.VectorProperty()
eg.id = mat.Material.ENERGY_GROUPS
eg.value.extend(energy_groups)
sigma_t = mat.Material.VectorProperty()
sigma_t.id = mat.Material.SIGMA_T
sigma_t.value.extend([1.26032e-1, 2.93160e-1, 2.84240e-1, 2.80960e-1, 3.34440e-1, 5.65640e-1, 1.17215])
diff = mat.Material.VectorProperty()
diff.id = mat.Material.DIFFUSION_COEFF
diff.value.extend([1.0/(3.0 * val) for val in [1.26032e-1, 2.93160e-1, 2.84240e-1, 2.80960e-1, 3.34440e-1, 5.65640e-1, 1.17215]])
sigma_s = mat.Material.MatrixProperty()
sigma_s.id = mat.Material.SIGMA_S
sigma_s.value.extend([6.61659e-2, 0, 0, 0, 0, 0, 0,
5.907e-2, 2.40377e-1, 0, 0, 0, 0, 0,
2.8334e-4, 5.2435e-2, 1.83297e-1, 0, 0, 0, 0,
1.4622e-6, 2.499e-4, 9.2397e-2, 7.88511e-2, 3.7333e-5, 0, 0,
2.0642e-8, 1.9239e-5, 6.9446e-3, 1.7014e-1, 9.97372e-2, 9.1726e-4, 0,
0, 2.9875e-6, 1.0803e-3, 2.5881e-2, 2.0679e-1, 3.16765e-1, 4.9792e-2,
0, 4.214e-7, 2.0567e-4, 4.9297e-3, 2.4478e-2, 2.3877e-1, 1.09912])
moderator.vector_property.extend([eg, sigma_t, diff])
moderator.matrix_property.extend([sigma_s])
filename = '/home/josh/repos/bart/benchmarks/mox_2005/moderator.mat'
f = open(filename, 'wb')
f.write(moderator.SerializeToString())
f.close()
import numpy as np
sigma_s = np.array([6.61659e-2, 0, 0, 0, 0, 0, 0,
5.907e-2, 2.40377e-1, 0, 0, 0, 0, 0,
2.8334e-4, 5.2435e-2, 1.83297e-1, 0, 0, 0, 0,
1.4622e-6, 2.499e-4, 9.2397e-2, 7.88511e-2, 3.7333e-5, 0, 0,
2.0642e-8, 1.9239e-5, 6.9446e-3, 1.7014e-1, 9.97372e-2, 9.1726e-4, 0,
0, 2.9875e-6, 1.0803e-3, 2.5881e-2, 2.0679e-1, 3.16765e-1, 4.9792e-2,
0, 4.214e-7, 2.0567e-4, 4.9297e-3, 2.4478e-2, 2.3877e-1, 1.09912])
for group in range(7):
for group_in in range(group + 1, 7):
sigma_s[group*7 + group_in] *= 1.5
sigma_s * 1.1
high_scattering_moderator = mat.Material()
high_scattering_moderator.full_name = "high_scattering_moderator"
high_scattering_moderator.id = "mod"
high_scattering_moderator.abbreviation = "mod"
high_scattering_moderator.is_fissionable = False
high_scattering_moderator.number_of_groups = 7
e_max = 1e6
energy_groups = [e_max, 1e3, 1e2, 1e1, 0.625, 0.625/2, 0.625/4, 0]
eg = mat.Material.VectorProperty()
eg.id = mat.Material.ENERGY_GROUPS
eg.value.extend(energy_groups)
sigma_t = mat.Material.VectorProperty()
sigma_t.id = mat.Material.SIGMA_T
sigma_t.value.extend([1.26032e-1, 2.93160e-1, 2.84240e-1, 2.80960e-1, 3.34440e-1, 5.65640e-1, 1.17215])
sigma_s = mat.Material.MatrixProperty()
sigma_s.id = mat.Material.SIGMA_S
sigma_s.value.extend([7.2782490e-02, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 6.4977000e-02,
2.6441470e-01, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00, 0.0000000e+00, 3.1167400e-04, 5.7678500e-02,
2.0162670e-01, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00, 1.6084200e-06, 2.7489000e-04, 1.0163670e-01,
8.6736210e-02, 4.1066300e-05, 0.0000000e+00, 0.0000000e+00,
2.2706200e-08, 2.1162900e-05, 7.6390600e-03, 1.8715400e-01,
1.0971092e-01, 1.0089860e-03, 0.0000000e+00, 0.0000000e+00,
3.2862500e-06, 1.1883300e-03, 2.8469100e-02, 2.2746900e-01,
3.4844150e-01, 5.4771200e-02, 0.0000000e+00, 4.6354000e-07,
2.2623700e-04, 5.4226700e-03, 2.6925800e-02, 2.6264700e-01,
1.2090320e+00])
high_scattering_moderator.vector_property.extend([eg, sigma_t])
high_scattering_moderator.matrix_property.extend([sigma_s])
filename = '/home/josh/repos/bart/benchmarks/mox_2005/high_scattering_moderator.mat'
f = open(filename, 'wb')
f.write(high_scattering_moderator.SerializeToString())
f.close()
```
---
```
from google.colab import drive
drive.mount('/content/gdrive')
import pandas as pd
import glob
import datetime as dt
import multiprocessing as mp
from datetime import datetime
import numpy as np
import plotly
from pandas import Series
import sys
from scipy import stats
import os
from sklearn.pipeline import Pipeline
# For sending GET requests from the API
import requests
cd gdrive/My Drive/TFM/
# raw trade data from https://public.bitmex.com/?prefix=data/trade/
Dollar_bars = pd.DataFrame()
for i,file in enumerate(glob.glob("data/bars/*.csv")):
if i == 0:
Dollar_bars = Dollar_bars.append(pd.read_csv(file))
print('Percentage of files already loaded:',round((i/len(glob.glob("data/bars/*.csv")))*100,1), '%. There are', len(glob.glob("data/bars/*.csv"))-i,
"files left", end='')
else:
Dollar_bars = Dollar_bars.append(pd.read_csv(file))
print('\r Percentage of files already loaded:',round((i/len(glob.glob("data/bars/*.csv")))*100,1), '%. There are', len(glob.glob("data/bars/*.csv"))-i,
"files left",end='', flush=True)
Dollar_bars
Dollar_bars['timestamp'] = Dollar_bars.timestamp.map(lambda t: datetime.strptime(t, "%Y-%m-%d %H:%M:%S.%f"))
Dollar_bars.set_index('timestamp', inplace=True)
Dollar_bars['timestamp'] = Dollar_bars.index
Dollar_bars.drop(columns=['timestamp.1'], inplace=True)
Dollar_bars
!pip install pytrends
import pandas as pd
from pytrends.request import TrendReq
pytrend = TrendReq()
kw_list=['ETH']
df = pytrend.get_historical_interest(kw_list, year_start=2021, month_start=2, day_start=14, hour_start=16, year_end=2021, month_end=8, day_end= 10, hour_end=12, cat=0, geo='', gprop='', sleep=0)
#df = df.drop(['isPartial'], axis=1)
df
start_time = "2019-12-21T16:00:00.000Z"
end_time = "2021-05-25T00:00:00.000Z"
from plotly.subplots import make_subplots
import plotly.graph_objects as go
fig = make_subplots(rows=1, cols=1,specs=[[{"secondary_y": True}]])
fig.add_trace(go.Scatter(
x=df.index,
y=df['ETH'],
name="ETH topic",
mode = 'lines',
textfont_family="Arial_Black"),
row= 1 ,
col= 1 )
!pip install --upgrade --user git+https://github.com/GeneralMills/pytrends
from pytrends.request import TrendReq
pytrend = TrendReq()
kw_list=['Ethereum']
df1 = pytrend.get_historical_interest(kw_list, year_start=2021, month_start=2, day_start=14, hour_start=16, year_end=2021, month_end=8, day_end= 10, hour_end=12, cat=0, geo='', gprop='', sleep=0)
df1 = df1.drop(['isPartial'], axis=1)
df1
from plotly.subplots import make_subplots
import plotly.graph_objects as go
fig = make_subplots(rows=1, cols=1,specs=[[{"secondary_y": True}]])
fig.add_trace(go.Scatter(
x=df1.index,
y=df1['Ethereum'],
name="Ethereum topic",
mode = 'lines',
textfont_family="Arial_Black"),
row= 1 ,
col= 1 )
fig.add_trace(go.Scatter(
x=df1.index,
y=df['ETH'],
name="ETH topic",
mode = 'lines',
textfont_family="Arial_Black"),
row= 1 ,
col= 1 )
fig.add_trace(go.Scatter(
x=Dollar_bars.index,
y=Dollar_bars['close'],
name="Closing price",
mode = 'lines',
textfont_family="Arial_Black"),
secondary_y=True,
row= 1 ,
col= 1 )
fig.update_layout(
legend=dict(
x=0.0,
y=0.98,
traceorder="normal",
font=dict(
family="sans-serif",
size=12,
color="black"
),
)
)
df1[df1['Ethereum'] == 0] = np.NaN
df1.dropna(inplace=True)
df1
df1[df1['Ethereum'] < 20] = 28
df1[df1['Ethereum'] == 28]
kw_list=['ETH']
df1 = pytrend.get_historical_interest(kw_list, year_start=2020, month_start=2, day_start=1, hour_start=16, year_end=2020, month_end=3, day_end= 27, hour_end=12, cat=0, geo='', gprop='', sleep=0)
from datetime import datetime
from datetime import timedelta
df['start'] = df.index
df['end'] = df.index + timedelta(hours=1)
df
from datetime import datetime
from datetime import timedelta
df1['start'] = df1.index
df1['end'] = df1.index + timedelta(hours=1)
df1
import numpy as np
Dollar_bars['Google_trend2'] = np.nan
for index1, row1 in Dollar_bars.iterrows():
count = 0
for index, row in df1.iterrows():
if (row1['timestamp'] > row['start'] and row1['timestamp'] < row['end']):
count = row['Ethereum']
#Dollar_bars.set_value(index1,'tweet_count',count)
Dollar_bars.at[index1,'Google_trend2'] = count
print('\r Timestamp',row1['timestamp'], ' is in between:',row['start'],' and:',row['end'],end='', flush=True)
#print('And the number of tweets for that period is: ',count)
Dollar_bars.to_csv('Dollar_bars_google1.csv')
!cp Dollar_bars_google1.csv "gdrive/My Drive/TFM/Dollar_bars_google1.csv"
Dollar_bars.dropna(inplace=True)
Dollar_bars
Dollar_bars.to_csv('Dollar_bars_test.csv')
!cp Dollar_bars_google1.csv "gdrive/My Drive/TFM/Dollar_bars_google1.csv"
#Dollar_bars['Google_trend1'] = np.nan
Dollar_bars
Dollar_bars['timestamp'] = Dollar_bars['timestamp'].map(lambda t: datetime.strptime(t, "%Y-%m-%d %H:%M:%S.%f"))
#Dollar_bars.set_index('timestamp', inplace=True)
Dollar_bars
import numpy as np
Dollar_bars['Google_trend2'] = np.nan
for index1, row1 in Dollar_bars.iterrows():
count = 0
for index, row in df.iterrows():
if (row1['timestamp'] > row['start'] and row1['timestamp'] < row['end']):
count = row['ETH']
#Dollar_bars.set_value(index1,'tweet_count',count)
Dollar_bars.at[index1,'Google_trend2'] = count
print('Timestamp',row1['timestamp'], ' is in between:',row['start'],' and:',row['end'])
#print('And the number of tweets for that period is: ',count)
Dollar_bars.to_csv('Dollar_bars_tweet_counts.csv')
!cp Dollar_bars_tweet_counts.csv "gdrive/My Drive/TFM/Dollar_bars_tweet_counts.csv"
kw_list=['ETH']
df1 = pytrend.get_historical_interest(kw_list, year_start=2019, month_start=12, day_start=22, hour_start=16, year_end=2020, month_end=6, day_end=30, hour_end=23, cat=0, geo='', gprop='', sleep=0)
kw_list=['Ethereum']
df2 = pytrend.get_historical_interest(kw_list, year_start=2019, month_start=12, day_start=22, hour_start=16, year_end=2020, month_end=6, day_end=30, hour_end=23, cat=0, geo='', gprop='', sleep=0)
df2.isnull().sum()
from plotly.subplots import make_subplots
import plotly.graph_objects as go
fig = make_subplots(rows=1, cols=1,specs=[[{"secondary_y": True}]])
fig.add_trace(go.Scatter(
x=df.index,
y=df['ETH'],
name="ETH topic",
mode = 'lines',
textfont_family="Arial_Black"),
row= 1 ,
col= 1 )
fig.add_trace(go.Scatter(
x=df2.index,
y=df2['Ethereum'],
name="Ethereum topic",
mode = 'lines',
textfont_family="Arial_Black"),
row= 1 ,
col= 1 )
fig.add_trace(go.Scatter(
x=Dollar_bars['timestamp'],
y=np.log(Dollar_bars['close']),
name="logarithmic closing price",
mode = 'lines',
textfont_family="Arial_Black"),
secondary_y=True,
row= 1 ,
col= 1 )
fig.update_yaxes(title_text="<b> Google search count </b>", secondary_y=False)
fig.update_yaxes(title_text="<b> ETHUSD Log price </b>", secondary_y=True)
import plotly.io as pio
!pip install plotly==5.3.1
!pip install -U kaleido
pio.write_image(fig, 'tweet_counts.png')
fig.write_image('/content/gdrive/My Drive/TFM/images/Google_trends.png')
```
---
<a href="https://colab.research.google.com/github/ksdkamesh99/LowLightEnhancer/blob/master/model_gradient.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
cd /content/drive/My Drive/LowLightEnhancement
import tensorflow as tf
import keras
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
img_high=np.load("image_high.npy")
img_low=np.load("image_low.npy")
img_low=img_low/255
img_high=img_high/255
img_high.shape
plt.imshow(img_low[0])
```
## Illumination Mask Network
```
input_layer_1=keras.layers.Input(shape=(96,96,3))
top=keras.layers.Conv2D(64,kernel_size=(3,3),input_shape=(96,96,3),padding='same')(input_layer_1)
top=keras.layers.Conv2D(64,kernel_size=(3,3),padding='same')(top)
top.get_shape
bottom_inp=input_layer_1
bottom_resize=tf.keras.layers.Lambda(
lambda img: tf.image.resize(img,(60,60))
)(bottom_inp)
bottom=keras.layers.Conv2D(64,kernel_size=(3,3),input_shape=(60,60,3),padding='same')(bottom_resize)
bottom=keras.layers.Conv2D(64,kernel_size=(3,3),padding='same')(bottom)
bottom=keras.layers.Conv2D(64,kernel_size=(3,3),padding='same')(bottom)
bottom.get_shape()
bottom=keras.layers.experimental.preprocessing.Resizing(96,96)(bottom)
bottom.get_shape()
top.get_shape()
merged=keras.layers.concatenate([top,bottom])
merged
merged=keras.layers.Conv2D(32,kernel_size=(7,7),padding='same')(merged)
merged=keras.layers.Conv2D(8,kernel_size=(1,1),padding='same')(merged)
merged=keras.layers.Conv2D(1,kernel_size=(5,5),activation='sigmoid',padding='same')(merged)
merged.get_shape()
model_illumination_mask=keras.models.Model(inputs=input_layer_1,outputs=merged)
model_illumination_mask.summary()
```
# Illumination Map
```
merged.get_shape
merged=keras.layers.Concatenate()([input_layer_1,merged])
merged
def ieb(input_feature):
ieb1=keras.layers.Conv2D(32,kernel_size=(3,3),activation='relu',padding='same')(input_feature)
ieb1=keras.layers.Conv2D(32,kernel_size=(3,3),activation='relu',padding='same')(ieb1)
max_pool=keras.layers.GlobalMaxPooling2D()(ieb1)
avg_pool=keras.layers.GlobalAveragePooling2D()(ieb1)
dense1=keras.layers.Dense(8,activation='relu')
dense2=keras.layers.Dense(32,activation='sigmoid')
max_pool=dense1(max_pool)
max_pool=dense2(max_pool)
avg_pool=dense1(avg_pool)
avg_pool=dense2(avg_pool)
'''max_pool=keras.layers.Lambda(
lambda image: keras.backend.expand_dims(keras.backend.expand_dims(image,axis=1),axis=1))(max_pool)
avg_pool=keras.layers.Lambda(
lambda image: keras.backend.expand_dims(keras.backend.expand_dims(image,axis=1),axis=1))(avg_pool)'''
channel=keras.layers.Add()([max_pool,avg_pool])
ieb1=keras.layers.Multiply()([ieb1,channel])
max_pool_s=tf.keras.layers.Lambda(
lambda x: keras.backend.max(x,axis=3,keepdims=True))(ieb1)
avg_pool_s=keras.layers.Lambda(
lambda x: keras.backend.mean(x,axis=3,keepdims=True))(ieb1)
concat_slayers=keras.layers.Concatenate(axis=3)([avg_pool_s,max_pool_s])
spacial=keras.layers.Conv2D(1,7,activation='sigmoid',padding='same')(concat_slayers)
#spacial=keras.layers.experimental.preprocessing.Resizing(92,92)(spacial)
ieb1=keras.layers.Multiply()([ieb1,spacial])
ieb1=keras.layers.BatchNormalization()(ieb1)
ieb1=keras.layers.Activation('relu')(ieb1)
#ieb1=keras.layers.experimental.preprocessing.Resizing(96,96)(ieb1)
return ieb1
ieb_1=ieb(merged)
ieb_2=ieb(ieb_1)
ieb_3=ieb(ieb_2)
ieb_4=ieb(ieb_3)
ieb_5=ieb(ieb_4)
added_ieb=keras.layers.concatenate([ieb_1,ieb_2,ieb_3,ieb_4,ieb_5])
added_ieb
impnet=keras.layers.Conv2D(32,(3,3),padding='same')(added_ieb)
impnet=keras.layers.Conv2D(8,(3,3),padding='same')(impnet)
impnet=keras.layers.Conv2D(1,(3,3),padding='same')(impnet)
```
# S/L Block
```
'''impnet=keras.layers.Lambda(
lambda x: x+keras.backend.constant(0.001)
)(impnet)'''
s_l=keras.layers.Lambda(
lambda input:input[0]/input[1]
)([input_layer_1,impnet])
s_l
```
# Correction Network
```
def correction_network(input_feature):
conv1=keras.layers.Conv2D(32,kernel_size=(3,3),strides=(1,1),activation='relu',padding='same')(input_feature)
conv2=keras.layers.Conv2D(32,kernel_size=(3,3),strides=(1,1),activation='relu',padding='same')(conv1)
conv3=keras.layers.Conv2D(16,kernel_size=(3,3),strides=(1,1),activation='relu',padding='same')(conv2)
conv4=keras.layers.Conv2D(16,kernel_size=(3,3),strides=(1,1),activation='relu',padding='same')(conv3)
conv5=keras.layers.Conv2D(3,kernel_size=(3,3),strides=(1,1),activation='sigmoid',padding='same')(conv4)
#conv5=keras.layers.experimental.preprocessing.Resizing(96,96)(conv5)
#conv5=keras.layers.multiply([impnet,conv5])
return conv5
final_output=correction_network(s_l)
```
# Custom Loss Function
```
import loss as l
import keras.backend as K
def enhancement_loss(x,y):
x=K.cast(x,dtype='float32')
y=K.cast(y,dtype='float32')
norm=tf.norm(x-y)
return norm
enhancement_loss(img_low[0],img_high[0])
def color_loss(x,y):
x=K.cast(x,dtype='float32')
y=K.cast(y,dtype='float32')
cosine_loss = keras.losses.CosineSimilarity()(x,y)
colorloss=1-cosine_loss
return colorloss
color_loss(img_low[0],img_high[0])
sobelFilter = K.variable([[[[1., 1.]], [[0., 2.]],[[-1., 1.]]],
[[[2., 0.]], [[0., 0.]],[[-2., 0.]]],
[[[1., -1.]], [[0., -2.]],[[-1., -1.]]]])
def expandedSobel(inputTensor):
inputChannels = K.reshape(K.ones_like(inputTensor[0,0,0,:]),(1,1,-1,1))
return sobelFilter * inputChannels
def squareSobelLoss(yTrue,yPred):
yTrue=K.cast(yTrue,dtype='float32')
yPred=K.cast(yPred,dtype='float32')
filt = expandedSobel(yTrue)
squareSobelTrue =K.square(K.depthwise_conv2d(yTrue,filt))
squareSobelPred =K.square(K.depthwise_conv2d(yPred,filt))
newShape = K.shape(squareSobelTrue)
newShape = K.concatenate([newShape[:-1],
newShape[-1:]//2,
K.variable([2],dtype='int32')])
squareSobelTrue = K.sum(K.reshape(squareSobelTrue,newShape),axis=-1)
squareSobelPred = K.sum(K.reshape(squareSobelPred,newShape),axis=-1)
return K.mean(K.abs(squareSobelTrue - squareSobelPred))
def MeanGradientError(outputs, targets):
outputs=tf.cast(outputs,dtype='float32')
targets=tf.cast(targets,dtype='float32')
filter_x = tf.tile(tf.expand_dims(tf.constant([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype = 'float32'), axis = -1), [1, 1, outputs.shape[-1]])
filter_x = tf.tile(tf.expand_dims(filter_x, axis = -1), [1, 1, 1, outputs.shape[-1]])
filter_y = tf.tile(tf.expand_dims(tf.constant([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype = 'float32'), axis = -1), [1, 1, targets.shape[-1]])
filter_y = tf.tile(tf.expand_dims(filter_y, axis = -1), [1, 1, 1, targets.shape[-1]])
# output gradient
output_gradient_x = tf.math.square(tf.nn.conv2d(outputs, filter_x, strides = 1, padding = 'SAME'))
output_gradient_y = tf.math.square(tf.nn.conv2d(outputs, filter_y, strides = 1, padding = 'SAME'))
#target gradient
target_gradient_x = tf.math.square(tf.nn.conv2d(targets, filter_x, strides = 1, padding = 'SAME'))
target_gradient_y = tf.math.square(tf.nn.conv2d(targets, filter_y, strides = 1, padding = 'SAME'))
# gradient magnitude
output_gradients = tf.math.sqrt(tf.math.add(output_gradient_x, output_gradient_y))
target_gradients = tf.math.sqrt(tf.math.add(target_gradient_x, target_gradient_y))
# compute mean gradient error
shape = output_gradients.shape[1:3]
mge = tf.math.reduce_sum(tf.math.squared_difference(output_gradients, target_gradients) / (shape[0] * shape[1]))
return mge
def max_rgb_filter(img):
# img=tf.keras.preprocessing.image.img_to_array(img)
r=img[:,:,:,0]
g=img[:,:,:,1]
b=img[:,:,:,2]
max_c=tf.maximum(K.maximum(r,g),b)
'''
b_broadcast = K.zeros(K.shape(r), dtype=r.dtype)
bool_r=K.less(r,max)
bool_g=K.less(g,max)
bool_b=K.less(b,max)
r=K.switch(bool_r,b_broadcast,r)
g=K.switch(bool_g,b_broadcast,g)
b=K.switch(bool_b,b_broadcast,b)
# print(K.shape(r))
r=K.expand_dims(r)
g=K.expand_dims(g)
b=K.expand_dims(b)
img=K.concatenate([r,g,b],axis=-1)
# print(K.shape(img))
# img_rgb_filter=tf.keras.preprocessing.image.array_to_img(img)
return img'''
return tf.expand_dims(max_c,axis=-1)
def light_mask_loss(input_img,pred_img,true_img):
pred_img=tf.cast(pred_img,tf.uint8)
true_img=tf.cast(true_img,tf.uint8)
input_img=tf.cast(input_img,tf.uint8)
m_i=max_rgb_filter(input_img)
m_t=max_rgb_filter(true_img)
# m_t=m_t+K.constant(0.001,shape=m_t.shape,dtype=m_t.dtype)
m_div_it=tf.divide(m_i,m_t)
m_div_it=tf.cast(m_div_it,tf.uint8)
light_mask=tf.subtract(pred_img,m_div_it)
light_mask=tf.cast(light_mask,tf.float32)
lightmask_loss=tf.norm(light_mask)
return lightmask_loss
a1=max_rgb_filter(tf.expand_dims(img_low[1],axis=0))
a2=max_rgb_filter(tf.expand_dims(img_high[1],axis=0))
b=a1/a2
img_low[0]-b
def custom_loss_wrapper(input_tensor):
def custom_loss(y_true,y_pred):
# lm_loss=light_mask_loss(input_img=input_tensor,pred_img=y_pred,true_img=y_true)
# print(lm_loss)
e_loss=enhancement_loss(y_true,y_pred)
c_loss=color_loss(y_true,y_pred)
s_loss=squareSobelLoss(y_true,y_pred)
total_loss=e_loss+s_loss*0.2+0.2*c_loss
# total_loss=total_loss+(10*lm_loss)
return total_loss
return custom_loss
```
# Model
```
model=keras.models.Model(inputs=[input_layer_1],outputs=final_output)
model.summary()
```
# Plot a DL Model
```
# keras.utils.plot_model(model,show_shapes=True,show_layer_names=True)
```
# Model Compile
```
opt=tf.optimizers.Adam()
EPOCHS=3
BATCH=28
import os
import random
for i in range(EPOCHS):
b=0
for j in range(0,img_high.shape[0],BATCH):
b=b+1
img_inp=img_low[j:j+BATCH]
img_out=img_high[j:j+BATCH]
with tf.GradientTape() as tape:
img_pred=model([img_inp])
lm_loss=light_mask_loss(input_img=img_inp,pred_img=img_pred,true_img=img_out)
e_loss=enhancement_loss(img_out,img_pred)
c_loss=color_loss(img_out,img_pred)
s_loss=MeanGradientError(img_out,img_pred)
total_loss=e_loss*4+s_loss*0.25+c_loss*1+lm_loss*5
# according to paper:- total_loss=e_loss*1+s_loss*0.2+c_loss*1+lm_loss*10
mse=tf.losses.mse(img_out,img_pred).numpy().sum()
# os.system('cls')
print(i,' ',b,' ',total_loss.numpy(),' ',mse)
grads = tape.gradient(total_loss, model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))
model.save('model_improved.h5')
```
# Inference
```
import matplotlib.pyplot as plt
model
def high_light(index):
img=np.expand_dims(img_low[index],axis=0)
a=model([img])
plt.imshow(img[0])
plt.show()
plt.imshow(a[0])
plt.show()
plt.imshow(img_high[index])
plt.show()
high_light(1443)
from tensorflow.keras.models import load_model
import tensorflow as tf
import keras
model=load_model('model_improved.h5')
model
```
---
```
!unzip Images.zip
!unzip Airplanes_Annotations.zip
import os,cv2,keras
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
path = "Images"
annot = "Airplanes_Annotations"
for e,i in enumerate(os.listdir(annot)):
if e < 10:
filename = i.split(".")[0]+".jpg"
print(filename)
img = cv2.imread(os.path.join(path,filename))
df = pd.read_csv(os.path.join(annot,i))
plt.imshow(img)
for row in df.iterrows():
x1 = int(row[1][0].split(" ")[0])
y1 = int(row[1][0].split(" ")[1])
x2 = int(row[1][0].split(" ")[2])
y2 = int(row[1][0].split(" ")[3])
cv2.rectangle(img,(x1,y1),(x2,y2),(255,0,0), 2)
plt.figure()
plt.imshow(img)
break
cv2.setUseOptimized(True);
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
im = cv2.imread(os.path.join(path,"42850.jpg"))
ss.setBaseImage(im)
ss.switchToSelectiveSearchFast()
rects = ss.process()
imOut = im.copy()
for i, rect in (enumerate(rects)):
x, y, w, h = rect
# print(x,y,w,h)
# imOut = imOut[x:x+w,y:y+h]
cv2.rectangle(imOut, (x, y), (x+w, y+h), (0, 255, 0), 1, cv2.LINE_AA)
# plt.figure()
plt.imshow(imOut)
train_images=[]
train_labels=[]
# compute intersection over union (IoU)
def get_iou(bb1, bb2):
assert bb1['x1'] < bb1['x2']
assert bb1['y1'] < bb1['y2']
assert bb2['x1'] < bb2['x2']
assert bb2['y1'] < bb2['y2']
x_left = max(bb1['x1'], bb2['x1'])
y_top = max(bb1['y1'], bb2['y1'])
x_right = min(bb1['x2'], bb2['x2'])
y_bottom = min(bb1['y2'], bb2['y2'])
if x_right < x_left or y_bottom < y_top:
return 0.0
intersection_area = (x_right - x_left) * (y_bottom - y_top)
bb1_area = (bb1['x2'] - bb1['x1']) * (bb1['y2'] - bb1['y1'])
bb2_area = (bb2['x2'] - bb2['x1']) * (bb2['y2'] - bb2['y1'])
iou = intersection_area / float(bb1_area + bb2_area - intersection_area)
assert iou >= 0.0
assert iou <= 1.0
return iou
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
for e,i in enumerate(os.listdir(annot)):
try:
if i.startswith("airplane"):
filename = i.split(".")[0]+".jpg"
print(e,filename)
image = cv2.imread(os.path.join(path,filename))
df = pd.read_csv(os.path.join(annot,i))
gtvalues=[]
for row in df.iterrows():
x1 = int(row[1][0].split(" ")[0])
y1 = int(row[1][0].split(" ")[1])
x2 = int(row[1][0].split(" ")[2])
y2 = int(row[1][0].split(" ")[3])
gtvalues.append({"x1":x1,"x2":x2,"y1":y1,"y2":y2})
ss.setBaseImage(image)
ss.switchToSelectiveSearchFast()
ssresults = ss.process()
imout = image.copy()
counter = 0
falsecounter = 0
flag = 0
fflag = 0
bflag = 0
for e,result in enumerate(ssresults):
if e < 2000 and flag == 0:
for gtval in gtvalues:
x,y,w,h = result
iou = get_iou(gtval,{"x1":x,"x2":x+w,"y1":y,"y2":y+h})
if counter < 30:
if iou > 0.70:
timage = imout[y:y+h,x:x+w]
resized = cv2.resize(timage, (224,224), interpolation = cv2.INTER_AREA)
train_images.append(resized)
train_labels.append(1)
counter += 1
else :
fflag =1
if falsecounter <30:
if iou < 0.3:
timage = imout[y:y+h,x:x+w]
resized = cv2.resize(timage, (224,224), interpolation = cv2.INTER_AREA)
train_images.append(resized)
train_labels.append(0)
falsecounter += 1
else :
bflag = 1
if fflag == 1 and bflag == 1:
print("inside")
flag = 1
except Exception as e:
print(e)
print("error in "+filename)
continue
X_new = np.array(train_images)
y_new = np.array(train_labels)
X_new.shape
from keras.layers import Dense
from keras import Model
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import VGG16
vggmodel = VGG16(weights='imagenet', include_top=True)
vggmodel.summary()
for layers in (vggmodel.layers)[:15]:
print(layers)
layers.trainable = False
X= vggmodel.layers[-2].output
predictions = Dense(2, activation="softmax")(X)
model_final = Model(inputs = vggmodel.input, outputs = predictions)
from keras.optimizers import Adam
opt = Adam(lr=0.0001)
model_final.compile(loss = keras.losses.categorical_crossentropy, optimizer = opt, metrics=["accuracy"])
model_final.summary()
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
class MyLabelBinarizer(LabelBinarizer):
def transform(self, y):
Y = super().transform(y)
if self.y_type_ == 'binary':
return np.hstack((Y, 1-Y))
else:
return Y
def inverse_transform(self, Y, threshold=None):
if self.y_type_ == 'binary':
return super().inverse_transform(Y[:, 0], threshold)
else:
return super().inverse_transform(Y, threshold)
lenc = MyLabelBinarizer()
Y = lenc.fit_transform(y_new)
X_train, X_test , y_train, y_test = train_test_split(X_new,Y,test_size=0.10)
print(X_train.shape,X_test.shape,y_train.shape,y_test.shape)
trdata = ImageDataGenerator(horizontal_flip=True, vertical_flip=True, rotation_range=90)
traindata = trdata.flow(x=X_train, y=y_train)
tsdata = ImageDataGenerator(horizontal_flip=True, vertical_flip=True, rotation_range=90)
testdata = tsdata.flow(x=X_test, y=y_test)
from keras.callbacks import ModelCheckpoint, EarlyStopping
checkpoint = ModelCheckpoint("ieeercnn_vgg16_1.h5", monitor='val_loss', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1)
early = EarlyStopping(monitor='val_loss', min_delta=0, patience=100, verbose=1, mode='auto')
hist = model_final.fit_generator(generator= traindata, steps_per_epoch= 10, epochs= 1000, validation_data= testdata, validation_steps=2, callbacks=[checkpoint,early])
import matplotlib.pyplot as plt
# plt.plot(hist.history["acc"])
# plt.plot(hist.history['val_acc'])
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title("model loss")
plt.ylabel("Loss")
plt.xlabel("Epoch")
plt.legend(["Loss","Validation Loss"])
plt.show()
plt.savefig('chart loss.png')
im = X_test[1600]
plt.imshow(im)
img = np.expand_dims(im, axis=0)
out= model_final.predict(img)
if out[0][0] > out[0][1]:
print("plane")
else:
print("not plane")
z=0
for e,i in enumerate(os.listdir(path)):
if i.startswith("4"):
z += 1
img = cv2.imread(os.path.join(path,i))
ss.setBaseImage(img)
ss.switchToSelectiveSearchFast()
ssresults = ss.process()
imout = img.copy()
for e,result in enumerate(ssresults):
if e < 2000:
x,y,w,h = result
timage = imout[y:y+h,x:x+w]
resized = cv2.resize(timage, (224,224), interpolation = cv2.INTER_AREA)
img = np.expand_dims(resized, axis=0)
out= model_final.predict(img)
if out[0][0] > 0.65:
cv2.rectangle(imout, (x, y), (x+w, y+h), (0, 255, 0), 1, cv2.LINE_AA)
plt.figure()
plt.imshow(imout)
```
---
# Imports
The following packages will be used:
1. tensorflow
2. numpy
3. pprint
4. wandb
```
%%capture
!pip install --upgrade wandb
import wandb
from wandb.keras import WandbCallback
wandb.login()
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, BatchNormalization, MaxPool2D, ReLU, ELU, LeakyReLU, Flatten, Dense, Add, AveragePooling2D, GlobalAveragePooling2D
import pprint
pp = pprint.PrettyPrinter(indent=4)
import numpy as np
np.random.seed(666)
tf.random.set_seed(666)
# Which GPU is being used?
!nvidia-smi
```
# Data
This experiment uses the CIFAR10 dataset, which contains 60,000 color images of shape (32, 32, 3).
```
# Load the training and testing set of CIFAR10
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.cifar10.load_data()
X_train = X_train.astype('float32')
X_train = X_train/255.
X_test = X_test.astype('float32')
X_test = X_test/255.
y_train = tf.reshape(tf.one_hot(y_train, 10), shape=(-1, 10))
y_test = tf.reshape(tf.one_hot(y_test, 10), shape=(-1, 10))
# Create TensorFlow dataset
BATCH_SIZE = 256
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train))
train_ds = train_ds.shuffle(1024).cache().batch(BATCH_SIZE).prefetch(AUTOTUNE)
test_ds = tf.data.Dataset.from_tensor_slices((X_test, y_test))
test_ds = test_ds.cache().batch(BATCH_SIZE).prefetch(AUTOTUNE)
```
# Organism
An organism contains the following:
1. phase - the phase that the organism belongs to
2. chromosome - a dictionary of genes (hyperparameters)
3. model - the `tf.keras` model corresponding to the chromosome
4. prevBestOrganism - the best organism in the previous **phase**
```
options_phase0 = {
'a_filter_size': [(1,1), (3,3), (5,5), (7,7), (9,9)],
'a_include_BN': [True, False],
'a_output_channels': [8, 16, 32, 64, 128, 256, 512],
'activation_type': [ReLU, ELU, LeakyReLU],
'b_filter_size': [(1,1), (3,3), (5,5), (7,7), (9,9)],
'b_include_BN': [True, False],
'b_output_channels': [8, 16, 32, 64, 128, 256, 512],
'include_pool': [True, False],
'pool_type': [MaxPool2D, AveragePooling2D],
'include_skip': [True, False]
}
options = {
'include_layer': [True, False],
'a_filter_size': [(1,1), (3,3), (5,5), (7,7), (9,9)],
'a_include_BN': [True, False],
'a_output_channels': [8, 16, 32, 64, 128, 256, 512],
'b_filter_size': [(1,1), (3,3), (5,5), (7,7), (9,9)],
'b_include_BN': [True, False],
'b_output_channels': [8, 16, 32, 64, 128, 256, 512],
'include_pool': [True, False],
'pool_type': [MaxPool2D, AveragePooling2D],
'include_skip': [True, False]
}
class Organism:
def __init__(self,
chromosome={},
phase=0,
prevBestOrganism=None):
'''
chromosome is a dictionary of genes
phase is the phase that the individual belongs to
prevBestOrganism is the best organism of the previous phase
'''
self.phase = phase
self.chromosome = chromosome
self.prevBestOrganism=prevBestOrganism
if phase != 0:
# In a later stage, the model is made by
# attaching new layers to the prev best model
self.last_model = prevBestOrganism.model
def build_model(self):
'''
This is the function to build the keras model
'''
keras.backend.clear_session()
inputs = Input(shape=(32,32,3))
if self.phase != 0:
# Slice the prev best model
# Use the model as a layer
# Attach new layer to the sliced model
intermediate_model = Model(inputs=self.last_model.input,
outputs=self.last_model.layers[-3].output)
for layer in intermediate_model.layers:
# To make the iteration efficient
layer.trainable = False
inter_inputs = intermediate_model(inputs)
x = Conv2D(filters=self.chromosome['a_output_channels'],
padding='same',
kernel_size=self.chromosome['a_filter_size'],
use_bias=self.chromosome['a_include_BN'])(inter_inputs)
            # This ensures that we do not randomly choose another activation
self.chromosome['activation_type'] = self.prevBestOrganism.chromosome['activation_type']
else:
# For PHASE 0 only
# input layer
x = Conv2D(filters=self.chromosome['a_output_channels'],
padding='same',
kernel_size=self.chromosome['a_filter_size'],
use_bias=self.chromosome['a_include_BN'])(inputs)
if self.chromosome['a_include_BN']:
x = BatchNormalization()(x)
x = self.chromosome['activation_type']()(x)
if self.chromosome['include_pool']:
x = self.chromosome['pool_type'](strides=(1,1),
padding='same')(x)
if self.phase != 0 and self.chromosome['include_layer'] == False:
# Except for PHASE0, there is a choice for
# the number of layers that the model wants
if self.chromosome['include_skip']:
y = Conv2D(filters=self.chromosome['a_output_channels'],
kernel_size=(1,1),
padding='same')(inter_inputs)
x = Add()([y,x])
x = GlobalAveragePooling2D()(x)
x = Dense(10, activation='softmax')(x)
else:
# PHASE0 or no skip
# in the tail
x = Conv2D(filters=self.chromosome['b_output_channels'],
padding='same',
kernel_size=self.chromosome['b_filter_size'],
use_bias=self.chromosome['b_include_BN'])(x)
if self.chromosome['b_include_BN']:
x = BatchNormalization()(x)
x = self.chromosome['activation_type']()(x)
if self.chromosome['include_skip']:
y = Conv2D(filters=self.chromosome['b_output_channels'],
padding='same',
kernel_size=(1,1))(inputs)
x = Add()([y,x])
x = GlobalAveragePooling2D()(x)
x = Dense(10, activation='softmax')(x)
self.model = Model(inputs=[inputs], outputs=[x])
self.model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
def fitnessFunction(self,
train_ds,
test_ds,
generation_number):
'''
This function is used to calculate the
fitness of an individual.
'''
wandb.init(entity="authors",
project="vlga",
group='KAGp{}'.format(self.phase),
job_type='g{}'.format(generation_number))
self.model.fit(train_ds,
epochs=3,
callbacks=[WandbCallback()],
verbose=0)
_, self.fitness = self.model.evaluate(test_ds,
verbose=0)
def crossover(self,
partner,
generation_number):
'''
This function helps in making children from two
parent individuals.
'''
child_chromosome = {}
endpoint = np.random.randint(low=0, high=len(self.chromosome))
for idx, key in enumerate(self.chromosome):
if idx <= endpoint:
child_chromosome[key] = self.chromosome[key]
else:
child_chromosome[key] = partner.chromosome[key]
child = Organism(chromosome= child_chromosome, phase=self.phase, prevBestOrganism=self.prevBestOrganism)
child.build_model()
child.fitnessFunction(train_ds,
test_ds,
generation_number=generation_number)
return child
def mutation(self, generation_number):
'''
        One of the genes is mutated.
'''
index = np.random.randint(0, len(self.chromosome))
key = list(self.chromosome.keys())[index]
if self.phase != 0:
self.chromosome[key] = options[key][np.random.randint(len(options[key]))]
else:
self.chromosome[key] = options_phase0[key][np.random.randint(len(options_phase0[key]))]
self.build_model()
self.fitnessFunction(train_ds,
test_ds,
generation_number=generation_number)
def show(self):
'''
Util function to show the individual's properties.
'''
pp.pprint(self.chromosome)
def random_hyper(phase):
if phase == 0:
return {
'a_filter_size': options_phase0['a_filter_size'][np.random.randint(len(options_phase0['a_filter_size']))],
'a_include_BN': options_phase0['a_include_BN'][np.random.randint(len(options_phase0['a_include_BN']))],
'a_output_channels': options_phase0['a_output_channels'][np.random.randint(len(options_phase0['a_output_channels']))],
'activation_type': options_phase0['activation_type'][np.random.randint(len(options_phase0['activation_type']))],
'b_filter_size': options_phase0['b_filter_size'][np.random.randint(len(options_phase0['b_filter_size']))],
'b_include_BN': options_phase0['b_include_BN'][np.random.randint(len(options_phase0['b_include_BN']))],
'b_output_channels': options_phase0['b_output_channels'][np.random.randint(len(options_phase0['b_output_channels']))],
'include_pool': options_phase0['include_pool'][np.random.randint(len(options_phase0['include_pool']))],
'pool_type': options_phase0['pool_type'][np.random.randint(len(options_phase0['pool_type']))],
'include_skip': options_phase0['include_skip'][np.random.randint(len(options_phase0['include_skip']))]
}
else:
return {
'a_filter_size': options['a_filter_size'][np.random.randint(len(options['a_filter_size']))],
'a_include_BN': options['a_include_BN'][np.random.randint(len(options['a_include_BN']))],
'a_output_channels': options['a_output_channels'][np.random.randint(len(options['a_output_channels']))],
'b_filter_size': options['b_filter_size'][np.random.randint(len(options['b_filter_size']))],
'b_include_BN': options['b_include_BN'][np.random.randint(len(options['b_include_BN']))],
'b_output_channels': options['b_output_channels'][np.random.randint(len(options['b_output_channels']))],
'include_pool': options['include_pool'][np.random.randint(len(options['include_pool']))],
'pool_type': options['pool_type'][np.random.randint(len(options['pool_type']))],
'include_layer': options['include_layer'][np.random.randint(len(options['include_layer']))],
'include_skip': options['include_skip'][np.random.randint(len(options['include_skip']))]
}
def softmax(x):
e_x = np.exp(x - np.max(x))
return e_x / e_x.sum()
```
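The `softmax` helper above is what `Generation.generate` later uses to turn fitness scores into parent-selection probabilities, so fitter organisms are chosen for crossover more often. A quick self-contained illustration (the fitness values here are made up):

```python
import numpy as np

def softmax(x):
    # subtracting the max keeps the exponentials numerically stable
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

fitness = [0.70, 0.65, 0.50]   # hypothetical test accuracies
p = softmax(fitness)
print(p.sum())                 # approximately 1.0
print(p[0] > p[1] > p[2])      # True: fitter means more likely to be picked
```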
# Generation
This class holds the generations of models.
1. fitSurvivalRate - the fraction of fit individuals we want in the next generation.
2. unfitSurvivalProb - the probability that an unfit individual survives into the next generation.
3. mutationRate - the mutation rate for changing genes in an individual.
4. phase - the phase that the generation belongs to.
5. population_size - the number of individuals that the generation consists of.
6. prevBestOrganism - the best organism (individual) of the previous phase.
```
class Generation:
def __init__(self,
fitSurvivalRate,
unfitSurvivalProb,
mutationRate,
phase,
population_size,
prevBestOrganism):
self.population_size = population_size
self.population = []
self.generation_number = 0
self.mutationRate = mutationRate
self.fitSurvivalRate = fitSurvivalRate
self.unfitSurvivalProb = unfitSurvivalProb
self.prevBestOrganism = prevBestOrganism
self.phase = phase
# creating the first population: GENERATION_0
# can be thought of as the setup function
for idx in range(self.population_size):
org = Organism(chromosome=random_hyper(self.phase), phase=self.phase, prevBestOrganism=self.prevBestOrganism)
org.build_model()
org.fitnessFunction(train_ds,
test_ds,
generation_number=self.generation_number)
self.population.append(org)
# sorts the population according to fitness (high to low)
self.sortModel()
self.generation_number += 1
def sortModel(self):
'''
sort the models according to the
fitness in descending order.
'''
fitness = [ind.fitness for ind in self.population]
sort_index = np.argsort(fitness)[::-1]
self.population = [self.population[index] for index in sort_index]
def generate(self):
'''
Generate a new generation in the same phase
'''
number_of_fit = int(self.population_size * self.fitSurvivalRate)
new_pop = self.population[:number_of_fit]
for individual in self.population[number_of_fit:]:
if np.random.rand() <= self.unfitSurvivalProb:
new_pop.append(individual)
for index, individual in enumerate(new_pop):
if np.random.rand() <= self.mutationRate:
new_pop[index].mutation(generation_number=self.generation_number)
fitness = [ind.fitness for ind in new_pop]
children=[]
for idx in range(self.population_size-len(new_pop)):
parents = np.random.choice(new_pop, replace=False, size=(2,), p=softmax(fitness))
A=parents[0]
B=parents[1]
child=A.crossover(B, generation_number=self.generation_number)
children.append(child)
self.population = new_pop+children
self.sortModel()
self.generation_number+=1
def evaluate(self, last=False):
'''
Evaluate the generation
'''
fitness = [ind.fitness for ind in self.population]
wandb.log({'Best fitness': fitness[0]})
wandb.log({'Average fitness': sum(fitness)/len(fitness)})
self.population[0].show()
if last:
return self.population[0]
population_size = 10
number_generation = 3
fitSurvivalRate = 0.5
unfitSurvivalProb = 0.2
mutationRate = 0.1
number_of_phases = 5
prevBestOrganism = None
for phase in range(number_of_phases):
# print("PHASE {}".format(phase))
generation = Generation(fitSurvivalRate=fitSurvivalRate,
unfitSurvivalProb=unfitSurvivalProb,
mutationRate=mutationRate,
population_size=population_size,
phase=phase,
prevBestOrganism=prevBestOrganism)
while generation.generation_number < number_generation:
generation.generate()
if generation.generation_number == number_generation:
# Last generation is the phase
# print('I AM THE BEST IN THE PHASE')
prevBestOrganism = generation.evaluate(last=True)
keras.utils.plot_model(prevBestOrganism.model, to_file='best.png')
wandb.log({"best_model": [wandb.Image('best.png', caption="Best Model")]})
else:
generation.evaluate()
```
---
# Iterative design
Repetition everywhere

Infinite fractals ... See [Xaos](https://xaos-project.github.io/) for a hypnotic experience!
## Repetition
`while` with an escape!
```
from random import choice
def escape(hidden):
guess = 0
count = 0
while guess != hidden:
guess = choice(range(100))
count += 1
return count
```
## Simulations
Monte Carlo simulations ...
```
LC = [escape(42) for _ in range(1000)]
sum(LC) / len(LC)
```
## The birthday paradox
What is the probability that two people share the same birthday?
With how many people together does this probability reach 50%?
Can this be simulated?
### The approach?
Fill a room with people one by one until two of them share a birthday.
**The escape?**
Keep filling the room as long as (`while`) the birthdays in the room are unique!
The room? A list!
```python
def until_a_repeat(high):
"""Fills a list of random values until a first repeat
Argument: high, the random value upper boundary
Return value: the number of elements in the list.
"""
```
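One way the exercise could be completed (a sketch only; the version actually used later is loaded from `simulate.py`, and this sketch uses a set-based uniqueness test rather than the `unique` helper shown further below):

```python
from random import choice

def until_a_repeat(high):
    """Fills a list of random values until a first repeat
       Argument: high, the random value upper boundary
       Return value: the number of elements in the list.
    """
    L = []                          # the "room", initially empty
    while len(set(L)) == len(L):    # escape once a birthday repeats
        L += [choice(range(high))]  # one more person enters the room
    return len(L)
```

For 365 possible birthdays the returned count averages around 24 to 25 over many runs, matching the analytic result that a group of only 23 people already has roughly a 50% chance of containing a shared birthday.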
### How long until a repeat?
Sooner than you think!

```
def unique(L):
"""Returns whether all elements in L are unique.
Argument: L, a list of any elements.
Return value: True, if all elements in L are unique,
or False, if there is any repeated element
"""
if len(L) == 0:
return True
elif L[0] in L[1:]:
return False
else:
return unique(L[1:])
```
This helper function is provided!
### A birthday is just a day
```
L = [bday for bday in range(365)]
L[:10]
```
Map 1 January to 0, and continue up to 31 December (364) ...
```
unique(L)
```
### Random birthdays
A simulation with random!
```
%run simulate.py
LC = [until_a_repeat(365) for _ in range(1000)]
LC[:10]
min(LC)
max(LC)
sum(LC) / len(LC)
```
## Thinking in loops
`for`
```python
for x in range(42):
print(x)
```
`while`
```python
x = 1
while x < 42:
print(x)
x *= 2
```
### Differences
What are the design differences between these two Python loops?
`for` — finite repetition
For an existing list or a known number of iterations
`while` — indefinite repetition
For an unknown number of iterations
## Pi with darts
Pi, or $\pi$, is a *constant*: the ratio between the circumference and the diameter of a circle
### Pithon?
```
import math
math.pi
```
### Determining pi?
Can $\pi$ be determined by means of a simulation?


### Algorithm
- throw a number of darts at random onto the plane
- count the number of darts that land inside the circle
- compute $\pi$ as follows
$$
\pi = 4 \times \dfrac{\text{darts in circle}}{\text{darts total}}
$$

### Why does this work?
Ratios!
$$
\dfrac{\text{darts in circle}}{\text{darts total}} \approx \dfrac{\text{circle area}}{\text{square area}}
$$
Given: the area of a circle equals $\pi \cdot r^2$
*Circle area*
The radius $r$ in this case is 0.5, so the area of the circle is $\pi \cdot 0.25$, or $\dfrac{\pi}{4}$
*Square area*
The width of the square is 1, so the area of the square is 1
$$
\dfrac{\text{circle area}}{\text{square area}} = \frac{\dfrac{\pi}{4}}{1}
$$
which can be simplified to
$$
\dfrac{\text{circle area}}{\text{square area}} = \dfrac{\pi}{4}
$$
and then rearranged into
$$
\dfrac{\text{circle area}}{\text{square area}} \times 4 = \pi
$$
### `for` or `while`?
Which function would use which type of loop?
```python
pi_one(e)
```
`e` = how close we need to get to π
`while`
```python
pi_two(n)
```
`n` = the number of darts to throw
`for`
### Simulate!
```python
def for_pi(n):
"""Calculate pi with a for loop
"""
...
```
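One possible way to fill in `for_pi` (a sketch, assuming the unit square with an inscribed circle of radius 0.5 from the derivation above):

```python
from random import random

def for_pi(n):
    """Calculate pi with a for loop
    by throwing n random darts at the unit square."""
    hits = 0
    for _ in range(n):
        x = random()   # a random dart ...
        y = random()   # ... somewhere on the unit square
        # does it land inside the circle centred at (0.5, 0.5)?
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:
            hits += 1
    return 4 * hits / n
```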
```
for_pi(100000)
```
## Nested loops
They are very familiar!

### Seconds tick away ...
```python
for minute in range(60):
for second in range(60):
tick()
```
### Time flies!
```python
for year in range(84):
for month in range(12):
for day in range(f(month, year)):
for hour in range(24):
for minute in range(60):
for second in range(60):
tick()
```
## Quiz
What will be printed?
```python
for x in range(0, 1):
for y in range(x, 2):
print(x, y)
```
### Solution
```
for x in range(0, 1):
for y in range(x, 2):
print(x, y)
```
## Two-dimensional structures
Rows and columns
Note: when "arrays" (2D arrays) are mentioned, this is what you know as lists!

### List comprehension
```
def mul_table(n):
"""Returns a multiplication table for n
"""
return [[x * y for x in range(1, n + 1)] for y in range(1, n + 1)]
mul_table(5)
```
### Iteratively
```
def mul_table(n):
"""Returns a multiplication table for n
"""
table = [] # start with an empty table
for x in range(1, n + 1): # for every row in this table ...
row = [] # start with an empty row
for y in range(1, n + 1): # for every column in this row ...
row += [x * y] # add the column value to the row
table += [row] # add the row to the table
return table # return table
mul_table(5)
```
### A dozen
```
def dozen(n):
"""Eggs by the dozen!
"""
for x in range(n):
row = ""
for y in range(12): # fixed, dozen is always 12!
row += "🥚"
print(row)
dozen(1)
dozen(12)
```
### Syntax
And semantics...
```
row = ""
for y in range(12):
row += "🥚"
print(row)
print(12 * "🥚")
```

Python [ASCII Art](https://en.wikipedia.org/wiki/ASCII_art)!
### Rows and columns
And newlines ...
```
for row in range(3):
for col in range(4):
print("#")
for row in range(3):
for col in range(4):
print("#", end="")
for row in range(3):
for col in range(4):
print("#", end="")
print()
```
```console
____ _
/ ___| _ _ ___ ___ ___ ___| |
\___ \| | | |/ __/ __/ _ \/ __| |
___) | |_| | (_| (_| __/\__ \_|
|____/ \__,_|\___\___\___||___(_)
```
---
### 2. Preparing the training data
```
# Import the PyTorch libraries
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
# Import pandas
import pandas as pd
# Import NumPy
import numpy as np
# Import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
# Read in the data and display it
dat = pd.read_csv('../data/weather_data.csv', skiprows=[0, 1, 2, 3, 4, 5], encoding="cp949")
dat
# Extract the mean temperature column and plot it
temp = dat['평균기온(℃)']
temp.plot()
plt.show()
# Split the dataset into training and test data
train_x = temp[:1461]  # 2011-01-01 to 2014-12-31
test_x = temp[1461:]   # 2015-01-01 to 2016-12-31
# Convert to NumPy arrays
train_x = np.array(train_x)
test_x = np.array(test_x)
# Number of explanatory variables (window size)
ATTR_SIZE = 180  # 6 months
tmp = []
train_X = []
# Slide a window one data point at a time to extract training samples
for i in range(0, len(train_x) - ATTR_SIZE):
    tmp.append(train_x[i:i+ATTR_SIZE])
train_X = np.array(tmp)
# Convert the training data to a DataFrame and display it
pd.DataFrame(train_X)
```
### 3. Building the neural network
```
# Define the network
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(180, 128)
self.fc2 = nn.Linear(128, 64)
self.fc3 = nn.Linear(64, 128)
self.fc4 = nn.Linear(128,180)
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
x = F.relu(self.fc3(x))
x = self.fc4(x)
return x
# Create a model instance
model = Net()
```
### 4. Training the model
```
# Loss function
criterion = nn.MSELoss()
# Choose the optimization method
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Training
for epoch in range(1000):
    total_loss = 0
    d = []
    # Build a mini-batch from the training data
    for i in range(100):
        # Pick a random index into the training data
        index = np.random.randint(0, 1281)
        # Add the sample to the mini-batch
        d.append(train_X[index])
    # Convert to a NumPy array
    d = np.array(d, dtype='float32')
    # Build the computation graph
    d = Variable(torch.from_numpy(d))
    # Reset the gradients
    optimizer.zero_grad()
    # Forward pass
    output = model(d)
    # Compute the reconstruction loss
    loss = criterion(output, d)
    # Backward pass
    loss.backward()
    # Update the weights
    optimizer.step()
    # Accumulate the loss (loss.item(); the older loss.data[0] is deprecated)
    total_loss += loss.item()
    # Print the accumulated loss every 100 epochs
    if (epoch+1) % 100 == 0:
        print(epoch+1, total_loss)
# Plot one input window and its reconstruction
plt.plot(d.data[0].numpy(), label='original')
plt.plot(output.data[0].numpy(), label='output')
plt.legend(loc='upper right')
plt.show()
```
### 5. Computing the anomaly score
```
tmp = []
test_X = []
# Split the test data into 6-month chunks
tmp.append(test_x[0:180])
tmp.append(test_x[180:360])
tmp.append(test_x[360:540])
tmp.append(test_x[540:720])
test_X = np.array(tmp, dtype="float32")
# Convert the data to a DataFrame and display it
pd.DataFrame(test_X)
# Apply the model
d = Variable(torch.from_numpy(test_X))
output = model(d)
# Plot the original data and the reconstruction
plt.plot(test_X.flatten(), label='original')
plt.plot(output.data.numpy().flatten(), label='prediction')
plt.legend(loc='upper right')
plt.show()
# Compute the anomaly score (squared reconstruction error per point)
test = test_X.flatten()
pred = output.data.numpy().flatten()
total_score = []
for i in range(0, 720):
    dist = (test[i] - pred[i])
    score = pow(dist, 2)
    total_score.append(score)
# Normalize the anomaly scores to the interval [0, 1]
total_score = np.array(total_score)
max_score = np.max(total_score)
total_score = total_score / max_score
# Print the anomaly scores
total_score
# Plot the anomaly scores
plt.plot(total_score)
plt.show()
```
---
# Introduction
This sample notebook takes you through an end-to-end workflow that demonstrates the functionality of SageMaker Ground Truth and Amazon Rekognition Custom Labels.
```
import datetime
import tarfile
import boto3
import os
from sagemaker import get_execution_role
import sagemaker
from IPython.display import HTML, display, Image as IImage
from PIL import Image, ImageDraw, ImageFont
```
## Upload Images to S3
```
bucket_name = 'sagemaker-aiml' ## Update this value with the bucket name you created earlier in the lab
## Uploading Licensed Images for raw data
region = boto3.Session().region_name
source_dir = '../images/raw-data/LicensedImages-CreativeCommons'
dest_dir = 'raw-data/images'
file_list = os.listdir(source_dir)
s3_client = boto3.client('s3', region_name=region)
for file in file_list :
if file != '.ipynb_checkpoints':
response = s3_client.upload_file(source_dir+'/'+file, bucket_name, dest_dir+"/"+file)
print (file + ' uploaded')
print('Raw Data Upload Complete to '+bucket_name+'/'+dest_dir)
## Uploading Non-Licensed Images for raw data
source_dir = '../images/raw-data/LicenseNotNeeded_Images'
dest_dir = 'raw-data/images'
file_list = os.listdir(source_dir)
s3_client = boto3.client('s3', region_name=region)
for file in file_list :
if file != '.ipynb_checkpoints':
response = s3_client.upload_file(source_dir+'/'+file, bucket_name, dest_dir+"/"+file)
print (file + ' uploaded')
print('Raw Data Upload Complete to '+bucket_name+'/'+dest_dir)
## Uploading Test Data
source_dir = '../images/test-data'
dest_dir = 'test-data/images'
file_list = os.listdir(source_dir)
s3_client = boto3.client('s3', region_name=region)
for file in file_list :
response = s3_client.upload_file(source_dir+'/'+file, bucket_name, dest_dir+"/"+file)
print (file + ' uploaded')
print('Test Data Upload Complete to '+bucket_name+'/'+dest_dir)
```
### Let's look at one of the images
```
imageName = "raw-data/images/800px-Woodpeckers-Telephone-Cable.jpg"
display(IImage(url=s3_client.generate_presigned_url('get_object', Params={'Bucket': bucket_name, 'Key': imageName})))
```
## Detect Objects using Amazon Rekognition
#### Attach IAM Managed Policy
- Click on the generated URL
- Click on **Attach policies** button
- Search for **Rekog** in **Filter policies** bar
- Select **AmazonRekognitionFullAccess** and click on **Attach Policy**
```
role_name = get_execution_role().split('/')[1]
job_url = "https://console.aws.amazon.com/iam/home?#/roles/"+role_name+"?section=permissions"
print (job_url)
```
<img src="../lab-images/15.png">
<img src="../lab-images/16.png">
#### Assume Role
- Click on the generated URL
- Click on **Edit trust relationship** button
- Edit Policy with Service as **["sagemaker.amazonaws.com","rekognition.amazonaws.com"]**
- Click on **Update Trust Policy** button
```
job_url = "https://console.aws.amazon.com/iam/home?#/roles/"+role_name+"?section=trust"
print (job_url)
```
<img src="../lab-images/17.png" width="600">
<img src="../lab-images/18.png" width="600">
### Let's look at Object Detection
```
# Call Amazon Rekognition to detect objects in the image
# https://docs.aws.amazon.com/rekognition/latest/dg/API_DetectLabels.html
imageName = "raw-data/images/4278289454_d4bcb08484_o.jpg"
# Init clients
rekognition = boto3.client('rekognition')
detectLabelsResponse = rekognition.detect_labels(
Image={
'S3Object': {
'Bucket': bucket_name,
'Name': imageName,
}
}
)
imageName = "raw-data/images/4278289454_d4bcb08484_o.jpg"
display(IImage(url=s3_client.generate_presigned_url('get_object', Params={'Bucket': bucket_name, 'Key': imageName})))
## Display the list of detected objects
print("Detected objects:")
for label in detectLabelsResponse["Labels"]:
print("- {} (Confidence: {})".format(label["Name"], label["Confidence"]))
```
As you can see in the response above, Amazon Rekognition detected objects in the provided image but did not detect the holes as objects of interest. This means we need Amazon Rekognition Custom Labels to detect custom labels in the image. Typically, you would need to identify the right machine learning algorithm, train the model, and perform hyperparameter tuning. All of this is handled for you with just a few lines of code in Amazon Rekognition Custom Labels. Let's see it in action.
# Ground Truth labeling job
Part or all of your images will be annotated by human annotators. It is essential to provide good instructions that help the annotators give you the annotations you want. Good instructions are:
- Concise. We recommend limiting verbal/textual instructions to two sentences and focusing on clear visuals.
- Visual. In the case of image classification, we recommend providing one labeled image for each of the classes as part of the instructions.

When used through the AWS Console, Ground Truth helps you create the instructions using a visual wizard.
### Create Labeling Workforce
- Select **'Labeling workforces'** then click the **'Private'** tab.
- On the **'Private'** tab click **'Create private team'**

On the **'Create private team'** page
- Enter the **'Team name'** as **Labeling-experts**
- Click **Create private team**

Select **Invite new workers**

On the **Add workers by email address** page
- Add your **email address** to invite private annotators to access the job. For the purpose of this exercise, you can use your own email address. Typically, this will be the list of email addresses of workers in your organization.
- Click **Invite new workers**

### Ground Truth Label Job
In the left hand menu select **Labeling Job**

Click **Create labeling job**

### Specify Job Parameters
- Specify Job Name - **'aws-workshops-woodpecker-holes'**
- Check the box next to "I want to specify a label attribute name different from the labeling job name."
- Specify a value of **'labels'** in the "Label attribute name" field
- Under "Input data setup" select "**Automated data setup**"
- For "S3 location for input datasets" specify the S3 location of images - **'s3://{your-bucket-name}/raw-data/images/'**
- Next select "Specify a new location" under "S3 location for output datasets" and specify the output location for annotated data - **'s3://{your-bucket-name}/annotated-data/'**
- For "Data type" select "images"
**Note:** When you see **{your-bucket-name}** replace it with the name of the bucket that you created earlier
<img src="../lab-images/labelingJob.png" width="700">
### Create IAM Role
- Select the option to **create a new role**
- Specify S3 Bucket Name - **'{your-bucket-name}'**
- Click on **Create** button
<img src="../lab-images/4.png" width="600">
<img src="../lab-images/5.png" width="600">
### Complete Data setup
- Click on "**Complete Data Setup**". This will created the image manifest file and update the S3 input location path. Wait for "**Input data connection successful**"
<img src="../lab-images/completedatasetup.png">
### Additional Configuration
- Expand **Additional Configuration**
- Validate that **Full dataset** is selected (This is used to specify whether you want to provide all the images to labeling job or a sub set of images based on filters or random sampling)
<img src="../lab-images/6.png" width="600">
### Labeling Task
- From the **Task type** drop-down, select **Image**, since you will be annotating images
<img src="../lab-images/7.png" width="600">
### Task Selection
- This is an object detection use case, so you need to select the **Bounding box** option
- Leave other options as default and click on **Next** button
<img src="../lab-images/8.png" width="600">
### Select workers and configure tool
- Select **Private** in **Worker types**. For this lab, you will use an internal workforce to annotate the images. Depending on your use case, you could instead select a public contractual workforce (**Amazon Mechanical Turk**) or a partner workforce (**Vendor managed**).
- In **Private teams** select the team name - **'Labeling-experts'**
<img src="../lab-images/9.png" width="600">
### Labeling Instructions Template
- Leave other configurations default and scroll down to **Bounding box labeling tool**
- Add two labels as shown below - **'hole'** and **'no_hole'**
- Add detailed instructions in the **Description tab** for providing instructions to the workers - For example, you can specify - **You need to label woodpecker hole in the provided image. Please ensure that you select label 'hole' and draw the box around the woodpecker hole just to fit the hole for better quality of label data. You also need to label other areas which look similar to woodpecker holes but are not woodpecker holes with label 'no_hole'**
- You can also *optionally* provide examples of good and bad labeling images. You need to make sure that these images are publicly accessible.
- Click on **Create** button
<img src="../lab-images/10.png" width="500">
### Start Labeling Job
- Once you have successfully created the job, you will see that its **status** is **"InProgress"**. This means the job is created and the private workforce has been notified via **email** about the task assigned to them. Since you assigned the task to yourself in this case, you should have received an email with instructions to log in to the Ground Truth labeling project
- **Open the email** and click on the **link** provided
- Enter the **username** and **password** provided in the email. You may be asked to replace the one-time password from the email with a new password after logging in
- After you login, you will see the below screen
- Click on **Start working** button
<img src="../lab-images/11.png" width="700">
### Labeling Task
- You can use the provided tools to **Zoom in**, **Zoom out**, **Move**, and draw a **Box** on the images.
- First select a **label**, either **hole** or **no_hole**, and then draw a box on the image to annotate it.
- Once you have annotated the required objects, click on the **Submit** button
<img src="../lab-images/12.png" heigth="900">
### Complete Labeling Task
- You need to ensure that the bounding box is just large enough to bound the object of interest
- Each time you draw a bounding box, first select the label in the right panel and then draw the box around the object
<img src="../lab-images/13.png">
### Check Labeling Job Status
A Ground Truth job can take a few hours to complete (if your dataset is larger than 10,000 images, it can take much longer than that!). One way to monitor the job's progress is through the AWS Console. In this notebook, we will use Ground Truth output files to monitor the progress.
You can re-evaluate the next cell repeatedly. It sends a `describe_labeling_job` request that tells you whether the job has completed; if it has, `'LabelingJobStatus'` will be `'Completed'`.
```
job_name = 'aws-workshops-woodpecker-holes'
sagemaker_client = boto3.client('sagemaker')
sagemaker_client.describe_labeling_job(LabelingJobName=job_name)['LabelingJobStatus']
```
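Instead of re-running the cell by hand, a small polling helper can wrap the status call — a sketch, where `get_status` would be `lambda: sagemaker_client.describe_labeling_job(LabelingJobName=job_name)['LabelingJobStatus']`:

```python
import time

def wait_for_job(get_status, poll_seconds=60, timeout_seconds=4 * 3600):
    """Poll get_status() until the job reaches a terminal state."""
    waited = 0
    while waited <= timeout_seconds:
        status = get_status()
        if status in ('Completed', 'Failed', 'Stopped'):
            return status
        time.sleep(poll_seconds)
        waited += poll_seconds
    raise TimeoutError('labeling job still InProgress after timeout')
```

The poll interval and timeout are arbitrary defaults; tune them to your dataset size.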
### Inspect Labeled Data Sets
```
job_url = "https://"+region+".console.aws.amazon.com/sagemaker/groundtruth?region="+region+"#/labeling-jobs/details/"+job_name
print(job_url)
```
### Labeled Data Sets
- Once you have labeled all the images, you will be taken to the SageMaker labeling project home page. This page shows you the **Labeled dataset**, as shown below
- You can see how the different labels are applied. The training data for Amazon Rekognition Custom Labels is now ready.
<img src="../lab-images/14.png">
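Ground Truth writes its results as an `output.manifest` file of JSON lines, one per image. A minimal sketch of reading one line — the attribute name matches the job name, and the sample record here is illustrative, so adjust both against your real manifest:

```python
import json

# One hypothetical line from output.manifest (bounding-box job)
line = '{"source-ref": "s3://bucket/img1.jpg", "aws-workshops-woodpecker-holes": {"annotations": [{"class_id": 0, "left": 10, "top": 20, "width": 30, "height": 40}]}}'
record = json.loads(line)
boxes = record["aws-workshops-woodpecker-holes"]["annotations"]
print(record["source-ref"], len(boxes))
```

Iterating a real manifest is the same `json.loads` applied per line of the file.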
# Review
We covered a lot of ground in this notebook! Let's recap what we accomplished: we uploaded images to an S3 bucket, ran a SageMaker Ground Truth labeling job, and generated new labels for all of the images in our dataset.
| github_jupyter |
# Average remuneration calculation for the UNRC
Based on official data extracted from the UNRC information system and various public statements.
From **UNRC Human Resources**: [Estadísticas Sireh](https://sisinfo.unrc.edu.ar/estadisticas/estadisticas_sireh.php), we extract the headcount classified by *category* and *weekly hours*:
**AUTHORITIES**
| **Dedication** | Exclusive | Simple | Full time | Part time |
|-----------------------|-----------|--------|-----------------|----------------|
| **Headcount** | 41 | 39 | 2 | 1 |
| **Weekly hours** | 40 | 20 | 40 | 20 |
**FACULTY**
| **Dedication** | Exclusive | Other | Semi-exclusive | Simple |
|-----------------------|-----------|------|----------------|--------|
| **Headcount** | 705 | 171 | 581 | 418 |
| **Weekly hours** | 40 | 20 | 20 | 10 |
**NON-TEACHING STAFF**
| **Category** | C1 | C2 | C3 | C4 | C5 | C6 | C7 |
|-----------------------|----|----|-----|-----|-----|----|-----|
| **Headcount** | 16 | 45 | 110 | 104 | 144 | 49 | 122 |
| **Weekly hours** | 40 | 40 | 40 | 40 | 40 | 40 | 40 |
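From the three tables, the total weekly staff hours can be computed directly — a quick check (the notebook's `total_horas` constant below appears to be roughly this figure annualized over ~52 weeks):

```python
# people × weekly hours, per group, taken from the tables above
autoridades = 41*40 + 39*20 + 2*40 + 1*20        # authorities
docentes = 705*40 + 171*20 + 581*20 + 418*10     # faculty
no_docentes = (16+45+110+104+144+49+122) * 40    # non-teaching staff
total_semanal = autoridades + docentes + no_docentes
print(total_semanal)  # 73540 hours per week
```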
```
import matplotlib.pyplot as plt
import numpy as np
import math
%matplotlib inline
x = [0, 3, 4, 7, 8, 9]
y = [418, 752, 40, 705, 43, 590]
colors = ['green', 'green', 'blue', 'green', 'blue', 'orange']
bars = plt.bar(x, y, color=colors)
plt.xticks([0, 3.5, 8], [10, 20, 40], fontsize=12)
plt.yticks([40] + list(range(100, 800, 100)), fontsize=12)
plt.xlim(-1, 10)
plt.xlabel('Cantidad de horas semanales', fontsize=14)
plt.ylabel('Cantidad de personas', fontsize=14)
plt.title('Recursos humanos UNRC', fontsize=20)
plt.legend([bars[0], bars[2], bars[-1]], ['Docente', 'Autoridad', 'No docente'])
#plt.hlines(40, -1, 7.6, linestyles='--', alpha=0.3)
plt.grid(axis='y')
plt.savefig('../content/img/rrhh_unrc.png', dpi=100, bbox_inches='tight')
```
The official UNRC site **does not publish** a budget for 2018; the most recent publication on the subject dates from 2016 *([Presupuesto UNRC](https://www.unrc.edu.ar/unrc/presupuesto.php))*. A [Puntal news article](http://www.puntal.com.ar/noticia/UNRC-el-presupuesto-para-2018-crece-25-y-llega-a--1.478-millones-20170920-0017.html) reports a figure of **$1,478 million** for the 2018 budget.
According to public statements by UNRC authorities in an [article](https://www.unrc.edu.ar/unrc/n_comp.cdc?nota=32358) on the official site: **"*Operating expenses, which consume between 10 and 15 percent of the UNRC budget (the rest goes to salaries), were another of the topics.*"**
Assuming **85%** of the budget is allocated to salaries **(approximately $1,200 million)**, we compute an average hourly rate and, from it, the implied remuneration per staff member according to their dedication:
```
presupuesto_2018 = 1_478_000_000
presupuesto_sueldos_2018 = presupuesto_2018 * 0.85
total_horas = 3803800
pago_hora = presupuesto_sueldos_2018 / total_horas
pago_hora
pago_semana = []
pago_mes = []
horas = [10, 20, 30, 40]
for hora in horas:
semana = round(hora * pago_hora, 2)
pago_semana.append(semana)
mes = round(4 * hora * pago_hora, 2)
pago_mes.append(mes)
print(pago_semana)
print(pago_mes)
x = [10, 20, 30, 40]
y1 = pago_semana
y2 = pago_mes
#colors = ['green', 'green', 'blue', 'green', 'blue', 'orange']
bars = plt.bar(x, y1, width=4)#, color=colors)
#plt.xlim(-1, 45)
plt.xlabel('Cantidad de horas semanales', fontsize=14)
plt.ylabel('Remuneración en $', fontsize=14)
plt.title('Pago por semana', fontsize=20)
#plt.legend([bars[0], bars[2], bars[-1]], ['Docente', 'Autoridad', 'No docente'])
plt.xticks([10, 20, 30, 40])
plt.yticks([3000, 6500, 9500, 13000])
plt.grid(axis='y')
plt.savefig('../content/img/pago_semanal.png', dpi=100, bbox_inches='tight')
```
| Weekly hours | 10 | 20 | 30 | 40 |
|------------------|----------|----------|----------|----------|
| **Monthly pay** | \$13211.0 | \$26422.0 | \$39633.0 | \$52844.0 |
```
bars = plt.bar(x, y2, width=4)#, color=colors)
#plt.xlim(-1, 45)
plt.xlabel('Cantidad de horas semanales', fontsize=14)
plt.ylabel('Remuneración en $', fontsize=14)
plt.title('Pago por mes', fontsize=20)
#plt.legend([bars[0], bars[2], bars[-1]], ['Docente', 'Autoridad', 'No docente'])
plt.xticks([10, 20, 30, 40])
plt.yticks([13000, 25000, 40000, 50000])
plt.grid(axis='y')
plt.savefig('../content/img/pago_mensual.png', dpi=100, bbox_inches='tight')
# Number of people by dedication: exclusive, semi-exclusive, simple
autoridades = [43, 1, 40]
docentes = [705, 581, 418+171]
no_docentes = [16, 45, 110, 104, 144, 49, 122]
# Remuneration by dedication/category
exclusiva_max = 77_410
exclusiva_promedio = (77_410 + 42_335) / 2
semi_exclusiva_max = 38_689
semi_exclusiva_promedio = (38_689 + 21_152) / 2
simple_max = 19_326
simple_promedio = (19_326 + 10_557) / 2
cat_no_docentes_max = [
52699 + 3074 + 10540 + 13175 + 527 + 5270 + 13649,
43916 + 3074 + 8783 + 10979 + 439 + 4391 + 6148,
36538 + 3074 + 7307 + 9134 + 365 + 3653 + 5164,
30390 + 3074 + 6078 + 7597 + 607 + 3039 + 4304,
25296 + 500 + 3074 + 5059 + 6324 + 505 +2529 + 3566,
21079 + 2500 + 3074 + 4216 + 5270 + 421 + 2108 + 2951,
17566 + 2500 + 3074 + 3513 + 4391 + 351 + 1756 + 2459
]
cat_no_docentes_promedio = [
52699 + 3074 + 10540 + 13175 + 527 + 5270 + ((13649 + 1949)/2),
43916 + 3074 + 8783 + 10979 + 439 + 4391 + ((6148 + 878)/2),
36538 + 3074 + 7307 + 9134 + 365 + 3653 + ((5164 + 737) /2),
30390 + 3074 + 6078 + 7597 + 607 + 3039 + ((4304 + 614) / 2),
25296 + 500 + 3074 + 5059 + 6324 + 505 +2529 + ((3566 + 509) /2),
21079 + 2500 + 3074 + 4216 + 5270 + 421 + 2108 + ((2951 + 421) /2),
17566 + 2500 + 3074 + 3513 + 4391 + 351 + 1756 + ((2459 + 351) /2)
]
remuneracion_autoridades_max = []
remuneracion_autoridades_promedio = []
remuneracion_docentes_max = []
remuneracion_docentes_promedio = []
remuneracion_no_docentes_max = []
remuneracion_no_docentes_promedio = []
# Approximation using average monthly remuneration
remuneracion_autoridades_promedio.append(autoridades[0] * exclusiva_promedio * 12)
remuneracion_autoridades_promedio.append(autoridades[1] * semi_exclusiva_promedio * 12)
remuneracion_autoridades_promedio.append(autoridades[2] * simple_promedio * 12)
remuneracion_docentes_promedio.append(docentes[0] * exclusiva_promedio * 12)
remuneracion_docentes_promedio.append(docentes[1] * semi_exclusiva_promedio * 12)
remuneracion_docentes_promedio.append(docentes[2] * simple_promedio * 12)
for i, cant in enumerate(no_docentes):
remuneracion_no_docentes_promedio.append(cant * cat_no_docentes_promedio[i] * 12)
total_autoridades = sum(remuneracion_autoridades_promedio)
total_docentes = sum(remuneracion_docentes_promedio)
total_no_docentes = sum(remuneracion_no_docentes_promedio)
print('Total autoridades: $', total_autoridades)
print('Total docentes: $', total_docentes)
print('Total no docentes: $', total_no_docentes)
total_sueldos = total_autoridades + total_docentes + total_no_docentes
print('Total sueldos: $', total_sueldos)
presupuesto_2018 = 1_478_000_000
presupuesto_sueldos_2018 = presupuesto_2018 * 0.85
print(f'Presupuesto sueldos 2018: $ {presupuesto_sueldos_2018}')
resto = presupuesto_sueldos_2018 - total_sueldos
print('Resto: $', resto)
def div(a):
return a/1_000_000
y0 = [total_autoridades, total_docentes, total_no_docentes, resto]
y1 = [presupuesto_sueldos_2018]
y0 = list(map(div, y0))
y1 = list(map(div, y1))
#y_millones = list(map(div, y))
#y_millones
y0_cum = np.cumsum(y0)
y0_cum_shift = np.zeros_like(y0_cum)
y0_cum_shift[1:] = y0_cum[:-1]
colors = ['b', 'g', 'orange', 'r']
bars0 = plt.bar(x=0, height=y0, width=0.7, bottom=y0_cum_shift, color=colors)
bars1 = plt.bar(x=1, height=y1, color=['purple'])
plt.xlim(-3.25, 1.5)
plt.xlabel('Balance', fontsize=14)
plt.ylabel('Monto en millones de $', fontsize=14)
plt.title('Balance de sueldos con salario promedio', fontsize=20)
plt.yticks(y0_cum)
plt.xticks([])
plt.grid(axis='y')
plt.legend([bars0[0], bars0[1], bars0[2], bars1[0], bars0[3]],
           ['Autoridad', 'Docente', 'No docente', 'Presupuesto para sueldos 2018', 'Resto'])
plt.savefig('../content/img/balance_promedio.png', dpi=100, bbox_inches='tight')
plt.show()
# Approximation using maximum monthly remuneration
remuneracion_autoridades_max.append(autoridades[0] * exclusiva_max * 12)
remuneracion_autoridades_max.append(autoridades[1] * semi_exclusiva_max * 12)
remuneracion_autoridades_max.append(autoridades[2] * simple_max * 12)
remuneracion_docentes_max.append(docentes[0] * exclusiva_max * 12)
remuneracion_docentes_max.append(docentes[1] * semi_exclusiva_max * 12)
remuneracion_docentes_max.append(docentes[2] * simple_max * 12)
for i, cant in enumerate(no_docentes):
remuneracion_no_docentes_max.append(cant * cat_no_docentes_max[i] * 12)
total_autoridades = sum(remuneracion_autoridades_max)
total_docentes = sum(remuneracion_docentes_max)
total_no_docentes = sum(remuneracion_no_docentes_max)
print('Total autoridades: $', total_autoridades)
print('Total docentes: $', total_docentes)
print('Total no docentes: $', total_no_docentes)
total_sueldos = total_autoridades + total_docentes + total_no_docentes
print('Total sueldos: $', total_sueldos)
presupuesto_2018 = 1_478_000_000
presupuesto_sueldos_2018 = presupuesto_2018 * 0.85
print(f'Presupuesto sueldos 2018: $ {presupuesto_sueldos_2018}')
resto = presupuesto_sueldos_2018 - total_sueldos
print('Resto: $', resto)
colors
y0 = [total_autoridades, total_docentes, total_no_docentes, resto]
y1 = [presupuesto_sueldos_2018]
y0 = list(map(div, y0))
y1 = list(map(div, y1))
#y_millones = list(map(div, y))
#y_millones
y0_cum = np.cumsum(y0)
y0_cum_shift = np.zeros_like(y0_cum)
y0_cum_shift[1:] = y0_cum[:-1]
colors = ['b', 'g', 'orange', 'r']
bars0 = plt.bar(x=0, height=y0, width=0.7, bottom=y0_cum_shift, color=colors)
bars1 = plt.bar(x=1, height=y1, color=['purple'])
plt.xlim(-3.25, 1.5)
plt.xlabel('Balance', fontsize=14)
plt.ylabel('Monto en millones de $', fontsize=14)
plt.title('Balance de sueldos con salario máximo', fontsize=20)
plt.yticks(y0_cum)
plt.xticks([])
plt.grid(axis='y')
plt.legend([bars0[0], bars0[1], bars0[2], bars1[0], bars0[3]],
           ['Autoridad', 'Docente', 'No docente', 'Presupuesto para sueldos 2018', 'Resto'])
plt.savefig('../content/img/balance_maximo.png', dpi=100, bbox_inches='tight')
plt.show()
```
| github_jupyter |
```
# ECE 180 python project
# Global imports
import urllib2
from StringIO import StringIO
import gzip
import sys
import os
import numpy as np
import pandas as pd
import gmaps
import matplotlib.pyplot as plt
import seaborn
import itertools
import csv
%matplotlib inline
# Use this to set the env api key
# os.environ['GOOGLE_API_KEY']= API_KEY_YOU_GET_FROM_GOOGLE_AUTH
data_path = './data/'
gmaps.configure(api_key=os.getenv('GOOGLE_API_KEY'))
def populate_M_FIRE(yy1,mm1,yy2=2017,mm2=11):
'''
This function downloads and unzips monthly Active Fires CSV files from NEO global datasets in
ftp://neoftp.sci.gsfc.nasa.gov/csv/MOD14A1_M_FIRE/ for a given time interval.
If only one month-year is given, it downloads data from that month to 11-2017 (Latest available data)
:param yy1: int, start year
:param mm1: int, start month
:param yy2: int, end year
:param mm2: int, end month
For example:
yy1 = 2000
mm1 = 4
yy2 = 2000
mm2 = 6
populate_M_FIRE(yy1,mm1,yy2,mm2)
'''
assert isinstance(yy1, int) and isinstance(mm1, int)
assert isinstance(yy2, int) and isinstance(mm2, int)
assert (1 <= mm1 <= 12) and (yy1 >= 2000)
assert (1 <= mm2 <= 12) and (yy2 >= 2000)
# Local output directory
baseURL = 'ftp://neoftp.sci.gsfc.nasa.gov/csv/MOD14A1_M_FIRE/'
mm = mm1
yy = yy1
while (yy, mm) <= (yy2, mm2):  # inclusive of the end month
if mm >= 10:
fdate = '{}-{}'.format(yy, mm)
else:
fdate = '{}-0{}'.format(yy, mm)
if mm % 12 == 0:
yy = yy + 1
mm = mm % 12 + 1
filename = 'MOD14A1_M_FIRE_' + fdate + '.CSV.gz'
outFilePath = data_path + 'MOD14A1_M_FIRE_' + fdate + '.csv'
response = urllib2.urlopen(baseURL + filename)
compressedFile = StringIO()
compressedFile.write(response.read())
# Set the file's current position to the beginning of the file
# so that gzip.GzipFile can read its contents from the top.
compressedFile.seek(0)
decompressedFile = gzip.GzipFile(fileobj=compressedFile, mode='rb')
if not os.path.exists(data_path):
os.makedirs(data_path)
with open(outFilePath, 'w') as outfile:
outfile.write(decompressedFile.read())
def create_global_grid_csv():
'''
divides the global degrees (360x180) by the dimensions of the monthly Active Fires CSV files (3600x1800 pixels)
to determine the geolocation of each element (pixel)
:return:
'''
N_pixels_lon = 3600
N_pixels_lat = 1800
delta_lon = 360./N_pixels_lon
delta_lat = 180./N_pixels_lat
# create longitude vector
lon_vec = []
lon_vec.append(-180)
for ii in range(N_pixels_lon-1):
lon_vec.append(lon_vec[ii]+delta_lon)
# create latitude vector
lat_vec = []
lat_vec.append(90)
for ii in range(N_pixels_lat-1):
lat_vec.append(lat_vec[ii] - delta_lat)
return lat_vec, lon_vec
def files_to_dfs():
'''
Reads all the csv files from the /data folder and returns a dictionary of the pandas dataframes for each month
'''
pds = {}
for dirpath, dnames, fnames in os.walk("./data/"):
for f in fnames:
if f.endswith(".csv"):
pds[f] = csv_to_df(os.path.join(dirpath, f))
return pds
def csv_to_df(filename, lat1=90.0, lon1=-180.0, lat2=-90.0, lon2=180.0):
'''
Reads a csv file, converts it to a dataframe of lat-lon-mag columns, filters a particular location coordinate
and returns the dataframe
@param filename : string, represents filename
@param lat1,lat2,lon1,lon2 : float, latitudes and longitudes
dataframe for points between (lat1,lon1) and (lat2,lon2) are returned
'''
with open(filename,'r') as f:
reader=csv.reader(f)
lis=[]
for row in reader:
lis.extend(map(float,row))
lat,lon = create_global_grid_csv()
latlons = [i for i in itertools.product(lat,lon)]
# 0.1 - represents land, 99999.0 - represents water
df = pd.DataFrame([x + (y,) for x, y in zip(latlons,lis) if y not in [0.1,99999.0]],columns=('lat','lon','mag'))
df = df[(df.lat < lat1) & (df.lat > lat2)]
df = df[(df.lon > lon1) & (df.lon < lon2)]
return df
def df_to_heatmap(df):
locations = df[['lat','lon']]
weights = df['mag']
fig = gmaps.figure()
fig.add_layer(gmaps.heatmap_layer(locations, weights=weights))
return fig
df = csv_to_df('data/MOD14A1_M_FIRE_2017-01.csv')
df_to_heatmap(df)
```
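The grid construction in `create_global_grid_csv` can be cross-checked with a vectorized NumPy equivalent — a sketch assuming the same 0.1° pixel spacing starting at (-180°, 90°):

```python
import numpy as np

n_lon, n_lat = 3600, 1800
lon_vec = -180.0 + (360.0 / n_lon) * np.arange(n_lon)  # west edge of each pixel
lat_vec = 90.0 - (180.0 / n_lat) * np.arange(n_lat)    # north edge of each pixel
print(lon_vec[0], lat_vec[0])
```

These arrays match the loop-built vectors element for element, so either could feed `itertools.product` in `csv_to_df`.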
| github_jupyter |
# FBSDE
Ji, Shaolin, Shige Peng, Ying Peng, and Xichuan Zhang. “Three Algorithms for Solving High-Dimensional Fully-Coupled FBSDEs through Deep Learning.” ArXiv:1907.05327 [Cs, Math], February 2, 2020. http://arxiv.org/abs/1907.05327.
```
%load_ext tensorboard
import os
from makers.gpu_utils import *
os.environ["CUDA_VISIBLE_DEVICES"] = str(pick_gpu_lowest_memory())
import numpy as np
import tensorflow as tf
from keras.layers import Input, Dense, Lambda, Reshape, concatenate, Layer
from keras import Model, initializers
from keras.callbacks import ModelCheckpoint
from keras.metrics import mean_squared_error
import matplotlib.pyplot as plt
from datetime import datetime
from keras.metrics import mse
from keras.optimizers import Adam
print("Num GPUs Available: ", len(tf.config.list_physical_devices("GPU")))
```
# Inputs
```
# numerical parameters
n_paths = 2 ** 18
n_timesteps = 16
n_dimensions = 4
n_diffusion_factors = 2
n_jump_factors = 2
T = 10.
dt = T / n_timesteps
batch_size = 128
epochs = 1000
learning_rate = 1e-5
# model parameters
nu = 0.1
eta = 1.
zeta = 0.1
epsilon = 0.1
lp = 0.2
lm = 0.2
k = 1.
phi = 1e-2
psi = 1e-2
# coefficients
def b(t, x, y, z, r):
ad = y[2] / y[3] - x[0]
dp = tf.maximum(0., 1./k + ad)
dm = tf.maximum(0., 1./k - ad)
return [
x[1],
-eta * x[1],
lm * tf.exp(-k * dm) - lp * tf.exp(-k * dp),
lp * (x[0] + dp) * tf.exp(-k * dp) - lm * (x[0] - dm) * tf.exp(-k * dm),
]
def s(t, x, y, z, r):
return [[nu, 0], [0, zeta], [0, 0], [0, 0]]
# - dH_dx
def f(t, x, y, z, r):
ad = y[2] / y[3] - x[0]
dp = tf.maximum(0., 1./k + ad)
dm = tf.maximum(0., 1./k - ad)
return [
-(y[3] * lp * tf.exp(-k * dp) - y[3] * lm * tf.exp(-k * dm)),
-(y[0] - eta * y[1]),
-(-2. * phi * x[2]),
-(0.)
]
def v(t, x, y, z, r):
return [[0, 0], [epsilon, -epsilon], [0, 0], [0, 0]]
# dg_dx
def g(x):
return [x[2], 0., x[0] - 2 * psi * x[2], 1.]
```
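The coefficients above encode a market-making model: the agent posts quotes at half-spreads δ± around the midprice `x[0]`, orders arrive at intensities `lp`, `lm` thinned by `exp(-k·δ)`, and `x[2]`, `x[3]` track inventory and cash. A standalone NumPy sketch of just those fill dynamics, with *fixed* (not learned) quotes:

```python
import numpy as np

rng = np.random.default_rng(0)
lp = lm = 0.2                  # order-arrival intensities
k = 1.0                        # fill-probability decay
dt, n_steps = 10.0 / 16, 16
dp = dm = 1.0 / k              # hypothetical fixed half-spreads
S, q, cash = 10.0, 0.0, 0.0    # midprice, inventory, cash

for _ in range(n_steps):
    buys = rng.poisson(lm * np.exp(-k * dm) * dt)   # fills at the bid S - dm
    sells = rng.poisson(lp * np.exp(-k * dp) * dt)  # fills at the ask S + dp
    q += buys - sells
    cash += sells * (S + dp) - buys * (S - dm)

print(q, cash)
```

This mirrors components `[2]` and `[3]` of the drift `b`, which are exactly the expected rates of these inventory and cash changes.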
# Initial value layer
```
class InitialValue(Layer):
def __init__(self, y0, **kwargs):
super().__init__(**kwargs)
self.y0 = y0
def call(self, inputs):
return self.y0
```
# Model
```
def dX(t, x, y, z, r, dW, dN):
def drift(arg):
x, y, z, r = arg
return tf.math.multiply(b(t, x, y, z, r), dt)
a0 = tf.vectorized_map(drift, (x, y, z, r))
def noise(arg):
x, y, z, r, dW = arg
return tf.tensordot(s(t, x, y, z ,r), dW, [[1], [0]])
a1 = tf.vectorized_map(noise, (x, y, z, r, dW))
def jump(arg):
x, y, z, r, dN = arg
return tf.tensordot(v(t, x, y, z ,r), dN, [[1], [0]])
a2 = tf.vectorized_map(jump, (x, y, z, r, dN))
return a0 + a1 + a2
def dY(t, x, y, z, r, dW, dN):
def drift(arg):
x, y, z, r = arg
return tf.math.multiply(f(t, x, y, z, r), dt)
a0 = tf.vectorized_map(drift, (x, y, z, r))
def noise(arg):
x, y, z, r, dW = arg
return tf.tensordot(z, dW, [[1], [0]])
a1 = tf.vectorized_map(noise, (x, y, z, r, dW))
def jump(arg):
x, y, z, r, dN = arg
return tf.tensordot(r, dN, [[1], [0]])
a2 = tf.vectorized_map(jump, (x, y, z, r, dN))
return a0 + a1 + a2
paths = []
n_hidden_units = n_dimensions + n_diffusion_factors + n_jump_factors + 10
inputs_dW = Input(shape=(n_timesteps, n_diffusion_factors))
inputs_dN = Input(shape=(n_timesteps, n_jump_factors))
x0 = tf.Variable([[10., 0., 0., 0.]], trainable=False)
y0 = tf.Variable([g(x0[0])], trainable=True)
x = InitialValue(x0, name='x_0')(inputs_dW)
y = InitialValue(y0, name='y_0')(inputs_dW)
z = concatenate([x, y])
z = Dense(n_hidden_units, activation='relu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name='z1_0')(z)
z = Dense(n_dimensions * n_diffusion_factors, activation='relu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name='z2_0')(z)
z = Reshape((n_dimensions, n_diffusion_factors), name='zr_0')(z)
r = concatenate([x, y])
r = Dense(n_hidden_units, activation='relu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name='r1_0')(r)
r = Dense(n_dimensions * n_jump_factors, activation='relu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name='r2_0')(r)
r = Reshape((n_dimensions, n_jump_factors), name='rr_0')(r)
paths += [[x, y, z, r]]
# pre-compile lambda layers
@tf.function
def hx(args):
i, x, y, z, r, dW, dN = args
return x + dX(i * dt, x, y, z, r, dW, dN)
@tf.function
def hy(args):
i, x, y, z, r, dW, dN = args
return y + dY(i * dt, x, y, z, r, dW, dN)
for i in range(n_timesteps):
step = InitialValue(tf.Variable(i, dtype=tf.float32, trainable=False))(inputs_dW)
dW = Lambda(lambda x: x[0][:, tf.cast(x[1], tf.int32)])([inputs_dW, step])
dN = Lambda(lambda x: x[0][:, tf.cast(x[1], tf.int32)])([inputs_dN, step])
x, y = (
Lambda(hx, name=f'x_{i+1}')([step, x, y, z, r, dW, dN]),
Lambda(hy, name=f'y_{i+1}')([step, x, y, z, r, dW, dN]),
)
# we don't train z for the last time step; keep for consistency
z = concatenate([x, y])
z = Dense(n_hidden_units, activation='relu', name=f'z1_{i+1}')(z)
z = Dense(n_dimensions * n_diffusion_factors, activation='relu', name=f'z2_{i+1}')(z)
z = Reshape((n_dimensions, n_diffusion_factors), name=f'zr_{i+1}')(z)
# we don't train r for the last time step; keep for consistency
r = concatenate([x, y])
r = Dense(n_hidden_units, activation='relu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name=f'r1_{i+1}')(r)
r = Dense(n_dimensions * n_jump_factors, activation='relu', kernel_initializer=initializers.RandomNormal(stddev=1e-1), name=f'r2_{i+1}')(r)
r = Reshape((n_dimensions, n_jump_factors), name=f'rr_{i+1}')(r)
paths += [[x, y, z, r]]
outputs_loss = Lambda(lambda r: r[1] - tf.transpose(tf.vectorized_map(g, r[0])))([x, y])
outputs_paths = tf.stack(
[tf.stack([p[0] for p in paths[1:]], axis=1), tf.stack([p[1] for p in paths[1:]], axis=1)] +
[tf.stack([p[2][:, :, i] for p in paths[1:]], axis=1) for i in range(n_diffusion_factors)] +
[tf.stack([p[3][:, :, i] for p in paths[1:]], axis=1) for i in range(n_jump_factors)], axis=2)
adam = Adam(learning_rate=learning_rate)
model_loss = Model([inputs_dW, inputs_dN], outputs_loss)
model_loss.compile(loss='mse', optimizer=adam)
# (n_sample, n_timestep, x/y/z_k, n_dimension)
# skips the first time step
model_paths = Model([inputs_dW, inputs_dN], outputs_paths)
model_loss.summary()
```
# Transfer learning
```
# transfer weights right-to-left
model_loss.get_layer('y_0').set_weights(m_old.get_layer('y_0').get_weights())
n_small = 16
for i in range(n_small):
model_loss.get_layer(f'z1_{n_timesteps - n_small + i}').set_weights(m_old.get_layer(f'z1_{i}').get_weights())
model_loss.get_layer(f'z2_{n_timesteps - n_small + i}').set_weights(m_old.get_layer(f'z2_{i}').get_weights())
model_loss.get_layer(f'r1_{n_timesteps - n_small + i}').set_weights(m_old.get_layer(f'r1_{i}').get_weights())
model_loss.get_layer(f'r2_{n_timesteps - n_small + i}').set_weights(m_old.get_layer(f'r2_{i}').get_weights())
# try transfer learning from another starting point
model_loss.get_layer('y_0').set_weights(m_large.get_layer('y_0').get_weights())
for i in range(n_timesteps):
model_loss.get_layer(f'z1_{i}').set_weights(m_large.get_layer(f'z1_{i}').get_weights())
model_loss.get_layer(f'z2_{i}').set_weights(m_large.get_layer(f'z2_{i}').get_weights())
model_loss.get_layer(f'r1_{i}').set_weights(m_large.get_layer(f'r1_{i}').get_weights())
model_loss.get_layer(f'r2_{i}').set_weights(m_large.get_layer(f'r2_{i}').get_weights())
# transfer learning from cruder discretization
model_loss.get_layer('y_0').set_weights(m_small.get_layer('y_0').get_weights())
n_small = 4
for i in range(n_small):
for j in range(n_timesteps // n_small):
model_loss.get_layer(f'z1_{n_timesteps // n_small * i}').set_weights(m_small.get_layer(f'z1_{i}').get_weights())
model_loss.get_layer(f'z2_{n_timesteps // n_small * i}').set_weights(m_small.get_layer(f'z2_{i}').get_weights())
model_loss.get_layer(f'z1_{n_timesteps // n_small * i + j}').set_weights(m_small.get_layer(f'z1_{i}').get_weights())
model_loss.get_layer(f'z2_{n_timesteps // n_small * i + j}').set_weights(m_small.get_layer(f'z2_{i}').get_weights())
model_loss.get_layer(f'r1_{n_timesteps // n_small * i}').set_weights(m_small.get_layer(f'r1_{i}').get_weights())
model_loss.get_layer(f'r2_{n_timesteps // n_small * i}').set_weights(m_small.get_layer(f'r2_{i}').get_weights())
model_loss.get_layer(f'r1_{n_timesteps // n_small * i + j}').set_weights(m_small.get_layer(f'r1_{i}').get_weights())
model_loss.get_layer(f'r2_{n_timesteps // n_small * i + j}').set_weights(m_small.get_layer(f'r2_{i}').get_weights())
model_loss.save_weights('_models/weights0000.h5')
```
# Training
```
dW = tf.sqrt(dt) * tf.random.normal((n_paths, n_timesteps, n_diffusion_factors))
dN = tf.random.poisson((n_paths, n_timesteps), [dt * lp, dt * lm])
target = tf.zeros((n_paths, n_dimensions))
# check for exploding gradients before training
with tf.GradientTape() as tape:
loss = mse(model_loss([dW, dN]), target)
# bias of the last dense layer
variables = model_loss.variables[-1]
tape.gradient(loss, variables)
log_dir = "_logs/fit/" + datetime.now().strftime("%Y%m%d-%H%M%S")
checkpoint_callback = ModelCheckpoint('_models/weights{epoch:04d}.h5', save_weights_only=True, overwrite=True)
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
history = model_loss.fit([dW, dN], target, batch_size=batch_size, epochs=epochs, callbacks=[checkpoint_callback, tensorboard_callback])
# validate
dW_test = tf.sqrt(dt) * tf.random.normal((n_paths//8, n_timesteps, n_diffusion_factors))
dN_test = tf.random.poisson((n_paths//8, n_timesteps), [dt * lp, dt * lm])
target_test = tf.zeros((n_paths//8, n_dimensions))
model_loss.evaluate([dW_test, dN_test], target_test)
```
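The training cell above draws Euler increments ΔW ~ N(0, dt·I) and ΔN ~ Poisson(λ·dt); a quick NumPy sanity check of those scalings:

```python
import numpy as np

rng = np.random.default_rng(42)
dt, lam = 10.0 / 16, 0.2
dW = np.sqrt(dt) * rng.standard_normal(200_000)
dN = rng.poisson(lam * dt, 200_000)
# sample variance of dW should approach dt; sample mean of dN should approach lam*dt
print(dW.var(), dN.mean())
```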
# Display paths and loss
```
# load bad model
model_loss.load_weights('_models/weights0109.h5')
loss = model_loss([dW, dN]).numpy()
loss
paths = model_paths([dW, dN]).numpy()
def output(n0):
x = tf.transpose(paths[n0, :, 0, :], (1, 0))
dp = tf.maximum(0., 1./k + (paths[n0, :, 1, 2] / paths[n0, :, 1, 3] - paths[n0, :, 0, 0]))
dm = tf.maximum(0., 1./k - (paths[n0, :, 1, 2] / paths[n0, :, 1, 3] - paths[n0, :, 0, 0]))
return tf.concat([x, tf.expand_dims(dp, 0), tf.expand_dims(dm, 0)], axis=0)
for i in range(120, 140):
print(output(i))
fig, ax = plt.subplots(nrows=2, figsize=(10, 8))
out = output(502).numpy()
ax[0].set_title('midprice and d±')
ax[0].plot(out[0], c='b')
ax[0].plot(out[0] - out[5], c='r')
ax[0].plot(out[0] + out[4], c='r')
ax[1].set_title('alpha (red) and inventory (blue)')
ax[1].plot(out[1], c='r')
ax[1].twinx().plot(out[2], c='b')
# plt.plot(output(120).numpy().transpose())
```
| github_jupyter |
<a href="https://colab.research.google.com/github/JavaFXpert/qiskit4devs-workshop-notebooks/blob/master/grover_search_party.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Using Grover search for boolean satisfiability
### *Throwing a party while avoiding the drama*
Imagine you are inviting some friends to a party, some who are couples, and some who are not on speaking terms. Specifically, **Alice** and **Bob** are in a relationship, as are **Carol** and **David**. However, **Alice** and **David** had a bad breakup a while ago and haven't been civil with each other since. Armed with a quantum computer and Qiskit Aqua, how can you leverage the Grover search algorithm to identify friendly combinations of people to invite?
Fortunately, Grover search may be used for [boolean satisfiability problems](https://en.wikipedia.org/wiki/Boolean_satisfiability_problem), and the constraints for our party planning problem may be formulated with the following boolean expression:
`((A and B) or (C and D)) and not (A and D)`
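For four guests, it is easy to enumerate the satisfying assignments of this expression classically, which gives us a ground truth to compare Grover's answers against — a sketch in plain Python (A, B, C, D = Alice, Bob, Carol, David):

```python
from itertools import product

guests = ['Alice', 'Bob', 'Carol', 'David']
satisfying = [
    (A, B, C, D)
    for A, B, C, D in product([0, 1], repeat=4)
    if ((A and B) or (C and D)) and not (A and D)
]
for combo in satisfying:
    print([g for g, bit in zip(guests, combo) if bit])
```

Four of the sixteen guest combinations satisfy the constraints, so Grover should amplify exactly those basis states.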
```
# Do the necessary import for our program
#!pip install qiskit-aqua
from qiskit import BasicAer
from qiskit.aqua.algorithms import Grover
from qiskit.aqua.components.oracles import LogicalExpressionOracle, TruthTableOracle
from qiskit.tools.visualization import plot_histogram
from qiskit.compiler import transpile
```
Let's go ahead and use our expression in a Grover search to find compatible combinations of people to invite.
> Note: We'll represent `and` with `&`, `or` with `|`, `not` with `~` in our expression.
```
oracle_type = "Bit" #<-"Log" or "Bit"
#log_expr = '((A & B) | (C & D)) & ~(A & D) & (F | G)'
#log_expr = '(A & B & C)' #<- Oracle for |111>
#bitstr = '00000001'
#log_expr = '(~A & ~B & ~C)' #<- Oracle for |000>
#bitstr = '10000000'
#log_expr = '((~A & ~B & ~C) & (A & B & C))' #<- Oracle for |000> + |111>
#bitstr = '10000001'
log_expr = '(~A & B & C)' #<- Oracle for |110>
bitstr = '00000010'
if oracle_type == "Log":
    algorithm = Grover(LogicalExpressionOracle(log_expr))
else:
    algorithm = Grover(TruthTableOracle(bitstr))
circuit = algorithm.construct_circuit()
print(circuit)
```
Now we'll run the algorithm on a simulator, printing the result that occurred most often. This result is expressed as the numeric representations of our four friends, with a minus sign indicating which ones Grover advised against inviting in that particular result.
```
# Run the algorithm on a simulator, printing the most frequently occurring result
backend = BasicAer.get_backend('qasm_simulator')
result = algorithm.run(backend)
print(result['top_measurement'])
print(result['measurement'])
```
Finally, we'll plot the results. Each basis state represents our four friends, with the least significant bit representing Alice. If a bit is 1, then the advice is to invite the person that the bit represents. If the bit is 0, then Grover advises not to send an invitation.
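Decoding a measured bitstring by hand: Qiskit orders bits with qubit 0 (Alice) rightmost, so reading the string in reverse gives the per-guest verdicts — a sketch, with the guest names assumed from the scenario above:

```python
guests = ['Alice', 'Bob', 'Carol', 'David']

def invited(bitstring):
    # bitstring like '0011'; the least significant (rightmost) bit is Alice
    return [g for g, bit in zip(guests, reversed(bitstring)) if bit == '1']

print(invited('0011'))  # → ['Alice', 'Bob']
```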
```
plot_histogram(result['measurement'])
"""Test"""
bitstr_test = '1000'
oracle_test = TruthTableOracle(bitstr_test)
display(oracle_test.circuit.draw(output='mpl'))
expression_test2 = ('(~A & ~B)')
oracle_test2 = LogicalExpressionOracle(expression_test2)
display(oracle_test2.circuit.draw(output='mpl'))
from qiskit.quantum_info.operators import Operator
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, IBMQ
from qiskit.compiler import transpile
%matplotlib inline
IBMQ.load_account()
provider = IBMQ.load_account()
unitary_oracle_0 = Operator([
[1, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, -1, 0],
[0, 0, 0, -1]])
qr=QuantumRegister(2)
oracle_test3=QuantumCircuit(qr)
oracle_test3.append(unitary_oracle_0,qr)
display(oracle_test3.draw(output='mpl'))
device = provider.get_backend('ibmqx2')
trans_test = transpile(oracle_test3, device)
trans_test.draw()
# Build a Bell pair and transpile it for the device
bell2 = QuantumCircuit(2)
bell2.h(0)
bell2.cx(0, 1)
trans_bell2 = transpile(bell2, device)
trans_bell2.draw()
print("Ch 8: Running “diagnostics” with the state vector simulator")
print("-----------------------------------------------------------")
# Import the required Qiskit classes
from qiskit import(
QuantumCircuit,
execute,
Aer,
IBMQ)
# Import Blochsphere visualization
from qiskit.visualization import *
# Import some math that we will need
from math import pi
# Set numbers display options
import numpy as np
np.set_printoptions(precision=3)
# Create a function that requests and display the state vector
# Use this function as a diagnositc tool when constructing your circuits
backend = Aer.get_backend('statevector_simulator')
def s_vec(circuit):
print(circuit.n_qubits, "qubit quantum circuit:\n------------------------")
print(circuit)
psi=execute(circuit, backend).result().get_statevector(circuit)
print("State vector for the",circuit.n_qubits,"qubit circuit:\n\n",psi)
print("\nState vector as Bloch sphere.\n")
display(plot_bloch_multivector(psi))
print("\nState vector as Q sphere.")
display(iplot_state_qsphere(psi,figsize=(5,5)))
input("Press enter to continue...\n")
# One qubit states
qc = QuantumCircuit(1,1)
s_vec(qc)
qc.h(0)
s_vec(qc)
qc.rz(pi/2,0)
s_vec(qc)
# Two qubit states
qc = QuantumCircuit(2,2)
s_vec(qc)
qc.h([0])
s_vec(qc)
qc.swap(0,1)
s_vec(qc)
# Entangled qubit states
qc = QuantumCircuit(2,2)
s_vec(qc)
qc.h(0)
s_vec(qc)
qc.cx(0,1)
s_vec(qc)
qc.rz(pi/4,0)
s_vec(qc)
# Three qubit states
qc = QuantumCircuit(3,3)
s_vec(qc)
qc.h(0)
s_vec(qc)
qc.h(1)
s_vec(qc)
qc.ccx(0,1,2)
s_vec(qc)
qc.rz(pi/4,0)
s_vec(qc)
# Notice how the Bloch sphere visualization doesn't lend itself very well to displaying entangled qubits, as they cannot be thought of as individual entities. And there is no good way of displaying multiple qubits on one Bloch sphere. A better option here is the density matrix, displayed as a state city.
# Measuring entangled qubits
qc.measure([0,1],[0,1])
print("Running the",qc.n_qubits,"qubit circuit on the qasm_simulator:\n")
print(qc)
backend_count = Aer.get_backend('qasm_simulator')
counts=execute(qc, backend_count,shots=10000).result().get_counts(qc)
print("Result:\n", counts)
```
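The comments above note that the Bloch sphere cannot faithfully display entangled qubits and point to the density matrix instead. A minimal sketch, independent of Qiskit, that builds the density matrix for the Bell state prepared above and confirms entanglement from the purity of the reduced one-qubit state:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2), as prepared by h(0) then cx(0,1)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())  # full 4x4 density matrix

# Partial trace over the second qubit -> reduced state of the first qubit
rho_reduced = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Purity tr(rho^2): 1 for a pure single-qubit state, 0.5 when maximally entangled
purity = np.trace(rho_reduced @ rho_reduced).real
print(purity)  # ~0.5: the reduced state is maximally mixed, so no single Bloch vector exists
```

The reduced state comes out as the maximally mixed state I/2, which is exactly why plotting each qubit on its own Bloch sphere loses the entanglement.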
#### Now it's your turn to play!
Create and implement your own scenario that can be modeled as a boolean satisfiability problem using Grover search. Have fun with it, and carry on with your quantum computing journey!
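As a warm-up before wiring up the quantum oracle, it helps to pin down the clause logic classically. A brute-force sketch for a hypothetical 3-variable instance (the clauses below are illustrative, not taken from the chapter); the satisfying assignments are exactly the basis states a Grover oracle would mark:

```python
from itertools import product

# Hypothetical instance: (a OR b) AND (NOT b OR c) AND (NOT a OR NOT c)
def satisfies(a, b, c):
    return (a or b) and ((not b) or c) and ((not a) or (not c))

# Enumerate all 2**3 assignments; the survivors are the "marked" states
solutions = [bits for bits in product([0, 1], repeat=3) if satisfies(*bits)]
print(solutions)  # [(0, 1, 1), (1, 0, 0)]
```

With 2 solutions out of 8 states, roughly (pi/4)*sqrt(8/2) = 2 Grover iterations would be appropriate.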
# T1049 - System Network Connections Discovery
Adversaries may attempt to get a listing of network connections to or from the compromised system they are currently accessing or from remote systems by querying for information over the network.
An adversary who gains access to a system that is part of a cloud-based environment may map out Virtual Private Clouds or Virtual Networks in order to determine what systems and services are connected. The actions performed are likely the same types of discovery techniques depending on the operating system, but the resulting information may include details about the networked cloud environment relevant to the adversary's goals. Cloud providers may have different ways in which their virtual networks operate.(Citation: Amazon AWS VPC Guide)(Citation: Microsoft Azure Virtual Network Overview)(Citation: Google VPC Overview)
Utilities and commands that acquire this information include [netstat](https://attack.mitre.org/software/S0104), "net use," and "net session" with [Net](https://attack.mitre.org/software/S0039). On macOS and Linux, [netstat](https://attack.mitre.org/software/S0104) and <code>lsof</code> can be used to list current connections. <code>who -a</code> and <code>w</code> can be used to show which users are currently logged in, similar to "net session".
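On Linux, these utilities ultimately read the kernel's connection tables under `/proc/net`. A minimal sketch (Linux-only assumption; the field layout follows the procfs TCP table format) of parsing `/proc/net/tcp` the way `netstat` does:

```python
import os
import socket
import struct

def parse_proc_net_tcp(path="/proc/net/tcp"):
    """Return (local_ip, local_port, state) tuples from a procfs TCP table."""
    conns = []
    with open(path) as fh:
        next(fh)  # skip the column-header row
        for line in fh:
            fields = line.split()
            hex_ip, hex_port = fields[1].split(":")
            # the address is little-endian hex; unpack it to a dotted quad
            ip = socket.inet_ntoa(struct.pack("<I", int(hex_ip, 16)))
            conns.append((ip, int(hex_port, 16), fields[3]))
    return conns

if os.path.exists("/proc/net/tcp"):  # guard: Linux only
    for ip, port, state in parse_proc_net_tcp()[:5]:
        print(f"{ip}:{port} state={state}")
```

State `0A` is LISTEN; the output can be compared against `netstat -tan` on the same host.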
## Atomic Tests
```
# Import the Module before running the tests.
# Check out the Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
```
### Atomic Test #1 - System Network Connections Discovery
Get a listing of network connections.
Upon successful execution, cmd.exe will execute `netstat`, `net use` and `net sessions`. Results will output via stdout.
**Supported Platforms:** windows
#### Attack Commands: Run with `command_prompt`
```command_prompt
netstat
net use
net sessions
```
```
Invoke-AtomicTest T1049 -TestNumbers 1
```
### Atomic Test #2 - System Network Connections Discovery with PowerShell
Get a listing of network connections.
Upon successful execution, powershell.exe will execute `get-NetTCPConnection`. Results will output via stdout.
**Supported Platforms:** windows
#### Attack Commands: Run with `powershell`
```powershell
Get-NetTCPConnection
```
```
Invoke-AtomicTest T1049 -TestNumbers 2
```
### Atomic Test #3 - System Network Connections Discovery Linux & MacOS
Get a listing of network connections.
Upon successful execution, sh will execute `netstat` and `who -a`. Results will output via stdout.
**Supported Platforms:** linux, macos
#### Dependencies: Run with `sh`!
##### Description: Check if netstat command exists on the machine
##### Check Prereq Commands:
```sh
if [ -x "$(command -v netstat)" ]; then exit 0; else exit 1; fi;
```
##### Get Prereq Commands:
```sh
echo "Install netstat on the machine."; exit 1;
```
```
Invoke-AtomicTest T1049 -TestNumbers 3 -GetPreReqs
```
#### Attack Commands: Run with `sh`
```sh
netstat
who -a
```
```
Invoke-AtomicTest T1049 -TestNumbers 3
```
## Detection
System and network discovery techniques normally occur throughout an operation as an adversary learns the environment. Data and events should not be viewed in isolation, but as part of a chain of behavior that could lead to other activities, such as Lateral Movement, based on the information obtained.
Monitor processes and command-line arguments for actions that could be taken to gather system and network information. Remote access tools with built-in features may interact directly with the Windows API to gather information. Information may also be acquired through Windows system management tools such as [Windows Management Instrumentation](https://attack.mitre.org/techniques/T1047) and [PowerShell](https://attack.mitre.org/techniques/T1059/001).
## Shield Active Defense
### Software Manipulation
Make changes to a system's software properties and functions to achieve a desired effect.
Software Manipulation allows a defender to alter or replace elements of the operating system, file system, or any other software installed and executed on a system.
#### Opportunity
There is an opportunity for the defender to observe the adversary and control what they can see, what effects they can have, and/or what data they can access.
#### Use Case
A defender can manipulate the output of commands commonly used to enumerate a system's network connections. They could seed this output with decoy systems and/or networks or remove legitimate systems from the output in order to direct an adversary away from legitimate systems.
#### Procedures
Hook the Win32 Sleep() function so that it always performs a Sleep(1) instead of the intended duration. This can increase the speed at which dynamic analysis can be performed when a normal malicious file sleeps for long periods before attempting additional capabilities.
Hook the Win32 NetUserChangePassword() and modify it such that the new password is different from the one provided. The data passed into the function is encrypted along with the modified new password, then logged so a defender can get alerted about the change as well as decrypt the new password for use.
Alter the output of an adversary's profiling commands to make newly-built systems look like the operating system was installed months earlier.
Alter the output of adversary recon commands to not show important assets, such as a file server containing sensitive data.
```
# Statistics
import pandas as pd
import numpy as np
import math as mt
# Data Visualization
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Data Preprocessing - Standardization, Encoding, Imputation
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import Normalizer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import OrdinalEncoder
from sklearn.compose import ColumnTransformer
# Data Preprocessing - Feature Engineering
from sklearn.preprocessing import PolynomialFeatures
from sklearn.feature_selection import mutual_info_regression
from sklearn.decomposition import PCA
# Data Preprocessing - ML Pipelines
from sklearn.pipeline import Pipeline
# ML - Modeling
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor  # used by _lightgbm_reg below
from category_encoders import MEstimateEncoder  # used by _target_encoding below
# ML - Evaluation
from sklearn.model_selection import cross_val_score
# ML - Tuning
from sklearn.model_selection import GridSearchCV
class Model_Blending:
def __init__(self):
import warnings
warnings.filterwarnings('ignore')
# Import datasets
self.df_train = pd.read_csv('train_folds.csv')
#self.df_test = pd.read_csv('../input/30-days-of-ml/test.csv')
self.df_test = pd.read_csv('data/test.csv')
self.sample_submission = pd.read_csv('../input/30-days-of-ml/sample_submission.csv')
# Define features
self.num_cols = ['cont0', 'cont1', 'cont2', 'cont3', 'cont4', 'cont5', 'cont6', 'cont7', 'cont8', 'cont9', 'cont10', 'cont11', 'cont12', 'cont13']
self.onehot_cols = ['cat0', 'cat1', 'cat3', 'cat5', 'cat6', 'cat7', 'cat8'] # remove 'cat2', 'cat4' due to the low MI scores
self.ordinal_cols = ['cat9']
self.cat_cols = self.onehot_cols + self.ordinal_cols
self.useful_features = self.num_cols + self.cat_cols
self.target = 'target'
# Preprocessing solution 0
def _ordinal_encoding(self, X_train, X_valid, X_test, params=True):
# Preprocessing - Ordinal Encoding
oe = OrdinalEncoder()
X_train[self.cat_cols] = oe.fit_transform(X_train[self.cat_cols])
X_valid[self.cat_cols] = oe.transform(X_valid[self.cat_cols])
X_test[self.cat_cols] = oe.transform(X_test[self.cat_cols])
# 200
# 0.7172987346930846
# XGBoost params
xgb_params = {
'alpha': 7.128681031027614,
'lambda': 0.40760576474680843,
'gamma': 0.08704298132127238,
'reg_alpha': 25.377502919374336,
'reg_lambda': 0.003401041649454036,
'colsample_bytree': 0.1355660282707954,
'subsample': 0.6999406375783235,
'learning_rate': 0.02338550339980208,
'n_estimators': 9263,
'max_depth': 6,
'random_state': 2021,
'min_child_weight': 138
}
# 200
# 0.7174088504920006
# LightGBM params
lgb_params = {
'random_state': 0,
'num_iterations': 9530,
'learning_rate': 0.018509357813869098,
'max_depth': 6,
'num_leaves': 98,
'min_data_in_leaf': 1772,
'lambda_l1': 0.0010866230909549698,
'lambda_l2': 1.6105154171511057e-05,
'feature_fraction': 0.09911317646202211,
'bagging_fraction': 0.8840672050147438,
'bagging_freq': 6,
'min_child_samples': 35
}
if params == True:
return X_train, X_valid, X_test, xgb_params, lgb_params
else:
return X_train, X_valid, X_test
# Preprocessing solution 1
def _onehot_encoding(self, X_train, X_valid, X_test):
# Preprocessing - One-hot Encoding
ohe = OneHotEncoder(sparse=False, handle_unknown="ignore")
X_train_ohe = ohe.fit_transform(X_train[self.onehot_cols])
X_valid_ohe = ohe.transform(X_valid[self.onehot_cols])
X_test_ohe = ohe.transform(X_test[self.onehot_cols])
X_train_ohe = pd.DataFrame(X_train_ohe, columns=[f"ohe_{i}" for i in range(X_train_ohe.shape[1])])
X_valid_ohe = pd.DataFrame(X_valid_ohe, columns=[f"ohe_{i}" for i in range(X_valid_ohe.shape[1])])
X_test_ohe = pd.DataFrame(X_test_ohe, columns=[f"ohe_{i}" for i in range(X_test_ohe.shape[1])])
X_train = pd.concat([X_train.drop(columns=self.onehot_cols), X_train_ohe], axis=1)
X_valid = pd.concat([X_valid.drop(columns=self.onehot_cols), X_valid_ohe], axis=1)
X_test = pd.concat([X_test.drop(columns=self.onehot_cols), X_test_ohe], axis=1)
# Preprocessing - Ordinal Encoding
oe = OrdinalEncoder()
X_train[self.ordinal_cols] = oe.fit_transform(X_train[self.ordinal_cols])
X_valid[self.ordinal_cols] = oe.transform(X_valid[self.ordinal_cols])
X_test[self.ordinal_cols] = oe.transform(X_test[self.ordinal_cols])
# 200
# 0.7174931253475558
# XGBoost params
xgb_params = {
'alpha': 3.046687193123841,
'lambda': 0.7302844649944737,
'gamma': 0.10108768743909796,
'reg_alpha': 14.711350393993625,
'reg_lambda': 1.6855306764481926e-07,
'colsample_bytree': 0.15006790036326567,
'subsample': 0.9761751211889541,
'learning_rate': 0.02730958701307226,
'n_estimators': 7897,
'max_depth': 4,
'random_state': 0,
'min_child_weight': 203
}
# 200
# 0.7172624587909345
# LightGBM params
lgb_params = {
'random_state': 42,
'num_iterations': 6969,
'learning_rate': 0.014404708757048168,
'max_depth': 7,
'num_leaves': 21,
'min_data_in_leaf': 1121,
'lambda_l1': 4.1636932334315094e-07,
'lambda_l2': 1.0975422991510602e-08,
'feature_fraction': 0.08082581387850206,
'bagging_fraction': 0.6804475225598854,
'bagging_freq': 2,
'min_child_samples': 32
}
return X_train, X_valid, X_test, xgb_params, lgb_params
# Preprocessing solution 2
def _standardization(self, X_train, X_valid, X_test):
# Preprocessing - Standardization
scaler = StandardScaler()
X_train[self.num_cols] = scaler.fit_transform(X_train[self.num_cols])
X_valid[self.num_cols] = scaler.transform(X_valid[self.num_cols])
X_test[self.num_cols] = scaler.transform(X_test[self.num_cols])
# 200
# 0.7172152365762312
# XGBoost params
xgb_params = {
'alpha': 0.029925179326119784,
'lambda': 0.12530061860157662,
'gamma': 0.5415753114227984,
'reg_alpha': 14.992919845445886,
'reg_lambda': 0.42076728548917974,
'colsample_bytree': 0.10022710624560974,
'subsample': 0.5596856445758918,
'learning_rate': 0.020866717779139694,
'n_estimators': 6852,
'max_depth': 7,
'random_state': 2021,
'min_child_weight': 62
}
# 200
# 0.7173410652198884
# LightGBM params
lgb_params = {
'random_state': 0,
'num_iterations': 6439,
'learning_rate': 0.03625416364918611,
'max_depth': 6,
'num_leaves': 11,
'min_data_in_leaf': 745,
'lambda_l1': 4.1932281223524115e-06,
'lambda_l2': 0.043343249414638636,
'feature_fraction': 0.08623933710228435,
'bagging_fraction': 0.7934935001504152,
'bagging_freq': 3,
'min_child_samples': 23
}
return X_train, X_valid, X_test, xgb_params, lgb_params
# Preprocessing solution 3
def _log_transformation(self, X_train, X_valid, X_test):
# Preprocessing - Log transformation
for col in self.num_cols:
X_train[col] = np.log1p(X_train[col])
X_valid[col] = np.log1p(X_valid[col])
X_test[col] = np.log1p(X_test[col])
# 200
# 0.7172539872780895
# XGBoost params
xgb_params = {
'alpha': 0.08862033338686888,
'lambda': 0.003553846716302233,
'gamma': 0.4097695581309838,
'reg_alpha': 17.808150656220917,
'reg_lambda': 1.6112661145526217,
'colsample_bytree': 0.11935885763757494,
'subsample': 0.7326515814471944,
'learning_rate': 0.04006687786137418,
'n_estimators': 5239,
'max_depth': 5,
'random_state': 2021,
'min_child_weight': 258
}
# 200
# 0.7174737448879298
# LightGBM params
lgb_params = {
'random_state': 0,
'num_iterations': 7945,
'learning_rate': 0.05205269244224801,
'max_depth': 6,
'num_leaves': 9,
'min_data_in_leaf': 1070,
'lambda_l1': 1.0744924634974802e-07,
'lambda_l2': 1.1250360028635182,
'feature_fraction': 0.10421484055936374,
'bagging_fraction': 0.916143112009066,
'bagging_freq': 6,
'min_child_samples': 20
}
return X_train, X_valid, X_test, xgb_params, lgb_params
# Preprocessing solution 4
def _target_encoding(self, X_train, X_valid, X_test, y_train):
# Preprocessing - Target Encoding
te = MEstimateEncoder(cols=self.cat_cols, m=8) # m is from previous step
X_train = te.fit_transform(X_train, y_train)
X_valid = te.transform(X_valid)
X_test = te.transform(X_test)
# 300
# 0.7172617296722674
# XGBoost params
xgb_params = {
'alpha': 0.012609024116174448,
'lambda': 0.7990281671135536,
'gamma': 0.16689280834519887,
'reg_alpha': 16.48576968441873,
'reg_lambda': 4.83082534682402e-08,
'colsample_bytree': 0.1162304168345657,
'subsample': 0.9126362948665406,
'learning_rate': 0.05528416190414117,
'n_estimators': 9670,
'max_depth': 5,
'random_state': 42,
'min_child_weight': 280
}
# 200
# 0.7173917173794985
# LightGBM params
lgb_params = {
'random_state': 2021,
'num_iterations': 7977,
'learning_rate': 0.01618931564625682,
'max_depth': 5,
'num_leaves': 50,
'min_data_in_leaf': 890,
'lambda_l1': 0.003233614433753064,
'lambda_l2': 2.0001872037801434e-06,
'feature_fraction': 0.13638848986185334,
'bagging_fraction': 0.7045068716734475,
'bagging_freq': 2,
'min_child_samples': 79
}
return X_train, X_valid, X_test, xgb_params, lgb_params
def _xgboost_reg(self, xgb_params):
model = XGBRegressor(
tree_method='gpu_hist',
gpu_id=0,
predictor='gpu_predictor',
n_jobs=-1,
**xgb_params
)
return model
def _lightgbm_reg(self, lgb_params):
model = LGBMRegressor(
device='gpu',
gpu_platform_id=0,
gpu_device_id=0,
n_jobs=-1,
metric='rmse',
**lgb_params
)
return model
def blending(self, model: str):
'''Model blending. Generate 5 predictions according to 5 data preprocessing solutions.
Args:
model: One of xgboost or lightgbm
Returns:
None
'''
assert model in ['xgboost', 'lightgbm'], "ValueError: model must be one of ['xgboost', 'lightgbm']!"
# Loop preprocessing solutions
for preprocessing_solution in range(5):
final_valid_predictions = {} # store final predictions of X_valid for each preprocessing_solution
final_test_predictions = [] # store final predictions of X_test for each preprocessing_solution
scores = [] # store RMSE scores for each preprocessing_solution
print(f"Data Preprocessing Solution: {preprocessing_solution}, Model: {model}")
print(f"Training ...")
# Loop KFolds
for fold in range(5):
# Data Preprocessing
X_train = self.df_train[self.df_train.kfold != fold].reset_index(drop=True)
X_valid = self.df_train[self.df_train.kfold == fold].reset_index(drop=True)
X_test = self.df_test.copy()
# get X_valid id
X_valid_ids = X_valid.id.values.tolist()
y_train = X_train.pop(self.target)
X_train = X_train[self.useful_features] # not include id, cat2, cat4
y_valid = X_valid.pop(self.target)
X_valid = X_valid[self.useful_features] # not include id, cat2, cat4
X_test = X_test[self.useful_features]
# Ordinal Encoding
if preprocessing_solution == 0:
X_train, X_valid, X_test, xgb_params, lgb_params = self._ordinal_encoding(X_train, X_valid, X_test)
# One-hot Encoding + Ordinal Encoding
elif preprocessing_solution == 1:
X_train, X_valid, X_test, xgb_params, lgb_params = self._onehot_encoding(X_train, X_valid, X_test)
# Ordinal Encoding + Standardization
elif preprocessing_solution == 2:
X_train, X_valid, X_test = self._ordinal_encoding(X_train, X_valid, X_test, params=False)
X_train, X_valid, X_test, xgb_params, lgb_params = self._standardization(X_train, X_valid, X_test)
# Ordinal Encoding + Log Transformation
elif preprocessing_solution == 3:
X_train, X_valid, X_test = self._ordinal_encoding(X_train, X_valid, X_test, params=False)
X_train, X_valid, X_test, xgb_params, lgb_params = self._log_transformation(X_train, X_valid, X_test)
# Target Encoding
elif preprocessing_solution == 4:
X_train, X_valid, X_test, xgb_params, lgb_params = self._target_encoding(X_train, X_valid, X_test, y_train)
# Define model
if model == 'xgboost':
reg = self._xgboost_reg(xgb_params)
elif model == 'lightgbm':
reg = self._lightgbm_reg(lgb_params)
# Modeling - Training
reg.fit(
X_train, y_train,
early_stopping_rounds=300,
eval_set=[(X_valid, y_valid)],
verbose=False
)
# Modeling - Inference
valid_preds = reg.predict(X_valid)
test_preds = reg.predict(X_test)
final_valid_predictions.update(dict(zip(X_valid_ids, valid_preds))) # loop 5 times with different valid id
final_test_predictions.append(test_preds) # loop 5 times and get the mean predictions for each row later
rmse = mean_squared_error(y_valid, valid_preds, squared=False)
scores.append(rmse)
print(f'Data Preprocessing Solution: {preprocessing_solution}, Fold: {fold}, RMSE: {rmse}')
# Export results
final_valid_predictions = pd.DataFrame.from_dict(final_valid_predictions, orient="index").reset_index()
final_valid_predictions.columns = ["id", f"{model}_{preprocessing_solution}_pred"]
final_valid_predictions.to_csv(f"{model}_{preprocessing_solution}_valid_pred.csv", index=False)
test_mean_preds = np.mean(np.column_stack(final_test_predictions), axis=1) # get the mean predictions for each row
test_mean_preds = pd.DataFrame({'id': self.sample_submission.id, f"{model}_{preprocessing_solution}_pred": test_mean_preds})
test_mean_preds.to_csv(f"{model}_{preprocessing_solution}_test_pred.csv", index=False)
print(f'Average RMSE: {np.mean(scores)}, STD of RMSE: {np.std(scores)}')
print('-----------------------------------------------------------------')
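# Aside: the blending mechanics above reduce to column-wise averaging of the
# per-fold test predictions. Tiny self-contained illustration with synthetic
# numbers (import repeated so the aside stands alone):
import numpy as np
_demo_folds = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
_demo_mean = np.mean(np.column_stack(_demo_folds), axis=1)  # -> array([2., 3.])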
%%time
# Model 2 polynominal features
train_data = pd.read_csv('../input/30days-folds/train_folds.csv')
test_data = pd.read_csv('../input/30-days-of-ml/test.csv')
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if "cat" in col]
num_cols = [col for col in useful_features if col.startswith("cont")]
test_data = test_data[useful_features]
poly = PolynomialFeatures(degree=3,
interaction_only=True,  # only interaction terms: products of distinct features (no x[1]**2, x[0]*x[2]**3, etc.)
include_bias=False)
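# Aside: with degree=3 and interaction_only=True, two inputs (x0, x1) expand to
# [x0, x1, x0*x1]; powers of a single feature (x0**2 etc.) are excluded.
# Tiny self-check with synthetic values (imports repeated so the aside stands alone):
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
_demo_poly = PolynomialFeatures(degree=3, interaction_only=True, include_bias=False)
_demo_out = _demo_poly.fit_transform(np.array([[2.0, 3.0]]))  # -> [[2., 3., 6.]]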
train_poly = poly.fit_transform(train_data[num_cols])
test_poly = poly.transform(test_data[num_cols])  # transform only; the fit on train data already fixed the feature set
df_train_poly = pd.DataFrame(train_poly, columns=[f"poly_{i}" for i in range(train_poly.shape[1])])
df_test_poly = pd.DataFrame(test_poly, columns=[f"poly_{i}" for i in range(test_poly.shape[1])])
train_data = pd.concat([train_data, df_train_poly], axis=1)
test_data = pd.concat([test_data, df_test_poly], axis=1)
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if "cat" in col]
test_data = test_data[useful_features]
final_valid_predictions = {}
final_test_predictions = []
scores = []
for fold in range(5):
# Preprocessing
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
X_valid_ids = X_valid.id.values.tolist()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
# Ordinal Encoding
ordinal_encoder = OrdinalEncoder()
X_train[cat_cols] = ordinal_encoder.fit_transform(X_train[cat_cols])
X_valid[cat_cols] = ordinal_encoder.transform(X_valid[cat_cols])
X_test[cat_cols] = ordinal_encoder.transform(X_test[cat_cols]) # Q. The last transform
# Training
params = {
'random_state': 1,
'booster': 'gbtree',
'n_estimators': 10000,
'learning_rate': 0.03628302216953097,
'reg_lambda': 0.0008746338866473539,
'reg_alpha': 23.13181079976304,
'subsample': 0.7875490025178415,
'colsample_bytree': 0.11807135201147481,
'max_depth': 3
}
model = XGBRegressor(
tree_method='gpu_hist',
gpu_id=0,
predictor='gpu_predictor',
**params
)
model.fit(X_train, y_train, early_stopping_rounds=300, eval_set=[(X_valid, y_valid)], verbose=1000)
# Evaluation and Inference
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_valid_predictions.update(dict(zip(X_valid_ids, preds_valid)))
final_test_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
final_valid_predictions = pd.DataFrame.from_dict(final_valid_predictions, orient="index").reset_index()
final_valid_predictions.columns = ["id", "pred_2"]
final_valid_predictions.to_csv("train_pred_2.csv", index=False)
preds = np.mean(np.column_stack(final_test_predictions), axis=1)
preds = pd.DataFrame({'id': sample_submission.id, 'pred_2': preds})
preds.to_csv("test_pred_2.csv", index=False)
%%time
# Model 3 targeting encoding
train_data = pd.read_csv('../input/30days-folds/train_folds.csv')
test_data = pd.read_csv('../input/30-days-of-ml/test.csv')
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if "cat" in col]
test_data = test_data[useful_features]
for col in cat_cols:
temp_df = []
temp_test_feat = None
for fold in range(5):
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
feat = X_train.groupby(col)["target"].agg("mean")
feat = feat.to_dict()
#print(feat)
X_valid.loc[:, f"tar_enc_{col}"] = X_valid[col].map(feat)
temp_df.append(X_valid)
if temp_test_feat is None:
temp_test_feat = test_data[col].map(feat)
else:
temp_test_feat += test_data[col].map(feat)
temp_test_feat /= 5
test_data.loc[:, f"tar_enc_{col}"] = temp_test_feat
train_data = pd.concat(temp_df)
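# Aside: each category is mapped to the mean target computed on the training folds
# only (out-of-fold target encoding, so no row sees its own target value).
# Tiny synthetic illustration (import repeated so the aside stands alone):
import pandas as pd
_demo_enc = pd.DataFrame({"cat": ["a", "a", "b"], "target": [1.0, 3.0, 10.0]})
_demo_map = _demo_enc.groupby("cat")["target"].mean().to_dict()  # {'a': 2.0, 'b': 10.0}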
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if col.startswith("cat")]
test_data = test_data[useful_features]
final_valid_predictions = {}
final_test_predictions = []
scores = []
for fold in range(5):
# Preprocessing
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
X_valid_ids = X_valid.id.values.tolist()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
# Ordinal Encoding
ordinal_encoder = OrdinalEncoder()
X_train[cat_cols] = ordinal_encoder.fit_transform(X_train[cat_cols])
X_valid[cat_cols] = ordinal_encoder.transform(X_valid[cat_cols])
X_test[cat_cols] = ordinal_encoder.transform(X_test[cat_cols]) # Q. The last transform
# Training
params = {
'random_state': 1,
'booster': 'gbtree',
'n_estimators': 10000,
'learning_rate': 0.03628302216953097,
'reg_lambda': 0.0008746338866473539,
'reg_alpha': 23.13181079976304,
'subsample': 0.7875490025178415,
'colsample_bytree': 0.11807135201147481,
'max_depth': 3
}
model = XGBRegressor(
tree_method='gpu_hist',
gpu_id=0,
predictor='gpu_predictor',
**params
)
model.fit(X_train, y_train, early_stopping_rounds=300, eval_set=[(X_valid, y_valid)], verbose=1000)
# Evaluation and Inference
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_valid_predictions.update(dict(zip(X_valid_ids, preds_valid)))
final_test_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
final_valid_predictions = pd.DataFrame.from_dict(final_valid_predictions, orient="index").reset_index()
final_valid_predictions.columns = ["id", "pred_3"]
final_valid_predictions.to_csv("train_pred_3.csv", index=False)
preds = np.mean(np.column_stack(final_test_predictions), axis=1)
preds = pd.DataFrame({'id': sample_submission.id, 'pred_3': preds})
preds.to_csv("test_pred_3.csv", index=False)
train_data = pd.read_csv('../input/30days-folds/train_folds.csv')
test_data = pd.read_csv('../input/30-days-of-ml/test.csv')
df_train1 = pd.read_csv('train_pred_1.csv')
df_train2 = pd.read_csv('train_pred_2.csv')
df_train3 = pd.read_csv('train_pred_3.csv')
df_test1 = pd.read_csv('test_pred_1.csv')
df_test2 = pd.read_csv('test_pred_2.csv')
df_test3 = pd.read_csv('test_pred_3.csv')
train_data = train_data.merge(df_train1, on="id", how="left")
train_data = train_data.merge(df_train2, on="id", how="left")
train_data = train_data.merge(df_train3, on="id", how="left")
test_data = test_data.merge(df_test1, on="id", how="left")
test_data = test_data.merge(df_test2, on="id", how="left")
test_data = test_data.merge(df_test3, on="id", how="left")
train_data
test_data
from sklearn.linear_model import LinearRegression
useful_features = ["pred_1", "pred_2", "pred_3"]
test_data = test_data[useful_features]
final_predictions = []
scores = []
for fold in range(5):
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
model = LinearRegression()
model.fit(X_train, y_train)
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
# Export submission.csv
preds = np.mean(np.column_stack(final_predictions), axis=1)
preds = pd.DataFrame({'id': sample_submission.id, 'target': preds})
preds.to_csv('submission.csv', index=False)
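# Aside: the RMSE reported throughout is mean_squared_error(..., squared=False),
# i.e. sqrt(mean((y - yhat)**2)). Quick self-check with toy values:
import numpy as np
_demo_rmse = np.sqrt(np.mean((np.array([3.0, 5.0]) - np.array([2.0, 6.0])) ** 2))  # -> 1.0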
%%time
# With Standardization
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if "cat" in col]
num_cols = [col for col in useful_features if col.startswith('cont')]
test_data = test_data[useful_features]
final_predictions = []
scores = []
for fold in range(5):
# Preprocessing - Kfold
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
# Preprocessing - Ordinal Encoding
ordinal_encoder = OrdinalEncoder()
X_train[cat_cols] = ordinal_encoder.fit_transform(X_train[cat_cols])
X_valid[cat_cols] = ordinal_encoder.transform(X_valid[cat_cols])
X_test[cat_cols] = ordinal_encoder.transform(X_test[cat_cols]) # Q. The last transform
# Preprocessing - Standardization
scaler = StandardScaler()
X_train[num_cols] = scaler.fit_transform(X_train[num_cols])
X_valid[num_cols] = scaler.transform(X_valid[num_cols])
X_test[num_cols] = scaler.transform(X_test[num_cols]) # Q. The last transform
# Training
#model = RandomForestRegressor(random_state=fold, n_jobs=-1)
#model = XGBRegressor(random_state=fold, n_jobs=8)
model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor')
model.fit(X_train, y_train)
# Evaluation
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
%%time
# With Normalization
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if "cat" in col]
num_cols = [col for col in useful_features if col.startswith('cont')]
test_data = test_data[useful_features]
final_predictions = []
scores = []
for fold in range(5):
# Preprocessing - Kfold
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
# Preprocessing - Ordinal Encoding
ordinal_encoder = OrdinalEncoder()
X_train[cat_cols] = ordinal_encoder.fit_transform(X_train[cat_cols])
X_valid[cat_cols] = ordinal_encoder.transform(X_valid[cat_cols])
X_test[cat_cols] = ordinal_encoder.transform(X_test[cat_cols]) # Q. The last transform
# Preprocessing - Normalization
normalizer = Normalizer()
X_train[num_cols] = normalizer.fit_transform(X_train[num_cols])
X_valid[num_cols] = normalizer.transform(X_valid[num_cols])
X_test[num_cols] = normalizer.transform(X_test[num_cols]) # Q. The last transform
# Training
#model = RandomForestRegressor(random_state=fold, n_jobs=-1)
#model = XGBRegressor(random_state=fold, n_jobs=8)
model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor')
model.fit(X_train, y_train)
# Evaluation
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
%%time
# With Standardization + Normalization
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if "cat" in col]
num_cols = [col for col in useful_features if col.startswith('cont')]
test_data = test_data[useful_features]
final_predictions = []
scores = []
for fold in range(5):
# Preprocessing - Kfold
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
# Preprocessing - Ordinal Encoding
ordinal_encoder = OrdinalEncoder()
X_train[cat_cols] = ordinal_encoder.fit_transform(X_train[cat_cols])
X_valid[cat_cols] = ordinal_encoder.transform(X_valid[cat_cols])
X_test[cat_cols] = ordinal_encoder.transform(X_test[cat_cols]) # Q. The last transform
# Preprocessing - Standardization
scaler = StandardScaler()
X_train[num_cols] = scaler.fit_transform(X_train[num_cols])
X_valid[num_cols] = scaler.transform(X_valid[num_cols])
X_test[num_cols] = scaler.transform(X_test[num_cols]) # Q. The last transform
# Preprocessing - Normalization
normalizer = Normalizer()
X_train[num_cols] = normalizer.fit_transform(X_train[num_cols])
X_valid[num_cols] = normalizer.transform(X_valid[num_cols])
X_test[num_cols] = normalizer.transform(X_test[num_cols]) # Q. The last transform
# Training
#model = RandomForestRegressor(random_state=fold, n_jobs=-1)
#model = XGBRegressor(random_state=fold, n_jobs=8)
model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor')
model.fit(X_train, y_train)
# Evaluation
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
%%time
# With Standardization
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if "cat" in col]
num_cols = [col for col in useful_features if col.startswith('cont')]
test_data = test_data[useful_features]
final_predictions = []
scores = []
for fold in range(5):
# Preprocessing - Kfold
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
# Preprocessing - Ordinal Encoding
ordinal_encoder = OrdinalEncoder()
X_train[cat_cols] = ordinal_encoder.fit_transform(X_train[cat_cols])
X_valid[cat_cols] = ordinal_encoder.transform(X_valid[cat_cols])
X_test[cat_cols] = ordinal_encoder.transform(X_test[cat_cols]) # reuse the encoder fitted on this fold's training split
# Preprocessing - Standardization
scaler = StandardScaler()
X_train[num_cols] = scaler.fit_transform(X_train[num_cols])
X_valid[num_cols] = scaler.transform(X_valid[num_cols])
X_test[num_cols] = scaler.transform(X_test[num_cols]) # reuse the scaler fitted on this fold's training split
# Training
#model = RandomForestRegressor(random_state=fold, n_jobs=-1)
#model = XGBRegressor(random_state=fold, n_jobs=8)
model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor')
model.fit(X_train, y_train)
# Evaluation
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
%%time
# Log transformation + Tuning
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if "cat" in col]
num_cols = [col for col in useful_features if col.startswith('cont')]
test_data = test_data[useful_features]
for col in num_cols:
train_data[col] = np.log1p(train_data[col])
test_data[col] = np.log1p(test_data[col])
final_predictions = []
scores = []
for fold in range(5):
# Preprocessing - Kfold
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
# Preprocessing - Ordinal Encoding
ordinal_encoder = OrdinalEncoder()
X_train[cat_cols] = ordinal_encoder.fit_transform(X_train[cat_cols])
X_valid[cat_cols] = ordinal_encoder.transform(X_valid[cat_cols])
X_test[cat_cols] = ordinal_encoder.transform(X_test[cat_cols]) # reuse the encoder fitted on this fold's training split
# Training
#model = RandomForestRegressor(random_state=fold, n_jobs=-1)
#model = XGBRegressor(random_state=fold, n_jobs=8)
model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor',
learning_rate=0.1, n_estimators=1000, max_depth=3, colsample_bytree=0.3)
model.fit(X_train, y_train)
# Evaluation
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
print('You need to reset the dataframes before the next experiment!')
%%time
# polynomial features + Tuning
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if "cat" in col]
num_cols = [col for col in useful_features if col.startswith('cont')]
test_data = test_data[useful_features]
poly = PolynomialFeatures(degree=2,
interaction_only=True, # only interaction terms (products of distinct features; no powers like x[0]**2)
include_bias=False)
train_poly = poly.fit_transform(train_data[num_cols])
test_poly = poly.transform(test_data[num_cols]) # transform only; the expander was fitted on the training data
df_train_poly = pd.DataFrame(train_poly, columns=[f"poly_{i}" for i in range(train_poly.shape[1])])
df_test_poly = pd.DataFrame(test_poly, columns=[f"poly_{i}" for i in range(test_poly.shape[1])])
train_data = pd.concat([train_data, df_train_poly], axis=1)
test_data = pd.concat([test_data, df_test_poly], axis=1)
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if "cat" in col]
test_data = test_data[useful_features]
final_predictions = []
scores = []
for fold in range(5):
# Preprocessing - Kfold
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
# Preprocessing - Ordinal Encoding
ordinal_encoder = OrdinalEncoder()
X_train[cat_cols] = ordinal_encoder.fit_transform(X_train[cat_cols])
X_valid[cat_cols] = ordinal_encoder.transform(X_valid[cat_cols])
X_test[cat_cols] = ordinal_encoder.transform(X_test[cat_cols]) # reuse the encoder fitted on this fold's training split
# Training
#model = RandomForestRegressor(random_state=fold, n_jobs=-1)
#model = XGBRegressor(random_state=fold, n_jobs=8)
model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor',
learning_rate=0.1, n_estimators=1000, max_depth=3, colsample_bytree=0.3)
model.fit(X_train, y_train)
# Evaluation
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
print('You need to reset the dataframes before the next experiment!')
test_data
%%time
# One-Hot Encoding + Ordinal Encoding + Tuning
# pd.cut
# Model Tuning + drop cat2
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if "cat" in col]
oe_cols = ['cat9']
ohe_cols = [col for col in cat_cols if col != 'cat9'] # build a copy so cat_cols itself is not mutated
num_cols = [col for col in useful_features if col.startswith('cont')]
test_data = test_data[useful_features]
final_predictions = []
scores = []
for fold in range(5):
# Preprocessing - Kfold
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
# Preprocessing - Ordinal Encoding
ordinal_encoder = OrdinalEncoder()
X_train[oe_cols] = ordinal_encoder.fit_transform(X_train[oe_cols])
X_valid[oe_cols] = ordinal_encoder.transform(X_valid[oe_cols])
X_test[oe_cols] = ordinal_encoder.transform(X_test[oe_cols]) # reuse the encoder fitted on this fold's training split
# Preprocessing - One-Hot Encoding
ohe = OneHotEncoder(sparse=False, handle_unknown="ignore")
X_train_ohe = ohe.fit_transform(X_train[ohe_cols])
X_valid_ohe = ohe.transform(X_valid[ohe_cols])
X_test_ohe = ohe.transform(X_test[ohe_cols]) # reuse the one-hot encoder fitted on this fold's training split
X_train_ohe = pd.DataFrame(X_train_ohe, columns=[f"ohe_{i}" for i in range(X_train_ohe.shape[1])])
X_valid_ohe = pd.DataFrame(X_valid_ohe, columns=[f"ohe_{i}" for i in range(X_valid_ohe.shape[1])])
X_test_ohe = pd.DataFrame(X_test_ohe, columns=[f"ohe_{i}" for i in range(X_test_ohe.shape[1])])
X_train = pd.concat([X_train.drop(columns=ohe_cols), X_train_ohe], axis=1)
X_valid = pd.concat([X_valid.drop(columns=ohe_cols), X_valid_ohe], axis=1)
X_test = pd.concat([X_test.drop(columns=ohe_cols), X_test_ohe], axis=1)
# Training
#model = RandomForestRegressor(random_state=fold, n_jobs=-1)
#model = XGBRegressor(random_state=fold, n_jobs=8)
model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor',
learning_rate=0.1, n_estimators=1000, max_depth=3, colsample_bytree=0.3)
model.fit(X_train, y_train)
# Evaluation
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
print('You need to reset the dataframes before the next experiment!')
%%time
# Model Tuning + drop cat2
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold", "cat2")]
cat_cols = [col for col in useful_features if "cat" in col]
num_cols = [col for col in useful_features if col.startswith('cont')]
test_data = test_data[useful_features]
final_predictions = []
scores = []
for fold in range(5):
# Preprocessing - Kfold
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
# Preprocessing - Ordinal Encoding
ordinal_encoder = OrdinalEncoder()
X_train[cat_cols] = ordinal_encoder.fit_transform(X_train[cat_cols])
X_valid[cat_cols] = ordinal_encoder.transform(X_valid[cat_cols])
X_test[cat_cols] = ordinal_encoder.transform(X_test[cat_cols]) # reuse the encoder fitted on this fold's training split
# Training
#model = RandomForestRegressor(random_state=fold, n_jobs=-1)
#model = XGBRegressor(random_state=fold, n_jobs=8)
#model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor')
model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor',
learning_rate=0.1, n_estimators=1000, max_depth=3, colsample_bytree=0.3)
model.fit(X_train, y_train)
# Evaluation
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
print('You need to reset the dataframes before the next experiment!')
%%time
# Model Tuning + drop cat2, cat6
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold", "cat2", "cat6")]
cat_cols = [col for col in useful_features if "cat" in col]
num_cols = [col for col in useful_features if col.startswith('cont')]
test_data = test_data[useful_features]
final_predictions = []
scores = []
for fold in range(5):
# Preprocessing - Kfold
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
# Preprocessing - Ordinal Encoding
ordinal_encoder = OrdinalEncoder()
X_train[cat_cols] = ordinal_encoder.fit_transform(X_train[cat_cols])
X_valid[cat_cols] = ordinal_encoder.transform(X_valid[cat_cols])
X_test[cat_cols] = ordinal_encoder.transform(X_test[cat_cols]) # reuse the encoder fitted on this fold's training split
# Training
#model = RandomForestRegressor(random_state=fold, n_jobs=-1)
#model = XGBRegressor(random_state=fold, n_jobs=8)
#model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor')
model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor',
learning_rate=0.1, n_estimators=1000, max_depth=3, colsample_bytree=0.3)
model.fit(X_train, y_train)
# Evaluation
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
print('You need to reset the dataframes before the next experiment!')
%%time
# Tuning + Standardization
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if "cat" in col]
num_cols = [col for col in useful_features if col.startswith('cont')]
test_data = test_data[useful_features]
final_predictions = []
scores = []
for fold in range(5):
# Preprocessing - Kfold
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
# Preprocessing - Ordinal Encoding
ordinal_encoder = OrdinalEncoder()
X_train[cat_cols] = ordinal_encoder.fit_transform(X_train[cat_cols])
X_valid[cat_cols] = ordinal_encoder.transform(X_valid[cat_cols])
X_test[cat_cols] = ordinal_encoder.transform(X_test[cat_cols]) # reuse the encoder fitted on this fold's training split
# Preprocessing - Standardization
scaler = StandardScaler()
X_train[num_cols] = scaler.fit_transform(X_train[num_cols])
X_valid[num_cols] = scaler.transform(X_valid[num_cols])
X_test[num_cols] = scaler.transform(X_test[num_cols]) # reuse the scaler fitted on this fold's training split
# Training
#model = RandomForestRegressor(random_state=fold, n_jobs=-1)
#model = XGBRegressor(random_state=fold, n_jobs=8)
#model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor')
model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor',
learning_rate=0.1, n_estimators=1000, max_depth=3, colsample_bytree=0.3)
model.fit(X_train, y_train)
# Evaluation
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
%%time
# Only Model Tuning
useful_features = [col for col in train_data.columns if col not in ("id", "target", "kfold")]
cat_cols = [col for col in useful_features if "cat" in col]
num_cols = [col for col in useful_features if col.startswith('cont')]
test_data = test_data[useful_features]
final_predictions = []
scores = []
for fold in range(5):
# Preprocessing - Kfold
X_train = train_data[train_data.kfold != fold].reset_index(drop=True)
X_valid = train_data[train_data.kfold == fold].reset_index(drop=True)
X_test = test_data.copy()
y_train = X_train.target
y_valid = X_valid.target
X_train = X_train[useful_features]
X_valid = X_valid[useful_features]
# Preprocessing - Ordinal Encoding
ordinal_encoder = OrdinalEncoder()
X_train[cat_cols] = ordinal_encoder.fit_transform(X_train[cat_cols])
X_valid[cat_cols] = ordinal_encoder.transform(X_valid[cat_cols])
X_test[cat_cols] = ordinal_encoder.transform(X_test[cat_cols]) # reuse the encoder fitted on this fold's training split
# Training
#model = RandomForestRegressor(random_state=fold, n_jobs=-1)
#model = XGBRegressor(random_state=fold, n_jobs=8)
model = XGBRegressor(random_state=fold, tree_method='gpu_hist', gpu_id=0, predictor='gpu_predictor',
learning_rate=0.1, n_estimators=1000, max_depth=3, colsample_bytree=0.3)
model.fit(X_train, y_train)
# Evaluation
preds_valid = model.predict(X_valid)
test_preds = model.predict(X_test)
final_predictions.append(test_preds)
rmse = mean_squared_error(y_valid, preds_valid, squared=False)
print(fold, rmse)
scores.append(rmse)
print(np.mean(scores), np.std(scores))
# Export submission.csv
preds = np.mean(np.column_stack(final_predictions), axis=1)
preds = pd.DataFrame({'id': sample_submission.id, 'target': preds})
preds.to_csv('submission.csv', index=False)
```
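The submission step above averages the five per-fold test predictions column-wise. A minimal NumPy sketch of what `np.mean(np.column_stack(final_predictions), axis=1)` computes, with illustrative numbers in place of real fold predictions:

```python
import numpy as np

# Pretend three folds each produced predictions for four test rows.
final_predictions = [
    np.array([8.0, 7.9, 8.2, 8.1]),
    np.array([8.2, 7.7, 8.0, 8.3]),
    np.array([8.1, 7.8, 8.1, 8.2]),
]

# column_stack gives shape (n_rows, n_folds); the mean over axis=1
# blends the folds into one prediction per test row.
preds = np.mean(np.column_stack(final_predictions), axis=1)
print(preds)  # → [8.1 7.8 8.1 8.2]
```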
<img src="https://raw.githubusercontent.com/Qiskit/qiskit-tutorials/master/images/qiskit-heading.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# _*Quantum Tic-Tac-Toe*_
The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.
***
### Contributors
[Maor Ben-Shahar](https://github.com/MA0R)
***
An example run of quantum Tic-Tac-Toe is provided below, with explanations of the game workings following after. Despite the ability to superimpose moves, a winning strategy still exists for both players (meaning the game will be a draw if both implement it). See if you can work it out.
```
#Import the game!
import sys
sys.path.append('game_engines')
from q_tic_tac_toe import Board
#inputs are (X,Y,print_info).
#X,Y are the dimensions of the board. The print_info boolean controls whether to print instructions at game launch.
#Since it is our first time playing, let's set it to True and see the instructions!
B = Board(3,3,True)
B.run()
```
When playing the game, the two players are asked in turn whether to make a classical move (one cell) or a quantum move (one or, for now, at most two cells). Any move can lead to several scenarios; they are explained below. The terminology used:
- Each turn a "move" is made
- Each move consists of one or two "cells", the location(s) where the move is made. It is a superposition of classical moves.
Quantum moves are restricted to two cells only because they require an increasing number of qubits, which is slow to simulate.
## One move on an empty cell
This is the simplest move, it is a "classical" move. The game registers this move as a set of coordinates, and the player who made the move. No qubits are used here.
It is registered as such:
`Play in one or two cells?1
x index: 0
y index: 0`
First the player is asked how many cells to play in (we chose 1, a classical move). The game then asks for the indices of the move.
And the board registers it as
`
[['O1','',''],
['','',''],
['','','']]
`
This move is *always* present at the end of the game.
## Two-cell moves in empty cells
This is a quantum move, the game stores a move that is in a superposition of being played at *two* cells. Ordered coordinates for the two cells to be occupied need to be provided. A row in the board with a superposition move would look like so
`[X1,X1,'']`
Two qubits were used to register this move. They are in the state $|10>+|01>$: if the first qubit is measured to be 1, the board becomes `[X1,'','']`, and vice versa. Why not record this with just one qubit? We could, by putting a single qubit into the state $|0>+|1>$, but that is not implemented yet; the two-qubit method is consistent with the later types of quantum moves.
Let us see this in action:
```
B = Board(3,3)
B.run()
```
The game outcome is almost 50% in each cell, as we would expect. There is a redundant bit at the end of the bit code (to be removed soon!). Also note that the bit strings are in the reverse order of what we write here, because the quantum register in qiskit orders positions as $|q_n,...,q_0>$.
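The anti-correlation of the two cells can be sanity-checked without the game engine. A minimal NumPy sketch (the direct sampling here is illustrative, not how the engine measures) showing that the state $|10>+|01>$ only ever yields anti-correlated outcomes:

```python
import numpy as np

# Basis ordering for two qubits: index 0b01 is |01>, index 0b10 is |10>.
state = np.zeros(4)
state[0b01] = 1 / np.sqrt(2)
state[0b10] = 1 / np.sqrt(2)

# Born rule: measurement probabilities are squared amplitude magnitudes.
probs = np.abs(state) ** 2

rng = np.random.default_rng(0)
samples = rng.choice(4, size=1000, p=probs)

# Only |01> and |10> ever occur: the two cells are perfectly anti-correlated,
# so measuring a 1 in one qubit forces a 0 in the other.
assert set(samples) <= {0b01, 0b10}
counts = {f"{s:02b}": int(np.sum(samples == s)) for s in sorted(set(samples))}
print(counts)  # roughly 500 each
```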
## One-cell move played in a maybe-occupied cell
It is possible that after the game is in state `[X1,X1,'']` one would definitely want to make a move at position (0,0). This could be when the game is completely full perhaps, since it is not a very good strategy. Such a move can be erased from the game history! Let us see how it is recorded. The first row of the board is now
`[X1 O2,X1,'']`
and the state of the game qubits is
$$ |100>+|011> $$
with the first qubit recording the success of the first move at cell (0,0), the second qubit recording the success of the first move at cell (0,1), and the third qubit recording the move by player O, which is anti-correlated with the move by X at cell (0,0).
Notice that this move can be completely erased!
```
B = Board(3,3)
B.add_move([[0,0],[0,1]],0) #Directly adding moves, ([indx1,indx2],player) 0=X, 1=O.
B.add_move([[0,0]],1)
B.compute_winner()
```
Once again note that the move could be erased completely, and in fact this happens 50% of the time. Notice how the bit string output from QISKIT is translated into a board state.
## Two-cell moves in maybe-occupied cells
Instead of the above, player O might like to choose a better strategy. Perhaps O is interested in a quantum move on cells (0,0) and (0,2). In such a case the game records the two moves in the order they are entered.
- In order (0,0) then (0,2): The state of the game is first made into $ |100>+|011> $ as above, with the third qubit recording the success of player O claiming position (0,0). Then the (0,2) position is registered, anti-correlated with succeeding at position (0,0): $|1001>+|0110>$. Now, unlike before, player O succeeds in registering a move regardless of the outcome.
- In order (0,2) then (0,0): Now playing at (0,2) does not depend on anything, so the game state is $(|10>+|01>)\otimes (|1>+|0>) = |101>+|100>+|011>+|010>$. When the move at position (0,0) is added too, it is anti-correlated with BOTH the move at (0,2) AND the pre-existing move at (0,0), so the qubit state becomes $|1010>+|1000>+|0110>+|0101>$. Notice how the move could now be erased, so order does matter!
```
B = Board(3,3)
#Instead of running the game, for the purpose of demonstrating, we can just create the appropriate state manually.
#Directly adding moves, ([[x1,y1],[x2,y2]],player) with player=0->X, 1->O.
B.add_move([[0,0],[0,1]],0)
B.add_move([[0,0],[0,2]],1)
B.compute_winner()
```
### Exercise: what if player O chose coordinates (x=0,y=0) and (x=1,y=0) instead?
### Exercise: At this stage, can player X ensure that no matter what O plays, both (x=0,y=0) and (x=1,y=0) are occupied by X?
And that is all there is to quantum tic-tac-toe! Remember, to run the game, import the board:
`from q_tic_tac_toe import Board`
Create the board you want to play on:
`B = Board(3,3,True)`
and run!
`B.run()`
```
keywords = {'Topics': ['Games','Superposition','Entanglement'], 'Commands': ['Custom gates']}
```
<small><small><i>
All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/01_Python_Introduction)**
</i></small></small>
# Python Statement, Indentation and Comments
In this class, you will learn about Python statements, why indentation is important and use of comments in programming.
## 1. Python Statement
Instructions that a Python interpreter can execute are called statements. For example, **`a = 1`** is an assignment statement. **`if`** statement, **`for`** statement, **`while`** statement, etc. are other kinds of statements which will be discussed later.
### Multi-line statement
In Python, the end of a statement is marked by a newline character. But we can make a statement extend over multiple lines with the line continuation character **`\`**.
* Statements finish at the end of the line:
* Except when there is an open bracket or parenthesis:
```python
>>> 1+2
>>> +3 #illegal continuation of the sum
```
* A single backslash at the end of the line can also be used to indicate that a statement is still incomplete
```python
>>> 1 + \
>>> 2 + 3 # this is also okay
```
For example:
```
1+2 # assignment line 1
+3 # assignment line 2
# Python only evaluates assignment line 1
# a trailing "\" continues the statement onto the next line
1+2\
+3
a = 1 + 2 + 3 + \
4 + 5 + 6 + \
7 + 8 + 9
print(a)
```
This is an explicit line continuation. In Python, line continuation is implied inside:
1. parentheses **`( )`**,
For Example:
```python
(1+2
+ 3) # perfectly OK even with spaces
```
2. brackets **`[ ]`**, and
3. braces **`{ }`**.
For instance, we can implement the above multi-line statement as:
```
(1+2
+3)
a = (1 + 2 + 3 +
4 + 5 + 6 +
7 + 8 + 9)
print(a)
```
Here, the surrounding parentheses **`( )`** do the line continuation implicitly. Same is the case with **`[ ]`** and **`{ }`**. For example:
```
colors = ['red',
'blue',
'green']
print(colors)
```
We can also put multiple statements in a single line using semicolons **`;`** as follows:
```
a = 1; b = 2; c = 3
print(a,b,c)
a,b,c
```
## 2. Python Indentation
No spaces or tab characters are allowed at the start of a statement: indentation plays a special role in Python (see the section on control statements). For now, simply ensure that all statements start at the beginning of the line.
<div>
<img src="img/ind1.png" width="700"/>
</div>
Most of the programming languages like C, C++, and Java use braces **`{ }`** to define a block of code. Python, however, uses indentation.
A comparison of C & Python will help you understand it better.
<div>
<img src="img/ind2.png" width="700"/>
</div>
A code block (body of a **[function](https://github.com/milaan9/04_Python_Functions/blob/main/001_Python_Functions.ipynb)**, **[loop](https://github.com/milaan9/03_Python_Flow_Control/blob/main/005_Python_for_Loop.ipynb)**, etc.) starts with indentation and ends with the first unindented line. The amount of indentation is up to you, but it must be consistent throughout that block.
Generally, four whitespaces are used for indentation and are preferred over tabs. Here is an example.
> **In the case of Python, indentation is not for styling purpose. It is rather a requirement for your code to get compiled and executed. Thus it is mandatory!!!**
```
for i in range(1,11):
print(i) #press "Tab" one time for 1 indentation
if i == 6:
break
```
The enforcement of indentation in Python makes the code look neat and clean. This results in Python programs that look similar and consistent.
Indentation can be ignored in line continuation, but it's always a good idea to indent. It makes the code more readable. For example:
```
if True:
print('Hello')
a = 6
```
or
```
if True: print('Hello'); a = 6
```
both are valid and do the same thing, but the former style is clearer.
Incorrect indentation will result in an **`IndentationError`**.
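To see the error concretely, compiling a mis-indented snippet raises `IndentationError`; a small sketch using the built-in `compile` so the error can be caught and inspected rather than stopping the notebook:

```python
bad_source = """
if True:
print('Hello')   # body of the if is not indented
"""

try:
    compile(bad_source, "<example>", "exec")
except IndentationError as err:
    print(type(err).__name__, "-", err.msg)
```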
## 3. Python Comments
Comments are very important while writing a program. They describe what is going on inside a program, so that a person looking at the source code does not have a hard time figuring it out.
You might forget the key details of the program you just wrote in a month's time. So taking the time to explain these concepts in the form of comments is always fruitful.
In Python, we use the hash **`#`** symbol to start writing a comment.
It extends up to the newline character. Comments are for programmers to better understand a program. Python Interpreter ignores comments.
Generally, comments will look something like this:
```python
#This is a Comment
```
Because comments do not **execute**, when you run a program you will not see any indication of the comment there. Comments are in the source code for **humans** to **read**, not for **computers to execute**.
```
#This is a Comment
```
### 1. Single-line comments
If you want to write a single-line comment, the comment must start with **`#`**.
```python
#This is single line comment.
```
```
#This is single line comment.
```
### 2. Inline comments
If a comment is placed on the same line as a statement, it is called an inline comment. Like a block comment, an inline comment begins with a single hash (`#`) sign, followed by a space and the comment.
It is recommended that an inline comment be separated from the statement by at least **two spaces**. The following example demonstrates an inline comment:
```python
>>> n += 1  # increase/add n by 1
```
```
n=9
n+=1 # increase/add n by 1
n
```
### 3. Multi-line comments
We can have comments that extend up to multiple lines. One way is to use the hash **`#`** symbol at the beginning of each line. For example:
```
#This is a long comment
#and it extends
#to multiple lines
#This is a comment
#print out Hello
print('Hello')
```
Another way of doing this is to use triple quotes, either `'''` or `"""`.
These triple quotes are generally used for multi-line strings, but they can be used as multi-line comments as well. Unless they are docstrings, they do not generate any extra code.
```python
#single line comment
>>>print ("Hello Python"
'''This is
multiline comment''')
```
```
"""This is also a
perfect example of
multi-line comments"""
'''This is also a
perfect example of
multi-line comments'''
#single line comment
print ("Hello Python"
'''This is
multiline comment''')
```
### 4. Docstrings in Python
A docstring is short for documentation string.
**[Python Docstrings](https://github.com/milaan9/04_Python_Functions/blob/main/Python_Docstrings.ipynb)** (documentation strings) are the **[string](https://github.com/milaan9/02_Python_Datatypes/blob/main/002_Python_String.ipynb)** literals that appear right after the definition of a function, method, class, or module.
Triple quotes are used while writing docstrings. For example:
```python
>>>def double(num):
>>> """Function to double the value"""
>>> return 2*num
```
Docstrings appear right after the definition of a function, class, or module; that position is what distinguishes a docstring from an ordinary multi-line comment written with triple quotes.
The docstrings are associated with the object as their **`__doc__`** attribute.
So, we can access the docstrings of the above function with the following lines of code:
```
def double(num):
"""Function to double the value"""
return 2*num
print(double.__doc__)
```
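Besides reading the **`__doc__`** attribute directly, the built-in **`help()`** function renders the same docstring; a small sketch:

```python
def double(num):
    """Function to double the value"""
    return 2 * num

help(double)  # prints the signature followed by the docstring
```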
To learn more about docstrings in Python, visit **[Python Docstrings](https://github.com/milaan9/04_Python_Functions/blob/main/Python_Docstrings.ipynb)**.
## Help topics
Python has extensive help built in. You can execute **`help()`** for an overview or **`help(x)`** for any library, object or type **`x`**. Try using **`help("topics")`** to get a list of help pages built into the help system.
`help("topics")`
```
help("topics")
```
# XLA in Python
[](https://colab.sandbox.google.com/github/google/jax/blob/master/docs/notebooks/XLA_in_Python.ipynb)
<img style="height:100px;" src="https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/compiler/xla/g3doc/images/xlalogo.png"> <img style="height:100px;" src="https://upload.wikimedia.org/wikipedia/commons/c/c3/Python-logo-notext.svg">
_Anselm Levskaya_, _Qiao Zhang_
XLA is the compiler that JAX uses, and the compiler that TF uses for TPUs and will soon use for all devices, so it's worth some study. However, it's not exactly easy to play with XLA computations directly using the raw C++ interface. JAX exposes the underlying XLA computation builder API through a python wrapper, and makes interacting with the XLA compute model accessible for messing around and prototyping.
XLA computations are built as computation graphs in the HLO IR, which is then lowered to device-specific LLO (CPU, GPU, TPU, etc.).
As end users we interact with the computational primitives offered to us by the HLO spec.
# Caution: This is a pedagogical notebook covering some low level XLA details, the APIs herein are neither public nor stable!
## References
__xla__: the doc that defines what's in HLO - but note that the doc is incomplete and omits some ops.
https://www.tensorflow.org/xla/operation_semantics
More details on the ops are in the source code:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/client/xla_builder.h
__python xla client__: this is the XLA python client for JAX, and what we're using here.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/python/xla_client.py
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/python/xla_client_test.py
__jax__: you can see how jax interacts with the XLA compute layer for execution and JITing in these files.
https://github.com/google/jax/blob/master/jax/lax.py
https://github.com/google/jax/blob/master/jax/lib/xla_bridge.py
https://github.com/google/jax/blob/master/jax/interpreters/xla.py
## Colab Setup and Imports
```
import numpy as np
# We only need to import JAX's xla_client, not all of JAX.
from jax.lib import xla_client as xc
xops = xc.ops
# Plotting
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib import gridspec
from matplotlib import rcParams
rcParams['image.interpolation'] = 'nearest'
rcParams['image.cmap'] = 'viridis'
rcParams['axes.grid'] = False
```
## Simple Computations
```
# make a computation builder
c = xc.XlaBuilder("simple_scalar")
# define a parameter shape and parameter
param_shape = xc.Shape.array_shape(np.dtype(np.float32), ())
x = xops.Parameter(c, 0, param_shape)
# define computation graph
y = xops.Sin(x)
# build computation graph
# Keep in mind that incorrectly constructed graphs can cause
# your notebook kernel to crash!
computation = c.Build()
# get a cpu backend
cpu_backend = xc.get_local_backend("cpu")
# compile graph based on shape
compiled_computation = cpu_backend.compile(computation)
# define a host variable with above parameter shape
host_input = np.array(3.0, dtype=np.float32)
# place host variable on device and execute
device_input = cpu_backend.buffer_from_pyval(host_input)
device_out = compiled_computation.execute([device_input])
# retrieve the result
device_out[0].to_py()
# same as above with vector type:
c = xc.XlaBuilder("simple_vector")
param_shape = xc.Shape.array_shape(np.dtype(np.float32), (3,))
x = xops.Parameter(c, 0, param_shape)
# chain steps by reference:
y = xops.Sin(x)
z = xops.Abs(y)
computation = c.Build()
# get a cpu backend
cpu_backend = xc.get_local_backend("cpu")
# compile graph based on shape
compiled_computation = cpu_backend.compile(computation)
host_input = np.array([3.0, 4.0, 5.0], dtype=np.float32)
device_input = cpu_backend.buffer_from_pyval(host_input)
device_out = compiled_computation.execute([device_input])
# retrieve the result
device_out[0].to_py()
```
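As a sanity check outside XLA, the same Sin → Abs chain from the vector example can be reproduced directly in NumPy; the device result should match this to float32 precision:

```python
import numpy as np

# NumPy reference for the simple_vector graph above: z = |sin(x)|
host_input = np.array([3.0, 4.0, 5.0], dtype=np.float32)
reference = np.abs(np.sin(host_input))
print(reference)  # ~[0.14112, 0.7568025, 0.9589243]
```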
## Simple While Loop
```
# trivial while loop, decrement until 0
# x = 5
# while x > 0:
# x = x - 1
#
in_shape = xc.Shape.array_shape(np.dtype(np.int32), ())
# body computation:
bcb = xc.XlaBuilder("bodycomp")
x = xops.Parameter(bcb, 0, in_shape)
const1 = xops.Constant(bcb, np.int32(1))
y = xops.Sub(x, const1)
body_computation = bcb.Build()
# test computation:
tcb = xc.XlaBuilder("testcomp")
x = xops.Parameter(tcb, 0, in_shape)
const0 = xops.Constant(tcb, np.int32(0))
y = xops.Gt(x, const0)
test_computation = tcb.Build()
# while computation:
wcb = xc.XlaBuilder("whilecomp")
x = xops.Parameter(wcb, 0, in_shape)
xops.While(test_computation, body_computation, x)
while_computation = wcb.Build()
# Now compile and execute:
# get a cpu backend
cpu_backend = xc.get_local_backend("cpu")
# compile graph based on shape
compiled_computation = cpu_backend.compile(while_computation)
host_input = np.array(5, dtype=np.int32)
device_input = cpu_backend.buffer_from_pyval(host_input)
device_out = compiled_computation.execute([device_input])
# retrieve the result
device_out[0].to_py()
```
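For reference, `xops.While` threads a single carry value through the test and body computations until the test returns false. The plain-Python shape of the graph above is just:

```python
def xla_style_while(test, body, carry):
    # carry is threaded through body until test(carry) is False,
    # mirroring xops.While(test_computation, body_computation, x)
    while test(carry):
        carry = body(carry)
    return carry

# decrement until 0, as in the XLA graph above
result = xla_style_while(lambda x: x > 0, lambda x: x - 1, 5)
print(result)  # 0
```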
## While loops w/ Tuples - Newton's Method for sqrt
```
Xsqr = 2
guess = 1.0
converged_delta = 0.001
maxit = 1000
in_shape_0 = xc.Shape.array_shape(np.dtype(np.float32), ())
in_shape_1 = xc.Shape.array_shape(np.dtype(np.float32), ())
in_shape_2 = xc.Shape.array_shape(np.dtype(np.int32), ())
in_tuple_shape = xc.Shape.tuple_shape([in_shape_0, in_shape_1, in_shape_2])
# body computation:
# x_{i+1} = x_i - (x_i**2 - y) / (2 * x_i)
bcb = xc.XlaBuilder("bodycomp")
intuple = xops.Parameter(bcb, 0, in_tuple_shape)
y = xops.GetTupleElement(intuple, 0)
x = xops.GetTupleElement(intuple, 1)
guard_cntr = xops.GetTupleElement(intuple, 2)
new_x = xops.Sub(x, xops.Div(xops.Sub(xops.Mul(x, x), y), xops.Add(x, x)))
result = xops.Tuple(bcb, [y, new_x, xops.Sub(guard_cntr, xops.Constant(bcb, np.int32(1)))])
body_computation = bcb.Build()
# test computation -- convergence and max iteration test
tcb = xc.XlaBuilder("testcomp")
intuple = xops.Parameter(tcb, 0, in_tuple_shape)
y = xops.GetTupleElement(intuple, 0)
x = xops.GetTupleElement(intuple, 1)
guard_cntr = xops.GetTupleElement(intuple, 2)
criterion = xops.Abs(xops.Sub(xops.Mul(x, x), y))
# stop at convergence criteria or too many iterations
test = xops.And(xops.Gt(criterion, xops.Constant(tcb, np.float32(converged_delta))),
xops.Gt(guard_cntr, xops.Constant(tcb, np.int32(0))))
test_computation = tcb.Build()
# while computation:
# since jax does not allow users to create a tuple input directly, we need to
# take multiple parameters and make an intermediate tuple before feeding it as
# the initial carry to the while loop
wcb = xc.XlaBuilder("whilecomp")
y = xops.Parameter(wcb, 0, in_shape_0)
x = xops.Parameter(wcb, 1, in_shape_1)
guard_cntr = xops.Parameter(wcb, 2, in_shape_2)
tuple_init_carry = xops.Tuple(wcb, [y, x, guard_cntr])
xops.While(test_computation, body_computation, tuple_init_carry)
while_computation = wcb.Build()
# Now compile and execute:
cpu_backend = xc.get_local_backend("cpu")
# compile graph based on shape
compiled_computation = cpu_backend.compile(while_computation)
y = np.array(Xsqr, dtype=np.float32)
x = np.array(guess, dtype=np.float32)
maxit = np.array(maxit, dtype=np.int32)
device_input_y = cpu_backend.buffer_from_pyval(y)
device_input_x = cpu_backend.buffer_from_pyval(x)
device_input_maxit = cpu_backend.buffer_from_pyval(maxit)
device_out = compiled_computation.execute([device_input_y, device_input_x, device_input_maxit])
# retrieve the result
print("square root of {y} is {x}".format(y=y, x=device_out[1].to_py()))
```
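A plain-Python version of the same Newton iteration, with the same update rule and the same two stopping conditions (convergence tolerance and an iteration guard), is a useful cross-check for the tuple-carry logic:

```python
def newton_sqrt(y, x=1.0, converged_delta=0.001, maxit=1000):
    """Newton's method for sqrt(y): x_{i+1} = x_i - (x_i**2 - y) / (2 * x_i)."""
    while abs(x * x - y) > converged_delta and maxit > 0:
        x = x - (x * x - y) / (2.0 * x)
        maxit -= 1
    return x

print(newton_sqrt(2.0))  # ~1.4142
```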
## Calculate Symm Eigenvalues
Let's exploit the XLA QR implementation to solve some eigenvalues for symmetric matrices.
This is the naive QR algorithm, without shifts to accelerate convergence for closely spaced eigenvalues, and without any permutation to sort eigenvalues by magnitude.
```
Niter = 200
matrix_shape = (10, 10)
in_shape_0 = xc.Shape.array_shape(np.dtype(np.float32), matrix_shape)
in_shape_1 = xc.Shape.array_shape(np.dtype(np.int32), ())
in_tuple_shape = xc.Shape.tuple_shape([in_shape_0, in_shape_1])
# body computation -- QR loop: X_i = Q R , X_{i+1} = R Q
bcb = xc.XlaBuilder("bodycomp")
intuple = xops.Parameter(bcb, 0, in_tuple_shape)
x = xops.GetTupleElement(intuple, 0)
cntr = xops.GetTupleElement(intuple, 1)
Q, R = xops.QR(x, True)
RQ = xops.Dot(R, Q)
xops.Tuple(bcb, [RQ, xops.Sub(cntr, xops.Constant(bcb, np.int32(1)))])
body_computation = bcb.Build()
# test computation -- just a for loop condition
tcb = xc.XlaBuilder("testcomp")
intuple = xops.Parameter(tcb, 0, in_tuple_shape)
cntr = xops.GetTupleElement(intuple, 1)
test = xops.Gt(cntr, xops.Constant(tcb, np.int32(0)))
test_computation = tcb.Build()
# while computation:
wcb = xc.XlaBuilder("whilecomp")
x = xops.Parameter(wcb, 0, in_shape_0)
cntr = xops.Parameter(wcb, 1, in_shape_1)
tuple_init_carry = xops.Tuple(wcb, [x, cntr])
xops.While(test_computation, body_computation, tuple_init_carry)
while_computation = wcb.Build()
# Now compile and execute:
cpu_backend = xc.get_local_backend("cpu")
# compile graph based on shape
compiled_computation = cpu_backend.compile(while_computation)
X = np.random.random(matrix_shape).astype(np.float32)
X = (X + X.T) / 2.0
it = np.array(Niter, dtype=np.int32)
device_input_x = cpu_backend.buffer_from_pyval(X)
device_input_it = cpu_backend.buffer_from_pyval(it)
device_out = compiled_computation.execute([device_input_x, device_input_it])
host_out = device_out[0].to_py()
eigh_vals = host_out.diagonal()
plt.title('D')
plt.imshow(host_out)
print('sorted eigenvalues')
print(np.sort(eigh_vals))
print('sorted eigenvalues from numpy')
print(np.sort(np.linalg.eigh(X)[0]))
print('sorted error')
print(np.sort(eigh_vals) - np.sort(np.linalg.eigh(X)[0]))
```
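The same unshifted QR iteration is easy to express with `np.linalg.qr`, which makes a convenient reference when debugging the XLA graph. Below is a sketch (the function name and the constructed test matrix are ours) on a symmetric matrix built via a random orthogonal basis to have known eigenvalues 1..10:

```python
import numpy as np

def qr_eigenvalues(a, niter=500):
    """Unshifted QR iteration: X_{i+1} = R_i Q_i where X_i = Q_i R_i.
    For symmetric a, the iterates converge to a (near-)diagonal matrix
    whose diagonal holds the eigenvalues."""
    x = a.copy()
    for _ in range(niter):
        q, r = np.linalg.qr(x)
        x = r @ q
    return np.sort(x.diagonal())

# build a symmetric test matrix with known eigenvalues 1..10
rng = np.random.default_rng(0)
q0, _ = np.linalg.qr(rng.standard_normal((10, 10)))
a = q0 @ np.diag(np.arange(1.0, 11.0)) @ q0.T
print(qr_eigenvalues(a))  # ~[1., 2., ..., 10.]
```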
## Calculate Full Symm Eigensystem
We can also calculate the eigenbasis by accumulating the Qs.
```
Niter = 100
matrix_shape = (10, 10)
in_shape_0 = xc.Shape.array_shape(np.dtype(np.float32), matrix_shape)
in_shape_1 = xc.Shape.array_shape(np.dtype(np.float32), matrix_shape)
in_shape_2 = xc.Shape.array_shape(np.dtype(np.int32), ())
in_tuple_shape = xc.Shape.tuple_shape([in_shape_0, in_shape_1, in_shape_2])
# body computation -- QR loop: X_i = Q R , X_{i+1} = R Q
bcb = xc.XlaBuilder("bodycomp")
intuple = xops.Parameter(bcb, 0, in_tuple_shape)
X = xops.GetTupleElement(intuple, 0)
O = xops.GetTupleElement(intuple, 1)
cntr = xops.GetTupleElement(intuple, 2)
Q, R = xops.QR(X, True)
RQ = xops.Dot(R, Q)
Onew = xops.Dot(O, Q)
xops.Tuple(bcb, [RQ, Onew, xops.Sub(cntr, xops.Constant(bcb, np.int32(1)))])
body_computation = bcb.Build()
# test computation -- just a for loop condition
tcb = xc.XlaBuilder("testcomp")
intuple = xops.Parameter(tcb, 0, in_tuple_shape)
cntr = xops.GetTupleElement(intuple, 2)
test = xops.Gt(cntr, xops.Constant(tcb, np.int32(0)))
test_computation = tcb.Build()
# while computation:
wcb = xc.XlaBuilder("whilecomp")
X = xops.Parameter(wcb, 0, in_shape_0)
O = xops.Parameter(wcb, 1, in_shape_1)
cntr = xops.Parameter(wcb, 2, in_shape_2)
tuple_init_carry = xops.Tuple(wcb, [X, O, cntr])
xops.While(test_computation, body_computation, tuple_init_carry)
while_computation = wcb.Build()
# Now compile and execute:
cpu_backend = xc.get_local_backend("cpu")
# compile graph based on shape
compiled_computation = cpu_backend.compile(while_computation)
X = np.random.random(matrix_shape).astype(np.float32)
X = (X + X.T) / 2.0
Omat = np.eye(matrix_shape[0], dtype=np.float32)
it = np.array(Niter, dtype=np.int32)
device_input_X = cpu_backend.buffer_from_pyval(X)
device_input_Omat = cpu_backend.buffer_from_pyval(Omat)
device_input_it = cpu_backend.buffer_from_pyval(it)
device_out = compiled_computation.execute([device_input_X, device_input_Omat, device_input_it])
host_out = device_out[0].to_py()
eigh_vals = host_out.diagonal()
eigh_mat = device_out[1].to_py()
plt.title('D')
plt.imshow(host_out)
plt.figure()
plt.title('U')
plt.imshow(eigh_mat)
plt.figure()
plt.title('U^T A U')
plt.imshow(np.dot(np.dot(eigh_mat.T, X), eigh_mat))
print('sorted eigenvalues')
print(np.sort(eigh_vals))
print('sorted eigenvalues from numpy')
print(np.sort(np.linalg.eigh(X)[0]))
print('sorted error')
print(np.sort(eigh_vals) - np.sort(np.linalg.eigh(X)[0]))
```
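The accumulation step also has a direct NumPy analogue: multiplying the successive Q factors yields an orthogonal U with U.T @ A @ U approximately diagonal, since each iterate satisfies X_{i+1} = Q_i.T X_i Q_i. A sketch with a constructed spectrum (names here are ours):

```python
import numpy as np

def qr_eigensystem(a, niter=500):
    """QR iteration accumulating the Q factors: U = Q_0 Q_1 ... Q_n
    orthogonally diagonalizes the symmetric matrix a."""
    x, u = a.copy(), np.eye(a.shape[0])
    for _ in range(niter):
        q, r = np.linalg.qr(x)
        x = r @ q  # = q.T @ x_old @ q, a similarity transform
        u = u @ q
    return x.diagonal(), u

rng = np.random.default_rng(1)
q0, _ = np.linalg.qr(rng.standard_normal((6, 6)))
a = q0 @ np.diag(np.arange(1.0, 7.0)) @ q0.T
vals, u = qr_eigensystem(a)
print(np.sort(vals))  # ~[1., 2., ..., 6.]
```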
## Convolutions
I keep hearing from the AGI folks that we can use convolutions to build artificial life. Let's try it out.
```
# Here we borrow convenience functions from LAX to handle conv dimension numbers.
from typing import NamedTuple, Sequence
class ConvDimensionNumbers(NamedTuple):
"""Describes batch, spatial, and feature dimensions of a convolution.
Args:
lhs_spec: a tuple of nonnegative integer dimension numbers containing
`(batch dimension, feature dimension, spatial dimensions...)`.
rhs_spec: a tuple of nonnegative integer dimension numbers containing
`(out feature dimension, in feature dimension, spatial dimensions...)`.
out_spec: a tuple of nonnegative integer dimension numbers containing
`(batch dimension, feature dimension, spatial dimensions...)`.
"""
lhs_spec: Sequence[int]
rhs_spec: Sequence[int]
out_spec: Sequence[int]
def _conv_general_proto(dimension_numbers):
assert type(dimension_numbers) is ConvDimensionNumbers
lhs_spec, rhs_spec, out_spec = dimension_numbers
proto = xc.ConvolutionDimensionNumbers()
proto.input_batch_dimension = lhs_spec[0]
proto.input_feature_dimension = lhs_spec[1]
proto.output_batch_dimension = out_spec[0]
proto.output_feature_dimension = out_spec[1]
proto.kernel_output_feature_dimension = rhs_spec[0]
proto.kernel_input_feature_dimension = rhs_spec[1]
proto.input_spatial_dimensions.extend(lhs_spec[2:])
proto.kernel_spatial_dimensions.extend(rhs_spec[2:])
proto.output_spatial_dimensions.extend(out_spec[2:])
return proto
Niter=13
matrix_shape = (1, 1, 20, 20)
in_shape_0 = xc.Shape.array_shape(np.dtype(np.int32), matrix_shape)
in_shape_1 = xc.Shape.array_shape(np.dtype(np.int32), ())
in_tuple_shape = xc.Shape.tuple_shape([in_shape_0, in_shape_1])
# Body computation -- Conway Update
bcb = xc.XlaBuilder("bodycomp")
intuple = xops.Parameter(bcb, 0, in_tuple_shape)
x = xops.GetTupleElement(intuple, 0)
cntr = xops.GetTupleElement(intuple, 1)
# convs require floating-point type
xf = xops.ConvertElementType(x, xc.DTYPE_TO_XLA_ELEMENT_TYPE['float32'])
stamp = xops.Constant(bcb, np.ones((1,1,3,3), dtype=np.float32))
conv_dim_num_proto = _conv_general_proto(ConvDimensionNumbers(lhs_spec=(0,1,2,3), rhs_spec=(0,1,2,3), out_spec=(0,1,2,3)))
convd = xops.ConvGeneralDilated(xf, stamp, [1, 1], [(1, 1), (1, 1)], (), (), conv_dim_num_proto)
# logic ops require integer types
convd = xops.ConvertElementType(convd, xc.DTYPE_TO_XLA_ELEMENT_TYPE['int32'])
bool_x = xops.Eq(x, xops.Constant(bcb, np.int32(1)))
# core update rule
res = xops.Or(
# birth rule
xops.And(xops.Not(bool_x), xops.Eq(convd, xops.Constant(bcb, np.int32(3)))),
# survival rule
xops.And(bool_x, xops.Or(
# these are +1 the normal numbers since conv-sum counts self
xops.Eq(convd, xops.Constant(bcb, np.int32(4))),
xops.Eq(convd, xops.Constant(bcb, np.int32(3))))
)
)
# Convert output back to int type for type consistency
int_res = xops.ConvertElementType(res, xc.DTYPE_TO_XLA_ELEMENT_TYPE['int32'])
xops.Tuple(bcb, [int_res, xops.Sub(cntr, xops.Constant(bcb, np.int32(1)))])
body_computation = bcb.Build()
# Test computation -- just a for loop condition
tcb = xc.XlaBuilder("testcomp")
intuple = xops.Parameter(tcb, 0, in_tuple_shape)
cntr = xops.GetTupleElement(intuple, 1)
test = xops.Gt(cntr, xops.Constant(tcb, np.int32(0)))
test_computation = tcb.Build()
# While computation:
wcb = xc.XlaBuilder("whilecomp")
x = xops.Parameter(wcb, 0, in_shape_0)
cntr = xops.Parameter(wcb, 1, in_shape_1)
tuple_init_carry = xops.Tuple(wcb, [x, cntr])
xops.While(test_computation, body_computation, tuple_init_carry)
while_computation = wcb.Build()
# Now compile and execute:
cpu_backend = xc.get_local_backend("cpu")
# compile graph based on shape
compiled_computation = cpu_backend.compile(while_computation)
# Set up initial state
X = np.zeros(matrix_shape, dtype=np.int32)
X[0,0, 5:8, 5:8] = np.array([[0,1,0],[0,0,1],[1,1,1]])
# Evolve
movie = np.zeros((Niter,)+matrix_shape[-2:], dtype=np.int32)
for it in range(Niter):
itr = np.array(it, dtype=np.int32)
device_input_x = cpu_backend.buffer_from_pyval(X)
device_input_it = cpu_backend.buffer_from_pyval(itr)
device_out = compiled_computation.execute([device_input_x, device_input_it])
movie[it] = device_out[0].to_py()[0,0]
# Plot
fig = plt.figure(figsize=(15,2))
gs = gridspec.GridSpec(1,Niter)
for i in range(Niter):
ax1 = plt.subplot(gs[:, i])
ax1.axis('off')
ax1.imshow(movie[i])
plt.subplots_adjust(left=0.0, right=1.0, top=1.0, bottom=0.0, hspace=0.0, wspace=0.05)
```
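The update rule can be cross-checked without XLA: a zero-padded 3×3 window sum in NumPy reproduces the convolution (the sum includes the cell itself, hence the shifted survival counts of 3 and 4), and the classic glider test confirms it, since a glider translates one cell diagonally every four steps:

```python
import numpy as np

def conway_step(x):
    """One Game of Life step; the window sum includes the cell itself,
    matching the 3x3 all-ones convolution in the XLA graph."""
    p = np.pad(x, 1)  # zero padding, as in the conv above
    h, w = x.shape
    s = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    birth = (x == 0) & (s == 3)
    survive = (x == 1) & ((s == 3) | (s == 4))
    return (birth | survive).astype(x.dtype)

x = np.zeros((20, 20), dtype=np.int32)
x[5:8, 5:8] = np.array([[0, 1, 0], [0, 0, 1], [1, 1, 1]])
y = x
for _ in range(4):
    y = conway_step(y)
print(np.array_equal(y, np.roll(x, (1, 1), axis=(0, 1))))  # True
```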
## Fin
There's much more to XLA, but this hopefully highlights how easy it is to play with via the python client!
```
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import torch.nn.functional as F
import matplotlib.pyplot as plt
import numpy as np
from models import SimpleModel, ConcreteModel, ConcreteDropout, normal_nll
torch.manual_seed(2809)
np.random.seed(2809)
torch.cuda.manual_seed(2809)
%load_ext autoreload
%autoreload 2
def print_weights(model):
print(model.fc1.weight.data)
print(model.fc2.weight.data)
print(model.fc1.bias.data)
print(model.fc2.bias.data)
def save_checkpoint(state, filename='checkpoint.pth'):
torch.save(state, filename)
def evaluate_loss(pred, true, log_var=None):
if log_var is None:
criterion = nn.MSELoss()
return criterion(pred, true)
else:
return normal_nll(pred, true, log_var)
def generate_data(N, X_dim, Y_dim):
"""
Function to generate data
"""
sigma = 0.7 # ground truth
X = torch.randn(N, X_dim)
w = torch.ones((X_dim, Y_dim))*2.0
b = 8.0
Y = torch.mm(X, w) + b + sigma*torch.randn(N, Y_dim)
return X, Y
checkpoint_path = 'checkpoint.pth'
batch_size = 25
n_hidden = 3
n_train = 1000
n_val = 100
n_data = n_train + n_val
X_dim = 4
Y_dim = 2
epoch = 0
# For ConcreteModel
l = 1e-4 # length scale
wr = l**2. / n_train
dr = 2. / n_train
#model = SimpleModel(X_dim, n_hidden, Y_dim)
model = ConcreteModel(X_dim, n_hidden, Y_dim, wr, dr)
model_resume = SimpleModel(X_dim, n_hidden, Y_dim)
model_resume = ConcreteModel(X_dim, n_hidden, Y_dim, wr, dr)
optimizer = optim.Adam(model.parameters(),
betas=(0.9, 0.999), eps=1e-08, amsgrad=True)
optimizer_resume = optim.Adam(model_resume.parameters(),
betas=(0.9, 0.999), eps=1e-08, amsgrad=True)
X, Y = generate_data(n_data, X_dim, Y_dim)
X_train, Y_train = X[:n_train], Y[:n_train]
X_val, Y_val = X[n_train:], Y[n_train:]
# Train for 200 epochs
losses = []
while epoch < 200:
optimizer.zero_grad()
pred, log_var, reg = model(X_train)
loss = evaluate_loss(pred, Y_train, log_var)
loss.backward()
optimizer.step()
loss_val = loss.item()
losses.append(loss_val)
epoch += 1
save_checkpoint({
'epoch': epoch + 1,
'state_dict': model.state_dict(),
'optimizer' : optimizer.state_dict(),
}, checkpoint_path)
#print_weights(model)
#optimizer.state_dict()
# Resume training
print("=> loading checkpoint '{}'".format(checkpoint_path))
checkpoint = torch.load(checkpoint_path)
epoch = checkpoint['epoch']
model_resume.load_state_dict(checkpoint['state_dict'])
optimizer_resume.load_state_dict(checkpoint['optimizer'])
#print_weights(model_resume)
#optimizer_resume.state_dict()
# Train model for 100 more epochs
while epoch < 300:
optimizer_resume.zero_grad()
pred, log_var, reg = model_resume(X_train)
loss = evaluate_loss(pred, Y_train, log_var)
loss.backward()
optimizer_resume.step()
loss_val = loss.item()
losses.append(loss_val)
epoch += 1
losses_arr = np.array(losses)
plt.plot(losses)
```
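The save/resume pattern above is framework-agnostic: serialize a dict of epoch, model state, and optimizer state; load it back; continue from the stored epoch. A minimal torch-free sketch of the same round trip, where plain dicts stand in for the `state_dict()` calls:

```python
import pickle
import tempfile

# hypothetical stand-ins for model.state_dict() / optimizer.state_dict()
state = {
    "epoch": 200,
    "state_dict": {"fc1.weight": [0.1, 0.2], "fc1.bias": [0.0]},
    "optimizer": {"lr": 1e-3, "betas": (0.9, 0.999)},
}

with tempfile.NamedTemporaryFile(suffix=".pth", delete=False) as f:
    checkpoint_path = f.name
    pickle.dump(state, f)

with open(checkpoint_path, "rb") as f:
    checkpoint = pickle.load(f)

epoch = checkpoint["epoch"]  # training resumes from this epoch
print(epoch)  # 200
```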
```
##### Import packages
# Basic packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Modelling packages
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
# To avoid warnings
import warnings
warnings.filterwarnings("ignore")
##### Import data
# Check the csv's path before running it
df_acc_final = pd.read_csv('df_final.csv')
df_acc_final
##### Creating Mean Absolute Percentage Error
def mean_absolute_percentage_error(y_true, y_pred):
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
##### Format change to datetime on some energy columns
for col in ['date_Hr', 'startDate_energy', 'endDate_energy']:
df_acc_final[col] = pd.to_datetime(df_acc_final[col])
##### Creating new variables based on energy data
df_acc_final["time_elapsed"] = (df_acc_final["startDate_energy"] - df_acc_final["date_Hr"]).astype('timedelta64[s]')
df_acc_final["day"] = df_acc_final.date_Hr.apply(lambda x: x.day)
df_acc_final["month"] = df_acc_final.date_Hr.apply(lambda x: x.month)
df_acc_final["hour"] = df_acc_final.date_Hr.apply(lambda x: x.hour)
df_acc_final.drop(['date_Hr', 'startDate_energy', 'endDate_energy','totalTime_energy'], axis=1, inplace=True)
df_acc_final.head()
##### To avoid problems when using MAPE (division by values near zero), I multiply the whole target by 10
df_acc_final.value_energy = df_acc_final.value_energy.apply(lambda x: x*10)
```
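A quick worked example of the MAPE defined above: with true values [100, 200] and predictions [110, 180], the absolute percentage errors are both 10%, so the metric is 10.0. This also shows why scaling the target up helps: the denominator is `y_true`, so values near zero inflate the metric.

```python
import numpy as np

def mean_absolute_percentage_error(y_true, y_pred):
    # same definition as in the notebook
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

print(mean_absolute_percentage_error([100, 200], [110, 180]))  # 10.0
```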
# Modelling
```
##### Selecting all the columns to use for modelling (also the target)
# Before trying different models, it's important to keep in mind that the problem asks for a model without high computational
# costs and that does not take up much memory. In addition, simplicity, clarity and explainability are valued.
features = list(df_acc_final)
for col in ['id_', 'value_energy']:
features.remove(col)
print('Columns used on X:', features)
##### Creation of X and y
X = df_acc_final[features].values.astype('int')
y = df_acc_final['value_energy'].values.astype('int')
##### Creation of X and y split -- train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
```
## Decision Tree Regressor
```
##### Decision Tree Regressor
# This is a lightweight model, both in memory usage and computationally
model = DecisionTreeRegressor()
params = {'criterion':['mae'],
'max_depth': [4,5,6,7],
'max_features': [7,8,9,10],
'max_leaf_nodes': [30,40,50],
'min_impurity_decrease' : [0.0005,0.001,0.005],
'min_samples_split': [2,4]}
# GridSearch
grid_solver = GridSearchCV(estimator = model,
param_grid = params,
scoring = 'neg_median_absolute_error',
cv = 10,
refit = 'neg_median_absolute_error',
verbose = 0)
model_result = grid_solver.fit(X_train,y_train)
reg = model_result.best_estimator_
reg.fit(X,y)
##### Mean Absolute Percentage Error
yhat = reg.predict(X_test)
print("Mean Absolute Percentage Error = %.2f" %mean_absolute_percentage_error(yhat,y_test),'%')
##### Feature Importance
features_importance = reg.feature_importances_
features_array = np.array(features)
features_array_ordered = features_array[(features_importance).argsort()[::-1]]
features_array_ordered
plt.figure(figsize=(16,10))
sns.barplot(y = features_array, x = features_importance, orient='h', order=features_array_ordered[:50])
plt.show()
```
## Random Forest Regressor
```
##### Random Forest Regressor
# A Random Forest should lower the metric further: it keeps the low bias of the individual trees and reduces the
# variance by averaging models that are not perfectly correlated with one another.
# A single tree has low bias but high variance, so combining low-bias, weakly correlated trees
# should drive the variance toward its minimum value.
model = RandomForestRegressor()
params = {'bootstrap': [True],
'criterion':['mae'],
'max_depth': [8,10],
'max_features': [10,12],
'max_leaf_nodes': [10,20,30],
'min_impurity_decrease' : [0.001,0.01],
'min_samples_split': [2,4],
'n_estimators': [10,15]}
# GridSearch
grid_solver = GridSearchCV(estimator = model,
param_grid = params,
scoring = 'neg_median_absolute_error',
cv = 7,
refit = 'neg_median_absolute_error',
verbose = 0)
model_result = grid_solver.fit(X_train,y_train)
reg = model_result.best_estimator_
reg.fit(X,y)
##### Mean Absolute Percentage Error
yhat = reg.predict(X_test)
print("Mean Absolute Percentage Error = %.2f" %mean_absolute_percentage_error(yhat,y_test),'%')
##### Feature Importance
features_importance = reg.feature_importances_
features_array = np.array(features)
features_array_ordered = features_array[(features_importance).argsort()[::-1]]
features_array_ordered
plt.figure(figsize=(16,10))
sns.barplot(y = features_array, x = features_importance, orient='h', order=features_array_ordered[:50])
plt.show()
```
## SVM
```
##### SVM linear
# Although it requires more computational effort, once the model is trained it takes up less memory and it is very intuitive.
# After looking at the EDA plots, the relations don't seem linear; while trees are very flexible, this algorithm cuts the
# space with hyperplanes. I'll train different kernels for SVM to see which fits the problem best.
# Linear tuning
lineal_tuning = dict()
for c in [0.001,0.01, 1]:
svr = SVR(kernel = 'linear', C = c)
scores = cross_val_score(svr, X, y, cv = 5, scoring = 'neg_median_absolute_error')
lineal_tuning[c] = scores.mean()
# scores are negative errors, so the best C is the one that maximizes the score
best_score = max(lineal_tuning, key = lineal_tuning.get)
print(f'Best score = {lineal_tuning[best_score]} is achieved with c = {best_score}')
reg = SVR(kernel = 'linear', C = best_score)
reg.fit(X_train, y_train)
##### Mean Absolute Percentage Error
yhat = reg.predict(X_test)
print("Mean Absolute Percentage Error = %.2f" %mean_absolute_percentage_error(yhat,y_test),'%')
##### SVM poly
reg = SVR(kernel = 'poly', C = 0.01)
reg.fit(X_train, y_train)
##### Mean Absolute Percentage Error
yhat = reg.predict(X_test)
print("Mean Absolute Percentage Error = %.2f" %mean_absolute_percentage_error(yhat,y_test),'%')
##### SVM radial
reg = SVR(kernel = 'rbf', C = 0.01, gamma = 0.1)
reg.fit(X_train, y_train)
##### Mean Absolute Percentage Error
yhat = reg.predict(X_test)
print("Mean Absolute Percentage Error = %.2f" %mean_absolute_percentage_error(yhat,y_test),'%')
```
# Activity Intensity
```
##### Activity Intensity
# In addition to estimating the energy expenditure, the intensity level of the activity carried out must be calculated for each time interval.
# The classification of the intensity level is based on the metabolic equivalents or METS (kcal/kg*h) of the activity:
# light activity < 3 METS, moderate 3 - 6 METS and intense > 6 METS.
# To estimate it, I assume a person of 75 kg. The model chosen is the Random Forest Regressor, which has the lowest MAPE.
reg = RandomForestRegressor(criterion='mae', max_depth=8, max_features=12,
max_leaf_nodes=30, min_impurity_decrease=0.001,
n_estimators=15)
reg.fit(X,y)
yhat = reg.predict(X)
ids = df_acc_final['id_'].to_frame()
ids['yhat'] = yhat
ids['METs'] = ids["yhat"] / (75 * 62 / 3600)
conditions = [(ids["METs"] < 3), ((ids["METs"] >= 3) & (ids["METs"] <= 6)), (ids["METs"] > 6)]
names = ['ligera', 'moderada', 'intensa']
ids['intensidad'] = np.select(conditions, names)
ids
##### Conclusions and Future Work
# The substantial improvement seen when we introduce non-linearity into the model suggests that
# the relationships between the variables and the target are not linear.
# The dataset doesn't have the full potential to establish a clear model, so more effort should be made to collect all the information on physical
# activity. I suggest signal-processing variables such as Zero Crossing Rate, Spectral Centroid, Spectral Rolloff and MFCC (Mel-Frequency Cepstral Coefficients).
# Additional information about individuals, such as age, sex and weight, would help to improve the MAPE of the final model.
# Time was decisive on this project (3-4h only), so some workstreams couldn't be done and would be important to look at.
# Extra effort should be made in the selection of predictive variables, analyzing the L1 and L2 error; otherwise we would be
# losing explainability, memory and battery.
```
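To make the METs conversion above concrete: the divisor 75 * 62 / 3600 ≈ 1.292 is the kcal a 75 kg person burns in one 62-second window at 1 MET, so dividing the predicted kcal by it yields METs. A small worked example with hypothetical predictions, taking the 3 and 6 METS boundaries as part of the moderate band:

```python
import numpy as np

weight_kg, window_s = 75, 62
kcal_per_met = weight_kg * window_s / 3600  # kcal per window at 1 MET (~1.292)
preds_kcal = np.array([1.0, 5.0, 9.0])      # hypothetical energy predictions
mets = preds_kcal / kcal_per_met            # ~[0.77, 3.87, 6.97]
conditions = [mets < 3, (mets >= 3) & (mets <= 6), mets > 6]
names = ['ligera', 'moderada', 'intensa']
print(np.select(conditions, names))  # ['ligera' 'moderada' 'intensa']
```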
# Autokeras
A test run of AutoML using [Autokeras][autokeras] on the No. 6 Turbofan Engine Degradation Simulation Dataset from [PCoE][pcoe].
[autokeras]: https://autokeras.com/
[pcoe]: https://ti.arc.nasa.gov/tech/dash/groups/pcoe/prognostic-data-repository/
# Install Autokeras
```
try:
import autokeras as ak
except ModuleNotFoundError:
# https://autokeras.com/install/
!pip install git+https://github.com/keras-team/keras-tuner.git
!pip install autokeras
import autokeras as ak
from autokeras import StructuredDataRegressor
```
# Preset
```
# default packages
import logging
import pathlib
import zipfile
from typing import Any, Dict, List, Sequence, Tuple
# third party packages
import IPython.display as display
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
import seaborn as sns
import sklearn.model_selection as skmselection
import tensorflow.keras.models as tkmodels
import tensorflow.keras.callbacks as tkcallbacks
import tqdm.autonotebook as tqdm
# mode
MODE_DEBUG = False
# logger
_logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG if MODE_DEBUG else logging.INFO)
# seaborn
sns.set()
```
# Global parameters
```
PATH_ARCHIVE = pathlib.Path("turbofun.zip")
PATH_EXTRACT = pathlib.Path("turbofun")
# define the variables (columns) to use
COLUMNS_ALL = [
*[f"op{i:02}" for i in range(3)],
*[f"sensor{i:02}" for i in range(26)],
]
COLUMNS_INVALID = [
"op02",
"sensor01",
"sensor04",
"sensor09",
"sensor15",
"sensor17",
"sensor18",
"sensor21",
"sensor22",
"sensor23",
"sensor24",
"sensor25",
]
COLUMNS_VALID = sorted(list(set(COLUMNS_ALL) - set(COLUMNS_INVALID)))
COLUMNS_TARGET = ["rul"]
```
# Load dataset
```
def download(filename: pathlib.Path) -> None:
"""zipファイルをダウンロード."""
if filename.exists():
return
url = "https://ti.arc.nasa.gov/c/6/"
res = requests.get(url, stream=True)
if res.status_code != 200:
_logger.error(res.status_code)
return
with open(filename, "wb") as f:
for chunk in tqdm.tqdm(res):
f.write(chunk)
download(PATH_ARCHIVE)
def extractall(src: pathlib.Path, dst: pathlib.Path) -> None:
"""zipファイルを解凍."""
if not src.exists():
_logger.error(f"{src} does not exist.")
return
if dst.exists():
_logger.error(f"{dst} exists.")
return
with zipfile.ZipFile(src) as zf:
zf.extractall(dst)
extractall(PATH_ARCHIVE, PATH_EXTRACT)
```
# Convert data shape
```
def get_unit_series(df: pd.DataFrame, unit: int) -> Dict[str, Any]:
"""unit単位のnumpy.arrayへ変換する."""
df_unit = df[df["unit"] == unit].copy()
df_unit.sort_values(by=["time"], ignore_index=True, inplace=True)
names_op = [f"op{i:02}" for i in range(3)]
names_sensor = [f"sensor{i:02}" for i in range(26)]
data = {
"unit": unit,
**{name: df_unit[name].to_numpy().ravel() for name in names_op},
**{name: df_unit[name].to_numpy().ravel() for name in names_sensor},
}
return data
def load_data(filename: pathlib.Path) -> pd.DataFrame:
"""データを読み取り、1セルに1unit分のデータをnumpy.arrayで保持するDataFrameとする."""
df = pd.read_csv(
filename,
header=None,
sep=" ",
names=[
"unit",
"time",
*[f"op{i:02d}" for i in range(3)],
*[f"sensor{i:02d}" for i in range(26)],
],
)
return df
DF_FD001_TRAIN = load_data(PATH_EXTRACT.joinpath("train_FD001.txt"))
DF_FD001_TEST = load_data(PATH_EXTRACT.joinpath("test_FD001.txt"))
display.display(DF_FD001_TRAIN)
display.display(DF_FD001_TEST)
def load_rul(filepath: pathlib.Path) -> pd.DataFrame:
"""テスト用のRULを読み込む."""
df = pd.read_csv(
filepath,
header=None,
sep=" ",
names=["rul", "none"],
)
df.drop(["none"], axis=1, inplace=True)
df["unit"] = range(len(df))
df.set_index(["unit"], inplace=True)
return df
DF_FD001_TEST_RUL = load_rul(PATH_EXTRACT.joinpath("RUL_FD001.txt"))
display.display(DF_FD001_TEST_RUL)
def create_train_rul(df: pd.DataFrame) -> pd.Series:
"""学習データに対するRULを算出する."""
df_rul = df.copy()
df_max_time = df.groupby(["unit"])["time"].max()
df_rul["rul"] = df_rul.apply(
lambda x: df_max_time.at[x["unit"]] - x["time"],
axis=1,
)
return df_rul
DF_FD001_TRAIN = create_train_rul(DF_FD001_TRAIN)
display.display(DF_FD001_TRAIN)
```
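The row-wise `apply` in `create_train_rul` does one lookup per row; a groupby transform computes the same RUL column fully vectorized, which matters on the ~20k-row FD001 training set. A sketch (assuming the same `unit`/`time` columns; the function name and demo frame are ours):

```python
import pandas as pd

def create_train_rul_fast(df: pd.DataFrame) -> pd.DataFrame:
    """Vectorized equivalent: rul = max(time within unit) - time."""
    df_rul = df.copy()
    df_rul["rul"] = df_rul.groupby("unit")["time"].transform("max") - df_rul["time"]
    return df_rul

demo = pd.DataFrame({"unit": [1, 1, 1, 2, 2], "time": [1, 2, 3, 1, 2]})
print(create_train_rul_fast(demo)["rul"].tolist())  # [2, 1, 0, 1, 0]
```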
# Data split
```
def train_test_split(df: pd.DataFrame) -> Tuple[pd.DataFrame, pd.DataFrame]:
"""学習用データと検証用データを分割する."""
units = df["unit"].unique()
units_train, units_test = skmselection.train_test_split(
units,
test_size=0.2,
random_state=42,
)
df_train = df[df["unit"].isin(units_train)]
df_test = df[df["unit"].isin(units_test)]
return df_train, df_test
DF_TRAIN, DF_VALID = train_test_split(DF_FD001_TRAIN)
DF_TRAIN.info()
DF_VALID.info()
```
# Autokeras
```
def fit(df_feature: pd.DataFrame, df_target: pd.DataFrame) -> StructuredDataRegressor:
"""モデルの探索."""
max_trials = 3 if MODE_DEBUG else 100
epochs = 10 if MODE_DEBUG else 100
early_stopping = tkcallbacks.EarlyStopping(
monitor="val_loss",
min_delta=1e-4,
patience=10,
)
regressor = StructuredDataRegressor(
overwrite=True,
max_trials=max_trials,
loss="mean_squared_error",
metrics="mean_squared_error",
objective="val_loss",
seed=42,
)
regressor.fit(
df_feature.to_numpy(),
df_target.to_numpy(),
epochs=epochs,
validation_split=0.2,
callbacks=[early_stopping],
)
return regressor
REGRESSOR = fit(DF_TRAIN[COLUMNS_VALID], DF_TRAIN[COLUMNS_TARGET])
def export_model(regressor: StructuredDataRegressor, output: pathlib.Path) -> None:
"""モデルをファイルとして保存."""
model = regressor.export_model()
model.save(str(output), save_format="tf")
# test
loaded_model = tkmodels.load_model(str(output), custom_objects=ak.CUSTOM_OBJECTS)
export_model(REGRESSOR, pathlib.Path("model_autokeras"))
```
## Results
```
def predict(
regressor: StructuredDataRegressor,
df_info: pd.DataFrame,
df_feature: pd.DataFrame,
df_target: pd.DataFrame,
units: List[int],
) -> None:
"""予測結果を可視化する."""
results = regressor.predict(df_feature.to_numpy())
df_results = df_info.copy()
df_results["rul"] = df_target.to_numpy().ravel()
df_results["pred"] = results
for unit in units:
df_target = df_results[df_results["unit"] == unit]
fig, axes = plt.subplots(1, 1, figsize=(9, 4), tight_layout=True)
ax = axes
ax.plot(df_target["time"], df_target["rul"], label="rul")
ax.plot(df_target["time"], df_target["pred"], label="pred")
ax.set_title(f"unit{unit:02}")
plt.show()
plt.close()
fig.clf()
predict(
REGRESSOR,
DF_VALID[["unit", "time"]],
DF_VALID[COLUMNS_VALID],
DF_VALID[COLUMNS_TARGET],
DF_VALID["unit"].unique()[:3],
)
```
<img src="https://jaipresentation.blob.core.windows.net/comm/jai_avatar.png" width="100" align="right"/>
# JAI - Trust your data
## Fill: leverage JAI to smart-fill your missing data
This is an example of how to use the fill missing values capabilities of JAI.
In this notebook we will use a subset of the [PC Games 2020](https://www.kaggle.com/jesneuman/pc-games) dataset to mask some values about whether or not a game is Indie and fill them again using JAI.
You can install JAI in your environment using `pip install jai-sdk`.
And you can read the docs [here](https://jai-sdk.readthedocs.io/en/stable/)!
If you have any comments or suggestions, feel free to contact us: support@getjai.com
*Drop by drop is the water pot filled. Likewise, the wise man, gathering it little by little, fills himself with good.* - Buddha
```
# JAI imports
from jai import Jai
from jai.processing import process_predict
# I/O and data manipulation imports
import pandas as pd
import numpy as np
```
## Reading data
```
# it might take a few seconds to download this dataset (10MB) to your computer
DATASET_URL = "https://jaipresentation.blob.core.windows.net/data/games_jai.parquet"
df_games = pd.read_parquet(DATASET_URL).astype({"Indie": "object"})
```
### Let's check how many NaNs there are in each column
```
df_games.isna().sum()
```
### And let's also check how many unique values are in each column
```
df_games.nunique()
```
### And the number of rows as well
```
df_games.shape[0]
```
Columns like 'Genres' and 'Players' have too many unique values relative to the total number of rows, so we will use the 'Indie' column instead.
In the following cells, we are going to randomly select 15% of rows and set their 'Indie' value to NaN.
After that, we will use JAI's `fill` method to actually fill these values we deliberately masked.
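That eyeball check can be made systematic. Below is a small sketch (the 5% threshold is an arbitrary choice of ours, not part of JAI) that flags columns whose unique-value count is a small fraction of the row count:

```python
import pandas as pd

def fillable_columns(df: pd.DataFrame, max_unique_ratio: float = 0.05) -> list:
    """Columns whose unique-value count is a small fraction of the row count --
    a rough proxy for categorical columns that make reasonable fill targets."""
    ratio = df.nunique() / len(df)
    return list(ratio[ratio <= max_unique_ratio].index)
```

On this dataset, 'Indie' should pass the check while 'Genres' and 'Players' should not.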
## Create a random mask using 15% of rows
```
mask = np.unique(np.random.randint(low=0, high=df_games.shape[0], size=int(df_games.shape[0] * 0.15)))
```
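One caveat: `np.random.randint` samples with replacement, so after `np.unique` the mask covers somewhat less than 15% of rows. If an exact fraction matters, sample without replacement instead — a sketch:

```python
import numpy as np

def exact_fraction_mask(n_rows: int, frac: float = 0.15, seed: int = 0) -> np.ndarray:
    """Indices of exactly floor(frac * n_rows) distinct rows, sampled without replacement."""
    rng = np.random.default_rng(seed)
    return rng.choice(n_rows, size=int(n_rows * frac), replace=False)
```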
## Create a new dataframe where the indexes will be used to set the 'Indie' column to NaN
```
column_to_fill = "Indie"
df_masked = df_games.copy()
df_masked.loc[mask, column_to_fill] = np.nan
# make sure we masked some values in the Indie column
df_masked.isna().sum()
```
## Now we can use JAI to fill these missing values!
```
j = Jai("YOUR_AUTH_KEY")
```
### We call `fill` passing a given `name` for the database, the `data` itself and the `column` we want the NaN values to be filled.
### There is a 'gotcha', though...
As a rule of thumb, we should send the data that we humans would normally use to fill those values. In this sense, the columns `Name`, `Genres` and `Indie` should suffice to learn whether a game is Indie or not. Other columns like `Players` or `Description` do not provide much relevant information and would probably get in the way of JAI's learning.
```
# set which columns to use
cols_to_use = ["id", "Name", "Genres", "Indie"]
db_name = "games_fill"
results = j.fill(name=db_name,
data=df_masked[cols_to_use],
column=column_to_fill,
db_type="FastText",
hyperparams={"learning_rate": 0.0001})
```
### Finally, we process the results...
```
processed = process_predict(results)
df_result = pd.DataFrame(processed).sort_values('id')
df_result
```
### ... and check the accuracy of the fill
```
predicted = df_result["predict"]
ground_truth = df_games.loc[mask].drop_duplicates().sort_index()[column_to_fill]
np.equal(predicted.to_numpy(), ground_truth.astype(str).to_numpy()).sum() / predicted.shape[0]
```
The `fill` method correctly predicted the values for over 80% of the samples! Let's plug these results back into our original dataframe.
```
df_filled = df_masked.copy()
df_filled.loc[mask, "Indie"] = df_result["predict"].tolist()
df_filled.isna().sum()
```
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 5GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv("/kaggle/input/real-or-fake-fake-jobposting-prediction/fake_job_postings.csv")
df
df.columns
df.info()
df = df.drop(['job_id', 'title', 'location', 'department', 'salary_range','telecommuting', 'has_company_logo', 'has_questions', 'employment_type','required_experience', 'required_education', 'industry', 'function'],axis = 1)
df
df.info()
df = df.fillna(" ")
df['description'] = df['description'] + " " + df["company_profile"] + " " + df["requirements"] + " " +df["benefits"]
df.head()
df.columns
df = df.drop(['company_profile', 'requirements','benefits'],axis =1)
df
from matplotlib import pyplot as plt
from wordcloud import WordCloud,STOPWORDS
stopwords = set(STOPWORDS)
wcf = WordCloud(background_color = 'white',max_words = 2000,stopwords = stopwords )
wcf.generate(" ".join(df['description']))
fig = plt.figure()
fig.set_figwidth(25)
fig.set_figheight(20)
plt.imshow(wcf,interpolation = 'bilinear')
plt.axis('off')
plt.show()
df.fraudulent.value_counts()
df.columns
X=df.description.astype('str')
y=df.fraudulent
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=0)
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
vocab=3000
tokenizer=Tokenizer(vocab,oov_token="<oov>")
tokenizer.fit_on_texts(X_train)
train_sequence=tokenizer.texts_to_sequences(X_train)
test_sequence=tokenizer.texts_to_sequences(X_test)
padded_train=pad_sequences(train_sequence,maxlen=1500)
padded_test=pad_sequences(test_sequence,maxlen=1500)
from keras.models import Sequential
from keras.layers import Dense,LSTM,Embedding,GlobalAveragePooling1D
from keras.optimizers import Adam
model=Sequential()
model.add(Embedding(vocab,3000))
model.add(GlobalAveragePooling1D())
model.add(Dense(128,activation='relu'))
model.add(Dense(1,activation='sigmoid'))
model.compile(optimizer=Adam(lr=0.001),loss='binary_crossentropy',metrics=['accuracy'])
model.summary()
history = model.fit(padded_train,y_train,validation_data=(padded_test,y_test),epochs=10)
import matplotlib.pyplot as plt
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
```
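One caveat on the accuracy curves above: the `fraudulent` class is heavily imbalanced (see `value_counts()`), so accuracy can look good even for a model that rarely flags fraud. A small pure-Python check of positive-class precision and recall — a sketch, independent of the Keras model:

```python
def precision_recall(y_true, y_prob, threshold=0.5):
    """Precision and recall for the positive (fraudulent) class of a binary classifier."""
    y_pred = [1 if p > threshold else 0 for p in y_prob]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

Something like `precision_recall(y_test, model.predict(padded_test).ravel())` would then report how the model does on the minority class specifically.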
# x-filter Overlay - Demonstration Notebook
Using the HLS high-level synthesis tool, an algorithm written in C/C++ can easily be synthesized into a hardware IP that can be instantiated directly in Vivado, exploiting the parallelism of the FPGA to accelerate the algorithm and improve system response time. In this example, an FIR filter IP whose order and coefficients can both be modified at run time was built with the HLS tool.
The x-filter Overlay integrates this filter into a complete system; the Block Design is shown below. The ARM processor accesses the IP through the AXI bus and a DMA.
<img src="./images/x-order_filter.PNG"/>
*Note: an Overlay can be understood as a specific FPGA bitstream plus the corresponding Python API driver.*
Within the PYNQ framework, the Python API makes it easy to call the IP inside the Overlay. Building on the Python ecosystem, importing data-analysis libraries such as numpy and the plotting library matplotlib lets us analyze and verify the FIR filter with just a few lines of code. In this notebook we use numpy to generate a superposition of several frequencies as the FIR filter input, and analyze the signals before and after filtering in both the time and frequency domains.
The table below is the driver header automatically generated by the HLS tool for the IP; the notebook accesses the IP registers according to this header.
```
# ==============================================================
# File generated on Mon Oct 07 01:59:23 +0800 2019
# Vivado(TM) HLS - High-Level Synthesis from C, C++ and SystemC v2018.3 (64-bit)
# SW Build 2405991 on Thu Dec 6 23:38:27 MST 2018
# IP Build 2404404 on Fri Dec 7 01:43:56 MST 2018
# Copyright 1986-2018 Xilinx, Inc. All Rights Reserved.
# ==============================================================
# AXILiteS
# 0x00 : Control signals
# bit 0 - ap_start (Read/Write/COH)
# bit 1 - ap_done (Read/COR)
# bit 2 - ap_idle (Read)
# bit 3 - ap_ready (Read)
# bit 7 - auto_restart (Read/Write)
# others - reserved
# 0x04 : Global Interrupt Enable Register
# bit 0 - Global Interrupt Enable (Read/Write)
# others - reserved
# 0x08 : IP Interrupt Enable Register (Read/Write)
# bit 0 - Channel 0 (ap_done)
# bit 1 - Channel 1 (ap_ready)
# others - reserved
# 0x0c : IP Interrupt Status Register (Read/TOW)
# bit 0 - Channel 0 (ap_done)
# bit 1 - Channel 1 (ap_ready)
# others - reserved
# 0x10 : Data signal of coe
# bit 31~0 - coe[31:0] (Read/Write)
# 0x14 : reserved
# 0x18 : Data signal of ctrl
# bit 31~0 - ctrl[31:0] (Read/Write)
# 0x1c : reserved
# (SC = Self Clear, COR = Clear on Read, TOW = Toggle on Write, COH = Clear on Handshake)
```
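For readability when driving the IP from Python, the register offsets above can be transcribed into named constants. The names below are illustrative (not generated by HLS):

```python
# Register offsets of the x_order_fir AXILiteS interface, transcribed from the header above
REG_CTRL     = 0x00  # ap_start (bit 0), ap_done (bit 1), ap_idle (bit 2), ap_ready (bit 3), auto_restart (bit 7)
REG_GIE      = 0x04  # Global Interrupt Enable
REG_IP_IER   = 0x08  # IP Interrupt Enable
REG_IP_ISR   = 0x0C  # IP Interrupt Status
REG_COE      = 0x10  # physical address of the coefficient buffer
REG_CTRL_BUF = 0x18  # physical address of the control buffer

def ap_start_word(auto_restart=False):
    """Control-register value that asserts ap_start, optionally with auto_restart (bit 7)."""
    return 0x01 | (0x80 if auto_restart else 0x00)
```

With these names, the write `fir_filter.write(0x00,0x81)` in step 1 below reads as `fir_filter.write(REG_CTRL, ap_start_word(auto_restart=True))`.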
To help verify the algorithm in this notebook, two filters were designed in MATLAB. The highest signal frequency component is preset to 750 Hz; by the sampling theorem the sampling frequency must exceed twice the signal frequency, so both filters use a sampling frequency of 1800 Hz.
The figure below shows the magnitude response of the FIR low-pass filter designed in MATLAB: a 10th-order FIR low-pass filter with a cutoff frequency of 500 Hz.
<img src="./images/MagnitudeResponse.PNG" width="70%" height="70%"/>
Exported coefficients: [107,280,-1193,-1212,9334,18136,9334,-1212,-1193,280,107]
Changing the filter settings, we redesign a 15th-order FIR high-pass filter with a cutoff frequency of 500 Hz.
<img src="./images/MagnitudeResponse_500Hz_HP.png" width="70%" height="70%"/>
Exported coefficients: [-97,-66,435,0,-1730,1101,5506,-13305,13305,-5506,-1101,1730,0,-435,66,97]
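The exported low-pass coefficients can be sanity-checked directly in Python by evaluating the FIR frequency response |H(f)| = |Σ h[n]·e^(−j2πfn/fs)|. A small sketch using only the standard library (the Q15 integer coefficients are divided by 32768 to recover the float taps):

```python
import math

FS = 1800  # sampling frequency in Hz
LP_COEFFS = [c / 32768.0 for c in (107, 280, -1193, -1212, 9334, 18136,
                                   9334, -1212, -1193, 280, 107)]

def fir_magnitude(coeffs, freq_hz, fs=FS):
    """Magnitude of the FIR frequency response at freq_hz."""
    w = 2.0 * math.pi * freq_hz / fs
    h = sum(c * complex(math.cos(w * n), -math.sin(w * n)) for n, c in enumerate(coeffs))
    return abs(h)
```

The DC gain comes out to exactly 1.0 (the coefficients sum to 32768), a 200 Hz tone passes nearly unattenuated, and 750 Hz lands deep in the stopband, matching the magnitude response plot above.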
# Step 1 - Import the Python libraries and instantiate the DMA device used to drive the FIR filter.
### Note: press "Shift + Enter" to run the Python in each notebook cell one by one. A "*" to the left of a cell means the script is still running; it turns into a number once execution finishes.
```
#Import the required Python libraries
import pynq.lib.dma #library for accessing the DMA on the FPGA side
import numpy as np #numpy is Python's numerical analysis library
from pynq import Xlnk #Xlnk() allocates contiguous memory, required for accessing the FPGA-side DMA
from scipy.fftpack import fft,ifft #Python's FFT library
import matplotlib.pyplot as plt #Python plotting library
import scipy as scipy
#Load the FPGA bitstream
firn = pynq.Overlay("/usr/local/lib/python3.6/dist-packages/x-filter/bitstream/x-order_filter.bit")
#Instantiate the DMA module inside the Overlay
dma = firn.axi_dma_0
led_4bits = firn.axi_gpio_0
rgb_leds = firn.axi_gpio_1
btn_4bits = firn.axi_gpio_2
fir_filter = firn.x_order_fir_0
led_4bits.write(0x04,0x00)
led_4bits.write(0x00,0x0A)
rgb_leds.write(0x04,0x00)
rgb_leds.write(0x00,0x0A)
#Configure the DMA in the Overlay; each transfer moves 1800 data points.
xlnk = Xlnk()
in_buffer = xlnk.cma_array(shape=(1800,), dtype=np.int32)
out_buffer = xlnk.cma_array(shape=(1800,), dtype=np.int32)
#coe_buffer = xlnk.cma_array(shape=(11,), dtype=np.int32)
coe_buffer = xlnk.cma_array(shape=(16,), dtype=np.int32)
ctrl_buffer = xlnk.cma_array(shape=(2,), dtype=np.int32)
#coe = [107,280,-1193,-1212,9334,18136,9334,-1212,-1193,280,107]
coe = [-97,-66,435,0,-1730,1101,5506,-13305,13305,-5506,-1101,1730,0,-435,66,97]
for i in range (16):
coe_buffer[i] = coe[i]
ctrl_buffer[0] = 1
#ctrl_buffer[1] = 10
ctrl_buffer[1] = 16
coe_buffer.physical_address
fir_filter.write(0x10,coe_buffer.physical_address)
fir_filter.write(0x18,ctrl_buffer.physical_address)
fir_filter.write(0x00,0x81)
```
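The coefficient arrays above are Q15 fixed-point values (16-bit signed integers scaled by 2^15). Converting between float taps and Q15 integers is a one-liner each way — a small sketch:

```python
def float_to_q15(x):
    """Quantize a float in [-1, 1) to a 16-bit signed Q15 integer (clamped)."""
    return max(-32768, min(32767, int(round(x * 32768))))

def q15_to_float(q):
    """Convert a Q15 integer back to a float."""
    return q / 32768.0
```

This is also why the filter output is divided by 32768 in step 2: the hardware treats the Q15 coefficients as plain integers during the multiply-accumulate.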
# Step 2 - Superpose several signals of different frequencies and amplitudes as the filter input.
```
#The sampling frequency is 1800 Hz, i.e. 1800 sample points per second, so we take 1800 sample points.
x=np.linspace(0,1,1800)
#Generate the filter input signal
f1 = 600 #frequency of the 1st signal component: 600 Hz
a1 = 100 #amplitude of the 1st signal component: 100
f2 = 450 #frequency of the 2nd signal component: 450 Hz
a2 = 100 #amplitude of the 2nd signal component: 100
f3 = 200 #frequency of the 3rd signal component: 200 Hz
a3 = 100 #amplitude of the 3rd signal component: 100
f4 = 650 #frequency of the 4th signal component: 650 Hz
a4 = 100 #amplitude of the 4th signal component: 100
#Superpose components of different frequencies to form the filter input signal; more components can be added.
#y=np.int32(a1*np.sin(2*np.pi*f1*x) + a2*np.sin(2*np.pi*f2*x))
y=np.int32(a1*np.sin(2*np.pi*f1*x) + a2*np.sin(2*np.pi*f2*x) + a3*np.sin(2*np.pi*f3*x) + a4*np.sin(2*np.pi*f4*x))
#Plot the filter input waveform
fig1 = plt.figure()
ax1 = fig1.gca()
plt.plot(y[0:50]) #only the first 50 points are shown for clarity; change 50 to show more
plt.title('input signal',fontsize=10,color='b')
#Send the values in in_buffer to the FIR filter input via DMA
for i in range(1800):
    in_buffer[i] = y[i]
dma.sendchannel.transfer(in_buffer)
#Receive the filter output data into out_buffer
dma.recvchannel.transfer(out_buffer)
#Plot the filter output signal
fig2 = plt.figure()
ax2 = fig2.gca()
plt.plot(out_buffer[0:50]/32768) #divide by 32768: the coefficients are 16-bit signed fixed-point fractions treated as integers in the computation
plt.title('output signal',fontsize=10,color='b')
```
# Step 3 - Frequency-domain analysis of the filter input and output signals
```
#FFT helper (renamed from fft to avoid shadowing the scipy.fftpack import)
def plot_fft(signal_buffer,points):
    yy = scipy.fftpack.fft(signal_buffer)
    yreal = yy.real # real part
    yimag = yy.imag # imaginary part
    yf1 = abs(yy)/((len(points)/2)) #normalize
    yf2 = yf1[range(int(len(points)/2))] #by symmetry, keep only half the spectrum
    xf1 = np.arange(len(signal_buffer)) # frequency axis
    xf2 = xf1[range(int(len(points)/2))] #keep half the range
    #FFT of the mixed wave (single-sided spectrum)
    #plt.subplot(222)
    plt.plot(xf2,yf2,'r') #plot the FFT magnitude of the signal
    plt.title('FFT of Mixed wave',fontsize=10,color='r')
    return
#FFT of the input signal
plot_fft(in_buffer,x)
#FFT of the output signal
plot_fft(out_buffer/32768,x) #divide by 32768: the coefficients are 16-bit signed fixed-point fractions treated as integers in the computation
#dma.sendchannel.wait()
#dma.recvchannel.wait()
in_buffer.close()
out_buffer.close()
xlnk.xlnk_reset()
```
#### Script for downloading a ground truth non-subtweets dataset
#### Import libraries for accessing the API and managing JSON data
```
import tweepy
import json
```
#### Load the API credentials
```
consumer_key, consumer_secret, access_token, access_token_secret = (open("../../credentials.txt")
.read().split("\n"))
```
#### Authenticate the connection to the API using the credentials
```
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
```
#### Connect to the API
```
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True, compression=True)
```
#### Define a function for recursively accessing parent tweets
```
def first_tweet(tweet_status_object):
try:
return first_tweet(api.get_status(tweet_status_object.in_reply_to_status_id_str,
tweet_mode="extended"))
except tweepy.TweepError:
return tweet_status_object
```
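Very deep reply chains could in principle exceed Python's default recursion limit (1000). An iterative equivalent — a sketch with the status-fetching call injected so it can be exercised without the live API:

```python
def first_tweet_iterative(status, fetch_status):
    """Walk up the reply chain until a root tweet, or an API error, is reached.

    fetch_status should behave like api.get_status(id, tweet_mode="extended").
    """
    while status.in_reply_to_status_id_str is not None:
        try:
            status = fetch_status(status.in_reply_to_status_id_str)
        except Exception:  # tweepy.TweepError in the real script
            break
    return status
```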
#### Define a function for finding tweets with replies that specifically do not call them subtweets
```
def get_non_subtweets(max_tweets=10000000,
query=("-subtweet AND @ since:2018-03-01 exclude:retweets filter:replies")):
non_subtweets_ids_list = []
non_subtweets_list = []
i = 0
for potential_non_subtweet_reply in tweepy.Cursor(api.search, lang="en",
tweet_mode="extended", q=query).items(max_tweets):
i += 1
potential_non_subtweet_original = first_tweet(potential_non_subtweet_reply)
if (not potential_non_subtweet_original.in_reply_to_status_id_str
and potential_non_subtweet_original.user.lang == "en"):
if (potential_non_subtweet_original.id_str in non_subtweets_ids_list
or "subtweet" in potential_non_subtweet_original.full_text
or "Subtweet" in potential_non_subtweet_original.full_text
or "SUBTWEET" in potential_non_subtweet_original.full_text):
continue
else:
non_subtweets_ids_list.append(potential_non_subtweet_original.id_str)
non_subtweets_list.append({"tweet_data": potential_non_subtweet_original._json,
"reply": potential_non_subtweet_reply._json})
with open("../data/other_data/non_subtweets.json", "w") as outfile:
json.dump(non_subtweets_list, outfile, indent=4)
print(("Tweet #{0} was a reply to a non-subtweet: {1}\n"
.format(i, potential_non_subtweet_original.full_text.replace("\n", " "))))
return non_subtweets_list
```
#### Show the results
```
non_subtweets_list = get_non_subtweets()
print(len(non_subtweets_list))
```
```
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import *
from collections import Counter
import seaborn as sns
import pandas as pd
from tqdm import tqdm
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
IMAGE_DIR = 'image_contest_level_2'
CROP_DIR = 'crop_split2'
from multiprocessing import Pool, Lock, Manager
```
# Parallel Data Preprocessing
```
def f(index):
img = cv2.imread('%s/%d.png'%(IMAGE_DIR, index))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
eq = cv2.equalizeHist(gray)
b = cv2.medianBlur(eq, 9)
m, n = img.shape[:2]
b2 = cv2.resize(b, (n//4, m//4))
m1 = cv2.morphologyEx(b2, cv2.MORPH_OPEN, np.ones((7, 40)))
m2 = cv2.morphologyEx(m1, cv2.MORPH_CLOSE, np.ones((4, 4)))
_, bw = cv2.threshold(m2, 127, 255, cv2.THRESH_BINARY_INV)
bw = cv2.resize(bw, (n, m))
img2, ctrs, hier = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if len(ctrs) > 3:
        print(index)
    # Fine-tune the three formula regions
d = 20
d2 = 5
imgs = []
sizes = []
for i, ctr in enumerate(ctrs):
x, y, w, h = cv2.boundingRect(ctr)
if w*h < 1000:
continue
roi = img[max(0, y-d):min(m, y+h+d),max(0, x-d):min(n, x+w+d)]
p, q, _ = roi.shape
x = b[max(0, y-d):min(m, y+h+d),max(0, x-d):min(n, x+w+d)]
x = cv2.morphologyEx(x, cv2.MORPH_CLOSE, np.ones((3, 3)))
_, x = cv2.threshold(x, 127, 255, cv2.THRESH_BINARY_INV)
_, x, _ = cv2.findContours(x, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(np.vstack(x))
roi2 = roi[max(0, y-d2):min(p, y+h+d2),max(0, x-d2):min(q, x+w+d2)]
imgs.append(roi2)
sizes.append(roi2.shape)
cv2.imwrite('%s/%d_%d.png'%(CROP_DIR, index, i), roi2)
    # Concatenate the three formulas
sizes = np.array(sizes)
img = np.zeros((sizes[:,0].max(), sizes[:,1].sum()+(len(sizes)-1)*2, 3), dtype=np.uint8)
x = 0
for a in imgs[::-1]:
iw = a.shape[1]
img[:a.shape[0], x:x+iw] = a
x += iw + 2
cv2.imwrite('%s/%d.png'%(CROP_DIR, index), img)
return [index, len(sizes)]
%%time
try:
p
except:
p = Pool(12)
n = 100000
if __name__ == '__main__':
rs = []
for r in tqdm(p.imap_unordered(f, range(n)), total=n):
rs.append(r)
import struct
import imghdr
def get_image_size(fname):
'''Determine the image type of fhandle and return its size.
from draco'''
with open(fname, 'rb') as fhandle:
head = fhandle.read(24)
if len(head) != 24:
return
if imghdr.what(fname) == 'png':
check = struct.unpack('>i', head[4:8])[0]
if check != 0x0d0a1a0a:
return
width, height = struct.unpack('>ii', head[16:24])
elif imghdr.what(fname) == 'gif':
width, height = struct.unpack('<HH', head[6:10])
elif imghdr.what(fname) == 'jpeg':
try:
fhandle.seek(0) # Read 0xff next
size = 2
ftype = 0
while not 0xc0 <= ftype <= 0xcf:
fhandle.seek(size, 1)
byte = fhandle.read(1)
while ord(byte) == 0xff:
byte = fhandle.read(1)
ftype = ord(byte)
size = struct.unpack('>H', fhandle.read(2))[0] - 2
# We are at a SOFn block
fhandle.seek(1, 1) # Skip `precision' byte.
height, width = struct.unpack('>HH', fhandle.read(4))
except Exception: #IGNORE:W0703
return
else:
return
return width, height
df = pd.read_csv('size.csv')
sizes = []
fnames = []
for i in tqdm(range(100000)):
for j in range(1, df['r'][i]):
fname = '%s/%d_%d.png'%(CROP_DIR, i, j)
fnames.append(fname)
size = get_image_size(fname)
sizes.append(size)
s = np.array(sizes)
print('wmin wmax hmin hmax')
print(s[:,0].min(), s[:,0].max(), s[:,1].min(), s[:,1].max())
sns.boxplot(s[:,0])
sizes = []
for i in tqdm(range(100000)):
fname = '%s/%d_%d.png'%(CROP_DIR, i, 0)
fnames.append(fname)
size = get_image_size(fname)
sizes.append(size)
s = np.array(sizes)
print('wmin wmax hmin hmax')
print(s[:,0].min(), s[:,0].max(), s[:,1].min(), s[:,1].max())
sns.boxplot(s[:,0])
plt.scatter(s[:,0], s[:,1])
s[:,1].argmax()
Image('%s/%d_%d.png'%(CROP_DIR, s[:,1].argmax(), 0))
```
# Result Visualization
```
def disp2(img):
cv2.imwrite('a.png', img)
return display(Image('a.png'))
def disp(img, txt=None, first=False):
global index
if first:
index = 1
plt.figure(figsize=(16, 9))
else:
index += 1
plt.subplot(4, 3, index)
if len(img.shape) == 2:
plt.imshow(img, cmap='gray')
else:
plt.imshow(img[:,:,::-1])
if txt:
plt.title(txt)
```
# Technical Principles
* [Grayscale conversion](http://docs.opencv.org/master/df/d9d/tutorial_py_colorspaces.html)
* [Thresholding](http://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html)
* [Histogram equalization](http://docs.opencv.org/master/d5/daf/tutorial_py_histogram_equalization.html)
* [Median blur](http://docs.opencv.org/master/d4/d13/tutorial_py_filtering.html)
* [Morphological opening](http://docs.opencv.org/master/d9/d61/tutorial_py_morphological_ops.html)
* [Contour detection](http://docs.opencv.org/master/d4/d73/tutorial_py_contours_begin.html)
* [Bounding rectangles](http://docs.opencv.org/master/dd/d49/tutorial_py_contour_features.html)
```
def plot(index):
global img, gray, b, eq, bw, m, n, m1, m2, r, roi, ctrs, d, d2
img = cv2.imread('%s/%d.png'%(IMAGE_DIR, index))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
eq = cv2.equalizeHist(gray)
b = cv2.medianBlur(eq, 9)
m, n = img.shape[:2]
b2 = cv2.resize(b, (n//4, m//4))
m1 = cv2.morphologyEx(b2, cv2.MORPH_OPEN, np.ones((7, 40)))
m2 = cv2.morphologyEx(m1, cv2.MORPH_CLOSE, np.ones((4, 4)))
_, bw = cv2.threshold(m2, 127, 255, cv2.THRESH_BINARY_INV)
bw = cv2.resize(bw, (n, m))
r = img.copy()
img2, ctrs, hier = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for ctr in ctrs:
x, y, w, h = cv2.boundingRect(ctr)
cv2.rectangle(r, (x, y), (x+w, y+h), (0, 255, 0), 10)
x, y, w, h = cv2.boundingRect(np.vstack(ctrs))
disp(img, 'raw img', 1)
disp(eq, 'eq')
disp(b, 'blur')
disp(m1, 'm1')
disp(m2, 'm2')
disp(r, 'rect')
    # Fine-tune the three formula regions
d = 20
d2 = 5
imgs = []
sizes = []
for i, ctr in enumerate(ctrs):
x, y, w, h = cv2.boundingRect(ctr)
roi = img[max(0, y-d):min(m, y+h+d),max(0, x-d):min(n, x+w+d)]
p, q, _ = roi.shape
x = b[max(0, y-d):min(m, y+h+d),max(0, x-d):min(n, x+w+d)]
x = cv2.morphologyEx(x, cv2.MORPH_CLOSE, np.ones((3, 3)))
_, x = cv2.threshold(x, 127, 255, cv2.THRESH_BINARY_INV)
_, x, _ = cv2.findContours(x, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(np.vstack(x))
roi2 = roi[max(0, y-d2):min(p, y+h+d2),max(0, x-d2):min(q, x+w+d2)]
imgs.append(roi2)
sizes.append(roi2.shape)
    # Concatenate the three formulas
sizes = np.array(sizes)
img2 = np.zeros((sizes[:,0].max(), sizes[:,1].sum()+len(sizes)-1, 3), dtype=np.uint8)
x = 0
for a in imgs[::-1]:
w = a.shape[1]
img2[:a.shape[0], x:x+w] = a
x += w + 1
return disp2(img2)
# plot(56044)
# plot(42030)
# plot(59934)
# plot(57424)
# plot(42126)
plot(93631)
for i, ctr in enumerate(ctrs):
x, y, w, h = cv2.boundingRect(ctr)
roi = img[max(0, y-d):min(m, y+h+d),max(0, x-d):min(n, x+w+d)]
p, q, _ = roi.shape
x = b[max(0, y-d):min(m, y+h+d),max(0, x-d):min(n, x+w+d)]
x = cv2.morphologyEx(x, cv2.MORPH_CLOSE, np.ones((3, 3)))
disp2(x)
```
# Publishing SDs, Shapefiles and CSVs
Publishing your data can be accomplished in two simple steps:
1. Add the local data as an item to the portal
2. Call the publish() method on the item
This sample notebook shows how different types of GIS datasets can be added to the GIS, and published as web layers.
```
from IPython.display import display
from arcgis.gis import GIS
import os
gis = GIS("https://www.arcgis.com", "arcgis_python", "P@ssword123")
```
# Publish all the service definition files in a folder
The sample below lists all the service definition (.sd) files in a data directory and publishes them as web layers. To publish a service definition file, we first add the .sd file to the Portal, and then call the publish() method:
```
# path relative to this notebook
data_dir = "data/"
#Get list of all files
file_list = os.listdir(data_dir)
#Filter and get only .sd files
sd_file_list = [x for x in file_list if x.endswith(".sd")]
print("Number of .sd files found: " + str(len(sd_file_list)))
# Loop through each file and publish it as a service
for current_sd_file in sd_file_list:
item = gis.content.add({}, data_dir + current_sd_file) # .sd file is uploaded and a .sd file item is created
published_item = item.publish() # .sd file item is published and a web layer item is created
display(published_item)
```
In the example shown above, one .sd file produced a web feature layer and another produced a web tile layer
# Publish a feature service from a shapefile and update the item information
To publish a shapefile, we first add the zipped shapefile to the Portal as an item, then call the publish() method on the item to create a web layer. Often, your shapefiles or service definitions may not contain the metadata you want to show on the portal item. This sample demonstrates how you can update those properties after publishing a web layer.
```
data = "data/power_pedestals_2012.zip"
shpfile = gis.content.add({}, data)
shpfile
published_service = shpfile.publish()
display(published_service)
```
The web layer item has minimal information and a default thumbnail.
### Update the layer item's metadata
To update the metadata and set the thumbnail, use the update() method on the web layer's item obtained during publishing.
```
thumbnail_path = "data/power_pedestals_thumbnail.PNG"
item_properties = {"snippet":"""This dataset was collected from Utah DOT open data portal.
Source URL: <a href="http://udot.uplan.opendata.arcgis.com/
datasets/a627bb128ac44767832402f7f9bde909_10">http://udot.uplan.opendata.arcgis.com/
datasets/a627bb128ac44767832402f7f9bde909_10</a>""",
"title":"Locations of power pedestals collected in 2012",
"tags":"opendata"}
published_service.update(item_properties, thumbnail=thumbnail_path)
display(published_service)
```
# Publish a CSV file and move it into a folder
To publish a CSV file, we first add the .csv file to the Portal, and then call the publish() method to publish it as a layer. Once published, we create a destination folder on the server and then move the published items into that folder
```
csv_file = 'data/Chennai_precipitation.csv'
csv_item = gis.content.add({}, csv_file)
display(csv_item)
```
The csv file used in this sample has a column titled `LOCATION` containing place names in text. During the publishing process we specify this column as an `address_fields` parameter. The server geocodes these locations to create point features for the web layer.
```
csv_lyr = csv_item.publish(None, {"Address":"LOCATION"})
display(csv_lyr)
```
### Create a new folder for the items
The `create_folder()` method of `GIS.content` can be used to create a new folder. Once created, the `move()` method of the `Item` can be used to move the items into that folder.
```
# create a new folder called 'Rainfall Data'
new_folder_details = gis.content.create_folder("Rainfall Data")
print(new_folder_details)
# move both the csv_item and csv_lyr items into this new folder
csv_item.move(new_folder_details) # Here you could either pass name of the folder or the dictionary
# returned from create_folder() or folders property on a User object
csv_lyr.move(new_folder_details)
```
Now that the items are moved, we can query the item's `ownerFolder` property and make sure it matches the `id` of the folder created in the previous step.
```
print(csv_lyr.ownerFolder)
```
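That check can be written as a one-liner. A sketch, assuming (as the output of `create_folder()` above suggests) that `new_folder_details` is a dictionary with an `'id'` key:

```python
def items_in_folder(items, folder_details):
    """True when every item's ownerFolder matches the target folder's id."""
    return all(item.ownerFolder == folder_details["id"] for item in items)

# items_in_folder([csv_item, csv_lyr], new_folder_details) should then be True
```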
## Write SEG-Y with `obspy`
Before going any further, you might like to know, [What is SEG-Y?](http://www.agilegeoscience.com/blog/2014/3/26/what-is-seg-y.html). See also the articles in [SubSurfWiki](http://www.subsurfwiki.org/wiki/SEG_Y) and [Wikipedia](https://en.wikipedia.org/wiki/SEG_Y).
We'll use the [obspy](https://github.com/obspy/obspy) seismology library to read and write SEGY data.
Technical SEG-Y documentation:
* [SEG-Y Rev 1](http://seg.org/Portals/0/SEG/News%20and%20Resources/Technical%20Standards/seg_y_rev1.pdf)
* [SEG-Y Rev 2 proposal](https://www.dropbox.com/s/txrqsfuwo59fjea/SEG-Y%20Rev%202.0%20Draft%20August%202015.pdf?dl=0) and [draft repo](http://community.seg.org/web/technical-standards-committee/documents/-/document_library/view/6062543)
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
ls -l ../data/*.sgy
```
## 2D data
```
filename = '../data/HUN00-ALT-01_STK.sgy'
from obspy.io.segy.segy import _read_segy
section = _read_segy(filename) # unpack_headers=True slows you down here
data = np.vstack([t.data for t in section.traces])
plt.figure(figsize=(16,8))
plt.imshow(data.T, cmap="Greys")
plt.colorbar(shrink=0.5)
plt.show()
```
Formatted header:
```
def chunk(string, width=80):
lines = int(np.ceil(len(string) / width))
result = ''
for i in range(lines):
line = string[i*width:i*width+width]
result += line + (width-len(line))*' ' + '\n'
return result
s = section.textual_file_header.decode()
print(chunk(s))
section.binary_file_header
section.traces[0].header
len(section.traces[0].data)
```
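The `chunk()` helper above slices the 3200-character textual header into 80-character "card" lines, space-padding the last slice. Essentially the same thing can be expressed as a single comprehension:

```python
def chunk_lines(string, width=80):
    """Split a SEG-Y textual header into fixed-width card lines, space-padded."""
    return '\n'.join(string[i:i + width].ljust(width)
                     for i in range(0, len(string), width))
```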
## Change the data
Let's scale the data.
```
scaled = data / 1000
scaled[np.isnan(scaled)] = 0
scaled
vm = np.percentile(scaled, 99)
plt.figure(figsize=(16,8))
plt.imshow(scaled.T, cmap="Greys", vmin=-vm, vmax=vm)
plt.colorbar(shrink=0.5)
plt.show()
```
## Write data
Let's write this all back to a new SEG-Y file.
```
from obspy.core import Trace, Stream, UTCDateTime
from obspy.io.segy.segy import SEGYTraceHeader
stream = Stream()
for i, trace in enumerate(scaled):
# Make the trace.
tr = Trace(trace)
# Add required data.
tr.stats.delta = 0.004
tr.stats.starttime = 0 # Not strictly required.
# Add yet more to the header (optional).
tr.stats.segy = {'trace_header': SEGYTraceHeader()}
tr.stats.segy.trace_header.trace_sequence_number_within_line = i + 1
tr.stats.segy.trace_header.receiver_group_elevation = 0
# Append the trace to the stream.
stream.append(tr)
stream
stream.write('../data/out.sgy', format='SEGY', data_encoding=5) # encode 5 for IEEE
```
## Add a file-wide header
So far we have only attached metadata to the traces, but we can also attach file-wide metadata, such as a textual header. A SEG-Y file normally has a file-wide text header, which can be attached to the stream object.
If this header and the binary header are not set, they will be autocreated with defaults.
```
from obspy.core import AttribDict
from obspy.io.segy.segy import SEGYBinaryFileHeader
# Text header.
stream.stats = AttribDict()
stream.stats.textual_file_header = '{:80s}'.format('This is the textual header.').encode()
stream.stats.textual_file_header += '{:80s}'.format('This file contains seismic data.').encode()
# Binary header.
stream.stats.binary_file_header = SEGYBinaryFileHeader()
stream.stats.binary_file_header.trace_sorting_code = 4
stream.stats.binary_file_header.seg_y_format_revision_number = 0x0100
import sys
stream.write('../data/out.sgy', format='SEGY', data_encoding=5, byteorder=sys.byteorder)
```
<hr />
<div>
<img src="https://avatars1.githubusercontent.com/u/1692321?s=50"><p style="text-align:center">© Agile Geoscience 2016</p>
</div>
```
from lxml import etree as ET
import json
import os
import pprint
#temp create template json without config
test = open('mif/defParse300.json',)
print(test)
jsontest = json.load(test)
print(jsontest)
#jsontest = json.load(open('30382939.xml'))
recordTree = ET.parse('30382939.xml')
#print(recordTree.tostring())
#print(ET.tostring(recordTree))
root = recordTree.getroot()
print(root)
print(root.keys())
for child in root:
print(child.tag)
testall = root.findall('.//')
for i in testall:
    print(i)
if i.tag[28:]=="participant" and i.attrib["id"]=="4044480":
#if i.tag[28:]=="participant":
#print("found")
#if i.attrib["id"]=="4044480":
#print('bingo')
print(i)
#testing out xml parsing
xmltest = XmlRecord()
xmltest.parseXml('7984244.xml', 1, True)
pprint.pprint(xmltest.path)
print(type(xmltest.root))
pprint.pprint(xmltest.root)
#temp = xmltest.root
#while temp is not None:
#for i in temp:
#print(i)
#temp =
class XmlRecord():
"""XML-based record representation."""
def modifyTag(self,item,ver):
""" Modifies tag of an item if necessary."""
#print(self.config[ver]["NSL"])
#tag = item.tag[self.config[ver]["NSL"]:]
tag = item.tag[28:]
print(tag)
return tag
def toNsString(self,ns, prefix=None):
"""Converts namespace to string prefix for element tags."""
mif_ns = ns[prefix]
mifstr = "{%s}" % mif_ns
return mifstr
def __init__(self, root=None, config=None):
#adding path to view:
self.path = []
if root is not None:
self.root = root
else:
self.root = {}
##temp solution to not having config:
#self.config[ver]["NSL"] = 31
        if config is not None:
            self.config = {}
            for ver in config.keys():
                self.config[ver] = {}
#"loads" in appropriate json file depending on format.
#config is a deeply nested dictionary:
#{version: {IN or OUT: {json file as a dictionary} } }
#config[ver]["IN"] is a json file
#self.config[ver]["IN"] = json.load( open( config[ver]["IN"] ) )
#what is NSL?
#self.config[ver]["NSL"] = len(self.config[ver]["IN"]["@NS"])+2
##temp solution to not having config:
self.config[ver]["NSL"] = 31
#self.config[ver]["OUT"] = json.load( open( config[ver]["OUT"] ) )
# re-key default ("*") namespace
defns = None
for nk in self.config[ver]["OUT"]["@NS"]:
if nk == "*":
defns = self.config[ver]["OUT"]["@NS"]["*"]
if defns is not None:
self.config[ver]["OUT"]["@NS"].pop("*", None)
self.config[ver]["OUT"]["@NS"][None] = defns
def parseXml(self, filename, ver, debug=False):
#template = self.config[ver]["IN"]
template = jsontest
#tree data structure from etree; holds the parsed data from passed in xml file.
#returns elementTree object
#documentobject structure
recordTree = ET.parse( filename )
print(ET.tostring(recordTree))
## add conditional for 254 case:
if True:
for x in recordTree.iter():
if (x.tag[28:] == "attribute" and "name" in x.attrib):
if (x.attrib["name"]=="comment"):
print(x.text)
#root object has a tag and attribute
#root.tag, root.attrib
#for child in root iterates over children of the root.
#TODO: test this out with xml file to determine exactly how it works - what constitutes a "child" in the context of these xml files
rootElem = recordTree.getroot()
if True:
print(rootElem.findall('attribute'))
#template describes what to expect
        self.genericParse( template, ver, self.root, [], rootElem, debug=debug )
return self
#elem is root element?
def genericParse(self, template, ver, rec, rpath, elem, wrapped=False, debug=False):
tag = self.modifyTag( elem, ver )
#find corresponding template
#else default in json
#template is the dictionary representation of the appropriate json file, as pulled from the config dictionary
#ttempl is now just the single row corresponding to appropriate tag
if tag in template:
ttempl = template[tag]
#print("found")
else:
#print("default")
ttempl = template["*"]
if debug:
print("\nTAG", tag, wrapped, len(rpath) )
print(" TTEM", ttempl)
        #what is the relevance of asking if the root element has a parent; won't this always be false?
#recursion^
if elem.getparent() is not None:
parentElem = elem.getparent()
if wrapped:
#going two levels up if current element is just a wrapper.
if parentElem.getparent() is not None:
parentElem = parentElem.getparent()
else:
parentElem = None
else:
parentElem = None
#just recording the parent tag
if parentElem is not None:
parentTag = self.modifyTag( parentElem, ver )
if debug:
print(" PAR", parentTag )
else:
parentTag = None
if debug:
print("PAR: ROOT ELEM !!!")
#print(parentTag)
#noting down template for same attributes found under different parents;
#this is the second level in the json files.
if parentTag is not None and parentTag in ttempl:
ctempl = ttempl[parentTag]
else:
ctempl = ttempl["*"]
if debug:
print(" CTEMPL", ctempl )
# find wrap flag
if "wrapper" in ctempl and ctempl["wrapper"] is not None:
cwrap = ctempl["wrapper"]
else:
#default
cwrap = template["*"]["*"]["wrapper"]
#cwrap is a boolean, says whether current element is a wrapper
if debug:
print( " CWRAP ", cwrap )
if cwrap:
#recursing one child deeper if current element is just a wrapper.
for cchld in elem:
if debug:
print(" CHLD",cchld.tag);
                self.genericParse( template, ver, rec, rpath, cchld, wrapped=True, debug=debug )
if debug:
print( json.dumps(self.root, indent=2) )
return
# find current key:
#checking the attributes, not the tags
if "name" in ctempl and ctempl["name"] is not None:
ckey = ctempl["name"]
else:
ckey = tag
# find complex flag
if "complex" in ctempl and ctempl["complex"] is not None:
ccomplex = ctempl["complex"]
else:
#default
ccomplex = template["*"]["*"]["complex"]
# test if reference
# ref is of form /entrySet/entry/experimentList, which is xpath, using actual tags.
rtgt = None
if "reference" in ctempl and ctempl["reference"] is not None:
rtgt = ctempl["reference"]
else:
#default
rtgt = template["*"]["*"]["reference"]
# find current store type (direct/list/index)
if "store" in ctempl and ctempl["store"] is not None:
cstore = ctempl["store"]
else:
#default
cstore = template["*"]["*"]["store"]
        ##adding postprocess
        if "postprocess" in ctempl and ctempl["postprocess"] is not None:
            #string corresponding to function to be called
            cpostprocess = ctempl["postprocess"]
        else:
            #default
            cpostprocess = template["*"]["*"]["postprocess"]
if debug:
print( " CKEY ", ckey )
print( " CCMPLX", ccomplex )
print( " CSTORE", cstore )
print( " CREFTG", rtgt )
if rtgt is not None:
# add referenced data
# elem.text: a reference
# rtgt : path to referenced dictionary along current path
# within record data structure
#splitting xpath into each tag
stgt = rtgt.split('/')
for i in range(1,len(stgt)):
#checks if each tag is in the rpath that has been passed in this round of recursion.
if stgt[i] in rpath[i-1]:
##what is supposed to be here??
pass
else:
break
"""rewrite as:
for i in range(len(stgt) - 1):
if stgt[i] in rpath[i]:
pass
else:
break
"""
#what is the structure and type of rpath?
#rpath starts as an empty list.
#this part makes no sense, what i is this?
#lastmatch is assigned to
lastmatch = rpath[i-2][stgt[i-1]]
if cstore == "list":
if ckey not in rec:
rec[ckey] = []
#appends the information
rec[ckey].append( lastmatch[stgt[i]][elem.text] )
elif cstore == "direct":
rec[ckey] = lastmatch[stgt[i]][elem.text]
else:
# build/add current value
#just assigns cvalue to the text string if not complex
            #if complex, cvalue is a dictionary with the text string under the "value" key
cvalue = None
if ccomplex:
cvalue = {}
if elem.text:
val = str(elem.text).strip()
if len(val) > 0:
cvalue["value"] = val
else:
cvalue = str( elem.text )
#if ckey in rec:
# print(ckey, rec[ckey])
if cstore == "direct":
#rec is a dictionary
##is rec a nested dictionary or is each ckey on the same level?
##can test this and just skip the reference part?
            #ckey is either the tag of the element (first thing after < in xml), or the name as defined in the json
#this assigns the actual text of an element to its name.
#creating new key value pair in rec, key is ckey (tag or name), value is cvalue (text in element)
rec[ckey] = cvalue
elif cstore == "list":
if ckey not in rec:
rec[ckey] = []
#if list, adds text to the list that is the value of the key (tag or name)
rec[ckey].append(cvalue)
elif cstore == "index":
if ckey not in rec:
rec[ckey] = {}
if "ikey" in ctempl and ctempl["ikey"] is not None:
ikey = ctempl["ikey"]
else:
#default
ikey = template["*"]["*"]["ikey"]
if ikey.startswith("@"):
#ex iattr = id
iattr= ikey[1:]
#getting the attribute from the xml tree
ikeyval = elem.get(iattr)
                #inside rec dictionary, the value of ckey (when store is index) is another dictionary, containing
                #a key (value of attribute in xml) whose value is cvalue (just elem.text or a dictionary with elem.text)
"""
ex:
xml: <primaryRef db="psi-mi" dbAc="MI:0488" id="MI:0465" refType="identity" refTypeAc="MI:0356"/>
json: "availability":{"entry":{"store":"index","ikey":"@id","name":"availabilityList"}}, (pretend these correspond)
ikey = "@id"
iattr = "id"
ikeyval = "MI:0465"
rec[availabilityList(from name)][MI:0465] = **cvalue (what is cvalue in this case; no text)
"""
if ikeyval is not None:
rec[ckey][ikeyval] = cvalue
#create xrefInd inside xref
#what is this? both json do not have index as a key in ctempl
if "index" in ctempl and ctempl["index"] is not None:
ckeyRefInd = ctempl["index"]["name"]
rec[ckey][ckeyRefInd] = {}
# parse elem contents
# add dom attributes
#elem.attrib probably returns a dictionary with the attributes in xml and their values.
for cattr in elem.attrib:
#cvalue can be assumed to be dictionary here because it must be complex if contains attributes.
#in addition to "value": value, adds each attribute and its value to cvalue.
cvalue[cattr] = str( elem.attrib[cattr] )
cpath = []
        #seems like rpath is supposed to contain a list of every tag: cvalue pair up to the current element
#hence, structure is in the order of rpath list
#here, we append the current tag: cvalue pair to the end and pass it in for recursion.
for p in rpath:
cpath.append(p)
#append wrapper for later reconstruction too?
cpath.append( {tag: cvalue })
if 'index' in ctempl:
iname = ctempl["index"]["name"]
ientry = ctempl["index"]["entry"]
#this is important for defining the structure of this rpath
#
for cchld in elem:
cchldTag = self.modifyTag(cchld, ver)
if debug:
print(" CHLD", cchld.tag);
#passes in cvalue for rec; can assume it is a dictionary since only the last child
#will be non complex and will skip all instances of treating it like a dictionary
#
            self.genericParse( template, ver, cvalue, cpath, cchld, debug=debug )
if 'index' in ctempl:
if cchldTag in ientry and ientry[cchldTag] is not None:
keyList = ientry[cchldTag]["key"]
kval = []
for k in keyList:
kvl = cchld.xpath(k)
if kvl:
kval.append(kvl[0])
dbkey = ':'.join(kval)
rec[ckey][ckeyRefInd][dbkey] = cvalue[cchldTag][0] if type(cvalue[cchldTag]) is list else cvalue[cchldTag]
if debug:
print( json.dumps(self.root, indent=2) )
self.path = cpath
return
class dumbClass():
def __init__(self):
self.name= "Nolan"
def changeName(self, newName):
self.name = newName
poop = dumbClass()
print(poop.name)
poop.changeName("name")
print(poop.name)
dictionary = {1:"first", 2:"two"}
dictionary[3] = "three"
print(dictionary)
for i in dictionary:
print(i)
print(dictionary[i])
print(dictionary)
print(dictionary.get(0))
def myFunc(number):
return 3*number
print(myFunc(4))
def myFunc2(number):
print(number)
if number == 0:
return number
return myFunc2(number-1)
print(myFunc2(20))
test = "plop dar5"
test2 = [char for char in test]
for i in test2:
if i.isnumeric():
print("yes")
print(test2)
test3 = "3"
print(int(test3))
print(bytes(5))
print(root[0][0].tag[28:])
print(root[0][0].attrib)
```
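As a side note on the `tag[28:]` slicing used throughout the parser above: stripping the namespace by a hard-coded offset breaks as soon as the namespace URI changes length. A more robust sketch (stdlib only; the element names here are illustrative):

```python
from xml.etree import ElementTree as ET

def localname(elem):
    # Strip the '{namespace-uri}' prefix from a tag, whatever its length.
    return elem.tag.rsplit('}', 1)[-1]

root = ET.fromstring('<a xmlns="http://example.com/ns"><participant id="1"/></a>')
print(localname(root[0]))  # participant
```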
# Prepare and Deploy a TensorFlow Model to AI Platform for Online Serving
This Notebook demonstrates how to prepare a TensorFlow 2.x model and deploy it for serving with AI Platform Prediction. This example uses the pretrained [ResNet V2 101](https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4) image classification model from [TensorFlow Hub](https://tfhub.dev/) (TF Hub).
The Notebook covers the following steps:
1. Downloading and running the ResNet module from TF Hub
2. Creating serving signatures for the module
3. Exporting the model as a SavedModel
4. Deploying the SavedModel to AI Platform Prediction
5. Validating the deployed model
## Setup
This Notebook was tested on **AI Platform Notebooks** using the standard TF 2.2 image.
### Import libraries
```
import base64
import os
import json
import requests
import time
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
from typing import List, Optional, Text, Tuple
```
### Configure GCP environment settings
```
PROJECT_ID = 'jk-mlops-dev' # Set your project Id
BUCKET = 'labs-workspace' # Set your bucket name Id
REGION = 'us-central' # Set your region for deploying the model
MODEL_NAME = 'resnet_101'
MODEL_VERSION = 'v1'
GCS_MODEL_LOCATION = 'gs://{}/models/{}/{}'.format(BUCKET, MODEL_NAME, MODEL_VERSION)
THUB_MODEL_HANDLE = 'https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4'
IMAGENET_LABELS_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'
IMAGES_FOLDER = 'test_images'
!gcloud config set project $PROJECT_ID
```
### Create a local workspace
```
LOCAL_WORKSPACE = '/tmp/workspace'
if tf.io.gfile.exists(LOCAL_WORKSPACE):
print("Removing previous workspace artifacts...")
tf.io.gfile.rmtree(LOCAL_WORKSPACE)
print("Creating a new workspace...")
tf.io.gfile.makedirs(LOCAL_WORKSPACE)
```
## 1. Loading and Running the ResNet Module
### 1.1. Download and instantiate the model
```
os.environ["TFHUB_DOWNLOAD_PROGRESS"] = 'True'
local_savedmodel_path = hub.resolve(THUB_MODEL_HANDLE)
print(local_savedmodel_path)
!ls -la {local_savedmodel_path}
model = hub.load(THUB_MODEL_HANDLE)
```
The expected input to most TF Hub TF2 image classification models, including ResNet 101, is a rank 4 tensor conforming to the following tensor specification: `tf.TensorSpec([None, height, width, 3], tf.float32)`. For the ResNet 101 model, the expected image size is `height x width = 224 x 224`. The color values for all channels are expected to be normalized to the [0, 1] range.
The output of the model is a batch of logits vectors. The indices into the logits are the `num_classes = 1001` classes from the ImageNet dataset. The mapping from indices to class labels can be found in the [labels file](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt), with class 0 for "background", followed by the 1000 actual ImageNet classes.
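As a quick illustration of that input contract (using a hypothetical random batch rather than real images):

```python
import numpy as np

# Hypothetical batch of two 224x224 RGB images with channel values in [0, 1],
# matching tf.TensorSpec([None, 224, 224, 3], tf.float32).
batch = np.random.rand(2, 224, 224, 3).astype(np.float32)
print(batch.shape)  # (2, 224, 224, 3)
```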
We will now test the model on a couple of JPEG images.
### 1.2. Display sample images
```
image_list = [tf.io.read_file(os.path.join(IMAGES_FOLDER, image_path))
for image_path in os.listdir(IMAGES_FOLDER)]
ncolumns = len(image_list) if len(image_list) < 4 else 4
nrows = int(len(image_list) // ncolumns)
fig, axes = plt.subplots(nrows=nrows, ncols=ncolumns, figsize=(10,10))
for axis, image in zip(axes.flat[0:], image_list):
decoded_image = tf.image.decode_image(image)
axis.set_title(decoded_image.shape)
axis.imshow(decoded_image.numpy())
```
### 1.3. Preprocess the testing images
The images need to be preprocessed to conform to the format expected by the ResNet101 model.
```
def _decode_and_scale(image, size):
image = tf.image.decode_image(image, expand_animations=False)
image_height = image.shape[0]
image_width = image.shape[1]
crop_size = tf.minimum(image_height, image_width)
offset_height = ((image_height - crop_size) + 1) // 2
offset_width = ((image_width - crop_size) + 1) // 2
image = tf.image.crop_to_bounding_box(image, offset_height, offset_width, crop_size, crop_size)
image = tf.cast(tf.image.resize(image, [size, size]), tf.uint8)
return image
size = 224
raw_images = tf.stack(image_list)
preprocessed_images = tf.map_fn(lambda x: _decode_and_scale(x, size), raw_images, dtype=tf.uint8)
preprocessed_images = tf.image.convert_image_dtype(preprocessed_images, tf.float32)
print(preprocessed_images.shape)
```
### 1.4. Run inference
```
predictions = model(preprocessed_images)
predictions
```
The model returns a batch of arrays with logits. This is not a very user-friendly output, so we will convert it to a list of ImageNet class labels.
```
labels_path = tf.keras.utils.get_file(
'ImageNetLabels.txt',
IMAGENET_LABELS_URL)
imagenet_labels = np.array(open(labels_path).read().splitlines())
```
We will display the 5 highest-ranked labels for each image.
```
for prediction in list(predictions):
decoded = imagenet_labels[np.argsort(prediction.numpy())[::-1][:5]]
print(list(decoded))
```
## 2. Create Serving Signatures
The inputs and outputs of the model as used during model training may not be optimal for serving. For example, in a typical training pipeline, feature engineering is performed as a separate step preceding model training and hyperparameter tuning. When serving the model, it may be more optimal to embed the feature engineering logic into the serving interface rather than require a client application to preprocess data.
The ResNet V2 101 model from TF Hub is optimized for recomposition and fine-tuning. Since there are no serving signatures in the model's metadata, it cannot be served with TF Serving as-is.
```
list(model.signatures)
```
To make it servable, we need to add one or more serving signatures describing the inference method(s) of the model.
We will add two signatures:
1. **The default signature** - This will expose the default predict method of the ResNet101 model.
2. **Pre-/post-processing signature** - Since the expected inputs to this interface require relatively complex image preprocessing to be performed by a client, we will also expose an alternative signature that embeds the preprocessing and postprocessing logic, accepts raw unprocessed images, and returns the list of ranked class labels with the associated label probabilities.
The signatures are created by defining a custom module class derived from the `tf.Module` base class that encapsulates our ResNet model and extends it with a method implementing the image preprocessing and output postprocessing logic. The default method of the custom module is mapped to the default method of the base ResNet module to maintain the analogous interface.
The custom module will be exported as `SavedModel` that includes the original model, the preprocessing logic, and two serving signatures.
This technique can be generalized to other scenarios where you need to extend a TensorFlow model and you have access to the serialized `SavedModel` but you don't have access to the Python code implementing the model.
### 2.1. Define the custom serving module
```
LABELS_KEY = 'labels'
PROBABILITIES_KEY = 'probabilities'
NUM_LABELS = 5
class ServingModule(tf.Module):
"""
A custom tf.Module that adds image preprocessing and output post processing to
a base TF 2 image classification model from TF Hub.
"""
def __init__(self, base_model, input_size, output_labels):
super(ServingModule, self).__init__()
self._model = base_model
self._input_size = input_size
self._output_labels = tf.constant(output_labels, dtype=tf.string)
def _decode_and_scale(self, raw_image):
"""
Decodes, crops, and resizes a single raw image.
"""
image = tf.image.decode_image(raw_image, dtype=tf.dtypes.uint8, expand_animations=False)
image_shape = tf.shape(image)
image_height = image_shape[0]
image_width = image_shape[1]
crop_size = tf.minimum(image_height, image_width)
offset_height = ((image_height - crop_size) + 1) // 2
offset_width = ((image_width - crop_size) + 1) // 2
image = tf.image.crop_to_bounding_box(image, offset_height, offset_width, crop_size, crop_size)
image = tf.image.resize(image, [self._input_size, self._input_size])
image = tf.cast(image, tf.uint8)
return image
def _preprocess(self, raw_inputs):
"""
Preprocesses raw inputs as sent by the client.
"""
# A mitigation for https://github.com/tensorflow/tensorflow/issues/28007
with tf.device('/cpu:0'):
images = tf.map_fn(self._decode_and_scale, raw_inputs, dtype=tf.uint8)
images = tf.image.convert_image_dtype(images, tf.float32)
return images
def _postprocess(self, model_outputs):
"""
Postprocesses outputs returned by the base model.
"""
probabilities = tf.nn.softmax(model_outputs)
indices = tf.argsort(probabilities, axis=1, direction='DESCENDING')
return {
LABELS_KEY: tf.gather(self._output_labels, indices, axis=-1)[:,:NUM_LABELS],
PROBABILITIES_KEY: tf.sort(probabilities, direction='DESCENDING')[:,:NUM_LABELS]
}
@tf.function(input_signature=[tf.TensorSpec([None, 224, 224, 3], tf.float32)])
def __call__(self, x):
"""
A pass-through to the base model.
"""
return self._model(x)
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def predict_labels(self, raw_images):
"""
Preprocesses inputs, calls the base model
and postprocesses outputs from the base model.
"""
# Call the preprocessing handler
images = self._preprocess(raw_images)
# Call the base model
logits = self._model(images)
# Call the postprocessing handler
outputs = self._postprocess(logits)
return outputs
serving_module = ServingModule(model, 224, imagenet_labels)
```
### 2.2. Test the custom serving module
```
predictions = serving_module.predict_labels(raw_images)
predictions
```
## 3. Save the custom serving module as `SavedModel`
```
model_path = os.path.join(LOCAL_WORKSPACE, MODEL_NAME, MODEL_VERSION)
default_signature = serving_module.__call__.get_concrete_function()
preprocess_signature = serving_module.predict_labels.get_concrete_function()
signatures = {
'serving_default': default_signature,
'serving_preprocess': preprocess_signature
}
tf.saved_model.save(serving_module, model_path, signatures=signatures)
```
### 3.1. Inspect the `SavedModel`
```
!saved_model_cli show --dir {model_path} --tag_set serve --all
```
### 3.2. Test loading and executing the `SavedModel`
```
model = tf.keras.models.load_model(model_path)
model.predict_labels(raw_images)
```
### 3.3. Copy the model to Google Cloud Storage
```
!gsutil cp -r {model_path} {GCS_MODEL_LOCATION}
!gsutil ls {GCS_MODEL_LOCATION}
```
## License
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Question 1a: %timeit
You may know from your experiences with matlab that you should always prefer vector- or matrix-based operations over for loops, if possible (hence the name **mat**(rix)**lab**(oratory)). The same is true of python -- you should prefer numpy-array-based operations over for loops. This will also be important for tensorflow -- as much as possible, you should avoid using python for loops when writing tensorflow code. To examine the impact of using for loops over numpy-array-based operations, for this question, you will exploit one of jupyter's built-in magic commands, `%timeit`:
```
import numpy as np
%timeit np.zeros((100,100)) # provide statistics on how long it takes to generate a 100x100 array of 0s
```
As you can see, all you need to do is put `%timeit` before the command that you would normally run and jupyter will run that line multiple times to generate computation timing statistics.
Now, let's compare the computation timing for multiplying two random matrices, each with a dimension of 100x100, using 1) `np.matmul` and 2) multiple embedded for loops. For (2), please write your own function to implement the for loops. Feel free to wrap (2) into a function definition. Verify that (1) and (2) produce the same output. According to `%timeit`, how many times faster is (1) than (2)?
```
# your code here
```
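One possible sketch (not a graded solution; `matmul_loops` is an illustrative name, and a smaller size is used here so the loop version finishes quickly — for the actual timing comparison, use 100x100 with `%timeit`):

```python
import numpy as np

def matmul_loops(a, b):
    # Naive triple-loop matrix multiplication, for timing against np.matmul.
    n, k = a.shape
    _, m = b.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

a, b = np.random.rand(50, 50), np.random.rand(50, 50)
assert np.allclose(matmul_loops(a, b), np.matmul(a, b))
# In separate notebook cells, with 100x100 matrices:
# %timeit np.matmul(a, b)
# %timeit matmul_loops(a, b)
```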
# Question 1b
There are two main ways of computing convolutions digitally: 1) directly, using the definition of a convolution, and 2) using the convolution theorem that you proved in the written portion of this homework assignment (i.e., using ffts). Which method is more efficient depends on the sizes of the inputs. Let's use `%timeit` to compare the speeds for 1D convolutions using [`scipy.signal.convolve `](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve.html). This function has an argument called "method", which can be set to "direct" or "fft", which correspond to (1) and (2) above. Use this function to convolve two random 1D signals of lengths $n=100, 500, 1000,$ and $2000$, and compare the speed of both methods. For which n do(es) method 1 outperform method 2, and vice versa? Can you make any generalizations based on these results about when one method outperforms the other?
```
from scipy.signal import convolve
# your code here; feel free to use multiple cells
```
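A minimal sketch of the comparison (illustrative only; the timing itself is done with `%timeit` in separate cells):

```python
import numpy as np
from scipy.signal import convolve

for n in [100, 500, 1000, 2000]:
    x, h = np.random.rand(n), np.random.rand(n)
    # Both methods compute the same (full) linear convolution, up to roundoff.
    direct = convolve(x, h, method='direct')
    via_fft = convolve(x, h, method='fft')
    assert np.allclose(direct, via_fft)
# In separate cells: %timeit convolve(x, h, method='direct')
#                    %timeit convolve(x, h, method='fft')
```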
# Question 2: the convolution theorem
As we investigated in question 1b, it is also possible to do convolutions using Fourier transforms, and in some cases this is the preferable method. In fact, there is some body of work investigating the use of ffts and multiplication to do convolution operations in convolutional neural networks.
For this question, to illustrate this theorem, given a convolutional kernel you will find the corresponding Fourier operation that produces the same result. To this end,
1. create a 7x7 Gaussian kernel with a standard deviation $\sigma=2$ (using a pixel grid spacing of 1)
2. load an image, if it is color then convert it to grayscale (you can just sum the 3 color channels), and then resize the image into a 128x128 array
3. compute the convolution - you can use a numpy (np) or scipy function. Make sure the output is the same size as the input image, which is slightly different than the formal definition of a discrete convolution, but is something that is usually convenient to do.
4. Find the Fourier filter that does the same operation in the Fourier domain, and show the resulting blurred image implemented using the Fourier method (i.e., if $I_{2}=I_{1}*h$, then $\mathcal{F}[I_{2}]=\mathcal{F}[I_{1}]\mathcal{F}[h]$, so find the correct array for $\mathcal{F}[h]$ and re-generate $I_2$).
```
# the following line will cause subsequent plotting commands to display directly in the notebook
%matplotlib inline
```
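A sketch of steps 1, 3, and 4 on a random 128x128 array standing in for the image (illustrative; periodic boundaries are used so the spatial result matches circular FFT convolution exactly, and the `np.roll` compensates for the kernel's center sitting at index (3, 3)):

```python
import numpy as np
from scipy.signal import convolve2d

# 1. 7x7 Gaussian kernel with sigma = 2 on a unit pixel grid, normalized to sum to 1
xx, yy = np.meshgrid(np.arange(-3, 4), np.arange(-3, 4))
h = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
h /= h.sum()

img = np.random.rand(128, 128)  # stand-in for the loaded grayscale image

# 3. Spatial convolution, same output size as the input
spatial = convolve2d(img, h, mode='same', boundary='wrap')

# 4. Fourier method: F[I2] = F[I1] * F[h], with h zero-padded to the image size
H = np.fft.fft2(h, s=img.shape)
circ = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
fourier = np.roll(circ, (-3, -3), axis=(0, 1))  # undo the shift from the kernel center
assert np.allclose(spatial, fourier)
```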
# Question 3: data augmentation
One indispensable tool used in deep learning is data augmentation. That is, we can to some extent artificially increase the size of our dataset by randomly altering the current dataset. One common augmenting operation is to do random crops of the original image. For example, researchers designing neural networks for ImageNet, a dataset of natural RGB images, typically resize the images to 256x256x3 and then take a random 224x224x3 crop such that the latter fits entirely in the former.
For this question, take a picture with your phone or find a picture online, load it into jupyter, resize it to 256x256x3 (discard the alpha channel if one is present), and then perform the random 224x224x3 crop. The crops should be uniformly distributed within the bounding 256x256 box and do not need to be rotated. Please display the 256x256x3 image and 5 random crops using `plt.imshow`.
```
import numpy as np
import matplotlib.pyplot as plt
# your code here; feel free to use multiple cells
```
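A sketch of the uniform random crop (using a random array in place of a loaded picture; `random_crop` is an illustrative name):

```python
import numpy as np

def random_crop(image, size=224):
    # Sample the top-left corner uniformly so the crop fits entirely inside the image.
    h, w = image.shape[:2]
    top = np.random.randint(0, h - size + 1)
    left = np.random.randint(0, w - size + 1)
    return image[top:top + size, left:left + size]

img = np.random.rand(256, 256, 3)  # stand-in for the resized 256x256x3 photo
crops = [random_crop(img) for _ in range(5)]
print(crops[0].shape)  # (224, 224, 3)
```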
### Generating human faces with Adversarial Networks
<img src="images/nvidia_cool_gan.png" width="400px"/>
_© research.nvidia.com_
This time we'll train a neural net to generate plausible human faces in all their subtlety: appearance, expression, accessories, etc. 'Cuz when us machines gonna take over Earth, there won't be any more faces left. We want to preserve this data for future iterations. Yikes...
Based on https://github.com/Lasagne/Recipes/pull/94 .
## Setup
```
! shred -u setup_google_colab.py
! wget https://raw.githubusercontent.com/hse-aml/intro-to-dl/master/setup_google_colab.py -O setup_google_colab.py
import setup_google_colab
setup_google_colab.setup_week4()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
# token expires every 30 min
COURSERA_TOKEN = 'uk9amHnZfk7WA6XG'
COURSERA_EMAIL = 'dangledangkhoa@gmail.com'
import download_utils
download_utils.link_week_4_resources()
```
## Dataset
```
#The following line fetches you two datasets: images, usable for autoencoder training and attributes.
#Those attributes will be required for the final part of the assignment (applying smiles), so please keep them in mind
from lfw_dataset import load_lfw_dataset
data,attrs = load_lfw_dataset(dimx=36,dimy=36)
#preprocess faces
data = np.float32(data)/255.
IMG_SHAPE = data.shape[1:]
#print random image
plt.imshow(
data[np.random.randint(data.shape[0])],
cmap="gray", interpolation="none")
```
## Generative adversarial nets 101
<img src="images/noise_to_face.png" width="400px"/>
_© torch.github.io_
Deep learning is simple, isn't it?
* build some network that generates the face (small image)
* make up a __measure__ of __how good that face is__
* optimize with gradient descent :)
The only problem is: how can we engineers tell well-generated faces from bad? And I bet you we won't ask a designer for help.
__If we can't tell good faces from bad, we delegate it to yet another neural network!__
That makes the two of them:
* __G__enerator - takes random noise for inspiration and tries to generate a face sample.
  * Let's call him __G__(z), where z is Gaussian noise.
* __D__iscriminator - takes a face sample and tries to tell if it's real or fake.
  * Predicts the probability of the input image being a __real face__
  * Let's call him __D__(x), x being an image.
  * __D(x)__ is a prediction for a real image and __D(G(z))__ is the prediction for the face made by the generator.
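For reference, the two networks play the standard minimax game from the original GAN paper (Goodfellow et al., 2014):

$$\min_G \max_D \;\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim \mathcal{N}(0, I)}\big[\log\big(1 - D(G(z))\big)\big]$$

In practice, the generator is usually trained with the non-saturating variant, maximizing $\log D(G(z))$ instead of minimizing $\log(1 - D(G(z)))$.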
Before we dive into training them, let's construct the two networks.
```
from keras_utils import reset_tf_session
import keras
from keras.models import Sequential
from keras import layers as L
import tensorflow as tf
s = reset_tf_session()
CODE_SIZE = 256
generator = Sequential()
generator.add(L.InputLayer([CODE_SIZE],name='noise'))
generator.add(L.Dense(10*8*8, activation='elu'))
generator.add(L.Reshape((8,8,10)))
generator.add(L.Deconv2D(64,kernel_size=(5,5),activation='elu'))
generator.add(L.Deconv2D(64,kernel_size=(5,5),activation='elu'))
generator.add(L.UpSampling2D(size=(2,2)))
generator.add(L.Deconv2D(32,kernel_size=3,activation='elu'))
generator.add(L.Deconv2D(32,kernel_size=3,activation='elu'))
generator.add(L.Deconv2D(32,kernel_size=3,activation='elu'))
generator.add(L.Conv2D(3,kernel_size=3,activation=None))
assert generator.output_shape[1:] == IMG_SHAPE, \
"generator must output an image of shape %s, but instead it produces %s"%(IMG_SHAPE,generator.output_shape[1:])
```
## Discriminator
* Discriminator is your usual convolutional network with alternating convolution and pooling layers
* The network does not include dropout/batchnorm to avoid learning complications.
* We also regularize the pre-output layer to prevent discriminator from being too certain.
```
discriminator = Sequential()
discriminator.add(L.InputLayer(IMG_SHAPE))
# <build discriminator body>
discriminator.add(L.Conv2D(8, (3, 3)))
discriminator.add(L.LeakyReLU(0.1))
discriminator.add(L.Conv2D(16, (3, 3)))
discriminator.add(L.LeakyReLU(0.1))
discriminator.add(L.MaxPool2D())
discriminator.add(L.Conv2D(32, (3, 3)))
discriminator.add(L.LeakyReLU(0.1))
discriminator.add(L.Conv2D(64, (3, 3)))
discriminator.add(L.LeakyReLU(0.1))
discriminator.add(L.MaxPool2D())
discriminator.add(L.Flatten())
discriminator.add(L.Dense(256,activation='tanh'))
discriminator.add(L.Dense(2,activation=tf.nn.log_softmax))
```
## Training
We train the two networks concurrently:
* Train __discriminator__ to better distinguish real data from __current__ generator
* Train __generator__ to make discriminator think generator is real
* Since discriminator is a differentiable neural network, we train both with gradient descent.
<img src="images/gan.png" width="600px"/>
_© deeplearning4j.org_
Training is done iteratively until discriminator is no longer able to find the difference (or until you run out of patience).
### Tricks:
* Regularize discriminator output weights to prevent explosion
* Train generator with __adam__ to speed up training. Discriminator trains with SGD to avoid problems with momentum.
* More: https://github.com/soumith/ganhacks
```
noise = tf.placeholder('float32',[None,CODE_SIZE])
real_data = tf.placeholder('float32',[None,]+list(IMG_SHAPE))
logp_real = discriminator(real_data)
generated_data = generator(noise) # <gen(noise)>
logp_gen = discriminator(generated_data) # <log P(real | gen(noise))
########################
#discriminator training#
########################
d_loss = -tf.reduce_mean(logp_real[:,1] + logp_gen[:,0])
#regularize
d_loss += tf.reduce_mean(discriminator.layers[-1].kernel**2)
#optimize
disc_optimizer = tf.train \
.GradientDescentOptimizer(1e-3) \
.minimize(d_loss,var_list=discriminator.trainable_weights)
########################
###generator training###
########################
g_loss = -tf.reduce_mean(logp_gen[:,1]) # <generator loss>
gen_optimizer = tf.train \
.AdamOptimizer(1e-4) \
.minimize(g_loss,var_list=generator.trainable_weights)
s.run(tf.global_variables_initializer())
```
### Auxiliary functions
Here we define a few helper functions that draw current data distributions and sample training batches.
```
def sample_noise_batch(bsize):
return np.random.normal(size=(bsize, CODE_SIZE)).astype('float32')
def sample_data_batch(bsize):
idxs = np.random.choice(np.arange(data.shape[0]), size=bsize)
return data[idxs]
def sample_images(nrow,ncol, sharp=False):
images = generator.predict(sample_noise_batch(bsize=nrow*ncol))
if np.var(images)!=0:
images = images.clip(np.min(data),np.max(data))
for i in range(nrow*ncol):
plt.subplot(nrow,ncol,i+1)
if sharp:
plt.imshow(images[i].reshape(IMG_SHAPE),cmap="gray", interpolation="none")
else:
plt.imshow(images[i].reshape(IMG_SHAPE),cmap="gray")
plt.show()
def sample_probas(bsize):
plt.title('Generated vs real data')
plt.hist(np.exp(discriminator.predict(sample_data_batch(bsize)))[:,1],
label='D(x)', alpha=0.5,range=[0,1])
plt.hist(np.exp(discriminator.predict(generator.predict(sample_noise_batch(bsize))))[:,1],
label='D(G(z))',alpha=0.5,range=[0,1])
plt.legend(loc='best')
plt.show()
```
### Training
Main loop.
We just train generator and discriminator in a loop and draw results once every N iterations.
```
import tqdm_utils
for epoch in tqdm_utils.tqdm_notebook_failsafe(range(50000)):
feed_dict = {
real_data:sample_data_batch(100),
noise:sample_noise_batch(100)
}
for i in range(5):
s.run(disc_optimizer,feed_dict)
s.run(gen_optimizer,feed_dict)
if epoch %100==0:
display.clear_output(wait=True)
sample_images(2,3,True)
sample_probas(1000)
#The network was trained for about 15k iterations.
#Training for longer yields MUCH better results
plt.figure(figsize=[16,24])
sample_images(16,8)
```
## Submission
```
from submit_honor import submit_honor
submit_honor((generator, discriminator), COURSERA_EMAIL, COURSERA_TOKEN)
```
# 3D MNIST
https://medium.com/shashwats-blog/3d-mnist-b922a3d07334
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
from matplotlib import animation
# import seaborn as sns
import h5py
import os, sys
sys.path.append('data/')
from voxelgrid import VoxelGrid
from plot3D import *
%matplotlib inline
# plt.rcParams['image.interpolation'] = None
plt.rcParams['image.cmap'] = 'gray'
with h5py.File('./3d-mnist-kaggle/train_point_clouds.h5', 'r') as f:
# Reading digit at zeroth index
a = f["0"]
# Storing group contents of digit a
digit = (a["img"][:], a["points"][:], a.attrs["label"])
digits = []
with h5py.File("./3d-mnist-kaggle/train_point_clouds.h5", 'r') as h5:
for i in range(15):
d = h5[str(i)]
digits.append((d["img"][:],d["points"][:],d.attrs["label"]))
len(digits)
plt.imshow(digit[0])
# Plot some examples from original 2D-MNIST
fig, axs = plt.subplots(3,5, figsize=(12, 12), facecolor='w', edgecolor='k')
fig.subplots_adjust(hspace = .5, wspace=.2)
for ax, d in zip(axs.ravel(), digits):
ax.imshow(d[0][:])
ax.set_title("Digit: " + str(d[2]))
digit[0].shape, digit[1].shape
voxel_grid = VoxelGrid(digit[1], x_y_z = [16, 16, 16])
def count_plot(array):
cm = plt.cm.get_cmap('gist_rainbow')
n, bins, patches = plt.hist(array, bins=64)
bin_centers = 0.5 * (bins[:-1] + bins[1:])
# scale values to interval [0,1]
col = bin_centers - min(bin_centers)
col /= max(col)
for c, p in zip(col, patches):
plt.setp(p, 'facecolor', cm(c))
plt.show()
voxel_grid.structure[:, -1]
# Get the count of points within each voxel.
plt.title("DIGIT: " + str(digits[0][-1]))
plt.xlabel("VOXEL")
plt.ylabel("POINTS INSIDE THE VOXEL")
count_plot(voxel_grid.structure[:,-1])
voxels = []
for d in digits:
voxels.append(VoxelGrid(d[1], x_y_z=[16,16,16]))
# Visualizing the Voxel Grid sliced around the z-axis.
voxels[0].plot()
plt.show()
# Save Voxel Grid Structure as the scalar field of Point Cloud.
cloud_vis = np.concatenate((digit[1], voxel_grid.structure), axis=1)
np.savetxt('Cloud Visualization - ' + str(digit[2]) + '.txt', cloud_vis)
for i in range(cloud_vis.shape[1]):
plt.figure()
plt.plot(cloud_vis[:,i], '.')
```
# Train Classifier
```
with h5py.File("./3d-mnist-kaggle/full_dataset_vectors.h5", 'r') as h5:
X_train, y_train = h5["X_train"][:], h5["y_train"][:]
X_test, y_test = h5["X_test"][:], h5["y_test"][:]
X_train.shape, y_train.shape, X_test.shape, y_test.shape
np.max(X_train[0])
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier as KNN
from sklearn.ensemble import RandomForestClassifier
reg = LogisticRegression()
reg.fit(X_train,y_train)
print("LR-Accuracy: ", reg.score(X_test,y_test))
dt = DecisionTreeClassifier()
dt.fit(X_train,y_train)
print("DT-Accuracy: ", dt.score(X_test,y_test))
svm = LinearSVC()
svm.fit(X_train,y_train)
print("SVM-Accuracy: ", svm.score(X_test,y_test))
knn = KNN()
knn.fit(X_train,y_train)
print("KNN-Accuracy: ", knn.score(X_test,y_test))
rf = RandomForestClassifier(n_estimators=500)
rf.fit(X_train,y_train)
print("RF-Accuracy: ", rf.score(X_test,y_test))
```
# Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. We'll build a convolutional autoencoder to compress the MNIST dataset.
>The encoder portion will be made of convolutional and pooling layers and the decoder will be made of **transpose convolutional layers** that learn to "upsample" a compressed representation.
<img src='notebook_ims/autoencoder_1.png' />
### Compressed Representation
A compressed representation can be great for saving and sharing any kind of data in a way that is more efficient than storing raw data. In practice, the compressed representation often holds key information about an input image and we can use it for denoising images or other kinds of reconstruction and transformation!
<img src='notebook_ims/denoising.png' width=60%/>
Let's get started by importing our libraries and getting the dataset.
```
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# load the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# Create training and test dataloaders
num_workers = 0
# how many samples per batch to load
batch_size = 20
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)
```
### Visualize the Data
```
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize = (5,5))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
```
---
## Convolutional Autoencoder
#### Encoder
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers.
#### Decoder
The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide, reconstructed image. For example, the representation could be a 7x7x4 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the compressed representation. A schematic of the network is shown below.
<img src='notebook_ims/conv_enc_1.png' width=640px>
Here our final encoder layer has size 7x7x4 = 196. The original images have size 28x28 = 784, so the encoded vector is 25% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, in fact, you're encouraged to add additional layers to make this representation even smaller! Remember our goal here is to find a small representation of the input data.
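The shape arithmetic above can be sanity-checked directly; the layer depths below are just the suggested ones, change them and re-run:

```python
# Each conv keeps spatial dims (3x3 kernel, padding=1); each 2x2 max-pool halves them.
h = w = 28                   # input image: 28x28x1
h, w = h // 2, w // 2        # conv1 + pool -> 14x14x16
h, w = h // 2, w // 2        # conv2 + pool -> 7x7x4
depth = 4

encoded_size = h * w * depth
print(encoded_size, encoded_size / (28 * 28))  # 196 0.25
```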
### Transpose Convolutions, Decoder
This decoder uses **transposed convolutional** layers to increase the width and height of the input layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. PyTorch provides us with an easy way to create the layers, [`nn.ConvTranspose2d`](https://pytorch.org/docs/stable/nn.html#convtranspose2d).
It is important to note that transpose convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer.
> We'll show this approach in another notebook, so you can experiment with it and see the difference.
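As a rough sketch of that resize-then-convolve alternative: nearest-neighbour upsampling simply repeats each pixel, and a regular convolution would then follow. This toy NumPy version (not the notebook's actual decoder) shows just the upsampling step:

```python
import numpy as np

def nn_upsample(x, factor=2):
    # nearest-neighbour upsampling: repeat each row and column `factor` times
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

patch = np.array([[1., 2.],
                  [3., 4.]])
print(nn_upsample(patch))
# [[1. 1. 2. 2.]
#  [1. 1. 2. 2.]
#  [3. 3. 4. 4.]
#  [3. 3. 4. 4.]]
```

A `Conv2d` applied after this step would then smooth the repeated pixels without the kernel-overlap artifacts.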
#### TODO: Build the network shown above.
> Build the encoder out of a series of convolutional and pooling layers.
> When building the decoder, recall that transpose convolutional layers can upsample an input by a factor of 2 using a stride and kernel_size of 2.
```
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class ConvAutoencoder(nn.Module):
def __init__(self):
super(ConvAutoencoder, self).__init__()
## encoder layers ##
# conv layer (depth from 1 --> 16), 3x3 kernels
self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
# conv layer (depth from 16 --> 4), 3x3 kernels
self.conv2 = nn.Conv2d(16, 4, 3, padding=1)
# pooling layer to reduce x-y dims by two; kernel and stride of 2
self.pool = nn.MaxPool2d(2, 2)
## decoder layers ##
## a kernel of 2 and a stride of 2 will increase the spatial dims by 2
self.t_conv1 = nn.ConvTranspose2d(4, 16, 2, stride=2)
self.t_conv2 = nn.ConvTranspose2d(16, 1, 2, stride=2)
def forward(self, x):
## encode ##
# add hidden layers with relu activation function
# and maxpooling after
x = F.relu(self.conv1(x))
x = self.pool(x)
# add second hidden layer
x = F.relu(self.conv2(x))
x = self.pool(x) # compressed representation
## decode ##
# add transpose conv layers, with relu activation function
x = F.relu(self.t_conv1(x))
# output layer (with sigmoid for scaling from 0 to 1)
x = torch.sigmoid(self.t_conv2(x))
return x
# initialize the NN
model = ConvAutoencoder()
print(model)
```
---
## Training
Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
We are not concerned with labels in this case, just images, which we can get from the `train_loader`. Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing quantities rather than probabilistic values. So, in this case, I'll use `MSELoss`. And compare output images and input images as follows:
```
loss = criterion(outputs, images)
```
Otherwise, this is pretty straightforward training with PyTorch. Since this is a convolutional autoencoder, our images _do not_ need to be flattened before being passed in as input to our model.
```
# specify loss function
criterion = nn.MSELoss()
# specify optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# number of epochs to train the model
n_epochs = 30
for epoch in range(1, n_epochs+1):
# monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data in train_loader:
# _ stands in for labels, here
# no need to flatten images
images, _ = data
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
outputs = model(images)
# calculate the loss
loss = criterion(outputs, images)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update running training loss
train_loss += loss.item()*images.size(0)
# print avg training statistics
train_loss = train_loss/len(train_loader)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(
epoch,
train_loss
))
```
## Checking out the results
Below I've plotted some of the test images along with their reconstructions. These look a little rough around the edges, likely due to the checkerboard effect we mentioned above that tends to happen with transpose layers.
```
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
# get sample outputs
output = model(images)
# prep images for display
images = images.numpy()
# output is resized into a batch of images
output = output.view(batch_size, 1, 28, 28)
# use detach when it's an output that requires_grad
output = output.detach().numpy()
# plot the first ten input images and then reconstructed images
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))
# input images on top row, reconstructions on bottom
for images, row in zip([images, output], axes):
for img, ax in zip(images, row):
ax.imshow(np.squeeze(img), cmap='gray')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
```
```
import os
import shutil
from collections import OrderedDict
from copy import deepcopy
import logging
import torch
import torch.nn as nn
import torch.nn.functional as F
import scipy.io
import numpy as np
from numpy import exp,arange
from pylab import meshgrid,cm,imshow,contour,clabel,colorbar,axis,title,show
from tqdm import trange, tqdm
from matplotlib import pyplot as plt
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(message)s', datefmt='%d-%b-%y %H:%M:%S')
```
## **Physics Informed Neural Network**
**Task 1:** Given fixed model parameters λ what can be said about the unknown hidden state u(t, x)
of the system?
**Task 2:** what are the parameters λ that best describe the observed data?
**New Learning**
1. `meshgrid` creates a rectangular grid out of two given one-dimensional arrays, using either Cartesian or matrix indexing.
2. `subplots` returns a figure and axes objects for plotting sub-figures within the main figure.
3. `scipy.io` has a function to read matlab files. `.mat` extension are files that are in the binary data container format that the MATLAB program uses.
4. **Xavier Initialization:** In a word, the Xavier initialization method tries to initialize weights with a smarter value, such that neurons won’t start training in saturation. Basically it tries to make sure the distribution of the inputs to each activation function is zero mean and unit variance.
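A minimal NumPy sketch of the uniform Xavier/Glorot rule; the fan sizes here are made up for illustration:

```python
import numpy as np

def xavier_init(fan_in, fan_out, rng=None):
    # uniform Glorot: the limit is chosen so the layer roughly preserves
    # activation variance (zero mean, scale ~ sqrt(2 / (fan_in + fan_out)))
    rng = rng or np.random.default_rng(0)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = xavier_init(256, 128)
limit = np.sqrt(6.0 / (256 + 128))
print(W.shape, np.abs(W).max() <= limit)  # (256, 128) True
```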
```
DATA_PATH = "../Data/burgers_shock.mat"
data_dict = scipy.io.loadmat(DATA_PATH)
data_dict.keys()
x_data,t_data, u_data = data_dict['x'], data_dict['t'], data_dict['usol']
x_data.shape, t_data.shape, u_data.shape
# Plot the actual value of the function at particular points in time over the range of space
# e.g. for t = 0.25
# zero out all time values except 0.25, 0.5 and 0.75
X,Y = meshgrid(t_data, x_data) # grid of point
T = deepcopy(X)
# print(T.shape[0])
for x_c in range(T.shape[0]):
for y_c in range(T.shape[1]):
if T[x_c][y_c] not in (0.25, 0.5, 0.75) : T[x_c][y_c] = 0
fig, axs = plt.subplots(2,2, sharey=True, figsize=(20,16))
axs[0][0].imshow(u_data, cmap="rainbow", interpolation="nearest", extent=[t_data.min(), t_data.max(), x_data.min(), x_data.max()], origin='lower', aspect='auto')
axs[0][0].set_title("$U(t,x)$ as image (Nearest interpolation)")
axs[0][0].set_xlabel("t (Time)")
axs[0][0].set_ylabel("x (Space)")
axs[0][1].contourf(X,Y,u_data, cmap='rainbow')
axs[0][1].set_title("$U(t,x)$ as contour")
axs[0][1].set_xlabel("t (Time)")
axs[0][1].set_ylabel("x (Space)")
axs[1][0].contourf(Y,u_data,X, cmap='rainbow')
axs[1][0].set_title("t (Time)")
axs[1][0].set_ylabel("$U(t,x)$")
axs[1][0].set_xlabel("x (Space)")
axs[1][1].contourf(Y,u_data,T, cmap="rainbow", levels=[0.25, 0.5, 0.75])
axs[1][1].set_title("t (Time)")
axs[1][1].set_ylabel("$U(t,x)$")
axs[1][1].set_xlabel("x (Space)")
fig.savefig('dataviz.png')
show()
```
# Seminar 7 - Classification with Machine Learning Methods
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter('ignore')
plt.style.use('seaborn')
%matplotlib inline
```
# Logistic Regression
## Brief theory

where the linear model is: $$ \hat{y} = f(x) = \theta_0 \cdot 1 + \theta_1 x_1 + ... + \theta_n x_n = \theta^T X$$
The activation function is $\sigma(x) = \frac{1}{1 + e^{-x}}$
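A quick numerical check of the activation (pure NumPy): the sigmoid squashes any linear score into the interval (0, 1).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# large negative scores map near 0, zero maps to 0.5, large positive near 1
print(sigmoid(np.array([-5.0, 0.0, 5.0])).round(3))  # [0.007 0.5 0.993]
```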
```
from sklearn.datasets import fetch_olivetti_faces
# load the data
data = fetch_olivetti_faces(shuffle=True)
X = data.data
y = data.target
print(X.shape, y.shape)
n_row, n_col = 2, 3
n_components = n_row * n_col
image_shape = (64, 64)
def plot_gallery(title, images, n_col=n_col, n_row=n_row, cmap=plt.cm.gray):
plt.figure(figsize=(2. * n_col, 2.26 * n_row))
plt.suptitle(title, size=16)
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
plt.imshow(comp.reshape(image_shape), cmap=cmap)
plt.axis('off')
plt.subplots_adjust(0.01, 0.05, 0.99, 0.93, 0.04, 0.)
plot_gallery("Olivetti faces", X[:n_components])
```
## Split the data into training and test sets
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
train_size=0.5,
test_size=0.5,
shuffle=True,
random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
```
## Logistic regression for multiclass classification
```
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
# split the data into train and test sets
x_train, x_test, y_train, y_test = train_test_split(X, y, train_size=0.8, shuffle=True, random_state=42)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
```
*Logistic regression can also solve multiclass classification problems. The ``LogisticRegression`` class supports this in two ways:*
- The standard One vs Rest scheme (each class is separated from all the others). Parameter `multi_class='ovr'`.
- Multinomial: using cross-entropy (a whole vector of class-membership probabilities is estimated at once). Parameter `multi_class='multinomial'`.
#### One vs Rest
Find $K-1$ classifiers $f_1, f_2, \dots, f_{K-1}$:
- $f_1$ classifies $1$ vs $\{2, 3, \dots, K\}$
- $f_2$ classifies $2$ vs $\{1, 3, \dots, K\}$
- ...
- $f_{K-1}$ classifies $K-1$ vs $\{1, 2, \dots, K-2\}$
- Points not classified to classes $\{1, 2, \dots, K-1\}$ are assigned to class $K$
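The scheme can be sketched by hand; a common variant (and what sklearn's `multi_class='ovr'` does internally) trains one binary model per class and predicts via the most confident score. An illustrative sketch on the iris data:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# one binary "class k vs rest" model per class
scores = np.column_stack([
    LogisticRegression(max_iter=1000).fit(X, (y == k).astype(int)).decision_function(X)
    for k in classes
])
pred = classes[scores.argmax(axis=1)]   # pick the most confident binary model
print((pred == y).mean())               # training accuracy of the hand-rolled OvR ensemble
```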
#### Cross-entropy
For binary classification the loss function is:
$$ -\sum_{i=1}^l \bigl( y_i \log a_i + (1-y_i) \log (1-a_i) \bigr) \rightarrow \min$$
$a_i$ is the algorithm's answer (probability) on the i-th object to the question of whether it belongs to class $y_i$
This generalizes to the multiclass case:
$$-\frac{1}{q} \sum_{i=1}^q \sum_{j=1}^l y_{ij} \log a_{ij} \rightarrow \min $$
where
$q$ is the number of objects in the sample,
$l$ is the number of classes,
$a_{ij}$ is the algorithm's answer (probability) on the i-th object to the question of whether it belongs to the j-th class
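Plugging toy numbers into the multiclass formula above (one-hot labels and made-up probabilities):

```python
import numpy as np

y = np.array([[1, 0, 0],          # q = 2 objects, l = 3 classes, one-hot labels
              [0, 1, 0]])
a = np.array([[0.7, 0.2, 0.1],    # predicted class probabilities per object
              [0.1, 0.8, 0.1]])

# -(1/q) * sum_i sum_j y_ij * log(a_ij)
loss = -(y * np.log(a)).sum(axis=1).mean()
print(round(loss, 4))  # 0.2899
```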
__Problems:__
- Finding the global minimum can be difficult, since local minima and plateaus are present
## Solvers

Source: [User Guide](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)
### Liblinear
Plain coordinate descent is used.
Algorithm:
- Initialize the weight vector with arbitrary values
- Repeat for each $i$ over the feature space:
    - fix the values of all variables except $x_i$
    - run a one-dimensional optimization over $x_i$ with any 1-D optimization method
- Once a minimum is reached along every coordinate, return the current weight vector
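The loop above, sketched on a toy least-squares problem (not liblinear's actual dual objective; here each one-dimensional step has a closed form):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
b = rng.normal(size=20)
w = np.zeros(3)

for _ in range(200):                                 # full sweeps over the coordinates
    for i in range(3):
        r = b - A @ w + A[:, i] * w[i]               # residual with coordinate i removed
        w[i] = (A[:, i] @ r) / (A[:, i] @ A[:, i])   # exact 1-D minimizer of ||Aw - b||^2

print(np.allclose(w, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```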
What this looks like when minimizing a functional:

__Drawbacks:__
1. Does not parallelize
2. Can get stuck in a local minimum
3. As a consequence of (2), it cannot use multinomial cross-entropy, since it easily gets stuck in local minima. Instead it builds a separate classifier for each class (One-vs-Rest)
```
from sklearn.model_selection import GridSearchCV
%%time
lr = LogisticRegression(solver='liblinear', multi_class='ovr')
lr.fit(x_train, y_train)
accuracy_score(lr.predict(x_test), y_test)
%%time
len_c = 10
param_grid = {
'C': np.linspace(0.01, 1, len_c),
'penalty': ['l1', 'l2']
}
gs = GridSearchCV(lr,param_grid=param_grid, cv=3, n_jobs=-1, scoring='accuracy')
gs.fit(x_train, y_train)
accuracy_score(gs.predict(x_test), y_test)
def print_cv_results(a, len_gs, params, param_r, param_sep):
d = len(params['param_grid'][param_sep])
ar = np.array(a).reshape(d, len_gs).T
df = pd.DataFrame(ar)
pen_par = params['param_grid'][param_sep]
c_par = params['param_grid'][param_r].tolist()
columns_mapper = dict(zip(range(0, len(pen_par)), pen_par))
row_mapper = dict(zip(range(0, len(c_par)), c_par))
df.rename(columns=columns_mapper, index=row_mapper, inplace=True)
plot = df.plot(title='Mean accuracy rating', grid=True)
plot.set_xlabel(param_r, fontsize=13)
plot.set_ylabel('acc', rotation=0, fontsize=13, labelpad=15)
print_cv_results(gs.cv_results_['mean_test_score'],
len_c, gs.get_params(), 'C','penalty')
```
### Stochastic Average Gradient (SAG)
A combination of full gradient descent and stochastic gradient descent.
It keeps the low per-iteration cost characteristic of SGD, but takes each step with respect to an approximation of the full gradient:
__Drawbacks:__
- No L1 penalty
- Impractical for large samples, since it has high computational complexity
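A toy sketch of the SAG update, assuming a memorized per-example gradient table and a cheaply maintained running sum (the model, step size, and data here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = X @ np.array([1.0, -2.0])             # noiseless linear targets

w = np.zeros(2)
grads = np.zeros((50, 2))                 # last-seen gradient of each example
g_sum = np.zeros(2)

for _ in range(8000):
    i = rng.integers(50)
    g_i = 2 * (X[i] @ w - y[i]) * X[i]    # fresh gradient for example i only
    g_sum += g_i - grads[i]               # update the sum without a full pass
    grads[i] = g_i
    w -= 0.02 * g_sum / 50                # step along the averaged gradient

print(np.round(w, 2))                     # approaches [ 1. -2.]
```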
```
%%time
lr = LogisticRegression(solver='sag', penalty='l2')
lr.fit(x_train, y_train)
accuracy_score(lr.predict(x_test), y_test)
%%time
len_c = 10
param_grid = {
'C': np.linspace(0.01, 1, len_c),
'multi_class': ['ovr', 'multinomial']
}
gs = GridSearchCV(lr,param_grid=param_grid, cv=3,
n_jobs=-1, scoring='accuracy')
gs.fit(x_train, y_train)
accuracy_score(gs.predict(x_test), y_test)
print_cv_results(gs.cv_results_['mean_test_score'],
len_c, gs.get_params(), 'C','multi_class')
```
### Stochastic Average Gradient Augmented (SAGA)
SAGA is a variant of SAG that additionally supports the non-smooth penalty='l1' option (i.e. L1 regularization).
Moreover, it is the only solver that supports penalty='elasticnet'.
[More details](https://www.di.ens.fr/~fbach/Defazio_NIPS2014.pdf)
```
lr_clf = LogisticRegression(solver='saga', max_iter=1500)
%%time
len_c = 10
param_grid = {
'C': np.linspace(0.01, 1, len_c),
'penalty': ['l1', 'l2']
}
gs = GridSearchCV(lr_clf,param_grid=param_grid, cv=3,
n_jobs=-1, scoring='accuracy')
gs.fit(x_train, y_train)
print_cv_results(gs.cv_results_['mean_test_score'],
len_c, gs.get_params(), 'C','penalty')
accuracy_score(gs.predict(x_test), y_test)
```
# Support Vector Machine (SVM)
## Brief theory
The optimization problem of a linear SVM can be formulated as
$$ \frac{1}{n} \sum_{i=1}^n \max(0, 1 - y_i (w X_i - b)) + \lambda ||w||_2 \to \min_w $$
This problem can be solved with gradient or subgradient methods.
-----
The dual optimization problem, in turn, is formulated as follows:
$$
\sum_{i=1}^n c_i - \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n y_i c_i (X_i \cdot X_j ) y_j c_j \to \max_{c_1,...,c_n} \text{subject to}
\sum_{i=1}^n c_iy_i = 0
$$
$$
0 \leq c_i \leq \frac{1}{2n\lambda} \forall i
$$
$$f(x) = \sum_{i=1}^n \beta_i K(x_i, x)$$
$$K: K_{i,j} = K(x_i, x_j)$$
$$ \lambda \vec{\beta^T} K \vec{\beta} + \sum_{i=1}^n L(y_i, K_i^T \vec{\beta}) \to \min_{\vec{\beta}}$$
where $L$ is the hinge loss: $L(y_i, K_i^T \vec{\beta}) = \max(0, 1 - y_i (K_i^T \vec{\beta}))$
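Evaluating the hinge loss on a few made-up margins:

```python
import numpy as np

def hinge(y, score):
    # max(0, 1 - y * f(x)): zero once the example is beyond the margin
    return np.maximum(0.0, 1.0 - y * score)

y = np.array([1, 1, -1, -1])
score = np.array([2.0, 0.3, -0.5, 1.0])   # f(x) = w.x - b, toy values
print(hinge(y, score))  # [0.  0.7 0.5 2. ]
```

Correctly classified points beyond the margin contribute nothing, which is why only the support vectors matter.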
## Playing with `sklearn`'s implementation
[original post](https://jakevdp.github.io/PythonDataScienceHandbook/05.07-support-vector-machines.html)
Generate some data
```
from sklearn.datasets import make_blobs
X, Y = make_blobs(n_samples=300, centers=2, random_state=45, cluster_std=0.6)
Y[Y == 0] = -1 # for convenience with formulas
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap='plasma')
from sklearn.svm import SVC # "Support vector classifier"
model = SVC(kernel='linear', C=1e5)
model.fit(X, Y)
def plot_svc_decision_function(model, ax=None, plot_support=True):
"""Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# create grid to evaluate model
x = np.linspace(xlim[0], xlim[1], 30)
y = np.linspace(ylim[0], ylim[1], 30)
Y, X = np.meshgrid(y, x)
xy = np.vstack([X.ravel(), Y.ravel()]).T
P = model.decision_function(xy).reshape(X.shape)
# plot decision boundary and margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# plot support vectors
if plot_support:
ax.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=300, linewidth=1, facecolors='none');
ax.set_xlim(xlim)
ax.set_ylim(ylim)
plt.scatter(X[:, 0], X[:, 1], c=Y, s=50, cmap='autumn')
plot_svc_decision_function(model);
model.support_vectors_
```
### Experiments with different kernels
```
from sklearn.datasets import make_circles
X, y = make_circles(100, factor=.1, noise=.1)
y[y == 0] = -1
clf = SVC(kernel='linear', C=1e5).fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(clf, plot_support=False);
clf = SVC(kernel='poly', degree=20, C=1e6, max_iter=1e4)
y[y == 0] = -1
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=300, lw=1, facecolors='none');
```
### Different margins for nonseparable cases
```
X, y = make_blobs(n_samples=100, centers=2,
random_state=0, cluster_std=1.2)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn');
X, y = make_blobs(n_samples=100, centers=2,
random_state=0, cluster_std=1.2)
y[y == 0] = -1
fig, ax = plt.subplots(1, 2, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
for axi, C in zip(ax, [10.0, 0.005]):
model = SVC(kernel='linear', C=C).fit(X, y)
axi.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(model, axi)
axi.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=300, lw=1, facecolors='none');
axi.set_title('C = {0:.1f}'.format(C), size=14)
```
## Practice: Sequence to Sequence for Neural Machine Translation.
*This notebook is based on [open-source implementation](https://github.com/bentrevett/pytorch-seq2seq/blob/master/1%20-%20Sequence%20to%20Sequence%20Learning%20with%20Neural%20Networks.ipynb) of seq2seq NMT in PyTorch.*
We are going to implement the model from the [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215) paper.
The model will be trained for German to English translations, but it can be applied to any problem that involves going from one sequence to another, such as summarization.
## Introduction
The most common sequence-to-sequence (seq2seq) models are *encoder-decoder* models, which (commonly) use a *recurrent neural network* (RNN) to *encode* the source (input) sentence into a single vector. In this notebook, we'll refer to this single vector as a *context vector*. You can think of the context vector as being an abstract representation of the entire input sentence. This vector is then *decoded* by a second RNN which learns to output the target (output) sentence by generating it one word at a time.

The above image shows an example translation. The input/source sentence, "guten morgen", is input into the encoder (green) one word at a time. We also append a *start of sequence* (`<sos>`) and *end of sequence* (`<eos>`) token to the start and end of sentence, respectively. At each time-step, the input to the encoder RNN is both the current word, $x_t$, as well as the hidden state from the previous time-step, $h_{t-1}$, and the encoder RNN outputs a new hidden state $h_t$. You can think of the hidden state as a vector representation of the sentence so far. The RNN can be represented as a function of both of $x_t$ and $h_{t-1}$:
$$h_t = \text{EncoderRNN}(x_t, h_{t-1})$$
We're using the term RNN generally here, it could be any recurrent architecture, such as an *LSTM* (Long Short-Term Memory) or a *GRU* (Gated Recurrent Unit).
Here, we have $X = \{x_1, x_2, ..., x_T\}$, where $x_1 = \text{<sos>}, x_2 = \text{guten}$, etc. The initial hidden state, $h_0$, is usually either initialized to zeros or a learned parameter.
Once the final word, $x_T$, has been passed into the RNN, we use the final hidden state, $h_T$, as the context vector, i.e. $h_T = z$. This is a vector representation of the entire source sentence.
Now we have our context vector, $z$, we can start decoding it to get the target sentence, "good morning". Again, we append start and end of sequence tokens to the target sentence. At each time-step, the input to the decoder RNN (blue) is the current word, $y_t$, as well as the hidden state from the previous time-step, $s_{t-1}$, where the initial decoder hidden state, $s_0$, is the context vector, $s_0 = z = h_T$, i.e. the initial decoder hidden state is the final encoder hidden state. Thus, similar to the encoder, we can represent the decoder as:
$$s_t = \text{DecoderRNN}(y_t, s_{t-1})$$
In the decoder, we need to go from the hidden state to an actual word, therefore at each time-step we use $s_t$ to predict (by passing it through a `Linear` layer, shown in purple) what we think is the next word in the sequence, $\hat{y}_t$.
$$\hat{y}_t = f(s_t)$$
We always use `<sos>` for the first input to the decoder, $y_1$, but for subsequent inputs, $y_{t>1}$, we will sometimes use the actual, ground truth next word in the sequence, $y_t$ and sometimes use the word predicted by our decoder, $\hat{y}_{t-1}$. This is called *teacher forcing*, and you can read about it more [here](https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/).
When training/testing our model, we always know how many words are in our target sentence, so we stop generating words once we hit that many. During inference (i.e. real world usage) it is common to keep generating words until the model outputs an `<eos>` token or after a certain amount of words have been generated.
Once we have our predicted target sentence, $\hat{Y} = \{ \hat{y}_1, \hat{y}_2, ..., \hat{y}_T \}$, we compare it against our actual target sentence, $Y = \{ y_1, y_2, ..., y_T \}$, to calculate our loss. We then use this loss to update all of the parameters in our model.
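The teacher-forcing coin flip described above can be sketched as follows (the function and token names are illustrative, not the notebook's API):

```python
import random

def decoder_inputs(targets, preds, ratio=0.5, seed=0):
    # build the decoder's input sequence: <sos>, then the ground-truth token
    # with probability `ratio` (teacher forcing) or the model's own guess
    rng = random.Random(seed)
    inputs = ['<sos>']
    for t in range(len(targets) - 1):
        inputs.append(targets[t] if rng.random() < ratio else preds[t])
    return inputs

targets = ['good', 'morning', '<eos>']   # y_1 .. y_T
preds   = ['nice', 'evening', '<eos>']   # hypothetical decoder outputs
inputs = decoder_inputs(targets, preds)
print(inputs)
```

Each run mixes ground-truth and predicted tokens according to the ratio; a ratio of 1.0 reproduces pure teacher forcing.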
## Preparing Data
We'll be coding up the models in PyTorch and using TorchText to help us do all of the pre-processing required. We'll also be using spaCy to assist in the tokenization of the data.
Uncomment the next cell if `torchtext` is not present on your machine.
```
# ! pip install torchtext
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.datasets import TranslationDataset, Multi30k
from torchtext.data import Field, BucketIterator
import spacy
import random
import math
import time
```
We'll set the random seeds for deterministic results.
```
SEED = 1234
random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
```
Next, we'll create the tokenizers. A tokenizer is used to turn a string containing a sentence into a list of individual tokens that make up that string, e.g. "good morning!" becomes ["good", "morning", "!"]. We'll start talking about the sentences being a sequence of tokens from now, instead of saying they're a sequence of words. What's the difference? Well, "good" and "morning" are both words and tokens, but "!" is a token, not a word.
spaCy has a model for each language ("de" for German and "en" for English) which needs to be loaded so we can access the tokenizer of each model.
**Note**: the models must first be downloaded using the following on the command line:
```
python -m spacy download en
python -m spacy download de
```
Uncomment the next cell if you are using Colab or it's the first time you work with this notebook.
__If you are working on a local machine in a dedicated env, you should run the code below in the exact env the current jupyter kernel is based on.__ E.g. using the following command if you use `conda`:
```
conda activate my_special_env
python -m spacy download en
python -m spacy download de
```
```
# ! python -m spacy download en
# ! python -m spacy download de
```
We load the models as such:
```
spacy_de = spacy.load('de')
spacy_en = spacy.load('en')
```
Next, we create the tokenizer functions. These can be passed to TorchText and will take in the sentence as a string and return the sentence as a list of tokens.
In the paper we are implementing, they find it beneficial to reverse the order of the input which they believe "introduces many short term dependencies in the data that make the optimization problem much easier".
```
def tokenize_de(text):
"""
Tokenizes German text from a string into a list of strings (tokens) and reverses it
"""
return [tok.text for tok in spacy_de.tokenizer(text)][::-1]
def tokenize_en(text):
"""
Tokenizes English text from a string into a list of strings (tokens)
"""
return [tok.text for tok in spacy_en.tokenizer(text)]
```
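To see what the reversal does without needing the spaCy models installed, here is a minimal stand-in that splits on whitespace instead of using a real tokenizer (`tokenize_de_demo` is a hypothetical helper just for illustration):

```python
# A minimal stand-in for tokenize_de: whitespace split instead of spaCy,
# just to illustrate the reversal of the source tokens.
def tokenize_de_demo(text):
    return text.split()[::-1]

print(tokenize_de_demo("zwei junge männer ."))  # ['.', 'männer', 'junge', 'zwei']
```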
TorchText's `Field`s handle how data should be processed. You can read all of the possible arguments [here](https://github.com/pytorch/text/blob/master/torchtext/data/field.py#L61).
We set the `tokenize` argument to the correct tokenization function for each, with German being the `SRC` (source) field and English being the `TRG` (target) field. The field also appends the "start of sequence" and "end of sequence" tokens via the `init_token` and `eos_token` arguments, and converts all words to lowercase.
```
SRC = Field(tokenize = tokenize_de,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
TRG = Field(tokenize = tokenize_en,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
```
Next, we download and load the train, validation and test data.
The dataset we'll be using is the [Multi30k dataset](https://github.com/multi30k/dataset). This is a dataset with ~30,000 parallel English, German and French sentences, each with ~12 words per sentence.
`exts` specifies which languages to use as the source and target (source goes first) and `fields` specifies which field to use for the source and target.
```
train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'),
fields = (SRC, TRG))
train_data[0].trg
```
We can double check that we've loaded the right number of examples:
```
print(f"Number of training examples: {len(train_data.examples)}")
print(f"Number of validation examples: {len(valid_data.examples)}")
print(f"Number of testing examples: {len(test_data.examples)}")
```
We can also print out an example, making sure the source sentence is reversed:
```
print(vars(train_data.examples[0]))
```
The period is at the beginning of the German (src) sentence, so it looks good!
Next, we'll build the *vocabulary* for the source and target languages. The vocabulary is used to associate each unique token with an index (an integer) and this is used to build a one-hot encoding for each token (a vector of all zeros except for the position represented by the index, which is 1). The vocabularies of the source and target languages are distinct.
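The token-to-index-to-one-hot mapping can be sketched in plain Python (this toy 5-token vocabulary is hypothetical, not the real Multi30k vocabulary):

```python
# Toy sketch: mapping tokens to indexes, and indexes to one-hot vectors.
vocab = {'<unk>': 0, '<pad>': 1, 'good': 2, 'morning': 3, '!': 4}

def one_hot(token, vocab):
    # all zeros except a 1 at the token's index; unseen tokens map to <unk>
    vec = [0] * len(vocab)
    vec[vocab.get(token, vocab['<unk>'])] = 1
    return vec

print(one_hot('morning', vocab))  # [0, 0, 0, 1, 0]
```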
Using the `min_freq` argument, we only allow tokens that appear at least 2 times to appear in our vocabulary. Tokens that appear only once are converted into an `<unk>` (unknown) token.
It is important to note that your vocabulary should only be built from the training set and not the validation/test set. This prevents "information leakage" into your model, giving you artificially inflated validation/test scores.
```
SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)
print(f"Unique tokens in source (de) vocabulary: {len(SRC.vocab)}")
print(f"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}")
```
The final step of preparing the data is to create the iterators. These can be iterated on to return a batch of data which will have a `src` attribute (the PyTorch tensors containing a batch of numericalized source sentences) and a `trg` attribute (the PyTorch tensors containing a batch of numericalized target sentences). Numericalized is just a fancy way of saying they have been converted from a sequence of readable tokens to a sequence of corresponding indexes, using the vocabulary.
We also need to define a `torch.device`. This is used to tell TorchText to put the tensors on the GPU or not. We use the `torch.cuda.is_available()` function, which will return `True` if a GPU is detected on our computer. We pass this `device` to the iterator.
When we get a batch of examples using an iterator we need to make sure that all of the source sentences are padded to the same length, the same with the target sentences. Luckily, TorchText iterators handle this for us!
We use a `BucketIterator` instead of the standard `Iterator` as it creates batches in such a way that it minimizes the amount of padding in both the source and target sentences.
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
```
Let's look at what's stored in a batch from the train iterator:
```
for x in train_iterator:
break
print(x)
print(x.src.shape, x.trg.shape)
```
Note that the first dimension is now `seq_len`, not `batch`. This is because PyTorch's LSTM (and other recurrent units) expect input in the format `(seq_len, batch, input_size)`. Be careful with that (especially in your homework assignment).
## Building the Seq2Seq Model
We'll be building our model in three parts. The encoder, the decoder and a seq2seq model that encapsulates the encoder and decoder and will provide a way to interface with each.
### Encoder
First, the encoder, a 2 layer LSTM. The paper we are implementing uses a 4-layer LSTM, but in the interest of training time we cut this down to 2-layers. The concept of multi-layer RNNs is easy to expand from 2 to 4 layers.
For a multi-layer RNN, the input sentence, $X$, goes into the first (bottom) layer of the RNN and hidden states, $H=\{h_1, h_2, ..., h_T\}$, output by this layer are used as inputs to the RNN in the layer above. Thus, representing each layer with a superscript, the hidden states in the first layer are given by:
$$h_t^1 = \text{EncoderRNN}^1(x_t, h_{t-1}^1)$$
The hidden states in the second layer are given by:
$$h_t^2 = \text{EncoderRNN}^2(h_t^1, h_{t-1}^2)$$
Using a multi-layer RNN also means we'll need an initial hidden state as input per layer, $h_0^l$, and we will output a context vector per layer, $z^l$.
Without going into too much detail about LSTMs (see [this](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) blog post if you want to learn more about them), all we need to know is that they're a type of RNN which instead of just taking in a hidden state and returning a new hidden state per time-step, also take in and return a *cell state*, $c_t$, per time-step.
$$\begin{align*}
h_t &= \text{RNN}(x_t, h_{t-1})\\
(h_t, c_t) &= \text{LSTM}(x_t, (h_{t-1}, c_{t-1}))
\end{align*}$$
You can just think of $c_t$ as another type of hidden state. Similar to $h_0^l$, $c_0^l$ will be initialized to a tensor of all zeros. Also, our context vector will now be both the final hidden state and the final cell state, i.e. $z^l = (h_T^l, c_T^l)$.
Extending our multi-layer equations to LSTMs, we get:
$$\begin{align*}
(h_t^1, c_t^1) &= \text{EncoderLSTM}^1(x_t, (h_{t-1}^1, c_{t-1}^1))\\
(h_t^2, c_t^2) &= \text{EncoderLSTM}^2(h_t^1, (h_{t-1}^2, c_{t-1}^2))
\end{align*}$$
Note how only our hidden state from the first layer is passed as input to the second layer, and not the cell state.
So our encoder looks something like this:

We create this in code by making an `Encoder` module, which requires we inherit from `torch.nn.Module` and use the `super().__init__()` as some boilerplate code. The encoder takes the following arguments:
- `input_dim` is the size/dimensionality of the one-hot vectors that will be input to the encoder. This is equal to the input (source) vocabulary size.
- `emb_dim` is the dimensionality of the embedding layer. This layer converts the one-hot vectors into dense vectors with `emb_dim` dimensions.
- `hid_dim` is the dimensionality of the hidden and cell states.
- `n_layers` is the number of layers in the RNN.
- `dropout` is the amount of dropout to use. This is a regularization parameter to prevent overfitting. Check out [this](https://www.coursera.org/lecture/deep-neural-network/understanding-dropout-YaGbR) for more details about dropout.
For more information about `nn.Embedding`, see these articles: [1](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/), [2](http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html), [3](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), [4](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/).
The embedding layer is created using `nn.Embedding`, the LSTM with `nn.LSTM` and a dropout layer with `nn.Dropout`. Check the PyTorch [documentation](https://pytorch.org/docs/stable/nn.html) for more about these.
One thing to note is that the `dropout` argument to the LSTM is how much dropout to apply between the layers of a multi-layer RNN, i.e. between the hidden states output from layer $l$ and those same hidden states being used for the input of layer $l+1$.
In the `forward` method, we pass in the source sentence, $X$, which is converted into dense vectors using the `embedding` layer, and then dropout is applied. These embeddings are then passed into the RNN. As we pass a whole sequence to the RNN, it will automatically do the recurrent calculation of the hidden states over the whole sequence for us! You may notice that we do not pass an initial hidden or cell state to the RNN. This is because, as noted in the [documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM), that if no hidden/cell state is passed to the RNN, it will automatically create an initial hidden/cell state as a tensor of all zeros.
The RNN returns: `outputs` (the top-layer hidden state for each time-step), `hidden` (the final hidden state for each layer, $h_T$, stacked on top of each other) and `cell` (the final cell state for each layer, $c_T$, stacked on top of each other).
As we only need the final hidden and cell states (to make our context vector), `forward` only returns `hidden` and `cell`.
The sizes of each of the tensors are left as comments in the code. In this implementation `n_directions` will always be 1, because for now we are working only with a unidirectional LSTM.
```
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.input_dim = input_dim
self.emb_dim = emb_dim
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = # <YOUR CODE HERE>
self.rnn = # <YOUR CODE HERE>
self.dropout = # <YOUR CODE HERE>
def forward(self, src):
#src = [src sent len, batch size]
# Compute an embedding from the src data and apply dropout to it
embedded = # <YOUR CODE HERE>
#embedded = [src sent len, batch size, emb dim]
# Compute the RNN output values of the encoder RNN.
# outputs, hidden and cell should be initialized here. Refer to nn.LSTM docs ;)
output, (hidden, cell) = # <YOUR CODE HERE>
#outputs = [src sent len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#outputs are always from the top hidden layer
return hidden, cell
```
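The skeleton above is deliberately left as an exercise, but here is one possible way to fill in the blanks (a sketch, not necessarily the reference answer; `EncoderSolution` is a hypothetical name chosen to avoid clashing with the exercise class):

```python
import torch
import torch.nn as nn

class EncoderSolution(nn.Module):
    """One possible completion of the Encoder skeleton above."""
    def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
        super().__init__()
        self.hid_dim = hid_dim
        self.n_layers = n_layers
        self.embedding = nn.Embedding(input_dim, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout=dropout)
        self.dropout = nn.Dropout(dropout)

    def forward(self, src):
        # src = [src sent len, batch size]
        embedded = self.dropout(self.embedding(src))
        # nn.LSTM initializes hidden/cell states to zeros when none are passed
        outputs, (hidden, cell) = self.rnn(embedded)
        return hidden, cell

# quick shape check with tiny dimensions
enc = EncoderSolution(input_dim=10, emb_dim=4, hid_dim=6, n_layers=2, dropout=0.5)
src = torch.randint(0, 10, (7, 3))  # [sent len = 7, batch = 3]
hidden, cell = enc(src)
print(hidden.shape, cell.shape)  # torch.Size([2, 3, 6]) torch.Size([2, 3, 6])
```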
### Decoder
Next, we'll build our decoder, which will also be a 2-layer (4 in the paper) LSTM.

The `Decoder` class does a single step of decoding. The first layer will receive a hidden and cell state from the previous time-step, $(s_{t-1}^1, c_{t-1}^1)$, and feed it through the LSTM with the current token, $y_t$, to produce a new hidden and cell state, $(s_t^1, c_t^1)$. The subsequent layers will use the hidden state from the layer below, $s_t^{l-1}$, and the previous hidden and cell states from their layer, $(s_{t-1}^l, c_{t-1}^l)$. This provides equations very similar to those in the encoder.
$$\begin{align*}
(s_t^1, c_t^1) = \text{DecoderLSTM}^1(y_t, (s_{t-1}^1, c_{t-1}^1))\\
(s_t^2, c_t^2) = \text{DecoderLSTM}^2(s_t^1, (s_{t-1}^2, c_{t-1}^2))
\end{align*}$$
Remember that the initial hidden and cell states to our decoder are our context vectors, which are the final hidden and cell states of our encoder from the same layer, i.e. $(s_0^l,c_0^l)=z^l=(h_T^l,c_T^l)$.
We then pass the hidden state from the top layer of the RNN, $s_t^L$, through a linear layer, $f$, to make a prediction of what the next token in the target (output) sequence should be, $\hat{y}_{t+1}$.
$$\hat{y}_{t+1} = f(s_t^L)$$
The arguments and initialization are similar to the `Encoder` class, except we now have an `output_dim` which is the size of the one-hot vectors that will be input to the decoder. These are equal to the vocabulary size of the output/target. There is also the addition of the `Linear` layer, used to make the predictions from the top layer hidden state.
Within the `forward` method, we accept a batch of input tokens, previous hidden states and previous cell states. We `unsqueeze` the input tokens to add a sentence length dimension of 1. Then, similar to the encoder, we pass through an embedding layer and apply dropout. This batch of embedded tokens is then passed into the RNN with the previous hidden and cell states. This produces an `output` (hidden state from the top layer of the RNN), a new `hidden` state (one for each layer, stacked on top of each other) and a new `cell` state (also one per layer, stacked on top of each other). We then pass the `output` (after getting rid of the sentence length dimension) through the linear layer to receive our `prediction`. We then return the `prediction`, the new `hidden` state and the new `cell` state.
```
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.emb_dim = emb_dim
self.hid_dim = hid_dim
self.output_dim = output_dim
self.n_layers = n_layers
self.embedding = # <YOUR CODE HERE>
self.rnn = # <YOUR CODE HERE>
self.out = # <YOUR CODE HERE>
self.dropout = # <YOUR CODE HERE>
def forward(self, input, hidden, cell):
#input = [batch size]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#n directions in the decoder will both always be 1, therefore:
#hidden = [n layers, batch size, hid dim]
#context = [n layers, batch size, hid dim]
input = input.unsqueeze(0)
#input = [1, batch size]
# Compute an embedding from the input data and apply dropout to it
embedded = # <YOUR CODE HERE>
#embedded = [1, batch size, emb dim]
# Compute the RNN output values of the decoder RNN.
# outputs, hidden and cell should be initialized here. Refer to nn.LSTM docs ;)
output, (hidden, cell) = # <YOUR CODE HERE>
#output = [sent len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#sent len and n directions will always be 1 in the decoder, therefore:
#output = [1, batch size, hid dim]
#hidden = [n layers, batch size, hid dim]
#cell = [n layers, batch size, hid dim]
prediction = self.out(output.squeeze(0))
#prediction = [batch size, output dim]
return prediction, hidden, cell
```
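As with the encoder, one possible completion of the decoder skeleton looks like this (a sketch under the same assumptions; `DecoderSolution` is a hypothetical name):

```python
import torch
import torch.nn as nn

class DecoderSolution(nn.Module):
    """One possible completion of the Decoder skeleton above."""
    def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
        super().__init__()
        self.output_dim = output_dim
        self.hid_dim = hid_dim
        self.n_layers = n_layers
        self.embedding = nn.Embedding(output_dim, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout=dropout)
        self.out = nn.Linear(hid_dim, output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, input, hidden, cell):
        input = input.unsqueeze(0)                      # [1, batch size]
        embedded = self.dropout(self.embedding(input))  # [1, batch size, emb dim]
        # one decoding step, seeded with the previous hidden/cell states
        output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
        prediction = self.out(output.squeeze(0))        # [batch size, output dim]
        return prediction, hidden, cell

# quick shape check with tiny dimensions
dec = DecoderSolution(output_dim=10, emb_dim=4, hid_dim=6, n_layers=2, dropout=0.5)
hidden = torch.zeros(2, 3, 6)
cell = torch.zeros(2, 3, 6)
prediction, hidden, cell = dec(torch.randint(0, 10, (3,)), hidden, cell)
print(prediction.shape)  # torch.Size([3, 10])
```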
### Seq2Seq
For the final part of the implementation, we'll implement the seq2seq model. This will handle:
- receiving the input/source sentence
- using the encoder to produce the context vectors
- using the decoder to produce the predicted output/target sentence
Our full model will look like this:

The `Seq2Seq` model takes in an `Encoder`, `Decoder`, and a `device` (used to place tensors on the GPU, if it exists).
For this implementation, we have to ensure that the number of layers and the hidden (and cell) dimensions are equal in the `Encoder` and `Decoder`. This is not always the case, you do not necessarily need the same number of layers or the same hidden dimension sizes in a sequence-to-sequence model. However, if you do something like having a different number of layers you will need to make decisions about how this is handled. For example, if your encoder has 2 layers and your decoder only has 1, how is this handled? Do you average the two context vectors output by the encoder? Do you pass both through a linear layer? Do you only use the context vector from the highest layer? Etc.
Our `forward` method takes the source sentence, target sentence and a teacher-forcing ratio. The teacher forcing ratio is used when training our model. When decoding, at each time-step we will predict what the next token in the target sequence will be from the previous tokens decoded, $\hat{y}_{t+1}=f(s_t^L)$. With probability equal to the teacher forcing ratio (`teacher_forcing_ratio`) we will use the actual ground-truth next token in the sequence as the input to the decoder during the next time-step. However, with probability `1 - teacher_forcing_ratio`, we will use the token that the model predicted as the next input to the model, even if it doesn't match the actual next token in the sequence.
The first thing we do in the `forward` method is to create an `outputs` tensor that will store all of our predictions, $\hat{Y}$.
We then feed the input/source sentence, $X$/`src`, into the encoder and receive our final hidden and cell states.
The first input to the decoder is the start of sequence (`<sos>`) token. As our `trg` tensor already has the `<sos>` token appended (all the way back when we defined the `init_token` in our `TRG` field) we get our $y_1$ by slicing into it. We know how long our target sentences should be (`max_len`), so we loop that many times. During each iteration of the loop, we:
- pass the input, previous hidden and previous cell states ($y_t, s_{t-1}, c_{t-1}$) into the decoder
- receive a prediction, next hidden state and next cell state ($\hat{y}_{t+1}, s_{t}, c_{t}$) from the decoder
- place our prediction, $\hat{y}_{t+1}$/`output` in our tensor of predictions, $\hat{Y}$/`outputs`
- decide if we are going to "teacher force" or not
- if we do, the next `input` is the ground-truth next token in the sequence, $y_{t+1}$/`trg[t]`
- if we don't, the next `input` is the predicted next token in the sequence, $\hat{y}_{t+1}$/`top1`
Once we've made all of our predictions, we return our tensor full of predictions, $\hat{Y}$/`outputs`.
```
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
assert encoder.n_layers == decoder.n_layers, \
"Encoder and decoder must have equal number of layers!"
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
#src = [src sent len, batch size]
#trg = [trg sent len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
# Remember: tensors are [sent len, batch size], so batch size is dimension 1
batch_size = trg.shape[1]
max_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device)
#last hidden state of the encoder is used as the initial hidden state of the decoder
hidden, cell = self.encoder(src)
#first input to the decoder is the <sos> tokens
input = trg[0,:]
for t in range(1, max_len):
output, hidden, cell = self.decoder(input, hidden, cell)
outputs[t] = output
teacher_force = random.random() < teacher_forcing_ratio
top1 = output.max(1)[1]
input = (trg[t] if teacher_force else top1)
return outputs
```
# Training the Seq2Seq Model
Now that we have our model implemented, we can begin training it.
First, we'll initialize our model. As mentioned before, the input and output dimensions are defined by the size of the vocabulary. The embedding dimensions and dropout for the encoder and decoder can be different, but the number of layers and the size of the hidden/cell states must be the same.
We then define the encoder, decoder and then our Seq2Seq model, which we place on the `device`.
```
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
N_LAYERS = 2
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)
# don't forget to put the model on the right device
model = Seq2Seq(enc, dec, device).to(device)
```
Next up is initializing the weights of our model. In the paper they state they initialize all weights from a uniform distribution between -0.08 and +0.08, i.e. $\mathcal{U}(-0.08, 0.08)$.
We initialize weights in PyTorch by creating a function which we `apply` to our model. When using `apply`, the `init_weights` function will be called on every module and sub-module within our model. For each module we loop through all of the parameters and sample them from a uniform distribution with `nn.init.uniform_`.
```
def init_weights(m):
# <YOUR CODE HERE>
model.apply(init_weights)
```
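One possible way to fill in `init_weights` (a sketch, not necessarily the reference answer; the name `init_weights_solution` is hypothetical):

```python
import torch.nn as nn

def init_weights_solution(m):
    """One possible completion: sample every parameter from U(-0.08, 0.08)."""
    for name, param in m.named_parameters():
        nn.init.uniform_(param.data, -0.08, 0.08)

# quick check on a small module: all weights end up inside the interval
layer = nn.Linear(4, 2)
layer.apply(init_weights_solution)
print(layer.weight.min().item() >= -0.08 and layer.weight.max().item() <= 0.08)  # True
```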
We also define a function that will calculate the number of trainable parameters in the model.
```
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
```
We define our optimizer, which we use to update our parameters in the training loop. Check out [this](http://ruder.io/optimizing-gradient-descent/) post for information about different optimizers. Here, we'll use Adam.
```
optimizer = optim.Adam(model.parameters())
```
Next, we define our loss function. The `CrossEntropyLoss` function calculates both the log softmax as well as the negative log-likelihood of our predictions.
Our loss function calculates the average loss per token, however by passing the index of the `<pad>` token as the `ignore_index` argument we ignore the loss whenever the target token is a padding token.
```
PAD_IDX = TRG.vocab.stoi['<pad>']
criterion = nn.CrossEntropyLoss(ignore_index = PAD_IDX)
```
Next, we'll define our training loop.
First, we'll set the model into "training mode" with `model.train()`. This will turn on dropout (and batch normalization, which we aren't using) and then iterate through our data iterator.
At each iteration:
- get the source and target sentences from the batch, $X$ and $Y$
- zero the gradients calculated from the last batch
- feed the source and target into the model to get the output, $\hat{Y}$
- as the loss function only works on 2d inputs with 1d targets we need to flatten each of them with `.view`
- we also don't want to measure the loss of the `<sos>` token, hence we slice off the first column of the output and target tensors
- calculate the gradients with `loss.backward()`
- clip the gradients to prevent them from exploding (a common issue in RNNs)
- update the parameters of our model by doing an optimizer step
- sum the loss value to a running total
Finally, we return the loss that is averaged over all batches.
```
PLOT_STEP = 0
def train(model, iterator, optimizer, criterion, clip, train_history=None, valid_history=None, plot_local=False, writer=None):
global PLOT_STEP
model.train()
epoch_loss = 0
history = []
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
optimizer.zero_grad()
output = model(src, trg)
#trg = [trg sent len, batch size]
#output = [trg sent len, batch size, output dim]
output = output[1:].view(-1, output.shape[-1])
trg = trg[1:].view(-1)
#trg = [(trg sent len - 1) * batch size]
#output = [(trg sent len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
# Let's clip the gradient
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
history.append(loss.cpu().data.numpy())
if (i+1)%10==0:
PLOT_STEP += i
if writer is not None:
writer.add_scalar('train loss', history[-1], PLOT_STEP)
if plot_local:
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 8))
clear_output(True)
ax[0].plot(history, label='train loss')
ax[0].set_xlabel('Batch')
ax[0].set_title('Train loss')
if train_history is not None:
ax[1].plot(train_history, label='general train history')
ax[1].set_xlabel('Epoch')
if valid_history is not None:
ax[1].plot(valid_history, label='general valid history')
plt.legend()
plt.show()
return epoch_loss / len(iterator)
```
Our evaluation loop is similar to our training loop, however as we aren't updating any parameters we don't need to pass an optimizer or a clip value.
We must remember to set the model to evaluation mode with `model.eval()`. This will turn off dropout (and batch normalization, if used).
We use the `with torch.no_grad()` block to ensure no gradients are calculated within the block. This reduces memory consumption and speeds things up.
The iteration loop is similar (without the parameter updates), however we must ensure we turn teacher forcing off for evaluation. This will cause the model to only use its own predictions to make further predictions within a sentence, which mirrors how it would be used in deployment.
```
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
history = []
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
output = model(src, trg, 0) #turn off teacher forcing
#trg = [trg sent len, batch size]
#output = [trg sent len, batch size, output dim]
output = output[1:].view(-1, output.shape[-1])
trg = trg[1:].view(-1)
#trg = [(trg sent len - 1) * batch size]
#output = [(trg sent len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
```
Next, we'll create a function that we'll use to tell us how long an epoch takes.
```
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
import matplotlib
matplotlib.rcParams.update({'figure.figsize': (16, 12), 'font.size': 14})
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import clear_output
example_input = next(iter(train_iterator))
example_input.src.shape
from torch.utils.tensorboard import SummaryWriter
# default `log_dir` is "runs" - we can be more specific here
writer = SummaryWriter()
writer.add_graph(model, (example_input.src[:1], example_input.trg[:1]))
writer.close()
import numpy as np
def translate_sentence(sentence, src_field, trg_field, model, device, max_len = 50):
model.eval()
if isinstance(sentence, str):
nlp = spacy.load('de')
tokens = [token.text.lower() for token in nlp(sentence)]
else:
tokens = [token.lower() for token in sentence]
tokens = [src_field.init_token] + tokens + [src_field.eos_token]
src_indexes = [src_field.vocab.stoi[token] for token in tokens]
src_tensor = torch.LongTensor(src_indexes).unsqueeze(1).to(device)
src_len = torch.LongTensor([len(src_indexes)]).to(device)
with torch.no_grad():
hidden, cell = model.encoder(src_tensor)
trg_indexes = [trg_field.vocab.stoi[trg_field.init_token]]
for t in range(1, max_len):
trg_tensor = torch.LongTensor([trg_indexes[-1]]).to(device)
#insert input token embedding, previous hidden state and all encoder hidden states
#receive output tensor (predictions) and new hidden state
output, hidden, cell = model.decoder(trg_tensor, hidden, cell)
pred_token = output.argmax(1).item()
trg_indexes.append(pred_token)
if pred_token == trg_field.vocab.stoi[trg_field.eos_token]:
break
trg_tokens = [trg_field.vocab.itos[i] for i in trg_indexes]
return trg_tokens[1:]
def get_example_translation():
example_idx = np.random.choice(np.arange(len(test_data)))
src = vars(test_data.examples[example_idx])['src']
trg = vars(test_data.examples[example_idx])['trg']
src_string = f'src = {" ".join(src)}'
trg_string = f'trg = {" ".join(trg)}'
translation = translate_sentence(src, SRC, TRG, model, device)
translation_string = f'predicted trg = {" ".join(translation)}'
# print(src_string)
# print()
# print(trg_string)
# print()
# print(translation_string)
return ('\n\n'.join([src_string, trg_string, translation_string]))
```
Let's call `tensorboard` here (if you are working on Colab). If you work locally, it is better to run it as a separate process in your terminal using the command
```
tensorboard --logdir runs/ --reload_interval=5
```
```
# On Colab just uncomment and run this cell
# %load_ext tensorboard
# %tensorboard --logdir runs
```
We can finally start training our model!
At each epoch, we'll be checking if our model has achieved the best validation loss so far. If it has, we'll update our best validation loss and save the parameters of our model (called `state_dict` in PyTorch). Then, when we come to test our model, we'll use the saved parameters used to achieve the best validation loss.
We'll be printing out both the loss and the perplexity at each epoch. It is easier to see a change in perplexity than a change in loss as the numbers are much bigger.
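A quick numerical illustration of why perplexity is easier to read: since perplexity is just the exponential of the cross-entropy loss, small changes in loss become much more visible.

```python
import math

# perplexity = exp(cross-entropy loss); a 0.1 drop in loss is
# roughly a 5-point drop in perplexity at this loss level
for loss in [4.0, 3.9, 3.8]:
    print(f"loss = {loss:.1f} -> ppl = {math.exp(loss):.1f}")
```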
```
train_history = []
valid_history = []
N_EPOCHS = 15
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP, train_history, valid_history, writer=writer)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut1-model.pt')
train_history.append(train_loss)
valid_history.append(valid_loss)
writer.add_scalar('mean train loss per epoch', train_loss, global_step=epoch)
writer.add_scalar('mean val loss per epoch', valid_loss, global_step=epoch)
val_example_data = next(iter(valid_iterator))
to_print = []
writer.add_text('translation example', get_example_translation(), global_step=epoch)
writer.close()
# print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
# print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
# print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
print(get_example_translation())
```
We'll load the parameters (`state_dict`) that gave our model the best validation loss and run the model on the test set.
```
model.load_state_dict(torch.load('tut1-model.pt'))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
```
# USAD
## Environment
```
!rm -r sample_data
!git clone https://github.com/manigalati/usad
%cd usad
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import torch
import torch.nn as nn
from utils import *
from usad import *
!nvidia-smi -L
device = get_default_device()
```
## EDA - Data Pre-Processing
### Download dataset
```
!mkdir input
#normal period
!python gdrivedl.py https://drive.google.com/open?id=1rVJ5ry5GG-ZZi5yI4x9lICB8VhErXwCw input/
#anomalies
!python gdrivedl.py https://drive.google.com/open?id=1iDYc0OEmidN712fquOBRFjln90SbpaE7 input/
```
### Normal period
```
#Read data
normal = pd.read_csv("input/SWaT_Dataset_Normal_v1.csv")#, nrows=1000)
normal = normal.drop(["Timestamp" , "Normal/Attack" ] , axis = 1)
normal.shape
# Transform all columns into float64
for i in list(normal):
normal[i]=normal[i].apply(lambda x: str(x).replace("," , "."))
normal = normal.astype(float)
```
#### Normalization
```
from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler()
x = normal.values
x_scaled = min_max_scaler.fit_transform(x)
normal = pd.DataFrame(x_scaled)
normal.head(2)
```
### Attack
```
#Read data
attack = pd.read_csv("input/SWaT_Dataset_Attack_v0.csv",sep=";")#, nrows=1000)
labels = [ float(label!= 'Normal' ) for label in attack["Normal/Attack"].values]
attack = attack.drop(["Timestamp" , "Normal/Attack" ] , axis = 1)
attack.shape
# Transform all columns into float64 (the raw data uses decimal commas)
for i in list(attack):
    attack[i] = attack[i].apply(lambda x: str(x).replace(",", "."))
attack = attack.astype(float)
```
#### Normalization
```
from sklearn import preprocessing
x = attack.values
x_scaled = min_max_scaler.transform(x)
attack = pd.DataFrame(x_scaled)
attack.head(2)
```
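Note that `min_max_scaler` was fit on the normal data only and is reused here via `transform`, so attack values can fall outside $[0, 1]$. The same idea in plain NumPy, with toy numbers rather than the SWaT data:

```python
import numpy as np

# Fit min/max on the "training" (normal) data only
train = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
test = np.array([[12.0, 15.0]])  # unseen data may exceed the training range

lo, hi = train.min(axis=0), train.max(axis=0)
scale = lambda x: (x - lo) / (hi - lo)

train_scaled = scale(train)
test_scaled = scale(test)
print(train_scaled.min(), train_scaled.max())  # training data lands exactly in [0, 1]
print(test_scaled)  # values outside the training range map outside [0, 1]
```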
### Windows
```
window_size=12
windows_normal=normal.values[np.arange(window_size)[None, :] + np.arange(normal.shape[0]-window_size)[:, None]]
windows_normal.shape
windows_attack=attack.values[np.arange(window_size)[None, :] + np.arange(attack.shape[0]-window_size)[:, None]]
windows_attack.shape
```
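The fancy indexing above builds all sliding windows in one shot: broadcasting a `(1, w)` row of offsets against an `(n - w, 1)` column of start positions yields an `(n - w, w)` index matrix. A toy illustration (note that, as in the cell above, the range stops at `n - w`, so the final possible window is not included):

```python
import numpy as np

series = np.arange(6)  # [0, 1, 2, 3, 4, 5]
window_size = 3

# (1, w) + (n - w, 1) broadcasts to an (n - w, w) matrix of indices
idx = np.arange(window_size)[None, :] + np.arange(len(series) - window_size)[:, None]
windows = series[idx]
print(windows)  # rows [0,1,2], [1,2,3], [2,3,4]
```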
## Training
```
import torch.utils.data as data_utils
BATCH_SIZE = 7919
N_EPOCHS = 100
hidden_size = 100
w_size=windows_normal.shape[1]*windows_normal.shape[2]
z_size=windows_normal.shape[1]*hidden_size
windows_normal_train = windows_normal[:int(np.floor(.8 * windows_normal.shape[0]))]
windows_normal_val = windows_normal[int(np.floor(.8 * windows_normal.shape[0])):int(np.floor(windows_normal.shape[0]))]
train_loader = torch.utils.data.DataLoader(data_utils.TensorDataset(
torch.from_numpy(windows_normal_train).float().view(([windows_normal_train.shape[0],w_size]))
) , batch_size=BATCH_SIZE, shuffle=False, num_workers=0)
val_loader = torch.utils.data.DataLoader(data_utils.TensorDataset(
torch.from_numpy(windows_normal_val).float().view(([windows_normal_val.shape[0],w_size]))
) , batch_size=BATCH_SIZE, shuffle=False, num_workers=0)
test_loader = torch.utils.data.DataLoader(data_utils.TensorDataset(
torch.from_numpy(windows_attack).float().view(([windows_attack.shape[0],w_size]))
) , batch_size=BATCH_SIZE, shuffle=False, num_workers=0)
model = UsadModel(w_size, z_size)
model = to_device(model,device)
history = training(N_EPOCHS,model,train_loader,val_loader)
plot_history(history)
torch.save({
'encoder': model.encoder.state_dict(),
'decoder1': model.decoder1.state_dict(),
'decoder2': model.decoder2.state_dict()
}, "model.pth")
```
## Testing
```
checkpoint = torch.load("model.pth")
model.encoder.load_state_dict(checkpoint['encoder'])
model.decoder1.load_state_dict(checkpoint['decoder1'])
model.decoder2.load_state_dict(checkpoint['decoder2'])
results=testing(model,test_loader)
windows_labels=[]
for i in range(len(labels)-window_size):
windows_labels.append(list(np.int_(labels[i:i+window_size])))
y_test = [1.0 if (np.sum(window) > 0) else 0 for window in windows_labels ]
y_pred=np.concatenate([torch.stack(results[:-1]).flatten().detach().cpu().numpy(),
results[-1].flatten().detach().cpu().numpy()])
threshold=ROC(y_test,y_pred)
```
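Each window inherits a label from its points: it is marked anomalous if any point inside it is anomalous, exactly as in the `y_test` construction above. A toy sketch:

```python
import numpy as np

labels = [0, 0, 1, 0, 0]  # point-wise anomaly labels
window_size = 2

# A window is anomalous if any of its points is anomalous
windows_labels = [labels[i:i + window_size] for i in range(len(labels) - window_size)]
y = [1.0 if np.sum(w) > 0 else 0.0 for w in windows_labels]
print(y)  # [0.0, 1.0, 1.0]
```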
---
# Comparing the performance of optimizers
```
import pennylane as qml
import numpy as np
from qiskit import IBMQ
import itertools
import matplotlib.pyplot as plt
import pickle
import scipy
```
## Hardware-friendly circuit
```
n_wires = 5
n_shots_list = [10, 100, 1000]
devs = [qml.device("default.qubit", wires=n_wires, shots=shots, analytic=False) for shots in n_shots_list]
devs.append(qml.device("default.qubit", wires=n_wires))
devs
def layers_circ(weights):
for i in range(n_wires):
qml.RX(weights[i], wires=i)
qml.CNOT(wires=[0, 1])
qml.CNOT(wires=[2, 1])
qml.CNOT(wires=[3, 1])
qml.CNOT(wires=[4, 3])
return qml.expval(qml.PauliZ(1))
layers = [qml.QNode(layers_circ, d) for d in devs]
seed = 2
weights = qml.init.basic_entangler_layers_uniform(n_layers=1, n_wires=5, seed=seed).flatten()
weights
grads = [qml.grad(l, argnum=0) for l in layers]
[l(weights) for l in layers]
g_exact = np.round(grads[-1](weights), 7)
g_exact
```
## Calculating the Hessian
```
s = 0.5 * np.pi
denom = 4 * np.sin(s) ** 2
shift = np.eye(len(weights))
LAMBDA = 0.2 # regularization parameter for the Hessian
lr_gds = 0.15
lr_newton = 0.15
#weights[0] = 1.8
#weights[1] = 2.2
weights[0] = 0.1
weights[1] = 0.15
ARGS = 2
def is_pos_def(x):
return np.all(np.linalg.eigvals(x) > 0)
# First method
def regularize_hess(hess, lr):
return (1 / lr_newton) * (hess + LAMBDA * np.eye(len(hess)))
def regularize_diag_hess(hess, lr):
return (1 / lr_newton) * (hess + LAMBDA)
# Second method
def regularize_hess(hess, lr):
if is_pos_def(hess - LAMBDA * np.eye(len(hess))):
return (1 / lr_newton) * hess
return (1 / lr) * np.eye(len(hess))
def regularize_diag_hess(hess, lr):
if np.all(hess - LAMBDA > 0):
return (1 / lr_newton) * hess
return (1 / lr) * np.ones(len(hess))
# Third method
def regularize_hess(hess, lr):
abs_hess = scipy.linalg.sqrtm(hess @ hess)
return (1 / lr_newton) * (abs_hess + LAMBDA * np.eye(len(hess)))
def regularize_diag_hess(hess, lr):
return (1 / lr_newton) * (np.abs(hess) + LAMBDA)
# Fourth method (each redefinition overrides the previous one, so this is the version used below)
def regularize_hess(hess, lr):
eig_vals, eig_vects = np.linalg.eig(hess)
epsilon = LAMBDA * np.ones(len(hess))
regul_eig_vals = np.max([eig_vals, epsilon], axis=0)
return (1 / lr_newton) * eig_vects @ np.diag(regul_eig_vals) @ np.linalg.inv(eig_vects)
def regularize_diag_hess(hess, lr):
epsilon = LAMBDA * np.ones(len(hess))
return (1 / lr_newton) * np.max([hess, epsilon], axis=0)
def hess_gen_results(func, weights, args=None):
results = {}
if not args:
args = len(weights)
for c in itertools.combinations(range(args), r=2):
weights_pp = weights + s * (shift[c[0]] + shift[c[1]])
weights_pm = weights + s * (shift[c[0]] - shift[c[1]])
weights_mp = weights - s * (shift[c[0]] - shift[c[1]])
weights_mm = weights - s * (shift[c[0]] + shift[c[1]])
f_pp = func(weights_pp)
f_pm = func(weights_pm)
f_mp = func(weights_mp)
f_mm = func(weights_mm)
results[c] = (f_pp, f_mp, f_pm, f_mm)
f = func(weights)
for i in range(args):
f_p = func(weights + 0.5 * np.pi * shift[i])
f_m = func(weights - 0.5 * np.pi * shift[i])
results[(i, i)] = (f_p, f_m, f)
return results
def hess_diag_gen_results(func, weights, args=None):
results = {}
if not args:
args = len(weights)
f = func(weights)
for i in range(args):
f_p = func(weights + 0.5 * np.pi * shift[i])
f_m = func(weights - 0.5 * np.pi * shift[i])
results[(i, i)] = (f_p, f_m, f)
return results
def grad_gen_results(func, weights, args=None):
results = {}
if not args:
args = len(weights)
for i in range(args):
f_p = func(weights + 0.5 * np.pi * shift[i])
f_m = func(weights - 0.5 * np.pi * shift[i])
results[i] = (f_p, f_m)
return results
def get_hess_diag(func, weights, args=None):
if not args:
args = len(weights)
hess = np.zeros(args)
results = hess_diag_gen_results(func, weights, args)
for i in range(args):
r = results[(i, i)]
hess[i] = (r[0] + r[1] - 2 * r[2]) / 2
grad = np.zeros(args)
for i in range(args):
r = results[(i, i)]
grad[i] = (r[0] - r[1]) / 2
return hess, results, grad
def get_grad(func, weights, args=None):
if not args:
args = len(weights)
grad = np.zeros(args)
results = grad_gen_results(func, weights, args)
for i in range(args):
r = results[i]
grad[i] = (r[0] - r[1]) / 2
return results, grad
def get_hess(func, weights, args=None):
if not args:
args = len(weights)
hess = np.zeros((args, args))
results = hess_gen_results(func, weights, args)
for c in itertools.combinations(range(args), r=2):
r = results[c]
hess[c] = (r[0] - r[1] - r[2] + r[3]) / denom
hess = hess + hess.T
for i in range(args):
r = results[(i, i)]
hess[i, i] = (r[0] + r[1] - 2 * r[2]) / 2
grad = np.zeros(args)
for i in range(args):
r = results[(i, i)]
grad[i] = (r[0] - r[1]) / 2
return hess, results, grad
```
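The shift-rule formulas used above — the gradient `(f(w + s*e_i) - f(w - s*e_i)) / 2` and the diagonal Hessian entry `(f_p + f_m - 2*f) / 2` with `s = pi/2` — can be sanity-checked on `f(theta) = cos(theta)`, which has the same functional form as a single-qubit expectation value and whose derivatives are known in closed form:

```python
import numpy as np

f = np.cos
theta = 0.7
s = 0.5 * np.pi  # the pi/2 shift used in the cells above

grad = (f(theta + s) - f(theta - s)) / 2            # parameter-shift gradient
hess = (f(theta + s) + f(theta - s) - 2 * f(theta)) / 2  # parameter-shift 2nd derivative

# Closed-form derivatives of cos: f' = -sin, f'' = -cos
print(np.isclose(grad, -np.sin(theta)), np.isclose(hess, -np.cos(theta)))
```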
## Visualizing optimization surface
```
grid = 200
xs = np.linspace(- 2 * np.pi, 2 * np.pi, grid)
ys = np.linspace(- 2 * np.pi, 2 * np.pi, grid)
xv, yv = np.meshgrid(xs, ys)
zv = np.zeros((grid, grid))
for i in range(grid):
for j in range(grid):
w = weights.copy()
w[0] = xv[i, j]
w[1] = yv[i, j]
zv[i, j] = layers[-1](w)
np.savez("grid.npz", xs=xs, ys=ys, zv=zv)
g = np.load("grid.npz")
xs = g["xs"]
ys = g["ys"]
zv = g["zv"]
weights
def gradient_descent(func, weights, reps, lr, i, args=ARGS):
ws = [weights.copy()]
res_dict = {}
gs = []
costs = [func(weights)]
for r in range(reps):
res, g = get_grad(func, ws[-1], args)
res_dict[r] = res
gs.append(g)
w_updated = ws[-1].copy()
w_updated[:args] -= lr * g
ws.append(w_updated)
costs.append(func(w_updated))
if r % 5 == 0:
print("Calculated for repetition {}".format(r))
with open("gds_results_{}.pickle".format(i), "wb") as f:
pickle.dump([ws, res, gs, costs], f)
return ws, res_dict, gs, costs
reps = 50
lr = lr_gds
args = ARGS
for i, l in enumerate(layers):
print("Calculating for layer {}".format(i))
ws, res, gs, costs = gradient_descent(l, weights, reps, lr, i)
def newton(func, weights, reps, lr, i, args=ARGS):
ws = [weights.copy()]
res_dict = {}
gs = []
hs = []
costs = [func(weights)]
for r in range(reps):
hess_r, res, g = get_hess(func, ws[-1], args)
res_dict[r] = res
gs.append(g)
hs.append(hess_r)
w_updated = ws[-1].copy()
hess_regul = regularize_hess(hess_r, lr)
h_inv = np.real(np.linalg.inv(hess_regul))
w_updated[:args] -= h_inv @ g
ws.append(w_updated)
costs.append(func(w_updated))
if r % 5 == 0:
print("Calculated for repetition {}".format(r))
with open("new_results_{}.pickle".format(i), "wb") as f:
pickle.dump([ws, res, gs, hs, costs], f)
return ws, res_dict, gs, hs, costs
reps = 50
lr = lr_gds
for i, l in enumerate(layers):
print("Calculating for layer {}".format(i))
ws, res, gs, hs, costs = newton(l, weights, reps, lr, i)
def newton_diag(func, weights, reps, lr, ii, args=ARGS):
ws = [weights.copy()]
res_dict = {}
gs = []
hs = []
costs = [func(weights)]
for r in range(reps):
hess_r, res, g = get_hess_diag(func, ws[-1], args)
res_dict[r] = res
gs.append(g)
hs.append(hess_r)
w_updated = ws[-1].copy()
hess_regul = regularize_diag_hess(hess_r, lr)
update = g / hess_regul
for i in range(len(update)):
if np.isinf(update[i]):
update[i] = 0
w_updated[:args] -= update
ws.append(w_updated)
costs.append(func(w_updated))
if r % 5 == 0:
print("Calculated for repetition {}".format(r))
with open("new_d_results_{}.pickle".format(ii), "wb") as f:
pickle.dump([ws, res, gs, hs, costs], f)
return ws, res_dict, gs, hs, costs
reps = 50
lr = lr_gds
for i, l in enumerate(layers):
print("Calculating for layer {}".format(i))
ws, res, gs, hs, costs = newton_diag(l, weights, reps, lr, i)
```
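The three optimizers above differ only in their update rule: gradient descent takes many small steps `w -= lr * g`, while Newton's method rescales the step with the inverse Hessian, `w -= H^{-1} g`. The contrast can be sketched on a toy quadratic cost (an illustration only, not the quantum circuits above, where the Newton step is exact in a single iteration):

```python
import numpy as np

# Quadratic cost 0.5 * w^T A w with gradient A w and constant Hessian A
A = np.array([[3.0, 0.5], [0.5, 1.0]])
grad = lambda w: A @ w
hess = A

w_gd = np.array([1.0, -2.0])
for _ in range(200):                 # gradient descent: many small steps
    w_gd = w_gd - 0.1 * grad(w_gd)

w_newton = np.array([1.0, -2.0])
w_newton = w_newton - np.linalg.inv(hess) @ grad(w_newton)  # one Newton step

print(np.linalg.norm(w_gd), np.linalg.norm(w_newton))  # both near the minimum at 0
```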
---
# 1. Hidden Markov Models Introduction
This post is going to cover **hidden Markov models**, which are used for modeling sequences of data. Sequences appear everywhere: stock prices, language, credit scoring, webpage visits.
Often we are dealing with sequences in machine learning without even realizing it, or we may ignore the fact that the data came from a sequence. For instance, consider the following sentence:
> "Like and cats dogs I"
Clearly, this sentence does not make any sense. This is what happens when you use a model such as bag of words. The fact that it becomes much harder to tell what a sentence means when you take away the time aspect tells you that a lot of information is carried there. The original sentence was:
> "I like cats and dogs"
This may be relatively easy to decode on your own, but you can imagine that this gets much harder as the sentence gets longer.
### 1.1 Outline
1. We will start by looking at the most **basic Markov model**, with no hidden portion. These are very useful for modeling sequences, as we will see. We will talk about the mathematical properties of the Markov model, and go through plenty of examples so we can see how they are used. Google's PageRank algorithm is based on Markov models. So, despite their age, Markov models are still very useful and relevant today.
2. We will also talk about how to model language, and how to analyze web visitor data, so you can fix problems like high bounce rate.
3. Next, we will look at the **hidden Markov model**. This will be mathematically involved, but the first section should prepare you. We will look at the three basic problems in hidden Markov modeling:
* Predicting the probability of a sequence
* Predicting the most likely sequence of hidden states given an observed sequence
* How to train a hidden markov model
* We will even go further and look at how this relates to deep learning by using gradient descent to train our HMM. Typically, the expectation maximization algorithm is used. We will do this too, but we will see how gradient descent makes this much easier.
4. We will finally look at Hidden Markov Models for real-valued data.
<br>
## 2. Unsupervised or Supervised?
We can now discuss where HMMs fit in the spectrum of machine learning techniques. Hidden Markov models are for modeling sequences. A sequence by itself could look like:
```
x(1), x(2),...,x(t),...,x(T)
```
We can see that there is no label there, so HMMs just model the distribution of a sequence; this means they are unsupervised.
### Classification
However, we often see HMMs being used for classifiers as well. For instance, we could train an HMM to model a male voice and a female voice. Then we could predict, given a new voice sample, whether the voice is male or female.
How can we do this, given that we only have a model for the probability of the data, $p(X)$? The key idea here is Bayes' rule. What we actually modeled was $p(X \; | \; male)$ and $p(X \; | \; female)$. Bayes' rule helps us reverse the conditional, leaving us with:
$$p(male \; | \; x) \; and \; p(female \; | \; x)$$
Now we can find the most probable class, and our prediction becomes whichever class that is. From Bayes' rule we know that:
$$p(male \; | \; x) = \frac{p(X \; | \; male) p(male)}{p(X)}$$
$$p(female \; | \; x) = \frac{p(X \; | \; female) p(female)}{p(X)}$$
And in general:
$$posterior = \frac{likelihood * prior}{normalization \; constant}$$
We do not care about the actual probability $p(X \; | \; C)$, just which one is greater. Note that we can model $P(X \; | \; C)$ not only with an HMM but also with Naive Bayes:
$$P(X \; | \; C) = P(x(1,1) \; | \; C)*P(x(1,2) \; | \; C)*...*P(x(T,D) \; | \; C)$$
With Naive Bayes, we make the independence assumption, meaning that each observation is independent given the class. So, we take the probability of each feature and multiply them together to get the final $P(X \; | \; C)$. We can even extend the idea behind hidden Markov models and model the data using more general concepts like Bayesian belief networks.
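The per-class strategy can be sketched with stand-in scores. The log-likelihood values below are hypothetical placeholders for the output of two trained models, not real HMM scores:

```python
import math

# Hypothetical class-conditional log-likelihoods, standing in for trained models
def log_p_x_given_male(x):
    return -12.0

def log_p_x_given_female(x):
    return -9.5

log_prior = {"male": math.log(0.5), "female": math.log(0.5)}

def classify(x):
    scores = {
        "male": log_p_x_given_male(x) + log_prior["male"],
        "female": log_p_x_given_female(x) + log_prior["female"],
    }
    # p(x) is a shared normalization constant, so we can ignore it
    return max(scores, key=scores.get)

print(classify("some voice sample"))  # "female" wins: -9.5 > -12.0
```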
### Conclusion
At their core, HMMs are unsupervised. However, they can easily be used for classification by creating a separate model for each class, and then making the prediction based on which model gives the maximum posterior probability.
---
```
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
import numpy as np
%matplotlib inline
from qiskit import Aer
from qiskit import execute
from qiskit.tools.visualization import matplotlib_circuit_drawer as drawer
from qiskit import IBMQ
from qiskit import compile
from qiskit.tools.visualization import plot_histogram
my_style = {'cregbundle': True}
```
In this notebook, we will try to implement Shor's algorithm for a simple example using Qiskit. Shor's algorithm applies to composite numbers M which are odd and not a prime power. So the smallest meaningful example is M=15.
In our implementation, we will actually cheat - we will use knowledge about the factors of M at some points that we would not have for a real example with a large integer M.
To start, let us recall that the algorithm picks a number a relatively prime to M and calculates the period of a modulo M. We will use a = 11. Let us manually determine its period first.
```
M = 15
a = 11
print("a**2: ", a*a % M)
```
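Since M is tiny, the period and the resulting factors can also be verified classically. Recall that for an even period r, $\gcd(a^{r/2} \pm 1, M)$ yields non-trivial factors of M. A short sketch:

```python
from math import gcd

M, a = 15, 11

# Smallest r > 0 with a**r = 1 (mod M)
r = 1
x = a % M
while x != 1:
    x = (x * a) % M
    r += 1
print("period r =", r)  # r = 2

# For even r, gcd(a**(r//2) +/- 1, M) yields the non-trivial factors
f1, f2 = gcd(a ** (r // 2) - 1, M), gcd(a ** (r // 2) + 1, M)
print("factors:", f1, f2)  # 5 and 3
```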
Thus the period r is two. How would we find the period using a quantum circuit? For our implementation, it turns out to be most useful to describe Shor's algorithm in terms of quantum phase estimation. In this description, we apply the quantum phase estimation algorithm to the state $|1 \rangle$. So let us see what a circuit for this QPE looks like.
We start with a very simple version - a one-qubit QPE. Recall that in the QPE algorithm relevant for period finding, we have a primary register p holding the initial state - $|1 \rangle$ in our case - and a working register w with t qubits (in our case, we start with t = 1). The circuit acts with a Hadamard on each qubit of the working register and prepares the primary register in the state $|1 \rangle$. We then apply a sequence of t controlled $U^k$ operations, where $U$ is the unitary operator on the primary register given by multiplication with a, i.e.
$$
|x \rangle \mapsto |a x \rangle
$$
Thus the first thing that we need is a circuit implementing this controlled operation. However, we are lucky - we only need to implement this for the input $|1 \rangle$. Then the result is $|11 \rangle$, and our resulting circuit is very simple - we simply need to toggle qubits 1 and 3 conditional on the working register
```
def oneQbitCircuit(p, w, c):
circuit = QuantumCircuit(w,p, c)
# Prepare initial state 1 in primary register
circuit.x(p[0])
circuit.barrier(p)
circuit.barrier(w)
# Add Hadamard gate to working register
circuit.h(w[0])
# Add conditional multiplication by a to primary register
circuit.cx(w[0], p[1])
circuit.cx(w[0], p[3])
return circuit
p = QuantumRegister(4,"p")
w = QuantumRegister(1,"w")
c = ClassicalRegister(1, "c")
circuit = oneQbitCircuit(p,w,c)
drawer(circuit, style=my_style)
```
Let us see what state we expect here. According to the general logic of the QPE circuit, we should be in the state
$$
\frac{1}{\sqrt{2}} \sum_{s=0}^1 |a^s \rangle |s \rangle =\frac{1}{\sqrt{2}} ( |1 \rangle |0\rangle +
|11\rangle) |1 \rangle = \frac{1}{\sqrt{2}} ( |2 \rangle +
|23\rangle)
$$
Here we follow the Qiskit convention and write the primary register first, yielding the most significant bits. Let us test this using the state vector simulator.
```
backend = Aer.get_backend('statevector_simulator')
job = execute(circuit, backend)
job.result().get_statevector()
```
Nice, this is what we expected. Let us now see how we can extend this to two qubits. For that purpose, we need to add a conditional version of $a^2$. But this is trivial, as $a^2$ is already 1 modulo 15. So our two-qubit circuit is simply the same as the one-qubit circuit, with one additional working qubit.
```
def twoQbitCircuit(p, w, c):
circuit = QuantumCircuit(w,p, c)
# Prepare initial state 1 in primary register
circuit.x(p[0])
circuit.barrier(p)
circuit.barrier(w)
# Add Hadamard gate to working register
circuit.h(w[0])
circuit.h(w[1])
# Add conditional multiplication by a to primary register
circuit.cx(w[0], p[1])
circuit.cx(w[0], p[3])
return circuit
p = QuantumRegister(4,"p")
w = QuantumRegister(2,"w")
c = ClassicalRegister(3, "c")
circuit = twoQbitCircuit(p,w,c)
drawer(circuit, style=my_style)
```
Let us again test this. We expect the following four basis vectors to show up
$$
|1\rangle |0 \rangle = |4\rangle
$$
$$
|11\rangle |1 \rangle = |45\rangle
$$
$$
|1\rangle |2 \rangle = |6\rangle
$$
$$
|11\rangle |3 \rangle = |47\rangle
$$
```
backend = Aer.get_backend('statevector_simulator')
job = execute(circuit, backend)
state = np.around(job.result().get_statevector(), 2)
for i in range(2**6):
if (state[i] != 0):
print("|",i,"> ---> ", state[i])
```
Finally, the last circuit we need is multiplication by $a^4$ modulo 15. But this is again 1, so the circuit is the identity. Therefore our three-qubit circuit is identical to our two-qubit circuit, just with one more qubit in the working register.
```
def threeQbitCircuit(p, w, c):
circuit = QuantumCircuit(w,p, c)
# Prepare initial state 1 in primary register
circuit.x(p[0])
circuit.barrier(p)
circuit.barrier(w)
# Add Hadamard gates to working register
circuit.h(w[0])
circuit.h(w[1])
circuit.h(w[2])
# Add conditional multiplication by a to primary register
circuit.cx(w[0], p[1])
circuit.cx(w[0], p[3])
return circuit
p = QuantumRegister(4,"p")
w = QuantumRegister(3,"w")
c = ClassicalRegister(3, "c")
circuit = threeQbitCircuit(p,w,c)
drawer(circuit, style=my_style)
#
# Print out expected amplitudes up to normalisation
#
for s in range(2**3):
x = a**s % 15
print("|", x*8 + s, "> = |",x,">|", s,">")
backend = Aer.get_backend('statevector_simulator')
job = execute(circuit, backend)
state = np.around(job.result().get_statevector(), 2)
for i in range(2**7):
if (state[i] != 0):
print("|",i,"> = |", i // 8, ">|", i % 8,"> ---> ", state[i])
```
So our circuit seems to work fine. Let us now combine this into one circuit with the QFT and see what we get.
```
def nBitQFT(q,c,n=3):
circuit = QuantumCircuit(q,c)
#
# We start with the most significant bit
#
for k in range(n):
j = n - k
# Add the Hadamard to qubit j-1
circuit.h(q[j-1])
#
# there is one conditional rotation for
# each qubit with lower significance
for i in reversed(range(j-1)):
circuit.cu1(2*np.pi/2**(j-i),q[i], q[j-1])
#
# Finally we need to swap qubits
#
for i in range(n//2):
circuit.swap(q[i], q[n-i-1])
return circuit
circuit = threeQbitCircuit(p,w,c) + nBitQFT(w,c,n=3)
circuit.barrier(w)
circuit.measure(w,c)
drawer(circuit, style=my_style)
```
Before we run this, let us try to understand what output we expect. The value in the working register after a measurement will be a multiple of $2^n / r$. In our case, r=2 and n=3, so the output is a multiple of four. Thus we expect peaks at 000 and 100. Let us see whether this is what we get.
```
backend = Aer.get_backend('qasm_simulator')
job = execute(circuit, backend)
counts = job.result().get_counts()
plot_histogram(counts)
```
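This peak structure can also be checked classically: after the controlled multiplications, the amplitudes in the working register form an r-periodic comb, and its discrete Fourier transform is supported exactly on multiples of $2^n / r$. A small NumPy sketch (not part of the original circuit):

```python
import numpy as np

n, r = 3, 2
# r-periodic comb of amplitudes over the 2**n basis states of the working register
amps = np.array([1.0 if s % r == 0 else 0.0 for s in range(2 ** n)])
spectrum = np.abs(np.fft.fft(amps))
peaks = np.nonzero(spectrum > 1e-9)[0]
print(peaks)  # multiples of 2**n / r, i.e. 0 and 4
```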
Before running this on real hardware, let us try to optimize the circuit a bit. For that purpose, we combine the two subcircuits into one function. We can then apply a few optimizations. First, we skip the final swap gates in the QFT circuit - this will change our expected output from 100 to 001, but we can keep track of this manually when setting up the measurement. Next, we have two Hadamard gates on w[2] that cancel and that we can therefore remove. And the Pauli X gate on p[0] is never really used and can be dropped. We then run this once more on the simulator to verify that our circuit still works, and finally do a run on real hardware.
```
def shorAlgorithm(n=3):
# Create registers and circuit
p = QuantumRegister(4,"p")
w = QuantumRegister(n,"w")
c = ClassicalRegister(n, "c")
circuit = QuantumCircuit(w,p,c)
# Add Hadamard gates to working register
circuit.h(w[0])
circuit.h(w[1])
# Add conditional multiplication by a to primary register
circuit.cx(w[0], p[1])
circuit.cx(w[0], p[3])
#
# Now build the QFT part. We start with the most significant bit
#
for k in range(n):
j = n - k
# Add the Hadamard to qubit j-1
if (j - 1) != 2:
circuit.h(w[j-1])
#
# there is one conditional rotation for
# each qubit with lower significance
for i in reversed(range(j-1)):
circuit.cu1(2*np.pi/2**(j-i),w[i], w[j-1])
#
# and add the measurements
#
circuit.barrier(w)
circuit.measure(w[0],c[2])
circuit.measure(w[2], c[0])
circuit.measure(w[1], c[1])
return circuit
circuit = shorAlgorithm()
drawer(circuit, style=my_style)
backend = Aer.get_backend('qasm_simulator')
job = execute(circuit, backend)
counts = job.result().get_counts()
plot_histogram(counts)
IBMQ.load_accounts()
backend = IBMQ.get_backend('ibmq_16_melbourne')
job = execute(circuit, backend)
counts = job.result().get_counts()
plot_histogram(counts)
```
---
# COVID-19 correlated variables of Mexican States
This Notebook downloads Geopandas GeoDataFrames for States (admin1) derived from the 2020 Mexican Census: [INEGI](https://www.inegi.org.mx/temas/mg/).
For details on how these dataframes were created, see the [mexican-boundaries](https://github.com/sbl-sdsc/mexico-boundaries) GitHub project.
It also uses the variables of the dataframe obtained in the [Week 3 analysis](Week3States.ipynb).
```
from io import BytesIO
from urllib.request import urlopen
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
import ipywidgets as widgets
import seaborn as sns
import numpy as np
%matplotlib inline
%reload_ext autoreload
%autoreload 2
pd.options.display.max_rows = None # display all rows
pd.options.display.max_columns = None # display all columns
```
## Boundaries of Mexican States
Read boundary polygons for the Mexican states from a parquet file
```
admin1_url = 'https://raw.githubusercontent.com/sbl-sdsc/mexico-boundaries/main/data/mexico_admin1.parquet'
resp = urlopen(admin1_url)
admin1 = gpd.read_parquet(BytesIO(resp.read()))
```
Plot the state boundaries
```
admin1.plot();
```
## Map of COVID-19 correlated variables by State
Get COVID-19 correlated variables from data files
```
var_admin1 = pd.read_csv('../data/week3analyzesStates.csv')
var_admin1.head()
```
Add CVE_ENT state code column (example: convert 1 -> 01)
```
var_admin1['CVE_ENT'] = var_admin1['cve_ent'].apply(lambda i: f'{i:02d}')
var_admin1.head()
```
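The zero-padding itself is plain Python string formatting: the census boundary data stores state codes as two-character strings, so the integer codes must be padded before merging. A minimal sketch:

```python
# Integer state codes padded to the two-digit string form used as a merge key
codes = [1, 9, 32]
padded = [f"{c:02d}" for c in codes]
print(padded)  # ['01', '09', '32']
```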
Merge the geo dataframe with the COVID-19 analysis and variables dataframe using the common CVE_ENT column
```
df_admin1 = admin1.merge(var_admin1, on='CVE_ENT')
var = var_admin1.columns[1:-1]
```
To pick another variable to plot, it is only necessary to re-run the code from the next cell onward
```
var_widget = widgets.Dropdown(options=var, description='Select variable:',value='case_rate')
```
After running the next cell, it is necessary to pick the variable of interest and keep running the code
```
display(var_widget)
var_widget = var_widget.value
```
Plot selected COVID-19 correlated data
```
title = '{} of Mexico by State'.format(var_widget)
ax1 = df_admin1.plot(column=var_widget,
cmap='OrRd',
legend=True,
legend_kwds={'label': '{} by State'.format(var_widget),
'orientation': 'horizontal'},
figsize=(16, 11));
mexico = ax1.set_title(title, fontsize=15);
mexico = mexico.get_figure()
```
The variables of most interest were the ones that showed the highest correlation in the Week 3 analysis, together with the variables obtained in the Week 1 analysis. These are the ones plotted and analyzed this week:
- Case rate ('case_rate'):
The states with the highest case rates contain the biggest cities of the country, but also the most attractive tourist destinations.
- Case rate last 60 days ('case_rate_last_60_days'):
The previous statement also applies here, with one difference: now it is not all of the biggest cities that have the highest case rates, but rather the states most attractive to tourists. We assume that, thanks to the vaccination campaigns every country is implementing, many tourists from other countries (or even the same country) are now visiting the tourist areas more frequently, since far more people are vaccinated.
- Population density ('case_rate'):
It can be clearly seen that the center of the country has the highest population density, and it also has one of the highest case rates in the country.
- Mental problems ('pct_mental_problems'):
The highest percentages of population with mental problems are concentrated mostly in the states with urban areas.
- No health problems ('pct_no_problems'):
The previous statement also applies here: the highest percentages of population with no health problems are concentrated mostly in the states with urban areas.
- Obesity ('pct_pop_obesity'):
For obesity, the north and the south of the country showed the highest percentages. We assume this is because of the policies implemented since 2020 ([GOBIERNO DE MEXICO](https://www.insp.mx/avisos/5091-dia-mundial-obesidad-politicas.html)), under which minors are prohibited from buying many kinds of junk food, and such foods are now labeled with warnings that they contain an excess of certain substances that can lead to obesity. These policies were implemented first in the center of the country, while in the north and the south they are still relatively new.
These statements can be corroborated by selecting the variables listed in the select menu.
## Heatmap of correlation variables with case/death rate
The variables to correlate are selected
```
df = var_admin1[['case_rate', 'case_rate_last_60_days', 'death_rate',
'death_rate_last_60_days',var_widget]].copy()
```
The correlation of the selected variable with the case/death rates can be observed as follows
```
corr = df.corr()
mask = np.triu(np.ones_like(corr, dtype=bool))
f, ax = plt.subplots(figsize=(11, 9))
cmap = sns.diverging_palette(230, 20, as_cmap=True)
heatmap=sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
sns.set_context('paper', font_scale=2)
heatmap = heatmap.get_figure()
```
This heatmap gives a direct and more compact visual representation compared to the Week 3 analysis, but it has the limitation of only showing the correlation for one selected variable.
After running the next cell, it is necessary to choose whether the map figure will be saved, and keep running the code
```
save_widget = widgets.Dropdown(options=['Yes','No'], description='Save File?',value='No')
display(save_widget)
save_widget = save_widget.value
if save_widget != 'No':
mexico.savefig('../maps/{}_mexico_states.png'.format(var_widget))
```
---
# Quantum chemistry with VQE
In this tutorial, we show how to use PennyLane on Amazon Braket to solve an important problem in quantum chemistry: finding the ground-state energy of a molecule. This problem can be addressed on near-term quantum hardware by implementing the variational quantum eigensolver (VQE) algorithm. For more details on quantum chemistry and VQE, see the [Braket VQE notebook](../Hybrid_quantum_algorithms/vqe_Chemistry/vqe_Chemistry_braket.ipynb) and the [PennyLane tutorial](https://pennylane.ai/qml/demos/tutorial_qubit_rotation.html).
<div class="alert alert-block alert-info">
<b>Note:</b> Running this notebook requires PennyLane version 0.16 or later.
</div>
## From quantum chemistry to quantum circuits
The very first step is to translate the quantum chemistry problem into a form that a quantum computer can handle. In PennyLane, this is done with the ``qchem`` package. If you are running on a local machine, the ``qchem`` package must be installed separately by following [these](https://pennylane.readthedocs.io/en/stable/introduction/chemistry.html) instructions.
```
import pennylane as qml
from pennylane import qchem
from pennylane import numpy as np
```
Input chemistry data is often provided in the form of a geometry file containing details about the molecule. Here, we consider the atomic structure of $\mathrm{H}_2$ stored in the [h2.xyz](./qchem/h2.xyz) file. The qubit Hamiltonian is constructed using the ``qchem`` package.
```
symbols, coordinates = qchem.read_structure('qchem/h2.xyz')
h, qubits = qchem.molecular_hamiltonian(symbols, coordinates, name="h2")
print(h)
```
In the VQE algorithm, we compute the energy of the $\mathrm{H}_2$ molecule by measuring the expectation value of the Hamiltonian above on a variational quantum circuit. Our goal is to train the circuit parameters so that the expectation value of the Hamiltonian is minimized, thereby finding the ground-state energy of the molecule.
In this tutorial, we also compute the total spin. To that end, we construct the total spin operator $S^2$ using the ``qchem`` package.
```
electrons = 2 # Molecular hydrogen has two electrons
S2 = qchem.spin2(electrons, qubits)
print(S2)
```
## Grouping observables to reduce circuit executions
Suppose we want to measure the expectation value of the electronic Hamiltonian ``h``. This Hamiltonian consists of 15 individual observables, each a tensor product of Pauli operators.
```
print("Number of Pauli terms in h:", len(h.ops))
```
A simple approach to measuring the expectation value is to run the circuit 15 times, each time measuring one of the Pauli terms that make up the Hamiltonian ``h``. However, there may be a more efficient way. The Pauli terms can be split into groups (see PennyLane's [grouping](https://pennylane.readthedocs.io/en/stable/code/qml_grouping.html) module) whose elements can be measured simultaneously with a single circuit. The elements of each group are known as qubit-wise commuting observables. The Hamiltonian ``h`` can be split into five groups:
```
groups, coeffs = qml.grouping.group_observables(h.ops, h.coeffs)
print("Number of qubit-wise commuting groups:", len(groups))
```
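Qubit-wise commutativity is easy to check by hand: two Pauli words commute qubit-wise if, at every position, the single-qubit operators are equal or at least one of them is the identity. A pure-Python sketch of this criterion (an illustration, not PennyLane's implementation):

```python
def qubit_wise_commuting(p1, p2):
    """Pauli words given as strings, e.g. 'XI'. The words are QWC iff
    at each position the letters match or one of them is 'I'."""
    return all(a == b or 'I' in (a, b) for a, b in zip(p1, p2))

print(qubit_wise_commuting("XI", "XZ"))  # True: X matches X, and I pairs with anything
print(qubit_wise_commuting("XX", "ZZ"))  # False: X vs Z on both qubits
```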
In practice, this means we only need to run five circuits instead of 15 separate ones. The savings become even more pronounced as the number of Pauli terms in the Hamiltonian grows. For example, switching to a larger molecule or a different chemical basis set can increase both the number of qubits and the number of terms.
Fortunately, the PennyLane/Braket pipeline has built-in functionality to pre-group observables so that the number of device executions is minimized, saving both run time and simulation fees when using remote devices. We use this optimized grouping of observables in the rest of this tutorial.

## Defining the ansatz circuit
We now set up the ansatz circuit that will be trained to prepare the ground state of the Hamiltonian. The first step is to load the local Braket device.
```
dev = qml.device("braket.local.qubit", wires=qubits)
```
In this tutorial, we use a chemistry-inspired circuit, the [`AllSinglesDoubles`](https://pennylane.readthedocs.io/en/stable/code/api/pennylane.templates.subroutines.UCCSD.html) ansatz from [Delgado et al. (2020)](https://arxiv.org/abs/2106.13840). To use it, we need to define a few additional input items from the quantum chemistry side.
```
# Hartree-Fock state
hf_state = qchem.hf_state(electrons, qubits)
# generate single- and double-excitations
singles, doubles = qchem.excitations(electrons, qubits)
```
<div class="alert alert-block alert-info">
<b>Note:</b> A variety of ansätze and templates are <a href="https://pennylane.readthedocs.io/en/stable/introduction/templates.html#quantum-chemistry-templates">available</a>; choosing a different one results in a different circuit depth and number of trainable parameters.
</div>
The ansatz circuit itself is easy to define:
```
def circuit(params, wires):
    qml.templates.AllSinglesDoubles(params, wires, hf_state, singles, doubles)
```
Note that the output measurement has not yet been defined; that happens in the next step.
## Measuring the energy and total spin
As explained earlier, we want to minimize the expectation value of the qubit Hamiltonian, which corresponds to the energy of $\mathrm {H} _2$. The expectation values of this Hamiltonian and of the total-spin operator $\hat {S} ^2$ can be defined using:
```
energy_expval = qml.ExpvalCost(circuit, h, dev, optimize=True)
S2_expval = qml.ExpvalCost(circuit, S2, dev, optimize=True)
```
Note the ``optimize=True`` option: it instructs PennyLane and Braket to split each Hamiltonian into qubit-wise commuting groups for more efficient device execution.
Next, let's initialize some random values and evaluate the energy and spin. The total spin $S$ of the prepared state can be obtained from the expectation value $\langle \hat {S}^2 \rangle$ using $S=-\frac {1} {2} +\sqrt {\frac {1} {4} +\langle\hat {S} ^2\rangle}$. A function to compute $S$ can be defined as follows:
```
def spin(params):
    return -0.5 + np.sqrt(1 / 4 + S2_expval(params))
np.random.seed(1967)
params = np.random.normal(0, np.pi, len(singles) + len(doubles))
```
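As a quick sanity check of this formula: a singlet state ($\langle \hat{S}^2 \rangle = 0$) gives $S = 0$, while a triplet ($\langle \hat{S}^2 \rangle = 2$) gives $S = 1$:

```python
import math

def total_spin(s2_expval):
    # S = -1/2 + sqrt(1/4 + <S^2>)
    return -0.5 + math.sqrt(0.25 + s2_expval)

print(total_spin(0.0))  # singlet → 0.0
print(total_spin(2.0))  # triplet → 1.0
```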
The energy and total spin are:
```
print("Energy:", energy_expval(params))
print("Spin: ", spin(params))
```
Since we chose random parameters, the measured energy does not correspond to the ground-state energy, and the prepared state is not an eigenstate of the total-spin operator. We now need to train the parameters to find the minimum energy.
## Minimizing the energy
The energy can be minimized by choosing an optimizer and running a standard optimization loop.
```
opt = qml.GradientDescentOptimizer(stepsize=0.4)
iterations = 40
energies = []
spins = []
for i in range(iterations):
    params = opt.step(energy_expval, params)
    e = energy_expval(params)
    s = spin(params)
    energies.append(e)
    spins.append(s)
    if (i + 1) % 5 == 0:
        print(f"Completed iteration {i + 1}")
        print("Energy:", e)
        print("Total spin:", s)
        print("----------------")
print(f"Optimized energy: {e} Ha")
print(f"Corresponding total spin: {s}")
```
The exact ground-state energy of the hydrogen molecule has been computed theoretically as ``-1.136189454088`` Hartrees (Ha). Note that the error in the optimized energy is less than $10^ {-5} $ Ha. Moreover, the optimized state is an eigenstate of the total-spin operator with the eigenvalue $S=0$ expected for the ground state of the $\mathrm {H} _2$ molecule. The results above therefore look very promising! Increasing the number of iterations brings us even closer to the theoretical value.
Let's visualize how the two quantities changed during the optimization.
```
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
theory_energy = -1.136189454088
theory_spin = 0
plt.hlines(theory_energy, 0, 39, linestyles="dashed", colors="black")
plt.plot(energies)
plt.xlabel("Steps")
plt.ylabel("Energy")
axs = plt.gca()
inset = inset_axes(axs, width="50%", height="50%", borderpad=1)
inset.hlines(theory_spin, 0, 39, linestyles="dashed", colors="black")
inset.plot(spins, "r")
inset.set_xlabel("Steps")
inset.set_ylabel("Total spin");
```
In this notebook we learned how to use the PennyLane/Braket pipeline to efficiently find the ground-state energy of a molecule!
<div class="alert alert-block alert-info">
<b>What's next?</b> The <code>qchem</code> folder contains additional molecular-structure files representing different interatomic distances for the hydrogen molecule. Pick one of the distances and find the ground-state energy. How does the ground-state energy change with the interatomic distance?
</div>
```
import logging
from conf import LisaLogging
LisaLogging.setup()
# Generate plots inline
%matplotlib inline
import os
```
# Target Connectivity
## Board specific settings
Board-specific settings can be collected into a JSON
platform description file:
```
!ls -la $LISA_HOME/libs/utils/platforms/
!cat $LISA_HOME/libs/utils/platforms/hikey.json
```
## Single configuration dictionary
```
# Check which Android devices are available
!adb devices
ADB_DEVICE = '00b43d0b08a8a4b8'
# Unified configuration dictionary
my_conf = {
# Target platform
"platform" : 'android',
# Location of external tools (adb, fastboot, systrace, etc)
# These properties can be used to override the environment variables:
# ANDROID_HOME and CATAPULT_HOME
"ANDROID_HOME" : "/opt/android-sdk-linux",
"CATAPULT_HOME" : "/home/derkling/Code/catapult",
# Boards specific settings can be collected into a JSON
# platform description file, to be placed under:
# LISA_HOME/libs/utils/platforms
"board" : 'hikey',
# If you have multiple Android device connected, here
# we can specify which one to target
"device" : ADB_DEVICE,
# Folder where all the results will be collected
"results_dir" : "ReleaseNotes_v16.09",
}
from env import TestEnv
te = TestEnv(my_conf, force_new=True)
target = te.target
```
# Energy Meters Support
- Simple unified interface for multiple acquisition boards
- exposes two simple methods: **reset()** and **report()**
- reports **energy** consumption
- reports additional info supported by the specific probe,<br>
e.g. collected samples, stats on currents and voltages, etc.
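The reset()/report() pattern can be mimicked with a stopwatch-style class; this is only an illustration of the interface shape, not LISA's actual implementation:

```python
import time

class ToyEnergyMeter:
    """Accumulates 'energy' as elapsed time times a fixed fake power draw."""
    FAKE_POWER_W = 2.0

    def reset(self):
        # start a new measurement window
        self._t0 = time.monotonic()

    def report(self):
        # "energy" consumed since the last reset(), in joules
        elapsed = time.monotonic() - self._t0
        return {"BAT": self.FAKE_POWER_W * elapsed}

em = ToyEnergyMeter()
em.reset()
time.sleep(0.1)        # stand-in for the workload being measured
print(em.report())     # e.g. {'BAT': 0.2...}
```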
```
from time import sleep
def sample_energy(energy_meter, time_s):
    # Reset the configured energy counters
    energy_meter.reset()
    # Run the workload you want to measure
    #
    # In this simple example we just wait some time while the
    # energy counters accumulate power samples
    sleep(time_s)
    # Read and report the measured energy (since last reset)
    return energy_meter.report(te.res_dir)
```
- Channel mapping support
- allows giving a custom name to each channel used
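The mapping itself is just a renaming layer between physical probe channels and logical names; a sketch of the idea (names are illustrative):

```python
# Raw samples as reported by the probe, keyed by physical channel
raw_report = {"CH0": 1.234, "CH1": 0.567}

# User-supplied mapping: logical name -> physical channel
channel_map = {"BAT": "CH0", "USB": "CH1"}

named_report = {name: raw_report[ch] for name, ch in channel_map.items()}
print(named_report)  # → {'BAT': 1.234, 'USB': 0.567}
```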
## ARM Energy Probe (AEP)
Requirements:
1. the **caiman binary tool** must be available in PATH<br>
https://github.com/ARM-software/lisa/wiki/Energy-Meters-Requirements#arm-energy-probe-aep
2. the **ttyACMx device** created once you plug in the AEP device
```
!ls -la /dev/ttyACM*
ACM_DEVICE = '/dev/ttyACM1'
```
### Direct usage
```
# Energy Meters Configuration for ARM Energy Probe
aep_conf = {
'conf' : {
# Value of the shunt resistor [Ohm] for each channel
'resistor_values' : [0.010],
# Device entry assigned to the probe on the host
'device_entry' : ACM_DEVICE,
},
'channel_map' : {
'BAT' : 'CH0'
}
}
from energy import AEP
aep_em = AEP(target, aep_conf, '/tmp')
nrg_report = sample_energy(aep_em, 2)
print(nrg_report)
!cat $nrg_report.report_file
```
### Usage via TestEnv
```
my_conf = {
# Configure the energy meter to use
"emeter" : {
# Require usage of an AEP meter
"instrument" : "aep",
# Configuration parameters require by the AEP device
"conf" : {
# Value of the shunt resistor in Ohm
'resistor_values' : [0.099],
# Device entry assigned to the probe on the host
'device_entry' : ACM_DEVICE,
},
# Map AEP's channels to logical names (used to generate reports)
'channel_map' : {
'BAT' : 'CH0'
}
},
# Other target configurations
"platform" : 'android',
"board" : 'hikey',
"device" : ADB_DEVICE,
"results_dir" : "ReleaseNotes_v16.09",
"ANDROID_HOME" : "/opt/android-sdk-linux",
"CATAPULT_HOME" : "/home/derkling/Code/catapult",
}
from env import TestEnv
te = TestEnv(my_conf, force_new=True)
for i in range(1, 11):
    nrg_report = sample_energy(te.emeter, 1)
    nrg_bat = float(nrg_report.channels['BAT'])
    print("Sample {:2d}: {:.3f}".format(i, nrg_bat))
```
## BayLibre's ACME board (ACME)
Requirements:
1. the **iio-capture tool** must be available in PATH<br>
https://github.com/ARM-software/lisa/wiki/Energy-Meters-Requirements#iiocapture---baylibre-acme-cape
2. the ACME CAPE should be reachable over the network
```
!ping -c1 baylibre-acme.local | grep '64 bytes'
```
### Direct usage
```
# Energy Meters Configuration for BayLibre's ACME
acme_conf = {
"conf" : {
#'iio-capture' : '/usr/bin/iio-capture',
#'ip_address' : 'baylibre-acme.local',
},
"channel_map" : {
"Device0" : 0,
"Device1" : 1,
},
}
from energy import ACME
acme_em = ACME(target, acme_conf, '/tmp')
nrg_report = sample_energy(acme_em, 2)
print(nrg_report)
!cat $nrg_report.report_file
```
### Usage via TestEnv
```
my_conf = {
# Configure the energy meter to use
"emeter" : {
# Require usage of an AEP meter
"instrument" : "acme",
"conf" : {
#'iio-capture' : '/usr/bin/iio-capture',
#'ip_address' : 'baylibre-acme.local',
},
'channel_map' : {
'Device0' : 0,
'Device1' : 1,
},
},
# Other target configurations
"platform" : 'android',
"board" : 'hikey',
"device" : ADB_DEVICE,
"results_dir" : "ReleaseNotes_v16.09",
"ANDROID_HOME" : "/opt/android-sdk-linux",
"CATAPULT_HOME" : "/home/derkling/Code/catapult",
}
from env import TestEnv
te = TestEnv(my_conf, force_new=True)
target = te.target
for i in range(1, 11):
    nrg_report = sample_energy(te.emeter, 1)
    nrg_bat = float(nrg_report.channels['Device1'])
    print("Sample {:2d}: {:.3f}".format(i, nrg_bat))
```
# Android Integration
A new Android library has been added which provides APIs to:
- simplify the interaction with a device
- execute interesting workloads and benchmarks
- make it easy to integrate new workloads and benchmarks
Not intended to replace WA, but instead to provide a Python-based<br>
programming interface to **automate reproducible experiments** on<br>
an Android device.
## System control APIs
```
from android import System
print "Supported functions:"
for f in dir(System):
if "__" in f:
continue
print " ", f
```
Capturing main useful actions, for example:
- ensure we set AIRPLANE_MODE before measuring scheduler energy
- provide simple support for input actions (relative swipes)
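A relative swipe only needs the screen resolution to be turned into absolute input coordinates; a sketch of the arithmetic (hypothetical helper, not the actual LISA API):

```python
def rel_swipe(width, height, x0, y0, x1, y1):
    """Convert swipe endpoints given as fractions of the screen (0.0-1.0)
    into absolute pixel coordinates, e.g. for `adb shell input swipe`."""
    return (int(width * x0), int(height * y0),
            int(width * x1), int(height * y1))

# Horizontal swipe across the middle of a 1080x1920 screen
print(rel_swipe(1080, 1920, 0.2, 0.5, 0.8, 0.5))  # → (216, 960, 864, 960)
```

Expressing gestures this way keeps them reproducible across devices with different resolutions.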
```
# logging.getLogger().setLevel(logging.DEBUG)
# Example (use tab to complete)
System.
System.menu(target)
System.back(target)
youtube_apk = System.list_packages(target, 'YouTube')
if youtube_apk:
System.start_app(target, youtube_apk[0])
logging.getLogger().setLevel(logging.INFO)
```
## Screen control APIs
```
from android import Screen
print "Supported functions:"
for f in dir(Screen):
if "__" in f:
continue
print " ", f
# logging.getLogger().setLevel(logging.DEBUG)
# Example (use TAB to complete)
Screen.
Screen.set_brightness(target, auto=False, percent=100)
Screen.set_orientation(target, auto=False, portrait=False)
# logging.getLogger().setLevel(logging.INFO)
```
## Workloads Execution
A simple workload class makes it easy to add a wrapper for the execution
of a specific Android application.
**NOTE:** To keep things simple, LISA does not provide APK installation support.
*All the exposed APIs assume that the required packages are already installed<br>
on the target. Whenever a package is missing, LISA reports it, and it is up<br>
to the user to install it before use.*
A wrapper class usually requires to specify:
- a package name<br>
which will be used to verify if the APK is available in the target
- a run method<br>
which usually exploits the other Android APIs to define a **reproducible
execution** of the specified workload
A reproducible experiment should take care of:
- setting up wireless **connection status**
- setting up **screen orientation and backlight** level
- properly collecting **energy measurements** across the relevant part of the experiment
- possibly collecting **frame statistics** whenever available
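Putting the two lists together, a wrapper boils down to a package name plus a run() method that performs the setup steps in a fixed order; a skeletal sketch with stand-in steps (hypothetical names, not LISA's actual classes):

```python
class ToyWorkload:
    package = "com.example.app"   # used to check that the APK is installed

    def run(self, duration_s, collect="energy"):
        steps = []
        steps.append("set airplane mode")        # connection status
        steps.append("fix screen orientation")   # reproducible rendering
        steps.append("set backlight to 100%")
        steps.append(f"measure {collect} for {duration_s}s")  # the workload itself
        steps.append("collect frame statistics")
        return steps

wl = ToyWorkload()
for step in wl.run(15):
    print("-", step)
```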
### Example of YouTube integration
Here is an example wrapper which allows playing a YouTube<br>
video for a specified number of seconds:
https://github.com/ARM-software/lisa/blob/master/libs/utils/android/workloads/youtube.py
### Example usage of the Workload API
```
# logging.getLogger().setLevel(logging.DEBUG)
from android import Workload
# Get the list of available workloads
wloads = Workload(te)
wloads.availables(target)
yt = Workload.get(te, name='YouTube')
# Play Big Buck Bunny for 15s starting from 1:20
video_id = 'XSGBVzeBUbk'
video_url = "https://youtu.be/{}?t={}s".format(video_id, 80)
# Play video and measure energy consumption
results = yt.run(te.res_dir,
video_url, video_duration_s=16,
collect='energy')
results
framestats = results[0]
!cat $framestats
```
## Benchmarks
Android benchmarks can be integrated as standalone notebooks, like for example
what we provide for PCMark:
https://github.com/ARM-software/lisa/blob/master/ipynb/android/benchmarks/Android_PCMark.ipynb
Alternatively we are adding other benchmarks as predefined Android workloads.
### UiBench support
Here is an example of the UiBench workload, which runs a specified test
for a given duration:
```
from android import Workload
ui = Workload.get(te, name='UiBench')
# Play video and measure energy consumption
results = ui.run(te.res_dir,
ui.test_GlTextureView,
duration_s=5,
collect='energy')
results
framestats = results[0]
!cat $framestats
```
# Improved Trace Analysis support
The Trace module is a wrapper around the TRAPpy library which has been
updated to:
- support parsing of **systrace** file format<br>
requires catapult locally installed<br>
https://github.com/catapult-project/catapult
- parsing and DataFrame generation for **custom events**
## Create an example trace
**NOTE:** the cells in this section are only needed to create
a trace file used by the following sections
```
# The following examples use a HiKey board
ADB_DEVICE = '607A87C400055E6E'
# logging.getLogger().setLevel(logging.DEBUG)
# Unified configuration dictionary
my_conf = {
# Tools required
"tools" : ['rt-app', 'trace-cmd'],
# RTApp calibration
#"modules" : ['cpufreq'],
"rtapp-calib" : {
"0": 254, "1": 252, "2": 252, "3": 251,
"4": 251, "5": 252, "6": 251, "7": 251
},
# FTrace configuration
"ftrace" : {
# Events to trace
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_wakeup_tracking",
"sched_stat_wait",
"sched_overutilized",
"sched_contrib_scale_f",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_tune_config",
"sched_tune_filter",
"sched_tune_tasks_update",
"sched_tune_boostgroup_update",
"sched_boost_cpu",
"sched_boost_task",
"sched_energy_diff",
"cpu_capacity",
"cpu_frequency",
"cpu_idle",
"walt_update_task_ravg",
"walt_update_history",
"walt_migration_update_sum",
],
# # Kernel functions to profile
# "functions" : [
# "pick_next_task_fair",
# "select_task_rq_fair",
# "enqueue_task_fair",
# "update_curr_fair",
# "dequeue_task_fair",
# ],
# Per-CPU buffer configuration
"buffsize" : 10 * 1024,
},
# Target platform
"platform" : 'android',
"board" : 'hikey',
"device" : ADB_DEVICE,
"results_dir" : "ReleaseNotes_v16.09",
"ANDROID_HOME" : "/opt/android-sdk-linux",
"CATAPULT_HOME" : "/home/derkling/Code/catapult",
}
from env import TestEnv
te = TestEnv(my_conf, force_new=True)
target = te.target
from wlgen import RTA,Ramp
# Let's run a simple RAMP task
rta = RTA(target, 'ramp')
rta.conf(
kind='profile',
params = {
'ramp' : Ramp().get()
}
);
te.ftrace.start()
target.execute("echo 'my_marker: label=START' > /sys/kernel/debug/tracing/trace_marker",
as_root=True)
rta.run(out_dir=te.res_dir)
target.execute("echo 'my_marker: label=STOP' > /sys/kernel/debug/tracing/trace_marker",
as_root=True)
te.ftrace.stop()
trace_file = os.path.join(te.res_dir, 'trace.dat')
te.ftrace.get_trace(trace_file)
```
## DataFrame namespace
```
from trace import Trace
events_to_parse = my_conf['ftrace']['events']
events_to_parse += ['my_marker']
trace = Trace(trace_file, events_to_parse, te.platform)
trace.available_events
# Use TAB to complete
trace.data_frame.
rt_tasks = trace.data_frame.rt_tasks()
rt_tasks.head()
lat_df = trace.data_frame.latency_df('ramp')
lat_df.head()
custom_df = trace.data_frame.trace_event('my_marker')
custom_df
ctxsw_df = trace.data_frame.trace_event('sched_switch')
ctxsw_df.head()
```
## Analysis namespace
```
# Use TAB to complete
trace.analysis.
trace.analysis.tasks.plotTasks(tasks='ramp',
signals=['util_avg', 'boosted_util',
'sched_overutilized', 'residencies'])
lat_data = trace.analysis.latency.plotLatency('ramp')
lat_data.T
trace.analysis.frequency.plotClusterFrequencies()
trace.analysis.frequency.plotClusterFrequencyResidency(pct=True, active=True)
trace.analysis.frequency.plotClusterFrequencyResidency(pct=True, active=False)
```
# Notebooks
## New collection of examples
Each new API introduced in LISA has an associated notebook which shows a
complete example of its usage.<br>
Examples are usually defined to:
- setup the connection to a target (usually a JUNO board)
- configure a workload (usually using RTApp)
- run workload and collect required measures
- show the most common functions exposed by the new API
- Energy meters APIs:<br>
https://github.com/ARM-software/lisa/tree/master/ipynb/examples/energy_meter
- Trace analysis APIs:<br>
https://github.com/ARM-software/lisa/tree/master/ipynb/examples/trace_analysis
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
style.use("fivethirtyeight")
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
data=pd.read_csv("data.txt",sep=",")
data.head()
data.describe().transpose()
data.isnull().sum()
data.count()
data["Date_of_Journey"]=pd.to_datetime(data["Date_of_Journey"])
data.head()
data["year"]=data["Date_of_Journey"].apply(lambda date: date.year)
data["month"]=data["Date_of_Journey"].apply(lambda date: date.month)
data["day"]=data["Date_of_Journey"].apply(lambda date: date.day)
data.drop(["Date_of_Journey"],axis=1,inplace=True)
data.head()
data["Arrival_Time"]=data["Arrival_Time"].str.split(' ').str[0]
data["Arrival_hour"]=data["Arrival_Time"].str.split(':').str[0].astype(int)
data["Arrival_minute"]=data["Arrival_Time"].str.split(':').str[1].astype(int)
data.drop(["Arrival_Time"],axis=1,inplace=True)
data.head()
data["Dep_Time"]=data["Dep_Time"].str.split(' ').str[0]
data["Dep_hour"]=data["Dep_Time"].str.split(':').str[0].astype(int)
data["Dep_minute"]=data["Dep_Time"].str.split(':').str[1].astype(int)
data.drop(["Dep_Time"],axis=1,inplace=True)
data.head()
data["Total_Stops"].isnull().sum()
data[data["Total_Stops"].isnull()==True]
data["Total_Stops"]=data["Total_Stops"].fillna("1 stop")
data["Total_Stops"].isnull().sum()
data["Total_Stops"]=data["Total_Stops"].replace("non-stop","0 stop")
data["Totalstops"]=data["Total_Stops"].str.split(" ").str[0].astype(int)
data.drop(["Total_Stops"],axis=1,inplace=True)
data.head()
data["Route"].isnull().sum()
data[data["Route"].isnull()==True]
data["Route"][9039]="DEL?COK"
print(data["Route"].isnull().sum())
data[9039:9040]
data["route1"]=data["Route"].str.split('?').str[0]
data["route2"]=data["Route"].str.split('?').str[1]
data["route3"]=data["Route"].str.split('?').str[2]
data["route4"]=data["Route"].str.split('?').str[3]
data.drop(["Route"],axis=1,inplace=True)
data.head()
data["route1"].fillna("None",inplace=True)
data["route2"].fillna("None",inplace=True)
data["route3"].fillna("None",inplace=True)
data["route4"].fillna("None",inplace=True)
data.head()
data["Duration_hours"]=data["Duration"].str.split(" ").str[0]
data["Duration_minutes"]=data["Duration"].str.split(" ").str[1]
data["Duration_hours"]=data["Duration_hours"].str.split("h").str[0]
data["Duration_minutes"]=data["Duration_minutes"].str.split("m").str[0]
data["Duration_hours"][6474]=int(0)
data["Duration_minutes"][6474]=int(5)
data["Duration_hours"]=data["Duration_hours"].astype(int)
data["Duration_minutes"]=data["Duration_hours"].astype(int)
data["Duration_hours"].fillna(0,inplace=True)
data["Duration_minutes"].fillna(0,inplace=True)
data.drop(["Duration"],axis=1,inplace=True)
data.head()
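# The hand-patched row above (index 6474) exists because its Duration is just
# "5m", which breaks the h/m split. As an illustrative sanity check (not part
# of the original pipeline), a tiny parser covers all three shapes at once:
def parse_duration(text):
    hours = minutes = 0
    for part in text.split():
        if part.endswith("h"):
            hours = int(part[:-1])
        elif part.endswith("m"):
            minutes = int(part[:-1])
    return hours, minutes

assert parse_duration("2h 50m") == (2, 50)
assert parse_duration("19h") == (19, 0)
assert parse_duration("5m") == (0, 5)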
from sklearn.preprocessing import LabelEncoder
encoder=LabelEncoder()
data["Airline"]=encoder.fit_transform(data["Airline"])
data["Source"]=encoder.fit_transform(data["Source"])
data["Destination"]=encoder.fit_transform(data["Destination"])
data["Additional_Info"]=encoder.fit_transform(data["Additional_Info"])
data["route1"]=encoder.fit_transform(data["route1"])
data["route2"]=encoder.fit_transform(data["route2"])
data["route3"]=encoder.fit_transform(data["route3"])
data["route4"]=encoder.fit_transform(data["route4"])
data.head()
data.describe().transpose()
data.corr()
plt.figure(figsize=(10,6))
sns.heatmap(data.corr(),yticklabels=False)
fig=plt.figure(figsize=(19,12.5))
fig.add_subplot(2,2,1)
sns.countplot(data["Airline"])
fig.add_subplot(2,2,2)
sns.countplot(data["Source"])
fig.add_subplot(2,2,3)
sns.countplot(data["Destination"])
fig.add_subplot(2,2,4)
sns.countplot(data["Totalstops"])
fig=plt.figure(figsize=(19,12.5))
fig.add_subplot(2,2,1)
sns.boxplot(x="Airline",y="Price",data=data)
fig.add_subplot(2,2,2)
sns.boxplot(x="Source",y="Price",data=data)
fig.add_subplot(2,2,3)
sns.boxplot(x="Destination",y="Price",data=data)
fig.add_subplot(2,2,4)
sns.boxplot(x="Totalstops",y="Price",data=data)
fig=plt.figure(figsize=(19,12.5))
fig.add_subplot(2,2,1)
sns.lineplot(x="Duration_hours",y="Price",data=data)
fig.add_subplot(2,2,2)
sns.lineplot(x="day",y="Price",data=data,c="r")
fig.add_subplot(2,2,3)
sns.lineplot(x="month",y="Price",data=data,c="g")
fig.add_subplot(2,2,4)
sns.lineplot(x="Totalstops",y="Price",data=data,c="black")
plt.figure(figsize=(10,6))
sns.lineplot(x="Airline",y="Price",data=data)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
#data.drop(["Duration_hours","Duration_minutes"],axis=1,inplace=True)
x=data.drop(["Price"],axis=1)
y=data["Price"]
x_train,x_test,y_train,y_test=train_test_split(x,y,random_state=10)
scalar=StandardScaler()
x_train=scalar.fit_transform(x_train)
x_test=scalar.transform(x_test)
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import EarlyStopping
def creatingmodel(optimizer="adam",loss="mse"):
model=Sequential()
model.add(Dense(150,input_dim=18,activation="relu"))
model.add(Dense(100,activation="relu"))
model.add(Dense(85,activation="relu"))
model.add(Dense(75,activation="relu"))
model.add(Dense(63,activation="relu"))
model.add(Dense(56,activation="relu"))
model.add(Dense(18,activation="relu"))
model.add(Dense(9,activation="relu"))
model.add(Dense(1,activation="relu"))
model.compile(optimizer=optimizer,loss=loss)
return model
model=creatingmodel()
callbacks=EarlyStopping(monitor="val_loss",mode="min",verbose=1,patience=400)
model.fit(x=x_train,y=y_train,
validation_data=(x_test,y_test),
epochs=2000,batch_size=20,verbose=1,callbacks=[callbacks])
loss=pd.DataFrame(model.history.history)
loss.tail()
loss.plot()
y_pred=model.predict(x_test)
from sklearn.metrics import r2_score,mean_absolute_error,mean_squared_error,explained_variance_score
error=pd.DataFrame([[mean_squared_error(y_test,y_pred),
np.sqrt(mean_squared_error(y_test,y_pred)),
mean_absolute_error(y_test,y_pred),
explained_variance_score(y_test,y_pred)]],
columns=["mean_squared_error","mean_squared_root_error",
"mean_absolute_error","explained_variance_score"])
error
from tensorflow.keras.models import model_from_json
model_json=model.to_json()
with open("model.json","w") as json_file:
json_file.write(model_json)
model.save_weights("model.h5")
json_file=open("model.json","r")
loaded_model_json=json_file.read()
json_file.close()
loaded_model=model_from_json(loaded_model_json)
loaded_model.load_weights("model.h5")
loaded_model.compile(loss="mse",optimizer="adam")
scores=loaded_model.predict(x_test)
errorloadedwights=pd.DataFrame([[mean_squared_error(y_test,scores),
np.sqrt(mean_squared_error(y_test,scores)),
mean_absolute_error(y_test,scores),
explained_variance_score(y_test,scores)]],
columns=["mean_squared_error","mean_squared_root_error",
"mean_absolute_error","explained_variance_score"])
errorloadedwights
from sklearn.ensemble import RandomForestRegressor
RFG=RandomForestRegressor(n_estimators=100,random_state=5)
RFG.fit(x_train,y_train)
y_predict_rfg=RFG.predict(x_test)
error_rfg=pd.DataFrame([[mean_squared_error(y_test,y_predict_rfg),
np.sqrt(mean_squared_error(y_test,y_predict_rfg)),
mean_absolute_error(y_test,y_predict_rfg),
explained_variance_score(y_test,y_predict_rfg)]],
columns=["mean_squared_error","mean_squared_root_error",
"mean_absolute_error","explained_variance_score"])
error_rfg
from sklearn.model_selection import GridSearchCV
param_grid={"max_depth":[10,20,30,None],
"max_leaf_nodes":[5,10,20,None],
"n_estimators":[100,200,150],
"bootstrap":[True,False],
"min_samples_leaf":[1,2,3]}
gridsearch=GridSearchCV(RandomForestRegressor(),param_grid=param_grid,verbose=3)
gridsearch.fit(x_train,y_train)
print(gridsearch.best_params_)
RFG=RandomForestRegressor(n_estimators=100,random_state=5,
max_depth=None,max_leaf_nodes=None,
min_samples_leaf=1,bootstrap=True)
RFG.fit(x_train,y_train)
y_predict_rfg=RFG.predict(x_test)
error_rfg=pd.DataFrame([[mean_squared_error(y_test,y_predict_rfg),
np.sqrt(mean_squared_error(y_test,y_predict_rfg)),
mean_absolute_error(y_test,y_predict_rfg),
explained_variance_score(y_test,y_predict_rfg)]],
columns=["mean_squared_error","mean_squared_root_error",
"mean_absolute_error","explained_variance_score"])
error_rfg
rfg_train_score=RFG.score(x_train,y_train)
rfg_train_score
rfg_test_score=RFG.score(x_test,y_test)
rfg_test_score
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import intake,yaml
import intake_esm
from scipy import special
import keras
from keras.models import Model
from keras.layers import Dense, Input
def latest_version(cat):
    """
    input
        cat: esm datastore
    output
        esm datastore with the latest DRS versions
    """
    latest_cat = cat.df.sort_values(by=['version','path']).drop_duplicates(['temporal_subset','source_id','table_id',
                                                                            'institution_id','variable_id','member_id',
                                                                            'grid_label','experiment_id'],keep='last')
    return latest_cat
#col_url = "https://cmip6-nc.s3.us-east-2.amazonaws.com/esgf-world.json"
col_url = "https://raw.githubusercontent.com/aradhakrishnanGFDL/gfdl-aws-analysis/master/esm-collection-spec-examples/esgf-world.json"
col = intake.open_esm_datastore(col_url)
esmcol_data = col.esmcol_data
model_name = 'UKESM1-0-LL'
#mlotst, tos,uo,vo
#tos_ sea surface temperature
#area cello
#ofx ocean fixed
#omon ocean monthly average
query_Omon_tos = dict(experiment_id=['abrupt-4xCO2','1pctCO2','historical'],
table_id=['Omon'],
member_id=["r1i1p1f1","r1i1p1f2"],
source_id=model_name,
grid_label=['gn'],
variable_id=["tos"])
query_Ofx = dict(#experiment_id=['abrupt-4xCO2','1pctCO2','historical'],
table_id=['Ofx'],
#member_id=["r1i1p1f1","r1i1p1f2"],
source_id=model_name,
grid_label=['gn'],
variable_id=["areacello"])
def pp_enso(ds):
    ds = ds.copy()  # the wrapper function makes a copy of the ds and works from this
    #ds = rename_cmip6(ds)
    ds = fix_time(ds)
    #ds = fix_units(ds)
    #ds = correct_units(ds)
    return ds
cat_Omon_tos = col.search(**query_Omon_tos)
cat_Omon_tos_lat = latest_version(cat_Omon_tos)
cat_Omon_tos_latest = intake.open_esm_datastore(cat_Omon_tos_lat,esmcol_data=esmcol_data)
cat_Omon_tos_latest.df
cat_Ofx = col.search(**query_Ofx)
cat_Ofx_lat = latest_version(cat_Ofx)
cat_Ofx_latest = intake.open_esm_datastore(cat_Ofx_lat,esmcol_data=esmcol_data)
cat_Ofx.df
dict_Omon_tos = cat_Omon_tos_latest.to_dataset_dict(storage_options=dict(anon=True), cdf_kwargs={'decode_times': True,'chunks': {'time': 1}})
dict_Ofx = cat_Ofx_latest.to_dataset_dict(storage_options=dict(anon=True),cdf_kwargs={'decode_times': True,'chunks': {}})
dict_Omon_tos.keys()
dict_Ofx.keys()
ds_Ofx = dict_Ofx["CMIP6.MOHC.UKESM1-0-LL.piControl.Ofx"] #xarray dataset object to access Ofx areacello dataset used to calculate the weighted average
```
CALCULATING the summation of areacello
```
def distance_on_unit_sphere(lat1, long1, lat2, long2):
    # Convert latitude and longitude to
    # spherical coordinates in radians.
    degrees_to_radians = np.pi / 180.0
    # phi = 90 - latitude
    phi1 = (90.0 - lat1) * degrees_to_radians
    phi2 = (90.0 - lat2) * degrees_to_radians
    # theta = longitude
    theta1 = long1 * degrees_to_radians
    theta2 = long2 * degrees_to_radians
    # Compute spherical distance from spherical coordinates.
    # For two locations in spherical coordinates
    # (1, theta, phi) and (1, theta', phi')
    # cosine( arc length ) =
    #    sin phi sin phi' cos(theta-theta') + cos phi cos phi'
    # distance = rho * arc length
    cos = np.sin(phi1) * np.sin(phi2) * np.cos(theta1 - theta2) + np.cos(phi1) * np.cos(phi2)
    arc = np.arccos(cos)
    # Remember to multiply arc by the radius of the earth
    # in your favorite set of units to get length.
    return arc
def find_closest_grid_point(lon, lat, gridlon, gridlat):
    """find integer indices of the closest grid point in a grid of coordinates
    gridlon, gridlat for a given geographical lon/lat.

    PARAMETERS:
    -----------
    lon (float): longitude of point to find
    lat (float): latitude of point to find
    gridlon (numpy.ndarray): grid longitudes
    gridlat (numpy.ndarray): grid latitudes

    RETURNS:
    --------
    iclose, jclose: integer
        grid indices for geographical point of interest
    """
    if isinstance(gridlon, xr.core.dataarray.DataArray):
        gridlon = gridlon.values
    if isinstance(gridlat, xr.core.dataarray.DataArray):
        gridlat = gridlat.values
    dist = distance_on_unit_sphere(lat, lon, gridlat, gridlon)
    jclose, iclose = np.unravel_index(dist.argmin(), gridlon.shape)
    return iclose, jclose
i,j = find_closest_grid_point(-157, -5, ds_Ofx.longitude, ds_Ofx.latitude)  # southwest corner
i,j
k,l = find_closest_grid_point(-90, 5, ds_Ofx.longitude, ds_Ofx.latitude)  # northeast corner
k,l
ds_Ofx.dims
#TODO INSERT CELL SELECT region of interest in areacello
areacello_nino3 = ds_Ofx.areacello.sel(j = slice(j,l), i = slice(i,k))
plt.imshow(areacello_nino3[0])
plt.colorbar()
#TODO
#CALCULATE total_areacello, summation across lat,lon (in our dataset y,x respectively)
total_areacello = areacello_nino3.sum(dim=('i', 'j'))
```
NINO3 INDEX CALCULATION
REGION
(5S-5N , 150W-90W)
SELECT tos and areacello for the region of interest
## Historical
```
ds_hist = dict_Omon_tos["CMIP6.MOHC.UKESM1-0-LL.historical.Omon"]
ds_hist.coords
#ds_hist.time.to_dataframe()
tos_his = ds_hist.tos.sel(time = slice("1980", "2011"))
tos_his.longitude
#TODO INSERT CORRECT CODE TO SELECT SPECIFIED REGION (lat range and lon range) in TOS
#tos = ds.tos......
tos_his = ds_hist.tos.sel(j = slice(j,l), i = slice(i,k), time = slice("1980", "2011"))
tos_his
tos_his.isel(time=0).plot()
```
CALCULATE SEA SURFACE TEMPERATURE WEIGHTED AVERAGE
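The weighted average computed below is simply sum(tos * area) / sum(area); with toy numbers, two cells at 20 °C and 30 °C with areas 1 and 3 average to 27.5 °C:

```python
tos  = [20.0, 30.0]   # cell temperatures
area = [1.0, 3.0]     # cell areas (stand-in for areacello)

weighted_mean = sum(t * a for t, a in zip(tos, area)) / sum(area)
print(weighted_mean)  # → 27.5
```

The xarray version in the next cell does the same thing, broadcasting over the time dimension as well.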
```
areacello_nino3
tos_mean_nino3_his = (tos_his * areacello_nino3).sum(dim=('i', 'j')) / total_areacello
#nino3_index.plot()
tos_mean_nino3_his.compute()
tos_mean_nino3_his.size  # one value per month over the selected 1980-2011 window
tos_mean_nino3_his.isel(time=0).compute()
datetimeindex = tos_mean_nino3_his.indexes['time'].to_datetimeindex()
tos_mean_nino3_his['time'] = datetimeindex
tos_mean_nino3_his.sel(time = slice("1980", "2011")).plot(aspect=2, size=3)
plt.title("NINO3 index for UKESM1-0-LL historical")
plt.tight_layout()
plt.draw()
```
ADDITIONAL EXPLORATION
CLIMATOLOGY (average all Jans, Febs, etc) CALC EXAMPLES (YEARS NEED A TWEAK, calculate for 20 year chunks or as needed)
```
tos_nino3_climatology = tos_his.sel(time=slice('1980','2011')).groupby('time.month').mean(dim='time')
tos_nino3_climatology.compute()
tos_nino3_climatology.isel(month=0).plot()
```
Monthly anomaly of SST (or TOS here) over the Nino3 region:
we subtract the monthly climatology values calculated above from the TOS values and then take
a spatial average across the region of interest.
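With plain lists the same climatology/anomaly logic looks like this: average each calendar month across years, then subtract that monthly mean from every sample (toy numbers, two years of a two-month "year"):

```python
# Two years of a two-month "year": [Jan, Feb, Jan, Feb]
sst = [20.0, 22.0, 24.0, 26.0]
months_per_year = 2

# Climatology: mean of all Jans, then mean of all Febs
climatology = [
    sum(sst[m::months_per_year]) / (len(sst) // months_per_year)
    for m in range(months_per_year)
]
print(climatology)  # → [22.0, 24.0]

# Anomaly: each sample minus its month's climatological mean
anomaly = [v - climatology[i % months_per_year] for i, v in enumerate(sst)]
print(anomaly)      # → [-2.0, -2.0, 2.0, 2.0]
```

xarray's `groupby('time.month')` performs exactly this month-by-month alignment for real calendars.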
```
tos_sel = tos_his.sel(time=slice('1980','2011'))
index_nino3 = (tos_sel.groupby('time.month')-tos_nino3_climatology).mean(dim=['i','j'])
index_nino3.compute()
#datetimeindex = index_nino3.indexes['time'].to_datetimeindex()
#index_nino3['time'] = datetimeindex
index_nino3.plot()
plt.savefig("UKESM1-0-LL_Historical_1980_2011.png", dpi=150)
```
## 1%CO2
```
ds_1pct = dict_Omon_tos["CMIP6.NOAA-GFDL.GFDL-ESM4.1pctCO2.Omon"]
ds_1pct.time.to_dataframe()
#ds_1pct.info
#tos_1pct = ds_1pct.tos#.sel(time = slice("1980", "2011"))
#TODO INSERT CORRECT CODE TO SELECT SPECIFIED REGION (lat range and lon range) in TOS
tos_1pct = ds_1pct.tos.sel(y = slice(-5,5), x = slice(-150,-90))
#tos
tos_1pct.isel(time=0).plot()
```
CALCULATE SEA SURFACE TEMPERATURE WEIGHTED AVERAGE
```
tos_mean_nino3_1pct = (tos_1pct * areacello_nino3).sum(dim=('x', 'y')) / total_areacello
#nino3_index.plot()
tos_mean_nino3_1pct.compute()
tos_mean_nino3_1pct.size
tos_mean_nino3_1pct.isel(time=0).compute()
#datetimeindex_1pct = tos_mean_nino3_1pct.indexes['time'].to_datetimeindex()
#tos_mean_nino3_1pct['time'] = datetimeindex_1pct
tos_mean_nino3_1pct.plot(aspect=2, size=3)
plt.title("NINO3 index for ESM4 1%CO2")
plt.tight_layout()
plt.draw()
tos_nino3_climatology_1pct = tos_1pct.sel(time=slice('0119','0150')).groupby('time.month').mean(dim='time')
tos_nino3_climatology_1pct.compute()
tos_nino3_climatology_1pct.isel(month=0).plot()
```
Monthly anomaly of SST (or TOS here) over the Nino3 region:
we subtract the monthly climatology values calculated above from the TOS values and then take
a spatial average across the region of interest.
```
tos_sel = tos_1pct.sel(time=slice('0119','0150'))
index_nino3 = (tos_sel.groupby('time.month')-tos_nino3_climatology_1pct).mean(dim=['x','y'])
index_nino3.compute()
#datetimeindex = index_nino3.indexes['time'].to_datetimeindex()
#index_nino3['time'] = datetimeindex
index_nino3.plot()
```
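The `(tos * areacello).sum() / total_areacello` pattern used above is an area-weighted mean. A minimal NumPy illustration on a synthetic Nino3-like grid (the grid and field here are made up for demonstration, not the model's `areacello`):

```python
import numpy as np

# Synthetic Nino3-like grid: cell areas shrink with latitude (cosine weighting)
lat = np.linspace(-5, 5, 11)
lon = np.linspace(-150, -90, 61)
area = np.outer(np.cos(np.deg2rad(lat)), np.ones(lon.size))

# SST field varying linearly with latitude around 26 degC
sst = 26.0 + 0.01 * lat[:, None] + 0.0 * lon[None, :]

# Area-weighted mean: weight each cell's value by its area
weighted_mean = (sst * area).sum() / area.sum()
print(round(weighted_mean, 6))  # ~26.0 by symmetry of the latitude band
```

An unweighted `.mean()` would give the same answer only on a uniform grid; on a real curvilinear ocean grid the `areacello` weights matter.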
## Abrupt-4xCO2
```
ds_4x = dict_Omon_tos["CMIP6.NOAA-GFDL.GFDL-ESM4.abrupt-4xCO2.Omon"]
ds_4x.time.to_dataframe()
# Select the Nino3 region (5S-5N, 150W-90W) from TOS;
# the time window is sliced later, where needed
tos_4x = ds_4x.tos.sel(y=slice(-5, 5), x=slice(-150, -90))
tos_4x.isel(time=0).plot()
```
Calculate the area-weighted average of sea surface temperature:
```
tos_mean_nino3_4x = (tos_4x * areacello_nino3).sum(dim=('x', 'y')) / total_areacello
#nino3_index.plot()
tos_mean_nino3_4x.compute()
tos_mean_nino3_4x.size  # 1980 values: 1 per month, 12 per year for 165 years
tos_mean_nino3_4x.isel(time=0).compute()
#datetimeindex_4x = tos_mean_nino3_4x.indexes['time'].to_datetimeindex()
#tos_mean_nino3_4x['time'] = datetimeindex_4x
tos_mean_nino3_4x.sel(time=slice('0119','0150')).plot(aspect=2, size=3)
plt.title("NINO3 index for ESM4 abrupt-4xCO2")
plt.tight_layout()
plt.draw()
tos_nino3_climatology_4x = tos_4x.sel(time=slice('0119','0150')).groupby('time.month').mean(dim='time')
tos_nino3_climatology_4x.compute()
tos_nino3_climatology_4x.isel(month=0).plot()
```
To compute the monthly anomaly of SST (TOS here) over the Nino3 region, we subtract the monthly climatology calculated above from the TOS values and then take a spatial average across the region of interest.
```
tos_sel_4x = tos_4x.sel(time=slice('0119','0150'))
index_nino3_4x = (tos_sel_4x.groupby('time.month')-tos_nino3_climatology_4x).mean(dim=['x','y'])
index_nino3_4x.compute()
#datetimeindex = index_nino3.indexes['time'].to_datetimeindex()
#index_nino3['time'] = datetimeindex
index_nino3_4x.plot()
```
| github_jupyter |
```
n = 5
s = '*'
for i in range(n):
    print(s)
    s = s + '*'

n = 5
for i in range(n+1):
    print("*" * i)

n = 5
for i in range(n+1):
    print(" " * (n-i), "*" * i)

n = 10
for i in range(n+1):
    if i % 2 == 1:
        print(" " * int((n-i)/2), "*" * i)
```
Given two integers a and b, complete the function solution so that it returns the sum of all integers between a and b, inclusive.
For example, if a = 3 and b = 5, return 12, since 3 + 4 + 5 = 12.
Constraints
If a and b are equal, return either of the two numbers.
a and b are integers between -10,000,000 and 10,000,000.
The ordering of a and b is not fixed (a may be larger or smaller than b).
```
def solution(a, b):
    answer = 0
    li = [a, b]
    li.sort()
    for i in range(li[0], li[1] + 1):  # iterate over the full inclusive range
        answer += i
    return answer
solution(5,3)
def solution(a, b):
    answer = 0
    if a > b:
        for i in range(b, a+1):
            answer = answer + i
        return answer
    elif a < b:
        for i in range(a, b+1):
            answer = answer + i
        return answer
    elif a == b:
        answer = a
        return answer
solution(3,3)

def solution(a, b):
    answer = 0
    for i in range(min(a, b), max(a, b) + 1):
        answer += i
    return answer
solution(5,3)

def solution(a, b):
    if b >= a:
        answer = sum(range(a, b+1))
    else:
        answer = sum(range(b, a+1))
    return answer
solution(3,3)

def solution(a, b):
    if a > b:
        a, b = b, a
    return sum(range(a, b+1))
solution(5,3)
array = [1,5,2,6,3,7,4]
commands=[2, 5, 3]
def solution(array, commands):
    i = commands[0]
    j = commands[1]
    k = commands[2]
    item = array[i - 1:j]  # commands use 1-based, inclusive indexing
    item.sort()
    return item[k - 1]
solution(array, commands)
array = [1, 5, 2, 6, 3, 7, 4]
commands = [[2, 5, 3], [4, 4, 1], [1, 7, 3]]
#commands = [[2,5,3]]
def solution(array, commands):
    answer = []
    for co in commands:
        i = co[0]
        j = co[1]
        k = co[2]
        new_array = sorted(array[i-1:j])
        answer.append(new_array[k-1])
    return answer
solution(array, commands)
array = [1, 5, 2, 6, 3, 7, 4]
commands = [[2, 5, 3], [4, 4, 1], [1, 7, 3]]
#commands = [[2,5,3]]
def solution(array, commands):
    answer = []
    for co in commands:
        i = co[0]
        j = co[1]
        k = co[2]
        new_array = array[i-1:j]
        new_array.sort()
        answer.append(new_array[k-1])
    return answer
solution(array, commands)
li = [1, 2, 3]
len([1,2,3])
```
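The loop-based solutions above can also be replaced by the closed form for an arithmetic series, sum = (first + last) × (number of terms) / 2 — a standard identity, added here as an extra variant rather than part of the original exercises:

```python
def solution(a, b):
    # Sum of consecutive integers from min(a, b) to max(a, b), inclusive,
    # using the arithmetic-series formula instead of a loop
    a, b = min(a, b), max(a, b)
    return (a + b) * (b - a + 1) // 2

print(solution(3, 5))  # 12
print(solution(5, 3))  # 12
print(solution(3, 3))  # 3
```

This runs in constant time regardless of how far apart a and b are, which matters at the stated bound of ±10,000,000.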
| github_jupyter |
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Inferencing with TensorFlow 2.0 on Azure Machine Learning Service
## Overview of Workshop
This notebook is Part 2 (Inferencing and Deploying a Model) of a four part workshop that demonstrates an end-to-end workflow for implementing a BERT model using Tensorflow 2.0 on Azure Machine Learning Service. The different components of the workshop are as follows:
- Part 1: [Preparing Data and Model Training](https://github.com/microsoft/bert-stack-overflow/blob/master/1-Training/AzureServiceClassifier_Training.ipynb)
- Part 2: [Inferencing and Deploying a Model](https://github.com/microsoft/bert-stack-overflow/blob/master/2-Inferencing/AzureServiceClassifier_Inferencing.ipynb)
- Part 3: [Setting Up a Pipeline Using MLOps](https://github.com/microsoft/bert-stack-overflow/tree/master/3-ML-Ops)
- Part 4: [Explaining Your Model Interpretability](https://github.com/microsoft/bert-stack-overflow/blob/master/4-Interpretibility/IBMEmployeeAttritionClassifier_Interpretability.ipynb)
This notebook walks through taking a trained TF 2.0 BERT model and deploying it as a web service, step by step:
* Initialize your workspace
* Download a previous saved model (saved on Azure Machine Learning)
* Test the downloaded model
* Display scoring script
* Defining an Azure Environment
* Deploy Model as Webservice (Local, ACI and AKS)
* Test Deployment (Azure ML Service Call, Raw HTTP Request)
* Clean up Webservice
## What is Azure Machine Learning Service?
Azure Machine Learning service is a cloud service that you can use to develop and deploy machine learning models. Using Azure Machine Learning service, you can track your models as you build, train, deploy, and manage them, all at the broad scale that the cloud provides.

#### How can we use the Azure Machine Learning SDK for deployment and inferencing of machine learning models?
Deployment and inferencing of a machine learning model is often a cumbersome process. Once you have a trained model and a scoring script working on your local machine, you will want to deploy this model as a web service.
To facilitate deployment and inferencing, the Azure Machine Learning Python SDK provides a high-level abstraction for model deployment of a web service running on your [local](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#local) machine, in Azure Container Instance ([ACI](https://azure.microsoft.com/en-us/services/container-instances/)) or Azure Kubernetes Service ([AKS](https://azure.microsoft.com/en-us/services/kubernetes-service/)), which allows users to easily deploy their models in the Azure ecosystem.
## Prerequisites
* Understand the [architecture and terms](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning
* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup) to:
* Install the AML SDK
* Create a workspace and its configuration file (config.json)
* For local scoring test, you will also need to have Tensorflow and Keras installed in the current Jupyter kernel.
* Please run through Part 1: [Working With Data and Training](1_AzureServiceClassifier_Training.ipynb) Notebook first to register your model
* Make sure you enable [Docker for non-root users](https://docs.docker.com/install/linux/linux-postinstall/) (this is needed for local deployment). Run the following commands in your terminal, then go to your [Jupyter dashboard](/tree) and click `Quit` in the top right corner. After the shutdown, the notebook will be automatically refreshed with the new permissions.
```bash
sudo usermod -a -G docker $USER
newgrp docker
```
#### Enable Docker for non-root users
```
!sudo usermod -a -G docker $USER
!newgrp docker
```
Check if you have the correct permissions to run Docker. Running the line below should print:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
```
!docker ps
```
>**Note:** Make sure you shut down your Jupyter notebook to enable this access. Go to your [Jupyter dashboard](/tree) and click `Quit` in the top right corner. After the shutdown, the notebook will be automatically refreshed with the new permissions.
## Azure Service Classification Problem
One of the key tasks to ensuring long term success of any Azure service is actively responding to related posts in online forums such as Stackoverflow. In order to keep track of these posts, Microsoft relies on the associated tags to direct questions to the appropriate support team. While Stackoverflow has different tags for each Azure service (azure-web-app-service, azure-virtual-machine-service, etc), people often use the generic **azure** tag. This makes it hard for specific teams to track down issues related to their product and as a result, many questions get left unanswered.
**In order to solve this problem, we will be building a model to classify posts on Stackoverflow with the appropriate Azure service tag.**
We will be using a BERT (Bidirectional Encoder Representations from Transformers) model which was published by researchers at Google AI Language. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of natural language processing (NLP) tasks without substantial architecture modifications.
For more information about the BERT, please read this [paper](https://arxiv.org/pdf/1810.04805.pdf)
## Checking Azure Machine Learning Python SDK Version
If you are running this on a Notebook VM, the Azure Machine Learning Python SDK is installed by default. If you are running this locally, you can follow these [instructions](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/install?view=azure-ml-py) to install it using pip.
This tutorial requires version 1.0.69 or higher. We can import the Python SDK to ensure it has been properly installed:
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
```
## Connect To Workspace
Initialize a [Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the prerequisites step. Workspace.from_config() creates a workspace object from the details stored in config.json.
```
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
```
## Register Datastore
A [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) is used to store connection information to a central data storage. This allows you to access your storage without having to hard code this (potentially confidential) information into your scripts.
In this tutorial, the model was previously prepped and uploaded into a central [Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) container. We will register this container into our workspace as a datastore using a [shared access signature (SAS) token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview).
We need to define the following parameters to register a datastore:
- `ws`: The workspace object
- `datastore_name`: The name of the datastore, case insensitive, can only contain alphanumeric characters and _.
- `container_name`: The name of the azure blob container.
- `account_name`: The storage account name.
- `sas_token`: An account SAS token, defaults to None.
```
from azureml.core.datastore import Datastore
datastore_name = 'tfworld'
container_name = 'azureml-blobstore-7c6bdd88-21fa-453a-9c80-16998f02935f'
account_name = 'tfworld6818510241'
sas_token = '?sv=2019-02-02&ss=bfqt&srt=sco&sp=rl&se=2020-11-02T06:01:06Z&st=2019-11-08T22:01:06Z&spr=https&sig=9XcJPwqp4c2cSgsGL1X7cXKO46bzhHCaX75N3gc98GU%3D'
datastore = Datastore.register_azure_blob_container(workspace=ws,
datastore_name=datastore_name,
container_name=container_name,
account_name=account_name,
sas_token=sas_token)
```
#### If the datastore has already been registered, then you (and other users in your workspace) can directly run this cell.
```
datastore = ws.datastores['tfworld']
```
### Download Model from Datastore
Get the trained model from an Azure Blob container. The model is saved into two files, ``config.json`` and ``model.h5``.
```
from azureml.core.model import Model
datastore.download('./',prefix="azure-service-classifier/model")
```
### Registering the Model with the Workspace
Register the model to use in your workspace.
```
model = Model.register(model_path = "./azure-service-classifier/model",
model_name = "azure-service-classifier", # this is the name the model is registered as
tags = {'pretrained': "BERT"},
workspace = ws)
model_dir = './azure-service-classifier/model'
```
### Downloading and Using Registered Models
> If you have already completed the Part 1: [Working With Data and Training](1_AzureServiceClassifier_Training.ipynb) notebook, you can download your registered BERT model and use that instead of the model saved on blob storage.
```python
model = ws.models['azure-service-classifier']
model_dir = model.download(target_dir='.', exist_ok=True)
```
## Inferencing on the test set
Let's check the version of the local Keras. Make sure it matches with the version number printed out in the training script. Otherwise you might not be able to load the model properly.
```
import keras
import tensorflow as tf
print("Keras version:", keras.__version__)
print("Tensorflow version:", tf.__version__)
```
#### Install Transformers Library
We trained the BERT model using TensorFlow 2.0 and the open source [huggingface/transformers](https://github.com/huggingface/transformers) library, so before we can load the model we need to make sure the Transformers library is installed.
```
%pip install transformers
```
#### Load the Tensorflow 2.0 BERT model.
Load the downloaded Tensorflow 2.0 BERT model
```
from transformers import BertTokenizer, TFBertPreTrainedModel, TFBertMainLayer
from transformers.modeling_tf_utils import get_initializer
class TFBertForMultiClassification(TFBertPreTrainedModel):
    def __init__(self, config, *inputs, **kwargs):
        super(TFBertForMultiClassification, self).__init__(config, *inputs, **kwargs)
        self.num_labels = config.num_labels
        self.bert = TFBertMainLayer(config, name='bert')
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
        self.classifier = tf.keras.layers.Dense(
            config.num_labels,
            kernel_initializer=get_initializer(config.initializer_range),
            name='classifier',
            activation='softmax')

    def call(self, inputs, **kwargs):
        outputs = self.bert(inputs, **kwargs)
        pooled_output = outputs[1]
        pooled_output = self.dropout(pooled_output, training=kwargs.get('training', False))
        logits = self.classifier(pooled_output)
        outputs = (logits,) + outputs[2:]  # add hidden states and attention if they are here
        return outputs  # logits, (hidden_states), (attentions)
max_seq_length = 128
labels = ['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']
loaded_model = TFBertForMultiClassification.from_pretrained(model_dir, num_labels=len(labels))
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
print("Model loaded from disk.")
```
Feed a test sentence to the BERT model, and time the duration of the prediction.
```
%%time
import json
# Input test sentences
raw_data = json.dumps({
'text': 'My VM is not working'
})
# Encode inputs using tokenizer
inputs = tokenizer.encode_plus(
json.loads(raw_data)['text'],
add_special_tokens=True,
max_length=max_seq_length
)
input_ids, token_type_ids = inputs["input_ids"], inputs["token_type_ids"]
# The mask has 1 for real tokens and 0 for padding tokens. Only real tokens are attended to.
attention_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
padding_length = max_seq_length - len(input_ids)
input_ids = input_ids + ([0] * padding_length)
attention_mask = attention_mask + ([0] * padding_length)
token_type_ids = token_type_ids + ([0] * padding_length)
# Make prediction
predictions = loaded_model.predict({
'input_ids': tf.convert_to_tensor([input_ids], dtype=tf.int32),
'attention_mask': tf.convert_to_tensor([attention_mask], dtype=tf.int32),
'token_type_ids': tf.convert_to_tensor([token_type_ids], dtype=tf.int32)
})
result = {
'prediction': str(labels[predictions[0].argmax().item()]),
'probability': str(predictions[0].max())
}
print(result)
```
As you can see, given the sample sentence the model predicts the probability of each Stack Overflow tag related to that sentence.
## Inferencing with ONNX
### ONNX and ONNX Runtime
**ONNX (Open Neural Network Exchange)** is an interoperable standard format for ML models, with support for both DNN and traditional ML. Models can be converted from a variety of frameworks, such as TensorFlow, Keras, PyTorch, scikit-learn, and more (see [ONNX Conversion tutorials](https://github.com/onnx/tutorials#converting-to-onnx-format)). This provides data teams with the flexibility to use their framework of choice for their training needs, while streamlining the process to operationalize these models for production usage in a consistent way.
In this section, we will demonstrate how to use ONNX Runtime, a high performance inference engine for ONNX format models, for inferencing our model. Along with interoperability, ONNX Runtime's performance-focused architecture can also accelerate inferencing for many models through graph optimizations, utilization of custom accelerators, and more. You can find more about performance tuning [here](https://github.com/microsoft/onnxruntime/blob/master/docs/ONNX_Runtime_Perf_Tuning.md).
#### Download ONNX Model
To visualize the model, we can use Netron. Click [here](https://lutzroeder.github.io/netron/) to open the browser version and load the model.
```
datastore.download('.',prefix="azure-service-classifier/model/bert_tf2.onnx")
```
#### Install ONNX Runtime
```
%pip install onnxruntime
```
#### Loading ONNX Model
Load the downloaded ONNX BERT model.
```
import numpy as np
import onnxruntime as rt
from transformers import BertTokenizer, TFBertPreTrainedModel, TFBertMainLayer
max_seq_length = 128
labels = ['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
sess = rt.InferenceSession("./azure-service-classifier/model/bert_tf2.onnx")
print("ONNX Model loaded from disk.")
```
#### View the inputs and outputs of converted ONNX model
```
for i in range(len(sess.get_inputs())):
    input_name = sess.get_inputs()[i].name
    print("Input name :", input_name)
    input_shape = sess.get_inputs()[i].shape
    print("Input shape :", input_shape)
    input_type = sess.get_inputs()[i].type
    print("Input type :", input_type)

for i in range(len(sess.get_outputs())):
    output_name = sess.get_outputs()[i].name
    print("Output name :", output_name)
    output_shape = sess.get_outputs()[i].shape
    print("Output shape :", output_shape)
    output_type = sess.get_outputs()[i].type
    print("Output type :", output_type)
```
#### Inferencing with ONNX Runtime
```
%%time
import json
# Input test sentences
raw_data = json.dumps({
'text': 'My VM is not working'
})
labels = ['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']
# Encode inputs using tokenizer
inputs = tokenizer.encode_plus(
json.loads(raw_data)['text'],
add_special_tokens=True,
max_length=max_seq_length
)
input_ids, token_type_ids = inputs["input_ids"], inputs["token_type_ids"]
# The mask has 1 for real tokens and 0 for padding tokens. Only real tokens are attended to.
attention_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
padding_length = max_seq_length - len(input_ids)
input_ids = input_ids + ([0] * padding_length)
attention_mask = attention_mask + ([0] * padding_length)
token_type_ids = token_type_ids + ([0] * padding_length)
# Make prediction
convert_input = {
sess.get_inputs()[0].name: np.array(tf.convert_to_tensor([token_type_ids], dtype=tf.int32)),
sess.get_inputs()[1].name: np.array(tf.convert_to_tensor([input_ids], dtype=tf.int32)),
sess.get_inputs()[2].name: np.array(tf.convert_to_tensor([attention_mask], dtype=tf.int32))
}
predictions = sess.run([output_name], convert_input)
result = {
'prediction': str(labels[predictions[0].argmax().item()]),
'probability': str(predictions[0].max())
}
print(result)
```
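The final `argmax`/`max` step above simply maps the model's probability vector to a tag. In isolation, with a made-up probability vector standing in for the model output, it looks like this:

```python
import numpy as np

labels = ['azure-web-app-service', 'azure-storage', 'azure-devops',
          'azure-virtual-machine', 'azure-functions']

# Made-up softmax output for illustration (not a real model prediction)
probs = np.array([0.05, 0.03, 0.02, 0.85, 0.05])

result = {
    'prediction': str(labels[probs.argmax().item()]),
    'probability': str(probs.max()),
}
print(result['prediction'])  # azure-virtual-machine
```

The same post-processing works identically for the Keras model and the ONNX Runtime session, since both return a probability vector per input.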
## Deploy models on Azure ML
Now we are ready to deploy the model as a web service running on your [local](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#local) machine, in Azure Container Instance [ACI](https://azure.microsoft.com/en-us/services/container-instances/) or Azure Kubernetes Service [AKS](https://azure.microsoft.com/en-us/services/kubernetes-service/). Azure Machine Learning accomplishes this by constructing a Docker image with the scoring logic and model baked in.
> **Note:** For this Notebook, we'll use the original model format for deployment, but the ONNX model can be deployed in the same way by using ONNX Runtime in the scoring script.

### Deploying a web service
Once you've tested the model and are satisfied with the results, deploy the model as a web service. For this Notebook, we'll use the original model format for deployment, but note that the ONNX model can be deployed in the same way by using ONNX Runtime in the scoring script.
To build the correct environment, provide the following:
* A scoring script to show how to use the model
* An environment file to show what packages need to be installed
* A configuration file to build the web service
* The model you trained before
Read more about deployment [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where)
### Create score.py
First, we will create a scoring script that will be invoked by the web service call. We have prepared a [score.py script](code/scoring/score.py) in advance that scores your BERT model.
* Note that the scoring script must have two required functions, ``init()`` and ``run(input_data)``.
* In ``init()`` function, you typically load the model into a global object. This function is executed only once when the Docker container is started.
* In ``run(input_data)`` function, the model is used to predict a value based on the input data. The input and output to run typically use JSON as serialization and de-serialization format but you are not limited to that.
```
%pycat score.py
```
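For reference, a minimal sketch of the required `init()`/`run()` contract looks like the following. This is illustrative only: the predictor here is a trivial stand-in, while the real score.py shown by `%pycat` above loads the BERT model instead.

```python
import json

model = None

def init():
    # Executed once when the container starts: load the model into a global.
    # Stand-in predictor for illustration; the real script loads BERT here.
    global model
    model = lambda text: {"prediction": "azure-virtual-machine",
                          "probability": "0.9"}

def run(raw_data):
    # Executed per request: deserialize the JSON input, predict, return a result.
    text = json.loads(raw_data)["text"]
    return model(text)

init()
print(run(json.dumps({"text": "My VM is not working"})))
```

Anything `run()` returns must be JSON-serializable, since the web service sends it back as the HTTP response body.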
### Create Environment
You can create and/or use a Conda environment using the [Conda Dependencies object](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies?view=azure-ml-py) when deploying a Webservice.
```
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','pandas'],
pip_packages=['numpy','pandas','inference-schema[numpy-support]','azureml-defaults','tensorflow==2.0.0','transformers==2.0.0'])
with open("myenv.yml","w") as f:
    f.write(myenv.serialize_to_string())
```
Review the content of the `myenv.yml` file.
```
%pycat myenv.yml
```
## Create Inference Configuration
We need to define the [Inference Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py) for the web service. There is support for a source directory: you can upload an entire folder from your local machine as dependencies for the Webservice.
Note that in that case, your entry_script and conda_file paths are relative to the source_directory path.
Sample code for using a source directory:
```python
inference_config = InferenceConfig(source_directory="C:/abc",
runtime= "python",
entry_script="x/y/score.py",
conda_file="env/myenv.yml")
```
- source_directory = the source path as a string; this entire folder gets added to the image, so it is easy to access any files within this folder or its subfolders
- runtime = which runtime to use for the image; currently supported runtimes are 'spark-py' and 'python'
- entry_script = contains the logic for initializing your model and running predictions
- conda_file = manages conda and Python package dependencies
> **Note:** Deployment uses the inference configuration deployment configuration to deploy the models. The deployment process is similar regardless of the compute target. Deploying to AKS is slightly different because you must provide a reference to the AKS cluster.
```
from azureml.core.model import InferenceConfig
inference_config = InferenceConfig(source_directory="./",
runtime= "python",
entry_script="score.py",
conda_file="myenv.yml"
)
```
## Deploy as a Local Service
Estimated time to complete: **about 3-7 minutes**
Configure the image and deploy it locally. The following code goes through these steps:
* Build an image on local machine (or VM, if you are using a VM) using:
* The scoring file (`score.py`)
* The environment file (`myenv.yml`)
* The model file
* Define [Local Deployment Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice.localwebservice?view=azure-ml-py#deploy-configuration-port-none-)
* Send the image to the local Docker instance.
* Start up a container using the image.
* Get the web service HTTP endpoint.
* This has a very quick turnaround time, which makes it great for testing a service before it is deployed to production.
> **Note:** Make sure you enable [Docker for non-root users](https://docs.docker.com/install/linux/linux-postinstall/) (this is needed for local deployment). Run the following commands in your terminal, then go to your [Jupyter dashboard](/tree) and click `Quit` in the top right corner. After the shutdown, the notebook will be automatically refreshed with the new permissions.
```bash
sudo usermod -a -G docker $USER
newgrp docker
```
#### Deploy Local Service
```
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import LocalWebservice
# Create a local deployment for the web service endpoint
deployment_config = LocalWebservice.deploy_configuration()
# Deploy the service
local_service = Model.deploy(
ws, "mymodel", [model], inference_config, deployment_config)
# Wait for the deployment to complete
local_service.wait_for_deployment(True)
# Display the port that the web service is available on
print(local_service.port)
```
This is the scoring web service endpoint:
```
print(local_service.scoring_uri)
```
### Test Local Service
Let's test the deployed model. Pick a random sample about an issue and send it to the web service. Note that here we are using the `run` API in the SDK to invoke the service; you can also make raw HTTP calls using any HTTP tool, such as curl.
After the invocation, we print the returned predictions.
```
%%time
import json
raw_data = json.dumps({
'text': 'My VM is not working'
})
prediction = local_service.run(input_data=raw_data)
print(prediction)
```
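As an alternative to `local_service.run`, the same request can be built as a raw HTTP POST. A stdlib sketch follows; the URI is a placeholder (use `local_service.scoring_uri`), and actually sending the request requires the service to be running:

```python
import json
import urllib.request

# Placeholder endpoint; substitute local_service.scoring_uri in practice
scoring_uri = "http://localhost:6789/score"

body = json.dumps({"text": "My VM is not working"}).encode("utf-8")
request = urllib.request.Request(
    scoring_uri, data=body, headers={"Content-Type": "application/json"})

# urllib.request.urlopen(request) would send it once the service is up
print(request.get_method())  # POST, because a request body is attached
```

This is the same shape of request that the curl example mentioned above would send.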
### Reloading Webservice
You can update your score.py file and then call reload() to quickly restart the service. This only reloads your execution script and dependency files; it does not rebuild the underlying Docker image. As a result, reload() is fast.
```
%%writefile score.py
import os
import json
import tensorflow as tf
from transformers import TFBertPreTrainedModel, TFBertMainLayer, BertTokenizer
from transformers.modeling_tf_utils import get_initializer
import logging
logging.getLogger("transformers.tokenization_utils").setLevel(logging.ERROR)
class TFBertForMultiClassification(TFBertPreTrainedModel):
    def __init__(self, config, *inputs, **kwargs):
        super(TFBertForMultiClassification, self) \
            .__init__(config, *inputs, **kwargs)
        self.num_labels = config.num_labels
        self.bert = TFBertMainLayer(config, name='bert')
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
        self.classifier = tf.keras.layers.Dense(
            config.num_labels,
            kernel_initializer=get_initializer(config.initializer_range),
            name='classifier',
            activation='softmax')

    def call(self, inputs, **kwargs):
        outputs = self.bert(inputs, **kwargs)
        pooled_output = outputs[1]
        pooled_output = self.dropout(
            pooled_output,
            training=kwargs.get('training', False))
        logits = self.classifier(pooled_output)
        # add hidden states and attention if they are here
        outputs = (logits,) + outputs[2:]
        return outputs  # logits, (hidden_states), (attentions)


max_seq_length = 128
labels = ['azure-web-app-service', 'azure-storage',
          'azure-devops', 'azure-virtual-machine', 'azure-functions']


def init():
    global tokenizer, model
    # os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'azure-service-classifier')
    tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
    model_dir = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model')
    model = TFBertForMultiClassification \
        .from_pretrained(model_dir, num_labels=len(labels))
    print("hello from the reloaded script")


def run(raw_data):
    # Encode inputs using tokenizer
    inputs = tokenizer.encode_plus(
        json.loads(raw_data)['text'],
        add_special_tokens=True,
        max_length=max_seq_length
    )
    input_ids, token_type_ids = inputs["input_ids"], inputs["token_type_ids"]
    # The mask has 1 for real tokens and 0 for padding tokens.
    # Only real tokens are attended to.
    attention_mask = [1] * len(input_ids)
    # Zero-pad up to the sequence length.
    padding_length = max_seq_length - len(input_ids)
    input_ids = input_ids + ([0] * padding_length)
    attention_mask = attention_mask + ([0] * padding_length)
    token_type_ids = token_type_ids + ([0] * padding_length)
    # Make prediction
    predictions = model.predict({
        'input_ids': tf.convert_to_tensor([input_ids], dtype=tf.int32),
        'attention_mask': tf.convert_to_tensor(
            [attention_mask],
            dtype=tf.int32),
        'token_type_ids': tf.convert_to_tensor(
            [token_type_ids],
            dtype=tf.int32)
    })
    result = {
        'prediction': str(labels[predictions[0].argmax().item()]),
        'probability': str(predictions[0].max())
    }
    print(result)
    return result
init()
run(json.dumps({
    'text': 'My VM is not working'
}))
local_service.reload()
```
### Updating Webservice
If you do need to rebuild the image -- to add a new conda or pip package, for instance -- you will have to call update() instead (see below).
```python
local_service.update(models=[loaded_model],
image_config=None,
deployment_config=None,
wait=False, inference_config=None)
```
### View Service Logs (Debugging when something goes wrong)
>**Tip:** If something goes wrong with the deployment, the first thing to look at is the logs from the service. Run the following cell:
You should see the phrase **"hello from the reloaded script"** in the logs, because we added it to the script when we did a service reload.
```
import pprint
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(local_service.get_logs())
```
## Deploy in ACI
Estimated time to complete: **about 3-7 minutes**
Configure the image and deploy. The following code goes through these steps:
* Build an image using:
* The scoring file (`score.py`)
* The environment file (`myenv.yml`)
* The model file
* Define [ACI Deployment Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice.aciwebservice?view=azure-ml-py#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none-)
* Send the image to the ACI container.
* Start up a container in ACI using the image.
* Get the web service HTTP endpoint.
```
%%time
from azureml.core.webservice import AciWebservice, Webservice
from azureml.core.model import Model
from azureml.exceptions import WebserviceException
## Create a deployment configuration file and specify the number of CPUs and gigabytes of RAM needed for your ACI container.
## If you feel you need more later, you would have to recreate the image and redeploy the service.
aciconfig = AciWebservice.deploy_configuration(cpu_cores=2,
memory_gb=4,
tags={"model": "BERT", "method" : "tensorflow"},
description='Predict StackoverFlow tags with BERT')
aci_service_name = 'asc-aciservice'
try:
    # ACI service names must be unique within a subscription, so retrieve
    # any existing service with this name and delete it before redeploying
aci_service = Webservice(ws, name=aci_service_name)
if aci_service:
aci_service.delete()
except WebserviceException as e:
    print('No existing ACI service to delete')
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
```
This is the scoring web service endpoint:
```
print(aci_service.scoring_uri)
```
### Test the deployed model
Let's test the deployed model. Pick a random sample about an Azure issue and send it to the web service. Note that here we are using the `run` API in the SDK to invoke the service. You can also make raw HTTP calls using any HTTP tool, such as curl.
After the invocation, we print the returned predictions.
```
%%time
import json
raw_data = json.dumps({
'text': 'My VM is not working'
})
prediction = aci_service.run(input_data=raw_data)
print(prediction)
```
### View service logs (debugging when something goes wrong)
>**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service. Run the following cell:**
```
import pprint
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(aci_service.get_logs())
```
## Deploy in AKS (Single Node)
Estimated time to complete: **about 15-25 minutes**, 10-15 mins for AKS provisioning and 5-10 mins to deploy service
Configure the image and deploy. The following code goes through these steps:
* Provision a Production AKS Cluster
* Build an image using:
* The scoring file (`score.py`)
* The environment file (`myenv.yml`)
* The model file
* Define [AKS Provisioning Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.compute.akscompute?view=azure-ml-py#provisioning-configuration-agent-count-none--vm-size-none--ssl-cname-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--location-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--service-cidr-none--dns-service-ip-none--docker-bridge-cidr-none--cluster-purpose-none-)
* Provision an AKS Cluster
* Define [AKS Deployment Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice.akswebservice?view=azure-ml-py#deploy-configuration-autoscale-enabled-none--autoscale-min-replicas-none--autoscale-max-replicas-none--autoscale-refresh-seconds-none--autoscale-target-utilization-none--collect-model-data-none--auth-enabled-none--cpu-cores-none--memory-gb-none--enable-app-insights-none--scoring-timeout-ms-none--replica-max-concurrent-requests-none--max-request-wait-time-none--num-replicas-none--primary-key-none--secondary-key-none--tags-none--properties-none--description-none--gpu-cores-none--period-seconds-none--initial-delay-seconds-none--timeout-seconds-none--success-threshold-none--failure-threshold-none--namespace-none--token-auth-enabled-none-)
* Send the image to the AKS cluster.
* Start up a container in AKS using the image.
* Get the web service HTTP endpoint.
#### Provisioning Cluster
```
from azureml.core.compute import AksCompute, ComputeTarget
# Use the default configuration (you can also provide parameters to customize this).
# For example, to create a dev/test cluster, use:
# prov_config = AksCompute.provisioning_configuration(cluster_purpose = AksCompute.ClusterPurpose.DEV_TEST)
prov_config = AksCompute.provisioning_configuration()
aks_name = 'myaks'
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = aks_name,
provisioning_configuration = prov_config)
# Wait for the create process to complete
aks_target.wait_for_completion(show_output = True)
```
#### Deploying the model
```
from azureml.core.webservice import AksWebservice, Webservice
from azureml.core.model import Model
aks_target = AksCompute(ws,"myaks")
## Create a deployment configuration file and specify the number of CPUs and gigabytes of RAM needed for your cluster.
## If you feel you need more later, you would have to recreate the image and redeploy the service.
deployment_config = AksWebservice.deploy_configuration(cpu_cores = 2, memory_gb = 4)
aks_service = Model.deploy(ws, "myservice", [model], inference_config, deployment_config, aks_target)
aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
```
### Test the deployed model
#### Using the Azure SDK service call
We can use the Azure SDK to make a service call with a simple function:
```
%%time
import json
raw_data = json.dumps({
'text': 'My VM is not working'
})
prediction = aks_service.run(input_data=raw_data)
print(prediction)
```
This is the scoring web service endpoint:
```
print(aks_service.scoring_uri)
```
#### Using an HTTP call
We will build a Jupyter widget so we can construct a raw HTTP request and send it to the service through the widget.
```
import ipywidgets as widgets
from ipywidgets import Layout, Button, Box, FloatText, Textarea, Dropdown, Label, IntSlider, VBox
from IPython.display import display
import requests
text = widgets.Text(
value='',
placeholder='Type a query',
description='Question:',
disabled=False
)
button = widgets.Button(description="Get Tag!")
output = widgets.Output()
items = [text, button]
box_layout = Layout(display='flex',
flex_flow='row',
align_items='stretch',
width='70%')
box_auto = Box(children=items, layout=box_layout)
def on_button_clicked(b):
with output:
input_data = '{\"text\": \"'+ text.value +'\"}'
headers = {'Content-Type':'application/json'}
        resp = requests.post(aks_service.scoring_uri, input_data, headers=headers)
print("="*10)
print("Question:", text.value)
        print("POST to url", aks_service.scoring_uri)
print("Prediction:", resp.text)
print("="*10)
button.on_click(on_button_clicked)
#Display the GUI
VBox([box_auto, output])
```
Sending a raw HTTP request to the service directly, without the widget:
```
query = 'My VM is not working'
input_data = '{\"text\": \"'+ query +'\"}'
headers = {'Content-Type':'application/json'}
resp = requests.post(aks_service.scoring_uri, input_data, headers=headers)
print("="*10)
print("Question:", query)
print("POST to url", aks_service.scoring_uri)
print("Prediction:", resp.text)
print("="*10)
```
### View service logs (debugging when something goes wrong)
>**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service. Run the following cell:**
```
import pprint
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(aks_service.get_logs())
```
## Summary of workspace
Let's look at the workspace after the web services were deployed. You should see:
* a registered model, with its name and ID
* the ACI and AKS web services, each with a scoring URI
```
models = ws.models
for name, model in models.items():
print("Model: {}, ID: {}".format(name, model.id))
webservices = ws.webservices
for name, webservice in webservices.items():
print("Webservice: {}, scoring URI: {}".format(name, webservice.scoring_uri))
```
## Delete services to clean up
You can delete each deployed web service with a simple `delete()` API call.
```
local_service.delete()
aci_service.delete()
aks_service.delete()
```
<h1>Block file parser</h1>
<h2>Structure of Block</h2>
<p>
A block contains a pre-header, a header, and a list of transactions.<br>
The block header hash must meet the difficulty criteria, which can be calculated from "Bits" in the block header. This is achieved by varying "Nonce" in the block header.<br>
Fields whose size is given as [1-9 Bytes], such as the number of transactions or the number of inputs and outputs in a transaction, are compact-size integers:<br>
<i>a first byte less than 0xfd is the value itself; 0xfd means the value is in the next 2 bytes, 0xfe the next 4 bytes, and 0xff the next 8 bytes</i>
</p>
<h4>Building important Methods and Constants</h4>
<p>Values on the Bitcoin network are little-endian, while most tools accept and return big-endian values. The program below takes care of the conversion.</p>
<br>
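For example, the genesis block's timestamp is stored on disk as the little-endian bytes `29 ab 5f 49`; reversing the bytes before interpreting them as an integer recovers the familiar Unix time:

```python
import binascii

# Bytes as they appear on disk (little-endian); this is the genesis
# block's timestamp field
le_bytes = binascii.unhexlify('29ab5f49')
# Reverse to big-endian before interpreting as an integer
timestamp = int(binascii.hexlify(le_bytes[::-1]), 16)
print(timestamp)  # 1231006505, i.e. 2009-01-03 UTC
```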
```
import os
import glob
import binascii
import datetime
import shutil
import mmap
import hashlib
import json
#from bitcoinrpc.authproxy import AuthServiceProxy, JSONRPCException
BLOCK_PATH=os.path.join(os.getenv('HOME'), '.bitcoin', 'blocks')
#rpc_connection = AuthServiceProxy("http://%s:%s@127.0.0.1:8332"%('alice', 'passw0rd'))
def getCount(count_bytes):
txn_size = int(binascii.hexlify(count_bytes[0:1]), 16)
if txn_size < 0xfd:
return txn_size
elif txn_size == 0xfd:
txn_size = int(binascii.hexlify(count_bytes[1:3][::-1]), 16)
return txn_size
elif txn_size == 0xfe:
txn_size = int(binascii.hexlify(count_bytes[1:5][::-1]), 16)
return txn_size
else:
txn_size = int(binascii.hexlify(count_bytes[1:9][::-1]), 16)
return txn_size
def getCountBytes(mptr: mmap):
mptr_read = mptr.read(1)
count_bytes = mptr_read
txn_size = int(binascii.hexlify(mptr_read), 16)
if txn_size < 0xfd:
return count_bytes
elif txn_size == 0xfd:
mptr_read = mptr.read(2)
count_bytes += mptr_read
txn_size = int(binascii.hexlify(mptr_read[::-1]), 16)
return count_bytes
elif txn_size == 0xfe:
mptr_read = mptr.read(4)
count_bytes += mptr_read
txn_size = int(binascii.hexlify(mptr_read[::-1]), 16)
return count_bytes
else:
mptr_read = mptr.read(8)
count_bytes += mptr_read
txn_size = int(binascii.hexlify(mptr_read[::-1]), 16)
return count_bytes
```
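As a quick check of the compact-size rule, here is a standalone equivalent of `getCount()` applied to a few hand-built byte strings (the values are made up for illustration):

```python
def decode_varint(data: bytes) -> int:
    # Standalone equivalent of getCount(): the first byte selects the width
    prefix = data[0]
    if prefix < 0xfd:
        return prefix
    widths = {0xfd: 2, 0xfe: 4, 0xff: 8}
    return int.from_bytes(data[1:1 + widths[prefix]], 'little')

print(decode_varint(b'\x6a'))                  # 106
print(decode_varint(b'\xfd\xe8\x03'))          # 1000
print(decode_varint(b'\xfe\x40\x42\x0f\x00'))  # 1000000
```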
<h4>Block Pre-Header</h4>
<p>
[4 Bytes (Mainnet: F9 BE B4 D9 or 0xD9B4BEF9, Testnet: FA BF B5 DA or 0xDAB5BFFA)] Magic ID<br>
[4 Bytes] Block length<br>
</p>
```
def getBlockPreHeader(mptr: mmap):
block_pre_header = {}
block_pre_header['magic_number'] = bytes.decode(binascii.hexlify(mptr.read(4)[::-1]))
print('magic_number = %s' % block_pre_header['magic_number'])
block_pre_header['block_length'] = int(binascii.hexlify(mptr.read(4)[::-1]), 16)
return block_pre_header
```
<h4>Block Header</h4>
<p>
[4 Bytes] Version<br>
[32 Bytes] Previous Block Hash<br>
[32 Bytes] Merkle Tree Root<br>
[4 Bytes] Timestamp<br>
[4 Bytes] Bits<br>
[4 Bytes] Nonce<br>
</p>
```
def getBlockHeader(mptr: mmap):
block_header = {}
block_header['block_version'] = int(binascii.hexlify(mptr.read(4)[::-1]), 16)
block_header['prev_block_hash'] = bytes.decode(binascii.hexlify(mptr.read(32)[::-1]))
block_header['merkle_tree_root'] = bytes.decode(binascii.hexlify(mptr.read(32)[::-1]))
block_header['timestamp'] = int(binascii.hexlify(mptr.read(4)[::-1]), 16)
block_header['date_time'] = datetime.datetime.fromtimestamp(block_header['timestamp']).strftime('%Y-%m-%d %H:%M:%S')
block_header['bits'] = bytes.decode(binascii.hexlify(mptr.read(4)[::-1]))
    block_header['nonce'] = bytes.decode(binascii.hexlify(mptr.read(4)[::-1]))
return block_header
```
<h4>Block Header Hash calculation</h4>
```
def getBlockHeaderHash(mptr: mmap, start: int):
seek = start + 8
mptr.seek(seek) ## ignore magic number and block size
block_header = mptr.read(80)
block_header_hash = hashlib.sha256(hashlib.sha256(block_header).digest()).digest()
return bytes.decode(binascii.hexlify(block_header_hash[::-1]))
```
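As a sanity check of this double-SHA256 scheme, we can hash the well-known genesis block header directly (the 80 bytes are hard-coded here rather than read from a `blk*.dat` file):

```python
import binascii
import hashlib

# Genesis block header: version, prev hash (all zeros), merkle root,
# timestamp, bits, nonce -- all little-endian, 80 bytes total
header_hex = ('01000000' + '00' * 32 +
              '3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a' +
              '29ab5f49' + 'ffff001d' + '1dac2b7c')
header = binascii.unhexlify(header_hex)
digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
block_hash = bytes.decode(binascii.hexlify(digest[::-1]))
print(block_hash)  # starts with 000000000019d668...
```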
<h4>Number of Transactions</h4>
<p>
[1-9 Bytes] Number of Transactions<br>
</p>
```
def getTransactionCount(mptr: mmap):
count_bytes = getCountBytes(mptr)
txn_count = getCount(count_bytes)
return txn_count
```
<h4>Coinbase Transaction Format</h4>
<p>
[4 Bytes] version<br>
[1-9 Bytes] Number of inputs<br>
[1 Byte] If "Number of inputs" is zero then this byte if not zero denotes Segwit<br>
[1-9 Bytes] If "Number of inputs" is zero then these bytes denote "Number of inputs"<br>
-- For each input [Start]<br>
-- [32 Bytes] Previous Transaction hash<br>
-- [4 Bytes] Previous Transaction out index<br>
-- [1-9 Bytes] Bytes in Coinbase data<br>
-- [1-9 Bytes] Bytes in Height of this Block<br>
-- [Bytes in Height] Height of this Block<br>
-- [Remaining Bytes in Coinbase data] Coinbase Arbitrary Data<br>
-- [4 Bytes] Sequence<br>
-- [End]<br>
[1-9 Bytes] Number of outs<br>
-- For each out [Start]<br>
-- [8 Bytes] Amount in satoshis<br>
-- [1-9 Bytes] Bytes in scriptpubkey<br>
-- [Bytes in scriptpubkey] scriptpubkey<br>
-- [End]<br>
-- If Segwit byte is non-zero then for each input [Start]<br>
-- [1-9 Bytes] Number of Witness<br>
-- -- For Each Witness [Start]<br>
-- -- [1-9 Bytes] Bytes in Witness<br>
-- -- [Bytes in Witness] Witness<br>
-- -- [End]<br>
-- [End]<br>
[4 Bytes] Locktime<br>
</p>
```
def getCoinbaseTransaction(mptr: mmap):
txn = {}
txn_version = mptr.read(4)
txn['version'] = int(binascii.hexlify(txn_version[::-1]), 16)
count_bytes = getCountBytes(mptr)
input_count = getCount(count_bytes)
if input_count == 0:
# post segwit
txn['is_segwit'] = bool(int(binascii.hexlify(mptr.read(1)), 16))
count_bytes = getCountBytes(mptr)
txn['input_count'] = getCount(count_bytes)
else:
txn['input_count'] = input_count # this will be 1
txn['input'] = []
for index in range(txn['input_count']):
txn_input = {}
txn_input['prev_txn_hash'] = bytes.decode(binascii.hexlify(mptr.read(32)[::-1]))
txn_input['prev_txn_out_index'] = int(binascii.hexlify(mptr.read(4)[::-1]), 16)
count_bytes = getCountBytes(mptr)
txn_input['coinbase_data_size'] = getCount(count_bytes)
fptr1 = mptr.tell()
count_bytes = getCountBytes(mptr)
txn_input['coinbase_data_bytes_in_height'] = getCount(count_bytes)
txn_input['coinbase_data_block_height'] = int(binascii.hexlify(mptr.read(txn_input['coinbase_data_bytes_in_height'])[::-1]), 16)
fptr2 = mptr.tell()
arbitrary_data_size = txn_input['coinbase_data_size'] - (fptr2 - fptr1)
txn_input['coinbase_arbitrary_data'] = bytes.decode(binascii.hexlify(mptr.read(arbitrary_data_size)[::-1]))
txn_input['sequence'] = int(binascii.hexlify(mptr.read(4)[::-1]), 16)
txn['input'].append(txn_input)
count_bytes = getCountBytes(mptr)
txn['out_count'] = getCount(count_bytes)
txn['out'] = []
for index in range(txn['out_count']):
txn_out = {}
txn_out['satoshis'] = int(binascii.hexlify(mptr.read(8)[::-1]), 16)
count_bytes = getCountBytes(mptr)
txn_out['scriptpubkey_size'] = getCount(count_bytes)
txn_out['scriptpubkey'] = bytes.decode(binascii.hexlify(mptr.read(txn_out['scriptpubkey_size'])))
txn['out'].append(txn_out)
if 'is_segwit' in txn and txn['is_segwit'] == True:
for index in range(txn['input_count']):
count_bytes = getCountBytes(mptr)
txn['input'][index]['witness_count'] = getCount(count_bytes)
txn['input'][index]['witness'] = []
for inner_index in range(txn['input'][index]['witness_count']):
txn_witness = {}
count_bytes = getCountBytes(mptr)
txn_witness['size'] = getCount(count_bytes)
txn_witness['witness'] = bytes.decode(binascii.hexlify(mptr.read(txn_witness['size'])))
txn['input'][index]['witness'].append(txn_witness)
txn['locktime'] = int(binascii.hexlify(mptr.read(4)[::-1]), 16)
return txn
```
<h4>Other Transaction Format</h4>
<p>
[4 Bytes] version<br>
[1-9 Bytes] Number of inputs<br>
[1 Byte] If "Number of inputs" is zero then this byte if not zero denotes Segwit<br>
[1-9 Bytes] If "Number of inputs" is zero then these bytes denote "Number of inputs"<br>
-- For each input [Start]<br>
-- [32 Bytes] Previous Transaction hash<br>
-- [4 Bytes] Previous Transaction out index<br>
-- [1-9 Bytes] Bytes in scriptsig<br>
-- [Bytes in scriptsig] scriptsig<br>
-- [4 Bytes] Sequence<br>
-- [End]<br>
[1-9 Bytes] Number of outs<br>
-- For each out [Start]<br>
-- [8 Bytes] Amount in satoshis<br>
-- [1-9 Bytes] Bytes in scriptpubkey<br>
-- [Bytes in scriptpubkey] scriptpubkey<br>
-- [End]<br>
-- If Segwit byte is non-zero then for each input [Start]<br>
-- [1-9 Bytes] Number of Witness<br>
-- -- For Each Witness [Start]<br>
-- -- [1-9 Bytes] Bytes in Witness<br>
-- -- [Bytes in Witness] Witness<br>
-- -- [End]<br>
-- [End]<br>
[4 Bytes] Locktime<br>
</p>
```
def getTxnHash(txn: bytes):
txn_hash = hashlib.sha256(hashlib.sha256(txn).digest()).digest()
return bytes.decode(binascii.hexlify(txn_hash[::-1]))
def getTransaction(mptr: mmap):
txn = {}
mptr_read = mptr.read(4)
raw_txn = mptr_read
txn['version'] = int(binascii.hexlify(mptr_read[::-1]), 16)
mptr_read = getCountBytes(mptr)
input_count = getCount(mptr_read)
if input_count == 0:
# post segwit
txn['is_segwit'] = bool(int(binascii.hexlify(mptr.read(1)), 16))
mptr_read = getCountBytes(mptr)
txn['input_count'] = getCount(mptr_read)
else:
txn['input_count'] = input_count
raw_txn += mptr_read
txn['input'] = []
for index in range(txn['input_count']):
txn_input = {}
mptr_read = mptr.read(32)
raw_txn += mptr_read
txn_input['prev_txn_hash'] = bytes.decode(binascii.hexlify(mptr_read[::-1]))
mptr_read = mptr.read(4)
raw_txn += mptr_read
txn_input['prev_txn_out_index'] = int(binascii.hexlify(mptr_read[::-1]), 16)
mptr_read = getCountBytes(mptr)
raw_txn += mptr_read
txn_input['scriptsig_size'] = getCount(mptr_read)
mptr_read = mptr.read(txn_input['scriptsig_size'])
raw_txn += mptr_read
txn_input['scriptsig'] = bytes.decode(binascii.hexlify(mptr_read))
mptr_read = mptr.read(4)
raw_txn += mptr_read
txn_input['sequence'] = int(binascii.hexlify(mptr_read[::-1]), 16)
txn['input'].append(txn_input)
mptr_read = getCountBytes(mptr)
raw_txn += mptr_read
txn['out_count'] = getCount(mptr_read)
txn['out'] = []
for index in range(txn['out_count']):
txn_out = {}
mptr_read = mptr.read(8)
raw_txn += mptr_read
txn_out['_satoshis'] = int(binascii.hexlify(mptr_read[::-1]), 16)
mptr_read = getCountBytes(mptr)
raw_txn += mptr_read
txn_out['scriptpubkey_size'] = getCount(mptr_read)
mptr_read = mptr.read(txn_out['scriptpubkey_size'])
raw_txn += mptr_read
txn_out['scriptpubkey'] = bytes.decode(binascii.hexlify(mptr_read))
txn['out'].append(txn_out)
if 'is_segwit' in txn and txn['is_segwit'] == True:
for index in range(txn['input_count']):
mptr_read = getCountBytes(mptr)
txn['input'][index]['witness_count'] = getCount(mptr_read)
txn['input'][index]['witness'] = []
for inner_index in range(txn['input'][index]['witness_count']):
txn_witness = {}
mptr_read = getCountBytes(mptr)
txn_witness['size'] = getCount(mptr_read)
txn_witness['witness'] = bytes.decode(binascii.hexlify(mptr.read(txn_witness['size'])))
txn['input'][index]['witness'].append(txn_witness)
mptr_read = mptr.read(4)
raw_txn += mptr_read
txn['locktime'] = int(binascii.hexlify(mptr_read[::-1]), 16)
txn['txn_hash'] = getTxnHash(raw_txn)
# check_raw_txn = rpc_connection.getrawtransaction(txn['txn_hash'])
# print('checked raw txn = %s' % check_raw_txn)
# print('txn_hash = %s' % txn['txn_hash'])
return txn
```
<h4>Building Block in JSON</h4>
```
def getBlock(mptr: mmap, start: int):
block = {}
block['block_header_hash'] = getBlockHeaderHash(mptr, start)
print('block_header_hash = %s' % block['block_header_hash'])
mptr.seek(start) ## ignore magic number and block size
block['block_pre_header'] = getBlockPreHeader(mptr)
if block['block_pre_header']['magic_number'] == '00000000':
raise EOFError
block['block_header'] = getBlockHeader(mptr)
block['txn_count'] = getTransactionCount(mptr)
txn_list = []
txn_list.append(getCoinbaseTransaction(mptr))
for index in range(1, block['txn_count']):
txn = getTransaction(mptr)
txn_list.append(txn)
block['txn_list'] = txn_list
return block
```
<h4>Building Block file parser</h4>
```
def blockFileParser():
    with open(os.path.join(BLOCK_PATH, 'blk01231.dat'), 'rb') as latest_block_file:
# load file to memory
mptr = mmap.mmap(latest_block_file.fileno(), 0, prot=mmap.PROT_READ) #File is open read-only
block_file = []
try:
while True:
start = mptr.tell()
block_file.append(getBlock(mptr, start))
except EOFError:
pass
print(json.dumps(block_file, indent=4))
```
[Table of Contents](./table_of_contents.ipynb)
# The Extended Kalman Filter
```
#format the book
%matplotlib inline
from __future__ import division, print_function
from book_format import load_style
load_style()
```
We have developed the theory for the linear Kalman filter. Then, in the last two chapters we broached the topic of using Kalman filters for nonlinear problems. In this chapter we will learn the Extended Kalman filter (EKF). The EKF handles nonlinearity by linearizing the system at the point of the current estimate, and then the linear Kalman filter is used to filter this linearized system. It was one of the very first techniques used for nonlinear problems, and it remains the most common technique.
The EKF provides significant mathematical challenges to the designer of the filter; this is the most challenging chapter of the book. I do everything I can to avoid the EKF in favor of other techniques that have been developed to filter nonlinear problems. However, the topic is unavoidable; all classic papers and a majority of current papers in the field use the EKF. Even if you do not use the EKF in your own work you will need to be familiar with the topic to be able to read the literature.
## Linearizing the Kalman Filter
The Kalman filter uses linear equations, so it does not work with nonlinear problems. Problems can be nonlinear in two ways. First, the process model might be nonlinear. An object falling through the atmosphere encounters drag which reduces its acceleration. The drag coefficient varies based on the velocity of the object. The resulting behavior is nonlinear - it cannot be modeled with linear equations. Second, the measurements could be nonlinear. For example, a radar gives a range and bearing to a target. We use trigonometry, which is nonlinear, to compute the position of the target.
For the linear filter we have these equations for the process and measurement models:
$$\begin{aligned}\dot{\mathbf x} &= \mathbf{Ax} + w_x\\
\mathbf z &= \mathbf{Hx} + w_z
\end{aligned}$$
Where $\mathbf A$ is the system dynamics matrix. Using the state space methods covered in the **Kalman Filter Math** chapter these equations can be transformed into
$$\begin{aligned}\bar{\mathbf x} &= \mathbf{Fx} \\
\mathbf z &= \mathbf{Hx}
\end{aligned}$$
where $\mathbf F$ is the *fundamental matrix*. The noise terms $w_x$ and $w_z$ are incorporated into the matrices $\mathbf Q$ and $\mathbf R$. This form of the equations allows us to compute the state at step $k$ given a measurement at step $k$ and the state estimate at step $k-1$. In earlier chapters I built your intuition and minimized the math by using problems describable with Newton's equations. We know how to design $\mathbf F$ based on high school physics.
For the nonlinear model the linear expression $\mathbf{Fx} + \mathbf{Bu}$ is replaced by a nonlinear function $f(\mathbf x, \mathbf u)$, and the linear expression $\mathbf{Hx}$ is replaced by a nonlinear function $h(\mathbf x)$:
$$\begin{aligned}\dot{\mathbf x} &= f(\mathbf x, \mathbf u) + w_x\\
\mathbf z &= h(\mathbf x) + w_z
\end{aligned}$$
You might imagine that we could proceed by finding a new set of Kalman filter equations that optimally solve these equations. But if you remember the charts in the **Nonlinear Filtering** chapter you'll recall that passing a Gaussian through a nonlinear function results in a probability distribution that is no longer Gaussian. So this will not work.
The EKF does not alter the Kalman filter's linear equations. Instead, it *linearizes* the nonlinear equations at the point of the current estimate, and uses this linearization in the linear Kalman filter.
*Linearize* means what it sounds like. We find a line that most closely matches the curve at a defined point. The graph below linearizes the parabola $f(x)=x^2-2x$ at $x=1.5$.
```
import kf_book.ekf_internal as ekf_internal
ekf_internal.show_linearization()
```
If the curve above is the process model, then the dotted line shows the linearization of that curve for the estimate $x=1.5$.
We linearize systems by taking the derivative, which finds the slope of a curve:
$$\begin{aligned}
f(x) &= x^2 -2x \\
\frac{df}{dx} &= 2x - 2
\end{aligned}$$
and then evaluating it at $x$:
$$\begin{aligned}m &= f'(x=1.5) \\&= 2(1.5) - 2 \\&= 1\end{aligned}$$
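Numerically, the tangent line at $x=1.5$ stays close to the parabola near that point and diverges as we move away (a small check of the linearization, not part of the original text):

```python
f = lambda x: x**2 - 2*x                # the parabola from above
x0 = 1.5
m = 2*x0 - 2                            # slope from the derivative above
tangent = lambda x: f(x0) + m*(x - x0)  # linearization at x0

# compare curve and tangent at x0, nearby, and far away
for x in (1.5, 1.6, 2.5):
    print(x, f(x), tangent(x))
```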
Linearizing systems of differential equations is similar. We linearize $f(\mathbf x, \mathbf u)$, and $h(\mathbf x)$ by taking the partial derivatives of each to evaluate $\mathbf F$ and $\mathbf H$ at the point $\mathbf x_t$ and $\mathbf u_t$. We call the partial derivative of a matrix the [*Jacobian*](https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant). This gives us the discrete state transition matrix and measurement model matrix:
$$
\begin{aligned}
\mathbf F
&= {\frac{\partial{f(\mathbf x_t, \mathbf u_t)}}{\partial{\mathbf x}}}\biggr|_{{\mathbf x_t},{\mathbf u_t}} \\
\mathbf H &= \frac{\partial{h(\bar{\mathbf x}_t)}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t}
\end{aligned}
$$
This leads to the following equations for the EKF. I put boxes around the differences from the linear filter:
$$\begin{array}{l|l}
\text{linear Kalman filter} & \text{EKF} \\
\hline
& \boxed{\mathbf F = {\frac{\partial{f(\mathbf x_t, \mathbf u_t)}}{\partial{\mathbf x}}}\biggr|_{{\mathbf x_t},{\mathbf u_t}}} \\
\mathbf{\bar x} = \mathbf{Fx} + \mathbf{Bu} & \boxed{\mathbf{\bar x} = f(\mathbf x, \mathbf u)} \\
\mathbf{\bar P} = \mathbf{FPF}^\mathsf{T}+\mathbf Q & \mathbf{\bar P} = \mathbf{FPF}^\mathsf{T}+\mathbf Q \\
\hline
& \boxed{\mathbf H = \frac{\partial{h(\bar{\mathbf x}_t)}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t}} \\
\textbf{y} = \mathbf z - \mathbf{H \bar{x}} & \textbf{y} = \mathbf z - \boxed{h(\bar{x})}\\
\mathbf{K} = \mathbf{\bar{P}H}^\mathsf{T} (\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf R)^{-1} & \mathbf{K} = \mathbf{\bar{P}H}^\mathsf{T} (\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf R)^{-1} \\
\mathbf x=\mathbf{\bar{x}} +\mathbf{K\textbf{y}} & \mathbf x=\mathbf{\bar{x}} +\mathbf{K\textbf{y}} \\
\mathbf P= (\mathbf{I}-\mathbf{KH})\mathbf{\bar{P}} & \mathbf P= (\mathbf{I}-\mathbf{KH})\mathbf{\bar{P}}
\end{array}$$
We don't normally use $\mathbf{Fx}$ to propagate the state for the EKF as the linearization causes inaccuracies. It is typical to compute $\bar{\mathbf x}$ using a suitable numerical integration technique such as Euler or Runge-Kutta. Thus I wrote $\mathbf{\bar x} = f(\mathbf x, \mathbf u)$. For the same reasons we don't use $\mathbf{H\bar{x}}$ in the computation for the residual, opting for the more accurate $h(\bar{\mathbf x})$.
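As a sketch of what that propagation looks like (with a made-up nonlinear process model, not the radar problem below), a simple Euler integration of $f$ over the time step:

```python
import numpy as np

def f(x):
    # hypothetical process model: position/velocity with quadratic drag
    return np.array([x[1], -0.1 * x[1]**2])

def euler_predict(x, dt, steps=10):
    # propagate the state through f with simple Euler integration
    h = dt / steps
    for _ in range(steps):
        x = x + h * f(x)
    return x

x_bar = euler_predict(np.array([0., 10.]), dt=1.)
print(x_bar)  # position has advanced, velocity has decayed below 10
```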
I think the easiest way to understand the EKF is to start off with an example. Later you may want to come back and reread this section.
## Example: Tracking an Airplane
This example tracks an airplane using ground based radar. We implemented a UKF for this problem in the last chapter. Now we will implement an EKF for the same problem so we can compare both the filter performance and the level of effort required to implement the filter.
Radars work by emitting a beam of radio waves and scanning for a return bounce. Anything in the beam's path will reflect some of the signal back to the radar. By timing how long it takes for the reflected signal to get back to the radar, the system can compute the *slant distance* - the straight line distance from the radar installation to the object.
The relationship between the radar's slant range distance $r$ and elevation angle $\epsilon$ with the horizontal position $x$ and altitude $y$ of the aircraft is illustrated in the figure below:
```
ekf_internal.show_radar_chart()
```
This gives us the equalities:
$$\begin{aligned}
\epsilon &= \tan^{-1} \frac y x\\
r^2 &= x^2 + y^2
\end{aligned}$$
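Plugging in some made-up numbers (an aircraft 1000 m downrange at 300 m altitude) shows the relationships:

```python
import math

x, y = 1000., 300.              # downrange distance and altitude in meters
r = math.sqrt(x**2 + y**2)      # slant range
epsilon = math.atan2(y, x)      # elevation angle in radians
print(r, math.degrees(epsilon))
```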
### Design the State Variables
We want to track the position of an aircraft assuming a constant velocity and altitude, and measurements of the slant distance to the aircraft. That means we need 3 state variables - horizontal distance, horizontal velocity, and altitude:
$$\mathbf x = \begin{bmatrix}\mathtt{distance} \\\mathtt{velocity}\\ \mathtt{altitude}\end{bmatrix}= \begin{bmatrix}x \\ \dot x\\ y\end{bmatrix}$$
### Design the Process Model
We assume a Newtonian, kinematic system for the aircraft. We've used this model in previous chapters, so by inspection you may recognize that we want
$$\mathbf F = \left[\begin{array}{cc|c} 1 & \Delta t & 0\\
0 & 1 & 0 \\ \hline
0 & 0 & 1\end{array}\right]$$
I've partitioned the matrix into blocks to show that the upper left block is a constant velocity model for $x$, and the lower right block is a constant position model for $y$.
However, let's practice finding these matrices. We model systems with a set of differential equations. We need an equation in the form
$$\dot{\mathbf x} = \mathbf{Ax} + \mathbf{w}$$
where $\mathbf{w}$ is the system noise.
The variables $x$ and $y$ are independent so we can compute them separately. The differential equations for motion in one dimension are:
$$\begin{aligned}v &= \dot x \\
a &= \ddot{x} = 0\end{aligned}$$
Now we put the differential equations into state-space form. If this were a second- or higher-order differential system we would first have to reduce it to an equivalent set of first-order equations. The equations are first order, so we put them in state space matrix form as
$$\begin{aligned}\begin{bmatrix}\dot x \\ \ddot{x}\end{bmatrix} &= \begin{bmatrix}0&1\\0&0\end{bmatrix} \begin{bmatrix}x \\
\dot x\end{bmatrix} \\ \dot{\mathbf x} &= \mathbf{Ax}\end{aligned}$$
where $\mathbf A=\begin{bmatrix}0&1\\0&0\end{bmatrix}$.
Recall that $\mathbf A$ is the *system dynamics matrix*. It describes a set of linear differential equations. From it we must compute the state transition matrix $\mathbf F$. $\mathbf F$ describes a discrete set of linear equations which compute $\mathbf x$ for a discrete time step $\Delta t$.
A common way to compute $\mathbf F$ is to use the power series expansion of the matrix exponential:
$$\mathbf F(\Delta t) = e^{\mathbf A\Delta t} = \mathbf{I} + \mathbf A\Delta t + \frac{(\mathbf A\Delta t)^2}{2!} + \frac{(\mathbf A \Delta t)^3}{3!} + ... $$
$\mathbf A^2 = \begin{bmatrix}0&0\\0&0\end{bmatrix}$, so all higher powers of $\mathbf A$ are also $\mathbf{0}$. Thus the power series expansion is:
$$
\begin{aligned}
\mathbf F &= \mathbf{I} + \mathbf A\Delta t + \mathbf{0} \\
&= \begin{bmatrix}1&0\\0&1\end{bmatrix} + \begin{bmatrix}0&1\\0&0\end{bmatrix}\Delta t\\
\mathbf F &= \begin{bmatrix}1&\Delta t\\0&1\end{bmatrix}
\end{aligned}$$
This is the same result used by the kinematic equations! This exercise was unnecessary other than to illustrate finding the state transition matrix from linear differential equations. We will conclude the chapter with an example that will require the use of this technique.
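A quick numerical check (not part of the original text) that the series really does terminate because $\mathbf A$ is nilpotent:

```python
import numpy as np

dt = 0.1
A = np.array([[0., 1.],
              [0., 0.]])
# A @ A is the zero matrix, so the power series ends after the linear term
F = np.eye(2) + A*dt + (A @ A) * dt**2 / 2
print(F)
```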
### Design the Measurement Model
The measurement function takes the state estimate of the prior $\bar{\mathbf x}$ and turns it into a measurement of the slant range distance. We use the Pythagorean theorem to derive:
$$h(\bar{\mathbf x}) = \sqrt{x^2 + y^2}$$
The relationship between the slant distance and the position on the ground is nonlinear due to the square root. We linearize it by evaluating its partial derivative at $\mathbf x_t$:
$$
\mathbf H = \frac{\partial{h(\bar{\mathbf x})}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t}
$$
The partial derivative of a matrix is called a Jacobian, and takes the form
$$\frac{\partial \mathbf H}{\partial \bar{\mathbf x}} =
\begin{bmatrix}
\frac{\partial h_1}{\partial x_1} & \frac{\partial h_1}{\partial x_2} &\dots \\
\frac{\partial h_2}{\partial x_1} & \frac{\partial h_2}{\partial x_2} &\dots \\
\vdots & \vdots
\end{bmatrix}
$$
In other words, each element in the matrix is the partial derivative of the function $h$ with respect to the $x$ variables. For our problem we have
$$\mathbf H = \begin{bmatrix}{\partial h}/{\partial x} & {\partial h}/{\partial \dot{x}} & {\partial h}/{\partial y}\end{bmatrix}$$
Solving each in turn:
$$\begin{aligned}
\frac{\partial h}{\partial x} &= \frac{\partial}{\partial x} \sqrt{x^2 + y^2} \\
&= \frac{x}{\sqrt{x^2 + y^2}}
\end{aligned}$$
and
$$\begin{aligned}
\frac{\partial h}{\partial \dot{x}} &=
\frac{\partial}{\partial \dot{x}} \sqrt{x^2 + y^2} \\
&= 0
\end{aligned}$$
and
$$\begin{aligned}
\frac{\partial h}{\partial y} &= \frac{\partial}{\partial y} \sqrt{x^2 + y^2} \\
&= \frac{y}{\sqrt{x^2 + y^2}}
\end{aligned}$$
giving us
$$\mathbf H =
\begin{bmatrix}
\frac{x}{\sqrt{x^2 + y^2}} &
0 &
\frac{y}{\sqrt{x^2 + y^2}}
\end{bmatrix}$$
This may seem daunting, so step back and recognize that all of this math is doing something very simple. We have an equation for the slant range to the airplane which is nonlinear. The Kalman filter only works with linear equations, so we need to find a linear equation that approximates $\mathbf H$. As we discussed above, finding the slope of a nonlinear equation at a given point is a good approximation. For the Kalman filter, the 'given point' is the state variable $\mathbf x$ so we need to take the derivative of the slant range with respect to $\mathbf x$. For the linear Kalman filter $\mathbf H$ was a constant that we computed prior to running the filter. For the EKF $\mathbf H$ is updated at each step as the evaluation point $\bar{\mathbf x}$ changes at each epoch.
To make this more concrete, let's now write a Python function that computes the Jacobian of $h$ for this problem.
```
from math import sqrt
from numpy import array

def HJacobian_at(x):
    """ compute Jacobian of H matrix at x """
    horiz_dist = x[0]
    altitude = x[2]
    denom = sqrt(horiz_dist**2 + altitude**2)
    return array([[horiz_dist/denom, 0., altitude/denom]])
```
Finally, let's provide the code for $h(\bar{\mathbf x})$:
```
def hx(x):
""" compute measurement for slant range that
would correspond to state x.
"""
return (x[0]**2 + x[2]**2) ** 0.5
```
Now let's write a simulation for our radar.
```
from numpy.random import randn
import math
class RadarSim(object):
""" Simulates the radar signal returns from an object
    flying at a constant altitude and velocity in 1D.
"""
def __init__(self, dt, pos, vel, alt):
self.pos = pos
self.vel = vel
self.alt = alt
self.dt = dt
def get_range(self):
""" Returns slant range to the object. Call once
for each new measurement at dt time from last call.
"""
# add some process noise to the system
self.vel = self.vel + .1*randn()
self.alt = self.alt + .1*randn()
self.pos = self.pos + self.vel*self.dt
# add measurement noise
err = self.pos * 0.05*randn()
slant_dist = math.sqrt(self.pos**2 + self.alt**2)
return slant_dist + err
```
### Design Process and Measurement Noise
The radar measures the range to a target. We will use $\sigma_{range}= 5$ meters for the noise. This gives us
$$\mathbf R = \begin{bmatrix}\sigma_{range}^2\end{bmatrix} = \begin{bmatrix}25\end{bmatrix}$$
The design of $\mathbf Q$ requires some discussion. The state $\mathbf x= \begin{bmatrix}x & \dot x & y\end{bmatrix}^\mathtt{T}$. The first two elements are position (down range distance) and velocity, so we can use `Q_discrete_white_noise` to compute the values for the upper left block of $\mathbf Q$. The third element of $\mathbf x$ is altitude, which we are assuming is independent of the down range distance. That leads us to a block design of $\mathbf Q$ of:
$$\mathbf Q = \begin{bmatrix}\mathbf Q_\mathtt{x} & 0 \\ 0 & \mathbf Q_\mathtt{y}\end{bmatrix}$$
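As a sketch (mine, not the book's listing), the block design can be assembled with plain NumPy; the 2×2 block below is the standard discrete white noise matrix that filterpy's `Q_discrete_white_noise(2, dt=dt, var=var)` produces:

```python
import numpy as np

dt, var = 0.05, 0.1

# Q for the (x, x_dot) block: the discrete white noise matrix
Q_x = var * np.array([[dt**4/4, dt**3/2],
                      [dt**3/2, dt**2  ]])

Q = np.zeros((3, 3))
Q[0:2, 0:2] = Q_x    # down-range position/velocity block
Q[2, 2] = 0.1        # independent altitude variance
```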
### Implementation
`FilterPy` provides the class `ExtendedKalmanFilter`. It works similarly to the `KalmanFilter` class we have been using, except that it allows you to provide a function that computes the Jacobian of $\mathbf H$ and the function $h(\mathbf x)$.
We start by importing the filter and creating it. The dimension of `x` is 3 and `z` has dimension 1.
```python
from filterpy.kalman import ExtendedKalmanFilter
rk = ExtendedKalmanFilter(dim_x=3, dim_z=1)
```
We create the radar simulator:
```python
dt = 0.05
radar = RadarSim(dt, pos=0., vel=100., alt=1000.)
```
We will initialize the filter near the airplane's actual position:
```python
rk.x = array([radar.pos, radar.vel-10, radar.alt+100])
```
We assign the system matrix using the first-order Taylor series expansion we computed above:
```python
dt = 0.05
rk.F = eye(3) + array([[0, 1, 0],
[0, 0, 0],
[0, 0, 0]])*dt
```
After assigning reasonable values to $\mathbf R$, $\mathbf Q$, and $\mathbf P$ we can run the filter with a simple loop. We pass the functions for computing the Jacobian of $\mathbf H$ and $h(x)$ into the `update` method.
```python
for i in range(int(20/dt)):
z = radar.get_range()
rk.update(array([z]), HJacobian_at, hx)
rk.predict()
```
Adding some boilerplate code to save and plot the results we get:
```
from filterpy.common import Q_discrete_white_noise
from filterpy.kalman import ExtendedKalmanFilter
from numpy import eye, array, asarray
import numpy as np
from kf_book import ekf_internal   # the book's plotting helpers
dt = 0.05
rk = ExtendedKalmanFilter(dim_x=3, dim_z=1)
radar = RadarSim(dt, pos=0., vel=100., alt=1000.)
# make an imperfect starting guess
rk.x = array([radar.pos-100, radar.vel+100, radar.alt+1000])
rk.F = eye(3) + array([[0, 1, 0],
[0, 0, 0],
[0, 0, 0]]) * dt
range_std = 5. # meters
rk.R = np.diag([range_std**2])
rk.Q[0:2, 0:2] = Q_discrete_white_noise(2, dt=dt, var=0.1)
rk.Q[2,2] = 0.1
rk.P *= 50
xs, track = [], []
for i in range(int(20/dt)):
z = radar.get_range()
track.append((radar.pos, radar.vel, radar.alt))
rk.update(array([z]), HJacobian_at, hx)
xs.append(rk.x)
rk.predict()
xs = asarray(xs)
track = asarray(track)
time = np.arange(0, len(xs)*dt, dt)
ekf_internal.plot_radar(xs, track, time)
```
## Using SymPy to compute Jacobians
Depending on your experience with derivatives you may have found the computation of the Jacobian difficult. Even if you found it easy, a slightly more difficult problem easily leads to very difficult computations.
As explained in Appendix A, we can use the SymPy package to compute the Jacobian for us.
```
import sympy
sympy.init_printing(use_latex=True)
x, x_vel, y = sympy.symbols('x, x_vel, y')
H = sympy.Matrix([sympy.sqrt(x**2 + y**2)])
state = sympy.Matrix([x, x_vel, y])
H.jacobian(state)
```
This result is the same as the result we computed above, and with much less effort on our part!
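If you then want a fast numeric version of the symbolic result, `sympy.lambdify` can convert it into an ordinary function. This snippet is an illustration of that idea, not code from the text:

```python
import sympy

x, x_vel, y = sympy.symbols('x x_vel y')
H = sympy.Matrix([sympy.sqrt(x**2 + y**2)])
H_j = H.jacobian(sympy.Matrix([x, x_vel, y]))

# lambdify turns the symbolic Jacobian into a plain numeric function
H_fn = sympy.lambdify((x, x_vel, y), H_j, 'numpy')
out = H_fn(3., 100., 4.)   # Jacobian at x=3, y=4: [x/5, 0, y/5]
```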
## Robot Localization
It's time to try a real problem. I warn you that this section is difficult. However, most books choose simple, textbook problems with simple answers, and you are left wondering how to solve a real world problem.
We will consider the problem of robot localization. We already implemented this in the **Unscented Kalman Filter** chapter, and I recommend you read it now if you haven't already. In this scenario we have a robot that is moving through a landscape using a sensor to detect landmarks. This could be a self driving car using computer vision to identify trees, buildings, and other landmarks. It might be one of those small robots that vacuum your house, or a robot in a warehouse.
The robot has 4 wheels in the same configuration used by automobiles. It maneuvers by pivoting the front wheels. This causes the robot to pivot around the rear axle while moving forward. This is nonlinear behavior which we will have to model.
The robot has a sensor that measures the range and bearing to known targets in the landscape. This is nonlinear because computing a position from a range and bearing requires square roots and trigonometry.
Both the process model and measurement models are nonlinear. The EKF accommodates both, so we provisionally conclude that the EKF is a viable choice for this problem.
### Robot Motion Model
At a first approximation an automobile steers by pivoting the front tires while moving forward. The front of the car moves in the direction that the wheels are pointing while pivoting around the rear tires. This simple description is complicated by issues such as slippage due to friction, the differing behavior of the rubber tires at different speeds, and the need for the outside tire to travel a different radius than the inner tire. Accurately modeling steering requires a complicated set of differential equations.
For lower speed robotic applications a simpler *bicycle model* has been found to perform well. This is a depiction of the model:
```
ekf_internal.plot_bicycle()
```
In the **Unscented Kalman Filter** chapter we derived these equations:
$$\begin{aligned}
\beta &= \frac d w \tan(\alpha) \\
x &= x - R\sin(\theta) + R\sin(\theta + \beta) \\
y &= y + R\cos(\theta) - R\cos(\theta + \beta) \\
\theta &= \theta + \beta
\end{aligned}
$$
where $\theta$ is the robot's heading.
You do not need to understand this model in detail if you are not interested in steering models. The important thing to recognize is that our motion model is nonlinear, and we will need to account for that in our Kalman filter.
### Design the State Variables
For our filter we will maintain the position $x,y$ and orientation $\theta$ of the robot:
$$\mathbf x = \begin{bmatrix}x \\ y \\ \theta\end{bmatrix}$$
Our control input $\mathbf u$ is the velocity $v$ and steering angle $\alpha$:
$$\mathbf u = \begin{bmatrix}v \\ \alpha\end{bmatrix}$$
### Design the System Model
We model our system as a nonlinear motion model plus noise.
$$\bar x = f(x, u) + \mathcal{N}(0, Q)$$
Using the motion model for a robot that we created above, we can expand this to
$$\bar{\begin{bmatrix}x\\y\\\theta\end{bmatrix}} = \begin{bmatrix}x\\y\\\theta\end{bmatrix} +
\begin{bmatrix}- R\sin(\theta) + R\sin(\theta + \beta) \\
R\cos(\theta) - R\cos(\theta + \beta) \\
\beta\end{bmatrix}$$
We find $\mathbf F$ by taking the Jacobian of $f(x,u)$.
$$\mathbf F = \frac{\partial f(x, u)}{\partial x} =\begin{bmatrix}
\frac{\partial f_1}{\partial x} &
\frac{\partial f_1}{\partial y} &
\frac{\partial f_1}{\partial \theta}\\
\frac{\partial f_2}{\partial x} &
\frac{\partial f_2}{\partial y} &
\frac{\partial f_2}{\partial \theta} \\
\frac{\partial f_3}{\partial x} &
\frac{\partial f_3}{\partial y} &
\frac{\partial f_3}{\partial \theta}
\end{bmatrix}
$$
When we calculate these we get
$$\mathbf F = \begin{bmatrix}
1 & 0 & -R\cos(\theta) + R\cos(\theta+\beta) \\
0 & 1 & -R\sin(\theta) + R\sin(\theta+\beta) \\
0 & 0 & 1
\end{bmatrix}$$
We can double check our work with SymPy.
```
import sympy
from sympy.abc import alpha, x, y, v, w, R, theta
from sympy import symbols, Matrix
sympy.init_printing(use_latex="mathjax", fontsize='16pt')
time = symbols('t')
d = v*time
beta = (d/w)*sympy.tan(alpha)
r = w/sympy.tan(alpha)
fxu = Matrix([[x-r*sympy.sin(theta) + r*sympy.sin(theta+beta)],
[y+r*sympy.cos(theta)- r*sympy.cos(theta+beta)],
[theta+beta]])
F = fxu.jacobian(Matrix([x, y, theta]))
F
```
That looks a bit complicated. We can use SymPy to substitute terms:
```
# reduce common expressions
B, R = symbols('beta, R')
F = F.subs((d/w)*sympy.tan(alpha), B)
F.subs(w/sympy.tan(alpha), R)
```
This form verifies that the computation of the Jacobian is correct.
Now we can turn our attention to the noise. Here, the noise is in our control input, so it is in *control space*. In other words, we command a specific velocity and steering angle, but we need to convert that into errors in $x, y, \theta$. In a real system this might vary depending on velocity, so it will need to be recomputed for every prediction. I will choose this as the noise model; for a real robot you will need to choose a model that accurately depicts the error in your system.
$$\mathbf{M} = \begin{bmatrix}\sigma_{vel}^2 & 0 \\ 0 & \sigma_\alpha^2\end{bmatrix}$$
If this was a linear problem we would convert from control space to state space using the by now familiar $\mathbf{FMF}^\mathsf T$ form. Since our motion model is nonlinear we do not try to find a closed form solution to this, but instead linearize it with a Jacobian which we will name $\mathbf{V}$.
$$\mathbf{V} = \frac{\partial f(x, u)}{\partial u} = \begin{bmatrix}
\frac{\partial f_1}{\partial v} & \frac{\partial f_1}{\partial \alpha} \\
\frac{\partial f_2}{\partial v} & \frac{\partial f_2}{\partial \alpha} \\
\frac{\partial f_3}{\partial v} & \frac{\partial f_3}{\partial \alpha}
\end{bmatrix}$$
These partial derivatives become very difficult to work with. Let's compute them with SymPy.
```
V = fxu.jacobian(Matrix([v, alpha]))
V = V.subs(sympy.tan(alpha)/w, 1/R)
V = V.subs(time*v/R, B)
V = V.subs(time*v, 'd')
V
```
This should give you an appreciation of how quickly the EKF becomes mathematically intractable.
This gives us the final form of our prediction equations:
$$\begin{aligned}
\mathbf{\bar x} &= \mathbf x +
\begin{bmatrix}- R\sin(\theta) + R\sin(\theta + \beta) \\
R\cos(\theta) - R\cos(\theta + \beta) \\
\beta\end{bmatrix}\\
\mathbf{\bar P} &=\mathbf{FPF}^{\mathsf T} + \mathbf{VMV}^{\mathsf T}
\end{aligned}$$
This form of linearization is not the only way to predict $\mathbf x$. For example, we could use a numerical integration technique such as *Runge Kutta* to compute the movement
of the robot. This will be required if the time step is relatively large. Things are not as cut and dried with the EKF as for the Kalman filter. For a real problem you have to carefully model your system with differential equations and then determine the most appropriate way to solve that system. The correct approach depends on the accuracy you require, how nonlinear the equations are, your processor budget, and numerical stability concerns.
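For illustration, here is a minimal fourth-order Runge-Kutta step (a generic sketch, not the book's code); in practice `f` would be the robot's state derivative:

```python
import numpy as np

def rk4_step(f, x, dt):
    """One fourth-order Runge-Kutta step for x' = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5*dt*k1)
    k3 = f(x + 0.5*dt*k2)
    k4 = f(x + dt*k3)
    return x + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# sanity check on x' = x, whose solution is e^t
x = np.array([1.0])
for _ in range(100):
    x = rk4_step(lambda s: s, x, 0.01)
# after integrating to t=1, x[0] is very close to e = 2.71828...
```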
### Design the Measurement Model
The robot's sensor provides a noisy bearing and range measurement to multiple known locations in the landscape. The measurement model must convert the state $\begin{bmatrix}x & y&\theta\end{bmatrix}^\mathsf T$ into a range and bearing to the landmark. If $\mathbf p$
is the position of a landmark, the range $r$ is
$$r = \sqrt{(p_x - x)^2 + (p_y - y)^2}$$
The sensor provides bearing relative to the orientation of the robot, so we must subtract the robot's orientation from the bearing to get the sensor reading, like so:
$$\phi = \arctan(\frac{p_y - y}{p_x - x}) - \theta$$
Thus our measurement model $h$ is
$$\begin{aligned}
\mathbf z& = h(\bar{\mathbf x}, \mathbf p) &+ \mathcal{N}(0, R)\\
&= \begin{bmatrix}
\sqrt{(p_x - x)^2 + (p_y - y)^2} \\
\arctan(\frac{p_y - y}{p_x - x}) - \theta
\end{bmatrix} &+ \mathcal{N}(0, R)
\end{aligned}$$
This is clearly nonlinear, so we need to linearize $h$ at $\mathbf x$ by taking its Jacobian. We compute that with SymPy below.
```
px, py = symbols('p_x, p_y')
z = Matrix([[sympy.sqrt((px-x)**2 + (py-y)**2)],
[sympy.atan2(py-y, px-x) - theta]])
z.jacobian(Matrix([x, y, theta]))
```
Now we need to write that as a Python function. For example we might write:
```
from math import sqrt
def H_of(x, landmark_pos):
""" compute Jacobian of H matrix where h(x) computes
the range and bearing to a landmark for state x """
px = landmark_pos[0]
py = landmark_pos[1]
hyp = (px - x[0, 0])**2 + (py - x[1, 0])**2
dist = sqrt(hyp)
H = array(
[[-(px - x[0, 0]) / dist, -(py - x[1, 0]) / dist, 0],
[ (py - x[1, 0]) / hyp, -(px - x[0, 0]) / hyp, -1]])
return H
```
We also need to define a function that converts the system state into a measurement.
```
from math import atan2
def Hx(x, landmark_pos):
""" takes a state variable and returns the measurement
that would correspond to that state.
"""
px = landmark_pos[0]
py = landmark_pos[1]
dist = sqrt((px - x[0, 0])**2 + (py - x[1, 0])**2)
Hx = array([[dist],
[atan2(py - x[1, 0], px - x[0, 0]) - x[2, 0]]])
return Hx
```
### Design Measurement Noise
It is reasonable to assume that the noise in the range and bearing measurements is independent, hence
$$\mathbf R=\begin{bmatrix}\sigma_{range}^2 & 0 \\ 0 & \sigma_{bearing}^2\end{bmatrix}$$
### Implementation
We will use `FilterPy`'s `ExtendedKalmanFilter` class to implement the filter. Its `predict()` method uses the standard linear equations for the process model. Ours is nonlinear, so we will have to override `predict()` with our own implementation. I'll want to also use this class to simulate the robot, so I'll add a method `move()` that computes the position of the robot which both `predict()` and my simulation can call.
The matrices for the prediction step are quite large. While writing this code I made several errors before I finally got it working. I only found my errors by using SymPy's `evalf` function. `evalf` evaluates a SymPy `Matrix` with specific values for the variables. I decided to demonstrate this technique to you, and used `evalf` in the Kalman filter code. You'll need to understand a couple of points.
First, `evalf` uses a dictionary to specify the values. For example, if your matrix contains an `x` and `y`, you can write
```python
M.evalf(subs={x:3, y:17})
```
to evaluate the matrix for `x=3` and `y=17`.
Second, `evalf` returns a `sympy.Matrix` object. Use `numpy.array(M).astype(float)` to convert it to a NumPy array. `numpy.array(M)` creates an array of type `object`, which is not what you want.
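A tiny demonstration of both points, with a made-up matrix:

```python
import sympy
import numpy as np

x, y = sympy.symbols('x y')
M = sympy.Matrix([[x, y],
                  [y, x]])

M_num = M.evalf(subs={x: 3, y: 17})   # still a sympy.Matrix
F = np.array(M_num).astype(float)     # now a float64 ndarray
```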
Here is the code for the EKF:
```
from filterpy.kalman import ExtendedKalmanFilter as EKF
from numpy import dot, array, sqrt
from math import sin, cos, tan
import numpy as np
import sympy
from sympy import symbols, Matrix
class RobotEKF(EKF):
def __init__(self, dt, wheelbase, std_vel, std_steer):
EKF.__init__(self, 3, 2, 2)
self.dt = dt
self.wheelbase = wheelbase
self.std_vel = std_vel
self.std_steer = std_steer
a, x, y, v, w, theta, time = symbols(
'a, x, y, v, w, theta, t')
d = v*time
beta = (d/w)*sympy.tan(a)
r = w/sympy.tan(a)
self.fxu = Matrix(
[[x-r*sympy.sin(theta)+r*sympy.sin(theta+beta)],
[y+r*sympy.cos(theta)-r*sympy.cos(theta+beta)],
[theta+beta]])
self.F_j = self.fxu.jacobian(Matrix([x, y, theta]))
self.V_j = self.fxu.jacobian(Matrix([v, a]))
        # save dictionary and its variables for later use
self.subs = {x: 0, y: 0, v:0, a:0,
time:dt, w:wheelbase, theta:0}
self.x_x, self.x_y, = x, y
self.v, self.a, self.theta = v, a, theta
def predict(self, u=0):
self.x = self.move(self.x, u, self.dt)
self.subs[self.theta] = self.x[2, 0]
self.subs[self.v] = u[0]
self.subs[self.a] = u[1]
F = array(self.F_j.evalf(subs=self.subs)).astype(float)
V = array(self.V_j.evalf(subs=self.subs)).astype(float)
# covariance of motion noise in control space
M = array([[self.std_vel*u[0]**2, 0],
[0, self.std_steer**2]])
self.P = dot(F, self.P).dot(F.T) + dot(V, M).dot(V.T)
def move(self, x, u, dt):
hdg = x[2, 0]
vel = u[0]
steering_angle = u[1]
dist = vel * dt
if abs(steering_angle) > 0.001: # is robot turning?
beta = (dist / self.wheelbase) * tan(steering_angle)
r = self.wheelbase / tan(steering_angle) # radius
dx = np.array([[-r*sin(hdg) + r*sin(hdg + beta)],
[r*cos(hdg) - r*cos(hdg + beta)],
[beta]])
else: # moving in straight line
dx = np.array([[dist*cos(hdg)],
[dist*sin(hdg)],
[0]])
return x + dx
```
Now we have another issue to handle. The residual is nominally computed as $y = z - h(x)$, but this will not work because our measurement contains an angle. Suppose $z$ has a bearing of $1^\circ$ and $h(x)$ has a bearing of $359^\circ$. Naively subtracting them yields an angular difference of $-358^\circ$, whereas the correct value is $2^\circ$. We have to write code to correctly compute the bearing residual.
```
def residual(a, b):
""" compute residual (a-b) between measurements containing
[range, bearing]. Bearing is normalized to [-pi, pi)"""
y = a - b
y[1] = y[1] % (2 * np.pi) # force in range [0, 2 pi)
if y[1] > np.pi: # move to [-pi, pi)
y[1] -= 2 * np.pi
return y
```
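A quick check of the wraparound case described above (repeating `residual` so the snippet stands alone):

```python
import numpy as np

def residual(a, b):
    """a - b with the bearing element wrapped into [-pi, pi)."""
    y = a - b
    y[1] = y[1] % (2 * np.pi)   # force into [0, 2 pi)
    if y[1] > np.pi:            # move to [-pi, pi)
        y[1] -= 2 * np.pi
    return y

z  = np.array([100., np.radians(1.)])    # measurement: bearing 1 degree
hx = np.array([100., np.radians(359.)])  # prediction: bearing 359 degrees
r = residual(z, hx)   # bearing residual is about 2 degrees, not -358
```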
The rest of the code runs the simulation and plots the results, and shouldn't need too much comment by now. I create a variable `landmarks` that contains the landmark coordinates. I update the simulated robot position 10 times a second, but run the EKF only once per second. This is for two reasons. First, we are not using Runge-Kutta to integrate the differential equations of motion, so a narrow time step allows our simulation to be more accurate. Second, it is fairly normal in embedded systems to have limited processing speed. This forces you to run your Kalman filter only as frequently as absolutely needed.
```
from filterpy.stats import plot_covariance_ellipse
from math import sqrt, tan, cos, sin, atan2
import matplotlib.pyplot as plt
dt = 1.0
def z_landmark(lmark, sim_pos, std_rng, std_brg):
x, y = sim_pos[0, 0], sim_pos[1, 0]
d = np.sqrt((lmark[0] - x)**2 + (lmark[1] - y)**2)
a = atan2(lmark[1] - y, lmark[0] - x) - sim_pos[2, 0]
z = np.array([[d + randn()*std_rng],
[a + randn()*std_brg]])
return z
def ekf_update(ekf, z, landmark):
ekf.update(z, HJacobian=H_of, Hx=Hx,
residual=residual,
args=(landmark), hx_args=(landmark))
def run_localization(landmarks, std_vel, std_steer,
std_range, std_bearing,
step=10, ellipse_step=20, ylim=None):
ekf = RobotEKF(dt, wheelbase=0.5, std_vel=std_vel,
std_steer=std_steer)
ekf.x = array([[2, 6, .3]]).T # x, y, steer angle
ekf.P = np.diag([.1, .1, .1])
ekf.R = np.diag([std_range**2, std_bearing**2])
sim_pos = ekf.x.copy() # simulated position
# steering command (vel, steering angle radians)
u = array([1.1, .01])
plt.figure()
plt.scatter(landmarks[:, 0], landmarks[:, 1],
marker='s', s=60)
track = []
for i in range(200):
sim_pos = ekf.move(sim_pos, u, dt/10.) # simulate robot
track.append(sim_pos)
if i % step == 0:
ekf.predict(u=u)
if i % ellipse_step == 0:
plot_covariance_ellipse(
(ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2],
std=6, facecolor='k', alpha=0.3)
x, y = sim_pos[0, 0], sim_pos[1, 0]
for lmark in landmarks:
z = z_landmark(lmark, sim_pos,
std_range, std_bearing)
ekf_update(ekf, z, lmark)
if i % ellipse_step == 0:
plot_covariance_ellipse(
(ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2],
std=6, facecolor='g', alpha=0.8)
track = np.array(track)
plt.plot(track[:, 0], track[:,1], color='k', lw=2)
plt.axis('equal')
plt.title("EKF Robot localization")
if ylim is not None: plt.ylim(*ylim)
plt.show()
return ekf
landmarks = array([[5, 10], [10, 5], [15, 15]])
ekf = run_localization(
landmarks, std_vel=0.1, std_steer=np.radians(1),
std_range=0.3, std_bearing=0.1)
print('Final P:', ekf.P.diagonal())
```
I have plotted the landmarks as solid squares. The path of the robot is drawn with a black line. The covariance ellipses for the predict step are light gray, and the covariances of the update are shown in green. To make them visible at this scale I have set the ellipse boundary at 6$\sigma$.
We can see that there is a lot of uncertainty added by our motion model, and that most of the error is in the direction of motion. We determine that from the shape of the predict ellipses. After a few steps we can see that the filter incorporates the landmark measurements and the errors improve.
I used the same initial conditions and landmark locations in the UKF chapter. The UKF achieves much better accuracy in terms of the error ellipse. Both perform roughly as well as far as their estimate for $\mathbf x$ is concerned.
Now let's add another landmark.
```
landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5]])
ekf = run_localization(
landmarks, std_vel=0.1, std_steer=np.radians(1),
std_range=0.3, std_bearing=0.1)
plt.show()
print('Final P:', ekf.P.diagonal())
```
The uncertainty in the estimates near the end of the track is smaller. We can see the effect that multiple landmarks have on our uncertainty by only using the first two landmarks.
```
ekf = run_localization(
landmarks[0:2], std_vel=1.e-10, std_steer=1.e-10,
std_range=1.4, std_bearing=.05)
print('Final P:', ekf.P.diagonal())
```
The estimate quickly diverges from the robot's path after passing the landmarks. The covariance also grows quickly. Let's see what happens with only one landmark:
```
ekf = run_localization(
landmarks[0:1], std_vel=1.e-10, std_steer=1.e-10,
std_range=1.4, std_bearing=.05)
print('Final P:', ekf.P.diagonal())
```
As you probably suspected, one landmark produces a very bad result. Conversely, a large number of landmarks allows us to make very accurate estimates.
```
landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5], [15, 10],
[10,14], [23, 14], [25, 20], [10, 20]])
ekf = run_localization(
landmarks, std_vel=0.1, std_steer=np.radians(1),
std_range=0.3, std_bearing=0.1, ylim=(0, 21))
print('Final P:', ekf.P.diagonal())
```
### Discussion
I said that this was a real problem, and in some ways it is. I've seen alternative presentations that used robot motion models that led to simpler Jacobians. On the other hand, my model of the movement is also simplistic in several ways. First, it uses a bicycle model. A real car has two sets of tires, and each travels on a different radius. The wheels do not grip the surface perfectly. I also assumed that the robot responds instantaneously to the control input. Sebastian Thrun writes in *Probabilistic Robotics* that this simplified model is justified because the filters perform well when used to track real vehicles. The lesson here is that while you need a reasonably accurate nonlinear model, it does not need to be perfect to operate well. As a designer you will need to balance the fidelity of your model with the difficulty of the math and the CPU time required to perform the linear algebra.
Another way in which this problem was simplistic is that we assumed that we knew the correspondence between the landmarks and measurements. But suppose we are using radar - how would we know that a specific signal return corresponded to a specific building in the local scene? This question hints at SLAM algorithms - simultaneous localization and mapping. SLAM is not the point of this book, so I will not elaborate on this topic.
## UKF vs EKF
In the last chapter I used the UKF to solve this problem. The difference in implementation should be very clear. Computing the Jacobians for the state and measurement models was not trivial despite a rudimentary motion model. A different problem could result in a Jacobian which is difficult or impossible to derive analytically. In contrast, the UKF only requires you to provide a function that computes the system motion model and another for the measurement model.
There are many cases where the Jacobian cannot be found analytically. The details are beyond the scope of this book, but you will have to use numerical methods to compute the Jacobian. That undertaking is not trivial, and you will spend a significant portion of a master's degree at a STEM school learning techniques to handle such situations. Even then you'll likely only be able to solve problems related to your field - an aeronautical engineer learns a lot about Navier Stokes equations, but not much about modelling chemical reaction rates.
So, UKFs are easy. Are they accurate? In practice they often perform better than the EKF. You can find plenty of research papers that prove that the UKF outperforms the EKF in various problem domains. It's not hard to understand why this would be true. The EKF works by linearizing the system model and measurement model at a single point, and the UKF uses $2n+1$ points.
Let's look at a specific example. Take $f(x) = x^3$ and pass a Gaussian distribution through it. I will compute an accurate answer using a Monte Carlo simulation. I generate 50,000 points randomly distributed according to the Gaussian, pass each through $f(x)$, then compute the mean and variance of the result.
The EKF linearizes the function by taking the derivative to find the slope at the evaluation point $x$. This slope becomes the linear function that we use to transform the Gaussian. Here is a plot of that.
```
import kf_book.nonlinear_plots as nonlinear_plots
nonlinear_plots.plot_ekf_vs_mc()
```
The EKF computation is rather inaccurate. In contrast, here is the performance of the UKF:
```
nonlinear_plots.plot_ukf_vs_mc(alpha=0.001, beta=3., kappa=1.)
```
Here we can see that the computation of the UKF's mean is accurate to 2 decimal places. The standard deviation is slightly off, but you can also fine tune how the UKF computes the distribution by using the $\alpha$, $\beta$, and $\kappa$ parameters for generating the sigma points. Here I used $\alpha=0.001$, $\beta=3$, and $\kappa=1$. Feel free to modify them to see the result. You should be able to get better results than I did. However, avoid over-tuning the UKF for a specific test. It may perform better for your test case, but worse in general.
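You can reproduce the flavor of this comparison with a few lines of NumPy (a sketch of the idea; the book's plots use its own helper code):

```python
import numpy as np

np.random.seed(1)
mean, std = 1.0, 0.3
f = lambda x: x**3

# Monte Carlo "truth": push 50,000 samples through the nonlinearity
mc_mean = f(np.random.normal(mean, std, 50_000)).mean()

# EKF-style linearization keeps the mean at f(mean); the analytic mean
# of x^3 for N(1, 0.3^2) is mu^3 + 3*mu*sigma^2 = 1.27, noticeably larger
ekf_mean = f(mean)
```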
# Functions (Magic spell boxes)
Functions are magic spell boxes, which store their own sleeping princesses and incantations.\
You can cast the spell with ()\
Casting the spell with () creates its own sub realm, which disappears after the sub realm returns an object to the main realm at the end of the spell\
The sleeping princesses in the spell box might share the same names with princesses in the main realm, but they are not the same princesses.
```
alice = 400
bella = 500
caroline = 600
daisy = 700
```
Now the main realm has 4 princesses.
```
alice, bella, caroline, daisy
def add(alice, bella):
return alice+bella
```
Now the main realm has 4 princesses and 1 magic spell box called **add**, all shown below:
```
alice, bella, caroline, daisy, add
```
Within the magic spell box called **add** we have 2 more sleeping princesses. These are not the same princesses as in the main realm.
These are a different set of alice and bella, and they are sleeping. They only wake up when the spell is cast.
When the spell is cast, Genie will need to give the princesses something to hold.
That could be objects directly, such as **add(1,2)**, or it could be objects in the main realm represented by the names of their princesses, such as **add(alice, bella)**.
```
add(1,2)
add(alice, bella)
```
In the second example above, the alice and bella inside the spell box simply hold the same objects their namesakes in the main realm hold: the number objects 400 and 500 respectively.
```
add(caroline, daisy)
```
In the above example they hold what the other 2 princesses in the main realm are holding.\
Once the spell is cast and all the incantations are completed, the magic spell box returns something and the sub realm disappears, which means the princesses in the magic spell box go back to sleep.\
So after all this, the main realm still has 4 princesses and a magic spell box.\
So what happened to the 900 and 1300 number objects?
They were simply recycled, as we haven't asked for a new princess, or one of the existing princesses, to hold on to them.\
But if we look at the main realm again:
```
alice, bella, caroline, daisy, add
```
So after all this, the main realm still has 4 princesses and the magic spell box **add**.
```
add
alice, bella, caroline, daisy, add
```
Now for a bit of a twist: we create 2 more magic boxes, called **multiply** and **divide**.
```
def multiply(alice, bella):
caroline = alice * bella
return caroline
```
This spell box has 3 sleeping princesses, who have names similar to the main realm princesses but are not the same. They are currently sleeping.
```
def divide(alice, bella):
    dot = alice / bella
    return dot
```
This spell box has 3 more sleeping princesses, who have names similar to the main realm princesses and to the princesses in other magic boxes, but again they are not the same.
```
multiply(alice, bella)
divide(alice, bella)
```
Now, how many princesses are there in the main realm, and what are they holding?
```
alice, bella, caroline, daisy, add, multiply, divide
```
Where are caroline and dot from the multiply and divide sub realms respectively?
They have disappeared; they vanished when their sub realms returned their objects.
So, where are the objects 200000 and 0.8 that were returned to the main realm living?
Well, these were recycled by Genie, as we have not asked Genie to give them to any princess.
```
caroline = multiply(alice, bella)
gauri = divide(alice, bella)
alice, bella, caroline, daisy, gauri, add, multiply, divide
```
Let's have a look at another magic box
```
def surprise_subtract(alice , bella):
helena = 10
return alice - bella - helena
```
This magic box has 3 princesses. 2 of them are sleeping, and they will need to be given objects to hold when the spell is cast and a sub realm is built.
The other princess, helena, is already awake, as she is holding a number object, but she is in limbo. She does not have a realm to live in until the spell is cast. She just lives in the box with her object, unseen by others, until her realm is created. She goes back into the box with her object after the spell is completed.
```
surprise_subtract(500, 400)
# Here the princesses alice and bella in the box are given number objects directly
surprise_subtract(bella, alice)
# Here the princesses alice and bella in the magic box hold the objects that
# princesses bella and alice in the main realm are holding, respectively
```
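We can watch the sub-realm vanish for ourselves. The cell below repeats the `surprise_subtract` spell so it runs on its own, then asks the main realm for helena; the Genie answers with a `NameError`, because helena only ever lived inside the box (this small check is an addition, not part of the story above):

```
def surprise_subtract(alice, bella):
    helena = 10  # helena lives only inside this sub-realm
    return alice - bella - helena

result = surprise_subtract(500, 400)
print(result)  # 90

try:
    print(helena)
except NameError:
    # helena's realm vanished when the spell completed
    print("No princess named helena in the main realm")
```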
---
# DrugNorm
author -- AR Dirkson --
date -- 08-02-2019 --
python version -- 3 --
This script first subsets the dictionary to the drug names that occur in your corpus, and then uses simple matching to replace them with the generic drug name chosen as the key in the dictionary.
`celex_lwrd_unique` is a list of all the unique lowercased words in the CELEX lexicon. Alternatives can be used, but they must be in list form for this script.
The input data needs to be tokenized, and the module only deals with lowercased words!
```
import pickle
from nltk import word_tokenize


class DrugNorm():

    def __init__(self):
        pass

    # To use this function the files need to be stored in the same folder as the script under /obj_lex/
    def load_obj(self, name):
        with open('obj_lex/' + name + '.pkl', 'rb') as f:
            return pickle.load(f, encoding='latin1')

    def subset_drug_normalize_dict(self, msgs):
        drug_norm_dict = self.load_obj('drug_normalize_dict')
        # Subset the dictionary for the drug names actually used in the data
        alt_names_flat = [item for sublist in list(drug_norm_dict.values()) for item in sublist]
        set_drug = set(alt_names_flat)
        msgs_flat = [item for sublist in msgs for item in sublist]
        set_msgs = set(msgs_flat)
        inters_drug_msgs = set_drug.intersection(set_msgs)
        # Remove all words from the drug normalization subset that are
        # generic words in the CELEX, using a set operation
        lex_normal = self.load_obj('celex_lwrd_unique')
        lex_normal_set = set(lex_normal)
        inters_drug_msgs_remove = lex_normal_set.intersection(inters_drug_msgs)
        inters_drug_msgs_new = []
        for word in inters_drug_msgs:
            if word not in inters_drug_msgs_remove:
                inters_drug_msgs_new.append(word)
        # Keep only drug names longer than two characters
        inters_drug_msgs_new2 = []
        for word in inters_drug_msgs_new:
            if len(word) > 2:
                inters_drug_msgs_new2.append(word)
        drug_norm_dict_small = {}
        for key, value in drug_norm_dict.items():
            temp = []
            for word in value:
                if word in inters_drug_msgs_new2:
                    temp.append(word)
            drug_norm_dict_small[key] = temp
        # Remove all keys with an empty list
        list_of_kept_keys = []
        for key, value in drug_norm_dict_small.items():
            if value != []:
                list_of_kept_keys.append(key)
        drug_norm_subdict_small = {k: drug_norm_dict_small[k] for k in list_of_kept_keys}
        return drug_norm_subdict_small, inters_drug_msgs_new2

    # Normalization
    def drug_normalize(self, msgs):
        drug_norm_dict, inters_drug_msgs = self.subset_drug_normalize_dict(msgs)
        msgs2 = []
        total_cnt = []
        replaced = []
        replaced_with = []
        for post in msgs:
            cnt = 0
            for a, word in enumerate(post):
                if word in inters_drug_msgs:
                    for key, value in drug_norm_dict.items():
                        if word in value:
                            cnt += 1
                            replaced.append(word)
                            replaced_with.append(key)
                            post[a] = key
            total_cnt.append(cnt)
            msgs2.append(post)
        return msgs2, total_cnt, replaced, replaced_with


msgs = ['the drug imatinib causes nausea', 'paracetamol is good for headaches', 'ibuprofen helps to relieve']
msgs_tok = [word_tokenize(m) for m in msgs]
msgs2, total_cnt, replaced, replaced_with = DrugNorm().drug_normalize(msgs_tok)
print(msgs2)
```
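Since the pickled dictionary is not included here, the cell below is a minimal sketch of the simple-matching replacement step with made-up drug names (the real dictionary, as described above, maps a generic-name key to a list of alternative names):

```
# Hypothetical toy dictionary: generic name -> list of alternative names
drug_norm_dict = {
    "imatinib": ["gleevec", "glivec"],
    "paracetamol": ["tylenol", "panadol"],
}

# Invert the dictionary once so each alternative maps to its generic key
alt_to_generic = {alt: generic
                  for generic, alts in drug_norm_dict.items()
                  for alt in alts}

def normalize_post(tokens):
    """Replace any known alternative drug name with its generic key."""
    return [alt_to_generic.get(tok, tok) for tok in tokens]

print(normalize_post(["gleevec", "causes", "nausea"]))
# ['imatinib', 'causes', 'nausea']
```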
---
<center><img src="http://alacip.org/wp-content/uploads/2014/03/logoEscalacip1.png" width="500"></center>
<center> <h1>Course: Introduction to Python</h1> </center>
<br></br>
* Instructor: <a href="http://www.pucp.edu.pe/profesor/jose-manuel-magallanes/" target="_blank">Dr. José Manuel Magallanes, PhD</a> ([jmagallanes@pucp.edu.pe](mailto:jmagallanes@pucp.edu.pe))<br>
- Professor in the **Departamento de Ciencias Sociales, Pontificia Universidad Católica del Perú**.<br>
- Senior Data Scientist at the **eScience Institute** and Visiting Professor at the **Evans School of Public Policy and Governance, University of Washington**.<br>
- Catalyst Fellow, **Berkeley Initiative for Transparency in Social Sciences, UC Berkeley**.
## Part 6: Social networks in Python
Finally, let's see how to use Twitter data to analyze social networks:
<a id='part1'></a>
## 1. Calling the API
```
import json
# get the security info from file
keysAPI = json.load(open('data/keysAPI.txt','r'))
import tweepy
# recovering security info
consumer_key = keysAPI['consumer_key']
consumer_secret = keysAPI['consumer_secret']
access_token = keysAPI['access_token']
access_token_secret = keysAPI['access_token_secret']
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth,
                 retry_count=5,
                 retry_delay=10,
                 retry_errors=set([401, 404, 500, 503]),
                 wait_on_rate_limit=True,
                 wait_on_rate_limit_notify=True,
                 parser=tweepy.parsers.JSONParser())
```
## 2. Building the network
1. Let's use **networkx** to build the network:
```
import networkx as nx
amix = nx.DiGraph()
```
2. Let's specify the NODES of the network:
```
famosos=['pontifex_es','ernestosamperp','mbachelet','NicolasMaduro',
'mashirafael','lopezobrador_','realDonaldTrump',
'ppkamigo','evoespueblo','jairbolsonaro']
```
3. Let's create all the combinations:
```
import itertools
pares=itertools.combinations(famosos,2)
```
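To see what `itertools.combinations` produces, here is a quick illustration with made-up labels; for the ten handles above it yields 45 unordered pairs, each appearing exactly once:

```
import itertools

# Toy example: every unordered pair from a list of three labels
famosos_demo = ["a", "b", "c"]
pares_demo = list(itertools.combinations(famosos_demo, 2))
print(pares_demo)  # [('a', 'b'), ('a', 'c'), ('b', 'c')]

# With 10 handles, the number of pairs is 10*9/2 = 45
print(len(list(itertools.combinations(range(10), 2))))  # 45
```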
4. For each pair, let's see who follows whom:
```
for poli_1, poli_2 in pares:
    Amistad = api.show_friendship(source_screen_name=poli_1, target_screen_name=poli_2)
    realAmistad = Amistad['relationship']['source']['following'], Amistad['relationship']['target']['following']
    # they follow each other
    if realAmistad[0] and realAmistad[1]:
        amix.add_edge(poli_1, poli_2, color='r', weight=6)
        amix.add_edge(poli_2, poli_1, color='r', weight=6)
    # poli_1 follows poli_2
    if realAmistad[0] and not realAmistad[1]:
        amix.add_edge(poli_1, poli_2, color='grey', weight=2)
    # poli_2 follows poli_1
    if not realAmistad[0] and realAmistad[1]:
        amix.add_edge(poli_2, poli_1, color='grey', weight=2)
```
5. Let's save the network:
```
nx.write_gexf(amix, "amistades.gexf")
```
6. Let's load the file we just created:
```
from urllib.request import urlopen
data = urlopen('https://raw.githubusercontent.com/escuela-alacip/introPython/master/amistades.gexf')
laNet=nx.read_gexf(data)
```
## 3. Visualizing the network
```
# basic visualization:
import matplotlib.pyplot as plot
%matplotlib inline

plot.figure(figsize=(8,8))
nx.draw_kamada_kawai(laNet, arrows=True, with_labels=True)

# detailed visualization:
pos = nx.circular_layout(laNet)
edges = laNet.edges(data=True)
colors = [c['color'] for u, v, c in laNet.edges(data=True)]
weights = [c['weight'] for u, v, c in laNet.edges(data=True)]
plot.figure(figsize=(8,8))
nx.draw(laNet, pos, edgelist=list(laNet.edges()), edge_color=colors, width=weights, with_labels=True)
```
### Exploring the network
```
# number of nodes
len(laNet.nodes())

# number of edges
len(laNet.edges())

# Density:
# from 0 to 1, where 1 is a 'complete' network
nx.density(laNet)

# The clustering coefficient of a node measures how connected its immediate neighbors are.
# This is the average of that measure.
nx.average_clustering(laNet)
```
* **Random networks** have a *small shortest path* and a *small clustering coefficient*.
```
# Transitivity:
# how likely it is that two nodes that share a common neighbor are also connected:
nx.transitivity(laNet)

# Assortativity (degree):
# Close to 1 indicates that nodes tend to connect to the most popular nodes.
# Close to -1 indicates the opposite. Close to 0 indicates no assortativity.
nx.degree_assortativity_coefficient(laNet)
#Central nodes: degree
from operator import itemgetter
NodeInDegree=sorted(laNet.in_degree(), key=itemgetter(1),reverse=True)
NodeInDegree[:5]
NodeOutDegree=sorted(laNet.out_degree(), key=itemgetter(1),reverse=True)
NodeOutDegree[:5]
# Computing centrality measures:
degrI=nx.in_degree_centrality(laNet)
degrO=nx.out_degree_centrality(laNet)
clos=nx.closeness_centrality(laNet) # "speed" of access to the other nodes
betw=nx.betweenness_centrality(laNet) # acts as a "bridge" in the network
import pandas as pd # measures into a data frame:
Centrality=[ [famoso, degrI[famoso],degrO[famoso],clos[famoso],betw[famoso]] for famoso in laNet]
headers=['Famoso','InDegree','OutDegree','Closeness','Betweenness']
DFCentrality=pd.DataFrame(Centrality,columns=headers)
DFCentrality
```
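Before reading the centrality table, it may help to recall what in-degree and out-degree count. A tiny pure-Python sketch with toy edges (made-up names, not the Twitter handles above):

```
from collections import Counter

# Each edge (src, dst) means src follows dst
edges_demo = [("ana", "beto"), ("carla", "beto"), ("beto", "ana")]
in_degree = Counter(dst for _, dst in edges_demo)   # followers
out_degree = Counter(src for src, _ in edges_demo)  # accounts followed
print(in_degree["beto"], out_degree["beto"])  # 2 1
```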
_____
**SPONSORSHIP**:
* The development of these materials was made possible by a grant from the Berkeley Initiative for Transparency in the Social Sciences (BITSS) at the Center for Effective Global Action (CEGA) at the University of California, Berkeley.
<center>
<img src="https://www.bitss.org/wp-content/uploads/2015/07/bitss-55a55026v1_site_icon.png" style="width: 200px;"/>
</center>
* This course is sponsored by:
<center>
<img src="https://www.python.org/static/img/psf-logo@2x.png" style="width: 500px;"/>
</center>
**ACKNOWLEDGMENTS**
Dr. Magallanes thanks the Pontificia Universidad Católica del Perú for its support of his participation in the Escuela ALACIP.
<center>
<img src="https://dci.pucp.edu.pe/wp-content/uploads/2014/02/Logotipo_colores-290x145.jpg" style="width: 400px;"/>
</center>
The author acknowledges the support that the eScience Institute of the University of Washington has provided since 2015 for his research in Data Science.
<center>
<img src="https://escience.washington.edu/wp-content/uploads/2015/10/eScience_Logo_HR.png" style="width: 500px;"/>
</center>
<br>
<br>
---
```
import numpy as np
import sys
import os
import copy
from copy import deepcopy
import math
import random
import time
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.patches as mpatches
from matplotlib.collections import PatchCollection
from abc import ABC, abstractmethod
import PIL
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
import torch
import torchvision
import torchvision.models as torchmodels
import torch.nn.functional as F
import torch.utils.data
import openslide

list_pathstoadd = ["../"]
for path in list_pathstoadd:
    if path not in sys.path:
        sys.path.append(path)
import pydmed
from pydmed.utils.data import *
import pydmed.lightdl
from pydmed.lightdl import *

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)


# make dataset (section 1 of tutorial) ===================
rootdir = "../NonGit/Data/"
list_relativedirs = ["1.svs", "2.svs", "3.svs", "4.svs", "5.svs"]
list_relativedirs.sort()

# make a list of patients
list_patients = []
for fname in list_relativedirs:
    new_patient = Patient(
        int_uniqueid=list_relativedirs.index(fname),
        dict_records={
            "H&E": Record(rootdir, fname, {"resolution": "40x"}),
            "HER2-status": np.random.randint(0, 4)  # TODO: set real labels
        })
    list_patients.append(new_patient)

# make the dataset
dataset = pydmed.utils.data.Dataset("myHER2dataset", list_patients)


def otsu_getpoint_from_foreground(fname_wsi):
    # settings =======
    scale_thumbnail = 0.01
    width_targetpatch = 5000
    # extract the foreground =========================
    osimage = openslide.OpenSlide(fname_wsi)
    W, H = osimage.dimensions
    size_thumbnail = (int(scale_thumbnail*W), int(scale_thumbnail*H))
    pil_thumbnail = osimage.get_thumbnail(size_thumbnail)
    np_thumbnail = np.array(pil_thumbnail)
    np_thumbnail = np_thumbnail[:, :, 0:3]
    np_thumbnail = rgb2gray(np_thumbnail)
    thresh = threshold_otsu(np_thumbnail)
    background = (np_thumbnail > thresh) + 0.0
    foreground = 1.0 - background
    # apply the padding on the foreground
    w_padding_of_thumbnail = int(width_targetpatch * scale_thumbnail)
    foreground[0:w_padding_of_thumbnail, :] = 0
    foreground[-w_padding_of_thumbnail:, :] = 0
    foreground[:, 0:w_padding_of_thumbnail] = 0
    foreground[:, -w_padding_of_thumbnail:] = 0
    # select a random point =========================
    one_indices = np.where(foreground == 1.0)
    i_oneindices, j_oneindices = one_indices[0].tolist(), one_indices[1].tolist()
    n = random.choice(range(len(i_oneindices)))
    i_selected, j_selected = i_oneindices[n], j_oneindices[n]
    assert foreground[i_selected, j_selected] == 1
    i_selected_realscale, j_selected_realscale = \
        int(i_selected/scale_thumbnail), int(j_selected/scale_thumbnail)
    x, y = j_selected_realscale, i_selected_realscale
    return x, y


class WSIRandomBigchunkLoader(BigChunkLoader):
    @abstractmethod
    def extract_bigchunk(self, last_message_fromroot):
        '''
        Extract and return a bigchunk.
        Please note that in this function you have access to
        self.patient and self.const_global_info.
        '''
        # read `idx_bigchunk` from checkpoint =======
        checkpoint = self.get_checkpoint()
        if checkpoint is None:
            idx_bigchunk = 0
        else:
            idx_bigchunk = checkpoint["checkpoint_for_bigchunk"]
        # extract bigchunk =======
        wsi = self.patient.dict_records["H&E"]
        fname_wsi = wsi.rootdir + wsi.relativedir
        osimage = openslide.OpenSlide(fname_wsi)
        w, h = 2000, 2000
        W, H = osimage.dimensions
        x, y = self.get_bigchunk_position(idx_bigchunk, W, H)  # implemented below
        pil_bigchunk = osimage.read_region([x, y], 0, [w, h])
        np_bigchunk = np.array(pil_bigchunk)[:, :, 0:3]
        bigchunk = BigChunk(data=np_bigchunk,
                            dict_info_of_bigchunk={"x": x, "y": y},
                            patient=self.patient)
        return bigchunk

    def get_bigchunk_position(self, idx_bigchunk, W, H):
        theta = idx_bigchunk * 10.0 * math.pi / 180.0
        r = 10000.0
        x = int(W*0.5 + r*math.cos(theta))
        y = int(H*0.5 + r*math.sin(theta))
        return x, y


class WSIRandomSmallchunkCollector(SmallChunkCollector):
    def __init__(self, *args, **kwargs):
        # grab privates
        self.tfms_onsmallchunkcollection = \
            torchvision.transforms.Compose([
                torchvision.transforms.ToPILImage(),
                torchvision.transforms.Resize((224, 224)),
                torchvision.transforms.ColorJitter(brightness=0,
                                                   contrast=0,
                                                   saturation=0.5,
                                                   hue=[-0.1, 0.1]),
                torchvision.transforms.ToTensor(),
                torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                 std=[0.229, 0.224, 0.225])
            ])
        super(WSIRandomSmallchunkCollector, self).__init__(*args, **kwargs)

    @abstractmethod
    def extract_smallchunk(self, call_count, bigchunk, last_message_fromroot):
        '''
        Extract and return a smallchunk. Please note that in this function you
        have access to self.bigchunk, self.patient, self.const_global_info.
        Inputs:
            - bigchunk: the returned bigchunk.
        '''
        if call_count == 0:  # if the `SmallChunkCollector` has just started
            old_checkpoint = self.get_checkpoint()
            if old_checkpoint is None:
                # It is the first time that a `SmallChunkCollector`
                # is loaded for the `self.patient`
                new_checkpoint = {"checkpoint_for_bigchunk": 1}
                self.set_checkpoint(new_checkpoint)
            else:
                new_checkpoint = {"checkpoint_for_bigchunk":
                                  old_checkpoint["checkpoint_for_bigchunk"] + 1}
                self.set_checkpoint(new_checkpoint)
        W, H = bigchunk.data.shape[1], bigchunk.data.shape[0]
        w, h = 224, 224
        rand_x, rand_y = np.random.randint(0, W-w), np.random.randint(0, H-h)
        np_smallchunk = bigchunk.data[rand_y:rand_y+h, rand_x:rand_x+w, :]
        # apply the transformation ===========
        if self.tfms_onsmallchunkcollection is not None:
            toret = self.tfms_onsmallchunkcollection(np_smallchunk)
            toret = toret.cpu().detach().numpy()    # [3 x 224 x 224]
            toret = np.transpose(toret, [1, 2, 0])  # [224 x 224 x 3]
        else:
            toret = np_smallchunk
        # wrap in SmallChunk
        smallchunk = SmallChunk(data=toret,
                                dict_info_of_smallchunk={"x": rand_x, "y": rand_y},
                                dict_info_of_bigchunk=bigchunk.dict_info_of_bigchunk,
                                patient=bigchunk.patient)
        return smallchunk


def visualize_one_patient(patient, list_smallchunks):
    '''
    Given all smallchunks collected for a specific patient, this function
    should visualize the patient.
    Inputs:
        - patient: the patient under consideration, an instance of `utils.data.Patient`.
        - list_smallchunks: the list of all collected small chunks for the patient,
          a list whose elements are instances of `lightdl.SmallChunk`.
    '''
    # settings =======
    vis_scale = 0.01
    fname_wsi = patient.dict_records["H&E"].rootdir + patient.dict_records["H&E"].relativedir
    opsimage = openslide.OpenSlide(fname_wsi)
    opsimageW, opsimageH = opsimage.dimensions
    W, H = int(opsimageW*vis_scale), int(opsimageH*vis_scale)
    pil_thumbnail = opsimage.get_thumbnail((W, H))
    plt.ioff()
    fig, ax = plt.subplots(1, 2, figsize=(2*10, 10))
    ax[0].imshow(pil_thumbnail)
    ax[0].axis('off')
    ax[0].set_title("patient {}, H&E [{} x {}]."
                    .format(patient.int_uniqueid, opsimageW, opsimageH))
    ax = ax[1]
    ax.imshow(pil_thumbnail)
    ax.axis('off')
    print("Visualizing patient {} with {} smallchunks"
          .format(patient, len(list_smallchunks)))
    list_colors = ['lawngreen', 'cyan', 'gold', 'greenyellow']
    list_shownbigchunks = []
    for smallchunk in list_smallchunks:
        # show the bigchunk ================
        x = smallchunk.dict_info_of_bigchunk["x"]
        y = smallchunk.dict_info_of_bigchunk["y"]
        x, y = int(x*vis_scale), int(y*vis_scale)
        if [x, y] not in list_shownbigchunks:
            w, h = int(2000*vis_scale), int(2000*vis_scale)
            rect = patches.Rectangle((x, y), w, h, linewidth=1,
                                     linestyle="--",
                                     edgecolor=random.choice(list_colors),
                                     facecolor='none', fill=False)
            ax.add_patch(rect)
            list_shownbigchunks.append([x, y])
        # get x, y, w, h ======
        x = smallchunk.dict_info_of_smallchunk["x"]*vis_scale + \
            smallchunk.dict_info_of_bigchunk["x"]*vis_scale
        y = smallchunk.dict_info_of_smallchunk["y"]*vis_scale + \
            smallchunk.dict_info_of_bigchunk["y"]*vis_scale
        x, y = int(x), int(y)
        w, h = int(224*vis_scale), int(224*vis_scale)
        x_centre, y_centre = int(x + 0.5*w), int(y + 0.5*h)
        # make and show the marker =====
        circle = patches.Circle((x_centre, y_centre), radius=w*0.05,
                                facecolor=random.choice(list_colors),
                                fill=True)
        ax.add_patch(circle)
    plt.title("patient {} (extracted big/small chunks)".format(patient.int_uniqueid), fontsize=20)
    plt.savefig("Sample_2_Output/patient_{}.eps"
                .format(patient.int_uniqueid), bbox_inches='tight', format='eps')
    plt.close(fig)


# make dataloader ==================
tfms = torchvision.transforms.ToTensor()
const_global_info = {
    "num_bigchunkloaders": 3,
    "maxlength_queue_smallchunk": 200,
    "maxlength_queue_lightdl": 10000,
    "interval_resched": 5,
    "core-assignment": {"lightdl": None,
                        "smallchunkloaders": None,
                        "bigchunkloaders": None}
}
dataloader = LightDL(dataset=dataset,
                     type_bigchunkloader=WSIRandomBigchunkLoader,
                     type_smallchunkcollector=WSIRandomSmallchunkCollector,
                     const_global_info=const_global_info,
                     batch_size=10, tfms=tfms)

# build the model and optimizer ====================
model = torchmodels.resnet18(pretrained=True)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = torch.nn.CrossEntropyLoss()
model.to(device)
model.train()
print("")

# train the model ============================
dataloader.start()
time.sleep(20)
tstart = time.time()
batchcount = 0
while True:
    x, list_patients, list_smallchunks = dataloader.get()
    y = torch.from_numpy(np.array([patient.dict_records['HER2-status']
                                   for patient in list_patients])).to(device)
    batchcount += 1
    optimizer.zero_grad()
    netout = model(x.to(device))
    loss = criterion(netout, y)
    loss.backward()
    optimizer.step()  # was missing: actually apply the gradient update
    if (batchcount % 10) == 0:
        print("************* batchcount = {} ************".format(batchcount))
    if batchcount > 10000:
        dataloader.pause_loading()
        break

dataloader.visualize(visualize_one_patient)
```
---
## Manual publication DB insertion from raw text using syntax features
### Publications and conferences of Dr. POP F. Horia, Professor
#### http://www.cs.ubbcluj.ro/~hfpop
#### Text copied from professor's dynamic webpage.
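Each entry in the raw text below spans three lines: the title, the authors, and a venue line ending with the citation count and the year. The cell below is a hedged sketch of how one entry could be split into a record using those syntax features; it is an illustration with one sample entry, not the notebook's actual parser:

```
import re

# One sample entry: title / authors / venue + citation count + year
sample = """A new fuzzy regression algorithm
HF Pop, C Sârbu
Analytical chemistry 68 (5), 771-778 57 1996"""

def parse_entry(lines):
    title, authors, venue_line = lines
    # the venue line ends with "<citations> <year>"
    m = re.match(r"^(.*)\s+(\d+)\s+(\d{4})$", venue_line)
    return {"title": title,
            "authors": authors.split(", "),
            "venue": m.group(1),
            "citations": int(m.group(2)),
            "year": int(m.group(3))}

entry = parse_entry(sample.split("\n"))
print(entry["year"], entry["citations"])  # 1996 57
```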
```
text = """
Principal component analysis versus fuzzy principal component analysis: a case study: the quality of Danube water (1985–1996)
C Sarbu, HF Pop
Talanta 65 (5), 1215-1220 185 2005
Robust Fuzzy Principal Component Analysis (FPCA). A Comparative Study Concerning Interaction of Carbon− Hydrogen Bonds with Molybdenum− Oxo Bonds
TR Cundari, C Sârbu, HF Pop
Journal of chemical information and computer sciences 42 (6), 1363-1369 61 2002
A new fuzzy regression algorithm
HF Pop, C Sârbu
Analytical chemistry 68 (5), 771-778 57 1996
Introducere în algoritmi
TH Cormen, CE Leiserson, RR Rivest, HF Pop, S Motogna, PA Blaga
Computer Libris Agora 38 2004
A fuzzy classification of the chemical elements
HF Pop, C Sârbu, O Horowitz, D Dumitrescu
Journal of chemical information and computer sciences 36 (3), 465-482 32 1996
Fuzzy soft-computing methods and their applications in chemistry
C Sârbu, HF Pop
Reviews in Computational Chemistry 20, 249 26 2004
A fuzzy divisive hierarchical clustering algorithm for the optimal choice of sets of solvent systems
D Dumitrescu, C Sărbu, H Pop
Analytical letters 27 (5), 1031-1054 26 1994
Classical and fuzzy principal component analysis of some environmental samples concerning the pollution with heavy metals
HF Pop, JW Einax, C Sârbu
Chemometrics and Intelligent Laboratory Systems 97 (1), 25-32 25 2009
Structural Analysis of Transition Metal β-X Substituent Interactions. Toward the Use of Soft Computing Methods for Catalyst Modeling
TR Cundari, J Deng, HF Pop, C Sârbu
Journal of chemical information and computer sciences 40 (4), 1052-1061 24 2000
A study of Roman pottery (terra sigillata) using hierarchical fuzzy clustering
HF Pop, D Dumitrescu, C Sǎrbu
Analytica chimica acta 310 (2), 269-279 22 1995
Principal components analysis based on a fuzzy sets approach
H Pop
Mij 1 (2), 1 21 2001
Learning grammar weights using genetic algorithms
I Schröder, HF Pop, W Menzel, KA Foth
IN PROCEEDINGS RECENT ADVANCES IN NATURAL LANGUAGE PROCESSING, RANLP-2001 20 2001
Fuzzy clustering analysis of the first 10 MEIC chemicals
C Sârbu, HF Pop
Chemosphere 40 (5), 513-520 20 2000
A new component selection algorithm based on metrics and fuzzy clustering analysis
C Şerban, A Vescan, HF Pop
International Conference on Hybrid Artificial Intelligence Systems, 621-628 16 2009
The fuzzy hierarchical cross-clustering algorithm. Improvements and comparative study
HF Pop, C Sârbu
Journal of chemical information and computer sciences 37 (3), 510-516 16 1997
Fuzzy robust estimation of central location
C Sârbu, HF Pop
Talanta 54 (1), 125-130 15 2001
A Fuzzy cross-classification of the chemical elements, based on their physical, chemical, and structural features
C Sârbu, O Horowitz, HF Pop
Journal of chemical information and computer sciences 36 (6), 1098-1108 15 1996
Fuzzy hierarchical cross-classification of Greek muds
D Dumitrescu, HF Pop, C Sarbu
Journal of chemical information and computer sciences 35 (5), 851-857 15 1995
GFBA: a biclustering algorithm for discovering value-coherent biclusters
X Fei, S Lu, HF Pop, LR Liang
International Symposium on Bioinformatics Research and Applications, 1-12 14 2007
Degenerate and non-degenerate convex decomposition of finite fuzzy partitions—I
D Dumitrescu, HF Pop
Fuzzy sets and systems 73 (3), 365-376 14 1995
Data analysis with fuzzy sets: a short survey
HF Pop
Studia Universitatis Babes-Bolyai, Series Informatica 49 (2), 111-122 12 2004
A study of dependence of software attributes using data analysis techniques
M Frentiu, HF Pop
Studia Univ. Babes-Bolyai, Series Informatica 2, 53-66 12 2002
Selecting and optimally combining the systems of solvents in the thin film cromatography using the fuzzy sets theory
C Sârbu, D Dumitrescu, HF Pop
Rev. Chim.(Bucharest) 44, 450-459 12 1993
Software quality assessment using a fuzzy clustering approach
C Serban, HF POP
Studia Universitas Babes-Bolyai, Seria Informatica 53 (2), 27-38 9 2008
Evolutionary algorithms for the component selection problem
A Vescan, C Grosan, HF Pop
2008 19th International workshop on database and expert systems applications … 8 2008
Tehnici de Inteligenta Artificiala. Abordari bazate pe Agenti Inteligenti
G Serban, HF Pop
Ed. Mediamira, Cluj-Napoca 8 2004
Tehnici de Inteligenta Artificiala. Abordari bazate pe Agenti Inteligenti
G Serban, HF Pop
Ed. Mediamira, Cluj-Napoca 8 2004
An experiment on incremental analysis using robust parsing techniques
KA Foth, W Menzel, HF Pop, I Schroder
COLING 2000 Volume 2: The 18th International Conference on Computational … 8 2000
Intelligent Systems in Classification Problems
HF Pop
Ph. D. thesis," Babeş-Bolyai" University, Faculty of Mathematics and … 8 1995
A conceptual framework for component-based system metrics definition
C Şerban, A Vescan, HF Pop
9th RoEduNet IEEE International Conference, 73-78 7 2010
Intelligent disease identification based on discriminant analysis of clinical data
C Sarbu, HF Pop, R Elekes, G Covaci
Rev Chimie 59, 1237-1241 7 2008
The component selection problem as a constraint optimization problem
A Vescan, HF Pop
Software Engineering Techniques in Progress, Wroclaw University of … 7 2008
Learning weights for a natural language grammar using genetic algorithms
I Schröder, HF Pop, W Menzel, KA Foth
7 2002
Assessment of heart disease using fuzzy classification techniques
HF Pop, TL Pop, C Sârbu
TheScientificWorldJournal 1, 369-390 7 2001
A new fuzzy discriminant analysis method
HF Pop, C Sârbu
natural science (chemometrics, environmental sciences, biology, geology, etc … 6 2013
An experiment in incremental parsing using weighted constraints
K Foth, W Menzel, HF Pop, I Schröder
Proceedings of the 18th International Conference on Computational … 6 2000
Degenerate and non-degenerate convex decomposition of finite fuzzy partitions (II)
D Dumitrescu, HF Pop
Fuzzy sets and systems 96 (1), 111-118 6 1998
A formal model for component-based system assessment
C Serban, A Vescan, HF Pop
2010 Second International Conference on Computational Intelligence … 5 2010
An adaptive fuzzy agent clustering algorithm for search engines
RD Gaceanu, HF Pop
MACS2010: Proceedings of the 8th Joint Conference on Mathematics and … 5 2010
A study of licence examination results using Fuzzy Clustering techniques
M Frentiu, HF Pop
Babes-Bolyai University, Faculty of Mathematics and Computer Science … 5 2001
Fuzzy classification of the first 10 MEIC
C Sârbu, H Pop
Chemosphere 40 (513), e520 5 2000
Fuzzy regression. II. Outliers cases
HF Pop, C Sârbu
Revista de Chimie 48 (10-11), 888-891 5 1997
SAADI: Software for fuzzy clustering and related fields
HF Pop
Studia Universitatis Babes-Bolyai, Series Informatica 41 (1), 69-80 5 1996
Recognizing Emotions in Short Texts.
O Serban, A Pauchet, HF Pop
ICAART (1), 477-480 4 2012
An incremental ASM-based fuzzy clustering algorithm
RD Gaceanu, HF Pop
Informatics, 198-204 4 2011
A context-aware ASM-based clustering algorithm
RD GACEANU, HF Pop
Studia Universitatis Babes-Bolyai Series Informatica 56 (2), 55-61 4 2011
Automatic configuration for the component selection problem
A Vescan, HF Pop
Proceedings of the 5th international conference on Soft computing as … 4 2008
Component selection based on fuzzy clustering analysis
C Serban, A Vescan, HF Pop
Creative Mathematics and Informatics 17 (3), 505-510 4 2008
On Individual Projects in Software Engineering Education
M Frentiu, I Lazar, HF Pop
Studia Universitatis Babes-Bolyai Series Informatica 48 (2), 83-94 4 2003
Development of robust fuzzy regression techniques using a fuzzy clustering approach
HF Pop
Pure Mathematics and Applications 14 (3), 221-232 4 2003
Fuzzy classification and comparison of some Romanian and American coals
C Sârbu, HF Pop
MATCH-Communications in Mathematical and in Computer Chemistry, 387-400 4 2001
Fuzzy regression. 1. The heteroscedastic case
C Sârbu, H Pop
REVISTA DE CHIMIE 48 (8), 732-737 4 1997
DISCOVERING PATTERNS IN DATA USING ORDINAL DATA ANALYSIS.
AM COROIU, RD GĂCEANU, HF POP
Studia Universitatis Babes-Bolyai, Informatica 61 (1) 3 2016
Prognostic Factors in Liver Failure in Children by Discriminant Analysis of Clinical Data. A Chemometric Approach
HF Pop, C Sarbu, A Stefanescu, A Bizo, TL Pop
STUDIA UNIVERSITATIS BABES-BOLYAI CHEMIA 60 (2), 101-108 3 2015
Constraint optimization-based component selection problem
A Vescan, HF Pop
Studia Univ, Babes-Bolyai, Informatica 53 (2) 3 2008
Education for engineering students-The case of logic
H Pop, L Pop
Proceedings 6th International Conference on Electromechanical and Power … 3 2007
Tracking mistakes in software measurement using fuzzy data analysis
HF Pop, M Frenţiu
The 4-th International Conference RoEduNet Romania (Sovata, Târgu-Mures, 150-157 3 2005
Sisteme inteligente în probleme de clasificare
HF Pop
Mediamira 3 2004
Programare în inteligenţa artificială: LISP si PROLOG
HF Pop, G Şerban
Editura Albastră 3 2003
Rational Classification of the Chemical Elements
O Horovitz, C Sârbu, HF Pop
Dacia Publisher House, Cluj-Napoca 3 2000
Classification procedure for selectivity control in acrylonitrile electroreduction
DA Lowy, D Dumitrescu, L Oniciu, HF Pop, S Kiss-Szetsi
The 7th International Forum Process Analytical Chemistry (Process Analysis … 3 1993
Improving movement analysis in physical therapy systems based on kinect interaction
AD Călin, H F. Pop, R F. Boian
Proceedings of the 31st International BCS Human Computer Interaction … 2 2017
A fuzzy incremental clustering approach to hybrid data discovery
RD Gaceanu, HF Pop
Acta electrotechnica et informatica 12 (2), 16 2 2012
An incremental approach to the set covering problem
RD Gaceanu, HF Pop
Studia Universitatis Babes-Bolyai Series Informatica 47 (2), 61-72 2 2012
AP041 Joining the EuReCA–The Romanian Registry on Cardiac arrest–a year later
H Sabau, O Tudorache, H Pop, V Georgescu, V Strambu, I Dimitriu, ...
Resuscitation 82, S19 2 2011
A fuzzy clustering algorithm for dynamic environments
RD Gaceanu, HF POP
KEPT2011: Knowledge Engineering Principles and Techniques, Selected Papers … 2 2011
Romanian registry on cardiac arrest—A piece in the puzzle-Romanian contribution in the EuReCA project
V Georgescu, H Pop, O Tudorache, H Sabau, C Ciontu, I Dimitriu, ...
Resuscitation 81 (2), S39 2 2010
Effort Estimation by Analogy based on Soft Computing Methods, KEPT2009: Knowledge Engineering: Principles and Techniques, Selected Papers
HF Pop, M Frenţiu
Cluj University Press, Cluj-Napoca 2 2009
A New Component Selection Algorithm Based on Metrics and Fuzzy Clustering
C Serban, A Vescan, HF Pop
Creative Mathematics and Informatics 1 (3), 505-510 2 2009
Fundamentals of Programming
M Frenţiu, HF Pop
Cluj University Press 2 2006
Improving Virtual Team Performance: An Empirical Approach
D Radoiu, C Enachescu, HF Pop
A research paper of Sysgenic Sourcing, Available at sourcing. sysgenic. com … 2 2006
Supervised fuzzy classifiers
HF Pop
Studia Universitatis Babes-Bolyai, Series Mathematica 40 (3), 89-100 2 1995
OPTIMUM SELECTIONS AND COMBINATION OF SOLVENT SYSTEMS IN THIN-LAYER CHROMATOGRAPHY, USING THE FUZZY SET-THEORY
C Sârbu, D Dumitrescu, H Pop
Revista de Chimie 44 (5), 450-459 2 1993
Preliminary measurements in identifying design flaws
C SERBAN, A VESCAN, HF POP
Studia Universitatis Babes-Bolyai, Series Informatica 62 (1), 60-74 1 2017
AN AGENT BASED APPROACH FOR PARALLEL CONSTRAINT VERIFICATION
RD Gaceanu, HF Pop, SA Sotoc
Studia Universitatis Babes-Bolyai, Series Informatica 58 (3), 5-16 1 2013
An agent based approach for parallel constraint verification
RD Gaceanu, HF Pop, SA SOTOC
Studia Universitatis Babes-Bolyai Series Informatica 58 (3), 5-16 1 2013
Stereomatching using radiometric invariant measures
A Miron, S Ainouz, A Rogozan, A Bensrhair, HF POP
UNIVERSITATIS BABEŞ-BOLYAI INFORMATICA, 91 1 2011
Improving similarity join algorithms using fuzzy clustering technique
L Tan, F Fotouhi, W Grosky, HF Pop, N Mouaddib
2009 IEEE International Conference on Data Mining Workshops, 545-550 1 2009
OVERVIEW OF FUZZY METHODS FOR EFFORT ESTIMATION BY ANALOGY.
M Frenţiu, HF Pop
Studia Universitatis Babes-Bolyai, Informatica 1 2009
Lighting quality-component of indoor environment
F Pop, HF Pop, M Pop
LUX Eur, 499-506 1 2009
Applications of principal components methods
HF Pop, M Frentiu
2008 First International Conference on Complexity and Intelligence of the … 1 2008
Programming Fundamentals
M Frenţiu, HF Pop, G Şerban
Presa Universitară Clujeană 1 2006
Distance Learning and Supporting Tools at Babeş-Bolyai University
FM Boian, RF Boian, A Vancea, HF Pop
1
CHARACTERIZATION AND CLASSIFICATION OF MEDICINAL PLANT EXTRACTS ACCORDING TO THEIR ANTIOXIDANT ACTIVITY USING HIGH-PERFORMANCE LIQUID CHROMATOGRAPHY AND MULTIVARIATE ANALYSIS.
IM Simion, AC MOȚ, RD GĂCEANU, HF Pop, C Sarbu
Studia Universitatis Babes-Bolyai, Chemia 65 (1) 2020
A Comparison Study of Similarity Measures in Rough Sets Clustering
A Szederjesi-Dragomir, RD Găceanu, HF Pop, C Sârbu
2019 IEEE 15th International Scientific Conference on Informatics, 000037-000042 2019
A Machine Learning Perspective for Order Reduction in Electrical Motors Modeling
M Nutu, HF Pop, C Martis, SI Cosman, AM Nicorici
2019 21st International Symposium on Symbolic and Numeric Algorithms for … 2019
Principal Component Analysis for Computation of the Magnetization Characteristics of Synchronous Reluctance Machine
M Nutu, R Martis, HF Pop, C Martis
2018 AEIT International Annual Conference, 1-6 2018
SPECTROPHOTOMETRIC CHARACTERIZATION OF ROUMANIAN MEDICINAL HERBS ASSISTED BY ROBUST CHEMOMETRICS EXPERTISE
IM Simion, HF POPb, C Sarbu
Rev. Roum. Chim 63 (5-6), 489-496 2018
The Best Writing on Mathematics 2015
HF Pop
STUDIA UNIVERSITATIS BABES-BOLYAI MATHEMATICA 61 (1), 123-124 2016
PROGNOSTIC FACTORS IN LIVER FAILURE IN CHILDREN BY DISCRIMINANT ANALYSIS OF CLINICAL DATA. A CHEMOMETRIC APPROACH.
C SÂRBU, A BIZO, TL POP, HF POP, ANA ŞTEFANESCU
Studia Universitatis Babes-Bolyai, Chemia 60 2015
The Best Writing on Mathematics 2014
HF Pop
STUDIA UNIVERSITATIS BABES-BOLYAI MATHEMATICA 59 (3), 393-394 2014
Medical procedure breaches detection using a fuzzy clustering approach
R Găceanu, H Pop
Open Computer Science 4 (3), 127-140 2014
An incremental clustering approach to the set covering problem
RD Gaceanu, HF Pop
Zoltán Csörnyei (Ed.), 45 2012
Automatic criteria-based configuration for the component selection problem
A Vescan, HF Pop
International Journal of Computer Information Systems and Industrial … 2012
Recent developments in fuzzy statistical analysis
HF Pop
MaCS’10, 47 2010
PROCESSING ECG DATA USING MULTIVARIATE DATA ANALYSIS
MV PUŞCĂ, HF POP, NM ROMAN, V IANCU
ACADEMY OF ROMANIAN SCIENTISTS, 23 2010
M. Effort estimation by analogy based on soft computing methods
HF POP, M FRENTIU
KEPT 2009 International Conference Knowledge Engineering Principles and … 2009
Knowledge Engineering: Principles and Techniques: KEPT 2009: Cluj-Napoca, July 2-4, 2009
M Frențiu, HF Pop
Cluj University Press 2009
Iluminat eficient energetic în locuinţe
F Pop, D Beu, HF Pop, C Ciugudeanu
Revista Română de Informatică şi Automatică 18 (3), 101-112 2008
A Tutorial on Object-Oriented Functional Programming
HF Pop
Central European Functional Programming School, 228-249 2007
ON SOFTWARE ATTRIBUTES RELATIONSHIP USING A NEW FUZZY C-BIPARTITIONING METHOD
HF Pop, M Frentiu
Studia Universitatis Babes-Bolyai, Informatica Special Issue, 219-226 2007
Management of web pages using XML documents
L T ÂMBULEA, HF POP
Studia Universitatis Babes-Bolyai, Informatica Special Issue, 236-243 2007
Desired employment/Occupational field
N Italian, G Male
Cell 39, 081678652 2003
Appraisal of indoor lighting systems quality
M POP, HF POP, F POP
Ingineria Iluminatului, 37 2001
Papers from the 1999 Symposium on Mathematical Chemistry, Duluth, MN, May 1999-MOLECULAR MODELING-Structural Analysis of Transition Metal bX Substituent Interactions. Toward …
TR Cundari, J Deng, HF Pop, C Sarbu
Journal of Chemical Information and Computer Sciences 40 (4), 1052-1061 2000
Recognition of the forms applied to chemical elements
O Horowitz, C Sarbu, HF Pop
REVISTA DE CHIMIE 51 (1), 17-29 2000
REGRESIE FUZZY. II. CAZUL PUNCTELOR EXTREME (OUTLIERS)
HF POP, C SARBU
Revista de chimie 48 (10-11), 888-891 1997
REGRESIE FUZZY. I. CAZUL HETEROSCEDASTIC
C SARBU, H POP
Revista de chimie 48 (8), 732-737 1997
Desired employment Occupational field
F Horia
Education and training 1995 1992
KEPT2013: THE FOURTH INTERNATIONAL CONFERENCE ON KNOWLEDGE ENGINEERING, PRINCIPLES AND TECHNIQUES
M FRENTIU, HF POP, S MOTOGNA
UNIVERSITATIS BABEŞ-BOLYAI INFORMATICA, 5
Object-oriented logic programming
HF Pop, MM Dogaru
LSD–Lighting Systems Desing–un program pentru proiectarea sistemelor de iluminat
F Horia, POP Florin
SAADI: Software for Fuzzy Clustering and Related Fields
F Horia
Residential Energy Efficient Lighting
POP Florin, F Horia
A guide for writing a scientific paper
M Frenţiu, HF Pop
Metode de recunoastere a formelor bazate pe agenti
UBB Îndrumator, HF Pop
THE FIRST INTERNATIONAL CONFERENCE ON KNOWLEDGE ENGINEERING PRINCIPLES AND TECHNIQUES (KEPT 2007)
D TATAR, HF Pop, M FRENTIU, D Dumitrescu
COMMON MISTAKES IN WRITING A SCIENTIFIC PAPER
M FRENTIU, HF POP
"""
mylines = []
ctr = 0
title = ""
authors = ""
affiliations = ""
date = ""
for line in text.split('\n')[1:]:
    # print(ctr, line)
    if ctr == 0:
        title = line
    elif ctr == 1:
        authors = line
    elif ctr == 2:
        affiliations = line.split('\t')[0]
        date = line.split('\t')[-1]
    ctr += 1
    if ctr == 3:
        mylines.append((title, authors, affiliations, date))
        print(mylines[-1])
        ctr = 0
        title = ""
        authors = ""
        affiliations = ""
        date = ""
for i, paper in enumerate(mylines):
    print(i, paper[0])
errors_index = [113, 111]
for i, paper in enumerate(mylines):
    if i in errors_index:
        print(i, paper)
        #mylines[i][0] = mylines[i][1]
```
# DB Storage (TODO)
Time to store the entries in the publications DB table (staged first via the `publications_cache` table).

```
import mariadb
import json
with open('../credentials.json', 'r') as crd_json_fd:
json_text = crd_json_fd.read()
json_obj = json.loads(json_text)
credentials = json_obj["Credentials"]
username = credentials["username"]
password = credentials["password"]
table_name = "publications_cache"
db_name = "ubbcluj"
mariadb_connection = mariadb.connect(user=username, password=password, database=db_name)
mariadb_cursor = mariadb_connection.cursor()
for paper in mylines:
    title = ""
    authors = ""
    pub_date = ""
    affiliations = ""
    # Fields may be missing (None), in which case .lstrip() raises AttributeError
    try:
        title = paper[0].lstrip()
    except AttributeError:
        pass
    try:
        authors = paper[1].lstrip()
    except AttributeError:
        pass
    try:
        affiliations = paper[2].lstrip()
    except AttributeError:
        pass
    try:
        pub_date = paper[3].lstrip()
        pub_date = str(pub_date) + "-01-01"
        if len(pub_date) != 10:  # not a valid YYYY-MM-DD date
            pub_date = ""
    except AttributeError:
        pass
    insert_string = "INSERT INTO {0} SET ".format(table_name)
    insert_string += "Title=\'{0}\', ".format(title)
    insert_string += "ProfessorId=\'{0}\', ".format(5)
    if pub_date != "":
        insert_string += "PublicationDate=\'{0}\', ".format(str(pub_date))
    insert_string += "Authors=\'{0}\', ".format(authors)
    insert_string += "Affiliations=\'{0}\' ".format(affiliations)
    print(insert_string)
    print(paper)
    try:
        mariadb_cursor.execute(insert_string)
    except mariadb.ProgrammingError as pe:
        print("Error")
        raise pe
    except mariadb.IntegrityError:
        # duplicate entry; skip it
        continue
mariadb_connection.close()
```
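A hardening note: building the `INSERT` by string formatting breaks on titles that contain quotes and is injection-prone. A parameterized query lets the driver do the quoting. The sketch below uses `sqlite3` so it runs standalone; the `mariadb` cursor accepts the same qmark (`?`) placeholder style. Table and column names mirror the strings used above and are illustrative only.

```python
import sqlite3

# Self-contained demo: sqlite3 stands in for the mariadb connection here.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE publications_cache "
    "(Title TEXT, ProfessorId INTEGER, PublicationDate TEXT, "
    "Authors TEXT, Affiliations TEXT)"
)
# The driver escapes values itself, so titles containing quotes are safe.
row = ("A title with 'quotes'", 5, "2020-01-01", "HF Pop", "UBB")
cur.execute("INSERT INTO publications_cache VALUES (?, ?, ?, ?, ?)", row)
print(cur.execute("SELECT Title FROM publications_cache").fetchone()[0])
```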
# Conclusion
### In the end, the DB only required about one manual modification with this code.
The entries were first stored in a cache table (a duplicate of the main table), reviewed there, and then inserted into the main table.

| github_jupyter |
# Overview
This notebook contains all experiment results exhibited in our paper.
```
%matplotlib inline
import glob
import numpy as np
import pandas as pd
import json
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib
sns.set(style='white')
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
from tqdm.auto import tqdm
from joblib import Parallel, delayed
def func(x, N=80):
    ret = x.ret.copy()
    x = x.rank(pct=True)
    x['ret'] = ret
    diff = x.score.sub(x.label)
    r = x.nlargest(N, columns='score').ret.mean()
    r -= x.nsmallest(N, columns='score').ret.mean()
    return pd.Series({
        'MSE': diff.pow(2).mean(),
        'MAE': diff.abs().mean(),
        'IC': x.score.corr(x.label),
        'R': r
    })
ret = pd.read_pickle("data/ret.pkl").clip(-0.1, 0.1)
def backtest(fname, **kwargs):
    pred = pd.read_pickle(fname).loc['2018-09-21':'2020-06-30']  # test period
    pred['ret'] = ret
    dates = pred.index.unique(level=0)
    res = Parallel(n_jobs=-1)(delayed(func)(pred.loc[d], **kwargs) for d in dates)
    res = {
        dates[i]: res[i]
        for i in range(len(dates))
    }
    res = pd.DataFrame(res).T
    r = res['R'].copy()
    r.index = pd.to_datetime(r.index)
    r = r.reindex(pd.date_range(r.index[0], r.index[-1])).fillna(0)  # the paper uses 365 calendar days
    return {
        'MSE': res['MSE'].mean(),
        'MAE': res['MAE'].mean(),
        'IC': res['IC'].mean(),
        'ICIR': res['IC'].mean() / res['IC'].std(),
        'AR': r.mean() * 365,
        'AV': r.std() * 365 ** 0.5,
        'SR': r.mean() / r.std() * 365 ** 0.5,
        'MDD': (r.cumsum().cummax() - r.cumsum()).max()
    }, r
def fmt(x, p=3, scale=1, std=False):
    _fmt = '{:.%df}' % p
    string = _fmt.format((x.mean() if not isinstance(x, (float, np.floating)) else x) * scale)
    if std and len(x) > 1:
        string += ' (' + _fmt.format(x.std() * scale) + ')'
    return string
def backtest_multi(files, **kwargs):
    res = []
    pnl = []
    for fname in files:
        metric, r = backtest(fname, **kwargs)
        res.append(metric)
        pnl.append(r)
    res = pd.DataFrame(res)
    pnl = pd.concat(pnl, axis=1)
    return {
        'MSE': fmt(res['MSE'], std=True),
        'MAE': fmt(res['MAE'], std=True),
        'IC': fmt(res['IC']),
        'ICIR': fmt(res['ICIR']),
        'AR': fmt(res['AR'], scale=100, p=1) + '%',
        'VR': fmt(res['AV'], scale=100, p=1) + '%',
        'SR': fmt(res['SR']),
        'MDD': fmt(res['MDD'], scale=100, p=1) + '%'
    }, pnl
```
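As a quick sanity check on the max-drawdown expression used in `backtest` (the same cumulative-sum equity construction, with made-up returns):

```python
import pandas as pd

# Toy daily returns; equity is their cumulative sum, as in backtest()
r = pd.Series([0.01, -0.02, 0.005, -0.01])
equity = r.cumsum()
mdd = (equity.cummax() - equity).max()  # largest peak-to-trough drop
print(round(mdd, 4))  # 0.025
```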
# Preparation
You can prepare the source data for the backtest code as follows:
1. Linear: see Qlib examples
2. LightGBM: see Qlib examples
3. MLP: see Qlib examples
4. SFM: see Qlib examples
5. ALSTM: `qrun` configs/config_alstm.yaml
6. Transformer: `qrun` configs/config_transformer.yaml
7. ALSTM+TRA: `qrun` configs/config_alstm_tra_init.yaml && `qrun` configs/config_alstm_tra.yaml
8. Transformer+TRA: `qrun` configs/config_transformer_tra_init.yaml && `qrun` configs/config_transformer_tra.yaml
```
exps = {
'Linear': ['output/Linear/pred.pkl'],
'LightGBM': ['output/GBDT/lr0.05_leaves128/pred.pkl'],
'MLP': glob.glob('output/search/MLP/hs128_bs512_do0.3_lr0.001_seed*/pred.pkl'),
'SFM': glob.glob('output/search/SFM/hs32_bs512_do0.5_lr0.001_seed*/pred.pkl'),
'ALSTM': glob.glob('output/search/LSTM_Attn/hs256_bs1024_do0.1_lr0.0002_seed*/pred.pkl'),
'Trans.': glob.glob('output/search/Transformer/head4_hs64_bs1024_do0.1_lr0.0002_seed*/pred.pkl'),
'ALSTM+TS':glob.glob('output/LSTM_Attn_TS/hs256_bs1024_do0.1_lr0.0002_seed*/pred.pkl'),
'Trans.+TS':glob.glob('output/Transformer_TS/head4_hs64_bs1024_do0.1_lr0.0002_seed*/pred.pkl'),
'ALSTM+TRA(Ours)': glob.glob('output/search/finetune/LSTM_Attn_tra/K10_traHs16_traSrcLR_TPE_traLamb2.0_hs256_bs1024_do0.1_lr0.0001_seed*/pred.pkl'),
'Trans.+TRA(Ours)': glob.glob('output/search/finetune/Transformer_tra/K3_traHs16_traSrcLR_TPE_traLamb1.0_head4_hs64_bs512_do0.1_lr0.0005_seed*/pred.pkl')
}
res = {
name: backtest_multi(exps[name])
for name in tqdm(exps)
}
report = pd.DataFrame({
k: v[0]
for k, v in res.items()
}).T
report
# print(report.to_latex())
```
# RQ1
Case study
```
df = pd.read_pickle('output/search/finetune/Transformer_tra/K3_traHs16_traSrcLR_TPE_traLamb0.0_head4_hs64_bs512_do0.1_lr0.0005_seed1000/pred.pkl')
code = 'SH600157'
date = '2018-09-28'
lookbackperiod = 50
prob = df.iloc[:, -3:].loc(axis=0)[:, code].reset_index(level=1, drop=True).loc[date:].iloc[:lookbackperiod]
pred = df.loc[:,["score_0","score_1","score_2","label"]].loc(axis=0)[:, code].reset_index(level=1, drop=True).loc[date:].iloc[:lookbackperiod]
e_all = pred.iloc[:,:-1].sub(pred.iloc[:,-1], axis=0).pow(2)
e_all = e_all.sub(e_all.min(axis=1), axis=0)
e_all.columns = [r'$\theta_%d$'%d for d in range(1, 4)]
prob = pd.Series(np.argmax(prob.values, axis=1), index=prob.index).rolling(7).mean().round()
fig, axes = plt.subplots(1, 2, figsize=(7, 3))
e_all.plot(ax=axes[0], xlabel='', rot=30)
prob.plot(ax=axes[1], xlabel='', rot=30, color='red', linestyle='None', marker='^', markersize=5)
plt.yticks(np.array([0, 1, 2]), e_all.columns.values)
axes[0].set_ylabel('Predictor Loss')
axes[1].set_ylabel('Router Selection')
plt.tight_layout()
# plt.savefig('select.pdf', bbox_inches='tight')
plt.show()
```
# RQ2
You can prepare the source data for this test as follows:
1. Random: Setting `src_info` = "NONE"
2. LR: Setting `src_info` = "LR"
3. TPE: Setting `src_info` = "TPE"
4. LR+TPE: Setting `src_info` = "LR_TPE"
```
exps = {
'Random': glob.glob('output/search/LSTM_Attn_tra/K10_traHs16_traSrcNONE_traLamb1.0_hs256_bs1024_do0.1_lr0.0001_seed*/pred.pkl'),
'LR': glob.glob('output/search/LSTM_Attn_tra/K10_traHs16_traSrcLR_traLamb1.0_hs256_bs1024_do0.1_lr0.0001_seed*/pred.pkl'),
'TPE': glob.glob('output/search/LSTM_Attn_tra/K10_traHs16_traSrcTPE_traLamb1.0_hs256_bs1024_do0.1_lr0.0001_seed*/pred.pkl'),
'LR+TPE': glob.glob('output/search/finetune/LSTM_Attn_tra/K10_traHs16_traSrcLR_TPE_traLamb2.0_hs256_bs1024_do0.1_lr0.0001_seed*/pred.pkl')
}
res = {
name: backtest_multi(exps[name])
for name in tqdm(exps)
}
report = pd.DataFrame({
k: v[0]
for k, v in res.items()
}).T
report
# print(report.to_latex())
```
# RQ3
Set `lamb` = 0 to obtain results without Optimal Transport (OT).
```
a = pd.read_pickle('output/search/finetune/Transformer_tra/K3_traHs16_traSrcLR_TPE_traLamb0.0_head4_hs64_bs512_do0.1_lr0.0005_seed3000/pred.pkl')
b = pd.read_pickle('output/search/finetune/Transformer_tra/K3_traHs16_traSrcLR_TPE_traLamb2.0_head4_hs64_bs512_do0.1_lr0.0005_seed3000/pred.pkl')
a = a.iloc[:, -3:]
b = b.iloc[:, -3:]
b = np.eye(3)[b.values.argmax(axis=1)]
a = np.eye(3)[a.values.argmax(axis=1)]
res = pd.DataFrame({
'with OT': b.sum(axis=0) / b.sum(),
'without OT': a.sum(axis=0)/ a.sum()
},index=[r'$\theta_1$',r'$\theta_2$',r'$\theta_3$'])
res.plot.bar(rot=30, figsize=(5, 4), color=['b', 'g'])
del a, b
```
# RQ4
You can prepare the source data for this test as follows:
1. K=1: the plain ALSTM model (no TRA)
2. K=3: Setting `num_states` = 3
3. K=5: Setting `num_states` = 5
4. K=10: Setting `num_states` = 10
5. K=20: Setting `num_states` = 20
```
exps = {
'K=1': glob.glob('output/search/LSTM_Attn/hs256_bs1024_do0.1_lr0.0002_seed*/info.json'),
'K=3': glob.glob('output/search/finetune/LSTM_Attn_tra/K3_traHs16_traSrcLR_TPE_traLamb2.0_hs256_bs1024_do0.1_lr0.0001_seed*/info.json'),
'K=5': glob.glob('output/search/finetune/LSTM_Attn_tra/K5_traHs16_traSrcLR_TPE_traLamb2.0_hs256_bs1024_do0.1_lr0.0001_seed*/info.json'),
'K=10': glob.glob('output/search/finetune/LSTM_Attn_tra/K10_traHs16_traSrcLR_TPE_traLamb2.0_hs256_bs1024_do0.1_lr0.0001_seed*/info.json'),
'K=20': glob.glob('output/search/finetune/LSTM_Attn_tra/K20_traHs16_traSrcLR_TPE_traLamb2.0_hs256_bs1024_do0.1_lr0.0001_seed*/info.json')
}
report = dict()
for k, v in exps.items():
    tmp = dict()
    for fname in v:
        with open(fname) as f:
            info = json.load(f)
        tmp[fname] = {
            "IC": info["metric"]["IC"],
            "MSE": info["metric"]["MSE"]
        }
    tmp = pd.DataFrame(tmp).T
    report[k] = tmp.mean()
report = pd.DataFrame(report).T
fig, axes = plt.subplots(1, 2, figsize=(6,3)); axes = axes.flatten()
report['IC'].plot.bar(rot=30, ax=axes[0])
axes[0].set_ylim(0.045, 0.062)
axes[0].set_title('IC performance')
report['MSE'].astype(float).plot.bar(rot=30, ax=axes[1], color='green')
axes[1].set_ylim(0.155, 0.1585)
axes[1].set_title('MSE performance')
plt.tight_layout()
# plt.savefig('sensitivity.pdf')
report
```
| github_jupyter |
```
%matplotlib inline
```
# K-means Clustering
The plots display, first, what a K-means algorithm would yield using three clusters. It is then shown what the effect of a bad initialization is on the classification process: by setting `n_init` to only 1 (the default is 10), the number of times that the algorithm will be run with different centroid seeds is reduced. The next plot displays what using eight clusters would deliver, and finally the ground truth.
```
print(__doc__)
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
# Though the following import is not directly being used, it is required
# for 3D projection to work
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cluster import KMeans
from sklearn import datasets
np.random.seed(5)
iris = datasets.load_iris()
X = iris.data
y = iris.target
estimators = [('k_means_iris_8', KMeans(n_clusters=8)),
              ('k_means_iris_3', KMeans(n_clusters=3)),
              ('k_means_iris_bad_init', KMeans(n_clusters=3, n_init=1,
                                               init='random'))]
fignum = 1
titles = ['8 clusters', '3 clusters', '3 clusters, bad initialization']
for name, est in estimators:
    fig = plt.figure(fignum, figsize=(4, 3))
    ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
    est.fit(X)
    labels = est.labels_
    ax.scatter(X[:, 3], X[:, 0], X[:, 2],
               c=labels.astype(float), edgecolor='k')  # np.float is removed in modern NumPy
    ax.w_xaxis.set_ticklabels([])
    ax.w_yaxis.set_ticklabels([])
    ax.w_zaxis.set_ticklabels([])
    ax.set_xlabel('Petal width')
    ax.set_ylabel('Sepal length')
    ax.set_zlabel('Petal length')
    ax.set_title(titles[fignum - 1])
    ax.dist = 12
    fignum = fignum + 1
# Plot the ground truth
fig = plt.figure(fignum, figsize=(4, 3))
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
for name, label in [('Setosa', 0),
                    ('Versicolour', 1),
                    ('Virginica', 2)]:
    ax.text3D(X[y == label, 3].mean(),
              X[y == label, 0].mean(),
              X[y == label, 2].mean() + 2, name,
              horizontalalignment='center',
              bbox=dict(alpha=.2, edgecolor='w', facecolor='w'))
# Reorder the labels to have colors matching the cluster results
y = np.choose(y, [1, 2, 0]).astype(float)  # np.float is removed in modern NumPy
ax.scatter(X[:, 3], X[:, 0], X[:, 2], c=y, edgecolor='k')
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
ax.set_title('Ground Truth')
ax.dist = 12
fig.show()
```
| github_jupyter |
# Python cheatsheet
Inspired by [A Whirlwind Tour of Python](https://jakevdp.github.io/WhirlwindTourOfPython/) and [another Python Cheatsheet](https://www.pythoncheatsheet.org/).
Only covers Python 3.
```
import this
```
## Basics
```
# Print statement
print("Hello World!") # Python 3 - No parentheses in Python 2
# Optional separator
print(1, 2, 3)
print(1, 2, 3, sep='--')
# Variables (dynamically typed)
mood = "happy" # or 'happy'
print("I'm", mood)
```
## String formatting
```
# https://realpython.com/python-f-strings/
# https://cito.github.io/blog/f-strings/
name = "Garance"
age = 11
message = "My name is %s and I'm %s years old." % (name, age) # Original language syntax
print(message)
message = "My name is {} and I'm {} years old.".format(name, age) # Python 2.6+
print(message)
message = f"My name is {name} and I'm {age} years old." # Python 3.6+
print(message)
```
## Numbers and arithmetic
```
# Type: int
a = 4
# Type: float
b = 3.14
a, b = b, a
print(a, b)
print(13 / 2)
print(13 // 2)
# Exponential operator
print(3 ** 2)
print(2 ** 3)
```
## Flow control
### The if/elif/else statement
```
name = 'Bob'
age = 30
if name == 'Alice':
    print('Hi, Alice.')
elif age < 12:
    print('You are not Alice, kiddo.')
else:
    print('You are neither Alice nor a little kid.')
```
### The while loop
```
num = 1
while num <= 10:
    print(num)
    num += 1
```
### The for/else loop
The optional `else` statement is only useful when a `break` condition can occur in the loop:
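When no element triggers `break`, the loop runs to completion and the `else` body executes — a minimal contrast case:

```python
for i in [1, 2, 4, 5]:   # 3 is absent, so break never fires
    if i == 3:
        break
else:
    print("No item of the list is equal to 3")
```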
```
for i in [1, 2, 3, 4, 5]:
    if i == 3:
        print(i)
        break
else:
    print("No item of the list is equal to 3")
```
## Data structures
### Lists
```
countries = ["France", "Belgium", "India"]
print(len(countries))
print(countries[0])
print(countries[-1])
# Add element at end of list
countries.append("Ecuador")
print(countries)
```
### List indexing and slicing
```
spam = ['cat', 'bat', 'rat', 'elephant']
print(spam[1:3])
print(spam[0:-1])
print(spam[:2])
print(spam[1:])
print(spam[:])
print(spam[::-1])
```
### Tuples
Contrary to lists, tuples are immutable (read-only).
```
eggs = ('hello', 42, 0.5)
print(eggs[0])
print(eggs[1:3])
# TypeError: a tuple is immutable
# eggs[0] = 'bonjour'
```
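Uncommenting the assignment above raises a `TypeError`; it can be demonstrated safely by catching the exception:

```python
eggs = ('hello', 42, 0.5)
try:
    eggs[0] = 'bonjour'   # tuples reject item assignment
except TypeError as err:
    print('Tuples are immutable:', err)
```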
### Dictionaries
```
numbers = {'one':1, 'two':2, 'three':3}
numbers['ninety'] = 90
print(numbers)
for key, value in numbers.items():
    print(f'{key} => {value}')
```
### Sets
A set is an unordered collection of unique items.
```
# Duplicate elements are automatically removed
s = {1, 2, 3, 2, 3, 4}
print(s)
```
### Union, intersection and difference of sets
```
primes = {2, 3, 5, 7}
odds = {1, 3, 5, 7, 9}
print(primes | odds)
print(primes & odds)
print(primes - odds)
```
## Functions
### Function definition and function call
```
def square(x):
    """ Returns the square of x """
    return x ** 2
# Print function docstring
help(square)
print(square(0))
print(square(3))
```
### Default function parameters
```
def fibonacci(n, a=0, b=1):
    """ Returns a list of the n first Fibonacci numbers"""
    l = []
    while len(l) < n:
        a, b = b, a + b
        l.append(a)
    return l
print(fibonacci(7))
```
### Flexible function arguments
```
def catch_all(*args, **kwargs):
    print("args =", args)
    print("kwargs = ", kwargs)
catch_all(1, 2, 3, a=10, b='hello')
```
### Lambda (anonymous) functions
```
add = lambda x, y: x + y
print(add(1, 2))
```
## Iterators
### A unified interface
```
for element in [1, 2, 3]:
    print(element)
for element in (4, 5, 6):
    print(element)
for key in {'one':1, 'two':2}:
    print(key)
for char in "baby":
    print(char)
```
### Under the hood
- An **iterable** is an object that has an `__iter__` method which returns an **iterator** to provide iteration support.
- An **iterator** is an object with a `__next__` method which returns the next iteration element.
- A **sequence** is an iterable which supports access by integer position. Lists, tuples, strings and range objects are examples of sequences.
- A **mapping** is an iterable which supports access via keys. Dictionaries are examples of mappings.
- Iterators are used implicitly by many looping constructs.
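The protocol can be exercised by hand with the built-ins `iter()` and `next()`:

```python
numbers = [1, 2]
it = iter(numbers)   # calls numbers.__iter__()
print(next(it))      # 1  (calls it.__next__())
print(next(it))      # 2
# One more next(it) would raise StopIteration, which for loops catch internally
```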
### The range() function
It doesn't return a list, but a `range` object (which exposes an iterator).
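A quick check of this behavior:

```python
r = range(5)
print(r)                 # range(0, 5) — nothing materialized yet
print(list(r))           # [0, 1, 2, 3, 4]
print(99 in range(100))  # membership test without building a list
```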
```
for i in range(10):
    if i % 2 == 0:
        print(f"{i} is even")
    else:
        print(f"{i} is odd")
for i in range(0, 10, 2):
    print(i)
for i in range(5, -1, -1):
    print(i)
```
### The enumerate() function
```
supplies = ['pens', 'staplers', 'flame-throwers', 'binders']
for i, supply in enumerate(supplies):
    print(f'Index {i} in supplies is: {supply}')
```
## Comprehensions
### Principle
- Provide a concise way to create sequences.
- General syntax: `[expr for var in iterable]`.
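An optional `if` clause filters items: `[expr for var in iterable if condition]`. For example:

```python
# Squares of the even numbers below 12
evens = [n ** 2 for n in range(12) if n % 2 == 0]
print(evens)  # [0, 4, 16, 36, 64, 100]
```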
### List comprehensions
```
# Using explicit code
L = []
for n in range(12):
    L.append(n ** 2)
print(L)
# Using a list comprehension
[n ** 2 for n in range(12)]
```
### Set and dictionary comprehensions
```
# Create an uppercase set
s = {"abc", "def"}
print({e.upper() for e in s})
# Obtain the remainders modulo 4 (eliminating duplicates)
print({a % 4 for a in range(1000)})
# Switch keys and values
d = {'name': 'Prosper', 'age': 7}
print({v: k for k, v in d.items()})
```
## Generators
### Principle
- A **generator** defines a recipe for producing values.
- A generator does not actually compute the values until they are needed.
- It exposes an iterator interface. As such, it is a basic form of iterable.
- It can only be iterated once.
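The laziness is easy to observe: a generator over a huge range is created instantly, and values are computed only on demand:

```python
G = (n ** 2 for n in range(10 ** 9))  # instant: nothing computed yet
print(next(G))  # 0
print(next(G))  # 1
```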
### Generator expressions
They use parentheses, not square brackets like list comprehensions.
```
G1 = (n ** 2 for n in range(12))
print(list(G1))
print(list(G1))
```
### Generator functions
- A function that, rather than using `return` to return a value once, uses `yield` to yield a (potentially infinite) sequence of values.
- Useful when the generator algorithm gets complicated.
```
def gen():
    for n in range(12):
        yield n ** 2
G2 = gen()
print(list(G2))
print(list(G2))
```
## Object-oriented programming
### Classes and objects
```
class Vehicle:
    def __init__(self, number_of_wheels, type_of_tank):
        self.number_of_wheels = number_of_wheels
        self.type_of_tank = type_of_tank

    @property
    def number_of_wheels(self):
        return self.__number_of_wheels

    @number_of_wheels.setter
    def number_of_wheels(self, number):
        self.__number_of_wheels = number

    def make_noise(self):
        print('VRUUUUUUUM')
tesla_model_s = Vehicle(4, 'electric')
tesla_model_s.number_of_wheels = 2 # setting number of wheels to 2
print(tesla_model_s.number_of_wheels)
tesla_model_s.make_noise()
```
### Class and instance attributes
```
class Employee:
    empCount = 0

    def __init__(self, name, salary):
        self._name = name
        self._salary = salary
        Employee.empCount += 1

    def count():
        return f'Total employees: {Employee.empCount}'

    def description(self):
        return f'Name: {self._name}, salary: {self._salary}'
e1 = Employee('Ben', '30')
print(e1.description())
print(Employee.count())
```
### Inheritance
```
class Animal:
    def __init__(self, species):
        self.species = species

class Dog(Animal):
    def __init__(self, name):
        Animal.__init__(self, 'Mammal')
        self.name = name
doggo = Dog('Fang')
print(doggo.name)
print(doggo.species)
```
## Modules and packages
```
# Importing all module content into a namespace
import math
print(math.cos(math.pi)) # -1.0
# Aliasing an import
import numpy as np
print(np.cos(np.pi)) # -1.0
# Importing specific module content into local namespace
from math import cos, pi
print(cos(pi)) # -1.0
# Importing all module content into local namespace (use with caution)
from math import *
print(sin(pi) ** 2 + cos(pi) ** 2) # 1.0
```
| github_jupyter |
## Dependencies
```
import glob
import numpy as np
import pandas as pd
from transformers import TFBertModel
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input, Dropout, GlobalAveragePooling1D, GlobalMaxPooling1D, concatenate
# Datasets
def get_test_dataset():
    dataset = tf.data.Dataset.from_tensor_slices(x_test)
    dataset = dataset.batch(BATCH_SIZE)
    dataset = dataset.prefetch(AUTO)
    return dataset
```
## TPU configuration
```
try:
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
    print('Running on TPU ', tpu.master())
except ValueError:
    tpu = None

if tpu:
    tf.config.experimental_connect_to_cluster(tpu)
    tf.tpu.experimental.initialize_tpu_system(tpu)
    strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
    strategy = tf.distribute.get_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
```
# Load data
```
dataset_base_path = '/kaggle/input/jigsaw-dataset-toxic-distilbert/'
x_test_path = dataset_base_path + 'x_test.npy'
x_test = np.load(x_test_path)
print('Test samples %d' % len(x_test))
```
# Model parameters
```
MAX_LEN = 512
BATCH_SIZE = 64 * strategy.num_replicas_in_sync
base_path = '/kaggle/input/bert-base-ml-cased-huggingface/bert_base_cased/'
base_model_path = base_path + 'bert-base-multilingual-cased-tf_model.h5'
config_path = base_path + 'bert-base-multilingual-cased-config.json'
model_path_list = glob.glob('/kaggle/input/9-jigsaw-train-bert-ml-toxic-pb/' + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep = "\n")
```
# Model
```
def model_fn():
    input_word_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_word_ids')
    base_model = TFBertModel.from_pretrained(base_model_path, config=config_path)
    sequence_output = base_model(input_word_ids)[0]
    avg_p = GlobalAveragePooling1D()(sequence_output)
    max_p = GlobalMaxPooling1D()(sequence_output)
    x = concatenate([avg_p, max_p])
    x = Dropout(0.25)(x)
    output = Dense(1, activation='sigmoid', name='output')(x)
    model = Model(inputs=input_word_ids, outputs=output)
    return model
```
# Make predictions
```
NUM_TEST_IMAGES = len(x_test)
test_preds = np.zeros((NUM_TEST_IMAGES, 1))
for model_path in model_path_list:
    print(model_path)
    with strategy.scope():
        model = model_fn()
        model.load_weights(model_path)
    test_preds += model.predict(get_test_dataset()) / len(model_path_list)
```
# Test set predictions
```
submission = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/sample_submission.csv')
submission['toxic'] = test_preds
submission.to_csv('submission.csv', index=False)
submission.head(10)
```
| github_jupyter |
# PETs/TETs – Hyperledger Aries – Authority Agent (Issuing Authority) 🏛️
```
%%javascript
document.title='🏛️ Authority'
```
## PART 2: Issue a VC to the Manufacturer Agents
**What:** Issue verifiable credentials (VCs) to all manufacturers
**Why:** Manufacturers will be able to store VCs, and prove to the city (the data scientist) that they are manufacturers, without revealing their identity.
**How:** <br>
1. [Initiate Authority's AgentCommunicationManager (ACM)](#1) <br>
2. [Connect with Manufacturer1](#2)
3. [Issue VC to Manufacturer1](#3)
4. [Repeat Steps 2-3 for Manufacturer2](#4)
5. [Repeat Steps 2-3 for Manufacturer3](#5)
**Accompanying Agents and Notebooks:**
* Manufacturer1 🚗: `02_get_manufacturer1_VC.ipynb`
* Manufacturer2 🚛: `02_get_manufacturer2_VC.ipynb`
* Manufacturer3 🛵: `02_get_manufacturer3_VC.ipynb`
---
### 0 - Setup
#### 0.1 - Imports
```
import os
from aries_cloudcontroller import AriesAgentController
import libs.helpers as helpers
from libs.agent_connection_manager import IssuingAuthority
```
#### 0.2 – Variables
```
# Get identifier data defined in notebook 00_init_authority_as_issuingAuthority.ipynb
identifiers = helpers.get_identifiers()
schema_manufacturer_id = identifiers["manufacturer_schema_identifiers"]["schema_id"]
cred_def_manufacturer_id = identifiers["manufacturer_schema_identifiers"]["cred_def"]
# Get environment variables
api_key = os.getenv("ACAPY_ADMIN_API_KEY")
admin_url = os.getenv("ADMIN_URL")
webhook_port = int(os.getenv("WEBHOOK_PORT"))
webhook_host = "0.0.0.0"
```
---
<a id=1> </a>
### 1 – Initiate Authority Agent
#### 1.1 – Init ACA-PY agent controller
```
# Setup
agent_controller = AriesAgentController(admin_url,api_key)
print(f"Initialising a controller with admin api at {admin_url} and an api key of {api_key}")
```
#### 1.2 – Start Webhook Server to enable communication with other agents
@todo: is communication with other agents, or with other docker containers?
```
# Listen on webhook server
await agent_controller.init_webhook_server(webhook_host, webhook_port)
print(f"Listening for webhooks from agent at http://{webhook_host}:{webhook_port}")
```
#### 1.3 – Init ACM issuing authority
```
# The IssuingAuthority registers relevant webhook servers and event listeners
authority_agent = IssuingAuthority(agent_controller)
```
---
<a id=2> </a>
### 2 – Establish a connection with Manufacturer1 🚗
A connection with the credential issuer (i.e., the authority agent) must be established before a VC can be received. In this scenario, the manufacturer1 requests a connection with the Authority to be certified as an official city agency. Thus, the manufacturer1 agent sends an invitation to the Authority. In real life, the invitation can be shared via video call 💻, phone ☎️, E-Mail 📧, or fax 📠. In this PoC, this is represented by copy and pasting the invitation into the manufacturers' notebooks.
### 2.1 – Receive invitation from `Manufacturer1` agent
Copy the invitation created in Step 2.1 of the Manufacturer1 notebook into the following cell.
Several state changes of the connection between the Manufacturer agent, the inviter (A), and the authority agent, the invitee (B), are required before successfully establishing a connection:
| Step | State | Agent | Description | Function/Prompt/Variable |
| --- | --- | --- | --- | --- |
| 1 | invitation-sent | A | A sent an invitation to B | `create_connection_invitation()`
| 2 | invitation-received | B | B receives the invitation of A | Prompt: Paste invitation from A |
| 3 | request-sent | B | B sends a connection request to A | Prompt: Accept invitation OR `auto_accept=True` |
| 4 | request-received | A | A receives the connection request from B | Prompt: Accept invitation request response OR `auto_accept=True` |
| 5 | response-sent | A | A sends a response to B | - |
| 6 | response-received | B | B receives the response from A | - |
| 7 | active (completed) | A | B pings A to finalize connection | Prompt: Trust ping OR `auto_ping=True` |
```
# Variables
alias = None
auto_accept = True
# Receive connection invitation
connection_id_m1 = authority_agent.receive_connection_invitation(alias=alias, auto_accept=auto_accept)
```
<div style="font-size: 25px"><center><b>Break Point 2</b></center></div>
<div style="font-size: 50px"><center>🏛 ➡️ 🚗</center></div><br>
<center><b>Please return to the Manufacturer1's notebook 🚗. <br>Check the prompts in Step 2.1 (e.g., if auto_accept or auto_ping are set to False), and then proceed to Step 3.</b></center>
---
<a id=3> </a>
## 3 – Process VC request by Manufacturers
### 3.1 – Check messages / requests by Manufacturers
Check inbox and await messages sent by Manufacturer1 🚗
```
# Verify inbox
message_ids = authority_agent.verify_inbox()
for m_id in message_ids:
authority_agent.get_message(m_id)
```
### 3.3 – Offer VC to `Manufacturer1` agent 🚗
The next step is to offer a VC to the manufacturer agent. The manufacturer can then request the offer and store it in their wallet. The following table provides an overview of the individual states between I (Issuer, the Authority agent) and H (Holder, the Manufacturer).
| Step | State | Role | Description | Function/Prompt/Variable |
| --- | --- | --- | --- | --- |
| 1 | offer_sent | I | I sends a VC offer with personalized information to H | `offer_vc()` |
| 2 | offer_received | H | H receives the offer made by I | - |
| 3 | request_sent | H | H requests the offered VC | `request_vc()` AND (Prompt: request VC OR `auto_request=True`) |
| 4 | request_received | I | I receives H's request for the VC | - |
| 5 | credential_issued | I | I automatically responds by issuing the credential | - |
| 6 | credential_received | H | H receives the VC and is asked to store it | Prompt: Store VC OR `auto_store=True` |
| 7 | credential_acked | I / H | Credential was issued and stored | - |
If you enter the information that was sent by the Manufacturer1 Agent (see `Text` attribute in message) when prompted, the proposed credential should look something like this:
```
{
'@type': 'did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/issue-credential/1.0/credential-preview',
'attributes': [
{'name': 'manufacturerCity', 'value': 'Berlin'},
{'name': 'manufacturerName', 'value': 'Manufacturer1'},
{'name': 'manufacturerCountry', 'value': 'Germany'},
{'name': 'isManufacturer', 'value': 'TRUE'},
],
}
```
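The preview shown above is just a typed dictionary. A small helper (hypothetical, not part of the ACA-Py controller API) makes its structure explicit:

```python
# Hypothetical helper to assemble a credential-preview payload like the one above.
PREVIEW_TYPE = (
    "did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/issue-credential/1.0/credential-preview"
)

def build_credential_preview(**attributes):
    """Build a credential-preview dict from keyword attributes."""
    return {
        "@type": PREVIEW_TYPE,
        "attributes": [{"name": k, "value": v} for k, v in attributes.items()],
    }

preview = build_credential_preview(
    manufacturerCity="Berlin",
    manufacturerName="Manufacturer1",
    manufacturerCountry="Germany",
    isManufacturer="TRUE",
)
```

The same structure is what `offer_vc()` receives via its `credential_attributes` argument below.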
```
# MAKE VC ZKP-able! SEE https://github.com/hyperledger/aries-cloudagent-python/blob/main/JsonLdCredentials.md
comment = "Issuing VC that Manufacturer1 is a manufacturer"
auto_remove = True
trace = False
# Offer Manufacturer1 a VC with manufacturer_schema
authority_agent.offer_vc(
connection_id_m1,
schema_manufacturer_id,
cred_def_manufacturer_id,
comment=comment,
# Uncomment the next line to supply the VC information directly instead of entering it via prompts
#credential_attributes=[{"name": "manufacturerName", "value": "undisclosedManufacturer1"}, {"name": "manufacturerCity", "value": "Berlin"}, {"name": "manufacturerCountry", "value": "Germany"}, {"name": "isManufacturer", "value": "TRUE"}]
)
```
<div style="font-size: 25px"><center><b>Break Point 4</b></center></div>
<div style="font-size: 50px"><center>🏛 ➡️ 🚗</center></div><br>
<center><b>Please return to the Manufacturer1's notebook 🚗. <br>Continue with Step 3.2</b></center>
---
<a id=4> </a>
### 4 – 🔁 Repeat Steps 2 and 3 with Manufacturer2 🚛
🤦 Execute the following cells to certify that Manufacturer2 🚛 is a manufacturer.
#### 4.1 – Receive connection invitation by Manufacturer2
```
# Variables
alias = None
auto_accept = True
# Receive connection invitation
connection_id_m2 = authority_agent.receive_connection_invitation(alias=alias, auto_accept=auto_accept)
```
#### 4.2 Offer VC to Manufacturer2
```
# MAKE VC ZKP-able! SEE https://github.com/hyperledger/aries-cloudagent-python/blob/main/JsonLdCredentials.md
comment = "Issuing VC that Manufacturer2 is a manufacturer"
auto_remove = True
trace = False
# Offer Manufacturer2 a VC with manufacturer_schema
authority_agent.offer_vc(
connection_id_m2,
schema_manufacturer_id,
cred_def_manufacturer_id,
comment=comment,
# Comment out the next line to enter the VC information via prompts instead
credential_attributes=[{"name": "manufacturerName", "value": "truckManufacturer"}, {"name": "manufacturerCity", "value": "City2"}, {"name": "manufacturerCountry", "value": "DE"}, {"name": "isManufacturer", "value": "TRUE"}]
)
```
<div style="font-size: 25px"><center><b>Break Point 6</b></center></div>
<div style="font-size: 50px"><center>🏛 ➡️ 🚛</center></div><br>
<center><b>Please return to the Manufacturer2's notebook 🚛. <br>Continue with Step 3.1</b></center>
---
<a id=5> </a>
### 5 – 🔁 Repeat Steps 2 and 3 with Manufacturer3 🛵
🙇 Execute the following cells to certify that Manufacturer3 🛵 is a manufacturer.
#### 5.1 – Establish a connection with Manufacturer3
`auto_accept` is set to `True` to speed up the connection process.
```
# Variables
alias = None
auto_accept = True
# Receive connection invitation
connection_id_m3 = authority_agent.receive_connection_invitation(alias=alias, auto_accept=auto_accept)
```
#### 5.2 Offer VC to Manufacturer3
```
# MAKE VC ZKP-able! SEE https://github.com/hyperledger/aries-cloudagent-python/blob/main/JsonLdCredentials.md
comment = "Issuing VC that Manufacturer3 is a manufacturer"
auto_remove = True
trace = False
# Offer Manufacturer3 a VC with manufacturer_schema
authority_agent.offer_vc(
connection_id_m3,
schema_manufacturer_id,
cred_def_manufacturer_id,
comment=comment,
# Comment out the next line to enter the VC information via prompts instead
credential_attributes=[{"name": "manufacturerName", "value": "scooterManufacturer"}, {"name": "manufacturerCity", "value": "City3"}, {"name": "manufacturerCountry", "value": "DE"}, {"name": "isManufacturer", "value": "TRUE"}]
)
```
<div style="font-size: 25px"><center><b>Break Point 9</b></center></div>
<div style="font-size: 50px"><center>🏛 ➡️ 🛵</center></div><br>
<center><b>Please return to the Manufacturer3's notebook 🛵. <br>Continue with Step 3.1</b></center>
---
## 6 - Terminate Controller
Whenever you have finished with this notebook, be sure to terminate the controller. This is especially important if your business logic runs across multiple notebooks.
```
await agent_controller.terminate()
```
---
### 🔥🔥🔥 You are done 🙌 and can close this notebook now 🔥🔥🔥
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
The `%matplotlib inline` magic is only for Jupyter notebooks. In another editor, simply call **plt.show()** at the end of all your plotting commands to have the figure pop up in another window.
### Basic plot
```
x = np.arange(1,10)
y = x ** 2
plt.plot(x, y, 'r')
plt.title('any Title')
plt.xlabel('X Axis Label')
plt.ylabel('Y Axis Label')
```
#### Customizing figure height and width
```
fig = plt.figure(figsize = (12,8))
plt.plot(x, y, 'r')
plt.title('any Title')
plt.xlabel('X Axis Label')
plt.ylabel('Y Axis Label')
```
#### Saving the visualization output
```
fig.savefig("first.png")
cars = pd.read_csv("cars93.csv")
cars.head()
```
### Box Plot
```
fig = plt.figure(figsize = (12,8))
plt.boxplot(cars["Price"])
plt.title("Boxplot of Price")
plt.show()
B = plt.boxplot(cars["Price"])
[item.get_ydata()[1] for item in B['whiskers']]
fig = plt.figure(figsize = (12,8))
plt.boxplot(cars["Horsepower"])
plt.title("Boxplot of Horsepower")
plt.show()
# using Pandas
cars["Horsepower"].plot.box(figsize = (10,7))
plt.show()
```
### Histogram
```
a = np.arange(17)
plt.scatter(a, np.zeros_like(a))
plt.scatter(cars["Horsepower"], np.zeros_like(cars["Horsepower"]))
plt.show()
plt.hist(cars["Horsepower"])
plt.title("Histogram of Horsepower")
plt.show()
plt.hist(cars["Horsepower"], bins = 16)
plt.title("Histogram of Horsepower")
plt.show()
```
### Bar Chart
```
cars["Type"].value_counts()
plt.bar(cars["Type"].value_counts().index, cars["Type"].value_counts().values)
plt.xlabel("Type")
plt.ylabel("Count")
plt.title("Distribution of Type across various categories")
plt.show()
plt.bar(cars["Type"].values, cars["MPG.city"],width=0.2,label="Mileage in city")
plt.xlabel("Type")
plt.ylabel("MPG.city")
plt.title("MPG.city across the car types")
plt.show()
grouped_df = cars.groupby("Type").mean(numeric_only=True)  # numeric_only restricts the mean to numeric columns
grouped_df
grouped_df.reset_index(inplace = True)
plt.bar(grouped_df["Type"], grouped_df["MPG.city"])
plt.xlabel("Type")
plt.ylabel("MPG.city")
plt.title("MPG.city mean across the car types")
plt.show()
```
### Scatter plot
```
plt.scatter(cars["Horsepower"], cars["MPG.city"])
plt.xlabel("Horsepower")
plt.ylabel("MPG.city")
plt.title("Scatter plot of Horsepower vs. MPG.city")
plt.show()
plt.scatter(cars["Horsepower"], cars["MPG.city"], c = "r",marker = "*")
plt.xlabel("Horsepower")
plt.ylabel("MPG.city")
plt.title("Scatter plot of Horsepower vs. MPG.city")
plt.show()
```
### Line Chart
```
plt.plot(cars["Horsepower"], cars["MPG.city"])
cars93_ordered = cars.sort_values(by = "Horsepower")
cars93_ordered.head()
plt.plot(cars93_ordered["Horsepower"], cars93_ordered["MPG.city"])
plt.xlabel("Horsepower")
plt.ylabel("MPG.city")
plt.title("Line chart of Horsepower vs. MPG.city")
```
```
from fastai.vision import *
DATA = untar_data(URLs.IMAGENETTE_160)
src = (ImageList.from_folder(DATA).filter_by_rand(0.3, seed=42)
.split_by_folder(valid='val')
.label_from_folder()
.transform(([flip_lr(p=0.5)], []), size=160))
data = (src.databunch(bs=64, num_workers=6)
.normalize(imagenet_stats))
data
from fastai import layers
def conv_layer(ni:int, nf:int, ks:int=3, stride:int=1, padding:int=None, bias:bool=None, is_1d:bool=False,
               norm_type:Optional[NormType]=NormType.Batch, use_activ:bool=True, activ_fn:Callable=None, leaky:float=None,
               transpose:bool=False, init:Callable=nn.init.kaiming_normal_, self_attention:bool=False):
    "Create a sequence of convolutional (`ni` to `nf`), ReLU (if `use_activ`) and batchnorm (if `bn`) layers."
    activ_fn = ifnone(activ_fn, partial(relu, inplace=True, leaky=leaky))
    if padding is None: padding = (ks-1)//2 if not transpose else 0
    bn = norm_type in (NormType.Batch, NormType.BatchZero)
    if bias is None: bias = not bn
    conv_func = nn.ConvTranspose2d if transpose else nn.Conv1d if is_1d else nn.Conv2d
    conv = init_default(conv_func(ni, nf, kernel_size=ks, bias=bias, stride=stride, padding=padding), init)
    if norm_type==NormType.Weight: conv = weight_norm(conv)
    elif norm_type==NormType.Spectral: conv = spectral_norm(conv)
    layers = [conv]
    if use_activ: layers.append(activ_fn())
    if bn: layers.append((nn.BatchNorm1d if is_1d else nn.BatchNorm2d)(nf))
    if self_attention: layers.append(SelfAttention(nf))
    return nn.Sequential(*layers)
def simple_cnn(data, actns:Collection[int], kernel_szs:Collection[int]=None,
               strides:Collection[int]=None, bn=False, activ_fn=None,
               lin_ftrs:Optional[Collection[int]]=None, ps:Floats=0.5,
               concat_pool:bool=True, bn_final:bool=False) -> nn.Sequential:
    "CNN with `conv_layer` defined by `actns`, `kernel_szs` and `strides`, plus batchnorm if `bn`."
    nl = len(actns)-1
    kernel_szs = ifnone(kernel_szs, [3]*nl)
    strides = ifnone(strides, [2]*nl)
    layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i],
                         norm_type=(NormType.Batch if bn and i<(len(strides)-1) else None),
                         activ_fn=activ_fn) for i in range_of(strides)]
    nf_head = actns[-1] * (2 if concat_pool else 1)
    head = create_head(nf_head, data.c, lin_ftrs=lin_ftrs, ps=ps, concat_pool=concat_pool, bn_final=bn_final)
    return nn.Sequential(*layers, head)
actns = [3,64,64,128,128,256,256,512,512]
strides = [1,2]*(len(actns)//2)
```
# Relu
```
mdl_relu = simple_cnn(data, actns=actns, strides=strides)
mdl_relu
lrn = Learner(data, mdl_relu, metrics=[accuracy,top_k_accuracy])
lrn.fit_one_cycle(5, 1e-3)
lrn.destroy()
```
# Mish
```
class Mish(nn.Module):
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))
mdl_mish = simple_cnn(data, actns=actns, strides=strides, activ_fn=Mish)
mdl_mish
lrn = Learner(data, mdl_mish, metrics=[accuracy,top_k_accuracy])
lrn.fit_one_cycle(5, 1e-3)
lrn.destroy()
```
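Mish has a simple closed form, x · tanh(softplus(x)). A quick NumPy check, independent of PyTorch and fastai, confirms the module above computes what we expect:

```python
import numpy as np

def mish_np(x):
    """Mish activation: x * tanh(softplus(x)); log1p keeps softplus stable for small exp(x)."""
    return x * np.tanh(np.log1p(np.exp(x)))

# Mish passes through the origin and approaches the identity for large positive x.
at_zero = mish_np(0.0)
at_ten = mish_np(10.0)
```

For large negative inputs Mish returns small negative values rather than exactly zero, which is part of its appeal over ReLU.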
## Mish CUDA
```
from mish_cuda import MishCuda
mdl_mish = simple_cnn(data, actns=actns, strides=strides, activ_fn=MishCuda)
mdl_mish
lrn = Learner(data, mdl_mish, metrics=[accuracy,top_k_accuracy])
lrn.fit_one_cycle(5, 1e-3)
lrn.destroy()
```
## Stability
```
from mish_cuda import mish_backward
def stress_bwd():
    grad_out = torch.ones(1000,100)
    for _ in progress_bar(range(1000)):
        inp = torch.randn(1000,100) + torch.randint(-1000, 1000, (1000,1)).float()
        y = mish_backward(inp, grad_out)
        # NaN != NaN, so an equality check against np.nan would always miss NaNs;
        # use torch.isinf/torch.isnan instead.
        n_inf = torch.isinf(y).sum()
        n_nan = torch.isnan(y).sum()
        if n_inf > 0 or n_nan > 0:
            print(f"Found non-finite: {n_inf} inf; {n_nan} nan")
            return (inp, grad_out, y)
    print("No non-finites found")
    return None
bad = stress_bwd()
```
# Badge Holder Tests
The purpose of these tests was to determine if badges in antistatic / non-antistatic holders behave differently.
Our initial hypothesis is that the antistatic holders were obstructing the Bluetooth signal due to the slight conductivity of antistatic surfaces. Therefore we expect to see more issues with the antistatic holders' RSSIs than with those of badges in regular holders.
Expected issues include:
- large variance when readings are taken at a constant distance
- lower signal strength (more negative RSSI), leading to less accurate range calculations
Ideally, there would be no great difference between holder types, because then either one could be used without affecting data collection.
We will also be watching out for variation between individual badges. Each of these badges came from the same "batch," so we expect that they will have increased precision.
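RSSI-based ranging typically assumes a log-distance path-loss model, which is why a weaker signal inflates the range estimate. A hedged sketch follows; the constants (a 1 m reference RSSI of -59 dBm and a path-loss exponent of 2.0) are illustrative defaults, not the badges' calibrated values:

```python
def rssi_to_distance(rssi, rssi_at_1m=-59.0, path_loss_exp=2.0):
    """Estimate distance (metres) from RSSI via the log-distance path-loss model.

    Both parameters are illustrative assumptions, not badge calibration values.
    """
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exp))

# A weaker (more negative) RSSI maps to a larger estimated range.
d_near = rssi_to_distance(-59.0)  # at the reference RSSI -> 1 m
d_far = rssi_to_distance(-69.0)   # 10 dB weaker -> sqrt(10) times farther
```

Because the model is exponential in RSSI, even a few dB of holder-induced attenuation would noticeably bias the range estimates.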
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
holders_anti = ["EE:B4:AF:3F:05:80", # Badge 262
"FF:0C:74:ED:C4:CD", # Badge 263
"EC:FB:84:DE:35:4A", # Badge 264
"C5:85:B9:18:8D:C3", # Badge 265
"CB:5C:4B:7C:43:81", # Badge 266
"E1:07:CE:CE:52:BE"] # Badge 267
holders_noanti = ["F1:4E:55:EA:ED:A4", # Badge 210
"E8:FA:0D:3C:01:82", # Badge 211
"C2:57:36:E6:71:6E", # Badge 212
"D9:86:6F:AF:E8:90", # Badge 213
"C1:96:24:5B:EB:97", # Badge 214
"C8:2F:80:DA:26:94"] # Badge 215
```
# Prepping the Data
```
# loads all of the distances' test data for both holder types
# @return - df_all - a PANDAS DataFrame of all the test data
def load_all():
    # @param - antistatic - boolean indicating whether to load antistatic or regular holders
    # @return - df_holders - a PANDAS DataFrame of all distances' test data for the specified holder type
    def load_holders(antistatic):
        as_str = "antistatic" if antistatic else "noantistatic"
        holders = holders_anti if antistatic else holders_noanti

        # @param - dist - int indicating the distance between the receiver and transmitter during the test
        def load_dist(dist):
            raw = pd.read_csv('logs_' + as_str + '/BLE_range_test_' + str(dist) + 'ft_' + as_str + '_CSV/000.csv')
            raw = raw.loc[(raw['RSSI']>-70)] # RSSI values below about -70 are too weak to be significant
            df_dist = raw.loc[raw['MAC'].isin(holders)] # filter by holder type
            return df_dist

        distances = [2, 4, 6, 8, 10]
        df_holders = pd.concat([load_dist(d) for d in distances], keys=distances, names=["FT"])
        return df_holders

    df_anti = load_holders(antistatic=True)
    df_noanti = load_holders(antistatic=False)
    df_all = pd.concat([df_anti, df_noanti], keys=["antistatic", "regular"], names=["HOLDER"])
    return df_all

data = load_all()
```
# Raw Data
```
raw_data_0 = data.loc["antistatic"].drop(columns=["DATETIME", "MAC"]).unstack(level="FT")
raw_data_1 = data.loc["regular"].drop(columns=["DATETIME", "MAC"]).unstack(level="FT")
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12,5), sharey=True)
raw_data_0.plot(ax=axes[0], kind="hist", alpha=.8, bins=15, xlim=(-70,-50), title="Antistatic")
raw_data_1.plot(ax=axes[1], kind="hist", alpha=.8, bins=15, xlim=(-70,-50), title="Regular")
```
Here are some plots of the raw test data, filtered by MAC address so that only the badges involved in the test are shown.
RSSI is plotted by count at each of the 5 distances used in the test. The peaks in the "Antistatic" plot seem to line up more clearly than in the "Regular" plot, but there isn't much else to say about these plots since the RSSI values here are based on a single sample, which is not how it's done on the badge. If anything, it is clear from these plots that some processing of the RSSI values needs to be done in order to get a reasonable range estimate.
# Rolling Mean
The rolling mean uses a window of 5 to generate samples, each of which are then averaged. This is similar to the actual mechanism used by the badge to collect RSSI data, in which a sample of 5 is collected and averaged before being stored. The roll is done per badge, so as to better simulate this system. Therefore these results should be more indicative of the actual badges' accuracy.
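On a toy series the window-of-5 behaviour is easy to verify; the values below are made up for illustration, not taken from the test logs (the same window applies to the rolling max used in the next section):

```python
import pandas as pd

# Six made-up RSSI readings. With window=5, the first four rolling values
# are NaN; the fifth aggregates readings 1-5, the sixth readings 2-6.
rssi = pd.Series([-60, -62, -58, -61, -59, -64])
roll_mean = rssi.rolling(5).mean()
roll_max = rssi.rolling(5).max()
```

The first complete window averages -60, -62, -58, -61, -59 to -60.0, and its rolling max is the strongest reading, -58.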
```
roll_mean0 = data.loc["antistatic"].groupby("MAC").rolling(5).mean().drop(columns=["MAC", "DATETIME"]).unstack(level="FT")
roll_mean1 = data.loc["regular"].groupby("MAC").rolling(5).mean().drop(columns=["MAC", "DATETIME"]).unstack(level="FT")
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12,5), sharey=True)
roll_mean0.plot(ax=axes[0], kind='hist', bins=20, alpha=.8, xlim=(-70,-50), title="Antistatic")
roll_mean1.plot(ax=axes[1], kind='hist', bins=20, alpha=.8, xlim=(-70,-50), title="Regular")
```
# Rolling Max
Likewise with the rolling mean, the rolling max uses a window of 5, of which the maximum is taken. This method seemed to be promising as a way of pre-processing the data during a previous test, so it is re-analyzed here. If this method proves to be better than rolling mean, it would be very easy to implement this change to improve the accuracy of RSSI ranging.
```
roll_max0 = data.loc["antistatic"].groupby("MAC").rolling(5).max().drop(columns=["MAC", "DATETIME"]).unstack(level="FT")
roll_max1 = data.loc["regular"].groupby("MAC").rolling(5).max().drop(columns=["MAC", "DATETIME"]).unstack(level="FT")
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12,5), sharey=True)
roll_max0.plot(ax=axes[0], kind='hist', bins=15, alpha=.8, xlim=(-70,-50), title="Antistatic")
roll_max1.plot(ax=axes[1], kind='hist', bins=15, alpha=.8, xlim=(-70,-50), title="Regular")
```
# Individual badges
## Antistatic Holders
### Rolling Mean
```
rmn2 = data.loc["antistatic"].loc[2].groupby("MAC").rolling(5).mean().unstack(level="MAC")
rmn4 = data.loc["antistatic"].loc[4].groupby("MAC").rolling(5).mean().unstack(level="MAC")
rmn6 = data.loc["antistatic"].loc[6].groupby("MAC").rolling(5).mean().unstack(level="MAC")
rmn8 = data.loc["antistatic"].loc[8].groupby("MAC").rolling(5).mean().unstack(level="MAC")
rmn10 = data.loc["antistatic"].loc[10].groupby("MAC").rolling(5).mean().unstack(level="MAC")
fig, axes = plt.subplots(nrows=3, ncols=2, figsize=(15,15), sharey=True)
rmn2.plot(ax=axes[0][0], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Blues", title="2ft")
rmn4.plot(ax=axes[1][0], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Oranges", title="4ft")
rmn6.plot(ax=axes[2][0], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Greens", title="6ft")
rmn8.plot(ax=axes[0][1], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Reds", title="8ft")
rmn10.plot(ax=axes[1][1], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Purples", title="10ft")
fig.delaxes(axes[2][1])
```
### Rolling Max
```
rmx2 = data.loc["antistatic"].loc[2].groupby("MAC").rolling(5).max().unstack(level="MAC")
rmx4 = data.loc["antistatic"].loc[4].groupby("MAC").rolling(5).max().unstack(level="MAC")
rmx6 = data.loc["antistatic"].loc[6].groupby("MAC").rolling(5).max().unstack(level="MAC")
rmx8 = data.loc["antistatic"].loc[8].groupby("MAC").rolling(5).max().unstack(level="MAC")
rmx10 = data.loc["antistatic"].loc[10].groupby("MAC").rolling(5).max().unstack(level="MAC")
fig, axes = plt.subplots(nrows=3, ncols=2, figsize=(15,15), sharey=True)
rmx2.plot(ax=axes[0][0], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Blues", title="2ft")
rmx4.plot(ax=axes[1][0], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Oranges", title="4ft")
rmx6.plot(ax=axes[2][0], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Greens", title="6ft")
rmx8.plot(ax=axes[0][1], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Reds", title="8ft")
rmx10.plot(ax=axes[1][1], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Purples", title="10ft")
fig.delaxes(axes[2][1])
```
## Regular Holders
### Rolling Mean
```
rmn2 = data.loc["regular"].loc[2].groupby("MAC").rolling(5).mean().unstack(level="MAC")
rmn4 = data.loc["regular"].loc[4].groupby("MAC").rolling(5).mean().unstack(level="MAC")
rmn6 = data.loc["regular"].loc[6].groupby("MAC").rolling(5).mean().unstack(level="MAC")
rmn8 = data.loc["regular"].loc[8].groupby("MAC").rolling(5).mean().unstack(level="MAC")
rmn10 = data.loc["regular"].loc[10].groupby("MAC").rolling(5).mean().unstack(level="MAC")
fig, axes = plt.subplots(nrows=3, ncols=2, figsize=(15,15), sharey=True)
rmn2.plot(ax=axes[0][0], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Blues", title="2ft")
rmn4.plot(ax=axes[1][0], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Oranges", title="4ft")
rmn6.plot(ax=axes[2][0], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Greens", title="6ft")
rmn8.plot(ax=axes[0][1], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Reds", title="8ft")
rmn10.plot(ax=axes[1][1], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Purples", title="10ft")
fig.delaxes(axes[2][1])
```
### Rolling Max
```
rmx2 = data.loc["regular"].loc[2].groupby("MAC").rolling(5).max().unstack(level="MAC")
rmx4 = data.loc["regular"].loc[4].groupby("MAC").rolling(5).max().unstack(level="MAC")
rmx6 = data.loc["regular"].loc[6].groupby("MAC").rolling(5).max().unstack(level="MAC")
rmx8 = data.loc["regular"].loc[8].groupby("MAC").rolling(5).max().unstack(level="MAC")
rmx10 = data.loc["regular"].loc[10].groupby("MAC").rolling(5).max().unstack(level="MAC")
fig, axes = plt.subplots(nrows=3, ncols=2, figsize=(15,15), sharey=True)
rmx2.plot(ax=axes[0][0], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Blues", title="2ft")
rmx4.plot(ax=axes[1][0], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Oranges", title="4ft")
rmx6.plot(ax=axes[2][0], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Greens", title="6ft")
rmx8.plot(ax=axes[0][1], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Reds", title="8ft")
rmx10.plot(ax=axes[1][1], kind='hist', bins=10, alpha=.8, xlim=(-70,-50), colormap="Purples", title="10ft")
fig.delaxes(axes[2][1])
```
# First a little bit of statistics review:
# Variance
Variance is a measure of the spread of numbers in a dataset: it is the average of the squared differences from the mean. So naturally, you can't find the variance of something unless you calculate its mean first. Let's get some data and find its variance.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import random
# Lets generate two variables with 50 random integers each.
variance_one = []
variance_two = []
for x in range(50):
variance_one.append(random.randint(25,75))
variance_two.append(random.randint(0,100))
variance_data = {'v1': variance_one, 'v2': variance_two}
variance_df = pd.DataFrame(variance_data)
variance_df['zeros'] = pd.Series(list(np.zeros(50)))
variance_df.head()
# Now some scatter plots
plt.scatter(variance_df.v1, variance_df.zeros)
plt.xlim(0,100)
plt.title("Plot One")
plt.show()
plt.scatter(variance_df.v2, variance_df.zeros)
plt.xlim(0,100)
plt.title("Plot Two")
plt.show()
```
Now I know this isn't complicated, but each of the above plots has the same number of points, yet we can tell visually that "Plot Two" has the greater variance because its points are more spread out. What if we didn't trust our eyes, though? Let's calculate the variance of each of these variables to prove it to ourselves.
$\overline{X}$ is the symbol for the mean of the dataset.
$N$ is the total number of observations.
Variance is sometimes denoted by a lowercase $v$, but you'll also see it referred to as $\sigma^{2}$.
\begin{align}
v = \frac{\sum{(X_{i} - \overline{X})^{2}} }{N}
\end{align}
How do we calculate a simple average? We add up all of the values and then divide by the total number of values. This is why there is a sum in the numerator and $N$ in the denominator.
However, in this calculation we're not just summing the values as we would when calculating the mean; we are summing the squared difference between each point and the mean (the squared distance between each point and the mean).
```
# Since we generated these random values in a range centered around 50, that's
# about where their means should be.
# Find the means for each variable
v1_mean = variance_df.v1.mean()
print("v1 mean: ", v1_mean)
v2_mean = variance_df.v2.mean()
print("v2 mean: ", v2_mean)
# Find the distance between each point and its corresponding mean
variance_df['v1_distance'] = variance_df.v1-v1_mean
variance_df['v2_distance'] = variance_df.v2-v2_mean
variance_df.head()
# Now we'll square the distances from the means
variance_df['v1_squared_distance'] = variance_df.v1_distance**2
variance_df['v2_squared_distance'] = variance_df.v2_distance**2
# Notice that squaring the distances turns all of our negative values into positive ones?
variance_df.head()
# Now we'll sum the squared distances and divide by the number of observations.
observations = len(variance_df)
print("Number of Observations: ", observations)
Variance_One = variance_df.v1_squared_distance.sum()/observations
Variance_Two = variance_df.v2_squared_distance.sum()/observations
print("Variance One: ", Variance_One)
print("Variance Two: ", Variance_Two)
```
Whoa, so what are the ranges of v1 and v2?
Well, v1 goes from 25 to 75, so its range is about 50, and v2 goes from 0 to 100, so its range is about 100.
So even though v2 is roughly twice as spread out, how much bigger is its variance than v1's? (Remember that the differences are squared, so doubling the spread should roughly quadruple the variance.)
```
print("How many times bigger is Variance_One than Variance_Two? ", Variance_Two/Variance_One)
# Roughly 4 times bigger (the exact factor varies run to run since the data are random). Why is that?
```
## A note about my code quality
Why did I go to the trouble of calculating all of that by hand, and add a bunch of extra useless columns to my dataframe? That is some bad code!
Because I wanted to make sure that you understood all of the parts of the equation. I didn't want the function to be some magic thing that you put numbers in and out popped a variance. Taking time to understand the equation will reinforce your intuition about the spread of the data. After all, I could have just done this:
```
print(variance_df.v1.var(ddof=1))
print(variance_df.v2.var(ddof=1))
```
But wait! Those variance values are different from the ones we calculated above, oh no! This is because variance is calculated slightly differently for a population vs. a sample. Let's clarify this a little bit.
The **POPULATION VARIANCE** $\sigma^{2}$ is a **PARAMETER** (aspect, property, attribute, etc) of the population.
The **SAMPLE VARIANCE** $s^{2}$ is a **STATISTIC** (estimated attribute) of the sample.
We use the sample statistic to **estimate** the population parameter.
The sample variance $s^{2}$ is an estimate of the population variance $\sigma^{2}$.
Basically, if you're calculating a **sample** variance, you need to divide by $N-1$ or else your estimate will be a little biased. The equation that we were originally working from is for a **population variance**.
If we use the `ddof=0` parameter (the default is `ddof=1`) in our call, we should get the same result. "ddof" stands for Delta Degrees of Freedom: the divisor used is `N - ddof`.
```
print(variance_df.v1.var(ddof=0))
print(variance_df.v2.var(ddof=0))
```
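The N versus N-1 denominators can be checked directly on a tiny hand-computable example (toy numbers, separate from the random data above):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])            # mean = 5
squared_dist = (x - x.mean()) ** 2            # 9, 1, 1, 9 -> sum = 20
pop_var = squared_dist.sum() / len(x)         # population variance: 20 / 4 = 5.0
samp_var = squared_dist.sum() / (len(x) - 1)  # sample variance: 20 / 3
```

These match NumPy's `np.var(x)` (which defaults to `ddof=0`) and `np.var(x, ddof=1)` respectively.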
# Standard Deviation
If you understand how variance is calculated, then standard deviation is a cinch. The standard deviation is the square root of the variance: $\sigma = \sqrt{\sigma^{2}}$.
## So why would we use one over the other?
Remember how we squared all of the distances from the mean before adding them up? Taking the square root of the variance puts our measure back in the same units as the mean. So the standard deviation is a measure of spread that is expressed in the same units as the mean of the data: variance is the average squared distance from the mean, and the standard deviation is (roughly) the average distance from the mean. You'll remember that when we did hypothesis testing and explored the normal distribution, we talked in terms of standard deviations rather than variance for this reason.
```
print(variance_df.v1.std(ddof=0))
print(variance_df.v2.std(ddof=0))
```
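That relationship is one line to confirm with NumPy (again on a toy array):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])
variance = np.var(x)   # 5.0, as computed by hand above
std_dev = np.std(x)    # square root of the variance
```

Squaring the standard deviation recovers the variance exactly.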
# Covariance
Covariance is a measure of how changes in one variable are associated with changes in a second variable. It's a measure of how they Co (together) Vary (move) or how they move in relation to each other. For this topic we're not really going to dive into the formula, I just want you to be able to understand the topic intuitively. Since this measure is about two variables, graphs that will help us visualize things in two dimensions will help us demonstrate this idea. (scatterplots)

Let's look at the first scatterplot. The y variable has high values where the x variable has low values. This is a negative covariance, because as one variable increases (moves), the other decreases (moves in the opposite direction).
In the second scatterplot we see no relation between high and low values of either variable; therefore this cloud of points has a covariance near 0.
In the third graph, we see that the y variable takes on low values in the same range where the x variable takes on low values, and similarly with high values. Because the areas of their high and low values match, we would expect this cloud of points to have a positive covariance.


Check out how popular this site is:
<https://tylervigen.com>
<https://www.similarweb.com/website/tylervigen.com#overview>
## Interpreting Covariance
A large positive or negative covariance indicates a strong relationship between two variables. However, covariance values are unbounded and scale with the variables themselves: variables that take on large values will produce a larger covariance than variables with an equally strong relationship but a smaller scale. This means you can't directly compare covariances between pairs of variables measured on different scales, so we need a way to standardize the measure.
Let me show you what I mean:
```
a = [1,2,3,4,5,6,7,8,9]
b = [1,2,3,4,5,6,7,8,9]
c = [10,20,30,40,50,60,70,80,90]
d = [10,20,30,40,50,60,70,80,90]
fake_data = {"a": a, "b": b, "c": c, "d": d,}
df = pd.DataFrame(fake_data)
plt.scatter(df.a, df.b)
plt.xlim(0,100)
plt.ylim(0,100)
plt.show()
plt.scatter(df.c, df.d)
plt.xlim(0,100)
plt.ylim(0,100)
plt.show()
```
Which of the above sets of variables has a stronger relationship?
Which has the stronger covariance?
# The Variance-Covariance Matrix
In order to answer this problem we're going to use a tool called a variance-covariance matrix.
This is a matrix that compares each variable with every other variable in a dataset; it returns variance values along the main diagonal and covariance values everywhere else.
```
df.cov()
```
What type of special square matrix is the variance-covariance matrix?
The two sets of variables above show relationships that are equal in their strength, yet their covariance values are wildly different.
How can we counteract this problem?
What if there was some statistic of a distribution that represented how spread out the data was that we could use to standardize the units/scale of the variables?
# Correlation Coefficient
Well, it just so happens that we do have such a measure of spread of a variable. It's called the Standard Deviation! And we already learned about it. If we divide our covariance values by the product of the standard deviations of the two variables, we'll end up with what's called the Correlation Coefficient. (Sometimes just referred to as the correlation).
Correlation Coefficients have a fixed range from -1 to +1 with 0 representing no linear relationship between the data.
In most use cases the correlation coefficient is an improvement over measures of covariance because:
- Covariance can take on practically any value, while correlation is limited to the range -1 to +1.
- Because of its fixed range, correlation is more useful for determining how strong the relationship between two variables is.
- Correlation has no units; covariance always has units.
- Correlation isn't affected by changes in the center (i.e. mean) or scale of the variables.
[Statistics How To - Covariance](https://www.statisticshowto.datasciencecentral.com/covariance/)
The correlation coefficient is usually represented by a lower case $r$.
\begin{align}
r = \frac{cov(X,Y)}{\sigma_{X}\sigma_{Y}}
\end{align}
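Before calling the pandas method below, it can help to verify this formula by hand on a small example. The sample values here are my own illustrative numbers:

```
import numpy as np

# Two perfectly linear samples (illustrative values)
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 4, 6, 8, 10], dtype=float)

cov_xy = np.cov(x, y, ddof=1)[0, 1]   # sample covariance of x and y
r = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))
print(r)  # ~1.0: a perfect positive linear relationship
```

Pandas computes exactly this quantity for every pair of columns when you call `.corr()`.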
```
df.corr()
```
Correlation coefficients of 1 tell us that all of these variables have a perfectly linear positive correlation with one another.

Correlation and other sample statistics are somewhat limited in their ability to tell us about the shape/patterns in the data.
[Anscombe's Quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet)

Or take it to the next level with the [Datasaurus Dozen](https://www.autodeskresearch.com/publications/samestats)
# Orthogonality
Orthogonality is another word for "perpendicularity" or things (vectors or matrices) existing at right angles to one another. Two vectors that are perpendicular to one another are orthogonal.
## How to tell if two vectors are orthogonal
Two vectors are orthogonal to each other if their dot product is zero.
Let's look at a couple of examples to see this in action:
```
vector_1 = [0, 2]
vector_2 = [2, 0]
# Plot the Scaled Vectors
plt.arrow(0,0, vector_1[0], vector_1[1],head_width=.05, head_length=0.05, color ='red')
plt.arrow(0,0, vector_2[0], vector_2[1],head_width=.05, head_length=0.05, color ='green')
plt.xlim(-1,3)
plt.ylim(-1,3)
plt.title("Orthogonal Vectors")
plt.show()
```
Clearly we can see that the above vectors are perpendicular to each other. What does the formula say?
\begin{align}
a = \begin{bmatrix} 0 & 2\end{bmatrix}
\qquad
b = \begin{bmatrix} 2 & 0\end{bmatrix}
\\
a \cdot b = (0)(2) + (2)(0) = 0
\end{align}
```
vector_1 = [-2, 2]
vector_2 = [2, 2]
# Plot the Scaled Vectors
plt.arrow(0,0, vector_1[0], vector_1[1],head_width=.05, head_length=0.05, color ='red')
plt.arrow(0,0, vector_2[0], vector_2[1],head_width=.05, head_length=0.05, color ='green')
plt.xlim(-3,3)
plt.ylim(-1,3)
plt.title("Orthogonal Vectors")
plt.show()
```
Again the dot product is zero.
\begin{align}
a = \begin{bmatrix} -2 & 2\end{bmatrix}
\qquad
b = \begin{bmatrix} 2 & 2\end{bmatrix}
\\
a \cdot b = (-2)(2) + (2)(2) = 0
\end{align}
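The two worked examples above can be checked numerically with `np.dot` (a third, non-orthogonal pair is included for contrast):

```
import numpy as np

print(np.dot([0, 2], [2, 0]))    # 0: orthogonal
print(np.dot([-2, 2], [2, 2]))   # 0: orthogonal
print(np.dot([1, 2], [2, 1]))    # 4: not orthogonal
```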
# Unit Vectors
In Linear Algebra a unit vector is any vector of "unit length" (1). You can turn any non-zero vector into a unit vector by dividing it by its norm (length/magnitude).
For example, if I have the vector
\begin{align}
b = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}
\end{align}
and I want to turn it into a unit vector, first I will calculate its norm
\begin{align}
||b|| = \sqrt{1^2 + 2^2 + 2^2} = \sqrt{1 + 4 + 4} = \sqrt{9} = 3
\end{align}
I can turn b into a unit vector by dividing it by its norm. Once something has been turned into a unit vector we'll put a ^ "hat" symbol over it to denote that it is now a unit vector.
\begin{align}
\hat{b} = \frac{1}{||b||}b = \frac{1}{3}\begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix} = \begin{bmatrix} \frac{1}{3} \\ \frac{2}{3} \\ \frac{2}{3} \end{bmatrix}
\end{align}
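The same computation is a one-liner with numpy; `np.linalg.norm` gives the length and broadcasting handles the division:

```
import numpy as np

b = np.array([1.0, 2.0, 2.0])
b_hat = b / np.linalg.norm(b)     # divide b by its norm (3) to get a unit vector
print(b_hat)                      # [1/3, 2/3, 2/3]
print(np.linalg.norm(b_hat))      # unit length (up to floating point)
```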
You'll frequently see the standard unit vectors used to denote a given dimensional space.
$\mathbb{R}$ unit vector: $\hat{i} = \begin{bmatrix} 1 \end{bmatrix}$
$\mathbb{R}^2$ unit vectors: $\hat{i} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $\hat{j} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$
$\mathbb{R}^3$ unit vectors: $\hat{i} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$, $\hat{j} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$, $\hat{k} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$
You'll notice that in the corresponding space, these basis vectors are the rows/columns of the identity matrix.
```
# Axis Bounds
plt.xlim(-1,2)
plt.ylim(-1,2)
# Unit Vectors
i_hat = [1,0]
j_hat = [0,1]
# Fix Axes
plt.gca().set_aspect('equal')
# Plot Vectors
plt.arrow(0, 0, i_hat[0], i_hat[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')
plt.arrow(0, 0, j_hat[0], j_hat[1], linewidth=3, head_width=.05, head_length=0.05, color ='blue')
plt.title("basis vectors in R^2")
plt.show()
```
## Vectors as linear combinations of scalars and unit vectors
Any vector (or matrix) can be described in terms of a linear combination of scaled unit vectors. Let's look at an example.
\begin{align}
c = \begin{bmatrix} 2 \\ 3 \end{bmatrix}
\end{align}
We can think of this as a vector that starts at the origin and extends to the point $(2,3)$.
Let's rewrite it as a linear combination of scaled unit vectors:
\begin{align}
c = \begin{bmatrix} 2 \\ 3 \end{bmatrix} = 2\begin{bmatrix} 1 \\ 0 \end{bmatrix} + 3\begin{bmatrix} 0 \\ 1 \end{bmatrix} = 2\hat{i} + 3\hat{j}
\end{align}
This says that the vector $\begin{bmatrix} 2 \\ 3 \end{bmatrix}$ results from scaling the $\hat{i}$ unit vector by 2, scaling the $\hat{j}$ unit vector by 3, and then adding the two together.
We can describe any vector in $\mathbb{R}^2$ this way. In fact, we can describe any vector in any dimensionality this way, provided we use all of the unit vectors for that space and scale each of them appropriately. In this example we just happen to be using a vector whose dimension is 2.
# Span
The span is the set of all possible vectors that can be created with a linear combination of a set of vectors (just as we described above).
A linear combination of vectors just means that we're composing vectors (via scaling and addition or subtraction) to create a new vector.
## Linearly Dependent Vectors
Two vectors that live on the same line are what's called linearly dependent. This means that there is no linear combination (no way to add or subtract scaled versions of these vectors) that will ever allow us to create a vector that lies outside of that line.
In this case, the span of these vectors (let's say the green one and the red one, for example; it could be just those two or a whole set) is the line that they lie on, since that's all that can be produced by scaling and composing them.
The span is the graphical area that we're able to cover via a linear combination of a set of vectors.
## Linearly Independent Vectors
Linearly independent vectors are vectors that don't lie on the same line as each other. If two vectors are linearly independent, then there ought to be some linear combination of them that could represent any vector in the space ($\mathbb{R}^2$ in this case).
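A quick numerical test for dependence is to stack the vectors as rows of a matrix and check its rank with `np.linalg.matrix_rank`. These sample vectors are my own, chosen to mirror the dependent and independent cases plotted below:

```
import numpy as np

dependent = np.array([[1, 0],
                      [3, 0]])        # both lie on the x-axis
independent = np.array([[-1.5, 0.5],
                        [3, 1]])
print(np.linalg.matrix_rank(dependent))    # 1: they span only a line
print(np.linalg.matrix_rank(independent))  # 2: they span all of R^2
```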
```
# Plot Linearly Dependent Vectors
# Axis Bounds
plt.xlim(-1.1,4)
plt.ylim(-1.1,4)
# Original Vector
v = [1,0]
# Scaled Vectors
v2 = np.multiply(3, v)
v3 = np.multiply(-1,v)
# Get Vals for L
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = 0*x_vals
# Plot Vectors and L
plt.plot(x_vals, y_vals, '--', color='b', linewidth=1)
plt.arrow(0,0, v2[0], v2[1], linewidth=3, head_width=.05, head_length=0.05, color ='yellow')
plt.arrow(0,0, v[0], v[1], linewidth=3, head_width=.05, head_length=0.05, color ='green')
plt.arrow(0,0, v3[0], v3[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')
plt.title("Linearly Dependent Vectors")
plt.show()
# Plot Linearly Independent Vectors
# Axis Bounds
plt.xlim(-2,3.5)
plt.ylim(-1,3)
# Original Vector
a = [-1.5,.5]
b = [3, 1]
# Plot Vectors
plt.arrow(0,0, a[0], a[1], linewidth=3, head_width=.05, head_length=0.05, color ='blue')
plt.arrow(0,0, b[0], b[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')
plt.title("Linearly Independent Vectors")
plt.show()
```
# Basis
The basis of a vector space $V$ is a set of vectors that are linearly independent and that span the vector space $V$.
A set of vectors spans a space if their linear combinations fill the space.
For example, the vectors in the "Linearly Independent Vectors" plot above form a basis for the vector space $\mathbb{R}^2$ because they are linearly independent and span that space.
## Orthogonal Basis
An orthogonal basis is a set of vectors that are linearly independent, span the vector space, and are orthogonal to each other. Remember that vectors are orthogonal if their dot product equals zero.
## Orthonormal Basis
An orthonormal basis is a set of vectors that are linearly independent, span the vector space, are orthogonal to each other, and each have unit length.
For more on this topic (it's thrilling, I know) you might research the Gram-Schmidt process, which is a method for orthonormalizing a set of vectors in an inner product space.
The unit vectors form an orthonormal basis for whatever vector space that they are spanning.
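As a sketch of that process, here is a minimal Gram-Schmidt implementation for a list of linearly independent vectors (no handling of dependent or zero input; the example vectors are my own):

```
import numpy as np

def gram_schmidt(vectors):
    # Orthonormalize linearly independent vectors one at a time:
    # subtract each earlier basis direction, then normalize what's left.
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for q in basis:
            w = w - np.dot(w, q) * q   # remove the component along q
        basis.append(w / np.linalg.norm(w))
    return basis

q1, q2 = gram_schmidt([[3, 1], [2, 2]])
print(np.dot(q1, q2))                          # ~0: orthogonal
print(np.linalg.norm(q1), np.linalg.norm(q2))  # both have unit length
```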
# Rank
The rank of a matrix is the dimension of the vector space spanned by its columns. Just because a matrix has a certain number of rows or columns (dimensionality) doesn't necessarily mean that it will span that dimensional space. Sometimes there exists a sort of redundancy within the rows/columns of a matrix (linear dependence) that becomes apparent when we reduce the matrix to row-echelon form via Gaussian Elimination.
## Gaussian Elimination
Gaussian Elimination is a process that takes any given matrix and reduces it to what is called "row-echelon form." A matrix is in row-echelon form when each row has a 1 as its leading (leftmost nonzero) entry, with zeroes in every position below that leading entry. These matrices usually wind up as a sort of upper-triangular matrix (not necessarily square) with ones on the main diagonal.

Gaussian Elimination takes a matrix and converts it to row-echelon form by doing combinations of three different row operations:
1) You can swap any two rows
2) You can multiply entire rows by scalars
3) You can add/subtract rows from each other
This takes some practice to do by hand but once mastered becomes the fastest way to find the rank of a matrix.
For example, let's look at the following matrix:
\begin{align}
P = \begin{bmatrix}
1 & 0 & 1 \\
-2 & -3 & 1 \\
3 & 3 & 0
\end{bmatrix}
\end{align}
Now, let's use Gaussian elimination to get this matrix into row-echelon form.
Step 1: Add 2 times the 1st row to the 2nd row
\begin{align}
P = \begin{bmatrix}
1 & 0 & 1 \\
0 & -3 & 3 \\
3 & 3 & 0
\end{bmatrix}
\end{align}
Step 2: Add -3 times the 1st row to the 3rd row
\begin{align}
P = \begin{bmatrix}
1 & 0 & 1 \\
0 & -3 & 3 \\
0 & 3 & -3
\end{bmatrix}
\end{align}
Step 3: Multiply the 2nd row by -1/3
\begin{align}
P = \begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & -1 \\
0 & 3 & -3
\end{bmatrix}
\end{align}
Step 4: Add -3 times the 2nd row to the 3rd row
\begin{align}
P = \begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & -1 \\
0 & 0 & 0
\end{bmatrix}
\end{align}
Now that we have this in row-echelon form we can see that we had one row that was linearly dependent (could be composed as a linear combination of other rows). That's why we were left with a row of zeros in place of it. If we look closely we will see that the first row equals the second row plus the third row.
Because we had two rows with leading 1s (these are called pivot values) left after the matrix was in row-echelon form, we know that its Rank is 2.
What does this mean? Even though the original matrix is a 3x3 matrix, its columns span only a two-dimensional subspace (a plane) within $\mathbb{R}^3$, not all of $\mathbb{R}^3$.
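We can confirm the hand elimination above numerically with `np.linalg.matrix_rank`:

```
import numpy as np

P = np.array([[ 1,  0, 1],
              [-2, -3, 1],
              [ 3,  3, 0]])
print(np.linalg.matrix_rank(P))   # 2
```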
# Linear Projections in $\mathbb{R}^{2}$
Assume that we have some line $L$ in $\mathbb{R}^{2}$.
```
# Plot a line
plt.xlim(-1,4)
plt.ylim(-1,4)
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = 0*x_vals
plt.plot(x_vals, y_vals, '--', color='b')
plt.title("A Line")
plt.show()
```
We know that if we have a vector $v$ that lies on that line, then no matter how we scale $v$, the resulting vectors can only exist on that line.
```
# Plot a line
# Axis Bounds
plt.xlim(-1.1,4)
plt.ylim(-1.1,4)
# Original Vector
v = [1,0]
# Scaled Vectors
v2 = np.multiply(3, v)
v3 = np.multiply(-1,v)
# Get Vals for L
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = 0*x_vals
# Plot Vectors and L
plt.plot(x_vals, y_vals, '--', color='b', linewidth=1)
plt.arrow(0,0, v2[0], v2[1], linewidth=3, head_width=.05, head_length=0.05, color ='yellow')
plt.arrow(0,0, v[0], v[1], linewidth=3, head_width=.05, head_length=0.05, color ='green')
plt.arrow(0,0, v3[0], v3[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')
plt.title("v scaled two different ways")
plt.show()
```
Let's call the green vector $v$.
This means that line $L$ is equal to vector $v$ scaled by all of the potential scalars in $\mathbb{R}$. We can represent this scaling factor by a constant $c$. Therefore, line $L$ is vector $v$ scaled by any scalar $c$.
\begin{align}
L = cv
\end{align}
Now, say that we have a second vector $w$ that we want to "project" onto line L
```
# Plot a line
# Axis Bounds
plt.xlim(-1.1,4)
plt.ylim(-1.1,4)
# Original Vector
v = [1,0]
w = [2,2]
# Get Vals for L
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = 0*x_vals
# Plot Vectors and L
plt.plot(x_vals, y_vals, '--', color='b', linewidth=1)
plt.arrow(0, 0, v[0], v[1], linewidth=3, head_width=.05, head_length=0.05, color ='green')
plt.arrow(0, 0, w[0], w[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')
plt.title("vector w")
plt.show()
```
## Projection as a shadow cast onto the target vector at a right angle
This is the intuition that I want you to develop. Imagine that we are shining a light down onto line $L$ from a direction exactly orthogonal to it. Since $L$ here is the x-axis, that is the same as shining a light down from directly above. How long will the shadow be?
Imagine that you're **projecting** light from above to cast a shadow onto the x-axis.
Well since $L$ is literally the x-axis you can probably tell that the length of the projection of $w$ onto $L$ is 2.
A projection onto an axis is the same as setting the coordinate that doesn't match the axis to 0. In our case vector $w$ has coordinates $(2,2)$, so it projects onto the x-axis at $(2,0)$; we just set the y value to 0.
### Notation
In linear algebra we write the projection of w onto L like this:
\begin{align}proj_{L}(\vec{w})\end{align}
```
# Axis Bounds
plt.xlim(-1.1,4)
plt.ylim(-1.1,4)
# Original Vector
v = [1,0]
w = [2,2]
proj = [2,0]
# Get Vals for L
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = 0*x_vals
# Plot Vectors and L
plt.plot(x_vals, y_vals, '--', color='b', linewidth=1)
plt.arrow(0, 0, proj[0], proj[1], linewidth=3, head_width=.05, head_length=0.05, color ='gray')
plt.arrow(0, 0, v[0], v[1], linewidth=3, head_width=.05, head_length=0.05, color ='green')
plt.arrow(0, 0, w[0], w[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')
plt.title("Shadow of w")
plt.show()
```
The problem here is that we can't just draw a vector and call it a day; we have to define that vector in terms of our $v$ (green) vector.
Our gray vector is defined as:
\begin{align}
cv = proj_{L}(w)
\end{align}
But what if $L$ weren't the x-axis? How would we calculate the projection?
```
# Axis Bounds
plt.xlim(-1.1,4)
plt.ylim(-1.1,4)
# Original Vector
v = [1,1/2]
w = [2,2]
proj = np.multiply(2.4,v)
# Set axes
axes = plt.gca()
axes.set_aspect('equal')
# Get Vals for L
x_vals = np.array(axes.get_xlim())
y_vals = 1/2*x_vals
# Plot Vectors and L
plt.plot(x_vals, y_vals, '--', color='b', linewidth=1)
plt.arrow(0, 0, proj[0], proj[1], linewidth=3, head_width=.05, head_length=0.05, color ='gray')
plt.arrow(0, 0, v[0], v[1], linewidth=3, head_width=.05, head_length=0.05, color ='green')
plt.arrow(0, 0, w[0], w[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')
plt.title("non x-axis projection")
plt.show()
```
Remember, it doesn't matter how long our $v$ (green) vector is; we're just looking for the $c$ value that scales that vector to give us the gray vector $proj_{L}(w)$.
```
# Axis Bounds
plt.xlim(-1.1,4)
plt.ylim(-1.1,4)
# Original Vector
v = [1,1/2]
w = [2,2]
proj = np.multiply(2.4,v)
x_minus_proj = w-proj
# Set axes
axes = plt.gca()
axes.set_aspect('equal')
# Get Vals for L
x_vals = np.array(axes.get_xlim())
y_vals = 1/2*x_vals
# Plot Vectors and L
plt.plot(x_vals, y_vals, '--', color='b', linewidth=1)
plt.arrow(0, 0, proj[0], proj[1], linewidth=3, head_width=.05, head_length=0.05, color ='gray')
plt.arrow(0, 0, v[0], v[1], linewidth=3, head_width=.05, head_length=0.05, color ='green')
plt.arrow(0, 0, w[0], w[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')
plt.arrow(proj[0], proj[1], x_minus_proj[0], x_minus_proj[1], linewidth=3, head_width=.05, head_length=0.05, color = 'yellow')
plt.title("non x-axis projection")
plt.show()
```
Let's use a trick. We're going to imagine a yellow vector that is orthogonal to $L$, starting at the tip of our projection (gray) and ending at the tip of $w$ (red).
### Here's the hard part
This may not be intuitive, but we can define that yellow vector as $w-proj_{L}(w)$. Remember how adding two vectors acts as if we had placed one at the tip of the other? Subtraction works the other way around: the difference $w-proj_{L}(w)$ is the vector that points from the tip of $proj_{L}(w)$ to the tip of $w$.
Since we defined $proj_{L}(w)$ as $cv$ (above). We then rewrite the yellow vector as:
\begin{align}
yellow = w-cv
\end{align}
Since we know that our yellow vector is orthogonal to $v$ we can then set up the following equation:
\begin{align}
v \cdot (w-cv) = 0
\end{align}
(remember that the dot product of two orthogonal vectors is 0)
Now solving for $c$ we get
1) Distribute the dot product
\begin{align}
v \cdot w - c(v \cdot v) = 0
\end{align}
2) add $c(v \cdot v)$ to both sides
\begin{align}
v \cdot w = c(v \cdot v)
\end{align}
3) divide by $v \cdot v$
\begin{align}
c = \frac{w \cdot v}{v \cdot v}
\end{align}
Since $cv = proj_{L}(w)$ we know that:
\begin{align}
proj_{L}(w) = \frac{w \cdot v}{v \cdot v}v
\end{align}
This is the equation for the projection of any vector $w$ onto any line $L$!
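This formula translates directly into a few lines of numpy. Using the same $v$ and $w$ as in the plots above, it recovers the scaling factor $c = 2.4$ used there:

```
import numpy as np

def proj(w, v):
    # Projection of w onto the line spanned by v: (w . v / v . v) v
    v = np.asarray(v, dtype=float)
    w = np.asarray(w, dtype=float)
    return (np.dot(w, v) / np.dot(v, v)) * v

print(proj([2, 2], [1, 0.5]))   # [2.4 1.2], i.e. c = 2.4 times v
```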
Think about what happens if we try to project a vector that is already orthogonal to the line:
```
# Axis Bounds
plt.xlim(-1.1,4)
plt.ylim(-1.1,4)
# Original Vectors
v = [1, 0]
w = [0, 2]
proj = [0, 0]  # w is orthogonal to v, so its projection collapses to the origin
# Get Vals for L
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = 0*x_vals
# Plot Vectors and L
plt.plot(x_vals, y_vals, '--', color='b', linewidth=1)
plt.arrow(0, 0, w[0], w[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')
plt.title("Shadow of w")
plt.show()
```
Now that you have a feel for linear projections, you can see that $proj_{L}(w)$ is 0 precisely because $w \cdot v$ is 0.
Why have I gone to all of this trouble to explain linear projections? Because I think the intuition behind them is one of the most important things to grasp in linear algebra. The orthogonal projection onto a line gives the closest point on that line to a given data point (vector). We can now move data points onto any given line and be certain that they move as little as possible from their original positions.
The square of the norm of a vector is equivalent to the dot product of a vector with itself.
The dot product of a vector and itself can be rewritten as that vector times the transpose of itself.
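Both identities are easy to spot-check with numpy (for a 1-D array, `v.T` is just `v`, so `v.T @ v` and `np.dot(v, v)` are the same computation):

```
import numpy as np

v = np.array([1.0, 2.0, 2.0])
print(np.linalg.norm(v) ** 2)   # 9.0  (squared norm)
print(np.dot(v, v))             # 9.0  (dot product with itself)
print(v.T @ v)                  # 9.0  (v transpose times v)
```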
<a href="https://colab.research.google.com/github/arunraja-hub/Preference_Extraction/blob/master/export_lucid.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#Install and imports
```
%tensorflow_version 1.x
!pip uninstall lucid -y
!pip install git+https://github.com/tensorflow/lucid.git#egg=lucid
!git clone https://github.com/arunraja-hub/Preference_Extraction.git
!pip install tf-agents==0.3.0
!pip uninstall tensorflow-probability -y
!pip install tensorflow-probability==0.7.0
!npm install -g svelte-cli@2.2.0
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
import numpy as np
from tf_agents.trajectories.time_step import TimeStep
from tf_agents.specs.tensor_spec import TensorSpec
from tf_agents.specs.tensor_spec import BoundedTensorSpec
from tf_agents.networks import q_network
import lucid.modelzoo.vision_models as models
import concurrent.futures
import itertools
import os
import pickle
import random
import sys
import time
import io
import collections
import urllib.request
from urllib.error import HTTPError
```
# Examine the checkpoint vars
```
cpt_name = "Preference_Extraction/model_ckpt"
cpt_var_names = tf.compat.v1.train.list_variables(cpt_name)
[name for name in cpt_var_names if (("bias" in name[0]) or ("kernel" in name[0])) and not ("OPTIMIZER" in name[0])]
```
# Setup model
```
tf.reset_default_graph()
input_shape = [14, 16, 5]
my_input = tf.placeholder(tf.float32, shape=[None] + input_shape, name="my_input")
q_vals = q_network.QNetwork(
    input_tensor_spec=TensorSpec(shape=(14, 16, 5)),
    action_spec=BoundedTensorSpec((), tf.int32, 0, 2),
    conv_layer_params=[[16, 3, 1], [32, 3, 2]],
    fc_layer_params=[64])(my_input)
[tensor for op in tf.get_default_graph().get_operations() for tensor in op.values()]
```
# Get the var_name_to_prev_var_name mapping by matching tensors with the same shape.
```
cpt_var_info = tf.compat.v1.train.list_variables(cpt_name)
cpt_var_info = [var for var in cpt_var_info if (("bias" in var[0]) or ("kernel" in var[0])) and not ("OPTIMIZER" in var[0]) and not ("_target_q_network" in var[0])]
shape_to_cpt_var_name = {tuple(var[1]): var[0] for var in cpt_var_info}
shape_to_cpt_var_name
current_vars = tf.get_collection(tf.GraphKeys.VARIABLES)
current_vars
shape_to_current_var_name = {tuple(var.get_shape().as_list()): var.name[:-2] for var in current_vars}
shape_to_current_var_name
var_name_to_prev_var_name = {}
for shape in shape_to_current_var_name:
var_name_to_prev_var_name[shape_to_current_var_name[shape]] = shape_to_cpt_var_name[shape]
var_name_to_prev_var_name
```
# Read data
```
class Trajectory(
collections.namedtuple('Trajectory', [
'step_type',
'observation',
'action',
'policy_info',
'next_step_type',
'reward',
'discount',
])):
"""Stores the observation the agent saw and the action it took.
The rest of the attributes aren't used in this code."""
__slots__ = ()
class ListWrapper(object):
def __init__(self, list_to_wrap):
self._list = list_to_wrap
def as_list(self):
return self._list
class RenameUnpickler(pickle.Unpickler):
def find_class(self, module, name):
if name == "Trajectory":
return Trajectory
if name == "ListWrapper":
return ListWrapper
return super(RenameUnpickler, self).find_class(module, name)
def rename_load(s):
"""Helper function analogous to pickle.loads()."""
return RenameUnpickler(s, encoding='latin1').load()
# Modified read trajectories functions to read files from local storage
def load_file(full_path):
try:
with open(full_path, 'rb') as f:
data = rename_load(f)
return data
except Exception:  # unreadable or missing pickle file
return None
def all_load_data(base_path):
executor = concurrent.futures.ThreadPoolExecutor(max_workers=100)
futures = []
for i in range(5000):
full_path = os.path.join(base_path, "ts"+str(i)+".pickle")
future = executor.submit(load_file, full_path)
futures.append(future)
raw_data = []
for future in concurrent.futures.as_completed(futures):
result = future.result()
if result:
raw_data.append(result)
return raw_data
all_raw_data = all_load_data("Preference_Extraction/data/simple_env_1/")
```
# Do the warmstart and verify it worked
```
tf.train.warm_start(cpt_name, var_name_to_prev_var_name=var_name_to_prev_var_name)
init_op = tf.global_variables_initializer()
activation_tensor = tf.get_default_graph().get_tensor_by_name("QNetwork/EncodingNetwork/EncodingNetwork/dense/Relu:0")
weights = tf.get_default_graph().get_tensor_by_name("QNetwork/dense/kernel/Read/ReadVariableOp:0")
bias = tf.get_default_graph().get_tensor_by_name("QNetwork/dense/bias/Read/ReadVariableOp:0")
with tf.Session() as sess:
sess.run(init_op)
for i in range(len(all_raw_data[0].observation)):
single_observation = np.array([all_raw_data[0].observation[i]])
restored_activations = sess.run(activation_tensor, {my_input: single_observation})[0]
old_activations = all_raw_data[0].policy_info["activations"][i]
if i < 3:
print("restored_activations", restored_activations, "old_activations", old_activations)
np.testing.assert_allclose(restored_activations, old_activations, rtol=.1)
```
# Save Lucid model.
```
with tf.Session() as sess:
sess.run(init_op)
models.Model.suggest_save_args()
models.Model.save(
input_name='my_input',
image_shape=input_shape,
output_names=["QNetwork/dense/BiasAdd"],
image_value_range=[0,1],
save_url="lucid_save_model.pb"
)
```
```
#@title Copyright 2020 Google LLC. Double-click here for license information.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Introduction to Neural Nets
This Colab builds a deep neural network to perform more sophisticated linear regression than the earlier Colabs.
## Learning Objectives:
After doing this Colab, you'll know how to do the following:
* Create a simple deep neural network.
* Tune the hyperparameters for a simple deep neural network.
## The Dataset
Like several of the previous Colabs, this Colab uses the [California Housing Dataset](https://developers.google.com/machine-learning/crash-course/california-housing-data-description).
## Use the right version of TensorFlow
The following hidden code cell ensures that the Colab will run on TensorFlow 2.X.
```
#@title Run on TensorFlow 2.x
%tensorflow_version 2.x
from __future__ import absolute_import, division, print_function, unicode_literals
```
## Import relevant modules
The following hidden code cell imports the necessary code to run the code in the rest of this Colaboratory.
```
#@title Import relevant modules
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
from matplotlib import pyplot as plt
import seaborn as sns
# The following lines adjust the granularity of reporting.
pd.options.display.max_rows = 10
pd.options.display.float_format = "{:.1f}".format
print("Imported modules.")
```
## Load the dataset
Like most of the previous Colab exercises, this exercise uses the California Housing Dataset. The following code cell loads the separate .csv files and creates the following two pandas DataFrames:
* `train_df`, which contains the training set
* `test_df`, which contains the test set
```
train_df = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv")
train_df = train_df.reindex(np.random.permutation(train_df.index)) # shuffle the examples
test_df = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv")
```
## Normalize values
When building a model with multiple features, the values of each feature should cover roughly the same range. The following code cell normalizes datasets by converting each raw value to its Z-score. (For more information about Z-scores, see the Classification exercise.)
```
#@title Convert raw values to their Z-scores
# Calculate the Z-scores of each column in the training set:
train_df_mean = train_df.mean()
train_df_std = train_df.std()
train_df_norm = (train_df - train_df_mean)/train_df_std
# Calculate the Z-scores of each column in the test set.
test_df_mean = test_df.mean()
test_df_std = test_df.std()
test_df_norm = (test_df - test_df_mean)/test_df_std
print("Normalized the values.")
```
## Represent data
The following code cell creates a feature layer containing three features:
* `latitude` X `longitude` (a feature cross)
* `median_income`
* `population`
This code cell specifies the features that you'll ultimately train the model on and how each of those features will be represented. The transformations (collected in `my_feature_layer`) don't actually get applied until you pass a DataFrame to it, which will happen when we train the model.
```
# Create an empty list that will eventually hold all created feature columns.
feature_columns = []
# We scaled all the columns, including latitude and longitude, into their
# Z scores. So, instead of picking a resolution in degrees, we're going
# to use resolution_in_Zs. A resolution_in_Zs of 1 corresponds to
# a full standard deviation.
resolution_in_Zs = 0.3 # 3/10 of a standard deviation.
# Create a bucket feature column for latitude.
latitude_as_a_numeric_column = tf.feature_column.numeric_column("latitude")
latitude_boundaries = list(np.arange(int(min(train_df_norm['latitude'])),
int(max(train_df_norm['latitude'])),
resolution_in_Zs))
latitude = tf.feature_column.bucketized_column(latitude_as_a_numeric_column, latitude_boundaries)
# Create a bucket feature column for longitude.
longitude_as_a_numeric_column = tf.feature_column.numeric_column("longitude")
longitude_boundaries = list(np.arange(int(min(train_df_norm['longitude'])),
int(max(train_df_norm['longitude'])),
resolution_in_Zs))
longitude = tf.feature_column.bucketized_column(longitude_as_a_numeric_column,
longitude_boundaries)
# Create a feature cross of latitude and longitude.
latitude_x_longitude = tf.feature_column.crossed_column([latitude, longitude], hash_bucket_size=100)
crossed_feature = tf.feature_column.indicator_column(latitude_x_longitude)
feature_columns.append(crossed_feature)
# Represent median_income as a floating-point value.
median_income = tf.feature_column.numeric_column("median_income")
feature_columns.append(median_income)
# Represent population as a floating-point value.
population = tf.feature_column.numeric_column("population")
feature_columns.append(population)
# Convert the list of feature columns into a layer that will later be fed into
# the model.
my_feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
```
## Build a linear regression model as a baseline
Before creating a deep neural net, find a [baseline](https://developers.google.com/machine-learning/glossary/#baseline) loss by running a simple linear regression model that uses the feature layer you just created.
```
#@title Define the plotting function.
def plot_the_loss_curve(epochs, mse):
"""Plot a curve of loss vs. epoch."""
plt.figure()
plt.xlabel("Epoch")
plt.ylabel("Mean Squared Error")
plt.plot(epochs, mse, label="Loss")
plt.legend()
plt.ylim([mse.min()*0.95, mse.max() * 1.03])
plt.show()
print("Defined the plot_the_loss_curve function.")
#@title Define functions to create and train a linear regression model
def create_model(my_learning_rate, feature_layer):
"""Create and compile a simple linear regression model."""
# Most simple tf.keras models are sequential.
model = tf.keras.models.Sequential()
# Add the layer containing the feature columns to the model.
model.add(feature_layer)
# Add one linear layer to the model to yield a simple linear regressor.
model.add(tf.keras.layers.Dense(units=1, input_shape=(1,)))
# Construct the layers into a model that TensorFlow can execute.
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=my_learning_rate),
loss="mean_squared_error",
metrics=[tf.keras.metrics.MeanSquaredError()])
return model
def train_model(model, dataset, epochs, batch_size, label_name):
"""Feed a dataset into the model in order to train it."""
# Split the dataset into features and label.
features = {name:np.array(value) for name, value in dataset.items()}
label = np.array(features.pop(label_name))
history = model.fit(x=features, y=label, batch_size=batch_size,
epochs=epochs, shuffle=True)
# Get details that will be useful for plotting the loss curve.
epochs = history.epoch
hist = pd.DataFrame(history.history)
mse = hist["mean_squared_error"]
return epochs, mse
print("Defined the create_model and train_model functions.")
```
Run the following code cell to invoke the functions defined in the preceding two code cells. (Ignore the warning messages.)
**Note:** Because we've scaled all the input data, **including the label**, the resulting loss values will be *much less* than those of previous models.
**Note:** Depending on the version of TensorFlow, running this cell might generate WARNING messages. Please ignore these warnings.
```
# The following variables are the hyperparameters.
learning_rate = 0.01
epochs = 15
batch_size = 1000
label_name = "median_house_value"
# Establish the model's topography.
my_model = create_model(learning_rate, my_feature_layer)
# Train the model on the normalized training set.
epochs, mse = train_model(my_model, train_df_norm, epochs, batch_size, label_name)
plot_the_loss_curve(epochs, mse)
test_features = {name:np.array(value) for name, value in test_df_norm.items()}
test_label = np.array(test_features.pop(label_name)) # isolate the label
print("\n Evaluate the linear regression model against the test set:")
my_model.evaluate(x = test_features, y = test_label, batch_size=batch_size)
```
## Define a deep neural net model
The `create_model` function defines the topography of the deep neural net, specifying the following:
* The number of [layers](https://developers.google.com/machine-learning/glossary/#layer) in the deep neural net.
* The number of [nodes](https://developers.google.com/machine-learning/glossary/#node) in each layer.
The `create_model` function also defines the [activation function](https://developers.google.com/machine-learning/glossary/#activation_function) of each layer.
```
def create_model(my_learning_rate, my_feature_layer):
"""Create and compile a deep neural net."""
# Most simple tf.keras models are sequential.
model = tf.keras.models.Sequential()
# Add the layer containing the feature columns to the model.
model.add(my_feature_layer)
# Describe the topography of the model by calling the tf.keras.layers.Dense
# method once for each layer. We've specified the following arguments:
# * units specifies the number of nodes in this layer.
# * activation specifies the activation function (Rectified Linear Unit).
# * name is just a string that can be useful when debugging.
# Define the first hidden layer with 20 nodes.
model.add(tf.keras.layers.Dense(units=20,
activation='relu',
name='Hidden1'))
# Define the second hidden layer with 12 nodes.
model.add(tf.keras.layers.Dense(units=12,
activation='relu',
name='Hidden2'))
# Define the output layer.
model.add(tf.keras.layers.Dense(units=1,
name='Output'))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=my_learning_rate),
loss="mean_squared_error",
metrics=[tf.keras.metrics.MeanSquaredError()])
return model
```
## Define a training function
The `train_model` function trains the model from the input features and labels. The [tf.keras.Model.fit](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential#fit) method performs the actual training. The `x` parameter of the `fit` method is very flexible, enabling you to pass feature data in a variety of ways. The following implementation passes a Python dictionary in which:
* The *keys* are the names of each feature (for example, `longitude`, `latitude`, and so on).
* The *value* of each key is a NumPy array containing the values of that feature.
**Note:** Although you are passing *every* feature to `model.fit`, most of those values will be ignored. Only the features accessed by `my_feature_layer` will actually be used to train the model.
```
def train_model(model, dataset, epochs, label_name,
batch_size=None):
"""Train the model by feeding it data."""
# Split the dataset into features and label.
features = {name:np.array(value) for name, value in dataset.items()}
label = np.array(features.pop(label_name))
history = model.fit(x=features, y=label, batch_size=batch_size,
epochs=epochs, shuffle=True)
# The list of epochs is stored separately from the rest of history.
epochs = history.epoch
# To track the progression of training, gather a snapshot
# of the model's mean squared error at each epoch.
hist = pd.DataFrame(history.history)
mse = hist["mean_squared_error"]
return epochs, mse
```
## Call the functions to build and train a deep neural net
Okay, it is time to actually train the deep neural net. If time permits, experiment with the three hyperparameters to see if you can reduce the loss
against the test set.
```
# The following variables are the hyperparameters.
learning_rate = 0.01
epochs = 20
batch_size = 1000
# Specify the label
label_name = "median_house_value"
# Establish the model's topography.
my_model = create_model(learning_rate, my_feature_layer)
# Train the model on the normalized training set. We're passing the entire
# normalized training set, but the model will only use the features
# defined by the feature_layer.
epochs, mse = train_model(my_model, train_df_norm, epochs,
label_name, batch_size)
plot_the_loss_curve(epochs, mse)
# After building a model against the training set, test that model
# against the test set.
test_features = {name:np.array(value) for name, value in test_df_norm.items()}
test_label = np.array(test_features.pop(label_name)) # isolate the label
print("\n Evaluate the new model against the test set:")
my_model.evaluate(x = test_features, y = test_label, batch_size=batch_size)
```
## Task 1: Compare the two models
How did the deep neural net perform against the baseline linear regression model?
```
#@title Double-click to view a possible answer
# Assuming that the linear model converged and
# the deep neural net model also converged, please
# compare the test set loss for each.
# In our experiments, the loss of the deep neural
# network model was consistently lower than
# that of the linear regression model, which
# suggests that the deep neural network model
# will make better predictions than the
# linear regression model.
```
## Task 2: Optimize the deep neural network's topography
Experiment with the number of layers of the deep neural network and the number of nodes in each layer. Aim to achieve both of the following goals:
* Lower the loss against the test set.
* Minimize the overall number of nodes in the deep neural net.
The two goals may be in conflict.
```
#@title Double-click to view a possible answer
# Many answers are possible. We noticed the
# following trends:
# * Two layers outperformed one layer, but
#   three layers did not perform significantly
#   better than two layers.
# In other words, two layers seemed best.
# * Setting the topography as follows produced
# reasonably good results with relatively few
# nodes:
# * 10 nodes in the first layer.
# * 6 nodes in the second layer.
# As the number of nodes in each layer dropped
# below the preceding values, test loss increased.
# However, depending on your application, hardware
# constraints, and the relative pain inflicted
# by a less accurate model, a smaller network
# (for example, 6 nodes in the first layer and
# 4 nodes in the second layer) might be
# acceptable.
```
## Task 3: Regularize the deep neural network (if you have enough time)
Notice that the model's loss against the test set is *much higher* than the loss against the training set. In other words, the deep neural network is [overfitting](https://developers.google.com/machine-learning/glossary/#overfitting) to the data in the training set. To reduce overfitting, regularize the model. The course has suggested several different ways to regularize a model, including:
* [L1 regularization](https://developers.google.com/machine-learning/glossary/#L1_regularization)
* [L2 regularization](https://developers.google.com/machine-learning/glossary/#L2_regularization)
* [Dropout regularization](https://developers.google.com/machine-learning/glossary/#dropout_regularization)
Your task is to experiment with one or more regularization mechanisms to bring the test loss closer to the training loss (while still keeping test loss relatively low).
**Note:** When you add a regularization function to a model, you might need to tweak other hyperparameters.
### Implementing L1 or L2 regularization
To use L1 or L2 regularization on a hidden layer, specify the `kernel_regularizer` argument to [tf.keras.layers.Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense). Assign one of the following methods to this argument:
* `tf.keras.regularizers.l1` for L1 regularization
* `tf.keras.regularizers.l2` for L2 regularization
Each of the preceding methods takes an `l` parameter, which adjusts the [regularization rate](https://developers.google.com/machine-learning/glossary/#regularization_rate). Assign a decimal value between 0 and 1.0 to `l`; the higher the decimal, the greater the regularization. For example, the following applies L2 regularization at a strength of 0.01.
```
model.add(tf.keras.layers.Dense(units=20,
activation='relu',
kernel_regularizer=tf.keras.regularizers.l2(l=0.01),
name='Hidden1'))
```
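Conceptually, the regularizer adds a penalty term to the training loss. The following NumPy sketch illustrates the arithmetic only (it is not the Keras internals, and the weight values are hypothetical):

```python
import numpy as np

# Hypothetical layer weights (the "kernel") and a regularization rate.
w = np.array([[0.5, -1.0],
              [2.0, 0.25]])
l = 0.01

# L2 regularization adds l * sum(w^2) to the loss;
# L1 regularization would instead add l * sum(|w|).
l2_penalty = l * np.sum(w ** 2)
l1_penalty = l * np.sum(np.abs(w))
```

Larger weights are penalized quadratically under L2, which is why it pushes weights toward (but not exactly to) zero.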
### Implementing Dropout regularization
You implement dropout regularization as a separate layer in the topography. For example, the following code demonstrates how to add a dropout regularization layer between the first hidden layer and the second hidden layer:
```
model.add(tf.keras.layers.Dense(...))  # define the first hidden layer
model.add(tf.keras.layers.Dropout(rate=0.25))
model.add(tf.keras.layers.Dense(...))  # define the second hidden layer
```
The `rate` parameter to [tf.keras.layers.Dropout](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout) specifies the fraction of nodes that the model should drop out during training.
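To make the `rate` semantics concrete, here is a NumPy sketch of inverted dropout as it is applied during training (an illustration of the idea, not the Keras internals; at inference time the layer passes activations through unchanged):

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.25  # fraction of nodes to drop, as in Dropout(rate=0.25)

activations = np.ones(1000)  # stand-in for a hidden layer's outputs

# Zero out roughly `rate` of the nodes, then scale the survivors by
# 1 / (1 - rate) so the expected activation is unchanged.
mask = rng.random(activations.shape) >= rate
dropped = activations * mask / (1.0 - rate)
```

The rescaling is why you do not need to adjust anything at inference time: the expected value of each activation is the same with dropout on or off.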
```
#@title Double-click for a possible solution
# The following "solution" uses L2 regularization to bring training loss
# and test loss closer to each other. Many, many other solutions are possible.
def create_model(my_learning_rate, my_feature_layer):
"""Create and compile a deep neural net with L2 regularization."""
# Discard any pre-existing version of the model.
model = None
# Most simple tf.keras models are sequential.
model = tf.keras.models.Sequential()
# Add the layer containing the feature columns to the model.
model.add(my_feature_layer)
# Describe the topography of the model.
# Implement L2 regularization in the first hidden layer.
model.add(tf.keras.layers.Dense(units=20,
activation='relu',
kernel_regularizer=tf.keras.regularizers.l2(0.04),
name='Hidden1'))
# Implement L2 regularization in the second hidden layer.
model.add(tf.keras.layers.Dense(units=12,
activation='relu',
kernel_regularizer=tf.keras.regularizers.l2(0.04),
name='Hidden2'))
# Define the output layer.
model.add(tf.keras.layers.Dense(units=1,
name='Output'))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=my_learning_rate),
loss="mean_squared_error",
metrics=[tf.keras.metrics.MeanSquaredError()])
return model
# Call the new create_model function and the other (unchanged) functions.
# The following variables are the hyperparameters.
learning_rate = 0.007
epochs = 140
batch_size = 1000
label_name = "median_house_value"
# Establish the model's topography.
my_model = create_model(learning_rate, my_feature_layer)
# Train the model on the normalized training set.
epochs, mse = train_model(my_model, train_df_norm, epochs,
label_name, batch_size)
plot_the_loss_curve(epochs, mse)
test_features = {name:np.array(value) for name, value in test_df_norm.items()}
test_label = np.array(test_features.pop(label_name)) # isolate the label
print("\n Evaluate the new model against the test set:")
my_model.evaluate(x = test_features, y = test_label, batch_size=batch_size)
```
```
import numpy as np
from collections import Counter
class Mission:
def __init__(self, missionTitle, game_size, difficulty_modifier=0):
self.event_list = ["A pressurized line has ruptured",
"An air lock has broken",
"Electrical lines are damaged",
"Exposed wires have shorted",
"Important display panels are cracked",
"A large fire has broken out and is spreading",
"Interior heat shields have broken off",
"Vital systems are shutting down",
"Multiple electrical systems have failed",
"A critical drop in cabin pressure has occurred",
"A series of small explosions have caused damage",
"A power coupling has untethered",
"Falling debris has trapped crew members"]
self.location_list = ["cargo hold",
"medical bay",
"biology laboratory",
"service corridor",
"maintenance crawlspace",
"observatory",
"armory",
"cockpit",
"command bridge",
"crew's living quarters",
"logistics facility",
"dormitories",
"dining hall",
"tech laboratory",
"engine room alpha",
"engine room beta",
"fore passage",
"aft passageway",
"infirmary",
"passenger quarters",
"warpdrive containment unit",
"captain's quarters",
"long-distance communications hub",
"short-field communications console room"]
self.title = missionTitle
self.event = self.event_list[np.random.randint(0, len(self.event_list))]
self.location = self.location_list[np.random.randint(0, len(self.location_list))]
self.difficulty = np.random.randint(10,20)+difficulty_modifier
if self.difficulty>=10 and self.difficulty <14:
self.difficulty_text = "LOW"
self.max_party_size = 1
elif self.difficulty>=14 and self.difficulty <17:
self.difficulty_text = "MODERATE"
self.max_party_size = 2
elif self.difficulty>=17:
self.difficulty_text = "HIGH"
self.max_party_size = 3
self.missionBrief = f"{self.event} in the {self.location}. Threat level for this mission is set to: {self.difficulty_text}. I would advise sending at least {self.max_party_size} crew members on this mission"
self.partyMembers = []
self.missionChecks = []
self.missionResult = 'Mission Incomplete'
self.missionParameters = {'missionTitle' : self.title,
'missionSummary' : self.missionBrief,
'missionEvent': self.event,
'missionLocation' : self.location,
'missionDifficulty' : self.difficulty,
'missionParty' : self.partyMembers,
'missionResult' : self.missionResult,
'missionChecks' : self.missionChecks}
def updateMissionParam(self):
self.missionParameters = {'missionTitle' : self.title,
'missionSummary' : self.missionBrief,
'missionEvent': self.event,
'missionLocation' : self.location,
'missionDifficulty' : self.difficulty,
'missionParty' : self.partyMembers,
'missionResult' : self.missionResult,
'missionChecks' : self.missionChecks}
def addPartyMembers(self, party_members):
self.partyMembers = party_members
self.updateMissionParam()
def updateMissionResult(self, checks):
self.missionChecks = checks
results = [c >= self.difficulty for c in checks]
if 99 in checks:
result = "Failed"
elif True in results:
result = "Passed"
else:
result = 'Failed'
self.missionResult = result
self.updateMissionParam()
class GameParameters:
def __init__(self):
self.playerList = []
self.number_players = len(self.playerList)
self.Missions = []
self.missionLog = {}
self.mission_titles = [f"Mission {n}" for n in np.arange(1,101)]
def createNewMission(self):
mission_object = Mission(self.mission_titles.pop(0), self.number_players)
self.Missions.append(mission_object)
# self.missionLog[f"{mission_object.title}"] = mission_object.missionParameters
def addPlayers(self):
return
def generateMissionLog(self):
for mission in self.Missions:
self.missionLog[mission.title] = {
"Mission Summary" : mission.missionBrief,
"Threat Level" : mission.difficulty_text,
"Mission Result" : mission.missionResult}
def generateStatusReport(self):
self.generateMissionLog()
self.statusReport = Counter([v['Mission Result'] for k,v in self.missionLog.items()])
# class criticalMissions: #high difficulty story beats with game impacting consequences
# def __init__(self, game_size):
# self.game_size = game_size
# # If mission fails - future missions get more difficult
# Reavers and Raiders have been spotted on the radar systems. Get the cloaking system up in time
# Life Support Shutdown and diminishing oxygen supply
# Gravitational Anomaly
# an alien lifeform has been detected onboard
# Power has been cut and the ship goes dark
game_size = 7
random_player = "***THIS GUY***"
criticalMission_dict = {
"Raiders" : {
'Intro_Text' : """Your distress beacon has been picked up by a nearby ship! Out of the window, you notice a light moving against the backdrop of stars.
However, as the craft draws near, you see the crimson red hull grotesquely decorated with the corpses and bodies of starbound travelers.
The Reaver ship begins to circle. . . """,
'Mission_Text' : """Shut down all power on the ship and go dark. The enemy ship is a long way out, perhaps you can go unseen. . . """,
'Difficulty' : round(game_size * (np.random.randint(3,8) / 10)),
'Passed' : "You were able to successfully avoid detection. Your dark ship blends into the dark backdrop of space as the Reavers continue moving and slip out of view. You take a moment of quiet and much needed rest before turning your attention to other matters",
'Failed' : "It is too late! Harpoons puncture your hull and raiders begin to board your ship. Getting out of this mess was hard enough but now you have to do it while fighting pirates? So be it . . ",
},
"LifeSupport" : {
'Intro_Text' : """As you discuss the best course of action with your crew, the air begins to feel thin as you struggle to breathe. Multiple electrical shortages have caused the life support systems and
carbon-monoxide scrubbers to malfunction. You begin to feel unsteady and light-headed as hypoxia sets in. Time and oxygen are in short supply. . . """,
'Mission_Text' : """Find a way to stabilize your crew and restore oxygen to the cabin """,
'Difficulty' : round(game_size * (np.random.randint(3,6) / 10)),
'Passed' : "You manage to bring the scrubbers back online and restore oxygen to the cabin. The crew's breathing steadies, and the light-headedness fades as you turn your attention to other matters",
'Failed' : f"{random_player} succumbs to his injuries. However, some of the crew was able to find oxygen masks in one of the air locks. While you can breathe, seeing out of the foggy masks is difficult in the ship's dark passageways . . ",
},
"Darkness" : {
'Intro_Text' : """The expansive darkness of space has never bothered you. At times, the solitude was even comforting. However, the primary generator has locked up and backup can only provide enough to
keep essential systems online. As you make your way through the corridors, your vision limited to the dull yellow haze from your lighter, the darkness of space feels oppressively close . . . """,
'Mission_Text' : """Get the primary generator online and restore power to the ship """,
'Difficulty' : round(game_size * (np.random.randint(1,4) / 10)),
'Passed' : "Sometimes, a problem requires a methodical resolution. Other times, hitting pipes with a hammer gets the job done. The hum of the turbine restores light to the darkened hallways",
'Failed' : "Dark or light there is a job that needs doing if we are going to get out of this. . .",
}
}
criticalMission_dict['Raiders']['Passed']
np.random.randint(2,7) / 10
Mission_Consequences = [
"The ship begins to break apart",
"A fire begins to spread through the halls of the lower decks as one by one crucial systems fall offline",
"More and more of the halls and passages of your ship become unusable. You begin to feel the crushing expanse of space collapsing in",
"Power is becoming intermittent. As more and more of the ship falls into darkness, the silence of space becomes deafening",
"Life support systems are shutting down. You can feel the air growing thin as oxygen becomes a valuable resource",
"You look out a nearby window and wonder if your distress beacon has been picked up by any nearby ships"
]
# utility_functions
def genChecks(num):
return list(np.random.randint(1,20,num))
# Players Join Game1
players = ['Jeff', 'Sherley', 'Britta', 'Pierce', 'Annie', 'Abed', 'Troy']
Game1 = GameParameters()
Game1.playerList = players
Game1.number_players = len(players)
print(f"Beginning Game with {players}")
# Go on Mission:
Game1.createNewMission()
# Group receives crisis alert
print(Game1.Missions[-1].missionBrief)
# Group votes on mission assignees
mission_party_size = Game1.Missions[-1].max_party_size
party_votes = 'Jeff Jeff Jeff Troy Abed Troy Annie Annie Jeff Jeff Britta Annie'.split()
mission_assignees = list(Counter(party_votes).keys())[:mission_party_size]
print(f"{mission_assignees} will go on this mission")
# assignees roll a 1d20
# Results are compiled into array
mission_checks = genChecks(mission_party_size)
print()
print(f"Secret Diff: {Game1.Missions[-1].difficulty}")
print()
# self.updateMissionResult() with array
Game1.Missions[-1].updateMissionResult(mission_checks)
# assignees return to the communication hub
# everyone is notified about result
print(f"{mission_assignees} rolled {mission_checks}")
print(f"The Mission {Game1.Missions[-1].missionResult}")
Game1.generateMissionLog()
Game1.missionLog
Game1.generateStatusReport()
Game1.statusReport
# Players Vote on who they think is BadGuy
#Players Vote to Punish BadGuy
#Players either succeed or fail
# If player
```
## Wavelets
An increasingly popular family of basis functions is called **wavelets**. By construction, wavelets are localized in both frequency and time domains. Individual wavelets are specified by a set of wavelet filter coefficients. Given a wavelet, a complete
orthonormal set of basis functions can be constructed by scalings and translations. Different wavelet families trade the localization of a wavelet with its smoothness.
### Wavelet transform of Gaussian Noise
Below we have an example using a particular wavelet to compute a wavelet PSD as a
function of time $t_0$ and frequency $f_0$. The wavelet used is of the form
$$w(t\,|\,t_0,f_0,Q) = A\exp[i2\pi f_0 (t-t_0)]\,\exp[-f_0^2(t-t_0)^2/Q^2]$$
where $t_0$ is the central time, $f_0$ is the central frequency, and the dimensionless parameter $Q$ controls the width of the frequency window.
The Fourier transform of this form is
$$W(f\,|\,t_0,f_0,Q)=\left(\frac{\pi Q^2}{f_0^2}\right)^{1/2} \exp(-i2\pi f t_0)\, \exp\left[\frac{-\pi^2 Q^2(f-f_0)^2}{f_0^2}\right]$$
Note that the form given by the above equations is not technically a wavelet because it
does not meet the admissibility criterion (the equivalent of orthogonality in Fourier transforms).
This form is closely related to a true wavelet, the *Morlet wavelet*, through a simple scaling and offset. Therefore, these equations should probably be referred to as "matched filters" rather than "wavelets".
However, these functions display quite nicely one
main property of wavelets: the localization of power in both time and frequency. For this reason,
we will refer to these functions as "wavelets," and explore their ability to localize frequency signals.
#### Input signal
We take localized Gaussian noise as the input signal, as shown below.
```
import numpy as np
from matplotlib import pyplot as plt
from astroML.fourier import sinegauss, wavelet_PSD, FT_continuous, IFT_continuous
from astroML.plotting import setup_text_plots
setup_text_plots(usetex=True)
# Sample the function: localized noise
np.random.seed(0)
N = 1024
t = np.linspace(-5, 5, N)
x = np.ones(len(t))
h = np.random.normal(0, 1, len(t))
h *= np.exp(-0.5 * (t / 0.5) ** 2)
# Show signal
fig = plt.figure(figsize=(6, 2))
fig.subplots_adjust(hspace=0.05, left=0.12, right=0.95, bottom=0.08, top=0.95)
ax = fig.add_subplot(111)
ax.plot(t, h, '-k', lw=1)
ax.text(0.02, 0.95, ("Input Signal:\n"
"Localized Gaussian noise"),
ha='left', va='top', transform=ax.transAxes)
ax.set_xlim(-4, 4)
ax.set_ylim(-2.9, 2.9)
ax.set_ylabel('$h(t)$')
```
#### Compute wavelet
We compute the wavelet from the sample data using the *sinegauss* function in *astroML.fourier*.
Here we take $Q=1.0$ to control the width of the frequency window.
In the plot, the solid and dashed lines show the real and imaginary parts, respectively.
```
# Compute an example wavelet
W = sinegauss(t, 0, 1.5, Q=1.0)
# Show the example wavelet
fig = plt.figure(figsize=(6, 2))
fig.subplots_adjust(hspace=0.05, left=0.12, right=0.95, bottom=0.08, top=0.95)
ax = fig.add_subplot(111)
ax.plot(t, W.real, '-k', label='real part', lw=1)
ax.plot(t, W.imag, '--k', label='imag part', lw=1)
ax.text(0.02, 0.95, ("Example Wavelet\n"
"$t_0 = 0$, $f_0=1.5$, $Q=1.0$"),
ha='left', va='top', transform=ax.transAxes)
ax.text(0.98, 0.05,
(r"$w(t; t_0, f_0, Q) = e^{-[f_0 (t - t_0) / Q]^2}"
"e^{2 \pi i f_0 (t - t_0)}$"),
ha='right', va='bottom', transform=ax.transAxes)
ax.legend(loc=1)
ax.set_xlim(-4, 4)
ax.set_ylim(-1.4, 1.4)
ax.set_ylabel('$w(t; t_0, f_0, Q)$')
```
#### Compute PSD
The wavelet PSD (power spectral density) is defined by $\mathrm{PSD}_w(f_0, t_0; Q) = |H_w(t_0; f_0, Q)|^2$. Unlike
the typical Fourier-transform PSD, the wavelet PSD allows detection of frequency information
which is localized in time.
Here we compute the wavelet PSD of the sampled signal using the *wavelet_PSD* function in *astroML.fourier*.
The plot shows the PSD as a function of the frequency $f_0$ and the time $t_0$, for $Q = 1.0$.
```
# Compute the wavelet PSD
f0 = np.linspace(0.5, 7.5, 100)
wPSD = wavelet_PSD(t, h, f0, Q=1.0)
# Plot the results
fig = plt.figure(figsize=(6, 2))
fig.subplots_adjust(hspace=0.05, left=0.12, right=0.95, bottom=0.08, top=0.95)
# Third panel: the spectrogram
ax = plt.subplot(111)
ax.imshow(wPSD, origin='lower', aspect='auto',
extent=[t[0], t[-1], f0[0], f0[-1]])
ax.text(0.02, 0.95, ("Wavelet PSD"), color='w',
ha='left', va='top', transform=ax.transAxes)
ax.set_xlim(-4, 4)
ax.set_ylim(0.5, 7.5)
ax.set_xlabel('$t$')
ax.set_ylabel('$f_0$')
```
### Wavelet transform of a Noisy Spike
Here we apply the wavelet transform when the input data is a noisy spike rather than localized Gaussian noise.
#### Define functions and construct the input signal
This example uses a Gaussian spike in the presence of white (Gaussian) noise as the input. The input signal is shown below.
```
def wavelet(t, t0, f0, Q):
return (np.exp(-(f0 / Q * (t - t0)) ** 2)
* np.exp(2j * np.pi * f0 * (t - t0)))
def wavelet_FT(f, t0, f0, Q):
# this is its fourier transform using
# H(f) = integral[ h(t) exp(-2pi i f t) dt]
return (np.sqrt(np.pi) * Q / f0
* np.exp(-2j * np.pi * f * t0)
* np.exp(-(np.pi * (f - f0) * Q / f0) ** 2))
def check_funcs(t0=1, f0=2, Q=3):
t = np.linspace(-5, 5, 10000)
h = wavelet(t, t0, f0, Q)
f, H = FT_continuous(t, h)
assert np.allclose(H, wavelet_FT(f, t0, f0, Q))
# Create the simulated dataset
np.random.seed(5)
t = np.linspace(-40, 40, 2001)[:-1]
h = np.exp(-0.5 * ((t - 20.) / 1.0) ** 2)
hN = h + np.random.normal(0, 0.5, size=h.shape)
# Plot the results
fig = plt.figure(figsize=(6, 2))
fig.subplots_adjust(hspace=0.05, left=0.12, right=0.95, bottom=0.08, top=0.95)
# plot the signal
ax = fig.add_subplot(111)
ax.plot(t, hN, '-k', lw=1)
ax.text(0.02, 0.95, ("Input Signal:\n"
"Localized spike plus noise"),
ha='left', va='top', transform=ax.transAxes)
ax.set_xlim(-40, 40)
ax.set_ylim(-1.2, 2.2)
ax.set_ylabel('$h(t)$')
```
#### Compute wavelet
Compute the convolution via the continuous Fourier transform. This is more exact than using the discrete transform, because we have an analytic expression for the FT of the wavelet. The wavelet transform applied to data h(t) is given by
$$H_w(t_0;f_0,Q)=\int_{-\infty}^{\infty} h(t)\,w^*(t\,|\,t_0,f_0,Q)\,dt$$
By the convolution theorem, we can write the Fourier transform
of $H_w$ as the pointwise product of the Fourier transforms of $h(t)$ and $w^*(t; t_0, f_0, Q)$. The first can
be approximated using the discrete Fourier transform as shown in appendix E of the textbook; the second can be found using the analytic formula for $W(f)$ in the previous section. This allows us to quickly evaluate $H_w$ as a
function of $t_0$ and $f_0$, using two $O(N \log N)$ fast Fourier transforms.
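The identity underlying this trick can be verified with plain NumPy: the inverse FFT of the pointwise product of two spectra equals the circular convolution of the two signals (a sketch of the convolution theorem itself, not of astroML's `FT_continuous`/`IFT_continuous`, which additionally handle the continuous-transform phase factors):

```python
import numpy as np

rng = np.random.default_rng(1)
h = rng.normal(size=256)
w = rng.normal(size=256)

# Direct circular convolution, (h * w)[k] = sum_n h[n] w[(k - n) mod N]: O(N^2).
direct = np.array([np.sum(h * np.roll(w[::-1], k + 1)) for k in range(256)])

# Via the convolution theorem: two forward FFTs and one inverse FFT, O(N log N).
via_fft = np.fft.ifft(np.fft.fft(h) * np.fft.fft(w)).real

assert np.allclose(direct, via_fft)
```

For the wavelet transform, one of the two forward FFTs is replaced by the analytic expression for $W(f)$, which is both faster and more accurate.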
```
Q = 0.3
f0 = 2 ** np.linspace(-3, -1, 100)
f, H = FT_continuous(t, hN)
# Plot the results
fig = plt.figure(figsize=(6, 2))
fig.subplots_adjust(hspace=0.05, left=0.12, right=0.95, bottom=0.08, top=0.95)
# plot the wavelet
ax = fig.add_subplot(111)
W = wavelet(t, 0, 0.125, Q)
ax.plot(t, W.real, '-k', label='real part', lw=1)
ax.plot(t, W.imag, '--k', label='imag part', lw=1)
ax.legend(loc=1)
ax.text(0.02, 0.95, ("Example Wavelet\n"
"$t_0 = 0$, $f_0=1/8$, $Q=0.3$"),
ha='left', va='top', transform=ax.transAxes)
ax.text(0.98, 0.05,
(r"$w(t; t_0, f_0, Q) = e^{-[f_0 (t - t_0) / Q]^2}"
"e^{2 \pi i f_0 (t - t_0)}$"),
ha='right', va='bottom', transform=ax.transAxes)
ax.set_xlim(-40, 40)
ax.set_ylim(-1.4, 1.4)
ax.set_ylabel('$w(t; t_0, f_0, Q)$')
```
#### Compute spectrogram
We compute the spectrogram using *IFT_continuous* in *astroML.fourier*.
The plot below shows the power spectral density as a function of the frequency $f_0$ and the time $t_0$, for Q = 0.3.
```
W = np.conj(wavelet_FT(f, 0, f0[:, None], Q))
t, HW = IFT_continuous(f, H * W)
# Plot the results
fig = plt.figure(figsize=(6, 2))
fig.subplots_adjust(hspace=0.05, left=0.12, right=0.95, bottom=0.08, top=0.95)
# plot the spectrogram
ax = fig.add_subplot(111)
ax.imshow(abs(HW) ** 2, origin='lower', aspect='auto', cmap=plt.cm.binary,
extent=[t[0], t[-1], np.log2(f0)[0], np.log2(f0)[-1]])
ax.set_xlim(-40, 40)
ax.text(0.02, 0.95, ("Wavelet PSD"), color='w',
ha='left', va='top', transform=ax.transAxes)
ax.set_ylim(np.log2(f0)[0], np.log2(f0)[-1])
ax.set_xlabel('$t$')
ax.set_ylabel('$f_0$')
ax.yaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_formatter(plt.FuncFormatter(lambda x, *args: ("1/%i"
% (2 ** -x))))
```
### Examples of Wavelets
The resulting wavelets vary with the parameters $Q$ and $f_0$.
Here we take several different values of $Q$ and $f_0$ and show the resulting wavelets.
```
# Set up the wavelets
t0 = 0
t = np.linspace(-0.4, 0.4, 10000)
f0 = np.array([5, 5, 10, 10])
Q = np.array([1, 0.5, 1, 0.5])
# compute wavelets all at once
W = sinegauss(t, t0, f0[:, None], Q[:, None])
```
Solid lines show the real part and dashed lines show the imaginary part.
```
# Plot the wavelets
fig = plt.figure(figsize=(5, 3.75))
fig.subplots_adjust(hspace=0.05, wspace=0.05)
# in each panel, plot and label a different wavelet
for i in range(4):
ax = fig.add_subplot(221 + i)
ax.plot(t, W[i].real, '-k')
ax.plot(t, W[i].imag, '--k')
ax.text(0.04, 0.95, "$f_0 = %i$\n$Q = %.1f$" % (f0[i], Q[i]),
ha='left', va='top', transform=ax.transAxes)
ax.set_ylim(-1.2, 1.2)
ax.set_xlim(-0.35, 0.35)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.2))
if i in (0, 1):
ax.xaxis.set_major_formatter(plt.NullFormatter())
else:
ax.set_xlabel('$t$')
if i in (1, 3):
ax.yaxis.set_major_formatter(plt.NullFormatter())
else:
ax.set_ylabel('$w(t)$')
```
# Generative Adversarial Network in Tensorflow
**Generative Adversarial Networks**, introduced by Ian Goodfellow in 2014, are neural nets we can train to _produce_ new images (or other kinds of data) that look as though they came from our true data distribution. In this notebook, we'll implement a small GAN for generating images that look as though they come from the MNIST dataset.
The key insight behind the GAN is to pit two neural networks against each other. On the one hand is the **Generator**, a neural network that takes random noise as input and produces an image as output. On the other hand is the **Discriminator**, which takes in an image and classifies it as real (from MNIST) or fake (from our Generator). During training, we alternate between training the Generator to fool the Discriminator, and training the Discriminator to call the Generator's bluff.
Implementing a GAN in Tensorflow will give you practice turning more involved models into working code, and is also a great showcase for Tensorflow's **variable scope** feature. (Variable scope has made cameos in previous tutorials, but we'll discuss it in a bit more depth here. If you want to see how variable scope is used in TensorFlow Slim, definitely go revisit Kevin Liang's VAE tutorial!)
## Imports
```
%matplotlib inline
import tensorflow as tf
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import time
# Use if running on a GPU
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.log_device_placement = True
```
## Loading the data
As in previous examples, we'll use MNIST, because it's a small and easy-to-use dataset that comes bundled with Tensorflow.
```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
```
## Utility functions
Let's define some utility functions that will help us quickly construct layers for use in our model. There are two things worth noting here:
1. Instead of `tf.Variable`, we use `tf.get_variable`.
The reason for this is a bit subtle, and you may want to skip this and come back to it once you've seen the rest of the code. Here's the basic explanation. Later on in this notebook, we will call `fully_connected_layer` from a couple different places. Sometimes, we will want _new variables_ to be added to the graph, because we are creating an entirely new layer of our network. Other times, however, we will want to use the same weights as an already-existing layer, but acting on different inputs.
For example, the Discriminator network will appear _twice_ in our computational graph; in one case, the input neurons will be connected to the "real data" placeholder (which we will feed MNIST images), and in the other, they will be connected to the output of the Generator. Although these networks form two separate parts of our computational graph, we want them to share the same weights: conceptually, there is _one_ Discriminator function that gets applied twice, not two different functions altogether. Since `tf.Variable` _always_ creates a new variable when called, it would not be appropriate for use here.
Variable scoping solves this problem. Whenever we are adding nodes to a graph, we are operating within a _scope_. Scopes can be named, and you can create a new scope using `tf.variable_scope('name')` (more on this later). When a scope is open, it can optionally be in _reuse mode_. The result of calling `tf.get_variable` depends on whether you are in reuse mode or not. If not (this is the default), `tf.get_variable` will create a new variable, or cause an error if a variable by the same name already exists in the current scope. If you _are_ in reuse mode, the behavior is the opposite: `tf.get_variable` will look up and return an existing variable (with the specified name) within your scope, or throw an error if it doesn't exist. By carefully controlling our scopes later on, we can create exactly the graph we want, with variables shared across the graph where appropriate.
2. The `variables_from_scope` function lists all variables created within a given scope. This will be useful later, when we want to update all "discriminator" variables, but no "generator" variables, or vice versa.
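As a toy illustration of the create-vs-reuse behavior described in point 1 (plain Python, not the TensorFlow API):

```python
# Toy illustration (plain Python, NOT the TensorFlow API) of the
# create-vs-reuse behavior of `tf.get_variable` described above.
class ToyScope:
    def __init__(self):
        self._vars = {}
        self.reuse = False

    def get_variable(self, name, initializer=lambda: 0.0):
        if self.reuse:
            # reuse mode: look up an existing variable, or fail
            if name not in self._vars:
                raise KeyError("no variable %r to reuse" % name)
            return self._vars[name]
        # default mode: create a new variable, or fail if it already exists
        if name in self._vars:
            raise ValueError("variable %r already exists" % name)
        self._vars[name] = initializer()
        return self._vars[name]

scope = ToyScope()
w1 = scope.get_variable("weights", initializer=lambda: [0.1, 0.2])
scope.reuse = True
w2 = scope.get_variable("weights")  # returns the very same object
```

The real `tf.get_variable` behaves analogously, with scope names prefixed onto variable names.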
```
def shape(tensor):
"""
Get the shape of a tensor. This is a graph-construction-time operation,
meaning that it runs when the graph is built, not when it is run.
As a result, it cannot know the size of dimensions (such as a placeholder's
batch dimension) that are only determined at run time by the feed_dict.
"""
return tuple([d.value for d in tensor.get_shape()])
def fully_connected_layer(in_tensor, out_units, activation_function=tf.nn.relu):
"""
Add a fully connected layer to the default graph, taking as input `in_tensor`, and
creating a hidden layer of `out_units` neurons. This should be called within a unique variable
scope. Creates variables W and b, and computes activation_function(in * W + b).
"""
_, num_features = shape(in_tensor)
W = tf.get_variable("weights", [num_features, out_units], initializer=tf.truncated_normal_initializer(stddev=0.1))
b = tf.get_variable("biases", [out_units], initializer=tf.constant_initializer(0.1))
return activation_function(tf.matmul(in_tensor, W) + b)
def variables_from_scope(scope_name):
"""
Returns a list of all variables in a given scope. This is useful when
you'd like to back-propagate only to weights in one part of the network
(in our case, the generator or the discriminator).
"""
return tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=scope_name)
```
We'll also provide a simple function for displaying a few 28×28-pixel images. This will help us understand the progress of our GAN as it trains; we'll use it to visualize the generated 'fake digit' images.
```
def visualize_row(images, img_width=28, cmap='gray'):
"""
Takes in a tensor of images of given width, and displays them in a column
in a plot, using `cmap` to map from numbers to colors.
"""
im = np.reshape(images, [-1, img_width])
plt.figure()
plt.imshow(im, cmap=cmap)
plt.show()
```
## Generator
A GAN is made up of two smaller networks: a generator and a discriminator. The generator is responsible for sampling images from a distribution that we hope will get closer and closer, as we train, to the real data distribution.
Neural networks are deterministic, so in order to sample a new image from the generator, we first create some random noise `z` (in our case, `z` will be a 100-dimensional uniform random variable) and then feed that noise to the network. You can think of `z` as being a latent, low-dimensional representation of some image `G(z)`, though in a vanilla GAN, it is usually difficult to interpret `z`'s components in a meaningful way.
Our generator is a dead-simple multi-layer perceptron (feed-forward network), with 128 hidden units.
```
def generator(z):
"""
Given random noise `z`, use a simple MLP with 128 hidden units to generate a
sample image (784 values between 0 and 1, enforced with the sigmoid function).
"""
with tf.variable_scope("fc1"):
fc1 = fully_connected_layer(z, 128)
with tf.variable_scope("fc2"):
return fully_connected_layer(fc1, 784, activation_function=tf.sigmoid)
```
## Discriminator
Although it isn't necessary, it makes some sense for our discriminator to mirror the generator's architecture, as we do here. The discriminator takes in an image (perhaps a real one from the MNIST dataset, perhaps a fake one from our generator), and attempts to classify it as real (1) or fake (0). Our architecture is again a simple MLP, taking 784 pixels down to 128 hidden units, and finally down to a probability.
```
def discriminator(x):
"""
This discriminator network takes in a tensor with shape [batch, 784], and classifies
each example image as real or fake. The network it uses is quite simple: a fully connected
layer with ReLU activation takes us down to 128 dimensions, then we collapse that to 1 number
in [0, 1] using a fully-connected layer with sigmoid activation. The result can be interpreted
as a probability, the discriminator's strength-of-belief that a sample is from the
real data distribution.
"""
with tf.variable_scope("fc1"):
fc1 = fully_connected_layer(x, 128)
with tf.variable_scope("fc2"):
return fully_connected_layer(fc1, 1, activation_function=tf.sigmoid)
```
## GAN
Given a generator and discriminator, we can now set up the GAN's computational graph.
We use Tensorflow's variable scope feature for two purposes.
1. First, it helps separate the variables used by the generator and by the discriminator; this is important, because when training, we want to alternate between updating each set of variables according to a different objective.
2. Second, scoping helps us reuse the same set of discriminator weights both for the operations we perform on _real_ images and for those performed on _fake_ images. To achieve this, after calling `discriminator` for the first time (and creating these weight variables), we tell our current scope to `reuse_variables()`, meaning that on our next call to `discriminator`, existing variables will be reused rather than creating new ones.
```
def gan(batch_size, z_dim):
"""
Given some details about the training procedure (batch size, dimension of z),
this function sets up the rest of the computational graph for the GAN.
It returns a dictionary containing six ops/tensors: `train_d` and `train_g`, the
optimization steps for the discriminator and generator, `real_data` and `noise`,
two placeholders that should be fed in during training, `d_loss`, the discriminator loss
(useful for estimating progress toward convergence), and `fake_data`, which can be
evaluated (with noise in the feed_dict) to sample from the generator's distribution.
"""
z = tf.placeholder(tf.float32, [batch_size, z_dim], name='z')
x = tf.placeholder(tf.float32, [batch_size, 784], name='x')
with tf.variable_scope('generator'):
fake_x = generator(z)
with tf.variable_scope('discriminator') as scope:
d_on_real = discriminator(x)
scope.reuse_variables()
d_on_fake = discriminator(fake_x)
g_loss = -tf.reduce_mean(tf.log(d_on_fake))
d_loss = -tf.reduce_mean(tf.log(d_on_real) + tf.log(1. - d_on_fake))
optimize_d = tf.train.AdamOptimizer().minimize(d_loss, var_list=variables_from_scope("discriminator"))
optimize_g = tf.train.AdamOptimizer().minimize(g_loss, var_list=variables_from_scope("generator"))
return {'train_d': optimize_d,
'train_g': optimize_g,
'd_loss': d_loss,
'fake_data': fake_x,
'real_data': x,
'noise': z}
```
## Training a GAN
Our training procedure is a bit more involved than in past demos. Here are the main differences:
1. Each iteration, we first train the generator, then (separately) the discriminator.
2. Each iteration, we need to feed in a batch of images, just as in previous notebooks. But we also need a batch of noise samples. For this, we use Numpy's `np.random.uniform` function.
3. Every 1000 iterations, we log some data to the console and visualize a few samples from our generator.
```
def train_gan(iterations, batch_size=50, z_dim=100):
"""
Construct and train the GAN.
"""
model = gan(batch_size=batch_size, z_dim=z_dim)
def make_noise():
return np.random.uniform(-1.0, 1.0, [batch_size, z_dim])
def next_feed_dict():
return {model['real_data']: mnist.train.next_batch(batch_size)[0],
model['noise']: make_noise()}
initialize_all = tf.global_variables_initializer()
with tf.Session(config=config) as sess:
sess.run(initialize_all)
start_time = time.time()
for t in range(iterations):
sess.run(model['train_g'], feed_dict=next_feed_dict())
_, d_loss = sess.run([model['train_d'], model['d_loss']], feed_dict=next_feed_dict())
if t % 1000 == 0 or t+1 == iterations:
fake_data = sess.run(model['fake_data'], feed_dict={model['noise']: make_noise()})
print('Iter [%8d] Time [%5.4f] d_loss [%.4f]' % (t, time.time() - start_time, d_loss))
visualize_row(fake_data[:5])
```
## Moment of truth
It's time to run our GAN! Watch as it learns to draw recognizable digits in about three minutes.
```
train_gan(25000)
```
---
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
from copy import deepcopy
import pickle as pkl
from ex_cosmology import p
from matplotlib import gridspec
import matplotlib.patches as mpatches
# adaptive-wavelets modules
import awave
from awave.data.cosmology import get_dataloader, load_pretrained_model
from awave.data.cosmology import get_validation
from awave.utils.misc import tuple_to_tensor
from awave.trim import TrimModel
from umap import UMAP
# evaluation
from eval_cosmology import load_results, rmse_bootstrap, extract_patches
from peak_counting import PeakCount
dirs = [
"db5_saliency_warmstart_seed=1_new"
]
dics, results, models = load_results(dirs)
# get dataloader and model
train_loader, val_loader = get_dataloader(p.data_path,
img_size=p.img_size[2],
split_train_test=True,
batch_size=p.batch_size)
model = load_pretrained_model(model_name='resnet18', device=device, data_path=p.model_path)
# validation dataset
test_loader = get_validation(p.data_path,
img_size=p.img_size[2],
batch_size=p.batch_size)
```
# Optimal wavelet
```
# # DB5
wt_o = awave.DWT2d(wave='db5', mode='zero', J=4,
init_factor=1, noise_factor=0, const_factor=0)
# load optimal wavelet for prediction on heldout dataset
bds = np.linspace(0.015, 0.035, 5)
scores = pkl.load(open('results/scores_new.pkl', 'rb'))
row, col = np.unravel_index(np.argmin(scores, axis=None), scores.shape)
bd_opt = bds[row]
idx1, idx2 = list(dics[0]['wt'].keys())[col + 1] ## NEED TO CHECK
# idx2 = 4
wt = dics[0]['wt'][(idx1, idx2)].to('cpu')
X1_batch = []
X2_batch = []
y_test = []
for x,y in test_loader:
X1_batch.append(wt(x))
X2_batch.append(wt_o(x))
y_test.append(y)
X1 = tuple()
X2 = tuple()
for idx in range(5):
a = [x[idx] for x in X1_batch]
X1 += (torch.cat(a, dim=0),)
b = [x[idx] for x in X2_batch]
X2 += (torch.cat(b, dim=0),)
y_test = torch.cat([a[:,1] for a in y_test], dim=0)
# umap
umap = UMAP(n_components=2, random_state=42)
```
# UMAP
```
# run UMAP on the approximation coefficients and three detail-coefficient levels
batch_size = 2000
embeddings = []
for idx in range(4):
    d1 = X1[idx].detach().cpu().numpy().squeeze().reshape(batch_size, -1)
    d2 = X2[idx].detach().cpu().numpy().squeeze().reshape(batch_size, -1)
    d = np.concatenate((d1, d2), axis=0)
    embeddings.append(umap.fit_transform(d))
embedding, embedding2, embedding3, embedding4 = embeddings
fig = plt.figure(constrained_layout=True, dpi=200, figsize=(4,4))
spec = gridspec.GridSpec(ncols=2, nrows=2, figure=fig)
colors = ['red', 'lightblue']
n = batch_size
# panel 1: approximation coefficients
f_ax1 = fig.add_subplot(spec[0, 0])
h1 = plt.scatter(embedding[:n, 0], embedding[:n, 1], marker=".", s=5, alpha=0.2) #c=y_test, cmap='Blues')
h2 = plt.scatter(embedding[n:, 0], embedding[n:, 1], marker=".", s=5, c='pink', alpha=0.2) #y_test, cmap='Reds')
plt.gca().set_aspect('equal', 'datalim')
blue_patch = mpatches.Patch(color='lightblue', label='AWD')
red_patch = mpatches.Patch(color='red', label='DB5')
plt.legend((h1, h2),
('AWD', 'DB5'),
scatterpoints=1,
loc='lower left',
ncol=3,
fontsize=8,
handles=(blue_patch, red_patch))
plt.title("Approx. Coef.", fontsize=6)
plt.xticks([])
plt.yticks([])
plt.xlabel('UMAP Dim 1', fontsize=6)
plt.ylabel('UMAP Dim 2', fontsize=6)
# panel 2: detail coefficients, level 1
f_ax2 = fig.add_subplot(spec[0, 1])
b1 = plt.scatter(embedding2[:n, 0], embedding2[:n, 1], marker=".", s=5, alpha=0.2) #c=y_test, cmap='Blues')
b2 = plt.scatter(embedding2[n:, 0], embedding2[n:, 1], marker=".", s=5, c='pink', alpha=0.2) #y_test, cmap='Reds')
plt.gca().set_aspect('equal', 'datalim')
# plt.legend()
plt.title("Detail Coef. Level 1", fontsize=6)
plt.xticks([])
plt.yticks([])
plt.xlabel('UMAP Dim 1', fontsize=6)
plt.ylabel('UMAP Dim 2', fontsize=6)
# plt.colorbar(b1)
# panel 3: detail coefficients, level 2
f_ax3 = fig.add_subplot(spec[1, 0])
plt.scatter(embedding3[:n, 0], embedding3[:n, 1], marker=".", s=5, alpha=0.2) #c=y_test, cmap='Blues')
plt.scatter(embedding3[n:, 0], embedding3[n:, 1], marker=".", s=5, c='pink', alpha=0.2) #y_test, cmap='Reds')
plt.gca().set_aspect('equal', 'datalim')
# plt.legend()
plt.title("Detail Coef. Level 2", fontsize=6)
plt.xticks([])
plt.yticks([])
plt.xlabel('UMAP Dim 1', fontsize=6)
plt.ylabel('UMAP Dim 2', fontsize=6)
# panel 4: detail coefficients, level 3
f_ax4 = fig.add_subplot(spec[1, 1])
r1 = plt.scatter(embedding4[:n, 0], embedding4[:n, 1], marker=".", s=5, alpha=0.2) #c=y_test, cmap='Blues')
r2 = plt.scatter(embedding4[n:, 0], embedding4[n:, 1], marker=".", s=5, c='pink', alpha=0.2) #y_test, cmap='Reds')
plt.gca().set_aspect('equal', 'datalim')
plt.title("Detail Coef. Level 3", fontsize=6)
plt.xticks([])
plt.yticks([])
# plt.colorbar(r2)
plt.xlabel('UMAP Dim 1', fontsize=6)
plt.ylabel('UMAP Dim 2', fontsize=6)
plt.show()
```
---
## Outlier Engineering
An outlier is a data point which is significantly different from the remaining data. “An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism.” [D. Hawkins. Identification of Outliers, Chapman and Hall , 1980].
Statistics such as the mean and variance are very susceptible to outliers. In addition, **some Machine Learning models are sensitive to outliers** which may decrease their performance. Thus, depending on which algorithm we wish to train, we often remove outliers from our variables.
We discussed in section 3 of this course how to identify outliers. In this section, we discuss how we can pre-process them to train our machine learning models.
## How can we pre-process outliers?
- Trimming: remove the outliers from our dataset
- Treat outliers as missing data, and proceed with any missing data imputation technique
- Discretisation: outliers are placed in the border bins, together with the higher or lower values of the distribution
- Censoring: capping the variable distribution at a max and / or minimum value
**Censoring** is also known as:
- top and bottom coding
- winsorization
- capping
## Censoring or Capping.
**Censoring**, or **capping**, means capping the maximum and/or minimum of a distribution at an arbitrary value. In other words, values bigger or smaller than the arbitrarily determined ones are **censored**.
Capping can be done at both tails, or just one of the tails, depending on the variable and the user.
Check my talk in [pydata](https://www.youtube.com/watch?v=KHGGlozsRtA) for an example of capping used in a finance company.
The numbers at which to cap the distribution can be determined:
- arbitrarily
- using the inter-quartile range (IQR) proximity rule
- using the Gaussian approximation
- using quantiles
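Only the IQR rule is demonstrated in this notebook; as a rough sketch, the Gaussian-approximation option (a hypothetical helper, capping at the mean plus or minus a number of standard deviations) would look like:

```python
import numpy as np

# Hypothetical helper (not part of the demo below): Gaussian approximation,
# capping at mean +/- `distance` standard deviations.
def find_normal_boundaries(values, distance=3):
    mean, std = np.mean(values), np.std(values)
    return mean + distance * std, mean - distance * std

upper, lower = find_normal_boundaries(np.array([1., 2., 3., 4., 5.]))
```

This approximation is only sensible when the variable is roughly normally distributed; for skewed variables the IQR rule below is the safer choice.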
### Advantages
- does not remove data
### Limitations
- distorts the distributions of the variables
- distorts the relationships among variables
## In this Demo
We will see how to perform capping with the inter-quartile range proximity rule using the Boston House dataset
## Important
When doing capping, we tend to cap values in both the train and test sets. It is important to remember that the capping values MUST be derived from the train set; those same values are then used to cap the variables in the test set.
I will not do that in this demo, but please keep it in mind when setting up your pipelines
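As a sketch of that train/test discipline (toy data and hypothetical helper names, not the code used in this demo):

```python
import numpy as np

def fit_iqr_caps(x, distance=1.5):
    # derive the capping boundaries from the TRAIN data only
    q1, q3 = np.quantile(x, [0.25, 0.75])
    iqr = q3 - q1
    return q1 - distance * iqr, q3 + distance * iqr

train = np.array([1., 2., 2., 3., 50.])
lower, upper = fit_iqr_caps(train)           # (0.5, 4.5)
test = np.array([0., 2., 100.])
capped_test = np.clip(test, lower, upper)    # train-derived caps re-used, not re-estimated
```

The same fit-on-train, transform-on-both pattern is what scikit-learn-style transformers such as Feature-engine's `Winsorizer` implement for you.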
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# for Q-Q plots
import scipy.stats as stats
# boston house dataset for the demo
from sklearn.datasets import load_boston
from feature_engine.outliers import Winsorizer
# load the the Boston House price data
# load the boston dataset from sklearn
boston_dataset = load_boston()
# create a dataframe with the independent variables
# I will use only 3 of the total variables for this demo
boston = pd.DataFrame(boston_dataset.data,
columns=boston_dataset.feature_names)[[
'RM', 'LSTAT', 'CRIM'
]]
# add the target
boston['MEDV'] = boston_dataset.target
boston.head()
# function to create histogram, Q-Q plot and
# boxplot. We learned this in section 3 of the course
def diagnostic_plots(df, variable):
# function takes a dataframe (df) and
# the variable of interest as arguments
# define figure size
plt.figure(figsize=(16, 4))
# histogram
plt.subplot(1, 3, 1)
sns.histplot(df[variable], bins=30)
plt.title('Histogram')
# Q-Q plot
plt.subplot(1, 3, 2)
stats.probplot(df[variable], dist="norm", plot=plt)
plt.ylabel('Variable quantiles')
# boxplot
plt.subplot(1, 3, 3)
sns.boxplot(y=df[variable])
plt.title('Boxplot')
plt.show()
# let's find outliers in RM
diagnostic_plots(boston, 'RM')
# visualise outliers in LSTAT
diagnostic_plots(boston, 'LSTAT')
# outliers in CRIM
diagnostic_plots(boston, 'CRIM')
```
There are outliers in all of the above variables. RM shows outliers in both tails, whereas LSTAT and CRIM only on the right tail.
To find the outliers, let's re-utilise the function we learned in section 3:
```
def find_skewed_boundaries(df, variable, distance):
# Let's calculate the boundaries outside which sit the outliers
# for skewed distributions
# distance passed as an argument, gives us the option to
# estimate 1.5 times or 3 times the IQR to calculate
# the boundaries.
IQR = df[variable].quantile(0.75) - df[variable].quantile(0.25)
lower_boundary = df[variable].quantile(0.25) - (IQR * distance)
upper_boundary = df[variable].quantile(0.75) + (IQR * distance)
return upper_boundary, lower_boundary
# find limits for RM
RM_upper_limit, RM_lower_limit = find_skewed_boundaries(boston, 'RM', 1.5)
RM_upper_limit, RM_lower_limit
# limits for LSTAT
LSTAT_upper_limit, LSTAT_lower_limit = find_skewed_boundaries(boston, 'LSTAT', 1.5)
LSTAT_upper_limit, LSTAT_lower_limit
# limits for CRIM
CRIM_upper_limit, CRIM_lower_limit = find_skewed_boundaries(boston, 'CRIM', 1.5)
CRIM_upper_limit, CRIM_lower_limit
# Now let's replace the outliers by the maximum and minimum limit
boston['RM']= np.where(boston['RM'] > RM_upper_limit, RM_upper_limit,
np.where(boston['RM'] < RM_lower_limit, RM_lower_limit, boston['RM']))
# Now let's replace the outliers by the maximum and minimum limit
boston['LSTAT']= np.where(boston['LSTAT'] > LSTAT_upper_limit, LSTAT_upper_limit,
np.where(boston['LSTAT'] < LSTAT_lower_limit, LSTAT_lower_limit, boston['LSTAT']))
# Now let's replace the outliers by the maximum and minimum limit
boston['CRIM']= np.where(boston['CRIM'] > CRIM_upper_limit, CRIM_upper_limit,
np.where(boston['CRIM'] < CRIM_lower_limit, CRIM_lower_limit, boston['CRIM']))
# let's explore outliers in the trimmed dataset
# for RM we see far fewer outliers than in the original dataset
diagnostic_plots(boston, 'RM')
diagnostic_plots(boston, 'LSTAT')
diagnostic_plots(boston, 'CRIM')
```
We can see that the outliers are gone, but the variable distribution was distorted quite a bit.
## Censoring with Feature-engine
```
# load the the Boston House price data
# load the boston dataset from sklearn
boston_dataset = load_boston()
# create a dataframe with the independent variables
# I will use only 3 of the total variables for this demo
boston = pd.DataFrame(boston_dataset.data,
columns=boston_dataset.feature_names)[[
'RM', 'LSTAT', 'CRIM'
]]
# add the target
boston['MEDV'] = boston_dataset.target
boston.head()
# create the capper
windsoriser = Winsorizer(capping_method='iqr', # choose iqr for IQR rule boundaries or gaussian for mean and std
tail='both', # cap left, right or both tails
fold=1.5,
variables=['RM', 'LSTAT', 'CRIM'])
windsoriser.fit(boston)
boston_t = windsoriser.transform(boston)
diagnostic_plots(boston, 'RM')
diagnostic_plots(boston_t, 'RM')
# we can inspect the minimum caps for each variable
windsoriser.left_tail_caps_
# we can inspect the maximum caps for each variable
windsoriser.right_tail_caps_
```
---
# Implementing Shazam from scratch
Shazam is a great application that can tell you the title of a song by listening to a short sample. We will implement a simplified copy of this app using hashing algorithms. In particular, we implement an LSH algorithm that takes an audio track as input and finds relevant matches.
# 1. The dataset
We used a kaggle dataset containing songs in an mp3 format that we will convert to wav:
https://www.kaggle.com/dhrumil140396/mp3s32k
```
from pathlib import Path
from tqdm import tqdm

# `f` (the project's helper module), `N_TRACKS` and `convert_mp3_to_wav` are
# assumed to be imported from the accompanying project code.
data_folder = Path(f.PATH_SONGS_FOLDER)
mp3_tracks = data_folder.glob("*/*/*.mp3")
tracks = data_folder.glob("*/*/*.wav")
for track in tqdm(mp3_tracks, total=N_TRACKS):
convert_mp3_to_wav(str(track))
```
# 2. Fingerprint Hashing
We want to create a representation of our audio signal that allows us to characterize it with respect to its peaks. Once this process is complete, we can adopt a hashing function to get a fingerprint of each song.
#### First we extract the peaks for each song
To apply LSH, it is important to round the peaks: this produces fewer, less discriminative shingles, which allows matching songs to land in the same buckets when we implement the LSH.
```
song_peaks = f.extract_peaks(song_path, rounded = True)
```
#### Then we store in an array all the unique shingles
This will allow us to create the shingles matrix, a matrix with the shingles on the rows and the songs on the columns. There will be a 1 in the cell **(i,j)** if the shingle **i** is present in the song **j**.
```
shingles = f.unique_shingles(song_peaks)
```
#### Finally we build the shingles matrix
```
matrix = f.shingles_matrix(shingles, song_peaks)
```
#### Hashing the shingles matrix
This technique consists in permutating the matrix rows and for each column take the index of the first non-zero value. This will be the new row of the hash matrix. The hash matrix will have number of rows equal to the number of permutations we decided to apply and each column will be the fingerprint of a song.
It is important to set a seed because then we'll apply the same permutation to the queries to get their fingerprints.
```
hash_matrix = f.hash_matrix(matrix, shingles, song_peaks)
```
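A minimal NumPy sketch of the MinHash step just described (toy data, not the project's `f.hash_matrix`):

```python
import numpy as np

# Toy shingles matrix: 4 shingles (rows) x 3 songs (columns)
M = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])

rng = np.random.default_rng(42)  # fixed seed: queries must reuse the same permutations

def minhash_signature(matrix, n_perms=6):
    n_shingles, n_songs = matrix.shape
    sig = np.empty((n_perms, n_songs), dtype=int)
    for p in range(n_perms):
        permuted = matrix[rng.permutation(n_shingles)]  # permute the rows
        sig[p] = permuted.argmax(axis=0)  # index of the first non-zero per column
    return sig

signature = minhash_signature(M)  # each column is one song's fingerprint
```

`argmax` works here because the entries are 0/1, so the position of the maximum is the position of the first non-zero value.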
# 3. Applying LSH
We suggest to read this article in order to have a better idea of the algorithm (https://www.learndatasci.com/tutorials/building-recommendation-engine-locality-sensitive-hashing-lsh-python/).
The hash matrix will be divided into **b** bands of **r** rows each. We'll then create a dictionary to find all the songs in which a certain bucket is present.
This will allow us when processing a query to only look for the songs contained in the buckets of the query.
```
buckets = f.db_buckets(hash_matrix, n_bands=5)
```
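The banding step can be sketched as follows (toy code, not the project's `f.db_buckets`):

```python
from collections import defaultdict

def lsh_buckets(signature, n_bands):
    # Split the signature into b bands of r rows each; a (band index,
    # band values) pair is a bucket key, and songs sharing a key are
    # candidate matches.
    n_rows = len(signature) // n_bands
    buckets = defaultdict(list)
    for b in range(n_bands):
        band = signature[b * n_rows:(b + 1) * n_rows]
        for song, column in enumerate(zip(*band)):
            buckets[(b, tuple(column))].append(song)
    return buckets

# songs 0 and 1 agree on band 0; songs 0 and 2 agree on band 1
sig = [[1, 1, 2],
       [3, 3, 0],
       [2, 0, 2],
       [4, 4, 4]]
buckets = lsh_buckets(sig, n_bands=2)
```

Choosing **b** and **r** trades off false positives against false negatives: more bands with fewer rows each makes it easier for similar songs to collide in at least one bucket.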
#### Matching the songs
To match a song the steps will be the following:
1. Convert the query to shingles.
2. Apply MinHash and LSH to the shingle set, which maps it to a specific bucket.
3. Conduct a similarity search between the query item and the other items in the bucket.
```
for i in range(1,11):
f.shazamLSH(f.PATH_TEST_QUERY + f'{i}.wav', hash_matrix_rounded, shingles_rounded, buckets)
```
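The similarity search in step 3 can use the Jaccard similarity between shingle sets, which is the quantity MinHash approximates; a sketch:

```python
def jaccard(a, b):
    # Jaccard similarity: size of the intersection over size of the union
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# toy (frequency, time) shingles for a query and one candidate song
score = jaccard({(10, 3), (12, 5), (20, 8)}, {(10, 3), (12, 5), (31, 2)})  # 2 shared of 4 total
```

The candidate with the highest Jaccard score among the songs sharing a bucket with the query is returned as the match.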
---
# Case Study: Stock Charts
```
import pandas as pd
import numpy as np
import datetime
import matplotlib.pyplot as plt
%matplotlib inline
pd.set_option('display.notebook_repr_html', False)
pd.set_option('precision', 3)
pd.set_option('display.max_rows', 8)
pd.set_option('display.max_columns', 15)
```
References:
https://python-programming.quantecon.org/index_toc.html
https://python.quantecon.org/index_tools_and_techniques.html
```
from pandas_datareader import data
# ?data.DataReader
# Retrieve data
start = datetime.datetime(2019, 1, 1)
end = datetime.date.today()
nvda = data.DataReader("NVDA", "yahoo", start, end)
type(nvda)
# save data to a csv file
nvda.to_csv("nvda.csv")
# Retrieve data from the csv file
nvda = pd.read_csv("nvda.csv", index_col=0, parse_dates=True)
nvda
# plot the data
```
### OHLC chart
An OHLC chart is a type of bar chart that shows open, high, low, and closing prices for each period. OHLC charts are useful since they show the four major data points over a period, with the closing price being considered the most important by many traders.
https://www.investopedia.com/terms/o/ohlcchart.asp
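For intuition, pandas can compute these four values per period directly from a price series with `resample(...).ohlc()` (toy data):

```python
import numpy as np
import pandas as pd

# ten daily "prices" starting on a Monday (2020-01-06)
idx = pd.date_range("2020-01-06", periods=10, freq="D")
prices = pd.Series(np.arange(10.0), index=idx)

# one OHLC row per calendar week (weeks end on Sunday by default)
weekly = prices.resample("W").ohlc()
```

Each row of `weekly` holds the first, highest, lowest, and last price observed in that week.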
### Adjusted Closing Price
The closing price is simply the cash value of that specific piece of stock at day's end while the adjusted closing price reflects the closing price of the stock in relation to other stock attributes. In general, the adjusted closing price is considered to be a more technically accurate reflection of the true value of the stock.
https://www.investopedia.com/terms/a/adjusted_closing_price.asp
https://finance.zacks.com/adjusted-closing-price-vs-closing-price-9991.html
```
nvda[['Adj Close']].plot(title="NVDA Adjusted Close Price")
# Convert the adjusted closing prices to cumulative returns.
returns = nvda[['Adj Close']].pct_change()
cumulative_returns = (returns + 1.).cumprod() - 1.0
cumulative_returns.plot(title='NVDA Cumulative Returns')
```
## Concatenation
### Concatenate with rows
```
nvda2019 = data.DataReader("NVDA", "yahoo", datetime.date(2019, 1, 1), datetime.date(2019, 12, 31))
nvda2020 = data.DataReader("NVDA", "yahoo", datetime.date(2020, 1, 1), datetime.date.today())
nvda2019_2020 = pd.concat([nvda2019, nvda2020], axis=0)
nvda2019
nvda2020
nvda2019_2020
```
### Concatenate with columns
```
nvda2020 = data.DataReader("NVDA", "yahoo", datetime.date(2020, 1, 1), datetime.date.today())
tsla2020 = data.DataReader("TSLA", "yahoo", datetime.date(2020, 1, 1), datetime.date.today())
# Concatenate with duplicate names
nvda_tsla = pd.concat([nvda2020, tsla2020], axis=1)
nvda_tsla
# Create MultiIndex for the columns to handle duplicate names
nvda_tsla = pd.concat([nvda2020, tsla2020], axis=1, keys=["NVDA", "TSLA"])
nvda_tsla
```
## Merge
Merging in pandas differs from concatenation in that the pd.merge() function
combines data based on the values of the data in one or more columns instead of
using the index label values along a specific axis.
```
nvda = nvda2020[['Adj Close']].reset_index()
tsla = tsla2020[['Adj Close']].reset_index()
nvda
tsla
nvda_tsla = nvda.merge(tsla, left_on="Date", right_on='Date')
nvda_tsla
nvda_tsla.rename(columns={'Adj Close_x': 'NVDA', 'Adj Close_y': 'TSLA'}, inplace=True)
nvda_tsla
nvda_tsla.set_index("Date", inplace=True)
nvda_tsla.plot()
```
## Time Series
```
nvda2020.loc['2020-08-31']
try:
nvda2020.loc['2020-09-31']
except KeyError:
pass
aug2020 = pd.Period('2020-08', freq='M')
aug2020
# help(pd.Period)
```
## Shifting
```
nvda2020
nvda_shifted_1 = nvda2020.shift(1)
nvda_shifted_1
```
## Daily Percentage Change
The daily percentage change provides a better measure of stock price changes over a single trading day. Daily percentage change is computed from daily adjusted closing price as follows:
$pct = \frac{Price_d - Price_{d-1}}{Price_{d-1}} = \frac{Price_d}{Price_{d-1}} - 1.0$
```
# method 1
nvda2020_pct = nvda2020.iloc[1:] / nvda2020.iloc[:-1].values - 1
nvda2020_pct
nvda2020_pct[['Adj Close']].plot()
# method 2
nvda2020_pct = nvda2020 / nvda2020.shift(1) - 1
nvda2020_pct[['Adj Close']].plot()
# method 3
nvda2020_pct = nvda2020.pct_change()
# nvda2020_pct.dropna()
nvda2020_pct.fillna(0, inplace=True)
_ = nvda2020_pct[['Adj Close']].plot()
```
## Daily Cumulative Returns
$cumulative\_returns_{d} = cumulative\_returns_{d-1} \times (1+pct_{d})$
$cumulative\_returns_{0} = 1$
```
nvda2020_cum_return = (nvda2020_pct + 1).cumprod()
_ = nvda2020_cum_return[['Adj Close']].plot()
# Compare NVDA and TSLA
tsla2020_pct = tsla2020 / tsla2020.shift(1) - 1
tsla2020_cum_return = (tsla2020_pct + 1).cumprod()
_ = tsla2020_cum_return[['Adj Close']].plot()
# Put TSLA and NVDA into the same figure
nvda_tsla_compare = pd.concat([nvda2020_cum_return[['Adj Close']], tsla2020_cum_return[['Adj Close']]], axis=1)
nvda_tsla_compare.columns=("NVDA", "TSLA")
nvda_tsla_compare.plot()
```
### Moving Average
In statistics, a moving average is a calculation used to analyze data points by creating a series of averages of different subsets of the full data set. In finance, a moving average (MA) is a stock indicator that is commonly used in technical analysis. The reason for calculating the moving average of a stock is to help smooth out the price data by creating a constantly updated average price.
```
nvda2020_ma20 = nvda2020[['Adj Close']].rolling(20).mean()
nvda2020_ma50 = nvda2020[['Adj Close']].rolling(50).mean()
df = pd.concat([nvda2020[['Adj Close']], nvda2020_ma20, nvda2020_ma50], axis=1)
df.columns = ("Adj Close", "MA20", "MA50")
_ = df.plot()
```
---
# IFRS17 Simulation (Lapse Scenario)
If you're viewing this page as a static HTML page on https://lifelib.io, the same contents are also available [here on binder] as a Jupyter notebook executable online (it may take a while to load).
To run this notebook and get all the outputs below, Go to the **Cell** menu above, and then click **Run All**.
[here on binder]: https://mybinder.org/v2/gh/fumitoh/lifelib/binder?filepath=lifelib%2Fprojects%2Fifrs17sim%2Fifrs17sim_charts_lapsescen.ipynb
## About this notebook
This notebook draws several waterfall charts that show how sources of change in IFRS17 accounts emerge when the actual lapse rate changes at a future point in time and future lapse rate assumptions from that point change accordingly for current-estimate liability valuations at future points.
Those charts are:
* Actual cashflows
* Present value of expected cashflows
* CSM amortization
* IFRS17 Financial performance
A notebook for the baseline simulation is also available on [lifelib]. The baseline simulation assumes
that actual cashflows emerge exactly as estimated at the beginning of the simulation.
[ifrs17sim]: https://lifelib.io/projects/ifrs17sim.html
[lifelib]: https://lifelib.io
<div class="alert alert-warning">
**Warning:**
The primary purpose of this model is to showcase the capability of [lifelib] and its base system [modelx], and less attention has been paid to the accuracy of the model or the compliance with the accounting standards.
At the very least, the following items are identified as over-simplifications or missing implementations.
<ul>
<li>The timing of cashflows is either the beginning or end of each step.</li>
<li>All expenses are included in insurance cashflows.</li>
<li>Loss component logic is not yet incorporated, so `CSM` can be negative.</li>
<li>Coverage unit is set to sum assured.</li>
<li>The amortization schedule of acquisition cashflows is constant over time.</li>
<li>All insurance cashflows are considered non-market sensitive, i.e. no TVOG is considered.</li>
<li>Risk adjustment is not yet modeled.</li>
</ul>
</div>
[modelx]: http://docs.modelx.io
## How to use Jupyter Notebook
Jupyter Notebook enables you to run a Python script piece by piece. You can run each piece of code (called a "cell") by putting the cursor in the cell and pressing **Shift + Enter**, and get the output right below the input code of the cell. To learn more about Jupyter Notebook, [this tutorial] will help you. There are also plenty of other resources on the internet as Jupyter Notebook is quite popular.
[this tutorial]: https://nbviewer.jupyter.org/github/jupyter/notebook/blob/master/docs/source/examples/Notebook/Running%20Code.ipynb
You can play around with this notebook by changing input values and formulas, re-running code, and checking how output tables and charts change accordingly.
Note that change in code in one cell may change the results of other cells. To reflect change in one cell on the output of other cells, the other cells need to be re-run after the change.
## Initial set-up
The first line, `%matplotlib notebook`, specifies the drawing mode.
The next few lines are import statements, by which functions defined in other modules become available in this script.
The `ifrs17sim` and `draw_charts` modules are in the project directory of this project. To see what files are in the project directory, select **Open** from the **File** menu in the tool bar above.
```
%matplotlib notebook
import pandas as pd
import collections
import matplotlib.pyplot as plt
from draw_charts import draw_waterfall, get_waterfalldata, draw_actest_pairs
```
## Building the model
The next line creates a model by calling the `build` function defined in the `ifrs17sim` module, which has just been imported.
By supplying `True` to `load_saved` parameter of the `build` function, the input data is read from `ifrs17sim.mx`, the 'pickled' file to save loading time. To read input from `input.xlsm`, call `build` with `load_saved=False` or without any parameter because `False` is the default value of `load_saved`.
If you run this code multiple times, the previous model is renamed to `ifrs17sim_BAK*`, and a new model is created and returned as `model`.
In `model` there is a space called `OuterProj` among other spaces. `OuterProj` is parametrized by Policy ID, i.e. each of the parametrized spaces corresponds to a projection of one policy. For example, `model.OuterProj[1]` returns the projection of policy ID 1, and `model.OuterProj[171]` returns the projection of policy ID 171.
The first line below sets `proj` as a shorthand for the projection of Policy ID 1.
You can change the sample policy by supplying some other ID.
```
import modelx as mx
model = mx.read_model("model")
proj = model.OuterProj[1]
inner = proj.InnerProj
```
## Adjusting lapse rates
By default, base lapse rates are read in from the input file. The initial rates are constant at 8%.
```
proj.asmp.SurrRate.to_frame(range(6))
```
In this model, adjustments to the base lapse rates can be made through the cells ``SurrRateMult``.
The cells ``SurrRateMult`` in the outer and inner spaces have different formulas.
For the outer actual simulation, as seen in the formula below, ``SurrRateMult`` is set to 1 by default, but can be overwritten by user input. If the user overwrites ``SurrRateMult(t)``, the new value applies from ``t`` going forward.
```
proj.asmp.SurrRateMult.formula
```
For the inner projections, ``SurrRateMult`` is set to the same value as the outer simulation at time 0.
At each step ``t0`` of the outer simulation, the lapse rates for the inner projection starting at ``t0`` are set to the lapse rate of the actual (outer) simulation applied in the previous period (from ``t0-1`` to ``t0``).
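The one-period lag described above can be sketched in plain Python (the multiplier values here are hypothetical, mirroring the scenario set later in this notebook):

```python
# Outer (actual) lapse multiplier applied in each period -- hypothetical values.
outer_mult = {0: 1, 1: 1, 2: 2, 3: 2, 4: 2}

def inner_mult(t0):
    """Inner projection starting at t0 reuses the outer multiplier from the
    previous period (t0-1 to t0); at t0 = 0 it matches the outer value."""
    return outer_mult[0] if t0 == 0 else outer_mult[t0 - 1]

print([inner_mult(t) for t in range(5)])  # [1, 1, 1, 2, 2]
```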
```
inner.asmp.SurrRateMult.formula
```
We assume that the actual lapse rate doubles from time 2, i.e. the beginning of the 3rd year, and remains doubled thereafter.
Accordingly, we double our lapse rate assumption from the end of the 3rd year, one year later than the change in the actual lapse rate.
```
proj.asmp.SurrRateMult[1] = 1
proj.asmp.SurrRateMult[2] = 2 # Actual lapse rate from t=2
inner[1].asmp.SurrRateMult[1] = 1
inner[2].asmp.SurrRateMult[2] = 1
inner[3].asmp.SurrRateMult[3] = 2 # The lapse assumption for estimated cashflows at t=3 and thereafter
```
The figure below shows how the underlying policies change over time.
The 3 columns in the figure represent 3 cells; solid lines denote actual (outer) values while dotted lines denote expected (inner) values.
For each column, rows from top to bottom represent the time steps of the outer simulation, starting at time 0.
The leftmost column graphs the policies in force at the end of each period.
The middle column shows how ``SurrRateMult`` for the actual and estimated lapse rates changes over time.
``SurrRateMult`` stays at 1 at time 0 and 1 (1st and 2nd rows). Then it doubles for the actual at time 2 (3rd row), while the estimate stays at 1 for one more year. The estimate then catches up with the actual at time 3 (4th row).
The rightmost column shows the movement in the number of surrendered policies.
```
draw_actest_pairs(proj, inner, ['PolsIF_End', 'SurrRateMult', 'PolsSurr'], 5, 5)
```
## Actual cashflows
The code below generates a waterfall chart that simulates actual insurance cashflows that are assumed to be equal to the expected. The net asset balance is reset to zero at the end of each period, so the assets are equal to the liabilities at the beginning of each period.
The assets are held as cash, and bear interest at the same rate as discount rate.
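As a rough, self-contained illustration of this cash roll-forward (the figures below are made up, and the real model discounts along a curve rather than a flat rate):

```python
# Hypothetical one-period cash roll-forward, consistent with the description
# above: premiums received at the start of the period earn interest at the
# discount rate, while claims and expenses fall at the end of the period.
disc_rate = 0.02
premium, claims, expenses = 100.0, 60.0, 20.0
net_cf = premium * (1 + disc_rate) - claims - expenses
print(round(net_cf, 2))  # 22.0
```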
```
proj.IntAccumCF.formula
actcf = get_waterfalldata(
proj,
items=['PremIncome',
'IntAccumCF',
'ExpsAcqTotal',
'BenefitTotal',
'ExpsMaintTotal',
'ActualNetCF'],
length=4,
reverseitems=['ExpsAcqTotal',
'BenefitTotal',
'ExpsMaintTotal'])
actcf
draw_waterfall(actcf, stocks=[0, 5], title='Actual Cashflows')
```
## Present value of expected cashflows
The code below generates a waterfall chart that shows how the present value of expected insurance cashflows unwinds over time.
The waterfall bars are disconnected at t = 3, because we change the lapse assumption there. The present value of future cashflows decreases due to the decrease in future policies.
```
estcf = get_waterfalldata(
proj,
items=['PV_FutureCF',
'EstPremIncome',
'EstIntOnCF',
'EstAcqCashflow',
'EstClaim',
'EstExps'],
length=4,
reverseitems=['EstPremIncome'])
estcf
draw_waterfall(estcf, title='Expected Cashflows')
```
## CSM amortization
The CSM amortization chart below depicts items that increase/decrease CSM balance.
The adjustment to CSM for changes in fulfilment cashflows (``AdjCSM_FlufCF``) is negative in the 3rd period.
This offsets, to a great extent, the decrease in ``PV_FutureCF`` in the chart above (or the increase in liability, if ``PV_FutureCF`` is negative, which is often the case, unlike in this sample).
``AdjCSM_FlufCF`` takes the difference of ``PV_Cashflow`` with different parameters.
``PV_Cashflow(t+1, t+1, 0)`` starts the projection at ``t+1`` using the lapse assumption updated at ``t+1``, and discounts the cashflows back to time ``t+1`` using the discount rate fixed at time zero.
``PV_Cashflow(t, t+1, 0)`` starts the projection at ``t`` using the lapse rate assumption before the change, and discounts the cashflows back to time ``t+1`` using the discount rate fixed at time zero.
The sources of difference between these values are the difference in the number of policies in force and the difference in future lapse rates, which affect the projected policies in force and cashflows from ``t+1``.
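A stripped-down sketch of this PV difference with made-up cashflows and a flat discount rate (the real model discounts along a curve fixed at time 0):

```python
# Hypothetical cashflows projected before vs. after a lapse assumption change,
# both discounted back to t+1 with the same rate fixed at time 0.
disc = 0.02
cf_old = [10.0, 10.0, 10.0]   # projected with the old lapse assumption
cf_new = [9.0, 8.5, 8.0]      # projected with the updated assumption

def pv(cashflows, rate):
    return sum(c / (1 + rate) ** (k + 1) for k, c in enumerate(cashflows))

adj = pv(cf_new, disc) - pv(cf_old, disc)   # analogue of AdjCSM_FlufCF
print(round(adj, 4))  # negative: fewer future policies, smaller PV
```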
```
proj.AdjCSM_FlufCF.formula
csmrf = get_waterfalldata(
proj,
items=['CSM',
'IntAccrCSM',
'AdjCSM_FlufCF',
'TransServices'],
length=4,
reverseitems=['TransServices'])
csmrf
draw_waterfall(csmrf, title='CSM Amortization')
```
## IFRS17 Financial performance
The chart below simulates P&L accounts based on IFRS17 from the background data used to draw the charts above. The profit in each period is released and the outstanding net balance (`NetBalance`) is reset to zero.
The profit in the 3rd period is decreased due to the adverse change in lapse assumption, but its impact on the current profit is limited because of the offset between the change in CSM and the PV of future cashflows.
```
proj.InsurRevenue.formula
proj.InsurServiceExps.formula
ifrspl = get_waterfalldata(
proj,
items=['InsurRevenue',
'InsurServiceExps',
'InsurFinIncomeExps',
'ProfitBefTax'],
length=5,
reverseitems=['InsurServiceExps'])
ifrspl
draw_waterfall(ifrspl, stocks=[0, 3], title='IFRS17 Profit/Loss')
```
# Simple Toy Problem
This notebook contains a simple artificial experiment setup to illustrate optimal control.
```
%load_ext autoreload
%autoreload 2
%config IPCompleter.greedy=True
# Importing relevant libraries
import cvxpy as cp
import numpy as np
from solara.constants import PROJECT_PATH
EXPERIMENT_NAME = "experiment_01_penalty_grid"
PLOT_DIR = PROJECT_PATH + "/figures/experiments/"
OUT_FORMAT = ".svg" # Output format of figures
# Loading data
load_data = np.loadtxt(PROJECT_PATH + "/data/solar_trace_data_v2/load_5796.txt", delimiter=",")
pv_data = np.loadtxt(PROJECT_PATH + "/data/solar_trace_data_v2/PV_5796.txt", delimiter=",")
import matplotlib.pyplot as plt
plt.plot(load_data)
import scipy.interpolate
solar = [0,0,0,0,0,0,0,0.05,0.2,0.8,0.95,1,0.95,0.8,0.2,0.05,0,0,0,0,0,0,0,0,0]
load = [0.5] * 25
#load[6] = 0.9
load[19] = 1.4
pv_data = np.array(solar)
load_data = np.array(load)
print(len(solar))
x1 = np.linspace(0, 24, num=len(solar))
plt.plot(x1, solar)
plt.plot(x1, load)
# Setting all the variables
## Given variables
### Basic
T_u = 1 # Time slot duration
T_h = 24 # Time horizon (hours)
### Grid
pi_b = 0.14 # Base price per unit of energy purchased ($/kWh)
pi_d = 0.86 # Demand price penalty per unit of energy purchased with power demand exceeding Γ($/kWh)
Gamma = 1.00 # np.percentile(load_data, 80) # Threshold above which the demand price is paid (kW)
p_bar = 0.12 # Price per unit of energy sold at time t ($/kWh)
### Battery variables
size = 10
kWh_per_cell = 0.011284
num_cells = size / kWh_per_cell
nominal_voltage_c = 3.8793
nominal_voltage_d = 3.5967
u1 = 0.1920
v1_bar = 0.0
u2 = -0.4865
v2_bar = kWh_per_cell * num_cells
eta_d = 1 / 0.9 # taking reciprocal so that we don't divide by eta_d
eta_c = 0.9942
alpha_bar_d = (
v2_bar * 1
) # the 1 indicates the maximum discharging C-rate
alpha_bar_c = (
v2_bar * 1
) # the 1 indicates the maximum charging C-rate
# Given variables from data set
num_timesteps = T_h
start = 0#24*12
power_load = load_data[start:start+num_timesteps] #np.random.randn(num_timesteps) # Load at time t (kW)
power_solar = pv_data[start:start+num_timesteps] #np.random.randn(num_timesteps) # Power generated by solar panels at timet(kW)
# Variables that are being optimised over
power_direct = cp.Variable(num_timesteps) # Power flowing directly from PV and grid to meet the load or be sold at time t (kW) (P_dir)
power_charge = cp.Variable(num_timesteps) # Power used to charge the ESD at time t (kW) (P_c)
power_discharge = cp.Variable(num_timesteps) # Power from the ESD at time t (kW) (P_d)
power_grid = cp.Variable(num_timesteps) # Power drawn from the grid at time t (kW) (P_g)
power_sell = cp.Variable(num_timesteps) # Power sold to the grid at timet(kW) (P_sell)
power_over_thres = cp.Variable(num_timesteps) # Purchased power that exceeds Γ at time t (not in notation table) (P_over)
# Implicitly defined variable (not in paper in "given" or "optimized over" set of variables)
energy_battery = cp.Variable(num_timesteps+1) # the energy content of the ESD at the beginning of interval t (E_ESD)
base_constraints = [
0 <= power_grid, # from Equation (13)
0 <= power_direct,
0 <= power_sell,
0 <= power_charge, # Eq (18)
0 <= power_discharge, # Eq (19)
# Power flow
power_direct + power_discharge == power_load + power_sell, # from Equation (14)
0 <= power_charge + power_direct, # Eq (17)
power_charge + power_direct <= power_solar + power_grid, # Eq (17)
]
grid_constraints = [
0 <= power_over_thres,
power_grid - Gamma <= power_over_thres, # Eq (24)
power_sell == 0, # stopping selling to the grid
]
battery_constraints = [
energy_battery[0] == 0,
energy_battery[1:] == energy_battery[:-1] + eta_c*power_charge*T_u - eta_d * power_discharge * T_u,
energy_battery >= 0,
power_discharge <= alpha_bar_d,
power_charge <= alpha_bar_c, #equation (5)
u1 * ((power_discharge)/nominal_voltage_d) + v1_bar <= energy_battery[1:], # equation (4)
u2 * ((power_charge)/nominal_voltage_c) + v2_bar >= energy_battery[1:], # equation (4)
]
constraints = base_constraints + battery_constraints + grid_constraints
objective = cp.Minimize(cp.sum(pi_b*power_grid + pi_d*power_over_thres - cp.multiply(p_bar,power_sell)))
prob = cp.Problem(objective, constraints)
result = prob.solve(verbose=True)
charging_power = power_charge.value - power_discharge.value
episode_data = {
'load': power_load,
'pv_gen': power_solar,
'battery_cont': energy_battery.value,
'charging_power': charging_power,
'cost': pi_b*power_grid.value + pi_d*power_over_thres.value,
'price_threshold': np.ones(25) * Gamma,
'actions': charging_power / 10,
'rewards': - (pi_b*power_grid.value + pi_d*power_over_thres.value),
'power_diff': np.zeros(24),
}
import solara.utils.rllib
import solara.plot.widgets
initial_visibility = ['load','pv_gen','energy_cont','net_load',
'charging_power','cost','price_threshold',
'actions']
#initial_visibility = ['energy_cont', 'pv_gen', 'actions', 'charging_power', 'energy_cont']
solara.plot.widgets.InteractiveEpisodes([episode_data], initial_visibility=initial_visibility)
import matplotlib.pyplot as plt
# Plotting configuration
POLICY_PLOT_CONF = {
"selected_keys": ['load','pv_gen','energy_cont','net_load',
'charging_power','cost','price_threshold', #'battery_cont',
],
"y_min":-1.3,
"y_max":1.5,
"show_grid":False,
}
solara.plot.pyplot.plot_episode(episode_data,title=None, **POLICY_PLOT_CONF)
plt.savefig(fname=PLOT_DIR + EXPERIMENT_NAME + "_plot_09_convex_solution" + OUT_FORMAT, bbox_inches='tight')
plt.show()
import solara.envs.components.solar
import solara.envs.components.load
import solara.envs.components.grid
import solara.envs.components.battery
import solara.envs.battery_control
import solara.utils.logging
from solara.constants import PROJECT_PATH
def battery_env_creator(env_config=None):
"""Create a battery control environment."""
PV_DATA_PATH = PROJECT_PATH + "/data/solar_trace_data/PV_5796.txt"
LOAD_DATA_PATH = PROJECT_PATH + "/data/solar_trace_data/load_5796.txt"
# Setting up components of environment
battery_model = solara.envs.components.battery.LithiumIonBattery(size=10,
chemistry="NMC",
time_step_len=1)
pv_model = solara.envs.components.solar.DataPV(data_path=PV_DATA_PATH,
fixed_sample_num=12)
load_model = solara.envs.components.load.DataLoad(data_path=LOAD_DATA_PATH,
fixed_sample_num=12)
grid_model = solara.envs.components.grid.PeakGrid(peak_threshold=1.0)
# Fixing load and PV trace to single sample
episode_num = 12
load_model.fix_start(episode_num)
pv_model.fix_start(episode_num)
env = solara.envs.battery_control.BatteryControlEnv(
battery = battery_model,
solar = pv_model,
grid = grid_model,
load = load_model,
infeasible_control_penalty=True,
grid_charging=True,
logging_level = "WARNING",
)
return env
env = battery_env_creator()
solara.plot.widgets.InteractiveEpisodes([episode_data],
initial_visibility=initial_visibility,
manual_mode=True,
manual_start_actions=episode_data["actions"],
env=env)
episode_data["actions"][0]
```
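The tariff encoded in the objective above can be checked in isolation. This is just an illustrative helper (not part of the model), with prices and the threshold copied from the variables defined earlier:

```python
# Per-step electricity cost: base price on all purchased power, plus a demand
# penalty on power above the threshold Gamma, minus revenue for sold power.
def step_cost(grid, sold, pi_b=0.14, pi_d=0.86, p_bar=0.12, gamma=1.0):
    over = max(grid - gamma, 0.0)   # purchased power exceeding the threshold
    return pi_b * grid + pi_d * over - p_bar * sold

print(round(step_cost(1.5, 0.0), 4))  # 0.14*1.5 + 0.86*0.5 = 0.64
```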
# Scale Seldon Deployments based on Prometheus Metrics.
This notebook shows how you can scale Seldon Deployments based on Prometheus metrics via KEDA.
[KEDA](https://keda.sh/) is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.
With the support of KEDA in Seldon, you can scale your seldon deployments with any scalers listed [here](https://keda.sh/docs/2.0/scalers/).
In this example we will scale the seldon deployment with Prometheus metrics as an example.
## Install Seldon Core
Install Seldon Core as described in [docs](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html)
Make sure add `--set keda.enabled=true`
## Install Seldon Core Analytic
seldon-core-analytics contains Prometheus and Grafana installation with a basic Grafana dashboard showing the default Prometheus metrics exposed by Seldon for each inference graph deployed.
Later we will use the Prometheus service installed to provide metrics in order to scale the Seldon models.
Install Seldon Core Analytics as described in [docs](https://docs.seldon.io/projects/seldon-core/en/latest/analytics/analytics.html)
```
!helm install seldon-core-analytics ../../helm-charts/seldon-core-analytics -n seldon-system --wait
```
## Install KEDA
```
!kubectl delete -f https://github.com/kedacore/keda/releases/download/v2.0.0/keda-2.0.0.yaml
!kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.0.0/keda-2.0.0.yaml
!kubectl get pod -n keda
```
## Create model with KEDA
To create a model with KEDA autoscaling you just need to add a KEDA spec referring in the Deployment, e.g.:
```yaml
kedaSpec:
pollingInterval: 15 # Optional. Default: 30 seconds
minReplicaCount: 1 # Optional. Default: 0
maxReplicaCount: 5 # Optional. Default: 100
triggers:
- type: prometheus
metadata:
# Required
serverAddress: http://seldon-core-analytics-prometheus-seldon.seldon-system.svc.cluster.local
metricName: access_frequency
threshold: '10'
query: rate(seldon_api_executor_client_requests_seconds_count{seldon_app=~"seldon-model-example"}[10s])
```
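Under the hood, KEDA hands the metric to a Horizontal Pod Autoscaler. A rough sketch of the resulting replica count, assuming the standard HPA scaling rule (an approximation for intuition, not KEDA's exact code path):

```python
import math

def desired_replicas(current, metric, threshold, min_r=1, max_r=5):
    """Approximate HPA rule: ceil(current * metric / target), clamped to the
    minReplicaCount/maxReplicaCount bounds from the kedaSpec above."""
    if metric <= 0:
        return min_r
    return max(min_r, min(max_r, math.ceil(current * metric / threshold)))

print(desired_replicas(1, 42, 10))  # 5 -- capped at maxReplicaCount
print(desired_replicas(1, 4, 10))   # 1 -- below threshold, stay at minimum
```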
The full SeldonDeployment spec is shown below.
```
VERSION = !cat ../../version.txt
VERSION = VERSION[0]
VERSION
%%writefile model_with_keda_prom.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: seldon-model
spec:
name: test-deployment
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier:1.5.0-dev
imagePullPolicy: IfNotPresent
name: classifier
resources:
requests:
cpu: '0.5'
kedaSpec:
pollingInterval: 15 # Optional. Default: 30 seconds
minReplicaCount: 1 # Optional. Default: 0
maxReplicaCount: 5 # Optional. Default: 100
triggers:
- type: prometheus
metadata:
# Required
serverAddress: http://seldon-core-analytics-prometheus-seldon.seldon-system.svc.cluster.local
metricName: access_frequency
threshold: '10'
query: rate(seldon_api_executor_client_requests_seconds_count{seldon_app=~"seldon-model-example"}[1m])
graph:
children: []
endpoint:
type: REST
name: classifier
type: MODEL
name: example
!kubectl create -f model_with_keda_prom.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=seldon-model -o jsonpath='{.items[0].metadata.name}')
```
## Create Load
We label some nodes for the loadtester. We attempt the first two nodes because, on Kind, the first node shown will be the master.
```
!kubectl label nodes $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') role=locust
!kubectl label nodes $(kubectl get nodes -o jsonpath='{.items[1].metadata.name}') role=locust
```
Before adding load to the model, there is only one replica:
```
!kubectl get deployment seldon-model-example-0-classifier
!helm install seldon-core-loadtesting seldon-core-loadtesting --repo https://storage.googleapis.com/seldon-charts \
--set locust.host=http://seldon-model-example:8000 \
--set oauth.enabled=false \
--set locust.hatchRate=1 \
--set locust.clients=1 \
--set loadtest.sendFeedback=0 \
--set locust.minWait=0 \
--set locust.maxWait=0 \
--set replicaCount=1
```
After a few minutes you should see the deployment scaled to 5 replicas.
```
import json
import time
def getNumberPods():
dp = !kubectl get deployment seldon-model-example-0-classifier -o json
dp = json.loads("".join(dp))
return dp["status"]["replicas"]
scaled = False
for i in range(60):
pods = getNumberPods()
print(pods)
if pods > 1:
scaled = True
break
time.sleep(5)
assert scaled
!kubectl get deployment/seldon-model-example-0-classifier scaledobject/seldon-model-example-0-classifier
```
## Remove Load
```
!helm delete seldon-core-loadtesting
```
After 5-10 minutes you should see the deployment replica count decrease to 1.
```
!kubectl get pods,deployments,hpa,scaledobject
!kubectl delete -f model_with_keda_prom.yaml
```
# NASBench-101
This colab accompanies [**NAS-Bench-101: Towards Reproducible Neural Architecture Search**](https://arxiv.org/abs/1902.09635) and the rest of the code at https://github.com/google-research/nasbench.
In this colab, we demonstrate how to use the dataset for simple benchmarking and analysis. The publicly available and free hosted colab instances are sufficient to run this colab.
## Load NASBench library and dataset
```
# Download the raw data (only 108 epoch data points, for full dataset,
# uncomment the second line for nasbench_full.tfrecord).
# !curl -O https://storage.googleapis.com/nasbench/nasbench_only108.tfrecord
# !curl -O https://storage.googleapis.com/nasbench/nasbench_full.tfrecord
# Clone and install the code and dependencies.
# !git clone https://github.com/google-research/nasbench
# !pip install ./nasbench
# Initialize the NASBench object which parses the raw data into memory (this
# should only be run once as it takes up to a few minutes).
from nasbench import api
%load_ext autoreload
%autoreload 2
import sys
import os
os.chdir('/home/yukaiche/pycharm/automl/search_policies/cnn/nasbench101')
sys.path.append("/home/yukaiche/pycharm/nasbench")
sys.path.append('/home/yukaiche/pycharm/automl')
from nasbench import api
# Use nasbench_full.tfrecord for full dataset (run download command above).
nasdata = api.NASBench('/home/yukaiche/pycharm/nasbench/nasbench_only108.tfrecord')
from search_policies.cnn.search_space.nasbench101.nasbench_api_v2 import NASBench_v2
nasdata_v2 = NASBench_v2('/home/yukaiche/data/nasbench_only108.tfrecord', only_hash=False)
```
# Test Graph generation from json style file.
```
import json
from search_policies.cnn.search_space.nasbench101.sampler import random_spec
from search_policies.cnn.search_space.nasbench101.nasbench_api_v2 import ModelSpec_v2
import search_policies.cnn.search_space.nasbench101.util as util
# r_spec = random_spec(nasdata_v2)
# print(r_spec.ops[1:-1])
# load the graph.json
with open('/home/yukaiche/pycharm/nasbench/nasbench/scripts/graph_v4.json') as f:
graph_json = json.load(f)
hashs = [h for h in graph_json.keys()]
_hash = hashs[-10]
print(graph_json[_hash])
# lists = graph_json[_hash]
# [i + 2 for i in lists[1]]
# available_ops=('conv3x3-bn-relu', 'conv1x1-bn-relu', 'maxpool3x3')
hash_v2 = ModelSpec_v2.load_from_list(graph_json[_hash])
print("hash v2 created graph", hash_v2)
query = nasdata.get_metrics_from_hash(_hash)
print("hash query from NasBench v1")
util.display_cell(query[0])
print("model_spec query from NasBench v1")
util.display_cell(nasdata.query(hash_v2))
# print('hash query from NasBench v2')
# util.display_cell(nasdata_v2.get_metrics_from_hash(_hash))
print("model_spec query from NasBench v2")
util.display_cell(nasdata_v2.query(hash_v2))
# fixed_stat, computed_stat = nasdata_v2.get_metrics_from_hash(_hash)
# util.display_cell(fixed_stat)
# util.display_cell(computed_stat)
# print(computed_stat[108][0])
# Check that the JSON representation round-trips:
# converting the spec back to JSON should match the input graph_json entry.
hash_v2.model_spec_to_json() == graph_json[_hash]
```
# Produce the ranking of NASBench based on hash.
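The ranking below boils down to sorting a hash-to-accuracy mapping with pandas; here is a toy version with made-up hashes and accuracies:

```python
import pandas as pd

# Made-up hash -> validation accuracy mapping, sorted ascending as below.
scores = {"a1": 0.90, "b2": 0.85, "c3": 0.95}
df = pd.DataFrame({"hash": list(scores), "validation_accuracy": list(scores.values())})
ranked = df.sort_values("validation_accuracy").reset_index(drop=True)
print(ranked["hash"].tolist())  # ['b2', 'a1', 'c3']
```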
```
NASBENCH_CONFIG = 'v4_e9_op3'
with open(f'/home/yukaiche/data/nasbench_all_graphs_{NASBENCH_CONFIG}.json', 'r') as f:
all_graphs = json.load(f)
all_hashs = all_graphs.keys()
print('All models in NASBench', len(all_graphs))
import pandas as pd
all_hashs = [k for k in all_hashs]
hash_vs_valid_acc = {}
for _hash in all_hashs:
d = nasdata_v2.query(ModelSpec_v2.load_from_list(all_graphs[_hash]))
# print([k for k in d.keys()])
acc = d['validation_accuracy']
hash_vs_valid_acc[_hash] = acc
# print(acc)
# print([k for k in hash_vs_valid_acc.keys()])
pd_hash = pd.DataFrame.from_dict(
{'hash': [k for k in hash_vs_valid_acc.keys()],
'validation_accuracy':[ hash_vs_valid_acc[new_k] for new_k in [k for k in hash_vs_valid_acc.keys()]]}
)
pd_hash.reset_index(inplace=True)
pd_hash.sort_values('validation_accuracy',inplace=True)
pd_hash.reset_index(inplace=True, drop=True)
print(pd_hash[:10])
new_dict = pd_hash.to_dict()
# with open('/home/yukaiche/data/nasbench_hash-rank_v7_e9_op3.json', 'w') as f:
# json.dump(new_dict, f)
# print([k for k in new_dict.keys()])
only_hash = [new_dict['hash'][k] for k in sorted([v for v in new_dict['index'].values()])]
print(only_hash[:10])
with open(f'/home/yukaiche/data/nasbench_hash_rank_simple_{NASBENCH_CONFIG.replace("_", "-")}.json', 'w') as f:
json.dump([new_dict['hash'][k] for k in sorted([v for v in new_dict['index'].values()])], f)
# Sanity check
util.display_cell(nasdata_v2.query(ModelSpec_v2.load_from_list(all_graphs['278a65c91e279624407615a84f3282c4'])))
# Visualize the histogram
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1,1, figsize=(4,5))
ax.hist(new_dict['validation_accuracy'].values(), bins = 20)
# Test sampling the data
# Not random, but just take a bunch of from
# sample a set of 30 hash, build their graph and train.
# put this function into the manual_define_sampled_search
from search_policies.cnn.search_space.nasbench101.sampler import manual_define_sampled_search
print(manual_define_sampled_search())
_hash = 'ef11a63e3dec4177d65648771c3689aa'
util.display_cell(nasdata_v2.query_hash(_hash))
print(nasdata_v2.hash_to_model_spec(_hash))
```
# Node = 4 case to do sanity checking.
This is to test the use case when the number of nodes = 4, in CNN, for soft weight sharing.
```
with open('/home/yukaiche/data/nasbench_all_graphs_v4_e9_op3.json', 'r') as f:
all_graphs = json.load(f)
all_hashs = all_graphs.keys()
print('All models in NASBench', len(all_graphs))
from search_policies.cnn.search_space.nasbench101.model import NasBenchNet
net = NasBenchNet(3, nasdata_v2.hash_to_model_spec(_hash))
print(len(net.stacks))
acell = net.stacks['stack0']['module0']
print(acell.dag)
import networkx as nx
model_spec = nasdata_v2.hash_to_model_spec(_hash)
dag = nx.from_numpy_matrix(model_spec.matrix, create_using=nx.DiGraph())
print(acell.execution_order.keys())
print(model_spec.matrix)
for vert in nx.topological_sort(dag):
print(vert)
print(list(dag.predecessors(vert)))
print(nx.to_dict_of_dicts(dag))
print(nx.to_edgelist(dag))
nx.from_edgelist(genotype,create_using=nx.DiGraph())
```
# Experiment 5.1 - Features extracted using Inception Resnet v2 + SVM
Reproduce Results of [Transfer learning with deep convolutional neural network for liver steatosis assessment in ultrasound images](https://pubmed.ncbi.nlm.nih.gov/30094778/). We used a pre-trained CNN to extract features based on B-mode images.
The CNN features are extracted using the pretrained Inception-ResNet-v2 implemented in Keras.
See reference: https://jkjung-avt.github.io/keras-inceptionresnetv2/

```
import sys
import random
sys.path.append('../src')
import warnings
warnings.filterwarnings("ignore")
from utils.compute_metrics import get_metrics, get_majority_vote,log_test_metrics
from utils.split import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.svm import SVC
from sklearn.model_selection import GroupKFold
from tqdm import tqdm
from pprint import pprint
from itertools import product
import pickle
import pandas as pd
import numpy as np
import mlflow
import matplotlib.pyplot as plt
```
## 1. Retrieve Extracted Features
```
with open('../data/03_features/inception_dict_tensor_avg_interpolation_pooling.pickle', 'rb') as handle:
features_dict = pickle.load(handle)
df_features = features_dict['features']
interpolation = features_dict['Interpolation']
```
# 2. Cross Validation using SVM Classification
> Methods that exclude outliers were used to normalize the features. Patient-specific leave-one-out cross-validation (LOOCV) was applied to evaluate the classification. In each case, the test set consisted of 10 images from the same patient and the training set contained 540 images from the remaining 54 patients. For each training set, fivefold cross-validation and grid search were applied to indicate the optimal SVM classifier hyperparameters and the best kernel. To address the problem of class imbalance, the SVM hyperparameter C of each class was adjusted inversely proportional to that class frequency in the training set. Label 1 indicated the image containing a fatty liver and label −1 otherwise.
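A minimal sketch of the patient-grouped cross-validation described above, on synthetic data (8 hypothetical "patients" with 5 images each; the real experiment uses 55 patients and LOOCV):

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(40, 4)                 # 40 images, 4 features each (synthetic)
y = np.tile([0, 1], 20)              # fatty-liver label, both classes present
groups = np.repeat(np.arange(8), 5)  # patient id per image

gkf = GroupKFold(n_splits=4)
for train_idx, test_idx in gkf.split(X, y, groups):
    clf = SVC(kernel="rbf", gamma=1e-3, C=10, class_weight="balanced")
    clf.fit(X[train_idx], y[train_idx])
    # no patient contributes images to both the train and the test fold
    assert set(groups[train_idx]).isdisjoint(set(groups[test_idx]))
```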
```
# Set the parameters by cross-validation
param_gamma = [1e-3, 1e-4]
param_C = [1, 10, 1000]
kernel = ['linear', 'poly', 'rbf', 'sigmoid']
params = list(product(kernel,param_gamma, param_C ))
def train_valid(param, X_train,X_valid,y_train, y_valid):
#The “balanced” mode uses the values of y to automatically adjust weights inversely
#proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).
    model = SVC(kernel=param[0], gamma=param[1], C=param[2], class_weight='balanced')  # , probability=True
model.fit(X_train, y_train)
predictions = model.predict(X_valid)
acc, auc, specificity, sensitivity = get_metrics(y_valid, predictions)
return acc, auc, specificity, sensitivity , predictions
def log_val_metrics(params, metrics, test_n_splits, num_components = 5):
mlflow.set_experiment('val_inception_avg_pooling_svm_experiment')
# log mlflow params
for param in params:
with mlflow.start_run():
#log params
mlflow.log_param('pca_n',pca_n_components)
mlflow.log_param('model',f'svm: {param[0]}')
mlflow.log_param('test K fold', test_n_splits)
mlflow.log_param('gamma',param[1])
mlflow.log_param('Num Components', num_components)
mlflow.log_param('C',param[2])
#log metrics
mlflow.log_metric('accuracy',np.array(metrics[str(param)]['acc']).mean())
mlflow.log_metric('AUC',np.array(metrics[str(param)]['auc']).mean())
mlflow.log_metric('specificity',np.array(metrics[str(param)]['specificity']).mean())
mlflow.log_metric('sensitivity',np.array(metrics[str(param)]['sensitivity']).mean())
print("Done logging validation params in MLFlow")
df = df_features
pca_n_components = 5
standardize = True
test_metrics={}
#majority vote results
test_metrics_mv={}
test_n_splits = 11
group_kfold_test = GroupKFold(n_splits=test_n_splits)
seed= 11
df_pid = df['id']
df_y = df['labels']
fold_c =1
predictions_prob =[]
labels =[]
for train_index, test_index in group_kfold_test.split(df,
df_y,
df_pid):
random.seed(seed)
random.shuffle(train_index)
X_train, X_test = df.iloc[train_index], df.iloc[test_index]
y_train, y_test = df_y.iloc[train_index], df_y.iloc[test_index]
X_test = X_test.drop(columns=['id', 'labels'])
X_train_pid = X_train.pop('id')
X_train = X_train.drop(columns=['labels'])
# Do cross-validation for hyperparam tuning
group_kfold_val = GroupKFold(n_splits=5)
metrics={}
#X_train_y = df.pop('class')
for subtrain_index, valid_index in group_kfold_val.split(X_train,
y_train,
X_train_pid):
X_subtrain, X_valid = X_train.iloc[subtrain_index], X_train.iloc[valid_index]
y_subtrain, y_valid = y_train.iloc[subtrain_index], y_train.iloc[valid_index]
#standardize
if standardize:
scaler = StandardScaler()
X_subtrain = scaler.fit_transform(X_subtrain)
X_valid = scaler.transform(X_valid)
pca = PCA(n_components=pca_n_components,random_state = seed)
X_subtrain = pca.fit_transform(X_subtrain)
X_valid = pca.transform(X_valid)
for param in tqdm(params):
if str(param) not in metrics.keys() :
metrics[str(param)] ={'acc':[], 'auc':[], 'sensitivity':[], 'specificity':[]}
acc, auc, specificity, sensitivity,_ = train_valid(param, X_subtrain,X_valid,y_subtrain, y_valid)
metrics[str(param)]['auc'].append(auc)
metrics[str(param)]['acc'].append(acc)
metrics[str(param)]['sensitivity'].append(sensitivity)
metrics[str(param)]['specificity'].append(specificity)
    #log validation metrics for all combinations of params
    log_val_metrics(params, metrics, test_n_splits, pca_n_components)
    #pick the hyperparameters with the highest mean validation AUC
    index_param_max = np.array([np.array(metrics[str(param)]['auc']).mean() for param in params]).argmax()
    print('From all the combinations, the highest mean AUC was achieved with', params[index_param_max])
#standardize
if standardize:
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
pca = PCA(n_components=pca_n_components)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
    #evaluate on the held-out test patients; note the hyperparameters below are
    #fixed rather than taken from params[index_param_max]
    model = SVC(kernel='sigmoid', gamma=0.001, C=1, class_weight='balanced', probability=True)
    model.fit(X_train, y_train)
    predictions = model.predict(X_test)
    acc, auc, specificity, sensitivity = get_metrics(y_test, predictions)
    predictions_prob = predictions_prob + [probe[1] for probe in model.predict_proba(X_test)]
    labels = labels + list(y_test)
    #compute majority vote metrics
    acc_mv, auc_mv, specificity_mv, sensitivity_mv = get_majority_vote(y_test, predictions)
print('FOLD '+ str(fold_c) + ': acc ' + str(acc) + ', auc ' + str(auc) + ', specificity '+ str(specificity)
+ ', sensitivity ' + str(sensitivity))
print('FOLD '+ str(fold_c) + ': MV acc ' + str(acc_mv) + ', MV auc ' + str(auc_mv) + ', MV specificity '+ str(specificity_mv)
+ ', MV sensitivity ' + str(sensitivity_mv))
test_metrics[fold_c]= {'acc':acc, 'auc':auc, 'sensitivity':sensitivity, 'specificity':specificity, 'param':params[index_param_max]}
test_metrics_mv[fold_c]= {'acc':acc_mv, 'auc':auc_mv, 'sensitivity':sensitivity_mv, 'specificity':specificity_mv, 'param':params[index_param_max]}
fold_c +=1
log_test_metrics(test_metrics, test_metrics_mv, test_n_splits, 'AVG Pooling Inception features + SVM', interpolation , seed, pca_n_components, standardize)
```
```
# Copyright 2020 IITK EE604A Image Processing. All Rights Reserved.
#
# Licensed under the MIT License. Use and/or modification of this code outside of EE604 must reference:
#
# © IITK EE604A Image Processing
# https://github.com/ee604/ee604_assignments
#
# Author: Shashi Kant Gupta, Cheeranjeev and Prof K. S. Venkatesh, Department of Electrical Engineering, IIT Kanpur
```
# Task 4 (Bonus Question): Using Optical Flow and Pinhole Camera Model to Estimate Camera Motion
---
## Theory
In this bonus task you have to implement an optical flow algorithm to estimate the motion of the camera. Recall that an image captured by a camera is a 2D projection of 3D points onto the camera's image sensor. Using the pinhole camera model, we can approximately determine the relationship between a 3D coordinate point and the corresponding 2D point on the image plane. This relationship is:
$$
[x', y']^T = \frac{f}{z}[x, y]^T
$$
Here, $x'$ and $y'$ are the 2D projection of the 3D point $P = [x, y, z]^T$ on the image plane and $f$ is the focal length of the camera. If you want to study camera models in detail, follow this [link](https://web.stanford.edu/class/cs231a/course_notes/01-camera-models.pdf).
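As a quick numeric sketch of this relationship (hypothetical numbers, not the assignment's data):

```python
# Pinhole projection: [x', y'] = (f / z) * [x, y]
f = 10.0                        # focal length, mm
x, y, z = 120.0, 60.0, 600.0    # 3D point in mm (z = 60 cm from the camera)
x_p, y_p = f * x / z, f * y / z  # 2D projection on the image plane, in mm
print(x_p, y_p)  # 2.0 1.0
```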
### Optical flow
Optical flow is the apparent motion of image objects between two consecutive frames due to the relative motion between the camera and the object. In layman's terms, optical flow is the velocity of a certain pixel point in the image. Consider the example given below:

In the example given above you can see that the pixel velocity for the green object will be $[2/\Delta t, 1/\Delta t]^T$, where $\Delta t = t_2 - t_1$. This pixel velocity is the optical flow for the green object. For a more formal definition, you can check this [link](https://en.wikipedia.org/wiki/Optical_flow).
### Estimating camera velocity from optical flow
Now let's come back to the pinhole camera model. If we somehow know the camera's distance from the object (i.e. $z$), we can easily estimate the motion of the green object relative to the camera. Let's say this relative velocity is $[v_x, v_y]^T$; then:
$$
[v_x, v_y]^T = \frac{z}{f}[\Delta x'/\Delta t, \Delta y'/\Delta t]^T = \frac{z}{f}[2/\Delta t, 1/\Delta t]^T
$$
As we know, the green object was at rest, so the motion of the camera will be $[-v_x, -v_y]^T$. This was just an intuitive explanation, and there exist a number of different methods to estimate optical flow. Also, in an actual implementation we do not calculate the camera velocity from a single point; instead we estimate it from multiple points and then take the average.
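Putting the two ideas together, here is a sketch of that averaging step. All numbers are assumptions for illustration: image-plane displacements already converted to centimeters per frame, and a hypothetical 25 fps video.

```python
f = 1.0        # focal length, cm (assumed)
z = 60.0       # perpendicular distance from camera to scene, cm
dt = 1.0 / 25  # frame interval for an assumed 25 fps video, s

# image-plane displacements (cm per frame) of a few tracked static points
flows = [(0.02, 0.01), (0.021, 0.009)]
v_x = sum(z / f * dx / dt for dx, _ in flows) / len(flows)
v_y = sum(z / f * dy / dt for _, dy in flows) / len(flows)
v_cx, v_cy = -v_x, -v_y  # static scene => camera moves opposite to the flow
```

The averaging over several points is what makes the estimate robust to noise in any single track.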
---
## Your Task
Given an input video with **N** known static objects, estimate the camera motion. We have simplified the problem for you: the sample video contains 10 objects on a white background, and each of the 10 objects is either red (i.e. image locations having value `[255, 0, 0]`), blue (`[0, 0, 255]`), or green (`[0, 255, 0]`) in RGB notation. The following are the other useful parameters:
* focal length $f = 10mm$
* Perpendicular distance of the camera from the scene, i.e. $z = 60cm$
* Sensor size = 1cm x 1cm
* $\Delta t$: You do not need this explicitly. Hint: you know how to calculate the fps of a video, right?
**Calculate the camera motion ($v_{cx}$ and $v_{cy}$) in $cm/s$ and plot $v_{cx}$ vs $t$ and $v_{cy}$ vs $t$ plot**
**Note:** You can use OpenCV modules for reading video files, thresholding, etc., but you should not use any direct implementation of feature extraction or optical flow calculation.
```
%%bash
pip install git+https://github.com/ee604/ee604_plugins
# Importing required libraries
import cv2
import numpy as np
from IPython.display import display
from PIL import Image
import matplotlib.pyplot as plt
from ee604_plugins import download_dataset, playVideo
download_dataset(assignment_no=1, task_no=4)
# This is the video
playVideo("data/optical_flow.mp4", width=600, height=600)
def calculate_motion(video_path, video_length=50, f=1, z=60):
'''
Inputs:
    + video_path - path to the video file using which you will estimate camera motion
+ video_length - this is the length of your video in secs
+ f - focal length
+ z - perpendicular distance of the camera from objects
    Outputs:
+ v_cx - camera velocity in x direction at time 't'
+ v_cy - camera velocity in y direction at time 't'
+ t - time 't'
Allowed external package:
+ You must not use any direct implementation of optical flow
'''
#############################
# Start your code from here #
#############################
# Replace with your code...
#############################
# End your code here ########
#############################
return v_cx, v_cy, t
v_cx, v_cy, t = calculate_motion("data/optical_flow.mp4")
plt.plot(t, v_cx, label="v_cx")
plt.plot(t, v_cy, label="v_cy")
plt.show()
```
# Pandas
The examples below were taken from the following article: https://towardsdatascience.com/pandas-from-basic-to-advanced-for-data-scientists-aee4eed19cfe
Pandas is the Python library most commonly used for data manipulation and analysis.
## Importing pandas
Let's import pandas. By convention, we alias it as pd.
```
import pandas as pd
```
## Reading a dataframe
To read an external file, we commonly use read_csv, which comes with pandas.
```
df = pd.read_csv("exemplo.csv") # df stands for dataframe
```
## Filtering the dataframe
Sometimes you need to work with only specific columns. What if you wanted to analyze how the temperature is changing? In this case, let's select the temperature and the day, as in the cell below:
```
df[['temperature','day']]
```
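A small aside (with a made-up dataframe): double brackets select a DataFrame, while single brackets select one column as a Series.

```python
import pandas as pd

df = pd.DataFrame({'day': ['mon', 'tue'], 'temperature': [20, 25]})
print(type(df[['temperature', 'day']]).__name__)  # DataFrame
print(type(df['temperature']).__name__)           # Series
```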
## Renaming a column in the dataframe
Gave a column the wrong name and don't want to go back to Excel to change it? You can do it here as follows:
```
df.rename(columns = {'temperature': 'temp', 'event':'eventtype'})
```
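Note that `rename` returns a new DataFrame and leaves the original unchanged unless you reassign the result (or pass `inplace=True`). A sketch with a made-up frame:

```python
import pandas as pd

df = pd.DataFrame({'temperature': [20], 'event': ['Sunny']})
renamed = df.rename(columns={'temperature': 'temp', 'event': 'eventtype'})
print(list(df.columns))       # ['temperature', 'event'] - unchanged
print(list(renamed.columns))  # ['temp', 'eventtype']
```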
We can also filter the dataframe. Suppose you would like to see the cities with sunny weather along with the dates. You can do the following:
```
df[['day','city']][df.event=='Sunny']
```
## Using groupby and agg
What if you wanted to see the average temperature and the average wind speed? We can use groupby and agg to do this.
```
df.groupby('city').agg({'temperature':'mean', 'windspeed':'mean'})
```
## Merging two dataframes
What if there is more than one dataframe and you want to analyze them together? In that case we need to merge them, so we use merge! In the following examples, our join key will be the 'city' column.
First, let's create two dataframes:
```
df1 = pd.DataFrame({
'city': ['new york','florida','mumbai'],
'temperature': [22,37,35],
})
df2 = pd.DataFrame({
'city': ['chicago','new york','florida'],
'humidity': [65,68,75],
})
df1
df2
```
Now let's use merge. First, the simple merge, which returns only the rows that match in both dataframes.
```
pd.merge(df1,df2,on='city')
```
If you want to get all rows from both dataframes, you can add a new parameter: how.
```
pd.merge(df1,df2,on='city',how='outer')
```
We can also do a LEFT JOIN, which returns all records from the left dataframe (df1) and the matching records from the right dataframe (df2). The result is NULL on the right side when there is no match.
```
pd.merge(df1,df2,on='city',how='left')
```
We can also do a RIGHT JOIN, which returns all records from the right dataframe (df2) and the matching records from the left dataframe (df1). The result is NULL on the left side when there is no match.
```
pd.merge(df1,df2,on='city',how='right')
```
## Crosstab
Suppose you want to see the frequency count of each event type (rainy, sunny, etc.) in each city. Crosstab makes this very easy.
```
pd.crosstab(df.city,df.event)
```
```
! pip install -U pip
! pip install -U torch==1.5.0
! pip install -U torchtext==0.6.0
! pip install -U matplotlib==3.2.1
! pip install -U "clearml>=0.15.0"
! pip install -U tensorboard==2.2.1
import os
import time
import torch
import torch.nn as nn
from torchtext.datasets import text_classification
from torch.utils.tensorboard import SummaryWriter
from clearml import Task
%matplotlib inline
task = Task.init(project_name='Text Example', task_name='text classifier')
configuration_dict = {'number_of_epochs': 6, 'batch_size': 16, 'ngrams': 2, 'base_lr': 1.0}
configuration_dict = task.connect(configuration_dict) # enabling configuration override by clearml
print(configuration_dict) # printing actual configuration (after override in remote mode)
if not os.path.isdir('./data'):
os.mkdir('./data')
train_dataset, test_dataset = text_classification.DATASETS['AG_NEWS'](root='./data',
ngrams=configuration_dict.get('ngrams', 2))
vocabulary = train_dataset.get_vocab()
def generate_batch(batch):
label = torch.tensor([entry[0] for entry in batch])
# original data batch input are packed into a list and concatenated as a single tensor
text = [entry[1] for entry in batch]
# offsets is a tensor of delimiters to represent the beginning index of each sequence in the text tensor.
offsets = [0] + [len(entry) for entry in text]
# torch.Tensor.cumsum returns the cumulative sum of elements in the dimension dim.
offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
text = torch.cat(text)
return text, offsets, label
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size = configuration_dict.get('batch_size', 16),
shuffle = True, pin_memory=True, collate_fn=generate_batch)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size = configuration_dict.get('batch_size', 16),
shuffle = False, pin_memory=True, collate_fn=generate_batch)
classes = ("World", "Sports", "Business", "Sci/Tec")
class TextSentiment(nn.Module):
def __init__(self, vocab_size, embed_dim, num_class):
super().__init__()
self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
self.fc = nn.Linear(embed_dim, num_class)
self.init_weights()
def init_weights(self):
initrange = 0.5
self.embedding.weight.data.uniform_(-initrange, initrange)
self.fc.weight.data.uniform_(-initrange, initrange)
self.fc.bias.data.zero_()
def forward(self, text, offsets):
embedded = self.embedding(text, offsets)
return self.fc(embedded)
VOCAB_SIZE = len(train_dataset.get_vocab())
EMBED_DIM = 32
NUN_CLASS = len(train_dataset.get_labels())
model = TextSentiment(VOCAB_SIZE, EMBED_DIM, NUN_CLASS)
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
print('Device to use: {}'.format(device))
model.to(device)
criterion = torch.nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=configuration_dict.get('base_lr', 1.0))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 2, gamma=0.9)
tensorboard_writer = SummaryWriter('./tensorboard_logs')
def train_func(data, epoch):
# Train the model
train_loss = 0
train_acc = 0
for batch_idx, (text, offsets, cls) in enumerate(data):
optimizer.zero_grad()
text, offsets, cls = text.to(device), offsets.to(device), cls.to(device)
output = model(text, offsets)
loss = criterion(output, cls)
train_loss += loss.item()
loss.backward()
optimizer.step()
train_acc += (output.argmax(1) == cls).sum().item()
iteration = epoch * len(train_loader) + batch_idx
if batch_idx % log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'
.format(epoch, batch_idx * len(cls), len(train_dataset),
100. * batch_idx / len(train_loader), loss))
tensorboard_writer.add_scalar('training loss/loss', loss, iteration)
tensorboard_writer.add_scalar('learning rate/lr', optimizer.param_groups[0]['lr'], iteration)
# Adjust the learning rate
scheduler.step()
return train_loss / len(train_dataset), train_acc / len(train_dataset)
def test(data, epoch):
    total_loss = 0
    acc = 0
    for idx, (text, offsets, cls) in enumerate(data):
        text, offsets, cls = text.to(device), offsets.to(device), cls.to(device)
        with torch.no_grad():
            output = model(text, offsets)
            predicted = output.argmax(1)
            # accumulate into a separate variable so the running total is not
            # overwritten by the current batch loss
            loss = criterion(output, cls)
            total_loss += loss.item()
            acc += (predicted == cls).sum().item()

            iteration = (epoch + 1) * len(train_loader)
            if idx % debug_interval == 0:  # report debug text every "debug_interval" mini-batches
                offsets = offsets.tolist() + [len(text)]
                for n, (pred, label) in enumerate(zip(predicted, cls)):
                    ids_to_text = [vocabulary.itos[id] for id in text[offsets[n]:offsets[n+1]]]
                    series = '{}_{}_label_{}_pred_{}'.format(idx, n, classes[label], classes[pred])
                    tensorboard_writer.add_text('Test text samples/{}'.format(series),
                                                ' '.join(ids_to_text), iteration)
    return total_loss / len(test_dataset), acc / len(test_dataset)
log_interval = 200
debug_interval = 500
for epoch in range(configuration_dict.get('number_of_epochs', 6)):
start_time = time.time()
train_loss, train_acc = train_func(train_loader, epoch)
test_loss, test_acc = test(test_loader, epoch)
secs = int(time.time() - start_time)
print('Epoch: %d' %(epoch + 1), " | time in %d minutes, %d seconds" %(secs / 60, secs % 60))
print(f'\tLoss: {train_loss:.4f}(train)\t|\tAcc: {train_acc * 100:.1f}%(train)')
print(f'\tLoss: {test_loss:.4f}(test)\t|\tAcc: {test_acc * 100:.1f}%(test)')
tensorboard_writer.add_scalar('accuracy/train', train_acc, (epoch + 1) * len(train_loader))
tensorboard_writer.add_scalar('accuracy/test', test_acc, (epoch + 1) * len(train_loader))
from torchtext.data.utils import ngrams_iterator
from torchtext.data.utils import get_tokenizer
def predict(text, model, vocab, ngrams):
tokenizer = get_tokenizer("basic_english")
with torch.no_grad():
text = torch.tensor([vocab[token]
for token in ngrams_iterator(tokenizer(text), ngrams)])
output = model(text, torch.tensor([0]))
return output.argmax(1).item()
ex_text_str = "MEMPHIS, Tenn. – Four days ago, Jon Rahm was \
enduring the season’s worst weather conditions on Sunday at The \
Open on his way to a closing 75 at Royal Portrush, which \
considering the wind and the rain was a respectable showing. \
Thursday’s first round at the WGC-FedEx St. Jude Invitational \
was another story. With temperatures in the mid-80s and hardly any \
wind, the Spaniard was 13 strokes better in a flawless round. \
Thanks to his best putting performance on the PGA Tour, Rahm \
finished with an 8-under 62 for a three-stroke lead, which \
was even more impressive considering he’d never played the \
front nine at TPC Southwind."
ans = predict(ex_text_str, model.to("cpu"), vocabulary, configuration_dict.get('ngrams', 2))
print("This is a %s news" %classes[ans])
```
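The offsets logic in `generate_batch` above can be illustrated with a pure-Python sketch (toy token ids, no torch required). The offsets mark where each sequence starts inside the single concatenated tensor that `nn.EmbeddingBag` consumes:

```python
from itertools import accumulate

batch_texts = [[3, 7, 2], [5, 1]]               # two toy token-id sequences
lengths = [0] + [len(t) for t in batch_texts]   # [0, 3, 2]
offsets = list(accumulate(lengths))[:-1]        # start index of each sequence
text = [tok for t in batch_texts for tok in t]  # concatenated "tensor"
print(offsets)  # [0, 3]
print(text)     # [3, 7, 2, 5, 1]
```

This is equivalent to the torch version, which drops the last length before calling `cumsum`; either way the result is the list of start indices.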
# Bite Size Bayes
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## The "Girl Named Florida" problem
In [The Drunkard's Walk](https://www.goodreads.com/book/show/2272880.The_Drunkard_s_Walk), Leonard Mlodinow presents "The Girl Named Florida Problem":
>"In a family with two children, what are the chances, if [at least] one of the children is a girl named Florida, that both children are girls?"
I added "at least" to Mlodinow's statement of the problem to avoid a subtle ambiguity (which I'll explain at the end).
To avoid some real-world complications, let's assume that this question takes place in an imaginary city called Statesville where:
* Every family has two children.
* 50% of children are male and 50% are female.
* All children are named after U.S. states, and all state names are chosen with equal probability.
* Genders and names within each family are chosen independently.
To answer Mlodinow's question, I'll create a DataFrame with one row for each family in Statesville and a column for the gender and name of each child.
Here's a list of genders and a [dictionary of state names](https://gist.github.com/tlancon/9794920a0c3a9990279de704f936050c):
```
gender = ['B', 'G']
us_states = {
'Alabama': 'AL',
'Alaska': 'AK',
'Arizona': 'AZ',
'Arkansas': 'AR',
'California': 'CA',
'Colorado': 'CO',
'Connecticut': 'CT',
'Delaware': 'DE',
# 'District of Columbia': 'DC',
'Florida': 'FL',
'Georgia': 'GA',
'Hawaii': 'HI',
'Idaho': 'ID',
'Illinois': 'IL',
'Indiana': 'IN',
'Iowa': 'IA',
'Kansas': 'KS',
'Kentucky': 'KY',
'Louisiana': 'LA',
'Maine': 'ME',
'Maryland': 'MD',
'Massachusetts': 'MA',
'Michigan': 'MI',
'Minnesota': 'MN',
'Mississippi': 'MS',
'Missouri': 'MO',
'Montana': 'MT',
'Nebraska': 'NE',
'Nevada': 'NV',
'New Hampshire': 'NH',
'New Jersey': 'NJ',
'New Mexico': 'NM',
'New York': 'NY',
'North Carolina': 'NC',
'North Dakota': 'ND',
'Ohio': 'OH',
'Oklahoma': 'OK',
'Oregon': 'OR',
'Pennsylvania': 'PA',
'Rhode Island': 'RI',
'South Carolina': 'SC',
'South Dakota': 'SD',
'Tennessee': 'TN',
'Texas': 'TX',
'Utah': 'UT',
'Vermont': 'VT',
'Virginia': 'VA',
'Washington': 'WA',
'West Virginia': 'WV',
'Wisconsin': 'WI',
'Wyoming': 'WY'
}
```
To enumerate all possible combinations of genders and names, I'll use `from_product`, which makes a Pandas MultiIndex.
```
names = ['gender1', 'name1', 'gender2', 'name2']
index = pd.MultiIndex.from_product([gender, us_states]*2,
names=names)
```
Now I'll create a DataFrame with that index:
```
df = pd.DataFrame(index=index)
df.head()
```
It will be easier to work with if I reset the index so the levels in the MultiIndex become columns.
```
df = df.reset_index()
df.head()
```
This DataFrame contains one row for each family in Statesville; for example, the first row represents a family with two boys, both named Alabama.
As it turns out, there are 10,000 families in Statesville:
```
len(df)
```
## Probabilities
To compute probabilities, we'll use Boolean Series. For example, the following Series is `True` for each family where the first child is a girl:
```
girl1 = (df['gender1']=='G')
```
The following function takes a Boolean Series and computes the fraction of `True` values, which is the probability that the condition is true.
```
def prob(A):
"""Computes the probability of a proposition, A.
A: Boolean series
returns: probability
"""
assert isinstance(A, pd.Series)
assert A.dtype == 'bool'
return A.mean()
```
Not surprisingly, the probability is 50% that the first child is a girl.
```
prob(girl1)
```
And so is the probability that the second child is a girl.
```
girl2 = (df['gender2']=='G')
prob(girl2)
```
Mlodinow's question is a conditional probability: given that one of the children is a girl named Florida, what is the probability that both children are girls?
To compute conditional probabilities, I'll use this function, which takes two Boolean Series, `A` and `B`, and computes the conditional probability $P(A~\mathrm{given}~B)$.
```
def conditional(A, B):
"""Conditional probability of A given B.
A: Boolean series
B: Boolean series
returns: probability
"""
return prob(A[B])
```
For example, here's the probability that the second child is a girl, given that the first child is a girl.
```
conditional(girl2, girl1)
```
The result is 50%, which is the same as the unconditioned probability that the second child is a girl:
```
prob(girl2)
```
So that confirms that the genders of the two children are independent, which is one of my assumptions.
Now, Mlodinow's question asks about the probability that both children are girls, so let's compute that.
```
gg = (girl1 & girl2)
prob(gg)
```
In 25% of families, both children are girls. And that should be no surprise: because they are independent, the probability of the conjunction is the product of the probabilities:
```
prob(girl1) * prob(girl2)
```
While we're at it, we can also compute the conditional probability of two girls, given that the first child is a girl.
```
conditional(gg, girl1)
```
That's what we should expect. If we know the first child is a girl, and the probability is 50% that the second child is a girl, the probability of two girls is 50%.
## At least one girl
Before I answer Mlodinow's question, I'll warm up with a simpler version: given that at least one of the children is a girl, what is the probability that both are?
To compute the probability of "at least one girl" I will use the `|` operator, which computes the logical `OR` of the two Series:
```
at_least_one_girl = (girl1 | girl2)
prob(at_least_one_girl)
```
75% of the families in Statesville have at least one girl.
Now we can compute the conditional probability of two girls, given that the family has at least one girl.
```
conditional(gg, at_least_one_girl)
```
Of the families that have at least one girl, `1/3` have two girls.
If you have not thought about questions like this before, that result might surprise you. The following figure might help:
<img width="200" src="https://github.com/AllenDowney/BiteSizeBayes/raw/master/GirlNamedFlorida1.png">
In the top left, the gray square represents a family with two boys; in the lower right, the dark blue square represents a family with two girls.
The other two quadrants represent families with one girl, but note that there are two ways that can happen: the first child can be a girl or the second child can be a girl.
There are an equal number of families in each quadrant.
If we select families with at least one girl, we eliminate the gray square in the upper left. Of the remaining three squares, one of them has two girls.
So if we know a family has at least one girl, the probability they have two girls is 33%.
## What's in a name?
So far, we have computed two conditional probabilities:
* Given that the first child is a girl, the probability is 50% that both children are girls.
* Given that at least one child is a girl, the probability is 33% that both children are girls.
Now we're ready to answer Mlodinow's question:
* Given that at least one child is a girl *named Florida*, what is the probability that both children are girls?
If your intuition is telling you that the name of the child can't possibly matter, brace yourself.
Here's the probability that the first child is a girl named Florida.
```
gf1 = girl1 & (df['name1']=='Florida')
prob(gf1)
```
And the probability that the second child is a girl named Florida.
```
gf2 = girl2 & (df['name2']=='Florida')
prob(gf2)
```
To compute the probability that at least one of the children is a girl named Florida, we can use the `|` operator again.
```
at_least_one_girl_named_florida = (gf1 | gf2)
prob(at_least_one_girl_named_florida)
```
We can double-check it by using the disjunction rule:
```
prob(gf1) + prob(gf2) - prob(gf1 & gf2)
```
So, the percentage of families with at least one girl named Florida is a little less than 2%.
Now, finally, here is the answer to Mlodinow's question:
```
conditional(gg, at_least_one_girl_named_florida)
```
That's right, the answer is about 49.7%. To summarize:
* Given that the first child is a girl, the probability is 50% that both children are girls.
* Given that at least one child is a girl, the probability is 33% that both children are girls.
* Given that at least one child is a girl *named Florida*, the probability is 49.7% that both children are girls.
If your brain just exploded, I'm sorry.
Here's my best attempt to put your brain back together.
For each child, there are three possibilities: boy (B), girl not named Florida (G), and girl named Florida (GF), with these probabilities:
$P(B) = 1/2 $
$P(G) = 1/2 - x $
$P(GF) = x $
where $x$ is the fraction of children who are girls named Florida.
In families with two children, here are the possible combinations and their probabilities:
$P(B, B) = (1/2)(1/2)$
$P(B, G) = (1/2)(1/2-x)$
$P(B, GF) = (1/2)(x)$
$P(G, B) = (1/2-x)(1/2)$
$P(G, G) = (1/2-x)(1/2-x)$
$P(G, GF) = (1/2-x)(x)$
$P(GF, B) = (x)(1/2)$
$P(GF, G) = (x)(1/2-x)$
$P(GF, GF) = (x)(x)$
If we select only the families that have at least one girl named Florida, here are their probabilities:
$P(B, GF) = (1/2)(x)$
$P(G, GF) = (1/2-x)(x)$
$P(GF, B) = (x)(1/2)$
$P(GF, G) = (x)(1/2-x)$
$P(GF, GF) = (x)(x)$
Of those, if we select the families with two girls, here are their probabilities:
$P(G, GF) = (1/2-x)(x)$
$P(GF, G) = (x)(1/2-x)$
$P(GF, GF) = (x)(x)$
To get the conditional probability of two girls, given at least one girl named Florida, we can add up the last 3 probabilities and divide by the sum of the previous 5 probabilities.
With a little algebra, we get:
$P(\mathrm{two~girls} ~|~ \mathrm{at~least~one~girl~named~Florida}) = (1 - x) / (2 - x)$
As $x$ approaches $0$ the answer approaches $1/2$.
As $x$ approaches $1/2$, the answer approaches $1/3$.
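As a sanity check (not in the original), the closed form can be verified numerically against the sum-of-cases expressions above, using exact fractions:

```python
from fractions import Fraction

def p_two_girls_given_gf(x):
    # numerator: (G, GF) + (GF, G) + (GF, GF)
    num = (Fraction(1, 2) - x) * x + x * (Fraction(1, 2) - x) + x * x
    # denominator: the five cases with at least one girl named Florida
    den = (Fraction(1, 2) * x + (Fraction(1, 2) - x) * x
           + x * Fraction(1, 2) + x * (Fraction(1, 2) - x) + x * x)
    return num / den

for x in [Fraction(1, 100), Fraction(1, 10), Fraction(1, 2)]:
    assert p_two_girls_given_gf(x) == (1 - x) / (2 - x)
```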
Here's what all of that looks like graphically:
<img width="200" src="https://github.com/AllenDowney/BiteSizeBayes/raw/master/GirlNamedFlorida2.png">
Here `B` is a boy, `Gx` is a girl with some property `X`, and `G` is a girl who doesn't have that property. If we select all families with at least one `Gx`, we get the five blue squares (light and dark). Of those, the families with two girls are the three dark blue squares.
If property `X` is common, the ratio of dark blue to all blue approaches `1/3`. If `X` is rare, the same ratio approaches `1/2`.
In the "Girl Named Florida" problem, `x` is 1/100, and we can compute the result:
```
x = 1/100
(1-x) / (2-x)
```
Which is what we got by counting all of the families in Statesville.
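The same 49.7% can also be reproduced without pandas, by enumerating all 10,000 Statesville families directly (a standalone sketch; any 50 distinct names would do, so integer stand-ins are used here):

```python
from itertools import product

genders = ['B', 'G']
names = range(50)  # stand-ins for the 50 state names; 0 plays the role of Florida
florida = 0

families = list(product(genders, names, genders, names))  # 10,000 families
gf = [f for f in families
      if (f[0] == 'G' and f[1] == florida) or (f[2] == 'G' and f[3] == florida)]
gg = [f for f in gf if f[0] == 'G' and f[2] == 'G']
print(len(gg), '/', len(gf), '=', len(gg) / len(gf))  # 99 / 199 = 0.497...
```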
## Controversy
[I wrote about this problem in my blog in 2011](http://allendowney.blogspot.com/2011/11/girl-named-florida-solutions.html). As you can see in the comments, my explanation was not met with universal acclaim.
One of the issues that came up is the challenge of stating the question unambiguously. In this article, I rephrased Mlodinow's statement to clarify it.
But since we have come all this way, let me also answer a different version of the problem.
>Suppose you choose a house in Statesville at random and ring the doorbell. A girl (who lives there) opens the door and you learn that her name is Florida. What is the probability that the other child in this house is a girl?
In this version of the problem, the selection process is different. Instead of selecting houses with at least one girl named Florida, you selected a house, then selected a child, and learned that her name is Florida.
Since the selection of the child was arbitrary, we can say without loss of generality that the child you met is the first child in the table.
In that case, the conditional probability of two girls is:
```
conditional(gg, gf1)
```
Which is the same as the conditional probability, given that the first child is a girl:
```
conditional(gg, girl1)
```
So in this version of the problem, the girl's name is irrelevant.
```
%pylab inline
rcParams["figure.figsize"] = (16,5)
import sys
sys.path.insert(0, "..")
!pip3 install pysptk
!pip3 install pyworld
import torch
from scipy.io import wavfile
import pysptk
from pysptk.synthesis import Synthesizer, MLSADF
import pyworld
from os.path import join, basename
#from nnmnkwii import preprocessing as P
#from nnmnkwii.paramgen import unit_variance_mlpg_matrix
#import gantts
#from hparams import vc as hp
import librosa
import librosa.display
import IPython
from IPython.display import Audio
from os.path import join, basename
name = "1.wav"
path_algan="/content/data/algan_vc/"
path_cycle="/content/data/cyclegan_vc/"
path_cycle2="/content/data/cyclegan_vc2/"
path_spcycle="/content/data/sp_cycle/"
path_cycle_drn="/content/data/al_drn/"
path_cycle_blrs="/content/data/al_blrs/"
path_cycle_l1="/content/data/al_l1/"
path_cycle_l2="/content/data/al_l2/"
src_path_algan = join(path_algan, name)
src_path_cycle= join(path_cycle, name)
src_path_cycle2 = join(path_cycle2, name)
src_path_spcycle = join(path_spcycle, name)
src_path_cycle_drn = join(path_cycle_drn, name)
src_path_cycle_blrs = join(path_cycle_blrs, name)
src_path_cycle_l1 = join(path_cycle_l1, name)
src_path_cycle_l2= join(path_cycle_l2, name)
print(src_path_algan)
print(src_path_cycle)
print(src_path_cycle2)
print(src_path_spcycle)
print(src_path_cycle_drn)
print(src_path_cycle_blrs)
print(src_path_cycle_l1)
print(src_path_cycle_l2)
def compute_static_features(path):
fs, x = wavfile.read(path)
x = x.astype(np.float64)
f0, timeaxis = pyworld.dio(x, fs, frame_period=5.0)
f0 = pyworld.stonemask(x, f0, timeaxis, fs)
spectrogram = pyworld.cheaptrick(x, f0, timeaxis, fs)
aperiodicity = pyworld.d4c(x, f0, timeaxis, fs)
alpha = pysptk.util.mcepalpha(fs)
mc = pysptk.sp2mc(spectrogram, order=24, alpha=alpha)
c0, mc = mc[:, 0], mc[:, 1:]
return mc
algan=compute_static_features(src_path_algan).T
cycle=compute_static_features(src_path_cycle).T
cycle2=compute_static_features(src_path_cycle2).T
spcycle=compute_static_features(src_path_spcycle).T
drn=compute_static_features(src_path_cycle_drn).T
blrs=compute_static_features(src_path_cycle_blrs).T
l1=compute_static_features(src_path_cycle_l1).T
l2=compute_static_features(src_path_cycle_l2).T
print(algan)
print(cycle)
print(cycle2)
print(spcycle)
print(drn)
print(blrs)
print(l1)
print(l2)
def vis_difference(idx, algan, cycle, cycle2, spcycle, drn, blrs, l1, l2, which_dims=8, T_max=None):
    """Plot one mel-cepstral dimension over time.

    Only the DRN-ablated trajectory is drawn here; the other model
    outputs are accepted so additional ax.plot() lines can be added.
    """
    fig, ax = plt.subplots()
    if T_max is not None:
        algan, cycle, cycle2, spcycle, drn, blrs, l1, l2 = (
            algan[:T_max], cycle[:T_max], cycle2[:T_max], spcycle[:T_max],
            drn[:T_max], blrs[:T_max], l1[:T_max], l2[:T_max])
    ax.plot(drn, "-", linewidth=2, label="ALGAN-VC without DRN")
    lgd = plt.legend(loc=0, prop={'size': 16}, bbox_to_anchor=(1, 1))
    plt.xlabel("Frame index", fontsize=14)
    plt.ylabel("{}-th Mel-cepstrum".format(idx), fontsize=18)
    fig.savefig("names{:03}.png".format(idx), bbox_extra_artists=(lgd,), bbox_inches='tight')
# Index each (24, T) mel-cepstrum array by the dimension being plotted,
# so the y-axis label matches the data
dims = [8, 13, 22]
for dim in dims:
    vis_difference(dim, algan[dim], cycle[dim], cycle2[dim], spcycle[dim],
                   drn[dim], blrs[dim], l1[dim], l2[dim], which_dims=8, T_max=300)
```
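Beyond eyeballing the trajectories, a standard scalar summary of how far two mel-cepstrum sequences are apart is the mel-cepstral distortion (MCD). This is a minimal NumPy sketch; `mcep_ref` and `mcep_conv` are hypothetical stand-ins for two time-aligned `(frames, dims)` arrays as returned by `compute_static_features`:

```python
import numpy as np

def mel_cepstral_distortion(mc_ref, mc_conv):
    """Frame-averaged mel-cepstral distortion in dB between two
    time-aligned mel-cepstrum sequences of shape (frames, dims)."""
    diff = mc_ref - mc_conv
    # (10 / ln 10) * sqrt(2) constant from the standard MCD definition
    const = 10.0 / np.log(10.0) * np.sqrt(2.0)
    return const * np.mean(np.sqrt(np.sum(diff ** 2, axis=1)))

# Hypothetical aligned sequences (e.g. target speaker vs. converted speech)
rng = np.random.default_rng(0)
mcep_ref = rng.normal(size=(100, 24))
mcep_conv = mcep_ref + 0.1 * rng.normal(size=(100, 24))
print(mel_cepstral_distortion(mcep_ref, mcep_conv))
```

In practice the two sequences would first be aligned with dynamic time warping before computing the distortion.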
<a href="https://colab.research.google.com/github/mengwangk/dl-projects/blob/master/04_02_auto_ml_4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Automated ML
```
COLAB = True
if COLAB:
!sudo apt-get install git-lfs && git lfs install
!rm -rf dl-projects
!git clone https://github.com/mengwangk/dl-projects
#!cd dl-projects && ls -l --block-size=M
if COLAB:
!cp dl-projects/utils* .
!cp dl-projects/preprocess* .
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import scipy.stats as ss
import math
import matplotlib
from scipy import stats
from collections import Counter
from pathlib import Path
plt.style.use('fivethirtyeight')
sns.set(style="ticks")
# Automated feature engineering
import featuretools as ft
# Machine learning
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score, precision_recall_curve, roc_curve, mean_squared_error, accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from IPython.display import display
from utils import *
from preprocess import *
# The Answer to the Ultimate Question of Life, the Universe, and Everything.
np.random.seed(42)
%aimport
```
## Preparation
```
if COLAB:
from google.colab import drive
drive.mount('/content/gdrive')
GDRIVE_DATASET_FOLDER = Path('gdrive/My Drive/datasets/')
if COLAB:
DATASET_PATH = GDRIVE_DATASET_FOLDER
ORIGIN_DATASET_PATH = Path('dl-projects/datasets')
else:
DATASET_PATH = Path("datasets")
ORIGIN_DATASET_PATH = Path('datasets')
DATASET = DATASET_PATH/"feature_matrix_2.csv"
ORIGIN_DATASET = ORIGIN_DATASET_PATH/'4D.zip'
if COLAB:
!ls -l gdrive/"My Drive"/datasets/ --block-size=M
!ls -l dl-projects/datasets --block-size=M
data = pd.read_csv(DATASET, header=0, sep=',', quotechar='"', parse_dates=['time'])
origin_data = format_tabular(ORIGIN_DATASET)
data.info()
```
## Exploratory Data Analysis
```
feature_matrix = data
feature_matrix.columns
feature_matrix.head(4).T
origin_data[origin_data['LuckyNo']==911].head(10)
# feature_matrix.groupby('time')['COUNT(Results)'].mean().plot()
# plt.title('Average Monthly Count of Results')
# plt.ylabel('Strike Per Number')
```
## Feature Selection
```
from utils import feature_selection
%load_ext autoreload
%autoreload 2
feature_matrix_selection = feature_selection(feature_matrix.drop(columns = ['time', 'NumberId']))
feature_matrix_selection['time'] = feature_matrix['time']
feature_matrix_selection['NumberId'] = feature_matrix['NumberId']
feature_matrix_selection['Label'] = feature_matrix['Label']
feature_matrix_selection.columns
```
## Correlations
```
feature_matrix_selection.shape
corrs = feature_matrix_selection.corr().sort_values('TotalStrike')
corrs['TotalStrike'].head()
corrs['Label'].dropna().tail(8)
corrs['TotalStrike'].dropna().tail(8)
```
## Visualization
```
#pip install autoviz
#from autoviz.AutoViz_Class import AutoViz_Class
```
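If a quick visual of the correlation structure is wanted instead of autoviz, a heatmap of `feature_matrix_selection.corr()` works well. A minimal matplotlib sketch; the small random frame is a hypothetical stand-in for the real numeric columns:

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen so this also runs headless
import matplotlib.pyplot as plt

# Hypothetical stand-in for feature_matrix_selection's numeric columns
rng = np.random.default_rng(42)
df = pd.DataFrame(rng.normal(size=(200, 4)), columns=["f1", "f2", "f3", "Label"])

corr = df.corr()
fig, ax = plt.subplots(figsize=(5, 4))
im = ax.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
ax.set_xticks(range(len(corr)))
ax.set_xticklabels(corr.columns, rotation=45)
ax.set_yticks(range(len(corr)))
ax.set_yticklabels(corr.columns)
fig.colorbar(im, ax=ax, label="Pearson r")
fig.tight_layout()
fig.savefig("correlations.png")
```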
### XgBoost
```
import xgboost as xgb
model = xgb.XGBClassifier()
def predict_dt(dt, feature_matrix, return_probs = False):
feature_matrix['date'] = feature_matrix['time']
# Subset labels
test_labels = feature_matrix.loc[feature_matrix['date'] == dt, 'Label']
train_labels = feature_matrix.loc[feature_matrix['date'] < dt, 'Label']
print(f"Size of test labels {len(test_labels)}")
print(f"Size of train labels {len(train_labels)}")
# Features
X_train = feature_matrix[feature_matrix['date'] < dt].drop(columns = ['NumberId', 'time',
'date', 'Label', 'TotalStrike', 'month', 'year', 'index'], errors='ignore')
X_test = feature_matrix[feature_matrix['date'] == dt].drop(columns = ['NumberId', 'time',
'date', 'Label', 'TotalStrike', 'month', 'year', 'index'], errors='ignore')
print(f"Size of X train {len(X_train)}")
print(f"Size of X test {len(X_test)}")
feature_names = list(X_train.columns)
# Impute and scale features
pipeline = Pipeline([('imputer', SimpleImputer(strategy = 'median')),
('scaler', MinMaxScaler())])
# Fit and transform training data
X_train = pipeline.fit_transform(X_train)
X_test = pipeline.transform(X_test)
# Labels
y_train = np.array(train_labels).reshape((-1, ))
y_test = np.array(test_labels).reshape((-1, ))
print('Training on {} observations.'.format(len(X_train)))
print('Testing on {} observations.\n'.format(len(X_test)))
# Train
model.fit(X_train, y_train)
# Make predictions
predictions = model.predict(X_test)
probs = model.predict_proba(X_test)[:, 1]
# Total positive
positive = np.where((predictions==1))
print('Total predicted to be positive: ', len(positive[0]))
# Calculate metrics
p = precision_score(y_test, predictions)
r = recall_score(y_test, predictions)
f = f1_score(y_test, predictions)
auc = roc_auc_score(y_test, probs)
a = accuracy_score(y_test, predictions)
cm = confusion_matrix(y_test, predictions)
print(f'Precision: {round(p, 5)}')
print(f'Recall: {round(r, 5)}')
print(f'F1 Score: {round(f, 5)}')
print(f'ROC AUC: {round(auc, 5)}')
print(f'Accuracy: {round(a, 5)}')
print('Confusion matrix')
print(cm)
# Total predicted matches
print('Predicted matches')
m = np.where((predictions==1))
print(len(m[0]), m)
if len(positive[0]) > 0:
# Matching draws
print('Matched draws')
m = np.where((predictions==1) & (y_test == 1))
print(len(m[0]), m)
data = feature_matrix.loc[feature_matrix['date'] == dt]
display(data.iloc[m[0]][
['NumberId', 'Label', 'month', 'MODE(Results.PrizeType)_1stPrizeNo',
'MODE(Results.PrizeType)_2ndPrizeNo',
'MODE(Results.PrizeType)_3rdPrizeNo',
'MODE(Results.PrizeType)_ConsolationNo1',
'MODE(Results.PrizeType)_ConsolationNo10',
'MODE(Results.PrizeType)_ConsolationNo2',
'MODE(Results.PrizeType)_ConsolationNo3',
'MODE(Results.PrizeType)_ConsolationNo4',
'MODE(Results.PrizeType)_ConsolationNo5',
'MODE(Results.PrizeType)_ConsolationNo6',
'MODE(Results.PrizeType)_ConsolationNo7',
'MODE(Results.PrizeType)_ConsolationNo8',
'MODE(Results.PrizeType)_ConsolationNo9',
'MODE(Results.PrizeType)_SpecialNo1',
'MODE(Results.PrizeType)_SpecialNo10',
'MODE(Results.PrizeType)_SpecialNo2',
'MODE(Results.PrizeType)_SpecialNo3',
'MODE(Results.PrizeType)_SpecialNo4',
'MODE(Results.PrizeType)_SpecialNo5',
'MODE(Results.PrizeType)_SpecialNo6',
'MODE(Results.PrizeType)_SpecialNo7',
'MODE(Results.PrizeType)_SpecialNo8',
'MODE(Results.PrizeType)_SpecialNo9']].T)
else:
print('No luck this month')
# Feature importances
fi = pd.DataFrame({'feature': feature_names, 'importance': model.feature_importances_})
if return_probs:
return fi, probs
return fi
# All the months
len(feature_matrix_selection['time'].unique()), feature_matrix_selection['time'].unique()
```
### Prediction by months
```
from utils import plot_feature_importances
%time oct_2018 = predict_dt(pd.Timestamp(2018, 10, 1), feature_matrix_selection)
norm_oct_2018_fi = plot_feature_importances(oct_2018)
%time may_2019 = predict_dt(pd.Timestamp(2019, 5, 1), feature_matrix_selection)
norm_may_2019_fi = plot_feature_importances(may_2019)
%time june_2019 = predict_dt(pd.Timestamp(2019, 6, 1), feature_matrix_selection)
norm_june_2019_fi = plot_feature_importances(june_2019)
%time july_2019 = predict_dt(pd.Timestamp(2019, 7, 1), feature_matrix_selection)
norm_july_2019_fi = plot_feature_importances(july_2019)
%time aug_2019 = predict_dt(pd.Timestamp(2019, 8, 1), feature_matrix_selection)
norm_aug_2019_fi = plot_feature_importances(aug_2019)
%time oct_2019 = predict_dt(pd.Timestamp(2019, 10, 1), feature_matrix_selection)
norm_oct_2019_fi = plot_feature_importances(oct_2019)
%time sep_2019 = predict_dt(pd.Timestamp(2019, 9, 1), feature_matrix_selection)
```
## Tuning - GridSearchCV
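This section is a stub; as a sketch of the intended approach, the classifier could be tuned with `GridSearchCV` over a time-ordered split. Shown here with the already-imported `RandomForestClassifier` on hypothetical data; the identical pattern applies to the `xgb.XGBClassifier` used above (swap in its parameter names, e.g. `max_depth`, `n_estimators`, `learning_rate`):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Hypothetical feature matrix standing in for the X_train built in predict_dt
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)

param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [3, 5],
}
# TimeSeriesSplit keeps each training fold strictly before its test fold,
# matching the date-based split used in predict_dt
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=TimeSeriesSplit(n_splits=3),
    scoring="roc_auc",
    n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 4))
```

The best estimator is then available as `search.best_estimator_` and can be passed back through the prediction loop above.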
## Check Raw Data
```
origin_data.tail(10)
origin_data[(origin_data['DrawDate'].dt.year == 2019) & (origin_data['DrawDate'].dt.month == 6)]['DrawNo'].nunique()
origin_data[(origin_data['DrawDate'].dt.year == 2019) & (origin_data['DrawDate'].dt.month == 10)]['DrawNo'].nunique()
print(15 * 45 + 14 * 45)
```
## Testing
```
import numpy as np
import pandas as pd
data = [['no_1', 1], ['no_2', 2], ['no_3', 3], ['no_4', 4], ['no_5', 5], ['no_6', 6], ['no_7', 7]]
# Create the pandas DataFrame
df = pd.DataFrame(data, columns = ['Name', 'Age'])
a = np.array([0,0,0,1,0,1, 1])
b = np.array([0,0,0,1,0,0, 1])
print(len(a))
m = np.where((a==1) & (b ==1))
print(len(m[0]), m[0], a[m[0]])
print(df.iloc[m[0]])
probs = np.array([0.03399902, 0.03295987, 0.03078781, 0.04921166, 0.03662422, 0.03233755])
print(np.average(probs))
mydict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4},
{'a': 100, 'b': 200, 'c': 300, 'd': 400},
{'a': 1000, 'b': 2000, 'c': 3000, 'd': 4000 }]
df = pd.DataFrame(mydict)
df.iloc[[0]][['a','b']]
```