##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Training checkpoints
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/checkpoint"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The phrase "Saving a TensorFlow model" typically means one of two things:
1. Checkpoints, OR
2. SavedModel.
Checkpoints capture the exact value of all parameters (`tf.Variable` objects) used by a model. Checkpoints do not contain any description of the computation defined by the model and thus are typically only useful when source code that will use the saved parameter values is available.
The SavedModel format on the other hand includes a serialized description of the computation defined by the model in addition to the parameter values (checkpoint). Models in this format are independent of the source code that created the model. They are thus suitable for deployment via TensorFlow Serving, TensorFlow Lite, TensorFlow.js, or programs in other programming languages (the C, C++, Java, Go, Rust, C# etc. TensorFlow APIs).
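The distinction can be illustrated with a loose, plain-Python analogy (this is **not** the TensorFlow API, just a sketch of the idea): a checkpoint stores only parameter values, so restoring them requires the source code that defines the computation, while a SavedModel also carries a description of the computation itself.

```python
# Loose plain-Python analogy (NOT the TensorFlow APIs).
import json

params = {"w": 2.0, "b": 0.5}

# "Checkpoint" analogy: parameter values only. Using them requires the
# original source code that defines the computation.
checkpoint = json.dumps(params)

def model(x, p):          # the source code a checkpoint depends on
    return p["w"] * x + p["b"]

restored = json.loads(checkpoint)
print(model(3.0, restored))  # 6.5

# "SavedModel" analogy: values PLUS a description of the computation,
# so a consumer can run it without the defining source code.
saved_model = {"params": params, "computation": "w * x + b"}

def run_saved(sm, x):
    scope = dict(sm["params"], x=x)
    return eval(sm["computation"], {}, scope)

print(run_saved(saved_model, 3.0))  # 6.5
```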
This guide covers APIs for writing and reading checkpoints.
## Setup
```
import tensorflow as tf
class Net(tf.keras.Model):
  """A simple linear model."""

  def __init__(self):
    super(Net, self).__init__()
    self.l1 = tf.keras.layers.Dense(5)

  def call(self, x):
    return self.l1(x)

net = Net()
```
## Saving from `tf.keras` training APIs
See the [`tf.keras` guide on saving and
restoring](https://www.tensorflow.org/guide/keras/save_and_serialize).
`tf.keras.Model.save_weights` saves a TensorFlow checkpoint.
```
net.save_weights('easy_checkpoint')
```
## Writing checkpoints
The persistent state of a TensorFlow model is stored in `tf.Variable` objects. These can be constructed directly, but are often created through high-level APIs like `tf.keras.layers` or `tf.keras.Model`.
The easiest way to manage variables is by attaching them to Python objects, then referencing those objects.
Subclasses of `tf.train.Checkpoint`, `tf.keras.layers.Layer`, and `tf.keras.Model` automatically track variables assigned to their attributes. The following example constructs a simple linear model, then writes checkpoints which contain values for all of the model's variables.
You can easily save a model-checkpoint with `Model.save_weights`.
### Manual checkpointing
#### Setup
To help demonstrate all the features of `tf.train.Checkpoint`, define a toy dataset and optimization step:
```
def toy_dataset():
  inputs = tf.range(10.)[:, None]
  labels = inputs * 5. + tf.range(5.)[None, :]
  return tf.data.Dataset.from_tensor_slices(
    dict(x=inputs, y=labels)).repeat().batch(2)

def train_step(net, example, optimizer):
  """Trains `net` on `example` using `optimizer`."""
  with tf.GradientTape() as tape:
    output = net(example['x'])
    loss = tf.reduce_mean(tf.abs(output - example['y']))
  variables = net.trainable_variables
  gradients = tape.gradient(loss, variables)
  optimizer.apply_gradients(zip(gradients, variables))
  return loss
```
#### Create the checkpoint objects
Use a `tf.train.Checkpoint` object to manually create a checkpoint, where the objects you want to checkpoint are set as attributes on the object.
A `tf.train.CheckpointManager` can also be helpful for managing multiple checkpoints.
```
opt = tf.keras.optimizers.Adam(0.1)
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
```
#### Train and checkpoint the model
The following training loop creates an instance of the model and of an optimizer, then gathers them into a `tf.train.Checkpoint` object. It calls the training step in a loop on each batch of data, and periodically writes checkpoints to disk.
```
def train_and_checkpoint(net, manager):
  ckpt.restore(manager.latest_checkpoint)
  if manager.latest_checkpoint:
    print("Restored from {}".format(manager.latest_checkpoint))
  else:
    print("Initializing from scratch.")

  for _ in range(50):
    example = next(iterator)
    loss = train_step(net, example, opt)
    ckpt.step.assign_add(1)
    if int(ckpt.step) % 10 == 0:
      save_path = manager.save()
      print("Saved checkpoint for step {}: {}".format(int(ckpt.step), save_path))
      print("loss {:1.2f}".format(loss.numpy()))

train_and_checkpoint(net, manager)
```
#### Restore and continue training
After the first training cycle you can pass a new model and manager, but pick up training exactly where you left off:
```
opt = tf.keras.optimizers.Adam(0.1)
net = Net()
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
train_and_checkpoint(net, manager)
```
The `tf.train.CheckpointManager` object deletes old checkpoints. Above it's configured to keep only the three most recent checkpoints.
```
print(manager.checkpoints) # List the three remaining checkpoints
```
These paths, e.g. `'./tf_ckpts/ckpt-10'`, are not files on disk. Instead they are prefixes for an `index` file and one or more data files which contain the variable values. These prefixes are grouped together in a single `checkpoint` file (`'./tf_ckpts/checkpoint'`) where the `CheckpointManager` saves its state.
```
!ls ./tf_ckpts
```
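The prefix-to-files relationship can be sketched with plain Python; the filenames created below are illustrative of TensorFlow's typical naming (`<prefix>.index` plus one or more `<prefix>.data-*` shards), not generated by TensorFlow itself:

```python
# Sketch: a checkpoint "path" such as ./tf_ckpts/ckpt-10 is a prefix,
# not a single file. The files created below mimic the usual layout.
import glob
import os
import tempfile

ckpt_dir = tempfile.mkdtemp()
for name in ["checkpoint",
             "ckpt-9.index", "ckpt-9.data-00000-of-00001",
             "ckpt-10.index", "ckpt-10.data-00000-of-00001"]:
    open(os.path.join(ckpt_dir, name), "w").close()

# Everything belonging to one checkpoint shares the same prefix.
prefix = os.path.join(ckpt_dir, "ckpt-10")
matching = sorted(glob.glob(prefix + ".*"))
print([os.path.basename(m) for m in matching])
# ['ckpt-10.data-00000-of-00001', 'ckpt-10.index']
```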
<a id="loading_mechanics"/>
## Loading mechanics
TensorFlow matches variables to checkpointed values by traversing a directed graph with named edges, starting from the object being loaded. Edge names typically come from attribute names in objects, for example the `"l1"` in `self.l1 = tf.keras.layers.Dense(5)`. `tf.train.Checkpoint` uses its keyword argument names, as in the `"step"` in `tf.train.Checkpoint(step=...)`.
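A minimal plain-Python sketch (not TensorFlow's actual implementation) of how such named-edge paths might be collected by walking attributes from a root object:

```python
# Sketch of path collection over a graph of named attribute edges.
class Var:
    """Stands in for tf.Variable."""
    def __init__(self, value):
        self.value = value

def collect_paths(obj, prefix=""):
    """Recursively map 'edge/edge/...' paths to Var objects."""
    paths = {}
    for name, child in vars(obj).items():
        edge = f"{prefix}/{name}" if prefix else name
        if isinstance(child, Var):
            paths[edge] = child
        elif hasattr(child, "__dict__"):
            paths.update(collect_paths(child, edge))
    return paths

class Layer:          # plays the role of tf.keras.layers.Dense
    def __init__(self):
        self.kernel = Var(0.0)
        self.bias = Var(0.0)

class Net:
    def __init__(self):
        self.l1 = Layer()

root = type("Root", (), {})()   # plays the role of tf.train.Checkpoint
root.net = Net()

print(sorted(collect_paths(root)))
# ['net/l1/bias', 'net/l1/kernel']
```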
The dependency graph from the example above looks like this:

The optimizer is in red, regular variables are in blue, and the optimizer slot variables are in orange. The other nodes—for example, representing the `tf.train.Checkpoint`—are in black.
Slot variables are part of the optimizer's state, but are created for a specific variable. For example, the `'m'` edges above correspond to momentum, which the Adam optimizer tracks for each variable. Slot variables are only saved in a checkpoint if the variable and the optimizer would both be saved, thus the dashed edges.
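The "saved only if both are reachable" rule for slot variables can be sketched in plain Python (a simplification, not TensorFlow internals):

```python
# Sketch of the slot-variable rule: an optimizer slot (e.g. Adam's 'm')
# is written to a checkpoint only when both the variable and the
# optimizer are reachable from the checkpoint root.
def savable_slots(reachable, slots):
    """slots: {(var_path, slot_name): value}; reachable: set of node names."""
    return {
        (var, slot): value
        for (var, slot), value in slots.items()
        if var in reachable and "optimizer" in reachable
    }

slots = {("l1/kernel", "m"): 0.1, ("l1/bias", "m"): 0.2}

# Both the variables and the optimizer are saved -> slots are saved too.
assert len(savable_slots({"l1/kernel", "l1/bias", "optimizer"}, slots)) == 2
# Optimizer not reachable -> no slot variables in the checkpoint.
assert savable_slots({"l1/kernel", "l1/bias"}, slots) == {}
```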
Calling `restore` on a `tf.train.Checkpoint` object queues the requested restorations, restoring variable values as soon as there's a matching path from the `Checkpoint` object. For example, you can load just the bias from the model you defined above by reconstructing one path to it through the network and the layer.
```
to_restore = tf.Variable(tf.zeros([5]))
print(to_restore.numpy()) # All zeros
fake_layer = tf.train.Checkpoint(bias=to_restore)
fake_net = tf.train.Checkpoint(l1=fake_layer)
new_root = tf.train.Checkpoint(net=fake_net)
status = new_root.restore(tf.train.latest_checkpoint('./tf_ckpts/'))
print(to_restore.numpy()) # This gets the restored value.
```
The dependency graph for these new objects is a much smaller subgraph of the larger checkpoint you wrote above. It includes only the bias and a save counter that `tf.train.Checkpoint` uses to number checkpoints.

`restore` returns a status object, which has optional assertions. All of the objects created in the new `Checkpoint` have been restored, so `status.assert_existing_objects_matched` passes.
```
status.assert_existing_objects_matched()
```
There are many objects in the checkpoint which haven't matched, including the layer's kernel and the optimizer's variables. `status.assert_consumed` only passes if the checkpoint and the program match exactly, and would throw an exception here.
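The difference between the two assertions can be sketched in plain Python (a simplification of the real status object, using sets of variable paths):

```python
# Sketch of the two restore assertions:
# - assert_existing_objects_matched: every object in the *program* found a value.
# - assert_consumed: additionally, every value in the *checkpoint* was used.
def assert_existing_objects_matched(program_keys, checkpoint_keys):
    missing = set(program_keys) - set(checkpoint_keys)
    if missing:
        raise AssertionError(f"Unmatched program objects: {missing}")

def assert_consumed(program_keys, checkpoint_keys):
    assert_existing_objects_matched(program_keys, checkpoint_keys)
    unused = set(checkpoint_keys) - set(program_keys)
    if unused:
        raise AssertionError(f"Unused checkpoint values: {unused}")

ckpt_keys = {"net/l1/kernel", "net/l1/bias", "optimizer/iter"}
partial_program = {"net/l1/bias"}              # only the bias was rebuilt

assert_existing_objects_matched(partial_program, ckpt_keys)  # passes
try:
    assert_consumed(partial_program, ckpt_keys)              # raises
except AssertionError as e:
    print(e)
```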
### Deferred restorations
`Layer` objects in TensorFlow may defer the creation of variables to their first call, when input shapes are available. For example, the shape of a `Dense` layer's kernel depends on both the layer's input and output shapes, and so the output shape required as a constructor argument is not enough information to create the variable on its own. Since calling a `Layer` also reads the variable's value, a restore must happen between the variable's creation and its first use.
To support this idiom, `tf.train.Checkpoint` defers restores which don't yet have a matching variable.
```
deferred_restore = tf.Variable(tf.zeros([1, 5]))
print(deferred_restore.numpy()) # Not restored; still zeros
fake_layer.kernel = deferred_restore
print(deferred_restore.numpy()) # Restored
```
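The deferral mechanism itself can be sketched in plain Python (a simplification; TensorFlow's actual bookkeeping is more involved): saved values stay pending until a variable with a matching path is attached, at which point the restore happens immediately.

```python
# Sketch (not TF internals) of deferred restoration.
class DeferredRestore:
    def __init__(self, checkpoint_values):
        self.pending = dict(checkpoint_values)  # path -> saved value
        self.variables = {}                     # path -> live variable

    def attach(self, path, variable):
        """Called when a variable is created/assigned; restores on match."""
        self.variables[path] = variable
        if path in self.pending:
            variable["value"] = self.pending.pop(path)

restorer = DeferredRestore({"net/l1/kernel": 42.0})
kernel = {"value": 0.0}                         # stands in for tf.Variable
restorer.attach("net/l1/kernel", kernel)        # restore fires here
print(kernel["value"])  # 42.0
```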
### Manually inspecting checkpoints
`tf.train.load_checkpoint` returns a `CheckpointReader` that gives lower level access to the checkpoint contents. It contains mappings from each variable's key, to the shape and dtype for each variable in the checkpoint. A variable's key is its object path, like in the graphs displayed above.
Note: There is no higher-level structure to the checkpoint. It only knows the paths and values of the variables, and has no concept of `models`, `layers`, or how they are connected.
```
reader = tf.train.load_checkpoint('./tf_ckpts/')
shape_from_key = reader.get_variable_to_shape_map()
dtype_from_key = reader.get_variable_to_dtype_map()
sorted(shape_from_key.keys())
```
So if you're interested in the value of `net.l1.kernel` you can get the value with the following code:
```
key = 'net/l1/kernel/.ATTRIBUTES/VARIABLE_VALUE'
print("Shape:", shape_from_key[key])
print("Dtype:", dtype_from_key[key].name)
```
It also provides a `get_tensor` method allowing you to inspect the value of a variable:
```
reader.get_tensor(key)
```
### Object tracking
Checkpoints save and restore the values of `tf.Variable` objects by "tracking" any variable or trackable object set in one of its attributes. When executing a save, variables are gathered recursively from all of the reachable tracked objects.
As with direct attribute assignments like `self.l1 = tf.keras.layers.Dense(5)`, assigning lists and dictionaries to attributes will track their contents.
```
save = tf.train.Checkpoint()
save.listed = [tf.Variable(1.)]
save.listed.append(tf.Variable(2.))
save.mapped = {'one': save.listed[0]}
save.mapped['two'] = save.listed[1]
save_path = save.save('./tf_list_example')
restore = tf.train.Checkpoint()
v2 = tf.Variable(0.)
assert 0. == v2.numpy() # Not restored yet
restore.mapped = {'two': v2}
restore.restore(save_path)
assert 2. == v2.numpy()
```
You may notice wrapper objects for lists and dictionaries. These wrappers are checkpointable versions of the underlying data structures. Just like attribute-based loading, these wrappers restore a variable's value as soon as it's added to the container.
```
restore.listed = []
print(restore.listed) # ListWrapper([])
v1 = tf.Variable(0.)
restore.listed.append(v1) # Restores v1, from restore() in the previous cell
assert 1. == v1.numpy()
```
Trackable objects include `tf.train.Checkpoint`, `tf.Module` and its subclasses (e.g. `keras.layers.Layer` and `keras.Model`), and recognized Python containers:
* `dict` (and `collections.OrderedDict`)
* `list`
* `tuple` (and `collections.namedtuple`, `typing.NamedTuple`)
Other container types are **not supported**, including:
* `collections.defaultdict`
* `set`
All other Python objects are **ignored**, including:
* `int`
* `string`
* `float`
## Summary
TensorFlow objects provide an easy automatic mechanism for saving and restoring the values of variables they use.
```
# imports and utils
import tensorflow.compat.v2 as tf
import ddsp.training
_AUTOTUNE = tf.data.experimental.AUTOTUNE
from IPython.display import Audio, display
from livelossplot import PlotLosses
import numpy as np
import random
import matplotlib.pyplot as plt
import tensorflow_datasets as tfds
import time
import data
import copy
import pydash
import tqdm
import soundfile
import os
import model
# define constants
CLIP_S=4
SAMPLE_RATE=48000
N_SAMPLES=SAMPLE_RATE*CLIP_S
SEED=1
FT_FRAME_RATE=250
tf.random.set_seed(SEED)
np.random.seed(SEED)
random.seed(SEED)
# define some utils
def play(audio):
    display(Audio(audio, rate=SAMPLE_RATE))
USE_NSYNTH=False
INSTRUMENT_FAMILY="Trombone"
IR_DURATION=1
Z_SIZE=1024 if INSTRUMENT_FAMILY=="**" else 512
N_INSTRUMENTS=200
BIDIRECTIONAL=True
USE_F0_CONFIDENCE=True
N_NOISE_MAGNITUDES=192
N_HARMONICS=192
ae=model.get_model(SAMPLE_RATE,CLIP_S,FT_FRAME_RATE,Z_SIZE,N_INSTRUMENTS,IR_DURATION,BIDIRECTIONAL,USE_F0_CONFIDENCE,N_HARMONICS,N_NOISE_MAGNITUDES)
# define loss
fft_sizes = [64]
while fft_sizes[-1] < SAMPLE_RATE // 4:
    fft_sizes.append(fft_sizes[-1] * 2)
spectral_loss = ddsp.losses.SpectralLoss(loss_type='L1',
                                         fft_sizes=fft_sizes,
                                         mag_weight=1.0,
                                         logmag_weight=1.0)
if USE_NSYNTH:
    tfds.load("nsynth/gansynth_subset.f0_and_loudness", split="train", try_gcs=False, download=True)
    trn_data_provider = data.CustomNSynthTfds(data_dir="/root/tensorflow_datasets/", split="train")
    tfds.load("nsynth/gansynth_subset.f0_and_loudness", split="valid", try_gcs=False, download=True)
    val_data_provider = data.CustomNSynthTfds(data_dir="/root/tensorflow_datasets/", split="valid")

    def crepe_is_certain(x):
        is_playing = tf.cast(x["loudness_db"] > -100.0, dtype=tf.float32)
        average_certainty = tf.reduce_sum(x["f0_confidence"] * is_playing) / tf.reduce_sum(is_playing)
        return average_certainty

    def preprocess_dataset(dataset):
        if INSTRUMENT_FAMILY != "all":
            dataset = dataset.filter(lambda x: x["instrument_family"] == INSTRUMENT_FAMILY)
        return dataset

    trn_dataset = preprocess_dataset(trn_data_provider.get_dataset())
    val_dataset = preprocess_dataset(val_data_provider.get_dataset())
else:
    trn_data_provider = data.MultiTFRecordProvider(f"datasets/AIR/tfr48k/dev/{INSTRUMENT_FAMILY}/*", sample_rate=SAMPLE_RATE)
    trn_dataset = trn_data_provider.get_dataset()
    val_data_provider = data.MultiTFRecordProvider(f"datasets/AIR/tfr48k/tst/{INSTRUMENT_FAMILY}/*", sample_rate=SAMPLE_RATE)
    val_dataset = val_data_provider.get_dataset(shuffle=False)

# remove some samples if number of recordings greater than model capacity
trn_dataset = trn_dataset.filter(lambda x: int(x["instrument_idx"]) < N_INSTRUMENTS)
print(checkpoint_path)
try:
    print("loading checkpoint")
    ae.load_weights(checkpoint_path)
except:
    print("couldn't load checkpoint")
plotlosses = PlotLosses()
## training loop with adam
BATCH_SIZE=1
batched_trn_dataset=trn_dataset.shuffle(10000).batch(BATCH_SIZE,drop_remainder=True)
# 1e-4 was good for saxophone (got us to 4.7-ish in 20 hours our so)
optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5)
e=0
while True:
    batch_counter = 0
    epoch_loss = 0
    for batch in batched_trn_dataset:
        with tf.GradientTape() as tape:
            output = ae(batch, train_shared=True)
            loss_value = spectral_loss(batch["audio"], output["audio_synth"])
        gradients = tape.gradient(loss_value, ae.trainable_variables)
        epoch_loss += loss_value.numpy()
        optimizer.apply_gradients(zip(gradients, ae.trainable_variables))
        if batch_counter % 10 == 0:
            print(f"batch nr {batch_counter}, loss: {loss_value.numpy()}")
        if batch_counter == 0:
            play(tf.reshape(output["audio"], (-1)))
            play(tf.reshape(output['audio_synth'], (-1)))
            play(tf.reshape(output['harmonic+fn']["signal"], (-1)))
            play(tf.reshape(output['harmonic']["signal"], (-1)))
            play(tf.reshape(output["fn"]["signal"], (-1)))
            play(tf.reshape(output["ir"], (-1)))
        if batch_counter > 1000 - 1 and batch_counter % 1000 == 0:
            ae.save_weights(checkpoint_path)
        batch_counter += 1
    plotlosses.update({'loss': epoch_loss / batch_counter})
    plotlosses.send()
    ae.save_weights(checkpoint_path)
    e += 1
# define some data utilities
N_FIT_SECONDS = 16
TRAIN_SHARED=False
USE_FNR=True
EARLY_REFLECTION_DURATION=0.2
def join_batch(batch):
    for key in batch.keys():
        assert len(batch[key].shape) < 3
        if len(batch[key].shape) == 2:
            batch[key] = tf.reshape(batch[key], (1, -1))
    return batch
def window_signal(a, window_len, hop_len):
    assert a.shape[0] == 1
    windows = []
    start_frame = 0
    while True:
        windows.append(a[:, start_frame:start_frame + window_len, ...])
        start_frame += hop_len
        if start_frame > a.shape[1] - window_len:
            break
    return tf.concat(windows, axis=0)
def window_sample(instance, win_s, hop_s):
    instance["audio"] = window_signal(instance["audio"], win_s * SAMPLE_RATE, hop_s * SAMPLE_RATE)
    for key in ["f0_hz", "loudness_db", "f0_confidence"]:
        instance[key] = window_signal(instance[key], win_s * FT_FRAME_RATE, hop_s * FT_FRAME_RATE)
    instance["instrument"] = tf.repeat(instance["instrument"][0], (instance["audio"].shape[0]))
    instance["instrument_idx"] = tf.repeat(instance["instrument_idx"][0], (instance["audio"].shape[0]))
    return instance

def join_and_window(instance, win_s=4, hop_s=1):
    return window_sample(join_batch(instance), win_s, hop_s)

def rf2cf(row_form):
    """Convert a list of row dicts to a dict of column lists."""
    return {k: [s[k] for s in row_form] for k in row_form[0].keys()}
# few shot voice cloning
def regularization(batch):
    ir = batch["ir"]
    ir = ir / tf.reduce_max(tf.abs(ir) + 1e-10)
    # NOTE: the trailing * 0.0 currently disables this regularization term.
    return tf.reduce_mean((ir ** 2) * tf.cast(tf.linspace(0, 1, ir.shape[-1])[None, :], tf.float32)) * 0.0
# constants
BATCH_SIZE=1
#5e-4 to 1e-5 has worked well
N_DEMO_SAMPLES=int(4*SAMPLE_RATE)
n_fit_windows=int(N_FIT_SECONDS/CLIP_S)
N_FIT_ITERATIONS= 100 if TRAIN_SHARED else int(100*(16/N_FIT_SECONDS))
VAL_LR=3e-5 if TRAIN_SHARED else 2e-3
SAVE = True
timestamp=0
DEMO_PATH=f"artefacts/demos/{INSTRUMENT_FAMILY}_{timestamp}_{N_FIT_SECONDS}_{'train_shared' if TRAIN_SHARED else ''}/"
DEMO_IR_DURATION=1.0
DEMO_IR_SAMPLES=int(DEMO_IR_DURATION*SAMPLE_RATE)
val_dataset=list(val_dataset)
# group by instrument id
val_dataset_by_instrument=pydash.collections.group_by(list(val_dataset),lambda x: str(x["instrument"].numpy()))
val_dataset_by_instrument = {k:v for k,v in val_dataset_by_instrument.items() if len(v)>n_fit_windows*2}
# load model
ae_test=model.get_model(SAMPLE_RATE,CLIP_S,FT_FRAME_RATE,Z_SIZE,N_INSTRUMENTS,IR_DURATION,BIDIRECTIONAL,USE_F0_CONFIDENCE,N_HARMONICS,N_NOISE_MAGNITUDES)
# load model weights
ae_test.set_is_shared_trainable(True)
ae_test.load_weights(checkpoint_path)
ae_test.instrument_weight_metadata["ir"]["initializer"]=lambda batch_size: tf.zeros([batch_size,int(DEMO_IR_DURATION*SAMPLE_RATE)])
if USE_FNR:
    er_samples = int(EARLY_REFLECTION_DURATION * SAMPLE_RATE)
    er_amp = np.ones((er_samples))
    er_amp[er_samples // 2:er_samples] = np.linspace(1, 0, er_samples // 2)
    frame_rate = 1000
    n_filter_bands = 100
    n_frames = int(frame_rate * DEMO_IR_DURATION)
    ir_fn = ddsp.synths.FilteredNoise(n_samples=DEMO_IR_SAMPLES,
                                      window_size=750,
                                      scale_fn=tf.nn.relu,
                                      initial_bias=0.0001)

    def processing_fn(batched_feature):
        batch_size = batched_feature.shape[0]
        er_ir = tf.nn.tanh(batched_feature[:, :er_samples])
        er_amp = np.ones(DEMO_IR_SAMPLES)
        er_amp[er_samples // 2:er_samples] = np.linspace(1, 0, er_samples // 2)
        er_amp[er_samples:] = 0
        er_amp = er_amp[None, :]
        fn_amp = 1 - er_amp
        fn_mags = tf.reshape(batched_feature[:, er_samples:], [batch_size, n_frames, n_filter_bands])
        fn_ir = ir_fn(fn_mags)
        ir = fn_ir * fn_amp + tf.pad(er_ir, [[0, 0], [0, int(DEMO_IR_DURATION * SAMPLE_RATE) - er_samples]]) * er_amp
        # ir = ddsp.core.fft_convolve(fn_ir, er_ir, padding='same', delay_compensation=0)
        return ir

    ae_test.instrument_weight_metadata["ir"]["processing"] = processing_fn
    ae_test.instrument_weight_metadata["ir"]["initializer"] = lambda batch_size: tf.zeros([batch_size, er_samples + n_frames * n_filter_bands])

ae_test.instrument_weight_metadata["wet_gain"]["initializer"] = lambda batch_size: tf.ones([batch_size, 1]) * 0.5
ae_test.initialize_instrument_weights()
ae_test.set_is_shared_trainable(True)
TMP_CHECKPOINT_PATH="artefacts/tmp_checkpoint"
ae_test.save_weights(TMP_CHECKPOINT_PATH)
for ii, instrument_set in enumerate(list(val_dataset_by_instrument.values())):
    print(f"instrument nr {ii}")
    ae_test.set_is_shared_trainable(True)
    ae_test.load_weights(TMP_CHECKPOINT_PATH)
    ae_test.initialize_instrument_weights()

    # reshape data
    fit_data = instrument_set[:n_fit_windows]
    print(f"{len(instrument_set)-5}>={n_fit_windows}")
    assert len(instrument_set) - 5 >= n_fit_windows
    # Use last 4 windows (16 s) as test data
    test_data = instrument_set[-5:-1]

    def playback_and_save(x, fn):
        print(fn)
        play(x)
        if SAVE:
            os.makedirs(DEMO_PATH, exist_ok=True)
            path = DEMO_PATH + f"recording nr: {ii} " + fn + ".wav"
            soundfile.write(path, x, SAMPLE_RATE)

    # convert to column form
    fit_data = rf2cf(fit_data)
    # get one batch for fitting
    fit_batch = next(iter(tf.data.Dataset.from_tensor_slices(fit_data).batch(len(list(fit_data)))))
    playback_and_save(tf.reshape(fit_data["audio"], [-1]), "training data")
    # transform data so that the clips overlap
    fit_batch = join_and_window(fit_batch, 4, 1)
    fit_data = tf.data.Dataset.from_tensor_slices(fit_batch)
    fit_batched = fit_data.batch(BATCH_SIZE)
    # prepare test data
    test_data = rf2cf(test_data)
    test_batched = tf.data.Dataset.from_tensor_slices(test_data).batch(BATCH_SIZE)

    fit_losses = []
    tst_losses = []
    # set up optimizer
    val_optimizer = tf.keras.optimizers.Adam(learning_rate=VAL_LR)
    for i in tqdm.tqdm(range(N_FIT_ITERATIONS)):
        fit_batched = fit_batched.shuffle(100)
        epoch_loss = 0
        batch_counter = 0
        test_epoch_loss = 0
        test_batch_counter = 0
        for fit_batch in fit_batched:
            with tf.GradientTape() as tape:
                output = ae_test(fit_batch, train_shared=TRAIN_SHARED)
                loss_value = spectral_loss(fit_batch["audio"], output["audio_synth"]) + regularization(output)
            epoch_loss += loss_value.numpy()
            batch_counter += 1
            gradients = tape.gradient(loss_value, ae_test.trainable_weights)
            val_optimizer.apply_gradients(zip(gradients, ae_test.trainable_weights))
        fit_losses.append(epoch_loss / batch_counter)
        for test_batch in test_batched:
            test_output = ae_test(test_batch, train_shared=False)
            loss_value = spectral_loss(test_batch["audio"], test_output["audio_synth"])
            test_epoch_loss += loss_value.numpy()
            test_batch_counter += 1
        tst_losses.append(test_epoch_loss / test_batch_counter)
        if i % 50 == 0:
            print("target")
            play(tf.reshape(fit_batch["audio"], (-1)))
            print("estimate")
            play(tf.reshape(output['audio_synth'], (-1)))
            # loss plot
            plt.plot(tst_losses, label="tst")
            plt.plot(fit_losses, label="trn")
            plt.yscale("log")
            plt.legend()
            plt.show()

    ir = output['ir'][0]
    plt.plot(ir)
    plt.show()
    play(tf.reshape(ir, (-1)))
    plt.imshow(ddsp.spectral_ops.compute_mel(ir).numpy().T, aspect="auto", origin="lower")
    plt.show()
    print(f"wet gain: {output['wet_gain']['controls']['gain_scaled']}")
    print(f"dry gain: {output['dry_gain']['controls']['gain_scaled']}")
    plt.plot(tst_losses, label="tst")
    plt.plot(fit_losses, label="trn")
    plt.yscale("log")
    plt.legend()
    plt.show()

    print(">> seen data:")
    playback_and_save(tf.reshape(fit_batch["audio"], [-1]), "training target")
    playback_and_save(tf.reshape(output["audio_synth"], [-1]), "training estimate")

    transposed_fit_batch = copy.deepcopy(fit_batch)
    transposed_fit_batch["f0_hz"] = transposed_fit_batch["f0_hz"] * 0.7
    transposed_output = ae_test(transposed_fit_batch, train_shared=False)
    playback_and_save(tf.reshape(transposed_output['audio_synth'], (-1)), "transposed down")

    transposed_fit_batch = copy.deepcopy(fit_batch)
    transposed_fit_batch["f0_hz"] = transposed_fit_batch["f0_hz"] * 1.3
    transposed_output = ae_test(transposed_fit_batch, train_shared=False)
    playback_and_save(tf.reshape(transposed_output['audio_synth'], (-1)), "transposed up")

    transposed_fit_batch = copy.deepcopy(fit_batch)
    transposed_fit_batch["loudness_db"] = transposed_fit_batch["loudness_db"] - 12
    transposed_output = ae_test(transposed_fit_batch, train_shared=False)
    playback_and_save(tf.reshape(transposed_output['audio_synth'], (-1)), "loudness -12 db")

    transposed_fit_batch = copy.deepcopy(fit_batch)
    transposed_fit_batch["loudness_db"] = transposed_fit_batch["loudness_db"] - 6
    transposed_output = ae_test(transposed_fit_batch, train_shared=False)
    playback_and_save(tf.reshape(transposed_output['audio_synth'], (-1)), "loudness -6 db")

    transposed_fit_batch = copy.deepcopy(fit_batch)
    transposed_fit_batch["loudness_db"] = transposed_fit_batch["loudness_db"] + 6
    transposed_output = ae_test(transposed_fit_batch, train_shared=False)
    playback_and_save(tf.reshape(transposed_output['audio_synth'], (-1)), "loudness +6 db")

    transposed_fit_batch = copy.deepcopy(fit_batch)
    transposed_fit_batch["loudness_db"] = transposed_fit_batch["loudness_db"] + 12
    transposed_output = ae_test(transposed_fit_batch, train_shared=False)
    playback_and_save(tf.reshape(transposed_output['audio_synth'], (-1)), "loudness +12 db")

    transposed_fit_batch = copy.deepcopy(fit_batch)
    transposed_fit_batch["f0_confidence"] = transposed_fit_batch["f0_confidence"] * 0.9
    transposed_output = ae_test(transposed_fit_batch, train_shared=False)
    playback_and_save(tf.reshape(transposed_output['audio_synth'], (-1)), "low confidence")

    transposed_fit_batch = copy.deepcopy(fit_batch)
    transposed_fit_batch["f0_confidence"] = transposed_fit_batch["f0_confidence"] * 0.5
    transposed_output = ae_test(transposed_fit_batch, train_shared=False)
    playback_and_save(tf.reshape(transposed_output['audio_synth'], (-1)), "very low f0 confidence")

    transposed_fit_batch = copy.deepcopy(fit_batch)
    transposed_fit_batch["f0_confidence"] = transposed_fit_batch["f0_confidence"] * 0.0
    transposed_output = ae_test(transposed_fit_batch, train_shared=False)
    playback_and_save(tf.reshape(transposed_output['audio_synth'], (-1)), "no f0 confidence")

    print(">> unseen data:")
    playback_and_save(tf.reshape(test_batch["audio"][:, :N_DEMO_SAMPLES], (-1)), "unseen target")
    test_batch_output = ae_test(test_batch, train_shared=False)
    playback_and_save(tf.reshape(test_batch_output['audio_synth'][:, :N_DEMO_SAMPLES], (-1)), "unseen estimate")
```
# Transfer Learning
```
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing import image
batch_size = 32
img_size = 299
train_path = '../data/sports/train/'
test_path = '../data/sports/test/'
```
## Transfer learning
```
from tensorflow.keras.applications.xception import Xception
from tensorflow.keras.applications.xception import preprocess_input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
base_model = Xception(include_top=False,
weights='imagenet',
input_shape=(img_size, img_size, 3),
pooling='avg')
model = Sequential([
base_model,
Dense(256, activation='relu'),
Dropout(0.5),
Dense(3, activation='softmax')
])
model.summary()
model.layers[0].trainable = False
model.summary()
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
preprocessing_function=preprocess_input)
batch_size = 32
train_generator = datagen.flow_from_directory(
train_path,
target_size=(img_size, img_size),
batch_size=batch_size,
shuffle=False)
test_generator = datagen.flow_from_directory(
test_path,
target_size=(img_size, img_size),
batch_size=batch_size,
shuffle=False)
model.fit(train_generator, steps_per_epoch=len(train_generator))
model.evaluate_generator(test_generator, steps=len(test_generator))
```
Yay! In a single epoch we can classify images with decent accuracy.
## Exercise 1
The base model takes an image and returns a vector of 2048 numbers. We will call this vector a bottleneck feature. Use the `base_model.predict_generator` function and the train generator to generate bottleneck features for the training set. Save these vectors to a numpy array using the code provided.
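The save/reload round trip can be sketched with a random stand-in array instead of real Xception outputs (the `(n, 2048)` shape matches the bottleneck described above; the temp directory stands in for the `models` folder):

```python
# Sketch of saving and reloading bottleneck features with numpy.
import os
import tempfile

import numpy as np

# Stand-in for base_model.predict_generator output: n samples x 2048 features.
bf_train = np.random.rand(8, 2048).astype(np.float32)

out_dir = tempfile.mkdtemp()          # stands in for the 'models' folder
path = os.path.join(out_dir, 'bf_train.npy')
np.save(path, bf_train)

reloaded = np.load(path)
print(reloaded.shape)                 # (8, 2048)
```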
```
base_model.summary()
base_model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
bf_train = base_model.predict_generator(train_generator, verbose=2,
steps=len(train_generator))
import os
os.makedirs('models', exist_ok = True)
np.save(open('models/bf_train.npy', 'wb'), bf_train)
```
## Exercise 2
At https://keras.io/applications/#documentation-for-individual-models you can find all the pre-trained models available in keras.
- Try using another model, for example MobileNet and repeat the training. Do you get to a better accuracy?
check out:
- https://keras.io/applications/#usage-examples-for-image-classification-models
- https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
- https://keras.io/applications/
```
%matplotlib inline
import matplotlib.pyplot as plt
import pickle
import gseapy as gp
import networkx as nx
import numpy as np
import pandas as pd
# This option is needed for printing all the information from the pandas dataframes
pd.set_option('display.max_colwidth', None)
```
# Information on Communities Characterisation
In this notebook you will find instructions on how to find useful information regarding the different communities found in each tissue.
Inside the `figures` folder you can find, for each tissue, which communities provide better prediction power when using their genes.
## Interesting communities
A Python dictionary, saved in `interesting_communitites.pkl` and generated by code in the `13_plots_for_paper` notebook, is available in this repository for easy access to the community IDs that achieve prediction power as defined in the paper:
```
interesting_communities = pickle.load(open('results/interesting_communitites.pkl','rb'))
interesting_communities['Brain_Hippocampus']
```
## General information for one tissue
The following function prints LaTeX code to show the general community-prediction figure for one tissue:
```
def print_tissue_header(tissue):
    print(f"\\section*{{{tissue.replace('_', ' ')}}}")
    print('\\begin{figure}[h!]')
    print('\\centering')
    print(f'\\includegraphics[width=.9\\textwidth]{{figures/tissue_prediction_from_{tissue}.pdf}}')
    print('\\end{figure}')
```
Here is the function's output for the Brain Hippocampus:
```
print_tissue_header('Brain_Hippocampus')
```
The following function will create output for a specific community in a tissue, including the genes present, enrichment analysis, and the co-correlation network structure.
We provide options to print LaTeX text or just show the images inside the notebook, as illustrated by the examples in the next cells.
```
def show_info_from_community_id(tissue, community_id, print_latex=True):
print(f'\\subsection*{{Community with ID {community_id}}}' if print_latex else f'Community with ID {community_id}' )
dic_community = pickle.load(open("svm_results/" + tissue + '_' + str(community_id) + ".pkl", "rb"))
print("This community has the following genes: ", end='')
print(*dic_community['genes'], sep = ", ", end='')
print('\n\\\\' if print_latex else '\n')
enr = gp.enrichr(gene_list=dic_community['genes'],
organism='human',
description=tissue + "_" + str(community_id),
gene_sets='Reactome_2016',
no_plot=True, # skip plotting, because you don't need these figures
cutoff=0.05,
outdir='results/EnrichClass')
filtered_df = enr.results[enr.results['Adjusted P-value'] < 0.05].loc[:, ['Adjusted P-value', 'Term']]
#display(filtered_df.loc[:, ['Adjusted P-value', 'Term']])
if filtered_df.shape[0] == 0:
print('This community has no enriched pathways below adjusted p-value $< 0.05$.')
elif print_latex:
print(filtered_df.to_latex(index=False,
longtable=True,
column_format='p{2.4cm}p{14.5cm}',
escape=False,
header=['Adjusted \\newline P-value', 'Term'],
caption=f'Enrichment for community {community_id}'))
else:
print('Enriched Reactome pathways:')
display(filtered_df)
print()
# Displaying upper triangle latex table
com_genes = pd.read_pickle("data/corr_" + tissue + ".pkl").loc[dic_community['genes'],dic_community['genes']]
com_genes = com_genes.replace([np.inf], np.nan).fillna(0)
com_genes = com_genes.round(2)
tril_indices = np.tril(np.ones(com_genes.shape)).astype(bool)  # np.bool was removed in NumPy 1.24; use the builtin bool
com_genes[tril_indices] = np.nan
df_subset = com_genes.iloc[:-1,1:] # Removing first column and last row
header_names = [f'\\rot{{{head_name}}}' for head_name in df_subset.columns.values]
if print_latex:
print(df_subset.to_latex(longtable=True, escape=False, na_rep='', header=header_names,
caption=f'Connectivity of community {community_id} (with Fisher z-transformed values)'))
else:
print(f'Connectivity of community {community_id} (with Fisher z-transformed values):')
display(df_subset.style.format(None, na_rep="-"))
print()
# Displaying the network - it needs to reload everything because of previous round()
com_genes = pd.read_pickle("data/corr_" + tissue + ".pkl").loc[dic_community['genes'],dic_community['genes']]
com_genes = com_genes.replace([np.inf], np.nan).fillna(0)
com_genes = np.tanh(com_genes) # Inverse of fisher-z transform
com_genes = com_genes.round(2)
com_genes[com_genes < 0.8] = 0
com_graph = nx.from_pandas_adjacency(com_genes)
plt.figure(figsize=(20, 10))
try:
pos = nx.nx_agraph.graphviz_layout(com_graph)
except ImportError:
pos = nx.spring_layout(com_graph, iterations=20, k=20)
#nx.draw_networkx(com4_graph, with_labels=True)
nx.draw_networkx_edges(com_graph, pos, edge_color="m")
nx.draw_networkx_nodes(com_graph, pos, alpha=0.6)
#nx.draw_networkx_edges(com4_graph, pos, alpha=0.4, node_size=0, width=1, edge_color="k")
labels = nx.get_edge_attributes(com_graph,'weight')
nx.draw_networkx_edge_labels(com_graph,pos,edge_labels=labels,label_pos=0.6, font_size=15)
_=nx.draw_networkx_labels(com_graph, pos, font_size=20, font_weight='bold')
plt.tight_layout()
if print_latex:
plt.savefig(f'figures/corr_network{tissue}_{community_id}.pdf', bbox_inches = 'tight', pad_inches = 0)
plt.close()
print('\\begin{figure}[h!]')
print('\\centering')
print(f'\\includegraphics[width=.5\\textwidth]{{figures/corr_network{tissue}_{community_id}.pdf}}')
print(f'\\caption{{Community {community_id} structure. Only showing connections with correlation (inverse of the Fisher z-transform) greater than 0.8}}')
print('\\end{figure}')
else:
print(f'Community {community_id} structure. Only showing connections with correlation (inverse of the Fisher z-transform) greater than 0.8:')
plt.show()
```
Here is an example of the LaTeX output for the community with ID 5 in the Brain Hippocampus. Note that when the LaTeX option is enabled (as in the following cell), an image with the network structure of that community is created in the `figures` folder. In this example, the resulting file will be called `figures/corr_networkBrain_Hippocampus_5.pdf`.
```
show_info_from_community_id('Brain_Hippocampus', interesting_communities['Brain_Hippocampus'][1])
```
If you just want to see the information directly in this notebook, without any LaTeX text and without saving any file to disk, set the argument `print_latex` to `False`, as shown in the following example:
```
show_info_from_community_id('Brain_Hippocampus',
interesting_communities['Brain_Hippocampus'][1],
print_latex=False)
```
<a href="https://colab.research.google.com/github/vectice/vectice-examples/blob/master/Notebooks/Vanilla/Brain_Failure_Prediction/Brain%20Failure%20Prediction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
The objective of this notebook is to explore the Vectice integration with BigQuery and to build models that estimate the probability of suffering a stroke from variables describing various healthy and unhealthy habits.
## Installing Vectice and the other required packages
To keep the Vectice library light, only its primary dependencies are installed by default, and users install the others when needed. Here we install the `github` extra because our notebook is on GitHub and we need the github package to point to the notebook from the Vectice UI. Add the other extras (gitlab, bitbucket) if you're going to use them: `!pip install -q "vectice[github, gitlab, bitbucket]"`
```
!pip install -q numpy pandas scikit-learn xgboost imbalanced-learn
!pip install -q vectice[github]
!pip install -q fsspec
!pip install -q gcsfs
!pip show vectice
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns #visualization of the variables
import os
import imblearn
from scipy.stats import chi2_contingency, ttest_ind
from xgboost import XGBClassifier
from sklearn import preprocessing
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
import warnings
warnings.filterwarnings('ignore')
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score, f1_score
import logging
from vectice import Vectice
from vectice.models.job_type import JobType
from vectice.models.model import ModelType
from vectice.models.model_version import ModelVersionStatus
```
# Stroke Prediction Research
The main question is how the predictor variables can help estimate the probability of suffering a stroke.
## Are there relationships other than age?
Is having heart disease, or a high BMI and glucose level, related to a higher chance of suffering a stroke?
Plans:
- Visualize the distribution of the target variable (stroke), then the distributions of the other variables with respect to the target.
- Split the features into categorical and numeric. Done. One-hot encode?
- Plot the distributions in an object-based way, e.g. `fig, ax`.
- Continue building the models. Next: XGBoost.
- Predict, predict, predict.
- Draw final conclusions.
- Add an index to the notebook.
- Add more distribution visualizations.

Models:
Logistic regression, random forest and XGBoost.
## Exploratory Data Analysis (EDA)
Exploratory Data Analysis is a data-exploration technique for understanding the various aspects of a dataset: checking the relationships between variables and examining their distributions. It follows a systematic set of steps to explore the data as efficiently as possible.
## Steps:
- Understand the Data
- Clean up the Data
- Analysis of relationships between variables
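As a minimal illustration of these three steps (on a tiny synthetic frame standing in for the stroke data, so the column values are made up):

```python
import numpy as np
import pandas as pd

# 1. Understand the data: a tiny synthetic frame mimicking a few stroke columns
df = pd.DataFrame({
    'age':    [34, 61, 48, 70],
    'bmi':    [22.1, np.nan, 31.5, 27.8],
    'stroke': [0, 1, 0, 1],
})
dtypes = df.dtypes        # column types
summary = df.describe()   # numeric distributions

# 2. Clean up the data: count missing values, then fill them with the median
missing = df.isna().sum()
df['bmi'] = df['bmi'].fillna(df['bmi'].median())

# 3. Relationships: mean of each variable per target class
per_class = df.groupby('stroke').mean()
```

The cells below apply the same pattern to the real data (`data.describe()`, `num_feat.isna().sum()`, `num_feat.groupby('stroke').mean()`).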
## Vectice Credentials
To connect to the Vectice app through the SDK you'll need the Project Token, the Vectice API endpoint and the Vectice API Token, all of which you'll find in the Vectice app: the Workspace page lets you create the Vectice API Token, and the Projects page gives you the Project Token. The Vectice API endpoint is 'https://beta.vectice.com'. You're also provided with a GCS service-account JSON, which lets you connect to the GCS bucket in the Vectice app and get the data needed for the example.
## Credentials Setup:
The Vectice API Endpoint and Token are needed to connect to the Vectice UI. Furthermore, a Google Cloud Storage credential JSON is needed to connect to the Google Cloud Storage to retrieve and upload the datasets. A project token links the runs to the relevant project and it's needed to create runs.
```
# Upload your GCS Storage json for access to GCS
from google.colab import files
uploaded = files.upload()
# Vectice Project Token
PROJECT_TOKEN = "yKYVndJgu5Pjg31okdxJ"
# The GCS permissions scope
SCOPES = ['https://www.googleapis.com/auth/bigquery.readonly']
os.environ["VECTICE_API_ENDPOINT"] = "beta.vectice.com"
os.environ["VECTICE_API_TOKEN"] = "MRoylvZjX.wpk4L8xJWOa59eY0DAGwKMRoylvZjXy13bBlgdVNQz67rvPqm2"
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = "test.json"
from google.cloud import bigquery
# Big Query Project ID
project_id = "nodal-unity-277700"
```
## Getting the data from BigQuery
```
# Client connection to Big Query
client = bigquery.Client(project=project_id)
df = client.query('''
SELECT * FROM `nodal-unity-277700.healthcareData.StrokeData`''')
data = df.to_dataframe()
data.head(5)
data.head()
data['bmi'] = pd.to_numeric(data['bmi'], errors='coerce')
#Categorical info
cat_feat = [i for i in data.columns if data[i].dtype == 'object' or data[i].dtype == 'bool']
cat_feat
#remove categorical data from our set to create the model. It can be added back encoded later in the process.
num_feat = data.drop(cat_feat, axis = 1)
data.shape
data.describe()
# The ID column is useless for the analysis, so we drop it
num_feat = num_feat.drop('id', axis = 1)
# num_feat.groupby(num_feat['bmi'].isnull()).mean()
num_feat.isna().sum()
# 97.6 BMI? That is odd, but it is what the "max" value shows us. Let's find out how many occurrences there are
num_feat[num_feat['bmi']==97.6]
data[data['bmi']==97.6]
num_feat = num_feat[num_feat['bmi']!=97.6]
#Checking again
num_feat[num_feat['bmi']>40] #sort_values('bmi')
num_feat.describe()
num_feat.groupby('stroke').mean()
num_feat.groupby(num_feat.bmi > 40)[['stroke', 'hypertension', 'heart_disease']].sum()
bmi_over_40 = num_feat[num_feat['bmi'] > 40 ]
bmi_over_40[num_feat['stroke'] == 1 ].sort_values(by='age')
plt.figure(figsize = (9,7))
sns.scatterplot(x = 'bmi', y = 'avg_glucose_level', hue = 'stroke', data =bmi_over_40)
plt.show()
```
Adult Body Mass Index (BMI)
BMI does not measure body fat directly, but research has shown that BMI is moderately correlated with more direct measures of body fat obtained from skinfold thickness measurements, bioelectrical impedance, underwater weighing, dual-energy X-ray absorptiometry (DXA) and other methods. Furthermore, BMI appears to be strongly correlated with various adverse health outcomes consistent with these more direct measures of body fatness.
```
#check for unique values
data.nunique()
```
Most of the variables take only two distinct values, meaning they are binary categorical variables (e.g. "yes" or "no"). The exception is gender, which reports 3 distinct values; we need to check whether that is due to a typo or blank entries.
The categorical variables with more distinct values are the following, in ascending order:
- smoking_status: 4
- work_type: 5
```
data.gender.value_counts()
data = data[data['gender']!='Other']
data.smoking_status.value_counts()
data[data['smoking_status'] == 'Unknown']
#smokers and goverment jobs
smokers = data[data['smoking_status']=='smokes']
smokers.work_type.value_counts(normalize=True)
smokers['age'].groupby(smokers['age']).count()
## Create a Vectice instance to start creating runs in Vectice
from vectice import Vectice
logging.basicConfig(level=logging.INFO)
vectice = Vectice(project_token=PROJECT_TOKEN)
## Create the run inputs
## With this line we declare a reference to code existing in GitHub as the code at the origin of the outputs
input_code = Vectice.create_code_version_with_github_uri("https://github.com/vectice/vectice-examples",
"Notebooks/Vanilla/Brain_Failure_Prediction/Brain%20Failure%20Prediction.ipynb")
# Creating a dataset as an input for the run
input_ds = vectice.create_dataset_version().with_parent_name("Brain Imaging Data")
inputs = [input_ds, input_code]
# Creating a run and assigning it's job type
run = vectice.create_run('Brain Imaging Data Cleaning', JobType.PREPARATION, 'Data_cleaning')
# Starting the run and assigning the inputs from above
vectice.start_run(run, inputs=inputs)
#handling missing values
data['bmi'] = data['bmi'].fillna(round (data['bmi'].median(), 2))
data.isnull().sum()
```
## Uploading the cleaned data into BigQuery
Uploading the cleaned data to BigQuery requires write access for the service account. As the provided service account has read-only access, we have already uploaded the cleaned data for you.
If you have a service account with write access, you can uncomment the following cell and run it.
```
# Uploading the cleaned data into BigQuery
# TODO(developer): Set table_id to the ID of the table to create.
#table_id = "nodal-unity-277700.healthcareData.StrokeDataClean"
# The job_config holds schema information. Column names, data types etc.
#job_config = bigquery.LoadJobConfig(
# schema=[bigquery.SchemaField(col, str(data[col].dtype if data[col].dtype != 'object' else "STRING")) for col in data.columns],)
#load_job = client.load_table_from_dataframe(data, table_id, job_config=job_config) # Make an API request.
#load_job.result() # Waits for the job to complete.
#destination_table = client.get_table(table_id) # Make an API request.
#print("Loaded {} rows.".format(destination_table.num_rows))
# Creating the new dataset as an output for the run
output = [vectice.create_dataset_version().with_parent_name("Cleaned Brain Imaging Data").with_property("Fillna", "Median")]
vectice.end_run(outputs=output)
correlation = data.drop('id', axis = 1).corr()
plt.figure(figsize=(7,7))
sns.heatmap(correlation, xticklabels =correlation.columns, yticklabels = correlation.columns, annot=True)
plt.show()
```
As the heatmap shows, the variable most correlated with stroke is age. We may want to restrict the model to the relevant variables, for example by dropping `id`.
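One simple way to act on this observation is to rank the features by absolute correlation with the target and keep only the strongest ones. A sketch on a toy frame (not the notebook's `data`; the columns and the 0.1 threshold are illustrative):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
toy = pd.DataFrame({
    'age': rng.uniform(20, 80, 200),
    'id':  np.arange(200),          # an uninformative column, like the real `id`
})
toy['stroke'] = (toy['age'] + rng.normal(0, 10, 200) > 65).astype(int)

# Rank features by |correlation| with the target, then keep the strong ones
corr_with_target = toy.corr()['stroke'].drop('stroke').abs().sort_values(ascending=False)
selected = corr_with_target[corr_with_target > 0.1].index.tolist()
```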
```
sns.countplot(x = 'smoking_status', data = data)
plt.title("Count Plot for smoking status")
plt.show()
sns.countplot(x = 'work_type', data = data)
plt.title('Count Plot for Work Type')
plt.show()
#Ploting the distribution of Stroke
num_data = num_feat
sns.countplot(x='stroke', data=num_data)
plt.show()
x = pd.DataFrame(num_data.groupby(['stroke'])['stroke'].count())
# plot
fig, ax = plt.subplots(figsize = (6,6), dpi = 70)
ax.barh([1], x.stroke[1], height = 0.7, color = 'red')
plt.text(-1150,-0.08, 'Healthy',{'fontfamily': 'Serif','weight':'bold','Size': '16','style':'normal', 'color':'green'})
#plt.text(5000,-0.08, '95%',{'font':'Serif','weight':'bold' ,'size':'16','color':'green'})
plt.text(5000,-0.08, f"{x.stroke[0]/num_data.shape[0]*100:.0f}%" ,{'fontfamily':'Serif','weight':'bold' ,'size':'16','color':'green'})
ax.barh([0], x.stroke[0], height = 0.7, color = 'green')
plt.text(-1000,1, 'Stroke', {'fontfamily': 'Serif','weight':'bold','Size': '16','style':'normal', 'color':'red'})
plt.text(300,1, f"{x.stroke[1]/num_data.shape[0]*100:.0f}%",{'fontfamily':'Serif', 'weight':'bold','size':'16','color':'red'})
fig.patch.set_facecolor('#f6f5f5')
ax.set_facecolor('#f6f5f5')
plt.text(-1150,1.77, 'Percentage of People Having Strokes' ,{'fontfamily': 'Serif', 'Size': '25','weight':'bold', 'color':'black'})
plt.text(4650,1.65, 'Stroke ', {'fontfamily': 'Serif','weight':'bold','Size': '16','weight':'bold','style':'normal', 'color':'red'})
plt.text(5650,1.65, '|', {'color':'black' , 'size':'16', 'weight': 'bold'})
plt.text(5750,1.65, 'Healthy', {'fontfamily': 'Serif','weight':'bold', 'Size': '16','style':'normal', 'weight':'bold','color':'green'})
plt.text(-1150,1.5, 'It is a highly unbalanced distribution,\nand clearly seen that 4 in 100 people are susceptible \nto strokes.',
{'fontfamily':'Serif', 'size':'12.5','color': 'black'})
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(True)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.figure(figsize = (16,11))
plt.subplot(2,3,1)
sns.countplot(x = 'gender', data = data)
plt.title('Countplot of Gender distribution')
plt.subplot(2,3,2)
sns.countplot(x = 'ever_married', data = data)
plt.title('Countplot of Married status distribution')
plt.subplot(2,3,3)
sns.countplot(x='work_type', data = data)
plt.title('Countplot of Work Type distribution')
plt.subplot(2,3,4)
sns.countplot(x = 'Residence_type', data = data)
plt.title('Countplot of Residence type distribution')
plt.subplot(2,3,5)
sns.countplot(x = 'smoking_status',data = data)
plt.title('Countplot of Smoking status distribution')
plt.subplot(2,3,6)
sns.countplot(x = 'heart_disease',data = data)
plt.title('Countplot of Heart Disease distribution')
plt.show()
num_data = num_feat
#handling missing values
num_data['bmi'] = num_data['bmi'].fillna(round (num_data['bmi'].median(), 2))
# Checking the distribution of the predictor variables.
# Here, we will use both distplot and boxplot as shown below.
# Let us plot each variable to show its distribution in the dataset.
#fig, ax = plt.subplots(figsize = (6,6), dpi = 70)
plt.figure(1)
plt.title('BMI Distribution before droping the abnormal entry')
plt.subplot(121), sns.distplot(num_data['bmi'])
plt.subplot(122), num_data['bmi'].plot.box(figsize=(16,5))
plt.show()
plt.figure(1)
plt.title('Stroke Distribution with BMI over 40')
plt.subplot(121), sns.distplot(bmi_over_40['bmi'])
plt.subplot(122), bmi_over_40['bmi'].plot.box(figsize=(16,5))
plt.show()
plt.figure(1)
plt.subplot(121), sns.distplot(data['age'])
plt.subplot(122), data['age'].plot.box(figsize=(16,5))
plt.show()
plt.figure(1)
plt.subplot(121), sns.countplot(data['heart_disease'])
plt.subplot(122), data['heart_disease'].plot.box(figsize=(16,5))
plt.show()
plt.figure(1)
plt.subplot(121), sns.distplot(data['avg_glucose_level'])
plt.subplot(122), data['avg_glucose_level'].plot.box(figsize=(16,5))
plt.show()
sns.relplot(x='stroke', y='age', hue='gender', data=data )
plt.show()
```
Since there is no clear linear relationship here, a classification approach such as logistic regression, evaluated with a confusion matrix, may capture the relationship better.
From this bar chart we can see that most stroke cases occur in people over 40 years old. There is an uptick at age 40, then the count drops until about ages 55 to 65, drops again, and rises all the way up at age 80.
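The age pattern described above can be checked directly by binning age and computing the stroke rate per bin. A sketch with a toy frame (the notebook's `data[['age', 'stroke']]` would be used the same way; the bin edges are arbitrary):

```python
import pandas as pd

toy = pd.DataFrame({
    'age':    [25, 35, 45, 52, 58, 63, 72, 81, 84, 79],
    'stroke': [ 0,  0,  1,  0,  1,  0,  1,  1,  1,  1],
})
bins = [0, 40, 55, 65, 100]
labels = ['<40', '40-55', '55-65', '65+']
toy['age_bin'] = pd.cut(toy['age'], bins=bins, labels=labels)
rate_by_age = toy.groupby('age_bin', observed=True)['stroke'].mean()
```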
```
# Scatter Plot
plt.figure(figsize = (9,7))
sns.scatterplot(x = 'bmi', y = 'avg_glucose_level', hue = 'stroke', data =bmi_over_40)
plt.title('Stroke cases - For those with a BMI over 40 ',y=1.05)
plt.xlabel('BMI Level')
plt.ylabel('Avg Glucose Level')
plt.show()
```
# Hypothesis Testing
```
def chi2_dependency(data_df, x, y):
    ctab = pd.crosstab(data_df[x], data_df[y])
    stat, p, dof, expected = chi2_contingency(ctab)
    alpha1 = 0.05
    alpha2 = 0.01
    print('--------------Chi Squared Hypothesis Test Results-------------------')
    print('Variable X: ', x)
    print('Variable Y: ', y)
    print('P-value: ', p)
    # An if/if/else chain here would wrongly print "fail to reject" whenever
    # alpha2 < p < alpha1, so the branches must be exclusive (if/elif/else)
    if p < alpha2:
        print('We reject the Null Hypothesis H0')
        print('There is substantial evidence to suggest that {} and {} are dependent'.format(x, y))
    elif p < alpha1:
        print('We reject the Null Hypothesis H0')
        print('There is some evidence to suggest that {} and {} are dependent'.format(x, y))
    else:
        print('We fail to reject the Null Hypothesis H0')
        print('There is no evidence to suggest that {} and {} are dependent'.format(x, y))
    print()
chi2_dependency(data,'gender','stroke')
chi2_dependency(data,'ever_married','stroke')
chi2_dependency(data,'hypertension','stroke')
chi2_dependency(data,'heart_disease','stroke')
chi2_dependency(data,'work_type','stroke')
chi2_dependency(data,'Residence_type','stroke')
chi2_dependency(data,'smoking_status','stroke')
```
Gender and residence type do not seem to have an impact on stroke.
Smoking status, work type, heart disease, hypertension and marital status do have an impact on stroke.
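A significant chi-square test only says that two variables are dependent, not how strongly. A common follow-up is Cramér's V, which normalizes the statistic to the 0-1 range; `cramers_v` below is a hypothetical helper (not part of this notebook), sketched in plain NumPy with an illustrative 2x2 table:

```python
import numpy as np

def cramers_v(ctab):
    """Cramér's V effect size from a contingency table (2-D array of counts)."""
    ctab = np.asarray(ctab, dtype=float)
    n = ctab.sum()
    # Expected counts under independence, then the chi-square statistic
    expected = np.outer(ctab.sum(axis=1), ctab.sum(axis=0)) / n
    chi2 = ((ctab - expected) ** 2 / expected).sum()
    r, c = ctab.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

# Illustrative counts, e.g. heart_disease (rows) vs stroke (columns)
table = np.array([[4500, 180],
                  [ 250,  70]])
v = cramers_v(table)  # 0 = independent, 1 = perfectly associated
```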
```
ctab = pd.crosstab(data['smoking_status'], data['stroke'])
ctab.plot.bar(stacked = True, figsize = (8,5))
plt.xlabel('Smoking Status')
plt.ylabel('Stroke')
plt.title('Smoking Status and Stroke')
plt.show()
ctab = pd.crosstab(data['ever_married'], data['stroke'])
ctab.plot.bar(stacked = True, figsize = (8,5))
plt.xlabel('Ever Married')
plt.ylabel('Stroke')
plt.title('Married Status and Stroke')
plt.show()
# ever_married holds 'Yes'/'No' strings at this point (label encoding happens below),
# so comparing against True/False would select nothing
print('Ratio of stroke affected from ever_married class',
len(data[(data['stroke']==1) & (data['ever_married']=='Yes')])/len(data[data['ever_married']=='Yes']))
print('Ratio of stroke affected from never married class',
len(data[(data['stroke']==1) & (data['ever_married']=='No')])/len(data[data['ever_married']=='No']))
target_col = ['stroke']
num_cols = ['id', 'age', 'avg_glucose_level', 'bmi']
cat_cols = [col for col in data.columns if col not in num_cols+target_col]
from sklearn import preprocessing
label_encoder = preprocessing.LabelEncoder()
data['gender'] = label_encoder.fit_transform(data['gender'])
data['ever_married'] = label_encoder.fit_transform(data['ever_married'])
data['Residence_type'] = label_encoder.fit_transform(data['Residence_type'])
data = pd.get_dummies(data, prefix = ['work_type'], columns = ['work_type'])
data = pd.get_dummies(data, prefix = ['smoking_status'], columns = ['smoking_status'])
data.head()
```
## Alter data to be conducive for a BigQuery ingest job
This requires write access for the service account. As the provided service account has read-only access, we have already done this job for you.
If you have a service account with write access, you can uncomment the following cell and run it.
```
# Create the input for the run
#input_ds = vectice.create_dataset_version().with_parent_name("Cleaned Brain Imaging Data")
#inputs = [input_ds, input_code]
# Create the run
#run = vectice.create_run("Brain Imaging Data Training", JobType.TRAINING)
# Start the run with the run and inputs
#vectice.start_run(run, inputs=inputs)
# Alter data to be conducive for a BigQuery ingest job.
names_types = [("_".join(col.split("-")) if "-" in col else "_".join(col.split(" ")), data[col].dtype if data[col].dtype != 'uint8' else "INT") for col in data.columns]
# The column names must match in the DataFrame and BigQuery ingestion. For example Big Query doesn't accept "-" in column names.
data = data.rename(columns={x:y[0] for x,y in zip(data.columns, names_types)})
# TODO(developer): Set table_id to the ID of the table to create.
table_id = "nodal-unity-277700.healthcareData.StrokeDataTraining"
job_config = bigquery.LoadJobConfig(
schema=[bigquery.SchemaField(col, str(data[col].dtype if data[col].dtype != 'uint8' else "INT64")) for col in data.columns],)
load_job = client.load_table_from_dataframe(data, table_id, job_config=job_config) # Make an API request.
load_job.result() # Waits for the job to complete.
destination_table = client.get_table(table_id) # Make an API request.
print("Loaded {} rows.".format(destination_table.num_rows))
```
## Splitting the dataset
```
#include categorical values in the dataset
#Since we are using a Tree based model, One-Hot encoding is not an absolute necessity
#However, this dataset, train and test sets will be updated whenever one-hot encoding will be used
from sklearn.model_selection import train_test_split
#Splitting the dataset
x = data.drop('stroke', axis=1)
y = data.stroke
xtrain, xtest, ytrain, ytest = train_test_split(x, y, train_size = 0.3, random_state=1)
```
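One caveat about the split above: with only about 5% positive cases, a plain random split can leave the train and test sets with noticeably different stroke rates. A stratified split preserves the class proportions on both sides. With scikit-learn this is `train_test_split(x, y, stratify=y, ...)`; the idea itself can be sketched at the index level with NumPy (synthetic labels, 70/30 split):

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.array([0] * 95 + [1] * 5)   # ~5% positive class, like the stroke target

# Shuffle and split each class separately, then pool the indices
train_idx, test_idx = [], []
for cls in np.unique(y):
    idx = rng.permutation(np.where(y == cls)[0])
    cut = int(0.7 * len(idx))       # 70% of each class goes to train
    train_idx.extend(idx[:cut])
    test_idx.extend(idx[cut:])

train_rate = y[train_idx].mean()
test_rate = y[test_idx].mean()
```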
**Before proceeding to tree-based models, let's check the ranking of feature importances on a decision tree**
```
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier()
dt.fit(xtrain,ytrain)
print(len(xtrain.columns.tolist()))
len(dt.feature_importances_)
plt.figure(figsize = (8,8))
sns.barplot(x = dt.feature_importances_, y = xtrain.columns.tolist())
```
## Modeling
ML algorithms can perform poorly on datasets in which one or more classes represent only a small proportion of the data compared with a dominant class. This problem can be addressed by balancing the number of examples between classes. Here we use SMOTE (Synthetic Minority Over-sampling Technique), which creates synthetic data points part-way between two near neighbours within a class. SMOTE is applied ONLY on the training set, not on the test set, to avoid biased results.
We are going to build two sets of models, one using the SMOTE method to balance the dataset and another using our original data, in order to compare them.
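The core idea of SMOTE can be sketched in a few lines of NumPy: for each minority sample, pick a nearby minority neighbour and generate a synthetic point somewhere on the segment between them. This toy version uses the single nearest neighbour on made-up 2-D points; `imblearn`'s implementation samples among k neighbours and works in the full feature space:

```python
import numpy as np

rng = np.random.default_rng(0)
minority = np.array([[1.0, 1.0], [1.5, 1.2], [2.0, 0.8], [1.2, 1.6]])

synthetic = []
for i, x in enumerate(minority):
    # Nearest other minority sample (Euclidean distance)
    d = np.linalg.norm(minority - x, axis=1)
    d[i] = np.inf
    neighbour = minority[np.argmin(d)]
    u = rng.uniform()                        # random position on the segment
    synthetic.append(x + u * (neighbour - x))
synthetic = np.array(synthetic)              # one new point per minority sample
```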
```
##SMOTE data
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state = 2)
xtrain_mod, ytrain_mod = sm.fit_resample(xtrain, ytrain)
```
In the cell below, we create dataset versions that will be used as inputs for our modeling runs
```
input_non_smote_ds = vectice.create_dataset_version().with_parent_name("Brain Imaging Training Data")
input_smote_ds = vectice.create_dataset_version().with_parent_name("Brain Imaging SMOTE Data")
```
### RandomForest
```
#Building the model using RandomForest
from sklearn.ensemble import RandomForestClassifier
n_estimators=500
rfc = RandomForestClassifier(n_estimators)
rfc.fit(xtrain, ytrain)
preds = rfc.predict(xtest)
from sklearn.metrics import confusion_matrix
confusion_matrix(ytest, preds)
```
In scikit-learn's `confusion_matrix`, true negatives are on the upper left and true positives on the bottom right: the true label and the prediction agree. False positives are the count on the upper right; false negatives are the count on the bottom left.
Here the true negatives (0s) dominate because we have few stroke cases: 3,388 people were correctly predicted not to have a stroke. Conversely, the bottom-right cell holds the true positives (1s), those who suffered a stroke and were predicted as such. The bottom-left cell holds the 13 false negatives: people who had a stroke but were predicted as healthy.
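The four cells of scikit-learn's `confusion_matrix` layout, [[TN, FP], [FN, TP]], can be recomputed by hand to double-check a reading of the matrix. A sketch with illustrative labels (not the notebook's predictions):

```python
y_true = [0, 0, 0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 0, 1, 0, 1, 0, 1, 0]

tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # upper left
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # upper right
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # bottom left
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # bottom right
# sklearn's confusion_matrix(y_true, y_pred) returns the same counts as [[tn, fp], [fn, tp]]
```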
```
from sklearn.metrics import accuracy_score
accuracy_score_dt = accuracy_score(ytest,preds)
accuracy_score_dt
from sklearn.metrics import f1_score
f1_score_dt = f1_score(ytest, preds, average='micro')
f1_score_dt
# Each execution for a given job is called a run.
# Setting a job's name is mandatory when starting a run
# and is useful to group and search runs in the Vectice UI.
# Adding a version of the Random Forest Classifier to the list of Non_smote_models
vectice.create_run("Modeling-Non-SMOTE", "TRAINING", "Non-SMOTE_RandomForest").with_properties([("A run property", "A value")])
vectice.start_run(inputs=[input_non_smote_ds, input_code])
# Let's log the model we trained along with its metrics, as a new version
# of the "Non_smote_models" model in Vectice.
## Here we use with_user_version() to create a new model version. You can provide a version name for your model version. An error will be thrown if the given user
## version already exists and if you don't provide a version name, the version name will be "Version"+"the version number".
metrics = [("Accuracy Score", accuracy_score_dt), ("F1 Score", f1_score_dt)]
model_version = vectice.create_model_version().with_parent_name("Non_smote_models").with_algorithm("Random Forest Classifier").with_type(ModelType.CLASSIFICATION).with_user_version().with_metrics(metrics).with_properties([("n_estimators", str(n_estimators))])
vectice.end_run(outputs=[model_version])
```
### Using GradientBoost Classifier
**Non Smote Dataset**
```
from sklearn.ensemble import GradientBoostingClassifier
# Each execution for a given job is called a run.
# Setting a job's name is mandatory when starting a run
# and is useful to group and search runs in the Vectice UI.
# Adding a version of the GradientBoost Classifier to the list of Non_smote_models
vectice.create_run("Modeling-Non-SMOTE", "TRAINING", "Non-SMOTE_GradientBoost").with_properties([("A run property", "A value")])
vectice.start_run(inputs=[input_non_smote_ds, input_code])
random_state = 123
n_estimators = 500
gbc = GradientBoostingClassifier(random_state=random_state, n_estimators=n_estimators)
gbc.fit(xtrain,ytrain)
preds = gbc.predict(xtest)
print(confusion_matrix(ytest, preds))
print(classification_report(ytest, preds, output_dict = True))
accuracy_score_gbc = accuracy_score(ytest, preds)
f1_score_gbc = f1_score(ytest,preds)
print('Accuracy Score: ',accuracy_score(ytest, preds))
print('F1 Score: ',f1_score(ytest,preds))
# Let's log the model we trained along with its metrics, as a new version
# of the "Non_smote_models" model in Vectice.
## Here we use with_user_version() to create a new model version. You can provide a version name for your model version. An error will be thrown if the given user
## version already exists and if you don't provide a version name, the version name will be "Version"+"the version number".
properties = [("random_state", str(random_state)), ("n_estimators", str(n_estimators))]
metrics = [("Accuracy Score", accuracy_score_gbc), ("F1 Score", f1_score_gbc)]
model_version = vectice.create_model_version().with_parent_name("Non_smote_models").with_algorithm("Gradient Boosting Classifier").with_type(ModelType.CLASSIFICATION).with_metrics(metrics).with_properties(properties).with_user_version()
vectice.end_run(outputs=[model_version])
```
**Smote Dataset**
```
## Applying on Smote Dataset
# Each execution for a given job is called a run.
# Setting a job's name is mandatory when starting a run
# and is useful to group and search runs in the Vectice UI.
# Adding a version of the GradientBoost Classifier to the list of smote_models
vectice.create_run("Modeling- SMOTE", "TRAINING", "SMOTE_GradientBoost").with_properties([("A run property", "A value")])
vectice.start_run(inputs=[input_smote_ds, input_code])
random_state = 123
n_estimators = 30
max_depth = 2
gbc2 = GradientBoostingClassifier(random_state= random_state, n_estimators=n_estimators, max_depth=max_depth)
gbc2.fit(xtrain_mod,ytrain_mod)
preds = gbc2.predict(xtest)
print(confusion_matrix(ytest, preds))
accuracy_score_gbc_smote = accuracy_score(ytest, preds)
f1_score_gbc_smote = f1_score(ytest,preds)
print('Accuracy Score: ',accuracy_score(ytest, preds))
print('F1 Score: ',f1_score(ytest,preds))
# Let's log the model we trained along with its metrics, as a new version
# of the "smote_models" model in Vectice.
## Here we use with_user_version() to create a new model version. You can provide a version name for your model version. An error will be thrown if the given user
## version already exists and if you don't provide a version name, the version name will be "Version"+"the version number".
metrics = [("Accuracy Score", accuracy_score_gbc_smote), ("F1 Score", f1_score_gbc_smote)]
properties = [("random_state", str(random_state)), ("n_estimators", str(n_estimators)), ("max_depth", str(max_depth))]
model_version = vectice.create_model_version().with_parent_name("smote_models").with_algorithm("Gradient Boosting Classifier SMOTE").with_type(ModelType.CLASSIFICATION).with_user_version().with_metrics(metrics).with_properties(properties).with_status(ModelVersionStatus.EXPERIMENTATION)
vectice.end_run(outputs=[model_version])
```
### XGBoost
**Non Smote Dataset**
```
#Fitting model on non-SMOTE dataset
# Each execution for a given job is called a run.
# Setting a job's name is mandatory when starting a run
# and is useful to group and search runs in the Vectice UI.
# Adding a version of the XGBoost Classifier to the list of Non_smote_models
vectice.create_run("Modeling-Non-SMOTE", "TRAINING", "Non-SMOTE_XGBoost").with_properties([("A run property", "A value")])
vectice.start_run(inputs=[input_non_smote_ds, input_code])
n_estimators = 250
xgb1 = XGBClassifier(n_estimators=n_estimators)
xgb1.fit(xtrain, ytrain)
preds = xgb1.predict(xtest)
print(confusion_matrix(ytest, preds))
train_preds = xgb1.predict(xtrain)
print('Train Accuracy Score: ',accuracy_score(ytrain, train_preds))
print('Train F1 Score: ',f1_score(ytrain, train_preds))
xgb_accuracy_score = accuracy_score(ytest, preds)
xgb_f1_score = f1_score(ytest,preds)
print('Test Accuracy Score: ', xgb_accuracy_score)
print('Test F1 Score: ', xgb_f1_score)
# Let's log the model we trained along with its metrics, as a new version
# of the "Non_smote_models" model in Vectice.
## Here we use with_user_version() to create a new model version. You can provide a version name;
## an error is thrown if that version already exists. If you omit the name, it defaults to "Version" + the version number.
properties = [("n_estimators", str(n_estimators))]
metrics = [("Accuracy Score", xgb_accuracy_score), ("F1 Score", xgb_f1_score)]
model_version = vectice.create_model_version().with_parent_name("Non_smote_models").with_algorithm("XGBoost").with_type(ModelType.CLASSIFICATION).with_user_version().with_metrics(metrics).with_properties(properties).with_status(ModelVersionStatus.EXPERIMENTATION)
vectice.end_run(outputs=[model_version])
```
**SMOTE Dataset**
```
#Fitting model on SMOTE dataset
# Each execution for a given job is called a run.
# Setting a job's name is mandatory when starting a run
# and is useful to group and search runs in the Vectice UI.
# Adding a version of the XGBoost Classifier to the list of smote_models
vectice.create_run("Modeling- SMOTE", "TRAINING", "SMOTE_XGBoost").with_properties([("A run property", "A value")])
vectice.start_run(inputs=[input_smote_ds, input_code])
n_estimators = 255
reg_alpha=0.5
reg_lambda = 0.4
max_depth= 1
xgb = XGBClassifier(n_estimators=n_estimators, reg_alpha=reg_alpha, reg_lambda=reg_lambda, max_depth=max_depth)
xgb.fit(xtrain_mod, ytrain_mod)
preds = xgb.predict(xtest)
print(confusion_matrix(ytest, preds))
#print(classification_report(ytest, preds, output_dict = True))
train_preds = xgb.predict(xtrain_mod)
print('Train Accuracy Score: ',accuracy_score(ytrain_mod, train_preds))
print('Train F1 Score: ',f1_score(ytrain_mod, train_preds))
accuracy_score_xgb_smote = accuracy_score(ytest, preds)
f1_score_xgb_smote = f1_score(ytest,preds)
print('Accuracy Score: ', accuracy_score_xgb_smote)
print('F1 Score: ',f1_score_xgb_smote)
# Let's log the model we trained along with its metrics, as a new version
# of the "smote_models" model in Vectice.
## Here we use with_user_version() to create a new model version. You can provide a version name;
## an error is thrown if that version already exists. If you omit the name, it defaults to "Version" + the version number.
properties = [("n_estimators", str(n_estimators)), ("reg_alpha", str(reg_alpha)), ("reg_lambda", str(reg_lambda)), ("max_depth", str(max_depth))]
metrics = [("Accuracy Score", accuracy_score_xgb_smote), ("F1 Score", f1_score_xgb_smote)]
model_version = vectice.create_model_version().with_parent_name("smote_models").with_algorithm("XGBoost").with_type(ModelType.CLASSIFICATION).with_user_version().with_metrics(metrics).with_properties(properties).with_status(ModelVersionStatus.EXPERIMENTATION)
vectice.end_run(outputs=[model_version])
```
### Gaussian Naive Bayes
**Non-SMOTE Dataset**
```
#Applying on non-SMOTE dataset
# Each execution for a given job is called a run.
# Setting a job's name is mandatory when starting a run
# and is useful to group and search runs in the Vectice UI.
# Adding a version of the Naive Bayes Classifier to the list of Non_smote_models
from sklearn.naive_bayes import GaussianNB
vectice.create_run("Modeling-Non-SMOTE", "TRAINING", "Non_SMOTE_NaiveBayes").with_properties([("A run property", "A value")])
vectice.start_run(inputs=[input_non_smote_ds, input_code])
gnb = GaussianNB()
gnb.fit(xtrain, ytrain)
preds = gnb.predict(xtest)
print(confusion_matrix(ytest, preds))
train_preds = gnb.predict(xtrain)
print('Train Accuracy Score: ',accuracy_score(ytrain, train_preds))
print('Train F1 Score: ',f1_score(ytrain, train_preds))
accuracy_score_gnb = accuracy_score(ytest, preds)
f1_score_gnb = f1_score(ytest, preds)
print('Accuracy Score: ', accuracy_score_gnb)
print('F1 Score: ', f1_score_gnb)
# Let's log the model we trained along with its metrics, as a new version
# of the "Non_smote_models" model in Vectice.
## Here we use with_generated_version() to create a new model version, letting Vectice
## generate the version name automatically.
metrics = [("Accuracy Score", accuracy_score_gnb), ("F1 Score", f1_score_gnb)]
model_version = vectice.create_model_version().with_parent_name("Non_smote_models").with_algorithm("Gaussian NB").with_type(ModelType.CLASSIFICATION).with_generated_version().with_metrics(metrics).with_status(ModelVersionStatus.EXPERIMENTATION)
vectice.end_run(outputs=[model_version])
```
**SMOTE Dataset**
```
#Applying on SMOTE dataset
# Each execution for a given job is called a run.
# Setting a job's name is mandatory when starting a run
# and is useful to group and search runs in the Vectice UI.
# Adding a version of the Naive Bayes Classifier to the list of smote_models
from sklearn.naive_bayes import GaussianNB
vectice.create_run("Modeling- SMOTE", "TRAINING", "SMOTE_NaiveBayes").with_properties([("A run property", "A value")])
vectice.start_run(inputs=[input_smote_ds, input_code])
gnb = GaussianNB()
gnb.fit(xtrain_mod, ytrain_mod)
preds = gnb.predict(xtest)
print(confusion_matrix(ytest, preds))
train_preds = gnb.predict(xtrain_mod)
print('Train Accuracy Score: ',accuracy_score(ytrain_mod, train_preds))
print('Train F1 Score: ',f1_score(ytrain_mod, train_preds))
accuracy_score_gnb_smote = accuracy_score(ytest, preds)
f1_score_gnb_smote = f1_score(ytest, preds)
print('Accuracy Score: ', accuracy_score_gnb_smote)
print('F1 Score: ', f1_score_gnb_smote)
# Let's log the model we trained along with its metrics, as a new version
# of the "smote_models" model in Vectice.
## Here we use with_generated_version() to create a new model version, letting Vectice
## generate the version name automatically.
metrics = [("Accuracy Score", accuracy_score_gnb_smote), ("F1 Score", f1_score_gnb_smote)]
model_version = vectice.create_model_version().with_parent_name("smote_models").with_algorithm("Gaussian NB SMOTE").with_type(ModelType.CLASSIFICATION).with_generated_version().with_metrics(metrics).with_status(ModelVersionStatus.EXPERIMENTATION)
vectice.end_run(outputs=[model_version])
```
We observe a significant increase in true negatives, but also a significant increase in false negatives.
Earlier, we noticed that gender and Residence_type do not have an impact on stroke.
To strengthen our classifier, we can drop them from the train and test feature sets.
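The trade-off between true and false negatives can be quantified directly from the confusion matrix; as a minimal sketch with made-up counts (illustrative values, not the actual output of the cells above):

```python
import numpy as np

# Illustrative 2x2 confusion matrix in scikit-learn's layout [[TN, FP], [FN, TP]].
# These counts are made up for the example; they are not taken from this notebook's run.
cm = np.array([[900, 50],
               [30, 20]])

tn, fp, fn, tp = cm.ravel()
false_negative_rate = fn / (fn + tp)  # share of true stroke cases the model misses
true_negative_rate = tn / (tn + fp)   # share of non-stroke cases correctly rejected

print(f"FNR: {false_negative_rate:.2f}, TNR: {true_negative_rate:.2f}")
```

For a screening-style problem like stroke prediction, the false negative rate is usually the cost to watch, which is why F1 is tracked alongside accuracy throughout this notebook.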
```
#Removing non-impactful features from smote_dataset
xtrain_mod_dropped = xtrain_mod.drop(['gender', 'Residence_type'], axis = 1)
xtest_dropped = xtest.drop(['gender', 'Residence_type'], axis = 1)
# Each execution for a given job is called a run.
# Setting a job's name is mandatory when starting a run
# and is useful to group and search runs in the Vectice UI.
# Adding a version of the Naive Bayes Classifier after dropping the non impactful features to the list of smote_models
vectice.create_run("Modeling- SMOTE", "TRAINING", "SMOTE_NaiveBayes_feature_selection").with_properties([("A run property", "A value")])
vectice.start_run(inputs=[input_smote_ds, input_code])
gnb = GaussianNB()
gnb.fit(xtrain_mod_dropped, ytrain_mod)
preds = gnb.predict(xtest_dropped)
print(confusion_matrix(ytest, preds))
train_preds = gnb.predict(xtrain_mod_dropped)
print('Train Accuracy Score: ',accuracy_score(ytrain_mod, train_preds))
print('Train F1 Score: ',f1_score(ytrain_mod, train_preds))
accuracy_score_gnb_fs = accuracy_score(ytest, preds)
f1_score_gnb_fs = f1_score(ytest,preds)
print('Accuracy Score: ',accuracy_score_gnb_fs)
print('F1 Score: ',f1_score_gnb_fs)
# Let's log the model we trained along with its metrics, as a new version
# of the "smote_models" model in Vectice.
## Here we use with_generated_version() to create a new model version, letting Vectice
## generate the version name automatically.
metrics = [("Accuracy Score", accuracy_score_gnb_fs), ("F1 Score", f1_score_gnb_fs)]
model_version = vectice.create_model_version().with_parent_name("smote_models").with_algorithm("GNB_features_selection").with_type(ModelType.CLASSIFICATION).with_generated_version().with_metrics(metrics).with_status(ModelVersionStatus.EXPERIMENTATION)
vectice.end_run(outputs=[model_version])
```
### KNN
**Non-SMOTE Dataset**
```
#Applying on non-SMOTE dataset
# Each execution for a given job is called a run.
# Setting a job's name is mandatory when starting a run
# and is useful to group and search runs in the Vectice UI.
# Adding a version of the KNN Classifier to the list of Non_smote_models
from sklearn.neighbors import KNeighborsClassifier
vectice.create_run("Modeling-Non-SMOTE", "TRAINING", "Non_SMOTE_KNN").with_properties([("A run property", "A value")])
vectice.start_run(inputs=[input_non_smote_ds, input_code])
n_neighbors=3
knn = KNeighborsClassifier(n_neighbors=n_neighbors)
knn.fit(xtrain, ytrain)
preds = knn.predict(xtest)
print(confusion_matrix(ytest, preds))
train_preds = knn.predict(xtrain)
print('Train Accuracy Score: ',accuracy_score(ytrain, train_preds))
print('Train F1 Score: ',f1_score(ytrain, train_preds))
accuracy_score_knn = accuracy_score(ytest, preds)
f1_score_knn = f1_score(ytest, preds)
print('Accuracy Score: ', accuracy_score_knn)
print('F1 Score: ', f1_score_knn)
# Let's log the model we trained along with its metrics, as a new version
# of the "Non_smote_models" model in Vectice.
## Here we use with_generated_version() to create a new model version, letting Vectice
## generate the version name automatically.
metrics = [("Accuracy Score", accuracy_score_knn), ("F1 Score", f1_score_knn)]
properties = [("n_neighbors", str(n_neighbors))]
model_version = vectice.create_model_version().with_parent_name("Non_smote_models").with_algorithm("KNeighbors Classifier").with_generated_version().with_metrics(metrics).with_properties(properties).with_status(ModelVersionStatus.EXPERIMENTATION).with_type(ModelType.CLASSIFICATION)
vectice.end_run(outputs=[model_version])
```
**SMOTE Dataset**
```
#Applying on SMOTE dataset
# Each execution for a given job is called a run.
# Setting a job's name is mandatory when starting a run
# and is useful to group and search runs in the Vectice UI.
# Adding a version of the KNN Classifier to the list of smote_models
vectice.create_run("Modeling- SMOTE", "TRAINING", "SMOTE_KNN").with_properties([("A run property", "A value")])
vectice.start_run(inputs=[input_smote_ds, input_code])
n_neighbors=2
p=1
knn = KNeighborsClassifier(n_neighbors=n_neighbors, p=p)
knn.fit(xtrain_mod, ytrain_mod)
preds = knn.predict(xtest)
print(confusion_matrix(ytest, preds))
train_preds = knn.predict(xtrain_mod)
print('Train Accuracy Score: ',accuracy_score(ytrain_mod, train_preds))
print('Train F1 Score: ',f1_score(ytrain_mod, train_preds))
accuracy_score_knn_smote = accuracy_score(ytest, preds)
f1_score_knn_smote = f1_score(ytest, preds)
print('Accuracy Score: ', accuracy_score_knn_smote)
print('F1 Score: ', f1_score_knn_smote)
# Let's log the model we trained along with its metrics, as a new version
# of the "smote_models" model in Vectice.
## Here we use with_generated_version() to create a new model version, letting Vectice
## generate the version name automatically.
metrics = [("Accuracy Score", accuracy_score_knn_smote), ("F1 Score", f1_score_knn_smote)]
properties = [("n_neighbors", str(n_neighbors)), ("p", str(p))]
model_version = vectice.create_model_version().with_parent_name("smote_models").with_algorithm("KNeighbors Classifier").with_generated_version().with_metrics(metrics).with_properties(properties).with_status(ModelVersionStatus.EXPERIMENTATION).with_type(ModelType.CLASSIFICATION)
vectice.end_run(outputs=[model_version])
```
We can get the models list by using **vectice.list_models()**
```
vectice.list_models().list
```
We can update a model's type or description by using **vectice.update_model()**
```
vectice.update_model(parent_name="smote_models", model_type=ModelType.CLASSIFICATION, description="A description")
```
```
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import gpplot as gpp
from poola import core as pool
import anchors
import core_functions as fns
gpp.set_aesthetics(palette='Set2')
def run_guide_residuals(lfc_df, paired_lfc_cols=[]):
    '''
    Calls the get_guide_residuals function from the anchors package to calculate
    guide-level residual z-scores.

    Inputs:
        1. lfc_df: data frame with log-fold changes (relative to pDNA)
        2. paired_lfc_cols: grouped list of initial populations and corresponding resistant populations
    '''
    lfc_df = lfc_df.drop_duplicates()
    if not paired_lfc_cols:
        paired_lfc_cols = fns.pair_cols(lfc_df)[1]  # get lfc pairs
    # reference df: 'modified' = resistant condition, 'unperturbed' = initial condition
    ref_df = pd.DataFrame(columns=['modified', 'unperturbed'])
    row = 0  # row index for reference df
    for pair in paired_lfc_cols:
        # number of resistant pops in pair = len(pair) - 1
        res_idx = 1
        # if there are multiple resistant populations, iterate over them
        while res_idx < len(pair):
            ref_df.loc[row, 'modified'] = pair[res_idx]
            ref_df.loc[row, 'unperturbed'] = pair[0]
            res_idx += 1
            row += 1
    print(ref_df)
    # guide-level residuals from lfc_df and the reference df
    residuals_lfcs, all_model_info, model_fit_plots = anchors.get_guide_residuals(lfc_df, ref_df)
    return residuals_lfcs, all_model_info, model_fit_plots

def select_top_ranks(df, rank=5, largest=True):
    '''
    Picks the top-ranked rows from each data column of df.

    Inputs:
        1. df: data frame with a "Gene Symbol" column followed by the columns used to rank
        2. rank: number of top rows to select from each column (e.g. 5 for top 5)
        3. largest: if True, select the largest values; otherwise the smallest
    Outputs:
        1. final_top_rank_df: data frame with the top-ranked rows
    '''
    rank_cols = df.columns.to_list()[1:]
    prev_top_rank_rows = pd.DataFrame(columns=df.columns)
    for col in rank_cols:
        if largest:
            top_rank_rows = df.copy().nlargest(rank, col)
        else:
            top_rank_rows = df.copy().nsmallest(rank, col)
        top_rank_df = pd.concat([prev_top_rank_rows, top_rank_rows])  # concat with rows selected from previous columns
        prev_top_rank_rows = top_rank_df  # carry the combined list forward
    final_top_rank_df = prev_top_rank_rows.drop_duplicates(subset=['Gene Symbol'])  # drop duplicate gene rows
    return final_top_rank_df
```
# A549
## Data summary
```
reads_nopDNA_all = pd.read_csv('../../../Data/Reads/Goujon/A549/Cas9ActSecondaryLibrary/counts-JD_GPP2844_Goujon_Plate2.txt', sep='\t')
A549_cols = ['Construct Barcode'] + [col for col in reads_nopDNA_all.columns if 'A549' in col]
A549_reads_nopDNA = reads_nopDNA_all[A549_cols]
pDNA_reads_all = pd.read_csv('../../../Data/Reads/Goujon/Calu3/Secondary_Library/counts-LS20210325_A01_AAHG03_XPR502_G0_CP1663_M-AM39.txt', sep='\t')
pDNA_reads_all
pDNA_reads = pDNA_reads_all[['Construct Barcode','A01_AAHG03_XPR502_G0_CP1663_M-AM39']].copy()
pDNA_reads = pDNA_reads.rename(columns = {'A01_AAHG03_XPR502_G0_CP1663_M-AM39': 'pDNA'})
pDNA_reads
reads = pd.merge(pDNA_reads, A549_reads_nopDNA, how = 'right', on='Construct Barcode')
empty_cols = [col for col in reads.columns if 'EMPTY' in col]
reads = reads.copy().drop(empty_cols, axis=1)
reads
# Gene Annotations
chip = pd.read_csv('../../../Data/Interim/Goujon/Secondary_Library/CP1663_GRCh38_NCBI_strict_gene_20210707.chip', sep='\t')
chip = chip.rename(columns={'Barcode Sequence':'Construct Barcode'})
chip_reads = pd.merge(chip[['Construct Barcode', 'Gene Symbol']], reads, on = ['Construct Barcode'], how = 'right')
chip_reads
#Calculate lognorm
cols = chip_reads.columns[2:].to_list() #reads columns = start at 3rd column
lognorms = fns.get_lognorm(chip_reads.dropna(), cols = cols)
col_list = []
for col in lognorms.columns:
    if 'intitial' in col:  # fix the 'intitial' misspelling present in the raw column names
        col_list.append(col.replace('intitial', 'initial'))
    else:
        col_list.append(col)
lognorms.columns = col_list
lognorms
```
## Quality Control
### Population Distributions
```
#Calculate log-fold change relative to pDNA
target_cols = list(lognorms.columns[3:])
pDNA_lfc = fns.calculate_lfc(lognorms,target_cols)
pDNA_lfc
pair1 = list(pDNA_lfc.columns[2:4])
pair2 = list(pDNA_lfc.columns[4:])
paired_cols = (True, [pair1, pair2])
#Plot population distributions of log-fold changes
fns.lfc_dist_plot(pDNA_lfc, paired_cols=paired_cols, filename = 'A549_SecondaryLibraryAct_Goujon')
```
### Distributions of control sets
```
fns.control_dist_plot(pDNA_lfc, paired_cols=paired_cols, control_name=['ONE_INTERGENIC_SITE'], filename = 'A549_SecondaryLibraryAct_Goujon')
```
### ROC-AUC
Essential gene set: Hart et al., 2015
<br>
Non-essential gene set: Hart et al., 2014
<br>
AUC expected to be ~0.5 because no cutting occurred
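The cell below computes AUCs via `pool.get_roc_aucs`; the same quantity can be sketched with the Mann-Whitney formulation of ROC-AUC. On toy scores where the essential and non-essential sets are drawn from the same distribution (values are made up), the AUC should sit near 0.5, mirroring the no-cutting expectation:

```python
import numpy as np

rng = np.random.default_rng(0)
ess = rng.normal(size=200)  # toy log-fold changes for the "essential" set
non = rng.normal(size=200)  # toy log-fold changes for the "non-essential" set

# ROC-AUC as the probability that a randomly chosen essential gene outscores
# a randomly chosen non-essential one; ties count as half.
wins = np.sum(ess[:, None] > non[None, :])
ties = np.sum(ess[:, None] == non[None, :])
auc = (wins + 0.5 * ties) / (len(ess) * len(non))
print(round(auc, 2))  # near 0.5 when the two sets are indistinguishable
```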
```
ess_genes, non_ess_genes = fns.get_gene_sets()
initial_cols = [col for col in pDNA_lfc.columns if 'initial' in col]
tp_genes = ess_genes.loc[:, 'Gene Symbol'].to_list()
fp_genes = non_ess_genes.loc[:, 'Gene Symbol'].to_list()
initial_roc_dict = {}
initial_roc_auc_dict = {}
for col in initial_cols:
    roc_auc, roc_df = pool.get_roc_aucs(pDNA_lfc, tp_genes, fp_genes, gene_col='Gene Symbol', score_col=col)
    initial_roc_dict[col] = roc_df
    initial_roc_auc_dict[col] = roc_auc
fig, ax = plt.subplots(figsize=(6, 6))
for key, df in initial_roc_dict.items():
    roc_auc = initial_roc_auc_dict[key]
    ax = sns.lineplot(data=df, x='fpr', y='tpr', ci=None, label=key + ', ' + str(round(roc_auc, 2)))
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('ROC-AUC')
plt.xlabel('False Positive Rate (non-essential)')
plt.ylabel('True Positive Rate (essential)')
```
## Gene level analysis
### Residual z-scores
```
lfc_df = pDNA_lfc.drop('Gene Symbol', axis = 1)
lfc_df
# run_guide_residuals(lfc_df.drop_duplicates(), cols)
residuals_lfcs, all_model_info, model_fit_plots = run_guide_residuals(lfc_df, paired_lfc_cols=paired_cols[1])
residuals_lfcs
guide_mapping = pool.group_pseudogenes(chip[['Construct Barcode', 'Gene Symbol']], pseudogene_size=10, gene_col='Gene Symbol', control_regex=['ONE_INTERGENIC_SITE'])
guide_mapping
A549_gene_residuals = anchors.get_gene_residuals(residuals_lfcs.drop_duplicates(), guide_mapping)
A549_gene_residuals
```
Average across resistant screens and apply guide filter
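`fns.format_gene_residuals` is a project-specific helper; as a rough pandas sketch of the averaging-with-guide-filter step it performs here (the column names and toy values below are assumptions for illustration, not the helper's actual schema):

```python
import pandas as pd

# Toy gene-level residuals; real column names come from the anchors package
# and fns.format_gene_residuals, so treat this schema as an assumption.
toy_residuals = pd.DataFrame({
    'Gene Symbol': ['ACE2', 'ACE2', 'CTSL', 'CTSL', 'LY6E', 'LY6E'],
    'condition': ['resistant#1', 'resistant#2'] * 3,
    'residual_zscore': [-4.0, -3.5, 2.1, 1.9, 0.2, -0.1],
    'n_guides': [10, 10, 9, 9, 3, 3],
})

# Keep genes whose guide count falls inside the expected range (8-11 here,
# matching guide_min/guide_max), then average the residual z-score across
# the resistant conditions.
filtered = toy_residuals[toy_residuals['n_guides'].between(8, 11)]
avg = (filtered.groupby('Gene Symbol', as_index=False)['residual_zscore']
               .mean()
               .sort_values('residual_zscore'))
print(avg)
```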
```
A549_gene_residual_sheet = fns.format_gene_residuals(A549_gene_residuals, guide_min = 8, guide_max = 11, ascending=True)
guide_residual_sheet = pd.merge(guide_mapping, residuals_lfcs.drop_duplicates(), on = 'Construct Barcode', how = 'inner')
guide_residual_sheet
with pd.ExcelWriter('../../../Data/Processed/GEO_submission_v2/SecondaryLibrary/A549_SecondaryLibraryAct_Goujon.xlsx') as writer:
    A549_gene_residual_sheet.to_excel(writer, sheet_name='A549_avg_zscore', index=False)
    reads.to_excel(writer, sheet_name='A549_genomewide_reads', index=False)
    guide_mapping.to_excel(writer, sheet_name='A549_guide_mapping', index=False)
with pd.ExcelWriter('../../../Data/Processed/Individual_screens_v2/SecondaryLibraryIndivScreens/A549_SecondaryLibraryAct_Goujon.xlsx') as writer:
    A549_gene_residuals.to_excel(writer, sheet_name='condition_genomewide_zscore', index=False)
    guide_residual_sheet.to_excel(writer, sheet_name='guide-level_zscore', index=False)
```
### A549 Act Replicate Correlations
```
A549_screen1_df = A549_gene_residuals[A549_gene_residuals['condition'].str.contains('#1')]
A549_screen2_df = A549_gene_residuals[A549_gene_residuals['condition'].str.contains('#2')]
A549_zscore_df = pd.merge(A549_screen1_df[['Gene Symbol', 'residual_zscore']], A549_screen2_df[['Gene Symbol', 'residual_zscore']], on = 'Gene Symbol', how = 'outer', suffixes = ['_screen#1', '_screen#2'])
A549_zscore_df
# Screen 2 vs Screen 1
fig, ax = plt.subplots(figsize = (2, 2))
ax = gpp.point_densityplot(A549_zscore_df, 'residual_zscore_screen#1', 'residual_zscore_screen#2', s=6)
ax = gpp.add_correlation(A549_zscore_df, 'residual_zscore_screen#1', 'residual_zscore_screen#2', fontsize=7)
top_ranked_screen1 = A549_zscore_df.nsmallest(5, 'residual_zscore_screen#1')
top_ranked_screen2 = A549_zscore_df.nsmallest(5, 'residual_zscore_screen#2')
bottom_ranked_screen1 = A549_zscore_df.nlargest(5, 'residual_zscore_screen#1')
bottom_ranked_screen2 = A549_zscore_df.nlargest(5, 'residual_zscore_screen#2')
screen1_ranked = pd.concat([top_ranked_screen1, bottom_ranked_screen1])
screen2_ranked = pd.concat([top_ranked_screen2, bottom_ranked_screen2])
ranked = pd.concat([screen1_ranked, screen2_ranked]).drop_duplicates()
# Annotate common hits
# common_ranked = pd.merge(screen1_ranked, screen2_ranked, on = ['Gene Symbol', 'residual_zscore_screen#1', 'residual_zscore_screen#2'], how = 'inner')
# common_ranked
sns.scatterplot(data=ranked, x='residual_zscore_screen#1', y='residual_zscore_screen#2', color = sns.color_palette('Set2')[0], edgecolor=None, s=6, rasterized=True)
texts = []
for j, row in ranked.iterrows():
    texts.append(ax.text(row['residual_zscore_screen#1'] + 0.25, row['residual_zscore_screen#2'],
                         row['Gene Symbol'], fontsize=7, color='black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
plt.title('A549 Cas9 CRISPRa Secondary Library Screen', fontsize=7)
plt.xlabel('Screen #1', fontsize=7)
plt.ylabel('Screen #2', fontsize=7)
plt.xticks(fontsize=7)
plt.yticks(fontsize=7)
sns.despine()
gpp.savefig('../../../Figures/Scatterplots/A549_Cas9_Act_Secondary_Screen1vs2_scatterplot.pdf', dpi=300)
```
### Primary KO vs Cas9 Secondary A549 Act screens
```
Cas9_KO_A549_Sanjana_primary = pd.read_excel('../../../Data/Processed/GEO_submission_v2/A549_GeCKOv2_Sanjana_v2.xlsx')
Cas9_KO_A549_Zhang_primary = pd.read_excel('../../../Data/Processed/GEO_submission_v2/A549_Brunello_Zhang.xlsx')
Cas9_KO_A549_primary = pd.merge(Cas9_KO_A549_Sanjana_primary, Cas9_KO_A549_Zhang_primary, on = 'Gene Symbol', how = 'outer')
Cas9_KO_A549_primary = Cas9_KO_A549_primary.rename(columns = {'residual_zscore_avg':'residual_zscore_avg_Sanjana',
'Rank_residual_zscore_avg':'Rank_residual_zscore_avg_Sanjana',
'residual_zscore':'residual_zscore_Zhang',
'Rank_residual_zscore':'Rank_residual_zscore_Zhang'})
primaryvssecondary_A549 = pd.merge(Cas9_KO_A549_primary, A549_gene_residual_sheet, on = 'Gene Symbol', how = 'inner')
primaryvssecondary_A549 = primaryvssecondary_A549.rename(columns={'residual_zscore_avg':'residual_zscore_avg_SecondaryAct',
'Rank_residual_zscore_avg':'Rank_residual_zscore_avg_SecondaryAct'})
fig, axs = plt.subplots(ncols = 2, figsize = (8, 4), sharey=True)
axs[0] = gpp.point_densityplot(primaryvssecondary_A549.dropna(), 'residual_zscore_avg_Sanjana', 'residual_zscore_avg_SecondaryAct', s=6, ax = axs[0])
axs[0] = gpp.add_correlation(primaryvssecondary_A549.dropna(), 'residual_zscore_avg_Sanjana', 'residual_zscore_avg_SecondaryAct', loc = 'lower right', ax = axs[0])
axs[0].set_xlabel('Sanjana (primary KO, mean z-score)')
axs[0].set_ylabel('Secondary (activation, mean z-score)')
axs[1] = gpp.point_densityplot(primaryvssecondary_A549.dropna(), 'residual_zscore_Zhang', 'residual_zscore_avg_SecondaryAct', s=6, ax = axs[1])
axs[1] = gpp.add_correlation(primaryvssecondary_A549.dropna(), 'residual_zscore_Zhang', 'residual_zscore_avg_SecondaryAct', loc = 'lower right', ax = axs[1])
axs[1].set_xlabel('Zhang (primary KO, mean z-score)')
axs[1].set_ylabel('Secondary (act, mean z-score)')
top_ranked_Sanjana = primaryvssecondary_A549.nlargest(5, 'residual_zscore_avg_Sanjana')
bottom_ranked_Sanjana = primaryvssecondary_A549.nsmallest(5, 'residual_zscore_avg_Sanjana')
top_ranked_Zhang = primaryvssecondary_A549.nlargest(5, 'residual_zscore_Zhang')
bottom_ranked_Zhang = primaryvssecondary_A549.nsmallest(5, 'residual_zscore_Zhang')
top_ranked_SecondaryAct = primaryvssecondary_A549.nlargest(5, 'residual_zscore_avg_SecondaryAct')
bottom_ranked_SecondaryAct = primaryvssecondary_A549.nsmallest(5, 'residual_zscore_avg_SecondaryAct')
Sanjana_ranked = pd.concat([top_ranked_Sanjana, bottom_ranked_Sanjana])
Zhang_ranked = pd.concat([top_ranked_Zhang, bottom_ranked_Zhang])
SecondaryAct_ranked = pd.concat([top_ranked_SecondaryAct, bottom_ranked_SecondaryAct])
# Annotate hits
annot_df_Sanjana = pd.concat([Sanjana_ranked, SecondaryAct_ranked]).drop_duplicates()
sns.scatterplot(data=annot_df_Sanjana, x='residual_zscore_avg_Sanjana', y='residual_zscore_avg_SecondaryAct', color = sns.color_palette('Set2')[0], edgecolor=None, s=6, rasterized=True, ax=axs[0])
texts = []
for j, row in annot_df_Sanjana.iterrows():
    texts.append(axs[0].text(row['residual_zscore_avg_Sanjana'] + 0.25, row['residual_zscore_avg_SecondaryAct'],
                             row['Gene Symbol'], fontsize=9, color='black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
annot_df_Zhang = pd.concat([Zhang_ranked, SecondaryAct_ranked]).drop_duplicates()
sns.scatterplot(data=annot_df_Zhang, x='residual_zscore_Zhang', y='residual_zscore_avg_SecondaryAct', color = sns.color_palette('Set2')[0], edgecolor=None, s=6, rasterized=True, ax=axs[1])
texts = []
for j, row in annot_df_Zhang.iterrows():
    texts.append(axs[1].text(row['residual_zscore_Zhang'] + 0.25, row['residual_zscore_avg_SecondaryAct'],
                             row['Gene Symbol'], fontsize=9, color='black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
plt.suptitle('A549 Act secondary vs KO primary screens')
# plt.title('Calu-3 Cas12a Secondary vs Cas9 Primary KO Screens', fontsize=7)
# plt.xlabel('Cas9 Primary', fontsize=7)
# plt.ylabel('Cas12a Secondary', fontsize=7)
# plt.xticks(fontsize=7)
# plt.yticks(fontsize=7)
sns.despine()
# fig.savefig('../../../Figures/Scatterplots/Calu3_KO_Cas9PrimaryvsCas12aSecondary_scatterplot.png', bbox_inches = 'tight', dpi=300)
gpp.savefig('../../../Figures/Scatterplots/A549_KOPrimaryvsActSecondary_scatterplot.pdf', dpi=300)
# with pd.ExcelWriter('../../../Data/Processed/Individual_screens_v2/Calu3_SecondaryLibraryKO_Goujon_indiv_screens.xlsx') as writer:
# zscore_df.to_excel(writer, sheet_name='indiv_screen_zscore', index =False)
# A549_gene_residuals.to_excel(writer, sheet_name='condition_genomewide_zscore', index =False)
# guide_residual_sheet.to_excel(writer, sheet_name='guide-level_zscore', index =False)
```
# Huh-7.5.1
## Data summary
```
reads_nopDNA_all = pd.read_csv('../../../Data/Reads/Goujon/A549/Cas9ActSecondaryLibrary/counts-JD_GPP2844_Goujon_Plate2.txt', sep='\t')
Huh7_5_1_cols = ['Construct Barcode'] + [col for col in reads_nopDNA_all.columns if 'Huh751' in col]
Huh751_reads_nopDNA = reads_nopDNA_all[Huh7_5_1_cols]
pDNA_reads_all = pd.read_csv('../../../Data/Reads/Goujon/Calu3/Secondary_Library/counts-LS20210325_A01_AAHG03_XPR502_G0_CP1663_M-AM39.txt', sep='\t')
pDNA_reads_all
pDNA_reads = pDNA_reads_all[['Construct Barcode','A01_AAHG03_XPR502_G0_CP1663_M-AM39']].copy()
pDNA_reads = pDNA_reads.rename(columns = {'A01_AAHG03_XPR502_G0_CP1663_M-AM39': 'pDNA'})
pDNA_reads
reads = pd.merge(pDNA_reads, Huh751_reads_nopDNA, how = 'right', on='Construct Barcode')
empty_cols = [col for col in reads.columns if 'EMPTY' in col]
reads = reads.copy().drop(empty_cols, axis=1)
reads
# Gene Annotations
chip = pd.read_csv('../../../Data/Interim/Goujon/Secondary_Library/CP1663_GRCh38_NCBI_strict_gene_20210707.chip', sep='\t')
chip = chip.rename(columns={'Barcode Sequence':'Construct Barcode'})
chip_reads = pd.merge(chip[['Construct Barcode', 'Gene Symbol']], reads, on = ['Construct Barcode'], how = 'right')
chip_reads
#Calculate lognorm
cols = chip_reads.columns[2:].to_list() #reads columns = start at 3rd column
lognorms = fns.get_lognorm(chip_reads.dropna(), cols = cols)
col_list = []
for col in lognorms.columns:
    if 'intitial' in col:  # fix the 'intitial' misspelling present in the raw column names
        col_list.append(col.replace('intitial', 'initial'))
    else:
        col_list.append(col)
lognorms.columns = col_list
lognorms
```
## Quality Control
### Population Distributions
```
#Calculate log-fold change relative to pDNA
target_cols = list(lognorms.columns[3:])
pDNA_lfc = fns.calculate_lfc(lognorms,target_cols)
pDNA_lfc
pair1 = list(pDNA_lfc.columns[2:4])
pair2 = list(pDNA_lfc.columns[4:])
paired_cols = (True, [pair1, pair2])
#Plot population distributions of log-fold changes
fns.lfc_dist_plot(pDNA_lfc, paired_cols=paired_cols, filename = 'A549_SecondaryLibraryAct_Goujon')
```
### Distributions of control sets
```
fns.control_dist_plot(pDNA_lfc, paired_cols=paired_cols, control_name=['ONE_INTERGENIC_SITE'], filename = 'A549_SecondaryLibraryAct_Goujon')
```
### ROC-AUC
Essential gene set: Hart et al., 2015
<br>
Non-essential gene set: Hart et al., 2014
<br>
AUC expected to be ~0.5 because no cutting occurred
```
ess_genes, non_ess_genes = fns.get_gene_sets()
initial_cols = [col for col in pDNA_lfc.columns if 'initial' in col]
tp_genes = ess_genes.loc[:, 'Gene Symbol'].to_list()
fp_genes = non_ess_genes.loc[:, 'Gene Symbol'].to_list()
initial_roc_dict = {}
initial_roc_auc_dict = {}
for col in initial_cols:
    roc_auc, roc_df = pool.get_roc_aucs(pDNA_lfc, tp_genes, fp_genes, gene_col='Gene Symbol', score_col=col)
    initial_roc_dict[col] = roc_df
    initial_roc_auc_dict[col] = roc_auc
fig, ax = plt.subplots(figsize=(6, 6))
for key, df in initial_roc_dict.items():
    roc_auc = initial_roc_auc_dict[key]
    ax = sns.lineplot(data=df, x='fpr', y='tpr', ci=None, label=key + ', ' + str(round(roc_auc, 2)))
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('ROC-AUC')
plt.xlabel('False Positive Rate (non-essential)')
plt.ylabel('True Positive Rate (essential)')
```
## Gene level analysis
### Residual z-scores
```
lfc_df = pDNA_lfc.drop('Gene Symbol', axis = 1)
lfc_df
# run_guide_residuals(lfc_df.drop_duplicates(), cols)
residuals_lfcs, all_model_info, model_fit_plots = run_guide_residuals(lfc_df, paired_lfc_cols=paired_cols[1])
residuals_lfcs
guide_mapping = pool.group_pseudogenes(chip[['Construct Barcode', 'Gene Symbol']], pseudogene_size=10, gene_col='Gene Symbol', control_regex=['ONE_INTERGENIC_SITE'])
guide_mapping
gene_residuals = anchors.get_gene_residuals(residuals_lfcs.drop_duplicates(), guide_mapping)
gene_residuals
gene_residual_sheet = fns.format_gene_residuals(gene_residuals, guide_min = 8, guide_max = 11, ascending=True)
guide_residual_sheet = pd.merge(guide_mapping, residuals_lfcs.drop_duplicates(), on = 'Construct Barcode', how = 'inner')
guide_residual_sheet
with pd.ExcelWriter('../../../Data/Processed/GEO_submission_v2/SecondaryLibrary/Huh751_SecondaryLibraryAct_Goujon.xlsx') as writer:
    gene_residual_sheet.to_excel(writer, sheet_name='Huh751_avg_zscore', index=False)
    reads.to_excel(writer, sheet_name='Huh751_genomewide_reads', index=False)
    guide_mapping.to_excel(writer, sheet_name='Huh751_guide_mapping', index=False)
with pd.ExcelWriter('../../../Data/Processed/Individual_screens_v2/SecondaryLibraryIndivScreens/Huh751_SecondaryLibraryAct_Goujon.xlsx') as writer:
    gene_residuals.to_excel(writer, sheet_name='condition_genomewide_zscore', index=False)
    guide_residual_sheet.to_excel(writer, sheet_name='guide-level_zscore', index=False)
```
### Huh-7.5.1 Act Replicate Correlations
```
Huh751_screen1_df = gene_residuals[gene_residuals['condition'].str.contains('#1')]
Huh751_screen2_df = gene_residuals[gene_residuals['condition'].str.contains('#2')]
Huh751_zscore_df = pd.merge(Huh751_screen1_df[['Gene Symbol', 'residual_zscore']], Huh751_screen2_df[['Gene Symbol', 'residual_zscore']], on = 'Gene Symbol', how = 'outer', suffixes = ['_screen#1', '_screen#2'])
Huh751_zscore_df
# Screen 2 vs Screen 1
fig, ax = plt.subplots(figsize = (2, 2))
ax = gpp.point_densityplot(Huh751_zscore_df, 'residual_zscore_screen#1', 'residual_zscore_screen#2', s=6)
ax = gpp.add_correlation(Huh751_zscore_df, 'residual_zscore_screen#1', 'residual_zscore_screen#2', fontsize=7)
top_ranked_screen1 = Huh751_zscore_df.nsmallest(5, 'residual_zscore_screen#1')
top_ranked_screen2 = Huh751_zscore_df.nsmallest(5, 'residual_zscore_screen#2')
bottom_ranked_screen1 = Huh751_zscore_df.nlargest(5, 'residual_zscore_screen#1')
bottom_ranked_screen2 = Huh751_zscore_df.nlargest(5, 'residual_zscore_screen#2')
screen1_ranked = pd.concat([top_ranked_screen1, bottom_ranked_screen1])
screen2_ranked = pd.concat([top_ranked_screen2, bottom_ranked_screen2])
ranked = pd.concat([screen1_ranked, screen2_ranked]).drop_duplicates()
# Annotate common hits
# common_ranked = pd.merge(screen1_ranked, screen2_ranked, on = ['Gene Symbol', 'residual_zscore_screen#1', 'residual_zscore_screen#2'], how = 'inner')
# common_ranked
sns.scatterplot(data=ranked, x='residual_zscore_screen#1', y='residual_zscore_screen#2', color = sns.color_palette('Set2')[0], edgecolor=None, s=6)
texts = []
for j, row in ranked.iterrows():
    texts.append(ax.text(row['residual_zscore_screen#1'] + 0.25, row['residual_zscore_screen#2'], row['Gene Symbol'],
                         fontsize=7, color='black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
plt.title('Huh751 Cas9 CRISPRa Secondary Library Screen', fontsize=7)
plt.xlabel('Screen #1 (Mean z-score)', fontsize=7)
plt.ylabel('Screen #2 (Mean z-score)', fontsize=7)
plt.xticks(fontsize=7)
plt.yticks(fontsize=7)
sns.despine()
gpp.savefig('../../../Figures/Scatterplots/Huh751_Cas9_Act_Secondary_Screen1vs2_scatterplot.pdf', dpi=300)
```
### Primary KO vs Cas9 Secondary Huh-7.5.1 Act screens
```
Cas9_KO_Huh7_5_Puschnik_primary = pd.read_excel('../../../Data/Processed/GEO_submission_v2/Huh751_GeCKOv2_Puschnik.xlsx')
Cas9_KO_Huh7_5_Poirier_primary = pd.read_excel('../../../Data/Processed/GEO_submission_v2/Huh75_Brunello_Poirier_v2.xlsx')
Cas9_KO_Huh7_5_primary = pd.merge(Cas9_KO_Huh7_5_Puschnik_primary, Cas9_KO_Huh7_5_Poirier_primary, on = 'Gene Symbol', how = 'outer')
Cas9_KO_Huh7_5_primary = Cas9_KO_Huh7_5_primary.rename(columns = {'residual_zscore':'residual_zscore_Puschnik',
'Rank_residual_zscore':'Rank_residual_zscore_Puschnik',
'residual_zscore_avg':'residual_zscore_avg_Poirier',
'Rank_residual_zscore_avg':'Rank_residual_zscore_avg_Poirier'})
primaryvssecondary_Huh7_5 = pd.merge(Cas9_KO_Huh7_5_primary, gene_residual_sheet, on = 'Gene Symbol', how = 'inner')
primaryvssecondary_Huh7_5
primaryvssecondary_Huh7_5 = primaryvssecondary_Huh7_5.rename(columns={'residual_zscore_avg':'residual_zscore_avg_SecondaryAct',
'Rank_residual_zscore_avg':'Rank_residual_zscore_avg_SecondaryAct'})
fig, axs = plt.subplots(ncols = 2, figsize = (8, 4), sharey=True)
axs[0] = gpp.point_densityplot(primaryvssecondary_Huh7_5.dropna(), 'residual_zscore_Puschnik', 'residual_zscore_avg_SecondaryAct', s=6, ax = axs[0])
axs[0] = gpp.add_correlation(primaryvssecondary_Huh7_5.dropna(), 'residual_zscore_Puschnik', 'residual_zscore_avg_SecondaryAct', loc = 'lower right', ax = axs[0])
axs[0].set_xlabel('Puschnik (primary KO, mean z-score)')
axs[0].set_ylabel('Secondary (activation, mean z-score)')
axs[1] = gpp.point_densityplot(primaryvssecondary_Huh7_5.dropna(), 'residual_zscore_avg_Poirier', 'residual_zscore_avg_SecondaryAct', s=6, ax = axs[1])
axs[1] = gpp.add_correlation(primaryvssecondary_Huh7_5.dropna(), 'residual_zscore_avg_Poirier', 'residual_zscore_avg_SecondaryAct', loc = 'lower right', ax = axs[1])
axs[1].set_xlabel('Poirier (primary KO, mean z-score)')
axs[1].set_ylabel('Secondary (act, mean z-score)')
top_ranked_Puschnik = primaryvssecondary_Huh7_5.nlargest(5, 'residual_zscore_Puschnik')
bottom_ranked_Puschnik = primaryvssecondary_Huh7_5.nsmallest(5, 'residual_zscore_Puschnik')
top_ranked_Poirier = primaryvssecondary_Huh7_5.nlargest(5, 'residual_zscore_avg_Poirier')
bottom_ranked_Poirier = primaryvssecondary_Huh7_5.nsmallest(5, 'residual_zscore_avg_Poirier')
top_ranked_SecondaryAct = primaryvssecondary_Huh7_5.nlargest(5, 'residual_zscore_avg_SecondaryAct')
bottom_ranked_SecondaryAct = primaryvssecondary_Huh7_5.nsmallest(5, 'residual_zscore_avg_SecondaryAct')
Puschnik_ranked = pd.concat([top_ranked_Puschnik, bottom_ranked_Puschnik])
Poirier_ranked = pd.concat([top_ranked_Poirier, bottom_ranked_Poirier])
SecondaryAct_ranked = pd.concat([top_ranked_SecondaryAct, bottom_ranked_SecondaryAct])
# Annotate hits
annot_df_Puschnik = pd.concat([Puschnik_ranked, SecondaryAct_ranked]).drop_duplicates()
sns.scatterplot(data=annot_df_Puschnik, x='residual_zscore_Puschnik', y='residual_zscore_avg_SecondaryAct', color = sns.color_palette('Set2')[0], edgecolor=None, s=6, rasterized=True, ax=axs[0])
texts = []
for j, row in annot_df_Puschnik.iterrows():
    texts.append(axs[0].text(row['residual_zscore_Puschnik'] + 0.25, row['residual_zscore_avg_SecondaryAct'], row['Gene Symbol'],
                             fontsize=9, color='black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
annot_df_Poirier = pd.concat([Poirier_ranked, SecondaryAct_ranked]).drop_duplicates()
sns.scatterplot(data=annot_df_Poirier, x='residual_zscore_avg_Poirier', y='residual_zscore_avg_SecondaryAct', color = sns.color_palette('Set2')[0], edgecolor=None, s=6, rasterized=True, ax=axs[1])
texts = []
for j, row in annot_df_Poirier.iterrows():
    texts.append(axs[1].text(row['residual_zscore_avg_Poirier'] + 0.25, row['residual_zscore_avg_SecondaryAct'], row['Gene Symbol'],
                             fontsize=9, color='black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
plt.suptitle('Huh-7.5 Act secondary vs KO primary screens')
sns.despine()
# # fig.savefig('../../../Figures/Scatterplots/Calu3_KO_Cas9PrimaryvsCas12aSecondary_scatterplot.png', bbox_inches = 'tight', dpi=300)
gpp.savefig('../../../Figures/Scatterplots/Huh7_5_KOPrimaryvsActSecondary_scatterplot.pdf', dpi=300)
# with pd.ExcelWriter('../../../Data/Processed/Individual_screens_v2/Calu3_SecondaryLibraryKO_Goujon_indiv_screens.xlsx') as writer:
# zscore_df.to_excel(writer, sheet_name='indiv_screen_zscore', index =False)
# gene_residuals.to_excel(writer, sheet_name='condition_genomewide_zscore', index =False)
# guide_residual_sheet.to_excel(writer, sheet_name='guide-level_zscore', index =False)
```
# Secondary Activation Screen comparison
```
# Read in processed z-scores for all secondary activation screens
Calu3_Secondary_Act = pd.read_excel('../../../Data/Processed/GEO_submission_v2/SecondaryLibrary/Calu3_SecondaryLibraryAct_Goujon.xlsx')
A549_Secondary_Act = pd.read_excel('../../../Data/Processed/GEO_submission_v2/SecondaryLibrary/A549_SecondaryLibraryAct_Goujon.xlsx')
Huh751_Secondary_Act = pd.read_excel('../../../Data/Processed/GEO_submission_v2/SecondaryLibrary/Huh751_SecondaryLibraryAct_Goujon.xlsx')
Calu3_Secondary_Act = Calu3_Secondary_Act.rename(columns = {'residual_zscore_avg':'residual_zscore_avg_Calu3',
'Rank_residual_zscore_avg':'Rank_residual_zscore_avg_Calu3'})
SecondaryAct = pd.merge(Calu3_Secondary_Act,
pd.merge(A549_Secondary_Act, Huh751_Secondary_Act, on = 'Gene Symbol', how = 'inner', suffixes = ['_A549', '_Huh751']),
on = 'Gene Symbol', how = 'inner')
top_ranked_Calu3 = SecondaryAct.nlargest(5, 'residual_zscore_avg_Calu3')
bottom_ranked_Calu3 = SecondaryAct.nsmallest(5, 'residual_zscore_avg_Calu3')
top_ranked_A549 = SecondaryAct.nlargest(5, 'residual_zscore_avg_A549')
bottom_ranked_A549 = SecondaryAct.nsmallest(5, 'residual_zscore_avg_A549')
top_ranked_Huh751 = SecondaryAct.nlargest(5, 'residual_zscore_avg_Huh751')
bottom_ranked_Huh751 = SecondaryAct.nsmallest(5, 'residual_zscore_avg_Huh751')
Calu3_ranked = pd.concat([top_ranked_Calu3, bottom_ranked_Calu3])
A549_ranked = pd.concat([top_ranked_A549, bottom_ranked_A549])
Huh751_ranked = pd.concat([top_ranked_Huh751, bottom_ranked_Huh751])
fig, axs = plt.subplots(ncols=3, figsize=(12, 4))
# A549 vs Calu-3
gpp.point_densityplot(data = SecondaryAct, x = 'residual_zscore_avg_Calu3', y = 'residual_zscore_avg_A549', s=6, ax = axs[0])
ax = gpp.add_correlation(SecondaryAct.dropna(), 'residual_zscore_avg_Calu3', 'residual_zscore_avg_A549', loc = 'upper left', ax = axs[0])
axs[0].set_xlabel('Calu-3 (Mean z-score)')
axs[0].set_ylabel('A549 (Mean z-score)')
# Annotate
annot_df_Calu3_A549 = pd.concat([Calu3_ranked, A549_ranked]).drop_duplicates()
sns.scatterplot(data=annot_df_Calu3_A549, x='residual_zscore_avg_Calu3', y='residual_zscore_avg_A549', color = sns.color_palette('Set2')[0], edgecolor=None, s=6, rasterized=True, ax=axs[0])
texts = []
for j, row in annot_df_Calu3_A549.iterrows():
    texts.append(axs[0].text(row['residual_zscore_avg_Calu3'] + 0.25, row['residual_zscore_avg_A549'], row['Gene Symbol'],
                             fontsize=9, color='black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
# Huh-7.5.1 vs Calu-3
gpp.point_densityplot(data = SecondaryAct, x = 'residual_zscore_avg_Calu3', y = 'residual_zscore_avg_Huh751', s=6, ax = axs[1])
ax = gpp.add_correlation(SecondaryAct.dropna(), 'residual_zscore_avg_Calu3', 'residual_zscore_avg_Huh751', loc = 'upper left', ax = axs[1])
axs[1].set_xlabel('Calu-3 (Mean z-score)')
axs[1].set_ylabel('Huh-7.5.1 (Mean z-score)')
# Annotate
annot_df_Calu3_Huh751 = pd.concat([Calu3_ranked, Huh751_ranked]).drop_duplicates()
sns.scatterplot(data=annot_df_Calu3_Huh751, x='residual_zscore_avg_Calu3', y='residual_zscore_avg_Huh751', color = sns.color_palette('Set2')[0], edgecolor=None, s=6, rasterized=True, ax=axs[1])
texts = []
for j, row in annot_df_Calu3_Huh751.iterrows():
    texts.append(axs[1].text(row['residual_zscore_avg_Calu3'] + 0.25, row['residual_zscore_avg_Huh751'], row['Gene Symbol'],
                             fontsize=9, color='black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
# Huh-7.5.1 vs A549
gpp.point_densityplot(data = SecondaryAct, x = 'residual_zscore_avg_A549', y = 'residual_zscore_avg_Huh751', s=6, ax = axs[2])
ax = gpp.add_correlation(SecondaryAct.dropna(), 'residual_zscore_avg_A549', 'residual_zscore_avg_Huh751', loc = 'upper left', ax = axs[2])
axs[2].set_xlabel('A549 (Mean z-score)')
axs[2].set_ylabel('Huh-7.5.1 (Mean z-score)')
# Annotate
annot_df_A549_Huh751 = pd.concat([A549_ranked, Huh751_ranked]).drop_duplicates()
sns.scatterplot(data=annot_df_A549_Huh751, x='residual_zscore_avg_A549', y='residual_zscore_avg_Huh751', color = sns.color_palette('Set2')[0], edgecolor=None, s=6, rasterized=True, ax=axs[2])
texts = []
for j, row in annot_df_A549_Huh751.iterrows():
    texts.append(axs[2].text(row['residual_zscore_avg_A549'] + 0.25, row['residual_zscore_avg_Huh751'], row['Gene Symbol'],
                             fontsize=9, color='black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
sns.despine()
plt.suptitle('Secondary Activation Screens Calu-3 vs A549 vs Huh-7.5.1')
plt.subplots_adjust(wspace=0.5)
gpp.savefig('../../../Figures/Scatterplots/SecondaryActivationScreenComparison_Calu3A549Huh751.pdf', dpi=300)
```
## Comparison with Caco-2 CRISPRa
```
# Read in processed z-scores for all secondary activation screens
Caco2_Secondary_Act = pd.read_excel('../../../Data/Processed/GEO_submission_v2/SecondaryLibrary/Caco2_SecondaryLibraryAct_Goujon.xlsx')
# A549_Secondary_Act = pd.read_excel('../../../Data/Processed/GEO_submission_v2/SecondaryLibrary/A549_SecondaryLibraryAct_Goujon.xlsx')
# Huh751_Secondary_Act = pd.read_excel('../../../Data/Processed/GEO_submission_v2/SecondaryLibrary/Huh751_SecondaryLibraryAct_Goujon.xlsx')
Caco2_Secondary_Act = Caco2_Secondary_Act.rename(columns = {'residual_zscore_avg':'residual_zscore_avg_Caco2',
'Rank_residual_zscore_avg':'Rank_residual_zscore_avg_Caco2'})
SecondaryAct_2 = pd.merge(Caco2_Secondary_Act, SecondaryAct, on = 'Gene Symbol', how = 'inner').dropna()
top_ranked_Caco2 = SecondaryAct_2.nlargest(5, 'residual_zscore_avg_Caco2')
bottom_ranked_Caco2 = SecondaryAct_2.nsmallest(5, 'residual_zscore_avg_Caco2')
top_ranked_A549 = SecondaryAct_2.nlargest(5, 'residual_zscore_avg_A549')
bottom_ranked_A549 = SecondaryAct_2.nsmallest(5, 'residual_zscore_avg_A549')
top_ranked_Huh751 = SecondaryAct_2.nlargest(5, 'residual_zscore_avg_Huh751')
bottom_ranked_Huh751 = SecondaryAct_2.nsmallest(5, 'residual_zscore_avg_Huh751')
Caco2_ranked = pd.concat([top_ranked_Caco2, bottom_ranked_Caco2])
A549_ranked = pd.concat([top_ranked_A549, bottom_ranked_A549])
Huh751_ranked = pd.concat([top_ranked_Huh751, bottom_ranked_Huh751])
fig, axs = plt.subplots(ncols=3, figsize=(12, 4))
# A549 vs Caco-2
gpp.point_densityplot(data = SecondaryAct_2, x = 'residual_zscore_avg_Caco2', y = 'residual_zscore_avg_A549', s=6, ax = axs[0])
ax = gpp.add_correlation(SecondaryAct_2.dropna(), 'residual_zscore_avg_Caco2', 'residual_zscore_avg_A549', loc = 'upper left', ax = axs[0])
axs[0].set_xlabel('Caco-2 (Mean z-score)')
axs[0].set_ylabel('A549 (Mean z-score)')
# Annotate
annot_df_Caco2_A549 = pd.concat([Caco2_ranked, A549_ranked]).drop_duplicates()
sns.scatterplot(data=annot_df_Caco2_A549, x='residual_zscore_avg_Caco2', y='residual_zscore_avg_A549', color = sns.color_palette('Set2')[0], edgecolor=None, s=6, rasterized=True, ax=axs[0])
texts = []
for j, row in annot_df_Caco2_A549.iterrows():
    texts.append(axs[0].text(row['residual_zscore_avg_Caco2'] + 0.25, row['residual_zscore_avg_A549'], row['Gene Symbol'],
                             fontsize=9, color='black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
# Huh-7.5.1 vs Caco-2
gpp.point_densityplot(data = SecondaryAct_2, x = 'residual_zscore_avg_Caco2', y = 'residual_zscore_avg_Huh751', s=6, ax = axs[1])
ax = gpp.add_correlation(SecondaryAct_2.dropna(), 'residual_zscore_avg_Caco2', 'residual_zscore_avg_Huh751', loc = 'upper left', ax = axs[1])
axs[1].set_xlabel('Caco-2 (Mean z-score)')
axs[1].set_ylabel('Huh-7.5.1 (Mean z-score)')
# Annotate
annot_df_Caco2_Huh751 = pd.concat([Caco2_ranked, Huh751_ranked]).drop_duplicates()
sns.scatterplot(data=annot_df_Caco2_Huh751, x='residual_zscore_avg_Caco2', y='residual_zscore_avg_Huh751', color = sns.color_palette('Set2')[0], edgecolor=None, s=6, rasterized=True, ax=axs[1])
texts = []
for j, row in annot_df_Caco2_Huh751.iterrows():
    texts.append(axs[1].text(row['residual_zscore_avg_Caco2'] + 0.25, row['residual_zscore_avg_Huh751'], row['Gene Symbol'],
                             fontsize=9, color='black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
# Calu-3 vs Caco-2
gpp.point_densityplot(data = SecondaryAct_2, x = 'residual_zscore_avg_Caco2', y = 'residual_zscore_avg_Calu3', s=6, ax = axs[2])
ax = gpp.add_correlation(SecondaryAct_2.dropna(), 'residual_zscore_avg_Caco2', 'residual_zscore_avg_Calu3', loc = 'upper left', ax = axs[2])
axs[2].set_xlabel('Caco-2 (Mean z-score)')
axs[2].set_ylabel('Calu-3 (Mean z-score)')
# Annotate
annot_df_Caco2_Calu3 = pd.concat([Caco2_ranked, Calu3_ranked]).drop_duplicates().dropna()
sns.scatterplot(data=annot_df_Caco2_Calu3, x='residual_zscore_avg_Caco2', y='residual_zscore_avg_Calu3', color = sns.color_palette('Set2')[0], edgecolor=None, s=6, rasterized=True, ax=axs[2])
texts = []
for j, row in annot_df_Caco2_Calu3.iterrows():
    texts.append(axs[2].text(row['residual_zscore_avg_Caco2'] + 0.25, row['residual_zscore_avg_Calu3'], row['Gene Symbol'],
                             fontsize=9, color='black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
sns.despine()
plt.suptitle('Secondary Activation Screens Caco-2 vs A549 vs Huh-7.5.1')
plt.subplots_adjust(wspace=0.5)
gpp.savefig('../../../Figures/Scatterplots/SecondaryActivationScreenComparison_Caco2A549Huh751Calu3.pdf', dpi=300)
```
## Heatmap
```
sensitization_heatmap_df = select_top_ranks(SecondaryAct_2)
# sensitization_heatmap_df=sensitization_heatmap_df.sort_values(by=list(sensitization_heatmap_df.columns[1:]), ascending=False)
rank_cols = [col for col in sensitization_heatmap_df.columns if 'Rank' in col]
sensitization_heatmap_df = sensitization_heatmap_df.drop(rank_cols, axis=1)
Secondary_Act_sensitization_heatmap = sensitization_heatmap_df.copy().set_index('Gene Symbol').dropna(axis = 0)
fig, ax = plt.subplots(figsize=(6, 6))
xlabels = ['Caco-2', 'Calu-3', 'A549', 'Huh-7.5.1']
g = sns.heatmap(Secondary_Act_sensitization_heatmap, cmap = gpp.diverging_cmap(), xticklabels=xlabels, yticklabels=True, square=True, vmin=-5, vmax=5, cbar_kws={'shrink':0.5, 'extend':'both', 'label':'Mean z-score'},
center = 0)
plt.title('Secondary Activation Sensitization Hits')
gpp.savefig('../../../Figures/Heatmaps/Secondary_Activation_sensitization_heatmap.pdf', dpi=300, bbox_inches = 'tight')
fig.savefig('../../../Figures/Heatmaps/Secondary_Activation_sensitization_heatmap.png', dpi=300, bbox_inches = 'tight')
resistance_heatmap_df = select_top_ranks(SecondaryAct_2, largest=False)
# resistance_heatmap_df=resistance_heatmap_df.sort_values(by=list(resistance_heatmap_df.columns[1:]), ascending=False)
rank_cols = [col for col in resistance_heatmap_df.columns if 'Rank' in col]
resistance_heatmap_df = resistance_heatmap_df.drop(rank_cols, axis=1)
Secondary_Act_resistance_heatmap = resistance_heatmap_df.copy().set_index('Gene Symbol').dropna(axis = 0)
fig, ax = plt.subplots(figsize=(6, 6))
xlabels = ['Caco-2', 'Calu-3', 'A549', 'Huh-7.5.1']
g = sns.heatmap(Secondary_Act_resistance_heatmap, cmap = gpp.diverging_cmap(), xticklabels=xlabels, yticklabels=True, square=True, vmin = -5, vmax = 5, cbar_kws={'shrink':0.5, 'label':'Mean z-score', 'extend':'both'},
center = 0)
plt.title('Secondary Activation Resistance Hits')
gpp.savefig('../../../Figures/Heatmaps/Secondary_Activation_resistance_heatmap.pdf', dpi=300, bbox_inches = 'tight')
fig.savefig('../../../Figures/Heatmaps/Secondary_Activation_resistance_heatmap.png', dpi=300, bbox_inches = 'tight')
```
# Global sensitivity
The goal of this notebook is to showcase 2 functions, one that implements sensitivity based on the unbounded differential privacy (DP) definition, and another that implements sensitivity based on a bounded definition.
I am not aiming for efficiency but for a deeper understanding of how to implement sensitivity empirically from scratch.
Before continuing, some clarifications are needed:
In bounded DP, the neighboring dataset is built by changing records of the dataset (not adding or removing records). E.g. x = {1, 2, 3} (|x|=3) with universe X = {1, 2, 3, 4}; a neighboring dataset in this case would be x' = {1, 2, 4} (|x'| = 3). They have the same cardinality.
In the unbounded DP definition, the neighboring dataset is built by adding or removing records. E.g. x = {1, 2, 3} (|x| = 3) with universe X = {1, 2, 3, 4}; a neighboring dataset in this case could be x' = {1, 2}, {1, 3}, or {1, 2, 3, 4} (|x'| = 2, 2, or 4, but not 3). Their cardinalities differ by 1.
The datasets considered are multisets, and their cardinality is the sum of the multiplicities of each value they contain.
The neighboring datasets are also multisets and are considered neighbors if the Hamming distance to the original dataset is k. This parameter is set by the data scientist, but the original definition of DP uses k=1 (as in the previous examples).
The Hamming distance can be seen as the cardinality of the symmetric difference between 2 datasets. With this in mind, the definition of DP can be written as:
P(M(x) = O) <= P(M(x') = O) * exp(epsilon * |x ⊖ x'|)
Where M is a randomized computation, x a dataset, x' its neighbor at Hamming distance k = |x ⊖ x'|, and O an output of M given x or x'.
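As a quick check of this definition, the Hamming distance as a multiset symmetric difference can be computed with `collections.Counter` (a small sketch of my own for illustration, not one of the notebook's functions):

```
from collections import Counter

def hamming_distance(x, y):
    # |x (symmetric difference) y| over multisets:
    # records in x but not in y, plus records in y but not in x (with multiplicity)
    cx, cy = Counter(x), Counter(y)
    return sum(((cx - cy) + (cy - cx)).values())

print(hamming_distance([1, 2, 3], [1, 2]))     # unbounded neighbor (one removal): 1
print(hamming_distance([1, 2, 3], [1, 2, 4]))  # bounded neighbor (one change): 2
```

Note that under this symmetric-difference metric, a bounded neighbor (one changed record) sits at distance 2, consistent with the observation that one change can be simulated by one removal plus one addition.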
k is usually 1, to the best of my knowledge, because one aims to protect one individual in the dataset, and by definition, each individual within would therefore be protected. By making the probabilities of obtaining an output O similar between two datasets that differ in only 1 record, one successfully cloaks the real value of O and therefore does not fully update the knowledge of the adversary, which, if done properly, would still be 50/50 between which dataset was actually the real one.
Looking at the definition of DP, the higher your k, the more you increase exp(.), which means that the difference between the probabilities of obtaining those outputs can be larger, and thus your privacy would not be equally preserved (although sensitivities increase with k, as you can see in the plots).
I have not come across an intuition for having a larger Hamming distance (please feel free to [connect](https://www.linkedin.com/in/gonzalo-munilla/) if you have an explanation). Looking at the previous paragraph, it would seem as if having a Hamming distance of k=2 aims to protect pairs of records (individuals), i.e. it accounts for the fact that there are dependencies between records in the dataset that need to be considered, as they increase the probability ratio (undesirable). It could make sense if there are binary relationships between records, e.g. pairs of siblings, or n-ary relationships for k=n, e.g. in a social network.
I am however far from certain of my hypothesis for the intuition behind a larger hamming distance.
**Contributions**:
1. I programmed a function that calculates the bounded and unbounded sensitivity of a dataset formed by numeric columns. Additionally, it allows you to vary the Hamming distance. Its empirical nature will not allow it to scale well, i.e., the function creates all possible neighboring datasets, with k fewer or more records (for unbounded DP) and with the same number of records but k changed values (bounded DP), where k is the Hamming distance. The function calculates the query result for each possible neighboring dataset, then calculates all possible L1 norms, and then chooses the maximum. That is the sensitivity defined in DP.
2. The sensitivity can be calculated for most of the basic queries: mean, median, percentile, sum, var, std, count*.
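To make the brute-force idea concrete, here is a minimal sketch (my own simplified version, not the notebook's actual function) that enumerates all unbounded neighbors at Hamming distance 1 around a fixed dataset and takes the maximum L1 change of the query. Strictly, this is the sensitivity around one dataset (local); the full approach additionally maximizes over all candidate release datasets drawn from the universe:

```
import numpy as np

def unbounded_sensitivity_k1(f, x, universe):
    # Neighbors at Hamming distance 1: remove one record, or add one record from the universe
    neighbors = [x[:i] + x[i + 1:] for i in range(len(x))]
    neighbors += [x + [v] for v in universe]
    # Sensitivity: the largest L1 change in the query result over all neighbors
    return max(abs(f(x) - f(n)) for n in neighbors)

x = [1, 2, 3, 10]
print(unbounded_sensitivity_k1(np.sum, x, universe=[1, 2, 3, 10]))   # 10: adding/removing the record 10 moves the sum most
print(unbounded_sensitivity_k1(np.mean, x, universe=[1, 2, 3, 10]))  # 2.0: removing 10 moves the mean from 4.0 to 2.0
```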
I have differentiated between 2 cases:
1. (a) The universe of possible values is based on a dataset, and the size of the released dataset is known before release, i.e. the cardinality of the universe subset. This scenario could be e.g. releasing a study based on some students out of all the students at a school. (Note: the dataset to be released cannot be larger than the dataset used for the universe, only equal or smaller).
2. (b) The universe of possible values is based on a range of values, and the size of the released dataset is known before release. A range is used because the exact values that could potentially be released are not known in advance, thus a range where those values could fall into must be used to perform the sensitivity calculation. This scenario could be e.g. releasing real-time DP results from an IoT device.
We assume that the size of the released dataset is known, i.e. we know there are n records being queried or from which a synopsis (statistical summary) will be made. This is safe to assume, as the number of users or IoT devices in an application can be designed to be known*.
For simplicity, from now on, I will call the datasets D_universe_a and _b, D_release_a and _b, and D_neighbor for (a) and (b).
Note that this is somewhat different from the online (on) (or interactive) and the offline (off) (or non-interactive) definition that [C. Dwork](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) introduces in her work. These deal with not knowing or knowing the queries beforehand, respectively. But, we could have the case (a) and case (b) in both (on) or (off):
1. Scenario (a) + (on): API that allows external entities to query in a DP manner a subset of the school dataset you host internally (or its entirety).
2. Scenario (a) + (off): Release externally a DP synopsis (statistical summary) of a subset of the school dataset you host internally (or its entirety).
3. Scenario (b) + (on): API that allows external entities to query in a DP manner a dataset updated in real-time by IoT devices hosted internally or decentrally.
4. Scenario (b) + (off): Release externally a DP synopsis of a dataset updated in real-time by IoT devices hosted internally or decentrally.
For this notebook, we will consider Scenario (a) + (off) and (b) + (off).
Also note that (a) and (b) are also somewhat different from local DP (LDP) (DP applied on the device, usually randomized response) versus global DP (GDP) (a third party trusted with all the data; allows for other algorithms). This notebook is focused on GDP. So we have (a) + (off) + (GDP) and (b) + (off) + (GDP).
Main questions for clarification:
- How can (b) and (GDP) go together? The third party can host a server to process real-time data.
- Then, why does the third party not aggregate this real-time data and do (a) instead of (b)? It could, but because your dataset is ever-growing, you would need to compute sensitivities every time your dataset changes, which is in real time; that can be computationally inefficient.
- But still, you could do (a), right? You could, but you would have to collect data over a defined period and release a synopsis aggregating these data. Thus, your service would not be as close to real-time anymore as it would be with (b). But definitely, it is a question to investigate further.
- So what is the major benefit of (b)? You do not need to re-compute your sensitivities; the drawback is that if your domain of possible values is very large, then your noise will be larger. In (a) your universe might not contain such wide possible ranges, which benefits your accuracy. But you can also fine-tune your range in D_universe_b based on an older sample of D_release_b. **But definitely you would like to calculate your sensitivities in case (b) with an upper bound found theoretically, as finding it empirically is computationally expensive.** You could do the same for case (a), but there is the possibility of finding a much better result by calculating its exact global sensitivity.
- Mmm, and what if you do not know the full domain of your universe? That is indeed a good question. Well, then you will have to do some clipping to not get any surprises. E.g., if you defined your universe like D_universe_b = {'Age': [1, 2, ..., 99, 100]}, but you get a value in real time like [122](https://en.wikipedia.org/wiki/List_of_the_verified_oldest_people), then you should do top-coding, 122->100, so you can include its value. Outliers and DP do not go well together. You can protect them, but the cost would be too high (higher noise), and why would you do that? DP allows you to perform statistical queries; the mean or the sum would not be representative of the population if there are outliers in it. DP is used to learn from a population, not from outliers; if you would like to learn about outliers, then DP is not for you.
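The top-coding step mentioned above is just a clip to the assumed universe bounds (the Age bounds here are the hypothetical ones from the example):

```
import numpy as np

ages = np.array([23, 57, 122, 4])  # 122 falls outside the assumed universe [1, 100]
clipped = np.clip(ages, 1, 100)    # top-coding: 122 -> 100 (and bottom-coding to 1)
print(clipped.tolist())            # [23, 57, 100, 4]
```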
The main difference in this notebook between scenarios (a) and (b) (aside from the one mentioned) is programmatic: how you define the universe to input into the functions. The functions I created (for the sensitivities in unbounded and bounded DP) serve both scenarios. But in scenario (b), aside from the fact that you only have a range of values, to calculate the sensitivity you have to make as many replicates of each value of the universe as the size of the released dataset. Why? Because if you define your range like e.g. D_universe_b = {'Age': [1, 2, ..., 99, 100]} with |D_release| = 4, you could in real time get a D_release_b = {'Age': [100, 100, 100, 100]} or another like D_release_b = {'Age': [35, 35, 35, 35]}.
Something to also note is that the functions that calculate the sensitivities only need a universe and the size of the released dataset (together with the Hamming distance). They do not need the actual release dataset, although passing it could be a possibility.
Limitations:
1. The functions to calculate sensitivity do not scale well in terms of the size of your universe.
2. *The count query sensitivity should be 1 for unbounded and 2 for bounded DP. The former is clear because you just add or remove one record, increasing or decreasing the total count by one. However, in bounded DP, the change of one record might decrease the count of one value and increase the count of another, yielding a total difference of 2. These 2 cases are not accounted for; we solely count the number of elements in the array, which leads to a sensitivity of 1 in unbounded and of 0 in bounded DP. To empirically prove that bounded DP has a sensitivity of 2, more work needs to be done on how the query results are handled, which is a lot of extra workload for obtaining a solution that is already well known.*

\* If the number of users/IoT devices is desired to be protected, then one can take a large sample of records, but not all the records, and the cardinality considered would be the number of the sampled records. Thus an attacker would not know the actual number of users/IoT devices.
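The count limitation can be seen directly with hand-built neighbors: if the query is simply `len(...)`, bounded neighbors (same cardinality) always give a difference of 0, while counting per value (a histogram) reveals the well-known bounded sensitivity of 2. A small sketch of my own:

```
from collections import Counter

x = [1, 2, 3]
unbounded_neighbor = [1, 2]   # one record removed
bounded_neighbor = [1, 2, 4]  # one record changed, same cardinality

print(abs(len(x) - len(unbounded_neighbor)))  # 1: total-count difference under unbounded DP
print(abs(len(x) - len(bounded_neighbor)))    # 0: total counts cannot expose the bounded change

# Per-value (histogram) counts: value 3 loses one count and value 4 gains one -> L1 difference of 2
hx, hn = Counter(x), Counter(bounded_neighbor)
print(sum(abs(hx[k] - hn[k]) for k in set(hx) | set(hn)))  # 2
```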
## Datasets
```
# Visualization
%pylab inline
from IPython.display import display, Math, Latex
import matplotlib.pyplot as plt
# handling data
import csv
import json
import pandas as pd
# Math
from random import random
import scipy.stats as ss
import numpy as np
import itertools
from collections import Counter
```
### Datasets (a)
We have 2 datasets to test our function:
- (a_s) A small one I can use to benchmark our functions against a published paper, ["How much is enough? Choosing epsilon for Differential Privacy"](https://git.gnunet.org/bibliography.git/plain/docs/Choosing-%CE%B5-2011Lee.pdf), which was implemented in my previous [blog post](https://github.com/gonzalo-munillag/Blog/tree/main/Extant_Papers_Implementations/A_method_to_choose_epsilon).
- (a_l) A large one to test the functions further.
The use case will be records from the students in a school.
###### D_a_small (Dataset for scenario a - small)
```
# We define the actual dataset (conforming the universe)
D_a_small_universe_dict = {'name': ['Chris', 'Kelly', 'Pat', 'Terry'], 'school_year': [1, 2, 3, 4], 'absence_days': [1, 2, 3, 10]}
D_a_small_universe = pd.DataFrame(D_a_small_universe_dict)
D_a_small_universe
# We define the dataset that we will release
D_a_small_release = D_a_small_universe.drop([3], axis=0)
D_a_small_release
```
The adversary model adopted in the paper mentioned above is the worst-case scenario, and it is the one I adopt in this notebook: an attacker has infinite computation power, and because DP provides privacy against adversaries with arbitrary background knowledge, it is okay to assume that the adversary has full access to all the records (the adversary knows the whole universe, i.e. D_a_small_universe). But there is a dataset made from the universe without an individual (D_a_small_release), and the adversary does not know who is and who is not in it (this is the only thing the adversary does not know about the universe); the adversary does know that D_a_small_release contains people with a particular quality (the students who have not been on probation). With D_a_small_universe, the attacker will try to reconstruct the dataset he does not know (D_a_small_release) by employing queries on D_a_small_release without having direct access to it.
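The reconstruction idea can be sketched in a few lines: knowing the full universe and an exact (non-private) answer to a sum query, the attacker simply tests every leave-one-out subset. This is a toy illustration of the threat model, not code from the paper; `observed_sum` is the hypothetical exact answer the attacker obtained:

```
import pandas as pd

universe = pd.DataFrame({'name': ['Chris', 'Kelly', 'Pat', 'Terry'],
                         'absence_days': [1, 2, 3, 10]})
observed_sum = 6  # exact (noise-free) answer to sum('absence_days') on the unknown release dataset

# Test every candidate release dataset built by dropping one student
for i in range(len(universe)):
    candidate = universe.drop(index=i)
    if candidate['absence_days'].sum() == observed_sum:
        print('Excluded student:', universe.loc[i, 'name'])  # Terry
```

This is precisely why DP adds noise calibrated to the sensitivity: with an exact answer, a single query suffices to identify the missing individual.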
###### D_a_large (Dataset for scenario a - large)
The larger dataset as the universe is used in the notebook to test the functions with a Hamming distance >1. Furthermore, note that there are duplicated values both in the universe and the release, which does not mean they are the same records.
```
# We define the actual dataset (conforming the universe)
D_a_large_universe_dict = {'name': ['Chris', 'Kelly', 'Keny', 'Sherry', 'Jerry', 'Morty', "Beth", "Summer", "Squanchy", "Rick"], \
'school_year': [1, 2, 2, 2, 5, 5, 7, 8, 9, 9], 'absence_days': [1, 2, 3, 4, 5, 6, 7, 8, 15, 20]}
D_a_large_universe = pd.DataFrame(D_a_large_universe_dict)
D_a_large_universe
# We define the dataset that we will release
D_a_large_release = D_a_large_universe.iloc[:6,:]
D_a_large_release
```
### Datasets (b)
There will be only one dataset for scenario b, designed to test the functions with a Hamming distance > 1. Notice that I only define the cardinality (number of records) of the release dataset and not of the universe: we do not know every record that makes up the universe, only the unique values each data point may take.
The use case will be a vehicle sending the mood of the driver (1 - sad to 2 - happy) and how well they drive (1 - terrible to 3 - perfect).
Because these sensitivity calculations are combinatorial, I cannot use a large value range or a long release dataset. Then again, I programmed this notebook to understand sensitivities more deeply, and the functions work well for a Hamming distance of 1, which is the most common value.
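To make the scalability limitation concrete (a back-of-the-envelope check of mine, not a figure from the paper): the number of candidate release datasets alone grows as the binomial coefficient C(n, k), before even counting the neighboring datasets generated per candidate.

```
from math import comb

# Number of candidate release datasets of cardinality k
# drawn from a universe of n values
sizes = {(n, k): comb(n, k) for n, k in [(10, 6), (20, 10), (40, 20)]}
for (n, k), c in sizes.items():
    print('C({}, {}) = {}'.format(n, k, c))
```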
```
# We define the unique values of the universe and the cardinality of the release dataset
D_b_release_length = 4
D_b_universe = {'mood': [1, 2], 'driving_proficiency': [1, 2, 3]}
# Because we know the cardinality of the release dataset, then:
for key, values in D_b_universe.items():
D_b_universe[key] = values * D_b_release_length
D_b_universe
```
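A quick standalone check of what the list multiplication above produces (my own sketch): each unique value is replicated `D_b_release_length` times, so the combination-based sampling used later can select the same value for several records of the release.

```
from itertools import combinations

D_b_release_length = 4
mood_universe = [1, 2] * D_b_release_length  # as built in the cell above
print(mood_universe)

# Thanks to the replication, a release where all four drivers
# report mood 1 is reachable
assert (1, 1, 1, 1) in set(combinations(mood_universe, 4))
```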
## Functions
### Auxiliary function
##### Here we have all the queries we could make to the numerical data
```
# With this class, we make it easier to call the mean, median... functions by name
# REF: https://stackoverflow.com/questions/34794634/how-to-use-a-variable-as-function-name-in-python
# It is not clean to pass the percentile variable to every query method, but it is less verbose than having a function
# for each percentile. We could, however, limit the percentiles offered to 25 and 75.
class Query_class:
"""
A class used to represent a query. YOu instantiate an object that will perform a particlar query on an array
Attributes
----------
fDic - (dict) containing the possible queries the class can be transformed into
fActive - (function) it contins the function we created the class to have
Methods
-------
run_query - I will run the query for which we instantiated the class
The other methods implement the different possible queries
"""
def __init__(self, fCase):
# mapping: string --> variable = function name
fDic = {'mean':self._mean,
'median':self._median,
'count': self._count,
'sum': self._sum,
'std': self._std,
'var': self._var,
'percentile': self._percentile}
self.fActive = fDic[fCase]
# Calculate the mean of an array
def _mean(self, array, percentile):
return np.mean(array)
# Calculate the median of an array
def _median(self, array, percentile):
return np.median(array)
# Calculate the number of elements in the array
def _count(self, array, percentile):
return len(array)
# Calculate the sum of an array
def _sum(self, array, percentile):
return np.sum(array)
# Calculate the std of an array
def _std(self, array, percentile):
return np.std(array)
# Calculate the variance of an array
def _var(self, array, percentile):
return np.var(array)
def _percentile(self, array, percentile):
return np.percentile(array, percentile)
# It will run the given query
def run_query(self, array, percentile=50):
return self.fActive(array, percentile)
# Set of checks on the input values
def verify_sensitivity_inputs(universe_cardinality, universe_subset_cardinality, hamming_distance):
"""
INPUT:
universe - (df) contains all possible values of the dataset
universe_subset_cardinality - (df) cardinality of the universe subset
hamming_distance - (int) hamming distance between neighboring datasets
OUTPUT:
ValueError - (str) error message due to the value of the inputs
Description:
It performs multiple checks to verify the validity of the inputs for the calculation of senstitivity
"""
    # Check on universe cardinality (1).
# The cardinality of the subset of the universe cannot be larger than the universe
if universe_cardinality < universe_subset_cardinality:
raise ValueError("Your universe dataset cannot be smaller than your release dataset.")
# Checks on the validity of the chosen hamming_distance (3)
if hamming_distance >= (universe_subset_cardinality):
raise ValueError("Hamming distance chosen is larger than the cardinality of the release dataset.")
if (hamming_distance > np.abs(universe_cardinality - universe_subset_cardinality)):
raise ValueError("Hamming distance chosen is larger than the cardinality difference between the \
universe and the release dataset, i.e., \
there are not enough values in your universe to create such a large neighboring dataset (Re-sampling records).")
    # The hamming distance cannot be 0; otherwise the neighboring dataset equals the original dataset
if hamming_distance == 0:
raise ValueError("Hamming distance cannot be 0.")
# Used by unbounded_empirical_global_L1_sensitivity
def L1_norm_max(release_dataset_query_value, neighbor_datasets, query, percentile):
"""
INPUT:
release_dataset_query_value - (float) query value of a particular possible release dataset
neighbor_datasets - (list) contains the possible neighbors of the specific release dataset
query - (object) instance of class Query_class
percentile - (int) percentile value for the percentile query
OUTPUT:
    L1_norm_maximum - (float) maximum L1 norm calculated from the differences between the query results
    of the neighbor datasets and the specific release dataset
    Description:
    It calculates the maximum L1 norm between the query results of the neighbor datasets and the specific release dataset
"""
neighbor_dataset_query_values = []
for neighbor_dataset in neighbor_datasets:
neighbor_dataset_query_value = query.run_query(neighbor_dataset, percentile)
neighbor_dataset_query_values.append(neighbor_dataset_query_value)
# We select the maximum and minimum values of the queries, as the intermediate values will not
# yield a larger L1 norm (ultimately, we are interested in the maximum L1 norm)
neighbor_dataset_query_value_min, neighbor_dataset_query_value_max = \
min(neighbor_dataset_query_values), max(neighbor_dataset_query_values)
# We calculate the L1 norm for these two values and pick the maximum
L1_norm_i = np.abs(release_dataset_query_value - neighbor_dataset_query_value_min)
L1_norm_ii = np.abs(release_dataset_query_value - neighbor_dataset_query_value_max)
L1_norm_maximum = max(L1_norm_i, L1_norm_ii)
return L1_norm_maximum
def calculate_unbounded_sensitivities(universe, universe_subset_cardinality, columns, hamming_distance, unbounded_sensitivities):
"""
INPUT:
universe - (df or dict) contains all possible values of the dataset
universe_subset_cardinality - (int) contains the length of the subset chosen for the release dataset
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
hamming_distance - (int) hamming distance between neighboring datasets
unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
OUTPUT
unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
Description:
It calculates the sensitivities for a set of queries given a universe and a release dataset.
"""
# Calculate the sensitivity of different queries for the unbounded DP
query_type = 'mean'
mean_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'median'
median_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'count'
count_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'sum'
sum_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'std'
std_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'var'
var_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'percentile'
percentile = 25
percentile_25_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
percentile = 50
percentile_50_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
percentile = 75
percentile_75_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
percentile = 90
percentile_90_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
print('Unbounded sensitivities for mean', mean_unbounded_global_sensitivities)
print('Unbounded sensitivities for median', median_unbounded_global_sensitivities)
print('Unbounded sensitivities for count', count_unbounded_global_sensitivities)
print('Unbounded sensitivities for sum', sum_unbounded_global_sensitivities)
print('Unbounded sensitivities for std', std_unbounded_global_sensitivities)
print('Unbounded sensitivities for var', var_unbounded_global_sensitivities)
print('Unbounded sensitivities for percentile 25', percentile_25_unbounded_global_sensitivities)
print('Unbounded sensitivities for percentile 50', percentile_50_unbounded_global_sensitivities)
print('Unbounded sensitivities for percentile 75', percentile_75_unbounded_global_sensitivities)
print('Unbounded sensitivities for percentile 90', percentile_90_unbounded_global_sensitivities)
unbounded_sensitivities = build_sensitivity_dict(unbounded_sensitivities, hamming_distance,\
mean_unbounded_global_sensitivities, median_unbounded_global_sensitivities, count_unbounded_global_sensitivities, \
sum_unbounded_global_sensitivities, std_unbounded_global_sensitivities, var_unbounded_global_sensitivities, \
percentile_25_unbounded_global_sensitivities, percentile_50_unbounded_global_sensitivities, \
percentile_75_unbounded_global_sensitivities, percentile_90_unbounded_global_sensitivities)
return unbounded_sensitivities
def calculate_bounded_sensitivities(universe, universe_subset_cardinality, columns, hamming_distance, bounded_sensitivities):
"""
INPUT:
universe - (df or dict) contains all possible values of the dataset
universe_subset_cardinality - (int) contains the length of the subset chosen for the release dataset
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
hamming_distance - (int) hamming distance between neighboring datasets
    bounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
OUTPUT
bounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
Description:
It calculates the sensitivities for a set of queries given a universe and a release dataset.
"""
    # Calculate the sensitivity of different queries for bounded DP
query_type = 'mean'
mean_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'median'
median_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'count'
count_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'sum'
sum_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'std'
std_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'var'
var_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'percentile'
percentile = 25
percentile_25_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
percentile = 50
percentile_50_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
percentile = 75
percentile_75_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
percentile = 90
percentile_90_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
print('Bounded sensitivities for mean', mean_bounded_global_sensitivities)
print('Bounded sensitivities for median', median_bounded_global_sensitivities)
print('Bounded sensitivities for count', count_bounded_global_sensitivities)
print('Bounded sensitivities for sum', sum_bounded_global_sensitivities)
print('Bounded sensitivities for std', std_bounded_global_sensitivities)
print('Bounded sensitivities for var', var_bounded_global_sensitivities)
print('Bounded sensitivities for percentile 25', percentile_25_bounded_global_sensitivities)
print('Bounded sensitivities for percentile 50', percentile_50_bounded_global_sensitivities)
print('Bounded sensitivities for percentile 75', percentile_75_bounded_global_sensitivities)
print('Bounded sensitivities for percentile 90', percentile_90_bounded_global_sensitivities)
bounded_sensitivities = build_sensitivity_dict(bounded_sensitivities, hamming_distance,\
mean_bounded_global_sensitivities, median_bounded_global_sensitivities, count_bounded_global_sensitivities, \
sum_bounded_global_sensitivities, std_bounded_global_sensitivities, var_bounded_global_sensitivities, \
percentile_25_bounded_global_sensitivities, percentile_50_bounded_global_sensitivities, \
percentile_75_bounded_global_sensitivities, percentile_90_bounded_global_sensitivities)
return bounded_sensitivities
# We save the values in a dictionary
def build_sensitivity_dict(unbounded_sensitivities, hamming_distance, mean_sensitivity, median_sensitivity, count_sensitivity, _sum_sensitivity, _std_sensitivity, _var_sensitivity, percentile_25_sensitivity, percentile_50_sensitivity, percentile_75_sensitivity, percentile_90_sensitivity):
"""
INPUT
unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
hamming_distance - (int) hamming distance of the neighboring datasets
mean_sensitivity - (float) sensitivity of the mean query
    median_sensitivity - (float) sensitivity of the median query
    count_sensitivity - (float) sensitivity of the count query
    _sum_sensitivity - (float) sensitivity of the sum query
    _std_sensitivity - (float) sensitivity of the std query
    _var_sensitivity - (float) sensitivity of the var query
    percentile_25_sensitivity - (float) sensitivity of the percentile 25 query
    percentile_50_sensitivity - (float) sensitivity of the percentile 50 query
    percentile_75_sensitivity - (float) sensitivity of the percentile 75 query
percentile_90_sensitivity - (float) sensitivity of the percentile 90 query
OUTPUT
unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
"""
unbounded_sensitivities[hamming_distance] = {}
unbounded_sensitivities[hamming_distance]['mean'] = mean_sensitivity
unbounded_sensitivities[hamming_distance]['median'] = median_sensitivity
unbounded_sensitivities[hamming_distance]['count'] = count_sensitivity
unbounded_sensitivities[hamming_distance]['sum'] = _sum_sensitivity
unbounded_sensitivities[hamming_distance]['std'] = _std_sensitivity
unbounded_sensitivities[hamming_distance]['var'] = _var_sensitivity
unbounded_sensitivities[hamming_distance]['percentile_25'] = percentile_25_sensitivity
unbounded_sensitivities[hamming_distance]['percentile_50'] = percentile_50_sensitivity
unbounded_sensitivities[hamming_distance]['percentile_75'] = percentile_75_sensitivity
unbounded_sensitivities[hamming_distance]['percentile_90'] = percentile_90_sensitivity
return unbounded_sensitivities
```
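For readers skimming Query_class above, the pattern is plain string-to-callable dispatch; a minimal self-contained analogue (my own sketch, using the stdlib statistics module instead of NumPy) behaves the same way:

```
from statistics import mean, median, pstdev

# Minimal string-to-function dispatch, mirroring Query_class.fDic
queries = {
    'mean': mean,
    'median': median,
    'count': len,
    'sum': sum,
    'std': pstdev,  # population std, matching np.std's default
}

data = [1, 2, 3, 10]
print(queries['mean'](data), queries['median'](data), queries['count'](data))
```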
## Main Functions
```
%%latex
\begin{align}
\ell_{1, \mbox{sensitivity}}: \Delta f=\max_{\substack{
{x, y \in \mathbb{N}^{(\mathcal{X})}} \\
\|x-y\|_{1} = h \\
||x|-|y|| = h
}} \|f(x)-f(y)\|_{1}
\end{align}
```
### Unbounded Sensitivity
```
def unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile=50):
"""
INPUT:
universe - (df) contains all possible values of the dataset
universe_subset_cardinality - (int) contains the length of the subset chosen for the release dataset
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
query_type - (str) contain the category declaring the type of query to be later on executed
hamming_distance - (int) hamming distance between neighboring datasets
percentile - (int) percentile value for the percentile query
OUTPUT:
unbounded_global_sensitivity - (float) the unbounded global sensitivity of the input universe
    Description:
    It calculates the global sensitivity of an array based on knowledge of the entire universe of
    the dataset and the query_type.
"""
    # We initialize the type of query for which we would like to calculate the sensitivity
    query = Query_class(query_type)
    # We will store the sensitivity of each column of the universe dataset in a dictionary
unbounded_global_sensitivity_per_colum = {}
for column in columns:
# Check if the values for the hamming distance and universe sizes comply with the basic constraints
verify_sensitivity_inputs(len(universe[column]), universe_subset_cardinality, hamming_distance)
# 1) RELEASE DATASET
# We calculate all the possible release datasets formed by the combination of values sampled from the universe
release_datasets = itertools.combinations(universe[column], universe_subset_cardinality)
release_datasets = list(release_datasets)
# 2) |NEIGHBORING DATASET| < |RELEASE DATASET| //// cardinalities
# The neighboring datasets are subsets of a smaller dimension of the possible release datasets (smaller by the hamming_distance)
# The neighboring release datasets are used to calculate the max sensitivity, stemming from the DP definition
neighbor_with_less_records_datasets = []
for release_dataset in release_datasets:
            # This yields the smaller possible neighboring datasets
neighbor_with_less_records_dataset = itertools.combinations(release_dataset, \
universe_subset_cardinality - hamming_distance)
neighbor_with_less_records_dataset = list(neighbor_with_less_records_dataset)
neighbor_with_less_records_datasets.append(neighbor_with_less_records_dataset)
# 3) |NEIGHBORING DATASET| > |RELEASE DATASET| //// cardinalities
# similar process but adding records
neighbor_with_more_records_datasets = []
for release_dataset in release_datasets:
            # We obtain combinations of values from the universe; these will be appended to the release datasets.
            # The size of each combination equals the hamming distance, as the neighboring dataset will be that much larger.
            # However, if your universe is a dataset and not just a range of values, the neighboring
            # dataset could contain the same record twice, which is NOT desirable (1 person appearing twice).
            # Therefore, the values must be sampled from the difference (keeping duplicates) between the universe and the release dataset
            # REF: https://www.geeksforgeeks.org/python-difference-of-two-lists-including-duplicates/
symmetric_difference = list((Counter(universe[column]) - Counter(release_dataset)).elements())
neighbor_possible_value_combinations = itertools.combinations(symmetric_difference, hamming_distance)
neighbor_possible_value_combinations = list(neighbor_possible_value_combinations)
temp_neighbor_with_more_records_datasets = []
for neighbor_possible_value_combination in neighbor_possible_value_combinations:
# We create neighboring datasets by concatenating the neighbor_possible_value_combination with the release dataset
neighbor_with_more_records_dataset = list(release_dataset + neighbor_possible_value_combination)
temp_neighbor_with_more_records_datasets.append(neighbor_with_more_records_dataset)
# We append in this manner to cluster the neighboring datasets with their respective release dataset
neighbor_with_more_records_datasets.append(temp_neighbor_with_more_records_datasets)
        # 4) For each possible release dataset, there is a set of neighboring datasets
        # We will iterate through each possible release dataset and calculate the L1 norm with
        # each of its respective neighboring datasets
L1_norms = []
for i, release_dataset in enumerate(release_datasets):
release_dataset_query_value = query.run_query(release_dataset, percentile)
L1_norm = L1_norm_max(release_dataset_query_value, neighbor_with_less_records_datasets[i], query, percentile)
L1_norms.append(L1_norm)
L1_norm = L1_norm_max(release_dataset_query_value, neighbor_with_more_records_datasets[i], query, percentile)
L1_norms.append(L1_norm)
# We pick the maximum out of all the maximum L1_norms calculated from each possible release dataset
unbounded_global_sensitivity_per_colum[column] = max(L1_norms)
return unbounded_global_sensitivity_per_colum
%%latex
\begin{align}
\ell_{1, \mbox{sensitivity}}: \Delta f=\max_{\substack{
{x, y \in \mathbb{N}^{(\mathcal{X})}} \\
\|x-y\|_{1} = h \\
||x|-|y|| = 0
}} \|f(x)-f(y)\|_{1}
\end{align}
```
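As a sanity check on the unbounded definition (a self-contained brute force of mine over a toy universe, covering only the record-deletion side of the definition for brevity): for the count query, deleting h records always changes the count by exactly h, so the sensitivity should equal the hamming distance.

```
from itertools import combinations

def unbounded_sensitivity(universe, k, h, query):
    # Brute force over every release dataset of cardinality k and every
    # neighbor obtained by deleting h of its records
    best = 0
    for release in combinations(universe, k):
        q_release = query(release)
        for neighbor in combinations(release, k - h):
            best = max(best, abs(q_release - query(neighbor)))
    return best

universe = [1, 2, 2, 5, 7]
# For the count query, the sensitivity is exactly the hamming distance
print(unbounded_sensitivity(universe, 3, 1, len),
      unbounded_sensitivity(universe, 3, 2, len))
```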
### Bounded Sensitivity
```
def bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile=50):
"""
INPUT:
universe - (df) contains all possible values of the dataset
universe_subset_cardinality - (int) contains the length of the subset chosen for the release dataset
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
query_type - (str) contain the category declaring the type of query to be later on executed
hamming_distance - (int) hamming distance between neighboring datasets
percentile - (int) percentile value for the percentile query
OUTPUT:
bounded_global_sensitivity - (float) the bounded global sensitivity of the input universe
    Description:
    It calculates the global sensitivity of an array based on knowledge of the entire universe of
    the dataset and the query_type.
"""
    # We initialize the type of query for which we would like to calculate the sensitivity
    query = Query_class(query_type)
    # We will store the sensitivity of each column of the universe dataset in a dictionary
bounded_global_sensitivity_per_column = {}
for column in columns:
# Check if the values for the hamming distance and universe sizes comply with the basic constraints
verify_sensitivity_inputs(len(universe[column]), universe_subset_cardinality, hamming_distance)
        # We calculate all the possible release datasets.
        # First we draw combinations from the universe; the size of these combinations is not the release cardinality
        # but the release cardinality minus the hamming_distance
release_i_datasets = itertools.combinations(universe[column], universe_subset_cardinality - hamming_distance)
release_i_datasets = list(release_i_datasets)
        # It will contain groups of neighboring datasets; the L1 norm is calculated within each group and the maximum chosen.
        # Datasets from different groups are not necessarily neighbors, thus we keep them in separate groups
neighbor_datasets = []
for release_i_dataset in release_i_datasets:
# second we calculate the combinations of the items in the universe that are not in the release dataset
# the size of a combination is equal to the hamming distance
symmetric_difference = list((Counter(universe[column]) - Counter(release_i_dataset)).elements())
release_ii_datasets = itertools.combinations(symmetric_difference, hamming_distance)
release_ii_datasets = list(release_ii_datasets)
# We create neighboring datasets by concatenating i with ii
temp_neighbors = []
for release_ii_dataset in release_ii_datasets:
temp_neighbor = list(release_i_dataset + release_ii_dataset)
temp_neighbors.append(temp_neighbor)
neighbor_datasets.append(temp_neighbors)
# We calculate the L1_norm for the different combinations with the aim to find the max
# We can loop in this manner because we are obtaining the absolute values
L1_norms = []
for m in range(0, len(neighbor_datasets)):
for i in range(0, len(neighbor_datasets[m])-1):
for j in range(i+1, len(neighbor_datasets[m])):
L1_norm = np.abs(query.run_query(neighbor_datasets[m][i], percentile) - query.run_query(neighbor_datasets[m][j], percentile))
L1_norms.append(L1_norm)
bounded_global_sensitivity_per_column[column] = max(L1_norms)
return bounded_global_sensitivity_per_column
```
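To contrast with the unbounded case (again a standalone sketch of mine, mirroring the neighbor construction above): bounded neighbors always share the same cardinality, so the count query should have sensitivity 0.

```
from itertools import combinations
from collections import Counter

def bounded_count_sensitivity(universe, k, h):
    # Neighbors share k - h records and differ in h records drawn
    # (without re-sampling) from the remaining universe values
    best = 0
    for core in combinations(universe, k - h):
        rest = list((Counter(universe) - Counter(core)).elements())
        neighbors = [core + extra for extra in combinations(rest, h)]
        for i in range(len(neighbors)):
            for j in range(i + 1, len(neighbors)):
                best = max(best, abs(len(neighbors[i]) - len(neighbors[j])))
    return best

# Cardinality never changes between bounded neighbors, so count leaks nothing
print(bounded_count_sensitivity([1, 2, 2, 5, 7], 3, 1))  # prints 0
```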
## MAIN - testing the functions for scenario a
### Unbounded sensitivity - Scenario a
Let us begin with the small dataset and a Hamming distance of 1, the only value allowed given the cardinalities of the release and universe datasets.
```
D_a_small_release
D_a_small_universe
np.percentile((1, 2), 90)
columns = ['school_year', 'absence_days']
hamming_distance = 1
unbounded_sensitivities = {}
calculate_unbounded_sensitivities(D_a_small_universe, D_a_small_release.shape[0], columns, hamming_distance, unbounded_sensitivities)
;
```
My function works as intended for the benchmark dataset. The results for mean and median are the same as in the paper. The paper does not deal with other types of queries. (We also get the percentile 50 to cross-check with the median)
Let us continue with the large dataset and Hamming distances from 1 to 4. We will also save the sensitivity values in a dictionary keyed by Hamming distance.
```
D_a_large_universe
D_a_large_release
columns = ['school_year', 'absence_days']
hamming_distances = [1, 2, 3, 4]
unbounded_sensitivities = {}
for hamming_distance in hamming_distances:
print('Hamming distance = ', hamming_distance)
unbounded_sensitivities = calculate_unbounded_sensitivities(D_a_large_universe, D_a_large_release.shape[0], columns, hamming_distance, unbounded_sensitivities)
```
### Visualization
```
plt.figure(figsize=(15, 7))
query_types = ['mean', 'median', 'count', 'sum', 'std', 'var', 'percentile_25', 'percentile_50', 'percentile_75', 'percentile_90']
x_values = list(unbounded_sensitivities.keys())
for index_column, column in enumerate(columns):
    # Start the subplot for this column
    plt.subplot(1, len(columns), index_column + 1)
    query_type_legend_handles = []
    for query_type in query_types:
        y_values = [unbounded_sensitivities[hamming_distance][query_type][column] \
                    for hamming_distance in unbounded_sensitivities.keys()]
        # plot the sensitivities
        legend_handle, = plt.plot(x_values, y_values, label=query_type)
        query_type_legend_handles.append(legend_handle)
    # Legend below each subplot
    plt.legend(handles=query_type_legend_handles, bbox_to_anchor=(0., -0.2, 1., .102), \
               ncol=5, mode="expand", borderaxespad=0.)
    # axis labels and titles
    plt.xlabel('Hamming distance')
    plt.ylabel('Sensitivity')
    plt.title('{}) Universe Domain {} = {}'.format(index_column + 1, column, D_a_large_universe[column].values))
plt.suptitle('Sensitivities based on unbounded DP of different queries for different domains for different hamming distances')
plt.show()
;
```
### Bounded sensitivity - Scenario a
Let us begin with the small dataset and a Hamming distance of 1, the only value allowed given the cardinalities of the release and universe datasets.
```
D_a_small_release
D_a_small_universe
np.percentile((1, 2), 90)
columns = ['school_year', 'absence_days']
hamming_distance = 1
bounded_sensitivities = {}
calculate_bounded_sensitivities(D_a_small_universe, D_a_small_release.shape[0], columns, hamming_distance, bounded_sensitivities)
;
```
My function works as intended for the benchmark dataset. The results for mean and median are the same as in the paper. The paper does not deal with other types of queries. (We also get the percentile 50 to cross-check with the median)
Let us continue with the large dataset and Hamming distances from 1 to 4. We will also save the sensitivity values in a dictionary keyed by Hamming distance.
```
D_a_large_universe
D_a_large_release
columns = ['school_year', 'absence_days']
hamming_distances = [1, 2, 3, 4]
bounded_sensitivities = {}
for hamming_distance in hamming_distances:
print('Hamming distance = ', hamming_distance)
bounded_sensitivities = calculate_bounded_sensitivities(D_a_large_universe, D_a_large_release.shape[0], columns, hamming_distance, bounded_sensitivities)
```
### Visualization
```
plt.figure(figsize=(15, 7))
query_types = ['mean', 'median', 'count', 'sum', 'std', 'var', 'percentile_25', 'percentile_50', 'percentile_75', 'percentile_90']
x_values = list(bounded_sensitivities.keys())
for index_column, column in enumerate(columns):
    # Start the subplot for this column
    plt.subplot(1, len(columns), index_column + 1)
    query_type_legend_handles = []
    for query_type in query_types:
        y_values = [bounded_sensitivities[hamming_distance][query_type][column] \
                    for hamming_distance in bounded_sensitivities.keys()]
        # plot the sensitivities
        legend_handle, = plt.plot(x_values, y_values, label=query_type)
        query_type_legend_handles.append(legend_handle)
    # Legend below each subplot
    plt.legend(handles=query_type_legend_handles, bbox_to_anchor=(0., -0.2, 1., .102), \
               ncol=5, mode="expand", borderaxespad=0.)
    # axis labels and titles
    plt.xlabel('Hamming distance')
    plt.ylabel('Sensitivity')
    plt.title('{}) Universe Domain {} = {}'.format(index_column + 1, column, D_a_large_universe[column].values))
plt.suptitle('Sensitivities based on bounded DP of different queries for different domains for different hamming distances')
plt.show()
;
```
We can see that the values are in general smaller than the unbounded sensitivities. We can also see that, while the unbounded sensitivities keep rising with the hamming distance, the bounded sensitivities plateau.
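A plausible explanation for the plateau (my own reasoning, not a result from the benchmark paper): under bounded DP the cardinality n of the release dataset is fixed, so every query value stays within a range determined by n and the universe bounds [m, M]. For the mean, for example, exchanging h records can shift the value by at most
\begin{align}
\Delta f_{\mbox{mean}} \leq \frac{h\,(M - m)}{n} \leq M - m,
\end{align}
and once the exchanged records already take the most extreme values available in the universe, exchanging further records barely moves the query. Under unbounded DP, by contrast, the cardinality itself changes with h, so queries such as sum and count keep growing with the hamming distance.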
### Unbounded sensitivity - Scenario b
For scenario b, we have one dataset consisting of ranges of possible values. These ranges are bounded by the max and min of the previous datasets.
```
print('Cardinality of the release dataset', D_b_release_length)
print('Universe:')
D_b_universe
```
Be mindful that for these particular universes you could use different Hamming distances per column, as the number of elements varies between columns. Here we stick with the most restrictive column and use the same Hamming distances as in the previous experiments.
```
columns = ['mood', 'driving_proficiency']
hamming_distances = [1, 2, 3]
unbounded_sensitivities = {}
for hamming_distance in hamming_distances:
print('Hamming distance = ', hamming_distance)
unbounded_sensitivities = calculate_unbounded_sensitivities(D_b_universe, D_b_release_length, columns, hamming_distance, unbounded_sensitivities)
```
### Visualization
```
plt.figure(figsize=(15, 7))
query_types = ['mean', 'median', 'count', 'sum', 'std', 'var', 'percentile_25', 'percentile_50', 'percentile_75', 'percentile_90']
x_values = []
for key in unbounded_sensitivities.keys():
x_values.append(key)
for inedx_column, column in enumerate(columns):
# Start the plot
plot_index = int(str(1) + str(len(columns)) + str(inedx_column+1))
plt.subplot(plot_index)
query_type_legend_handles = []
for query_type in query_types:
y_values = []
for hamming_distance in unbounded_sensitivities.keys():
y_values.append(unbounded_sensitivities[hamming_distance][query_type][column])
# plot the sensitivities
legend_handle, = plt.plot(x_values, y_values, label=query_type)
query_type_legend_handles.append(legend_handle)
# Legends
legend = plt.legend(handles=query_type_legend_handles, bbox_to_anchor=(0., -0.2, 1., .102), \
ncol=5, mode="expand", borderaxespad=0.)
ax = plt.gca().add_artist(legend)
# axis labels and titles
plt.xlabel('Hamming distance')
plt.ylabel('Sensitivity')
plt.title('{}) Universe Domain {} = {}'.format(inedx_column+1, column, D_b_universe[column]))
plt.suptitle('Sensitivities based on unbounded DP of different queries for different domains for different hamming distances')
plt.show();
```
### Bounded sensitivity - Scenario b
This last iteration takes too long. Do not run it; I ran it for you.
```
D_b_universe
columns = ['mood', 'driving_proficiency']
hamming_distances = [1, 2, 3]
bounded_sensitivities = {}
for hamming_distance in hamming_distances:
print('Hamming distance = ', hamming_distance)
bounded_sensitivities = calculate_bounded_sensitivities(D_b_universe, D_b_release_length, columns, hamming_distance, bounded_sensitivities)
```
### Visualization
```
plt.figure(figsize=(15, 7))
query_types = ['mean', 'median', 'count', 'sum', 'std', 'var', 'percentile_25', 'percentile_50', 'percentile_75', 'percentile_90']
x_values = []
for key in bounded_sensitivities.keys():
x_values.append(key)
for inedx_column, column in enumerate(columns):
# Start the plot
plot_index = int(str(1) + str(len(columns)) + str(inedx_column+1))
plt.subplot(plot_index)
query_type_legend_handles = []
for query_type in query_types:
y_values = []
for hamming_distance in bounded_sensitivities.keys():
y_values.append(bounded_sensitivities[hamming_distance][query_type][column])
# plot the sensitivities
legend_handle, = plt.plot(x_values, y_values, label=query_type)
query_type_legend_handles.append(legend_handle)
# Legends
legend = plt.legend(handles=query_type_legend_handles, bbox_to_anchor=(0., -0.2, 1., .102), \
ncol=5, mode="expand", borderaxespad=0.)
ax = plt.gca().add_artist(legend)
# axis labels and titles
plt.xlabel('Hamming distance')
plt.ylabel('Sensitivity')
plt.title('{}) Universe Domain {} = {}'.format(inedx_column+1, column, D_b_universe[column]))
plt.suptitle('Sensitivities based on bounded DP of different queries for different domains for different hamming distances')
plt.show();
```
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Solution Notebook
## Problem: Implement a binary search tree with an insert method.
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
## Constraints
* Can we insert None values?
* No
* Can we assume we are working with valid integers?
* Yes
* Can we assume all left descendents <= n < all right descendents?
* Yes
* Do we have to keep track of the parent nodes?
* This is optional
* Can we assume this fits in memory?
* Yes
## Test Cases
### Insert
Insert will be tested through the following traversal:
### In-Order Traversal
* 5, 2, 8, 1, 3 -> 1, 2, 3, 5, 8
* 1, 2, 3, 4, 5 -> 1, 2, 3, 4, 5
If the `root` input is `None`, return a tree with the only element being the new root node.
You do not have to code the in-order traversal; it is part of the unit test.
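For reference, the traversal the test relies on can be sketched as follows (the real `in_order_traversal` lives in `dfs.py`; its signature here is assumed from how the unit test calls it):

```python
# Sketch of an in-order traversal: visit left subtree, node, right subtree.
def in_order_traversal(node, visit_func):
    if node is None:
        return
    in_order_traversal(node.left, visit_func)
    visit_func(node.data)
    in_order_traversal(node.right, visit_func)

# Tiny demo with a hand-built three-node tree:
class _Node(object):
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

results = []
in_order_traversal(_Node(2, _Node(1), _Node(3)), results.append)
print(results)  # [1, 2, 3]
```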
## Algorithm
### Insert
* If the root is None, return Node(data)
* If the data is <= the current node's data
* If the current node's left child is None, set it to Node(data)
* Else, recursively call insert on the left child
* Else
* If the current node's right child is None, set it to Node(data)
* Else, recursively call insert on the right child
Complexity:
* Time: O(h), where h is the height of the tree
* In a balanced tree, the height is O(log(n))
* In the worst case we have a linked list structure with O(n)
* Space: O(m), where m is the recursion depth, or O(1) if using an iterative approach
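The O(1)-space iterative approach mentioned above can be sketched like this (standalone, with its own minimal `Node` so it does not depend on `bst.py`):

```python
# Sketch of the O(1)-space iterative insert (no recursion stack needed).

class Node(object):

    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def insert_iterative(root, data):
    """Insert data iteratively and return the (possibly new) root."""
    if data is None:
        raise TypeError('data cannot be None')
    if root is None:
        return Node(data)
    node = root
    while True:
        if data <= node.data:
            if node.left is None:
                node.left = Node(data)
                return root
            node = node.left
        else:
            if node.right is None:
                node.right = Node(data)
                return root
            node = node.right

# Build the first test-case tree: 5, 2, 8, 1, 3
root = None
for item in (5, 2, 8, 1, 3):
    root = insert_iterative(root, item)
```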
## Code
```
%%writefile bst.py
class Node(object):
def __init__(self, data):
self.data = data
self.left = None
self.right = None
self.parent = None
def __repr__(self):
return str(self.data)
class Bst(object):
def __init__(self, root=None):
self.root = root
def insert(self, data):
if data is None:
raise TypeError('data cannot be None')
if self.root is None:
self.root = Node(data)
return self.root
else:
return self._insert(self.root, data)
def _insert(self, node, data):
if node is None:
return Node(data)
if data <= node.data:
if node.left is None:
node.left = self._insert(node.left, data)
node.left.parent = node
return node.left
else:
return self._insert(node.left, data)
else:
if node.right is None:
node.right = self._insert(node.right, data)
node.right.parent = node
return node.right
else:
return self._insert(node.right, data)
%run bst.py
```
## Unit Test
```
%run dfs.py
%run ../utils/results.py
%%writefile test_bst.py
from nose.tools import assert_equal
class TestTree(object):
def __init__(self):
self.results = Results()
def test_tree_one(self):
bst = Bst()
bst.insert(5)
bst.insert(2)
bst.insert(8)
bst.insert(1)
bst.insert(3)
in_order_traversal(bst.root, self.results.add_result)
assert_equal(str(self.results), '[1, 2, 3, 5, 8]')
self.results.clear_results()
def test_tree_two(self):
bst = Bst()
bst.insert(1)
bst.insert(2)
bst.insert(3)
bst.insert(4)
bst.insert(5)
in_order_traversal(bst.root, self.results.add_result)
assert_equal(str(self.results), '[1, 2, 3, 4, 5]')
print('Success: test_tree')
def main():
test = TestTree()
test.test_tree_one()
test.test_tree_two()
if __name__ == '__main__':
main()
%run -i test_bst.py
```
# Homework 3
## Group 2: Hayden Smotherman, Chris Suberlak, Winnie Wang
## Assignment
Group 2 uses the Gaussian profile (with the same parameters as in the lecture):
1) Can you tell which model is correct using global likelihood computation?
2) Can you tell which model is correct using BIC?
3) What happens when you increase the number of data points by a factor of 2 (using BIC)?
4) What happens when you decrease the number of data points by a factor of 2 (using BIC)?
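Throughout the notebook the BIC is computed as χ² + k·ln(N) with k = 4 free parameters per model (the Gaussian-errors form used by `Print_BIC` below). A quick numeric sketch with made-up χ² values:

```python
import numpy as np

def bic(chi_sq, n_params, n_points):
    # BIC = chi^2 + k * ln(N) for Gaussian measurement errors
    return chi_sq + n_params * np.log(n_points)

# Two hypothetical 4-parameter models judged on the same 100 points:
print(bic(95.0, 4, 100))
print(bic(110.0, 4, 100))
```

The model with the lower BIC is preferred; the k·ln(N) term penalizes extra parameters more strongly as N grows.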
```
import numpy as np
from matplotlib import pyplot as plt
from scipy import stats
from scipy.special import gamma
from scipy.stats import norm
from scipy.optimize import curve_fit
from scipy.optimize import minimize
from sklearn.neighbors import BallTree
from astroML.plotting import hist
from astroML.plotting.mcmc import plot_mcmc
from astroML.plotting import setup_text_plots
import pymc
setup_text_plots(fontsize=8, usetex=True)
%matplotlib inline
```
### This cell defines model functions and helper functions
```
### Modeled after AstroML book figure 10.25
# Define the function from which to generate the data
def GaussAndBkgd(t, b0, A, sigW, T):
"""Gaussian profile + flat background model"""
y = np.empty(t.shape)
y.fill(b0)
y += A * np.exp(-(t - T)**2/2/sigW**2)
return y
# Define the exponential burst profile
def burst(t, b0, A, alpha, T):
"""Burst model"""
y = np.empty(t.shape)
y.fill(b0)
mask = (t >= T)
y[mask] += A * np.exp(-alpha * (t[mask] - T))
return y
# Define a function that calculates and prints the BIC for each model
def Print_BIC(Gauss_ChiSq,Exp_ChiSq,N):
print('\n================ Model BIC Information ===============')
print('For N=%i Data Points:' %N)
print('BIC for Gaussian Profile: %.1f' %(Gauss_ChiSq+4*np.log(N)))
print('BIC for Exponential Profile: %.1f' %(Exp_ChiSq+4*np.log(N)))
print('=======================================================')
# The following functions are from http://www.astroml.org/book_figures/chapter5/fig_model_comparison_mcmc.html
# This one calculates the log of the pdf and is used for calculating the bayes factor
def get_logp(S, model):
"""compute log(p) given a pyMC model"""
M = pymc.MAP(model)
traces = np.array([S.trace(s)[:] for s in S.stochastics])
logp = np.zeros(traces.shape[1])
for i in range(len(logp)):
logp[i] = -M.func(traces[:, i])
return logp
# This function estimates the log of evidence given an MCMC chain.
# This is used to calculate the odds ratio between two models
def estimate_log_evidence(traces, logp, r=0.02, return_list=False):
"""Estimate the log of the evidence,
using the local density of points.
The code is borrowed from the AstroML source
code of Fig.5.24, which in turn is based on
eq. 5.127 in Ivezic+2014
"""
D, N = traces.shape
# compute volume of a D-dimensional sphere of radius r
Vr = np.pi ** (0.5 * D) / gamma(0.5 * D + 1) * (r ** D)
# use neighbor count within r as a density estimator
bt = BallTree(traces.T)
count = bt.query_radius(traces.T, r=r, count_only=True)
logE= logp + np.log(N) + np.log(Vr) - np.log(count)
if return_list:
return logE
else:
p25, p50, p75 = np.percentile(logE, [25, 50, 75])
return p50, 0.7413 * (p75 - p25)
```
**Explanation : why the above estimation of the evidence works.**
We choose between two models M1 and M2 by sampling the posterior space with the MCMC method. The traces returned by MCMC provide an optimal sampling of the posterior space for parameters **$\theta$** (in our case, each model is parametrized with a set of four parameters: {$b_{0}$, $A$, $T$, $\sigma$} for the Gaussian model M1, and {$b_{0}$, $A$, $T$, $\alpha$} for the exponential-decay model M2). As eqs. 5.124-5.127 in Ivezic+2014 show, the Bayesian evidence can be estimated by
$ \mathrm{evidence} \equiv L(M) = \frac{N p(\theta)}{\rho(\theta)}$.
Taking the logs :
$\log(L) = \log(N p(\theta) / \rho(\theta) ) = \log(N p ) - \log(\rho) $
Now $\rho = \mathrm{counts} / \mathrm{volume}$ , both of which can be estimated using the BallTree algorithm that counts the number of samples in the hypersphere of a given volume. The radius is problem-dependent, but in this case we deemed $r=0.02$ to be sufficient. Thus :
$\log(L) = \log(N) + \log(p) - \log(\mathrm{counts}) + \log(\mathrm{volume})$.
Other ways to estimate the posterior density in the multidimensional parameter space include kernel density estimation methods ( see Ivezic+2014, Sec. 6.1.1). There is more literature on the problem of estimating the density of the posterior sampled with the MCMC method - see [Morey+2011](https://www.sciencedirect.com/science/article/pii/S0022249611000666), [Sharma+2017](https://arxiv.org/pdf/1706.01629.pdf), [Weinberg, M.D. 2010](https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1107&context=astro_faculty_pubs).
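The hypersphere volume $V_r = \pi^{D/2} / \Gamma(D/2 + 1) \, r^D$ used in `estimate_log_evidence` can be sanity-checked against the familiar low-dimensional cases:

```python
from math import pi, gamma

def sphere_volume(D, r):
    # Volume of a D-dimensional ball of radius r:
    # V_r = pi^(D/2) / Gamma(D/2 + 1) * r^D
    return pi ** (0.5 * D) / gamma(0.5 * D + 1) * r ** D

# D=2 recovers the disc area pi*r^2, D=3 the ball volume (4/3)*pi*r^3
print(sphere_volume(2, 1.0))  # pi
print(sphere_volume(3, 1.0))  # 4*pi/3
```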
### This cell defines the main model comparison, which does the MCMC chains, and the plotting function
```
def Run_Gaussian_Exponential_Comparison(N,b0_true=10,A_true=3.0,sigma_true=3.0,T_true=40):
'''
This function runs the model comparison between a Gaussian Profile and an Exponential Profile.
The data is generated from a Gaussian Profile with N data points
INPUT: N <- Number of Data Points
b0_true, A_true, sigma_true, T_true <- OPTIONAL: True values of the distribution from which data is generated
OUTPUT: t_obs, y_obs, err_y <- Observed Randomly Generated Time, Flux and Flux Error values
Gauss_Trace, Gauss_Fit, Gauss_BF <- MCMC Trace, Best-Fit Parameters for Gaussian Model, and Bayes Factor
Exp_Trace, Exp_Fit, Exp_BF <- MCMC Trace, Best-Fit Parameters for Exponential Model, and Bayes Factor
'''
# Set a fixed random seed
np.random.seed(42)
#========================================================================================
# Generate N Data Points
#========================================================================================
err0_y = 0.5
# Add noise and calculate the "observed" flux values
t_obs = 100 * np.random.random(N)
y_true = GaussAndBkgd(t_obs, b0_true, A_true, sigma_true, T_true)
err_y = 0.5*np.sqrt(y_true/10) +np.random.uniform(0, 2*err0_y)
y_obs = np.random.normal(y_true, err_y) # Observed flux values
#========================================================================================
# Run Gaussian MCMC Chain
#========================================================================================
# Set up MCMC sampling for Gaussian MCMC parameters
b0 = pymc.Uniform('b0', 0, 50, value=50 * np.random.random())
A = pymc.Uniform('A', 0, 50, value=50 * np.random.random())
T = pymc.Uniform('T', 0, 100, value=100 * np.random.random())
log_sigma = pymc.Uniform('log_sigma', -2, 2, value=0.1)
# uniform prior on log(alpha)
@pymc.deterministic
def sigma(log_sigma=log_sigma):
return np.exp(log_sigma)
@pymc.deterministic
def y_model(t=t_obs, b0=b0, A=A, sigma=sigma, T=T):
return GaussAndBkgd(t, b0, A, sigma, T)
y = pymc.Normal('y', mu=y_model, tau=err_y ** -2, observed=True, value=y_obs)
model = dict(b0=b0, A=A, T=T, log_sigma=log_sigma, sigma=sigma, y_model=y_model, y=y)
# Run the MCMC sampling
def compute_Gaussian_MCMC_results(niter=25000, burn=4000):
S = pymc.MCMC(model)
S.sample(iter=niter, burn=burn)
traces = [S.trace(s)[:] for s in ['b0', 'A', 'T', 'sigma']]
logp = get_logp(S,model)
M = pymc.MAP(model)
M.fit()
fit_vals = (M.b0.value, M.A.value, M.sigma.value, M.T.value)
return traces, fit_vals, logp
# Save the traces and best-fit values for the Gaussian profile
Gaussian_traces, Gaussian_fit_vals, Gaussian_logp = compute_Gaussian_MCMC_results()
Gaussian_Bayes_Factor, dGBF = estimate_log_evidence(np.array(Gaussian_traces), Gaussian_logp, r=0.05)
#========================================================================================
# Run Exponential MCMC Chain
#========================================================================================
# Set up MCMC sampling for exponential free parameters
b0 = pymc.Uniform('b0', 0, 50, value=50 * np.random.random())
A = pymc.Uniform('A', 0, 50, value=50 * np.random.random())
T = pymc.Uniform('T', 0, 100, value=100 * np.random.random())
log_alpha = pymc.Uniform('log_alpha', -10, 10, value=0)
# uniform prior on log(alpha)
@pymc.deterministic
def alpha(log_alpha=log_alpha):
return np.exp(log_alpha)
@pymc.deterministic
def y_model(t=t_obs, b0=b0, A=A, alpha=alpha, T=T):
return burst(t, b0, A, alpha, T)
y = pymc.Normal('y', mu=y_model, tau=err_y ** -2, observed=True, value=y_obs)  # use the measurement errors here, as in the Gaussian model above
model = dict(b0=b0, A=A, T=T, log_alpha=log_alpha,
alpha=alpha, y_model=y_model, y=y)
# Run the MCMC sampling
def compute_Exponential_MCMC_results(niter=25000, burn=4000):
S = pymc.MCMC(model)
S.sample(iter=niter, burn=burn)
traces = [S.trace(s)[:] for s in ['b0', 'A', 'T', 'alpha']]
logp = get_logp(S,model)
M = pymc.MAP(model)
M.fit()
fit_vals = (M.b0.value, M.A.value, M.alpha.value, M.T.value)
return traces, fit_vals, logp
# Save the traces and best-fit values for the exponential profile
Exponential_traces, Exponential_fit_vals, Exponential_logp = compute_Exponential_MCMC_results()
Exponential_Bayes_Factor, dEBF = estimate_log_evidence(np.array(Exponential_traces),
Exponential_logp,r=0.05)
# Now we return all values
return(t_obs,y_obs,err_y,Gaussian_traces,
Gaussian_fit_vals, Gaussian_Bayes_Factor,
Exponential_traces, Exponential_fit_vals, Exponential_Bayes_Factor)
def Plot_Gaussian_Exponential_Comparison(t_obs,y_obs,err_y,
Gaussian_traces,Gaussian_fit_vals,
Exponential_traces,Exponential_fit_vals):
'''
This Function plots the comparison of the Gaussian Burst Model and the Exponential Burst Model
It uses the outputs from Run_Gaussian_Exponential_Comparison()
'''
# Plot the initial data and two models
plt.figure(figsize=[9,9])
# We need a very fine grid in t for the models because the exponential profile can be very peaked
t = np.linspace(0,100,100001)
Gauss_y_fit = GaussAndBkgd(t, *Gaussian_fit_vals)
Exp_y_fit = burst(t, *Exponential_fit_vals)
# Plot the data with errors
plt.scatter(t_obs, y_obs, s=9, lw=0, c='k')
plt.errorbar(t_obs, y_obs, err_y, fmt='.', lw=1, c='k')
# Plot the Gaussian fit
plt.plot(t,Gauss_y_fit,lw=5)
# Plot the exponential fit
plt.plot(t,Exp_y_fit,lw=5)
# Set limits, labels, and legend
plt.ylim([5,15])
plt.xlabel('Time',fontsize=16)
plt.ylabel('Flux',fontsize=16)
plt.title('Gaussian/Exponential Burst Model Comparison',fontsize=20)
plt.legend(['Gaussian Fit','Exponential Fit','Observed Data'],fontsize=16)
```
# Problem 1
### Run the model for N=100 points and use the Global Likelihood Estimate to check which is best
```
# Run the model comparison MCMC chains
N = 100 # Number of data points to generate
t_obs,y_obs,err_y,\
Gaussian_traces,Gaussian_fit_vals,Gaussian_BF,\
Exponential_traces,Exponential_fit_vals,Exponential_BF\
= Run_Gaussian_Exponential_Comparison(N)
Plot_Gaussian_Exponential_Comparison(t_obs,y_obs,err_y,
Gaussian_traces,Gaussian_fit_vals,
Exponential_traces,Exponential_fit_vals)
print('\nBayes Factor = %f' % np.exp(Gaussian_BF - Exponential_BF))  # the estimates are log-evidences, so exponentiate their difference
```
We are not quite sure what to make of this Bayes Factor that we got from AstroML.
# Problem 2
### Plot the data and best-fit models for N=100 Points
```
# Run the model comparison MCMC chains
N = 100 # Number of data points to generate
t_obs,y_obs,err_y,\
Gaussian_traces,Gaussian_fit_vals,Gaussian_BF,\
Exponential_traces,Exponential_fit_vals,Exponential_BF\
= Run_Gaussian_Exponential_Comparison(N)
Plot_Gaussian_Exponential_Comparison(t_obs,y_obs,err_y,
Gaussian_traces,Gaussian_fit_vals,
Exponential_traces,Exponential_fit_vals)
# Print the BIC for each model
y_Gauss_fit = GaussAndBkgd(t_obs,*Gaussian_fit_vals)
y_Exp_fit = burst(t_obs,*Exponential_fit_vals)
Gauss_ChiSq = np.sum(((y_obs-y_Gauss_fit)/err_y)**2)
Exp_ChiSq = np.sum(((y_obs-y_Exp_fit)/err_y)**2)
Print_BIC(Gauss_ChiSq,Exp_ChiSq,N)
```
We can see from the BIC of each model that the Gaussian Profile fits better.
# Problem 3
### Plot the data and best-fit models for N=200 Points
```
# Run the model comparison MCMC chains
N = 200 # Number of data points to generate
t_obs,y_obs,err_y,\
Gaussian_traces,Gaussian_fit_vals,Gaussian_BF,\
Exponential_traces,Exponential_fit_vals,Exponential_BF,\
= Run_Gaussian_Exponential_Comparison(N)
Plot_Gaussian_Exponential_Comparison(t_obs,y_obs,err_y,
Gaussian_traces,Gaussian_fit_vals,
Exponential_traces,Exponential_fit_vals)
# Print the BIC for each model
y_Gauss_fit = GaussAndBkgd(t_obs,*Gaussian_fit_vals)
y_Exp_fit = burst(t_obs,*Exponential_fit_vals)
Gauss_ChiSq = np.sum(((y_obs-y_Gauss_fit)/err_y)**2)
Exp_ChiSq = np.sum(((y_obs-y_Exp_fit)/err_y)**2)
Print_BIC(Gauss_ChiSq,Exp_ChiSq,N)
```
With 200 points, we can still use the BIC to see that the Gaussian Profile is better.
# Problem 4
### Plot the data and best-fit models for N=50 Points
```
# Run the model comparison MCMC chains
N = 50 # Number of data points to generate
t_obs,y_obs,err_y,\
Gaussian_traces,Gaussian_fit_vals,Gaussian_BF,\
Exponential_traces,Exponential_fit_vals,Exponential_BF,\
= Run_Gaussian_Exponential_Comparison(N)
Plot_Gaussian_Exponential_Comparison(t_obs,y_obs,err_y,
Gaussian_traces,Gaussian_fit_vals,
Exponential_traces,Exponential_fit_vals)
# Print the BIC for each model
y_Gauss_fit = GaussAndBkgd(t_obs,*Gaussian_fit_vals)
y_Exp_fit = burst(t_obs,*Exponential_fit_vals)
Gauss_ChiSq = np.sum(((y_obs-y_Gauss_fit)/err_y)**2)
Exp_ChiSq = np.sum(((y_obs-y_Exp_fit)/err_y)**2)
Print_BIC(Gauss_ChiSq,Exp_ChiSq,N)
```
With only 50 points, however, we cannot tell which model is better based on the BIC.
# BONUS ROUND
Run over a range of N and compare the BIC and Odds Ratio values as a function of N
```
N_Array = [50,75,100,150,200,400,600,800,1000]
Gauss_ChiSq = np.ones(np.size(N_Array))
Exp_ChiSq = np.ones(np.size(N_Array))
Odds_Ratio = np.ones(np.size(N_Array))
for i,N in enumerate(N_Array):
if (i != 0):
print('\n')
print('Running Model Comparison for N=%i Data Points' %N)
t_obs,y_obs,err_y,\
Gaussian_traces,Gaussian_fit_vals,Gaussian_BF,\
Exponential_traces,Exponential_fit_vals,Exponential_BF,\
= Run_Gaussian_Exponential_Comparison(N)
y_Gauss_fit = GaussAndBkgd(t_obs,*Gaussian_fit_vals)
y_Exp_fit = burst(t_obs,*Exponential_fit_vals)
Gauss_ChiSq[i] = np.sum(((y_obs-y_Gauss_fit)/err_y)**2)
Exp_ChiSq[i] = np.sum(((y_obs-y_Exp_fit)/err_y)**2)
Odds_Ratio[i] = np.exp(Gaussian_BF - Exponential_BF)  # the BF values are log-evidences, so the odds ratio is exp of their difference
Gauss_BIC = Gauss_ChiSq + 4*np.log(N_Array)
Exp_BIC = Exp_ChiSq + 4*np.log(N_Array)
plt.figure(figsize=[9,9])
plt.plot(N_Array,Gauss_BIC,lw=5)
plt.plot(N_Array,Exp_BIC,lw=5)
plt.xlabel('Number of points',fontsize=16)
plt.ylabel('BIC',fontsize=16)
plt.title('BIC vs N for Exponential and Gaussian Burst Models',fontsize=20)
plt.legend(['Gaussian BIC','Exponential BIC'],fontsize=16)
plt.figure(figsize=[9,9])
plt.plot(N_Array,Odds_Ratio,lw=5)
plt.xlabel('Number of points',fontsize=16)
plt.ylabel('Odds Ratio (Gauss/Exponential)',fontsize=16)
plt.title('Odds Ratio vs N')
```
We can see from the above plot that, except at very low values of N, the BIC of the Gaussian Model is lower than that of the Exponential Model.
```
import pandas as pd
import numpy as np
import json
import requests
import seaborn as sns
import matplotlib.pyplot as plt
from cdispyutils.hmac4 import get_auth
auth = ''
def add_keys(filename):
''' Get auth from our secret keys '''
global auth
with open(filename,'r') as f:
secrets = json.load(f)
auth = get_auth(secrets['access key'], secrets['secret key'], 'submission')
add_keys("/home/ubuntu/cred.json")
def query_api(query_txt):
''' Request results for a specific query '''
query = {'query': query_txt}
output = requests.post('http://kubenode.internal.io:30006/v0/submission/graphql/', auth=auth, json=query).text
data = json.loads(output)
if 'data' not in data and 'errors' in data:
print(query)
return data
#get ctc_ml based
df = pd.DataFrame(columns=('patient_id','tube_id','ml','ml_type','study_id', 'tta'))
di = 0
query = """ { study(first: 0, project_id: "bpa-USC_OPT1_T1", order_by_asc: "submitter_id", submitter_id:"BPA-USC-BC-EX1") { study_id: submitter_id, cases(first: 0,order_by_asc:"submitter_id") { patient_id: submitter_id , biospecimens(first: 0) { samples(first: 0) { tta: hours_to_fractionation_upper, tube_id: submitter_id , aliquots(first: 1, order_by_asc:"submitter_id") { slide_id: submitter_id , analytes(first: 0, analyte_type:"Cell Count", quick_search:"ctc_concentration") { aid: submitter_id, quantification_assays(quick_search: "ctc_concentration", not: {molecular_concentration: -1}) { qa_id: submitter_id, molecular_concentration } } } } } } }} """
data = query_api(query)
data = data["data"]
#####gets the ctc/mL data
for s in range(len(data["study"])):
study = data["study"][s]["study_id"]
for c in range(len(data["study"][s]["cases"])):
pid = data["study"][s]["cases"][c]["patient_id"]
for b in range(len(data["study"][s]["cases"][c]["biospecimens"])):
for sa in range(len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"])):
tid = data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["tube_id"]
tta = data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["tta"]
for al in range(len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"])):
sid = data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["slide_id"]
if len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"]) == 0:
continue
for an in range(len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"])):
if len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"][an]["quantification_assays"]) == 0:
continue
ml_type = "ctc_ml"#data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"][an]["aid"]
ml = data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"][an]["quantification_assays"][0]["molecular_concentration"]
df.loc[di] = [pid,tid,ml,ml_type,study,tta]
di = di + 1
#####gets the Low_ctc/mL data
query = """ { study(first: 0, project_id: "bpa-USC_OPT1_T1", order_by_asc: "submitter_id", submitter_id:"BPA-USC-BC-EX1") { study_id: submitter_id, cases(first: 0,order_by_asc:"submitter_id") { patient_id: submitter_id , biospecimens(first: 0) { samples(first: 0) { tta: hours_to_fractionation_upper, tube_id: submitter_id , aliquots(first: 1, order_by_asc:"submitter_id") { slide_id: submitter_id , analytes(first: 0, analyte_type:"Cell Count", quick_search:"ctc_low_concentration") { aid: submitter_id, quantification_assays(quick_search: "ctc_low_concentration", not: {molecular_concentration: -1}) { qa_id: submitter_id, molecular_concentration } } } } } } }} """
data = query_api(query)
data = data["data"]
for s in range(len(data["study"])):
study = data["study"][s]["study_id"]
for c in range(len(data["study"][s]["cases"])):
pid = data["study"][s]["cases"][c]["patient_id"]
for b in range(len(data["study"][s]["cases"][c]["biospecimens"])):
for sa in range(len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"])):
tid = data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["tube_id"]
tta = data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["tta"]
for al in range(len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"])):
sid = data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["slide_id"]
if len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"]) == 0:
continue
for an in range(len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"])):
if len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"][an]["quantification_assays"]) == 0:
continue
ml_type = "ctc_low_ml"#data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"][an]["aid"]
ml = data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"][an]["quantification_assays"][0]["molecular_concentration"]
df.loc[di] = [pid,tid,ml,ml_type,study,tta]
di = di + 1
#####gets the Small_ctc/mL data
query = """ { study(first: 0, project_id: "bpa-USC_OPT1_T1", order_by_asc: "submitter_id", submitter_id:"BPA-USC-BC-EX1") { study_id: submitter_id, cases(first: 0,order_by_asc:"submitter_id") { patient_id: submitter_id , biospecimens(first: 0) { samples(first: 0) { tta: hours_to_fractionation_upper, tube_id: submitter_id , aliquots(first: 1, order_by_asc:"submitter_id") { slide_id: submitter_id , analytes(first: 0, analyte_type:"Cell Count", quick_search:"ctc_small_concentration") { aid: submitter_id, quantification_assays(quick_search: "ctc_small_concentration", not: {molecular_concentration: -1}) { qa_id: submitter_id, molecular_concentration } } } } } } }} """
data = query_api(query)
data = data["data"]
for s in range(len(data["study"])):
study = data["study"][s]["study_id"]
for c in range(len(data["study"][s]["cases"])):
pid = data["study"][s]["cases"][c]["patient_id"]
for b in range(len(data["study"][s]["cases"][c]["biospecimens"])):
for sa in range(len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"])):
tid = data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["tube_id"]
tta = data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["tta"]
for al in range(len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"])):
sid = data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["slide_id"]
if len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"]) == 0:
continue
for an in range(len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"])):
if len(data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"][an]["quantification_assays"]) == 0:
continue
ml_type = "ctc_small_ml"#data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"][an]["aid"]
ml = data["study"][s]["cases"][c]["biospecimens"][b]["samples"][sa]["aliquots"][al]["analytes"][an]["quantification_assays"][0]["molecular_concentration"]
df.loc[di] = [pid,tid,ml,ml_type,study,tta]
di = di + 1
#plt.switch_backend('agg')
plt.figure(figsize=(8,25))
g = sns.FacetGrid(df,col="ml_type")
pl = g.map(sns.swarmplot,"tta","ml")
#pl.savefig('ctc_ml_table.png')
plt.show()
### This runs with jsonToDF, which converts the data object to a pandas DataFrame!
## This way you don't have to worry about the unraveling from the examples above!
import jsonToDF as jtd
query = """ { study(first: 0, project_id: "bpa-USC_OPT1_T1", order_by_asc: "submitter_id", submitter_id:"BPA-USC-BC-EX1") { study_id: submitter_id, cases(first: 0,order_by_asc:"submitter_id") { patient_id: submitter_id , biospecimens(first: 0) { samples(first: 0) { tta: hours_to_fractionation_upper, tube_id: submitter_id , aliquots(first: 1, order_by_asc:"submitter_id") { slide_id: submitter_id , analytes(first: 0, analyte_type:"Cell Count", quick_search:"ctc_concentration") { aid: submitter_id, quantification_assays(quick_search: "ctc_concentration", not: {molecular_concentration: -1}) { qa_id: submitter_id, molecular_concentration } } } } } } }} """
data = query_api(query)
data = data["data"]
jdata = jtd.jsonToDF(data)
for i in range(len(jdata.index)):
ml_type = "ctc_ml"
ml = jdata.iloc[i]["molecular_concentration"]
pid = jdata.iloc[i]["patient_id"]
tid = jdata.iloc[i]["tube_id"]
study = jdata.iloc[i]["study_id"]
tta = jdata.iloc[i]["tta"]
if tta is None or np.isnan(ml):
continue
df.loc[di] = [pid,tid,ml,ml_type,study,tta]
di = di + 1
#####gets the Low_ctc/mL data
query = """ { study(first: 0, project_id: "bpa-USC_OPT1_T1", order_by_asc: "submitter_id", submitter_id:"BPA-USC-BC-EX1") { study_id: submitter_id, cases(first: 0,order_by_asc:"submitter_id") { patient_id: submitter_id , biospecimens(first: 0) { samples(first: 0) { tta: hours_to_fractionation_upper, tube_id: submitter_id , aliquots(first: 1, order_by_asc:"submitter_id") { slide_id: submitter_id , analytes(first: 0, analyte_type:"Cell Count", quick_search:"ctc_low_concentration") { aid: submitter_id, quantification_assays(quick_search: "ctc_low_concentration", not: {molecular_concentration: -1}) { qa_id: submitter_id, molecular_concentration } } } } } } }} """
data = query_api(query)
data = data["data"]
jdata = jtd.jsonToDF(data)
for i in range(len(jdata.index)):
ml_type = "ctc_low_ml"
ml = jdata.iloc[i]["molecular_concentration"]
pid = jdata.iloc[i]["patient_id"]
tid = jdata.iloc[i]["tube_id"]
study = jdata.iloc[i]["study_id"]
tta = jdata.iloc[i]["tta"]
if tta is None or np.isnan(ml):
continue
df.loc[di] = [pid,tid,ml,ml_type,study,tta]
di = di + 1
#####gets the Small_ctc/mL data
query = """ { study(first: 0, project_id: "bpa-USC_OPT1_T1", order_by_asc: "submitter_id", submitter_id:"BPA-USC-BC-EX1") { study_id: submitter_id, cases(first: 0,order_by_asc:"submitter_id") { patient_id: submitter_id , biospecimens(first: 0) { samples(first: 0) { tta: hours_to_fractionation_upper, tube_id: submitter_id , aliquots(first: 1, order_by_asc:"submitter_id") { slide_id: submitter_id , analytes(first: 0, analyte_type:"Cell Count", quick_search:"ctc_small_concentration") { aid: submitter_id, quantification_assays(quick_search: "ctc_small_concentration", not: {molecular_concentration: -1}) { qa_id: submitter_id, molecular_concentration } } } } } } }} """
data = query_api(query)
data = data["data"]
jdata = jtd.jsonToDF(data)
for i in range(len(jdata.index)):
ml_type = "ctc_small_ml"
ml = jdata.iloc[i]["molecular_concentration"]
pid = jdata.iloc[i]["patient_id"]
tid = jdata.iloc[i]["tube_id"]
study = jdata.iloc[i]["study_id"]
tta = jdata.iloc[i]["tta"]
if tta is None or np.isnan(ml):
continue
df.loc[di] = [pid,tid,ml,ml_type,study,tta]
di = di + 1
#plt.switch_backend('agg')
plt.figure(figsize=(8,25))
g = sns.FacetGrid(df,col="ml_type")
pl = g.map(sns.swarmplot,"tta","ml")
#pl.savefig('ctc_ml_table.png')
plt.show()
```
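The three nearly identical extraction loops above could be collapsed into one helper. This is a sketch (function name hypothetical) that walks the same nested shape returned by the query: study → cases → biospecimens → samples → aliquots → analytes → quantification_assays.

```python
# Sketch: flatten the nested query result into rows for the DataFrame.

def extract_rows(data, ml_type):
    rows = []
    for study in data["study"]:
        study_id = study["study_id"]
        for case in study["cases"]:
            pid = case["patient_id"]
            for bio in case["biospecimens"]:
                for sample in bio["samples"]:
                    tid, tta = sample["tube_id"], sample["tta"]
                    for aliquot in sample["aliquots"]:
                        for analyte in aliquot["analytes"]:
                            qas = analyte["quantification_assays"]
                            if qas:  # skip aliquots with no assay result
                                rows.append((pid, tid, qas[0]["molecular_concentration"],
                                             ml_type, study_id, tta))
    return rows
```

Each (query, label) pair then reduces to one `extract_rows(data, label)` call followed by appending the returned rows to `df`.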
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import matplotlib as mpl
import seaborn
mpl.rcParams['figure.figsize']=(15.0,8.0)
mpl.rcParams['font.size']=12 #10
mpl.rcParams['savefig.dpi']=100 #72
from matplotlib import pyplot as plt
import stingray as sr
from stingray import Lightcurve, Powerspectrum, AveragedPowerspectrum, Crossspectrum, AveragedCrossspectrum
from stingray import events
from stingray.events import EventList
import glob
import numpy as np
from astropy.modeling import models, fitting
```
# R.m.s. - intensity diagram
This diagram is used to characterize the variability of black hole binaries and AGN (see e.g. Plant et al., arXiv:1404.7498; McHardy 2010, 2010LNP...794..203M, for a review).
In Stingray it is very easy to calculate.
## Setup: simulate a light curve with a variable rms and rate
We simulate a light curve with powerlaw variability, and then we rescale
it so that it has increasing flux and r.m.s. variability.
```
from stingray.simulator.simulator import Simulator
from scipy.ndimage.filters import gaussian_filter1d
from stingray.utils import baseline_als
from scipy.interpolate import interp1d
np.random.seed(1034232)
# Simulate a light curve with increasing variability and flux
length = 10000
dt = 0.1
times = np.arange(0, length, dt)
# Create a light curve with powerlaw variability (index 1),
# and smooth it to eliminate some Gaussian noise. We will simulate proper
# noise with the `np.random.poisson` function.
# Both should not be used together, because they alter the noise properties.
sim = Simulator(dt=dt, N=int(length/dt), mean=50, rms=0.4)
counts_cont = sim.simulate(1).counts
counts_cont_init = gaussian_filter1d(counts_cont, 200)
# ---------------------
# Renormalize so that the light curve has increasing flux and r.m.s.
# variability.
# ---------------------
# The baseline function cannot be used with too large arrays.
# Since it's just an approximation, we will just use one every
# ten array elements to calculate the baseline
mask = np.zeros_like(times, dtype=bool)
mask[::10] = True
print(counts_cont_init[mask])
baseline = baseline_als(times[mask], counts_cont_init[mask], 1e10, 0.001)
base_func = interp1d(times[mask], baseline, bounds_error=False, fill_value='extrapolate')
counts_cont = counts_cont_init - base_func(times)
counts_cont -= np.min(counts_cont)
counts_cont += 1
counts_cont *= times * 0.003
# counts_cont += 500
counts_cont += 500
# Finally, Poissonize it!
counts = np.random.poisson(counts_cont)
plt.plot(times, counts_cont, zorder=10, label='Continuous light curve')
plt.plot(times, counts, label='Final light curve')
plt.legend()
```
## R.m.s. - intensity diagram
We use the `analyze_lc_chunks` method in `Lightcurve` to calculate two quantities: the rate and the excess variance, normalized as $F_{\rm var}$ (Vaughan et al. 2003).
`analyze_lc_chunks()` requires an input function that just accepts a light curve. Therefore, we create the two functions `rate` and `excvar` that wrap the existing functionality in Stingray.
Then, we plot the results.
Done!
```
# This function can be found in stingray.utils
def excess_variance(lc, normalization='fvar'):
"""Calculate the excess variance.
Vaughan et al. 2003, MNRAS 345, 1271 give three measurements of source
intrinsic variance: the *excess variance*, defined as
.. math:: \sigma_{XS} = S^2 - \overline{\sigma_{err}^2}
the *normalized excess variance*, defined as
.. math:: \sigma_{NXS} = \sigma_{XS} / \overline{x^2}
and the *fractional mean square variability amplitude*, or
:math:`F_{var}`, defined as
.. math:: F_{var} = \sqrt{\dfrac{\sigma_{XS}}{\overline{x^2}}}
Parameters
----------
lc : a :class:`Lightcurve` object
normalization : str
if 'fvar', return the fractional mean square variability :math:`F_{var}`.
        If 'none', return the unnormalized excess variance
        :math:`\sigma_{XS}`. If 'norm_xs', return the normalized excess
        variance :math:`\sigma_{NXS}`.
Returns
-------
var_xs : float
var_xs_err : float
"""
lc_mean_var = np.mean(lc.counts_err ** 2)
lc_actual_var = np.var(lc.counts)
var_xs = lc_actual_var - lc_mean_var
mean_lc = np.mean(lc.counts)
mean_ctvar = mean_lc ** 2
var_nxs = var_xs / mean_lc ** 2
fvar = np.sqrt(var_xs / mean_ctvar)
N = len(lc.counts)
var_nxs_err_A = np.sqrt(2 / N) * lc_mean_var / mean_lc ** 2
var_nxs_err_B = np.sqrt(mean_lc ** 2 / N) * 2 * fvar / mean_lc
var_nxs_err = np.sqrt(var_nxs_err_A ** 2 + var_nxs_err_B ** 2)
fvar_err = var_nxs_err / (2 * fvar)
if normalization == 'fvar':
return fvar, fvar_err
elif normalization == 'norm_xs':
return var_nxs, var_nxs_err
elif normalization == 'none' or normalization is None:
return var_xs, var_nxs_err * mean_lc **2
def fvar_fun(lc):
return excess_variance(lc, normalization='fvar')
def norm_exc_var_fun(lc):
return excess_variance(lc, normalization='norm_xs')
def exc_var_fun(lc):
return excess_variance(lc, normalization='none')
def rate_fun(lc):
return lc.meancounts, np.std(lc.counts)
lc = Lightcurve(times, counts, gti=[[-0.5*dt, length - 0.5*dt]], dt=dt)
start, stop, res = lc.analyze_lc_chunks(1000, np.var)
var = res
start, stop, res = lc.analyze_lc_chunks(1000, rate_fun)
rate, rate_err = res
start, stop, res = lc.analyze_lc_chunks(1000, fvar_fun)
fvar, fvar_err = res
start, stop, res = lc.analyze_lc_chunks(1000, exc_var_fun)
evar, evar_err = res
start, stop, res = lc.analyze_lc_chunks(1000, norm_exc_var_fun)
nvar, nvar_err = res
plt.errorbar(rate, fvar, xerr=rate_err, yerr=fvar_err, fmt='none')
plt.loglog()
plt.xlabel('Count rate')
plt.ylabel(r'$F_{\rm var}$')
tmean = (start + stop)/2
from matplotlib.gridspec import GridSpec
plt.figure(figsize=(15, 20))
gs = GridSpec(5, 1)
ax_lc = plt.subplot(gs[0])
ax_mean = plt.subplot(gs[1], sharex=ax_lc)
ax_evar = plt.subplot(gs[2], sharex=ax_lc)
ax_nvar = plt.subplot(gs[3], sharex=ax_lc)
ax_fvar = plt.subplot(gs[4], sharex=ax_lc)
ax_lc.plot(lc.time, lc.counts)
ax_lc.set_ylabel('Counts')
ax_mean.scatter(tmean, rate)
ax_mean.set_ylabel('Counts')
ax_evar.errorbar(tmean, evar, yerr=evar_err, fmt='o')
ax_evar.set_ylabel(r'$\sigma_{XS}$')
ax_fvar.errorbar(tmean, fvar, yerr=fvar_err, fmt='o')
ax_fvar.set_ylabel(r'$F_{var}$')
ax_nvar.errorbar(tmean, nvar, yerr=nvar_err, fmt='o')
ax_nvar.set_ylabel(r'$\sigma_{NXS}$')
```
| github_jupyter |
# Training Neural Networks
The network we built in the previous part isn't so smart: it doesn't know anything about our handwritten digits. Neural networks with non-linear activations work like universal function approximators. There is some function that maps your input to the output. For example, images of handwritten digits to class probabilities. The power of neural networks is that we can train them to approximate this function, and basically any function, given enough data and compute time.
<img src="assets/function_approx.png" width=500px>
At first the network is naive, it doesn't know the function mapping the inputs to the outputs. We train the network by showing it examples of real data, then adjusting the network parameters such that it approximates this function.
To find these parameters, we need to know how poorly the network is predicting the real outputs. For this we calculate a **loss function** (also called the cost), a measure of our prediction error. For example, the mean squared loss is often used in regression and binary classification problems
$$
\large \ell = \frac{1}{2n}\sum_i^n{\left(y_i - \hat{y}_i\right)^2}
$$
where $n$ is the number of training examples, $y_i$ are the true labels, and $\hat{y}_i$ are the predicted labels.
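As a quick numeric sketch (not part of the original notebook), the loss above can be computed directly with NumPy; the labels and predictions here are made-up illustrative values:

```python
import numpy as np

# Mean squared loss with the 1/(2n) factor from the formula above:
# y are the true labels, y_hat are the predicted labels.
def mse_loss(y, y_hat):
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return np.sum((y - y_hat) ** 2) / (2 * len(y))

# Illustrative batch of four examples.
loss = mse_loss([1.0, 0.0, 1.0, 1.0], [0.9, 0.2, 0.8, 0.6])
```

A perfect prediction gives a loss of exactly zero, and the loss grows quadratically with the prediction error.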
By minimizing this loss with respect to the network parameters, we can find configurations where the loss is at a minimum and the network is able to predict the correct labels with high accuracy. We find this minimum using a process called **gradient descent**. The gradient is the slope of the loss function and points in the direction of fastest change. To get to the minimum in the least amount of time, we then want to follow the gradient (downwards). You can think of this like descending a mountain by following the steepest slope to the base.
<img src='assets/gradient_descent.png' width=350px>
## Backpropagation
For single layer networks, gradient descent is straightforward to implement. However, it's more complicated for deeper, multilayer neural networks like the one we've built. Complicated enough that it took about 30 years before researchers figured out how to train multilayer networks.
Training multilayer networks is done through **backpropagation** which is really just an application of the chain rule from calculus. It's easiest to understand if we convert a two layer network into a graph representation.
<img src='assets/backprop_diagram.png' width=550px>
In the forward pass through the network, our data and operations go from bottom to top here. We pass the input $x$ through a linear transformation $L_1$ with weights $W_1$ and biases $b_1$. The output then goes through the sigmoid operation $S$ and another linear transformation $L_2$. Finally we calculate the loss $\ell$. We use the loss as a measure of how bad the network's predictions are. The goal then is to adjust the weights and biases to minimize the loss.
To train the weights with gradient descent, we propagate the gradient of the loss backwards through the network. Each operation has some gradient between the inputs and outputs. As we send the gradients backwards, we multiply the incoming gradient with the gradient for the operation. Mathematically, this is really just calculating the gradient of the loss with respect to the weights using the chain rule.
$$
\large \frac{\partial \ell}{\partial W_1} = \frac{\partial L_1}{\partial W_1} \frac{\partial S}{\partial L_1} \frac{\partial L_2}{\partial S} \frac{\partial \ell}{\partial L_2}
$$
**Note:** I'm glossing over a few details here that require some knowledge of vector calculus, but they aren't necessary to understand what's going on.
We update our weights using this gradient with some learning rate $\alpha$.
$$
\large W^\prime_1 = W_1 - \alpha \frac{\partial \ell}{\partial W_1}
$$
The learning rate $\alpha$ is set such that the weight update steps are small enough that the iterative method settles in a minimum.
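As an illustration of this update rule (a toy example, not from the notebook), we can fit a single weight by hand with repeated gradient steps:

```python
import numpy as np

# Toy data drawn from the relationship y = 2x.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
w = 0.0        # initial weight
alpha = 0.1    # learning rate

for _ in range(100):
    y_hat = w * x
    grad = np.mean((y_hat - y) * x)  # d(loss)/dw for the (1/2n) squared loss
    w = w - alpha * grad             # the update rule W' = W - alpha * grad

# w converges toward the true slope, 2.0
```

Each step moves the weight a fraction `alpha` of the way along the negative gradient, which is exactly the descent described above.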
## Losses in PyTorch
Let's start by seeing how we calculate the loss with PyTorch. Through the `nn` module, PyTorch provides losses such as the cross-entropy loss (`nn.CrossEntropyLoss`). You'll usually see the loss assigned to `criterion`. As noted in the last part, with a classification problem such as MNIST, we're using the softmax function to predict class probabilities. With a softmax output, you want to use cross-entropy as the loss. To actually calculate the loss, you first define the criterion then pass in the output of your network and the correct labels.
Something really important to note here. Looking at [the documentation for `nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss),
> This criterion combines `nn.LogSoftmax()` and `nn.NLLLoss()` in one single class.
>
> The input is expected to contain scores for each class.
This means we need to pass in the raw output of our network into the loss, not the output of the softmax function. This raw output is usually called the *logits* or *scores*. We use the logits because softmax gives you probabilities which will often be very close to zero or one, but floating-point numbers can't accurately represent values near zero or one ([read more here](https://docs.python.org/3/tutorial/floatingpoint.html)). It's usually best to avoid doing calculations with probabilities; typically we use log-probabilities.
```
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
```
### Note
If you haven't seen `nn.Sequential` yet, please finish the end of the Part 2 notebook.
```
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10))
# Define the loss
criterion = nn.CrossEntropyLoss()
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# Forward pass, get our logits
logits = model(images)
# Calculate the loss with the logits and the labels
loss = criterion(logits, labels)
print(loss)
```
In my experience it's more convenient to build the model with a log-softmax output using `nn.LogSoftmax` or `F.log_softmax` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LogSoftmax)). Then you can get the actual probabilities by taking the exponential `torch.exp(output)`. With a log-softmax output, you want to use the negative log likelihood loss, `nn.NLLLoss` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss)).
>**Exercise:** Build a model that returns the log-softmax as the output and calculate the loss using the negative log likelihood loss. Note that for `nn.LogSoftmax` and `F.log_softmax` you'll need to set the `dim` keyword argument appropriately. `dim=0` calculates softmax across the rows, so each column sums to 1, while `dim=1` calculates across the columns so each row sums to 1. Think about what you want the output to be and choose `dim` appropriately.
```
# TODO: Build a feed-forward network
model =
# TODO: Define the loss
criterion =
### Run this to check your work
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# Forward pass, get our logits
logits = model(images)
# Calculate the loss with the logits and the labels
loss = criterion(logits, labels)
print(loss)
```
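One possible solution to the exercise above (a sketch only: random tensors stand in for the MNIST batch so that the cell is self-contained):

```python
import torch
from torch import nn

# Log-softmax over the class dimension (dim=1), so each row sums to 1
# after exponentiation.
model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

# Negative log likelihood loss pairs with a log-softmax output.
criterion = nn.NLLLoss()

# Stand-in data: a random "batch" of 64 flattened 28x28 images and labels.
images = torch.randn(64, 784)
labels = torch.randint(0, 10, (64,))

logps = model(images)
loss = criterion(logps, labels)
```

With the real `trainloader` batch, only the two `images`/`labels` lines would differ.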
## Autograd
Now that we know how to calculate a loss, how do we use it to perform backpropagation? Torch provides a module, `autograd`, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operations performed on tensors, then going backwards through those operations, calculating gradients along the way. To make sure PyTorch keeps track of operations on a tensor and calculates the gradients, you need to set `requires_grad = True` on a tensor. You can do this at creation with the `requires_grad` keyword, or at any time with `x.requires_grad_(True)`.
You can turn off gradients for a block of code with the `torch.no_grad()` context:
```python
>>> x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
... y = x * 2
>>> y.requires_grad
False
```
Also, you can turn on or off gradients altogether with `torch.set_grad_enabled(True|False)`.
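For instance, a small sketch of toggling this global switch:

```python
import torch

x = torch.ones(3, requires_grad=True)

# Globally disable gradient tracking...
torch.set_grad_enabled(False)
y = x * 2

# ...and re-enable it.
torch.set_grad_enabled(True)
z = x * 2

# y was created with gradients off, z with gradients on.
```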
The gradients are computed with respect to some variable `z` with `z.backward()`. This does a backward pass through the operations that created `z`.
```
x = torch.randn(2,2, requires_grad=True)
print(x)
y = x**2
print(y)
```
Below we can see the operation that created `y`, a power operation `PowBackward0`.
```
## grad_fn shows the function that generated this variable
print(y.grad_fn)
```
The autograd module keeps track of these operations and knows how to calculate the gradient for each one. In this way, it's able to calculate the gradients for a chain of operations, with respect to any one tensor. Let's reduce the tensor `y` to a scalar value, the mean.
```
z = y.mean()
print(z)
```
You can check the gradients for `x` and `y` but they are empty currently.
```
print(x.grad)
```
To calculate the gradients, you need to run the `.backward` method on a Variable, `z` for example. This will calculate the gradient for `z` with respect to `x`
$$
\frac{\partial z}{\partial x} = \frac{\partial}{\partial x}\left[\frac{1}{n}\sum_i^n x_i^2\right] = \frac{x}{2}
$$
```
z.backward()
print(x.grad)
print(x/2)
```
These gradient calculations are particularly useful for neural networks. For training we need the gradients of the cost with respect to the weights. With PyTorch, we run data forward through the network to calculate the loss, then go backwards to calculate the gradients with respect to the loss. Once we have the gradients we can make a gradient descent step.
## Loss and Autograd together
When we create a network with PyTorch, all of the parameters are initialized with `requires_grad = True`. This means that when we calculate the loss and call `loss.backward()`, the gradients for the parameters are calculated. These gradients are used to update the weights with gradient descent. Below you can see an example of calculating the gradients using a backwards pass.
```
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
images, labels = next(iter(trainloader))
images = images.view(images.shape[0], -1)
logits = model(images)
loss = criterion(logits, labels)
print('Before backward pass: \n', model[0].weight.grad)
loss.backward()
print('After backward pass: \n', model[0].weight.grad)
```
## Training the network!
There's one last piece we need to start training, an optimizer that we'll use to update the weights with the gradients. We get these from PyTorch's [`optim` package](https://pytorch.org/docs/stable/optim.html). For example we can use stochastic gradient descent with `optim.SGD`. You can see how to define an optimizer below.
```
from torch import optim
# Optimizers require the parameters to optimize and a learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01)
```
Now we know how to use all the individual parts so it's time to see how they work together. Let's consider just one learning step before looping through all the data. The general process with PyTorch:
* Make a forward pass through the network
* Use the network output to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights
Below I'll go through one training step and print out the weights and gradients so you can see how it changes. Note that I have a line of code `optimizer.zero_grad()`. When you do multiple backwards passes with the same parameters, the gradients are accumulated. This means that you need to zero the gradients on each training pass or you'll retain gradients from previous training batches.
```
print('Initial weights - ', model[0].weight)
images, labels = next(iter(trainloader))
images.resize_(64, 784)
# Clear the gradients, do this because gradients are accumulated
optimizer.zero_grad()
# Forward pass, then backward pass, then update weights
output = model(images)
loss = criterion(output, labels)
loss.backward()
print('Gradient -', model[0].weight.grad)
# Take an update step and view the new weights
optimizer.step()
print('Updated weights - ', model[0].weight)
```
### Training for real
Now we'll put this algorithm into a loop so we can go through all the images. Some nomenclature: one pass through the entire dataset is called an *epoch*. So here we're going to loop through `trainloader` to get our training batches. For each batch, we'll do a training pass where we calculate the loss, do a backwards pass, and update the weights.
>**Exercise:** Implement the training pass for our network. If you implemented it correctly, you should see the training loss drop with each epoch.
```
## Your solution here
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)
epochs = 5
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# Flatten MNIST images into a 784 long vector
images = images.view(images.shape[0], -1)
# TODO: Training pass
loss =
running_loss += loss.item()
else:
print(f"Training loss: {running_loss/len(trainloader)}")
```
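One way to fill in the training pass (a sketch: `fake_loader`, a list of random batches, stands in for `trainloader` so the cell is self-contained — it is not part of the original notebook):

```python
import torch
from torch import nn, optim

model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)

# Stand-in for trainloader: a few random (images, labels) batches.
fake_loader = [(torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,)))
               for _ in range(5)]

for e in range(2):
    running_loss = 0
    for images, labels in fake_loader:
        images = images.view(images.shape[0], -1)  # flatten to 784
        optimizer.zero_grad()              # clear accumulated gradients
        output = model(images)             # forward pass
        loss = criterion(output, labels)   # compute the loss
        loss.backward()                    # backward pass
        optimizer.step()                   # update the weights
        running_loss += loss.item()
    print(f"Training loss: {running_loss/len(fake_loader)}")
```

Swapping `fake_loader` for the real `trainloader` gives the full training loop; with real data the loss should drop with each epoch.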
With the network trained, we can check out its predictions.
```
%matplotlib inline
import helper
images, labels = next(iter(trainloader))
img = images[0].view(1, 784)
# Turn off gradients to speed up this part
with torch.no_grad():
logps = model(img)
# Output of the network are log-probabilities, need to take exponential for probabilities
ps = torch.exp(logps)
helper.view_classify(img.view(1, 28, 28), ps)
```
Now our network is brilliant. It can accurately predict the digits in our images. Next up you'll write the code for training a neural network on a more complex dataset.
| github_jupyter |
<a href="https://colab.research.google.com/github/khbae/trading/blob/master/13_Gradient_descent.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#Gradient Descent : Linear Regression
### Basic Setting
```
# Basic module setup
!pip install pandas_datareader  # package for fetching stock data
import pandas as pd
import numpy as np
import pandas_datareader.data as web
import datetime
import matplotlib.pyplot as plt
import math
aapl = pd.read_csv("https://www.dropbox.com/s/viyzvakacjnqe6t/AAPL.csv?dl=1")
goog = pd.read_csv("https://www.dropbox.com/s/o7b1803y7tst8hf/GOOG.csv?dl=1")
aapl.head()
goog.head()
```
### Plotting
```
plt.scatter(np.log(aapl["Close"]), np.log(goog["Close"]))
plt.xlabel('Price of AAPL')
plt.ylabel('Price of GOOG')
plt.show()
```
The idea of linear regression is to find the relationship between a target, or dependent variable (y), and a set of explanatory variables (x1, x2, ...).
The inferred relationship between the variables can then be used to predict other values.
The plot above is a scatter plot of the (log) closing prices of AAPL and GOOG.
With a single explanatory variable, the relationship is a line defined by the beta parameters, of the form y = β0 + β1x1, where β0 is the intercept value.
This can be extended to multivariate regression by writing the equation in vector form: y = Xβ.
A regression line can be drawn in many different ways, as the plot below shows. How, then, do we find the optimal regression line?
```
plt.plot(np.log(aapl["Close"]), np.log(goog["Close"]), "k.")
plt.plot([1.73, 1.76], [1.58,1.64], '-')
plt.plot([1.72, 1.78], [1.58,1.62], '-')
plt.plot([1.73, 1.75], [1.59,1.66], '-')
plt.xlabel('Price of AAPL')
plt.ylabel('Price of GOOG')
```
### Cost Function
To build the line that best models the data, we must choose beta values that bring the predicted values as close as possible to the actual values.
This means minimizing the distances, or residuals, between the hypothesis h(x) and y.
In this lesson we define the cost function as ordinary least squares (OLS), which is simply the sum of squared errors.
To find the linear regression line, we adjust the beta values to minimize this cost, using the formula below.
$$J(\beta) = \frac{1}{2m}\sum_{i=1}^m(h_\beta(x^{(i)})-y^{(i)})^2$$
The hypothesis we want to fit follows the linear model below,
$$h_\beta(x) = \beta^{T}x = \beta_0 + \beta_1x_1$$
and we can find the optimal model by applying iterative updates via **gradient descent**.
$$\beta_j := \beta_j - \alpha\frac{1}{m}\sum_{i=1}^m (h_\beta(x^{(i)})-y^{(i)})x_{j}^{(i)}$$
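As a sketch, the cost function and update rule above can be applied to toy data (the values are illustrative and unrelated to the stock prices used later in this notebook):

```python
import numpy as np

# Toy data generated from y = 1 + 2*x, so the fitted betas should
# recover intercept 1 and slope 2.
x_toy = np.array([0.0, 1.0, 2.0, 3.0])
y_toy = 1.0 + 2.0 * x_toy
X_toy = np.column_stack([np.ones_like(x_toy), x_toy])  # columns: [intercept, x]
beta_toy = np.zeros(2)
alpha, m = 0.1, len(y_toy)

def cost_toy(beta):
    # J(beta) = (1/2m) * sum((X beta - y)^2)
    return np.sum((X_toy.dot(beta) - y_toy) ** 2) / (2 * m)

for _ in range(2000):
    # beta_j := beta_j - (alpha/m) * sum((h(x) - y) * x_j), vectorized
    beta_toy = beta_toy - alpha * X_toy.T.dot(X_toy.dot(beta_toy) - y_toy) / m

# beta_toy approaches [1, 2] and the cost approaches 0
```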
### Gradient Descent
**Gradient descent** is an algorithm that finds a local minimum by taking small steps along a function.
Its behavior can be illustrated with a simple quadratic equation.
```
x_quad = [n/10 for n in range(0, 100)]
y_quad = [(n-4)**2+5 for n in x_quad]
plt.figure(figsize = (10,7))
plt.plot(x_quad, y_quad, 'k--')
plt.axis([0,10,0,30])
plt.plot([1, 2, 3], [14, 9, 6], 'ro')
plt.plot([5, 7, 8],[6, 14, 21], 'bo')
plt.plot(4, 5, 'ko')
plt.xlabel('x')
plt.ylabel('f(x)')
plt.title('Quadratic Equation')
```
To find a local minimum of this function, we start at the first red point at x = 2 and compute the **gradient**.
Here the **gradient** is the slope; since the slope at x = 2 is negative,
taking the next point to the right brings us closer to the minimum.
Likewise, if we start at x = 8, the **gradient** (slope) is positive, so the next point is moved to the left;
by repeating this process we can drive the **gradient** as close to 0 as possible, until the parameter converges to the minimum we are looking for.
In doing so, we are in fact repeatedly updating the parameter beta, making the cost smaller and smaller.
$$\beta_j := \beta_j - \alpha\frac{\partial}{\partial \beta_j} J(\beta)$$
In this equation, α is the learning rate, and the update involves the partial derivative of the cost with respect to each beta value.
This approach is called **"batch" gradient descent**, because each **gradient** is computed once per step over the whole batch of points X.
### Sample in Python
To practice in Python, we first declare the parameters: alpha is the learning rate, and iterations is the number of times the update will be performed.
After that, the data frames holding the data must be converted to arrays for simple matrix operations.
Then we write a function that computes the cost function described above.
```
iterations = 1500
alpha = 0.01
X_df = pd.DataFrame(np.log(aapl["Close"]))
y_df = pd.DataFrame(np.log(goog["Close"]))
## Add a column of intercept values to the data frame.
X_df['intercept'] = 1
## Convert the DataFrames to arrays for matrix operations.
## Initialize beta at (0, 0).
X = np.array(X_df)
y = np.array(y_df).flatten()
beta = np.array([0, 0])
def cost_function(X, y, beta):
    m = len(y)  # number of observations
J = np.sum((X.dot(beta)-y)**2)/2/m
return(J)
cost_function(X, y, beta)
```
Now let us break the **gradient descent algorithm** into steps and check what happens at each stage.
1. Calculate hypothesis hβ(x)
2. Calculate loss (hβ(x)−y)
3. Calculate gradient (hβ(x)−y)xj
4. Update parameter beta
5. And find the cost by using cost_function()
```
def gradient_descent(X, y, beta, alpha, iterations):
cost_history = [0] * iterations
m = len(y)
for iteration in range(iterations):
hypothesis = X.dot(beta)
loss = hypothesis - y
gradient = X.T.dot(loss)/m
beta = beta - alpha*gradient
cost = cost_function(X, y, beta)
cost_history[iteration] = cost
return(beta, cost_history)
(t, c) = gradient_descent(X, y, [1, 1], 0.01, 1000)
print(t)
## Plotting the best fit line
best_fit_x = np.linspace(10.70, 10.80, 22)
best_fit_y = [t[1] + t[0]*xx for xx in best_fit_x]
plt.plot(np.log(aapl["Close"]), np.log(goog["Close"]), "k.")
plt.plot(best_fit_x, best_fit_y, '-')
plt.xlabel('Price of AAPL')
plt.ylabel('Price of GOOG')
```
| github_jupyter |
# MyPy IPython Example
This is a more comprehensive example,
showing how type checking can help with data analysis.
Let's suppose we have a web API for sentiment analysis.
When we post to `http://sentiment-analyzer.example.com/analyze`,
we get a result of
`{"sentiment": "positive"}`
or
`{"sentiment": "negative"}`.
In our example, we will imagine that this is a usage-based payment API:
some amount of money for 1000 requests.
```
# from requests import Session
# SESSION = Session()
from unittest import mock
SESSION = mock.MagicMock()
```
We also have a database with people's reviews that we want to analyze for sentiment.
We will abstract the database away and assume we already have the data we need in memory.
Since we pay money for API usage, we mostly debug on a sample of the data.
Once we see that it works, we run it on the full sample.
In real life, the sample data might be 1,000 elements,
and the full data 1,000,000.
In our little example, for pedagogical reasons, the sample has two items
and the full data has three -- only one more.
```
SAMPLE_DATA = [
{"name": "Jane Doe", "review": "I liked it", "product_id": 5},
{"name": "Huan Liu", "review": "it sucked", "product_id": 7},
]
FULL_DATA = SAMPLE_DATA + [
{"name": "Denzel Brown", "review": "ok I guess", "product_id": 2},
]
```
Here is the wrapper code to call the sentiment analyzer:
```
def is_positive(text):
results = SESSION.post("http://sentiment-analyzer.example.com/analyze", json=dict(text=text))
return results.json()["sentiment"] == "positive"
```
Unfortunately, even on our small sample,
this was sometimes hanging for a long time.
But, easy enough to fix:
we added a little retry loop that tries three times,
and added a 3 second timeout.
```
def sentiment(text):
for i in range(3):
try:
results = SESSION.post("http://sentiment-analyzer.example.com/analyze",
json=dict(text=text), timeout=3)
except OSError:
continue
else:
return 1 if results.json()["sentiment"] == "positive" else -1
SESSION.post.side_effect = [
mock.MagicMock(**{"json.return_value": dict(sentiment=sentiment)})
for sentiment in ["positive", "negative"]
]
average_sentiment = sum(sentiment(datum["review"]) for datum in SAMPLE_DATA)
print(average_sentiment)
```
Looks good! It even handles errors:
```
import random
side_effect = [
mock.MagicMock(**{"json.return_value": dict(sentiment=sentiment)})
for sentiment in ["positive", "negative"]
] + [OSError("woops too long")] * 2
random.shuffle(side_effect)
SESSION.post.side_effect = side_effect
average_sentiment = sum(sentiment(datum["review"]) for datum in SAMPLE_DATA)
print(average_sentiment)
```
Looks good! Let's wrap it in a function:
```
def get_average_sentiment(data):
return sum(sentiment(datum["review"]) for datum in data)
SESSION.post.side_effect = [
mock.MagicMock(**{"json.return_value": dict(sentiment=sentiment)})
for sentiment in ["positive", "negative"]
]
get_average_sentiment(SAMPLE_DATA)
```
But, on the full sample, sometimes requests fail three times.
What happens then?
```
SESSION.post.side_effect = [
mock.MagicMock(**{"json.return_value": dict(sentiment=sentiment)})
for sentiment in ["positive", "negative"]
] + [OSError("woops too long")] * 4
get_average_sentiment(FULL_DATA)
```
Woops! Too bad. The fix is simple: it is rare for requests to fail three times,
so we can just return `0`: it is not going to change the average too much.
```
def sentiment(text):
for i in range(3):
try:
results = SESSION.post("http://sentiment-analyzer.example.com/analyze",
json=dict(text=text), timeout=3)
except OSError:
continue
else:
return 1 if results.json()["sentiment"] == "positive" else -1
return 0
```
We are done.
Too bad that to grab the new sentiments,
we have to use the API again...for all elements.
Oh, well.
Too bad about the usage-based cost.
```
SESSION.post.side_effect = [
mock.MagicMock(**{"json.return_value": dict(sentiment=sentiment)})
for sentiment in ["positive", "negative"]
] + [OSError("woops too long")] * 4
get_average_sentiment(FULL_DATA)
```
What if this could all have been avoided?
```
%load_ext mypy_ipython
def sentiment(text: str) -> int:
for i in range(3):
try:
results = SESSION.post("http://sentiment-analyzer.example.com/analyze",
json=dict(text=text), timeout=3)
except OSError:
continue
else:
return 1 if results.json()["sentiment"] == "positive" else -1
%mypy
```
| github_jupyter |
# Hybrid Gradient Boosting Trees Example via Unified Classification
A data set that identifies whether or not a patient has diabetes is used to demonstrate the use of the hybrid gradient boosting classifier in SAP HANA.
# Pima Indians Diabetes Dataset
Original data comes from the National Institute of Diabetes and Digestive and Kidney Diseases. The dataset aims to diagnostically predict, based on certain diagnostic measurements, whether or not a patient has diabetes. In particular, patients contained in the dataset are females of Pima Indian heritage, all above the age of 20. The dataset is from Kaggle, for tutorial use only.
The dataset contains the following diagnostic <b>attributes</b>:<br>
$\rhd$ "PREGNANCIES" - Number of times pregnant,<br>
$\rhd$ "GLUCOSE" - Plasma glucose concentration at 2 hours in an oral glucose tolerance test,<br>
$\rhd$ "BLOODPRESSURE" - Diastolic blood pressure (mm Hg),<br>
$\rhd$ "SKINTHICKNESS" - Triceps skin fold thickness (mm),<br>
$\rhd$ "INSULIN" - 2-Hour serum insulin (mu U/ml),<br>
$\rhd$ "BMI" - Body mass index $(\text{weight in kg})/(\text{height in m})^2$,<br>
$\rhd$ "PEDIGREE" - Diabetes pedigree function,<br>
$\rhd$ "AGE" - Age (years),<br>
$\rhd$ "CLASS" - Class variable (0 or 1); 268 of 768 are 1 (diabetes), the others are 0 (non-diabetes).
```
import hana_ml
from hana_ml import dataframe
from hana_ml.algorithms.pal import metrics
from hana_ml.algorithms.pal.unified_classification import UnifiedClassification
```
# Load Data
The data is loaded into 3 tables - full set, training-validation set, and test set as follows:
<li> PIMA_INDIANS_DIABETES_TBL</li>
<li> PIMA_INDIANS_DIABETES_TRAIN_VALID_TBL</li>
<li> PIMA_INDIANS_DIABETES_TEST_TBL</li>
To do that, a connection is created and passed to the loader.
There is a config file, <b>config/e2edata.ini</b>, that controls the connection parameters and whether or not to reload the data from scratch. If the data is already loaded, there is no need to reload it. A sample section is below. If the config parameter reload_data is true, then the tables for test, training and validation are (re-)created and data is inserted into them.
#########################<br>
[hana]<br>
url=host.sjc.sap.corp<br>
user=username<br>
passwd=userpassword<br>
port=3xx15<br>
#########################<br>
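A sketch of how such a config file might be read with the standard library; `load_hana_config` is a hypothetical helper for illustration, and the actual `Settings.load_config` used below may differ:

```python
import configparser

# Read the [hana] section shown above and return the connection parameters.
def load_hana_config(path):
    parser = configparser.ConfigParser()
    parser.read(path)
    section = parser["hana"]
    return (section["url"], int(section["port"]),
            section["user"], section["passwd"])
```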
```
from data_load_utils import DataSets, Settings
import plotting_utils
url, port, user, pwd = Settings.load_config("../../config/e2edata.ini")
connection_context = dataframe.ConnectionContext(url, port, user, pwd)
full_tbl, train_tbl, test_tbl, _ = DataSets.load_diabetes_data(connection_context)
# Note: this Settings class shadows the Settings imported from data_load_utils
# above, which is no longer needed at this point.
from hana_ml.algorithms.pal.utility import Settings
Settings.set_log_level()
```
# Create Data Frames
Create the data frames for the training and test sets.
Let us also do some data exploration.
# Define Datasets - training set and testing set
Data frames are used to keep references to data, so that computation on large data sets can happen inside HANA. Trying to bring the entire data set into the client will likely result in out-of-memory exceptions.
```
diabetes_train = connection_context.table(train_tbl)
#diabetes_train = diabetes_train.cast('CLASS', 'VARCHAR(10)')
diabetes_test = connection_context.table(test_tbl)
#diabetes_test = diabetes_test.cast('CLASS', 'VARCHAR(10)')
```
# Simple Exploration
Let us look at the number of rows in each dataset:
```
print('Number of rows in training set: {}'.format(diabetes_train.count()))
print('Number of rows in testing set: {}'.format(diabetes_test.count()))
```
Let us look at the columns of the dataset:
```
print(diabetes_train.columns)
```
Let us also look at some rows (in this example, the top 6) of the dataset:
```
diabetes_train.head(6).collect()
```
We can also check the data type of all columns:
```
diabetes_train.dtypes()
```
We have a 'CLASS' column in the dataset; let us check how many classes it contains:
```
diabetes_train.distinct('CLASS').collect()
```
Two classes are present, confirming that this is a binary classification problem.
# Model Creation & Model Selection
The lines below show how easily classification can be done.
Set up the label column, use the default feature set, and create the model:
```
cv_values = {}
cv_values['learning_rate'] = [0.1, 0.4, 0.7, 1.0]
cv_values['n_estimators'] = [4, 6, 8, 10]
cv_values['split_threshold'] = [0.1, 0.4, 0.7, 1.0]
hgc = UnifiedClassification(func='HybridGradientBoostingTree',
param_search_strategy='grid',
resampling_method='cv',
evaluation_metric='error_rate',
ref_metric=['auc'],
fold_num=5,
random_state=1,
param_values=cv_values)
hgc.fit(diabetes_train, key='ID', label='CLASS',
partition_method='stratified',
partition_random_state=1,
stratified_column='CLASS')
from hana_ml.model_storage import ModelStorage
#from hana_ml.model_storage_services import ModelSavingServices
# Create an object called model_storage, which must use the same connection as the model
model_storage = ModelStorage(connection_context=connection_context)
model_storage.clean_up()
# Saves the model for the first time
hgc.name = 'Model A' # The model name is mandatory
hgc.version = 1
model_storage.save_model(model=hgc)
# Lists models
model_storage.list_models()
# plan the schedule
model_storage.set_schedule(name='Model A',
version=1,
schedule_time='every 3 minutes',
connection_userkey='raymondyao',
init_params={"func" : 'HybridGradientBoostingTree',
"param_search_strategy" : 'grid',
"resampling_method" : 'cv',
"evaluation_metric" : 'error_rate',
"ref_metric" : ['auc'],
"fold_num" : 5,
"random_state" : 1,
"param_values" : {'learning_rate': [0.1, 0.4, 0.7, 1],
'n_estimators': [4, 6, 8, 10],
'split_threshold': [0.1, 0.4, 0.7, 1]}},
fit_params={"key" : 'ID',
"label" : 'CLASS',
"partition_method" : 'stratified',
"partition_random_state" : 1,
"stratified_column" : 'CLASS'},
training_dataset_select_statement="SELECT * FROM DIABETES_TRAIN"
)
model_storage.start_schedule('Model A', 1)
model_storage.list_models().iat[0, 7]
model_storage.terminate_schedule('Model A', 1)
model_storage.list_models().iat[0, 7]
from hana_ml.algorithms.pal.model_selection import GridSearchCV
from hana_ml.algorithms.pal.model_selection import RandomSearchCV
hgc2 = UnifiedClassification('HybridGradientBoostingTree')
gscv = GridSearchCV(estimator=hgc2,
param_grid={'learning_rate': [0.1, 0.4, 0.7, 1],
'n_estimators': [4, 6, 8, 10],
'split_threshold': [0.1, 0.4, 0.7, 1]},
train_control=dict(fold_num=5,
resampling_method='cv',
random_state=1,
ref_metric=['auc']),
scoring='error_rate')
gscv.fit(data=diabetes_train, key= 'ID',
label='CLASS',
partition_method='stratified',
partition_random_state=1,
stratified_column='CLASS',
build_report=True)
```
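As a rough sense of the cost of the grid search above: three hyperparameters with four candidate values each, evaluated with 5-fold cross-validation, means 4 × 4 × 4 = 64 parameter combinations and 320 model fits in total. A quick check in plain Python (no HANA connection needed):

```python
from itertools import product

cv_values = {'learning_rate': [0.1, 0.4, 0.7, 1.0],
             'n_estimators': [4, 6, 8, 10],
             'split_threshold': [0.1, 0.4, 0.7, 1.0]}

# Every hyperparameter combination the grid search will evaluate.
grid = list(product(*cv_values.values()))
fold_num = 5
print(len(grid))             # 64 combinations
print(len(grid) * fold_num)  # 320 model fits across the folds
```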
# Evaluation
Let us compare cross-validation accuracy and test accuracy:
```
cm = hgc.confusion_matrix_.collect()
cm
gscv.estimator.confusion_matrix_.collect()
train_accuracy = float(cm['COUNT'][cm['ACTUAL_CLASS']==cm['PREDICTED_CLASS']].sum())/cm['COUNT'].sum()
train_accuracy
features = diabetes_train.columns
features.remove('CLASS')
features.remove('ID')
print(features)
pred_res = hgc.predict(diabetes_test, key='ID', features=features)
pred_res.head(10).collect()
pred_res.dtypes()
ts = diabetes_test.rename_columns({'ID': 'TID'}) .cast('CLASS', 'NVARCHAR(256)')
jsql = '{}."{}"={}."{}"'.format(pred_res.quoted_name, 'ID', ts.quoted_name, 'TID')
results_df = pred_res.join(ts, jsql, how='inner')
cm_df, classification_report_df = metrics.confusion_matrix(results_df, key='ID', label_true='CLASS', label_pred='SCORE')
import matplotlib.pyplot as plt
from hana_ml.visualizers.metrics import MetricsVisualizer
f, ax1 = plt.subplots(1,1)
mv1 = MetricsVisualizer(ax1)
ax1 = mv1.plot_confusion_matrix(cm_df, normalize=False)
print("Recall, Precision and F_measures.")
classification_report_df.collect()
_,_,_,metrics_res = hgc.score(data=diabetes_test, key='ID', label='CLASS')
metrics_res.collect()
from hana_ml.model_storage import ModelStorage
model_storage = ModelStorage(connection_context=connection_context)
gscv.estimator.name = 'HGBT'
gscv.estimator.version = 1
model_storage.save_model(model=gscv.estimator)
# Lists models
model_storage.list_models()
from hana_ml.visualizers.unified_report import UnifiedReport
mymodel = model_storage.load_model('HGBT', 1)
mymodel.fit(data=diabetes_train, key= 'ID',
label='CLASS',
partition_method='stratified',
partition_random_state=1,
stratified_column='CLASS',
build_report=True)
UnifiedReport(mymodel).display()
model_storage.delete_model('HGBT', 1)
connection_context.close()
```
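The training-accuracy computation used above (correctly classified counts over total counts in the long-format confusion matrix) can be sanity-checked on a small hand-made example; the column names mirror those of the collected confusion matrix:

```python
import pandas as pd

# Toy confusion matrix in the long format used above (assumed column names).
cm = pd.DataFrame({
    'ACTUAL_CLASS':    ['0', '0', '1', '1'],
    'PREDICTED_CLASS': ['0', '1', '0', '1'],
    'COUNT':           [50,   10,   5,   35],
})

# Accuracy = correctly classified counts / total counts.
correct = cm['COUNT'][cm['ACTUAL_CLASS'] == cm['PREDICTED_CLASS']].sum()
accuracy = float(correct) / cm['COUNT'].sum()
print(accuracy)  # (50 + 35) / 100 = 0.85
```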
Random Forests - Analysis.
===
***
## Introduction
Our goal for this phase is to use the reduced variable data set from our exploration phase to create a model predicting human activity, using Random Forests.
To remind ourselves, the variables we will use are:
* tAccMean, tAccSD, tJerkMean, tJerkSD
* tGyroMean, tGyroSD, tGyroJerkMean, tGyroJerkSD
* fAccMean, fAccSD, fJerkMean, fJerkSD
* fGyroMean, fGyroSD, fGyroJerkMean, fGyroJerkSD
* fGyroMeanFreq, fGyroJerkMeanFreq, fAccMeanFreq, fJerkMeanFreq
* fAccSkewness, fAccKurtosis, fJerkSkewness, fJerkKurtosis
* fGyroSkewness, fGyroKurtosis, fGyroJerkSkewness, fGyroJerkKurtosis
* angleAccGravity, angleJerkGravity, angleGyroGravity, angleGyroJerkGravity
* angleXGravity, angleYGravity, angleZGravity
* subject, activity
Of these,
* all except the last two are numeric.
* 'subject' is an integer identifying a person: one of 21 values between 1 and 27, with some missing.
* 'activity' is a categorical variable - one of the six activities identified earlier:
* 'sitting', 'standing', 'lying', 'walking', 'walking up', 'walking down'.
Why do we use Random Forests? We use Random Forests [4] in our model because of the relatively high accuracy of the method and the complexity of our data.
These are two major reasons to bring out the heavy artillery of Random Forests, especially when we have so many attributes even in a simplified attribute set.
## Methods
### Expository Segue on Experiment design
Typically in analysing such data sets we are creating a model that uses the data we are given. How do we know the model will work for other data? The real answer is "We don't". And there's no way we can be sure that we can create a model that will work for new data.
But what we **can** do is reduce the chances that we are creating an "overfitted" model. That is a technical term for a model that works wonderfully on the given data (fitted to it) and fails on the next data set that comes along. There's a way to design our modeling experiment so we avoid that trap. Here's how.
We take the data set and we keep some of the given data aside and we don't use it for modeling at all. This "held out" set is called the test set.
Then we take our remaining data and we further divide it so that we have a larger set called the training set and a smaller set we call the validation set.
Then we create our model using the training set and look at how well it performs on the validation set (i.e. not counting the "held out" data).
We are allowed to tweak our modeling as much as we want using the training and validations sets but we are **not** allowed to look at the held out, test set until we are ready to declare we are done modeling. Then we apply the model to our held out test data -- when that test data also shows an acceptable error rate we have a good model.
However if we get a bad error rate from the test data we have a problem. We cannot keep tweaking the model to get a better test result, because then we are simply overfitting again. So what do we do? We are allowed to mix up all the data, hold out a new test set which has to differ at least in part from the old one, and then repeat the exercise. In some cases, when we are given a data set by a third party, we are not shown the held out set, and we have to submit our model without testing against it.
The third party then applies our model to the held out test set and we don't get to mix it all up. We only get one shot. We're going to do that here and see how well we do.
### Our experiment design
We hold out the last 4 subjects in the data as a test set and the rest are used for our modeling. Why do we do this? The data set, if we look at the supporting docs, suggests that we use the last 4 as test subjects. So in our first pass at this we might as well just follow the instructions. All rows relating to these 4 will be held out and not used during modeling at all.
Of the 17 remaining subjects we use the first 12 subjects as the training set and the remaining 5 as the validation set. Why this proportion? Typically 30% of the training data is used as the validation set and 70% for actual training. The validation set is used as our "internal" test set: it is not used in modeling and is held out for each validation step. The difference between the actual test set and the validation set is that we are allowed to keep tuning our model as long as we keep mixing up the data after each attempt and re-extracting a validation set.
There is also a methodology that takes this step even further and does n-fold validation. The training set is divided into n (usually 10) equal parts and then each part is used as a validation set while the rest used for training, with n such modeling exercises being conducted.
Then some averaging is done to create the best model.
We do not do n-fold validation here.
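For reference, the n-fold idea we are skipping can be sketched with scikit-learn's KFold on toy data; the defining property is that each sample lands in the validation part exactly once across the n splits:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # 10 toy samples, 2 features each
kf = KFold(n_splits=5, shuffle=False)

seen_as_validation = []
for train_idx, val_idx in kf.split(X):
    # each split: 8 samples train, 2 validate
    seen_as_validation.extend(val_idx)

# Every sample appears in a validation fold exactly once.
print(sorted(seen_as_validation))  # [0, 1, 2, ..., 9]
```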
We divided our data based on the 'subject' variable as we included ‘subject’ in our model and want to keep all test data separate. What does this mean? The test data should actually be data about which we have no information at all - i.e. it needs to be independent of the training data. So suppose we did not separate out the data on the 4 test individuals but we just decided that we would mix up all the rows and take say 20% as test data, chosen randomly.
Note that we have some 7,000 plus rows so we have a few hundred rows on each individual. So if we mixed it all up and chose randomly, then we would most probably get data from all 21 individuals in our test set. And all 21 in our training set. The test set would not be independent of the training set as both would have somewhat similar mixtures of data.
Thus the held out set would not really provide a useful reality check - we have statistically similar info in our training set already i.e. the test set has leaked into the training set.
This would be similar to the situation where we had a homework exercise which was later solved in class the next day. Then we received a quiz question set which had questions very similar to the homework with just some numbers changed. It would not really test our understanding of the subject matter, only our ability to understand the homework (= overfitting).
So when we keep aside our test set separated by all rows for certain individuals we know that the training set has no leaked information about these individuals. It is important to be very diligent about the test data, in this fashion, so that we can have some confidence that our model is not overfitting our sample data.
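A subject-wise hold-out of the kind described above can be sketched with scikit-learn's group-aware splitters (toy data here; the actual notebook splits by fixed subject ranges instead). The key property is that no subject's rows appear on both sides of the split:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_rows = 200
subjects = rng.integers(1, 22, size=n_rows)  # toy subject ids 1..21
X = rng.normal(size=(n_rows, 4))

# Hold out roughly 20% of *subjects* (not rows).
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(X, groups=subjects))

train_subjects = set(subjects[train_idx])
test_subjects = set(subjects[test_idx])
# No subject leaks from the test set into the training set.
print(train_subjects & test_subjects)  # set()
```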
## Results
### Training
We now run our RandomForest modeling software on our training set, described earlier, and derive a model along with some parameters describing how good our model is.
```
# We pull in the training, validation and test sets created according to the scheme described
# in the data exploration lesson.
import pandas as pd
samtrain = pd.read_csv('../datasets/samsung/samtrain.csv')
samval = pd.read_csv('../datasets/samsung/samval.csv')
samtest = pd.read_csv('../datasets/samsung/samtest.csv')
# We use the Python RandomForest package from the scikits.learn collection of algorithms.
# The package is called sklearn.ensemble.RandomForestClassifier
# For this we need to convert the target column ('activity') to integer values
# because the Python RandomForest package requires that.
# In R it would have been a "factor" type and R would have used that for classification.
# We map activity to an integer according to
# laying = 1, sitting = 2, standing = 3, walk = 4, walkup = 5, walkdown = 6
# Code is in the supporting library randomforests.py
import randomforests as rf
samtrain = rf.remap_col(samtrain,'activity')
samval = rf.remap_col(samval,'activity')
samtest = rf.remap_col(samtest,'activity')
import sklearn.ensemble as sk
# Note: the old compute_importances flag was removed from scikit-learn;
# feature_importances_ is always available after fitting.
rfc = sk.RandomForestClassifier(n_estimators=500, oob_score=True)
train_data = samtrain[samtrain.columns[1:-2]]
train_truth = samtrain['activity']
model = rfc.fit(train_data, train_truth)
# use the OOB (out-of-bag) score, which is an estimate of the accuracy of our model.
rfc.oob_score_
### TRY THIS
# use "feature importance" scores to see what the top 10 important features are
fi = list(enumerate(rfc.feature_importances_))  # list() so it can be re-used below
cols = samtrain.columns
[(value,cols[i]) for (i,value) in fi if value > 0.04]
## Change the value 0.04 which we picked empirically to give us 10 variables
## try running this code after changing the value up and down so you get more or less variables
## do you see how this might be useful in refining the model?
## Here is the code in case you mess up the line above
## [(value,cols[i]) for (i,value) in fi if value > 0.04]
```
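Instead of hand-tuning the 0.04 cutoff, the top k features can be taken directly by sorting the importance scores. A sketch on made-up scores (the values and names stand in for rfc.feature_importances_ and samtrain.columns):

```python
import numpy as np

# Hypothetical importance scores and column names standing in for
# rfc.feature_importances_ and samtrain.columns.
importances = np.array([0.01, 0.12, 0.05, 0.30, 0.02, 0.25, 0.08, 0.17])
cols = ['f%d' % i for i in range(len(importances))]

k = 3
# Sort (importance, name) pairs descending and keep the top k.
top = sorted(zip(importances, cols), reverse=True)[:k]
print(top)
```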
We call the predict() function of our model on the validation set and the test set, and analyse the prediction errors below.
```
# pandas adds a spurious unnamed column in position 0, hence we start at col 1;
# the subject column and the activity target sit at the end, hence -2 drops the last 2 cols
val_data = samval[samval.columns[1:-2]]
val_truth = samval['activity']
val_pred = rfc.predict(val_data)
test_data = samtest[samtest.columns[1:-2]]
test_truth = samtest['activity']
test_pred = rfc.predict(test_data)
```
#### Prediction Errors and Computed Error Measures
```
print("mean accuracy score for validation set = %f" %(rfc.score(val_data, val_truth)))
print("mean accuracy score for test set = %f" %(rfc.score(test_data, test_truth)))
# use the confusion matrix to see how observations were misclassified as other activities
# See [5]
import sklearn.metrics as skm
test_cm = skm.confusion_matrix(test_truth,test_pred)
# visualize the confusion matrix
import pylab as pl
pl.matshow(test_cm)
pl.title('Confusion matrix for test data')
pl.colorbar()
pl.show()
# compute a number of other common measures of prediction goodness
```
We now compute some commonly used measures of prediction "goodness".
For more detail on these measures see
[6],[7],[8],[9]
```
# Accuracy
print("Accuracy = %f" %(skm.accuracy_score(test_truth,test_pred)))
# Precision (multiclass targets need an explicit average in modern scikit-learn)
print("Precision = %f" %(skm.precision_score(test_truth,test_pred,average='weighted')))
# Recall
print("Recall = %f" %(skm.recall_score(test_truth,test_pred,average='weighted')))
# F1 Score
print("F1 score = %f" %(skm.f1_score(test_truth,test_pred,average='weighted')))
```
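All of these measures can also be derived by hand from a confusion matrix; a sketch for a binary 2×2 case with made-up counts (rows are truth, columns are predictions):

```python
import numpy as np

# Hypothetical 2x2 confusion matrix: rows = truth, cols = prediction.
cm = np.array([[50, 10],
               [ 5, 35]])

tn, fp, fn, tp = cm.ravel()
accuracy  = (tp + tn) / cm.sum()
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)
```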
## Conclusions
We can make the following concrete conclusions looking at the above results.
Random Forests give us satisfactory error rates and predictive power in this scenario.
Using domain knowledge it is possible to get surprisingly high values of predictive measures, and low error rates on validation and test sets.
This is supported by the results, i.e. ~90% on predictive measures, OOB error estimates ~2%.
We only did this once and did not go back and forth tweaking the models. Note that we stuck to the rules here and did not see the test set until we were done modeling.
Focusing on magnitude and angle information for acceleration and jerk in the time and frequency domains gives us a model with surprising predictive power. It's possible that a brute force model will give better predictive power but it would simply show us how to blindly apply software. If for some reason the model misbehaved or failed, we would not have any insight at all as to why. Instead we used domain knowledge to focus on insight and in the process created a model with interpretive value.
Model performance on the test set is better than on the validation set as seen in the two “Total” rows above and each individual activity.
Let's see how we might be able to improve the model in future. It's always good to note the possible ways in which our model(s) might be deficient or incomplete or unfinished so we don't get overconfident about our models and overpromise what they can do.
### Critique
* Our model eliminated a number of Magnitude-related attributes such as -mad, -max, -min, as well as a number of Gyro-related variables, during the variable selection process using domain knowledge. These may be important, but this was not tested. We may want to look at that the next time we do this.
* Variable importance should be investigated in detail - i.e. we really ought to look at how we can use the smaller number of attributes identified as important, to create the model and see what the difference is. Computationally this would be more efficient. We could even use simpler methods like Logistic Regression to do the classification from that point on, using only the reduced set of variables.
### Exercise
Instead of using domain knowledge to reduce variables, use Random Forests directly on the full set of columns. Then use variable importance and sort the variables.
Compare the model you get with the model you got from using domain knowledge.
You can short-circuit the data cleanup process as well by simply renaming the variables x1, x2, ..., xn, y, where y is 'activity', the dependent variable.
Now look at the new Random Forest model you get. It is likely to be more accurate at prediction than the one we have above, but it is a black box model, with no meaning attached to the variables.
* What insights does it give you?
* Which model do you prefer?
* Why?
* Is this an absolute preference or might it change?
* What might cause it to change?
## References
[1] Original dataset as R data https://spark-public.s3.amazonaws.com/dataanalysis/samsungData.rda
[2] Human Activity Recognition Using Smartphones http://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones
[3] Android Developer Reference http://developer.android.com/reference/android/hardware/Sensor.html
[4] Random Forests http://en.wikipedia.org/wiki/Random_forest
[5] Confusion matrix http://en.wikipedia.org/wiki/Confusion_matrix
[6] Mean Accuracy http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1054102&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1054102
[7] Precision http://en.wikipedia.org/wiki/Precision_and_recall
[8] Recall http://en.wikipedia.org/wiki/Precision_and_recall
[9] F Measure http://en.wikipedia.org/wiki/Precision_and_recall
```
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
```
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense, Dropout
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
base_dir = './plastrial/data/'
traindir = os.path.join(base_dir, "train")
validationdir = os.path.join(base_dir, "validation")
testdir = os.path.join(base_dir, "test")
patch_size=50
traindatagen = ImageDataGenerator(rescale=1/255.)
validationdatagen = ImageDataGenerator(rescale=1/255.)
testdatagen = ImageDataGenerator(rescale=1/255.)
traingen = traindatagen.flow_from_directory(
directory=traindir,
target_size=(patch_size,patch_size),
batch_size = 1,
class_mode='binary')
validationgen = validationdatagen.flow_from_directory(
directory=validationdir,
target_size=(patch_size,patch_size),
batch_size = 1,
class_mode='binary')
# note: this reassignment shadows the ImageDataGenerator; from here on,
# testdatagen refers to the DirectoryIterator used for evaluation below
testdatagen = testdatagen.flow_from_directory(
directory=testdir,
target_size=(patch_size,patch_size),
batch_size = 1,
class_mode='binary')
# A simple VGG-like model
model = models.Sequential()
model.add(Conv2D(32, (5, 5), input_shape=(patch_size, patch_size, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (5, 5), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (5, 5), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
from tensorflow.keras.optimizers import RMSprop
model.compile(optimizer=RMSprop(learning_rate=0.001), loss='binary_crossentropy', metrics=['accuracy'])
# fit_generator/evaluate_generator are deprecated in recent TensorFlow;
# model.fit/model.evaluate accept generators directly.
history = model.fit(
        traingen,
        epochs=20,
        validation_data = validationgen)
model.evaluate(testdatagen)
```
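As a sanity check on the architecture above: the first convolution uses 'valid' padding (no padding argument), shrinking the 50×50 patch to 46×46, and each of the four max-pooling layers halves the spatial size with floor division, leaving 2×2×128 = 512 values at the Flatten layer. A quick computation in plain Python:

```python
def conv_out(n, k, padding):
    """Output size of a stride-1 Conv2D along one spatial dimension."""
    return n if padding == 'same' else n - k + 1

def pool_out(n):
    """Output size of a 2x2 MaxPooling2D along one spatial dimension."""
    return n // 2

n = 50
n = pool_out(conv_out(n, 5, 'valid'))  # 46 -> 23
n = pool_out(conv_out(n, 5, 'same'))   # 23 -> 11
n = pool_out(conv_out(n, 5, 'same'))   # 11 -> 5
n = pool_out(conv_out(n, 3, 'same'))   # 5  -> 2
flat = n * n * 128
print(n, flat)  # 2 512
```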
# EVM Blockexplorer API Tutorial
This tutorial aims to be a quick guide to get you started with the Etherscan-style block explorer APIs integrated into Messari's Python library. The example below uses the Fantom explorer, FTMScan, which shares the same interface.
```
import time #need to sleep to stop rate limiting
#from messari.blockexplorers.etherscan import Etherscan
from messari.blockexplorers import FTMscan
API_KEY='DWC3QGAEHNFQQM55Z1AYTXUTZ1GPBK51JQ'
es = FTMscan(api_key=API_KEY)
```
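The time.sleep(1) calls used throughout this notebook guard against the explorer's rate limit. A small reusable throttle along the same lines (plain Python; a hypothetical helper, not part of the messari library):

```python
import functools
import time

def throttle(min_interval):
    """Decorator ensuring at least min_interval seconds between calls."""
    def decorator(fn):
        last_call = [0.0]  # mutable cell holding the time of the last call
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            elapsed = time.monotonic() - last_call[0]
            if elapsed < min_interval:
                time.sleep(min_interval - elapsed)
            last_call[0] = time.monotonic()
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@throttle(0.1)
def fetch(x):
    # stand-in for an API call such as es.get_gas_oracle()
    return x * 2

start = time.monotonic()
results = [fetch(i) for i in range(3)]
elapsed = time.monotonic() - start
print(results)
```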
## API Structure
The block explorer Python client contains a number of functions that wrap some of the explorer's API endpoints. These include:
Accounts
* get_account_ether_balance
* get_account_normal_transactions
* get_account_internal_transactions
* get_transaction_internal_transactions
* get_block_range_internal_transactions
* get_account_token_transfers
* get_account_nft_transfers
* get_account_blocks_mined
Contracts
* get_contract_abi
* get_contract_source_code
Transactions
* get_contract_execution_status
* get_transaction_execution_status
Blocks
* get_block_reward
* get_block_countdown
* get_block_by_timestamp
Logs
* get_logs
Geth/Parity Proxy
* get_eth_block_number
* get_eth_block
* get_eth_uncle
* get_eth_block_transaction_count
* get_eth_transaction_by_hash
* get_eth_transaction_by_block_index
* get_eth_account_transaction_count
* get_eth_transaction_reciept
* get_eth_gas_price
Tokens
* get_token_total_supply
* get_token_account_balance
Gas Tracker
* get_est_confirmation
* get_gas_oracle
Stats
* get_total_eth_supply
* get_total_eth2_supply
* get_last_eth_price
* get_nodes_size
* get_total_nodes_count
Below are a few examples to showcase the functionality and types of data each function generates.
## Accounts
Functions to return info about given account(s)
```
accounts = ['0xBa19BdFF99065d9ABF3dF8CE942390B97fd71B12', '0x503e4bfe8299D486701BC7bc7F2Ea94f50035daC']
txns = ['0x29f2df8ce6a0e2a93bddacdfcceb9fd847630dcd1d25ad1ec3402cc449fa1eb6', '0x0bd7f9af4f8ddb18a321ab0120a2389046b39feb67561d17378e0d4dc062decc', '0x1815a03dd8a1ce7da5a7a4304fa5fae1a8f4f3c20787e341eea230614e49ff61']
tokens = ['0x1f9840a85d5af5bf1d1762f925bdaddc4201f984', '0xc18360217d8f7ab5e7c516566761ea12ce7f9d72']
```
### get_account_native_balance
Returns the native token (wei) balance of a given address
```
account_balances = es.get_account_native_balance(accounts)
time.sleep(1)
account_balances
```
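Balances come back denominated in wei (10^18 wei = 1 unit of the native token, ETH and FTM alike). Converting for display is a one-liner; Decimal avoids float rounding on large balances:

```python
from decimal import Decimal

WEI_PER_ETHER = 10**18

def wei_to_ether(wei):
    """Convert an integer wei amount to a Decimal ether amount."""
    return Decimal(wei) / WEI_PER_ETHER

balance_wei = 1234567890000000000  # hypothetical balance
print(wei_to_ether(balance_wei))   # 1.23456789
```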
### get_account_normal_transactions
Returns the list of transactions performed by an address, with optional pagination
```
account_normal = es.get_account_normal_transactions(accounts)
time.sleep(1)
account_normal.head()
```
### get_account_internal_transactions
Returns the list of internal transactions performed by an address, with optional pagination
```
account_internal = es.get_account_internal_transactions(accounts)
time.sleep(1)
account_internal.head()
```
### get_transaction_internal_transactions
Returns the list of internal transactions performed within a transaction
```
int_txn = '0x40eb908387324f2b575b4879cd9d7188f69c8fc9d87c901b9e2daaea4b442170'
transaction_internals = es.get_transaction_internal_transactions(int_txn)
time.sleep(1)
transaction_internals
```
### get_block_range_internal_transactions
Returns the list of internal transactions performed within a block range, with optional pagination
```
block_range_internals = es.get_block_range_internal_transactions(10000000,10001000)
time.sleep(1)
block_range_internals.head()
```
### get_account_token_transfers
Returns the list of ERC-20 tokens transferred by an address, with optional filtering by token contract
```
account_token_transfers = es.get_account_token_transfers(accounts)
time.sleep(1)
account_token_transfers.head()
```
### get_account_nft_transfers
Returns the list of ERC-721 ( NFT ) tokens transferred by an address, with optional filtering by token contract
```
account_nft_transfers = es.get_account_nft_transfers(accounts)
time.sleep(1)
account_nft_transfers.head()
```
### get_account_blocks_mined
Returns the list of blocks mined by an address
```
# Ethermine pubkey, F2Pool Old pubkey
miners = ['0xEA674fdDe714fd979de3EdF0F56AA9716B898ec8', '0x829BD824B016326A401d083B33D092293333A830']
account_blocks_mined = es.get_account_blocks_mined(miners)
time.sleep(1)
account_blocks_mined.head()
```
## Contracts
Functions to return information about contracts
```
# Sushiswap Router, Ygov Contract
contracts = ['0xd9e1cE17f2641f24aE83637ab66a2cca9C378B9F', '0x0001FB050Fe7312791bF6475b96569D83F695C9f']
```
### get_contract_abi
Returns the Contract Application Binary Interface (ABI) of a verified smart contract
```
abis = es.get_contract_abi(contracts)
time.sleep(1)
abis
```
### get_contract_source_code
Returns the Solidity source code of a verified smart contract
```
source_code = es.get_contract_source_code(contracts)
time.sleep(1)
source_code
```
## Transactions
Functions to return information about transactions
```
txns = ['0x29f2df8ce6a0e2a93bddacdfcceb9fd847630dcd1d25ad1ec3402cc449fa1eb6', '0x0bd7f9af4f8ddb18a321ab0120a2389046b39feb67561d17378e0d4dc062decc', '0x1815a03dd8a1ce7da5a7a4304fa5fae1a8f4f3c20787e341eea230614e49ff61']
```
### get_contract_execution_status -- busted on Etherscan backend
Returns the status code of a contract execution
```
contract_execution_status = es.get_contract_execution_status(txns)
time.sleep(1)
contract_execution_status
```
### get_transaction_execution_status -- busted on Etherscan backend
Returns the status code of a transaction execution.
```
transaction_execution_status = es.get_transaction_execution_status(txns)
time.sleep(1)
transaction_execution_status
```
## Blocks
Functions to return information about given block(s)
```
blocks = [13188647, 13088500]
far_out_block = 50000000
timestamp = 1638767557
```
### get_block_reward
Returns the block reward and 'Uncle' block rewards
```
block_rewards = es.get_block_reward(blocks)
time.sleep(1)
block_rewards
```
### get_block_countdown
Returns the estimated time remaining, in seconds, until a certain block is mined
```
block_countdown = es.get_block_countdown(far_out_block)
time.sleep(1)
block_countdown
```
### get_block_by_timestamp
Returns the block number that was mined at a certain timestamp (in unix)
```
block_at_time = es.get_block_by_timestamp(timestamp)
time.sleep(1)
block_at_time
```
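Unix timestamps like the one above can be produced from a human-readable date with the standard library; pinning the timezone to UTC keeps the result machine-independent:

```python
from datetime import datetime, timezone

def to_unix(year, month, day, hour=0, minute=0, second=0):
    """Unix timestamp (seconds) for a UTC date/time."""
    dt = datetime(year, month, day, hour, minute, second, tzinfo=timezone.utc)
    return int(dt.timestamp())

ts = to_unix(2021, 12, 6)
# Round-trip back to a readable date:
print(ts, datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
```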
## Logs
This function is a wrapper around the Etherscan API endpoint, which is itself a wrapper around the native eth_getLogs. Please check out the documentation for a more in-depth explanation: https://docs.etherscan.io/api-endpoints/logs
Below are the list of supported filter parameters:
* fromBlock, toBlock, address
* topic0, topic1, topic2, topic3 (32 Bytes per topic)
* topic0_1_opr (and|or between topic0 & topic1), topic1_2_opr (and|or between topic1 & topic2), topic2_3_opr (and|or between topic2 & topic3), topic0_2_opr (and|or between topic0 & topic2), topic0_3_opr (and|or between topic0 & topic3), topic1_3_opr (and|or between topic1 & topic3)
Some parameters to take note of:
* FromBlock & ToBlock accepts the blocknumber (integer, NOT hex) or 'latest' (earliest & pending is NOT supported yet)
* Topic Operator (opr) choices are either 'and' or 'or' and are restricted to the above choices only
* FromBlock & ToBlock parameters are required
* An address and/or topic(X) parameters are required, when multiple topic(X) parameters are used the topicX_X_opr (and|or operator) is also required
```
address = '0x33990122638b9132ca29c723bdf037f1a891a70c'
fromBlock=379224
toBlock='latest'
topic0='0xf63780e752c6a54a94fc52715dbc5518a3b4c3c2833d301a204226548a2a8545'
logs = es.get_logs(address, fromBlock, to_block=toBlock, topic0=topic0)
time.sleep(1)
logs
```
## Geth/Parity Proxy
Functions to emulate a Geth/Parity Ethereum client through Etherscan
```
accounts = ['0xBa19BdFF99065d9ABF3dF8CE942390B97fd71B12', '0x503e4bfe8299D486701BC7bc7F2Ea94f50035daC']
blocks = [13188647, 13088500]
index = 0
txns = ['0x29f2df8ce6a0e2a93bddacdfcceb9fd847630dcd1d25ad1ec3402cc449fa1eb6', '0x0bd7f9af4f8ddb18a321ab0120a2389046b39feb67561d17378e0d4dc062decc', '0x1815a03dd8a1ce7da5a7a4304fa5fae1a8f4f3c20787e341eea230614e49ff61']
```
### get_eth_block_number
Returns the number of most recent block
```
blockNo = es.get_eth_block_number()
time.sleep(1)
blockNo
```
### get_eth_block
Returns information about a block by block number
```
eth_block = es.get_eth_block(blocks)
time.sleep(1)
eth_block
```
### get_eth_uncle
Returns information about an uncle block by block number and index
```
uncle = es.get_eth_uncle(100, 0)
time.sleep(1)
uncle
```
### get_eth_block_transaction_count
Returns the number of transactions in a block
```
block_transaction_count = es.get_eth_block_transaction_count(blocks)
time.sleep(1)
block_transaction_count
```
### get_eth_transaction_by_hash
Returns the information about a transaction requested by transaction hash
```
txns_by_hash = es.get_eth_transaction_by_hash(txns)
time.sleep(1)
txns_by_hash
```
### get_eth_transaction_by_block_index
Returns information about a transaction by block number and transaction index position
```
txn_info = es.get_eth_transaction_by_block_index(blocks[0], index)
time.sleep(1)
txn_info
```
### get_eth_account_transaction_count
Returns the number of transactions performed by an address
```
transaction_count = es.get_eth_account_transaction_count(accounts)
time.sleep(1)
transaction_count
```
### get_eth_transaction_receipt
Returns the receipt of a transaction by transaction hash
```
receipts = es.get_eth_transaction_receipt(txns)
time.sleep(1)
receipts
```
### get_eth_gas_price
Returns the current price per gas in wei
```
gas_price = es.get_eth_gas_price()
time.sleep(1)
gas_price
```
## Tokens
Functions to return information about tokens
```
#pickle, xSushi
tokens = ['0x429881672B9AE42b8EbA0E26cD9C73711b891Ca5', '0x8798249c2E607446EfB7Ad49eC89dD1865Ff4272']
# One account w/ the above tokens, one account w/out the above tokens
accounts = ['0xBa19BdFF99065d9ABF3dF8CE942390B97fd71B12', '0x503e4bfe8299D486701BC7bc7F2Ea94f50035daC']
```
### get_token_total_supply
Returns the current circulating supply of one or more ERC-20 tokens
NOTE: the supply is not adjusted for the token's decimals
```
total_supply = es.get_token_total_supply(tokens)
time.sleep(1)
total_supply
```
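To express the raw value in whole tokens, divide by 10 to the power of the token's decimals. A minimal sketch (the `decimals` value here is an assumption; the real value must be read from the token contract, commonly via its `decimals()` function):

```python
# hypothetical raw supply string as returned by the API (base units)
raw_supply = "1000000000000000000000000"
decimals = 18  # assumed; most ERC-20 tokens use 18, but not all

adjusted_supply = int(raw_supply) / 10 ** decimals
print(adjusted_supply)  # 1000000.0 whole tokens
```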
### get_token_account_balance
Returns the current ERC-20 token balance(s) of one or more addresses
NOTE: the balance is not adjusted for the token's decimals
```
account_balance = es.get_token_account_balance(tokens, accounts)
time.sleep(1)
account_balance
```
## Gas Tracker
Functions to return information about gas
```
gas_price_wei=2000000000
```
### get_est_confirmation
Returns the estimated time, in seconds, for a transaction to be confirmed on the blockchain, given a gas price in wei
```
est_confirmation = es.get_est_confirmation(gas_price_wei)
time.sleep(1)
est_confirmation
```
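Gas prices are usually quoted in gwei (1 gwei = 10^9 wei), so the value used above converts as follows (a trivial sketch):

```python
gas_price_wei = 2000000000
gas_price_gwei = gas_price_wei / 10 ** 9  # 1 gwei = 10**9 wei
print(gas_price_gwei)  # 2.0 gwei
```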
### get_gas_oracle
Returns the current Safe, Proposed and Fast gas prices
```
gas_oracle = es.get_gas_oracle()
time.sleep(1)
gas_oracle
```
## Stats
Functions to return statistics supported by Etherscan
```
start_date='2021-01-01'
end_date='2021-01-05'
```
### get_total_eth_supply
Returns the current amount of Ether in circulation
```
total_eth_supply = es.get_total_eth_supply()
time.sleep(1)
total_eth_supply
```
### get_total_eth2_supply
Returns the current amount of Ether in circulation, ETH2 Staking rewards and EIP1559 burnt fees statistics
```
total_eth2_supply = es.get_total_eth2_supply()
time.sleep(1)
total_eth2_supply
```
### get_last_eth_price
Returns the latest price of 1 ETH in BTC & USD
```
last_eth_price = es.get_last_eth_price()
time.sleep(1)
last_eth_price
```
### get_nodes_size
Returns the size of the Ethereum blockchain, in bytes, over a date range
```
nodes_size = es.get_nodes_size(start_date=start_date, end_date=end_date)
time.sleep(1)
nodes_size
```
### get_total_nodes_count
Returns the total number of discoverable Ethereum nodes
```
total_nodes_count = es.get_total_nodes_count()
time.sleep(1)
total_nodes_count
```
When you train a deep neural network, you essentially start with the default representation of the data (for example, images represented as 3-D arrays) and apply a series of transformations (convolution, ReLU, pooling, fully connected layers, etc.), each of which changes the data representation.
Deep neural networks are known for their ability to automatically search for the set of learnable parameters that gives the "best" representation of the data, which makes discriminating/classifying much easier than using the original default features.
Here we visualize the transformed representation of the handwritten digits in 2-D space by applying t-SNE.
```
# Setup
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
from sklearn.manifold import TSNE
from autodiff import *
%matplotlib inline
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
batch = 50
iterations = 500
sess = Session()
```
Define and train the network
```
X = PlaceholderOp((batch, 28, 28, 1), False, sess)
Y_ = PlaceholderOp((batch, 10), False, sess)
P = PlaceholderOp((), False, sess)
W_CONV1 = PlaceholderOp((5, 5, 1, 32), sess=sess)
B_CONV1 = PlaceholderOp((32,), sess=sess)
W_CONV2 = PlaceholderOp((5, 5, 32, 64), sess=sess)
B_CONV2 = PlaceholderOp((64,), sess=sess)
W_FC1 = PlaceholderOp((7 * 7 * 64, 1024), sess=sess)
B_FC1 = PlaceholderOp((1024,), sess=sess)
W_FC2 = PlaceholderOp((1024, 10), sess=sess)
B_FC2 = PlaceholderOp((10,), sess=sess)
CONV1 = Conv2dOp(X, W_CONV1, [1, 1], "SAME", sess)
CONV1_B = BiasAddOp(CONV1, B_CONV1, sess)
H_CONV1 = ReluOp(CONV1_B, sess)
H_POOL1 = MaxPool2dOp(H_CONV1, [2, 2], [2, 2], "SAME", sess)
CONV2 = Conv2dOp(H_POOL1, W_CONV2, [1, 1], "SAME", sess)
CONV2_B = BiasAddOp(CONV2, B_CONV2, sess)
H_CONV2 = ReluOp(CONV2_B, sess)
H_POOL2 = MaxPool2dOp(H_CONV2, [2, 2], [2, 2], "SAME", sess)
H_POOL2_FLAT = ReshapeOp(H_POOL2, (batch, 7 * 7 * 64), sess)
FC1 = MatMulOp(H_POOL2_FLAT, W_FC1, sess)
FC1_B = BiasAddOp(FC1, B_FC1, sess)
H_FC1 = ReluOp(FC1_B, sess)
H_FC1_DROP = DropoutOp(H_FC1, P, sess)
FC2 = MatMulOp(H_FC1_DROP, W_FC2, sess)
Y_CONV = BiasAddOp(FC2, B_FC2, sess)
SOFTMAX = SoftmaxCrossEntropyWithLogitsOp(Y_, Y_CONV, sess)
CROSS_ENTROPY = ReduceMeanOp(SOFTMAX, 0, sess)
w_conv1 = np.random.normal(scale=0.1, size=W_CONV1.shape)
b_conv1 = np.ones(B_CONV1.shape) * 0.1
w_conv2 = np.random.normal(scale=0.1, size=W_CONV2.shape)
b_conv2 = np.ones(B_CONV2.shape) * 0.1
w_fc1 = np.random.normal(scale=0.1, size=W_FC1.shape)
b_fc1 = np.ones(B_FC1.shape) * 0.1
w_fc2 = np.random.normal(scale=0.1, size=W_FC2.shape)
b_fc2 = np.ones(B_FC2.shape) * 0.1
feed_dict = { W_CONV1: w_conv1,
B_CONV1: b_conv1,
W_CONV2: w_conv2,
B_CONV2: b_conv2,
W_FC1: w_fc1,
B_FC1: b_fc1,
W_FC2: w_fc2,
B_FC2: b_fc2}
params = {"alpha": 1e-3,
"beta1": .9,
"beta2": .999,
"epsilon": 1e-8,
"t": 0,
"m": { W_CONV1: np.zeros_like(w_conv1),
B_CONV1: np.zeros_like(b_conv1),
W_CONV2: np.zeros_like(w_conv2),
B_CONV2: np.zeros_like(b_conv2),
W_FC1: np.zeros_like(w_fc1),
B_FC1: np.zeros_like(b_fc1),
W_FC2: np.zeros_like(w_fc2),
B_FC2: np.zeros_like(b_fc2)},
"v": { W_CONV1: np.zeros_like(w_conv1),
B_CONV1: np.zeros_like(b_conv1),
W_CONV2: np.zeros_like(w_conv2),
B_CONV2: np.zeros_like(b_conv2),
W_FC1: np.zeros_like(w_fc1),
B_FC1: np.zeros_like(b_fc1),
W_FC2: np.zeros_like(w_fc2),
B_FC2: np.zeros_like(b_fc2)}}
for i in range(iterations):
    batch_xs, batch_ys = mnist.train.next_batch(batch)
    feed_dict[X] = batch_xs.reshape((batch, 28, 28, 1))
    feed_dict[Y_] = batch_ys
    feed_dict[P] = .5
    if i % 50 == 0:
        Y_CONV_val = sess.eval_tensor(Y_CONV, feed_dict)
        CROSS_ENTROPY_val = sess.eval_tensor(CROSS_ENTROPY, feed_dict)
        print("iteration: %d, train accuracy: %f, loss: %f" % (i, np.mean(np.argmax(Y_CONV_val, axis=1) ==
                                                                          np.argmax(batch_ys, axis=1)),
                                                               CROSS_ENTROPY_val))
    sess.adam_update(params, CROSS_ENTROPY, feed_dict)
```
Record the re-represented images (1024-D vectors) from the 1st FC layer and plot them in 2-D space by applying t-SNE. Digits in different classes are colored differently.
```
import matplotlib
embeddings = []
for i in range(0, mnist.test.images.shape[0], batch):
    feed_dict[X] = mnist.test.images[i : i + batch].reshape((batch, 28, 28, 1))
    feed_dict[P] = 1.
    H_FC1_DROP_val = sess.eval_tensor(H_FC1_DROP, feed_dict)
    embeddings.append(H_FC1_DROP_val)
embeddings = np.vstack(embeddings)
y_true = mnist.test.labels.argmax(axis=1)
tsne = TSNE(n_components=2)
embedding_2d = tsne.fit_transform(embeddings)
plt.scatter(x=embedding_2d[:, 0], y=embedding_2d[:, 1], c=y_true, linewidths=0.1)
fig = matplotlib.pyplot.gcf()
fig.set_size_inches(13, 9)
for i in range(10):
    x, y = embedding_2d[y_true == i, :].mean(axis=0)
    plt.text(x, y, str(i))
```
The original handwritten digits are represented as `28 x 28 x 1` 3-D arrays (or equivalently 784-D vectors), while in the 1st FC layer they are re-represented as 1024-D vectors, which are the input to the final readout layer (a simple linear transformation without nonlinearity).
Clearly, handwritten digits of the same class lie much closer to each other than to those of different classes.
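One simple way to quantify this separation, without relying on the plot alone, is the leave-one-out 1-nearest-neighbour label agreement in the 2-D embedding. A sketch on synthetic stand-ins for `embedding_2d` and `y_true` (the two-cluster data here is hypothetical):

```python
import numpy as np

# hypothetical stand-ins for embedding_2d / y_true from above:
# two tight clusters separated by a large offset
rng = np.random.RandomState(0)
embedding_2d = np.vstack([rng.randn(20, 2), rng.randn(20, 2) + 10.0])
y_true = np.array([0] * 20 + [1] * 20)

# pairwise distances; exclude self-matches on the diagonal
d = np.linalg.norm(embedding_2d[:, None] - embedding_2d[None, :], axis=-1)
np.fill_diagonal(d, np.inf)
nearest = d.argmin(axis=1)

# fraction of points whose nearest neighbour shares their label
purity = (y_true[nearest] == y_true).mean()
print(purity)  # close to 1.0 for well-separated classes
```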
<a name="top"></a>Overview: The Standard Library
===
* [The Python standard library](#standard)
* [Importing modules](#importieren)
* [Mathematics](#math)
* [Files and directories](#ospath)
* [Statistics and random numbers](#statistics)
* [Exercise 06: The standard library](#uebung06)
**Learning goals:** By the end of this unit you
* will know how to import functions from other modules
* will be able to do more advanced mathematics with the ```math``` module
* will be able to manipulate file and directory names with the help of ```os.path```
* will be able to do a bit of statistics with ```statistics```
<a name="standard"></a>The Python Standard Library
===
<a name="importieren"></a>Importing Modules
---
We have already met some functions that are "built into" Python by default and are usually quite helpful, e.g.
* ```print()```
* ```sum()```
* ```len()```
* ...
A list of all directly available built-in functions can be found here: https://docs.python.org/2/library/functions.html
In addition, there is the so-called Python _standard library_. It ships automatically with every installation of Python, but we do not automatically have access to all of its functions from every script.
We first have to tell our program that we want to access functions from a particular part of the standard library - a _module_ - by _importing_ it.
[top](#top)
<a name="math"></a>Mathematics
---
A classic example is the ```math``` module. It gives us access to a few handy functions such as ```sin()``` and ```cos()```. To tell Python that we want to import a module, we use the ```import``` keyword:
```
# we import the module; from here on the
# module is known in the entire rest of the script
import math
```
We access the module's functions using the dot notation:
```
math.sin(3)
ergebnis = math.cos(math.pi)
print(ergebnis)
```
##### Documentation
We can already guess that ```math``` provides functions such as sine, cosine, absolute value or proper rounding. If we want more detailed information about what ```math``` offers, the module's online documentation is a good place to start:
Documentation for ```math```: https://docs.python.org/2/library/math.html
Alternatively, we can also get help on individual functions directly in the notebook:
```
help(math.cos)
? math.cos
```
[top](#top)
<a name="ospath"></a>Files and Directories
---
To interact with the surrounding operating system, files and directories, there is the ```os``` module (for **o**perating **s**ystem):
```
import os
```
With it we can, for example, display the path of our current working directory (the directory this notebook lives in):
```
pfad = os.getcwd() # [c]urrent [w]orking [d]irectory, cwd
# the path is returned as a string
print(pfad)
```
We can also list all the files that live in a working directory:
```
# os.listdir returns a list of the
# file names as strings
dateien = os.listdir(pfad)
# note: some hidden files, recognizable
# by their leading dot, are listed as well
print(dateien)
```
In the following we will create a few new files. To keep the file chaos under control, we first create a new directory for them:
```
neuer_ordner_name = 'mein-ordner'
os.mkdir(neuer_ordner_name)
print(os.listdir(pfad))
```
So that the new files end up in the right place, we update the path to point to the new directory. For this we need a so-called _submodule_ of ```os```, namely ```os.path```:
```
pfad = os.path.join(pfad, neuer_ordner_name)
print(pfad)
```
Because we don't want to type ```os.path.join()``` every time we modify a path, we can also import the ```join()``` function directly into the global namespace of our script:
```
from os.path import join
```
We can interact with files by, for example, opening them from within our program, writing something into them and closing them again (we don't even need the ```os``` module for that). If we try to open a file that does not exist, we get an error at first:
```
dateiname = 'meine_datei.txt'
open(join(pfad,dateiname))
```
We can fix the error by passing an additional argument to the ```open``` function:
```
# 'w' stands for 'write' here and allows
# open() to write to a file.
open(os.path.join(pfad, dateiname), 'w')
# remove the file again:
os.remove(join(pfad, dateiname))
```
Now we want to create a series of files automatically. To do so, we wrap the functions from above in a loop:
```
# loop over 10 iterations; we use the
# temporary variable i to number
# the file names
for i in range(10):
    # build a string for the file name
    datei_name = 'datei_{}.txt'.format(i)
    # create an empty file named
    # datei_name in the directory pfad
    open(join(pfad, datei_name), 'w+')
```
And now we want to rename the files automatically:
```
filenames = os.listdir(pfad)
print(filenames)
# note: the hidden file .ipynb_checkpoints
# is also in the directory - we don't want to touch it!
# iterate over all files in the list,
# use enumerate() so that we have an index
# number that we can write into the new
# file name
for i, name in enumerate(filenames):
    # we only want to rename files that
    # end in '.txt'
    if name.endswith('.txt'):
        # build a new name
        neuer_name = '2017-06-11_{}.txt'.format(i)
        # use rename(old_path, new_path) to
        # rename the file of the current iteration
        os.rename(join(pfad, name), join(pfad, neuer_name))
# IMPORTANT: don't forget to always give the full
# path to the file!
```
[top](#top)
<a name="statistics"></a>Statistics and Random Numbers
---
The standard library also provides basic functionality for statistics and random numbers. You can find the documentation for the two modules at:
* ```statistics```: https://docs.python.org/3/library/statistics.html
* ```random```: https://docs.python.org/3/library/random.html
```
import random
import statistics
```
With these we can, for example, create a list filled with random integers:
```
# we use a list comprehension to
# generate ten random numbers between
# 0 and 10 and store them in a list
random_numbers = [random.randint(0,10) for i in range(10)]
# every new execution of this code cell
# will produce a different, random list
print(random_numbers)
```
With that we can already do some interesting statistics and look at the mean, the standard deviation and the median:
```
# generate a list of random integers 10 times
for i in range(10):
    # generate random numbers between -10 and 10
    numbers = [random.randint(-10,10) for i in range(10)]
    mean = statistics.mean(numbers)     # mean
    std = statistics.stdev(numbers)     # standard deviation
    median = statistics.median(numbers) # median
    # print the computed values nicely formatted
    print('mean: {}, stdev: {}, median: {}'\
          .format(mean, std, median))
```
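For a fixed list the results are deterministic; a small check (note that ```pstdev``` is the *population* standard deviation, while ```stdev``` used above is the sample version):

```python
import statistics

numbers = [2, 4, 4, 4, 5, 5, 7, 9]
print(statistics.mean(numbers))    # 5
print(statistics.median(numbers))  # 4.5
print(statistics.pstdev(numbers))  # 2.0
```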
[top](#top)
<a name="uebung06"></a>Exercise 06: The Standard Library
===
1. **Math**
    1. Look up the functions for converting degrees to radians and vice versa in the documentation of ```math```. Convert $\pi$, $\pi/2$ and $2\pi$ to degrees and $100^\circ$, $200^\circ$ and $300^\circ$ to radians.
    2. Given the coordinates of three corner points of a triangle:
    $A = (0,0); \quad B = (3,0); \quad C = (4,2)$.
    Write a function that uses the ```math``` module to compute the length of the line segment between two points from their $(x,y)$ coordinates.
    3. Use this function to compute the lengths of the sides $a, b, c$ of the triangle.
    4. **(Optional)** Compute the opposite angles $\alpha, \beta, \gamma$ of the triangle:
    HINT: law of cosines https://de.wikipedia.org/wiki/Kosinussatz
2. **Files and directories**
    1. Create a new directory with ```mkdir()```.
    2. Build a variable holding the path to the new directory.
    3. Automatically create 5 new .txt files and 5 new .csv files in the new directory.
    4. Automatically rename the newly created files. Choose different names depending on whether the file is a .txt or a .csv file.
    5. **(Optional)** Automatically create one directory for each week in June. In each directory, create one file for each weekday of the week that the directory stands for. Name the files after their date (e.g. 2017-06-01 for June 1, 2017).
    6. **(Optional)** Research how to write to a file from within a program. Test the concept on a single test file.
    7. **(Optional)** Write into each file in the June directories the weekday corresponding to its date.
3. **Statistics and random numbers**
    1. Create a list of a random length between 5 and 10 elements, filled with random integers.
    2. Have a look at the ```shuffle()``` function of the ```random``` module and use it to shuffle the order of the elements in the list.
    3. **(Optional)** Make a copy of the list. Write a function that shuffles the copy, compares it to the original and returns ```True``` if they are equal, ```False``` otherwise.
    4. **(Optional)** Write a loop that uses the shuffle function to shuffle the list N times. How long does it take until the list happens to have the same order as the original again? How does the number of required iterations depend on the length of the list?
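As orientation for the first exercise, the length of a segment between two points follows from the Pythagorean theorem; a sketch of one possible solution (not the only one - ```math.dist``` would also work in Python 3.8+):

```python
import math

def distance(p1, p2):
    # length of the line segment between two (x, y) points
    return math.sqrt((p2[0] - p1[0]) ** 2 + (p2[1] - p1[1]) ** 2)

A, B, C = (0, 0), (3, 0), (4, 2)
print(distance(A, B))  # 3.0
print(distance(B, C))  # 2.236... (sqrt(5))
print(distance(C, A))  # 4.472... (sqrt(20))
```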
[top](#top)

# Data Analysis and Machine Learning in Python
Notebook author: Jakub Nowacki.
## Jupyter
As our interface/editor we use:
* [Jupyter Notebook](http://jupyter.org/) - an interactive environment for working with Python in the browser
It was formerly called IPython Notebook and grew out of the [IPython](https://ipython.org/) project.
## Code
Press `[shift]+[enter]` to run a cell.
```
2 + 5
pets = ["cat", "python", "elephant"]
for pet in pets:
print("I have a {}. A wonderful animal, indeed!".format(pet.upper()))
[len(pet) for pet in pets]
```
## Markdown
**Markdown** is a [markup language](https://en.wikipedia.org/wiki/Markup_language) that lets us format a document, as seen here.
It supports:
* lists,
* fonts:
    * *italic*,
    * **bold**,
    * ~~strikethrough~~,
    * `monospace`,
* [links](http://www.sages.pl/).
Moreover, it allows embedding $\LaTeX$ expressions such as $\sqrt{2} = 1.41\ldots$ or:
$$\sum_{k=1}^\infty \frac{(-1)^{k+1}}{2k-1} = 1 - \tfrac{1}{3} + \tfrac{1}{5} - \ldots = \frac{\pi}{4} $$
And, of course, code:
```javascript
var list = ["a", "list", "in", "JavaScript", "ES6"];
list
.map((x) => x.length)
.reduce((x, y) => x + y);
```
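The Leibniz series shown in the Markdown example above can also be checked numerically in Python (a quick sketch; convergence is slow):

```python
import math

# partial sum of 1 - 1/3 + 1/5 - ... with n terms
n = 100000
partial = sum((-1) ** (k + 1) / (2 * k - 1) for k in range(1, n + 1))
print(partial, math.pi / 4)  # the partial sum approaches pi/4
```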
## Automatic code completion
```
from numpy import random
random.rand
```
Start typing `random.r` and press `tab`.
```
random.rand?
```
## IPython magic
Cells starting with `%%something` are special. For example, we can measure execution time:
```
%%timeit
acc = 0
for x in range(1000000):
acc += 1
%%timeit
acc = 0
for x in range(1000000):
acc += x**5 - 3 * x**2
```
## Command line
Only for *nix systems (Linux, Mac OS X)
```
!dir
files = !dir
for f in files:
if f.find("1_") >= 0:
print(f)
```
## HTML and JavaScript
```
from IPython.display import Javascript, HTML
Javascript("alert('It is JavaScript!')")
HTML("We can <i>generate</i> <code>html</code> code <b>directly</b>!")
```
## Plots
```
# plots will appear inline in the notebook
%matplotlib inline
# plotting library
import matplotlib.pyplot as plt
# numerical library
import numpy as np
X = np.linspace(-5, 5, 100) # vector of 100 equally spaced values from -5 to 5
Y = np.sin(X) # sine of all values of X
plt.plot(X, Y); # line plot
```
## Displaying various things
```
from IPython import display
display.Image(url="http://imgs.xkcd.com/comics/python.png")
display.YouTubeVideo("H6dLGQw9yFQ")
display.Latex(r"$\lim_{x \to 0} (1+x)^{1/x} = e$")
```
## Other
* Feel free to take notes in your own notebook.
* The notebooks contain exercises. Exercises marked with a star (★) are optional: you can do them, but you don't have to.
```
print('hello')
```
# Pre-processing the data
Import the libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
dataset=pd.read_csv("Churn_Modelling.csv")
dataset
```
Making the matrix of features and separating the dependent variable
```
X=dataset.iloc[:,3:13].values
y=dataset.iloc[:,-1].values
```
Encoding the categorical variables
```
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
labelencoder_X_1=LabelEncoder()
X[:,1]=labelencoder_X_1.fit_transform(X[:,1])
labelencoder_X_2=LabelEncoder()
X[:,2]=labelencoder_X_2.fit_transform(X[:,2])
# so we get the encoded values but there might be problems, that is why we do dummy coding
X
```
Dummy coding the categorical variables
```
#Dummy coding using OneHotEncoder
#X
from sklearn.compose import ColumnTransformer
transformer = ColumnTransformer(
transformers=[
("OneHot", # Just a name
OneHotEncoder(), # The transformer class
[1] # The column(s) to be applied on.
)
],
remainder='passthrough' # do not apply anything to the remaining columns
)
X = transformer.fit_transform(X.tolist())
X = X.astype('float64')
# To avoid the dummy variable trap, we remove one of the dummy variables
X=X[:,1:]
X
```
Splitting the dataset into test set and training set
```
#splitting the dataset into a training set and a test set
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.20,random_state=0)
X_train
```
# Applying XGBoost to the training set and making predictions on the test set
Fitting the model
```
from xgboost import XGBClassifier
classifier=XGBClassifier()
#n_estimators is the number of trees
classifier.fit(X_train,y_train)
```
Making predictions on the test set
```
y_pred=classifier.predict(X_test)
```
# Evaluating the accuracy of the model
Using confusion matrix
```
from sklearn.metrics import confusion_matrix
cm=confusion_matrix(y_test,y_pred)
cm
```
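The overall test accuracy can be read straight off the confusion matrix: correct predictions sit on the diagonal. A minimal sketch with hypothetical toy labels standing in for `y_test`/`y_pred`:

```python
import numpy as np

# hypothetical toy labels standing in for y_test / y_pred above
y_test = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0])

# build the 2x2 confusion matrix by counting (true, predicted) pairs
cm = np.zeros((2, 2), dtype=int)
for t, p in zip(y_test, y_pred):
    cm[t, p] += 1

accuracy = np.trace(cm) / cm.sum()  # diagonal = correct predictions
print(accuracy)  # 4 correct out of 6 ~ 0.667
```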
Applying k-fold cross validation
```
from sklearn.model_selection import cross_val_score
accuracies=cross_val_score(estimator=classifier,X=X_train,y=y_train,cv=10)
accuracies.mean()
accuracies.std()
```
## **Notes**:
1. XGBoost is based on gradient-boosted decision trees, which makes it an excellent algorithm for classification.
2. We don't need to apply feature scaling with this model, so we can keep the interpretability of the dataset.
3. This algorithm executes much faster than an ANN while giving approximately the same result.
```
#!/home/maghoi/.conda/envs/py37/bin/python
from Bio import SeqIO
import argparse, os, glob
#parser = argparse.ArgumentParser(description="Removes duplicate seqs from input file, writes to input_out")
#parser.add_argument("-i", dest="INPUT_FASTA", required=True, help="Input fasta file")
#parser.add_argument("-o", dest="OUTPUT_FASTA", help="Output fasta file")
#args = parser.parse_args()
#dict_args = vars(args)
def de_duplicate_FASTA_files(inDir="../data/raw/", v=1):
    # Find train and test fasta files in raw
    fasta_files = glob.glob(inDir + "test*.fasta") + glob.glob(inDir + "train*.fasta")
    print("Checking for duplicate sequences in:\n", fasta_files, end="\n")
    outDir = "../data/interim/"
    # Count duplicates and entries across all fasta files
    dict_seqs = {}
    dict_ids = {}
    count_entries = 0
    # Load individual FASTA files and check whether sequences are duplicates before saving
    # IDs are only stored for counting
    for inFasta in fasta_files:
        outFasta = outDir + "filtered_" + os.path.basename(inFasta)
        with open(outFasta, 'w') as outHandle:
            no_duplicates = 0
            for record in SeqIO.parse(inFasta, "fasta"):
                count_entries += 1
                # Store unique sequences for later checking
                # Write ID+sequence if sequence is unique
                if str(record.seq) not in dict_seqs:
                    dict_seqs[str(record.seq)] = True
                    SeqIO.write(record, outHandle, 'fasta')
                    # Store unique IDs
                    if str(record.id) not in dict_ids:
                        dict_ids[str(record.id)] = True
                else:
                    no_duplicates += 1
        if v:
            print(f"Found {no_duplicates} duplicate sequences in {inFasta}")
            print(f"Saving to {outFasta}")
    # Print statistics
    print(f"\nProcessed {count_entries} sequences from {len(fasta_files)} train/test FASTA files in {inDir} to {outDir}:")
    print(f"Unique IDs / Sequences: {len(dict_ids.keys())} / {len(dict_seqs.keys())}")
de_duplicate_FASTA_files()
from subprocess import call
call("" + " myarg", shell=True)
call("${PYTHON_INTERPRETER} -u " + "src/data/extract.py " + "", shell=True)
import subprocess
subprocess.run("echo hello world", shell=True, capture_output=True)
from pathlib import Path
for fastaFile in glob.glob("../data/interim/test.fasta"):
    print(fastaFile)
    fastaName = Path(os.path.basename(fastaFile)).stem
    script = "${PYTHON_INTERPRETER} -u extract.py"
    model = "esm1_t6_43M_UR50S"
    params = "include per_tok"
    outDir = "../data/interim/" + fastaName + "/"
    cmd = " ".join([script, model, fastaFile, outDir, params])
    print(cmd)
    subprocess.run(cmd, shell=True, capture_output=True)
import subprocess
with open('../src/data/run_esm.sh', 'rb') as file:
    script = file.read()
subprocess.run("bash ../src/data/run_esm.sh", shell=True)
!pwd
!realpath ../src/data/run_esm.sh
import os, glob
from collections import OrderedDict
import numpy as np
import torch
from Bio import SeqIO
def stack_res_tensors(tensors):
    embedding_dim = tensors[0].shape[-1]
    # Prepare to stack tensors
    n_seqs = len(tensors)
    seq_max_length = max([t.shape[0] for t in tensors])
    # Initialize empty padded tensor, with 0-padding for sequences with less than max length residues
    fill_tensor = torch.zeros(size=(n_seqs, seq_max_length, embedding_dim))
    # Load torch tensors from ESM embeddings matching sequence, fill padded tensor
    for i, tensor in enumerate(tensors):
        fill_tensor[i, 0:tensor.shape[0]] = tensor
    return fill_tensor
def PCA_transform_stacked_tensor(stacked_tensor, pca=None, pca_dim=30, v=True):
    """
    Fits PCA and transforms the flattened dataset representing all positions x ESM_embeddings, before reshaping back to n_seqs x positions x pca_dim
    Returns:
        (pca_stacked_tensor) [torch.Tensor]: Dataset reduced to n_seqs x sequence_length x pca_dim (30)
    """
    embedding_dim = stacked_tensor.shape[-1]
    # Flatten training dataset to shape n_positions/residues x embedding_dim
    positions = stacked_tensor.view( stacked_tensor.shape[0]*stacked_tensor.shape[1], embedding_dim )
    # PCA reduce to pca_dim dimensions along positions
    if pca is None:
        print("Fitting and transforming PCA")
        from sklearn.decomposition import PCA
        pca = PCA(pca_dim)
        positions_reduced = pca.fit_transform(positions)
    else:
        print("Using given pca object to transform")
        positions_reduced = pca.transform(positions)
    # Reshape dataset to n_seqs x n_positions x pca_dim
    pca_stacked_tensor = positions_reduced.reshape( stacked_tensor.shape[0], stacked_tensor.shape[1], pca_dim )
    pca_stacked_tensor = torch.Tensor(pca_stacked_tensor)
    if v:
        print(f"Reduced from {stacked_tensor.shape} to {pca_stacked_tensor.shape}")
    return pca_stacked_tensor, pca
def save_processed_tensor(tensor_proc, id_tensor_odict, outdir, verbose=0):
    # Create new dict with original shape of PCA reduced tensors (remove zero-padding in processed stacked tensor)
    pca_dict = {}
    for i, t_id in enumerate(id_tensor_odict.keys()):
        orig_tensor = id_tensor_odict[t_id]
        pca_tensor = tensor_proc[i]
        # Fill with PCA reduced tensor, preserving original shape
        pca_dict[t_id] = pca_tensor[0:orig_tensor.shape[0], 0:orig_tensor.shape[1]]
    # Save
    print("Writing PCA-reduced tensors to %s" % outdir)
    os.makedirs(outdir, exist_ok=True)
    for t_id, t in pca_dict.items():
        outpath = outdir + str(t_id) + ".pt"
        if verbose: print("Writing ID %s %s to %s" % (t_id, t.shape, outpath))
        # NB: tensor.clone() ensures the tensor is stored as its actual slice and not as the full 37 MB tensor it views in memory
        # Difference of e.g. 20 kB vs 37 MB
        torch.save(t.clone(), outpath)
    return pca_dict
inDir = "../data/raw/"
train_pos_list = glob.glob(inDir + "train/pos/*.fasta")
train_neg_list = glob.glob(inDir + "train/neg/*.fasta")
test_pos_list = glob.glob(inDir + "test/pos/*.fasta")
test_neg_list = glob.glob(inDir + "test/neg/*.fasta")
stacked_tensor.shape
procDir = "../data/processed/"
pca_dim=60
def get_PCA_fit_train():
    """ Returns fitted PCA object on training dataset """
    def PCA_fit(stacked_tensor, pca_dim=60):
        """ Returns fitted PCA on flattened positions (N x L x D) -> PCA fit on (N*L x D) """
        # Flatten training dataset to shape n_positions/residues x embedding_dim
        embedding_dim = stacked_tensor.shape[-1]
        positions = stacked_tensor.view( stacked_tensor.shape[0]*stacked_tensor.shape[1], embedding_dim )
        from sklearn.decomposition import PCA
        pca = PCA(pca_dim)
        return pca.fit(positions)
    id_tensor_odict = OrderedDict()
    tensor_files = glob.glob(interimDir + "train_pos/*.pt") + glob.glob(interimDir + "train_neg/*.pt")
    for file in tensor_files:
        tensor_dict = torch.load(file)
        id_tensor_odict[tensor_dict["label"]] = tensor_dict["representations"][6]
    # PCA fit
    tensors = [tensor for tensor in id_tensor_odict.values()]
    stacked_tensor = stack_res_tensors(tensors)
    embed_dim = stacked_tensor.shape[-1]
    max_len = len(stacked_tensor[0])
    print(f"Fitting PCA on train dataset {stacked_tensor.shape}")
    pca = PCA_fit(stacked_tensor, pca_dim)
    return pca, embed_dim, max_len
pca_train, embed_dim, max_len = get_PCA_fit_train()
dirType_list = ["test_pos/", "test_neg/", "train_pos/", "train_neg/"]
fasta_list = ["test_pos.fasta", "test_neg.fasta", "train_pos.fasta", "train_neg.fasta"]
def PCA_transformed_save_embed_files(dirType_list, fasta_list, interimDir, procDir):
    print(f"PCA transforming and saving embedding pt files in {dirType_list}")
    # PCA transform
    id_tensor_odict = OrderedDict()
    for dirType in dirType_list:
        os.makedirs(procDir + dirType, exist_ok=True)
    for fasta, dirType in zip(fasta_list, dirType_list):
        # Load FASTA file for ids
        record_dict = SeqIO.to_dict(SeqIO.parse(interimDir + fasta, "fasta"))
        # Load, transform and save tensor files
        tensor_files = glob.glob(interimDir + dirType + "*.pt")
        for file in tensor_files:
            name = os.path.basename(file)
            t_dict = torch.load(file)
            id = t_dict["label"]
            # PCA transform tensor with the PCA fitted on the training set
            tensor = t_dict["representations"][6]
            arr = pca_train.transform(tensor)
            padded_arr = np.zeros( (max_len, pca_dim) )
            padded_arr[ :len(arr) ] = arr
            pca_tensor = torch.from_numpy(padded_arr).type(torch.float32)
            value_dict = {
                "tensor": pca_tensor,
                "sequence": str(record_dict[id].seq),
                "length": len(arr)
            }
            outFile = procDir + dirType + name
            torch.save(value_dict, outFile)
PCA_transformed_save_embed_files(dirType_list, fasta_list, interimDir, procDir)
t_dict["representations"][6].dtype
arr = pca.transform(tensor)
pca_tensor = torch.from_numpy(arr).type(torch.float32)
# Load in all FASTA ids
fasta, dir, label = "test_pos.fasta", "test_pos/", 1
id_odict = OrderedDict()
fasta_list = ["test_pos.fasta", "test_neg.fasta", "train_pos.fasta", "train_neg.fasta"]
dir_list = ["test_pos/", "test_neg/", "train_pos/", "train_neg/"]
labels = [1, 0, 1, 0]
for fasta, dir, label in zip(fasta_list, dir_list, labels):
    for record in SeqIO.parse(interimDir + fasta, "fasta"):
        id_odict[record.id] = {
            "sequence": str(record.seq),
            "label": label
        }
interimDir = "../data/interim/"
# Training set
print("PCA reducing train set by fitting and transforming")
train_pos_list = glob.glob(interimDir + "train_pos/" + "*.pt")
train_neg_list = glob.glob(interimDir + "train_neg/" + "*.pt")
id_tensor_odict = OrderedDict()
for train_list in [train_pos_list, train_neg_list]:
    for file in train_list:
        t_dict = torch.load(file)
        id = t_dict["label"]
        value_dict = {
            "tensor": t_dict["representations"][6],
            "sequence": id_odict[id]["sequence"],
            "label": id_odict[id]["label"]
        }
        id_tensor_odict[id] = value_dict
# PCA fit and transform along positions
tensors = [value_dict["tensor"] for value_dict in id_tensor_odict.values()]
stacked_tensor = stack_res_tensors(tensors)
pca_stacked_tensor, pca_train = PCA_transform_stacked_tensor(stacked_tensor, pca=None, pca_dim=60)
# Save
# Test set
print("PCA reducing test set using fitted train PCA")
test_pos_list = glob.glob(interimDir + "test_pos/" + "*.pt")
test_neg_list = glob.glob(interimDir + "test_neg/" + "*.pt")
id_tensor_odict = OrderedDict()
for test_list in [test_pos_list, test_neg_list]:
    for file in test_list:
        tensor_dict = torch.load(file)
        id_tensor_odict[tensor_dict["label"]] = tensor_dict["representations"][6]
# PCA Transform using pca_train along positions
tensors = [tensor for tensor in id_tensor_odict.values()]
stacked_tensor = stack_res_tensors(tensors)
pca_stacked_tensor, pca_train = PCA_transform_stacked_tensor(stacked_tensor, pca=pca_train, pca_dim=60)
pca_train
a = stacked_tensor.shape
b = torch.transpose(stacked_tensor, 0, 1).shape # swap the first two dims; the exact dims were left unspecified in the original
print(a, b)
stacked_tensor.shape
id_tensor_odict
file = tensor_files[0]
torch.load(file)["representations"][6]
os.path.basename(file)
# Read in ESM tensor files to ordered dict
id_tensor_odict = OrderedDict()
tensor_files = glob.glob(args.EMBEDDING_DIR + "filtered_train*/*.pt")
if args.VERBOSE: print("Found %s ESM tensor files in %s" % (len(tensor_files), args.EMBEDDING_DIR))
for file in tensor_files:
esm_d = torch.load(file)
id_tensor_odict[esm_d["label"]] = esm_d["representations"][33]
# Stack with zero-padding by longest sequence length
tensors = [tensor for tensor in id_tensor_odict.values()]
stacked_tensor = stack_res_tensors(tensors)
if args.VERBOSE: print("Stacked tensor:", stacked_tensor.shape)
# PCA reduce to 30 dimensions
if args.VERBOSE: print("Fitting PCA model to stacked_tensor ...")
pca_stacked_tensor = PCA_reduce_stacked_tensor(stacked_tensor, pca_dim=30)
if args.VERBOSE: print("PCA stacked tensor:", pca_stacked_tensor.shape)
# Save to new files
pca_dict = save_processed_tensor(pca_stacked_tensor, id_tensor_odict, outdir=args.OUTDIR, verbose=args.VERBOSE)
```
| github_jupyter |
```
!sudo apt install tesseract-ocr
!pip install pytesseract
!pip install dataframe_image
%pylab inline
import os
import csv
import random
import tempfile
from collections import namedtuple
import cv2
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import matplotlib.image as mpimg
from keras.models import load_model
from google.protobuf import text_format
from bs4 import BeautifulSoup
import dataframe_image as dfi
from imutils import paths, resize
from imutils.object_detection import non_max_suppression
from IPython.display import HTML
try:
    from PIL import Image
except ImportError:
    import Image
import pytesseract
import pytesseract as pt
from pytesseract import Output
def display_img(image):
plt.figure(figsize=(10,10))
plt.imshow(image,cmap='gray')
plt.show()
from keras.models import load_model
model = load_model('/content/drive/MyDrive/Fintab/table 11-25 exp2 512x512 96.3/mymodel_best')
# load image
filename = '/content/A_2009_page_72.jpg'
image = tf.io.read_file(filename)
image = tf.image.decode_jpeg(image, channels=3) #Decode a JPEG-encoded image to a uint8 tensor
image = tf.image.resize(image , [512,512])
image = tf.cast(image, tf.float32) / 255.0 # normalizing image
display_img(image)
# predict table mask
table_mask = model.predict(image[np.newaxis,:,:,:])
table_mask = tf.argmax(table_mask, axis=-1)
table_mask = table_mask[..., tf.newaxis]
table_mask = table_mask[0]
table_mask = tf.keras.preprocessing.image.array_to_img(table_mask)
table_mask = np.array(table_mask)
ret, table_mask = cv2.threshold(table_mask, 120, 255, cv2.THRESH_BINARY)
kernel = np.ones((5,5), np.uint8)
table_mask = cv2.erode(table_mask, kernel, iterations=3)
table_mask = cv2.dilate(table_mask, kernel, iterations=4)
display_img(table_mask)
# image = cv2.imread('/content/drive/MyDrive/Fintab/mixed_data/extracted_table/BAX_2014_page_87.jpg',0)
# # display_img(image)
# ret, image = cv2.threshold(image, 120, 255, cv2.THRESH_BINARY)
# kernel = np.ones((5,5), np.uint8)
# image = cv2.erode(image, kernel, iterations=3)
# # display_img(img_erosion)
# image = cv2.dilate(image, kernel, iterations=4)
# display_img(image)
image = cv2.imread(filename)
# display_img(original_image)
table_mask = cv2.resize(table_mask,image.shape[0:2][::-1])
image.shape[0:2],table_mask.shape
contours, hierarchy = cv2.findContours(table_mask,cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print(len(contours))
crop_images_list=[]
for i,contour in enumerate(contours):
x,y,w,h = cv2.boundingRect(contour)
xmin = round(0.95*x)
xmax = round(1.05*(x+w))
ymin = round(0.99*y)
ymax = round(1.01*(y+h))
print(cv2.contourArea(contour))
# print(xmin,xmax,ymin,ymax)
cv2.rectangle(table_mask,(x,y),(x+w,y+h),(255,255,0),-1)
crop_img = image[ymin:ymax, xmin:xmax]
# print(crop_img)
plt.figure(figsize=(12,12))
plt.imshow(crop_img)
plt.show()
crop_images_list.append(crop_img)
# cv2.imwrite(f'crop_image{i+1}.jpg',crop_img)
display_img(table_mask)
display_img(crop_images_list[1])
custom_config = r'-l eng --oem 3 --psm 6 -c tessedit_char_whitelist="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-:.$%./@& *"'
d = pytesseract.image_to_data(crop_images_list[1], config=custom_config, output_type=Output.DICT)
df = pd.DataFrame(d)
pd.set_option('display.max_rows', None)
df
gray_image = cv2.cvtColor(crop_images_list[1], cv2.COLOR_BGR2GRAY)
threshold_img = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
display_img(threshold_img)
test_image = crop_images_list[1].copy()
test_image.shape
# img = test_image.copy()
# hImg, wImg, _ = img.shape
# # boxes = pytesseract.image_to_boxes(img)
# for row in df.iterrows():
# # b = b.split(' ')
# # print(b)
# x, y, w, h = int(row[1]['left']), int(row[1]['top']), int(row[1]['width']), int(row[1]['height'])
# img = cv2.rectangle(img, (x,y), (x+w,y+h), (50, 50, 255), 1)
# # img = cv2.putText(img, b[0], (x, hImg - y + 13), cv2.FONT_HERSHEY_SIMPLEX, 0.4, (50, 205, 50), 1)
# display_img(img)
# looping over row pixels
hor_lines = []
for ir in range(0,test_image.shape[0]):
flag = True
for row in df.iterrows():
ymin = row[1]['top']
ymax = ymin + row[1]['height']
if int(row[1]['conf'])<50:
continue
if ymin<=ir<=ymax:
# print(f'1line at {ir}')
# print(row[1])
flag = False
break
if flag==True:
hor_lines.append(ir)
print(hor_lines)
color = (0,255,0)
thickness = 1
for i in hor_lines:
start_point = (0,i)
end_point = (test_image.shape[1]+10,i)
test_image = cv2.line(test_image,start_point,end_point,color,thickness)
display_img(test_image)
vert_lines = []
total_lines = max(df['line_num'])
for ij in range(test_image.shape[1]):
flag = True
rows_blocking = []
for row in df.iterrows():
char_len = len(row[1]['text'].replace(',','').replace('.',''))
if char_len==0:
continue
avg_char_width = row[1]['width']/char_len
xmin = row[1]['left'] - 0.55*avg_char_width
xmax = row[1]['left'] + row[1]['width'] + 0.55*avg_char_width
if int(row[1]['conf'])<40:
continue
if xmin<=ij<=xmax:
# print(f'1line at {ij}')
# print(row[1]['left'],row[1]['text'])
flag = False
rows_blocking.append(row[1]['line_num'])
# break
if len(rows_blocking)>=0.1*total_lines:
continue
else:
flag=True
if flag==True:
vert_lines.append(ij)
print(vert_lines)
color = (0,0,255)
thickness = 1
for i in vert_lines:
start_point = (i,0)
end_point = (i,test_image.shape[0]+10)
test_image = cv2.line(test_image,start_point,end_point,color,thickness)
display_img(test_image)
gray = cv2.cvtColor(crop_images_list[1], cv2.COLOR_BGR2GRAY)
# Performing OTSU threshold
ret, thresh1 = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)
# Specify structure shape and kernel size.
# Kernel size increases or decreases the area
# of the rectangle to be detected.
# A smaller value like (10, 10) will detect
# each word instead of a sentence.
rect_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 15))
kernel = np.ones((5,5))
# Applying dilation on the threshold image
dilation = cv2.dilate(thresh1, rect_kernel, iterations = 10)
dilation = cv2.erode(dilation, kernel, iterations = 2)
dilation = cv2.dilate(dilation , kernel , iterations = 10)
dilation = cv2.erode(dilation , kernel , iterations = 10)
dilation = cv2.erode(dilation , rect_kernel , iterations = 10)
display_img(dilation)
# Finding contours
contours, hierarchy = cv2.findContours(dilation, cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_NONE)
# Creating a copy of image
im2 = crop_images_list[1].copy()
# Looping through the identified contours
# Then rectangular part is cropped and passed on
# to pytesseract for extracting text from it
# Extracted text is then written into the text file
for cnt in contours:
x, y, w, h = cv2.boundingRect(cnt)
# Drawing a rectangle on copied image
rect = cv2.rectangle(im2, (x, y), (x + w, y + h), (0, 255, 0), 2)
# # Cropping the text block for giving input to OCR
# cropped = im2[y:y + h, x:x + w]
# # Open the file in append mode
# file = open("recognized.txt", "a")
# # Apply OCR on the cropped image
# text = pytesseract.image_to_string(cropped)
# # Appending the text into file
# file.write(text)
# file.write("\n")
# # Close the file
# file.close()
display_img(im2)
```
# MAGNETIZATION VECTOR FIELD RECONSTRUCTION
## Notebook Documentation
<br>
<br>
<img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" />
Magnetization Vector Field Reconstruction Notebook Documentation is licensed under the Creative Commons Attribution 4.0 International License (2018)
To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Documentation Authors:
- Marc Rosanes
- Doga Gursoy
- Aurelio Hierro
# Magnetization vector field reconstruction principle
(1a) $$log[T_{+\delta}] + log[T_{-\delta}] = 2\int{L(t)^{-1}dt}$$
(2a) $$log[T_{+\delta}] - log[T_{-\delta}] = 2\int{L(t)^{-1}\delta(t)[\vec{k}\bullet \vec{m(t)}]dt}$$
Equation (1a) is used to obtain the values of the attenuation length (L), while equation (2a) allows extracting the magnetization configuration (**m**) of the system.
It is important to note that we are not directly reconstructing the reduced magnetization vector; we are reconstructing $2L(t)^{-1}\delta(t)\vec{m(t)}$. The contribution of the attenuation length can be accounted for by performing the scalar field reconstruction using equation (1a), and then using those values in the model to isolate $\delta(t)\vec{m(t)}$. The latter is proportional to the magnetization configuration.
*Reference for this slide: 3D reconstruction of magnetization from dichroic soft X-ray transmission tomography (Aurelio Hierro et al.)*
### Magnetization vector field reconstruction principle continuation
Thus it is necessary to take the attenuation length contribution into account when L is not constant throughout the object. If L has large variations and is not accounted for, the reconstruction could mostly reflect the L contribution instead of the contribution of the magnetization **m**, which is what we are interested in. However, if L is fairly constant throughout the object, reconstructing the full quantity $2L(t)^{-1}\delta(t)\vec{m(t)}$ will already give us a qualitative idea of the directions of the magnetization vector field **m**.
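The rescaling involved can be sketched with made-up numbers (this is purely illustrative and not part of the reconstruction pipeline; the values of `L` and `q` below are invented, while the factor of 2 follows equation (2a)):

```python
# Hypothetical per-voxel values, purely for illustration
L = 2.5               # attenuation length from the scalar (1a) reconstruction
q = [0.4, -0.8, 0.2]  # reconstructed vector quantity 2 * L^-1 * delta * m

# Isolate delta * m by undoing the 2/L scaling: delta_m = q * L / 2
delta_m = [qi * L / 2.0 for qi in q]
print(delta_m)  # the direction is unchanged; only the magnitude is rescaled
```

This also illustrates why a fairly constant L gives a qualitatively correct picture even without the rescaling: a uniform scale factor does not change the directions of the vectors.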
# Testing the 3D vector field reconstruction algorithm
## From a reconstructed 3D object to its projections and back
In order to test the algorithm, the projections of a reconstructed object can be computed, and from these projections we can come back to the reconstructed model object. If the reconstructed model object obtained from its computed projections looks similar to the initial 3D object, the vectorial reconstruction algorithm is performing correctly.
For our purposes, we will work with a simulated object with a fairly constant attenuation length L throughout the object. We will compute the projections of this object around three perpendicular cartesian rotation axes, and then use all these projections to reconstruct the object again. This will allow us to check the validity of the algorithm.
The projections of the object will give us the difference of the logarithms of the transmittance in each one of the three perpendicular axes.
### Testing the 3D vector field reconstruction algorithm
First, let's make the necessary imports
```
import dxchange
import tomopy
import numpy as np
import matplotlib.pyplot as plt
import time
```
...let us load our object: the three components of the magnetization vector along the object. The shapes will be read in order to verify that the object magnetization components (in this case: $2L^{-1}\delta\vec{m}$) have been correctly loaded. Afterwards we will pad the object in order to obtain a cubic object.
```
obx = dxchange.read_tiff('M4R1_mx.tif').astype('float32')
oby = dxchange.read_tiff('M4R1_my.tif').astype('float32')
obz = dxchange.read_tiff('M4R1_mz.tif').astype('float32')
print(np.shape(obx))
print(np.shape(oby))
print(np.shape(obz))
npad = ((182, 182), (64, 64), (0, 0))
obx = np.pad(obx, npad, mode='constant', constant_values=0)
oby = np.pad(oby, npad, mode='constant', constant_values=0)
obz = np.pad(obz, npad, mode='constant', constant_values=0)
```
...we will downsample the three axes of each of the magnetization vector components, in order to perform the calculations more rapidly.
```
obx = tomopy.downsample(obx, level=2, axis=0)
obx = tomopy.downsample(obx, level=2, axis=1)
obx = tomopy.downsample(obx, level=2, axis=2)
oby = tomopy.downsample(oby, level=2, axis=0)
oby = tomopy.downsample(oby, level=2, axis=1)
oby = tomopy.downsample(oby, level=2, axis=2)
obz = tomopy.downsample(obz, level=2, axis=0)
obz = tomopy.downsample(obz, level=2, axis=1)
obz = tomopy.downsample(obz, level=2, axis=2)
print(np.shape(obx))
print(np.shape(oby))
print(np.shape(obz))
```
Check the downsampled inputs to see that the given input datasets are coherent:
```
fig = plt.figure(figsize=(15, 8))
ax1 = fig.add_subplot(1, 3, 1)
ax1.imshow(obx[52,:,:])
ax2 = fig.add_subplot(1, 3, 2)
ax2.imshow(oby[52,:,:])
ax3 = fig.add_subplot(1, 3, 3)
ax3.imshow(obz[52,:,:])
fig = plt.figure(figsize=(15, 8))
ax1 = fig.add_subplot(1, 3, 1)
ax1.imshow(obx[55,:,:])
ax2 = fig.add_subplot(1, 3, 2)
ax2.imshow(oby[55,:,:])
ax3 = fig.add_subplot(1, 3, 3)
ax3.imshow(obz[55,:,:])
```
...we will define the projection angles: 31 angles, from 90 to 270 degrees:
```
ang = tomopy.angles(31, 90, 270)
```
...and calculate the projections of the object taking rotation axes around the three perpendicular cartesian axes:
```
prj1 = tomopy.project3(obx, oby, obz, ang, axis=0, pad=False)
prj2 = tomopy.project3(obx, oby, obz, ang, axis=1, pad=False)
prj3 = tomopy.project3(obx, oby, obz, ang, axis=2, pad=False)
fig = plt.figure(figsize=(15, 8))
ax1 = fig.add_subplot(1, 3, 1)
ax1.imshow(prj1[0,:,:])
ax2 = fig.add_subplot(1, 3, 2)
ax2.imshow(prj2[0,:,:])
ax3 = fig.add_subplot(1, 3, 3)
ax3.imshow(prj3[0,:,:])
```
Finally, we will reconstruct the magnetization vector field components (in fact, the reconstructed quantity is $2L^{-1}\delta\vec{m}$), taking as input the projections that we calculated from the initial 3D object.
```
t = time.time()
rec1, rec2, rec3 = tomopy.vector3(prj1, prj2, prj3, ang, ang, ang, axis1=0, axis2=1, axis3=2, num_iter=100)
dxchange.write_tiff(rec1)
dxchange.write_tiff(rec2)
dxchange.write_tiff(rec3)
print (time.time()-t)
print(np.shape(rec1))
print(np.shape(rec2))
print(np.shape(rec3))
```
# Comparison of the test results
In this section, we compare the magnetization vector field components obtained from the tomopy reconstruction against the magnetization vector field components of the object given as input:
## Comparison of the first component results: first component
Comparison of the first magnetization vector component against the input data object.
```
fig = plt.figure(figsize=(9, 7))
ax1 = fig.add_subplot(1, 2, 1)
ax1.imshow(obx[52,:,:])
ax2 = fig.add_subplot(1, 2, 2)
ax2.imshow(rec1[52,:,:])
```
## Comparison of the second component results: second component
Comparison of the second magnetization vector component against the input data object:
```
fig = plt.figure(figsize=(9, 7))
ax1 = fig.add_subplot(1, 2, 1)
ax1.imshow(oby[52,:,:])
ax2 = fig.add_subplot(1, 2, 2)
ax2.imshow(rec2[52,:,:])
```
## Comparison of the third component results: third component
Comparison of the third magnetization vector component against the input data object:
```
fig = plt.figure(figsize=(9, 7))
ax1 = fig.add_subplot(1, 2, 1)
ax1.imshow(obz[52,:,:])
ax2 = fig.add_subplot(1, 2, 2)
ax2.imshow(rec3[52,:,:])
```
## Comparison of the first component results (other slice): first component
Comparison of the first magnetization vector component against the input data object.
```
fig = plt.figure(figsize=(9, 7))
ax1 = fig.add_subplot(1, 2, 1)
ax1.imshow(obx[55,:,:])
ax2 = fig.add_subplot(1, 2, 2)
ax2.imshow(rec1[55,:,:])
```
## Comparison of the second component results (other slice): second component
Comparison of the second magnetization vector component against the input data object:
```
fig = plt.figure(figsize=(9, 7))
ax1 = fig.add_subplot(1, 2, 1)
ax1.imshow(oby[55,:,:])
ax2 = fig.add_subplot(1, 2, 2)
ax2.imshow(rec2[55,:,:])
```
## Comparison of the third component results (other slice): third component
Comparison of the third magnetization vector component against the input data object:
```
fig = plt.figure(figsize=(9, 7))
ax1 = fig.add_subplot(1, 2, 1)
ax1.imshow(obz[55,:,:])
ax2 = fig.add_subplot(1, 2, 2)
ax2.imshow(rec3[55,:,:])
```
# Practical 5: Modules and Functions - Solving differential equations
<div class="alert alert-block alert-success">
<b>Objectives:</b> In this practical the overarching objective is to continue to practice constructing a program in Python based on internal functions. This will be done through 2 different sections, each of which has an exercise for you to complete:
- 1) [Learning how we solve a single differential equation using the Scipy package](#Part1)
* [Exercise 1: Integrate an exponential decay equation](#Exercise1)
- 2) [Learning how we solve coupled systems of differential equations using the Scipy package](#Part2)
* [Exercise 2: Implementing the Predator-Prey equations, or the Lotka-Volterra model](#Exercise2)
As with our other notebooks, we will provide you with a template for plotting the results. Also please note that you should not feel pressured to complete every exercise in class. These practicals are designed for you to take outside of class and continue working on them. Proposed solutions to all exercises can be found in the 'Solutions' folder.
</div>
<div class="alert alert-block alert-warning">
<b>Please note:</b> After reading the instructions and aims of any exercise, search the code snippets for a note that reads 'INSERT CODE HERE' to identify where you need to write your code
</div>
### Learning how we solve a single differential equation using the Scipy package <a name="Part1"></a>
You may have already studied differential equations and possible methods for solving them. In this practical we will use a pre-existing module that has functions to solve ordinary differential equations, or ODEs. Specifically we will use a Python package known as [Scipy](https://www.scipy.org). These functions, or solvers, expect us to define the differential equation we want to solve, with some starting conditions. We refer to these as initial boundary problems. Let's look at some examples.
The radioactive decay equation is often used as a relatively simple example of an ODE for which we can find the analytical solution. The differential equation is given by the following expression:
\begin{equation*}
\frac{dN}{dt} = - \Gamma * N
\end{equation*}
How might we solve this equation using Python? As an illustration, let's assume that $\Gamma$ has a value of 5.0 and solve this equation using the Scipy package. When you run the code box below, what does the figure represent?
```
# Import the packages we need
import numpy as np # Import Numpy as usual
from scipy.integrate import odeint # We only want to use the 'odeint' [Solve Initial Value Problems] from the scipy.integrate package
import matplotlib.pyplot as plt # Import Matplotlib so we can plot results
# Set some initial conditions and parameter values
Gamma = 5.0 # Our decay constant
No = 1000 # Our initial value for 'N'
t = np.arange(0, 1, 0.01) # time span and steps (start, end, step size). This will have 100 entries
# Define a function that gives us the value for the differential equation
# Notice the variables passed to the function and, again, the use of the ':' operator and subsequent space underneath
def differential_function(N, t):
return -1.0*Gamma*N # Does this match with the differential equation given above?
# Solve the differential equation using the 'odeint' package
# A couple of things to note here:
# 1 - The ODE solver expects key information on starting conditions and the length [duration] of our integration
# 2 - The ODE solver returns an array which gives N as one column and time [t] as the other, so we can plot this below
solution = odeint(differential_function, No, t) # integrate, refer scipy docs
# To plot the results we can do the following:
fig = plt.figure(figsize=(5,5))
ax = plt.axes()
ax.plot(t,solution)
ax.set_title('Solution to our exponential function')
ax.set_ylabel('N')
ax.set_xlabel('time [seconds]')
```
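As a quick sanity check on what `odeint` returns, it can help to compare against the analytical solution $N(t) = N_0 e^{-\Gamma t}$. The sketch below is a pure-Python forward-Euler integration (independent of Scipy, so it is a cross-check rather than part of the practical itself) using the same $\Gamma = 5.0$ and $N_0 = 1000$:

```python
import math

Gamma, No = 5.0, 1000.0
dt, steps = 0.0001, 10000   # integrate from t=0 to t=1 in small steps

# Forward Euler: N_{k+1} = N_k + dt * (-Gamma * N_k)
N = No
for _ in range(steps):
    N += dt * (-Gamma * N)

analytic = No * math.exp(-Gamma * 1.0)  # N(1) = N0 * e^(-5)
print(N, analytic)  # the two values should agree closely
```

If the curve produced by `odeint` ends near this value at t = 1, the solver is doing what we expect.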
<div class="alert alert-block alert-success">
<b> Exercise 1: Integrate an exponential decay equation </b> <a name="Exercise1"></a>
To continue on this theme, in this first practical I would like you to loop through 10 different values of our exponential decay coefficient and then plot the resulting decay curves on one graph. In the box below, you will see where code is missing to complete this task. Copy the relevant code from the example above and work on looping through the different values of the decay coefficient. You will need to carry out the following operations:
- Import the required libraries
- Initialise an array that will store all of the decay values for each choice of decay coefficient
- Loop through calling the ODE solver for each choice of decay coefficient
- Update the array that stores all of the decay values
- Plot the results
You should arrive at a graph as shown in the following figure:

If you have any questions, do not hesitate to ask in class. Also please feel free to check the proposed solution provided.
</div>
```
# Import packages needed
import numpy as np # Import Numpy as usual
from scipy.integrate import odeint # We only want to use the 'odeint' [Solve Initial Value Problems] from the scipy.integrate package
import matplotlib.pyplot as plt # Import Matplotlib so we can plot results
# Set some initial conditions and parameter values
# In this example we are going to create an array that holds 10 values for our decay constant between 1 to 25.
# Let's create an empty array and manually place some values in it
Gamma_array = np.zeros((10), dtype=float)
Gamma_array[0]=1.0
Gamma_array[1]=1.5
Gamma_array[2]=2.0
Gamma_array[3]=5.0
Gamma_array[4]=8.0
Gamma_array[5]=12.0
Gamma_array[6]=15.0
Gamma_array[7]=20.0
Gamma_array[8]=22.5
Gamma_array[9]=25.0
# <<-------------------- INSERT CODE HERE ----------------------->>
No = 1000 # Our initial value for 'N' will remain as 1000
t = # time span and steps (start, end, number of entries). Same as before, with 100 entries
# We still need a function that calculates the derivative expression
def differential_function(N, t):
    return -1.0*Gamma*N
# Create loop a that selects different values of the decay coefficient and calls the ODE solver.
# For each solution, store these values in a new 2D array. I will create this array for you since
# we know there are 100 entries in our time array and 10 decay constants
solution_store_array = np.zeros((100,10), dtype=float)
# Now define your loop. HINT, to access the contents of the 'solution' array as one column, you can write solution[:,0]
for i in range(10):
Gamma=
solution = # integrate
solution_store_array[:,i] = solution[:,0]
# <<------------------------------------------------------------->>
# To plot the results we can use the following and Python will vary the colours of each line:
# Please ask about the way I have labeled each line if you would like to know
fig = plt.figure(figsize=(5,5))
ax = plt.axes()
lineObjects = ax.plot(t,solution_store_array)
plt.legend(iter(lineObjects), ('1.0','1.5','2.0','5.0','8.0','12.0','15.0','20.0','22.5','25.0'))
ax.set_title('Solution to our exponential function')
ax.set_ylabel('N')
ax.set_xlabel('time [seconds]')
```
### Additional example: Implementing the Lorentz system
In the above example, we had a differential equation describing how one variable changes over time. More often than not we study systems where components interact. In part 2 we will look at a set of equations known as the predator-prey equations. Before we do that, let's look at a general example of coupled differential equations known as the Lorenz system.
\begin{eqnarray}
\frac{dX}{dt} = &\sigma(Y-X)\\
\frac{dY}{dt} = &(\rho-Z)X - Y\\
\frac{dZ}{dt} = &XY-\beta Z
\end{eqnarray}
There is a lot of background behind the real-world relevance of this set of coupled differential equations. Briefly, it is a system of ordinary differential equations studied by Edward Lorenz, an American mathematician and meteorologist. You may have come across a saying relating to the potential chaos caused by a butterfly flapping its wings, also known as the 'butterfly effect'. This refers to physical systems that, despite following robust mathematical laws, exhibit chaotic behaviour from even minuscule changes in initial conditions.
In English, the equations state the following:
- The rate of change of component 'X' is a function of some constant $\sigma$ multiplied by component 'Y' minus 'X'
- The rate of change of component 'Y' is a function of some constant $\rho$ minus component 'Z', the product of which is multiplied by component 'X'; component 'Y' is then subtracted
- The rate of change of component 'Z' is a function of 'X' multiplied by 'Y', minus a constant $\beta$ times 'Z'
So we have 3 variables we need to consider. As we will demonstrate below, rather than pass a single variable to the ODE solver and thus our derivative function, we can pass multiple variables via an array.
The code below shows the full example and then produces a 3D plot of <code> X </code> , <code> Y </code> and <code> Z </code> to give the typical Lorentz attractor visualisation:
```
# import libraries we need
import numpy as np # Import Numpy as usual
from scipy.integrate import odeint # We only want to use the 'odeint' [Solve Initial Value Problems] from the scipy.integrate package
import matplotlib.pyplot as plt # Import Matplotlib so we can plot results
from mpl_toolkits.mplot3d import Axes3D #Import another package that allows us to plot 'nice' 3D plots
# Set Lorenz paramters and initial conditions
sigma = 10
beta = 2.667
rho = 28
X0 = 5
Y0 = 1
Z0 = 1.05
# Define the differential equation function the ODE solver will use
def differential_function(array, t):
# Notice here the function receives an array which has 3 cells
# Outside of this function we have decided that the 1st component is X
# .. the 2nd is Y and the 3rd is Z. Thus below we access those elements
# using Python indexing
X = array[0]
Y = array[1]
Z = array[2]
# The Lorenz equations
# Im using 3 new variables, with a name that hopefully makes sense.
# These follow the equations displayed above this box
dX_dt = sigma*(Y - X)
dY_dt = (rho-Z)*X-Y
dZ_dt = X*Y-beta*Z
# We now return another array, but this contains the values for each derivative. The ordering
# has to be the same order as the variables in 'array'
return [dX_dt, dY_dt, dZ_dt]
t = np.linspace(0, 100, num=10000) # time span and steps (start, end, number of entries); here 10000 entries
# Integrate the Lorenz equations
# Solve the differential equation using the 'odeint' package
# A couple of things to note here, again:
# 1 - The ODE solver expects key information on starting conditions and the length [duration] of our integration
# 2 - The ODE solver returns an array with one column per variable [X, Y, Z], evaluated at the requested times, so we can plot this below
solution = odeint(differential_function, (X0, Y0, Z0), t)
# As we are now passing an array of variables, the function will return an array of each variable value at the given time stamps defined by our 't' variables
x = solution[:,0] # X is the first variable in our array, thus first column in our output
y = solution[:,1] # Y is the second variable in our array, thus second column in our output
z = solution[:,2] # Z is the third variable in our array, thus third column in our output
# Plot the Lorenz attractor using a Matplotlib 3D projection
fig = plt.figure(figsize=(10,10))
ax = fig.gca(projection='3d')
ax.plot(x, y, z, alpha=0.4) #Please note the parameter 'alpha' here defines a level of transparency.
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
```
Why not change the following variables and see how the image responds?
- Change $\beta$ to 4.0
- Change the number of time steps to 1000
In Exercise 2 we will now introduce another set of coupled equations.
<div class="alert alert-block alert-success">
<b> Exercise 2 : Implementing the Predator-Prey equations, or the Lotka-Volterra model. </b> <a name="Exercise2"></a>
This example is taken straight from the ['cookbook' of the Python package Scipy](https://scipy-cookbook.readthedocs.io/items/LoktaVolterraTutorial.html). We will have a look at the Lotka-Volterra model, also known as the predator-prey equations. This model comprises a pair of first order, non-linear, differential equations frequently used to describe the dynamics of biological systems in which two species interact; one a predator and the other its prey. The model was proposed independently by Alfred J. Lotka in 1925 and Vito Volterra in 1926.
The equations are presented as follows:
\begin{equation*}
\frac{dU}{dt} = aU - bUV \\
\frac{dV}{dt} = -cV + dbUV
\end{equation*}
Where the variables and constant parameters are defined as follows:
- U: number of preys (for example, rabbits)
- V: number of predators (for example, foxes)
- a is the natural growing rate of rabbits, when there's no fox
- b is the natural dying rate of rabbits, due to predation by foxes
- c is the natural dying rate of fox, when there's no rabbit
- d is the factor describing how many caught rabbits lead to an increased population of foxes. Note the similarity with the predation rate in the first equation.
How do we solve these equations? As with the first example, we can use the ODE solvers in the Scipy package. To do this we need to perform the following actions and then implement these as code:
- create some initial conditions that define our starting populations of rabbits and foxes
- define the constant parameters a, b, c and d
- set up the function that defines our differential equations and is called by the internal ODE solver
- run the simulation for a period of time
- plot the results.
In this example we provide the template to import the libraries needed and plot the results, but you are tasked with completing the code and running the simulation. You should see an identical graph to the figure below.

</div>
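Before coding the `odeint` version, it can help to see the oscillatory dynamics emerge from a few hand-stepped iterations. The following is a forward-Euler sketch, not the exercise solution; the parameter values here are the ones used in the Scipy cookbook tutorial and are an assumption, so the values you choose for the exercise may differ:

```python
# Forward-Euler sketch of the Lotka-Volterra dynamics.
# Parameter values below are assumed (taken from the Scipy cookbook),
# not part of the exercise statement.
a, b, c, d = 1.0, 0.1, 1.5, 0.75
U, V = 10.0, 5.0        # initial rabbits and foxes
dt = 0.001

for _ in range(15000):  # integrate from t=0 to t=15
    dU_dt = a*U - b*U*V          # prey equation from the text
    dV_dt = -c*V + d*b*U*V       # predator equation from the text
    U, V = U + dt*dU_dt, V + dt*dV_dt

print(U, V)  # both populations stay positive and keep oscillating
```

Forward Euler slowly distorts the closed orbits of this system, which is one reason the exercise uses `odeint` instead.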
```
# import libraries we need
import numpy as np # Import Numpy as usual
from scipy.integrate import odeint # We only want to use the 'odeint' [Solve Initial Value Problems] from the scipy.integrate package
import matplotlib.pyplot as plt # Import Matplotlib so we can plot results
from mpl_toolkits.mplot3d import Axes3D #Import another package that allows us to plot 'nice' 3D plots
#-------'INSERT CODE HERE'-------
# Definition of parameters
a =
b =
c =
d =
U0 =
V0 =
def differential_equations(array, t):
return [dU_dt,dV_dt]
#--------------------------------
t = np.linspace(0, 15, num=1000) # time span and steps (start, end, number of entries); here 1000 entries
# Integrate the Lotka-Volterra equations
# Solve the differential equation using the 'odeint' package
# A couple of things to note here, again:
# 1 - The ODE solver expects key information on starting conditions and the length [duration] of our integration
# 2 - The ODE solver returns an array with one column per variable [U, V], evaluated at the requested times, so we can plot this below
#-------'INSERT CODE HERE'-------
solution =
#--------------------------------
# As we are now passing an array of variables, the function will return an array of each variable value at the given time stamps defined by our 't' variables
rabbits = solution[:,0] # U (rabbits) is the first variable in our array, thus the first column in our output
foxes = solution[:,1] # V (foxes) is the second variable in our array, thus the second column in our output
# Plot the two populations against time
fig = plt.figure(figsize=(10,10))
plt.plot(t, rabbits, 'r-', label='Rabbits')
plt.plot(t, foxes , 'b-', label='Foxes')
plt.grid()
plt.legend(loc='best')
plt.xlabel('time')
plt.ylabel('population')
plt.title('Evolution of fox and rabbit populations')
plt.show()
```
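For reference, one possible completion of the template above is sketched below. The parameter values and initial populations here are illustrative guesses, not the "official" ones used to produce the target figure, so your graph may differ until you tune them.

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative constants -- the exercise leaves these for you to choose
a, b, c, d = 1.0, 0.1, 1.5, 0.075
U0, V0 = 10.0, 5.0  # initial rabbit and fox populations

def differential_equations(populations, t):
    U, V = populations                # rabbits, foxes
    dU_dt = a * U - b * U * V         # growth minus predation
    dV_dt = -c * V + d * U * V        # natural dying plus food-driven growth
    return [dU_dt, dV_dt]

t = np.linspace(0, 15, num=1000)
solution = odeint(differential_equations, [U0, V0], t)
print(solution.shape)  # (1000, 2): one column per species
```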
```
import tensorflow as tf
tf.__version__
from google.colab import drive
drive.mount('/content/drive')
pwd #check where you are
cd /content/drive/MyDrive/Colab Notebooks/Traffic-signals
pwd
ls #Check the files inside the folder where you're now
```
## Download the dataset
```
#link dataset
link = "https://d17h27t6h515a5.cloudfront.net/topher/2017/February/5898cd6f_traffic-signs-data/traffic-signs-data.zip"
!pip install wget #command line
import wget
#wget.download(link) #only run one time
#download the dataset inside the folder where you are now, using pwd to find where you are
data = "./" #unzip your dataset file at your present position
!unzip -q traffic-signs-data.zip -d $data
```
# Run from here
```
#Create the links for each of your train/valid/test set
train_link = data + "train.p"
valid_link = data + "valid.p"
test_link = data + "test.p"
import pickle
# "rb": read binary
with open(train_link, mode="rb") as f: #f means "file"
train = pickle.load(f)
# read valid set
with open(valid_link, mode="rb") as f:
valid = pickle.load(f)
# read test set
with open(test_link, mode="rb") as f:
test = pickle.load(f)
```
## Explore the dataset
```
train
# Find the name of the dictionary by the cell above
x_train = train['features']
y_train = train['labels']
x_train.shape
```
34799 is the number of pictures; each is 32*32 pixels with 3 color channels (RGB)
```
x_train[0].shape # Dimensions of the first picture
import matplotlib.pyplot as plt
plt.imshow(x_train[1013]) # Show the image of index 1013
y_train[1013] # Labels of the image of index 1013 above
# Access classNames of each label
classNames = {
0: 'Speed limit (20km/h)',
1: 'Speed limit (30km/h)',
2: 'Speed limit (50km/h)',
3: 'Speed limit (60km/h)',
4: 'Speed limit (70km/h)',
5: 'Speed limit (80km/h)',
6: 'End of speed limit (80km/h)',
7: 'Speed limit (100km/h)',
8: 'Speed limit (120km/h)',
9: 'No passing',
10: 'No passing for vehicles over 3.5 metric tons',
11: 'Right-of-way at the next intersection',
12: 'Priority road',
13: 'Yield',
14: 'Stop',
15: 'No vehicles',
16: 'Vehicles over 3.5 metric tons prohibited',
17: 'No entry',
18: 'General caution',
19: 'Dangerous curve to the left',
20: 'Dangerous curve to the right',
21: 'Double curve',
22: 'Bumpy road',
23: 'Slippery road',
24: 'Road narrows on the right',
25: 'Road work',
26: 'Traffic signals',
27: 'Pedestrians',
28: 'Children crossing',
29: 'Bicycles crossing',
30: 'Beware of ice/snow',
31: 'Wild animals crossing',
32: 'End of all speed and passing limits',
33: 'Turn right ahead',
34: 'Turn left ahead',
35: 'Ahead only',
36: 'Go straight or right',
37: 'Go straight or left',
38: 'Keep right',
39: 'Keep left',
40: 'Roundabout mandatory',
41: 'End of no passing',
42: 'End of no passing by vehicles over 3.5 metric tons'}
plt.imshow(x_train[9999]) # Show the image of index 9999
classNames[y_train[9999]] # Labels of the image of index 9999 above
plt.imshow(x_train[1]) # Show the image of index 1 (the second image)
y_train[1] # Labels of the image of index 1 above, output = 41
classNames[y_train[1]]
```
Conclusion:
1. The dataset has clear images with different backgrounds and colors, so it is well suited for training.
2. Similar images are placed next to each other, so we need to shuffle the data for the model to generalize better during learning.
## Shuffle the dataset
```
from sklearn.utils import shuffle
# Split validation set
x_valid = valid['features']
y_valid = valid['labels']
# Split test set
x_test = test['features']
y_test = test['labels']
x_train, y_train = shuffle(x_train, y_train)
x_valid, y_valid = shuffle(x_valid, y_valid)
x_test, y_test = shuffle(x_test, y_test)
plt.imshow(x_train[1]) # Check if the data has been shuffled
y_train[1] # Check the label, output = 9
classNames[y_train[1]]
```
### Normalization (to values between 0 and 1)
```
x_train = x_train.astype("float") / 255.0 # because each pixel value is between 0 and 255
x_valid = x_valid.astype("float") / 255.0
x_test = x_test.astype("float") / 255.0
x_train
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
```
#### One-hot encode the labels
```
y_train = lb.fit_transform(y_train)
y_valid = lb.fit_transform(y_valid)
y_test = lb.fit_transform(y_test)
y_train[0]
# instead of having 9 as output, it creates an array with all possible labels and puts 1 in the position of
# the correct label and 0s for the other ones
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import BatchNormalization, AveragePooling2D, MaxPooling2D, Conv2D
from tensorflow.keras.layers import Activation, Dropout, Flatten, Input, Dense, concatenate
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import SGD
width = 32
height = 32
classes = 43
shape = (width, height, 3)
```
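As a standalone illustration of what `LabelBinarizer` produces (the labels here are made up):

```python
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
onehot = lb.fit_transform([0, 9, 41, 9])  # 3 distinct classes -> 3 columns
print(lb.classes_)  # [ 0  9 41]
print(onehot[1])    # [0 1 0]: a single 1 marks the class, 0s elsewhere
```

Note that with exactly two classes `LabelBinarizer` returns a single column instead of two.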
More about Sequential() model in keras: https://stackoverflow.com/questions/57751417/what-is-meant-by-sequential-model-in-keras
```
model = Sequential([
Conv2D(32, (3, 3), padding="same", input_shape=shape),
Activation("relu"),
BatchNormalization(),
Conv2D(32, (3, 3), padding="same"),
Activation("relu"),
BatchNormalization(),
MaxPooling2D(pool_size=(2,2)),
# Create the structure for DL model and nodes (not trained yet)
# There are two ways to create the structure for DL model: this one and use the add() function below
# Be careful with the order
])
#model.add(Conv2D(32, (3, 3), padding="same", input_shape=shape))
#model.add(Activation("relu"))
#model.add(BatchNormalization())
#model.add(Conv2D(32, (3, 3), padding="same"))
#model.add(Activation("relu"))
#model.add(BatchNormalization())
#model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Conv2D(64, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dense(classes))
model.add(Activation("softmax"))
model.summary()
```
Param is the number of parameters to learn. MaxPooling doesn't need to learn anything (it just takes the maximum), so its number of params is 0.
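The Param counts in the summary can be checked by hand: for a Conv2D layer it is (kernel_height × kernel_width × input_channels + 1 bias) × filters. A quick sketch (the helper `conv2d_params` is just for illustration):

```python
def conv2d_params(kernel, in_channels, filters):
    # weights per filter plus one bias per filter
    return (kernel * kernel * in_channels + 1) * filters

print(conv2d_params(3, 3, 32))   # first Conv2D on a 32x32x3 input -> 896
print(conv2d_params(3, 32, 32))  # second Conv2D -> 9248
```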
```
# Data augmentation: rotate, zoom, shift the images, etc. so the model learns many states of the same object in an image
aug = ImageDataGenerator(rotation_range=0.18, zoom_range=0.15, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True)
learning_rate = .01 # too low => learns slowly, too high => may overshoot the optimum (the global minimum of the loss)
epochs = 10 # number of full passes over the training data
batch_size = 64 # run just a part of the data in each step to save resources
opt = SGD(learning_rate=learning_rate, momentum=0.9)
model.compile(optimizer=opt, loss="categorical_crossentropy", metrics=["accuracy"])
print("Start training")
H = model.fit_generator(aug.flow(x_train, y_train, batch_size=batch_size), validation_data=(x_valid, y_valid), steps_per_epoch=x_train.shape[0]//batch_size, epochs=epochs, verbose=1)
model.save("Traffic-signals.h5") # h5 is the format of keras
model
saved_model = tf.keras.models.load_model("Traffic-signals.h5")
import numpy as np
# Test prediction
predict = model.predict(x_test[200:201])
predict #softmax results (between 0 and 1, the largest one is the predicted label)
final1 = np.argmax(predict)
final1 = classNames[final1]
final1
plt.imshow(x_test[200])
# Second prediction
prediction = saved_model.predict(x_test[80:81])
prediction
final2 = np.argmax(prediction)
final2 = classNames[final2]
final2
plt.imshow(x_test[80])
```
Link instruction file: https://docs.google.com/document/d/1mkgS4ZKO3kPrdZDlLHYe_XJVI_VmoYEtj0ilNNbfC9E/edit
```
import numpy as np
import sympy
sympy.init_printing(use_unicode=True)
from sympy import symbols,simplify,diff,latex,Piecewise
from sympy.solvers import solve
from IPython.display import display
from typing import Callable
from sympy.utilities.lambdify import lambdify, implemented_function
%matplotlib inline
import matplotlib.pyplot as plt
def simplified(exp, title=None):
simp = simplify(exp)
if simplified.LOG:
if title: display(title,simp)
else: display(simp)
return simp
simplified.LOG = True
def firstOrderCondition(exp, var):
diffExp = simplified(diff(exp, var))
solutions = solve(diffExp, var)
if firstOrderCondition.LOG:
display(solutions)
return solutions
firstOrderCondition.LOG = True
class Result(object): # a class for holding results of calculations
def __repr__(self): return self.__dict__.__repr__()
def display(self):
for k,v in sorted(self.__dict__.items()):
display(k,v)
def subs(self, params):
ans = Result()
for k,v in sorted(self.__dict__.items()):
if hasattr(v,"subs"):
ans.__dict__[k] = v.subs(params)
else:
ans.__dict__[k] = v
return ans
```
# Symbolic calculations
```
a,p,r,b,vmax,bmin,bmax,beta = symbols('a p r b v_{\max} b_{\min} b_{\max} \\beta', positive=True)
w,T,D,L,n,Supply = symbols('w T \\Delta L n \\tau', positive=True)
D,Supply
def exactCostPerDay(T):
return (a*p + w*b*( (1+r)**T - 1 )) / T
def approxCostPerDay(T):
return a*p/T + w*b*r
def symmetricLifetime(w):
return w**2/4/L
def asymmetricLifetime(w):
return w / D
uniformPDF = Piecewise( (1 / bmax , b<bmax), (0, True) )
powerlawPDF = Piecewise( (0 , b<bmin), (bmin / b**2, True) )
display(sympy.integrate(uniformPDF, (b, 0, np.inf))) # should be 1
display(sympy.integrate(powerlawPDF, (b, 0, np.inf))) # should be 1
params = {
L: 10, # total transfers per day
D: 6, # delta transfers per day
beta: 0.01, # value / transfer-size
r: 4/100/365, # interest rate per day
a: 1.1, # records per reset tx
Supply: 288000, # records per day
bmin: 0.001, # min transfer size (for power law distribution)
bmax: 1, # max transfer size (for uniform distribution)
}
def calculateMarketEquilibrium(costPerDay:Callable, channelLifetime:Callable, wSolutionIndex:int):
T = simplified(channelLifetime(w), "T")
CPD = simplified(costPerDay(T), "CPD")
optimal = Result()
optimal.w = simplified(firstOrderCondition(CPD,w)[wSolutionIndex], "Optimal channel funding (w)")
optimal.T = simplified(T.subs(w,optimal.w), "optimal channel lifetime (T)")
optimal.CPD = simplified(CPD.subs(w,optimal.w), "Cost-per-day")
optimal.RPD = simplified(a / optimal.T, "Potential records per day")
optimal.C = simplified(optimal.CPD*optimal.T, "Cost between resets")
optimal.V = simplified(optimal.T*L*beta*b, "Value between resets")
optimal.VCR1 = 1
optimal.VCR2 = simplified(optimal.V / optimal.C, "Value/Cost Ratio of lightning")
optimal.VCR3 = simplified(beta*b / p, "Value/Cost Ratio of blockchain")
optimal.b12 = simplified(solve(optimal.VCR1-optimal.VCR2,b)[0],"b below which an agent prefers nop to lightning")
optimal.b13 = simplified(solve(optimal.VCR1-optimal.VCR3,b)[0],"b below which an agent prefers nop to blockchain")
optimal.b23 = simplified(solve(optimal.VCR2-optimal.VCR3,b)[0],"b below which an agent prefers lightning to blockchain")
# Calculate threshold prices. This part is relevant only for uniform valuations.
optimal.p12 = simplified(solve(optimal.b12-bmax,p)[0],"price above which all agents prefer nop to lightning")
optimal.p13 = simplified(solve(optimal.b13-bmax,p)[0],"price above which all agents prefer nop to blockchain")
optimal.p23 = simplified(solve(optimal.b23-bmax,p)[0],"price above which all agents prefer lightning to blockchain")
# substitute the numeric params:
numeric = optimal.subs(params)
numeric.b23 = numeric.b23.evalf()
numeric.p23 = numeric.p23.evalf()
return (optimal,numeric)
simplified.LOG = False
firstOrderCondition.LOG = False
(asymmetricSymbolic,asymmetricNumeric) = calculateMarketEquilibrium(approxCostPerDay,asymmetricLifetime,wSolutionIndex=0)
#asymmetricSymbolic.display()
asymmetricNumeric.display()
simplified.LOG = False
firstOrderCondition.LOG = False
(symmetricSymbolic,symmetricNumeric) = calculateMarketEquilibrium(approxCostPerDay,symmetricLifetime,wSolutionIndex=0)
symmetricNumeric.display()
```
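As a sanity check on the symbolic pipeline, the asymmetric case can be minimized by hand: with T = w/D, the approximate cost per day is a·p·D/w + w·b·r, and the first-order condition gives w* = sqrt(a·p·D/(b·r)). A minimal standalone sketch (using a plain symbol `D` in place of the document's Δ):

```python
import sympy

a, p, D, b, r, w = sympy.symbols('a p D b r w', positive=True)
CPD = a*p*D/w + w*b*r                     # approxCostPerDay with T = w/D
wstar = sympy.solve(sympy.diff(CPD, w), w)[0]
print(sympy.simplify(wstar - sympy.sqrt(a*p*D/(b*r))) == 0)  # True
```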
# Demand curves
```
### Generic function for calculating demand - does not give plottable expressions:
def calculateDemands(optimal, valuePDF):
optimal.demandWithoutLightning = simplified(
sympy.integrate(L * valuePDF, (b, optimal.b13,np.inf)),
"demand without lightning"
)
optimal.demandWithLightning = simplified(
sympy.integrate(a / optimal.T * valuePDF, (b, optimal.b12,optimal.b23)) +\
sympy.integrate(L * valuePDF, (b, optimal.b23,np.inf)),
"demand with lightning"
)
return optimal
simplified.LOG = True
calculateDemands(asymmetricSymbolic, uniformPDF)
asymmetricNumeric = asymmetricSymbolic.subs(params)
display(asymmetricNumeric.demandWithoutLightning)
display(asymmetricNumeric.demandWithLightning)
```
# Plots
```
def plotSymbolic(xRange, yExpression, xVariable, style, label):
plt.plot(xRange, [yExpression.subs(xVariable,xValue) for xValue in xRange], style, label=label)
def plotDemandCurves(priceRange, demandWithoutLightning, demandAsymmetric, demandSymmetric):
plotSymbolic(priceRange, demandWithoutLightning, p, "r-",label="no lightning")
plotSymbolic(priceRange, demandAsymmetric, p, "b.",label="asymmetric")
plotSymbolic(priceRange, demandSymmetric, p, "g--",label="symmetric")
plt.gca().set_ylim(-1,11)
plt.xlabel("blockchain fee $p$ [coins]")
plt.ylabel("Demand of a single pair [records/day]")
plt.legend()
def plotPriceCurves(nRange, priceWithoutLightning, priceAsymmetric, priceSymmetric):
plotSymbolic(nRange, priceWithoutLightning, n, "r-",label="no lightning")
plotSymbolic(nRange, priceAsymmetric, n, "b.",label="asymmetric")
if priceSymmetric:
plotSymbolic(nRange, priceSymmetric, n, "g--",label="symmetric")
#plt.gca().set_ylim(-1,11)
plt.xlabel("Number of pairs")
plt.ylabel("Market-equilibrium price [coins/record]")
plt.legend()
```
## Uniform distribution
```
def calculateDemandsUniformDistribution(optimal):
optimal.demandB13 = sympy.integrate(L / bmax, (b, optimal.b13, bmax))
optimal.demandWithoutLightning = simplified(Piecewise(
(optimal.demandB13, p < optimal.p13), # b13 < bmax
(0, True)),
"demand without lightning"
)
optimal.demandL1 = sympy.integrate(a / optimal.T / bmax, (b, optimal.b12, optimal.b23)) # b12<b23<bmax
optimal.demandL2 = sympy.integrate(a / optimal.T / bmax, (b, optimal.b12, bmax)) # b12<bmax<b23
optimal.demandB23 = sympy.integrate(L / bmax, (b, optimal.b23, bmax)) # b23<bmax
optimal.demandWithLightning = simplified(Piecewise(
(optimal.demandL1+optimal.demandB23 , p < optimal.p23), # b23 < bmax
(optimal.demandL2 , p < optimal.p12), # b12 < bmax
(0, True)),
"demand with lightning"
)
simplified.LOG = True
calculateDemandsUniformDistribution(asymmetricSymbolic)
asymmetricNumeric = asymmetricSymbolic.subs(params)
display(asymmetricNumeric.demandWithoutLightning)
display(asymmetricNumeric.demandWithLightning)
calculateDemandsUniformDistribution(symmetricSymbolic)
symmetricNumeric = symmetricSymbolic.subs(params)
display(symmetricNumeric.demandWithoutLightning)
display(symmetricNumeric.demandWithLightning)
#plot:
priceRange = np.linspace(0,1e-4,100)
plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightning, asymmetricNumeric.demandWithLightning, symmetricNumeric.demandWithLightning)
plt.savefig('../graphs/demand-curves-small-price.pdf', format='pdf', dpi=1000)
plt.show()
priceRange = np.linspace(0,0.015,100)
plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightning, asymmetricNumeric.demandWithLightning, symmetricNumeric.demandWithLightning)
plt.gca().set_ylim(-0.1,1)
plt.savefig('../graphs/demand-curves-large-price.pdf', format='pdf', dpi=1000)
plt.show()
simplified.LOG = True
### Demand curves
#asymmetric case:
(b12,b13,b23) = (asymmetricNumeric.b12,asymmetricNumeric.b13,asymmetricNumeric.b23)
(p12,p13,p23) = (asymmetricNumeric.p12,asymmetricNumeric.p13,asymmetricNumeric.p23)
demand1 = (2/3 * sympy.sqrt(D*a*r/p/bmax**2)*(b23**(3/2) - b12**(3/2)) + L*(bmax-b23)).subs(params)
demand2 = (2/3 * sympy.sqrt(D*a*r/p/bmax**2)*(bmax**(3/2) - b12**(3/2))).subs(params)
asymmetricNumeric.demandWithLightning = Piecewise(
(demand1 , p < p23),
(demand2 , p < p12),
(0, True))
#symmetric case:
(b12s,b13s,b23s) = (symmetricNumeric.b12,symmetricNumeric.b13,symmetricNumeric.b23)
(p12s,p13s,p23s) = (symmetricNumeric.p12,symmetricNumeric.p13,symmetricNumeric.p23)
demand1s = (3/5 * (L*a*r**2/p**2/bmax**3)**(1/3)*(b23s**(5/3) - b12s**(5/3)) + L*(bmax-b23s)).subs(params)
demand2s = (3/5 * (L*a*r**2/p**2/bmax**3)**(1/3)*(bmax**(5/3) - b12s**(5/3))).subs(params)
symmetricNumeric.demandWithLightning = Piecewise(
(demand1s , p < p23s),
(demand2s , p < p12s),
(0, True))
#plot:
priceRange = np.linspace(0,1e-4,100)
plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightning, asymmetricNumeric.demandWithLightning, symmetricNumeric.demandWithLightning)
plt.savefig('../graphs/demand-curves-small-price.pdf', format='pdf', dpi=1000)
plt.show()
priceRange = np.linspace(0,0.015,100)
plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightning, asymmetricNumeric.demandWithLightning, symmetricNumeric.demandWithLightning)
plt.gca().set_ylim(-0.1,1)
plt.savefig('../graphs/demand-curves-large-price.pdf', format='pdf', dpi=1000)
plt.show()
### Price curves
simplified.LOG = True
max_demand = params[L]
min_demand = asymmetricNumeric.demandWithLightning.subs(p,p13).evalf();
priceWithoutLightning = simplified(Piecewise(
(beta*bmax*(1-Supply/n/L) , n*L>Supply),
(0,True)).subs(params))
price1 = simplified(solve(n*demand1-Supply, p)[0])
price2 = simplified(solve(n*demand2-Supply, p)[0])
max_demand1 = demand1.subs(p,0).evalf();
min_demand1 = demand1.subs(p,p23).evalf();
max_demand2 = demand2.subs(p,p23).evalf();
min_demand2 = demand2.subs(p,p12).evalf();
asymmetricNumeric.priceWithLightning = simplified(Piecewise(
(0, Supply > n*max_demand1),
(price1 , Supply > n*min_demand1),
(price2 , Supply > n*min_demand2),
(0, True)).subs(params))
#price1s = simplified(solve(n*demand1s-Supply, p)[0])
# price2s = simplified(solve(n*demand2s-Supply, p)[0]) # unsolvable
max_demand1s = demand1s.subs(p,0).evalf();
min_demand1s = demand1s.subs(p,p23s).evalf();
max_demand2s = demand2s.subs(p,p23s).evalf();
min_demand2s = demand2s.subs(p,p12s).evalf();
symmetricNumeric.priceWithLightning = None
#symmetricNumeric.priceWithLightning = simplified(Piecewise(
# (0, Supply > n*max_demand1s),
# (price1s , Supply > n*min_demand1s),
# price2s, Supply > n*min_demand2s), # u
# (0, True)).subs(params))
nRange = np.linspace(0,3000000,100)
plotPriceCurves(nRange, priceWithoutLightning, asymmetricNumeric.priceWithLightning, symmetricNumeric.priceWithLightning)
plt.savefig('../graphs/price-curves.pdf', format='pdf', dpi=1000)
```
# SenseHat Project: Labyrinth
Made by Werro Jeremy and Gaglio Pierre
## Getting started
The goal of this project is to build a pattern-memorisation game on the SenseHat module. To do so, we chose to create 9 levels with paths of varying difficulty. The player must carefully watch the path traced by the console in order to reproduce it on their own.
First, we import the sense_hat module as well as the time module to get access to their libraries. Then we instantiate the SenseHat class. Finally, we clear the console so that we start programming on a blank display.
```
from sense_hat import SenseHat
from time import sleep, time
sense = SenseHat()
sense.clear()
```
### Creating the variables
Next, we define the colours we will need for our game.
```
WHITE = (255, 255, 255)
LEMON = (255, 255, 128)
PINK = (255, 0, 128)
RED = (255, 0, 0)
MINT = (128, 255, 128)
BLUE = (0, 0, 255)
```
Next, we add the level variables with their patterns.
```
N1 = [[0,0],[0,1],[0,2],[0,3],
[0,4],[0,5],[0,6],[0,7],
[1,7],[2,7],[3,7],[4,7],
[5,7],[6,7],[7,7],
]
N2 = [[0,0],[0,1],[0,2],[0,3],
[0,4],[1,4],[2,4],[3,4],
[4,4],[5,4],[6,4],[7,4],
[7,5],[7,6],[7,7],
]
N3 = [[0,0],[1,0],[1,1],[1,2],
[2,2],[3,2],[3,3],[3,4],
[3,5],[4,5],[5,5],[6,5],
[6,6],[6,7],[7,7],
]
N4 = [[0,0],[1,0],[1,1],[2,1],
[3,1],[3,0],[4,0],[5,0],
[5,1],[5,2],[5,3],[4,3],
[3,3],[2,3],[1,3],[0,3],
[0,4],[0,5],[0,6],[1,6],
[2,6],[3,6],[4,6],[5,6],
[6,6],[7,6],[7,7],
]
N5 = [[0,0]]
N6 = [[0,0]]
N7 = [[0,0]]
N8 = [[0,0]]
N9 = [[0,0]]
levels = [N1, N2, N3, N4, N5, N6, N7, N8, N9]
```
We then name the levels in the "levels" list according to their position in the list (N1 corresponds to level 1, and so on).
This lets us display the level name/number in the selection menu.
```
lvl_name = []
for i in range(len(levels)):
a = str(i+1)
lvl_name.append(a)
print(N1)
```
### Defining the game phases
#### Pattern-demonstration phase
Once the game variables are defined, we define the function "patern_stage()", which warns the player with a message, then clears the screen and shows the pattern the player will have to reproduce after the demonstration.
For this example, we test the function with the variable N1 defined earlier.
```
def patern_stage(niv):
dist = len(niv)
sense.show_message("Ready ? 3 2 1",
text_colour=WHITE, scroll_speed=0.02)
sense.clear(MINT)
sleep(1)
for step in range(dist):
sense.set_pixel(niv[step][0],niv[step][1], PINK)
sleep(1)
sleep(3)
sense.clear(MINT)
patern_stage(N1)
```
#### Player-action phase
After defining the function that shows the pattern, we create the function "player_stage()". This function maps the SenseHat joystick movements to the movement of the pixel. Every move the player makes is recorded and, once the pattern has been reproduced, the player only has to confirm it by pressing the joystick.
If the player reproduced the pattern correctly, a "WINNER" message is shown; otherwise the message "LOSER" appears.
```
def player_stage(niv):
running = True
(x, y) = (0, 0)
state = [[0, 0]]
while running:
for event in sense.stick.get_events():
if event.action == 'pressed':
if event.direction == 'left':
x = max(x-1, 0)
state.append([x, y])
elif event.direction == 'right':
x = min(x+1, 7)
state.append([x, y])
if event.direction == 'down':
y = min(y+1, 7)
state.append([x, y])
elif event.direction == 'up':
y = max(y-1, 0)
state.append([x, y])
elif event.direction == 'middle':
running = False
sense.set_pixel(x, y, RED)
if state[:] == niv[:]:
sense.show_message("WINNER !",
text_colour=LEMON, scroll_speed=0.02)
sleep(2)
main() # return to the level-selection menu
else:
sense.show_message("LOSER !",
text_colour=BLUE, scroll_speed=0.02)
sleep(2)
main()
```
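The joystick-clamping logic inside player_stage can be isolated as a pure function and tested on its own (a sketch; `move` is not part of the original code):

```python
# Clamp moves to the SenseHat's 8x8 LED grid (coordinates 0..7)
def move(x, y, direction):
    if direction == 'left':
        x = max(x - 1, 0)
    elif direction == 'right':
        x = min(x + 1, 7)
    elif direction == 'down':
        y = min(y + 1, 7)
    elif direction == 'up':
        y = max(y - 1, 0)
    return x, y

print(move(0, 0, 'left'))   # (0, 0): the pixel cannot leave the grid on the left
print(move(7, 3, 'right'))  # (7, 3): nor on the right
```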
The following function can be added for when the player fails. It offers the choice of retrying the level or returning to the main menu.
```
def try_again(niv):
wait = True
answer = 0 #(0 = Yes , 1 = No)
sense.show_message('Try again?',
text_colour=WHITE,scroll_speed=0.05)
sense.show_letter('Y',
text_colour=WHITE)
while wait == True:
for event in sense.stick.get_events():
if event.action == 'pressed':
if event.direction == 'left': #select to try again by clicking on the left
if answer >= 1:
answer = answer - 1
sense.show_letter('Y',
text_colour=WHITE)
else:
pass
elif event.direction == 'right': #select to go back to main menu by clicking on the right
if answer <= 0:
answer = answer + 1
sense.show_letter('N',
text_colour=WHITE)
else:
pass
elif event.direction == 'middle': #applies the selection by clicking on the middle.
wait = False
if answer == 0:
start_level(niv)
elif answer == 1:
main()
else: #If the player moves up or down, it goes back to main menu.
main()
```
#### Combining the two functions
To reduce the number of lines of code, we wrap the two functions defined above so that, when a level is selected, they run in order. To that end, we define a third function named "start_level".
```
def start_level(niv):
patern_stage(niv)
player_stage(niv)
```
### Defining the selection menu
Since the game offers several levels, we define a selection menu "main" so the player can pick the one of their choice.
This function first displays a message inviting the player to select a level. Once the message ends, the player can browse the levels (1 to 9) with joystick movements (right or left). Once a level is chosen, pressing the joystick confirms the choice and launches the level by calling "start_level".
```
def main():
running = True
sense.show_message("Please, select the level",
text_colour=WHITE, scroll_speed=0.02)
sleep(1)
lvl = 0 # index into lvl_name, so the menu starts on level "1"
sense.show_letter(lvl_name[lvl],
text_colour=WHITE)
while running:
for event in sense.stick.get_events():
if event.action == 'pressed':
if event.direction == 'left':
if lvl >= 1:
lvl = lvl-1
sense.show_letter(lvl_name[lvl],
text_colour=WHITE)
else:
pass
elif event.direction == 'right':
if lvl <= len(lvl_name)-2:
lvl = lvl+1
sense.show_letter(lvl_name[lvl],
text_colour=WHITE)
else:
pass
if event.direction == 'down':
pass
elif event.direction == 'up':
pass
elif event.direction == 'middle':
running = False
start_level(levels[lvl])
```
Lambda School Data Science, Unit 2: Predictive Modeling
# Kaggle Challenge, Module 3
## Assignment
- [x] [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2/portfolio-project/ds6), then choose your dataset, and [submit this form](https://forms.gle/nyWURUg65x1UTRNV9), due today at 4pm Pacific.
- [x] Continue to participate in our Kaggle challenge.
- [x] Try xgboost.
- [x] Get your model's permutation importances.
- [x] Try feature selection with permutation importances.
- [x] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
- [x] Commit your notebook to your fork of the GitHub repo.
## Stretch Goals
### Doing
- [ ] Add your own stretch goal(s) !
- [ ] Do more exploratory data analysis, data cleaning, feature engineering, and feature selection.
- [ ] Try other categorical encodings.
- [ ] Try other Python libraries for gradient boosting.
- [ ] Look at the bonus notebook in the repo, about monotonic constraints with gradient boosting.
- [ ] Make visualizations and share on Slack.
### Reading
Top recommendations in _**bold italic:**_
#### Permutation Importances
- _**[Kaggle / Dan Becker: Machine Learning Explainability](https://www.kaggle.com/dansbecker/permutation-importance)**_
- [Christoph Molnar: Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/feature-importance.html)
#### (Default) Feature Importances
- [Ando Saabas: Selecting good features, Part 3, Random Forests](https://blog.datadive.net/selecting-good-features-part-iii-random-forests/)
- [Terence Parr, et al: Beware Default Random Forest Importances](https://explained.ai/rf-importance/index.html)
#### Gradient Boosting
- [A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning](https://machinelearningmastery.com/gentle-introduction-gradient-boosting-algorithm-machine-learning/)
- _**[A Kaggle Master Explains Gradient Boosting](http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/)**_
- [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf) Chapter 8
- [Gradient Boosting Explained](http://arogozhnikov.github.io/2016/06/24/gradient_boosting_explained.html)
- _**[Boosting](https://www.youtube.com/watch?v=GM3CDQfQ4sw) (2.5 minute video)**_
#### Categorical encoding for trees
- [Are categorical variables getting lost in your random forests?](https://roamanalytics.com/2016/10/28/are-categorical-variables-getting-lost-in-your-random-forests/)
- [Beyond One-Hot: An Exploration of Categorical Variables](http://www.willmcginnis.com/2015/11/29/beyond-one-hot-an-exploration-of-categorical-variables/)
- _**[Categorical Features and Encoding in Decision Trees](https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931)**_
- _**[Coursera — How to Win a Data Science Competition: Learn from Top Kagglers — Concept of mean encoding](https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv)**_
- [Mean (likelihood) encodings: a comprehensive study](https://www.kaggle.com/vprokopev/mean-likelihood-encodings-a-comprehensive-study)
- [The Mechanics of Machine Learning, Chapter 6: Categorically Speaking](https://mlbook.explained.ai/catvars.html)
#### Imposter Syndrome
- [Effort Shock and Reward Shock (How The Karate Kid Ruined The Modern World)](http://www.tempobook.com/2014/07/09/effort-shock-and-reward-shock/)
- [How to manage impostor syndrome in data science](https://towardsdatascience.com/how-to-manage-impostor-syndrome-in-data-science-ad814809f068)
- ["I am not a real data scientist"](https://brohrer.github.io/imposter_syndrome.html)
- _**[Imposter Syndrome in Data Science](https://caitlinhudon.com/2018/01/19/imposter-syndrome-in-data-science/)**_
### Python libraries for Gradient Boosting
- [scikit-learn Gradient Tree Boosting](https://scikit-learn.org/stable/modules/ensemble.html#gradient-boosting) — slower than other libraries, but [the new version may be better](https://twitter.com/amuellerml/status/1129443826945396737)
- Anaconda: already installed
- Google Colab: already installed
- [xgboost](https://xgboost.readthedocs.io/en/latest/) — can accept missing values and enforce [monotonic constraints](https://xiaoxiaowang87.github.io/monotonicity_constraint/)
- Anaconda, Mac/Linux: `conda install -c conda-forge xgboost`
- Windows: `conda install -c anaconda py-xgboost`
- Google Colab: already installed
- [LightGBM](https://lightgbm.readthedocs.io/en/latest/) — can accept missing values and enforce [monotonic constraints](https://blog.datadive.net/monotonicity-constraints-in-machine-learning/)
- Anaconda: `conda install -c conda-forge lightgbm`
- Google Colab: already installed
- [CatBoost](https://catboost.ai/) — can accept missing values and use [categorical features](https://catboost.ai/docs/concepts/algorithm-main-stages_cat-to-numberic.html) without preprocessing
- Anaconda: `conda install -c conda-forge catboost`
- Google Colab: `pip install catboost`
### Categorical Encodings
**1.** The article **[Categorical Features and Encoding in Decision Trees](https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931)** mentions 4 encodings:
- **"Categorical Encoding":** This means using the raw categorical values as-is, not encoded. Scikit-learn doesn't support this, but some tree algorithm implementations do. For example, [Catboost](https://catboost.ai/), or R's [rpart](https://cran.r-project.org/web/packages/rpart/index.html) package.
- **Numeric Encoding:** Synonymous with Label Encoding, or "Ordinal" Encoding with random order. We can use [category_encoders.OrdinalEncoder](https://contrib.scikit-learn.org/categorical-encoding/ordinal.html).
- **One-Hot Encoding:** We can use [category_encoders.OneHotEncoder](http://contrib.scikit-learn.org/categorical-encoding/onehot.html).
- **Binary Encoding:** We can use [category_encoders.BinaryEncoder](http://contrib.scikit-learn.org/categorical-encoding/binary.html).
**2.** The short video
**[Coursera — How to Win a Data Science Competition: Learn from Top Kagglers — Concept of mean encoding](https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv)** introduces an interesting idea: use both X _and_ y to encode categoricals.
Category Encoders has multiple implementations of this general concept:
- [CatBoost Encoder](http://contrib.scikit-learn.org/categorical-encoding/catboost.html)
- [James-Stein Encoder](http://contrib.scikit-learn.org/categorical-encoding/jamesstein.html)
- [Leave One Out](http://contrib.scikit-learn.org/categorical-encoding/leaveoneout.html)
- [M-estimate](http://contrib.scikit-learn.org/categorical-encoding/mestimate.html)
- [Target Encoder](http://contrib.scikit-learn.org/categorical-encoding/targetencoder.html)
- [Weight of Evidence](http://contrib.scikit-learn.org/categorical-encoding/woe.html)
Category Encoders' mean encoding implementations work for regression problems and binary classification problems.
For multi-class classification problems, you will need to temporarily reformulate it as binary classification. For example:
```python
encoder = ce.TargetEncoder(min_samples_leaf=..., smoothing=...) # Both parameters > 1 to avoid overfitting
X_train_encoded = encoder.fit_transform(X_train, y_train=='functional')
X_val_encoded = encoder.transform(X_val)
```
**3.** The **[dirty_cat](https://dirty-cat.github.io/stable/)** library has a Target Encoder implementation that works with multi-class classification.
```python
dirty_cat.TargetEncoder(clf_type='multiclass-clf')
```
It also implements an interesting idea called ["Similarity Encoder" for dirty categories](https://www.slideshare.net/GaelVaroquaux/machine-learning-on-non-curated-data-154905090).
However, it seems like dirty_cat doesn't handle missing values or unknown categories as well as category_encoders does. And you may need to use it with one column at a time, instead of with your whole dataframe.
**4. [Embeddings](https://www.kaggle.com/learn/embeddings)** can work well with sparse / high cardinality categoricals.
_**I hope it’s not too frustrating or confusing that there’s not one “canonical” way to encode categoricals. It’s an active area of research and experimentation! Maybe you can make your own contributions!**_
```
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python packages:
# category_encoders, version >= 2.0
# eli5, version >= 0.9
# pandas-profiling, version >= 2.0
# plotly, version >= 4.0
!pip install --upgrade category_encoders eli5 pandas-profiling plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge.git
!git pull origin master
# Change into directory for module
os.chdir('module3')
import pandas as pd
from sklearn.model_selection import train_test_split
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv('../data/tanzania/train_features.csv'),
pd.read_csv('../data/tanzania/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv('../data/tanzania/test_features.csv')
sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv')
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['status_group'], random_state=42)
def wrangle(X):
X = X.copy()
X['latitude'] = X['latitude'].replace(-2e-08, 0)
cols_with_zeros = ['longitude', 'latitude', 'construction_year',
'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
X[col+'_MISSING'] = X[col].isnull()
duplicates = ['quantity_group', 'payment_type']
X = X.drop(columns=duplicates)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
X['years'] = X['year_recorded'] - X['construction_year']
X['years_MISSING'] = X['years'].isnull()
return X
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
target = 'status_group'
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
import category_encoders as ce
encoder = ce.OrdinalEncoder()
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
X_train.shape, X_val.shape, X_train_encoded.shape, X_val_encoded.shape
from xgboost import XGBClassifier
from sklearn.pipeline import make_pipeline
import eli5
from eli5.sklearn import PermutationImportance
pipeline = make_pipeline(
ce.OrdinalEncoder(),
XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
feature = 'wpt_name'
score_with = pipeline.score(X_val, y_val)
X_val_permuted = X_val.copy()
X_val_permuted[feature] = np.random.permutation(X_val[feature])
score_permuted = pipeline.score(X_val_permuted, y_val)
print(f'Validation Accuracy with {feature}: {score_with}')
print(f'Validation Accuracy with {feature} permuted: {score_permuted}')
print(f'Permutation Importance: {score_with - score_permuted}')
from scipy.stats import randint, uniform
import category_encoders as ce
import numpy as np
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.impute import SimpleImputer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import RandomizedSearchCV
features = train.columns.drop([target])
X_train = train[features]
y_train = train[target]
pipeline = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
SimpleImputer(),
SelectKBest(f_regression),
Ridge()
)
param_distributions = {
'simpleimputer__strategy': ['mean', 'median'],
'selectkbest__k': randint(1, len(X_train.columns)+1),
'ridge__alpha': uniform(1, 10),
}
search = RandomizedSearchCV(
pipeline,
param_distributions=param_distributions,
n_iter=100,
cv=5,
scoring='neg_mean_absolute_error',
verbose=10,
return_train_score=True,
n_jobs=-1
)
search.fit(X_train, y_train);
```
# Classification
We have seen how you can evaluate a supervised learner with a loss function. Classification is the learning task where one tries to predict a binary response variable, which can be thought of as the answer to a yes/no question, as in "Will this stock value go up today?". This is in contrast to regression, which predicts a continuous response variable, answering a question such as "What will be the increase in stock value today?". Classification has some interesting lessons for other machine learning tasks, and in this chapter we will introduce many of the main concepts in classification.
Recall that in the supervised learning setting the data consist of response variables $y_i$ and predictor variables $x_i$ for $i=1,\ldots,n$. In this chapter we will focus on binary classification, which we will encode as the binary variable $y_i \in \{0,1\}$. We will see that two heuristics can help us understand the basics of evaluating classifiers.
```
import pandas as pd
import numpy as np
import matplotlib as mpl
import plotnine as p9
import matplotlib.pyplot as plt
import itertools
import warnings
warnings.simplefilter("ignore")
from matplotlib.pyplot import rcParams
rcParams['figure.figsize'] = 6,6
```
## Evaluating classifiers and heuristics
In this section, we will introduce two heuristics: a similarity-based search (nearest neighbors) and a custom sorting criterion (an output score). Throughout this chapter we will be using Scikit-learn (Sklearn for short), the main machine learning package in Python. Recall that the basic pipeline for offline supervised learning is the following,
- randomly split data into training and testing set
- fit on the training data with predictors and response variables
- predict on the test data with predictors
- observe losses from predictions and the test response variable
Sklearn provides tools for making the train-test split in the ``sklearn.model_selection`` module. We will use several other modules, which we import below,
```
from sklearn import neighbors, preprocessing, impute, metrics, model_selection, linear_model, svm, feature_selection
```
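As a quick end-to-end sketch of the four pipeline steps above (on synthetic data, not the bank data we are about to load):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a simple deterministic label

# 1. randomly split data into training and testing set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
# 2. fit on the training data with predictors and response variables
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
# 3. predict on the test data with predictors
y_hat = clf.predict(X_te)
# 4. observe losses from predictions and the test response variable
test_error = (y_hat != y_te).mean()
print(test_error)
```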
Let's read in a dataset from the UCI machine learning repository (CITE),
```
bank = pd.read_csv('../data/bank.csv',sep=';',na_values=['unknown',999,'nonexistent'])
bank.info()
```
The bank dataset contains the outcome of loans in the 'y' variable and many predictors of mixed type. There is also some missingness. Before we can go any further exploring the data, it is best to make the train-test split.
```
bank_tr, bank_te = model_selection.train_test_split(bank,test_size=.33)
```
Above we specified that roughly 1/3 of the data be reserved for the test set, and the rest for the training set. First notice that the response variable is binary: 4000 of the 4521 records in the full dataset are 'no'. We call this situation *class imbalance*, in that the percentage of samples with each value of y is not balanced.
```
bank['y'].describe()
```
Consider the following density plot, which shows the density of age conditional on the response variable y. If we cared equally about predicting 'yes' and 'no', then we may predict 'yes' if age exceeds some value like 60 or is below some value such as 28. We say that "predict yes if age > 60" is a *decision rule*.
```
p9.ggplot(bank_tr, p9.aes(x = 'age',fill = 'y')) + p9.geom_density(alpha=.2)
```
Some explanation is warranted. We can state more precisely what we mean by "caring equally about predicting 'yes' and 'no'": conditional on each of these values, we treat the probability of error equally. For the decision rule above, these error probabilities are
$$
\mathbb P\{ age > 60 | y = no\}, \quad \mathbb P\{ age \le 60 | y = yes\},
$$
which are often called the type 1 and type 2 errors respectively. If we cared equally about these probabilities then it is reasonable to set the age threshold at 60 since after this threshold it looks like the conditional density of age is greater for $y = yes$. We can do something similar for a more complicated decision rule such as "predict y=yes if age > 60 or age < 28" which may make even more sense.
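To make these error probabilities concrete, here is a sketch that estimates both empirically for the rule "predict yes if age > 60" on a small hypothetical frame (not the actual bank data):

```python
import pandas as pd

df = pd.DataFrame({'age': [25, 45, 62, 70, 33, 58, 65, 40],
                   'y':   ['no', 'no', 'yes', 'yes', 'no', 'no', 'no', 'yes']})

pred_yes = df['age'] > 60
type1 = pred_yes[df['y'] == 'no'].mean()      # estimates P(age > 60 | y = no)
type2 = (~pred_yes)[df['y'] == 'yes'].mean()  # estimates P(age <= 60 | y = yes)
print(type1, type2)
```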
```
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
```
We can make similar considerations for the variables duration and balance below. It seems that there is some discriminative power in balance, and a more significant amount of discriminative power in duration.
```
p9.ggplot(bank_tr[['balance','y']].dropna(axis=0)) + p9.aes(x = 'balance',fill = 'y') \
+ p9.geom_density(alpha=.5)
p9.ggplot(bank_tr[['duration','y']].dropna(axis=0)) + p9.aes(x = 'duration',fill = 'y')\
+ p9.geom_density(alpha=.5)
def train_bank_to_xy(bank):
"""standardize and impute training"""
bank_sel = bank[['age','balance','duration','y']].values
X,y = bank_sel[:,:-1], bank_sel[:,-1]
scaler = preprocessing.StandardScaler().fit(X)
imputer = impute.SimpleImputer(fill_value=0).fit(X)
trans_prep = lambda Z: imputer.transform(scaler.transform(Z))
X = trans_prep(X)
y = 2*(y == 'yes')-1
return (X, y), trans_prep
def test_bank_to_xy(bank, trans_prep):
"""standardize and impute test"""
bank_sel = bank[['age','balance','duration','y']].values
X,y = bank_sel[:,:-1], bank_sel[:,-1]
X = trans_prep(X)
y = 2*(y == 'yes')-1
return (X, y)
(X_tr, y_tr), trans_prep = train_bank_to_xy(bank_tr)
X_te, y_te = test_bank_to_xy(bank_te, trans_prep)
## Set the score to be standardized duration
score_dur = X_te[:,2]
print(plt.style.available)
def plot_conf_score(y_te,score,tau):
y_pred = 2*(score > tau) - 1
classes = [1,-1]
conf = metrics.confusion_matrix(y_te, y_pred,labels=classes)
plot_confusion_matrix(conf, classes)
plot_conf_score(y_te,score_dur,1.)
plot_conf_score(y_te,score_dur,2.)
```
### Confusion matrix and classification metrics
<table style='font-family:"Courier New", Courier, monospace; font-size:120%'>
<tr><td></td><td>Pred 1</td><td>Pred -1</td></tr>
<tr><td>True 1</td><td>True Pos</td><td>False Neg</td></tr>
<tr><td>True -1</td><td>False Pos</td><td>True Neg</td></tr>
</table>
$$
\textrm{FPR} = \frac{FP}{FP+TN}
$$
$$
\textrm{TPR, Recall} = \frac{TP}{TP + FN}
$$
$$
\textrm{Precision} = \frac{TP}{TP + FP}
$$
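These quantities can be read directly off a confusion matrix. A small sketch, using sklearn's convention that ``labels=[1, -1]`` puts the positive class in the first row and column, matching the table above:

```python
import numpy as np
from sklearn import metrics

y_true = np.array([ 1,  1,  1, -1, -1, -1, -1, -1])
y_pred = np.array([ 1,  1, -1,  1, -1, -1, -1, -1])

conf = metrics.confusion_matrix(y_true, y_pred, labels=[1, -1])
(tp, fn), (fp, tn) = conf

fpr = fp / (fp + tn)         # false positive rate
tpr = tp / (tp + fn)         # true positive rate, also called recall
precision = tp / (tp + fp)
print(fpr, tpr, precision)
```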
```
plot_conf_score(y_te,score_dur,.5)
```
## Searching
### (pseudo-)Metrics
- $d(x_i,x_j) = 0$: most similar
- $d(x_i,x_j)$ larger: less similar
### K nearest neighbors:
- For a test point $x_{n+1}$
- Compute distances to $x_1,\ldots,x_n$
- Sort training points by distance
- return K closest
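The search itself is just a distance computation plus a sort; a sketch with plain numpy (random data, Euclidean distance):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 3))   # training points
x_test = rng.normal(size=3)          # a test point
K = 5

# Compute distances to the training points
dists = np.sqrt(((X_train - x_test) ** 2).sum(axis=1))
# Sort training points by distance
order = np.argsort(dists)
# Return the K closest
knns = order[:K]
print(knns, dists[knns])
```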
```
## Fit and find NNs
nn = neighbors.NearestNeighbors(n_neighbors=10,metric="l2")
nn.fit(X_tr)
dists, NNs = nn.kneighbors(X_te)
NNs[1], y_tr[NNs[1]].mean(), y_te[1]
score_nn = np.array([(y_tr[knns] == 1).mean() for knns in NNs])
plot_conf_score(y_te,score_nn,.2)
nn = neighbors.KNeighborsClassifier(n_neighbors=10)
nn.fit(X_tr, y_tr)
score_nn = nn.predict_proba(X_te)[:,1]
plot_conf_score(y_te,score_nn,.2)
def print_top_k(score_dur,y_te,k_top):
ordering = np.argsort(score_dur)[::-1]
print("k: score, y")
for k, (yv,s) in enumerate(zip(y_te[ordering],score_dur[ordering])):
print("{}: {}, {}".format(k,s,yv))
if k >= k_top - 1:
break
print_top_k(score_dur,y_te,10)
```
```
plt.style.use('ggplot')
fpr_dur, tpr_dur, threshs = metrics.roc_curve(y_te,score_dur)
plt.figure(figsize=(6,6))
plt.plot(fpr_dur,tpr_dur)
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.title("ROC for 'duration'")
def plot_temp():
plt.figure(figsize=(6,6))
plt.plot(fpr_dur,tpr_dur,label='duration')
plt.plot(fpr_nn,tpr_nn,label='knn')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.legend()
plt.title("ROC")
fpr_nn, tpr_nn, threshs = metrics.roc_curve(y_te,score_nn)
plot_temp()
def plot_temp():
plt.figure(figsize=(6,6))
plt.plot(rec_dur,prec_dur,label='duration')
plt.plot(rec_nn,prec_nn,label='knn')
plt.xlabel('recall')
plt.ylabel('precision')
plt.legend()
plt.title("PR curve")
prec_dur, rec_dur, threshs = metrics.precision_recall_curve(y_te,score_dur)
prec_nn, rec_nn, threshs = metrics.precision_recall_curve(y_te,score_nn)
plot_temp()
```
- ROC should be in top left
- PR should be large for all recall values
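Each curve can also be summarized by a single number: the area under the ROC curve (AUC) and the average precision. A sketch on hypothetical scores:

```python
import numpy as np
from sklearn import metrics

y_true = np.array([-1, -1, -1, 1, -1, 1, 1, 1])
score  = np.array([.1, .2, .3, .4, .5, .6, .7, .8])

auc = metrics.roc_auc_score(y_true, score)            # fraction of pos/neg pairs ranked correctly
ap = metrics.average_precision_score(y_true, score)   # area under the PR curve
print(auc, ap)
```

A perfect score function gives both summaries a value of 1, while a random score gives an AUC of about 0.5.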
## Linear classifiers
In the above analysis we chose a variable (duration) and used it as the score. Let's consider another score which is a linear combination, such as $0.2 (age) + 0.8 (duration)$, and compare the training errors of these. If we took this to its logical conclusion, we would search in an automated way for the exact coefficients. This is precisely what linear classifiers, such as logistic regression, support vector machines, and the perceptron, do. This search is based on the training dataset only, and hence the test error is an unbiased estimate of the true risk (the expected loss). We have already seen the form for a linear classifier, where the score function is $\hat s(x) = \hat \beta_0 + \hat \beta^\top x$ and $\hat \beta_0, \hat \beta$ are learned. With this score function in hand we can write the prediction function as $\hat y(x) = \textrm{sign} (\hat s(x))$.
### Optimization and training loss
Why the different classification methods if they are all linear classifiers? Throughout we will make a distinction between the training loss, which is the loss used for training, and the test loss, the loss used for testing. We have already seen that they can be different when we trained a regressor with square error loss, but tested with the 0-1 loss in Statistical Learning Machines. In regression, we know that ordinary least squares minimizes the training square error, and these classifiers work in a similar way, except each uses a different training loss. We have already seen the 0-1 loss, recall that it takes the form,
$$
\ell_{0/1} (\hat y_{i}, y_i) = \left\{ \begin{array}{ll}
1,& \textrm{ if } \hat y_i \ne y_i\\
0,& \textrm{ if } \hat y_i = y_i
\end{array} \right.
$$
Why not use this loss function for training? To answer this we need to take a quick detour to describe optimization, also known as mathematical programming.
Suppose that we have some parameter $\beta$ and a function of that parameter $F(\beta)$ that takes real values (we will allow possibly infinite values as well). Furthermore, suppose that $\beta$ is constrained to be within the set $\mathcal B$. Then an optimization (minimization) program takes the form
$$
\textrm{Find } \hat \beta \textrm{ such that } \hat \beta \in \mathcal B \textrm{ and } F(\hat \beta) \le F(\beta) \textrm{ for all } \beta \in \mathcal B.
$$
We can rewrite this problem more succinctly as
$$
\min_{\beta \in \mathcal B}. F(\beta),
$$
where $\min.$ stands for minimize. An optimization program is an idealized program, and is not itself an algorithm. It says nothing about how to find the minimizer, and there are many methods for finding the minimum based on details about the function $F$ and the constraint set $\mathcal B$.
Some functions are hard to optimize, especially discontinuous functions. Most optimization algorithms work by starting with an initial value of $\beta$ and iteratively moving it in a way that will tend to decrease $F$. When a function is discontinuous then these moves can have unpredictable consequences. Returning to our question, suppose that we wanted to minimize training error with a 0-1 training loss. Then this could be written as the optimization program,
$$
\min_{\beta \in \mathbb R^{p+1}}.\frac{1}{n_0} \sum_{i=1}^{n_0} 1 \{ \textrm{sign}(\beta^\top x_i) \ne y_i \}.
\tag{0-1 min}
$$
This objective is discontinuous because it is the sum of discontinuous functions---the indicator can suddenly jump from 0 to 1 with an arbitrarily small movement of $\beta$.
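To see this discontinuity concretely, here is a tiny sketch with a hypothetical point sitting next to the decision boundary: an arbitrarily small move of $\beta$ flips the 0-1 loss.

```python
import numpy as np

x = np.array([1.0, 1.0])   # includes the intercept variable x_0 = 1
y = 1
beta = np.array([0.0, -1e-9])   # puts the point a hair onto the wrong side

loss = lambda b: float(np.sign(b @ x) != y)   # 0-1 loss for a linear classifier

# a 2e-9 nudge of one coefficient jumps the loss from 1 to 0
print(loss(beta), loss(beta + np.array([0.0, 2e-9])))
```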
**Note:** Let us introduce the indicator notation,
$$
1\{ {\rm condition} \} = \left\{ \begin{array}{ll}
1,& \textrm{ if condition is true}\\
0,& \textrm{ if condition is not true}
\end{array} \right.
$$
We will suppress the intercept term by assuming that the first variable is an intercept variable. Specifically, if $x_1,\ldots,x_p$ are our original predictor variables, then we can include $x_0 = 1$, so that with $\tilde x = (x_0,x_1,\ldots,x_p)$ and $\tilde \beta = (\beta_0, \beta_1, \ldots, \beta_p)$ we have that
$$
\tilde \beta^\top \tilde x = \beta_0 + \beta^\top x.
$$
Hence, without loss of generality, we can think of $\beta$ and $x$ as being $(p+1)$-dimensional vectors, and no longer treat the intercept as a special parameter.
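A quick numerical check of this intercept-absorbing trick (the numbers are arbitrary):

```python
import numpy as np

X = np.array([[2.0, 3.0],
              [4.0, 1.0]])
beta0, beta = 0.5, np.array([1.0, -2.0])

X_tilde = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend the column x_0 = 1
beta_tilde = np.concatenate([[beta0], beta])         # fold beta_0 into the coefficient vector

# the two formulations give identical scores
print(X @ beta + beta0)
print(X_tilde @ beta_tilde)
```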
Let's simulate some data for which a linear classifier will do well. We will use this to introduce the two main methods: logistic regression and support vector machines.
```
def lm_sim(N = 100):
"""simulate a binary response and two predictors"""
X1 = (np.random.randn(N*2)).reshape((N,2)) + np.array([2,3])
X0 = (np.random.randn(N*2)).reshape((N,2)) + np.array([.5,1.5])
y = - np.ones(N*2)
y[:N]=1
X = np.vstack((X1,X0))
return X, y, X0, X1
X_sim,y_sim,X0,X1 = lm_sim()
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.title("Two dimensional classification simulation")
_ = plt.legend(loc=2)
```
The red dots correspond to ``Y = +1`` and blue to ``Y = -1``. We can see that a classifier that predicts +1 when the point is in the upper right of the coordinate system should do pretty well. We could propose several $\beta$ vectors to form linear classifiers and observe their training errors, finally selecting the one that minimized the training error. We have seen that a linear classifier has a separator hyperplane (a line in 2 dimensions). To find out what prediction the classifier makes for a point, one just needs to look at which side of the hyperplane it falls on. Consider a few such lines.
```
lr_sim = linear_model.LogisticRegression()
lr_sim.fit(X_sim,y_sim)
beta1 = lr_sim.coef_[0,0]
beta2 = lr_sim.coef_[0,1]
beta0 = lr_sim.intercept_
mults=0.8
T = np.linspace(-1,4,100)
x2hat = -(beta0 + beta1*T) / beta2
line1 = -(beta0 + np.random.randn(1)*2 +
(beta1 + np.random.randn(1)*mults) *T) / (beta2 + np.random.randn(1)*mults)
line2 = -(beta0 + np.random.randn(1)*2 +
(beta1 + np.random.randn(1)*mults) *T) / (beta2 + np.random.randn(1)*mults)
line3 = -(beta0 + np.random.randn(1)*2 +
(beta1 + np.random.randn(1)*mults) *T) / (beta2 + np.random.randn(1)*mults)
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.plot(T,line3,c='k')
plt.plot(T,line1,c='k')
plt.plot(T,line2,c='k')
plt.ylim([-1,7])
plt.title("Three possible separator lines")
_ = plt.legend(loc=2)
```
One of these will probably do a better job at separating the training data than the others, but if we wanted to do this over all possible $\beta \in \mathbb R^{p+1}$ then we need to solve the program (0-1 min) above, which we have already said is a hard objective to optimize. Logistic regression uses a different loss function that can be optimized in several ways, and we can see the resulting separator line below. (It is outside the scope of this chapter to introduce specific optimization methods.)
```
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.title("Logistic regression separator line")
_ = plt.legend(loc=2)
```
The points above this line are predicted as a +1, and so we can also isolate those points that we classified incorrectly. The 0-1 loss counts each of these points as a loss of 1.
```
N = 100
y_hat = lr_sim.predict(X_sim)
plt.scatter(X0[y_hat[N:] == 1,0],X0[y_hat[N:] == 1,1],c='b',label='neg')
plt.scatter(X1[y_hat[:N] == -1,0],X1[y_hat[:N] == -1,1],c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.title("Points classified incorrectly")
_ = plt.legend(loc=2)
```
Logistic regression uses a loss function that mimics some of the behavior of the 0-1 loss, but is not discontinuous. In this way, it is a surrogate loss: it acts as a surrogate for the 0-1 loss. It turns out that it is one of a few nice options for surrogate losses. Notice that we can rewrite the 0-1 loss for a linear classifier as
$$
\ell_{0/1}(\beta,x_i,y_i) = 1 \{ y_i \beta^\top x_i < 0 \}.
$$
Throughout we will denote our losses as functions of $\beta$ to reflect the fact that we are only considering linear classifiers.
The logistic loss and the hinge loss are also functions of $y_i \beta^\top x_i$; they are
$$
\ell_{L} (\beta, x_i, y_i) = \log(1 + \exp(-y_i \beta^\top x_i))
\tag{logistic}
$$
and
$$
\ell_{H} (\beta, x_i, y_i) = (1 - y_i \beta^\top x_i)_+
\tag{hinge}
$$
where $a_+ = a 1\{ a > 0\}$ is the positive part of the real number $a$.
If we are free to select training loss functions, then why not square error loss? For example, we could choose
$$
\ell_{S} (\beta, x_i, y_i) = (y_i - \beta^\top x_i)^2 = (1 - y_i \beta^\top x_i)^2.
\tag{square error}
$$
In order to motivate the use of these, let's plot the losses as a function of $y_i \beta^\top x_i$.
```
z_range = np.linspace(-5,5,200)
zoloss = z_range < 0
l2loss = (1-z_range)**2.
hingeloss = (1 - z_range) * (z_range < 1)
logisticloss = np.log(1 + np.exp(-z_range))
plt.plot(z_range, logisticloss + 1 - np.log(2.),label='logistic')
plt.plot(z_range, zoloss,label='0-1')
plt.plot(z_range, hingeloss,label='hinge')
plt.plot(z_range, l2loss,label='sq error')
plt.ylim([-.2,5])
plt.xlabel(r'$y_i \beta^\top x_i$')
plt.ylabel('loss')
plt.title('A comparison of classification loss functions')
_ = plt.legend()
```
Comparing these we see that the logistic loss is smooth---it has continuous first and second derivatives---and it is decreasing as $y_i \beta^\top x_i$ increases. The hinge loss is interesting: it is continuous, but it has a discontinuous first derivative, which changes the nature of the optimization algorithms that we will tend to use. On the other hand, the hinge loss is exactly zero for large enough $y_i \beta^\top x_i$, as opposed to the logistic loss, which is always non-zero. Below we depict these losses by weighting each point by its loss under the fitted classifier.
```
z_log = y_sim*lr_sim.decision_function(X_sim)
logisticloss = np.log(1 + np.exp(-z_log))
plt.scatter(X0[:,0],X0[:,1],s=logisticloss[N:]*30.,c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],s=logisticloss[:N]*30.,c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.xlim([-1,3])
plt.ylim([0,4])
plt.title("Points weighted by logistic loss")
_ = plt.legend(loc=2)
hingeloss = (1-z_log)*(z_log < 1)
plt.scatter(X0[:,0],X0[:,1],s=hingeloss[N:]*30.,c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],s=hingeloss[:N]*30.,c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.xlim([-1,3])
plt.ylim([0,4])
plt.title("Points weighted by hinge loss")
_ = plt.legend(loc=2)
l2loss = (1-z_log)**2.
plt.scatter(X0[:,0],X0[:,1],s=l2loss[N:]*10.,c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],s=l2loss[:N]*10.,c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.xlim([-1,3])
plt.ylim([0,4])
plt.title("Points weighted by sqr. loss")
_ = plt.legend(loc=2)
```
We see that for the logistic loss the weight vanishes as points fall further onto the correct side of the separator hyperplane, while the hinge loss is exactly zero once a point is sufficiently far onto the correct side. The square error loss has increased weight for points far from the hyperplane, even if they are correctly classified. Hence, square error loss is not a good surrogate for the 0-1 loss.
Support vector machines are algorithmically unstable if we only try to minimize the training error with the hinge loss. So we add an additional term to make the optimization somewhat easier, which we call ridge regularization. Regularization is the process of adding a term to the objective that biases the results in a certain way. Ridge regularization means that we minimize the following objective for our loss,
$$
\min_{\beta \in \mathbb R^{p+1}}. \frac 1n \sum_{i=1}^n \ell(\beta,x_i,y_i) + \lambda \sum_{j=1}^p \beta_j^2.
$$
This has the effect of biasing the $\hat \beta_j$ towards 0 for $j=1,\ldots,p$. Typically this has little effect on the test error, except when $\lambda$ is too large, but it does often make the result more computationally stable.
In Scikit-learn we can initialize a logistic regression instance with ``linear_model.LogisticRegression``. For historical reasons, the ridge regularization is parametrized by ``C``, the reciprocal of $\lambda$. All supervised learners in Scikit-learn have a fit and predict method. Let's apply this to the bank data using the predictors age, balance, and duration.
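As a sketch of what the penalty does to the coefficients (on synthetic data, not the bank data), increasing $\lambda$ shrinks the coefficient norm:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.where(X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200) > 0, 1, -1)

norms = []
for lamb in [0.01, 1.0, 100.0]:
    lr = LogisticRegression(penalty='l2', C=1/lamb, max_iter=1000).fit(X, y)
    norms.append(np.linalg.norm(lr.coef_))
print(norms)   # the norm shrinks as lambda grows
```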
```
lamb = 1.
lr = linear_model.LogisticRegression(penalty='l2', C = 1/lamb)
lr.fit(X_tr,y_tr)
```
We can then predict on the test set and see what the resulting 0-1 test error is.
```
yhat = lr.predict(X_te)
(yhat != y_te).mean()
```
On the other hand we can extract a score by using the coefficients themselves. We can also extract the ROC and PR curves for this score.
```
score_lr = X_te @ lr.coef_[0,:]
fpr_lr, tpr_lr, threshs = metrics.roc_curve(y_te,score_lr)
prec_lr, rec_lr, threshs = metrics.precision_recall_curve(y_te,score_lr)
```
We can also do this same procedure for SVMs. Notice that we set the ``kernel='linear'`` argument in SVC. SVMs in general can use a kernel to make them non-linear classifiers, but we want to only consider linear classifiers here.
```
lamb = 1.
svc = svm.SVC(C = 1/lamb,kernel='linear')
svc.fit(X_tr,y_tr)
yhat = svc.predict(X_te)
score_svc = X_te @ svc.coef_[0,:]
fpr_svc, tpr_svc, threshs = metrics.roc_curve(y_te,score_svc)
prec_svc, rec_svc, threshs = metrics.precision_recall_curve(y_te,score_svc)
(yhat != y_te).mean()
```
We see that SVMs achieves a slightly lower 0-1 test error. We can also compare these two methods based on the ROC and PR curves.
```
plt.figure(figsize=(6,6))
plt.plot(fpr_lr,tpr_lr,label='logistic')
plt.plot(fpr_svc,tpr_svc,label='svm')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.legend()
plt.title("ROC curve comparison")
plt.figure(figsize=(6,6))
plt.plot(rec_lr,prec_lr,label='logistic')
plt.plot(rec_svc,prec_svc,label='svm')
plt.xlabel('recall')
plt.ylabel('precision')
plt.legend()
plt.title("PR curve comparison")
```
Although the SVM achieves a slightly lower misclassification rate (0-1 test error), the ROC for logistic regression is uniformly better than that for the SVM, and for all but the lowest recall regions the precision is better for logistic regression. We might conclude that, despite its higher misclassification rate, logistic regression is the better classifier in this case.
### Tuning the ridge penalty $\lambda$
Let's consider how regularized logistic regression performs with different values of $\lambda$. With ridge regularization, it is quite common for the selection of $\lambda$ not to improve the test error in any significant way; we mainly use it for computational reasons.
```
def test_lamb(lamb):
"""Test error for logistic regression and different lambda"""
lr = linear_model.LogisticRegression(penalty='l2', C = 1/lamb)
lr.fit(X_tr,y_tr)
yhat = lr.predict(X_te)
return (yhat != y_te).mean()
test_frame = pd.DataFrame({'lamb':lamb,'error':test_lamb(lamb)} for lamb in 1.5**np.arange(-5,30))
p9.ggplot(test_frame,p9.aes(x='lamb',y='error')) + p9.geom_line() + p9.scale_x_log10()\
+ p9.labels.ggtitle('Misclassification rate for tuning lambda')
(y_te == 1).mean()
```
In the above plot, we see that there is not a significant change in error as we increase $\lambda$, which could be for a few reasons. The most likely explanation is that if we look only at misclassification rate, it is hard to beat predicting every observation as a -1. Due to the class imbalance, the proportion of 1's is 0.106 which is quite close to the test error in this case, so the classifier only needs to do better than random on a very small proportion of points to beat this baseline rate.
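The baseline of predicting every observation as -1 can be computed directly; a sketch with sklearn's ``DummyClassifier`` on synthetic labels with a similar imbalance:

```python
import numpy as np
from sklearn.dummy import DummyClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = np.where(rng.random(1000) < 0.106, 1, -1)   # roughly 10.6% positives

# always predict the majority class, here -1
dummy = DummyClassifier(strategy='most_frequent').fit(X, y)
baseline_error = (dummy.predict(X) != y).mean()
print(baseline_error)   # equals the positive-class proportion
```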
We can also do the same thing for Precision, and obtain a similar result.
```
def get_prec(lr,X,y,K):
"""Find precision for top K"""
lr_score = X @ lr.coef_[0,:]
sc_sorted_id = np.argsort(lr_score)[::-1]
return np.mean(y[sc_sorted_id[:K]] == 1)
def prec_lamb(lamb):
"""Test error for logistic regression and different lambda"""
lr = linear_model.LogisticRegression(penalty='l2', C = 1/lamb)
lr.fit(X_tr,y_tr)
prec_K = int(.12 * len(y_te))
return get_prec(lr,X_te,y_te,prec_K)
test_frame = pd.DataFrame({'lamb':lamb,'prec':prec_lamb(lamb)} for lamb in 1.5**np.arange(-5,30))
p9.ggplot(test_frame,p9.aes(x='lamb',y='prec')) + p9.geom_line() + p9.scale_x_log10()\
+ p9.labels.ggtitle('Precision for tuning lambda')
```
### Class Weighting
Recall that the response variable in the bank data had significant class imbalance, with a prevalence of "no" responses (encoded as -1). One way to deal with class imbalance is to weight the losses for the two classes differently. Specifically, specify a class weighting function $\pi(y)$ such that $\pi(1)$ is the weight applied to the positive class and $\pi(-1) = 1-\pi(1)$ is the weight applied to the negative class. The objective for our classifier then becomes
$$
\frac 1n \sum_{i=1}^n \pi(y_i) \ell(\beta,x_i,y_i).
$$
If $\pi(1) > \pi(-1)$, then we put more emphasis on classifying the 1's correctly. By default, $\pi(1) = 0.5$, which is equivalent to no weighting. Upweighting the positive class may be appropriate when, as here, the 1's are the rarer class. Let's consider what these weighted losses look like as a function of $\beta^\top x_i$ for logistic and 0-1 loss (below, $\pi(1) = .8$, so the positive class is weighted $0.8/0.2 = 4$ times as heavily).
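In Scikit-learn, these class weights can be passed directly to the classifier through the ``class_weight`` argument. Here is a minimal sketch on made-up synthetic data (the weights and data below are illustrative, not the bank data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# imbalanced synthetic data: positives are the rarer class
X = rng.normal(size=(500, 2))
y = np.where(X[:, 0] + rng.normal(scale=2.0, size=500) > 2.5, 1, -1)

# pi(1) = 0.8, pi(-1) = 0.2: losses on positives are weighted 4x
lr_weighted = LogisticRegression(class_weight={1: 0.8, -1: 0.2}).fit(X, y)
lr_plain = LogisticRegression().fit(X, y)

# upweighting the positive class shifts the boundary toward predicting more 1's
n_pos_weighted = (lr_weighted.predict(X) == 1).sum()
n_pos_plain = (lr_plain.predict(X) == 1).sum()
print(n_pos_weighted, n_pos_plain)
```

The dictionary form lets us pick any $\pi(1)$; below we will also see the ``'balanced'`` shortcut.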
```
alpha = 4.
zolossn = z_range < 0
zolossp = z_range > 0
logisticlossn = np.log(1 + np.exp(-z_range))
logisticlossp = np.log(1 + np.exp(z_range))
plt.plot(z_range, logisticlossn + 1 - np.log(2.),label='logistic y=-1')
plt.plot(z_range, alpha*(logisticlossp + 1 - np.log(2.)),label='logistic y=1')
plt.plot(z_range, zolossn,label='0-1 loss y=-1')
plt.plot(z_range, alpha*zolossp,label='0-1 loss y=1')
plt.ylim([-.2,8])
plt.title('Class weighted losses')
plt.xlabel(r'$\beta^\top x_i$')
plt.ylabel('weighted loss')
_ = plt.legend()
```
Hence, the loss is higher if we misclassify the 1's versus the -1's. We can also visualize what this would do for the points in our two dimensional simulated dataset.
```
y_hat = lr_sim.predict(X_sim)
z_log = y_sim*lr_sim.decision_function(X_sim)
logisticloss = np.log(1 + np.exp(-z_log))
plt.scatter(X0[:,0],X0[:,1],s=alpha*logisticloss[N:]*20.,c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],s=logisticloss[:N]*20.,c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.xlim([-1,3])
plt.ylim([0,4])
_ = plt.legend(loc=2)
```
In the above plot we weight the loss for negative points 4 times as much as for positive points. How do we choose the weights? Typically, each class's weight is chosen inversely proportional to that class's proportion of points. In Scikit-learn this is achieved using ``class_weight='balanced'``. Below we fit the balanced logistic regression and SVM and compare to the unweighted versions via PR curves.
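For two classes, ``class_weight='balanced'`` assigns class $c$ the weight $n/(2\,n_c)$, where $n_c$ is the number of training points in class $c$. A quick sketch of the arithmetic (the class counts here are made up):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([-1] * 90 + [1] * 10)  # 90% negatives, 10% positives

# balanced weight for class c: n_samples / (n_classes * n_c)
w = compute_class_weight(class_weight='balanced', classes=np.array([-1, 1]), y=y)
print(dict(zip([-1, 1], w)))  # negatives get 100/180, positives get 100/20
```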
```
lr_bal = linear_model.LogisticRegression(class_weight='balanced')
lr_bal.fit(X_tr,y_tr)
z_log = lr_bal.predict_proba(X_te)[:,1]
plt.figure(figsize=(6,6))
prec_lr_bal, rec_lr_bal, _ = metrics.precision_recall_curve(y_te,z_log)
plt.plot(rec_lr_bal,prec_lr_bal,label='Weighted logistic')
plt.plot(rec_lr,prec_lr,label='Unweighted logistic')
plt.xlabel('recall')
plt.ylabel('precision')
plt.ylim([-.1,1.1])
plt.legend(loc=3)
_ = plt.plot()
svc_bal = svm.SVC(C = 1/lamb,kernel='linear',class_weight='balanced')
svc_bal.fit(X_tr,y_tr)
plt.figure(figsize=(6,6))
z_svm = svc_bal.decision_function(X_te)
prec_svm_bal, rec_svm_bal, _ = metrics.precision_recall_curve(y_te,z_svm)
plt.plot(rec_svm_bal,prec_svm_bal,label='Weighted svm')
plt.plot(rec_svc,prec_svc,label='Unweighted svm')
plt.xlabel('recall')
plt.ylabel('precision')
plt.ylim([-.1,1.1])
plt.legend(loc=3)
_ = plt.plot()
```
It seems that in this case class balancing is not essential, and does not substantially change the results.
**Aside:** This interpretation of logistic regression and SVMs is not the standard first introduction to these two methods. Rather than introducing surrogate losses, it is common to motivate logistic regression through the logistic model: under a particular statistical model, logistic regression is the maximum likelihood estimator. Support vector machines, likewise, are often motivated by seeking the separating hyperplane that maximizes the distance of the points from that plane. That story is complicated by the fact that when the dataset is not perfectly separable, we need to introduce 'slack' variables that allow some points to be misclassified. Compared to these, we find the surrogate loss interpretation simpler and often more enlightening.
```
K_max = np.argmax(prec_svc)
```
## Feature engineering and overfitting
Feature engineering is the process of constructing new variables from other variables or unstructured data. There are a few reasons why you might want to do this:
1. Your data is unstructured, text, or categorical and you need your predictors to be numeric;
2. due to some domain knowledge you have good guesses at important predictors that you need to construct;
3. or you want to turn a linear classifier into a non-linear classifier with a non-linear transformation.
We will discuss dealing with unstructured and text data more in later chapters, but for now, let's consider categorical data. The most common way to encode categorical data is to create a dummy variable for each category. This is called one-hot encoding, and in Scikit-learn it is ``preprocessing.OneHotEncoder``. We can see the unique values of the marital variable in the bank data.
```
bank_tr['marital'].unique()
```
We initialize the encoder, which by itself does nothing to the data. We configure it to ignore categories unseen during fitting (``handle_unknown='ignore'``) and to output a dense matrix. When we fit on the data, it determines the unique values and assigns a dummy variable to each.
```
OH_marital = preprocessing.OneHotEncoder(handle_unknown='ignore',sparse=False)
OH_marital.fit(bank_tr[['marital']])
marital_trans = OH_marital.transform(bank_tr[['marital']])
```
Let's look at the output of the transformation.
```
marital_trans[[0,10,17],:]
```
Then look at the actual variable.
```
bank_tr.iloc[:3,2]
```
It seems that the first dummy is assigned to 'divorced', the second to 'married', and the third to 'single'. This method cannot handle missing data, so a tool such as ``impute.SimpleImputer`` should be used first. All of these methods are considered transformations, and they have fit methods. The reason for this interface design is to make it so that we can fit on the training set and then transform can be applied to the test data.
In the following, we will create an imputer and encoder for each categorical variable, then transform the training data.
```
## TRAINING TRANSFORMATIONS
## extract y and save missingness
y_tr = bank_tr['y']
del bank_tr['y']
y_tr = 2*(y_tr == 'yes').values - 1
bank_nas_tr = bank_tr.isna().values
# find object and numerical column names
obj_vars = bank_tr.dtypes[bank_tr.dtypes == 'object'].index.values
num_vars = bank_tr.dtypes[bank_tr.dtypes != 'object'].index.values
# create imputers and encoders for categorical vars and fit
obj_imp = [impute.SimpleImputer(strategy='most_frequent').fit(bank_tr[[var]])\
for var in obj_vars]
obj_tr_trans = [imp.transform(bank_tr[[var]]) for imp,var in zip(obj_imp,obj_vars)]
obj_OH = [preprocessing.OneHotEncoder(handle_unknown='ignore',sparse=False).fit(var_data)\
for var_data in obj_tr_trans]
obj_tr_trans = [OH.transform(var_data)[:,:-1] for OH, var_data in zip(obj_OH,obj_tr_trans)]
# Store the variable names associated with transformations
obj_var_names = sum(([var]*trans.shape[1] for var,trans in zip(obj_vars,obj_tr_trans)),[])
```
We can also apply fixed transformations to the variables. While these could involve multiple variables and encode interactions, we will only use univariate transformations. The following function applies the transformations
$x \to x^2$ and $x \to \log(1 + |x|)$ and combines these with the original numerical dataset.
```
def fixed_trans(df):
    """selected fixed transformations"""
    return np.hstack([df, df**2, np.log(np.abs(df)+1)])
```
We can now apply imputation and this fixed transformation to the numerical variables.
```
# create imputers for numerical vars and fit
num_tr_vals = bank_tr[num_vars]
num_imp = impute.SimpleImputer(strategy='median').fit(num_tr_vals)
num_tr_trans = num_imp.transform(num_tr_vals)
num_tr_trans = fixed_trans(num_tr_trans)
# numerical variable names
num_var_names = list(num_tr_vals.columns.values)*3
```
For various reasons, the standard deviation of some created variables is 0 (probably when a binary variable takes only one value). We filter these out in the following lines.
```
# stack together for training predictors
X_tr = np.hstack(obj_tr_trans + [num_tr_trans,bank_nas_tr])
keep_cols = (X_tr.std(axis=0) != 0)
X_tr = X_tr[:,keep_cols]
```
The variable names are filtered in the same way; we will use them later to identify variables by their indices in the transformed data.
```
var_names = np.array(obj_var_names + num_var_names + list(bank_tr.columns.values))
var_names = var_names[keep_cols]
```
These transformations can now be applied to the test data. Because they were all fit on the training data, we do not run the risk of fitting to the test data. It is important to maintain this discipline: fit on the training data first, then transform the test data.
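Scikit-learn's ``Pipeline`` packages this discipline up: calling ``fit`` learns every transformation from the training data only, and ``predict`` reuses those fitted transformations on the test data. A minimal sketch on made-up data (not the bank variables):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(80, 3))
y_train = (X_train[:, 0] > 0).astype(int)
X_train[::7, 1] = np.nan  # introduce some missing values
X_test = rng.normal(size=(20, 3))

# imputation medians and scaling statistics are learned from X_train only
pipe = make_pipeline(SimpleImputer(strategy='median'),
                     StandardScaler(),
                     LogisticRegression())
pipe.fit(X_train, y_train)
yhat = pipe.predict(X_test)
print(yhat)
```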
```
## TESTING TRANSFORMATIONS
y_te = bank_te['y']
del bank_te['y']
y_te = 2*(y_te == 'yes') - 1
y_te = np.array(y_te)
bank_nas_te = bank_te.isna().values
obj_te_trans = [imp.transform(bank_te[[var]]) for imp,var in zip(obj_imp,obj_vars)]
obj_te_trans = [OH.transform(var_data)[:,:-1] for OH, var_data in zip(obj_OH,obj_te_trans)]
num_te_vals = bank_te[num_vars]
num_te_trans = num_imp.transform(num_te_vals)
num_te_trans = fixed_trans(num_te_trans)
X_te = np.hstack(obj_te_trans + [num_te_trans,bank_nas_te])
X_te = X_te[:,keep_cols]
```
Now we are ready to train our linear classifier, and we will use logistic regression.
```
lr = linear_model.LogisticRegression()
lr.fit(X_tr,y_tr)
yhat_te = lr.predict(X_te)
yhat_tr = lr.predict(X_tr)
print("""
Training error:{}
Testing error: {}
""".format((yhat_tr != y_tr).mean(), (yhat_te != y_te).mean()))
```
We output the training 0-1 error along with the testing 0-1 error. The testing error is actually lower than the training error, and notably neither is much lower than the proportion of 1's in the dataset (the error rate of predicting every observation as -1). This indicates that the 0-1 loss is not a very good measure of performance on this imbalanced dataset. Instead, let's compare the PR curves for these two models.
```
z_log = lr.predict_proba(X_te)[:,1]
plt.figure(figsize=(6,6))
prec, rec, _ = metrics.precision_recall_curve(y_te,z_log)
plt.plot(rec,prec,label='all vars')
plt.plot(rec_lr,prec_lr,label='3 vars')
plt.xlabel('recall')
plt.ylabel('precision')
plt.ylim([-.1,1.1])
plt.legend(loc=3)
_ = plt.plot()
```
We see that only in the low recall regions do all of these variables improve our test precision. So did we do all of that work creating new variables for nothing?
### Model selection and overfitting
What we are observing above is overfitting. When you add a new predictor variable in to the regression it may help you explain the response variable, but what if it is completely independent of the response? It turns out that this new variable can actually hurt you by adding variability to your classifier. For illustration, consider the following simulation.
```
def lm_sim(N = 20):
    """simulate a binary response and two predictors"""
    X1 = (np.random.randn(N*2)).reshape((N,2)) + np.array([3,2])
    X0 = (np.random.randn(N*2)).reshape((N,2)) + np.array([3,4])
    y = - np.ones(N*2)
    y[:N] = 1
    X = np.vstack((X1,X0))
    return X, y, X0, X1

X_sim, y_sim, X0, X1 = lm_sim()
X_sim, y_sim, X0, X1 = lm_sim()
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.title("Logistic regression separator line")
_ = plt.legend(loc=2)
```
This dataset is generated such that only the second X variable (on the Y-axis in the plot) is influencing the probability of a point being positive or negative. There are also only 20 data points. Let's fit a logistic regression with both predictors and also only the relevant predictor, then plot the separator line.
```
def sep_lr(T, lr):
    """separator line x2 = f(x1) implied by a fitted logistic regression"""
    beta1 = lr.coef_[0, 0]
    beta2 = lr.coef_[0, 1]
    beta0 = lr.intercept_
    return -(beta0 + beta1*T) / beta2
lr_sim = linear_model.LogisticRegression()
lr_sim.fit(X_sim,y_sim)
mults=0.8
T = np.linspace(0,7,100)
x2hat = sep_lr(T,lr_sim)
X_other = X_sim.copy()
X_other[:,0] = 0.
lr_sim = linear_model.LogisticRegression()
lr_sim.fit(X_other,y_sim)
x1hat = sep_lr(T,lr_sim)
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.plot(T,x1hat,'--',c='k',label='1-var sep')
plt.plot(T,x2hat,c='k',label='2-var sep')
plt.title("Logistic regression separator line")
_ = plt.legend(loc=2)
```
We see that, due to the randomness of the data points, the classifier that uses both predictors is significantly perturbed and will have a higher test error than the one that uses only the relevant variable. This is an example of how adding irrelevant variables can hurt prediction: the classifier begins to fit the noise. The problem is most pronounced when you have little data or many predictors.
One remedy is to order your predictor variables by some measure that you think indicates importance, select the best K variables, and examine the test error to choose K. K is called a tuning parameter, and selecting it requires comparing test errors, because otherwise you will not necessarily detect overfitting. Scikit-learn provides several methods for selecting the best K predictors; we will focus on ``feature_selection.SelectKBest``. By default, this considers a single predictor variable at a time and performs an ANOVA on that predictor with the binary response as the grouping variable (flipping the roles of the response and predictors), yielding an F score for each predictor. Choosing the K largest F scores gives you the best K variables. In the following code, we use the best K predictors to transform the training and test sets. As with other transformations, it is important to fit only on the training data.
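The F scores that ``SelectKBest`` ranks by are computed by ``feature_selection.f_classif``; a small sketch on synthetic data where only the first column is related to the response:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
y = np.repeat([-1, 1], 100)
X = rng.normal(size=(200, 3))
X[:, 0] += y  # shift the first column's group means apart

F, pval = f_classif(X, y)
print(F)  # the first F score dominates the other two

skb = SelectKBest(k=1).fit(X, y)
print(skb.get_support())  # only column 0 is selected
```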
```
def get_prec(lr, X, y, K):
    """Find precision for top K"""
    lr_score = X @ lr.coef_[0, :]
    sc_sorted_id = np.argsort(lr_score)[::-1]
    return np.mean(y[sc_sorted_id[:K]] == 1)

def test_kbest(X_tr, y_tr, X_te, y_te, k, prec_perc=.115):
    """Training and testing only on k-best variables"""
    ## Training
    # Feature Selection
    skb = feature_selection.SelectKBest(k=k)
    skb.fit(X_tr, y_tr)
    X_tr_kb = skb.transform(X_tr)
    # Fitting
    lr = linear_model.LogisticRegression()
    lr.fit(X_tr_kb, y_tr)
    yhat_tr = lr.predict(X_tr_kb)
    prec_K = int(prec_perc * len(y_tr))
    tr_prec = get_prec(lr, X_tr_kb, y_tr, prec_K)
    tr_error = (yhat_tr != y_tr).mean()
    ## Testing
    X_te_kb = skb.transform(X_te)
    yhat_te = lr.predict(X_te_kb)
    prec_K = int(prec_perc * len(y_te))
    te_prec = get_prec(lr, X_te_kb, y_te, prec_K)
    te_error = (yhat_te != y_te).mean()
    return tr_error, te_error, tr_prec, te_prec
```
In the above code, we return the 0-1 errors and the precision when we recommend the best 11.5% of the scores. This number is chosen because it is the proportion of positives in the training set.
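As a tiny worked example of this precision-at-K computation (the scores and labels below are made up): with $K=2$, we take the two highest scores and ask what fraction of them are true positives.

```python
import numpy as np

scores = np.array([2.1, 1.5, 0.3, -0.2, -1.0, -2.4])
y = np.array([1, -1, 1, -1, -1, -1])  # two positives among six points

K = 2
top_K = np.argsort(scores)[::-1][:K]  # indices of the two highest scores
precision_at_K = np.mean(y[top_K] == 1)
print(precision_at_K)  # one of the top two is a positive -> 0.5
```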
```
errors = [test_kbest(X_tr,y_tr,X_te,y_te,k) for k in range(1,X_tr.shape[1]+1)]
train_error, test_error, tr_prec, te_prec = zip(*errors)
```
We can plot the 0-1 error and see how it responds to the tuning parameter K.
```
plt.plot(train_error,label='training')
plt.plot(test_error,label='testing')
plt.legend()
```
Unfortunately, as before, the 0-1 error is not a very helpful metric and does not deviate significantly from just predicting all negatives. Precision is somewhat more useful: we can see that it increases in a noisy way, then drops significantly as K grows. This is consistent with an underfitting-overfitting tradeoff. At first we underfit because we have not yet selected all of the important predictor variables; then we begin to overfit when K gets large.
```
plt.plot(tr_prec,label='training')
plt.plot(te_prec,label='testing')
plt.legend()
```
Let's look at the variables that are selected in the best 10 and best 25.
```
skb = feature_selection.SelectKBest(k=10)
skb.fit(X_tr,y_tr)
set(var_names[skb.get_support()])
skb = feature_selection.SelectKBest(k=25)
skb.fit(X_tr,y_tr)
set(var_names[skb.get_support()])
```
It seems that duration is included in both but many other variables are added in addition to age and balance. We can also compare the PR curve for the test error for different models.
```
X_tr_kb = skb.transform(X_tr)
lr = linear_model.LogisticRegression()
lr.fit(X_tr_kb,y_tr)
X_te_kb = skb.transform(X_te)
z_log = lr.predict_proba(X_te_kb)[:,1]
plt.figure(figsize=(6,6))
prec_skb, rec_skb, _ = metrics.precision_recall_curve(y_te,z_log)
plt.plot(rec_skb,prec_skb,label='25-best vars')
plt.plot(rec,prec,label='all vars')
plt.plot(rec_lr,prec_lr,label='3 vars')
plt.xlabel('recall')
plt.ylabel('precision')
plt.ylim([-.1,1.1])
plt.legend(loc=3)
_ = plt.plot()
```
It seems that the 25 best variables model has a higher precision at most recalls, which means that we can predict more positives while maintaining a good prediction precision.
```
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from matplotlib import colors
import dataframe_image as dfi
import seaborn as sns
pd.set_option('display.max_columns', 100)
pd.set_option('display.max_rows', None)
pd.set_option('display.float_format', lambda x: '%.2f' % x)
dir_path = "C:\\Users\\davidsong\\Desktop\\USAID\\Documents\\Data\\"
file_name = "Financial_Structured_Datasets_COP17-21_20211217.txt"
df_fsd = pd.read_csv(dir_path+file_name,
sep = '\t')
df_fsd = df_fsd.rename(columns={'workplan_budget_amt': 'workplan',
'expenditure_amt': 'expenditure',
'cop_budget_total': "cop"})
cols = ['operatingunit', 'country','mech_code', 'mech_name',
'program', 'sub_program', 'interaction_type',
'beneficiary', 'sub_beneficiary', 'cost_category', 'sub_cost_category',
'planning_cycle', 'implementation_year', 'cop', 'workplan',
'expenditure']
# Confirming expenditures reported for COP20, FY21
cond_yr20 = df_fsd['planning_cycle'] == 'COP20'
print(df_fsd[cond_yr20].shape[0])
print(sum(df_fsd[cond_yr20]['expenditure'].isna()))
# workplan also already reported for COP20, FY21
cond_yr20 = df_fsd['planning_cycle'] == 'COP21'
print(df_fsd[cond_yr20].shape[0])
print(sum(df_fsd[cond_yr20]['workplan'].isna()))
cond_19 = df_fsd['planning_cycle'] == 'COP20'
cond_21 = df_fsd['planning_cycle'] == 'COP21'
# cond_agency = df_fsd['fundingagency'] == 'USAID'
# df_subset_exp = df_fsd[cond_19 & cond_agency][cols]
# df_subset_work = df_fsd[cond_21 & cond_agency][cols]
df_subset_exp = df_fsd[cond_19][cols]
df_subset_work = df_fsd[cond_21][cols]
def add_perc_rank(df, col):
    col_sum = sum(df[col])
    df['percent'] = df[col] / col_sum
    df['rank'] = df[col].rank(ascending=False)

def count_df(df, count_col, groupby_cols=['program', 'beneficiary']):
    df_count = df.groupby(by=groupby_cols).count()[[count_col]]
    df_count = df_count.sort_values(by=count_col, ascending=False)
    add_perc_rank(df_count, count_col)
    return df_count

def sum_df(df, sum_col, groupby_cols=['program', 'beneficiary']):
    df_sum = df.groupby(by=groupby_cols).sum()[[sum_col]]
    df_sum = df_sum.sort_values(by=sum_col, ascending=False)
    add_perc_rank(df_sum, sum_col)
    return df_sum

def highlight_diff(column, diff=5):
    highlight = 'background-color: yellow;'
    default = ''
    return [highlight if abs(v) > diff else default for v in column]

def pipeline_compare(df1, df2):
    df_merge = df1.merge(df2, how='outer', left_index=True, right_index=True)
    df_merge = df_merge.sort_values(by='rank_x')
    df_merge['rank_diff'] = df_merge['rank_x'] - df_merge['rank_y']
    return df_merge
# drop any rows that have no expenditure for our desired Fiscal Year
cond_has_exp = df_subset_exp["expenditure"] > 0
df_subset_exp = df_subset_exp[cond_has_exp]
cond_has_workplan = df_subset_work["workplan"] > 0
df_subset_work = df_subset_work[cond_has_workplan]
df_subset_exp = df_subset_exp.replace({'sub_program':{"Comm. mobilization, behavior & norms change": "Comm. mobilization",
"Policy, planning, coordination & management of disease control programs": "Policy, planning...",
"Primary prevention of HIV and sexual violence": "Prevent HIV & sexual violence"},
'beneficiary':{"Pregnant & Breastfeeding Women" : "PBFW"},
'sub_beneficiary':{"Young women & adolescent females": "AGYW",
"Orphans & vulnerable children": "OVC",
"Military & other uniformed services": "Military...",
"Young people & adolescents": "Young people..."
}
})
df_subset_work = df_subset_work.replace({'sub_program':{"Comm. mobilization, behavior & norms change": "Comm. mobilization",
"Policy, planning, coordination & management of disease control programs": "Policy, planning...",
"Primary prevention of HIV and sexual violence": "Prevent HIV & sexual violence"},
'beneficiary':{"Pregnant & Breastfeeding Women" : "PBFW"},
'sub_beneficiary':{"Young women & adolescent females": "AGYW",
"Orphans & vulnerable children": "OVC",
"Military & other uniformed services": "Military...",
"Young people & adolescents": "Young people..."
}
})
pb_count_exp = count_df(df_subset_exp, "expenditure")
pb_count_work = count_df(df_subset_work, "workplan")
df_pb_count = pipeline_compare(pb_count_exp, pb_count_work)
pb_count_styled = df_pb_count.style.apply(highlight_diff, subset=['rank_diff'], axis=0)
pb_count_styled
#dfi.export(pb_count_styled, "pb_count.png")
# Only 37 out of 42 possible program-beneficiary combinations are observed in COP19 Expenditure
# and only 38 out of 42 for COP21 Workplan. 40 if you overlap them
df_pb_count.shape[0]
def find_idx_over_90(df, threshold=0.9, col='percent_x'):
    idx_over_90 = 0
    len_df = df.shape[0]
    for i in range(len_df):
        if df.head(i)[col].sum() > threshold:
            idx_over_90 = i
            break
    # add +1 to idx_over_90 to include the 0th position in the percentage
    return {'idx': idx_over_90, 'perc': (idx_over_90 + 1)/len_df}
count_exp_out = pb_count_exp.copy()
count_exp_out = count_exp_out.rename(columns={"expenditure":"count in expenditure reporting"})
count_exp_out["percent"] = count_exp_out["percent"]*100
count_exp_out_styled = count_exp_out.style.format(na_rep='', precision=2, thousands=",",formatter = "{:,.2f}%",
subset=pd.IndexSlice[:, ["percent"]])\
.set_table_styles([dict(selector="th",props=[('max-width', '100px')])])
dfi.export(count_exp_out_styled, "pb_count.png")
pb_uniq_dict = find_idx_over_90(df_pb_count)
print(pb_uniq_dict)
print(pb_uniq_dict['idx']/42)
pb_uniq_dict = find_idx_over_90(df_pb_count, 0.5)
print(pb_uniq_dict)
print(pb_uniq_dict['idx']/42)
cond_cop_na = df_subset_exp['cop'].isna()
df_cop_non_null = df_subset_exp[~cond_cop_na]
print(df_cop_non_null.shape)
sum(df_cop_non_null['expenditure'].isna())
```
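The loop in ``find_idx_over_90`` above can also be expressed with a cumulative sum. A sketch of the equivalent calculation on made-up counts (not the FSD data):

```python
import numpy as np

counts = np.array([40, 25, 15, 10, 5, 5])  # rows already sorted descending
threshold = 0.9

# smallest number of leading rows whose share strictly exceeds the threshold
n_rows = int(np.argmax(np.cumsum(counts) > threshold * counts.sum())) + 1
print(n_rows, n_rows / len(counts))  # 5 of 6 rows cover 95% of the total
```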
#### Conclusions on above:
Weirdly, COP and expenditures appear to always be on separate rows in COP19
```
sub_interv_grp = ['program', 'sub_program', 'beneficiary', 'sub_beneficiary']
sub_count_exp = count_df(df_subset_exp, "expenditure", sub_interv_grp)
sub_count_work = count_df(df_subset_work, "workplan", sub_interv_grp)
df_sub_count = pipeline_compare(sub_count_exp, sub_count_work)
sub_count_style = df_sub_count.head(50).style.apply(highlight_diff, subset=['rank_diff'], axis=0)
sub_count_style
dfi.export(sub_count_style, "sub_count.png")
# There are 27 sub-beneficiaries and 54 sub-program areas, meaning 1458 possible combinations.
# However, only 295 are used in COP19
true_total_subpb = 27*54
print(true_total_subpb)
print(df_sub_count.shape[0])
print(df_sub_count.shape[0]/true_total_subpb)
# https://datim.zendesk.com/hc/en-us/articles/360015671212-PEPFAR-Financial-Classifications-Reference-Guide
# Looking at the actual financial framework, there are only 34 sub-program areas if we merge service and non-service delivery
# So, the true total should be 34*27
true_total_sub_pb = 34*27
true_total_sub_pb
# 5.5% of sub-interventions make up 50% of all reported items
sub_pb_dict = find_idx_over_90(df_sub_count, 0.5)
print(sub_pb_dict)
sub_pb_dict['idx']/true_total_subpb
sub_pb_dict = find_idx_over_90(df_sub_count, 0.9)
print(sub_pb_dict)
sub_pb_dict['idx']/true_total_subpb
```
# Sum of total expenditure per intervention
Instead of unique rows, use actual sum of expenditure
```
pb_sum_exp = sum_df(df_subset_exp, "expenditure")
pb_sum_work = sum_df(df_subset_work, "workplan")
df_pb_sum = pipeline_compare(pb_sum_exp, pb_sum_work)
pb_sum_style = df_pb_sum.style.apply(highlight_diff, subset=['rank_diff'], axis=0)
pb_sum_style
dfi.export(pb_sum_style, "pb_sum.png")
sum_exp_out = pb_sum_exp.head(17).copy()
sum_exp_out = sum_exp_out.rename(columns={"expenditure":"total expenditure"})
sum_exp_out["percent"] = sum_exp_out["percent"]*100
sum_exp_out.style.format(na_rep='', precision=2, thousands=",",formatter = "{:,.2f}%",
subset=pd.IndexSlice[:, ["percent"]])\
.format(na_rep='', precision=2, thousands=",",formatter="${:,.0f}",
subset=pd.IndexSlice[:,['total expenditure']])\
.set_table_styles([dict(selector="th",props=[('max-width', '100px')])])
pb_sum_dict = find_idx_over_90(df_pb_sum, 0.5)
print(pb_sum_dict)
pb_sum_dict['idx']/42
pb_sum_dict = find_idx_over_90(df_pb_sum, 0.9)
print(pb_sum_dict)
pb_sum_dict['idx']/42
sub_interv_grp = ['program', 'sub_program', 'beneficiary', 'sub_beneficiary']
sub_sum_exp = sum_df(df_subset_exp, "expenditure", sub_interv_grp)
sub_sum_work = sum_df(df_subset_work, "workplan", sub_interv_grp)
df_sub_sum = pipeline_compare(sub_sum_exp, sub_sum_work)
sub_sum_style = df_sub_sum.head(50).style.apply(highlight_diff, subset=['rank_diff'], axis=0)
sub_sum_style
dfi.export(sub_sum_style, "sub_sum.png")
subpb_sum_dict = find_idx_over_90(df_sub_sum, 0.5)
print(subpb_sum_dict)
subpb_sum_dict['idx']/true_total_subpb
subpb_sum_dict = find_idx_over_90(df_sub_sum, 0.9)
print(subpb_sum_dict)
subpb_sum_dict['idx']/true_total_subpb
df_sub_sum['expenditure'].head(95).sum()
```
# Heatmaps of expenditure
```
df_sub_count.shape
def make_heat(df):
    df_heat = pd.pivot_table(df[["expenditure"]].reset_index(),
                             index=['program', 'sub_program'],
                             columns=['beneficiary', 'sub_beneficiary'],
                             values='expenditure')
    # Add missing sub-program row
    df_heat.loc[("ASP", "Injection safety"),] = np.nan
    # Sort beneficiaries by column total
    sum_ben = df_heat.sum(axis=0, skipna=True)
    sum_ben = sum_ben.rename(('sum', 'beneficiaries'))
    df_heat = df_heat.append(sum_ben).sort_values(by=('sum', 'beneficiaries'), axis=1, ascending=False)
    df_heat.drop(('sum', 'beneficiaries'), inplace=True)
    # Sort programs by row total
    sum_prog = df_heat.sum(axis=1, skipna=True)
    sum_prog = sum_prog.rename(('sum', 'prog'))
    df_heat[('sum', 'prog')] = sum_prog
    df_heat = df_heat.sort_values(by=('sum', 'prog'), ascending=False)
    df_heat.drop(columns=('sum', 'prog'), inplace=True)
    return df_heat
colors.Normalize(0,5)
#m = heat_sub_count.min().min()
#M = heat_sub_count.max().max()
#rng = M - m
#norm = colors.Normalize(m - (rng * 0),
# M + (rng * 0.2))
#norm(heat_sub_count.values)
# https://stackoverflow.com/questions/38931566/pandas-style-background-gradient-both-rows-and-columns/42563850#42563850
# style.background_gradient only works on columns or rows, not both, so use custom function
def background_gradient(s, m, M, cmap='PuBu', low=0, high=0):
    rng = M - m
    norm = colors.Normalize(m - (rng * low),
                            M + (rng * high))
    normed = norm(s.values)
    c = [colors.rgb2hex(x) for x in plt.cm.get_cmap(cmap)(normed)]
    return ['background-color: %s' % color for color in c]
# redefine background_gradient to also pick a readable font color per cell
def background_gradient(s, m, M, cmap='PuBu', low=0, high=0):
    rng = M - m
    norm = colors.Normalize(m - (rng * low),
                            M + (rng * high))
    normed = norm(s.values)
    c = [colors.rgb2hex(x) for x in plt.cm.get_cmap(cmap)(normed)]
    font_c = [colors.rgb2hex(x) for x in plt.cm.get_cmap('binary')(normed)]
    return ['background-color: %s; color: %s' % (color, fnt) for (color, fnt) in zip(c, font_c)]
# convert counts to percent of total, for easier comparison
df_sub_count_perc = df_sub_count.copy()
df_sub_count_perc["expenditure"] = (df_sub_count_perc["expenditure"] / df_sub_count['expenditure'].sum())*100
1/(34*27)
np.nan >0.1
def custom_gradient(val):
    """grey out missing cells and darken very small percentages"""
    if pd.isna(val):
        return 'background-color: lightgrey'
    if val < 0.05:
        return 'background-color: darkgray'
    if val < 0.1:
        return 'background-color: dimgray'
    return ''
heat_sub_count = make_heat(df_sub_count_perc)
heat_count_style = heat_sub_count.style.format(na_rep='', precision=2, thousands=",",formatter = "{:,.2f}%")\
.apply(background_gradient,
cmap='viridis',
m = heat_sub_count.min().min(),
M = 2, #heat_sub_count.max().max(),
low=0.001,
high=0.8)\
.applymap(custom_gradient)\
.set_table_styles([{'selector': 'td', 'props':'border: 1px solid white'}])
heat_count_style
dfi.export(heat_count_style, "heat_sub_count.png")
heat_count_style.to_excel("heat_sub_count.xlsx", engine="openpyxl")
# convert counts to percent of total, for easier comparison
df_sub_sum_perc = df_sub_sum.copy()
df_sub_sum_perc["expenditure"] = (df_sub_sum_perc["expenditure"] / df_sub_sum['expenditure'].sum())*100
# https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html
heat_sub_sum = make_heat(df_sub_sum_perc)
heat_sum_style = heat_sub_sum.style.format(na_rep='NA', precision=2, thousands=",",formatter = "{:,.2f}%")\
.applymap(lambda x: 'color: grey')\
.apply(background_gradient,
cmap='viridis',
m = heat_sub_sum.min().min(),
M = 2,
low=0,
high=0.8)\
.applymap(custom_gradient)\
.set_table_styles([{'selector': 'td', 'props':'border: 1px solid grey'}])
heat_sum_style
# dfi.export(heat_sum_style, "heat_sub_sum.png")
# heat_sum_style.to_excel("heat_sub_sum.xlsx", engine="openpyxl")
f, ax = plt.subplots(figsize=(10,8))
ax = sns.heatmap(heat_sub_count.fillna(0), vmax=0.2, center=0.1)
plt.savefig('heat_sub_count_stylized.png', bbox_inches='tight')
f, ax = plt.subplots(figsize=(10,8))
ax = sns.heatmap(heat_sub_sum.fillna(0), vmax=0.2, center=0.1)
plt.savefig('heat_sub_sum_stylized.png', bbox_inches='tight')
# Only 1 sub-program area is never used
missing_prog = ("ASP", "Injection safety")
# df_sub_count.reset_index().groupby(['beneficiary','sub_beneficiary']).sum()[['expenditure']]
```
# Beneficiary splits
```
benef_count = pb_count_exp.groupby(by='beneficiary').sum()[["expenditure"]].sort_values(by="expenditure",
ascending=False)
add_perc_rank(benef_count, "expenditure")
benef_count = benef_count.rename(columns={"expenditure":"count in FSD"})
benef_count['percent'] = benef_count['percent']*100
benef_count.style.format(na_rep='', precision=2, thousands=",",formatter = "{:,.2f}%",
subset=pd.IndexSlice[:, ["percent"]])\
.set_table_styles([dict(selector="th",props=[('max-width', '100px')])])
1/27
sub_ben_count = sub_count_exp.groupby(by=['beneficiary', 'sub_beneficiary']).sum()[['expenditure']].sort_values(by='expenditure',
ascending=False)
add_perc_rank(sub_ben_count, "expenditure")
sub_ben_count = sub_ben_count.rename(columns={"expenditure":"count in FSD"})
sub_ben_count['percent'] = sub_ben_count['percent']*100
sub_ben_count.style.format(na_rep='', precision=2, thousands=",",formatter = "{:,.2f}%",
subset=pd.IndexSlice[:, ["percent"]])\
.set_table_styles([dict(selector="th",props=[('max-width', '100px')])])
sub_ben_count = sub_count_exp.groupby(by=['sub_beneficiary']).sum()[['expenditure']].sort_values(by='expenditure',
ascending=False)
add_perc_rank(sub_ben_count, "expenditure")
sub_ben_count.loc[["Not disaggregated",]]
```
# By Program Area
```
pa_count = pb_count_exp.groupby(by='program').sum()[["expenditure"]].sort_values(by="expenditure",
ascending=False)
add_perc_rank(pa_count, "expenditure")
pa_count = pa_count.rename(columns={"expenditure":"count in FSD"})
pa_count['percent'] = pa_count['percent']*100
pa_count.style.format(na_rep='', precision=2, thousands=",",formatter = "{:,.2f}%",
subset=pd.IndexSlice[:, ["percent"]])\
.set_table_styles([dict(selector="th",props=[('max-width', '100px')])])
sub_pa_count = sub_count_exp.groupby(by=['program', 'sub_program']).sum()[['expenditure']].sort_values(by='expenditure',
                                                                                                       ascending=False)
add_perc_rank(sub_pa_count, "expenditure")
sub_pa_count.shape
sub_pa_count = sub_pa_count.rename(columns={"expenditure":"count in FSD"})
sub_pa_count['percent'] = sub_pa_count['percent']*100
sub_pa_count.head(17).style.format(na_rep='', precision=2, thousands=",",formatter = "{:,.2f}%",
subset=pd.IndexSlice[:, ["percent"]])\
.set_table_styles([dict(selector="th",props=[('max-width', '150px')])])
sub_pa_count.tail(15).style.format(na_rep='', precision=2, thousands=",",formatter = "{:,.2f}%",
subset=pd.IndexSlice[:, ["percent"]])\
.set_table_styles([dict(selector="th",props=[('max-width', '150px')])])
sub_pa_count = sub_count_exp.groupby(by=['sub_program']).sum()[['expenditure']].sort_values(by='expenditure',
ascending=False)
add_perc_rank(sub_pa_count, "expenditure")
sub_pa_count.loc[["Not Disaggregated",]]
# One-third of sub-program labels are used less than 1% of the time
sum(sub_pa_count["percent"]<0.01)/len(sub_pa_count)
sum(sub_pa_count["percent"]<0.01)
len(sub_pa_count)
```
# Comparing mechanisms: usage of 'Not Disaggregated' vs. number of unique interventions
```
def count_df(df, count_col, groupby_cols = ['program', 'beneficiary']):
df_count = df.groupby(by = groupby_cols).count()[[count_col]]
df_count = df_count.sort_values(by=count_col, ascending=False)
add_perc_rank(df_count, count_col)
return df_count
df_exp_mech = df_subset_exp.copy()
df_exp_mech["program_sub"] = df_exp_mech["program"] + " - " + df_exp_mech["sub_program"]
df_exp_mech["beneficiary_sub"] = df_exp_mech["beneficiary"] + " - " + df_exp_mech["sub_beneficiary"]
df_exp_mech["sub_intervention"] = df_exp_mech["program_sub"] + " - " + df_exp_mech["beneficiary_sub"]
df_subset_exp.head()
mech_uniques = df_exp_mech.groupby(by=["mech_name", "mech_code"]).nunique()[["program_sub", "beneficiary_sub", "sub_intervention"]]
mech_uniques.shape
mech_uniques.head()
df_exp_mech["disagg_program"] = df_exp_mech["sub_program"] == 'Not Disaggregated'
df_exp_mech["disagg_benef"] = df_exp_mech["sub_beneficiary"] == "Not disaggregated"
mech_disagg_count = df_exp_mech.groupby(by=["mech_name", "mech_code"]).sum()[["disagg_program", "disagg_benef"]]
mech_disagg_count.head()
df_subset_exp.shape
merge_mech = mech_uniques.merge(mech_disagg_count, left_index=True, right_index=True)
merge_mech["total_unique"] = merge_mech["program_sub"] + merge_mech["beneficiary_sub"]
merge_mech["total_disagg"] = merge_mech["disagg_program"] + merge_mech["disagg_benef"]
cond_prog_outlier = merge_mech['disagg_program']<150
cond_benef_outlier = merge_mech['disagg_benef']<500
mech_no_outlier = merge_mech[cond_prog_outlier & cond_benef_outlier]
plt.scatter(mech_no_outlier['disagg_program'], mech_no_outlier['disagg_benef'])
# https://stackoverflow.com/questions/8671808/matplotlib-avoiding-overlapping-datapoints-in-a-scatter-dot-beeswarm-plot
def rand_jitter(arr):
stdev = .01 * (max(arr) - min(arr))
return arr + np.random.randn(len(arr)) * stdev
def jitter(x, y, s=20, c='b', marker='o', cmap=None, norm=None, vmin=None, vmax=None, alpha=None, linewidths=None, verts=None, hold=None, **kwargs):
return plt.scatter(rand_jitter(x), rand_jitter(y), s=s, c=c, marker=marker, cmap=cmap, norm=norm, vmin=vmin, vmax=vmax, alpha=alpha, linewidths=linewidths, **kwargs)
jitter(mech_no_outlier['sub_intervention'], mech_no_outlier['total_disagg'])
plt.xlabel("# unique sub-interventions")
plt.ylabel("Total # of 'Not Disaggregated'")
jitter(mech_no_outlier['program_sub'], mech_no_outlier['disagg_program'])
plt.xlabel("# unique sub-programs")
plt.ylabel("Total # of 'Not Disaggregated' in sub-programs")
jitter(mech_no_outlier['beneficiary_sub'], mech_no_outlier['disagg_benef'])
plt.xlabel("# unique sub-beneficiaries")
plt.ylabel("Total # of 'Not Disaggregated' in sub-beneficiaries")
```
| github_jupyter |
Lambda School Data Science
*Unit 2, Sprint 1, Module 3*
---
# Ridge Regression
## Assignment
We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices.
But not just for condos in Tribeca...
Instead, predict property sales prices for **One Family Dwellings** (`BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'`).
Use a subset of the data where the **sale price was more than \$100 thousand and less than \$2 million.**
The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.
- [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.
- [ ] Do one-hot encoding of categorical features.
- [ ] Do feature selection with `SelectKBest`.
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
- [ ] Fit a ridge regression model with multiple features.
- [ ] Get mean absolute error for the test set.
- [ ] As always, commit your notebook to your fork of the GitHub repo.
## Stretch Goals
- [ ] Add your own stretch goal(s) !
- [ ] Instead of `Ridge`, try `LinearRegression`. Depending on how many features you select, your errors will probably blow up! 💥
- [ ] Instead of `Ridge`, try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html).
- [ ] Learn more about feature selection:
- ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance)
- [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html)
- [mlxtend](http://rasbt.github.io/mlxtend/) library
- scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection)
- [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson.
- [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in a more inferential statistical approach to linear regression and feature selection, looking at p-values and 95% confidence intervals for the coefficients.
- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way.
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
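The assignment steps can be sketched end-to-end on synthetic data (an illustrative sketch only: the real notebook should use the NYC columns and the date-based train/test split described above):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the one-hot-encoded feature matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)
X_train, X_test, y_train, y_test = X[:150], X[150:], y[:150], y[150:]

model = make_pipeline(
    StandardScaler(),                # feature scaling
    SelectKBest(f_regression, k=5),  # feature selection
    Ridge(alpha=1.0),                # regularized linear regression
)
model.fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
print(mae)
```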
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import pandas as pd
import pandas_profiling
# Read New York City property sales data
df = pd.read_csv(DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv')
# Change column names: replace spaces with underscores
df.columns = [col.replace(' ', '_') for col in df]
# SALE_PRICE was read as strings.
# Remove symbols, convert to integer
df['SALE_PRICE'] = (
df['SALE_PRICE']
.str.replace('$','')
.str.replace('-','')
.str.replace(',','')
.astype(int)
)
# BOROUGH is a numeric column, but arguably should be a categorical feature,
# so convert it from a number to a string
df['BOROUGH'] = df['BOROUGH'].astype(str)
# Reduce cardinality for NEIGHBORHOOD feature
# Get a list of the top 10 neighborhoods
top10 = df['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
df.loc[~df['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
```
| github_jupyter |
John Mount
October 13, 2019
[This](https://github.com/WinVector/data_algebra/blob/master/Examples/WindowFunctions/WindowFunctions.md) is a tutorial on how to use window functions in either the `R` [`rquery`](https://github.com/WinVector/rquery) package, or in the `Python` [`data_algebra`](https://github.com/WinVector/data_algebra) package (`R` example [here](https://github.com/WinVector/rquery/blob/master/Examples/WindowFunctions/WindowFunctions.md), `Python` example [here](https://github.com/WinVector/data_algebra/blob/master/Examples/WindowFunctions/WindowFunctions.md)).
The [data_algebra](https://github.com/WinVector/data_algebra) provides a simplified (though verbose) unified interface to Pandas and SQL data transforms, including window functions.
Let's work an example. First bring in our packages.
```
import sqlite3
import pandas
import graphviz
import data_algebra.diagram
from data_algebra.data_ops import * # https://github.com/WinVector/data_algebra
import data_algebra.util
import data_algebra.SQLite
```
Now some example data.
```
d = pandas.DataFrame({
'g': ['a', 'b', 'b', 'c', 'c', 'c'],
'x': [1, 4, 5, 7, 8, 9],
'v': [10, 40, 50, 70, 80, 90],
})
```
And we can run a number of ordered and un-ordered window functions (the distinction is given by if there is an `order_by` argument present).
```
table_description = describe_table(d)
ops = table_description. \
extend({
'row_number': '_row_number()',
'shift_v': 'v.shift()',
'cumsum_v': 'v.cumsum()',
},
order_by=['x'],
partition_by=['g']). \
extend({
'ngroup': '_ngroup()',
'size': '_size()',
'max_v': 'v.max()',
'min_v': 'v.min()',
'sum_v': 'v.sum()',
'mean_v': 'v.mean()',
},
partition_by=['g'])
res1 = ops.transform(d)
res1
```
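For reference, the same window calculations can be written directly in Pandas (an added sketch on the example data above, not `data_algebra` code):

```python
import pandas as pd

d = pd.DataFrame({
    'g': ['a', 'b', 'b', 'c', 'c', 'c'],
    'x': [1, 4, 5, 7, 8, 9],
    'v': [10, 40, 50, 70, 80, 90],
})
d = d.sort_values(['g', 'x'])
# ordered (cumulative) window functions
d['row_number'] = d.groupby('g').cumcount() + 1
d['shift_v'] = d.groupby('g')['v'].shift()
d['cumsum_v'] = d.groupby('g')['v'].cumsum()
# un-ordered (whole-partition) window functions
d['size'] = d.groupby('g')['v'].transform('size')
d['max_v'] = d.groupby('g')['v'].transform('max')
print(d['cumsum_v'].tolist())  # [10, 40, 90, 70, 150, 240]
```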
Note: we are taking care to separate operations between the ordered block and the un-ordered block. In databases, the presence of an order constraint in a window function often switches the operation to a cumulative mode.
One of the benefits of the `data_algebra` is that the commands are saved in an object.
```
print(ops.to_python(pretty=True))
```
We can also present a diagram of the operator chain.
```
dot_1 = data_algebra.diagram.to_digraph(ops)
dot_1
```
And these commands can be re-used and even exported to SQL (including large scale SQL such as PostgreSQL, Apache Spark, or Google Big Query).
For a simple demonstration we will use small-scale SQL as realized in SQLite.
```
conn = sqlite3.connect(":memory:")
db_model = data_algebra.SQLite.SQLiteModel()
db_model.prepare_connection(conn)
ops_db = table_description. \
extend({
'row_number': '_row_number()',
'shift_v': 'v.shift()',
'cumsum_v': 'v.cumsum()',
},
order_by=['x'],
partition_by=['g']). \
extend({
# 'ngroup': '_ngroup()',
'size': '_size()',
'max_v': 'v.max()',
'min_v': 'v.min()',
'sum_v': 'v.sum()',
'mean_v': 'v.mean()',
},
partition_by=['g'])
db_model.insert_table(conn, d, table_description.table_name)
sql1 = ops_db.to_sql(db_model, pretty=True)
print(sql1)
```
And we can execute this SQL either to materialize a remote result (which involves no data motion, as we send the SQL commands to the database, not move the data to/from Python), or to bring a result back from the database to Python.
```
res1_db = db_model.read_query(conn, sql1)
res1_db
```
Notice we didn't calculate the group id `ngroup` in the `SQL` version, because it is a much less common window function (and not often used in applications). It is also only interesting when we are grouping by a composite key (otherwise the single key column is already the per-group id). So not all data_algebra pipelines can run in all environments. However, we can compute (arbitrary) group IDs in a domain-independent manner as follows.
```
id_ops_a = table_description. \
project(group_by=['g']). \
extend({
'ngroup': '_row_number()',
},
order_by=['g'])
id_ops_b = table_description. \
natural_join(id_ops_a, by=['g'], jointype='LEFT')
print(id_ops_b.to_python(pretty=True))
```
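The same project/number/join idea can be checked in plain Pandas (an added sketch on the example data, not `data_algebra` code):

```python
import pandas as pd

d = pd.DataFrame({'g': ['a', 'b', 'b', 'c', 'c', 'c'], 'v': [10, 40, 50, 70, 80, 90]})
# project to the distinct keys, number them, then join the label back
ids = d[['g']].drop_duplicates().sort_values('g').reset_index(drop=True)
ids['ngroup'] = range(1, len(ids) + 1)
labeled = d.merge(ids, on='g', how='left')
print(labeled['ngroup'].tolist())  # [1, 2, 2, 3, 3, 3]
```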
Here we land the result in the database, without moving data through Python.
```
sql2 = id_ops_b.to_sql(db_model)
cur = conn.cursor()
cur.execute('CREATE TABLE remote_result AS ' + sql2)
```
And we later copy it over to look at.
```
res2_db = db_model.read_table(conn, 'remote_result')
res2_db
```
And we can execute the same pipeline in Pandas.
```
id_ops_b.transform(d)
```
And we can diagram the group labeling operation.
```
dot_b = data_algebra.diagram.to_digraph(id_ops_b)
dot_b
```
Or all the steps in one sequence.
```
all_ops = id_ops_b. \
extend({
'row_number': '_row_number()',
'shift_v': 'v.shift()',
'cumsum_v': 'v.cumsum()',
},
order_by=['x'],
partition_by=['g']). \
extend({
'size': '_size()',
'max_v': 'v.max()',
'min_v': 'v.min()',
'sum_v': 'v.sum()',
'mean_v': 'v.mean()',
},
partition_by=['g'])
dot_all = data_algebra.diagram.to_digraph(all_ops)
dot_all
```
And we can run this whole sequence with Pandas.
```
all_ops.transform(d)
```
Or in the database (via automatic `SQL` generation).
```
db_model.read_query(conn, all_ops.to_sql(db_model))
# clean up
conn.close()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/YashMaster/the/blob/master/TensorFlow_with_GPU.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Setup shit
```
# Setup shit
!pip install --upgrade pip
!pip install --upgrade tensorflow tqdm nltk flask
!python -m nltk.downloader -d $HOME/nltk_data punkt
import nltk
```
# Confirm TensorFlow can see the GPU
Simply select "GPU" in the Accelerator drop-down in Notebook Settings (either through the Edit menu or the command palette at cmd/ctrl-shift-P).
```
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
```
# Observe TensorFlow speedup on GPU relative to CPU
This example constructs a typical convolutional neural network layer over a
random image and manually places the resulting ops on either the CPU or the GPU
to compare execution speed.
```
import tensorflow as tf
import timeit
# See https://www.tensorflow.org/tutorials/using_gpu#allowing_gpu_memory_growth
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
with tf.device('/cpu:0'):
random_image_cpu = tf.random_normal((100, 100, 100, 3))
net_cpu = tf.layers.conv2d(random_image_cpu, 32, 7)
net_cpu = tf.reduce_sum(net_cpu)
with tf.device('/gpu:0'):
random_image_gpu = tf.random_normal((100, 100, 100, 3))
net_gpu = tf.layers.conv2d(random_image_gpu, 32, 7)
net_gpu = tf.reduce_sum(net_gpu)
sess = tf.Session(config=config)
# Test execution once to detect errors early.
try:
sess.run(tf.global_variables_initializer())
except tf.errors.InvalidArgumentError:
print(
'\n\nThis error most likely means that this notebook is not '
'configured to use a GPU. Change this in Notebook Settings via the '
'command palette (cmd/ctrl-shift-P) or the Edit menu.\n\n')
raise
def cpu():
sess.run(net_cpu)
def gpu():
sess.run(net_gpu)
# Runs the op several times.
print('Time (s) to convolve 32x7x7x3 filter over random 100x100x100x3 images '
'(batch x height x width x channel). Sum of ten runs.')
print('CPU (s):')
cpu_time = timeit.timeit('cpu()', number=10, setup="from __main__ import cpu")
print(cpu_time)
print('GPU (s):')
gpu_time = timeit.timeit('gpu()', number=10, setup="from __main__ import gpu")
print(gpu_time)
print('GPU speedup over CPU: {}x'.format(int(cpu_time/gpu_time)))
sess.close()
```
| github_jupyter |
# 1A.e - Graded exercise, November 27, 2012 (coloring, solution)
Coloring an image and drawing a spiral with *matplotlib*.
```
%matplotlib inline
import matplotlib.pyplot as plt
from jyquickhelper import add_notebook_menu
add_notebook_menu()
```
## building the spiral
We use a parametric representation of the spiral: [spirale](https://fr.wikipedia.org/wiki/Spirale).
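The parametrization used in the code below can be written out explicitly (with the code's variable names: `r` a scale factor, `th` the angle, and `phase` the offset between the two spirals):

$$
x(\theta) = r\,\theta\,\cos(\theta + \varphi), \qquad y(\theta) = r\,\theta\,\sin(\theta + \varphi),
$$

with $\varphi = 0$ for the first spiral and $\varphi = \pi$ for the second, which produces two interleaved spirals of Archimedean type.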
```
import math
# this function draws two interleaved spirals in an nb x nb matrix
# the result is returned as a list of lists
def construit_matrice (nb) :
mat = [ [ 0 for x in range (0,nb) ] for y in range(0,nb) ]
def pointij (nb,r,th,mat,c,phase) :
i,j = r*th * math.cos(th+phase), r*th*math.sin(th+phase)
i,j = int(i*100/nb), int(j*100/nb)
i,j = (i+nb)//2, (j+nb)//2
if 0 <= i < nb and 0 <= j < nb :
mat[i][j] = c
return i,j
r = 3.5
t = 0
for tinc in range (nb*100000) :
t += 1.0 * nb / 100000
th = t * math.pi * 2
i,j = pointij (nb,r,th,mat,1, 0)
i,j = pointij (nb,r,th,mat,1, math.pi)
if i >= nb and j >= nb : break
return mat
matrice = construit_matrice(100)
```
## drawing the spiral
```
import matplotlib.pyplot as plt
def dessin_matrice (matrice) :
f, ax = plt.subplots()
ax.set_ylim([0, len(matrice[0])])
ax.set_xlim([0, len(matrice)])
colors = { 1: "blue", 2:"red" }
for i in range(0,len(matrice)) :
for j in range (0, len(matrice[i])) :
if matrice [i][j] in colors :
ax.plot ([i-0.5,i-0.5,i+0.5,i+0.5,i-0.5,i+0.5,i-0.5,i+0.5],
[j-0.5,j+0.5,j+0.5,j-0.5,j-0.5,j+0.5,j+0.5,j-0.5],
colors [ matrice[i][j] ])
return ax
dessin_matrice(matrice)
```
## Q1
```
def voisins_a_valeurs_nulle (matrice,i,j) :
res = []
if i > 0 and matrice[i-1][j] == 0 : res.append ( (i-1,j) )
if i < len(matrice)-1 and matrice[i+1][j] == 0 : res.append ( (i+1,j) )
if j > 0 and matrice[i][j-1] == 0 : res.append ( (i, j-1) )
if j < len(matrice[i])-1 and matrice[i][j+1] == 0 : res.append ( (i, j+1) )
return res
```
## Q2
```
def tous_voisins_a_valeurs_nulle (matrice, liste_points) :
res = []
for i,j in liste_points :
res += voisins_a_valeurs_nulle (matrice, i,j)
return res
```
## Q3
```
def fonction_coloriage ( matrice, i0, j0) :
    # step 1
acolorier = [ ( i0, j0 ) ]
while len (acolorier) > 0 :
        # step 2
for i,j in acolorier :
matrice [i][j] = 2
        # step 3
acolorier = tous_voisins_a_valeurs_nulle ( matrice, acolorier )
        # remove duplicates, otherwise this takes too much time
acolorier = list(set(acolorier))
```
## Q5
### version 1
```
def surface_coloriee (matrice) :
surface = 0
for line in matrice :
for c in line :
if c == 2 : surface += 1
return surface
```
### version 4
```
def fonction_coloriage_1000 ( matrice, i0, j0) :
acolorier = [ ( i0, j0 ) ]
    nb = 0 # added line
while len (acolorier) > 0 :
for i,j in acolorier :
matrice [i][j] = 2
            nb += 1 # added line
        if nb > 1000 : break # added line
acolorier = tous_voisins_a_valeurs_nulle ( matrice, acolorier )
d = { }
for i,j in acolorier : d [i,j] = 0
acolorier = [ (i,j) for i,j in d ]
```
## Q4: spiral
### version 1
```
matrice = construit_matrice(100)
fonction_coloriage (matrice, 53, 53)
dessin_matrice(matrice)
surface_coloriee (matrice)
```
### version 4
```
matrice = construit_matrice(100)
fonction_coloriage_1000 (matrice, 53, 53)
dessin_matrice(matrice)
surface_coloriee (matrice)
```
| github_jupyter |
```
import numpy as np
import pandas as pd
from finance_ml.model_selection import PurgedKFold
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.ensemble import BaggingClassifier
from sklearn.pipeline import Pipeline
class MyPipeline(Pipeline):
def fit(self, X, y, sample_weight=None, **fit_params):
if sample_weight is not None:
fit_params[self.steps[-1][0] + '__sample_weight'] = sample_weight
return super(MyPipeline, self).fit(X, y, **fit_params)
def clf_hyper_fit(feat, label, t1, pipe_clf, search_params, scoring=None,
n_splits=3, bagging=[0, None, 1.],
rnd_search_iter=0, n_jobs=-1, pct_embargo=0., **fit_params):
    # Set default value for scoring
if scoring is None:
if set(label.values) == {0, 1}:
scoring = 'f1'
else:
scoring = 'neg_log_loss'
    # Hyperparameter search on training data
inner_cv = PurgedKFold(n_splits=n_splits, t1=t1, pct_embargo=pct_embargo)
if rnd_search_iter == 0:
search = GridSearchCV(estimator=pipe_clf, param_grid=search_params,
scoring=scoring, cv=inner_cv, n_jobs=n_jobs, iid=False)
else:
search = RandomizedSearchCV(estimator=pipe_clf, param_distributions=search_params,
scoring=scoring, cv=inner_cv, n_jobs=n_jobs, iid=False)
best_pipe = search.fit(feat, label, **fit_params).best_estimator_
    # Fit validated model on the entirety of the data
if bagging[0] > 0:
bag_est = BaggingClassifier(base_estimator=MyPipeline(best_pipe.steps),
n_estimators=int(bagging[0]), max_samples=float(bagging[1]),
max_features=float(bagging[2]), n_jobs=n_jobs)
        bag_est = bag_est.fit(feat, label,
                              sample_weight=fit_params[bag_est.base_estimator.steps[-1][0] + '__sample_weight'])
best_pipe = Pipeline([('bag', bag_est)])
return best_pipe
from scipy.stats import rv_continuous
class LogUniformGen(rv_continuous):
def _cdf(self, x):
return np.log(x / self.a) / np.log(self.b / self.a)
def log_uniform(a=1, b=np.exp(1)):
return LogUniformGen(a=a, b=b, name='log_uniform')
a = 1e-3
b = 1e3
size = 10000
vals = log_uniform(a=a, b=b).rvs(size=size)
vals.shape
```
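As a quick sanity check (an added sketch, re-defining the class so the cell is self-contained): the base-10 logarithm of log-uniform samples on $[10^{-3}, 10^3]$ should stay within $[-3, 3]$ and look roughly uniform there.

```python
import numpy as np
from scipy.stats import rv_continuous

class LogUniformGen(rv_continuous):
    # CDF of the log-uniform distribution on the support [a, b]
    def _cdf(self, x):
        return np.log(x / self.a) / np.log(self.b / self.a)

vals = LogUniformGen(a=1e-3, b=1e3, name='log_uniform').rvs(size=500, random_state=0)
logs = np.log10(vals)
print(logs.min(), logs.max())
```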
# 9.1
```
from finance_ml.datasets import get_cls_data
X, label = get_cls_data(n_features=10, n_informative=5, n_redundant=0, n_samples=10000)
print(X.head())
print(label.head())
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
name = 'svc'
params_grid = {name + '__C': [1e-2, 1e-1, 1, 10, 100], name + '__gamma': [1e-2, 1e-1, 1, 10, 100]}
kernel = 'rbf'
clf = SVC(kernel=kernel, probability=True)
pipe_clf = Pipeline([(name, clf)])
fit_params = dict()
clf = clf_hyper_fit(X, label['bin'], t1=label['t1'], pipe_clf=pipe_clf, scoring='neg_log_loss',
search_params=params_grid, n_splits=3, bagging=[0, None, 1.],
rnd_search_iter=0, n_jobs=-1, pct_embargo=0., **fit_params)
```
# 9.2
```
name = 'svc'
params_dist = {name + '__C': log_uniform(a=1e-2, b=1e2),
name + '__gamma': log_uniform(a=1e-2, b=1e2)}
kernel = 'rbf'
clf = SVC(kernel=kernel, probability=True)
pipe_clf = Pipeline([(name, clf)])
fit_params = dict()
clf = clf_hyper_fit(X, label['bin'], t1=label['t1'], pipe_clf=pipe_clf, scoring='neg_log_loss',
search_params=params_grid, n_splits=3, bagging=[0, None, 1.],
rnd_search_iter=25, n_jobs=-1, pct_embargo=0., **fit_params)
```
| github_jupyter |
#Create the environment
```
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My Drive/ESoWC
import pandas as pd
import xarray as xr
import numpy as np
import pandas as pd
from sklearn import preprocessing
import seaborn as sns
#Our class
from create_dataset.make_dataset import CustomDataset
fn_weather_05_19 = 'Data/05_2019_weather_and_CO.nc'
fn_weather_07_19 = 'Data/07_2019_weather_and_CO.nc'
fn_weather_05_20 = 'Data/05_2020_weather_and_CO.nc'
fn_weather_06_20 = 'Data/06_2020_weather_and_CO.nc'
fn_land = 'Data/land_cover_data.nc'
fn_conc = 'Data/totalcolConcentretations_featured.nc'
fn_traffic = 'Data/emissions_traffic_hourly_merged.nc'
```
##Land
```
land_instance = CustomDataset(fn_land)
land_instance.resample("1H")
land_fixed = land_instance.get_dataset()
land_fixed = land_fixed.drop_vars('NO emissions') #They are already in the weather dataset
land_fixed = land_fixed.transpose('latitude','longitude','time')
land_fixed
```
##Conc
```
conc_fidex = xr.open_dataset(fn_conc)
conc_fidex
```
##Traffic
```
traffic_instance = CustomDataset(fn_traffic)
traffic_ds= traffic_instance.get_dataset()
traffic_ds=traffic_ds.drop_vars('emissions')
lat_bins = np.arange(43,51.25,0.25)
lon_bins = np.arange(4,12.25,0.25)
traffic_ds = traffic_ds.sortby(['latitude','longitude','hour'])
traffic_ds = traffic_ds.interp(latitude=lat_bins, longitude=lon_bins, method="linear")
days = np.arange(1,32,1)
traffic_ds=traffic_ds.expand_dims({'Days':days})
trafic_df = traffic_ds.to_dataframe()
trafic_df = trafic_df.reset_index()
trafic_df['time'] = (pd.to_datetime(trafic_df['Days']-1,errors='ignore',
unit='d',origin='2019-05') +
pd.to_timedelta(trafic_df['hour'], unit='h'))
trafic_df=trafic_df.drop(columns=['Days', 'hour'])
trafic_df = trafic_df.set_index(['latitude','longitude','time'])
traffic_fixed = trafic_df.to_xarray()
traffic_fixed = traffic_fixed.transpose('latitude','longitude','time')
traffic_fixed
```
#05_2019
```
tot_dataset_05_19 = xr.open_dataset('Data/05_2019_dataset_complete_for_model_CO.nc')
tot_dataset_05_19 = tot_dataset_05_19.rename({'EMISSIONS_2019':'EMISSIONS'})
tot_dataset_05_19
```
#07_2019
##Land
```
land_fixed_07_19 = land_fixed
land_df = land_fixed_07_19.to_dataframe()
land_df = land_df.reset_index()
land_df.time = land_df.time + pd.DateOffset(months=2)
land_df = land_df.set_index(['latitude','longitude','time'])
land_fixed_07_19 = land_df.to_xarray()
land_fixed_07_19
```
##Weather
```
weather_07_19 = xr.open_dataset('Data/07_2019_weather_and_CO_for_model.nc')
# This variable is too strongly correlated with tcw
weather_fixed_07_19 = weather_07_19.drop_vars('tcwv')
weather_fixed_07_19 = weather_fixed_07_19.rename({'EMISSIONS_2019':'EMISSIONS'})
weather_fixed_07_19 = weather_fixed_07_19.transpose('latitude','longitude','time')
weather_fixed_07_19
```
##Conc
```
conc_fidex_07_19 = conc_fidex
conc_df = conc_fidex_07_19.to_dataframe()
conc_df = conc_df.reset_index()
conc_df.time = conc_df.time + pd.DateOffset(months=2)
conc_df = conc_df.set_index(['latitude','longitude','time'])
conc_fidex_07_19 = conc_df.to_xarray()
conc_fidex_07_19
```
##Traffic
```
traffic_fixed_07_19 = traffic_fixed
traffic_df = traffic_fixed_07_19.to_dataframe()
traffic_df = traffic_df.reset_index()
traffic_df.time = traffic_df.time + pd.DateOffset(months=2)
traffic_df = traffic_df.set_index(['latitude','longitude','time'])
traffic_fixed_07_19 = traffic_df.to_xarray()
traffic_fixed_07_19
```
##Merge
```
tot_dataset_07_19 = weather_fixed_07_19.merge(land_fixed_07_19)
tot_dataset_07_19 = tot_dataset_07_19.merge(conc_fidex_07_19)
tot_dataset_07_19 = tot_dataset_07_19.merge(traffic_fixed_07_19)
tot_dataset_07_19
```
#05_2020
##Land
```
land_fixed_05_20 = land_fixed
land_df = land_fixed_05_20.to_dataframe()
land_df = land_df.reset_index()
land_df.time = land_df.time + pd.DateOffset(years=1)
land_df = land_df.set_index(['latitude','longitude','time'])
land_fixed_05_20 = land_df.to_xarray()
land_fixed_05_20
```
##Weather
```
weather_05_20 = xr.open_dataset('Data/05_2020_weather_and_CO_for_model.nc')
# This variable is too strongly correlated with tcw
weather_fixed_05_20 = weather_05_20.drop_vars('tcwv')
weather_fixed_05_20 = weather_fixed_05_20.rename({'EMISSIONS_2020':'EMISSIONS'})
weather_fixed_05_20 = weather_fixed_05_20.transpose('latitude','longitude','time')
weather_fixed_05_20
```
##Conc
```
conc_fidex_05_20 = conc_fidex
conc_df = conc_fidex_05_20.to_dataframe()
conc_df = conc_df.reset_index()
conc_df.time = conc_df.time + pd.DateOffset(years=1)
conc_df = conc_df.set_index(['latitude','longitude','time'])
conc_fidex_05_20 = conc_df.to_xarray()
conc_fidex_05_20
```
##Traffic
```
traffic_fixed_05_20 = traffic_fixed
traffic_df = traffic_fixed_05_20.to_dataframe()
traffic_df = traffic_df.reset_index()
traffic_df.time = traffic_df.time + pd.DateOffset(years=1)
traffic_df = traffic_df.set_index(['latitude','longitude','time'])
traffic_fixed_05_20 = traffic_df.to_xarray()
traffic_fixed_05_20
```
##Merge
```
tot_dataset_05_20 = weather_fixed_05_20.merge(land_fixed_05_20)
tot_dataset_05_20 = tot_dataset_05_20.merge(conc_fidex_05_20)
tot_dataset_05_20 = tot_dataset_05_20.merge(traffic_fixed_05_20)
tot_dataset_05_20
```
#Merge
```
tot_dataset = tot_dataset_05_19.merge(tot_dataset_07_19)
tot_dataset = tot_dataset.merge(tot_dataset_05_20)
tot_dataset
```
#Save
```
tot_dataset.to_netcdf('Data/3_months_dataset_complete_for_model_CO.nc', 'w', 'NETCDF4')
```
| github_jupyter |
# Gibbs sampling for Dirichlet process-based mixtures
In this notebook, I demonstrate the naive sampler in Orbanz' lecture notes and McEachern's collapsed sampler for DP mixtures.
```
%pylab inline
figsize(10,5)
# Import necessary libraries
import numpy as np
import numpy.random as npr
import numpy.linalg as npl
import matplotlib.pyplot as plt
import scipy.stats as spst
import time as time
import pickle as pkl
```
For simplicity, we consider one-dimensional mixtures of Gaussians with known unit variance. The model thus reads
$$
\xi = \sum w_i\delta_{\mu_i} \sim \text{DP}(\alpha, G)
$$
$$
x_1,\dots,x_n \sim \sum w_i\mathcal{N}(\cdot\vert \mu_i,1) \text{ i.i.d. }
$$
We also assume the base measure to be Gaussian $\mathcal{N}(0,\sigma^2)$, so that for $\mu,x\in\mathbb{R}$, we have the following conjugation property
$$
G(\phi) p(x\vert \phi) = \mathcal{N}(x\vert \phi,1)\mathcal{N}(\phi\vert 0,\sigma^2) = \mathcal{N}(x\vert 0,1+\sigma^2) \mathcal{N}\left(\phi\bigg\vert\frac{\sigma^2 x}{1+\sigma^2}, \frac{\sigma^2}{1+\sigma^2}\right).
$$
```
# manually checking this computation
N = lambda x,mu,s2: spst.norm(loc=mu,scale=np.sqrt(s2)).pdf(x)
phi = npr.rand()
x = npr.rand()
s2 = 3
a = s2/(1+s2)
print(N(x,phi,1)*N(phi,0,s2), N(phi,a*x,a)*N(x,0,1+s2))
```
## Naive "Polya-urn" Gibbs sampler
```
class NaiveGibbsDPM:
"""
Systematic naive Gibbs sampling
"""
def __init__(self, alpha, sigma, data):
self.alpha = alpha
self.sigma = sigma
self.G = spst.norm(scale=sigma)
self.lhd = lambda x, m: spst.norm(loc=m,scale=1).pdf(x)
self.data = data
self.n = len(data)
self.mu = [sigma*npr.randn() for _ in range(self.n)]
self.mus = [self.mu.copy()] # storing the history of the MCMC chain
        self.Z = lambda y: N(y, 0, 1 + sigma**2)  # marginal likelihood of one point, from the conjugacy property

    def step(self):
        """
        Gibbs step
        """
        for i in range(self.n):
            u = npr.rand()
            Zi = self.Z(self.data[i])
            normalization = self.alpha*Zi + np.sum([self.lhd(self.data[i], mu)
                                                    for j, mu in enumerate(self.mu) if j != i])
            if u < self.alpha*Zi/normalization:
                # draw a fresh atom from the posterior given data[i] alone
                s = 1/np.sqrt(1 + 1/self.sigma**2)
                self.mu[i] = spst.norm(loc=s**2*self.data[i], scale=s).rvs()
            else:
                # otherwise copy one of the other atoms, chosen proportionally to its likelihood
                p = [self.lhd(self.data[i], mu) if j != i else 0.
                     for j, mu in enumerate(self.mu)]  # multinomial probabilities
                p = np.array(p)/np.sum(p)
                ind = np.where(npr.multinomial(1, p))[0][0]
                self.mu[i] = self.mu[ind]
        self.mus.append(self.mu.copy())
```
We need some data
```
npr.seed(1)
n = 100
data = [-2+npr.randn() for _ in range(n//4)] + [1+npr.randn() for _ in range(n//2)] + [5+npr.randn() for _ in range(n//4)]
print(n==len(data))
plt.hist(data, histtype="stepfilled", alpha=.3)
plt.show()
```
Now let's run our naive Gibbs sampler.
```
alpha = 1.
sigma = 3.
numSteps = 100
tic = time.time()
gibbs = NaiveGibbsDPM(alpha,sigma,data)
for t in range(numSteps):
if np.mod(t, numSteps/10)==0:
print(100*t/numSteps, "%")
gibbs.step()
print("Time elapsed:", (time.time()-tic)/60, "min")
# optional: save results
jobId = "n="+str(n)+"_alpha="+str(alpha)+"sigma="+str(sigma)
with open("gibbs_"+jobId+".pkl", 'wb') as f:
pkl.dump(gibbs.mus, f)
# optional: load results
jobId = "n="+str(n)+"_alpha="+str(alpha)+"sigma="+str(sigma)
with open("gibbs_"+jobId+".pkl", 'rb') as f:
mus = pkl.load(f) # careful, later cells work on gibbs.mus
gibbs = NaiveGibbsDPM(alpha,sigma,data)
gibbs.mus = mus
# Let us visualize the samples
for i in range(len(gibbs.mus)):
u, w = np.unique(gibbs.mus[i], return_counts=1)
plt.scatter([i]*len(u),u,marker='o',s=1.0*w)
plt.xlabel("Gibbs iterations")
plt.ylabel("$\mu$")
plt.grid(True)
plt.show()
```
What can you detect here? How would you suggest improving the sampler?
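One way to quantify what you see (an added sketch, not from the original notebook) is the autocorrelation of a summary statistic of the chain, for instance the number of components per iteration. High autocorrelation at small lags means the chain moves slowly between states; typical remedies are collapsing out the atoms (as in McEachern's sampler below) or adding split-merge moves.

```python
import numpy as np

def autocorr(chain, lag):
    # empirical lag-k autocorrelation of a 1-d chain
    c = np.asarray(chain, dtype=float)
    c = c - c.mean()
    return float(np.dot(c[:-lag], c[lag:]) / np.dot(c, c))

# illustration on a slowly mixing chain (a random walk)
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(1000))
print(autocorr(walk, 1))  # close to 1: successive states are highly correlated
```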
```
# Let us check the cardinality of each mu
cards = []
for i in range(len(gibbs.mus)):
u = np.unique(gibbs.mus[i])
cards.append(len(u))
plt.plot(cards, label="number of components")
plt.plot([0,numSteps], [3, 3], label="true")
plt.grid(True)
plt.xlabel("Gibbs iterations")
plt.legend()
plt.show()
```
Exercise:
* Should we be worried that the actual value used in the simulation (3 components) is never visited?
* What happens if you change $\alpha$?
```
# First plot the data
plt.hist(data, alpha=.3, histtype='stepfilled')
m = np.min(data)
M = np.max(data)
delta = M-m
xPlot = np.linspace(m-.5*delta,M+.5*delta)
# Now plot a few samples
colors = ['red', 'orange', 'green', 'green', 'green']
styles = ['-', '-', '-', '--', '-.']
numIters = len(gibbs.mus)
iters = [0, 50, numIters//2, 3*numIters//4, -1]  # we will plot the mu's for these five iterations
for i, it in enumerate(iters):
    color = colors[i]
    style = styles[i]
    u, w = np.unique(gibbs.mus[it], return_counts=True)
    for j in range(len(u)):
        plt.plot(xPlot, w[j]*gibbs.lhd(xPlot, u[j]), color=color, linestyle=style)
plt.show()
```
Exercise: How would you use this posterior sample to cluster the observations?
```
# Now we investigate the growth of the number of tables with arriving customers
mu = gibbs.mus[-3]
u, v, w = np.unique(mu, return_inverse=1, return_counts=1)
print("This sample has cardinality", len(u))
tableNumbers = []
tables = []
# we first label tables as they first appear
for i in range(gibbs.n):
    if v[i] not in tableNumbers:
        tableNumbers.append(v[i])
tableDict = dict(zip(tableNumbers,range(gibbs.n)))
print(tableDict)
# now we can plot the tables chosen by arriving customers
for i in range(gibbs.n):
    plt.plot(i, tableDict[v[i]], 's', color='b')
plt.xlabel("customers (data points)")
plt.ylabel("tables (clusters)")
plt.show()
```
What curve would you fit to #tables vs #customers? Why?
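Hint: under a Chinese restaurant process with concentration $\alpha$, the expected number of tables after $n$ customers grows like $\alpha \log n$, so a logarithmic curve is the natural candidate. As a minimal sketch (simulating a fresh CRP for illustration rather than reusing the `gibbs` object above), one can check the fit with `np.polyfit`:

```python
import numpy as np

def simulate_crp(n, alpha, rng):
    """Simulate a Chinese restaurant process; return #tables after each customer."""
    counts = []          # number of customers at each table
    num_tables = []
    for _ in range(n):   # a new customer arrives
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)      # the customer opens a new table
        else:
            counts[k] += 1        # the customer joins table k
        num_tables.append(len(counts))
    return np.array(num_tables)

rng = np.random.default_rng(0)
alpha = 1.0
tables = simulate_crp(2000, alpha, rng)

# Fit #tables ~ a*log(n) + b; a should be on the order of alpha
n = np.arange(1, len(tables) + 1)
a, b = np.polyfit(np.log(n), tables, 1)
print("fitted slope a = %.2f (theory: alpha = %.2f)" % (a, alpha))
```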
## McEachern's sampler
Exercise: implement McEachern's sampler as seen in class
## Variational approaches
Exercise:
* Implement a variational approach to inference for DP mixtures.
* You can also check out the variational approaches proposed by toolboxes like [bnpy](https://github.com/bnpy/bnpy).
* If you feel datasciency, check out general-purpose probabilistic programming languages like [Edward](http://edwardlib.org/), which also provide tools for DP mixtures.
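For reference, most variational approximations for DP mixtures start from a truncated stick-breaking (GEM) representation of the Dirichlet process. A minimal sketch of sampling from such a truncated prior (the truncation level `T` and the Gaussian base measure are illustrative assumptions):

```python
import numpy as np

def truncated_stick_breaking(alpha, T, rng):
    """Sample mixture weights from a truncated GEM(alpha) stick-breaking prior."""
    betas = rng.beta(1.0, alpha, size=T)
    betas[-1] = 1.0   # truncation: the last stick takes all remaining mass
    # remaining[k] = product of (1 - beta_j) for j < k
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining   # weights sum to 1 by the telescoping product

rng = np.random.default_rng(0)
weights = truncated_stick_breaking(alpha=1.0, T=20, rng=rng)
atoms = rng.normal(0.0, 3.0, size=20)   # atoms drawn from a N(0, 3^2) base measure
print("sum of weights:", weights.sum())
```

A variational algorithm then optimizes free parameters over this truncated family rather than sampling.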
# Code Reuse
Let’s put what we learned about code reuse all together.
<br><br>
First, let’s look back at **inheritance**. Run the following cell that defines a generic `Animal` class.
```
class Animal:
    name = ""
    category = ""

    def __init__(self, name):
        self.name = name

    def set_category(self, category):
        self.category = category
```
What we have is not enough to do much -- yet. That’s where you come in.
<br><br>
In the next cell, define a `Turtle` class that inherits from the `Animal` class. Then go ahead and set its category. For instance, a turtle is generally considered a reptile. Although modern cladistics call this categorization into question, for purposes of this exercise we will say turtles are reptiles!
```
class Turtle(Animal):
    category = "reptile"
```
Run the following cell to check whether you correctly defined your `Turtle` class and set its category to reptile.
```
print(Turtle.category)
```
Was the output of the above cell reptile? If not, go back and edit your `Turtle` class making sure that it inherits from the `Animal` class and its category is properly set to reptile. Be sure to re-run that cell once you've finished your edits. Did you get it? If so, great!
Next, let’s practice **composition** a little bit. This one will require a second type of `Animal` that is in the same category as the first. For example, since you already created a `Turtle` class, go ahead and create a `Snake` class. Don’t forget that it also inherits from the `Animal` class and that its category should be set to reptile.
```
class Snake(Animal):
    category = "reptile"
```
Now, let’s say we have a large variety of `Animal`s (such as turtles and snakes) in a zoo. Below we have the `Zoo` class. We’re going to use it to organize our various `Animal`s. Remember, inheritance says a `Turtle` is an `Animal`, but a `Zoo` is not an `Animal` and an `Animal` is not a `Zoo` -- though they are related to one another.
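The is-a relationship (inheritance) versus the has-a relationship (composition) can be checked directly with `isinstance`. This sketch restates minimal versions of the classes so it is self-contained:

```python
class Animal:
    name = ""
    category = ""
    def __init__(self, name):
        self.name = name

class Turtle(Animal):          # a Turtle *is an* Animal (inheritance)
    category = "reptile"

class Zoo:                     # a Zoo *has* Animals; it is not one (composition)
    def __init__(self):
        self.current_animals = {}

turtle = Turtle("Tim")
zoo = Zoo()
print(isinstance(turtle, Animal))  # True: inheritance gives the is-a relationship
print(isinstance(zoo, Animal))     # False: a Zoo merely contains Animals
```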
Fill in the blanks of the `Zoo` class below so that you can use **zoo.add_animal( )** to add instances of the `Animal` subclasses you created above. Once you’ve added them all, you should be able to use **zoo.total_of_category( )** to tell you exactly how many individual `Animal` types the `Zoo` has for each category! Be sure to run the cell once you've finished your edits.
```
class Zoo:
    def __init__(self):
        self.current_animals = {}

    def add_animal(self, animal):
        self.current_animals[animal.name] = animal.category

    def total_of_category(self, category):
        result = 0
        for animal in self.current_animals.values():
            if animal == category:
                result += 1
        return result

zoo = Zoo()
```
Run the following cell to check whether you properly filled in the blanks of your `Zoo` class.
```
turtle = Turtle("Turtle") #create an instance of the Turtle class
snake = Snake("Snake") #create an instance of the Snake class
zoo.add_animal(turtle)
zoo.add_animal(snake)
print(zoo.total_of_category("reptile")) #how many zoo animal types in the reptile category
```
Was the output of the above cell 2? If not, go back and edit the `Zoo` class making sure to fill in the blanks with the appropriate attributes. Be sure to re-run that cell once you've finished your edits.
<br>
Did you get it? If so, perfect! You have successfully defined your `Turtle` and `Snake` subclasses as well as your `Zoo` class. You are all done with this notebook. Great work!
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# ResNet50 Image Classification using ONNX and AzureML
This example shows how to deploy the ResNet50 ONNX model as a web service using Azure Machine Learning services and the ONNX Runtime.
## What is ONNX
ONNX is an open format for representing machine learning and deep learning models. ONNX enables open and interoperable AI: data scientists and developers can use the tools of their choice without worrying about lock-in, and retain the flexibility to deploy to a variety of platforms. ONNX is developed and supported by a community of partners including Microsoft, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai).
## ResNet50 Details
ResNet classifies the major object in an input image into a set of 1000 pre-defined classes. More information about the ResNet50 model and how it was created can be found in the [ONNX Model Zoo github](https://github.com/onnx/models/tree/master/vision/classification/resnet).
## Prerequisites
To make the best use of your time, make sure you have done the following:
* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning
* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) to:
    * install the AML SDK
    * create a workspace and its configuration file (config.json)
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
```
#### Download pre-trained ONNX model from ONNX Model Zoo.
Download the [ResNet50v2 model and test data](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v2/resnet50v2.tar.gz) and extract it in the same folder as this tutorial notebook.
```
import urllib.request
onnx_model_url = "https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v2/resnet50v2.tar.gz"
urllib.request.urlretrieve(onnx_model_url, filename="resnet50v2.tar.gz")
!tar xvzf resnet50v2.tar.gz
```
## Deploying as a web service with Azure ML
### Load your Azure ML workspace
We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook.
```
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.location, ws.resource_group, sep = '\n')
```
### Register your model with Azure ML
Now we upload the model and register it in the workspace.
```
from azureml.core.model import Model
model = Model.register(model_path = "resnet50v2/resnet50v2.onnx",
model_name = "resnet50v2",
tags = {"onnx": "demo"},
description = "ResNet50v2 from ONNX Model Zoo",
workspace = ws)
```
#### Displaying your registered models
You can optionally list out all the models that you have registered in this workspace.
```
models = ws.models
for name, m in models.items():
    print("Name:", name, "\tVersion:", m.version, "\tDescription:", m.description, m.tags)
```
### Write scoring file
We are now going to deploy our ONNX model on Azure ML using the ONNX Runtime. We begin by writing a score.py file that will be invoked by the web service call. The `init()` function is called once when the container is started so we load the model using the ONNX Runtime into a global session object.
```
%%writefile score.py
import json
import time
import sys
import os
import numpy as np # we're going to use numpy to process input and output data
import onnxruntime # to inference ONNX models, we use the ONNX Runtime
def softmax(x):
    x = x.reshape(-1)
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)

def init():
    global session
    # AZUREML_MODEL_DIR is an environment variable created during deployment.
    # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
    # For multiple models, it points to the folder containing all deployed models (./azureml-models)
    model = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'resnet50v2.onnx')
    session = onnxruntime.InferenceSession(model, None)

def preprocess(input_data_json):
    # convert the JSON data into the tensor input
    img_data = np.array(json.loads(input_data_json)['data']).astype('float32')
    # normalize
    mean_vec = np.array([0.485, 0.456, 0.406])
    stddev_vec = np.array([0.229, 0.224, 0.225])
    norm_img_data = np.zeros(img_data.shape).astype('float32')
    for i in range(img_data.shape[0]):
        norm_img_data[i, :, :] = (img_data[i, :, :]/255 - mean_vec[i]) / stddev_vec[i]
    return norm_img_data

def postprocess(result):
    return softmax(np.array(result)).tolist()

def run(input_data_json):
    try:
        start = time.time()
        # load in our data, which is expected as an NCHW 224x224 image
        input_data = preprocess(input_data_json)
        input_name = session.get_inputs()[0].name  # get the id of the first input of the model
        result = session.run([], {input_name: input_data})
        end = time.time()  # stop timer
        return {"result": postprocess(result),
                "time": end - start}
    except Exception as e:
        result = str(e)
        return {"error": result}
```
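Before deploying, the scoring helpers can be sanity-checked locally. This sketch restates the `softmax` helper from `score.py` (so it runs outside the container) and verifies that it produces a valid probability distribution:

```python
import numpy as np

def softmax(x):
    # same implementation as in score.py above
    x = x.reshape(-1)
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)

logits = np.array([[2.0, 1.0, 0.1]])
probs = softmax(logits)
print("sums to one:", np.isclose(probs.sum(), 1.0))   # softmax outputs a distribution
print("argmax preserved:", int(np.argmax(probs)) == 0)  # ordering of scores is kept
```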
### Create inference configuration
First we create a YAML file that specifies which dependencies we would like to see in our container.
```
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(pip_packages=["numpy", "onnxruntime", "azureml-core", "azureml-defaults"])
with open("myenv.yml", "w") as f:
    f.write(myenv.serialize_to_string())
```
Create the inference configuration object. Please note that you must include azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.
```
from azureml.core.model import InferenceConfig
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
```
### Deploy the model
```
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'demo': 'onnx'},
description = 'web service for ResNet50 ONNX model')
```
The following cell will likely take a few minutes to run.
```
from random import randint
aci_service_name = 'onnx-demo-resnet50'+str(randint(0,100))
print("Service", aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
```
In case the deployment fails, you can check the logs. Make sure to delete your aci_service before trying again.
```
if aci_service.state != 'Healthy':
    # run this command for debugging
    print(aci_service.get_logs())
    aci_service.delete()
```
## Success!
If you've made it this far, you've deployed a working web service that does image classification using an ONNX model. You can get the URL for the webservice with the code below.
```
print(aci_service.scoring_uri)
```
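As a sketch of how a client could call the service: assuming the JSON payload format expected by `preprocess` in `score.py` above (`{"data": ...}` with a 3x224x224 CHW image), the standard library suffices. The `score_image` helper below is illustrative, not part of the AML SDK:

```python
import json
import urllib.request
import numpy as np

def score_image(scoring_uri, img_chw):
    """POST a CHW float image (as JSON) to the deployed service; return the JSON reply."""
    payload = json.dumps({"data": np.asarray(img_chw, dtype=float).tolist()}).encode()
    req = urllib.request.Request(scoring_uri, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example call against the service deployed above (requires a live endpoint):
# result = score_image(aci_service.scoring_uri, np.zeros((3, 224, 224)))
```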
When you are eventually done using the web service, remember to delete it.
```
aci_service.delete()
```
```
%matplotlib inline
import sys
import os
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
import json
from datetime import datetime
import matplotlib.pyplot as plt
from matplotlib import gridspec
COLORS = [(0.12109375, 0.46484375, 0.703125),
(0.99609375, 0.49609375, 0.0546875),
(0.171875, 0.625, 0.171875),
(0.8359375, 0.15234375, 0.15625),
(0.578125, 0.40234375, 0.73828125),
(0.546875, 0.3359375, 0.29296875),
(0.88671875, 0.46484375, 0.7578125),
(0.49609375, 0.49609375, 0.49609375),
(0.734375, 0.73828125, 0.1328125),
(0.08984375, 0.7421875, 0.80859375)]
import calendar
# Set font sizes
SMALL_SIZE = 16
MEDIUM_SIZE = 18
BIGGER_SIZE = 20
from matplotlib import rcParams
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
CARBON_INTENSITY = {"biogas":18, "biomass":18, "geo":42, "hydro":4,
"imports":428, "nuclear":16, "smhydro":4, "solarpv":46, "solarth":22,
"thermal":469, "wind":12}
def loadCAISO(year):
    dfp = pd.read_csv(os.path.join("../data/CAISO_DailyRenewablesWatch",
                                   'DailyRenewablesWatch_%d.csv' % year),
                      index_col=0, parse_dates=True)
    dfp.index -= pd.Timedelta('7h')
    cols = [col for col in dfp.columns if col != 'carbon']
    dfp["total"] = dfp[cols].sum(axis=1)
    dfp["carbon"] = dfp.apply(lambda row: sum(row[fuel]*CARBON_INTENSITY[fuel]
                                              for fuel in CARBON_INTENSITY)/1e3, axis=1)
    dfp["carbon_intensity"] = dfp.apply(lambda row: row["carbon"]*1e3/row["total"], axis=1)
    return dfp
df = pd.concat([loadCAISO(y) for y in [2015,2016,2017,2018]])
df.dropna(inplace=True)
df.loc[:, 'year'] = df.index.year
df.loc[:, 'month'] = df.index.month
df.loc[:, 'hour'] = df.index.hour
# Compute totals for dispatchable generation
cols = ["biogas", "biomass", "geo", "hydro", "imports", "nuclear", "smhydro", "thermal"]
df["total_D"] = df[cols].sum(axis=1)
df["carbon_D"] = df.apply(lambda row:sum(row[fuel]*CARBON_INTENSITY[fuel]
for fuel in cols)/1e3, axis=1)
f, ax = plt.subplots()
start = pd.to_datetime("2015-01-01")
end = pd.to_datetime("2016-01-01")
ax.plot(df[start:end].total_D.diff(), df[start:end].carbon_D.diff(), '.')
f, ax = plt.subplots()
for y in range(2015, 2019):
    start = pd.to_datetime("%d-01-01" % y)
    end = pd.to_datetime("%d-01-01" % (y+1))
    df_diff = df.diff().dropna()
    df_diff.hour = df_diff.index.hour
    sel = (df_diff.index > start) & (df_diff.index < end)
    lr = LinearRegression()
    lr.fit(df_diff[sel].total_D.values.reshape(-1, 1), df_diff[sel].carbon_D.values.reshape(-1, 1))
    print("%d: %.2f" % (y, 1000*lr.coef_))
    ax.plot(df_diff[sel].total_D, df_diff[sel].carbon_D, '.', label=str(y))
ax.legend()
# Scenario for 2025
df25 = loadCAISO(2016)
# Initial grid mix:
tot=0
print("initial grid mix")
for col in df25.columns:
    if col not in ['carbon', 'total', 'carbon_intensity']:
        prct = 100*df25[col].sum()/df25["total"].sum()
        print("\t%s: %.2f" % (col, prct))
        tot += prct
print("total: %.2f"%tot)
# generate scenarios
x = 2
print("max overgen power on grid (GW)")
print(min(df25.total - (1+x) * df25.solarpv)/1e3)
gencols = [col for col in df25.columns if col not in ['carbon', 'total', 'carbon_intensity']]
df_solar = df25.copy(deep=True)
# reduce thermal, but don't go negative
df_solar.thermal = df_solar.apply(
lambda row : max(row.thermal - x*row.solarpv, 0), axis=1)
# take all the solar
df_solar.solarpv = df_solar.apply(
lambda row : (1 + x) * row.solarpv, axis=1)
overgenP = df_solar.total - df_solar[gencols].sum(axis=1)
print("%d hours of curtailment" % (overgenP<0).sum())
curtailed = overgenP<0 # remember which hours could have been curtailed
overgen = overgenP.sum()
print("overgen energy: %.2f %% of total energy" % (-100 * overgen/df_solar.total.sum()))
print("Assuming imports can be reduced as well")
df_solar.imports = (df_solar.imports + overgenP)
df_solar.imports = df_solar.apply(
lambda row: max(row.imports, 0), axis=1)
overgenP = df_solar.total - df_solar[gencols].sum(axis=1)
overgen = overgenP.sum()
print("overgen energy: %.2f %% of total energy" % (-100 * overgen/df_solar.total.sum()))
# Redistribute the overgeneration on hours that have space
# Find space to put the solar
thresh = 350
space = df_solar.thermal > thresh # MW
df_solar["storage"] = 0.
gencols += ["storage"]
print("Redistributing %.2f MW on the %d timesteps that have at least %.2f MW in thermal generation" % (
-overgen / space.sum(), space.sum(), thresh))
df_solar.loc[space, "thermal"] += overgen / space.sum()
df_solar.loc[space, "storage"] -= overgen / space.sum()
# Store the solar
df_solar.solarpv += overgenP
df25 = df_solar
df25["year"] = df25.index.year
df25["hour"] = df25.index.hour
df25["carbon"] = df25.apply(lambda row:sum(row[fuel]*CARBON_INTENSITY[fuel]
for fuel in CARBON_INTENSITY)/1e3, axis=1)
df25["carbon_intensity"] = df25.apply(lambda row:row["carbon"]*1e3/row["total"], axis=1)
# 2025 grid mix
tot=0
print("2025 grid mix")
for col in df25.columns:
    if col not in ['carbon', 'total', 'carbon_intensity']:
        prct = 100*df25[col].sum()/df25["total"].sum()
        print("\t%s: %.2f" % (col, prct))
        tot += prct
print("total: %.2f"%tot)
# Compute totals for dispatchable generation in 2025
cols = ["biogas", "biomass", "geo", "hydro", "imports", "nuclear", "smhydro","thermal"]
df25["total_D"] = df25[cols].sum(axis=1)
df25["carbon_D"] = df25.apply(lambda row:sum(row[fuel]*CARBON_INTENSITY[fuel]
for fuel in cols)/1e3, axis=1)
f, ax = plt.subplots()
start = pd.to_datetime("2016-01-01")
end = pd.to_datetime("2017-01-01")
ax.plot(df25[start:end].total_D.diff(), df25[start:end].carbon_D.diff(), '.')
# Sanity - these are the hours that solar will be considered the marginal fuel
print(curtailed.sum())
print((df25.thermal == 0.).sum())
mefs = dict()
df_diff = df.diff().dropna()
df_diff["hour"] = df_diff.index.hour
for y in range(2015, 2019):
    mefs[y] = []
    start = pd.to_datetime("%d-01-01" % y)
    end = pd.to_datetime("%d-01-01" % (y+1))
    for h in range(24):
        sel = (df_diff.hour == h) & (df_diff.index > start) & (df_diff.index < end)
        lr = LinearRegression()
        lr.fit(df_diff[sel].total_D.values.reshape(-1, 1), df_diff[sel].carbon_D.values.reshape(-1, 1))
        mefs[y].append(1000 * lr.coef_[0][0])
# In 2025, solar is marginal when there was no more gas generation
df_diff = df25.diff().dropna()
df_diff["hour"] = df_diff.index.hour
mefs[2025] = []
for h in range(24):
    sel = (df_diff.hour == h)
    lr = LinearRegression()
    lr.fit(df_diff[sel].total_D.values.reshape(-1, 1), df_diff[sel].carbon_D.values.reshape(-1, 1))
    curtailed_frac = curtailed[curtailed.index.hour == h].sum()/(curtailed.index.hour == h).sum()
    mefs[2025].append(curtailed_frac * CARBON_INTENSITY["solarpv"]
                      + (1-curtailed_frac) * 1000 * lr.coef_[0][0])
mefs = pd.DataFrame(mefs)
aefs = dict()
for y in range(2015, 2019):
    grp = df.loc[pd.to_datetime("%d-01-01" % y):pd.to_datetime("%d-01-01" % (y+1)),
                 ["carbon_intensity", "year", "hour"]].groupby(["year", "hour"]).mean()
    aefs[y] = grp.loc[y, "carbon_intensity"].values
aefs[2025] = df25.loc[pd.to_datetime("2016-01-01"):pd.to_datetime("2017-01-01"),
["carbon_intensity", "year", "hour"]].groupby(["year", "hour"]).mean()\
.loc[2016, "carbon_intensity"].values
aefs = pd.DataFrame(aefs)
f, ax = plt.subplots()
ax.axvspan(0, 6, facecolor='b', alpha=0.05)
ax.axvspan(19, 23, facecolor='b', alpha=0.05)
ax.axvspan(6, 19, facecolor='y', alpha=0.05)
ax.text(10, 450, "DAYTIME")
ax.plot([], [], label="AEFs", color=(.33,.33,.33), marker='o')
ax.plot([], [], label="MEFs", color=(.33,.33,.33))
for i, y in enumerate(range(2015, 2019)):
    ax.plot(mefs[y], label=str(y), color=COLORS[i])
    ax.plot(aefs[y], label="__nolegend__", marker='o', color=COLORS[i])
i += 1  # next color, used for the 2025 scenario below
ax.plot(mefs[2025], label=str(2025), color=COLORS[i])
ax.plot(aefs[2025], label="__nolegend__", marker='o', color=COLORS[i])
ax.grid(True)
ax.legend(loc=2, bbox_to_anchor=(1.0, 0.8))
ax.set_xlim([0,23])
ax.set_ylim([0,500])
ax.set_xlabel('Hour of the day')
ax.set_title('(a) California: hourly AEFs and MEFs');
ax.set_ylabel('kg/MWh');
ax.set_yticks([0, 200, 400])
ax.set_yticks([CARBON_INTENSITY["solarpv"], CARBON_INTENSITY["thermal"]], minor=True)
ax.set_yticklabels(["Solar", "Gas"], minor=True)
# plt.savefig('figures/fig1a.pdf', bbox_inches='tight')
# plt.savefig('figures/fig1a.png', bbox_inches='tight')
# Save data for later use
mefs.to_csv('figures/CA_mefs.csv')
aefs.to_csv('figures/CA_aefs.csv')
# Compute yearly MEFs and AEFs for 2016, 2018 and 2025
mefs_Y = dict()
df_diff = df.diff().dropna()
df_diff["hour"] = df_diff.index.hour
for y in range(2015, 2019):
    mefs_Y[y] = []
    start = pd.to_datetime("%d-01-01" % y)
    end = pd.to_datetime("%d-01-01" % (y+1))
    sel = (df_diff.index > start) & (df_diff.index < end)
    lr = LinearRegression()
    lr.fit(df_diff[sel].total_D.values.reshape(-1, 1), df_diff[sel].carbon_D.values.reshape(-1, 1))
    mefs_Y[y].append(1000 * lr.coef_[0][0])
# In 2025, solar is marginal when there was no more gas generation
df_diff = df25.diff().dropna()
mefs_Y[2025] = []
lr = LinearRegression()
lr.fit(df_diff.total_D.values.reshape(-1,1), df_diff.carbon_D.values.reshape(-1,1))
curtailed_frac = curtailed.sum() / len(curtailed)
mefs_Y[2025].append(curtailed_frac * CARBON_INTENSITY["solarpv"]
+ (1-curtailed_frac) * 1000 * lr.coef_[0][0])
mefs_Y = pd.DataFrame(mefs_Y)
print("Yearly MEFs")
print(mefs_Y)
aefs_Y = dict()
for y in range(2015, 2019):
    aefs_Y[y] = [df.loc[pd.to_datetime("%d-01-01" % y):pd.to_datetime("%d-01-01" % (y+1)),
                        ["carbon_intensity", "year"]].groupby(["year"]).mean()
                 .loc[y, "carbon_intensity"]]
aefs_Y[2025] = [df25.loc[pd.to_datetime("2016-01-01"):pd.to_datetime("2017-01-01"),
["carbon_intensity", "year"]].groupby(["year"]).mean()\
.loc[2016, "carbon_intensity"]]
aefs_Y = pd.DataFrame(aefs_Y)
print("\nYearly AEFs")
print(aefs_Y)
```
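The MEF estimates above come from regressing hour-to-hour *changes* in emissions on changes in dispatchable generation. A minimal synthetic sketch of the mechanics (the true slope of 0.4 t/MWh is chosen for illustration, and `np.polyfit` stands in for sklearn's `LinearRegression` for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
true_mef = 0.4                                          # tCO2 per MWh, illustrative
gen = np.cumsum(rng.normal(0, 100, size=1000))          # dispatchable generation (MWh)
carbon = true_mef * gen + rng.normal(0, 5, size=1000)   # emissions (tCO2) with noise

# Regress first differences on first differences, as in the cells above
d_gen = np.diff(gen)
d_carbon = np.diff(carbon)
slope, intercept = np.polyfit(d_gen, d_carbon, 1)
print("estimated MEF: %.3f t/MWh (true: %.3f)" % (slope, true_mef))
```

Differencing removes slow-moving trends so the slope captures the marginal, rather than average, emissions response.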
# Case study using these AEFs and MEFs
```
# Split into different years
df["MEF"] = 0.
df25["MEF"] = 0.
df16 = df.loc[pd.to_datetime("2016-01-01"):pd.to_datetime("2017-01-01"),:].copy(deep=True)
df18 = df.loc[pd.to_datetime("2018-01-01"):pd.to_datetime("2019-12-01"),:].copy(deep=True)
for h in range(24):
    df16.loc[df16.hour == h, "MEF"] = mefs.loc[h, 2016]
    df18.loc[df18.hour == h, "MEF"] = mefs.loc[h, 2018]
    df25.loc[df25.hour == h, "MEF"] = mefs.loc[h, 2025]
# Hourly emissions analysis (in ktonnes)
# Calculate references (2016)
footprint_h_16 = aefs[2016].sum() * 1e-6 * 365
footprint_y_16 = aefs_Y.loc[0, 2016] * 8760 * 1e-6
# Choose scenario
def calcs(df, year, verb=0):
    # Scale wind and solar data to get generation
    df.loc[:, "wind_100"] = df.loc[:, "wind"] * len(df) / df.wind.sum()
    df.loc[:, "wind_50"] = 0.5 * df.loc[:, "wind"] * len(df) / df.wind.sum()
    df.loc[:, "solarpv_100"] = df.loc[:, "solarpv"] * len(df) / df.solarpv.sum()
    df.loc[:, "solarpv_50"] = 0.5 * df.loc[:, "solarpv"] * len(df) / df.solarpv.sum()

    # Hourly calcs
    footprint_h = 1 * df.carbon_intensity.sum() * 1e-6
    df.loc[:, "avoided100_s_h"] = df.solarpv_100 * (df.MEF - CARBON_INTENSITY['solarpv'])
    df.loc[:, "avoided100_w_h"] = df.wind_100 * (df.MEF - CARBON_INTENSITY['wind'])
    df.loc[:, "avoided5050_h"] = (df.solarpv_50 * (df.MEF - CARBON_INTENSITY['solarpv'])
                                  + df.wind_50 * (df.MEF - CARBON_INTENSITY['wind']))
    avoided100_s_h = np.nansum(df.avoided100_s_h) * 1e-6
    avoided100_w_h = np.nansum(df.avoided100_w_h) * 1e-6
    avoided5050_h = np.nansum(df.avoided5050_h) * 1e-6
    # Note: I multiply by 1MW because I am considering a 1MW constant load in this study
    df.loc[:, "footprint100_s_h"] = 1 * df.carbon_intensity - df.avoided100_s_h
    df.loc[:, "footprint100_w_h"] = 1 * df.carbon_intensity - df.avoided100_w_h
    df.loc[:, "footprint5050_h"] = 1 * df.carbon_intensity - df.avoided5050_h
    footprint_100_s_h = footprint_h - avoided100_s_h
    footprint_100_w_h = footprint_h - avoided100_w_h
    footprint_5050_h = footprint_h - avoided5050_h
    if verb > 0:
        print("Hourly")
        print("Emissions footprint: %g" % footprint_h)
        print("Avoided tons 100 %% solar: %g" % avoided100_s_h)
        print("Avoided tons 100 %% wind: %g" % avoided100_w_h)
        print("Avoided tons 50 %% wind, 50 %% solar: %g" % avoided5050_h)

    # Yearly calcs
    GRID_AVG_CARBON = aefs_Y.loc[0, year]  # df.carbon_intensity.mean()
    GRID_AVG_MEF = mefs_Y.loc[0, year]  # df.MEF.mean()
    footprint_y = GRID_AVG_CARBON * len(df.carbon_intensity) * 1e-6
    df.loc[:, "avoided100_s_y"] = df.solarpv_100 * (GRID_AVG_MEF - CARBON_INTENSITY['solarpv'])
    df.loc[:, "avoided100_w_y"] = df.wind_100 * (GRID_AVG_MEF - CARBON_INTENSITY['wind'])
    df.loc[:, "avoided5050_y"] = (df.solarpv_50 * (GRID_AVG_MEF - CARBON_INTENSITY['solarpv'])
                                  + df.wind_50 * (GRID_AVG_MEF - CARBON_INTENSITY['wind']))
    avoided100_s_y = np.nansum(df.avoided100_s_y) * 1e-6
    avoided100_w_y = np.nansum(df.avoided100_w_y) * 1e-6
    avoided5050_y = np.nansum(df.avoided5050_y) * 1e-6
    footprint_100_s_y = footprint_y - avoided100_s_y
    footprint_100_w_y = footprint_y - avoided100_w_y
    footprint_5050_y = footprint_y - avoided5050_y
    if verb > 0:
        print("\nYearly")
        print("Emissions footprint: %g" % footprint_y)
        print("Avoided tons 100 %% solar: %g" % avoided100_s_y)
        print("Avoided tons 100 %% wind: %g" % avoided100_w_y)
        print("Avoided tons 50 %% wind, 50 %% solar: %g" % avoided5050_y)

    # Summary dataframe to hold the results
    df_sum = pd.DataFrame(
        index=["Grid", "solar100", "wind100", "sw5050"],
        columns=["net_footprint_H", "credit_H", "red_H",
                 "net_footprint_Y", "credit_Y", "red_Y"])
    df_sum.loc["Grid", :] = [footprint_h, 0., (footprint_h_16-footprint_h)/footprint_h_16,
                             footprint_y, 0., (footprint_y_16-footprint_y)/footprint_y_16]
    df_sum.loc["solar100", :] = [footprint_100_s_h, avoided100_s_h, (footprint_h_16-footprint_100_s_h)/footprint_h_16,
                                 footprint_100_s_y, avoided100_s_y, (footprint_y_16-footprint_100_s_y)/footprint_y_16]
    df_sum.loc["wind100", :] = [footprint_100_w_h, avoided100_w_h, (footprint_h_16-footprint_100_w_h)/footprint_h_16,
                                footprint_100_w_y, avoided100_w_y, (footprint_y_16-footprint_100_w_y)/footprint_y_16]
    df_sum.loc["sw5050", :] = [footprint_5050_h, avoided5050_h, (footprint_h_16-footprint_5050_h)/footprint_h_16,
                               footprint_5050_y, avoided5050_y, (footprint_y_16-footprint_5050_y)/footprint_y_16]
    return df_sum
df_sum16 = calcs(df16, 2016)
df_sum18 = calcs(df18, 2018)
df_sum25 = calcs(df25, 2025)
df_sum16.to_csv('figures/df_sum16.csv')
df_sum16
df_sum18.to_csv('figures/df_sum18.csv')
df_sum18
df_sum25.to_csv('figures/df_sum25.csv')
df_sum25
```
# Figure 2
Changed this 20190429 to fit Joule figure requirements
```
# Set font sizes
SMALL_SIZE = 7
MEDIUM_SIZE = 8
BIGGER_SIZE = 9
# column sizes
cm_to_in = 0.393701
col_width3 = cm_to_in * 17.2
col_width2 = cm_to_in * 11.2
col_width1 = cm_to_in * 5.3
from matplotlib import rcParams
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = ['Avenir']
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
plt.rc('axes', linewidth=.5)     # width of the axes frame lines
plt.rc('xtick.minor', width=.5)  # width of the minor x tick marks
plt.rc('xtick.major', width=.5)  # width of the major x tick marks
plt.rc('ytick.minor', width=.5)  # width of the minor y tick marks
plt.rc('ytick.major', width=.5)  # width of the major y tick marks
def figure2(df_plot=df18, d=df_sum18, ylim1=[-8, 1200],
            ylim2=[-1200, 500], year="(a) 2018", save=True,
            fig_name="fig2a", ytxt1=800, ytxt2=-900):
    fig, ax = plt.subplots(figsize=(col_width3, 3))
    ax.xaxis.set_ticklabels([])
    ax.xaxis.set_ticks([])
    ax.yaxis.set_ticklabels([])
    ax.yaxis.set_ticks([])

    # These are in unitless percentages of the figure size. (0,0 is bottom left)
    width = 0.21
    height = 0.31
    left1 = 0.11
    left2 = 0.44
    left3 = 0.76
    bottom1 = 0.14
    bottom2 = 0.55
    height2 = 0.41
    lw = 1

    ax1 = fig.add_axes([left1, bottom1, width, height])
    ax2 = fig.add_axes([left1, bottom2, width, height])
    ax3 = fig.add_axes([left2, bottom1, width, height])
    ax4 = fig.add_axes([left2, bottom2, width, height])
    ax5 = fig.add_axes([left3, 0.45, width, height2])
    axes = [ax1, ax2, ax3, ax4]

    for x in axes:
        x.axvspan(0, 6, facecolor='b', alpha=0.05)
        x.axvspan(19, 23, facecolor='b', alpha=0.05)
        x.axvspan(6, 19, facecolor='y', alpha=0.05)
    for x in [ax2, ax4]:
        x.set_ylim(ylim1)
        x.set_ylabel('CO2 credit (kg)')
    for x in [ax1, ax3]:
        x.set_ylim(ylim2)
        x.set_ylabel('CO2 net foot. (kg)')
        x.set_xlabel('hour', labelpad=0)
        # x.set_yticks([0, -500, -1000])
    ax.text(0.5, 0.925, str(year), transform=ax.transAxes, fontsize=BIGGER_SIZE)

    df_plot["month"] = df_plot.index.month
    df_plot["hour"] = df_plot.index.hour
    grped = df_plot.loc[:, [
        "carbon_intensity", "avoided100_s_h", "avoided100_w_h",
        "avoided5050_h", "footprint100_s_h", "footprint100_w_h",
        "footprint5050_h", "month", "hour"]].groupby([
            "month", "hour"]).mean()

    for m, x in zip([1, 8], [ax2, ax4]):
        x.plot([0, 23], [0, 0], lw=lw, label="100% Grid", color=COLORS[0])
        x.plot(grped.loc[m, "avoided100_s_h"], lw=lw, label="100% Solar", color=COLORS[3])
        x.plot(grped.loc[m, "avoided100_w_h"], lw=lw, label="100% Wind", color=COLORS[2])
        x.plot(grped.loc[m, "avoided5050_h"], lw=lw, label="50/50", color=COLORS[1])
        x.text(12, ytxt1, "DAYTIME\n%s" % calendar.month_abbr[m].upper(), ha='center')
    for m, x in zip([1, 8], [ax1, ax3]):
        x.plot(grped.loc[m, "carbon_intensity"], lw=lw, label="Grid", color=COLORS[0])
        x.plot(grped.loc[m, "footprint100_s_h"], lw=lw, label="Solar", color=COLORS[3])
        x.plot(grped.loc[m, "footprint100_w_h"], lw=lw, label="Wind", color=COLORS[2])
        x.plot(grped.loc[m, "footprint5050_h"], lw=lw, label="50/50", color=COLORS[1])
        x.text(12, ytxt2, "DAYTIME\n%s" % calendar.month_abbr[m].upper(), ha='center')
    ax3.legend(loc=8, ncol=2, bbox_to_anchor=(1.5, -0.1), columnspacing=1)

    darkgray = 70
    darkgray = (darkgray/256, darkgray/256, darkgray/256)
    lightgray = 150
    lightgray = (lightgray/256, lightgray/256, lightgray/256)
    for i, x, lab in zip(range(4), [0, 3, 2, 1], ["Grid", "solar100", "wind100", "sw5050"]):
        ax5.bar([i-0.2], [100*d.loc[lab, "red_H"]], width=0.3,
                color=COLORS[x], label='__nolegend__')
        ax5.bar([i+0.2], [100*d.loc[lab, "red_Y"]], width=0.3,
                color=COLORS[x], label='__nolegend__', alpha=0.4)
    # ax5.bar([x-0.2 for x in range(4)], 100*d.loc[:, "red_H"], width=0.3,
    #         color=darkgray, label='__nolegend__', hatch='/')
    # ax5.bar([x+0.2 for x in range(4)], 100*d.loc[:, "red_Y"], width=0.3, color=lightgray, label='__nolegend__')
    ax5.grid(True, linewidth=.5)
    ax5.set_ylim([0, 160])
    ax5.set_xlim([-.5, 3.5])
    ax5.set_xticks(range(4))
    ax5.set_xticklabels(["Grid", "Solar", "Wind", "50/50"])
    ax5.bar([-1], [-1], color=darkgray, label='hourly calc.')
    ax5.bar([-1], [-1], color=darkgray, label='yearly calc.', alpha=0.4)
    ax5.set_ylabel("Emissions reduction (%)")
    ax5.legend(loc=8, bbox_to_anchor=(0.5, -.5))

    for x in axes:
        x.set_xlim([0, 23])
        x.grid(True, linewidth=.5)
    plt.subplots_adjust(left=0.01, right=.99, top=.98, bottom=0.02)
    if save:
        plt.savefig('figures/%s.pdf' % fig_name, dpi=300)
        plt.savefig('figures/%s.png' % fig_name, dpi=300)
#figure2()
# figure2(df25, df_sum25, ylim1=[-8,800], ylim2=[-500,400],
# year="(b) 2025", fig_name='fig2b',
# ytxt1=500, ytxt2=-400)
def figure2bis(fig, ax, hoffset=0.5, df_plot = df18, d=df_sum18, ylim1=[-8,1200],
ylim2=[-1200,500], year="(a) 2018", save=True,
fig_name="fig2a", ytxt1=850, ytxt2=-800):
# These are in unitless percentages of the figure size. (0,0 is bottom left)
#left, bottom, width, height =
width = 0.21
height = 0.33 / 2
# left1 = 0.11
# left2 = 0.43
# left3 = 0.75
left3 = 0.11
left1 = 0.43
left2 = 0.75
bottom1 = 0.14 / 2 + hoffset
bottom2 = 0.57 / 2 + hoffset
bottom3 = 0.39 / 2 + hoffset
height2 = 0.44 / 2
lw=1
ax1 = fig.add_axes([left1, bottom1, width, height])
ax2 = fig.add_axes([left1, bottom2, width, height])
ax3 = fig.add_axes([left2, bottom1, width, height])
ax4 = fig.add_axes([left2, bottom2, width, height])
ax5 = fig.add_axes([left3, bottom3, width, height2])
axes = [ax1,ax2,ax3,ax4]
ax.text(0.08, 0.91 / 2 + hoffset, year, transform=ax.transAxes,
fontsize=10, ha='center')
for x in axes:
x.axvspan(0, 6, facecolor='b', alpha=0.05)
x.axvspan(19, 23, facecolor='b', alpha=0.05)
x.axvspan(6, 19, facecolor='y', alpha=0.05)
for x in [ax2,ax4]:
x.set_ylim(ylim1)
        x.set_ylabel(r'$\mathregular{CO_2}$ credit (kg)')
for x in [ax1,ax3]:
x.set_ylim(ylim2)
        x.set_ylabel(r'$\mathregular{CO_2}$ net foot. (kg)')
x.set_xlabel('hour', labelpad=0)
#x.set_yticks([0, -500, -1000])
df_plot["month"] = df_plot.index.month
df_plot["hour"] = df_plot.index.hour
grped = df_plot.loc[:,[
"carbon_intensity","avoided100_s_h", "avoided100_w_h",
"avoided5050_h", "footprint100_s_h", "footprint100_w_h",
"footprint5050_h", "month", "hour"]].groupby([
"month", "hour"]).mean()
for m, x in zip([1, 8], [ax2, ax4]):
x.plot([0,23],[0,0], lw=lw, label="100% Grid", color=COLORS[0])
x.plot(grped.loc[m, "avoided100_s_h"], lw=lw, label="100% Solar", color=COLORS[3])
x.plot(grped.loc[m, "avoided100_w_h"], lw=lw, label="100% Wind", color=COLORS[2])
x.plot(grped.loc[m, "avoided5050_h"], lw=lw, label="50/50", color=COLORS[1])
# x.text(12, ytxt1, "%s" % calendar.month_abbr[m].upper(), ha='center')
x.text(.5, 1.05, "Day", fontsize=SMALL_SIZE, ha='center', transform=x.transAxes)
x.text(.02, 1.05, "Night", fontsize=SMALL_SIZE, ha='left', transform=x.transAxes)
x.text(.98, 1.05, "Night", fontsize=SMALL_SIZE, ha='right', transform=x.transAxes)
x.text(.02, .88, "%s" % calendar.month_abbr[m].upper(), ha='left', transform=x.transAxes)
for m, x in zip([1, 8], [ax1, ax3]):
x.plot(grped.loc[m, "carbon_intensity"], lw=lw, label="Grid", color=COLORS[0])
x.plot(grped.loc[m, "footprint100_s_h"], lw=lw, label="Solar", color=COLORS[3])
x.plot(grped.loc[m, "footprint100_w_h"], lw=lw, label="Wind", color=COLORS[2])
x.plot(grped.loc[m, "footprint5050_h"], lw=lw, label="50/50", color=COLORS[1])
# x.text(12, ytxt2, "%s" % calendar.month_abbr[m].upper(), ha='center')
x.text(.04, .05, "%s" % calendar.month_abbr[m].upper(), ha='left', transform=x.transAxes)
#ax3.legend(loc=8, ncol=2, bbox_to_anchor=(1.5, -0.1), columnspacing=1)
darkgray = 70
darkgray = (darkgray/256,darkgray/256,darkgray/256)
lightgray = 150
lightgray = (lightgray/256,lightgray/256,lightgray/256)
for i, x, lab in zip(range(4), [0,3,2,1], ["Grid", "solar100", "wind100", "sw5050"]):
ax5.bar([i-0.2], [100*d.loc[lab, "red_H"]], width=0.3,
color=COLORS[x], label='__nolegend__')
ax5.bar([i+0.2], [100*d.loc[lab, "red_Y"]], width=0.3,
color=COLORS[x], label='__nolegend__', alpha=0.4)
# ax5.bar([x-0.2 for x in range(4)], 100*d.loc[:, "red_H"], width=0.3,
# color=darkgray, label='__nolegend__', hatch='/')
# ax5.bar([x+0.2 for x in range(4)], 100*d.loc[:, "red_Y"], width=0.3, color=lightgray, label='__nolegend__')
ax5.grid(True, linewidth=.5)
ax5.set_ylim([0,160])
ax5.set_xlim([-.5,3.5])
ax5.set_xticks(range(4))
ax5.set_xticklabels(["Grid", "Solar", "Wind", "50/50"])
ax5.plot([], [], lw=lw, label="Grid", color=COLORS[0])
ax5.plot([], [], lw=lw, label="Solar", color=COLORS[3])
ax5.plot([], [], lw=lw, label="Wind", color=COLORS[2])
ax5.plot([], [], lw=lw, label="50/50", color=COLORS[1])
ax5.bar([-1], [-1], color=darkgray, label='hourly calc.')
ax5.bar([-1], [-1], color=darkgray, label='yearly calc.', alpha=0.4)
ax5.set_ylabel("Emissions reduction (%)")
ax5.legend(loc=8, bbox_to_anchor=(.32, -.7), ncol=2)
for x in axes:
x.set_xlim([0,23])
x.grid(True, linewidth=.5)
fig, ax = plt.subplots(figsize=(col_width3, 3*2))
ax.xaxis.set_ticklabels([])
ax.xaxis.set_ticks([])
ax.yaxis.set_ticklabels([])
ax.yaxis.set_ticks([])
figure2bis(fig, ax)
figure2bis(fig, ax, hoffset=0., df_plot=df25, d=df_sum25, ylim1=[-8,800], ylim2=[-500,400],
year="(b) 2025", fig_name='fig2b',
ytxt1=600, ytxt2=-300)
ax.axhline(0.5, color='k', lw=0.5)
plt.subplots_adjust(left=0.01, right=.99, top=.99, bottom=0.01)
plt.savefig('figures/fig2.pdf', dpi=300)
plt.savefig('figures/fig2.png', dpi=300)
```
# Text Processing with NLTK
In this example we will explore some of the functionality in [Natural Language Toolkit (NLTK)](http://www.nltk.org/). NLTK is one of the most popular Python based tools to get started with NLP.
There is also an entire [NLTK book](https://www.nltk.org/book/) with fantastic examples to help you explore the tools in much greater detail.
```
import nltk
import IPython.core.display as disp
# Run the following line the first time you run this script
nltk.download("averaged_perceptron_tagger")
```
## Perform Part-of-Speech (POS) Tagging
The first step is, ironically, not entirely a rule-based approach. We will tokenize the sentence into its individual words and then perform part-of-speech tagging, which typically requires a model trained from data -- this makes sense, because figuring out the part of speech of a word involves more exceptions than rules, especially in a language as weird as English!
```
def tag_sentence(sentence):
tokenized = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokenized)
return tagged
input_sentence = "Go over to the kitchen and find a big red apple."
print(input_sentence)
tagged_sentence = tag_sentence(input_sentence)
print(tagged_sentence)
```
## Create a Grammar and Parse the Sentence
Once we have a tokenized and tagged sentence, we can create a *grammar* of rules to help us *chunk* the sentence. In the example below, we create a simple grammar to extract Noun Phrases (NP) and Verb Phrases (VP) from a tagged sentence using a regular expression parser in NLTK.
To explain the grammar a little more, we have used the following (imperfect and limited) definitions:
* **Noun Phrase:** Any number of determiners -> Any number of adjectives -> Any number of nouns
* **Verb Phrase:** A verb -> any number of (prepositions, the word "to", adverbs, or particles) -> A noun phrase
Recall that the \* character in regular expressions matches zero or more occurrences, and the . character matches any single character (useful here since there are many verb tags depending on conjugation).
To understand what all these tags like `DT`, `JJ`, `NN`, etc. mean, refer to the [Penn Treebank list of tags](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html).
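To see these regular-expression semantics in action before applying them to tags, here is a quick check using Python's own `re` module directly (the chunk grammar's tag patterns follow the same matching rules):

```python
import re

# <VB.*> in a chunk grammar matches any verb tag: VB, VBD, VBG, VBN, VBP, VBZ
verb_pattern = re.compile(r"VB.*")

for tag in ["VB", "VBD", "VBG", "NN"]:
    # fullmatch returns a match object (truthy) only if the whole tag matches
    print(tag, "matches" if verb_pattern.fullmatch(tag) else "does not match")
```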
```
grammar = """
NP: {<DT>* <JJ>* <NN>*} # Noun phrase
VP: {<VB.*> <IN|TO|RB|RP>* <NP>} # Verb phrase
"""
chunk_parser = nltk.RegexpParser(grammar)
input_sentence = "Go over to the kitchen and find a big red apple."
tagged_sentence = tag_sentence(input_sentence)
tree = chunk_parser.parse(tagged_sentence)
print(tree)
disp.display(tree)
```
## Extract information from the parse tree
Once we have a parse tree with verb phrases and noun phrases, this structure can in theory make it easier to analyze complex sentences. For example, we could possibly infer the subject and object of a sentence, or resolve relations and references between named entities. There are whole fields devoted to such problems, such as *Named Entity Recognition (NER)* and *Coreference Resolution*.
In our example below, we can now break down sentences into multiple verb phrases. For each verb phrase, we can extract information such as the verb and first noun and possibly infer that the human is requesting a robot to take a specific action.
```
def extract_information(tree):
"""
Extracts information (actions and targets) from a tagged and chunked sentence
"""
actions = []
targets = []
# Loop through all the trees
for elem in tree:
action = None
target = None
        # Get the label of the subtree or token
        if isinstance(elem, nltk.tree.Tree):
            label = elem.label()
        elif isinstance(elem, tuple):
            label = elem[1]
        else:
            continue  # skip anything that is neither a subtree nor a (word, tag) token
# Once a verb phrase is found, pick out the first verb
if label == "VP":
leaves = elem.leaves()
for (word, label) in leaves:
if "VB" in label and action is None:
action = word
break
# Now find the first noun phrase and pick out the first noun
for st in elem.subtrees():
if st.label() == "NP":
leaves = st.leaves()
for (word, label) in leaves:
if "NN" in label and target is None:
target = word
break
# If an action and/or target is found, append it to the list
if action is not None or target is not None:
actions.append(action)
targets.append(target)
return actions, targets
(actions, targets) = extract_information(tree)
print("Input sentence: {}".format(input_sentence))
print("Actions: {}".format(actions))
print("Targets: {}".format(targets))
```
## Test on Multiple Sentences
Again, we should test our NLP pipeline on various types of sentences to see how robust our system is to the types of commands we may expect humans to give.
We as developers have no way to limit the space of possible sentences a rule-based system can see out in the wild, which makes such systems brittle in the real world. This is one of many reasons that machine learning approaches are dominating NLP today. However, many complex systems can benefit from a combination of rule-based and statistical methods, drawing on their respective advantages.
```
input_doc = "Go over to the kitchen and find a big red apple. " \
"Open the refrigerator and grab a cold water bottle. " \
"Proceed to the garage and empty out the trash."
sentences = nltk.sent_tokenize(input_doc)
for sent in sentences:
tagged_sentence = tag_sentence(sent)
tree = chunk_parser.parse(tagged_sentence)
(actions, targets) = extract_information(tree)
print("Input sentence: {}".format(sent))
disp.display(tree)
print("Actions: {}".format(actions))
print("Targets: {}".format(targets))
print("")
```
# Threshold Tools
A single channel of an image is selected for either binary thresholding or auto thresholding (Gaussian, mean, Otsu, or triangle). For a color image, selecting a channel of an image for thresholding likely involves conversion from RGB to HSV or LAB color space, then selecting Hue, Saturation, Value, Lightness, Green-Magenta, or Blue-Yellow channels. It's best to select a channel that maximizes contrast between the target object and the background. When thresholding an image to segment a target object, it may not be possible to isolate just the target object. Multiple thresholding steps on various channels may be necessary as well as downstream noise reduction steps. For an example of this approach see the [VIS tutorial](vis_tutorial.ipynb).
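As a quick aside, binary thresholding itself is just an element-wise comparison. The sketch below uses plain NumPy (not PlantCV) on a tiny made-up array to show the idea behind standard thresholding with a light object on a dark background; the cutoff of 150 and the pixel values are invented for illustration:

```python
import numpy as np

# A tiny made-up "grayscale image" with values in 0-255
gray = np.array([[ 10,  40, 200],
                 [160, 120, 255],
                 [ 30, 180,  90]], dtype=np.uint8)

# Standard thresholding (object lighter than background): pixels ABOVE the
# cutoff become foreground (255 = white), everything else background (0).
threshold = 150
binary = np.where(gray > threshold, 255, 0).astype(np.uint8)
print(binary)
# For an object darker than the background, the comparison is inverted
# (gray < threshold), i.e. inverse thresholding.
```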
```
from plantcv import plantcv as pcv
class options:
def __init__(self):
self.image = "img/tutorial_images/vis/original_image.jpg"
self.debug = "plot"
self.writeimg= False
self.outdir = "."
# Get options
args = options()
# Set debug to the global parameter
pcv.params.debug = args.debug
# Read image
# Inputs:
# filename - Image file to be read in
# mode - Return mode of image; either 'native' (default), 'rgb', 'gray', or 'csv'
img, path, filename = pcv.readimage(filename=args.image)
```
Each of the threshold methods takes grayscale image data, and there are a few different ways to transform a color image into a gray color space. Examples of `rgb2gray_lab` and `rgb2gray_hsv` (and the way that multiple color spaces can be combined to further reduce noise) can be found in the [VIS tutorial](vis_tutorial.ipynb) and the [VIS/NIR tutorial](vis_nir_tutorial.ipynb).
```
# Convert the RGB img to grayscale
# Inputs:
# rgb_img - RGB image data
gray_img = pcv.rgb2gray(rgb_img=img)
# There isn't a lot of contrast between the plant and background, so we can use
# the blue-yellow channel. Feel free to try other channels and/or upload your
# own images!
# Inputs:
# rgb_img = image object, RGB color space
# channel = color subchannel ('l' = lightness, 'a' = green-magenta , 'b' = blue-yellow)
gray_img_b = pcv.rgb2gray_lab(rgb_img=img, channel='b')
# Plot a histogram to help inform a reasonable threshold value
# Inputs:
# gray_img - Grayscale image to analyze
# mask - An optional binary mask made from selected contours, default mask=None
# bins - Number of classes to divide the spectrum into, default bins=256
# color - Color of line drawn (default color='red')
#   title - custom title for the plot, drawn if title is not None (default title=None)
hist_figure = pcv.visualize.histogram(gray_img=gray_img_b, mask=None, bins=256)
# Create a binary image based on a threshold value
# Inputs:
# gray_img - Grayscale image data
#   threshold - Threshold value (0-255), cutoff point for thresholding
# max_value - Value to apply above the threshold (255 = white)
# object_type - 'light' (default) or 'dark', if the object is lighter than the
#                 background then standard thresholding is done, but if darker than
# background then inverse thresholding is done.
binary_thresh1 = pcv.threshold.binary(gray_img=gray_img_b, threshold=150, max_value=255,
object_type='light')
# Too much background got picked up. Try a higher threshold value.
binary_thresh2 = pcv.threshold.binary(gray_img=gray_img_b, threshold=160, max_value=255,
object_type='light')
# Create a binary image using multiple channels
# Inputs:
# rgb_img - RGB image data
# lower_thresh - Lower threshold values
# upper_thresh - List of upper threshold values
# channel - Color-space channels of interest (either 'RGB', 'HSV', 'LAB', or 'gray')
mask, masked_img = pcv.threshold.custom_range(rgb_img=img, lower_thresh=[0,0,158], upper_thresh=[255,255,255], channel='LAB')
# Create a binary image using the Gaussian adaptive thresholding method
# Inputs:
# gray_img - Grayscale image data
# max_value - Value to apply above threshold (255 = white)
# object_type - 'light' (default) or 'dark', if the object is lighter than the
#                 background then standard thresholding is done, but if darker than
# background then inverse thresholding is done.
gauss_thresh1 = pcv.threshold.gaussian(gray_img=gray_img_b, max_value=255, object_type='dark')
# Quite a bit of the plant was missed so try another color channel
gray_img_l = pcv.rgb2gray_lab(img, 'l')
gauss_thresh2 = pcv.threshold.gaussian(gray_img_l, 255, 'dark')
# Create a binary image using the mean adaptive thresholding method
# Inputs:
# gray_img - Grayscale image data
# max_value - Value to apply above threshold (255 = white)
# object_type - 'light' (default) or 'dark', if the object is lighter than the
#                 background then standard thresholding is done, but if darker than
# background then inverse thresholding is done.
mean_thresh = pcv.threshold.mean(gray_img=gray_img_l, max_value=255, object_type='dark')
# Create a binary image using Otsu's method
# Inputs:
# gray_img - Grayscale image data
# max_value - Value to apply above threshold (255 = white)
# object_type - 'light' (default) or 'dark', if the object is lighter than the
#                 background then standard thresholding is done, but if darker than
# background then inverse thresholding is done.
otsu_thresh1 = pcv.threshold.otsu(gray_img=gray_img_b, max_value=255, object_type='dark')
# The plant container and table it's sitting got picked up but the plant didn't.
# Try the green-magenta channel instead
gray_img_a = pcv.rgb2gray_lab(img, 'a')
otsu_thresh2 = pcv.threshold.otsu(gray_img_a, 255, 'dark')
# Create a binary image using the triangle auto-thresholding method
# Inputs:
# gray_img - Grayscale image data
# max_value - Value to apply above threshold (255 = white)
# object_type - 'light' (default) or 'dark', if the object is lighter than the
#                 background then standard thresholding is done, but if darker than
# background then inverse thresholding is done.
# xstep - Value to move along the x-axis to determine the points from which to
# calculate distance (recommended to start at 1, the default, and change
# if needed)
triangle_thresh = pcv.threshold.triangle(gray_img=gray_img_l, max_value=255,
object_type='dark', xstep=1)
# Although not exactly like the rest of the thresholding functions, there is also an edge detection
# function that can be used on RGB and grayscale images
# Inputs:
# img - RGB or grayscale image data
# sigma - Optional standard deviation of the Gaussian filter
# low_thresh - Optional lower bound for hysteresis thresholding (linking edges). If None (default)
# then low_thresh is set to 10% of the image's max
# high_thresh - Optional upper bound for hysteresis thresholding (linking edges). If None (default)
# then high_thresh is set to 20% of the image's max
# thickness - Optional integer thickness of the edges, default thickness=1
# mask - Optional mask to limit the application of Canny to a certain area, takes a binary img.
# mask_color - Color of the mask provided; either None (default), 'white', or 'black' (cannot be None
# if mask is provided)
# use_quantiles - Default is False, if True then treat low_thresh and high_thresh as quantiles of the edge magnitude
# image, rather than the absolute edge magnitude values. If True then thresholds must be
# within the range `[0, 1]`.
# Use function defaults
edges = pcv.canny_edge_detect(img=img)
# sigma=2
edges2 = pcv.canny_edge_detect(img=img, sigma=2)
# Lower sigma value to pick up more edges
edges3 = pcv.canny_edge_detect(img=img, sigma=.1)
# Create a mask
# Inputs:
# img - RGB or grayscale image data
# p1 - Point at the top left corner of rectangle, (0,0) is top left corner (tuple)
# p2 - Point at the bottom right corner of rectangle (max-value(x),max-value(y)) is bottom right corner (tuple)
# color - "black", "gray","white", default is "black". This acts to select (mask)
# area from object capture (need to invert to remove).
masked, bin_img, rect_contour, hierarchy = pcv.rectangle_mask(img=img, p1=(65, 50), p2=(300,350), color='black')
# Find edges within a mask
edges4 = pcv.canny_edge_detect(img=img, mask=bin_img, mask_color='black')
# THIS FUNCTION TAKES SEVERAL MINUTES TO RUN
# Create a binary image using the texture thresholding method
# Inputs:
# gray_img - Grayscale image data
# ksize - Kernel size for the texture measure calculation
# threshold - Threshold value (0-255)
# offset - Distance offsets (default offset=3)
# texture_method - Feature of a grey level co-occurrence matrix, either
# ‘contrast’, ‘dissimilarity’ (default), ‘homogeneity’, ‘ASM’, ‘energy’,
# or ‘correlation’. For equations of different features see
# http://scikit-image.org/docs/dev/api/skimage.feature.html#greycoprops
# borders - How the array borders are handled, either ‘reflect’,
# ‘constant’, ‘nearest’ (default), ‘mirror’, or ‘wrap’
# max_value - Value to apply above threshold (usually 255 = white)
texture_thresh = pcv.threshold.texture(gray_img=gray_img_l, ksize=6, threshold=7)
```
# Identifying suitable sites for new ALS clinics using location allocation analysis
## Clinic access for the chronically ill
Location is everything for the chronically ill. For patients with amyotrophic lateral sclerosis (ALS), visits to clinics are exhausting full-day engagements involving sessions with highly trained specialists from several disciplines. Patients with long drives to their nearest clinic may also face the additional hardship of having to plan for travel days to and from the clinic as well as for food and lodging.
This notebook demonstrates how ArcGIS can perform network analysis to identify potential sites for new ALS clinics in California to improve access for patients who do not live near a clinic.
<blockquote><b>Note:</b> The examples in this notebook are intended to serve only as a technology demonstration. The analysis results should not be used for any planning purposes, as the data for ALS patient locations are fictional. The ALS clinic locations were obtained from data that was publicly available in October 2017.</blockquote>
## Access the analysis data
```
# Import the ArcGIS API for Python
from arcgis import *
from IPython.display import display
# Connect to the web GIS.
gis = GIS('https://python.playground.esri.com/portal', 'arcgis_python', 'amazing_arcgis_123')
# Search for content tagged with 'ALS'
search_als = gis.content.search('tags: ALS', 'Feature Layer')
for item in search_als:
display(item)
```
## Map the challenge
Many ALS patients in California have to drive for over 90 minutes to reach their nearest clinic. For the purposes of this notebook, those patients are considered to have poor clinic access. The next block of code displays a map showing the locations of the ALS patients with poor clinic access.
Map Legend:
<p><img src="http://esri.github.io/arcgis-python-api/notebooks/nbimages/04_als_patients.png" style="float:left">Locations of ALS patients who must travel for over 90 minutes to access a clinic.
<p><img src="http://esri.github.io/arcgis-python-api/notebooks/nbimages/04_als_clinics.png" style="float:left">Existing ALS clinics.
<p><img src="http://esri.github.io/arcgis-python-api/notebooks/nbimages/04_als_drive_times.png" style="float:left">90 minute drive times from existing ALS clinics.
<p><img src="http://esri.github.io/arcgis-python-api/notebooks/nbimages/04_als_candidate_clinics2.png" style="float:left">Candidate cities for new ALS clinics.
```
# Display the titles of the items returned by the search.
for item in search_als:
print(item.title)
# Create a map of California.
map1 = gis.map("State of California, USA")
map1.basemap = 'dark-gray'
display(map1)
# Add the ALS content to the map.
for item in search_als:
if item.title == 'ALS_Clinics_CA':
als_clinics = item
if item.title == 'ALS_Patients_CA':
patient_locations = item
if item.title == 'ALS_Clinic_City_Candidates_CA':
city_candidates = item
elif item.title == 'ALS_Clinic_90minDriveTime_CA':
drive_times = item
map1.add_layer(drive_times)
map1.add_layer(als_clinics)
map1.add_layer(patient_locations)
map1.add_layer(city_candidates)
```
## Introduction to Location-Allocation
### ArcGIS Network Analyst
[ArcGIS Network Analyst](http://pro.arcgis.com/en/pro-app/help/analysis/networks/what-is-network-analyst-.htm) helps organizations run their operations more efficiently and improve strategic decision making by performing analysis on the travel costs between their facilities and demand locations to answer questions such as:
* What is the quickest way to get from point A to point B?
* What houses are within five minutes of our fire stations?
* What markets do our businesses cover?
* Where should we open a new branch of our business to maximize market share?
To address these questions, ArcGIS Network Analyst creates origin-destination (OD) cost matrices along a travel network such as roads, to find solutions to reduce the overall costs of travel.

To perform network analysis, you need a network dataset, which models a transportation network. In the ArcGIS API for Python, the network dataset is accessed through a network service hosted in ArcGIS Online or Portal for ArcGIS.
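To make the OD cost matrix idea concrete, here is a minimal pure-Python sketch (not ArcGIS) that computes travel times between all pairs of nodes on a toy road graph with Dijkstra's algorithm; the node names and edge weights (minutes) are invented:

```python
import heapq

# Toy road network: node -> [(neighbor, travel minutes)]. All values invented.
roads = {
    "A": [("B", 10), ("C", 30)],
    "B": [("A", 10), ("C", 15), ("D", 40)],
    "C": [("A", 30), ("B", 15), ("D", 20)],
    "D": [("B", 40), ("C", 20)],
}

def shortest_times(source):
    """Dijkstra's algorithm: minutes from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in roads[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# The OD cost matrix: one shortest-path run per origin
od_matrix = {origin: shortest_times(origin) for origin in roads}
print(od_matrix["A"]["D"])  # A -> B -> C -> D = 10 + 15 + 20 = 45 minutes
```

A real network dataset adds one-way streets, turn restrictions, and time-dependent speeds, but the underlying idea is the same table of least-cost travel times.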
### Location-Allocation Analysis
The goal of [location-allocation](http://pro.arcgis.com/en/pro-app/help/analysis/networks/location-allocation-analysis-layer.htm) is to locate facilities in a way that supplies the demand points most efficiently. As the name suggests, location-allocation is a two-fold problem that simultaneously locates facilities and allocates demand points to the facilities.

### The `solve_location_allocation` tool
In this notebook, we will use the [solve_location_allocation](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.network.analysis.html#solve-location-allocation) tool to find the best locations for new ALS clinics in California. Inputs to this tool include [FeatureSet](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#featureset)s containing the following data:
* facilities
* demand points.
For the examples in this notebook, the facilities are a set of candidate locations for new ALS clinics. These locations could be actual addresses. However, for these examples, the candidate locations are cities that could potentially host new ALS clinics and are represented by their centroid points. The candidate cities were pre-selected based on the following criteria:
* they are within California
* they are outside of the 90 minute drive time areas from existing ALS clinics
* they have populations of at least 60,000, and are therefore assumed to have sufficient health care facilities and professionals to support an ALS clinic
The analyses in this notebook could lead to further multi-criteria analysis to identify specific locations within a city or other geographic area. For examples of Jupyter Notebooks with multi-criteria suitability analysis see [Calculating cost surfaces using weighted overlay analysis](https://developers.arcgis.com/python/sample-notebooks/calculating-cost-surfaces-using-weighted-overlay-analysis/) and [Finding suitable spots for placing heart defibrillator equipments in public](https://developers.arcgis.com/python/sample-notebooks/finding-suitable-spots-for-aed-devices-using-raster-analytics/)
For the demand points, we will use point locations of fictional ALS patients. These locations are aggregated to zip code centroids and contain an estimated number of ALS patients based on the total population within each zip code.
The output of the tool is a named tuple which contains the following data:
* solve_succeeded (`bool`)
* output_facilities (`FeatureSet`)
* output_demand_points (`FeatureSet`)
* output_allocation_lines (`FeatureSet`)
To prepare to run the `solve_location_allocation` tool we will first perform the following steps:
1. Extract the required input `FeatureSets` for the tool from `FeatureLayerCollection` items in the GIS.
2. Define a function to extract, symbolize, and display the output results from the `solve_location_allocation` tool.
## Analysis Preparation
```
# Extract the required input data for the analyses.
# ALS clinic city candidates FeatureSet
city_candidates_fset = city_candidates.layers[0].query()
# ALS patient locations FeatureSet
patient_locations_fset = patient_locations.layers[0].query()
# Display the ALS patients FeatureSet in a pandas Dataframe
patient_locations_sdf = patient_locations_fset.df
patient_locations_sdf.head()
```
Note that the `num_patients` field contains the numbers of ALS patients at each location. We can use the `num_patients` field to "weight" the analysis in favor of candidate cities with the greatest numbers of patients near them.
To accomplish this, add a column called `weight` to the `SpatialDataFrame` object and remove the old `num_patients` column.
```
# create the 'weight' column
patient_locations_sdf['weight'] = patient_locations_sdf['num_patients']
# drop the 'num_patients' column
patient_locations_sdf2 = patient_locations_sdf.drop('num_patients', axis=1)
# convert SpatialDataFrame back to a FeatureSet
patient_locations_weighted = patient_locations_sdf2.to_featureset()
patient_locations_weighted.df.head()
# Define a function to display the output analysis results in a map
import time
def visualize_locate_allocate_results(map_widget, solve_locate_allocate_result, zoom_level):
# The map widget
m = map_widget
# The locate-allocate analysis result
result = solve_locate_allocate_result
# 1. Parse the locate-allocate analysis results
# Extract the output data from the analysis results
# Store the output points and lines in pandas dataframes
demand_df = result.output_demand_points.df
lines_df = result.output_allocation_lines.df
# Extract the allocated demand points (patients) data.
demand_allocated_df = demand_df[demand_df['DemandOID'].isin(lines_df['DemandOID'])]
demand_allocated_fset = features.FeatureSet.from_dataframe(demand_allocated_df)
# Extract the un-allocated demand points (patients) data.
demand_not_allocated_df = demand_df[~demand_df['DemandOID'].isin(lines_df['DemandOID'])]
demand_not_allocated_fset = features.FeatureSet.from_dataframe(demand_not_allocated_df)
# Extract the chosen facilities (candidate clinic sites) data.
facilities_df = result.output_facilities.df[['Name', 'FacilityType',
'Weight','DemandCount', 'DemandWeight', 'SHAPE']]
facilities_chosen_df = facilities_df[facilities_df['FacilityType'] == 3]
facilities_chosen_fset = features.FeatureSet.from_dataframe(facilities_chosen_df)
# 2. Define the map symbology
# Allocation lines
allocation_line_symbol_1 = {'type': 'esriSLS', 'style': 'esriSLSSolid',
'color': [255,255,255,153], 'width': 0.7}
allocation_line_symbol_2 = {'type': 'esriSLS', 'style': 'esriSLSSolid',
'color': [0,255,197,39], 'width': 3}
allocation_line_symbol_3 = {'type': 'esriSLS', 'style': 'esriSLSSolid',
'color': [0,197,255,39], 'width': 5}
allocation_line_symbol_4 = {'type': 'esriSLS', 'style': 'esriSLSSolid',
'color': [0,92,230,39], 'width': 7}
# Patient points within 90 minutes drive time to a proposed clinic location.
allocated_demand_symbol = {'type' : 'esriPMS', 'url' : 'https://maps.esri.com/legends/Firefly/cool/1.png',
'contentType' : 'image/png', 'width' : 26, 'height' : 26,
'angle' : 0, 'xoffset' : 0, 'yoffset' : 0}
# Patient points outside of a 90 minutes drive time to a proposed clinic location.
unallocated_demand_symbol = {'type' : 'esriPMS', 'url' : 'https://maps.esri.com/legends/Firefly/warm/1.png',
'contentType' : 'image/png', 'width' : 19.5, 'height' : 19.5,
'angle' : 0, 'xoffset' : 0, 'yoffset' : 0}
# Selected clinic
selected_facilities_symbol = {'type' : 'esriPMS', 'url' : 'https://maps.esri.com/legends/Firefly/ClinicSites.png',
'contentType' : 'image/png', 'width' : 26, 'height' : 26,
'angle' : 0, 'xoffset' : 0, 'yoffset' : 0}
# 3. Display the analysis results in the map
# Add a slight delay for drama.
time.sleep(1.5)
# Display the patient-clinic allocation lines.
m.draw(shape=result.output_allocation_lines, symbol=allocation_line_symbol_4)
m.draw(shape=result.output_allocation_lines, symbol=allocation_line_symbol_2)
m.draw(shape=result.output_allocation_lines, symbol=allocation_line_symbol_1)
# Display the locations of patients within the specified drive time to the selected clinic(s).
m.draw(shape=demand_allocated_fset, symbol=allocated_demand_symbol)
# Display the locations of patients outside the specified drive time to the selected clinic(s).
m.draw(shape = demand_not_allocated_fset, symbol = unallocated_demand_symbol)
# Display the chosen clinic site.
m.draw(shape=facilities_chosen_fset, symbol=selected_facilities_symbol)
# Zoom out to display all of the allocated patients points.
m.zoom = zoom_level
```
## Analysis: Where could we add one new ALS clinic to reach the greatest number of ALS patients who currently have poor access to an existing clinic?
To answer this question, we will use the **"Maximize Coverage"** problem type. This problem type chooses facilities such that as many demand points as possible are allocated to facilities within the impedance cutoff. In this example, the impedance cutoff is defined as a drive time of 90 minutes.
```
# Identify the city which has the greatest number of patients within a 90 minute drive time.
result1 = network.analysis.solve_location_allocation(problem_type='Maximize Coverage',
travel_direction='Demand to Facility',
number_of_facilities_to_find='1',
demand_points=patient_locations_weighted,
facilities=city_candidates_fset,
measurement_units='Minutes',
default_measurement_cutoff=90
)
print('Analysis succeeded? {}'.format(result1.solve_succeeded))
# Display the analysis results in a pandas dataframe.
result1.output_facilities.df[['Name', 'FacilityType', 'DemandCount', 'DemandWeight']]
```
The selected facility, i.e. ALS city candidate, is assigned the value "3" in the `FacilityType` field. The `DemandCount` and `DemandWeight` fields indicate the number of demand points (patient locations) and total number of patients at those locations which are allocated to the facility.
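For example, pulling the chosen facilities out of such a results table is an ordinary pandas filter. The snippet below runs on a made-up stand-in DataFrame, not the actual tool output:

```python
import pandas as pd

# Made-up stand-in for a facilities results table (NOT actual tool output)
facilities = pd.DataFrame({
    "Name": ["Fresno", "Visalia", "Redding"],
    "FacilityType": [0, 3, 0],   # 3 marks a chosen facility
    "DemandCount": [0, 42, 0],
    "DemandWeight": [0, 130, 0],
})

# Keep only the chosen facility row(s)
chosen = facilities[facilities["FacilityType"] == 3]
print(chosen["Name"].tolist())  # ['Visalia']
```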
Now, let us visualize the results on a map.
```
# Display the analysis results in a map.
# Create a map of Visalia, California.
map2 = gis.map('Visalia, CA')
map2.basemap = 'dark-gray'
display(map2)
# Call custom function defined earlier in this notebook to
# display the analysis results in the map.
visualize_locate_allocate_results(map2, result1, zoom_level=7)
```
## Analysis: Where could we add new ALS clinics to reach 50% of patients who currently must drive for over 90 minutes to reach their nearest clinic?
To answer this question we will use the **"Target Market Share"** problem type. This problem type chooses the minimum number of facilities necessary to capture a specific percentage of the total market share.
```
# Identify where to open new clinics that are within a 90 minute drive time
# of 50% of ALS patients who currently have to drive for over 90 minutes to reach their nearest clinic.
result2 = network.analysis.solve_location_allocation(
problem_type='Target Market Share',
    target_market_share=50,
facilities=city_candidates_fset,
demand_points=patient_locations_weighted,
travel_direction='Demand to Facility',
measurement_units='Minutes',
default_measurement_cutoff=90
)
print('Solve succeeded? {}'.format(result2.solve_succeeded))
# Display the analysis results in a table.
result2.output_facilities.df[['Name', 'FacilityType', 'DemandCount', 'DemandWeight']]
```
The `solve_location_allocation` tool selected two of the candidate cities to host new ALS clinics. If ALS clinics are established in both of those cities, 50% of the patients who must currently drive for over 90 minutes to reach an existing clinic would be able to access one of the new clinics in 90 minutes or less.
Now, let us visualize the results on a map.
```
# Display the analysis results in a map.
# Create a map of Visalia, California
map3 = gis.map('Visalia, CA', zoomlevel=7)
map3.basemap = 'dark-gray'
# Display the map and add the analysis results to it
from IPython.display import display
display(map3)
visualize_locate_allocate_results(map3, result2, zoom_level=6)
```
# Conclusions
The ArcGIS Network Analysis toolset is a versatile set of tools that supports strategic decision making to reduce travel costs. The examples in this notebook only scratch the surface of the situations where you could use these tools. By performing your analysis with Python and Jupyter Notebook, you also document your methodology in a form that can easily be shared with colleagues and stakeholders, enabling them to reproduce your results or repeat the analyses iteratively with different parameters or different input datasets. You can also [export](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html) the output tables from the `solve_location_allocation` tool to CSV files to share with stakeholders via email.
```
from mxnet import nd, gpu, cpu
ctx = gpu(1)
```
### Loading the data
- The loaded data is a Python list.
- Each element of the list is an mxnet ndarray of shape (number of frames, 17, 2).
### Loading our own recorded videos
```
import pickle
with open('./data_pkl/our_data_five.pkl', 'rb') as f :
x_data = pickle.load(f)
y_data = pickle.load(f)
```
### Loading the provided images
```
import pickle
with open('./data_pkl/img_data_five.pkl', 'rb') as f :
x_data += pickle.load(f)
y_data += pickle.load(f)
print(len(x_data), len(y_data))
print(x_data[1000].shape, y_data[1000].shape)
```
### Data preprocessing
- Pad every sequence to the same length of 94 frames. `padded`
- Flatten the (17, 2) joint points into 34 coordinates per frame. `flat`
```
max_len = 94
padded = [nd.concat(x, nd.zeros((max_len-x.shape[0], 17, 2)).as_in_context(ctx), dim=0) for x in x_data]
flat = [p.reshape(-1, 34) for p in padded]
```
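The same two preprocessing steps can be sketched framework-agnostically with numpy (the mxnet code above is the actual preprocessing; this only illustrates the shapes, with toy sequence lengths):

```python
import numpy as np

max_len = 94
# Toy stand-ins for x_data: pose sequences with varying frame counts, each (T, 17, 2).
sequences = [np.random.rand(t, 17, 2) for t in (30, 94, 60)]

# Zero-pad every sequence along the time axis to max_len frames...
padded = [np.concatenate([x, np.zeros((max_len - x.shape[0], 17, 2))], axis=0)
          for x in sequences]
# ...then flatten the (17, 2) joint coordinates into 34 values per frame.
flat = [p.reshape(-1, 34) for p in padded]
stacked = np.stack(flat)
```

After this, every sample has the same (94, 34) shape and the whole list stacks into one array.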
- The list is no longer needed; convert it to an ndarray.
```
x_data = nd.stack(*flat)
y_data = nd.stack(*[y for y in y_data])
print(x_data.shape, y_data.shape)
```
- Split into training and validation data.
```
import numpy as np
num_data = len(x_data)
choice_t = np.random.choice(num_data, int(num_data * 0.80), replace=False)
choice_v = [n for n in range(num_data) if not n in choice_t]
x_train, y_train = x_data[choice_t], y_data[choice_t]
x_valid, y_valid = x_data[choice_v], y_data[choice_v]
num_train, num_valid = len(x_train), len(x_valid)
print(x_train.shape, x_valid.shape, num_data)
```
### Checking the class distribution
```
dist = []
for i in range(5) :
dist.append((y_data == i).sum())
[d / sum(dist) * 100 for d in dist]
```
### Training
```
from mxnet import gluon, init
from mxnet import autograd
from mxnet.gluon import nn, rnn
import mxnet as mx
from SkeletonARModel import SkeletonARModel
```
- Create the model.
- Initialize the weights.
- `SkeletonAR.params` stores the trained weights of the 5-class classifier.
```
model = SkeletonARModel()
model.collect_params().initialize(init.Xavier(), force_reinit=True, ctx=ctx)
model.load_parameters("./model_params/SkeletonAR.params", ignore_extra=True, allow_missing=True, ctx=ctx)
lr = 0.005
acc = mx.metric.Accuracy()
Loss = gluon.loss.SoftmaxCrossEntropyLoss(sparse_label=True)
trainer = gluon.Trainer(model.collect_params(), 'adam', {'learning_rate': lr})
for epoch in range(10000):
for i in range(4) :
choice = np.random.choice(num_train, int(num_train * 0.25), replace=False)
with autograd.record() :
y_hat = model(x_train[choice])
loss = Loss(y_hat, y_train[choice])
loss.backward()
trainer.step(batch_size=len(choice))
print("\n******Epoch {%d} ******" % epoch)
print("**loss > {}".format(nd.sum(loss / len(choice)).asscalar()))
if epoch % 20 == 0 :
try :
acc.update(preds=y_hat, labels=y_train[choice])
print("**\t train_acc >> {}".format(acc.get()))
valid_y_hat = model(x_valid)
valid_loss = Loss(valid_y_hat, y_valid)
print("**\t valid loss >> {}".format(nd.sum(valid_loss / num_valid).asscalar()))
acc.update(preds=valid_y_hat, labels=y_valid)
print("**\t valid_acc >> {}, ".format(acc.get()))
print("**\t predicts >> ", nd.argmax(valid_y_hat, axis=1))
except KeyboardInterrupt :
break
except :
print("errors")
```
### Saving the trained weights
```
model.save_parameters("skeletonAR.params")
model.load_parameters("skeletonAR.params", ctx=ctx)
```
### Re-measuring accuracy on the validation data
```
compare = model(x_valid).argmax(axis=1) == y_valid.squeeze()
compare.sum() / len(compare)
```
# Quantum Gates
```
from qiskit import *
```
To manipulate an input state we need to apply the basic operations of quantum computing. These are known as quantum gates. Here we'll give an introduction to some of the most fundamental gates in quantum computing. Most of those we'll be looking at act only on a single qubit. This means that their actions can be understood in terms of the Bloch sphere.
### The Pauli operators
The simplest quantum gates are the Paulis: $X$, $Y$ and $Z$. Their action is to perform a half rotation of the Bloch sphere around the x, y and z axes. They therefore have effects similar to the classical NOT gate or bit-flip. Specifically, the action of the $X$ gate on the states $|0\rangle$ and $|1\rangle$ is
$$
X |0\rangle = |1\rangle,\\\\ X |1\rangle = |0\rangle.
$$
The $Z$ gate has a similar effect on the states $|+\rangle$ and $|-\rangle$:
$$
Z |+\rangle = |-\rangle, \\\\ Z |-\rangle = |+\rangle.
$$
These gates are implemented in Qiskit as follows (assuming a circuit named `qc`).
```python
qc.x(0) # x on qubit 0
qc.y(0) # y on qubit 0
qc.z(0) # z on qubit 0
```
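These basis-state flips are easy to verify numerically with plain numpy, independently of Qiskit (a quick sanity check, not how Qiskit applies gates internally):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

x_on_0 = X @ ket0      # X flips |0> to |1>
z_on_plus = Z @ plus   # Z flips |+> to |->
```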
The matrix representations of these gates have already been shown in a previous section.
$$
X= \begin{pmatrix} 0&1 \\\\ 1&0 \end{pmatrix}\\\\
Y= \begin{pmatrix} 0&-i \\\\ i&0 \end{pmatrix}\\\\
Z= \begin{pmatrix} 1&0 \\\\ 0&-1 \end{pmatrix}
$$
There, their job was to help us make calculations regarding measurements. But since these matrices are unitary, and therefore define a reversible quantum operation, this additional interpretation of them as gates is also possible.
Note that here we referred to these gates as $X$, $Y$ and $Z$ and `x`, `y` and `z`, depending on whether we were talking about their matrix representation or the way they are written in Qiskit. Typically we will use the style of $X$, $Y$ and $Z$ when referring to gates in text or equations, and `x`, `y` and `z` when writing Qiskit code.
### Hadamard and S
The Hadamard gate is one that we've already used. It's a key component in performing an x measurement:
```python
measure_x = QuantumCircuit(1,1)
measure_x.h(0);
measure_x.measure(0,0);
```
Like the Paulis, the Hadamard is also a half rotation of the Bloch sphere. The difference is that it rotates around an axis located halfway between x and z. This gives it the effect of rotating states that point along the z axis to those pointing along x, and vice versa.
$$
H |0\rangle = |+\rangle, \, \, \, \, H |1\rangle = |-\rangle,\\\\
H |+\rangle = |0\rangle, \, \, \, \, H |-\rangle = |1\rangle.
$$
This effect makes it an essential part of making x measurements, since the hardware behind quantum computing typically only allows the z measurement to be performed directly. By moving x basis information to the z basis, it allows an indirect measurement of x.
The property that $H |0\rangle = |+\rangle $ also makes the Hadamard our primary means of generating superposition states. Its matrix form is
$$
H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1&1 \\\\ 1&-1 \end{pmatrix}.
$$
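A quick numpy check confirms both properties: $H$ sends $|0\rangle$ to $|+\rangle$ and, being a half rotation, it is its own inverse (a numerical illustration, not Qiskit code):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1, 0])
plus = np.array([1, 1]) / np.sqrt(2)

applied_once = H @ ket0       # should be |+>
applied_twice = H @ H @ ket0  # should be back to |0>
```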
The $S$ and $S^\dagger$ gates have a similar role to play in quantum computation.
```python
qc.s(0) # s gate on qubit 0
qc.sdg(0) # s† on qubit 0
```
They are quarter turns of the Bloch sphere around the z axis, and so can be regarded as the two possible square roots of the $Z$ gate,
$$
S = \begin{pmatrix} 1&0 \\\\ 0&i \end{pmatrix}, \, \, \, \, S^\dagger = \begin{pmatrix} 1&0 \\\\ 0&-i \end{pmatrix}.
$$
The effect of these gates is to rotate between the states of the x and y bases.
$$
S |+\rangle = |\circlearrowright\rangle, \, \, \, \, S |-\rangle = |\circlearrowleft\rangle,\\\\
S^\dagger |\circlearrowright\rangle = |+\rangle, \, \, \, \, S^\dagger |\circlearrowleft\rangle = |-\rangle.
$$
They are therefore used as part of y measurements.
```python
measure_y = QuantumCircuit(1,1)
measure_y.sdg(0)
measure_y.h(0)
measure_y.measure(0,0);
```
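As a numerical sanity check, squaring either quarter turn reproduces the half turn $Z$ (a numpy illustration of the matrices above):

```python
import numpy as np

S = np.array([[1, 0], [0, 1j]])
Sdg = S.conj().T              # S-dagger, the inverse quarter turn
Z = np.array([[1, 0], [0, -1]])

# Both quarter turns around z square to the half turn Z.
s_squared = S @ S
sdg_squared = Sdg @ Sdg
```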
The $H$, $S$ and $S^\dagger$ gates, along with the Paulis, form the so-called 'Clifford group' for a single qubit, which will be discussed more in later sections. These gates are extremely useful for many tasks in making and manipulating superpositions, as well as facilitating different kinds of measurements. But to unlock the full potential of qubits, we need the next set of gates.
### Other single-qubit gates
We've already seen the $X$, $Y$ and $Z$ gates, which are rotations around the x , y and z axes by a specific angle. More generally we can extend this concept to rotations by an arbitrary angle $\theta$. This gives us the gates $R_x(\theta)$, $R_y(\theta)$ and $R_z(\theta)$. The angle is expressed in radians, so the Pauli gates correspond to $\theta=\pi$ . Their square roots require half this angle, $\theta=\pm \pi/2$, and so on.
In Qiskit, these rotations can be implemented with `rx`, `ry`, and `rz` as follows.
```python
qc.rx(theta,0) # rx rotation on qubit 0
qc.ry(theta,0) # ry rotation on qubit 0
qc.rz(theta,0) # rz rotation on qubit 0
```
Two specific examples of $R_z(\theta)$ have their own names: those for $\theta=\pm \pi/4$. These are the square roots of $S$, and are known as $T$ and $T^\dagger$.
```python
qc.t(0) # t gate on qubit 0
qc.tdg(0) # t† on qubit 0
```
Their matrix form is
$$
T = \begin{pmatrix} 1&0 \\\\ 0&e^{i\pi/4}\end{pmatrix}, \, \, \, \, T^\dagger = \begin{pmatrix} 1&0 \\\\ 0&e^{-i\pi/4} \end{pmatrix}.
$$
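Numerically, squaring $T$ indeed gives $S$, just as squaring $S$ gives $Z$ (an illustrative numpy check):

```python
import numpy as np

T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
S = np.array([[1, 0], [0, 1j]])

t_squared = T @ T   # should equal S
```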
All single-qubit operations are compiled down to gates known as $U_1$ , $U_2$ and $U_3$ before running on real IBM quantum hardware. For that reason they are sometimes called the _physical gates_. Let's have a more detailed look at them. The most general is
$$
U_3(\theta,\phi,\lambda) = \begin{pmatrix} \cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \\\\ e^{i\phi}\sin(\theta/2)
& e^{i\lambda+i\phi}\cos(\theta/2) \end{pmatrix}.
$$
This has the effect of rotating a qubit in the initial $|0\rangle$ state to one with an arbitrary superposition and relative phase:
$$
U_3|0\rangle = \cos(\theta/2)|0\rangle + \sin(\theta/2)e^{i\phi}|1\rangle.
$$
The $U_1$ gate is known as the phase gate and is essentially the same as $R_z(\lambda)$. Its relationship with $U_3$ and its matrix form are,
$$
U_1(\lambda) = U_3(0,0,\lambda) = \begin{pmatrix} 1 & 0 \\\\ 0 & e^{i\lambda} \end{pmatrix}.
$$
In IBM Q hardware, this gate is implemented as a frame change and takes zero time.
The second gate is $U_2$, and has the form
$$
U_2(\phi,\lambda) = U_3(\pi/2,\phi,\lambda) = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -e^{i\lambda} \\\\ e^{i\phi} & e^{i\lambda+i\phi} \end{pmatrix}.
$$
From this gate, the Hadamard is done by $H= U_2(0,\pi)$. In IBM Q hardware, this is implemented by a pre- and post-frame change and an $X_{\pi/2}$ pulse.
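These identities are easy to check by coding the matrix forms above directly (a hedged numpy sketch of the formulas, not IBM's hardware implementation):

```python
import numpy as np

def u3(theta, phi, lam):
    # General single-qubit rotation, as in the matrix form above.
    return np.array([[np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
                     [np.exp(1j * phi) * np.sin(theta / 2),
                      np.exp(1j * (lam + phi)) * np.cos(theta / 2)]])

def u2(phi, lam):
    return u3(np.pi / 2, phi, lam)

def u1(lam):
    return u3(0, 0, lam)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
```

With these definitions, `u2(0, np.pi)` reproduces the Hadamard and `u1(np.pi / 2)` reproduces $S$.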
### Multiqubit gates
To create quantum algorithms that beat their classical counterparts, we need more than isolated qubits. We need ways for them to interact. This is done by multiqubit gates.
The most prominent multiqubit gates are the two-qubit CNOT and the three-qubit Toffoli. These have already been introduced in 'The atoms of computation'. They essentially perform reversible versions of the classical XOR and AND gates, respectively.
```python
qc.cx(0,1) # CNOT controlled on qubit 0 with qubit 1 as target
qc.ccx(0,1,2) # Toffoli controlled on qubits 0 and 1 with qubit 2 as target
```
Note that the CNOT is referred to as ```cx``` in Qiskit.
We can also interpret the CNOT as performing an $X$ on its target qubit, but only when its control qubit is in state $|1\rangle$, and doing nothing when the control is in state $|0\rangle$. With this interpretation in mind, we can similarly define gates that work in the same way, but instead perform a $Y$ or $Z$ on the target qubit depending on the $|0\rangle$ and $|1\rangle$ states of the control.
```python
qc.cy(0,1) # controlled-Y, controlled on qubit 0 with qubit 1 as target
qc.cz(0,1) # controlled-Z, controlled on qubit 0 with qubit 1 as target
```
The Toffoli gate can be interpreted in a similar manner, except that it has a pair of control qubits. Only if both are in state $|1\rangle$ is the $X$ applied to the target.
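In terms of classical bits, these gates are exactly reversible XOR and AND. A plain-Python sketch (illustrative only, not Qiskit code):

```python
def cnot(control, target):
    # Reversible XOR: the target bit flips iff the control bit is 1.
    return control, target ^ control

def toffoli(c1, c2, target):
    # Reversible AND: the target bit flips iff both control bits are 1.
    return c1, c2, target ^ (c1 & c2)
```

Applying `cnot` twice restores the original bits, which is what makes the gate reversible.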
### Composite gates
When we combine gates, we make new gates. If we want to see the matrix representation of these, we can use the 'unitary simulator' of Qiskit.
For example, let's try something simple: a two qubit circuit with an `x` applied to one and a `z` to the other. Using tensor products, we can expect the result to be,
$$
Z \otimes X= \begin{pmatrix} 1&0 \\\\ 0&-1 \end{pmatrix} \otimes \begin{pmatrix} 0&1 \\\\ 1&0 \end{pmatrix} = \begin{pmatrix} 0&1&0&0 \\\\ 1&0&0&0\\\\0&0&0&-1\\\\0&0&-1&0 \end{pmatrix}.
$$
This is exactly what we find when we analyze the circuit with this tool.
```
# set up circuit (no measurements required)
qc = QuantumCircuit(2)
qc.x(0) # qubits numbered from the right, so qubit 0 is the qubit on the right
qc.z(1) # and qubit 1 is on the left
# set up simulator that returns unitary matrix
backend = Aer.get_backend('unitary_simulator')
# run the circuit to get the matrix
gate = execute(qc,backend).result().get_unitary()
# now we use some fanciness to display it in latex
from IPython.display import display, Markdown, Latex
gate_latex = '\\begin{pmatrix}'
for line in gate:
for element in line:
gate_latex += str(element) + '&'
gate_latex = gate_latex[0:-1]
gate_latex += '\\\\'
gate_latex = gate_latex[0:-2]
gate_latex += '\\end{pmatrix}'
display(Markdown(gate_latex))
import qiskit
qiskit.__qiskit_version__
```
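The same matrix can be cross-checked without a simulator, using numpy's Kronecker product for the tensor product above (note the ordering convention: with Qiskit's numbering, `z` on qubit 1 and `x` on qubit 0 corresponds to $Z \otimes X$ with $Z$ as the left factor):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

zx = np.kron(Z, X)   # Z acts on qubit 1 (left factor), X on qubit 0 (right)

expected = np.array([[0, 1, 0, 0],
                     [1, 0, 0, 0],
                     [0, 0, 0, -1],
                     [0, 0, -1, 0]])
```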
*Erik J. Bekkers and Maxime W. Lafarge, Eindhoven University of Technology, the Netherlands*
*8 June 2018*
***
*This DEMO was tested on a laptop with*:
- *Windows as OS*
- *Jupyter Notebook (version 5.5.0)*
- *Python (version 3.5.5)*
- *TensorFlow-GPU (versions 1.1 and higher)*
- *An NVIDIA Quadro M1000M GPU*
- *The following additional libraries installed for this demo to run: sklearn, scipy, and matplotlib*
# Basic usage of the se2cnn library
This Jupyter demo contains basic usage examples of the se2cnn library, applied to digit recognition on the MNIST dataset. The se2cnn library contains 3 main layers (check the usage via *help(se2cnn.layers.z2_se2n)*):
- **z2_se2n**: a lifting layer from 2D tensor to SE(2) tensor
- **se2n_se2n**: a group convolution layer from SE(2) tensor to SE(2) tensor
- **spatial_max_pool**: performs 2x2 spatial max-pooling of the spatial axes
The following functions are used internally, but may be of interest as well:
- **rotate_lifting_kernels**: rotates the raw 2D lifting kernels (output is a set of rotated kernels)
- **rotate_gconv_kernels**: rotates (shift-twists) the se2 kernels (planar rotation + orientation shift)
In this demo we will construct the following network:
| layer nr. | Layer | Tensor shape |
| --------- | ------------------------------------ | ------------------------- |
| 0 | input | (32 x 32 x 1) |
| 1 | 5x5 lifting convolution (+ReLU) | (28 x 28 x Ntheta x Nc) |
| 1 | 2x2 max pooling | (14 x 14 x Ntheta x Nc) |
| 2 | 5x5 group convolution (+ReLU) | (10 x 10 x Ntheta x 2*Nc) |
| 2 | 2x2 max pooling | (5 x 5 x Ntheta x 2*Nc) |
| 3 | merge orientation and channel dim | (5 x 5 x Ntheta*2*Nc) |
| 3 | 5x5 2D convolution (+ReLU) | (1 x 1 x 4*Nc) |
| 4 | 1x1 2D convolution (+ReLU) | (1 x 1 x 128) |
| 5 | 1x1 2D convolution: the output layer | (10) |
Here Ntheta is the number of orientation samples to discretize the SE(2) group, Nc is the number of channels in the lifting layer. The first two layers are **roto-translation covariant**, meaning that the feature vectors rotate according to rotations of the input patterns (e.g. basic features do not need to be learned for each orientation). In layer 3 the orientation axis is merged with the channel axis, followed by 2D convolutions, making the subsequent layers **only translation covariant**. Here we made this choice due to the fact that in the MNIST dataset the characters always appear under the same orientation, and the task does not need to be rotation invariant.
There are several options to make a network completely roto-translation invariant:
1. For example, one could stick with (5x5 and 1x1) SE(2) group convolutions all the way to the output layer, which would then provide a length-10 feature vector for each orientation. One can then simply take a maximum projection over the orientations (tf.reduce_max) to get the maximal response for each bin, followed by a softmax.
2. One could reduce the patch size to [1 x 1 x Ntheta x Nc] via group convolutions, apply a maximum projection over theta, and perform 1x1 (fully connected) 2D convolutions all the way to the end.
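The maximum projection idea is simple to sketch with numpy: taking the max over the orientation axis yields a readout that is unchanged when that axis is cyclically shifted, which is what an input rotation by a multiple of 2*pi/Ntheta does to an SE(2) feature map (toy shapes, illustrative only):

```python
import numpy as np

Ntheta, Nclass = 12, 10
# Toy SE(2) output tensor: (batch, x, y, orientations, classes).
feat = np.random.rand(2, 1, 1, Ntheta, Nclass)

# Maximum projection over orientations (numpy analogue of tf.reduce_max).
invariant = feat.max(axis=3)

# A rotation of the input shifts the orientation axis cyclically;
# the max projection is unaffected by such a shift.
shifted = np.roll(feat, 5, axis=3).max(axis=3)
```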
***
# Part 1: Load the libraries
## Load the libraries
```
# Import tensorflow and numpy
import tensorflow as tf
import numpy as np
import math as m
import time
# For validation
from sklearn.metrics import confusion_matrix
import itertools
# For plotting
from matplotlib import pyplot as plt
# Add the library to the system path
import os,sys
se2cnn_source = os.path.join(os.getcwd(),'..')
if se2cnn_source not in sys.path:
sys.path.append(se2cnn_source)
# Import the library
import se2cnn.layers
```
## Useful functions
### The se2cnn layers
For usage of the relevant layers defined in se2cnn.layers, uncomment and run the following:
```
# help(se2cnn.layers.z2_se2n)
# help(se2cnn.layers.se2n_se2n)
# help(se2cnn.layers.spatial_max_pool)
```
### Weight initialization
For initialization we use the method for ReLU activation functions as proposed in:
He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1026-1034).
```
# He (He-Rang-Zhen-Sun) initialization for layers followed by a ReLU
def weight_initializer(n_in, n_out):
return tf.random_normal_initializer(mean=0.0, stddev=m.sqrt(2.0 / (n_in))
)
```
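For example, the lifting layer below has n_in = 5*5*1 = 25 inputs per output channel, giving a standard deviation of sqrt(2/25) ≈ 0.283. A quick plain-Python check of the formula (mirroring the initializer above, without TensorFlow):

```python
import math

def he_stddev(n_in):
    # He et al. initialization for ReLU: zero-mean Gaussian, stddev sqrt(2 / n_in).
    return math.sqrt(2.0 / n_in)

# First-layer kernels are 5x5 with 1 input channel.
lifting_stddev = he_stddev(5 * 5 * 1)
```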
### Confusion matrix plot
```
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
```
### Size of a tf tensor
```
def size_of(tensor) :
# Multiply elements one by one
result = 1
for x in tensor.get_shape().as_list():
result = result * x
return result
```
***
# Part 2: Load and format the MNIST dataset
The MNIST dataset consists of 28x28 images of handwritten digits. We are going to classify each input image into 1 of the 10 classes (the digits 0 to 9).
```
mnist = tf.contrib.learn.datasets.load_dataset("mnist")
train_data = mnist.train.images # Returns np.array
train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
eval_data = mnist.test.images # Returns np.array
eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
```
By default the data comes as flattened arrays. Here we format them as 2D feature maps (with only 1 channel).
```
# Reshape to 2D multi-channel images
train_data_2D = train_data.reshape([len(train_data),28,28,1]) # [batch_size, Nx, Ny, Nc]
eval_data_2D = eval_data.reshape([len(eval_data),28,28,1])
# Plot the first sample
plt.plot()
plt.title('Digit = %d' % train_labels[0])
plt.imshow(train_data_2D[0,:,:,0])
```
We would like the patches to be of size 32x32 so that we can reduce them to 1x1 via the 5x5 convolutions and max pooling layers. So, here we pad the images with zeros on each side.
```
train_data_2D = np.pad(train_data_2D,((0,0),(2,2),(2,2),(0,0)),'constant', constant_values=((0,0),(0,0),(0,0),(0,0)))
eval_data_2D = np.pad(eval_data_2D,((0,0),(2,2),(2,2),(0,0)),'constant', constant_values=((0,0),(0,0),(0,0),(0,0)))
```
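A quick shape check of this padding step (numpy only, with a toy batch standing in for the real data):

```python
import numpy as np

toy_batch = np.zeros((3, 28, 28, 1))   # stand-in for train_data_2D
# Pad 2 rows/columns of zeros on each spatial side: 28 + 2 + 2 = 32.
toy_padded = np.pad(toy_batch, ((0, 0), (2, 2), (2, 2), (0, 0)), 'constant')
```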
***
# Part 3: Build a graph (design the G-CNN)
## Build a graph
```
graph = tf.Graph()
graph.as_default()
tf.reset_default_graph()
```
## Settings
Kernel size and number of orientations
```
Ntheta = 12 # Kernel size in angular direction
Nxy=5 # Kernel size in spatial direction
Nc = 4 # Number of channels in the initial layer
```
### Placeholders
```
inputs_ph = tf.placeholder( dtype = tf.float32, shape = [None,32,32,1] )
labels_ph = tf.placeholder( dtype = tf.int32, shape = [None,] )
```
### Prepare for the first layer
```
tensor_in = inputs_ph
Nc_in = 1
```
### Save the kernels to a library for later inspection
```
kernels={}
```
## Layer 1: The lifting layer
```
with tf.variable_scope("Layer_{}".format(1)) as _scope:
## Settings
Nc_out = Nc
## Perform lifting convolution
# The kernels used in the lifting layer
kernels_raw = tf.get_variable(
'kernel',
[Nxy,Nxy,Nc_in,Nc_out],
initializer=weight_initializer(Nxy*Nxy*Nc_in,Nc_out))
tf.add_to_collection('raw_kernels', kernels_raw)
bias = tf.get_variable( # Same bias for all orientations
"bias",
[1, 1, 1, 1, Nc_out],
initializer=tf.constant_initializer(value=0.01))
# Lifting layer
tensor_out, kernels_formatted = se2cnn.layers.z2_se2n(
input_tensor = tensor_in,
kernel = kernels_raw,
orientations_nb = Ntheta)
# Add bias
tensor_out = tensor_out + bias
## Perform (spatial) max-pooling
tensor_out = se2cnn.layers.spatial_max_pool( input_tensor=tensor_out, nbOrientations=Ntheta)
## Apply ReLU
tensor_out = tf.nn.relu(tensor_out)
## Prepare for the next layer
tensor_in = tensor_out
Nc_in = Nc_out
## Save kernels for inspection
kernels[_scope.name] = kernels_formatted
tensor_in.get_shape()
```
## Layer 2: SE2-conv, max-pool, relu
```
with tf.variable_scope("Layer_{}".format(2)) as _scope:
## Settings
Nc_out = 2*Nc
## Perform group convolution
# The kernels used in the group convolution layer
kernels_raw = tf.get_variable(
'kernel',
[Nxy,Nxy,Ntheta,Nc_in,Nc_out],
initializer=weight_initializer(Nxy*Nxy*Ntheta*Nc_in,Nc_out))
tf.add_to_collection('raw_kernels', kernels_raw)
bias = tf.get_variable( # Same bias for all orientations
"bias",
[1, 1, 1, 1, Nc_out],
initializer=tf.constant_initializer(value=0.01))
# The group convolution layer
tensor_out, kernels_formatted = se2cnn.layers.se2n_se2n(
input_tensor = tensor_in,
kernel = kernels_raw)
tensor_out = tensor_out + bias
## Perform max-pooling
tensor_out = se2cnn.layers.spatial_max_pool( input_tensor=tensor_out, nbOrientations=Ntheta)
## Apply ReLU
tensor_out = tf.nn.relu(tensor_out)
## Prepare for the next layer
tensor_in = tensor_out
Nc_in = Nc_out
## Save kernels for inspection
kernels[_scope.name] = kernels_formatted
tensor_in.get_shape()
```
## Layer 3: 2D fully connected layer (5x5)
```
# Concatenate the orientation and channel dimension
tensor_in = tf.concat([tensor_in[:,:,:,i,:] for i in range(Ntheta)],3)
Nc_in = tensor_in.get_shape().as_list()[-1]
# 2D convolution layer
with tf.variable_scope("Layer_{}".format(3)) as _scope:
## Settings
Nc_out = 4*Nc
## Perform group convolution
# The kernels used in the group convolution layer
kernels_raw = tf.get_variable(
'kernel',
[Nxy,Nxy,Nc_in,Nc_out],
initializer=weight_initializer(Nxy*Nxy*Nc_in,Nc_out))
tf.add_to_collection('raw_kernels', kernels_raw)
bias = tf.get_variable( # Same bias for all orientations
"bias",
[1, 1, 1, Nc_out],
initializer=tf.constant_initializer(value=0.01))
# Convolution layer
tensor_out = tf.nn.conv2d(
input = tensor_in,
filter=kernels_raw,
strides=[1, 1, 1, 1],
padding="VALID")
tensor_out = tensor_out + bias
## Apply ReLU
tensor_out = tf.nn.relu(tensor_out)
## Prepare for the next layer
tensor_in = tensor_out
Nc_in = Nc_out
## Save kernels for inspection
kernels[_scope.name] = kernels_raw
tensor_in.get_shape()
```
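The concatenation over orientation slices at the start of this layer is equivalent to flattening the adjacent theta and channel axes with a single reshape. A numpy sketch on toy shapes:

```python
import numpy as np

Ntheta, Nc_in = 12, 8
x = np.random.rand(4, 5, 5, Ntheta, Nc_in)   # (batch, x, y, theta, channels)

# Concatenate orientation slices along the channel axis, as in the notebook...
merged = np.concatenate([x[:, :, :, i, :] for i in range(Ntheta)], axis=3)
# ...which equals one reshape, because theta and channel are adjacent axes.
reshaped = x.reshape(4, 5, 5, Ntheta * Nc_in)
```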
## Layer 4: Fully connected layer (1x1)
```
# 2D convolution layer
with tf.variable_scope("Layer_{}".format(4)) as _scope:
## Settings
Nc_out = 128
## Perform group convolution
# The kernels used in the group convolution layer
kernels_raw = tf.get_variable(
'kernel',
[1,1,Nc_in,Nc_out],
initializer=weight_initializer(1*1*Nc_in,Nc_out))
tf.add_to_collection('raw_kernels', kernels_raw)
bias = tf.get_variable( # Same bias for all orientations
"bias",
[1, 1, 1, Nc_out],
initializer=tf.constant_initializer(value=0.01))
# Convolution layer
tensor_out = tf.nn.conv2d(
input = tensor_in,
filter=kernels_raw,
strides=[1, 1, 1, 1],
padding="VALID")
tensor_out = tensor_out + bias
## Apply ReLU
tensor_out = tf.nn.relu(tensor_out)
## Prepare for the next layer
tensor_in = tensor_out
Nc_in = Nc_out
## Save kernels for inspection
kernels[_scope.name] = kernels_raw
tensor_in.get_shape()
```
## Layer 5: Fully connected (1x1) to output
```
with tf.variable_scope("Layer_{}".format(5)) as _scope:
## Settings
Nc_out = 10
## Perform group convolution
# The kernels used in the group convolution layer
kernels_raw = tf.get_variable(
'kernel',
[1,1,Nc_in,Nc_out],
initializer=weight_initializer(1*1*Nc_in,Nc_out))
tf.add_to_collection('raw_kernels', kernels_raw)
bias = tf.get_variable( # Same bias for all orientations
"bias",
[1, 1, 1, Nc_out],
initializer=tf.constant_initializer(value=0.01))
## Convolution layer
tensor_out = tf.nn.conv2d(
input = tensor_in,
filter=kernels_raw,
strides=[1, 1, 1, 1],
padding="VALID")
tensor_out = tensor_out + bias
## The output logits
logits = tensor_out[:,0,0,:]
predictions = tf.argmax(input=logits, axis=1)
probabilities = tf.nn.softmax(logits)
## Save the kernels for later inspection
kernels[_scope.name] = kernels_raw
logits.get_shape()
```
## Define the loss and the optimizer
```
# Cross-entropy loss
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels_ph, logits=logits)
#-- Define the l2 loss
weightDecay=5e-4
# Get the raw kernels
variables_wd = tf.get_collection('raw_kernels')
print('-----')
print('RAW kernel shapes:')
for v in variables_wd: print( "[{}]: {}, total nr of weights = {}".format(v.name, v.get_shape(), size_of(v)))
print('-----')
loss_l2 = weightDecay*sum([tf.nn.l2_loss(ker) for ker in variables_wd])
# Configure the Training Op (for TRAIN mode)
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
train_op = optimizer.minimize(
loss=loss + loss_l2,
global_step=tf.train.get_global_step())
```
***
# Part 4: Train and test the G-CNN
## Begin session
```
#-- Start the (GPU) session
initializer = tf.global_variables_initializer()
session = tf.Session(graph=tf.get_default_graph()) #-- Session created
session.run(initializer)
```
## Optimization
In each epoch we pass over all input samples in batches of size `batch_size`.
```
batch_size=100
n_epochs=10
```
Loop over the input stack in batches of size `batch_size`.
```
for epoch_nr in range(n_epochs):
loss_average = 0
data = train_data_2D
labels = train_labels
    # Batch settings
NItPerEpoch = m.floor(len(data)/batch_size) #number of iterations per epoch
samples=np.random.permutation(len(data))
# Loop over dataset
tStart = time.time()
for iteration in range(NItPerEpoch):
feed_dict = {
inputs_ph: np.array(data[samples[iteration*batch_size:(iteration+1)*batch_size]]),
labels_ph: np.array(labels[samples[iteration*batch_size:(iteration+1)*batch_size]])
}
operators_output = session.run([ loss , train_op ], feed_dict)
loss_average += operators_output[0]/NItPerEpoch
tElapsed = time.time() - tStart
print('Epoch ' , epoch_nr , ' finished... Average loss = ' , round(loss_average,4) , ', time = ',round(tElapsed,4))
```
## Validation
```
batch_size = 1000
labels_pred = []
for i in range(round(len(eval_data_2D)/batch_size)):
[ labels_pred_batch ] = session.run([ predictions ], { inputs_ph: eval_data_2D[i*batch_size:(i+1)*batch_size] })
labels_pred = labels_pred + list(labels_pred_batch)
labels_pred = np.array(labels_pred)
```
Compare the first 10 results with the ground truth
```
print(labels_pred[0:10])
print(eval_labels[0:10])
```
The accuracy (average nr of successes)
```
((labels_pred - eval_labels)**2==0).astype(float).mean()
```
Total nr of errors
```
((labels_pred - eval_labels)**2>0).astype(float).sum()
```
Error rate
```
100*((labels_pred - eval_labels)**2>0).astype(float).mean()
```
Plot a confusion matrix to see what kind of errors are made
```
cm = confusion_matrix(eval_labels, labels_pred)
plot_confusion_matrix(cm, range(10))
```

# Classes and Objects in Python
As you know, Python is an object-oriented programming language. What does that mean? Code is organized into elements called objects, which are defined by classes. It is a way of expressing real-world things in machine language.
1. [Classes](#1.-Classes)
2. [Attributes](#2.-Attributes)
3. [Constructor](#3.-Constructor)
4. [Methods](#4.-Methods)
5. [Documentation](#5.-Documentation)
6. [Summary](#6.-Summary)
## 1. Classes
Classes are how we describe objects. So far we have seen basic classes built into Python, such as *int* and *str*, and somewhat more complex ones such as *dict*. But **what if we want to create our own objects?** In object-oriented languages we can define new objects that match our use cases more closely, which makes programs simpler to develop and understand.
An integer is an object of class *int*, with characteristics different from a piece of text, which is of class *str*. For example, how do we know a car is a car? What characteristics does it have? Cars have a brand, a certain horsepower, some are automatic and some are not... In this way we translate into machine language, into programming, a concept that we have very clearly internalized.
So far we have seen several classes, for example *str*. When we checked a value's data type, Python printed `str`. And being a `str`, it had properties that other objects did not, such as the .upper() and .lower() methods.
The syntax for creating a class is:
```Python
class NombreClase:
    # class body goes here
```
The class name is normally written in *CamelCase*: no spaces or hyphens, using capital letters to distinguish the words.
Take a look at the [built-in *String* class](https://docs.python.org/3/library/stdtypes.html#str)
```
class Coche:
    # class body
    pass
```
The `pass` statement is a placeholder that does nothing. Python requires a class body to contain at least one statement; we have declared the class *Coche* but put nothing in it yet, so we satisfy that requirement with `pass`.
```
print(type("texto"))
print(type(str))
print(type(Coche))
```
Right, `Coche` is of type `type`. That makes sense because **it is not an object as such**, but a class. When we create cars, they will be instances of *Coche*, i.e., of type *Coche*, so it is natural that *Coche* itself is of type `type`.
### Class vs Object
A class is used to define something, just like with functions: we build the skeleton of what an object of that class will be. So, **once the class is defined, we instantiate objects of that class**. It is like creating the concept of a car, with all its characteristics and functionality; later, throughout the program, we can create objects of type car that conform to what was defined in the car class. Each car will have its own brand, horsepower, and so on.
```
primer_coche = Coche()
segundo_coche = Coche()
print(primer_coche)
print(segundo_coche)
print(type(primer_coche))
```
Now we do have an object of type Coche, called `primer_coche`. When we print its type, we see that it is of type Coche, and when we print the object itself, Python simply shows its type and an identifier.
We can create as many cars as we want:
```
citroen = Coche()
seat = Coche()
print(citroen)
print(seat)
# Two different objects
citroen == seat
```
For now all our cars are identical; we have not fully defined the class yet, so it is hard to tell one car from another. Let's see how to achieve that differentiation.
## 2. Attributes
Attributes are the characteristics that define the objects of a class: the brand, the color, the horsepower of the car. They are declared generically in the class, and then each *Coche* object holds a value for each of its attributes.
Attributes are defined right after the class declaration, and are later accessed with the syntax `objeto.atributo`.
Let's start defining attributes for our cars.
```
class Coche:
    puertas = 4
    ruedas = 4
```
Now every car we create will have 4 doors and 4 wheels.
```
citroen = Coche()
print(citroen)
print(citroen.puertas)
print(citroen.ruedas)
seat = Coche()
print(seat.puertas)
print(seat.ruedas)
```
We can also modify attributes. Python makes this very easy: we simply reassign the value directly. In other programming languages this has to be implemented through methods called `getters` and `setters`.
```
citroen = Coche()
citroen.puertas = 2
print(citroen.puertas)
seat = Coche()
print(seat.puertas)
```
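For comparison, here is a minimal sketch of how Python can emulate explicit getters and setters with the `@property` decorator. This `Coche` variant and its validation rule are illustrative assumptions, not part of the class built in this lesson:

```python
class Coche:
    def __init__(self, puertas=4):
        self._puertas = puertas  # "private" backing attribute, by convention

    @property
    def puertas(self):           # getter: runs on `coche.puertas`
        return self._puertas

    @puertas.setter
    def puertas(self, valor):    # setter: runs on `coche.puertas = valor`
        if valor < 2:
            raise ValueError("a car needs at least 2 doors")
        self._puertas = valor

citroen = Coche()
citroen.puertas = 2              # goes through the setter
print(citroen.puertas)           # -> 2
```

The access syntax stays the same as with a plain attribute, but now every read and write passes through the getter/setter, which lets us validate values.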
<table align="left">
<tr><td width="80"><img src="../../imagenes/error.png" style="width:auto;height:auto"></td>
<td style="text-align:left">
<h3>ERROR: accessing attributes that don't exist</h3>
</td></tr>
</table>
```
seat = Coche()
print(seat.motor)
```
We still cannot clearly tell one car from another, but we are defining their characteristics, which can be modified both when the object is initialized and afterwards. For now these characteristics are common to all cars... or are they? Do all cars have 4 doors?
## 3. Constructor
When we create an object of class *Coche*, we need to define it well enough to distinguish it from other cars. That initial definition happens in the class constructor: a set of input arguments the object asks for, which differentiate one instance from other instances of the same class.
**How do we define this?** With the `__init__` method, inside the class.
```
class Coche:
    ruedas = 4

    def __init__(self, puertas_coche):
        self.puertas = puertas_coche
```
In the constructor declaration we have included the word `self`. It must always be there. It refers to the instance itself, i.e., to each new car we create.
Here we distinguish the attributes common to the whole *Coche* class from the attributes particular to each car, such as the number of doors. That is why the number of doors goes with `self`: it does not belong to the generic car class, but to each individual car we create.
```
citroen = Coche(10)
print(citroen)
print(citroen.ruedas)
print(citroen.puertas)
```
What if we want to add more instance-specific variables to our car? The same way as before: add another parameter to the constructor and create a new variable with the `self.` prefix when making the assignment.
```
class Coche:
    ruedas = 4

    def __init__(self, marca_coche, puertas_coche):
        self.marca = marca_coche
        self.puertas = puertas_coche
        self.puertas_por_rueda = self.puertas / self.ruedas

seat = Coche(puertas_coche = 7, marca_coche = "MAZDA")
print(seat.marca)
print(seat.puertas)
print(seat.ruedas)
print(seat.puertas_por_rueda)
```
<table align="left">
<tr><td width="80"><img src="../../imagenes/ejercicio.png" style="width:auto;height:auto"></td>
<td style="text-align:left">
<h3>Exercise: create your own car class</h3>
Create your own car class based on the one we have just seen. The class must have a couple of attributes common to all cars, plus three more that are set through the constructor.
</td></tr>
</table>
## 4. Methods
Methods are functions we can define inside classes. They change the state of some attribute or perform computations that serve as output. A simple example: a method of the car class that returns the power in kilowatts instead of horsepower, or one that updates a maintenance status (inspection passed or not).
The constructor is a special kind of method; what sets it apart from the rest is its name, `__init__`. Methods are defined with the same syntax as functions, and called with `objeto.metodo(argumentos_metodo)`. We have used this before: when we wrote `string.lower()`, we were simply calling the `lower()` method of the *string* class, which takes no arguments.
```
class Coche:
    ruedas = 4

    def __init__(self, marca, puertas):
        self.marca_coche = marca
        self.puertas = puertas

    def caracteristicas(self):
        return "Marca: " + self.marca_coche + "; Num. Puertas: " + str(self.puertas) + "; Num. Ruedas: " + str(self.ruedas)

audi = Coche("Audi", 2)
audi.caracteristicas()
```
Notice that `self` is used to access the wheels, even though *ruedas* was not set in the constructor. This way we avoid accidentally referring to some other program variable named *ruedas*; with `self` we make sure we get the wheels of this particular car.
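The point above can be made concrete with a minimal sketch (this tiny `Coche` is illustrative): a class attribute like `ruedas` is shared by all instances, and assigning to it through one instance only creates a shadowing instance attribute on that object.

```python
class Coche:
    ruedas = 4  # class attribute, shared by all instances

a = Coche()
b = Coche()
a.ruedas = 3         # creates an *instance* attribute on `a` that shadows the class one
print(a.ruedas)      # -> 3
print(b.ruedas)      # -> 4 (still reads the class attribute)
print(Coche.ruedas)  # -> 4 (the class attribute itself is untouched)
```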
<table align="left">
<tr><td width="80"><img src="../../imagenes/ejercicio.png" style="width:auto;height:auto"></td>
<td style="text-align:left">
<h3>Exercise: create new methods</h3>
Create two new methods in the car class.
<ol>
<li>Add two new attributes in the constructor: years since purchase, and purchase price.</li>
<li>Create a method that computes the current price: if the car is 5 years old or less, its price is 50% of the purchase price; if it is older, 30%.</li>
</ol>
</td></tr>
</table>
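One possible solution sketch for the exercise above. The percentages and the 5-year cutoff follow the exercise statement; the attribute names and sample values are illustrative choices:

```python
class Coche:
    ruedas = 4

    def __init__(self, marca, puertas, anios, precio_compra):
        self.marca = marca
        self.puertas = puertas
        self.anios = anios                  # years since purchase
        self.precio_compra = precio_compra  # purchase price

    def precio_actual(self):
        # 50% of the purchase price up to 5 years old, 30% afterwards
        if self.anios <= 5:
            return self.precio_compra * 0.5
        return self.precio_compra * 0.3

viejo = Coche("Seat", 4, anios=8, precio_compra=20000)
print(viejo.precio_actual())  # -> 6000.0
```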
## 5. Documentation
Just like functions, classes can also be documented, and the documentation is exposed through the *built-in* `__doc__` attribute of the class. We can therefore put a docstring at the start of the class describing everything the class does, and the same goes for each of its methods. It is good practice to give a brief description of the functionality of the class/method and to describe the inputs and outputs of the methods: what each expects to receive, and of what type.
```
class Coche:
    '''
    Car class used as an example for the lesson

    Constructor parameters:
        marca_coche: identifies the car manufacturer. String
        puertas: cars have 2 or 4 doors. Int
    '''
    ruedas = 4

    def __init__(self, marca, puertas):
        """
        Documentation of the init
        """
        self.marca_coche = marca
        self.puertas = puertas

    def caracteristicas(self):
        '''
        caracteristicas docstring
        '''
        return "Marca: " + self.marca_coche + "; Num. Puertas: " + str(self.puertas)

print(Coche.__doc__)
print(Coche.__init__.__doc__)
print(Coche.caracteristicas.__doc__)
```
## 6. Summary
```
# Classes are declared with the following syntax
class Coche:
    # These are the attributes common to all objects of this class
    ruedas = 4

    # Class constructor
    def __init__(self, marca_coche, num_puertas):
        # Attributes particular to each instance
        self.marca_coche = marca_coche
        self.num_puertas = num_puertas

    # Method of this class
    def caracteristicas(self):
        return "Marca: " + self.marca_coche + ". Num Puertas: " + str(self.num_puertas) + ". Num Ruedas: " + str(self.ruedas)

audi = Coche("Audi", 2)
print(audi.ruedas)
print(audi.marca_coche)
print(audi.num_puertas)
print(audi.caracteristicas())
```
# Round Trip Tearsheet
When evaluating the performance of an investing strategy, it is helpful to quantify the frequency, duration, and profitability of its independent bets, or "round trip" trades. A round trip trade is started when a new long or short position is opened and then later completely or partially closed out.
The intent of the round trip tearsheet is to help differentiate strategies that profited off a few lucky trades from strategies that profited repeatedly from genuine alpha. Breaking down round trip profitability by traded name and sector can also help inform universe selection and identify exposure risks. For example, even if your equity curve looks robust, if only two securities in your universe of fifteen names contributed to overall profitability, you may have reason to question the logic of your strategy.
To identify round trips, pyfolio reconstructs the complete portfolio based on the transactions that you pass in. When you make a trade, pyfolio checks whether shares purchased at a certain price are already present in your portfolio. If they are, it computes the PnL, returns, and duration of that round trip trade. In calculating round trips, pyfolio will also append position-closing transactions at the last timestamp in the positions data. These closing transactions cause the PnL from any open positions to be realized as completed round trips.
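pyfolio's actual reconstruction lives in `pyfolio.round_trips`; as a rough illustration of the idea only, a minimal FIFO matching of sells against earlier buys for a single long-only symbol might look like this (the `Trade` tuple and all numbers are hypothetical):

```python
from collections import deque, namedtuple

Trade = namedtuple('Trade', ['amount', 'price'])  # +amount = buy, -amount = sell

def round_trip_pnl(trades):
    """Match sells against earlier buys FIFO; return realized PnL per round trip."""
    lots = deque()   # open long lots as [shares, entry_price]
    pnls = []
    for t in trades:
        if t.amount > 0:
            lots.append([t.amount, t.price])
        else:
            remaining = -t.amount
            while remaining > 0:
                shares = min(remaining, lots[0][0])
                pnls.append(shares * (t.price - lots[0][1]))
                lots[0][0] -= shares
                remaining -= shares
                if lots[0][0] == 0:
                    lots.popleft()
    return pnls

# buy 100 @ 10, sell 50 @ 12 (win), sell 50 @ 9 (loss)
print(round_trip_pnl([Trade(100, 10.0), Trade(-50, 12.0), Trade(-50, 9.0)]))
# -> [100.0, -50.0]
```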
```
%matplotlib inline
import gzip
import os
import pandas as pd
import pyfolio as pf
transactions = pd.read_csv(gzip.open('../tests/test_data/test_txn.csv.gz'),
                           index_col=0, parse_dates=True)
positions = pd.read_csv(gzip.open('../tests/test_data/test_pos.csv.gz'),
                        index_col=0, parse_dates=True)
returns = pd.read_csv(gzip.open('../tests/test_data/test_returns.csv.gz'),
                      index_col=0, parse_dates=True, header=None)[1]

# Optional: Sector mappings may be passed in as a dict or pd.Series. If a mapping is
# provided, PnL from symbols with mappings will be summed to display profitability by sector.
sect_map = {'COST': 'Consumer Goods', 'INTC': 'Technology', 'CERN': 'Healthcare', 'GPS': 'Technology',
            'MMM': 'Construction', 'DELL': 'Technology', 'AMD': 'Technology'}
```
The easiest way to run the analysis is to call `pyfolio.create_round_trip_tear_sheet()`. Passing in a sector map is optional. You can also pass `round_trips=True` to `pyfolio.create_full_tear_sheet()` to have this created alongside all the other analyses.
```
pf.create_round_trip_tear_sheet(returns, positions, transactions, sector_mappings=sect_map)
```
Under the hood, several functions are being called. `extract_round_trips()` does the portfolio reconstruction and creates the round-trip trades.
```
rts = pf.round_trips.extract_round_trips(transactions,
                                         portfolio_value=positions.sum(axis='columns') / (returns + 1))
rts.head()
pf.round_trips.print_round_trip_stats(rts)
```
[](https://colab.research.google.com/github/japan-medical-ai/medical-ai-course-materials/blob/master/notebooks/Sequential_Data_Analysis_with_Deep_Learning.ipynb)
# Hands-on: Time-Series Analysis of Monitoring Data with Deep Learning
With growing health awareness and a rising number of people who exercise, wearable devices such as activity trackers are becoming widespread. Because information like heart rate can be obtained from sensor devices, opening the possibility of monitoring health status in real time, use cases in the healthcare domain have been increasing in recent years. In February 2018, Cardiogram and the University of California announced joint research showing that applying deep learning to heart-rate data can predict prediabetes with high accuracy, drawing considerable attention. Sensor devices themselves also keep improving, with the Apple Watch Series 4 shipping with an ECG feature, making ever more precise measurements available. Against this background, efforts to collect and analyze monitoring data for health management are expected to become increasingly common.
In this chapter, we tackle the problem of detecting arrhythmia from electrocardiogram (ECG) signal waveforms.
## Setup
This chapter uses the following libraries:
* Cupy
* Chainer
* Scipy
* Matplotlib
* Seaborn
* Pandas
* WFDB
* Scikit-learn
* Imbalanced-learn
Run the cell below (Shift + Enter) to install the required packages.
```
!apt -y -q install tree
!pip install wfdb==2.2.1 imbalanced-learn==0.4.3
!curl https://colab.chainer.org/install | sh -
```
Once installation finishes, run the following cell to import each library and check its version.
```
import os
import random
import numpy as np
import chainer
import scipy
import pandas as pd
import matplotlib
import seaborn as sn
import wfdb
import sklearn
import imblearn
chainer.print_runtime_info()
print("Scipy: ", scipy.__version__)
print("Pandas: ", pd.__version__)
print("Matplotlib: ", matplotlib.__version__)
print("Seaborn: ", sn.__version__)
print("WFDB: ", wfdb.__version__)
print("Scikit-learn: ", sklearn.__version__)
print("Imbalanced-learn: ", imblearn.__version__)
```
In addition, to make the results in this chapter reproducible, we fix the random seeds as introduced in Chapter 4 (4.2.4.4).
(This setting is not strictly required for the computations that follow.)
```
def reset_seed(seed=42):
    random.seed(seed)
    np.random.seed(seed)
    if chainer.cuda.available:
        chainer.cuda.cupy.random.seed(seed)

reset_seed(42)
```
## Electrocardiograms (ECG) and arrhythmia diagnosis
An **electrocardiogram (ECG)** records, from the body surface, the electrical signals transmitted through the cardiac conduction system that make the heart muscle contract in a coordinated, rhythmic fashion. ECG examinations are widely used to diagnose arrhythmia and ischemic heart disease [[Ref. 1](https://en.wikipedia.org/wiki/Electrocardiography), [Ref. 2](https://www.ningen-dock.jp/wp/wp-content/uploads/2013/09/d4bb55fcf01494e251d315b76738ab40.pdf)].
A standard ECG consists of 12 leads: six limb leads, namely the bipolar leads ($I$, $II$, $III$) and the unipolar leads ($aV_R$, $aV_L$, $aV_F$), plus six chest leads, $V_1$ through $V_6$. When screening for arrhythmia in particular, diagnosis generally focuses on the $II$ and $V_1$ leads.
When the heart is healthy, the ECG shows a regular waveform called **normal sinus rhythm (NSR)**.
Concretely, it is composed of three main waves:
1. **P wave**: atrial depolarization (excitation of the atria)
1. **QRS complex**: ventricular depolarization (excitation of the ventricles)
1. **T wave**: ventricular repolarization (recovery of the ventricles)
appearing in this order, as in the figure below.

(Adapted from [[Ref. 1](https://en.wikipedia.org/wiki/Electrocardiography)])
When this regular waveform is disturbed and the rhythm is judged abnormal, conditions such as arrhythmia are suspected, and a diagnosis is made.
## Dataset
Here we use the well-known public ECG dataset, the [MIT-BIH Arrhythmia Database (mitdb)](https://www.physionet.org/physiobank/database/mitdb/).
It contains 48 records collected from 47 patients; each record file stores about 30 minutes of two-lead ($II$, $V_1$) signal data, and each R-wave peak is annotated. (See [here](https://www.physionet.org/physiobank/database/html/mitdbdir/intro.htm) for details on the data and annotations.)
The database is maintained by [PhysioNet](https://www.physionet.org/), which provides a Python package for downloading and reading the data, so we will use it to obtain the dataset.
```
dataset_root = './dataset'
download_dir = os.path.join(dataset_root, 'download')
```
First, let's download the mitdb database.
Note: if an error occurs during execution, simply run the cell again.
```
wfdb.dl_database('mitdb', dl_dir=download_dir)
```
When the download completes successfully, the message `Finished downloading files` is displayed.
Let's list the downloaded files.
```
print(sorted(os.listdir(download_dir)))
```
The number in each file name is the record ID. Each record comes with three kinds of files:
- `.dat` : signal (binary)
- `.atr` : annotations (binary)
- `.hea` : header (needed to read the binary files)
## Data preprocessing
This section explains the **data preprocessing** that reads the downloaded files and converts them into the input format for the machine learning model.
We preprocess the data in the following steps:
1. Split the record IDs into training / evaluation sets in advance
    - Of the 48 records:
        - IDs 102, 104, 107, and 217 are excluded because their signals contain pacemaker beats.
        - ID 114 is excluded here because part of its waveform is inverted.
        - IDs 201 and 202 come from the same patient, so 202 is excluded.
    - The remaining 42 records are split into training and test sets (the split follows [[Ref. 3](https://ieeexplore.ieee.org/document/1306572)]).
1. Read the signal files (.dat)
    - Each file stores the lead $II$ and lead $V_1$ signals; we use only lead $II$ here.
    - The sampling rate is 360 Hz, meaning 360 values are recorded per second.
1. Read the annotation files (.atr)
    - We obtain the position of each R-wave peak (positions) and its label (symbols).
1. Normalize the signals
    - Transform each signal to zero mean and unit variance.
1. Segment the signals
    - Cut out a 2-second fragment (1 second before and after) centered on each R-wave peak.
1. Label the segments
    - The labels attached to the R-wave peaks are aggregated according to the table below (*); in this analysis, only segments labeled as normal beats (Normal) or ventricular ectopic beats (VEB) are used for training and evaluation.
(*) This is the grouping into five broad classes recommended by the Association for the Advancement of Medical Instrumentation (AAMI) ([[Ref. 3](https://ieeexplore.ieee.org/document/1306572)]).

(Adapted from [[Ref. 4](https://arxiv.org/abs/1810.04121)])
First, run the cell below to define the data preprocessing class.
The class defines the following member functions:
* `__init__()` (constructor) : initializes variables, the train/test split rule, and the label aggregation rule
* `_load_data()` : reads the signal and annotations
* `_normalize_signal()` : scales the signal according to the `method` option
* `_segment_data()` : cuts the loaded signal and annotations into fixed-width (`window_size`) segments
* `preprocess_dataset()` : builds the training and test datasets
* `_preprocess_dataset_core()` : the main routine called from `preprocess_dataset()`
```
class BaseECGDatasetPreprocessor(object):
    def __init__(
            self,
            dataset_root,
            window_size=720,  # 2 seconds
    ):
        self.dataset_root = dataset_root
        self.download_dir = os.path.join(self.dataset_root, 'download')
        self.window_size = window_size
        self.sample_rate = 360.
        # split list
        self.train_record_list = [
            '101', '106', '108', '109', '112', '115', '116', '118', '119', '122',
            '124', '201', '203', '205', '207', '208', '209', '215', '220', '223', '230'
        ]
        self.test_record_list = [
            '100', '103', '105', '111', '113', '117', '121', '123', '200', '210',
            '212', '213', '214', '219', '221', '222', '228', '231', '232', '233', '234'
        ]
        # annotation
        self.labels = ['N', 'V']
        self.valid_symbols = ['N', 'L', 'R', 'e', 'j', 'V', 'E']
        self.label_map = {
            'N': 'N', 'L': 'N', 'R': 'N', 'e': 'N', 'j': 'N',
            'V': 'V', 'E': 'V'
        }

    def _load_data(
            self,
            base_record,
            channel=0  # [0, 1]
    ):
        record_name = os.path.join(self.download_dir, str(base_record))
        # read dat file
        signals, fields = wfdb.rdsamp(record_name)
        assert fields['fs'] == self.sample_rate
        # read annotation file
        annotation = wfdb.rdann(record_name, 'atr')
        symbols = annotation.symbol
        positions = annotation.sample
        return signals[:, channel], symbols, positions

    def _normalize_signal(
            self,
            signal,
            method='std'
    ):
        if method == 'minmax':
            # Min-Max scaling
            min_val = np.min(signal)
            max_val = np.max(signal)
            return (signal - min_val) / (max_val - min_val)
        elif method == 'std':
            # Zero mean and unit variance
            signal = (signal - np.mean(signal)) / np.std(signal)
            return signal
        else:
            raise ValueError("Invalid method: {}".format(method))

    def _segment_data(
            self,
            signal,
            symbols,
            positions
    ):
        X = []
        y = []
        sig_len = len(signal)
        for i in range(len(symbols)):
            start = positions[i] - self.window_size // 2
            end = positions[i] + self.window_size // 2
            if symbols[i] in self.valid_symbols and start >= 0 and end <= sig_len:
                segment = signal[start:end]
                assert len(segment) == self.window_size, "Invalid length"
                X.append(segment)
                y.append(self.labels.index(self.label_map[symbols[i]]))
        return np.array(X), np.array(y)

    def preprocess_dataset(
            self,
            normalize=True
    ):
        # preprocess training dataset
        self._preprocess_dataset_core(self.train_record_list, "train", normalize)
        # preprocess test dataset
        self._preprocess_dataset_core(self.test_record_list, "test", normalize)

    def _preprocess_dataset_core(
            self,
            record_list,
            mode="train",
            normalize=True
    ):
        Xs, ys = [], []
        save_dir = os.path.join(self.dataset_root, 'preprocessed', mode)
        for i in range(len(record_list)):
            signal, symbols, positions = self._load_data(record_list[i])
            if normalize:
                signal = self._normalize_signal(signal)
            X, y = self._segment_data(signal, symbols, positions)
            Xs.append(X)
            ys.append(y)
        os.makedirs(save_dir, exist_ok=True)
        np.save(os.path.join(save_dir, "X.npy"), np.vstack(Xs))
        np.save(os.path.join(save_dir, "y.npy"), np.concatenate(ys))
```
By specifying the root directory for saving the data (`dataset_root`) and calling `preprocess_dataset()`, the preprocessed data is saved as NumPy arrays in the designated location.
```
BaseECGDatasetPreprocessor(dataset_root).preprocess_dataset()
```
After running it, confirm that the following files have been saved:
* train/X.npy : training signals
* train/y.npy : training labels
* test/X.npy : evaluation signals
* test/y.npy : evaluation labels
```
!tree ./dataset/preprocessed
```
Next, let's load the saved files and inspect their contents.
```
X_train = np.load(os.path.join(dataset_root, 'preprocessed', 'train', 'X.npy'))
y_train = np.load(os.path.join(dataset_root, 'preprocessed', 'train', 'y.npy'))
X_test = np.load(os.path.join(dataset_root, 'preprocessed', 'test', 'X.npy'))
y_test = np.load(os.path.join(dataset_root, 'preprocessed', 'test', 'y.npy'))
```
The number of samples in each dataset is:
* Training : 47,738
* Evaluation : 45,349
Each signal is represented as a 2 (sec) × 360 (Hz) = 720-dimensional vector.
```
print("X_train.shape = ", X_train.shape, " \t y_train.shape = ", y_train.shape)
print("X_test.shape = ", X_test.shape, " \t y_test.shape = ", y_test.shape)
```
Each label is represented by an index:
* 0 : normal beat (Normal)
* 1 : ventricular ectopic beat (VEB)
Let's count the number of samples per label in the training dataset.
```
uniq_train, counts_train = np.unique(y_train, return_counts=True)
print("y_train count each labels: ", dict(zip(uniq_train, counts_train)))
```
Count the samples per label in the evaluation data as well.
```
uniq_test, counts_test = np.unique(y_test, return_counts=True)
print("y_test count each labels: ", dict(zip(uniq_test, counts_test)))
```
In both the training and evaluation data, VEB samples make up less than 10% of the total; the vast majority are normal-beat samples.
Next, let's visualize normal-beat and VEB signal data.
```
%matplotlib inline
import matplotlib.pyplot as plt
```
The figure below shows an example of a normal beat.
You can see the P wave, QRS complex, and T wave appearing regularly.
```
idx_n = np.where(y_train == 0)[0]
plt.plot(X_train[idx_n[0]])
```
The VEB waveform, on the other hand, loses this regularity: the shape of the R-wave peak and the peak-to-peak interval differ from the normal example.
```
idx_s = np.where(y_train == 1)[0]
plt.plot(X_train[idx_s[0]])
```
The goal of this chapter is to build a model that captures the features of ECG signals well and accurately predicts normal/abnormal for new waveform samples.
The next section explains how to build such a model with deep learning.
## Time-series analysis with deep learning
### Training
First, we define a dataset class for loading the preprocessed data from the previous section into Chainer.
```
class ECGDataset(chainer.dataset.DatasetMixin):
    def __init__(
            self,
            path
    ):
        if os.path.isfile(os.path.join(path, 'X.npy')):
            self.X = np.load(os.path.join(path, 'X.npy'))
        else:
            raise FileNotFoundError("{}/X.npy not found.".format(path))
        if os.path.isfile(os.path.join(path, 'y.npy')):
            self.y = np.load(os.path.join(path, 'y.npy'))
        else:
            raise FileNotFoundError("{}/y.npy not found.".format(path))

    def __len__(self):
        return len(self.X)

    def get_example(self, i):
        return self.X[None, i].astype(np.float32), self.y[i]
```
Next, we define the network architecture used for training (and prediction).
Here we use the same architecture as **ResNet34**, a CNN well known from image recognition tasks [[Ref. 5](https://arxiv.org/abs/1512.03385)].
However, since the input signal is a 1-D array, we use 1-D convolutions, as in the previous chapter on genome analysis, instead of the 2-D convolutions commonly used in image analysis.
```
import chainer.functions as F
import chainer.links as L
from chainer import reporter
from chainer import Variable


class BaseBlock(chainer.Chain):
    def __init__(
            self,
            channels,
            stride=1,
            dilate=1
    ):
        self.stride = stride
        super(BaseBlock, self).__init__()
        with self.init_scope():
            self.c1 = L.ConvolutionND(1, None, channels, 3, stride, dilate, dilate=dilate)
            self.c2 = L.ConvolutionND(1, None, channels, 3, 1, dilate, dilate=dilate)
            if stride > 1:
                self.cd = L.ConvolutionND(1, None, channels, 1, stride, 0)
            self.b1 = L.BatchNormalization(channels)
            self.b2 = L.BatchNormalization(channels)

    def __call__(self, x):
        h = F.relu(self.b1(self.c1(x)))
        if self.stride > 1:
            res = self.cd(x)
        else:
            res = x
        h = res + self.b2(self.c2(h))
        return F.relu(h)


class ResBlock(chainer.Chain):
    def __init__(
            self,
            channels,
            n_block,
            dilate=1
    ):
        self.n_block = n_block
        super(ResBlock, self).__init__()
        with self.init_scope():
            self.b0 = BaseBlock(channels, 2, dilate)
            for i in range(1, n_block):
                bx = BaseBlock(channels, 1, dilate)
                setattr(self, 'b{}'.format(str(i)), bx)

    def __call__(self, x):
        h = self.b0(x)
        for i in range(1, self.n_block):
            h = getattr(self, 'b{}'.format(str(i)))(h)
        return h


class ResNet34(chainer.Chain):
    def __init__(self):
        super(ResNet34, self).__init__()
        with self.init_scope():
            self.conv1 = L.ConvolutionND(1, None, 64, 7, 2, 3)
            self.bn1 = L.BatchNormalization(64)
            self.resblock0 = ResBlock(64, 3)
            self.resblock1 = ResBlock(128, 4)
            self.resblock2 = ResBlock(256, 6)
            self.resblock3 = ResBlock(512, 3)
            self.fc = L.Linear(None, 2)

    def __call__(self, x):
        h = F.relu(self.bn1(self.conv1(x)))
        h = F.max_pooling_nd(h, 3, 2)
        for i in range(4):
            h = getattr(self, 'resblock{}'.format(str(i)))(h)
        h = F.average(h, axis=2)
        h = self.fc(h)
        return h


class Classifier(chainer.Chain):
    def __init__(
            self,
            predictor,
            lossfun=F.softmax_cross_entropy
    ):
        super(Classifier, self).__init__()
        with self.init_scope():
            self.predictor = predictor
        self.lossfun = lossfun

    def __call__(self, *args):
        assert len(args) >= 2
        x = args[:-1]
        t = args[-1]
        y = self.predictor(*x)
        # loss
        loss = self.lossfun(y, t)
        with chainer.no_backprop_mode():
            # other metrics
            accuracy = F.accuracy(y, t)
        # reporter
        reporter.report({'loss': loss}, self)
        reporter.report({'accuracy': accuracy}, self)
        return loss

    def predict(self, x):
        with chainer.function.no_backprop_mode(), chainer.using_config('train', False):
            x = Variable(self.xp.asarray(x, dtype=self.xp.float32))
            y = self.predictor(x)
        return y
```
To prepare for training, we define the following helper functions:
- `create_train_dataset()` : wraps the training data in the `ECGDataset` class
- `create_trainer()` : sets up everything needed for training and returns a Trainer object
```
from chainer import optimizers
from chainer.optimizer import WeightDecay
from chainer.iterators import MultiprocessIterator
from chainer import training
from chainer.training import extensions
from chainer.training import triggers
from chainer.backends.cuda import get_device_from_id
def create_train_dataset(root_path):
    train_path = os.path.join(root_path, 'preprocessed', 'train')
    train_dataset = ECGDataset(train_path)
    return train_dataset


def create_trainer(
        batchsize, train_dataset, nb_epoch=1,
        device=0, lossfun=F.softmax_cross_entropy
):
    # setup model
    model = ResNet34()
    train_model = Classifier(model, lossfun=lossfun)
    # use Adam optimizer
    optimizer = optimizers.Adam(alpha=0.001)
    optimizer.setup(train_model)
    optimizer.add_hook(WeightDecay(0.0001))
    # setup iterator
    train_iter = MultiprocessIterator(train_dataset, batchsize)
    # define updater
    updater = training.StandardUpdater(train_iter, optimizer, device=device)
    # setup trainer
    stop_trigger = (nb_epoch, 'epoch')
    trainer = training.trainer.Trainer(updater, stop_trigger)
    logging_attributes = [
        'epoch', 'iteration',
        'main/loss', 'main/accuracy'
    ]
    trainer.extend(
        extensions.LogReport(logging_attributes, trigger=(2000 // batchsize, 'iteration'))
    )
    trainer.extend(
        extensions.PrintReport(logging_attributes)
    )
    trainer.extend(
        extensions.ExponentialShift('alpha', 0.75, optimizer=optimizer),
        trigger=(4000 // batchsize, 'iteration')
    )
    return trainer
```
Now that everything is ready, let's call the functions to create a trainer.
```
train_dataset = create_train_dataset(dataset_root)
trainer = create_trainer(256, train_dataset, nb_epoch=1, device=0)
```
Let's start training. (Training completes in about 1 minute 30 seconds.)
```
%time trainer.run()
```
If training proceeds without problems, main/accuracy should reach around 0.99 (99%).
### Evaluation
To measure classification performance by applying the trained model to the evaluation data, we define the following functions:
- `create_test_dataset()` : loads the evaluation data
- `predict()` : runs inference and returns arrays of true and predicted labels
- `print_confusion_matrix()` : prints a table called the confusion matrix from the predictions
- `print_scores()` : prints evaluation metrics computed from the predictions
```
from chainer import cuda
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
def create_test_dataset(root_path):
    test_path = os.path.join(root_path, 'preprocessed', 'test')
    test_dataset = ECGDataset(test_path)
    return test_dataset


def predict(trainer, test_dataset, batchsize, device=-1):
    model = trainer.updater.get_optimizer('main').target
    ys = []
    ts = []
    for i in range(len(test_dataset) // batchsize + 1):
        if i == len(test_dataset) // batchsize:
            X, t = zip(*test_dataset[i * batchsize: len(test_dataset)])
        else:
            X, t = zip(*test_dataset[i * batchsize:(i + 1) * batchsize])
        X = cuda.to_gpu(np.array(X), device)
        y = model.predict(X)
        y = cuda.to_cpu(y.data.argmax(axis=1))
        ys.append(y)
        ts.append(np.array(t))
    return np.concatenate(ts), np.concatenate(ys)


def print_confusion_matrix(y_true, y_pred):
    labels = sorted(list(set(y_true)))
    target_names = ['Normal', 'VEB']
    cmx = confusion_matrix(y_true, y_pred, labels=labels)
    df_cmx = pd.DataFrame(cmx, index=target_names, columns=target_names)
    plt.figure(figsize=(5, 3))
    sn.heatmap(df_cmx, annot=True, annot_kws={"size": 18}, fmt="d", cmap='Blues')
    plt.show()


def print_scores(y_true, y_pred):
    target_names = ['Normal', 'VEB']
    print(classification_report(y_true, y_pred, target_names=target_names))
    print("accuracy: ", accuracy_score(y_true, y_pred))
```
We prepare the evaluation dataset,
```
test_dataset = create_test_dataset(dataset_root)
```
and run prediction on it. (Prediction completes in about 17 seconds.)
```
%time y_true_test, y_pred_test = predict(trainer, test_dataset, 256, 0)
```
Now let's examine the predictions.
First, we build the **confusion matrix**, a table summarizing the classification results. With true labels along the rows and predicted labels along the columns, each cell counts:
* top left : samples that are actually normal beats, predicted as normal beats
* top right : samples that are actually normal beats, predicted as VEB
* bottom left : samples that are actually VEB, predicted as normal beats
* bottom right : samples that are actually VEB, predicted as VEB
```
print_confusion_matrix(y_true_test, y_pred_test)
```
Next, let's display the evaluation metric scores computed from the predictions.
Pay particular attention to the following scores:
* Precision : of the samples predicted as each class (Normal or VEB), the fraction whose true label matches the prediction
* Recall : of the samples actually belonging to each class (Normal or VEB), the fraction correctly predicted
* F1-score : the harmonic mean of precision and recall
* Accuracy : of all samples (Normal and VEB), the fraction correctly predicted
```
print_scores(y_true_test, y_pred_test)
```
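These scores can also be reproduced by hand from the confusion matrix. A small sketch with scikit-learn, using made-up labels (8 Normal, 2 VEB) rather than the actual model output:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

# hypothetical labels: 0 = Normal, 1 = VEB; one VEB missed, one false alarm
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 0])

cm = confusion_matrix(y_true, y_pred)
tn, fp, fn, tp = cm.ravel()
precision = tp / (tp + fp)  # of predicted VEB, how many are truly VEB
recall = tp / (tp + fn)     # of true VEB, how many were found
f1 = 2 * precision * recall / (precision + recall)

# the hand computation matches sklearn's metrics
assert np.isclose(precision, precision_score(y_true, y_pred))
assert np.isclose(recall, recall_score(y_true, y_pred))
assert np.isclose(f1, f1_score(y_true, y_pred))
print(precision, recall, f1)  # -> 0.5 0.5 0.5
```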
Prediction scores for the numerous normal beats are high, while scores for the scarce VEB class tend to be lower. This pattern is frequently observed with imbalanced data like this dataset, where the class proportions are highly skewed.
The next section introduces several ways of trial and error to improve the prediction model, starting with how to handle this class imbalance problem.
## Toward higher accuracy
In this section we explore ways to improve the accuracy of the classifier built in the previous section by making changes from several angles: the dataset, the objective function, the model architecture, and the preprocessing.
When analyzing data with machine learning, it is often unclear in advance which changes will improve accuracy, so trial and error is required. Trying methods indiscriminately is not a good strategy, though; it is important to choose potentially effective techniques based on the properties of the dataset at hand.
Let's start with the class imbalance problem raised as an issue in the previous section.
### Handling class-imbalanced data
As mentioned above, it is well known that when a classifier is trained on **class-imbalanced data**, predictions tend to be biased toward the majority class, and accuracy on the minority class often suffers. At the same time, in real-world tasks (including this dataset), accurately detecting a few abnormal samples among many normal ones is frequently what matters most. Several strategies exist for training a model with a focus on detecting the minority class:
1. **Sampling**
    - Build a class-balanced dataset by sampling from the imbalanced one.
    - **Undersampling** : reduce the number of majority (normal) samples.
    - **Oversampling** : inflate the number of minority (abnormal) samples.
1. **Reweighting the loss function**
    - Use a small penalty for misclassifying a normal sample as abnormal, and a large penalty for misclassifying an abnormal sample as normal.
    - For example, weight each class by the inverse of its sample frequency.
1. **Changing the objective (loss) function**
    - Introduce an objective that boosts prediction scores for abnormal samples.
1. **Anomaly detection**
    - Assume a data distribution for the normal samples, and treat samples that deviate sufficiently from it as abnormal.
This section walks through examples of 1 (sampling) and 3 (changing the objective function).
#### Sampling
We combine **undersampling** and **oversampling** to remove the imbalance from the dataset.
Here we sample in the following steps:
1. Undersample only the normal-beat samples down to 1/4 (keeping all VEB samples)
    * We use simple random sampling. Because of its randomness, it may discard samples that are important for classification (those near the decision boundary with the VEB samples).
    * Techniques exist to mitigate this problem of random sampling, but we do not use them here.
1. Oversample the VEB samples until they match the undersampled normal-beat count
    * We use a technique called SMOTE (Synthetic Minority Over-sampling TEchnique).
    * The simplest approach, inflating the data at random, easily causes overfitting. SMOTE reduces this effect by generating new data points at random positions between each VEB sample and its neighboring VEB samples.
To perform the sampling, we define a `SampledECGDataset` class, plus a `create_sampled_train_dataset()` function that uses it to build the training dataset object.
```
from imblearn.datasets import make_imbalance
from imblearn.over_sampling import SMOTE
class SampledECGDataset(ECGDataset):
    def __init__(
            self,
            path
    ):
        super(SampledECGDataset, self).__init__(path)
        _, counts = np.unique(self.y, return_counts=True)
        self.X, self.y = make_imbalance(
            self.X, self.y,
            sampling_strategy={0: counts[0] // 4, 1: counts[1]}
        )
        smote = SMOTE(random_state=42)
        self.X, self.y = smote.fit_sample(self.X, self.y)


def create_sampled_train_dataset(root_path):
    train_path = os.path.join(root_path, 'preprocessed', 'train')
    train_dataset = SampledECGDataset(train_path)
    return train_dataset


train_dataset = create_sampled_train_dataset(dataset_root)
```
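The core of SMOTE can be sketched in a few lines: pick a minority sample, pick one of its nearest minority neighbors, and create a synthetic point somewhere on the segment between them. The toy 2-D features and neighbor choice below are illustrative, not the exact `imblearn` implementation:

```python
import numpy as np

rng = np.random.RandomState(42)

def smote_like_sample(X_minority, n_new, k=2):
    """Generate n_new synthetic points by interpolating minority samples."""
    synthetic = []
    for _ in range(n_new):
        i = rng.randint(len(X_minority))
        x = X_minority[i]
        # distances to all minority samples; take one of the k nearest (skip self)
        d = np.linalg.norm(X_minority - x, axis=1)
        neighbors = np.argsort(d)[1:k + 1]
        neighbor = X_minority[rng.choice(neighbors)]
        lam = rng.rand()  # random position on the segment [x, neighbor]
        synthetic.append(x + lam * (neighbor - x))
    return np.array(synthetic)

X_veb = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # toy minority samples
new_points = smote_like_sample(X_veb, n_new=4)
print(new_points.shape)  # -> (4, 2)
```

Because each synthetic point lies between two real minority samples rather than on top of one, the oversampled class covers a region of feature space instead of a handful of duplicated points.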
Now, as before, create a trainer and run training. (Training completes in about 1 minute.)
```
trainer = create_trainer(256, train_dataset, nb_epoch=2, device=0)
%time trainer.run()
```
Once training completes, run prediction on the evaluation data and check the accuracy.
```
%time y_true_test, y_pred_test = predict(trainer, test_dataset, 256, 0)
print_confusion_matrix(y_true_test, y_pred_test)
print_scores(y_true_test, y_pred_test)
```
Compare with the previous predictions to see whether sampling improved the detection accuracy for VEB samples (especially recall).
(Note that, because of the randomness in sampling and the dependence on weight initialization, accuracy is not guaranteed to improve.)
#### Changing the loss function
Next, we consider improving accuracy on the minority abnormal samples by **changing the loss function**. Many loss functions aimed at minority-class accuracy have been proposed; here we use one of them, **focal loss**.
Focal loss was proposed in a research paper on object detection in images [[Ref. 6](https://arxiv.org/abs/1708.02002)]. In one-stage object detectors, only a handful of the many candidate regions actually contain an object, so the task is highly class-imbalanced and training tends to stall. Focal loss was proposed to address this problem and is defined as:
$$
FL(p_t) = - (1 - p_t)^{\gamma}\log(p_t)
$$
Here $p_t$ is the softmax output (a probability). With $\gamma = 0$ it equals the ordinary softmax cross-entropy loss, while with $\gamma > 0$ it reduces the relative loss on clearly classifiable (easy) samples. As a result, training is expected to focus on the samples that are hard to classify.
The figure below plots the loss against the predicted probability of the true class, showing how the relative loss drops as $\gamma$ is varied.

(Figure taken from [[Ref. 6](https://arxiv.org/abs/1708.02002)])
Now let's actually define the focal loss function.
```
from chainer.backends.cuda import get_array_module
def focal_loss(x, t, class_num=2, gamma=0.5, eps=1e-6):
xp = get_array_module(t)
p = F.softmax(x)
p = F.clip(p, x_min=eps, x_max=1-eps)
log_p = F.log_softmax(x)
t_onehot = xp.eye(class_num)[t.ravel()]
loss_sce = -1 * t_onehot * log_p
loss_focal = F.sum(loss_sce * (1. - p) ** gamma, axis=1)
return F.mean(loss_focal)
```
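As a quick sanity check of the definition (plain NumPy here, separate from the Chainer version above): at $\gamma = 0$ focal loss must coincide with softmax cross-entropy, and $\gamma > 0$ must shrink the loss of an easy sample:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def focal_loss_np(logits, t, gamma):
    """Focal loss for a single sample with true class index t."""
    p_t = softmax(logits)[t]
    return -(1 - p_t) ** gamma * np.log(p_t)

logits = np.array([2.0, -1.0])      # an "easy" sample: p_0 is about 0.95
ce = -np.log(softmax(logits)[0])    # ordinary softmax cross-entropy

assert np.isclose(focal_loss_np(logits, 0, gamma=0.0), ce)
assert focal_loss_np(logits, 0, gamma=2.0) < ce   # easy sample is down-weighted
```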
We skip the data sampling from the previous section, keep the same settings as the initial training (§8.5), and change only the loss function to focal loss.
```
train_dataset = create_train_dataset(dataset_root)
trainer = create_trainer(256, train_dataset, nb_epoch=1, device=0, lossfun=focal_loss)
```
Now let's start training. (It finishes in about 1.5 minutes.)
```
%time trainer.run()
```
Once training finishes, check the predictions on the evaluation data.
```
%time y_true_test, y_pred_test = predict(trainer, test_dataset, 256, 0)
print_confusion_matrix(y_true_test, y_pred_test)
print_scores(y_true_test, y_pred_test)
```
Compare these predictions with those of the initial model.
(If you have time, also check how varying $\gamma$ affects the predictions.)
### Changing the network architecture
Next, we consider **changing the network architecture** used for training.
Here we extend the ResNet34 architecture used at the start as follows:
1. Replace the 1D convolutions with **1D dilated convolutions**
- Dilated convolutions are expected to extract features over a wider range while keeping the growth in the number of parameters small (the same motivation as in the genome-analysis section).
- For tasks where wide-range features are not important, this may not improve accuracy (and in some cases may even reduce it).
1. Add a fully connected layer before the final layer and apply **Dropout**
- We expect dropout to improve the model's generalization. However, several prior studies (e.g. [[Ref. 7](https://arxiv.org/abs/1506.02158v6)]) report that simply applying dropout right after convolutional layers does not improve generalization, so here we apply it to the fully connected layer instead.
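To see why dilation widens the view without adding parameters, the receptive field of a stack of stride-1 convolutions can be computed directly (illustrative layer settings, not the exact `DilatedResNet34` configuration):

```python
def receptive_field(layers):
    """layers: list of (kernel_size, dilation) pairs, all stride 1."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d   # each layer widens the field by (kernel-1)*dilation
    return rf

plain   = [(3, 1)] * 4                       # four ordinary 3-tap convolutions
dilated = [(3, 1), (3, 1), (3, 2), (3, 4)]   # same parameter count, growing dilation

print(receptive_field(plain))    # -> 9
print(receptive_field(dilated))  # -> 17
```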
Now let's define the network with these extensions. (The ResBlock class was already defined when we built the initial model.)
```
class DilatedResNet34(chainer.Chain):
def __init__(self):
super(DilatedResNet34, self).__init__()
with self.init_scope():
self.conv1 = L.ConvolutionND(1, None, 64, 7, 2, 3)
self.bn1 = L.BatchNormalization(64)
self.resblock0 = ResBlock(64, 3, 1)
self.resblock1 = ResBlock(128, 4, 1)
self.resblock2 = ResBlock(256, 6, 2)
self.resblock3 = ResBlock(512, 3, 4)
self.fc1 = L.Linear(None, 512)
self.fc2 = L.Linear(None, 2)
def __call__(self, x):
h = F.relu(self.bn1(self.conv1(x)))
h = F.max_pooling_nd(h, 3, 2)
for i in range(4):
h = getattr(self, 'resblock{}'.format(str(i)))(h)
h = F.average(h, axis=2)
h = F.dropout(self.fc1(h), 0.5)
h = self.fc2(h)
return h
```
Keeping the same settings as the initial training (§8.5), we change the network architecture to `DilatedResNet34` and train.
```
def create_trainer(
batchsize, train_dataset, nb_epoch=1,
device=0, lossfun=F.softmax_cross_entropy
):
# setup model
model = DilatedResNet34()
train_model = Classifier(model, lossfun=lossfun)
# use Adam optimizer
optimizer = optimizers.Adam(alpha=0.001)
optimizer.setup(train_model)
optimizer.add_hook(WeightDecay(0.0001))
# setup iterator
train_iter = MultiprocessIterator(train_dataset, batchsize)
# define updater
updater = training.StandardUpdater(train_iter, optimizer, device=device)
# setup trainer
stop_trigger = (nb_epoch, 'epoch')
trainer = training.trainer.Trainer(updater, stop_trigger)
logging_attributes = [
'epoch', 'iteration',
'main/loss', 'main/accuracy'
]
trainer.extend(
extensions.LogReport(logging_attributes, trigger=(2000 // batchsize, 'iteration'))
)
trainer.extend(
extensions.PrintReport(logging_attributes)
)
trainer.extend(
extensions.ExponentialShift('alpha', 0.75, optimizer=optimizer),
trigger=(4000 // batchsize, 'iteration')
)
return trainer
train_dataset = create_train_dataset(dataset_root)
test_dataset = create_test_dataset(dataset_root)
trainer = create_trainer(256, train_dataset, nb_epoch=1, device=0)
```
Now start the training as before. (It finishes in about 1.5 minutes.)
```
%time trainer.run()
```
Once training finishes, run predictions on the evaluation data and check the accuracy.
```
%time y_true_test, y_pred_test = predict(trainer, test_dataset, 256, 0)
print_confusion_matrix(y_true_test, y_pred_test)
print_scores(y_true_test, y_pred_test)
```
Compare these predictions with those of the initial model.
### Evaluating the effect of denoising
Finally, we examine **removing the noise** contained in the ECG.
ECG waveforms can contain the following kinds of external noise [[Ref. 8](http://www.iosrjournals.org/iosr-jece/papers/ICETEM/Vol.%201%20Issue%201/ECE%2006-40-44.pdf)]:
* High frequency
* **Electromyogram noise**
- Electrical activity of the muscles can leak into the ECG through body movement.
* **Power line interference**
- Alternating current can couple into the ECG through electrostatic induction.
- Current flowing through power wiring also generates magnetic field lines, and alternating current can couple in through electromagnetic induction.
* **Additive white Gaussian noise**
- White noise enters from a variety of sources in the external environment.
* Low frequency
* **Baseline wandering**
- Poor electrode contact, sweating, body movement, and so on can make the baseline drift slowly.
When analyzing an ECG, it is common to preprocess it to remove this kind of noise so that abnormal waveforms such as tachycardia and bradycardia can be identified accurately.
There are several ways to remove noise; the simplest is to apply a linear filter. Here we try denoising with a Butterworth filter, one kind of linear filter.
We define a `DenoiseECGDatasetPreprocessor` class that adds signal-denoising functionality to `BaseECGDatasetPreprocessor`.
```
from scipy.signal import butter, lfilter
class DenoiseECGDatasetPreprocessor(BaseECGDatasetPreprocessor):
def __init__(
self,
dataset_root='./',
window_size=720
):
super(DenoiseECGDatasetPreprocessor, self).__init__(
dataset_root, window_size)
def _denoise_signal(
self,
signal,
btype='low',
cutoff_low=0.2,
cutoff_high=25.,
order=5
):
nyquist = self.sample_rate / 2.
if btype == 'band':
cut_off = (cutoff_low / nyquist, cutoff_high / nyquist)
elif btype == 'high':
cut_off = cutoff_low / nyquist
elif btype == 'low':
cut_off = cutoff_high / nyquist
else:
return signal
b, a = butter(order, cut_off, analog=False, btype=btype)
return lfilter(b, a, signal)
def _segment_data(
self,
signal,
symbols,
positions
):
X = []
y = []
sig_len = len(signal)
for i in range(len(symbols)):
start = positions[i] - self.window_size // 2
end = positions[i] + self.window_size // 2
if symbols[i] in self.valid_symbols and start >= 0 and end <= sig_len:
segment = signal[start:end]
assert len(segment) == self.window_size, "Invalid length"
X.append(segment)
y.append(self.labels.index(self.label_map[symbols[i]]))
return np.array(X), np.array(y)
def prepare_dataset(
self,
denoise=False,
normalize=True
):
if not os.path.isdir(self.download_dir):
self.download_data()
# prepare training dataset
self._prepare_dataset_core(self.train_record_list, "train", denoise, normalize)
# prepare test dataset
self._prepare_dataset_core(self.test_record_list, "test", denoise, normalize)
def _prepare_dataset_core(
self,
record_list,
mode="train",
denoise=False,
normalize=True
):
Xs, ys = [], []
save_dir = os.path.join(self.dataset_root, 'preprocessed', mode)
for i in range(len(record_list)):
signal, symbols, positions = self._load_data(record_list[i])
if denoise:
signal = self._denoise_signal(signal)
if normalize:
signal = self._normalize_signal(signal)
X, y = self._segment_data(signal, symbols, positions)
Xs.append(X)
ys.append(y)
os.makedirs(save_dir, exist_ok=True)
np.save(os.path.join(save_dir, "X.npy"), np.vstack(Xs))
np.save(os.path.join(save_dir, "y.npy"), np.concatenate(ys))
```
Applying a linear filter may make it easier for the model to pick up the patterns of abnormal beats as features. Note, however, that information important for detecting abnormal beats may also be removed.
Linear filters fall into a few broad categories according to their frequency response (which frequency bands they block). For example:
* **Low-pass filter**: passes only the low-frequency components (blocks high frequencies)
* **High-pass filter**: passes only the high-frequency components (blocks low frequencies)
* **Band-pass filter**: passes only a specific band (blocks both low and high frequencies)

(Figure taken from [[Ref. 9](https://en.wikipedia.org/wiki/Filter_%28signal_processing%29)])
In mitdb, frequencies below 0.1 Hz and above 100 Hz have already been removed with a band-pass filter, so here we additionally remove high-frequency noise with a 25 Hz low-pass Butterworth filter.
Now let's run the preprocessing with the denoising option enabled.
```
DenoiseECGDatasetPreprocessor(dataset_root).prepare_dataset(denoise=True)
```
Let's visualize the waveform after removing the high-frequency noise.
```
X_train_d = np.load(os.path.join(dataset_root, 'preprocessed', 'train', 'X.npy'))
plt.subplots(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(X_train[idx_n[0]])
plt.subplot(1, 2, 2)
plt.plot(X_train_d[idx_n[0]])
plt.show()
```
The left plot shows the waveform before filtering, and the right plot shows it after filtering.
You should be able to see that the fine oscillations have been removed.
As before, let's train on the denoised data. (Training finishes in about 1.5 minutes.)
```
train_dataset = create_train_dataset(dataset_root)
test_dataset = create_test_dataset(dataset_root)
trainer = create_trainer(256, train_dataset, nb_epoch=1, device=0)
%time trainer.run()
```
Once training finishes, run predictions on the evaluation data and check the accuracy.
```
%time y_true_test, y_pred_test = predict(trainer, test_dataset, 256, 0)
print_confusion_matrix(y_true_test, y_pred_test)
print_scores(y_true_test, y_pred_test)
```
Check how removing the high-frequency noise changed the prediction accuracy.
## Conclusion
In this chapter we worked on arrhythmia detection using a public ECG dataset.
The main points we wanted to convey in this lecture are:
1. The minimum background knowledge needed to analyze electrocardiograms
1. Basic preprocessing steps for analyzing monitoring data
1. Building a classifier based on a CNN model
1. Adapting the training procedure and preprocessing to the characteristics of the dataset
We tried a variety of techniques to improve accuracy, but in real-world tasks it is rarely obvious in advance which of them will help. You therefore have to search, through trial and error, for an approach that fits the problem at hand.
Possible further directions include, for example:
* Adding information
* Feed the lead $V_1$ signal as an additional input alongside the lead II signal (done in e.g. [[Ref. 10](https://www.kdd.org/kdd2018/files/deep-learning-day/DLDay18_paper_16.pdf)]).
* Preprocessing tweaks
* Changing the segment length
* Use longer segments as input to extract long-term waveform information ([[Ref. 4](https://arxiv.org/abs/1810.04121)] analyzes 10-second segments).
* The added input information may, conversely, make training harder.
* Resampling
* Lower the sampling rate to extract long-term waveform information ([[Ref. 4](https://arxiv.org/abs/1810.04121)] downsamples to 180 Hz).
* The coarser waveform may affect training.
* Without suitable preprocessing, a distortion called aliasing appears.
* (Shrinking the input before feeding it to a model is common practice in fields such as image analysis.)
* Adding labels
* Add SVEB (supraventricular ectopic beat) and others in addition to Normal and VEB.
* Changing how labels are assigned
* For example, if a segment contains any non-normal peak label, assign that label preferentially.
* Changing the model
* To extract long-term features, attach an RNN-based structure (such as an LSTM) after the CNN (done in e.g. [[Ref. 4](https://arxiv.org/abs/1810.04121)]).
If you have the time, please give these a try.
Recently, there have also been several studies published on large-scale, independently collected monitoring data:
* In a joint study between Cardiogram and the University of California, DeepHeart collects heart-rate data from activity trackers and uses deep learning to predict prediabetes [[Ref. 11](https://arxiv.org/abs/1802.02511)].
* Andrew Ng's lab at Stanford built a model that classifies 14 waveform classes from independently collected ECG records and compared its performance against physicians [[Ref. 12](https://arxiv.org/abs/1707.01836)].
As advances in devices make it ever easier to collect detailed measurements, this line of research is expected to become increasingly active.
This concludes the chapter on time-series analysis of monitoring data. Well done!
## 参考文献
1. **Electrocardiography** Wikipedia: The Free Encyclopedia. Wikimedia Foundation, Inc. 22 July 2004. Web. 10 Aug. 2004, [[Link](https://en.wikipedia.org/wiki/Electrocardiography)]
1. **心電図健診判定マニュアル** (ECG Health Checkup Assessment Manual), Japan Society of Ningen Dock, April 2014, [[Link](https://www.ningen-dock.jp/wp/wp-content/uploads/2013/09/d4bb55fcf01494e251d315b76738ab40.pdf)]
1. **Automatic classification of heartbeats using ECG morphology and heartbeat interval features**, Phillip de Chazal et al., June 2004, [[Link](https://ieeexplore.ieee.org/document/1306572)]
1. **Inter-Patient ECG Classification with Convolutional and Recurrent Neural Networks**, Li Guo et al., Sep 2018, [[Link](https://arxiv.org/abs/1810.04121)]
1. **Deep Residual Learning for Image Recognition**, Kaiming He et al., Dec 2015, [[Link](https://arxiv.org/abs/1512.03385)]
1. **Focal Loss for Dense Object Detection**, Tsung-Yi Lin et al., Aug 2017, [[Link](https://arxiv.org/abs/1708.02002)]
1. **Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference**, Yarin Gal et al., Jun 2015, [[Link](https://arxiv.org/abs/1506.02158v6)]
1. **Noise Analysis and Different Denoising Techniques of ECG Signal - A Survey**, Aswathy Velayudhan et al., ICETEM2016, [[Link](http://www.iosrjournals.org/iosr-jece/papers/ICETEM/Vol.%201%20Issue%201/ECE%2006-40-44.pdf)]
1. **Filter (signal processing)**, Wikipedia: The Free Encyclopedia. Wikimedia Foundation, Inc. 22 July 2004. Web. 10 Aug. 2004, [[Link](https://en.wikipedia.org/wiki/Filter_%28signal_processing%29)]
1. **Arrhythmia Detection from 2-lead ECG using Convolutional Denoising Autoencoders**, Keiichi Ochiai et al., KDD2018, [[Link](https://www.kdd.org/kdd2018/files/deep-learning-day/DLDay18_paper_16.pdf)]
1. **DeepHeart: Semi-Supervised Sequence Learning for Cardiovascular Risk Prediction**, Brandon Ballinger et al., Feb 2018, [[Link](https://arxiv.org/abs/1802.02511)]
1. **Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks**, Pranav Rajpurkar et al., Jul 2017, [[Link](https://arxiv.org/abs/1707.01836)]
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
from datetime import timedelta
from pandas.plotting import register_matplotlib_converters
from statsmodels.tsa.stattools import acf, pacf
from statsmodels.tsa.statespace.sarimax import SARIMAX
register_matplotlib_converters()
from time import time
```
# Catfish Sales Data
```
def parser(s):
return datetime.strptime(s, '%Y-%m-%d')
#read data
catfish_sales = pd.read_csv('catfish.csv', parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
#infer the frequency of the data
catfish_sales = catfish_sales.asfreq(pd.infer_freq(catfish_sales.index))
start_date = datetime(1996,1,1)
end_date = datetime(2000,1,1)
lim_catfish_sales = catfish_sales[start_date:end_date]
```
```
#Inject an artificial anomaly at December 1, 1998
lim_catfish_sales[datetime(1998,12,1)] = 10000
plt.figure(figsize=(10,4))
plt.plot(lim_catfish_sales)
plt.title('Catfish Sales in 1000s of Pounds', fontsize=20)
plt.ylabel('Sales', fontsize=16)
for year in range(start_date.year,end_date.year):
plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.2)
```
## Remove the trend
```
first_diff = lim_catfish_sales.diff()[1:]
plt.figure(figsize=(10,4))
plt.plot(first_diff)
plt.title('Catfish Sales in 1000s of Pounds', fontsize=20)
plt.ylabel('Sales', fontsize=16)
for year in range(start_date.year,end_date.year):
plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.2)
plt.axhline(0, color='k', linestyle='--', alpha=0.2)
```
# Get training and testing sets
```
train_end = datetime(1999,7,1)
test_end = datetime(2000,1,1)
test_data = lim_catfish_sales[train_end + timedelta(days=1):test_end]
```
# Make Predictions
```
my_order = (0,1,0)
my_seasonal_order = (1, 0, 1, 12)
rolling_predictions = test_data.copy()
for train_end in test_data.index:
train_data = lim_catfish_sales[:train_end-timedelta(days=1)]
model = SARIMAX(train_data, order=my_order, seasonal_order=my_seasonal_order)
model_fit = model.fit()
pred = model_fit.forecast()
rolling_predictions[train_end] = pred
rolling_residuals = test_data - rolling_predictions
plt.figure(figsize=(10,4))
plt.plot(rolling_residuals)
plt.axhline(0, linestyle='--', color='k')
plt.title('Rolling Forecast Residuals from SARIMA Model', fontsize=20)
plt.ylabel('Error', fontsize=16)
plt.figure(figsize=(10,4))
plt.plot(lim_catfish_sales)
plt.plot(rolling_predictions)
plt.legend(('Data', 'Predictions'), fontsize=16)
plt.title('Catfish Sales in 1000s of Pounds', fontsize=20)
plt.ylabel('Production', fontsize=16)
for year in range(start_date.year,end_date.year):
plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.2)
print('Mean Absolute Percent Error:', round(np.mean(abs(rolling_residuals/test_data)),4))
print('Root Mean Squared Error:', np.sqrt(np.mean(rolling_residuals**2)))
```
# Detecting the Anomaly
## Attempt 1: Deviation Method
```
plt.figure(figsize=(10,4))
plt.plot(lim_catfish_sales)
plt.title('Catfish Sales in 1000s of Pounds', fontsize=20)
plt.ylabel('Sales', fontsize=16)
for year in range(start_date.year,end_date.year):
plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.2)
rolling_deviations = pd.Series(dtype=float, index = lim_catfish_sales.index)
for date in rolling_deviations.index:
#get the window ending at this data point
window = lim_catfish_sales.loc[:date]
#get the deviation within this window
rolling_deviations.loc[date] = window.std()
#get the difference in deviation between one time point and the next
diff_rolling_deviations = rolling_deviations.diff()
diff_rolling_deviations = diff_rolling_deviations.dropna()
plt.figure(figsize=(10,4))
plt.plot(diff_rolling_deviations)
plt.title('Deviation Differences', fontsize=20)
plt.ylabel('Sales', fontsize=16)
for year in range(start_date.year,end_date.year):
plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.2)
```
## Attempt 2: Seasonal Method
```
month_deviations = lim_catfish_sales.groupby(lambda d: d.month).std()
plt.figure(figsize=(10,4))
plt.plot(month_deviations)
plt.title('Deviation by Month', fontsize=20)
plt.ylabel('Sales', fontsize=16)
```
## So, the anomaly occurs in a December
```
december_data = lim_catfish_sales[lim_catfish_sales.index.month == 12]
december_data
min_dev = 9999999
curr_anomaly = None
for date in december_data.index:
other_data = december_data[december_data.index != date]
curr_dev = other_data.std()
if curr_dev < min_dev:
min_dev = curr_dev
curr_anomaly = date
curr_anomaly
```
# What to do about the anomaly?
## Simple Idea: use mean of other months
```
adjusted_data = lim_catfish_sales.copy()
adjusted_data.loc[curr_anomaly] = december_data[(december_data.index != curr_anomaly) & (december_data.index < test_data.index[0])].mean()
plt.figure(figsize=(10,4))
plt.plot(lim_catfish_sales, color='firebrick', alpha=0.4)
plt.plot(adjusted_data)
plt.title('Catfish Sales in 1000s of Pounds', fontsize=20)
plt.ylabel('Sales', fontsize=16)
for year in range(start_date.year,end_date.year):
plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.2)
plt.axvline(curr_anomaly, color='k', alpha=0.7)
```
# Resulting Predictions
```
train_end = datetime(1999,7,1)
test_end = datetime(2000,1,1)
test_data = adjusted_data[train_end + timedelta(days=1):test_end]
rolling_predictions = test_data.copy()
for train_end in test_data.index:
train_data = adjusted_data[:train_end-timedelta(days=1)]
model = SARIMAX(train_data, order=my_order, seasonal_order=my_seasonal_order)
model_fit = model.fit()
pred = model_fit.forecast()
rolling_predictions[train_end] = pred
rolling_residuals = test_data - rolling_predictions
plt.figure(figsize=(10,4))
plt.plot(rolling_residuals)
plt.axhline(0, linestyle='--', color='k')
plt.title('Rolling Forecast Residuals from SARIMA Model', fontsize=20)
plt.ylabel('Error', fontsize=16)
plt.figure(figsize=(10,4))
plt.plot(lim_catfish_sales)
plt.plot(rolling_predictions)
plt.legend(('Data', 'Predictions'), fontsize=16)
plt.title('Catfish Sales in 1000s of Pounds', fontsize=20)
plt.ylabel('Production', fontsize=16)
for year in range(start_date.year,end_date.year):
plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.2)
print('Mean Absolute Percent Error:', round(np.mean(abs(rolling_residuals/test_data)),4))
print('Root Mean Squared Error:', np.sqrt(np.mean(rolling_residuals**2)))
```

# Data analysis and machine learning in Python
Notebook author: Jakub Nowacki.
## Pandas
[Pandas](http://pandas.pydata.org/) is a library that provides convenient data structures and tools for their analysis. Inspired by the tutorial [10 minutes to Pandas](http://pandas.pydata.org/pandas-docs/stable/10min.html). See also the [documentation](http://pandas.pydata.org/pandas-docs/stable/index.html) and the [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)
We import pandas using the name `pandas`, preferably as a whole package. The alias `pd` is commonly used.
```
import pandas as pd
import numpy as np # Not required, we will only use a few elements
```
The basic data structures in Pandas are Series and DataFrame (a table object); see the [documentation](http://pandas.pydata.org/pandas-docs/stable/dsintro.html) for more information.
## Series
A Series is a one-dimensional data structure similar to `ndarray`. We create a series with the `Series` constructor; many kinds of collections can be passed as the data:
```
l = [1,3,5,np.nan,6,8]
s = pd.Series(l)
s
```
A Series has an index, which is created automatically if none is passed, or it can be created explicitly:
```
daty = pd.date_range('20170101', periods=6)
daty
s = pd.Series(l, index=daty)
s
```
That said, it can be any sequence that is at least as long as the data:
```
s = pd.Series(np.random.randn(5), index=list('abcde'))
s
```
We can retrieve data from a Series as in NumPy:
```
print('s[1] = \n{}'.format(s[1]))
print('s[2:] = \n{}'.format(s[2:]))
print('s[1:-2] = \n{}'.format(s[1:-2]))
```
We can also do it as with a dictionary (or better), if the index allows it:
```
print('s["b"] = \n{}'.format(s["b"]))
print('s["c":] = \n{}'.format(s["c":]))
print('s["b":"c"] = \n{}'.format(s["b":"c"]))
```
We can also perform operations on a series:
```
print('s*5 = \n{}'.format(s*5))
print('s**3 = \n{}'.format(s**3))
print('s*s = \n{}'.format(s*s))
print('s+s = \n{}'.format(s+s))
```
## DataFrame
A DataFrame is a two-dimensional object that behaves like a table. Each column has a name and is a series of data (Series). All columns share a common index. Operations can be performed on whole columns or rows. We create a DataFrame with the `DataFrame` constructor:
```
df = pd.DataFrame(np.random.randn(6,4), index=daty, columns=list('ABCD'))
df
```
We can also pass a dictionary:
```
df2 = pd.DataFrame({ 'A' : 1.,
'B' : pd.Timestamp('20130102'),
'C' : pd.Series(1,index=list(range(4)),dtype='float32'),
'D' : np.array([3] * 4,dtype='int32'),
'E' : pd.Categorical(["test","train","test","train"]),
'F' : 'foo' })
df2.E
```
Or a collection of dictionaries:
```
df3 = pd.DataFrame([{'A': 1, 'B': 2}, {'C': 3}])
df3.A
```
There are also many other methods of creating and reading a DataFrame, described in the [documentation](http://pandas.pydata.org/pandas-docs/stable/dsintro.html).
Data can be retrieved as from a Series and other Python collections:
```
print("df['A'] = \n{}".format(df['A'])) # Column
print("df[1:3] = \n{}".format(df[1:3]))
```
However, using the optimized Pandas accessors is recommended:
```
df3[['A', 'B']]
print("df.loc[:,'A']) = \n{}".format(df.loc[:,'A']))
print("df.loc[daty[0],'A'] = \n{}".format(df.loc[daty[0],'A']))
print("df.at[daty[0],'A'] = \n{}".format(df.at[daty[0],'A'])) # Fetches a scalar faster
print("df.iloc[:,0]] = \n{}".format(df.iloc[:,0]))
print("df.iloc[0,0] = \n{}".format(df.iloc[0,0]))
print("df.iat[0,0] = \n{}".format(df.iat[0,0])) # Fetches a scalar faster
```
Boolean expressions can also be used to filter the results:
```
df[df.B > 0.5]
```
There is also access to individual attributes such as:
```
print('Index:\n{}'.format(df.index))
print('Columns:\n{}'.format(df.columns))
print('Head:\n{}'.format(df.head(2)))
print('Tail:\n{}'.format(df.tail(3)))
```
Data can also be sorted by the index:
```
df.sort_index(ascending=False)
```
By columns:
```
df.sort_index(axis=1, ascending=False)
```
Or by values:
```
df.sort_values(['B', 'C'])
```
The table can also be transposed:
```
df.T
```
We add a new column by assignment:
```
df3['Z'] = ['aa', 'bb']
df3
```
Changing a single value can also be done by assignment; we then use the locator methods, e.g.:
```
df3.at[0, 'C'] = 33
df3
df3.at[2,:] = np.nan
df3
```
Pandas also has methods for dealing with missing data:
```
df3.dropna(how='any')
df3.dropna(how='all')
df3.fillna(-100)
```
Statistical functions are also available, e.g.:
```
df.describe()
df.mean()
```
In addition, you can use operations known from databases, such as grouping or joins:
```
df2.groupby('E').size()
df2.groupby('E').mean()
df2.join(df3, how='left', rsuffix='_3')
df2.merge(df3)
df2.merge(df3, how='outer')
# Equivalent to:
# df2.join(df3, how='left', rsuffix='_3')
df2.merge(df3, right_index=True, left_index=True, how='left', suffixes=('', '_3'))
df2.append(df3)
df2.append(df3, ignore_index=True)
pd.concat([df2, df3])
pd.concat([df2, df3], ignore_index=True)
pd.concat([df2, df3], join='inner')
```
## Exercise
Create a DataFrame `samochody` (cars) with a random integer column `przebieg` (mileage) in the range [0, 200 000] and a column `spalanie` (fuel consumption) in the range [2, 20].
* add a column `marka` (brand)
* if the car's fuel consumption is in [0, 5], the brand is VW
* if the car's fuel consumption is in [6, 10], the brand is Ford
* if the car's fuel consumption is 11 or more, the brand is UAZ
* add a column `pochodzenie` (origin):
* if the mileage is below 100 km, the origin is `nowy` (new)
* if the mileage is above 100 km, the origin is `uzywany` (used)
* if the mileage is above 100 000 km, the origin is `z niemiec` (from Germany)
* analyze the data statistically
* ★ group the data by `marka` and `pochodzenie`:
* check the group sizes
* perform a statistical analysis
```
n = 50
# np.random.randn # normal distribution
samochody = pd.DataFrame({
'przebieg': np.random.randint(0, 200_000, n),
'spalanie': 2 + 18*np.random.rand(n),
})
samochody.head()
#samochody.describe()
samochody.loc[samochody.spalanie < 5, 'marka'] = 'VW'
samochody['marka'] = pd.cut(samochody.spalanie,
bins=[0, 5, 10, 100],
labels=['VW', 'Ford', 'UAZ'])
samochody.head()
samochody['pochodzenie'] = pd.cut(samochody.przebieg,
bins=[0, 100, 100_000, np.inf],
labels=['nowy', 'uzywany', 'z niemiec'])
samochody.groupby(['marka', 'pochodzenie']).describe().T
```
# Analysis of the collected data
Using IPython to analyze and display the data collected during production. The data analyzed are from August 4, 2015.
```
#Import the libraries used
import numpy as np
import pandas as pd
import seaborn as sns
#Show the version of each library used
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
#Open the csv file with the sample data
datos = pd.read_csv('861524.CSV')
%pylab inline
#Show a summary of the collected data
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
datos_filtrados.describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
#Store in a list the file columns we will work with
columns = ['time','Diametro X', 'Diametro Y', 'RPM TRAC']
#Show the information collected during the test in several plots
datos_filtrados[columns].plot(subplots=True, figsize=(20,20))
```
We plot both diameters in the same chart
```
datos.loc[:, "Diametro X":"Diametro Y"].plot(figsize=(16,3),ylim=(1.4,2)).hlines([1.85,1.65],0,3500,colors='r')
rolling_mean = datos['Diametro X'].rolling(50).mean()
rolling_std = datos['Diametro X'].rolling(50).std()
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
datos['Diametro X'].plot(figsize=(12,6), alpha=0.6, ylim=(1,2))
datos.loc[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
```
We plot the rolling mean of the samples
```
datos[columns].rolling(50).mean().plot(subplots=True, figsize=(12,12))
```
Comparison of Diametro X against Diametro Y to inspect the filament's ratio
```
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
```
#Data filtering
Samples with $d_x < 0.9$ or $d_y < 0.9$ are assumed to be sensor errors, so we filter them out of the collected samples.
```
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
```
##X/Y plot
```
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
```
#Analyzing the ratio data
```
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = ratio.rolling(50).mean()
rolling_std = ratio.rolling(50).std()
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
```
#Quality limits
We count the number of times the quality limits are exceeded.
$Th^+ = 1.85$ and $Th^- = 1.65$
```
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
```
# Project III - Scientific Computing II
### Solution of the Planar Bratu Problem
> Author: Gil Miranda<br>
> Contact: gil.neto@ufrj.br<br>
> Repo: [@mirandagil](https://github.com/mirandagil)<br>
## Dependencies and external libraries
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
```
---
## The analytical solution
Here we implement the functions that return the analytical solution of the problem
```
def analytical_bratu(x, l):
b_1 = 0
b_2 = 16
if l >= 1.5:
b_2 = 8
elif l > 1:
b_2 = 10
z = optimize.fixed_point(solve_z, [b_1, b_2], args=(l,0))
if z[0] != z[1]:
tu_1 = np.cosh((x-1/2)*z[0]/2)/np.cosh(z[0]/4)
bra_1 = -2*np.log(tu_1)
tu_2 = np.cosh((x-1/2)*z[1]/2)/np.cosh(z[1]/4)
bra_2 = -2*np.log(tu_2)
bratu = [bra_1, bra_2]
else:
tu = np.cosh((x-1/2)*z[0]/2)/np.cosh(z[0]/4)
bra = -2*np.log(tu)
bratu = bra
return bratu
def solve_z(z, l, l1 = 0):
return np.sqrt(2*l)*np.cosh(z/4)
def lambda_c(l):
a = optimize.fixed_point(alpha, 5)
return 8*(a**2 -1)
def alpha(x):
return 1/np.tanh(x)
```
### Plotting the solution
Let's plot the solution for $\lambda = 2.5$
```
ls = np.linspace(0, lambda_c(0))
xs = np.linspace(0,1)
sol = [analytical_bratu(xs, l) for l in ls]
phi = analytical_bratu(xs, 2.5)
ys_0 = []
ys_1 = []
## plotting time
plt.plot(xs, phi[0], label = "1st root")
plt.plot(xs, phi[1], label = "2nd root")
plt.grid(alpha = 0.5)
plt.legend()
plt.title('Bratu Problem solution $\lambda = 2.5$')
plt.show()
```
### Visualizing the bifurcation of the solutions for $\lambda < \lambda_c$
As seen in the paper, there is a double root $z_1, z_2$ for $\lambda < \lambda_c$ and a single root for $\lambda = \lambda_c$; plotting the largest value of each solution as a function of $\lambda$ gives the following plot
```
for y in sol:
if type(y) is list:
ys_0.append(max(y[0]))
ys_1.append(max(y[1]))
else:
ys_0.append(max(y))
ys_1.append(max(y))
plt.plot(ls, ys_0, label = 'Solution for $z_1$')
plt.plot(ls[1:], ys_1[1:], label = 'Solution for $z_2$')
plt.title('Solution vs $\lambda$')
plt.xlabel('$\lambda$')
plt.ylabel('max(solution)')
plt.grid(alpha = 0.3)
plt.legend()
plt.show()
ls
```
Now we can compare the analytical solution with the numerical one and check the method's error.
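Before moving on, the closed-form solution can be sanity-checked against the ODE $u'' + \lambda e^u = 0$ with a finite-difference second derivative (a self-contained sketch that repeats the fixed-point solve from above):

```python
import numpy as np
from scipy import optimize

lam = 2.5
# First root of theta = sqrt(2*lam) * cosh(theta/4)
theta = optimize.fixed_point(lambda z: np.sqrt(2 * lam) * np.cosh(z / 4), 4.0)

def u(x):
    return -2 * np.log(np.cosh((x - 0.5) * theta / 2) / np.cosh(theta / 4))

h = 1e-3
x = np.linspace(0.1, 0.9, 9)
u_xx = (u(x + h) - 2 * u(x) + u(x - h)) / h**2   # central second difference
residual = u_xx + lam * np.exp(u(x))
print(np.max(np.abs(residual)))   # small: only the O(h^2) truncation error remains
```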
---
## Numerical solution #1
```
t0 = 0
tf = 1
n = 20
h = 1/(n+1)
ts = [t0]
us = np.random.rand(21)
u0 = 0
uf = 0
l = 2.5
for i in range(n+1):
ts.append(ts[-1] + h)
def F(u):
F = []
for j in range(n+1):
if j == 0:
fi = u0
elif j == n:
fi = uf
else:
fi = (u[j+1] - 2*u[j] + u[j-1])/h**2 + l * np.e**(u[j])
F.append(fi)
return F
### Prepare the x grid
N = 5
h = 1/N
xs = np.arange(0,1+h,h)
eps_array = [10,1,0.1,0.01]
eps = eps_array[2]
def bvp_solve(n):
h = 1/(n+1)
u0 = ui = 0
u1 = 1
phiH = 2*np.log(np.cosh(h))
A = 1/phiH
B = -2/phiH
C = 1/phiH
M_A = A*np.eye(n,k = 1)
M_B = B*np.eye(n)
M_C = C*np.eye(n, k = -1)
M = M_A + M_B + M_C
# M = np.array([
# [B, A, 0, 0],
# [C, B, A, 0],
# [0, C, B, A],
# [0, 0, C, B]
# ])
b = np.zeros(n)
u = np.linalg.solve(M,b)
hs = [0]
while hs[-1] <= 1:
hs.append(hs[-1] + h)
u = np.insert(u,0,0)
u = np.append(u,1)
return u, hs
bvp_solve(5)
```
# Real-time Control of Stormwater Systems using Network Optimization
1. Introduction
- Problem Description
- Literature Review
- Contribution of the work
2. Model
3. Algorithm
4. Numerical Studies
5. Conclusion
```
# Python boilerplate
import numpy as np
import scipy as spy
import matplotlib.pyplot as plt
%matplotlib notebook
%config InlineBackend.print_figure_kwargs = {}
import seaborn as sns
sns.set_style("whitegrid")
# Optimization thingy
from gurobipy import *
```
## Introduction
## Model
Dynamics of stormwater systems are nonlinear, and these dynamics have to be accounted for to achieve fine-grained control of the system. But for objectives such as minimizing flooding and minimizing CSOs, it is enough to control the network so that the mass of water is moved effectively, which can be achieved by approximately linearizing the system.
### Virtual Tank Models
### Links:
Flow of water in the links is highly nonlinear; modeling it usually requires solving the kinematic wave equation. For this problem we ignore those dynamics and assume that inflow equals outflow, i.e., lossless links similar to the network links we have studied in class.
### Nodes:
We assume that the nodes are idealized cylinders.
$$ V^t = V^{t-1} + q_{in}^t - q_{out}^t $$
But the amount of water that can leave a tank is constrained by the water level in it.
$$ q_{out} \leq C_D \sqrt{2 g h} $$
where $h$ is the water level and $C_D$ is the discharge coefficient. We linearize this constraint by approximating the square-root operation.
```
# Generate the outflows
volume = np.linspace(0,1000,100)
area = 100
height = volume/area
outflow = np.sqrt(2*9.81*height)
# Get the first order polyfit
coeffws = np.polyfit(height, outflow, 1)
plt.plot(height, outflow, label = "True")
plt.plot(height, coeffws[0]*height + coeffws[1], label = "Linearized")
plt.ylabel("Outflows")
plt.xlabel("Water Level")
plt.legend()
```
Linearized version of the constraint:
$$ q_{out} \leq 1.13047751 \times volume/area + 3.65949363 $$
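As a quick sanity check (a sketch reusing the same volume grid, tank area of 100, and $g = 9.81$ as the cell above), the quoted constants can be reproduced with `np.polyfit`:

```python
import numpy as np

volume = np.linspace(0, 1000, 100)
height = volume / 100.0               # idealized cylinder with area 100
outflow = np.sqrt(2 * 9.81 * height)  # orifice velocity sqrt(2*g*h)
slope, intercept = np.polyfit(height, outflow, 1)

# The first-order fit should recover the constants quoted in the constraint above.
assert abs(slope - 1.1305) < 0.05
assert abs(intercept - 3.6595) < 0.05
```

Note the coefficients depend on the sampled volume range and the tank area, so they would need refitting for a tank with different geometry.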
### Network Representation
The network can be represented as multiple tanks (nodes) interconnected by links.
### Control Problem
Given a network with linear dynamics, how can we control it to move water out of the network as fast as possible?
This control problem can be approached in two ways. A centralized controller observes the whole network and routes the water itself. A decentralized controller distributes the problem: a central coordinator tells the local controllers how to act.
#### Centralized Controller
**Terminology**
- $A$ links in the network
- $V_i^t$ volume in the $i^{th}$ tank at $t^{th}$ time
- $x_{ij}^t $ outflow from $i^{th}$ tank to $j^{th}$ at $t^{th}$ time
- $q_i^t$ inflow into $i^{th}$ tank at $t^{th}$ time
- $N$ is the number of tanks
- $T$ time horizon for solving
- $u_{ij}$ upper bound on the flow in link $ij$
***Objective Function***
If the objective is to minimize the total volume stored in the tanks:
$$ \min \sum^N_{i} \sum^T_t V_i^t $$
If the objective is to move water out of the network as fast as possible:
$$ \max \sum_{ij \in A} \sum^T_t x_{ij}^t $$
***Constraints***
_Mass Balance_:
This constraint enforces mass balance in the tanks and accounts for the travel time between tanks ($\delta_{ji}$).
$$ V_i^t = V_i^{t-1} + q_i^{t-1} + \sum_{j} x_{ji}^{t-\delta_{ji}} - \sum_{j} x_{ij}^{t-1}\ \text{for}\ \forall t\ \text{and}\ \forall i$$
_Flow thresholds_:
$$ x_{ij}^t \leq u_{ij}\ \text{for}\ \forall t\ \text{and}\ \forall ij \in A$$
_Outflow limitation_:
The amount of water that can be released from a tank at any time $t$ is limited by the volume in the tank. Though this relation is nonlinear (proportional to $\sqrt{2gh}$), we assume a linear relationship in this formulation.
$$ x_{ij}^t \leq f(V_{ij}^{t-1}) \ \text{for}\ \forall t\ \text{and}\ \forall ij \in A$$
## Algorithm
This linear program can be solved with Gurobi.
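As a minimal sketch of the formulation for a single tank (using `scipy.optimize.linprog` in place of the Gurobi solver imported earlier; the horizon, capacity, initial volume, and linearization constants below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

T, V0 = 4, 10.0            # time horizon and initial volume
u, a, b = 5.0, 1.13, 3.66  # link capacity and linearized outflow bound x <= a*V + b

# Decision vector z = [V_1..V_T, x_1..x_T]; minimize total stored volume.
c = np.r_[np.ones(T), np.zeros(T)]

A_eq = np.zeros((T, 2 * T))
b_eq = np.zeros(T)
A_ub = np.zeros((T, 2 * T))
b_ub = np.full(T, b)
for t in range(T):
    A_eq[t, t] = 1.0           # V_t ...
    A_eq[t, T + t] = 1.0       # ... + x_t in the mass balance
    A_ub[t, T + t] = 1.0       # x_t on the left of the outflow bound
    if t == 0:
        b_eq[t] = V0           # V_1 + x_1 = V_0
        b_ub[t] += a * V0      # x_1 <= a*V_0 + b
    else:
        A_eq[t, t - 1] = -1.0  # - V_{t-1}
        A_ub[t, t - 1] = -a    # x_t - a*V_{t-1} <= b

bounds = [(0, None)] * T + [(0, u)] * T
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

# The optimal plan drains at link capacity: V = [5, 0, 0, 0], x = [5, 5, 0, 0].
assert res.success and abs(res.fun - 5.0) < 1e-6
```

Extending this to a multi-tank network means adding one mass-balance row per tank per time step (with the $\delta_{ji}$ travel-time offsets shifting which flow variables appear) and one outflow row per link per time step.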
## Numerical Studies
## Conclusion
---
```
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
# jt -t monokai -cellw 95% -f dejavu -fs 12
from jupyterthemes import jtplot
jtplot.style()
import numpy as np
import os
import pandas as pd
import random
import pickle
import bcolz
from tqdm import tqdm
from IPython.display import FileLink, FileLinks
from IPython.display import SVG
import scipy
from sklearn import preprocessing
from sklearn.metrics import fbeta_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.ensemble import RandomForestClassifier
from PIL import Image
import cv2
os.environ["KERAS_BACKEND"] = "tensorflow"
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Activation, Flatten, concatenate, Input
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras import optimizers
from keras.callbacks import ModelCheckpoint, EarlyStopping, TensorBoard
from keras.models import load_model
from keras.utils.vis_utils import model_to_dot
from keras import backend as K
K.set_image_dim_ordering('tf')
from keras.applications.xception import Xception, preprocess_input
def get_raw(df, data_path):
im_features = df.copy()
rgb = []
for image_name in tqdm(im_features.image_name.values, mininterval=10):
img = Image.open(data_path + image_name + '.jpg')
img = img.resize((imagesize,imagesize))
img = np.array(img)[:,:,:3]
# im = np.hstack( ( img[:,:,0].ravel(), img[:,:,1].ravel(), img[:,:,2].ravel() ))
rgb.append( img )
return np.array(rgb)
def getEdges(df, data_path):
im_features = df.copy()
edgeArr = []
for image_name in tqdm(im_features.image_name.values, mininterval=10):
img = cv2.imread( data_path + image_name + '.jpg' , 0)
img = cv2.resize(img, (imagesize, imagesize))
edges = cv2.Canny( img, 5, 25)
edgeArr.append( np.sum(edges) )
return np.array(edgeArr)
def getDistance(xypair):
x_delta = abs(xypair[0] - xypair[2])
y_delta = abs(xypair[1] - xypair[3])
hypotenuse = (x_delta**2 + y_delta**2)**0.5
return hypotenuse
def getLines(df, data_path):
im_features = df.copy()
lineArr = []
for image_name in tqdm(im_features.image_name.values, mininterval=10):
img = cv2.imread( data_path + image_name + '.jpg' , 0)
img = cv2.resize(img, (imagesize, imagesize))
edges = cv2.Canny( img, 100, 125)
lines = cv2.HoughLinesP(edges,1,np.pi/180,100,minLineLength=100,maxLineGap=50)
zeros = np.zeros((imagesize, imagesize))
if lines is None:
lineArr.append( np.sum(zeros) )
else:
for line in lines:
x1,y1,x2,y2 = line[0]
cv2.line(zeros,(x1,y1),(x2,y2),(255),1)
lineArr.append( np.sum(zeros) )
return np.array(lineArr)
def getCorners(df, data_path):
im_features = df.copy()
cornerArr = []
for image_name in tqdm(im_features.image_name.values, mininterval=10):
img = cv2.imread( data_path + image_name + '.jpg' , 0)
img = cv2.resize(img, (imagesize, imagesize))
img = np.float32(img)
dst = cv2.cornerHarris(img,2,3,0.04)
thresholdIndices = dst > 0.05 * dst.max()
matrix = np.zeros(shape=(dst.shape[0],dst.shape[1]))
matrix[thresholdIndices] = 1
cornerArr.append( np.sum(matrix) )
return np.array(cornerArr)
def extract_features(df, data_path):
im_features = df.copy()
r_mean = []
g_mean = []
b_mean = []
r_std = []
g_std = []
b_std = []
r_max = []
g_max = []
b_max = []
r_min = []
g_min = []
b_min = []
r_kurtosis = []
g_kurtosis = []
b_kurtosis = []
r_skewness = []
g_skewness = []
b_skewness = []
for image_name in tqdm(im_features.image_name.values, mininterval=10):
im = Image.open(data_path + image_name + '.jpg')
im = np.array(im)[:,:,:3]
r_mean.append(np.mean(im[:,:,0].ravel()))
g_mean.append(np.mean(im[:,:,1].ravel()))
b_mean.append(np.mean(im[:,:,2].ravel()))
r_std.append(np.std(im[:,:,0].ravel()))
g_std.append(np.std(im[:,:,1].ravel()))
b_std.append(np.std(im[:,:,2].ravel()))
r_max.append(np.max(im[:,:,0].ravel()))
g_max.append(np.max(im[:,:,1].ravel()))
b_max.append(np.max(im[:,:,2].ravel()))
r_min.append(np.min(im[:,:,0].ravel()))
g_min.append(np.min(im[:,:,1].ravel()))
b_min.append(np.min(im[:,:,2].ravel()))
r_kurtosis.append(scipy.stats.kurtosis(im[:,:,0].ravel()))
g_kurtosis.append(scipy.stats.kurtosis(im[:,:,1].ravel()))
b_kurtosis.append(scipy.stats.kurtosis(im[:,:,2].ravel()))
r_skewness.append(scipy.stats.skew(im[:,:,0].ravel()))
g_skewness.append(scipy.stats.skew(im[:,:,1].ravel()))
b_skewness.append(scipy.stats.skew(im[:,:,2].ravel()))
im_features['r_mean'] = r_mean
im_features['g_mean'] = g_mean
im_features['b_mean'] = b_mean
im_features['r_std'] = r_std
im_features['g_std'] = g_std
im_features['b_std'] = b_std
im_features['r_max'] = r_max
im_features['g_max'] = g_max
im_features['b_max'] = b_max
im_features['r_min'] = r_min
im_features['g_min'] = g_min
im_features['b_min'] = b_min
im_features['r_kurtosis'] = r_kurtosis
im_features['g_kurtosis'] = g_kurtosis
im_features['b_kurtosis'] = b_kurtosis
im_features['r_skewness'] = r_skewness
im_features['g_skewness'] = g_skewness
im_features['b_skewness'] = b_skewness
return np.array(im_features.drop(['image_name', 'tags'], axis=1))
# def extract_features(df, data_path):
# im_features = df.copy()
# histArr = []
# for image_name in tqdm(im_features.image_name.values, mininterval=10):
# img = cv2.imread( data_path + image_name + '.jpg' )
# img = np.array(img)
# img.shape
# R = img[:,:,0]
# G = img[:,:,1]
# B = img[:,:,2]
# RGBHistArr = []
# for channel in [R,G,B]:
# placeholder = np.zeros( (256) )
# unique, counts = np.unique(channel, return_counts=True)
# placeholder[unique] = counts
# RGBHistArr.append(placeholder)
# histArr.append( np.hstack(tuple(RGBHistArr)) )
# histArr = np.array(histArr).astype('float32')
# return histArr
def splitSet(dataset, split1, split2):
idx_split1 = int( len(dataset) * split1)
idx_split2 = int( len(dataset) * split2)
training = dataset[0:idx_split1]
validation = dataset[idx_split1:idx_split2]
test = dataset[idx_split2:]
return [ training, validation, test ]
def tf_th_ImgReshape(data):
shapedData = [ np.array( [sample[:,:,0] , sample[:,:,1] , sample[:,:,2]] ) for sample in data]
return np.array(shapedData)
def save_array(fname, arr): c=bcolz.carray(arr, rootdir=fname, mode='w'); c.flush()
def load_array(fname): return bcolz.open(fname)[:]
def xceptionPreprocess(rawFeatures):
rawFeatures = rawFeatures.astype('float32')
rawFeatures = preprocess_input(rawFeatures)
return rawFeatures
def shapingDataSet(rawFeatures, edgeFeatures):
edgeFeaturesShaped = np.reshape(edgeFeatures, edgeFeatures.shape + (1,))
X = [ np.dstack((sampleRaw, sampleEdge)) for sampleRaw, sampleEdge in zip(rawFeatures, edgeFeaturesShaped) ]
X = np.array(X)
X = X.astype('float32')
X -= 127
X /= 255
return X
def dataGenerator(imgRGBArr, imgStatsArr, imgLabels, labelsBool=True, loopBool=True):
batchsize = 32
datasetLength = len(imgRGBArr)
while 1 and loopBool == True:
for idx in range(0, datasetLength, batchsize):
endIdx = idx+batchsize
if endIdx > datasetLength:
endIdx = datasetLength
imgRGB = xceptionPreprocess(imgRGBArr[idx:idx+batchsize])
imgStat = imgStatsArr[idx:idx+batchsize]
labels = imgLabels[idx:idx+batchsize]
if labelsBool == True:
yield ({'xception_input': imgRGB, 'aux_input': imgStat}, {'output': labels})
else:
yield ({'xception_input': imgRGB, 'aux_input': imgStat})
def getLabelDistribution(labels, labelNameArray):
labelCount = [ np.sum(labels[:,i]) for i in range(0, len(labels[0])) ]
labelNameCount = {key: val for key, val in zip(labelNameArray, labelCount)}
return labelNameCount, labelCount
def getPrecision(labels, predictions):
# False positive is a negative label but positive prediction
Tp = float(0)
Fp = float(0)
for label, prediction in zip(labels, predictions):
try:
len(label)
except:
label = [label]
prediction = [prediction]
for idx in range(0, len(label)):
if label[idx]==1 and prediction[idx]==1:
Tp += 1
if label[idx]==0 and prediction[idx]==1:
Fp += 1
if Tp+Fp == 0:
return 0
return (Tp / ( Tp + Fp ))
def getRecall(labels, predictions):
# False negative is a positive label but negative prediction
Tp = float(0)
Fn = float(0)
for label, prediction in zip(labels, predictions):
try:
len(label)
except:
label = [label]
prediction = [prediction]
for idx in range(0, len(label)):
if label[idx]==1 and prediction[idx]==1:
Tp += 1
if label[idx]==1 and prediction[idx]==0:
Fn += 1
if Tp+Fn == 0:
return 0
return (Tp / ( Tp + Fn ))
assert_label = [
[0,0,0],
[0,1,0],
[0,1,0]
]
assert_pred = [
[0,0,0],
[0,0,1],
[1,1,0]
]
assert getPrecision(assert_label, assert_pred) == float(1)/3
assert getRecall(assert_label, assert_pred) == 0.5
assert_label2 = [[0], [1], [1]]
assert_pred2 = [[0], [1], [0]]
assert getPrecision(assert_label2, assert_pred2) == 1.0
assert getRecall(assert_label2, assert_pred2) == 0.5
def getStatistics(labels, predictions, labelNames):
precision = [ getPrecision(labels[:, col], predictions[:, col]) for col in range(0, len(labels[0])) ]
recall = [ getRecall(labels[:, col], predictions[:, col]) for col in range(0, len(labels[0])) ]
f1 = [ f1_score(labels[:, col], predictions[:, col]) for col in range(0, len(labels[0])) ]
precision = np.array(precision)
recall = np.array(recall)
labelPR = {labelName: (precision[idx], recall[idx]) for idx, labelName in enumerate(labelNames)}
return labelPR, precision, recall, f1
def errorAnalyticsBarGraph(test_labels, test_predictions, labels):
_, labelCounts = getLabelDistribution(test_labels, labels)
labelPercentage = np.array( [ np.array([ count / np.sum(labelCounts) ]) for count in labelCounts ] )
_, precision, recall, f1 = getStatistics(test_labels, test_predictions, labels)
plt.rcParams['figure.figsize'] = (14, 8)
fig, ax = plt.subplots()
index = np.arange(len(labels))
bar_width = 0.20
opacity = 0.8
rects1 = plt.bar(index, f1, bar_width,
alpha=opacity,
color='#6A93C6',
label='F1')
rects2 = plt.bar(index + bar_width, precision, bar_width,
alpha=opacity,
color='#C3C2BD',
label='Precision')
rects3 = plt.bar(index + bar_width + bar_width, recall, bar_width,
alpha=opacity,
color='#DFDFE2',
label='Recall')
rects4 = plt.bar(index + bar_width + bar_width + bar_width, labelPercentage, bar_width,
alpha=opacity,
color='#7BE686',
label='Percentage')
plt.xlabel('Label')
plt.ylabel('Scores')
plt.title('Scores by Label')
plt.xticks(rotation=70, fontsize=14, fontweight='bold')
plt.xticks(index + bar_width, (label for label in labels))
plt.yticks(fontsize=14, fontweight='bold')
plt.legend()
plt.tight_layout()
plt.show()
imagesize = 299
cutOff = 0.25
# Load data
folderpath = os.getcwd() + '/'
train_path = folderpath+'train-jpg/'
test_path = folderpath+'test-jpg/'
train = pd.read_csv(folderpath+'train.csv')
test = pd.read_csv(folderpath+'sample_submission_v2.csv')
print('Extracting Dataset Features')
rerun = True
if rerun == True:
train_ImgRaw = get_raw(train, train_path)
train_ImgEdge = getEdges(train, train_path)
train_ImgLine = getLines(train, train_path)
train_ImgCorner = getCorners(train, train_path)
train_ImgStats = extract_features(train, train_path)
data_dic = {'pickleImgRaw': train_ImgRaw,
'pickleImgEdge': train_ImgEdge,
'pickleImgLine': train_ImgLine,
'pickleImgCorner': train_ImgCorner,
'pickleImgStats': train_ImgStats
}
for key in data_dic:
save_array(folderpath+key, data_dic[key])
else:
train_ImgRaw = load_array('pickleImgRaw')
train_ImgEdge = load_array('pickleImgEdge')
train_ImgLine = load_array('pickleImgLine')
train_ImgCorner = load_array('pickleImgCorner')
train_ImgStats = load_array('pickleImgStats')
# # Reviewing image features
# imgidx = 62
# plt.subplot(131),plt.imshow(train_ImgRaw[imgidx] )
# plt.title('Original Image'), plt.xticks([]), plt.yticks([])
# plt.subplot(132),plt.imshow(train_ImgLine[imgidx] ,cmap = 'gray')
# plt.title('line Image'), plt.xticks([]), plt.yticks([])
# plt.subplot(133),plt.imshow(train_ImgCorner[imgidx] ,cmap = 'gray')
# plt.title('Corner Image'), plt.xticks([]), plt.yticks([])
# plt.show()
# print('Shaping Dataset')
# Image RGB Features
X_img = xceptionPreprocess(train_ImgRaw)
# Image Statistics features
X_stats = train_ImgStats.astype('float32')
scaler = preprocessing.StandardScaler().fit(X_stats)
X_stats = scaler.transform(X_stats)
# print('Setup Dataset Labels')
y_train = []
# flatten = lambda l: [item for sublist in l for item in sublist]
# labels = np.array(list(set(flatten([l.split(' ') for l in train['tags'].values]))))
labels = np.array(['clear', 'partly_cloudy', 'cloudy', 'haze', 'primary', 'water', 'bare_ground',
'agriculture', 'cultivation', 'habitation', 'road', 'conventional_mine', 'artisinal_mine',
'selective_logging', 'slash_burn', 'blooming', 'blow_down'])
label_map = {l: i for i, l in enumerate(labels)}
inv_label_map = {i: l for l, i in label_map.items()}
for tags in train.tags.values:
targets = np.zeros(17)
for t in tags.split(' '):
targets[label_map[t]] = 1
y_train.append(targets)
y = np.array(y_train).astype('float32')
# Multi run averaging of random forest results
X_stats = np.hstack((train_ImgStats,
train_ImgEdge.reshape(-1,1),
train_ImgLine.reshape(-1,1),
train_ImgCorner.reshape(-1,1)))
# X_stats = train_ImgStats
random_seed = 0
random.seed(random_seed)
npRandomSeed = np.random.seed(random_seed)
numberRuns = 5
runResultsArr = []
for _ in range(numberRuns):
randArr = np.array(range(len(y)))
np.random.shuffle( randArr )
X_shuffled = X_stats[randArr]
y_shuffled = y[randArr]
train_dataset_stats, valid_dataset_stats, test_dataset_stats = splitSet(X_shuffled, 0.8, 0.9)
train_labels, valid_labels, test_labels = splitSet(y_shuffled, 0.8, 0.9)
clf = RandomForestClassifier(n_estimators=100)
clf = clf.fit(train_dataset_stats, train_labels)
test_predictions = [ clf.predict(test_chip.reshape(1,-1))[0] for test_chip in tqdm(test_dataset_stats, mininterval=10) ]
test_predictions_threshold = np.array(test_predictions).astype('int')
runResultsArr.append({'prediction':test_predictions_threshold,'labels': test_labels})
pickle.dump(runResultsArr, open(folderpath+'rf_rgbstats_results', 'wb'))
# Multi run averaging of CNN results
random_seed = 0
random.seed(random_seed)
npRandomSeed = np.random.seed(random_seed)
numberRuns = 5
check = ModelCheckpoint("weights.{epoch:02d}-{val_acc:.5f}.hdf5", monitor='val_acc', verbose=1,
save_best_only=True, save_weights_only=True, mode='auto')
earlyStop = EarlyStopping(monitor='val_loss')
tensorBoard = TensorBoard(log_dir='./logs')
def fbetaAccuracy(y_true, y_pred):
return fbeta_score(y_true, y_pred, 2, average='samples')
def setupModel():
base_model = Xception(include_top=False, weights='imagenet', input_tensor=None, input_shape=(299,299,3))
base_model.layers[0].name = 'xception_input'
for layer in base_model.layers:
layer.trainable = False
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(2048, activation='relu', name='xception_output')(x)
x = Dropout(0.5)(x)
auxiliary_input = Input(shape=(18,), name='aux_input')
x = concatenate([x, auxiliary_input])
x = Dense(1024, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(17, activation='sigmoid', name='output')(x)
model = Model(inputs=[base_model.input, auxiliary_input], outputs=predictions)
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy', fbetaAccuracy])
return model
runResultsArr = []
for _ in range(numberRuns):
randArr = np.array(range(len(y)))
np.random.shuffle( randArr )
X_img_shuffled = X_img[randArr]
X_stats_shuffled = X_stats[randArr]
y_shuffled = y[randArr]
train_valid_split = 0.50
valid_test_split = 0.75
train_dataset_img, valid_dataset_img, test_dataset_img = splitSet(X_img_shuffled, train_valid_split, valid_test_split)
train_dataset_stats, valid_dataset_stats, test_dataset_stats = splitSet(X_stats_shuffled, train_valid_split, valid_test_split)
train_labels, valid_labels, test_labels = splitSet(y_shuffled, train_valid_split, valid_test_split)
model = setupModel()
model.fit( [train_dataset_img, train_dataset_stats], train_labels,
batch_size=128, epochs=3, callbacks=[check, earlyStop, tensorBoard],
validation_data=([valid_dataset_img, valid_dataset_stats], valid_labels) )
test_predictions = model.predict([test_dataset_img, test_dataset_stats], batch_size=64, verbose=1)
test_predictions_threshold = np.copy(test_predictions)
test_predictions_threshold[test_predictions_threshold < cutOff ] = 0
test_predictions_threshold[test_predictions_threshold >= cutOff ] = 1
test_predictions_threshold = test_predictions_threshold.astype('int')
test_labels = test_labels.astype('int')
runResultsArr.append({'prediction':test_predictions_threshold,'labels': test_labels})
pickle.dump(runResultsArr, open(folderpath+'cnn_rawrgb_results', 'wb'))
# model.save(folderpath+'my_model_yesrgbstats2.h5')
# model = load_model('my_model.h5')
# model.load_weights(folderpath+'weights.03-0.95024.hdf5')
# Analytics
fBetaArr = []
for _ in runResultsArr:
fBetaArr.append( fbeta_score(_['labels'], _['prediction'], 2, average='samples') )
combinedPredictions = np.vstack(tuple([run['prediction'] for run in runResultsArr]))
combinedLabels = np.vstack(tuple([run['labels'] for run in runResultsArr]))
errorAnalyticsBarGraph(combinedLabels, combinedPredictions, labels)
print( [float("%.5f" % i) for i in fBetaArr] )
print("%.5f" % fbeta_score(combinedLabels, combinedPredictions, 2, average='samples') )
```
Labels
['selective_logging', 'conventional_mine', 'partly_cloudy',
'artisinal_mine', 'haze', 'slash_burn', 'primary', 'clear',
'bare_ground', 'blooming', 'water', 'road', 'cloudy', 'habitation',
'agriculture', 'blow_down', 'cultivation']
Training set label distribution
{'slash_burn': 209.0, 'blooming': 332.0, 'water': 7262.0, 'cloudy': 2330.0, 'selective_logging': 340.0,
'road': 8076.0, 'primary': 37840.0, 'clear': 28203.0, 'haze': 2695.0, 'agriculture': 12338.0, 'cultivation': 4477.0,
'partly_cloudy': 7251.0, 'bare_ground': 859.0, 'conventional_mine': 100.0, 'artisinal_mine': 339.0,
'habitation': 3662.0, 'blow_down': 98.0}
# Run Notes
Iter1.
loss: 0.2231 - acc: 0.9143 Single epoch 80% training data
Iter2.
loss: 0.2166 - acc: 0.9166 Modified scaling of inputs to subtract 127 and divide by 255
Iter3.
loss: 0.2028 - acc: 0.9234 Changed convolutions to 64, 64, 128, 128, while adding an additional 512-unit dense layer. Also switched to the RMSprop optimizer
Iter4.
val_loss: 0.1344 - val_acc: 0.9502 Using new model and RMSprop, trained for roughly 5 epochs. Likely due to better learning rate decay definition. Previously with SGD, it slowed down excessively towards the end of the epoch.
Rerunning baseline with 80% data and one epoch got val_loss: 0.1750 - val_acc: 0.9341. Trained model for 5 epochs on all data and got a submission score of 0.86942.
Iter5.
Utilizing the Xception model and fine-tuning the output layer. The final run with the standard 80%/single-epoch setup had accuracy of around ~0.93. But error analytics show much better results for bare ground, cultivation, habitation, artisanal mining, water, and road.
Preprocessing the input using the provided preprocessor improved the result to val_loss: 0.1274 - val_acc: 0.9516
Iter6.
Training all data with the latest model provided a result of loss: 0.1074 - acc: 0.9585. The submission set scored at .89095
Iter 7.
Increased the last dense layer fourfold to 4096 nodes. However, it failed to yield markedly better results, achieving only loss: 0.1145 - acc: 0.9554 at the third epoch. In addition, accuracy actually deteriorated on the 2nd epoch.
Iter8.
Tried adding additional dense layer of 1024 nodes to end. However, it also failed to yield better results achieving loss: 0.1151 - acc: 0.9557 - val_loss: 0.1203 - val_acc: 0.9545 after four epochs. With loss: 0.1639 - acc: 0.9382 - val_loss: 0.1285 - val_acc: 0.9513 after one epoch.
Iter9.
Tried incorporating RGB statistics, which achieved loss: 0.1370 - acc: 0.9486 - val_loss: 0.1292 - val_acc: 0.9509 after one epoch. But the test set analysis looked terrible; it's probably overfitting and failing to generalize.
Actually, running it a second time, but scaling across the entire statistics dataset before splitting into training/valid/test sets (rather than fitting the scaler on the training set and applying it to the valid/test sets), resulted in loss: 0.1370 - acc: 0.9486 - val_loss: 0.1292 - val_acc: 0.9509, with test set error statistics that looked much better. Potentially I made a mistake in applying the preprocessing to the test set, since the validation accuracy was also good. Either way it looks promising after doing so well after a single epoch. Will try training for 5 epochs on the entire dataset.
After 5 epochs on entire dataset achieved loss: 0.0987 - acc: 0.9619 with consistent incremental improvement from each epoch. Will try training for longer. Model predictions on submission set resulted in an almost 1% improvement, 0.89886
After another additional 5 epochs it achieved an improvement of loss: 0.0810 - acc: 0.9682. However, when applied to the submission set, the score dipped slightly to 0.89420, suggesting that it's overfitting.
Iter10.
Testing using RGB histogram instead of aggregate statistics. 80% data and one epoch resulted in: loss: 0.1635 - acc: 0.9370 - val_loss: 0.1290 - val_acc: 0.9509. After five epochs resulted in: loss: 0.1109 - acc: 0.9569 - val_loss: 0.1207 - val_acc: 0.9546 with very little improvement from epoch one.
Iter11.
Using previous xception and rgbhistogram, but increasing initial dense layer to 2048 matching original literature, adding drop out, and adding an additional dense layer to the end. Performance capped out around loss: 0.1345 - acc: 0.9504 - val_loss: 0.1386 - val_acc: 0.9518
Review Iter.
Pulling back and reviewing
Try converting image into RGB histograms and merging that with final dense layer
Potential additions edge and line analysis can be combined with RGB statistics.
Canny edge analysis and count how many 1s are there.
Line edge analysis and count how many 1s are there.
Corner analysis and count how many 1s are there.
Modify RGB statistics to Purple, Blue, Green, Yellow, Red, Brown, White, Black?
Check misclassification statistics
Utilize an ensemble algorithm, so maybe a Random forest for color + edge statistics, and a separate like a CNN trained specifically to look for specific labels like blow down. This image feature algorithm may potentially use artificially generated data.
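The RGB-histogram idea above can be sketched with NumPy alone; the 3×256-bin layout mirrors the commented-out `extract_features` variant earlier in the notebook (the `rgb_histogram` helper name and the random test image are mine, not part of the original pipeline):

```python
import numpy as np

def rgb_histogram(img):
    """Return a flat 3*256 vector of per-channel pixel counts for an RGB uint8 image."""
    return np.concatenate([
        np.bincount(img[:, :, c].ravel(), minlength=256) for c in range(3)
    ]).astype('float32')

# Illustrative input: a random 32x32 RGB image.
img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
feats = rgb_histogram(img)
assert feats.shape == (768,)
assert feats.sum() == 3 * 32 * 32  # every pixel lands in exactly one bin per channel
```

Such a vector could be concatenated with the edge/line/corner counts and fed to the auxiliary input branch, or to the random forest baseline.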
```
model = load_model(folderpath+'final_model.h5')
##### SUBMISSION RUN #####
# Making Final Predictions using all training data
model.fit( [X_img, X_stats], y,
batch_size=128, epochs=5)
model.save(folderpath+'final_model.h5')
# model = load_model(folderpath+'final_model.h5')
# print('Making submission predictions')
rerun = False
if rerun == True:
submission_ImgRaw = get_raw(test, test_path)
# submission_ImgEdge = getEdges(test, test_path)
# submission_ImgLine = getLines(test, test_path)
# submission_ImgCorner = getCorners(test, test_path)
submission_ImgStats = extract_features(test, test_path)
data_dic = {'submissionPickleImgRaw': submission_ImgRaw,
# 'submissionPickleImgEdge': submission_ImgEdge,
# 'submissionPickleImgLine': submission_ImgLine,
# 'submissionPickleImgCorner': submission_ImgCorner,
'submissionPickleImgStats': submission_ImgStats
}
for key in data_dic:
save_array(folderpath+key, data_dic[key])
else:
submission_ImgRaw = load_array('submissionPickleImgRaw')
# submission_ImgEdge = load_array('submissionPickleImgEdge')
# submission_ImgLine = load_array('submissionPickleImgLine')
# submission_ImgCorner = load_array('submissionPickleImgCorner')
submission_ImgStats = load_array('submissionPickleImgStats')
train_ImgStats = extract_features(train, train_path)
X_stats = train_ImgStats.astype('float32')
scaler = preprocessing.StandardScaler().fit(X_stats)
pickle.dump(scaler, open(folderpath+'scaler', 'wb'))
scaler = pickle.load(open(folderpath+'scaler', 'rb'))
def batchSet(dataset, batches):
arr = []
stepSize = int(len(dataset)/batches)
for idx in range(0, len(dataset), stepSize):
arr.append(dataset[idx:idx+stepSize])
return arr
submision_subsetRGBArr = batchSet(submission_ImgRaw, 10)
submision_subsetStatsArr = batchSet(submission_ImgStats, 10)
submission_predictions = []
for idx in range(0, len(submision_subsetRGBArr)):
subSetRGB = xceptionPreprocess(submision_subsetRGBArr[idx])
subSetStats = scaler.transform(submision_subsetStatsArr[idx])
submission_subSetPredictions = model.predict([subSetRGB, subSetStats], batch_size=64, verbose=1)
submission_predictions.append(submission_subSetPredictions)
submission_predictionsCombined = np.vstack( tuple(submission_predictions) )
submission_predictions_thresholded = np.copy(submission_predictionsCombined)
submission_predictions_thresholded[submission_predictions_thresholded < cutOff ] = 0
submission_predictions_thresholded[submission_predictions_thresholded >= cutOff ] = 1
predictionLabels = [' '.join(labels[row > 0.2]) for row in submission_predictions_thresholded]
subm = pd.DataFrame()
subm['image_name'] = test.image_name.values
subm['tags'] = predictionLabels
subm.to_csv(folderpath+'submission.csv', index=False)
```
---
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from tqdm import tqdm
%matplotlib inline
from torch.utils.data import Dataset, DataLoader
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
from torch.nn import functional as F
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
m = 2000 # 5, 50, 100, 500, 2000
train_size = 500 # 100, 500, 2000, 10000
```
# Generate dataset
```
np.random.seed(12)
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((5000,2))
np.random.seed(12)
x[idx[0],:] = np.random.multivariate_normal(mean = [7,4],cov=[[0.1,0],[0,0.1]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [8,6.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [5.5,6.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [-1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [0,2],cov=[[0.1,0],[0,0.1]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [0,-1],cov=[[0.1,0],[0,0.1]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [0,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [-0.5,-0.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [0.4,0.2],cov=[[0.1,0],[0,0.1]],size=sum(idx[9]))
x[idx[0]][0], x[idx[5]][5]
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
bg_idx = [ np.where(idx[3] == True)[0],
np.where(idx[4] == True)[0],
np.where(idx[5] == True)[0],
np.where(idx[6] == True)[0],
np.where(idx[7] == True)[0],
np.where(idx[8] == True)[0],
np.where(idx[9] == True)[0]]
bg_idx = np.concatenate(bg_idx, axis = 0)
bg_idx.shape
np.unique(bg_idx).shape
x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)
np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)
x = x/np.std(x[bg_idx], axis = 0, keepdims = True)
np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
foreground_classes = {'class_0','class_1', 'class_2'}
background_classes = {'class_3','class_4', 'class_5', 'class_6','class_7', 'class_8', 'class_9'}
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,m)
train_data=[]
a = []
fg_instance = np.array([[0.0,0.0]])
bg_instance = np.array([[0.0,0.0]])
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
fg_instance += x[b]
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
bg_instance += x[b]
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
a
fg_instance
bg_instance
(fg_instance+bg_instance)/m , m
# mosaic_list_of_images =[]
# mosaic_label = []
train_label=[]
fore_idx=[]
train_data = []
for j in range(train_size):
np.random.seed(j)
fg_instance = torch.zeros([2], dtype=torch.float64) #np.array([[0.0,0.0]])
bg_instance = torch.zeros([2], dtype=torch.float64) #np.array([[0.0,0.0]])
# a=[]
for i in range(m):
if i == fg_idx:
fg_class = np.random.randint(0,3)
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
fg_instance += x[b]
# a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
bg_instance += x[b]
# a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
train_data.append((fg_instance+bg_instance)/m)
# a = np.concatenate(a,axis=0)
# mosaic_list_of_images.append(np.reshape(a,(2*m,1)))
train_label.append(fg_class)
fore_idx.append(fg_idx)
train_data[0], train_label[0]
train_data = torch.stack(train_data, axis=0)
train_data.shape, len(train_label)
test_label=[]
# fore_idx=[]
test_data = []
for j in range(1000):
np.random.seed(j)
fg_instance = torch.zeros([2], dtype=torch.float64) #np.array([[0.0,0.0]])
fg_class = np.random.randint(0,3)
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
fg_instance += x[b]
# a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
test_data.append((fg_instance)/m)
# a = np.concatenate(a,axis=0)
# mosaic_list_of_images.append(np.reshape(a,(2*m,1)))
test_label.append(fg_class)
# fore_idx.append(fg_idx)
test_data[0], test_label[0]
test_data = torch.stack(test_data, axis=0)
test_data.shape, len(test_label)
x1 = (train_data).numpy()
y1 = np.array(train_label)
x1[y1==0,0]
x1[y1==0,0][:,0]
x1[y1==0,0][:,1]
x1 = (train_data).numpy()
y1 = np.array(train_label)
plt.scatter(x1[y1==0,0][:,0], x1[y1==0,0][:,1], label='class 0')
plt.scatter(x1[y1==1,0][:,0], x1[y1==1,0][:,1], label='class 1')
plt.scatter(x1[y1==2,0][:,0], x1[y1==2,0][:,1], label='class 2')
plt.legend()
plt.title("dataset4 CIN with alpha = 1/"+str(m))
x1 = (test_data).numpy()
y1 = np.array(test_label)
plt.scatter(x1[y1==0,0][:,0], x1[y1==0,0][:,1], label='class 0')
plt.scatter(x1[y1==1,0][:,0], x1[y1==1,0][:,1], label='class 1')
plt.scatter(x1[y1==2,0][:,0], x1[y1==2,0][:,1], label='class 2')
plt.legend()
plt.title("test dataset4")
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
#self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]
train_data[0].shape, train_data[0]
batch = 200
traindata_1 = MosaicDataset(train_data, train_label )
trainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)
testdata_1 = MosaicDataset(test_data, test_label )
testloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)
# testdata_11 = MosaicDataset(test_dataset, labels )
# testloader_11 = DataLoader( testdata_11 , batch_size= batch ,shuffle=False)
class Whatnet(nn.Module):
def __init__(self):
super(Whatnet,self).__init__()
self.linear1 = nn.Linear(2,50)
self.linear2 = nn.Linear(50,3)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.zeros_(self.linear2.bias)
def forward(self,x):
x = F.relu(self.linear1(x))
x = (self.linear2(x))
return x  # return the full (batch, 3) logits; x[:, 0] would drop the class dimension and break CrossEntropyLoss
def calculate_loss(dataloader,model,criter):
model.eval()
r_loss = 0
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = model(inputs)
# print(outputs.shape)
loss = criter(outputs, labels)
r_loss += loss.item()
return r_loss/(i+1)
def test_all(number, testloader,net):
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"),labels.to("cuda")
out.append(labels.cpu().numpy())
outputs= net(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
pred = np.concatenate(pred, axis = 0)
out = np.concatenate(out, axis = 0)
print("unique out: ", np.unique(out), "unique pred: ", np.unique(pred) )
print("correct: ", correct, "total ", total)
print('Accuracy of the network on the %d test dataset %d: %.2f %%' % (total, number , 100 * correct / total))
def train_all(trainloader, ds_number, testloader_list, lr_list):
final_loss = []
for LR in lr_list:
print("--"*20, "Learning Rate used is", LR)
torch.manual_seed(12)
net = Whatnet().double()
net = net.to("cuda")
criterion_net = nn.CrossEntropyLoss()
optimizer_net = optim.Adam(net.parameters(), lr=LR)  # use the learning rate being swept, not a hard-coded 0.001
acti = []
loss_curi = []
epochs = 1000
running_loss = calculate_loss(trainloader,net,criterion_net)
loss_curi.append(running_loss)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
net.train()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_net.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
# print(outputs.shape)
loss = criterion_net(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_net.step()
running_loss = calculate_loss(trainloader,net,criterion_net)
if(epoch%200 == 0):
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.05:
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %.2f %%' % (total, 100 * correct / total))
for i, j in enumerate(testloader_list):
test_all(i+1, j,net)
print("--"*40)
final_loss.append(loss_curi)
return final_loss
train_loss_all=[]
testloader_list= [ testloader_1]
lr_list = [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5 ]
fin_loss = train_all(trainloader_1, 1, testloader_list, lr_list)
train_loss_all.append(fin_loss)
%matplotlib inline
len(fin_loss)
for i,j in enumerate(fin_loss):
plt.plot(j,label ="LR = "+str(lr_list[i]))
plt.xlabel("Epochs")
plt.ylabel("Training_loss")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
# Welcome to Kaggle!
Once you have created a Kaggle account, you're officially a "Kaggler" -- and part of a community of over seven million users!
> _If you haven't yet created a Kaggle account, it only takes a couple of minutes. Click **[here](https://www.kaggle.com/account/login?phase=startRegisterTab&returnUrl=%2F)**._
So what should you do first? This notebook will introduce you to Kaggle's progression system, which you can use to measure your growth as a data scientist. You'll level up from "Novice" to "Contributor", and also make your first submission to a Kaggle competition (_no programming experience needed!_).
We expect that following all of the steps here will take approximately **35 minutes**.
# Step 1: Your Kaggle profile [2 minutes]
Navigate to your Kaggle profile by visiting [kaggle.com/me](https://www.kaggle.com/me). If you haven't done much yet on Kaggle, the page will look similar to the screenshot below.
<img src="https://i.imgur.com/43ho0EG.png" width="100%"/>
You'll see four boxes that say **Competitions Novice**, **Datasets Novice**, **Notebooks Novice**, and **Discussion Novice**. Each of the boxes maps to a different type of activity that you can do on Kaggle:
- Through [**competitions**](https://www.kaggle.com/competitions), you can submit solutions to data science problems that are posed by companies like [Santander](https://www.kaggle.com/c/santander-customer-transaction-prediction) and [Zillow](https://www.kaggle.com/c/zillow-prize-1). You can also collaborate with other Kagglers, and top solutions are often awarded large cash prizes.
- Kaggle has a large collection of [**datasets**](https://www.kaggle.com/datasets) that you can use in your data science projects. You can also contribute to the community by uploading your own datasets.
- [**Notebooks**](https://www.kaggle.com/code) are a great way to share your projects with the data science community.
- You can contribute to community [**discussion**](https://www.kaggle.com/discussion) by posing questions or providing answers to other Kagglers.
By participating in these activities, you can receive recognition from other Kagglers and be awarded medals. These medals will be recorded on your profile, which you can use to track your growth as a data scientist on Kaggle.
# Step 2: Kaggle progression [5 minutes]
There are five performance tiers at Kaggle: **Novice**, **Contributor**, **Expert**, **Master**, and **Grandmaster**.
The good news is: levelling up from **Novice** to **Contributor** is quick and easy. This is what you'll do today!
<img src="https://i.imgur.com/qivAote.png" width="70%"/>
In order to level up, the list of things you need to do is shown in the image below.
<img src="https://i.imgur.com/ONYGsWU.png" width="95%"/>
If you're curious about what you'll need to do in order to continue to move up the progression system (to **Expert** and ultimately **Grandmaster**), you can access this information by clicking on **Unranked** anywhere on your profile page. This will bring you to [kaggle.com/progression](https://www.kaggle.com/progression). You can read this page now, if you like. If you'd prefer to review this information later, you can safely skip to the next section.
# Step 3: Submit to Titanic [20 minutes]
First, you will run a notebook and make a competition submission (the first two items in the list). To do this, follow the instructions in [this notebook](https://www.kaggle.com/alexisbcook/titanic-tutorial).
The notebook does not assume any programming background, so you'll be able to complete it even if you're completely new to data science.
# Step 4: Make a comment [7 minutes]
Now, it's time to make one comment. To start, return to the notebook from the previous section by clicking [here](https://www.kaggle.com/alexisbcook/titanic-tutorial).
To make a comment, scroll to the bottom of the notebook, where you'll find the **Comments** section. If you're not sure what to post, you might like to:
- highlight what you found most useful about the notebook, or
- ask questions about anything that you found confusing in the notebook.
If you feel comfortable, you can also offer suggestions for extending the work.
When you try to make a comment, you'll first be prompted to SMS verify your account. SMS verification is required to access some useful features on Kaggle, such as Kaggle's free GPU and TPU hours. This will come in handy if you decide to study [deep learning](https://www.kaggle.com/learn/intro-to-deep-learning) or [computer vision](https://www.kaggle.com/learn/computer-vision). It will also allow you to join competitions that award cash prizes!
The SMS verification process will navigate you to a form for submitting your mobile phone number. After reading the page, if you would like to proceed, complete the form. A verification code will then be sent to your number by text message. Once you submit the code successfully, you're done, and can now comment on Kaggle!
# Step 5: Give an upvote [1 minute]
The next thing to do is give one upvote. To do this, take a look at the comments that other users have posted. Pick one that seems particularly useful or insightful, and click on the upvote arrow to the right of the comment to cast your vote.
<img src="https://i.imgur.com/rpXV4TG.png" width="95%"/>
And that's it! You should now be a Contributor, and this should be reflected on your profile page.
<img src="https://i.imgur.com/yW9db8p.png" width="95%"/>
Congratulations for taking your first steps on Kaggle!
# Not yet a Contributor?
Once you have successfully completed all of the instructions in this notebook, you will be a Kaggle Contributor. If your profile does not yet reflect this, you can return to the [progression page](https://www.kaggle.com/progression) to determine which items are missing.
<img src="https://i.imgur.com/cBFmcZV.png" width="95%"/>
# Have questions?
If you have any questions about Kaggle's progression system, you can reach out to the community for help by posting in the [Getting Started forum](https://www.kaggle.com/getting-started). To do this, click on [**+New Topic**] on the top right of the page.
<img src="https://i.imgur.com/aySR9Zt.png" width="95%"/>
# What's next?
If you are participating in the 30 Days of ML program and accessing this notebook on the very first day, your work is done for today! Tomorrow, you will receive an email with your next assignment.
If you haven't signed up for the 30 Days of ML program and are curious what to do next, Kaggle has a lot to offer, as you continue to progress as a data scientist.
- If you're just learning data science, you're encouraged to check out our beginner-friendly courses on [**Kaggle Learn**](https://www.kaggle.com/learn). The courses are free and last only a few hours. Each lesson has a hands-on programming exercise, where you'll write code to analyze data.
- To apply what you've learned to a real-life problem and learn new techniques alongside other Kagglers, check out [**Kaggle Competitions**](https://www.kaggle.com/competitions).
- Once you're ready to build out a data science portfolio, you can use [**Kaggle Datasets**](https://www.kaggle.com/datasets) for inspiration and [**Kaggle Notebooks**](https://www.kaggle.com/notebooks) to run code and share your work with the community.
# Logic: `logic.py`; Chapters 6-8
This notebook describes the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module, which covers Chapters 6 (Logical Agents), 7 (First-Order Logic) and 8 (Inference in First-Order Logic) of *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.
We'll start by looking at `Expr`, the data type for logical sentences, and the convenience function `expr`. We'll be covering two types of knowledge bases, `PropKB` - Propositional logic knowledge base and `FolKB` - First order logic knowledge base. We will construct a propositional knowledge base of a specific situation in the Wumpus World. We will next go through the `tt_entails` function and experiment with it a bit. The `pl_resolution` and `pl_fc_entails` functions will come next. We'll study forward chaining and backward chaining algorithms for `FolKB` and use them on `crime_kb` knowledge base.
But the first step is to load the code:
```
from utils import *
from logic import *
```
## Logical Sentences
The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
```
Symbol('x')
```
Or we can define multiple symbols at the same time with the function `symbols`:
```
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
```
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
```
P & ~Q
```
This works because the `Expr` class overloads the `&` operator with this definition:
```python
def __and__(self, other): return Expr('&', self, other)
```
and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
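The `op`/`args` representation plus operator overloading is easy to mimic. Here is a rough, self-contained sketch of the idea (a hypothetical `MiniExpr`, not the actual `logic.py` class, which supports many more operators):

```python
class MiniExpr:
    """A tiny sketch of an Expr-style AST node: an operator string plus a tuple of args."""
    def __init__(self, op, *args):
        self.op = op        # operator (or symbol name), always a string
        self.args = args    # tuple of sub-expressions (MiniExpr instances or numbers)

    def __and__(self, other):
        return MiniExpr('&', self, other)

    def __invert__(self):
        return MiniExpr('~', self)

    def __repr__(self):
        if not self.args:                       # a bare symbol
            return self.op
        if self.op == '~':                      # prefix operator
            return '~' + repr(self.args[0])
        return '(' + (' %s ' % self.op).join(map(repr, self.args)) + ')'

P, Q = MiniExpr('P'), MiniExpr('Q')
sentence = P & ~Q
print(sentence.op, sentence.args)   # '&' and a two-element tuple
print(repr(sentence))               # (P & ~Q)
```

Printing the tree flattens it back into infix notation, which is essentially what the real `Expr.__repr__` does.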
```
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
```
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
```
3 * f(x, y) + P(y) / 2 + 1
```
## Operators for Constructing Logical Sentences
Here is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:
| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input
|--------------------------|----------------------|-------------------------|---|---|
| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)`
| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)`
| Or | P ∨ Q | `P`<tt> | </tt>`Q`| `P`<tt> | </tt>`Q` | `Expr('`|`', P, Q)`
| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)`
| Implication | P → Q | `P` <tt>|</tt>`'==>'`<tt>|</tt> `Q` | `P ==> Q` | `Expr('==>', P, Q)`
| Reverse Implication | Q ← P | `Q` <tt>|</tt>`'<=='`<tt>|</tt> `P` |`Q <== P` | `Expr('<==', Q, P)`
| Equivalence | P ↔ Q | `P` <tt>|</tt>`'<=>'`<tt>|</tt> `Q` |`P <=> Q` | `Expr('<=>', P, Q)`
Here's an example of defining a sentence with an implication arrow:
```
~(P & Q) |'==>'| (~P | ~Q)
```
## `expr`: a Shortcut for Constructing Sentences
If the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
```
expr('~(P & Q) ==> (~P | ~Q)')
```
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, `<==`, or `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
```
expr('sqrt(b ** 2 - 4 * a * c)')
```
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix.
## Propositional Knowledge Bases: `PropKB`
The class `PropKB` can be used to represent a knowledge base of propositional logic sentences.
We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself.
Now on to the class `PropKB` itself.
* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and `or`s.
* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.
* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails the query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails the query and `None` otherwise. This might seem a little weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None`. But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will mostly be using the `ask` function, which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function, which returns a `True` or a `False`.
* `retract(self, sentence)` : This function removes all the clauses of the sentence given, from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those.
## Wumpus World KB
Let us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
```
wumpus_kb = PropKB()
```
We define the symbols we use in our clauses.<br/>
$P_{x, y}$ is true if there is a pit in `[x, y]`.<br/>
$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.<br/>
```
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
```
Now we tell sentences based on `section 7.4.3`.<br/>
There is no pit in `[1,1]`.
```
wumpus_kb.tell(~P11)
```
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
```
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
```
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
```
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
```
We can check the clauses stored in a `KB` by accessing its `clauses` variable
```
wumpus_kb.clauses
```
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were in turn converted to CNF which is stored in the `KB`.<br/>
$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.<br/>
$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.<br/>
$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$ which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.<br/>
$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner.
## Inference in Propositional Knowledge Base
In this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$.
### Truth Table Enumeration
It is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
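The model-checking idea can be sketched directly. Below is a hedged, self-contained illustration of truth-table entailment — not the library's `tt_check_all`; for brevity the `KB` and the query are encoded as Python predicates over a model dict:

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """Check KB |= alpha by enumerating every truth assignment over `symbols`.
    kb and alpha are functions model -> bool, where model maps symbol -> bool."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False          # found a model satisfying KB but not alpha
    return True                   # alpha holds in every model of KB

# KB: (P => Q) and P ; query: Q
kb = lambda m: (not m['P'] or m['Q']) and m['P']
print(tt_entails(kb, lambda m: m['Q'], ['P', 'Q']))        # True
print(tt_entails(kb, lambda m: not m['Q'], ['P', 'Q']))    # False
```

With $n$ symbols this visits all $2^n$ models, which is exactly the exponential cost the text describes.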
```
%psource tt_check_all
```
Note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
```
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
```
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This raises the question: what if $\alpha$ is `True` in only a portion of all models? Do we return `True` or `False`? That doesn't rule out the possibility of $\alpha$ being `True`, but it is not entailed by the `KB`, so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
```
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
```
### Proof by Resolution
Recall that our goal is to check whether $\text{KB} \vDash \alpha$ i.e. is $\text{KB} \implies \alpha$ true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ <em>if and only if</em> $\text{KB} \land \neg \alpha$ is unsatisfiable".<br/>
This technique corresponds to <em>proof by <strong>contradiction</strong></em>, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, <strong>resolution</strong>, which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying resolution yields a clause which we add to the KB. We keep doing this until:
* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.
* Two clauses resolve to yield the <em>empty clause</em>, in which case $\text{KB} \vDash \alpha$.
The <em>empty clause</em> is equivalent to <em>False</em> because it arises only from resolving two complementary
unit clauses such as $P$ and $\neg P$ which is a contradiction as both $P$ and $\neg P$ can't be <em>True</em> at the same time.
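The saturation loop described above can be sketched in a few lines. This is a simplified, self-contained illustration over clauses represented as frozensets of string literals (e.g. `'P'`, `'~P'`) and limited to literal queries — not the library's `pl_resolution`:

```python
def resolve(ci, cj):
    """All resolvents of two clauses (frozensets of literals like 'P', '~P')."""
    resolvents = []
    for lit in ci:
        comp = lit[1:] if lit.startswith('~') else '~' + lit
        if comp in cj:
            resolvents.append((ci - {lit}) | (cj - {comp}))
    return resolvents

def pl_resolution(clauses, alpha):
    """Does the clause set entail the literal alpha?
    Add ~alpha and saturate, looking for the empty clause."""
    neg = alpha[1:] if alpha.startswith('~') else '~' + alpha
    clauses = set(clauses) | {frozenset([neg])}
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci == cj:
                    continue
                for r in resolve(ci, cj):
                    if not r:
                        return True    # derived the empty clause: contradiction
                    new.add(frozenset(r))
        if new <= clauses:
            return False               # no new clauses: alpha is not entailed
        clauses |= new

# KB: (~P | Q), P   -- entails Q but not ~Q
kb = {frozenset(['~P', 'Q']), frozenset(['P'])}
print(pl_resolution(kb, 'Q'))    # True
print(pl_resolution(kb, '~Q'))   # False
```

The two terminating conditions of the loop are exactly the two bullet points above: either the empty clause appears, or saturation is reached with nothing new to add.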
```
%psource pl_resolution
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
```
## First-Order Logic Knowledge Bases: `FolKB`
The class `FolKB` can be used to represent a knowledge base of First-order logic sentences. You would initialize and use it the same way as you would for `PropKB` except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections.
## Criminal KB
In this section we create a `FolKB` based on the following paragraph.<br/>
<em>The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.</em><br/>
The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do the extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
```
clauses = []
```
<em>“... it is a crime for an American to sell weapons to hostile nations”</em><br/>
The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to make meaning of them.
* `Criminal(x)`: `x` is a criminal
* `American(x)`: `x` is an American
* `Sells(x ,y, z)`: `x` sells `y` to `z`
* `Weapon(x)`: `x` is a weapon
* `Hostile(x)`: `x` is a hostile nation
Let us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation.
$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal} (x)$
```
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
```
<em>"The country Nono, an enemy of America"</em><br/>
We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. The enemy relation is shown using the predicate symbol `Enemy`.
$\text{Enemy}(\text{Nono}, \text{America})$
```
clauses.append(expr("Enemy(Nono, America)"))
```
<em>"Nono ... has some missiles"</em><br/>
This states the existence of some missile owned by Nono: $\exists x \; \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.
$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
```
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
```
<em>"All of its missiles were sold to it by Colonel West"</em><br/>
If Nono owns something and it classifies as a missile, then it was sold to Nono by West.
$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
```
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
```
<em>"West, who is American"</em><br/>
West is an American.
$\text{American}(\text{West})$
```
clauses.append(expr("American(West)"))
```
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.
$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
```
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
```
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
```
crime_kb = FolKB(clauses)
```
## Inference in First-Order Logic
In this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both aforementioned algorithms rely on a process called <strong>unification</strong>, a key component of all first-order inference algorithms.
### Unification
We sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a <em>unifier</em> for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
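A stripped-down unifier is easy to sketch. The following is a self-contained illustration, not the library's `unify`: it encodes variables as lowercase strings, compound terms as tuples, and (like many textbook versions) omits the occurs-check:

```python
def unify(x, y, s):
    """Unify terms x and y under substitution s (a dict); return the extended
    substitution, or None if unification fails.
    Variables: lowercase strings.  Compound terms: tuples like ('Cat', 'x')."""
    if s is None:
        return None                                  # an earlier step already failed
    if x == y:
        return s
    if isinstance(x, str) and x[:1].islower():       # x is a variable
        return unify_var(x, y, s)
    if isinstance(y, str) and y[:1].islower():       # y is a variable
        return unify_var(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):                     # unify component-wise
            s = unify(xi, yi, s)
        return s
    return None

def unify_var(var, t, s):
    if var in s:                                     # already bound: chase the binding
        return unify(s[var], t, s)
    return {**s, var: t}

print(unify(('A', 'x'), ('A', 'B'), {}))            # {'x': 'B'}
print(unify(('Cat', 'x'), ('Dog', 'Dobby'), {}))    # None
```

As in the real implementation, the result is a dictionary of substitutions, and failure is signalled by `None`.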
```
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
```
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
```
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
```
We also need to take care not to unintentionally reuse a variable name. `unify` treats repeated names as a single variable, which prevents that variable from taking multiple values.
```
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
```
### Forward Chaining Algorithm
We consider the simple forward-chaining algorithm presented in <em>Figure 9.3</em>. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inference process is repeated until either the query can be answered or no new sentences can be added. We test if the newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, the query is a failure.
The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
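As a simplified, self-contained illustration of this loop, here is a forward chainer over <em>ground</em> (propositionalized) definite clauses — unlike `fol_fc_ask` it skips unification entirely, and the rule and fact names below are invented for the example:

```python
def fc_entails(rules, facts, query):
    """Forward chaining over ground definite clauses.
    rules: list of (premises, conclusion) pairs; facts: set of known atoms."""
    facts = set(facts)                  # work on a copy
    changed = True
    while changed:                      # iterate to a fixpoint
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)   # fire the rule
                changed = True
    return query in facts

rules = [(['Missile'], 'Weapon'),
         (['Enemy'], 'Hostile'),
         (['American', 'Weapon', 'Sells', 'Hostile'], 'Criminal')]
facts = {'Missile', 'Enemy', 'American', 'Sells'}
print(fc_entails(rules, facts, 'Criminal'))   # True
```

The real algorithm replaces the `p in facts` membership test with unification against the clauses in the `KB`, which is what lets it yield substitutions rather than a bare yes/no answer.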
```
%psource fol_fc_ask
```
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
```
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
```
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
```
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
```
<strong><em>Note</em>:</strong> `fol_fc_ask` makes changes to the `KB` by adding sentences to it.
### Backward Chaining Algorithm
This algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these. But to prove a `lhs` all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to <em>And/Or</em> search.
#### OR
The <em>OR</em> part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at each rule whose `rhs` unifies with the `goal`, we yield a substitution which proves all the conjuncts in the `lhs`. We use `parse_definite_clause` to obtain the `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
```
%psource fol_bc_or
```
#### AND
The <em>AND</em> corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each <em>and</em> every clause in the list of conjuncts.
```
%psource fol_bc_and
```
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
```
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
```
You may notice some new variables in the substitution. They are introduced to standardize variable names and prevent the naming problems discussed in the [Unification section](#Unification).
## Appendix: The Implementation of `|'==>'|`
Consider the `Expr` formed by this syntax:
```
P |'==>'| ~Q
```
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, and so is exactly equivalent to this:
```
(P | '==>') | ~Q
```
In other words, there are two applications of or-operators. Here's the first one:
```
P | '==>'
```
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.
The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
```
partial = PartialExpr('==>', P)
partial | ~Q
```
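The whole mechanism can be sketched with a stripped-down stand-in for the library's classes (the real `Expr` also handles `&`, `~`, numbers, and many other operators; this sketch only shows the or-bar trick):

```python
class Expr:
    """Minimal expression node: an operator plus zero or two arguments."""
    def __init__(self, op, *args):
        self.op, self.args = op, args

    def __or__(self, rhs):
        if isinstance(rhs, str):            # P | '==>' begins a partial expression
            return PartialExpr(rhs, self)
        return Expr('|', self, rhs)         # ordinary disjunction

    def __repr__(self):
        if not self.args:
            return self.op
        return '(%r %s %r)' % (self.args[0], self.op, self.args[1])

class PartialExpr:
    """Holds the operator and left-hand side until | supplies the right-hand side."""
    def __init__(self, op, lhs):
        self.op, self.lhs = op, lhs

    def __or__(self, rhs):
        return Expr(self.op, self.lhs, rhs)

P, Q = Expr('P'), Expr('Q')
print(P | '==>' | Q)                        # (P ==> Q)
```

Because `|` associates left-to-right, `P | '==>' | Q` first builds the `PartialExpr` and then completes it, exactly as described above.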
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1),
who suggested using a string inside the or-bars.
## Appendix: The Implementation of `expr`
How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):
1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).
2. We `eval` the resulting string in an environment in which every identifier
is bound to a symbol with that identifier as the `op`.
In other words,
```
expr('~(P & Q) ==> (~P | ~Q)')
```
is equivalent to doing:
```
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
```
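The first trick, the string substitution, can be sketched as a simple replacement pass (an illustrative sketch; the real helper handles the full operator set and precedence concerns consistently):

```python
def expr_handle_infix_ops(x):
    """Rewrite '==>' (and similar operators) into the or-bar form
    so that Python's own parser can handle the string."""
    for op in ['==>', '<==', '<=>']:
        x = x.replace(op, "|'%s'|" % op)
    return x

print(expr_handle_infix_ops('~(P & Q) ==> (~P | ~Q)'))
# ~(P & Q) |'==>'| (~P | ~Q)
```

The second trick then `eval`s the rewritten string in an environment whose lookups produce symbols on demand.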
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
```
P & Q |'==>'| P | Q
```
which is probably not what we meant; when in doubt, put in extra parens:
```
(P & Q) |'==>'| (P | Q)
```
## Examples
```
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
```
# Authors
This notebook by [Chirag Vartak](https://github.com/chiragvartak) and [Peter Norvig](https://github.com/norvig).
```
# !pip install datasets
# !pip install matplotlib
import utilities as utils
import pandas as pd
import numpy as np
from collections import Counter
from datasets import load_dataset
```
## Loading the data
For the sake of the example, assume our entire corpus is just the training data from the NER Hin-Eng dataset in LinCE:
```
corpus = {
'original': load_dataset('lince', 'ner_hineng', split="train")
}
print(corpus)
```
This training data has a good CMI (it matches Table 3 in the paper):
```
# cmi_stats = utils.get_corpus_cmi(corpus, langs={'eng', 'eng&spa', 'spa'})
# cmi_stats = utils.get_corpus_cmi(corpus, langs={'lang1', 'lang2', 'mixed', 'fw'})
cmi_stats = utils.get_corpus_cmi(corpus, langs={'en', 'hi'})
cmi_stats
```
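For reference, the CMI of a single sentence can be sketched as follows, using the commonly cited formula from Das and Gambäck (2014); `utils.get_corpus_cmi` is assumed to aggregate per-sentence values like this over the corpus:

```python
from collections import Counter

def sentence_cmi(lid_tags, langs):
    """Code-Mixing Index for one sentence:
    CMI = 100 * (1 - max_lang / (n - u)), where n is the number of tokens,
    u the number of language-independent tokens, and max_lang the count of
    the dominant language. Returns 0 for monolingual or empty sentences."""
    counts = Counter(tag for tag in lid_tags if tag in langs)
    n = len(lid_tags)
    u = n - sum(counts.values())     # tokens not tagged with a language
    if not counts or n == u:
        return 0.0
    return 100.0 * (1 - max(counts.values()) / (n - u))

# Mixed sentence: 2 'en' tokens, 1 'hi' token, 1 language-independent token
print(sentence_cmi(['en', 'en', 'hi', 'other'], langs={'en', 'hi'}))
```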
## Inspecting the original corpus
Recall that corpus is assumed to be our entire data (train + dev + test). We need to inspect the distribution of different aspects in the entire corpus to make sure they replicate in the stratified splits:
```
def plot_sentence_lengths(corpus):
for split in corpus:
counts = Counter([len(sample) for sample in corpus[split]['words']])
counts_df = pd.DataFrame({'sentence_length': counts})
counts_df.plot.bar(title=split.title() + ' - Sentence Lengths', figsize=(14,5))
plot_sentence_lengths(corpus)
def split_counters(datasplit, field):
return Counter(utils.flatten(datasplit[field]))
def corpus_counters(corpus, field):
counters = {}
for dataset in corpus:
counters[dataset] = split_counters(corpus[dataset], field)
return counters
pd.DataFrame(corpus_counters(corpus, field='lid'))
pd.DataFrame(corpus_counters(corpus, field='lid')).plot.bar()
pd.DataFrame(corpus_counters(corpus, field='ner'))[:-1].plot.bar() # [:-1] to ignore the O label for the plot
print('Sentences: {:,}'.format(len(corpus['original']['words'])))
print(' Tokens: {:,}'.format(len(utils.flatten(corpus['original']['words']))))
```
## Stratification
The stratification needs to consider:
* a similar distribution of entity labels
* a similar distribution of langid labels
* a similar distribution of sentence lengths
* the least number of overlapping entity instances between train and dev/test
* a ratio of 70:10:20 (47,056 : 6,722 : 13,445)
```
def unify_labels_for_stratification(unified_corpus, extra_labels=[], snd_label_type='ner'):
unified_labels = Counter()
unified_labels.update(split_counters(unified_corpus, field='lid'))
unified_labels.update(split_counters(unified_corpus, field=snd_label_type))
unified_labels = {label: i for i, label in enumerate(sorted(unified_labels.keys()) + extra_labels)}
return unified_labels
def length_to_category(length):
if length <= 5:
return 'small'
elif 5 < length <= 10:
return 'medium'
else:
return 'large'
def uniq_labels_per_sample(dataset, label_mapper, snd_label_type='ner'):
unified = []
for i in range(len(dataset['words'])):
lid_encoding = [label_mapper[lid] for lid in dataset['lid'][i]]
snd_encoding = [label_mapper[snd] for snd in dataset[snd_label_type][i]]
len_encoding = [label_mapper[length_to_category(len(dataset['words'][i]))]]
unified.append(sorted(set(lid_encoding + snd_encoding + len_encoding)))
return unified
def stratify_train_test(full_dataset, label2index, eval_ratio):
train_dataset = {'words': [], 'lid': [], 'ner': []}
test_dataset = {'words': [], 'lid': [], 'ner': []}
label_data = uniq_labels_per_sample(full_dataset, label2index)
label_uniq = sorted(label2index.values())
train_ratio = 1 - eval_ratio
split_indexes, split_labels = utils.stratify(label_data, label_uniq, [train_ratio, eval_ratio])
for split_i, dataset in enumerate([train_dataset, test_dataset]):
for sample_index in split_indexes[split_i]:
dataset['words'].append(full_dataset['words'][sample_index])
dataset['lid'].append(full_dataset['lid'][sample_index])
dataset['ner'].append(full_dataset['ner'][sample_index])
return train_dataset, test_dataset
def stratify_corpus(full_dataset, label2index, ratio):
temp_ratio = sum(ratio[1:]) # the sum of ratios for dev and test
train_dataset, temp_dataset = stratify_train_test(full_dataset, label2index, temp_ratio)
test_ratio = ratio[2] / temp_ratio # test ratio divided by the sum of dev and test ratio
dev_dataset, test_dataset = stratify_train_test(temp_dataset, label2index, test_ratio)
corpus = {
'train': train_dataset,
'dev': dev_dataset,
'test': test_dataset
}
return corpus
```
Merge all the existing label types in our corpus (lid, ner, and sentence length categories):
```
label2index = unify_labels_for_stratification(corpus['original'], extra_labels=['small', 'medium', 'large'])
index2label = {index: label for label, index in label2index.items()}
index2label
```
Let's run the stratification now that we have everything ready (this may take a few minutes for large datasets):
```
new_corpus = stratify_corpus(corpus['original'], label2index, ratio=[0.5, 0.15, 0.35])
```
## Inspecting the resulting stratified data
```
print("Train: {:,}".format(len(new_corpus['train']['words'])))
print(" Dev: {:,}".format(len(new_corpus['dev']['words'])))
print(" Test: {:,}".format(len(new_corpus['test']['words'])))
new_lid_df = pd.DataFrame({
'original': split_counters(corpus['original'], field='lid'),
'train': split_counters(new_corpus['train'], field='lid'),
'dev': split_counters(new_corpus['dev'], field='lid'),
'test': split_counters(new_corpus['test'], field='lid'),
})
print(new_lid_df)
print(new_lid_df.plot.bar())
new_ner_df = pd.DataFrame({
'original': split_counters(corpus['original'], field='ner'),
'train': split_counters(new_corpus['train'], field='ner'),
'dev': split_counters(new_corpus['dev'], field='ner'),
'test': split_counters(new_corpus['test'], field='ner'),
})
print(new_ner_df)
print(new_ner_df[1:].plot.bar())
plot_sentence_lengths(new_corpus)
plot_sentence_lengths(corpus)
```
KL-divergence between the new splits and the whole corpus (expect small numbers, on the order of 10^-4 or smaller):
```
def get_kl_div(p_dist, q_dist):
p_np = []
q_np = []
for key in sorted(p_dist.keys()):
if key not in q_dist:
q_dist[key] = 1e-6
p_sum = sum(p_dist.values())
q_sum = sum(q_dist.values())
for key in sorted(p_dist.keys()):
p_np.append(p_dist[key] / p_sum)
q_np.append(q_dist[key] / q_sum)
return utils.numpy_kl_div(np.array(p_np), np.array(q_np))
def print_kl_divergence(label_type='lid'):
orig_dist = dict(Counter(utils.flatten(corpus['original'][label_type])))
for split in new_corpus:
split_dist = dict(Counter(utils.flatten(new_corpus[split][label_type])))
kl_div = get_kl_div(orig_dist, split_dist)
print(split, kl_div)
print_kl_divergence('lid')
```
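For completeness, the `utils.numpy_kl_div` helper assumed above could look like this (the standard discrete KL divergence):

```python
import numpy as np

def numpy_kl_div(p, q):
    """KL(P || Q) = sum_i p_i * log(p_i / q_i) for discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

print(numpy_kl_div([0.5, 0.5], [0.5, 0.5]))   # 0.0 for identical distributions
print(numpy_kl_div([0.9, 0.1], [0.5, 0.5]))   # positive for differing ones
```

Note that `get_kl_div` above smooths missing keys in `q_dist` with a tiny constant precisely so this ratio never divides by zero.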
## Model Layers
This module contains many layer classes that we might be interested in using in our models. These layers complement the default [PyTorch layers](https://pytorch.org/docs/stable/nn.html) which we can also use as predefined layers.
```
from fastai.vision import *
from fastai.gen_doc.nbdoc import *
```
## Custom fastai modules
```
show_doc(AdaptiveConcatPool2d, title_level=3)
from fastai.gen_doc.nbdoc import *
from fastai.layers import *
```
The output will be `2*sz`, or just 2 if `sz` is None.
The [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d) object uses adaptive average pooling and adaptive max pooling and concatenates them both. We use this because it provides the model with the information of both methods and improves performance. This technique is called `adaptive` because it allows us to decide on what output dimensions we want, instead of choosing the input's dimensions to fit a desired output size.
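The idea can be sketched in plain PyTorch (a simplified stand-in for fastai's `AdaptiveConcatPool2d`, shown only to make the channel doubling concrete):

```python
import torch
import torch.nn as nn

class ConcatPool2d(nn.Module):
    """Concatenate adaptive max pooling and adaptive average pooling."""
    def __init__(self, sz=1):
        super().__init__()
        self.ap = nn.AdaptiveAvgPool2d(sz)
        self.mp = nn.AdaptiveMaxPool2d(sz)

    def forward(self, x):
        # Stack both pooled results along the channel dimension
        return torch.cat([self.mp(x), self.ap(x)], dim=1)

x = torch.randn(2, 16, 8, 8)
print(ConcatPool2d()(x).shape)   # channels double: torch.Size([2, 32, 1, 1])
```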
Let's try training with Adaptive Average Pooling first, then with Adaptive Max Pooling and finally with the concatenation of them both to see how they fare in performance.
We will first define a [`simple_cnn`](/layers.html#simple_cnn) using [Adaptive Max Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveMaxPool2d) by changing the source code a bit.
```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
def simple_cnn_max(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(nn.AdaptiveMaxPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn_max((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
```
Now let's try with [Adaptive Average Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d).
```
def simple_cnn_avg(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn_avg((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
```
Finally we will try with the concatenation of them both [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d). We will see that, in fact, it increases our accuracy and decreases our loss considerably!
```
def simple_cnn(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(AdaptiveConcatPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
show_doc(Lambda, title_level=3)
```
This is very useful to use functions as layers in our networks inside a [Sequential](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential) object. So, for example, say we want to apply a [log_softmax loss](https://pytorch.org/docs/stable/nn.html#torch.nn.functional.log_softmax) and we need to change the shape of our output batches to be able to use this loss. We can add a layer that applies the necessary change in shape by calling:
`Lambda(lambda x: x.view(x.size(0),-1))`
Let's see an example of how the shape of our output can change when we add this layer.
```
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0),-1))
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
show_doc(Flatten)
```
The function we build above is actually implemented in our library as [`Flatten`](/layers.html#Flatten). We can see that it returns the same size when we run it.
```
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Flatten(),
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
show_doc(PoolFlatten)
```
We can combine these two final layers ([AdaptiveAvgPool2d](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d) and [`Flatten`](/layers.html#Flatten)) by using [`PoolFlatten`](/layers.html#PoolFlatten).
```
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
PoolFlatten()
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
```
Another use we give to the Lambda function is to resize batches with [`ResizeBatch`](/layers.html#ResizeBatch) when we have a layer that expects a different input than what comes from the previous one.
```
show_doc(ResizeBatch)
a = torch.tensor([[1., -1.], [1., -1.]])
print(a)
out = ResizeBatch(4)
print(out(a))
show_doc(Debugger, title_level=3)
```
The debugger module allows us to peek inside a network while it's training and see in detail what is going on. We can see inputs, outputs and sizes at any point in the network.
For instance, if you run the following:
``` python
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
Debugger(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)
model.cuda()
learner = Learner(data, model, metrics=[accuracy])
learner.fit(5)
```
... you'll see something like this:
```
/home/ubuntu/fastai/fastai/layers.py(74)forward()
72 def forward(self,x:Tensor) -> Tensor:
73 set_trace()
---> 74 return x
75
76 class StdUpsample(nn.Module):
ipdb>
```
```
show_doc(PixelShuffle_ICNR, title_level=3)
show_doc(MergeLayer, title_level=3)
show_doc(PartialLayer, title_level=3)
show_doc(SigmoidRange, title_level=3)
show_doc(SequentialEx, title_level=3)
show_doc(SelfAttention, title_level=3)
show_doc(BatchNorm1dFlat, title_level=3)
```
## Loss functions
```
show_doc(FlattenedLoss, title_level=3)
```
Create an instance of `func` with `args` and `kwargs`. When passing an output and target, it
- puts `axis` first in the output and target with a transpose
- casts the target to `float` if `floatify=True`
- squeezes the `output` to two dimensions if `is_2d`, otherwise to one dimension, and squeezes the target to one dimension
- applies the instance of `func`.
```
show_doc(BCEFlat)
show_doc(BCEWithLogitsFlat)
show_doc(CrossEntropyFlat)
show_doc(MSELossFlat)
show_doc(NoopLoss)
show_doc(WassersteinLoss)
```
## Helper functions to create modules
```
show_doc(bn_drop_lin, doc_string=False)
```
The [`bn_drop_lin`](/layers.html#bn_drop_lin) function returns a sequence of [batch normalization](https://arxiv.org/abs/1502.03167), [dropout](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf) and a linear layer. This custom layer is usually used at the end of a model.
`n_in` represents the size of the input, `n_out` the size of the output, `bn` whether we want batch norm or not, `p` how much dropout to apply, and `actn` is an optional parameter to add an activation function at the end.
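A hedged sketch of what this sequence could look like (not the fastai source; `bn_drop_lin_sketch` is an illustrative name, and fastai returns the layer list to be wrapped in an `nn.Sequential`):

```python
import torch.nn as nn

def bn_drop_lin_sketch(n_in, n_out, bn=True, p=0.0, actn=None):
    """BatchNorm -> Dropout -> Linear (-> optional activation), as a layer list."""
    layers = [nn.BatchNorm1d(n_in)] if bn else []
    if p != 0:
        layers.append(nn.Dropout(p))
    layers.append(nn.Linear(n_in, n_out))
    if actn is not None:
        layers.append(actn)
    return layers

# Typical use: the classification head at the end of a model
head = nn.Sequential(*bn_drop_lin_sketch(512, 10, p=0.5, actn=nn.ReLU()))
print(head)
```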
```
show_doc(conv2d)
show_doc(conv2d_trans)
show_doc(conv_layer, doc_string=False)
```
The [`conv_layer`](/layers.html#conv_layer) function returns a sequence of [nn.Conv2D](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d), [BatchNorm](https://arxiv.org/abs/1502.03167) and a ReLU or [leaky RELU](https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf) activation function.
`n_in` represents the size of the input, `n_out` the size of the output, `ks` the kernel size, and `stride` the stride with which we want to apply the convolutions. `bias` will decide if they have bias or not (if None, defaults to True unless using batchnorm). `norm_type` selects the type of normalization (or `None`). If `leaky` is None, the activation is a standard `ReLU`, otherwise it's a `LeakyReLU` of slope `leaky`. Finally if `transpose=True`, the convolution is replaced by a `ConvTranspose2D`.
```
show_doc(embedding, doc_string=False)
```
Create an [embedding layer](https://arxiv.org/abs/1711.09160) with input size `ni` and output size `nf`.
```
show_doc(relu)
show_doc(res_block)
show_doc(sigmoid_range)
show_doc(simple_cnn)
```
## Initialization of modules
```
show_doc(batchnorm_2d)
show_doc(icnr)
show_doc(trunc_normal_)
show_doc(NormType)
```
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(Debugger.forward)
show_doc(Lambda.forward)
show_doc(AdaptiveConcatPool2d.forward)
show_doc(NoopLoss.forward)
show_doc(PixelShuffle_ICNR.forward)
show_doc(WassersteinLoss.forward)
show_doc(MergeLayer.forward)
show_doc(SigmoidRange.forward)
show_doc(MergeLayer.forward)
show_doc(SelfAttention.forward)
show_doc(SequentialEx.forward)
show_doc(SequentialEx.append)
show_doc(SequentialEx.extend)
show_doc(SequentialEx.insert)
show_doc(PartialLayer.forward)
show_doc(BatchNorm1dFlat.forward)
show_doc(Flatten.forward)
```
## New Methods - Please document or move to the undocumented section
So far you mastered the notation of quantum mechanics and quantum computing, understood as much physics as needed to perform various operations on quantum states, and now you are ready to build quantum algorithms. In this notebook, we look at the basics of gate-model quantum computing, which is sometimes also referred to as universal quantum computing. Most academic and commercial efforts to build a quantum computer focus on this model: Alibaba, Baidu, Google, HP, IBM Q, Intel, IonQ, Microsoft, Rigetti Computing, and Tencent all aim at this, and the list keeps expanding. It remains unclear which implementation will prove scalable: superconducting chips, photonic systems, and ion traps are the most common types, each having its own advantages and disadvantages. We abstract away from the hardware and focus on the quantum algorithms irrespective of the physical implementation.
To get there, first we have to familiarize ourselves with some gates and what happens to those gates on quantum computers. The following diagram shows the software stack that bridges a problem we want to solve with the actual computational back-end [[1](#1)]:
<img src="../figures/universal_quantum_workflow.png" alt="Software stack on a gate-model quantum computer" style="width: 400px;"/>
First, we define the problem at a high-level and a suitable quantum algorithm is chosen. Then, we express the quantum algorithm as a quantum circuit composed of gates. This in turn has to be compiled to a specific quantum gate set available. The last step is to execute the final circuit either on a quantum processor or on a simulator.
The quantum algorithms we are interested in are about machine learning. In this notebook, we look at the levels below algorithms: the definition of circuits, their compilation, and the mapping to the hardware or a simulator.
# Defining circuits
Circuits are composed of qubit registers, gates acting on them, and measurements on the registers. To store the outcome of registers, many quantum computing libraries add classical registers to the circuits. Even by this language, you can tell that this is a very low level of programming a computer. It resembles the assembly language of digital computers, in which a program consists of machine code instructions.
Qubit registers are indexed from 0. We often just say qubit 0, qubit 1, and so on, to refer to the register containing a qubit. This is not to be confused with the actual state of the qubit, which can be $|0\rangle$, $|1\rangle$, or any superposition thereof. For instance, qubit 0 can be in the state $|1\rangle$.
Let's take a look at the gates. In digital computing, a processor transforms bit strings into bit strings with logical gates. Any such transformation can be achieved with just two types of gates, which makes universal computation possible with simple operations composed only of these two gate types. It is remarkable and surprising that the same is also true for quantum computers: any unitary operation can be decomposed into elementary gates, and three types of gates are sufficient. This is remarkable since we are talking about transforming continuous-valued probability amplitudes, not just discrete elements. Yet, this result is what provides the high-level theoretical foundation for being able to build a universal quantum computer at all.
Let's look at some common gates, some of which we have already seen. Naturally, all of these are unitary.
| Gate |Name | Matrix |
|------|--------------------|---------------------------------------------------------------------|
| X | Pauli-X or NOT gate|$\begin{bmatrix}0 & 1\\ 1& 0\end{bmatrix}$|
| Z | Pauli-Z gate |$\begin{bmatrix}1 & 0\\ 0& -1\end{bmatrix}$|
| H | Hadamard gate |$\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1\\ 1& -1\end{bmatrix}$|
| Rx($\theta$)| Rotation around X|$\begin{bmatrix}\cos(\theta/2) & -\imath \sin(\theta/2)\\ -\imath \sin(\theta / 2) & \cos(\theta / 2)\end{bmatrix}$|
| Ry($\theta$)| Rotation around Y|$\begin{bmatrix}\cos(\theta/2) & -\sin(\theta/2)\\ \sin(\theta / 2) & \cos(\theta / 2)\end{bmatrix}$|
| CNOT, CX | Controlled-NOT | $\begin{bmatrix}1 & 0 & 0 &0\\ 0 & 1 & 0 &0\\ 0 & 0 & 0 &1\\ 0 & 0 & 1 &0\end{bmatrix}$|
As we have seen before, the rotations correspond to the axes defined on the Bloch sphere.
There should be one thing immediately apparent from the table: there are many, in fact, infinitely many single-qubit operations. The rotations, for instance, are parametrized by a continuous value. This is in stark contrast with digital circuits, where the only non-trivial single-bit gate is the NOT gate.
The CNOT gate is the only two-qubit gate in this list. It has a special role: we need two-qubit interactions to create entanglement. Let's repeat the circuit for creating the $|\phi^+\rangle = \frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$. We will have two qubit registers and two classical registers for measurement output. First, let's define the circuit and plot it:
```
import numpy as np
from pyquil import Program, get_qc
from pyquil.api import WavefunctionSimulator
from pyquil.gates import *
from forest_tools import *
np.set_printoptions(precision=3, suppress=True)
%matplotlib inline
qvm_server, quilc_server, fc = init_qvm_and_quilc()
qc = get_qc('2q-qvm', connection=fc)
wf_sim = WavefunctionSimulator(connection=fc)
circuit = Program()
circuit += H(0)
circuit += CNOT(0, 1)
plot_circuit(circuit)
```
Note that we can't just initialize the qubit registers in a state we fancy. All registers are initialized in $|0\rangle$ and creating a desired state is **part** of the circuit. In a sense, arbitrary state preparation is the same as universal quantum computation: the end of the calculation is a state that we desired to prepare. Some states are easier to prepare than others. The above circuit has only two gates to prepare our target state, so it is considered very easy.
Let us see what happens in this circuit. The Hadamard gate prepares an equal superposition $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ in qubit 0. This qubit controls an X gate on qubit 1. Since qubit 0 is in the equal superposition after the Hadamard gate, it will not apply the X gate for the first part of the superposition ($|0\rangle$) and it will apply the X gate for the second part of the superposition ($|1\rangle$). Thus we create the final state $\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$, and we entangle the two qubit registers.
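The same state preparation can be checked with plain linear algebra, independent of any quantum SDK (here qubit 0 is taken as the left factor in the tensor product, matching the CNOT matrix in the table above):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi0 = np.array([1.0, 0.0, 0.0, 0.0])          # |00>
psi = CNOT @ np.kron(H, I) @ psi0              # H on qubit 0, then CNOT
print(psi)                                     # nonzero amplitudes only on |00> and |11>
```

The resulting vector has amplitude $1/\sqrt{2}$ on $|00\rangle$ and $|11\rangle$ and zero elsewhere, as claimed.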
A digital computer's processing unit typically has 64-bit registers and it is able to perform universal calculations on bit strings. Any complex calculation is broken down into elementary 64-bit operations, either sequentially or in parallel execution. So you may wonder what is the deal with the thousands of qubits we expect from a quantum computer. Why can't a 64-qubit quantum computer be enough?
Entanglement is the easiest way to understand why we need so many qubits. Entanglement is a key resource in quantum computing and we want to make use of it. If we have 64 qubits and we want to entangle another one outside these 64 registers, we would have to get rid of the qubit in one of the registers, potentially destroying a superposition and definitely destroying entanglement between that register and any other qubit on the chip. The only way to make use of superpositions and the strong correlations provided by entanglement is if the entire problem is on the quantum processing unit for the duration of the calculation.
This global nature of the calculation is also the reason why there is a focus on problems that are difficult to break down into elementary calculations. The travelling salesman problem is a great example: we need to consider all cities and all distances to minimize overall travel length.
To finish off the circuit, we could add a measurement to each qubit and plot the statistics:
```
ro = circuit.declare('ro', 'BIT', 2)
circuit += MEASURE(0, ro[0])
circuit += MEASURE(1, ro[1])
circuit.wrap_in_numshots_loop(100)
executable = qc.compile(circuit)
result = qc.run(executable)
plot_histogram(result)
```
As we have seen before, 01 and 10 never appear.
# Compilation
The circuit is the way to describe a quantum algorithm. It may also contain arbitrary single- or two-qubit unitaries and controlled versions thereof. A quantum compiler should be able to decompose these into elementary gates.
This is one task of a quantum compiler. The next one is to translate the gates given in the circuit to the gates implemented in the hardware or the simulator. In the table above, we defined many gates, but a well-chosen set of three is sufficient for universality. Due to engineering constraints, typically one minimal set of universal gates is implemented in the hardware; which three depends on the physical architecture.
At this point, the number of gates applied is probably already increasing: the decomposition of a unitary will create many gates and the translation of gates is also likely to add more gates. An additional problem is the topology of the qubits: in some implementations not all qubit registers are connected to each other. The most popular implementation is superconducting qubits, which are manufactured on silicon chips just like any digital device you have. Since this is a quintessentially two-dimensional layout, most qubits on the chip will not be connected. Here is an example topology of eight qubits on a superconducting quantum computer:
<img src="../figures/eight_qubits.svg" alt="8-qubit topology" style="width: 200px;"/>
If we want to perform a two-qubit operation between two qubits that are not neighbouring, we have to perform SWAP operations to move the qubit states between registers. A SWAP consists of three CNOT gates in a sequence.
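This decomposition is easy to verify numerically. Writing the matrices in the $|q_0 q_1\rangle$ basis, with the middle CNOT having its control and target reversed:

```python
import numpy as np

CNOT01 = np.array([[1, 0, 0, 0],   # control qubit 0, target qubit 1
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]])
CNOT10 = np.array([[1, 0, 0, 0],   # control qubit 1, target qubit 0
                   [0, 0, 0, 1],
                   [0, 0, 1, 0],
                   [0, 1, 0, 0]])
SWAP = np.array([[1, 0, 0, 0],     # exchanges |01> and |10>
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

print(np.array_equal(CNOT01 @ CNOT10 @ CNOT01, SWAP))   # True
```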
The total number of gates at the end of the compilation reflects the true requirement of the hardware. *Circuit depth* is the number of time steps required to execute the circuit, assuming that gates acting on distinct qubits can operate in parallel. On current and near-term quantum computers, we want circuits to be shallow, otherwise decoherence or other forms of noise destroy our calculations.
We have to emphasize that the compilation depends on the backend. In this case, the circuit depth increases significantly, due to the lack of a native CNOT gate on the backend:
```
executable = qc.compile(circuit)
print(executable.program)
```
# References
[1] M. Fingerhuth, T. Babej, P. Wittek. (2018). [Open source software in quantum computing](https://doi.org/10.1371/journal.pone.0208561). *PLOS ONE* 13(12):e0208561. <a id='1'></a>
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D4_ReinforcementLearning/W3D4_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial 3: Learning to Act: Q-Learning
**Week 3, Day 4: Reinforcement Learning**
**By Neuromatch Academy**
__Content creators:__ Marcelo Mattar and Eric DeWitt with help from Byron Galbraith
__Content reviewers:__ Ella Batty, Matt Krause and Michael Waskom
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
---
# Tutorial Objectives
*Estimated timing of tutorial: 40 min*
In this tutorial you will learn how to act in the more realistic setting of sequential decisions, formalized by Markov Decision Processes (MDPs). In a sequential decision problem, the actions executed in one state not only may lead to immediate rewards (as in a bandit problem), but may also affect the states experienced next (unlike a bandit problem). Each individual action may therefore affect all future rewards. Thus, making decisions in this setting requires considering each action in terms of its expected **cumulative** future reward.
We will consider here the example of spatial navigation, where actions (movements) in one state (location) affect the states experienced next, and an agent might need to execute a whole sequence of actions before a reward is obtained.
By the end of this tutorial, you will learn
* what grid worlds are and how they help in evaluating simple reinforcement learning agents
* the basics of the Q-learning algorithm for estimating action values
* how the concept of exploration and exploitation, reviewed in the bandit case, also applies to the sequential decision setting
```
# @title Tutorial slides
# @markdown These are the slides for all videos in this tutorial.
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/2jzdu/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
```
---
# Setup
```
# Imports
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import convolve as conv
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Plotting Functions
def plot_state_action_values(env, value, ax=None):
"""
Generate plot showing value of each action at each state.
"""
if ax is None:
fig, ax = plt.subplots()
for a in range(env.n_actions):
ax.plot(range(env.n_states), value[:, a], marker='o', linestyle='--')
ax.set(xlabel='States', ylabel='Values')
ax.legend(['R','U','L','D'], loc='lower right')
def plot_quiver_max_action(env, value, ax=None):
"""
Generate plot showing action of maximum value or maximum probability at
each state (not for n-armed bandit or cheese_world).
"""
if ax is None:
fig, ax = plt.subplots()
X = np.tile(np.arange(env.dim_x), [env.dim_y,1]) + 0.5
Y = np.tile(np.arange(env.dim_y)[::-1][:,np.newaxis], [1,env.dim_x]) + 0.5
which_max = np.reshape(value.argmax(axis=1), (env.dim_y,env.dim_x))
which_max = which_max[::-1,:]
U = np.zeros(X.shape)
V = np.zeros(X.shape)
U[which_max == 0] = 1
V[which_max == 1] = 1
U[which_max == 2] = -1
V[which_max == 3] = -1
ax.quiver(X, Y, U, V)
ax.set(
title='Maximum value/probability actions',
xlim=[-0.5, env.dim_x+0.5],
ylim=[-0.5, env.dim_y+0.5],
)
ax.set_xticks(np.linspace(0.5, env.dim_x-0.5, num=env.dim_x))
ax.set_xticklabels(["%d" % x for x in np.arange(env.dim_x)])
ax.set_xticks(np.arange(env.dim_x+1), minor=True)
ax.set_yticks(np.linspace(0.5, env.dim_y-0.5, num=env.dim_y))
ax.set_yticklabels(["%d" % y for y in np.arange(0, env.dim_y*env.dim_x,
env.dim_x)])
ax.set_yticks(np.arange(env.dim_y+1), minor=True)
ax.grid(which='minor',linestyle='-')
def plot_heatmap_max_val(env, value, ax=None):
"""
Generate heatmap showing maximum value at each state
"""
if ax is None:
fig, ax = plt.subplots()
if value.ndim == 1:
value_max = np.reshape(value, (env.dim_y,env.dim_x))
else:
value_max = np.reshape(value.max(axis=1), (env.dim_y,env.dim_x))
value_max = value_max[::-1,:]
im = ax.imshow(value_max, aspect='auto', interpolation='none', cmap='afmhot')
ax.set(title='Maximum value per state')
ax.set_xticks(np.linspace(0, env.dim_x-1, num=env.dim_x))
ax.set_xticklabels(["%d" % x for x in np.arange(env.dim_x)])
ax.set_yticks(np.linspace(0, env.dim_y-1, num=env.dim_y))
if env.name != 'windy_cliff_grid':
ax.set_yticklabels(
["%d" % y for y in np.arange(
0, env.dim_y*env.dim_x, env.dim_x)][::-1])
return im
def plot_rewards(n_episodes, rewards, average_range=10, ax=None):
"""
Generate plot showing total reward accumulated in each episode.
"""
if ax is None:
fig, ax = plt.subplots()
smoothed_rewards = (conv(rewards, np.ones(average_range), mode='same')
/ average_range)
ax.plot(range(0, n_episodes, average_range),
smoothed_rewards[0:n_episodes:average_range],
marker='o', linestyle='--')
ax.set(xlabel='Episodes', ylabel='Total reward')
def plot_performance(env, value, reward_sums):
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(16, 12))
plot_state_action_values(env, value, ax=axes[0,0])
plot_quiver_max_action(env, value, ax=axes[0,1])
plot_rewards(n_episodes, reward_sums, ax=axes[1,0])
im = plot_heatmap_max_val(env, value, ax=axes[1,1])
fig.colorbar(im)
```
---
# Section 1: Markov Decision Processes
```
# @title Video 1: MDPs and Q-learning
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1ft4y1Q7bX", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="8yvwMrUQJOU", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
**Grid Worlds**
As pointed out, bandits only have a single state and immediate rewards for our actions. Many problems we are interested in have multiple states and delayed rewards, i.e. we won't know if the choices we made will pay off over time, or which actions we took contributed to the outcomes we observed.
In order to explore these ideas, we turn to a common problem setting: the grid world. Grid worlds are simple environments where each state corresponds to a tile on a 2D grid, and the only actions the agent can take are to move up, down, left, or right across the grid tiles. The agent's job is almost always to find a way to a goal tile in the most direct way possible while overcoming some maze or other obstacles, either static or dynamic.
For our discussion we will be looking at the classic Cliff World, or Cliff Walker, environment. This is a 4x10 grid with a starting position in the lower-left and the goal position in the lower-right. Every tile between these two is the "cliff", and should the agent enter the cliff, they will receive a -100 reward and be sent back to the starting position. Every tile other than the cliff produces a -1 reward when entered. The goal tile ends the episode after taking any action from it.
<img alt="CliffWorld" width="577" height="308" src="https://github.com/NeuromatchAcademy/course-content/blob/master/tutorials/static/W3D4_Tutorial3_CliffWorld.png?raw=true">
Given these conditions, the maximum achievable reward is -11 (1 up, 9 right, 1 down). Using negative rewards is a common technique to encourage the agent to move and seek out the goal state as fast as possible.
---
# Section 2: Q-Learning
*Estimated timing to here from start of tutorial: 20 min*
Now that we have our environment, how can we solve it?
One of the most famous algorithms for estimating action values (aka Q-values) is the Temporal Differences (TD) **control** algorithm known as *Q-learning* (Watkins, 1989).
\begin{align}
Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha \big(r_t + \gamma \max_{a} Q(s_{t+1},a) - Q(s_t,a_t)\big)
\end{align}
where $Q(s,a)$ is the value function for action $a$ at state $s$, $\alpha$ is the learning rate, $r$ is the reward, and $\gamma$ is the temporal discount rate.
The expression $r_t + \gamma \max_{a} Q(s_{t+1},a)$ is referred to as the TD target, while the full expression
\begin{align}
r_t + \gamma \max_{a} Q(s_{t+1},a) - Q(s_t,a_t),
\end{align}
i.e. the difference between the TD target and the current Q-value, is referred to as the TD error, or reward prediction error.
Because of the max operator used to select the optimal Q-value in the TD target, Q-learning directly estimates the optimal action value, i.e. the cumulative future reward that would be obtained if the agent behaved optimally, regardless of the policy currently followed by the agent. For this reason, Q-learning is referred to as an **off-policy** method.
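As a concrete sketch of a single update, here is the rule applied once to a toy two-state Q-table (the values of `alpha`, `gamma`, the reward, and the next-state Q-values are made up for illustration):

```python
import numpy as np

alpha, gamma = 0.1, 1.0
Q = np.zeros((2, 2))       # 2 states x 2 actions
Q[1] = [5.0, 3.0]          # assumed action values at the next state

s, a, r, s_next = 0, 0, -1.0, 1
td_target = r + gamma * Q[s_next].max()   # -1 + 5 = 4.0
td_error = td_target - Q[s, a]            # 4.0 - 0 = 4.0
Q[s, a] += alpha * td_error               # 0 + 0.1 * 4.0
print(Q[s, a])  # 0.4
```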
## Coding Exercise 2: Implement the Q-learning algorithm
In this exercise you will implement the Q-learning update rule described above. It takes in as arguments the previous state $s_t$, the action $a_t$ taken, the reward received $r_t$, the current state $s_{t+1}$, the Q-value table, and a dictionary of parameters that contain the learning rate $\alpha$ and discount factor $\gamma$. The method returns the updated Q-value table. For the parameter dictionary, $\alpha$: `params['alpha']` and $\gamma$: `params['gamma']`.
Once we have our Q-learning algorithm, we will see how it handles learning to solve the Cliff World environment.
You will recall from the previous tutorial that a major part of reinforcement learning algorithms is their ability to balance exploitation and exploration. For our Q-learning agent, we again turn to the epsilon-greedy strategy. At each step, the agent will, with probability $1 - \epsilon$, choose the best action for its current state according to the value function; otherwise it will choose an action at random.
The process by which the agent will interact with and learn about the environment is handled for you in the helper function `learn_environment`. This implements the entire learning episode lifecycle of stepping through the state observation, action selection (epsilon-greedy) and execution, reward, and state transition. Feel free to review that code later to see how it all fits together, but for now let's test out our agent.
```
# @markdown Execute to get helper functions `epsilon_greedy`, `CliffWorld`, and `learn_environment`
def epsilon_greedy(q, epsilon):
"""Epsilon-greedy policy: selects the maximum value action with probabilty
(1-epsilon) and selects randomly with epsilon probability.
Args:
q (ndarray): an array of action values
epsilon (float): probability of selecting an action randomly
Returns:
int: the chosen action
"""
if np.random.random() > epsilon:
action = np.argmax(q)
else:
action = np.random.choice(len(q))
return action
class CliffWorld:
"""
World: Cliff world.
40 states (4-by-10 grid world).
The mapping from state to the grids are as follows:
30 31 32 ... 39
20 21 22 ... 29
10 11 12 ... 19
0 1 2 ... 9
0 is the starting state (S) and 9 is the goal state (G).
Actions 0, 1, 2, 3 correspond to right, up, left, down.
Moving anywhere from state 9 (goal state) will end the session.
Taking action down at state 11-18 will go back to state 0 and incur a
reward of -100.
Landing in any states other than the goal state will incur a reward of -1.
Going towards the border when already at the border will stay in the same
place.
"""
def __init__(self):
self.name = "cliff_world"
self.n_states = 40
self.n_actions = 4
self.dim_x = 10
self.dim_y = 4
self.init_state = 0
def get_outcome(self, state, action):
if state == 9: # goal state
reward = 0
next_state = None
return next_state, reward
reward = -1 # default reward value
if action == 0: # move right
next_state = state + 1
if state % 10 == 9: # right border
next_state = state
elif state == 0: # start state (next state is cliff)
next_state = None
reward = -100
elif action == 1: # move up
next_state = state + 10
if state >= 30: # top border
next_state = state
elif action == 2: # move left
next_state = state - 1
if state % 10 == 0: # left border
next_state = state
elif action == 3: # move down
next_state = state - 10
if state >= 11 and state <= 18: # next is cliff
next_state = None
reward = -100
elif state <= 9: # bottom border
next_state = state
else:
print("Action must be between 0 and 3.")
next_state = None
reward = None
return int(next_state) if next_state is not None else None, reward
def get_all_outcomes(self):
outcomes = {}
for state in range(self.n_states):
for action in range(self.n_actions):
next_state, reward = self.get_outcome(state, action)
outcomes[state, action] = [(1, next_state, reward)]
return outcomes
def learn_environment(env, learning_rule, params, max_steps, n_episodes):
# Start with a uniform value function
value = np.ones((env.n_states, env.n_actions))
# Run learning
reward_sums = np.zeros(n_episodes)
# Loop over episodes
for episode in range(n_episodes):
state = env.init_state # initialize state
reward_sum = 0
for t in range(max_steps):
# choose next action
action = epsilon_greedy(value[state], params['epsilon'])
# observe outcome of action on environment
next_state, reward = env.get_outcome(state, action)
# update value function
value = learning_rule(state, action, reward, next_state, value, params)
# sum rewards obtained
reward_sum += reward
if next_state is None:
break # episode ends
state = next_state
reward_sums[episode] = reward_sum
return value, reward_sums
def q_learning(state, action, reward, next_state, value, params):
"""Q-learning: updates the value function and returns it.
Args:
state (int): the current state identifier
action (int): the action taken
reward (float): the reward received
next_state (int): the transitioned to state identifier
value (ndarray): current value function of shape (n_states, n_actions)
params (dict): a dictionary containing the default parameters
Returns:
ndarray: the updated value function of shape (n_states, n_actions)
"""
# Q-value of current state-action pair
q = value[state, action]
##########################################################
## TODO for students: implement the Q-learning update rule
# Fill out function and remove
raise NotImplementedError("Student excercise: implement the Q-learning update rule")
##########################################################
# write an expression for finding the maximum Q-value at the current state
if next_state is None:
max_next_q = 0
else:
max_next_q = ...
# write the expression to compute the TD error
td_error = ...
# write the expression that updates the Q-value for the state-action pair
value[state, action] = ...
return value
# set for reproducibility, comment out / change seed value for different results
np.random.seed(1)
# parameters needed by our policy and learning rule
params = {
'epsilon': 0.1, # epsilon-greedy policy
'alpha': 0.1, # learning rate
'gamma': 1.0, # discount factor
}
# episodes/trials
n_episodes = 500
max_steps = 1000
# environment initialization
env = CliffWorld()
# solve Cliff World using Q-learning
results = learn_environment(env, q_learning, params, max_steps, n_episodes)
value_qlearning, reward_sums_qlearning = results
# Plot results
plot_performance(env, value_qlearning, reward_sums_qlearning)
# to_remove solution
def q_learning(state, action, reward, next_state, value, params):
"""Q-learning: updates the value function and returns it.
Args:
state (int): the current state identifier
action (int): the action taken
reward (float): the reward received
next_state (int): the transitioned to state identifier
value (ndarray): current value function of shape (n_states, n_actions)
params (dict): a dictionary containing the default parameters
Returns:
ndarray: the updated value function of shape (n_states, n_actions)
"""
# Q-value of current state-action pair
q = value[state, action]
# write an expression for finding the maximum Q-value at the current state
if next_state is None:
max_next_q = 0
else:
max_next_q = np.max(value[next_state])
# write the expression to compute the TD error
td_error = reward + params['gamma'] * max_next_q - q
# write the expression that updates the Q-value for the state-action pair
value[state, action] = q + params['alpha'] * td_error
return value
# set for reproducibility, comment out / change seed value for different results
np.random.seed(1)
# parameters needed by our policy and learning rule
params = {
'epsilon': 0.1, # epsilon-greedy policy
'alpha': 0.1, # learning rate
'gamma': 1.0, # discount factor
}
# episodes/trials
n_episodes = 500
max_steps = 1000
# environment initialization
env = CliffWorld()
# solve Cliff World using Q-learning
results = learn_environment(env, q_learning, params, max_steps, n_episodes)
value_qlearning, reward_sums_qlearning = results
# Plot results
with plt.xkcd():
plot_performance(env, value_qlearning, reward_sums_qlearning)
```
If all went well, we should see four plots that show different aspects of our agent's learning and progress.
* The top left is a representation of the Q-table itself, showing the values for different actions in different states. Notably, going right from the starting state or down when above the cliff is clearly very bad.
* The top right figure shows the greedy policy based on the Q-table, i.e. what action would the agent take if it only took its best guess in that state.
* The bottom right is the same as the top right, only instead of showing the action, it shows the maximum Q-value at each state.
* The bottom left is the actual proof of learning, as we see the total reward steadily increasing after each episode until asymptoting at the maximum possible reward of -11.
Feel free to try changing the parameters or random seed and see how the agent's behavior changes.
---
# Summary
*Estimated timing of tutorial: 40 min*
In this tutorial you implemented a reinforcement learning agent based on Q-learning to solve the Cliff World environment. Q-learning combined the epsilon-greedy approach to exploration-exploitation with a table-based value function to learn the expected future rewards for each state.
---
# Bonus
---
## Bonus Section 1: SARSA
An alternative to Q-learning, the SARSA algorithm also estimates action values. However, rather than estimating the optimal (off-policy) values, SARSA estimates the **on-policy** action value, i.e. the cumulative future reward that would be obtained if the agent behaved according to its current beliefs.
\begin{align}
Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha \big(r_t + \gamma Q(s_{t+1},a_{t+1}) - Q(s_t,a_t)\big)
\end{align}
where, once again, $Q(s,a)$ is the value function for action $a$ at state $s$, $\alpha$ is the learning rate, $r$ is the reward, and $\gamma$ is the temporal discount rate.
In fact, you will notice that the *only* difference between Q-learning and SARSA is that the TD target calculation uses the policy to select the next action (in our case, epsilon-greedy) rather than using the action that maximizes the Q-value.
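That one-line difference can be seen side by side in a toy sketch (the reward and next-state Q-values are made up; note the sampled action need not be the greedy one):

```python
import numpy as np

gamma = 1.0
r = -1.0
Q_next = np.array([5.0, 3.0])   # assumed action values at s_{t+1}

# Q-learning: bootstrap from the best next action, whatever the policy does
q_target = r + gamma * Q_next.max()                  # -1 + 5 = 4.0

# SARSA: bootstrap from the action the policy actually sampled
sampled_action = 1                                   # e.g. an epsilon-greedy exploration step
sarsa_target = r + gamma * Q_next[sampled_action]    # -1 + 3 = 2.0

print(q_target, sarsa_target)  # 4.0 2.0
```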
### Bonus Coding Exercise 1: Implement the SARSA algorithm
In this exercise you will implement the SARSA update rule described above. Just like Q-learning, it takes in as arguments the previous state $s_t$, the action $a_t$ taken, the reward received $r_t$, the current state $s_{t+1}$, the Q-value table, and a dictionary of parameters that contain the learning rate $\alpha$ and discount factor $\gamma$. The method returns the updated Q-value table. You may use the `epsilon_greedy` function to acquire the next action. For the parameter dictionary, $\alpha$: `params['alpha']`, $\gamma$: `params['gamma']`, and $\epsilon$: `params['epsilon']`.
Once we have an implementation for SARSA, we will see how it tackles Cliff World. We will again use the same setup we tried with Q-learning.
```
def sarsa(state, action, reward, next_state, value, params):
"""SARSA: updates the value function and returns it.
Args:
state (int): the current state identifier
action (int): the action taken
reward (float): the reward received
next_state (int): the transitioned to state identifier
value (ndarray): current value function of shape (n_states, n_actions)
params (dict): a dictionary containing the default parameters
Returns:
ndarray: the updated value function of shape (n_states, n_actions)
"""
# value of previous state-action pair
q = value[state, action]
##########################################################
## TODO for students: implement the SARSA update rule
# Fill out function and remove
raise NotImplementedError("Student excercise: implement the SARSA update rule")
##########################################################
# select the expected value at current state based on our policy by sampling
# from it
if next_state is None:
policy_next_q = 0
else:
# write an expression for selecting an action using epsilon-greedy
policy_action = ...
# write an expression for obtaining the value of the policy action at the
# current state
policy_next_q = ...
# write the expression to compute the TD error
td_error = ...
# write the expression that updates the Q-value for the state-action pair
value[state, action] = ...
return value
# set for reproducibility, comment out / change seed value for different results
np.random.seed(1)
# parameters needed by our policy and learning rule
params = {
'epsilon': 0.1, # epsilon-greedy policy
'alpha': 0.1, # learning rate
'gamma': 1.0, # discount factor
}
# episodes/trials
n_episodes = 500
max_steps = 1000
# environment initialization
env = CliffWorld()
# learn Cliff World using Sarsa
results = learn_environment(env, sarsa, params, max_steps, n_episodes)
value_sarsa, reward_sums_sarsa = results
# Plot results
plot_performance(env, value_sarsa, reward_sums_sarsa)
def sarsa(state, action, reward, next_state, value, params):
"""SARSA: updates the value function and returns it.
Args:
state (int): the current state identifier
action (int): the action taken
reward (float): the reward received
next_state (int): the transitioned to state identifier
value (ndarray): current value function of shape (n_states, n_actions)
params (dict): a dictionary containing the default parameters
Returns:
ndarray: the updated value function of shape (n_states, n_actions)
"""
# value of previous state-action pair
q = value[state, action]
# select the expected value at current state based on our policy by sampling
# from it
if next_state is None:
policy_next_q = 0
else:
# write an expression for selecting an action using epsilon-greedy
policy_action = epsilon_greedy(value[next_state], params['epsilon'])
# write an expression for obtaining the value of the policy action at the
# current state
policy_next_q = value[next_state, policy_action]
# write the expression to compute the TD error
td_error = reward + params['gamma'] * policy_next_q - q
# write the expression that updates the Q-value for the state-action pair
value[state, action] = q + params['alpha'] * td_error
return value
# set for reproducibility, comment out / change seed value for different results
np.random.seed(1)
# parameters needed by our policy and learning rule
params = {
'epsilon': 0.1, # epsilon-greedy policy
'alpha': 0.1, # learning rate
'gamma': 1.0, # discount factor
}
# episodes/trials
n_episodes = 500
max_steps = 1000
# environment initialization
env = CliffWorld()
# learn Cliff World using Sarsa
results = learn_environment(env, sarsa, params, max_steps, n_episodes)
value_sarsa, reward_sums_sarsa = results
# Plot results
with plt.xkcd():
plot_performance(env, value_sarsa, reward_sums_sarsa)
```
We should see that SARSA also solves the task with similar-looking outcomes to Q-learning. One notable difference is that SARSA seems to be skittish around the cliff edge and often goes further away before coming back down to the goal.
Again, feel free to try changing the parameters or random seed and see how the agent's behavior changes.
---
## Bonus Section 2: On-Policy vs Off-Policy
We have now seen an example of both on- and off-policy learning algorithms. Let's compare both Q-learning and SARSA reward results again, side-by-side, to see how they stack up.
```
# @markdown Execute to see visualization
# parameters needed by our policy and learning rule
params = {
'epsilon': 0.1, # epsilon-greedy policy
'alpha': 0.1, # learning rate
'gamma': 1.0, # discount factor
}
# episodes/trials
n_episodes = 500
max_steps = 1000
# environment initialization
env = CliffWorld()
# learn Cliff World using Sarsa
np.random.seed(1)
results = learn_environment(env, q_learning, params, max_steps, n_episodes)
value_qlearning, reward_sums_qlearning = results
np.random.seed(1)
results = learn_environment(env, sarsa, params, max_steps, n_episodes)
value_sarsa, reward_sums_sarsa = results
fig, ax = plt.subplots()
ax.plot(reward_sums_qlearning, label='Q-learning')
ax.plot(reward_sums_sarsa, label='SARSA')
ax.set(xlabel='Episodes', ylabel='Total reward')
plt.legend(loc='lower right');
```
On this simple Cliff World task, Q-learning and SARSA are almost indistinguishable from a performance standpoint, but we can see that Q-learning has a slight edge within the 500-episode time horizon. Let's look at the illustrated "greedy policy" plots again.
```
# @markdown Execute to see visualization
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(16, 6))
plot_quiver_max_action(env, value_qlearning, ax=ax1)
ax1.set(title='Q-learning maximum value/probability actions')
plot_quiver_max_action(env, value_sarsa, ax=ax2)
ax2.set(title='SARSA maximum value/probability actions');
```
What should immediately jump out is that Q-learning learned to go up, then immediately go to the right, skirting the cliff edge, until it hits the wall and goes down to the goal. The policy further away from the cliff is less certain.
SARSA, on the other hand, appears to avoid the cliff edge, going up one more tile before starting over to the goal side. This also clearly solves the challenge of getting to the goal, but does so at an additional -2 cost over the truly optimal route.
Why do you think these behaviors emerged the way they did?
# Introduction
Let's load our environment first.
```
from func_adl_servicex import ServiceXSourceUpROOT, ServiceXSourceCMSRun1AOD
from hist import Hist
import awkward as ak
```
## Flat ROOT Files
ATLAS has distributed its open data as flat ROOT files.
* On CERN Open Data they are available as a single zip file
* But they have also been distributed as individual files, accessible via EOS from CERN Open Data's EOS instance.
```
ggH125_ZZ4lep = 'root://eospublic.cern.ch//eos/opendata/atlas/OutreachDatasets/2020-01-22/4lep/MC/mc_345060.ggH125_ZZ4lep.4lep.root'
ggH125_ZZ4lep = 'https://atlas-opendata.web.cern.ch/atlas-opendata/samples/2020/4lep/MC/mc_345060.ggH125_ZZ4lep.4lep.root'
ggH125_ZZ4lep_source = ServiceXSourceUpROOT([ggH125_ZZ4lep], 'mini', backend='open_uproot')
```
* We use the `root://` address instead of `http://` due to efficiency and caching.
* `mini` is the tree name in the file
* `backend` basically describes the type of file - this is a flat root file that can be opened by the `uproot` python package.
Now that we have a reference to the datasource, let's pick out a single column and bring its contents back to our local instance:
```
from servicex import ignore_cache
with ignore_cache():
r = (ggH125_ZZ4lep_source
.Select(lambda e: {'lep_pt': e['lep_pt']})
.AsAwkwardArray()
.value()
)
r
```
This is a standard awkward array - and its shape is simply a series of lepton $p_T$'s for each of the 164716 events:
```
r.type
```
Let's plot this. We'll use the `Hist` library, which is a nice wrapper around the `boost_histogram` library:
* Note we have a single axis, which is the muon's $p_T$
```
h = (Hist.new
.Reg(50, 0, 200, name='mu_pt', label='Muon $p_T$ [GeV]')
.Int64()
)
h.fill(ak.flatten(r['lep_pt'])/1000.0)
_ = h.plot()
```
Let's get back a $p_T$ and an $\eta$ for the leptons now. This requires two things coming back:
```
r = (ggH125_ZZ4lep_source
.Select(lambda e: {'lep_pt': e['lep_pt'], 'lep_eta': e['lep_eta']})
.AsAwkwardArray()
.value()
)
r.type
```
Note that it is columnar data:
* Each event contains two arrays
* The arrays are lepton pt and eta - not a tuple of lepton ($p_T$, $\eta$).
What if we want to cut on the eta? How do we relate these two columns? We use the zip function.
```
r_cut = (ggH125_ZZ4lep_source
.Select(lambda e: Zip({'pt': e['lep_pt'], 'eta': e['lep_eta']}))
.Select(lambda leps: leps.Where(lambda l: abs(l.eta) < 1.0))
.AsAwkwardArray()
.value()
)
r_cut.type
```
Interesting - same number of events - but we cut? Let's look at a histogram:
```
h = (Hist.new
.Reg(50, 0, 200, name='mu_pt', label='Muon Track $p_T$ [GeV]')
.StrCat([], name='cut', label='Cut Type', growth=True)
.Int64()
)
h.fill(mu_pt=ak.flatten(r['lep_pt'])/1000.0, cut='All')
h.fill(mu_pt=ak.flatten(r_cut['pt'])/1000.0, cut='eta')
artists = h.plot()
ax = artists[0].stairs.axes # get the axis
ax.legend(loc="best");
h = (Hist.new
.Reg(8, 0, 8, name='muon_count', label='Number of Muons')
.StrCat([], name='cut', label='Cut Type', growth=True)
.Int64()
)
h.fill(muon_count=ak.count(r['lep_pt'], axis=-1), cut='All')
h.fill(muon_count=ak.count(r_cut['pt'], axis=-1), cut='eta')
artists = h.plot()
ax = artists[0].stairs.axes # get the axis
ax.legend(loc="best");
```
* We cut the number of leptons per event
* We now have some events with zero leptons - but those events were still returned
* We could remove them by doing a cut on the number of leptons...
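Why the event count is unchanged can be illustrated with plain Python: zipping parallel columns and cutting *inside* each event removes leptons, not events (a schematic sketch with made-up values, not ServiceX code):

```python
# Per-event parallel columns, as returned by the query
lep_pt  = [[40.0, 25.0], [55.0], [30.0, 20.0, 10.0]]
lep_eta = [[0.5, 2.1],   [1.8],  [-0.3, 0.9, 2.4]]

# Zip pt/eta into per-lepton records, then cut on |eta| inside each event
cut = [
    [(pt, eta) for pt, eta in zip(pts, etas) if abs(eta) < 1.0]
    for pts, etas in zip(lep_pt, lep_eta)
]

print(len(cut))                       # 3 -- same number of events
print([len(event) for event in cut])  # [1, 0, 2] -- fewer leptons, one event now empty
```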
## CMS Run 1 AOD Files
Differences between flat ROOT files and CMS Run 1 AOD files:
* You must use an (old) version of CMSSW, a big software framework, to read these files!!
* Data in these files is stored event or row-wise: electrons, and then the $p_T$ and $\eta$ of each electron.
* Some datasets are ~7 TB!! It takes about 30 minutes to run on those when things are working well: we will be using data from an earlier run that has automatically been locally cached.
Start with a 60GB SM [Higgs dataset](http://opendata.cern.ch/record/1507) ($H \rightarrow ZZ \rightarrow \ell \ell \ell \ell$). In CERN OpenData's catalog, this is record 1507 (pulled straight from the URL: http://opendata.cern.ch/record/1507).
```
hzz_4l_source = ServiceXSourceCMSRun1AOD('cernopendata://1507', backend='cms_run1_aod')
r = (hzz_4l_source \
.Select(lambda e: e.TrackMuons("globalMuons"))
.Select(lambda muons: muons.Select(lambda m: m.pt()))
.AsAwkwardArray(['mu_pt']) \
.value()
)
r.type
r_cut = (hzz_4l_source \
.Select(lambda e: e.TrackMuons("globalMuons"))
.Select(lambda muons: muons.Where(lambda m: abs(m.eta()) < 1.0))
.Select(lambda muons: muons.Select(lambda m: m.pt()))
.AsAwkwardArray(['mu_pt']) \
.value()
)
r_cut.type
h = (Hist.new
.Reg(8, 0, 8, name='muon_count', label='Number of Muons')
.StrCat([], name='cut', label='Cut Type', growth=True)
.Int64()
)
h.fill(muon_count=ak.count(r['mu_pt'], axis=-1), cut='All')
h.fill(muon_count=ak.count(r_cut['mu_pt'], axis=-1), cut='eta')
artists = h.plot()
ax = artists[0].stairs.axes # get the axis
ax.legend(loc="best");
```
Again, a very similar behavior here!
* Note that CMS muons are **all** muons, so a lot more quality cuts must be done to compare them with ATLAS's muons.
* ATLAS's AOD files were skimmed down to make those datasets for educational purposes: so a lot of the skimming is done early for those files.
## Using Coffea
`ServiceX`:
* Gets columnar data from any format for which a translator has been written (ATLAS Run 2 xAOD, CMS Run 1 AOD, uproot-able ROOT files, soon some dark matter experiments, etc.)
* Slims, skims, generates calculated quantities
Think _ntuplizer_.
`coffea`:
* Uses `awkward` and friends to perform the final analysis
* Plotting
* Distributed running
* Good at running on a large number of datasets at once
`coffea` is arranged around processors that do the physics. Each processor runs once per file, and results are combined for a dataset.
First define a dummy (representative) dataset and apply the operations on it we are interested in:
```
from func_adl_servicex import ServiceXSourceUpROOT  # assumed import, matching the other ServiceX sources

ds = ServiceXSourceUpROOT('cernopendata://dummy', "mimi", backend='open_uproot')
ds.return_qastle = True # Magic
selection_atlas = (ds
.Select(lambda e: Zip({'lep_pt': e['lep_pt'], 'lep_eta': e['lep_eta']}))
.Select(lambda leps: leps.Where(lambda l: abs(l.lep_eta) < 1.0))
.Select(lambda leps: {'lep_pt': leps.lep_pt, 'lep_eta': leps.lep_eta})
.AsParquetFiles('junk.parquet')
)
```
Note:
* no call to `value`: We do not want to try to render this bogus expression.
* We want to return parquet files - this is what `coffea` will be eating and sending to the processors.
```
from coffea.processor.servicex import DataSource, Analysis
from coffea.processor.servicex import LocalExecutor
from servicex import ServiceXDataset
from coffea import processor
class atlas_demo_processor(Analysis):
@staticmethod
def process(events):
import awkward as ak
from collections import defaultdict
sumw = defaultdict(float)
h = (Hist.new
.Reg(50, 0, 200, name='lep_pt', label='Lepton $p_T$ [GeV]')
.StrCat([], name='dataset', label='Dataset', growth=True)
.Int64()
)
dataset = events.metadata['dataset']
leptons = events.lep
h.fill(
dataset=dataset,
lep_pt = ak.flatten(leptons.pt/1000.0)
)
return {
"sumw": sumw,
"pt": h
}
```
Now we create a real dataset and execute it.
```
datasets = [ServiceXDataset([ggH125_ZZ4lep], backend_type='open_uproot',
image='sslhep/servicex_func_adl_uproot_transformer:pr_fix_awk_bug')]
c_datasets = DataSource(query=selection_atlas, metadata={'dataset': 'ggH125_ZZ4lep'}, datasets=datasets)
```
This code is boilerplate:
* Declare a dataset and a transformer image (we were accidentally running an old version of awkward)
* Create a datasource
* Note the metadata - a useful way to pass information into your processor on a per-dataset basis.
```
analysis = atlas_demo_processor()
executor = LocalExecutor(datatype='parquet')
async def run_updates_stream(accumulator_stream):
count = 0
async for coffea_info in accumulator_stream:
count += 1
return coffea_info
result = await run_updates_stream(executor.execute(analysis, c_datasets))
print(result)
artists = result['pt'].plot()
ax = artists[0].stairs.axes # get the axis
ax.legend(loc="best");
```
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109A Introduction to Data Science
# Lab 10: Random Forest and Boosting
**Harvard University**<br/>
**Fall 2021**<br/>
**Instructors**: Pavlos Protopapas and Natesh Pillai<br/>
**Lab Team**: Marios Mattheakis, Hayden Joy, Chris Gumb, and Eleni Kaxiras<br/>
**Authors**: Hayden Joy, and Marios Mattheakis
<hr style='height:2px'>
```
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
```
This section will work with a spam email dataset again. Our ultimate goal is to build models that predict whether an email is spam or not based on word characteristics within each email. We will review Decision Trees and Bagging methods, and introduce Random Forest and Boosting: AdaBoost and XGBoost.
Specifically, we will:
1. *Quick review of last week*
2. What is a Random Forest model?
3. Build the Decision Tree model, Bagging model, Random Forest Model for comparison with Boosting.
4. *Theory:* What is Boosting?
5. Use AdaBoost on the Spam Dataset.
6. *Theory:* What is Gradient Boosting and XGBoost?
7. Use XGBoost on the Spam Dataset: Extreme Gradient Boosting
Optional: Example to better understand Bias vs Variance tradeoff.
---------
## 1. *Quick review of last week*
#### The Idea: Decision Trees are just flowcharts and interpretable!
It turns out that simple flow charts can be formulated as mathematical models for classification, and these models have the properties we desire:
- interpretable by humans
- have sufficiently complex decision boundaries
- the decision boundaries are locally linear, each component of the decision boundary is simple to describe mathematically.
----------
#### How to build Decision Trees (the Learning Algorithm in words):
To learn a decision tree model, we take a greedy approach:
1. Start with an empty decision tree (undivided feature space)
2. Choose the ‘optimal’ predictor on which to split and choose the ‘optimal’ threshold value for splitting by applying a **splitting criterion (1)**
3. Recurse on each new node until the **stopping condition (2)** is met
#### So we need a (1) splitting criterion and a (2) stopping condition:
#### (1) Splitting criterion
<img src="data/split2_adj.png" alt="split2" width="70%"/>
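As a minimal sketch (with made-up toy data) of one common splitting criterion, Gini impurity: the greedy learner scores every candidate threshold by the weighted impurity of the two children and picks the lowest.

```python
import numpy as np

def gini(y):
    """Gini impurity of a label array: 1 - sum_k p_k^2."""
    if len(y) == 0:
        return 0.0
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_score(x, y, threshold):
    """Weighted Gini impurity of splitting feature x at a threshold."""
    left, right = y[x <= threshold], y[x > threshold]
    n = len(y)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

# Toy 1D example: the greedy learner picks the threshold with the lowest score
x_toy = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y_toy = np.array([0, 0, 0, 1, 1, 1])
scores = {t: split_score(x_toy, y_toy, t) for t in [1.5, 2.5, 3.5, 4.5]}
best = min(scores, key=scores.get)
print(best, scores[best])  # 3.5 gives a perfectly pure split, score 0.0
```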
#### (2) Stopping condition
**Not stopping while building a deeper and deeper tree yields 100% training accuracy, yet we will overfit!**
To prevent this **overfitting**, we need a stopping condition.
-------------
#### How do we go from Classification to Regression?
- For classification, we return the majority class in the points of each leaf node.
- For regression we return the average of the outputs for the points in each leaf node.
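A minimal sketch of the two leaf-prediction rules, with made-up leaf contents:

```python
import numpy as np
from collections import Counter

# Labels / outputs of the training points that landed in one leaf node
leaf_classes = np.array([1, 1, 0, 1])      # classification leaf
leaf_outputs = np.array([2.0, 4.0, 6.0])   # regression leaf

# Classification: return the majority class in the leaf
majority_class = Counter(leaf_classes).most_common(1)[0][0]

# Regression: return the average output in the leaf
leaf_mean = leaf_outputs.mean()

print(majority_class, leaf_mean)  # 1 4.0
```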
-------------
### ensemble: a group of items viewed as a whole rather than individually
#### What is bagging?
One way to adjust for the high variance of the output of an experiment is to perform the experiment multiple times and then average the results.
1. **Bootstrap:** we generate multiple samples of training data via bootstrapping, and train a full decision tree on each sample of data.
2. **Aggregating:** for a given input, we output the averaged outputs of all the models for that input.
This method is called **Bagging**: **B**ootstrap + **agg**regat**ing**.
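sklearn packages this recipe as `BaggingClassifier`; here is a minimal sketch on made-up toy data (the lab also builds the same bootstrap loop by hand in section 2):

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # a toy nonlinear target

# 50 deep trees, each fit on a bootstrap sample; predictions are aggregated
bag = BaggingClassifier(DecisionTreeClassifier(max_depth=None),
                        n_estimators=50, bootstrap=True, random_state=0)
bag.fit(X, y)
print(bag.score(X, y))
```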
-------------
## 2. Building the tree models of last week
### Rebuild the Decision Tree model and Bagging model for comparison with Boosting methods
```
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
import sklearn.metrics as metrics
import time
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix
%matplotlib inline
pd.set_option('display.width', 1500)
pd.set_option('display.max_columns', 100)
from sklearn.model_selection import learning_curve
#Import Dataframe and Set Column Names
spam_df = pd.read_csv('data/spam.csv', header=None)
columns = ["Column_"+str(i+1) for i in range(spam_df.shape[1]-1)] + ['Spam']
spam_df.columns = columns
display(spam_df.head())
#Let us split the dataset into a 70-30 split by using the following:
#Split data into train and test
np.random.seed(42)
msk = np.random.rand(len(spam_df)) < 0.7
data_train = spam_df[msk]
data_test = spam_df[~msk]
#Split predictor and response columns
x_train, y_train = data_train.drop(['Spam'], axis=1), data_train['Spam']
x_test , y_test = data_test.drop(['Spam'] , axis=1), data_test['Spam']
print("Shape of Training Set :",data_train.shape)
print("Shape of Testing Set :" ,data_test.shape)
#Check Percentage of Spam in Train and Test Set
percentage_spam_training = 100*y_train.sum()/len(y_train)
percentage_spam_testing = 100*y_test.sum()/len(y_test)
print("Percentage of Spam in Training Set \t : {:0.2f}%.".format(percentage_spam_training))
print("Percentage of Spam in Testing Set \t : {:0.2f}%.".format(percentage_spam_testing))
```
-----------
### Fitting an Optimal Single Decision Tree
```
# Best depth for single decision trees of last week
best_depth = 7
print("The best depth was found to be:", best_depth)
#Evaluate the performance at the best depth
model_tree = DecisionTreeClassifier(max_depth=best_depth)
model_tree.fit(x_train, y_train)
#Check Accuracy of Spam Detection in Train and Test Set
acc_trees_training = accuracy_score(y_train, model_tree.predict(x_train))
acc_trees_testing = accuracy_score(y_test, model_tree.predict(x_test))
print("Simple Decision Trees: Accuracy, Training Set \t : {:.2%}".format(acc_trees_training))
print("Simple Decision Trees: Accuracy, Testing Set \t : {:.2%}".format(acc_trees_testing))
```
--------
### Fitting 100 Single Decision Trees while Bagging
```
n_trees = 100 # we tried a variety of numbers here
#Creating model
np.random.seed(0)
model = DecisionTreeClassifier(max_depth=best_depth+5)
#Initializing variables
predictions_train = np.zeros((data_train.shape[0], n_trees))
predictions_test = np.zeros((data_test.shape[0], n_trees))
#Conduct bootstrapping iterations
for i in range(n_trees):
temp = data_train.sample(frac=1, replace=True)
response_variable = temp['Spam']
temp = temp.drop(['Spam'], axis=1)
model.fit(temp, response_variable)
predictions_train[:,i] = model.predict(x_train)
predictions_test[:,i] = model.predict(x_test)
#Make Predictions Dataframe
columns = ["Bootstrap-Model_"+str(i+1) for i in range(n_trees)]
predictions_train = pd.DataFrame(predictions_train, columns=columns)
predictions_test = pd.DataFrame(predictions_test, columns=columns)
#Function to ensemble the prediction of each bagged decision tree model
def get_prediction(df, count=-1):
count = df.shape[1] if count==-1 else count
temp = df.iloc[:,0:count]
return np.mean(temp, axis=1)>0.5
#Check Accuracy of Spam Detection in Train and Test Set
acc_bagging_training = 100*accuracy_score(y_train, get_prediction(predictions_train, count=-1))
acc_bagging_testing = 100*accuracy_score(y_test, get_prediction(predictions_test, count=-1))
print("Bagging: \tAccuracy, Training Set \t: {:0.2f}%".format(acc_bagging_training))
print("Bagging: \tAccuracy, Testing Set \t: {:0.2f}%".format( acc_bagging_testing))
```
### Weaknesses of Bagging
Bagging is a greedy algorithm. What does this mean?
At every split we choose the feature with the most impact, i.e. the most information gain.
In what scenarios is this likely to be a problem?
<img src="data/dep_predictors.png" alt="split2" width="40%"/>
Imagine that this is the true underlying data generative process. Here predictors $x_2$ and $x_3$ influence $x_1$.
$\bullet$ Which predictor do you think bagging is likely to select as the root node?
$\bullet$ Why is this likely to be an issue?
$\bullet$ Because of their greedy nature, the trees in a bagging ensemble are very likely to be correlated, especially in the shallower nodes of the individual decision trees.
### Why are decision trees greedy?
$\bullet$ Finding the optimal decision tree is NP-complete: there is no way to find the global optimum, i.e. the best tree, unless we use brute force and try all possible combinations. In practice this is infeasible.
$\bullet$ Thus decision tree learners are **heuristic algorithms**. Heuristic algorithms are designed to solve problems faster and more efficiently by sacrificing optimality, accuracy, or precision in favor of speed. Heuristic algorithms are often used to solve NP-complete problems.
Helpful analogies: playing chess, class tests
# Part 2 : Random Forest vs Bagging
#### What is Random Forest?
- **Many trees** make a **forest**.
- **Many random trees** make a **random forest**.
Random Forest is a modified form of bagging that creates ensembles of independent decision trees.
To *de-correlate the trees*, we:
1. train each tree on a separate bootstrap **random sample** of the full training set (same as in bagging)
2. for each tree, at each split, we **randomly select a set of 𝐽′ predictors from the full set of predictors.** (not done in bagging)
3. From amongst the 𝐽′ predictors, we select the optimal predictor and the optimal corresponding threshold for the split.
*Question:* Why would this second step help (only considering random sub-group of features)?
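In sklearn this per-split feature subsampling is exposed as the `max_features` argument of `RandomForestClassifier`; a minimal sketch on made-up toy data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(1)
X = rng.randn(300, 16)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# max_features='sqrt': each split considers a random sqrt(J) of the J predictors
rf = RandomForestClassifier(n_estimators=50, max_features='sqrt', random_state=1)
# max_features=None: every split sees all predictors, so the trees behave like bagging
bagged = RandomForestClassifier(n_estimators=50, max_features=None, random_state=1)

rf.fit(X, y)
bagged.fit(X, y)
print(rf.score(X, y), bagged.score(X, y))
```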
Now, we will fit an ensemble method, the Random Forest technique, which is different from the decision tree. Refer to the lectures slides for a full treatment on how they are different. Let's use ```n_estimators = predictor_count/2``` and ```max_depth = best_depth```.
```
#Fit a Random Forest Model
best_depth = 7
#Training
model = RandomForestClassifier(n_estimators=int(x_train.shape[1]/2), max_depth=best_depth)
model.fit(x_train, y_train)
#Predict
y_pred_train = model.predict(x_train)
y_pred_test = model.predict(x_test)
#Performance Evaluation
acc_random_forest_training = accuracy_score(y_train, y_pred_train)*100
acc_random_forest_testing = accuracy_score(y_test, y_pred_test)*100
print("Random Forest: Accuracy, Training Set : {:0.2f}%".format(acc_random_forest_training))
print("Random Forest: Accuracy, Testing Set : {:0.2f}%".format(acc_random_forest_testing))
```
<div class="alert alert-success">
<strong>🏋🏻♂️ TEAM ACTIVITY 1:</strong> Random Forests </div>
Let's try to improve our accuracy scores on the cancer dataset.
```
from functions import tree_pd
get_tree_scores = tree_pd.get_tree_scores
cancer_scaled, target = tree_pd.load_cancer_dataset(10, 4)
################################### Train Test split
np.random.seed(40)
#test_proportion
test_prop = 0.2
msk = np.random.uniform(0, 1, len(cancer_scaled)) > test_prop
#Split predictor and response columns
ex1_x_train, ex1_y_train = cancer_scaled[msk], target[msk]
ex1_x_test , ex1_y_test = cancer_scaled[~msk], target[~msk]
print("Shape of Training Set :", ex1_x_train.shape)
print("Shape of Testing Set :" , ex1_x_test.shape)
```
## Your tasks:
1) Use the `get_tree_scores` function to create a dataframe `rf_val_acc` using a class instance of `RandomForestClassifier`. As a reminder, this function takes four arguments (x_train, y_train, model, tree_depth_range). This time, don't feed it a random state.
2) Use pandas' groupby to get the mean cross-validation accuracy for each depth. Assign the result to a new dataframe `rf_mean_acc`.
3) Visualize the mean cross validation accuracy scores by running the cell provided. Answer the subsequent questions.
4) Plot the feature importance of the best random forest model.
```
# %load 'solutions/sol2.py'
tree_depth_range = range(1, 40, 2)
rf_val_acc = get_tree_scores(ex1_x_train,
ex1_y_train,
RandomForestClassifier(),
tree_depth_range)
#rf_mean_acc = pd.DataFrame(rf_val_acc)
rf_mean_acc = rf_val_acc.groupby("depth").mean()
rf_mean_acc["depth"] = list(tree_depth_range)
rf_mean_acc
```
#### Run this code when you are finished with the first exercise to compare random forests and simple decision trees. More questions and one final task lie below.
```
### add a decision tree classifier for comparison.
tree_val_acc = get_tree_scores(ex1_x_train,
ex1_y_train,
DecisionTreeClassifier(),
tree_depth_range)
tree_mean_acc = tree_val_acc.groupby("depth").mean()
tree_mean_acc["depth"] = list(tree_depth_range)
### Make the plot
plt.figure(figsize=(12, 3))
plt.title('Variation of Accuracy on Validation Set with Depth')
sns.lineplot(x = "depth", y = "cv_acc_score", data = rf_val_acc,
label = "random forest");
sns.lineplot(x = "depth", y = "cv_acc_score", data = tree_val_acc,
label = "simple decision tree");
plt.xlabel("max_depth")
plt.ylabel("validation set accuracy score")
max_idx = tree_mean_acc["cv_acc_score"].idxmax()
best_depth_tree = tree_mean_acc["depth"][max_idx]
best_tree_model = DecisionTreeClassifier(max_depth=best_depth_tree)
best_tree_model.fit(ex1_x_train, ex1_y_train)
tree_test_accuracy = best_tree_model.score(ex1_x_test, ex1_y_test.reshape(-1,))
max_idx = rf_mean_acc["cv_acc_score"].idxmax()
best_depth_rf = rf_mean_acc["depth"][max_idx]
best_rf_model = RandomForestClassifier(max_depth=best_depth_rf, random_state = 42)
best_rf_model.fit(ex1_x_train, ex1_y_train.reshape(-1,))
tree_rf_accuracy = best_rf_model.score(ex1_x_test, ex1_y_test.reshape(-1,))
print("Decision Tree best depth {:}".format(best_depth_tree))
print("Random Forest best depth {:}".format(best_depth_rf))
print("Best Decision Tree test set accuracy: {:0.2f}%".format(tree_test_accuracy*100))
print("Best Random Forest test set accuracy: {:0.2f}%".format(tree_rf_accuracy*100))
```
$\bullet$ Why doesn't the random forest accuracy score deteriorate in the same way that the decision tree does for deeper trees?
$\bullet$ What are the two kinds of stochasticity that lead to the robustness of random forests?
$\bullet$ How do random forests differ from Bagging?
### Feature Importance
#### Lets plot the feature importance of the best random forest model:
Random Forest exposes these values as ```feature_importances_```, where it normalizes the impact of a predictor by the number of times it is useful, and thus gives an overall significance for free. Explore the attributes of the best Random Forest model object.
Feature importance is calculated as the decrease in node impurity **weighted by the probability of reaching that node. The node probability can be calculated by the number of samples that reach the node**, divided by the total number of samples. The higher the value the more important the feature.
source: https://towardsdatascience.com/the-mathematics-of-decision-trees-random-forest-and-feature-importance-in-scikit-learn-and-spark-f2861df67e3#:~:text=Feature%20importance%20is%20calculated%20as,the%20more%20important%20the%20feature.
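As a sanity check of that formula, the importances can be recomputed by hand from a fitted tree's internal arrays and compared with sklearn's `feature_importances_` (a sketch using sklearn's built-in breast-cancer data; the leaf test and normalization follow sklearn's own computation):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

t = clf.tree_
imp = np.zeros(X.shape[1])
for node in range(t.node_count):
    left, right = t.children_left[node], t.children_right[node]
    if left == -1:  # leaf node: no split, so no impurity decrease
        continue
    # node probability = weighted samples reaching the node / total samples
    n, nl, nr = (t.weighted_n_node_samples[node],
                 t.weighted_n_node_samples[left],
                 t.weighted_n_node_samples[right])
    decrease = (n * t.impurity[node]
                - nl * t.impurity[left]
                - nr * t.impurity[right]) / t.weighted_n_node_samples[0]
    imp[t.feature[node]] += decrease

imp /= imp.sum()  # sklearn normalizes the importances to sum to 1
print(np.allclose(imp, clf.feature_importances_))  # True
```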
<div class="alert alert-success">
<strong>🏋🏻♂️ TEAM ACTIVITY 2:</strong> Feature importance </div>
1) extract the `.feature_importances_` attribute from your `best_rf_model`. Assign this to a variable called `feature_importance`.
2) Rescale the feature importances such that the most important feature has an importance of 100.
3) use `np.argsort` to return the indices of the sorted features.
4) finally pass the sorted index to `plt.barh` and plot the feature importances!
```
#your code here
#help(best_rf_model)
feature_importance = ...
feature_importance = ...
sorted_idx = ...
pos = np.arange(sorted_idx.shape[0]) + .5
plt.figure(figsize=(10,12))
plt.barh(pos, ..., align='center')
plt.yticks(pos, ex1_x_train.columns[sorted_idx])
plt.xlabel('Relative Importance')
plt.title('Variable Importance')
plt.show()
# %load 'solutions/importance.py'
```
#### Let's compare the performance of our 3 models:
```
print("Decision Trees:\tAccuracy, Training Set \t: {:.2%}".format(acc_trees_training))
print("Decision Trees:\tAccuracy, Testing Set \t: {:.2%}".format(acc_trees_testing))
print("\nBagging: \tAccuracy, Training Set \t: {:0.2f}%".format(acc_bagging_training))
print("Bagging: \tAccuracy, Testing Set \t: {:0.2f}%".format( acc_bagging_testing))
print("\nRandom Forest: \tAccuracy, Training Set \t: {:0.2f}%".format(acc_random_forest_training))
print("Random Forest: \tAccuracy, Testing Set \t: {:0.2f}%".format(acc_random_forest_testing))
```
#### As we see above, the performance of both Bagging and Random Forest was similar, so what is the difference? Do both overfit the data just as much?
Hints :
- What is the only extra parameter we declared when defining a Random Forest Model vs Bagging? Does it have an impact on overfitting?
```
#Fit a Random Forest Model
new_depth = best_depth + 20
#Training
model = RandomForestClassifier(n_estimators=int(x_train.shape[1]/2), max_depth=new_depth)
model.fit(x_train, y_train)
#Predict
y_pred_train = model.predict(x_train)
y_pred_test = model.predict(x_test)
#Performance Evaluation
acc_random_forest_deeper_training = accuracy_score(y_train, y_pred_train)*100
acc_random_forest_deeper_testing = accuracy_score(y_test, y_pred_test)*100
print("Random Forest: Accuracy, Training Set (Deeper): {:0.2f}%".format(acc_random_forest_deeper_training))
print("Random Forest: Accuracy, Testing Set (Deeper): {:0.2f}%".format(acc_random_forest_deeper_testing))
```
#### Training accuracies:
```
print("Training Accuracies:")
print("Decision Trees:\tAccuracy, Training Set \t: {:.2%}".format(acc_trees_training))
print("Bagging: \tAccuracy, Training Set \t: {:0.2f}%".format(acc_bagging_training))
print("Random Forest: \tAccuracy, Training Set \t: {:0.2f}%".format(acc_random_forest_training))
print("RF Deeper: \tAccuracy, Training Set \t: {:0.2f}%".format(acc_random_forest_deeper_training))
```
#### Testing accuracies:
```
print("Testing Accuracies:")
print("Decision Trees:\tAccuracy, Testing Set \t: {:.2%}".format(acc_trees_testing))
print("Bagging: \tAccuracy, Testing Set \t: {:0.2f}%".format( acc_bagging_testing))
print("Random Forest: \tAccuracy, Testing Set \t: {:0.2f}%".format(acc_random_forest_testing))
print("RF Deeper: \tAccuracy, Testing Set \t: {:0.2f}%".format(acc_random_forest_deeper_testing))
#vars(model)
print(len(model.estimators_[0].tree_.feature))
print(len(model.estimators_[0].tree_.threshold))
```
<div class="alert alert-success">
<strong>🏋🏻♂️ TEAM ACTIVITY 3:</strong> Exploring RandomForestClassifier class instances. </div>
For more resources on python classes (we're relying on them all the time via sklearn!) see <a href = "https://docs.python.org/3/tutorial/classes.html#a-first-look-at-classes">this link.</a>
```
from functions import tree_pd
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.metrics as metrics
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix
from sklearn import datasets
from sklearn.ensemble import BaggingRegressor
%matplotlib inline
pd.set_option('display.width', 1500)
pd.set_option('display.max_columns', 100)
from sklearn.model_selection import learning_curve
get_tree_pd = tree_pd.get_tree_scores
cancer_scaled, target = tree_pd.load_cancer_dataset(10, 4)
################################### Train Test split
np.random.seed(40)
#test_proportion
test_prop = 0.2
msk = np.random.uniform(0, 1, len(cancer_scaled)) > test_prop
#Split predictor and response columns
X_train, y_train = cancer_scaled[msk], target[msk]
X_test , y_test = cancer_scaled[~msk], target[~msk]
print("Shape of Training Set :", X_train.shape)
print("Shape of Testing Set :" , X_test.shape)
################################### Train a bagging and random forest model
depth = 13
n_estimators = 100
best_rf_model = RandomForestClassifier(max_depth=depth, random_state = 42, n_estimators= n_estimators)
best_rf_model.fit(X_train, y_train.reshape(-1,))
tree_rf_accuracy = best_rf_model.score(X_test, y_test.reshape(-1,))
bagging_model = BaggingRegressor(DecisionTreeClassifier(max_depth=depth),
n_estimators = 100,
random_state = 42).fit(X_train, y_train.reshape(-1,))
```
#### Directions
Run the cell below and look at the output. The .estimators_ attribute of a RandomForestClassifier class instance is a list of the individual DecisionTreeClassifier class instance estimators that make up the ensemble model. Calling .tree_ on the DecisionTreeClassifier will give you the individual tree estimator.
1. Complete the function by extracting the impurity and feature attributes for each decision tree estimator at a specific decision node.
2. Fix the creation of the dictionary at the bottom of the function and return a dataframe.
```
type(best_rf_model.estimators_[0].tree_)
type(best_rf_model)
help(best_rf_model.estimators_[0].tree_)
# %load "exercises/exercise1.py"
def get_impurity_pd(model, n = 0):
"""
This function returns a pandas dataframe with all of the nth nodes feature impurities.
"""
rf_estimators = model.estimators_.copy()
features = np.array(X_train.columns)
node_impurities, node_features = [], []
for i, estimator in enumerate(rf_estimators):
estimator_impurity = #TODO 0
estimator_feature = #TODO 1
node_impurities.append(estimator_impurity)
node_features.append(estimator_feature)
node_impurity_dict = {"feature": #TODO
"impurity": #TODO
df = #TODO
return(df)
# %load "solutions/impurity.py"
tree_node = 0
rf_df = get_impurity_pd(best_rf_model, tree_node)
bagging_df = get_impurity_pd(bagging_model, tree_node)
#plot
fig, ax = plt.subplots(1,2, figsize = (20, 5))
ax.ravel()
sns.swarmplot(x = "feature", y = "impurity", data = rf_df, ax = ax[0])
#sns.swarmplot(x = "feature", y = "impurities", data = rf_df, ax = ax[0])
ax[0].tick_params(labelrotation=45)
ax[0].set_title("Random Forest: Node 0 impurities after split")
sns.swarmplot(x = "feature", y = "impurity", data = bagging_df, ax = ax[1])
ax[1].set_title("Bagging: Node 0 impurities after split")
plt.xticks(rotation=45);
```
____________
## The limitations of random forest
#### When can Random Forest overfit?
- Increasing the number of trees in a random forest generally doesn't increase the risk of overfitting, BUT if the number of trees in the ensemble is too large, the trees may become correlated and therefore increase the variance.
#### When can Random Forest fail?
- **When we have a lot of predictors that are completely independent of the response and one overwhelmingly influential predictor**.
#### Why aren't random forests and bagging interpretable? How about a very deep decision tree?
____________
## Bagging and random forest vs. Boosting
- **Bagging and Random Forest:**
- complex and deep trees **overfit**
- thus **let's perform variance reduction on complex trees!**
- **Boosting:**
- simple and shallow trees **underfit**
- thus **let's perform bias reduction of simple trees!**
- make the simple trees more expressive!
**Boosting** attempts to improve the predictive flexibility of simple models.
- It trains a **large number of “weak” learners in sequence**.
- A weak learner is a constrained model (limit the max depth of each decision tree).
- Each one in the sequence focuses on **learning from the mistakes** of the one before it.
- By weighting the mistakes more heavily, the next tree will learn from them.
- Combining all the weak learners into a single strong learner gives **a boosted tree**.
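A from-scratch sketch of this reweighting loop, using depth-1 trees (stumps) as the weak learners on made-up 1D data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy 1D data with labels in {-1, +1}
X = np.arange(10, dtype=float).reshape(-1, 1)
y = np.array([-1, -1, 1, 1, 1, 1, 1, 1, -1, -1])

n = len(y)
w = np.ones(n) / n           # start with equal weights
stumps, alphas = [], []
for _ in range(5):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = w[pred != y].sum()                       # weighted error
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
    w *= np.exp(-alpha * y * pred)                 # up-weight the mistakes
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

# The boosted model is the sign of the weighted sum of the weak learners
f = np.sign(sum(a * s.predict(X) for a, s in zip(alphas, stumps)))
print((f == y).mean())
```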
<img src="data/gradient_boosting1.png?" alt="tree_adj" width="70%"/>
----------
### Illustrative example (from [source](https://towardsdatascience.com/underfitting-and-overfitting-in-machine-learning-and-how-to-deal-with-it-6fe4a8a49dbf))
<img src="data/boosting.png" alt="tree_adj" width="70%"/>
We built multiple trees consecutively: Tree 1 -> Tree 2 -> Tree 3 - > ....
**The size of the plus or minus signs indicates the weight of each data point for every tree**. How do we determine these weights?
For each consecutive tree and iteration we do the following:
- The **wrongly classified data points ("mistakes" = red circles)** are identified and **more heavily weighted in the next tree (green arrow)**.
- Thus the size of the plus or minus changes in the next tree
- This change in weights will influence and change the next simple decision tree
- The **correct predictions are** identified and **less heavily weighted in the next tree**.
We iterate this process for a certain number of times, stop and construct our final model:
- The ensemble (**"Final: Combination"**) is a linear combination of the simple trees, and is more expressive!
- The ensemble (**"Final: Combination"**) no longer has just one simple decision boundary line, and fits the data better.
<img src="data/boosting_2.png?" alt="tree_adj" width="70%"/>
### What is AdaBoost?
- AdaBoost = Adaptive Boosting.
- AdaBoost is adaptive in the sense that subsequent weak learners are tweaked in favor of the instances misclassified by previous classifiers.
<img src="data/AdaBoost1.png" alt="tree_adj" width="70%"/>
<img src="data/AdaBoost2.png" alt="tree_adj" width="70%"/>
For an individual training point the loss can be defined as:
$$\text{ExpLoss}_i = \begin{cases}
e^{\hat{y}}, & \text{if}\ y=-1 \\
e^{-\hat{y}}, & \text{if}\ y=1
\end{cases}
$$
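Both cases collapse to the single expression $e^{-y\hat{y}}$: confident correct predictions incur a loss near zero, while confident mistakes incur an exponentially large one. A quick numeric check (values made up):

```python
import numpy as np

def exp_loss(y, y_hat):
    """Exponential loss e^{-y * y_hat} for labels y in {-1, +1}."""
    return np.exp(-y * y_hat)

print(exp_loss(1, 2.0))    # confident and correct: small loss
print(exp_loss(-1, 2.0))   # confident and wrong: large loss
```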
<img src="data/AdaBoost3.png" alt="tree_adj" width="70%"/>
### Illustrative Example
------
**Step 1. Start with an equal distribution initially**
<img src="data/ADA2.png" alt="tree_adj" width="40%">
------
**Step2. Fit a simple classifier**
<img src="data/ADA3.png" alt="tree_adj" width="40%"/>
------
**Step3. Update the weights**
<img src="data/ADA4.png" alt="tree_adj" width="40%"/>
**Step 4. Update the classifier:** trivial the first time (we have no model yet).
------
**Step2. Fit a simple classifier**
<img src="data/ADA5.png" alt="tree_adj" width="40%"/>
**Step3. Update the weights:** not shown.
------
**Step4. Update the classifier:**
<img src="data/ADA6.png" alt="tree_adj" width="40%">
-------------
### Let's talk about random forest and boosting in the context of bias and variance.
<img src="data/bias_variance.png" alt="split2" width="40%"/>
<img src="data/fitting.png" alt="split2" width="40%"/>
## 4. Use the Adaboost method to visualize Bias-Variance tradeoff.
Now let's try Boosting!
```
#Fit an Adaboost Model
x_train, y_train = data_train.drop(['Spam'], axis=1), data_train['Spam']
x_test , y_test = data_test.drop(['Spam'] , axis=1), data_test['Spam']
#Training
model = AdaBoostClassifier(base_estimator= DecisionTreeClassifier(max_depth=3),
n_estimators=200,
learning_rate=0.05)
model.fit(x_train.values, y_train.values)
#Predict
y_pred_train = model.predict(x_train.values)
y_pred_test = model.predict(x_test.values)
#Performance Evaluation
acc_boosting_training = accuracy_score(y_train, y_pred_train)*100
acc_boosting_test = accuracy_score(y_test, y_pred_test)*100
print("Ada Boost:\tAccuracy, Training Set \t: {:0.2f}%".format(acc_boosting_training))
print("Ada Boost:\tAccuracy, Testing Set \t: {:0.2f}%".format(acc_boosting_test))
```
**How does the test and training accuracy evolve with every iteration (tree)?**
```
#Plot Iteration based score
train_scores = list(model.staged_score(x_train.values,y_train))
test_scores = list(model.staged_score(x_test.values, y_test))
plt.figure(figsize=(10,7))
plt.plot(train_scores,label='train')
plt.plot(test_scores,label='test')
plt.xlabel('Iteration')
plt.ylabel('Accuracy')
plt.title("Variation of Accuracy with Iterations - ADA Boost")
plt.legend();
print('best number of iterations', np.array(test_scores).argmax())
```
What about performance?
```
print("Decision Trees:\tAccuracy, Testing Set \t: {:.3%}".format(acc_trees_testing))
print("Bagging: \tAccuracy, Testing Set \t: {:0.3f}%".format( acc_bagging_testing))
print("Random Forest: \tAccuracy, Testing Set \t: {:0.3f}%".format(acc_random_forest_testing))
print("Ada Boost:\tAccuracy, Testing Set \t: {:0.3f}%".format(acc_boosting_test))
```
AdaBoost seems to be performing better than Simple Decision Trees and has a similar Test Set Accuracy performance compared to Random Forest.
**Random tip:** If a "for"-loop takes some time and you want to know the progress while running the loop, use: **tqdm()** ([link](https://github.com/tqdm/tqdm)). No need for 1000's of ```print(i)``` outputs.
Usage: ```for i in tqdm( range(start,finish) ):```
- tqdm means *"progress"* in Arabic (taqadum, تقدّم) and
- tqdm is an abbreviation for *"I love you so much"* in Spanish (te quiero demasiado).
#### What if we change the depth of our AdaBoost trees?
```
#! pip install tqdm
# Start Timer
start = time.time()
#Find Optimal Depth of trees for Boosting
score_train, score_test = {}, {}
depth_start, depth_end = 2, 30
for i in tqdm(range(depth_start, depth_end, 2)):
model = AdaBoostClassifier(
base_estimator=DecisionTreeClassifier(max_depth=i),
n_estimators=200, learning_rate=0.05)
model.fit(x_train, y_train)
score_train[i] = accuracy_score(y_train, model.predict(x_train))
score_test[i] = accuracy_score(y_test, model.predict(x_test))
# Stop Timer
end = time.time()
elapsed_adaboost = end - start
#Plot
lists1 = sorted(score_train.items())
lists2 = sorted(score_test.items())
x1, y1 = zip(*lists1)
x2, y2 = zip(*lists2)
plt.figure(figsize=(10,7))
plt.ylabel("Accuracy")
plt.xlabel("Depth")
plt.title('Variation of Accuracy with Depth - ADA Boost Classifier')
plt.plot(x1, y1, 'b-', label='Train')
plt.plot(x2, y2, 'g-', label='Test')
plt.legend()
plt.show()
```
Adaboost complexity depends on both the number of estimators and the base estimator.
- At first, as our model complexity increases (depth 2-3), we observe a small increase in accuracy.
- But as we move further to the right of the graph (**deeper trees**), our model **overfits the data.**
- **REMINDER and validation: Boosting relies on simple trees!**
<div class="alert alert-success">
<strong>🏋🏻♂️ TEAM ACTIVITY 3:</strong> Exploring learning rate and TQDM </div>
### Explore how changing the learning rate changes the training and testing accuracy. Use the Te Quiero Demasiado (TQDM) wrapper around your range as above. (Hint: you will probably want to explore a range from $e^{-6}$ to $e^{-1}$.)
1) Copy the tqdm loop code and vary the learning rate as suggested.
2) Plot the staged score to visualize how the AdaBoost model learns differently depending on the learning rate.
```
#TODO
# %load "solutions/bo2.py"
```
#### Is this exercise useful?
**Food for Thought :**
- Are **boosted models independent of one another?** Do they need to wait for the previous model's residuals?
- Are **bagging or random forest models independent of each other**, can they be trained in a parallel fashion?
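A quick way to check that intuition (a sketch, not part of the lab; the data is synthetic and timings vary by machine): random forest trees are independent of one another, so scikit-learn can fit them in parallel via the `n_jobs` parameter.

```python
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

for n_jobs in (1, -1):  # one core vs. all available cores
    start = time.time()
    rf = RandomForestClassifier(n_estimators=200, n_jobs=n_jobs,
                                random_state=0).fit(X, y)
    print(f"n_jobs={n_jobs}: fit took {time.time() - start:.2f}s")
```

Boosting has no such knob for whole trees: each estimator needs the previous one's output before it can be fit.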
## End of Standard Lab
## 5. *Theory:* What is Gradient Boosting and XGBoost?
### What is Gradient Boosting?
To improve its predictions, **gradient boosting looks at the difference between its current approximation, and the known correct target vector, which is called the residual**.
The mathematics:
- It may be assumed that there is some imperfect model $F_{m}$
- The gradient boosting algorithm improves on $F_{m}$ constructing a new model that adds an estimator $h$ to provide a better model:
$$F_{m+1}(x)=F_{m}(x)+h(x)$$
- To find $h$, the gradient boosting solution starts with the observation that a perfect **h** would imply
$$F_{m+1}(x)=F_{m}(x)+h(x)=y$$
- or, equivalently solving for h,
$$h(x)=y-F_{m}(x)$$
- Therefore, gradient boosting will fit h to the residual $y-F_{m}(x)$
<img src="data/gradient_boosting2.png" alt="tree_adj" width="80%"/>
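The residual-fitting loop above can be sketched in a few lines (a minimal illustration, not the library's `GradientBoostingRegressor`; the data and hyper-parameters are made up):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel()

# F_0: start from a constant model (the mean of y)
prediction = np.full_like(y, y.mean())
learning_rate, models = 0.1, []

for m in range(50):
    residual = y - prediction                      # h should approximate y - F_m(x)
    h = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, residual)
    prediction = prediction + learning_rate * h.predict(X)  # F_{m+1} = F_m + lr * h
    models.append(h)

mse_final = np.mean((y - prediction) ** 2)
print(f"training MSE after boosting: {mse_final:.4f}")
```

Each small tree `h` is fit to the current residual, and the ensemble prediction moves a small step (the learning rate) toward it.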
-------
### XGBoost: ["Long May She Reign!"](https://towardsdatascience.com/https-medium-com-vishalmorde-xgboost-algorithm-long-she-may-rein-edd9f99be63d)
<img src="data/kaggle.png" alt="tree_adj" width="100%"/>
----------
### What is XGBoost and why is it so good!?
- Based on Gradient Boosting
- XGBoost = **eXtreme Gradient Boosting**; the name refers to the engineering goal of pushing the limits of computational resources for boosted tree algorithms
**Accuracy:**
- XGBoost, however, uses a **more regularized model formalization to control overfitting** (= better performance), applying both L1 and L2 regularization.
- Tree pruning methods: shallower trees also help prevent overfitting.
- Improved convergence techniques (like early stopping when no improvement is made for X iterations).
- Built-in cross-validation.
**Computing Speed:**
- Special Vector and matrix type data structures for faster results.
- Parallelized tree building: using all of your CPU cores during training.
- Distributed Computing: for training very large models using a cluster of machines.
- Cache Optimization of data structures and algorithm: to make best use of hardware.
**XGBoost builds boosted trees in parallel? What? How?**
- No: XGBoost doesn't train multiple trees in parallel - you need the predictions after each tree to update the gradients.
- Rather, it does the parallelization WITHIN a single tree, using OpenMP to create branches independently.
## 6. Use XGBoost: Extreme Gradient Boosting
```
# Let's install XGBoost
#! pip install xgboost
import xgboost as xgb
# Create the training and test data
dtrain = xgb.DMatrix(x_train, label=y_train)
dtest = xgb.DMatrix(x_test, label=y_test)
# Parameters
param = {
'max_depth': best_depth, # the maximum depth of each tree
'eta': 0.3, # the training step for each iteration
'verbosity': 0, # logging mode - quiet ('silent' is deprecated in recent XGBoost)
'objective': 'multi:softprob', # error evaluation for multiclass training
'num_class': 2} # the number of classes that exist in this dataset
# Number of training iterations
num_round = 200
# Start timer
start = time.time()
# Train XGBoost
bst = xgb.train(param,
dtrain,
num_round,
evals= [(dtrain, 'train')],
early_stopping_rounds=20, # early stopping
verbose_eval=20)
# Make prediction training set
preds_train = bst.predict(dtrain)
best_preds_train = np.asarray([np.argmax(line) for line in preds_train])
# Make prediction test set
preds_test = bst.predict(dtest)
best_preds_test = np.asarray([np.argmax(line) for line in preds_test])
# Performance Evaluation
acc_XGBoost_training = accuracy_score(y_train, best_preds_train)*100
acc_XGBoost_test = accuracy_score(y_test, best_preds_test)*100
# Stop Timer
end = time.time()
elapsed_xgboost = end - start
print("XGBoost:\tAccuracy, Training Set \t: {:0.2f}%".format(acc_XGBoost_training))
print("XGBoost:\tAccuracy, Testing Set \t: {:0.2f}%".format(acc_XGBoost_test))
```
### What about the accuracy performance: AdaBoost versus XGBoost?
```
print("Ada Boost:\tAccuracy, Testing Set \t: {:0.2f}%".format(acc_boosting_test))
print("XGBoost:\tAccuracy, Testing Set \t: {:0.2f}%".format(acc_XGBoost_test))
```
### What about the computing performance: AdaBoost versus XGBoost?
```
print("AdaBoost elapsed time: \t{:0.2f}s".format(elapsed_adaboost))
print("XGBoost elapsed time: \t{:0.2f}s".format(elapsed_xgboost))
```
### What if we change the depth of our XGBoost trees and compare to Ada Boost?
```
def model_xgboost(best_depth):
param = {
'max_depth': best_depth, # the maximum depth of each tree
'eta': 0.3, # the training step for each iteration
'verbosity': 0, # logging mode - quiet
'objective': 'multi:softprob', # error evaluation for multiclass training
'num_class': 2} # the number of classes that exist in this dataset
# the number of training iterations
num_round = 200
bst = xgb.train(param,
dtrain,
num_round,
evals= [(dtrain, 'train')],
early_stopping_rounds=20,
verbose_eval=False)
preds_train = bst.predict(dtrain)
best_preds_train = np.asarray([np.argmax(line) for line in preds_train])
preds_test = bst.predict(dtest)
best_preds_test = np.asarray([np.argmax(line) for line in preds_test])
#Performance Evaluation
XGBoost_training = accuracy_score(y_train, best_preds_train)
XGBoost_test = accuracy_score(y_test, best_preds_test)
return XGBoost_training, XGBoost_test
#Find Optimal Depth of trees for Boosting
from tqdm import trange # trange(a, b) is shorthand for tqdm(range(a, b))
score_train_xgb, score_test_xgb = {}, {}
depth_start, depth_end = 2, 30
for i in trange(depth_start, depth_end, 2):
XGBoost_training, XGBoost_test = model_xgboost(i)
score_train_xgb[i] = XGBoost_training
score_test_xgb[i] = XGBoost_test
#Plot
lists1 = sorted(score_train_xgb.items())
lists2 = sorted(score_test_xgb.items())
x3, y3 = zip(*lists1)
x4, y4 = zip(*lists2)
plt.figure(figsize=(10,7))
plt.ylabel("Accuracy")
plt.xlabel("Depth")
plt.title('Variation of Accuracy with Depth - Adaboost & XGBoost Classifier')
plt.plot(x1, y1, label='Train Accuracy Ada Boost')
plt.plot(x2, y2, label='Test Accuracy Ada Boost')
plt.plot(x3, y3, label='Train Accuracy XGBoost')
plt.plot(x4, y4, label='Test Accuracy XGBoost')
plt.legend()
plt.show()
```
**Interesting**:
- There is no clear optimal depth of the base tree for XGBoost, probably because of the heavy regularization, pruning, and early stopping that kick in when starting from a deep tree.
- XGBoost does not seem to overfit when the depth of the tree increases, as opposed to Ada Boost.
**All the accuracy performances:**
```
print("Decision Trees:\tAccuracy, Testing Set \t: {:.2%}".format(acc_trees_testing))
print("Bagging: \tAccuracy, Testing Set \t: {:0.2f}%".format( acc_bagging_testing))
print("Random Forest: \tAccuracy, Testing Set \t: {:0.2f}%".format(acc_random_forest_testing))
print("Ada Boost:\tAccuracy, Testing Set \t: {:0.2f}%".format(acc_boosting_test))
print("XGBoost:\tAccuracy, Testing Set \t: {:0.2f}%".format(acc_XGBoost_test))
```
----------
**Overview of all the tree algorithms:** [Source](https://towardsdatascience.com/https-medium-com-vishalmorde-xgboost-algorithm-long-she-may-rein-edd9f99be63d)
<img src="data/trees.png" alt="tree_adj" width="100%"/>
----------
## Optional: Example to better understand Bias vs Variance tradeoff.
A central notion underlying what we've been learning in lectures and sections so far is the trade-off between overfitting and underfitting. If you remember back to Homework 3, we had a model that seemed to represent our data accurately. However, we saw that as we made it more and more accurate on the training set, it did not generalize well to unobserved data.
As a different example, in face recognition algorithms, such as that on the iPhone X, a too-accurate model would be unable to identify someone who styled their hair differently that day. The reason is that our model may learn irrelevant features in the training data. Conversely, an insufficiently trained model would not generalize well either. For example, it was recently reported that a face mask could fool the iPhone X.
A widely used solution in statistics to reduce overfitting consists of adding structure to the model, with something like regularization. This method favors simpler models during training.
The bias-variance dilemma is closely related.
- The **bias** of a model quantifies how accurate it is, on average, across training sets.
- The **variance** quantifies how sensitive the model is to small changes in the training set.
- A **robust** model is not overly sensitive to small changes.
- **The dilemma involves minimizing both bias and variance**; we want a precise and robust model. Simpler models tend to be less accurate but more robust. Complex models tend to be more accurate but less robust.
**How to reduce bias:**
- **Use more complex models, more features, less regularization,** ...
- **Boosting:** attempts to improve the predictive flexibility of simple models. Boosting uses simple base models and tries to “boost” their aggregate complexity.
**How to reduce variance:**
- **Early Stopping:** its rules give us guidance on how many iterations can be run before the learner begins to overfit.
- **Pruning:** extensively used when building tree-based models; it simply removes the nodes that add little predictive power for the problem at hand.
- **Regularization:** introduces a cost term into the objective function for bringing in more features, pushing the coefficients of many variables toward zero and thus reducing model complexity.
- **Train with more data:** It won’t work every time, but training with more data can help algorithms detect the signal better.
- **Ensembling:** Ensembles are machine learning methods for combining predictions from multiple separate models. For example:
- **Bagging** attempts to reduce the chance of overfitting complex models: Bagging uses complex base models and tries to “smooth out” their predictions.
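As a rough illustration of those last two bullets (a sketch with made-up data, not part of the lab): retraining a single deep tree versus a small bagged ensemble on fresh noisy samples shows the ensemble's predictions varying less at a fixed query point.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)

def sample_data(n=200):
    X = rng.uniform(-3, 3, size=(n, 1))
    return X, np.sin(X).ravel() + rng.normal(scale=0.3, size=n)

x0 = np.array([[0.5]])              # one fixed query point
single, bagged = [], []
for trial in range(30):             # retrain on fresh noisy data each trial
    X, y = sample_data()
    single.append(DecisionTreeRegressor(random_state=0).fit(X, y).predict(x0)[0])
    # a bagged ensemble: average 25 deep trees fit on bootstrap resamples
    preds = []
    for _ in range(25):
        idx = rng.randint(0, len(X), len(X))
        preds.append(DecisionTreeRegressor(random_state=0)
                     .fit(X[idx], y[idx]).predict(x0)[0])
    bagged.append(np.mean(preds))

var_single, var_bagged = np.var(single), np.var(bagged)
print(f"variance, single tree: {var_single:.4f}  bagged: {var_bagged:.4f}")
```

Averaging the complex, high-variance base trees smooths out their individual fluctuations, which is exactly the bagging effect described above.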
-------------
#### Interesting Piazza post: why is there randomness in a simple decision tree?
```"Hi there. I notice that there is a parameter called "random_state" in decision tree function and I wonder why we need randomness in simple decision tree. If we add randomness in such case, isn't it the same as random forest?"```
- The problem of learning an optimal decision tree is known to be **NP-complete** under several aspects of optimality and even for simple concepts.
- Consequently, practical decision-tree learning algorithms are based on **heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node**.
- Such algorithms **cannot guarantee to return the globally optimal decision tree**.
- This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled with replacement (Bagging).
For example: **what is the default `DecisionTreeClassifier` behaviour when there are 2 or more best features for a certain split (a tie among "splitters")?** (After a deep dive and an internet search: [link](https://github.com/scikit-learn/scikit-learn/issues/12259)):
- The current default behaviour when splitter="best" is to shuffle the features at each step and take the best feature to split.
- In case there is a tie, we take a random one.
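A small sketch of that tie-breaking behaviour (the data here is made up purely for illustration): duplicating a feature guarantees a tie at every split, and different `random_state` values can break it differently.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
x = rng.rand(100)
X = np.c_[x, x]                     # two identical columns -> every split is a tie
y = (x > 0.5).astype(int)

chosen = set()
for seed in range(20):
    stump = DecisionTreeClassifier(max_depth=1, random_state=seed).fit(X, y)
    chosen.add(int(stump.tree_.feature[0]))  # feature index used at the root split
print(chosen)
```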
```
# default_exp models.InceptionTimePlus
```
# InceptionTimePlus
> This is an unofficial PyTorch implementation of InceptionTime (Fawaz, 2019) created by Ignacio Oguiza.
**References:**
* Fawaz, H. I., Lucas, B., Forestier, G., Pelletier, C., Schmidt, D. F., Weber, J., ... & Petitjean, F. (2020). [Inceptiontime: Finding alexnet for time series classification. Data Mining and Knowledge Discovery, 34(6), 1936-1962.](https://arxiv.org/pdf/1909.04939)
* Official InceptionTime tensorflow implementation: https://github.com/hfawaz/InceptionTime
```
#export
from tsai.imports import *
from tsai.utils import *
from tsai.models.layers import *
from tsai.models.utils import *
torch.set_num_threads(cpus)
#export
# This is an unofficial PyTorch implementation by Ignacio Oguiza - oguiza@gmail.com modified from:
# Fawaz, H. I., Lucas, B., Forestier, G., Pelletier, C., Schmidt, D. F., Weber, J., ... & Petitjean, F. (2019).
# InceptionTime: Finding AlexNet for Time Series Classification. arXiv preprint arXiv:1909.04939.
# Official InceptionTime tensorflow implementation: https://github.com/hfawaz/InceptionTime
class InceptionModulePlus(Module):
def __init__(self, ni, nf, ks=40, bottleneck=True, padding='same', coord=False, separable=False, dilation=1, stride=1, conv_dropout=0., sa=False, se=None,
norm='Batch', zero_norm=False, bn_1st=True, act=nn.ReLU, act_kwargs={}):
if not (is_listy(ks) and len(ks) == 3):
if isinstance(ks, Integral): ks = [ks // (2**i) for i in range(3)]
ks = [ksi if ksi % 2 != 0 else ksi - 1 for ksi in ks] # ensure odd ks for padding='same'
bottleneck = False if ni == nf else bottleneck
self.bottleneck = Conv(ni, nf, 1, coord=coord, bias=False) if bottleneck else noop #
self.convs = nn.ModuleList()
for i in range(len(ks)): self.convs.append(Conv(nf if bottleneck else ni, nf, ks[i], padding=padding, coord=coord, separable=separable,
dilation=dilation**i, stride=stride, bias=False))
self.mp_conv = nn.Sequential(*[nn.MaxPool1d(3, stride=1, padding=1), Conv(ni, nf, 1, coord=coord, bias=False)])
self.concat = Concat()
self.norm = Norm(nf * 4, norm=norm, zero_norm=zero_norm)
self.conv_dropout = nn.Dropout(conv_dropout) if conv_dropout else noop
self.sa = SimpleSelfAttention(nf * 4) if sa else noop
self.act = act(**act_kwargs) if act else noop
self.se = nn.Sequential(SqueezeExciteBlock(nf * 4, reduction=se), BN1d(nf * 4)) if se else noop
self._init_cnn(self)
def _init_cnn(self, m):
if getattr(self, 'bias', None) is not None: nn.init.constant_(self.bias, 0)
if isinstance(self, (nn.Conv1d,nn.Conv2d,nn.Conv3d,nn.Linear)): nn.init.kaiming_normal_(self.weight)
for l in m.children(): self._init_cnn(l)
def forward(self, x):
input_tensor = x
x = self.bottleneck(x)
x = self.concat([l(x) for l in self.convs] + [self.mp_conv(input_tensor)])
x = self.norm(x)
x = self.conv_dropout(x)
x = self.sa(x)
x = self.act(x)
x = self.se(x)
return x
@delegates(InceptionModulePlus.__init__)
class InceptionBlockPlus(Module):
def __init__(self, ni, nf, residual=True, depth=6, coord=False, norm='Batch', zero_norm=False, act=nn.ReLU, act_kwargs={}, sa=False, se=None,
stoch_depth=1., **kwargs):
self.residual, self.depth = residual, depth
self.inception, self.shortcut, self.act = nn.ModuleList(), nn.ModuleList(), nn.ModuleList()
for d in range(depth):
self.inception.append(InceptionModulePlus(ni if d == 0 else nf * 4, nf, coord=coord, norm=norm,
zero_norm=zero_norm if d % 3 == 2 else False,
act=act if d % 3 != 2 else None, act_kwargs=act_kwargs,
sa=sa if d % 3 == 2 else False,
se=se if d % 3 != 2 else None,
**kwargs))
if self.residual and d % 3 == 2:
n_in, n_out = ni if d == 2 else nf * 4, nf * 4
self.shortcut.append(Norm(n_in, norm=norm) if n_in == n_out else ConvBlock(n_in, n_out, 1, coord=coord, bias=False, norm=norm, act=None))
self.act.append(act(**act_kwargs))
self.add = Add()
if stoch_depth != 0: keep_prob = np.linspace(1, stoch_depth, depth)
else: keep_prob = np.array([1] * depth)
self.keep_prob = keep_prob
def forward(self, x):
res = x
for i in range(self.depth):
if self.keep_prob[i] > random.random() or not self.training:
x = self.inception[i](x)
if self.residual and i % 3 == 2:
res = x = self.act[i//3](self.add(x, self.shortcut[i//3](res)))
return x
#export
@delegates(InceptionModulePlus.__init__)
class InceptionTimePlus(nn.Sequential):
def __init__(self, c_in, c_out, seq_len=None, nf=32, nb_filters=None,
flatten=False, concat_pool=False, fc_dropout=0., bn=False, y_range=None, custom_head=None, **kwargs):
if nb_filters is not None: nf = nb_filters
else: nf = ifnone(nf, nb_filters) # for compatibility
backbone = InceptionBlockPlus(c_in, nf, **kwargs)
#head
self.head_nf = nf * 4
self.c_out = c_out
self.seq_len = seq_len
if custom_head: head = custom_head(self.head_nf, c_out, seq_len)
else: head = self.create_head(self.head_nf, c_out, seq_len, flatten=flatten, concat_pool=concat_pool,
fc_dropout=fc_dropout, bn=bn, y_range=y_range)
layers = OrderedDict([('backbone', nn.Sequential(backbone)), ('head', nn.Sequential(head))])
super().__init__(layers)
def create_head(self, nf, c_out, seq_len, flatten=False, concat_pool=False, fc_dropout=0., bn=False, y_range=None):
if flatten:
nf *= seq_len
layers = [Flatten()]
else:
if concat_pool: nf *= 2
layers = [GACP1d(1) if concat_pool else GAP1d(1)]
layers += [LinBnDrop(nf, c_out, bn=bn, p=fc_dropout)]
if y_range: layers += [SigmoidRange(*y_range)]
return nn.Sequential(*layers)
#export
class InCoordTime(InceptionTimePlus):
def __init__(self, *args, coord=True, zero_norm=True, **kwargs):
super().__init__(*args, coord=coord, zero_norm=zero_norm, **kwargs)
class XCoordTime(InceptionTimePlus):
def __init__(self, *args, coord=True, separable=True, zero_norm=True, **kwargs):
super().__init__(*args, coord=coord, separable=separable, zero_norm=zero_norm, **kwargs)
InceptionTimePlus17x17 = named_partial('InceptionTimePlus17x17', InceptionTimePlus, nf=17, depth=3)
InceptionTimePlus32x32 = named_partial('InceptionTimePlus32x32', InceptionTimePlus)
InceptionTimePlus47x47 = named_partial('InceptionTimePlus47x47', InceptionTimePlus, nf=47, depth=9)
InceptionTimePlus62x62 = named_partial('InceptionTimePlus62x62', InceptionTimePlus, nf=62, depth=9)
InceptionTimeXLPlus = named_partial('InceptionTimeXLPlus', InceptionTimePlus, nf=64, depth=12)
bs = 16
n_vars = 3
seq_len = 51
c_out = 2
xb = torch.rand(bs, n_vars, seq_len)
test_eq(InceptionTimePlus(n_vars,c_out)(xb).shape, [bs, c_out])
test_eq(InceptionTimePlus(n_vars,c_out,concat_pool=True)(xb).shape, [bs, c_out])
test_eq(InceptionTimePlus(n_vars,c_out, bottleneck=False)(xb).shape, [bs, c_out])
test_eq(InceptionTimePlus(n_vars,c_out, residual=False)(xb).shape, [bs, c_out])
test_eq(InceptionTimePlus(n_vars,c_out, conv_dropout=.5)(xb).shape, [bs, c_out])
test_eq(InceptionTimePlus(n_vars,c_out, stoch_depth=.5)(xb).shape, [bs, c_out])
test_eq(InceptionTimePlus(n_vars, c_out, seq_len=seq_len, zero_norm=True, flatten=True)(xb).shape, [bs, c_out])
test_eq(InceptionTimePlus(n_vars,c_out, coord=True, separable=True,
norm='Instance', zero_norm=True, bn_1st=False, fc_dropout=.5, sa=True, se=True, act=nn.PReLU, act_kwargs={})(xb).shape, [bs, c_out])
test_eq(InceptionTimePlus(n_vars,c_out, coord=True, separable=True,
norm='Instance', zero_norm=True, bn_1st=False, act=nn.PReLU, act_kwargs={})(xb).shape, [bs, c_out])
test_eq(total_params(InceptionTimePlus(3, 2))[0], 455490)
test_eq(total_params(InceptionTimePlus(6, 2, **{'coord': True, 'separable': True, 'zero_norm': True}))[0], 77204)
test_eq(total_params(InceptionTimePlus(3, 2, ks=40))[0], total_params(InceptionTimePlus(3, 2, ks=[9, 19, 39]))[0])
bs = 16
n_vars = 3
seq_len = 51
c_out = 2
xb = torch.rand(bs, n_vars, seq_len)
model = InceptionTimePlus(n_vars, c_out)
model(xb).shape
test_eq(model[0](xb), model.backbone(xb))
test_eq(model[1](model[0](xb)), model.head(model[0](xb)))
test_eq(model[1].state_dict().keys(), model.head.state_dict().keys())
test_eq(len(ts_splitter(model)), 2)
test_eq(check_bias(InceptionTimePlus(2,3, zero_norm=True), is_conv)[0].sum(), 0)
test_eq(check_weight(InceptionTimePlus(2,3, zero_norm=True), is_bn)[0].sum(), 6)
test_eq(check_weight(InceptionTimePlus(2,3), is_bn)[0], np.array([1., 1., 1., 1., 1., 1., 1., 1.]))
for i in range(10): InceptionTimePlus(n_vars,c_out,stoch_depth=0.8,depth=9,zero_norm=True)(xb)
net = InceptionTimePlus(2,3,**{'coord': True, 'separable': True, 'zero_norm': True})
test_eq(check_weight(net, is_bn)[0], np.array([1., 1., 0., 1., 1., 0., 1., 1.]))
net
#export
@delegates(InceptionTimePlus.__init__)
class MultiInceptionTimePlus(nn.Sequential):
"""Class that allows you to create a model with multiple branches of InceptionTimePlus."""
_arch = InceptionTimePlus
def __init__(self, feat_list, c_out, seq_len=None, nf=32, nb_filters=None, depth=6, stoch_depth=1.,
flatten=False, concat_pool=False, fc_dropout=0., bn=False, y_range=None, custom_head=None, **kwargs):
"""
Args:
feat_list: list with number of features that will be passed to each body.
"""
self.feat_list = [feat_list] if isinstance(feat_list, int) else feat_list
# Backbone
branches = nn.ModuleList()
self.head_nf = 0
for feat in self.feat_list:
if is_listy(feat): feat = len(feat)
m = build_ts_model(self._arch, c_in=feat, c_out=c_out, seq_len=seq_len, nf=nf, nb_filters=nb_filters,
depth=depth, stoch_depth=stoch_depth, **kwargs)
self.head_nf += output_size_calculator(m[0], feat, ifnone(seq_len, 10))[0]
branches.append(m.backbone)
backbone = _Splitter(self.feat_list, branches)
# Head
self.c_out = c_out
self.seq_len = seq_len
if custom_head is None:
head = self._arch.create_head(self, self.head_nf, c_out, seq_len, flatten=flatten, concat_pool=concat_pool,
fc_dropout=fc_dropout, bn=bn, y_range=y_range)
else:
head = custom_head(self.head_nf, c_out, seq_len)
layers = OrderedDict([('backbone', nn.Sequential(backbone)), ('head', nn.Sequential(head))])
super().__init__(layers)
#exporti
class _Splitter(Module):
def __init__(self, feat_list, branches):
self.feat_list, self.branches = feat_list, branches
def forward(self, x):
if is_listy(self.feat_list[0]):
x = [x[:, feat] for feat in self.feat_list]
else:
x = torch.split(x, self.feat_list, dim=1)
_out = []
for xi, branch in zip(x, self.branches): _out.append(branch(xi))
output = torch.cat(_out, dim=1)
return output
bs = 16
n_vars = 3
seq_len = 51
c_out = 2
xb = torch.rand(bs, n_vars, seq_len)
test_eq(count_parameters(MultiInceptionTimePlus([1,1,1], c_out)) > count_parameters(MultiInceptionTimePlus(3, c_out)), True)
test_eq(MultiInceptionTimePlus([1,1,1], c_out).to(xb.device)(xb).shape, MultiInceptionTimePlus(3, c_out).to(xb.device)(xb).shape)
bs = 16
n_vars = 3
seq_len = 12
c_out = 10
xb = torch.rand(bs, n_vars, seq_len)
new_head = partial(conv_lin_3d_head, d=(5,2))
net = MultiInceptionTimePlus(n_vars, c_out, seq_len, custom_head=new_head)
print(net.to(xb.device)(xb).shape)
net.head
bs = 16
n_vars = 6
seq_len = 12
c_out = 2
xb = torch.rand(bs, n_vars, seq_len)
net = MultiInceptionTimePlus([1,2,3], c_out, seq_len)
print(net.to(xb.device)(xb).shape)
net.head
bs = 8
c_in = 7 # aka channels, features, variables, dimensions
c_out = 2
seq_len = 10
xb2 = torch.randn(bs, c_in, seq_len)
model1 = MultiInceptionTimePlus([2, 5], c_out, seq_len)
model2 = MultiInceptionTimePlus([[0,2,5], [0,1,3,4,6]], c_out, seq_len)
test_eq(model1.to(xb2.device)(xb2).shape, (bs, c_out))
test_eq(model1.to(xb2.device)(xb2).shape, model2.to(xb2.device)(xb2).shape)
from tsai.data.all import *
from tsai.learner import ts_learner
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, split_data=False)
tfms = [None, [Categorize()]]
batch_tfms = TSStandardize()
ts_dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms)
learn = ts_learner(ts_dls, InceptionTimePlus)
xb,yb=first(learn.dls.train)
test_eq(learn.model.to(xb.device)(xb).shape, (ts_dls.bs, ts_dls.c))
p1 = count_parameters(learn.model)
learn.freeze()
p2 = count_parameters(learn.model)
assert p1 > p2 > 0
p1, p2
from tsai.data.all import *
# from tsai.learner import ts_learner
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, split_data=False)
tfms = [None, [Categorize()]]
batch_tfms = TSStandardize()
ts_dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms)
learn = ts_learner(ts_dls, MultiInceptionTimePlus, c_in=[4, 15, 5])
xb,yb=first(learn.dls.train)
test_eq(learn.model.to(xb.device)(xb).shape, (ts_dls.bs, ts_dls.c))
p1 = count_parameters(learn.model)
learn.freeze()
p2 = count_parameters(learn.model)
assert p1 > p2 > 0
p1, p2
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
```
<a href="https://colab.research.google.com/github/1ucky40nc3/girlsday/blob/copy_paste/intro_python.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## 1. Variables and Data Types
If you want to remember values, you store them in variables.
```
a = 1
b = 2
```
### Variables can be declared with the following syntax.
```
name = value
```
### The following conventions apply to names:
* Names are written in lowercase
```
name = value
integer = 1
float = 1.0
```
* If a name consists of several words, they are joined with a "_"
```
long_name = value
integer_for_calculation = 1
float_for_calculation = 1.0
```
* Names must not begin with a digit, but they may contain digits
```
name1 = value
integer2 = 1
integer3 = 1.0
```
* Names must not contain special characters
### Values in Python have a specific type - but Python takes care of it for you
```
name = type_name(value)
```
### Typical types are
* Whole numbers (integers)
```
a = 1
b = 2
```
* Floating-point numbers (floats)
```
a = 1.0
b = 2.0
```
* Sequences of characters (strings)
```
a = "Hello, World!"
b = "How are you?"
```
* Lists
```
a = [1, "Hello, World!"]
b = list((1, "Hello, World!"))
```
* Objects (everything in Python is an object)
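A small extra example (not part of the original notebook): the built-in `type` function shows the type of a value.

```python
a = 1
b = 2.0
c = "Hello, World!"
d = [1, "Hello, World!"]

print(type(a))  # <class 'int'>
print(type(b))  # <class 'float'>
print(type(c))  # <class 'str'>
print(type(d))  # <class 'list'>
```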
## 2. Functions and Loops
### To work effectively with values, we use functions and loops
---
### Before functions can be used, they must be defined
```python
def function(parameter):  # head of the function
    return parameter      # body of the function
```
---
### Functions consist of:
* A head
    * A name
    * Input parameters
* A body
    * Some code
    * A return value
---
The head gives the function's name and information about the variables and values that are to be processed. In the body, these variables are used to create an output, or to perform any other tricks you like.
---
### To use a function, we write its name and put its parameters in parentheses after it
```
function(parameter)
```
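For example, a small concrete function (an extra example, not from the original notebook):

```python
def add(a, b):    # head of the function
    return a + b  # body of the function

result = add(1, 2)
print(result)  # prints 3
```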
---
The function then conjures up a result for us.
```
print(a) # The output function "print" shows us the value of a variable as a string
print(b) # We can write code one line after another. It is executed in reading order.
```
### Loops have a structure similar to that of functions.
---
### There are different kinds of loops.
### We will now learn about the "for" loop.
```python
for i in some_list:  # head of the loop
    print(i)         # body of the loop
```
---
Loops consist of:
* A head
    * A variable that shows the progress through the loop
    * A variable, such as a list, from which values can be taken
* A body
    * Some code that is to be repeated
---
The basic idea of the "for" loop is that elements are taken out of a list one after another, and we can work with them in the body of the loop. The elements of the list are our variable, which changes in every iteration. The name of the variable belongs in the head of the loop. The name does not change, but the value does. We finally draw that value from the list whose name we give after the "in" keyword.
```
l = [1, 2, 3, 4]
for i in l:
print(i)
l = ["Hello", "World"]
for i in l:
print(i)
l = [1, "Hi"]
for i in l:
print(i)
l = list(range(4))
for i in l:
print(i)
for i in range(4):
print(i)
```
```
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
```
# Unsupervised learning: hierarchical and density-based clustering algorithms
In notebook number 8 we introduced one of the most basic and widely used clustering algorithms, K-means. One of the advantages of K-means is that it is extremely easy to implement and very computationally efficient compared with other clustering algorithms. However, we already saw that one of the weaknesses of K-means is that it only works well if the data to be clustered are distributed in spherical shapes. In addition, we have to decide on a number of clusters, *k*, *a priori*, which can be a problem if we have no prior knowledge of how many clusters to expect.
In this notebook we will look at two alternative ways of clustering: hierarchical clustering and density-based clustering.
# Hierarchical clustering
An important feature of hierarchical clustering is that we can visualize the results as a dendrogram, a tree diagram. Using this visualization, we can then decide the depth threshold at which to cut the tree to obtain a clustering. In other words, we do not have to decide on the number of clusters without any information.
### Agglomerative and divisive clustering
We can also distinguish two main forms of hierarchical clustering: divisive and agglomerative. In agglomerative clustering, we start with one sample per cluster and keep merging clusters (joining the closest ones), following a *bottom-up* strategy to build the dendrogram. In divisive clustering, on the other hand, we start by placing all points in a single cluster and then split that cluster into smaller and smaller subgroups, following a *top-down* strategy.
We will focus on agglomerative clustering.
### Single and complete linkage
Now, the question is how to measure the distance between examples. A common choice is the Euclidean distance, which is what the K-means algorithm uses.
However, the hierarchical algorithm needs to measure the distance between groups of points, that is, the distance between one cluster (a grouping of points) and another. Two ways of doing this are single linkage and complete linkage.
In single linkage, we take the most similar pair of points (based on Euclidean distance, for example) among all points belonging to the two clusters. In complete linkage, we take the most distant pair of points.

To see how agglomerative clustering works, let's load the Iris dataset (pretending we do not know the true labels and want to discover the species):
```
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data[:, [2, 3]]
y = iris.target
n_samples, n_features = X.shape
plt.scatter(X[:, 0], X[:, 1], c=y);
```
Now let's do some clustering-based exploration, visualizing the dendrogram with SciPy's `linkage` (which performs hierarchical clustering) and `dendrogram` (which draws the dendrogram) functions:
```
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import dendrogram
clusters = linkage(X,
metric='euclidean',
method='complete')
dendr = dendrogram(clusters)
plt.ylabel('Euclidean distance');
```
Alternatively, we can use scikit-learn's `AgglomerativeClustering` and split the dataset into 3 classes. Can you guess which three classes we will find?
```
from sklearn.cluster import AgglomerativeClustering

ac = AgglomerativeClustering(n_clusters=3,
                             affinity='euclidean',
                             linkage='complete')
prediction = ac.fit_predict(X)
print('Cluster labels: %s\n' % prediction)
plt.scatter(X[:, 0], X[:, 1], c=prediction);
```
# Density-based clustering - DBSCAN
Another useful form of clustering is known as *Density-Based Spatial Clustering of Applications with Noise* (DBSCAN). In essence, we can think of DBSCAN as an algorithm that splits the dataset into subgroups by looking for dense regions of points.
In DBSCAN, there are three types of points:
- Core points: points that have at least a minimum number of points (MinPts) within a hypersphere of radius epsilon.
- Border points: points that are not core points, since they do not have enough points in their own neighborhood, but do belong to the epsilon-radius neighborhood of some core point.
- Noise points: all points that do not belong to either of the previous categories.
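The three definitions above can be written down directly. The following is a didactic NumPy sketch of the point taxonomy only (our illustration — scikit-learn's `DBSCAN` handles the actual clustering):

```python
import numpy as np

def classify_points(X, eps=1.0, min_pts=3):
    # label each point as 'core', 'border', or 'noise' per the DBSCAN definitions
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    neighbors = d <= eps                         # neighborhood mask (includes the point itself)
    is_core = neighbors.sum(axis=1) >= min_pts   # enough neighbors within eps
    # border: not core, but inside the eps-neighborhood of at least one core point
    is_border = ~is_core & (neighbors & is_core[None, :]).any(axis=1)
    return np.where(is_core, "core", np.where(is_border, "border", "noise"))

X = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [1.2, 0.0], [5.0, 5.0]])
print(classify_points(X))  # core, core, core, border, noise
```

The isolated point at (5, 5) has no core point within `eps`, so it is labeled noise.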

One advantage of DBSCAN is that we do not have to specify the number of clusters a priori. However, it requires us to set two additional hyperparameters: ``MinPts`` and the radius ``epsilon``.
```
from sklearn.datasets import make_moons

X, y = make_moons(n_samples=400,
                  noise=0.1,
                  random_state=1)
plt.scatter(X[:, 0], X[:, 1])
plt.show()

from sklearn.cluster import DBSCAN

db = DBSCAN(eps=0.2,
            min_samples=10,
            metric='euclidean')
prediction = db.fit_predict(X)
print("Predicted labels:\n", prediction)
plt.scatter(X[:, 0], X[:, 1], c=prediction);
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Using the following synthetic dataset, two concentric circles, experiment with the results obtained by the clustering algorithms we have considered so far: `KMeans`, `AgglomerativeClustering`, and `DBSCAN`.
Which algorithm best reproduces or discovers the hidden structure (assuming we do not know `y`)?
Can you reason about why this algorithm works while the other two fail?
</li>
</ul>
</div>
```
from sklearn.datasets import make_circles

X, y = make_circles(n_samples=1500,
                    factor=.4,
                    noise=.05)
plt.scatter(X[:, 0], X[:, 1], c=y);
```
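One possible way to set up the comparison (the `eps`, `linkage`, and `random_state` choices here are ours, not prescribed by the exercise; the hidden `y` is used only to score the result afterwards):

```python
from sklearn.datasets import make_circles
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.metrics import adjusted_rand_score

X, y = make_circles(n_samples=1500, factor=0.4, noise=0.05, random_state=0)

labels = {
    'KMeans': KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X),
    'Agglomerative': AgglomerativeClustering(n_clusters=2, linkage='complete').fit_predict(X),
    'DBSCAN': DBSCAN(eps=0.2, min_samples=10).fit_predict(X),
}
for name, pred in labels.items():
    # agreement with the hidden labels y (1.0 = perfect recovery)
    print(name, round(adjusted_rand_score(y, pred), 2))
```

K-Means and complete-linkage clustering partition points by distance to a centroid or by cluster diameter, so each tends to cut both rings in half; DBSCAN follows regions of high density, and since the two rings are dense but separated by an empty gap wider than `eps`, it can recover them.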
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import os; import sys; sys.path.append('../')
import pandas as pd
import tqdm
import warnings
warnings.simplefilter(action='ignore', category=pd.errors.PerformanceWarning)
import socceraction.classification.features as fs
import socceraction.classification.labels as lab
## Configure file and folder names
datafolder = "../data"
spadl_h5 = os.path.join(datafolder,"spadl-statsbomb.h5")
features_h5 = os.path.join(datafolder,"features.h5")
labels_h5 = os.path.join(datafolder,"labels.h5")
predictions_h5 = os.path.join(datafolder,"predictions.h5")
games = pd.read_hdf(spadl_h5,"games")
games = games[games.competition_name == "FIFA World Cup"]
print("nb of games:", len(games))
# 1. Select feature set X
xfns = [
    fs.actiontype,
    fs.actiontype_onehot,
    # fs.bodypart,
    fs.bodypart_onehot,
    fs.result,
    fs.result_onehot,
    fs.goalscore,
    fs.startlocation,
    fs.endlocation,
    fs.movement,
    fs.space_delta,
    fs.startpolar,
    fs.endpolar,
    fs.team,
    # fs.time,
    fs.time_delta,
    # fs.actiontype_result_onehot
]
nb_prev_actions = 1
# generate the columns of the selected features
Xcols = fs.feature_column_names(xfns,nb_prev_actions)
X = []
for game_id in tqdm.tqdm(games.game_id, desc="selecting features"):
    Xi = pd.read_hdf(features_h5, f"game_{game_id}")
    X.append(Xi[Xcols])
X = pd.concat(X)
# 2. Select label Y
Ycols = ["scores","concedes"]
Y = []
for game_id in tqdm.tqdm(games.game_id, desc="selecting label"):
    Yi = pd.read_hdf(labels_h5, f"game_{game_id}")
    Y.append(Yi[Ycols])
Y = pd.concat(Y)
print("X:", list(X.columns))
print("Y:", list(Y.columns))
%%time
# 3. train classifiers F(X) = Y
import xgboost

Y_hat = pd.DataFrame()
models = {}
for col in list(Y.columns):
    model = xgboost.XGBClassifier()
    model.fit(X, Y[col])
    models[col] = model

from sklearn.metrics import brier_score_loss, roc_auc_score

# evaluate on the training data itself (no held-out split here)
Y_hat = pd.DataFrame()
for col in Y.columns:
    Y_hat[col] = [p[1] for p in models[col].predict_proba(X)]
    print(f"Y: {col}")
    print("  Brier score: %.4f" % brier_score_loss(Y[col], Y_hat[col]))
    print("  ROC AUC: %.4f" % roc_auc_score(Y[col], Y_hat[col]))
```
### Save predictions
```
# get rows with game id per action
A = []
for game_id in tqdm.tqdm(games.game_id, "loading game ids"):
    Ai = pd.read_hdf(spadl_h5, f"actions/game_{game_id}")
    A.append(Ai[["game_id"]])
A = pd.concat(A)
A = A.reset_index(drop=True)

# concatenate action game id rows with predictions and save per game
grouped_predictions = pd.concat([A, Y_hat], axis=1).groupby("game_id")
for k, df in tqdm.tqdm(grouped_predictions, desc="saving predictions per game"):
    df = df.reset_index(drop=True)
    df[Y_hat.columns].to_hdf(predictions_h5, f"game_{int(k)}")
```
<a href="https://colab.research.google.com/github/emadphysics/Divulging-electricity-consumption-patterns/blob/main/Consumption.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
import zipfile
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import sklearn
import math
import seaborn as sns
from matplotlib import rcParams
from datetime import date
from pandas.tseries.holiday import AbstractHolidayCalendar
import pandas as pd
plt.style.use('bmh')
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
```
# **Electricity loading**
```
drive.mount("/content/gdrive")
from zipfile import ZipFile
zip_file = ZipFile('/content/gdrive/My Drive/archive (1).zip')
dfs = {text_file.filename: pd.read_csv(zip_file.open(text_file.filename),
                                       usecols=['start', 'load'],
                                       parse_dates=['start'],
                                       index_col=['start'])
       for text_file in zip_file.infolist()
       if text_file.filename.endswith('.csv')}
data=pd.DataFrame(dfs['nl.csv'])
data=data.resample('H').sum()
data
data=data[(data.index>='2015-01-01')&(data.index<'2020-01-01')]
weekdays = {0: 'Monday', 1: 'Tuesday', 2: 'Wednesday', 3: 'Thursday',
            4: 'Friday', 5: 'Saturday', 6: 'Sunday'}
data['Dates'] = data.index.date
data['hour'] = data.index.hour
data['year'] = data.index.year
data['month'] = data.index.month
data['day'] = data.index.day
data['weekday'] = data.index.weekday.map(weekdays)
data.index.names = ['Date']
#data.index = data.index.map(lambda ts: ts.strftime("%Y-%m-%d"))
hour_dict = {'morning': list(np.arange(7, 13)),
             'afternoon': list(np.arange(13, 16)),
             'evening': list(np.arange(16, 22)),
             'night': [22, 23, 0, 1, 2, 3, 4, 5, 6]}

def daytime(x):
    if x in hour_dict['morning']:
        return 'morning'
    elif x in hour_dict['afternoon']:
        return 'afternoon'
    elif x in hour_dict['evening']:
        return 'evening'
    else:
        return 'night'

data['daytime'] = data['hour'].apply(daytime)
#!pip install workalendar
from pandas.tseries.holiday import *

# subclassing to build the Netherlands public holidays
# NOTE: the movable feasts (Easter, Ascension, Whitsun) are hard-coded
# to their 2020 dates here, so they only line up with that year
class Nl(AbstractHolidayCalendar):
    rules = [
        Holiday('New Years Day', month=1, day=1),
        Holiday('Good Friday', month=4, day=10),
        Holiday('Easter Sunday', month=4, day=12),
        Holiday('Easter Monday', month=4, day=13),
        Holiday('Liberation Day', month=5, day=5),
        Holiday('Ascension Day', month=5, day=21),
        Holiday('Whitsun', month=5, day=31),
        Holiday('Whit Monday', month=6, day=1),
        Holiday('Christmas Day', month=12, day=25),
        Holiday('Second Christmas Day', month=12, day=26)
    ]
cal = Nl()
holidays = cal.holidays(data.index.min(), data.index.max())
data['holiday'] = data.index.isin(holidays)
data['non_working'] = data.apply(
    lambda x: 'Yes' if ((x['holiday'] == 1) or (x['weekday'] in ['Saturday', 'Sunday']))
    else 'No', axis=1)
mapped = {True:1, False:0}
data['holiday'] = data['holiday'].map(mapped)
data.head()
data['load'].kurt(),data['load'].skew()
sns.displot(data,x='load',kind="kde",bw_adjust=4)
```
```
data['load'].describe()
red_square = dict(markerfacecolor='r', markeredgecolor='r', marker='.')
data['load'].plot(kind='box', xlim=(90000, 16000), vert=False, flierprops=red_square, figsize=(16,2))
```
Fortunately, it does not suffer from outliers.
```
def season_calc(month):
    """Map a month number to a season by calendar quarter
    (Jan-Mar: winter, Apr-Jun: spring, Jul-Sep: summer, Oct-Dec: autumn)."""
    if month in [1, 2, 3]:
        return "winter"
    elif month in [4, 5, 6]:
        return 'spring'
    elif month in [7, 8, 9]:
        return 'summer'
    else:
        return 'autumn'
data['Dates'] = pd.to_datetime(data['Dates'])
data['season'] = data['Dates'].dt.month.apply(season_calc)
data
```
# **Weather**
```
weather=pd.read_csv('/content/gdrive/My Drive/KNMI.txt',usecols=['YYYYMMDD','# STN',' TG', ' SP',' NG'],error_bad_lines=False,header=0,parse_dates=['YYYYMMDD'])
del weather['# STN']
weather.columns=['Date','temp','sunshine','cloudy']
weather.max()
weather.drop_duplicates(subset='Date',inplace=True)
weather.set_index('Date',inplace=True)
categorical = [var for var in weather.columns if weather[var].dtype=='O']
for t in categorical:
    weather[t] = pd.to_numeric(weather[t], errors='coerce')
weather['temp']=weather['temp']/10
weather.dropna(inplace=True)
weather=weather[(weather.index>='2015-01-01')&(weather.index<'2020-01-01')]
weather
for t in weather.columns:
    col_median = weather[t].median()
    weather[t] = weather[t].fillna(col_median)
weather['temp'].skew(),weather['sunshine'].skew(),weather['cloudy'].skew()
weather['temp'].kurt(),weather['sunshine'].kurt(),weather['cloudy'].kurt()
# cap outliers according to the nature of the distribution
class outlierkiller:
    def __init__(self, df, x):
        self.df = df
        self.x = x

    def nongausskiller(self):
        # cap using Tukey-style fences at 3x the interquartile range
        for t in self.x:
            IQR = self.df[t].quantile(0.75) - self.df[t].quantile(0.25)
            Lower_fence = self.df[t].quantile(0.25) - (IQR * 3)
            Upper_fence = self.df[t].quantile(0.75) + (IQR * 3)
            self.df.loc[self.df[t] > Upper_fence, t] = Upper_fence
            self.df.loc[self.df[t] < Lower_fence, t] = Lower_fence
        return self

    def gausskiller(self):
        # cap at mean +/- 3 standard deviations
        for t in self.x:
            Upper_boundary = self.df[t].mean() + 3 * self.df[t].std()
            Lower_boundary = self.df[t].mean() - 3 * self.df[t].std()
            self.df.loc[self.df[t] > Upper_boundary, t] = Upper_boundary
            self.df.loc[self.df[t] < Lower_boundary, t] = Lower_boundary
        return self
outlierkiller(weather,['cloudy','temp']).nongausskiller()
outlierkiller(weather,['sunshine']).gausskiller()
red_square = dict(markerfacecolor='r', markeredgecolor='r', marker='.')
weather['cloudy'].plot(kind='box', xlim=(-1, 15), vert=False, flierprops=red_square, figsize=(16,2))
red_square = dict(markerfacecolor='r', markeredgecolor='r', marker='.')
weather['temp'].plot(kind='box', xlim=(-10, 35), vert=False, flierprops=red_square, figsize=(16,2))
red_square = dict(markerfacecolor='r', markeredgecolor='r', marker='.')
weather['sunshine'].plot(kind='box', xlim=(-10, 200), vert=False, flierprops=red_square, figsize=(16,2))
weather,data
df=pd.merge(data, weather, left_index=True, right_index=True)
df[df.isna().any(axis = 1)].sum()
drive.mount('drive')
df.to_csv('Wload.csv')
!cp Wload.csv "drive/My Drive/"
```
# ***WEATHER hourly***
```
wdf=pd.read_csv('/content/gdrive/My Drive/NL_temp.csv',usecols=['utc_timestamp','NL_temperature'],error_bad_lines=False,header=0,parse_dates=['utc_timestamp'],index_col=['utc_timestamp'])
wdf.index.names=['Date']
wdf=wdf[(wdf.index>='2015-01-01')&(wdf.index<'2020-01-01')]
wdf.isnull().sum()
wdf['NL_temperature'].kurt(),wdf['NL_temperature'].skew()
red_square = dict(markerfacecolor='r', markeredgecolor='r', marker='.')
wdf['NL_temperature'].plot(kind='box', xlim=(-10, 49), vert=False, flierprops=red_square, figsize=(16,2))
# cap outliers according to the nature of the distribution
class outlierkiller:
    def __init__(self, df, x):
        self.df = df
        self.x = x

    def nongausskiller(self):
        # cap using Tukey-style fences at 3x the interquartile range
        for t in self.x:
            IQR = self.df[t].quantile(0.75) - self.df[t].quantile(0.25)
            Lower_fence = self.df[t].quantile(0.25) - (IQR * 3)
            Upper_fence = self.df[t].quantile(0.75) + (IQR * 3)
            self.df.loc[self.df[t] > Upper_fence, t] = Upper_fence
            self.df.loc[self.df[t] < Lower_fence, t] = Lower_fence
        return self

    def gausskiller(self):
        # cap at mean +/- 3 standard deviations
        for t in self.x:
            Upper_boundary = self.df[t].mean() + 3 * self.df[t].std()
            Lower_boundary = self.df[t].mean() - 3 * self.df[t].std()
            self.df.loc[self.df[t] > Upper_boundary, t] = Upper_boundary
            self.df.loc[self.df[t] < Lower_boundary, t] = Lower_boundary
        return self
outlierkiller(wdf,['NL_temperature']).gausskiller()
red_square = dict(markerfacecolor='r', markeredgecolor='r', marker='.')
wdf['NL_temperature'].plot(kind='box', xlim=(-14, 49), vert=False, flierprops=red_square, figsize=(16,2))
wdf.columns=['temp']
frame=pd.merge(data, wdf, left_index=True, right_index=True)
drive.mount('drive')
frame.to_csv('frame.csv')
!cp frame.csv "drive/My Drive/"
frame
```
# Beginner's Python—Session One Physics/Engineering Answers
## Planck Length
The Planck units are units defined in terms of 4 universal constants, namely the gravitational constant $G$, the (reduced) Planck constant $\hbar$, the speed of light $c$, and the Boltzmann constant $k_B$.
Go to https://bit.ly/3k0m3EK. Create three variables called ```G```, ```h_bar```, and ```c``` using the corresponding values given in Table 1 of the Wikipedia page, to *one* significant figure.
**NOTES:**
* You should write a number such as $3 \times 10^{-25}$ as ```3e-25``` here.
* The square root of a variable $a$ can be written ```a**0.5```.
```
G = 7e-11
h_bar = 1e-34
c = 3e8
```
The derived Planck unit of area is given by $A = \frac{\hbar G}{c^3}$.
* Calculate the Planck area using the above formula and the 3 variables you defined, and assign it to a
variable called ```area```.
* The Planck length is the square root of the Planck area. Calculate it using ```area```, and assign it to a variable called ```length```.
```
area = h_bar*G / c**3
length = area**0.5
```
Running the below cell will assign the variable ```length_precise``` to a more precise value of the Planck length.
```
length_precise = 1.616255e-35
```
* Calculate the difference between ```length_precise``` and ```length```, and assign it to the variable ```diff```.
* Calculate the ratio between ```diff``` and ```length_precise```. Assign this value to a variable called ```frac_diff```.
* Print ```frac_diff``` to five decimal places.
**NOTE:** You can use the function ```round()``` to round a float to a given number of decimal places. For example, ```round(x, 3)``` will return the float ```x``` rounded to three decimal places. More about other useful functions will be in later sessions.
```
diff = length_precise - length
frac_diff = diff / length_precise
print(round(frac_diff, 5))
```
Print the following phrase, replacing the ??? with the correct value, using the variables you've defined above:
_"The fractional difference, to 5 decimal places, between the calculated Planck length and the given one is ???. The value of G, to one significant figure, is ???."_
**NOTE:** By default, `print` will place spaces between each input. If we wish to have the numbers output next to the full stops without a space, we'll need to use string concatenation. For example, `str(42) + "."` will give `42.`. Notice that we have to convert the number to a string first using `str`, else we will receive a type error.
```
print("The fractional difference, to 5 decimal places,",
      "between the calculated Planck length and the given one is",
      str(round(frac_diff, 5)) + ".",
      "The value of G, to one significant figure, is",
      str(G) + ".")
```
What are the variable types of ```c``` and ```area```? Print both below to check.
```
print(type(c))
print(type(area))
```
* Print the two variables below.
What happens when you try to convert them to a different variable type?
* Try to convert both to ```int``` and print their new values. Is there any loss of information? Think about what has happened here.
```
print(c)
print(area)
print(int(c))
print(int(area))
```
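As a hint for the last question, here is a minimal illustration (the second value is our own example, chosen to be close in magnitude to `area`) of why the conversion loses information:

```python
# int() truncates a float toward zero, so any magnitude below 1 collapses to 0
print(int(2.5))      # 2 -- the fractional part is simply dropped
print(int(2.6e-70))  # 0 -- roughly the Planck area's magnitude
```

So `int(c)` survives intact, while `int(area)` loses everything: the entire value was in the fractional part.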
## Resistance Training
In a circuit, three resistors are connected in parallel. They each have a resistance, labelled ```R1```, ```R2```, and ```R3```. Create variables for these resistances, taking values $2.5$, $3$ and $4.5$.
```
R1 = 2.5
R2 = 3
R3 = 4.5
```
The total resistance is given by,
$$R = \frac{R_1R_2R_3}{R_1R_2+R_1R_3+R_2R_3}$$.
* Assign to a new variable, ```R```, the total resistance due to the three resistances from the first part.
* Print ```R```, rounded to five decimal places (see above).
* Print the variable type of ```R```.
```
R = R1 * R2 * R3 / (R1 * R2 + R1 * R3 + R2 * R3)
print(round(R, 5))
print(type(R))
```
In the box below, print the following phrase, with the ??? replaced with the correct values.
You should use expressions involving ```R1```, ```R2```, ```R3```, and/or ```R```, and return these values as ints, and **NOT** floats, hence the "approximately".
_"The product of the three resistances is around ???, which is greater than their sum, which is around ???. The total resistance when they are connected in parallel is approximately ???."_
```
print("The product of the three resistances is around",
      int(R1 * R2 * R3),
      "which is greater than their sum, which is around",
      str(int(R1 + R2 + R3)) + ".",
      "The total resistance when they are connected in parallel is approximately",
      int(R), "Ohm.")
```
# SQL Reference
## Functions
### Unary Functions
[Docs](https://docs.blazingdb.com/docs/unary-ops)
#### FLOOR
Returns the largest integer value that is smaller than or equal to a number.
```sql
SELECT FLOOR(column_A) FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('select FLOOR(trip_distance), trip_distance from taxi')
```
#### CEILING
Returns the smallest integer value that is larger than or equal to a number.
```sql
SELECT CEILING(column_A) FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('select CEILING(trip_distance), trip_distance from taxi')
```
#### SIN
Returns the trigonometric sine of a number, interpreted as radians.
```sql
SELECT SIN(column_A) FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('select SIN(trip_distance) from taxi')
```
#### COS
Returns the trigonometric cosine of a number, interpreted as radians.
```sql
SELECT COS(column_A) FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('select COS(trip_distance) from taxi')
```
#### TAN
Returns the trigonometric tangent of a number, interpreted as radians.
```sql
SELECT TAN(column_A) FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('select TAN(trip_distance) from taxi')
```
#### ASIN
Returns the arcsine (inverse sine) of a number, in radians.
```sql
SELECT ASIN(column_A) FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('select ASIN(trip_distance) from taxi')
```
#### ACOS
Returns the arccosine (inverse cosine) of a number, in radians.
```sql
SELECT ACOS(column_A) FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('select ACOS(trip_distance) from taxi')
```
#### ATAN
Returns the arctangent (inverse tangent) of a number, in radians.
```sql
SELECT ATAN(column_A) FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('select ATAN(trip_distance) from taxi')
```
#### ABSOLUTE (ABS)
Returns the absolute value of a number.
```sql
SELECT ABS(column_A) FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('select ABS(dropoff_x) from taxi')
```
#### LN
Returns the natural (base-e) logarithm of a number.
```sql
SELECT LN(column_A) FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('select LN(trip_distance) from taxi')
```
#### LOG
Returns the base-10 logarithm of a number.
```sql
SELECT LOG10(column_A) FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('select LOG10(trip_distance) from taxi')
```
#### RAND
Returns a pseudo-random number between 0 and 1.
```sql
SELECT column_A, RAND() FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('SELECT trip_distance, RAND() AS r FROM taxi')
```
#### ROUND
Rounds a number to the nearest integer.
```sql
SELECT ROUND(column_A) FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('SELECT ROUND(trip_distance) FROM taxi')
```
#### SQRT
Returns the square root of a number.
```sql
SELECT SQRT(column_A) FROM table_A
```
```
from blazingsql import BlazingContext
bc = BlazingContext()
bc.create_table('taxi', '../../../../data/sample_taxi.parquet')
bc.sql('SELECT SQRT(trip_distance) FROM taxi')
```
# BlazingSQL Docs
**[Table of Contents](../../TABLE_OF_CONTENTS.ipynb) | [Issues (GitHub)](https://github.com/BlazingDB/Welcome_to_BlazingSQL_Notebooks/issues)**
<img src='img/header.png'>
[Hyper Parameter Optimization](https://en.wikipedia.org/wiki/Hyperparameter_optimization) (HPO) improves model quality by searching over hyperparameters, parameters not typically learned during the training process but rather values that control the learning process itself (e.g., model size/capacity). This search can significantly boost model quality relative to default settings and non-expert tuning; however, HPO can take a very long time on a non-accelerated platform. In this notebook, we containerize a RAPIDS workflow and run Bring-Your-Own-Container SageMaker HPO to show how we can overcome the computational complexity of model search.
We accelerate HPO in two key ways:
* by *scaling within a node* (e.g., multi-GPU where each GPU brings a magnitude higher core count relative to CPUs), and
* by *scaling across nodes* and running parallel trials on cloud instances.
By combining these two powers HPO experiments that feel unapproachable and may take multiple days on CPU instances can complete in just hours. For example, we find a <span style="color:#8735fb; font-size:14pt"> **12x** </span> speedup in wall clock time (6 hours vs 3+ days) and a <span style="color:#8735fb; font-size:14pt"> **4.5x** </span> reduction in cost when comparing between GPU and CPU [EC2 Spot instances](https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html) on 100 XGBoost HPO trials using 10 parallel workers on 10 years of the Airline Dataset (~63M flights) hosted in a S3 bucket. For additional details refer to the <a href='#experiments'>end of the notebook</a>.
With all these powerful tools at our disposal, every data scientist should feel empowered to up-level their model before serving it to the world!
<img src='img/hpo.png'>
<span style="color:#8735fb; font-size:22pt"> **Preamble** </span>
To get things rolling let's make sure we can query our AWS SageMaker execution role and session as well as our account ID and AWS region.
```
import sagemaker
from helper_functions import *
execution_role = sagemaker.get_execution_role()
session = sagemaker.Session()
account=!(aws sts get-caller-identity --query Account --output text)
region=!(aws configure get region)
account, region
```
<span style="color:#8735fb; font-size:22pt"> **Key Choices** </span>
Let's go ahead and choose the configuration options for our HPO run.
Below are two reference configurations showing a small and a large scale HPO (sized in terms of total experiments/compute).
The default values in the notebook are set for the small HPO configuration; however, you are welcome to scale them up.
> **small HPO**: 1_year, XGBoost, 3 CV folds, singleGPU, max_jobs = 10, max_parallel_jobs = 2
> **large HPO**: 10_year, XGBoost, 10 CV folds, multiGPU, max_jobs = 100, max_parallel_jobs = 10
<span style="color:#8735fb; font-size:18pt"> [ Dataset ] </span>
We offer free hosting for several demo datasets that you can try running HPO with, or alternatively you can bring your own dataset (BYOD).
By default we leverage the `Airline` dataset, which is a large public tracker of US domestic flight logs which we offer in various sizes (1 year, 3 year, and 10 year) and in <a href='https://parquet.apache.org/'>Parquet</a> (compressed column storage) format. The machine learning objective with this dataset is to predict whether flights will be more than 15 minutes late arriving to their destination ([dataset link](https://www.transtats.bts.gov/DatabaseInfo.asp?DB_ID=120&DB_URL=), additional details in <a href='#dataset'>Section 1.1</a>).
As an alternative we also offer the `NYC Taxi` dataset, which captures yellow cab trip details in New York in January 2020, stored in <a href='https://en.wikipedia.org/wiki/Comma-separated_values'>CSV</a> format without any compression. The machine learning objective with this dataset is to predict whether a trip had an above-average tip (>$2.20).
We host the demo datasets in public S3 demo buckets in both the **us-east-1** (N. Virginia) or **us-west-2** (Oregon) regions (i.e., `sagemaker-rapids-hpo-us-east-1`, and `sagemaker-rapids-hpo-us-west-2`). You should run the SageMaker HPO workflow in either of these two regions if you wish to leverage the demo datasets since SageMaker requires that the S3 dataset and the compute you'll be renting are co-located.
Lastly, if you plan to use your own dataset refer to the <a href='#byod'>BYOD checklist in the Appendix</a> to help integrate into the workflow.
| dataset | data_bucket | dataset_directory | # samples | storage type | time span |
|---|---|---|---|---|---|
| Airline Stats Small | demo | 1_year | 6.3M | Parquet | 2019 |
| Airline Stats Medium | demo | 3_year | 18M | Parquet | 2019-2017 |
| Airline Stats Large | demo | 10_year | 63M | Parquet | 2019-2010 |
| NYC Taxi | demo | NYC_taxi | 6.3M | CSV | 2020 January |
| Bring Your Own Dataset | custom | custom | custom | Parquet/CSV | custom |
```
import os

# please choose dataset S3 bucket and directory
data_bucket = "sagemaker-rapids-hpo-" + region[0]
dataset_directory = "3_year"  # '1_year', '3_year', '10_year', 'NYC_taxi'

# please choose output bucket for trained model(s)
model_output_bucket = session.default_bucket()

s3_data_input = f"s3://{data_bucket}/{dataset_directory}"
s3_model_output = f"s3://{model_output_bucket}/trained-models"

best_hpo_model_local_save_directory = os.getcwd()
```
<span style="color:#8735fb; font-size:18pt"> [ Algorithm ] </span>
From a ML/algorithm perspective, we offer [XGBoost](https://xgboost.readthedocs.io/en/latest/#) and [RandomForest](https://docs.rapids.ai/api/cuml/stable/cuml_blogs.html#tree-and-forest-models) decision tree models which do quite well on this structured dataset. You are free to switch between these two algorithm choices and everything in the example will continue to work.
```
# please choose learning algorithm
algorithm_choice = "XGBoost"
assert algorithm_choice in ["XGBoost", "RandomForest"]
```
We can also optionally increase robustness via reshuffles of the train-test split (i.e., [cross-validation folds](https://scikit-learn.org/stable/modules/cross_validation.html)). Typical values here are between 3 and 10 folds.
```
# please choose cross-validation folds
cv_folds = 3
assert cv_folds >= 1
```
<span style="color:#8735fb; font-size:18pt"> [ ML Workflow Compute Choice ] </span>
We enable the option of running different code variations that unlock increasing amounts of parallelism in the compute workflow.
* `singleCPU` = [pandas](https://pandas.pydata.org/) + [sklearn](https://scikit-learn.org/stable/)
* `multiCPU` = [dask](https://dask.org/) + [pandas](https://pandas.pydata.org/) + [sklearn](https://scikit-learn.org/stable/)
* <span style="color:#8735fb; font-size:14pt"> RAPIDS </span> `singleGPU` = [cudf](https://github.com/rapidsai/cudf) + [cuml](https://github.com/rapidsai/cuml)
* <span style="color:#8735fb; font-size:14pt"> RAPIDS </span> `multiGPU` = [dask](https://dask.org/) + [cudf](https://github.com/rapidsai/cudf) + [cuml](https://github.com/rapidsai/cuml)
All of these code paths are available in the `/code/workflows` directory for your reference.
> **Note:** the single-CPU option will leverage multiple cores in the model training portion of the workflow; however, to unlock full parallelism in each stage of the workflow we use [Dask](https://dask.org/).
```
# please choose code variant
ml_workflow_choice = "singleGPU"
assert ml_workflow_choice in ["singleCPU", "singleGPU", "multiCPU", "multiGPU"]
```
<span style="color:#8735fb; font-size:18pt"> [ Search Ranges and Strategy ] </span>
<a id='strategy-and-param-ranges'></a>
One of the most important choices when running HPO is to choose the bounds of the hyperparameter search process. Below we've set the ranges of the hyperparameters to allow for interesting variation, you are of course welcome to revise these ranges based on domain knowledge especially if you plan to plug in your own dataset.
> Note that we support additional algorithm specific parameters (refer to the `parse_hyper_parameter_inputs` function in `HPOConfig.py`), but for demo purposes have limited our choice to the three parameters that overlap between the XGBoost and RandomForest algorithms. For more details see the documentation for [XGBoost parameters](https://xgboost.readthedocs.io/en/latest/parameter.html) and [RandomForest parameters](https://docs.rapids.ai/api/cuml/stable/api.html#random-forest).
```
# please choose HPO search ranges
hyperparameter_ranges = {
"max_depth": sagemaker.parameter.IntegerParameter(5, 15),
"n_estimators": sagemaker.parameter.IntegerParameter(100, 500),
"max_features": sagemaker.parameter.ContinuousParameter(0.1, 1.0),
} # see note above for adding additional parameters
if "XGBoost" in algorithm_choice:
    # the number-of-trees parameter is named differently in XGBoost and RandomForest
    hyperparameter_ranges["num_boost_round"] = hyperparameter_ranges.pop("n_estimators")
```
We can also choose between a Random and Bayesian search strategy for picking parameter combinations.
**Random Search**: Choose a random combination of values from within the ranges for each training job it launches. The choice of hyperparameters doesn't depend on previous results so you can run the maximum number of concurrent workers without affecting the performance of the search.
**Bayesian Search**: Make a guess about which hyperparameter combinations are likely to get the best results. After testing the first set of hyperparameter values, hyperparameter tuning uses regression to choose the next set of hyperparameter values to test.
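To illustrate what makes the Random strategy embarrassingly parallel, here is a toy sketch (our own, using only the standard library) that samples candidates from ranges mirroring `hyperparameter_ranges` above — SageMaker performs this sampling internally:

```python
import random

def random_candidate(rng):
    # each trial is drawn independently of all previous results,
    # which is why random search parallelizes perfectly
    return {
        "max_depth": rng.randint(5, 15),
        "num_boost_round": rng.randint(100, 500),
        "max_features": rng.uniform(0.1, 1.0),
    }

rng = random.Random(42)
for trial in range(3):
    print(trial, random_candidate(rng))
```

A Bayesian strategy, by contrast, conditions each new draw on previously observed scores, so high parallelism reduces how much information each new candidate can exploit.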
```
# please choose HPO search strategy
search_strategy = "Random"
assert search_strategy in ["Random", "Bayesian"]
```
<span style="color:#8735fb; font-size:18pt"> [ Experiment Scale ] </span>
We also need to decide how many total experiments to run, and how many should run in parallel. Below we have a very conservative number of maximum jobs to run so that you don't accidentally spawn large computations when starting out; however, for meaningful HPO searches this number should be much higher (e.g., in our experiments we often run 100 max_jobs). Note that you may need to request a [quota limit increase](https://docs.aws.amazon.com/general/latest/gr/sagemaker.html) for additional `max_parallel_jobs` parallel workers.
```
# please choose total number of HPO experiments [we have set this number very low to allow for automated CI testing]
max_jobs = 2
# please choose number of experiments that can run in parallel
max_parallel_jobs = 2
```
Let's also set the max duration for an individual job to 24 hours so we don't have run-away compute jobs taking too long.
```
max_duration_of_experiment_seconds = 60 * 60 * 24
```
<span style="color:#8735fb; font-size:18pt"> [ Compute Platform ] </span>
Based on the dataset size and compute choice we will try to recommend an instance type; you are of course welcome to select alternate configurations.
> e.g., For the 10_year dataset option, we suggest ml.p3.8xlarge instances (4 GPUs) and ml.m5.24xlarge CPU instances ( we will need upwards of 200GB CPU RAM during model training).
```
# we will recommend a compute instance type, feel free to modify
instance_type = recommend_instance_type(ml_workflow_choice, dataset_directory)
```
In addition to choosing our instance type, we can also enable significant savings by leveraging [AWS EC2 Spot Instances](https://aws.amazon.com/ec2/spot/).
We **highly recommend** that you set this flag to `True` as it typically leads to 60-70% cost savings. Note, however that you may need to request a [quota limit increase](https://docs.aws.amazon.com/general/latest/gr/sagemaker.html) to enable Spot instances in SageMaker.
```
# please choose whether spot instances should be used
use_spot_instances_flag = True
```
<span style="color:#8735fb; font-size:22pt"> **Validate** </span>
```
summarize_choices(
s3_data_input,
s3_model_output,
ml_workflow_choice,
algorithm_choice,
cv_folds,
instance_type,
use_spot_instances_flag,
search_strategy,
max_jobs,
max_parallel_jobs,
max_duration_of_experiment_seconds,
)
```
<span style="display: block; text-align: center; color:#8735fb; font-size:30pt"> **1. ML Workflow** </span>
<img src='img/ml_workflow.png' width='800'>
<span style="color:#8735fb; font-size:20pt"> 1.1 - Dataset </span>
<a id ='dataset'></a>
The default settings for this demo are built to utilize the Airline dataset (Carrier On-Time Performance 1987-2020, available from the [Bureau of Transportation Statistics](https://transtats.bts.gov/Tables.asp?DB_ID=120&DB_Name=Airline%20On-Time%20Performance%20Data&DB_Short_Name=On-Time#)). Below are some additional details about this dataset; we plan to offer a companion notebook that does a deep dive on the data science behind it. Note that if you are using an alternate dataset (e.g., NYC Taxi or BYOData) these details are not relevant.
The public dataset contains logs/features about flights in the United States (17 airlines) including:
* Locations and distance ( `Origin`, `Dest`, `Distance` )
* Airline / carrier ( `Reporting_Airline` )
* Scheduled departure and arrival times ( `CRSDepTime` and `CRSArrTime` )
* Actual departure and arrival times ( `DepTime` and `ArrTime` )
* Difference between scheduled & actual times ( `ArrDelay` and `DepDelay` )
* Binary encoded version of late, aka our target variable ( `ArrDelay15` )
Using these features we will build a classifier model to predict whether a flight is going to be more than 15 minutes late on arrival as it prepares to depart.
<span style="color:#8735fb; font-size:20pt"> 1.2 - Python ML Workflow </span>
To build a RAPIDS-enabled SageMaker HPO, we first need to build a [SageMaker Estimator](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html). An Estimator is a container image that captures all the software needed to run an HPO experiment. The container is augmented with entrypoint code that will be triggered at runtime by each worker. The entrypoint code enables us to write custom models and hook them up to data.
In order to work with SageMaker HPO, the entrypoint logic should parse hyperparameters (supplied by AWS SageMaker), load and split data, build and train a model, score/evaluate the trained model, and emit an output representing the final score for the given hyperparameter setting. We've already built multiple variations of this code.
If you would like to make changes by adding your custom model logic, feel free to modify **train.py** and/or the specific workflow files in the `code/workflows` directory. You are also welcome to uncomment the cells below to read/review the code.
First, let's switch our working directory to the location of the Estimator entrypoint and library code.
```
%cd code
```
```
# %load train.py
# %load workflows/MLWorkflowSingleGPU.py
```
<span style="display: block; text-align: center; color:#8735fb; font-size:30pt"> **2. Build Estimator** </span>
<img src='img/estimator.png' width='800'>
As we've already mentioned, the SageMaker Estimator represents the containerized software stack that AWS SageMaker will replicate to each worker node.
The first step in building our Estimator is to augment a RAPIDS container with our ML workflow code from above and push this image to Amazon Elastic Container Registry (ECR) so it is available to SageMaker.
<span style="color:#8735fb; font-size:20pt"> 2.1 - Containerize and Push to ECR </span>
Now let's turn to building our container so that it can integrate with the AWS SageMaker HPO API.
Our container can be built on top of either the latest RAPIDS nightly image or the RAPIDS stable image as its base layer.
```
rapids_stable = "rapidsai/rapidsai:21.06-cuda11.0-base-ubuntu18.04-py3.7"
rapids_nightly = "rapidsai/rapidsai-nightly:21.08-cuda11.0-base-ubuntu18.04-py3.7"
rapids_base_container = rapids_stable
assert rapids_base_container in [rapids_stable, rapids_nightly]
```
Let's also decide on the full name of our container.
```
image_base = "cloud-ml-sagemaker"
image_tag = rapids_base_container.split(":")[1]
ecr_fullname = f"{account[0]}.dkr.ecr.{region[0]}.amazonaws.com/{image_base}:{image_tag}"
ecr_fullname
```
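To make the naming scheme concrete, here is a purely illustrative example with a made-up account id and region (the real values come from the `account` and `region` variables queried earlier):

```python
# Purely illustrative: with a hypothetical account id and region, the full ECR
# image name resolves to the standard <account>.dkr.ecr.<region>.amazonaws.com form.
account_id, region_name = "123456789012", "us-east-1"
image_base = "cloud-ml-sagemaker"
image_tag = "21.06-cuda11.0-base-ubuntu18.04-py3.7"
example_fullname = f"{account_id}.dkr.ecr.{region_name}.amazonaws.com/{image_base}:{image_tag}"
# → 123456789012.dkr.ecr.us-east-1.amazonaws.com/cloud-ml-sagemaker:21.06-cuda11.0-base-ubuntu18.04-py3.7
```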
<span style="color:#8735fb; font-size:18pt"> 2.1.1 - Write Dockerfile </span>
We write out the Dockerfile to disk, and in a few cells execute the docker build command.
Let's now write our selected RAPIDS image layer as the first FROM statement in the Dockerfile.
```
with open("Dockerfile", "w") as dockerfile:
dockerfile.writelines(
f"FROM {rapids_base_container} \n\n"
f'ENV DATASET_DIRECTORY="{dataset_directory}"\n'
f'ENV ALGORITHM_CHOICE="{algorithm_choice}"\n'
f'ENV ML_WORKFLOW_CHOICE="{ml_workflow_choice}"\n'
f'ENV CV_FOLDS="{cv_folds}"\n'
)
```
Next, let's append the remaining pieces of the Dockerfile, namely adding the sagemaker-training-toolkit, flask, and dask-ml, and copying our Python code.
```
%%writefile -a Dockerfile
# ensure printed output/log-messages retain correct order
ENV PYTHONUNBUFFERED=True
# add sagemaker-training-toolkit [ requires build tools ], flask [ serving ], and dask-ml
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
&& source activate rapids && pip3 install sagemaker-training \
&& conda install -c anaconda flask \
&& conda install -c conda-forge dask-ml
# path where SageMaker looks for code when container runs in the cloud
ENV CLOUD_PATH="/opt/ml/code"
# copy our latest [local] code into the container
COPY . $CLOUD_PATH
# make the entrypoint script executable
RUN chmod +x $CLOUD_PATH/entrypoint.sh
WORKDIR $CLOUD_PATH
ENTRYPOINT ["./entrypoint.sh"]
```
Lastly, let's ensure that our Dockerfile correctly captured our base image selection.
```
validate_dockerfile(rapids_base_container)
!cat Dockerfile
```
<span style="color:#8735fb; font-size:18pt"> 2.1.2 - Build and Tag </span>
The build step will be dominated by the download of the RAPIDS image (base layer). If it's already been downloaded the build will take less than 1 minute.
```
!docker pull $rapids_base_container
```
```
%%time
!docker build . -t $ecr_fullname -f Dockerfile
```
<span style="color:#8735fb; font-size:18pt"> 2.1.3 - Publish to Elastic Container Registry (ECR) </span>
Now that we've built and tagged our container, it's time to push it to Amazon's container registry (ECR). Once in ECR, AWS SageMaker will be able to leverage our image to build Estimators and run experiments.
Docker Login to ECR
```
docker_login_str = !(aws ecr get-login --region {region[0]} --no-include-email)
!{docker_login_str[0]}
```
Create the ECR repository [ if it doesn't already exist ]
```
repository_query = !(aws ecr describe-repositories --repository-names $image_base)
if repository_query[0] == '':
!(aws ecr create-repository --repository-name $image_base)
```
Let's now actually push the container to ECR
> Note the first push to ECR may take some time (hopefully less than 10 minutes).
```
!docker push $ecr_fullname
```
<span style="color:#8735fb; font-size:20pt"> 2.2 - Create Estimator </span>
Having built our container [ +custom logic ] and pushed it to ECR, we can finally combine all of our efforts into an Estimator instance.
```
# 'volume_size' - EBS volume size in GB, default = 30
estimator_params = {
"image_uri": ecr_fullname,
"role": execution_role,
"instance_type": instance_type,
"instance_count": 1,
"input_mode": "File",
"output_path": s3_model_output,
"use_spot_instances": use_spot_instances_flag,
"max_run": max_duration_of_experiment_seconds, # 24 hours
"sagemaker_session": session,
}
if use_spot_instances_flag:
estimator_params.update({"max_wait": max_duration_of_experiment_seconds + 1})
estimator = sagemaker.estimator.Estimator(**estimator_params)
```
<span style="color:#8735fb; font-size:20pt"> 2.3 - Test Estimator </span>
Now we are ready to test by asking SageMaker to run the BYOContainer logic inside our Estimator. This is a useful step if you've made changes to your custom logic and are interested in making sure everything works before launching a large HPO search.
> Note: This verification step will use the default hyperparameter values declared in our custom train code, as SageMaker HPO will not be orchestrating a search for this single run.
```
summarize_choices(
s3_data_input,
s3_model_output,
ml_workflow_choice,
algorithm_choice,
cv_folds,
instance_type,
use_spot_instances_flag,
search_strategy,
max_jobs,
max_parallel_jobs,
max_duration_of_experiment_seconds,
)
job_name = new_job_name_from_config(
dataset_directory, region, ml_workflow_choice, algorithm_choice, cv_folds, instance_type
)
estimator.fit(inputs=s3_data_input, job_name=job_name.lower())
```
<span style="display: block; text-align: center; color:#8735fb; font-size:30pt"> **3. Run HPO** </span>
With a working SageMaker Estimator in hand, the hardest part is behind us. In the key choices section we <a href='#strategy-and-param-ranges'>already defined our search strategy and hyperparameter ranges</a>, so all that remains is to choose a metric to evaluate performance on. For more documentation check out the AWS SageMaker [Hyperparameter Tuner documentation](https://sagemaker.readthedocs.io/en/stable/tuner.html).
<img src='img/run_hpo.png'>
<span style="color:#8735fb; font-size:20pt"> 3.1 - Define Metric </span>
We focus on a single metric, which we call 'final-score', that captures the accuracy of our model on test data unseen during training. You are of course welcome to add additional metrics; see the [AWS SageMaker documentation on Metrics](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-define-metrics.html). When defining a metric we provide a regular expression (i.e., a string parsing rule) to extract the key metric from the output of each Estimator/worker.
```
metric_definitions = [{"Name": "final-score", "Regex": "final-score: (.*);"}]
objective_metric_name = "final-score"
```
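For example, the entrypoint only needs to print a log line matching this pattern for the regex to capture the score (the value shown is made up):

```python
import re

# Illustrative only: the training entrypoint is assumed to emit a line of this
# exact shape; SageMaker applies the regex to each worker's log output.
log_line = "final-score: 0.9873;"
match = re.search(r"final-score: (.*);", log_line)
score = float(match.group(1))  # → 0.9873
```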
<span style="color:#8735fb; font-size:20pt"> 3.2 - Define Tuner </span>
Finally we put all of the elements we've been building up together into a HyperparameterTuner declaration.
```
hpo = sagemaker.tuner.HyperparameterTuner(
estimator=estimator,
metric_definitions=metric_definitions,
objective_metric_name=objective_metric_name,
objective_type="Maximize",
hyperparameter_ranges=hyperparameter_ranges,
strategy=search_strategy,
max_jobs=max_jobs,
max_parallel_jobs=max_parallel_jobs,
)
```
<span style="color:#8735fb; font-size:20pt"> 3.3 - Run HPO </span>
```
summarize_choices(
s3_data_input,
s3_model_output,
ml_workflow_choice,
algorithm_choice,
cv_folds,
instance_type,
use_spot_instances_flag,
search_strategy,
max_jobs,
max_parallel_jobs,
max_duration_of_experiment_seconds,
)
```
Let's take a moment to confirm our choices before launching all of our HPO experiments. Depending on your configuration options, running this cell can kick off a massive amount of computation!
> Once this process begins, we recommend that you use the SageMaker UI to keep track of the <a href='../img/gpu_hpo_100x10.png'>health of the HPO process and the individual workers</a>.
```
tuning_job_name = new_job_name_from_config(
dataset_directory, region, ml_workflow_choice, algorithm_choice, cv_folds, instance_type
)
hpo.fit(inputs=s3_data_input, job_name=tuning_job_name, wait=True, logs="All")
hpo.wait() # block until the .fit call above is completed
```
<span style="color:#8735fb; font-size:20pt"> 3.4 - Results and Summary </span>
Once your job is complete there are multiple ways to analyze the results.
Below we display the performance of the best job, as well as printing each HPO trial/job as a row of a dataframe.
```
hpo_results = summarize_hpo_results(tuning_job_name)
sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name).dataframe()
```
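Conceptually, identifying the winner is just a max over the trials' objective values; the sketch below uses made-up trial names and scores to illustrate, independent of the SageMaker analytics API.

```python
# Illustrative only: each HPO trial reports a final objective value, and the
# best job is simply the trial with the maximum score. Names are made up.
trials = [
    {"job": "trial-1", "final_score": 0.91},
    {"job": "trial-2", "final_score": 0.95},
    {"job": "trial-3", "final_score": 0.89},
]
best = max(trials, key=lambda t: t["final_score"])  # → trial-2
```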
For a more in-depth look at the HPO process, we invite you to check out the <a href='https://github.com/awslabs/amazon-sagemaker-examples/tree/master/hyperparameter_tuning/analyze_results'>HPO_Analyze_TuningJob_Results.ipynb</a> notebook, which shows how we can explore interesting things like the <a href='img/results_analysis.png'>impact of each individual hyperparameter on the performance metric</a>.
<span style="color:#8735fb; font-size:20pt"> 3.5 - Getting the best Model </span>
Next let's download the best trained model from our HPO runs.
```
local_filename, s3_path_to_best_model = download_best_model(
model_output_bucket, s3_model_output, hpo_results, best_hpo_model_local_save_directory
)
```
<span style="color:#8735fb; font-size:20pt"> 3.6 - Model Serving </span>
With your best model in hand, you can now move on to [serving this model on SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-deployment.html).
In the example below we show you how to build a [RealTimePredictor](https://sagemaker.readthedocs.io/en/stable/api/inference/predictors.html) using the best model found during the HPO search. We will add a lightweight Flask server to our RAPIDS Estimator (a.k.a., container) which will handle the incoming requests and pass them along to the trained model for inference. If you are curious about how this works under the hood check out the [Use Your Own Inference Server](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html) documentation and reference the code in `code/serve.py`.
If you are interested in additional serving options (e.g., large batch with batch-transform), we plan to add a companion notebook that will provide additional details.
<span style="color:#8735fb; font-size:18pt"> 3.6.1 - GPU serving </span>
```
endpoint_model = sagemaker.model.Model(
image_uri=ecr_fullname, role=execution_role, model_data=s3_path_to_best_model
)
DEMO_SERVING_FLAG = False
if DEMO_SERVING_FLAG:
endpoint_model.deploy(initial_instance_count=1, instance_type="ml.p3.2xlarge")
if DEMO_SERVING_FLAG:
predictor = sagemaker.predictor.Predictor(
endpoint_name=str(endpoint_model.endpoint_name), sagemaker_session=session
)
""" Below we've compiled examples to sanity test the trained model performance on the Airline dataset.
The first is from a real flight that was nine minutes early in departing,
the second example is from a flight that was 123 minutes late in departing.
"""
if dataset_directory in ["1_year", "3_year", "10_year"]:
on_time_example = [
2019.0,
4.0,
12.0,
2.0,
3647.0,
20452.0,
30977.0,
33244.0,
1943.0,
-9.0,
0.0,
75.0,
491.0,
]
late_example = [
2018.0,
3.0,
9.0,
5.0,
2279.0,
20409.0,
30721.0,
31703.0,
733.0,
123.0,
1.0,
61.0,
200.0,
]
example_payload = str(list([on_time_example, late_example]))
else:
example_payload = "" # fill in a sample payload
print(example_payload)
predictor.predict(example_payload)
```
Once we are finished with the serving example, we should be sure to clean up and delete the endpoint.
```
if DEMO_SERVING_FLAG:
predictor.delete_endpoint()
```
<span style="color:#8735fb; font-size:25pt"> **Summary** </span>
We've now successfully built a RAPIDS ML workflow, containerized it (as a SageMaker Estimator), and launched a set of HPO experiments to find the best hyperparameters for our model.
If you are curious to go further, we invite you to plug in your own dataset and tweak the configuration settings to find your champion model!
**HPO Experiment Details**
<a id='experiments'></a>
As mentioned in the introduction, we find a <span style="color:#8735fb; font-size:14pt"> **12x** </span> speedup in wall-clock time and a <span style="color:#8735fb; font-size:14pt"> **4.5x** </span> reduction in cost when comparing GPU and CPU instances on 100 HPO trials using 10 parallel workers on 10 years of the Airline Dataset (~63M flights). In these experiments we used the XGBoost algorithm with a multi-GPU vs. multi-CPU Dask cluster and 10 cross-validation folds. Below we offer a table with additional details.
<img src='img/results.png' width='70%'>
In the case of the CPU runs, 12 jobs were stopped since they exceeded the 24 hour limit we set. <a href='img/cpu_hpo_100x10.png'>CPU Job Summary Image</a>
In the case of the GPU runs, no jobs were stopped. <a href='img/gpu_hpo_100x10.png'>GPU Job Summary Image</a>
Note that in both cases 1 job failed because a spot instance was terminated. But 1 failed job out of 100 is a minimal tradeoff for the significant cost savings.
<span style="display: block; color:#8735fb; font-size:25pt"> **Appendix: Bring Your Own Dataset Checklist** </span>
<a id ='byod'></a>
If you plan to use your own dataset (BYOD) here is a checklist to help you integrate into the workflow:
> - [ ] Dataset should be in either CSV or Parquet format.
> - [ ] Dataset is already pre-processed (and all feature-engineering is done).
> - [ ] Dataset is uploaded to S3 and `data_bucket` and `dataset_directory` have been set to the location of your data.
> - [ ] Dataset feature and target columns have been enumerated in `/code/HPODataset.py`
<span style="color:#8735fb; font-size:25pt"> **Rapids References** </span>
> [cloud-ml-examples](http://github.com/rapidsai/cloud-ml-examples)
> [RAPIDS HPO](https://rapids.ai/hpo)
> [cuML Documentation](https://docs.rapids.ai/api/cuml/stable/)
<span style="color:#8735fb; font-size:25pt"> **SageMaker References** </span>
> [SageMaker Training Toolkit](https://github.com/aws/sagemaker-training-toolkit)
> [Estimator Parameters](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html)
> Spot Instances [docs](https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html), and [blog]()
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Azure Machine Learning Pipelines with Data Dependency
In this notebook, we will see how we can build a pipeline with an implicit data dependency.
## Prerequisites and Azure Machine Learning Basics
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc.
### Azure Machine Learning and Pipeline SDK-specific Imports
```
import azureml.core
from azureml.core import Workspace, Experiment, Datastore
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
from azureml.widgets import RunDetails
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
from azureml.data.data_reference import DataReference
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import PythonScriptStep
print("Pipeline SDK-specific imports completed")
```
### Initialize Workspace
Initialize a [workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace(class%29) object from persisted configuration.
```
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
# Default datastore (Azure file storage)
def_file_store = ws.get_default_datastore()
print("Default datastore's name: {}".format(def_file_store.name))
def_blob_store = Datastore(ws, "workspaceblobstore")
print("Blobstore's name: {}".format(def_blob_store.name))
# source directory
source_directory = '.'
print('Sample scripts will be created in {} directory.'.format(source_directory))
```
### Required data and script files for the tutorial
Sample files required to finish this tutorial are already copied to the project folder specified above. Even though the .py files provided in the samples don't contain much "ML work," as a data scientist you will work on such scripts extensively as part of your job. The contents of these files are not important for completing this tutorial; the one-line files are for demonstration purposes only.
### Compute Targets
See the list of Compute Targets on the workspace.
```
cts = ws.compute_targets
for ct in cts:
print(ct)
```
#### Retrieve or create an AmlCompute target
Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's get the default Aml Compute in the current workspace. We will then run the training script on this compute target.
```
aml_compute = ws.get_default_compute_target("CPU")
# For a more detailed view of current Azure Machine Learning Compute status, use get_status()
# example: un-comment the following line.
# print(aml_compute.get_status().serialize())
```
**Wait for this call to finish before proceeding (you will see the asterisk turning to a number).**
Now that you have a compute target, let's see what the workspace's `compute_targets` property returns. You should see one entry named 'amlcompute' of type AmlCompute.
## Building Pipeline Steps with Inputs and Outputs
As mentioned earlier, a step in the pipeline can take data as input. This data can be a data source that lives in one of the accessible data locations, or intermediate data produced by a previous step in the pipeline.
### Datasources
A datasource is represented by a **[DataReference](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.data_reference.datareference?view=azure-ml-py)** object and points to data that lives in, or is accessible from, a Datastore. A DataReference can point to a file or a directory.
```
# Reference the data uploaded to blob storage using DataReference
# Assign the datasource to blob_input_data variable
# DataReference(datastore,
# data_reference_name=None,
# path_on_datastore=None,
# mode='mount',
# path_on_compute=None,
# overwrite=False)
blob_input_data = DataReference(
datastore=def_blob_store,
data_reference_name="test_data",
path_on_datastore="20newsgroups/20news.pkl")
print("DataReference object created")
```
### Intermediate/Output Data
Intermediate data (or output of a Step) is represented by **[PipelineData](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata?view=azure-ml-py)** object. PipelineData can be produced by one step and consumed in another step by providing the PipelineData object as an output of one step and the input of one or more steps.
#### Constructing PipelineData
- **name:** [*Required*] Name of the data item within the pipeline graph
- **datastore_name:** Name of the Datastore to write this output to
- **output_name:** Name of the output
- **output_mode:** Specifies "upload" or "mount" modes for producing output (default: mount)
- **output_path_on_compute:** For "upload" mode, the path to which the module writes this output during execution
- **output_overwrite:** Flag to overwrite pre-existing data
```
# Define intermediate data using PipelineData
# Syntax
# PipelineData(name,
# datastore=None,
# output_name=None,
# output_mode='mount',
# output_path_on_compute=None,
# output_overwrite=None,
# data_type=None,
# is_directory=None)
# Naming the intermediate data as processed_data1 and assigning it to the variable processed_data1.
processed_data1 = PipelineData("processed_data1",datastore=def_blob_store)
print("PipelineData object created")
```
### Pipelines steps using datasources and intermediate data
Machine learning pipelines can have many steps and these steps could use or reuse datasources and intermediate data. Here's how we construct such a pipeline:
#### Define a Step that consumes a datasource and produces intermediate data.
In this step, we define a step that consumes a datasource and produces intermediate data.
**Open `train.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.**
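For reference, a minimal sketch of the argument parsing such a `train.py` would need: the argument names match those passed to the PythonScriptStep, while the example paths below are made up, and the actual training logic is omitted.

```python
import argparse

# Hedged sketch of train.py's argument handling: the pipeline passes the
# mounted locations of the DataReference input and PipelineData output.
parser = argparse.ArgumentParser()
parser.add_argument("--input_data", type=str,
                    help="mounted path of the input DataReference")
parser.add_argument("--output_train", type=str,
                    help="mounted path where the PipelineData output is written")
# In the cloud the arguments come from sys.argv; here we pass sample values.
args = parser.parse_args(["--input_data", "/mnt/input",
                          "--output_train", "/mnt/output"])
```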
#### Specify conda dependencies and a base docker image through a RunConfiguration
This step uses a docker image and scikit-learn, use a [**RunConfiguration**](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.runconfiguration?view=azure-ml-py) to specify these requirements and use when creating the PythonScriptStep.
```
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import DEFAULT_CPU_IMAGE
# create a new runconfig object
run_config = RunConfiguration()
# enable Docker
run_config.environment.docker.enabled = True
# set Docker base image to the default CPU-based image
run_config.environment.docker.base_image = DEFAULT_CPU_IMAGE
# use conda_dependencies.yml to create a conda environment in the Docker image for execution
run_config.environment.python.user_managed_dependencies = False
# specify CondaDependencies obj
run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])
# step4 consumes the datasource (Datareference) in the previous step
# and produces processed_data1
trainStep = PythonScriptStep(
script_name="train.py",
arguments=["--input_data", blob_input_data, "--output_train", processed_data1],
inputs=[blob_input_data],
outputs=[processed_data1],
compute_target=aml_compute,
source_directory=source_directory,
runconfig=run_config
)
print("trainStep created")
```
#### Define a Step that consumes intermediate data and produces intermediate data
In this step, we define a step that consumes an intermediate data and produces intermediate data.
**Open `extract.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.**
```
# step5 to use the intermediate data produced by step4
# This step also produces an output processed_data2
processed_data2 = PipelineData("processed_data2", datastore=def_blob_store)
extractStep = PythonScriptStep(
script_name="extract.py",
arguments=["--input_extract", processed_data1, "--output_extract", processed_data2],
inputs=[processed_data1],
outputs=[processed_data2],
compute_target=aml_compute,
source_directory=source_directory)
print("extractStep created")
```
#### Define a Step that consumes intermediate data and existing data and produces intermediate data
In this step, we define a step that consumes multiple data types and produces intermediate data.
This step uses the output generated from the previous step as well as existing data on a DataStore. The location of the existing data is specified using a [**PipelineParameter**](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelineparameter?view=azure-ml-py) and a [**DataPath**](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.datapath.datapath?view=azure-ml-py). Using a PipelineParameter enables easy modification of the data location when the Pipeline is published and resubmitted.
**Open `compare.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.**
```
# Reference the data uploaded to blob storage using a PipelineParameter and a DataPath
from azureml.pipeline.core import PipelineParameter
from azureml.data.datapath import DataPath, DataPathComputeBinding
datapath = DataPath(datastore=def_blob_store, path_on_datastore='20newsgroups/20news.pkl')
datapath_param = PipelineParameter(name="compare_data", default_value=datapath)
data_parameter1 = (datapath_param, DataPathComputeBinding(mode='mount'))
# Now define the compare step which takes two inputs and produces an output
processed_data3 = PipelineData("processed_data3", datastore=def_blob_store)
compareStep = PythonScriptStep(
script_name="compare.py",
arguments=["--compare_data1", data_parameter1, "--compare_data2", processed_data2, "--output_compare", processed_data3],
inputs=[data_parameter1, processed_data2],
outputs=[processed_data3],
compute_target=aml_compute,
source_directory=source_directory)
print("compareStep created")
```
#### Build the pipeline
```
pipeline1 = Pipeline(workspace=ws, steps=[compareStep])
print ("Pipeline is built")
pipeline_run1 = Experiment(ws, 'Data_dependency').submit(pipeline1)
print("Pipeline is submitted for execution")
RunDetails(pipeline_run1).show()
```
# Next: Publishing the Pipeline and calling it from the REST endpoint
See this [notebook](./aml-pipelines-publish-and-run-using-rest-endpoint.ipynb) to understand how the pipeline is published and you can call the REST endpoint to run the pipeline.
# Vibration System Objects
[Download This Notebook](http://vibrationtoolbox.github.io/vibration_toolbox/_downloads/vibesystem_notebook.ipynb)
```
import numpy as np
import vibration_toolbox as vtb
%matplotlib notebook
```
This notebook will introduce the class VibeSystem(), which is available in the vibration_toolbox.
As an example we will use the following 3 degree of freedom system:

If we look at the help for the VibeSystem class we have:
```
Parameters
----------
M : array
Mass matrix.
C : array
Damping matrix.
K : array
Stiffness matrix.
name : str, optional
Name of the system.
```
So we need the mass, stiffness, and damping matrices to create a VibeSystem object (the name of the system is optional and can be used later to help when we plot results).
For this system the kinetic energy is:
\begin{equation}
T = \frac{1}{2}[m_0\dot{q_0}(t)^2 + m_1\dot{q_1}(t)^2 + m_2\dot{q_2}(t)^2] = \frac{1}{2}{\bf \dot{q}}^T(t)\ M \ {\bf \dot{q}}(t)
\end{equation}
where ${\bf q(t)} = [q_0(t) \ q_1(t) \ q_2(t)]^T$ is the configuration vector. The mass matrix for this system is given by:
\begin{equation}
M =
\begin{bmatrix}
m_0 & 0 & 0\\
0 & m_1 & 0 \\
0 & 0 & m_2
\end{bmatrix}
\end{equation}
The potential energy is:
\begin{equation}
V = \frac{1}{2}[k_0 q_0(t)^2 + k_1(q_1(t) - q_0(t))^2 + k_2 (q_2(t) - q_1(t))^2]
\end{equation}
\begin{equation}
= \frac{1}{2}[(k_0+k_1)q_0(t)^2 + (k_1+k_2)q_1(t)^2 + k_2 q_2(t)^2 - 2k_1 q_0(t)q_1(t) - 2k_2 q_1(t)q_2(t)] \\
\end{equation}
\begin{equation}
= \frac{1}{2}{\bf {q}}^T(t)\ K \ {\bf {q}}(t)
\end{equation}
And the stiffness matrix is:
\begin{equation}
K =
\begin{bmatrix}
k_0 +k_1 & -k_1 & 0\\
-k_1 & k_1+k_2 & -k_2 \\
0 & -k_2 & k_2
\end{bmatrix}
\end{equation}
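As a quick numerical sanity check (with illustrative values for the stiffnesses and displacements), the quadratic form $\frac{1}{2}{\bf q}^T K {\bf q}$ reproduces the spring potential energy written in its per-spring form:

```python
import numpy as np

# Illustrative check: (1/2) q^T K q equals the sum of the spring energies
# (1/2)[k0*q0^2 + k1*(q1-q0)^2 + k2*(q2-q1)^2] for any configuration q.
k0, k1, k2 = 1600, 1600, 1600
K = np.array([[k0 + k1, -k1, 0],
              [-k1, k1 + k2, -k2],
              [0, -k2, k2]])
q = np.array([0.1, -0.2, 0.3])
quadratic_form = 0.5 * q @ K @ q
direct = 0.5 * (k0 * q[0]**2 + k1 * (q[1] - q[0])**2 + k2 * (q[2] - q[1])**2)
```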
In this case we will consider proportional damping: $C = \alpha M + \beta K$.
Let's consider the following values for our system:
```
m0, m1, m2 = (1, 1, 1)
k0, k1, k2 = (1600, 1600, 1600)
alpha, beta = 1e-3, 1e-3
```
Now we use numpy to create our matrices:
```
M = np.array([[m0, 0, 0],
[0, m1, 0],
[0, 0, m2]])
K = np.array([[k0+k1, -k1, 0],
[-k1, k1+k2, -k2],
[0, -k2, k2]])
C = alpha*M + beta*K
sys = vtb.VibeSystem(M, C, K, name='3 dof system')
```
Now we have everything needed to create a vibration system object:
Now, if we type **```sys.```** and press ```tab``` we can see the system's attributes and methods that are available.
As an example we can get the natural frequencies:
```
sys.wn
```
Or the damped natural frequencies:
```
sys.wd
```
We can also check the frequency response for specific input output pairs:
```
ax = sys.plot_freq_response(0, 0)
```
M, C and K can be changed and the system will be updated:
```
sys.C = 20*C
sys.wd
ax = sys.plot_freq_response(0, 0)
sys.C = C
axs = sys.plot_freq_response_grid([0, 1], [0, 1])
```
For the plot functions, the axes that will be used can be provided. This helps when we want to show two different results on the same plot.
This is illustrated below by first plotting the frequency response using only the first two modes:
```python
ax = sys.plot_freq_response(0, 0, modes=[0, 1],
color='r', linestyle='--',
label='Modes 0 and 1')
```
Notice that the axes with magnitude and phase were returned and assigned to the name `ax`.
Now we pass `ax` to plot the frequency response with all the modes (the default when `modes` is not provided).
```python
ax = sys.plot_freq_response(0, 0, ax0=ax[0], ax1=ax[1],
color='b', label='All modes')
```
We can check the results below:
```
ax = sys.plot_freq_response(0, 0, modes=[0, 1],
color='r', linestyle='--',
label='Modes 0 and 1')
ax = sys.plot_freq_response(0, 0, ax0=ax[0], ax1=ax[1],
color='b', label='All modes')
ax[1].legend();
```
The time response for the system can also be plotted. For that we need to define a time array and the force that will be applied during that time. The force should be given as an array where each row corresponds to a period of time and each column to where the force is being applied.
```
t = np.linspace(0, 25, 1000)
# force array with len(t) rows and len(inputs) columns
F1 = np.zeros((len(t), 3))
# in this case we apply the force only to the mass m1
F1[:, 1] = 1000*np.sin(40*t)
axs = sys.plot_time_response(F1, t)
```
We can use the plot_fft function available in the toolbox to see the FFT from the time response.
```
t, yout, xout = sys.time_response(F1, t)
ax = vtb.plot_fft(t, time_response=yout[:, 0])
```
# Importing TensorFlow
```
import tensorflow as tf
from tensorflow.keras import layers
print(tf.__version__)
print(tf.keras.__version__)
```
# Data input: tf.data.Dataset.from_tensor_slices
```
import numpy as np
import tensorflow as tf
train_x = np.zeros((1000, 28, 28))
train_y = np.zeros((1000, 10))
dataset = tf.data.Dataset.from_tensor_slices((train_x, train_y)).shuffle(20).repeat(2).batch(512)
for x, y in dataset:
    print(x.shape, y.shape)
```
# Data input: TFRecord files (tf.data.TFRecordDataset)
```
import numpy as np
import tensorflow as tf
tfrecord_filename = './train.tfrecord'
class encode_and_write:
    def __init__(self):
        self.feature_dict = {
            'ndarray': self._ndarray_feature,
            'bytes': self._bytes_feature,
            'float': self._float_feature,
            'double': self._float_feature,
            'bool': self._int64_feature,
            'enum': self._int64_feature,
            'int': self._int64_feature,
            'uint': self._int64_feature
        }

    def _ndarray_feature(self, value):
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value.tobytes()]))

    def _bytes_feature(self, value):
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

    def _float_feature(self, value):
        return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))

    def _int64_feature(self, value):
        return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

    def _encode_example(self, example):
        """Creates a tf.train.Example message ready to be written to a file."""
        feature = {}
        for vname in example:
            vtype = type(example[vname]).__name__
            feature[vname] = self.feature_dict[vtype](example[vname])
        # Create a Features message using tf.train.Example.
        example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
        return example_proto.SerializeToString()

    def run(self, filename, datasets):
        with tf.io.TFRecordWriter(filename) as writer:
            for vdata in datasets:
                example = self._encode_example(vdata)
                writer.write(example)

class datasets_stream:
    def __iter__(self):
        self.cnt = 1000
        self.idx = 0
        return self

    def __next__(self):
        if self.idx < self.cnt:
            self.idx += 1
            return {"image": np.zeros((64, 64), np.uint8), "label": 0}
        else:
            raise StopIteration

encode_and_write().run(tfrecord_filename, iter(datasets_stream()))
class read_and_decode:
    def __init__(self):
        self.feature_description_dict = {
            'ndarray': self._bytes_feature_description,
            'bytes': self._bytes_feature_description,
            'float': self._float_feature_description,
            'double': self._float_feature_description,
            'bool': self._int64_feature_description,
            'enum': self._int64_feature_description,
            'int': self._int64_feature_description,
            'uint': self._int64_feature_description
        }

    def _bytes_feature_description(self):
        return tf.io.FixedLenFeature([], tf.string)

    def _float_feature_description(self):
        return tf.io.FixedLenFeature([], tf.float32)

    def _int64_feature_description(self):
        return tf.io.FixedLenFeature([], tf.int64)

    def _decode_example(self, e, example):
        res = []
        for vname in example:
            vtype = type(example[vname]).__name__
            if vtype == "ndarray":
                # Raw bytes are decoded with the original dtype, then reshaped.
                res.append(tf.reshape(tf.io.decode_raw(e[vname], {
                    'float32': tf.float32,
                    'float64': tf.float64,
                    'int32': tf.int32,
                    'uint16': tf.uint16,
                    'uint8': tf.uint8,
                    'int16': tf.int16,
                    'int8': tf.int8,
                    'int64': tf.int64
                }[str(example[vname].dtype)]), example[vname].shape))
            else:
                res.append(tf.cast(e[vname], {
                    'float': tf.float32,
                    'int': tf.int32
                }[vtype]))
        return res

    def run(self, filename, example):
        reader = tf.data.TFRecordDataset(filename)
        feature_description = {}
        for vname in example:
            vtype = type(example[vname]).__name__
            feature_description[vname] = self.feature_description_dict[vtype]()
        reader = reader.map(lambda e: tf.io.parse_single_example(e, feature_description))
        reader = reader.map(lambda e: self._decode_example(e, example))
        return reader

#tfrecord_filename = tf.io.gfile.glob(os.path.join(ds_path, 'records/*.tfrec'))#records/train*.tfrec
reader = read_and_decode().run(tfrecord_filename, {"image": np.zeros((64, 64), np.uint8), "label": 0})
batch = reader.shuffle(20).repeat(1).batch(512)
for x, y in batch:
    print(x.shape, y.shape)
```
# Data input: tf.data.Dataset.from_generator
```
import numpy as np
import tensorflow as tf
def our_generator():
    for i in range(10):
        x = np.random.rand(28, 28)
        y = np.random.randint(1, 10, size=1)
        yield x, y
dataset = tf.data.Dataset.from_generator(our_generator, (tf.float32, tf.int16))
batch = dataset.shuffle(20).repeat(2).batch(512)
for x, y in batch:
    print(x.shape, y.shape)
# (20, 28, 28) (20, 1)  -- 10 generator samples, repeated twice, in a single batch
```
# Default data augmentation
```
import numpy as np
import tensorflow as tf
train_x = np.zeros((1000, 28, 28, 1))
train_y = np.zeros((1000, 10))
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True)
datagen.fit(train_x)
batch = datagen.flow(train_x, train_y, batch_size=512)
for x, y in batch:
    print(x.shape, y.shape)
    break  # datagen.flow yields batches indefinitely, so stop after one
```
# Custom data augmentation
```
import numpy as np
import tensorflow as tf
train_x = np.zeros((1000, 28, 28, 1), dtype = np.float32)
train_y = np.zeros((1000, 10), dtype = np.float32)
def mixup(img_batch, label_batch):
    batch_size = tf.shape(img_batch)[0]
    weight = tf.random.uniform([batch_size])
    x_weight = tf.reshape(weight, [batch_size, 1, 1, 1])
    y_weight = tf.reshape(weight, [batch_size, 1])
    index = tf.random.shuffle(tf.range(batch_size, dtype=tf.int32))
    x1, x2 = img_batch, tf.gather(img_batch, index)
    img_batch = x1 * x_weight + x2 * (1. - x_weight)
    y1, y2 = label_batch, tf.gather(label_batch, index)
    label_batch = y1 * y_weight + y2 * (1. - y_weight)
    return img_batch, label_batch

batch = tf.data.Dataset.from_tensor_slices((train_x, train_y)).shuffle(20).repeat(2).batch(512)
batch = batch.map(lambda a, b: mixup(a, b))
for x, y in batch:
    print(x.shape, y.shape)
```
# Case Study 2 - Predicting Hospital Readmittance
__Team Members:__ Amber Clark, Andrew Leppla, Jorge Olmos, Paritosh Rai
# Content
* [Business Understanding](#business-understanding)
- [Objective](#objective)
- [Introduction](#introduction)
- [EDA](#eda)
- [Methods](#methods)
- [Train and Test Data Split](#train)
- [Missing Values](#missing-values)
- [Other Data Cleanup](#other)
- [Logistic Regression Modeling](#logistic)
- [Results](#results)
- [Conclusion](#conclusion)
* [Data Evaluation](#data-evaluation)
- [Loading Data](#loading-data)
- [Id Mappings](#idmappings)
- [Diabetes Dataset](#dataset)
- [Data Summary](#data-summary)
- [Remapping Data](#remapping)
- [Missing Values](#missing-values)
- [Feature Removal](#feature-removal)
- [Exploratory Data Analysis (EDA)](#eda)
- [Assumptions](#assumptions)
* [Model Preparations](#model-preparations)
- [Modeling Method Proposed](#modelmethod)
- [Model Appropriateness](#approp)
- [Evaluation Metrics](#metrics)
- [Model Usefulness](#usefulness)
- [Split dependent and Independant Variables](#split)
- [Data Imputation](#imputation)
- [Sampling & Scaling Data](#scaling)
- [Prepare for Modeling](#prepare)
* [Model Building & Evaluations](#model-building)
- [Sampling Methodology](#sampling-methodology)
- [Model Setup](#modelsetup)
- [Baseline Model](#baseline-model)
- [Final Model](#final-model)
- [Model's Performance Analysis](#performance-analysis)
* [Model Interpretability & Explainability](#model-explanation)
- [Baseline Model](#baseline)
- [Final Model](#finalmodel)
* [Conclusion](#conclusion)
- [Final Model Proposal](#final-model-proposal)
- [Future Considerations and Model Enhancements](#model-enhancements)
- [Alternative Modeling Approaches](#alternative-modeling-approaches)
# Business Understanding & Executive Summary <a id='business-understanding'/>
### Objective <a id='objective'/>
This case study involves analyzing a ten-year study tracking diabetic patient readmission into hospitals. The goal of the analysis is to predict when diabetic patients are likely to be readmitted to a hospital based on the available data and determine if any of the provided factors are particularly indicative of a high chance of readmission.
### Introduction <a id='introduction'/>
Diabetes is a common disease that affects the body's ability to regulate blood sugar levels naturally. According to the American Heart Association [1], there are two main types of diabetes which involve issues with insulin. This hormone regulates how cells absorb glucose from the bloodstream. Type 1 diabetes describes the chronic form that is usually identified at a young age in which the body is unable to produce sufficient insulin. Type 2 diabetes, the most common condition, can arise later in life and occurs when the body develops an "insulin resistance" or its insulin production begins to diminish.
Complications involving the effects of diabetes on the heart and circulatory system can lead to dire conditions that require hospitalization. Often patients are released from the hospital but may have to be readmitted soon after with recurring issues, causing additional strain on both the patients' livelihoods and the efficiency of the hospital.
The data provided for this case study is requisitioned from a ten-year study on hospital readmissions of diabetic patients. This analysis aims to predict when diabetic patients are likely to be readmitted to the hospital after a visit given the provided information and if that readmission will occur in less than 30 days. Based on a predictive model, the most influential indicators of probable readmissions can be identified so that medical professionals can make better judgments on whether a diabetic patient is ready to be released. Since diabetes is estimated to affect 463 million people worldwide [2], any improvement in this regard would have far-reaching significant benefits.
### EDA <a id='eda'/>
Two data CSV files are used for the analysis, "diabetic_data.csv" and "IDs_mapping.csv". The "diabetic_data" dataset is based on the multi-year study of diabetes patients, and each row represents an individual's hospital visit. The patients' information is systematically collected from the point of entry to that of discharge. It can be broadly classified into a few categories:
Personal information
- The admission situations and conditions
- The laboratory tests conducted
- The physician's diagnosis
- The treatments and medications
- The discharge conditions
This dataset contains 50 features and 101,766 rows, including the response variable readmission information. In the diabetic_data file, the team identified trends and missing data. Following are a few observations:
- <b>The response variable is unbalanced with an 89:11 split between "not readmitted within 30 days" vs. "readmitted within 30 days".</b> This imbalance will be addressed throughout this case study.
- The imbalance was observed to be very similar in the male and female samples.
- Pair plots of continuous variables did not show any obvious correlations or outliers, but the variables were right-skewed, zero-inflated, and/or discrete. See the full EDA section below. The team decided to proceed without transformation or recoding.
The race variable was identified as a potential ethical concern and a sensitive discussion topic. Hospitals are typically required by state law to collect demographic information like race and ethnicity, and the 1964 Civil Rights Act allows hospitals to collect this data to ensure there is no discrimination in care. However, including race could result in a model that reinforces negative racial biases in healthcare.
- Data showed more samples for the Caucasian race vs. other races. However, the ratio of the patient taking diabetic medication vs. total patients within the race is very similar (range from 74% to 80%).
The team looked at data in the "IDs_mapping.csv" file. This file contains the IDs for multiple categorical variables and a brief description of each. This is explored further in Missing Values.
### Methods <a id='methods'/>
The objective of our analysis is to classify diabetes patients who have the highest risk of being readmitted to the hospital within 30 days using logistic regression modeling. L2 (Ridge) regularization was used to prevent overfitting without reducing model performance. The team observed that L2 runs faster and more efficiently than L1 on this dataset, which has a large number of dummy-coded variables.
The metrics used to evaluate model performance were precision and recall. Precision and recall are useful metrics when the classes are imbalanced like in this dataset. These metrics are defined as follows:
- Recall = TP/(TP+FN)
- Precision = TP/(TP+FP)
Where:
- True Positive (TP) is "patients correctly predicted to be readmitted within 30 days". These predictions would help keep a patient in the hospital longer for observation rather than discharging them, because they would be at risk to be readmitted within 30 days.
- False Negative (FN) is "patients <b>in</b>correctly predicted to <b>not</b> be readmitted within 30 days". These predictions would likely discharge a patient too soon, resulting in readmittance within 30 days.
- False Positive (FP) is "patients <b>in</b>correctly predicted to be readmitted within 30 days". These predictions would likely keep people in the hospital that don't need to be there and are at low risk for readmittance within 30 days.
Both Recall and Precision focus the modeling on True Positives, but Recall is penalized by False Negatives whereas Precision is penalized by False Positives. A scatterplot of Precision vs. Recall (a PR Curve) can be used to tune the False Negatives vs. False Positive for any given model.
In the best interest of the patient, the model would maximize Recall such that hospital readmission is prevented at all costs. However, the resulting lower precision (higher false positive rate) would be costly. It could lead to a shortage of beds or staff in the hospital, keeping more patients that are fine and that wouldn't be readmitted if discharged. To balance both precision and recall, the team used the F1 score. The F1 score combines precision and recall into a single metric by taking their harmonic mean:
F1 = 2 * Recall * Precision / (Recall + Precision)
By maximizing the F1 score, the model balances both recall and precision for the positive class.
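With hypothetical confusion-matrix counts (chosen only for illustration, roughly matching the final model's reported recall and precision), these formulas give:

```python
# Hypothetical counts for illustration only.
tp, fp, fn = 40, 180, 45

recall = tp / (tp + fn)                             # 40 / 85  ~ 0.47
precision = tp / (tp + fp)                          # 40 / 220 ~ 0.18
f1 = 2 * recall * precision / (recall + precision)  # harmonic mean ~ 0.26
print(f"recall={recall:.2f} precision={precision:.2f} f1={f1:.2f}")
```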
### Train and Test Data Split <a id='train'/>
The response variable is unbalanced, with 11% of patients readmitted within 30 days and 89% not readmitted within 30 days. Stratified data splitting was used to ensure that these two classes are evenly represented in the training and test sets, using a 70/30 split. The dataset was split into training and test sets before imputation to avoid leaking test information into the training set. The continuous variables were scaled to ensure high-magnitude variables were not influencing the outcome more than low-magnitude variables.
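A minimal sketch of such a stratified 70/30 split on toy data with the same 89:11 imbalance (`train_test_split` with `stratify` is one way; the notebook's imports also include `StratifiedShuffleSplit`):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-in for the real data: 89% negative, 11% positive.
X = np.arange(1000).reshape(-1, 1)
y = np.array([0] * 890 + [1] * 110)

# stratify=y preserves the 89:11 class ratio in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)

print(y_train.mean(), y_test.mean())  # both close to 0.11
```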
## Missing Values <a id='missing-values'>
No missing values were observed in any continuous features or the response variable. Missing values, entered as "?", were observed in several categorical features (see table below). Missing values were also observed in the ID_Mapping level descriptions, entered as NULL, Not Available, Unknown/Invalid, and Not Mapped. NaN replaced all of these missing values in both the diabetic_data and ID_Mapping datasets, and then they were merged for analysis.
The team investigated whether these missing values had any internal correlation or dependency. ***A heatmap did not show any obvious overlap or correlation with the missing values.***
The diagnosis codes in columns diag_1, diag_2, and diag_3 reference International Classification of Diseases (ICD-9) codes. The hierarchy of these codes was used to reduce the number of category levels from 700-800 each to just 19 each. This significantly reduced the number of dummy variables and made imputation much simpler. See the Remapping Data code below for more details.
https://www.aapc.com/codes/icd9-codes-range/
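The remapping idea can be sketched as follows; the function name and the (truncated) chapter ranges here are illustrative, not the notebook's actual mapping:

```python
# Illustrative sketch: collapse a raw ICD-9 code to a coarse chapter label.
# Only two numeric ranges are shown; the real mapping covers all 19 chapters.
def icd9_chapter(code: str) -> str:
    if code.startswith(("E", "V")):
        return "external/supplementary"
    n = float(code)
    if 1 <= n < 140:
        return "infectious"
    if 140 <= n < 240:
        return "neoplasms"
    return "other"  # remaining chapter ranges omitted in this sketch

print(icd9_chapter("008"), icd9_chapter("162"), icd9_chapter("V45"))
```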
Following are the variables with missing values and the methodology the team used to mitigate the missing values:
| Missing Variable | # of missing values | Mitigation Methodology |
|----------------------------------|---------------------|------------------------------------------------------------|
| weight | 98,569 | ~97% of data was missing so variable was dropped. |
| race | 2,273 | Impute by Mode |
| payer_code | 40,256 | Impute by Mode |
| medical_specialty | 49,949 | Group by admission_type_id_new_mapping then Impute by Mode |
| disharge_disposition_new_mapping | 4,680 | Impute by Mode |
| admission_source_new_mapping | 7,067 | Impute by Mode |
| diag_1 | 21 | Recategorize and Impute by Mode |
| diag_2 | 358 | Recategorize and Impute by Mode |
| diag_3 | 2,423 | Recategorize and Impute by Mode |
The team decided to fill the missing values of race by the mode, Caucasian. The occurrence of missing information for race is relatively low, so there shouldn't be much concern for inadvertently creating any artificial trends in the model. Refer to the EDA section above for more information on this.
KNN Imputation Methodology: The team also looked at KNN imputation to address missing values, as there was no perfect correlation observed among missing variables. The team found that KNNImputer in sklearn only offers Euclidean distance, which is not appropriate for the number of categorical variables in this dataset. The team therefore dropped KNN imputation and followed the imputation approach listed above.
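A minimal sketch of mode imputation with scikit-learn's `SimpleImputer` (the toy column here stands in for e.g. `race`):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy categorical column with missing values already converted to NaN.
col = pd.DataFrame({"race": ["Caucasian", "Caucasian", np.nan,
                             "AfricanAmerican", "Caucasian", np.nan]})

# strategy='most_frequent' replaces NaN with the column mode.
filled = SimpleImputer(strategy="most_frequent").fit_transform(col)
print(filled.ravel())
```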
### Other Data Cleanup <a id='other'/>
The team reviewed the data in detail after addressing the missing values. Features that are not meaningful such as encounter_id, patient_nbr, examide, and citoglipton are removed from the analysis. encounter_id and patient_nbr are row id codes and will not add value to the model. examide and citoglipton variables have all zeros.
Additionally, we removed rows with discharge_disposition_id = "expired", which indicates the patient is deceased and thus cannot be readmitted. The team assumes the end users don't need a model to predict this outcome.
|discharge_disposition|readmit|count|
|----|----|----|
|expired|False|1652|
|expired|True|0|
### Logistic Regression Modeling <a id='logistic'/>
Logistic regression is a classification algorithm used to predict a binary outcome from a set of independent variables. It applies when the target variable is dichotomous, i.e., it falls into one of two categories (such as "yes" or "no", "pass" or "fail", 0 or 1). The response variable in this analysis is dichotomous, so the team used logistic regression to analyze the diabetic data.
Logistic Regression models were built using L2 (Ridge) regularization and weighting the log loss function by the target class (class_weight = "balanced"). The team used this weighting option to deal with the imbalance of the target class variable. This option balanced how the model predicted probabilities for the negative and positive classes such that the default threshold of 0.5 would predict more of the positive class. Using this approach, the F1 score was maximized by tuning the regularization penalty (C) and the threshold.
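A sketch of this model setup on synthetic imbalanced data (the data and the `C` value here are illustrative; `C` and the threshold were tuned in the actual study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic imbalanced data standing in for the prepared diabetic dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 1.2).astype(int)  # minority positives

# L2 penalty is sklearn's default; class_weight='balanced' reweights the
# log loss by inverse class frequency so the minority class is not ignored.
clf = LogisticRegression(penalty="l2", C=1.0, class_weight="balanced",
                         max_iter=1000).fit(X, y)
print(clf.score(X, y))
```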
Baseline Model:
There were no missing values in the 8 continuous variables, so the team built the baseline model using continuous variables only. The team used this base model to see how the data looked without dummy coding or imputation, and to assess the value that multi-level categorical variables add in more complex models.
Final Model:
The final Logistic Regression model was made using all of the variables. This dataset was built after recoding the categorical values and imputing any missing data.
### Results <a id='results'/>
The baseline model had the following metrics:
| Group | Precision | Recall | f1-score | accuracy |
|----------|-----------|--------|----------|----------|
| Training | 0.19 | 0.39 | 0.26 | 0.74 |
| Test | 0.19 | 0.38 | 0.25 | 0.74 |
The baseline model performance was consistent between the training and test sets.
The baseline model had a <b>recall of 38-39%</b> and precision of 19% (f1-score = 25-26%).
<br>
The final model had the following metrics:
| Group | Precision | Recall | f1-score | accuracy |
|----------|-----------|--------|----------|----------|
| Training | 0.18 | 0.47 | 0.26 | 0.70 |
| Test | 0.18 | 0.48 | 0.27 | 0.71 |
The final model performance was consistent between the training and test sets.
The final model had a <b>recall of 47-48%</b> and precision of 18% (f1-score = 26-27%). This was a roughly 10-percentage-point improvement in recall over the baseline (with a 3-4 point drop in accuracy). The final model accurately predicts approximately 1 in 2 patients who will be readmitted within 30 days.
### Model Interpretation
The three most important factors in the final model were all negatively correlated with readmittance within 30 days:
- Otolaryngology (a.k.a. Ears, Nose, and Throat)
- Gynecology
- Diagnoses for "Complications in Child Birth" (K)
The three most important factors in the final model that were positively correlated with readmittance within 30 days:
- Allergy and Immunology
- Hematology
- Discharge Disposition = "Admitted"
Race was not in the top 15 most important factors and could likely be removed from the model if needed.
Input from domain knowledge experts is needed to assess why these variables were the most important and negatively correlated with the response.
Variables were assessed for importance after centering and scaling continuous variables and one-hot-encoding categorical variables such that they all range from 0 to 1. Using this approach, all of the variables are on equivalent scales and their importance can be directly assessed by their model coefficients. The 15 most important features are presented in the bar plot below.
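The ranking itself reduces to sorting features by absolute coefficient once everything is on a 0-to-1 scale; a sketch with hypothetical feature names:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data with features already in [0, 1]; feat_c drives the outcome.
rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 4))
y = (2 * X[:, 2] - X[:, 0] + 0.3 * rng.normal(size=500) > 0.7).astype(int)
names = ["feat_a", "feat_b", "feat_c", "feat_d"]  # hypothetical names

clf = LogisticRegression(max_iter=1000).fit(X, y)
ranked = sorted(zip(names, clf.coef_[0]), key=lambda t: abs(t[1]), reverse=True)
for name, coef in ranked:
    print(f"{name}: {coef:+.2f}")  # sign gives the direction of the association
```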
<img src="https://github.com/olmosjorge28/QTW-SPRING-2022/blob/main/ds7333_case_study_2/plots/final_model_factor_barplot.png?raw=1" width=400 height=400 />
## Conclusion <a id='conclusion'/>
We are proposing a classification model that predicts readmittance to the hospital within 30 days with a 26-27% F1 score (47-48% Recall and 18% Precision). This model maximizes the F1 score, balancing Recall and Precision so that both patients and hospitals can manage their risk effectively.
The model helps identify patients at high risk of readmission within 30 days and patients with a low risk of readmission that can be discharged. Our model identifies top risk factors for readmittance within 30 days which the hospital staff can use as focus areas when deciding to keep or discharge a patient. This model is not perfect and should be used as a guide rather than gospel. Additional feature engineering and domain knowledge experts are needed to guide model improvements.
### Future Considerations
The data were aggregated from multiple hospitals and may not be generalizable across health care systems, geography, etc. Hospitals may want to develop models with their own data or regional data to guide their staff on discharge decisions.
Need a conversation with stakeholders and domain experts regarding:
- Balance of precision vs. recall. What is the cost of a false positive vs. a false negative?
- Level of detail with diagnoses columns (diag 1-3). We generalized these diagnoses so there were fewer levels to model. We recommend reviewing the top-factor recoded diagnosis categories to see if any warrant a deeper dive.
### Other Models
Other classification modeling techniques - Naive Bayes, Random Forest, Boosted Decision Trees
### References <a id='References'/>
[1] American Heart Association. What is Diabetes? https://www.heart.org/en/health-topics/diabetes/about-diabetes
[2] "IDF DIABETES ATLAS Ninth Edition 2019" (PDF). www.diabetesatlas.org. Retrieved 18 May 2020.
[3] What is Logistic Regression? A Beginner's Guide [2022] (careerfoundry.com)
# Data Evaluation <a id='data-evaluation'>
```
# standard libraries
import pandas as pd
import numpy as np
import os
from IPython.display import Image
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import pyplot
%matplotlib inline
from tabulate import tabulate
# data pre-processing
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.model_selection import StratifiedKFold
#from sklearn.model_selection import StratifiedGroupKFold
from sklearn.impute import SimpleImputer
# prediction models
from sklearn.model_selection import cross_validate
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LogisticRegressionCV
#from kneed import KneeLocator
from scipy import stats
# metrics
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import plot_roc_curve
from sklearn.metrics import precision_recall_curve as pr_curve
from sklearn.metrics import plot_precision_recall_curve as plot_pr_curve
from sklearn.metrics import auc
# import warnings filter
import warnings
warnings.filterwarnings('ignore')
from warnings import simplefilter
simplefilter(action='ignore', category=FutureWarning)
```
## Loading Data <a id='loading-data'>
### Id Mappings <a id='idmappings'>
#### Admission Type
```
url = 'https://raw.githubusercontent.com/olmosjorge28/QTW-SPRING-2022/main/dataset_diabetes/IDs_mapping.csv'
admission_type_mapping = pd.read_csv(url, nrows=8, index_col=0)
admission_type_mapping
```
#### Discharge Disposition
```
discharge_disposition_mapping = pd.read_csv(url, nrows=30,skiprows=10, index_col=0 )
discharge_disposition_mapping
```
#### Admission Source Mapping
```
admission_source_mapping = pd.read_csv(url,skiprows=42, index_col=0 )
admission_source_mapping
```
## Diabetes Dataset <a id='dataset'>
```
url = 'https://raw.githubusercontent.com/olmosjorge28/QTW-SPRING-2022/main/dataset_diabetes/diabetic_data.csv'
df = pd.read_csv(url,na_values='?');
```
## Data Summary <a id='data-summary'>
| Feature name | Type | Description and values |
|-----------------------------|---------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Encounter ID | Numeric | Unique identifier of an encounter |
| Patient number | Numeric | Unique identifier of a patient |
| Race | Nominal | Values: Caucasian, Asian, African American, Hispanic, and other |
| Gender | Nominal | Values: male, female, and unknown/invalid |
| Age | Nominal | Grouped in 10-year intervals: [0, 10), [10, 20), …, [90, 100) |
| Weight | Numeric | Weight in pounds. |
| Admission type | Nominal | Integer identifier corresponding to 9 distinct values, for example, emergency, urgent, elective, newborn, and not available |
| Discharge disposition | Nominal | Integer identifier corresponding to 29 distinct values, for example, discharged to home, expired, and not available |
| Admission source | Nominal | Integer identifier corresponding to 21 distinct values, for example, physician referral, emergency room, and transfer from a hospital |
| Time in hospital | Numeric | Integer number of days between admission and discharge |
| Payer code | Nominal | Integer identifier corresponding to 23 distinct values, for example, Blue Cross/Blue Shield, Medicare, and self-pay |
| Medical specialty | Nominal | Integer identifier of a specialty of the admitting physician, corresponding to 84 distinct values, for example, cardiology, internal medicine, family/general practice, and surgeon |
| Number of lab procedures | Numeric | Number of lab tests performed during the encounter |
| Number of procedures | Numeric | Number of procedures (other than lab tests) performed during the encounter |
| Number of medications | Numeric | Number of distinct generic names administered during the encounter |
| Number of outpatient visits | Numeric | Number of outpatient visits of the patient in the year preceding the encounter |
| Number of emergency visits | Numeric | Number of emergency visits of the patient in the year preceding the encounter |
| Number of inpatient visits | Numeric | Number of inpatient visits of the patient in the year preceding the encounter |
| Diagnosis 1 | Nominal | The primary diagnosis (coded as first three digits of ICD9); 848 distinct values |
| Diagnosis 2 | Nominal | Secondary diagnosis (coded as first three digits of ICD9); 923 distinct values |
| Diagnosis 3 | Nominal | Additional secondary diagnosis (coded as first three digits of ICD9); 954 distinct values |
| Number of diagnoses | Numeric | Number of diagnoses entered to the system |
| Glucose serum test result | Nominal | Indicates the range of the result or if the test was not taken. Values: >200, >300, normal, and none if not measured |
| A1c test result | Nominal | Indicates the range of the result or if the test was not taken. Values: “>8” if the result was greater than 8%, “>7” if the result was greater than 7% but less than 8%, “normal” if the result was less than 7%, and “none” if not measured. |
| | | |
| Change of medications | Nominal | Indicates if there was a change in diabetic medications (either dosage or generic name). Values: “change” and “no change” |
| Diabetes medications | Nominal | Indicates if there was any diabetic medication prescribed. Values: “yes” and “no” |
| 24 features for medications | Nominal | For the generic names: metformin, repaglinide, nateglinide, chlorpropamide, glimepiride, acetohexamide, glipizide, glyburide, tolbutamide, pioglitazone, rosiglitazone, acarbose, miglitol, troglitazone, tolazamide, examide, sitagliptin, insulin, glyburide-metformin, glipizide-metformin, glimepiride-pioglitazone, metformin-rosiglitazone, and metformin-pioglitazone, the feature indicates whether the drug was prescribed or there was a change in the dosage. Values: “up” if the dosage was increased during the encounter, “down” if the dosage was decreased, “steady” if the dosage did not change, and “no” if the drug was not prescribed |
| Readmitted | Nominal | Days to inpatient readmission. Values: “<30” if the patient was readmitted in less than 30 days, “>30” if the patient was readmitted in more than 30 days, and “No” for no record of readmission. |
## Remapping Data <a id='remapping'>
### Re-mapping data based on ID mappings
```
def remapIds(*new_mappings: tuple) -> dict:
mapping_dict = dict()
for mapping in new_mappings:
mapping_dict[mapping[1]] = mapping[0].map(mapping[2])
return mapping_dict
def refactorMappingIds(inputDf: pd.DataFrame) -> pd.DataFrame:
admission_type_id_new_mapping = {
1: "emergency",
2: "urgent",
3: "elective",
4: "newborn",
5: float("NaN"),
6: float("NaN"),
7: "trauma-center",
8: float("NaN")
}
disharge_disposition_new_mapping = {
1: "discharged",
2: "transfer",
3: "transfer",
4: "transfer",
5: "transfer",
6: "transfer",
7: "ama",
8: "transfer",
9: "admitted",
10: "transfer",
11: "expired",
12: "admitted",
13: "hospice",
14: "hospice",
15: "transfer",
16: "transfer",
17: "transfer",
18: float("NaN"),
19: "expired",
20: "expired",
21: "expired",
22: "transfer",
23: "transfer",
24: "transfer",
25: float("NaN"),
26: float("NaN"),
27: "transfer",
28: "transfer",
29: "transfer",
}
admission_sourcing_new_mapping = {
1: "referral",
2: "referral",
3: "referral",
4: "transfer",
5: "transfer",
6: "transfer",
7: "emergency",
8: "law-enforcement",
9: float("NaN"),
10: "transfer",
11: "normal-delivery",
12: "other-delivery",
13: "other-delivery",
14: "other-delivery",
15: float("NaN"),
17: float("NaN"),
18: "transfer",
19: "transfer",
20: float("NaN"),
21: float("NaN"),
22: "transfer",
23: "normal-delivery",
24: "normal-delivery",
25: "transfer",
26: "transfer"
}
df = inputDf.copy()
mapping_tuples = [
(df['discharge_disposition_id'],'disharge_disposition_new_mapping', disharge_disposition_new_mapping),
(df['admission_source_id'],'admission_source_new_mapping', admission_sourcing_new_mapping),
(df['admission_type_id'], 'admission_type_id_new_mapping', admission_type_id_new_mapping)
]
remappings = remapIds(*mapping_tuples)
for newMappingKey in remappings:
df[newMappingKey] = remappings[newMappingKey]
return df
```
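The `remapIds` helper relies on pandas `Series.map`, which returns NaN for any id that is missing from the dictionary — the same behavior the mappings above use to mark ids such as 5, 6, and 8 as missing. A minimal sketch with a hypothetical, trimmed-down mapping:

```python
import pandas as pd

# Trimmed, illustrative version of admission_type_id_new_mapping above.
admission_type_id_new_mapping = {1: 'emergency', 2: 'urgent', 3: 'elective'}

ids = pd.Series([1, 3, 99])  # 99 is not in the mapping
mapped = ids.map(admission_type_id_new_mapping)
print(mapped.tolist())  # ['emergency', 'elective', nan]
```

Any id absent from the dict becomes NaN, so the downstream imputation step can treat it like any other missing value.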
### Remapping Diag(nosis) Codes based on International Classification of Diseases (ICD) codes
Below you'll find a link to the ICD ranges we used to remap features __diag_1__, __diag_2__, and __diag_3__: <br>
https://www.aapc.com/codes/icd9-codes-range/
```
def getDiagCategory(input: float) -> str:
val: str
if input < 1:
val = float("NaN")
elif input < 140:
val = 'A'
elif input < 240:
val = 'B'
elif input < 280:
val = 'C'
elif input < 290:
val = 'D'
elif input < 320:
val = 'E'
elif input < 390:
val = 'F'
elif input < 460:
val = 'G'
elif input < 520:
val = 'H'
elif input < 580:
val = 'I'
elif input < 630:
val = 'J'
elif input < 680:
val = 'K'
elif input < 710:
val = 'L'
elif input < 740:
val = 'M'
elif input < 760:
val = 'N'
elif input < 780:
val = 'O'
elif input < 800:
val = 'P'
elif input < 1000:
val = 'Q'
elif input < 2000:
val = 'R'
elif input < 3000:
val = 'S'
else:
val = 'Z'
return val
def categorizeDiag(diag: pd.Series) -> pd.Series:
df = diag.copy()
df.fillna(0,inplace=True)
df.mask(df.str.startswith('V', na=False),1000, inplace=True)
df.mask(df.str.startswith('E', na=False),2000, inplace=True)
df = pd.to_numeric(df)
df = df.map(getDiagCategory)
return df
def recategorizeDiags(inputDf: pd.DataFrame) -> pd.DataFrame:
df = inputDf.copy()
df['diag_1_categorized'] = categorizeDiag(df['diag_1'])
df['diag_2_categorized'] = categorizeDiag(df['diag_2'])
df['diag_3_categorized'] = categorizeDiag(df['diag_3'])
return df
def addDiag(*diags):
df = None
for diag in diags:
if (df is None):
df = diag.notna().astype(int)
else:
df = df + diag.notna().astype(int)
return df
def recategorizeData(inputDf: pd.DataFrame) -> pd.DataFrame:
df = inputDf.copy()
df = refactorMappingIds(df)
df = recategorizeDiags(df)
df['total_diag'] = addDiag(df['diag_1'],df['diag_2'],df['diag_3'])
return df
df_recategorized = recategorizeData(df)
```
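The chain of thresholds in `getDiagCategory` is equivalent to a single `pd.cut` over the ICD-9 range boundaries. This sketch is an alternative formulation, not the pipeline's code; it omits the 'Z' fallback for codes at or above 3000 and maps out-of-range values to NaN:

```python
import pandas as pd

# ICD-9 range boundaries taken from getDiagCategory above; V and E codes are
# first recoded to the sentinels 1000 and 2000, as in categorizeDiag().
bins = [1, 140, 240, 280, 290, 320, 390, 460, 520, 580, 630, 680,
        710, 740, 760, 780, 800, 1000, 2000, 3000]
labels = list('ABCDEFGHIJKLMNOPQRS')

def categorize(diag: pd.Series) -> pd.Series:
    s = diag.copy().fillna(0)
    s = s.mask(s.str.startswith('V', na=False), 1000)
    s = s.mask(s.str.startswith('E', na=False), 2000)
    s = pd.to_numeric(s)
    # right=False makes each interval [low, high), matching the elif chain.
    return pd.cut(s, bins=bins, labels=labels, right=False).astype(object)

demo = pd.Series(['250.83', 'V57', 'E909', None])
print(categorize(demo).tolist())  # ['C', 'R', 'S', nan]
```

Keeping the boundaries in one list makes it easier to audit them against the published ICD-9 ranges.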
### Recoding Dependent Variable
```
def recode_variable(series: pd.Series, condition):
s_copy = series.copy()
s_copy = s_copy==condition
s_copy = s_copy.astype(int)
return s_copy
df_recategorized['readmitted'] = recode_variable(df_recategorized['readmitted'], '<30')
```
## Missing Values
  No missing values were observed in any continuous variables. All missing values occurred in just 9 of the 42 categorical features (see table below). Missing values were also observed in the Id_mapping files, as mentioned in the EDA section above.
  The team investigated whether these missing values have any internal correlation or dependency. ***A heatmap did not show any obvious overlap or correlation with the missing values.***
  The diagnosis codes in columns diag_1, diag_2, and diag_3 reference International Classification of Diseases (ICD) codes. The data hierarchy of these codes was leveraged to reduce the number of category levels from 700-800 each to just 19 each. This significantly reduced the number of dummy variables and made imputation much simpler. See the code below for more details.
https://www.aapc.com/codes/icd9-codes-range/
Following are the variables with the missing values:
| Missing Variable | # of missing values |
|----------------------------------|---------------------|
| weight | 98,569 |
| race | 2,273 |
| payer_code | 40,256 |
| medical Specialty | 49,949 |
| disharge_disposition_new_mapping | 4,680 |
| admission_source_new_mapping | 7,067 |
| diag_1 | 21 |
| diag_2 | 358 |
| diag_3 | 2,423 |
  The race variable is identified as a potential ethical concern and a sensitive discussion topic, and the team decided to fill the missing values of race by the mode, Caucasian. The occurrence of missing information for race is relatively low, so there shouldn't be much concern for inadvertently creating any artificial trends in the model.
```
df_recategorized.isna().sum()
```
## Feature Removal <a id='feature-removal'>
  The team reviewed the data in detail after addressing the missing values. Features that are not meaningful, such as __encounter_id__, __patient_nbr__, __examide__, and __citoglipton__, are removed from the analysis. encounter_id and patient_nbr are ID codes and will not add value to the model; the examide and citoglipton variables are all zeros.
  Additionally, we remove rows whose __discharge_disposition__ id was labeled __expired__, which indicates the patient is deceased and therefore cannot be readmitted.
```
df_recategorized = df_recategorized.drop(['patient_nbr','encounter_id', 'examide', 'weight','citoglipton'], axis=1)
```
  Also, the following features were recoded and remapped into new variables with a smaller set of levels, so the original features are no longer needed:
- discharge_disposition_id
- admission_source_id
- admission_type_id
- diag_1
- diag_2
- diag_3
```
df_recategorized = df_recategorized.drop(['discharge_disposition_id','admission_source_id','admission_type_id','diag_1',
                                          'diag_2','diag_3'], axis=1)
```
```
df_recategorized = df_recategorized[df_recategorized['disharge_disposition_new_mapping']!='expired']
df_recategorized.reset_index(drop=True, inplace=True)
```
## Exploratory Data Analysis (EDA) <a id='eda'>
### Response - Readmitted
<img src="https://github.com/olmosjorge28/QTW-SPRING-2022/blob/main/ds7333_case_study_2/plots/readmitted.png?raw=1" width=300 height=300 />
### Feature Collinearity & Outliers <a id='feature-collinearity'>
No collinearity or outliers were identified in the pairs plot of the continuous variables below. Several "continuous" variables are right-skewed, discrete, and/or zero-inflated based on the kernel density plots on the main diagonal. There are no obvious shifts or separations in the response variable (blue and orange) vs. the continuous variables.
The "outpatient" and "emergency" distributions were examined more closely using a log scale for the count in the second figure below.
<img src="https://github.com/olmosjorge28/QTW-SPRING-2022/blob/main/ds7333_case_study_2/plots/cont_feature_vs_readmit_pairplot.png?raw=1" width=1100 height=1100 />
<img src="https://github.com/olmosjorge28/QTW-SPRING-2022/blob/main/ds7333_case_study_2/plots/outpatient_emergency_histplot.png?raw=1" width=600 height=600 />
## Assumptions <a id='assumptions'>
* We assume the ICD codes are groupable
* "Expired" means the patient is deceased
* For discharge_disposition_id and admission_source_id we combined ids that had similar descriptions, such as referral, transfer, and expired; during our EDA they appeared to behave similarly.
* After cleanup and imputation, the data have sufficient information to predict or estimate readmissions.
# Model Preparations <a id='model-preparations'/>
## Modeling Method Proposed <a id='modelmethod'>
  The objective of our analysis is to classify diabetes patients who have the highest risk of being re-admitted to the hospital within 30 days using logistic regression modeling. L2 regularization was used to prevent overfitting and maintain model performance. The team observed that L2 is faster and more efficient to run than L1 with this dataset that had a large number of dummy-coded variables.
  The metrics used to evaluate model performance were precision and recall, which are useful when the classes are imbalanced; their definitions and interpretation are given in the Evaluation Metrics section below.
## Model Appropriateness <a id='approp'>
  Logistic regression is a classification algorithm used to predict a binary outcome from a set of independent variables. It is appropriate when the target variable is dichotomous, i.e., it falls into one of two categories (such as "yes"/"no", "pass"/"fail", or 0/1). Because the response variable in this analysis is dichotomous, the team chose logistic regression to analyze the diabetic data.
## Evaluation Metrics <a id='metrics'>
  The metrics used to evaluate model performance were precision and recall. Precision and recall are useful metrics when the classes are imbalanced like in this dataset. These metrics are defined as follows:
- Recall = TP/(TP+FN)
- Precision = TP/(TP+FP)
Where:
- True Positive (TP) is "patients correctly predicted to be readmitted within 30 days". These predictions would help keep a patient in the hospital longer for observation rather than discharging them, because they would be at risk to be readmitted within 30 days.
- False Negative (FN) is "patients <b>in</b>correctly predicted to <b>not</b> be readmitted within 30 days". These predictions would likely discharge a patient too soon, resulting in readmittance within 30 days.
- False Positive (FP) is "patients <b>in</b>correctly predicted to be readmitted within 30 days". These predictions would likely keep people in the hospital that don't need to be there and are at low risk for readmittance within 30 days.
Both Recall and Precision focus the modeling on True Positives, but Recall is penalized by False Negatives whereas Precision is penalized by False Positives. A scatterplot of Precision vs. Recall (a PR Curve) can be used to tune the trade-off between False Negatives and False Positives for any given model.
To balance both precision and recall, the team used the F1 score. The F1 score combines precision and recall into a single metric by taking their harmonic mean:
F1 = 2 * Recall * Precision / (Recall + Precision)
By maximizing the F1 score, the model balances both recall and precision for the positive class. Additionally, we used the Precision-Recall (PR) curve to generate all F1 scores and find the maximum F1 score and its associated threshold.
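The threshold search described above can be sketched with scikit-learn's `precision_recall_curve`; the scores here are toy values, while the real pipeline uses `predict_proba` on the training set:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Toy labels and scores; illustrative only.
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.2, 0.5])

precision, recall, thresholds = precision_recall_curve(y_true, scores)
# F1 at every candidate threshold; the final precision/recall pair has no
# associated threshold, so it is excluded. The epsilon guards against 0/0.
f1 = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-12)
best = np.argmax(f1)
print(round(f1[best], 3), round(thresholds[best], 3))
```

The threshold at the argmax of F1 is then applied to test-set probabilities, exactly as the later `analyse_model` step does.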
## Model Usefulness <a id='usefulness'>
  In the best interest of the patient, the model would maximize Recall such that hospital readmission is prevented at all costs. However, the resulting lower precision (higher false positive rate) would be costly. It could lead to a shortage of beds or staff in the hospital, keeping more patients that are fine and that would not be readmitted if discharged. For the model to be useful, it needs to balance the trade-off between recall and precision. A more useful model would have a higher F1 score with higher recall and/or precision.
## Split Dependent and Independent Variables <a id='split'>
```
def split_dependent_and_independent_variables(df: pd.DataFrame, y_var: str):
    X = df.copy()
    y = X[y_var]
    X = X.drop([y_var], axis=1)
    return X, y
X, y = split_dependent_and_independent_variables(df_recategorized, 'readmitted')
```
## Data Imputation <a id='imputation'>
```
def fitImputers(x_train: pd.DataFrame):
df = x_train.copy()
imputer = SimpleImputer(missing_values = np.nan,
strategy ='most_frequent')
columns_of_interest = ['disharge_disposition_new_mapping', 'admission_source_new_mapping', 'admission_type_id_new_mapping', 'diag_1_categorized', 'diag_2_categorized', 'diag_3_categorized', 'race', 'payer_code']
imputer.fit(df[columns_of_interest])
grouped_modes = df.groupby(['admission_type_id_new_mapping'])['medical_specialty'].agg(pd.Series.mode)
grouped_modes['trauma-center'] = grouped_modes['urgent']
def customApply(input):
if(pd.isnull(input[1])):
input[1] = grouped_modes[input[0]]
return input
return imputer, customApply
def imputeData(x_train: pd.DataFrame, x_test: pd.DataFrame):
imputed_train = x_train.copy()
imputed_test = x_test.copy()
imputer, customApply = fitImputers(x_train)
columns_of_interest = ['disharge_disposition_new_mapping', 'admission_source_new_mapping', 'admission_type_id_new_mapping', 'diag_1_categorized', 'diag_2_categorized', 'diag_3_categorized', 'race', 'payer_code']
imputed_train[columns_of_interest] = imputer.transform(imputed_train[columns_of_interest])
imputed_test[columns_of_interest] = imputer.transform(imputed_test[columns_of_interest])
twoColumnsTrain = imputed_train[['admission_type_id_new_mapping','medical_specialty']]
twoColumnsTest = imputed_test[['admission_type_id_new_mapping','medical_specialty']]
twoColumnsTrain.apply(customApply, axis=1);
twoColumnsTest.apply(customApply, axis=1);
imputed_train['medical_specialty'] = twoColumnsTrain['medical_specialty']
imputed_test['medical_specialty'] = twoColumnsTest['medical_specialty']
imputed_train.to_csv('diabetic_data_train_imputed.csv')
imputed_train = pd.read_csv('diabetic_data_train_imputed.csv')
imputed_test.to_csv('diabetic_data_test_imputed.csv')
imputed_test = pd.read_csv('diabetic_data_test_imputed.csv')
imputed_train = imputed_train.drop(['Unnamed: 0'], 1)
imputed_test = imputed_test.drop(['Unnamed: 0'], 1)
return imputed_train, imputed_test
```
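The per-group mode imputation that `fitImputers` implements with `grouped_modes` and `customApply` can also be expressed with `groupby`/`transform`. A minimal sketch on toy data — the column names mirror the real ones, but the frame itself is illustrative:

```python
import pandas as pd

df = pd.DataFrame({
    'admission_type_id_new_mapping': ['emergency', 'emergency', 'urgent', 'urgent'],
    'medical_specialty': ['Cardiology', None, 'Surgery', None],
})

# Fill each missing specialty with the mode of its admission-type group.
df['medical_specialty'] = (
    df.groupby('admission_type_id_new_mapping')['medical_specialty']
      .transform(lambda s: s.fillna(s.mode().iloc[0]))
)
print(df['medical_specialty'].tolist())
# ['Cardiology', 'Cardiology', 'Surgery', 'Surgery']
```

In the real pipeline the group modes would still need to be computed on the training split only, as the `fitImputers`/`imputeData` separation above does.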
## Sampling & Scaling Data <a id='scaling' />
```
def shuffle_split(X, y, test_size, random_state):
stratified_shuffle_split = StratifiedShuffleSplit(n_splits=1, test_size=test_size, random_state=random_state)
for train_index, test_index in stratified_shuffle_split.split(X, y):
X_train, X_test = X.iloc[train_index,:], X.iloc[test_index]
y_train, y_test = y[train_index], y[test_index]
return X_train, X_test, y_train, y_test
def scale_data(X_train, X_test):
scl = StandardScaler()
scl.fit(X_train)
X_train_scaled = pd.DataFrame(scl.transform(X_train), columns = X_train.columns.values, index = X_train.index)
X_test_scaled = pd.DataFrame(scl.transform(X_test), columns = X_train.columns.values, index = X_test.index )
return X_train_scaled, X_test_scaled
def encode_data(X_train, X_test):
enc = OneHotEncoder(handle_unknown = 'ignore')
enc.fit(X_train)
X_train_encoded = pd.DataFrame( enc.transform(X_train).toarray(),
columns = enc.get_feature_names(X_train.columns), index = X_train.index)
X_test_encoded = pd.DataFrame(enc.transform(X_test).toarray(),
columns = enc.get_feature_names(X_test.columns), index = X_test.index)
return X_train_encoded, X_test_encoded
def scale_and_encode_data(x_train, x_test):
cont_vars = x_train._get_numeric_data().columns
X_cont_train_scaled, X_cont_test_scaled = scale_data(x_train[cont_vars], x_test[cont_vars])
X_train_encoded, X_test_encoded = encode_data(x_train.drop(cont_vars, 1), x_test.drop(cont_vars, 1))
return pd.concat([X_cont_train_scaled, X_train_encoded], axis=1), pd.concat([X_cont_test_scaled, X_test_encoded], axis=1)
```
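`handle_unknown='ignore'` matters here because a category that appears only in the test split would otherwise raise at transform time. A small sketch of the behavior (note that newer scikit-learn versions replace `get_feature_names` with `get_feature_names_out`):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(handle_unknown='ignore')
enc.fit(np.array([['a'], ['b']]))

# 'c' was never seen during fit: it encodes as an all-zero row instead of raising.
out = enc.transform(np.array([['a'], ['c']])).toarray()
print(out)  # [[1. 0.], [0. 0.]]
```

The all-zero encoding quietly drops information for unseen categories, which is an acceptable trade-off when the train split is large enough to cover almost all levels.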
## Prepare for Modeling <a id='prepare'>
```
def model_preparation(X, y, test_split, random):
X_train, X_test, y_train, y_test = shuffle_split(X, y, test_size = test_split, random_state= random)
X_train, X_test = scale_and_encode_data(*imputeData(X_train, X_test))
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = model_preparation(X,y,0.3,1234)
```
# Model Building & Evaluations <a id='model-building'/>
This section analyzes the models' performance.
## Handling missing values
Following are the variables with the missing values and the methodology team used to mitigate the missing values:
| Missing Variable | # of missing values | Mitigation Methodology |
|----------------------------------|---------------------|------------------------------------------------------------|
| weight | 98,569 | ~97% of data was missing so variable was dropped. |
| race | 2,273 | Impute by Mode |
| payer_code | 40,256 | Impute by Mode |
| medical Specialty | 49,949 | Group by admission_type_id_new_mapping then Impute by Mode |
| disharge_disposition_new_mapping | 4,680 | Impute by Mode |
| admission_source_new_mapping | 7,067 | Impute by Mode |
| diag_1 | 21 | Recategorize and Impute by Mode |
| diag_2 | 358 | Recategorize and Impute by Mode |
| diag_3 | 2,423 | Recategorize and Impute by Mode |
## Sampling Methodology <a id='sampling-methodology'>
  The response variable is somewhat unbalanced, with 11% readmitted within 30 days and 89% not readmitted within 30 days. Stratified data splitting was used to ensure that these two classes are evenly represented in the training and test sets, with a 70/30 train/test split. The dataset was split into training and test sets before imputation to avoid leakage. The continuous variables were scaled so that high-magnitude variables would not influence the outcome more than low-magnitude variables.
## Model Setup <a id='modelsetup'>
  For the penalty we used L2 regularization to prevent overfitting and maintain model performance; the team observed that L2 is faster and more efficient than L1 on this dataset, which has a large number of dummy-coded variables. Given the dependent-variable imbalance, we applied weights to the coefficients by passing 'balanced' as the class_weight parameter. Lastly, we iterated over a range of Cs (the inverse of regularization strength) to optimize the model for F1 score. The team used the liblinear solver because it supports L2 regularization and, in our modeling, it converged (not all solvers did) and did so relatively quickly compared to the other solvers.
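With `class_weight='balanced'`, scikit-learn reweights each class by `n_samples / (n_classes * n_class_samples)`. A quick check of the weights implied by the roughly 89/11 split described above (counts here are illustrative):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Roughly the 89% / 11% imbalance in the response variable.
y = np.array([0] * 89 + [1] * 11)
weights = compute_class_weight(class_weight='balanced', classes=np.array([0, 1]), y=y)
print(weights)  # [100/(2*89), 100/(2*11)] ≈ [0.562, 4.545]
```

The minority (readmitted) class is thus upweighted roughly eightfold in the loss, which is why recall improves relative to an unweighted fit.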
```
def run_lgr(penalty, c, x_train, x_test, y_train, y_test):
lgr = LogisticRegression(penalty=penalty, C=c, solver='liblinear', class_weight='balanced')
lgr.fit(x_train,y_train)
y_hat_train = lgr.predict(x_train)
score = f1_score(y_train, y_hat_train)
return lgr, score
def run_lgr_key(e):
    return e[1]
X_train, X_test, y_train, y_test = model_preparation(X,y,0.3,1234)
C_s = np.linspace(0.2, 2, 10)
C_s
def find_best_model(model_key, C_s, x_train, x_test, y_train, y_test):
lgrs = []
for count, c in enumerate(C_s):
lgr, score = model_key[0]('l2', c, x_train, x_test, y_train, y_test)
print(count+1 , 'out of ', len(C_s), 'jobs | C: ', round(c,1), ' | Score: ', score)
lgrs.append((lgr,score))
best_lgr = max(lgrs, key=model_key[1])
return lgrs, best_lgr
def run_lgr_proba(penalty, c, x_train, x_test, y_train, y_test):
lgr = LogisticRegression(penalty=penalty, C=c, solver='liblinear', class_weight='balanced')
lgr.fit(x_train,y_train)
y_train_proba = lgr.predict_proba(x_train)
precision, recall, thresholds = pr_curve(y_train, y_train_proba[:,1])
    # build the mask once so precision and recall stay aligned after filtering
    mask = precision != 0
    precision = precision[mask]
    recall = recall[mask]
f1 = 2*precision*recall/(precision+recall)
train_threshold = thresholds[ np.where(f1 == np.max(f1)) ][0]
max_f1 = np.max(f1)
max_recall = recall[ np.where(f1 == np.max(f1)) ][0]
max_precision = precision[ np.where(f1 == np.max(f1)) ][0]
return lgr, { 'f1': round(max_f1 ,6), 'recall': round(max_recall ,4), 'precision': round(max_precision ,4), 'threshold': round(train_threshold,4) }
def run_lgr_proba_key(e):
return e[1]['f1']
def analyse_model(lgr, threshold, x_test ,y_test):
y_hat_proba = lgr.predict_proba(x_test)
y_hat_test = ( y_hat_proba[:,1] > threshold ).astype(int)
report = classification_report(y_test, y_hat_test)
return report
def print_report(report):
print("\n", report[0:43])
print(report[109:152])
print(report[163:210])
```
## Baseline Model <a id='baseline-model'>
### Training Set
```
cont_vars = X._get_numeric_data().columns
cont_vars
lgrs_proba, best_baseline = find_best_model((run_lgr_proba,run_lgr_proba_key),
C_s, X_train[cont_vars], X_test[cont_vars], y_train, y_test)
best_baseline
print_report(analyse_model( best_baseline[0], best_baseline[1]['threshold'], X_train[cont_vars], y_train))
plot_pr_curve(best_baseline[0], X_train[cont_vars], y_train)
```
### Test Set
```
print_report(analyse_model( best_baseline[0], best_baseline[1]['threshold'], X_test[cont_vars], y_test))
plot_pr_curve(best_baseline[0], X_test[cont_vars], y_test)
```
### Analysis of Baseline Model Performance
The baseline model had the following metrics:
| Group | Precision | Recall | f1-score | accuracy |
|----------|-----------|--------|----------|----------|
| Training | 0.19 | 0.39 | 0.26 | 0.74 |
| Test | 0.19 | 0.38 | 0.25 | 0.74 |
The baseline model performance was consistent between the training and test sets.
The baseline model had a <b>recall of 38-39%</b> and precision of 19% (f1-score = 25-26%).
## Final Model <a id='final-model'>
### Training Set
```
lgrs_proba, best_final = find_best_model((run_lgr_proba,run_lgr_proba_key), C_s, X_train, X_test, y_train, y_test)
best_final
print_report(analyse_model( best_final[0], best_final[1]['threshold'], X_train, y_train))
plot_pr_curve(best_final[0], X_train, y_train)
```
### Test Set
```
print_report(analyse_model( best_final[0], best_final[1]['threshold'], X_test, y_test))
plot_pr_curve(best_final[0], X_test, y_test)
```
## Model's Performance Analysis <a id='performance-analysis'/>
### Analysis of Final Model Performance
The final model had the following metrics:
| Group | Precision | Recall | f1-score | accuracy |
|----------|-----------|--------|----------|----------|
| Test | 0.18 | 0.48 | 0.27 | 0.71 |
| Training | 0.18 | 0.47 | 0.26 | 0.70 |
The final model performance was consistent between the training and test sets.
The final model had a <b>recall of 47-48%</b> and precision of 18% (f1-score = 26-27%). This was a 10% incremental improvement in the recall (and a 3-4% drop in accuracy). The final model accurately predicts approximately 1 in 2 patients that will be readmitted within 30 days.
# Model Interpretability & Explainability <a id='model-explanation'>
## Baseline Model <a id='baseline'>
### Which variables were more important and why? How did you come to the conclusion these variables were important, and how should the audience interpret this?
The single most important continuous feature in the baseline model was number_inpatient. This is the number of times a patient was previously admitted to inpatient care, which has a strong positive correlation to being readmitted within 30 days.
```
feat_coef = [[i, j] for i, j in zip(cont_vars, best_baseline[0].coef_[0])]
feat_coef = pd.DataFrame(feat_coef, columns = ['feature','coef'])
top_feat_baseline = feat_coef.loc[abs(feat_coef['coef'])>0].sort_values(by='coef')
feat_plot = sns.barplot(data=top_feat_baseline, x='feature', y='coef', palette = "ch:s=.25,rot=-.25")
plt.xticks(rotation=90)
plt.show()
```
## Final Model <a id='finalmodel'>
### Which variables were more important and why?
The three most important factors were all negatively correlated with readmittance within 30 days:
- Otolaryngology (a.k.a. Ears, Nose, and Throat)
- Gynecology
- Diagnoses for "Complications in Child Birth" (K)
Input from domain-knowledge experts is needed to assess why these variables were the most important and negatively correlated with the response.
### How did you come to the conclusion these variables were important, and how should the audience interpret this?
Variables were assessed for importance after centering and scaling the continuous variables and one-hot encoding the categorical variables, so that all features are on comparable scales and can be compared by their model coefficients. These coefficients are presented in the bar plot below.
```
feat_coef = [[i, j] for i, j in zip(X_train.columns, best_final[0].coef_[0])]
feat_coef = pd.DataFrame(feat_coef, columns = ['feature','coef'])
top_feat_final = feat_coef.loc[abs(feat_coef['coef'])>1].sort_values(by='coef')
feat_plot = sns.barplot(data=top_feat_final, x='feature', y='coef', palette = "ch:s=.25,rot=-.25")
plt.xticks(rotation=90)
plt.show()
```
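For communicating these coefficients to hospital staff, it can help to exponentiate them into odds ratios, since logistic-regression coefficients are log-odds. A hedged sketch — the coefficient value below is illustrative, not taken from the fitted model:

```python
import math

coef = -1.2  # hypothetical coefficient for one one-hot-encoded feature
odds_ratio = math.exp(coef)
print(round(odds_ratio, 3))  # ≈ 0.301: presence of the feature cuts the odds of readmission to ~30%
```

A negative coefficient (odds ratio below 1) lowers the predicted odds of readmission within 30 days, matching the interpretation of the three negatively correlated factors above.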
# Conclusion <a id='conclusion'>
### Final Model Proposal <a id='final-model-proposal'/>
We propose a classification model that predicts readmission to the hospital within 30 days with a 26-27% F1 score (47-48% Recall and 18% Precision). This model maximizes the F1 score, balancing Recall and Precision such that both patients and hospitals can manage their risk effectively.
The model helps identify patients at high risk of readmission within 30 days and patients with a low risk of readmission that can be discharged. Our model identifies top risk factors for readmittance within 30 days which the hospital staff can use as focus areas when deciding to keep or discharge a patient. This model is not perfect and should be used as a guide rather than gospel. Additional feature engineering and domain knowledge experts are needed to guide model improvements.
### Future Considerations and Model Enhancements <a id='model-enhancements'/>
The data were aggregated from multiple hospitals and may not be generalizable across health care systems, geography, etc. Hospitals may want to develop models with their own data or regional data to guide their staff on discharge decisions.
Need a conversation with stakeholders regarding:
- Balance of precision vs. recall. What is the cost of a false positive vs. a false negative?
- Level of detail with diagnoses columns (diag 1-3). We generalized these diagnoses such that there were fewer levels to model. We recommend reviewing the top factor recoded diagnoses categories to see if any warrant a deeper dive
### Alternative Modeling Approaches <a id='alternative-modeling-approaches'>
Other classification modeling techniques - Naive Bayes, Random Forest, Boosted Decision Trees
Preprocess Training Data
=====================
For the generation of training data you need aligned pairs of fluorescence microscopy (FM) and electron microscopy (EM) images. The FM images must be saved in one directory and the EM images in another directory.<br/> <br/>
First, import some Python packages and connect Python to ImageJ/Fiji. To do so, insert the **path to your Fiji** installation and make sure that **Fiji is closed**. If Fiji is not closed, problems with Fiji may occur.
```
from __future__ import print_function, unicode_literals, absolute_import, division
import imagej
ij = imagej.init('/home/s353960/rickdata/Programme/Fiji.app/') # Insert path to your Fiji app
from jnius import autoclass
import numpy as np
import matplotlib.pyplot as plt
import os
import PIL
import imageio # dependencies: Numpy, Pillow
import cv2
from PIL import Image
from tifffile import imread
from tifffile import imwrite
from csbdeep.utils import plot_some
from csbdeep.utils.tf import limit_gpu_memory
from csbdeep.data import RawData, create_patches_reduced_target
```
Execute the following cell to limit GPU memory consumption to half of the GPU memory. Limiting GPU memory is only necessary if other processes also need the GPU. If you are using the GPU only for training this network, you do not need to execute the following cell.
```
limit_gpu_memory(fraction=1/2)
```
## 1. Preprocess fluorescence microscopic images
For the next steps you need two directories: one **directory with the raw FM images** and one **directory for the processed FM images** that serve as ground truth for the training process. The directory with the raw images should contain about 100 FM images as training data; the dimensions of these images may vary. The directory for the processed FM images should be an empty directory to save the processed images.
<div align="center">
<br/>
Here is an example for an unprocessed image with the chromatin in the blue and green channel:
<img src="https://raw.githubusercontent.com/Rickmic/Deep_CLEM/assets/fluo_unprocessed.png" width="300"/>
</div>
```
# directory for raw fluorescence microscopic images:
GTdirectory = '/home/s353960/rickdata/F1 Praktikum/04_2validation_images/fluo_additional/'
# directory for processed fluorescence microscopic images:
Xdirectory = '/home/s353960/rickdata/F1 Praktikum/04_2validation_images/fluo_training2/'
#os.mkdir(Xdirectory) # uncomment if directory does not exist already
```
After that, each FM image is processed: it is converted to a greyscale image and downsized to 256x256 pixels. If an RGB image is provided, the **blue channel is extracted** (preset). If the chromatin signal is located in another channel, you have to specify that channel in the code cell below. After executing this cell, the number of images in the directory with the raw FM images and in the directory with the processed FM images must be equal.
<div align="center">
<br/>
This is an example for a processed image:
<img src="https://raw.githubusercontent.com/Rickmic/Deep_CLEM/assets/fluo_processed.png"/>
</div>
```
# convert RGB image or a z stack into a greyscale image:
def StackRGB(shape, IM):
channel2 = shape[2]
if channel2 <=4: # image is a RGB image
channelofI = IM[:, :, 2] # change channel number if channel for correlation of
# light microscopic image differs 0 = red; 1 = green; 2 = blue
else: # image is a stack
channelofI = IM[0, :, :]
# return greyscale image:
return channelofI
Image.MAX_IMAGE_PIXELS = None # avoid error message for extremely large images
print('groundtruth--------------------------------------------------')
for filename in os.listdir(GTdirectory): # loop over all files in the GTdirectory
if filename.endswith((".png", ".tif", ".tiff")): # check if file is an image
GTpath = os.path.join(GTdirectory, filename)
IM = imageio.imread(GTpath) # open image
dimension = IM.ndim
shape = IM.shape
if dimension >= 3: # RGB image or stack
channelofI = StackRGB(shape, IM)
else: # greyscale image
channelofI = IM
GT = channelofI
#GT = GT[:960, :] # uncomment if ground truth image should be cropped [y, x]
GT = cv2.resize(GT, dsize=(256, 256), interpolation=cv2.INTER_LINEAR) # resize image to 256x256
name, ext = os.path.splitext(filename) # split filename in name and extension (ext)
writepath = os.path.join(Xdirectory, name + '.tif')
imwrite(writepath, GT) # save image as .tif
# print feedback
print(filename + ': processed')
text = 'shape: %s to %s' % (shape, GT.shape)
print(text)
print(' ')
```
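After this cell runs, the raw and processed directories should contain the same number of images, as noted above. A small helper sketch to check this — `GTdirectory` and `Xdirectory` are the variables defined earlier in the notebook:

```python
import os

def count_images(directory):
    # Count files with the image extensions handled by the processing loop above.
    return sum(
        f.endswith(('.png', '.tif', '.tiff')) for f in os.listdir(directory)
    )

# Example usage inside the notebook:
# assert count_images(GTdirectory) == count_images(Xdirectory)
```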
## 2. Preprocess electron microscopic images
For this step you have to specify two more directories: one **directory with the raw EM images** and one **directory for the processed EM images** for the training process. The directory with the raw EM images should contain exactly the same number of images as the directory with the raw FM images, and the dimensions and name of each EM image should match the corresponding FM image. The directory for the processed EM images should be an empty directory to save the processed images.
<div align="center">
<br/>
Here is an example for an unprocessed image:
<img src="https://raw.githubusercontent.com/Rickmic/Deep_CLEM/assets/sem_unprocessed.png" width="300" />
<br/>
</div>
```
# directory for raw electron microscopic images:
IPdirectory = '/home/s353960/rickdata/F1 Praktikum/04_2validation_images/sem_additional/'
# directory for processed electron microscopic images:
Ydirectory = '/home/s353960/rickdata/F1 Praktikum/04_2validation_images/sem_training2/'
#os.mkdir(Ydirectory) # comment out, if directory exists already
```
In this step the EM images are processed: the histogram is equalized, and the image is converted to greyscale and downsized to 256x256 pixels. If an RGB image is supplied, one colour channel is extracted (channel 2, blue, by default). If the chromatin signal is located in another channel, you have to specify that channel in the code cell below. After executing this cell, the directory with the raw EM images and the directory with the processed images must contain the same number of images.
<div align="center">
<br/>
This is an example for a processed image with histogram equalization:
<img src="https://raw.githubusercontent.com/Rickmic/Deep_CLEM/assets/sem_processed.png"/>
</div>
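The core operation of this step, histogram equalization, can also be sketched in plain NumPy — a hypothetical stand-in for ImageJ's `Enhance Contrast... equalize` command used in the next cell, not the exact ImageJ algorithm:

```python
import numpy as np

def equalize_hist(img):
    """Spread an 8-bit greyscale image's values over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)   # per-value pixel counts
    cdf = hist.cumsum()                              # cumulative distribution
    lut = np.floor(255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]                                  # remap every pixel

# a constant mid-grey image is pushed to the top of the range
flat = np.full((4, 4), 128, dtype=np.uint8)
assert equalize_hist(flat).max() == 255
```

`cv2.equalizeHist` performs the same operation (with a slightly different normalisation) and is a drop-in alternative when OpenCV is already imported, as it is in this notebook.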
```
# convert RGB image or a z stack into a greyscale image:
def StackRGB(shape, IM):
channel2 = shape[2]
if channel2 <=4: # image is a RGB image
channelofI = IM[:, :, 2] # change channel number if channel for correlation of
# electron microscopic image differs 0 = red; 1 = green; 2 = blue
else: # image is a stack
channelofI = IM[0, :, :]
# return greyscale image:
return channelofI
Image.MAX_IMAGE_PIXELS = None # avoid error message for extremely large images
print('input--------------------------------------------------------')
for filename in os.listdir(IPdirectory): # loop over all files in the IPdirectory
if filename.endswith((".png", ".tif", ".tiff")): # check if file is an image
print(filename)
IPpath = os.path.join(IPdirectory, filename)
# equalize histogram with a imageJ macro:
WindowManager = autoclass('ij.WindowManager')
macro = """
#@ String name
open(name);
run("Enhance Contrast...", "saturated=0.3 equalize");
"""
args = {
'name': IPpath
}
ij.py.run_macro(macro, args)
IM = WindowManager.getCurrentImage()
# ImagePlus object to numpy array
IM = ij.py.from_java(IM)
IM = IM.astype(np.uint8)
dimension = IM.ndim
shape = IM.shape
if dimension >= 3: # RGB image or stack
channelofI = StackRGB(shape, IM)
else: # greyscale image
channelofI = IM
#channelofI = channelofI[:960, :] # uncomment if input image should be cropped [y, x]
channelofI = cv2.resize(channelofI, dsize=(256, 256), interpolation=cv2.INTER_LINEAR)
IP = np.stack((channelofI, channelofI), axis = 0) # generate Z-stack for training process
name, ext = os.path.splitext(filename) # split filename in name and extension (ext)
writepath = os.path.join(Ydirectory, name + '.tif')
imwrite(writepath, IP) # save image as .tif
# print feedback:
print(filename + ': processed')
text = 'shape: %s to %s' % (shape, IP.shape)
print(text)
print(' ')
```
## 3. Generate training dataset
This part of the script is based on [a jupyter notebook](https://nbviewer.jupyter.org/url/csbdeep.bioimagecomputing.com/examples/denoising3D/1_datagen.ipynb).
First you need to specify two directories. One directory should contain all preprocessed EM images; it was normally created during the preprocessing step and contains only images with the dimensions 2x256x256. The other directory should contain all preprocessed FM images, which should have the dimensions 256x256. In this step it is very important that corresponding **EM and FM images** have **exactly the same name**. <br/> Furthermore, you have to specify the path and filename under which the processed training data should be saved. The filename should end with .npz.
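The same-name requirement can be verified up front — a small sketch (the commented-out call uses the directory variables defined in the next cell):

```python
import os

def check_pairs(em_dir, fm_dir, exts=(".png", ".tif", ".tiff")):
    """Return image filenames present in only one of the two directories."""
    em = {f for f in os.listdir(em_dir) if f.endswith(exts)}
    fm = {f for f in os.listdir(fm_dir) if f.endswith(exts)}
    return sorted(em ^ fm)          # symmetric difference = unpaired names

# unmatched = check_pairs(Ydirectory, Xdirectory)
# assert not unmatched, 'unpaired images: %s' % unmatched
```

An empty return value confirms that every EM image has an FM partner with the same filename.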
```
# Path to a folder with preprocessed electron microscopic images:
Ydirectory = '/home/s353960/rickdata/F1 Praktikum/04_2validation_images/sem_training2/'
# Path to a folder with preprocessed fluorescence microscopic images:
Xdirectory = '/home/s353960/rickdata/F1 Praktikum/04_2validation_images/fluo_training2/'
# Path for the training data:
train_npz = '/home/s353960/rickdata/F1 Praktikum/04_2validation_images/my_training_data2.npz'
#create basepath, source_dirs and target_dir
commonpath = os.path.commonpath([Ydirectory, Xdirectory])
def getdirectory(directory):
if os.path.basename(directory) == '':
path = os.path.dirname(directory)
folder = os.path.basename(path)
else:
folder = os.path.basename(directory)
return(folder)
directory = Ydirectory
Yfolder = getdirectory(directory)
directory = Xdirectory
Xfolder = getdirectory(directory)
raw_data = RawData.from_folder (
basepath = commonpath,
source_dirs = [Yfolder],
target_dir = Xfolder,
axes = 'ZYX',
)
X, Y, XY_axes = create_patches_reduced_target (
raw_data = raw_data,
patch_size = (None,128,128),
n_patches_per_image = 16,
target_axes = 'YX',
reduction_axes = 'Z',
save_file = train_npz,
)
```
Preprocessing is now complete. If you wish, the next cells display the dimensions of the training data and some of the generated training data pairs.
```
print("shape of X =", X.shape)
print("shape of Y =", Y.shape)
print("axes of X,Y =", XY_axes)
for i in range(2):
plt.figure(figsize=(16,4))
sl = slice(8*i, 8*(i+1)), 0
plot_some(X[sl],Y[sl],title_list=[np.arange(sl[0].start,sl[0].stop)])
plt.show()
None;
```
## Loading Libraries
```
library('ggplot2')
library('gridExtra')
library(reshape2)
library(RColorBrewer)
library(grid)
library(ggsignif)
```
## Set working directory and output directories
```
projectdir="../"
setwd(projectdir)
paperfigdir="figures"
supfigdir="figures/supfigures"
```
## Plotting Functions
```
add_corner_label <- function(p, letter){
newp <- arrangeGrob(p, top=textGrob(letter, x=unit(0, "npc"), y=unit(1, "npc"), just=c("left", "top")))
return(newp)
}
make_into_stars<-function(x){
s=""
if (x <= 0.05){s="*"}
if (x <= 0.01){s="**"}
if (x <= 0.001){s="***"}
if (x <= 0.0001){s="****"}
return(s)
}
get_cutoffs_from_perc <- function(y, svcf){
num= round(length(y[y>0])*svcf)
cf = sort(y, decreasing=TRUE)[num]
return(cf)
}
make_tissue_type_boxdata2 <- function(pdat, pcf, types, cntdat, nulli){
nullcnt = cntdat[,nulli]
dfnull = data.frame(dat=pdat[nullcnt>0], label="SV detected")
dfbox = dfnull
wctests = c(1)
cutoffs=c(1)
for (mylabel in types) {
rcnts <- cntdat[,which(names(cntdat)==mylabel)]
cf = get_cutoffs_from_perc(rcnts[nullcnt > 0], pcf)
cutoffs=c(cutoffs, cf)
mydf = data.frame(dat = pdat[rcnts >= cf], label=mylabel)
mydfn = data.frame(dat = pdat[rcnts < cf & nullcnt>0], label=paste0(mylabel, "_null"))
dfbox = data.frame(rbind(dfbox, mydf))
wctests = c(wctests, wilcox.test(mydf$dat, mydfn$dat, alternative="greater")$p.value)
}
return(list(dfbox, wctests, cutoffs))
}
make_tissue_type_violinplots <- function(dfbox, myks, mycfs, mycols=c("grey", brewer.pal(8,"Paired")), xl=10){
mylabs=rle(as.character(dfbox$label))$values
newlabs=c()
for (i in seq_along(mylabs)){
pn <- prettyNum(sum(dfbox$label== mylabs[i]), big.mark=",", scientific=FALSE)
newl = paste0(mylabs[i], ", ", mycfs[i], "\n(", pn, ")")
#newl = bquote(.(paste0(mylabs[i], "\n")) ( .(pn) >= .(mycfs[i]) ))
newlabs=c(newlabs, newl)
}
pbox <- ggplot(dfbox, aes(x=label, y=dat)) + geom_violin() +
geom_boxplot(colour=mycols, fill=mycols, alpha=0.5, outlier.colour=NULL, width=0.2) +
labs(x="", y="Predicted DSB") + ylim(0,xl) +
scale_x_discrete(labels=newlabs) +
theme(axis.text.x = element_text(angle = 90, hjust=0.5, vjust=0.5, size=8), axis.title=element_text(size=10)) +
annotate("text", x=seq(1,(length(myks))), y=xl, label=unlist(lapply(myks, make_into_stars)))
return(pbox)
}
```
# Figure 5
```
make_plot_panel <- function(mypcf, types, cntdat, nulli, myc){
moddatn <- read.table("data/randforest_results/NHEK_BREAK/predicted.txt", header=TRUE)
moddatk <- read.table("data/randforest_results/K562_BLISS/predicted.txt", header=TRUE)
moddatm <- read.table("data/randforest_results/MCF7_BLISS/predicted.txt", header=TRUE)
bd <- make_tissue_type_boxdata2(pdat=moddatn$predicted, pcf=mypcf, types=mytypes, cntdat=dat, nulli=mynull)
pboxn <- make_tissue_type_violinplots(bd[[1]], bd[[2]], bd[[3]], mycols=myc, xl=7) + ggtitle("NHEK")
bd <- make_tissue_type_boxdata2(pdat=moddatk$predicted, pcf=mypcf, types=mytypes, cntdat=dat, nulli=mynull)
pboxk <- make_tissue_type_violinplots(bd[[1]], bd[[2]], bd[[3]], mycols=myc, xl=0.002) + ggtitle("K562")
bd <- make_tissue_type_boxdata2(pdat=moddatm$predicted, pcf=mypcf, types=mytypes, cntdat=dat, nulli=mynull)
pboxm <- make_tissue_type_violinplots(bd[[1]], bd[[2]], bd[[3]], mycols=myc, xl=0.03) + ggtitle("MCF7")
return(list(pboxn, pboxk, pboxm))
}
```
## Panels a-c, SV counts by tissue type
```
pcdat <- read.table("data/cancer_SVcnts/icgc/panc_realcnts.txt", header=TRUE)
brdat <- read.table("data/cancer_SVcnts/icgc/brca_realcnts.txt", header=TRUE)
bldat <- read.table("data/cancer_SVcnts/icgc/blood_realcnts.txt", header=TRUE)
cadat <- read.table("data/cancer_SVcnts/icgc/carc_realcnts.txt", header=TRUE)
dat <- data.frame(pancancer= pcdat$any, breast=brdat$any,
blood= bldat$any, carcinoma=cadat$any-brdat$any)
mynull=1
mytypes=c("pancancer", "carcinoma", "blood", "breast")
mycolors = c("grey", brewer.pal(10, "Paired")[c(2,4,6,8)])
gt <- make_plot_panel(mypcf=0.05, types=mytypes, cntdat=dat, nulli=mynull, myc=mycolors)
```
## Panels d-f, SV counts by SV type
```
dat <- read.table("data/cancer_SVcnts/icgc/panc_realcnts.txt", header=TRUE)
mynull=5
mytypes=c("any", "deletion", "duplication", "inversion", "interchr", "intrachr", "insertion", "translocation")
mycolors = c("grey", "grey33", brewer.pal(7, "Set1"))
gb <- make_plot_panel(mypcf=0.05, types=mytypes, cntdat=dat, nulli=mynull, myc=mycolors)
```
## putting all plots together
```
mygrobs=list(gt[[1]], gt[[2]], gt[[3]])
length(mygrobs)
for (i in seq_along(mygrobs)){
mygrobs[[i]] <- add_corner_label(mygrobs[[i]], letters[i])}
gt <- arrangeGrob(grobs=mygrobs, ncol=3)
mygrobs=list(gb[[1]], gb[[2]], gb[[3]])
length(mygrobs)
for (i in seq_along(mygrobs)){
mygrobs[[i]] <- add_corner_label(mygrobs[[i]], letters[i+3])}
gb <- arrangeGrob(grobs=mygrobs, ncol=3)
gf <- arrangeGrob(gt, gb, nrow=2, top="ICGC enriched SV breakpoint regions (ESBs), top 5%" )
options(repr.plot.width=9, repr.plot.height=6)
grid.arrange(gf)
ggsave(file="Figure5_violin_ICGC.png", plot=gf, path=paperfigdir, width=9, height=6, units="in", dpi=300)
```
# Figure S6
TCGA data by tissue type
```
dat <- read.table("data/cancer_SVcnts/tcga/tcga_bytype_realcnts.txt", header=TRUE)
names(dat) <- sub("TCGA.", "", names(dat))
names(dat) <- sub(".all", "", names(dat))
mynull=15
mytypes=c("PANC", "SCCA", "BLOOD", "BRCA")
mycolors = c("grey", brewer.pal(8, "Paired")[c(2,4,6,8)])
g2 <- make_plot_panel(mypcf=0.05, types=mytypes, cntdat=dat, nulli=mynull, myc=mycolors)
## putting all plots together
mygrobs=list(g2[[1]], g2[[2]], g2[[3]])
length(mygrobs)
for (i in seq_along(mygrobs)){
mygrobs[[i]] <- add_corner_label(mygrobs[[i]], letters[i])}
gf <- arrangeGrob(grobs=mygrobs, ncol=3, top="TCGA enriched SV regions (ESBs), top 5%")
ggsave(file="S6_violin_TCGA_ESBs.png", plot=gf, path=supfigdir, width=8, height=3, units="in", dpi=300)
options(repr.plot.width=8, repr.plot.height=3)
grid.arrange(gf)
```
Note: at the p = 0.01 level, none are significant.
# QRNG via HTTP
```
import json
import requests
```
Quantum random number generation is a unique application of quantum computers. Most applications need a range of custom circuits to be built and run throughout the course of an algorithm. QRNG, however, requires just one pre-defined circuit to be run whenever new random values are needed. This is therefore an application that only needs light usage of Qiskit. In fact, we can simply use Qiskit to define the initial job, and then run it via HTTP requests.
This notebook shows how to do this in Python. However, it can also be done in other languages using the job specifications provided in this notebook.
*Note: Results are provided according to the [IBM Q Experience EULA](https://github.com/Qiskit/qiskit-tutorials/blob/master/INSTALL.md). You are not advised to use them for cryptographic purposes.*
### Setting up the job
To set up the job, we need Qiskit. It is used to define a job that creates 8192 random 5-bit strings on a real 5-qubit device.
```
from qiskit import QuantumRegister, ClassicalRegister
from qiskit import QuantumCircuit, IBMQ, execute, compile
IBMQ.load_accounts()
q = QuantumRegister(5)
c = ClassicalRegister(5)
qc = QuantumCircuit(q, c)
qc.h(q)
qc.measure(q,c)
device = 'ibmqx4'
backend = IBMQ.get_backend(device)
qobj = compile(qc,backend,shots=8192,memory=True)
qobj_dict = qobj.as_dict()
```
This creates a full specification of the job we want to run, known as a qobj. Here it is expressed as a Python dict.
```
print(qobj_dict)
```
We can also express it as a json.
```
print( json.dumps(qobj_dict) )
```
Either way, the qobj is used to create the data that we will send via HTTP to run the job on a quantum computer.
```
data_dict = {'qObject': qobj_dict,'backend': {'name':device}}
data = json.dumps(data_dict)
print(data)
```
Now that we have this from Qiskit, we can copy and paste it wherever we need it. We can use it to submit this job over the cloud without using Qiskit, and even from languages other than Python.
### Submitting the job
To submit the job, you need to sign up for the IBM Q Experience and get an API token (see instructions [here](https://github.com/Qiskit/qiskit-tutorials/blob/master/INSTALL.md)). Note that the EULA requires you not to use the results you get for commercial purposes.
```
token = 'insert your API token here'
```
Now we can log in via HTTP using these credentials. In Python we can do this with the `requests` package. In other languages, you can use equivalent tools for sending POST requests via HTTP.
```
url = 'https://quantumexperience.ng.bluemix.net/api'
response = requests.post(str(url + "/users/loginWithToken"),data={'apiToken': token})
resp_id = response.json()['id']
```
Using the ID given to us when logging in, we can set ourselves up to send the job via another POST request. The URL we use depends on the ID we were given.
```
job_url = url+'/Jobs?access_token='+resp_id
```
Now we send the request, using the `data` json we created earlier.
```
job = requests.post(job_url, data=data,headers={'Content-Type': 'application/json'})
```
Once submitted, we can get the job ID in order to look up the result.
```
job_id = job.json()['id']
```
You'll find the results at the following URL.
```
results_url = url+'/Jobs/'+job_id+'?access_token='+resp_id
```
Once the job has successfully completed, you'll be able to get the results from the above URL. They can be downloaded as a json, or extracted programmatically using your favourite language. Here's how to do it in Python. The results come out as a list of 8192 hex strings, one per shot.
```
result = requests.get(results_url).json()
random_hex = result['qObjectResult']['results'][0]['data']['memory']
```
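The `memory` entries are hex strings such as `'0x1f'`. Converting them to fixed-width bit strings needs no extra libraries — a small sketch, assuming the 5-bit results of this notebook:

```python
def hex_to_bitstrings(memory, n_bits=5):
    """Turn hex shot results such as '0x1f' into zero-padded bit strings."""
    return [format(int(h, 16), '0{}b'.format(n_bits)) for h in memory]

print(hex_to_bitstrings(['0x0', '0x5', '0x1f']))
# ['00000', '00101', '11111']
```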
```
import sys
import pickle
import numpy as np
import matplotlib.pyplot as plt
sys.path.append("../..")
import gradient_analyze as ga
import hp_file
g = np.array([-0.3376, 0.1305, 0.2561, -0.3416, 0. ])
f = -g
sigma0 = 0.37001625825528
def lam(N):
return 1 / (1 + (len(g) * sigma0 ** 2 / (2 * N * np.sum(g ** 2))))
filename = './results.pickle'
with open(filename, "rb") as file:
results = pickle.load(file)
# squared-error metric (named sq_err so it does not shadow the gradient array f defined above)
sq_err = lambda x, y: np.sum((np.array(x, dtype="float64") - np.array(y, dtype="float64")) ** 2)
ga.calculate_new_quantity(['g_fd', 'g_ex'], 'g_fd_err', sq_err, results, hp_file)
ga.calculate_new_quantity(['g_ps', 'g_ex'], 'g_ps_err', sq_err, results, hp_file)
results_processed = ga.avg_quantities(['g_fd_err', 'g_ps_err'], results, hp_file)
results_processed_accessed = ga.access_quantities(['g_fd_err', 'g_ps_err'], results, hp_file)
with open('results_processed.pickle', "wb") as file:
pickle.dump(results_processed, file)
with open('results_processed_accessed.pickle', "wb") as file:
pickle.dump(results_processed_accessed, file)
# results_processed_accessed_ang is not defined in this notebook, so this dump is disabled:
# with open('results_processed_accessed_ang.pickle', "wb") as file:
#     pickle.dump(results_processed_accessed_ang, file)
with open('results_processed.pickle', "rb") as file:
results_processed = pickle.load(file)
with open('results_processed_accessed.pickle', "rb") as file:
results_processed_accessed = pickle.load(file)
cols = plt.rcParams['axes.prop_cycle'].by_key()['color']
coeffs = np.load("../grad-sim-fitting/coeffs.npy")
hlist = [0.05, 0.06, 0.08, 0.1 , 0.12, 0.14, 0.16, 0.18, 0.2 , 0.22, 0.24,
0.26, 0.28, 0.31, 0.33, 0.35, 0.37, 0.39, 0.41, 0.43, 0.44, 0.45,
0.47, 0.49, 0.51, 0.53, 0.55, 0.57, 0.59, 0.61, 0.63, 0.65, 0.67,
0.69, 0.71, 0.73, 0.75, 0.77, 0.8 , 0.82, 0.83, 0.84, 0.86, 0.88,
0.9 , 0.92, 0.94, 0.96, 0.98, 1. , 1.02, 1.04, 1.06, 1.08, 1.1 ,
1.12, 1.14, 1.16, 1.18, 1.2 , 1.22, 1.22, 1.24, 1.27, 1.29, 1.31,
1.33, 1.35, 1.37, 1.39, 1.41, 1.43, 1.45, 1.47, 1.49, 1.51, 1.53,
1.55, 1.57, 1.59, 1.61, 1.63, 1.65, 1.67, 1.69, 1.71, 1.73,
1.76, 1.78, 1.8 , 1.82, 1.84, 1.86, 1.88, 1.9 , 1.92, 1.94, 1.96,
1.98, 2. ]
coeffs = {hlist[i]: c for i, c in enumerate(coeffs)}
def delta_fd_h(h, N):
t1 = coeffs[h] / (4 * N * h ** 2 * len(f))
t2 = f ** 2 * h ** 4 / 36
return t1 + t2
def delta_fd(h, N):
return np.array([delta_fd_h(h_, N) for h_ in h])
def h_fd_opt(N):
numerator = 9 * len(f) * sigma0
denominator = N * np.sum(f ** 2)
return np.power(numerator / denominator, 1 / 6)
def delta_ps(h, N):
return [coeffs[h_] / (4 * N * np.sin(h_) ** 2 ) for h_ in h]
def delta_ps_opt(N):
return delta_ps([1.57], N)[0]
def delta(h, N):
f_big = np.expand_dims(f, axis=0)
h_big = np.expand_dims(h, axis=1)
t1 = sigma0 / (2 * N * h_big ** 2)
t2 = f_big ** 2 * h_big ** 4 / 36
return t1 + t2
sigma0
n_shots = [10, 14, 20, 29, 41, 59, 84, 100, 119, 170, 242, 346, 492, 702, 1000, 1425, 2031, 2894,
4125, 5878, 8192, 8377, 11938, 17013, 24245, 34551, 49239, 70170, 100000][14]
print(n_shots)
results_slice = ga.calculate_slice({"n_shots": n_shots}, results_processed)
results_slice_acc = ga.calculate_slice({"n_shots": n_shots}, results_processed_accessed)
x, y_fd = ga.make_numpy(results_slice, "h", "g_fd_err")
x, y_ps = ga.make_numpy(results_slice, "h", "g_ps_err")
stds_fd = []
stds_ps = []
for h in x:
errors = list(ga.calculate_slice({"h": h}, results_slice_acc).values())[0]
errors_fd = errors["g_fd_err"]
errors_ps = errors["g_ps_err"]
stds_fd.append(np.std(errors_fd))
stds_ps.append(np.std(errors_ps))
stds_fd = np.array(stds_fd)
stds_ps = np.array(stds_ps)
fd_pred = np.sum(delta_fd(x, n_shots), axis=1)
ps_pred = delta_ps(x, n_shots)
hopt = h_fd_opt(n_shots)
plt.fill_between(x, y_fd - stds_fd, y_fd + stds_fd, color=cols[0], alpha=0.1)
plt.fill_between(x, y_ps - stds_ps, y_ps + stds_ps, color=cols[1], alpha=0.1)
plt.plot(x, fd_pred, "--", c="black", alpha=0.4)
plt.plot(x, y_fd, label="finite-difference", c=cols[0])
plt.plot(x, y_ps, label="parameter-shift", c=cols[1])
plt.plot(x, ps_pred, "--", c="black", alpha=0.4)
plt.axvline(hopt, c="black", alpha=0.4, linestyle=":", ymax=0.5)
plt.axvline(np.pi / 2, c="black", alpha=0.4, linestyle=":", ymax=0.3)
plt.xlabel('step size', fontsize=20)
plt.ylabel('MSE', fontsize=20)
plt.xscale("log")
plt.tick_params(labelsize=15)
plt.legend(fontsize=12)
plt.yscale("log")
plt.title("(A)", loc="left", fontsize=15)
plt.tight_layout()
plt.savefig("fd-vs-ps-simulator.pdf")
# plt.ylim(10**-5.25, 10**(-0.95))
hopt
max_point = 8
y_fit_low = np.log(y_fd[:max_point])
x_fit_low = np.log(x[:max_point])
p = np.polyfit(x_fit_low, y_fit_low, 1)
print(p[0])
y_fit_low = p[0] * np.log(x) + p[1]
y_fit_low = np.exp(y_fit_low)
min_point = 80
max_point = 99
y_fit_high = np.log(y_fd[min_point:max_point])
x_fit_high = np.log(x[min_point:max_point])
pp = np.polyfit(x_fit_high, y_fit_high, 1)
print(pp[0])
y_fit_high = pp[0] * np.log(x) + pp[1]
y_fit_high = np.exp(y_fit_high)
min_point = 40
max_point = 50
y_fit_high_ = np.log(y_fd[min_point:max_point])
x_fit_high_ = np.log(x[min_point:max_point])
ppp = np.polyfit(x_fit_high_, y_fit_high_, 1)
print(ppp[0])
y_fit_high_ = ppp[0] * np.log(x) + ppp[1]
y_fit_high_ = np.exp(y_fit_high_)
h_opt_fd_list = []
h_opt_ps_list = []
n_shots_list = [10, 14, 20, 29, 41, 59, 84, 100, 119, 170, 242, 346, 492, 702, 1000, 1425, 2031, 2894,
4125, 5878, 8192, 8377, 11938, 17013, 24245, 34551, 49239, 70170, 100000]
mins_fd = []
mins_ps = []
for n_shots in n_shots_list:
results_slice = ga.calculate_slice({"n_shots": n_shots}, results_processed)
results_slice_acc = ga.calculate_slice({"n_shots": n_shots}, results_processed_accessed)
x, y_fd = ga.make_numpy(results_slice, "h", "g_fd_err")
x, y_ps = ga.make_numpy(results_slice, "h", "g_ps_err")
y_fd_min = np.min(y_fd)
y_ps_min = np.min(y_ps)
xs_fd = x[np.argwhere(np.abs(y_fd - y_fd_min) < y_fd_min * 0.2).flatten()]
xs_ps = x[np.argwhere(np.abs(y_ps - y_ps_min) < y_ps_min * 0.2).flatten()]
mins_fd.append(y_fd_min)
mins_ps.append(y_ps_min)
h_opt_fd_list.append(xs_fd)
h_opt_ps_list.append(xs_ps)
fd_mins = [np.min(x) for x in h_opt_fd_list]
fd_maxs = [np.max(x) for x in h_opt_fd_list]
fd_means = [np.mean(x) for x in h_opt_fd_list]
ps_mins = [np.min(x) for x in h_opt_ps_list]
ps_maxs = [np.max(x) for x in h_opt_ps_list]
ps_means = [np.mean(x) for x in h_opt_ps_list]
plt.fill_between(n_shots_list, fd_mins, fd_maxs, alpha=0.2)
plt.fill_between(n_shots_list, ps_mins, ps_maxs, alpha=0.2)
plt.plot(n_shots_list, fd_means, label="finite-difference")
plt.plot(n_shots_list, ps_means, label="parameter-shift")
plt.plot(n_shots_list, [h_fd_opt(n) for n in n_shots_list], "--", c="black", alpha=0.4)
plt.hlines(np.pi / 2, 10, 10**5, color="black", alpha=0.4, linestyles="dashed")
plt.yscale("log")
plt.xscale("log")
plt.tick_params(labelsize=15)
plt.legend(fontsize=12)
plt.tight_layout()
plt.xlabel('$N$', fontsize=20)
plt.ylabel('optimal step size', fontsize=20)
plt.tight_layout()
plt.savefig("fd-vs-ps-simulator-opt-h.pdf")
opt_hs = [h_fd_opt(N) for N in n_shots_list]
err_expected = [np.sum(delta([opt_hs[i]], N)) for i, N in enumerate(n_shots_list)]
err_ps_expected = [delta_ps_opt(N) for N in n_shots_list]
crossover = np.argwhere(np.array(mins_fd) < np.array(mins_ps))[-1][0]
cross = (n_shots_list[crossover] + n_shots_list[crossover + 1]) / 2
cross
plt.plot(n_shots_list, mins_fd, label="finite-difference")
plt.plot(n_shots_list, mins_ps, label="parameter-shift")
plt.plot(n_shots_list, err_expected, "--", c="black", alpha=0.4)
plt.plot(n_shots_list, err_ps_expected, "--", c="black", alpha=0.4)
plt.axvline(cross, c="black", alpha=0.4, linestyle=":", ymax=0.9)
# plt.plot(n_shots_list, [h_fd_opt(n) for n in n_shots_list], "--", c="black", alpha=0.4)
plt.yscale("log")
plt.xscale("log")
plt.tick_params(labelsize=15)
plt.legend(fontsize=12)
plt.tight_layout()
plt.xlabel('$N$', fontsize=20)
plt.ylabel('MSE', fontsize=20)
plt.tight_layout()
plt.savefig("fd-vs-ps-simulator-N.pdf")
```
[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)
# The Extended Kalman Filter
```
#format the book
%matplotlib inline
from __future__ import division, print_function
from book_format import load_style
load_style()
```
We have developed the theory for the linear Kalman filter. Then, in the last two chapters we broached the topic of using Kalman filters for nonlinear problems. In this chapter we will learn the Extended Kalman filter (EKF). The EKF handles nonlinearity by linearizing the system at the point of the current estimate, and then the linear Kalman filter is used to filter this linearized system. It was one of the very first techniques used for nonlinear problems, and it remains the most common technique.
The EKF provides significant mathematical challenges to the designer of the filter; this is the most challenging chapter of the book. I do everything I can to avoid the EKF in favor of other techniques that have been developed to filter nonlinear problems. However, the topic is unavoidable; all classic papers and a majority of current papers in the field use the EKF. Even if you do not use the EKF in your own work you will need to be familiar with the topic to be able to read the literature.
## Linearizing the Kalman Filter
The Kalman filter uses linear equations, so it does not work with nonlinear problems. Problems can be nonlinear in two ways. First, the process model might be nonlinear. An object falling through the atmosphere encounters drag which reduces its acceleration. The drag coefficient varies based on the velocity of the object. The resulting behavior is nonlinear - it cannot be modeled with linear equations. Second, the measurements could be nonlinear. For example, a radar gives a range and bearing to a target. We use trigonometry, which is nonlinear, to compute the position of the target.
For the linear filter we have these equations for the process and measurement models:
$$\begin{aligned}\dot{\mathbf x} &= \mathbf{Ax} + w_x\\
\mathbf z &= \mathbf{Hx} + w_z
\end{aligned}$$
Where $\mathbf A$ is the system dynamics matrix. Using the state space methods covered in the **Kalman Filter Math** chapter these equations can be transformed into
$$\begin{aligned}\bar{\mathbf x} &= \mathbf{Fx} \\
\mathbf z &= \mathbf{Hx}
\end{aligned}$$
where $\mathbf F$ is the *fundamental matrix*. The noise terms $w_x$ and $w_z$ are incorporated into the matrices $\mathbf Q$ and $\mathbf R$. This form of the equations allows us to compute the state at step $k$ given a measurement at step $k$ and the state estimate at step $k-1$. In earlier chapters I built your intuition and minimized the math by using problems describable with Newton's equations. We know how to design $\mathbf F$ based on high school physics.
For the nonlinear model the linear expression $\mathbf{Fx} + \mathbf{Bu}$ is replaced by a nonlinear function $f(\mathbf x, \mathbf u)$, and the linear expression $\mathbf{Hx}$ is replaced by a nonlinear function $h(\mathbf x)$:
$$\begin{aligned}\dot{\mathbf x} &= f(\mathbf x, \mathbf u) + w_x\\
\mathbf z &= h(\mathbf x) + w_z
\end{aligned}$$
You might imagine that we could proceed by finding a new set of Kalman filter equations that optimally solve these equations. But if you remember the charts in the **Nonlinear Filtering** chapter you'll recall that passing a Gaussian through a nonlinear function results in a probability distribution that is no longer Gaussian. So this will not work.
The EKF does not alter the Kalman filter's linear equations. Instead, it *linearizes* the nonlinear equations at the point of the current estimate, and uses this linearization in the linear Kalman filter.
*Linearize* means what it sounds like. We find a line that most closely matches the curve at a defined point. The graph below linearizes the parabola $f(x)=x^2−2x$ at $x=1.5$.
```
import kf_book.ekf_internal as ekf_internal
ekf_internal.show_linearization()
```
If the curve above is the process model, then the dotted lines shows the linearization of that curve for the estimate $x=1.5$.
We linearize systems by taking the derivative, which finds the slope of a curve:
$$\begin{aligned}
f(x) &= x^2 -2x \\
\frac{df}{dx} &= 2x - 2
\end{aligned}$$
and then evaluating it at $x$:
$$\begin{aligned}m &= f'(x=1.5) \\&= 2(1.5) - 2 \\&= 1\end{aligned}$$
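The slope computation above is easy to check numerically — a quick sketch:

```python
f  = lambda x: x**2 - 2*x         # the parabola being linearized
df = lambda x: 2*x - 2            # its derivative

x0 = 1.5
m = df(x0)                        # slope of the tangent at x0
tangent = lambda x: f(x0) + m * (x - x0)

assert m == 1.0
# near x0 the tangent matches the curve to first order
assert abs(f(x0 + 1e-4) - tangent(x0 + 1e-4)) < 1e-7
```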
Linearizing systems of differential equations is similar. We linearize $f(\mathbf x, \mathbf u)$, and $h(\mathbf x)$ by taking the partial derivatives of each to evaluate $\mathbf F$ and $\mathbf H$ at the point $\mathbf x_t$ and $\mathbf u_t$. We call the matrix of partial derivatives the [*Jacobian*](https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant). This gives us the discrete state transition matrix and measurement model matrix:
$$
\begin{aligned}
\mathbf F
&= {\frac{\partial{f(\mathbf x_t, \mathbf u_t)}}{\partial{\mathbf x}}}\biggr|_{{\mathbf x_t},{\mathbf u_t}} \\
\mathbf H &= \frac{\partial{h(\bar{\mathbf x}_t)}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t}
\end{aligned}
$$
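When deriving $\mathbf H$ by hand it is worth checking the algebra against a finite-difference approximation. A sketch, using the slant-range measurement $h(\mathbf x)=\sqrt{x^2+y^2}$ from the radar example later in this chapter, with the state ordered $[x,\ \dot x,\ y]$:

```python
import numpy as np

def numerical_jacobian(h, x, eps=1e-6):
    """Central-difference approximation of dh/dx, one column per state."""
    x = np.asarray(x, dtype=float)
    m = np.atleast_1d(h(x)).size
    J = np.zeros((m, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(h(x + dx)) - np.atleast_1d(h(x - dx))) / (2 * eps)
    return J

h = lambda s: np.sqrt(s[0]**2 + s[2]**2)   # slant range, state [x, xdot, y]
J = numerical_jacobian(h, np.array([3., 1., 4.]))
# analytic Jacobian of sqrt(x^2 + y^2) is [x/r, 0, y/r] with r = 5
assert np.allclose(J, [[0.6, 0., 0.8]], atol=1e-6)
```

Comparing a hand-derived Jacobian against this kind of check catches most sign and index errors before they reach the filter.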
This leads to the following equations for the EKF. I put boxes around the differences from the linear filter:
$$\begin{array}{l|l}
\text{linear Kalman filter} & \text{EKF} \\
\hline
& \boxed{\mathbf F = {\frac{\partial{f(\mathbf x_t, \mathbf u_t)}}{\partial{\mathbf x}}}\biggr|_{{\mathbf x_t},{\mathbf u_t}}} \\
\mathbf{\bar x} = \mathbf{Fx} + \mathbf{Bu} & \boxed{\mathbf{\bar x} = f(\mathbf x, \mathbf u)} \\
\mathbf{\bar P} = \mathbf{FPF}^\mathsf{T}+\mathbf Q & \mathbf{\bar P} = \mathbf{FPF}^\mathsf{T}+\mathbf Q \\
\hline
& \boxed{\mathbf H = \frac{\partial{h(\bar{\mathbf x}_t)}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t}} \\
\textbf{y} = \mathbf z - \mathbf{H \bar{x}} & \textbf{y} = \mathbf z - \boxed{h(\bar{x})}\\
\mathbf{K} = \mathbf{\bar{P}H}^\mathsf{T} (\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf R)^{-1} & \mathbf{K} = \mathbf{\bar{P}H}^\mathsf{T} (\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf R)^{-1} \\
\mathbf x=\mathbf{\bar{x}} +\mathbf{K\textbf{y}} & \mathbf x=\mathbf{\bar{x}} +\mathbf{K\textbf{y}} \\
\mathbf P= (\mathbf{I}-\mathbf{KH})\mathbf{\bar{P}} & \mathbf P= (\mathbf{I}-\mathbf{KH})\mathbf{\bar{P}}
\end{array}$$
We don't normally use $\mathbf{Fx}$ to propagate the state for the EKF as the linearization causes inaccuracies. It is typical to compute $\bar{\mathbf x}$ using a suitable numerical integration technique such as Euler or Runge-Kutta. Thus I wrote $\mathbf{\bar x} = f(\mathbf x, \mathbf u)$. For the same reasons we don't use $\mathbf{H\bar{x}}$ in the computation for the residual, opting for the more accurate $h(\bar{\mathbf x})$.
I think the easiest way to understand the EKF is to start off with an example. Later you may want to come back and reread this section.
## Example: Tracking an Airplane
This example tracks an airplane using ground based radar. We implemented a UKF for this problem in the last chapter. Now we will implement an EKF for the same problem so we can compare both the filter performance and the level of effort required to implement the filter.
Radars work by emitting a beam of radio waves and scanning for a return bounce. Anything in the beam's path reflects some of the signal back to the radar. By timing how long it takes for the reflected signal to get back to the radar, the system can compute the *slant distance* - the straight line distance from the radar installation to the object.
The relationship between the radar's slant range distance $r$ and elevation angle $\epsilon$ with the horizontal position $x$ and altitude $y$ of the aircraft is illustrated in the figure below:
```
ekf_internal.show_radar_chart()
```
This gives us the equalities:
$$\begin{aligned}
\epsilon &= \tan^{-1} \frac y x\\
r^2 &= x^2 + y^2
\end{aligned}$$
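These relations are easy to exercise numerically — a quick sketch converting between the aircraft's Cartesian position and the radar's (slant range, elevation) measurement:

```python
import math

def to_radar(x, y):
    """Position/altitude -> (slant range r, elevation angle epsilon)."""
    return math.hypot(x, y), math.atan2(y, x)

def from_radar(r, eps):
    """(slant range, elevation angle) -> position/altitude."""
    return r * math.cos(eps), r * math.sin(eps)

r, eps = to_radar(3.0, 4.0)
assert r == 5.0                                   # 3-4-5 triangle
x, y = from_radar(r, eps)
assert abs(x - 3.0) < 1e-12 and abs(y - 4.0) < 1e-12
```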
### Design the State Variables
We want to track the position of an aircraft assuming a constant velocity and altitude, and measurements of the slant distance to the aircraft. That means we need 3 state variables - horizontal distance, horizontal velocity, and altitude:
$$\mathbf x = \begin{bmatrix}\mathtt{distance} \\\mathtt{velocity}\\ \mathtt{altitude}\end{bmatrix}= \begin{bmatrix}x \\ \dot x\\ y\end{bmatrix}$$
### Design the Process Model
We assume a Newtonian, kinematic system for the aircraft. We've used this model in previous chapters, so by inspection you may recognize that we want
$$\mathbf F = \left[\begin{array}{cc|c} 1 & \Delta t & 0\\
0 & 1 & 0 \\ \hline
0 & 0 & 1\end{array}\right]$$
I've partitioned the matrix into blocks to show that the upper left block is a constant velocity model for $x$, and the lower right block is a constant position model for $y$.
However, let's practice finding these matrices. We model systems with a set of differential equations. We need an equation in the form
$$\dot{\mathbf x} = \mathbf{Ax} + \mathbf{w}$$
where $\mathbf{w}$ is the system noise.
The variables $x$ and $y$ are independent so we can compute them separately. The differential equations for motion in one dimension are:
$$\begin{aligned}v &= \dot x \\
a &= \ddot{x} = 0\end{aligned}$$
Now we put the differential equations into state-space form. If this were a second order or higher differential system we would first have to reduce it to an equivalent set of first order equations. Our equations are already first order, so we put them in state space matrix form as
$$\begin{aligned}\begin{bmatrix}\dot x \\ \ddot{x}\end{bmatrix} &= \begin{bmatrix}0&1\\0&0\end{bmatrix} \begin{bmatrix}x \\
\dot x\end{bmatrix} \\ \dot{\mathbf x} &= \mathbf{Ax}\end{aligned}$$
where $\mathbf A=\begin{bmatrix}0&1\\0&0\end{bmatrix}$.
Recall that $\mathbf A$ is the *system dynamics matrix*. It describes a set of linear differential equations. From it we must compute the state transition matrix $\mathbf F$. $\mathbf F$ describes a discrete set of linear equations which compute $\mathbf x$ for a discrete time step $\Delta t$.
A common way to compute $\mathbf F$ is to use the power series expansion of the matrix exponential:
$$\mathbf F(\Delta t) = e^{\mathbf A\Delta t} = \mathbf{I} + \mathbf A\Delta t + \frac{(\mathbf A\Delta t)^2}{2!} + \frac{(\mathbf A \Delta t)^3}{3!} + ... $$
$\mathbf A^2 = \begin{bmatrix}0&0\\0&0\end{bmatrix}$, so all higher powers of $\mathbf A$ are also $\mathbf{0}$. Thus the power series expansion is:
$$
\begin{aligned}
\mathbf F &=\mathbf{I} + \mathbf A\Delta t + \mathbf{0} \\
&= \begin{bmatrix}1&0\\0&1\end{bmatrix} + \begin{bmatrix}0&1\\0&0\end{bmatrix}\Delta t\\
\mathbf F &= \begin{bmatrix}1&\Delta t\\0&1\end{bmatrix}
\end{aligned}$$
This is the same result used by the kinematic equations! This exercise was unnecessary other than to illustrate finding the state transition matrix from linear differential equations. We will conclude the chapter with an example that will require the use of this technique.
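We can verify the truncation numerically; since $\mathbf A$ is nilpotent the series ends after the linear term (a standalone check, not part of the filter code):

```python
import numpy as np

dt = 0.05
A = np.array([[0., 1.],
              [0., 0.]])

# A @ A is the zero matrix, so every term past A*dt vanishes
F = np.eye(2) + A*dt + (A @ A) * dt**2 / 2

assert np.allclose(F, [[1., dt],
                       [0., 1.]])
```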
### Design the Measurement Model
The measurement function takes the state estimate of the prior $\bar{\mathbf x}$ and turns it into a measurement of the slant range distance. We use the Pythagorean theorem to derive:
$$h(\bar{\mathbf x}) = \sqrt{x^2 + y^2}$$
The relationship between the slant distance and the position on the ground is nonlinear due to the square root. We linearize it by evaluating its partial derivative at $\bar{\mathbf x}_t$:
$$
\mathbf H = \frac{\partial{h(\bar{\mathbf x})}}{\partial{\bar{\mathbf x}}}\biggr|_{\bar{\mathbf x}_t}
$$
The matrix of partial derivatives of a vector-valued function is called a Jacobian, and it takes the form
$$\frac{\partial \mathbf H}{\partial \bar{\mathbf x}} =
\begin{bmatrix}
\frac{\partial h_1}{\partial x_1} & \frac{\partial h_1}{\partial x_2} &\dots \\
\frac{\partial h_2}{\partial x_1} & \frac{\partial h_2}{\partial x_2} &\dots \\
\vdots & \vdots
\end{bmatrix}
$$
In other words, each element in the matrix is the partial derivative of the function $h$ with respect to the $x$ variables. For our problem we have
$$\mathbf H = \begin{bmatrix}{\partial h}/{\partial x} & {\partial h}/{\partial \dot{x}} & {\partial h}/{\partial y}\end{bmatrix}$$
Solving each in turn:
$$\begin{aligned}
\frac{\partial h}{\partial x} &= \frac{\partial}{\partial x} \sqrt{x^2 + y^2} \\
&= \frac{x}{\sqrt{x^2 + y^2}}
\end{aligned}$$
and
$$\begin{aligned}
\frac{\partial h}{\partial \dot{x}} &=
\frac{\partial}{\partial \dot{x}} \sqrt{x^2 + y^2} \\
&= 0
\end{aligned}$$
and
$$\begin{aligned}
\frac{\partial h}{\partial y} &= \frac{\partial}{\partial y} \sqrt{x^2 + y^2} \\
&= \frac{y}{\sqrt{x^2 + y^2}}
\end{aligned}$$
giving us
$$\mathbf H =
\begin{bmatrix}
\frac{x}{\sqrt{x^2 + y^2}} &
0 &
\frac{y}{\sqrt{x^2 + y^2}}
\end{bmatrix}$$
This may seem daunting, so step back and recognize that all of this math is doing something very simple. We have an equation for the slant range to the airplane which is nonlinear. The Kalman filter only works with linear equations, so we need to find a linear equation that approximates $\mathbf H$. As we discussed above, finding the slope of a nonlinear equation at a given point is a good approximation. For the Kalman filter, the 'given point' is the state variable $\mathbf x$ so we need to take the derivative of the slant range with respect to $\mathbf x$. For the linear Kalman filter $\mathbf H$ was a constant that we computed prior to running the filter. For the EKF $\mathbf H$ is recomputed at every epoch, since the evaluation point $\bar{\mathbf x}$ changes with each one.
To make this more concrete, let's now write a Python function that computes the Jacobian of $h$ for this problem.
```
from math import sqrt
from numpy import array

def HJacobian_at(x):
    """ compute Jacobian of H matrix at x """
    horiz_dist = x[0]
    altitude = x[2]
    denom = sqrt(horiz_dist**2 + altitude**2)
    return array([[horiz_dist/denom, 0., altitude/denom]])
```
Finally, let's provide the code for $h(\bar{\mathbf x})$:
```
def hx(x):
""" compute measurement for slant range that
would correspond to state x.
"""
return (x[0]**2 + x[2]**2) ** 0.5
```
Now let's write a simulation for our radar.
```
from numpy.random import randn
import math
class RadarSim(object):
""" Simulates the radar signal returns from an object
        flying at a constant altitude and velocity in 1D.
"""
def __init__(self, dt, pos, vel, alt):
self.pos = pos
self.vel = vel
self.alt = alt
self.dt = dt
def get_range(self):
""" Returns slant range to the object. Call once
for each new measurement at dt time from last call.
"""
# add some process noise to the system
self.vel = self.vel + .1*randn()
self.alt = self.alt + .1*randn()
self.pos = self.pos + self.vel*self.dt
# add measurement noise
err = self.pos * 0.05*randn()
slant_dist = math.sqrt(self.pos**2 + self.alt**2)
return slant_dist + err
```
### Design Process and Measurement Noise
The radar measures the range to a target. We will use $\sigma_{range}= 5$ meters for the noise. This gives us
$$\mathbf R = \begin{bmatrix}\sigma_{range}^2\end{bmatrix} = \begin{bmatrix}25\end{bmatrix}$$
The design of $\mathbf Q$ requires some discussion. The state is $\mathbf x= \begin{bmatrix}x & \dot x & y\end{bmatrix}^\mathtt{T}$. The first two elements are position (down range distance) and velocity, so we can use `Q_discrete_white_noise` to compute the values for the upper left block of $\mathbf Q$. The third element of $\mathbf x$ is altitude, which we are assuming is independent of the down range distance. That leads us to a block design for $\mathbf Q$:
$$\mathbf Q = \begin{bmatrix}\mathbf Q_\mathtt{x} & 0 \\ 0 & \mathbf Q_\mathtt{y}\end{bmatrix}$$
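One way to assemble that block structure with plain NumPy (the variance values here are illustrative; the full implementation below uses FilterPy's `Q_discrete_white_noise` rather than this hand-written block):

```python
import numpy as np

dt, var = 0.05, 0.1

# discrete white noise block for [x, xdot] - these are the same
# values Q_discrete_white_noise(2, dt=dt, var=var) produces
Q_x = var * np.array([[dt**4/4, dt**3/2],
                      [dt**3/2, dt**2]])

Q_y = 0.1  # altitude process noise (an assumed value)

# block-diagonal assembly of the full 3x3 Q
Q = np.zeros((3, 3))
Q[:2, :2] = Q_x
Q[2, 2] = Q_y
```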
### Implementation
`FilterPy` provides the class `ExtendedKalmanFilter`. It works similarly to the `KalmanFilter` class we have been using, except that it allows you to provide a function that computes the Jacobian of $\mathbf H$ and the function $h(\mathbf x)$.
We start by importing the filter and creating it. The dimension of `x` is 3 and `z` has dimension 1.
```python
from filterpy.kalman import ExtendedKalmanFilter
rk = ExtendedKalmanFilter(dim_x=3, dim_z=1)
```
We create the radar simulator:
```python
radar = RadarSim(dt, pos=0., vel=100., alt=1000.)
```
We will initialize the filter near the airplane's actual position:
```python
rk.x = array([radar.pos, radar.vel-10, radar.alt+100])
```
We assign the system matrix using the first term of the Taylor series expansion we computed above:
```python
dt = 0.05
rk.F = eye(3) + array([[0, 1, 0],
[0, 0, 0],
[0, 0, 0]])*dt
```
After assigning reasonable values to $\mathbf R$, $\mathbf Q$, and $\mathbf P$ we can run the filter with a simple loop. We pass the functions for computing the Jacobian of $\mathbf H$ and $h(x)$ into the `update` method.
```python
for i in range(int(20/dt)):
z = radar.get_range()
rk.update(array([z]), HJacobian_at, hx)
rk.predict()
```
Adding some boilerplate code to save and plot the results we get:
```
from filterpy.common import Q_discrete_white_noise
from filterpy.kalman import ExtendedKalmanFilter
from numpy import eye, array, asarray
import numpy as np
dt = 0.05
rk = ExtendedKalmanFilter(dim_x=3, dim_z=1)
radar = RadarSim(dt, pos=0., vel=100., alt=1000.)
# make an imperfect starting guess
rk.x = array([radar.pos-100, radar.vel+100, radar.alt+1000])
rk.F = eye(3) + array([[0, 1, 0],
[0, 0, 0],
[0, 0, 0]]) * dt
range_std = 5. # meters
rk.R = np.diag([range_std**2])
rk.Q[0:2, 0:2] = Q_discrete_white_noise(2, dt=dt, var=0.1)
rk.Q[2,2] = 0.1
rk.P *= 50
xs, track = [], []
for i in range(int(20/dt)):
z = radar.get_range()
track.append((radar.pos, radar.vel, radar.alt))
rk.update(array([z]), HJacobian_at, hx)
xs.append(rk.x)
rk.predict()
xs = asarray(xs)
track = asarray(track)
time = np.arange(0, len(xs)*dt, dt)
ekf_internal.plot_radar(xs, track, time)
```
## Using SymPy to compute Jacobians
Depending on your experience with derivatives you may have found the computation of the Jacobian difficult. Even if you found it easy, a slightly harder problem can quickly lead to very tedious computations.
As explained in Appendix A, we can use the SymPy package to compute the Jacobian for us.
```
import sympy
sympy.init_printing(use_latex=True)
x, x_vel, y = sympy.symbols('x, x_vel, y')
H = sympy.Matrix([sympy.sqrt(x**2 + y**2)])
state = sympy.Matrix([x, x_vel, y])
H.jacobian(state)
```
This result is the same as the result we computed above, and with much less effort on our part!
## Robot Localization
It's time to try a real problem. I warn you that this section is difficult. However, most books choose simple, textbook problems with simple answers, and you are left wondering how to solve a real world problem.
We will consider the problem of robot localization. We already implemented this in the **Unscented Kalman Filter** chapter, and I recommend you read it now if you haven't already. In this scenario we have a robot that is moving through a landscape using a sensor to detect landmarks. This could be a self driving car using computer vision to identify trees, buildings, and other landmarks. It might be one of those small robots that vacuum your house, or a robot in a warehouse.
The robot has 4 wheels in the same configuration used by automobiles. It maneuvers by pivoting the front wheels. This causes the robot to pivot around the rear axle while moving forward. This is nonlinear behavior which we will have to model.
The robot has a sensor that measures the range and bearing to known targets in the landscape. This is nonlinear because computing a position from a range and bearing requires square roots and trigonometry.
Both the process model and measurement models are nonlinear. The EKF accommodates both, so we provisionally conclude that the EKF is a viable choice for this problem.
### Robot Motion Model
To a first approximation an automobile steers by pivoting the front tires while moving forward. The front of the car moves in the direction that the wheels are pointing while pivoting around the rear tires. This simple description is complicated by issues such as slippage due to friction, the differing behavior of the rubber tires at different speeds, and the need for the outside tire to travel a different radius than the inner tire. Accurately modeling steering requires a complicated set of differential equations.
For lower speed robotic applications a simpler *bicycle model* has been found to perform well. This is a depiction of the model:
```
ekf_internal.plot_bicycle()
```
In the **Unscented Kalman Filter** chapter we derived these equations:
$$\begin{aligned}
\beta &= \frac d w \tan(\alpha) \\
x &= x - R\sin(\theta) + R\sin(\theta + \beta) \\
y &= y + R\cos(\theta) - R\cos(\theta + \beta) \\
\theta &= \theta + \beta
\end{aligned}
$$
where $\theta$ is the robot's heading.
You do not need to understand this model in detail if you are not interested in steering models. The important thing to recognize is that our motion model is nonlinear, and we will need to deal with that with our Kalman filter.
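To make the model concrete, here is a minimal standalone step function (the names are mine; the `RobotEKF` class later in this section also handles the straight-line case where $\alpha \approx 0$):

```python
import numpy as np

def bicycle_step(x, y, theta, v, alpha, w, dt):
    """Advance the bicycle model one time step. w is the wheelbase,
    alpha the steering angle; alpha must be nonzero here."""
    d = v * dt                        # distance traveled
    beta = (d / w) * np.tan(alpha)    # change in heading
    R = w / np.tan(alpha)             # turn radius
    x += -R*np.sin(theta) + R*np.sin(theta + beta)
    y += R*np.cos(theta) - R*np.cos(theta + beta)
    return x, y, theta + beta

# one second at 1 m/s, 0.1 rad steering angle, 0.5 m wheelbase
x1, y1, th1 = bicycle_step(0., 0., 0., v=1., alpha=0.1, w=0.5, dt=1.)
```

For a small steering angle the robot moves roughly one meter forward while curving slightly and turning by $\beta$.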
### Design the State Variables
For our filter we will maintain the position $x,y$ and orientation $\theta$ of the robot:
$$\mathbf x = \begin{bmatrix}x \\ y \\ \theta\end{bmatrix}$$
Our control input $\mathbf u$ is the velocity $v$ and steering angle $\alpha$:
$$\mathbf u = \begin{bmatrix}v \\ \alpha\end{bmatrix}$$
### Design the System Model
We model our system as a nonlinear motion model plus noise.
$$\bar x = f(x, u) + \mathcal{N}(0, Q)$$
Using the motion model for a robot that we created above, we can expand this to
$$\bar{\begin{bmatrix}x\\y\\\theta\end{bmatrix}} = \begin{bmatrix}x\\y\\\theta\end{bmatrix} +
\begin{bmatrix}- R\sin(\theta) + R\sin(\theta + \beta) \\
R\cos(\theta) - R\cos(\theta + \beta) \\
\beta\end{bmatrix}$$
We find $\mathbf F$ by taking the Jacobian of $f(x,u)$.
$$\mathbf F = \frac{\partial f(x, u)}{\partial x} =\begin{bmatrix}
\frac{\partial f_1}{\partial x} &
\frac{\partial f_1}{\partial y} &
\frac{\partial f_1}{\partial \theta}\\
\frac{\partial f_2}{\partial x} &
\frac{\partial f_2}{\partial y} &
\frac{\partial f_2}{\partial \theta} \\
\frac{\partial f_3}{\partial x} &
\frac{\partial f_3}{\partial y} &
\frac{\partial f_3}{\partial \theta}
\end{bmatrix}
$$
When we calculate these we get
$$\mathbf F = \begin{bmatrix}
1 & 0 & -R\cos(\theta) + R\cos(\theta+\beta) \\
0 & 1 & -R\sin(\theta) + R\sin(\theta+\beta) \\
0 & 0 & 1
\end{bmatrix}$$
We can double check our work with SymPy.
```
import sympy
from sympy.abc import alpha, x, y, v, w, R, theta
from sympy import symbols, Matrix
sympy.init_printing(use_latex="mathjax", fontsize='16pt')
time = symbols('t')
d = v*time
beta = (d/w)*sympy.tan(alpha)
r = w/sympy.tan(alpha)
fxu = Matrix([[x-r*sympy.sin(theta) + r*sympy.sin(theta+beta)],
[y+r*sympy.cos(theta)- r*sympy.cos(theta+beta)],
[theta+beta]])
F = fxu.jacobian(Matrix([x, y, theta]))
F
```
That looks a bit complicated. We can use SymPy to substitute terms:
```
# reduce common expressions
B, R = symbols('beta, R')
F = F.subs((d/w)*sympy.tan(alpha), B)
F.subs(w/sympy.tan(alpha), R)
```
This form verifies that the computation of the Jacobian is correct.
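As an extra sanity check we can compare the analytic Jacobian against central differences at an arbitrary operating point (a standalone sketch; the function names and the test point are my choices):

```python
import numpy as np

def f(state, v, alpha, w, dt):
    """Bicycle-model transition for state [x, y, theta]; mirrors fxu
    above. w is the wheelbase, alpha the (nonzero) steering angle."""
    x, y, theta = state
    beta = (v * dt / w) * np.tan(alpha)
    R = w / np.tan(alpha)
    return np.array([x - R*np.sin(theta) + R*np.sin(theta + beta),
                     y + R*np.cos(theta) - R*np.cos(theta + beta),
                     theta + beta])

def analytic_F(state, v, alpha, w, dt):
    """The Jacobian we derived above, evaluated at `state`."""
    theta = state[2]
    beta = (v * dt / w) * np.tan(alpha)
    R = w / np.tan(alpha)
    return np.array([[1., 0., -R*np.cos(theta) + R*np.cos(theta + beta)],
                     [0., 1., -R*np.sin(theta) + R*np.sin(theta + beta)],
                     [0., 0., 1.]])

# central differences at an arbitrary operating point
s = np.array([2., 6., 0.3])
v, alpha, w, dt = 1.1, 0.2, 0.5, 1.0
eps = 1e-6
num_F = np.zeros((3, 3))
for j in range(3):
    dp = np.zeros(3)
    dp[j] = eps
    num_F[:, j] = (f(s + dp, v, alpha, w, dt) -
                   f(s - dp, v, alpha, w, dt)) / (2 * eps)

assert np.allclose(num_F, analytic_F(s, v, alpha, w, dt), atol=1e-6)
```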
Now we can turn our attention to the noise. Here, the noise is in our control input, so it is in *control space*. In other words, we command a specific velocity and steering angle, but we need to convert that into errors in $x, y, \theta$. In a real system this might vary depending on velocity, so it will need to be recomputed for every prediction. I will choose this as the noise model; for a real robot you will need to choose a model that accurately depicts the error in your system.
$$\mathbf{M} = \begin{bmatrix}\sigma_{vel}^2 & 0 \\ 0 & \sigma_\alpha^2\end{bmatrix}$$
If this was a linear problem we would convert from control space to state space using the by now familiar $\mathbf{FMF}^\mathsf T$ form. Since our motion model is nonlinear we do not try to find a closed form solution to this, but instead linearize it with a Jacobian which we will name $\mathbf{V}$.
$$\mathbf{V} = \frac{\partial f(x, u)}{\partial u} = \begin{bmatrix}
\frac{\partial f_1}{\partial v} & \frac{\partial f_1}{\partial \alpha} \\
\frac{\partial f_2}{\partial v} & \frac{\partial f_2}{\partial \alpha} \\
\frac{\partial f_3}{\partial v} & \frac{\partial f_3}{\partial \alpha}
\end{bmatrix}$$
These partial derivatives become very difficult to work with. Let's compute them with SymPy.
```
V = fxu.jacobian(Matrix([v, alpha]))
V = V.subs(sympy.tan(alpha)/w, 1/R)
V = V.subs(time*v/R, B)
V = V.subs(time*v, 'd')
V
```
This should give you an appreciation of how quickly the EKF becomes mathematically intractable.
This gives us the final form of our prediction equations:
$$\begin{aligned}
\mathbf{\bar x} &= \mathbf x +
\begin{bmatrix}- R\sin(\theta) + R\sin(\theta + \beta) \\
R\cos(\theta) - R\cos(\theta + \beta) \\
\beta\end{bmatrix}\\
\mathbf{\bar P} &=\mathbf{FPF}^{\mathsf T} + \mathbf{VMV}^{\mathsf T}
\end{aligned}$$
This form of linearization is not the only way to predict $\mathbf x$. For example, we could use a numerical integration technique such as *Runge Kutta* to compute the movement
of the robot. This will be required if the time step is relatively large. Things are not as cut and dried with the EKF as for the Kalman filter. For a real problem you have to carefully model your system with differential equations and then determine the most appropriate way to solve that system. The correct approach depends on the accuracy you require, how nonlinear the equations are, your processor budget, and numerical stability concerns.
### Design the Measurement Model
The robot's sensor provides a noisy bearing and range measurement to multiple known locations in the landscape. The measurement model must convert the state $\begin{bmatrix}x & y&\theta\end{bmatrix}^\mathsf T$ into a range and bearing to the landmark. If $\mathbf p$
is the position of a landmark, the range $r$ is
$$r = \sqrt{(p_x - x)^2 + (p_y - y)^2}$$
The sensor provides bearing relative to the orientation of the robot, so we must subtract the robot's orientation from the bearing to get the sensor reading, like so:
$$\phi = \arctan(\frac{p_y - y}{p_x - x}) - \theta$$
Thus our measurement model $h$ is
$$\begin{aligned}
\mathbf z& = h(\bar{\mathbf x}, \mathbf p) &+ \mathcal{N}(0, R)\\
&= \begin{bmatrix}
\sqrt{(p_x - x)^2 + (p_y - y)^2} \\
\arctan(\frac{p_y - y}{p_x - x}) - \theta
\end{bmatrix} &+ \mathcal{N}(0, R)
\end{aligned}$$
This is clearly nonlinear, so we need to linearize $h$ at $\mathbf x$ by taking its Jacobian. We compute that with SymPy below.
```
px, py = symbols('p_x, p_y')
z = Matrix([[sympy.sqrt((px-x)**2 + (py-y)**2)],
[sympy.atan2(py-y, px-x) - theta]])
z.jacobian(Matrix([x, y, theta]))
```
Now we need to write that as a Python function. For example we might write:
```
from math import sqrt
def H_of(x, landmark_pos):
""" compute Jacobian of H matrix where h(x) computes
the range and bearing to a landmark for state x """
px = landmark_pos[0]
py = landmark_pos[1]
hyp = (px - x[0, 0])**2 + (py - x[1, 0])**2
dist = sqrt(hyp)
H = array(
[[-(px - x[0, 0]) / dist, -(py - x[1, 0]) / dist, 0],
[ (py - x[1, 0]) / hyp, -(px - x[0, 0]) / hyp, -1]])
return H
```
We also need to define a function that converts the system state into a measurement.
```
from math import atan2
def Hx(x, landmark_pos):
""" takes a state variable and returns the measurement
that would correspond to that state.
"""
px = landmark_pos[0]
py = landmark_pos[1]
dist = sqrt((px - x[0, 0])**2 + (py - x[1, 0])**2)
Hx = array([[dist],
[atan2(py - x[1, 0], px - x[0, 0]) - x[2, 0]]])
return Hx
```
### Design Measurement Noise
It is reasonable to assume that the noise of the range and bearing measurements are independent, hence
$$\mathbf R=\begin{bmatrix}\sigma_{range}^2 & 0 \\ 0 & \sigma_{bearing}^2\end{bmatrix}$$
### Implementation
We will use `FilterPy`'s `ExtendedKalmanFilter` class to implement the filter. Its `predict()` method uses the standard linear equations for the process model. Ours is nonlinear, so we will have to override `predict()` with our own implementation. I also want to use this class to simulate the robot, so I'll add a method `move()` that computes the position of the robot; both `predict()` and my simulation can call it.
The matrices for the prediction step are quite large. While writing this code I made several errors before I finally got it working. I only found my errors by using SymPy's `evalf` function. `evalf` evaluates a SymPy `Matrix` with specific values for the variables. I decided to demonstrate this technique to you, and used `evalf` in the Kalman filter code. You'll need to understand a couple of points.
First, `evalf` uses a dictionary to specify the values. For example, if your matrix contains an `x` and `y`, you can write
```python
M.evalf(subs={x:3, y:17})
```
to evaluate the matrix for `x=3` and `y=17`.
Second, `evalf` returns a `sympy.Matrix` object. Use `numpy.array(M).astype(float)` to convert it to a NumPy array. `numpy.array(M)` creates an array of type `object`, which is not what you want.
Here is the code for the EKF:
```
from filterpy.kalman import ExtendedKalmanFilter as EKF
from numpy import dot, array, sqrt
class RobotEKF(EKF):
def __init__(self, dt, wheelbase, std_vel, std_steer):
EKF.__init__(self, 3, 2, 2)
self.dt = dt
self.wheelbase = wheelbase
self.std_vel = std_vel
self.std_steer = std_steer
a, x, y, v, w, theta, time = symbols(
'a, x, y, v, w, theta, t')
d = v*time
beta = (d/w)*sympy.tan(a)
r = w/sympy.tan(a)
self.fxu = Matrix(
[[x-r*sympy.sin(theta)+r*sympy.sin(theta+beta)],
[y+r*sympy.cos(theta)-r*sympy.cos(theta+beta)],
[theta+beta]])
self.F_j = self.fxu.jacobian(Matrix([x, y, theta]))
self.V_j = self.fxu.jacobian(Matrix([v, a]))
        # save dictionary and its variables for later use
self.subs = {x: 0, y: 0, v:0, a:0,
time:dt, w:wheelbase, theta:0}
self.x_x, self.x_y, = x, y
self.v, self.a, self.theta = v, a, theta
def predict(self, u=0):
self.x = self.move(self.x, u, self.dt)
self.subs[self.theta] = self.x[2, 0]
self.subs[self.v] = u[0]
self.subs[self.a] = u[1]
F = array(self.F_j.evalf(subs=self.subs)).astype(float)
V = array(self.V_j.evalf(subs=self.subs)).astype(float)
# covariance of motion noise in control space
M = array([[self.std_vel*u[0]**2, 0],
[0, self.std_steer**2]])
self.P = dot(F, self.P).dot(F.T) + dot(V, M).dot(V.T)
def move(self, x, u, dt):
hdg = x[2, 0]
vel = u[0]
steering_angle = u[1]
dist = vel * dt
if abs(steering_angle) > 0.001: # is robot turning?
beta = (dist / self.wheelbase) * tan(steering_angle)
r = self.wheelbase / tan(steering_angle) # radius
dx = np.array([[-r*sin(hdg) + r*sin(hdg + beta)],
[r*cos(hdg) - r*cos(hdg + beta)],
[beta]])
else: # moving in straight line
dx = np.array([[dist*cos(hdg)],
[dist*sin(hdg)],
[0]])
return x + dx
```
Now we have another issue to handle. The residual is notionally computed as $y = z - h(x)$, but this will not work because our measurement contains an angle. Suppose $z$ has a bearing of $1^\circ$ and $h(x)$ has a bearing of $359^\circ$. Naively subtracting them would yield an angular difference of $-358^\circ$, whereas the correct value is $2^\circ$. We have to write code to correctly compute the bearing residual.
```
def residual(a, b):
""" compute residual (a-b) between measurements containing
[range, bearing]. Bearing is normalized to [-pi, pi)"""
y = a - b
y[1] = y[1] % (2 * np.pi) # force in range [0, 2 pi)
if y[1] > np.pi: # move to [-pi, pi)
y[1] -= 2 * np.pi
return y
```
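A quick standalone check of the wraparound logic, using the $1^\circ$ versus $359^\circ$ example from above (this just replicates the body of `residual()`):

```python
import numpy as np

a = np.array([10., np.radians(1.)])    # measurement: range, bearing
b = np.array([10., np.radians(359.)])  # prediction: range, bearing

y = a - b                    # naive residual: bearing is -358 degrees
y[1] = y[1] % (2 * np.pi)    # force into [0, 2 pi)
if y[1] > np.pi:             # move into [-pi, pi)
    y[1] -= 2 * np.pi

# the normalized bearing residual is +2 degrees
assert abs(np.degrees(y[1]) - 2.0) < 1e-9
```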
The rest of the code runs the simulation and plots the results, and shouldn't need much comment by now. I create a variable `landmarks` that contains the landmark coordinates. I update the simulated robot position 10 times a second, but run the EKF only once per second. This is for two reasons. First, we are not using Runge-Kutta to integrate the differential equations of motion, so a narrow time step allows our simulation to be more accurate. Second, it is fairly normal in embedded systems to have limited processing speed. This forces you to run your Kalman filter only as frequently as absolutely needed.
```
from filterpy.stats import plot_covariance_ellipse
from math import sqrt, tan, cos, sin, atan2
import matplotlib.pyplot as plt
dt = 1.0
def z_landmark(lmark, sim_pos, std_rng, std_brg):
x, y = sim_pos[0, 0], sim_pos[1, 0]
d = np.sqrt((lmark[0] - x)**2 + (lmark[1] - y)**2)
a = atan2(lmark[1] - y, lmark[0] - x) - sim_pos[2, 0]
z = np.array([[d + randn()*std_rng],
[a + randn()*std_brg]])
return z
def ekf_update(ekf, z, landmark):
ekf.update(z, HJacobian=H_of, Hx=Hx,
residual=residual,
               args=(landmark,), hx_args=(landmark,))
def run_localization(landmarks, std_vel, std_steer,
std_range, std_bearing,
step=10, ellipse_step=20, ylim=None):
ekf = RobotEKF(dt, wheelbase=0.5, std_vel=std_vel,
std_steer=std_steer)
ekf.x = array([[2, 6, .3]]).T # x, y, steer angle
ekf.P = np.diag([.1, .1, .1])
ekf.R = np.diag([std_range**2, std_bearing**2])
sim_pos = ekf.x.copy() # simulated position
# steering command (vel, steering angle radians)
u = array([1.1, .01])
plt.figure()
plt.scatter(landmarks[:, 0], landmarks[:, 1],
marker='s', s=60)
track = []
for i in range(200):
sim_pos = ekf.move(sim_pos, u, dt/10.) # simulate robot
track.append(sim_pos)
if i % step == 0:
ekf.predict(u=u)
if i % ellipse_step == 0:
plot_covariance_ellipse(
(ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2],
std=6, facecolor='k', alpha=0.3)
x, y = sim_pos[0, 0], sim_pos[1, 0]
for lmark in landmarks:
z = z_landmark(lmark, sim_pos,
std_range, std_bearing)
ekf_update(ekf, z, lmark)
if i % ellipse_step == 0:
plot_covariance_ellipse(
(ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2],
std=6, facecolor='g', alpha=0.8)
track = np.array(track)
plt.plot(track[:, 0], track[:,1], color='k', lw=2)
plt.axis('equal')
plt.title("EKF Robot localization")
if ylim is not None: plt.ylim(*ylim)
plt.show()
return ekf
landmarks = array([[5, 10], [10, 5], [15, 15]])
ekf = run_localization(
landmarks, std_vel=0.1, std_steer=np.radians(1),
std_range=0.3, std_bearing=0.1)
print('Final P:', ekf.P.diagonal())
```
I have plotted the landmarks as solid squares. The path of the robot is drawn with a black line. The covariance ellipses for the predict step are light gray, and the covariances of the update are shown in green. To make them visible at this scale I have set the ellipse boundary at 6$\sigma$.
We can see that there is a lot of uncertainty added by our motion model, and that most of the error is in the direction of motion. We can tell that from the elongated shape of the gray predict ellipses. After a few steps we can see that the filter incorporates the landmark measurements and the errors improve.
I used the same initial conditions and landmark locations in the UKF chapter. The UKF achieves much better accuracy in terms of the error ellipse, but both perform roughly the same as far as the estimate of $\mathbf x$ is concerned.
Now let's add another landmark.
```
landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5]])
ekf = run_localization(
landmarks, std_vel=0.1, std_steer=np.radians(1),
std_range=0.3, std_bearing=0.1)
plt.show()
print('Final P:', ekf.P.diagonal())
```
The uncertainty in the estimates near the end of the track is smaller. We can see the effect that multiple landmarks have on our uncertainty by only using the first two landmarks.
```
ekf = run_localization(
landmarks[0:2], std_vel=1.e-10, std_steer=1.e-10,
std_range=1.4, std_bearing=.05)
print('Final P:', ekf.P.diagonal())
```
The estimate quickly diverges from the robot's path after passing the landmarks. The covariance also grows quickly. Let's see what happens with only one landmark:
```
ekf = run_localization(
landmarks[0:1], std_vel=1.e-10, std_steer=1.e-10,
std_range=1.4, std_bearing=.05)
print('Final P:', ekf.P.diagonal())
```
As you probably suspected, one landmark produces a very bad result. Conversely, a large number of landmarks allows us to make very accurate estimates.
```
landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5], [15, 10],
[10,14], [23, 14], [25, 20], [10, 20]])
ekf = run_localization(
landmarks, std_vel=0.1, std_steer=np.radians(1),
std_range=0.3, std_bearing=0.1, ylim=(0, 21))
print('Final P:', ekf.P.diagonal())
```
### Discussion
I said that this was a real problem, and in some ways it is. I've seen alternative presentations that used robot motion models that led to simpler Jacobians. On the other hand, my model of the movement is also simplistic in several ways. First, it uses a bicycle model. A real car has two sets of tires, and each travels on a different radius. The wheels do not grip the surface perfectly. I also assumed that the robot responds instantaneously to the control input. Sebastian Thrun writes in *Probabilistic Robotics* that this simplified model is justified because the filters perform well when used to track real vehicles. The lesson here is that while you have to have a reasonably accurate nonlinear model, it does not need to be perfect to operate well. As a designer you will need to balance the fidelity of your model with the difficulty of the math and the CPU time required to perform the linear algebra.
Another way in which this problem was simplistic is that we assumed that we knew the correspondence between the landmarks and measurements. But suppose we are using radar - how would we know that a specific signal return corresponded to a specific building in the local scene? This question hints at SLAM algorithms - simultaneous localization and mapping. SLAM is not the point of this book, so I will not elaborate on this topic.
## UKF vs EKF
In the last chapter I used the UKF to solve this problem. The difference in implementation should be very clear. Computing the Jacobians for the state and measurement models was not trivial despite a rudimentary motion model. A different problem could result in a Jacobian which is difficult or impossible to derive analytically. In contrast, the UKF only requires you to provide a function that computes the system motion model and another for the measurement model.
There are many cases where the Jacobian cannot be found analytically. The details are beyond the scope of this book, but you will have to use numerical methods to compute the Jacobian. That undertaking is not trivial, and you will spend a significant portion of a master's degree at a STEM school learning techniques to handle such situations. Even then you'll likely only be able to solve problems related to your field - an aeronautical engineer learns a lot about Navier Stokes equations, but not much about modelling chemical reaction rates.
So, UKFs are easy. Are they accurate? In practice they often perform better than the EKF. You can find plenty of research papers that prove that the UKF outperforms the EKF in various problem domains. It's not hard to understand why this would be true. The EKF works by linearizing the system model and measurement model at a single point, and the UKF uses $2n+1$ points.
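To illustrate what "uses $2n+1$ points" means, here is a sketch of Van der Merwe's scaled sigma points for the scalar case ($n=1$, so $2n+1=3$ points). The parameter defaults and names are my choices, not anything from this chapter's code:

```python
import numpy as np

def scaled_sigma_points(mu, P, alpha=0.001, beta=2.0, kappa=1.0):
    """Van der Merwe scaled sigma points for a scalar Gaussian (n=1).
    Returns the 3 points plus the mean and covariance weights."""
    n = 1
    lam = alpha**2 * (n + kappa) - n
    s = np.sqrt((n + lam) * P)
    points = np.array([mu, mu + s, mu - s])
    Wm = np.full(3, 1. / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + 1 - alpha**2 + beta
    return points, Wm, Wc

pts, Wm, Wc = scaled_sigma_points(mu=0., P=1.)

# the weighted points reproduce the input mean and covariance exactly
mean = Wm @ pts
cov = Wc @ (pts - mean)**2
```

For a linear transform this weighted set recovers the exact mean and covariance; for a nonlinear one it samples the function at multiple points instead of linearizing at just one.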
Let's look at a specific example. Take $f(x) = x^3$ and pass a Gaussian distribution through it. I will compute an accurate answer using a Monte Carlo simulation: I generate 50,000 points randomly distributed according to the Gaussian, pass each through $f(x)$, then compute the mean and variance of the result.
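Here is a sketch of that Monte Carlo baseline with NumPy (my own seed, mean, and variance; the plots below use the book's helper functions):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 1.0, 0.5

samples = rng.normal(mu, sigma, 50_000)
mc_mean = (samples**3).mean()

# exact mean of x^3 for x ~ N(mu, sigma^2): mu^3 + 3*mu*sigma^2
exact_mean = mu**3 + 3*mu*sigma**2   # 1.75

# the EKF instead propagates the mean through the tangent line at mu,
# f(mu) + f'(mu)(x - mu), whose mean is just f(mu) = mu^3 = 1.0
ekf_mean = mu**3
```

The Monte Carlo mean lands close to the exact 1.75, while the linearized (EKF-style) mean of 1.0 misses the skew that the cubic introduces.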
The EKF linearizes the function by taking the derivative to find the slope at the evaluation point $x$. This slope becomes the linear function that we use to transform the Gaussian. Here is a plot of that.
```
import kf_book.nonlinear_plots as nonlinear_plots
nonlinear_plots.plot_ekf_vs_mc()
```
The EKF computation is rather inaccurate. In contrast, here is the performance of the UKF:
```
nonlinear_plots.plot_ukf_vs_mc(alpha=0.001, beta=3., kappa=1.)
```
Here we can see that the computation of the UKF's mean is accurate to 2 decimal places. The standard deviation is slightly off, but you can also fine tune how the UKF computes the distribution by using the $\alpha$, $\beta$, and $\kappa$ parameters for generating the sigma points. Here I used $\alpha=0.001$, $\beta=3$, and $\kappa=1$. Feel free to modify them to see the result. You should be able to get better results than I did. However, avoid over-tuning the UKF for a specific test. It may perform better for your test case, but worse in general.
# Scalable GP Classification (w/ KISS-GP)
This example shows how to use a `GridInterpolationVariationalStrategy` module. This classification module is designed for when the function you're modeling has 2-3 dimensional inputs and you don't believe that the output can be additively decomposed.
In this example, the function is a checkerboard of 1/3 x 1/3 squares with labels of -1 or 1.
Here we use KISS-GP (https://arxiv.org/pdf/1503.01057.pdf) to classify the points.
```
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
from math import exp
%matplotlib inline
%load_ext autoreload
%autoreload 2
# We make an nxn grid of training points
# In [0,1]x[0,1] spaced every 1/(n-1)
n = 30
train_x = torch.zeros(int(pow(n, 2)), 2)
train_y = torch.zeros(int(pow(n, 2)))
for i in range(n):
for j in range(n):
train_x[i * n + j][0] = float(i) / (n - 1)
train_x[i * n + j][1] = float(j) / (n - 1)
        # True function is a checkerboard of 1/3x1/3 squares with labels of
        # -1 or 1, shifted to 0 or 1 for the Bernoulli likelihood
        train_y[i * n + j] = (pow(-1, int(3 * i / n) + int(3 * j / n)) + 1) // 2
from gpytorch.models import AbstractVariationalGP
from gpytorch.variational import CholeskyVariationalDistribution
from gpytorch.variational import GridInterpolationVariationalStrategy
class GPClassificationModel(AbstractVariationalGP):
    def __init__(self, grid_size=10, grid_bounds=[(0, 1), (0, 1)]):
        variational_distribution = CholeskyVariationalDistribution(int(pow(grid_size, len(grid_bounds))))
        variational_strategy = GridInterpolationVariationalStrategy(self, grid_size, grid_bounds, variational_distribution)
        super(GPClassificationModel, self).__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(
                lengthscale_prior=gpytorch.priors.SmoothedBoxPrior(
                    exp(0), exp(3), sigma=0.1, transform=torch.exp
                )
            )
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        latent_pred = gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
        return latent_pred
model = GPClassificationModel()
likelihood = gpytorch.likelihoods.BernoulliLikelihood()
from gpytorch.mlls.variational_elbo import VariationalELBO
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
# "Loss" for GPs - the marginal log likelihood
# n_data refers to the amount of training data
mll = VariationalELBO(likelihood, model, train_y.numel())
def train():
    num_iter = 50
    for i in range(num_iter):
        optimizer.zero_grad()
        output = model(train_x)
        # Calc loss and backprop gradients
        loss = -mll(output, train_y)
        loss.backward()
        print('Iter %d/%d - Loss: %.3f' % (i + 1, num_iter, loss.item()))
        optimizer.step()
# Get clock time
%time train()
# Set model and likelihood into eval mode
model.eval()
likelihood.eval()
# Initialize figure and axis
f, ax = plt.subplots(1, 1, figsize=(4, 3))
n = 100
test_x = torch.zeros(int(pow(n, 2)), 2)
for i in range(n):
    for j in range(n):
        test_x[i * n + j][0] = float(i) / (n-1)
        test_x[i * n + j][1] = float(j) / (n-1)
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    predictions = likelihood(model(test_x))
# prob < 0.5 --> label 0 // prob > 0.5 --> label 1
pred_labels = predictions.mean.ge(0.5).float().numpy()
# Colors = yellow for 1, red for 0
color = []
for i in range(len(pred_labels)):
    if pred_labels[i] == 1:
        color.append('y')
    else:
        color.append('r')
# Plot data as a scatter plot
ax.scatter(test_x[:, 0], test_x[:, 1], color=color, s=1)
ax.set_ylim([-0.5, 1.5])
```
```
import numpy as np
import pandas as pd
from statistics import mean
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import (
cross_validate, train_test_split, GridSearchCV, learning_curve, validation_curve
)
from sklearn.metrics import classification_report, accuracy_score
```
# Import dataset
```
compressed_final = pd.read_csv("../DataFormating/compressed_final.csv")
final = pd.read_csv("../DataFormating/final.csv")
compressed_final.head()
```
# Setup `X`, `y` data for training / testing
```
compressed_final.columns[:10]
X = compressed_final.drop(["Home Team Goals", "Away Team Goals",
"Half-time Home Goals", "Half-time Away Goals",
"Home Team Initials", "Away Team Initials"], axis=1)
y = []
for i in range(len(compressed_final)):
    home_team_goals = compressed_final.iloc[i]["Home Team Goals"]
    away_team_goals = compressed_final.iloc[i]["Away Team Goals"]
    if home_team_goals > away_team_goals:
        y.append(1)
    elif home_team_goals < away_team_goals:
        y.append(2)
    else:
        y.append(0)
# Test
assert len(X) == len(y)
```
### Encode textual features from the `X` dataset
```
X["Stage"] = LabelEncoder().fit_transform(X["Stage"])
X["Home Team Name"] = LabelEncoder().fit_transform(X["Home Team Name"])
X["Away Team Name"] = LabelEncoder().fit_transform(X["Away Team Name"])
len(X.columns)
```
### Feature Selection
```
X.columns[4:]
# selection = SelectKBest(score_func=f_classif, k=10)
# selection.fit(X, y)
# X = selection.transform(X)
feature_names = [
"Stage", "Home Team Name", "Away Team Name",
"Attendance", "Overall",
"Mean Home Team Goals", "Mean Away Team Goals"
]
COLUMNS = []
for column_name in X.columns:
    for feature_name in feature_names:
        if feature_name in column_name:
            COLUMNS.append(column_name)
            break
X = X[COLUMNS]
```
### Split `X` and `y` into train / test sets
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
```
# Fast testing
```
def test_model(model, cv=10):
    # return_train_score=True is required for cv_scores["train_score"] in newer sklearn
    cv_scores = cross_validate(model, X, y, cv=cv, return_train_score=True)
    mean_train_acc = mean(cv_scores["train_score"])
    mean_test_acc = mean(cv_scores["test_score"])
    print()
    print("Train Accuracy: ", mean_train_acc)
    print("Test Accuracy: ", mean_test_acc)
    print()
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(classification_report(y_test, y_pred))
```
### K-Nearest Neighbors
```
test_model(KNeighborsClassifier(n_neighbors=3))
```
### Random Forests
```
model = RandomForestClassifier(n_estimators=1200, max_depth=10, bootstrap=True, n_jobs=-1)
test_model(model)
```
### Support Vector Machines
```
test_model(SVC(C=1.0, kernel="rbf", gamma="auto"))
```
### Extremely Randomized Trees
```
test_model(
ExtraTreesClassifier(n_estimators=1100, max_depth=10, bootstrap=True, n_jobs=-1)
)
```
### Gradient Boosting Machines
```
model = GradientBoostingClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
test_model(model)
```
### XGBoost (Best for now with 78.53%)
```
# best_model = XGBClassifier(n_estimators=4000, max_depth=20, learning_rate=0.03, n_jobs=-1)
model = XGBClassifier(n_estimators=4000, max_depth=20, learning_rate=0.03, n_jobs=-1)
test_model(model)
```
### AdaBoost with Decision Tree
```
tree = DecisionTreeClassifier()
ada = AdaBoostClassifier(tree, n_estimators=10**7, learning_rate=0.03)
test_model(ada)
```
### Neural Network
```
test_model(
MLPClassifier(
hidden_layer_sizes=(20, 40, 60, 100, 200, 300, 500, 500, 300, 200, 100, 60, 40, 20),
activation="logistic"
)
)
```
# Build up a Random Forest Classifier with Grid Search
```
model = RandomForestClassifier()
grid_search = GridSearchCV(
model,
param_grid={
"n_estimators": [100, 200, 300, 500, 700, 1000],
"max_depth": [1, 2, 3, 5, 10],
},
scoring="accuracy",
cv=3,
verbose=True
)
grid_search.fit(X, y)
grid_search.best_score_
grid_search.best_params_
```
# Version information
```
from datetime import date
print("Running date:", date.today().strftime("%B %d, %Y"))
import pyleecan
print("Pyleecan version:" + pyleecan.__version__)
import SciDataTool
print("SciDataTool version:" + SciDataTool.__version__)
```
# How to compute magnetic forces using Force Module
This tutorial shows the different steps to **compute magnetic forces** with pyleecan.
The notebook related to this tutorial is available on [GitHub](https://github.com/Eomys/pyleecan/tree/master/Tutorials/tuto_Force.ipynb).
To demonstrate the capabilities and the use of the SciDataTool objects, a simulation is launched with FEMM, with imposed currents, using periodicity and parallelization to reduce execution time.
```
from numpy import exp, sqrt, pi
from os.path import join
from pyleecan.Classes.Simu1 import Simu1
from pyleecan.Classes.InputCurrent import InputCurrent
from pyleecan.Classes.MagFEMM import MagFEMM
from pyleecan.Classes.ForceMT import ForceMT
from pyleecan.Classes.Output import Output
from pyleecan.Functions.load import load
from pyleecan.definitions import DATA_DIR
# Load the machine
Toyota_Prius = load(join(DATA_DIR, "Machine", "Toyota_Prius.json"))
# Simulation initialization
simu = Simu1(name="FEMM_periodicity", machine=Toyota_Prius)
# Definition of the enforced output of the electrical module
simu.input = InputCurrent(
Na_tot=252 * 8,
Nt_tot=50 * 8,
N0=1000,
)
# Set Id/Iq according to I0/Phi0
simu.input.set_Id_Iq(I0=250 / sqrt(2), Phi0=140*pi/180)
# Definition of the magnetic simulation: with periodicity
simu.mag = MagFEMM(is_periodicity_a=True, is_periodicity_t=True, nb_worker=4)
simu.force = ForceMT(is_periodicity_a=True, is_periodicity_t=True)
# Run simulations
out = simu.run()
```
## Force Module
The Force abstract class will make it possible to define different ways of calculating forces.
The ForceMT class inherits from Force class. ForceMT is dedicated to the computation of air-gap surface force based on the Maxwell stress tensor \[[source](https://eomys.com/IMG/pdf/comparison-main-magnetic.pdf)\].
Here, we get the results from a magnetic simulation without any force calculation. The Force module is initialized and run alone.
```
from pyleecan.Classes.Simu1 import Simu1
from pyleecan.Classes.ForceMT import ForceMT
# Create the Simulation
mySimu = Simu1(name="Tuto_Force")
mySimu.parent = out
mySimu.force = ForceMT()
# Run only the force module
mySimu.force.run()
```
Once the simulation is finished, the results are stored in the force part of the output (i.e. _myResults.force_ ) and we can call different plots. This object contains:
- *Time*: Time axis
- *Angle*: Angular position axis
- *AGSF*: Airgap surface force (radial and tangential components)
The **Output** object embeds different plots to visualize results easily. You can find a dedicated tutorial [here](https://www.pyleecan.org/tuto_Plots.html).
Here are some examples of useful plots.
```
%matplotlib notebook
from pyleecan.Functions.Plot import dict_2D, dict_3D
out.force.AGSF.plot_2D_Data("angle{°}", **dict_2D)
out.force.AGSF.plot_2D_Data("wavenumber=[0,78]", **dict_2D)
from numpy import pi
#------------------------------------------------------
# Plot the air-gap force as a function of time with the time fft
out.force.AGSF.plot_2D_Data("time","angle[10]", is_auto_ticks=False, **dict_2D)
out.force.AGSF.plot_2D_Data("freqs=[0,4000]", is_auto_ticks=False, **dict_2D)
#------------------------------------------------------
```
The following plot displays the radial air-gap surface force over time and angle.
```
#------------------------------------------------------
# Plot the tangential force as a function of time and space
out.force.AGSF.plot_3D_Data("time", "angle{°}", is_2D_view=True, **dict_3D)
#------------------------------------------------------
```
#### Motivation
Recently I spent a considerable amount of time writing the [postman_problems] python library implementing solvers for the Chinese and Rural Postman Problems (CPP and RPP respectively). I wrote about my initial motivation for the project: finding the optimal route through a trail system in a state park [here][sleeping_giant_cpp_post]. Although I've still yet to run the 34 mile optimal trail route, I am pleased with the optimization procedure. However, I couldn't help but feel that all those nights and weekends hobbying on this thing deserved a more satisfying visual than my static SVGs and hacky GIF. So to spice it up, I decided to solve the RPP on a graph derived from geodata and visualize on an interactive Leaflet map.
#### The Problem
In short, ride all 50 state named avenues in DC end-to-end following the shortest route possible.
There happens to be an annual [50 states ride] sponsored by our regional bike association, [WABA], that takes riders to each of the 50<sup>†</sup> state named avenues in DC. Each state's avenue is *touched*, but not covered in full. This problem takes it a step further by instituting this requirement. Thus, it boils down to the RPP where the required edges are state avenues (end-to-end) and the optional edges are every other road within DC city limits.
For those unfamiliar with DC street naming convention, that can (and should) be remedied with a read through the history behind the street system [here][dc_avenues_facts]. Seriously, it's an interesting read. Basically there are 50 state named avenues in DC ranging from 0.3 miles (Indiana Avenue) to 10 miles (Massachusetts Avenue) comprising [115 miles] in total.
#### The Solution
The data is grabbed from Open Street Maps (OSM). Most of the post is spent wrangling the OSM geodata into shape for the RPP algorithm using NetworkX graphs. The final route (and intermediate steps) are visualized using Leaflet maps through [mplleaflet], which enables interactivity using tiles from Mapbox and CartoDB among others.
Note to readers: the rendering of these maps can work the browser pretty hard; allow a couple extra seconds for loading.
#### The Approach
Most of the heavy lifting leverages functions from the [graph.py] module in the [postman_problems_examples] repo. The majority of pre-RPP processing employs heuristics that simplify the computation such that this code can run in a reasonable amount of time. The parameters employed here, which I believe get pretty darn close to the optimal solution, run in about 50 minutes. By tweaking a couple parameters, accuracy can be sacrificed for time to get run time down to ~5 minutes on a standard 4 core laptop.
Verbose technical details about the guts of each step are omitted from this post for readability. However the interested reader can find these in the docstrings in [graph.py].
The table of contents below provides the best high-level summary of the approach. All code needed to reproduce this analysis is in the [postman_problems_examples] repo, including the jupyter notebook used to author this blog post and a conda environment.
<sup>†</sup> While there are 50 *roadways*, there are technically only 48 state named *avenues*: Ohio Drive and California Street are the stubborn exceptions.
#### Table of Contents
[50 states ride]: https://org.salsalabs.com/o/451/p/salsa/event/common/public/?event_KEY=99906
[WABA]: http://www.waba.org/
[postman_problems]: https://github.com/brooksandrew/postman_problems
[sleeping_giant_cpp_post]:http://brooksandrew.github.io/simpleblog/articles/intro-to-graph-optimization-solving-cpp/
[dc_avenues_facts]:https://dc.curbed.com/2014/8/13/10061100/facts-and-myths-about-dcs-street-system
[115 miles]: https://en.wikipedia.org/wiki/List_of_state-named_roadways_in_Washington,_D.C.#cite_note-AL2-2
[mplleaflet]:https://github.com/jwass/mplleaflet
[graph.py]: https://github.com/brooksandrew/postman_problems_examples/blob/master/graph.py
[postman_problems_examples]: https://github.com/brooksandrew/postman_problems_examples
* Table of Contents
{:toc}
```
import mplleaflet
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
from collections import Counter
# can be found in https://github.com/brooksandrew/postman_problems_examples
from osm2nx import read_osm, haversine
from graph import (
states_to_state_avenue_name, subset_graph_by_edge_name, keep_oneway_edges_only, create_connected_components,
create_unkinked_connected_components, nodewise_distance_connected_components,
calculate_component_overlap, calculate_redundant_components, create_deduped_state_road_graph,
create_contracted_edge_graph, shortest_paths_between_components, find_minimum_weight_edges_to_connect_components,
create_rpp_edgelist
)
# can be found in https://github.com/brooksandrew/postman_problems
from postman_problems.tests.utils import create_mock_csv_from_dataframe
from postman_problems.solver import rpp, cpp
from postman_problems.stats import calculate_postman_solution_stats
```
## 0: Get the data
There are many ways to grab Open Street Map (OSM) data, since it's, well, open. I grabbed the DC map from GeoFabrik [here][geofabrik].
[geofabrik]:http://download.geofabrik.de/north-america/us/district-of-columbia.html
## 1: Load OSM to NetworkX
While some libraries like [OSMnx](https://github.com/gboeing/osmnx) provide an elegant interface to downloading, transforming and manipulating OSM data in NetworkX, I decided to start with the raw data itself. I adopted an OSM-to-nx parser from a hodge podge of Gists ([here][osmgist_aflaxman] and [there][osmgist_tofull]) to [`read_osm`][my-osm2nx-parser].
`read_osm` creates a directed graph. However, for this analysis, we'll use undirected graphs with the assumption that all roads are bidirectional on a bike one way or another.
[my-osm2nx-parser]:https://github.com/brooksandrew/50states/blob/master/osm2nx.py
[osmgist_tofull]:https://gist.github.com/Tofull/49fbb9f3661e376d2fe08c2e9d64320e
[osmgist_aflaxman]:https://gist.github.com/aflaxman/287370/
```
%%time
# load OSM to a directed NX
g = read_osm('district-of-columbia-latest.osm')
# create an undirected graph
g_ud = g.to_undirected()
```
This is a pretty big graph, about 275k edges. It takes about a minute to load on my machine (MacBook with 4 cores).
```
print(len(g.edges())) # number of edges
```
## 2: Make Graph w State Avenues only
#### Generate state avenue names
```
STATE_STREET_NAMES = [
'Alabama','Alaska','Arizona','Arkansas','California','Colorado',
'Connecticut','Delaware','Florida','Georgia','Hawaii','Idaho','Illinois',
'Indiana','Iowa','Kansas','Kentucky','Louisiana','Maine','Maryland',
'Massachusetts','Michigan','Minnesota','Mississippi','Missouri','Montana',
'Nebraska','Nevada','New Hampshire','New Jersey','New Mexico','New York',
'North Carolina','North Dakota','Ohio','Oklahoma','Oregon','Pennsylvania',
'Rhode Island','South Carolina','South Dakota','Tennessee','Texas','Utah',
'Vermont','Virginia','Washington','West Virginia','Wisconsin','Wyoming'
]
```
Most state avenues are written in the long form (ex. Connecticut Avenue Northwest). However, some, such as Florida Ave NW, are written in the short form. To be safe, we grab any permutation OSM could throw at us.
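The expansion is straightforward to sketch. The snippet below is my guess at the general shape of `states_to_state_avenue_name`, not the actual graph.py code; the constant names are my own:

```python
# A rough sketch of the kind of expansion states_to_state_avenue_name performs;
# the real implementation lives in graph.py and may differ in detail.
AVENUE_FORMS = ['Avenue', 'Ave']
QUADRANTS = ['Northwest', 'Northeast', 'Southwest', 'Southeast', 'NW', 'NE', 'SW', 'SE']

def candidate_avenue_names(states):
    """Every avenue-form x quadrant permutation OSM might use for each state name."""
    return ['{} {} {}'.format(s, a, q) for s in states for a in AVENUE_FORMS for q in QUADRANTS]

print(candidate_avenue_names(['Florida'])[:2])
# ['Florida Avenue Northwest', 'Florida Avenue Northeast']
```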
```
candidate_state_avenue_names = states_to_state_avenue_name(STATE_STREET_NAMES)
# two states break the "Avenue" pattern
candidate_state_avenue_names += ['California Street Northwest', 'Ohio Drive Southwest']
# preview
candidate_state_avenue_names[0:20]
```
#### Create graph w state avenues only
```
g_st = subset_graph_by_edge_name(g, candidate_state_avenue_names)
# Add state edge attribute from full street name (with avenue/drive and quadrant)
for e in g_st.edges(data=True):
    e[2]['state'] = e[2]['name'].rsplit(' ', 2)[0]
```
This is a much smaller graph:
```
print(len(g_st.edges()))
```
But every state is represented:
```
edge_count_by_state = Counter([e[2]['state'] for e in g_st.edges(data=True)])
# number of unique states
print(len(edge_count_by_state))
```
Here they are by edge count:
```
edge_count_by_state
```
### 2.1 Viz state avenues
As long as your NetworkX graph has `lat` and `lon` node attributes, [mplleaflet] can be used to pretty effortlessly plot your NetworkX graph on an interactive map.
Here's the map with all the state avenues...
[mplleaflet]: https://github.com/jwass/mplleaflet
```
fig, ax = plt.subplots(figsize=(1,8))
pos = {k: (g_st.node[k]['lon'], g_st.node[k]['lat']) for k in g_st.nodes()}
nx.draw_networkx_edges(g_st, pos, width=4.0, edge_color='black', alpha=0.7)
# save viz
mplleaflet.save_html(fig, 'state_avenues_all.html', tiles='cartodb_positron')
```
<iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/state_avenues_all.html" height="400" width="750"></iframe>
You can even customize with your favorite tiles. For example:
```
mplleaflet.display(fig=ax.figure, tiles='stamen_wc')
```
<img src="https://github.com/brooksandrew/postman_problems_examples/raw/master/50states/fig/stamen_wc_state_ave.jpeg" width="700">
...But there's a wrinkle. Zoom in on bigger avenues, like New York or Rhode Island, and you'll notice that there are two parallel edges representing each direction as a separate one-way road. This usually occurs when there are several lanes of traffic in each direction, or physical dividers between directions. Example below:
This is great for OSM and point A to B routing problems, but for the Rural Postman problem it imposes the requirement that each main avenue be cycled twice. We're not into that.
**Example:** Rhode Island Ave (parallel edges) vs Florida Ave (single edge)

## 3. Remove Redundant State Avenues
As it turns out, removing these parallel (redundant) edges is a nontrivial problem to solve. My approach is the following:
1. Build graph with one-way state avenue edges only.
2. For each state avenue, create list of connected components that represent sequences of OSM ways in the same direction (broken up by intersections and turns).
3. Compute distance between each node in a component to every other node in the other candidate components.
4. Identify redundant components as those with the majority of their nodes below some threshold distance away from another component.
5. Build graph without redundant edges.
### 3.1 Create state avenue graph with one-way edges only
```
g_st1 = keep_oneway_edges_only(g_st)
```
The one-way avenues are plotted in red below. A brief look indicates that 80-90% of the one-way avenues are parallel (redundant). A few, like Idaho Avenue NW and Ohio Drive SW, are single one-way roads with no accompanying parallel edge for us to remove.
NOTE: you'll need to zoom in 3-4 levels to see the parallel edges.
```
fig, ax = plt.subplots(figsize=(1,6))
pos = {k: (g_st1.node[k]['lon'], g_st1.node[k]['lat']) for k in g_st1.nodes()}
nx.draw_networkx_edges(g_st1, pos, width=3.0, edge_color='red', alpha=0.7)
# save viz
mplleaflet.save_html(fig, 'oneway_state_avenues.html', tiles='cartodb_positron')
```
<iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/oneway_state_avenues.html" height="400" width="750"></iframe>
#### Create connected components with one-way state avenues
```
comps = create_connected_components(g_st1)
```
There are 163 distinct components in the graph above.
```
len(comps)
```
### 3.2 Split connected components
#### Remove kinked nodes
However, we need to break some of these components up into smaller ones. Many components, like the one below, have bends or a connected cycle that contains both of the parallel edges, where we only want one. My approach is to identify the nodes with sharp angles and remove them. I don't know the proper name for these (you can read about [angular resolution]), but we'll call them "kinked nodes."
This will split the connected component below into two, allowing us to determine that one of them is redundant.
[angular resolution]: https://en.wikipedia.org/wiki/Angular_resolution_(graph_drawing)

I borrow [this code][jeromer_bearing] from `jeromer` to calculate the compass bearing (0 to 360) of each edge. Wherever the bearing difference between two adjacent edges is greater than `bearing_thresh`, we call the node shared by both edges a "kinked node." A relatively low `bearing_thresh` of 60 appeared to work best after some experimentation.
[jeromer_bearing]: https://gist.github.com/jeromer/2005586
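The bearing calculation itself is compact. Here is a self-contained version of the standard formula (adapted from the idea in jeromer's gist; the test points are my own, not from the DC graph):

```python
import math

def compass_bearing(pointA, pointB):
    """Initial compass bearing (0-360 degrees) from pointA to pointB, both (lat, lon)."""
    lat1, lat2 = math.radians(pointA[0]), math.radians(pointB[0])
    dlon = math.radians(pointB[1] - pointA[1])
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

# an edge heading due east along the equator bears 90 degrees
b1 = compass_bearing((0, 0), (0, 1))
# the next edge turns due north (bearing 0)
b2 = compass_bearing((0, 1), (1, 1))

# the shared node is "kinked" if the bearing change exceeds bearing_thresh
diff = min(abs(b1 - b2), 360 - abs(b1 - b2))
print(diff > 60)  # True: a 90-degree turn marks a kinked node
```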
```
# create list of comps (graphs) without kinked nodes
comps_unkinked = create_unkinked_connected_components(comps=comps, bearing_thresh=60)
# comps in dict form for easy lookup
comps_dict = {comp.graph['id']:comp for comp in comps_unkinked}
```
After removing these "kinked nodes," our list of components grows from 163 to 246:
```
len(comps_unkinked)
```
#### Viz components without kinked nodes
**Example:** Here's the Massachusetts Ave example from above after we remove kinked nodes:

**Full map:** Zoom in on the map below and you'll see that we split up most of the obvious components that should be split. There are a few corner cases that we miss, but I'd estimate we programmatically split about 95% of the components correctly.
```
fig, ax = plt.subplots(figsize=(1,6))
for comp in comps_unkinked:
    pos = {k: (comp.node[k]['lon'], comp.node[k]['lat']) for k in comp.nodes()}
    nx.draw_networkx_edges(comp, pos, width=3.0, edge_color='orange', alpha=0.7)
mplleaflet.save_html(fig, 'oneway_state_avenues_without_kinked_nodes.html', tiles='cartodb_positron')
```
<iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/oneway_state_avenues_without_kinked_nodes.html" height="500" width="750"></iframe>
### 3.3 & 3.4 Match connected components
Now that we've crafted the right components, we calculate how close (parallel) each component is to one another.
This is a relatively coarse approach, but performs surprisingly well:
**1.** Find closest nodes from candidate components to each node in each component (pseudo code below):
```
For each node N in component C:
    For each C_cand in components with same street avenue as C:
        Calculate closest node in C_cand to N.
```
**2.** Calculate overlap between components. Using the distances calculated in **1.**, we say that a node from component `C` is matched to a component `C_cand` if the distance is less than `thresh_distance` specified in `calculate_component_overlap`. 75 meters seemed to work pretty well. Essentially we're saying these nodes are close enough to be considered interchangeable.
**3.** Use the node-wise matching calculated in **2.** to calculate which components are redundant. If `thresh_pct` of nodes in component `C` are close enough (within `thresh_distance`) to nodes in component `C_cand`, we call `C` redundant and discard it.
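The decision rule in step 3 is simple enough to sketch directly. The function below is a hypothetical distillation (my own naming and data), not the actual `calculate_redundant_components` code:

```python
# Sketch of the redundancy rule: a component is redundant if at least
# thresh_pct of its nodes sit within thresh_distance (meters) of some
# other component of the same avenue.
def is_redundant(node_distances_m, thresh_distance=75, thresh_pct=0.75):
    close = [d for d in node_distances_m if d < thresh_distance]
    return len(close) / len(node_distances_m) >= thresh_pct

print(is_redundant([10, 20, 15, 30, 500]))   # 4/5 = 0.8 >= 0.75 -> True
print(is_redundant([10, 400, 500, 600]))     # 1/4 = 0.25 -> False
```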
```
# calculate nodewise distances between each node in comp and the closest node in each candidate
comp_matches = nodewise_distance_connected_components(comps_unkinked)
# calculate overlap between components
comp_overlap = calculate_component_overlap(comp_matches, thresh_distance=75)
# identify redundant and non-redundant components
remove_comp_ids, keep_comp_ids = calculate_redundant_components(comp_overlap, thresh_pct=0.75)
```
#### Viz redundant component solution
The map below visualizes the solution to the redundant parallel edges problem. There are some misses, but overall this simple approach works surprisingly well:
* **<font color='red'>red</font>**: redundant one-way edges to remove
* **<font color='black'>black</font>**: one-way edges to keep
* **<font color='blue'>blue</font>**: all state avenues
```
fig, ax = plt.subplots(figsize=(1,8))
# plot redundant one-way edges
for i, road in enumerate(remove_comp_ids):
    for comp_id in remove_comp_ids[road]:
        comp = comps_dict[comp_id]
        posc = {k: (comp.node[k]['lon'], comp.node[k]['lat']) for k in comp.nodes()}
        nx.draw_networkx_edges(comp, posc, width=7.0, edge_color='red')
# plot keeper one-way edges
for i, road in enumerate(keep_comp_ids):
    for comp_id in keep_comp_ids[road]:
        comp = comps_dict[comp_id]
        posc = {k: (comp.node[k]['lon'], comp.node[k]['lat']) for k in comp.nodes()}
        nx.draw_networkx_edges(comp, posc, width=3.0, edge_color='black')
# plot all state avenues
pos_st = {k: (g_st.node[k]['lon'], g_st.node[k]['lat']) for k in g_st.nodes()}
nx.draw_networkx_edges(g_st, pos_st, width=1.0, edge_color='blue', alpha=0.7)
mplleaflet.save_html(fig, 'redundant_edges.html', tiles='cartodb_positron')
```
<iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/redundant_edges.html" height="500" width="750"></iframe>
### 3.5 Build graph without redundant edges
This is essentially the graph with just **black** and **<font color='blue'>blue</font>** edges from the map above.
```
# create a single graph with deduped state roads
g_st_nd = create_deduped_state_road_graph(g_st, comps_dict, remove_comp_ids)
```
After deduping the redundant edges, our connected component count drops from 246 to 96.
```
len(list(nx.connected_components(g_st_nd)))
```
## 4. Create Single Connected Component
The strategy I employ for solving the Rural Postman Problem (RPP) in [postman_problems] is simple in that it reuses the machinery from the Chinese Postman Problem (CPP) solver [here][rpp_solver]. However, it makes the strong assumption that the graph's required edges form a single connected component. This is obviously not true for our state avenue graph as-is, but it's not too far off. Although there are 96 components, only a couple are more than a few hundred meters away from their next closest component.
So we hack it a bit by adding required edges to the graph to make it a single connected component. The tricky part is choosing the edges that add as little distance as possible. This was the first computationally intensive step that required some clever tricks and approximations to ensure execution in a reasonable amount of time.
My approach:
1. Build graph with [contracted edges] only.
2. Calculate haversine distance between each possible pair of components.
3. Find minimum distance connectors: iterate through the data structure created in **2.** to calculate shortest paths for top candidates based on haversine distance and add shortest connectors to graph. More details below.
4. Build single component graph.
[contracted edges]: https://en.wikipedia.org/wiki/Edge_contraction
[rpp_solver]: https://github.com/brooksandrew/postman_problems/blob/master/postman_problems/solver.py#L14
[postman_problems]: https://github.com/brooksandrew/postman_problems
### 4.1 Contract edges
Nodes with degree 2 are collapsed into an edge stretching from a dead-end node (degree 1) or intersection (degree >= 3) to another. This achieves two things:
* Limits the number of distance calculations.
* Ensures that components are connected at logical points (dead ends and intersections) rather than arbitrary parts of a roadway. This will make for a more continuous route.
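A simplified sketch of what edge contraction does is below. The helper is my own illustration; the real `create_contracted_edge_graph` also records the intermediate `path` on each contracted edge so granular edges can be restored later:

```python
import networkx as nx

def contract_degree2_nodes(g, weight="length"):
    """Collapse chains of degree-2 nodes into single summed-weight edges (a sketch)."""
    g = g.copy()
    for n in [n for n in g.nodes() if g.degree(n) == 2]:
        if g.degree(n) != 2:
            continue  # degree may have changed during earlier contractions
        u, v = list(g.neighbors(n))
        if u == v or g.has_edge(u, v):
            continue  # keep self-loops and existing parallel candidates intact
        w = g[n][u].get(weight, 1) + g[n][v].get(weight, 1)
        g.add_edge(u, v, **{weight: w})
        g.remove_node(n)
    return g

# a 5-node path contracts to a single edge between its two endpoints
path = nx.path_graph(5)
nx.set_edge_attributes(path, 1, "length")
contracted = contract_degree2_nodes(path)
print(sorted(contracted.edges(data="length")))  # [(0, 4, 4)]
```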
```
# Create graph with contracted edges only
g_st_contracted = create_contracted_edge_graph(graph=g_st_nd,
edge_weight='length')
```
This significantly reduces the number of nodes needed for distance computations, by a factor of more than 15.
```
print('Number of nodes in contracted graph: {}'.format(len(g_st_contracted.nodes())))
print('Number of nodes in original graph: {}'.format(len(g_st_nd.nodes())))
```
### 4.2 Calculate haversine distance between components
The 345 nodes from the contracted edge graph translate to >100,000 possible node pairings. That means >100,000 distance calculations. While applying a shortest path algorithm over the graph would certainly be more exact, it is painfully slow compared to simple haversine distance. This is mainly due to the high number of nodes and edges in the DC OSM map (over 250k edges).
On my laptop I averaged about 4 shortest path calculations per second. Not too bad for a handful, but 115k would take about 7 hours. Haversine distance, by comparison, churns through 115k in a couple seconds.
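For reference, here is a standard haversine implementation (the repo imports its own from `osm2nx`, which may differ slightly; the coordinates below are approximate, for illustration only):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in meters between two (lon, lat) points."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius ~6371 km

# roughly White House to the Capitol: about 2.5 km as the crow flies
d = haversine_m(-77.0365, 38.8977, -77.0090, 38.8899)
print(round(d))
```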
```
# create dataframe with shortest paths (haversine distance) between each component
dfsp = shortest_paths_between_components(g_st_contracted)
dfsp.shape[0] # number of rows (node pairs)
```
### 4.3 Find minimum distance connectors
This gets a bit tricky. Basically we iterate through the top (closest) candidate pairs of components and connect them iteration-by-iteration with the shortest path edge. We use pre-calculated haversine distance to get in the right ballpark, then refine with true shortest path for the closest 20 candidates. This helps us avoid the scenario where we naively connect two nodes that are geographically close as the crow flies (haversine), but far away via available roads. Two nodes separated by highways or train tracks, for example.
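The refine step can be sketched as follows, with a toy graph and made-up haversine numbers; this is my own illustration, not the actual `find_minimum_weight_edges_to_connect_components` implementation:

```python
import networkx as nx

def refine_with_shortest_path(g, candidate_pairs, top=20, weight='length'):
    """Re-rank the `top` haversine-closest pairs by true road distance (a sketch)."""
    best = sorted(candidate_pairs, key=lambda p: p[2])[:top]  # (u, v, haversine_m)
    scored = [(u, v, nx.shortest_path_length(g, u, v, weight=weight)) for u, v, _ in best]
    return min(scored, key=lambda p: p[2])

g = nx.Graph()
g.add_edge('a', 'c', length=900)
g.add_edge('c', 'b', length=900)   # the only road route between a and b
g.add_edge('a', 'd', length=300)   # a direct road to d

# b looks closest as the crow flies, but d wins on true road distance
pairs = [('a', 'b', 100), ('a', 'd', 250)]
print(refine_with_shortest_path(g, pairs))  # ('a', 'd', 300)
```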
```
# min weight edges that create a single connected component
connector_edges = find_minimum_weight_edges_to_connect_components(dfsp=dfsp,
graph=g_ud,
edge_weight='length',
top=20)
```
We had 96 components to connect, so it makes sense that we have 95 connectors.
```
len(connector_edges)
```
### 4.4 Build single component graph
```
# adding connector edges to create one single connected component
for e in connector_edges:
g_st_contracted.add_edge(e[0], e[1], distance=e[2]['distance'], path=e[2]['path'], required=1, connector=True)
```
We add about 12 miles with the 95 additional required edges. That's not too bad: an average distance of 0.13 miles per edge added.
```
print(sum([e[2]['distance'] for e in g_st_contracted.edges(data=True) if e[2].get('connector')])/1609.34)
```
So that leaves us with a single component of 124 miles of required edges to optimize a route through. That means the distance of deduped state avenues alone, without connectors (~112 miles) is just a couple miles away from what [Wikipedia reports](https://en.wikipedia.org/wiki/List_of_state-named_roadways_in_Washington,_D.C.) (115 miles).
```
print(sum([e[2]['distance'] for e in g_st_contracted.edges(data=True)])/1609.34)
```
Make graph with granular edges (filling in those that were contracted) connecting components:
```
g1comp = g_st_contracted.copy()
for e in g_st_contracted.edges(data=True):
if 'path' in e[2]:
granular_type = 'connector' if 'connector' in e[2] else 'state'
# add granular edges (state or connector) to graph
for pair in list(zip(e[2]['path'][:-1], e[2]['path'][1:])):
g1comp.add_edge(pair[0], pair[1], granular='True', granular_type=granular_type)
# add granular nodes along the contracted path to graph
for n in e[2]['path']:
g1comp.add_node(n, lat=g.node[n]['lat'], lon=g.node[n]['lon'])
```
### 4.5 Viz single connected component
**Black** edges represent the deduped state avenues.
**<font color='red'>Red</font>** edges represent the 12 miles of connectors that create the single connected component.
```
fig, ax = plt.subplots(figsize=(1,6))
g1comp_conn = g1comp.copy()
g1comp_st = g1comp.copy()
for e in g1comp.edges(data=True):
if ('granular_type' not in e[2]) or (e[2]['granular_type'] != 'connector'):
g1comp_conn.remove_edge(e[0], e[1])
for e in g1comp.edges(data=True):
if ('granular_type' not in e[2]) or (e[2]['granular_type'] != 'state'):
g1comp_st.remove_edge(e[0], e[1])
pos = {k: (g1comp_conn.node[k]['lon'], g1comp_conn.node[k]['lat']) for k in g1comp_conn.nodes()}
nx.draw_networkx_edges(g1comp_conn, pos, width=5.0, edge_color='red')
pos_st = {k: (g1comp_st.node[k]['lon'], g1comp_st.node[k]['lat']) for k in g1comp_st.nodes()}
nx.draw_networkx_edges(g1comp_st, pos_st, width=3.0, edge_color='black')
# save viz
mplleaflet.save_html(fig, 'single_connected_comp.html', tiles='cartodb_positron')
```
<iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/single_connected_comp.html" height="500" width="750"></iframe>
## 5. Solve CPP
I don't expect the Chinese Postman solution to be optimal since it only utilizes the required edges. However, I do expect it to execute quickly and serve as a benchmark for the Rural Postman solution. In the age of "deep learning," I agree with Smerity, [baselines need more love].
[baselines need more love]: http://smerity.com/articles/2017/baselines_need_love.html
### 5.1 Create CPP edgelist
The [cpp solver] I wrote operates off an edgelist (text file). This feels a bit clunky here, but it works.
[cpp solver]: https://github.com/brooksandrew/postman_problems/blob/master/postman_problems/solver.py#L65
```
# create list with edge attributes and "from" & "to" nodes
tmp = []
for e in g_st_contracted.edges(data=True):
tmpi = e[2].copy() # so we don't mess w original graph
tmpi['start_node'] = e[0]
tmpi['end_node'] = e[1]
tmp.append(tmpi)
# create dataframe w node1 and node2 in order
eldf = pd.DataFrame(tmp)
eldf = eldf[['start_node', 'end_node'] + list(set(eldf.columns)-{'start_node', 'end_node'})]
```
The first two columns are interpreted as the *from* and *to* nodes; everything else as edge attributes.
```
eldf.head(3)
```
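For context, the layout the solver expects is tiny: node ids in the first two columns, attributes after. A toy edgelist with made-up values:

```python
import io
import pandas as pd

# hypothetical two-edge list in the solver's expected column order
eldf_demo = pd.DataFrame({
    'start_node': [1, 2],
    'end_node': [2, 3],
    'distance': [120.5, 80.0],
    'required': [1, 1],
})
buf = io.StringIO()
eldf_demo.to_csv(buf, index=False)
print(buf.getvalue().splitlines()[0])  # the header row
```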
### 5.2 CPP solver
#### Starting point
I fix the starting node for the solution to OSM node `49765113` which corresponds to (**38.917002**, **-77.0364987**): the intersection of New Hampshire Avenue NW, 16th St NW and U St NW... and also close to my house:

#### Solve
```
# create mockfilename
elfn = create_mock_csv_from_dataframe(eldf)
# solve
START_NODE = '49765113' # New Hampshire Ave NW & U St NW.
circuit_cpp, gcpp = cpp(elfn, start_node=START_NODE)
```
### 5.3 CPP results
The CPP solution covers roughly 390,000 meters, about **242 miles**.
The optimal CPP route doubles the required distance, doubling back over every edge on average... definitely not ideal.
```
# circuit stats
calculate_postman_solution_stats(circuit_cpp)
```
## 6. Solve RPP
The RPP should improve the CPP solution as it considers optional edges that can drastically limit the amount of doublebacking.
We could add every possible edge that connects the required nodes, but that computation blows up quickly, and I'm not that patient. The `get_shortest_paths_distances` function is the bottleneck: it applies Dijkstra path length to every possible combination. At ~4 calculations per second, the ~14k pairs would take almost one hour.
However, we can use some heuristics to speed this up dramatically without sacrificing too much.
### 6.1 Create RPP edgelist
Ideally optional edges will be relatively short, since they are, well, optional. It is unlikely that the RPP algorithm will find that leveraging an optional edge that stretches from one corner of the graph to another will be efficient. Thus we constrain the set of optional edges presented to the RPP solver to include only those less than `max_distance`.
I experimented with several thresholds. 3200 meters certainly took longer (~40 minutes), but yielded the best route results. I tried 4000m which ran for about 4 hours and returned a route with the same distance (160 miles) as the 3200m threshold.
```
%%time
dfrpp = create_rpp_edgelist(g_st_contracted=g_st_contracted,
graph_full=g_ud,
edge_weight='length',
max_distance=3200)
```
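Conceptually, `max_distance` is just a filter on precomputed candidate distances before any optional edge reaches the solver — a toy sketch with made-up numbers (the real logic lives inside `create_rpp_edgelist`):

```python
import pandas as pd

# hypothetical candidate pairs with precomputed haversine distances (meters)
pairs = pd.DataFrame({
    'node1': [1, 1, 2],
    'node2': [2, 3, 3],
    'distance_haversine': [850, 5400, 3100],
})

max_distance = 3200
optional = pairs[pairs['distance_haversine'] <= max_distance]
print(len(optional))  # the 5400m pair is dropped
```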
Check how many optional edges are considered (0=optional, 1=required):
```
Counter(dfrpp['required'])
```
### 6.2 RPP solver
Apply the RPP solver to the processed dataset.
```
%%time
# create mockfilename
elfn = create_mock_csv_from_dataframe(dfrpp)
# solve
circuit_rpp, grpp = rpp(elfn, start_node=START_NODE)
```
### 6.3 RPP results
As expected, the RPP route is considerably shorter than the CPP solution. The **~242** mile CPP route is cut significantly to **~160 miles** with the RPP approach.
~260,000m (~161 miles) in total with ~59,000m (~37 miles) of doublebacking. Not bad... but probably a 2-day ride.
```
# RPP route distance (miles)
print(sum([e[3]['distance'] for e in circuit_rpp])/1609.34)
# hack to convert 'path' from str back to list. Caused by `create_mock_csv_from_dataframe`
for e in circuit_rpp:
if type(e[3]['path']) == str:
exec('e[3]["path"]=' + e[3]["path"])
calculate_postman_solution_stats(circuit_rpp)
```
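The `exec` hack works, but `ast.literal_eval` is the safer idiom for turning a stringified list back into a list, since it parses only Python literals — a sketch:

```python
import ast

path_str = "[49765113, 49765114, 49765115]"
path = ast.literal_eval(path_str)  # safely parses literals only, unlike exec/eval
print(path[0], len(path))
```

Applied here, the loop body would become `e[3]['path'] = ast.literal_eval(e[3]['path'])` behind the same type check.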
As seen below, filling the contracted edges back in with the granular nodes adds considerably to the edge count.
```
print('Number of edges in RPP circuit (with contracted edges): {}'.format(len(circuit_rpp)))
print('Number of edges in RPP circuit (with granular edges): {}'.format(rppdf.shape[0]))
```
### 6.4 Viz RPP graph
#### Create RPP granular graph
Add the granular edges (that we contracted for computation) back to the graph.
```
# calc shortest path between optional nodes and add to g1comp graph
for e in [e for e in circuit_rpp if e[3]['required']==0] :
# add granular optional edges to g1comp
path = e[3]['path']
for pair in list(zip(path[:-1], path[1:])):
if g1comp.has_edge(pair[0], pair[1]):
continue
g1comp.add_edge(pair[0], pair[1], granular='True', granular_type='optional')
# add granular nodes from optional edge paths to g1comp
for n in path:
g1comp.add_node(n, lat=g.node[n]['lat'], lon=g.node[n]['lon'])
```
#### Visualize RPP solution by edge type
* **<font color='black'>black</font>**: required state avenue edges
* **<font color='red'>red</font>**: required non-state avenue edges added to form single component
* **<font color='blue'>blue</font>**: optional non-state avenue roads
```
fig, ax = plt.subplots(figsize=(1,12))
g1comp_conn = g1comp.copy()
g1comp_st = g1comp.copy()
g1comp_opt = g1comp.copy()
for e in g1comp.edges(data=True):
if e[2].get('granular_type') != 'connector':
g1comp_conn.remove_edge(e[0], e[1])
for e in g1comp.edges(data=True):
#if e[2].get('name') not in candidate_state_avenue_names:
if e[2].get('granular_type') != 'state':
g1comp_st.remove_edge(e[0], e[1])
for e in g1comp.edges(data=True):
if e[2].get('granular_type') != 'optional':
g1comp_opt.remove_edge(e[0], e[1])
pos = {k: (g1comp_conn.node[k]['lon'], g1comp_conn.node[k]['lat']) for k in g1comp_conn.nodes()}
nx.draw_networkx_edges(g1comp_conn, pos, width=6.0, edge_color='red')
pos_st = {k: (g1comp_st.node[k]['lon'], g1comp_st.node[k]['lat']) for k in g1comp_st.nodes()}
nx.draw_networkx_edges(g1comp_st, pos_st, width=4.0, edge_color='black')
pos_opt = {k: (g1comp_opt.node[k]['lon'], g1comp_opt.node[k]['lat']) for k in g1comp_opt.nodes()}
nx.draw_networkx_edges(g1comp_opt, pos_opt, width=2.0, edge_color='blue')
# save viz
mplleaflet.save_html(fig, 'rpp_solution_edge_type.html', tiles='cartodb_positron')
```
<iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/rpp_solution_edge_type.html" height="500" width="750"></iframe>
#### Visualize RPP solution by edge walk count
Edge walks per color:
**<font color='black'>black</font>**: 1 <br>
**<font color='magenta'>magenta</font>**: 2 <br>
**<font color='orange'>orange</font>**: 3 <br>
Edges walked more than once are also widened.
This solution feels pretty reasonable with surprisingly little doublebacking. After staring at this for several minutes, I could think of roads I'd prefer not to cycle on, but no obvious shorter paths.
```
## Create graph directly from rpp_circuit and original graph w lat/lon (g_ud)
color_seq = [None, 'black', 'magenta', 'orange', 'yellow']
grppviz = nx.Graph()
edges_cnt = Counter([tuple(sorted([e[0], e[1]])) for e in circuit_rpp])
for e in circuit_rpp:
for n1, n2 in zip(e[3]['path'][:-1], e[3]['path'][1:]):
if grppviz.has_edge(n1, n2):
grppviz[n1][n2]['linewidth'] += 2
grppviz[n1][n2]['cnt'] += 1
else:
grppviz.add_edge(n1, n2, linewidth=2.5)
grppviz[n1][n2]['color_st'] = 'black' if g_st.has_edge(n1, n2) else 'red'
grppviz[n1][n2]['cnt'] = 1
grppviz.add_node(n1, lat=g_ud.node[n1]['lat'], lon=g_ud.node[n1]['lon'])
grppviz.add_node(n2, lat=g_ud.node[n2]['lat'], lon=g_ud.node[n2]['lon'])
for e in grppviz.edges(data=True):
e[2]['color_cnt'] = color_seq[e[2]['cnt']]
fig, ax = plt.subplots(figsize=(1,12))
pos = {k: (grppviz.node[k]['lon'], grppviz.node[k]['lat']) for k in grppviz.nodes()}
e_width = [e[2]['linewidth'] for e in grppviz.edges(data=True)]
e_color = [e[2]['color_cnt'] for e in grppviz.edges(data=True)]
nx.draw_networkx_edges(grppviz, pos, width=e_width, edge_color=e_color, alpha=0.7)
# save viz
mplleaflet.save_html(fig, 'rpp_solution_edge_cnt.html', tiles='cartodb_positron')
```
<iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/rpp_solution_edge_cnt.html" height="500" width="750"></iframe>
### 6.5 Serialize RPP solution
#### CSV
Remember we contracted the edges in **4.1** for more efficient computation. However, when we visualize the solution, the more granular edges within the larger contracted ones are filled back in, so we can see the exact route to ride with all the bends and squiggles.
```
# fill in RPP solution edgelist with granular nodes
rpplist = []
for ee in circuit_rpp:
path = list(zip(ee[3]['path'][:-1], ee[3]['path'][1:]))
for e in path:
rpplist.append({
'start_node': e[0],
'end_node': e[1],
'start_lat': g_ud.node[e[0]]['lat'],
'start_lon': g_ud.node[e[0]]['lon'],
'end_lat': g_ud.node[e[1]]['lat'],
'end_lon': g_ud.node[e[1]]['lon'],
'street_name': g_ud[e[0]][e[1]].get('name')
})
# write solution to disk
rppdf = pd.DataFrame(rpplist)
rppdf.to_csv('rpp_solution.csv', index=False)
```
#### Geojson
Similarly, we create a geojson object of the RPP solution using the `time` attribute to keep track of the route order. This data structure can be used for fancy js/d3 visualizations. Coming soon, hopefully.
```
geojson = {'features':[], 'type': 'FeatureCollection'}
time = 0
path = list(reversed(circuit_rpp[0][3]['path']))
for e in circuit_rpp:
if e[3]['path'][0] != path[-1]:
path = list(reversed(e[3]['path']))
else:
path = e[3]['path']
for n in path:
time += 1
doc = {'type': 'Feature',
'properties': {
'latitude': g.node[n]['lat'],
'longitude': g.node[n]['lon'],
'time': time,
'id': e[3].get('id')
},
'geometry':{
'type': 'Point',
'coordinates': [g.node[n]['lon'], g.node[n]['lat']]
}
}
geojson['features'].append(doc)
with open('circuit_rpp.geojson','w') as f:
json.dump(geojson, f)
```
```
import requests
import tushare as ts
import pandas as pd
pro = ts.pro_api()
sw_index=pro.index_basic(market='SW')
sw_indices = sw_index.ts_code.str.split('.', expand=True)[0]
ind_index = sw_index.query('category=="一级行业指数"')
ind_index['name'] = ind_index['name'].str.split('(', expand=True)[0]
ind_index['symbol'] = ind_index.ts_code.str.split('.', expand=True)[0]
ind_index.set_index('name')['symbol'].to_dict()
sw_index.query('category=="一级行业指数"')
sample = sw_indices.sample(10)
def get_sw_index_hist(indices, start, end):
"""
Get historical values for Shen-Wan indices
"""
url_pat = 'http://www.swsindex.com/excel2.aspx?ctable=swindexhistory&where=%20%20swindexcode%20in%20{indices}%20and%20%20BargainDate%3E=%27{start}%27%20and%20%20BargainDate%3C=%27{end}%27'
def _wrap_in_bracket(indices):
return '(' + ','.join('%27'+i+'%27' for i in indices) + ')'
if isinstance(indices, str):
indices = [indices]
url = url_pat.format(indices=_wrap_in_bracket(indices), start=start, end=end)
response = requests.get(url)
indexDf = pd.read_html(response.content.decode('utf-8'))[0]
RENAME = {'指数代码':'symbol',
'指数名称':'name',
'发布日期':'date',
'开盘指数':'open',
'最高指数':'high',
'最低指数':'low',
'收盘指数':'close',
'成交量(亿股)':'volume',
'成交额(亿元)':'amount',
'涨跌幅(%)':'pct_chg'}
indexDf = indexDf.rename(columns = RENAME)
indexDf['date'] = pd.to_datetime(indexDf.date)
MULTIPLIER = 1e8
indexDf[['volume', 'amount']] *= MULTIPLIER
return indexDf
from arctic import CHUNK_STORE, Arctic
a = Arctic('localhost')
a.initialize_library('index', CHUNK_STORE)
lib_index = a['index']
from tqdm import tqdm
for index in tqdm(sw_indices):
df = get_sw_index_hist(index, '2014-01-01', '2020-10-23')
df = df.set_index('date')
lib_index.write(index, df, chunk_size='M')
def get_sw_industry_class():
"""
Returns the Shen-Wan industry classification table
"""
industry = pd.read_html('./refData/SwClass.xls')[0]
IND_RENAME = {
'行业名称':'industry',
'股票代码':'symbol',
'股票名称':'name',
'起始日期':'start_date',
'结束日期':'end_date',
}
industry = industry.rename(columns=IND_RENAME)
industry['symbol'] = industry.symbol.astype(str).str.zfill(6)
return industry
import requests
url = 'http://www.csindex.com.cn/zh-CN/downloads/industry-class'
response = requests.get(url)
from bs4 import BeautifulSoup
soup = BeautifulSoup(response.content, 'html.parser')  # explicit parser avoids the bs4 warning
soup.find_all('a', {'class' : 'stats_a'})
def get_cics_industry():
from io import StringIO, BytesIO
from zipfile import ZipFile
import pandas as pd
url = 'http://www.csindex.com.cn/uploads/downloads/other/files/zh_CN/ZzhyflWz.zip'
response = requests.get(url)
t = ZipFile(BytesIO(response.content))
cics_industry = pd.read_excel( t.open('cicslevel2.xls') )
cics_industry.columns = [c.split('\n')[1] for c in cics_industry]
cics_industry['Securities Code'] = cics_industry['Securities Code'].astype(str).str.zfill(6)
return cics_industry
cics_industry = get_cics_industry()
cics_industry['CICS 1st Level Name(Eng.)'].value_counts()
import tushare as ts
pro = ts.pro_api()
symbols = [ "{:06d}".format(i) for i in range(908, 918) ]
df = pd.concat([ts.get_k_data(sym, index=True) for sym in symbols])
df['symbol'] = df['code'].str[2:]
close_prices = df.pivot_table('close', 'date', 'symbol')
returns = close_prices.pct_change().dropna()
import matplotlib.pyplot as plt
returns.corr().style.background_gradient()
index_ref = pd.read_csv('./refData/AllIndex.csv', index_col=0)
index_ref[(index_ref.name.str.startswith('沪深300'))&(index_ref.category=="策略指数")]
index_ref[(index_ref.name.str.startswith('全指'))]
index_ref.query('ts_code == "000803.CSI"')
strategies = ['000828', '000918', '000919','000944']
df_style = pd.concat( [ts.get_k_data(sym, index=True) for sym in strategies])
df_style['symbol'] = df_style['code'].str[2:]
close = df_style.reset_index().pivot_table('close', 'date', 'symbol')
returns_style = close.pct_change().dropna()
rolling_corr = returns_style.rolling(250).corr()
rolling_corr.unstack(level=1)['000828'].plot()
sample=returns.rolling(250).corr().unstack(level=1)['000910']
sample = sample.dropna()
sample['000916'].plot()
plt.scatter(returns['000910'], returns['000916'])
(returns[['000910','000916']]+1).cumprod().plot()
```
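As an aside, the hand-written `%27`/`%20` escapes in the URL template inside `get_sw_index_hist` are easy to get wrong; `urllib.parse.quote` can build the same `where` clause more robustly — a sketch (same endpoint and clause structure assumed):

```python
from urllib.parse import quote

def build_where(indices, start, end):
    """Percent-encode the SQL-ish filter used by the swsindex.com export URL."""
    in_clause = '(' + ','.join("'" + i + "'" for i in indices) + ')'
    where = " swindexcode in {} and BargainDate>='{}' and BargainDate<='{}'".format(
        in_clause, start, end)
    return quote(where, safe="(),=")  # keep the structural characters unescaped

print(build_where(['801010'], '2014-01-01', '2020-10-23'))
```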
`fi_symbols` are the full_stock indices
```
fi_symbols = ['%06d'%i for i in range(986, 996) ]
from tushare.stock import cons
for sym in fi_symbols:
if sym not in cons.INDEX_SYMBOL:
cons.INDEX_SYMBOL[sym] = 'sh'+sym
import pandas as pd
df_full = pd.concat( [ ts.get_k_data(sym, index=True) for sym in fi_symbols ] )
df_full['symbol'] = df_full.code.str[2:]
close_full = df_full.pivot_table('close', 'date', 'symbol')
returns_full = close_full.pct_change().dropna()
corr_full = returns_full.corr()
close_full['000988'].plot()
dates = pd.bdate_range('2018-10-31', '2020-10-31', )
cal = pro.trade_cal(exchange='', start_date='20181031', end_date='20201031')
trading_dates = pd.to_datetime( cal.query('is_open==1').cal_date )
(returns_full.reindex(dates)+1).cumprod().plot()
import plotly.express as px
import plotly
plotly.offline.init_notebook_mode(connected=True)
!jupyter labextension list
plotly.__version__
fig = px.line(close_full)
fig.show()
```
```
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
import re
import numpy as np
import os
import time
import json
from glob import glob
from PIL import Image
import pickle
annotation_folder = '/annotations/'
# define annotation_file up front so this cell also works when the folder already exists
annotation_file = os.path.abspath('.') + '/annotations/captions_train2014.json'
if not os.path.exists(os.path.abspath('.') + annotation_folder):
    annotation_zip = tf.keras.utils.get_file('captions.zip',
                                             cache_subdir=os.path.abspath('.'),
                                             origin='http://images.cocodataset.org/annotations/annotations_trainval2014.zip',
                                             extract=True)
    os.remove(annotation_zip)
# Download image files
image_folder = '/train2014/'
if not os.path.exists(os.path.abspath('.') + image_folder):
image_zip = tf.keras.utils.get_file('train2014.zip',
cache_subdir=os.path.abspath('.'),
origin = 'http://images.cocodataset.org/zips/train2014.zip',
extract = True)
PATH = os.path.dirname(image_zip) + image_folder
os.remove(image_zip)
else:
PATH = os.path.abspath('.') + image_folder
with open(annotation_file, 'r') as f:
annotations = json.load(f)
all_captions = []
all_img_name_vector = []
for annot in annotations['annotations']:
caption = '<start> ' + annot['caption'] + ' <end>'
image_id = annot['image_id']
full_coco_image_path = PATH + 'COCO_train2014_' + '%012d.jpg' % (image_id)
all_img_name_vector.append(full_coco_image_path)
all_captions.append(caption)
train_captions, img_name_vector = shuffle(all_captions,
all_img_name_vector,
random_state=1)
num_examples = 95000
train_captions = train_captions[:num_examples]
img_name_vector = img_name_vector[:num_examples]
len(train_captions), len(all_captions)
def load_image(image_path):
img = tf.io.read_file(image_path)
img = tf.image.decode_jpeg(img, channels=3)
img = tf.image.resize(img, (299, 299))
img = tf.keras.applications.inception_v3.preprocess_input(img)
return img, image_path
image_model = tf.keras.applications.InceptionV3(include_top=False,
weights='imagenet')
new_input = image_model.input
hidden_layer = image_model.layers[-1].output
image_features_extract_model = tf.keras.Model(new_input, hidden_layer)
%%time
# Get unique images
encode_train = sorted(set(img_name_vector))
# Feel free to change batch_size according to your system configuration
image_dataset = tf.data.Dataset.from_tensor_slices(encode_train)
image_dataset = image_dataset.map(
load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE).batch(32)
for img, path in image_dataset:
batch_features = image_features_extract_model(img)
batch_features = tf.reshape(batch_features,
(batch_features.shape[0], -1, batch_features.shape[3]))
for bf, p in zip(batch_features, path):
path_of_feature = p.numpy().decode("utf-8")
np.save(path_of_feature, bf.numpy())
os.remove(path_of_feature)  # drop the original jpg; only the saved .npy features are needed for training
def calc_max_length(tensor):
return max(len(t) for t in tensor)
top_k = 10000
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=top_k,
oov_token="<unk>",
filters='!"#$%&()*+.,-/:;=?@[\]^_`{|}~ ')
tokenizer.fit_on_texts(train_captions)
train_seqs = tokenizer.texts_to_sequences(train_captions)
out = open("train_captions.pkl","wb")
pickle.dump(train_captions, out)
out.close()
tokenizer.word_index['<pad>'] = 0
tokenizer.index_word[0] = '<pad>'
train_seqs = tokenizer.texts_to_sequences(train_captions)
cap_vector = tf.keras.preprocessing.sequence.pad_sequences(train_seqs, padding='post')
max_length = calc_max_length(train_seqs)
```
## Split the data into training and testing
```
img_name_train, img_name_val, cap_train, cap_val = train_test_split(img_name_vector,
cap_vector,
test_size=0.2,
random_state=0)
len(img_name_train), len(cap_train), len(img_name_val), len(cap_val)
```
## Create a tf.data dataset for training
Our images and captions are ready! Next, let's create a tf.data dataset to use for training our model.
```
BATCH_SIZE = 384
BUFFER_SIZE = 1000
embedding_dim = 256
units = 1024
vocab_size = top_k + 1
num_steps = len(img_name_train) // BATCH_SIZE
# Shape of the vector extracted from InceptionV3 is (64, 2048)
# These two variables represent that vector shape
features_shape = 2048
attention_features_shape = 64
def map_func(img_name, cap):
img_tensor = np.load(img_name.decode('utf-8')+'.npy')
return img_tensor, cap
dataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))
# Use map to load the numpy files in parallel
dataset = dataset.map(lambda item1, item2: tf.numpy_function(
map_func, [item1, item2], [tf.float32, tf.int32]),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
# Shuffle and batch
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
class BahdanauAttention(tf.keras.Model):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, features, hidden):
# features(CNN_encoder output) shape == (batch_size, 64, embedding_dim)
# hidden shape == (batch_size, hidden_size)
# hidden_with_time_axis shape == (batch_size, 1, hidden_size)
hidden_with_time_axis = tf.expand_dims(hidden, 1)
# score shape == (batch_size, 64, hidden_size)
score = tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time_axis))
# attention_weights shape == (batch_size, 64, 1)
# you get 1 at the last axis because you are applying score to self.V
attention_weights = tf.nn.softmax(self.V(score), axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * features
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
class CNN_Encoder(tf.keras.Model):
# Since you have already extracted the features and dumped it using pickle
# This encoder passes those features through a Fully connected layer
def __init__(self, embedding_dim):
super(CNN_Encoder, self).__init__()
# shape after fc == (batch_size, 64, embedding_dim)
self.fc = tf.keras.layers.Dense(embedding_dim)
def call(self, x):
x = self.fc(x)
x = tf.nn.relu(x)
return x
class RNN_Decoder(tf.keras.Model):
def __init__(self, embedding_dim, units, vocab_size):
super(RNN_Decoder, self).__init__()
self.units = units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc1 = tf.keras.layers.Dense(self.units)
self.fc2 = tf.keras.layers.Dense(vocab_size)
self.attention = BahdanauAttention(self.units)
def call(self, x, features, hidden):
# defining attention as a separate model
context_vector, attention_weights = self.attention(features, hidden)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# shape == (batch_size, max_length, hidden_size)
x = self.fc1(output)
# x shape == (batch_size * max_length, hidden_size)
x = tf.reshape(x, (-1, x.shape[2]))
# output shape == (batch_size * max_length, vocab)
x = self.fc2(x)
return x, state, attention_weights
def reset_state(self, batch_size):
return tf.zeros((batch_size, self.units))
encoder = CNN_Encoder(embedding_dim)
decoder = RNN_Decoder(embedding_dim, units, vocab_size)
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
checkpoint_path = "ckpt"
ckpt = tf.train.Checkpoint(encoder=encoder,
decoder=decoder,
optimizer = optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=3)
start_epoch = 0
if ckpt_manager.latest_checkpoint:
start_epoch = int(ckpt_manager.latest_checkpoint.split('-')[-1])
# restoring the latest checkpoint in checkpoint_path
ckpt.restore(ckpt_manager.latest_checkpoint)
print(ckpt_manager.latest_checkpoint)
loss_plot = []
@tf.function
def train_step(img_tensor, target):
loss = 0
# initializing the hidden state for each batch
# because the captions are not related from image to image
hidden = decoder.reset_state(batch_size=target.shape[0])
dec_input = tf.expand_dims([tokenizer.word_index['<start>']] * target.shape[0], 1)
with tf.GradientTape() as tape:
features = encoder(img_tensor)
for i in range(1, target.shape[1]):
# passing the features through the decoder
predictions, hidden, _ = decoder(dec_input, features, hidden)
loss += loss_function(target[:, i], predictions)
# using teacher forcing
dec_input = tf.expand_dims(target[:, i], 1)
total_loss = (loss / int(target.shape[1]))
trainable_variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(gradients, trainable_variables))
return loss, total_loss
EPOCHS = 30
for epoch in range(start_epoch, EPOCHS):
start = time.time()
total_loss = 0
for (batch, (img_tensor, target)) in enumerate(dataset):
batch_loss, t_loss = train_step(img_tensor, target)
total_loss += t_loss
if batch % 100 == 0:
print ('Epoch {} Batch {} Loss {:.4f}'.format(
epoch + 1, batch, batch_loss.numpy() / int(target.shape[1])))
# storing the epoch end loss value to plot later
loss_plot.append(total_loss / num_steps)
if epoch % 1 == 0:
ckpt_manager.save()
print ('Epoch {} Loss {:.6f}'.format(epoch + 1,
total_loss/num_steps))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
plt.plot(loss_plot)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title('Loss Plot')
plt.show()
def evaluate2(image):
attention_plot = np.zeros((max_length, attention_features_shape))
hidden = decoder.reset_state(batch_size=1)
temp_input = tf.expand_dims(load_image(image)[0], 0)
img_tensor_val = image_features_extract_model(temp_input)
img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0], -1, img_tensor_val.shape[3]))
features = encoder(img_tensor_val)
dec_input = tf.expand_dims([tokenizer.word_index['<start>']], 0)
result = []
for i in range(max_length):
predictions, hidden, attention_weights = decoder(dec_input, features, hidden)
attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy()
predicted_id = tf.random.categorical(predictions, 1)[0][0].numpy()
result.append(tokenizer.index_word[predicted_id])
if tokenizer.index_word[predicted_id] == '<end>':
return result, attention_plot
dec_input = tf.expand_dims([predicted_id], 0)
attention_plot = attention_plot[:len(result), :]
return result, attention_plot
def plot_attention(image, result, attention_plot):
temp_image = np.array(Image.open(image))
fig = plt.figure(figsize=(10, 10))
len_result = len(result)
for l in range(len_result):
temp_att = np.resize(attention_plot[l], (8, 8))
ax = fig.add_subplot(len_result//2, len_result//2, l+1)
ax.set_title(result[l])
img = ax.imshow(temp_image)
ax.imshow(temp_att, cmap='gray', alpha=0.6, extent=img.get_extent())
plt.tight_layout()
plt.show()
#BEAM
def evaluate(image, beam_index = 3):
start = [tokenizer.word_index['<start>']]
# result[0][0] = index of the starting word
# result[0][1] = probability of the word predicted
result = [[start, 0.0]]
attention_plot = np.zeros((max_length, attention_features_shape))
hidden = decoder.reset_state(batch_size=1)
temp_input = tf.expand_dims(load_image(image)[0], 0)
img_tensor_val = image_features_extract_model(temp_input)
img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0], -1, img_tensor_val.shape[3]))
features = encoder(img_tensor_val)
dec_input = tf.expand_dims([tokenizer.word_index['<start>']], 0)
while len(result[0][0]) < max_length:
i=0
temp = []
for s in result:
predictions, hidden, attention_weights = decoder(dec_input, features, hidden)
attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy()
i=i+1
# Getting the top <beam_index>(n) predictions
word_preds = np.argsort(predictions[0])[-beam_index:]
# creating a new list so as to put them via the model again
for w in word_preds:
next_cap, prob = s[0][:], s[1]
next_cap.append(w)
prob += predictions[0][w]
temp.append([next_cap, prob])
result = temp
# Sorting according to the probabilities
result = sorted(result, reverse=False, key=lambda l: l[1])
# Getting the top words
result = result[-beam_index:]
predicted_id = result[-1] # with Max Probability
pred_list = predicted_id[0]
prd_id = pred_list[-1]
if prd_id != 3:  # 3 is assumed to be the '<end>' token id
dec_input = tf.expand_dims([prd_id], 0) # Decoder input is the word predicted with highest probability among the top_k words predicted
else:
break
result = result[-1][0]
intermediate_caption = [tokenizer.index_word[i] for i in result]
final_caption = []
for i in intermediate_caption:
if i != '<end>':
final_caption.append(i)
else:
break
attention_plot = attention_plot[:len(result)]
return final_caption, attention_plot
image_url = 'https://www.tensorflow.org/tutorials/text/image_captioning_files/output_RhCND0bCUP11_1.png'
image_extension = image_url[-4:]
image_path = tf.keras.utils.get_file('image2s3s3'+image_extension,
origin=image_url)
# result, attention_plot = evaluate(image_path, beam_index=3)
result, attention_plot = evaluate2(image_path)
#result.remove('<start>')
print ('Prediction Caption:', ' '.join(result))
plot_attention(image_path, result, attention_plot)
# opening the image
Image.open(image_path)
os.remove(image_path)
```
# Communicate Data Findings with Ford GoBike's Trip Data
## by Jiarun He
## Introduction
> Bike sharing services have been rising in popularity in cities around the world over the past five years. This innovative sharing service allows members to borrow bikes for short-distance trips (<30 mins). Almost all of the service providers use advanced mobile and information technologies, not only to let users lock/unlock a ready-to-use bike easily from the mobile platform, but also to collect a wealth of data from the users that helps them understand how their bike sharing systems are used. In this data analysis project, I am going to perform a data exploration on the data provided by the Ford GoBike data system.
## Preliminary Wrangling
>* I have chosen the Ford GoBike System Data : https://www.fordgobike.com/system-data as the project source data
* Ford GoBike is the Bay Area's new bike share system, with thousands of public bikes for use across San Francisco, East Bay and San Jose.
* Multiple data files are required to be joined together as a full year’s coverage is desired.
* The features included in the dataset: Trip Duration (seconds), Start Time and Date, End Time and Date, Start Station ID, Start Station Name, Start Station Latitude, Start Station Longitude, End Station ID, End Station Name, End Station Latitude, End Station Longitude, Bike ID, User Type (Subscriber or Customer; "Subscriber" = Member, "Customer" = Casual), Member Year of Birth, Member Gender
```
# import all packages and set plots to be embedded inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
from requests import get
from os import path, getcwd, makedirs, listdir
from io import BytesIO
from zipfile import ZipFile
%matplotlib inline
```
```
folder_name = 'GoBike_Trip_Files'
makedirs(folder_name, exist_ok=True)  # avoid FileExistsError when re-running
#first read the 2017 data
pd.read_csv('https://s3.amazonaws.com/fordgobike-data/2017-fordgobike-tripdata.csv').to_csv('{}/2017-fordgobike-tripdata.csv'.format(folder_name))
#second, read all 2018 data
for month in range(1,13):  # months 01-12 of 2018; range(1,12) would skip December
month_string = str(month)
month_leading_zero = month_string.zfill(2)
bike_data_url = 'https://s3.amazonaws.com/fordgobike-data/2018' + month_leading_zero + '-fordgobike-tripdata.csv.zip'
response = get(bike_data_url)
# code below opens zip file; BytesIO returns a readable and writeable view of the contents;
unzipped_file = ZipFile(BytesIO(response.content))
# puts extracted zip file into folder GoBike_Trip_Files
unzipped_file.extractall(folder_name)
for month in range(1,4):
month_string = str(month)
month_leading_zero = month_string.zfill(2)
bike_data_url = 'https://s3.amazonaws.com/fordgobike-data/2019' + month_leading_zero + '-fordgobike-tripdata.csv.zip'
response = get(bike_data_url)
# code below opens zip file; BytesIO returns a readable and writeable view of the contents;
unzipped_file = ZipFile(BytesIO(response.content))
# puts extracted zip file into folder GoBike_Trip_Files
unzipped_file.extractall(folder_name)
# Combine All Locally Saved CSVs into One DataFrame
local_csvs = []
for file_name in listdir(folder_name):
local_csvs.append(pd.read_csv(folder_name+'/'+file_name))
df = pd.concat(local_csvs)
df.to_csv('data.csv')
df = pd.read_csv('data.csv')
df.drop(['Unnamed: 0', 'Unnamed: 0.1'], axis=1, inplace=True)
df.head()
dtype={'bike_id': int}
df.info()
df.head()
```
### What is the structure of your dataset?
> There are 2,883,850 ride entries in the dataset with 16 features such as bike_id, bike_share_for_all_trip, duration_sec, etc. Most of the attributes are numeric in this dataframe.
### What is/are the main feature(s) of interest in your dataset?
> I am interested in the following areas:
1. What's the average trip distance? (Univariate Exploration)
2. Does trip distance depend on user group (subscriber vs. customer)? (Bivariate Exploration)
3. How does seasonality (weather condition) affect the trip distance of each user group? (Multivariate)
### What features in the dataset do you think will help support your investigation into your feature(s) of interest?
> Start time and date, End time and date, user type, end_station_latitude, end_station_longitude, start_station_latitude, start_station_longitude will support my investigation
## Univariate Exploration
Q1. What's the average trip distance?
### Generate a new attribute: 'distance_km'
```
#https://stackoverflow.com/questions/29545704/fast-haversine-approximation-python-pandas?noredirect=1&lq=1
def distance(origin, destination):
"""
Calculate the great circle distance between two points
on the earth (specified in decimal degrees)
All args must be of equal length.
"""
lat1, lon1 = origin
lat2, lon2 = destination
lon1, lat1, lon2, lat2 = map(np.radians, [lon1, lat1, lon2, lat2])
dlon = lon2 - lon1
dlat = lat2 - lat1
a = np.sin(dlat/2.0)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/2.0)**2
c = 2 * np.arcsin(np.sqrt(a))
km = 6367 * c
return km
df['distance_km'] = df.apply(lambda x: distance((x['start_station_latitude'], x['start_station_longitude']), (x['end_station_latitude'], x['end_station_longitude'])), axis=1)
df_copy=df.copy()
df_copy.describe()
# Plot the distribution of trip distance
bin_edges = np.arange(0, df_copy['distance_km'].max() + 0.1 , 0.1)
plt.hist(data = df_copy , x = 'distance_km' , bins = bin_edges)
plt.xlabel('Distance_KM')
plt.title('Distribution of Trip Distance(s)')
plt.xlim(0,8)
plt.show()
```
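As a quick sanity check of the haversine helper (restated in compact form here so the cell runs standalone), one degree of longitude at the equator should come out to roughly 111 km for the 6367 km Earth radius used above:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    # Same great-circle formula as distance() above, radius 6367 km
    lon1, lat1, lon2, lat2 = map(np.radians, [lon1, lat1, lon2, lat2])
    a = np.sin((lat2 - lat1) / 2.0)**2 + \
        np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0)**2
    return 6367 * 2 * np.arcsin(np.sqrt(a))

print(round(float(haversine_km(0.0, 0.0, 0.0, 1.0)), 1))    # → 111.1
print(float(haversine_km(37.77, -122.42, 37.77, -122.42)))  # → 0.0 (same station)
```

The zero result for identical coordinates is exactly the same-station round trips discussed below.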
### Discuss the distribution(s) of your variable(s) of interest. Were there any unusual points? Did you need to perform any transformations?
> It is right-skewed with a long tail. I am going to apply a log transformation to the x-axis. Also, there are lots of trips with zero distance; those can be trips where users returned bikes to the same station. This may look odd on the graph, but it makes logical sense.
```
bins = np.arange(0,df_copy['distance_km'].max()+0.1,0.1)
plt.hist(data = df_copy , x = 'distance_km' , bins = bins);
plt.xscale('log')
```
### Of the features you investigated, were there any unusual distributions? Did you perform any operations on the data to tidy, adjust, or change the form of the data? If so, why did you do this?
> In the step above, I performed a trial log transformation. As we can see, many distance entries are zero km; I believe this is because bikes were returned to the same location, so I will make no adjustment to the data.
```
np.log(df_copy['distance_km'].describe())
df_copy['distance_m'] = df_copy['distance_km'] * 1000
bin_edges = 10**np.arange(2 , 4+0.1 , 0.05)
ticks = [400,800,1600,3000,5500,10000]
labels = ['{}'.format(v) for v in ticks]
plt.hist(data = df_copy , x = 'distance_m' , bins = bin_edges);
#plt.xticks(ticks,labels);
plt.xlabel('Distance_M');
plt.xscale('log');
plt.title('Distribution of Trip Distance(s)')
plt.ylabel('Frequency')
plt.xticks(ticks,labels)
plt.show()
```
The plot above shows that most bike users ride for a short distance of around 1.6 km.
## Bivariate Exploration
Q2. Does trip distance depend on user group (subscriber vs. customer)?
```
ax = df_copy.groupby('user_type')['distance_km'].mean().plot(kind='barh', color=['red', 'green'], figsize=(12,6))
ax.set_title('Average trip distance per user type', fontsize=20, y=1.015)
ax.set_ylabel('user type')
ax.set_xlabel('distance [km]')
plt.show()
```
#### We can see that, on average, customers travel 1.7 km and subscribers travel 1.6 km.
```
df_copy['member_age'] = 2019-df_copy['member_birth_year']
#Generate a new field for member age group from member_age_bin
df_copy['member_age_bins'] = df_copy['member_age'].apply(lambda x: '10 - 20' if 10<x<=20
else '20 - 30' if 20<x<=30
else '30 - 40' if 30<x<=40
else '40 - 50' if 40<x<=50
else '50 - 60' if 50<x<=60
else 'other')
trip_age_df = df_copy.groupby('member_age_bins').agg({'bike_id':'count'})
trip_age_df['perc'] = (trip_age_df['bike_id']/trip_age_df['bike_id'].sum())*100
trip_age_df['perc'].plot(kind='bar',color='silver', figsize=(12,8))
plt.title('Percentage of all bike rides per age group', fontsize=22, y=1.015)
plt.xlabel('member age group', labelpad=16)
plt.ylabel('percentage (%) [rides]', labelpad=16)
plt.xticks(rotation=360)
plt.show()
```
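As an aside, the chained conditional used to build `member_age_bins` can be written more compactly with `pd.cut`; a minimal sketch on toy ages (the bin edges mirror the lambda above, except that ages outside (10, 60] become NaN rather than 'other'):

```python
import pandas as pd

ages = pd.Series([19, 25, 34, 47, 55, 71])
bins = pd.cut(ages,
              bins=[10, 20, 30, 40, 50, 60],   # right-closed: 10 < x <= 20, etc.
              labels=['10 - 20', '20 - 30', '30 - 40', '40 - 50', '50 - 60'])
print(bins.tolist())  # the 71-year-old falls outside the edges and becomes NaN
```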
### Talk about some of the relationships you observed in this part of the investigation. How did the feature(s) of interest vary with other features in the dataset?
> Subscriber's average trip distance is about 1.6 km.
> Customer's average trip distance is about 1.7 km.
### Did you observe any interesting relationships between the other features (not the main feature(s) of interest)?
> People aged 20-40 account for more than 65% of all bike rides.
## Multivariate Exploration
Q3. How does seasonality (weather condition) affect the trip distance of each user group? (Multivariate)
```
df_copy['month'] = pd.DatetimeIndex(df_copy['start_time']).month
#Generate a new field for season group from month
df_copy['season'] = df_copy['month'].apply(lambda x: 'Spring' if x in [3 , 4 , 5]
else 'Summer' if x in [6 , 7 , 8]
else 'Fall' if x in [9 , 10 , 11]
else 'Winter' if x in [12 , 1 , 2]
else 'unknown')
df_copy.head()
df_copy['season_num'] = df_copy['month'].apply(lambda x: '1' if x in [3 , 4 , 5]
else '2' if x in [6 , 7 , 8]
else '3' if x in [9 , 10 , 11]
else '4' if x in [12 , 1 , 2]
else '99')
df_copy['year'] = pd.DatetimeIndex(df_copy['start_time']).year
df_copy_2018 = df_copy[df_copy['year']==2018]
test_df = df_copy_2018.groupby('season').agg({'distance_km':'sum'})
#test_df['perc'] = (test_df['bike_id']/test_df['bike_id'].sum())*100
#trip_age_df['perc'].plot(kind='bar',color='silver', figsize=(12,8))
test_df.plot(kind='bar',color='silver', figsize=(12,8))
plt.title('Total distance traveled by bike per season', fontsize=22, y=1.015)
plt.xlabel('Season', labelpad=16)
plt.ylabel('Total distance traveled in KM', labelpad=16)
plt.xticks(rotation=360)
plt.show()
plt.figure(figsize=(14,8))
my_palette = {'20 - 30': 'deepskyblue', '30 - 40': 'navy', '40 - 50': 'lightgrey'}
ax = sb.barplot(x='season', y='distance_km', estimator=sum, hue='member_age_bins', palette=my_palette, data=df_copy_2018[df_copy_2018['member_age_bins'].isin(['20 - 30', '30 - 40', '40 - 50'])].sort_values(by=['season_num', 'member_age_bins']))
plt.title('The seasonality of bike rides for 20 to 50 years olds in year of 2018', fontsize=22, y=1.015)
plt.xlabel('Season', labelpad=16)
plt.ylabel('Total mileage in KM', labelpad=16)
leg = ax.legend()
leg.set_title('Member age group',prop={'size':16})
ax = plt.gca()
#ax.yaxis.set_major_formatter(tick.FuncFormatter(transform_axis_fmt))
```
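The month-to-season bucketing above can also be written as a plain dictionary lookup with `Series.map`, which avoids the chained ternaries; a minimal sketch on toy months:

```python
import pandas as pd

season_of_month = {3: 'Spring', 4: 'Spring', 5: 'Spring',
                   6: 'Summer', 7: 'Summer', 8: 'Summer',
                   9: 'Fall', 10: 'Fall', 11: 'Fall',
                   12: 'Winter', 1: 'Winter', 2: 'Winter'}

months = pd.Series([1, 4, 7, 10])
print(months.map(season_of_month).tolist())  # → ['Winter', 'Spring', 'Summer', 'Fall']
```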
### Talk about some of the relationships you observed in this part of the investigation. Were there features that strengthened each other in terms of looking at your feature(s) of interest?
> People aged 30-40 ride more than twice the mileage of people aged 40-50 in all four seasons.
> Summer is the most popular season of the year across all age groups.
### Were there any interesting or surprising interactions between features?
> It's interesting to see that people prefer fall weather to spring (more total mileage in fall than in spring); I think this is because there are generally more rainy days in spring than in fall [1].
[1] "First month of spring is rainy with average temperature about 13 °C (55 °F)." reference:https://seasonsyear.com/USA/San-Francisco
```
df_copy.to_csv('final_df.csv')
```
### Reference
https://github.com/CICIFLY/Data-Analytics-Projects/blob/master/Communicate%20Data%20Findings%20with%20Ford%20Bike%20Sharing%20Data/Data-Exploration-with-Bike-Data.ipynb
https://github.com/juliaYi/Ford-GoBike-Data-Visualization/blob/master/ford_bike_data_analysis.ipynb
https://github.com/seby-sbirna/Data-Analyst-Nanodegree-Project-Portfolio/blob/master/Project%205%20-%20Communicate%20Data%20Findings/Project%205%20-%20Sebastian%20Sbirna.ipynb
https://stackoverflow.com/questions/29545704/fast-haversine-approximation-python-pandas?noredirect=1&lq=1
https://stackoverflow.com/questions/6282058/writing-numerical-values-on-the-plot-with-matplotlib
| github_jupyter |
```
### RNN description and code by A. Karpathy
# - http://karpathy.github.io/2015/05/21/rnn-effectiveness/
# - https://gist.github.com/karpathy/d4dee566867f8291f086
import numpy as np
def lossFun(inputs, targets, hprev):
"""
inputs,targets are both list of integers.
hprev is Hx1 array of initial hidden state
returns the loss, gradients on model parameters, and last hidden state
"""
xs, hs, ys, ps = {}, {}, {}, {}
hs[-1] = np.copy(hprev)
loss = 0
# forward pass
for t in range(len(inputs)):
xs[t] = np.zeros((vocab_size,1)) # encode in 1-of-k representation
xs[t][inputs[t]] = 1
hs[t] = np.tanh(np.dot(Wxh, xs[t]) + np.dot(Whh, hs[t-1]) + bh) # hidden state
ys[t] = np.dot(Why, hs[t]) + by # unnormalized log probabilities for next chars
ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t])) # probabilities for next chars
loss += -np.log(ps[t][targets[t],0]) # softmax (cross-entropy loss)
# backward pass: compute gradients going backwards
dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
dbh, dby = np.zeros_like(bh), np.zeros_like(by)
dhnext = np.zeros_like(hs[0])
for t in reversed(range(len(inputs))):
dy = np.copy(ps[t])
dy[targets[t]] -= 1 # backprop into y. see http://cs231n.github.io/neural-networks-case-study/#grad if confused here
dWhy += np.dot(dy, hs[t].T)
dby += dy
dh = np.dot(Why.T, dy) + dhnext # backprop into h
dhraw = (1 - hs[t] * hs[t]) * dh # backprop through tanh nonlinearity
dbh += dhraw
dWxh += np.dot(dhraw, xs[t].T)
dWhh += np.dot(dhraw, hs[t-1].T)
dhnext = np.dot(Whh.T, dhraw)
for dparam in [dWxh, dWhh, dWhy, dbh, dby]:
np.clip(dparam, -5, 5, out=dparam) # clip to mitigate exploding gradients
return loss, dWxh, dWhh, dWhy, dbh, dby, hs[len(inputs)-1]
def sample(h, seed_ix, n):
"""
sample a sequence of integers from the model
h is memory state, seed_ix is seed letter for first time step
"""
x = np.zeros((vocab_size, 1))
x[seed_ix] = 1
ixes = []
for t in range(n):
h = np.tanh(np.dot(Wxh, x) + np.dot(Whh, h) + bh)
y = np.dot(Why, h) + by
p = np.exp(y) / np.sum(np.exp(y))
ix = np.random.choice(range(vocab_size), p=p.ravel())
x = np.zeros((vocab_size, 1))
x[ix] = 1
ixes.append(ix)
return ixes
# data I/O
data = open('data/input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print ('data has %d characters, %d unique.' % (data_size, vocab_size))
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
# hyperparameters
hidden_size = 100 # size of hidden layer of neurons
seq_length = 25 # number of steps to unroll the RNN for
learning_rate = 1e-1
# model parameters
Wxh = np.random.randn(hidden_size, vocab_size)*0.01 # input to hidden
Whh = np.random.randn(hidden_size, hidden_size)*0.01 # hidden to hidden
Why = np.random.randn(vocab_size, hidden_size)*0.01 # hidden to output
bh = np.zeros((hidden_size, 1)) # hidden bias
by = np.zeros((vocab_size, 1)) # output bias
n, p = 0, 0
mWxh, mWhh, mWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
mbh, mby = np.zeros_like(bh), np.zeros_like(by) # memory variables for Adagrad
smooth_loss = -np.log(1.0/vocab_size)*seq_length # loss at iteration 0
# We will run until counter_max is reached.
# The original was an infinite loop.
counter = 0
counter_max = 10000
while counter < counter_max:
#while True:
# prepare inputs (we're sweeping from left to right in steps seq_length long)
if p+seq_length+1 >= len(data) or n == 0:
hprev = np.zeros((hidden_size,1)) # reset RNN memory
p = 0 # go from start of data
inputs = [char_to_ix[ch] for ch in data[p:p+seq_length]]
targets = [char_to_ix[ch] for ch in data[p+1:p+seq_length+1]]
# sample from the model now and then
if n % 100 == 0:
sample_ix = sample(hprev, inputs[0], 200)
txt = ''.join(ix_to_char[ix] for ix in sample_ix)
print ('----\n %s \n----' % (txt, ))
# forward seq_length characters through the net and fetch gradient
loss, dWxh, dWhh, dWhy, dbh, dby, hprev = lossFun(inputs, targets, hprev)
smooth_loss = smooth_loss * 0.999 + loss * 0.001
if n % 100 == 0: print ('iter %d, loss: %f' % (n, smooth_loss)) # print progress
# perform parameter update with Adagrad
for param, dparam, mem in zip([Wxh, Whh, Why, bh, by],
[dWxh, dWhh, dWhy, dbh, dby],
[mWxh, mWhh, mWhy, mbh, mby]):
mem += dparam * dparam
param += -learning_rate * dparam / np.sqrt(mem + 1e-8) # adagrad update
p += seq_length # move data pointer
n += 1 # iteration counter
counter += 1
```
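The `dy[targets[t]] -= 1` line above relies on the identity that the gradient of the softmax cross-entropy loss with respect to the logits is `p - one_hot(target)`. A small self-contained numerical check of that identity (the logits and target below are arbitrary):

```python
import numpy as np

logits = np.array([[1.0], [-0.5], [2.0], [0.3]])  # (vocab_size, 1), as in lossFun
target = 2

def loss(y):
    p = np.exp(y) / np.sum(np.exp(y))
    return -np.log(p[target, 0])

# Analytic gradient: p - one_hot(target), i.e. the dy[target] -= 1 trick
p = np.exp(logits) / np.sum(np.exp(logits))
analytic = np.copy(p)
analytic[target] -= 1

# Central-difference numerical gradient
eps = 1e-5
numeric = np.zeros_like(logits)
for i in range(logits.shape[0]):
    bump = np.zeros_like(logits)
    bump[i, 0] = eps
    numeric[i, 0] = (loss(logits + bump) - loss(logits - bump)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)) < 1e-6)  # → True
```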
| github_jupyter |
<a href="https://cognitiveclass.ai/">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center">
</a>
<h1>Dictionaries in Python</h1>
<p><strong>Welcome!</strong> This notebook will teach you about dictionaries in the Python programming language. By the end of this lab, you'll know the basics of dictionary operations in Python, including what a dictionary is and the operations you can perform on it.</p>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="http://cocl.us/topNotebooksPython101Coursera">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center">
</a>
</div>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li>
<a href="#dic">Dictionaries</a>
<ul>
<li><a href="#content">What are Dictionaries?</a></li>
<li><a href="#key">Keys</a></li>
</ul>
</li>
<li>
<a href="#quiz">Quiz on Dictionaries</a>
</li>
</ul>
<p>
Estimated time needed: <strong>20 min</strong>
</p>
</div>
<hr>
<h2 id="dic">Dictionaries</h2>
<h3 id="content">What are Dictionaries?</h3>
A dictionary consists of keys and values. It is helpful to compare a dictionary to a list. Instead of the numerical indexes that a list uses, dictionaries have keys, and these keys are used to access the values within the dictionary.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsList.png" width="650" />
An example of a Dictionary <code>Dict</code>:
```
# Create the dictionary
Dict = {"key1": 1, "key2": "2", "key3": [3, 3, 3], "key4": (4, 4, 4), ('key5'): 5, (0, 1): 6}
Dict
```
The keys can be strings:
```
# Access to the value by the key
Dict["key1"]
```
Keys can also be any immutable object such as a tuple:
```
# Access to the value by the key
Dict[(0, 1)]
```
Each key is separated from its value by a colon "<code>:</code>". Commas separate the items, and the whole dictionary is enclosed in curly braces. An empty dictionary without any items is written with just two curly braces, like this "<code>{}</code>".
```
# Create a sample dictionary
release_year_dict = {"Thriller": "1982", "Back in Black": "1980", \
"The Dark Side of the Moon": "1973", "The Bodyguard": "1992", \
"Bat Out of Hell": "1977", "Their Greatest Hits (1971-1975)": "1976", \
"Saturday Night Fever": "1977", "Rumours": "1977"}
release_year_dict
```
In summary, like a list, a dictionary holds a collection of elements. Each element is represented by a key and its corresponding value. Dictionaries are created with two curly braces containing keys and values separated by a colon. For every key there can only be one single value; however, multiple keys can hold the same value. Keys can only be strings, numbers, or tuples, but values can be any data type.
It is helpful to visualize the dictionary as a table, as in the following image. The first column represents the keys, the second column represents the values.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsStructure.png" width="650" />
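Two of these rules are easy to see in a couple of lines: assigning to an existing key overwrites its single value, and a mutable object such as a list cannot be used as a key:

```python
d = {"a": 1, "b": 1}    # multiple keys may hold the same value
d["a"] = 99             # re-assigning an existing key replaces its value
print(d)                # → {'a': 99, 'b': 1}

try:
    d[[1, 2]] = "oops"  # a list is mutable, hence unhashable
except TypeError as err:
    print("Lists cannot be keys:", err)
```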
<h3 id="key">Keys</h3>
You can retrieve the values based on the names:
```
# Get value by keys
release_year_dict['Thriller']
```
This corresponds to:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsKeyOne.png" width="500" />
Similarly for <b>The Bodyguard</b>
```
# Get value by key
release_year_dict['The Bodyguard']
```
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsKeyTwo.png" width="500" />
Now retrieve the keys of the dictionary using the method <code>keys()</code>:
```
# Get all the keys in dictionary
release_year_dict.keys()
```
You can retrieve the values using the method <code>values()</code>:
```
# Get all the values in dictionary
release_year_dict.values()
```
We can add an entry:
```
# Append value with key into dictionary
release_year_dict['Graduation'] = '2007'
release_year_dict
```
We can delete an entry:
```
# Delete entries by key
del(release_year_dict['Thriller'])
del(release_year_dict['Graduation'])
release_year_dict
```
We can verify if an element is in the dictionary:
```
# Verify the key is in the dictionary
'The Bodyguard' in release_year_dict
```
<hr>
<h2 id="quiz">Quiz on Dictionaries</h2>
<b>You will need this dictionary for the next two questions:</b>
```
# Question sample dictionary
soundtrack_dic = {"The Bodyguard":"1992", "Saturday Night Fever":"1977"}
soundtrack_dic
```
a) In the dictionary <code>soundtrack_dic</code>, what are the keys?
```
# Write your code below and press Shift+Enter to execute
soundtrack_dic.keys()
```
Double-click __here__ for the solution.
<!-- Your answer is below:
soundtrack_dic.keys() # The Keys "The Bodyguard" and "Saturday Night Fever"
-->
b) In the dictionary <code>soundtrack_dic</code>, what are the values?
```
# Write your code below and press Shift+Enter to execute
soundtrack_dic.values()
```
Double-click __here__ for the solution.
<!-- Your answer is below:
soundtrack_dic.values() # The values are "1992" and "1977"
-->
<hr>
<b>You will need this dictionary for the following questions:</b>
The albums <b>Back in Black</b>, <b>The Bodyguard</b> and <b>Thriller</b> have music recording sales of 50, 50 and 65 million, respectively:
a) Create a dictionary <code>album_sales_dict</code> where the keys are the album name and the sales in millions are the values.
```
# Write your code below and press Shift+Enter to execute
album_sales_dict = {"Back in Black":50 , "The Bodyguard" : 50 , "Thriller" : 65}
album_sales_dict
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_sales_dict = {"The Bodyguard":50, "Back in Black":50, "Thriller":65}
-->
b) Use the dictionary to find the total sales of <b>Thriller</b>:
```
# Write your code below and press Shift+Enter to execute
album_sales_dict["Thriller"]
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_sales_dict["Thriller"]
-->
c) Find the names of the albums from the dictionary using the method <code>keys</code>:
```
# Write your code below and press Shift+Enter to execute
album_sales_dict.keys()
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_sales_dict.keys()
-->
d) Find the names of the recording sales from the dictionary using the method <code>values</code>:
```
# Write your code below and press Shift+Enter to execute
album_sales_dict.values()
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_sales_dict.values()
-->
<hr>
<h2>The last exercise!</h2>
<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.</p>
<hr>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<h2>Get IBM Watson Studio free of charge!</h2>
<p><a href="https://cocl.us/bottemNotebooksPython101Coursera"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p>
</div>
<h3>About the Authors:</h3>
<p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>
Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
| github_jupyter |
<a href="https://colab.research.google.com/github/indahpuspitaa17/DeepLearning.AI-TensorFlow-Developer/blob/main/Course_1_Part_6_Lesson_3_Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
Let's explore how convolutions work by creating a basic convolution on a 2D grayscale image. First, we can load the image by taking the 'ascent' image from SciPy. It's a nice, built-in picture with lots of angles and lines.
```
import cv2
import numpy as np
from scipy import misc
i = misc.ascent()
```
Next, we can use the pyplot library to draw the image so we know what it looks like.
```
import matplotlib.pyplot as plt
plt.grid(False)
plt.gray()
plt.axis('off')
plt.imshow(i)
plt.show()
```
The image is stored as a numpy array, so we can create the transformed image by just copying that array. Let's also get the dimensions of the image so we can loop over it later.
```
i_transformed = np.copy(i)
size_x = i_transformed.shape[0]
size_y = i_transformed.shape[1]
```
Now we can create a filter as a 3x3 array.
```
# This filter detects edges nicely
# It creates a convolution that only passes through sharp edges and straight
# lines.
#Experiment with different values for fun effects.
#filter = [ [0, 1, 0], [1, -4, 1], [0, 1, 0]]
# A couple more filters to try for fun!
filter = [ [-1, -2, -1], [0, 0, 0], [1, 2, 1]]
#filter = [ [-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
# If the values in the filter don't add up to 0 or 1, you should
# apply a weight to normalize them. For example, if your weights are
# 1,1,1 1,2,1 1,1,1 they add up to 10, so you would set a weight of 0.1
# to normalize them.
weight = 1
```
Now let's create a convolution. We will iterate over the image, leaving a 1-pixel margin, and multiply each of the current pixel's neighbors by the corresponding value in the filter.
That is, the current pixel's neighbor above it and to the left will be multiplied by the top-left item in the filter, and so on. We'll then multiply the result by the weight, and clamp the result to the range 0-255.
Finally, we'll load the new value into the transformed image.
```
for x in range(1,size_x-1):
for y in range(1,size_y-1):
convolution = 0.0
convolution = convolution + (i[x - 1, y-1] * filter[0][0])
convolution = convolution + (i[x, y-1] * filter[0][1])
convolution = convolution + (i[x + 1, y-1] * filter[0][2])
convolution = convolution + (i[x-1, y] * filter[1][0])
convolution = convolution + (i[x, y] * filter[1][1])
convolution = convolution + (i[x+1, y] * filter[1][2])
convolution = convolution + (i[x-1, y+1] * filter[2][0])
convolution = convolution + (i[x, y+1] * filter[2][1])
convolution = convolution + (i[x+1, y+1] * filter[2][2])
convolution = convolution * weight
if(convolution<0):
convolution=0
if(convolution>255):
convolution=255
i_transformed[x, y] = convolution
```
Now we can plot the image to see the effect of the convolution!
```
# Plot the image. Note the size of the axes -- they are 512 by 512
plt.gray()
plt.grid(False)
plt.imshow(i_transformed)
#plt.axis('off')
plt.show()
```
This code will show a (2, 2) pooling. The idea here is to iterate over the image and look at each pixel and its immediate neighbors to the right, beneath, and to the lower right. Take the largest of them and load it into the new image. Thus the new image will be 1/4 the size of the old one, with the X and Y dimensions halved by this process. You'll see that the features are maintained despite this compression!
```
new_x = int(size_x/2)
new_y = int(size_y/2)
newImage = np.zeros((new_x, new_y))
for x in range(0, size_x, 2):
for y in range(0, size_y, 2):
pixels = []
pixels.append(i_transformed[x, y])
pixels.append(i_transformed[x+1, y])
pixels.append(i_transformed[x, y+1])
pixels.append(i_transformed[x+1, y+1])
newImage[int(x/2),int(y/2)] = max(pixels)
# Plot the image. Note the size of the axes -- now 256 pixels instead of 512
plt.gray()
plt.grid(False)
plt.imshow(newImage)
#plt.axis('off')
plt.show()
```
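As an aside, the nested pooling loop above can be replaced by a reshape-and-max trick in NumPy; a small sketch on a toy array (the same (2, 2) max pooling, assuming the dimensions are even):

```python
import numpy as np

img = np.arange(16).reshape(4, 4)
# Split the image into (2, 2) tiles, then take the max within each tile
pooled = img.reshape(4 // 2, 2, 4 // 2, 2).max(axis=(1, 3))
print(pooled.tolist())  # → [[5, 7], [13, 15]]
```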
| github_jupyter |
# Use Encryption for privacy
This notebook outlines some simple examples to show how easy it can be to use encryption techniques to protect privacy.
In general, using encryption helps to protect private information in two ways:
* For data at rest: this includes all private information, so-called storage objects, that exists on physical media in any form, e.g. magnetic or optical disks, SSDs, etc.
* For data in transit: when personal data is being transferred between systems, system components, or programs, such as over the network, through internal APIs, or across a service bus. In short, all data that is 'in motion'.
But never develop your own encryption algorithms. We know it is fun, improves learning and makes you an excellent hacker. However, the art of developing a good encryption algorithm is difficult and the bar is very high.
Below are some simple Python examples, based on proven Python encryption algorithms.
## cryptography
The Cryptography library (https://cryptography.io/) includes both high level recipes and low level interfaces to common cryptographic algorithms such as symmetric ciphers, message digests, and key derivation functions.
Fernet is an implementation of symmetric (also known as “secret key”) authenticated cryptography. Fernet is ideal for encrypting data that easily fits in memory. As a design feature it does not expose unauthenticated bytes.
```
# Encryption algorithms work on bytes most of the time.
string='This is a test string to be encoded'.encode('UTF-8')
print(string)
#decode string:
decoded_string=string.decode('UTF-8')
print(decoded_string)
from cryptography.fernet import Fernet
key = Fernet.generate_key()
cipher_suite = Fernet(key)
cipher_text = cipher_suite.encrypt(string)
plain_text = cipher_suite.decrypt(cipher_text)
print(cipher_text)
print(plain_text)
print(plain_text.decode('UTF-8'))
```
## AES
To use AES you need to create a key and an IV (initialization vector). The key is shared between the pair; the IV can be different for each message.
Below is a sample using the Python Cryptography Toolkit https://www.dlitz.net/software/pycrypto/ (https://github.com/dlitz/pycrypto). This library is a collection of both secure hash functions (such as SHA256 and RIPEMD160) and various encryption algorithms (AES, DES, RSA, ElGamal, etc.).
```
from Crypto.Cipher import AES

key = b'ABCDEFGHIJ123456'  # 16 bytes -> AES-128; the shared secret
iv = b'1234567890ZYXWVU'   # 16 bytes; use a fresh random IV in practice
cipher = AES.new(key, AES.MODE_CFB, iv)
data = b'Encryption should be used to make life with GDPR simpler'
msg = iv + cipher.encrypt(data)  # prepend the IV so the receiver can use it
# Decrypt with a *new* cipher object, initialized from the transmitted IV
# (reusing the encrypting cipher would continue its keystream and produce garbage):
decrypt_cipher = AES.new(key, AES.MODE_CFB, msg[:16])
plain = decrypt_cipher.decrypt(msg[16:])
print(plain)
```
In general you MUST use a crypto library that is actively maintained. Preferably the project also holds a CII badge.
So always check: https://bestpractices.coreinfrastructure.org/en/projects?gteq=100
_The pycrypto library is no longer actively maintained, so DO NOT use it for production._ There is a fork, https://pypi.org/project/pycryptodome/, that has good documentation (https://www.pycryptodome.org/en/latest/) and is actively maintained.
```
from Crypto.Cipher import AES
import base64
def cypher_aes(secret_key, msg_text, encrypt=True):
    # An AES key must be exactly 16, 24, or 32 bytes long:
    # pad the key with spaces and truncate it to (at most) 32 bytes.
    remainder = len(secret_key) % 16
    modified_key = secret_key.ljust(len(secret_key) + (16 - remainder))[:32]
    print(modified_key)
    # ECB also requires the message length to be a multiple of 16 bytes,
    # so pad the message with spaces as well (on the decrypt path,
    # base64.b64decode simply discards the padding spaces).
    remainder = len(msg_text) % 16
    modified_text = msg_text.ljust(len(msg_text) + (16 - remainder))
    print(modified_text)
    # Use of ECB mode in enterprise environments is very much frowned upon:
    # it leaks plaintext patterns. It is used here only to keep the demo short.
    cipher = AES.new(modified_key, AES.MODE_ECB)
    if encrypt:
        return base64.b64encode(cipher.encrypt(modified_text)).strip()
    return cipher.decrypt(base64.b64decode(modified_text)).strip()
#encrypted = cypher_aes(b'secret_AES_key_string_to_encrypt/decrypt_with', b'input_string_to_encrypt/decrypt', encrypt=True)
encrypted = cypher_aes(string, b'Encryption must be done good! Test,test and test!', encrypt=True)
print(encrypted)
print()
print('A wrong decryption string will not work:')
print(cypher_aes(b'tst', encrypted, encrypt=False))
print()
print('Now the correct string:')
print(cypher_aes(string, encrypted, encrypt=False))
```
## Hashing
Python ships a nice default hashing library: `hashlib`. It implements a common interface to many different secure hash and message digest algorithms; e.g. SHA1, SHA224, SHA256, SHA384, and SHA512 are supported.
```
import hashlib
m = hashlib.sha256()
m.update(b"Never make the mistake")
m.update(b" of assuming that hashing alone protects privacy information!")
print(m.digest())
print(m.digest_size)
print(m.block_size)
```
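One caveat: plain, unsalted hashes are not a safe way to store passwords. For that, use a salted and deliberately slow key derivation function. A sketch with the standard library's `hashlib.pbkdf2_hmac` (the helper names and the iteration count are our own choices):

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None, iterations: int = 200_000):
    # A fresh random salt per password defeats precomputed (rainbow) tables.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes, iterations: int = 200_000):
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, iterations)
    return candidate == expected

salt, digest = hash_password('correct horse battery staple')
print(verify_password('correct horse battery staple', salt, digest))  # True
print(verify_password('wrong guess', salt, digest))                   # False
```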
# Continuous Control
---
In this notebook, you will learn how to use the Unity ML-Agents environment for the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program.
### 1. Start the Environment
We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
from unityagents import UnityEnvironment
import numpy as np
from ddpg_agent import Agent
from collections import deque
import time
import torch
from itertools import count
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Reacher.app"`
- **Windows** (x86): `"path/to/Reacher_Windows_x86/Reacher.exe"`
- **Windows** (x86_64): `"path/to/Reacher_Windows_x86_64/Reacher.exe"`
- **Linux** (x86): `"path/to/Reacher_Linux/Reacher.x86"`
- **Linux** (x86_64): `"path/to/Reacher_Linux/Reacher.x86_64"`
- **Linux** (x86, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86"`
- **Linux** (x86_64, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86_64"`
For instance, if you are using a Mac, then you downloaded `Reacher.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Reacher.app")
```
```
env = UnityEnvironment(file_name='env/Reacher_Linux_NoVis/Reacher.x86')
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
brain_name = env.brain_names[0]
print(brain_name)
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
In this environment, a double-jointed arm can move to target locations. A reward of `+0.1` is provided for each step that the agent's hand is in the goal location. Thus, the goal of your agent is to maintain its position at the target location for as many time steps as possible.
The observation space consists of `33` variables corresponding to position, rotation, velocity, and angular velocities of the arm. Each action is a vector with four numbers, corresponding to torque applicable to two joints. Every entry in the action vector must be a number between `-1` and `1`.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
```
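Before training, it can help to sanity-check the action format by sampling random actions: every entry must lie in `[-1, 1]`, so Gaussian noise is clipped. A minimal sketch (the sizes below are assumed for the 20-agent Reacher variant; actually stepping the environment still requires the Unity setup above):

```python
import numpy as np

num_agents, action_size = 20, 4  # assumed sizes: 20 agents, 4 joint torques

# One random action vector per agent, clipped into the valid range.
actions = np.clip(np.random.randn(num_agents, action_size), -1, 1)
print(actions.shape)  # (20, 4)
```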
### 4. Train!
```
env_info = env.reset(train_mode=True)[brain_name]
agent = Agent(state_size=state_size, action_size=action_size, random_seed=15)
avg_over = 100
print_every = 10
def ddpg(n_episodes=200):
scores_deque = deque(maxlen=avg_over)
scores_global = []
average_global = []
best_avg = -np.inf
solved = False
tic = time.time()
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=True)[brain_name]
states = env_info.vector_observations
scores = np.zeros(num_agents)
agent.reset()
score_average = 0
timestep = time.time()
for t in count():
actions = agent.act(states, add_noise=True)
            env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
agent.step(states, actions, rewards, next_states, dones, t)
states = next_states # roll over states to next time step
scores += rewards # update the score (for each agent)
if np.any(dones): # exit loop if episode finished
break
score = np.mean(scores)
scores_deque.append(score)
score_average = np.mean(scores_deque)
scores_global.append(score)
average_global.append(score_average)
print('\rEpisode {}, Average Score: {:.2f}, Max Score: {:.2f}, Min Score: {:.2f}, Time per Episode: {:.2f}'\
.format(i_episode, score_average, np.max(scores), np.min(scores), time.time() - timestep), end="\n")
if i_episode % print_every == 0:
torch.save(agent.actor_local.state_dict(), 'checkpoint_actor.pth')
torch.save(agent.critic_local.state_dict(), 'checkpoint_critic.pth')
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
if score_average >= 30.0:
toc = time.time()
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}, training time: {}'.format(i_episode, score_average, toc-tic))
torch.save(agent.actor_local.state_dict(), 'best_checkpoint_actor.pth')
torch.save(agent.critic_local.state_dict(), 'best_checkpoint_critic.pth')
            break
return scores_global, average_global
scores, average_scores = ddpg()
import matplotlib.pyplot as plt
%matplotlib inline
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
# env.close()
```
## Dependencies
```
import os
import sys
import cv2
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import multiprocessing as mp
import matplotlib.pyplot as plt
from tensorflow import set_random_seed
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from keras import backend as K
from keras.models import Model
from keras.utils import to_categorical
from keras import optimizers, applications
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler, ModelCheckpoint
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
    set_random_seed(seed)
seed = 0
seed_everything(seed)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
sys.path.append(os.path.abspath('../input/efficientnet/efficientnet-master/efficientnet-master/'))
from efficientnet import *
```
## Load data
```
hold_out_set = pd.read_csv('../input/aptos-data-split/hold-out.csv')
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')
print('Number of train samples: ', X_train.shape[0])
print('Number of validation samples: ', X_val.shape[0])
print('Number of test samples: ', test.shape[0])
# Preprocess data
X_train["id_code"] = X_train["id_code"].apply(lambda x: x + ".png")
X_val["id_code"] = X_val["id_code"].apply(lambda x: x + ".png")
test["id_code"] = test["id_code"].apply(lambda x: x + ".png")
display(X_train.head())
```
# Model parameters
```
# Model parameters
model_path = '../working/effNetB5_img224_noBen.h5'
FACTOR = 4
BATCH_SIZE = 8 * FACTOR
EPOCHS = 10
LEARNING_RATE = 1e-5 * FACTOR
HEIGHT = 224
WIDTH = 224
CHANNELS = 3
TTA_STEPS = 5
ES_PATIENCE = 5
RLROP_PATIENCE = 3
LR_WARMUP_EPOCHS = 3
STEP_SIZE = len(X_train) // BATCH_SIZE
TOTAL_STEPS = EPOCHS * STEP_SIZE
WARMUP_STEPS = LR_WARMUP_EPOCHS * STEP_SIZE
```
# Pre-procecess images
```
new_data_base_path = '../input/aptos2019-blindness-detection/train_images/'
test_base_path = '../input/aptos2019-blindness-detection/test_images/'
train_dest_path = 'base_dir/train_images/'
validation_dest_path = 'base_dir/validation_images/'
test_dest_path = 'base_dir/test_images/'
# Making sure directories don't exist
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
# Creating train, validation and test directories
os.makedirs(train_dest_path)
os.makedirs(validation_dest_path)
os.makedirs(test_dest_path)
def crop_image(img, tol=7):
if img.ndim ==2:
mask = img>tol
return img[np.ix_(mask.any(1),mask.any(0))]
elif img.ndim==3:
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
mask = gray_img>tol
check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0]
if (check_shape == 0): # image is too dark so that we crop out everything,
return img # return original image
else:
img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))]
img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))]
img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))]
img = np.stack([img1,img2,img3],axis=-1)
return img
def circle_crop(img):
img = crop_image(img)
height, width, depth = img.shape
largest_side = np.max((height, width))
img = cv2.resize(img, (largest_side, largest_side))
height, width, depth = img.shape
x = width//2
y = height//2
r = np.amin((x, y))
circle_img = np.zeros((height, width), np.uint8)
cv2.circle(circle_img, (x, y), int(r), 1, thickness=-1)
img = cv2.bitwise_and(img, img, mask=circle_img)
img = crop_image(img)
return img
def preprocess_image(image_id, base_path, save_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
image = cv2.imread(base_path + image_id)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = circle_crop(image)
image = cv2.resize(image, (HEIGHT, WIDTH))
# image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128)
cv2.imwrite(save_path + image_id, image)
def preprocess_data(df, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
item = df.iloc[i]
image_id = item['id_code']
item_set = item['set']
if item_set == 'train':
preprocess_image(image_id, new_data_base_path, train_dest_path)
if item_set == 'validation':
preprocess_image(image_id, new_data_base_path, validation_dest_path)
def preprocess_test(df, base_path=test_base_path, save_path=test_dest_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
image_id = df.iloc[i]['id_code']
preprocess_image(image_id, base_path, save_path)
n_cpu = mp.cpu_count()
train_n_cnt = X_train.shape[0] // n_cpu
val_n_cnt = X_val.shape[0] // n_cpu
test_n_cnt = test.shape[0] // n_cpu
# Pre-process train set
pool = mp.Pool(n_cpu)
dfs = [X_train.iloc[train_n_cnt*i:train_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_train.iloc[train_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process validation set
pool = mp.Pool(n_cpu)
dfs = [X_val.iloc[val_n_cnt*i:val_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_val.iloc[val_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process test set
pool = mp.Pool(n_cpu)
dfs = [test.iloc[test_n_cnt*i:test_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = test.iloc[test_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_test, [x_df for x_df in dfs])
pool.close()
```
# Data generator
```
datagen=ImageDataGenerator(rescale=1./255,
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
train_generator=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
valid_generator=datagen.flow_from_dataframe(
dataframe=X_val,
directory=validation_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
test_generator=datagen.flow_from_dataframe(
dataframe=test,
directory=test_dest_path,
x_col="id_code",
batch_size=1,
class_mode=None,
shuffle=False,
target_size=(HEIGHT, WIDTH),
seed=seed)
def classify(x):
if x < 0.5:
return 0
elif x < 1.5:
return 1
elif x < 2.5:
return 2
elif x < 3.5:
return 3
return 4
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']
def plot_confusion_matrix(train, validation, labels=labels):
train_labels, train_preds = train
validation_labels, validation_preds = validation
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8),ax=ax2).set_title('Validation')
plt.show()
def plot_metrics(history, figsize=(20, 14)):
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=figsize)
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
def apply_tta(model, generator, steps=10):
step_size = generator.n//generator.batch_size
preds_tta = []
for i in range(steps):
generator.reset()
preds = model.predict_generator(generator, steps=step_size)
preds_tta.append(preds)
return np.mean(preds_tta, axis=0)
def evaluate_model(train, validation):
train_labels, train_preds = train
validation_labels, validation_preds = validation
print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))
print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))
print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(np.append(train_preds, validation_preds), np.append(train_labels, validation_labels), weights='quadratic'))
def cosine_decay_with_warmup(global_step,
learning_rate_base,
total_steps,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0):
"""
Cosine decay schedule with warm up period.
In this schedule, the learning rate grows linearly from warmup_learning_rate
to learning_rate_base for warmup_steps, then transitions to a cosine decay
schedule.
:param global_step {int}: global step.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
:Returns : a float representing learning rate.
:Raises ValueError: if warmup_learning_rate is larger than learning_rate_base, or if warmup_steps is larger than total_steps.
"""
if total_steps < warmup_steps:
raise ValueError('total_steps must be larger or equal to warmup_steps.')
learning_rate = 0.5 * learning_rate_base * (1 + np.cos(
np.pi *
(global_step - warmup_steps - hold_base_rate_steps
) / float(total_steps - warmup_steps - hold_base_rate_steps)))
if hold_base_rate_steps > 0:
learning_rate = np.where(global_step > warmup_steps + hold_base_rate_steps,
learning_rate, learning_rate_base)
if warmup_steps > 0:
if learning_rate_base < warmup_learning_rate:
raise ValueError('learning_rate_base must be larger or equal to warmup_learning_rate.')
slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
warmup_rate = slope * global_step + warmup_learning_rate
learning_rate = np.where(global_step < warmup_steps, warmup_rate,
learning_rate)
return np.where(global_step > total_steps, 0.0, learning_rate)
class WarmUpCosineDecayScheduler(Callback):
"""Cosine decay with warmup learning rate scheduler"""
def __init__(self,
learning_rate_base,
total_steps,
global_step_init=0,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0,
verbose=0):
"""
Constructor for cosine decay with warmup learning rate scheduler.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param global_step_init {int}: initial global step, e.g. from previous checkpoint.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
        :param verbose {int}: 0: quiet, 1: update messages. (default: {0}).
"""
super(WarmUpCosineDecayScheduler, self).__init__()
self.learning_rate_base = learning_rate_base
self.total_steps = total_steps
self.global_step = global_step_init
self.warmup_learning_rate = warmup_learning_rate
self.warmup_steps = warmup_steps
self.hold_base_rate_steps = hold_base_rate_steps
self.verbose = verbose
self.learning_rates = []
def on_batch_end(self, batch, logs=None):
self.global_step = self.global_step + 1
lr = K.get_value(self.model.optimizer.lr)
self.learning_rates.append(lr)
def on_batch_begin(self, batch, logs=None):
lr = cosine_decay_with_warmup(global_step=self.global_step,
learning_rate_base=self.learning_rate_base,
total_steps=self.total_steps,
warmup_learning_rate=self.warmup_learning_rate,
warmup_steps=self.warmup_steps,
hold_base_rate_steps=self.hold_base_rate_steps)
K.set_value(self.model.optimizer.lr, lr)
if self.verbose > 0:
print('\nBatch %02d: setting learning rate to %s.' % (self.global_step + 1, lr))
```
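As a quick sanity check, the schedule above can be probed at a few characteristic steps. The sketch below restates `cosine_decay_with_warmup()` so it runs standalone; the sample values are just illustrative:

```python
import numpy as np

def cosine_decay_with_warmup(global_step, learning_rate_base, total_steps,
                             warmup_learning_rate=0.0, warmup_steps=0,
                             hold_base_rate_steps=0):
    # Cosine decay from learning_rate_base down to 0 after the warmup period.
    learning_rate = 0.5 * learning_rate_base * (1 + np.cos(
        np.pi * (global_step - warmup_steps - hold_base_rate_steps)
        / float(total_steps - warmup_steps - hold_base_rate_steps)))
    if hold_base_rate_steps > 0:
        learning_rate = np.where(global_step > warmup_steps + hold_base_rate_steps,
                                 learning_rate, learning_rate_base)
    if warmup_steps > 0:
        # Linear ramp from warmup_learning_rate up to learning_rate_base.
        slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
        warmup_rate = slope * global_step + warmup_learning_rate
        learning_rate = np.where(global_step < warmup_steps, warmup_rate, learning_rate)
    return np.where(global_step > total_steps, 0.0, learning_rate)

print(cosine_decay_with_warmup(0,    0.001, 1000, warmup_steps=100))  # 0.0: warmup start
print(cosine_decay_with_warmup(100,  0.001, 1000, warmup_steps=100))  # 0.001: peak after warmup
print(cosine_decay_with_warmup(1001, 0.001, 1000, warmup_steps=100))  # 0.0: past the end
```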
# Model
```
def create_model(input_shape):
input_tensor = Input(shape=input_shape)
base_model = EfficientNetB5(weights=None,
include_top=False,
input_tensor=input_tensor)
# base_model.load_weights('../input/efficientnet-keras-weights-b0b5/efficientnet-b5_imagenet_1000_notop.h5')
x = GlobalAveragePooling2D()(base_model.output)
final_output = Dense(1, activation='linear', name='final_output')(x)
model = Model(input_tensor, final_output)
model.load_weights('../input/aptos-pretrain-olddata-effnetb5-224-balanced/effNetB5_img224_oldData.h5')
return model
```
# Fine-tune the model
```
model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS))
for layer in model.layers:
layer.trainable = True
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min', verbose=1,
save_best_only=True, save_weights_only=True)
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
cosine_lr = WarmUpCosineDecayScheduler(learning_rate_base=LEARNING_RATE,
total_steps=TOTAL_STEPS,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS,
hold_base_rate_steps=(2 * STEP_SIZE))
metric_list = ["accuracy"]
callback_list = [checkpoint, es, cosine_lr]
optimizer = optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=2).history
fig, ax1 = plt.subplots(1, 1, sharex='col', figsize=(20, 4))
ax1.plot(cosine_lr.learning_rates)
ax1.set_title('Fine-tune learning rates')
plt.xlabel('Steps')
plt.ylabel('Learning rate')
sns.despine()
plt.show()
```
# Model loss graph
```
plot_metrics(history)
# Create an empty dataframe to keep the predictions and labels
df_preds = pd.DataFrame(columns=['label', 'pred', 'set'])
train_generator.reset()
valid_generator.reset()
# Add train predictions and labels
for i in range(STEP_SIZE_TRAIN + 1):
im, lbl = next(train_generator)
preds = model.predict(im, batch_size=train_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'train']
# Add validation predictions and labels
for i in range(STEP_SIZE_VALID + 1):
im, lbl = next(valid_generator)
preds = model.predict(im, batch_size=valid_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'validation']
df_preds['label'] = df_preds['label'].astype('int')
# Classify predictions
df_preds['predictions'] = df_preds['pred'].apply(lambda x: classify(x))
train_preds = df_preds[df_preds['set'] == 'train']
validation_preds = df_preds[df_preds['set'] == 'validation']
```
# Model Evaluation
## Confusion Matrix
### Original thresholds
```
plot_confusion_matrix((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
## Quadratic Weighted Kappa
```
evaluate_model((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
## Apply model to test set and output predictions
```
preds = apply_tta(model, test_generator, TTA_STEPS)
predictions = [classify(x) for x in preds]
results = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions})
results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])
# Cleaning created directories
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
```
# Predictions class distribution
```
fig = plt.subplots(sharex='col', figsize=(24, 8.7))
sns.countplot(x="diagnosis", data=results, palette="GnBu_d").set_title('Test')
sns.despine()
plt.show()
results.to_csv('submission.csv', index=False)
display(results.head())
```
# Loading and Preprocessing Data
In this notebook you will learn how to use TensorFlow's Data API to load and preprocess data efficiently, then you will learn about the efficient `TFRecord` binary format for storing your data.
## Imports
```
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import sklearn
import sys
import tensorflow as tf
from tensorflow import keras
import time
print("python", sys.version)
for module in mpl, np, pd, sklearn, tf, keras:
print(module.__name__, module.__version__)
assert sys.version_info >= (3, 5) # Python ≥3.5 required
assert tf.__version__ >= "2.0" # TensorFlow ≥2.0 required
```
## Code examples
You can browse through the code examples or jump directly to the exercises.
```
dataset = tf.data.Dataset.from_tensor_slices(np.arange(10))
dataset
for item in dataset:
print(item)
dataset = dataset.repeat(3).batch(7)
for item in dataset:
print(item)
dataset = dataset.interleave(
lambda v: tf.data.Dataset.from_tensor_slices(v),
cycle_length=3,
block_length=2)
for item in dataset:
print(item.numpy(), end=" ")
X = np.array([[2, 3], [4, 5], [6, 7]])
y = np.array(["cat", "dog", "fox"])
dataset = tf.data.Dataset.from_tensor_slices((X, y))
dataset
for item_x, item_y in dataset:
print(item_x.numpy(), item_y.numpy())
dataset = tf.data.Dataset.from_tensor_slices({"features": X, "label": y})
dataset
for item in dataset:
print(item["features"].numpy(), item["label"].numpy())
```
## Split the California dataset to multiple CSV files
Let's start by loading and preparing the California housing dataset. We first load it, then split it into a training set, a validation set and a test set, and finally we scale it:
```
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(
housing.data, housing.target.reshape(-1, 1), random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
X_train_full, y_train_full, random_state=42)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_valid_scaled = scaler.transform(X_valid)
X_test_scaled = scaler.transform(X_test)
```
For very large datasets that do not fit in memory, you will typically want to split the data into many files first, then have TensorFlow read these files in parallel. To demonstrate this, let's start by splitting the scaled housing dataset and saving it to multiple CSV files:
```
def save_to_multiple_csv_files(data, name_prefix, header=None, n_parts=10):
housing_dir = os.path.join("datasets", "housing")
os.makedirs(housing_dir, exist_ok=True)
path_format = os.path.join(housing_dir, "my_{}_{:02d}.csv")
filenames = []
m = len(data)
for file_idx, row_indices in enumerate(np.array_split(np.arange(m), n_parts)):
part_csv = path_format.format(name_prefix, file_idx)
filenames.append(part_csv)
with open(part_csv, "wt", encoding="utf-8") as f:
if header is not None:
f.write(header)
f.write("\n")
for row_idx in row_indices:
f.write(",".join([repr(col) for col in data[row_idx]]))
f.write("\n")
return filenames
train_data = np.c_[X_train_scaled, y_train]
valid_data = np.c_[X_valid_scaled, y_valid]
test_data = np.c_[X_test_scaled, y_test]
header_cols = ["Scaled" + name for name in housing.feature_names] + ["MedianHouseValue"]
header = ",".join(header_cols)
train_filenames = save_to_multiple_csv_files(train_data, "train", header, n_parts=20)
valid_filenames = save_to_multiple_csv_files(valid_data, "valid", header, n_parts=10)
test_filenames = save_to_multiple_csv_files(test_data, "test", header, n_parts=10)
```
Okay, now let's take a peek at the first few lines of one of these CSV files:
```
with open(train_filenames[0]) as f:
for i in range(3):
print(f.readline(), end="")
```

## Exercise 1 – Data API
### 1.1)
Use `tf.data.Dataset.list_files()` to create a dataset that will simply list the training filenames. Iterate through its items and print them.
```
dset = tf.data.Dataset.list_files(train_filenames)
dset
for f in dset:
print(f)
```
### 1.2)
Use the filename dataset's `interleave()` method to create a dataset that will read from these CSV files, interleaving their lines. The first argument needs to be a function (e.g., a `lambda`) that creates a `tf.data.TextLineDataset` based on a filename, and you must also set `cycle_length=5` so that the reader interleaves data from 5 files at a time. Print the first 15 elements from this dataset to see that you do indeed get interleaved lines from multiple CSV files (you should get the first line from 5 files, then the second line from these same files, then the third line from each, and so on). **Tip**: To get only the first 15 elements, you can call the dataset's `take()` method.
```
dataset = dset.interleave(lambda x: tf.data.TextLineDataset(x), cycle_length=5)
dataset
for f in dataset.take(15):
print(f.numpy())
```
### 1.3)
We do not care about the header lines, so let's skip them. You can use the `skip()` method for this. Print the first five elements of your final dataset to make sure it does not print any header lines. **Tip**: make sure to call `skip()` for each `TextLineDataset`, not for the interleave dataset.
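With the real files this is just `tf.data.TextLineDataset(filename).skip(1)` inside the `interleave()` lambda. The sketch below uses an in-memory stand-in (hypothetical data) so the effect of calling `skip(1)` per "file" is easy to see:

```python
import tensorflow as tf

# Two fake "files", each starting with a header line.
fake_files = tf.data.Dataset.from_tensor_slices([
    ["header", "a1", "a2"],
    ["header", "b1", "b2"],
])
# skip(1) inside the lambda drops each file's header *before* interleaving.
dataset = fake_files.interleave(
    lambda lines: tf.data.Dataset.from_tensor_slices(lines).skip(1),
    cycle_length=2)
for line in dataset:
    print(line.numpy())  # a1, b1, a2, b2: no headers
```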
### 1.4)
We need to parse these CSV lines. First, experiment with the `tf.io.decode_csv()` function using the example below (e.g., look at the types, try changing or removing some field values, etc.).
* You need to pass it the line to parse, and set the `record_defaults` argument. This must be an array containing the default value for each field, in case it is missing. This also tells TensorFlow the number of fields to expect, and the type of each field. If you do not want a default value for a given field, you must use an empty tensor of the appropriate type (e.g., `tf.constant([])` for a `float32` field, or `tf.constant([], dtype=tf.int64)` for an `int64` field).
```
record_defaults=[0, np.nan, tf.constant(np.nan, dtype=tf.float64), "Hello", tf.constant([])]
parsed_fields = tf.io.decode_csv('1,2,3,4,5', record_defaults)
parsed_fields
```
### 1.5)
Now you are ready to create a function to parse a CSV line:
* Create a `parse_csv_line()` function that takes a single line as argument.
* Call `tf.io.decode_csv()` to parse that line.
* Call `tf.stack()` to create a single tensor containing all the input features (i.e., all fields except the last one).
* Reshape the labels field (i.e., the last field) to give it a shape of `[1]` instead of `[]` (i.e., it must not be a scalar). You can use `tf.reshape(label_field, [1])`, or call `tf.stack([label_field])`, or use `label_field[tf.newaxis]`.
* Return a tuple with both tensors (input features and labels).
* Try calling it on a single line from one of the CSV files.
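One possible solution following these steps (a sketch: the housing CSVs written earlier have 9 `float32` columns, 8 scaled features plus the target):

```python
import tensorflow as tf

n_fields = 9  # 8 scaled features + the MedianHouseValue target

def parse_csv_line(line):
    # No default value given: all 9 float32 fields are required.
    defaults = [tf.constant([], dtype=tf.float32)] * n_fields
    fields = tf.io.decode_csv(line, record_defaults=defaults)
    x = tf.stack(fields[:-1])  # input features, shape [8]
    y = tf.stack(fields[-1:])  # label, shape [1] rather than a scalar
    return x, y

x, y = parse_csv_line(b'1,2,3,4,5,6,7,8,9')
print(x.shape, y.shape)  # (8,) (1,)
```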
### 1.6)
Now create a `csv_reader_dataset()` function that takes a list of CSV filenames and returns a dataset that will provide batches of parsed and shuffled data from these files, including the features and labels, repeating the whole data once per epoch.
**Tips**:
* Copy your code from above to get a dataset that returns interleaved lines from the given CSV files. Your function will need an argument for the `filenames`, and another for the number of files read in parallel at any given time (e.g., `n_reader`).
* The training algorithm will need to go through the dataset many times, so you should call `repeat()` on the filenames dataset. You do not need to specify a number of repetitions, as we will tell Keras the number of iterations to run later on.
* Gradient descent works best when the data is IID (independent and identically distributed), so you should call the `shuffle()` method. It will require the shuffling buffer size, which you can add as an argument to your function (e.g., `shuffle_buffer_size`).
* Use the `map()` method to apply the `parse_csv_line()` function to each CSV line. You can set the `num_parallel_calls` argument to the number of threads that will parse lines in parallel. This should probably be an argument of your function (e.g., `n_parse_threads`).
* Use the `batch()` method to bundle records into batches. You will need to specify the batch size. This should probably be an argument of your function (e.g., `batch_size`).
* Call `prefetch(1)` on your final dataset to ensure that the next batch is loaded and parsed while the rest of your computations take place in parallel (to avoid blocking for I/O).
* Return the resulting dataset.
* Give every argument a reasonable default value (except for the filenames).
* Test your function by calling it with a small batch size and printing the first couple of batches.
* For higher performance, you can replace `dataset.map(...).batch(...)` with `dataset.apply(map_and_batch(...))`, where `map_and_batch()` is an experimental function located in `tf.data.experimental`. It will be deprecated in future versions of TensorFlow when such pipeline optimizations become automatic.
### 1.7)
Build a training set, a validation set and a test set using your `csv_reader_dataset()` function.
### 1.8)
Build and compile a Keras model for this regression task, and use your datasets to train it, evaluate it and make predictions for the test set.
**Tips**
* Instead of passing `X_train_scaled, y_train` to the `fit()` method, pass the training dataset and specify the `steps_per_epoch` argument. This should be set to the number of instances in the training set divided by the batch size.
* Similarly, pass the validation dataset instead of `(X_valid_scaled, y_valid)`, and set the `validation_steps` argument.
* For the `evaluate()` and `predict()` methods, you need to pass the test dataset, and specify the `steps` argument.
* The `predict()` method ignores the labels in the test dataset, but if you want to be extra sure that it does not cheat, you can create a new dataset by stripping away the labels from the test set (e.g., `test_set.map(lambda X, y: X)`).

## Exercise 1 – Solution
### 1.1)
Use `tf.data.Dataset.list_files()` to create a dataset that will simply list the training filenames. Iterate through its items and print them.
```
filename_dataset = tf.data.Dataset.list_files(train_filenames)
for filename in filename_dataset:
    print(filename)
```
### 1.2)
Use the filename dataset's `interleave()` method to create a dataset that will read from these CSV files, interleaving their lines. The first argument needs to be a function (e.g., a `lambda`) that creates a `tf.data.TextLineDataset` based on a filename, and you must also set `cycle_length=5` so that the reader interleaves data from 5 files at a time. Print the first 15 elements from this dataset to see that you do indeed get interleaved lines from multiple CSV files (you should get the first line from 5 files, then the second line from these same files, then the third lines). **Tip**: To get only the first 15 elements, you can call the dataset's `take()` method.
```
n_readers = 5
dataset = filename_dataset.interleave(
    lambda filename: tf.data.TextLineDataset(filename),
    cycle_length=n_readers)
for line in dataset.take(15):
    print(line.numpy())
```
### 1.3)
We do not care about the header lines, so let's skip them. You can use the `skip()` method for this. Print the first five elements of your final dataset to make sure it does not print any header lines. **Tip**: make sure to call `skip()` for each `TextLineDataset`, not for the interleave dataset.
```
dataset = filename_dataset.interleave(
    lambda filename: tf.data.TextLineDataset(filename).skip(1),
    cycle_length=n_readers)
for line in dataset.take(5):
    print(line.numpy())
```
### 1.4)
We need to parse these CSV lines. First, experiment with the `tf.io.decode_csv()` function using the example below (e.g., look at the types, try removing some field values, etc.).
Notice that field 4 is interpreted as a string.
```
record_defaults=[0, np.nan, tf.constant(np.nan, dtype=tf.float64), "Hello", tf.constant([])]
parsed_fields = tf.io.decode_csv('1,2,3,4,5', record_defaults)
parsed_fields
```
Notice that all missing fields are replaced with their default value, when provided:
```
parsed_fields = tf.io.decode_csv(',,,,5', record_defaults)
parsed_fields
```
The 5th field is compulsory (since we provided `tf.constant([])` as the "default value"), so we get an exception if we do not provide it:
```
try:
    parsed_fields = tf.io.decode_csv(',,,,', record_defaults)
except tf.errors.InvalidArgumentError as ex:
    print(ex)
```
The number of fields must exactly match the number of entries in `record_defaults`:
```
try:
    parsed_fields = tf.io.decode_csv('1,2,3,4,5,6,7', record_defaults)
except tf.errors.InvalidArgumentError as ex:
    print(ex)
```
### 1.5)
Now you are ready to create a function to parse a CSV line:
* Create a `parse_csv_line()` function that takes a single line as argument.
* Call `tf.io.decode_csv()` to parse that line.
* Call `tf.stack()` to create a single tensor containing all the input features (i.e., all fields except the last one).
* Reshape the labels field (i.e., the last field) to give it a shape of `[1]` instead of `[]` (i.e., it must not be a scalar). You can use `tf.reshape(label_field, [1])`, or call `tf.stack([label_field])`, or use `label_field[tf.newaxis]`.
* Return a tuple with both tensors (input features and labels).
* Try calling it on a single line from one of the CSV files.
```
n_inputs = X_train.shape[1]

def parse_csv_line(line, n_inputs=n_inputs):
    defs = [tf.constant(np.nan)] * (n_inputs + 1)
    fields = tf.io.decode_csv(line, record_defaults=defs)
    x = tf.stack(fields[:-1])
    y = tf.stack(fields[-1:])
    return x, y

parse_csv_line(b'-0.739840972632228,-0.3658395634576743,-0.784679995482575,0.07414513752253027,0.7544706668961565,0.407700592469922,-0.686992593958441,0.6019005115704453,2.0')
```
### 1.6)
Now create a `csv_reader_dataset()` function that takes a list of CSV filenames and returns a dataset that will provide batches of parsed and shuffled data from these files, including the features and labels, repeating the whole dataset once per epoch.
```
def csv_reader_dataset(filenames, n_parse_threads=5, batch_size=32,
                       shuffle_buffer_size=10000, n_readers=5):
    dataset = tf.data.Dataset.list_files(filenames)
    dataset = dataset.repeat()
    dataset = dataset.interleave(
        lambda filename: tf.data.TextLineDataset(filename).skip(1),
        cycle_length=n_readers)
    dataset = dataset.shuffle(shuffle_buffer_size)
    dataset = dataset.map(parse_csv_line, num_parallel_calls=n_parse_threads)
    dataset = dataset.batch(batch_size)
    return dataset.prefetch(1)
```
This version uses `map_and_batch()` to get a performance boost (but remember that this feature is experimental and will eventually be deprecated, as explained earlier):
```
def csv_reader_dataset(filenames, batch_size=32,
                       shuffle_buffer_size=10000, n_readers=5):
    dataset = tf.data.Dataset.list_files(filenames)
    dataset = dataset.repeat()
    dataset = dataset.interleave(
        lambda filename: tf.data.TextLineDataset(filename).skip(1),
        cycle_length=n_readers)
    dataset = dataset.shuffle(shuffle_buffer_size)
    dataset = dataset.apply(
        tf.data.experimental.map_and_batch(
            parse_csv_line,
            batch_size,
            num_parallel_calls=tf.data.experimental.AUTOTUNE))
    return dataset.prefetch(1)

train_set = csv_reader_dataset(train_filenames, batch_size=3)
for X_batch, y_batch in train_set.take(2):
    print("X =", X_batch)
    print("y =", y_batch)
    print()
```
### 1.7)
Build a training set, a validation set and a test set using your `csv_reader_dataset()` function.
```
batch_size = 32
train_set = csv_reader_dataset(train_filenames, batch_size)
valid_set = csv_reader_dataset(valid_filenames, batch_size)
test_set = csv_reader_dataset(test_filenames, batch_size)
```
### 1.8)
Build and compile a Keras model for this regression task, and use your datasets to train it, evaluate it and make predictions for the test set.
```
model = keras.models.Sequential([
keras.layers.Dense(30, activation="relu", input_shape=[n_inputs]),
keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer="sgd")
model.fit(train_set, steps_per_epoch=len(X_train) // batch_size, epochs=10,
validation_data=valid_set, validation_steps=len(X_valid) // batch_size)
model.evaluate(test_set, steps=len(X_test) // batch_size)
new_set = test_set.map(lambda X, y: X)
model.predict(new_set, steps=len(X_test) // batch_size)
```

## Exercise 2 – The `TFRecord` binary format
### Code examples
You can walk through these code examples or jump down to the [actual exercise](#Actual-exercise) below.
```
favorite_books = [name.encode("utf-8")
                  for name in ["Arluk", "Fahrenheit 451", "L'étranger"]]
favorite_books = tf.train.BytesList(value=favorite_books)
favorite_books

hours_per_month = tf.train.FloatList(value=[15.5, 9.5, np.nan, 6.0, 9.0])
hours_per_month

age = tf.train.Int64List(value=[42])
age

coordinates = tf.train.FloatList(value=[1.2834, 103.8607])
coordinates

features = tf.train.Features(
    feature={
        "favorite_books": tf.train.Feature(bytes_list=favorite_books),
        "hours_per_month": tf.train.Feature(float_list=hours_per_month),
        "age": tf.train.Feature(int64_list=age),
        "coordinates": tf.train.Feature(float_list=coordinates),
    }
)
features

example = tf.train.Example(features=features)
example

serialized_example = example.SerializeToString()
serialized_example

filename = "my_reading_data.tfrecords"
with tf.io.TFRecordWriter(filename) as writer:
    for i in range(3):  # you should save different examples instead! :)
        writer.write(serialized_example)

for serialized_example_tensor in tf.data.TFRecordDataset([filename]):
    print(serialized_example_tensor)

filename = "my_reading_data.tfrecords"
options = tf.io.TFRecordOptions(compression_type="GZIP")
with tf.io.TFRecordWriter(filename, options) as writer:
    for i in range(3):  # you should save different examples instead! :)
        writer.write(serialized_example)

dataset = tf.data.TFRecordDataset([filename], compression_type="GZIP")
for serialized_example_tensor in dataset:
    print(serialized_example_tensor)

expected_features = {
    "favorite_books": tf.io.VarLenFeature(dtype=tf.string),
    "hours_per_month": tf.io.VarLenFeature(dtype=tf.float32),
    "age": tf.io.FixedLenFeature([], dtype=tf.int64),
    "coordinates": tf.io.FixedLenFeature([2], dtype=tf.float32),
}
for serialized_example_tensor in tf.data.TFRecordDataset(
        [filename], compression_type="GZIP"):
    example = tf.io.parse_single_example(serialized_example_tensor,
                                         expected_features)
    books = tf.sparse.to_dense(example["favorite_books"],
                               default_value=b"")
    for book in books:
        print(book.numpy().decode('UTF-8'), end="\t")
    print()
```
## Actual exercise
### 2.1)
Write a `csv_to_tfrecords()` function that will read from a given CSV dataset (e.g., `train_set`, passed as an argument), and write the instances to multiple TFRecord files. The number of files should be defined by an `n_shards` argument. If there are, say, 20 shards, then the files should be named `my_train_00000-of-00020.tfrecords` to `my_train_00019-of-00020.tfrecords`, where the `my_train` prefix should be defined by an argument.
**Tips**:
* since the CSV dataset repeats the dataset forever, the function should take an argument defining the number of steps per shard, and you should use `take()` to pull only the appropriate number of batches from the CSV dataset for each shard.
* to format 19 as `"00019"`, you can use `"{:05d}".format(19)`.
### 2.2)
Use this function to write the training set, validation set and test set to multiple TFRecord files.
### 2.3)
Write a `tfrecords_reader_dataset()` function, very similar to `csv_reader_dataset()`, that will read from multiple TFRecord files. For convenience, it should take a file prefix (such as `"my_train"`) and use `os.listdir()` to look for all the TFRecord files with that prefix.
**Tips**:
* You can mostly reuse `csv_reader_dataset()`, except it will use a different parsing function (based on `tf.io.parse_single_example()` instead of `tf.io.decode_csv()`).
* The parsing function should return `(input features, label)`, not a `tf.train.Example`.
### 2.4)
Create one TFRecord dataset for each split (`tfrecords_train_set`, `tfrecords_valid_set` and `tfrecords_test_set`), and build, train and evaluate a Keras model using them.

## Exercise 2 – Solution
### 2.1)
Write a `csv_to_tfrecords()` function that will read from a given CSV dataset (e.g., `train_set`, passed as an argument), and write the instances to multiple TFRecord files. The number of files should be defined by an `n_shards` argument. If there are, say, 20 shards, then the files should be named `my_train_00000-of-00020.tfrecords` to `my_train_00019-of-00020.tfrecords`, where the `my_train` prefix should be defined by an argument.
```
def serialize_example(x, y):
    input_features = tf.train.FloatList(value=x)
    label = tf.train.FloatList(value=y)
    features = tf.train.Features(
        feature={
            "input_features": tf.train.Feature(float_list=input_features),
            "label": tf.train.Feature(float_list=label),
        }
    )
    example = tf.train.Example(features=features)
    return example.SerializeToString()

def csv_to_tfrecords(filename, csv_reader_dataset, n_shards, steps_per_shard,
                     compression_type=None):
    options = tf.io.TFRecordOptions(compression_type=compression_type)
    for shard in range(n_shards):
        path = "{}_{:05d}-of-{:05d}.tfrecords".format(filename, shard, n_shards)
        with tf.io.TFRecordWriter(path, options) as writer:
            for X_batch, y_batch in csv_reader_dataset.take(steps_per_shard):
                for x_instance, y_instance in zip(X_batch, y_batch):
                    writer.write(serialize_example(x_instance, y_instance))
```
### 2.2)
Use this function to write the training set, validation set and test set to multiple TFRecord files.
```
batch_size = 32
n_shards = 20
steps_per_shard = len(X_train) // batch_size // n_shards
csv_to_tfrecords("my_train.tfrecords", train_set, n_shards, steps_per_shard)
n_shards = 1
steps_per_shard = len(X_valid) // batch_size // n_shards
csv_to_tfrecords("my_valid.tfrecords", valid_set, n_shards, steps_per_shard)
n_shards = 1
steps_per_shard = len(X_test) // batch_size // n_shards
csv_to_tfrecords("my_test.tfrecords", test_set, n_shards, steps_per_shard)
```
### 2.3)
Write a `tfrecords_reader_dataset()` function, very similar to `csv_reader_dataset()`, that will read from multiple TFRecord files. For convenience, it should take a file prefix (such as `"my_train"`) and use `os.listdir()` to look for all the TFRecord files with that prefix.
```
expected_features = {
    "input_features": tf.io.FixedLenFeature([n_inputs], dtype=tf.float32),
    "label": tf.io.FixedLenFeature([1], dtype=tf.float32),
}

def parse_tfrecord(serialized_example):
    example = tf.io.parse_single_example(serialized_example,
                                         expected_features)
    return example["input_features"], example["label"]

def tfrecords_reader_dataset(filename, batch_size=32,
                             shuffle_buffer_size=10000, n_readers=5):
    filenames = [name for name in os.listdir() if name.startswith(filename)
                 and name.endswith(".tfrecords")]
    dataset = tf.data.Dataset.list_files(filenames)
    dataset = dataset.repeat()
    dataset = dataset.interleave(
        lambda filename: tf.data.TFRecordDataset(filename),
        cycle_length=n_readers)
    dataset = dataset.shuffle(shuffle_buffer_size)
    dataset = dataset.apply(
        tf.data.experimental.map_and_batch(
            parse_tfrecord,
            batch_size,
            num_parallel_calls=tf.data.experimental.AUTOTUNE))
    return dataset.prefetch(1)

tfrecords_train_set = tfrecords_reader_dataset("my_train", batch_size=3)
for X_batch, y_batch in tfrecords_train_set.take(2):
    print("X =", X_batch)
    print("y =", y_batch)
    print()
```
### 2.4)
Create one TFRecord dataset for each split (`tfrecords_train_set`, `tfrecords_valid_set` and `tfrecords_test_set`), and build, train and evaluate a Keras model using them.
```
batch_size = 32
tfrecords_train_set = tfrecords_reader_dataset("my_train", batch_size)
tfrecords_valid_set = tfrecords_reader_dataset("my_valid", batch_size)
tfrecords_test_set = tfrecords_reader_dataset("my_test", batch_size)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="relu", input_shape=[n_inputs]),
keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer="sgd")
model.fit(tfrecords_train_set, steps_per_epoch=len(X_train) // batch_size, epochs=10,
validation_data=tfrecords_valid_set, validation_steps=len(X_valid) // batch_size)
model.evaluate(tfrecords_test_set, steps=len(X_test) // batch_size)
new_set = test_set.map(lambda X, y: X)
model.predict(new_set, steps=len(X_test) // batch_size)
```
# image-level analysis
```
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pyprojroot
import seaborn as sns
from tqdm import tqdm
```
## constants
```
SOURCE_DATA_ROOT = pyprojroot.here('results/VSD/source_data/8-bins-uniform-strategy')
FIGURES_ROOT = pyprojroot.here('docs/paper/figures/experiment-2')
```
#### load dataframes
```
acc_df = pd.read_csv(SOURCE_DATA_ROOT.joinpath('acc.csv'))
rm_corr_df = pd.read_csv(SOURCE_DATA_ROOT.joinpath('rm_corr.csv'))
```
#### print number of trials per bin for one replicate
to show how the binning strategy affects the number of trials / samples per bin
```
for vsd_score_bin in acc_df.vsd_score_bin.unique():
    n_trials = acc_df[(acc_df['vsd_score_bin'] == vsd_score_bin) & (acc_df['replicate'] == 1)].n_trials.sum()
    print(f'number of trials in bin {vsd_score_bin}: {n_trials}')
```
## plot figure
```
def cm_to_inches(cm):
    return cm / 2.54

sns.set()
sns.set_palette(sns.cubehelix_palette(8))

nrow = 2  # single-label, multi-label
ncol = 4  # number of neural network architectures
FIGSIZE = tuple(cm_to_inches(size) for size in (17.4, 10))
DPI = 300

fig, ax = plt.subplots(nrow, ncol, figsize=FIGSIZE, dpi=DPI, sharey=True)
for mode in ['classify']:
    for method in ['transfer']:
        for row, loss_func in enumerate(['CE-random', 'BCE']):
            for col, net_name in enumerate(acc_df['net_name'].unique()):
                sub_df = acc_df[
                    (acc_df['mode'] == mode) &
                    (acc_df['method'] == method) &
                    (acc_df['loss_func'] == loss_func) &
                    (acc_df['net_name'] == net_name)
                ]
                if len(sub_df) == 0:
                    continue
                # sns.stripplot(x="vsd_score_bin", y="acc", hue="replicate", data=sub_df, ax=ax[row, col], size=2)
                for replicate in acc_df.replicate.unique():
                    replicate_df = sub_df[sub_df['replicate'] == replicate]
                    sns.regplot(x="vsd_score_bin", y="pred", scatter=False, ci=None, truncate=True, ax=ax[row, col], data=replicate_df,
                                line_kws={'linestyle': '--', 'linewidth': 1, 'alpha': 0.75})
                    sns.scatterplot(x="vsd_score_bin", y="acc", data=replicate_df, ax=ax[row, col],
                                    s=20, alpha=0.75)
                r = rm_corr_df.loc[
                    ((rm_corr_df['mode'] == mode) & (rm_corr_df['method'] == method) & (rm_corr_df['loss_func'] == loss_func) & (rm_corr_df['net_name'] == net_name)),
                    'r'
                ].values.item()
                if r < -0.9:
                    ax[row, col].text(0.2, 0.2, f'r={r:.3f}')
                else:
                    ax[row, col].text(0.2, 0.75, f'r={r:.3f}')
                ax[row, col].set_xlabel('')
                ax[row, col].set_xticks(acc_df['vsd_score_bin'].unique())
                # ax[row, col].set_xticklabels(xticklabels)
                if row == 0:
                    ax[row, col].set_title(net_name.replace('_', ' '))
                if col == 0:
                    ax[row, col].set_ylabel('accuracy')
                else:
                    ax[row, col].set_ylabel('')

# g.fig.suptitle(f'accuracy as a function of visual search difficulty score,\nimages with only one item', y=1.05)
big_ax = fig.add_subplot(111, frameon=False)
# hide tick and tick label of the big axes
big_ax.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
big_ax.grid(False)
big_ax.set_xlabel("difficulty score bin")
big_ax.text(0.01, 1.1, "single-label, images with one item")
big_ax.text(0.01, 0.45, "multi-label, all images")
# fig.tight_layout(h_pad=2)
fig.subplots_adjust(top=0.95, hspace=0.5)

for ext in ('svg', 'png'):
    fig_path = pyprojroot.here().joinpath(
        f'docs/paper/figures/experiment-2/acc-VSD-corr/acc-VSD-corr-{SOURCE_DATA_ROOT.name}.{ext}'
    )
    plt.savefig(fig_path, bbox_inches='tight')
```
# This notebook shows how to extract a model from a LaTeX document and simulate the model.
## Why specify a model in Latex?
Sometimes the **implementation** of a model in software doesn't match the **specification** of the model in
the text in which the model is presented. It can be a challenge to make sure that the specification is
updated to reflect changes made in the implementation.
By extracting the model from a LaTeX script which describes and specifies the model, one can always be sure that simulations reflect the model as described in the paper.
Also, the author is forced to make a complete specification of the model; otherwise it won't run.
## The Economic Credit Loss model
This Jupyter notebook is inspired by the IMF working paper (WP/20/111) *The Expected Credit Loss Modeling from a Top-Down Stress Testing Perspective* by Marco Gross, Dimitrios Laliotis, Mindaugas Leika, and Pavel Lukyantsau. The working paper and the associated material are located at https://www.imf.org/en/Publications/WP/Issues/2020/07/03/Expected-Credit-Loss-Modeling-from-a-Top-Down-Stress-Testing-Perspective-49545
From the abstract of the paper:
> The objective of this paper is to present an integrated tool suite for IFRS 9- and CECL compatible
estimation in top-down solvency stress tests. The tool suite serves as an
illustration for institutions wishing to include accounting-based approaches for credit risk
modeling in top-down stress tests.
This Jupyter notebook was built to illustrate the conversion of a model from LaTeX to ModelFlow. Its purpose is testing, so use it with care.
## Import libraries
```
%load_ext autoreload
%autoreload 2
import pandas as pd
from IPython.core.display import HTML,Markdown
from modelclass import model
import modeljupytermagic
# some useful stuf
model.widescreen()
pd.set_option('display.max_rows', None, 'display.max_columns', 10, 'precision', 2)
sortdf = lambda df: df[sorted([c for c in df.columns])]
```
## Write a latex script
The model consists of the equations and the lists
The Jupyter magic command **%%latexflow** will extract the model, transform the equations into **ModelFlow** equations, and finally create a ModelFlow **model** instance.
The **model** instance will be able to solve the model.
```
%%latexflow ecl
### Loans can be in 3 stages
Loans can be in 3 stages, s1,s2 and s3.
New loans will be generated and loans will mature.
Two lists of stages are defined:
List $stage=\{s1, s2,s3\}$
List $stage\_from=\{s1, s2,s3\}$
### A share of the loans in each stage will transition to the same or another stage in the next time frame:
\begin{equation}
\label{eq:loanfromto}
loan\_transition\_from\_to^{stage\_from,stage}_{t} = loan^{stage\_from}_{t-1} \times TR^{stage\_from,stage}_{t}
\end{equation}
\begin{equation}
\label{eq:transition}
loan\_transition\_to^{stage}_{t} = \underbrace{\sum_{stage\_from}(loan\_transition\_from\_to^{stage\_from,stage}_{t})}_{Transition}
\end{equation}
### A share of the loans in each stage will mature; another share will have to be written off
\begin{equation}
\label{eq:maturing}
loan\_maturing^{stage}_{t} = M^{stage}_{t} \times loan^{stage}_{t-1}
\end{equation}
\begin{equation}
\label{eq:writeoff}
loan\_writeoff^{stage}_{t} = WRO^{stage}_{t} \times loan^{stage}_{t-1}
\end{equation}
### So loans in a stage will reflect the inflow and outflow
\begin{equation}
\label{eq:E}
loan^{stage}_{t} = \underbrace{loan\_transition\_to^{stage}_{t} }_{Transition}
-\underbrace{loan\_maturing^{stage}_{t}}_{Maturing}
-\underbrace{loan\_writeoff^{stage}_{t}}_{Writeoff}
+\underbrace{loan\_new^{stage}_{t}}_{New loans}
\end{equation}
\begin{equation}
\label{eq:new}
loan\_new^{stage}_{t} = new^{stage}_{t} \times loan^{stage}_{t-1}
\end{equation}
\begin{equation}
\label{eq:g}
loan\_total_{t} = \sum_{stage}(loan^{stage}_{t})
\end{equation}
### New loans are only generated in stage 1.
\begin{equation}
\label{eq:E2}
new\_s1_{t} = \frac{loan\_growth_{t} \times loan\_total_{t-1} +
\sum_{stage}((M^{stage}_{t}+WRO^{stage}_{t})\times loan^{stage}_{t-1})}{(loan\_s1_{t-1})}
\end{equation}
### Performing Loans
\begin{equation}
\label{eq:Performing}
loan\_performing_{t} = loan\_s1_{t}+loan\_s2_{t}
\end{equation}
### Cure
\begin{equation}
\label{eq:cure}
loan\_cure = loan\_transition\_from\_to\_s3\_s1+loan\_transition\_from\_to\_s3\_s2
\end{equation}
### Probability of default (PD)
The Point-in-Time PD is the fraction of loans in stages s1 and s2 going into stage s3
\begin{equation}
\label{eq:PDPIT}
PD\_pit= \frac{loan\_transition\_from\_to\_s1\_s3+loan\_transition\_from\_to\_s2\_s3}{loan\_s1+loan\_s2}
\end{equation}
The Through-The-Cycle (TTC) PD is a slow-moving average of the Point-in-Time PD.
\begin{equation}
\label{eq:PDTTC}
PD\_TTC = logit^{-1}(logit(PD\_TTC(-1)) + alfa \times \Delta{PD\_pit})
\end{equation}
### And we can specify the dynamic of the transition matrix, based on Z score
Let $\Phi$ be the standard normal cumulative distribution $\frac{1}{\sqrt{2\pi}}\int_{-\infty}^x e^{-\frac{t^2}{2}}dt$
\begin{equation}
\label{eq:tr}
TR^{stage\_from,stage}_{t} = \Phi{\left(\frac{bound\_upper^{stage\_from,stage}-\sqrt{\rho}\times Z_{t}}{\sqrt{1-\rho}}\right)}-\Phi{\left(\frac{bound\_lower^{stage\_from,stage}-\sqrt{\rho}\times Z_{t}}{\sqrt{1-\rho}}\right)}
\end{equation}
```
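The transition-probability formula above can be sketched in plain Python (outside ModelFlow) to build intuition. This is an illustrative sketch, not part of the extracted model; the bound values come from the `%%dataframe` tables further down (with decimal commas written as points) and `rho = 0.2` as in the parameters table:

```python
import numpy as np
from scipy.stats import norm

def transition_prob(upper, lower, z, rho=0.2):
    # TR = Phi((upper - sqrt(rho)*Z)/sqrt(1-rho)) - Phi((lower - sqrt(rho)*Z)/sqrt(1-rho))
    shift = np.sqrt(rho) * z
    scale = np.sqrt(1.0 - rho)
    return norm.cdf((upper - shift) / scale) - norm.cdf((lower - shift) / scale)

# Bounds for the s1 row (s1->s1, s1->s2, s1->s3):
uppers = np.array([10000.0, -1.34, -2.33])
lowers = np.array([-1.34, -2.33, -10000.0])

probs = transition_prob(uppers, lowers, z=0.0)
print(probs.round(4), probs.sum())  # each TR row sums to 1 by construction
```

Because the bounds of each row partition the real line (±10000 acting as ±infinity), the row probabilities always sum to 1, and lowering Z shifts mass toward the default transition.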
## The equations in Macro business logic language can be inspected
```
print(ecl.equations_original)
```
### The equations in business logic language can be inspected
```
print(ecl.equations)
```
### The model structure can be inspected
```
ecl.drawmodel(sink='LOAN_TOTAL',HR=0,pdf=0,att=0,size=(12,12))
```
## load data
The data is copy-pasted from the excel sheet
### Load data common for baseline and adverse
```
%%dataframe T0tr noshow prefix=TR_ periods=1 melt
S1 S2 S3
S1 89% 9% 2%
S2 15% 79% 6%
S3 2% 24% 74%
%%dataframe startvalues show periods=1
pd_pit lgd_pit
1.6% 20%
%%dataframe Loan_t0 show periods=1 melt
S1 S2 S3 total
loan 500 180 30 710
%%dataframe upper_bin prefix=bound_upper_ nshow periods=7 melt
S1 S2 S3
S1 10000 -1,34 -2,33
S2 10000 1,08 -1,64
S3 10000 1,88 0,58
%%dataframe lower_bin prefix=bound_lower_ show periods=7 melt
S1 S2 S3
S1 -1,34 -2,33 -10000
S2 1,08 -1,64 -10000
S3 1,88 0,58 -10000
%%dataframe parameters show periods=7
rho
0.2
```
### Create the Static dataframe, common to scenarios
```
staticdf = pd.concat([T0tr_melted,Loan_t0_melted,startvalues,upper_bin_melted,
lower_bin_melted,parameters],axis=1)
HTML(staticdf.T.style.render().replace('nan',''))
```
### Load data specific for the scenarios
```
%%dataframe inf_baseline nshow t periods=7 melt
m wro
s1 5% 0
s2 3.8% 0
s3 0 7.5%
%%dataframe inf_adverse nshow t periods=7 melt
m wro
s1 3.8% 0
s2 2.5% 0
s3 0 6.3%
%%dataframe projection_baseline nshow
Z loan_growth
0 0.01
-0,47 0.01
-0,42 0.01
-0,38 0.01
-0,36 0.01
-0,34 0.01
-0,33 0.01
%%dataframe projection_adverse nshow
Z loan_growth
0 0.01
-0,65 -0.01
-0,84 -0.008
-0,99 -0.006
-0,69 -0.004
-0,39 -0.002
-0,24 -0.0
```
### Create a dataframe for baseline and adverse scenario
```
baseupdate = pd.concat([inf_baseline_melted, projection_baseline],axis=1).pipe(sortdf)
adverseupdate = pd.concat([inf_adverse_melted, projection_adverse],axis=1).pipe(sortdf)
getnotnul = lambda x: x.loc[:,(x!=0.0).any(axis=0)]
display(Markdown('## Baseline scenario'))
display(baseupdate.pipe(getnotnul))
display(Markdown('## Adverse scenario'))
display(adverseupdate.pipe(getnotnul))
baseline = pd.concat([staticdf, inf_baseline_melted, projection_baseline],axis=1)
adverse = pd.concat([staticdf, inf_adverse_melted, projection_adverse],axis=1)
baseline.index = pd.period_range(start=2021,freq = 'Y',periods=7)
adverse.index = pd.period_range(start=2021,freq = 'Y',periods=7)
display(baseline.pipe(sortdf).T)
```
## Run baseline
```
base_result = ecl(baseline,keep='Baseline')
```
### Save model and baseline results
```
ecl.modeldump('ecl.pcim')
```
## Run adverse
```
adverse_result = ecl(adverse,keep = 'Adverse')
```
## Inspect Results
```
with ecl.set_smpl('2021','2027'):
    ecl.keep_plot('loan_total',showtype='growth',legend=False);
    ecl.keep_plot('tr_*',showtype='level',legend=False);
```
```
%reload_ext autoreload
%autoreload 2
import glob
import os, gc
import numpy as np
import pandas as pd
import datatable as dt
import scipy as sp
from collections import defaultdict
from tqdm.notebook import tqdm
from sklearn.metrics import r2_score
from numba import njit
from utils import *
from numba_functions import *
from IPython.display import clear_output
# CONSTANT
MEAN = -5.762330803300896
STD = 0.6339307835941186
EPS = 1e-9
def transform_target(target):
    return (np.log(target + EPS) - MEAN) / STD

def inverse_target(target):
    return np.exp(MEAN + STD * target) - EPS

class OptimizeMSPE:
    def __init__(self, transform=False):
        self.coef_ = 0
        self.transform_ = transform

    def _mspe(self, coef, X, y):
        # create predictions by taking row wise sum
        if self.transform_:
            X = transform_target(X)
        y_hat = np.sum(X * coef, axis=1)
        if self.transform_:
            y_hat = inverse_target(y_hat)
        mspe_score = np.mean(np.square((y - y_hat) / y))
        return mspe_score

    def fit(self, X, y):
        from functools import partial
        loss_partial = partial(self._mspe, X=X, y=y)
        initial_coef = np.random.dirichlet(np.ones(X.shape[1]), size=1)
        # initial_coef = np.zeros(X.shape[1])
        self.result_ = sp.optimize.minimize(loss_partial, x0=initial_coef,
                                            method='SLSQP',
                                            jac='3-point',
                                            options=dict(
                                                ftol=1e-10,
                                                disp=True,
                                            ))
        self.coef_ = self.result_.x
        print('RMSPE: ', np.sqrt(loss_partial(self.coef_)))

    def predict(self, X):
        if self.transform_:
            X = transform_target(X)
        y_pred = np.sum(X * self.coef_, axis=1)
        if self.transform_:
            y_pred = inverse_target(y_pred)
        return y_pred
```
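As a quick sanity check (not in the original notebook), `inverse_target` undoes `transform_target` up to the tiny `EPS` offset, so optimizing blend weights in the transformed (standardized log) space and mapping predictions back is well defined:

```python
import numpy as np

# Constants copied from the cell above
MEAN = -5.762330803300896
STD = 0.6339307835941186
EPS = 1e-9

def transform_target(target):
    return (np.log(target + EPS) - MEAN) / STD

def inverse_target(target):
    return np.exp(MEAN + STD * target) - EPS

x = np.array([1e-4, 3e-3, 1e-2])  # realized-volatility-scale values
roundtrip = inverse_target(transform_target(x))
print(np.allclose(roundtrip, x))  # True: the transform is invertible
```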
# Load results csv
```
# str.lstrip/rstrip strip character *sets*, not prefixes/suffixes, so extract the stem safely:
list_result_names = [os.path.splitext(os.path.basename(path))[0]
                     for path in glob.glob('./results/*.csv')]
list_result_names.remove('OptimizeRV')
list_result_names
df_result = pd.read_csv('./dataset/train.csv')
for result_name in list_result_names:
    df_pred = pd.read_csv(f'./results/{result_name}.csv')
    if 'pred' not in df_pred:
        df_pred['pred'] = df_pred[[f for f in df_pred if f.startswith('pred_')]].mean(axis=1)
    df_pred.rename(columns={'pred': f'pred_{result_name}'}, inplace=True)
    df_result = df_result.merge(df_pred[['stock_id', 'time_id', f'pred_{result_name}']], on=['stock_id', 'time_id'], how='inner', validate='one_to_one')
# OptimizeRV
df_pred = pd.read_csv('results/OptimizeRV.csv')
df_pred.rename(columns={'rv_new': f'pred_OptimizeRV'}, inplace=True)
df_result = df_result.merge(df_pred[['stock_id', 'time_id', 'pred_OptimizeRV']], on=['stock_id', 'time_id'], how='inner', validate='one_to_one')
print(df_result.isna().any().any())
df_result.head()
removed_row_ids = ['31-25504', '31-27174']
df_query = df_result.loc[~df_result.row_id.isin(removed_row_ids)]
print(removed_row_ids)
print(rmspe(df_result['target'], df_result['pred_501-MLP']))
print(rmspe(df_query['target'], df_query['pred_501-MLP']))
```
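`rmspe` is imported from the local `utils` module, whose source is not shown here. A plausible definition, consistent with the `_mspe` method of `OptimizeMSPE` above (this is an assumption about `utils`, not its actual code):

```python
import numpy as np

def rmspe(y_true, y_pred):
    """Root mean squared percentage error (square root of the MSPE above)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean(np.square((y_true - y_pred) / y_true)))

print(rmspe([1.0, 2.0, 4.0], [1.0, 2.0, 2.0]))  # sqrt(1/12) ≈ 0.2887
```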
# Ensemble Together
```
pred_cols = [f for f in df_result if f.startswith('pred_')]
pred_cols_disp = [c.lower().replace('-', '_') for c in pred_cols]
print('pred_cols =', pred_cols_disp)
opt = OptimizeMSPE(transform=False)
opt.fit(df_result[pred_cols].values, df_result['target'].values)
print('coef_ = [', ', '.join(map(str, opt.coef_)), ']')
opt = OptimizeMSPE(transform=True)
opt.fit(df_result[pred_cols].values, df_result['target'].values)
print('coef_ = [', ', '.join(map(str, opt.coef_)), ']')
```
# Separate 501 and 601
```
# 501
pred_cols = [f for f in df_result if f.startswith('pred_') and '501' in f]
pred_cols_disp = [c.lower().replace('-', '_') for c in pred_cols]
print('pred_cols =', pred_cols_disp)
opt = OptimizeMSPE(transform=False)
opt.fit(df_result[pred_cols].values, df_result['target'].values)
print('coef_ = [', ', '.join(map(str, opt.coef_)), ']')
opt = OptimizeMSPE(transform=True)
opt.fit(df_result[pred_cols].values, df_result['target'].values)
print('coef_ = [', ', '.join(map(str, opt.coef_)), ']')
df_result['fpred_501'] = opt.predict(df_result[pred_cols].values)
# 601
pred_cols = [f for f in df_result if f.startswith('pred_') and '601' in f]
pred_cols_disp = [c.lower().replace('-', '_') for c in pred_cols]
print('pred_cols =', pred_cols_disp)
opt = OptimizeMSPE(transform=False)
opt.fit(df_result[pred_cols].values, df_result['target'].values)
print('coef_ = [', ', '.join(map(str, opt.coef_)), ']')
opt = OptimizeMSPE(transform=True)
opt.fit(df_result[pred_cols].values, df_result['target'].values)
print('coef_ = [', ', '.join(map(str, opt.coef_)), ']')
df_result['fpred_601'] = opt.predict(df_result[pred_cols].values)
# 501 + 601
pred_cols = [f for f in df_result if f.startswith('fpred_')]
pred_cols_disp = [c.lower().replace('-', '_') for c in pred_cols]
print('pred_cols =', pred_cols_disp)
opt = OptimizeMSPE(transform=True)
opt.fit(df_result[pred_cols].values, df_result['target'].values)
print('coef_ = [', ', '.join(map(str, opt.coef_)), ']')
df_result['_fpred_all'] = opt.predict(df_result[pred_cols].values)
```
# hmean
```
pred_cols = [f for f in df_result if f.startswith('fpred_')]
pred_cols_disp = [c.lower().replace('-', '_') for c in pred_cols]
print('pred_cols =', pred_cols_disp)
pred_hmean = sp.stats.hmean(df_result[pred_cols].values, axis=1)
print('RMSPE: ', rmspe(df_result['target'], pred_hmean))
# df_result['bias'] = 1
# pred_cols = [f for f in df_result if f.startswith('pred_')]
# print(pred_cols)
# opt = OptimizeRMSPE()
# opt.fit(df_result[['bias']+pred_cols], df_result['target'])
# print('coef_ = ', opt.coef_)
```