# Cross Correlation
The general continuous-time form of the cross-correlation is:
\begin{equation}
(f\star g)(\tau) = \int_{-\infty}^{\infty} \overline{f(t)}\,g(t+\tau)\,dt
\end{equation}
This can be written in discrete form as:
\begin{equation}
(f\star g)[n] = \sum_{m=-\infty}^{\infty} \overline{f[m]}\,g[m+n]
\end{equation}
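As a quick sanity check, the discrete form above can be evaluated directly. The helper below is our own illustrative sketch (not part of `sk_dsp_comm` or `caf_verilog`), with a finite maximum lag standing in for the infinite sum:

```python
import numpy as np

def xcorr_discrete(f, g, max_lag):
    """Directly evaluate (f * g)[n] = sum_m conj(f[m]) g[m+n] for |n| <= max_lag."""
    f = np.asarray(f, dtype=complex)
    g = np.asarray(g, dtype=complex)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.zeros(len(lags), dtype=complex)
    for i, n in enumerate(lags):
        for m in range(len(f)):
            # Only terms where g[m+n] exists contribute to the sum
            if 0 <= m + n < len(g):
                r[i] += np.conj(f[m]) * g[m + n]
    return r, lags
```

For `f = g = [1, 1, 1]` the correlation peaks at lag 0 with value 3, falling off by one for each sample of offset, which is the triangular shape seen in the autocorrelation plots below.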
```
%pylab inline
pylab.rcParams['savefig.dpi'] = 300
from gps_helper.prn import PRN
from sk_dsp_comm import sigsys as ss
from sk_dsp_comm import digitalcom as dc
from caf_verilog.quantizer import quantize
```
## Test Signals
```
prn = PRN(10)
prn2 = PRN(20)
fs = 625e3
Ns = fs / 125e3
prn_seq = prn.prn_seq()
prn_seq2 = prn2.prn_seq()
prn_seq,b = ss.NRZ_bits2(array(prn_seq), Ns)
prn_seq2,b2 = ss.NRZ_bits2(array(prn_seq2), Ns)
Px,f = psd(prn_seq, 2**12, Fs=fs)
plot(f, 10*np.log10(Px))
```
## Autocorrelation
```
r, lags = dc.xcorr(prn_seq, prn_seq, 100)
plot(lags, abs(r))  # plot the magnitude of the complex correlation
```
## Time Shifted Signals
```
r, lags = dc.xcorr(roll(prn_seq, 50), prn_seq, 100)
plot(lags, abs(r))
r, lags = dc.xcorr(roll(prn_seq, -50), prn_seq, 100)
plot(lags, abs(r))
```
## No Correlation
```
r_nc, lags_nc = dc.xcorr(prn_seq, prn_seq2, 100)
plot(lags_nc, abs(r_nc))
ylim([0, 1])
```
## Calculation Space Visualization
```
from caf_verilog.xcorr import size_visualization
size_visualization(prn_seq[:10], prn_seq[:10], 5)
```
## Simple Cross Correlation
```
from caf_verilog.xcorr import simple_xcorr
r, lags = simple_xcorr(prn_seq, prn_seq, 100)
plot(lags, r)
```
### Time Shifted Signals
```
r, lags = simple_xcorr(prn_seq, roll(prn_seq, 50), 100)
plot(lags, abs(array(r)))
r, lags = simple_xcorr(prn_seq, roll(prn_seq, -50), 100)
plot(lags, abs(array(r)))
```
### No Correlation
```
r, lags = simple_xcorr(prn_seq, prn_seq2, 100)
plot(lags, abs(array(r)))
ylim([0, 5000])
```
## Dot Product Method
To ensure the integration time is filled, the secondary or received signal must be twice the length of the reference signal.
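The idea can be sketched as a sliding dot product. This is a hedged approximation of what `dot_xcorr` computes, not its actual implementation:

```python
import numpy as np

def dot_xcorr_sketch(ref, rec):
    """Slide the reference across the received signal, taking one dot product per offset.
    rec must be at least twice as long as ref so that every offset fills the
    full integration window."""
    n = len(ref)
    assert len(rec) >= 2 * n, "received signal must be twice the reference length"
    # np.vdot conjugates its first argument, matching the cross-correlation definition
    return [np.vdot(ref, rec[k:k + n]) for k in range(n + 1)]
```

With `ref = [1, 1]` and `rec = [0, 1, 1, 0]`, the result peaks at offset 1, where the reference lines up with the received copy.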
```
from caf_verilog.sim_helper import sim_shift
center = 300
corr_length = 250
shift = 25
ref, rec = sim_shift(prn_seq, center, corr_length, shift=shift, padding=True)
f, axarr = plt.subplots(2, sharex=True, gridspec_kw={'hspace': 0})
axarr[0].plot(ref)
axarr[1].plot(rec)
plt.xlabel("Sample Number")
savefig('prn_seq.png')
from caf_verilog.xcorr import dot_xcorr
ref, rec = sim_shift(prn_seq, center, corr_length, shift=shift)
rr = dot_xcorr(ref, rec)
rr = array(rr)
rxy, lags = dc.xcorr(ref, rec, 1000)
plot(abs(rxy))
xlabel("Center Offset (Samples)")
grid();
savefig('xcorr.png')
fig, axs = plt.subplots(2, sharey=True)
axs[0].plot(abs(rr))
axs[0].grid(True)
axs[1].plot(abs(rr))
axs[1].set_xlim([80, 120])
axs[1].set_xlabel('Inverse Center Offset (Samples)')
axs[1].grid(True)
fig.savefig('xcorr_250.png')
argmax(rr)
from caf_verilog.xcorr import XCorr
xc = XCorr(ref, rec, output_dir='.')
xc.gen_tb()
```
# Example Model Metamer Generation from Audio Networks
This notebook walks through model metamer generation for an example sound as a demonstration of the optimization used in the paper:
*Metamers of neural networks reveal divergence from human perceptual systems. Feather, J., Durango, A., Gonzalez, R., & McDermott, J. (2019). In Advances in Neural Information Processing Systems.*
(Tested in tensorflow 1.13, with no guarantee to work with other versions)
# Download tfcochleagram and pycochleagram
To generate audio metamers you will need tfcochleagram to build a tensorflow graph for the cochleagram generation (located in a separate github repo), which in turn requires pycochleagram to generate the cochlear filters:
https://github.com/jenellefeather/tfcochleagram
https://github.com/mcdermottLab/pycochleagram
# Download the model checkpoint, null distribution, and configuration pickle
### Before running this notebook, download the following file from the McDermott lab website
http://mcdermottlab.mit.edu/jfeather/model_metamers/assets/metamers_audio_models_network_files.tar.gz
(Warning: This file is ~2GB in size)
Untar the above file (`tar -xvf metamers_audio_models_network_files.tar.gz`).
This file should contain the following:
(1) Network configuration files used by the `build_*.py` scripts (`word_network_aliased.pckl`, `word_reduced_aliasing_null_dist_spearman_r.pckl`)
(2) Saved tensorflow checkpoints for both models
(3) Pre-computed null distributions for both models (`word_aliased_null_dist_spearman_r.pckl`, `word_reduced_aliasing_null_dist_spearman_r.pckl`)
# Description of included audio networks
Functions for two audio networks are included: the "Word Trained CNN" presented in Figure 3 and the reduced aliasing version of this network. A direct comparison of metamers from these two networks is in the leftmost plot of Figure 4, as shown below:
<img src="assets/audio_networks_figure_4.png" alt="Drawing" style="width: 400px;"/>
The below notebook is set up to generate metamers from the reduced aliasing network, and comments are included where the aliased network could be swapped in.
# Now load the dependencies
```
import tensorflow as tf
import numpy as np
%matplotlib inline
import matplotlib.pylab as plt
import scipy
import sys
import os
import metamer_helpers
from lossfunctions import generate_loss_functions
import pickle
import IPython.display as ipd
```
# Set up the input to the graph
Loading the network should include any preprocessing that was performed on the input. For the audio networks, this includes the cochleagram generation graph built with tfcochleagram.
Including the preprocessing in the metamer generation is important to ensure that generated metamers go through the same preprocessing that is applied to the natural input.
All of this is included in the `build_word_network_aliased.py` or `build_word_network_reduced_aliasing.py` script. When generating metamers for a new network, a separate build script should be written including the preprocessing.
The build script also provides easy pointers to activation layers in the network, and applies the modified gradient ReLU to the desired layers.
```
tf.reset_default_graph()
# To load the reduced aliasing network
import build_word_network_reduced_aliasing as build_network
# To load the network with aliasing
# import build_word_network_aliased as build_network
nets, sess, metamer_layers = build_network.main()
# Make a function that re-initializes the input variable
input_tensor, input_noise_assign_op = metamer_helpers.make_initialization_op_pink_audio(
nets['input_signal'], nets['input_signal'].get_shape().as_list()[1],
audio_scaling=0.1, rms_normalize=0.1)
```
# Load an example speech sound to generate metamers.
```
audio_path = 'assets/human_audio_resampled.wav'
wav_word = 'human'
audio_dict = metamer_helpers.use_audio_path_specified_audio(audio_path,
wav_word,
rms_normalize=0.1)
```
# Set up a loss function.
This is a partial function that we can apply to each of the metamer layers. For Feather et al. 2019 we use an L2 loss on the activations from single layers of the network.
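Conceptually, the per-layer loss is just a squared error between the activations being optimized and the fixed target activations. A minimal NumPy sketch (the notebook itself builds the equivalent TensorFlow op via `generate_loss_functions`):

```python
import numpy as np

def l2_activation_loss(synth_act, target_act):
    """Sum of squared differences between the current layer activations and
    the fixed target activations from the natural sound."""
    diff = np.asarray(synth_act, dtype=float) - np.asarray(target_act, dtype=float)
    return float(np.sum(diff ** 2))
```

Driving this quantity to zero for a given layer makes the synthetic input a model metamer of the original at that layer.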
```
# The loss function used for metamer generation (L2 loss) is 'raw_pixels'
loss_function_name = 'raw_pixels'
loss_function, measure_stats = generate_loss_functions(LOSS_TYPE=loss_function_name,
SHAPE_NORMALIZE=False)
```
# Run the optimization to generate metamers
Here we can run the optimization for each layer in `metamer_layers` and plot the example images.
We optimize from one early layer of the network and one late layer of the network.
Note: This will take a long time to run in a notebook, especially for the late layers of the network.
For this demonstration, we chose two example layers to optimize.
### Interpretation of Demos
By generating metamers from one early layer and one late layer as demonstrated below, it is clear that the late layer of the network no longer sounds like the matched sample of clean speech. This demonstrates that the model invariances are not the same as human invariances.
```
# Uncomment to synthesize and plot all of the metamer layers (WARNING: SLOW)
# plot_metamer_layers = metamer_layers
# Uncomment the below to only run a few metamer layers for the reduced aliasing network
plot_metamer_layers = ['pool_0_0', 'conv_4_jittered_relu']
# Uncomment the below to only run a few metamer layers for the aliased network
# (The size of the activations at `pool_0_0` is matched to the size at `conv_0_jittered_relu`)
# plot_metamer_layers = ['conv_0_jittered_relu', 'conv_4_jittered_relu']
# Metamers in Feather et al. 2019 ran for 15000 iterations of Adam, as the loss began to flatten for many
# model metamers at this point. The learning rate and the number of iterations may need to be changed for some models.
iterations_adam = 15000
log_loss_every_num = 1000 # used to track the loss and intermediate examples
starting_learning_rate_adam = 0.001
num_layers = len(plot_metamer_layers)
plt.figure(figsize=(3*num_layers,12))
metamer_dict = {}
print('Original audio')
ipd.display(ipd.Audio(audio_dict['wav'], rate=audio_dict['SR']))
for layer_idx, layer in enumerate(plot_metamer_layers):
print('Generating metamer for Word Network (Reduced Aliasing) layer %s'%layer)
# Reinitialize the input variable
sess.run(input_noise_assign_op)
# Get the loss on the layer features
orig_features = sess.run(nets[layer], feed_dict = {input_tensor:[audio_dict['wav']]})
losses = loss_function(nets[layer], orig_features, update_losses=[])
# In case we had included multiple losses, sum over them
loss = tf.reduce_sum(losses)
# Build the optimizers and run the optimization
total_loss, track_iters, track_audio = metamer_helpers.run_optimization(
loss, sess, input_tensor, iterations_adam, log_loss_every_num,
starting_learning_rate_adam=starting_learning_rate_adam)
# Evaluate some of the tensors to save and to plot
orig_predictions = sess.run(nets['logits'], feed_dict = {input_tensor:[audio_dict['wav']]})[0]
synth_predictions = sess.run(nets['logits'])[0]
synth_audio = sess.run(input_tensor)
synth_features = sess.run(nets[layer]).ravel()
synth_coch = np.squeeze(sess.run(nets['visualization_input']))
orig_coch = np.squeeze(sess.run(nets['visualization_input'], feed_dict = {input_tensor:[audio_dict['wav']]}))
metamer_dict[layer] = {}
metamer_dict[layer]['track_audio'] = track_audio
metamer_dict[layer]['total_loss'] = total_loss
metamer_dict[layer]['orig_features'] = orig_features
metamer_dict[layer]['synth_features'] = synth_features
metamer_dict[layer]['synth_audio'] = synth_audio
metamer_dict[layer]['orig_audio'] = [audio_dict['wav']]
metamer_dict[layer]['synth_coch'] = synth_coch
metamer_dict[layer]['orig_coch'] = orig_coch
metamer_dict[layer]['orig_predictions'] = orig_predictions
metamer_dict[layer]['synth_predictions'] = synth_predictions
print('Audio for layer %s Model Metamer'%layer)
ipd.display(ipd.Audio(synth_audio, rate=audio_dict['SR']))
# Make some plots in the notebook
if layer_idx == 0:
plt.subplot(4, num_layers, 1)
plt.imshow(orig_coch, origin='lower', cmap='Blues')
plt.title('Original Cochleagram')
plt.subplot(4, num_layers, layer_idx+1+num_layers)
plt.imshow(synth_coch, origin='lower', cmap='Blues')
plt.title('%s'%layer)
ax=plt.subplot(4, num_layers, layer_idx+1+num_layers*2)
plt.scatter(orig_predictions,synth_predictions)
plt.ylim(ymin=np.min(orig_predictions),ymax=np.max(orig_predictions))
plt.xlim(xmin=np.min(orig_predictions),xmax=np.max(orig_predictions))
plt.title('Spearman R Predictions: %f'%(scipy.stats.spearmanr(orig_predictions,synth_predictions)[0]))
ax=plt.subplot(4, num_layers, layer_idx+1+num_layers*3)
plt.scatter(orig_features,synth_features)
plt.ylim(ymin=np.min(orig_features.ravel()),ymax=np.max(orig_features.ravel()))
plt.xlim(xmin=np.min(orig_features.ravel()),xmax=np.max(orig_features.ravel()))
plt.title('Spearman R Activations: %f'%(scipy.stats.spearmanr(orig_features.ravel(),synth_features.ravel())[0]))
```
# For each layer, check that the optimized metamer meets the optimization criteria.
(1) The network must predict the same thing for the synthetic and the original
(2) The Spearman R between the synthetic and the original must fall outside of the null distribution constructed on 1,000,000 image pairs (saved as `word_aliased_null_dist_spearman_r.pckl` or `word_reduced_aliasing_null_dist_spearman_r.pckl`). This is especially important for randomly initialized networks, where at the late layers the activations are all very correlated.
```
# To load the null distribution for the reduced aliasing word network
null_distribution_file = 'word_reduced_aliasing_null_dist_spearman_r.pckl'
# To load the null distribution for the aliased word network
# null_distribution_file = 'word_aliased_null_dist_spearman_r.pckl'
null_distribution_spearman_r = pickle.load(open(null_distribution_file, 'rb'))
for layer_idx, layer in enumerate(plot_metamer_layers):
# Make sure that the Spearman R falls outside of the null distribution constructed from random image pairs
spearman_r_metamer = scipy.stats.spearmanr(metamer_dict[layer]['orig_features'].ravel(), metamer_dict[layer]['synth_features'].ravel())[0]
null_assertion = 'Synthesized metamer for layer %s falls within the null distribution of sounds. Optimization did not succeed.'%layer
assert np.max(null_distribution_spearman_r[layer]) < spearman_r_metamer, null_assertion
# Make sure that the predicted class is the same for the original and the synthetic metamer
class_assertion = 'Synthesized metamer for layer %s is not predicted as the same class as the original sound. Optimization did not succeed.'%layer
assert np.argmax(metamer_dict[layer]['orig_predictions']) == np.argmax(metamer_dict[layer]['synth_predictions']), class_assertion
print('Metamers for all layers passed the optimization criteria!')
```
### Creation of the environment
```
%tensorflow_version 2.x
!pip3 install --upgrade pip
#!pip install -qU t5
!pip3 install git+https://github.com/google-research/text-to-text-transfer-transformer.git #extra_id_x support
import functools
import os
import time
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import tensorflow.compat.v1 as tf
import tensorflow_datasets as tfds
import t5
#Set the base dir(Google cloud bucket)
BASE_DIR = "gs://bucket_code_completion"
if not BASE_DIR or BASE_DIR == "gs://":
raise ValueError("You must enter a BASE_DIR.")
ON_CLOUD = True
if ON_CLOUD:
import tensorflow_gcs_config
from google.colab import auth
# Set credentials for GCS reading/writing from Colab and TPU.
TPU_TOPOLOGY = "2x2"
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
TPU_ADDRESS = tpu.get_master()
print('Running on TPU:', TPU_ADDRESS)
except ValueError:
raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')
auth.authenticate_user()
tf.config.experimental_connect_to_host(TPU_ADDRESS)
tensorflow_gcs_config.configure_gcs_from_colab_auth()
tf.disable_v2_behavior()
# Improve logging.
from contextlib import contextmanager
import logging as py_logging
if ON_CLOUD:
tf.get_logger().propagate = False
py_logging.root.setLevel('INFO')
@contextmanager
def tf_verbosity_level(level):
og_level = tf.logging.get_verbosity()
tf.logging.set_verbosity(level)
yield
tf.logging.set_verbosity(og_level)
```
### Loading of tsv files
With this script you can load each tsv file for finetuning.
Please be sure that the paths to all tsv files are correct.
```
#Validation(train and test on the same dataset)
nq_tsv_path_java_construct = {
"train": 'gs://bucket_code_completion/T5_extension/ft_datasets/train_java_construct.tsv',
"validation": 'gs://bucket_code_completion/T5_extension/ft_datasets/test_java_construct.tsv',
}
num_nq_examples_java_construct = dict(train=750000, validation=106237)
#Validation(train and test on the same dataset)
nq_tsv_path_android_construct = {
"train": 'gs://bucket_code_completion/T5_extension/ft_datasets/train_android_construct.tsv',
"validation": 'gs://bucket_code_completion/T5_extension/ft_datasets/test_android_construct.tsv',
}
num_nq_examples_android_construct = dict(train=750000, validation=100536)
#Validation(train and test on the same dataset)
nq_tsv_path_java_block = {
"train": 'gs://bucket_code_completion/T5_extension/ft_datasets/train_java_block.tsv',
"validation": 'gs://bucket_code_completion/T5_extension/ft_datasets/test_java_block.tsv',
}
num_nq_examples_java_block = dict(train=298470, validation=40008)
#Validation(train and test on the same dataset)
nq_tsv_path_android_block = {
"train": 'gs://bucket_code_completion/T5_extension/ft_datasets/train_android_block.tsv',
"validation": 'gs://bucket_code_completion/T5_extension/ft_datasets/test_android_block.tsv',
}
num_nq_examples_android_block = dict(train=204580, validation=26978)
#Validation(train and test on the same dataset)
nq_tsv_path_java_token = {
"train": 'gs://bucket_code_completion/T5_extension/ft_datasets/train_java_token.tsv',
"validation": 'gs://bucket_code_completion/T5_extension/ft_datasets/test_java_token.tsv',
}
num_nq_examples_java_token = dict(train=750000, validation=219486)
#Validation(train and test on the same dataset)
nq_tsv_path_android_token = {
"train": 'gs://bucket_code_completion/T5_extension/ft_datasets/train_android_token.tsv',
"validation": 'gs://bucket_code_completion/T5_extension/ft_datasets/test_android_token.tsv',
}
num_nq_examples_android_token = dict(train=750000, validation=200504)
```
### Preprocess of the dataset
In this step we preprocess the dataset.
You have to change the paths to the vocab files (*vocab_model_path* and *vocab_path*).
We preprocess all the tsv files so that T5 can use them for finetuning.
```
from t5.data import postprocessors as t5_postprocessors
from t5.seqio import Feature,SentencePieceVocabulary
# # Set the path of sentencepiece model and vocab files
# # Must be the same used for the pre-trained phase
vocab_model_path = 'gs://bucket_code_completion/T5_extension/code.model'
vocab_path = 'gs://bucket_code_completion/T5_extension/code.vocab'
TaskRegistry = t5.data.TaskRegistry
TfdsTask = t5.data.TfdsTask
def get_default_vocabulary():
return SentencePieceVocabulary(vocab_model_path, 100)
DEFAULT_OUTPUT_FEATURES = {
"inputs": Feature(
vocabulary=get_default_vocabulary(), add_eos=True, required=False),
"targets": Feature(
vocabulary=get_default_vocabulary(), add_eos=True)
}
```
JAVA CONSTRUCT
```
def nq_java_construct(split, shuffle_files=True):
# We only have one file for each split.
del shuffle_files
# Load lines from the text file as examples.
ds = tf.data.TextLineDataset(nq_tsv_path_java_construct[split])
ds = ds.map(
functools.partial(tf.io.decode_csv, record_defaults=["string","string"],
field_delim="\t", use_quote_delim=False),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds = ds.map(lambda *ex: dict(zip(["input", "output"], ex)))
return ds
print("A few raw train examples...")
for ex in tfds.as_numpy(nq_java_construct("train").take(5)):
print(ex)
def java_construct_preprocessing(ds):
def to_inputs_and_targets(ex):
inputs = tf.strings.join(['JAVA_CONSTRUCT:' + ex['input']], separator=' ')
class_label = tf.strings.join([ex['output']], separator=' ')
return {'inputs': inputs, 'targets': class_label }
return ds.map(to_inputs_and_targets,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
t5.data.TaskRegistry.remove('java_construct')
t5.data.TaskRegistry.add(
"java_construct",
dataset_fn=nq_java_construct,
splits=["train", "validation"],
text_preprocessor=[java_construct_preprocessing],
output_features = DEFAULT_OUTPUT_FEATURES,
metric_fns=[t5.evaluation.metrics.accuracy],
num_input_examples=num_nq_examples_java_construct
)
nq_task = t5.data.TaskRegistry.get("java_construct")
ds = nq_task.get_dataset(split="train", sequence_length={"inputs": 256, "targets": 256})
print("A few preprocessed training examples...")
for ex in tfds.as_numpy(ds.take(5)):
print(ex)
```
JAVA TOKEN
```
def nq_java_token(split, shuffle_files=False):
# We only have one file for each split.
del shuffle_files
# Load lines from the text file as examples.
ds = tf.data.TextLineDataset(nq_tsv_path_java_token[split])
ds = ds.map(
functools.partial(tf.io.decode_csv, record_defaults=["string","string"],
field_delim="\t", use_quote_delim=False),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds = ds.map(lambda *ex: dict(zip(["input", "output"], ex)))
return ds
print("A few raw valid examples...")
for ex in tfds.as_numpy(nq_java_token("validation").take(5)):
print(ex)
def java_token_preprocessing(ds):
def to_inputs_and_targets(ex):
inputs = tf.strings.join(['JAVA_TOKEN:' + ex['input']], separator=' ')
class_label = tf.strings.join([ex['output']], separator=' ')
return {'inputs': inputs, 'targets': class_label }
return ds.map(to_inputs_and_targets,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
t5.data.TaskRegistry.remove('java_token')
t5.data.TaskRegistry.add(
"java_token",
dataset_fn=nq_java_token,
splits=["train", "validation"],
text_preprocessor=[java_token_preprocessing],
output_features = DEFAULT_OUTPUT_FEATURES,
metric_fns=[t5.evaluation.metrics.accuracy],
num_input_examples=num_nq_examples_java_token
)
nq_task = t5.data.TaskRegistry.get("java_token")
ds = nq_task.get_dataset(split="train", sequence_length={"inputs": 256, "targets": 256})
print("A few preprocessed training examples...")
for ex in tfds.as_numpy(ds.take(5)):
print(ex)
```
JAVA BLOCK
```
def nq_java_block(split, shuffle_files=False):
# We only have one file for each split.
del shuffle_files
# Load lines from the text file as examples.
ds = tf.data.TextLineDataset(nq_tsv_path_java_block[split])
ds = ds.map(
functools.partial(tf.io.decode_csv, record_defaults=["string","string"],
field_delim="\t", use_quote_delim=False),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds = ds.map(lambda *ex: dict(zip(["input", "output"], ex)))
return ds
print("A few raw valid examples...")
for ex in tfds.as_numpy(nq_java_block("validation").take(5)):
print(ex)
def java_block_preprocessing(ds):
def to_inputs_and_targets(ex):
inputs = tf.strings.join(['JAVA_BLOCK:' + ex['input']], separator=' ')
class_label = tf.strings.join([ex['output']], separator=' ')
return {'inputs': inputs, 'targets': class_label }
return ds.map(to_inputs_and_targets,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
t5.data.TaskRegistry.remove('java_block')
t5.data.TaskRegistry.add(
"java_block",
dataset_fn=nq_java_block,
splits=["train", "validation"],
text_preprocessor=[java_block_preprocessing],
output_features = DEFAULT_OUTPUT_FEATURES,
metric_fns=[t5.evaluation.metrics.accuracy],
num_input_examples=num_nq_examples_java_block
)
nq_task = t5.data.TaskRegistry.get("java_block")
ds = nq_task.get_dataset(split="train", sequence_length={"inputs": 256, "targets": 256})
print("A few preprocessed training examples...")
for ex in tfds.as_numpy(ds.take(5)):
print(ex)
```
ANDROID CONSTRUCT
```
def nq_android_construct(split, shuffle_files=True):
# We only have one file for each split.
del shuffle_files
# Load lines from the text file as examples.
ds = tf.data.TextLineDataset(nq_tsv_path_android_construct[split])
ds = ds.map(
functools.partial(tf.io.decode_csv, record_defaults=["string","string"],
field_delim="\t", use_quote_delim=False),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds = ds.map(lambda *ex: dict(zip(["input", "output"], ex)))
return ds
print("A few raw train examples...")
for ex in tfds.as_numpy(nq_android_construct("train").take(5)):
print(ex)
def android_construct_preprocessing(ds):
def to_inputs_and_targets(ex):
inputs = tf.strings.join(['ANDROID_CONSTRUCT:' + ex['input']], separator=' ')
class_label = tf.strings.join([ex['output']], separator=' ')
return {'inputs': inputs, 'targets': class_label }
return ds.map(to_inputs_and_targets,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
t5.data.TaskRegistry.remove('android_construct')
t5.data.TaskRegistry.add(
"android_construct",
dataset_fn=nq_android_construct,
splits=["train", "validation"],
text_preprocessor=[android_construct_preprocessing],
output_features = DEFAULT_OUTPUT_FEATURES,
metric_fns=[t5.evaluation.metrics.accuracy],
num_input_examples=num_nq_examples_android_construct
)
nq_task = t5.data.TaskRegistry.get("android_construct")
ds = nq_task.get_dataset(split="train", sequence_length={"inputs": 256, "targets": 256})
print("A few preprocessed training examples...")
for ex in tfds.as_numpy(ds.take(5)):
print(ex)
```
ANDROID TOKEN
```
def nq_android_token(split, shuffle_files=False):
# We only have one file for each split.
del shuffle_files
# Load lines from the text file as examples.
ds = tf.data.TextLineDataset(nq_tsv_path_android_token[split])
ds = ds.map(
functools.partial(tf.io.decode_csv, record_defaults=["string","string"],
field_delim="\t", use_quote_delim=False),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds = ds.map(lambda *ex: dict(zip(["input", "output"], ex)))
return ds
print("A few raw valid examples...")
for ex in tfds.as_numpy(nq_android_token("validation").take(5)):
print(ex)
def android_token_preprocessing(ds):
def to_inputs_and_targets(ex):
inputs = tf.strings.join(['ANDROID_TOKEN:' + ex['input']], separator=' ')
class_label = tf.strings.join([ex['output']], separator=' ')
return {'inputs': inputs, 'targets': class_label }
return ds.map(to_inputs_and_targets,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
t5.data.TaskRegistry.remove('android_token')
t5.data.TaskRegistry.add(
"android_token",
dataset_fn=nq_android_token,
splits=["train", "validation"],
text_preprocessor=[android_token_preprocessing],
output_features = DEFAULT_OUTPUT_FEATURES,
metric_fns=[t5.evaluation.metrics.accuracy],
num_input_examples=num_nq_examples_android_token
)
nq_task = t5.data.TaskRegistry.get("android_token")
ds = nq_task.get_dataset(split="train", sequence_length={"inputs": 256, "targets": 256})
print("A few preprocessed training examples...")
for ex in tfds.as_numpy(ds.take(5)):
print(ex)
```
ANDROID BLOCK
```
def nq_android_block(split, shuffle_files=False):
# We only have one file for each split.
del shuffle_files
# Load lines from the text file as examples.
ds = tf.data.TextLineDataset(nq_tsv_path_android_block[split])
ds = ds.map(
functools.partial(tf.io.decode_csv, record_defaults=["string","string"],
field_delim="\t", use_quote_delim=False),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds = ds.map(lambda *ex: dict(zip(["input", "output"], ex)))
return ds
print("A few raw valid examples...")
for ex in tfds.as_numpy(nq_android_block("validation").take(5)):
print(ex)
def android_block_preprocessing(ds):
def to_inputs_and_targets(ex):
inputs = tf.strings.join(['ANDROID_BLOCK:' + ex['input']], separator=' ')
class_label = tf.strings.join([ex['output']], separator=' ')
return {'inputs': inputs, 'targets': class_label }
return ds.map(to_inputs_and_targets,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
t5.data.TaskRegistry.remove('android_block')
t5.data.TaskRegistry.add(
"android_block",
dataset_fn=nq_android_block,
splits=["train", "validation"],
text_preprocessor=[android_block_preprocessing],
output_features = DEFAULT_OUTPUT_FEATURES,
metric_fns=[t5.evaluation.metrics.accuracy],
num_input_examples=num_nq_examples_android_block
)
nq_task = t5.data.TaskRegistry.get("android_block")
ds = nq_task.get_dataset(split="train", sequence_length={"inputs": 256, "targets": 256})
print("A few preprocessed training examples...")
for ex in tfds.as_numpy(ds.take(5)):
print(ex)
```
### Finetuning
You can run the finetuning using the following cells.
Please set the correct paths for the variables *MODEL_DIR* (the path to save the finetuned model in), *PATH_GIN_FILE* (the gin configuration file for this finetuning) and *PRETRAINED_DIR* (the folder that contains the pretrained model).
**Pay attention** to the *pretrained_model_dir* argument in the finetune step: if you are starting the finetuning from scratch, set it to *PRETRAINED_DIR*; if you are restarting the finetuning from a previously saved checkpoint, set it to *MODEL_DIR*.
```
def _rate_num_input_examples(task):
if "train" in task.splits:
return float(task.num_input_examples("train"))
elif "validation" in task.splits:
return float(task.num_input_examples("validation"))
else:
raise ValueError("Task %s does not have a train or validation split." % (task.name))
t5.data.MixtureRegistry.remove("all_tasks")
t5.data.MixtureRegistry.add(
"all_tasks",
["java_construct", "java_token", "java_block", "android_construct", "android_token", "android_block"],
default_rate=_rate_num_input_examples
#default_rate=1.0
)
from mesh_tensorflow.transformer.learning_rate_schedules import slanted_triangular
MODEL_SIZE = "small"
# Set the folder where the checkpoints and all the other information will be written
MODEL_DIR = 'gs://bucket_code_completion/T5_extension/finetuning'
# Specify the pre-trained dir which must contain the pre-trained models, the operative_config.gin file and the checkpoint file as well
PRETRAINED_DIR='gs://bucket_code_completion/T5_extension/pretrained_with_masking'
model_parallelism, train_batch_size, keep_checkpoint_max = {
"small": (1, 256, 16),
"base": (2, 128, 8),
"large": (8, 64, 4),
"3B": (8, 16, 1),
"11B": (8, 16, 1)}[MODEL_SIZE]
tf.io.gfile.makedirs(MODEL_DIR)
model = t5.models.MtfModel(
model_dir=MODEL_DIR,
tpu=TPU_ADDRESS,
tpu_topology=TPU_TOPOLOGY,
model_parallelism=model_parallelism,
batch_size=train_batch_size,
learning_rate_schedule = slanted_triangular,
sequence_length={"inputs": 256, "targets": 256},
save_checkpoints_steps=5000,
keep_checkpoint_max=keep_checkpoint_max if ON_CLOUD else None,
iterations_per_loop=100,
)
PATH_GIN_FILE = 'gs://bucket_code_completion/T5_extension/finetuned_config/slanted-operative_config.gin'
import gin
with gin.unlock_config():
gin.parse_config_file(PATH_GIN_FILE)
#RUN FINE-TUNING
FINETUNE_STEPS = 400000
model.finetune(
mixture_or_task_name="all_tasks",
pretrained_model_dir=MODEL_DIR,
finetune_steps=FINETUNE_STEPS
)
```
# Applying Spectral Clustering to Graph-Structured Data
In NLP, the data we work with often has a graph structure (for example, lexical networks, verbal paradigms, etc.). The spectral clustering model can adapt to this structure and produce clusters of the nodes. Below we present an application to a graph of bilingual data.
```
# Import the packages we will use
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd
from scipy.linalg import eig
from csv import reader
from operator import itemgetter
from sklearn.decomposition import PCA
```
## Data Preprocessing
When we work with data that already has a graph structure, preprocessing becomes simpler, since we do not need to generate that structure ourselves. In this case, instead of a Vector - Graph - Vector pipeline, we skip the first step and only have Graph - Vector.
```
# Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
# Open the file
file = open('/content/drive/My Drive/Curso RIIAA/data/corpus_bilingual.txt','r')
print(file)
# Read the file
edges = list(reader(file, delimiter='\t'))
```
The data is structured as a bipartite graph, where one set of nodes corresponds to Nahuatl and the other to Spanish. Since there are loanwords between the two languages, we use a suffix to indicate which language each lexical form belongs to.
```
# Create the edges that define the graph
edges = [(edge[0]+'_NA',edge[1]+'_ES',float(edge[4])/38608) for edge in edges] # Dividing by the maximum (38608) normalizes the weights
print(edges)
```
We can visualize the data using the $networkx$ library. This same library also makes it easy to build the adjacency matrix.
```
# Build a graph from the edges we defined
G = nx.Graph()
G.add_weighted_edges_from(edges[:10]) # Take only a few edges to make the visualization easier
# Show the edges in networkx format
print(G.edges(data=True))
# Draw the graph
nx.draw(G, with_labels=True, node_size=10)
```
## Applying the spectral clustering algorithm
Once we have the data in a tractable graph format, we can apply the spectral clustering algorithm. To do this, we obtain the adjacency matrix.
```
# Get the adjacency matrix from the networkx format
A = nx.to_numpy_array(G)
# Save the node labels
labels = G.nodes
# Visualize the adjacency matrix
df = pd.DataFrame(A, index=labels, columns=labels)
print(df.to_string())
```
Since the adjacency matrix stores the graph information in vector form, we can visualize it in a space $\mathbb{R}^d$. However, note that it does not give us enough information to cluster the points.
```
# Plotting function
def plot_words(Z,ids,color='blue'):
    # Reduce to two dimensions with PCA
    Z = PCA(n_components=2).fit_transform(Z)
    # Plot the two dimensions
    plt.scatter(Z[:,0],Z[:,1], marker='o', c=color)
    for label,x,y in zip(ids, Z[:,0], Z[:,1]):
        # Add the labels
        plt.annotate(label, xy=(x,y), xytext=(-1,1), textcoords='offset points', ha='center', va='bottom')
plot_words(A,labels)
plt.show()
```
We therefore apply spectral clustering, obtaining the Laplacian matrix as $L = D - A$, where $D$ is the degree matrix and $A$ the adjacency matrix. We then perform the spectral factorization.
```
# Compute the Laplacian matrix
L = np.diag(A.sum(0))-A
# Compute the eigenvalues and eigenvectors of L
eig_vals, eig_vecs = eig(L)
# Sort by eigenvalue (the eigenvectors are the columns of eig_vecs, hence the transpose)
values = sorted(zip(eig_vals.real, eig_vecs.T), key=itemgetter(0))
# Unpack eigenvalues and eigenvectors
vals, vecs = zip(*values)
# Build a matrix of eigenvectors
matrix = np.array(vecs)
# Visualize the eigenvalues
plt.plot(np.array(vals),'o')
plt.show()
```
Finally, we obtain the new vectors from the eigenvectors of $L$ associated with the smallest eigenvalues.
```
# Build the matrix of new vectors
M_hat = matrix.T.real # All eigenvectors are kept
# Size of the matrix
print(M_hat.shape)
# Plot the new data
plot_words(M_hat,labels)
```
### Clustering the points
Once we have the new vectors, we can apply a clustering method (k-means) to observe the regularities found.
```
from sklearn.cluster import KMeans
# Number of centroids
centroids = 5
# Apply k-means
kmeans = KMeans(n_clusters=centroids, init='random').fit(M_hat)
# Get the clusters
pred_labels = kmeans.predict(M_hat)
# Plot the clusters
plot_words(M_hat, labels, color=pred_labels)
plt.show()
```
| github_jupyter |
## NSGA-II
The algorithm is implemented based on <cite data-cite="nsga2"></cite>. A benchmark of the algorithm against the original C code can be found here: [\[benchmark\]](https://www.egr.msu.edu/coinlab/blankjul/pymoo-benchmark/nsga2.html) [\[data\]](https://www.egr.msu.edu/coinlab/blankjul/pymoo-benchmark/nsga2.zip).
The algorithm follows the general outline of a genetic algorithm with modified mating and survival selection. In NSGA-II, individuals are first selected frontwise. As a consequence, there will be situations where a front needs to be split because not all of its individuals are allowed to survive. In such a splitting front, solutions are selected based on crowding distance.
<div style="display: block;margin-left: auto;margin-right: auto;width: 80%;">

</div>
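As a rough sketch (not pymoo's optimized implementation), the frontwise selection described above rests on non-dominated sorting, which could be written as:

```python
import numpy as np

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization assumed)."""
    return np.all(a <= b) and np.any(a < b)

def fronts(F):
    """Split the solutions (rows of F) into successive non-dominated fronts."""
    remaining = list(range(len(F)))
    result = []
    while remaining:
        # the current front: solutions not dominated by any other remaining one
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        result.append(front)
        remaining = [i for i in remaining if i not in front]
    return result

F = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 2.0], [4.0, 4.0], [5.0, 1.0]])
print(fronts(F))  # → [[0, 1, 2, 4], [3]]
```

Here point 3 is dominated by point 1 and therefore lands in the second front, while the remaining four are mutually non-dominated.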
The crowding distance is essentially the Manhattan distance in objective space. However, the extreme points should be kept in every generation and are therefore assigned a crowding distance of infinity.
<div style="display: block;margin-left: auto;margin-right: auto;width: 50%;">

</div>
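A minimal sketch of the crowding-distance assignment described above (illustrative only, not pymoo's implementation):

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance of each solution in one front.

    F is an (n, m) array of objective values. Boundary solutions per
    objective are assigned infinity so the extreme points always survive.
    """
    n, m = F.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        f = F[order, j]
        dist[order[0]] = dist[order[-1]] = np.inf  # keep extreme points
        span = f[-1] - f[0]
        if span == 0:
            continue
        # interior points: normalized gap between their two neighbours
        dist[order[1:-1]] += (f[2:] - f[:-2]) / span
    return dist

F = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(crowding_distance(F))  # boundary points are inf, middle point is finite
```

During survival selection the split front is then sorted by this value, so solutions in sparsely populated regions of objective space are preferred.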
Furthermore, to increase the selection pressure, NSGA-II uses binary tournament mating selection. Individuals are first compared by rank and then by crowding distance. There is also a variant in the original C code where, instead of the rank, the domination criterion between two solutions is used.
### Example
```
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
from pymoo.visualization.scatter import Scatter
problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
res = minimize(problem,
algorithm,
('n_gen', 200),
seed=84,
verbose=True)
plot = Scatter()
plot.add(problem.pareto_front(), plot_type="line", color="black", alpha=0.7)
plot.add(res.F, color="red")
plot.show()
```
Moreover, we can customize NSGA-II to solve a problem with binary decision variables, for example ZDT5.
```
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_sampling, get_crossover, get_mutation
from pymoo.optimize import minimize
from pymoo.visualization.scatter import Scatter
problem = get_problem("zdt5")
algorithm = NSGA2(pop_size=100,
sampling=get_sampling("bin_random"),
crossover=get_crossover("bin_two_point"),
mutation=get_mutation("bin_bitflip"),
eliminate_duplicates=True)
res = minimize(problem,
algorithm,
('n_gen', 500),
seed=1,
verbose=False)
Scatter().add(res.F).show()
```
### API
| github_jupyter |
# Training Neural Networks
The network we built in the previous part isn't so smart; it doesn't know anything about our handwritten digits. Neural networks with non-linear activations work like universal function approximators. There is some function that maps your input to the output. For example, images of handwritten digits to class probabilities. The power of neural networks is that we can train them to approximate this function, and basically any function, given enough data and compute time.
<img src="assets/function_approx.png" width=500px>
At first the network is naive, it doesn't know the function mapping the inputs to the outputs. We train the network by showing it examples of real data, then adjusting the network parameters such that it approximates this function.
To find these parameters, we need to know how poorly the network is predicting the real outputs. For this we calculate a **loss function** (also called the cost), a measure of our prediction error. For example, the mean squared loss is often used in regression and binary classification problems
$$
\large \ell = \frac{1}{2n}\sum_i^n{\left(y_i - \hat{y}_i\right)^2}
$$
where $n$ is the number of training examples, $y_i$ are the true labels, and $\hat{y}_i$ are the predicted labels.
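For instance, a quick NumPy check of this formula, with the $\frac{1}{2}$ factor as written above (the labels and predictions here are made up):

```python
import numpy as np

y = np.array([1.0, 0.0, 1.0, 1.0])       # true labels (hypothetical)
y_hat = np.array([0.9, 0.2, 0.8, 0.4])   # predicted labels (hypothetical)
n = len(y)

# mean squared loss with the 1/2 factor from the formula above
loss = np.sum((y - y_hat) ** 2) / (2 * n)
print(loss)
```

A bad prediction (like the 0.4 for a true label of 1) contributes most of the loss, which is exactly what gradient descent will push the network to correct.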
By minimizing this loss with respect to the network parameters, we can find configurations where the loss is at a minimum and the network is able to predict the correct labels with high accuracy. We find this minimum using a process called **gradient descent**. The gradient is the slope of the loss function and points in the direction of fastest change. To get to the minimum in the least amount of time, we then want to follow the gradient (downwards). You can think of this like descending a mountain by following the steepest slope to the base.
<img src='assets/gradient_descent.png' width=350px>
## Backpropagation
For single layer networks, gradient descent is straightforward to implement. However, it's more complicated for deeper, multilayer neural networks like the one we've built. Complicated enough that it took about 30 years before researchers figured out how to train multilayer networks.
Training multilayer networks is done through **backpropagation** which is really just an application of the chain rule from calculus. It's easiest to understand if we convert a two layer network into a graph representation.
<img src='assets/backprop_diagram.png' width=550px>
In the forward pass through the network, our data and operations go from bottom to top here. We pass the input $x$ through a linear transformation $L_1$ with weights $W_1$ and biases $b_1$. The output then goes through the sigmoid operation $S$ and another linear transformation $L_2$. Finally we calculate the loss $\ell$. We use the loss as a measure of how bad the network's predictions are. The goal then is to adjust the weights and biases to minimize the loss.
To train the weights with gradient descent, we propagate the gradient of the loss backwards through the network. Each operation has some gradient between the inputs and outputs. As we send the gradients backwards, we multiply the incoming gradient with the gradient for the operation. Mathematically, this is really just calculating the gradient of the loss with respect to the weights using the chain rule.
$$
\large \frac{\partial \ell}{\partial W_1} = \frac{\partial L_1}{\partial W_1} \frac{\partial S}{\partial L_1} \frac{\partial L_2}{\partial S} \frac{\partial \ell}{\partial L_2}
$$
**Note:** I'm glossing over a few details here that require some knowledge of vector calculus, but they aren't necessary to understand what's going on.
We update our weights using this gradient with some learning rate $\alpha$.
$$
\large W^\prime_1 = W_1 - \alpha \frac{\partial \ell}{\partial W_1}
$$
The learning rate $\alpha$ is set such that the weight update steps are small enough that the iterative method settles in a minimum.
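As a minimal illustration of this update rule, here is plain gradient descent on a toy scalar loss (a sketch, not the network training itself):

```python
# Toy loss: l(w) = (w - 3)^2, minimized at w = 3, with gradient dl/dw = 2(w - 3)
def grad(w):
    return 2 * (w - 3)

w = 0.0
alpha = 0.1  # learning rate
for _ in range(100):
    w = w - alpha * grad(w)  # the update rule: w' = w - alpha * dl/dw
print(w)  # converges toward 3
```

With a small enough learning rate each step shrinks the distance to the minimum by a constant factor; a learning rate that is too large would instead make `w` overshoot and diverge.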
## Import Resources
```
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
print('Using:')
print('\t\u2022 TensorFlow version:', tf.__version__)
print('\t\u2022 tf.keras version:', tf.keras.__version__)
print('\t\u2022 Running on GPU' if tf.test.is_gpu_available() else '\t\u2022 GPU device not found. Running on CPU')
```
## Load the Dataset
```
training_set, dataset_info = tfds.load('mnist', split='train', as_supervised = True, with_info = True)
```
## Create Pipeline
```
def normalize(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
num_training_examples = dataset_info.splits['train'].num_examples
batch_size = 64
training_batches = training_set.cache().shuffle(num_training_examples//4).batch(batch_size).map(normalize).prefetch(1)
```
## Build the Model
```
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape = (28, 28, 1)),
tf.keras.layers.Dense(128, activation = 'relu'),
tf.keras.layers.Dense(64, activation = 'relu'),
tf.keras.layers.Dense(10, activation = 'softmax')
])
```
## Getting the Model Ready For Training
Before we can train our model we need to set the parameters we are going to use to train it. We can configure our model for training using the `.compile` method. The main parameters we need to specify in the `.compile` method are:
* **Optimizer:** The algorithm that we'll use to update the weights of our model during training. Throughout these lessons we will use the [`adam`](http://arxiv.org/abs/1412.6980) optimizer. Adam is an optimization of the stochastic gradient descent algorithm. For a full list of the optimizers available in `tf.keras` check out the [optimizers documentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers#classes).
* **Loss Function:** The loss function we are going to use during training to measure the difference between the true labels of the images in your dataset and the predictions made by your model. In this lesson we will use the `sparse_categorical_crossentropy` loss function. We use the `sparse_categorical_crossentropy` loss function when our dataset has labels that are integers, and the `categorical_crossentropy` loss function when our dataset has one-hot encoded labels. For a full list of the loss functions available in `tf.keras` check out the [losses documentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/losses#classes).
* **Metrics:** A list of metrics to be evaluated by the model during training. Throughout these lessons we will measure the `accuracy` of our model. The `accuracy` calculates how often our model's predictions match the true labels of the images in our dataset. For a full list of the metrics available in `tf.keras` check out the [metrics documentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/metrics#classes).
These are the main parameters we are going to set throughout these lessons. You can check out all the other configuration parameters in the [TensorFlow documentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#compile).
```
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
```
## Taking a Look at the Loss and Accuracy Before Training
Before we train our model, let's take a look at how our model performs when it is just using random weights. Let's take a look at the `loss` and `accuracy` values when we pass a single batch of images to our un-trained model. To do this, we will use the `.evaluate(data, true_labels)` method. The `.evaluate(data, true_labels)` method compares the predicted output of our model on the given `data` with the given `true_labels` and returns the `loss` and `accuracy` values.
```
for image_batch, label_batch in training_batches.take(1):
loss, accuracy = model.evaluate(image_batch, label_batch)
print('\nLoss before training: {:,.3f}'.format(loss))
print('Accuracy before training: {:.3%}'.format(accuracy))
```
## Training the Model
Now let's train our model by using all the images in our training set. Some nomenclature, one pass through the entire dataset is called an *epoch*. To train our model for a given number of epochs we use the `.fit` method, as seen below:
```
EPOCHS = 5
history = model.fit(training_batches, epochs = EPOCHS)
```
The `.fit` method returns a `History` object which contains a record of training accuracy and loss values at successive epochs, as well as validation accuracy and loss values when applicable. We will discuss the history object in a later lesson.
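For example, the per-epoch values can be read back from the `history.history` attribute, a plain Python dict keyed by metric name. A sketch of its structure with hypothetical values:

```python
# A sketch of the record returned by model.fit: history.history is a plain
# dict mapping each metric name to a list with one value per epoch.
history_history = {
    'loss': [0.51, 0.30, 0.22, 0.18, 0.15],       # hypothetical values
    'accuracy': [0.85, 0.91, 0.94, 0.95, 0.96],   # hypothetical values
}
final_loss = history_history['loss'][-1]
print('Final training loss: {:.3f}'.format(final_loss))
```

In the real notebook you would use `history.history['loss']` directly, e.g. to plot the training curve.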
With our model trained, we can check out its predictions.
```
for image_batch, label_batch in training_batches.take(1):
ps = model.predict(image_batch)
first_image = image_batch.numpy().squeeze()[0]
fig, (ax1, ax2) = plt.subplots(figsize=(6,9), ncols=2)
ax1.imshow(first_image, cmap = plt.cm.binary)
ax1.axis('off')
ax2.barh(np.arange(10), ps[0])
ax2.set_aspect(0.1)
ax2.set_yticks(np.arange(10))
ax2.set_yticklabels(np.arange(10))
ax2.set_title('Class Probability')
ax2.set_xlim(0, 1.1)
plt.tight_layout()
```
WOW!! Now our network is brilliant. It can accurately predict the digits in our images. Let's take a look again at the loss and accuracy values for a single batch of images.
```
for image_batch, label_batch in training_batches.take(1):
loss, accuracy = model.evaluate(image_batch, label_batch)
print('\nLoss after training: {:,.3f}'.format(loss))
print('Accuracy after training: {:.3%}'.format(accuracy))
```
> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units, then a hidden layer with 64 units, then a hidden layer with 32 units and finally an output layer with 10 units. Use a ReLu activation function for all the hidden layers and a softmax activation function for the output layer. Then compile the model using an `adam` optimizer, a `sparse_categorical_crossentropy` loss function, and the `accuracy` metric. Finally, print the loss and accuracy of your un-trained model for a single batch of images.
```
## Solution
print('\nLoss before training: {:,.3f}'.format(loss))
print('Accuracy before training: {:.3%}'.format(accuracy))
```
> **Exercise:** Train the model you created above for 5 epochs and then print the loss and accuracy of your trained model for a single batch of images.
```
## Solution
print('\nLoss after training: {:,.3f}'.format(loss))
print('Accuracy after training: {:.3%}'.format(accuracy))
```
> **Exercise:** Plot the prediction of the model you created and trained above on a single image from the training set. Also plot the probability predicted by your model for each digit.
```
## Solution
```
## Automatic Differentiation
Let's now take a minute to see how TensorFlow calculates and keeps track of the gradients needed for backpropagation. TensorFlow provides a class that records automatic differentiation operations, called `tf.GradientTape`. Automatic differentiation, also known as algorithmic differentiation or simply “autodiff”, is a family of techniques used by computers for efficiently and accurately evaluating derivatives of numeric functions.
`tf.GradientTape` works by keeping track of operations performed on tensors that are being "watched". By default `tf.GradientTape` will automatically "watch" any trainable variables, such as the weights in our model. Trainable variables are those that have `trainable=True`. When we create a model with `tf.keras`, all of the parameters are initialized with `trainable = True`. Any tensor can also be manually "watched" by invoking the watch method.
Let's see a simple example. Let's take the following equation:
$$
y = x^2
$$
The derivative of `y` with respect to `x` is given by:
$$
\frac{d y}{d x} = 2x
$$
Now, let's use `tf.GradientTape` to calculate the derivative of a tensor `y` with respect to a tensor `x`:
```
# Set the random seed so things are reproducible
tf.random.set_seed(7)
# Create a random tensor
x = tf.random.normal((2,2))
# Calculate gradient
with tf.GradientTape() as g:
g.watch(x)
y = x ** 2
dy_dx = g.gradient(y, x)
# Calculate the actual gradient of y = x^2
true_grad = 2 * x
# Print the gradient calculated by tf.GradientTape
print('Gradient calculated by tf.GradientTape:\n', dy_dx)
# Print the actual gradient of y = x^2
print('\nTrue Gradient:\n', true_grad)
# Print the maximum difference between true and calculated gradient
print('\nMaximum Difference:', np.abs(true_grad - dy_dx).max())
```
The `tf.GradientTape` class keeps track of these operations and knows how to calculate the gradient for each one. In this way, it's able to calculate the gradients for a chain of operations, with respect to any one tensor.
To know more about `tf.GradientTape` and trainable variables check the following links
* [Gradient Tape](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/GradientTape)
* [TensorFlow Variables](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/Variable)
Next up you'll write the code for training a neural network on a more complex dataset.
| github_jupyter |
# Overview
This notebook is mainly for testing the stability of the $T^2$ statistic.
1. What happens when the test locations/frequencies are the same?
2. What is the effect on the test if the test locations are redundant? Will the test statistic blow up?
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import freqopttest.util as util
import freqopttest.data as data
import freqopttest.kernel as kernel
import freqopttest.tst as tst
import freqopttest.glo as glo
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
import sys
# sample source
m = 800
dim = 2
n = m
ss = data.SSGaussMeanDiff(dim, my=0.5)
#ss = data.SSBlobs()
tst_data = ss.sample(m, seed=2)
tr, te = tst_data.split_tr_te(tr_proportion=0.5, seed=10)
# plot test data
xte, yte = te.xy()
plt.plot(xte[:, 0], xte[:, 1], 'xr', label='X te')
plt.plot(yte[:, 0], yte[:, 1], 'xb', label='Y te')
plt.legend(loc='best')
plt.title('Test set')
print(te)
```
## mean embedding test. J=2 locations
```
# test locations
T = np.array([[0, 0], [1, 0]])
gwidth = 1.0
alpha = 0.01
met = tst.MeanEmbeddingTest(T, gwidth, alpha)
met.perform_test(te)
t1 = np.array([0, 0])
t2x_list = np.linspace(-7, 7, 200)
# add an x very close to 0
t2x_list = np.append(t2x_list, [1e-9])
t2x_list.sort()
stats = np.zeros(len(t2x_list))
for i, t2x in enumerate(t2x_list):
t2 = np.array([t2x, 0])
T = np.vstack((t1, t2))
met_i = tst.MeanEmbeddingTest(T, gwidth, alpha)
test_i = met_i.perform_test(te)
stats[i] = test_i['test_stat']
# plot location shift vs. test stat
plt.plot(t2x_list, stats)
plt.title('t1 = %s, t2 = [x, 0]'%(str(t1)) )
plt.xlabel('x in $1^{st}$ dim. of t2')
plt.ylabel('Test statistic')
```
This showed that if both test locations are the same at [0, 0], the covariance matrix is singular and the test statistic cannot be computed. If $t_1 = [0, 0]$ and $t_2 = [x, 0]$ where $x$ approaches 0, the test statistic drops significantly, as shown.
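A tiny self-contained illustration of this singularity (a hypothetical Gaussian feature map, not the internals of `freqopttest`): as the second test location approaches the first, the two feature columns become nearly identical and the feature covariance becomes ill-conditioned.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=500)  # toy 1-d sample

def gauss_feat(X, t, width=1.0):
    # Gaussian-kernel feature evaluated at test location t
    return np.exp(-(X - t) ** 2 / (2 * width ** 2))

def cond_of_cov(t2):
    # condition number of the covariance of features at t1=0 and t2
    Z = np.column_stack([gauss_feat(X, 0.0), gauss_feat(X, t2)])
    return np.linalg.cond(np.cov(Z.T))

for t2 in [2.0, 0.5, 1e-3]:
    # the closer t2 is to t1=0, the worse conditioned the covariance
    print('t2 = %g  cond = %.3g' % (t2, cond_of_cov(t2)))
```

Inverting such an ill-conditioned covariance is what makes the statistic numerically unstable near coinciding locations.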
## mean embedding test. J=3 locations
```
t1 = np.array([0, 0])
t3 = np.array([1, 0])
t2x_list = np.linspace(-7, 8, 200)
# add an x very close to 0
t2x_list = np.append(t2x_list, [1e-12, 1+1e-9])
t2x_list.sort()
stats = np.zeros(len(t2x_list))
for i, t2x in enumerate(t2x_list):
t2 = np.array([t2x, 0])
T = np.vstack((t1, t2, t3))
met_i = tst.MeanEmbeddingTest(T, gwidth, alpha)
test_i = met_i.perform_test(te)
stats[i] = test_i['test_stat']
# plot location shift vs. test stat
plt.plot(t2x_list, stats)
plt.title('t1 = %s, t2 = [x, 0], t3 = %s'%(str(t1), str(t3)) )
plt.xlabel('x in $1^{st}$ dim. of t2')
plt.ylabel('Test statistic')
```
Same story as in the previous case of $J=2$. That is, there is a singularity at each point where two test locations are the same.
| github_jupyter |
# Zeisel GOrilla Analysis: SGBM vs RF
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from random import shuffle
```
## Load Zeisel data
```
zeisel_ex_path = '/media/tmo/data/work/datasets/zeisel/expression_sara_filtered.txt'
zeisel_tf_path = '/media/tmo/data/work/datasets/TF/mm9_TFs.txt'
zeisel_df = pd.read_csv(zeisel_ex_path, sep='\t')
zeisel_genes = list(zeisel_df['Unnamed: 0'])
shuffle(zeisel_genes)
len(zeisel_genes)
```
# Create GOrilla lists of top regulated genes + background genes
```
GOrilla_GENES = ['Olig1', 'Rel', 'Tspan2', 'Neurod2', 'Lef1', 'Gli3', 'Dlx1']
GOrilla_GENES_2 = ['Ywhae', 'Myrf', 'Diablo', 'Kcnip1']
def gorilla_list(df, TF, gene_names):
targets = list(df[df['TF'] == TF].sort_values(by='importance', ascending=0)['target'])
background = [gene for gene in gene_names if gene not in targets]
return targets + background
```
#### SGBM GOrilla lists
```
for go_TF in GOrilla_GENES:
go_list = gorilla_list(net_sgbm_df, go_TF, zeisel_genes)
pd.DataFrame(go_list).to_csv('GOrilla/SGBM/' + go_TF + '_sgbm_list.txt', index=False, header=False)
for go_TF in GOrilla_GENES_2:
go_list = gorilla_list(net_sgbm_df, go_TF, zeisel_genes)
pd.DataFrame(go_list).to_csv('GOrilla/SGBM/' + go_TF + '_sgbm_list.txt', index=False, header=False)
GOrilla_GENES_3 = ['Fubp1', 'Trim28', 'Ruvbl1', 'Zfp207', 'Crebzf', 'Gtf2b', 'Msra', 'Ubxn1', 'Prkrir', 'Otud4', 'Klf8']
for go_TF in GOrilla_GENES_3:
go_list = gorilla_list(net_sgbm_df, go_TF, zeisel_genes)
pd.DataFrame(go_list).to_csv('GOrilla/SGBM/' + go_TF + '_sgbm_list.txt', index=False, header=False)
```
#### RF GOrilla lists
```
for go_TF in GOrilla_GENES:
go_list = gorilla_list(net_rf_df, go_TF, zeisel_genes)
pd.DataFrame(go_list).to_csv('GOrilla/RF/' + go_TF + '_rf_list.txt', index=False, header=False)
for go_TF in GOrilla_GENES_2:
go_list = gorilla_list(net_rf_df, go_TF, zeisel_genes)
pd.DataFrame(go_list).to_csv('GOrilla/RF/' + go_TF + '_rf_list.txt', index=False, header=False)
```
# GO enrichment comparison
## Olig1
```
pd.read_csv('GOrilla/SGBM/GOrilla/GOrilla_sgbm_Olig1.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
pd.read_csv('GOrilla/RF/GOrilla/GOrilla_rf_Olig1.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
```
## Neurod2
```
pd.read_csv('GOrilla/SGBM/GOrilla/GOrilla_sgbm_Neurod2.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
pd.read_csv('GOrilla/RF/GOrilla/GOrilla_rf_Neurod2.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
```
## Gli3
```
pd.read_csv('GOrilla/SGBM/GOrilla/GOrilla_sgbm_Gli3.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
pd.read_csv('GOrilla/RF/GOrilla/GOrilla_rf_Gli3.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
```
## Lef1
```
pd.read_csv('GOrilla/SGBM/GOrilla/GOrilla_sgbm_Lef1.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
pd.read_csv('GOrilla/RF/GOrilla/GOrilla_rf_Lef1.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
```
## Rel
```
pd.read_csv('GOrilla/SGBM/GOrilla/GOrilla_sgbm_Rel.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
pd.read_csv('GOrilla/RF/GOrilla/GOrilla_rf_Rel.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
```
## Tspan2
```
pd.read_csv('GOrilla/SGBM/GOrilla/GOrilla_sgbm_Tspan2.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
pd.read_csv('GOrilla/RF/GOrilla/GOrilla_rf_Tspan2.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
```
## Dlx1
```
pd.read_csv('GOrilla/SGBM/GOrilla/GOrilla_sgbm_Dlx1.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
pd.read_csv('GOrilla/RF/GOrilla/GOrilla_rf_Dlx1.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
```
## Myrf
```
pd.read_csv('GOrilla/SGBM/GOrilla/GOrilla_sgbm_Myrf.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
pd.read_csv('GOrilla/RF/GOrilla/GOrilla_rf_Myrf.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
```
## Kcnip1
```
pd.read_csv('GOrilla/SGBM/GOrilla/GOrilla_sgbm_Kcnip1.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
pd.read_csv('GOrilla/RF/GOrilla/GOrilla_rf_Kcnip1.xls', sep='\t')[['GO Term', 'Description', 'P-value']].head()
```
---
# Differences in recovered TF+regulator sets in top 100K regulatory links
* SGBM typically has lower P-values in the GO enrichment lists above
* On the other hand, SGBM recovers many more TF+regulator sets in the top 100K regulatory links
* There is a trade-off: recovering more good targets for fewer TFs, or recovering more TFs with a non-trivial number of targets
```
net_sgbm_df = pd.read_csv('zeisel_sgbm_100k.txt', sep='\t')
sgbm_tf_counts_df = pd.DataFrame(net_sgbm_df.TF.value_counts())
sgbm_tf_counts_df.reset_index(inplace=True)
sgbm_tf_counts_df.columns=['TF', 'count']
sgbm_TF_list = list(net_sgbm_df.TF.unique())
rf_TF_list = list(net_rf_df.TF.unique())
sgbm_minus_rf_TFs = [tf for tf in sgbm_TF_list if not tf in rf_TF_list]
sgbm_minus_rf_TF_df = pd.DataFrame(sgbm_minus_rf_TFs)
sgbm_minus_rf_TF_df.columns = ['TF']
rf_minus_sgbm_TFs = [tf for tf in rf_TF_list if not tf in sgbm_TF_list]
rf_minus_sgbm_TF_df = pd.DataFrame(rf_minus_sgbm_TFs)
rf_minus_sgbm_TF_df.columns = ['TF']
print('TF count: ' + str(len(rf_minus_sgbm_TF_df)))
rf_minus_sgbm_TF_df.merge(rf_tf_counts_df, on=['TF']).sort_values(by='count', ascending=0)
print('TF count: ' + str(len(sgbm_minus_rf_TF_df)))
sgbm_minus_rf_TF_df.merge(sgbm_tf_counts_df, on=['TF']).sort_values(by='count', ascending=0)
```
---
## RF
```
net_rf_df = pd.read_csv('zeisel_rf_100k.txt', sep='\t')
rf_tf_counts_df = pd.DataFrame(net_rf_df.TF.value_counts(), index=None)
rf_tf_counts_df.reset_index(inplace=True)
rf_tf_counts_df.columns=['TF', 'count']
rf_tf_counts_df.head()
rf_tf_counts_df.plot()
plt.show()
merged = sgbm_tf_counts_df.merge(rf_tf_counts_df, on=['TF'], how='outer')
merged.fillna(-200, inplace=True)
merged[merged.TF == 'Fubp1']
merged.head()
len(merged)
```
## Plotting the nr of targets per TF (x=SGBM, y=RF)
```
merged.plot.scatter(x='count_x', y='count_y', figsize=(8, 10))
plt.show()
merged[merged.count_x > merged.count_y]
```
| github_jupyter |
# pixStem tutorial
This notebook shows how to use the `pixstem` library to analyse pixelated scanning transmission electron microscopy (STEM) data, and differential phase contrast (DPC) data.
More documentation is found at http://pixstem.org
Note: for this notebook to work in JupyterLab, run `conda install pixstem hyperspy-gui-traitsui -c conda-forge` in the Anaconda prompt.
## Importing libraries
The first step is setting the plotting toolkit
```
%matplotlib qt5
```
Then import the library itself.
You might get a "WARNING:hyperspy_gui_ipywidgets" message; this can be ignored.
```
import pixstem.api as ps
```
## Working with fast pixelated detector STEM data
### Loading data
```
# Basic usage: s = ps.load_ps_signal("data.hspy")
```
For large files, use `lazy=True`: `s = ps.load_ps_signal("data.hspy", lazy=True)`. As the file we'll be looking at is (uncompressed) 8.6 GB, we use `lazy=True`.
```
s = ps.load_ps_signal("datasets/cross_grating_medipix3.hdf5", lazy=True)
```
This returns a `PixelatedSTEM` class, which inherits from HyperSpy's `Signal2D`, so all methods that work on `Signal2D` also work here:
```
s
s.plot()
```
### Virtual detectors
To make the processing go a little bit quicker, we reduce the data down from 256 x 256 probe positions, to 128 x 128.
```
s1 = s.inav[0:128, 0:128]
```
The `virtual_annular_dark_field` method is used to construct an image from the `PixelatedSTEM` class, with the input being `(x, y, r_inner, r_outer)`
```
s_adf = s1.virtual_annular_dark_field(128, 128, 60, 100)
s_adf.plot()
```
This signal can now be used to navigate the `s1` signal
```
s1.plot(navigator=s_adf)
```
There is also a virtual bright field method. Passing no parameters to the method gives a sum of the diffraction dimensions:
```
s_bf = s1.virtual_bright_field()
s_bf.plot()
```
A mask can be applied in the form of (x, y, r):
```
s_bf = s1.virtual_bright_field(128, 128, 40)
s_bf.plot()
```
### Radial integration
A common task is getting the intensity as a function of scattering angle. This is done using radial integration, which first requires finding the center of the electron beam. Here we use the `center_of_mass` method.
```
s_com = s1.center_of_mass(threshold=1.)
```
This returns a `DPCSignal2D` class, which will be explored more later. For now, what we need to know is that it is basically a HyperSpy `Signal2D` class, where the x-shifts of the beam are in the first navigation index (`s.inav[0]`), while the y-shifts are in the second navigation index (`s.inav[1]`).
```
s_com.plot()
```
To do the radial integration itself, use the `radial_integration` method, which requires the `centre_x` and `centre_y` arguments to be specified.
```
s1_radial = s1.radial_integration(centre_x=s_com.inav[0].data, centre_y=s_com.inav[1].data)
```
This returns a new signal, where the signal dimensions have been reduced from two to one. This is especially useful when working with large datasets, where this operation can drastically reduce the data size, making it possible to load the full data into memory.
```
s1_radial
```
Plotting it shows the electron scattering for each probe position:
```
s1_radial.plot()
```
To instead visualize the data as a function of scattering angle (essentially a virtual annular dark field), we can transpose the data using `s_radial.T`. This "flips" the signal and navigation axes:
```
s1_radial.T.plot()
```
## Differential phase contrast (DPC) signals
These signal classes are used for beam shift datasets, where x-shifts are stored in the first navigation index (`s_dpc.inav[0]`) and the y-shifts in the second navigation index (`s_dpc.inav[1]`).
They contain many different methods for both processing and visualizing DPC data.
Here, we again use `ps.dummy_data` to get a signal to work with.
These types of signals can be loaded using `s = ps.load_dpc_signal`.
```
s_dpc = ps.dummy_data.get_square_dpc_signal(add_ramp=True)
s_dpc.plot()
```
### Correcting d-scan (ramp)
The `s_dpc` signal contains a large d-scan ramp; to correct it, use the `correct_ramp` method. This method is fairly basic, and can only subtract a linear ramp.
```
s_dpc = s_dpc.correct_ramp(corner_size=0.05)
s_dpc.plot()
```
### Plotting methods
The class also has several methods for visualizing DPC data: `get_color_signal`, `get_magnitude_signal` and `get_color_image_with_indicator`.
The first two return a HyperSpy signal, while the latter interfaces directly with the matplotlib backend, making it more customizable.
```
s_color = s_dpc.get_color_signal()
s_color.plot()
```
The `get_color_signal` method has a `rotation` argument, which is used to correct for mismatch between the scan direction and diffraction rotation.
```
s_color_rot = s_dpc.get_color_signal(rotation=45)
s_color_rot.plot()
```
`get_magnitude_signal` gives the magnitude of the beam shift vector
```
s_magnitude = s_dpc.get_magnitude_signal()
s_magnitude.plot()
```
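Under the hood, the magnitude is the per-pixel Euclidean norm of the (x, y) beam-shift vector. A plain-NumPy equivalent (the small arrays here are purely illustrative):

```python
import numpy as np

shift_x = np.array([[3.0, 0.0], [0.0, -3.0]])  # hypothetical x-shift map
shift_y = np.array([[4.0, 0.0], [0.0, 4.0]])   # hypothetical y-shift map
magnitude = np.hypot(shift_x, shift_y)         # elementwise sqrt(x**2 + y**2)
```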
The `get_color_image_with_indicator` method offers a large degree of customization, which is useful when making images for presentations, posters or articles.
By default it returns a matplotlib figure object, which can be saved directly:
```
fig = s_dpc.get_color_image_with_indicator()
fig.savefig("dpc_image.jpg")
```
It also accepts a matplotlib subplot object as an argument, which makes it easy to integrate into larger figures.
An example using more of the customization options:
```
import matplotlib.pyplot as plt
fig, axarr = plt.subplots(1, 2, figsize=(8, 4))
ax_dpc = axarr[0]
ax_dif = axarr[1]
ax_dif.imshow(s.inav[0, 0].data)
s_dpc.get_color_image_with_indicator(indicator_rotation=90, scalebar_size=30, ax=ax_dpc)
fig.savefig("dpc_figure.jpg")
```
## Various rotations
Rotating the scan dimensions:
```
s_dpc_rot = s_dpc.rotate_data(20)
s_dpc_rot.get_color_signal().plot()
```
Rotating the beam shifts to correct for mismatch between the scan direction and the diffraction rotation:
```
s_dpc_rot = s_dpc.rotate_beam_shifts(25)
s_dpc_rot.get_color_signal().plot()
```
## Blurring the beam shifts
```
s_dpc_blur = s_dpc.gaussian_blur(1.2)
s_dpc_blur.get_color_signal().plot()
```
## Bivariate histogram
```
s_hist = s_dpc.get_bivariate_histogram()
s_hist.plot(cmap='viridis')
```
| github_jupyter |
> This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python.
# 1.4. Creating an IPython extension with custom magic commands
1. Let's import a few functions from the IPython magic system.
```
from IPython.core.magic import (register_line_magic,
register_cell_magic)
```
2. Defining a new line magic is particularly simple. First, let's create a function that accepts the contents of the line (except the initial `%`-prefixed magic command). The name of this function is the name of the magic. Then, let's decorate this function with `@register_line_magic`. We're done!
```
@register_line_magic
def hello(line):
if line == 'french':
print("Salut tout le monde!")
else:
print("Hello world!")
%hello
%hello french
```
3. Let's create a slightly more useful cell magic `%%csv` that parses a CSV string and returns a Pandas DataFrame object. This time, the function takes as argument the first line (what follows `%%csv`), and the contents of the cell (everything in the cell except the first line).
```
import pandas as pd
#from StringIO import StringIO # Python 2
from io import StringIO # Python 3
@register_cell_magic
def csv(line, cell):
# We create a string buffer containing the
# contents of the cell.
sio = StringIO(cell)
# We use Pandas' read_csv function to parse
# the CSV string.
return pd.read_csv(sio)
%%csv
col1,col2,col3
0,1,2
3,4,5
7,8,9
```
We can access the returned object with `_`.
```
df = _
df.describe()
```
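The parsing the `%%csv` magic performs is ordinary pandas, so the same call works in a plain Python session as well (the `cell` string below mirrors the cell contents above):

```python
import pandas as pd
from io import StringIO

cell = "col1,col2,col3\n0,1,2\n3,4,5\n7,8,9"  # what the cell body contains
df = pd.read_csv(StringIO(cell))              # identical to what the magic does
```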
4. The method we described is useful in an interactive session. If you want to use the same magic in multiple notebooks, or if you want to distribute it, you need to create an **IPython extension** that implements your custom magic command. Let's show how to do that. The first step is to create a Python script (`csvmagic.py` here) that implements the magic.
```
%%writefile csvmagic.py
import pandas as pd
#from StringIO import StringIO # Python 2
from io import StringIO # Python 3
def csv(line, cell):
sio = StringIO(cell)
return pd.read_csv(sio)
def load_ipython_extension(ipython):
"""This function is called when the extension is loaded.
It accepts an IPython InteractiveShell instance.
We can register the magic with the `register_magic_function`
method of the shell instance."""
ipython.register_magic_function(csv, 'cell')
```
5. Once the extension is created, we need to import it into the IPython session. The `%load_ext` magic command takes the name of a Python module, imports it, and immediately calls `load_ipython_extension`. Here, loading this extension automatically registers our magic function `%%csv`. The Python module needs to be importable: here it is in the current directory, but in other situations it has to be on the Python path. It can also be stored in `~/.ipython/extensions`, which is automatically put on the Python path.
```
%load_ext csvmagic
%%csv
col1,col2,col3
0,1,2
3,4,5
7,8,9
```
Finally, to ensure that this magic is automatically defined in our IPython profile, we can instruct IPython to load this extension at startup. To do this, let's open the file `~/.ipython/profile_default/ipython_config.py` and let's put `'csvmagic'` in the `c.InteractiveShellApp.extensions` list. The `csvmagic` module needs to be importable. It is common to create a *Python package* implementing an IPython extension, which itself defines custom magic commands.
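A minimal sketch of the corresponding configuration file (the path and the `get_config` idiom are the standard IPython defaults, assumed here):

```python
# ~/.ipython/profile_default/ipython_config.py
c = get_config()  # `get_config` is injected by IPython when this file runs
c.InteractiveShellApp.extensions = ['csvmagic']
```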
> You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
> [IPython Cookbook](http://ipython-books.github.io/), by [Cyrille Rossant](http://cyrille.rossant.net), Packt Publishing, 2014 (500 pages).
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import torch
from torch.autograd import Variable
import torch.nn as nn
X = np.arange(-5,5,0.01)
y = 4*X**3 + 2**X + 3
print(X.shape, y.shape)
plt.plot(X,y)
```
### Creating variable from numpy arrays
Similar to TensorFlow placeholders.
```
batch = Variable(torch.from_numpy(X[:4, np.newaxis]))
batch
```
### Concatenating along an axis
```
torch.cat((batch, batch), 1)
```
### Convert tensors to type float for easy usage in nn layers
```
batch = Variable(torch.from_numpy(X[:4, np.newaxis])).float() ## Converting to float is important for GPU
batch
```
## Matrix multiplication as a layer operation
```
nn.Linear(1,3)(batch)
```
## Do the same for target
```
target = Variable(torch.from_numpy(y[:4, np.newaxis])).float()
target
```
### Broadcasting surrogate
```
hidden = Variable(torch.zeros(1,3))
hidden
h = nn.Linear(3,3)(hidden)
h
x = nn.Linear(1,3)(batch)
x
try:
x + h ## Will give error
except RuntimeError as e:
print(e)
h.expand_as(x) ## This makes h same size as x and compatible for addition
x + h.expand_as(x) ## Finally
```
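In NumPy (and in current PyTorch) this addition would broadcast automatically; `expand_as` just makes the same row replication explicit. A NumPy sketch of the equivalent broadcast:

```python
import numpy as np

x = np.zeros((4, 3))  # a batch of 4 rows
h = np.ones((1, 3))   # a single hidden row
out = x + h           # h's first axis (size 1) is stretched to 4 rows
```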
### Getting the size of the tensor of a variable
```
x.size()
x.size(0)
x.size()[0], x.size()[1],
isinstance(x.size(), tuple)
x
```
## Get data-tensor inside a variable
```
x.data
try:
x.numpy()
except AttributeError as e:
print("Numpy conversion happens only on tensors and not variables.")
print(e)
x.data.numpy() ## succeeds
```
## Simple linear regression
```
np.random.seed(1337)
X = np.random.randn(1000,1)*4
W = np.array([0.5,])
bias = -1.68
y_true = np.dot(X, W) + bias
y = y_true + np.random.randn(X.shape[0])
plt.scatter(X, y, s=1, label="data")
plt.scatter(X, y_true, s=1, color='r', label="true")
plt.legend()
def get_variable_from_np(X):
return Variable(torch.from_numpy(X)).float()
class LinearRegression(nn.Module):
def __init__(self, input_size, output_size):
super(LinearRegression, self).__init__()
self.x2o = nn.Linear(input_size, output_size)
def forward(self, X):
return self.x2o(X)
batch_size = 10
batch = get_variable_from_np(X[:batch_size])
batch
model = LinearRegression(1, 1)
y_pred = model.forward(batch)
y_pred
batch = get_variable_from_np(X[:])
y_pred = model.forward(batch)
y_pred_np = y_pred.squeeze().data.numpy()
plt.scatter(X, y, s=1, label="data")
plt.scatter(X, y_true, s=1, color='r', label="true")
plt.scatter(X, y_pred_np, s=1, color='k', alpha=0.5, label="fit")
plt.legend()
```
### Define loss criterion and optimizer
```
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
losses = []
```
## Train the model
```
batch_size = 10
epochs = 100
print_every = 10
for i in range(epochs):
loss = 0
optimizer.zero_grad() # Important for each epoch
idx = np.random.randint(X.shape[0], size=batch_size)
batch = get_variable_from_np(X[idx])
target = get_variable_from_np(y[idx])
output = model.forward(batch)
loss += criterion(output, target)
loss.backward() # Calculate the gradients
optimizer.step() # Updates the parameters of the model
if (i+1) % print_every == 0:
print("Loss at epoch [%s]: %.3f" % (i, loss.data[0]))
losses.append(loss.data[0])
plt.plot(losses, '-or')
plt.xlabel("Epoch")
plt.ylabel("Loss")
batch = get_variable_from_np(X[:])
y_pred = model.forward(batch)
y_pred_np = y_pred.squeeze().data.numpy()
plt.scatter(X, y, s=1, label="data")
plt.scatter(X, y_true, s=1, color='r', label="true")
plt.scatter(X, y_pred_np, s=1, color='k', alpha=0.5, label="fit")
plt.legend()
list(model.x2o.parameters())
model.x2o.weight
model.x2o.bias
print("Model W: %.3f, True W: %.3f" % (model.x2o.weight.data.numpy(), W))
print("Model bias: %.3f, True bias: %.3f" % (model.x2o.bias.data.numpy(), bias))
```
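As a sanity check on the fitted weights, the same one-dimensional least-squares problem has a closed-form solution. A plain-NumPy sketch regenerating the notebook's data (same seed and generating parameters):

```python
import numpy as np

np.random.seed(1337)
X = np.random.randn(1000, 1) * 4
y = np.dot(X, np.array([0.5])) - 1.68 + np.random.randn(1000)

A = np.hstack([X, np.ones((1000, 1))])         # design matrix with a bias column
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)  # w ~ 0.5, b ~ -1.68 (up to noise)
```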
## Running on CUDA
```
batch = Variable(torch.randn(10,3))
target = Variable(torch.randn(10,1))
no_gpu_model = LinearRegression(input_size=3, output_size=1)
no_gpu_model.forward(batch).size()
torch.cuda.is_available()
if torch.cuda.is_available():
gpu_model = no_gpu_model.cuda()
try:
print(gpu_model.forward(batch))
except TypeError as e:
print(e)
```
I have opened an issue related to the above error at: https://github.com/pytorch/pytorch/issues/584
```
if torch.cuda.is_available():
gpu_model = no_gpu_model.cuda()
try:
print(gpu_model.forward(batch.cuda()).size())
except TypeError as e:
print(e)
```
A model supporting GPU-to-CPU fallback:
```
class LinearRegression(nn.Module):
def __init__(self, input_size, output_size):
super(LinearRegression, self).__init__()
self.x2o = nn.Linear(input_size, output_size)
def forward(self, X):
if next(self.x2o.parameters()).is_cuda:
if not X.is_cuda:
X = X.cuda()
return self.x2o(X)
batch = Variable(torch.randn(10,3))
target = Variable(torch.randn(10,1))
no_gpu_model = LinearRegression(input_size=3, output_size=1)
print("No GPU model: ", no_gpu_model.forward(batch).size())
if torch.cuda.is_available():
gpu_model = no_gpu_model.cuda()
try:
print("GPU model: ", gpu_model.forward(batch.cuda()).size())
except TypeError as e:
print(e)
```
# PySHAC on tougher problems
The earlier example was a basic one with many easy solutions; although the search space was quite large, the engine did well in a short amount of time.
However, a linear problem like that can be solved extremely easily using simple linear programming solvers, or even stochastic gradient descent techniques given initial values of `x` and `y`.
Now, let's focus on a problem whose search space is limited, but where it is not easy for optimization algorithms to find the correct answer!
```
import os
import time
import numpy as np
from collections import OrderedDict
import pyshac
# set the random seed
np.random.seed(0)
from IPython.display import Image
from IPython.core.display import HTML
Image(url= "https://www.sfu.ca/~ssurjano/branin.png")
```
# The Branin function
The Branin search space is based on the equation below, where regular optimization algorithms might struggle.
This function is usually evaluated on the square x1 ∈ [-5, 10], x2 ∈ [0, 15].
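For reference, the Branin function shown in the image below is commonly written as

$$f(x_1, x_2) = a\left(x_2 - b x_1^2 + c x_1 - r\right)^2 + s(1 - t)\cos(x_1) + s,$$

with $a = 1$, $b = 5.1 / (4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$ and $t = 1/(8\pi)$ — the same constants used in the evaluation function further down.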
```
Image(url="https://www.sfu.ca/~ssurjano/branin2.png")
```
This function has three global minima, at the following values:
```
Image(url="https://www.sfu.ca/~ssurjano/branin3.png")
```
# The Branin evaluation function
Below, let's define the `Branin` evaluation function:
```
def evaluation_branin(worker_id, params):
""" Code ported from https://www.sfu.ca/~ssurjano/Code/braninm.html
Global Minimum = -0.397887
"""
xx = list(params.values())
x1, x2 = xx[0], xx[1]
a = 1.0
b = 5.1 / (4 * (np.pi ** 2))
c = 5.0 / np.pi
r = 6.0
s = 10.0
t = 1.0 / (8.0 * np.pi)
term1 = a * ((x2 - b * (x1 ** 2) + c * x1 - r) ** 2)
term2 = s * (1.0 - t) * np.cos(x1)
out = term1 + term2 + s
return out
# Lets test that the implementation is correct.
# Optimal parameter 1
x = [-np.pi, 12.275]
params = OrderedDict()
for i, xx in enumerate(x):
params['h%d' % i] = xx
loss = evaluation_branin(0, params)
assert np.allclose(loss, 0.397887)
# Optimal parameter 2
x = [np.pi, 2.275]
params = OrderedDict()
for i, xx in enumerate(x):
params['h%d' % i] = xx
loss = evaluation_branin(0, params)
assert np.allclose(loss, 0.397887)
# Optimal parameter 3
x = [9.42478, 2.475]
params = OrderedDict()
for i, xx in enumerate(x):
params['h%d' % i] = xx
loss = evaluation_branin(0, params)
assert np.allclose(loss, 0.397887)
```
# Setting up the search space
```
def get_branin_hyperparameter_list():
h1 = pyshac.UniformContinuousHyperParameter('h1', -5.0, 10.0)
h2 = pyshac.UniformContinuousHyperParameter('h2', 0.0, 15.0)
return [h1, h2]
```
# Setting up the PySHAC Engine
Branin is a harder problem than before, so let's allocate a larger budget and a larger number of samples in each batch.
```
total_budget = 200
num_batches = 20
objective = 'min'
params = get_branin_hyperparameter_list()
shac = pyshac.SHAC(params, total_budget=total_budget,
num_batches=num_batches, objective=objective)
```
# Train the engine
```
shac.fit(evaluation_branin, skip_cv_checks=True)
```
# Let's evaluate our engine
As we saw before, there are three parameter settings that attain the global minimum.
```
Image(url="https://www.sfu.ca/~ssurjano/branin3.png")
shac.restore_data()
print("Evaluating after training")
predictions = shac.predict(5, max_classfiers=16)
pred_evals = [evaluation_branin(0, pred) for pred in predictions]
pred_mean = np.mean(pred_evals)
print()
print("Predicted mean : ", pred_mean)
```
```
import spacy
from spacy.matcher import DependencyMatcher
from pathlib import Path
nlp = spacy.load("en_core_web_sm")
def doc_dep_graph(doc):
''' Return the dependency graph with entity labels present (see 'tag' and 'label')
'''
words = []
arcs = []
for tok in doc:
if tok.ent_type == 0:
tag = tok.pos_
else:
tag = "_::"+tok.ent_type_+" ("+tok.pos_+")::_"
words.append({
"text": tok.text,
"tag": tag
})
if tok.dep_ in {'punct'}:
continue
if tok.i < tok.head.i:
arcs.append({
"start": tok.i,
"end": tok.head.i,
"label": tok.dep_,
"dir": "left"
})
elif tok.i > tok.head.i:
arcs.append({
"start": tok.head.i,
"end": tok.i,
"label": tok.dep_,
"dir": "right"
})
return {"words": words, "arcs": arcs}
def output_to_svg(filename, dep):
'''Save the dependency graph to SVG '''
svg = spacy.displacy.render(dep, style="dep",
jupyter=False, manual=True)
Path(filename+".svg").open("w", encoding="utf-8").write(svg)
def get_dep_matcher(nlp, patterns, pattern_names=None):
''' Add patterns with pattern_names to the dependency matcher '''
if pattern_names is None:
pattern_names = ["pattern"+str(pi) for pi in range(len(patterns))]
else:
pattern_names = [x for x in pattern_names]
matcher = DependencyMatcher(nlp.vocab)
for pi, pattern in enumerate(patterns):
print(pattern_names[pi], pattern)
matcher.add(pattern_names[pi], None, pattern)
return matcher
def predicate_matching(doc, matcher, source_target_at_pattern_end=True):
''' Match the patterns to a doc, returns dep graph with edges that match '''
words = []
arcs = []
node_inds = {}
for ti,tok in enumerate(doc):
# if tok.ent_type > 0 and tok.text not in node_inds:
words.append({
"text": tok.text,
"tag": tok.ent_type
})
node_inds[tok.text] = len(node_inds)
for match in matcher(doc):
print(match)
for match_inds in match[1]:
print([doc[mi] for mi in match_inds])
start, end = match_inds[-2], match_inds[-1]
if source_target_at_pattern_end:
print("Getting SOURCE+TARGET from root and final node in pattern")
start, end = match_inds[0], match_inds[-1]
else:
print("Getting SOURCE+TARGET from final two nodes in pattern")
if doc[start].text == doc[end].text or \
doc[start].text not in node_inds or \
doc[end].text not in node_inds:
continue
if end > start:
arcs.append({
"start": node_inds[doc[start].text],
"end": node_inds[doc[end].text],
"link": doc[start].text+" -> "+doc[end].text,
"label": '',
"dir": "right"
})
else:
arcs.append({
"start": node_inds[doc[end].text],
"end": node_inds[doc[start].text],
"link": doc[end].text+" -> "+doc[start].text,
"label": '',
"dir": "left"
})
return {"words": words, "arcs": arcs}
text = "The evidence we have all points to a loosely affiliated terrorist organisation known as al Qaeda"
doc = nlp(text)
dep_graph = doc_dep_graph(doc)
spacy.displacy.render(dep_graph, style="dep", jupyter=True, manual=True)
# output_to_svg('./graphs/sentA_dep', dep_graph)
patterns = {}
# patterns.update({"X->Y": [
# {"PATTERN": {
# "ENT_TYPE": {"NOT_IN": [""]}
# }, "SPEC": {
# "NODE_NAME": "START_ENTITY"
# },
# }, {"PATTERN": {
# "POS": {"IN": ["VERB"]},
# }, "SPEC": {
# "NBOR_NAME": "START_ENTITY", "NBOR_RELOP": ">", "NODE_NAME": "known"}
# },
# ]})
patterns.update({"KnownAs": [
{"PATTERN": {
"POS": {"IN": ["NOUN","PROPN"]}
}, "SPEC": {"NODE_NAME": "START_ENTITY"}
}, {"PATTERN": {
"POS": {"IN": ["VERB"]},
}, "SPEC": {"NBOR_NAME": "START_ENTITY", "NBOR_RELOP": ">", "NODE_NAME": "known"}
}, {"PATTERN": {
"POS": {"IN": ["SCONJ"]},
}, "SPEC": {"NBOR_NAME": "known", "NBOR_RELOP": ">", "NODE_NAME": "as"}
}, {"PATTERN": {
"POS": {"IN": ["NOUN","PROPN"]},
}, "SPEC": {"NBOR_NAME": "as", "NBOR_RELOP": ">", "NODE_NAME": "END_ENTITY"}
}
]})
matcher = get_dep_matcher(nlp, patterns.values(), patterns.keys())
### One of the following two lines should find the pattern.
# If source_target_at_pattern_end=True, the edge is drawn between the
# first (root) and final nodes of the pattern (pattern[0]->pattern[-1]).
# If source_target_at_pattern_end=False, the edge is drawn between the
# two nodes at the end of the pattern (pattern[-2]->pattern[-1]).
### This is so the matcher knows which nodes in the pattern to draw the edge between
### and to overcome the requirement that the first node in the defined pattern must
### be the root in the sub-tree.
matched_edges = predicate_matching(doc, matcher, source_target_at_pattern_end=True)
# matched_edges = predicate_matching(doc, matcher, source_target_at_pattern_end=False)
spacy.displacy.render(matched_edges, style="dep", jupyter=True, manual=True)
# output_to_svg('./graphs/sentA_ascope', matched_edges)
matched_edges
```
# Data preparation
You can run this file to re-shuffle the train, validation and test datasets. Then you should repeat the training and testing procedure. If the model's results do not change significantly, it has not overfitted.
```
%load_ext autoreload
%autoreload 2
%pylab inline
from sklearn.model_selection import train_test_split
from skimage import transform, color
from matplotlib import pyplot as plt
import numpy as np
import cv2
import os
import pickle as pickle
from copy import copy
from collections import Counter
import pandas as pd
from itertools import count
image_count = 0
def fold(n_fold):
global image_count
fnames, bboxes = [], []
with open("data/FDDB-folds/FDDB-fold-{n_fold:02d}-ellipseList.txt".format(n_fold=n_fold), "r") as fin:
fin = iter(fin)
try:
while True:
fnames.append(next(fin).strip())
shape = imread("data/originalPics/" + fnames[-1] + ".jpg").shape[:2]
count = int(next(fin))
for i in range(count):
a, b, phi, center_x, center_y, _1 = (float(c) for c in next(fin).split())
t_x = np.arctan2(-b * np.tan(phi), a )
x_diff = np.abs(a * np.cos(t_x) * np.cos(phi) - b * np.sin(t_x) * np.sin(phi))
t_y = np.arctan2(b, a * np.tan(phi))
y_diff = np.abs(b * np.sin(t_y) * np.cos(phi) + a * np.cos(t_y) * np.sin(phi))
bbox = [np.floor(center_y - y_diff), np.floor(center_x - x_diff), np.ceil(center_y + y_diff), np.ceil(center_x + x_diff)]
bbox = [max((int(c), 0)) for c in bbox]
bbox[::2] = (min((c, shape[0])) for c in bbox[::2])
bbox[1::2] = (min((c, shape[1])) for c in bbox[1::2])
bbox = [image_count, *bbox, *shape]
bboxes.append(bbox)
image_count += 1
except StopIteration:
pass
return fnames, bboxes
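# Sanity check (illustrative addition, not part of the original pipeline): the
# t_x/t_y formulas above locate the extrema of the rotated-ellipse parametrization
#   x(t) = a*cos(t)*cos(phi) - b*sin(t)*sin(phi)
#   y(t) = a*cos(t)*sin(phi) + b*sin(t)*cos(phi)
# obtained by setting dx/dt = 0 and dy/dt = 0. A dense sweep agrees:
import numpy as np
a, b, phi = 3.0, 1.5, 0.7  # hypothetical ellipse parameters
t = np.linspace(0, 2 * np.pi, 100000)
x = a * np.cos(t) * np.cos(phi) - b * np.sin(t) * np.sin(phi)
y = a * np.cos(t) * np.sin(phi) + b * np.sin(t) * np.cos(phi)
t_x = np.arctan2(-b * np.tan(phi), a)
x_diff = np.abs(a * np.cos(t_x) * np.cos(phi) - b * np.sin(t_x) * np.sin(phi))
t_y = np.arctan2(b, a * np.tan(phi))
y_diff = np.abs(b * np.sin(t_y) * np.cos(phi) + a * np.cos(t_y) * np.sin(phi))
assert np.isclose(x.max(), x_diff) and np.isclose(y.max(), y_diff)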
fnames, bboxes = [], []
image_count = 0
for n_fold in range(1, 11):
_fnames, _bboxes = fold(n_fold)
fnames.extend(_fnames)
bboxes.extend(_bboxes)
bboxes = np.array(bboxes, dtype=int)
convert_scales = []
for image_index in set(bboxes[:, 0]):
image_bboxes = bboxes[bboxes[:, 0] == image_index]
bbox_sizes = image_bboxes[:, (3, 4)] - image_bboxes[:, (1, 2)]
avg_size = bbox_sizes.mean()
rescale = 32 / avg_size
converted_bbox_sizes = bbox_sizes * rescale
converted_image_size = image_bboxes[0, -2:] * rescale
TR = 8
if (converted_bbox_sizes.min() >= 32 - TR and
converted_bbox_sizes.max() <= 32 + TR and
converted_image_size.min() >= 40 and
converted_image_size.max() <= 176
):
convert_scales.append([image_index, rescale])
len(convert_scales)
convert_bboxes = []
for image_index, rescale in convert_scales:
image = imread("data/originalPics/" + fnames[image_index] + ".jpg")
image = transform.rescale(image, rescale, mode="reflect")
if len(image.shape) == 2: # image is gray
image = color.gray2rgb(image)
converted_image = np.zeros((176, 176, 3))
converted_image[:image.shape[0], :image.shape[1]] = image
imsave("data/convertedPics/" + str(image_index) + ".png", converted_image)
# print(bboxes[bboxes[:, 0] == image_index, 1:], rescale)
convert_bboxes.append(bboxes[bboxes[:, 0] == image_index, 1:] * rescale)
convert_bboxes = np.vstack([np.hstack([np.array([[image_index]]*len(bboxes)), bboxes]).astype(int)
for bboxes, (image_index, rescale) in zip(convert_bboxes, convert_scales)])
image_indeces = sorted(set(convert_bboxes[:, 0]))
trainval_indeces, test_indeces = train_test_split(image_indeces, test_size=0.2)
train_indeces, val_indeces = train_test_split(trainval_indeces, test_size=0.25)
def extract_images(image_indeces, convert_bboxes):
fnames = ["convertedPics/{image_index}.png".format(image_index=image_index) for image_index in image_indeces]
result_bboxes = []
for i, image_index in enumerate(image_indeces):
part_bboxes = convert_bboxes[convert_bboxes[:, 0] == image_index]
part_bboxes[:, 0] = i
result_bboxes.append(part_bboxes)
return fnames, np.vstack(result_bboxes)
train_fnames, train_bboxes = extract_images(sorted(train_indeces), convert_bboxes)
val_fnames, val_bboxes = extract_images(sorted(val_indeces), convert_bboxes)
test_fnames, test_bboxes = extract_images(sorted(test_indeces), convert_bboxes)
original_indeces = sorted(set(bboxes[:, 0]) - set(image_indeces))
original_bboxes = []
original_fnames = []
for image_index in original_indeces:
original_fnames.append("originalPics/" + fnames[image_index] + ".jpg")
original_bboxes.append(bboxes[bboxes[:, 0] == image_index, 1:])
original_bboxes = np.vstack([np.hstack([np.array([[i]]*len(bboxes)), bboxes]).astype(int)
for i, bboxes in enumerate(original_bboxes)])
with open("data/original_fnames.csv", "w") as fout:
for fname in original_fnames:
print(fname, file=fout)
with open("data/original_bboxes.pkl", "wb") as fout:
pickle.dump(original_bboxes.tolist(), fout, protocol=2)
with open("data/train_fnames.csv", "w") as fout:
for fname in train_fnames:
print(fname, file=fout)
with open("data/val_fnames.csv", "w") as fout:
for fname in val_fnames:
print(fname, file=fout)
with open("data/test_fnames.csv", "w") as fout:
for fname in test_fnames:
print(fname, file=fout)
with open("data/train_bboxes.pkl", "wb") as fout:
pickle.dump(train_bboxes.tolist(), fout, protocol=2)
with open("data/val_bboxes.pkl", "wb") as fout:
pickle.dump(val_bboxes.tolist(), fout, protocol=2)
with open("data/test_bboxes.pkl", "wb") as fout:
pickle.dump(test_bboxes.tolist(), fout, protocol=2)
```
# Preferential Bayesian Optimization: Multinomial Predictive Entropy Search
This notebook demonstrates the use of the Multinomial Predictive Entropy Search (MPES) acquisition function on ordinal (preference) data.
```
import numpy as np
import gpflow
import tensorflow as tf
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import sys
import os
import pickle
from gpflow.utilities import set_trainable, print_summary
gpflow.config.set_default_summary_fmt("notebook")
sys.path.append(os.path.split(os.path.split(os.path.split(os.getcwd())[0])[0])[0]) # Move 3 levels up directory to import project files as module
import importlib
PBO = importlib.import_module("Top-k-Ranking-Bayesian-Optimization")
gpu_to_use = 0
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
# Restrict TensorFlow to only use the first GPU
try:
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
tf.config.experimental.set_visible_devices(gpus[gpu_to_use], 'GPU')
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPU")
except RuntimeError as e:
# Visible devices must be set before GPUs have been initialized
print(e)
objective = PBO.objectives.hartmann3d
objective_low = 0
objective_high = 1.
objective_dim = 2 # CHANGE 1: require the objective dim
objective_name = "Hart3"
acquisition_name = "MPES"
experiment_name = acquisition_name + "_" + objective_name
num_runs = 10
num_evals = 50
num_samples = 1000
num_choices = 2
input_dims = 3
objective_dim = input_dims # CHANGE 1: require the objective dim
num_maximizers = 20
num_maximizers_init = 50
num_fourier_features = 1000
num_init_prefs = 12 # CHANGE 2: randomly initialize with some preferences
# CHANGE 1: reduce the value of delta to avoid numerical error
# as k(x,x') = sigma^2 * exp( -[(x-x')/l]^2 )
# which could be very small if l is too small
# so we define l relatively by the range of input (objective_high - objective_low)
# It is ok for the total number of observations to exceed the total number of
# possible inputs: because the observations are noisy, repeated observations at
# the same input pair may be required to improve the confidence
num_discrete_per_dim = 20
delta = (objective_high - objective_low) / num_discrete_per_dim
results_dir = os.getcwd() + '/results/' + experiment_name + '/'
try:
# Create target Directory
os.makedirs(results_dir)
print("Directory " , results_dir , " created ")
except FileExistsError:
print("Directory " , results_dir , " already exists")
def get_noisy_observation(X, objective):
f = PBO.objectives.objective_get_f_neg(X, objective)
return PBO.observation_model.gen_observation_from_f(X, f, 1)
def train_and_visualize(X, y, title, lengthscale_init=None, signal_variance_init=None):
# Train model with data
# CHANGE 6: use full_gp instead of sparse,
result = PBO.models.learning_fullgp.train_model_fullcov(
X, y,
obj_low=objective_low,
obj_high=objective_high,
lengthscale_init=lengthscale_init,
signal_variance_init=signal_variance_init,
indifference_threshold=0.,
n_sample=1000,
deterministic=True, # only sample f values once, not re-sampling
num_steps=3000)
q_mu = result['q_mu']
q_sqrt = result['q_sqrt']
u = result['u']
inputs = result['inputs']
k = result['kernel']
likelihood = gpflow.likelihoods.Gaussian()
model = PBO.models.learning.init_SVGP_fullcov(q_mu, q_sqrt, u, k, likelihood)
u_mean = q_mu.numpy()
inducing_vars = u.numpy()
return model, inputs, u_mean, inducing_vars
def uniform_grid(input_dims, num_discrete_per_dim, low=0., high=1.):
"""
Returns an array with all possible permutations of discrete values in input_dims number of dimensions.
:param input_dims: int
:param num_discrete_per_dim: int
:param low: int
:param high: int
:return: tensor of shape (num_discrete_per_dim ** input_dims, input_dims)
"""
num_points = num_discrete_per_dim ** input_dims
out = np.zeros([num_points, input_dims])
discrete_points = np.linspace(low, high, num_discrete_per_dim)
for i in range(num_points):
for dim in range(input_dims):
val = num_discrete_per_dim ** (dim)
out[i, dim] = discrete_points[int((i // val) % num_discrete_per_dim)]
return out
```
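An equivalent formulation of `uniform_grid` using `itertools.product` (the row ordering may differ from the nested-loop version above, but the set of grid points is the same):

```python
import itertools
import numpy as np

def uniform_grid_product(input_dims, num_discrete_per_dim, low=0., high=1.):
    points = np.linspace(low, high, num_discrete_per_dim)
    # product(..., repeat=d) enumerates every combination of per-axis values
    return np.array(list(itertools.product(points, repeat=input_dims)))

grid = uniform_grid_product(2, 3)  # 3**2 = 9 points in 2 dimensions
```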
This function is our main metric for the performance of the acquisition function: the closer the model's best guess is to the global minimum, the better.
```
def best_guess(model):
"""
Returns a GP model's best guess of the global maximum of f.
"""
# CHANGE 7: use a discrete grid
xx = PBO.models.learning_fullgp.get_all_discrete_inputs(objective_low, objective_high, objective_dim, delta)
res = model.predict_f(xx)[0].numpy()
return xx[np.argmax(res)]
```
Store the results in these arrays:
```
num_data_at_end = int(num_init_prefs + num_evals)
X_results = np.zeros([num_runs, num_data_at_end, num_choices, input_dims])
y_results = np.zeros([num_runs, num_data_at_end, 1, input_dims])
best_guess_results = np.zeros([num_runs, num_evals, input_dims])
```
Create the initial values for each run:
```
np.random.seed(0)
# CHANGE 8: just randomly initialize with some preference observation
init_vals = np.zeros([num_runs, num_init_prefs, num_choices, input_dims])
for run in range(num_runs):
for i in range(num_init_prefs):
init_vals[run,i] = PBO.models.learning_fullgp.get_random_inputs(
objective_low,
objective_high,
objective_dim,
delta,
size=num_choices,
with_replacement=False,
exclude_inputs=None)
```
The following loops carry out the Bayesian optimization algorithm over a number of runs, with a fixed number of evaluations per run.
```
# CHANGE 9: need to store lengthscale and signal_variance from previous iteration to initialize the current iteration
lengthscale_init = None
signal_variance_init = None
for run in range(num_runs): # CHECK IF STARTING RUN IS CORRECT
print("")
print("==================")
print("Beginning run %s" % (run))
X = init_vals[run]
y = get_noisy_observation(X, objective)
model, inputs, u_mean, inducing_vars = train_and_visualize(X, y,
"Run_{}:_Initial_model".format(run))
# save optimized lengthscale and signal variance for next iteration
lengthscale_init = model.kernel.lengthscale.numpy()
signal_variance_init = model.kernel.variance.numpy()
for evaluation in range(num_evals):
print("Beginning evaluation %s" % (evaluation))
# Sample possible next queries
# CHANGE 10: use discrete grid
samples = PBO.models.learning_fullgp.sample_inputs(inputs.numpy(),
num_samples,
num_choices,
min_val=objective_low,
max_val=objective_high,
delta=delta)
# Sample maximizers
print("Evaluation %s: Sampling maximizers" % (evaluation))
maximizers = PBO.fourier_features.sample_maximizers(X=inducing_vars,
count=num_maximizers,
n_init=num_maximizers_init,
D=num_fourier_features,
model=model,
min_val=objective_low,
max_val=objective_high)
print(maximizers)
# Calculate PES value I for each possible next query
print("Evaluation %s: Calculating I" % (evaluation))
I_vals = PBO.acquisitions.pes.I_batch(samples, maximizers, model)
# Select query that maximizes I
next_idx = np.argmax(I_vals)
next_query = samples[next_idx]
print("Evaluation %s: Next query is %s with I value of %s" % (evaluation, next_query, I_vals[next_idx]))
X = np.concatenate([X, [next_query]])
# Evaluate objective function
y = np.concatenate([y, get_noisy_observation(np.expand_dims(next_query, axis=0), objective)], axis=0)
print("Evaluation %s: Training model" % (evaluation))
model, inputs, u_mean, inducing_vars = train_and_visualize(X, y,
"Run_{}_Evaluation_{}".format(run, evaluation))
print_summary(model)
# save optimized lengthscale and signal variance for next iteration
lengthscale_init = model.kernel.lengthscale.numpy()
signal_variance_init = model.kernel.variance.numpy()
best_guess_results[run, evaluation, :] = best_guess(model)
# CHANGE 11: log both the estimated minimizer and its objective value
print("Best_guess f({}) = {}".format(
best_guess_results[run, evaluation, :],
objective(best_guess_results[run, evaluation, :])))
# Save model
pickle.dump((X, y, inputs,
model.kernel.variance,
model.kernel.lengthscale,
model.likelihood.variance,
inducing_vars,
model.q_mu,
model.q_sqrt,
maximizers),
open(results_dir + "Model_Run_{}_Evaluation_{}.p".format(run, evaluation), "wb"))
X_results[run] = X
y_results[run] = y
pickle.dump((X_results, y_results, best_guess_results),
open(results_dir + acquisition_name + "_" + objective_name + "_" + "Xybestguess.p", "wb"))
global_min = np.min(objective(PBO.models.learning_fullgp.get_all_discrete_inputs(objective_low, objective_high, objective_dim, delta)))
metric = best_guess_results
ir = objective(metric) - global_min
mean = np.mean(ir, axis=0)
std_dev = np.std(ir, axis=0)
std_err = std_dev / np.sqrt(ir.shape[0])
print("Mean immediate regret at each evaluation averaged across all runs:")
print(mean)
print("Standard error of immediate regret at each evaluation averaged across all runs:")
print(std_err)
with open(results_dir + acquisition_name + "_" + objective_name + "_" + "mean_sem" + ".txt", "w") as text_file:
print("Mean immediate regret at each evaluation averaged across all runs:", file=text_file)
print(mean, file=text_file)
print("Standard error of immediate regret at each evaluation averaged across all runs:", file=text_file)
print(std_err, file=text_file)
pickle.dump((mean, std_err), open(results_dir + acquisition_name + "_" + objective_name + "_" + "mean_sem.p", "wb"))
```
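The regret aggregation at the end of the run can be exercised on its own. The sketch below uses a hypothetical stand-in for `best_guess_results` (shape `(runs, evaluations, dim)`) and a toy `objective`; only the mean/standard-error arithmetic mirrors the cell above:

```python
import numpy as np

# Hypothetical stand-ins: 3 runs, 4 evaluations, 1-D inputs.
def objective(x):
    return np.sum((x - 0.5) ** 2, axis=-1)  # minimum value 0 at x = 0.5

best_guess_results = np.array([
    [[0.9], [0.7], [0.6], [0.5]],
    [[0.8], [0.6], [0.5], [0.5]],
    [[1.0], [0.8], [0.6], [0.5]],
])

global_min = 0.0
ir = objective(best_guess_results) - global_min       # immediate regret, shape (runs, evaluations)
mean = np.mean(ir, axis=0)                            # averaged across runs, per evaluation
std_err = np.std(ir, axis=0) / np.sqrt(ir.shape[0])   # standard error per evaluation
```

With a consistent best guess at the optimum in the final evaluation, the last entry of `mean` is zero, confirming the regret converges in this toy setting.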
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Join/simple_joins.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Join/simple_joins.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Join/simple_joins.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except ImportError:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Load a Landsat 8 image collection at a point of interest.
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterBounds(ee.Geometry.Point(-122.09, 37.42))
# Define start and end dates with which to filter the collections.
april = '2014-04-01'
may = '2014-05-01'
june = '2014-06-01'
july = '2014-07-01'
# The primary collection is Landsat images from April to June.
primary = collection.filterDate(april, june)
# The secondary collection is Landsat images from May to July.
secondary = collection.filterDate(may, july)
# Use an equals filter to define how the collections match.
filter = ee.Filter.equals(**{
'leftField': 'system:index',
'rightField': 'system:index'
})
# Create the join.
simpleJoin = ee.Join.simple()
# Apply the join.
simpleJoined = simpleJoin.apply(primary, secondary, filter)
# Display the result.
print('Simple join: ', simpleJoined.getInfo())
```
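`ee.Join.simple()` retains only those primary images that match at least one secondary image under the filter. Outside Earth Engine, the same equals-on-`system:index` join can be sketched with plain Python dicts; the image records below are made up for illustration:

```python
# Hypothetical image records keyed by their 'system:index' property.
primary = [
    {'system:index': 'LC08_044034_20140403'},
    {'system:index': 'LC08_044034_20140419'},
    {'system:index': 'LC08_044034_20140505'},
]
secondary = [
    {'system:index': 'LC08_044034_20140505'},
    {'system:index': 'LC08_044034_20140521'},
]

# Simple join: retain primary records whose index appears among the secondary keys.
secondary_keys = {img['system:index'] for img in secondary}
simple_joined = [img for img in primary if img['system:index'] in secondary_keys]
```

Only the scene shared by both date windows survives, which is exactly what the simple join prints for the overlapping May imagery.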
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
```
import argparse
import os
import apache_beam as beam
import tensorflow as tf
from apache_beam.options.pipeline_options import PipelineOptions
import apache_beam.runners.interactive.interactive_beam as ib
import apache_beam.transforms.sql
import beam__common
import fidscs_globals
import random
import data_extractor
import data_extractor__common  # provides path_exists(), used below
data_dir = "/tmp/fids-capstone-data/data"
data_extractor.run(max_target_videos=-1, data_dir=data_dir, use_beam=True)
options = {
'project': 'my-project', # change
'runner': 'InteractiveRunner',
'direct_num_workers': 0, # 0 is use all available cores
'direct_running_mode': 'multi_threading', # ['in_memory', 'multi_threading', 'multi_processing'] # 'multi_processing' doesn't seem to work for DirectRunner?
'streaming': False # set to True if data source is unbounded (e.g. GCP PubSub)
}
pipeline_options = PipelineOptions(flags=[], **options) # easier to pass in options from command-line this way
print(f"PipelineOptions:\n{pipeline_options.get_all_options()}\n")
fidscs_globals.DATA_ROOT_DIR = data_dir
can_proceed = True
if not data_extractor__common.path_exists(fidscs_globals.DATA_ROOT_DIR) or len(beam__common.list_dir(fidscs_globals.DATA_ROOT_DIR))==0:
print(f"{fidscs_globals.VALIDATION_FATAL_ERROR_TEXT} data directory does not exist or is empty!")
can_proceed = False
else:
fidscs_globals.VIDEO_DIR = os.path.join(fidscs_globals.DATA_ROOT_DIR, 'videos')
fidscs_globals.STICHED_VIDEO_FRAMES_DIR = os.path.join(fidscs_globals.DATA_ROOT_DIR, 'stitched_video_frames')
fidscs_globals.CORPUS_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.CORPUS_DS_FNAME)
fidscs_globals.DOCUMENT_ASL_CONSULTANT_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.DOCUMENT_ASL_CONSULTANT_DS_FNAME)
fidscs_globals.ASL_CONSULTANT_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.ASL_CONSULTANT_DS_FNAME)
fidscs_globals.VIDEO_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.VIDEO_DS_FNAME)
fidscs_globals.VIDEO_SEGMENT_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.VIDEO_SEGMENT_DS_FNAME)
fidscs_globals.VIDEO_FRAME_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.VIDEO_FRAME_DS_FNAME)
fidscs_globals.UTTERANCE_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.UTTERANCE_DS_FNAME)
fidscs_globals.UTTERANCE_VIDEO_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.UTTERANCE_VIDEO_DS_FNAME)
fidscs_globals.UTTERANCE_TOKEN_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.UTTERANCE_TOKEN_DS_FNAME)
fidscs_globals.UTTERANCE_TOKEN_FRAME_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.UTTERANCE_TOKEN_FRAME_DS_FNAME)
fidscs_globals.VOCABULARY_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.VOCABULARY_DS_FNAME)
fidscs_globals.TRAIN_ASSOC_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.TRAIN_FRAME_SEQ_ASSOC_DS_FNAME)
fidscs_globals.VAL_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.VAL_FRAME_SEQ_DS_FNAME)
fidscs_globals.TRAIN_DS_PATH = os.path.join(fidscs_globals.DATA_ROOT_DIR, fidscs_globals.TRAIN_FRAME_SEQ_DS_FNAME)
pl = beam.Pipeline(options=pipeline_options)
# full_target_vid_index_schemad_pcoll = beam__common.pl__1__read_target_vid_index_csv(pl)
# corpus_index_schemad_pcoll = beam__common.pl__1__read_corpus_index_csv(pl) # XML is base-64 encode but we no longer need it (to decode it) since it is only used to create the datasets
# # corpus_index_decoded_XML_pcoll = pl__2__decode_XML(corpus_index_schemad_pcoll) # see above
# asl_consultant_index_schemad_pcoll = beam__common.pl__1__read_asl_consultant_index_csv(pl)
# document_asl_consultant_utterance_index_schemad_pcoll = beam__common.pl__1__read_document_asl_consultant_utterance_index_csv(pl)
# document_asl_consultant_target_video_index_schemad_pcoll = beam__common.pl__1__read_document_asl_consultant_target_video_index_csv(pl)
# document_asl_consultant_utterance_video_index_schemad_pcoll = beam__common.pl__1__read_document_asl_consultant_utterance_video_index_csv(pl)
# document_target_video_segment_index_schemad_pcoll = beam__common.pl__1__read_document_target_video_segment_index_csv(pl)
# vocabulary_index_schemad_pcoll = beam__common.pl__1__read_vocabulary_index_csv(pl)
# document_asl_consultant_utterance_token_index_schemad_pcoll = beam__common.pl__1__read_document_asl_consultant_utterance_token_index_csv(pl)
# document_asl_consultant_target_video_frame_index_schemad_pcoll = beam__common.pl__1__read_document_asl_consultant_target_video_frame_index_csv(pl)
# as it turns out, this is all we need
document_asl_consultant_target_video_utterance_token_frame_index_schemad_pcoll = beam__common.pl__1__read_document_asl_consultant_target_video_utterance_token_frame_index_csv(pl)
# document_asl_consultant_target_video_utterance_token_frame_index_schemad_pcoll is the main table we use for training.
# This will ultimately provide which frame sequences correspond to individual tokens.
# But our first measure is to build train and validation sets (for tokens).
# In order to split up train vs validation sets, we need to compare "apples to apples".
# That is, in order for a token (TokenID) to be considered a candidate for the split,
# we require at least two of the same (TokenID, CameraPerspective) wherein the ASL
# consultant for each differs. We would prefer more than two of these tuples, each
# having unique ASL consultants in the set of occurrences, with the majority of said
# tuples being assigned to the training set and the remainder (at least one) being
# assigned to the validation set. We would like to achieve a 90/10 split, ideally,
# but we will take what we get.
# document_asl_consultant_target_video_utterance_token_frame_index_schemad_pcoll:
# beam.Row(
# DocumentID=int(d_document_asl_consultant_target_video_utterance_token_frame_info[fidscs_globals.SCHEMA_COL_NAMES__UTTERANCE_TOKEN_FRAME_DS[0]]),
# ASLConsultantID=int(d_document_asl_consultant_target_video_utterance_token_frame_info[fidscs_globals.SCHEMA_COL_NAMES__UTTERANCE_TOKEN_FRAME_DS[1]]),
# CameraPerspective=int(d_document_asl_consultant_target_video_utterance_token_frame_info[fidscs_globals.SCHEMA_COL_NAMES__UTTERANCE_TOKEN_FRAME_DS[2]]),
# TargetVideoFilename=str(d_document_asl_consultant_target_video_utterance_token_frame_info[fidscs_globals.SCHEMA_COL_NAMES__UTTERANCE_TOKEN_FRAME_DS[3]]),
# UtteranceSequence=int(d_document_asl_consultant_target_video_utterance_token_frame_info[fidscs_globals.SCHEMA_COL_NAMES__UTTERANCE_TOKEN_FRAME_DS[4]]),
# TokenSequence=int(d_document_asl_consultant_target_video_utterance_token_frame_info[fidscs_globals.SCHEMA_COL_NAMES__UTTERANCE_TOKEN_FRAME_DS[5]]),
# FrameSequence=int(d_document_asl_consultant_target_video_utterance_token_frame_info[fidscs_globals.SCHEMA_COL_NAMES__UTTERANCE_TOKEN_FRAME_DS[6]]),
# TokenID=int(d_document_asl_consultant_target_video_utterance_token_frame_info[fidscs_globals.SCHEMA_COL_NAMES__UTTERANCE_TOKEN_FRAME_DS[7]])
# )
# We will transform this into tuples of the form:
# [
# 'TokenID',
# 'CameraPerspective',
# 'DocumentID',
# 'ASLConsultantID',
# 'TargetVideoFilename',
# 'UtteranceSequence',
# 'TokenSequence',
# 'FrameSequence'
# ]
dctvustsfs = (
document_asl_consultant_target_video_utterance_token_frame_index_schemad_pcoll
| "Beam PL: extract (TokenID,CameraPerspective,ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence,FrameSequence) from dctvustsfs schemad pcoll" >> beam.Map(
lambda dctvustsfs_row: (
dctvustsfs_row.TokenID,
dctvustsfs_row.CameraPerspective,
dctvustsfs_row.ASLConsultantID,
dctvustsfs_row.TargetVideoFilename,
dctvustsfs_row.UtteranceSequence,
dctvustsfs_row.TokenSequence,
dctvustsfs_row.FrameSequence
)
)
)
# for train-validation split, we want to key/group by (TokenID, CameraPerspective) with lists of unique (ASLConsultantID, TargetVideoFilename, UtteranceSequence, TokenSequence) > 1
ctvusts_by_tcp = (
dctvustsfs
| "Beam PL: extract ((TokenID,CameraPerspective), (ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence)) from dctvustsfs" >> beam.Map(
lambda dctvustsfs_row_tpl: (
(
dctvustsfs_row_tpl[0],
dctvustsfs_row_tpl[1]
),
(
dctvustsfs_row_tpl[2],
dctvustsfs_row_tpl[3],
dctvustsfs_row_tpl[4],
dctvustsfs_row_tpl[5]
)
)
)
| "Beam PL: select distinct ((TokenID,CameraPerspective), (ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence)) from ctvusts_by_tcp" >> beam.Distinct()
| "Beam PL: group (ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence) by key (TokenID,CameraPerspective)" >> beam.GroupByKey()
# the above produces tuples of the form:
# (
# (
# TokenID,
# CameraPerspective
# ),
# listof(
# (ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence)
# )
# )
)
def flatten_ctvusts_by_tcp(ctvusts_by_tcp_tpl):
return [
(
ctvusts_by_tcp_tpl[0][0], # TokenID
ctvusts_by_tcp_tpl[0][1], # CameraPerspective
ctvusts_tpl[0], # ASLConsultantID
ctvusts_tpl[1], # TargetVideoFilename
ctvusts_tpl[2], # UtteranceSequence
ctvusts_tpl[3] # TokenSequence
) for ctvusts_tpl in ctvusts_by_tcp_tpl[1]
]
ctvusts_by_tcp__gt_1 = (
ctvusts_by_tcp
| "Beam PL: filter candidate (TokenID,CameraPerspective) for test-validation split" >> beam.Filter(
lambda list_ctvusts_by_tcp_tpl: len(set(list_ctvusts_by_tcp_tpl[1])) > 1
)
| "Beam PL: flatten filtered (TokenID,CameraPerspective) candidates for test-validation split" >> beam.FlatMap(flatten_ctvusts_by_tcp)
)
ctvusts_by_tcp__lte_1 = (
ctvusts_by_tcp
| "Beam PL: filter non-candidate (TokenID,CameraPerspective) for test-validation split" >> beam.Filter(
lambda list_ctvusts_by_tcp_tpl: len(set(list_ctvusts_by_tcp_tpl[1])) <= 1
)
| "Beam PL: flatten filtered (TokenID,CameraPerspective) non-candidates for test-validation split" >> beam.FlatMap(flatten_ctvusts_by_tcp)
)
```
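The candidate/non-candidate split above hinges on a single predicate: a `(TokenID, CameraPerspective)` key qualifies for the train/validation split only if it groups more than one distinct `(ASLConsultantID, TargetVideoFilename, UtteranceSequence, TokenSequence)` tuple. A minimal plain-Python sketch of that `Distinct` + `GroupByKey` + `Filter` sequence, with made-up tuples:

```python
from collections import defaultdict

# (TokenID, CameraPerspective) -> (ASLConsultantID, TargetVideoFilename, UtteranceSequence, TokenSequence)
rows = [
    ((7, 0), (1, 'a.mov', 0, 0)),
    ((7, 0), (2, 'b.mov', 3, 1)),   # second distinct tuple -> key (7, 0) qualifies
    ((9, 1), (1, 'a.mov', 5, 2)),
    ((9, 1), (1, 'a.mov', 5, 2)),   # duplicate only -> key (9, 1) does not qualify
]

grouped = defaultdict(set)          # the set plays the role of beam.Distinct()
for key, ctvusts in rows:
    grouped[key].add(ctvusts)

gt_1 = {k: v for k, v in grouped.items() if len(v) > 1}    # split candidates
lte_1 = {k: v for k, v in grouped.items() if len(v) <= 1}  # train-only keys
```

Key `(7, 0)` lands in the candidate set because its two occurrences come from different consultants; `(9, 1)` collapses to a single distinct tuple and goes straight to training.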
#### Finally, execute validation/train split on ctvusts_by_tcp__gt_1
```
# first, we need to put ctvusts_by_tcp__gt_1 back into ((TokenID, CameraPerspective), (ASLConsultantID, TargetVideoFilename, UtteranceSequence, TokenSequence)) form
def rekey_ctvusts_by_tcp(ctvusts_by_tcp_tpl):
return (
(
ctvusts_by_tcp_tpl[0], # TokenID
ctvusts_by_tcp_tpl[1] # CameraPerspective
),
(
ctvusts_by_tcp_tpl[2], # ASLConsultantID
ctvusts_by_tcp_tpl[3], # TargetVideoFilename
ctvusts_by_tcp_tpl[4], # UtteranceSequence
ctvusts_by_tcp_tpl[5] # TokenSequence
)
)
def val_train_split__ctvusts_by_tcp__gt_1__tpl(ctvusts_list__by__tcp__gt_1__tpl):
"""
ctvusts_list__by__tcp__gt_1__tpl
(
(TokenID,CameraPerspective), # key
listof(
(ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence)
)
)
"""
ctvusts_list = ctvusts_list__by__tcp__gt_1__tpl[1].copy() # we need a copy since we want to shuffle
random.shuffle(ctvusts_list)
len_ctvusts_list = len(ctvusts_list)
val_len_ctvusts_list = int(len_ctvusts_list*fidscs_globals.VALIDATION_SIZE_RATIO) if len_ctvusts_list > int(((1-fidscs_globals.VALIDATION_SIZE_RATIO)*100)/10) else 1
train__ctvusts_list, val__ctvusts_list = ctvusts_list[val_len_ctvusts_list:], ctvusts_list[:val_len_ctvusts_list]
return (
(
ctvusts_list__by__tcp__gt_1__tpl[0][0], # TokenID
ctvusts_list__by__tcp__gt_1__tpl[0][1] # CameraPerspective
),
(
train__ctvusts_list,
val__ctvusts_list
)
)
val_train_split_basis__ctvusts_by_tcp__gt_1 = (
ctvusts_by_tcp__gt_1
| "Beam PL: rekey ctvusts_by_tcp__gt_1 for validation/train split" >> beam.Map(rekey_ctvusts_by_tcp)
| "Beam PL: group (ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence) rekeyed by (TokenID,CameraPerspective)" >> beam.GroupByKey()
# the above produces tuples of the form:
# (
# (TokenID,CameraPerspective), # key
# listof(
# (ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence)
# )
# )
| "Beam PL: split rekeyed ctvusts_list_by_tcp__gt_1" >> beam.Map(val_train_split__ctvusts_by_tcp__gt_1__tpl)
# the above produces tuples of the form:
# (
# (TokenID,CameraPerspective), # key
# (
# train_list_of(ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence),
# val_list_of(ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence),
# )
# )
)
train__ctvusts_by_tcp__gt_1 = (
val_train_split_basis__ctvusts_by_tcp__gt_1
| "Beam PL: select train sublist from val_train_split_basis__ctvusts_by_tcp__gt_1" >> beam.Map(
lambda val_train_split_basis__ctvusts_by_tcp__gt_1_tpl: [
(
val_train_split_basis__ctvusts_by_tcp__gt_1_tpl[0][0], # TokenID
val_train_split_basis__ctvusts_by_tcp__gt_1_tpl[0][1], # CameraPerspective
train_ctvusts_tpl[0], # ASLConsultantID
train_ctvusts_tpl[1], # TargetVideoFilename
train_ctvusts_tpl[2], # UtteranceSequence
train_ctvusts_tpl[3] # TokenSequence
) for train_ctvusts_tpl in val_train_split_basis__ctvusts_by_tcp__gt_1_tpl[1][0] # index [1][0] points to train sublist
]
)
| "Beam PL: 'explode' list_train__ctvusts_by_tcp__gt_1_tpl" >> beam.FlatMap(lambda list_train__ctvusts_by_tcp__gt_1_tpl: list_train__ctvusts_by_tcp__gt_1_tpl)
)
val__ctvusts_by_tcp__gt_1 = (
val_train_split_basis__ctvusts_by_tcp__gt_1
| "Beam PL: select validation sublist from val_train_split_basis__ctvusts_by_tcp__gt_1" >> beam.Map(
lambda val_train_split_basis__ctvusts_by_tcp__gt_1_tpl: [
(
val_train_split_basis__ctvusts_by_tcp__gt_1_tpl[0][0], # TokenID
val_train_split_basis__ctvusts_by_tcp__gt_1_tpl[0][1], # CameraPerspective
val_ctvusts_tpl[0], # ASLConsultantID
val_ctvusts_tpl[1], # TargetVideoFilename
val_ctvusts_tpl[2], # UtteranceSequence
val_ctvusts_tpl[3] # TokenSequence
) for val_ctvusts_tpl in val_train_split_basis__ctvusts_by_tcp__gt_1_tpl[1][1] # index [1][1] points to validation sublist
]
)
| "Beam PL: 'explode' list_val__ctvusts_by_tcp__gt_1_tpl" >> beam.FlatMap(lambda list_val__ctvusts_by_tcp__gt_1_tpl: list_val__ctvusts_by_tcp__gt_1_tpl)
)
# join train__ctvusts_by_tcp__gt_1 to dctvustsfs
train__ctvusts_by_tcp__gt_1__keys = (
train__ctvusts_by_tcp__gt_1
| "Beam PL: extract ((TokenID,CameraPerspective,ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence), '<train__ctvusts_by_tcp__gt_1__has_key>') for join to dctvustsfs" >> beam.Map(
lambda train__ctvusts_by_tcp__gt_1_tpl : (
(
train__ctvusts_by_tcp__gt_1_tpl[0], # TokenID
train__ctvusts_by_tcp__gt_1_tpl[1], # CameraPerspective
train__ctvusts_by_tcp__gt_1_tpl[2], # ASLConsultantID
train__ctvusts_by_tcp__gt_1_tpl[3], # TargetVideoFilename
train__ctvusts_by_tcp__gt_1_tpl[4], # UtteranceSequence
train__ctvusts_by_tcp__gt_1_tpl[5] # TokenSequence
),
"<train__ctvusts_by_tcp__gt_1__has_key>"
)
)
)
dctvustsfs__frame_sequences = (
dctvustsfs
| "Beam PL: extract ((TokenID,CameraPerspective,ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence), FrameSequence) for join to train__ctvusts_by_tcp__gt_1/val__ctvusts_by_tcp__gt_1" >> beam.Map(
lambda dctvustsfs_tpl: (
(
dctvustsfs_tpl[0], # TokenID
dctvustsfs_tpl[1], # CameraPerspective
dctvustsfs_tpl[2], # ASLConsultantID
dctvustsfs_tpl[3], # TargetVideoFilename
dctvustsfs_tpl[4], # UtteranceSequence
dctvustsfs_tpl[5] # TokenSequence
),
dctvustsfs_tpl[6] # FrameSequence
)
)
)
train_dctvustsfs__gt__1 = (
({
'has_key': train__ctvusts_by_tcp__gt_1__keys,
'frame_sequences': dctvustsfs__frame_sequences
})
| "Beam PL: join train__ctvusts_by_tcp__gt_1 to dctvustsfs" >> beam.CoGroupByKey()
# the above produces tuples of the form:
# (
# (
# <TokenID>,
# <CameraPerspective>,
# <ASLConsultantID>,
# <TargetVideoFilename>,
# <UtteranceSequence>,
# <TokenSequence>
# ),
# {
# 'has_key': listof('<train__ctvusts_by_tcp__gt_1__has_key>'), # should have only one/single element
# 'frame_sequences': listof(<FrameSequence>) # many
# }
# )
| "Beam PL: filter out mismatches from joined train__ctvusts_by_tcp__gt_1 to dctvustsfs" >> beam.Filter(
lambda joined__train__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl:
len(joined__train__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[1]['has_key'])>0 and \
len(joined__train__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[1]['frame_sequences'])>0
)
| "Beam PL: 'explode' listof(<FrameSequence>) from joined train__ctvusts_by_tcp__gt_1 to dctvustsfs to list of tuples" >> beam.Map(
lambda joined__train__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl: [
(
joined__train__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[0][0], # TokenID
joined__train__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[0][1], # CameraPerspective
joined__train__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[0][2], # ASLConsultantID
joined__train__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[0][3], # TargetVideoFilename
joined__train__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[0][4], # UtteranceSequence
joined__train__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[0][5], # TokenSequence
frame_seq
) for frame_seq in sorted(joined__train__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[1]['frame_sequences'])
]
)
| "Beam PL: 'explode' listof((TokenID,CameraPerspective,ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence, FrameSequence)) from joined train__ctvusts_by_tcp__gt_1 to dctvustsfs" >> beam.FlatMap(
lambda list_joined__train__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl: list_joined__train__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl
)
)
# join val__ctvusts_by_tcp__gt_1 to dctvustsfs
val__ctvusts_by_tcp__gt_1__keys = (
val__ctvusts_by_tcp__gt_1
| "Beam PL: extract ((TokenID,CameraPerspective,ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence), '<val__ctvusts_by_tcp__gt_1__has_key>') for join to dctvustsfs" >> beam.Map(
lambda val__ctvusts_by_tcp__gt_1_tpl : (
(
val__ctvusts_by_tcp__gt_1_tpl[0], # TokenID
val__ctvusts_by_tcp__gt_1_tpl[1], # CameraPerspective
val__ctvusts_by_tcp__gt_1_tpl[2], # ASLConsultantID
val__ctvusts_by_tcp__gt_1_tpl[3], # TargetVideoFilename
val__ctvusts_by_tcp__gt_1_tpl[4], # UtteranceSequence
val__ctvusts_by_tcp__gt_1_tpl[5] # TokenSequence
),
"<val__ctvusts_by_tcp__gt_1__has_key>"
)
)
)
val_dctvustsfs__gt__1 = (
({
'has_key': val__ctvusts_by_tcp__gt_1__keys,
'frame_sequences': dctvustsfs__frame_sequences
})
| "Beam PL: join val__ctvusts_by_tcp__gt_1 to dctvustsfs" >> beam.CoGroupByKey()
# the above produces tuples of the form:
# (
# (
# <TokenID>,
# <CameraPerspective>,
# <ASLConsultantID>,
# <TargetVideoFilename>,
# <UtteranceSequence>,
# <TokenSequence>
# ),
# {
# 'has_key': listof('<val__ctvusts_by_tcp__gt_1__has_key>'), # should have only one/single element
# 'frame_sequences': listof(<FrameSequence>) # many
# }
# )
| "Beam PL: filter out mismatches from joined val__ctvusts_by_tcp__gt_1 to dctvustsfs" >> beam.Filter(
lambda joined__val__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl:
len(joined__val__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[1]['has_key'])>0 and \
len(joined__val__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[1]['frame_sequences'])>0
)
| "Beam PL: 'explode' listof(<FrameSequence>) from joined val__ctvusts_by_tcp__gt_1 to dctvustsfs to list of tuples" >> beam.Map(
lambda joined__val__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl: [
(
joined__val__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[0][0], # TokenID
joined__val__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[0][1], # CameraPerspective
joined__val__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[0][2], # ASLConsultantID
joined__val__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[0][3], # TargetVideoFilename
joined__val__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[0][4], # UtteranceSequence
joined__val__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[0][5], # TokenSequence
frame_seq # FrameSequence
) for frame_seq in sorted(joined__val__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl[1]['frame_sequences'])
]
)
| "Beam PL: 'explode' listof((TokenID,CameraPerspective,ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence, FrameSequence)) from joined val__ctvusts_by_tcp__gt_1 to dctvustsfs" >> beam.FlatMap(
lambda list_joined__val__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl: list_joined__val__ctvusts_by_tcp__gt_1__to__dctvustsfs__tpl
)
)
train__ctvusts_by_tcp__lte_1__keys = (
ctvusts_by_tcp__lte_1
| "Beam PL: extract ((TokenID,CameraPerspective,ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence), '<ctvusts_by_tcp__lte_1_tpl__has_key>') for join to dctvustsfs" >> beam.Map(
lambda ctvusts_by_tcp__lte_1_tpl : (
(
ctvusts_by_tcp__lte_1_tpl[0], # TokenID
ctvusts_by_tcp__lte_1_tpl[1], # CameraPerspective
ctvusts_by_tcp__lte_1_tpl[2], # ASLConsultantID
ctvusts_by_tcp__lte_1_tpl[3], # TargetVideoFilename
ctvusts_by_tcp__lte_1_tpl[4], # UtteranceSequence
ctvusts_by_tcp__lte_1_tpl[5] # TokenSequence
),
"<ctvusts_by_tcp__lte_1_tpl__has_key>"
)
)
)
train_dctvustsfs__lte_1 = (
({
'has_key': train__ctvusts_by_tcp__lte_1__keys,
'frame_sequences': dctvustsfs__frame_sequences
})
| "Beam PL: join ctvusts_by_tcp__lte_1 to dctvustsfs" >> beam.CoGroupByKey()
# the above produces tuples of the form:
# (
# (
# <TokenID>,
# <CameraPerspective>,
# <ASLConsultantID>,
# <TargetVideoFilename>,
# <UtteranceSequence>,
# <TokenSequence>
# ),
# {
# 'has_key': listof('<ctvusts_by_tcp__lte_1_tpl__has_key>'), # should have only one/single element
# 'frame_sequences': listof(<FrameSequence>) # many
# }
# )
| "Beam PL: filter out mismatches from joined train__ctvusts_by_tcp__lte_1 to dctvustsfs" >> beam.Filter(
lambda joined__train__ctvusts_by_tcp__lte_1__to__dctvustsfs__tpl:
len(joined__train__ctvusts_by_tcp__lte_1__to__dctvustsfs__tpl[1]['has_key'])>0 and \
len(joined__train__ctvusts_by_tcp__lte_1__to__dctvustsfs__tpl[1]['frame_sequences'])>0
)
| "Beam PL: 'explode' listof(<FrameSequence>) from joined train__ctvusts_by_tcp__lte_1 to dctvustsfs to list of tuples" >> beam.Map(
lambda joined__train__ctvusts_by_tcp__lte_1__to__dctvustsfs__tpl: [
(
joined__train__ctvusts_by_tcp__lte_1__to__dctvustsfs__tpl[0][0], # TokenID
joined__train__ctvusts_by_tcp__lte_1__to__dctvustsfs__tpl[0][1], # CameraPerspective
joined__train__ctvusts_by_tcp__lte_1__to__dctvustsfs__tpl[0][2], # ASLConsultantID
joined__train__ctvusts_by_tcp__lte_1__to__dctvustsfs__tpl[0][3], # TargetVideoFilename
joined__train__ctvusts_by_tcp__lte_1__to__dctvustsfs__tpl[0][4], # UtteranceSequence
joined__train__ctvusts_by_tcp__lte_1__to__dctvustsfs__tpl[0][5], # TokenSequence
frame_seq
) for frame_seq in sorted(joined__train__ctvusts_by_tcp__lte_1__to__dctvustsfs__tpl[1]['frame_sequences'])
]
)
| "Beam PL: 'explode' listof((TokenID,CameraPerspective,ASLConsultantID,TargetVideoFilename,UtteranceSequence,TokenSequence, FrameSequence)) from joined train__ctvusts_by_tcp__lte_1 to dctvustsfs" >> beam.FlatMap(
lambda list_joined__train__ctvusts_by_tcp__lte_1__to__dctvustsfs__tpl: list_joined__train__ctvusts_by_tcp__lte_1__to__dctvustsfs__tpl
)
)
train_dctvustsfs__all = (
(train_dctvustsfs__gt__1, train_dctvustsfs__lte_1)
| f"Beam PL: merge train_dctvustsfs__gt__1 with train_dctvustsfs__lte_1" >> beam.Flatten()
)
# find all COMPLETE utterances that can be formed with token-cameraperspective pairs from the validation set
val_tcp__gt__1 = (
val_dctvustsfs__gt__1
| "Beam PL: extract (TokenID, CameraPerspective) from val_dctvustsfs__gt__1" >> beam.Map(
lambda tpl: (
tpl[0], # TokenID
tpl[1] # CameraPerspective
)
)
| "Beam PL: select distinct (TokenID, CameraPerspective) from val_dctvustsfs__gt__1" >> beam.Distinct()
)
complete_utterances__with__val_tcp__gt__1 = (
dctvustsfs
| "Beam PL: extract (ASLConsultantID,TargetVideoFilename,CameraPerspective,UtteranceSequence,TokenSequence,TokenID) from dctvustsfs" >> beam.Map(
lambda tpl: (
tpl[2], # <ASLConsultantID>
tpl[3], # <TargetVideoFilename>
tpl[4], # <UtteranceSequence>
tpl[1], # <CameraPerspective>
tpl[5], # <TokenSequence>
tpl[0] # <TokenID>
)
)
| "Beam PL: select distinct (ASLConsultantID,TargetVideoFilename,CameraPerspective,UtteranceSequence,TokenSequence,TokenID) from dctvustsfs" >> beam.Distinct()
| "Beam PL: transform distinct ctvcpustst tuples to tst_by_ctvuscp" >> beam.Map(
lambda tpl: (
(
tpl[0], # <ASLConsultantID>
tpl[1], # <TargetVideoFilename>
tpl[2], # <UtteranceSequence>
tpl[3] # <CameraPerspective>
),
(
tpl[4], # <TokenSequence>
tpl[5] # <TokenID>
)
)
)
| "Beam PL: collect list of tokenseq-tokenid for each (<ASLConsultantID>, <TargetVideoFilename>, <UtteranceSequence>, <CameraPerspective>)" >> beam.GroupByKey()
# the above produces tuples of the form:
# (
# (<ASLConsultantID>,<TargetVideoFilename>,<UtteranceSequence>,<CameraPerspective>), # key
# listof((<TokenSequence>,<TokenID>))
# )
| "Beam PL: sort list of tokenseq-tokenid by tokenseq for each (<ASLConsultantID>, <TargetVideoFilename>, <UtteranceSequence>, <CameraPerspective>)" >> beam.Map(
lambda tpl: (
(
tpl[0][0], # <ASLConsultantID>
tpl[0][1], # <TargetVideoFilename>
tpl[0][2], # <UtteranceSequence>
tpl[0][3] # <CameraPerspective>
),
[(tst_tpl[1], tpl[0][3]) for tst_tpl in sorted(tpl[1], key=lambda tst_tpl: tst_tpl[0])]
)
)
# the above produces tuples of the form:
# (
# (<ASLConsultantID>,<TargetVideoFilename>,<UtteranceSequence>,<CameraPerspective>), # key
# listof((<TokenID>, <CameraPerspective>)) # sorted by <TokenSequence>
# )
# now we need to filter all of the above (<ASLConsultantID>,<TargetVideoFilename>,<UtteranceSequence>,<CameraPerspective>) where every (<TokenID>, <CameraPerspective>) in the corresponding list exists in val_tcp__gt__1
| "Beam PL: filter utterances whose (TokenID,CameraPerspective) pairs all exist in val_tcp__gt__1" >> beam.Filter(
lambda list_tcp_tpl__by__ctvuscp__tpl, existing_val_tcp_tpls: all(tcp_tpl in existing_val_tcp_tpls for tcp_tpl in list_tcp_tpl__by__ctvuscp__tpl[1]),
existing_val_tcp_tpls=beam.pvalue.AsIter(val_tcp__gt__1)
)
| "Beam PL: extract (<ASLConsultantID>,<TargetVideoFilename>,<UtteranceSequence>,<CameraPerspective>,listof(<TokenID>))" >> beam.Map(
lambda tpl: (
tpl[0][0], # <ASLConsultantID>
tpl[0][1], # <TargetVideoFilename>
tpl[0][2], # <UtteranceSequence>
tpl[0][3], # <CameraPerspective>
[tcp_tpl[0] for tcp_tpl in tpl[1]] # listof(<TokenID>)
)
)
)
# we require this in order to make use of ib.show() (which provides visualization of the pcolls specified) or ib.collect() (which creates a pandas dataframe from a pcoll)
# but all pcolls we wish to visualize must be created prior to executing the following line
ib.watch(locals())
```
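The per-key split performed by `val_train_split__ctvusts_by_tcp__gt_1__tpl` can also be exercised in isolation. The ratio below stands in for `fidscs_globals.VALIDATION_SIZE_RATIO` (assumed to be 0.10 here), and the seeded `random.Random` is only for reproducibility of the sketch:

```python
import random

VALIDATION_SIZE_RATIO = 0.10  # stand-in for fidscs_globals.VALIDATION_SIZE_RATIO

def val_train_split(ctvusts_list, ratio=VALIDATION_SIZE_RATIO, seed=0):
    """Shuffle one key's occurrence list and carve off the validation slice.

    Small lists still contribute exactly one validation element, mirroring
    the `else 1` branch in the pipeline code above.
    """
    shuffled = ctvusts_list.copy()          # copy before shuffling, as in the pipeline
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    val_len = int(n * ratio) if n > int(((1 - ratio) * 100) / 10) else 1
    return shuffled[val_len:], shuffled[:val_len]   # (train, validation)

train, val = val_train_split([(1, 'a.mov', 0, 0), (2, 'b.mov', 3, 1), (3, 'c.mov', 2, 0)])
```

With three occurrences the list falls under the threshold, so one tuple goes to validation and two to training; a 90/10 split only becomes attainable once a key has ten or more occurrences.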
#### Show those with counts > 1
```
df_ctvusts_by_tcp__gt_1 = ib.collect(ctvusts_by_tcp__gt_1)
df_ctvusts_by_tcp__gt_1.columns = ['TokenID', 'CameraPerspective', 'ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence']
df_ctvusts_by_tcp__gt_1.set_index(['TokenID', 'CameraPerspective'], inplace=True)
df_ctvusts_by_tcp__gt_1.sort_values(axis=0, by=['ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence'], ignore_index=False, inplace=True)
df_ctvusts_by_tcp__gt_1.sort_index(inplace=True)
df_ctvusts_by_tcp__gt_1
# df_ctvusts_by_tcp__gt_1.loc[
# (
# [2369], # TokenID
# [2] # CameraPerspective
# ),
# :
# ].sort_index(ascending=[True, True])
df_ctvusts_by_tcp__gt_1__count = df_ctvusts_by_tcp__gt_1.reset_index().groupby(['TokenID', 'CameraPerspective']).count()
df_ctvusts_by_tcp__gt_1__count = df_ctvusts_by_tcp__gt_1__count[['ASLConsultantID']]
df_ctvusts_by_tcp__gt_1__count.columns = ['count']
df_ctvusts_by_tcp__gt_1__count.sort_values(axis=0, by=['count'], ascending=False, inplace=True)
# df_ctvusts_by_tcp__gt_1__count.sort_index(inplace=True)
df_ctvusts_by_tcp__gt_1__count
```
#### Now show those with counts <= 1
```
df_ctvusts_by_tcp__lte_1 = ib.collect(ctvusts_by_tcp__lte_1)
df_ctvusts_by_tcp__lte_1.columns = ['TokenID', 'CameraPerspective', 'ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence']
df_ctvusts_by_tcp__lte_1.set_index(['TokenID', 'CameraPerspective'], inplace=True)
df_ctvusts_by_tcp__lte_1.sort_values(axis=0, by=['ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence'], ignore_index=False, inplace=True)
df_ctvusts_by_tcp__lte_1.sort_index(inplace=True)
df_ctvusts_by_tcp__lte_1
df_ctvusts_by_tcp__lte_1__count = df_ctvusts_by_tcp__lte_1.reset_index().groupby(['TokenID', 'CameraPerspective']).count()
df_ctvusts_by_tcp__lte_1__count = df_ctvusts_by_tcp__lte_1__count[['ASLConsultantID']]
df_ctvusts_by_tcp__lte_1__count.columns = ['count']
df_ctvusts_by_tcp__lte_1__count.sort_values(axis=0, by=['count'], ascending=False, inplace=True)
# df_ctvusts_by_tcp__gt_1__count.sort_index(inplace=True)
df_ctvusts_by_tcp__lte_1__count
df_ctvusts_by_tcp__intersection = df_ctvusts_by_tcp__gt_1.join(df_ctvusts_by_tcp__lte_1, how='inner', lsuffix='_left', rsuffix='_right')
df_ctvusts_by_tcp__intersection
```
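The inner join above acts as a set intersection on the `(TokenID, CameraPerspective)` index: any surviving row is a token/perspective pair that appears in both the `>1` and `<=1` partitions, and a correct split should leave it empty. The same idiom on toy frames (values are illustrative):

```python
import pandas as pd

# Two partitions indexed by (TokenID, CameraPerspective); a correct split
# should leave their inner join empty.
gt_1 = pd.DataFrame(
    {"ASLConsultantID": [1, 2]},
    index=pd.MultiIndex.from_tuples([(10, 0), (11, 1)],
                                    names=["TokenID", "CameraPerspective"]),
)
lte_1 = pd.DataFrame(
    {"ASLConsultantID": [3]},
    index=pd.MultiIndex.from_tuples([(12, 0)],
                                    names=["TokenID", "CameraPerspective"]),
)

overlap = gt_1.join(lte_1, how="inner", lsuffix="_left", rsuffix="_right")
print(len(overlap))  # 0 -> the partitions are disjoint
```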
#### Now show train/validation split
```
df_train__ctvusts_by_tcp__gt_1 = ib.collect(train__ctvusts_by_tcp__gt_1)
df_train__ctvusts_by_tcp__gt_1.columns = ['TokenID', 'CameraPerspective', 'ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence']
df_train__ctvusts_by_tcp__gt_1.set_index(['TokenID', 'CameraPerspective'], inplace=True)
df_train__ctvusts_by_tcp__gt_1.sort_values(axis=0, by=['ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence'], ignore_index=False, inplace=True)
df_train__ctvusts_by_tcp__gt_1.sort_index(inplace=True)
df_train__ctvusts_by_tcp__gt_1
df_val__ctvusts_by_tcp__gt_1 = ib.collect(val__ctvusts_by_tcp__gt_1)
df_val__ctvusts_by_tcp__gt_1.columns = ['TokenID', 'CameraPerspective', 'ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence']
df_val__ctvusts_by_tcp__gt_1.set_index(['TokenID', 'CameraPerspective'], inplace=True)
df_val__ctvusts_by_tcp__gt_1.sort_values(axis=0, by=['ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence'], ignore_index=False, inplace=True)
df_val__ctvusts_by_tcp__gt_1.sort_index(inplace=True)
df_val__ctvusts_by_tcp__gt_1
df_train__ctvusts_by_tcp__gt_1.loc[
(
[2409], # TokenID
[0] # CameraPerspective
),
:
].sort_index(ascending=[True, True])
df_val__ctvusts_by_tcp__gt_1.loc[
(
[2409], # TokenID
[0] # CameraPerspective
),
:
].sort_index(ascending=[True, True])
```
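The `.loc` calls above select every row for a single `(TokenID, CameraPerspective)` pair from a MultiIndexed frame. The same lookup on a toy frame (index values are illustrative):

```python
import pandas as pd

df = pd.DataFrame(
    {"UtteranceSequence": [0, 1, 2]},
    index=pd.MultiIndex.from_tuples([(2409, 0), (2409, 0), (2410, 1)],
                                    names=["TokenID", "CameraPerspective"]),
)

# Select every row whose TokenID is 2409 and CameraPerspective is 0;
# passing a list per index level keeps the result a DataFrame.
subset = df.loc[([2409], [0]), :].sort_index(ascending=[True, True])
print(len(subset))  # 2
```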
#### View final training/validation sets (with associated frame sequences)
##### Training (sub) set (that has at least one corresponding token/camera perspective in the validation set)
```
df_train_dctvustsfs__gt__1 = ib.collect(train_dctvustsfs__gt__1)
df_train_dctvustsfs__gt__1.columns = ['TokenID', 'CameraPerspective', 'ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence', 'FrameSequence']
df_train_dctvustsfs__gt__1.set_index(['TokenID', 'CameraPerspective'], inplace=True)
df_train_dctvustsfs__gt__1.sort_values(axis=0, by=['ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence', 'FrameSequence'], ignore_index=False, inplace=True)
df_train_dctvustsfs__gt__1.sort_index(inplace=True)
df_train_dctvustsfs__gt__1
```
##### Validation set
```
df_val_dctvustsfs__gt__1 = ib.collect(val_dctvustsfs__gt__1)
df_val_dctvustsfs__gt__1.columns = ['TokenID', 'CameraPerspective', 'ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence', 'FrameSequence']
df_val_dctvustsfs__gt__1.set_index(['TokenID', 'CameraPerspective'], inplace=True)
df_val_dctvustsfs__gt__1.sort_values(axis=0, by=['ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence', 'FrameSequence'], ignore_index=False, inplace=True)
df_val_dctvustsfs__gt__1.sort_index(inplace=True)
df_val_dctvustsfs__gt__1
```
##### The complete training set (union of training subset - token/camera perspectives with corresponding validation set tuples - with training subset with no corresponding validation set tuples)
```
df_train_dctvustsfs__all = ib.collect(train_dctvustsfs__all)
df_train_dctvustsfs__all.columns = ['TokenID', 'CameraPerspective', 'ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence', 'FrameSequence']
df_train_dctvustsfs__all.set_index(['TokenID', 'CameraPerspective'], inplace=True)
df_train_dctvustsfs__all.sort_values(axis=0, by=['ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'TokenSequence', 'FrameSequence'], ignore_index=False, inplace=True)
df_train_dctvustsfs__all.sort_index(inplace=True)
df_train_dctvustsfs__all
```
##### Show (complete) utterances that can be represented by token-cameraperspective tuples from the validation set
```
df_complete_utterances__with__val_tcp__gt__1 = ib.collect(complete_utterances__with__val_tcp__gt__1)
df_complete_utterances__with__val_tcp__gt__1.columns = ['ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'CameraPerspective', 'TokenIDSequence']
df_complete_utterances__with__val_tcp__gt__1.set_index(['ASLConsultantID', 'TargetVideoFilename', 'UtteranceSequence', 'CameraPerspective'], inplace=True)
df_complete_utterances__with__val_tcp__gt__1.sort_index(inplace=True)
df_complete_utterances__with__val_tcp__gt__1
```
| github_jupyter |
```
import sys
sys.path.append('../../../')
sys.path.append('../../../examples/')
sys.path.append('../../performance_tools/')
import os
import pickle
import logging
import numpy as np
import pandas as pd
from dumb_containers import evaluate_performance
import torch
import torch.nn as nn
from torch.nn import NLLLoss
from argparse import Namespace
from tqdm import tqdm
from pytorch_pretrained_bert.modeling_fine_tune import BertForPairWiseClassification
from run_classifier_dataset_utils_fine_tune import LCQMCProcessor, compute_metrics, output_modes
from run_classifier_dataset_utils_fine_tune import convert_examples_to_features_fine_tune as convert_examples_to_features
from torch.utils.data import (DataLoader, RandomSampler, SequentialSampler,
TensorDataset)
from pytorch_pretrained_bert.tokenization import BertTokenizer
FINE_TUNED_PATH = '/efs/fine_tune/atec_ccks/pairwise/atec_ccks_fine_tune_5/'
task_name = 'atec_ccks'
output_mode = output_modes[task_name]
args = Namespace(data_dir = '/efs/projects/bert_fine_tune/fine_tune/data/train_dev_test/LCQMC/processed',
bert_model = '/efs/downloads/bert/pytorch/bert_base_chinese',
max_seq_length = 128,
local_rank = -1,
eval_batch_size = 8,
do_train = False
)
logger = logging.getLogger("ATEC_CCKS_eval")
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.DEBUG)
device = torch.device('cuda')
processor = LCQMCProcessor()
tokenizer = BertTokenizer.from_pretrained(FINE_TUNED_PATH)
model = BertForPairWiseClassification.from_pretrained(FINE_TUNED_PATH)
model.to(device)
label_list = processor.get_labels()
num_labels = len(label_list)
# eval_examples = processor.get_dev_examples(args.data_dir)
eval_examples = processor.get_test_examples(args.data_dir)
# cached_eval_features_file = os.path.join(args.data_dir, 'dev_{0}_{1}_{2}'.format(
# list(filter(None, args.bert_model.split('/'))).pop(),
# str(args.max_seq_length),
# str(task_name)))
# try:
# with open(cached_eval_features_file, "rb") as reader:
# eval_features = pickle.load(reader)
# except:
# eval_features = convert_examples_to_features(
# eval_examples, label_list, args.max_seq_length, tokenizer, output_mode)
# if args.local_rank == -1 or torch.distributed.get_rank() == 0:
# logger.info(" Saving eval features into cached file %s", cached_eval_features_file)
# with open(cached_eval_features_file, "wb") as writer:
# pickle.dump(eval_features, writer)
eval_features = convert_examples_to_features(
eval_examples, label_list, args.max_seq_length, tokenizer, output_mode)
all_input_ids_a = torch.tensor([f.input_ids_a for f in eval_features], dtype=torch.long)
all_input_mask_a = torch.tensor([f.input_mask_a for f in eval_features], dtype=torch.long)
all_segment_ids_a = torch.tensor([f.segment_ids_a for f in eval_features], dtype=torch.long)
all_input_ids_b = torch.tensor([f.input_ids_b for f in eval_features], dtype=torch.long)
all_input_mask_b = torch.tensor([f.input_mask_b for f in eval_features], dtype=torch.long)
all_segment_ids_b = torch.tensor([f.segment_ids_b for f in eval_features], dtype=torch.long)
all_label_ids = torch.tensor([f.label_id for f in eval_features], dtype=torch.long)
eval_data = TensorDataset(all_input_ids_a,
all_input_ids_b,
all_input_mask_a,
all_input_mask_b,
all_segment_ids_a,
all_segment_ids_b,
all_label_ids)
eval_sampler = SequentialSampler(eval_data)
eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=args.eval_batch_size)
model.eval()
eval_loss = 0
nb_eval_steps = 0
preds = []
out_label_ids = None
for (input_ids_a, input_ids_b,
input_mask_a, input_mask_b,
segment_ids_a, segment_ids_b,
label_ids
) in tqdm(eval_dataloader, desc="Evaluating"):
    input_ids_a = input_ids_a.to(device)
    input_mask_a = input_mask_a.to(device)
    segment_ids_a = segment_ids_a.to(device)
    input_ids_b = input_ids_b.to(device)
    input_mask_b = input_mask_b.to(device)
    segment_ids_b = segment_ids_b.to(device)
    label_ids = label_ids.to(device)
    with torch.no_grad():
        cos_sim, pos_prob, pout1, pout2 = model(input_ids_1=input_ids_a,
                                                input_ids_2=input_ids_b,
                                                token_type_ids_1=segment_ids_a,
                                                token_type_ids_2=segment_ids_b,
                                                attention_mask_1=input_mask_a,
                                                attention_mask_2=input_mask_b)
    neg_prob = 1 - pos_prob
    # if any(neg_prob) <= 0:
    #     logger.debug("invalid neg_prob")
    #     print(cos_sim, probs)
    #     print(input_ids_a)
    #     print(input_ids_b)
    #     break
    probs = torch.stack([neg_prob, pos_prob], dim=1)
    log_probs = torch.log(probs)
    loss_fct = NLLLoss()
    tmp_eval_loss = loss_fct(log_probs.view(-1, num_labels), label_ids.view(-1))
    if tmp_eval_loss.mean().item() == np.inf:
        logger.debug("invalid loss")
        print(cos_sim, probs)
        print(input_ids_a)
        print(input_ids_b)
        break
    eval_loss += tmp_eval_loss.mean().item()
    nb_eval_steps += 1
    if len(preds) == 0:
        preds.append(probs.detach().cpu().numpy())
        out_label_ids = label_ids.detach().cpu().numpy()
    else:
        preds[0] = np.append(
            preds[0], probs.detach().cpu().numpy(), axis=0)
        out_label_ids = np.append(
            out_label_ids, label_ids.detach().cpu().numpy(), axis=0)
eval_loss = eval_loss / nb_eval_steps
preds = preds[0]
probs = preds[:,1]
pred_outs = np.argmax(preds, axis=1)
result = compute_metrics(task_name, pred_outs, out_label_ids)
loss = tr_loss/global_step if args.do_train else None  # tr_loss/global_step exist only during training; do_train is False here, so this is None
result['eval_loss'] = eval_loss
# result['global_step'] = global_step
result['loss'] = loss
# output_eval_file = os.path.join(args.output_dir, "eval_results.txt")
logger.info("***** Eval results *****")
for key in sorted(result.keys()):
logger.info(" %s = %s", key, str(result[key]))
# writer.write("%s = %s\n" % (key, str(result[key])))
gt = all_label_ids.numpy()
evaluate_performance(gt, probs)
```
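Stacking `[1 - p, p]` and applying `NLLLoss` to the element-wise log is just the negative log-likelihood of the true class, i.e. binary cross-entropy on `pos_prob`. A NumPy sketch of the same arithmetic (the arrays stand in for `pos_prob` and `label_ids`):

```python
import numpy as np

pos_prob = np.array([0.9, 0.2, 0.7])   # stand-in for the model's pos_prob
labels = np.array([1, 0, 1])           # stand-in for label_ids

probs = np.stack([1.0 - pos_prob, pos_prob], axis=1)  # [neg, pos], as in the loop
log_probs = np.log(probs)

# NLLLoss picks -log_prob of the true class and averages over the batch.
nll = -log_probs[np.arange(len(labels)), labels].mean()

# Identical to binary cross-entropy computed directly from pos_prob.
bce = -np.mean(labels * np.log(pos_prob) + (1 - labels) * np.log(1.0 - pos_prob))
print(np.isclose(nll, bce))  # True
```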
| github_jupyter |
```
import numpy as np
import pandas as pd
import rdkit.Chem as Chem
from rdkit.Chem import Descriptors
import numpy as np
import sys
import rdkit.Chem.Crippen as Crippen
import rdkit.Chem.rdMolDescriptors as MolDescriptors
from rdkit.Chem import Descriptors
import matplotlib.pyplot as plt
import pickle
import logging
sys.path.insert(0,'./Modules/')
from rewards import bunch_evaluation
from rewards import get_padel, clean_folder
def get_pIC50(mols):
    folder_path = "./generated_molecules/"
    file_path = "./descriptors.csv"
    #Cleaning up the older files
    clean_folder(folder_path)
    i = 0
    for mol in mols:
        print(Chem.MolToMolBlock(mol),file=open(str(folder_path)+str(i)+'.mol','w'))
        i += 1
    get_padel(folder_path,file_path)
    #Reading the descriptors
    X = pd.read_csv(file_path)
    #Filling Null Values
    X.fillna(value=0,inplace=True)
    X.Name = pd.to_numeric(X.Name, errors='coerce')
    X.sort_values(by='Name',inplace=True)
    X.to_csv('./try.csv',index=False)
    #Removing the columns with zero variance in original data
    with open('./saved_models/drop.txt','rb') as fp:
        bad_cols = pickle.load(fp)
    X_step1 = X.drop(columns=bad_cols,inplace=False)
    X_step2 = X_step1
    #Doing StandardScaler() as applied to original data
    with open('./saved_models/new_scaler.pkl','rb') as fp:
        scaler = pickle.load(fp)
    X2 = scaler.transform(X_step2.astype('float64'))
    X_step3 = pd.DataFrame(data=X2,columns=X_step2.columns)
    #X.head()
    #Dropping columns with low correlation with pIC50
    # =============================================================================
    # X.to_csv('./X.csv',index=False)
    # X_step1.to_csv('./X_step1.csv')
    # X_step2.to_csv('./X_step2.csv')
    # X_step3.to_csv('./X_step3.csv')
    # =============================================================================
    #Using the Random forest Predictor
    with open('./saved_models/new_RFR.pkl','rb') as fp:
        pp = pickle.load(fp)
    predictions = pp.predict(X_step3)
    print('Properties predicted for {} molecules'.format(len(predictions)))
    return predictions
df = pd.read_excel('./Generated molecules/AKT_trial.xlsx')
df.head()
img = Chem.MolFromSmiles(df.iloc[1,0])  # `ch` is not imported yet at this point; use the `Chem` alias
mol2 = Chem.MolFromSmiles(df.iloc[1,1])
img
mol2
mol = []
for smile in df['Initial']:
    mol.append(Chem.MolFromSmiles(smile))
logP = []
mw = []
tpsa = []
ss = []
for molecule in mol:
    logP.append(Crippen.MolLogP(molecule))
    mw.append(MolDescriptors.CalcExactMolWt(molecule))
    tpsa.append(Descriptors.TPSA(molecule))
df['logP'] = np.asarray(logP)
df['MW'] = np.asarray(mw)
df['tPSA'] = np.asarray(tpsa)
df['SweetSpot'] = ((3<df['logP']) & (df['logP']<5)& (320<df['MW']) & (df['MW']<420) & (80<df['tPSA']) & (df['tPSA']<110))
df['SweetSpot'].describe()
mol = []
for smile in df[' Modified']:  # note the leading space in the column name, as in the source sheet
    mol.append(Chem.MolFromSmiles(smile))
logP = []
mw = []
tpsa = []
ss = []
for molecule in mol:
    logP.append(Crippen.MolLogP(molecule))
    mw.append(MolDescriptors.CalcExactMolWt(molecule))
    tpsa.append(Descriptors.TPSA(molecule))
df['logP'] = np.asarray(logP)
df['MW'] = np.asarray(mw)
df['tPSA'] = np.asarray(tpsa)
df['SweetSpot'] = ((3<df['logP']) & (df['logP']<5)& (320<df['MW']) & (df['MW']<420) & (80<df['tPSA']) & (df['tPSA']<110))
df['SweetSpot'].describe()
new_df = df.loc[df.SweetSpot==True]
mol1 = Chem.MolFromSmiles(new_df.iloc[1,0])
mol2 = Chem.MolFromSmiles(new_df.iloc[1,1])
mol1
mol2
import sys
sys.path.insert(0, './Modules/')
import rewards
import pandas as pd
import rdkit.Chem as ch
df = pd.read_csv('./out153.csv',engine="python")
mol = ch.MolFromSmiles(df.iloc[0,0])
mol
from rdkit.Chem import Draw
ch.Draw.MolsToImage([mol], subImgSize=(200, 200))
df.head()
from rewards import bunch_evaluation
moli = []
molm = []
for i in range(len(df)):
    moli.append(ch.MolFromSmiles(df.iloc[i,0]))
    molm.append(ch.MolFromSmiles(df.iloc[i,1]))
ini = bunch_evaluation(moli)
mod = bunch_evaluation(molm)
ini = np.asarray(ini)
mod = np.asarray(mod)
#remember to change pIC50 conversion depending on the reward function
changes = pd.DataFrame(data=np.transpose(np.asarray([(mod[:,1]*3+7),(ini[:,1]*3 +7)])),columns=['Modified','Initial'])
changes.head()
changes['Delta'] = changes['Modified'] - changes['Initial']
changes.sort_values(by='Delta',ascending=False,inplace=True)
inact_to_act = changes.loc[(changes['Modified']>7) & (changes['Initial']<7),['Modified','Initial','Delta']].sort_values(by='Delta',ascending=False)
changes.head(10)
changes.to_csv('./out153_pIC.csv',index=False)
inact_to_act.to_csv('./act_pIC153.csv',index=False)
import matplotlib.pyplot as plt
x = [4,5,6,7]
y = x
plt.xlabel('pIC50 initial')
plt.ylabel('pIC50 final')
plt.plot(changes['Initial'].iloc[0:10],changes['Modified'].iloc[0:10],x,y)
moli[207]
molm[207]
df = pd.read_csv('./Data/AKT_pchembl.csv')
mol = []
i = 1
for smile in df['Smiles']:
    if i>=6:
        break
    mol.append(ch.MolFromSmiles(smile))
    i += 1
changes.loc[changes['Delta']<0].sum()
changes.loc[changes['Delta']>0].sum()
```
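The `SweetSpot` column above is a conjunction of three property windows (3 < logP < 5, 320 < MW < 420, 80 < tPSA < 110). The same boolean-mask idiom on toy values:

```python
import pandas as pd

df = pd.DataFrame({
    "logP": [4.1, 2.0, 4.5],
    "MW":   [380.0, 350.0, 500.0],
    "tPSA": [95.0, 90.0, 100.0],
})

# A molecule is in the "sweet spot" only if all three property windows hold.
df["SweetSpot"] = ((3 < df["logP"]) & (df["logP"] < 5) &
                   (320 < df["MW"]) & (df["MW"] < 420) &
                   (80 < df["tPSA"]) & (df["tPSA"] < 110))
print(df["SweetSpot"].tolist())  # [True, False, False]
```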
<h2>Exponential run</h2>
```
history = np.load('./history/history.npy')
history.shape
np.argmax(history[:,2])
history[292,2]
np.argmax(history[:,0])
history[93,0]
import rewards
import pandas as pd
import rdkit.Chem as ch
df = pd.read_csv('./out292.csv',engine="python")
df.head()
from rewards import bunch_evaluation
moli = []
molm = []
for i in range(len(df)):
    moli.append(ch.MolFromSmiles(df.iloc[i,0]))
    molm.append(ch.MolFromSmiles(df.iloc[i,1]))
ini = bunch_evaluation(moli)
mod = bunch_evaluation(molm)
ini = np.asarray(ini)
mod = np.asarray(mod)
#remember to change pIC50 conversion depending on the reward function
import math
changes = pd.DataFrame(data=np.transpose(np.asarray([np.log(mod[:,1]*(math.exp(3))),np.log(ini[:,1]*math.exp(3))])),columns=['Modified','Initial'])
changes['Modified'] += 7
changes['Initial'] += 7
changes.head()
changes['Delta'] = changes['Modified'] - changes['Initial']
changes.sort_values(by='Delta',ascending=False,inplace=True)
inact_to_act = changes.loc[(changes['Modified']>7) & (changes['Initial']<7),['Modified','Initial','Delta']].sort_values(by='Delta',ascending=False)
changes.head(10)
changes.to_csv('./out292_pIC.csv',index=False)
inact_to_act.to_csv('./act_pIC292.csv',index=False)
inact_to_act.head()
import matplotlib.pyplot as plt
x = [4,5,6,7]
y = x
plt.xlabel('pIC50 initial')
plt.ylabel('pIC50 final')
plt.plot(changes['Initial'].iloc[0:10],changes['Modified'].iloc[0:10],x,y)
new = changes.sort_values(by='Delta',ascending=True,inplace=False)
new.head()
changes.loc[changes['Delta']<0].count()
changes.loc[changes['Delta']>0].count()
```
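The exponential conversion above recovers pIC50 from the scaled reward r via pIC50 = ln(r·e³) + 7 = ln r + 10, which implies the forward mapping was r = e^(pIC50 − 10). A quick round-trip check (the forward mapping is an assumption inferred from the inverse):

```python
import math

def reward_to_pic50(r):
    # Inverse mapping, exactly as used in the cell above.
    return math.log(r * math.exp(3)) + 7

def pic50_to_reward(pic50):
    # Implied forward mapping (an assumption, derived by inverting the above).
    return math.exp(pic50 - 10)

for pic50 in (4.0, 7.0, 9.5):
    assert abs(reward_to_pic50(pic50_to_reward(pic50)) - pic50) < 1e-9
print("round trip ok")
```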
<h3>Final epoch results</h3>
```
import rewards
import pandas as pd
import rdkit.Chem as ch
df = pd.read_csv('./past outputs/out299.csv',engine="python")
df.head()
moli = []
molm = []
for i in range(len(df)):
    moli.append(ch.MolFromSmiles(df.iloc[i,0]))
    molm.append(ch.MolFromSmiles(df.iloc[i,1]))
ini = bunch_evaluation(moli)
mod = bunch_evaluation(molm)
import math
ini = np.asarray(ini)
new_ini = np.log(ini[:,1]*math.exp(3)) + 7
mod = np.asarray(mod)
new_mod = np.log(mod[:,1]*math.exp(3)) + 7
#remember to change pIC50 conversion depending on the reward function
changes = pd.DataFrame(data=np.transpose(np.asarray([new_mod,new_ini])),columns=['Modified','Initial'])
changes.head()
changes['Delta'] = changes['Modified'] - changes['Initial']
changes.sort_values(by='Delta',ascending=False,inplace=True)
inact_to_act = changes.loc[(changes['Modified']>7) & (changes['Initial']<7),['Modified','Initial','Delta']].sort_values(by='Delta',ascending=False)
changes.head(10)
changes.to_csv('./out299_pIC.csv',index=False)
inact_to_act.to_csv('./act_pIC299.csv',index=False)
inact_to_act.head()
import matplotlib.pyplot as plt
x = [4,5,6,7]
y = x
plt.xlabel('pIC50 initial')
plt.ylabel('pIC50 final')
plt.plot(changes['Initial'].iloc[0:10],changes['Modified'].iloc[0:10],x,y)
plt.hist(changes['Delta'])
changes.loc[changes['Delta']<0].count()
changes.loc[changes['Delta']>0].count()
```
<h1>pIC50 &gt; 8</h1>
```
r_tot = np.load('r_tot.npy')
plt.scatter(range(len(r_tot)),r_tot)
r_tot2 = []
for val in r_tot:
    if val<10:
        r_tot2.append(val)
r_tot2 = np.asarray(r_tot2)
plt.scatter(range(len(r_tot2)),abs(r_tot2))
print(r_tot2.mean())
df = pd.read_csv('./past outputs/out159.csv',engine="python")
from rewards import bunch_evaluation
moli = []
molm = []
for i in range(len(df)):
    moli.append(ch.MolFromSmiles(df.iloc[i,0]))
    molm.append(ch.MolFromSmiles(df.iloc[i,1]))
ini = get_pIC50(moli)
ini = np.asarray(ini)
mod = get_pIC50(molm)
mod = np.asarray(mod)
mod
changes = pd.DataFrame(data=np.transpose(np.asarray([mod,ini])),columns=['Modified','Initial'])
changes.head()
changes['Delta'] = changes['Modified'] - changes['Initial']
changes.sort_values(by='Delta',ascending=False,inplace=True)
inact_to_act = changes.loc[(changes['Modified']>7) & (changes['Initial']<7),['Modified','Initial','Delta']].sort_values(by='Delta',ascending=False)
changes.head(10)
changes.to_csv('./out159_pIC.csv',index=False)
inact_to_act.to_csv('./act_pIC159.csv',index=False)
inact_to_act.head()
changes.loc[changes['Delta']<0].sum()
changes.loc[changes['Delta']>0].sum()
changes['Delta'].sum()
img = (ch.MolFromSmiles(df.iloc[1,0]))
mol2 = ch.MolFromSmiles(df.iloc[1,1])
img
mol2
losses = np.load("./Losses/Loss in epoch 159.npy")
loss = []
for i in range(160):
    losses = np.load("./Losses/Loss in epoch {}.npy".format(i))
    loss.append(losses)
loss = np.asarray(loss)
```
<h1>29-06-20</h1>
```
history = np.load('./history/history.npy')
ind = []
for i in range(len(history)):
    if history[i,2] > 0.015:
        ind.append(i)
ind
```
<h2>Epoch 85 results</h2>
```
df = pd.read_csv('./past outputs/out85.csv',engine="python")
df.head()
from rdkit import Chem as ch
moli = []
molm = []
for i in range(len(df)):
    moli.append(ch.MolFromSmiles(df.iloc[i,0]))
    molm.append(ch.MolFromSmiles(df.iloc[i,1]))
ini = get_pIC50(moli)
del molm[140]
print(len(molm))
mod = get_pIC50(molm)
ini = np.delete(ini,140)
len(ini)
mod.shape
ini = np.asarray(ini)
mod = np.asarray(mod)
changes = pd.DataFrame(np.transpose(np.asarray([ini,mod])),columns=['Initial','Modified'])  # data order is [ini, mod], so 'Initial' must come first
changes['Delta'] = changes['Modified'] - changes['Initial']
changes.sort_values(by='Delta',ascending=False,inplace=True)
changes.head()
molm[0]
inact_to_act = changes.loc[(changes['Modified']>7) & (changes['Initial']<7),['Modified','Initial','Delta']].sort_values(by='Delta',ascending=False)
changes.to_csv('./past outputs/out85_pIC.csv',index=False)
inact_to_act.to_csv('./past outputs/act_pIC85.csv',index=False)
changes.head(10)
inact_to_act.head()
import matplotlib.pyplot as plt
changes = pd.read_csv('./past outputs/out85_pIC.csv')
inact_to_act=pd.read_csv('./past outputs/act_pIC85.csv')
bins = np.linspace(4,10,14)
#changes = changes.loc[changes.Delta>0]
plt.hist(changes['Initial'], bins, alpha=0.5, label='initial',color='blue')
plt.hist(changes['Modified'], bins, alpha=0.5, label='modified',color='green')
plt.legend(loc='upper right')
changes.loc[changes['Delta']<0].sum()['Delta']
changes.loc[changes['Delta']>0].sum()
for i in range(len(df)):
    if Chem.MolFromSmiles(df.iloc[i,1]) is not None:  # the old type()-vs-string comparison was always True
        if i==140:
            print(type(Chem.MolFromSmiles(df.iloc[i,1])))
        moli.append(Chem.MolFromSmiles(df.iloc[i,0]))
        molm.append(Chem.MolFromSmiles(df.iloc[i,1]))
type(molm[140])
```
<h2>Epoch 295</h2>
```
df = pd.read_csv('./past outputs/out295.csv',engine="python")
df.head()
from rdkit import Chem as ch
moli = []
molm = []
for i in range(len(df)):
    if (Chem.MolFromSmiles(df.iloc[i,1])) is not None:
        moli.append(Chem.MolFromSmiles(df.iloc[i,0]))
        molm.append(Chem.MolFromSmiles(df.iloc[i,1]))
print(len(molm))
ini = get_pIC50(moli)
for i in range(len(molm)):
    if molm[i] is None:
        print(i)
print(len(molm))
mod = get_pIC50(molm)
ini = np.delete(ini,140)  # leftover from the epoch-85 cell; only valid if entry 140 was also dropped from molm
len(ini)
mod.shape
ini = np.asarray(ini)
mod = np.asarray(mod)
changes = pd.DataFrame(np.transpose(np.asarray([ini,mod])),columns=['Initial','Modified'])  # data order is [ini, mod], so 'Initial' must come first
changes['Delta'] = changes['Modified'] - changes['Initial']
changes.sort_values(by='Delta',ascending=False,inplace=True)
changes.head()
molm[0]
#inact_to_act = changes.loc[(changes['Modified']>7) & (changes['Initial']<7),['Modified','Initial','Delta']].sort_values(by='Delta',ascending=False)
changes = pd.read_csv('./past outputs/out_pIC295.csv')
inact_to_act= pd.read_csv('./past outputs/act_pIC295.csv')
changes.head(10)
inact_to_act.head()
changes = pd.read_csv('./past outputs/29Jun/out_pIC295.csv')
inact_to_act=pd.read_csv('./past outputs/29Jun/act_pIC295.csv')
[changes.loc[changes['Initial']>8]['Initial'].count(),changes.loc[changes['Modified']>8]['Modified'].count()]
plt.bar(["Initial","Modified"],[changes.loc[changes['Initial']>8]['Initial'].count(),changes.loc[changes['Modified']>8]['Modified'].count()])
import matplotlib.pyplot as plt
bins = np.linspace(4,10,14)
#changes = changes.loc[changes.Delta>0]
plt.hist(changes['Initial'], bins, alpha=0.5, label='initial',color='blue')
plt.hist(changes['Modified'], bins, alpha=0.5, label='modified',color='green')
plt.legend(loc='upper right')
changes.loc[changes['Delta']<0].sum()['Delta']
```
<h1>What is r_tot?</h1>
```
import numpy as np
hist = np.load('./History/history.npy')
hist.shape
plt.figure(figsize=(5,5))
plt.plot(range(1,301),hist[:,1])
plt.figure(figsize=(10,10))
plt.plot(range(1,301),hist[:,2])
df = pd.DataFrame(data=hist,columns=["mean_r_tot","valid_frac","good_pIC","mean_score"])
df.mean_score.unique()
len(df.good_pIC.unique())
len(df.valid_frac.unique())
```
Note that the fraction of all-valid molecules equals the fraction with pIC50 > 8.
<h1>Estimating the number of NaN values</h1>
```
import sys
sys.path.insert(0,'./Modules/')
from rewards import get_padel
get_padel(r'C:\Users\HP\AZC_Internship\DeepFMPO\3.6\generated_molecules','./chk_desc.csv','1500')
df = pd.read_csv('./chk_desc.csv')
unavlb = []
for col in df.columns:
    if df[col].isna().any() == True:
        print(df[col].isna().describe())
        unavlb.append(col)
```
Up to the 1500 ms timeout, none of these columns appear in RFECV1200SKB2 => <b>the null values do not affect the model</b>
```
x = pd.read_csv(r'C:\Users\HP\AZC_Internship\jupyter notebooks\data\RFECV1200SKB2.csv')
needed_cols = x.columns  # must be defined before the membership check below
chk = True
for col in unavlb:
    if col in needed_cols:
        print("Losing")
        chk = False
        break
if chk:
    print("No problem (y)")
df.head(10)
r= np.load('rewards.npy')
r.shape
df.isna().describe()
r
df = pd.read_csv('./past outputs/out'+str(249)+'.csv',sep=";")
df.head()
moli = []
molm = []
for i in range(len(df)):
    if (Chem.MolFromSmiles(df.iloc[i,1])) is not None:
        moli.append(Chem.MolFromSmiles(df.iloc[i,0]))
        molm.append(Chem.MolFromSmiles(df.iloc[i,1]))
moli[0]
molm[0]
import xgboost as xgb  # used below by the XGBRegressor predictor

def get_pIC50(mols):
    folder_path = "./generated_molecules/"
    file_path = "./descriptors.csv"
    #Cleaning up the older files
    clean_folder(folder_path)
    i = 0
    for mol in mols:
        print(Chem.MolToMolBlock(mol),file=open(str(folder_path)+str(i)+'.mol','w'))
        i += 1
    get_padel(folder_path,file_path)
    #Reading the descriptors
    xg_all = pd.read_csv(file_path)
    names = xg_all['Name']
    bad = []
    with open('./saved_models/good_columns','rb') as f:
        cols = pickle.load(f)
    for col in xg_all.columns:
        if col not in cols:
            bad.append(col)
    xg_all.drop(columns=bad,inplace=True)
    #Verifying that all the required columns are there
    assert len(xg_all.columns) == len(cols)
    xg_all['Name'] = names
    files = xg_all[pd.isnull(xg_all).any(axis=1)]['Name']
    xg_all.dropna(inplace=True)
    mol = []
    if len(files) != 0:
        uneval_folder = "C:\\Users\\HP\\AZC_Internship\\DeepFMPO\\3.6\\unevalmol\\"
        clean_folder(uneval_folder)
        for f in files:
            m = Chem.MolFromMolFile(folder_path+str(f)+'.mol')
            print(Chem.MolToMolBlock((m)),file=open(str(uneval_folder)+str(f)+'.mol','w'))
        get_padel(uneval_folder,'./uneval_desc.csv','-1')
        unevalmol = pd.read_csv('./uneval_desc.csv')
        unevalmol.drop(columns=bad,inplace=True)
        print(unevalmol.isna().sum(axis=1))
        xg_all = pd.concat([xg_all,unevalmol])
    xg_all.to_csv('./xgall.csv')
    xg_all.fillna(value=0,inplace=True)
    regressor = xgb.XGBRegressor()
    regressor.load_model('./saved_models/best_from_gs38.model')
    xg_all.sort_values(by='Name',inplace=True)
    xg_all.drop(columns='Name',inplace=True)
    predictions = regressor.predict(xg_all)
    print('Properties predicted for {} molecules'.format(len(predictions)))
    return predictions
df = pd.read_csv(r'C:\Users\HP\AZC_Internship\DeepFMPO\3.6\past outputs\out1250.csv',sep=';')
moli = []
molm = []
for i in range(len(df)):
    moli.append(Chem.MolFromSmiles(df.iloc[i,0]))
    molm.append(Chem.MolFromSmiles(df.iloc[i,1]))
moli[0]
molm[0]
df = pd.read_csv('./Data/AKT_pChemBL.csv')
df = df.loc[df['pChEMBL_Value']<7]
df_new = df.sample(frac=400/len(df))
df_new.to_csv('./Data/AKT_pChemBL_cleaned_good.csv',index=False)
from build_encoding import read_decodings, read_encodings
encodings = read_encodings()
decodings = read_decodings()
import matplotlib.image as mpimg
from rdkit import Chem
from rdkit.Chem import rdBase
from rdkit.Chem import Draw
from rdkit.Chem.Draw import IPythonConsole
import matplotlib.pyplot as plt
i = 0
plt.figure(figsize=(15,5))
for code,mol in decodings.items():
    mols = []
    if i==5:
        break
    mols.append(mol)
    img = Draw.MolsToGridImage(mols)
    plt.subplot(2,3,i+1)
    plt.axis('off')
    plt.title(code)
    i += 1
    #img = mpimg.imread('file-name.png')
    plt.imshow(img)
```
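`get_pIC50` above handles descriptor timeouts by splitting off the rows with any NaN, recomputing them without a timeout, and concatenating the results back. The row-splitting idiom in isolation, on a toy descriptor table (column names are illustrative):

```python
import numpy as np
import pandas as pd

desc = pd.DataFrame({
    "Name": [0, 1, 2],
    "d1":   [1.0, np.nan, 3.0],
    "d2":   [0.5, 2.0, np.nan],
})

# Names of molecules whose descriptor row contains any NaN (to re-run PaDEL on).
failed = desc[pd.isnull(desc).any(axis=1)]["Name"]
# The rows that computed cleanly.
clean = desc.dropna()

print(sorted(failed.tolist()))  # [1, 2]
print(clean["Name"].tolist())   # [0]
```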
<h1>An update to view_outputs.py</h1>
```
import sys
sys.path.insert(0,'./Modules/')
import numpy as np
from file_reader import read_file
import pandas as pd
from rdkit import Chem
from mol_utils import get_fragments
import numpy as np
import sys
import matplotlib.pyplot as plt
import pickle
import argparse
import xgboost as xgb
import Show_Epoch
import logging
from keras.utils.generic_utils import get_custom_objects
import keras
sys.path.insert(0,'./Modules/')
from models import maximization
from rewards import get_padel, clean_folder, modify_fragment
from build_encoding import get_encodings, encode_molecule, decode_molecule, encode_list, save_decodings, save_encodings, read_decodings, read_encodings
from global_parameters import MAX_SWAP, MAX_FRAGMENTS, GAMMA, BATCH_SIZE, EPOCHS, TIMES, FEATURES
# Similar to bunch_evaluation, except that it is without the rewards
def get_pIC50(mols):
    folder_path = "./generated_molecules/"
    file_path = "./descriptors.csv"
    #Cleaning up the older files
    clean_folder(folder_path)
    i = 0
    for mol in mols:
        print(Chem.MolToMolBlock(mol),file=open(str(folder_path)+str(i)+'.mol','w'))
        i += 1
    get_padel(folder_path,file_path,'-1')
    #Reading the descriptors
    xg_all = pd.read_csv(file_path)
    names = xg_all['Name']
    bad = []
    with open('./saved_models/good_columns','rb') as f:
        cols = pickle.load(f)
    for col in xg_all.columns:
        if col not in cols:
            bad.append(col)
    xg_all.drop(columns=bad,inplace=True)
    #Verifying that all the required columns are there
    assert len(xg_all.columns) == len(cols)
    xg_all['Name'] = names
    files = xg_all[pd.isnull(xg_all).any(axis=1)]['Name']
    xg_all.dropna(inplace=True)
    mol = []
    if len(files) != 0:
        uneval_folder = "C:\\Users\\HP\\AZC_Internship\\DeepFMPO\\3.6\\unevalmol\\"
        clean_folder(uneval_folder)
        for f in files:
            m = Chem.MolFromMolFile(folder_path+str(f)+'.mol')
            print(Chem.MolToMolBlock((m)),file=open(str(uneval_folder)+str(f)+'.mol','w'))
        get_padel(uneval_folder,'./uneval_desc.csv','-1')
        unevalmol = pd.read_csv('./uneval_desc.csv')
        unevalmol.drop(columns=bad,inplace=True)
        print(unevalmol.isna().sum(axis=1))
        xg_all = pd.concat([xg_all,unevalmol])
    xg_all.to_csv('./xgall.csv')
    xg_all.fillna(value=0,inplace=True)
    regressor = xgb.XGBRegressor()
    regressor.load_model('./saved_models/best_from_gs38.model')
    xg_all.sort_values(by='Name',inplace=True)
    xg_all.drop(columns='Name',inplace=True)
    predictions = regressor.predict(xg_all)
    print('Properties predicted for {} molecules'.format(len(predictions)))
    return predictions
df = pd.read_csv(r'C:\Users\HP\AZC_Internship\DeepFMPO\3.6\past outputs\7July\clean_good_manual\out299.csv',sep=";")
moli = []
molm = []
for i in range(len(df)):
    if (Chem.MolFromSmiles(df.iloc[i,1])) is not None:
        moli.append(Chem.MolFromSmiles(df.iloc[i,0]))
        molm.append(Chem.MolFromSmiles(df.iloc[i,1]))
logging.info("Predicting pIC50 values of the initial molecules")
ini = get_pIC50(moli)
logging.info("Predicting pIC50 values of the predicted molecules")
mod = get_pIC50(molm)
ini = np.asarray(ini)
mod = np.asarray(mod)
df.iloc[0]
np.asarray([df.iloc[:,0],df.iloc[:,1],np.transpose([ini,mod])])
changes = pd.DataFrame(np.transpose(np.asarray([ini,mod])),columns=['Initial_pIC','Modified_pIC'])
changes['Initial_mol'] = df.iloc[:,0]
changes['Modified_mol'] = df.iloc[:,1]
changes['Delta'] = changes['Modified_pIC'] - changes['Initial_pIC']
changes.sort_values(by='Delta',ascending=False,inplace=True)
changes.head()
inact_to_act = changes.loc[(changes['Modified_pIC']>7) & (changes['Initial_pIC']<7),['Modified_pIC','Initial_pIC','Delta']].sort_values(by='Delta',ascending=False)
changes.to_csv('./past outputs/out_pIC299'+'.csv',index=False)
inact_to_act.to_csv('./past outputs/act_pIC299'+'.csv',index=False)
print(inact_to_act.head())
print(changes.head())
bins = np.linspace(4,10,14)
#changes = changes.loc[changes.Delta>0]
plt.hist(changes['Initial_pIC'], bins, alpha=0.5, label='initial',color='blue')
plt.hist(changes['Modified_pIC'], bins, alpha=0.5, label='modified',color='green')
plt.legend(loc='upper right')
plt.show()
sp = changes.loc[changes['Delta']>0].sum()['Delta']
sn = changes.loc[changes['Delta']<0].sum()['Delta']
cp = changes.loc[changes['Delta']>0].count()['Delta']
cn = changes.loc[changes['Delta']<0].count()['Delta']
print('Sum of positive changes = {}\tNo. of +ves = {}\nSum of negative changes = {}\tNo. of -ves = {}'.format(sp,cp,sn,cn))
df = pd.read_csv(r'C:\Users\HP\AZC_Internship\DeepFMPO\3.6\past outputs\out_pIC128.csv')
df.head()
df2=df[['Initial_pIC','Modified_pIC','Delta']].head()
import math
x=np.arange(5,8,0.5)
y=x
plt.xlabel('Initial value')
plt.ylabel('Modified value')
plt.scatter(df.iloc[:10,0],df.iloc[:10,1])
plt.plot(x,y)
moli = []
molm = []
for i in range(5):
    moli.append(Chem.MolFromSmiles(changes.iloc[i, 2]))
    molm.append(Chem.MolFromSmiles(changes.iloc[i, 3]))
from rdkit.Chem import Draw
plt.axis('off')
plt.figure(figsize=(10,10))
size = (100,100)
for i in range(5):
    plt.axis('off')
    plt.subplot(5, 2, 2*i + 1)
    Draw.MolToMPL(moli[i], size=size)
    plt.show()
    plt.axis('off')
    plt.subplot(5, 2, 2*i + 2)
    Draw.MolToMPL(molm[i], size=size)
    plt.show()
#img = mpimg.imread('file-name.png')
plt.subplot(5,2,1)
img = Draw.MolsToGridImage(moli)
plt.imshow(img)
moli[0]
molm[0]
moli[2]
molm[2]
changes[['Initial_pIC','Modified_pIC','Delta']].head()
bins = np.linspace(4,10,14)
#changes = changes.loc[changes.Delta>0]
changes = pd.read_csv('./past outputs/out_pIC299'+'.csv')
plt.figure(figsize=(6,7.5))
plt.title('pIC50 Distribution')
plt.xlabel('pIC50 value')
plt.ylabel('Count')
plt.hist(changes['Initial_pIC'], bins, alpha=0.5, label='initial',color='blue')
plt.hist(changes['Modified_pIC'], bins, alpha=0.5, label='modified',color='green')
plt.legend(loc='upper right')
plt.show()
sp = changes.loc[changes['Delta']>0].sum()['Delta']
sn = changes.loc[changes['Delta']<0].sum()['Delta']
cp = changes.loc[changes['Delta']>0].count()['Delta']
cn = changes.loc[changes['Delta']<0].count()['Delta']
print('Sum of positive changes = {}\tNo. of +ves = {}\nSum of negative changes = {}\tNo. of -ves = {}'.format(sp,cp,sn,cn))
changes.head(10)
moli = []
molm = []
for i in range(5):
    moli.append(Chem.MolFromSmiles(changes.iloc[i, 2]))
    molm.append(Chem.MolFromSmiles(changes.iloc[i, 3]))
moli[4]
molm[4]
changes = pd.read_csv('./past outputs/out_pIC1000.csv')
changes.head()
from rdkit.Chem import Draw
# Interleave initial and modified molecules so each grid row shows the pair side by side.
mols = []
for i in range(5):
    mols.append(Chem.MolFromSmiles(changes.iloc[i, 2]))
    mols.append(Chem.MolFromSmiles(changes.iloc[i, 3]))
plot = Draw.MolsToGridImage(mols, molsPerRow=2)
plot.show()
```
| github_jupyter |
Loops, Iteration Schemas and Input
===
While loops are really useful because they let your program run until a user decides to quit the program. They set up an infinite loop that runs until the user does something to end the loop. This section also introduces the first way to get input from your program's users.
<a name="top"></a>Contents
===
- [What is a `while` loop?](#what)
- [General syntax](#general_syntax)
- [Example](#example)
- [Exercises](#exercises_while)
- [Accepting user input](#input)
- [General syntax](#general_user_input)
- [Example](#example_user_input)
- [Exercises](#exercises_input)
- [Using while loops to keep your programs running](#keep_running)
- [Exercises](#exercises_running_input)
- [Using while loops to make menus](#menus)
- [Using while loops to process items in a list](#process_list)
- [Accidental Infinite loops](#infinite_loops)
- [Exercises](#exercises_infinite_loops)
- [Overall Challenges](#overall_challenges)
## The FOR (iteration) loop
The `for` loop statement is the most widely used iteration mechanism in Python.
* Almost every structure in Python can be iterated (*element by element*) by a `for` loop
    - a list, a tuple, a dictionary, $\ldots$ (more details will follow)
* In Python, `while` loops are also available, but `for` is the one you will see (and use) most of the time!
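Here is a quick illustrative example showing a `for` loop iterating over a list and over a range of numbers:

```
# Iterate over a list, element by element.
fruits = ['apple', 'banana', 'cherry']
for fruit in fruits:
    print(fruit)

# range() produces a sequence of numbers to iterate over.
for number in range(3):
    print(number)
```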
<a name='what'></a>What is a while loop?
===
A while loop tests an initial condition. If that condition is true, the loop starts executing. Every time the loop finishes, the condition is reevaluated. As long as the condition remains true, the loop keeps executing. As soon as the condition becomes false, the loop stops executing.
<a name='general_syntax'></a>General syntax
---
```
# Set an initial condition.
game_active = True
# Set up the while loop.
while game_active:
    # Run the game.
    # At some point, the game ends and game_active will be set to False.
    # When that happens, the loop will stop executing.

# Do anything else you want done after the loop runs.
```
- Every while loop needs an initial condition that starts out true.
- The `while` statement includes a condition to test.
- All of the code in the loop will run as long as the condition remains true.
- As soon as something in the loop changes the condition such that the test no longer passes, the loop stops executing.
- Any code that is defined after the loop will run at this point.
<a name='example'></a>Example
---
Here is a simple example, showing how a game will stay active as long as the player has enough power.
```
# The player's power starts out at 5.
power = 5
# The player is allowed to keep playing as long as their power is over 0.
while power > 0:
    print("You are still playing, because your power is %d." % power)
    # Your game code would go here, which includes challenges that make it
    # possible to lose power.
    # We can represent that by just taking away from the power.
    power = power - 1
print("\nOh no, your power dropped to 0! Game Over.")
```
[top](#top)
<a name='exercises_while'></a>Exercises
---
#### Ex 5.1: Growing Strength
- Make a variable called strength, and set its initial value to 5.
- Print a message reporting the player's strength.
- Set up a while loop that runs until the player's strength increases to a value such as 10.
- Inside the while loop, print a message that reports the player's current strength.
- Inside the while loop, write a statement that increases the player's strength.
- Outside the while loop, print a message reporting that the player has grown too strong, and that they have moved up to a new level of the game.
- Bonus: Play around with different cutoff levels for the value of *strength*, and play around with different ways to increase the strength value within the while loop.
```
# Ex 5.1 : Growing Strength
# put your code here
```
[top](#top)
<a name='input'></a>Accepting user input
===
Almost all interesting programs accept input from the user at some point. You can start accepting user input in your programs by using the `input()` function. The input function displays a message to the user describing the kind of input you are looking for, and then it waits for the user to enter a value. When the user presses Enter, the value is assigned to your variable.
<a name='general_user_input'></a>General syntax
---
The general case for accepting input looks something like this:
```
# Get some input from the user.
variable = input('Please enter a value: ')
# Do something with the value that was entered.
```
You need a variable that will hold whatever value the user enters, and you need a message that will be displayed to the user.
<a name='example_user_input'></a>Example
---
In the following example, we have a list of names. We ask the user for a name, and we add it to our list of names.
```
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
```
<a name='exercises_input'></a>Exercises
---
#### Ex 5.2: Game Preferences
- Make a list that includes 3 or 4 games that you like to play.
- Print a statement that tells the user what games you like.
- Ask the user to tell you a game they like, and store the game in a variable such as `new_game`.
- Add the user's game to your list.
- Print a new statement that lists all of the games that we like to play (*we* means you and your user).
```
# Ex 5.2 : Game Preferences
# put your code here
```
[top](#top)
<a name='keep_running'></a>Using while loops to keep your programs running
===
Most of the programs we use every day run until we tell them to quit, and in the background this is often done with a while loop.
Here is an example of how to let the user enter an arbitrary number of names.
```
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
    # Ask the user for a name.
    new_name = input("Please tell me someone I should know, or enter 'quit': ")
    # Add the new name to our list.
    names.append(new_name)
# Show that the name has been added to the list.
print(names)
```
That worked, except we ended up with the name 'quit' in our list. We can use a simple `if` test to eliminate this bug:
```
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
    # Ask the user for a name.
    new_name = input("Please tell me someone I should know, or enter 'quit': ")
    # Add the new name to our list.
    if new_name != 'quit':
        names.append(new_name)
# Show that the name has been added to the list.
print(names)
```
This is pretty cool! We now have a way to accept input from users while our programs run, and we have a way to let our programs run until our users are finished working.
<a name='exercises_running_input'></a>Exercises
---
#### Ex 5.3: Many Games
- Modify *[Game Preferences](#exercises_input)* so your user can add as many games as they like.
```
# Ex 5.3 : Many Games
# put your code here
```
[top](#top)
<a name='menus'></a>Using while loops to make menus
===
You now have enough Python under your belt to offer users a set of choices, and then respond to those choices until they choose to quit.
Let's look at a simple example, and then analyze the code:
```
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
    # Give all the choices in a series of print statements.
    print("\n[1] Enter 1 to take a bicycle ride.")
    print("[2] Enter 2 to go for a run.")
    print("[3] Enter 3 to climb a mountain.")
    print("[q] Enter q to quit.")
    # Ask for the user's choice.
    choice = input("\nWhat would you like to do? ")
    # Respond to the user's choice.
    if choice == '1':
        print("\nHere's a bicycle. Have fun!\n")
    elif choice == '2':
        print("\nHere are some running shoes. Run fast!\n")
    elif choice == '3':
        print("\nHere's a map. Can you leave a trip plan for us?\n")
    elif choice == 'q':
        print("\nThanks for playing. See you later.\n")
    else:
        print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
```
Our programs are getting rich enough now that we could do many different things with them. Let's clean this up in one really useful way. There are three main choices here, so let's define a function for each of those items. This way, our menu code remains really simple even as we add more complicated code to the actions of riding a bicycle, going for a run, or climbing a mountain.
```
# Define the actions for each choice we want to offer.
def ride_bicycle():
    print("\nHere's a bicycle. Have fun!\n")

def go_running():
    print("\nHere are some running shoes. Run fast!\n")

def climb_mountain():
    print("\nHere's a map. Can you leave a trip plan for us?\n")
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
    # Give all the choices in a series of print statements.
    print("\n[1] Enter 1 to take a bicycle ride.")
    print("[2] Enter 2 to go for a run.")
    print("[3] Enter 3 to climb a mountain.")
    print("[q] Enter q to quit.")
    # Ask for the user's choice.
    choice = input("\nWhat would you like to do? ")
    # Respond to the user's choice.
    if choice == '1':
        ride_bicycle()
    elif choice == '2':
        go_running()
    elif choice == '3':
        climb_mountain()
    elif choice == 'q':
        print("\nThanks for playing. See you later.\n")
    else:
        print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
```
This is much cleaner code, and it gives us space to separate the details of taking an action from the act of choosing that action.
[top](#top)
<a name='process_list'></a>Using while loops to process items in a list
===
In the section on Lists, you saw that we can `pop()` items from a list. You can use a while loop to pop items one at a time from a list, and work with them in whatever way you need.
Let's look at an example where we process a list of unconfirmed users.
```
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
    # Get the latest unconfirmed user, and process them.
    current_user = unconfirmed_users.pop()
    print("Confirming user %s...confirmed!" % current_user.title())
    # Move the current user to the list of confirmed users.
    confirmed_users.append(current_user)

# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
    print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
    print('- ' + user.title())
```
This works, but let's make one small improvement. The current program always works with the most recently added user. If users are joining faster than we can confirm them, we will leave some users behind. If we want to work on a 'first come, first served' model, or a 'first in first out' model, we can pop the first item in the list each time.
```
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
    # Get the first unconfirmed user, and process them.
    current_user = unconfirmed_users.pop(0)
    print("Confirming user %s...confirmed!" % current_user.title())
    # Move the current user to the list of confirmed users.
    confirmed_users.append(current_user)

# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
    print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
    print('- ' + user.title())
```
This is a little nicer, because we are sure to get to everyone, even when our program is running under a heavy load. We also preserve the order of people as they join our project. Notice that this all came about by adding *one character* to our program!
[top](#top)
<a name='infinite_loops'></a>Accidental Infinite loops
===
Sometimes we want a while loop to run until a defined action is completed, such as emptying out a list. Sometimes we want a loop to run for an unknown period of time, for example when we are allowing users to give as much input as they want. What we rarely want, however, is a true 'runaway' infinite loop.
Take a look at the following example. Can you pick out why this loop will never stop?
```
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
    print(current_number)
1
1
1
1
1
...
```
I faked that output, because if I ran it the output would fill up the browser. You can try to run it on your computer, as long as you know how to interrupt runaway processes:
- On most systems, Ctrl-C will interrupt the currently running program.
- If you are using Geany, your output is displayed in a popup terminal window. You can either press Ctrl-C, or you can use your pointer to close the terminal window.
The loop runs forever, because there is no way for the test condition to ever fail. The programmer probably meant to add a line that increments `current_number` by 1 each time through the loop:
```
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
    print(current_number)
    current_number = current_number + 1
```
You will certainly make some loops run infinitely at some point. When you do, just interrupt the loop and figure out the logical error you made.
Infinite loops will not be a real problem until you have users who run your programs on their machines. You won't want infinite loops then, because your users would have to shut down your program, and they would consider it buggy and unreliable. Learn to spot infinite loops, and make sure they don't pop up in your polished programs later on.
Here is one more example of an accidental infinite loop:
```
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
    print(current_number)
    current_number = current_number - 1
1
0
-1
-2
-3
...
```
In this example, we accidentally started counting down. The value of `current_number` will always be less than 5, so the loop will run forever.
<a name='exercises_infinite_loops'></a>Exercises
---
#### Ex 5.4: Marveling at Infinity
- Use one of the examples of a failed while loop to create an infinite loop.
- Interrupt your output.
- Marvel at the fact that if you had not interrupted your output, your computer would have kept doing what you told it to until it ran out of power, or memory, or until the universe went cold around it.
```
# Ex 5.4 : Marveling at Infinity
# put your code here
```
[top](#top)
<a name='overall_challenges'></a>Overall Challenges
===
#### Gaussian Addition
This challenge is inspired by a story about the mathematician Carl Friedrich Gauss. [As the story goes](http://mathforum.org/library/drmath/view/57919.html), when young Gauss was in grade school his teacher got mad at his class one day.
"I'll keep the lot of you busy for a while", the teacher said sternly to the group. "You are to add the numbers from 1 to 100, and you are not to say a word until you are done."
The teacher expected a good period of quiet time, but a moment later our mathematician-to-be raised his hand with the answer. "It's 5050!" Gauss had realized that if you list all the numbers from 1 to 100, you can always match the first and last numbers in the list and get a common answer:
1, 2, 3, ..., 98, 99, 100
1 + 100 = 101
2 + 99 = 101
3 + 98 = 101
Gauss realized there were exactly 50 pairs of numbers in the range 1 to 100, so he did a quick calculation: 50 * 101 = 5050.
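Before writing the full function, you can check Gauss' pairing trick directly in Python:

```
numbers = list(range(1, 101))
pair_sum = numbers[0] + numbers[-1]   # 1 + 100 = 101
num_pairs = len(numbers) // 2         # 50 pairs
print(pair_sum * num_pairs)           # 5050
print(sum(numbers))                   # 5050, the same answer
```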
- Write a program that passes a list of numbers to a function.
- The function should use a while loop to keep popping the first and last numbers from the list and calculate the sum of those two numbers.
- The function should print out the current numbers that are being added, and print their partial sum.
- The function should keep track of how many partial sums there are.
- The function should then print out how many partial sums there were.
- The function should perform Gauss' multiplication, and report the final answer.
- Prove that your function works, by passing in the range 1-100, and verifying that you get 5050.
- `gauss_addition(list(range(1,101)))`
- Your function should work for any set of consecutive numbers, as long as that set has an even length.
- Bonus: Modify your function so that it works for any set of consecutive numbers, whether that set has an even or odd length.
```
# Overall Challenge: Gaussian Addition
# put your code here
```
[top](#top)
# ICSR '19 - Gkortzis et al. - Data Analysis
This notebook performs the following analyses reported in the study:
1. [Prepare dataset](#prepare)
2. [RQ1 - Descriptive statistics](#descriptive)
3. [RQ1 - Boxplots](#boxplots)
4. [RQ1 - Grouping analysis](#grouping)
5. [RQ2 - Scatterplots](#scatterplots)
6. [RQ2 - Boxplots](#boxplots2)
7. [RQ2 - Correlation](#correlation)
<a id="prepare"></a>
## Prepare dataset
```
import pandas as pd
def load_dataset(csv_file):
    return pd.read_csv(csv_file)

def prepare_dataset(df):
    # Remove projects with no external classes or with fewer than 1000 user SLOC
    df = df[df['#d_classes'] > 0]
    df = df[df['#u_sloc'] > 1000]
    # Calculate derived variables
    df['#uv_p1'] = df['#uv_p1_r1'] + df['#uv_p1_r2'] + df['#uv_p1_r3'] + df['#uv_p1_r4']
    df['#dv_p1'] = df['#dv_p1_r1'] + df['#dv_p1_r2'] + df['#dv_p1_r3'] + df['#dv_p1_r4']
    df['#uv_p2'] = df['#uv_p2_r1'] + df['#uv_p2_r2'] + df['#uv_p2_r3'] + df['#uv_p2_r4']
    df['#dv_p2'] = df['#dv_p2_r1'] + df['#dv_p2_r2'] + df['#dv_p2_r3'] + df['#dv_p2_r4']
    df['#uv'] = df['#uv_p1'] + df['#uv_p2']
    df['#dv'] = df['#dv_p1'] + df['#dv_p2']
    df['#uv_sloc'] = df['#uv'] / (df['#d_sloc'] + df['#u_sloc'])
    df['#dv_sloc'] = df['#dv'] / (df['#d_sloc'] + df['#u_sloc'])
    return df
projects_dataset = '../dataset_final.csv'
study_vars = ['#u_classes', '#d_classes', '#u_sloc', '#d_sloc', '#uv', '#dv', '#uv_classes', '#dv_classes', '#uv_sloc', '#dv_sloc']
```
<a id="descriptive"></a>
## RQ1 - Descriptive statistics
```
df = load_dataset(projects_dataset)
df = prepare_dataset(df)
# Keep only the study variables
df = df[study_vars]
df.describe()
```
<a id="boxplots"></a>
## RQ1 - Boxplots
```
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rcParams['mathtext.fontset'] = 'custom'
matplotlib.rcParams['mathtext.rm'] = 'Bitstream Vera Sans'
matplotlib.rcParams['mathtext.it'] = 'Bitstream Vera Sans:italic'
matplotlib.rcParams['mathtext.bf'] = 'Bitstream Vera Sans:bold'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
df = load_dataset(projects_dataset)
df = prepare_dataset(df)
fig, axs = plt.subplots(nrows=2, ncols=3, figsize=(8, 6), tight_layout = {'pad': 1})
bp_vars = ['#uv', '#uv_classes', '#uv_sloc', '#dv', '#dv_classes', '#dv_sloc']
cols = ['Number of\nvulnerabilities', 'Number of classes\nwith vulnerabilities', 'Vulnerabilities\nper SLOC']
rows = ['Native Code', 'Reused Code']
# Plot boxes
for r in range(len(rows)):
    for c in range(len(cols)):
        bxp_df = df[bp_vars[r*len(cols) + c]]
        axs[r,c].boxplot(bxp_df, showfliers=False)
        axs[r,c].set_xticks([])
# Set titles
for ax, col in zip(axs[0], cols):
    ax.set_title(col)
for ax, row in zip(axs[:,0], rows):
    ax.set_ylabel(row, rotation=90, size='large')
fig.subplots_adjust(hspace=0.1, wspace=0.5)
plt.savefig("../../paper/figs/boxplots.pdf")
plt.show()
```
<a id="grouping"></a>
## RQ1 - Grouping analysis
```
import numpy as np
from scipy.stats import ttest_ind
def splitby_and_test(df, sort_var, test_var):
    # sort_values returns a new DataFrame; the result must be reassigned.
    df = df.sort_values([sort_var], ascending=[True])
    dfs = np.array_split(df, 2)
    t = ttest_ind(dfs[0][test_var], dfs[1][test_var])
    print(f'Comparison of {test_var} for data sorted by {sort_var}')
    print(f'\tStatistic={t[0]:.2f} (p={t[1]:.2f})')
df = load_dataset(projects_dataset)
df = prepare_dataset(df)
splitby_and_test(df, '#u_sloc', '#uv')
splitby_and_test(df, '#u_sloc', '#dv')
splitby_and_test(df, '#d_sloc', '#uv')
splitby_and_test(df, '#d_sloc', '#dv')
```
<a id="scatterplots"></a>
## RQ2 - Scatterplots
```
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rcParams.update({'font.size': 16})
df['reuse_ratio'] = df['#d_sloc'] / (df['#d_sloc']+df['#u_sloc'])
df['uv_ratio'] = df['#uv'] / df['#u_sloc']
df['dv_ratio'] = df['#dv'] / df['#d_sloc']
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(20, 6), tight_layout = {'pad': 1})
label_size = 24
axs[0].scatter(df['uv_ratio'], df['reuse_ratio'],cmap='bwr')
axs[0].set_xlim([-0.0001,0.02])
axs[0].set_xlabel("Native Vulnerability Density", fontsize=label_size)
axs[0].set_ylabel('Reuse Ratio', rotation=90, fontsize=label_size)
axs[1].scatter(df['dv_ratio'], df['reuse_ratio'],cmap='bwr')
axs[1].set_xlim([-0.0001,0.01])
axs[1].set_xlabel("Reused Vulnerability Density", fontsize=label_size)
axs[1].set_yticks([])
fig.subplots_adjust(wspace=0.1)
plt.savefig("../../paper/figs/scatter_plots.pdf")
plt.show()
```
<a id="boxplots2"></a>
## RQ2 - Boxplots
```
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rcParams['mathtext.fontset'] = 'custom'
matplotlib.rcParams['mathtext.rm'] = 'Bitstream Vera Sans'
matplotlib.rcParams['mathtext.it'] = 'Bitstream Vera Sans:italic'
matplotlib.rcParams['mathtext.bf'] = 'Bitstream Vera Sans:bold'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
df = load_dataset(projects_dataset)
df = prepare_dataset(df)
df['reuse_ratio'] = df['#d_sloc'] / (df['#d_sloc']+df['#u_sloc'])
df['uv_ratio'] = df['#uv'] / df['#u_sloc']
df['dv_ratio'] = df['#dv'] / df['#d_sloc']
df['#v_sloc'] = (df['#uv'] + df['#dv']) / (df['#d_sloc']+df['#u_sloc'])
fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(8, 4), tight_layout = {'pad': 1})
bp_vars = ['uv_ratio', 'dv_ratio', '#v_sloc'] #'reuse_ratio'
labels = ['Native\nvulnerabilities density', 'Reused\nvulnerabilities density', 'Overall\nvulnerabilities density'] #'Reuse ratio',
# Plot boxes
for i in range(len(labels)):
    bxp_df = df[bp_vars[i]]
    axs[i].boxplot(bxp_df, showfliers=False)
    axs[i].set_xticks([])
    axs[i].set_ylim([-0.0001,0.0065])
    axs[i].set_title(labels[i])
fig.subplots_adjust(hspace=0.1, wspace=0.5)
plt.savefig("../../paper/figs/boxplots2.pdf")
plt.show()
```
<a id="correlation"></a>
## RQ2 - Correlation
```
from scipy.stats import pearsonr
df = load_dataset(projects_dataset)
df = prepare_dataset(df)
df['reuse_ratio'] = df['#d_sloc'] / (df['#d_sloc']+df['#u_sloc'])
df['#v_sloc'] = (df['#uv'] + df['#dv']) / (df['#d_sloc']+df['#u_sloc'])
df = df.sort_values(['reuse_ratio'], ascending=[True])
corr = pearsonr(df['reuse_ratio'],df['#v_sloc'])
print(f"Pearson correlation coefficient")
print(f"\tCoefficient: {corr[0]:.3f} (p-value={corr[1]:.5f})")
print("native vulns :: {}".format(sum(df['#uv'])))
print("reused vulns :: {}".format(sum(df['#dv'])))
```
##### Copyright 2021 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Generalized Gumbel-max causal mechanisms tutorial
This notebook explains the APIs of the Gumbel-max causal mechanism implementation, along with those of our two gadgets.
[Open in Colab](https://colab.research.google.com/github/google-research/google-research/blob/master/gumbel_max_causal_gadgets/tutorial.ipynb)
## Setting up the environment
These instructions are designed for running this tutorial using Google Colab; if you are using a different environment, the setup instructions may differ!
The first step is to connect the Colab runtime to a GPU. You can use the "Runtime > Change runtime type" option in the toolbar above.
Next, install necessary dependencies:
```
# Download the codebase
!git clone https://github.com/google-research/google-research.git --depth=1
import os
os.chdir("google-research")
# Install Python packages
!pip install flax optax
import os
os.environ["XLA_PYTHON_CLIENT_ALLOCATOR"] = "platform"
import jax
jax.devices()
```
## Imports and configuration
```
import functools
import time
from typing import *
import numpy as np
import jax
import jax.numpy as jnp
import optax
import flax
import flax.linen as nn
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
plt.ion()
np.set_printoptions(linewidth=150)
from gumbel_max_causal_gadgets import coupling_util
from gumbel_max_causal_gadgets import gadget_1
from gumbel_max_causal_gadgets import gadget_2
from gumbel_max_causal_gadgets import experiment_util
```
## The Gumbel-max causal mechanism and Gumbel-max coupling
We start with the Gumbel-max causal mechanism, as introduced in "Counterfactual off-policy evaluation with Gumbel-max structural causal models" [(Oberst and Sontag, 2019)](http://proceedings.mlr.press/v97/oberst19a.html).
Suppose we wish to sample an observation $x$ from an interventional distribution $p(x | do(y)) \propto \exp l_x$, defined by a vector of logits $l \in \mathbb{R}^k$. We can do this by first sampling a vector of Gumbel(0) exogenous noise $\gamma$, then shifting it by $l$ and taking the argmax:
```
def sample_gumbel_max(rng, logits):
    gumbels = jax.random.gumbel(rng, logits.shape)
    x = jnp.argmax(gumbels + logits)
    return x
```
If we wish to jointly sample two outcomes under two interventions, we can do so by passing two different logit vectors while re-using the same `gumbels`. Because `rng` determines the samples of Gumbels, we can do this by passing the same `rng` value:
```
# Two fairly arbitrary logit vectors
p_logits = 0.1 * jnp.arange(10) - (10 - 1.0) / 2
q_logits = -p_logits
p_logits = p_logits - jax.scipy.special.logsumexp(p_logits)
q_logits = q_logits - jax.scipy.special.logsumexp(q_logits)
print("p_probs", jnp.exp(p_logits))
print("q_probs", jnp.exp(q_logits))
keys = jax.random.split(jax.random.PRNGKey(42), 10)
p_samples = []
q_samples = []
for prng_key in keys:
    p_samples.append(int(sample_gumbel_max(prng_key, p_logits)))
    q_samples.append(int(sample_gumbel_max(prng_key, q_logits)))
print("p_samples", p_samples)
print("q_samples", q_samples)
```
Note that the samples from $p$ and $q$ are the same more often than they would be if we drew them independently. This is because they share the same exogenous noise, and only have different interventional distributions. We can repeat this for a larger number of samples to visualize the resulting *Gumbel-max coupling* between $p$ and $q$:
```
gm_coupling_p_q = coupling_util.joint_from_samples(
coupling_util.gumbel_max_sampler,
logits_1=p_logits,
logits_2=q_logits,
rng=jax.random.PRNGKey(42),
num_samples=100_000)
plt.imshow(gm_coupling_p_q, vmin=0)
plt.colorbar()
```
### The Gumbel-max coupling vs. a maximal coupling
Notice that a lot of mass is on the diagonal. We might wonder whether the diagonal contains as much mass as possible, i.e., whether this is a maximal coupling. However, as we show in Section 4, the answer is no.
For comparison, we can construct a maximal coupling (which does not correspond to a causal mechanism, but is instead defined directly with respect to `p_logits` and `q_logits`):
```
maximal_coupling_p_q = coupling_util.joint_from_samples(
coupling_util.maximal_coupling_sampler,
logits_1=p_logits,
logits_2=q_logits,
rng=jax.random.PRNGKey(42),
num_samples=100_000)
plt.imshow(maximal_coupling_p_q, vmin=0)
plt.colorbar()
```
We can plot the difference between the two, which reveals that the Gumbel-max sampler assigns less mass to the diagonal, and more mass to the off-diagonal elements.
```
difference = gm_coupling_p_q - maximal_coupling_p_q
plt.imshow(difference, vmin=-0.016, vmax=0.016, cmap="RdBu")
plt.colorbar()
```
### Using the Gumbel-max SCM to sample counterfactuals
We can use the top-down sampling algorithm [(Maddison et al., 2014)](https://arxiv.org/abs/1411.0030) to answer counterfactual queries: given that we observed $x^{(obs)}$ under `p_logits`, what would we have observed under `q_logits`?
The key insight is that the maximum value and the argmax are independent for a set of independent shifted Gumbels (as explained [here](https://cmaddis.github.io/gumbel-machinery)). $x^{(obs)}$ is the argmax, so we can sample the exogenous noise by sampling the max, then filling in the rest.
```
x_obs = 7 # for example
rng = jax.random.PRNGKey(1234)
# Use jax.vmap to draw many samples
def sample_one(key):
gumbels = coupling_util.counterfactual_gumbels(p_logits, x_obs, key)
y_for_q = jnp.argmax(gumbels + q_logits)
return jnp.zeros([10]).at[y_for_q].set(1.)
counterfactual_y = jnp.mean(jax.vmap(sample_one)(jax.random.split(rng, 1000)), axis=0)
plt.imshow(counterfactual_y[None, :], vmin=0)
```
This is equivalent to sampling only within a single row (row 7) of the coupling matrix in the previous section.
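A NumPy sketch of this top-down construction (what `coupling_util.counterfactual_gumbels` presumably does; treat the details as an assumption): sample the maximum at the observed index from a Gumbel with location $\log \sum_i e^{l_i}$, then truncate every other shifted Gumbel to lie below it.

```python
import numpy as np

def counterfactual_gumbels_np(logits, x_obs, rng):
    # The max of shifted Gumbels is Gumbel-distributed with location logsumexp(logits).
    m = logits.max()
    logsumexp = m + np.log(np.exp(logits - m).sum())
    T = rng.gumbel() + logsumexp                  # sampled value of the maximum
    g = rng.gumbel(size=logits.shape) + logits    # fresh shifted Gumbels
    # Truncate every coordinate to lie below T, then pin the observed index to T.
    out = -np.log(np.exp(-T) + np.exp(-g))
    out[x_obs] = T
    return out

rng = np.random.default_rng(0)
logits = np.log(np.array([0.1, 0.2, 0.3, 0.4]))
gumbels = counterfactual_gumbels_np(logits, 1, rng)
```

By construction the argmax of the returned vector is always `x_obs`, so adding `q_logits` and taking a new argmax yields the counterfactual sample.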
The key property that makes this useful for counterfactual inference is that it works for any `x_obs`, even one that we did not sample using our mechanism, as long as the value of `x_obs` can be viewed as a sample from the distribution given by `p_logits`. Thus, it can be used to infer counterfactual distributions for data collected offline by interacting with the real world, as described by Oberst and Sontag (2019).
## Inverse-CDF couplings and monotonicity
Another interesting class of coupling is the "inverse CDF" causal mechanism and resulting coupling. If we define an order on the outcomes, we can use this to construct the cumulative distribution function, or CDF, for any particular logit vector $l$. It turns out that inverting the CDF and evaluating it at a sample of uniform random noise will produce a sample from the desired distribution.
```
inverse_cdf_p_q = coupling_util.inverse_cdf_coupling(
logits_1=p_logits,
logits_2=q_logits)
plt.imshow(inverse_cdf_p_q, vmin=0)
plt.colorbar()
```
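This coupling matrix has a simple closed form, sketched below (assumed to match what `coupling_util.inverse_cdf_coupling` computes): cell $(i, j)$ is the length of the overlap between the CDF interval of outcome $i$ under $p$ and that of outcome $j$ under $q$.

```python
import numpy as np

def inverse_cdf_coupling_np(p, q):
    cp = np.concatenate([[0.0], np.cumsum(p)])   # CDF breakpoints under p
    cq = np.concatenate([[0.0], np.cumsum(q)])   # CDF breakpoints under q
    joint = np.zeros((len(p), len(q)))
    for i in range(len(p)):
        for j in range(len(q)):
            lo = max(cp[i], cq[j])
            hi = min(cp[i + 1], cq[j + 1])
            joint[i, j] = max(0.0, hi - lo)      # overlap of the two CDF intervals
    return joint

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
joint = inverse_cdf_coupling_np(p, q)
```

Since the intervals partition $[0, 1]$ for each marginal, the rows sum to $p$ and the columns to $q$ automatically.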
As we discuss in Section 2, if we are interested in measuring a difference of costs with minimum variance, and each cost function is monotonic with respect to this ordering, it turns out that this coupling will always minimize the variance. However, this is only possible if we know the ordering in advance while building our causal mechanism. If we use a different order, we destroy this structure.
```
perm_inverse_cdf_p_q = coupling_util.permuted_inverse_cdf_coupling(
logits_1=p_logits,
logits_2=q_logits,
permutation_seed=1)
plt.imshow(perm_inverse_cdf_p_q, vmin=0)
plt.colorbar()
```
## Independent couplings
One other class of couplings that we compare against is the independent coupling, which implies that $p(x)$ and $q(y)$ have nothing in common. From a causal perspective, this corresponds to a situation where the outcome for an observation tells you nothing at all about what the outcome would have been for some other counterfactual intervention.
```
independent_pq = coupling_util.independent_coupling(
logits_1=p_logits,
logits_2=q_logits)
plt.imshow(independent_pq, vmin=0)
plt.colorbar()
```
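Concretely, the independent coupling is just the outer product of the two marginal probability vectors; a minimal sketch (toy marginals, not the library call above):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
independent = np.outer(p, q)  # joint[i, j] = p[i] * q[j]
```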
## Gadget 1 and Gadget 2
We now show how to use our learnable "gadgets" to define couplings and to draw samples from a counterfactual.
As discussed in Section 5.1, Gadget 1 deviates from a normal SCM in that the exogenous noise is not shared exactly between the "observed" and "counterfactual" samples. In particular, sampling from a counterfactual distribution requires transposing the matrix of Gumbels, and thus requires designating one of the interventions as the non-transposed original and the other as the transposed counterfactual.
Gadget 2, on the other hand, satisfies the normal requirements of an SCM, and uses the same exogenous noise across all possible interventions. This means that it can be used in the same set of situations as the Gumbel-max SCM.
Both gadgets are implemented as `flax` modules, which separate the definition of the model class $\{f_\theta\}_{\theta \in \Theta}$ from the specific value of the parameters $\theta$. We can instantiate each model class by specifying all of the necessary hyperparameters. For instance:
```
# S_dim is the number of outcomes for our distribution of interest.
gadget_1_def = gadget_1.GadgetOneMLPPredictor(
S_dim=10, hidden_features=[1024, 1024],
relaxation_temperature=1.0)
# Gadget 2 also requires Z_dim, the space of the latent auxiliary variable.
gadget_2_def = gadget_2.GadgetTwoMLPPredictor(
S_dim=10, Z_dim=100, hidden_features=[1024, 1024],
relaxation_temperature=1.0, learn_prior=False)
```
To use them to draw samples, we must pick a particular value for $\theta$. We can start by randomly initializing each:
```
init_key = jax.random.PRNGKey(1001)
gadget_1_theta = gadget_1_def.init(init_key, jnp.zeros([gadget_1_def.S_dim]))
init_key = jax.random.PRNGKey(1002)
gadget_2_theta = gadget_2_def.init(init_key, jnp.zeros([gadget_2_def.S_dim]))
# Summarize the shape of each parameter tree:
print("Gadget 1:")
print(jax.tree_map(lambda x: f"dtype={x.dtype} shape={x.shape} values={x.reshape([-1])[:4]}...", gadget_1_theta))
print("Gadget 2:")
print(jax.tree_map(lambda x: f"dtype={x.dtype} shape={x.shape} values={x.reshape([-1])[:4]}...", gadget_2_theta))
```
We can also bind a particular value of $\theta$ to each model definition to obtain a concrete mechanism $f_\theta$. (This is only recommended for interactive use cases, such as this notebook. If you want to learn $\theta$, it's better to keep the two separate. See the [flax documentation](https://flax.readthedocs.io/en/latest/notebooks/flax_basics.html) for more details on using flax.)
```
gadget_1_at_init = gadget_1_def.bind(gadget_1_theta)
gadget_2_at_init = gadget_2_def.bind(gadget_2_theta)
```
### Sampling from the gadgets
Given a bound gadget, we can draw samples similarly to Gumbel-max. Each gadget defines a method `sample`, which can be used to draw samples according to their structural causal model. Just like for Gumbel-max, using the same random number generator for two different logit vectors produces coupled interventions. However, as noted before, Gadget 1 requires passing a special `transpose` argument when sampling the second logit vector.
```
keys = jax.random.split(jax.random.PRNGKey(42), 20)
g1_p_samples = []
g1_q_samples = []
g2_p_samples = []
g2_q_samples = []
for prng_key in keys:
# Gadget 1
g1_p_samples.append(int(gadget_1_at_init.sample(p_logits, prng_key)))
g1_q_samples.append(int(gadget_1_at_init.sample(q_logits, prng_key, transpose=True)))
# Gadget 2
g2_p_samples.append(int(gadget_2_at_init.sample(p_logits, prng_key)))
g2_q_samples.append(int(gadget_2_at_init.sample(q_logits, prng_key)))
print("g1_p_samples", g1_p_samples)
print("g1_q_samples", g1_q_samples)
print()
print("g2_p_samples", g2_p_samples)
print("g2_q_samples", g2_q_samples)
g1_init_pq = coupling_util.joint_from_samples(
coupling_util.sampler_from_common_random_numbers(gadget_1_at_init.sample, second_kwargs={"transpose": True}),
logits_1=p_logits,
logits_2=q_logits,
rng=jax.random.PRNGKey(42),
num_samples=100_000)
g2_init_pq = coupling_util.joint_from_samples(
coupling_util.sampler_from_common_random_numbers(gadget_2_at_init.sample),
logits_1=p_logits,
logits_2=q_logits,
rng=jax.random.PRNGKey(42),
num_samples=100_000)
_, axs = plt.subplots(ncols=2, figsize=(12,6))
axs[0].imshow(g1_init_pq, vmin=0)
axs[1].imshow(g2_init_pq, vmin=0)
```
(Note: Gadget 2, even at initialization, has similar behavior to Gumbel-max, in that it tends to produce samples that are the same across $p$ and $q$. Gadget 1, on the other hand, often draws distinct samples at initialization, because the exogenous noise is transposed.)
### Drawing counterfactual samples
Each gadget also provides a method `gadget.counterfactual_sample(p_logits, q_logits, p_observed, rng)`. This method serves a similar role as the counterfactual sampling for Gumbel-max SCMs: it allows us to draw a sample from the counterfactual distribution `q_logits`, conditioned on a particular observation from `p_logits`.
```
x_obs = 7 # for example
rng = jax.random.PRNGKey(1234)
def sample_ctf_gadgets(key):
y_from_gadget_1 = gadget_1_at_init.counterfactual_sample(p_logits, q_logits, x_obs, key)
y_from_gadget_2 = gadget_2_at_init.counterfactual_sample(p_logits, q_logits, x_obs, key)
return (
jnp.zeros([10]).at[y_from_gadget_1].set(1.),
jnp.zeros([10]).at[y_from_gadget_2].set(1.),
)
from_p, from_q = jax.vmap(sample_ctf_gadgets)(jax.random.split(rng, 1000))
from_p = jnp.mean(from_p, axis=0)
from_q = jnp.mean(from_q, axis=0)
_, axs = plt.subplots(nrows=2)
axs[0].imshow(from_p[None, :], vmin=0)
axs[1].imshow(from_q[None, :], vmin=0)
```
As before, these correspond to the 7th row of the full joint distribution shown in the previous section.
### Sampling differentiable continuous relaxations
To train the gadgets, we additionally provide a method `relaxed_sample`, which continuously relaxes the Gumbel-max operations inside each gadget to instead be Gumbel-softmax operations. The default temperature is specified when initializing the gadget, and determines the tradeoff between higher gradient variance and more gradient bias.
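The relaxation itself is the standard Gumbel-softmax: replace the hard argmax over perturbed logits with a temperature-scaled softmax. A NumPy sketch (the gadgets' internal `sample_relaxed` may differ in details):

```python
import numpy as np

def gumbel_softmax(logits, gumbels, tau=1.0):
    z = (logits + gumbels) / tau
    z = z - z.max()               # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()            # soft one-hot; tends to a hard one-hot as tau -> 0

rng = np.random.default_rng(0)
logits = np.log(np.array([0.1, 0.2, 0.3, 0.4]))
g = rng.gumbel(size=4)
soft = gumbel_softmax(logits, g, tau=1.0)
hard = int(np.argmax(logits + g))
```

With the same noise, the relaxed sample's largest entry always sits at the hard sample's index, and lowering the temperature sharpens the soft vector toward that one-hot.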
Below, we compare the discrete samples with their continuously relaxed counterparts, where each row is a new sample. Note that the discrete sample is always the same as the position of the maximum in the continuous version.
```
def draw_relaxed_from_p(key):
k1, k2 = jax.random.split(key)
return (
jnp.zeros([10]).at[gadget_1_at_init.sample(p_logits, k1)].set(1),
gadget_1_at_init.sample_relaxed(p_logits, k1),
jnp.zeros([10]).at[gadget_2_at_init.sample(p_logits, k2)].set(1),
gadget_2_at_init.sample_relaxed(p_logits, k2),
)
g1_samples, g1_relaxed_samples, g2_samples, g2_relaxed_samples = jax.vmap(draw_relaxed_from_p)(jax.random.split(jax.random.PRNGKey(3), 15))
_, axs = plt.subplots(ncols=4, figsize=(10,4))
axs[0].imshow(g1_samples, vmin=0)
axs[1].imshow(g1_relaxed_samples, vmin=0)
axs[2].imshow(g2_samples, vmin=0)
axs[3].imshow(g2_relaxed_samples, vmin=0)
```
If we simultaneously draw relaxed samples from $p$ and $q$, again using the same exogenous noise, we can obtain a differentiable estimate of the implicit coupling between them (shown only for gadget 2):
```
def gen_soft_pair(key):
soft_x = gadget_2_at_init.sample_relaxed(p_logits, key)
soft_y = gadget_2_at_init.sample_relaxed(q_logits, key)
return (soft_x[:, None] * soft_y[None, :])
soft_pairs = jax.vmap(gen_soft_pair)(jax.random.split(jax.random.PRNGKey(1), 4*8))
_, axs = plt.subplots(nrows=4, ncols=8, figsize=(16,8))
for i in range(4):
for j in range(8):
axs[i, j].imshow(soft_pairs[np.ravel_multi_index((i,j), (4,8))])
```
Over a large number of samples, these soft pairs will tend to approximate a noisier version of the implicit coupling. We can thus use stochastic gradient descent to optimize this coupling to have a better score under our objective of interest.
```
_, axs = plt.subplots(ncols=2, figsize=(12,6))
# True implicit coupling defined by gadget 2
axs[0].imshow(g2_init_pq, vmin=0)
# Relaxed approximation with gradients
axs[1].imshow(jnp.mean(soft_pairs, axis=0), vmin=0)
```
### Training the gadgets
So far, we have described how to use the gadgets once we know a particular value for $\theta$. It remains to see how we can *learn* $\theta$ to optimize an objective of interest.
To this end, we provide a training helper class `CouplingExperimentConfig` which allows either of our gadgets to be trained to minimize a particular objective over a distribution of interest. In order to use this, you must provide
- a model definition (either gadget 1 or gadget 2)
- a function that generates random pairs of logits according to the distribution of interest (this is $\mathcal{D}$ from Section 3)
- a function that takes a pair of logits, and returns a matrix of scores for each pair of counterfactual samples (this is $g_{l^{(1)}, l^{(2)}}$ from Section 3)
- a flag specifying whether it should pass the `transpose` argument during training (True for gadget 1, False otherwise)
- hyperparameters that control the training process, such as the batch size, number of samples per iteration, and optimizer.
To show how this works, here is an example of training each of our gadgets based on two distance functions:
- $g(x, y) = 0\text{ if }x=y\text{ else }1$, which encourages our coupling to be closer to a maximal coupling.
- $g(x, y) = (x-y)^2$, which encourages our coupling to minimize the variance of the difference between the sampled indices.
In both cases, we take the pair of $p$ and $q$ we have used for the rest of this notebook, and perturb it with a small amount of noise, so that it represents a distribution of intervention pairs.
```
def logit_pair_distribution_fn(rng, dim, base_scale=.1, noise_scale=.1):
p_rng, q_rng = jax.random.split(rng, 2)
p_base = jnp.arange(dim) - (dim - 1.0) / 2
q_base = -p_base
p_logits = base_scale * p_base + noise_scale * jax.random.normal(p_rng, (dim,))
q_logits = base_scale * q_base + noise_scale * jax.random.normal(q_rng, (dim,))
return p_logits, q_logits
def maximal_coupling_loss_matrix_fn(logits1, logits2):
return 1.0 - jnp.eye(logits1.shape[0])
def squared_loss_matrix_fn(logits1, logits2):
seq = jnp.arange(logits1.shape[0]).astype(jnp.float32)
return jnp.square(seq[None, :] - seq[:, None])
experiments = []
S_dim = 10
Z_dim = 100
for task_fn in [maximal_coupling_loss_matrix_fn, squared_loss_matrix_fn]:
for gadget in [1, 2]:
ex = experiment_util.CouplingExperimentConfig(
name=f"Gadget {gadget} example training: {task_fn.__name__}",
model=(
gadget_1.GadgetOneMLPPredictor(
S_dim=S_dim,
hidden_features=[1024, 1024],
relaxation_temperature=1.0)
if gadget == 1 else
gadget_2.GadgetTwoMLPPredictor(
S_dim=S_dim,
Z_dim=Z_dim,
hidden_features=[1024, 1024],
relaxation_temperature=1.0,
learn_prior=False)
),
logit_pair_distribution_fn=functools.partial(
logit_pair_distribution_fn,
dim=S_dim,
base_scale=.1,
noise_scale=0.4),
coupling_loss_matrix_fn=task_fn,
inner_num_samples=16,
batch_size=64,
use_transpose=(gadget == 1),
tx=optax.adam(1e-5),
num_steps=2001,
print_every=1000,
)
experiments.append(ex)
results = []
for ex in experiments:
print("=" * 80)
print(ex.name)
print("=" * 80)
results.append(ex.train(jax.random.PRNGKey(42)))
print()
gadget_1_maximal_theta = results[0].params
gadget_1_maximal = experiments[0].model.bind(gadget_1_maximal_theta)
gadget_2_maximal_theta = results[1].params
gadget_2_maximal = experiments[1].model.bind(gadget_2_maximal_theta)
gadget_1_variance_theta = results[2].params
gadget_1_variance = experiments[2].model.bind(gadget_1_variance_theta)
gadget_2_variance_theta = results[3].params
gadget_2_variance = experiments[3].model.bind(gadget_2_variance_theta)
g1_maximal_pq = coupling_util.joint_from_samples(
coupling_util.sampler_from_common_random_numbers(gadget_1_maximal.sample, second_kwargs={"transpose": True}),
logits_1=p_logits,
logits_2=q_logits,
rng=jax.random.PRNGKey(42),
num_samples=100_000)
g2_maximal_pq = coupling_util.joint_from_samples(
coupling_util.sampler_from_common_random_numbers(gadget_2_maximal.sample),
logits_1=p_logits,
logits_2=q_logits,
rng=jax.random.PRNGKey(42),
num_samples=100_000)
_, axs = plt.subplots(ncols=2, figsize=(12,6))
axs[0].imshow(g1_maximal_pq, vmin=0)
axs[1].imshow(g2_maximal_pq, vmin=0)
```
We see that, after optimizing them to be closer to a maximal coupling, both gadgets have adapted to put more probability mass on the diagonal than they did at initialization time.
```
g1_variance_pq = coupling_util.joint_from_samples(
coupling_util.sampler_from_common_random_numbers(gadget_1_variance.sample, second_kwargs={"transpose": True}),
logits_1=p_logits,
logits_2=q_logits,
rng=jax.random.PRNGKey(42),
num_samples=100_000)
g2_variance_pq = coupling_util.joint_from_samples(
coupling_util.sampler_from_common_random_numbers(gadget_2_variance.sample),
logits_1=p_logits,
logits_2=q_logits,
rng=jax.random.PRNGKey(42),
num_samples=100_000)
_, axs = plt.subplots(ncols=2, figsize=(12,6))
axs[0].imshow(g1_variance_pq, vmin=0)
axs[1].imshow(g2_variance_pq, vmin=0)
```
If they are trained to reduce variance, Gadget 2 learns a coupling that shares some similarity with the inverse CDF coupling, whereas Gadget 1 again pulls a large amount of mass onto the diagonal.
## MDP counterfactual treatment effect
In this part, we show how to use an MDP as the interventional distribution in order to couple transitions under a behavior policy (e.g. a physician) and a target policy (e.g. an RL policy).
Following Oberst and Sontag (2019), we consider a sepsis management simulator and take the following steps:
1. Learn an MDP by interacting with the simulator. This MDP represents the "true" behavior of sepsis management.
2. Train a behavior policy (physician) over the MDP.
3. Generate patient trajectories (data) using the behavior policy, and construct an estimated MDP from the sampled data.
4. Learn an RL policy over the estimated MDP.
The MDP gives the transition probability $p(s' \mid s, a)$ and thus two interventional distributions, depending on whether we choose $a$ according to the physician policy or the RL policy. In the counterfactual setting, we sample the counterfactual $s'_{cf}$ conditioned on the observed $s'_{obs}$ (from step 3).
```
%cd ..
!git clone https://github.com/GuyLor/gumbel_max_causal_gadgets_part2.git
import os
os.chdir("gumbel_max_causal_gadgets_part2")
!pip install -r requirements.txt
from sepsis_mdp import SepsisMDP
import numpy as np
import cf.utils as utils
from joint_predictor import Coupler
from cf import fixed_mechanisms as fm
```
### Observations and interventional distributions
```
# Setup of the sepsis simulator:
sep = SepsisMDP()
# Load an MDP that was trained over the simulator's states and actions - the 'true' transition distributions of sepsis management
true_mdp = sep.load_mdp_from_simulator()
# Train a behavior policy over the true MDP using policy iteration algorithm
physician_policy = sep.get_physician_policy(true_mdp)
# Sample trajectories of patients by interacting with the MDP using the physician policy
# Using these trajectories, construct an estimated MDP
obs_samples, est_mdp = sep.simulate_patient_trajectories_and_construct_mdp(physician_policy,
num_steps=20,
num_samples=20000)
# Train a policy over the estimated MDP
cf_policy = sep.train_rl_policy(est_mdp)
```
Unlike Oberst and Sontag, we couple single time steps, and therefore consider rewards per state (instead of $[0,1]$ rewards at trajectory completion).
The state is composed of 6 categorical variables, each with a different number of categories.
We sample a Gaussian noise for each category representing its energy.
The reward of a given state is obtained by summing the energies associated with its variables.
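A hypothetical sketch of this reward construction (the variable names and category counts below are illustrative; the real implementation lives in `sep.randomize_states_rewards`):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical category counts for the six categorical state variables.
n_categories = [3, 3, 2, 5, 2, 2]
# One Gaussian "energy" per category of each variable.
energies = [rng.normal(size=n) for n in n_categories]

def state_reward(state):
    # state: one category index per variable; the reward sums their energies.
    return sum(e[c] for e, c in zip(energies, state))

r = state_reward([0, 2, 1, 4, 0, 1])
```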
```
relevant_trajs_and_t = sep.search_for_relevant_tr_t(obs_samples, cf_policy, est_mdp,
num_of_diff_p_q=sep.n_proj_states, num_gt_zero_probs=4)
trajectory_idx, time_idx = relevant_trajs_and_t[0]
current_state, obs_action, obs_next_state = sep.parse_samples(obs_samples, trajectory_idx, time_idx)
cf_action = cf_policy[current_state, :].squeeze().argmax()
# get p and q from the MDP
behavior_interv_probs = est_mdp.tx_mat[0, obs_action, current_state, :].squeeze().tolist()
target_interv_probs = est_mdp.tx_mat[0, cf_action, current_state, :].squeeze().tolist()
```
The following function calculates the variance of the treatment effect, comparing couplings from the fixed mechanisms (inverse-CDF and Gumbel-max) to our two learnable mechanisms.
```
def run_comparison(behavior_interv_probs, target_interv_probs, s_prime_obs=None, n=10, seed=0):
    # Note: relies on the globals noise_scale, counterfactual, c, and reward_vector,
    # which are defined in the cells below before this function is called.
logits_p = np.log(np.array(behavior_interv_probs) + 1e-10).clip(min=-80.0)
logits_q = np.log(np.array(target_interv_probs) + 1e-10).clip(min=-80.0)
batch_size_test = 2000
batch_logits_p = np.tile(logits_p, (batch_size_test, 1)) + noise_scale * np.random.randn(batch_size_test,
logits_p.shape[-1])
batch_logits_q = np.tile(logits_q, (batch_size_test, 1)) + noise_scale * np.random.randn(batch_size_test,
logits_q.shape[-1])
batch_s_prime_obs = np.tile(s_prime_obs, (batch_size_test, 1)) if counterfactual else None
vars_gm, vars_icdf, vars_gd1, vars_gd2 = [], [], [], []
for i in range(n):
(s_prime_p, s_prime_q), _ = c.gadget_1.sample_from_joint(batch_logits_p, batch_logits_q, batch_s_prime_obs,
counterfactual=counterfactual, train=False)
vars_gd1.append(utils.compute_variance_treatment_effect(reward_vector, s_prime_p, s_prime_q, batch_s_prime_obs))
(s_prime_p, s_prime_q), _ = c.gadget_2.sample_from_joint(batch_logits_p, batch_logits_q, batch_s_prime_obs,
counterfactual=counterfactual, train=False)
vars_gd2.append(utils.compute_variance_treatment_effect(reward_vector, s_prime_p, s_prime_q, batch_s_prime_obs))
s_prime_p, s_prime_q = fm.gumbel_max_coupling(batch_logits_p, batch_logits_q, batch_s_prime_obs, counterfactual=counterfactual)
vars_gm.append(utils.compute_variance_treatment_effect(reward_vector, s_prime_p, s_prime_q, batch_s_prime_obs))
s_prime_p, s_prime_q = fm.inverse_cdf_coupling(batch_logits_p, batch_logits_q, batch_s_prime_obs, counterfactual=counterfactual)
vars_icdf.append(utils.compute_variance_treatment_effect(reward_vector, s_prime_p, s_prime_q, batch_s_prime_obs))
return np.mean(vars_gm), np.mean(vars_icdf), np.mean(vars_gd1), np.mean(vars_gd2)
```
### Training the gadgets
Train the gadgets with specific realization of rewards:
```
noise_scale = 1.0
counterfactual = True
vars_gm, vars_icdf, vars_gd1, vars_gd2 = [],[],[],[]
for t in range(5):
print('='*80)
print(f'Trial {t}: sample new rewards')
c = Coupler(s_dim=sep.n_proj_states, z_dim=20, hidden_features=[1024, 1024], tmp=1.0, seed=t)
reward_vector = sep.randomize_states_rewards()
print('---- Train gadget-1 -----')
c.train_gadget_1(p=behavior_interv_probs, q=target_interv_probs, s_prime_obs=obs_next_state, reward_vector=reward_vector,
batch_size=64, counterfactual=counterfactual, num_iter=200, noise_scale=noise_scale)
print('---- Train gadget-2 -----')
c.train_gadget_2(p=behavior_interv_probs, q=target_interv_probs, s_prime_obs=obs_next_state, reward_vector=reward_vector,
batch_size=64, counterfactual=counterfactual, num_iter=200, noise_scale=noise_scale)
gm, icdf, gd1, gd2 = run_comparison(behavior_interv_probs, target_interv_probs, obs_next_state, n=10)
print(f'Gumbel-max: {gm}, inverse-CDF: {icdf}, gadget-1: {gd1}, gadget-2: {gd2}')
vars_gm.append(gm); vars_icdf.append(icdf); vars_gd1.append(gd1); vars_gd2.append(gd2)
print('Average over 5 rewards realizations (same p, q)')
print(f'Gumbel-max: {np.mean(vars_gm)}, inverse-CDF: {np.mean(vars_icdf)}, gadget-1: { np.mean(vars_gd1)}, gadget-2: {np.mean(vars_gd2)}')
```
### Comparison to fixed causal mechanisms (inverse-CDF, Gumbel-max)
```
utils.plot_mdp_variances(vars_gm, vars_icdf, vars_gd1, vars_gd2, cf=counterfactual,
generalized=noise_scale > 0)
```
# Neural Network
In this notebook, we will implement a neural network model to classify success/failure of terrorist attacks in our database. Our goal is to find out how accurate this model is compared to LASSO regression and decision trees. For a discussion of what "success" means in this database, see our main notebook.
Our data are categorical one-hot-encoded feature vectors describing the attack, while the labels correspond to the success or failure of the attack.
## Benefits
1\. **Universal Approximation**: The generally high predictive power of Neural Networks can in part be explained by the Universal Approximation Theorem, which states that a multilayer perceptron is able to approximate continuous functions on compact subsets of $R^{n}$.
2\. **Binary Classification**: By employing a binary cross-entropy loss function, neural networks are well-suited to binary classification problems. In combination with the Universal Approximation Theorem, this means that a neural network can, in principle, approximate any continuous decision function over $R^{n}$, the space of independent variables.
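Concretely, for a single example with label $y \in \{0, 1\}$ and predicted success probability $\hat{y}$, the binary cross-entropy loss is

$$\ell(y, \hat{y}) = -\left[\, y \log \hat{y} + (1 - y) \log (1 - \hat{y}) \,\right],$$

and the training objective averages $\ell$ over all attacks in the training set.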
## Preprocessing
First, we load the data, generate one-hot-categorical variables, and split into training and test sets. We will use the test set to perform cross-validation of hyperparameters.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from keras.models import Sequential
from keras.layers import Dense
import keras
from preprocess_functions import load_data_relevant_cols, get_dummies
raw = load_data_relevant_cols()
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, roc_curve, confusion_matrix
np.random.seed(159)
```
Let's look at some examples of the possible options for each category.
```
print(raw.attacktype1_txt.unique()[0:10])
print(raw.targtype1_txt.unique()[0:10])
print(raw.targsubtype1_txt.unique()[0:10])
print(raw.weaptype1_txt.unique()[0:10])
print(raw.weapsubtype1_txt.unique()[0:10])
```
To use the categorical variables for analysis, we need to convert them to one-hot-encoded dummy variables, using pandas' built-in `get_dummies` function.
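As a tiny standalone illustration (toy data, not rows from the terrorism database), `get_dummies` turns each category of a column into its own 0/1 indicator column:

```python
import pandas as pd

toy = pd.DataFrame({'weaptype1_txt': ['Firearms', 'Explosives', 'Firearms']})
onehot = pd.get_dummies(toy, columns=['weaptype1_txt'])
# Result has columns weaptype1_txt_Explosives and weaptype1_txt_Firearms,
# with exactly one 1 per row across the dummy columns.
```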
```
# Making one-hot-encoded dummy variables
rel_columns = ['attacktype1_txt', 'targtype1_txt', 'targsubtype1_txt', 'weaptype1_txt', 'weapsubtype1_txt']
X = get_dummies(raw, rel_columns)
Y = raw.success
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.1, random_state=42)
```
## Hyperparameter Search
Due to the large number of hyperparameters, an exhaustive grid search is computationally infeasible. Instead, we will run a staged hyperparameter search, first trying out different activation functions, then model architectures.
Since the data is on the scale of hundreds of thousands of rows, there should be enough signal for a neural network to exploit nonlinear structure that a linear model cannot.
We begin by trying different activation functions.
```
model_tanh = Sequential()
model_tanh.add(Dense(units=10, activation='tanh', input_dim=182))
model_tanh.add(Dense(units=10, activation='tanh'))
model_tanh.add(Dense(units=1, activation='sigmoid'))
model_tanh.compile(loss=keras.losses.binary_crossentropy, metrics=['binary_accuracy'],
optimizer='adam')
history_tanh = model_tanh.fit(X_train.values, y_train, epochs=25, batch_size=32,
validation_data=(X_test.values, y_test), verbose=0)
model_relu = Sequential()
model_relu.add(Dense(units=10, activation='relu', input_dim=182))
model_relu.add(Dense(units=10, activation='relu'))
model_relu.add(Dense(units=1, activation='sigmoid'))
model_relu.compile(loss=keras.losses.binary_crossentropy, metrics=['binary_accuracy'],
optimizer='adam')
history_relu = model_relu.fit(X_train.values, y_train, epochs=25, batch_size=32,
validation_data=(X_test.values, y_test), verbose=0)
model_sigmoid = Sequential()
model_sigmoid.add(Dense(units=10, activation='sigmoid', input_dim=182))
model_sigmoid.add(Dense(units=10, activation='sigmoid'))
model_sigmoid.add(Dense(units=1, activation='sigmoid'))
model_sigmoid.compile(loss=keras.losses.binary_crossentropy, metrics=['binary_accuracy'],
optimizer='adam')
history_sigmoid = model_sigmoid.fit(X_train.values, y_train, epochs=25, batch_size=32,
validation_data=(X_test.values, y_test), verbose=0)
```
We will use the following functions to visualize the loss and binary accuracy for each model over successive epochs.
```
def plot_loss(history, name='', save=False, save_path=None):
    """
    Plots and optionally saves the loss of a keras History over
    successive epochs of training.
    """
    plt.plot(history.history['val_loss'], label='val_loss')
    plt.plot(history.history['loss'], label='loss')
    plt.legend(loc='upper right')
    plt.xlabel('epoch')
    plt.ylabel('binary cross-entropy loss')
    # str(history) would print an unhelpful object repr, so use an optional name
    plt.title((name + ' loss').strip())
    if save:
        plt.savefig(save_path)
    plt.show()
def plot_accuracy(history, name='', save=False, save_path=None):
    """
    Plots and optionally saves the binary accuracy of a keras History over
    successive epochs of training.
    """
    plt.plot(history.history['val_binary_accuracy'], label='val_accuracy')
    plt.plot(history.history['binary_accuracy'], label='accuracy')
    plt.legend(loc='lower right')
    plt.xlabel('epoch')
    plt.ylabel('binary accuracy')
    # str(history) would print an unhelpful object repr, so use an optional name
    plt.title((name + ' accuracy').strip())
    if save:
        plt.savefig(save_path)
    plt.show()
```
### Visualizing results of Activation Function tuning
```
plot_loss(history_tanh)
plot_accuracy(history_tanh)
plot_loss(history_relu)
plot_accuracy(history_relu)
plot_loss(history_sigmoid)
plot_accuracy(history_sigmoid)
plt.figure(figsize=(12,8))
plt.plot(history_tanh.history['val_binary_accuracy'], label='tanh')
plt.plot(history_relu.history['val_binary_accuracy'], label='relu')
plt.plot(history_sigmoid.history['val_binary_accuracy'], label='sigmoid')
plt.legend(loc='lower right')
plt.xlabel('epoch')
plt.ylabel('binary accuracy')
plt.title('Accuracy for Different Activation Functions')
plt.savefig('figures/neural_net_activations')
plt.show()
# Epoch with the best validation accuracy for each activation
np.argmax(history_relu.history['val_binary_accuracy'])
np.argmax(history_tanh.history['val_binary_accuracy'])
max(history_relu.history['val_binary_accuracy'])
max(history_tanh.history['val_binary_accuracy'])
```
It appears that tanh with a few (<10) epochs is the best activation function.
## Model Architecture Comparison
Now we will try a few different model architectures.
```
# First, a wide model (more nodes per layer)
model_wide = Sequential()
model_wide.add(Dense(units=32, activation='tanh', input_dim=182))
model_wide.add(Dense(units=32, activation='tanh'))
model_wide.add(Dense(units=1, activation='sigmoid'))
model_wide.compile(loss=keras.losses.binary_crossentropy, metrics=['binary_accuracy'],
optimizer='adam')
history_wide = model_wide.fit(X_train.values, y_train, epochs=10, batch_size=32,
validation_data=(X_test.values, y_test), verbose=0)
# Next, a wider model with even more nodes per layer
model_wider = Sequential()
model_wider.add(Dense(units=64, activation='tanh', input_dim=182))
model_wider.add(Dense(units=64, activation='tanh'))
model_wider.add(Dense(units=1, activation='sigmoid'))
model_wider.compile(loss=keras.losses.binary_crossentropy, metrics=['binary_accuracy'],
optimizer='adam')
history_wider = model_wider.fit(X_train.values, y_train, epochs=10, batch_size=32,
validation_data=(X_test.values, y_test), verbose=0)
# Next, a deep model, with more layers
model_deep = Sequential()
model_deep.add(Dense(units=10, activation='tanh', input_dim=182))
model_deep.add(Dense(units=10, activation='tanh'))
model_deep.add(Dense(units=10, activation='tanh'))
model_deep.add(Dense(units=1, activation='sigmoid'))
model_deep.compile(loss=keras.losses.binary_crossentropy, metrics=['binary_accuracy'],
optimizer='adam')
history_deep = model_deep.fit(X_train.values, y_train, epochs=15, batch_size=32,
validation_data=(X_test.values, y_test), verbose=0)
# Next, a bigger model, with both more layers and more nodes per layer
model_big = Sequential()
model_big.add(Dense(units=32, activation='tanh', input_dim=182))
model_big.add(Dense(units=32, activation='tanh'))
model_big.add(Dense(units=32, activation='tanh'))
model_big.add(Dense(units=1, activation='sigmoid'))
model_big.compile(loss=keras.losses.binary_crossentropy, metrics=['binary_accuracy'],
optimizer='adam')
history_big = model_big.fit(X_train.values, y_train, epochs=20, batch_size=32,
validation_data=(X_test.values, y_test), verbose=0)
plot_loss(history_wide)
plot_accuracy(history_wide)
plot_loss(history_wider)
plot_accuracy(history_wider)
plot_loss(history_deep)
plot_accuracy(history_deep)
plot_loss(history_big)
plot_accuracy(history_big)
plt.figure(figsize=(12,8))
plt.plot(history_wide.history['val_binary_accuracy'], label='wide_accuracy')
plt.plot(history_wider.history['val_binary_accuracy'], label='wider_accuracy')
plt.plot(history_deep.history['val_binary_accuracy'], label='deep_accuracy')
plt.plot(history_big.history['val_binary_accuracy'], label='big_accuracy')
plt.legend(loc='lower right')
plt.xlabel('epoch')
plt.ylabel('binary accuracy')
plt.title('Accuracy for Different Model Architectures')
plt.savefig('figures/neural_net_architectures')
plt.show()
```
The differences in performance between the architectures are negligible. We will use the big model (3 hidden layers of 32 units each) trained for 3 epochs, since its validation accuracy is the most consistent.
```
# Rerunning final model
model_final = Sequential()
model_final.add(Dense(units=32, activation='tanh', input_dim=182))
model_final.add(Dense(units=32, activation='tanh'))
model_final.add(Dense(units=32, activation='tanh'))
model_final.add(Dense(units=1, activation='sigmoid'))
model_final.compile(loss=keras.losses.binary_crossentropy, metrics=['binary_accuracy'],
optimizer='adam')
history_final = model_final.fit(X_train.values, y_train, epochs=3,
batch_size=32, validation_data=(X_test.values, y_test), verbose=0)
# Loss and accuracy for final model
plot_loss(history_final, save=True, save_path='figures/NN_loss')
plot_accuracy(history_final, save=True, save_path='figures/NN_accuracy')
```
## Discussion
### ROC Curves
Due to the skewed target class distribution, we will use ROC curves to measure final performance. An ROC curve plots the true positive rate against the false positive rate over all possible rounding thresholds for the raw predictions.
```
# ROC Curve analysis
y_pred = model_final.predict(X_test.values)
fpr, tpr, thresh = roc_curve(y_test, y_pred)
roc_auc = roc_auc_score(y_test, y_pred)
plt.plot(fpr, tpr, color='darkorange', label='ROC curve (area = %0.2f)' % roc_auc)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.savefig('figures/NN_ROC')
plt.show()
# Choosing the rounding threshold where the true positive rate
# is greater than 0.9
loc = np.min(np.where(tpr > 0.9))
fpr[loc]
threshold = thresh[loc]
# Rounding the predictions based on the threshold
rounded = []
for i in y_pred:
if i > threshold:
rounded.append(1)
else:
rounded.append(0)
rounded_y_pred = np.array(rounded)
confusion_matrix(y_test, rounded_y_pred)
```
- True Negatives: 836
- False Negatives: 1551
- True Positives: 13724
- False Positives: 924
```
# Calculating binary accuracy
accuracy = (13724 + 836) / (13724 + 836 + 1551 + 924)
accuracy
```
### Conclusion
The neural network's predictive performance is better than that of the LASSO model, which is expected: as mentioned earlier, the universal approximation theorem implies that neural networks can fit a much more complex decision function. However, the overall accuracy of the neural network is still worse than that of the decision trees.
# CNTK 103 Part A: MNIST Data Loader
This tutorial is targeted at individuals who are new to CNTK and to machine learning. We assume you have completed or are familiar with CNTK 101 and 102. In this tutorial, you will train a simple feedforward-network model to recognize handwritten digits. This is the first example where we train and evaluate a neural network on real-world data.
The CNTK 103 tutorial is divided into two parts:
- Part A: Familiarize with the [MNIST][] database that will be used later in the tutorial
- [Part B](https://github.com/Microsoft/CNTK/blob/v2.0.beta7.0/Tutorials/CNTK_103B_MNIST_FeedForwardNetwork.ipynb): We will use the feedforward classifier used in CNTK 102 to classify digits in MNIST data set.
[MNIST]: http://yann.lecun.com/exdb/mnist/
```
# Figure 1 - This is what the MNIST data looks like
print ('This is what the MNIST data looks like...')
Image(url= "https://github.com/Azure/DataScienceVM/blob/master/Tutorials/WebinarDocuments-04-04-2017/MiscAssets/mnist_originals.png?raw=true", width=200, height=200)
# Import the relevant modules to be used later
from __future__ import print_function
import gzip
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import os
import shutil
import struct
import sys
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
# Config matplotlib for inline plotting
%matplotlib inline
```
## Data download
We will download the data to the local machine. The MNIST database is a standard database of handwritten digits that has been widely used for training and testing machine learning algorithms. It has a training set of 60,000 images and a test set of 10,000 images, each image being 28 x 28 pixels. The set is easy to visualize and train on with any computer.
```
# Functions to load MNIST images and unpack into train and test set.
# - loadData reads image data and formats into a 28x28 long array
# - loadLabels reads the corresponding labels data, 1 for each image
# - load packs the downloaded image and labels data into a combined format to be read later by
# CNTK text reader
def loadData(src, cimg):
print ('Downloading ' + src)
gzfname, h = urlretrieve(src, './delete.me')
print ('Done.')
try:
with gzip.open(gzfname) as gz:
n = struct.unpack('I', gz.read(4))
# Read magic number.
if n[0] != 0x3080000:
raise Exception('Invalid file: unexpected magic number.')
# Read number of entries.
n = struct.unpack('>I', gz.read(4))[0]
if n != cimg:
raise Exception('Invalid file: expected {0} entries.'.format(cimg))
crow = struct.unpack('>I', gz.read(4))[0]
ccol = struct.unpack('>I', gz.read(4))[0]
if crow != 28 or ccol != 28:
raise Exception('Invalid file: expected 28 rows/cols per image.')
# Read data.
res = np.frombuffer(gz.read(cimg * crow * ccol), dtype=np.uint8)
finally:
os.remove(gzfname)
return res.reshape((cimg, crow * ccol))
def loadLabels(src, cimg):
print ('Downloading ' + src)
gzfname, h = urlretrieve(src, './delete.me')
print ('Done.')
try:
with gzip.open(gzfname) as gz:
n = struct.unpack('I', gz.read(4))
# Read magic number.
if n[0] != 0x1080000:
raise Exception('Invalid file: unexpected magic number.')
# Read number of entries.
n = struct.unpack('>I', gz.read(4))
if n[0] != cimg:
raise Exception('Invalid file: expected {0} rows.'.format(cimg))
# Read labels.
res = np.frombuffer(gz.read(cimg), dtype=np.uint8)
finally:
os.remove(gzfname)
return res.reshape((cimg, 1))
def try_download(dataSrc, labelsSrc, cimg):
data = loadData(dataSrc, cimg)
labels = loadLabels(labelsSrc, cimg)
return np.hstack((data, labels))
```
# Download the data
The MNIST data is provided as a train and a test set. The training set has 60,000 images while the test set has 10,000 images. Let us download the data.
```
# URLs for the train image and labels data
url_train_image = 'http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz'
url_train_labels = 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz'
num_train_samples = 60000
print("Downloading train data")
train = try_download(url_train_image, url_train_labels, num_train_samples)
url_test_image = 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz'
url_test_labels = 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
num_test_samples = 10000
print("Downloading test data")
test = try_download(url_test_image, url_test_labels, num_test_samples)
```
# Visualize the data
```
# Plot a random image
sample_number = 5001
plt.imshow(train[sample_number,:-1].reshape(28,28), cmap="gray_r")
plt.axis('off')
print("Image Label: ", train[sample_number,-1])
```
# Save the images
Save the images in a local directory. While saving the data we flatten each image to a vector (a 28x28 image becomes an array of length 784 data points), and the labels are encoded as [1-hot][] vectors (a label of 3 with 10 digits becomes `0001000000`).
[1-hot]: https://en.wikipedia.org/wiki/One-hot
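As a quick illustration, each row of a 10x10 identity matrix is the one-hot vector for the corresponding label (the same `np.eye` trick used in the saving code below):

```python
import numpy as np

# Row i of the identity matrix is the one-hot encoding of label i.
labels = np.eye(10, dtype=np.uint8)

# A label of 3 (with 10 classes) maps to a 1 in position 3, zeros elsewhere.
print(' '.join(labels[3].astype(str)))  # 0 0 0 1 0 0 0 0 0 0
```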
```
# Save the data files into a format compatible with CNTK text reader
def savetxt(filename, ndarray):
dir = os.path.dirname(filename)
if not os.path.exists(dir):
os.makedirs(dir)
if not os.path.isfile(filename):
print("Saving", filename )
with open(filename, 'w') as f:
labels = list(map(' '.join, np.eye(10, dtype=np.uint).astype(str)))
for row in ndarray:
row_str = row.astype(str)
label_str = labels[row[-1]]
feature_str = ' '.join(row_str[:-1])
f.write('|labels {} |features {}\n'.format(label_str, feature_str))
else:
print("File already exists", filename)
# Save the train and test files
print ('Writing train text file...')
savetxt(r'data/MNIST/Train-28x28_cntk_text.txt', train)
print ('Writing test text file...')
savetxt(r'data/MNIST/Test-28x28_cntk_text.txt', test)
print('Done')
```
**Suggested Explorations**
One can manipulate the data to improve the performance of a machine learning system. I suggest you first use the data generated so far and run the classifier in CNTK 103 Part B. Once you have a baseline for classifying the data in its original form, try the different data manipulation techniques below to further improve the model.
There are several ways data alterations can be performed. CNTK readers automate many of these actions for you. However, to get a feel for how these transforms can impact training and test accuracy, I strongly encourage you to try one or more of the following data perturbations.
- Shuffle the training data (permute the rows to create a differently ordered set). Hint: Use `permute_indices = np.random.permutation(train.shape[0])`. Then run Part B of the tutorial with this newly permuted data.
- Adding noise to the data can often reduce the [generalization error][]. You can augment the training set by adding noise (generated with numpy, hint: use `numpy.random`) to the training images.
- Distort the images with [affine transformation][] (translations or rotations)
[generalization error]: https://en.wikipedia.org/wiki/Generalization_error
[affine transformation]: https://en.wikipedia.org/wiki/Affine_transformation
## Dependencies
```
import json, glob
import numpy as np
import pandas as pd
import tensorflow as tf
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras import layers
from tensorflow.keras.models import Model
```
# Load data
```
test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')
print('Test samples: %s' % len(test))
display(test.head())
```
# Model parameters
```
input_base_path = '/kaggle/input/181-tweet-train-5fold-roberta-bilstm-td-head-lbl02/'
with open(input_base_path + 'config.json') as json_file:
config = json.load(json_file)
config
vocab_path = input_base_path + 'vocab.json'
merges_path = input_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
# vocab_path = base_path + 'roberta-base-vocab.json'
# merges_path = base_path + 'roberta-base-merges.txt'
config['base_model_path'] = base_path + 'roberta-base-tf_model.h5'
config['config_path'] = base_path + 'roberta-base-config.json'
model_path_list = glob.glob(input_base_path + 'model' + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep = '\n')
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
```
# Pre process
```
test['text'].fillna('', inplace=True)
test['text'] = test['text'].apply(lambda x: x.lower())
test['text'] = test['text'].apply(lambda x: x.strip())
x_test, x_test_aux, x_test_aux_2 = get_data_test(test, tokenizer, config['MAX_LEN'], preprocess_fn=preprocess_roberta_test)
```
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(last_hidden_state)
x = layers.Dropout(.1)(x)
x_start = layers.TimeDistributed(layers.Dense(1))(x)
x_start = layers.Flatten()(x_start)
y_start = layers.Activation('softmax', name='y_start')(x_start)
x_end = layers.TimeDistributed(layers.Dense(1))(x)
x_end = layers.Flatten()(x_end)
y_end = layers.Activation('softmax', name='y_end')(x_end)
model = Model(inputs=[input_ids, attention_mask], outputs=[y_start, y_end])
return model
```
# Make predictions
```
NUM_TEST_SAMPLES = len(test)
test_start_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))
test_end_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))
for model_path in model_path_list:
print(model_path)
model = model_fn(config['MAX_LEN'])
model.load_weights(model_path)
test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE']))
test_start_preds += test_preds[0]
test_end_preds += test_preds[1]
```
# Post process
```
test['start'] = test_start_preds.argmax(axis=-1)
test['end'] = test_end_preds.argmax(axis=-1)
test['text_len'] = test['text'].apply(lambda x : len(x))
test['text_wordCnt'] = test['text'].apply(lambda x : len(x.split(' ')))
test['end'].clip(0, test['text_len'], inplace=True)
test['start'].clip(0, test['end'], inplace=True)
test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
test['selected_text'] = test.apply(lambda x: x['text'] if (x['selected_text'] == '') else x['selected_text'], axis=1)
test['selected_text'].fillna(test['text'], inplace=True)
```
# Visualize predictions
```
display(test.head(10))
```
# Test set predictions
```
submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')
submission['selected_text'] = test['selected_text']
submission.to_csv('submission.csv', index=False)
submission.head(10)
```
# Introduction to Data Science – Regular Expressions
*COMP 5360 / MATH 4100, University of Utah, http://datasciencecourse.net/*
In this lecture we'll learn about regular expressions. Regular expressions are a way to match strings. They are very useful to find (and replace) text, to extract structured information such as e-mails, phone numbers, etc., or for cleaning up text that was entered by humans, and many other applications.
In Python, regular expressions are available as part of the [`re`](https://docs.python.org/3/library/re.html#module-re) module. There are various [good](https://docs.python.org/3/howto/regex.html) [tutorials](https://developers.google.com/edu/python/regular-expressions) available on which this document is partially based.
The basic syntax to search for a match in a string is this:
```python
match = re.search(pattern, text)
```
Here, `pattern` is the regular expression and `text` is the text that the regular expression is applied to. The returned `match` is a match object describing the first match, or `None` if no match was found.
[`search()`](https://docs.python.org/3/library/re.html#re.search) returns only the first occurrence of a match, in contrast, [`findall()`](https://docs.python.org/3/library/re.html#re.findall) returns all matches.
Another useful function is [`split()`](https://docs.python.org/3/library/re.html#re.split), which splits a string based on a regex pattern – we'll use all of these functions – and others where appropriate.
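As a quick side-by-side comparison of these three functions on a toy pattern:

```python
import re

text = "cat dog cat"

# search() returns a match object for the first occurrence only
print(re.search(r"cat", text).group())  # cat

# findall() returns every non-overlapping match as a list of strings
print(re.findall(r"cat", text))         # ['cat', 'cat']

# split() cuts the string wherever the pattern matches
print(re.split(r"\s", text))            # ['cat', 'dog', 'cat']
```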
## A simple Example
We'll use a regular expression:
```python
'animal:\w\w\w'
```
to match the substring `animal:` followed by three word characters, encoded by `\w\w\w`.
```
import re
# example text
text = "an example animal:cat!! animal:dog! animal:hedgehog"
# running the search, r before the string denotes a raw string
match = re.search(r"animal:\w\w\w", text)
# If-statement after search() tests if it succeeded
if match:
print ("found:", match.group())
else:
print ("did not find")
```
Here, the `r` before the string denotes that this should be treated as a raw string literal, i.e., that python shouldn't try to interpret the backslashes as escape characters, as it would, e.g., for `\n` – new line. This is quite useful for regular expressions, because we'd have to write the above query like this otherwise:
```
"animal:\\w\\w\\w"
```
The specific match can be retrieved using [`match.group()`](https://docs.python.org/3/library/re.html#re.Match.group).
## Basic Patterns
Ordinary characters, such as "`a, X, 9, <`" match themselves literally.
```
# search for an occurrence of "sc"
re.search(r"sc", "datascience").group()
# search for an occurrence of <
re.search(r"<", "data<science").group()
```
Special characters do not match themselves because they are part of the language. These are `. ^ $ * + ? { [ ] \ | ( )`.
```
# search for the beginning of the string, not the ^ symbol
re.search(r"^", "datascience^2").group()
```
We can escape special characters to match literally with a backslash `\`.
```
# search for the ^ symbol by escaping it
re.search(r"\^", "datascience^2").group()
```
A period `.` matches a single character, but not a newline character.
```
# search for the first single character
re.search(r".", "datascience.net").group()
```
`\w` matches a "word" character: a letter or digit or underbar `[a-zA-Z0-9_]`. Note that it only matches a single word char, not a whole word.
```
# search for the first word char
re.search(r"\w", "datascience").group()
# search for the first word char - note that < doesn't match
re.search(r"\w", "<datascience>").group()
```
`\W` (upper case W) matches any non-word character.
```
# search for the first non-word char
re.search(r"\W", "<datascience>").group()
```
`\s` matches a single whitespace character: space, newline `\n`, return `\r`, tab `\t`, and others.
```
# split by whitespace - searching for whitespace is boring
re.split(r"\s", "Intro datascience")
```
`\S` (upper case S) matches any non-whitespace character.
```
# search for first non-whitespace character
re.search(r"\S", " Intro datascience").group()
```
`\t`, `\n`, and `\r` match tab, newline, and return respectively.
```
# split the string based on tab \t
print("Intro\tdatascience 2021")
re.split(r"\t", "Intro\tdatascience 2021")
```
`\d` matches a decimal digit [0-9].
```
re.search(r"\d", "Intro datascience 2021").group()
```
`^` matches the start and `$` matches the end of the string. These are useful in context of a larger regular expressions, but not very useful in isolation.
### Repetition Qualifiers
A key concept in regex is repetition.
`+` matches 1 or more occurrences of the pattern to its left.
```
# this matches as much as it can
re.search(r"o+", "Introoooo datascience").group()
```
`*` matches 0 or more occurrences of the pattern on its left
```
# search for digits \d possibly separated by zero or more whitespace characters
re.search(r'\d\s*\d\s*\d', 'xx1 2 3xx').group()
# note that this also works if there are no whitespaces as * indicates 0-n matches
re.search(r'\d\s*\d\s*\d', 'xx123xx').group()
```
We can use this, for example to look for words starting with a certain character:
```
# d\w* start with a d, then match zero or more word characters
re.search(r"d\w*", "Introoooo datascience !").group()
```
`?` matches 0 or 1 occurrences of the pattern on its left:
```
# d\w? start with a d, then match zero or one characters. Why is the result "da" not "d"?
re.search(r"d\w?", "Introoooo datascience !").group()
```
This matches `da` not `d` because all these repetition qualifiers are greedy, i.e., match as much as possible. We'll talk more about this below.
Be aware that the zero or more condition can be tricky. For example, if we want to match a `dd` with `*` and do it like this, we get a zero match, because the **start of the string** already matches the "or zero" condition. The correct pattern here would be `d+`.
```
re.search(r"d*", "Introoooo ddatascience !").group()
re.search(r"d+", "Introoooo ddatascience !").group()
```
### Example: E-Mails
Let's take a look at how we can use regular expressions. Suppose you're a spammer and you want to scrape e-mail addresses from websites.
Here is an example:
```
html = 'You can reach me <a href="mailto:alex@sci.utah.edu">by e-mail</a> if necessary.'
# a first attempt:
# \w+ 1-n word letters,
# @ for the literal @
# 1-n word letters
re.search(r'\w+@\w+', html).group()
```
That didn't work because `.` doesn't match for `\w`. We can write a more specific query:
```
# \w+ 1-n word letters
# @
# \w+ 1-n word letters
# \. a period (escaped)
# \w+ 1-n word letters
# \. another period
# \w+ and more 1-n word letters
re.search(r'\w+@\w+\.+\w+\.\w+', html).group()
```
That worked! But it's easy to see that this isn't very general, i.e., it doesn't work for every legal e-mail.
```
html2 = 'You can reach me <a href="mailto:alex@utah.edu">by e-mail</a> if necessary.'
match = re.search(r'\w+@\w+\.+\w+\.\w+', html2)
if match:
print(match.group())
else:
print ("didn't match")
```
Here the e-mail alex@utah.edu wasn't matched at all.
```
html3 = "You can reach me <a href='mailto:alex-lex@sci.utah.edu'>by e-mail</a> if necessary."
# \w+ 1-n word letters, @,
match = re.search(r'\w+@\w+\.+\w+\.\w+', html3)
if match:
print(match.group())
else:
print ("didn't match")
```
Here, something matched but it's the wrong e-mail! It's not alex-lex@sci.utah.edu, but lex@sci.utah.edu.
To fix this, we need another concept:
## Sets of legal chars
We need another tool: **square brackets** `[]`. When using square brackets to enclose an expression, all the characters in the expression match:
```
#[\w.-]+ matches all strings that are made up of one or more word character, a period ., or dash - characters.
re.search(r'[\w.-]+@[\w.-]+', html).group()
re.search(r'[\w.-]+@[\w.-]+', html3).group()
```
That worked wonderfully! See how easy it is to extract an e-mail from a website.
Also note that we didn't escape the `.`. That's because inside square brackets, only `^`, `-`, `]`, and `\` need to be escaped; all others, like `.`, `*`, and `$`, are treated as literals.
However, this pattern matches valid e-mail addresses, but it also matches non-valid ones. So this is a fine regex if you want to extract e-mail addresses, but not if you want to validate an e-mail address:
```
html4 = "alexander@sci..."
re.search(r'[\w.-]+@[\w.-]+', html4).group()
```
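If validation rather than extraction is the goal, one common approach (a rough sketch, not a fully RFC-compliant validator) is to use `re.fullmatch` with a pattern that spells out the required structure after the `@`:

```python
import re

# Require word characters after the @, followed by one or more
# dot-separated labels; consecutive dots can no longer match.
valid = r'[\w.-]+@\w+(\.\w+)+'

print(bool(re.fullmatch(valid, "alex-lex@sci.utah.edu")))  # True
print(bool(re.fullmatch(valid, "alexander@sci...")))       # False
```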
## Grouping
If we want to be more specific about repeating substrings, for example, we need to be able to group a part of a regular expression. You can group with round brackets `()`:
```
# (da)+ gives us 1+ matches of the string "da", e.g., this will match da dada dadada, etc.
re.search(r"(da)+", "Introoooo dadadadascience 2016").group()
```
Groups are also a handy way to match a larger string, but only extract what is nested within a group. The [`group()`](https://docs.python.org/3/library/re.html#re.match.group) method we've been using provides access to matched groups independently. Here is an example of extracting a URL from a string:
```
url = 'Visit the course website <a href="http://datasciencecourse.net">here</a>'
# legal characters in a url are \w, :, slash /, period .
# we use the href="" part to identify only URLs contained within that attribute
# but we don't actually want to match that.
match = re.search(r'href="([\w:/.]+)"', url)
print("The whole match:", match.group())
# Here we retrieve the first individual group:
print("Only the match within the first group at index 1:", match.group(1))
```
## Find All Occurrences
Instead of finding only a single occurrence of a match, we can also find all occurrences. Here is an example:
```
findall_html = 'You can reach us at <a href=\"mailto:alex-lex@sci.utah.edu\">Alex\'s</a> ' \
'or <a href="mailto:little@math.utah.edu">Anna\'s</a> e-mail if necessary.'
e_mail_re = r'[\w.-]+@[\w.-]+'
re.findall(e_mail_re, findall_html)
```
You can also combine the findall with groups:
```
# separating username and domain
e_mail_re_groups = r'([\w.-]+)@([\w.-]+)'
re.findall(e_mail_re_groups, findall_html)
```
If we want to use parentheses only for logic, not for capturing, we can use the `(?:)` syntax (a non-capturing group):
```
re.findall(r'(?:[\w.-]+)@(?:[\w.-]+)', findall_html)
```
## Greedy vs Non-Greedy
By default, regular expressions are greedy. In this example, we try to match HTML tags:
```
html_tags = "The <b>amount and complexity</b> of information produced in <i>science</i>..."
# start with <, repeat any character 1-n times, close with >
re.findall("<.+>", html_tags)
```
This wasn't what we wanted – the greedy match extends from the first opening `<` all the way to the last closing `>`. We can modify this behavior with the `?` character, which signals that the expression on its left should not be greedy:
```
# start with <, repeat any character 1-n times in a non-greedy way, terminating at the first >
re.findall("<.+?>", html_tags)
```
Greedy applies to the `*`, `+` and `?` operators – so these are legal sequences: `*?`, `+?`, `??`.
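The greedy and non-greedy variants can be compared side by side:

```python
import re

s = "aaa"

print(re.search(r"a+", s).group())         # aaa  (greedy: as many as possible)
print(re.search(r"a+?", s).group())        # a    (non-greedy: as few as possible)
print(repr(re.search(r"a*?", s).group()))  # ''   (zero repetitions already satisfy *?)
print(repr(re.search(r"a??", s).group()))  # ''   (zero repetitions already satisfy ??)
```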
## Custom character subsets
You can also define custom character sets by specifying a range with a dash:
```
re.search(r"[2-9]+", "0123405").group()
```
When combined with character sets, we can use the `^` operator to invert a match.
```
re.search(r"[^0-2]+", "0123405").group()
```
## Specifying number of copies
`{m}` specifies that exactly m copies of the previous RE should be matched; fewer matches cause the entire RE not to match.
```
phone_numbers = "(857) 131-2235, (801) 134-2215, this is common in twelve (12) countries and one (1) state"
# match exactly three digits enclosed in parentheses
re.findall(r"\(([0-9]{3})\)", phone_numbers)
```
`{m,n}` specifies that m to n copies match:
```
# match two to three digits enclosed in parentheses
re.findall(r"\(([0-9]{2,3})\)", phone_numbers)
```
## Or expression
We can use the pipe `|` to define an or between any regular expression:
```
weekdays = "We could meet Monday or Wednesday"
re.findall("Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday", weekdays)
```
## Replacing strings
We can use the [`sub()`](https://docs.python.org/3/library/re.html#re.sub) to dynamically replace content.
```
re.sub("Monday|Tuesday|Wednesday|Thursday|Friday", "Weekday", weekdays)
```
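`sub()` also accepts a function as the replacement; the function receives each match object, which is handy when the substitution depends on the matched text:

```python
import re

weekdays = "We could meet Monday or Wednesday"

def abbreviate(match):
    # Keep the first three letters of the matched day name.
    return match.group()[:3] + '.'

print(re.sub(r"Monday|Tuesday|Wednesday|Thursday|Friday", abbreviate, weekdays))
# We could meet Mon. or Wed.
```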
## Other Functions
We've covered a lot, but not all of the functionality of regex. A couple of other functions that could be helpful:
* [finditer](https://docs.python.org/3/library/re.html#re.finditer) returns an iterator
* the [IGNORECASE](https://docs.python.org/3/library/re.html#re.IGNORECASE) option
* the [DOTALL](https://docs.python.org/3/library/re.html#re.DOTALL) option makes a . match a new line character too.
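A quick sketch of each:

```python
import re

# finditer returns an iterator of match objects, which also carry positions
for m in re.finditer(r"\d+", "10 apples, 3 pears"):
    print(m.start(), m.group())  # 0 10, then 11 3

# IGNORECASE makes matching case-insensitive
print(re.findall(r"data", "Data science DATA", re.IGNORECASE))  # ['Data', 'DATA']

# DOTALL lets . match newline characters as well
print(re.search(r"a.b", "a\nb", re.DOTALL).group())
```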
# Detect stalled training and stop training job using debugger rule
In this notebook, we'll show how you can use the StalledTrainingRule rule, which can take an action such as stopping your training job when it detects that the job has produced no updates for a threshold duration.
## How does StalledTrainingRule work?
Amazon SageMaker Debugger automatically captures tensors from training jobs that use the AWS Deep Learning Containers (TensorFlow, PyTorch, MXNet, XGBoost; [refer to the doc for supported versions](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/sagemaker.md#zero-script-change)). StalledTrainingRule watches for the emission of tensors such as the loss; rule execution happens outside of the training container. A healthy, non-stalled training job is expected to emit loss and metric tensors at frequent intervals. If the rule does not find new tensors being emitted from the training job for the threshold period of time, it automatically issues a StopTrainingJob action.
#### With no changes to your training script
If you use one of the SageMaker-provided [Deep Learning Containers](https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-containers-frameworks-deep-learning.html) ([refer to the doc for supported framework versions](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/sagemaker.md#zero-script-change)), you don't need to make any changes to your training script to activate this rule. Loss tensors will automatically be captured and monitored by the rule.
You can also emit tensors periodically by using the [save_scalar API of the hook](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/api.md#common-hook-api).
Also see an example of how to use the save_scalar API [here](https://github.com/awslabs/sagemaker-debugger/blob/master/examples/tensorflow2/scripts/tf_keras_fit_non_eager.py#L42)
```
! pip install -q sagemaker
import boto3
import os
import sagemaker
from sagemaker.tensorflow import TensorFlow
print(sagemaker.__version__)
from sagemaker.debugger import Rule, DebuggerHookConfig, TensorBoardOutputConfig, CollectionConfig
import smdebug_rulesconfig as rule_configs
# define the entrypoint script
# The script below sleeps for 5 minutes; we will create a StalledTrainingRule with a 120 second threshold.
entrypoint_script='src/simple_stalled_training.py'
# hyperparameters passed to our tensorflow mnist script
hyperparameters = {
"num_epochs": "10",
"lr": "10.00"
}
```
### Create unique training job prefix
We will create a unique training job name prefix. This prefix will be passed to StalledTrainingRule to identify which training job the rule should act on once the stalled-training condition is met.
Note that this prefix needs to be unique. If the rule doesn't find exactly one job with the provided prefix, it falls back to a safe mode and does not stop the training job. The rule will still emit a CloudWatch event if the rule condition is met. To see details about the CloudWatch event, check [here](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/sagemaker-debugger/tensorflow_action_on_rule/tf-mnist-stop-training-job.ipynb).
```
import time
print(int(time.time()))
# Note that SageMaker appends a date to your training job name and truncates the provided name to 39 characters. So we
# make sure to use fewer than 39 characters in the prefix below. Appending the time provides a unique id
base_job_name_prefix= 'smdebug-stalled-demo-' + str(int(time.time()))
base_job_name_prefix = base_job_name_prefix[:34]
print(base_job_name_prefix)
stalled_training_job_rule = Rule.sagemaker(
base_config={
'DebugRuleConfiguration': {
'RuleConfigurationName': 'StalledTrainingRule',
'RuleParameters': {'rule_to_invoke': 'StalledTrainingRule'}
}
},
rule_parameters={
'threshold': '120',
'training_job_name_prefix': base_job_name_prefix,
'stop_training_on_fire' : 'True'
},
)
estimator = TensorFlow(
role=sagemaker.get_execution_role(),
base_job_name=base_job_name_prefix,
train_instance_count=1,
train_instance_type='ml.m5.4xlarge',
entry_point=entrypoint_script,
#source_dir = 'src',
framework_version='1.15.0',
py_version='py3',
train_max_run=3600,
script_mode=True,
## New parameter
rules = [stalled_training_job_rule]
)
# After calling fit, SageMaker will spin off 1 training job and 1 rule job for you
# The rule evaluation status(es) will be visible in the training logs
# at regular intervals
# wait=False makes this a fire and forget function. To stream the logs in the notebook leave this out
estimator.fit(wait=True)
```
## Monitoring
SageMaker kicked off a rule evaluation job for `StalledTrainingRule`, as specified in the estimator.
Given that our training script stalls for longer than the rule's threshold, `StalledTrainingRule` is bound to fire and stop the training job. We should expect to see the `TrainingJobStatus` become
`Stopped` once the `RuleEvaluationStatus` for `StalledTrainingRule` changes to `IssuesFound`.
```
# rule job summary gives you the summary of the rule evaluations. You might have to run this cell
# a few times before you start to see all values populated/changing
estimator.latest_training_job.rule_job_summary()
# This utility gives the link to monitor the CW event
def _get_rule_job_name(training_job_name, rule_configuration_name, rule_job_arn):
"""Helper function to get the rule job name"""
return "{}-{}-{}".format(
training_job_name[:26], rule_configuration_name[:26], rule_job_arn[-8:]
)
def _get_cw_url_for_rule_job(rule_job_name, region):
return "https://{}.console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/ProcessingJobs;prefix={};streamFilter=typeLogStreamPrefix".format(region, region, rule_job_name)
def get_rule_jobs_cw_urls(estimator):
region = boto3.Session().region_name
training_job = estimator.latest_training_job
training_job_name = training_job.describe()["TrainingJobName"]
rule_eval_statuses = training_job.describe()["DebugRuleEvaluationStatuses"]
result={}
for status in rule_eval_statuses:
if status.get("RuleEvaluationJobArn", None) is not None:
rule_job_name = _get_rule_job_name(training_job_name, status["RuleConfigurationName"], status["RuleEvaluationJobArn"])
result[status["RuleConfigurationName"]] = _get_cw_url_for_rule_job(rule_job_name, region)
return result
get_rule_jobs_cw_urls(estimator)
```
After running the last two cells repeatedly until `StalledTrainingRule` reports `IssuesFound`, we'll describe the `TrainingJobStatus` for our training job.
```
estimator.latest_training_job.describe()["TrainingJobStatus"]
```
## Result
This notebook attempted to show a very simple setup of how you can use CloudWatch events for your training job to take action on rule evaluation status changes. Learn more about Amazon SageMaker Debugger in the [GitHub Documentation](https://github.com/awslabs/sagemaker-debugger).
# Rolling Regression
* [Pairs trading](https://www.quantopian.com/posts/pairs-trading-algorithm-1) is a famous technique in algorithmic trading that plays two stocks against each other.
* For this to work, stocks must be correlated (cointegrated).
* One common example is the price of gold (GLD) and the price of gold mining operations (GFI).
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
```
Let's load the prices of GFI and GLD.
```
# from pandas_datareader import data
# prices = data.GoogleDailyReader(symbols=['GLD', 'GFI'], end='2014-8-1').read().loc['Open', :, :]
prices = pd.read_csv(pm.get_data('stock_prices.csv')).dropna()
prices['Date'] = pd.DatetimeIndex(prices['Date'])
prices = prices.set_index('Date')
prices_zscored = (prices - prices.mean()) / prices.std()
prices.head()
```
Plotting the prices over time suggests a strong correlation. However, the correlation seems to change over time.
```
fig = plt.figure(figsize=(9, 6))
ax = fig.add_subplot(111, xlabel='Price GFI in \$', ylabel='Price GLD in \$')
colors = np.linspace(0.1, 1, len(prices))
mymap = plt.get_cmap("winter")
sc = ax.scatter(prices.GFI, prices.GLD, c=colors, cmap=mymap, lw=0)
cb = plt.colorbar(sc)
cb.ax.set_yticklabels([str(p.date()) for p in prices[::len(prices)//10].index]);
```
A naive approach would be to estimate a linear model and ignore the time domain.
```
with pm.Model() as model_reg:
pm.glm.GLM.from_formula('GLD ~ GFI', prices)
trace_reg = pm.sample(2000, tune=1000)
```
The posterior predictive plot shows how bad the fit is.
```
fig = plt.figure(figsize=(9, 6))
ax = fig.add_subplot(111, xlabel='Price GFI in \$', ylabel='Price GLD in \$',
title='Posterior predictive regression lines')
sc = ax.scatter(prices.GFI, prices.GLD, c=colors, cmap=mymap, lw=0)
pm.plot_posterior_predictive_glm(trace_reg[100:], samples=100,
label='posterior predictive regression lines',
lm=lambda x, sample: sample['Intercept'] + sample['GFI'] * x,
eval=np.linspace(prices.GFI.min(), prices.GFI.max(), 100))
cb = plt.colorbar(sc)
cb.ax.set_yticklabels([str(p.date()) for p in prices[::len(prices)//10].index]);
ax.legend(loc=0);
```
## Rolling regression
Next, we will build an improved model that will allow for changes in the regression coefficients over time. Specifically, we will assume that intercept and slope follow a random-walk through time. That idea is similar to the [stochastic volatility model](stochastic_volatility.ipynb).
$$ \alpha_t \sim \mathcal{N}(\alpha_{t-1}, \sigma_\alpha^2) $$
$$ \beta_t \sim \mathcal{N}(\beta_{t-1}, \sigma_\beta^2) $$
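For intuition, a coefficient following such a random walk can be simulated directly. Below is a minimal sketch; the value of `sigma` is purely illustrative, not a fitted quantity:

```python
import numpy as np

# Simulate alpha_t = alpha_{t-1} + Normal(0, sigma^2), starting from 0.
rng = np.random.default_rng(0)
sigma = 0.02                      # illustrative volatility (assumption)
increments = rng.normal(0.0, sigma, size=500)
alpha = np.cumsum(increments)     # the random-walk path of the coefficient
print(alpha.shape)
```

A small `sigma` yields slowly drifting coefficients; the Exponential hyper-priors defined next let the model infer this volatility from the data.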
First, let's define the hyper-priors for $\sigma_\alpha^2$ and $\sigma_\beta^2$. These parameters can be interpreted as the volatility of the regression coefficients.
```
model_randomwalk = pm.Model()
with model_randomwalk:
# std of random walk
sigma_alpha = pm.Exponential('sigma_alpha', 50.)
sigma_beta = pm.Exponential('sigma_beta', 50.)
alpha = pm.GaussianRandomWalk('alpha', sigma=sigma_alpha,
shape=len(prices))
beta = pm.GaussianRandomWalk('beta', sigma=sigma_beta,
shape=len(prices))
```
Perform the regression given coefficients and data and link to the data via the likelihood.
```
with model_randomwalk:
# Define regression
regression = alpha + beta * prices_zscored.GFI
# Assume prices are Normally distributed, the mean comes from the regression.
sd = pm.HalfNormal('sd', sigma=.1)
likelihood = pm.Normal('y',
mu=regression,
sigma=sd,
observed=prices_zscored.GLD)
```
Inference. Despite this being quite a complex model, NUTS handles it well.
```
with model_randomwalk:
trace_rw = pm.sample(tune=2000, cores=4,
target_accept=0.9)
```
Increasing the tree depth does help, but it makes sampling very slow; the results, however, look identical to this run.
## Analysis of results
As can be seen below, $\alpha$, the intercept, changes over time.
```
fig = plt.figure(figsize=(8, 6))
ax = plt.subplot(111, xlabel='time', ylabel='alpha', title='Change of alpha over time.')
ax.plot(trace_rw['alpha'].T, 'r', alpha=.05);
ax.set_xticklabels([str(p.date()) for p in prices[::len(prices)//5].index]);
```
As does the slope.
```
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, xlabel='time', ylabel='beta', title='Change of beta over time')
ax.plot(trace_rw['beta'].T, 'b', alpha=.05);
ax.set_xticklabels([str(p.date()) for p in prices[::len(prices)//5].index]);
```
The posterior predictive plot shows that we capture the change in regression over time much better. Note that we should have used returns instead of prices. The model would still work the same, but the visualisations would not be quite as clear.
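As an aside, converting prices to returns is a one-liner in pandas. A minimal sketch on made-up numbers (not the GLD/GFI data):

```python
import pandas as pd

# pct_change computes (p_t - p_{t-1}) / p_{t-1}; the first row is NaN and dropped.
toy_prices = pd.DataFrame({'GLD': [100.0, 102.0, 99.96],
                           'GFI': [10.0, 10.5, 10.08]})
toy_returns = toy_prices.pct_change().dropna()
print(toy_returns)
```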
```
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, xlabel='Price GFI in \$', ylabel='Price GLD in \$',
title='Posterior predictive regression lines')
colors = np.linspace(0.1, 1, len(prices))
colors_sc = np.linspace(0.1, 1, len(trace_rw[::10]['alpha'].T))
mymap = plt.get_cmap('winter')
mymap_sc = plt.get_cmap('winter')
xi = np.linspace(prices_zscored.GFI.min(), prices_zscored.GFI.max(), 50)
for i, (alpha, beta) in enumerate(zip(trace_rw[::15]['alpha'].T,
trace_rw[::15]['beta'].T)):
for a, b in zip(alpha[::30], beta[::30]):
ax.plot(xi, a + b*xi, alpha=.01, lw=1,
c=mymap_sc(colors_sc[i]))
sc = ax.scatter(prices_zscored.GFI, prices_zscored.GLD,
label='data', cmap=mymap, c=colors)
cb = plt.colorbar(sc)
cb.ax.set_yticklabels([str(p.date()) for p in prices_zscored[::len(prices)//10].index]);
#ax.set(ylim=(100, 190));
```
Author: Thomas Wiecki
```
%load_ext watermark
%watermark -n -u -v -iv -w
```
```
import pandas as pd
from bokeh.io import output_notebook, reset_output
from bokeh.plotting import figure, show, output_file
from bokeh.models import ColumnDataSource, HoverTool, NumeralTickFormatter, Label
from bokeh.palettes import Category10
import matplotlib.pyplot as plt
#import holoviews as hv
#hv.extension('bokeh')
#import hvplot.pandas
```
<h2>Making mass spectra interactive with bokeh</h2>
Visualizing peptide fragmentation mass spectra
Import an example of a fragment ion spectrum:<br>
a spectrum is a list of value pairs: a mass-to-charge ratio (abbreviated <i>m/z</i>) and the intensity of the signal for each ion
```
df = pd.read_csv('tmt_spectrum_example.csv', sep=',')
print(df.shape)
df.head(3)
```
Assign a precursor <i>m/z</i> and charge; they are often known and stored alongside the information about the fragment ions
```
precMZ = 939.88733
precCh = 5
```
<h3>Take a look at a static view of the spectrum using matplotlib</h3>
Matplotlib plots are highly customizable, which makes matplotlib the tool of choice for preparing publication-quality spectra. The <i>stem</i> method is handy for displaying mass spectra:
```
fig, ax = plt.subplots(1, 1, figsize=(15,4))
fig.suptitle('Peptide Fragmentation Mass Spectrum')
ax.stem( df['mz'], df['Intensity'], markerfmt=' ' )
ax.set_xlabel('m/z')
ax.set_ylabel('Intensity')
```
As you can see, the typical characteristics of a tandem mass spectrum are:
* High density of the <i>m/z</i> values, a spectrum has dozens or even hundreds of points
* Substantial differences in the intensity (height) of the signals
It is impossible to put all <i>m/z</i> labels onto the plot, but it is of primary interest for the scientists to see those values. An interactive highlighting of <i>m/z</i> would come very handy!
<h3>Render bokeh plot</h3>
```
mainTitle = 'Peptide Fragmentation Mass Spectrum'
cds = ColumnDataSource(data=df)
output_notebook()
#output_file('msms_tmt_bar.html')
def create_p(width=800, height=300):
tooltips = [
('m/z','@mz{0.0000}'),
('Int','@Intensity')
]
p = figure(
plot_width=width, plot_height=height,
title = mainTitle,
tools = 'xwheel_zoom,xpan,box_zoom,undo,reset',
tooltips=tooltips
)
return p
p = create_p()
p.vbar(
x = 'mz', top = 'Intensity',
source = cds,
color = '#324ea8',# alpha = 0.8,
width = 0.001
)
show(p)
```
But there's a problem with the bar plots: they have constant width. If we set the width to a meaningful value based on the actual uncertainty of the <i>m/z</i> measurement, the bars will be extremely narrow. And the hover tool does not work the way we would want it to!
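For a sense of scale, a physically meaningful width could be derived from the instrument's mass accuracy. A sketch assuming a hypothetical 20 ppm tolerance (the tolerance and the <i>m/z</i> values here are illustrative):

```python
# Bar width proportional to a hypothetical 20 ppm mass tolerance.
ppm = 20.0
mz_values = [500.2713, 939.88733, 1200.6031]
widths = [mz * ppm * 1e-6 for mz in mz_values]
print(widths)
```

Widths on the order of 0.01-0.02 <i>m/z</i> units explain why such bars become nearly invisible when viewing the full spectrum.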
```
p = figure(
plot_width=800, plot_height=300,
title = mainTitle
)
p.line(
x = 'mz', y = 'Intensity',
source = cds,
color = '#324ea8',# alpha = 0.8,
line_width = 2
)
show(p)
```
Let's modify the line so that it adopts the expected shape but stays continuous
```
#Triple the points on the m/z axis
mzTransformed = [ (x, x, x) for x in df['mz'] ]
#Flatten the list of tuples for the m/z axis
mzTransformed = [ x for y in mzTransformed for x in y ]
#Create the vertical bars for each intensity value
intensTransformed = [ (0, x, 0) for x in df['Intensity'] ]
#Flatten the list of tuples for the intensity axis
intensTransformed = [ x for y in intensTransformed for x in y ]
df2 = pd.DataFrame(
{
'mz': mzTransformed,
'Intensity': intensTransformed
}
)
df2.head(7)
df2.plot(x='mz', y='Intensity', figsize=(15, 4))
#output_file('msms_tmt_spectrum2.html')
cds = ColumnDataSource(data=df2)
p = create_p()
maxIntens = df2['Intensity'].max()
#Main line
p.line(
'mz', 'Intensity',
source = cds,
color = '#324ea8',# alpha = 0.8,
line_width = 2
)
#Add the precursor info as a dashed line with a label
def add_precursor(p, mz, charge, intens, col):
p.line(
[mz, mz], [0, intens*0.9],
line_dash = 'dashed', line_width = 4,
color = col, alpha = 0.5,
)
p.add_layout(
Label(
x = mz, y = intens*0.93,
text = f'Precursor {mz}, {charge}+',
text_font_size = '10pt',
text_color = col
)
)
add_precursor(p, precMZ, precCh, maxIntens, '#198c43')
#Format axis labels
def add_axis_labels(p):
p.xaxis.axis_label = 'Fragment m/z'
p.xaxis.axis_label_text_font_size = '10pt'
p.xaxis.major_label_text_font_size = '9pt'
p.yaxis.axis_label = 'Intensity'
p.yaxis.axis_label_text_font_size = '10pt'
p.yaxis.major_label_text_font_size = '9pt'
p.yaxis.formatter = NumeralTickFormatter(format='0.')
add_axis_labels(p)
show(p)
```
<h3>What if the signals are not all the same?</h3>
Load the same spectrum, this time with annotations
```
dfA = pd.read_csv('tmt_spectrum_annotated.csv', sep=',')
print(dfA.shape)
dfA.head(3)
```
There are 3 categories of signals:
```
dfA['Annotation'].unique()
mzTransformed = [ (x, x, x) for x in dfA['mz'] ]
mzTransformed = [ x for y in mzTransformed for x in y ]
intensTransformed = [ (0, x, 0) for x in dfA['Intensity'] ]
intensTransformed = [ x for y in intensTransformed for x in y ]
annotTransformed = [ (x, x, x) for x in dfA['Annotation'] ]
annotTransformed = [ x for y in annotTransformed for x in y ]
dfA2 = pd.DataFrame(
{
'mz': mzTransformed,
'Intensity': intensTransformed,
'Annotation': annotTransformed
}
)
dfA2.head(7)
#output_file('msms_tmt_spectrum_Cat.html')
#Number of categories
ncat = len( dfA['Annotation'].unique() )
#Create a separate ColumnDataSource for each categorical value
sources = []
for idx, cat in enumerate( dfA2['Annotation'].unique() ):
sources.append(
(
idx, cat,
ColumnDataSource(
data=dfA2[
dfA2['Annotation'] == cat
]
)
)
)
print(sources)
p = create_p()
maxIntens = dfA2['Intensity'].max()
#Create separate line for each annotation
for idxColor, cat, cds in sources:
#Assign colors from the Category10 palette
#If there are more than 10 categories, the colors will start to rotate
idxColor = idxColor % 10
p.line(
'mz', 'Intensity',
source = cds,
color = Category10[10][idxColor],
line_width = 2, alpha = 0.7,
legend_label=cat
)
#Add a thick horizontal line at y=0 to make the plot look cleaner
p.line(
x = [ dfA2['mz'].min(), dfA2['mz'].max() ],
y = [0, 0],
color = Category10[10][0],
line_width = 3
)
p.legend.location = 'top_right'
#Click on the legend item and the corresponding line will become hidden
p.legend.click_policy = 'hide'
p.legend.title = 'Signal Type'
add_precursor(p, precMZ, precCh, maxIntens, '#a31534')
add_axis_labels(p)
show(p)
```
<!-- <div style='float:right'><img width=200 src="hse-logo.jpg" alt="HSE logo"></img></div> -->
<div style='float:left'><img width=400 src="python_logo.png" alt="Python"></img></div>
<div style='float:right'>
<h1 align='center'>The Python Programming Language</h1>
<h2 align='right'>Stanislav Alekseevich Bober</h2>
<h3 align='right'>Senior Lecturer, Department of Applied Mathematics</h3>
<h3 align='right'>e-mail: sbober@hse.ru, stas.bober@gmail.com</h3>
</div>
<h1 align='center'>Seminar 1</h1>
# Seminar topics:
### 0. The open function
### 1. With ... as
### 2. The pickle module
### 3. String formatting, join
### 4. Working with text files
## 0. The open function
```
# main arguments of the open function
# open('path_to_file/file_name', mode='access_mode', encoding='encoding')
```
```
# open a text file for writing
f = open('example_0.txt', 'wt', encoding='utf-8')
text = 'If a photon heads toward a plate with two slits and there is a detector at one of them, there will be no interference. If there is no detector, there will be. If the detector is put back once the photon has left the plate but has not yet reached its final point, the interference disappears again.'
f.write(text) # write the contents of the string
f.close()
# load the text back to check it, using a jupyter magic command
%load example_0.txt
```
## 1. With ... as
```
# the same thing, but simpler
with open('example_0.txt', 'wt', encoding='utf-8') as f:
    f.write(text)
# read and print the contents of the file
with open('example_0.txt', 'rt', encoding='utf-8') as f:
    print(f.read())
```
## 2. Pickle
```
import pickle
# write a list of integers to a binary file
lst = list(range(500,510))
with open('example_1.bin', 'wb') as f:
    pickle.dump(lst, f)
# load the saved list
with open('example_1.bin', 'rb') as f:
    data = pickle.load(f)
data
# saving and loading more complex objects
d = {'list':[6.0, 7.3, 8.8], 'tuple':(0, 9, 8), 'string':'test_string'}
with open('example_2.bin', 'wb') as f:
    pickle.dump(d, f)
with open('example_2.bin', 'rb') as f:
    data = pickle.load(f)
data
```
## 3. String formatting, join
```
# recall C-style formatting
"A string can contain: an integer '%d', \
a float '%.1f', a string '%s'"%(1, 2.5, 'abcdef')
# more advanced formatting
"A string can contain: an integer '{i}', \
a float '{f}', a string '{s}'".format(i=1, f=2.5, s='abcdef')
'First element of the list: {lst[0]}, second: {lst[1]}, \
third: {lst[2]}'.format(lst=['a', 'bb', (3, 4)])
# multi-line text
lines = '''— Guess what happened?
— You were walking down a hallway, stumbled into an interdimensional portal
that threw you five thousand years into the future, where, taking advantage
of the technology, you built a time machine, and now you have come back
to take us with you to the year seven thousand ten, where telepathically
controlled flying dolphins will carry us to work at the thinkatorium?'''
print(lines)
lines
# the string method join concatenates a list of strings into one string
lst = ['How great!',
       'Look at me.',
       'I am in the real world of ordinary people,',
       'living their dull everyday lives.']
lst
s = '\n'.join(lst)
print(s)
s
# replace every occurrence of a substring with another string
print(s.replace('o', 'O'))
# remove every occurrence of a substring
print(s.replace('.', ''))
```
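For completeness, the same idea can also be written with an f-string (available since Python 3.6):

```python
i, f, s = 1, 2.5, 'abcdef'
msg = f"A string can contain: an integer '{i}', a float '{f:.1f}', a string '{s}'"
print(msg)
```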
## 4. Working with text files
### 4.1 Assignment
The text of a work by Lewis Carroll is in the file alice.txt (downloaded from: http://lib.ru/CARROLL/alice.txt).
Print the 20 most frequently occurring words in the text.
Skills exercised:
- loading data from a file
- using the split and replace methods
- working with lists and dictionaries
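A minimal sketch of one possible approach for assignment 4.1; it runs on an inline sample string here (for the real task, read the text from alice.txt instead):

```python
from collections import Counter

# Normalize the text, split it into words, and count the frequencies.
text = "Alice was beginning to get very tired, and Alice was very sleepy."
text = text.lower()
for ch in '.,!?;:"()':
    text = text.replace(ch, ' ')
counts = Counter(text.split())
print(counts.most_common(3))  # [('alice', 2), ('was', 2), ('very', 2)]
```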
### 4.2 Assignment
*Given a text file em16_edm_v00.tf containing frame definitions*
https://naif.jpl.nasa.gov/pub/naif/EXOMARS2016/kernels/fk/em16_edm_v00.tf
*Required:*
- extract the meaningful parts of the file and place them in a list of strings
- from each meaningful part, build a dictionary containing the frame name, its identifier, and its class (the result is a list of dictionaries)
- print the list of frame names and their count
- save the list of dictionaries to a binary file, load it back from the file, and print it
```
# for the example given above, the dictionary should look like this:
# {'name':'EDM_SURFACE_FIXED', 'id':-117901, 'class':4}
```
# Weekly Assignment 5a
## Loan Pham and Brandan Owens
Q.1 We'll be working with the 120 years of Olympic History dataset. Download the dataset “athlete_events.csv” and perform the following:
```
#import dataset and tools
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
olympic_data = pd.read_csv("../dataFiles/athlete_events.csv")
olympic_data
#Q.1.a Filter the DataFrame to only include the rows corresponding to medal winners from 2016. (In other words, drop the rows if 'Medal'==NaN)
olympic_filtered = olympic_data[olympic_data['Year'] >= 2016]
olympic_filtered = olympic_filtered[olympic_filtered['Medal'].notna()]
olympic_filtered
#Q.1.b Find out the number of medals awarded in 2016 for each sport.
awards_per_sport = olympic_filtered[['Sport','Year','Medal']]
awards_per_sport.groupby(['Sport','Year']).count()
#Q.1.c Filter the DataFrame one more time to only include the records for the top five sports based on the number of medals in 2016.
awards_count = awards_per_sport.groupby(['Sport','Year']).count()
awards_per_sport_sorted = awards_count.sort_values('Medal', ascending=False)
awards_per_sport_sorted.head(5)
#Q.1.d Generate a bar plot of record counts corresponding to each of the top five sports.
plot = awards_per_sport_sorted.head(5)
plot.plot.bar(ylabel= 'count')
#Q.1.e Generate a histogram for the “Age” of all medal winners in the top five sports (2016).
top_five = olympic_filtered.loc[olympic_filtered['Sport'].isin(['Athletics', 'Swimming', 'Rowing', 'Football', 'Hockey'])]
top_five = top_five[top_five['Year'] >= 2016]
medals_age = top_five[['Sport','Age','Medal']]
medals = ['Gold','Silver','Bronze']
medals_age.hist()
#Q.1.f Generate a bar plot of the medal count for each team among the top five sports (2016).
country_by_medal = top_five.groupby(['Team'])['Medal'].count()
country_by_medal.plot(kind='bar')
#Q.1.g Generate a bar plot indicating the average weight of players, categorized based on gender, winning in the top five sports in 2016.
ax = sns.barplot(x=top_five['Sport'], y=top_five['Weight'], hue=top_five['Sex'], data=top_five)
#Q.1.h Create a scatter plot with x=height and y=weight.
height_by_weight = top_five[['Weight','Height']]
ax = sns.scatterplot(x='Height', y='Weight', data=height_by_weight)
#Q.1.i Create a joint plot with x=height and y=weight.
ax = sns.jointplot(x='Height', y='Weight', data=height_by_weight)
#Q.1.j Create two violin plots: 1 for distribution of weight by genders & class of medal | 1 for distribution of height by genders & class of medal.
ax = sns.violinplot(x=top_five['Medal'], y=top_five['Weight'], hue=top_five['Sex'], data=top_five)
```
Q.2 We will work with the HPI dataset. The objective is to draw a bar plot depicting the number of countries in each region and a heatmap indicating the number of countries in various ranges of wellbeing and life-expectancy.
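As a quick refresher on how `pd.cut` bins values (it is used in Q.2.c below), here is a small sketch on toy numbers:

```python
import numpy as np
import pandas as pd

# pd.cut assigns each value to a half-open interval defined by the bin edges.
values = pd.Series([3.1, 4.9, 7.2])
binned = pd.cut(values, bins=np.linspace(2.5, 8.0, 12))  # 11 bins of width 0.5
print(binned)
```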
```
#Q.2.a Read from the dataset "hpi_data_countries.txt" (it is txt file separated by \t). Use pandas to read it.
hpi_df = pd.read_csv('../dataFiles/hpi_data_countries.txt', delimiter = "\t")
hpi_df
#Q.2.b Count the number of rows for each region to generate a bar chart.
region_count = hpi_df['Region'].value_counts()
region_count.plot(kind='bar')
#Q.2.c Use pd.cut to bin the values into different categories.
#Then generate a heatmap using cmap="Greens"
hpi_df["Wellbeing (0-10)(binned)"] = pd.cut(hpi_df["Wellbeing (0-10)"], bins=np.linspace(2.5,8,12))
hpi_df["Life Expectancy (years)(binned)"] = pd.cut(hpi_df["Life Expectancy (years)"], bins=np.linspace(45,85,9))
heat_df = hpi_df.groupby(["Wellbeing (0-10)(binned)", "Life Expectancy (years)(binned)"]).size().unstack(level=0)
heat_df = heat_df.sort_values(ascending=False, by="Life Expectancy (years)(binned)")
sns.heatmap(heat_df, cmap="Greens")
```
# Convolutional Neural Networks (LeNet)
:label:`sec_lenet`
By now, we have all the pieces required to assemble a fully functional convolutional neural network.
Recall that earlier we applied the softmax regression model (:numref:`sec_softmax_scratch`) and the multilayer perceptron (:numref:`sec_mlp_scratch`) to images of clothing in the Fashion-MNIST dataset.
To make softmax regression and multilayer perceptrons applicable, we first flattened each $28\times28$ image into a fixed-length 784-dimensional vector and then processed it with fully connected layers.
Now that we have a handle on convolutional layers, we can retain the spatial structure in our images.
An additional benefit of replacing fully connected layers with convolutional layers is a more parsimonious model that requires far fewer parameters.
In this section, we will introduce LeNet, among the first published convolutional neural networks to capture wide attention for its performance on computer vision tasks.
The model was introduced by (and named for) Yann LeCun, then a researcher at AT&T Bell Labs, for the purpose of recognizing handwritten digits in images :cite:`LeCun.Bottou.Bengio.ea.1998`.
At the time, Yann LeCun published the first study to successfully train convolutional neural networks via backpropagation, the culmination of more than a decade of neural network research and development.
Back then, LeNet achieved results comparable to those of support vector machines, a dominant approach in supervised learning at the time.
LeNet was widely used in automatic teller machines (ATMs) to help recognize digits when processing checks.
To this day, some ATMs still run the code that Yann LeCun and his colleague Leon Bottou wrote in the 1990s!
## LeNet
At a high level, (**LeNet (LeNet-5) consists of two parts:**) (~~a convolutional encoder and a dense block of fully connected layers~~)
* a convolutional encoder, consisting of two convolutional layers; and
* a dense block, consisting of three fully connected layers.
The architecture is summarized in :numref:`img_lenet`.
![Data flow in LeNet. The input is a handwritten digit and the output is a probability over 10 possible outcomes.](../img/lenet.svg)
:label:`img_lenet`
The basic unit in each convolutional block is a convolutional layer, a sigmoid activation function, and an average pooling layer. Note that while ReLUs and max-pooling work better, they had not yet been invented in the 1990s. Each convolutional layer uses a $5\times 5$ kernel and a sigmoid activation function. These layers map the input to multiple two-dimensional feature maps, typically increasing the number of channels along the way. The first convolutional layer has 6 output channels, while the second has 16. Each $2\times2$ pooling operation (stride 2) reduces dimensionality by a factor of 4 via spatial downsampling. The output shape of the convolutional block is given by (batch size, number of channels, height, width).
To pass the output of the convolutional block to the dense block, we must flatten each example in the minibatch. In other words, we transform this four-dimensional input into the two-dimensional input expected by fully connected layers: the first dimension indexes the examples in the minibatch and the second dimension gives the flat vector representation of each example. LeNet's dense block has three fully connected layers, with 120, 84, and 10 outputs, respectively. Because we are still performing classification, the 10-dimensional output layer corresponds to the number of possible output classes.
The LeNet code below will convince you that implementing such models with a deep learning framework is remarkably simple. We only need to instantiate a `Sequential` block and chain together the appropriate layers.
```
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l
npx.set_np()
net = nn.Sequential()
net.add(nn.Conv2D(channels=6, kernel_size=5, padding=2, activation='sigmoid'),
nn.AvgPool2D(pool_size=2, strides=2),
nn.Conv2D(channels=16, kernel_size=5, activation='sigmoid'),
nn.AvgPool2D(pool_size=2, strides=2),
        # By default, "Dense" automatically converts an input of shape
        # (batch size, channels, height, width) into one of shape
        # (batch size, channels * height * width)
nn.Dense(120, activation='sigmoid'),
nn.Dense(84, activation='sigmoid'),
nn.Dense(10))
```
We take a small liberty with the original model, removing the Gaussian activation in the final layer. Other than that, this network matches the original LeNet-5 architecture.
Next, we pass a single-channel (black and white) $28 \times 28$ image through LeNet. By printing the shape of the output at each layer, we can [**inspect the model**] to make sure its operations line up with what we expect from :numref:`img_lenet_vert`.
![Compressed notation for LeNet-5.](../img/lenet-vert.svg)
:label:`img_lenet_vert`
```
X = np.random.uniform(size=(1, 1, 28, 28))
net.initialize()
for layer in net:
X = layer(X)
print(layer.name, 'output shape:\t', X.shape)
```
Note that the height and width of the representation at each layer throughout the convolutional block is reduced compared with the previous layer.
The first convolutional layer uses 2 pixels of padding to compensate for the reduction in height and width that would otherwise result from using a $5 \times 5$ kernel.
In contrast, the second convolutional layer forgoes padding, and thus the height and width are both reduced by 4 pixels.
As we go up the stack of layers, the number of channels increases from 1 in the input, to 6 after the first convolutional layer, and 16 after the second.
Meanwhile, each pooling layer halves the height and width. Finally, each fully connected layer reduces dimensionality, until the final output has a dimension matching the number of classes.
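As a sanity check, these spatial sizes can be reproduced by hand with the standard convolution output-size formula, output = ⌊(input + 2·padding − kernel)/stride⌋ + 1. A small sketch:

```python
def conv_out(size, kernel, padding=0, stride=1):
    # Output size of a convolution or pooling operation along one dimension.
    return (size + 2 * padding - kernel) // stride + 1

s = conv_out(28, 5, padding=2)    # first conv:      28 -> 28
s = conv_out(s, 2, stride=2)      # first avg pool:  28 -> 14
s = conv_out(s, 5)                # second conv:     14 -> 10
s = conv_out(s, 2, stride=2)      # second avg pool: 10 -> 5
print(16 * s * s)                 # flattened size fed to the dense block: 400
```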
## Model Training
Now that we have implemented LeNet, let's see how [**LeNet fares on the Fashion-MNIST dataset**].
```
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=batch_size)
```
While convolutional neural networks have fewer parameters, they can still be more expensive to compute than similarly deep multilayer perceptrons, because each parameter participates in many more multiplications.
If you have access to a GPU, this might be a good time to put it to work to speed up training.
For evaluation, we need to [**make a slight modification to the `evaluate_accuracy` function**] described in :numref:`sec_softmax_scratch`.
Since the full dataset resides in main memory, we need to copy it to GPU memory before the model can use the GPU to compute on the dataset.
```
def evaluate_accuracy_gpu(net, data_iter, device=None): #@save
"""使用GPU计算模型在数据集上的精度"""
if not device: # 查询第一个参数所在的第一个设备
device = list(net.collect_params().values())[0].list_ctx()[0]
metric = d2l.Accumulator(2) # 正确预测的数量,总预测的数量
for X, y in data_iter:
X, y = X.as_in_ctx(device), y.as_in_ctx(device)
metric.add(d2l.accuracy(net(X), y), d2l.size(y))
return metric[0] / metric[1]
```
[**To use a GPU, we also need one more small modification**].
Unlike `train_epoch_ch3` defined in :numref:`sec_softmax_scratch`, we now need to move each minibatch of data to our designated device (e.g., a GPU) before running the forward and backward propagation.
The training function `train_ch6` below is likewise similar to `train_ch3` defined in :numref:`sec_softmax_scratch`.
Since we will be implementing networks with many layers going forward, we will rely primarily on the high-level API.
The following training function assumes a model created from the high-level API as input and optimizes accordingly.
We initialize the model parameters with the Xavier random initialization introduced in :numref:`subsec_xavier`.
Just as with fully connected layers, we use the cross-entropy loss function and minibatch stochastic gradient descent.
```
#@save
def train_ch6(net, train_iter, test_iter, num_epochs, lr, device):
"""用GPU训练模型(在第六章定义)"""
net.initialize(force_reinit=True, ctx=device, init=init.Xavier())
loss = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(),
'sgd', {'learning_rate': lr})
animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs],
legend=['train loss', 'train acc', 'test acc'])
timer, num_batches = d2l.Timer(), len(train_iter)
for epoch in range(num_epochs):
        metric = d2l.Accumulator(3)  # Sum of training loss, sum of training accuracy, no. of examples
for i, (X, y) in enumerate(train_iter):
timer.start()
            # Here is the major difference compared with 'd2l.train_epoch_ch3'
X, y = X.as_in_ctx(device), y.as_in_ctx(device)
with autograd.record():
y_hat = net(X)
l = loss(y_hat, y)
l.backward()
trainer.step(X.shape[0])
metric.add(l.sum(), d2l.accuracy(y_hat, y), X.shape[0])
timer.stop()
train_l = metric[0] / metric[2]
train_acc = metric[1] / metric[2]
if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
animator.add(epoch + (i + 1) / num_batches,
(train_l, train_acc, None))
test_acc = evaluate_accuracy_gpu(net, test_iter)
animator.add(epoch + 1, (None, None, test_acc))
print(f'loss {train_l:.3f}, train acc {train_acc:.3f}, '
f'test acc {test_acc:.3f}')
print(f'{metric[2] * num_epochs / timer.sum():.1f} examples/sec '
f'on {str(device)}')
```
Now, let's [**train and evaluate the LeNet-5 model**].
```
lr, num_epochs = 0.9, 10
train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
```
## Summary
* A convolutional neural network (CNN) is a class of networks that use convolutional layers.
* In a convolutional neural network, we interleave convolutional layers, nonlinear activation functions, and pooling layers.
* To construct high-performance convolutional neural networks, we typically arrange the convolutional layers so that the spatial resolution of the representations decreases gradually while the number of channels increases.
* In traditional convolutional neural networks, the representations encoded by the convolutional blocks are processed by one or more fully connected layers before producing output.
* LeNet was among the first published convolutional neural networks.
## Exercises
1. Replace the average pooling with max-pooling. What happens?
1. Try to construct a more complex network based on LeNet to improve its accuracy.
    1. Adjust the convolution window size.
    1. Adjust the number of output channels.
    1. Adjust the activation function (e.g., ReLU).
    1. Adjust the number of convolution layers.
    1. Adjust the number of fully connected layers.
    1. Adjust the learning rate and other training details (e.g., initialization and number of epochs).
1. Try out the improved network on the original MNIST dataset.
1. Display the activations of the first and second layers of LeNet for different inputs (e.g., sweaters and coats).
[Discussions](https://discuss.d2l.ai/t/1861)
```
!wget https://raw.githubusercontent.com/UniversalDependencies/UD_English-EWT/master/en_ewt-ud-dev.conllu
!wget https://raw.githubusercontent.com/UniversalDependencies/UD_English-EWT/master/en_ewt-ud-train.conllu
!wget https://raw.githubusercontent.com/UniversalDependencies/UD_English-EWT/master/en_ewt-ud-test.conllu
!pip install malaya -U
import malaya
import re
from malaya.texts._text_functions import split_into_sentences
from malaya.texts import _regex
import numpy as np
import itertools
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
tokenizer = malaya.preprocessing._tokenizer
splitter = split_into_sentences
def is_number_regex(s):
if re.match("^\d+?\.\d+?$", s) is None:
return s.isdigit()
return True
def preprocessing(w):
if is_number_regex(w):
return '<NUM>'
elif re.match(_regex._money, w):
return '<MONEY>'
elif re.match(_regex._date, w):
return '<DATE>'
elif re.match(_regex._expressions['email'], w):
return '<EMAIL>'
elif re.match(_regex._expressions['url'], w):
return '<URL>'
else:
w = ''.join(''.join(s)[:2] for _, s in itertools.groupby(w))
return w
word2idx = {'PAD': 0,'UNK':1, '_ROOT': 2}
tag2idx = {'PAD': 0, '_<ROOT>': 1}
char2idx = {'PAD': 0,'UNK':1, '_ROOT': 2}
word_idx = 3
tag_idx = 2
char_idx = 3
special_tokens = ['<NUM>', '<MONEY>', '<DATE>', '<URL>', '<EMAIL>']
for t in special_tokens:
word2idx[t] = word_idx
word_idx += 1
char2idx[t] = char_idx
char_idx += 1
word2idx, char2idx
PAD = "_PAD"
PAD_POS = "_PAD_POS"
PAD_TYPE = "_<PAD>"
PAD_CHAR = "_PAD_CHAR"
ROOT = "_ROOT"
ROOT_POS = "_ROOT_POS"
ROOT_TYPE = "_<ROOT>"
ROOT_CHAR = "_ROOT_CHAR"
END = "_END"
END_POS = "_END_POS"
END_TYPE = "_<END>"
END_CHAR = "_END_CHAR"
def process_corpus(corpus, until = None):
global word2idx, tag2idx, char2idx, word_idx, tag_idx, char_idx
sentences, words, depends, labels, pos, chars = [], [], [], [], [], []
temp_sentence, temp_word, temp_depend, temp_label, temp_pos = [], [], [], [], []
first_time = True
for sentence in corpus:
try:
if len(sentence):
if sentence[0] == '#':
continue
if first_time:
print(sentence)
first_time = False
sentence = sentence.split('\t')
for c in sentence[1]:
if c not in char2idx:
char2idx[c] = char_idx
char_idx += 1
if sentence[7] not in tag2idx:
tag2idx[sentence[7]] = tag_idx
tag_idx += 1
sentence[1] = preprocessing(sentence[1])
if sentence[1] not in word2idx:
word2idx[sentence[1]] = word_idx
word_idx += 1
temp_word.append(word2idx[sentence[1]])
temp_depend.append(int(sentence[6]))
temp_label.append(tag2idx[sentence[7]])
temp_sentence.append(sentence[1])
temp_pos.append(sentence[3])
else:
if len(temp_sentence) < 2 or len(temp_word) != len(temp_label):
temp_word = []
temp_depend = []
temp_label = []
temp_sentence = []
temp_pos = []
continue
words.append(temp_word)
depends.append(temp_depend)
labels.append(temp_label)
sentences.append( temp_sentence)
pos.append(temp_pos)
char_ = [[char2idx['_ROOT']]]
for w in temp_sentence:
if w in char2idx:
char_.append([char2idx[w]])
else:
char_.append([char2idx[c] for c in w])
chars.append(char_)
temp_word = []
temp_depend = []
temp_label = []
temp_sentence = []
temp_pos = []
except Exception as e:
print(e, sentence)
return sentences[:-1], words[:-1], depends[:-1], labels[:-1], pos[:-1], chars[:-1]
with open('en_ewt-ud-dev.conllu') as fopen:
dev = fopen.read().split('\n')
sentences_dev, words_dev, depends_dev, labels_dev, _, _ = process_corpus(dev)
with open('en_ewt-ud-test.conllu') as fopen:
test = fopen.read().split('\n')
sentences_test, words_test, depends_test, labels_test, _, _ = process_corpus(test)
sentences_test.extend(sentences_dev)
words_test.extend(words_dev)
depends_test.extend(depends_dev)
labels_test.extend(labels_dev)
with open('en_ewt-ud-train.conllu') as fopen:
train = fopen.read().split('\n')
sentences_train, words_train, depends_train, labels_train, _, _ = process_corpus(train)
len(sentences_train), len(sentences_test)
idx2word = {v:k for k, v in word2idx.items()}
idx2tag = {v:k for k, v in tag2idx.items()}
len(idx2word)
def generate_char_seq(batch, UNK = 2):
maxlen_c = max([len(k) for k in batch])
x = [[len(i) for i in k] for k in batch]
maxlen = max([j for i in x for j in i])
temp = np.zeros((len(batch),maxlen_c,maxlen),dtype=np.int32)
for i in range(len(batch)):
for k in range(len(batch[i])):
for no, c in enumerate(batch[i][k]):
temp[i,k,-1-no] = char2idx.get(c, UNK)
return temp
generate_char_seq(sentences_train[:5]).shape
pad_sequences(words_train[:5],padding='post').shape
train_X = words_train
train_Y = labels_train
train_depends = depends_train
train_char = sentences_train
test_X = words_test
test_Y = labels_test
test_depends = depends_test
test_char = sentences_test
class BiAAttention:
def __init__(self, input_size_encoder, input_size_decoder, num_labels):
self.input_size_encoder = input_size_encoder
self.input_size_decoder = input_size_decoder
self.num_labels = num_labels
self.W_d = tf.get_variable("W_d", shape=[self.num_labels, self.input_size_decoder],
initializer=tf.contrib.layers.xavier_initializer())
self.W_e = tf.get_variable("W_e", shape=[self.num_labels, self.input_size_encoder],
initializer=tf.contrib.layers.xavier_initializer())
self.U = tf.get_variable("U", shape=[self.num_labels, self.input_size_decoder, self.input_size_encoder],
initializer=tf.contrib.layers.xavier_initializer())
def forward(self, input_d, input_e, mask_d=None, mask_e=None):
batch = tf.shape(input_d)[0]
length_decoder = tf.shape(input_d)[1]
length_encoder = tf.shape(input_e)[1]
out_d = tf.expand_dims(tf.matmul(self.W_d, tf.transpose(input_d, [0, 2, 1])), 3)
out_e = tf.expand_dims(tf.matmul(self.W_e, tf.transpose(input_e, [0, 2, 1])), 2)
output = tf.matmul(tf.expand_dims(input_d, 1), self.U)
output = tf.matmul(output, tf.transpose(tf.expand_dims(input_e, 1), [0, 1, 3, 2]))
output = output + out_d + out_e
if mask_d is not None:
d = tf.expand_dims(tf.expand_dims(mask_d, 1), 3)
e = tf.expand_dims(tf.expand_dims(mask_e, 1), 2)
output = output * d * e
return output
class Model:
def __init__(
self,
dim_word,
dim_char,
dropout,
learning_rate,
hidden_size_char,
hidden_size_word,
num_layers
):
def cells(size, reuse = False):
return tf.contrib.rnn.DropoutWrapper(
tf.nn.rnn_cell.LSTMCell(
size,
initializer = tf.orthogonal_initializer(),
reuse = reuse,
),
output_keep_prob = dropout,
)
def luong(embedded, size):
attention_mechanism = tf.contrib.seq2seq.LuongAttention(
num_units = hidden_size_word, memory = embedded
)
return tf.contrib.seq2seq.AttentionWrapper(
cell = cells(hidden_size_word),
attention_mechanism = attention_mechanism,
attention_layer_size = hidden_size_word,
)
self.word_ids = tf.placeholder(tf.int32, shape = [None, None])
self.char_ids = tf.placeholder(tf.int32, shape = [None, None, None])
self.labels = tf.placeholder(tf.int32, shape = [None, None])
self.depends = tf.placeholder(tf.int32, shape = [None, None])
self.maxlen = tf.shape(self.word_ids)[1]
self.lengths = tf.count_nonzero(self.word_ids, 1)
self.mask = tf.math.not_equal(self.word_ids, 0)
float_mask = tf.cast(self.mask, tf.float32)
self.arc_h = tf.layers.Dense(hidden_size_word)
self.arc_c = tf.layers.Dense(hidden_size_word)
self.attention = BiAAttention(hidden_size_word, hidden_size_word, 1)
self.word_embeddings = tf.Variable(
tf.truncated_normal(
[len(word2idx), dim_word], stddev = 1.0 / np.sqrt(dim_word)
)
)
self.char_embeddings = tf.Variable(
tf.truncated_normal(
[len(char2idx), dim_char], stddev = 1.0 / np.sqrt(dim_char)
)
)
word_embedded = tf.nn.embedding_lookup(
self.word_embeddings, self.word_ids
)
char_embedded = tf.nn.embedding_lookup(
self.char_embeddings, self.char_ids
)
s = tf.shape(char_embedded)
char_embedded = tf.reshape(
char_embedded, shape = [s[0] * s[1], s[-2], dim_char]
)
for n in range(num_layers):
(out_fw, out_bw), (
state_fw,
state_bw,
) = tf.nn.bidirectional_dynamic_rnn(
cell_fw = cells(hidden_size_char),
cell_bw = cells(hidden_size_char),
inputs = char_embedded,
dtype = tf.float32,
scope = 'bidirectional_rnn_char_%d' % (n),
)
char_embedded = tf.concat((out_fw, out_bw), 2)
output = tf.reshape(
char_embedded[:, -1], shape = [s[0], s[1], 2 * hidden_size_char]
)
word_embedded = tf.concat([word_embedded, output], axis = -1)
for n in range(num_layers):
(out_fw, out_bw), (
state_fw,
state_bw,
) = tf.nn.bidirectional_dynamic_rnn(
cell_fw = luong(word_embedded, hidden_size_word),
cell_bw = luong(word_embedded, hidden_size_word),
inputs = word_embedded,
dtype = tf.float32,
scope = 'bidirectional_rnn_word_%d' % (n),
)
word_embedded = tf.concat((out_fw, out_bw), 2)
logits = tf.layers.dense(word_embedded, len(idx2tag))
log_likelihood, transition_params = tf.contrib.crf.crf_log_likelihood(
logits, self.labels, self.lengths
)
arc_h = tf.nn.elu(self.arc_h(word_embedded))
arc_c = tf.nn.elu(self.arc_c(word_embedded))
out_arc = tf.squeeze(self.attention.forward(arc_h, arc_h, mask_d=float_mask, mask_e=float_mask), axis = 1)
batch = tf.shape(out_arc)[0]
batch_index = tf.range(0, batch)
max_len = tf.shape(out_arc)[1]
sec_max_len = tf.shape(out_arc)[2]
minus_inf = -1e8
minus_mask = (1 - float_mask) * minus_inf
out_arc = out_arc + tf.expand_dims(minus_mask, axis = 2) + tf.expand_dims(minus_mask, axis = 1)
loss_arc = tf.nn.log_softmax(out_arc, dim=1)
loss_arc = loss_arc * tf.expand_dims(float_mask, axis = 2) * tf.expand_dims(float_mask, axis = 1)
num = tf.reduce_sum(float_mask) - tf.cast(batch, tf.float32)
child_index = tf.tile(tf.expand_dims(tf.range(0, max_len), 1), [1, batch])
t = tf.transpose(self.depends)
broadcasted = tf.broadcast_to(batch_index, tf.shape(t))
concatenated = tf.transpose(tf.concat([tf.expand_dims(broadcasted, axis = 0),
tf.expand_dims(t, axis = 0),
tf.expand_dims(child_index, axis = 0)], axis = 0))
loss_arc = tf.gather_nd(loss_arc, concatenated)
loss_arc = tf.transpose(loss_arc, [1, 0])[1:]
loss_arc = tf.reduce_sum(-loss_arc) / num
self.cost = tf.reduce_mean(-log_likelihood) + loss_arc
self.optimizer = tf.train.AdamOptimizer(
learning_rate = learning_rate
).minimize(self.cost)
mask = tf.sequence_mask(self.lengths, maxlen = self.maxlen)
self.tags_seq, _ = tf.contrib.crf.crf_decode(
logits, transition_params, self.lengths
)
out_arc = out_arc + tf.linalg.diag(tf.fill([max_len], -np.inf))
minus_mask = tf.expand_dims(tf.cast(1.0 - float_mask, tf.bool), axis = 2)
minus_mask = tf.tile(minus_mask, [1, 1, sec_max_len])
out_arc = tf.where(minus_mask, tf.fill(tf.shape(out_arc), -np.inf), out_arc)
self.heads = tf.argmax(out_arc, axis = 1)
self.prediction = tf.boolean_mask(self.tags_seq, mask)
mask_label = tf.boolean_mask(self.labels, mask)
correct_pred = tf.equal(self.prediction, mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
self.prediction = tf.cast(tf.boolean_mask(self.heads, mask), tf.int32)
mask_label = tf.boolean_mask(self.depends, mask)
correct_pred = tf.equal(self.prediction, mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy_depends = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.reset_default_graph()
sess = tf.InteractiveSession()
dim_word = 128
dim_char = 256
dropout = 1.0
learning_rate = 1e-3
hidden_size_char = 128
hidden_size_word = 128
num_layers = 2
model = Model(dim_word,dim_char,dropout,learning_rate,hidden_size_char,hidden_size_word,num_layers)
sess.run(tf.global_variables_initializer())
batch_x = train_X[:5]
batch_x = pad_sequences(batch_x,padding='post')
batch_char = train_char[:5]
batch_char = generate_char_seq(batch_char)
batch_y = train_Y[:5]
batch_y = pad_sequences(batch_y,padding='post')
batch_depends = train_depends[:5]
batch_depends = pad_sequences(batch_depends,padding='post')
sess.run([model.accuracy, model.accuracy_depends, model.cost],
feed_dict = {model.word_ids: batch_x,
model.char_ids: batch_char,
model.labels: batch_y,
model.depends: batch_depends})
from tqdm import tqdm
batch_size = 32
epoch = 15
for e in range(epoch):
train_acc, train_loss = [], []
test_acc, test_loss = [], []
train_acc_depends, test_acc_depends = [], []
pbar = tqdm(
range(0, len(train_X), batch_size), desc = 'train minibatch loop'
)
for i in pbar:
index = min(i + batch_size, len(train_X))
batch_x = train_X[i: index]
batch_x = pad_sequences(batch_x,padding='post')
batch_char = train_char[i: index]
batch_char = generate_char_seq(batch_char)
batch_y = train_Y[i: index]
batch_y = pad_sequences(batch_y,padding='post')
batch_depends = train_depends[i: index]
batch_depends = pad_sequences(batch_depends,padding='post')
acc_depends, acc, cost, _ = sess.run(
[model.accuracy_depends, model.accuracy, model.cost, model.optimizer],
feed_dict = {
model.word_ids: batch_x,
model.char_ids: batch_char,
model.labels: batch_y,
model.depends: batch_depends
},
)
train_loss.append(cost)
train_acc.append(acc)
train_acc_depends.append(acc_depends)
pbar.set_postfix(cost = cost, accuracy = acc, accuracy_depends = acc_depends)
pbar = tqdm(
range(0, len(test_X), batch_size), desc = 'test minibatch loop'
)
for i in pbar:
index = min(i + batch_size, len(test_X))
batch_x = test_X[i: index]
batch_x = pad_sequences(batch_x,padding='post')
batch_char = test_char[i: index]
batch_char = generate_char_seq(batch_char)
batch_y = test_Y[i: index]
batch_y = pad_sequences(batch_y,padding='post')
batch_depends = test_depends[i: index]
batch_depends = pad_sequences(batch_depends,padding='post')
acc_depends, acc, cost = sess.run(
[model.accuracy_depends, model.accuracy, model.cost],
feed_dict = {
model.word_ids: batch_x,
model.char_ids: batch_char,
model.labels: batch_y,
model.depends: batch_depends
},
)
test_loss.append(cost)
test_acc.append(acc)
test_acc_depends.append(acc_depends)
pbar.set_postfix(cost = cost, accuracy = acc, accuracy_depends = acc_depends)
print(
'epoch: %d, training loss: %f, training acc: %f, training depends: %f, valid loss: %f, valid acc: %f, valid depends: %f\n'
% (e, np.mean(train_loss),
np.mean(train_acc),
np.mean(train_acc_depends),
np.mean(test_loss),
np.mean(test_acc),
np.mean(test_acc_depends)
))
def evaluate(heads_pred, types_pred, heads, types, lengths,
symbolic_root=False, symbolic_end=False):
    batch_size, _ = heads_pred.shape
ucorr = 0.
lcorr = 0.
total = 0.
ucomplete_match = 0.
lcomplete_match = 0.
corr_root = 0.
total_root = 0.
start = 1 if symbolic_root else 0
end = 1 if symbolic_end else 0
for i in range(batch_size):
ucm = 1.
lcm = 1.
for j in range(start, lengths[i] - end):
total += 1
if heads[i, j] == heads_pred[i, j]:
ucorr += 1
if types[i, j] == types_pred[i, j]:
lcorr += 1
else:
lcm = 0
else:
ucm = 0
lcm = 0
if heads[i, j] == 0:
total_root += 1
corr_root += 1 if heads_pred[i, j] == 0 else 0
ucomplete_match += ucm
lcomplete_match += lcm
return (ucorr, lcorr, total, ucomplete_match, lcomplete_match), \
(corr_root, total_root), batch_size
tags_seq, heads = sess.run(
[model.tags_seq, model.heads],
feed_dict = {
model.word_ids: batch_x,
model.char_ids: batch_char
},
)
tags_seq[0], heads[0], batch_depends[0]
def evaluate(heads_pred, types_pred, heads, types, lengths,
symbolic_root=False, symbolic_end=False):
batch_size, _ = heads_pred.shape
ucorr = 0.
lcorr = 0.
total = 0.
ucomplete_match = 0.
lcomplete_match = 0.
corr_root = 0.
total_root = 0.
start = 1 if symbolic_root else 0
end = 1 if symbolic_end else 0
for i in range(batch_size):
ucm = 1.
lcm = 1.
for j in range(start, lengths[i] - end):
total += 1
if heads[i, j] == heads_pred[i, j]:
ucorr += 1
if types[i, j] == types_pred[i, j]:
lcorr += 1
else:
lcm = 0
else:
ucm = 0
lcm = 0
if heads[i, j] == 0:
total_root += 1
corr_root += 1 if heads_pred[i, j] == 0 else 0
ucomplete_match += ucm
lcomplete_match += lcm
return ucorr / total, lcorr / total, corr_root / total_root
arc_accuracy, type_accuracy, root_accuracy = evaluate(heads, tags_seq, batch_depends, batch_y,
np.count_nonzero(batch_x, axis = 1))
arc_accuracy, type_accuracy, root_accuracy
arcs, types, roots = [], [], []
pbar = tqdm(
range(0, len(test_X), batch_size), desc = 'test minibatch loop'
)
for i in pbar:
index = min(i + batch_size, len(test_X))
batch_x = test_X[i: index]
batch_x = pad_sequences(batch_x,padding='post')
batch_char = test_char[i: index]
batch_char = generate_char_seq(batch_char)
batch_y = test_Y[i: index]
batch_y = pad_sequences(batch_y,padding='post')
batch_depends = test_depends[i: index]
batch_depends = pad_sequences(batch_depends,padding='post')
tags_seq, heads = sess.run(
[model.tags_seq, model.heads],
feed_dict = {
model.word_ids: batch_x,
model.char_ids: batch_char
},
)
arc_accuracy, type_accuracy, root_accuracy = evaluate(heads, tags_seq, batch_depends, batch_y,
np.count_nonzero(batch_x, axis = 1))
pbar.set_postfix(arc_accuracy = arc_accuracy, type_accuracy = type_accuracy,
root_accuracy = root_accuracy)
arcs.append(arc_accuracy)
types.append(type_accuracy)
roots.append(root_accuracy)
print('arc accuracy:', np.mean(arcs))
print('types accuracy:', np.mean(types))
print('root accuracy:', np.mean(roots))
```
# Tutorial 2: Part A
## `Fraction`
### Author: Vedant Prakash Shenoy
`fractions` is a library in the Python Standard Library that allows us to define (no points for guessing) fractions. It lets us calculate HCFs and LCMs, do the normal arithmetic operations, and use different types of formatting (e.g., improper fractions <-> mixed fractions).
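Before we build our own, here is a quick taste of the standard-library version (purely illustrative; everything below builds the class from scratch):

```
from fractions import Fraction
from math import gcd

# Arithmetic keeps exact rational results
a = Fraction(1, 2) + Fraction(1, 3)   # 5/6
# Fractions are automatically reduced to lowest terms
b = Fraction(2, 4)                    # 1/2
# gcd is the workhorse behind that reduction step
g = gcd(2, 4)                         # 2
```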
To illustrate how we can leverage classes, let us try to make a similar class for fractions ourselves!
At any point, if you are confused about what to do, consult the [holy archives](https://www.google.com)
### Before Attempting this Tutorial:
1. Revise what classes are (Lecture 3) and what dunder methods do (operator overloading)
2. Watch [Python Class Development Toolkit](https://www.youtube.com/watch?v=HTLu2DFOdTg)
### During this Tutorial
1. Fill in the code to do what is described in the instructions.
2. Write your own test cases.
3. Try to think of different ways to implement the features that we want our class to have
We will cover the following concepts in roughly the same order:
1. Data Attributes and Methods (including `__init__` and `__repr__`)
2. Input validation and Exception Handling
3. `@property` and `@property.setter`
4. More dunder methods: arithmetic operations, relational operations and absolute value
5. `@classmethods` and alternate constructors
If you see something new in the above list, revise Lecture 3 and watch the video linked above.
### After this Tutorial
1. Notice how we incrementally develop the class `Fraction`, and find new test cases that break our code, and continue the process iteratively. This testing can be automated using 'Unit Testing'. Try to find out how this is done in Python; it may be useful for you in your project!
2. Use the concepts you have learnt here, and try to apply them in a field you are interested in.
3. Go watch this video to find out when you should not be using classes: [Stop Writing Classes](https://www.youtube.com/watch?v=o9pEzgHorH0)
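On point 1 above, a minimal sketch of what such automated tests could look like with the built-in `unittest` module. The stdlib `fractions.Fraction` stands in here for the class you will build, so swap in your own:

```
import unittest
from fractions import Fraction  # stand-in; replace with your own class


class TestFraction(unittest.TestCase):
    def test_reduction(self):
        # Equivalent fractions should compare equal
        self.assertEqual(Fraction(2, 4), Fraction(1, 2))

    def test_zero_denominator(self):
        # Constructing with den == 0 must raise
        with self.assertRaises(ZeroDivisionError):
            Fraction(1, 0)


if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestFraction)
    unittest.TextTestRunner(verbosity=2).run(suite)
```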
---
To start off with, we need to implement the concept of a fraction. At its core, a fraction simply has a numerator (`num`) and a denominator (`den`); and dividing them gives the decimal value (`decimal`)
We'll also add a pretty `repr` so that we can see what the fraction is.
```
class Fraction:
"""Class to implement fractions"""
def __init__(self, num, den):
"""Initialize instance using numerator and denominator"""
self.num = num
self.den = den
def decimal(self):
return self.num/self.den
def __repr__(self):
return f"{self.num}/{self.den} = {self.decimal()}"
f = Fraction(1, 2)
print(f.num, f.den, f.decimal())
Fraction(1, 3)
Fraction(-1, 3)
```
Looks good so far. However:
```
Fraction(1.5, 1)
Fraction(1, -3)
Fraction(0, 1)
Fraction(1, 0)
```
---
What happened in those last two cells? There are three basic properties of fractions that we haven't yet implemented:
1. `num` and `den` are integers,
2. `den` can't be zero,
3. the sign is written next to the numerator.
Let us modify the class to reflect this as well.
```
class Fraction:
"""Class to implement fractions"""
def __init__(self, num, den):
"""Initialize instance using numerator and denominator"""
# Modify the __init__ function from above to do the checks that we wanted.
if not (isinstance(num, int) and isinstance(den, int)):
raise TypeError(f"`num` and `den` must both be integers, not '{type(num).__name__}' and '{type(den).__name__}'")
if den == 0:
raise ZeroDivisionError(f"`den` must not be 0")
self.num = num
if den < 0:
self.num = -self.num
self.den = abs(den)
def decimal(self):
return self.num/self.den
def __repr__(self):
return f"{self.num}/{self.den} = {self.decimal()}"
# Expected: a TypeError (since `num` is not an integer)
Fraction(1.2, 3)
# Expected: ZeroDivisionError (since `den` is zero)
Fraction(1, 0)
# Expected: Fraction -1/2 and not 1/-2
Fraction(1, -2)
```
---
Our problems with wrong input types and values are now solved! We can now move on to implementing operations, right?
However, a new bug shows up!
```
f = Fraction(1, 2)
f.num = 3.2
f.den = 1.2
f
```
Python allows for attributes to be accessed and changed (there are no private or protected variables here).
This means that if the user changes the numerator and denominator later on, the checks that we did in `__init__` are no longer done.
How can we solve this? Simple, rename `num` as `_num` and so on. This is a convention that lets people know not to mess with that attribute.
But then, the value of `num` is not accessible directly. We could write a method `num()` to return the value of `self._num`?
A solution that lets us keep the earlier API (and shows that `num` is a data attribute or a property rather than a method) is to use the `@property` decorator. Complete the class definition below:
```
class Fraction:
"""Class to implement fractions"""
def __init__(self, num, den):
"""Initialize instance using numerator and denominator"""
# Copy your _init__ from above
if not (isinstance(num, int) and isinstance(den, int)):
raise TypeError(f"`num` and `den` must both be integers, not '{type(num).__name__}' and '{type(den).__name__}'")
if den == 0:
raise ZeroDivisionError(f"`den` must not be 0")
self._num = num
if den < 0:
self._num = -self.num
self._den = abs(den)
def decimal(self):
return self.num/self.den
@property
def num(self):
return self._num
@property
def den(self):
return self._den
def __repr__(self):
return f"{self._num}/{self._den} = {self.decimal()}"
f = Fraction(1, 2)
f.num, f.den
f.num = 2
f
```
---
This solves our problem of worrying about the user breaking our code. However, keep in mind that if they want to, they still can (change `f._num` instead in the cell above)
Note how this varies somewhat from the `C++ Way of Doing Things`<sup>TM</sup>, where you might write a `get_num()` method to get the value and a `set_num()` method to set the value (if you wish to implement such a feature).
The idiomatic way to do it in Python is to use `@property` and `@property.setter`. Again, we show the process for `num`. Complete the code for `den` to make `Fraction` a mutable, consistent container for fractions.
```
class Fraction:
"""Class to implement fractions"""
def __init__(self, num, den):
"""Initialize instance using numerator and denominator"""
# Copy __init__ from above
if not (isinstance(num, int) and isinstance(den, int)):
raise TypeError(f"`num` and `den` must both be integers, not '{type(num).__name__}' and '{type(den).__name__}'")
if den == 0:
raise ZeroDivisionError(f"`den` must not be 0")
self._num = num
if den < 0:
self._num = -self.num
self._den = abs(den)
def decimal(self):
return self.num/self.den
@property
def num(self):
return self._num
@num.setter
def num(self, value):
if isinstance(value, int):
self._num = value
else:
raise TypeError(f"`num` must be of type 'int', not '{type(value).__name__}'")
# Put in @property and @property.setter for den as well
@property
def den(self):
return self._den
@den.setter
def den(self, value):
if not isinstance(value, int):
raise TypeError(f"`den` must be of type 'int', not '{type(value).__name__}'")
elif value == 0:
raise ZeroDivisionError(f"`den` must not be 0")
else:
self._den = abs(value)
if value < 0:
self._num = -self._num
def __repr__(self):
return f"{self._num}/{self._den} = {self.decimal()}"
f = Fraction(1, 2)
f.num = 2
f.den = -1
f
```
---
At this point, we have a class `Fraction` which has all the checks we wanted, is mutable, and remains consistent at all times.
Let us take the plunge and add more features.
## Feature Request
Implement the following features into the class `Fraction`
1. A method to get the simplest form of a fraction (2/4 --> 1/2)
2. Modify `Fraction.decimal()` to take an optional argument `fmt`, which if provided returns a formatted string with the given format. For example:
f = Fraction(1, 3)
print(f.decimal())
# Out: 0.3333333333333333
print(f.decimal('4.2f'))
# Out: 0.33
    print(f.decimal('.2e'))
# Out: 3.33e-01
3. Define the `abs` of a `Fraction`
4. Make the `repr` a bit more expressive: include the class name. When someone prints an instance of `Fraction`, only show the `num`/`den` form. On the interactive console, this should look something like:
In : f = Fraction(-2, 4)
In : f
Out: Fraction: -2/4 = -0.5
In : print(f)
-2/4
(Hint: Look up the dunder method `__str__` and how it differs from `__repr__`)
```
from math import gcd
class Fraction:
"""Class to implement fractions"""
def __init__(self, num, den):
"""Initialize instance using numerator and denominator"""
# You know what to do here.
if not (isinstance(num, int) and isinstance(den, int)):
raise TypeError(f"`num` and `den` must both be integers, not '{type(num).__name__}' and '{type(den).__name__}'")
if den == 0:
raise ZeroDivisionError(f"`den` must not be 0")
self._num = num
if den < 0:
self._num = -self.num
self._den = abs(den)
def decimal(self, fmt=None):
"""Format the decimal output according to fmt and return a string. If not passed, return a float."""
# You can maybe try to use `if fmt is None:`
if fmt is None:
return self.num/self.den
else:
return f"{self.num/self.den:{fmt}}"
def simple(self):
"""Return a Fraction instance with num and den in simplest form"""
factor = gcd(self.num, self.den)
return self.__class__(self.num//factor, self.den//factor)
@property
def num(self):
return self._num
@num.setter
def num(self, value):
if isinstance(value, int):
self._num = value
else:
raise TypeError(f"`num` must be of type 'int', not '{type(value).__name__}'")
@property
def den(self):
return self._den
@den.setter
def den(self, value):
if not isinstance(value, int):
raise TypeError(f"`den` must be of type 'int', not '{type(value).__name__}'")
elif value == 0:
raise ZeroDivisionError(f"`den` must not be 0")
else:
self._den = abs(value)
if value < 0:
self._num = -self._num
def __repr__(self):
return f"{self.__class__.__name__}: {self._num}/{self._den} = {self.decimal()}"
def __str__(self):
return f"{self._num}/{self._den}"
def __abs__(self):
return self.__class__(abs(self.num), self.den)
f = Fraction(-2, 4)
f
print(f)
f.simple()
f.decimal('0.3f')
abs(f)
```
---
## Feature Request
Implement the following features into the class `Fraction`
1. The operations for addition, subtraction, multiplication, and division (as per the usual definitions for fractions). Return an instance of `Fraction` with the numerator and denominator in the simplest form.
2. Define the relational operators (`==`, `>`, `<`, `>=`, `<=`, `!=`) for fractions.
3. Make all the above operations compatible with normal numbers (`int` and `float` data types) as well. Use the test case
Fraction(4, 100) == 0.16 - 0.12
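A tip for point 2: `functools.total_ordering` lets you define only `__eq__` and `__lt__` and derives the remaining comparisons for you. A toy sketch (`Frac` here is illustrative, not the full `Fraction` from this tutorial, and assumes positive denominators):

```
from functools import total_ordering

@total_ordering
class Frac:
    def __init__(self, num, den):
        self.num, self.den = num, den

    def __eq__(self, other):
        # Cross-multiplication avoids float round-off entirely
        return self.num * other.den == other.num * self.den

    def __lt__(self, other):
        return self.num * other.den < other.num * self.den

# >, >=, <=, != all come for free from the decorator
print(Frac(1, 2) < Frac(2, 3), Frac(1, 2) >= Frac(2, 3))  # True False
```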
```
from math import gcd, isclose
# Copy the class definition from above and add __dunder__ methods to do all these things.
# If you do not implement the compatibility with `float`,
# then raise an exception, so that the user knows not to do it.
class Fraction:
"""Class to implement fractions"""
def __init__(self, num, den):
"""Initialize instance using numerator and denominator"""
if not (isinstance(num, int) and isinstance(den, int)):
raise TypeError(f"`num` and `den` must both be integers, not '{type(num).__name__}' and '{type(den).__name__}'")
if den == 0:
raise ZeroDivisionError(f"`den` must not be 0")
self._num = num
if den < 0:
self._num = -self.num
self._den = abs(den)
def decimal(self, fmt=None):
if fmt is None:
return self.num/self.den
else:
return f"{self.num/self.den:{fmt}}"
def simple(self):
factor = gcd(self.num, self.den)
return self.__class__(self.num//factor, self.den//factor)
@property
def num(self):
return self._num
@num.setter
def num(self, value):
if isinstance(value, int):
self._num = value
else:
raise TypeError(f"`num` must be of type 'int', not '{type(value).__name__}'")
@property
def den(self):
return self._den
@den.setter
def den(self, value):
if not isinstance(value, int):
raise TypeError(f"`den` must be of type 'int', not '{type(value).__name__}'")
elif value == 0:
raise ZeroDivisionError(f"`den` must not be 0")
else:
self._den = abs(value)
if value < 0:
self._num = -self._num
def __repr__(self):
return f"{self.__class__.__name__}: {self._num}/{self._den} = {self.decimal()}"
def __str__(self):
return f"{self._num}/{self._den}"
def __abs__(self):
return self.__class__(abs(self.num), self.den)
def __eq__(self, other):
if isinstance(other, self.__class__):
return self.num * other.den == other.num * self.den
elif isinstance(other, (int, float)):
return isclose(self.decimal(), other)
def __lt__(self, other):
if isinstance(other, self.__class__):
return self.num * other.den < other.num * self.den
elif isinstance(other, (int, float)):
return self.decimal() < other and not self == other
def __gt__(self, other):
if isinstance(other, self.__class__):
return self.num * other.den > other.num * self.den
elif isinstance(other, (int, float)):
return self.decimal() > other and not self == other
def __le__(self, other):
if isinstance(other, self.__class__):
return self.num * other.den <= other.num * self.den
elif isinstance(other, (int, float)):
return self.decimal() <= other or self == other
def __ge__(self, other):
if isinstance(other, self.__class__):
return self.num * other.den >= other.num * self.den
elif isinstance(other, (int, float)):
return self.decimal() >= other or self == other
def __add__(self, other):
if isinstance(other, self.__class__):
num = self.num * other.den + other.num * self.den
den = self.den * other.den
return self.__class__(num, den).simple()
elif isinstance(other, int):
return self.__class__(self.num + other * self.den, self.den).simple()
def __sub__(self, other):
if isinstance(other, self.__class__):
num = self.num * other.den - other.num * self.den
den = self.den * other.den
return self.__class__(num, den).simple()
elif isinstance(other, int):
return self.__class__(self.num - other * self.den, self.den).simple()
def __mul__(self, other):
if isinstance(other, self.__class__):
num = self.num * other.num
den = other.den * self.den
return self.__class__(num, den).simple()
elif isinstance(other, int):
return self.__class__(self.num * other, self.den).simple()
def __truediv__(self, other):
if isinstance(other, self.__class__):
num = self.num * other.den
den = other.num * self.den
return self.__class__(num, den).simple()
elif isinstance(other, int):
return self.__class__(self.num, self.den * other).simple()
f1 = Fraction(1, 2)
f2 = Fraction(5, 6)
f1 + f2
```
---
## Feature Request
Alternative constructors (good time to remember what you saw in that YouTube video)
1. Add support for mixed fractions. Figure out how you want to implement this yourself.
2. Make alternative constructors for `Fraction`. Implement the following ways of making a `Fraction` instance:
Fraction.from_string("1/2") # This should be equivalent to Fraction(1, 2)
        Fraction.from_tuple((2,)) # Fraction(2, 1)
Fraction.from_tuple((2, 3)) # Fraction(2, 3)
Fraction.from_tuple((1, 2, 3)) # Fraction(5, 3): a mixed fraction
Fraction.from_dict(dict(num=2, den=3)) # Fraction(2, 3)
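One possible shape for `from_string`, sketched on a stripped-down class (`Frac` is illustrative; the tuple and dict constructors follow the same `@classmethod` pattern):

```
class Frac:
    def __init__(self, num, den):
        self.num, self.den = num, den

    @classmethod
    def from_string(cls, s):
        """Build a Frac from a string like '1/2'."""
        num, den = s.split("/")
        # Delegate to __init__ so all validation lives in one place
        return cls(int(num), int(den))

f = Frac.from_string("1/2")
print(f.num, f.den)  # 1 2
```

Note how the classmethod returns `cls(...)` rather than `Frac(...)`: subclasses then get the alternative constructor for free.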
# Concluding Notes
We hope you guys had fun writing out this class. Have a look at the `fractions` library to see how much of its functionality you have replicated in this tutorial, and how you can improve it.
On Friday afternoon (Jun 4), we will release Part B of the tutorial. Good luck!
__fin__
---
```
import numpy as np
import pandas as pd
from sklearn import preprocessing
from sklearn import tree
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
df = pd.read_csv("data.csv")
print(df.head())
df['FLIGHT_DATE']
df['DAY_OF_WEEK'] = df['DAY_OF_WEEK'].astype('category')
df.shape
## Converting variables to string, then to factors
#from datetime import datetime
df['SCHEDULED_DEPARTURE_HOURS'] = df['SCHEDULED_DEPARTURE'].apply(lambda x: str(x)[11:13])
df['SCHEDULED_DEPARTURE_HOURS'] = df['SCHEDULED_DEPARTURE_HOURS'].astype('category')
print(df['SCHEDULED_DEPARTURE_HOURS'])
df['SCHEDULED_ARRIVAL']
df['SCHEDULED_ARRIVAL_Hours'] = df['SCHEDULED_ARRIVAL'].apply(lambda x: str(x)[:2])
df['SCHEDULED_ARRIVAL_Hours'] = df['SCHEDULED_ARRIVAL_Hours'].astype('category')
print(df['SCHEDULED_ARRIVAL_Hours'])
df['DAY_OF_WEEK'] = df['DAY_OF_WEEK'].astype('category')
df.head()
print(df.apply(lambda x: len(x.unique())))
#### Selecting the required variables for ML algorithm.
df = df.loc[:, ['DAY_OF_WEEK', 'AIRLINE', 'ORIGIN_AIRPORT', 'DESTINATION_AIRPORT','DEPARTURE_DELAY', 'ELAPSED_TIME', 'AIR_TIME', 'DISTANCE', 'ARRIVAL_DELAY', 'DAY_TYPE', 'DEP_DELAY_BIN', 'SCHEDULED_DEPARTURE_HOURS', 'SCHEDULED_ARRIVAL_Hours' ]]
print(df)
#df = df.drop([''])
print(list(df))
df = df.dropna(axis = 0, how = 'any')
### Encoding AIRLINE
le = preprocessing.LabelEncoder()
le.fit(df.iloc[:,1])
col_2_transformed = le.transform(df.iloc[:,1])
#print col_2_transformed
df.iloc[:,1] = col_2_transformed
df['AIRLINE'] = df['AIRLINE'].astype('category')
print(df.head())
### Encoding ORIGIN_AIRPORT
le1 = preprocessing.LabelEncoder()
le1.fit(df.iloc[:,2])
col_3_transformed = le1.transform(df.iloc[:,2])
#print col_2_transformed
df.iloc[:,2] = col_3_transformed
df['ORIGIN_AIRPORT'] = df['ORIGIN_AIRPORT'].astype('category')
print(df.head())
### Encoding DESTINATION_AIRPORT
le2 = preprocessing.LabelEncoder()
le2.fit(df.iloc[:,3])
col_4_transformed = le2.transform(df.iloc[:,3])
#print col_2_transformed
df.iloc[:,3] = col_4_transformed
df['DESTINATION_AIRPORT'] = df['DESTINATION_AIRPORT'].astype('category')
print(df.head())
# Encoding SCHEDULED_DEPARTURE_HOURS
### Encoding DESTINATION_AIRPORT
le4 = preprocessing.LabelEncoder()
le4.fit(df.iloc[:,11])
col_5_transformed = le4.transform(df.iloc[:,11])
#print col_2_transformed
df.iloc[:,11] = col_5_transformed
df['SCHEDULED_DEPARTURE_HOURS'] = df['SCHEDULED_DEPARTURE_HOURS'].astype('category')
print(df.head())
### Encoding SCHEDULED_ARRIVAL_Hours
le5 = preprocessing.LabelEncoder()
le5.fit(df.iloc[:,12])
col_6_transformed = le5.transform(df.iloc[:,12])
#print col_2_transformed
df.iloc[:,12] = col_6_transformed
df['SCHEDULED_ARRIVAL_Hours'] = df['SCHEDULED_ARRIVAL_Hours'].astype('category')
print(df.head())
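# The five LabelEncoder cells above all follow one pattern and could be
# collapsed into a single loop. Sketch on a toy frame with illustrative
# demo_* names, so this notebook's own `df` and encoders stay untouched:
import pandas as pd
from sklearn.preprocessing import LabelEncoder

demo_df = pd.DataFrame({'AIRLINE': ['AA', 'DL', 'AA'],
                        'ORIGIN_AIRPORT': ['JFK', 'LAX', 'JFK']})
demo_encoders = {}
for col in ['AIRLINE', 'ORIGIN_AIRPORT']:
    demo_le = LabelEncoder()
    demo_df[col] = demo_le.fit_transform(demo_df[col])
    demo_encoders[col] = demo_le  # keep for inverse_transform later
print(demo_df)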
# Selecting the needed variables from the dataframe for training and testing set.
tr_data = df.iloc[:,[0,1,2,3,5,6,7,9,11,12]]
tr_target = df.iloc[:,10]
print(tr_data.head())
print(tr_target.head())
#### Need to do upsampling / downsampling.
# tr_data column positions: categorical = 0,1,2,3,7,8,9; continuous (to scale) = 4,5,6
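# One way to do the upsampling mentioned above, sketched with
# sklearn.utils.resample on a toy imbalanced frame (demo_* names are
# illustrative; the real notebook would balance tr_data/tr_target this way
# before splitting):
import pandas as pd
from sklearn.utils import resample

demo = pd.DataFrame({'x': [1, 2, 3, 4, 5],
                     'label': [0, 0, 0, 0, 1]})
demo_major = demo[demo.label == 0]
demo_minor = demo[demo.label == 1]
# Sample the minority class with replacement up to the majority size
demo_minor_up = resample(demo_minor, replace=True,
                         n_samples=len(demo_major), random_state=0)
demo_balanced = pd.concat([demo_major, demo_minor_up])
print(demo_balanced.label.value_counts())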
### Creating the training and test for validation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(tr_data, tr_target, test_size = 0.25, random_state = 0, stratify=tr_target)
### Standardising the elapsed time, air time and Distance
# Fit each scaler on the training data only, then apply the same scaling to both splits
for col in [4, 5, 6]:
    sc = StandardScaler()
    sc.fit(X_train.iloc[:, [col]])
    X_train.iloc[:, col] = sc.transform(X_train.iloc[:, [col]]).ravel()
    X_test.iloc[:, col] = sc.transform(X_test.iloc[:, [col]]).ravel()
X_train
###Logistic Regression
lr = LogisticRegression( random_state=0)
lr.fit(X_train, y_train)
y_pred = lr.predict(X_test)
#making the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
###Stochastic Gradient Descent
from sklearn import linear_model
clf1 = linear_model.SGDClassifier()
clf1.fit(X_train, y_train)
y_pred = clf1.predict(X_test)
#making the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
```
<a href="https://colab.research.google.com/github/beatricekiplagat/Deepfake-Audio-Recognition/blob/main/DEEPFAKE_AUDIO_DETECTION_TRANSFER_LEARNING_MODELS_VGG16.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# import necessary libraries
from tensorflow.keras.layers import Input, Lambda, Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
import numpy as np
from glob import glob
import matplotlib.pyplot as plt
from google.colab import drive
drive.mount('/content/drive')
# re-size all the images to this
IMAGE_SIZE = [224, 224]
# define train and test data
# from PIL import Image
# import glob
# train_path = []
# for filename in glob.glob('/content/drive/MyDrive/DEEPFAKE AUDIO DETECTION PROJECT/DATASETS/Train/Spoof/*.png'):
# im=Image.open(filename)
# train_path.append(im)
# for filename in glob.glob('/content/drive/MyDrive/DEEPFAKE AUDIO DETECTION PROJECT/DATASETS/Train/Bonafide/*.png'):
# im=Image.open(filename)
# train_path.append(im)
# from PIL import Image
# import glob
# valid_path = []
# for filename in glob.glob('/content/drive/MyDrive/DEEPFAKE AUDIO DETECTION PROJECT/DATASETS/Test/Bonafide/*.png'):
# im=Image.open(filename)
# valid_path.append(im)
# for filename in glob.glob('/content/drive/MyDrive/DEEPFAKE AUDIO DETECTION PROJECT/DATASETS/Test/Spoof/*.png'):
# im=Image.open(filename)
# valid_path.append(im)
train_path ='/content/drive/MyDrive/DEEPFAKE AUDIO DETECTION PROJECT/DATASETS/Train'
test_path = '/content/drive/MyDrive/DEEPFAKE AUDIO DETECTION PROJECT/DATASETS/Test'
# add preprocessing layer to the front of VGG
# the include_top false statement will allow us to be able to set the number of classes on the top layer that we will create
vgg = VGG16(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)
# don't train - vgg has existing weights
for layer in vgg.layers:
    layer.trainable = False
# useful for getting number of classes
# this will count the number of classes we have in our dataset assuming that the data is grouped into specific folders
folders = glob('/content/drive/MyDrive/DEEPFAKE AUDIO DETECTION PROJECT/DATASETS/Train/*')
# # our layers
# x = Flatten()(vgg.output)
# x = Dense(1000, activation='relu')(x)
# prediction = Dense(folders, activation='sigmoid')(x)
x = Flatten()(vgg.output)
prediction = Dense(len(folders), activation='sigmoid')(x)
# create a model object
model = Model(inputs=vgg.input, outputs=prediction)
# view the structure of the model
model.summary()
# tell the model what cost and optimization method to use
model.compile(
    loss='binary_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)
# from keras import optimizers
# adam = optimizers.Adam()
# model.compile(loss='binary_crossentropy',
# optimizer=adam,
# metrics=['accuracy'])
from keras.utils.vis_utils import plot_model
plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import preprocess_input
# note: preprocess_input already performs VGG16's input scaling, so it is not
# combined with rescale=1./255 here (using both would scale the images twice)
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
# the test generator should only preprocess the data, not augment it
test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
training_set = train_datagen.flow_from_directory(train_path,
                                                 target_size=(224, 224),
                                                 batch_size=4,
                                                 class_mode='categorical')
test_set = test_datagen.flow_from_directory(test_path,
                                            target_size=(224, 224),
                                            batch_size=4,
                                            class_mode='categorical')
# fit the model
r = model.fit_generator(
    training_set,
    validation_data=test_set,
    epochs=10,
    steps_per_epoch=20,  # len(training_set)
    validation_steps=20  # len(test_set)
)
# loss
plt.plot(r.history['loss'], label='train loss')
plt.plot(r.history['val_loss'], label='val loss')
plt.legend()
plt.savefig('LossVal_loss')  # save before show(), which clears the current figure
plt.show()
# accuracies
plt.plot(r.history['accuracy'], label='train accuracy')
plt.plot(r.history['val_accuracy'], label='val accuracy')
plt.legend()
plt.savefig('AccVal_acc')  # save before show(), which clears the current figure
plt.show()
import tensorflow as tf
from keras.models import load_model
model.save('facefeatures_new_model.h5')
import cv2
from matplotlib.pyplot import imread
from matplotlib.pyplot import imshow
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.imagenet_utils import decode_predictions
from tensorflow.keras.applications.imagenet_utils import preprocess_input
import os, sys
img_path = '/content/drive/MyDrive/DEEPFAKE AUDIO DETECTION PROJECT/DATASETS/Train/Bonafide/Copy of LA_D_6787225.png'
#image1_dir = os.path.join(test_path+'/Bonafide/LA_D_4033093.png')
def import_and_predict(image_path, label):
    # resize the image to the network's input size
    img = cv2.imread(image_path)
    img = cv2.resize(img, (224, 224))
    x = np.expand_dims(img, axis=0)
    x = preprocess_input(x)
    print('Input image shape:', x.shape)
    my_image = imread(image_path)
    imshow(my_image)
    # predict the image's class
    prediction = model.predict(x)
    print(prediction)
    label_prediction = label[np.argmax(prediction)]
    return label_prediction
label = os.listdir(test_path)
prediction = import_and_predict(img_path, label)
prediction
label
# To save the model:
import tensorflow as tf
from keras.models import load_model
model.save('VGG16.h5')
```
```
import csv
total_counts_file = './Ngrams/totalcounts-1'
years = []
unigram_count = []
with open(total_counts_file) as f:
    reader = csv.reader(f, delimiter='\t')
    for row in reader:
        for i in range(1, len(row) - 1):
            year, match_count, page_count, volume_count = tuple(row[i].split(','))
            years.append(int(year))
            unigram_count.append(int(match_count))
len(unigram_count)
len(years)
#total count
tc = sum(unigram_count)
tc
print('Range is',years[0],'-',years[-1])
#total count in the years 1800-2019
tc18002019 = sum(count for year, count in zip(years, unigram_count) if year >= 1800)
tc18002019
print(100*(tc18002019/tc),'% of all unigrams in the corpus are found in the years 1800-2019')
#622967 is the number of lexemes in the five closed lexical classes after preprocessing
print('Lexemes in closed lexical classes composed',100*(622967/tc18002019),'% of all of the unigrams.')
import gzip
def open_gzip(directory, file_path):
    with gzip.open(directory + file_path, 'r') as f_in:
        rows = [x.decode('utf8').strip() for x in f_in.readlines()]
    return rows
def csv2tuple(string):
    year, match_count, volume_count = tuple(string.split(','))
    return int(year), int(match_count), int(volume_count)
def readcolumns(columns):
    # search backwards, since the most recent years are at the end
    for entry in reversed(columns):
        year, match_count, volume_count = csv2tuple(str(entry))
        if year >= 1800:  # match the 1800-2019 range used above
            return True
    return False
%%time
import os
from tqdm import tqdm
directory = './Ngrams/'
files = os.listdir(directory)
ngrams = list()
for file_path in files:
    if '.gz' in file_path:
        #ngrams.update([row.split('\t')[0] for row in open_gzip(directory,file_path)])
        #num_ngrams+=len(open_gzip(directory,file_path))
        rows = open_gzip(directory, file_path)
        for row in tqdm(rows):
            columns = row.split('\t')
            #This implementation uses {1gram:{year:match_count ...} ...}
            if readcolumns(columns[1:]):
                ngrams.append(columns[0])
        print(file_path)
%%time
import os
from tqdm import tqdm
directory = './Ngrams/'
files = os.listdir(directory)
ngrams = set()
for file_path in files:
    if '.gz' in file_path:
        #ngrams.update([row.split('\t')[0] for row in open_gzip(directory,file_path)])
        #num_ngrams+=len(open_gzip(directory,file_path))
        rows = open_gzip(directory, file_path)
        for row in tqdm(rows):
            columns = row.split('\t')
            #This implementation uses {1gram:{year:match_count ...} ...}
            if readcolumns(columns[1:]):
                ngrams.add(columns[0])
        print(file_path)
len(ngrams)
len(ngrams)
%%time
ngrams = set(ngrams)
len(ngrams)
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Datasets/us_cropland.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/us_cropland.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/us_cropland.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except:
    import geemap as emap

# Authenticates and initializes Earth Engine
import ee

try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
dataset = ee.ImageCollection('USDA/NASS/CDL') \
    .filter(ee.Filter.date('2017-01-01', '2018-12-31')) \
    .first()
cropLandcover = dataset.select('cropland')
Map.setCenter(-100.55, 40.71, 4)
Map.addLayer(cropLandcover, {}, 'Crop Landcover')
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# Install packages
```
%%capture
%%bash
pip install -U h2o numpy==1.17.0
```
# Run in memory - here for code reference
```
import gc
import os
import time
import warnings
import numpy as np
import pandas as pd
instance_type = 'c5d4xlarge' # change this
results_bucket = f"s3://xdss-benchmarks/benchmarks" # change this
name = 'h2o'
data_path = 'datasets/taxi_parquet/data_0.parquet' # a single file for code testing
output_file = f'{name}_{instance_type}_1m.csv'
results_path = f"results/{output_file}"
results_bucket = f"{results_bucket}/{output_file}"
benchmarks = {
    'run': [],
    'duration': [],
    'task': []
}
long_min = -74.05
long_max = -73.75
lat_min = 40.58
lat_max = 40.90
def get_results(benchmarks=benchmarks):
    return pd.DataFrame.from_dict(benchmarks, orient='index').T
def persist():
    gc.collect()
    get_results(benchmarks).to_csv(results_path)
    os.system(f"aws s3 cp {results_path} {results_bucket}")
def benchmark(f, df, name, **kwargs):
    for i in range(2):
        start_time = time.time()
        ret = f(df, **kwargs)
        benchmarks['duration'].append(time.time() - start_time)
        benchmarks['task'].append(name)
        benchmarks['run'].append(i + 1)
        persist()
    print(f"{name} took: {benchmarks['duration'][-1]}")
    return benchmarks['duration'][-1]
def add_nan(name):
    for i in range(2):
        benchmarks['duration'].append(np.nan)
        benchmarks['task'].append(name)
        benchmarks['run'].append(i + 1)
    persist()
    print(f"{name} took: {benchmarks['duration'][-1]}")
    return benchmarks['duration'][-1]
!mkdir -p results
!mkdir -p datasets
print(f"We test every benchmark twice and save both results")
```
# Benchmark
```
import h2o
h2o.init()
import numpy as np
# Load data
data = h2o.import_file(data_path)
print(f"size: {len(data)} with {len(data.columns)} columns")
def read_file_parquet(df=None):
    return h2o.import_file(data_path)
benchmark(read_file_parquet, df=data, name='read_file')
def count(df=None):
    return len(df)
benchmark(count, df=data, name='count')
def mean(df):
    return df['fare_amount'].mean()
benchmark(mean, df=data, name='mean')
def standard_deviation(df):
    return df['fare_amount'].sd()
benchmark(standard_deviation, df=data, name='standard deviation')
```
To calculate the time when using two columns, we can't return the response since it will get pulled into memory and break, so we run a mean calculation on it, and then remove the time it took to run the mean.
```
def mean_of_sum(df):
    return (df['fare_amount'] + df['trip_distance']).mean()
benchmark(mean_of_sum, df=data, name='sum columns mean')
def sum_columns(df):
    # returning the summed frame itself would pull it into memory, so time the mean instead
    return (df['fare_amount'] + df['trip_distance']).mean()
benchmark(sum_columns, df=data, name='sum columns')
def mean_of_product(df):
    return (df['fare_amount'] * df['trip_distance']).mean()
benchmark(mean_of_product, df=data, name='product columns mean')
def product_columns(df):
    return (df['fare_amount'] * df['trip_distance'])
benchmark(product_columns, df=data, name='product columns')
def mean_of_complicated_arithmetic_operation(df):
    theta_1 = df['pickup_longitude'].as_data_frame().as_matrix()
    phi_1 = df['pickup_latitude'].as_data_frame().as_matrix()
    theta_2 = df['dropoff_longitude'].as_data_frame().as_matrix()
    phi_2 = df['dropoff_latitude'].as_data_frame().as_matrix()
    temp = (np.sin((theta_2 - theta_1) / 2 * np.pi / 180)**2
            + np.cos(theta_1 * np.pi / 180) * np.cos(theta_2 * np.pi / 180) * np.sin((phi_2 - phi_1) / 2 * np.pi / 180)**2)
    distance = 2 * np.arctan2(np.sqrt(temp), np.sqrt(1 - temp))
    return distance.mean()
benchmark(mean_of_complicated_arithmetic_operation, df=data, name='arithmetic operation mean')
def value_counts(df):
    return df['fare_amount'].table()
benchmark(value_counts, df=data, name='value counts')
def groupby_statistics(df):
    df_grouped = df.group_by(by=['passenger_count'])
    df_grouped.mean(col=['fare_amount', 'tip_amount']).sd(col=['fare_amount', 'tip_amount'])
    return df_grouped.get_frame()
benchmark(groupby_statistics, df=data, name='groupby statistics')
other = groupby_statistics(data)
def join(df, other):
    return df.merge(other)
benchmark(join, data, name='join', other=other)
def join_count(df, other):
    return len(df.merge(other))
benchmark(join_count, data, name='join count', other=other)
```
## Filtered data
Dask is not built to run on filtered data like you normally would, so we will apply the same strategy.
```
print(f"Prepare filtered data and deleted {gc.collect()} MB")
expr_filter = (data['pickup_longitude'] > long_min) & (data['pickup_longitude'] < long_max) & \
              (data['pickup_latitude'] > lat_min) & (data['pickup_latitude'] < lat_max) & \
              (data['dropoff_longitude'] > long_min) & (data['dropoff_longitude'] < long_max) & \
              (data['dropoff_latitude'] > lat_min) & (data['dropoff_latitude'] < lat_max)
def filter_data(df):
    return df[expr_filter]
benchmark(filter_data, data, name='filter data')
filtered = filter_data(data)
del data
print(f"cleaned {gc.collect()} mb")
benchmark(mean, filtered, name='filtered mean')
benchmark(standard_deviation, filtered, name='filtered standard deviation')
benchmark(mean_of_sum, filtered, name ='filtered sum columns mean')
add_nan('filtered sum columns')
benchmark(mean_of_product, filtered, name ='filtered product columns mean')
add_nan('filtered product columns')
benchmark(mean_of_complicated_arithmetic_operation, filtered, name='filtered arithmetic operation mean')
add_nan('filtered arithmetic operation')
benchmark(value_counts, filtered, name ='filtered value counts')
benchmark(groupby_statistics, filtered, name='filtered groupby statistics')
other = groupby_statistics(filtered)
add_nan('filtered join')
benchmark(join_count, filtered, name='filtered join count', other=other)
print(name)
get_results(benchmarks)
```
# Code Style
In this chapter, we'll discuss a number of important considerations to make when styling your code. If you think of writing code like writing an essay, considering code style improves your code the same way editing an essay improves your essay. Often, considering code style is referred to as making our code *pythonic*, meaning that it adheres to the foundational principles of the Python programming language.
Learning how to consider and improve your code style up front has a number of benefits. First, your code will be more user-friendly for anyone reading it. This includes you, as you come back to and edit your code over time. Second, while considering code style and being pythonic is a bit more work up front for developers (the people writing the code), it pays off in the long run by making your code easier to maintain. Third, by learning this now, early on in your Python journey, you avoid falling into bad habits. It's much easier to learn something and implement it than it is to unlearn bad habits.
Note that what we're discussing here will not affect the functionality of your code. Unlike *programmatic errors* (i.e. errors and exceptions that require debugging for your code to execute properly), *stylistic errors* do not affect the functionality of your code. However, *stylistic errors* are considered bad style and are to be avoided, as they make your code harder to understand.
## Style Guides
Programming languages often have style guides, which include a set of conventions for how to write good code. While many of the concepts we'll cover here are applicable to other programming languages (e.g., being consistent), some of the specifics (e.g., variable naming conventions) are particular to programming in Python.
<div class="alert alert-success">
Coding style refers to a set of conventions for how to write good code.
</div>
### The Zen of Python
To explain the programming philosophy in Python, we'll first introduce what's known as *The Zen of Python*, which lays out the design principles of the individuals who developed the Python programming language. *The Zen of Python* is included as an easter egg in Python, so if you `import this` you're able to read its contents:
```
import this
```
While we won't discuss each of the tenets above, we'll highlight two that are particularly pertinent to the considerations in this chapter. Specifically, **beautiful is better than ugly** and **readability counts** together indicate that how one's code looks matters. Python prioritizes readability in its syntax (relative to other programming languages) and adheres to the idea that "code is more often read than it is written." As such, those who program in Python are encouraged to consider the beauty and readability of their code. To do so, we'll cover a handful of considerations here.
### Code Consistency
For very understandable and good reasons, beginner programmers often focus on getting their code to execute without throwing an error. In this process, however, they often forget about code style. While we'll discuss specific considerations to write well-styled python code in this chapter, the most important overarching concept is that **consistency is the goal**. Rules help us achieve consistency, and so we'll discuss a handful of rules and guidelines to help you write easy-to-read code with consistent code style. However, in doing so, we want you to keep the idea of consistency in mind, as programming is (at least partly) subjective. Since it's easier to recognize & read consistent style, do your best to follow the style guidelines presented in this chapter and once you pick a way to style your code, it's best to use that consistently across your code.
### PEP8
Python Enhancement Proposals (PEPs) are proposals for how something should be or how something should work in the Python programming language. These are written by the people responsible for writing and maintaining the Python programming language, and PEPs are voted on before incorporation. **[PEP8](https://www.python.org/dev/peps/pep-0008/)**, specifically, is an accepted proposal that outlines the style guidelines for the Python programming language.
<div class="alert alert-info">
<b><a href="https://www.python.org/dev/peps/pep-0008/">PEP8</a></b> is an accepted proposal that outlines the style guide for Python.
</div>
The general concepts laid out in PEP8 (and in *The Zen of Python*) are as follows:
- Be *explicit & clear*: prioritize readability over cleverness
- There should be *specific, standard ways to do things*: use them
- Coding style rules are *guidelines*: they are designed to help the coder, but they are not laws
#### PEP8: Structure
Throughout this section we'll highlight each PEP8 guideline, provide an example of what to avoid, and then demonstrate an improvement on the error. Note that for each "what to avoid" the code *will* execute without error. This is because we're discussing *stylistic* rather than *programmatic* errors here.
##### Blank Lines
- Use 2 blank lines between functions & classes and 1 between methods
- Use 1 blank line between segments to indicate logical structure
This allows you to, at a glance, identify what pieces of code are there. Using blank lines to separate out components in your code and your code's overall structure improves its readability.
**What to avoid**
In this example of what to avoid, there are no blank lines between segments within your code, making it more difficult to read. Note that if two functions were provided here, there would be 2 blank lines between the different function definitions.
```
def my_func():
    my_nums = '123'
    output = ''
    for num in my_nums:
        output += str(int(num) + 1)
    return output
```
**How to improve**
To improve the above example, we can use what you see here, with the variable definitions separated from the `for` loop, which is in turn separated from the `return` statement. These blank lines help separate out the logical structures within a function. Note that we do *not* add a blank line between each line of code, as that would *decrease* the readability of the code.
```
# Goodness
def my_func():
    my_nums = '123'
    output = ''

    for num in my_nums:
        output += str(int(num) + 1)

    return output
```
##### PEP8: Indentation
Use spaces to indicate indentation levels, with each level defined as 4 spaces. Programming languages differ on the specifics of what constitutes a "tab," but Python has settled on a tab being equivalent to 4 spaces. When you hit "tab" on your keyboard within a Jupyter notebook, for example, the 4 spaces convention is implemented for you automatically, so you may not have even realized this convention before now!
**What to avoid**
Here, you'll note that, while the `print()` statement is indented, only *two* spaces are used. Jupyter will alert you to this by making the word `print` red, rather than its typical green.
```
if True:
  print('Words.')
```
**How to improve**
Conversely, here we see the accepted four spaces for a tab/indentation being utilized. Again, remember that the functionality of the code in this example is equivalent to that above; only the style has changed.
```
if True:
    print('Words.')
```
##### PEP8: Spacing
- Put one (and only one) space between each element
- Index and assignment don't have a space between opening & closing '()' or '[]'
**What to avoid**
Building on the above, spacing within and surrounding your code should be considered. Here, we see that spaces are missing around the operators in the first line of code, whereas the second line has too many spaces around the assignment operator. We also see unnecessary spaces inside the square brackets of the list in line two, and spaces missing after each comma in that same line of code. Finally, in the third line of code there is an unnecessary space between `my_list` and the square bracket being used for indexing.
```
my_var=1+2==3
my_list  =  [ 1,2,3,4 ]
el = my_list [1]
```
**How to improve**
The above spacing issues have all been resolved below:
```
my_var = 1 + 2 == 3
my_list = [1, 2, 3, 4]
el = my_list[1]
```
##### PEP8: Line Length
- PEP8 recommends that each line be at most 79 characters long
Note that this specification is somewhat historical, as older terminals could only display lines of roughly this length. As such, there are tools and development environments that will help ensure that no single line of code exceeds 79 characters. However, in Jupyter notebooks, the general guideline "avoid lengthy lines of code or comments" can be used, as super long lines are hard to read at a glance.
**Multi-line**
To achieve this, know that you can always separate lines of code easily after a comma. In Jupyter notebooks, if you hit return/enter on your keyboard after a comma, your code will be aligned appropriately. For example, below you see that after the comma in the first line of code, the `6` is automatically aligned with the `1` from the line above. This visually makes it clear that all of the integers are part of the same list `my_long_list`. Using multiple lines to make your code easier to read is a great habit to get into.
```
my_long_list = [1, 2, 3, 4, 5,
                6, 7, 8, 9, 10]
```
Further, note that you can explicitly state that the code on the following line is a continuation of the first line of code with a backslash (`\`) at the end of a line, as you see exemplified here:
```
my_string = 'Python is ' + \
            'a pretty great language.'
```
**One Statement Per Line**
While on the topic of line length and readable code, note that while you *can* often condense multiple statements into one line of code, you usually shouldn't, as it makes it harder to read.
**What to avoid**
For example, for loops *can* syntactically be specified on a single line, as you see here:
```
for i in [1, 2, 3]: print(i**2 + i%2)
```
**How to Improve**
However, the code above is harder to read at a glance. Instead, what is being looped over should go on the first line, with the code being executed contained in an indented block on the lines underneath the `for` statement, as this is easier to read than the above example:
```
for i in [1, 2, 3]:
    print(i**2 + i%2)
```
##### PEP8: Imports
- Import one module per line
- Avoid `*` imports
- Use the import order: standard library; 3rd party packages; local/custom code
**What to avoid**
While you may still be learning which packages are part of the standard library and which are third party packages, this will become more second nature over time. And, we haven't yet discussed local or custom code, but this includes functions/classes/code you've written and stored in `.py` files. This should be imported last.
In this example here, there are a number of issues! First, `numpy` is a third party package, while `os` and `sys` are part of the standard library, so the order should be flipped. Second, `*` imports are to be avoided, as it would be unclear in any resultant code which functionality came from the `numpy` package. Third, `os` and `sys` should be imported on separate lines to be most clear.
```
from numpy import *
import os, sys
```
**How to Improve**
The above issues have been resolved in this set of imports:
```
import os
import sys
import numpy as np
```
##### PEP8: Naming
- Use descriptive names for all modules, variables, functions and classes, that are longer than 1 character
**What to avoid**
Here, single character, non-descriptive names are used.
```
a = 12
b = 24
```
**How to Improve**
Instead, Python encourages object names that describe what is stored in the object or what the object is or does.
This is also important when you want to change an object name after the fact. If you were to "Find + Replace All" on the letter `a`, that would change every single a in your code. However, if you "Find + Replace All" for `n_filters`, this would likely only change the places in your code you actually intended to replace.
```
n_filters = 12
n_freqs = 24
```
**Naming Style**
- CapWords (leading capitals, no separation) for Classes
- snake_case (all lowercase, underscore separator) for variables, functions, and modules
Note: snake_case is easier to read than CapWords, so we use snake_case for the things (variables, functions) that we name more frequently.
**What to avoid**
While we've been using this convention, it's important to state it explicitly here. Pythonistas (those who program in python) expect the above conventions to be used within their code. Thus, if they see a function `MyFunc`, there will be cognitive dissonance, as CapWords is to be used for classes, not functions. The same for `my_class`; this would require the reader of this code to work harder than necessary, as snake_case is to be used for functions, variables, and modules, not classes.
```
def MyFunc():
    pass
class my_class():
    def __init__():
        pass
```
**How to Improve**
Instead, follow the guidelines above. Also, note that we've added two blank lines between the function and class definitions (to follow the guideline from earlier in this chapter).
```
def my_func():
    pass


class MyClass():
    def __init__():
        pass
```
##### String Quotes
In Python, single-quoted strings and double-quoted strings are the same. Note that *PEP8 does not make a recommendation for this*. Rather, you are encouraged to be consistent: **pick a rule and stick to it.** (The author of this book is *exceptionally* bad at following this advice.)
One place, however, to choose one approach over another is when a string contains a single- or double-quote character as a literal. In this case, use the quote style that's not included in the string to avoid backslashes in the string, as this improves readability. For example...
**What to avoid**
As you see below, you *could* use a backslash to "escape" the apostrophe within the string; however, this makes the string harder to read.
```
my_string = 'Prof\'s Project'
```
**How to Improve**
Instead, using double quotes to specify the string with the apostrophe (single quote) inside the string leads to more readable code, and is thus preferable.
```
my_string = "Prof's Project"
```
#### PEP8: Documentation
While documentation (including how to write docstrings and when, how and where to include code comments) will be covered more explicitly in the next chapter, we'll discuss the style considerations for including code comments and docstrings at this point.
##### PEP8: Comments
First, out-of-date comments are worse than no comments at all. Keep your comments up-to-date. While we encourage writing comments to explain your thinking as you're writing the code, you want to be sure to re-visit your code comments during your "editing" and "improving code style" sessions to ensure that what is stated in the comments matches what is done in your code to avoid confusion for any readers of your code.
**Block comments**
Block comments are comments that are on their own line and come before the code they intend to describe. They follow the following conventions:
- apply to some (or all) code that follows them
- are indented to the same level as that code
- each line of a block comment starts with a # and a single space
**What to avoid**
In the function below, while the code comment does come before the code it describes (good!), it is not at the same level of indentation as that code (not good!) *and* there is no space between the pound sign/hashtag and the comment text:
```
import random
def encourage():
#help try to destress students by picking one thing from the following list using random
    statements = ["You've totally got this!","You're so close!","You're going to do great!","Remember to take breaks!","Sleep, water, and food are really important!"]
    out = random.choice(statements)
    return out
encourage()
```
**How to Improve**
Instead, here, we see improved code comment style by 1) having the block comment at the same level of indentation as the code it describes, 2) having a space in between the `#` and the comment, and 3) breaking up the comment onto two separate lines to avoid having a too-long comment.
The code style is also further improved by considering spacing within the `statements` list *and* considering line spacing throughout the function.
```
def encourage():
    # Randomly pick from list of de-stressing statements
    # to help students as they finish the quarter.
    statements = ["You've totally got this!",
                  "You're so close!",
                  "You're going to do great!",
                  "Remember to take breaks!",
                  "Sleep, water, and food are really important!"]

    out = random.choice(statements)
    return out


encourage()
```
**Inline comments**
Inline comments are those comments on the same line as the code they're describing. These are:
- to be used sparingly
- to be separated by at least two spaces from the statement
- start with a # and a single space
**What to avoid**
For example, we'll avoid inline comments that 1) are right up against the code they describe and 2) that fail to have a space after the `#`:
```
encourage()#words of encouragement
```
**How to Improve**
Instead, we'll have two spaces after the code, and a space after the `#`:
```
encourage() # words of encouragement
```
##### PEP8: Documentation
We'll cover docstrings in the following chapter, so for now we'll just specify that PEP8 specifies that a descriptive docstring should be written and included for all functions & classes. We'll discuss how to approach this shortly!
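As a quick preview, here is a hedged sketch of what such a docstring could look like for the earlier `encourage` function (the NumPy docstring style shown here is just one of several accepted conventions):

```python
import random


def encourage():
    """Return a randomly chosen de-stressing statement.

    Returns
    -------
    str
        A single encouraging message.
    """
    statements = ["You've totally got this!",
                  "You're so close!"]
    return random.choice(statements)
```

Calling `help(encourage)` then prints this description for anyone using the function.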
## Exercises
Q1. **Considering code style, which of these is best - A, B, or C?**
A)
```python
def squared(input_number):
    val = input_number
    power = 2
    output = val ** power
    return output
```
B)
```python
def squared(input_number, power=2):
    output = input_number ** power
    return output
```
C)
```python
def squared(input_number):
    val = input_number
    power = 2
    output = val ** power
    return output
```
Q2. **Which of the following uses PEP-approved spacing?**
A) `my_list=[1,2,3,4,5]`
B) `my_list = [1,2,3,4,5]`
C) `my_list = [1, 2, 3, 4, 5]`
D) `my_list=[1, 2, 3, 4, 5]`
E) `my_list = [1, 2, 3, 4, 5]`
Q3. **If you were reading code and came across the following, which of the following would you expect to be a class?**
A) `Phillies_Game`
B) `PhilliesGame`
C) `phillies_game`
D) `philliesgame`
E) `PhIlLiEsGaMe`
Q4. **If you were reading code and came across the following, which of the following would you expect to be a function or variable name?**
A) `Phillies_Game`
B) `PhilliesGame`
C) `phillies_game`
D) `philliesgame`
E) `PhIlLiEsGaMe`
Q5. **Which of the following would not cause an error in Python and would store the string *You're so close!* ?**
A) `my_string = "You're so close!"`
B) `my_string = "You"re so close!"`
C) `my_string = 'You''re so close!'`
D) `my_string = "You\\'re so close"`
E) `my_string = 'You're so close!'`
Q6. **Identify and improve all of the PEP8/Code Style violations found in the following code**:
```python
def MyFunction(input_num):
    my_list = [0,1,2,3]
    if 1 in my_list: ind = 1
    else:
        ind = 0
    qq = []
    for i in my_list [ind:]:
        qq.append(input_num/i)
    return qq
```
Q7. **Identify and improve all of the PEP8/Code Style violations found in the following code**:
```python
def ff(jj):
    oo = list(); jj = list(jj)
    for ii in jj: oo.append(str(ord(ii)))
    return '+'.join(oo)
```
# Federated Learning - SMS spam prediction with a GRU model
**NOTE**: At the time of running this notebook, the grid components were running in background mode. The components are:
* Grid Gateway (http://localhost:8080)
* Grid Node Bob (http://localhost:3000)
* Grid Node Anne (http://localhost:3001)
To **start the gateway**:
* ```cd gateway```
* ```python gateway.py --start_local_db --port=8080```
To **start one grid node**:
* ```cd app/websocket/```
* ```python websocket_app.py --start_local_db --id=anne --port=3001 --gateway_url=http://localhost:8080```
This notebook is based on [Federated SMS Spam prediction](https://github.com/OpenMined/PySyft/tree/master/examples/tutorials/advanced/Federated%20SMS%20Spam%20prediction).
Authors:
* André Macedo Farias: Github: [@andrelmfarias](https://github.com/andrelmfarias) | Twitter: [@andrelmfarias](https://twitter.com/andrelmfarias)
* George Muraru: Github [@gmuraru](https://github.com/gmuraru) | Twitter: [@georgemuraru](https://twitter.com/georgemuraru) | Facebook: [@George Cristian Muraru](https://www.facebook.com/georgecmuraru)
## Useful imports
```
import numpy as np
import torch
import syft as sy
import grid as gr
```
<h2>Setup config</h2>
Init hook, connect with grid nodes, etc...
```
hook = sy.TorchHook(torch)
# Connect directly to grid nodes
nodes = ["ws://localhost:3000/",
         "ws://localhost:3001/"]
compute_nodes = []
for node in nodes:
    compute_nodes.append(gr.WebsocketGridClient(hook, node))
```
# Load Dataset
## 1) Download (if not present) and preprocess dataset
```
import os
import urllib.request
import pathlib
from zipfile import ZipFile
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip"
DATASET_NAME = "smsspamcollection"
def dataset_exists():
    return os.path.isfile('./data/inputs.npy') and \
        os.path.isfile('./data/labels.npy')

if not dataset_exists():
    # If the dataset does not already exist, download it directly
    # from the URL where it is hosted
    print('Downloading the dataset with urllib2 to the current directory...')
    pathlib.Path("data").mkdir(exist_ok=True)
    urllib.request.urlretrieve(URL, './data/data.zip')
    print("The dataset was successfully downloaded")
    print("Unzipping the dataset...")
    with ZipFile('./data/data.zip', 'r') as zipObj:
        # Extract all the contents of the zip file in the current directory
        zipObj.extractall("./data")
    print("Dataset successfully unzipped")
    from preprocess import preprocess_spam
    preprocess_spam()
else:
    print("Not downloading the dataset because it was already downloaded")
```
## 2) Loading data
As we are most interested in the usage of PySyft and Federated Learning, I will skip the text-preprocessing part of the project. If you are interested in how I performed the preprocessing of the raw dataset, you can take a look at the script [preprocess.py](https://github.com/OpenMined/PyGrid/tree/master/examples/data/SMS-spam/preprocess.py).
Each data point of the `inputs.npy` dataset corresponds to an array of 30 tokens obtained from each message (padded on the left or truncated on the right).
The `label.npy` dataset has the following unique values: `1` for `spam` and `0` for `non-spam`
```
inputs = np.load('./data/inputs.npy')
labels = np.load('./data/labels.npy')
datasets_spam = torch.split(torch.tensor(inputs), int(len(inputs) / len(compute_nodes)), dim=0)  # tuple of chunks (dataset / number of nodes)
labels_spam = torch.split(torch.tensor(labels), int(len(labels) / len(compute_nodes)), dim=0)  # tuple of chunks (labels / number of nodes)
```
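As a side note on the splitting above, `torch.split` returns a tuple of chunks of the requested size; if the length is not evenly divisible, it appends a smaller remainder chunk, which the later loops over `len(compute_nodes)` chunks would silently ignore. A toy illustration:

```python
import torch

data = torch.arange(10)

# Even split: two chunks of length 5
print([c.tolist() for c in torch.split(data, 5, dim=0)])

# Uneven split: note the extra remainder chunk of length 2
print([c.tolist() for c in torch.split(data, 4, dim=0)])
```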
<h2>3) Tagging tensors</h2>
The code below will add a tag (of your choice) to the data that will be sent to grid nodes. This tag is important as the gateway will need it to retrieve this data later.
```
tag_img = []
tag_label = []
for i in range(len(compute_nodes)):
    tag_img.append(datasets_spam[i].tag("#X", "#spam", "#dataset").describe("The input datapoints to the SPAM dataset."))
    tag_label.append(labels_spam[i].tag("#Y", "#spam", "#dataset").describe("The input labels to the SPAM dataset."))
```
<h2> 4) Sending our tensors to grid nodes</h2>
```
# NOTE: For some reason, there is strange behavior when trying to send within a loop.
# Ex : tag_x[i].send(compute_nodes[i])
# When resolved, this should be updated.
for i in range(len(compute_nodes)):
    shared_x = tag_img[i].send(compute_nodes[i], garbage_collect_data=False)
    shared_y = tag_label[i].send(compute_nodes[i], garbage_collect_data=False)
    print("X tensor pointers: ", shared_x, shared_y)
```
## Disconnect Nodes
```
for i in range(len(compute_nodes)):
    compute_nodes[i].close()
```
# "Spin Glass Models"
> "In this blog post I will provide a quick overview of spin glass models. These models can be mathematically very dense; here I just aim to give a flavour - in future posts I will come back and look at the models in more detail. Fortunately there tends to be a simple visual representation which we can use to gain insight. (Note: I look at spin glasses mostly through a mathematical lens, so some of the physics interpretations may be a bit ropey.)"
- toc: true
- author: Lewis Cole (2020)
- branch: master
- badges: false
- comments: false
- categories: [Computational-Statistics, Spin-Glass, Magnet]
- hide: false
- search_exclude: false
- image: https://github.com/lewiscoleblog/blog/raw/master/images/spin-glass/spin_glass_img.png
## What is a Spin Glass?
A spin glass is not something you find down the pub (well, it could be, but that's not what we're talking about here). Instead, spin glasses are models of certain magnetic materials. Very loosely, we can think of the atoms of a magnetic material as having a "spin" relating to the magnetic polarity ("north" or "south" ends of a bar magnet) - we typically call these up-spin and down-spin. In this context we use the term "atom" loosely: it may be an atom in the chemical sense, but it could also be a molecule - essentially a minimal element of the material.
In a ferromagnetic material (such as iron) the spins orient in the same direction. In contrast, in antiferromagnetic materials the spins orient to oppose each other. In a spin glass we have some ferromagnetic interactions and some antiferromagnetic interactions; we say the system is "disordered".

In the image above we can see various spin configurations. The spins are indicated by arrows (yellow for up and green for down); ferromagnetic interactions are denoted by blue lines and antiferromagnetic interactions by red. However, it is somewhat misleading to think of these occurring only within a 2d square lattice such as this. Spins also do not have to be 180-degree rotations of each other - they can be at arbitrary angles to each other (but this would make for a messy diagram). A spin glass can also occur in arbitrarily many dimensions, and the interactions do not have to occur only between "nearest neighbours": any atom can have any number of interacting partners.
We can represent the energy of a spin glass system using the Hamiltonian:
$$ H = - \sum_{x,y} J_{xy} \sigma_x \sigma_y - h \sum_x \sigma_x $$
Where $J_{xy}$ denotes an interaction strength between atoms $x$ and $y$. $\sigma_x$ represents the spin/magnetism of an atom $x$ (can be a vector in which case all products are dot products). A system will tend to a state of lowest energy and so $J_{xy} > 0$ represents a ferromagnetic interaction (minimised for pairs of matching spin) and $J_{xy} < 0$ for antiferromagnetic interactions. The range over which the sum applies has been left purposely ambiguous as to allow for various lattice topologies. The second summation reflects an interaction with an external magnetic field (denoted by $h$) if there is no external field the term can be ignored. (It is worth noting that magnetic spins for a specific atom can only take one of two values (rotated through 180 degrees) since electrons have half integer spin)
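The Hamiltonian is straightforward to evaluate numerically. A minimal sketch (the couplings, spins, and field strength below are made-up illustrative values, with each pair $(x, y)$ counted once via an upper-triangular coupling matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10                                      # number of atoms
spins = rng.choice([-1, 1], size=n)         # sigma_x, up or down
J = np.triu(rng.normal(size=(n, n)), k=1)   # random couplings J_xy for x < y
h = 0.5                                     # external field strength

# H = -sum_{x<y} J_xy sigma_x sigma_y - h sum_x sigma_x
H = -spins @ J @ spins - h * spins.sum()
print(H)
```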
This allows us to define an important term relating to spin glasses. We can note that at a minimal energy ("ground state") the spin glass will not be "ordered" (it will appear "random"). We call this property **"quenched disorder"** - this is due to the similarity to glass materials that are essentially cooled down liquids that get "frozen" into a state of disorder.
It is important to note this is different from pure "randomness". If we think of a continuum with complete order on one side (a crystalline structure, say) and pure randomness on the other, the spin glass lives somewhere in the middle: partially structured and ordered, partially random. This is a particularly interesting place to be. At the purely ordered end of the scale there are many established mathematical tools, and in a purely random situation probability and statistics provide tools for study. In the middle, things get complicated, since you cannot necessarily assume that any one element will "look the same" as any other - thus mean-field methods fall down. This regime occurs in many "real world" phenomena, yet it can suffer from a lack of study precisely because it "falls in between" different disciplines.

There is another important term relating to spin glasses called **"frustration"**. This is where an atom has interactions with other atoms that are in conflict with each other - one interaction would suggest a lowest energy state for the atom is an up spin and another interaction suggests a down spin. An example of this can be seen below:

We can see that the spin of the centre atom is not clearly defined by interaction with its neighbours. The vertical neighbours' interactions suggest the lowest energy state would be a yellow up spin, while the horizontal neighbours suggest a green down spin. In a large spin glass with random (or nearly random) configurations there may be many such frustrated atoms. This gives rise to complexity, and questions such as "what is the lowest energy state for the system?" become very difficult to answer. In many cases we are not able to determine this analytically. The energy landscape (the Hamiltonian energy for a given configuration of spins, given a fixed topology of interactions) can become very complex with many local minima, which means "typical" optimization procedures based on greedy hill climbing (and the like) will struggle to find the global minimum. See the plot of an energy landscape below as an example:

If one ends up in a configuration near one of these local minima, it requires relatively large changes to the configuration to escape the valley. This leads to a kind of "metastability" in the system, where the configuration will "stick" around these points for a long time. As such, spin glasses tend to violate the ergodic principle, which again adds to the mathematical complication of dealing with these systems.
Although the system would "prefer" to be in a lower energy state, through the application of a temperature (or placing the system within an external magnetic field) the atoms can have sufficient energy to escape this lower energy state. For high enough temperatures this means a ferromagnetic material can become antiferromagnetic. In most cases there exists a critical temperature where a phase transition occurs. Phase transitions are interesting examples of **emergence** - one example of a phase transition that everybody is familiar with is the phenomenon of melting a solid to create a liquid. It is interesting that this is a very sharp transition - why is it not the case that a solid gradually becomes "softer" and more liquid-like? Instead, small temperature fluctuations can cause the state of matter to change. It is not immediately obvious why this is the case; other phase transitions exist in other systems and they are often interesting to analyse.
The eagle eyed amongst you may notice that we have ignored the interference between spins themselves. It is true that this will have an impact but in most mathematical models of spin glasses it can be ignored. As with all mathematical models we look for a "minimal description" that captures the behaviour of interest, it turns out that this complication tends not to add much to the model (although I'm sure there exists research with interesting results capturing this interference).
So what are some physical examples of a spin glass in the real world? Technically any iron magnet subject to rust (which is antiferromagnetic) will be a spin glass; however, typically the ferromagnetic atoms will still be so prevalent that we can think of it as a ferromagnet. There are other "exotic" materials (e.g. europium strontium sulphide) that are spin glasses too. Many of the experiments on spin glasses involve melting down a noble metal (e.g. gold or silver), adding a small amount of dispersed molten iron (typically around 0.1-5%), and cooling the mixture very quickly. Many counter-intuitive and contradictory properties have been found through these experiments, including:
* By cooling quickly one can avoid the transition from liquid to solid - creating a viscous liquid spin glass
* Relaxation times (how long it takes the system to adjust to changes in temperature) can be very slow, way beyond experimental time frames
* Interactions with magnetic fields are odd. Absent of a magnetic field a spin glass is not magnetic. By carefully applying and removing external magnetic fields one can create a magnetic spin glass with varying properties (decays, apparent permanence etc.)
* Spin glasses appear to have a "memory" of previous states and undergo something akin to an aging process
Creating theoretical explanations of these (and other) phenomena is the subject of much research on the subject.
## Why do we care?
Ok, so at this point we have a basic understanding of what a spin glass is and some of its complications and properties, but you may be thinking: "but who cares about magnets anyway?" (Unless of course you are [Charlie Kelly](https://i.imgur.com/wZfi1wk.jpg).) It does seem like a lot of work, and to a non-physicist it might not seem interesting. However, in dealing with the complications of spin glasses we can gain a lot of insight into other systems. In the next few bullet points I will try to convince you that it is worth spending time playing around with spin glasses:
* **They're interesting!** - Spin glasses exhibit a number of properties that I personally find very interesting, for example: emergent behaviour, "in between" order and randomness, simple concepts to explain but difficult to write down mathematically, etc.
* **Non-ergodic systems are everywhere!** - Although spin glasses themselves are quite stylised, if they can give insight into the behaviour of non-ergodic systems this is very useful. Loosely speaking, an ergodic system does not exhibit path dependence (e.g. whatever the state at present, eventually any other state can be reached). When looking at complex systems this is typically not the case.
* **Frustration occurs more than we would like** - We often end up in the situation with "conflicting" information and dealing with this gives rise to many opportunities.
* **They're easy to simulate** - while some of the properties above make mathematical analysis difficult in all but a few special cases, spin glasses are fairly easy to code up and simulate. If you wonder "what would happen if....?" you can quickly modify a model and play around to see what happens, you don't need to spend much time thinking about boundary/initial conditions or other technical aspects.
* **There are many different applications** - Given the ubiquity of some of the complications relating to spin glasses the techniques and theory have been applied to many situations including (but not limited to): optimization techniques, neural networks (biological and artificial), machine learning, protein folding, materials science, evolutionary models. The study of quantum spin glasses is also fairly active with applications in quantum computing.
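To back up the "easy to simulate" claim, here is a bare-bones Metropolis-style sketch for a fully connected spin glass with random couplings (all parameter values here are arbitrary choices for illustration, not from any particular study):

```python
import numpy as np

rng = np.random.default_rng(1)

n, beta, steps = 30, 1.0, 5000       # atoms, inverse temperature, flips tried
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                    # symmetric couplings J_xy = J_yx
np.fill_diagonal(J, 0.0)             # no self-interaction
spins = rng.choice([-1, 1], size=n)

def energy(s):
    return -0.5 * s @ J @ s          # factor 1/2: each pair counted twice

for _ in range(steps):
    i = rng.integers(n)
    dE = 2.0 * spins[i] * (J[i] @ spins)   # energy change of flipping spin i
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i] *= -1               # accept the flip
print(energy(spins))
```

Runs like this make it easy to explore metastability: restarting from different random initial configurations typically gets the system stuck in different local minima.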
## Conclusion
In this blog post we have been on a whistle-stop tour of the very basic concepts of spin glasses and models of spin glasses. We have seen some of the difficulties with them and what makes them interesting and useful to study. In future blog posts we will look at specific models and mathematical techniques used to study them.
___
This is the first blog post in a series - you can find the next blog post [here](https://lewiscoleblog.com/spin-glass-models-2)
```
%load_ext lab_black
```
# Introduction
In the previous chapter, we met performance measures like the RMSE or the deviance to measure how good our models are. Unfortunately, we cannot fully rely on these values due to overfitting: the more our models overfit, the less we can trust their "insample" performance, i.e., the performance on the data used to calculate the models. Selecting models based on their insample performance is equally bad. Overfitting should not be rewarded!
In this chapter, we will meet ways to estimate the performance of a model in a fair way and use it to select the best model among alternatives. They are all based on data splitting techniques, where the models are evaluated on fresh data not used for model calculation. Before introducing these techniques, we will meet a competitor of the linear model.
# Nearest-Neighbour
A very simple and intuitive alternative to the linear model is the k-nearest-neighbour approach, originally introduced by Evelyn Fix and J. L. Hodges in an unpublished technical report in 1951. It can be applied for both regression and classification and works without fitting anything. The prediction for an observation is obtained by
1. searching the closest k neighbours in the data set and then
2. combining their responses.
By "nearest" we usually mean Euclidean distance in the covariate space. If covariates are not on the same scale, it makes sense to *standardize* them first by subtracting the mean and dividing by the standard deviation. Otherwise, distances would be dominated by the covariate on the largest scale. Categorical features need to be one-hot- or integer-encoded first. Note that one-hot-encoded covariates may or may not be standardized.
For regression tasks, the responses of the k nearest neighbours are often combined by computing their arithmetic mean. For classification tasks, they are condensed by their most frequent value or to class probabilities.
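The two steps above can be sketched by hand with NumPy (toy data with a single, already standardized feature and k = 2):

```python
import numpy as np

# Toy training data: one standardized feature and a numeric response
X_train = np.array([[0.0], [1.0], [2.0], [10.0]])
y_train = np.array([10.0, 12.0, 14.0, 50.0])

def knn_predict(x_new, k=2):
    # 1. Euclidean distance to every training observation
    dist = np.sqrt(((X_train - x_new) ** 2).sum(axis=1))
    # 2. Combine the responses of the k closest ones by their mean
    nearest = np.argsort(dist)[:k]
    return y_train[nearest].mean()

print(knn_predict(np.array([0.4])))  # neighbours at 0.0 and 1.0 -> 11.0
```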
## Example: nearest-neighbour
What prediction would we get with 5-nearest-neighbour regression for the 10'000th row of the diamonds data set?
```
from plotnine.data import diamonds
from sklearn.neighbors import KNeighborsRegressor, NearestNeighbors
from sklearn.preprocessing import OrdinalEncoder, StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
ord_vars = ["color", "cut", "clarity"]
ord_levels = [diamonds[x].cat.categories.to_list() for x in ord_vars]
# Prepare scaled feature matrix X
preprocessor = make_pipeline(
    ColumnTransformer(
        transformers=[
            ("linear", "passthrough", ["carat"]),
            ("ordered", OrdinalEncoder(categories=ord_levels), ord_vars),
        ],
    ),
    StandardScaler(),
)
X = preprocessor.fit_transform(diamonds)
X[0:5]
# Fit 5-NN model
model = KNeighborsRegressor(5).fit(X, diamonds["price"])
# Apply it to the 10'000th observation
ix = 9999
print(f"Prediction of 10'000th obs: {model.predict(X[[ix]])}")
print("The observation:")
diamonds.iloc[[ix]]
# Its five nearest neighbours
nearest_5_finder = NearestNeighbors(n_neighbors=5).fit(X)
dist, nearest_5 = nearest_5_finder.kneighbors(X[[ix]])
diamonds.iloc[nearest_5.flatten()]
```
**Comments**
- The five nearest diamonds are extremely similar. One of them is the observation of interest itself, introducing a relevant amount of overfitting.
- The average price of these five observations gives us the nearest-neighbour prediction for the 10'000th diamond.
- Would we get better results for a different choice of the number of neighbours k?
- Three lines are identical up to the perspective variables (`depth`, `table`, `x`, `y`, `z`). These rows most certainly represent the same diamond, introducing additional overfit. We need to keep this problematic aspect of the diamonds data in mind.
**Motivation for this chapter:** Insample, a 1-nearest-neighbour regression predicts without error, a consequence of massive overfitting. This hypothetical example indicates that insample performance is often not worth a penny. Models need to be evaluated on fresh, independent data not used for model calculation. This leads us to *simple validation*.
# Simple Validation
With simple validation, the original data set is partitioned into *training* data used to calculate the models and a separate *validation* data set used to evaluate model performance and/or to select models. Typically, 10%-30% of rows are used for validation.
We can use the validation performance to compare *algorithms* (regression versus k-nearest-neighbour etc.) and also to choose their *hyperparameters* like the "k" of k-nearest-neighbour.
Furthermore, the performance difference between training and validation data indicates the amount of overfitting.
## Example: simple validation
What k provides the best RMSE on 20% validation data of the diamonds data?
```
import pandas as pd
from plotnine.data import diamonds
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import OrdinalEncoder, StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error as mse
ord_vars = ["color", "cut", "clarity"]
ord_levels = [diamonds[x].cat.categories.to_list() for x in ord_vars]
# Split data into train and valid
df_train, df_valid, y_train, y_valid = train_test_split(
    diamonds, diamonds["price"], test_size=0.2, random_state=49
)
# Define preprocessing pipeline
preprocessor = make_pipeline(
    ColumnTransformer(
        transformers=[
            ("linear", "passthrough", ["carat"]),
            ("ordered", OrdinalEncoder(categories=ord_levels), ord_vars),
        ],
    ),
    StandardScaler(),
)
# Fit preprocessor on df_train and apply it to df_valid
X_train = preprocessor.fit_transform(df_train)
X_valid = preprocessor.transform(df_valid)
# Bundle data for easy use
data_pairs = ((y_train, X_train), (y_valid, X_valid))
# Loop over k and store train and valid rmse
search = {}
for k in range(1, 21):
    mod = KNeighborsRegressor(k).fit(X_train, y_train)
    search[k] = [mse(y, mod.predict(X), squared=False) for y, X in data_pairs]
# Organize and plot results
results = pd.DataFrame.from_dict(search, orient="index", columns=["Train", "Valid"])
results.plot(
    figsize=(10, 6),
    grid=True,
    xticks=results.index,
    xlabel="k",
    ylabel="Root-mean-square error",
    title="Train and validation performance of different k-NN models",
)
results.head(7)
```
**Comments**
- The amount of overfitting decreases for growing k, which makes sense.
- Selecting k based on the training data would lead to a suboptimal model.
- Based on the validation data, we would choose $k=5$. It has a minimal RMSE of 612 USD.
- Why is the RMSE on the training data not 0 for 1-nearest-neighbour?
- Why is it problematic that some diamonds appear multiple times in the dataset?
# Cross-Validation (CV)
If our data set is large and training takes long, then the simple validation strategy introduced above is usually good enough. For smaller data or if training is fast, there is a better alternative that uses the data in a more economic way and takes more robust decisions. It is called **k-fold cross-validation** and works as follows:
1. Split the data into k pieces $D = \{D_1, \dots, D_k\}$ called "folds". Typical values for k are five or ten.
2. Set aside one of the pieces ($D_j$) for validation.
3. Fit the model on all other pieces, i.e., on $D \setminus D_j$.
4. Calculate the model performance on the validation data $D_j$.
5. Repeat Steps 2-4 until each piece was used for validation once.
6. The average of the k model performances yields the *CV performance* of the model.
The CV performance is a good basis to choose the best and final model among alternatives. The final model is retrained on all folds.
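The six steps can be written out without any library support. A minimal sketch on toy data, where the "model" is simply the training mean and the metric is the RMSE:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=100.0, scale=10.0, size=50)  # toy response values

k = 5
folds = np.array_split(rng.permutation(len(y)), k)  # step 1: shuffled folds

scores = []
for j in range(k):
    valid_idx = folds[j]                            # step 2: hold out D_j
    train_idx = np.concatenate([folds[i] for i in range(k) if i != j])
    prediction = y[train_idx].mean()                # step 3: "fit" on the rest
    rmse = np.sqrt(((y[valid_idx] - prediction) ** 2).mean())  # step 4
    scores.append(rmse)                             # step 5: every fold once

print(np.mean(scores))                              # step 6: CV performance
```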
**Notes**
- The "best" model is typically the one with best CV performance. Depending on the situation, it could also be a model with "good CV performance and not too heavy overfit compared to insample performance" or some other reasonable criterion.
- If cross-validation is fast, you can repeat the process for additional data splits. Such *repeated* cross-validation leads to even more robust results.
## Example: cross-validation
We now use five-fold CV on the diamonds data to find the optimal k, i.e., to *tune* our nearest-neighbour approach.
```
# First part
import pandas as pd
from plotnine.data import diamonds
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import OrdinalEncoder, StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score, KFold
ord_vars = ["color", "cut", "clarity"]
ord_levels = [diamonds[x].cat.categories.to_list() for x in ord_vars]
# Define encoder
encoder = ColumnTransformer(
    transformers=[
        ("linear", "passthrough", ["carat"]),
        ("ordered", OrdinalEncoder(categories=ord_levels), ord_vars),
    ],
)
# Full model pipeline (with arbitrary n_neighbors)
knn_model = Pipeline(
    steps=[
        ("encoder", encoder),
        ("scaler", StandardScaler()),
        ("knn", KNeighborsRegressor(n_neighbors=5)),
    ]
)
# Second part (A): 5-fold cross-validation for different values of n_neighbors
cv = KFold(n_splits=5, shuffle=True, random_state=4302)
search = {}
for k in range(1, 21):
    knn_model.set_params(knn__n_neighbors=k)
    search[k] = -cross_val_score(
        knn_model,
        X=diamonds,
        y=diamonds["price"],
        scoring="neg_root_mean_squared_error",
        cv=cv,
    ).mean()
# Alternative to initializing KFold: shuffle(!) diamonds and then use cv=5.
# Reason: diamonds are sorted by carat. Using fixed folds would be very bad.
# Organize and plot results
results = pd.DataFrame.from_dict(search, orient="index", columns=["rmse"])
results.plot(
    figsize=(10, 6),
    grid=True,
    xticks=results.index,
    xlabel="k",
    ylabel="Root-mean-square error",
    title="CV performance of different k-NN models",
)
results.head(10)
```
**Comment:** Using 7 neighbours seems to be the best choice with a CV RMSE of 635 USD. Again, the fact that certain diamonds appear multiple times leaves a slightly bad feeling. Should we really trust these results?
Scikit-learn offers such "grid searches" (see next paragraph below) out-of-the box. For above example, it would simplify the second part of the code:
```
# Second part (B): 5-fold cross-validation for different values of n_neighbors
from sklearn.model_selection import GridSearchCV
# Define "tuning grid"
param_grid = {"knn__n_neighbors": range(1, 21)}
# Initialize and fit GridSearch to find best k with 5-fold cross-validation
search = GridSearchCV(
    knn_model,
    param_grid=param_grid,
    scoring="neg_root_mean_squared_error",
    cv=KFold(n_splits=5, shuffle=True, random_state=4302),
)
search.fit(X=diamonds, y=diamonds["price"])
# Note: the best model is being refitted on full training data -> convenient!
# Organize results
results = pd.DataFrame(
    -search.cv_results_["mean_test_score"],
    index=param_grid["knn__n_neighbors"],
    columns=["rmse"],
)
results.head(10)
# Note: This combination of a pipeline and grid search would even allow to
# tune parametrized steps in the preprocessing (not relevant in this example).
```
# Grid Search
In the above example, we have systematically compared the CV-performance of k-nearest-neighbour by iterating over a grid of possible values for k. Such strategy to *tune* models, i.e., to select hyperparameters of a model is called **grid search CV**. In the next chapter, we will meet situations where multiple parameters have to be optimized simultaneously. Then, the number of parameter combinations and the grid size explode. To save time, we could evaluate only a random subset of parameter combinations, an approach called **randomized search CV**.
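A hedged sketch of the randomized variant using scikit-learn's `RandomizedSearchCV` (the data here is synthetic, and only 5 of the 20 candidate values of k are evaluated):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 3))
y = X.sum(axis=1) + rng.normal(scale=0.1, size=300)

search = RandomizedSearchCV(
    KNeighborsRegressor(),
    param_distributions={"n_neighbors": list(range(1, 21))},
    n_iter=5,                # evaluate a random subset of 5 candidates
    scoring="neg_root_mean_squared_error",
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

With many hyperparameters, sampling a fixed number of combinations like this keeps the search budget constant while the full grid would grow multiplicatively.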
# Test Data and Final Workflow
Often, modeling involves many decisions. Even if guided by (cross-)validation, each decision tends to make the resulting final model look better than it is, an effect that can be called *overfitting on the validation data*. As a consequence, we often do not know how well the final model will perform in reality. As a solution, we can set aside a small *test* data set used to assess the performance of the *final* model. A size of 5%-20% is usually sufficient.
It is important to look at the test data just once at the very end of the modeling process - after each decision has been made.
Note: Such additional test data set is only necessary if one uses the validation data set to *make decisions*. If the validation data set is just used to assess the true performance of a model, then we do not need this extra data set. Then, we can use the terms "validation data" and "test data" interchangeably.
Depending on whether one does simple validation or cross-validation, the typical workflow is as follows:
**Workflow A**
1. Split data into train/valid/test, e.g., by ratios 70%/20%/10%.
2. Train different models on the training data and assess their performance on the validation data. Choose the best model, retrain it on the combination of training and validation data and call it "final model".
3. Assess the performance of the final model on the test data.
**Workflow B**
1. Split data into train/test, e.g., by ratios 90%/10%.
2. Evaluate and tune different models by k-fold cross-validation on the training data. Choose the best model, retrain it on the full training data.
3. Assess performance of the final model on the test data.
The only difference between the two workflows is whether simple validation or cross-validation is used for making decisions.
## Example: test data
We will now go through Workflow B for our diamond price model. We will (1) tune the "k" of our nearest-neighbour regression and (2) compete with a linear regression.
```
import numpy as np
from plotnine.data import diamonds
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.metrics import mean_squared_error as mse
from sklearn.preprocessing import (
OneHotEncoder,
OrdinalEncoder,
StandardScaler,
FunctionTransformer,
)
from sklearn.model_selection import (
train_test_split,
cross_val_score,
KFold,
GridSearchCV,
)
ord_vars = ["color", "cut", "clarity"]
lvl = [diamonds[x].cat.categories.to_list() for x in ord_vars]
# Split data into train and test
df_train, df_test, y_train, y_test = train_test_split(
diamonds, diamonds["price"], test_size=0.1, random_state=49
)
# Define CV strategy
cv = KFold(n_splits=5, shuffle=True, random_state=4432)
# Cross-validation performance of linear regression
linear_regression = make_pipeline(
ColumnTransformer(
transformers=[
("log", FunctionTransformer(np.log), ["carat"]),
("dummies", OneHotEncoder(categories=lvl, drop="first"), ord_vars),
]
),
LinearRegression(),
)
results_linear = -cross_val_score(
linear_regression,
X=df_train,
y=y_train,
scoring="neg_root_mean_squared_error",
cv=cv,
)
print(f"Linear regression CV RMSE: {results_linear.mean():.3f}")
# Cross-validation performance of k-nearest-neighbour for k = 1-20
knn_encoder = ColumnTransformer(
transformers=[
("linear", "passthrough", ["carat"]),
("ordered", OrdinalEncoder(categories=lvl), ord_vars),
],
)
knn_regression = Pipeline(
steps=[
("encoder", knn_encoder),
("scaler", StandardScaler()),
("knn", KNeighborsRegressor(n_neighbors=5)),
]
)
search = GridSearchCV(
knn_regression,
param_grid={"knn__n_neighbors": range(1, 21)},
scoring="neg_root_mean_squared_error",
cv=KFold(n_splits=5, shuffle=True, random_state=4302),
)
# Remember: the best model is refitted on training data
search.fit(X=df_train, y=y_train)
print(f"Best k of k-NN: {search.best_params_}")
print(f"Its CV-RMSE: {-search.best_score_:.3f}")
print("Best model seems 4-NN!")
# The overall best model is 4-nearest-neighbour
final_rmse = mse(y_test, search.predict(df_test), squared=False)
print(f"Test RMSE of final model: {final_rmse:.3f}")
```
**Comments**
- 4-nearest-neighbour regression performs much better than linear regression.
- Its performance on the independent test data is even better than CV suggests. Could this be a consequence of the fact that certain diamonds appear multiple times in the data, introducing potential "leakage" from training to test data?
# Random Splitting?
The data is often *randomly split* into partitions or folds. As long as rows are *independent*, this leads to honest estimates of model performance as it ensures independent data partitions.
When rows are not independent, e.g., with time series data or grouped data, such a strategy is flawed and leads to overly optimistic results. **This is one of the most frequent reasons to end up with a bad model. It is essential to avoid it.**
## Time-series data
When data represents a time series, splitting is best done in a way that does not destroy the temporal order. For simple validation, e.g., the first 80% of rows could be used for training and the remaining 20% for validation.
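For illustration (a synthetic index, not part of the diamonds example), scikit-learn's `TimeSeriesSplit` implements such temporally ordered folds:

```python
# Time-ordered splitting: validation rows always come after training rows.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

n = 10
X = np.arange(n).reshape(-1, 1)  # rows assumed sorted by time

# Simple validation: first 80% of rows train, remaining 20% validate
split = int(0.8 * n)
train_idx, valid_idx = np.arange(split), np.arange(split, n)

# Cross-validation variant: each fold trains only on earlier rows
for tr, va in TimeSeriesSplit(n_splits=3).split(X):
    assert tr.max() < va.min()  # validation lies strictly in the future
```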
## Grouped data
Often, data is grouped or clustered by some (hopefully known) ID variable, e.g.,
- multiple rows belong to the same patient/customer or
- duplicated rows (accidental or not).
Then, instead of distributing *rows* into partitions, we should distribute *groups*/IDs in order to not destroy the data structure and to get honest performance estimates. We speak of *grouped splitting* and *group k-fold CV*.
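A minimal sketch of group k-fold CV with scikit-learn (synthetic IDs, purely for illustration):

```python
# Grouped splitting: all rows sharing an ID land in the same fold.
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
groups = rng.integers(0, 20, size=100)  # e.g., patient or diamond IDs
X = rng.normal(size=(100, 3))
y = rng.normal(size=100)

for tr, va in GroupKFold(n_splits=5).split(X, y, groups=groups):
    # no ID appears in both the training and the validation fold
    assert set(groups[tr]).isdisjoint(groups[va])
```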
In our example with diamonds data, it would be useful to have a column with diamond "id" that could be used for grouped splitting. (How would you create a proxy for this?)
## Stratification
*If rows are independent*, there is a variant of random splitting that often provides better results and is therefore frequently used: *stratified splitting*. With stratified splitting or *stratified k-fold CV*, rows are split to ensure approximately equal distribution of a key variable (the response or deciding covariate) across partitions/folds.
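A short sketch on synthetic data: with `stratify=`, `train_test_split` keeps the positive rate nearly identical in both partitions.

```python
# Stratified splitting preserves the class balance across partitions.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
y = (rng.random(1000) < 0.1).astype(int)  # rare positive class (~10%)
X = rng.normal(size=(1000, 2))

X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1
)
print(y_tr.mean(), y_va.mean())  # nearly identical positive rates
```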
# Chapter Summary
In this chapter, we have met strategies to estimate model performance in a fair way. These strategies are also used for model selection and tuning. They are an essential part of the full modeling process. ML models without appropriate validation strategy are not to be trusted.
# Exercises
1. Regarding the problem that some diamonds seem to appear multiple times in the data: As an alternative to *grouped* splitting, repeat the last example also on data deduplicated by `price` and all covariates. Do the results change? Which results do you trust more?
2. Use cross-validation to select the best polynomial degree to represent `log(carat)` in the Gamma GLM with log-link (with additional covariates `color`, `cut`, and `clarity`). Evaluate the result on an independent test data.
3. Optional: Compare the linear regression for `price` (using `log(carat)`, `color`, `cut`, and `clarity` as covariates) with a corresponding Gamma GLM with log-link by simple validation. Use once (R)MSE for comparison and once Gamma deviance. What do you observe?
## Calibration Workshop
In this Notebook we will:
- Load data and train a model
- Assess the Calibration of the model
- Explore various methods to calibrate the results of the model
This notebook requires `numpy`, `pandas`, and `scikit-learn`, as well as the `ml-insights` and `betacal` packages.
```
# !pip install ml_insights
# !pip install betacal
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import ml_insights as mli
%matplotlib inline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss, brier_score_loss, roc_auc_score
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import calibration_curve
from betacal import BetaCalibration
mli.__version__
```
### MIMIC ICU Data*
We illustrate calibration using a mortality model on the MIMIC ICU data. Each row represents a hospital stay of an individual patient. We have many lab values and vital sign measurements, as well as an indicator of whether or not the patient died in the hospital.
*MIMIC-III, a freely accessible critical care database. Johnson AEW, Pollard TJ, Shen L, Lehman L, Feng M, Ghassemi M, Moody B, Szolovits P, Celi LA, and Mark RG. Scientific Data (2016).
https://mimic.physionet.org
```
# Load dataset derived from the MIMIC database
lab_aug_df = pd.read_csv("data/lab_vital_icu_table.csv")
# Impute the median in each column to replace NAs
for i in range(len(lab_aug_df.columns)):
if lab_aug_df.iloc[:,i].dtype!='O':
lab_aug_df.iloc[:,i].fillna(lab_aug_df.iloc[:,i].median(),inplace=True)
```
## Lesson 1: Assessing Calibration
We will be building a model to predict mortality in the ICU based on vital signs and lab values. To start, we will just pick a few different ones on which to build our model.
```
# Choose a subset of variables
feature_set_1 = ['bun_min',
'bun_max', 'wbc_min', 'wbc_max','sysbp_max', 'sysbp_min']
X_1 = lab_aug_df.loc[:,feature_set_1]
y = lab_aug_df['hospital_expire_flag']
```
We now divide the data into training, calibration, and test sets. The training set will be used to fit the model, the calibration set will be used to calibrate the probabilities, and the test set will be used to evaluate the performance. Later, we will learn about cross-validation approaches that avoid the need for a separate calibration set.
Below are the variables used to control the size of the train, calibration, and test sets, as well as the random state used for split generation *and* random forest model generation.
Note that there will be a lot of variance in the performance of the methods as we change these parameters. Therefore it is important not to draw overly broad conclusions from individual runs. At the end of this notebook will be an exercise to change these parameters around and observe the variation.
```
train_perc = .6
calib_perc = .05
test_perc = 1-train_perc-calib_perc
rs = 42
X_train_calib_1, X_test_1, y_train_calib_1, y_test_1 = train_test_split(X_1, y, test_size=test_perc, random_state=rs)
X_train_1, X_calib_1, y_train_1, y_calib_1 = train_test_split(X_train_calib_1, y_train_calib_1,
test_size=calib_perc/(1-test_perc),
random_state=rs)
X_train_1.shape, X_calib_1.shape, X_test_1.shape
# To understand the problem better, let's see what percentage of patients overall died in the ICU
np.mean(y_train_1)
```
Next, we will fit a Random Forest model to our training data. Then we'll use that model to predict "probabilities" on our validation and test sets.
I use quotes on "probabilities" because these numbers, which are the percentage of trees that voted "yes" are better understood as mere scores. A higher value should generally indicate a higher probability of mortality. However, in general, one should not expect these to be well-calibrated probabilities. The fact that, say, 60% of the trees voted "yes" on a particular case does not necessarily mean that that case has a 60% probability of mortality.
```
rfmodel1 = RandomForestClassifier(n_estimators = 500, class_weight='balanced_subsample',
random_state=rs, n_jobs=-1 )
rfmodel1.fit(X_train_1,y_train_1)
rf1_preds_test_uncalib = rfmodel1.predict_proba(X_test_1)[:,1]
```
## Assessing Calibration: Log-loss (aka Cross-Entropy aka Negative Mean Log-Likelihood)
- The `log_loss` is a common metric to measure the "quality" of predicted probabilities
- AUROC measures the quality of the **ranking** but does not assess **calibration**
- `log_loss` assesses the combination of discrimination and calibration.
- `log_loss` is difficult to interpret on its own, generally used comparatively.
$\begin{equation}
\mbox{log_loss} = \frac{1}{n} \left(\sum_{\mbox{pos cases}} -log(p_i) + \sum_{\mbox{neg cases}} -log(1- p_i)\right)
\end{equation}$
- If you predicted a probability of .25 for a case, and it happened, your loss for that case would be $-\log(.25) = \log(1/.25) = \log(4)$
- If you predicted a probability of .8 for a case, and it *didn't* happen, your loss for that case would be $-\log(1-.8) = \log(1/.2) = \log(5)$
- Loss is 0 when you are certain about the outcome and you are right
- Loss is $\infty$ when you are certain about the outcome and you are wrong
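The formula and the two worked cases above can be checked numerically (a small sketch using the same numbers as in the bullets):

```python
# Verify the log-loss formula against sklearn on a tiny example.
import numpy as np
from sklearn.metrics import log_loss

y = np.array([1, 0, 1, 0])
p = np.array([0.25, 0.8, 0.9, 0.1])

# First case contributes -log(.25) = log(4); second contributes -log(.2) = log(5)
manual = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
assert np.isclose(manual, log_loss(y, p))
print(manual)
```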
```
roc_auc_score(y_test_1, rf1_preds_test_uncalib), log_loss(y_test_1, rf1_preds_test_uncalib)
# If I divide all probabilities by 4, AUROC is the same, but log_loss gets worse
roc_auc_score(y_test_1, rf1_preds_test_uncalib/4), log_loss(y_test_1, rf1_preds_test_uncalib/4)
```
## Assessing Calibration: Brier Score
$\begin{equation}
\mbox{Brier_score} = \frac{1}{n} \left(\sum_{\mbox{all cases}} (y_i-p_i)^2 \right)
\end{equation}$
Brier score is a fancy name for the mean squared error between the predicted probabilities and the true (0/1) answer. If I predict .7 on a case that was true (1), my error for that case is $(1-.7)^2 = .09$
Average the scores on all cases, and that gives the Brier score.
Note that the "worst case" for Brier score is 1, whereas for log-loss it is $\infty$.
```
brier_score_loss(y_test_1, rf1_preds_test_uncalib)
```
### Log-loss vs Brier Score
The main difference between Brier score and log-loss is how they deal with small probabilities. Suppose the "true" probability is .01 and you predict a probability of .0001, is that a "big" error?
According to log-loss, it is. You have understated the true probability by a factor of 100. If you were working in insurance, and predicting the probability of a car accident, the insurance company would be paying out 100x as much as they thought they would.
According to Brier score, it is not that a big deal. For some applications, that may be appropriate. For example, if you are modeling the probability someone will vote for Candidate A vs Candidate B, and planning to use that model to estimate election results on some population, it doesn't matter to distinguish between very low probabilities.
This is actually a very "deep" topic. We will focus primarily on the log-loss, but will also show the results on Brier score.
## Assessing Calibration: Reliability Diagram
A visual way to check the calibration of a model is to create a "Reliability Diagram". The idea behind the reliability diagram is the following:
- Bin the interval [0,1] into smaller subsets (e.g. [0, 0.05], [0.05, .1], ... [.95,1])
- Find the empirical probability for each bin (e.g., if 20 predictions fell into a bin and 9 of the corresponding outcomes were "yes", the empirical probability is .45)
- Plot the average predicted probability in each bin (x-axis) against the empirical probability (y-axis)
- When the dots are (significantly) above the line y=x, the model is under-predicting the true probability; when they are below the line, the model is over-predicting it.
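The binning recipe above can be sketched by hand (synthetic, perfectly calibrated predictions, so the points should hug the diagonal):

```python
# Hand-rolled reliability-diagram coordinates for 20 equal-width bins.
import numpy as np

rng = np.random.default_rng(2)
p = rng.random(5000)                    # predicted probabilities
y = (rng.random(5000) < p).astype(int)  # outcomes drawn at the predicted rate

bins = np.linspace(0, 1, 21)
idx = np.digitize(p, bins[1:-1])        # bin index 0..19 for each prediction
pred_means, emp_probs = [], []
for b in range(20):
    mask = idx == b
    if mask.any():
        pred_means.append(p[mask].mean())  # x-coordinate
        emp_probs.append(y[mask].mean())   # y-coordinate (empirical probability)
```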
We will use the ml-insights `plot_reliability_diagram` function. It has a fair bit of flexibility that we will explore.
Some features include:
- custom bins
- accompanying histogram
- error bars
- logit scaling to explore calibration of very small and very large probabilities
```
mli.plot_reliability_diagram?
# This is the default plot
rd = mli.plot_reliability_diagram(y_test_1, rf1_preds_test_uncalib);
```
Above we see the default reliability diagram. While most of the points seem to be within the error bars there are a couple of flaws to point out:
1. Between about .2 and .45 we are consistently over-predicting the mortality. Though they are right on the edge of the error bar, having many in a row with errors in the same direction indicates this is not random noise.
1. It is hard to see the smaller probabilities well (in the first two bins), but the model looks like it may be under-predicting there. We will demonstrate how to explore this more closely.
First we will explore some options in the `plot_reliability_diagram` function
```
# You can opt to have a histogram showing the counts in each bin
plt.figure(figsize=(10,5))
mli.plot_reliability_diagram(y_test_1, rf1_preds_test_uncalib, show_histogram=True);
```
We see that we have lots of observations with small predicted probabilities and fewer with large probabilities. Suppose we want to subdivide further the bins where we have lots of data, and aggregate the bins that have less data.
```
plt.figure(figsize=(10,5))
custom_bins_a = np.array([0,.01,.02,.03,.05, .1, .3, .5, .75, 1])
mli.plot_reliability_diagram(y_test_1, rf1_preds_test_uncalib, bins=custom_bins_a, show_histogram=True);
```
Again, we may be underpredicting close to 0, but it is hard to tell. To look closer, we can use the "logit" scaling. This scaling uses more area at probabilities close to 0 and 1, and less area close to .5
```
plt.figure(figsize=(10,10))
rd = mli.plot_reliability_diagram(y_test_1, rf1_preds_test_uncalib, scaling='logit',
bins=custom_bins_a, marker='.')
```
Now we are able to see that we are underpredicting at low probabilities and then overpredicting around .3. Let's extract a few numbers by looking at the dictionary `rd` returned by the function
```
rd.keys()
rd['pred_probs'], rd['emp_probs'], rd['bin_counts']
```
Here we can see clearly that for the smallest bin, we have an average predicted probability of about .0017 but empirically (on 3089 trials) we have a probability of .012 -- off by a factor of 7!
```
.012/.0017
```
#### sklearn `calibration_curve`
- Scikit-learn has a function `calibration_curve` that will give the x and y coordinates for a number of bins.
- The rest of the plotting is up to you
- It does not support custom bin widths (as of October 2020)
```
prob_true, prob_pred = calibration_curve(y_test_1, rf1_preds_test_uncalib, n_bins=20)
plt.scatter(prob_pred, prob_true)
plt.plot(np.linspace(0,1,11),np.linspace(0,1,11), color='k')
```
## Exercise
### It's your turn!
Repeat this process for a bigger set of features below
```
# Choose a subset of variables
# feature_set_2 = feature_set_1 + ['lactate_min', 'lactate_max', 'platelet_min', 'platelet_max',
# 'potassium_min', 'potassium_max', 'ptt_min', 'ptt_max', 'inr_min',
# 'inr_max']
# Repeat train, calib, test split
# X_2 = lab_aug_df.loc[:,feature_set_2]
# X_train_calib_2, X_test_2, y_train_calib_2, y_test_2 = train_test_split(X_2, y, test_size=test_perc, random_state=rs)
# X_train_2, X_calib_2, y_train_2, y_calib_2 = train_test_split(X_train_calib_2, y_train_calib_2,
# test_size=calib_perc/(1-test_perc),
# random_state=rs)
# Fit a Random Forest Model
# rfmodel2 = RandomForestClassifier(n_estimators=500, class_weight='balanced_subsample',
# random_state=rs, n_jobs=-1 )
# rfmodel2.fit(X_train_2,y_train_2);
# rf2_preds_test_uncalib = rfmodel2.predict_proba(X_test_2)[:,1]
# roc_auc_score(y_test_2, rf2_preds_test_uncalib), log_loss(y_test_2, rf2_preds_test_uncalib)
```
Use the cells below to explore the calibration of this model.
- Is it well calibrated?
- How does it compare to the previous model?
```
# Plot the default reliability diagram
# mli.plot_reliability_diagram();
# Display the histogram on the side
# plt.figure(figsize=(8,4))
# mli.plot_reliability_diagram(...);
# Create a custom set of bins
# bins_custom_b = np.array([])
# mli.plot_reliability_diagram(...);
# Use the logit scaling
# plt.figure(figsize=(10,4))
# mli.plot_reliability_diagram(...);
```
### Extra Credit:
Fit another kind of model (Boosting, Logistic Regression, etc.) on the same data set and assess the calibration of that model
## Lesson 2: Calibrating a Model
Since our models are not well-calibrated, we would like to fix this.
### Getting a **Calibration** Data Set
We will discuss two ways to get a data set on which to perform calibration:
- Use an independent calibration set
- Using Cross-validation to generate scores from the training set.
The first method is simpler, but requires a separate data set, meaning that you will have less data to train your model with. It is good to use if you have plenty of data.
The second approach takes more time, but is generally more data-efficient. We generate a set of cross-validated predictions on the training data. These predictions come from models that are close to, but not exactly identical to, your original model. However, this discrepancy is usually minor and offset by having more data on which to calibrate.
### Method of Calibration
The data set for calibration is a set of scores and the corresponding binary outcomes. The goal is then to find a function that "fits" the relationship between the scores and the "actual" probabilities (as determined empirically on the calibration set). We will review 4 methods of calibration:
- Platt Scaling
- Isotonic Regression
- Beta Calibration
- SplineCalib
## Approach A: Independent calibration set
Overall process:
- Need separate training and calibration sets (plus a test set to evaluate)
- Fit model on training set data
- Make predictions on calibration set.
- Use those predictions + true answers to fit a calibration object.
- Use model to make predictions on test set
- Use calibrator to calibrate those predictions
- Evaluate log_loss, reliability diagram on calibrated predictions
```
calibset_preds_uncalib_1 = rfmodel1.predict_proba(X_calib_1)[:,1]
testset_preds_uncalib_1 = rfmodel1.predict_proba(X_test_1)[:,1]
```
## Method 1: Platt Scaling
Assumes that there is a logistic relationship between the scores $z$ and the true probability $p$.
$\log\left(\frac{p}{1-p}\right) = \alpha + \beta z$
$p = \frac{1}{1+\exp(-(\alpha + \beta z))}$
So it fits the two parameters $\alpha$ and $\beta$ just like in logistic regression!
- Very restrictive set of possible functions
- Needs very little data
- Historically, came from the observation (and subsequent theoretical arguments) that a logistic regression was the "right" calibration for Support Vector Machines
Reference: Platt, J. (1999). Probabilistic outputs for support vector machines and comparison to regularized likelihood methods. Advances in Large Margin Classifiers (pp.61–74).
```
# Fit Platt scaling (logistic calibration)
lr = LogisticRegression(C=99999999999, solver='lbfgs')
lr.fit(calibset_preds_uncalib_1.reshape(-1,1), y_calib_1)
calibset_platt_probs = lr.predict_proba(calibset_preds_uncalib_1.reshape(-1,1))[:,1]
testset_platt_probs = lr.predict_proba(testset_preds_uncalib_1.reshape(-1,1))[:,1]
mli.plot_reliability_diagram(y_calib_1, calibset_preds_uncalib_1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, lr.predict_proba(tvec.reshape(-1,1))[:,1]);
plt.title('Platt Calibration Curve on Calibration Data');
mli.plot_reliability_diagram(y_test_1, testset_preds_uncalib_1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, lr.predict_proba(tvec.reshape(-1,1))[:,1])
plt.title('Platt Calibration Curve on Test Data');
mli.plot_reliability_diagram(y_test_1, testset_platt_probs);
plt.title('Reliability Diagram on Test Data\n after Platt Calibration');
custom_bins_a = np.array([0,.01,.02,.03,.05, .1, .3, .5, .75, 1])
rd = mli.plot_reliability_diagram(y_test_1, testset_platt_probs, scaling='logit', bins=custom_bins_a);
plt.title('Reliability Diagram on Test Data\n for Platt Calibrated Model');
```
Using the logit scaling, we can see that the calibration does poorly at the small values.
```
print('Platt calibrated log_loss = {}'.format(log_loss(y_test_1, testset_platt_probs)))
print('Uncalibrated log_loss = {}'.format(log_loss(y_test_1, testset_preds_uncalib_1)))
```
Despite this, we see an improvement in log_loss. Generally, making mistakes by predicting "closer to .5" is better for log loss (i.e. better to overpredict rare events and underpredict near certain events)
## Method 2: Isotonic Regression
- Fits a piecewise constant, monotonically increasing, function to map the scores to probabilities.
- Uses the PAV (Pool Adjacent Violators, also called PAVA) algorithm.
- Does not assume a particular parametric form.
- Tends to be better than Platt scaling with enough data
- Tends to overfit: ("choppy" with unrealistic jumps)
Reference: Zadrozny, B., & Elkan, C.(2001). Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers. ICML (pp.609–616).
Zadrozny, B., & Elkan, C. (2002). Transforming classifier scores into accurate multiclass probability estimates. KDD (pp.694–699).
```
iso = IsotonicRegression(out_of_bounds = 'clip')
iso.fit(calibset_preds_uncalib_1, y_calib_1)
calibset_iso_probs = iso.predict(calibset_preds_uncalib_1)
testset_iso_probs = iso.predict(testset_preds_uncalib_1)
mli.plot_reliability_diagram(y_calib_1, calibset_preds_uncalib_1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, iso.predict(tvec), label='Isotonic');
plt.title('Isotonic Calibration Curve on Calibration Data');
mli.plot_reliability_diagram(y_test_1, testset_preds_uncalib_1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, iso.predict(tvec), label='Isotonic');
plt.title('Isotonic Calibration Curve on Test Data');
mli.plot_reliability_diagram(y_test_1, testset_iso_probs);
plt.title('Reliability Diagram on Test Data\n after Isotonic Calibration');
```
Here we see some of the artifacts of isotonic calibration. Some bins have few to no points after calibration due to the vertical "steps" in the function.
```
custom_bins_a = np.array([0,.01,.02,.03,.05, .1, .3, .5, .75, 1])
rd = mli.plot_reliability_diagram(y_test_1, testset_iso_probs, scaling='logit', bins=custom_bins_a);
plt.title('Reliability Diagram on Test Data\n for Isotonic Calibrated Model');
print('Isotonic calibrated log_loss = {}'.format(log_loss(y_test_1, testset_iso_probs)))
print('Platt calibrated log_loss = {}'.format(log_loss(y_test_1, testset_platt_probs)))
print('Uncalibrated log_loss = {}'.format(log_loss(y_test_1, testset_preds_uncalib_1)))
```
## Method 3: Beta Calibration
"A well-founded and easily implemented improvement on logistic calibration for binary classifiers."
$p = \left(1+ 1 / \left( \exp(c) \frac{z^a}{(1-z)^b} \right) \right)^{-1}$
- Similar to Platt scaling with a couple of important improvements
- Is a 3-parameter family of curves rather than 2-parameter
- Family of curves *includes* the line $y=x$ (so it won't mess it up if it's already calibrated)
Reference: Kull, M., Filho, T.S. & Flach, P.. (2017). Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in PMLR 54:623-631
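A quick numerical sanity check of the map above (parameters chosen by hand, not fitted): with a = b = 1 and c = 0 the function reduces to the identity, which is exactly the property highlighted in the last bullet.

```python
# The beta-calibration family contains the identity map (a = b = 1, c = 0):
# p = 1 / (1 + 1 / (exp(c) * z**a / (1 - z)**b))
import numpy as np

def beta_map(z, a, b, c):
    return 1.0 / (1.0 + 1.0 / (np.exp(c) * z**a / (1 - z)**b))

z = np.linspace(0.01, 0.99, 99)
assert np.allclose(beta_map(z, a=1, b=1, c=0), z)
```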
```
# Fit three-parameter beta calibration
bc = BetaCalibration()
bc.fit(calibset_preds_uncalib_1, y_calib_1)
calibset_bc_probs = bc.predict(calibset_preds_uncalib_1)
testset_bc_probs = bc.predict(testset_preds_uncalib_1)
mli.plot_reliability_diagram(y_calib_1, calibset_preds_uncalib_1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, bc.predict(tvec))
plt.title('Beta Calibration Curve on Calibration Set');
mli.plot_reliability_diagram(y_test_1, testset_preds_uncalib_1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, bc.predict(tvec))
plt.title('Beta Calibration Curve on Test Set');
mli.plot_reliability_diagram(y_test_1, testset_bc_probs);
plt.title('Reliability Diagram on Test Data\n after Beta Calibration');
custom_bins_a = np.array([0,.01,.02,.03,.05, .1, .3, .5, .75, 1])
rd = mli.plot_reliability_diagram(y_test_1, testset_bc_probs, scaling='logit', bins=custom_bins_a);
plt.title('Reliability Diagram on Test Data\n for Beta Calibrated Model');
print('Beta calibrated log_loss = {}'.format(log_loss(y_test_1, testset_bc_probs)))
print('Isotonic calibrated log_loss = {}'.format(log_loss(y_test_1, testset_iso_probs)))
print('Platt calibrated log_loss = {}'.format(log_loss(y_test_1, testset_platt_probs)))
print('Uncalibrated log_loss = {}'.format(log_loss(y_test_1, testset_preds_uncalib_1)))
```
## Method 4: SplineCalib
- SplineCalib fits a cubic smoothing spline to the relationship between the uncalibrated scores and the calibrated probabilities
- Smoothing splines strike a balance between fitting the points well and having a smooth function
- SplineCalib uses a smoothed logistic function - so the fit to data is measured by likelihood (i.e. log-loss) and the smoothness refers to the integrated second derivative **before** the logistic transformation.
- There is a nuisance parameter that trades off smoothness for fit. At one extreme it will revert to standard logistic regression (i.e. Platt scaling) and at the other extreme it will be a very wiggly function that fits the data but does not generalize well.
- SplineCalib automatically fits the nuisance parameter (though this can be adjusted by the user)
- The resulting calibration function is not necessarily monotonic. (In some cases this may be beneficial).
References: Lucena, B. Spline-based Probability Calibration. https://arxiv.org/abs/1809.07751
```
# Define SplineCalib object
splinecalib = mli.SplineCalib()
splinecalib.fit(calibset_preds_uncalib_1, y_calib_1)
calibset_splinecalib_probs = splinecalib.predict(calibset_preds_uncalib_1)
testset_splinecalib_probs = splinecalib.predict(testset_preds_uncalib_1)
mli.plot_reliability_diagram(y_calib_1, calibset_preds_uncalib_1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, splinecalib.predict(tvec))
plt.title('SplineCalib Calibration Curve on Calibration Set');
mli.plot_reliability_diagram(y_test_1, testset_preds_uncalib_1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, splinecalib.predict(tvec))
plt.title('SplineCalib Calibration Curve on Test Set');
mli.plot_reliability_diagram(y_test_1, testset_splinecalib_probs);
plt.title('Reliability Diagram on Test Data\n after SplineCalib Calibration');
custom_bins_a = np.array([0,.01,.02,.03,.05, .1, .3, .5, .75, 1])
rd = mli.plot_reliability_diagram(y_test_1, testset_splinecalib_probs, scaling='logit', bins=custom_bins_a);
plt.title('Reliability Diagram on Test Data\n for SplineCalib Calibrated Model');
print('Spline calibrated log_loss = {}'.format(log_loss(y_test_1, testset_splinecalib_probs)))
print('Beta calibrated log_loss = {}'.format(log_loss(y_test_1, testset_bc_probs)))
print('Isotonic calibrated log_loss = {}'.format(log_loss(y_test_1, testset_iso_probs)))
print('Platt calibrated log_loss = {}'.format(log_loss(y_test_1, testset_platt_probs)))
print('Uncalibrated log_loss = {}'.format(log_loss(y_test_1, testset_preds_uncalib_1)))
```
On this example, SplineCalib does best, though Beta calibration and Platt scaling both do reasonably well (and are quicker to fit). Isotonic does relatively poorly. Note that we used only about 3K data points in our calibration set - Isotonic regression is relatively "data-hungry"
Below, we can see the results measured by Brier Score loss. Note that there may be a very different pattern here in relative performance of the methods.
```
print('Spline calibrated Brier Score = {}'.format(brier_score_loss(y_test_1, testset_splinecalib_probs)))
print('Beta calibrated Brier Score = {}'.format(brier_score_loss(y_test_1, testset_bc_probs)))
print('Isotonic calibrated Brier Score = {}'.format(brier_score_loss(y_test_1, testset_iso_probs)))
print('Platt calibrated Brier Score = {}'.format(brier_score_loss(y_test_1, testset_platt_probs)))
print('Uncalibrated Brier Score = {}'.format(brier_score_loss(y_test_1, testset_preds_uncalib_1)))
```
## Your Turn
Calibrate the `rfmodel2` using a couple (or all) of the methods given above. Compare their performances. Feel free to cut and paste from above, but try to think about the steps you are doing so it makes sense.
Note that you may get very different results from what happened for `rfmodel1`.
## Approach B: Cross-validation on the training data
The reason to use an independent calibration set (rather than just the training data) is that how the model performs on the training data (that it has already seen) is not indicative of how it will behave on data it has not seen before. We want the calibration to correct how the model will behave on "new" data, not the training data.
Another approach is to take a cross-validation approach to generating calibration data. We divide the training data into k "folds", leave one fold out, train our model (i.e. the choice of model and hyperparameter settings) on the remaining k-1 folds, and then make predictions on the left-out fold. After doing this process k times, each time leaving out a different fold, we will have a set of predictions, each of which was generated by 1 of k slightly different models, but was always generated by a model that did not see that training point. Done properly (assuming no "leakage" across the folds), this set of predictions and answers will serve as an appropriate calibration set.
- Advantages: more data for both training *and* calibration.
- Disadvantages: Must train k+1 models. Also, the calibration data does not come from the exact same model you will be using it on.
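The same idea is available in plain scikit-learn via `cross_val_predict` (a sketch on synthetic data; in this notebook the same role is played by `mli.cv_predictions`):

```python
# Out-of-fold predicted probabilities: each row is scored by a model
# that never saw that row during training.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=500, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0)

cv_probs = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(cv_probs.shape)  # -> (500,)
```

These out-of-fold scores, paired with `y`, form an appropriate calibration set without sacrificing any training data.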
ML-Insights (the package containing SplineCalib, as well as other functionality) has a simple function to generate these cross-validated predictions. We demonstrate it below.
```
# Get the cross validated predictions given a model and training data.
cv_preds_train = mli.cv_predictions(rfmodel1, X_train_1, y_train_1, clone_model=True)
cv_preds_train1 = cv_preds_train[:,1]
```
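For reference, a similar set of out-of-fold predictions can be produced with scikit-learn alone. A minimal sketch on toy data (the model, sample size, and fold count here are illustrative, not the tutorial's actual setup):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

# toy stand-in for the training data
X, y = make_classification(n_samples=500, random_state=0)
model = RandomForestClassifier(n_estimators=20, random_state=0)

# each prediction comes from a fold-model that never saw that row
cv_preds = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print(cv_preds.shape)  # (500,)
```

These out-of-fold probabilities play the same role as `cv_preds_train1` above, though `mli.cv_predictions` additionally handles model cloning for you.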
### Platt scaling with CV Data
```
# Fit Platt scaling (logistic calibration)
lr_cv = LogisticRegression(C=99999999999, solver='lbfgs')
lr_cv.fit(cv_preds_train1.reshape(-1,1), y_train_1)
testset_platt_probs_cv = lr_cv.predict_proba(testset_preds_uncalib_1.reshape(-1,1))[:,1]
mli.plot_reliability_diagram(y_train_1, cv_preds_train1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, lr.predict_proba(tvec.reshape(-1,1))[:,1], label='Platt (small calib set)')
plt.plot(tvec, lr_cv.predict_proba(tvec.reshape(-1,1))[:,1], label='Platt (cv calib set)')
plt.title('Platt Calibration Curve on Calibration Data');
plt.legend();
mli.plot_reliability_diagram(y_test_1, testset_preds_uncalib_1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, lr.predict_proba(tvec.reshape(-1,1))[:,1], label='Platt (small calib set)')
plt.plot(tvec, lr_cv.predict_proba(tvec.reshape(-1,1))[:,1], label='Platt (cv calib set)')
plt.title('Platt Calibration Curve on Test Data');
plt.legend();
```
We see that the two curves are not very different. Since Platt scaling fits just two parameters, the function does not change much with more data. The upside is that you don't need a lot of calibration data to use Platt scaling!
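The two parameters are just the slope and intercept of a sigmoid, p = 1/(1 + e^-(a·s + b)). A from-scratch sketch of fitting them by gradient descent on the log-loss (toy scores and labels, purely illustrative):

```python
import math

def platt_fit(scores, labels, lr=0.5, iters=5000):
    # fit p = 1 / (1 + exp(-(a*s + b))) by gradient descent on log-loss
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(iters):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            ga += (p - y) * s / n
            gb += (p - y) / n
        a -= lr * ga
        b -= lr * gb
    return a, b

a, b = platt_fit([0.1, 0.2, 0.3, 0.6, 0.7, 0.9], [0, 0, 0, 1, 1, 1])
# higher raw score -> higher calibrated probability, so a is positive here
print(a > 0)  # True
```

Because the whole calibration map is pinned down by just `a` and `b`, adding more calibration data mostly refines these two numbers rather than reshaping the curve.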
```
mli.plot_reliability_diagram(y_test_1, testset_platt_probs_cv);
plt.title('Reliability Diagram on Test Data\n after Platt Calibration');
custom_bins_a = np.array([0,.01,.02,.03,.05, .1, .3, .5, .75, 1])
rd = mli.plot_reliability_diagram(y_test_1, testset_platt_probs_cv, scaling='logit', bins=custom_bins_a);
plt.title('Reliability Diagram on Test Data\n for Platt Calibrated Model');
print('Platt calibrated log_loss = {}'.format(log_loss(y_test_1, testset_platt_probs_cv)))
print('Uncalibrated log_loss = {}'.format(log_loss(y_test_1, testset_preds_uncalib_1)))
```
Having the extra data did not meaningfully change our results.
### Isotonic with CV data
```
iso_cv = IsotonicRegression(out_of_bounds = 'clip')
iso_cv.fit(cv_preds_train1, y_train_1)
testset_iso_probs_cv = iso_cv.predict(testset_preds_uncalib_1)
mli.plot_reliability_diagram(y_train_1, cv_preds_train1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, iso.predict(tvec), label='Isotonic (small calib set)');
plt.plot(tvec, iso_cv.predict(tvec), label='Isotonic (cv calib set)');
plt.title('Isotonic Calibration Curve on Calibration Data');
plt.legend();
mli.plot_reliability_diagram(y_test_1, testset_preds_uncalib_1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, iso.predict(tvec), label='Isotonic (small calib set)');
plt.plot(tvec, iso_cv.predict(tvec), label='Isotonic (cv calib set)');
plt.title('Isotonic Calibration Curve on Test Data');
plt.legend();
```
We see above that, with more data, the isotonic regression fits the test data better, particularly in the middle of the range. Qualitatively, the vertical jumps are less extreme.
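Under the hood, isotonic regression fits a monotone step function using the pool-adjacent-violators (PAV) algorithm; with more data, each step pools over more points, which is why the jumps shrink. A minimal sketch of the pooling step (illustrative, not scikit-learn's actual implementation):

```python
def pav(y):
    # pool-adjacent-violators: least-squares monotone nondecreasing fit
    out = []  # stack of [block mean, block weight]
    for v in y:
        out.append([v, 1])
        # merge blocks while monotonicity is violated
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, w2 = out.pop()
            m1, w1 = out.pop()
            out.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    # expand blocks back to one fitted value per input
    return [m for m, w in out for _ in range(w)]

print(pav([1, 3, 2, 4]))  # [1, 2.5, 2.5, 4]
```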
```
mli.plot_reliability_diagram(y_test_1, testset_iso_probs_cv);
plt.title('Reliability Diagram on Test Data\n after Isotonic Calibration');
```
The isotonic regression appears to have benefitted from the extra calibration data.
```
custom_bins_a = np.array([0,.01,.02,.03,.05, .1, .3, .5, .75, 1])
rd = mli.plot_reliability_diagram(y_test_1, testset_iso_probs_cv, scaling='logit', bins=custom_bins_a);
plt.title('Reliability Diagram on Test Data\n for Isotonic Calibrated Model');
print('Isotonic calibrated log_loss = {}'.format(log_loss(y_test_1, testset_iso_probs_cv)))
print('Platt calibrated log_loss = {}'.format(log_loss(y_test_1, testset_platt_probs_cv)))
print('Uncalibrated log_loss = {}'.format(log_loss(y_test_1, testset_preds_uncalib_1)))
```
With the larger data set we see much better performance from isotonic regression.
### Beta with CV data
```
# Fit three-parameter beta calibration
bc_cv = BetaCalibration()
bc_cv.fit(cv_preds_train1, y_train_1)
testset_bc_probs_cv = bc_cv.predict(testset_preds_uncalib_1)
mli.plot_reliability_diagram(y_train_1, cv_preds_train1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, bc.predict(tvec), label='Beta (small calib set)')
plt.plot(tvec, bc_cv.predict(tvec), label='Beta (cv calib set)')
plt.title('Beta Calibration Curve on Calibration Data');
plt.legend();
mli.plot_reliability_diagram(y_test_1, testset_preds_uncalib_1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, bc.predict(tvec), label='Beta (small calib set)')
plt.plot(tvec, bc_cv.predict(tvec), label='Beta (cv calib set)')
plt.title('Beta Calibration Curve on Test Set');
plt.legend();
```
We see a small difference in the curves, but not a huge change from having 3K points vs. 45K points to learn from.
```
mli.plot_reliability_diagram(y_test_1, testset_bc_probs_cv);
plt.title('Reliability Diagram on Test Data\n after Beta Calibration');
custom_bins_a = np.array([0,.01,.02,.03,.05, .1, .3, .5, .75, 1])
rd = mli.plot_reliability_diagram(y_test_1, testset_bc_probs_cv, scaling='logit', bins=custom_bins_a);
plt.title('Reliability Diagram on Test Data\n for Beta Calibrated Model');
```
Not much of a change from the smaller calibration set.
```
print('Beta calibrated log_loss = {}'.format(log_loss(y_test_1, testset_bc_probs_cv)))
print('Isotonic calibrated log_loss = {}'.format(log_loss(y_test_1, testset_iso_probs_cv)))
print('Platt calibrated log_loss = {}'.format(log_loss(y_test_1, testset_platt_probs_cv)))
print('Uncalibrated log_loss = {}'.format(log_loss(y_test_1, testset_preds_uncalib_1)))
```
Now, Beta calibration does slightly better than Platt scaling. Isotonic has improved.
### SplineCalib with CV data
```
splinecalib_cv = mli.SplineCalib()
splinecalib_cv.fit(cv_preds_train1, y_train_1)
testset_splinecalib_probs_cv = splinecalib_cv.predict(testset_preds_uncalib_1)
mli.plot_reliability_diagram(y_train_1, cv_preds_train1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, splinecalib.predict(tvec), label='SplineCalib (small calib set)')
plt.plot(tvec, splinecalib_cv.predict(tvec), label='SplineCalib (cv calib set)')
plt.title('SplineCalib Calibration Curve on Calibration Data');
plt.legend();
```
With the larger data set, the calibration curve is now close to the line y = x.
```
mli.plot_reliability_diagram(y_test_1, testset_preds_uncalib_1, error_bars=False);
tvec = np.linspace(.01, .99, 99)
plt.plot(tvec, splinecalib.predict(tvec), label='SplineCalib (small calib set)')
plt.plot(tvec, splinecalib_cv.predict(tvec), label='SplineCalib (cv calib set)')
plt.title('SplineCalib Calibration Curve on Test Set');
plt.legend();
mli.plot_reliability_diagram(y_test_1, testset_splinecalib_probs_cv);
plt.title('Reliability Diagram on Test Data\n after Spline Calibration');
custom_bins_a = np.array([0,.01,.02,.03,.05, .1, .3, .5, .75, 1])
rd = mli.plot_reliability_diagram(y_test_1, testset_splinecalib_probs_cv, scaling='logit', bins=custom_bins_a);
plt.title('Reliability Diagram on Test Data\n for SplineCalib Calibrated Model');
```
The points are very close to the line y = x.
```
print('Spline calibrated log_loss = {}'.format(log_loss(y_test_1, testset_splinecalib_probs_cv)))
print('Beta calibrated log_loss = {}'.format(log_loss(y_test_1, testset_bc_probs_cv)))
print('Isotonic calibrated log_loss = {}'.format(log_loss(y_test_1, testset_iso_probs_cv)))
print('Platt calibrated log_loss = {}'.format(log_loss(y_test_1, testset_platt_probs_cv)))
print('Uncalibrated log_loss = {}'.format(log_loss(y_test_1, testset_preds_uncalib_1)))
print('Spline calibrated Brier Score = {}'.format(brier_score_loss(y_test_1, testset_splinecalib_probs_cv)))
print('Beta calibrated Brier Score = {}'.format(brier_score_loss(y_test_1, testset_bc_probs_cv)))
print('Isotonic calibrated Brier Score = {}'.format(brier_score_loss(y_test_1, testset_iso_probs_cv)))
print('Platt calibrated Brier Score = {}'.format(brier_score_loss(y_test_1, testset_platt_probs_cv)))
print('Uncalibrated Brier Score = {}'.format(brier_score_loss(y_test_1, testset_preds_uncalib_1)))
```
## Your Turn
- Use the `mli.cv_predictions` functions on `rfmodel2` to create a cross-validated set on which to fit your calibration.
- Then calibrate `rfmodel2` using this set, rather than the separate calibration set. Compare the results.
- How do the results compare to `rfmodel1`?
## Extra Credit
- Make a copy of the original notebook. Change the random state `rs` in cell [5] and then do "Kernel->Restart and Run All". Observe how the relative performance of the different methods changes, both when using the separate calibration set and when using the cross-validated approach.
- Adjust the training and calibration sizes and see how it affects the performances.
### Some things you might notice
- Isotonic Regression can be quite variable - especially with smaller calibration sets.
- Isotonic Regression tends to improve quite a bit as it gets more data.
- Beta Calibration tends to beat Platt Scaling (though not always).
- The rankings for log-loss are often not the same as for Brier Score.
# Projectile Motion
## Range
### - $R = \frac{u^2 sin2\theta}{g}$
## Time of Flight
### - $T = \frac{2u sin\theta}{g}$
## Maximum Height
### - $H = \frac{u^2 sin^2\theta}{2g}$
### Task 1: Make a class to calculate the range, time of flight and maximum height of a projectile fired from the ground.
### Task 2: Use lists to find the range, time of flight and maximum height for varying values of the angle from 1 degree to 90 degrees.
### Task 3: Make a plot to show the variation of range, time of flight and maximum height with the angle of projection.
### Task 4: Change the lists of [angle], [Range], [Time of Flight], and [Maximum Height] into a dictionary and finally into a DataFrame using pandas. Save the file on your PC as a csv file.
### Task 5: Open the csv file and make a plot of angle vs. [range, time of flight and maximum height] using the csv file.
# Solution :
#### (I) A class to calculate the range, time of flight and maximum height of a projectile fired from the ground.
#### (II) Using lists to find the range, time of flight and maximum height for varying values of the angle from 1 degree to 90 degrees, and making a plot to show their variation with the angle of projection.
```
import math
import matplotlib.pyplot as plt
%matplotlib inline
class Projectile():
def __init__(self,u,θ,g):
self.u=u
self.θ=θ
self.g=g
def Range(self):
R = ((self.u**2)*math.sin(2*self.θ*math.pi/180))/self.g
return R
def TimeofFlight(self):
T = ((2*self.u)*math.sin(self.θ*math.pi/180))/self.g
return T
def MaximumHeight(self):
H = ((self.u*math.sin(self.θ*math.pi/180))**2)/(2*self.g)
return H
u = 100 # Initial Velocity = 100m/s
g= 9.8 # g = 9.8m/s^2
X=[]
R=[]
T=[]
H=[]
for θ in range(1,91): # 1 to 90 degrees inclusive
P1 = Projectile(u,θ,g)
r = P1.Range()
t = P1.TimeofFlight()
h = P1.MaximumHeight()
X.append(θ)
R.append(r)
T.append(t)
H.append(h)
plt.figure(figsize = [15,15])
plt.subplot(2,2,1)
plt.plot(X, R, label = 'Range', color = 'r')
plt.xlabel('Angle of Projection (θ)')
plt.ylabel('Range')
plt.subplot(2,2,2)
plt.plot(X, T, label = 'TimeofFlight', color = 'b')
plt.xlabel('Angle of Projection (θ)')
plt.ylabel('Time of Flight')
plt.subplot(2,2,3)
plt.plot(X, H, label = 'MaximumHeight', color = 'y')
plt.xlabel('Angle of Projection (θ)')
plt.ylabel('Maximum Height')
```
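As a quick sanity check on the plots above (a sketch using the same range formula, with the same u = 100 m/s and g = 9.8 m/s²), the range should peak at θ = 45°:

```python
import math

def projectile_range(u, theta_deg, g=9.8):
    # R = u^2 * sin(2*theta) / g
    return (u ** 2) * math.sin(math.radians(2 * theta_deg)) / g

# scan whole-degree angles and find the one with maximum range
best_angle = max(range(1, 91), key=lambda th: projectile_range(100, th))
print(best_angle)  # 45
```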
#### (III) Change the lists of [angle], [Range], [Time of Flight], and [Maximum Height] into a dictionary and finally into a DataFrame using pandas. Save the file on your PC as a csv file.
#### (IV) Open the csv file and make a plot of angle vs. [range, time of flight and maximum height] using the csv file.
```
data={} #empty dictionary
data.update({"Angle":X,"Range":R,"TimeofFlight":T,"MaximumHeight":H})
import pandas as pd
Df=pd.DataFrame(data)
print(Df)
Df.to_csv('Projectile Motion.csv')
df = pd.read_csv('Projectile Motion.csv')
df.head()
plt.plot(df.Angle, df.Range)
plt.show()
plt.plot(df.Angle, df.TimeofFlight)
plt.show()
plt.plot(df.Angle, df.MaximumHeight)
plt.show()
```
```
from dtwhaclustering.dtw_analysis import dtw_signal_pairs, dtw_clustering
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
from dtaidistance import dtw
import pandas as pd
import os
%matplotlib inline
# default matplotlib parameters
import matplotlib
# font = {'family': 'Times',
# 'weight': 'bold',
# 'size': 22}
# matplotlib.rc('font', **font)
plt.rcParams["figure.figsize"] = (12, 6)
plt.style.use('ggplot')
import seaborn as sns
import ipyplot
import warnings
import os
import sys
sys.stderr = open(os.devnull, "w") # silence stderr
plt.rcParams.update({'font.size': 22})
#load pickle data
dataloc = "pickleFiles"
final_dU=pd.read_pickle(os.path.join(dataloc,"dU_wo_seasn.pickle"))
final_dN=pd.read_pickle(os.path.join(dataloc,"dN_wo_seasn.pickle"))
final_dE=pd.read_pickle(os.path.join(dataloc,"dE_wo_seasn.pickle"))
stn_info_df = pd.read_csv('helper_files/selected_stations_info.txt')
lons = stn_info_df['lon'].values
lats = stn_info_df['lat'].values
final_dU.head()
time_series_U = final_dU.values.transpose()
time_series_N = final_dN.values.transpose()
time_series_E = final_dE.values.transpose()
time_series_U.shape
```
## Significance test of the # of clusters for vertical component using bootstrapping
```
## instantiate the class
labels = [stnU.split("_")[0] for stnU in final_dU.columns.values] #remove the prefix _U
dtw_cluster_vertical = dtw_clustering(time_series_U,labels=labels, longitudes=lons, latitudes=lats)
opt_cluster, opt_distance = dtw_cluster_vertical.optimum_cluster_elbow()
opt_cluster, opt_distance
df_cv, df_accl = dtw_cluster_vertical.compute_distance_accl()
vert_sim_accl, _ = dtw_cluster_vertical.significance_test(numsimulations=0, outfile='pickleFiles/dU_accl_sim_results.pickle', fresh_start=False)
vert_sim_accl.head()
with plt.style.context('seaborn'):
fig, ax = plt.subplots(figsize=(8,6))
vert_sim_accl.plot(legend=False, color='k', lw=0.5, ax=ax)
ax.plot(df_cv["level"], df_cv["distance"],
"-o", color='gray',ms=5)
ax.axvline(x=opt_cluster, color='r', lw=1)
ax.plot(
df_accl["level"],
df_accl["accln"],
"-",
color='b',
label='# Orig clusters',
)
ax.set_xlabel('# Clusters', fontsize=22)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.savefig("vert-monte-simulation.pdf",bbox_inches='tight')
```
## Significance test of the # of clusters for north component using bootstrapping
```
dtw_cluster_north = dtw_clustering(time_series_N,labels=labels, longitudes=lons, latitudes=lats)
opt_cluster_north, opt_distance_north = dtw_cluster_north.optimum_cluster_elbow()
df_cv_north, df_accl_north = dtw_cluster_north.compute_distance_accl()
nor_sim_accl, _ = dtw_cluster_north.significance_test(numsimulations=0, outfile='pickleFiles/dN_accl_sim_results.pickle', fresh_start=False)
with plt.style.context('seaborn'):
fig, ax = plt.subplots(figsize=(8,6))
nor_sim_accl.plot(legend=False, color='k', lw=0.5, ax=ax)
ax.plot(df_cv_north["level"], df_cv_north["distance"],
"-o", color='gray', ms=5)
ax.axvline(x=opt_cluster_north,color='r', lw=1)
ax.plot(
df_accl_north["level"],
df_accl_north["accln"],
"-",
color='b',
label='# Orig clusters',
)
ax.set_xlabel('# Clusters', fontsize=22)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.savefig("north-monte-simulation.pdf",bbox_inches='tight')
```
## Significance test of the # of clusters for east component using bootstrapping
```
dtw_cluster_east = dtw_clustering(time_series_E,labels=labels, longitudes=lons, latitudes=lats)
opt_cluster_east, opt_distance_east = dtw_cluster_east.optimum_cluster_elbow()
df_cv_east, df_accl_east = dtw_cluster_east.compute_distance_accl()
east_sim_accl, _ = dtw_cluster_east.significance_test(numsimulations=0, outfile='pickleFiles/dE_accl_sim_results.pickle', fresh_start=False)
with plt.style.context('seaborn'):
fig, ax = plt.subplots(figsize=(8,6))
east_sim_accl.plot(legend=False, color='k', lw=0.5, ax=ax)
ax.plot(df_cv_east["level"], df_cv_east["distance"],
"-o", color='gray', ms=5)
ax.axvline(x=opt_cluster_east, color='r', lw=1)
ax.plot(
df_accl_east["level"],
df_accl_east["accln"],
"-",
color='b',
label='# Orig clusters',
)
ax.set_xlabel('# Clusters', fontsize=22)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.savefig("east-monte-simulation.pdf",bbox_inches='tight')
```
## Simulation summary for all components
```
vert_sim_accl_std = 2*vert_sim_accl.std(axis=1).values # 2-sigma spread across simulations
nor_sim_accl_std = 2*nor_sim_accl.std(axis=1).values
east_sim_accl_std = 2*east_sim_accl.std(axis=1).values
offset=200
with plt.style.context('default'):
plt.rcParams.update({'font.size': 22})
fig, ax = plt.subplots(3,1,figsize=(16,10), sharex=True)
ax[0].errorbar(df_accl["level"].values, df_accl["accln"].values, yerr=vert_sim_accl_std[::-1],
fmt='-o', color="k", ms=3, lw=1,
ecolor="r", capsize=5, elinewidth=0, label="Vertical curvature")
ax[0].axvline(x=opt_cluster, color='g', lw=2, label=f"Optimal Cluster ({opt_cluster})", ls='--')
ax[0].legend(loc='lower center', fontsize=18, fancybox=True, shadow=True, ncol=2)
minmax = np.abs([df_accl["accln"].min(), df_accl["accln"].max()])
ax[0].set_ylim([-max(minmax)-offset, max(minmax)+offset])
# ax[0].set_ylabel('DTW distance', fontsize=22)
ax[1].errorbar(df_accl_north["level"].values, df_accl_north["accln"].values, yerr=nor_sim_accl_std[::-1],
fmt='-o', color="k", ms=3, lw=1,
ecolor="r", capsize=5, elinewidth=0, label="North curvature")
ax[1].axvline(x=opt_cluster_north, color='g', lw=2, label=f"Optimal Cluster ({opt_cluster_north})", ls='--')
ax[1].legend(loc='lower center', fontsize=18,fancybox=True, shadow=True, ncol=2)
minmax = np.abs([df_accl_north["accln"].min(), df_accl_north["accln"].max()])
ax[1].set_ylim([-max(minmax)-offset, max(minmax)+offset])
ax[1].set_ylabel('DTW distance', fontsize=22)
ax[2].errorbar(df_accl_east["level"].values, df_accl_east["accln"].values, yerr=east_sim_accl_std[::-1],
fmt='-o', color="k", ms=3, lw=1,
ecolor="r", capsize=5, elinewidth=0, label="East curvature")
ax[2].axvline(x=opt_cluster_east, color='g', lw=2, label=f"Optimal Cluster ({opt_cluster_east})", ls='--')
ax[2].legend(loc='lower center', fontsize=18,fancybox=True, shadow=True, ncol=2)
minmax = np.abs([df_accl_east["accln"].min(), df_accl_east["accln"].max()])
ax[2].set_ylim([-max(minmax)-offset, max(minmax)+offset])
# ax[2].set_ylabel('DTW distance', fontsize=22)
ax[2].set_xlabel('# Clusters', fontsize=22)
plt.subplots_adjust(hspace=0.1)
plt.savefig("all-comps-monte-simulation.pdf",bbox_inches='tight')
```
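The `accln` curve plotted in these figures is, in essence, the discrete second difference (curvature) of the distance-vs-clusters curve, and the elbow is where that curvature peaks. A minimal sketch of picking an elbow that way, on toy numbers (assuming `costs[k-1]` is the clustering cost at k clusters; this is a simplification of what `compute_distance_accl` does):

```python
def elbow_by_curvature(costs):
    # second difference: accel[j] corresponds to costs[j+1], i.e. j+2 clusters
    accel = [costs[j - 1] - 2 * costs[j] + costs[j + 1]
             for j in range(1, len(costs) - 1)]
    # elbow = cluster count with the largest curvature
    return accel.index(max(accel)) + 2

costs = [100, 40, 18, 10, 7, 5, 4]
print(elbow_by_curvature(costs))  # 2
```

The bootstrap simulations above then ask whether the observed curvature peak stands out against curvature values from randomized data.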
<a href="https://colab.research.google.com/github/Katonokatono/Term-Deposit-Project/blob/EDA/Exploratory_Data_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#importing libraries
# importing pandas
import pandas as pd
# importing numpy
import numpy as np
# importing matplotlib.pyplot
import matplotlib.pyplot as plt
# importing scipy.stats
import scipy.stats as stats
# importing seaborn
import seaborn as sns
#loading dataset
bank=pd.read_csv("/content/bank-additional-full.csv",delimiter=";")
bank
#renaming column
bank.rename(columns={'y': 'term_deposit'}, inplace=True)
bank.columns
bank=bank[['age', 'job', 'marital', 'education', 'default', 'housing', 'loan',
'contact', 'month', 'day_of_week', 'duration','term_deposit']]
```
##UNIVARIATE ANALYSIS
```
bank.describe().T
```
NUMERICAL
```
#checking for outliers
col_names = ['age','duration']
fig, ax = plt.subplots(len(col_names), figsize= (8,40))
for i, col_val in enumerate(col_names):
sns.boxplot(y = bank[col_val], ax= ax[i])
ax[i].set_title('Box plot - {}'.format(col_val), fontsize= 10)
ax[i].set_xlabel(col_val, fontsize= 8)
plt.show()
```
We will continue working with the outliers as they are, since removing them might cause a variation in our analysis
AGE
Central Measures of Tendency
```
##Finding the central measures of tendency of our numerical variable 'age'
print('The mean of age is: ' +str(bank['age'].mean()))
print('The median of age is: ' +str(bank['age'].median()))
print('The mode of age is: ' +str(bank['age'].mode()))
```
The average age of the existing bank customers is 40 years, the median age is 38 and the mode is 31.
Measure of Dispersion
```
print('The range of age is: ' +str(bank['age'].max()-bank['age'].min()))
print('The standard deviation of age is: ' +str(bank['age'].std()))
print('The variance of age is: ' +str(bank['age'].var()))
print('The skewness of age is: ' +str(bank['age'].skew()))
print('The kurtosis of age is: ' +str(bank['age'].kurt()))
print('The quantiles of age is: ' +str(bank['age'].quantile([0.25, 0.5, 0.75])))
```
DURATION
Central Measures of Tendency
```
print('The mean of duration is: ' +str(bank['duration'].mean()))
print('The median of duration is: ' +str(bank['duration'].median()))
print('The mode of duration is: ' +str(bank['duration'].mode()))
```
Measure of Dispersion
```
print('The range of duration is: ' +str(bank['duration'].max()-bank['duration'].min()))
print('The standard deviation of duration is: ' +str(bank['duration'].std()))
print('The variance of duration is: ' +str(bank['duration'].var()))
print('The skewness of duration is: ' +str(bank['duration'].skew()))
print('The kurtosis of duration is: ' +str(bank['duration'].kurt()))
print('The quantiles of duration is: ' +str(bank['duration'].quantile([0.25, 0.5, 0.75])))
```
From the above analysis we realize the following about our numerical variables 'age and duration":
* The average contact duration with the bank customers is 258 seconds, with a median of 180 seconds and a mode of 85 seconds
* The average age of the bank's existing pool of customers is 40 years
* Our age variable is moderately skewed, since its skewness falls between 0.5 and 1. Its kurtosis indicates that it is close to mesokurtic
* The duration variable has excess kurtosis (well above 3) and is therefore leptokurtic, indicating extreme values in either tail
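The skewness thresholds quoted above follow a common rule of thumb; a small helper makes it explicit (the cutoffs are conventional, not universal):

```python
def skewness_class(skew):
    # rule of thumb: |skew| < 0.5 symmetric, 0.5-1 moderate, > 1 high
    a = abs(skew)
    if a < 0.5:
        return "approximately symmetric"
    if a <= 1.0:
        return "moderately skewed"
    return "highly skewed"

print(skewness_class(0.78))  # moderately skewed
print(skewness_class(3.26))  # highly skewed
```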
```
#graphical representation of the univariate analysis
#The numerical variables are skewed to the right, i.e. they have long right tails (the mean exceeds the median).
bins = 10
fig, (ax1,ax2) = plt.subplots(1, 2, figsize= (12,8))
sns.distplot(bank.age, ax=ax1, bins= bins)
sns.distplot(bank.duration, ax=ax2, bins= bins)
plt.show()
```
CATEGORICAL
```
#univariate analysis categorical variables
columns=['job','marital','education','month','day_of_week','default','housing','loan','contact']
plt.figure(figsize=(15,80),facecolor='white')
plotnumber=1
for col in columns:
ax=plt.subplot(12,3,plotnumber)
sns.countplot(y=col,data=bank)
plt.xlabel(col)
plt.ylabel(col)
plotnumber+=1
plt.show()
for col in columns:
print(bank.groupby([col]).size())
```
The above visualizations show the following about the banks customers:
* Most of the customers work as admins
* Most of the customers are married
* Most of the customers have housing loans
* Most of the customers have loans
* Most of customers were contacted via cellphone
* Most of the customers were contacted during May
* Most of the customers were contacted on Thursday
* Most of the customers do not have default loans
##BIVARIATE ANALYSIS
NUMERICAL VS NUMERICAL
```
#Checking for correlation using the Pearson method
pearson_coeff_bank = bank["age"].corr(bank["duration"], method = "pearson")
pearson_coeff_bank
```
There's no correlation between the age of a customer and the duration of the last contact by the marketing team; how long the marketing team stayed on a call did not depend on the customer's age.
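For reference, the Pearson coefficient computed above is just the covariance normalized by the two standard deviations; a from-scratch sketch on toy lists:

```python
import math

def pearson(x, y):
    # r = cov(x, y) / (std(x) * std(y))
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# a perfectly linear relationship gives r = 1
print(round(pearson([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # 1.0
```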
```
#summary of bivariate analysis
sns.pairplot(bank)
plt.show()
```
The above plots and the heatmap below confirm that there is a weak negative correlation between age and last contact duration
```
#correlation heat map
sns.heatmap(bank.corr(),annot=True)
plt.show()
```
NUMERICAL VS CATEGORICAL
```
#barplot for numerical vs categorical
fig, ax = plt.subplots(1, figsize=(10,5))
sns.barplot(x=bank['term_deposit'], y= bank['age'], ax=ax)
```
The average age of people with no term deposits is ~39 years and the average age of people with term deposits is ~40 years.
```
fig, ax = plt.subplots(1, figsize=(10,5))
sns.barplot(x=bank['term_deposit'], y= bank['duration'], ax=ax)
```
The more contact there was with the customers, the more they subscribed to term deposits. Duration of contact is therefore a significant factor in customers subscribing to term deposits.
CATEGORICAL VS CATEGORICAL
```
bank.columns
columns=['default', 'housing', 'loan',
'contact']
for col in columns:
sns.catplot(x='term_deposit',col=col,kind='count',data=bank)
plt.show()
```
From the above:
There were more term deposit subscriptions among customers that hadn't defaulted on loans.
There were slightly more term deposits among customers that had a mortgage than among those that didn't.
There were significantly more term deposits among customers with no loans at all, which can be interpreted as them having more money to invest.
Lastly, there were more term deposit subscriptions among customers contacted via cellular phone, who therefore had more reliable access to their savings data.
```
bank.groupby('education')['term_deposit'].value_counts().unstack().plot.bar(stacked=True)
plt.legend(loc="upper right")
plt.ylabel('has a term deposit')
plt.xlabel('Education')
plt.show()
```
The majority of customers with term deposits have a university degree, followed by those with a high school education, and the fewest subscriptions came from those with no education at all. The unknown entries need to be investigated further.
```
bank.groupby('job')['term_deposit'].value_counts().unstack().plot.bar(stacked=True)
plt.legend(loc="upper right")
plt.ylabel('has a term deposit')
plt.xlabel('Job')
plt.show()
```
The majority of customers with term deposits had an administrative job, closely followed by those with blue-collar jobs and technicians. The fewest subscriptions were observed among students and those with unknown professions.
```
bank.groupby('month')['term_deposit'].value_counts().unstack().plot.bar(stacked=True)
plt.legend(loc="upper right")
plt.ylabel('has a term deposit')
plt.xlabel('month')
plt.show()
```
The month that recorded the highest number of subscriptions was May, followed by June, July and August. The fewest subscriptions were recorded in December, which can be attributed to the fact that December is when most people are on holiday and spend rather than invest. It also shows that there was the least contact with customers in December, hence the low subscription numbers.
```
bank.groupby('day_of_week')['term_deposit'].value_counts().unstack().plot.bar(stacked=True)
plt.legend(loc="upper right")
plt.ylabel('has a term deposit')
plt.xlabel('day of week')
plt.show()
```
Monday and Thursday recorded the highest number of subscriptions; the fewest were recorded on Friday. The weekends were not taken into account, mostly because the bank doesn't operate on weekends.
```
for col in columns:
print(bank.groupby(['term_deposit',col]).size())
```
**Bivariate Analysis Recommendation**
We therefore recommend that the bank invest more in contacting customers through the marketing team, especially during the months with significantly few subscriptions like October, November and December, and encourage customers to invest rather than spend, since the more the customers were contacted, the more subscriptions were recorded.
We also recommend that the bank invest more in educating customers on the importance of term deposits with regard to profession and level of education. Students can be encouraged to start saving at a young age to secure a good future, and this can be done by visiting schools and enlightening them about the product. This will help fill the investment gap noted in those two variables.
The unknown entries need to be investigated further.
##MULTIVARIATE ANALYSIS
```
#Viewing the column names
bank.columns
#Selecting the categorical columns for label encoding
columns=['job', 'marital', 'education', 'default', 'housing', 'loan',
'contact', 'month', 'day_of_week']
#converting categorical variable values to numerical
from sklearn.preprocessing import LabelEncoder
for col in columns:
labelencoder = LabelEncoder()
labelencoder.fit(bank[col])
bank[col] = labelencoder.transform(bank[col])
#Viewing the data types in the encoded dataframe
bank.info()
#viewing our dataset
bank
# Preprocessing
x=bank[['age', 'job', 'marital', 'education', 'default', 'housing', 'loan',
'contact','month','day_of_week','duration']]
y=bank['term_deposit']
#Import the Linear Discriminant Analysis method from the sklearn library
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
#create an instance of LDA
lda=LDA()
lda_=lda.fit(x,y)
lda_x=lda_.transform(x)
lda_.coef_
#creating a dataframe to get the name of the columns
bnk=pd.DataFrame(index=x.columns.values,data=lda_.coef_[0].T)
bnk.sort_values(0,ascending=False)
```
From our analysis, it is safe to conclude that we can use a linear combination of the following features to comfortably predict whether an individual subscribes to a term deposit.
* Marital Status
* Level of education
* Month
* Age
* Job
* Day of the week
* Housing loan
* Duration of the call
```
import pandas as pd
import numpy as np
import os
import cv2
import re
from tqdm import notebook, trange
folder1 = "F:/AI_ML/C-DAC/Project Material/UCF_Crime_Dataset/Anomaly-Videos-Part-1"
# folder2 was undefined below; assumed path following the Part-1 naming pattern
folder2 = "F:/AI_ML/C-DAC/Project Material/UCF_Crime_Dataset/Anomaly-Videos-Part-2"
normal_folder = "F:/AI_ML/C-DAC/Project Material/UCF_Crime_Dataset/noraml_train"
fighting_folder = os.path.join(folder2, "Fighting")
fighting_name = os.listdir(fighting_folder)
print(fighting_name[1:5])
normal_name = os.listdir(normal_folder)
normal_name.sort()
print(normal_name[1:5])
# Dataframe containing the starting and ending of the anomaly
fighting_annot = pd.read_csv("F:/AI_ML/C-DAC/Project Material/UCF_Crime_Dataset/Temporal Data/Fighting.csv")
fighting_annot
```
## Function to convert video to numpy (Normal part)
```
# Converting Normal frame from video to numpy
def N_vid_2_np(vid_name, img_array, begin, close, n_fps = 10, bag_size = 64, img_h = 224, img_w = 224, ot_folder = None):
# begin = first normal frame, close = last normal frame
# video_cv = captured video through cv2
# Dictionary for selecting number of frames per sec (fps)
fps_dict = {2:15, 3:10, 5:6, 10:3, 15:2, 30:1}
f_n = fps_dict[n_fps]
for i in trange(begin,close,bag_size*f_n):
# list for saving numpy of a frame
n = []
if i+(bag_size*f_n) < close:
            # taking bag_size frames at a time to create a bag
for j in range(i, i+(bag_size*f_n)):
# reading frame_id and frame(img) from the video
img = img_array[j]
# selecting frames according to fps
if (j % f_n == 0):
n.append(img)
#print("normal_frame",frame_id)
# converting to array
n_arr=np.array(n)
# saving each bag in folder
n_folder = os.path.join(ot_folder, "normal")
if not os.path.exists(n_folder):
os.makedirs(n_folder)
vid = re.findall("\w+",vid_name)
'''np_name = vid[0] + "numpy%d.npy" % frame_id
np.save(os.path.join(n_folder,np_name),n_arr)'''
file_name = vid[0] + "%d.mp4" % j
video = cv2.VideoWriter(os.path.join(n_folder,file_name),cv2.VideoWriter_fourcc(*'mp4v'), 30, (img_w,img_h))
for image in n_arr:
video.write(image)
video.release()
return None
```
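The `fps_dict` above hard-codes the frame stride for a 30-fps source; the mapping is just integer division of the source frame rate by the target frame rate. A sketch:

```python
def stride_for_fps(n_fps, src_fps=30):
    # keep every stride-th frame to downsample src_fps to n_fps
    if src_fps % n_fps != 0:
        raise ValueError("target fps must divide the source fps")
    return src_fps // n_fps

print({n: stride_for_fps(n) for n in (2, 3, 5, 10, 15, 30)})
# {2: 15, 3: 10, 5: 6, 10: 3, 15: 2, 30: 1}
```

This reproduces the dictionary used in both conversion functions, and makes clear why only divisors of 30 are valid fps choices.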
## Function to convert video to numpy (Anomaly part)
```
# Converting Anomaly frame from video to numpy
def A_vid_2_np(vid_name, img_array, begin, close, a_fps = 10, bag_size = 64, img_h = 224, img_w = 224, class_name = "anomaly", ot_folder = None):
# begin = first normal frame, close = last normal frame
# video_cv = captured video through cv2
# Dictionary for selecting number of frames per sec (fps)
fps_dict = {2:15, 3:10, 5:6, 10:3, 15:2, 30:1}
f_a = fps_dict[a_fps]
for i in trange(begin,close+1,bag_size*f_a):
# list for saving numpy of a frame
n = []
if i+(bag_size*f_a) < close+1:
            # taking bag_size frames at a time to create a bag
for j in range(i, i+(bag_size*f_a)):
# reading frame_id and frame(img) from the video
img = img_array[j]
# selecting frames according to fps
if (j % f_a == 0):
n.append(img)
# print("anomaly_frame",frame_id)
# converting to array
n_arr=np.array(n)
# saving each bag in folder
n_folder = os.path.join(ot_folder, class_name)
if not os.path.exists(n_folder):
os.makedirs(n_folder)
vid = re.findall(r"\w+", vid_name)
'''np_name = vid[0] + "numpy%d.npy" % frame_id
np.save(os.path.join(n_folder,np_name),n_arr)'''
file_name = vid[0] + "%d.mp4" % j
video = cv2.VideoWriter(os.path.join(n_folder,file_name),cv2.VideoWriter_fourcc(*'mp4v'), 30, (img_w,img_h))
for image in n_arr:
video.write(image)
video.release()
return None
```
## Function to produce numpy and labels for a video
```
def video_2_numpy(vid_folder, vid_name, annotation_df = None , n_fps = 10, a_fps = 10, bag_size = 64, img_h = 224, img_w = 224, class_name = "anomaly", ot_folder = None):
# video_name = video name with path
# bag_size is the number of images to be stacked to form a bag
# n_fps, a_fps are fps values for normal and anomaly video (2, 5, 10, 15, 30 are valid values)
# Reading the video
img_array=[]
video_cv = cv2.VideoCapture(os.path.join(vid_folder, vid_name))
print(vid_name)
start1, end1, start2, end2 = -1, -1, -1, -1
# end frame of the video
end_vid = int(video_cv.get(cv2.CAP_PROP_FRAME_COUNT))
print(start1, end1, start2, end2 )
for i in range(end_vid):
res, frame = video_cv.read()
if not res:  # stop early if a frame cannot be decoded
    break
frame = cv2.resize(frame, (img_w, img_h))
img_array.append(frame)
# creating the numpy using functions
if class_name == "normal":
print("normal")
N_vid_2_np(vid_name, img_array, begin=0, close = end_vid, n_fps = n_fps, bag_size = bag_size, img_h = img_h, img_w = img_w, ot_folder = ot_folder)
else:
# Taking starting and stopping frames of anomaly in the video
det = annotation_df[annotation_df.video_name == vid_name].iloc[:,2:].values
start1, end1, start2, end2 = det[0,0], det[0,1], det[0,2], det[0,3]
print(start1, end1, start2, end2 )
A_vid_2_np(vid_name, img_array, begin = start1, close = end1, a_fps = a_fps, bag_size = bag_size, img_h = img_h, img_w = img_w,class_name = class_name, ot_folder = ot_folder)
if start2 != -1:
A_vid_2_np(vid_name, img_array, begin = start2, close = end2, a_fps = a_fps, bag_size = bag_size, img_h = img_h, img_w = img_w, class_name = class_name, ot_folder = ot_folder)
return None
```
# Saving output numpy
```
out_folder = "E:/Fighting Data"
# Making abuse numpy
l = len(fighting_name)
for i in notebook.tqdm(range(l)):
video_2_numpy(fighting_folder,fighting_name[i], annotation_df = fighting_annot, n_fps = 10, a_fps = 15, bag_size = 64, img_h = 224, img_w = 224, class_name = "fighting" , ot_folder = out_folder)
normal_name.sort()
len(normal_name)
out_folder = "E:/Fighting Data"
# Making abuse numpy
for i in notebook.tqdm(range(25,60)):
video_2_numpy(normal_folder,normal_name[i], bag_size = 64, img_h = 224, img_w = 224, class_name = "normal" , ot_folder = out_folder)
```
| github_jupyter |
# Changes in spending
> Layered line chart with vertical lines and text overlay
- toc: false
- comments: true
- image: images/consumer_spending.png
- hide: false
- search_exclude: false
- categories: [spending, NYT]
- author: Shantam Raj
- badges: true
Today we will study the charts in the article [The Rich Cut Their Spending. That Has Hurt All the Workers Who Count on It](https://www.nytimes.com/2020/06/17/upshot/coronavirus-spending-rich-poor.html). These charts tell us something important: spending has been cut very differently across income classes.



> Note: The vertical lines correspond to the following dates -
- First stimulus checks - April 17
- States in the process of reopening - May 1
The data for this analysis is taken from [Opportunity Labs](https://github.com/OpportunityInsights/EconomicTracker) where they publish their data in this [dashboard](https://tracktherecovery.org/).
What's important about this data is best summed up by -
> "One of the things this crisis has made salient is how interdependent our health was," said Michael Stepner, an economist at the University of Toronto. "We’re seeing the mirror of that on the economic side."
```
#hide_output
import pandas as pd
import altair as alt
alt.renderers.set_embed_options(actions=False)
```
# Drop in consumer spending
The rich drive more of the economy than they did 50 years ago. And more workers depend on them.
> For the highest-income quartile, spending has recovered much more slowly, after falling by 36 percent at the lowest point.
> Important: We will use data till July only, so the uri used for the data is for the commit of a particular day. If you want to use the latest data then replace the `spending_uri` with this - 'https://raw.githubusercontent.com/OpportunityInsights/EconomicTracker/main/data/Affinity%20-%20National%20-%20Daily.csv'
```
spending_uri = 'https://raw.githubusercontent.com/OpportunityInsights/EconomicTracker/8d9fae46fab3e386a8f4ce798de09a016cbda0f9/data/Affinity%20-%20National%20-%20Daily.csv'
#spending_uri = 'https://raw.githubusercontent.com/Opportunitylab/EconomicTracker/main/data/Affinity%20-%20National%20-%20Weekly.csv' # for latest data
spending = pd.read_csv(spending_uri)
spending.head()
def add_format_date(df):
df['date'] = df['year'].astype(str) + '-' + df['month'].astype(str) + '-' + df['day'].astype(str)
df['date'] = pd.to_datetime(df['date'], format="%Y-%m-%d")
return df
spending = spending.pipe(add_format_date)
spending.head()
```
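As an aside, pandas can assemble the date directly from the three integer columns, which avoids the string concatenation in `add_format_date`; a sketch on a tiny hypothetical frame:

```python
import pandas as pd

# hypothetical frame with the tracker's year/month/day integer columns
demo = pd.DataFrame({'year': [2020, 2020], 'month': [4, 5], 'day': [15, 1]})
# pd.to_datetime accepts a DataFrame with year/month/day columns directly
demo['date'] = pd.to_datetime(demo[['year', 'month', 'day']])
print(demo['date'].dt.strftime('%Y-%m-%d').tolist())  # ['2020-04-15', '2020-05-01']
```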
Plotting the data -
```
base=alt.Chart(spending).transform_fold(['spend_all_inchigh', 'spend_all_inclow', 'spend_all_incmiddle']).transform_filter(alt.datum.date > alt.expr.toDate('2020-02-14')).mark_line().encode(
x=alt.X('date:T', title=None, axis=alt.Axis(format="%b%e", tickCount=5, labelOffset=0, tickOffset=0, labelPadding=25, ticks=False)),
#y='spend_all_inchigh:Q',
#x2='date:Q',
y=alt.Y('value:Q', title=None, axis=alt.Axis(format="%", tickCount=10)),
color='key:N'
#detail='date'
).properties(width=900, height=600)
lines={'lines': ['2020-04-15', '2020-05-01'], 'y1': [0,0], 'y2': [-0.4, -0.4]}
lines1={'lines': ['2020-04-15'], 'text': ['First stimulus \n checks received'], 'y': [-0.03]}
lines2={'lines': ['2020-05-01'], 'text': ['Half of states in \n process of reopening'], 'y': [-0.03]}
vert_line = alt.Chart(pd.DataFrame(lines)).mark_rule(strokeDash=[5,5], stroke='grey').encode(
x='lines:T',
y=alt.Y('y1:Q', scale=alt.Scale(zero=False)),
#y2=alt.Y2('y2:Q')
)
text1 = alt.Chart(pd.DataFrame(lines1)).mark_text(lineBreak='\n', dx=-10, align='right').encode(
text = 'text:N',
y = 'y:Q',
x = 'lines:T'
)
text2 = alt.Chart(pd.DataFrame(lines2)).mark_text(lineBreak='\n',dx=10, align='left').encode(
text = 'text:N',
y = 'y:Q',
x = 'lines:T',
)
alt.layer(base, vert_line, text1, text2).configure_view(strokeWidth=0).configure_axis(grid=False).configure_axisX(orient='top', offset=-67)
```
We can use the same vertical-line and text-overlay techniques from the chart above in the following charts. Since the idea is similar, I will not repeat them for every chart and will just plot the line charts instead.
# Small businesses in the richest neighborhoods have had the biggest drops in revenue
> Note: Latest data from now on
```
revenue_uri = 'https://raw.githubusercontent.com/OpportunityInsights/EconomicTracker/main/data/Womply%20Revenue%20-%20National%20-%20Daily.csv'
revenue = pd.read_csv(revenue_uri)
revenue = revenue.pipe(add_format_date)
revenue.head()
alt.Chart(revenue).mark_line().transform_fold(['revenue_inclow', 'revenue_incmiddle', 'revenue_inchigh']).transform_filter(alt.datum.date > alt.expr.toDate('2020-02-14')).encode(
x='date:T',
y= 'value:Q',
color= 'key:N'
)
```
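Altair's `transform_fold` reshapes the wide revenue columns into long key/value pairs, which is the same operation as a pandas `melt`; a sketch on hypothetical values:

```python
import pandas as pd

# toy wide-format frame (hypothetical values)
wide = pd.DataFrame({'date': ['2020-03-01', '2020-03-02'],
                     'revenue_inclow': [-0.1, -0.2],
                     'revenue_inchigh': [-0.3, -0.4]})
# fold the two value columns into (key, value) pairs, keeping the date
long_df = wide.melt(id_vars='date', var_name='key', value_name='value')
print(long_df.shape)  # (4, 3)
```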
# Low-wage workers in the richest neighborhoods have had the biggest drop in earnings
> Important: The file for this data has been removed and not updated since July. So we will use the data from the particular commit that had this file.
```
earning_uri = 'https://raw.githubusercontent.com/OpportunityInsights/EconomicTracker/5f914ee4e71f56a33857b63e0bd07d71bc31e847/data/Low%20Inc%20Earnings%20Small%20Businesses%20-%20National%20-%20Daily.csv'
earning = pd.read_csv(earning_uri)
earning = earning.pipe(add_format_date)
earning.head()
alt.Chart(earning).mark_line().transform_fold(['pay', 'pay_inclow', 'pay_incmiddle', 'pay_inchigh']).encode(
x='date:T',
y= 'value:Q',
color= 'key:N'
)
```
# Low-wage workers in the richest neighborhoods have had the biggest drop in employment
> Important: This file was also eventually removed. So we will use the file from the commit that still had this file-
```
#employment_uri = 'https://raw.githubusercontent.com/Opportunitylab/EconomicTracker/main/data/Low%20Inc%20Emp%20Small%20Businesses%20-%20National%20-%20Daily.csv' # original file
employment_uri = 'https://raw.githubusercontent.com/OpportunityInsights/EconomicTracker/ba8c0096efb873d90f10cd720576c4ec5e6fc42e/data/Low%20Inc%20Emp%20Small%20Businesses%20-%20National%20-%20Daily.csv'
employment = pd.read_csv(employment_uri)
employment = employment.pipe(add_format_date)
employment.head()
alt.Chart(employment).mark_line().transform_fold(['emp_inclow', 'emp_incmiddle', 'emp_inchigh']).encode(
x='date:T',
y= 'value:Q',
color= 'key:N'
)
```
| github_jupyter |
```
#!pip install rank-bm25
#from rank_bm25 import BM25Okapi
import pandas as pd
#import matplotlib.pyplot as plt
#import seaborn as sns
import os
from os import listdir
from os.path import isfile, join
import re
import numpy as np
from math import floor, ceil
import json
import gzip
from os import walk
from scipy.spatial import KDTree
# !pip install geopy
# !pip install phonenumbers
# !pip install pycountry
import geopy.distance
import phonenumbers
import pycountry
pd.options.display.max_columns = 100
path = r"../src/data"
lb_path_min3 = path + r"/LocalBusiness/LocalBusiness_minimum3/geo_preprocessed"
lb_path_top100 = path + r"/LocalBusiness/LocalBusiness_top100/geo_preprocessed"
rest_path_min3 = path + r"/Restaurant/Restaurant_minimum3/geo_preprocessed"
rest_path_top100 = path + r"/Restaurant/Restaurant_top100/geo_preprocessed"
hotel_path_min3 = path + r"/Hotel/Hotel_minimum3/geo_preprocessed"
hotel_path_top100 = path + r"/Hotel/Hotel_top100/geo_preprocessed"
file_path_list = [lb_path_min3, lb_path_top100, rest_path_min3, rest_path_top100, hotel_path_min3, hotel_path_top100]
def create_df(file_path):
files = os.listdir(file_path)
df_as_list = []
for lb in files:
with gzip.open(file_path + '/' + lb, 'r') as dataFile:
for line in dataFile:
lineData = json.loads(line.decode('utf-8'))
lineData["origin"] = lb
df_as_list.append(lineData)
df = pd.DataFrame(df_as_list)
return df
df_list = []
for file_path in file_path_list:
df = create_df(file_path)
df_list.append(df)
```
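`create_df` streams gzip-compressed JSON-lines files, one JSON object per line. A minimal round-trip illustrating the format (records and the file name are hypothetical):

```python
import gzip
import json
import os
import tempfile

# hypothetical records mimicking the LocalBusiness schema
records = [{'name': 'Cafe A', 'telephone': '+1 555 0100'},
           {'name': 'Cafe B', 'telephone': '+1 555 0101'}]

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'sample.json.gz')
    # write one JSON object per line, gzip-compressed
    with gzip.open(path, 'wt', encoding='utf-8') as f:
        for rec in records:
            f.write(json.dumps(rec) + '\n')
    # read it back the same way create_df does
    with gzip.open(path, 'r') as f:
        loaded = [json.loads(line.decode('utf-8')) for line in f]

print(loaded == records)  # True
```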
## Concatenate Dataframes
```
df_all = pd.concat(df_list, axis = 0, ignore_index = True)
len(df_all)
df_all.head()
```
## Keep rows with non-null telephone numbers AND non-null country codes
```
df_clean = df_all[df_all["addresscountry"].notna()]
df_clean = df_clean[df_clean["telephone"].notna()]
len(df_clean)
```
### Roughly 1.3 million records remain after conditioning on non-null phone numbers and country codes
## Format longitudes AND latitudes
```
lon = "longitude"
lat = "latitude"
```
# Question: Keep this approach or replace with formatting library?
```
# Remove entries that are neither strings nor floats (lists etc. cannot be converted to a single number)
longitudes = df_clean[lon].to_numpy()
latitudes = df_clean[lat].to_numpy()
deleteList = []
for i, value in enumerate(longitudes):
    if not isinstance(value, (str, float)):
        deleteList.append(i)
for i, value in enumerate(latitudes):
    if not isinstance(value, (str, float)):
        deleteList.append(i)
df_clean.drop(df_clean.index[deleteList], axis=0, inplace=True)
```
### Format longitude and latitude
```
longArray = df_clean[lon].to_numpy().astype(str)
longArray = np.char.replace(longArray, ',', '.')
longArray = np.char.replace(longArray, '--', '-')
df_clean[lon] = longArray
df_clean[lon] = pd.to_numeric(df_clean[lon], errors='coerce')
latArray = df_clean[lat].to_numpy().astype(str)
latArray = np.char.replace(latArray, ',', '.')
latArray = np.char.replace(latArray, '--', '-')
df_clean[lat] = latArray
df_clean[lat] = pd.to_numeric(df_clean[lat], errors='coerce')
# Remove the entries that were set to NaN because of other errors
df_clean = df_clean[df_clean["longitude"].notna()]
df_clean = df_clean[df_clean["latitude"].notna()]
# Make sure to only include valid longitudes and latitudes
df_clean = df_clean.loc[(df_clean[lat] >= -90) & (df_clean[lat] <= 90)]
df_clean = df_clean.loc[(df_clean[lon] >= -180) & (df_clean[lon] <= 180)]
len(df_clean)
```
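The bounds check above can be captured in a tiny predicate, handy for unit tests:

```python
def valid_coord(lat, lon):
    """True if (lat, lon) lies within the valid WGS84 ranges."""
    return -90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0

print(valid_coord(48.14, 11.58))  # True
print(valid_coord(91.0, 0.0))     # False
```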
### Roughly 700k datapoints left after conditioning on geo-location
```
df_clean["origin"].nunique()
```
## Further preprocessing step
### Remove non-digits from telephone numbers
```
def remove_non_digits(string):
cleaned = re.sub('[^0-9]','', string)
return cleaned
df_clean['telephone_'] = df_clean['telephone'].astype('str').apply(remove_non_digits)
```
### Extract country codes to ISO-2 format using ``pycountry``
```
countries = {}
for country in pycountry.countries:
countries[country.name] = country.alpha_2
countries
# function to upper-case the keys of the country dictionary
def modify_dic(d):
for key in list(d.keys()):
new_key = key.upper()
d[new_key] = d[key]
return d
countries_upper = modify_dic(countries)
#countries_upper
#uppercase the df_column
df_clean["addresscountry"] = df_clean["addresscountry"].str.upper()
df_clean["addresscountry"].value_counts()
# Replace known countries with ISO-2 format country code
for key, value in countries_upper.items():
df_clean["addresscountry"] = df_clean["addresscountry"].str.replace(key, value)
```
## Manually normalize countries which do not exist in country package
```
df_clean["addresscountry"].value_counts().head(30)
country_dictionary = {
"UNITED STATES": "US",
"USA":"US",
"UNITED KINGDOM": "GB",
"UK": "GB",
"CANADA": "CA",
"AUSTRALIA": "AU",
"UNITED ARAB EMIRATES":"AE",
"UAE": "AE",
"INDIA" : "IN",
"NEW ZEALAND": "NZ",
"SVERIGE" : "SE",
"DEUTSCHLAND": "DE",
"DEU": "DE",
"RUSSIA": "RU",
"ITALIA": "IT",
"IRAN": "IR",
", IN" : "IN",
"ENGLAND": "GB",
"FRA": "FR"
}
for key, value in country_dictionary.items():
df_clean["addresscountry"] = df_clean["addresscountry"].str.replace(key, value)
```
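Note that chained `str.replace` calls, as used above, do substring replacement, which can mangle values that contain another country's name as a substring. Mapping whole cell values with `Series.map` avoids that pitfall; a sketch with a subset of the dictionaries:

```python
import pandas as pd

# subset of the notebook's dictionaries, for illustration
mapping = {'UNITED STATES': 'US', 'USA': 'US', 'UNITED KINGDOM': 'GB', 'UK': 'GB'}
s = pd.Series(['USA', 'UNITED KINGDOM', 'DE'])
# map whole values; fall back to the original for entries already in ISO-2
normalized = s.map(mapping).fillna(s)
print(normalized.tolist())  # ['US', 'GB', 'DE']
```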
## In this manual step we save about 50k extra datapoints
## Remove non-covered countries
### There are still some uncovered cases left which have to be removed
```
df_clean.reset_index(inplace=True)
liste = []
for i, row in enumerate(df_clean["addresscountry"]):
if len(row) > 2:
liste.append(i)
df_clean = df_clean.drop(liste)
df_clean["addresscountry"].unique()
```
## Drop empty phonenumbers and too lengthy phone numbers
```
df_clean = df_clean[df_clean["telephone_"] != "" ]
liste = []
df_clean.reset_index(inplace=True)
for row_index in df_clean.index:
if len(df_clean.iloc[row_index]["telephone_"])>18:
liste.append(row_index)
df_clean.drop(labels = liste, inplace = True)
df_clean = df_clean.drop(columns = ["level_0","index"])
df_clean.tail()
len(df_clean)
```
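The row-wise loop above can also be done vectorized with pandas string methods; a sketch on toy data:

```python
import pandas as pd

# toy digit-only telephone strings (hypothetical)
s = pd.Series(['4915112345678', '', '1' * 25])
# keep non-empty numbers of at most 18 digits, as in the loop above
keep = (s != '') & (s.str.len() <= 18)
print(s[keep].tolist())  # ['4915112345678']
```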
## Define normalizer for telephone package phonenumbers
```
def normalizer(entity):
number = entity["telephone_"]
address_country = entity["addresscountry"]
phone_number = phonenumbers.parse(number, address_country)
return phone_number
```
## Finally normalizing phone numbers in E.164 format
### Ignore those which can not be identified and replace as ``unknown``
```
df_clean.reset_index(inplace=True)
phone_objects =[]
#index = []
for row_index in df_clean.index:
try:
phone_object = normalizer(df_clean.iloc[row_index])
#index.append(row_index)
phone_objects.append(phone_object)
except:
phone_objects.append("unknown")
len(phone_objects)
df_clean["phone_object"] = pd.Series(phone_objects)
df_clean = df_clean.drop(columns = "index")
df_clean.head()
unknown_rows = df_clean[df_clean["phone_object"] == "unknown"].index
df_clean = df_clean.drop(unknown_rows)
len(df_clean)
```
## Check whether phonenumbers are valid
```
df_valid_numbers = df_clean[df_clean["phone_object"].apply(phonenumbers.is_valid_number)]
len(df_valid_numbers)
```
## Next step: Format every telephone number into unique E.164 format
```
#phonenumbers.format_number(df_valid_numbers["phone_object"][0], phonenumbers.PhoneNumberFormat.E164)
df_valid_numbers["E.164 format"] = df_valid_numbers["phone_object"].apply(lambda objects: phonenumbers.format_number(objects, phonenumbers.PhoneNumberFormat.E164))
len(df_valid_numbers)
df_valid_numbers.head()
```
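Independent of the `phonenumbers` package, a normalized E.164 string is a `+` followed by a non-zero leading digit and at most 15 digits in total; a stdlib shape check:

```python
import re

# E.164: '+', a leading non-zero digit, at most 15 digits in total
E164_RE = re.compile(r'^\+[1-9]\d{1,14}$')

def looks_like_e164(number):
    return bool(E164_RE.match(number))

print(looks_like_e164('+14155552671'))  # True
print(looks_like_e164('14155552671'))   # False (missing '+')
```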
## After formatting phone numbers into unified format we can group by phone numbers to identify clusters
```
df_valid_numbers["E.164 format"].value_counts().sort_values().tail(20)
```
## Adding the matching telephone numbers in a new column
```
def createKDTree(tupleArray):
tree = KDTree(tupleArray)
return tree
# Return all values that are in a specific proximity
def queryTree(tree, point, r = 0):
point = [float(i) for i in point]
idx = tree.query_ball_point(point, r)
return idx
df_valid_numbers['telephoneNorm'] = df_valid_numbers['E.164 format'].str.replace('+', '', regex=False).astype(np.int64)
df_valid_numbers.reset_index(drop=True, inplace=True)
df_valid_numbers['indexValue'] = df_valid_numbers.index
telephoneArray = df_valid_numbers['telephoneNorm'].to_numpy().astype('int64')
fillArray = np.full(len(telephoneArray), 1)
tupleArray = np.array((telephoneArray, fillArray)).T.astype('int64')
# create new column with all matching points
tree = createKDTree(tupleArray)
idx = queryTree(tree, tupleArray[0])
# Search for the closest neighbour in all of the points
df_valid_numbers['MatchingNumbers'] = df_valid_numbers.apply(lambda row: queryTree(tree,[row['telephoneNorm'], 1]), axis=1)
len(df_valid_numbers)
# # filter out the values which only have one value
data = df_valid_numbers[df_valid_numbers['MatchingNumbers'].apply(lambda x: len(x) > 1)]
```
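For exact telephone matches, the KDTree-on-`(number, 1)` trick is equivalent to a hash-based group-by; a stdlib sketch of the same clustering on hypothetical numbers:

```python
from collections import defaultdict

# hypothetical normalized numbers; duplicates form a cluster
numbers = [14155550100, 14155550101, 14155550100, 14155550102, 14155550101]
clusters = defaultdict(list)
for idx, num in enumerate(numbers):
    clusters[num].append(idx)

# keep only numbers with more than one occurrence, as in the notebook
matches = {num: idxs for num, idxs in clusters.items() if len(idxs) > 1}
print(matches)  # {14155550100: [0, 2], 14155550101: [1, 4]}
```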
## Throwing out noisy clusters
### Throw out everything > 100 (interim step)
### Some peaks come from the same table --> could delete those entries right away
```
data["telephoneNorm"].value_counts()
data.loc[df_valid_numbers["telephoneNorm"] == 18884082399]
# len(data)
# data.head()
```
## Additional Filtering by Geo Location
```
def calcDifference(pointOne, pointTwo):
return geopy.distance.great_circle(pointOne, pointTwo).km
def calcDifferenceFromRow (row):
tmp = data
indexValue = row['indexValue']
indexPosition = (row[lat], row[lon])
diffList = []
for value in row['MatchingGeoPoints']:
if not value in tmp.index:
continue
currRow = data.loc[data['indexValue'] == value]
currIndex = currRow['indexValue'].values[0]
if currIndex == indexValue:
diffList.append(-1)
else:
currPosition = (currRow[lat].values[0], currRow[lon].values[0])
diffList.append(calcDifference(indexPosition, currPosition))
return diffList
```
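`geopy.distance.great_circle` computes the spherical great-circle distance; for reference, a stdlib haversine sketch of the same quantity:

```python
from math import asin, cos, radians, sin, sqrt

def great_circle_km(p1, p2, r=6371.0088):
    """Haversine great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * r * asin(sqrt(h))

# one degree of longitude on the equator is roughly 111.2 km
print(round(great_circle_km((0.0, 0.0), (0.0, 1.0)), 1))
```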
## Experiment with different radius --> check radius = 0.001 ?
```
def createKDTree(tupleArray):
tree = KDTree(tupleArray)
return tree
# Return all values that are in a specific proximity
def queryTree(tree, point, radius=0.001):
    point = [float(i) for i in point]
    idx = tree.query_ball_point(point, r=radius)
    return idx
# convert to tuples and from string to float
lonArr = data[lon].to_numpy()
latArr = data[lat].to_numpy()
tupleArray = np.array((lonArr, latArr)).T.astype('float32')
data.reset_index(drop=True, inplace=True)
data['indexValue'] = data.index
# create new column with all matching points
tree = createKDTree(tupleArray)
idx = queryTree(tree, tupleArray[0])
# Search for the closest neighbour in all of the points
data['MatchingGeoPoints'] = data.apply(lambda row: queryTree(tree,[row[lon], row[lat]]), axis=1)
# Keep those that have one or more matches withing the radius
data = data[data['MatchingGeoPoints'].apply(lambda x: len(x) > 1)]
data["telephoneNorm"].value_counts().tail(20)
pd.set_option('display.max_colwidth', 400)
data.loc[data["telephoneNorm"] == 16103991390]
```
## !Export table with condition and without condition !
```
# data.loc[data['indexValue'] == 5]
# Calculate the difference in km between those
#data['Difference'] = data.apply(lambda row: calcDifferenceFromRow(row), axis=1)
# data.iloc[3:4]
# len(data)
# data["origin"].value_counts()
# data.loc[data['indexValue'] == 32][['name', 'address', 'page_url', 'E.164 format', lat, lon]]
# data.loc[data['indexValue'] == 21907]
data.to_json("Concatenated_MFile", compression="gzip", orient='records', lines=True)
```
| github_jupyter |
## RIHAD VARIAWA, Data Scientist - Who has fun LEARNING, EXPLORING & GROWING
<h1><center>Polynomial Regression</center></h1>
<h4>About this Notebook</h4>
In this notebook, we learn how to use scikit-learn for polynomial regression. We download a dataset related to fuel consumption and carbon dioxide emissions of cars. Then we split our data into training and test sets, create a model using the training set, evaluate the model on the test set, and finally use the model to predict an unknown value.
<h1>Table of contents</h1>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ol>
<li><a href="#download_data">Downloading Data</a></li>
<li><a href="#polynomial_regression">Polynomial regression</a></li>
<li><a href="#evaluation">Evaluation</a></li>
<li><a href="#practice">Practice</a></li>
</ol>
</div>
<br>
<hr>
### Importing Needed packages
```
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline
```
<h2 id="download_data">Downloading Data</h2>
To download the data, we will use !wget to download it from IBM Object Storage.
```
#!wget -O FuelConsumption.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv
```
__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
## Understanding the Data
### `FuelConsumption.csv`:
We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64)
- **MODELYEAR** e.g. 2014
- **MAKE** e.g. Acura
- **MODEL** e.g. ILX
- **VEHICLE CLASS** e.g. SUV
- **ENGINE SIZE** e.g. 4.7
- **CYLINDERS** e.g 6
- **TRANSMISSION** e.g. A6
- **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9
- **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9
- **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2
- **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0
## Reading the data in
```
df = pd.read_csv("_datasets/FuelConsumption.csv")
# take a look at the dataset
df.head()
```
Let's select some features that we want to use for regression.
```
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
```
Let's plot Emission values with respect to Engine size:
```
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
```
#### Creating train and test dataset
Train/Test Split involves splitting the dataset into training and testing sets respectively, which are mutually exclusive. After which, you train with the training set and test with the testing set.
```
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
```
<h2 id="polynomial_regression">Polynomial regression</h2>
Sometimes the trend of the data is not really linear and looks curvy. In this case we can use polynomial regression methods. In fact, many different regressions exist that can be used to fit whatever the dataset looks like, such as quadratic, cubic, and so on, up to arbitrarily high degrees.
In essence, we can call all of these polynomial regression, where the relationship between the independent variable x and the dependent variable y is modeled as an nth-degree polynomial in x. Let's say you want a polynomial regression (here, a degree-2 polynomial):
$y = b + \theta_1 x + \theta_2 x^2$
Now, the question is: how can we fit our data to this equation when we only have x values, such as __Engine Size__?
Well, we can create a few additional features: 1, $x$, and $x^2$.
The __PolynomialFeatures()__ function in the scikit-learn library derives a new feature set from the original feature set. That is, it generates a matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, say the original feature set has only one feature, _ENGINESIZE_. If we set the polynomial degree to 2, it generates three features: degree 0, degree 1, and degree 2:
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
poly = PolynomialFeatures(degree=2)
train_x_poly = poly.fit_transform(train_x)
train_x_poly
```
**fit_transform** takes our x values and outputs our data raised to the powers 0 through 2 (since we set the degree of our polynomial to 2).
$
\begin{bmatrix}
v_1\\
v_2\\
\vdots\\
v_n
\end{bmatrix}
$
$\longrightarrow$
$
\begin{bmatrix}
[ 1 & v_1 & v_1^2]\\
[ 1 & v_2 & v_2^2]\\
\vdots & \vdots & \vdots\\
[ 1 & v_n & v_n^2]
\end{bmatrix}
$
in our example
$
\begin{bmatrix}
2.\\
2.4\\
1.5\\
\vdots
\end{bmatrix}
$
$\longrightarrow$
$
\begin{bmatrix}
[ 1 & 2. & 4.]\\
[ 1 & 2.4 & 5.76]\\
[ 1 & 1.5 & 2.25]\\
\vdots & \vdots & \vdots\\
\end{bmatrix}
$
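The same $[1, x, x^2]$ expansion can be built directly with NumPy, reproducing the worked example above:

```python
import numpy as np

x = np.array([2.0, 2.4, 1.5])
# each row is [1, x, x**2], matching the worked matrix above
X = np.column_stack([np.ones_like(x), x, x ** 2])
print(X)
```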
It looks like the feature set for a multiple linear regression analysis, right? Yes, it does.
Indeed, polynomial regression is a special case of linear regression, and the main idea is how you select your features. Just consider replacing $x$ with $x_1$, $x_1^2$ with $x_2$, and so on. Then the degree-2 equation turns into:
$y = b + \theta_1 x_1 + \theta_2 x_2$
Now we can treat it as a 'linear regression' problem. Therefore, polynomial regression is considered a special case of traditional multiple linear regression, and you can use the same mechanism as linear regression to solve such problems.
So we can use the __LinearRegression()__ function to solve it:
```
clf = linear_model.LinearRegression()
train_y_ = clf.fit(train_x_poly, train_y)
# The coefficients
print ('Coefficients: ', clf.coef_)
print ('Intercept: ',clf.intercept_)
```
As mentioned before, __Coefficient__ and __Intercept__ , are the parameters of the fit curvy line.
Given that it is a typical multiple linear regression with 3 parameters, and knowing that the parameters are the intercept and the coefficients of the hyperplane, sklearn has estimated them from our new feature set. Let's plot it:
```
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf.intercept_[0]+ clf.coef_[0][1]*XX+ clf.coef_[0][2]*np.power(XX, 2)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
```
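Under the hood, fitting the expanded features is ordinary least squares; a quick sketch recovering known coefficients with `numpy.linalg.lstsq` on synthetic noise-free data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 3.0 + 2.0 * x + 0.5 * x ** 2      # known quadratic, no noise

# design matrix [1, x, x**2], then least-squares fit
X = np.column_stack([np.ones_like(x), x, x ** 2])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(theta, 6))  # recovers the intercept and coefficients [3, 2, 0.5]
```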
<h2 id="evaluation">Evaluation</h2>
```
from sklearn.metrics import r2_score
test_x_poly = poly.fit_transform(test_x)
test_y_ = clf.predict(test_x_poly)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y_ , test_y) )
```
<h2 id="practice">Practice</h2>
Try to use a polynomial regression with the dataset but this time with degree three (cubic). Does it result in better accuracy?
```
# write your code here
poly3 = PolynomialFeatures(degree=3)
train_x_poly3 = poly3.fit_transform(train_x)
clf3 = linear_model.LinearRegression()
train_y3_ = clf3.fit(train_x_poly3, train_y)
# The coefficients
print ('Coefficients: ', clf3.coef_)
print ('Intercept: ',clf3.intercept_)
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf3.intercept_[0]+ clf3.coef_[0][1]*XX + clf3.coef_[0][2]*np.power(XX, 2) + clf3.coef_[0][3]*np.power(XX, 3)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
test_x_poly3 = poly3.fit_transform(test_x)
test_y3_ = clf3.predict(test_x_poly3)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y3_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y3_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y3_ , test_y) )
```
Double-click __here__ for the solution.
<!-- Your answer is below:
poly3 = PolynomialFeatures(degree=3)
train_x_poly3 = poly3.fit_transform(train_x)
clf3 = linear_model.LinearRegression()
train_y3_ = clf3.fit(train_x_poly3, train_y)
# The coefficients
print ('Coefficients: ', clf3.coef_)
print ('Intercept: ',clf3.intercept_)
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf3.intercept_[0]+ clf3.coef_[0][1]*XX + clf3.coef_[0][2]*np.power(XX, 2) + clf3.coef_[0][3]*np.power(XX, 3)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
test_x_poly3 = poly3.fit_transform(test_x)
test_y3_ = clf3.predict(test_x_poly3)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y3_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y3_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y3_ , test_y) )
-->
<h2>Want to learn more?</h2>
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="http://cocl.us/ML0101EN-SPSSModeler">SPSS Modeler</a>
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://cocl.us/ML0101EN_DSX">Watson Studio</a>
<h3>Thanks for completing this lesson!</h3>
<h4>Author: <a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a></h4>
<p><a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a>, PhD is a Data Scientist in IBM with a track record of developing enterprise level applications that substantially increases clients’ ability to turn data into actionable knowledge. He is a researcher in data mining field and expert in developing advanced analytic methods like machine learning and statistical modelling on large datasets.</p>
<hr>
<p>Copyright © 2018 <a href="https://cocl.us/DX0108EN_CC">Cognitive Class</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.</p>
| github_jupyter |
STAT 453: Deep Learning (Spring 2020)
Instructor: Sebastian Raschka (sraschka@wisc.edu)
- Course website: http://pages.stat.wisc.edu/~sraschka/teaching/stat453-ss2020/
- GitHub repository: https://github.com/rasbt/stat453-deep-learning-ss20
# RNN with LSTM with Own Dataset
Example notebook showing how to use your own CSV text dataset for training a simple RNN for sentiment classification (here: a binary classification problem with two labels, positive and negative) using LSTM (Long Short-Term Memory) cells.
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
import torch
import torch.nn.functional as F
from torchtext import data
from torchtext import datasets
import time
import random
import pandas as pd
torch.backends.cudnn.deterministic = True
```
## General Settings
```
RANDOM_SEED = 123
torch.manual_seed(RANDOM_SEED)
VOCABULARY_SIZE = 20000
LEARNING_RATE = 1e-4
BATCH_SIZE = 128
NUM_EPOCHS = 15
DEVICE = torch.device('cuda:3' if torch.cuda.is_available() else 'cpu')
EMBEDDING_DIM = 128
HIDDEN_DIM = 256
OUTPUT_DIM = 1
```
## Dataset
The following cells will download the IMDB movie review dataset (http://ai.stanford.edu/~amaas/data/sentiment/) for positive-negative sentiment classification as a CSV-formatted file:
```
!wget https://github.com/rasbt/python-machine-learning-book-2nd-edition/raw/master/code/ch08/movie_data.csv.gz
!gunzip -f movie_data.csv.gz
```
Check that the dataset looks okay:
```
df = pd.read_csv('movie_data.csv')
df.head()
del df
```
Define the Label and Text field formatters:
```
TEXT = data.Field(sequential=True,
tokenize='spacy',
include_lengths=True) # necessary for packed_padded_sequence
LABEL = data.LabelField(dtype=torch.float)
```
Process the dataset:
```
fields = [('review', TEXT), ('sentiment', LABEL)]
dataset = data.TabularDataset(
path="movie_data.csv", format='csv',
skip_header=True, fields=fields)
```
Split the dataset into training, validation, and test partitions:
```
train_data, valid_data, test_data = dataset.split(
split_ratio=[0.75, 0.05, 0.2],
random_state=random.seed(RANDOM_SEED))
print(f'Num Train: {len(train_data)}')
print(f'Num Valid: {len(valid_data)}')
print(f'Num Test: {len(test_data)}')
```
Build the vocabulary based on the top "VOCABULARY_SIZE" words:
```
TEXT.build_vocab(train_data, max_size=VOCABULARY_SIZE)
LABEL.build_vocab(train_data)
print(f'Vocabulary size: {len(TEXT.vocab)}')
print(f'Number of classes: {len(LABEL.vocab)}')
LABEL.vocab.freqs
```
The TEXT.vocab dictionary will contain the word counts and indices. The reason the number of words is VOCABULARY_SIZE + 2 is that it contains two special tokens for unknown words and padding: `<unk>` and `<pad>`.
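As a rough pure-Python illustration (not torchtext's actual implementation), vocabulary construction with the two reserved specials can be sketched like this:

```python
from collections import Counter

def build_vocab(token_lists, max_size):
    """Minimal sketch of vocabulary building with <unk>/<pad> specials.
    Illustration only -- torchtext handles this internally."""
    counts = Counter(tok for toks in token_lists for tok in toks)
    # Reserve indices 0 and 1 for the two special tokens.
    stoi = {'<unk>': 0, '<pad>': 1}
    for tok, _ in counts.most_common(max_size):
        stoi[tok] = len(stoi)
    return stoi

corpus = [['great', 'movie'], ['bad', 'movie']]
vocab = build_vocab(corpus, max_size=2)
# 2 most frequent words + 2 specials = 4 entries
print(len(vocab))                             # 4
print(vocab.get('terrible', vocab['<unk>']))  # unseen word falls back to 0
```

Any word outside the `max_size` most frequent ones is mapped to the `<unk>` index at lookup time, which is why the model never sees an out-of-vocabulary token directly.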
Make dataset iterators:
```
train_loader, valid_loader, test_loader = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size=BATCH_SIZE,
sort_within_batch=True, # necessary for packed_padded_sequence
sort_key=lambda x: len(x.review),
device=DEVICE)
```
Testing the iterators (note that the number of rows depends on the longest document in the respective batch):
```
print('Train')
for batch in train_loader:
print(f'Text matrix size: {batch.review[0].size()}')
print(f'Target vector size: {batch.sentiment.size()}')
break
print('\nValid:')
for batch in valid_loader:
print(f'Text matrix size: {batch.review[0].size()}')
print(f'Target vector size: {batch.sentiment.size()}')
break
print('\nTest:')
for batch in test_loader:
print(f'Text matrix size: {batch.review[0].size()}')
print(f'Target vector size: {batch.sentiment.size()}')
break
```
## Model
```
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):
super().__init__()
self.embedding = nn.Embedding(input_dim, embedding_dim)
self.rnn = nn.LSTM(embedding_dim, hidden_dim)
self.fc = nn.Linear(hidden_dim, output_dim)
def forward(self, text, text_length):
#[sentence len, batch size] => [sentence len, batch size, embedding size]
embedded = self.embedding(text)
packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, text_length)
#[sentence len, batch size, embedding size] =>
# output: [sentence len, batch size, hidden size]
# hidden: [1, batch size, hidden size]
packed_output, (hidden, cell) = self.rnn(packed)
return self.fc(hidden.squeeze(0)).view(-1)
INPUT_DIM = len(TEXT.vocab)
torch.manual_seed(RANDOM_SEED)
model = RNN(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM)
model = model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```
## Training
```
def compute_binary_accuracy(model, data_loader, device):
model.eval()
correct_pred, num_examples = 0, 0
with torch.no_grad():
for batch_idx, batch_data in enumerate(data_loader):
text, text_lengths = batch_data.review
logits = model(text, text_lengths)
predicted_labels = (torch.sigmoid(logits) > 0.5).long()
num_examples += batch_data.sentiment.size(0)
correct_pred += (predicted_labels.long() == batch_data.sentiment.long()).sum()
return correct_pred.float()/num_examples * 100
start_time = time.time()
for epoch in range(NUM_EPOCHS):
model.train()
for batch_idx, batch_data in enumerate(train_loader):
text, text_lengths = batch_data.review
### FORWARD AND BACK PROP
logits = model(text, text_lengths)
cost = F.binary_cross_entropy_with_logits(logits, batch_data.sentiment)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print (f'Epoch: {epoch+1:03d}/{NUM_EPOCHS:03d} | '
f'Batch {batch_idx:03d}/{len(train_loader):03d} | '
f'Cost: {cost:.4f}')
with torch.set_grad_enabled(False):
print(f'training accuracy: '
f'{compute_binary_accuracy(model, train_loader, DEVICE):.2f}%'
f'\nvalid accuracy: '
f'{compute_binary_accuracy(model, valid_loader, DEVICE):.2f}%')
print(f'Time elapsed: {(time.time() - start_time)/60:.2f} min')
print(f'Total Training Time: {(time.time() - start_time)/60:.2f} min')
print(f'Test accuracy: {compute_binary_accuracy(model, test_loader, DEVICE):.2f}%')
import spacy
nlp = spacy.load('en')
def predict_sentiment(model, sentence):
# based on:
# https://github.com/bentrevett/pytorch-sentiment-analysis/blob/
# master/2%20-%20Upgraded%20Sentiment%20Analysis.ipynb
model.eval()
tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
length = [len(indexed)]
tensor = torch.LongTensor(indexed).to(DEVICE)
tensor = tensor.unsqueeze(1)
length_tensor = torch.LongTensor(length)
prediction = torch.sigmoid(model(tensor, length_tensor))
return prediction.item()
print('Probability positive:')
1-predict_sentiment(model, "This is such an awesome movie, I really love it!")
print('Probability negative:')
predict_sentiment(model, "I really hate this movie. It is really bad and sucks!")
%watermark -iv
```
---
# Time series anomaly detection
Depending on available data, there are a lot of approaches for anomaly detection. If **data is labeled** (each point in time has an anomaly / not-anomaly label), then supervised learning approaches can be used. It is then a classification task where logistic regression, random forest, SVM, boosting, RNN, etc. can be applied. Here you have to pay attention to:
1. Data imbalance because usually there are just a few anomalies (less than 5% of all available data) and
2. [Cross validation through time](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.TimeSeriesSplit.html).
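The idea behind cross validation through time can be sketched in a few lines (an illustrative expanding-window splitter, not scikit-learn's implementation): every validation fold lies strictly after its training data, so no information leaks from the future.

```python
def time_series_splits(n_samples, n_splits):
    """Yield (train_idx, test_idx) pairs where the training window grows
    and the test fold always lies strictly after it -- no leakage."""
    fold = n_samples // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train = list(range(0, fold * k))
        test = list(range(fold * k, min(fold * (k + 1), n_samples)))
        yield train, test

for train, test in time_series_splits(12, n_splits=3):
    print(len(train), len(test))  # 3 3 / 6 3 / 9 3
```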
In case **labels are not provided**, unsupervised techniques should be applied instead. There are several methods here too, but the general approach is quite similar: predict a value and compare it with the realized value. If the residual exceeds some threshold, the point is flagged as an anomaly.
There are several methods to make a prediction: moving average (simple, weighted) taking into account the last few hours or days or weekdays, etc., ARIMA model, Prophet, [seasonal Hybrid Extreme Studentized Deviate](https://arxiv.org/pdf/1704.07706.pdf) technique and many others.
In this notebook, we are going to use a moving average for prediction and 1.5 standard deviations as a threshold.
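Stripped of pandas, the rule reduces to: flag a point whenever its absolute residual from the trailing moving average exceeds 1.5 trailing standard deviations. A minimal sketch with hypothetical numbers:

```python
from statistics import mean, stdev

def flag_anomalies(series, window=3, k=1.5):
    """Flag points whose residual from the trailing moving average
    exceeds k trailing standard deviations."""
    flags = []
    for i in range(window, len(series)):
        past = series[i - window:i]          # strictly before point i
        m, s = mean(past), stdev(past)
        flags.append(abs(series[i] - m) > k * s)
    return flags

ctr = [0.10, 0.11, 0.10, 0.11, 0.45, 0.10]   # 0.45 is an obvious spike
print(flag_anomalies(ctr))  # [False, True, False]
```

Note that the point after the spike is not flagged: the spike inflates the trailing standard deviation, which is exactly why the notebook later recommends longer, day-aligned windows.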
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
```
---
# Data load and preparation
Data is prepared in the [Notebook](./exploratory_data_analysis.ipynb) and pickled. Thus, we can load and visualize the already prepared hourly CTR time series.
```
data_set = pd.read_pickle('./data/CTR_aggregated.pkl')
data_set.index = data_set.hour
data_set.drop(columns='hour', inplace=True)
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
plt.figure(figsize=[16, 8])
sns.lineplot(x='hour', y='CTR', data=data_set, linewidth=3)
plt.title('Hourly CTR for period 2014/10/21 and 2014/10/30', fontsize=20)
```
---
# Anomaly detection with the Simple Moving Average approach
Here we are using the `pandas` functions `shift` and `rolling` for the trailing `mean` and `std` calculation. A 24-hour window is used for the moving average and moving standard deviation calculations. The assumption here is that the values of the last day are supposed to be close to the current day.
```
def plot_anmolies(data, period=24):
"""
    Takes data and a period for the MA, and plots the time series with the MA prediction and boundaries
"""
# MA calculation
data['MA'] = data.CTR.shift(1).rolling(period).mean()
data['STD'] = data.CTR.shift(1).rolling(period).std()
    data['UpperBoundary'] = data['MA'] + 1.5 * data['STD']
    data['LowerBoundary'] = data['MA'] - 1.5 * data['STD']
data['Anomaly'] = np.where(
abs(data['MA'] - data['CTR']) > 1.5 * data['STD'], 'Anomaly',
'Not anomaly')
# plot anomalies
plt.figure(figsize=[16, 8])
sns.lineplot(x='hour',
y='CTR',
data=data,
linewidth=3,
label='CTR',
color='lightblue')
sns.lineplot(x='hour', y='MA', data=data, linewidth=3, label='MA')
sns.lineplot(x='hour',
y='UpperBoundary',
data=data,
linewidth=3,
color='grey',
label='UpperBoundary')
sns.lineplot(x='hour',
y='LowerBoundary',
data=data,
linewidth=3,
color='grey',
label='LowerBoundary')
sns.scatterplot(x='hour',
y='CTR',
data=data,
hue='Anomaly',
s=100,
palette=['lightblue', 'red'])
plt.title(
        "CTR hourly Anomaly Detection with {}-period rolling window Moving Average\nFound {} anomalies in {} data points. {} points are excluded to build a moving average."
.format(period,
sum(data['Anomaly'] == 'Anomaly'),
data['MA'].notna().sum(),
data['MA'].isna().sum()),
fontsize=20)
plt.tight_layout()
plot_anmolies(data_set, 24)
plot_anmolies(data_set, 72)
```
---
# Summary
The Simple Moving Average approach works quite well for anomaly detection. By varying the `period` value in the `plot_anmolies` function, you can try to find the optimal period value. But given the nature of the data, I recommend using a period that is a multiple of 24. If you would like to find fewer but more robust anomalies, use a period of 48, 72, or more.
Other approaches can be considered in the next steps: EMA, ARIMA taking into account hourly and weekly seasonality, Prophet, etc.
---
# Food Manufacture II
## Objective and Prerequisites
In this example, you’ll have to tackle the same problem that you did in “Food Manufacturing I,” but with additional constraints that change the problem type from a linear program (LP) problem to a mixed-integer program (MIP) problem, making it harder to solve.
More information on this type of model can be found in example #2 of the fifth edition of Model Building in Mathematical Programming by H. P. Williams on pages 255 and 299–300.
This modeling example is at the intermediate level, where we assume that you know Python and are familiar with the Gurobi Python API. In addition, you should have some knowledge about building mathematical optimization models.
**Download the Repository** <br />
You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip).
**Gurobi License** <br />
In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-MFG-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_food-manufacturing_2_COM_EVAL_GITHUB_&utm_term=food-manufacturing-problem&utm_content=C_JPM) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-MFG-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_food-manufacturing_2_ACADEMIC_EVAL_GITHUB_&utm_term=food-manufacturing-problem&utm_content=C_JPM) as an *academic user*.
---
## Problem Description
A manufacturer needs to refine several raw oils and blend them together to produce a given food product that can be sold. The raw oils needed can be divided into two categories:
| Category | Oil |
| ------------- |-------------|
| Vegetable oils:| VEG 1<br>VEG 2 |
| Non-vegetable oils: | OIL 1<br>OIL 2<br>OIL 3 |
The manufacturer can choose to buy raw oils for the current month and/or buy them on the futures market for delivery in a subsequent month. Prices for immediate delivery and in the futures market are given below in USD/ton:
| Month | VEG 1 | VEG 2 | OIL 1 | OIL 2 | OIL 3|
| ------------- |-------------| -------------| -------------| -------------| -------------|
| January| 110 | 120 | 130 | 110 | 115|
| February |130 | 130 | 110 | 90| 115|
| March |110 | 140 | 130 | 100 | 95|
| April |120 | 110 | 120 | 120 | 125|
| May | 100 | 120 | 150 | 110 | 105|
| June | 90 | 100 | 140 | 80| 135 |
There are a number of additional factors that must be taken into account. These include:
1. The final food product sells for $\$150$ per ton.
2. Each category of oil (vegetable and non-vegetable) needs to be refined on a different production line.
3. There is limited refinement capacity such that in any given month a maximum of 200 tons of vegetable oils and 250 tons of non-vegetable oils can be refined.
4. Also, there is no waste in the refinement process, so the sum of the raw oils refined will equal the amount of refined oils available.
5. The cost of refining the oils may be ignored.
In addition to the refining limits above, there are limits to the amount of raw oils that can be stored for future use, and there is a cost associated with each ton of oil stored. The limit is 1,000 tons of each raw oil and the storage cost is $\$5$ per ton per month. The manufacturer cannot store the produced food product or the refined oils.
The final food product must have a hardness between three and six on a given hardness scale. For the purposes of the model, hardness blends linearly and the hardness of each raw oil is:
|Oils | Hardness|
| ------------- |-------------|
|VEG 1 | 8.8|
|VEG 2 | 6.1|
|OIL 1 | 2.0|
|OIL 2 | 4.2|
|OIL 3| 5.0|
At the start of January, there are 500 tons of each type of raw oil in storage. For the purpose of the model, this should also be the level of raw oils in storage at the end of June.
This version of the Food Manufacture problem adds the following additional constraints to the first version:
- Condition 1: If an oil is used during a month, the minimum quantity used must be 20 tons.
- Condition 2: The maximum number of oils used in a month is three.
- Condition 3: The use of VEG1 or VEG2 in a given month requires the use of OIL3 in that same month.
Given the above information, what monthly buying and manufacturing decisions should be made in order to maximize profit?
---
## Model Formulation
### Sets and Indices
$t \in \text{Months}=\{\text{Jan},\text{Feb},\text{Mar},\text{Apr},\text{May},\text{Jun}\}$: Set of months.
$V=\{\text{VEG1},\text{VEG2}\}$: Set of vegetable oils.
$N=\{\text{OIL1},\text{OIL2},\text{OIL3}\}$: Set of non-vegetable oils.
$o \in \text{Oils} = V \cup N$: Set of oils.
### Parameters
$\text{price} \in \mathbb{R}^+$: Sale price of the final product.
$\text{init_store} \in \mathbb{R}^+$: Initial storage amount in tons.
$\text{target_store} \in \mathbb{R}^+$: Target storage amount in tons.
$\text{holding_cost} \in \mathbb{R}^+$: Monthly cost (in USD/ton/month) of keeping in inventory a ton of oil.
$\text{min_consume} \in \mathbb{R}^+$: Minimum number of tons to consume of a given oil in a month.
$\text{veg_cap} \in \mathbb{R}^+$: Installed capacity (in tons) to refine vegetable oils.
$\text{oil_cap} \in \mathbb{R}^+$: Installed capacity (in tons) to refine non-vegetable oils.
$\text{min_hardness} \in \mathbb{R}^+$: lowest hardness allowed for the final product.
$\text{max_hardness} \in \mathbb{R}^+$: highest hardness allowed for the final product.
$\text{hardness}_o \in \mathbb{R}^+$: Hardness of oil $o$.
$\text{max_ingredients} \in \mathbb{N}$: Maximum number of oil types to consume in a given month.
$\text{cost}_{t,o} \in \mathbb{R}^+$: Estimated purchase price for oil $o$ at month $t$.
### Decision Variables
$\text{produce}_t \in \mathbb{R}^+$: Tons of food to produce at month $t$.
$\text{buy}_{t,o} \in \mathbb{R}^+$: Tons of oil $o$ to buy at month $t$.
$\text{consume}_{t,o} \in \mathbb{R}^+$: Tons of oil $o$ to use at month $t$.
$\text{store}_{t,o} \in \mathbb{R}^+$: Tons of oil $o$ to store at month $t$.
$\text{use}_{t,o} \in \{0,1\}$: 1 if oil $o$ is used on month $t$, 0 otherwise.
### Objective Function
- **Profit**: Maximize the total profit (in USD) of the planning horizon.
\begin{equation}
\text{Maximize} \quad Z = \sum_{t \in \text{Months}}\text{price}*\text{produce}_t - \sum_{t \in \text{Months}}\sum_{o \in \text{Oils}}(\text{cost}_{t,o}*\text{consume}_{t,o} + \text{holding_cost}*\text{store}_{t,o})
\tag{0}
\end{equation}
### Constraints
- **Initial Balance:** The tons of oil $o$ purchased in January plus the tons previously stored should equal the tons of that oil consumed and stored in that month.
\begin{equation}
\text{init_store} + \text{buy}_{Jan,o} = \text{consume}_{Jan,o} + \text{store}_{Jan,o} \quad \forall o \in \text{Oils}
\tag{1}
\end{equation}
- **Balance:** The tons of oil $o$ purchased in month $t$ plus the tons previously stored should equal the tons of that oil consumed and stored in that month.
\begin{equation}
\text{store}_{t-1,o} + \text{buy}_{t,o} = \text{consume}_{t,o} + \text{store}_{t,o} \quad \forall (t,o) \in (\text{Months} \setminus \{\text{Jan}\}) \times \text{Oils}
\tag{2}
\end{equation}
- **Inventory Target**: The tons of oil $o$ kept in inventory at the end of the planning horizon should hit the target.
\begin{equation}
\text{store}_{Jun,o} = \text{target_store} \quad \forall o \in \text{Oils}
\tag{3}
\end{equation}
- **Refinement Capacity**: The total tons of oil consumed in month $t$ cannot exceed the refinement capacity of each category.
\begin{equation}
\sum_{o \in V}\text{consume}_{t,o} \leq \text{veg_cap} \quad \forall t \in \text{Months}
\tag{4.1}
\end{equation}
\begin{equation}
\sum_{o \in N}\text{consume}_{t,o} \leq \text{oil_cap} \quad \forall t \in \text{Months}
\tag{4.2}
\end{equation}
- **Hardness**: The hardness value of the food produced in month $t$ should be within tolerances.
\begin{equation}
\text{min_hardness}*\text{produce}_t \leq \sum_{o \in \text{Oils}} \text{hardness}_o*\text{consume}_{t,o} \leq \text{max_hardness}*\text{produce}_t \quad \forall t \in \text{Months}
\tag{5}
\end{equation}
- **Mass Conservation**: The total tons of oil consumed in month $t$ should equal the tons of food produced in that month.
\begin{equation}
\sum_{o \in \text{Oils}}\text{consume}_{t,o} = \text{produce}_t \quad \forall t \in \text{Months}
\tag{6}
\end{equation}
- **Consumption Range**: Oil $o$ can be consumed in month $t$ if we decide to use it in that month, and the Tons consumed should be between 20 and the refinement capacity for its type.
\begin{equation}
\text{min_consume}*\text{use}_{t,o} \leq \text{consume}_{t,o} \leq \text{veg_cap}*\text{use}_{t,o} \quad \forall (t,o) \in \text{Months} \times V
\tag{7.1}
\end{equation}
\begin{equation}
\text{min_consume}*\text{use}_{t,o} \leq \text{consume}_{t,o} \leq \text{oil_cap}*\text{use}_{t,o} \quad \forall (t,o) \in \text{Months} \times N
\tag{7.2}
\end{equation}
- **Recipe**: The maximum number of oils used in month $t$ must be three.
\begin{equation}
\sum_{o \in \text{Oils}}\text{use}_{t,o} \leq \text{max_ingredients} \quad \forall t \in \text{Months}
\tag{8}
\end{equation}
- **If-then Constraint**: If oils VEG1 or VEG2 are used in month $t$, then OIL3 must be used in that month.
\begin{equation}
\text{use}_{t,\text{VEG1}} \leq \text{use}_{t,\text{OIL3}} \quad \forall t \in \text{Months}
\tag{9.1}
\end{equation}
\begin{equation}
\text{use}_{t,\text{VEG2}} \leq \text{use}_{t,\text{OIL3}} \quad \forall t \in \text{Months}
\tag{9.2}
\end{equation}
---
## Python Implementation
We import the Gurobi Python Module and other Python libraries.
```
import numpy as np
import pandas as pd
import gurobipy as gp
from gurobipy import GRB
# tested with Python 3.7 & Gurobi 9
```
## Input Data
We define all the input data of the model.
```
# Parameters
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
oils = ["VEG1", "VEG2", "OIL1", "OIL2", "OIL3"]
cost = {
('Jan', 'VEG1'): 110,
('Jan', 'VEG2'): 120,
('Jan', 'OIL1'): 130,
('Jan', 'OIL2'): 110,
('Jan', 'OIL3'): 115,
('Feb', 'VEG1'): 130,
('Feb', 'VEG2'): 130,
('Feb', 'OIL1'): 110,
('Feb', 'OIL2'): 90,
('Feb', 'OIL3'): 115,
('Mar', 'VEG1'): 110,
('Mar', 'VEG2'): 140,
('Mar', 'OIL1'): 130,
('Mar', 'OIL2'): 100,
('Mar', 'OIL3'): 95,
('Apr', 'VEG1'): 120,
('Apr', 'VEG2'): 110,
('Apr', 'OIL1'): 120,
('Apr', 'OIL2'): 120,
('Apr', 'OIL3'): 125,
('May', 'VEG1'): 100,
('May', 'VEG2'): 120,
('May', 'OIL1'): 150,
('May', 'OIL2'): 110,
('May', 'OIL3'): 105,
('Jun', 'VEG1'): 90,
('Jun', 'VEG2'): 100,
('Jun', 'OIL1'): 140,
('Jun', 'OIL2'): 80,
('Jun', 'OIL3'): 135
}
hardness = {"VEG1": 8.8, "VEG2": 6.1, "OIL1": 2.0, "OIL2": 4.2, "OIL3": 5.0}
price = 150
init_store = 500
veg_cap = 200
oil_cap = 250
min_hardness = 3
max_hardness = 6
max_ingredients = 3
holding_cost = 5
min_consume = 20
```
## Model Deployment
For each period, we create a variable for the amount of food produced. For each product (five kinds of oils) and each period, we create variables for the amount that gets purchased, used, and stored.
For each period and each product, we need a binary variable, which indicates if this product is used in the current period.
```
food = gp.Model('Food Manufacture II')
# Quantity of food produced in each period
produce = food.addVars(months, name="Food")
# Quantity bought of each product in each period
buy = food.addVars(months, oils, name = "Buy")
# Quantity used of each product in each period
consume = food.addVars(months, oils, name = "Consume")
# Quantity stored of each product in each period
store = food.addVars(months, oils, name = "Store")
# binary variables =1, if consume > 0
use = food.addVars(months, oils, vtype=GRB.BINARY, name = "Use")
```
Next, we insert the constraints. The balance constraints ensure that the amount of oil that is in the storage in the previous period plus the amount that gets purchased equals the amount that is used plus the amount that is stored in the current period (for each oil).
```
#1. Initial Balance
Balance0 = food.addConstrs((init_store + buy[months[0], oil]
== consume[months[0], oil] + store[months[0], oil]
for oil in oils), "Initial_Balance")
#2. Balance
Balance = food.addConstrs((store[months[months.index(month)-1], oil] + buy[month, oil]
== consume[month, oil] + store[month, oil]
for oil in oils for month in months if month != months[0]), "Balance")
```
The Inventory Target constraints ensure that at the end of the last period the storage contains the initial amount of each oil, since the problem description demands that the storage be as full at the end as at the beginning.
```
#3. Inventory Target
TargetInv = food.addConstrs((store[months[-1], oil] == init_store for oil in oils), "End_Balance")
```
The capacity constraints restrict the amount of veg and non-veg oils which can be processed per period. Per month only 200 tons of vegetable oil and 250 tons of non-vegetable oil can be processed to the final product.
```
#4.1 Vegetable Oil Capacity
VegCapacity = food.addConstrs((gp.quicksum(consume[month, oil] for oil in oils if "VEG" in oil)
<= veg_cap for month in months), "Capacity_Veg")
#4.2 Non-vegetable Oil Capacity
NonVegCapacity = food.addConstrs((gp.quicksum(consume[month, oil] for oil in oils if "OIL" in oil)
<= oil_cap for month in months), "Capacity_Oil")
```
The hardness constraints limit the hardness of the final product, which needs to remain between 3 and 6. Each oil has a certain hardness. The final product may be made up of different oils. The hardness of the final product is measured by the hardness of each ingredient multiplied by its share of the final product. It is assumed that the hardness blends linearly.
```
#5. Hardness
HardnessMin = food.addConstrs((gp.quicksum(hardness[oil]*consume[month, oil] for oil in oils)
>= min_hardness*produce[month] for month in months), "Hardness_lower")
HardnessMax = food.addConstrs((gp.quicksum(hardness[oil]*consume[month, oil] for oil in oils)
<= max_hardness*produce[month] for month in months), "Hardness_upper")
```
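Because hardness blends linearly, the hardness of a blend is just the consumption-weighted average of the ingredients' hardness values. A quick check with an illustrative (not optimal) blend:

```python
# Hardness values taken from the input data above.
hardness = {"VEG1": 8.8, "VEG2": 6.1, "OIL1": 2.0, "OIL2": 4.2, "OIL3": 5.0}

def blend_hardness(consume):
    """Consumption-weighted average hardness of a blend (tons per oil)."""
    total = sum(consume.values())
    return sum(hardness[o] * t for o, t in consume.items()) / total

# Hypothetical month: 200 t of VEG2 plus 250 t of OIL3
h = blend_hardness({"VEG2": 200, "OIL3": 250})
print(round(h, 3), 3 <= h <= 6)  # 5.489 True -- within the [3, 6] tolerance
```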
The Mass Conservation constraints ensure that the amount of products used in each period equals the amount of food produced in that period. This ensures that all oil that is used is also processed into the final product (food).
```
#6. Mass Conservation
MassConservation = food.addConstrs((consume.sum(month) == produce[month] for month in months), "Mass_conservation")
```
Condition 1 constraints ensure that if any product is used in a period, then at least 20 tons of it are used. They also link the binary variable for each product and month to the corresponding continuous variable: the binary variable equals one if and only if the continuous consumption variable is non-zero. Such a binary variable is called an indicator variable, since it indicates whether the continuous variable is non-zero.
It's relatively straightforward to express Condition 1 as a pure MIP constraint set (constraints 7.1 and 7.2 in the formulation above). Let's see how to model this set using Gurobi's general constraints (available from version 7.0 onwards):
```
#7.1 & 7.2 Consumption Range - Using Gurobi's General Constraints
for month in months:
for oil in oils:
food.addGenConstrIndicator(use[month, oil], 0,
consume[month, oil] == 0,
name="Lower_bound_{}_{}".format(month, oil))
food.addGenConstrIndicator(use[month, oil], 1,
consume[month, oil] >= min_consume,
name="Upper_bound_{}_{}".format(month, oil))
```
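The logic these indicator constraints encode is exactly the linking in constraints (7.1) and (7.2): if `use` is 0 the oil is untouched, and if `use` is 1 then between `min_consume` and the capacity is consumed. A pure-Python feasibility check of that logic (a sketch, not Gurobi code):

```python
def condition1_ok(consume, use, min_consume=20, cap=250):
    """Check the linking bounds: if use == 0 the oil is untouched;
    if use == 1 at least min_consume (and at most cap) tons are used."""
    return min_consume * use <= consume <= cap * use

print(condition1_ok(0, 0))   # unused oil: feasible
print(condition1_ok(50, 1))  # used with >= 20 tons: feasible
print(condition1_ok(5, 1))   # used but below the 20-ton minimum: infeasible
print(condition1_ok(5, 0))   # consumed without being flagged: infeasible
```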
Condition 2 constraints ensure that each final product is made up of at most three ingredients.
```
#8. Recipe
condition2 = food.addConstrs((use.sum(month) <= max_ingredients for month in months),"Recipe")
```
Condition 3 constraints ensure that if VEG1 or VEG2 is used, then OIL3 must also be used. We will again use Gurobi's general constraints:
```
#9.1 & 9.2 If-then Constraint
for month in months:
food.addGenConstrIndicator(use[month, "VEG1"], 1,
use[month, "OIL3"] == 1,
name = "If_then_a_{}".format(month))
food.addGenConstrIndicator(use[month, "VEG2"], 1,
use[month, "OIL3"] == 1,
name = "If_then_b_{}".format(month))
```
The objective is to maximize the profit of the company. This is calculated as revenue minus costs for buying and storing of the purchased products (ingredients).
```
#0. Objective Function
obj = price*produce.sum() - buy.prod(cost) - holding_cost*store.sum()
food.setObjective(obj, GRB.MAXIMIZE) # maximize profit
```
Next, we start the optimization and Gurobi finds the optimal solution.
```
food.optimize()
```
---
## Analysis
When originally designed, this model proved comparatively hard to solve (see Food Manufacture I). The profit (revenue from sales minus cost of raw oils) resulting from this plan is $\$100,278.7$. There are alternative — and equally good — solutions.
### Purchase Plan
This plan defines the amount of vegetable oil (VEG) and non-vegetable oil (OIL) that we need to purchase during the planning horizon. For example, 480.4 tons of vegetable oil of type VEG1 need to be bought in June.
```
rows = months.copy()
columns = oils.copy()
purchase_plan = pd.DataFrame(columns=columns, index=rows, data=0.0)
for month, oil in buy.keys():
if (abs(buy[month, oil].x) > 1e-6):
purchase_plan.loc[month, oil] = np.round(buy[month, oil].x, 1)
purchase_plan
```
### Monthly Consumption
This plan determines the amount of vegetable oil (VEG) and non-vegetable oil (OIL) consumed during the planning horizon. For example, 114.8 tons of vegetable oil of type VEG2 are consumed in January.
```
rows = months.copy()
columns = oils.copy()
reqs = pd.DataFrame(columns=columns, index=rows, data=0.0)
for month, oil in consume.keys():
if (abs(consume[month, oil].x) > 1e-6):
reqs.loc[month, oil] = np.round(consume[month, oil].x, 1)
reqs
```
### Inventory Plan
This plan reflects the amount of vegetable oil (VEG) and non-vegetable oil (OIL) in inventory at the end of each period of the planning horizon. For example, at the end of February we have 500 tons of Non-vegetable oil of type OIL1.
```
rows = months.copy()
columns = oils.copy()
store_plan = pd.DataFrame(columns=columns, index=rows, data=0.0)
for month, oil in store.keys():
if (abs(store[month, oil].x) > 1e-6):
store_plan.loc[month, oil] = np.round(store[month, oil].x, 1)
store_plan
```
Note: If you want to write your solution to a file, rather than print it to the terminal, you can use the model.write() command. An example implementation is:
`food.write("food-manufacture-2-output.sol")`
---
## References
H. Paul Williams, Model Building in Mathematical Programming, fifth edition.
Copyright © 2020 Gurobi Optimization, LLC
---
### Scroll Down Below to start from Exercise 8.04
```
# Removes Warnings
import warnings
warnings.filterwarnings('ignore')
#import the necessary packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
```
## Reading the data using pandas
```
data= pd.read_csv('Churn_Modelling.csv')
data.head(5)
len(data)
data.shape
```
## Scrubbing the data
```
data.isnull().values.any()
# It seems we have some missing values; now let us explore which
# columns have missing values
data.isnull().any()
## It seems that we have missing values in Gender, Age and EstimatedSalary
data[["EstimatedSalary","Age"]].describe()
data.describe()
#### It seems that HasCrCard has values 0 and 1, hence it needs to be changed to a category
data['HasCrCard'].value_counts()
## No of missing Values present
data.isnull().sum()
## Percentage of missing Values present
round(data.isnull().sum()/len(data)*100,2)
## Checking the datatype of the missing columns
data[["Gender","Age","EstimatedSalary"]].dtypes
```
### There are three ways to impute missing values:
1. Dropping the rows with missing values
2. Filling missing values with a summary statistic (mean, median, mode)
3. Predicting the missing values using an ML algorithm
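The third option can be as simple as predicting a missing value from related columns. Here is a sketch on a hypothetical mini-dataset (not the churn data) that uses the group mean as the "model" -- a one-line stand-in for a proper ML imputer such as KNN:

```python
import pandas as pd

# Hypothetical mini-frame: one Age value is missing.
df = pd.DataFrame({
    "Gender": ["F", "F", "M", "M"],
    "Age":    [30.0, 40.0, 50.0, float("nan")],
})

# "Predict" the missing Age from the mean of the same Gender group.
df["Age"] = df["Age"].fillna(df.groupby("Gender")["Age"].transform("mean"))
print(df["Age"].tolist())  # [30.0, 40.0, 50.0, 50.0]
```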
```
### Filling the missing value with the mean of the values
mean_value=data['EstimatedSalary'].mean()
data['EstimatedSalary']=data['EstimatedSalary'].fillna(mean_value)
data['Gender'].value_counts()
### Since Gender is a categorical field, we will fill the missing
### values with the most frequently occurring category
data['Gender']=data['Gender'].fillna(data['Gender'].value_counts().idxmax())
mode_value=data['Age'].mode()
data['Age']=data['Age'].fillna(mode_value[0])
##checking for any missing values
data.isnull().any()
```
### Renaming the columns
```
# We would want to rename some of the columns
data = data.rename(columns={
'CredRate': 'CreditScore',
'ActMem' : 'IsActiveMember',
'Prod Number': 'NumOfProducts',
'Exited':'Churn'
})
data.columns
```
### We would also like to move the Churn column to the end and drop the CustomerId
```
data.drop(labels=['CustomerId'], axis=1,inplace = True)
column_churn = data['Churn']
data.drop(labels=['Churn'], axis=1,inplace = True)
data.insert(len(data.columns), 'Churn', column_churn.values)
data.columns
```
### Changing the data type
```
# Convert these variables into categorical variables
data["Geography"] = data["Geography"].astype('category')
data["Gender"] = data["Gender"].astype('category')
data.dtypes
```
# Exploring the data
## Statistical Overview
```
data['Churn'].value_counts(0)
data['Churn'].value_counts(1)*100
data.describe()
summary_churn = data.groupby('Churn')
summary_churn.mean()
summary_churn.median()
corr = data.corr()
plt.figure(figsize=(15,8))
sns.heatmap(corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values,annot=True)
corr
```
## Visualization
```
f, axes = plt.subplots(ncols=3, figsize=(15, 6))
sns.distplot(data.EstimatedSalary, kde=True, color="darkgreen", ax=axes[0]).set_title('EstimatedSalary')
axes[0].set_ylabel('No of Customers')
sns.distplot(data.Age, kde=True, color="darkblue", ax=axes[1]).set_title('Age')
axes[1].set_ylabel('No of Customers')
sns.distplot(data.Balance, kde=True, color="maroon", ax=axes[2]).set_title('Balance')
axes[2].set_ylabel('No of Customers')
plt.figure(figsize=(15,4))
p=sns.countplot(y="Gender", hue='Churn', data=data,palette="Set2")
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Customer Churn Distribution by Gender')
plt.figure(figsize=(15,4))
p=sns.countplot(x='Geography', hue='Churn',data=data, palette="Set2")
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Customer Geography Distribution')
plt.figure(figsize=(15,4))
p=sns.countplot(x='NumOfProducts', hue='Churn',data=data, palette="Set2")
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('Customer Distribution by Product')
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Churn'] == 0),'Age'] , color=sns.color_palette("Set2")[0],shade=True,label='no churn')
ax=sns.kdeplot(data.loc[(data['Churn'] == 1),'Age'] , color=sns.color_palette("Set2")[1],shade=True, label='churn')
ax.set(xlabel='Customer Age', ylabel='Frequency')
plt.title('Customer Age - churn vs no churn')
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Churn'] == 0),'Balance'] , color=sns.color_palette("Set2")[0],shade=True,label='no churn')
ax=sns.kdeplot(data.loc[(data['Churn'] == 1),'Balance'] , color=sns.color_palette("Set2")[1],shade=True, label='churn')
ax.set(xlabel='Customer Balance', ylabel='Frequency')
plt.title('Customer Balance - churn vs no churn')
plt.figure(figsize=(15,4))
ax=sns.kdeplot(data.loc[(data['Churn'] == 0),'CreditScore'] , color=sns.color_palette("Set2")[0],shade=True,label='no churn')
ax=sns.kdeplot(data.loc[(data['Churn'] == 1),'CreditScore'] , color=sns.color_palette("Set2")[1],shade=True, label='churn')
ax.set(xlabel='CreditScore', ylabel='Frequency')
plt.title('Customer CreditScore - churn vs no churn')
plt.figure(figsize=(16,4))
p=sns.barplot(x='NumOfProducts',y='Balance',hue='Churn',data=data, palette="Set2")
p.legend(loc='upper right')
legend = p.get_legend()
legend_txt = legend.texts
legend_txt[0].set_text("No Churn")
legend_txt[1].set_text("Churn")
p.set_title('No of Product VS Balance')
```
## Feature selection
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
data.dtypes
### Encoding the categorical variables
data["Geography"] = data["Geography"].astype('category').cat.codes
data["Gender"] = data["Gender"].astype('category').cat.codes
target = 'Churn'
X = data.drop('Churn', axis=1)
y=data[target]
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.15, random_state=123, stratify=y)
forest=RandomForestClassifier(n_estimators=500,random_state=1)
forest.fit(X_train,y_train)
importances=forest.feature_importances_
features = data.drop(['Churn'],axis=1).columns
indices = np.argsort(importances)[::-1]
plt.figure(figsize=(15,4))
plt.title("Feature importances using Random Forest")
plt.bar(range(X_train.shape[1]), importances[indices],
color="r", align="center")
plt.xticks(range(X_train.shape[1]), features[indices], rotation='vertical',fontsize=15)
plt.xlim([-1, X_train.shape[1]])
plt.show()
```
## Model Fitting
```
### From the feature selection let us take only the top 5 features
import statsmodels.api as sm
top5_features = ['Age','EstimatedSalary','CreditScore','Balance','NumOfProducts']
logReg = sm.Logit(y_train, X_train[top5_features])
logistic_regression = logReg.fit()
logistic_regression.summary()
logistic_regression.params
# Create function to compute coefficients
coef = logistic_regression.params
def y(coef, Age, EstimatedSalary, CreditScore, Balance, NumOfProducts):
    return coef[0]*Age + coef[1]*EstimatedSalary + coef[2]*CreditScore + coef[3]*Balance + coef[4]*NumOfProducts
import numpy as np
#A customer having below attributes
#Age: 50
#EstimatedSalary: 100,000
#CreditScore: 600
#Balance: 100,000
#NumOfProducts: 2
#we can estimate the chance of churn as follows
y1 = y(coef, 50, 100000, 600,100000,2)
p = np.exp(y1) / (1+np.exp(y1))
p
```
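The computation in the cell above is the standard logistic transform: given the linear predictor $y$, the churn probability is

```latex
p \;=\; \frac{e^{y}}{1 + e^{y}} \;=\; \frac{1}{1 + e^{-y}},
\qquad y \;=\; \sum_{j} \hat{\beta}_j \, x_j .
```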
# Fitting Logistic Regression using Scikit Learn
```
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(random_state=0, solver='lbfgs').fit(X_train[top5_features], y_train)
clf.predict(X_test[top5_features])
clf.predict_proba(X_test[top5_features])
clf.score(X_test[top5_features], y_test)
```
## Exercise 8.04
# Performing standardization
```
from sklearn import preprocessing
X_train[top5_features].head()
scaler = preprocessing.StandardScaler().fit(X_train[top5_features])
scaler.mean_
scaler.scale_
X_train_scalar=scaler.transform(X_train[top5_features])
X_train_scalar
X_test_scalar=scaler.transform(X_test[top5_features])
```
## Exercise 8.05
# Performing Scaling
```
min_max = preprocessing.MinMaxScaler().fit(X_train[top5_features])
min_max.min_
min_max.scale_
X_train_min_max=min_max.transform(X_train[top5_features])
X_test_min_max=min_max.transform(X_test[top5_features])
```
## Exercise 8.06
# Normalization
```
normalize = preprocessing.Normalizer().fit(X_train[top5_features])
normalize
X_train_normalize=normalize.transform(X_train[top5_features])
X_test_normalize=normalize.transform(X_test[top5_features])
np.sqrt(np.sum(X_train_normalize**2, axis=1))
np.sqrt(np.sum(X_test_normalize**2, axis=1))
```
## Exercise 8.07
# Model Evaluation
```
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=1).split(X_train[top5_features].values, y_train.values)
results=[]
for i, (train,test) in enumerate(skf):
clf.fit(X_train[top5_features].values[train],y_train.values[train])
fit_result=clf.score(X_train[top5_features].values[test],y_train.values[test])
results.append(fit_result)
print('k-fold: %2d, Class Ratio: %s, Accuracy: %.4f' % (i,np.bincount(y_train.values[train]),fit_result))
print('accuracy for CV is:%.3f' % np.mean(results))
```
### Using Scikit Learn cross_val_score
```
from sklearn.model_selection import cross_val_score
results_cross_val_score=cross_val_score(estimator=clf,X=X_train[top5_features].values,y=y_train.values,cv=10,n_jobs=1)
results_cross_val_score
print('accuracy for CV is:%.3f' % np.mean(results_cross_val_score))
```
## Exercise 8.08
# Fine Tuning of Model Using Grid Search
```
from sklearn import svm
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedKFold
parameters = [ {'kernel': ['linear'], 'C':[0.1, 1, 10]}, {'kernel': ['rbf'], 'gamma':[0.5, 1, 2], 'C':[0.1, 1, 10]}]
clf = GridSearchCV(svm.SVC(), parameters, cv = StratifiedKFold(n_splits = 10))
clf.fit(X_train[top5_features], y_train)
print('best score train:', clf.best_score_)
print('best parameters train: ', clf.best_params_)
```
## Exercise 8.09
# Performance Metrics
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report,confusion_matrix,accuracy_score
from sklearn import metrics
clf_random = RandomForestClassifier(n_estimators=20, max_depth=None,
min_samples_split=7, random_state=0)
clf_random.fit(X_train[top5_features],y_train)
y_pred=clf_random.predict(X_test[top5_features])
target_names = ['No Churn', 'Churn']
print(classification_report(y_test, y_pred, target_names=target_names))
cm = confusion_matrix(y_test, y_pred)
cm_df = pd.DataFrame(cm,
index = ['No Churn','Churn'],
columns = ['No Churn','Churn'])
plt.figure(figsize=(8,6))
sns.heatmap(cm_df, annot=True,fmt='g',cmap='Blues')
plt.title('Random Forest \nAccuracy:{0:.3f}'.format(accuracy_score(y_test, y_pred)))
plt.ylabel('True Values')
plt.xlabel('Predicted Values')
plt.show()
```
## Exercise 8.10
# ROC Curve
```
from sklearn.metrics import roc_curve,auc
# use predicted probabilities rather than hard labels so the ROC curve has more than one threshold point
y_score = clf_random.predict_proba(X_test[top5_features])[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, y_score, pos_label=1)
roc_auc = metrics.auc(fpr, tpr)
thresholds
plt.figure()
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, label='%s AUC = %0.2f' % ('Random Forest', roc_auc))
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.ylabel('Sensitivity(True Positive Rate)')
plt.xlabel('1-Specificity(False Positive Rate)')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()
```
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize as sopt
from pysimu import ode2numba, ssa
from ipywidgets import *
%matplotlib notebook
```
## System definition
```
S_base = 100.0e6
U_base = 20e3
Z_base = U_base**2/S_base
r_km = 0.127 # ohm/km
x_km = 0.113 # ohm/km
length = 1.0
R = r_km*length/Z_base
X = x_km*length/Z_base
Z = R +1j*X
Y = 1.0/Z
G_s_inf, B_s_inf = Y.real, Y.imag
sys = { 't_end':20.0,'Dt':0.01,'solver':'forward-euler', 'decimation':10, 'name':'vsg_pi',
'models':[{'params':
{'X_s': 0.3,
'R_s': 0.1,
'K_p' : 1.0,
'T_pi' : 10.0,
'K_q':1.0,
'T_q':1.0,
'K_d':1.0,
'Omega_b' : 2*np.pi*60,
'B_s0':0.0,
'G_s_inf':G_s_inf,
'theta_inf': 0.0,
'K_a':200.0,
'K_stab':10,
'B_s_inf':B_s_inf,
'G_s0':0.0,
'V_inf':1.0
},
'f':[
'ddelta = Omega_b*(omega - 1)',
'domega = K_p*(epsilon_p + xi_p/T_pi)',
'dxi_p = epsilon_p',
'dxi_q = epsilon_q'
],
'g':['ur@-ur + V_s*cos(theta_s)', # polar to real
'ui@-ui + V_s*sin(theta_s)', # polar to imag
'cosu@-cosu +ur/V_s', # ideal PLL
'sinu@-sinu +ui/V_s', # ideal PLL
'v_s_d@-v_s_d + ur*cosu + ui*sinu', # original park
'v_s_q@-v_s_q - ur*sinu + ui*cosu', # original park
'epsilon_p@-epsilon_p + p_m - p_e',
'epsilon_q@-epsilon_q + Q_s_ref - Q_s',
'p_m@p_m - p_m_0',
'e @ -e + K_q*(epsilon_q + xi_q/T_q) ', #
'e_d@ e_d - (K_d*(omega-1.0)+1.0)*e*cos(delta) ', # V
'e_q@ e_q - (K_d*(omega-1.0)+1.0)*e*sin(delta) ', # V
'i_s_d@ -e_d + R_s*i_s_d - X_s*i_s_q + v_s_d', # VSC or SYM equation
'i_s_q@ -e_q + R_s*i_s_q + X_s*i_s_d + v_s_q', # VSC or SYM equation
'p_e@-p_e+ i_s_d*e_d + i_s_q*e_q', # active power equation
'P_s@-P_s+ i_s_d*v_s_d + i_s_q*v_s_q', # active power equation
'Q_s@-Q_s+ i_s_d*v_s_q - i_s_q*v_s_d', # reactive power equation
'V_s@(G_s0 + G_s_inf)*V_s**2 - V_inf*(G_s_inf*cos(theta_s - theta_inf) + B_s_inf*sin(theta_s - theta_inf))*V_s - P_s',
'theta_s@(-B_s0 - B_s_inf)*V_s**2 + V_inf*(B_s_inf*cos(theta_s - theta_inf) - G_s_inf*sin(theta_s - theta_inf))*V_s - Q_s',
],
'u':{'p_m_0':0.8,'Q_s_ref':0.1},
'y':['ur','ui','cosu','sinu','v_s_d','v_s_q','epsilon_p','epsilon_q','p_m','e','e_d','e_q','i_s_d','i_s_q','p_e','P_s','Q_s','V_s','theta_s'],
'y_ini':['ur','ui','cosu','sinu','v_s_d','v_s_q','epsilon_p','epsilon_q','p_m','e','e_d','e_q','i_s_d','i_s_q','p_e','P_s','Q_s','V_s','theta_s'],
'h':[
'omega'
]}
],
'perturbations':[{'type':'step','time':1.0,'var':'p_m_0','final':0.9} ]
}
x,f = ode2numba.system(sys) ;
#omega*Mf*if
#e = MF*if
#omega*e
#(K_d*(omega-1)+1)e
import vsg_pi
syst = vsg_pi.vsg_pi_class()
x0 = np.ones(syst.N_x+syst.N_y)
s = sopt.fsolve(syst.run_problem,x0 )
print(s)
fig,axes = plt.subplots(nrows=1)
points = axes.plot([],[],'o')
axes.set_xlim(-10,2)
axes.set_ylim(-50,50)
axes.grid(True)
def Jac(x):
J=np.vstack((np.hstack((syst.struct[0].Fx,syst.struct[0].Fy)),np.hstack((syst.struct[0].Gx,syst.struct[0].Gy))))
return J
def update(p_m_0 = 0.9, K_p=10, T_pi=10, K_d=0.0, K_q=0.1):
syst.struct[0].p_m_0 = p_m_0
syst.struct[0].K_p = K_p
syst.struct[0].K_d = K_d
syst.struct[0].K_q = K_q
if T_pi <0.001: T_pi = 0.001
syst.struct[0].T_pi = T_pi
x0 = np.vstack([syst.struct[0].x, syst.struct[0].y])
x0 = np.ones(syst.N_x+syst.N_y)
#x0[0,0] = 0.0
frime = np.vstack((syst.struct[0].f,syst.struct[0].g))
s = sopt.fsolve(syst.run_problem,x0 )
syst.struct[0].x[:,0] = s[0:syst.N_x]
syst.struct[0].y[:,0] = s[syst.N_x:(syst.N_x+syst.N_y)]
#print(np.linalg.det(syst.struct[0].Gy))
e,v = np.linalg.eig(ssa.eval_A(syst))
points[0].set_xdata(e.real)
points[0].set_ydata(e.imag/np.pi/2)
delta = np.rad2deg(syst.struct[0].x[0,0])
V_s = syst.struct[0].y[-2,0]
print(f'delta = {delta:.2f}, V_s = {V_s:.2f}, zeta = {-100*e[0].real/abs(e[0]):.2f} %, freq = {e[0].imag/2/np.pi:.2f}')
fig.canvas.draw()
interact(update, p_m_0 =widgets.FloatSlider(min=0.0,max=1.2,step=0.02,value=0.8, continuous_update=False),
K_p =widgets.FloatSlider(min=0.0,max=10.0,step=0.1,value=0.8, continuous_update=False),
T_pi =widgets.FloatSlider(min=1.0,max=100.0,step=0.1,value=0.8, continuous_update=False),
K_d = widgets.FloatSlider(min=0.0,max=10.0,step=0.1,value=0.8, continuous_update=True),
K_q = widgets.FloatSlider(min=0.0,max=10.0,step=0.1,value=0.8, continuous_update=False));
1.0/(10.448-10.277)
```
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0
import torch
import numpy as np
from utils import show, renormalize, pbar
from utils import util, paintwidget, labwidget, imutil
from networks import networks
from PIL import Image
import os
import skvideo.io
from torchvision import transforms
import time
```
### load networks
```
nets = networks.define_nets('stylegan', 'ffhq')
outdim = nets.setting['outdim']
```
### sample an image, and reencode it
```
use_g_sample = True
if use_g_sample:
# use a gan image as source
n = 56
with torch.no_grad():
source_z = nets.sample_zs(n+1, seed=0)[n][None]
source_im = nets.zs2image(source_z)
show(['Source Image', renormalize.as_image(source_im[0]).resize((256, 256), Image.LANCZOS)])
else:
# use a real image as source
im_path = 'photos/torralba_cropped.png'
transform = transforms.Compose([
transforms.Resize(outdim),
transforms.CenterCrop(outdim),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
source_im = transform(Image.open(im_path))[None].cuda()
show(['Source Image', renormalize.as_image(source_im[0]).resize((256, 256), Image.LANCZOS)])
with torch.no_grad():
out = nets.invert(source_im)
show(renormalize.as_image(out[0]).resize((256, 256), Image.LANCZOS))
```
### visualize network priors
You can drag your mouse on the left panel, and the GAN reconstruction will show in the right panel
```
src_painter = paintwidget.PaintWidget(oneshot=False, width=256, height=256,
brushsize=20, save_sequence=False, track_move=True) # , on_move=True)
src_painter.image = renormalize.as_url(source_im[0], size=256)
img_url = renormalize.as_url(torch.zeros(3, 256, 256))
img_html = '<img src="%s"/>'%img_url
output_div = labwidget.Div(img_html)
counter = 0
prev_time = time.time()
update_freq = 0.1 # mouse time intervals 0.05 to 0.07, can change this
mask_list = []
reconstruction_list = []
def probe_changed(c):
global counter
global prev_time
counter += 1
curr_time = time.time()
if curr_time - prev_time < update_freq:
return
prev_time = time.time()
mask_url = src_painter.mask_buffer
mask = renormalize.from_url(mask_url, target='pt', size=(outdim, outdim)).cuda()[None] # 1x3xHxW
with torch.no_grad():
mask = mask[:, [0], :, :] # 1x1xHxW
mask_list.append(mask.cpu())
masked_im = source_im * mask
regenerated_mask = nets.invert(masked_im, mask)
img_url = renormalize.as_url(regenerated_mask[0], size=256)
img_html = '<img src="%s"/>'%img_url
output_div.innerHTML = img_html
reconstruction_list.append(renormalize.as_image(regenerated_mask[0]))
src_painter.on('mask_buffer', probe_changed)
show.a([src_painter], cols=2)
show.a([output_div], cols=2)
show.flush()
```
### save the resulting video
```
def write_video(file_name, rate='15'):
os.makedirs('drawing/masking', exist_ok=True)
assert(not os.path.isfile('drawing/masking/%s' % file_name))
inputdict = {
'-r': rate
}
outputdict = {
'-pix_fmt': 'yuv420p',
'-r': rate
}
writer = skvideo.io.FFmpegWriter('drawing/masking/%s' % file_name, inputdict, outputdict)
source_im_np = np.array(renormalize.as_image(source_im[0]))
for mask, rec_image in zip(pbar(mask_list), reconstruction_list):
masked_im = renormalize.as_image((source_im.cpu() * mask)[0])
masked_im_np = np.array(masked_im)
rec_im_np = np.array(rec_image)
im_np = np.concatenate([source_im_np, masked_im_np, rec_im_np], axis=1)
writer.writeFrame(im_np)
writer.close()
write_video('face.mp4', rate='15')
```
# Recall: Boosting
### AdaBoost Algorithm
An *iterative* algorithm for "ensembling" base learners
- Input: $\{(\mathbf{x}_i, y_i)\}_{i = 1}^n, T, \mathscr{F}$, base learner
- Initialize: $\mathbf{w}^{1} = (\frac{1}{n}, ..., \frac{1}{n})$
- For $t = 1, ..., T$
- $\mathbf{w}^{t} \rightarrow \boxed{\text{base learner finds} \quad \arg\min_{f \in \mathscr{F}} \sum \limits_{i = 1}^n w^t_i \mathbb{1}_{\{f(\mathbf{x}_i) \neq y_i\}}} \rightarrow f_t$
- $\alpha_t = \frac{1}{2}\text{ln}\left(\frac{1 - r_t}{r_t}\right)$
  - where $r_t := e_{\mathbf{w}^t}(f_t) = \sum \limits_{i = 1}^n w_i^t \mathbb{1}_{\{f_t(\mathbf{x}_i) \neq y_i\}}$
- $w_i^{t + 1} = \frac{w_i^t \exp \left(- \alpha_ty_if_t(\mathbf{x}_i)\right)}{z_t}$ where $z_t$ normalizes.
- Output: $h_T(\mathbf{x}) = \text{sign}\left(\sum \limits_{t = 1}^T \alpha_t f_t(\mathbf{x})\right)$
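For $\pm 1$ labels, the loop above can be sketched end to end. This is a minimal illustrative implementation using exhaustive decision stumps as a stand-in base learner (the stump search and the names `fit_adaboost`/`predict_adaboost` are ours, not from a library):

```python
import numpy as np

def fit_adaboost(X, y, T):
    """AdaBoost with exhaustive decision stumps; y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                       # uniform initial weights
    ensemble = []
    for _ in range(T):
        best = None
        # "base learner": search all stumps of the form s * (x_j >= thr ? +1 : -1)
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] >= thr, 1, -1)
                    err = np.sum(w * (pred != y))  # weighted error r_t
                    if best is None or err < best[0]:
                        best = (err, (j, thr, s), pred)
        r, stump, pred = best
        r = max(r, 1e-12)                          # guard against log(0)
        alpha = 0.5 * np.log((1 - r) / r)
        w = w * np.exp(-alpha * y * pred)          # reweight the examples
        w = w / w.sum()                            # z_t normalization
        ensemble.append((alpha, stump))
    return ensemble

def predict_adaboost(ensemble, X):
    agg = np.zeros(len(X))
    for alpha, (j, thr, s) in ensemble:
        agg += alpha * s * np.where(X[:, j] >= thr, 1, -1)
    return np.sign(agg)
```

On a tiny separable dataset a single stump already drives the weighted error to zero, so the ensemble reproduces the labels exactly.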
## Adaboost through Coordinate Descent
It is often said that we can view Adaboost as "Coordinate Descent" on the exponential loss function.
**Question**: Can you figure out what that means? Why is Adaboost doing coordinate descent?
*Hint 1*: You need to figure out the objective function being minimized. For simplicity, assume there are a finite number of weak learners in $\mathscr{F}$
*Hint 2*: Recall that the exponential loss function is $\ell(h; (x,y)) = \exp(-y h(x))$
*Hint 3*: Let's write down the objective function being minimized. For simplicity, assume there are a finite number of weak learners in $\mathscr{F}$, say indexed by $j=1, \ldots, m$. Given a weight vector $\vec{\alpha}$, exponential loss over the data for this $\vec{\alpha}$ is:
$$\text{Loss}(\vec{\alpha}) = \sum_{i=1}^n \exp \left( - y_i \left(\sum_{j=1}^m \alpha_j h_j(\vec{x}_i)\right)\right)$$
Coordinate descent chooses the steepest (most negative) coordinate of $\nabla \text{Loss}(\vec{\alpha})$ and updates *only this coordinate*. Which coordinate is chosen?
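One way to carry the computation through (a sketch in the notation above, with $y_i, h_j(\vec{x}_i) \in \{\pm 1\}$): differentiating the exponential loss with respect to a single coordinate $\alpha_j$ gives

```latex
\frac{\partial\, \text{Loss}}{\partial \alpha_j}
  = -\sum_{i=1}^n y_i\, h_j(\vec{x}_i)\,
    \exp\!\left(-y_i \sum_{k=1}^m \alpha_k h_k(\vec{x}_i)\right)
  \;\propto\; -\sum_{i=1}^n w_i\, y_i\, h_j(\vec{x}_i)
  = -\left(1 - 2\, e_{\mathbf{w}}(h_j)\right),
```

where the $w_i$ are the normalized exponential factors, i.e. exactly the AdaBoost weights. The steepest coordinate is therefore the $h_j$ with the smallest weighted error $e_{\mathbf{w}}(h_j)$ (the base-learner step), and an exact line search along that coordinate recovers $\alpha_t = \frac{1}{2}\ln\left(\frac{1 - r_t}{r_t}\right)$.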
## Bagging classifiers
Let's explore how bagging (bootstrapped aggregation) works with classifiers to reduce variance, first by evaluating off the shelf tools and then by implementing our own basic bagging classifier.
In both examples we'll be working with the dataset from the [forest cover type prediction Kaggle competition](https://www.kaggle.com/c/forest-cover-type-prediction), where the aim is to build a multi-class classifier to predict the forest cover type of a 30x30 meter plot of land based on cartographic features. See [their notes about the dataset](https://www.kaggle.com/c/forest-cover-type-prediction/data) for more background.
## Exploring bagging
### Loading and splitting the dataset
First, let's load the dataset:
```
import pandas as pd
df = pd.read_csv('forest-cover-type.csv')
df.head()
```
Now we extract the X/y features and split them into a 40/60 train/test split so that we can see how well the training set performance generalizes to a heldout set.
```
X, y = df.iloc[:, 1:-1].values, df.iloc[:, -1].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.6, random_state=0)
```
### Evaluating train/test with and without bagging
Now let's use an off the shelf decision tree classifier and compare its train/test performance with a [bagged](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingClassifier.html) decision tree.
```
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
models = [
('tree', DecisionTreeClassifier(random_state=0)),
('bagged tree', BaggingClassifier(
DecisionTreeClassifier(random_state=0),
random_state=0,
n_estimators=10))
]
for label, model in models:
model.fit(X_train, y_train)
print("{} training|test accuracy: {:.2f} | {:.2f}".format(
label,
accuracy_score(y_train, model.predict(X_train)),
accuracy_score(y_test, model.predict(X_test))))
```
Note that both models were able to (nearly) fit the training set perfectly, and that bagging substantially improves test set performance (reduces variance).
### Hyperparameters
Let's look at two hyperparametes associated with the bagging classifier:
- **n_estimators** controls how many classifiers make up the ensemble
- **max_samples** controls how many samples each classifier in the ensemble draws
#### How many classifiers do we need to reduce variance?
The default number of estimators is 10; explore the performance of the bagging classifier over a range of values. How many classifiers do we need to reduce variance? What is the point of diminishing returns for this dataset?
```
# your code goes here!
```
#### How much of the dataset does each classifier need?
By default, max_samples is set to 1.0, which means each classifier gets a number of samples equal to the size of the training set.
How do you suppose bagging manages to reduce variance while still using the same number of samples?
Explore how the performance varies as you range `max_samples` (note, you can use float values between 0.0 and 1.0 to choose a percentage):
```
# your code goes here!
```
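One intuition for the question above: a bootstrap sample of size $n$ drawn with replacement contains only about $1 - 1/e \approx 63\%$ of the distinct training points, so each ensemble member sees a genuinely different subset even with `max_samples=1.0`. A quick check (illustrative, not part of the exercise):

```python
import numpy as np

rng = np.random.RandomState(0)
n = 10000
# draw a bootstrap sample: n indices sampled with replacement
sample = rng.choice(n, size=n, replace=True)
frac_unique = len(np.unique(sample)) / n
print(frac_unique)  # close to 1 - 1/e ~= 0.632
```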
## Implementing Bagging
We've shown the power of bagging, now let's appreciate its simplicity by implementing our own bagging classifier right here!
```
from sklearn.tree import DecisionTreeClassifier
from sklearn.base import BaseEstimator
import numpy as np
class McBaggingClassifier(BaseEstimator):
def __init__(self, classifier_factory=DecisionTreeClassifier, num_classifiers=10):
self.classifier_factory = classifier_factory
self.num_classifiers = num_classifiers
def fit(self, X, y):
        # create self.num_classifiers classifiers by calling self.classifier_factory,
        # each fitted on a different bootstrap sample (drawn with replacement) from X and y
return self
def predict(self, X):
# get the prediction for each classifier, take a majority vote
return np.ones(X.shape[0])
```
You should be able to achieve similar performance to scikit-learn's implementation:
```
our_models = [
('tree', DecisionTreeClassifier(random_state=0)),
('our bagged tree', McBaggingClassifier(
classifier_factory=lambda: DecisionTreeClassifier(random_state=0)
))
]
for label, model in our_models:
model.fit(X_train, y_train)
print("{} training|test accuracy: {:.2f} | {:.2f}".format(
label,
accuracy_score(y_train, model.predict(X_train)),
accuracy_score(y_test, model.predict(X_test))))
```
# Dynamics on Networks
Because of Python's flexibility, it is not only easy to do network analysis using NetworkX, but with a minimal amount of code we can simulate dynamics on networks.
In this section we'll simulate disease dynamics on NetworkX graphs. This last section is optional: you can choose to work on it, or try to start doing your own work using NetworkX. I'll be available to help.
## SIR Dynamics
In this section you will implement and describe SIR dynamics on a network. You
should implement the dynamics as follows:
1. Mark all nodes as susceptible
2. Select a single node to begin the infection.
3. While there are infected nodes **do**:
- For each infected node $u$ in the previous step
1. For each neighbor of $u$, $v$
- If $v$ is susceptible set $v$ to infected with probability $\beta$
2. Set $u$ to recovered
Note here that we are assuming that nodes are infected for exactly one time
step before they move into a recovered state, i.e. $\gamma=1$. Below is the outline of the function
```
import networkx as nx
import numpy as np
import random

def simulate_SIR(G, beta, n0=None):
    if n0 is None:
        n0 = [random.choice(list(G.nodes()))]
    infected = [set(n0)]
    recovered = set([])
    t = 1
    while True:
        # your code goes here!
        break
    return infected, recovered
```
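One possible completion of the outline (a sketch, not the only valid implementation, named `simulate_SIR_sketch` to keep it separate from your own). It operates on any adjacency mapping: a plain `dict` of node → neighbors works, and an `nx.Graph` does too, since `G[u]` iterates over neighbors and `list(G)` lists the nodes:

```python
import random

def simulate_SIR_sketch(G, beta, n0=None):
    """SIR with gamma = 1: each node is infected for exactly one step.
    Returns (infected, recovered): a list of per-step infected sets,
    and the final recovered set."""
    if n0 is None:
        n0 = [random.choice(list(G))]
    infected = [set(n0)]
    recovered = set()
    while infected[-1]:
        newly_infected = set()
        for u in infected[-1]:
            for v in G[u]:
                # v is susceptible iff it was never infected:
                # not recovered, not currently infected, not infected this step
                if v not in recovered and v not in infected[-1] and v not in newly_infected:
                    if random.random() < beta:
                        newly_infected.add(v)
            recovered.add(u)   # u recovers after exactly one step
        infected.append(newly_infected)
    return infected, recovered
```

With `beta=1.0` on a connected graph the infection deterministically sweeps the whole network, which makes a handy sanity check.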
There are three graphs stored in the `data` folder that correspond to three different real world networks. They are stored as edgelists so use the appropriate function to read them
1. `celegans.g`: A metabolic network of the nematode
_Caenorhabditis elegans_[1]
2. `jazz.g`: A network of collaborations among Jazz
Musicians.[2]
3. `ISPDec2014.g`: A network of connections between ISPs, taken
during the last week of December 2014.[3]
[1] Duch, Jordi, and Alex Arenas. "Community detection in complex networks using extremal optimization." Physical review E 72.2 (2005): 027104.
[2] Gleiser, Pablo M., and Leon Danon. "Community structure in jazz." Advances in complex systems 6.04 (2003): 565-573.
[3] Edwards, Benjamin, et al. "Analyzing and Modeling Longitudinal Security Data: Promise and Pitfalls." Proceedings of the 31st Annual Computer Security Applications Conference. ACM, 2015.
## Infections over time
For each graph, simulate the SIR dynamics starting with a single node and
record the fraction of nodes in the network infected over time for three
different values of $\beta$. Because the SIR dynamics are stochastic, you will
have to simulate each infection multiple times. Plot the average of these runs
over time as well as the standard errors as error bars for each value of
$\beta$.
## Infection Properties
Next, you will investigate the behavior of infection spread for
various values of $\beta$. Select at least 20 different values of $\beta$.
Simulate the SIR dynamics on the network starting with a random node, measuring
the total proportion of the network that becomes infected. Be sure to simulate the
infection enough times that you can reasonably estimate mean and standard
deviation of each of these measures (at least 100). For each measure, make a
plot of the $\beta$ values and the measure for each of the three networks.
Include the mean and error bars for the standard deviation. Report on what
each measure tells us about the three different networks.
## Influential Spreaders
It might be important to know which nodes in the network are most capable of
spreading disease. This may be important to identify the best way to stop the
spread of infections or the best way to spread information in a network. We
will measure how influential a node is by measuring the average proportion of
the network which becomes infected when the infection starts with that node.
Using $\beta=0.2$ measure the mean infection size when started from each node
in each network. Once again, run at least 100 simulations for each node. In a
table, report the most and least influential nodes, and the average size of the
infection they create.
## Network Measures to Identify Influential Spreaders
Rather than run simulations it may be useful to use other network measures to
identify influential spreaders in the network. Using the software package of
your choice compute each of the following measures for each node in the
network:
1. **Degree**: Number of edges each node has.
2. **Average Shortest Path Length**: For each node $u$ compute the
shortest path to all other nodes $v$, and take the average of their lengths
3. **Betweenness Centrality**: The fraction of all shortest paths a
node in the network participates in.
For each network, make a scatter plot where each point is each the $(x,y)$ pair,
where $x$ is each of the above measures for a single node, and $y$ is the
average infection size when the infection starts at that node.
Which of the measures provides the best prediction of infection size? Why does
each perform well or poorly? Investigate other network measures and speculate
why they might be better at identifying influential spreaders. Be sure to cite
your sources.
# Refitting PyMC3 models with ArviZ (and xarray)
ArviZ is backend agnostic and therefore does not sample directly. In order to take advantage of algorithms that require refitting models several times, ArviZ uses `SamplingWrappers` to convert the API of the sampling backend to a common set of functions. Hence, functions like Leave Future Out Cross Validation can be used in ArviZ independently of the sampling backend used.
Below there is one example of `SamplingWrapper` usage for PyMC3.
Before starting, it is important to note that PyMC3 cannot modify the shapes of the input data using the same compiled model. Thus, each refitting will require a recompilation of the model.
```
import arviz as az
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import xarray as xr
```
For the example we will use a linear regression.
```
np.random.seed(26)
xdata = np.linspace(0, 50, 100)
b0, b1, sigma = -2, 1, 3
ydata = np.random.normal(loc=b1 * xdata + b0, scale=sigma)
plt.plot(xdata, ydata);
```
Now we will write the PyMC3 model, keeping in mind that 1) data must be modifiable (both `x` and `y`) and 2) the model must be recompiled in order to be refitted with the modified data. We therefore have to create a function that recompiles the model when it's called. Luckily for us, compilation in PyMC3 is generally quite fast.
```
def compile_linreg_model(xdata, ydata):
with pm.Model() as model:
x = pm.Data("x", xdata)
b0 = pm.Normal("b0", 0, 10)
b1 = pm.Normal("b1", 0, 10)
sigma_e = pm.HalfNormal("sigma_e", 10)
y = pm.Normal("y", b0 + b1 * x, sigma_e, observed=ydata)
return model
sample_kwargs = {"draws": 500, "tune": 500, "chains": 4}
with compile_linreg_model(xdata, ydata) as linreg_model:
trace = pm.sample(**sample_kwargs)
```
We have defined a dictionary `sample_kwargs` that will be passed to the `SamplingWrapper` in order to make sure that all refits use the same sampler parameters.
We follow the same pattern with `az.from_pymc3`.
Note, however, that `coords` are not set. This prevents errors from coordinate and value shapes becoming incompatible during refits; otherwise we'd have to handle subsetting of the coordinate values even though the refits are never used outside refitting functions such as `reloo`.
We also exclude the `model` because the `model`, like the `trace`, is different for every refit. This may seem counterintuitive or even plain wrong, but we have to remember that the `pm.Model` object contains information like the observed data.
```
dims = {"y": ["time"], "x": ["time"]}
idata_kwargs = {
"dims": dims,
"log_likelihood": False,
}
idata = az.from_pymc3(trace, model=linreg_model, **idata_kwargs)
idata
```
We are now missing the `log_likelihood` group due to setting `log_likelihood=False` in `idata_kwargs`. We are doing this to ease the job of the sampling wrapper. Instead of going out of our way to get PyMC3 to calculate the pointwise log likelihood values for each refit and for the excluded observation at every refit, we will compromise and manually write a function to calculate the pointwise log likelihood.
Even though it is not ideal to lose part of the straight out of the box capabilities of PyMC3, this should generally not be a problem. In fact, other PPLs such as Stan always require writing the pointwise log likelihood values manually (either within the Stan code or in Python). Moreover, computing the pointwise log likelihood in Python using xarray will be more efficient in computational terms than the automatic extraction from PyMC3.
It could even be written to be compatible with Dask. Thus it will work even in cases where the large number of observations makes it impossible to store pointwise log likelihood values (with shape `n_samples * n_observations`) in memory.
```
def calculate_log_lik(x, y, b0, b1, sigma_e):
mu = b0 + b1 * x
return stats.norm(mu, sigma_e).logpdf(y)
```
This function should work for any shape of the input arrays as long as their shapes are compatible and can broadcast. There is no need to loop over each draw in order to calculate the pointwise log likelihood using scalars.
Therefore, we can use `xr.apply_ufunc` to handle the broadcasting and preserve the dimension names:
```
log_lik = xr.apply_ufunc(
calculate_log_lik,
idata.constant_data["x"],
idata.observed_data["y"],
idata.posterior["b0"],
idata.posterior["b1"],
idata.posterior["sigma_e"],
)
idata.add_groups(log_likelihood=log_lik)
```
The first argument is the function, followed by as many positional arguments as needed by the function, 5 in our case. As this case does not have many different dimensions nor combinations of these, we do not need to use any extra kwargs passed to [`xr.apply_ufunc`](http://xarray.pydata.org/en/stable/generated/xarray.apply_ufunc.html#xarray.apply_ufunc).
Note that we are passing the arguments to `calculate_log_lik` as `xr.DataArray`s. Behind the scenes, `xr.apply_ufunc` broadcasts and aligns the dimensions of all the DataArrays involved and then passes NumPy arrays to `calculate_log_lik`. Everything works automagically.
Now let's see what happens if we were to pass the arrays directly to `calculate_log_lik` instead:
```
calculate_log_lik(
idata.constant_data["x"].values,
idata.observed_data["y"].values,
idata.posterior["b0"].values,
idata.posterior["b1"].values,
idata.posterior["sigma_e"].values
)
```
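The call above raises a broadcasting error (unless the sizes happen to line up): `x` and `y` have shape `(n_observations,)` while the posterior arrays have shape `(n_chains, n_draws)`, and those trailing dimensions are not compatible. The sketch below is a NumPy-only illustration of what `xr.apply_ufunc` arranges for us, with made-up sizes and a hand-rolled normal log-pdf standing in for `scipy.stats`:

```python
import numpy as np

def norm_logpdf(y, mu, sigma):
    # log density of N(mu, sigma^2), evaluated elementwise
    return -0.5 * np.log(2 * np.pi * sigma**2) - (y - mu) ** 2 / (2 * sigma**2)

x = np.linspace(0, 1, 50)    # shape (50,)   -> one value per observation
y = np.zeros(50)
b0 = np.zeros((4, 100))      # shape (4, 100) -> one value per (chain, draw)
b1 = np.ones((4, 100))
sigma = np.ones((4, 100))

# Direct call fails: (4, 100) and (50,) cannot broadcast together
try:
    norm_logpdf(y, b0 + b1 * x, sigma)
except ValueError as err:
    print("broadcast error:", err)

# Adding a trailing axis to the posterior draws makes the shapes compatible:
# (4, 100, 1) with (50,) broadcasts to (4, 100, 50)
mu = b0[..., None] + b1[..., None] * x
log_lik = norm_logpdf(y, mu, sigma[..., None])
print(log_lik.shape)  # (4, 100, 50)
```

`xr.apply_ufunc` does this alignment by dimension *name* rather than by position, which is why matching `dims` in the InferenceData matter.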
If you are still curious about the magic of xarray and `xr.apply_ufunc`, you can also try to modify the `dims` used to generate the InferenceData a couple cells before:
dims = {"y": ["time"], "x": ["time"]}
What happens to the result if you use a different name for the dimension of `x`?
```
idata
```
We will create a subclass of `az.SamplingWrapper`.
```
class PyMC3LinRegWrapper(az.SamplingWrapper):
def sample(self, modified_observed_data):
with self.model(*modified_observed_data) as linreg_model:
idata = pm.sample(
**self.sample_kwargs,
return_inferencedata=True,
idata_kwargs=self.idata_kwargs
)
return idata
def get_inference_data(self, idata):
return idata
def sel_observations(self, idx):
xdata = self.idata_orig.constant_data["x"]
ydata = self.idata_orig.observed_data["y"]
mask = np.isin(np.arange(len(xdata)), idx)
data__i = [ary[~mask] for ary in (xdata, ydata)]
data_ex = [ary[mask] for ary in (xdata, ydata)]
return data__i, data_ex
loo_orig = az.loo(idata, pointwise=True)
loo_orig
```
In this case, the Leave-One-Out Cross Validation (LOO-CV) approximation using Pareto Smoothed Importance Sampling (PSIS) works for all observations, so we will modify `loo_orig` in order to make `az.reloo` believe that PSIS failed for some observations. This will also serve as a validation of our wrapper, since PSIS LOO-CV has already returned the correct value.
```
loo_orig.pareto_k[[13, 42, 56, 73]] = np.array([0.8, 1.2, 2.6, 0.9])
```
We initialize our sampling wrapper. Let's stop and analyze each of the arguments.
We would generally use `model` to pass a model object of some kind that is already compiled and re-executable. However, as we saw before, we need to recompile the model every time, so we pass the model-generating function instead. Close enough.
We then use the `log_lik_fun` and `posterior_vars` arguments to tell the wrapper how to call `xr.apply_ufunc`. `log_lik_fun` is the function to be called, which is then called with the following positional arguments:
log_lik_fun(*data_ex, *[idata__i.posterior[var_name] for var_name in posterior_vars])
where `data_ex` is the second element returned by `sel_observations` and `idata__i` is the InferenceData object result of `get_inference_data` which contains the fit on the subsetted data. We have generated `data_ex` to be a tuple of DataArrays so it plays nicely with this call signature.
We use `idata_orig` as a starting point, and mostly as a source of observed and constant data which is then subsetted in `sel_observations`.
Finally, `sample_kwargs` and `idata_kwargs` are used to make sure all refits and corresponding InferenceData are generated with the same properties.
```
pymc3_wrapper = PyMC3LinRegWrapper(
model=compile_linreg_model,
log_lik_fun=calculate_log_lik,
posterior_vars=("b0", "b1", "sigma_e"),
idata_orig=idata,
sample_kwargs=sample_kwargs,
idata_kwargs=idata_kwargs,
)
```
And eventually, we can use this wrapper to call `az.reloo`, and compare the results with the PSIS LOO-CV results.
```
loo_relooed = az.reloo(pymc3_wrapper, loo_orig=loo_orig)
loo_relooed
loo_orig
```
# Using DS9 Regions to Include and Exclude Sources in HST Image Alignment with TWEAKREG
<div class="alert-danger">Note: The notebook in this repository 'Initializtion.ipynb' goes over many of the basic concepts such as the setup of the environment/package installation and should be read first if you are new to HST images, DrizzlePac, or Astroquery.</div>
<a id='top'></a>
## Introduction
DS9 is a popular [image visualization program](http://ds9.si.edu/site/Home.html) used in astronomy. It is now a standard package in the [AstroConda channel](https://astroconda.readthedocs.io/en/latest/). DS9 regions are interactive, user generated shapes which mark areas of interest. [Here is documentation](http://ds9.si.edu/doc/ref/region.html) about DS9 regions. For users with no experience with DS9, many resources exist online. One example is [this AstroBites page](https://astrobites.org/2011/03/09/how-to-use-sao-ds9-to-examine-astronomical-images/) which summarizes the most common DS9 features.
In this example we show how [TweakReg](https://drizzlepac.readthedocs.io/en/latest/tweakreg.html) can include and exclude sources identified by DS9 regions during image alignment. The use of "excluded" regions prevents spurious detections and ignores parts of the input images that might hinder the proper identification of sources for alignment. "Included" regions are particularly useful for images that have few good sources for alignment, since all sources not contained within these regions are ignored.
This notebook is based on a [prior example](http://www.stsci.edu/hst/HST_overview/drizzlepac/examples/example10) available from the [DrizzlePac webpage](http://www.stsci.edu/hst/HST_overview/drizzlepac). Please direct inquires about this notebook, DrizzlePac, or any other issues with HST images to the [HST help desk](https://stsci.service-now.com/hst).
```
# import all packages
import glob
import os
import shutil
from astropy.table import Table
from astropy.io import fits
from astroquery.mast import Observations
from drizzlepac import tweakreg
import matplotlib.pyplot as plt
from photutils import CircularAperture
import regions
from regions import read_ds9
# set plotting details for notebooks
%matplotlib inline
plt.rcParams['figure.figsize'] = (20,20)
```
## 1. Download the data
This example uses observations of 'MACSJ1149.5+2223-HFFPAR' ([proposal ID 13504](http://www.stsci.edu/cgi-bin/get-proposal-info?id=13504&observatory=HST), files `jcdua3f4q_flc.fits` and `jcdua3f8q_flc.fits`). We provide code below to retrieve the ACS/WFC calibrated FLC files.
Data are downloaded using the `astroquery` API to access the [MAST](http://archive.stsci.edu/) archive. The `astroquery.mast` [documentation](http://astroquery.readthedocs.io/en/latest/mast/mast.html) has more examples for how to find and download data from MAST.
It is unusual to download individual files instead of all the related files in an association, but it can be done. First, we need to find the IDs for these two specific FLC files.
**Note:** `astroquery` uses both `obs_id` and `obsID`. Be careful not to confuse them.
```
# Retrieve the observation information.
obs_table = Observations.query_criteria(obs_id=['JCDUA3010','JCDUA3020'])
# Find obsID for specific FLC images.
product_list_by_association = Observations.get_product_list(obs_table['obsid'])
product_list_by_association['obsID', 'productFilename'][18:28]
```
Based on this table, the `obsID` values for `jcdua3f4q_flc.fits` and `jcdua3f8q_flc.fits` are 2003170978 and 2003170979. We use this information to download these two FITS files.
```
# Download jcdua3f4q_flc.fits and jcdua3f8q_flc.fits from MAST.
dataProductsByID = Observations.get_product_list(['2003170978','2003170979'])
dataProductsByID = Observations.filter_products(dataProductsByID,
productSubGroupDescription='FLC')
download_table = Observations.download_products(dataProductsByID)
```
**If the cell above produces an error, try running it again.** Connection issues can cause errors on the first try.
```
# Move the files from the mastDownload directory to the current working directory.
fits_files = glob.glob('mastDownload/HST/*/jcdua3f[48]q_flc.fits')
for file in fits_files:
os.rename(file, os.path.basename(file))
# Delete the mastDownload directory and all subdirectories it contains.
shutil.rmtree('mastDownload')
```
## 2. Use TweakReg to create source catalogs
Run `TweakReg` on one of the FLC files downloaded into this directory, `jcdua3f4q_flc.fits`. By limiting the input list to one file, `TweakReg` makes the source catalog for this image but performs no matching or alignment. Using a slightly larger `conv_width` of 4.5 pixels (versus the default of 3.5 for ACS/WFC) means `TweakReg` will be able to utilize small compact objects for alignment.
**Note**: This notebook is only concerned with the source detection capabilities of `TweakReg`, and so to prevent any changes being saved to the images, the `updatehdr` parameter is set to **False**.
```
tweakreg.TweakReg('jcdua3f4q_flc.fits',
imagefindcfg=dict(threshold=50,conv_width=4.5),
updatehdr=False)
```
This creates four output files:
- *jcdua3f4q_flc_sci1_xy_catalog.coo* contains the X and Y positions, flux, and IDs for all detected sources in the SCI1 extension
- *jcdua3f4q_flc_sci2_xy_catalog.coo* contains the X and Y positions, flux, and IDs for all detected sources in the SCI2 extension
- *jcdua3f4q_flc_sky_catalog.coo* has the RA and DEC of all the sources from both extensions
- *tweakreg.log* is the log file output from `TweakReg`
Read in the SCI1 catalog file.
```
# Read in the SCI1 catalog file
coords_tab = Table.read('jcdua3f4q_flc_sci1_xy_catalog.coo',
format='ascii.no_header', names=['X','Y','Flux', 'ID'])
# Output the first five rows to display the table format
coords_tab[0:5]
```
Now read in the FITS image. This step will be used for demonstrative plots and is not necessary to run `TweakReg`.
```
hdulist = fits.open('jcdua3f4q_flc.fits')
```
Then use `photutils` to generate apertures in order to display the source catalog positions detected by `TweakReg` on the FITS image. A fair number of spurious detections are found, but these are generally cosmic rays, which fall at random positions across the detector and will therefore not make it into the matched catalogs (frame to frame).
**Note**: This step may take a few seconds to run due to the large number of apertures plotted.
```
# Make the apertures with photutils.
# One pixel offset corrects for differences between (0,0) and (1,1) origin systems.
apertures = CircularAperture([coords_tab['X']-1.,
coords_tab['Y']-1.],
r=10.)
# Plot a region of the image with pyplot
plt.imshow(hdulist[1].data, cmap='Greys', origin='lower', vmin=0, vmax=400)
plt.axis([3200,4000,250,1000])
# Overplot the apertures onto the image
apertures.plot(color='blue', lw=1)
```
## 3. DS9 Regions in TweakReg
`TweakReg` allows the following DS9 regions: circle, ellipse, polygon, and box. All other regions are ignored. All region files must comply with the DS9 region file format and all regions must be provided in *image* coordinates.
This demonstration uses one of each type of shape possible. In the region file, they look like this (in image coordinates):
```ds9
polygon(3702,845,3819,890,3804,797,3734,720,3671,745,3592,735,3602,770,3660,782)
ellipse(3512,809,26,67,0)
circle(3613,396,75)
box(3541,393,113,96,0)
```
Next the DS9 regions are read in and parsed with the [astropy regions package](https://astropy-regions.readthedocs.io/en/latest/getting_started.html) and then added to the plot to show how they look on the image.
```
# Read in and parse the DS9 region file with the regions package
ds9_regions_file = 'jcdua3f4q_sci1_exclude.reg'
regions = read_ds9(ds9_regions_file, errors='ignore')
# Plot previous figure with DS9 region shapes
fig, ax = plt.subplots()
ax.imshow(hdulist[1].data,
cmap='Greys',
origin='lower',
vmin=0, vmax=400)
ax.axis([3200,4000,250,1000])
apertures.plot(color='blue', lw=1.)
for regs in range(4):
regions[regs].plot(ax=ax, edgecolor='red', lw=2, fill=False)
plt.show()
```
You can see the polygon outlining a galaxy, including the extended tidal stream. The other shapes are placed randomly as a demonstration.
This figure will be remade several times with different `TweakReg` outputs, so a function has been defined below to automatically read in the TweakReg source catalog and reproduce this figure.
```
# Define a function to remake this figure after subsequent TweakReg runs.
def read_tweak_cat_and_plot():
'''
This function reads in the TweakReg coordinate catalog for
SCI1 of image JCDUA3F4Q, creates apertures for all the sources
detected, then plots the apertures on the FITS image along
with the DS9 region files defined previously in the notebook.'''
# Read in the SCI1 catalog file with the exclusions
coords_tab = Table.read('jcdua3f4q_flc_sci1_xy_catalog.coo',
format='ascii.no_header',
names=['X','Y','Flux', 'ID'])
# Define apertures for TweakReg identified sources
apertures = CircularAperture([coords_tab['X']-1.,
coords_tab['Y']-1.],
r=10.)
# Plot
fig, ax = plt.subplots()
ax.imshow(hdulist[1].data, cmap='Greys',
origin='lower', vmin=0, vmax=400)
ax.axis([3200,4000,250,1000])
apertures.plot(color='blue', lw=1.)
for regs in range(4):
regions[regs].plot(ax=ax, edgecolor='red', lw=2, fill=False)
plt.show()
```
## 4. Exclusion regions
`TweakReg` identifies the DS9 region files from a plain text file provided to the `exclusions` parameter. This text file must give the filename of each image and the names of the DS9 region files that should be applied to the SCI1 and SCI2 extensions, respectively. The format is important, and for our example would look like:
```
jcdua3f4q_flc.fits jcdua3f4q_sci1_exclude.reg None
jcdua3f8q_flc.fits None None
```
'None' serves as an empty placeholder. Since the exclusions are applied only to SCI1, the syntax can be simplified to the following.
```
jcdua3f4q_flc.fits jcdua3f4q_sci1_exclude.reg
jcdua3f8q_flc.fits
```
**NOTE**: If an image needs DS9 regions applied to the SCI2 extension only, then 'None' **must** be written after the filename and before the SCI2 region.
The git repo for this notebook contains a file `exclusions.txt` to use as input to `TweakReg`.
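As a side note, such a text file can also be generated programmatically. The snippet below is a hypothetical helper (not part of DrizzlePac), written to a differently named file so the repository's `exclusions.txt` is left untouched:

```python
# Hypothetical helper: build a TweakReg exclusions file from
# (image, sci1_region, sci2_region) triples, where None marks an empty slot.
def write_exclusions(path, entries):
    lines = []
    for image, sci1, sci2 in entries:
        cols = [image]
        if sci2 is not None:
            # a SCI2-only region still needs an explicit "None" in the SCI1 slot
            cols += [sci1 if sci1 is not None else "None", sci2]
        elif sci1 is not None:
            # trailing empty slots can simply be dropped
            cols.append(sci1)
        lines.append("  ".join(cols))
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

write_exclusions("exclusions_generated.txt", [
    ("jcdua3f4q_flc.fits", "jcdua3f4q_sci1_exclude.reg", None),
    ("jcdua3f8q_flc.fits", None, None),
])
print(open("exclusions_generated.txt").read())
```

The output reproduces the simplified two-line format shown above.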
To exclude the sources within a DS9 region shape, a minus sign (-) is placed before the shape. This time, all four shapes will be excluded from source detection. The corresponding DS9 region file `jcdua3f4q_sci1_exclude.reg` therefore has the syntax:
```ds9
# Region file format: DS9 version 4.1
global color=yellow dashlist=8 3 width=2 font="helvetica 10 normal roman"
select=1 highlite=1 dash=0 fixed=0 edit=1 move=1 delete=1 include=1 source=1
image
-polygon(3702,845,3819,890,3804,797,3734,720,3671,745,3592,735,3602,770,3660,782)
-ellipse(3512,809,26,67,0)
-circle(3613,396,75)
-box(3541,393,113,96,0)
```
Now `TweakReg` is run again, this time with the DS9 regions provided by the `exclusions` parameter.
```
# tweakreg run with DS9 regions excluded from source detection
tweakreg.TweakReg('jcdua3f4q_flc.fits',
imagefindcfg=dict(threshold=50,conv_width=4.5),
exclusions='exclusions.txt',
updatehdr=False)
read_tweak_cat_and_plot()
```
As expected, sources within the defined DS9 exclusion regions are no longer in the `TweakReg` source catalog.
## 5. Inclusion Regions
Now we will look at inclusions, where only sources inside the DS9 regions are detected by `TweakReg`. The `exclusions` parameter name doesn't change, but in this example we now give it `inclusions.txt` instead.
```
jcdua3f4q_flc.fits jcdua3f4q_sci1_include.reg
jcdua3f8q_flc.fits
```
As described in the last section, the file syntax indicates that `jcdua3f4q_sci1_include.reg` is applied to the SCI1 extension of `jcdua3f4q_flc.fits`, and no DS9 regions are given for the SCI2 extension or for the second image `jcdua3f8q_flc.fits`.
Looking at `jcdua3f4q_sci1_include.reg`, it shows the same shapes as before, but the minus signs (-) at the beginning of the lines are removed.
```ds9
# Region file format: DS9 version 4.1
global color=yellow dashlist=8 3 width=2 font="helvetica 10 normal roman"
select=1 highlite=1 dash=0 fixed=0 edit=1 move=1 delete=1 include=1 source=1
image
polygon(3702,845,3819,890,3804,797,3734,720,3671,745,3592,735,3602,770,3660,782)
ellipse(3512,809,26,67,0)
circle(3613,396,75)
box(3541,393,113,96,0)
```
There is no symbol associated with inclusion regions. If there is no symbol before the shape, then it is treated as an inclusion region. If there is a minus sign (-), then it is treated as an exclusion region.
```
# tweakreg run with source detection only inside the DS9 regions
tweakreg.TweakReg('jcdua3f4q_flc.fits',
imagefindcfg=dict(threshold=50,conv_width=4.5),
exclusions='inclusions.txt',
updatehdr=False)
read_tweak_cat_and_plot()
```
This shows that only sources in the DS9 regions are included in the `TweakReg` source catalog. Note that only 63 objects were found by `TweakReg` for SCI1, compared to 10294 found in the original catalog. The number of objects for SCI2 is unchanged.
## 6. Combining Exclusion and Inclusion Regions
The inclusion and exclusion regions can be used at the same time. For this example, the `inclusions_no_box.txt` file is fed to the exclusions parameter in `TweakReg`.
```
jcdua3f4q_flc.fits jcdua3f4q_sci1_include_no_box.reg
jcdua3f8q_flc.fits
```
`jcdua3f4q_sci1_include_no_box.reg` has only a minus sign (-) on the last line.
```ds9
# Region file format: DS9 version 4.1
global color=yellow dashlist=8 3 width=2 font="helvetica 10 normal roman"
select=1 highlite=1 dash=0 fixed=0 edit=1 move=1 delete=1 include=1 source=1
image
polygon(3702,845,3819,890,3804,797,3734,720,3671,745,3592,735,3602,770,3660,782)
ellipse(3512,809,26,67,0)
circle(3613,396,75)
-box(3541,393,113,96,0)
```
This means that all the shapes will be treated as inclusion regions except for the box, which will be excluded from the source detection.
```
# tweakreg run with a mix of included/excluded DS9 regions
tweakreg.TweakReg('jcdua3f4q_flc.fits',
imagefindcfg=dict(threshold=50,conv_width=4.5),
exclusions='inclusions_no_box.txt',
updatehdr=False)
read_tweak_cat_and_plot()
```
This shows the sources detected within the inclusion regions except for those excluded from the box.
**NOTE**: The order of the DS9 regions is important!
`TweakReg` applies the DS9 region requirements in the order in which they appear in the DS9 region file. To demonstrate this, the excluded box shape is moved to the beginning of the region list, so that it is processed first instead of last. This is done by inputting `inclusions_no_box_first.txt`, which specifies the region file `jcdua3f4q_sci1_include_no_box_first.reg`:
```ds9
# Region file format: DS9 version 4.1
global color=yellow dashlist=8 3 width=2 font="helvetica 10 normal roman"
select=1 highlite=1 dash=0 fixed=0 edit=1 move=1 delete=1 include=1 source=1
image
-box(3541,393,113,96,0)
polygon(3702,845,3819,890,3804,797,3734,720,3671,745,3592,735,3602,770,3660,782)
ellipse(3512,809,26,67,0)
circle(3613,396,75)
```
```
# tweakreg run with excluded box first to show order of operations
tweakreg.TweakReg('jcdua3f4q_flc.fits',
imagefindcfg=dict(threshold=50,conv_width=4.5),
exclusions='inclusions_no_box_first.txt',
updatehdr=False)
read_tweak_cat_and_plot()
```
Now the circle is given precedence because it is the last region shape processed, and therefore the section of overlap with the box is not removed as it was in the previous figure.
Due to this behavior, **remember to be careful with the order in the DS9 region file when combining inclusion and exclusion requirements.**
# About this Notebook
Author: S. Hoffmann, STScI ACS Team
Updated: December 14, 2018
[Top of Page](#top)
```
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)
# Toggle cell visibility
from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide()
} else {
$('div.input').show()
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle cell visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)
# Hide the code completely
# from IPython.display import HTML
# tag = HTML('''<style>
# div.input {
# display:none;
# }
# </style>''')
# display(tag)
```
## Internal Stability
The concept of stability captures the behavior of the state evolution when the system is "pushed" out of an equilibrium state: stability describes whether the state evolution that follows the perturbation diverges from the equilibrium point or not.
### Definition
Given a time-invariant dynamical system described by a state vector $x(t)\in \mathbb{R}^n$, an equilibrium point $x_e$, an initial state $x_0$ and an initial time $t_0$, if
$$
\forall \, \epsilon \in \mathbb{R}, \, \epsilon > 0 \quad \exists \delta \in \mathbb{R}, \, \delta > 0 : \quad ||x_0-x_e|| < \delta \, \Rightarrow \, ||x(t)-x_e|| < \epsilon \quad \forall t \ge t_0
$$
holds, it can be interpreted as follows: if a sufficiently small initial perturbation $\delta$ of the equilibrium point exists such that the state evolution $x(t)$ starting from the perturbed point does not stray too far (more than $\epsilon$) from the equilibrium itself, then the equilibrium point is stable.
If, in addition, $\lim_{t\to\infty}||x(t)-x_e|| = 0$ holds, which can be interpreted as the state evolution returning to the equilibrium point, then the equilibrium is said to be asymptotically stable.
In the case of linear time-invariant systems:
\begin{cases}
\dot{x} = Ax +Bu \\
y = Cx + Du,
\end{cases}
it can be proven that the stability of one equilibrium point implies the stability of all equilibrium points, so we may speak of the stability of the system even though, in general, stability is a property tied to an equilibrium point. This peculiarity of linear systems follows from the fact that the evolution of this class of systems is strictly linked to the eigenvalues of the dynamics matrix $A$, which are invariant with respect to rotations and translations of the coordinates, to the initial conditions, and to time.
Recall what was explained in the example on modal analysis:
> The solution of the differential equation (in closed form), from the initial time $t_0$ with initial conditions $x(t_0)$, is
$$
x(t) = e^{A(t-t_0)}x(t_0).
$$ The matrix $e^{A(t-t_0)}$ consists of linear combinations of functions of time $t$, each of the type $$e^{\lambda t},$$ where the $\lambda$'s are the eigenvalues of the matrix $A$; these functions are the modes of the system.
Therefore:
- a linear dynamical system is stable if and only if none of its modes diverges,
- a linear dynamical system is asymptotically stable if and only if all of its modes converge,
- a linear dynamical system is unstable if it has at least one divergent mode.
In terms of the eigenvalues of the dynamics matrix, these cases occur when, respectively:
- all eigenvalues of the matrix $A$ belong to the <u>closed</u> left half of the complex plane (i.e., their real part is negative or zero) and every eigenvalue with zero real part has algebraic multiplicity equal to its geometric multiplicity or, equivalently, scalar blocks in the Jordan form;
- all eigenvalues belong to the <u>open</u> left half of the complex plane, i.e., their real parts are strictly negative;
- at least one eigenvalue has a positive real part, or there are eigenvalues with zero real part and non-scalar Jordan blocks.
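These eigenvalue criteria can be checked numerically. The sketch below classifies a system from the eigenvalues of $A$; as a simplifying assumption it treats any eigenvalue with zero real part as simple, so the Jordan-block (multiplicity) condition is not tested:

```python
import numpy as np

def classify_stability(A, tol=1e-9):
    # Assumption: eigenvalues with zero real part are simple (non-defective),
    # so the algebraic/geometric multiplicity check is skipped.
    re = np.real(np.linalg.eigvals(A))
    if np.all(re < -tol):
        return "asymptotically stable"
    if np.all(re <= tol):
        return "stable (marginally)"
    return "unstable"

A = np.array([[0, 1], [-2/5, -1/5]])  # the dynamics matrix used in this notebook
print(classify_stability(A))           # both eigenvalues have real part -0.1
```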
This interactive example presents an editable dynamics matrix $A$ and shows the free response of the system together with the corresponding eigenvalues.
### How to use this interactive example?
- Try changing the eigenvalues and the initial condition $x_0$ and observe how the response changes.
```
%matplotlib inline
#%matplotlib notebook
import control as control
import numpy
import sympy as sym
from IPython.display import display, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt
#print a matrix latex-like
def bmatrix(a):
"""Returns a LaTeX bmatrix - by Damir Arbula (ICCT project)
:a: numpy array
:returns: LaTeX bmatrix as a string
"""
if len(a.shape) > 2:
raise ValueError('bmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{bmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{bmatrix}']
return '\n'.join(rv)
# Display formatted matrix:
def vmatrix(a):
if len(a.shape) > 2:
raise ValueError('vmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{vmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{vmatrix}']
return '\n'.join(rv)
#matrixWidget is a matrix looking widget built with a VBox of HBox(es) that returns a numPy array as value !
class matrixWidget(widgets.VBox):
def updateM(self,change):
for irow in range(0,self.n):
for icol in range(0,self.m):
self.M_[irow,icol] = self.children[irow].children[icol].value
#print(self.M_[irow,icol])
self.value = self.M_
def dummychangecallback(self,change):
pass
def __init__(self,n,m):
self.n = n
self.m = m
self.M_ = numpy.matrix(numpy.zeros((self.n,self.m)))
self.value = self.M_
widgets.VBox.__init__(self,
children = [
widgets.HBox(children =
[widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)]
)
for j in range(n)
])
#fill in widgets and tell interact to call updateM each time a children changes value
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
self.children[irow].children[icol].observe(self.updateM, names='value')
#value = Unicode('example@example.com', help="The email value.").tag(sync=True)
self.observe(self.updateM, names='value', type= 'All')
def setM(self, newM):
#disable callbacks, change values, and reenable
self.unobserve(self.updateM, names='value', type= 'All')
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].unobserve(self.updateM, names='value')
self.M_ = newM
self.value = self.M_
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].observe(self.updateM, names='value')
self.observe(self.updateM, names='value', type= 'All')
#self.children[irow].children[icol].observe(self.updateM, names='value')
#overlaod class for state space systems that DO NOT remove "useless" states (what "professor" of automatic control would do this?)
class sss(control.StateSpace):
def __init__(self,*args):
#call base class init constructor
control.StateSpace.__init__(self,*args)
#disable function below in base class
def _remove_useless_states(self):
pass
# Preparatory cell
A = numpy.matrix([[0,1],[-2/5,-1/5]])
X0 = numpy.matrix('5; 3')
Aw = matrixWidget(2,2)
Aw.setM(A)
X0w = matrixWidget(2,1)
X0w.setM(X0)
# Misc
#create dummy widget
DW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))
#create button widget
START = widgets.Button(
description='Test',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Test',
icon='check'
)
def on_start_button_clicked(b):
#This is a workaround to have intreactive_output call the callback:
# force the value of the dummy widget to change
if DW.value> 0 :
DW.value = -1
else:
DW.value = 1
pass
START.on_click(on_start_button_clicked)
# Main cell
def main_callback(A, X0, DW):
sols = numpy.linalg.eig(A)
sys = sss(A,[[1],[0]],[0,1],0)
pole = control.pole(sys)
if numpy.real(pole[0]) != 0:
p1r = abs(numpy.real(pole[0]))
else:
p1r = 1
if numpy.real(pole[1]) != 0:
p2r = abs(numpy.real(pole[1]))
else:
p2r = 1
if numpy.imag(pole[0]) != 0:
p1i = abs(numpy.imag(pole[0]))
else:
p1i = 1
if numpy.imag(pole[1]) != 0:
p2i = abs(numpy.imag(pole[1]))
else:
p2i = 1
print('The eigenvalues of the matrix A are:',round(sols[0][0],4),'and',round(sols[0][1],4))
#T = numpy.linspace(0, 60, 1000)
T, yout, xout = control.initial_response(sys,X0=X0,return_x=True)
fig = plt.figure("Eigenvalues of A", figsize=(16,16))
ax = fig.add_subplot(311,title='Poles (Re vs Im)')
#plt.axis(True)
# Move left y-axis and bottom x-axis to centre, passing through (0,0)
# Eliminate upper and right axes
ax.spines['left'].set_position(('data',0.0))
ax.spines['bottom'].set_position(('data',0.0))
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.set_xlim(-max([p1r+p1r/3,p2r+p2r/3]),
max([p1r+p1r/3,p2r+p2r/3]))
ax.set_ylim(-max([p1i+p1i/3,p2i+p2i/3]),
max([p1i+p1i/3,p2i+p2i/3]))
plt.plot([numpy.real(pole[0]),numpy.real(pole[1])],[numpy.imag(pole[0]),numpy.imag(pole[1])],'o')
plt.grid()
ax1 = fig.add_subplot(312,title='Free response')
plt.plot(T,xout[0])
plt.grid()
ax1.set_xlabel('time [s]')
ax1.set_ylabel('$x_1$')
ax1.axvline(x=0,color='black',linewidth='0.8')
ax1.axhline(y=0,color='black',linewidth='0.8')
ax2 = fig.add_subplot(313)
plt.plot(T,xout[1])
plt.grid()
ax2.set_xlabel('time [s]')
ax2.set_ylabel('$x_2$')
ax2.axvline(x=0,color='black',linewidth='0.8')
ax2.axhline(y=0,color='black',linewidth='0.8')
#plt.show()
alltogether = widgets.HBox([widgets.VBox([widgets.Label('$A$:',border=3),
Aw]),
widgets.Label(' ',border=3),
widgets.VBox([widgets.Label('$X_0$:',border=3),
X0w]),
START])
out = widgets.interactive_output(main_callback, {'A':Aw, 'X0':X0w, 'DW':DW})
out.layout.height = '1000px'
display(out, alltogether)
```
```
from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide()
} else {
$('div.input').show()
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle cell visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)
%matplotlib notebook
import numpy as np
import math
import matplotlib.pyplot as plt
from scipy import signal
import ipywidgets as widgets
import control as c
import sympy as sym
from IPython.display import Latex, display, Markdown # For displaying Markdown and LaTeX code
from fractions import Fraction
import matplotlib.patches as patches
```
## First-Order Systems Without Zeros
### Introduction
First-order systems without zeros are characterized by the following transfer function:
\begin{equation}
G(s)=\frac{k}{s+k}.
\end{equation}
The value of $k$ is important because it determines the following parameters:
- $1/k$ is the *time constant* of the response, i.e., the time needed for the step response to reach $\approx$ 63% of its final value.
- $t_r$ is the *rise time*, i.e., the time needed for the system response to go from 10% to 90% of its steady-state value.
- $t_s$ is the *settling time*, i.e., the instant after which the system response stays within the error band (e.g., 2%, as set in the example below) without leaving it again.
The step response of these systems is given by:
\begin{equation}
c(t)=1-e^{-kt},
\end{equation}
where the forced response equals $1$ and the free response equals $-e^{-kt}$.
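These characteristic times follow in closed form from the step response above; a quick numerical check, using an assumed example value $k = 2$:

```python
import numpy as np

k = 2.0                                  # example pole location (assumption)
tau = 1.0 / k                            # time constant: c(tau) ~ 63% of final value
t_r = (np.log(0.9) - np.log(0.1)) / k    # 10%-90% rise time = ln(9)/k
t_s = -np.log(0.02) / k                  # 2% settling time = ln(50)/k

# step response c(t) = 1 - exp(-k t)
print(f"tau = {tau:.3f} s, t_r = {t_r:.3f} s, t_s = {t_s:.3f} s")
print(f"c(tau) = {1 - np.exp(-k * tau):.3f}")  # 0.632
```

Increasing $k$ shrinks all three times by the same factor, which is exactly what the slider below lets you observe.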
---
### How to use this notebook?
Move the slider to set the value of $k$ in the transfer function of the first-order system $G(s)=\frac{k}{s+k}$ and observe the step response of the resulting system.
```
# set up plot
fig, ax = plt.subplots(figsize=[9.8,4],num='First-order system')
ax.set_ylim([-1, 2])
ax.set_xlim([0, 5])
ax.grid(True)
ax.set_title('Response')
ax.set_xlabel('$t$ [s]')
ax.set_ylabel('Input, output')
xaxis = ax.axhline(y=0,color='k',lw=1)
response, = ax.plot([], [])
slope, = ax.plot([], [])
x1a, = ax.plot([], [])
y1a, = ax.plot([], [])
tr11, = ax.plot([], [])
trv1, = ax.plot([], [])
trv2, = ax.plot([], [])
trh1, = ax.plot([], [])
trh2, = ax.plot([], [])
ts11, = ax.plot([], [])
ts1, = ax.plot([], [])
ts2, = ax.plot([], [])
texttr=ax.text(0,0,'')
textts=ax.text(0,0,'')
ax.step([0,5],[0,1],color='C0',label='input')
# generate x values
t = np.linspace(0, 2 * np.pi, 10000)
def response_func(t, k):
""""Return response function"""
return 1-np.exp(-k*t)
@widgets.interact(k=(1, 5, 1))
def update(k=1):
"""Remove old lines from plot and plot new one"""
global response,slope,x1a,y1a,tr11,trv1,trv2,trh1,trh2,ts11,ts1,ts2,texttr,textts
ax.lines.remove(response)
ax.lines.remove(slope)
ax.lines.remove(x1a)
ax.lines.remove(y1a)
ax.lines.remove(tr11)
ax.lines.remove(trv1)
ax.lines.remove(trv2)
ax.lines.remove(trh1)
ax.lines.remove(trh2)
ax.lines.remove(ts11)
ax.lines.remove(ts1)
ax.lines.remove(ts2)
texttr.remove()
textts.remove()
response, = ax.plot(t, response_func(t,k), color='C1',lw=2)
response.set_label('output')
slope, = ax.plot([0,1/k], [0,1], color='C2',lw=2)
slope.set_label('initial slope')
x1a, = ax.plot([1/k,1/k],[0,1-np.exp(-1)],'--',color='k',lw=.8)
y1a, = ax.plot([0,1/k],[1-np.exp(-1),1-np.exp(-1)],'--',color='k',lw=.8)
# rise time
tr11, = ax.plot([-np.log(0.9)/k,-np.log(0.1)/k],[-0.5,-0.5],color='k',lw=.8)
trv1, = ax.plot([-np.log(0.9)/k,-np.log(0.9)/k],[-0.5,0.1],'--',color='k',lw=.8)
trv2, = ax.plot([-np.log(0.1)/k,-np.log(0.1)/k],[-0.5,0.9],'--',color='k',lw=.8)
trh1, = ax.plot([0,-np.log(0.9)/k],[0.1,0.1],'--',color='k',lw=.8)
trh2, = ax.plot([0,-np.log(0.1)/k],[0.9,0.9],'--',color='k',lw=.8)
# settling time
ts11, = ax.plot([0,-np.log(0.02)/k],[-0.7,-0.7],color='k',lw=.8)
ts1, = ax.plot([0,0],[-0.7,0],'--',color='k',lw=.8)
ts2, = ax.plot([-np.log(0.02)/k,-np.log(0.02)/k],[-0.7,0.98],'--',color='k',lw=.8)
ax.legend()
texttr=ax.text((-np.log(0.1)/k-(-np.log(0.9)/k))/2,-0.45, '$t_r$',fontsize=13)
textts=ax.text((-np.log(0.02)/k)/2-0.1,-0.65, '$t_s$',fontsize=13)
plt.xticks([0,1/k,2,4], [0,'${1}/{%s}$'%k,2,4],fontsize=8)
plt.yticks([0.1,0.5,0.63,0.9,1,1.5,2], [0.1,0.5,0.63,0.9,1,1.5,2],fontsize=8)
num1=[k]
den1=[1,k]
display(Markdown('The transfer function of the system $G(s)$ is:'))
tf_sys1=c.TransferFunction(num1,den1)
s=sym.Symbol('s')
eq=(k/(s+k))
display(eq)
```
## Second-order systems
### Introduction
Unlike the first-order systems presented above, where the parameter $k$ only affects the speed of the response, changing the analogous parameters of second-order systems can affect the actual shape of the response. These systems admit the following four responses:
- an *overdamped* response,
- an *underdamped* response,
- an *undamped* response, and
- a *critically damped* response.
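These four cases can be read off the damping ratio $\zeta = a/(2\sqrt{b})$ of $G(s)=\frac{b}{s^2+as+b}$; a hedged sketch (the helper is ours, not part of the notebook):

```python
import math

# Hypothetical helper (ours, not the notebook's): classify the damping of
# G(s) = b/(s^2 + a*s + b) via the damping ratio zeta = a / (2*sqrt(b)).
def classify_damping(a, b):
    zeta = a / (2.0 * math.sqrt(b))
    if zeta == 0.0:
        return "undamped"            # purely imaginary poles
    if math.isclose(zeta, 1.0):
        return "critically damped"   # repeated real negative pole
    if zeta < 1.0:
        return "underdamped"         # complex conjugate poles
    return "overdamped"              # two distinct real negative poles

print(classify_damping(7, 9))  # zeta = 7/6 > 1 -> overdamped
```

With the slider defaults $a=7$, $b=9$ the system is overdamped; $a=6$, $b=9$ gives $\zeta=1$, i.e. critical damping.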
### How to use this notebook?
Move the sliders to set the values of $a$ and $b$ in the transfer function of the second-order system of the form $G(s)=\frac{b}{s^2+as+b}$ and observe the pole-zero map and the step response of the resulting system.
```
# set up plot
fig1, ax1 = plt.subplots(1,2,figsize=[9.8,4],num='Second-order system')
ax1[0].set_ylim([-3.5, 3])
ax1[1].set_ylim([0, 2.5])
# ax1.set_xlim([0, 5])
ax1[0].grid(True)
ax1[1].grid(True)
ax1[0].axhline(y=0,color='k',lw=.8)
ax1[1].axhline(y=0,color='k',lw=.8)
ax1[0].axvline(x=0,color='k',lw=.8)
ax1[1].axvline(x=0,color='k',lw=.8)
ax1[0].set_xlabel('Re')
ax1[0].set_ylabel('Im')
ax1[1].set_xlabel('$t$ [s]')
ax1[1].set_ylabel('Input, output')
ax1[0].set_title('Pole-zero map')
ax1[1].set_title('Response')
t = np.linspace(0, 20, 10000)
textGs = ax1[0].text(0,0,'')
ax1[1].step([0,20],[0,1],color='C0',label='input')
plotzero, = ax1[0].plot([], [])
response2, = ax1[1].plot([], [])
def response_func2(t, a, b):
num_sys=np.array([b])
den_sys=np.array([1,a,b])
tf_sys=c.TransferFunction(num_sys,den_sys)
poles_sys,zeros_sys=c.pzmap(tf_sys, Plot=False)
T, yout = c.step_response(tf_sys,t)
return T, yout, poles_sys, tf_sys
@widgets.interact(a=(0, 10, 1),b=(1,10,1))
def update(a=7,b=9):
""" Update plots """
global response2, plotzero, textGs
ax1[0].lines.remove(plotzero)
ax1[1].lines.remove(response2)
# textGs.remove()
T, yout, poles_sys, tf_sys = response_func2(t, a, b)
plotzero, = ax1[0].plot(np.real(poles_sys), np.imag(poles_sys), 'xg', markersize=10, label = 'Poles')
# textGs = ax1[0].text(-7,1,tf_sys)
response2, = ax1[1].plot(T,yout,color='C1',label='output')
s=sym.Symbol('s')
eq=b/(s**2+a*s+b)
coeff = [1,a,b]
rootsdenom=np.roots(coeff)
eq2=b/((s-rootsdenom[0])*(s-rootsdenom[1]))
display(Markdown('The transfer function of the system $G(s)$ is:'))
display(eq),display(Markdown('or')),display(eq2)
if np.imag(poles_sys)[0] == 0 and np.imag(poles_sys)[1] == 0 and np.real(poles_sys)[0] < 0 and np.real(poles_sys)[1] < 0 and np.real(poles_sys)[0]!=np.real(poles_sys)[1]:
display(Markdown('The system is **overdamped** since both poles are real and negative.'))
elif math.isclose(0, np.imag(poles_sys)[0], abs_tol=10**-6) and math.isclose(0, np.imag(poles_sys)[1], abs_tol=10**-6) and np.real(poles_sys)[1] < 0 and np.real(poles_sys)[0]==np.real(poles_sys)[1]:
display(Markdown('The system is **critically damped** since it has a pole of algebraic multiplicity 2 that is real and negative.'))
elif np.real(poles_sys)[0] == 0 and np.real(poles_sys)[1] == 0:
display(Markdown('The system is **undamped** since the poles are purely imaginary.'))
elif np.imag(poles_sys)[0] != 0 and np.imag(poles_sys)[1] != 0 and np.real(poles_sys)[0] != 0 and np.real(poles_sys)[1] != 0:
display(Markdown('The system is **underdamped**.'))
ax1[0].legend()
ax1[1].legend()
```
```
import sys
import io
from datetime import timedelta, datetime
import pandas as pd
import numpy as np
import requests
from lxml import html
from openpyxl import load_workbook
position_fii, position_dii = 0, 0
workbookPath = 'C:/Users/Saurav/Desktop/Final/test.xlsx'
def dii_and_fii_data(date):
"""DIIs and FIIs Data Single Day"""
# The given url requires date to be in the format ---- ddmmyyyy
url = 'https://www.nseindia.com/content/nsccl/fao_participant_oi_' + \
date.replace('-', '') + '.csv'
hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
'Accept-Encoding': 'none',
'Accept-Language': 'en-US,en;q=0.8',
'Connection': 'keep-alive'}
try:
r = requests.get(url, headers=hdr)
df = pd.read_csv(io.StringIO(r.content.decode('utf-8')))
new_header = df.iloc[0] # grab the first row for the header
df = df[1:] # take the data less the header row
df.columns = new_header # set the header row as the df header
df.insert(loc=0, column="Date", value=date)
df_right_dii = df.loc[2:2, ('Future Index Long',
'Future Index Short',
'Future Stock Long',
'Future Stock Short',
'Option Index Call Long',
'Option Index Put Long',
'Option Index Call Short',
'Option Index Put Short')]
df_right_fii = df.loc[3:3, ('Future Index Long',
'Future Index Short',
'Future Stock Long',
'Future Stock Short',
'Option Index Call Long',
'Option Index Put Long',
'Option Index Call Short',
'Option Index Put Short')]
df_date = df.loc[2:2, ('Date',)]
return df_date, df_right_dii, df_right_fii
except Exception:
print("[+] Sorry, content for %s is not available online,\nKindly try after 7:30 PM for Today's Contents" % (date))
sys.exit(1)
def availableDate(date):
"""Find the next available data date on the site.
This removes the possibility of holidays in the list.
Returns the working-day DATE as a str.
Sub-module: <only for use with the findDate function>
DO NOT TOUCH"""
url = 'https://www.nseindia.com/content/nsccl/fao_participant_oi_' + date + '.csv'
try:
r = requests.get(url)
tree = html.fromstring(r.content)
checkDate = tree.findtext('.//title')
# Returns None if the data to be scraped is found
# Returns '404 Not Found' if the data to be scraped is not found
return checkDate
except Exception:
print("Sorry the fao Participants value of %s has not been refreshed online yet. \nKindly try after 7:30 PM" % (date))
def findDate():
"""Returns the str of Last filled Date and next Date to be filled"""
global position_dii, position_fii
df = pd.read_excel(workbookPath)
lastFilledDate = pd.isna(df['Unnamed: 16']).index[-1]
# This gives the row index from which data can be started appending
position_fii = len(df) + 1
position_dii = len(df) + 1 - 378
nextDate = (datetime.strptime(lastFilledDate, '%d-%m-%Y') +
timedelta(days=1)).strftime('%d-%m-%Y')
while availableDate(nextDate.replace('-', '')) == '404 Not Found':
nextDate = (datetime.strptime(nextDate, '%d-%m-%Y') +
timedelta(days=1)).strftime('%d-%m-%Y')
return lastFilledDate, nextDate
def niftySpot(date):
"""Returns the nifty closing value of the day as string"""
# Requires date format to be dd-mm-yyyy
url = "https://www.nseindia.com/products/dynaContent/equities/indices/historicalindices.jsp?indexType=NIFTY%2050&fromDate=" + date + "&toDate=" + date
page = requests.get(url)
tree = html.fromstring(page.content)
try:
nifty_close = tree.xpath('/html/body/table/tr/td[5]/text()')[0].strip()
return nifty_close
except IndexError:
print("Sorry the nifty value of %s, has not been refreshed online yet. \nKindly try after 7:30 PM"%(date))
def dataAppend():
# lastFilledDate = findDate()[0]
# now.time() > datetime.time(hour=8)
while datetime.now().strftime('%d-%m-%Y') != findDate()[0]:
if datetime.now().strftime('%d-%m-%Y') == findDate()[0]:
print("[+][+] Process Completed")
break
# Load current date inside the variable, thus changing according to the loop of the function
date = findDate()[1]
# Load the excel file into the script
book = load_workbook(workbookPath)
writer = pd.ExcelWriter(
workbookPath, engine='openpyxl')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
# Get the value to be appended for a given date in the loop
df_date, df_right_dii, df_right_fii = dii_and_fii_data(date)
nifty_close = niftySpot(date)
# Appending Date to FIIs Data and DIIs Data
print("[+] Appending Dates of FIIs and DIIs of date %s to row %s (FII) and row %s (DII)" %
(date, position_fii, position_dii))
df_date.to_excel(writer, "FII Activity", startrow=position_fii,
index=False, header=None)
df_date.to_excel(writer, "DII", startrow=position_dii,
index=False, header=None)
# Appending FII and nifty information to FIIs Data
print("[+] Appending Data of FIIs and Nifty of date %s to row %s" %
(date, position_fii))
df_right_fii.to_excel(writer, "FII Activity", startrow=position_fii,
startcol=14, index=False, header=None)
pd.DataFrame(data=[nifty_close]).to_excel(
writer, "FII Activity", startrow=position_fii, startcol=9, index=False, header=None)
# Appending DII and nifty information to DIIs
print("[+] Appending Data of DIIs and Nifty of date %s to row %s" %
(date, position_dii))
df_right_dii.to_excel(writer, "DII", startrow=position_dii,
startcol=12, index=False, header=None)
pd.DataFrame(data=[nifty_close]).to_excel(
writer, "DII", startrow=position_dii, startcol=9, index=False, header=None)
#Saving the excel file
writer.save()
print("Seems Done")
def main():
dataAppend()
return 0
if __name__ == "__main__":
sys.exit(main())
```
# An example of using candex with regular Latitude and Longitude
## Remapping of ERA5 to subbasins of South Saskatchewan River at Medicine Hat, Alberta, Canada.
-------------------
-------------------
# Step 1: Preparing the target shapefile
### The target shapefile is a basin, catchment, or any other shape for which we intend to have remapped variables.
### We read a shapefile and prepare the field names that are needed for candex to operate.
```
# cell 1: load the shapefile, check the fields or create the necessary fields
import geopandas as gpd
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
font = {'family' : 'Times New Roman',
'weight' : 'bold',
'size' : 20}
matplotlib.rc('font', **font)
# target shapefile is what we want the variables to be remapped to; South Saskatchewan River at Medicine Hat
shp = gpd.read_file('../data/ERA5_SSR_at_MedicineHat/target_shp/South_Saskatchewan_MedicineHat.shp')
if (shp.crs != 'epsg:4326'): # check if the projection is WGS84 (or epsg:4326)
print('please project your shapefile to WGS84 (epsg:4326)')
print(shp.head()) # print the first five row of the shapefile
print(shp.columns) # print existing fields in the shapefile
# plotting
shp.geometry.boundary.plot(color=None,edgecolor='k',linewidth = .5, figsize=(20,20))
plt.xlabel('Lon')
plt.ylabel('Lat')
# cell 2: prepare the needed fields; renaming, adding centroid lat and lon values
shp = shp.rename(columns={'ID':'ID_t'}) # rename 'ID' to 'ID_t', ID from target
shp['lat_t'] = shp.centroid.y # centroid lat from target
shp['lon_t'] = shp.centroid.x # centroid lon from target
print(shp.head()) # show the first 5 rows of the shapefile after renaming and adding centroid
shp.to_file('../data/ERA5_SSR_at_MedicineHat/target_shp/South_Saskatchewan_MedicineHat_standard.shp') # save
```
### The ID_t should be a unique integer, so if any of the checks below fails, fix ID_t so that it is a unique integer for each shape in the shapefile. ID_t can simply run from 1 to the number of shapes in the shapefile.
```
# cell 3: check if the ID_t are unique for each shapefile and ID_t are all int
# load
shp = gpd.read_file('../data/ERA5_SSR_at_MedicineHat/target_shp/South_Saskatchewan_MedicineHat_standard.shp')
if not shp["ID_t"].is_unique:
print('The shapefile has IDs that are not unique for each shape; fix this issue before continuing')
else:
print('The shapefile has IDs that are unique; continue')
if np.array_equal(shp.ID_t, shp.ID_t.astype(int)):
print('The shapefile has IDs that are integer; continue')
else:
print('The shapefile has IDs that are not integer; please convert the IDs to integers')
# alternatively the user can uncomment this part:
# shp.ID_t = np.arange(len(shp)) + 1 # adding shapefile ID_t from 1 to n
# shp.to_file('../data/target_shp/Bow_Oldman_standard.shp') # save the file in the standard format for candex
```
----------------------------
----------------------------
# Step 2: Prepare the shapefile from netCDF file
### The next step is to prepare a shapefile from the coordinates (lat/lon) of the netCDF file
### The code supports three cases:
### 1- The source netCDF file is in regular lat/lon (this example)
### 2- The source netCDF file has rotated lat/lon, meaning that each point has its own lat/lon
### 3- The source netCDF file is irregular and comes with a netCDF file that holds the geospatial information of the netCDF values
### In this example we have netCDF files in regular lat/lon form that are remapped to subbasins. The netCDF files are saved in daily fashion for 3 days (the first three days of January 1979).
### Assuming the location and extent of all the netCDF files are similar, we assume that the first shapefile applies to the other netCDF files as well
### <font color='red'>candex supports a simple change of coordinates from 0-360 to -180-180 for convenience. However, in the current version this functionality won't work in areas near a longitude of 0, where 0 and 360 come together.</font>
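A minimal sketch of that 0-360 to -180-180 conversion (the helper name is ours, not the candex API); note how values just below 360 wrap to just below 0, which is why the seam near a longitude of 0 is problematic:

```python
import numpy as np

# Hypothetical helper (not the candex API): wrap longitudes from [0, 360) to [-180, 180).
def lon_360_to_180(lon):
    lon = np.asarray(lon, dtype=float)
    return ((lon + 180.0) % 360.0) - 180.0

# 359 wraps to -1: grid cells straddling lon = 0/360 end up on opposite ends of the axis.
print(lon_360_to_180([0, 90, 250, 359]))
```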
```
# cell 1: read the candex functions and load the parameters
from candex import *
import matplotlib
import numpy as np
font = {'family' : 'Times New Roman',
'weight' : 'bold',
'size' : 20}
matplotlib.rc('font', **font)
# cell 2: specifying the parameters for creating the source shapefile
# name of the sample nc file (give only one if there are separate files for each year or month)
name_of_nc = '../data/ERA5_SSR_at_MedicineHat/source_nc/ERA5_NA_19790101.nc'
# sample variable from the nc file (its dimensions are similar to those of all the variables we intend to read)
name_of_variable = 'airtemp'
# name of the variable in the nc file (and not dimension) that holds the longitude values
name_of_lon_var = 'longitude'
# name of the variable in the nc file (and not dimension) that holds the latitude values
name_of_lat_var = 'latitude'
# bounding box to trim the created shapefile
# it should be in the form of np.array([min_lat,max_lat,min_lon,max_lon])
# or should be given as False if there is no box
box_values = np.array([47,53,-118,-108]) # or False;
# if the nc file lon is 0 to 360 and we want to transform it to -180 to 180
# in that case box_values should lie entirely in either the east or the west hemisphere
correct_360 = False
# name of the shapefile that is created and saved
name_of_shp = '../data/ERA5_SSR_at_MedicineHat/source_shp/ERA5_NA.shp'
# creating the shapefile and preparing the 2D lat/lon field based on shapefile for indexing
lat_2D, lon_2D = NetCDF_SHP_lat_lon(name_of_nc, name_of_variable, name_of_lat_var,
name_of_lon_var, name_of_shp, box_values, correct_360)
# cell 3: plotting the created shapefile
shp_source = gpd.read_file('../data/ERA5_SSR_at_MedicineHat/source_shp/ERA5_NA.shp') # load it
print(shp_source.head()) # show the first 5 rows
# plotting
shp_source.geometry.boundary.plot(color=None,edgecolor='k',linewidth = 0.25, figsize=(20,20))
plt.xlabel('Lon')
plt.ylabel('Lat')
```
------
------
# Step 3: Intersection of the source and target shapefile and creation of remap data frame
### In this section we intersect the two shapefiles (source and target) to find the percent contribution of source in the target
### We rename the target fields to the standard names used by the candex functions
### <font color='red'>candex assumes that the shapefile from netCDF file is in WGS84 (or EPSG:4326)</font>
```
# cell 1: Load the candex functions
from candex import *
import matplotlib
import numpy as np
font = {'family' : 'Times New Roman',
'weight' : 'bold',
'size' : 20}
matplotlib.rc('font', **font)
# cell 2: intersection of the source and target shapefiles
shp_target = gpd.read_file('../data/ERA5_SSR_at_MedicineHat/target_shp/South_Saskatchewan_MedicineHat_standard.shp') # load target shp
shp_source = gpd.read_file('../data/ERA5_SSR_at_MedicineHat/source_shp/ERA5_NA.shp') # load source shp
# assign coordinate systems (here both are defined as WGS84)
shp_source = shp_source.set_crs("EPSG:4326") # in case it is missing
# intersection
shp_int = intersection_shp (shp_target, shp_source)
# rename dictionary
dict_rename = {'S_1_ID_t' : 'ID_t',
'S_1_lat_t': 'lat_t',
'S_1_lon_t': 'lon_t',
'S_2_ID_s' : 'ID_s',
'S_2_lat_s': 'lat_s',
'S_2_lon_s': 'lon_s',
'AP1N' : 'weight'}
shp_int = shp_int.rename(columns=dict_rename) # rename fields
shp_int = shp_int.sort_values(by=['ID_t']) # sort based on ID_t
shp_int.to_file('../data/ERA5_SSR_at_MedicineHat/intersection_shp/ERA5_NA_SSR.shp') # save files
# plotting
shp_int.geometry.boundary.plot(color=None,edgecolor='k',linewidth = 1, figsize=(20,20))
plt.xlabel('Lon')
plt.ylabel('Lat')
# cell 3: indexing of the source lat/lon to rows and columns in the nc file
remap_df = Dbf5('../data/ERA5_SSR_at_MedicineHat/intersection_shp/ERA5_NA_SSR.dbf') # load dbf
remap_df = remap_df.to_dataframe()
# find the rows and cols of source in nc file
rows, cols = lat_lon_to_index(np.array(remap_df['lat_s']),
np.array(remap_df['lon_s']),
lat_2D,
lon_2D)
remap_df['rows'] = rows
remap_df['cols'] = cols
# save remap_df as csv for future use
remap_df.to_csv('../data/ERA5_SSR_at_MedicineHat/remap/remap_ERA5_SSR.csv')
```
-----
-----
# Step 4: Remap nc file(s)
### Execute the Remapping and write the nc file for each source nc file.
### <font color='red'>The time, calendar, time units, and variable units are carried over from the source to the remapped nc file.</font>
```
# loading the remap csv
remap_df = pd.read_csv('../data/ERA5_SSR_at_MedicineHat/remap/remap_ERA5_SSR.csv')
# listing the nc files
nc_names = '../data/ERA5_SSR_at_MedicineHat/source_nc/ERA5_NA_*.nc' # if there are multiple nc files they can be specified by *
output_path = '../data/ERA5_SSR_at_MedicineHat/target_nc/remapped_ERA5_SSR_' # the path
name_of_var_time = 'time' # dimension of time in the source nc files
name_of_vars = ['airtemp','pptrate'] # variables that need to be remapped
# format of the variables
format_of_vars = ['f4','f4'] # type of each variable that needs to be remapped: f4 single, f8 double, int integer
fill_values = ['-9999.00','-9999.00'] # fill values for each variable
authour_name = 'Shervan Gharari, Computational Hydrology Team, The University of Saskatchewan' # the author
# candex target_nc_creation functions
target_nc_creation(nc_names,
remap_df,
name_of_var_time,
output_path,
name_of_vars,
format_of_vars,
fill_values,
authour_name)
```
-----------
-----------
# Step 5: Visualization of the result for one time step for the source and remapped data
```
# cell 1: Load the candex functions
from candex import *
import matplotlib
import numpy as np
font = {'family' : 'Times New Roman',
'weight' : 'bold',
'size' : 20}
matplotlib.rc('font', **font)
# load the nc file
nc_source = xr.open_dataset('../data/ERA5_SSR_at_MedicineHat/source_nc/ERA5_NA_19790101.nc') # nc source
print(nc_source)
nc_source_time = nc_source.sel(time="1979-01-01T20:00:00",method="nearest") # select one time step
# subset of the region of interest:
latbounds = np.array([ 48.8 , 52 ])
lonbounds = np.array([ -116.5 , -110.5 ])
lats = np.array(nc_source_time.variables['latitude'][:] )
lons = np.array(nc_source_time.variables['longitude'][:])
# latitude lower and upper index
latli = np.argmin( np.abs( lats - latbounds[0] ) )
latui = np.argmin( np.abs( lats - latbounds[1] ) )
# longitude lower and upper index
lonli = np.argmin( np.abs( lons - lonbounds[0] ) )
lonui = np.argmin( np.abs( lons - lonbounds[1] ) )
nc_source_time = nc_source_time.isel(latitude=np.arange(latui,latli))
nc_source_time = nc_source_time.isel(longitude=np.arange(lonli,lonui))
print(np.max(np.array(nc_source_time.airtemp[:,:])))
nc_source_time.airtemp.plot(figsize=(15,10))
# load a target shapefile and nc file
nc_target = xr.open_dataset('../data/ERA5_SSR_at_MedicineHat/target_nc/remapped_ERA5_SSR_1979-01-01-00-00-00.nc') # nc target
nc_target_time = nc_target.sel(time="1979-01-01T20:00:00",method="nearest") # select one time step
air_temp = np.array(nc_target_time.airtemp)
shp_target = gpd.read_file('../data/ERA5_SSR_at_MedicineHat/target_shp/South_Saskatchewan_MedicineHat_standard.shp') # load target shp
shp_target['airtemp'] = air_temp
shp_target.plot(column='airtemp', figsize=(15,10))
plt.xlabel ('Lon')
plt.ylabel ('Lat')
```
## Introduction
An example of implementing the Node2Vec representation learning algorithm using components from the stellargraph and gensim libraries.
<a name="refs"></a>
**References**
[1] Node2Vec: Scalable Feature Learning for Networks. A. Grover, J. Leskovec. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2016. ([link](https://snap.stanford.edu/node2vec/))
[2] Distributed representations of words and phrases and their compositionality. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. In Advances in Neural Information Processing Systems (NIPS), pp. 3111-3119, 2013. ([link](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf))
[3] Gensim: Topic modelling for humans. ([link](https://radimrehurek.com/gensim/))
```
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
import os
import networkx as nx
import numpy as np
import pandas as pd
%matplotlib inline
```
### Dataset
The dataset is the citation network Cora.
It can be downloaded by clicking [here](https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz)
The following is the description of the dataset from the publisher,
> The Cora dataset consists of 2708 scientific publications classified into one of seven classes. The citation network consists of 5429 links. Each publication in the dataset is described by a 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary. The dictionary consists of 1433 unique words. The README file in the dataset provides more details.
For this demo, we ignore the word vectors associated with each paper. We are only interested in the network structure and the **subject** attribute of each paper.
Download and unzip the cora.tgz file to a location on your computer.
We assume that the dataset is stored in the directory
`~/data/cora/`
where the files `cora.cites` and `cora.content` can be located.
We are going to load the data into a networkx object.
```
# load directed graph from ordering (cited_paper, citing_paper)
data_location = os.path.expanduser("~/data/cora/")
g_nx = nx.read_edgelist(path=os.path.join(data_location,"cora.cites"), create_using=nx.DiGraph()).reverse()
# convert to undirected graph for processing
g_nx = g_nx.to_undirected()
# load the node attribute data
node_attr = pd.read_csv(os.path.join(data_location,"cora.content"), sep='\t', header=None)
values = { str(row.tolist()[0]): row.tolist()[-1] for _, row in node_attr.iterrows() }
nx.set_node_attributes(g_nx, values, 'subject')
# Select the largest connected component. For clarity we ignore isolated
# nodes and subgraphs; having these in the data does not prevent the
# algorithm from running and producing valid results.
g_nx_ccs = ( g_nx.subgraph(c).copy() for c in nx.connected_components(g_nx) )
g_nx = max(g_nx_ccs, key=len)
print("Largest subgraph statistics: {} nodes, {} edges".format(
g_nx.number_of_nodes(), g_nx.number_of_edges()))
```
### The Node2Vec algorithm
The Node2Vec algorithm introduced in [[1]](#refs) is a 2-step representation learning algorithm. The two steps are:
1. Use 2nd order random walks to generate sentences from a graph. A sentence is a list of node ids. The set of all sentences makes a corpus.
2. The corpus is then used to learn an embedding vector for each node in the graph. Each node id is considered a unique word/token in a dictionary that has size equal to the number of nodes in the graph. The Word2Vec algorithm [[2]](#refs) is used for calculating the embedding vectors.
## Corpus generation using random walks
The stellargraph library provides an implementation for 2nd order random walks as required by Node2Vec. The random walks have fixed maximum length and are controlled by two parameters `p` and `q`. See [[1]](#refs) for a detailed description of these parameters.
We are going to start 10 random walks from each node in the graph with a length up to 100. We set parameter `p` to 0.5 (which encourages backward steps) and `q` to 2.0 (which discourages distant steps); the net result is that walks should remain in the local vicinity of the starting nodes.
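As a warm-up to the stellargraph call below, step 1 can be illustrated with plain uniform (1st-order) walks on a toy adjacency dict — a simplification of the biased 2nd-order walks (helper and variable names are ours):

```python
import random

# Simplified corpus generation: uniform (1st-order) walks on a toy adjacency dict.
# Node2Vec proper uses biased 2nd-order walks controlled by p and q.
def random_walk_corpus(adj, walks_per_node=2, walk_length=5, seed=42):
    rng = random.Random(seed)
    corpus = []
    for node in adj:
        for _ in range(walks_per_node):
            walk = [node]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break  # dead end: stop the walk early
                walk.append(rng.choice(neighbors))
            # each walk becomes a "sentence" of node-id tokens for Word2Vec
            corpus.append([str(n) for n in walk])
    return corpus

toy_graph = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
print(len(random_walk_corpus(toy_graph)))  # 3 nodes x 2 walks = 6 sentences
```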
```
from stellargraph.data import BiasedRandomWalk
from stellargraph import StellarGraph
rw = BiasedRandomWalk(StellarGraph(g_nx))
walks = rw.run(
nodes=list(g_nx.nodes()), # root nodes
length=100, # maximum length of a random walk
n=10, # number of random walks per root node
p=0.5, # defines (unnormalised) probability, 1/p, of returning to the source node
q=2.0 # defines (unnormalised) probability, 1/q, of moving away from the source node
)
print("Number of random walks: {}".format(len(walks)))
```
### Representation Learning using Word2Vec
We use the Word2Vec [[2]](#refs) implementation in the free Python library gensim [[3]](#refs), to learn representations for each node in the graph.
We set the dimensionality of the learned embedding vectors to 128 as in [[1]](#refs).
```
from gensim.models import Word2Vec
model = Word2Vec(walks, size=128, window=5, min_count=0, sg=1, workers=2, iter=1)
# The embedding vectors can be retrieved from model.wv using the node ID.
model.wv['19231'].shape
```
### Visualise Node Embeddings
We retrieve the Word2Vec node embeddings that are 128-dimensional vectors and then we project them down to 2 dimensions using the [t-SNE](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) algorithm.
```
# Retrieve node embeddings and corresponding subjects
node_ids = model.wv.index2word # list of node IDs
node_embeddings = model.wv.vectors # numpy.ndarray of size number of nodes times embeddings dimensionality
node_targets = [ g_nx.nodes[node_id]['subject'] for node_id in node_ids ]
```
Transform the embeddings to 2d space for visualisation
```
transform = TSNE # PCA
trans = transform(n_components=2)
node_embeddings_2d = trans.fit_transform(node_embeddings)
# draw the embedding points, coloring them by the target label (paper subject)
alpha = 0.7
label_map = { l: i for i, l in enumerate(np.unique(node_targets)) }
node_colours = [ label_map[target] for target in node_targets ]
plt.figure(figsize=(7,7))
plt.axes().set(aspect="equal")
plt.scatter(node_embeddings_2d[:,0],
node_embeddings_2d[:,1],
c=node_colours, cmap="jet", alpha=alpha)
plt.title('{} visualization of node embeddings'.format(transform.__name__))
plt.show()
```
### Downstream task
The node embeddings calculated using Word2Vec can be used as feature vectors in a downstream task such as node attribute inference (e.g., inferring the subject of a paper in Cora), community detection (clustering of nodes based on the similarity of their embedding vectors), and link prediction (e.g., prediction of citation links between papers).
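As an illustration of node attribute inference on such embeddings, here is a hedged, numpy-only sketch (toy data and helper names are ours) that assigns each node the label of the nearest class centroid — a stand-in for a proper classifier such as logistic regression:

```python
import numpy as np

# Hypothetical helper: nearest-centroid classification of embedding vectors.
def nearest_centroid_predict(train_emb, train_labels, test_emb):
    labels = sorted(set(train_labels))
    # one centroid per class, averaged over that class's training embeddings
    centroids = np.stack([train_emb[np.array(train_labels) == l].mean(axis=0)
                          for l in labels])
    # Euclidean distance of every test embedding to every centroid
    dists = np.linalg.norm(test_emb[:, None, :] - centroids[None, :, :], axis=2)
    return [labels[i] for i in dists.argmin(axis=1)]

# Toy 2-d "embeddings": two well-separated clusters with made-up subjects.
train = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = ["Theory", "Theory", "ML", "ML"]
print(nearest_centroid_predict(train, y, np.array([[0.05, 0.1], [4.9, 5.2]])))
# -> ['Theory', 'ML']
```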
For a more detailed example of using Node2Vec for link prediction see [this example](https://github.com/stellargraph/stellargraph/tree/master/demos/link-prediction/random-walks/cora-lp-demo.ipynb).
# Riskfolio-Lib Tutorial:
<br>__[Financionerioncios](https://financioneroncios.wordpress.com)__
<br>__[Orenji](https://www.orenj-i.net)__
<br>__[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)__
<br>__[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__
## Part V: Multi Assets Algorithmic Trading Backtesting
## 1. Downloading the data:
```
import pandas as pd
import datetime
import yfinance as yf
import backtrader as bt
import numpy as np
import warnings
warnings.filterwarnings("ignore")
# Date range
start = '2010-01-01'
end = '2020-10-31'
# Tickers of assets
assets = ['JCI', 'TGT', 'CMCSA', 'CPB', 'MO', 'NBL', 'APA', 'MMC', 'JPM',
'ZION', 'PSA', 'AAPL', 'BAX', 'BMY', 'LUV', 'PCAR', 'TXT', 'DHR',
'DE', 'MSFT', 'HPQ', 'SEE', 'VZ', 'CNP', 'NI', 'SPY']
assets.sort()
# Downloading data
prices = yf.download(assets, start=start, end=end)
prices = prices.dropna()
############################################################
# Showing data
############################################################
display(prices.head())
```
## 2. Building the Backtest Function with Backtrader
### 2.1 Defining Backtest Function
```
############################################################
# Defining the backtest function
############################################################
def backtest(datas, strategy, start, end, plot=False, **kwargs):
cerebro = bt.Cerebro()
# Here we add transaction costs and other broker costs
cerebro.broker.setcash(1000000.0)
cerebro.broker.setcommission(commission=0.005) # Commission 0.5%
cerebro.broker.set_slippage_perc(0.005, # Slippage 0.5%
slip_open=True,
slip_limit=True,
slip_match=True,
slip_out=False)
for data in datas:
cerebro.adddata(data)
# Here we add the indicators that we are going to store
cerebro.addanalyzer(bt.analyzers.SharpeRatio, riskfreerate=0.0)
cerebro.addanalyzer(bt.analyzers.Returns)
cerebro.addanalyzer(bt.analyzers.DrawDown)
cerebro.addstrategy(strategy, **kwargs)
cerebro.addobserver(bt.observers.Value)
cerebro.addobserver(bt.observers.DrawDown)
results = cerebro.run(stdstats=False)
if plot:
cerebro.plot(iplot=False, start=start, end=end)
return (results[0].analyzers.drawdown.get_analysis()['max']['drawdown'],
results[0].analyzers.returns.get_analysis()['rnorm100'],
results[0].analyzers.sharperatio.get_analysis()['sharperatio'])
```
### 2.2 Building Data Feeds for Backtesting
```
############################################################
# Create objects that contain the prices of assets
############################################################
# Creating Assets bt.feeds
assets_prices = []
for i in assets:
if i != 'SPY':
prices_ = prices.drop(columns='Adj Close').loc[:, (slice(None), i)].dropna()
prices_.columns = ['Close', 'High', 'Low', 'Open', 'Volume']
assets_prices.append(bt.feeds.PandasData(dataname=prices_, plot=False))
# Creating Benchmark bt.feeds
prices_ = prices.drop(columns='Adj Close').loc[:, (slice(None), 'SPY')].dropna()
prices_.columns = ['Close', 'High', 'Low', 'Open', 'Volume']
benchmark = bt.feeds.PandasData(dataname=prices_, plot=False)
display(prices_.head())
```
## 3. Building Strategies with Backtrader
### 3.1 Buy and Hold SPY
```
############################################################
# Building the Buy and Hold strategy
############################################################
class BuyAndHold(bt.Strategy):
    def __init__(self):
        self.counter = 0
    def next(self):
        # Wait until bar 1004 (roughly 4 years of daily data) so the
        # benchmark starts on the same date as the optimized portfolios
        if self.counter >= 1004:
            if self.getposition(self.data).size == 0:
                # Target 99% invested, keeping a small cash buffer
                # for commissions and slippage
                self.order_target_percent(self.data, target=0.99)
        self.counter += 1
```
If you get an error related to the 'warnings' module when you try to plot, modify the 'locator.py' file in the backtrader library following the instructions in this __[link](https://community.backtrader.com/topic/981/importerror-cannot-import-name-min_per_hour-when-trying-to-plot/8)__.
```
############################################################
# Run the backtest for the selected period
############################################################
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (10, 6) # (w, h)
plt.plot() # We need to do this to avoid errors in inline plot
start = 1004
end = prices.shape[0] - 1
dd, cagr, sharpe = backtest([benchmark],
BuyAndHold,
start=start,
end=end,
plot=True)
############################################################
# Show Buy and Hold Strategy Stats
############################################################
print(f"Max Drawdown: {dd:.2f}%")
print(f"CAGR: {cagr:.2f}%")
print(f"Sharpe: {sharpe:.3f}")
```
### 3.2 Rebalancing Quarterly using Riskfolio-Lib
```
############################################################
# Calculate assets returns
############################################################
pd.options.display.float_format = '{:.4%}'.format
data = prices.loc[:, ('Adj Close', slice(None))]
data.columns = assets
data = data.drop(columns=['SPY']).dropna()
returns = data.pct_change().dropna()
display(returns.head())
############################################################
# Selecting Dates for Rebalancing
############################################################
# Selecting last day of month of available data
index = returns.groupby([returns.index.year, returns.index.month]).tail(1).index
index_2 = returns.index
# Quarterly Dates
index = [x for x in index if float(x.month) % 3.0 == 0 ]
# Dates where the strategy will be backtested
index_ = [index_2.get_loc(x) for x in index if index_2.get_loc(x) > 1000]
############################################################
# Building Constraints
############################################################
asset_classes = {'Assets': ['JCI','TGT','CMCSA','CPB','MO','NBL','APA','MMC',
'JPM','ZION','PSA','AAPL','BAX','BMY','LUV','PCAR',
'TXT','DHR','DE','MSFT','HPQ','SEE','VZ','CNP','NI'],
'Industry': ['Consumer Discretionary','Consumer Discretionary',
'Consumer Discretionary', 'Consumer Staples',
'Consumer Staples','Energy','Energy','Financials',
'Financials','Financials','Financials','Information Technology',
'Health Care','Health Care','Industrials','Industrials',
'Industrials','Industrials','Industrials',
'Information Technology','Information Technology',
'Materials','Telecommunications Services','Utilities',
'Utilities'] }
asset_classes = pd.DataFrame(asset_classes)
asset_classes = asset_classes.sort_values(by=['Assets'])
constraints = {'Disabled': [False, False, False],
'Type': ['All Assets', 'All Classes', 'All Classes'],
'Set': ['', 'Industry', 'Industry'],
'Position': ['', '', ''],
'Sign': ['<=', '<=', '>='],
'Weight': [0.10, 0.20, 0.03],
'Type Relative': ['', '', ''],
'Relative Set': ['', '', ''],
'Relative': ['', '', ''],
'Factor': ['', '', '']}
constraints = pd.DataFrame(constraints)
display(constraints)
############################################################
# Building constraint matrixes for Riskfolio Lib
############################################################
import riskfolio.ConstraintsFunctions as cf
A, B = cf.assets_constraints(constraints, asset_classes)
############################################################
# Building a loop that estimate optimal portfolios on
# rebalancing dates
############################################################
import riskfolio.Portfolio as pf
models = {}
# rms = ['MV', 'MAD', 'MSV', 'FLPM', 'SLPM',
# 'CVaR', 'WR', 'MDD', 'ADD', 'CDaR']
rms = ['MV', 'CVaR', 'WR', 'CDaR']
for j in rms:
weights = pd.DataFrame([])
for i in index_:
Y = returns[i-1000:i] # taking last 4 years (250 trading days per year)
# Building the portfolio object
port = pf.Portfolio(returns=Y)
# Add portfolio constraints
port.ainequality = A
port.binequality = B
# Calculating optimum portfolio
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
# Estimate optimal portfolio:
model='Classic' # Could be Classic (historical), BL (Black Litterman) or FM (Factor Model)
rm = j # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = True # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
if w is None:
w = weights.tail(1).T
weights = pd.concat([weights, w.T], axis = 0)
models[j] = weights.copy()
models[j].index = index_
############################################################
# Building the Asset Allocation Class
############################################################
class AssetAllocation(bt.Strategy):
def __init__(self):
j = 0
for i in assets:
setattr(self, i, self.datas[j])
j += 1
self.counter = 0
def next(self):
if self.counter in weights.index.tolist():
for i in assets:
w = weights.loc[self.counter, i]
self.order_target_percent(getattr(self, i), target=w)
self.counter += 1
############################################################
# Backtesting Mean Variance Strategy
############################################################
assets = returns.columns.tolist()
weights = models['MV']
dd, cagr, sharpe = backtest(assets_prices,
AssetAllocation,
start=start,
end=end,
plot=True)
############################################################
# Show Mean Variance Strategy Stats
############################################################
print(f"Max Drawdown: {dd:.2f}%")
print(f"CAGR: {cagr:.2f}%")
print(f"Sharpe: {sharpe:.3f}")
############################################################
# Plotting the composition of the last MV portfolio
############################################################
import riskfolio.PlotFunctions as plf
w = pd.DataFrame(models['MV'].iloc[-1,:])
ax = plf.plot_pie(w=w, title='Sharpe Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
############################################################
# Composition per Industry
############################################################
w_classes = pd.concat([asset_classes.set_index('Assets'), w], axis=1)
w_classes = w_classes.groupby(['Industry']).sum()
w_classes.columns = ['weights']
display(w_classes)
############################################################
# Backtesting Mean CVaR Strategy
############################################################
assets = returns.columns.tolist()
weights = models['CVaR']
dd, cagr, sharpe = backtest(assets_prices,
AssetAllocation,
start=start,
end=end,
plot=True)
############################################################
# Show CVaR Strategy Stats
############################################################
print(f"Max Drawdown: {dd:.2f}%")
print(f"CAGR: {cagr:.2f}%")
print(f"Sharpe: {sharpe:.3f}")
############################################################
# Plotting the composition of the last CVaR portfolio
############################################################
w = pd.DataFrame(models['CVaR'].iloc[-1,:])
ax = plf.plot_pie(w=w, title='Sharpe Mean CVaR', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
############################################################
# Composition per Industry
############################################################
w_classes = pd.concat([asset_classes.set_index('Assets'), w], axis=1)
w_classes = w_classes.groupby(['Industry']).sum()
w_classes.columns = ['weights']
display(w_classes)
############################################################
# Backtesting Mean Worst Realization Strategy
############################################################
assets = returns.columns.tolist()
weights = models['WR']
dd, cagr, sharpe = backtest(assets_prices,
AssetAllocation,
start=start,
end=end,
plot=True)
############################################################
# Show Worst Realization Strategy Stats
############################################################
print(f"Max Drawdown: {dd:.2f}%")
print(f"CAGR: {cagr:.2f}%")
print(f"Sharpe: {sharpe:.3f}")
############################################################
# Plotting the composition of the last WR portfolio
############################################################
w = pd.DataFrame(models['WR'].iloc[-1,:])
ax = plf.plot_pie(w=w, title='Sharpe Mean WR', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
############################################################
# Composition per Industry
############################################################
w_classes = pd.concat([asset_classes.set_index('Assets'), w], axis=1)
w_classes = w_classes.groupby(['Industry']).sum()
w_classes.columns = ['weights']
display(w_classes)
############################################################
# Backtesting Mean CDaR Strategy
############################################################
assets = returns.columns.tolist()
weights = models['CDaR']
dd, cagr, sharpe = backtest(assets_prices,
AssetAllocation,
start=start,
end=end,
plot=True)
############################################################
# Show CDaR Strategy Stats
############################################################
print(f"Max Drawdown: {dd:.2f}%")
print(f"CAGR: {cagr:.2f}%")
print(f"Sharpe: {sharpe:.3f}")
############################################################
# Plotting the composition of the last CDaR portfolio
############################################################
w = pd.DataFrame(models['CDaR'].iloc[-1,:])
ax = plf.plot_pie(w=w, title='Sharpe Mean CDaR', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
############################################################
# Composition per Industry
############################################################
w_classes = pd.concat([asset_classes.set_index('Assets'), w], axis=1)
w_classes = w_classes.groupby(['Industry']).sum()
w_classes.columns = ['weights']
display(w_classes)
```
## 4. Conclusion
In this example, the best strategy in terms of performance (annualized return) is __WR__. Ranked by performance, the strategies are:
1. WR (6.07%): Worst Scenario or Minimax Model.
1. MV (5.51%): Mean Variance.
1. SPY (5.36%): Buy and Hold SPY.
1. CVaR (5.13%): Conditional Value at Risk.
1. CDaR (4.42%): Conditional Drawdown at Risk.
On the other hand, the best strategy in terms of Sharpe ratio is __CVaR__. Ranked by Sharpe ratio, the strategies are:
1. CVaR (0.671): Conditional Value at Risk.
1. MV (0.660): Mean Variance.
1. WR (0.598): Worst Scenario or Minimax Model.
1. CDaR (0.578): Conditional Drawdown at Risk.
1. SPY (0.570): Buy and Hold SPY.
# Table of Contents
<p><div class="lev1 toc-item"><a href="#Planar-data-classification-with-one-hidden-layer" data-toc-modified-id="Planar-data-classification-with-one-hidden-layer-1"><span class="toc-item-num">1 </span>Planar data classification with one hidden layer</a></div><div class="lev2 toc-item"><a href="#1---Packages" data-toc-modified-id="1---Packages-11"><span class="toc-item-num">1.1 </span>1 - Packages</a></div><div class="lev2 toc-item"><a href="#2---Dataset" data-toc-modified-id="2---Dataset-12"><span class="toc-item-num">1.2 </span>2 - Dataset</a></div><div class="lev2 toc-item"><a href="#3---Simple-Logistic-Regression" data-toc-modified-id="3---Simple-Logistic-Regression-13"><span class="toc-item-num">1.3 </span>3 - Simple Logistic Regression</a></div><div class="lev2 toc-item"><a href="#4---Neural-Network-model" data-toc-modified-id="4---Neural-Network-model-14"><span class="toc-item-num">1.4 </span>4 - Neural Network model</a></div><div class="lev3 toc-item"><a href="#4.1---Defining-the-neural-network-structure" data-toc-modified-id="4.1---Defining-the-neural-network-structure-141"><span class="toc-item-num">1.4.1 </span>4.1 - Defining the neural network structure</a></div><div class="lev3 toc-item"><a href="#4.2---Initialize-the-model's-parameters" data-toc-modified-id="4.2---Initialize-the-model's-parameters-142"><span class="toc-item-num">1.4.2 </span>4.2 - Initialize the model's parameters</a></div><div class="lev3 toc-item"><a href="#4.3---The-Loop" data-toc-modified-id="4.3---The-Loop-143"><span class="toc-item-num">1.4.3 </span>4.3 - The Loop</a></div><div class="lev3 toc-item"><a href="#4.4---Integrate-parts-4.1,-4.2-and-4.3-in-nn_model()" data-toc-modified-id="4.4---Integrate-parts-4.1,-4.2-and-4.3-in-nn_model()-144"><span class="toc-item-num">1.4.4 </span>4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model()</a></div><div class="lev3 toc-item"><a href="#4.5-Predictions" data-toc-modified-id="4.5-Predictions-145"><span class="toc-item-num">1.4.5 </span>4.5 
Predictions</a></div><div class="lev3 toc-item"><a href="#4.6---Tuning-hidden-layer-size-(optional/ungraded-exercise)" data-toc-modified-id="4.6---Tuning-hidden-layer-size-(optional/ungraded-exercise)-146"><span class="toc-item-num">1.4.6 </span>4.6 - Tuning hidden layer size (optional/ungraded exercise)</a></div><div class="lev2 toc-item"><a href="#5)-Performance-on-other-datasets" data-toc-modified-id="5)-Performance-on-other-datasets-15"><span class="toc-item-num">1.5 </span>5) Performance on other datasets</a></div>
# Planar data classification with one hidden layer
Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.
**You will learn how to:**
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross entropy loss
- Implement forward and backward propagation
## 1 - Packages ##
Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org) is the fundamental package for scientific computing with Python.
- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis.
- [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.
- testCases provides some test examples to assess the correctness of your functions.
- planar_utils provides various useful functions used in this assignment.
```
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
```
## 2 - Dataset ##
First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables `X` and `Y`.
```
X, Y = load_planar_dataset()
X.shape, Y.shape
```
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.
```
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y[0, :], s=40, cmap=plt.cm.Spectral);
```
You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).
Let's first get a better sense of what our data is like.
**Exercise**: How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`?
**Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
```
### START CODE HERE ### (≈ 3 lines of code)
shape_X = X.shape
shape_Y = Y.shape
m = Y.size # training set size
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**shape of X**</td>
<td> (2, 400) </td>
</tr>
<tr>
<td>**shape of Y**</td>
<td>(1, 400) </td>
</tr>
<tr>
<td>**m**</td>
<td> 400 </td>
</tr>
</table>
## 3 - Simple Logistic Regression
Before building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
```
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV(cv=5);
clf.fit(X.T, Y.T.ravel());
```
You can now plot the decision boundary of this model. Run the code below.
```
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**Accuracy**</td>
<td> 47% </td>
</tr>
</table>
**Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!
## 4 - Neural Network model
Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.
**Here is our model**:
<img src="images/classification_kiank.png" style="width:600px;height:300px;">
**Mathematically**:
For one example $x^{(i)}$:
$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1] (i)}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2] (i)}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$
**Reminder**: The general methodology to build a Neural Network is to:
1. Define the neural network structure (# of input units, # of hidden units, etc.).
2. Initialize the model's parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
You often build helper functions to compute steps 1-3 and then merge them into one function we call `nn_model()`. Once you've built `nn_model()` and learnt the right parameters, you can make predictions on new data.
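As a self-contained preview of that methodology, the sketch below trains a tiny single-hidden-layer network on XOR using exactly these four steps. Names and hyperparameters here are illustrative only; the graded functions you implement next follow the same structure.

```python
import numpy as np

np.random.seed(0)

# Toy dataset: XOR, laid out like the notebook's data (features x examples)
X = np.array([[0., 0., 1., 1.],
              [0., 1., 0., 1.]])   # shape (n_x, m) = (2, 4)
Y = np.array([[0., 1., 1., 0.]])   # shape (n_y, m) = (1, 4)

# Step 1: define the structure
n_x, n_h, n_y = X.shape[0], 4, Y.shape[0]

# Step 2: initialize the parameters
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))

m, alpha = X.shape[1], 1.2
for _ in range(10000):
    # Step 3a: forward propagation, equations (1)-(4)
    Z1 = W1 @ X + b1
    A1 = np.tanh(Z1)
    Z2 = W2 @ A1 + b2
    A2 = 1.0 / (1.0 + np.exp(-Z2))           # sigmoid
    # Step 3b: cross-entropy cost, equation (6) (clipped for safety)
    A2c = np.clip(A2, 1e-12, 1 - 1e-12)
    cost = -np.mean(Y * np.log(A2c) + (1 - Y) * np.log(1 - A2c))
    # Step 3c: backward propagation
    dZ2 = A2 - Y
    dW2 = (dZ2 @ A1.T) / m
    db2 = dZ2.sum(axis=1, keepdims=True) / m
    dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)
    dW1 = (dZ1 @ X.T) / m
    db1 = dZ1.sum(axis=1, keepdims=True) / m
    # Step 3d: gradient descent update
    W1 -= alpha * dW1; b1 -= alpha * db1
    W2 -= alpha * dW2; b2 -= alpha * db2

print("final cost:", float(cost))
```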
### 4.1 - Defining the neural network structure ####
**Exercise**: Define three variables:
- n_x: the size of the input layer
- n_h: the size of the hidden layer (set this to 4)
- n_y: the size of the output layer
**Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
```
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
### START CODE HERE ### (≈ 3 lines of code)
n_x = X.shape[0] # size of input layer
n_h = 4
n_y = Y.shape[0] # size of output layer
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
```
**Expected Output** (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).
<table style="width:20%">
<tr>
<td>**n_x**</td>
<td> 5 </td>
</tr>
<tr>
<td>**n_h**</td>
<td> 4 </td>
</tr>
<tr>
<td>**n_y**</td>
<td> 2 </td>
</tr>
</table>
### 4.2 - Initialize the model's parameters ####
**Exercise**: Implement the function `initialize_parameters()`.
**Instructions**:
- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.
- You will initialize the weights matrices with random values.
- Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).
- You will initialize the bias vectors as zeros.
- Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>**W1**</td>
<td> [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]] </td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.]
[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.]] </td>
</tr>
</table>
### 4.3 - The Loop ####
**Question**: Implement `forward_propagation()`.
**Instructions**:
- Look above at the mathematical representation of your classifier.
- You can use the function `sigmoid()`, which is imported from planar_utils at the top of the notebook.
- You can use the function `np.tanh()`. It is part of the numpy library.
- The steps you have to implement are:
1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`.
2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).
- Values needed in the backpropagation are stored in "`cache`". The `cache` will be given as an input to the backpropagation function.
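For reference, `sigmoid()` as imported from planar_utils is assumed here to be the standard logistic function; a minimal equivalent would be:

```python
import numpy as np

def sigmoid(z):
    """Standard logistic function 1 / (1 + e^{-z}), applied elementwise."""
    return 1.0 / (1.0 + np.exp(-z))
```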
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
#print(W1.shape, b1.shape, W2.shape, b2.shape)
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1 = np.dot(W1, X) + b1
A1 = np.tanh(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = sigmoid(Z2)
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
```
**Expected Output**:
<table style="width:55%">
<tr>
<td> -0.000499755777742 -0.000496963353232 0.000438187450959 0.500109546852 </td>
</tr>
</table>
Now that you have computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$
**Exercise**: Implement `compute_cost()` to compute the value of the cost $J$.
**Instructions**:
- There are many ways to implement the cross-entropy loss. To help you, here is how we would have implemented
$- \sum\limits_{i=0}^{m} y^{(i)}\log(a^{[2](i)})$:
```python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs) # no need to use a for loop!
```
(you can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`).
```
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters):
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
Returns:
cost -- cross-entropy cost given equation (13)
"""
    m = Y.shape[1] # number of examples
# Retrieve W1 and W2 from parameters
### START CODE HERE ### (≈ 2 lines of code)
W1 = parameters["W1"]
W2 = parameters["W2"]
### END CODE HERE ###
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
logprobs = np.multiply(np.log(A2), Y) + np.multiply((1-Y), np.log(1-A2))
### END CODE HERE ###
cost = -1/m * np.sum(logprobs)
cost = np.squeeze(cost) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**cost**</td>
<td> 0.692919893776 </td>
</tr>
</table>
Using the cache computed during forward propagation, you can now implement backward propagation.
**Question**: Implement the function `backward_propagation()`.
**Instructions**:
Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation.
<img src="images/grad_summary.png" style="width:600px;height:300px;">
<!--
$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$
$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $
$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$
$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $
$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $
$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$
- Note that $*$ denotes elementwise multiplication.
- The notation you will use is common in deep learning coding:
- dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$
- db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$
- dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$
- db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$
!-->
- Tips:
- To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute
$g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
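This identity is easy to sanity-check numerically against a central finite difference. The snippet below is an illustrative check, not part of the graded code:

```python
import numpy as np

z = np.linspace(-2.0, 2.0, 9)
a = np.tanh(z)
analytic = 1 - np.power(a, 2)       # g'(z) = 1 - tanh(z)^2

# Central finite difference approximation of d/dz tanh(z)
eps = 1e-6
numeric = (np.tanh(z + eps) - np.tanh(z - eps)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # maximum discrepancy; very small
```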
```
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
### START CODE HERE ### (≈ 2 lines of code)
W1 = parameters["W1"]
W2 = parameters["W2"]
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
### START CODE HERE ### (≈ 2 lines of code)
A1 = cache["A1"]
A2 = cache["A2"]
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
    dZ2 = A2 - Y
dW2 = (1/m) * np.dot(dZ2, A1.T)
db2 = (1/m) * np.sum(dZ2, axis=1, keepdims=True)
dZ1 = np.multiply(np.dot(W2.T, dZ2), (1 - np.power(A1, 2)))
dW1 = (1/m) * np.dot(dZ1, X.T)
db1 = (1/m) * np.sum(dZ1, axis=1, keepdims=True)
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td>**dW1**</td>
<td> [[ 0.01018708 -0.00708701]
[ 0.00873447 -0.0060768 ]
[-0.00530847 0.00369379]
[-0.02206365 0.01535126]] </td>
</tr>
<tr>
<td>**db1**</td>
<td> [[-0.00069728]
[-0.00060606]
[ 0.000364 ]
[ 0.00151207]] </td>
</tr>
<tr>
<td>**dW2**</td>
<td> [[ 0.00363613 0.03153604 0.01162914 -0.01318316]] </td>
</tr>
<tr>
<td>**db2**</td>
<td> [[ 0.06589489]] </td>
</tr>
</table>
**Question**: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).
**General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.
**Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
<img src="images/sgd.gif" style="width:400px;height:400px;"> <img src="images/sgd_bad.gif" style="width:400px;height:400px;">
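The effect shown in the two animations can be reproduced on a toy objective $J(\theta) = \theta^2$, where the rule above gives $\theta \leftarrow \theta - 2\alpha\theta$. This is a minimal sketch, separate from the graded function:

```python
# Gradient descent on J(theta) = theta**2, so dJ/dtheta = 2*theta
def descend(alpha, steps=50, theta=5.0):
    for _ in range(steps):
        theta = theta - alpha * (2 * theta)
    return theta

print(abs(descend(0.1)))   # good learning rate: shrinks toward 0
print(abs(descend(1.1)))   # bad learning rate: blows up
```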
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
### START CODE HERE ### (≈ 4 lines of code)
dW1 = grads["dW1"]
db1 = grads["db1"]
dW2 = grads["dW2"]
db2 = grads["db2"]
## END CODE HERE ###
# Update rule for each parameter
### START CODE HERE ### (≈ 4 lines of code)
W1 = W1 - learning_rate * dW1
b1 = b1 - learning_rate * db1
W2 = W2 - learning_rate * dW2
b2 = b2 - learning_rate * db2
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:80%">
<tr>
<td>**W1**</td>
<td> [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ -1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[ -3.20136836e-06]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.00010457]] </td>
</tr>
</table>
### 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() ####
**Question**: Build your neural network model in `nn_model()`.
**Instructions**: The neural network model has to use the previous functions in the right order.
```
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: "n_x, n_h, n_y". Outputs = "W1, b1, W2, b2, parameters".
### START CODE HERE ### (≈ 5 lines of code)
parameters = initialize_parameters(n_x, n_h, n_y)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
### START CODE HERE ### (≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = forward_propagation(X, parameters)
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost = compute_cost(A2, Y, parameters)
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = backward_propagation(parameters, cache, X, Y)
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = update_parameters(parameters, grads)
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=False)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>**W1**</td>
<td> [[-4.18494056 5.33220609]
[-7.52989382 1.24306181]
[-4.1929459 5.32632331]
[ 7.52983719 -1.24309422]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 2.32926819]
[ 3.79458998]
[ 2.33002577]
[-3.79468846]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-6033.83672146 -6008.12980822 -6033.10095287 6008.06637269]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[-52.66607724]] </td>
</tr>
</table>
### 4.5 Predictions
**Question**: Use your model to predict by building predict().
Use forward propagation to predict results.
**Reminder**: predictions = $y_{prediction} = \mathbb{1}\{activation > 0.5\} = \begin{cases}
1 & \text{if } activation > 0.5 \\
0 & \text{otherwise}
\end{cases}$
As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)```
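For instance, here is a quick demonstration of that vectorized comparison on a toy matrix (the values are hypothetical, just to show the thresholding):

```python
import numpy as np

# Toy matrix and a 0.5 threshold (hypothetical values)
X = np.array([[0.2, 0.6],
              [0.9, 0.4]])
X_new = (X > 0.5)            # boolean matrix of the same shape
print(X_new.astype(int))     # 0/1 version of the same matrix
```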
```
# GRADED FUNCTION: predict
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
### START CODE HERE ### (≈ 2 lines of code)
A2, cache = forward_propagation(X, parameters)
predictions = (A2 > 0.5) # Vectorized
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**predictions mean**</td>
<td> 0.666666666667 </td>
</tr>
</table>
It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
```
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**Cost after iteration 9000**</td>
<td> 0.218607 </td>
</tr>
</table>
```
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
```
**Expected Output**:
<table style="width:15%">
<tr>
<td>**Accuracy**</td>
<td> 90% </td>
</tr>
</table>
Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression.
Now, let's try out several hidden layer sizes.
### 4.6 - Tuning hidden layer size (optional/ungraded exercise) ###
Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
```
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
```
**Interpretation**:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.
- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting.
**Optional questions**:
**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right.
Some optional/ungraded questions that you can explore if you wish:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?
- Play with the learning_rate. What happens?
- What if we change the dataset? (See part 5 below!)
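As a sketch of the first optional question, the hidden-layer tanh could be swapped for ReLU in the forward pass. `relu` and `forward_relu` below are hypothetical helpers, not part of the graded assignment:

```python
import numpy as np

def relu(z):
    # ReLU activation: elementwise max(0, z)
    return np.maximum(0, z)

def forward_relu(X, W1, b1, W2, b2):
    # Same forward pass as the assignment, but with ReLU in the hidden layer
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)                  # was: np.tanh(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = 1 / (1 + np.exp(-Z2))     # sigmoid output layer unchanged
    return A2
```

In backward propagation the tanh derivative term `(1 - np.power(A1, 2))` would likewise need to become the ReLU derivative `(Z1 > 0)`.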
<font color='blue'>
**You've learnt to:**
- Build a complete neural network with a hidden layer
- Make good use of a non-linear unit
- Implement forward propagation and backpropagation, and train a neural network
- See the impact of varying the hidden layer size, including overfitting.
Nice work!
## 5) Performance on other datasets
If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
```
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y.ravel(), s=40, cmap=plt.cm.Spectral);
```
Congrats on finishing this Programming Assignment!
Reference:
- http://scs.ryerson.ca/~aharley/neural-networks/
- http://cs231n.github.io/neural-networks-case-study/
```
%load_ext version_information
%version_information numpy, matplotlib, sklearn
```
```
%reset -f
import numpy as np
from landlab import RasterModelGrid
from landlab.components.overland_flow import OverlandFlow
#Mapping water depth
from landlab.plot.imshow import imshow_grid
import matplotlib.colors as mcolors
import matplotlib.pyplot as plt
colors = [(0,0,1,i) for i in np.linspace(0,1,3)]
WaterMap = mcolors.LinearSegmentedColormap.from_list('mycmap', colors, N=10)
#Hillshading
from matplotlib.colors import LightSource
#Sediment Network Stuff
from landlab.components import FlowDirectorSteepest, NetworkSedimentTransporter
from landlab.data_record import DataRecord
from landlab.grid.network import NetworkModelGrid
from landlab.plot import graph
from landlab.plot import plot_network_and_parcels
import warnings
warnings.filterwarnings('ignore')
#Sampling a raster - coordinate
def sampleRaster(raster,x,y):
x = int(x)
y = int(y)
return raster[y,x]
def TopoToNodes():
for i in range(nNodes):
netZ[i] = sampleRaster(np.reshape(z,(nX,nY)),netGrid.node_x[i],netGrid.node_y[i])
def DepthToLinks():
for i in range(nLinks):
tempX = netGrid.nodes_at_link[i,0]
tempY = netGrid.nodes_at_link[i,1]
netH[i] = sampleRaster(np.reshape(h,(nX,nY)),netGrid.node_x[tempX],netGrid.node_y[tempY])
#Define grid
nX = 200
nY = 100
spacing = 1.0
grid = RasterModelGrid((nX, nY), xy_spacing=1.)
## Topography ##
Datum = 500
z = np.ones(nX*nY) * Datum
z = grid.add_field('topographic__elevation', z, at='node')
# Long. slope
Long_Slope = 1./1000
z += grid.node_y*Long_Slope
imshow_grid(grid,'topographic__elevation')
# Channel Indentation
Channel_Width = 8.
Channel_Depth = 2.
isChannel = (grid.node_x > (nY/2 - Channel_Width/2)) * (grid.node_x < (nY/2 + Channel_Width/2))
z[isChannel] -= Channel_Depth
imshow_grid(grid,'topographic__elevation')
# Banks Slope
Transversal_Slope = 1./1000
z += np.abs(grid.node_x-(nX/2))*Transversal_Slope
imshow_grid(grid,'topographic__elevation')
# Random noise
z += ~isChannel * np.reshape(np.tile(np.random.rand(10,10)*0.20,[int(nX/10),int(nY/10)]),nX*nY)
imshow_grid(grid,'topographic__elevation')
ls = LightSource(azdeg=315, altdeg=45)
plt.imshow(ls.hillshade(np.reshape(z,[nX,nY]), vert_exag=10), cmap='gray')
plt.show()
fig = plt.figure(figsize=(10,8))
## Cross Section
ax1 = plt.subplot(2,1,1)
ax1.plot(grid.node_x[grid.node_y==nX/2],z[grid.node_y==nX/2],label="Cross Section")
ax1.set_ylabel("Elevation (??)")
ax1.set_xlabel("Distance X (??)")
ax1.legend()
## Long Section
ax2 = plt.subplot(2,1,2)
ax2.plot(grid.node_y[grid.node_x==nY/2],z[grid.node_x==nY/2],label="Longitudinal Section")
ax2.set_ylabel("Elevation (??)")
ax2.set_xlabel("Distance Y (??)")
ax2.legend()
fig.show()
#Cast water depth values
#Pointer to water depth
h = np.zeros(nX*nY)
#bools = (grid.node_x > nX/2 - Channel_Width/2) * (grid.node_x < nX/2 + Channel_Width/2) * (grid.node_y >= 95)
bools = (grid.node_x > nY/2 - Channel_Width*2) * (grid.node_x < nY/2 + Channel_Width*2) * (grid.node_y >= nX-5)
h[bools] = Channel_Depth*1.1
#bools = (grid.node_x < 20) * (grid.node_y > 85) * (grid.node_y < 99)
#h[bools] = Channel_Depth*2
h = grid.add_field('surface_water__depth', h, at='node')
fig = plt.figure(figsize=(5,4))
plt.imshow(ls.hillshade(np.reshape(z,[nX,nY]), vert_exag=10), cmap='gray',origin="lower")
imshow_grid(grid,'surface_water__depth',cmap=WaterMap)
fig.show()
#Define an erosion network
#Topology
y_of_node = np.linspace(1,99,10)
x_of_node = np.ones_like(y_of_node)*nY/2
nNodes = len(x_of_node)
nodes_at_link = []
for i in range(nNodes-1):
nodes_at_link.append((i,i+1))
print(nodes_at_link)
nLinks = len(nodes_at_link)
#Grid for sediment model
netGrid = NetworkModelGrid((y_of_node, x_of_node), nodes_at_link)
#Extract topography from raster and assign to nodes in the network
netZ = np.zeros(nNodes)
TopoToNodes()
netGrid.at_node["topographic__elevation"] = netZ
netGrid.at_node["bedrock__elevation"] = netZ - 0.5
#Print initial topographic elevation
print(netGrid.at_node["topographic__elevation"])
#Extract water depth from raster and assign it to nodes in the network
netH = np.zeros(nLinks)
DepthToLinks()
netGrid.at_link["flow_depth"] = netH
#Add other parameters :S
netGrid.at_link["reach_length"] = 50*np.ones(nLinks) # m
netGrid.at_link["channel_width"] = Channel_Width*np.ones(nLinks)
plt.figure(0)
graph.plot_graph(netGrid, at="node,link")
plt.show()
# element_id is the link on which the parcel begins.
element_id = np.repeat(np.arange(nLinks),30)
element_id = np.expand_dims(element_id, axis=1)
volume = 0.05*np.ones(np.shape(element_id)) # (m3)
active_layer = np.ones(np.shape(element_id)) # 1= active, 0 = inactive
density = 2650 * np.ones(np.size(element_id)) # (kg/m3)
abrasion_rate = 0 * np.ones(np.size(element_id)) # (mass loss /m)
# Lognormal GSD
medianD = 0.085 # m
mu = np.log(medianD)
sigma = np.log(2) #assume that D84 = sigma*D50
np.random.seed(0)
D = np.random.lognormal(
mu,
sigma,
np.shape(element_id)
) # (m) the diameter of grains in each parcel
time_arrival_in_link = np.random.rand(np.size(element_id), 1)
location_in_link = np.random.rand(np.size(element_id), 1)
lithology = ["quartzite"] * np.size(element_id)
variables = {
"abrasion_rate": (["item_id"], abrasion_rate),
"density": (["item_id"], density),
"lithology": (["item_id"], lithology),
"time_arrival_in_link": (["item_id", "time"], time_arrival_in_link),
"active_layer": (["item_id", "time"], active_layer),
"location_in_link": (["item_id", "time"], location_in_link),
"D": (["item_id", "time"], D),
"volume": (["item_id", "time"], volume)
}
items = {"grid_element": "link", "element_id": element_id}
parcels = DataRecord(
netGrid,
items=items,
time=[0.0],
data_vars=variables,
dummy_elements={"link": [NetworkSedimentTransporter.OUT_OF_NETWORK]},
)
#Call flow routing
fd = FlowDirectorSteepest(netGrid, "topographic__elevation")
fd.run_one_step()
#Call sediment model
nst = NetworkSedimentTransporter(
netGrid,
parcels,
fd,
bed_porosity=0.3,
g=9.81,
fluid_density=1000,
transport_method="WilcockCrowe",
)
#Call overland flow model
of = OverlandFlow(grid, steep_slopes=True)
of.run_one_step()
mydpi = 96
sizeFigure = 400
for t in range(2000):
of.run_one_step()
nst.run_one_step(of.dt)
TopoToNodes()
DepthToLinks()
print(netGrid.at_link['flow_depth'])
if t%50==0:
fig = plt.figure(figsize=(sizeFigure/mydpi, sizeFigure/mydpi), dpi=mydpi)
plt.imshow(ls.hillshade(np.reshape(z,[nX,nY]), vert_exag=10), cmap='gray',origin="lower")
imshow_grid(grid,'surface_water__depth',\
limits=(0,1),cmap=WaterMap,\
colorbar_label="Water depth (m)",\
plot_name="Time = %i" %t)
fig.savefig("ResultImages/" + str(t).zfill(5) + ".png")
import imageio
from glob import glob
print(netGrid.at_link['flow_depth'][5])
images=[]
original_files=list(glob("./ResultImages/*.png"))
original_files.sort(reverse=False)
for file_ in original_files:
images.append(imageio.imread(file_))
imageio.mimsave('./animation.gif', images, duration=1/5, subrectangles=True)
print(netGrid.at_node["topographic__elevation"])
print(netGrid.at_node)
print(netGrid.at_link)
print(netGrid.at_link['flow_depth'])
plot_network_and_parcels(
netGrid, parcels,
parcel_time_index=0,
parcel_color_attribute="D",
link_attribute="sediment_total_volume",
parcel_size=10,
parcel_alpha=1.0)
```
# Pandas and Friends
* Austin Godber
* Mail: godber@uberhip.com
* Twitter: @godber
* Presented at [DesertPy](http://desertpy.com), Jan 2015.
# What does it do?
Pandas is a Python data analysis tool built on top of NumPy that provides a
suite of data structures and data manipulation functions to work on those data
structures. It is particularly well suited for working with time series data.
# Getting Started - Installation
Installing with pip or apt-get:
```
pip install pandas
# or
sudo apt-get install python-pandas
```
* Mac - Homebrew or MacPorts to get the dependencies, then pip
* Windows - Python(x,y)?
* Commercial Pythons: Anaconda, Canopy
# Getting Started - Dependencies
Dependencies, required, recommended and optional
```
# Required
numpy, python-dateutil, pytz
# Recommended
numexpr, bottleneck
# Optional
cython, scipy, pytables, matplotlib, statsmodels, openpyxl
```
# Pandas' Friends!
Pandas works along side and is built on top of several other Python projects.
* IPython
* Numpy
* Matplotlib
## Pandas gets along with EVERYONE!
<img src='panda-on-a-unicorn.jpg'>
# Background - IPython
IPython is a fancy python console. Try running ``ipython`` or ``ipython --pylab`` on your command line. Some IPython tips:
```python
# Special commands, 'magic functions', begin with %
%quickref, %who, %run, %reset
# Shell Commands
ls, cd, pwd, mkdir
# Need Help?
help(), help(obj), obj?, function?
# Tab completion of variables, attributes and methods
```
# Background - IPython Notebook
There is a web interface to IPython, known as the IPython notebook, start it
like this
```
ipython notebook
# or to get all of the pylab components
ipython notebook --pylab
```
# IPython - Follow Along
Follow along by connecting to TMPNB.ORG!
* http://tmpnb.org
# Background - NumPy
* NumPy is the foundation for Pandas
* Numerical data structures (mostly Arrays)
* Operations on those.
* Less structure than Pandas provides.
# Background - NumPy - Arrays
```
import numpy as np
# np.zeros, np.ones
data0 = np.zeros((2, 4))
data0
# Make an array with 20 entries 0..19
data1 = np.arange(20)
# print the first 8
data1[0:8]
```
## Background - NumPy - Arrays
```
# make it a 4,5 array
data = np.arange(20).reshape(4, 5)
data
```
## Background - NumPy - Arrays
Arrays have NumPy specific types, `dtypes`, and can be operated on.
```
print("dtype: ", data.dtype)
result = data * 20.5
print(result)
```
Now, on to Pandas
-----------------

Pandas
------
* Tabular, Timeseries, Matrix Data - labeled or not
* Sensible handling of missing data and data alignment
* Data selection, slicing and reshaping features
* Robust data import utilities.
* Advanced time series capabilities
Data Structures
----------------
* Series - 1D labeled array
* DataFrame - 2D labeled array
* Panel - 3D labeled array (More D)
# Assumed Imports
In my code samples, assume I import the following
```
import pandas as pd
import numpy as np
```
# Series
* one-dimensional labeled array
* holds any data type
* axis labels known as index
* implicit integer indexes
* ``dict``-like
# Create a Simple Series
```
s1 = pd.Series([1, 2, 3, 4, 5])
s1
```
# Series Operations
```
# integer multiplication
print(s1 * 5)
```
# Series Operations - Cont.
```
# float multiplication
print(s1 * 5.0)
```
# Series Index
```
s2 = pd.Series([1, 2, 3, 4, 5],
index=['a', 'b', 'c', 'd', 'e'])
s2
```
# Date Convenience Functions
A quick aside ...
```
dates = pd.date_range('20130626', periods=5)
print(dates)
print()
print(dates[0])
```
# Datestamps as Index
```
s3 = pd.Series([1, 2, 3, 4, 5], index=dates)
print(s3)
```
# Selecting By Index
Note that the integer index is retained along with the new date index.
```
print(s3[0])
print(type(s3[0]))
print()
print(s3[1:3])
print(type(s3[1:3]))
```
# Selecting by value
```
s3[s3 < 3]
```
# Selecting by Label (Date)
```
s3['20130626':'20130628']
```
Series Wrapup
-------------
Things not covered but you should look into:
* Other instantiation options: ``dict``
* Operator Handling of missing data ``NaN``
* Reforming Data and Indexes
* Boolean Indexing
* Other Series Attributes:
* ``index`` - ``index.name``
* ``name`` - Series name
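A quick sketch of two of those points, ``dict`` instantiation and ``NaN`` handling during alignment:

```python
import numpy as np
import pandas as pd

# Series from a dict: the keys become the index
s1 = pd.Series({'a': 1.0, 'b': 2.0, 'c': 3.0})
s2 = pd.Series({'b': 10.0, 'c': 20.0, 'd': 30.0})

# Arithmetic aligns on the union of indexes; unmatched labels produce NaN
total = s1 + s2
print(total)
```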
DataFrame
---------
* 2-dimensional labeled data structure
* Like a SQL Table, Spreadsheet or ``dict`` of ``Series`` objects.
* Columns of potentially different types
* Operations, slicing and other behavior just like ``Series``
# DataFrame - Simple
```
data1 = pd.DataFrame(np.random.rand(4, 4))
data1
```
# DataFrame - Index/Column Names
```
dates = pd.date_range('20130626', periods=4)
data2 = pd.DataFrame(
np.random.rand(4, 4),
index=dates, columns=list('ABCD'))
data2
```
# DataFrame - Operations
```
data2['E'] = data2['B'] + 5 * data2['C']
data2
```
See? You never need Excel again!
# DataFrame - Column Access
Deleting a column.
```
# Deleting a Column
del data2['E']
data2
```
# DataFrame
Remember this, data2, for the next examples.
```
data2
```
# DataFrame - Column Access
As a dict
```
data2['B']
```
# DataFrame - Column Access
As an attribute
```
data2.B
```
# DataFrame - Row Access
By row label
```
data2.loc['20130627']
```
# DataFrame - Row Access
By integer location
```
data2.iloc[1]
```
# DataFrame - Cell Access
Access column, then row or use iloc and row/column indexes.
```
print(data2.B[0])
print(data2['B'][0])
print(data2.iloc[0,1]) # [row,column]
```
# DataFrame - Taking a Peek
Look at the beginning of the DataFrame
```
data3 = pd.DataFrame(np.random.rand(100, 4))
data3.head()
```
# DataFrame - Taking a Peek
Look at the end of the DataFrame.
```
data3.tail()
```
# DataFrame Wrap Up
Just remember,
* A `DataFrame` is just a bunch of `Series` grouped together.
* Any one dimensional slice returns a `Series`
* Any two dimensional slice returns another `DataFrame`.
* Elements are typically NumPy types or Objects.
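A minimal demonstration of those slicing rules:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(12).reshape(3, 4), columns=list('ABCD'))
col = df['A']           # one-dimensional slice -> Series
sub = df[['A', 'B']]    # two-dimensional slice -> another DataFrame
```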
# Panel
Like DataFrame but 3 or more dimensions.
# IO Tools
Robust IO tools to read in data from a variety of sources
* CSV - [pd.read_csv()](http://pandas.pydata.org/pandas-docs/stable/io.html#io-read-csv-table)
* Clipboard - [pd.read_clipboard()](http://pandas.pydata.org/pandas-docs/stable/io.html#clipboard)
* SQL - [pd.read_sql_table()](http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries)
* Excel - [pd.read_excel()](http://pandas.pydata.org/pandas-docs/stable/io.html#io-excel)
# Plotting
* Matplotlib - [s.plot()](http://pandas.pydata.org/pandas-docs/stable/visualization.html#plotting-with-matplotlib) - Standard Python Plotting Library
* Trellis - [rplot()](http://pandas.pydata.org/pandas-docs/stable/rplot.html) - An 'R' inspired Matplotlib based plotting tool
# Bringing it Together - Data
The csv file (``phx-temps.csv``) contains Phoenix weather data from
GSOD:
```
1973-01-01 00:00:00,53.1,37.9
1973-01-02 00:00:00,57.9,37.0
...
2012-12-30 00:00:00,64.9,39.0
2012-12-31 00:00:00,55.9,41.0
```
# Bringing it Together - Code
Simple `read_csv()`
```
# simple readcsv
phxtemps1 = pd.read_csv('phx-temps.csv')
phxtemps1.head()
```
# Bringing it Together - Code
Advanced `read_csv()`, parsing the dates and using them as the index, and naming the columns.
```
# define index, parse dates, name columns
phxtemps2 = pd.read_csv(
'phx-temps.csv', index_col=0,
names=['highs', 'lows'], parse_dates=True)
phxtemps2.head()
```
# Bringing it Together - Plot
```
import matplotlib.pyplot as plt
%matplotlib inline
phxtemps2.plot() # pandas convenience method
```
Boo, Pandas and Friends would cry if they saw such a plot.
# Bringing it Together - Plot
Let's see a smaller slice of time:
```
phxtemps2['20120101':'20121231'].plot()
```
# Bringing it Together - Plot
Let's operate on the `DataFrame`: take the difference between the highs and lows.
```
phxtemps2['diff'] = phxtemps2.highs - phxtemps2.lows
phxtemps2['20120101':'20121231'].plot()
```
# Pandas Alternatives
* AstroPy seems to have similar data structures.
* I suspect there are others.
References
----------
* [Pandas Documentation](http://pandas.pydata.org/pandas-docs/stable/index.html)
* [Python for Data Analysis](http://www.amazon.com/Python-Data-Analysis-Wes-McKinney/dp/1449319793/)
* [Presentation Source](https://github.com/desertpy/presentations)
# Thanks! - Pandas and Friends
* Austin Godber
* Mail: godber@uberhip.com
* Twitter: @godber
* Presented at [DesertPy](http://desertpy.com), Jan 2015.
## Functions
pre-defined functions
user-defined functions
recursive functions
lambda functions
#### Arguments Function types
required arguments
default arguments
positional arguments
key word arguments
optional arguments
#### return Function types
no return
return
argument as list
argument as dictionary
recursive functions
generator functions
<img src = "https://www.learnbyexample.org/wp-content/uploads/python/Python-Function-Syntax.png">
#### Notes
##### Pre-defined --> those functions which are already made (built in).
##### User-defined --> those functions which we make for some environment. User-defined does not mean we build from scratch; we can re-create.
##### Default function --> some functions don't take arguments; these are called default functions here.
##### Lambda function --> a one-line function without a name; when it is used neither before nor after (no reuse), a lambda is the right fit.
##### Recursive function --> repetition of a function by itself; it calls itself and we stop it at some point (a base case).
##### Required argument --> those arguments which the function requires and which are necessary, e.g. husband/wife in a marriage.
##### Optional argument --> those arguments which are not required, e.g. relatives, walima, hall.
##### return --> the result can be stored in a variable, and the answer can be input for another function.
##### Non-return --> performs its work without returning any value.
##### Default argument --> set by the programmer.
##### Positional argument --> matched by the position we define.
##### Keyword argument --> with the help of a key=value pair we can give any parameter a value.
##### Argument with list --> * unpacks the values one by one into the function.
##### Argument with dictionary --> ** (double asterisk) unpacks the key/value pairs one by one. Every function has: function declaration, function body, function calling.
##### Parameter vs Argument --> A parameter is the variable listed inside the parentheses in the function definition. An argument is the value that is sent to the function when it is called.
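The outline above lists lambda and recursive functions, but the cells in this section never get to them; minimal sketches of both (with a recursive factorial as the classic example):

```python
# lambda: a one-line anonymous function
square = lambda x: x * x
print(square(5))          # 25

# recursive: the function calls itself and stops at a base case
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)
print(factorial(4))       # 24
```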
```
# We want to print the same statements 3 times, so we have to do the same work many times...
print('Pakistan zindabad')
print('we are pakistani')
print('we love our country')
print('Pakistan zindabad')
print('we are pakistani')
print('we love our country')
print('Pakistan zindabad')
print('we are pakistani')
print('we love our country')
# To overcome this problem, functions step forward...
def my_country():
print('Pakistan zindabad.')
print('we are pakistani.')
print('we love our country..!!!')
my_country()
my_country()
my_country()
def print_function(): # Function Dec
#Body Start indentation block
'''print 3 statement''' # Doc_string
return("Pakistan zinda bad") # Statment1
# with multiple returns, the function stops at the first return it reaches and returns that value
return("We are ABC")#statment2
return("We love our country!")#Statment3
return("----------------------")
#Body End indentation block
print(print_function())
print_function.__doc__
```
#### Pre define functions
Built-in function
```
print('print is a pre define function')
input('input is a pre-define function: ')
print(type(11))
print(len('Haseeb Aslam'))
print(id('address'))
```
#### User define functions
```
def print_function(): # Function Dec
#Body Start indentation block
'''print 3 statement''' # Doc_string
print("Pakistan zinda bad") # Statment1
print("We are ABC")#statment2
print("We love our country!")#Statment3
print("----------------------")
#Body End indentation block
print_function()
print_function.__doc__
```
#### Required arguments functions
```
#For example:
len() #len() takes exactly one argument (0 given)
def id_card(sid, sname, fname, course, timing):
#doc_string
'''
sid --> Student id
sname --> Student Name
fname --> Father's Name
course --> Course
timing --> Timing
'''
print(
f"""
........PIAIC........
Student id: {sid}
Student Name: {sname}
Father's Name: {fname}
Course: {course}
Timing: {timing}
"""
)
print(id_card.__doc__)
id_card('SMIT04588','Qasim','Hassan','A.I','09 to 11')
id_card('SMIT04613', 'Haseeb', 'Aslam', 'DS', '11 to 2:30')
id_card('SMIT04533', 'Yasir', 'Akbar', 'DS', '2:30 to 4:30')
#id_card() # SHIFT+TAB to check number of arguments and doc_string
def id_card(sid, name, fname, course, time):
print(
"""
--->SAYLANI<---
student id : {0}
student name : {1}
student father name : {2}
course : {3}
time : {4}""".format(sid, name,fname, course,time) # if we don't pass position numbers, format() places values in the order they are passed...
)
id_card('SMIT04588','Qasim','Hassan','A.I','09 to 11')
id_card('SMIT04613', 'Haseeb', 'Aslam', 'DS', '11 to 2:30')
id_card('SMIT04533', 'Yasir', 'Akbar', 'DS', '2:30 to 4:30')
def add(num1,num2,num3):
print(num1,num2,num3)
print(f'sum is: {num1+num2+num3}')
add(5,6,0)
add(5,6) # if the number of arguments is not as required by the function, it will generate an error...
```
#### Default or Optional Arguments
```
# to tackle this argument problem we use default or optional arguments
def add(x,y=0,z=1): # you can set any argument's default as per your need
print(x,y,z)
print(f'sum is: {x+y+z}')
add(5) #x=5,y=0,z=1
add(5,6) #x=5,y=6,z=1
add(5,6,0) #x=5,y=6,z=0
# given arguments have higher priority than default arguments...
```
#### mutable default parameter values
```
def f(my_list=[]):
my_list.append('X')
return my_list
f() # the default list is shared across calls, so "X" accumulates each time you run this...
f(["Haseeb",13])
def f(my_list=None):
if my_list is None:
my_list= []
my_list.append('X')
else:
my_list.append("*")
return my_list
f() # if no list is passed, my_list is None, so a fresh list is created and "X" is added
f(["Haseeb",13]) # if a list is passed as the argument, "*" is added at the end
```
#### Positional Arguments
```
def id_card(sid, sname, fname, course, timing):
#doc_string
'''
sid --> Student id
sname --> Student Name
fname --> Father's Name
course --> Course
timing --> Timing
'''
return(
f"""
........PIAIC........
Student id: {sid}
Student Name: {sname}
Father's Name: {fname}
Course: {course}
Timing: {timing}
"""
)
print(id_card('SMIT04613', 'Haseeb', 'Aslam', 'DS', '11 to 2:30'))
print(id_card('Qasim','SMIT04588','A.I','09 to 11','Hassan'))
# it won't raise an error; it just places the data by the position in which the arguments are passed
```
#### Key_word Arguments
```
def id_card(sid, sname, fname, course, timing):
#doc_string
'''
sid --> Student id
sname --> Student Name
fname --> Father's Name
course --> Course
timing --> Timing
'''
return(
f"""
........PIAIC........
Student id: {sid}
Student Name: {sname}
Father's Name: {fname}
Course: {course}
Timing: {timing}
"""
)
print(id_card(sname='Haseeb', course='DS', timing='11 to 2:30',sid = 'SMIT04613',fname = 'Aslam'))
#id_card() # SHIFT+TAB to check number of arguments and doc_string
```
#### Non-return functions
```
# those functions which do not return a value, so we cannot get a value back from them...
print("Haseeb")
print(print("haseeb"))
a = print("HAseeb Aslam")
print(a)
def id_card(sid, sname, fname, course, timing):
#doc_string
'''
sid --> Student id
sname --> Student Name
fname --> Father's Name
course --> Course
timing --> Timing
'''
print(
f"""
........PIAIC........
Student id: {sid}
Student Name: {sname}
Father's Name: {fname}
Course: {course}
Timing: {timing}
"""
)
data = id_card('SMIT04613', 'Haseeb', 'Aslam', 'DS', '11 to 2:30')
print("-----------")
print(data)
#id_card() # SHIFT+TAB to check number of arguments and doc_string
```
#### Return functions
```
def add(x, y, z):
return x+y+z
add(2,3,4) # return function
def add(x, y, z):
return x+y+z # only the first return executes; the following returns are unreachable
return x*y*z
return x/2*2
add(2,3,4)
def add(x, y, z):
return x+y+z, x*y*z, x/2*2
add(2,3,4)
def id_card(sid, sname, fname, course, timing):
#doc_string
'''
sid --> Student id
sname --> Student Name
fname --> Father's Name
course --> Course
timing --> Timing
'''
return(
f"""
........PIAIC........
Student id: {sid}
Student Name: {sname}
Father's Name: {fname}
Course: {course}
Timing: {timing}
"""
)
data = id_card('SMIT04613', 'Haseeb', 'Aslam', 'DS', '11 to 2:30')
print("-----------")
print(data)
#id_card() # SHIFT+TAB to check number of arguments and doc_string
```
#### Pass argument with list
```
l1 = [0,'Haseeb', 'Aslam', '013', 'DS']
print(l1)
l1 = [0,'Haseeb', 'Aslam', '013', 'DS']
print(l1[0], l1[1], l1[2], l1[3], l1[4])
l1 = [0,'Haseeb', 'Aslam', '013', 'DS']
print(*l1) # the * operator unpacks the list into separate arguments
def id_card(sid, sname, fname, course, timing):
#doc_string
'''
sid --> Student id
sname --> Student Name
fname --> Father's Name
course --> Course
timing --> Timing
'''
return(
f"""
........PIAIC........
Student id: {sid}
Student Name: {sname}
Father's Name: {fname}
Course: {course}
Timing: {timing}
"""
)
data = ['SMIT04613', 'Haseeb', 'Aslam', 'DS', '11 to 2:30']
print(id_card(*data)) # the asterisk unpacks the list values one by one as positional arguments
def add(name,fname,*z): # *z collects every argument from the 3rd onward into a single tuple; access them like tuple elements
print(name) # 1st atr
print(fname) # 2nd atr
print(type(z))
print(z[0]) # Select element from "z"tuple
print(z[5]) # Select element from "z"tuple
print(sum(z)) # Sum of All element in "z"tuple
add('Haseeb', 'Aslam', 2,3,4,5,6,7)
```
#### Pass argument with dictionary
```
def id_card(sid, name, fname, course, time):
print(
"""
--->SAYLANI<---
student id : {}
student name : {}
student father name : {}
course : {}
time : {}""".format(sid, name,fname, course,time)
)
l1 = {"sid":0,
"name": 'Haseeb',
"fname":'ASlam',
"course":'DS',
"time":'09 to 11pm'}
id_card(*l1) # * unpacks the dictionary's keys as positional arguments
id_card(**l1) # ** unpacks the key/value pairs as keyword arguments
```
#### Use Of ** in Parameter
```
def my_func(**nums):
print(nums)
my_func(name = 'qasim', num1 = 1, num2= 2, num3 = 3, num4 = 4)
def my_func(name, **nums):
print(name)
print(nums)
my_func('qasim', num1 = 1, num2= 2, num3 = 3, num4 = 4)
```
#### Unlimited arguments
```
def my_func(name, *number):
print(name)
print(number)
print(type(number))
print(sum(number))
my_func('Haseeb', 2,3,4,5,6)
def my_func(*number, name):
print(name)
print(number)
print(sum(number))
my_func(2,3,4,5,6, name = 'Haseeb')
def my_func(*number, name):
print(name)
print(number)
print(sum(number))
my_func(2,3,4,5,6, 'Haseeb') # TypeError: 'name' is keyword-only here and must be passed as name=...
def my_func(*number, name):
print(name)
print(number)
print(sum(number))
my_func(2,3,4,5,6, name = 'Haseeb',5,7,6) # SyntaxError: positional argument follows keyword argument
def my_func(sid,*number, name):
print(name)
print(number)
print(sum(number))
my_func(sid=0, number= [2,3,4,5,6], name= 'Haseeb') # TypeError: *number cannot be passed by keyword
def my_func(sid,*number, name):
print(name)
print(number)
print(sum(number))
my_func(sid=0, [2,3,4,5,6], name= 'Haseeb') # SyntaxError: positional argument follows keyword argument
def my_func(sid,*number, name):
print(name)
print(number)
print(sum(number))
my_func(0, number= [2,3,4,5,6], name= 'Haseeb') # TypeError: unexpected keyword argument 'number'
def my_func(sid,*number, name):
print(name)
print(number)
print(sum(number))
my_func(0, [2,3,4,5,6], name= 'Haseeb') # number becomes ([2,3,4,5,6],), so sum(number) raises a TypeError
def my_func(sid,*number, name):
print(sid)
print(name)
print(number)
print(sum(number))
my_func(0, 2,3,4,5,6, name= 'Haseeb')
```
#### Recursive functions
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAS0AAACnCAMAAABzYfrWAAABGlBMVEX///8AytPG7/L4/f3m+PlM1duA3uPY9PUAAACZ5Ogfz9fa9PYAzdXt7e36+vry8vLk5ORWV4Srq6tbXIc+Pj7Pz88wefeQkJDb29vOztrf3+Y2NjaamrTp6elBQUFzc3O+vr5nZ2cmJibu4v5LS0u0yvyYmJi1tbWhoaHT1N5lZo7KysotLS3b5f1SUlLQtPw9f/h9fX0TExN7e3tHhfiWRviivvvx9f5gYGCKLPhTi/hqa5GRkayuefqjXfnV4v2/0vx4eZusrMHCwtC17vH17/6ROfgidPdll/neyP11oPmFq/rXvfzG1/yzs8aCg6LDnPu5h/qhWfnr3/2GH/e6jfqVtfrl0/2rc/nAl/vNrfumaPmaufpMTX5k7v2YAAAMLUlEQVR4nO2dDV/aOByA481z3Z1NgZYXy1upyEsLDBAHiIrA9JxTuM3d7e62ff+vcf+ktBSoQhQRWJ79VkP6Ajz+k6ZpExHicDgcDofD4XA4HA6Hw3kxMJ7Ou7hY/udw8/vOq5f9APfy56fJnJvTs29TmctkZ3f3Jd/+If75PJGx3zy9eXvzbO+XKwTpz0o67rl++83u1uraOj2byLi5vHvGtwtUI8NUJS1Nr371/rddwq/LZp6yf4FtW/iiPsx7e+RZa9Xrk4mxpMeWeLTaSUX0nJ0nJg7tpM+pO3e2tqitrSWzu7U9y9XbZrP5J7V1cfrl8ssnsHRxdHl5dNn88mFsw4Or+nUodEy+8vlxrVZrW7m1cjn0EaHiH+R1/Y8DWJbb5+VQ6B2kitflWrl2bm1YK19Z+6RKo6NqglUmUVAf5b7+ldraWTKzbX24/Gt///QIbOHTo7v9D0enkLrb//Pow/7deNAcHF+/K7YPIPO29q5e/Jvq+l77Xizego5iiLx8FSK2atdXt8VzeI2vr4v14gFZc14+LxY/1sghRSHnOmw0af2MJBqu3PfE14yPvmh+35pp6xup3y8+nxFvXyF506QV1tuj/cktD8oHVgJffSQ/arCsl78P147Zuhp6rh+/G66uX5FUvUaW2arPdVin5oqPt2LA1xzfcJG8nm2r+RdZ/gO2/v384e7u7uaSngm9bB0XrUS9/K4IXF2RiBnmjduyFULo/V2kEtq1A9ilTT1noqLrsIFoBHny6s2S21uzbdWb/5IfpN76dNkkPGBrGDFFqIuA2jFC72p2aR2zZUcUwgfH5etbSLSH+xBb4YTblla4x9bSmTu2qK0zV1F4yFY9dOvklcdt1SdsAe2PoXNiq+3kzBlbS2cOW5+/waJ+dkaaWF9H2Q/aOv5o5xUdcVaqXZ6yhdD1NXbVYFBvCW4/Tr3lcfG1XOaw9V8T2gn/XZ6RdsM30nqwGvAP2YLGAHHSJlXW1THETL1IKnBS6V+P26oTg3VaWf1dozatc2LAdVj7nCgm3WdKwi9zf8/FMIet+l9fLpunn8iZcf/bl2bzi3UN9N90W/7AqaOgugrBP6riGmqj0DUizYrycdk6bzonynotdAwVF93vOzS8QjV6UmgkRnHkBFqwGh17O2jPr1wtDwXg681XfEEjqf715ubOasNf7E8VjHpxlFW/PW/XsZV7fl6kOoq3t0VEk8VR8719fmu/gvVtKxkpjNrypYr9ObKm+5NDU3EFWxAvQtZuwHtfJ6Jt4mp3980U758z3FbVFso2rPKXS4oea2kHhPd14nN+nZW1hUQrpETPE+H2m917+EltzeDVjlf/1vbPGlsz2d75bcrWHJe9T2GNbZFum8kq/bG28jHvHtrJN1xnW9M80hbu+GPzbMdtUXoddZ7NfgJb0vCsir3abbDaK8970w22tSfHYn4JdQ3ZiIEvqWsYrR5CXRkWAyOPTuR+6wT3W7LRRSgm53vyCSKJPsp3ZGPgGWubbMswZKnvH/QHMugayK1B
C1SMbBktI5b3d/odeQ/F/Hkkt1QkwUI1jG6MvJhmk211VCwaHVVSO341Lw8QUlsD7IqtEwnv+QeqqorUVtffRX2o7QdGT5Jisle1v8G2yGlOlVsEea8HYUUZ2fJD9OCBLLdiEtjaQ/GWDC9V1DLILsbA4/CbbAvqI9V/ohKkHo2VXs+KrY5tC+F8twUVFrEF0RYjjlqtPbqPx+E33JbUMfrqXssQRRkSXf8A9eROvi/btnr+GN4zOpYtqLAgH6IvpqondiyOsbm2ej9INKlwzpMNCKdeyy9DTYZEKHtGSwZbP8CWNJA7LVgd+wG2QGAHzp74BHYhJ4ZpNtdWvEfLktTrWgm13+2RZpTY7+bzPRHl6StY3Yd4IhmwSS9PtoTS2c97Hn5zbT0H3BYL3BYL3BYLC7EVDtt959qhffNBqkze/FsGa2DLJ1Q1K4VTVfsRq4DwEvfj18AWTqdtMTnFjqiIkvK6WfPMrIEtJI28jDps4/f0ET0r62BrdeC2WOC2WOC2WHjQVlyV4P9THgL7mWyd/Nizeh4ezc9kK0Y74LktB26LhTFb/dhJn9RS+Vist9dVua1JXLZwx28YpDO0J8uGX/bnua1JXLb6/hM1fiL3ccvoS3ukb5nbmsBla0C8SHlV9ZN70N0NjC2csi7FI0rmcQcYswXXrFiULFv9zbOFK4nhaCizEHh403tw2YrJ5PkGo4+NVh6pnVUqic5DKaPHRLHo1WOD7Q1c/RbOPgHBGTrmDEpkw2VL6vhbhtySSC3fkmViy78CtkpapBSN0ofacwlFqVANYjihRNMaZDWIyGwDPJjJeFiJKsSIWVKURIqsgXhSEmF6oHg07BwUNxqP6fFxtyCk2GBAH/nrDQbd/kBF/UEe9b0fnpmTBdhSkomMmSMlJ6wHglk9SUKlIWRMrRKBPIV860AUHGmJZEPTSK90XKgEg9kUudVZiWaDOZ1WUwHd1duqVR8TXOOtU2dWhkUNEFqArYQ+7AL0CWSshFaFZc75ruEEtUUiSqsOB+ygrGAHjkmHpQTo6M2ke2gr1sOIndXvg0g4XfF6RBQlUw/Auc0pR2O27NgR9YZm/b7DugT7aIIGfpSK+7BK6hGfZfVtKXbEZHSBEka45Iyzd9salTRfUkjkyIrkcB8oyDg6Fk1p+5ewPT085V7mGCb+FBZhyzYzGoOJG7bBMVsF12hpMaOQNUndyZmILWfI+c6CpyB4Aou0pY3GYFbStjjLVnjKFjRBSfHLCaOWRMNd9nDBjrT3v7Ew9Qj9IlmkLRQl7QQclMhQwjAs4z7ShgqSAXTpMVsRkggSW2IhCRtKQVKLZaIum2bVPbxuRViArYJT6iKlQjodbZBoySmFUlontVGp0EiXKtGxWj6np1PpArUchGZZukCbXhHd1YAPl17iDtsMFmDLHLWSRDOX06wA8WUzOZN4k7RM1keDJ27aAiQzkAkErbNiXMvkNKsNUtGdhlGwmn3ix3oOVus6sdEY6vIlKg9v+jKslC0kDUcFS43DFx+f78Vq2ULYRwur5Ju14cuwYrZWHG6LBW6LBW6LBW6LBW6LBW6LBW6LBW6LBW6LBW6LBW6LBW6LBW6LBW6LBW6LBW6LBW6LhYXYCuTsmznBjH0DCGdW8abNE1mErfhoNGeyat+PDwjKinauP4FF2JL0gvP8UdW+Hx+s3vPnP9aZhZREcRRFrtQLDH19bngtzwK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xQK3xcKybcWtJ8TFyErOvDKLJduKDOcg80WTM7ZcSZZry1e1/9yPtJqTIM1gHlsiRnEtSIdb4IhplyEpYppxbE+KS5ewkIKaj2yAfaZpzeADG2rDfXCyMZpgi8zstm7MYUtUtDCdaRJioyToQoM+E68lyFSS
JkIpOvqiUgUPuXREEQSdrK8IelWgkwdG0rAPnTEWBXXXjK+l0vpVXfPYKkUrPhSJQyqdwvAyDZmmnowjHAQdSTo3YjgK3z1QKJmQCaks8SJqZO9qGCNfgsybi1IJ12FNfQUnBpzBHNOpgp5hFGgFckYzyTy5KcUeCOW25UxfFyjYo6MyUcl5XTgcHRUqfHuWwFdrwzyxlbbH7oSVXCAQyBAnVeeLu205k0qKJSFslbpkGnYJVIhhsZrxOizTfKYvzOzpVMW0/SUPoykgmYQiJDjz3nraQjhb0slfvsMpxdonMmlLKq2hra3d+W2FRxWzM0H10FZlwhZ4MkuKCKfB0ZSv4xNVx4fzpKPtX9aI1zNkuWxpo4o56dRbh7RVoCiTtshkuXQ28FFewj3Jd1DfwOGebltQrExRNElMBAuNoBjPQvnKVgNSPDxhK6uJYrxC5s6VlIYpxjXaKg0XXKPwMtHNG+2Jxv6kgJgkjSwaIcE0SQZJk1OoCodZ2t5y4igHzS7BmmPelxKGrTUU0V0Vl76W1z5sYJ8TEVIkbic8Jgn2RcSpDVHANe+1soGjPRdMsjSUHVjFSatXDZyxLqYjSe1lP8iasH4XhxwOh8PhcDgcDofD4czN/0NkckcdbLAsAAAAAElFTkSuQmCC">
<img src = "https://s3.ap-south-1.amazonaws.com/afteracademy-server-uploads/what-is-the-difference-between-iteration-and-recursion-banner-b9507914affcc7de.png">
```
def factorial(n):
    if n == 1:
        print(n)
        return 1
    print(n)
    return n * factorial(n-1)
factorial(5)
def add(n):
if n == 1:
return 1
return n + add(n-1)
add(10)
```
#### Generator function
A generator function has the following properties:
- it does not produce all of its values at once
- it remembers its state between calls, resuming from the last value produced
```
range(1,20000) # a range object is lazy: it does not materialize all values in memory, it only stores its bounds
def my_range(n): # this version prints the values; it is not yet a generator
    for i in range(1, n+1):
        print(i)
my_range(6)
```
#### When to use yield instead of return in Python?
The yield statement suspends a function's execution and sends a value back to the caller, but retains enough state for the function to resume where it left off. When resumed, the function continues execution immediately after the last yield. This allows its code to produce a series of values over time, rather than computing them all at once and returning them like a list.
```
def my_range(n):
for i in range(1,n+1):
yield i
a = my_range(10)
print(a)
print(next(a))
print(next(a))
print(next(a))
# A Python program to generate squares from 1
# to 100 using yield and therefore generator
# An infinite generator function that prints
# next square number. It starts with 1
def nextSquare():
    i = 1
    # An infinite loop to generate squares
    while True:
        yield i*i
        i += 1 # execution resumes from this point
# Driver code to test above generator
# function
for num in nextSquare():
if num > 100:
break
print(num)
```
#### lambda function
A lambda function has the following properties:
- it is a single-expression, one-line function
- it is anonymous (defined without a name)
- it cannot be referenced before or after the expression in which it appears
<img src = "https://0xbharath.github.io/python-foundations/img/lambda.png">
```
(lambda x : x) (5)
(lambda x : x**x) (2)
def add(x,y): return x+y
add(5,3)
(lambda x,y : x+y ) (5,3)
(lambda x,y,z : x+y+z)(2,3,4)
a = lambda x,y,z : x+y+z
a(2,3,4)
square = (lambda x : x**2) # note: this squares x; it is not a square root
square(3)
full_name = lambda fname, lname : f'complete name is: {fname} {lname}'
full_name('Haseeb', 'Aslam')
```
#### Function Annotation
Annotations provide a way to attach metadata to a function’s parameters and return value.
```
def f(a: '<a>' = 2, b: '<b>' = 3) -> '<ret_value>':
pass
f.__annotations__
f.__annotations__['a']
def area(
r: {
'desc': 'radius of circle',
'type': float
}) -> \
{
'desc': 'area of circle',
'type': float
}:
return 3.14159 * (r ** 2)
print(area(2.5))
area.__annotations__
def f(a: int = 12, b: str = 'baz') -> float:
print(a, b)
return(3.5)
f.__annotations__
f()
def f(a, b):
return
f.__annotations__ = {'a': int, 'b': str, 'return': float}
f.__annotations__
```
| github_jupyter |
# Tutorial 2: Inside CrypTensors
Note: This tutorial is optional, and can be skipped without any loss of continuity to the following tutorials.
In this tutorial, we will take a brief look at the internals of ```CrypTensors```.
Using the `mpc` backend, a `CrypTensor` is a tensor encrypted using secure MPC protocols, called an `MPCTensor`. In order to support the mathematical operations required by the `MPCTensor`, CrypTen implements two kinds of secret-sharing protocols: arithmetic secret-sharing and binary secret-sharing. Arithmetic secret sharing forms the basis for most of the mathematical operations implemented by `MPCTensor`. Similarly, binary secret-sharing allows for the evaluation of logical expressions.
In this tutorial, we'll first introduce the concept of a `CrypTensor` <i>ptype</i> (i.e. <i>private-type</i>), and show how to use it to obtain `MPCTensors` that use arithmetic and binary secret shares. We will also describe how each of these <i>ptypes</i> is used, and how they can be combined to implement desired functionality.
```
#import the libraries
import crypten
import torch
#initialize crypten
crypten.init()
#Disables OpenMP threads -- needed by @mpc.run_multiprocess which uses fork
torch.set_num_threads(1)
```
## <i>ptype</i> in CrypTen
CrypTen defines the `ptype` (for <i>private-type</i>) attribute of an `MPCTensor` to denote the kind of secret-sharing protocol used in the `CrypTensor`. The `ptype` is, in many ways, analogous to the `dtype` of PyTorch. The `ptype` may have two values:
- `crypten.arithmetic` for `ArithmeticSharedTensors`
- `crypten.binary` for `BinarySharedTensors`
We can use the `ptype` attribute to create a `CrypTensor` with the appropriate secret-sharing protocol. For example:
```
#Constructing CrypTensors with ptype attribute
#arithmetic secret-shared tensors
x_enc = crypten.cryptensor([1.0, 2.0, 3.0], ptype=crypten.arithmetic)
print("x_enc internal type:", x_enc.ptype)
#binary secret-shared tensors
y = torch.tensor([1, 2, 1], dtype=torch.int32)
y_enc = crypten.cryptensor(y, ptype=crypten.binary)
print("y_enc internal type:", y_enc.ptype)
```
### Arithmetic secret-sharing
Let's look more closely at the `crypten.arithmetic` <i>ptype</i>. Most of the mathematical operations implemented by `CrypTensors` are implemented using arithmetic secret sharing. As such, `crypten.arithmetic` is the default <i>ptype</i> for newly generated `CrypTensor`s.
Let's begin by creating a new `CrypTensor` using `ptype=crypten.arithmetic` to enforce that the encryption is done via arithmetic secret sharing. We can print values of each share to confirm that values are being encrypted properly.
To do so, we will need to create multiple parties to hold each share. We do this here using the `@mpc.run_multiprocess` function decorator, which we developed to execute crypten code from a single script (as we have in a Jupyter notebook). CrypTen follows the standard MPI programming model: it runs a separate process for each party, but each process runs an identical (complete) program. Each process has a `rank` variable to identify itself.
Note that the sum of the two `_tensor` attributes below is equal to a scaled representation of the input. (Because MPC requires values to be integers, we scale input floats to a fixed-point encoding before encryption.)
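The scale-then-share step described above can be sketched in plain Python. This is a simplified illustration (not CrypTen's actual implementation); the 64-bit ring and 16-bit fixed-point precision are assumptions:

```python
import random

RING = 2 ** 64        # arithmetic shares live in Z_{2^64} (assumed ring size)
PRECISION_BITS = 16   # fixed-point scale factor (assumed)

def encode(x):
    """Scale a float to a fixed-point integer."""
    return int(round(x * 2 ** PRECISION_BITS))

def arithmetic_share(x, n_parties=2):
    """Split an encoded value into additive shares that sum to encode(x) mod RING."""
    enc = encode(x)
    shares = [random.randrange(RING) for _ in range(n_parties - 1)]
    shares.append((enc - sum(shares)) % RING)
    return shares

def reconstruct(shares):
    """Sum the shares mod RING and undo the fixed-point scaling."""
    total = sum(shares) % RING
    if total >= RING // 2:   # interpret the ring element as a signed value
        total -= RING
    return total / 2 ** PRECISION_BITS

shares = arithmetic_share(3.0)
print(shares)               # two random-looking ring elements
print(reconstruct(shares))  # 3.0
```

Each share on its own is uniformly random, so a single party learns nothing about the plaintext; only the sum of all shares reveals the value.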
```
import crypten.mpc as mpc
import crypten.communicator as comm
@mpc.run_multiprocess(world_size=2)
def examine_arithmetic_shares():
x_enc = crypten.cryptensor([1, 2, 3], ptype=crypten.arithmetic)
rank = comm.get().get_rank()
print(f"Rank {rank}:\n {x_enc}")
x = examine_arithmetic_shares()
```
### Binary secret-sharing
The second type of secret-sharing implemented in CrypTen is binary or XOR secret-sharing. This type of secret-sharing allows greater efficiency in evaluating logical expressions.
Let's look more closely at the `crypten.binary` <i>ptype</i>. Most of the logical operations implemented by `CrypTensors` are implemented using binary secret sharing. We typically use this type of secret-sharing when we want to evaluate binary operators (i.e. `^ & | >> <<`, etc.) or logical operations (like comparators).
Let's begin by creating a new `CrypTensor` using `ptype=crypten.binary` to enforce that the encryption is done via binary secret sharing. We can print values of each share to confirm that values are being encrypted properly, as we did for arithmetic secret-shares.
(Note that an xor of the two `_tensor` attributes below is equal to an unscaled version of input.)
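The XOR relationship noted above can be illustrated with a minimal plain-Python sketch (an illustration only, not CrypTen's implementation; the 64-bit share width is an assumption):

```python
import random

BITS = 64  # assumed share width

def binary_share(x):
    """Split an integer into two shares whose XOR equals x."""
    r = random.getrandbits(BITS)
    return r, x ^ r

def binary_reconstruct(s0, s1):
    """Recover the plaintext by XOR-ing the shares."""
    return s0 ^ s1

s0, s1 = binary_share(3)
print(binary_reconstruct(s0, s1))  # 3

# XOR of two shared values is a local operation: each party XORs its own shares
a0, a1 = binary_share(0b1010)
b0, b1 = binary_share(0b0110)
print(bin(binary_reconstruct(a0 ^ b0, a1 ^ b1)))  # 0b1100
```

XOR is "free" because it needs no communication, while AND (and comparisons built from it) requires an interactive protocol between the parties, which is where the cost of logical operations comes from.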
```
@mpc.run_multiprocess(world_size=2)
def examine_binary_shares():
x_enc = crypten.cryptensor([2, 3], ptype=crypten.binary)
rank = comm.get().get_rank()
print(f"Rank {rank}:\n {x_enc}")
x = examine_binary_shares()
```
### Using Both Secret-sharing Protocols
Quite often a mathematical function may need to use both additive and XOR secret sharing for efficient evaluation. Functions that require conversions between sharing types include comparators (`>, >=, <, <=, ==, !=`) as well as functions derived from them (`abs, sign, relu`, etc.). For a full list of supported functions, please see the CrypTen documentation.
CrypTen provides functionality for converting between <i>ptypes</i>. Conversion can be done using the `.to()` function with a `crypten.ptype` input, or by calling the `.arithmetic()` and `.binary()` conversion functions.
```
from crypten.mpc import MPCTensor
@mpc.run_multiprocess(world_size=2)
def examine_conversion():
x = torch.tensor([1, 2, 3])
rank = comm.get().get_rank()
# create an MPCTensor with arithmetic secret sharing
x_enc_arithmetic = MPCTensor(x, ptype=crypten.arithmetic)
# To binary
x_enc_binary = x_enc_arithmetic.to(crypten.binary)
x_from_binary = x_enc_binary.get_plain_text()
if rank == 0: # only print once
print("to(crypten.binary):")
print(f" ptype: {x_enc_binary.ptype}\n plaintext: {x_from_binary}\n")
# To arithmetic
x_enc_arithmetic = x_enc_arithmetic.to(crypten.arithmetic)
x_from_arithmetic = x_enc_arithmetic.get_plain_text()
if rank == 0: # only print once
print("to(crypten.arithmetic):")
print(f" ptype: {x_enc_arithmetic.ptype}\n plaintext: {x_from_arithmetic}\n")
z = examine_conversion()
```
## Data Sources
CrypTen follows the standard MPI programming model: it runs a separate process for each party, but each process runs an identical (complete) program. Each process has a `rank` variable to identify itself.
If the process with rank `i` is the source of data `x`, then `x` gets encrypted with `i` as its source value (denoted as `src`). However, the MPI model requires that every process provide a tensor of the same size as the input. CrypTen ignores all data provided by non-source processes when encrypting.
In the next example, we'll show how to use the `rank` and `src` values to encrypt tensors. Here, we will have each of 3 parties generate a value `x` which is equal to its own `rank` value. Within the loop, 3 encrypted tensors are created, each with a different source. When these tensors are decrypted, we can verify that the tensors are generated using the tensor provided by the source process.
(Note that `crypten.cryptensor` uses rank 0 as the default source if none is provided.)
```
@mpc.run_multiprocess(world_size=3)
def examine_sources():
# Create a different tensor on each rank
rank = comm.get().get_rank()
x = torch.tensor(rank)
print(f"Rank {rank}: {x}")
#
world_size = comm.get().get_world_size()
for i in range(world_size):
x_enc = crypten.cryptensor(x, src=i)
z = x_enc.get_plain_text()
# Only print from one process to avoid duplicates
if rank == 0: print(f"Source {i}: {z}")
x = examine_sources()
```
| github_jupyter |
```
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.utils.extmath import softmax
import matplotlib.pyplot as plt
from matplotlib import pyplot
from sklearn import metrics
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from mpl_toolkits.axes_grid1 import make_axes_locatable
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = ['Times New Roman'] + plt.rcParams['font.serif']
```
## Load and display MNIST handwritten digits dataset
```
# Load data from https://www.openml.org/d/554
X, y = fetch_openml('mnist_784', version=1, return_X_y=True)
# X = X.values ### Uncomment this line if you are having type errors in plotting. It is loading as a pandas dataframe, but our indexing is for numpy array.
X = X / 255.
print('X.shape', X.shape)
print('y.shape', y.shape)
'''
Each row of X is a vectorization of an image of 28 x 28 = 784 pixels.
The corresponding row of y holds the true class label from {0,1, .. , 9}.
'''
# see how many images are there for each digit
for j in np.arange(10):
idx = np.where(y==str(j))
idx = np.asarray(idx)[0,:]
print('digit %i length %i' % (j, len(idx)))
# Plot some sample images
ncols = 10
nrows = 4
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=[15, 6.5])
for j in np.arange(ncols):
for i in np.arange(nrows):
idx = np.where(y==str(j)) # index of all images of digit 'j'
idx = np.asarray(idx)[0,:] # make idx from tuple to array
idx_subsampled = np.random.choice(idx, nrows)
ax[i,j].imshow(X[idx_subsampled[i],:].reshape(28,28))
# ax[i,j].title.set_text("label=%s" % y[idx_subsampled[j]])
if i == 0:
# ax[j,i].set_ylabel("label=%s" % y[idx_subsampled[j]])
ax[i,j].set_title("label$=$%s" % y[idx_subsampled[i]], fontsize=14)
# ax[i].legend()
plt.subplots_adjust(wspace=0.3, hspace=-0.1)
plt.savefig('MNIST_ex1.pdf', bbox_inches='tight')
# Split the dataset into train and test sets
X_train = []
X_test = []
y_test = []
y_train = []
for i in np.arange(X.shape[0]):
# put each example i into the train set with probability 0.8 and into the test set otherwise
U = np.random.rand() # Uniform([0,1]) variable
if U<0.8:
X_train.append(X[i,:])
y_train.append(y[i])
else:
X_test.append(X[i,:])
y_test.append(y[i])
X_train = np.asarray(X_train)
X_test = np.asarray(X_test)
y_train = np.asarray(y_train)
y_test = np.asarray(y_test)
print('X_train.shape', X_train.shape)
print('X_test.shape', X_test.shape)
print('y_train.shape', y_train.shape)
print('y_test.shape', y_test.shape)
def sample_binary_MNIST(list_digits=['0','1'], full_MNIST=None, noise_rate=0):
# get train and test set from MNIST of given two digits
# e.g., list_digits = ['0', '1']
if full_MNIST is not None:
X, y = full_MNIST
else:
X, y = fetch_openml('mnist_784', version=1, return_X_y=True)
X = X / 255.
idx = [i for i in np.arange(len(y)) if y[i] in list_digits] # list of indices where the label y is in list_digits
X01 = X[idx,:]
y01 = y[idx]
X_train = []
X_test = []
y_test = [] # list of integers 0 and 1s
y_train = [] # list of integers 0 and 1s
for i in np.arange(X01.shape[0]):
# put each example i into the train set with probability 0.8 and into the test set otherwise
U = np.random.rand() # Uniform([0,1]) variable
label = 0
if y01[i] == str(list_digits[1]):
label = 1
if U<0.8:
# add noise to the sampled images
if noise_rate > 0:
for j in np.arange(X01.shape[1]):
U1 = np.random.rand()
if U1 < noise_rate:
X01[i,j] += np.random.rand()
X_train.append(X01[i,:])
y_train.append(label)
else:
X_test.append(X01[i,:])
y_test.append(label)
X_train = np.asarray(X_train)
X_test = np.asarray(X_test)
y_train = np.asarray(y_train).reshape(-1,1)
y_test = np.asarray(y_test).reshape(-1,1)
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = sample_binary_MNIST(list_digits=['0','1'], full_MNIST=[X, y], noise_rate=0.5)
print('X_train.shape', X_train.shape)
print('X_test.shape', X_test.shape)
print('y_train.shape', y_train.shape)
print('y_test.shape', y_test.shape)
print('y_test', y_test)
# plot corrupted images
ncols = 4
fig, ax = plt.subplots(nrows=1, ncols=ncols, figsize=[15, 6.5])
for j in np.arange(ncols):
id = np.random.choice(np.arange(X_train.shape[0]))
ax[j].imshow(X_train[id,:].reshape(28,28))
plt.savefig('MNIST_ex_corrupted1.pdf', bbox_inches='tight')
def list2onehot(y, list_classes):
"""
y = list of class labels of length n
output = n x k array, i th row = one-hot encoding of y[i] (e.g., [0,0,1,0,0])
"""
Y = np.zeros(shape = [len(y), len(list_classes)], dtype=int)
for i in np.arange(Y.shape[0]):
for j in np.arange(len(list_classes)):
if y[i] == list_classes[j]:
Y[i,j] = 1
return Y
def sample_multiclass_MNIST(list_digits=['0','1', '2'], full_MNIST=None):
# get train and test set from MNIST of given digits
# e.g., list_digits = ['0', '1', '2']
if full_MNIST is not None:
X, y = full_MNIST
else:
X, y = fetch_openml('mnist_784', version=1, return_X_y=True)
X = X / 255.
Y = list2onehot(y.tolist(), list_digits)
idx = [i for i in np.arange(len(y)) if y[i] in list_digits] # list of indices where the label y is in list_digits
X01 = X[idx,:]
y01 = Y[idx,:]
X_train = []
X_test = []
y_test = [] # list of one-hot encodings (indicator vectors) of each label
y_train = [] # list of one-hot encodings (indicator vectors) of each label
for i in np.arange(X01.shape[0]):
# put each example i into the train set with probability 0.8 and into the test set otherwise
U = np.random.rand() # Uniform([0,1]) variable
if U<0.8:
X_train.append(X01[i,:])
y_train.append(y01[i,:].copy())
else:
X_test.append(X01[i,:])
y_test.append(y01[i,:].copy())
X_train = np.asarray(X_train)
X_test = np.asarray(X_test)
y_train = np.asarray(y_train)
y_test = np.asarray(y_test)
return X_train, X_test, y_train, y_test
# test
X_train, X_test, y_train, y_test = sample_multiclass_MNIST(list_digits=['0','1', '2'], full_MNIST=[X, y])
print('X_train.shape', X_train.shape)
print('X_test.shape', X_test.shape)
print('y_train.shape', y_train.shape)
print('y_test.shape', y_test.shape)
print('y_test', y_test)
```
## Logistic Regression
```
# sigmoid and logit function
def sigmoid(x):
return np.exp(x)/(1+np.exp(x))
# plot sigmoid function
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=[10,3])
x = np.linspace(-7, 7, 100)
ax.plot(x, sigmoid(x), color='blue', label=r"$y=\sigma(x)=\exp(x)/(1+\exp(x))$")
plt.axhline(y=1, color='g', linestyle='--')
plt.axvline(x=0, color='g', linestyle='--')
ax.legend()
plt.savefig('sigmoid_ex.pdf', bbox_inches='tight')
def fit_LR_GD(Y, H, W0=None, sub_iter=100, stopping_diff=0.01):
'''
Convex optimization algorithm for Logistic Regression using Gradient Descent
Y = (n x 1), H = (p x n) (\Phi in lecture note), W = (p x 1)
Logistic Regression: Y ~ Bernoulli(Q), Q = sigmoid(H.T @ W)
MLE -->
Find \hat{W} = argmin_W ( sum_j ( log(1+exp(H_j.T @ W) ) - Y.T @ H.T @ W ) )
'''
if W0 is None:
W0 = np.random.rand(H.shape[0],1) #If initial coefficients W0 is None, randomly initialize
W1 = W0.copy()
i = 0
grad = np.ones(W0.shape)
while (i < sub_iter) and (np.linalg.norm(grad) > stopping_diff):
Q = 1/(1+np.exp(-H.T @ W1)) # probability matrix, same shape as Y
# grad = H @ (Q - Y).T + alpha * np.ones(W0.shape[1])
grad = H @ (Q - Y)
W1 = W1 - (np.log(i+1) / (((i + 1) ** (0.5)))) * grad
i = i + 1
print('iter %i, grad_norm %f' %(i, np.linalg.norm(grad)))
return W1
def fit_LR_NR(Y, H, W0=None, sub_iter=100, stopping_diff=0.01):
'''
Convex optimization algorithm for Logistic Regression using the Newton-Raphson algorithm.
Y = (n x 1), H = (p x n) (\Phi in lecture note), W = (p x 1)
Logistic Regression: Y ~ Bernoulli(Q), Q = sigmoid(H.T @ W)
MLE -->
Find \hat{W} = argmin_W ( sum_j ( log(1+exp(H_j.T @ W) ) - Y.T @ H.T @ W ) )
'''
### Implement this yourself.
pass
# fit logistic regression using GD
X_train, X_test, y_train, y_test = sample_binary_MNIST(['0', '1'], full_MNIST = [X,y])
# Feature matrix of size (p x n) = (feature dim x samples)
H_train = np.vstack((np.ones(X_train.shape[0]), X_train.T)) # add first row of 1's for bias features
W = fit_LR_GD(Y=y_train, H=H_train/400)
plt.imshow(W[1:,:].reshape(28,28))
# plot fitted logistic regression curve
digit_list_list = [['0','1'],['0','7'],['2','3'],['2', '8']] # list of list of two digits
# fit LR for each cases
W_array = []
for i in np.arange(len(digit_list_list)):
L = digit_list_list[i]
X_train, X_test, y_train, y_test = sample_binary_MNIST(list_digits=L, full_MNIST = [X,y])
H_train = np.vstack((np.ones(X_train.shape[0]), X_train.T)) # add first row of 1's for bias features
W = fit_LR_GD(Y=y_train, H=H_train)
W_array.append(W.copy())
W_array = np.asarray(W_array)
# make plot
fig, ax = plt.subplots(nrows=1, ncols=len(digit_list_list), figsize=[16, 4])
for i in np.arange(len(digit_list_list)):
L = digit_list_list[i]
W = W_array[i]
im = ax[i].imshow(W[1:,:].reshape(28,28), vmin=np.min(W_array), vmax=np.max(W_array))
ax[i].title.set_text("LR coeff. for %s vs. %s" % (L[0], L[1]))
# ax[i].legend()
fig.subplots_adjust(right=0.9)
cbar_ax = fig.add_axes([0.92, 0.15, 0.01, 0.7])
fig.colorbar(im, cax=cbar_ax)
plt.savefig('LR_MNIST_training_ex.pdf', bbox_inches='tight')
def compute_accuracy_metrics(Y_test, P_pred, use_opt_threshold=False, verbose=False):
# Y_test = binary labels
# P_pred = predicted probabilities for Y_test
# compute various binary classification accuracy metrics
fpr, tpr, thresholds = metrics.roc_curve(Y_test, P_pred, pos_label=None)
mythre = thresholds[np.argmax(tpr - fpr)]
myauc = metrics.auc(fpr, tpr)
# print('!!! auc', myauc)
# Compute classification statistics
threshold = 0.5
if use_opt_threshold:
threshold = mythre
Y_pred = P_pred.copy()
Y_pred[Y_pred < threshold] = 0
Y_pred[Y_pred >= threshold] = 1
mcm = confusion_matrix(Y_test, Y_pred)
tn = mcm[0, 0]
tp = mcm[1, 1]
fn = mcm[1, 0]
fp = mcm[0, 1]
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn) # true positive rate
specificity = tn / (tn + fp) # true negative rate
precision = tp / (tp + fp)
fall_out = fp / (fp + tn)
miss_rate = fn / (fn + tp)
# Save results
results_dict = {}
results_dict.update({'Y_test': Y_test})
results_dict.update({'Y_pred': Y_pred})
results_dict.update({'AUC': myauc})
results_dict.update({'Opt_threshold': mythre})
results_dict.update({'Accuracy': accuracy})
results_dict.update({'Sensitivity': sensitivity})
results_dict.update({'Specificity': specificity})
results_dict.update({'Precision': precision})
results_dict.update({'Fall_out': fall_out})
results_dict.update({'Miss_rate': miss_rate})
if verbose:
for key in [key for key in results_dict.keys()]:
print('% s ===> %.3f' % (key, results_dict.get(key)))
return results_dict
# fit logistic regression using GD and compute binary classification accuracies
# Get train and test data
digits_list = ['4', '7']
X_train, X_test, y_train, y_test = sample_binary_MNIST(digits_list, full_MNIST = [X,y])
# Feature matrix of size (p x n) = (feature dim x samples)
list_train_size = [1,10, 30, 100]
# train the regression coefficients for all cases
W_list = []
results_list = []
for i in np.arange(len(list_train_size)):
size = list_train_size[i]
idx = np.random.choice(np.arange(len(y_train)), size)
X_train0 = X_train[idx, :]
y_train0 = y_train[idx]
# Train the logistic regression model
H_train0 = np.vstack((np.ones(X_train0.shape[0]), X_train0.T)) # add first row of 1's for bias features
W = fit_LR_GD(Y=y_train0, H=H_train0)
W_list.append(W.copy()) # use a copy of W, since the same name is overwritten in each loop iteration
# Get predicted probabilities
H_test = np.vstack((np.ones(X_test.shape[0]), X_test.T))
Q = 1 / (1 + np.exp(-H_test.T @ W)) # predicted probabilities for y_test
# Compute binary classification accuracies
results_dict = compute_accuracy_metrics(Y_test=y_test, P_pred = Q)
results_dict.update({'train size':X_train0.shape[0]}) # add the train data size to the results dictionary
results_list.append(results_dict.copy())
# Print out the results
"""
keys_list = [i for i in results_dict.keys()]
for key in keys_list:
if key not in ['Y_test', 'Y_pred']:
print('%s = %f' % (key, results_dict.get(key)))
"""
# make plot
fig, ax = plt.subplots(nrows=1, ncols=len(list_train_size), figsize=[16, 4])
for i in np.arange(len(list_train_size)):
result_dict = results_list[i]
W = W_list[i][1:,:]
im = ax[i].imshow(W.copy().reshape(28,28), vmin=np.min(W_list), vmax=np.max(W_list))
subtitle = ""
keys_list = [i for i in results_list[i].keys()]
for key in keys_list:
if key not in ['Y_test', 'Y_pred', 'AUC', 'Opt_threshold']:
subtitle += "\n" + str(key) + " = " + str(np.round(results_list[i].get(key),3))
# print('%s = %f' % (key, results_list[i].get(key)))
ax[i].set_title('Opt. regression coeff.', fontsize=13)
ax[i].set_xlabel(subtitle, fontsize=20)
fig.subplots_adjust(right=0.9)
fig.suptitle("MNIST Binary Classification by LR for %s vs. %s" % (digits_list[0], digits_list[1]), fontsize=20, y=1.05)
cbar_ax = fig.add_axes([0.92, 0.15, 0.01, 0.7])
fig.colorbar(im, cax=cbar_ax)
plt.savefig('LR_MNIST_test_ex1.pdf', bbox_inches='tight')
```
## Multiclass Logistic Regression
```
def fit_MLR_GD(Y, H, W0=None, sub_iter=100, stopping_diff=0.01):
'''
Convex optimization algorithm for Multiclass Logistic Regression using Gradient Descent
Y = (n x k), H = (p x n) (\Phi in lecture note), W = (p x k)
Multiclass Logistic Regression: Y ~ vector of discrete RVs with PMF = sigmoid(H.T @ W)
MLE -->
Find \hat{W} = argmin_W ( sum_j ( log(1+exp(H_j.T @ W) ) - Y.T @ H.T @ W ) )
'''
k = Y.shape[1] # number of classes
if W0 is None:
W0 = np.random.rand(H.shape[0],k) #If initial coefficients W0 is None, randomly initialize
W1 = W0.copy()
i = 0
grad = np.ones(W0.shape)
while (i < sub_iter) and (np.linalg.norm(grad) > stopping_diff):
Q = 1/(1+np.exp(-H.T @ W1)) # probability matrix, same shape as Y
# grad = H @ (Q - Y).T + alpha * np.ones(W0.shape[1])
grad = H @ (Q - Y)
W1 = W1 - (np.log(i+1) / (((i + 1) ** (0.5)))) * grad
i = i + 1
# print('iter %i, grad_norm %f' %(i, np.linalg.norm(grad)))
return W1
def custom_softmax(a):
"""
given an array a = [a_1, .. a_k], compute the softmax distribution p = [p_1, .. , p_k] where p_i \propto exp(a_i)
"""
a1 = a - np.max(a)
p = np.exp(a1)
if type(a) is list:
p = p/np.sum(p)
else:
row_sum = np.sum(p, axis=1)
p = p/row_sum[:, np.newaxis]
return p
print(np.sum(custom_softmax([1,20,30,50])))
a= np.ones((2,3))
print(softmax(a))
def multiclass_accuracy_metrics(Y_test, P_pred, class_labels=None, use_opt_threshold=False):
    # Y_test = multiclass one-hot encoded labels
    # P_pred = predicted probabilities for Y_test
    # compute various classification accuracy metrics
results_dict = {}
y_test = []
y_pred = []
for i in np.arange(Y_test.shape[0]):
for j in np.arange(Y_test.shape[1]):
if Y_test[i,j] == 1:
y_test.append(j)
if P_pred[i,j] == np.max(P_pred[i,:]):
# print('!!!', np.where(P_pred[i,:]==np.max(P_pred[i,:])))
y_pred.append(j)
confusion_mx = metrics.confusion_matrix(y_test, y_pred)
results_dict.update({'confusion_mx':confusion_mx})
results_dict.update({'Accuracy':np.trace(confusion_mx)/np.sum(np.sum(confusion_mx))})
print('!!! confusion_mx', confusion_mx)
print('!!! Accuracy', results_dict.get('Accuracy'))
return results_dict
# fit multiclass logistic regression using GD
list_digits=['0', '1', '2']
X_train, X_test, y_train, y_test = sample_multiclass_MNIST(list_digits=list_digits, full_MNIST = [X,y])
# Feature matrix of size (p x n) = (feature dim x samples)
H_train = np.vstack((np.ones(X_train.shape[0]), X_train.T)) # add first row of 1's for bias features
W = fit_MLR_GD(Y=y_train, H=H_train)
print('!! W.shape', W.shape)
# Get predicted probabilities
H_test = np.vstack((np.ones(X_test.shape[0]), X_test.T))
Q = softmax(H_test.T @ W.copy()) # predicted probabilities for y_test # Uses sklearn's softmax for numerical stability
print('!!! y_test.shape', y_test.shape)
print('!!! Q.shape', Q.shape)
results_dict = multiclass_accuracy_metrics(Y_test=y_test, P_pred=Q)
confusion_mx = results_dict.get('confusion_mx')
# make plot
fig, ax = plt.subplots(nrows=1, ncols=len(list_digits), figsize=[12, 4])
for i in np.arange(len(list_digits)):
L = list_digits[i]
im = ax[i].imshow(W[1:,i].reshape(28,28), vmin=np.min(W), vmax=np.max(W))
ax[i].title.set_text("MLR coeff. for %s" % L )
# ax[i].legend()
# if i == len(list_digits) - 1:
cbar_ax = fig.add_axes([0.92, 0.15, 0.01, 0.7])
fig.colorbar(im, cax=cbar_ax)
plt.savefig('MLR_MNIST_ex1.pdf', bbox_inches='tight')
# fit multiclass logistic regression using GD and compute multiclass classification accuracies
# Get train and test data
digits_list = ['0', '1', '2', '3', '4']
X_train, X_test, y_train, y_test = sample_multiclass_MNIST(digits_list, full_MNIST = [X,y])
# Feature matrix of size (p x n) = (feature dim x samples)
list_train_size = [1,10, 30, 100]
# train the regression coefficients for all cases
W_list = []
results_list = []
for i in np.arange(len(list_train_size)):
size = list_train_size[i]
idx = np.random.choice(np.arange(len(y_train)), size)
X_train0 = X_train[idx, :]
y_train0 = y_train[idx, :]
# Train the multiclass logistic regression model
H_train0 = np.vstack((np.ones(X_train0.shape[0]), X_train0.T)) # add first row of 1's for bias features
W = fit_MLR_GD(Y=y_train0, H=H_train0)
    W_list.append(W.copy()) # append a copy of W, since the name W is overwritten each iteration
# Get predicted probabilities
H_test = np.vstack((np.ones(X_test.shape[0]), X_test.T))
Q = softmax(H_test.T @ W.copy()) # predicted probabilities for y_test # Uses sklearn's softmax for numerical stability
results_dict = multiclass_accuracy_metrics(Y_test=y_test, P_pred=Q)
results_dict.update({'train size':X_train0.shape[0]}) # add the train data size to the results dictionary
results_list.append(results_dict.copy())
# make plot
fig, ax = plt.subplots(nrows=len(list_train_size), ncols=len(digits_list)+1, figsize=[15, 10])
for i in np.arange(len(list_train_size)):
for j in np.arange(len(digits_list)+1):
if j < len(digits_list):
L = digits_list[j]
W = W_list[i]
im = ax[i,j].imshow(W[1:,j].reshape(28,28), vmin=np.min(W), vmax=np.max(W))
ax[i,j].title.set_text("MLR coeff. for %s" % L )
if j == 0:
ax[i,j].set_ylabel("train size = %i" % results_list[i].get("train size"), fontsize=13)
divider = make_axes_locatable(ax[i,j])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax)
else:
confusion_mx = results_list[i].get("confusion_mx")
im_confusion = ax[i,j].matshow(confusion_mx)
# ax[i,j].set_title("Confusion Matrix")
ax[i,j].set_xlabel("Confusion Matrix", fontsize=13)
# ax[i].legend()
# if i == len(list_digits) - 1:
divider = make_axes_locatable(ax[i,j])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im_confusion, cax=cax)
plt.subplots_adjust(wspace=0.3, hspace=0.3)
plt.savefig('MLR_MNIST_test_ex2.pdf', bbox_inches='tight')
```
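As a sanity check on the gradient `H @ (Q - Y)` used in `fit_MLR_GD`, the sketch below compares it against a central-difference numerical gradient of the per-class objective from the docstring, on small synthetic data (independent of the notebook state):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, k = 4, 6, 3
H = rng.standard_normal((p, n))         # features, shape (p, n)
Y = np.eye(k)[rng.integers(0, k, n)]    # one-hot labels, shape (n, k)
W = rng.standard_normal((p, k))

def nll(W):
    # sum_j ( log(1 + exp(H_j.T @ W)) - Y_j * (H_j.T @ W) ), summed over classes
    Z = H.T @ W
    return np.sum(np.logaddexp(0.0, Z) - Y * Z)

# Analytic gradient as in fit_MLR_GD, with Q = sigmoid(H.T @ W)
Q = 1.0 / (1.0 + np.exp(-(H.T @ W)))
grad_analytic = H @ (Q - Y)

# Central-difference numerical gradient
eps = 1e-6
grad_numeric = np.zeros_like(W)
for i in range(p):
    for j in range(k):
        E = np.zeros_like(W)
        E[i, j] = eps
        grad_numeric[i, j] = (nll(W + E) - nll(W - E)) / (2 * eps)

print(np.max(np.abs(grad_analytic - grad_numeric)))
```

The two gradients agree to within numerical precision, confirming that the update direction in `fit_MLR_GD` matches the stated objective.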
## Probit Regression
```
# probit function
from scipy.stats import norm
def probit(x):
return norm.cdf(x) # Yes, it is exactly the standard normal CDF.
# plot probit and sigmoid function
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=[10,3])
x = np.linspace(-7, 7, 100)
ax.plot(x, sigmoid(x), color='blue', label=r"$y=\sigma(x)=\exp(x)/(1+\exp(x))$")
ax.plot(x, probit(x), color='red', label=r"$y=\psi(x)=Probit(x)$")
plt.axhline(y=1, color='g', linestyle='--')
plt.axvline(x=0, color='g', linestyle='--')
ax.legend()
plt.savefig('probit_ex.pdf', bbox_inches='tight')
def fit_PR_GD(Y, H, W0=None, sub_iter=100, stopping_diff=0.01):
'''
Convex optimization algorithm for Probit Regression using Gradient Descent
    Y = (n x 1), H = (p x n) (\Phi in lecture notes), W = (p x 1)
    Probit Regression: Y ~ Bernoulli(Q), Q = Phi(H.T @ W), where Phi is the standard normal CDF
'''
if W0 is None:
W0 = 1-2*np.random.rand(H.shape[0],1) #If initial coefficients W0 is None, randomly initialize from [-1,1]
W1 = W0.copy()
i = 0
grad = np.ones(W0.shape)
while (i < sub_iter) and (np.linalg.norm(grad) > stopping_diff):
        # derivative factor of the negative log-likelihood under the probit link
        D = norm.pdf(H.T @ W1) * ( (1-Y)/norm.cdf(-H.T @ W1) - Y/norm.cdf(H.T @ W1) )
        grad = H @ D
W1 = W1 - (np.log(i+1) / (((i + 1) ** (0.5)))) * grad
i = i + 1
# print('iter %i, grad_norm %f' %(i, np.linalg.norm(grad)))
return W1
# plot fitted probit regression curve
digit_list_list = [['0','1'],['0','7'],['2','3'],['2', '8']] # list of list of two digits
# fit PR for each case
W_array = []
for i in np.arange(len(digit_list_list)):
L = digit_list_list[i]
X_train, X_test, y_train, y_test = sample_binary_MNIST(list_digits=L, full_MNIST = [X,y], noise_rate=0.5)
H_train = np.vstack((np.ones(X_train.shape[0]), X_train.T)) # add first row of 1's for bias features
    W = fit_PR_GD(Y=y_train, H=H_train/1000) # reduce the scale of H for numerical stability
W_array.append(W.copy())
W_array = np.asarray(W_array)
# make plot
fig, ax = plt.subplots(nrows=1, ncols=len(digit_list_list), figsize=[16, 4])
for i in np.arange(len(digit_list_list)):
L = digit_list_list[i]
W = W_array[i]
im = ax[i].imshow(W[1:,:].reshape(28,28), vmin=np.min(W_array), vmax=np.max(W_array))
    ax[i].title.set_text("PR coeff. for %s vs. %s" % (L[0], L[1]))
# ax[i].legend()
fig.subplots_adjust(right=0.9)
cbar_ax = fig.add_axes([0.92, 0.15, 0.01, 0.7])
fig.colorbar(im, cax=cbar_ax)
plt.savefig('PR_MNIST_training_ex.pdf', bbox_inches='tight')
# fit probit regression using GD and compute binary classification accuracies
# Get train and test data
digits_list = ['4', '7']
X_train, X_test, y_train, y_test = sample_binary_MNIST(digits_list, full_MNIST = [X,y], noise_rate=0.5)
# Feature matrix of size (p x n) = (feature dim x samples)
list_train_size = [1,10, 30, 100]
# train the regression coefficients for all cases
W_list = []
results_list = []
for i in np.arange(len(list_train_size)):
size = list_train_size[i]
idx = np.random.choice(np.arange(len(y_train)), size)
X_train0 = X_train[idx, :]
y_train0 = y_train[idx]
    # Train the probit regression model
H_train0 = np.vstack((np.ones(X_train0.shape[0]), X_train0.T)) # add first row of 1's for bias features
W = fit_PR_GD(Y=y_train0, H=H_train0/100) # reduce the scale of H for numerical stability
    W_list.append(W.copy()) # append a copy of W, since the name W is overwritten each iteration
# Get predicted probabilities
H_test = np.vstack((np.ones(X_test.shape[0]), X_test.T))
    Q = norm.cdf((H_test/100).T @ W) # predicted probabilities via the probit link (scale H as in training)
# Compute binary classification accuracies
results_dict = compute_accuracy_metrics(Y_test=y_test, P_pred = Q)
results_dict.update({'train size':X_train0.shape[0]}) # add the train data size to the results dictionary
results_list.append(results_dict.copy())
# Print out the results
"""
keys_list = [i for i in results_dict.keys()]
for key in keys_list:
if key not in ['Y_test', 'Y_pred']:
print('%s = %f' % (key, results_dict.get(key)))
"""
# make plot
fig, ax = plt.subplots(nrows=1, ncols=len(list_train_size), figsize=[16, 4])
for i in np.arange(len(list_train_size)):
result_dict = results_list[i]
W = W_list[i][1:,:]
im = ax[i].imshow(W.copy().reshape(28,28), vmin=np.min(W_list), vmax=np.max(W_list))
subtitle = ""
keys_list = [i for i in results_list[i].keys()]
for key in keys_list:
if key not in ['Y_test', 'Y_pred', 'AUC', 'Opt_threshold']:
subtitle += "\n" + str(key) + " = " + str(np.round(results_list[i].get(key),3))
# print('%s = %f' % (key, results_list[i].get(key)))
ax[i].set_title('Opt. regression coeff.', fontsize=13)
ax[i].set_xlabel(subtitle, fontsize=20)
fig.subplots_adjust(right=0.9)
fig.suptitle("MNIST Binary Classification by PR for %s vs. %s" % (digits_list[0], digits_list[1]), fontsize=20, y=1.05)
cbar_ax = fig.add_axes([0.92, 0.15, 0.01, 0.7])
fig.colorbar(im, cax=cbar_ax)
plt.savefig('PR_MNIST_test_ex1.pdf', bbox_inches='tight')
def classify_PTG_graphs(args, motif_size=1, subsample_ratio=1):
X,Y=prep_binary_classification_PTG(motif_size=motif_size, subsample_ratio=subsample_ratio)
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, Y, test_size=0.2, random_state=0)
"""
clf = svm.SVC(kernel='linear')
X_train_list = []
for i in np.arange(X_train.shape[0]):
X_train_list.append(X_train[i,:])
print('y_train', y_train)
clf.fit(X_train_list, y_train)
y_pred = clf.predict(X_test)
print('y_test', y_test)
print('y_pred', y_pred)
acc = metrics.accuracy_score(y_test, y_pred)
print('!!! subgraph classification accuracy using SVC', acc)
"""
########### Logistic Regression ###########
X_train /= np.max(X_train)
y_train1 = list2onehot(y_train, list_classes=[0,1])
y_test1 = list2onehot(y_test, list_classes=[0,1])
print('y_test1', y_test1.shape)
print('y_train1', y_train1.shape)
print('X_train.T', X_train.T.shape)
H_train = np.vstack((np.ones(X_train.shape[0]), X_train.T))
W = fit_MLR_GD(y_train1, H_train, sub_iter=100, stopping_diff=0.01)
# Get predicted probabilities
print('W.shape', W.shape)
H_test = np.vstack((np.ones(X_test.shape[0]), X_test.T))
Q = softmax(H_test.T @ W.copy()) # predicted probabilities for y_test # Uses sklearn's softmax for numerical stability
print('Q.shape', Q.shape)
print('y_test1.shape', y_test1.shape)
results_dict = multiclass_accuracy_metrics(Y_test=y_test1, P_pred=Q)
    confusion_mx = results_dict.get('confusion_mx')
########### FFNN ########
X_train /= np.max(X_train)
y_train0 = list2onehot(y_train, list_classes=[0,1])
# preprocessing
out = []
# populate the tuple list with the data
for i in range(X_train.shape[0]):
item = list((X_train[i,:], y_train0[i,:]))
out.append(item)
# FFNN training
NN = DeepFFNN(hidden_layer_sizes=[100], training_data = out)
NN.train(iterations=200, learning_rate = 0.5, momentum = 0.1, rate_decay = 0.01, verbose=False)
# FFNN prediction
X_test /= np.max(X_test)
out_test = []
for i in range(X_test.shape[0]):
out_test.append(X_test[i,:].tolist())
y_hat = NN.predict(out_test).T
y_test_label = np.asarray(y_test)
P_pred = np.asarray([p[1] for p in y_hat])
print('!!y_test_label', y_test_label)
print('!!P_pred', P_pred)
compute_accuracy_metrics(Y_test=y_test_label, P_pred=P_pred, use_opt_threshold=False, verbose=True)
```
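The probit-vs-sigmoid plot drawn earlier has a well-known quantitative counterpart: Phi(x) is closely approximated by a rescaled sigmoid, sigma(1.702 x), with maximum absolute error below 0.01 (the constant 1.702 comes from the classical logistic approximation to the normal CDF). A quick numerical check:

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(-7, 7, 2001)
probit_vals = norm.cdf(x)                          # Phi(x)
sigmoid_scaled = 1.0 / (1.0 + np.exp(-1.702 * x))  # sigma(1.702 * x)
max_err = np.max(np.abs(probit_vals - sigmoid_scaled))
print(max_err)  # below 0.01 everywhere
```

This is why logistic and probit regression usually produce very similar fits, up to a rescaling of the coefficients.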
## VPU based inferencing and deployment on IoT Edge device using Azure Machine Learning
This deployment closely follows the original sample: https://github.com/Azure-Samples/onnxruntime-iot-edge/tree/master/AzureML-OpenVINO

```
!python -m pip install --upgrade pip
!pip install azureml-core azureml-contrib-iot azure-mgmt-containerregistry azure-cli
!az extension add --name azure-cli-iot-ext
import os
print(os.__file__)
# Check core SDK version number
import azureml.core as azcore
print("SDK version:", azcore.VERSION)
```
## 1. Setup the Azure Machine Learning Environment
### 1.1 AML Workspace: using existing config
```
#Initialize Workspace
from azureml.core import Workspace
ws = Workspace.from_config()
```
### 1.2 AML Workspace: create a new workspace
Alternatively, you could create a workspace using `azureml.core`:
```
#Initialize Workspace
from azureml.core import Workspace
### Change this cell from markdown to code and run this if you need to create a workspace
### Update the values for your workspace below
ws=Workspace.create(subscription_id="<subscription-id goes here>",
resource_group="<resource group goes here>",
name="<name of the AML workspace>",
location="<location>")
ws.write_config()
```
### 1.3 AML Workspace: initialize an existing workspace
Download the `config.json` file for your AML Workspace from the Azure portal
```
#Initialize Workspace
from azureml.core import Workspace
## existing AML Workspace in config.json
ws = Workspace.from_config('config.json')
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
```
## 2. Setup the trained model to use in this example
### 2.1 Register the trained model in workspace from the ONNX Model Zoo
```
import urllib.request
onnx_model_url = "https://onnxzoo.blob.core.windows.net/models/opset_8/tiny_yolov2/tiny_yolov2.tar.gz"
urllib.request.urlretrieve(onnx_model_url, filename="tiny_yolov2.tar.gz")
!tar xvzf tiny_yolov2.tar.gz
from azureml.core.model import Model
model = Model.register(workspace = ws,
model_path = "./tiny_yolov2/Model.onnx",
model_name = "Model.onnx",
tags = {"data": "Imagenet", "model": "object_detection", "type": "TinyYolo"},
description = "real-time object detection model from ONNX model zoo")
```
### 2.2 Load the model from your workspace model registry
For example, this could be the ONNX model exported from your training experiment.
```
from azureml.core.model import Model
model = Model(name='Model.onnx', workspace=ws)
```
## 3. Create the application container image
This container is the IoT Edge module that will be deployed on the UP<sup>2</sup> device. It contains:
1. A pre-built base image for ONNX Runtime.
2. A `score.py` script, which must include `run()` and `init()` functions. `init()` is the entry point that reads camera frames from `/dev/video0`; `run()` is a dummy function to satisfy AML SDK checks.
3. An `amlpackage_inference.py` script, used to preprocess the input frame and run the inference session.
4. The ONNX model and the label file used by ONNX Runtime.
```
%%writefile score.py
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license. See LICENSE file in the project root for
# full license information.
import sys
import time
import io
import csv
# Imports for inferencing
import onnxruntime as rt
from amlpackage_inference import run_onnx
import numpy as np
import cv2
# Imports for communication w/IOT Hub
from iothub_client import IoTHubModuleClient, IoTHubClientError, IoTHubTransportProvider
from iothub_client import IoTHubMessage, IoTHubMessageDispositionResult, IoTHubError
from azureml.core.model import Model
# Imports for the http server
from flask import Flask, request
import json
# Imports for storage
import os
# from azure.storage.blob import BlockBlobService, PublicAccess, AppendBlobService
import random
import string
import csv
from datetime import datetime
from pytz import timezone
import time
import json
class HubManager(object):
def __init__(
self,
protocol=IoTHubTransportProvider.MQTT):
self.client_protocol = protocol
self.client = IoTHubModuleClient()
self.client.create_from_environment(protocol)
# set the time until a message times out
self.client.set_option("messageTimeout", MESSAGE_TIMEOUT)
# Forwards the message received onto the next stage in the process.
def forward_event_to_output(self, outputQueueName, event, send_context):
self.client.send_event_async(
outputQueueName, event, send_confirmation_callback, send_context)
def send_confirmation_callback(message, result, user_context):
"""
Callback received when the message that we're forwarding is processed.
"""
print("Confirmation[%d] received for message with result = %s" % (user_context, result))
def get_tinyyolo_frame_from_encode(msg):
"""
Formats jpeg encoded msg to frame that can be processed by tiny_yolov2
"""
#inp = np.array(msg).reshape((len(msg),1))
#frame = cv2.imdecode(inp.astype(np.uint8), 1)
frame = cv2.cvtColor(msg, cv2.COLOR_BGR2RGB)
# resize and pad to keep input frame aspect ratio
h, w = frame.shape[:2]
tw = 416 if w > h else int(np.round(416.0 * w / h))
th = 416 if h > w else int(np.round(416.0 * h / w))
frame = cv2.resize(frame, (tw, th))
pad_value=114
top = int(max(0, np.round((416.0 - th) / 2)))
left = int(max(0, np.round((416.0 - tw) / 2)))
bottom = 416 - top - th
right = 416 - left - tw
frame = cv2.copyMakeBorder(frame, top, bottom, left, right,
cv2.BORDER_CONSTANT, value=[pad_value, pad_value, pad_value])
frame = np.ascontiguousarray(np.array(frame, dtype=np.float32).transpose(2, 0, 1)) # HWC -> CHW
frame = np.expand_dims(frame, axis=0)
return frame
def run(msg):
# this is a dummy function required to satisfy AML-SDK requirements.
return msg
def init():
# Choose HTTP, AMQP or MQTT as transport protocol. Currently only MQTT is supported.
PROTOCOL = IoTHubTransportProvider.MQTT
DEVICE = 0 # when device is /dev/video0
LABEL_FILE = "labels.txt"
MODEL_FILE = "Model.onnx"
global MESSAGE_TIMEOUT # setting for IoT Hub Manager
MESSAGE_TIMEOUT = 1000
LOCAL_DISPLAY = "OFF" # flag for local display on/off, default OFF
# Create the IoT Hub Manager to send message to IoT Hub
print("trying to make IOT Hub manager")
hub_manager = HubManager(PROTOCOL)
if not hub_manager:
print("Took too long to make hub_manager, exiting program.")
print("Try restarting IotEdge or this module.")
sys.exit(1)
# Get Labels from labels file
labels_file = open(LABEL_FILE)
labels_string = labels_file.read()
labels = labels_string.split(",")
labels_file.close()
label_lookup = {}
for i, val in enumerate(labels):
label_lookup[val] = i
# get model path from within the container image
model_path=Model.get_model_path(MODEL_FILE)
# Loading ONNX model
print("loading model to ONNX Runtime...")
start_time = time.time()
ort_session = rt.InferenceSession(model_path)
print("loaded after", time.time()-start_time,"s")
# start reading frames from video endpoint
cap = cv2.VideoCapture(DEVICE)
while cap.isOpened():
_, _ = cap.read()
ret, img_frame = cap.read()
if not ret:
print('no video RESETTING FRAMES TO 0 TO RUN IN LOOP')
cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
continue
"""
        Runs inference on each captured frame and sends the results to IoT Hub.
"""
try:
draw_frame = img_frame
start_time = time.time()
# pre-process the frame to flatten, scale for tiny-yolo
frame = get_tinyyolo_frame_from_encode(img_frame)
# run the inference session for the given input frame
objects = run_onnx(frame, ort_session, draw_frame, labels, LOCAL_DISPLAY)
# LOOK AT OBJECTS AND CHECK PREVIOUS STATUS TO APPEND
num_objects = len(objects)
print("NUMBER OBJECTS DETECTED:", num_objects)
print("PROCESSED IN:",time.time()-start_time,"s")
if num_objects > 0:
output_IOT = IoTHubMessage(json.dumps(objects))
hub_manager.forward_event_to_output("inferenceoutput", output_IOT, 0)
continue
except Exception as e:
print('EXCEPTION:', str(e))
continue
```
### 3.1 Include the dependent packages required by the application scripts
```
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_pip_package("azure-iothub-device-client")
myenv.add_pip_package("numpy")
myenv.add_pip_package("opencv-python")
myenv.add_pip_package("requests")
myenv.add_pip_package("pytz")
myenv.add_pip_package("onnx")
with open("myenv.yml", "w") as f:
f.write(myenv.serialize_to_string())
```
### 3.2 Build the custom container image with the ONNX Runtime + OpenVINO base image
This step uses pre-built container images with ONNX Runtime and the different HW execution providers. A complete list of base images is located [here](https://github.com/microsoft/onnxruntime/tree/master/dockerfiles#docker-containers-for-onnx-runtime).
```
from azureml.core.image import ContainerImage
from azureml.core.model import Model
# Set the web service configuration (using default here)
from azureml.core.model import InferenceConfig
#from azureml.core.webservice import AksWebservice
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.environment import Environment, DEFAULT_GPU_IMAGE
useContainerImage = True
if useContainerImage:
openvino_image_config = ContainerImage.image_configuration(execution_script = "score.py",
runtime = "python",
dependencies=["labels.txt", "amlpackage_inference.py"],
conda_file = "myenv.yml",
description = "TinyYolo ONNX Runtime inference container",
tags = {"demo": "onnx"})
# Use the ONNX Runtime + OpenVINO base image for Intel MovidiusTM USB sticks
openvino_image_config.base_image = "mcr.microsoft.com/azureml/onnxruntime:latest-openvino-myriad"
# For the Intel Movidius VAD-M PCIe card use this:
# openvino_image_config.base_image = "mcr.microsoft.com/azureml/onnxruntime:latest-openvino-vadm"
openvino_image = ContainerImage.create(name = "name-of-image",
# this is the model object
models = [model],
image_config = openvino_image_config,
workspace = ws)
# Alternative: Re-use an image that you have already built from the workspace image registry
# openvino_image = ContainerImage(name = "<name-of-image>", workspace = ws)
else:
env = Environment('deploytoedgeenv')
# Please see [Azure ML Containers repository](https://github.com/Azure/AzureML-Containers#featured-tags)
# for open-sourced GPU base images.
env.docker.base_image = "mcr.microsoft.com/azureml/onnxruntime:latest-openvino-myriad"
#env.docker.base_image = DEFAULT_GPU_IMAGE
env.python.conda_dependencies = CondaDependencies.create(
conda_packages=['tensorflow-gpu==1.12.0','numpy'],
pip_packages=['azureml-defaults','azure-iothub-device-client','numpy','opencv-python','requests','pytz','onnx']
)
inference_config = InferenceConfig(entry_script="score.py", environment=env)
imagename= "myopenvino-myriad"
#imagename= "myopenvino"
imagelabel="1.0"
if useContainerImage:
openvino_image.wait_for_creation(show_output = True)
if openvino_image.creation_state == 'Failed':
print("Image build log at: " + openvino_image.image_build_log_uri)
else:
package = Model.package(ws, [model], inference_config=inference_config,image_name=imagename, image_label=imagelabel)
package.wait_for_creation(show_output=True)
if useContainerImage:
if openvino_image.creation_state != 'Failed':
print("Image URI at: " +openvino_image.image_location)
else:
    print("ACR:", package.get_container_registry())
print("Image:", package.location)
```
## 4. Deploy to the UP<sup>2</sup> device using Azure IoT Edge
### 4.1 Login with the Azure subscription to provision the IoT Hub and the IoT Edge device
```
!az login
!az account set --subscription $ws.subscription_id
# confirm the account
!az account show
```
### 4.2 Specify the IoT Edge device details
```
# Parameter list to configure the IoT Hub and the IoT Edge device
# Pick a name for what you want to call the module you deploy to the camera
module_name = "module-name-here"
# Resource group in Azure
resource_group_name= ws.resource_group
iot_rg=resource_group_name
# Azure region where your services will be provisioned
iot_location="location-here"
# Azure IoT Hub name
iot_hub_name="name-of-IoT-Hub"
# Pick a name for your camera
iot_device_id="name-of-IoT-Edge-device"
# Pick a name for the deployment configuration
iot_deployment_id="Inference Module from AML"
```
### 4.2a Optional: Provision the IoT Hub, create the IoT Edge device and set up the Intel UP<sup>2</sup> AI Vision Developer Kit
```
!az iot hub create --resource-group $resource_group_name --name $iot_hub_name --sku S1
# Register an IoT Edge device (create a new entry in the Iot Hub)
!az iot hub device-identity create --hub-name $iot_hub_name --device-id $iot_device_id --edge-enabled
!az iot hub device-identity show-connection-string --hub-name $iot_hub_name --device-id $iot_device_id
```
The following steps need to be executed in the device terminal.
1. Open the IoT Edge configuration file on the UP<sup>2</sup> device to update the IoT Edge device *connection string*:
`sudo nano /etc/iotedge/config.yaml`
```
provisioning:
  source: "manual"
  device_connection_string: "<ADD DEVICE CONNECTION STRING HERE>"
```
2. Alternatively, to use DPS TPM provisioning, update the configuration as follows:
```
provisioning:
  source: "dps"
  global_endpoint: "https://global.azure-devices-provisioning.net"
  scope_id: "{scope_id}"
  attestation:
    method: "tpm"
    registration_id: "{registration_id}"
```
3. Save and close the file: `CTRL + X, Y, Enter`
4. After entering the provisioning information in the configuration file, restart the *iotedge* daemon:
`sudo systemctl restart iotedge`
5. We will show the object detection results from the camera connected (`/dev/video0`) to the UP<sup>2</sup> on the display. Update your `.profile` file:
`nano ~/.profile`
and add the following line to the end of the file:
__xhost +__
### 4.3 Construct the deployment file
```
# create the registry uri
container_reg = ws.get_details()["containerRegistry"]
reg_name=container_reg.split("/")[-1]
container_url = "\"" + openvino_image.image_location + "\","
subscription_id = ws.subscription_id
print('{}'.format(openvino_image.image_location), "<-- this is the URI configured in the IoT Hub for the device")
print('{}'.format(reg_name))
print('{}'.format(subscription_id))
from azure.mgmt.containerregistry import ContainerRegistryManagementClient
from azure.mgmt import containerregistry
client = ContainerRegistryManagementClient(ws._auth,subscription_id)
result= client.registries.list_credentials(resource_group_name, reg_name, custom_headers=None, raw=False)
username = result.username
password = result.passwords[0].value
```
#### Create the `deployment.json` with the AML image registry details
A sample deployment template is provided with this reference implementation.
```
file = open('./aml-deployment.template.json')
contents = file.read()
contents = contents.replace('__AML_MODULE_NAME', module_name)
contents = contents.replace('__AML_REGISTRY_NAME', reg_name)
contents = contents.replace('__AML_REGISTRY_USER_NAME', username)
contents = contents.replace('__AML_REGISTRY_PASSWORD', password)
contents = contents.replace('__AML_REGISTRY_IMAGE_LOCATION', openvino_image.image_location)
with open('./deployment.json', 'wt', encoding='utf-8') as output_file:
output_file.write(contents)
```
### 4.4 Push the *deployment* to the IoT Edge device
```
print("Pushing deployment to IoT Edge device")
print("Set the deployment")
!az iot edge set-modules --device-id $iot_device_id --hub-name $iot_hub_name --content deployment.json
```
### 4.5 Monitor IoT Hub Messages
```
!az iot hub monitor-events --hub-name $iot_hub_name -y
```
## 5. CLEANUP
```
!rm score.py deployment.json myenv.yml
```
## Interpretability - Image Explainers
In this example, we use LIME and Kernel SHAP explainers to explain the ResNet50 model's multi-class output of an image.
First we import the packages and define some UDFs and a plotting function we will need later.
```
from synapse.ml.explainers import *
from synapse.ml.onnx import ONNXModel
from synapse.ml.opencv import ImageTransformer
from synapse.ml.io import *
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import StringIndexer
from pyspark.sql.functions import *
from pyspark.sql.types import *
import numpy as np
import pyspark
import urllib.request
import matplotlib.pyplot as plt
import PIL, io
from PIL import Image
vec_slice = udf(lambda vec, indices: (vec.toArray())[indices].tolist(), ArrayType(FloatType()))
arg_top_k = udf(lambda vec, k: (-vec.toArray()).argsort()[:k].tolist(), ArrayType(IntegerType()))
def downloadBytes(url: str):
with urllib.request.urlopen(url) as url:
barr = url.read()
return barr
def rotate_color_channel(bgr_image_array, height, width, nChannels):
B, G, R, *_ = np.asarray(bgr_image_array).reshape(height, width, nChannels).T
rgb_image_array = np.array((R, G, B)).T
return rgb_image_array
def plot_superpixels(image_rgb_array, sp_clusters, weights, green_threshold=99):
superpixels = sp_clusters
green_value = np.percentile(weights, green_threshold)
img = Image.fromarray(image_rgb_array, mode='RGB').convert("RGBA")
image_array = np.asarray(img).copy()
for (sp, v) in zip(superpixels, weights):
if v > green_value:
for (x, y) in sp:
image_array[y, x, 1] = 255
image_array[y, x, 3] = 200
plt.clf()
plt.imshow(image_array)
display()
```
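A quick sanity check of `rotate_color_channel` (the definition is repeated so this cell is self-contained): the double transpose reorders a flattened BGR buffer into an RGB array of shape `(height, width, 3)`:

```python
import numpy as np

def rotate_color_channel(bgr_image_array, height, width, nChannels):
    # unpack per-channel planes from the transposed array, then restack as RGB
    B, G, R, *_ = np.asarray(bgr_image_array).reshape(height, width, nChannels).T
    return np.array((R, G, B)).T

# 2x2 BGR image with blue=10, green=20, red=30 at every pixel
h, w, c = 2, 2, 3
bgr = np.tile(np.array([10, 20, 30], dtype=np.uint8), (h, w, 1))
rgb = rotate_color_channel(bgr.ravel(), h, w, c)
print(rgb.shape, rgb[0, 0])  # (2, 2, 3) [30 20 10]
```

Each pixel's channel order is reversed while the spatial layout is preserved, which is exactly what matplotlib expects for display.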
Create a dataframe for a testing image, and use the ResNet50 ONNX model to infer the image.
The result shows 39.6% probability of "violin" (889), and 38.4% probability of "upright piano" (881).
```
from synapse.ml.io import *
image_df = spark.read.image().load("wasbs://publicwasb@mmlspark.blob.core.windows.net/explainers/images/david-lusvardi-dWcUncxocQY-unsplash.jpg")
display(image_df)
# Rotate the image array from BGR into RGB channels for visualization later.
row = image_df.select("image.height", "image.width", "image.nChannels", "image.data").head()
locals().update(row.asDict())
rgb_image_array = rotate_color_channel(data, height, width, nChannels)
# Download the ONNX model
modelPayload = downloadBytes("https://mmlspark.blob.core.windows.net/publicwasb/ONNXModels/resnet50-v2-7.onnx")
featurizer = (
ImageTransformer(inputCol="image", outputCol="features")
.resize(224, True)
.centerCrop(224, 224)
.normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], color_scale_factor = 1/255)
.setTensorElementType(FloatType())
)
onnx = (
ONNXModel()
.setModelPayload(modelPayload)
.setFeedDict({"data": "features"})
.setFetchDict({"rawPrediction": "resnetv24_dense0_fwd"})
.setSoftMaxDict({"rawPrediction": "probability"})
.setMiniBatchSize(1)
)
model = Pipeline(stages=[featurizer, onnx]).fit(image_df)
predicted = (
model.transform(image_df)
.withColumn("top2pred", arg_top_k(col("probability"), lit(2)))
.withColumn("top2prob", vec_slice(col("probability"), col("top2pred")))
)
display(predicted.select("top2pred", "top2prob"))
```
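For reference, the `arg_top_k` UDF registered earlier is plain NumPy under the hood; outside Spark the same top-k computation looks like this (the function name is reused for clarity, this is not the Spark UDF itself):

```python
import numpy as np

def arg_top_k(vec, k):
    # indices of the k largest entries, in descending order of value
    return (-np.asarray(vec)).argsort()[:k].tolist()

probs = np.array([0.05, 0.396, 0.384, 0.17])
top2 = arg_top_k(probs, 2)
print(top2)          # [1, 2]
print(probs[top2])   # the matching probabilities, what vec_slice extracts
```

Negating the vector before `argsort` turns NumPy's ascending sort into a descending one, so the first `k` indices point at the largest probabilities.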
First we use the LIME image explainer to explain the model's top 2 classes' probabilities.
```
lime = (
ImageLIME()
.setModel(model)
.setOutputCol("weights")
.setInputCol("image")
.setCellSize(150.0)
.setModifier(50.0)
.setNumSamples(500)
.setTargetCol("probability")
.setTargetClassesCol("top2pred")
.setSamplingFraction(0.7)
)
lime_result = (
lime.transform(predicted)
.withColumn("weights_violin", col("weights").getItem(0))
.withColumn("weights_piano", col("weights").getItem(1))
.cache()
)
display(lime_result.select(col("weights_violin"), col("weights_piano")))
lime_row = lime_result.head()
```
We plot the LIME weights for the "violin" output and the "upright piano" output.
Green areas are superpixels with LIME weights above the 95th percentile.
```
plot_superpixels(rgb_image_array, lime_row["superpixels"]["clusters"], list(lime_row["weights_violin"]), 95)
plot_superpixels(rgb_image_array, lime_row["superpixels"]["clusters"], list(lime_row["weights_piano"]), 95)
```
Your results will look like:
<img src="https://mmlspark.blob.core.windows.net/graphics/explainers/image-lime-20210811.png"/>
Then we use the Kernel SHAP image explainer to explain the model's top 2 classes' probabilities.
```
shap = (
ImageSHAP()
.setModel(model)
.setOutputCol("shaps")
.setSuperpixelCol("superpixels")
.setInputCol("image")
.setCellSize(150.0)
.setModifier(50.0)
.setNumSamples(500)
.setTargetCol("probability")
.setTargetClassesCol("top2pred")
)
shap_result = (
shap.transform(predicted)
.withColumn("shaps_violin", col("shaps").getItem(0))
.withColumn("shaps_piano", col("shaps").getItem(1))
.cache()
)
display(shap_result.select(col("shaps_violin"), col("shaps_piano")))
shap_row = shap_result.head()
```
We plot the SHAP values for the "violin" output and the "upright piano" output.
Green areas are superpixels with SHAP values above the 95th percentile.
> Notice that we drop the base value from the SHAP output before rendering the superpixels. The base value is the model output for the background (all black) image.
```
plot_superpixels(rgb_image_array, shap_row["superpixels"]["clusters"], list(shap_row["shaps_violin"][1:]), 95)
plot_superpixels(rgb_image_array, shap_row["superpixels"]["clusters"], list(shap_row["shaps_piano"][1:]), 95)
```
Your results will look like:
<img src="https://mmlspark.blob.core.windows.net/graphics/explainers/image-shap-20210811.png"/>
# September 25 - EresNet VAE training
```
# Imports
import sys
import os
import time
import math
# Add the path to the parent directory to augment search for module
par_dir = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
if par_dir not in sys.path:
sys.path.append(par_dir)
# Plotting import
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Import the utils for plotting the metrics
from plot_utils import plot_utils
from plot_utils import notebook_utils_2
```
## Plot the training of the VAE using the training logs
```
# Plot model performance over the training iterations
def plot_vae_training(log_paths, model_names, model_color_dict, downsample_interval=None,
legend_loc=(0.8,0.5), show_plot=False, save_path=None):
# Assertions
assert log_paths is not None
assert model_names is not None
assert model_color_dict is not None
assert len(log_paths) == len(model_names)
assert len(model_names) == len(model_color_dict.keys())
# Extract the values stored in the .csv log files
epoch_values = []
mse_loss_values = []
kl_loss_values = []
true_epoch_values = []
true_mse_loss_values = []
true_kl_loss_values = []
# Iterate over the list of log files provided
for log_path in log_paths:
if(os.path.exists(log_path)):
log_df = pd.read_csv(log_path, usecols=["epoch", "recon_loss", "kl_loss"])
# Downsample the epoch and training loss values w.r.t. the downsample interval
curr_epoch_values = log_df["epoch"].values
curr_mse_loss_values = log_df["recon_loss"].values
curr_kl_loss_values = log_df["kl_loss"].values
# Downsample using the downsample interval
true_epoch_values.append(curr_epoch_values)
true_mse_loss_values.append(curr_mse_loss_values)
true_kl_loss_values.append(curr_kl_loss_values)
if downsample_interval is not None:
curr_epoch_values_downsampled = []
curr_mse_loss_values_downsampled = []
curr_kl_loss_values_downsampled = []
curr_epoch_list = []
curr_mse_loss_list = []
curr_kl_loss_list = []
for i in range(1, len(curr_epoch_values)):
if(i%downsample_interval == 0):
# Downsample the values using the mean of the values for the current interval
curr_epoch_values_downsampled.append(sum(curr_epoch_list)/downsample_interval)
curr_mse_loss_values_downsampled.append(sum(curr_mse_loss_list)/downsample_interval)
curr_kl_loss_values_downsampled.append(sum(curr_kl_loss_list)/downsample_interval)
# Reset the list for the next interval
curr_epoch_list = []
curr_mse_loss_list = []
curr_kl_loss_list = []
else:
# Add the values in the interval to the list
curr_epoch_list.append(curr_epoch_values[i])
curr_mse_loss_list.append(curr_mse_loss_values[i])
curr_kl_loss_list.append(curr_kl_loss_values[i])
epoch_values.append(curr_epoch_values_downsampled)
mse_loss_values.append(curr_mse_loss_values_downsampled)
kl_loss_values.append(curr_kl_loss_values_downsampled)
else:
print("Error. log path {0} does not exist".format(log_path))
# Initialize the plot
fig, ax1 = plt.subplots(figsize=(16,11))
ax2 = ax1.twinx()
# Print the mpl rcParams
mpl.rcParams['agg.path.chunksize']=1e12
# Reload the backend
mpl.use(mpl.get_backend())
# Plot the values
if downsample_interval is None:
for i, model_name in enumerate(model_names):
ax1.plot(true_epoch_values[i], true_mse_loss_values[i],
color=model_color_dict[model_name][0],
label= model_name + " MSE loss")
ax2.plot(true_epoch_values[i], true_kl_loss_values[i],
color=model_color_dict[model_name][1],
label= model_name + " KL loss")
else:
for i, model_name in enumerate(model_names):
ax1.plot(true_epoch_values[i], true_mse_loss_values[i],
color=model_color_dict[model_name][0], alpha=0.5, linewidth=0.5)
ax1.plot(epoch_values[i], mse_loss_values[i],
color=model_color_dict[model_name][0],
label= model_name + " MSE loss", alpha=0.9, linewidth=1.0)
ax2.plot(true_epoch_values[i], true_kl_loss_values[i],
color=model_color_dict[model_name][1], alpha=0.5, linewidth=0.5)
ax2.plot(epoch_values[i], kl_loss_values[i],
color=model_color_dict[model_name][1],
label= model_name + " KL loss", alpha=0.9, linewidth=1.0)
# Setup plot characteristics
ax1.tick_params(axis="x", labelsize=30)
ax1.set_xlabel("Epoch", fontsize=30)
#ax1.set_yscale("log")
ax1.set_ylabel("Log Recon loss", fontsize=30, color=model_color_dict[model_name][0])
ax1.tick_params(axis="y", labelsize=30, colors=model_color_dict[model_name][0])
#ax2.set_yscale("log")
ax2.set_ylabel("Log KL loss", fontsize=30, color=model_color_dict[model_name][1])
ax2.tick_params(axis="y", labelsize=30, colors=model_color_dict[model_name][1])
plt.grid(True)
lines1, labels1 = ax1.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
lgd = plt.legend(lines1 + lines2, labels1 + labels2, prop={"size":30},
loc="upper right", bbox_to_anchor=(1.6, 1.0), frameon=True,
fancybox=True, shadow=True)
fig.suptitle("Training vs Epochs", fontsize=25)
ax1.grid(True)
ax2.grid(True)
plt.margins(0.2)
if save_path is not None:
plt.savefig(save_path, format='eps', dpi=300, bbox_extra_artists=(lgd,))
if show_plot:
try:
plt.show()
except:
print("plot_utils.plot_vae_training() : Unable to render the plot"
+ " due to limits on \'agg.path.chunksize\')")
if save_path is None:
print("plot_utils.plot_vae_training() : Saving plot to ./{0}".format("vae_training_log.eps"))
plt.savefig("vae_training_log.eps", format='eps', dpi=300, bbox_extra_artists=(lgd,))
plt.clf() # Clear the plot frame
plt.close() # Close the opened window if any
else:
plt.clf() # Clear the plot frame
plt.close() # Close the opened window if any
run_ids = ["20190924_210209"]
model_ids = ["EresNet-34"]
dump_dirs = ["/home/akajal/WatChMaL/VAE/dumps/" + run_id + "/" for run_id in run_ids]
training_logs = [dump_dir + "log_train.csv" for dump_dir in dump_dirs]
val_logs = [dump_dir + "log_val.csv" for dump_dir in dump_dirs]
local_color_dict = {key:[np.random.rand(3,),np.random.rand(3,)] for key in model_ids}
# Plot training log
plot_vae_training(training_logs, model_ids,
local_color_dict,
downsample_interval=128,
legend_loc=(0.88,0.88),
show_plot=True)
# Plot validation log
plot_vae_training(val_logs, model_ids,
local_color_dict,
downsample_interval=128,
legend_loc=(0.87,0.88),
show_plot=True)
```
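The interval-mean downsampling implemented with running lists above can also be sketched in a vectorized form. A standalone NumPy illustration with toy values (not the actual training logs):

```python
import numpy as np

# Vectorized sketch of the interval-mean downsampling used in
# plot_vae_training: average every `interval` consecutive values.
values = np.arange(12, dtype=np.float64)
interval = 4

n = (len(values) // interval) * interval  # drop the ragged tail
downsampled = values[:n].reshape(-1, interval).mean(axis=1)

assert downsampled.tolist() == [1.5, 5.5, 9.5]
```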
## Get the per-sample metrics using the best model on the validation set at 5 epochs
```
latent_dims = [128]
dumps = ["20190925_104149"]
# First check that all the indices from the test validation set exist in all the dumps
ldump_idx_arr = None
# Iterate over the dumps and check the indices
for latent_dim, dump in zip(latent_dims, dumps):
print("----------------------------------------------------")
print("Reading metrics from VAE with {0} latent dimensions :".format(latent_dim))
print("----------------------------------------------------")
dump_npz_path = "/home/akajal/WatChMaL/VAE/dumps/{0}/test_validation_iteration_metrics.npz".format(dump)
dump_npz_arr = np.load(dump_npz_path)
dump_indices = np.sort(dump_npz_arr["indices"])
if ldump_idx_arr is not None:
if not np.array_equal(dump_indices, ldump_idx_arr):
print("Index array for latent dims {0} not equal to all the other.".format(latent_dim))
else:
print("Index array equal to the first index array")
else:
ldump_idx_arr = dump_indices
# Collect the metrics for plotting as well
recon_loss_values, kl_loss_values = [], []
recon_std_values, kl_std_values = [], []
recon_stderr_values, kl_stderr_values = [], []
# Iterate over the dumps and check the indices
for latent_dim, dump in zip(latent_dims, dumps):
print("\n----------------------------------------------------")
print("Printing metrics for VAE with {0} latent dimensions :".format(latent_dim))
print("----------------------------------------------------")
dump_npz_path = "/home/akajal/WatChMaL/VAE/dumps/{0}/test_validation_iteration_metrics.npz".format(dump)
npz_arr = np.load(dump_npz_path)
dump_recon_loss, dump_kl_loss = npz_arr["recon_loss"], npz_arr["kl_loss"]
mean_recon_loss, std_recon_loss = np.mean(dump_recon_loss), np.std(dump_recon_loss)
stderr_recon_loss = std_recon_loss/math.sqrt(dump_recon_loss.shape[0])
recon_loss_values.append(mean_recon_loss)
recon_std_values.append(std_recon_loss)
recon_stderr_values.append(stderr_recon_loss)
mean_kl_loss, std_kl_loss = np.mean(dump_kl_loss), np.std(dump_kl_loss)
stderr_kl_loss = std_kl_loss/math.sqrt(dump_kl_loss.shape[0])
kl_loss_values.append(mean_kl_loss)
kl_std_values.append(std_kl_loss)
kl_stderr_values.append(stderr_kl_loss)
print("Recon Loss metrics")
print("Mean Recon loss : {0}".format(mean_recon_loss))
print("Std Recon loss : {0}".format(std_recon_loss))
print("Stderr Recon loss : {0}\n".format(stderr_recon_loss))
print("KL Loss metrics")
print("Mean KL loss : {0}".format(mean_kl_loss))
print("Std KL loss : {0}".format(std_kl_loss))
print("Stderr KL loss : {0}".format(stderr_kl_loss))
```
```
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
def load_data():
# load panda pkl files
data = pd.read_pickle('data.pkl')
nomination_onehot = pd.read_pickle('nomination_onehot.pkl')
selected_performers_onehot = pd.read_pickle('selected_performers_onehot.pkl')
selected_directors_onehot = pd.read_pickle('selected_directors_onehot.pkl')
selected_studio_onehot = pd.read_pickle('selected_studio_onehot.pkl')
selected_scriptwriter_onehot = pd.read_pickle('selected_scriptwriter_onehot.pkl')
base_path = "predict_target/target_"
target_data = pd.read_pickle('predict_target/target.pkl')
target_nomination_onehot = pd.read_pickle(base_path + 'nomination_onehot.pkl')
target_selected_performers_onehot = pd.read_pickle(base_path + 'selected_performers_onehot.pkl')
target_selected_directors_onehot = pd.read_pickle(base_path + 'selected_directors_onehot.pkl')
target_selected_studio_onehot = pd.read_pickle(base_path + 'selected_studio_onehot.pkl')
target_selected_scriptwriter_onehot = pd.read_pickle(base_path + 'selected_scriptwriter_onehot.pkl')
# review_dataframe = pd.read_pickle('review_dataframe.pkl')
# tfidf = pd.read_pickle('tfidf015_025.pkl')
data_len = 211
target_data.index += data_len
target_nomination_onehot.index += data_len
target_selected_performers_onehot.index += data_len
target_selected_directors_onehot.index += data_len
target_selected_studio_onehot.index += data_len
target_selected_scriptwriter_onehot.index += data_len
# concat target files and others
data = pd.concat([data, target_data])
nomination_onehot = pd.concat([nomination_onehot, target_nomination_onehot])
selected_performers_onehot = pd.concat([selected_performers_onehot, target_selected_performers_onehot])
selected_directors_onehot = pd.concat([selected_directors_onehot, target_selected_directors_onehot])
selected_studio_onehot = pd.concat([selected_studio_onehot, target_selected_studio_onehot])
selected_scriptwriter_onehot = pd.concat([selected_scriptwriter_onehot, target_selected_scriptwriter_onehot])
# People duplicated between selected_directors_onehot and selected_scriptwriter_onehot
duplicate_scriptwriter = set(selected_directors_onehot.columns) & set(selected_scriptwriter_onehot.columns)
selected_scriptwriter_onehot = selected_scriptwriter_onehot.drop(duplicate_scriptwriter, axis=1)
df = pd.concat(
[
nomination_onehot,
selected_performers_onehot,
selected_directors_onehot,
selected_studio_onehot,
selected_scriptwriter_onehot,
data["screen_time"],
#tfidf
],
axis=1,
sort=False,
)
# Drop columns with high collinearity
drop_clm = ['吉田一夫']
df = df.drop(drop_clm, axis=1)
# Fill screen times that could not be retrieved (screen_time == -1) with the mean
# df[df["screen_time"] == -1] = df.mean().screen_time <- bad example (overwrites whole rows)
df["screen_time"] = df["screen_time"].replace(-1, df["screen_time"].mean())
# Attach the year and prize flags needed to treat this as a dataset
df = pd.concat(
[df, data["year"], data["prize"]], axis=1
)
df.fillna(0, inplace=True)
return df
df = load_data()
len(df)
def standard_scale(year):
scaler = StandardScaler()
x_columns = df.drop(["year", "prize"], axis=1).columns
train_x = df[df["year"] != year].drop(["year", "prize"], axis=1).values
test_x = df[df["year"] == year].drop(["year", "prize"], axis=1).values
train_y_df = df[df["year"] != year]["prize"]
test_y_df = df[df["year"] == year]["prize"]
scaler.fit(train_x)
std_train_x = scaler.transform(train_x)
std_test_x = scaler.transform(test_x)
std_train_x_df = pd.DataFrame(std_train_x, columns=x_columns)
std_test_x_df = pd.DataFrame(std_test_x, columns=x_columns)
# Adjust the indices
std_train_x_df.index.name = 'id'
std_test_x_df.index.name = 'id'
std_train_x_df.index += 1
std_test_x_df.index += 1
# Save with pickle
base_path = "../std_data/"
std_train_x_df.to_pickle(base_path + "train/{}_x.pkl".format(str(year)))
std_test_x_df.to_pickle(base_path + "test/{}_x.pkl".format(str(year)))
train_y_df.to_pickle(base_path + "train/{}_y.pkl".format(str(year)))
test_y_df.to_pickle(base_path + "test/{}_y.pkl".format(str(year)))
return std_train_x_df, std_test_x_df, train_y_df, test_y_df
std_train_x_df, std_test_x_df, train_y_df, test_y_df = standard_scale(2020)
std_test_x_df
data = pd.read_pickle('data.pkl')
target_data = pd.read_pickle('predict_target/target.pkl')
target_data.index += 211
data = pd.concat([data, target_data], axis=0, sort=True)
data
```
## Infosys stock price prediction
#### Predicting market value is of great importance for maximizing the profit of stock option purchases while keeping risk low. Recurrent neural networks (RNNs) have proved to be among the most powerful models for processing sequential data, and Long Short-Term Memory (LSTM) is one of the most successful RNN architectures. LSTM introduces the memory cell, a unit of computation that replaces traditional artificial neurons in the hidden layer of the network. With these memory cells, networks are able to effectively associate memories with inputs remote in time, and are hence well suited to grasping the structure of data dynamically over time with high predictive capacity.
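As a rough, standalone illustration of the memory-cell idea (a NumPy sketch with random made-up weights; this is not the Keras model trained below):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One LSTM cell step: gates decide what to forget, what to write into the
# memory cell, and how much of it to expose as the hidden output.
def lstm_step(x, h_prev, c_prev, W, U, b):
    z = W @ x + U @ h_prev + b                  # stacked pre-activations, shape (4*hidden,)
    f, i, o, g = np.split(z, 4)
    f, i, o, g = sigmoid(f), sigmoid(i), sigmoid(o), np.tanh(g)
    c = f * c_prev + i * g                      # memory cell: gated running state
    h = o * np.tanh(c)                          # hidden output
    return h, c

hidden, inputs = 3, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * hidden, inputs))       # made-up weights for illustration
U = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)

h = c = np.zeros(hidden)
for x in np.ones((5, inputs)):                  # feed a short toy sequence
    h, c = lstm_step(x, h, c, W, U, b)

assert h.shape == (3,) and c.shape == (3,)
```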
#### Importing the necessary libraries and data
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
dataset_train=pd.read_excel('C:/Users/aksha/Desktop/ai/datasets/Infosys_train.xlsx')
dataset_test=pd.read_excel('C:/Users/aksha/Desktop/ai/datasets/Infosys_test.xlsx')
print(dataset_train.info())
print(dataset_test.info())
dataset_train.head()
```
#### We have taken the train and test data as dataset_train and dataset_test respectively. There are 7 columns, namely Date, Open price, Highest value for the day, Lowest value for the day, Close price, Adjusted close value and Volume.
#### We can see that there are a few null values in the train data, hence we remove them
```
dataset_train=dataset_train.replace('-',np.nan)
dataset_train=dataset_train.dropna()
print(dataset_train.info())
dataset_train.shape
```
#### We take only the Open price for our analysis
```
training_set = dataset_train.iloc[:, 1:2].values
plt.plot(training_set, color = 'red', label = ' Stock Price')
# Feature Scaling
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
training_set_scaled = sc.fit_transform(training_set)
training_set_scaled
```
#### Creating a data structure with 60 timesteps and 1 output
```
X_train = []
y_train = []
for i in range(60, len(training_set_scaled)):
X_train.append(training_set_scaled[i-60:i, 0])
y_train.append(training_set_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
# Reshaping
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
X_train.shape
```
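The windowing loop above can be checked on a toy series. A standalone sketch with made-up data that makes the resulting shapes easy to verify:

```python
import numpy as np

# Stand-in for training_set_scaled[:, 0]: 100 scaled values
series = np.arange(100, dtype=np.float32)
window = 60

# Each sample is a window of 60 past values; the target is the next value
X = np.array([series[i - window:i] for i in range(window, len(series))])
y = series[window:]

assert X.shape == (40, 60)                 # (samples, timesteps)
assert y.shape == (40,)
assert X[0, -1] == 59.0 and y[0] == 60.0   # each target follows its window

# reshape to (samples, timesteps, features) as Keras LSTMs expect
X = X.reshape(X.shape[0], X.shape[1], 1)
assert X.shape == (40, 60, 1)
```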
#### Importing the Keras libraries and packages
```
#Building the RNN
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
# Initialising the RNN
regressor = Sequential()
# Adding the first LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a third LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a fourth LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
# Adding the output layer
regressor.add(Dense(units = 1))
# Compiling the RNN
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')
X_train.shape
# Fitting the RNN to the Training set
regressor.fit(X_train, y_train, epochs = 100, batch_size = 32)
```
#### Making predictions and visualizing the results
```
real_stock_price = dataset_test.iloc[:, 1:2].values
plt.plot(real_stock_price, color = 'red', label = 'Real Stock Price')
dataset_total = pd.concat((dataset_train['Open'], dataset_test['Open']), axis = 0)
inputs = dataset_total[len(dataset_total) - len(dataset_test) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, 100):
X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_stock_price = regressor.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
# Visualising the results
plt.plot(real_stock_price, color = 'red', label = 'Real Stock Price')
plt.plot(predicted_stock_price, color = 'blue', label = 'Predicted Stock Price')
plt.title('Infosys Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel('Infosys Stock Price')
plt.legend()
plt.show()
```
#### Saving the model using pickle
```
import pickle
filename = 'finalized_model.sav'
pickle.dump(regressor, open(filename, 'wb'))
```
#### The model will be saved as 'finalized_model' and this model can be imported and reused
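A minimal standalone round-trip sketch of the save/load cycle (a stand-in object is used here so the snippet runs anywhere; the regressor above is pickled the same way from the caller's side):

```python
import io
import pickle

# Dump and reload through an in-memory buffer instead of a file on disk
buffer = io.BytesIO()
model_stub = {"name": "finalized_model"}   # stand-in for the trained regressor
pickle.dump(model_stub, buffer)

buffer.seek(0)
loaded_model = pickle.load(buffer)
assert loaded_model == model_stub

# With the real file saved above, loading looks like:
# loaded_model = pickle.load(open('finalized_model.sav', 'rb'))
```

Note that for Keras models specifically, `model.save()` and `load_model()` are generally a more robust persistence route than pickle.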
```
! pip install -Uq catalyst==20.12 gym==0.17.3
```
# Seminar. RL, DQN.
Hi! In the first part of the seminar, we are going to introduce one of the main algorithms in the Reinforcement Learning domain. Deep Q-Network (DQN) is the pioneering algorithm that amalgamates Q-Learning and Deep Neural Networks. There is also a small review of gym environments, where our bots will play games.
In the beginning, look at the algorithm:

There are several differences between the usual DL and RL routines. Our bots are trained on the actions they have taken in the past. We don't have infinite memory, but we can save some transitions in a buffer. Let's code it!
```
from catalyst.utils import set_global_seed, get_device
set_global_seed(42)
device = get_device()
from collections import deque, namedtuple
import random
import numpy as np
import gym
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
import numpy as np
import typing as tp
from collections import deque, namedtuple
Transition = namedtuple(
'Transition',
field_names=[
'state',
'action',
'reward',
'done',
'next_state'
]
)
class ReplayBuffer:
def __init__(self, capacity: int):
self.buffer = deque(maxlen=capacity)
def append(self, transition: Transition):
self.buffer.append(transition)
def sample(self, size: int) -> tp.Sequence[np.array]:
indices = np.random.choice(
len(self.buffer),
size,
replace=size > len(self.buffer)
)
states, actions, rewards, dones, next_states = \
zip(*[self.buffer[idx] for idx in indices])
states, actions, rewards, dones, next_states = (
np.array(states, dtype=np.float32),
np.array(actions, dtype=np.int64),
np.array(rewards, dtype=np.float32),
np.array(dones, dtype=bool),
np.array(next_states, dtype=np.float32)
)
return states, actions, rewards, dones, next_states
def __len__(self) -> int:
return len(self.buffer)
```
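A quick standalone usage sketch of the buffer pattern above (the `Transition`/`deque` machinery is re-declared here so the snippet runs on its own):

```python
import numpy as np
from collections import deque, namedtuple

# Mirrors the Transition/ReplayBuffer API defined above
Transition = namedtuple('Transition', ['state', 'action', 'reward', 'done', 'next_state'])

buffer = deque(maxlen=4)
for t in range(6):
    buffer.append(Transition(state=t, action=0, reward=1.0, done=False, next_state=t + 1))

# capacity bounds the memory: the oldest transitions are evicted
assert len(buffer) == 4
assert buffer[0].state == 2  # transitions 0 and 1 were pushed out

# sampling with replacement when the request exceeds the buffer size,
# exactly as ReplayBuffer.sample does
indices = np.random.choice(len(buffer), 8, replace=True)
batch = [buffer[i] for i in indices]
assert len(batch) == 8
```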
To work well with Catalyst train loops, we implement an intermediate abstraction.
```
from torch.utils.data.dataset import IterableDataset
# as far as RL does not have some predefined dataset,
# we need to specify epoch lenght by ourselfs
class ReplayDataset(IterableDataset):
def __init__(self, buffer: ReplayBuffer, epoch_size: int = int(1e3)):
self.buffer = buffer
self.epoch_size = epoch_size
def __iter__(self) -> tp.Iterator[tp.Sequence[np.array]]:
states, actions, rewards, dones, next_states = \
self.buffer.sample(self.epoch_size)
for i in range(len(dones)):
yield states[i], actions[i], rewards[i], dones[i], next_states[i]
def __len__(self) -> int:
return self.epoch_size
```
After creating the buffer, we need to gather state-action-reward transitions and save them in it. We create one function that asks the model for an action, and another function to communicate with the environment.
```
def get_action(
env,
network: nn.Module,
state: np.array,
epsilon: float = -1
) -> int:
if np.random.random() < epsilon:
action = env.action_space.sample()
else:
state = torch.tensor(state[None], dtype=torch.float32)
q_values = network(state).detach().cpu().numpy()[0]
action = np.argmax(q_values)
return int(action)
def generate_session(
env,
network: nn.Module,
t_max: int = 1000,
epsilon: float = -1,
replay_buffer: tp.Optional[ReplayBuffer] = None,
) -> tp.Tuple[float, int]:
total_reward = 0
state = env.reset()
for t in range(t_max):
action = get_action(env, network, state=state, epsilon=epsilon)
next_state, reward, done, _ = env.step(action)
if replay_buffer is not None:
transition = Transition(
state, action, reward, done, next_state)
replay_buffer.append(transition)
total_reward += reward
state = next_state
if done:
break
return total_reward, t
def generate_sessions(
env,
network: nn.Module,
t_max: int = 1000,
epsilon:float = -1,
replay_buffer: ReplayBuffer = None,
num_sessions: int = 100,
) -> tp.Tuple[float, int]:
sessions_reward, sessions_steps = 0, 0
for i_episode in range(num_sessions):
r, t = generate_session(
env=env,
network=network,
t_max=t_max,
epsilon=epsilon,
replay_buffer=replay_buffer,
)
sessions_reward += r
sessions_steps += t
return sessions_reward, sessions_steps
```
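The epsilon-greedy choice inside `get_action` can be sketched standalone (made-up Q-values; `np.random.default_rng` stands in for the environment's action sampler used above):

```python
import numpy as np

rng = np.random.default_rng(0)
q_values = np.array([0.1, 0.9, 0.3])

# With probability epsilon act randomly, otherwise take the greedy action
def eps_greedy(q, epsilon):
    if rng.random() < epsilon:
        return int(rng.integers(len(q)))
    return int(np.argmax(q))

# epsilon = 0 is fully greedy; epsilon = 1 is fully random
assert eps_greedy(q_values, 0.0) == 1
actions = [eps_greedy(q_values, 1.0) for _ in range(1000)]
assert set(actions) == {0, 1, 2}
```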
If we look closely at the algorithm, we'll see that we need two networks. They look the same, but one updates its weights by gradient descent while the second tracks the first via an exponential moving average. This target network helps stabilize training.
```
def soft_update(target: nn.Module, source: nn.Module, tau: float):
"""Updates the target data with smoothing by ``tau``"""
for target_param, param in zip(target.parameters(), source.parameters()):
target_param.data.copy_(
target_param.data * (1.0 - tau) + param.data * tau
)
```
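Numerically, this soft update is just a convex combination of parameters. A standalone sketch with plain numbers shows how fast the target tracks the source for a given `tau`:

```python
# Polyak averaging on scalars: target <- target * (1 - tau) + source * tau
target, source, tau = 0.0, 1.0, 0.01

for _ in range(100):
    target = target * (1.0 - tau) + source * tau

# Closed form after n steps from target=0: 1 - (1 - tau)**n,
# so after 100 steps the target has moved ~63% of the way to the source
assert abs(target - (1 - (1 - 0.01) ** 100)) < 1e-9
assert 0.6 < target < 0.67
```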
To communicate with the buffer, Catalyst's Runner requires an additional Callback.
```
from catalyst import dl
class GameCallback(dl.Callback):
def __init__(
self,
*,
env,
replay_buffer: ReplayBuffer,
session_period: int,
epsilon: float,
epsilon_k: int,
actor_key,
):
super().__init__(order=0)
self.env = env
self.replay_buffer = replay_buffer
self.session_period = session_period
self.epsilon = epsilon
self.epsilon_k = epsilon_k
self.actor_key = actor_key
def on_stage_start(self, runner: dl.IRunner):
self.actor = runner.model[self.actor_key]
self.actor.eval()
generate_sessions(
env=self.env,
network=self.actor,
epsilon=self.epsilon,
replay_buffer=self.replay_buffer,
num_sessions=1000,
)
self.actor.train()
def on_epoch_start(self, runner: dl.IRunner):
self.epsilon *= self.epsilon_k
self.session_counter = 0
self.session_steps = 0
def on_batch_end(self, runner: dl.IRunner):
if runner.global_batch_step % self.session_period == 0:
self.actor.eval()
session_reward, session_steps = generate_session(
env=self.env,
network=self.actor,
epsilon=self.epsilon,
replay_buffer=self.replay_buffer
)
self.session_counter += 1
self.session_steps += session_steps
runner.batch_metrics.update({"s_reward": session_reward})
runner.batch_metrics.update({"s_steps": session_steps})
self.actor.train()
def on_epoch_end(self, runner: dl.IRunner):
num_sessions = 100
self.actor.eval()
valid_rewards, valid_steps = generate_sessions(
env=self.env,
network=self.actor,
num_sessions=num_sessions
)
self.actor.train()
valid_rewards /= num_sessions
runner.epoch_metrics["train_num_samples"] = self.session_steps
runner.epoch_metrics["train_updates_per_sample"] = \
runner.loader_sample_step / self.session_steps
runner.epoch_metrics["train_v_reward"] = valid_rewards
runner.epoch_metrics["train_epsilon"] = self.epsilon
class CustomRunner(dl.Runner):
def __init__(
self,
*,
gamma: float,
tau: float,
tau_period: int = 1,
**kwargs,
):
super().__init__(**kwargs)
self.gamma = gamma
self.tau = tau
self.tau_period = tau_period
def on_stage_start(self, runner: dl.IRunner):
super().on_stage_start(runner)
soft_update(self.model["target"], self.model["origin"], 1.0)
def _handle_batch(self, batch: tp.Sequence[np.array]):
# model train/valid step
states, actions, rewards, dones, next_states = batch
network, target_network = self.model["origin"], self.model["target"]
# get q-values for all actions in current states
state_qvalues = network(states)
# select q-values for chosen actions
state_action_qvalues = \
state_qvalues.gather(1, actions.unsqueeze(-1)).squeeze(-1)
# compute q-values for all actions in next states
# compute V*(next_states) using predicted next q-values
# at the last state we shall use simplified formula:
# Q(s,a) = r(s,a) since s' doesn't exist
with torch.no_grad():
next_state_qvalues = target_network(next_states)
next_state_values = next_state_qvalues.max(1)[0]
next_state_values[dones] = 0.0
next_state_values = next_state_values.detach()
# compute "target q-values" for loss,
# it's what's inside square parentheses in the above formula.
target_state_action_qvalues = \
next_state_values * self.gamma + rewards
# mean squared error loss to minimize
loss = self.criterion(
state_action_qvalues,
target_state_action_qvalues.detach()
)
self.batch_metrics.update({"loss": loss})
if self.is_train_loader:
loss.backward()
self.optimizer.step()
self.optimizer.zero_grad()
if self.global_batch_step % self.tau_period == 0:
soft_update(target_network, network, self.tau)
from catalyst import utils
def get_network(env, num_hidden: int = 128):
inner_fn = utils.get_optimal_inner_init(nn.ReLU)
outer_fn = utils.outer_init
network = torch.nn.Sequential(
nn.Linear(env.observation_space.shape[0], num_hidden),
nn.ReLU(),
nn.Linear(num_hidden, num_hidden),
nn.ReLU(),
)
head = nn.Linear(num_hidden, env.action_space.n)
network.apply(inner_fn)
head.apply(outer_fn)
return torch.nn.Sequential(network, head)
# data
batch_size = 64
epoch_size = int(1e3) * batch_size
buffer_size = int(1e5)
# runner settings, ~training
gamma = 0.99
tau = 0.01
tau_period = 1 # in batches
# callback, ~exploration
session_period = 100 # in batches
epsilon = 0.98
epsilon_k = 0.9
# optimization
lr = 3e-4
# env_name = "LunarLander-v2"
env_name = "CartPole-v1"
env = gym.make(env_name)
replay_buffer = ReplayBuffer(buffer_size)
network, target_network = get_network(env), get_network(env)
utils.set_requires_grad(target_network, requires_grad=False)
models = {"origin": network, "target": target_network}
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(network.parameters(), lr=lr)
loaders = {
"train": DataLoader(
ReplayDataset(replay_buffer, epoch_size=epoch_size),
batch_size=batch_size,
),
}
runner = CustomRunner(
gamma=gamma,
tau=tau,
tau_period=tau_period,
)
runner.train(
model=models,
criterion=criterion,
optimizer=optimizer,
loaders=loaders,
logdir="./logs_dqn",
num_epochs=10,
verbose=True,
main_metric="v_reward",
minimize_metric=False,
load_best_on_end=True,
callbacks=[
GameCallback(
env=env,
replay_buffer=replay_buffer,
session_period=session_period,
epsilon=epsilon,
epsilon_k=epsilon_k,
actor_key="origin",
)
]
)
```
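The target computation inside `_handle_batch` boils down to the one-step Bellman backup. A standalone NumPy sketch with made-up numbers:

```python
import numpy as np

# One-step Bellman target: y = r + gamma * max_a' Q_target(s', a'),
# with the bootstrap term zeroed on terminal transitions (done == True)
gamma = 0.99
rewards = np.array([1.0, 0.5, 2.0])
dones = np.array([False, True, False])
next_q = np.array([[0.2, 0.8],      # Q_target(s', .) for each sample
                   [0.6, 0.4],
                   [1.0, 3.0]])

next_values = next_q.max(axis=1)
next_values[dones] = 0.0
targets = rewards + gamma * next_values

assert np.allclose(targets, [1.0 + 0.99 * 0.8, 0.5, 2.0 + 0.99 * 3.0])
```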
And we can watch how our model plays in the games!
\* to run the cells below, you should update your python environment. Instructions depend on your system specification.
```
# record sessions
import gym.wrappers
env = gym.wrappers.Monitor(
gym.make(env_name),
directory="videos_dqn",
force=True)
generate_sessions(
env=env,
network=runner.model["origin"],
num_sessions=100
)
env.close()
# show video
from IPython.display import HTML
import os
video_names = list(
filter(lambda s: s.endswith(".mp4"), os.listdir("./videos_dqn/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos_dqn/" + video_names[-1])) # this may or may not be the _last_ video; try other indices
```
# Multi-Qubit Devices: the `ProcessorSpec` object
This tutorial covers the creation and use of `ProcessorSpec` objects. These objects are used to define the "specification" of a quantum information processor (QIP) (e.g., device connectivity, the gate-set, etc.), and are particularly geared towards multi-qubit devices. Currently, these are mostly encountered in pyGSTi as an input for generating randomized benchmarking experiments, but they will be used more widely in future releases.
```
import pygsti
```
## Using a `ProcessorSpec` to specify a multi-qubit device.
The `ProcessorSpec` object is designed to encapsulate the specification of a small to medium-scale quantum computer, and to hold a variety of useful things that can be derived from this information. The basic information from which a `ProcessorSpec` is initialized is:
1. The number of qubits in the device, and, optionally, the labels of these qubits.
2. The target gate-set of the device, as either unitary matrices or using names that point to in-built unitary matrices. E.g., 'Gcnot' is a shorthand for specifying a CNOT gate. Normally this will be the "primitive" gates of the device, although it may sometimes be useful to choose other gate-sets (it depends what you are then going to use the `ProcessorSpec` for). Currently only discrete gate-sets are supported. E.g., there is no way to specify an arbitrary $\sigma_z$-rotation as one of the gates in the device. "Continuously parameterized" gates such as this may be supported in the future.
3. The connectivity of the device.
So let's create a `ProcessorSpec`.
The number of qubits the device is for:
```
nQubits = 4
```
Next, we pick some names for the qubits. These are akin to the *line labels* in a `Circuit` object (see the [Circuit tutorial](../Circuit.ipynb)). Qubits are typically labelled by names beginning with "Q" or integers (if not specified, the qubit labels default to the integers $0, 1, 2, \ldots$). Here we choose:
```
qubit_labels = ['Q0','Q1','Q2','Q3']
```
Next, we pick a set of fundamental gates. These can be specified via in-built names,such as 'Gcnot' for a CNOT gate. The full set of in-built names is specified in the dictionary returned by `pygsti.tools.internalgates.get_standard_gatename_unitaries()`, and note that there is redundancy in this set. E.g., 'Gi' is a 1-qubit identity gate but so is 'Gc0' (as one of the 24 1-qubit Cliffords named as 'Gci' for i = 0, 1, 2, ...). Note that typically we *do not specify an idle/identity gate* as one of the primitives, unless there's a particular type of global-idle gate we're trying to model. (Specifying an idle gate may also be more appropriate for 1- and 2-qubit devices, since in these small-system cases we may label each circuit layer separately.)
```
gate_names = ['Gxpi2', # An X rotation by pi/2
'Gypi2', # A Y rotation by pi/2
'Gzpi2', # A Z rotation by pi/2
'Gh', # The Hadamard gate
'Gcphase'] # The controlled-Z gate.
```
Additionally, we can define gates with user-specified names and actions, via a dictionary with keys that are strings (gate names) and values that are unitary matrices. For example, if you wanted to call the Hadamard gate 'Ghad', you could do that here. The gate names should all start with a 'G', but are otherwise unrestricted. Here we'll leave this dictionary empty.
```
nonstd_gate_unitaries = {}
```
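For illustration, here is what a custom entry could look like, assuming we wanted the hypothetical 'Ghad' name mentioned above: a plain NumPy unitary keyed by the gate name (this is just a sketch; the empty dictionary above is what we actually use):

```python
import numpy as np

# Hypothetical example entry: the Hadamard unitary under the name 'Ghad'.
# Gate names must start with 'G'; values are plain unitary matrices.
Ghad = (1 / np.sqrt(2)) * np.array([[1, 1],
                                    [1, -1]], dtype=complex)
example_nonstd = {'Ghad': Ghad}

# Sanity check: any supplied matrix must be unitary (U U-dagger = I).
assert np.allclose(Ghad @ Ghad.conj().T, np.eye(2))
```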
Specify the "availability" of gates: which qubits they can be applied to. When not specified for a gate, it is assumed that it can be applied to all dimension-appropriate sets of qubits. E.g., a 1-qubit gate will be assumed to be applicable to each qubit; a 2-qubit gate will be assumed to be applicable to all ordered pairs of qubits, etc.
Let's make our device have ring connectivity:
```
availability = {'Gcphase':[('Q0','Q1'),('Q1','Q2'),('Q2','Q3'),('Q3','Q0')]}
```
We then create a `ProcessorSpec` by handing it all of this information. This then generates a variety of auxiliary information about the device from this input (e.g., optimal compilations for the Pauli operators and CNOT). The defaults here that haven't been specified will be fine for most purposes, but sometimes they will need to be changed to avoid slow `ProcessorSpec` initialization; fixes for these issues will likely be implemented in the future.
```
pspec = pygsti.obj.ProcessorSpec(nQubits, gate_names, nonstd_gate_unitaries=nonstd_gate_unitaries,
availability=availability, qubit_labels=qubit_labels)
```
`ProcessorSpec` objects are not particularly useful on their own. Currently, they are mostly used for interfacing with `Circuit` objects, in-built compilation algorithms, and the randomized benchmarking code. However, in the future we expect that they will be used for constructing circuits for other multi-qubit QCVV methods in pyGSTi.
## Simulating circuits
When a `ProcessorSpec` is created, it creates (and contains) several models (`Model` objects) of the device's behavior. These are stored in the `.models` member, which is a dictionary:
```
pspec.models.keys()
```
So our `pspec` has two models, one labelled `'clifford'`, the other `'target'`. Both of these are models of the *perfect* (noise-free) gates. (Models with imperfect gates require the user to build their own imperfect `Model`.)
As demonstrated toward the end of the [Circuit tutorial](../Circuit.ipynb), once we have a model, simulating circuit outcomes is easy. Here we'll do a perfect-gates simulation using the `'clifford'` model (which uses a stabilizer-state propagation technique that is efficient in the number of qubits):
```
model = pspec.models['clifford']
clifford_circuit = pygsti.obj.Circuit([ [('Gh','Q0'),('Gh','Q1'),('Gxpi2','Q3')],
('Gcphase','Q0','Q1'), ('Gcphase','Q1','Q2'),
[('Gh','Q0'),('Gh','Q1')]],
line_labels=['Q0','Q1','Q2','Q3'])
print(clifford_circuit)
out = clifford_circuit.simulate(model)
print('\n'.join(['%s = %g' % (ol,p) for ol,p in out.items()]))
```
The keys of the outcome dictionary `out` are things like `('00',)` instead of just `'00'` because of possible *intermediate* outcomes. See the [Instruments tutorial](Instruments.ipynb) if you're interested in learning more about intermediate outcomes. Note also that zero-probability outcomes are not included in `out.keys()`.
If you're interested in creating *imperfect* models, see the tutorials on ["explicit" models](../ExplicitModel.ipynb) and ["implicit" models](../ImplicitModel.ipynb). Note that if you're interested in simulating RB data there are separate Pauli-error circuit simulators within the `pygsti.extras.rb` package which take as input *perfect* models and produce noisy RB data.
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Using the SavedModel format
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/saved_model">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ko/guide/saved_model.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ko/guide/saved_model.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
Note: This document is a community translation. Despite the community's best efforts, it may not match the [official English documentation](https://github.com/tensorflow/docs/blob/master/site/en/guide/effective_tf2.md) exactly or reflect the latest content. To suggest improvements, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) repository; to participate in translation or review, email [docs-ko@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko).
A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model-building code to run, which makes it useful for sharing or deploying with [TFLite](https://tensorflow.org/lite), [TensorFlow.js](https://js.tensorflow.org/), [TensorFlow Serving](https://www.tensorflow.org/tfx/serving/tutorials/Serving_REST_simple), or [TFHub](https://tensorflow.org/hub).
If you have the Python code for a model and want to load its weights in Python, see the [training checkpoints guide](./checkpoint.ipynb).
As a quick introduction, this section exports a pre-trained Keras model and serves image classification requests with it. The rest of the guide covers details and other ways to create SavedModels.
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from matplotlib import pyplot as plt
import numpy as np
file = tf.keras.utils.get_file(
"grace_hopper.jpg",
"https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg")
img = tf.keras.preprocessing.image.load_img(file, target_size=[224, 224])
plt.imshow(img)
plt.axis('off')
x = tf.keras.preprocessing.image.img_to_array(img)
x = tf.keras.applications.mobilenet.preprocess_input(
x[tf.newaxis,...])
```
As a running example we will use an image of Grace Hopper and a Keras pre-trained image classification model, since it is easy to use. Custom models work too, and are covered in detail later.
```
#tf.keras.applications.vgg19.decode_predictions
labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
pretrained_model = tf.keras.applications.MobileNet()
result_before_save = pretrained_model(x)
print()
decoded = imagenet_labels[np.argsort(result_before_save)[0,::-1][:5]+1]
print("Result before saving:\n", decoded)
```
The most likely prediction for this image is "military uniform".
```
tf.saved_model.save(pretrained_model, "/tmp/mobilenet/1/")
```
The save path follows a TensorFlow Serving convention in which the last path component (`1/` here) is a version number for your model; this lets tools such as TensorFlow Serving reason about relative freshness.
SavedModels have named functions called signatures. Keras models export their forward pass under the `serving_default` signature key. The [SavedModel command line interface](#details_of_the_savedmodel_command_line_interface) is useful for inspecting SavedModels on disk:
```
!saved_model_cli show --dir /tmp/mobilenet/1 --tag_set serve --signature_def serving_default
```
We can load the SavedModel back into Python with `tf.saved_model.load` and see how Admiral Hopper's image is classified.
```
loaded = tf.saved_model.load("/tmp/mobilenet/1/")
print(list(loaded.signatures.keys())) # ["serving_default"]
```
Imported signatures always return dictionaries.
```
infer = loaded.signatures["serving_default"]
print(infer.structured_outputs)
```
Running inference from the SavedModel gives the same result as the original model.
```
labeling = infer(tf.constant(x))[pretrained_model.output_names[0]]
decoded = imagenet_labels[np.argsort(labeling)[0,::-1][:5]+1]
print("Result after saving and loading:\n", decoded)
```
## Serving the model with TensorFlow Serving
SavedModels are usable from Python, but production environments typically use a dedicated service for inference. This is easy to set up from a SavedModel using TensorFlow Serving.
See the [TensorFlow Serving REST tutorial](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/tutorials/Serving_REST_simple.ipynb) for more details about Serving, including instructions for installing `tensorflow_model_server` in a notebook or on your local machine. As a quick sketch, to serve the `mobilenet` model exported above, set the model base path to the SavedModel directory:
```bash
nohup tensorflow_model_server \
--rest_api_port=8501 \
--model_name=mobilenet \
--model_base_path="/tmp/mobilenet" >server.log 2>&1
```
Then send it a request.
```python
!pip install requests
import json
import numpy
import requests
data = json.dumps({"signature_name": "serving_default",
"instances": x.tolist()})
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/mobilenet:predict',
data=data, headers=headers)
predictions = numpy.array(json.loads(json_response.text)["predictions"])
```
The resulting `predictions` are the same as the results from Python.
### The SavedModel format
A SavedModel is a directory containing serialized signatures and the state needed to run them, including variable values and constants.
```
!ls /tmp/mobilenet/1 # assets saved_model.pb variables
```
The `saved_model.pb` file stores the set of named signatures, each identifying a function.
SavedModels may contain multiple sets of signatures (multiple MetaGraphs, identified by the `tag_set` argument to `saved_model_cli`), but this is rare. APIs which create multiple sets of signatures include [`tf.Estimator.experimental_export_all_saved_models`](https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator#experimental_export_all_saved_models) and, in TensorFlow 1.x, `tf.saved_model.Builder`.
```
!saved_model_cli show --dir /tmp/mobilenet/1 --tag_set serve
```
The `variables` directory contains a standard training checkpoint (see the [training checkpoints guide](./checkpoint.ipynb)).
```
!ls /tmp/mobilenet/1/variables
```
The `assets` directory contains files used by the TensorFlow graph, for example text files used to initialize vocabulary tables. It is unused in this example.
SavedModels may also have an `assets.extra` directory for files not used by the TensorFlow graph, for example files that consumers of the SavedModel may find useful. TensorFlow itself does not use this directory.
### Exporting custom models
In the first section, `tf.saved_model.save` automatically determined a signature for the `tf.keras.Model` object. This worked because Keras `Model` objects have an unambiguous method to export and known input shapes. `tf.saved_model.save` works just as well with low-level model-building APIs, but you will need to indicate which functions to use as signatures if you plan to serve the model with TensorFlow Serving.
```
class CustomModule(tf.Module):
def __init__(self):
super(CustomModule, self).__init__()
self.v = tf.Variable(1.)
@tf.function
def __call__(self, x):
return x * self.v
@tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
def mutate(self, new_v):
self.v.assign(new_v)
module = CustomModule()
```
This module has two methods decorated with `tf.function`. These functions are included in the SavedModel and will be loaded along with it into Python programs via `tf.saved_model.load`, but without explicit declarations they are not accessible to signature-based deployment tools such as TensorFlow Serving and `saved_model_cli`.
`module.mutate` has an `input_signature`, so there is already enough information to save its computation graph in the SavedModel. `__call__` has no signature, so that method needs to be called before saving.
```
module(tf.constant(0.))
tf.saved_model.save(module, "/tmp/module_no_signatures")
```
For functions without an `input_signature`, any input shapes used before saving will be available after loading. Since we called `__call__` with only a scalar, it will accept only scalars.
```
imported = tf.saved_model.load("/tmp/module_no_signatures")
assert 3. == imported(tf.constant(3.)).numpy()
imported.mutate(tf.constant(2.))
assert 6. == imported(tf.constant(3.)).numpy()
```
The function will not accept new shapes, such as vectors.
```python
imported(tf.constant([3.]))
```
<pre>
ValueError: Could not find matching function to call for canonicalized inputs ((<tf.Tensor 'args_0:0' shape=(1,) dtype=float32>,), {}). Only existing signatures are [((TensorSpec(shape=(), dtype=tf.float32, name=u'x'),), {})].
</pre>
Using `get_concrete_function`, we can add an input shape without calling the function. It takes `tf.TensorSpec` objects in place of `Tensor` arguments, indicating the shapes and dtypes of the inputs. The shape may be `None`, meaning any shape is acceptable, or a list of axis sizes, where an axis size of `None` allows any size along that axis. `tf.TensorSpec`s can also have names, which default to the function's argument keywords (here, "x").
```
module.__call__.get_concrete_function(x=tf.TensorSpec([None], tf.float32))
tf.saved_model.save(module, "/tmp/module_no_signatures")
imported = tf.saved_model.load("/tmp/module_no_signatures")
assert [3.] == imported(tf.constant([3.])).numpy()
```
Functions and variables attached to objects such as `tf.keras.Model` and `tf.Module` are available on import, but many Python types and attributes are lost. The Python program itself is not saved in the SavedModel.
We did not identify any function to export as a signature, so there are none.
```
!saved_model_cli show --dir /tmp/module_no_signatures --tag_set serve
```
## Identifying a signature to export
To indicate that a function should be a signature, specify the `signatures` argument when saving.
```
call = module.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))
tf.saved_model.save(module, "/tmp/module_with_signature", signatures=call)
```
We first converted the `tf.function` object to a `ConcreteFunction` with its `get_concrete_function` method. This is necessary because the function was created without a fixed `input_signature`, and so did not have an unambiguous set of `Tensor` inputs associated with it.
```
!saved_model_cli show --dir /tmp/module_with_signature --tag_set serve --signature_def serving_default
imported = tf.saved_model.load("/tmp/module_with_signature")
signature = imported.signatures["serving_default"]
assert [3.] == signature(x=tf.constant([3.]))["output_0"].numpy()
imported.mutate(tf.constant(2.))
assert [6.] == signature(x=tf.constant([3.]))["output_0"].numpy()
assert 2. == imported.v.numpy()
```
We exported a single signature, and its key defaulted to "serving_default". To export multiple signatures, pass a dictionary.
```
@tf.function(input_signature=[tf.TensorSpec([], tf.string)])
def parse_string(string_input):
return imported(tf.strings.to_number(string_input))
signatures = {"serving_default": parse_string,
"from_float": imported.signatures["serving_default"]}
tf.saved_model.save(imported, "/tmp/module_with_multiple_signatures", signatures)
!saved_model_cli show --dir /tmp/module_with_multiple_signatures --tag_set serve
```
`saved_model_cli` can also run SavedModels directly from the command line.
```
!saved_model_cli run --dir /tmp/module_with_multiple_signatures --tag_set serve --signature_def serving_default --input_exprs="string_input='3.'"
!saved_model_cli run --dir /tmp/module_with_multiple_signatures --tag_set serve --signature_def from_float --input_exprs="x=3."
```
## Fine-tuning imported models
Variable objects are available, so we can backpropagate through imported functions.
```
optimizer = tf.optimizers.SGD(0.05)
def train_step():
with tf.GradientTape() as tape:
loss = (10. - imported(tf.constant(2.))) ** 2
variables = tape.watched_variables()
grads = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(grads, variables))
return loss
for _ in range(10):
    # "v" converges to 5, "loss" converges to 0
print("loss={:.2f} v={:.2f}".format(train_step(), imported.v.numpy()))
```
## Control flow in SavedModels
Anything that can go in a `tf.function` can go in a SavedModel. With [AutoGraph](./function.ipynb), this includes conditional logic that depends on Tensors, expressed with Python control flow.
```
@tf.function(input_signature=[tf.TensorSpec([], tf.int32)])
def control_flow(x):
if x < 0:
        tf.print("Invalid!")
else:
tf.print(x % 3)
to_export = tf.Module()
to_export.control_flow = control_flow
tf.saved_model.save(to_export, "/tmp/control_flow")
imported = tf.saved_model.load("/tmp/control_flow")
imported.control_flow(tf.constant(-1)) # Invalid!
imported.control_flow(tf.constant(2)) # 2
imported.control_flow(tf.constant(3)) # 0
```
## SavedModels from Estimators
Estimators export SavedModels through [`tf.Estimator.export_saved_model`](https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator#export_saved_model). See the [Estimator guide](https://www.tensorflow.org/guide/estimator) for details.
```
input_column = tf.feature_column.numeric_column("x")
estimator = tf.estimator.LinearClassifier(feature_columns=[input_column])
def input_fn():
return tf.data.Dataset.from_tensor_slices(
({"x": [1., 2., 3., 4.]}, [1, 1, 0, 0])).repeat(200).shuffle(64).batch(16)
estimator.train(input_fn)
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
tf.feature_column.make_parse_example_spec([input_column]))
export_path = estimator.export_saved_model(
"/tmp/from_estimator/", serving_input_fn)
```
This SavedModel accepts serialized `tf.Example` protocol buffers, which are useful for serving with TensorFlow Serving. But we can also load it with `tf.saved_model.load` and run it from Python.
```
imported = tf.saved_model.load(export_path)
def predict(x):
example = tf.train.Example()
example.features.feature["x"].float_list.value.extend([x])
return imported.signatures["predict"](
examples=tf.constant([example.SerializeToString()]))
print(predict(1.5))
print(predict(3.5))
```
`tf.estimator.export.build_raw_serving_input_receiver_fn` allows you to create input functions which take raw tensors rather than `tf.train.Example`s.
## Load a SavedModel in C++
The C++ version of the SavedModel [loader](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/cc/saved_model/loader.h) provides an API to load a SavedModel from a path, while allowing SessionOptions and RunOptions. You must specify the tags associated with the graph to be loaded. The loaded version of a SavedModel is referred to as a SavedModelBundle and contains the MetaGraphDef and the session within which it is loaded.
```C++
const string export_dir = ...
SavedModelBundle bundle;
...
LoadSavedModel(session_options, run_options, export_dir, {kSavedModelTagTrain},
&bundle);
```
<a id=saved_model_cli/>
## Details of the SavedModel command line interface
You can use the SavedModel command line interface (CLI) to inspect and execute a SavedModel.
For example, you can use the CLI to inspect the model's `SignatureDef`s.
The CLI enables you to quickly confirm that the input tensor dtype and shape match the model.
Moreover, if you want to test your model, you can use the CLI to do a sanity check by passing in
sample inputs in various formats (for example, Python expressions) and then fetching the output.
### Installing the SavedModel CLI
Broadly speaking, you can install TensorFlow in either of the following two ways:
* By installing a pre-built TensorFlow binary.
* By building TensorFlow from source code.
If you installed TensorFlow through a pre-built binary, then the SavedModel CLI is already
installed on your system at pathname `bin/saved_model_cli`.
If you built TensorFlow from source code, you must run the following additional command to build `saved_model_cli`:
```
$ bazel build tensorflow/python/tools:saved_model_cli
```
### Overview of commands
The SavedModel CLI supports the following two commands on a `MetaGraphDef` in a SavedModel:
* `show`, which shows the computations available from a `MetaGraphDef` in a SavedModel.
* `run`, which runs a computation on a `MetaGraphDef`.
### The `show` command
A SavedModel contains one or more `MetaGraphDef`s, identified by their tag sets.
To serve a model, you might wonder what kind of `SignatureDef`s are in each model, and what their inputs and outputs are.
The `show` command lets you examine the contents of the SavedModel in hierarchical order. Here's the syntax:
```
usage: saved_model_cli show [-h] --dir DIR [--all]
[--tag_set TAG_SET] [--signature_def SIGNATURE_DEF_KEY]
```
For example, the following command shows all available `MetaGraphDef` tag sets in the SavedModel:
```
$ saved_model_cli show --dir /tmp/saved_model_dir
The given SavedModel contains the following tag-sets:
serve
serve, gpu
```
The following command shows all available `SignatureDef` keys for a `MetaGraphDef`:
```
$ saved_model_cli show --dir /tmp/saved_model_dir --tag_set serve
The given SavedModel `MetaGraphDef` contains `SignatureDefs` with the
following keys:
SignatureDef key: "classify_x2_to_y3"
SignatureDef key: "classify_x_to_y"
SignatureDef key: "regress_x2_to_y3"
SignatureDef key: "regress_x_to_y"
SignatureDef key: "regress_x_to_y2"
SignatureDef key: "serving_default"
```
If a `MetaGraphDef` has *multiple* tags in the tag set, you must specify all tags,
each separated by a comma. For example:
<pre>
$ saved_model_cli show --dir /tmp/saved_model_dir --tag_set serve,gpu
</pre>
To show all inputs and outputs TensorInfo for a specific `SignatureDef`, pass the `SignatureDef` key
to the `signature_def` option. This is very useful when you want to know the tensor key value,
dtype and shape of the input tensors for executing the computation graph later. For example:
```
$ saved_model_cli show --dir \
/tmp/saved_model_dir --tag_set serve --signature_def serving_default
The given SavedModel SignatureDef contains the following input(s):
inputs['x'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: x:0
The given SavedModel SignatureDef contains the following output(s):
outputs['y'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: y:0
Method name is: tensorflow/serving/predict
```
To show all available information in the SavedModel, use the `--all` option. For example:
<pre>
$ saved_model_cli show --dir /tmp/saved_model_dir --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['classify_x2_to_y3']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: x2:0
The given SavedModel SignatureDef contains the following output(s):
outputs['scores'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: y3:0
Method name is: tensorflow/serving/classify
...
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['x'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: x:0
The given SavedModel SignatureDef contains the following output(s):
outputs['y'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: y:0
Method name is: tensorflow/serving/predict
</pre>
### The `run` command
Invoke the `run` command to run a graph computation, passing inputs and then displaying (and optionally saving) the outputs.
Here's the syntax:
```
usage: saved_model_cli run [-h] --dir DIR --tag_set TAG_SET --signature_def
SIGNATURE_DEF_KEY [--inputs INPUTS]
[--input_exprs INPUT_EXPRS]
[--input_examples INPUT_EXAMPLES] [--outdir OUTDIR]
[--overwrite] [--tf_debug]
```
The `run` command provides the following three ways to pass inputs to the model:
* The `--inputs` option enables you to pass numpy ndarrays from files.
* The `--input_exprs` option enables you to pass Python expressions.
* The `--input_examples` option enables you to pass `tf.train.Example`s.
#### `--inputs`
To pass input data from files, specify the `--inputs` option, which takes the following general format:
```bsh
--inputs <INPUTS>
```
where *INPUTS* is one of the following formats:
* `<input_key>=<filename>`
* `<input_key>=<filename>[<variable_name>]`
You may pass multiple *INPUTS*. If you do pass multiple inputs, use a semicolon to separate each of the *INPUTS*.
`saved_model_cli` uses `numpy.load` to load the *filename*.
The *filename* may be in any of the following formats:
* `.npy`
* `.npz`
* pickle format
A `.npy` file always contains a numpy ndarray. Therefore, when loading from a `.npy` file, the
contents will be assigned directly to the specified input tensor. If you specify a *variable_name*
with that `.npy` file, the *variable_name* will be ignored and a warning will be issued.
When loading from a `.npz` (zip) file, you may optionally specify a *variable_name* to identify the
variable within the zip file to load for the input tensor key. If you don't specify a *variable_name*,
the SavedModel CLI will check that only one file is included in the zip file and load it for the
specified input tensor key.
When loading from a pickle file, if no *variable_name* is specified in the square brackets, whatever
is inside the pickle file will be passed to the specified input tensor key. Otherwise, the SavedModel
CLI will assume a dictionary is stored in the pickle file and the value corresponding to the
*variable_name* will be used.
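As a concrete sketch of preparing each of these accepted input file formats (the directory, file names, and the input key `x` below are illustrative, not from the guide):

```python
import os
import pickle
import tempfile

import numpy as np

tmp = tempfile.mkdtemp()

# .npy: a single ndarray, passed as <input_key>=x.npy
x = np.ones((2, 3), dtype=np.float32)
np.save(os.path.join(tmp, "x.npy"), x)

# .npz: a zip of named arrays, selected as <input_key>=inputs.npz[x]
np.savez(os.path.join(tmp, "inputs.npz"), x=x)

# pickle: a dict, selected as <input_key>=inputs.pkl[x]
with open(os.path.join(tmp, "inputs.pkl"), "wb") as f:
    pickle.dump({"x": x}, f)

# Round-trip checks (the CLI itself reads these files with numpy.load)
assert np.allclose(np.load(os.path.join(tmp, "x.npy")), x)
assert np.allclose(np.load(os.path.join(tmp, "inputs.npz"))["x"], x)
with open(os.path.join(tmp, "inputs.pkl"), "rb") as f:
    assert np.allclose(pickle.load(f)["x"], x)
```

With files like these, an invocation could pass, for example, `--inputs x=inputs.npz[x]`.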
#### `--input_exprs`
To pass inputs through Python expressions, specify the `--input_exprs` option. This can be useful
when you don't have data files lying around, but still want to sanity check the model with some
simple inputs that match the dtype and shape of the model's `SignatureDef`s. For example:
```bsh
`<input_key>=[[1],[2],[3]]`
```
In addition to Python expressions, you may also pass numpy functions. For example:
```bsh
`<input_key>=np.ones((32,32,3))`
```
(Note that the `numpy` module is already available to you as `np`.)
#### `--input_examples`
To pass `tf.train.Example`s as inputs, specify the `--input_examples` option. For each input key, it
takes a list of dictionaries, where each dictionary is an instance of `tf.train.Example`. The
dictionary keys are the features, and the values are the value lists for each feature. For example:
```bsh
`<input_key>=[{"age":[22,24],"education":["BS","MS"]}]`
```
#### Save output
By default, the SavedModel CLI writes output to stdout. If a directory is passed to the `--outdir`
option, the outputs will be saved as `.npy` files named after the output tensor keys under the
given directory.
Use `--overwrite` to overwrite existing output files.
```
from cobra.sampling import ACHRSampler
from cobra.sampling import OptGPSampler
import cobra
import matplotlib.pyplot as plt
import scipy as sp
import numpy as np
import pandas as pd
from sklearn import decomposition
from sklearn import datasets
from sklearn.preprocessing import scale
from sklearn.linear_model import LinearRegression
import statsmodels
from scipy.stats import sem, t
from numpy import mean  # scipy.mean was removed in recent SciPy releases
from statsmodels.sandbox.stats.multicomp import multipletests
import seaborn as sns
from scipy.stats import hypergeom
%matplotlib inline
subs=pd.read_csv('ReactionSetNHBE.csv')
dataset=pd.DataFrame()
for path in subs['Var2'].unique():
reaction_set=subs.loc[subs['Var2']==path,'Var1']
rxn=reaction_set.reset_index(drop=True)
df_temp=pd.DataFrame({path:rxn})
dataset=pd.concat([dataset,df_temp],axis=1)
# dataset.to_csv('ReactionSetStructuredNHBE.csv')
rxnlist=pd.read_csv('ImpactedReactionsNHBE.csv')
listSize=len(rxnlist)
listrxnSize=[]
setSize=[]
rxnSize=7196
for col in dataset.columns:
df=pd.DataFrame({'Reaction':dataset[col]})
out=df.merge(rxnlist)
listrxnSize.append(len(out))
setSize.append(len(dataset[col].dropna()))
hyperdata=pd.DataFrame({'Pathways':dataset.columns,'ListReactions':listrxnSize,'SetSize':setSize})
hits=hyperdata['ListReactions']
pool=hyperdata['SetSize']
allrxns=hyperdata['SetSize'].sum()
targetrxns=hyperdata['ListReactions'].sum()
allrxns
pvalList=[]
for h,p in zip(hits,pool):
rv=hypergeom(allrxns-p,p,targetrxns)
pval=rv.pmf(h)
pvalList.append(pval)
hyperdata['P-value']=pvalList
# Use the multipletests imported above; a bare `import statsmodels` does not expose statsmodels.stats.multitest
reject,padj,_,_=multipletests(hyperdata['P-value'], alpha=0.05, method='fdr_bh', is_sorted=False, returnsorted=False)
hyperdata['P-valueadj']=padj
hyperdata['Reject']=reject
targetrxns
rv=hypergeom(7196-370,370,765)
pval=rv.pmf(41)
pval
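The `hypergeom(M, n, N)` parameterization used above (population size `M`, `n` marked items, `N` draws) can be checked against its closed form with a stdlib-only sketch; the numbers below are illustrative, not from this analysis:

```python
from math import comb

def hypergeom_pmf(M, n, N, k):
    # P(X = k): exactly k marked items among N draws (without replacement)
    # from a population of M items containing n marked items.
    return comb(n, k) * comb(M - n, N - k) / comb(M, N)

# Drawing 5 from a population of 10 with 5 marked items:
# getting all 5 marked has probability 1/C(10,5) = 1/252.
p = hypergeom_pmf(10, 5, 5, 5)
assert abs(p - 1 / 252) < 1e-12
```

Note that `rv.pmf(h)`, as used above, is the point probability of exactly `h` hits; an over-representation p-value is commonly taken from the upper tail instead, e.g. `rv.sf(h - 1)`.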
hyperdata.sort_values(by='ListReactions',ascending=False)
hyperdata_sig=hyperdata[(hyperdata['Reject']) & (hyperdata['ListReactions']!=0)]
hyperdata_sorted=hyperdata_sig.sort_values(by='P-valueadj',ascending=False)
hyperdata_sorted=hyperdata_sorted.drop([0,20],axis=0) #Remove transport reactions and exchange reactions
plt.figure(figsize=(12,5))
sc=plt.scatter(hyperdata_sorted['P-valueadj'],np.arange(0,len(hyperdata_sorted['Pathways'])),s=hyperdata_sorted['ListReactions'],color=(0.9,0.3,0.1,0.9))
plt.xlabel('Adjusted p-value')
plt.yticks(np.arange(0,len(hyperdata_sorted['Pathways'])),labels=hyperdata_sorted['Pathways'])
handles, labels = sc.legend_elements(prop="sizes", alpha=0.8)
plt.legend(handles, labels, bbox_to_anchor=(1.6,1.02),loc='upper right',title="Reactions")
# plt.grid(axis='y')
plt.tight_layout()
plt.savefig('D:/COVID19/Manuscript/Final_PlosCompBio/Revision/Figure4A.png',dpi=600)
cd D:/COVID19/Manuscript/Final_PlosCompBio/Revision/
```
# Import section
```
import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from ipywidgets import interact, interact_manual, interactive, fixed
import ipywidgets as widgets
from IPython.display import display
```
# Section: MNIST HELPERS
```
def read(fileName):
    """ Reads the images from an IDX file and stores them in an array,
    where each row corresponds to an image.
    """
    file = open(fileName, "rb")
    # Read the magic number
    ## The first two bytes are zero
    byte = file.read(4)
    if byte[0] != 0 or byte[1] != 0:
        raise ValueError("Corrupt header: first two bytes must be zero " + str(byte))
    ## The third byte encodes the data type:
    ## 0x08: unsigned byte
    ## 0x09: signed byte
    ## 0x0B: short (2 bytes)
    ## 0x0C: int (4 bytes)
    ## 0x0D: float (4 bytes)
    ## 0x0E: double (8 bytes)
switcher = {
0x08 : np.uint8,
0x09 : np.int8,
0x0B : np.int16,
0x0C : np.int32,
0x0D : np.float32,
0x0E : np.float64,
}
dataType = switcher.get(byte[2], None)
    ## Number of dimensions
    numDims = byte[3]
    ## Size of each dimension (each a 4-byte int, MSB first)
    sizes = tuple(np.fromfile(file, dtype=np.int32, count=numDims).newbyteorder())
    print("Array of", numDims, "dimensions:", sizes, "type", dataType)
    ## Remaining data
a = np.fromfile(file, dtype=dataType)
data = np.reshape(a, sizes)
file.close()
return data
def printFull(array):
opt = np.get_printoptions()
np.set_printoptions(threshold=np.inf)
print(array)
np.set_printoptions(**opt)
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import warnings
from IPython.core.pylabtools import figsize
def muestraImagen(vector3D, labelsVector, indice):
figsize(3, 3)
plt.title(labelsVector[indice])
    # The [::-1] reverses the y-axis order
plt.pcolormesh(vector3D[indice][::-1], cmap=cm.winter)
plt.show()
def muestraActividad(red, iEntrada):
    """ Plots the activation values of each neuron
    for the input in column iEntrada.
    """
    if(iEntrada > red.A0.shape[1]):
        raise IndexError("Nonexistent training example " + str(iEntrada))
nRens = 4
nCols = 1
fig, axes = plt.subplots(figsize=(6,4))
norm = matplotlib.colors.Normalize(vmin=0, vmax=1)
ax_0 = plt.subplot2grid((nRens,nCols), (2,0), rowspan=2)
ax_1 = plt.subplot2grid((nRens,nCols), (1,0))
ax_2 = plt.subplot2grid((nRens,nCols), (0,0), sharey=ax_1)
a0 = red.A0[:,iEntrada]
a1 = red.A1[:,iEntrada:iEntrada+1].T
a2 = red.A2[:,iEntrada:iEntrada+1].T
# A0
ax_0.pcolormesh(a0[1:].reshape((28,28))[::-1], cmap=cm.cool, norm=norm)
ax_0.set_xlim(0, 28)
ax_0.set_ylim(0, 28)
# A1
ax_1.pcolormesh(a1, cmap=cm.cool, norm=norm)
ax_1.set_yticks(np.array([0,1]))
ax_1.set_xlim(0, 26)
ax_1.set_xticks(np.arange(26) + 0.5)
ax_1.set_xticklabels(np.arange(26), minor=False, ha='center')
# A2
ax_2.pcolormesh(a2, cmap=cm.cool, norm=norm)
ax_2.set_xticks(np.arange(10) + 0.5)
ax_2.set_xticklabels(np.arange(10), minor=False, ha='center')
    # Color bar
ax1 = fig.add_axes([1.0, 0, 0.025, 1.0]) # left, bottom, width, height
cb1 = matplotlib.colorbar.ColorbarBase(ax1, cmap=cm.cool,
norm=norm,
orientation='vertical')
with warnings.catch_warnings():
warnings.simplefilter("ignore")
plt.tight_layout()
filesDir = './'
trainingSetFile = filesDir + 'train-images-idx3-ubyte'
trainingSetLabelsFile = filesDir + 'train-labels-idx1-ubyte'
testSetFile = filesDir + 't10k-images-idx3-ubyte'
testSetLabelsFile = filesDir + 't10k-labels-idx1-ubyte'
trainData = read(fileName=trainingSetFile).astype(np.float64)
trainDataLabels = read(fileName=trainingSetLabelsFile).astype(np.float64)
testData = read(fileName=testSetFile).astype(np.float64)
testDataLabels = read(fileName=testSetLabelsFile).astype(np.float64)
```
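The header logic in `read` can be sanity-checked on a synthetic in-memory IDX buffer, without needing the MNIST files. This sketch builds the same byte layout the function expects: two zero bytes, a type code, the number of dimensions, then big-endian dimension sizes:

```python
import io
import struct

import numpy as np

# Build a fake IDX file: uint8 data (type code 0x08), 3 dims of sizes 2, 3, 4.
header = bytes([0, 0, 0x08, 3]) + struct.pack(">III", 2, 3, 4)
payload = bytes(range(24))
buf = io.BytesIO(header + payload)

magic = buf.read(4)
assert magic[0] == 0 and magic[1] == 0   # first two header bytes are zero
num_dims = magic[3]
sizes = struct.unpack(">" + "I" * num_dims, buf.read(4 * num_dims))
data = np.frombuffer(buf.read(), dtype=np.uint8).reshape(sizes)

assert data.shape == (2, 3, 4)
assert data[0, 0, 1] == 1   # row-major layout, as in the real files
```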
# Show all is ok
```
@interact(index = (0, len(trainData) - 1))
def ShowImageTrain(index):
muestraImagen(trainData, trainDataLabels, index)
def makeX(data_train):
num_inputs = data_train.shape[0]
return torch.FloatTensor(data_train.reshape((num_inputs, 28 * 28)))
def makeY(labels_train):
num_inputs = labels_train.shape[0]
return torch.LongTensor(labels_train.reshape((num_inputs, 1)))
X = makeX(trainData)
print("X shape=", X.shape)
Y = makeY(trainDataLabels)
print("Y shape=", Y.shape)
## Do the same with the test data
XTest = makeX(testData)
YTest = makeY(testDataLabels)
class Network(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(Network, self).__init__()
        self.l1 = nn.Linear(input_size, hidden_size)
        self.activation = nn.Sigmoid()  # sigmoid hidden-layer activation
        self.l3 = nn.Linear(hidden_size, output_size)
    def forward(self, x):
        x = self.l1(x)
        x = self.activation(x)
        x = self.l3(x)
        # Note: nn.CrossEntropyLoss applies log-softmax internally, so feeding
        # it softmax outputs normalizes twice; it still trains, but passing raw
        # logits to the loss is the conventional setup.
        return F.softmax(x, dim=1)
net = Network(input_size=28*28, hidden_size=170, output_size=10)
print(net)
optimizer = optim.Adam(net.parameters(), lr=0.004)
loss_func = nn.CrossEntropyLoss()
epochs = 40
batch_size = 32
loss_log = []
for e in range(epochs):
for i in range(0, X.shape[0], batch_size):
x_mini = X[i:i + batch_size]
y_mini = Y[i:i + batch_size]
x_var = Variable(x_mini)
y_var = Variable(y_mini)
optimizer.zero_grad()
net_out = net(x_var)
loss = loss_func(net_out, y_var[:, 0])
loss.backward()
optimizer.step()
if i % 100 == 0:
loss_log.append(loss.data.item())
print('Epoch: {} - Loss: {:.6f}'.format(e, loss.data.item()))
def print_proba(ps):
labels = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
plt.bar(labels, ps, align='center', alpha=0.5)
plt.xticks(labels)
    plt.ylabel('Confidence')
plt.title('Estimation')
plt.show()
@interact(index = (0, len(testData) - 1))
def ShowImagePrediction(index):
muestraImagen(testData, testDataLabels, index)
data = XTest[index:index+1]
estimation = net(data)[0].tolist()
print_proba(estimation)
matrix_confusion = np.zeros((10, 10), dtype=int)
correct = 0
total = XTest.shape[0]
for index in range(0, total):
data = XTest[index:index+1]
expected = YTest[index].item()
estimation = net(data)[0].tolist()
result = np.argmax(estimation)
matrix_confusion[expected][result] += 1
if int(expected) == int(result):
correct += 1
print(matrix_confusion)
import seaborn as sns
sns.set()
ax = sns.heatmap(matrix_confusion)
print(f"accuracy = {correct / total}")
plt.plot(loss_log)
plt.show()
```
```
from conda_tools import (cache, environment)
from conda_tools import environment_utils as eu
from conda_tools import cache_utils as cu
import os
from os.path import join
from itertools import groupby, chain
from versio.version import Version
# adjust root to be your Miniconda prefix
root = r"C:\Users\Ryan\Miniconda3"
root_envs = join(root, 'envs')
root_pkgs = join(root, 'pkgs')
print(root_envs)
print(root_pkgs)
```
The two core components of the conda ecosystem are the package cache and the environment subfolders. These are abstracted with `PackageInfo` and `Environment` objects respectively.
Here we create "pools" of `PackageInfo` and `Environment` objects. These objects permit easy, read-only access to various bits of metadata stored in the package cache and conda-meta/ subfolders in the environment. We want to reuse the objects as much as we can to minimize disk I/O. All the disk reads are currently cached with the objects, so the more objects you work with, the more RAM will be required.
```
# Create pkg_cache and environments
pkg_cache = cache.packages(root_pkgs)
envs = environment.environments(root_envs)
print(pkg_cache[:5])
print()
print(envs[:5])
```
# Packages
Conda packages all have an info/ subdirectory for storing metadata about the package. `PackageInfo` objects provide convenient access to this metadata.
```
pi = pkg_cache[0]
pi.index # info/index.json
# We can access fields of index.json directly from the object.
pi.name, pi.version, pi.build
# Access to info/files
pi.files
# The full spec of the package. This is always "name-version-build"
pi.full_spec
# We can run queries against the information we have on packages
# For example, I want to find all MIT licensed packages in the cache
{pi.full_spec: pi.license for pi in pkg_cache if pi.license == 'MIT'}
```
# Environments
```
e = envs[2]
e
# We can discover the currently activated environment
{e.path: e.activated() for e in envs}
# We can see all the packages that claim to be linked into the environment, keyed by name
e.linked_packages
# linked packages are either hard-linked, symlinked, or copied into environments.
set(chain(e.hard_linked, e.soft_linked, e.copy_linked)) ^ set(e.linked_packages.values())
# The origin channel of each package
e.package_channels
# We also have access to the history of the environment.
# The history object is an adaptation of conda's history parser.
# (note: The interface to this may change in the future)
e.history.object_log
```
# Neat stuff
Convenient access to the package cache and environment metadata allows you to do some neat stuff relatively easily.
Below are a few examples of some quick ideas that can be implemented with little effort.
```
# Calculate potential collisions in environments by packages claiming the same file paths
# Very quick and naive way of detecting file path collisions.
for i, p1 in enumerate(pkg_cache):
    for p2 in pkg_cache[i+1:]:
        if p1.name == p2.name:
            continue
        x = p1.files.intersection(p2.files)
        if x:
            print("{} collides with {}".format(p1, p2))
            print("\tCollisions: ", x)
# Cache Utils has some higher-order convenience functions.
# See what environments a package is linked into.
# Note that this is an O(n) operation, where n is the sum of the installed packages in each environment you're checking.
# If you're running this for the first time, it has to read all the metadata for each environment.
# Also note that this creates new package info objects and environment objects each run, so each run
# prompts a full scan of both the package cache and all environments.
cu.linked_environments((pkg_cache[0],), envs)
# Find which environments the latest packages are linked to.
# This example uses Versio to parse and compare PEP440 compliant version numbers
# This will exclude packages like jpeg and openssl
# This loop simply creates Version objects so we can compare them later.
Versions = {}
for x in pkg_cache:
    try:
        if x.name in Versions:
            Versions[x.name].append(Version(x.version))
        else:
            Versions[x.name] = [Version(x.version)]
    except Exception:  # skip versions that Versio cannot parse
        print("Skipping ", x.name, x.version)
# sort the value lists and pick the latest versions
#pversions = {k: str(list(sorted(v))[-1]) for k, v in Versions.items()}
# sort the value lists and pick the older versions
pversions = {k: list(map(str, list(sorted(v))[:-1])) for k, v in Versions.items()}
# The most up-to-date packages are linked to which environments?
#latest_pkgs = [x for x in pkg_cache if x.name in pversions and x.version == pversions[x.name]]
# Find the environments that older packages are linked to
older_pkgs = [x for x in pkg_cache if x.name in pversions and x.version in set(pversions[x.name])]
# Simply print the results nicely
{str(k): list(map(str, v)) for k, v in cu.linked_environments(older_pkgs, envs).items()}
# All packages that are not linked to any environment
cu.unlinked_packages(pkg_cache, envs)
# Environment representation of root environment
e = environment.Environment(join(root_envs, 'env2'))
# Long running. Disk intensive.
filter_pyc = lambda f: filter(lambda x: not x.endswith('.pyc'), f)
# List all files in an environment that are not hardlinked (and should be).
# Note that *.pyc files are filtered out.
not_linked = {x: tuple(filter_pyc(y)) for x, y in eu.check_hardlinked_env(envs[0]).items()}
# If you wish to see all the non-existent hardlinks, including *.pyc files, remove the filter_pyc function call
# not_linked = {x: y for x, y in eu.check_hardlinked_env(envs[0]).items()}
not_linked
# We can leverage the information in the environment's history to get packages
# that were explicitly installed by the user.
eu.explicitly_installed(e)
```
# Ludwig Time Series Forecasting
https://github.com/uber/ludwig
```
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from time import time
matplotlib.rcParams['figure.figsize'] = (16, 9)
pd.options.display.max_columns = 999
```
## Load Dataset
```
df = pd.read_csv('../_datasets/california-solar-power.csv', parse_dates=[0], index_col='DateTime')
print(df.shape)
df.head()
```
## Define Parameters
Make predictions for a 24-hour period using a training period of four weeks.
```
dataset_name = 'California Solar Power'
dataset_abbr = 'CSP'
model_name = 'Ludwig'
context_length = 24*7*4 # Four weeks
prediction_length = 24
```
## Define Error Metric
The seasonal variant of the mean absolute scaled error (MASE) will be used to evaluate the forecasts.
```
def calc_sMASE(training_series, testing_series, prediction_series, seasonality=prediction_length):
    a = training_series.iloc[seasonality:].values
    b = training_series.iloc[:-seasonality].values
    d = np.sum(np.abs(a - b)) / len(a)
    errors = np.abs(testing_series - prediction_series)
    return np.mean(errors) / d
```
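As a quick sanity check of the metric, here is the same arithmetic exercised on a toy series (values arbitrary; the function is restated under a demo name so this snippet is self-contained and does not shadow `calc_sMASE`):

```python
import numpy as np
import pandas as pd

def smase_demo(training_series, testing_series, prediction_series, seasonality):
    # same arithmetic as calc_sMASE above
    a = training_series.iloc[seasonality:].values
    b = training_series.iloc[:-seasonality].values
    d = np.sum(np.abs(a - b)) / len(a)  # in-sample seasonal-naive error
    errors = np.abs(testing_series - prediction_series)
    return np.mean(errors) / d

toy_train = pd.Series([1.0, 2.0, 3.0, 4.0])
toy_test = np.array([5.0, 6.0])
toy_pred = np.array([4.0, 5.0])
# d = mean(|[3, 4] - [1, 2]|) = 2; mean forecast error = 1; sMASE = 1 / 2
print(smase_demo(toy_train, toy_test, toy_pred, seasonality=2))  # 0.5
```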
## Evaluating Ludwig
To evaluate Ludwig, forecasts will be generated for each time series. sMASE will be calculated for each individual time series, and the mean of all these scores will be used as the overall accuracy metric for Ludwig on this dataset.
### Prepare model definition file
```
!touch ludwig.yaml
config_str = """input_features:
    -
        name: {}
        type: timeseries
output_features:
""".format(dataset_abbr)
for i in range(prediction_length):
    config_str += """    -
        name: y{}
        type: numerical
""".format(i+1)
with open("ludwig.yaml", "w+") as f:
    f.write(config_str)
```
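To see what the templating loop generates, here is an illustrative re-run with the horizon shortened to 2 (local demo names, so the notebook's own `dataset_abbr` and `prediction_length` are untouched; the nested indentation follows Ludwig's documented YAML layout):

```python
demo_abbr = 'CSP'
demo_len = 2  # shortened for display

demo_str = """input_features:
    -
        name: {}
        type: timeseries
output_features:
""".format(demo_abbr)
for i in range(demo_len):
    demo_str += """    -
        name: y{}
        type: numerical
""".format(i + 1)
print(demo_str)
```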
### Prepare data
```
df1 = df.iloc[-(context_length+prediction_length):]
df1_train = df1.iloc[:-prediction_length]
df1_test = df1.iloc[-prediction_length:]
df2 = pd.DataFrame()
for i, col in enumerate(df1.columns):
    y_cols = ['y%s' % str(j+1) for j in range(prediction_length)]
    cols = [dataset_abbr] + y_cols
    train = df1_train[col].values[-48:]
    test = df1_test[col].values
    train_str = ""
    for val in train:
        train_str += str(val) + " "
    train_str = train_str[:-1]
    vals = [train_str] + list(test)
    df_t = pd.DataFrame([vals], columns=cols, index=[i])
    df2 = df2.append(df_t)
df2.to_csv('full.csv', index=False)
```
### Run Model
For this dataset and these parameters, the Ludwig model fails to complete training within an acceptable period of time.
```
!ludwig experiment --data_csv full.csv --model_definition_file ludwig.yaml
```
# Beta Hedging
_Roshan Mahes (Based on the Quantopian Lecture Series)._
As usual, we first import our libraries:
```
import math
import numpy as np
import pandas as pd
# statistical analysis
import statsmodels.api as sm
from statsmodels import regression
# plot + styling
import matplotlib.pyplot as plt
from matplotlib import style
style.use('seaborn-whitegrid')
plt.rcParams["figure.figsize"] = (10,6)
# get pricing data
import yfinance as yf
```
### Factor Models
Factor models are a way of explaining the returns of one asset via a linear combination of the returns of other assets. The general form of a (factor or linear regression) model is
\begin{align*}
Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_n X_n.
\end{align*}
Note that the $X$'s can also be indicators instead of assets.
An asset's beta to another asset is just the $\beta$ from the above model. For instance, if we regressed TSLA against the S&P 500 ETF SPY using the model $Y_{TSLA} = \alpha + \beta X$, then TSLA's beta exposure to the S&P 500 ETF would be that beta. If we used the model $Y_{TSLA} = \alpha + \beta_1 X_{SPY} + \beta_2 X_{AAPL}$, then we now have two betas: one is TSLA's exposure to the S&P 500 ETF and the other is TSLA's exposure to AAPL. In the following, $\beta$ will refer to a stock's beta exposure to the S&P 500 ETF unless otherwise specified.
Let's download the pricing data of the AAPL stock and the SPY ETF:
```
# load Apple data (daily)
ticker = yf.Ticker('AAPL')
df_AAPL = ticker.history(period='max')
df_AAPL.to_csv('data/AAPL.csv')
# load SPY data (daily)
ticker = yf.Ticker('SPY')
df_SPY = ticker.history(period='max')
df_SPY.to_csv('data/SPY.csv')
```
Our data consists of the following:
```
print('Apple data (AAPL):\n', df_AAPL, '\n\n')
print('S&P 500 ETF data (SPY):\n', df_SPY)
```
Notice Apple's stock split on 2020-08-31. We plot the daily returns of the year 2019:
```
start = '2019-01-01'
end = '2019-12-31'
df_AAPL = pd.read_csv('data/AAPL.csv', index_col=0)
asset = df_AAPL.loc[start:end, 'Close']
df_SP500 = pd.read_csv('data/SPY.csv', index_col=0)
benchmark = df_SP500.loc[start:end, 'Close']
# take the percent changes to get to returns
r_a = asset.pct_change()[1:]
r_b = benchmark.pct_change()[1:]
r_a.plot(label='AAPL')
r_b.plot(label='SPY')
plt.ylabel("Daily Return")
plt.legend();
```
It seems that Apple's stock can be predicted quite well by the SPY ETF. To check this, we perform the regression to find $\alpha$ and $\beta$:
```
X = r_b.values
Y = r_a.values
def linreg(x, y):
    x = sm.add_constant(x)  # add a column of 1s to our data (for the intercept)
    model = regression.linear_model.OLS(y, x).fit()
    x = x[:, 1]  # remove the constant
    return model.params[0], model.params[1]
alpha, beta = linreg(X,Y)
historical_beta = beta # we will use this later
print('alpha: ' + str(alpha))
print('beta: ' + str(beta))
```
If we plot the line $\alpha + \beta X$, we can see that it does indeed look like the line of best fit:
```
X2 = np.linspace(X.min(), X.max(), 100)
Y_hat = X2 * beta + alpha
plt.scatter(X, Y, alpha=0.5) # plot returns
plt.xlabel("SPY Daily Return")
plt.ylabel("AAPL Daily Return")
# add regression line (red)
plt.plot(X2, Y_hat, 'r', alpha=0.9);
```
### Risk Exposure
More generally, this beta gets at the concept of how much risk exposure you take on by holding an asset. If an asset has a high beta exposure to the S&P 500 ETF, then while it will do very well while the market is rising, it will do very poorly when the market falls (e.g. during the Covid period). A high beta corresponds to high speculative risk. You are taking out a more volatile bet.
Strategies that have negligible beta exposure to as many factors as possible are valuable. What this means is that all of the returns in a strategy lie in the $\alpha$ portion of the model, and are independent of other factors. This is highly desirable, as it means that the strategy is agnostic to market conditions. It will make money equally well in a crash as it will during a bull market. These strategies are the most attractive to individuals with huge cash pools.
### Hedging
The process of reducing exposure to other factors is known as _risk management_. Hedging is one of the best ways to perform risk management in practice. If we determine that our portfolio's returns are dependent on the market via the relation
\begin{align*}
Y_{portfolio} = \alpha + \beta X_{SPY},
\end{align*}
then we can take out a short position in SPY to try to cancel out this risk. The amount we take out is $-\beta V$ where $V$ is the total value of our portfolio. This works because if our returns are approximated by $\alpha + \beta X_{SPY}$, then adding a short in SPY will make our new returns be $\alpha + \beta X_{SPY} - \beta X_{SPY} = \alpha$. Our returns are now purely alpha, which is independent of SPY and will suffer no risk exposure to the market. When a strategy exhibits a consistent beta of $0$, we say that this strategy is _market neutral_.
The only problem here is that the beta we estimated is not necessarily going to stay the same as we walk forward in time. As such, the short position we took out in SPY may not perfectly hedge our portfolio, and in practice it is quite difficult to reduce beta by a significant amount. Each estimate has a standard error that corresponds with how stable the estimate is within the observed data.
Now that we know how much to hedge, let's see how it affects our returns. We will build our portfolio using the asset and the benchmark, weighing the benchmark by $-\beta$ (negative since we are short in it).
```
# construct portfolio with beta hedging
portfolio = r_a - beta * r_b
portfolio.name = "Portfolio"
# plot portfolio returns and assets
portfolio.plot(alpha=0.9)
r_b.plot(alpha=0.3, label='SPY')
r_a.plot(alpha=0.3, label='AAPL')
plt.ylabel("Daily Return")
plt.legend();
```
It looks like the portfolio return follows the asset alone fairly closely. We can quantify the difference in their performances by computing the mean returns and the volatilities (standard deviations of returns) for both:
```
print(f'The portfolio return changed from {r_a.mean():.4f} to {portfolio.mean():.4f}.')
print(f'The volatility changed from {r_a.std():.4f} to {portfolio.std():.4f}.')
```
We've decreased volatility at the expense of some returns. Let's check that the $\alpha$ is the same as before, while the $\beta$ has been eliminated:
```
P = portfolio.values
alpha, beta = linreg(X,P)
print('alpha: ' + str(alpha))
print('beta: ' + str(beta))
```
Note that we developed our hedging strategy using historical data. We can check that it is still valid out of sample by checking the alpha and beta values of the asset and the hedged portfolio in a different time frame, namely this year:
```
# Get data for a different time frame:
start = '2020-01-01'
end = '2020-12-31'
asset = df_AAPL.loc[start:end, 'Close']
benchmark = df_SP500.loc[start:end, 'Close']
# compute alpha and beta for the asset
r_a = asset.pct_change()[1:]
r_b = benchmark.pct_change()[1:]
X = r_b.values
Y = r_a.values
alpha, beta = linreg(X,Y)
print('Asset Out of Sample Estimate:')
print('alpha: ' + str(alpha))
print('beta: ' + str(beta))
# create hedged portfolio and compute alpha and beta
portfolio = r_a - historical_beta * r_b
P = portfolio.values
alpha, beta = linreg(X,P)
print('\nPortfolio Out of Sample:')
print('alpha: ' + str(alpha))
print('beta: ' + str(beta))
# plot portfolio returns and assets
portfolio.name = "Portfolio"
portfolio.plot(alpha=0.9)
r_a.plot(alpha=0.3, label='AAPL')
r_b.plot(alpha=0.3, label='SPY')
plt.ylabel("Daily Return")
plt.legend();
```
As we can see, the beta estimate changes a good deal when we look at the out of sample estimate. The beta that we computed over our historical data doesn't do a great job at reducing the beta of our portfolio, but does manage to reduce the magnitude by about 1/2.
Hedging against a benchmark such as the market will indeed reduce your returns while the market is not doing poorly. This is, however, completely fine. If your algorithm is less volatile, you will be able to take out leverage on your strategy and multiply your returns back up to their original amount. Even better, your returns will be far more stable than the original volatile beta exposed strategy.
By and large, even though high-beta strategies tend to be deceptively attractive due to their extremely good returns during periods of market growth, they fail in the long term as they will suffer extreme losses during a downturn. There are strategies for hedging that may be better suited for other investment approaches.
#### Pairs Trading
One is pairs trading, in which a second asset is used in place of the benchmark here. This would allow you, for instance, to cancel out the volatility in an industry by being long in the stock of one company and short in the stock of another company in the same industry.
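A minimal numerical sketch of the idea (all series synthetic, names hypothetical): two stocks share a common industry factor, and shorting the pair by the OLS hedge ratio cancels that factor, leaving a lower-variance spread.

```python
import numpy as np

rng = np.random.default_rng(0)
industry = rng.normal(0, 0.02, 500)           # common industry factor
ret_a = industry + rng.normal(0, 0.005, 500)  # daily returns of company A
ret_b = industry + rng.normal(0, 0.005, 500)  # daily returns of company B, same industry

beta_hat = np.cov(ret_a, ret_b)[0, 1] / np.var(ret_b)  # OLS hedge ratio
spread = ret_a - beta_hat * ret_b                      # long A, short beta_hat * B

print(spread.std() < ret_a.std())  # True: the hedged spread is less volatile
```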
#### Long Short Equity
In this case we define a ranking over a group of $n$ equities, then long the top $p\%$ and short the bottom $p\%$ in equal dollar volume. This has the advantage of being implicitly, versus explicitly, hedged when $n$ is large. To see why this is the case, imagine buying a set of 100 securities randomly. The chance that the market exposure beta of these 100 is far from 1.0 is very low, as we have taken a large sample of the market. Similarly, when we rank by some independent metric and buy the top 100, the chance that we select securities whose overall beta is far from 1.0 is low. So in selecting 100 long and 100 short, the strategy beta should be very close to 1 - 1 = 0. Obviously some ranking systems will introduce a sample bias and break this assumption, for example ranking by the estimated beta of the equity.
Another advantage of long short equity strategies is that you are making a bet on the ranking, or in other words the differential in performance between the top and bottom ranked equities. This means that you don't have to even worry about the alpha/beta tradeoff encountered in hedging.
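The construction described above can be sketched in a few lines (a hypothetical helper, not from the original lecture): rank by some score, then go long the top fraction and short the bottom fraction in equal dollar volume.

```python
import numpy as np

def long_short_weights(scores, p=0.1):
    """Long the top p fraction and short the bottom p fraction,
    with equal dollars per position on each side."""
    scores = np.asarray(scores, dtype=float)
    k = max(1, int(len(scores) * p))
    order = np.argsort(scores)   # ascending by ranking metric
    w = np.zeros(len(scores))
    w[order[-k:]] = 1.0 / k      # long the top k
    w[order[:k]] = -1.0 / k      # short the bottom k
    return w

w = long_short_weights(np.arange(100), p=0.1)
print(w.sum())  # net exposure is (numerically) zero: dollar neutral
```

Dollar neutrality is what makes the strategy implicitly hedged: with large $n$, the long and short books carry nearly equal market beta, so the net beta is close to zero.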
_This document is based on the Quantopian Lecture written by Evgenia Nitishinskaya, Delaney Granizo-Mackenzie and David Edwards._
# Welcome to ExKaldi
In this section, we will extract and process the acoustic features.
Please ensure you have downloaded the complete librispeech_dummy corpus from our GitHub.
https://github.com/wangyu09/exkaldi
First of all, update the wav path info in the wav.scp file.
```
! cd librispeech_dummy && python3 reset_wav_path.py
```
From now on, we will start to build an ASR system from scratch.
```
import exkaldi
import os
dataDir = "librispeech_dummy"
```
In the train dataset, there are 100 utterances fetched from 10 speakers; each speaker contributes 10 utterances.
You can compute features from a __WAV file__, a __Kaldi script-file table__, or an exkaldi __ListTable__ object.
```
scpFile = os.path.join(dataDir, "train", "wav.scp")
feat = exkaldi.compute_mfcc(scpFile, name="mfcc")
feat
```
Use the function __compute_mfcc__ to compute the MFCC feature. In the current version of ExKaldi, there are 4 functions to compute acoustic features:
__compute_mfcc__: compute the MFCC feature.
__compute_fbank__: compute the fBank feature.
__compute_plp__: compute the PLP feature.
__compute_spectrogram__: compute the power spectrogram feature.
The returned object ___feat___ is an exkaldi feature archive whose class name is __BytesFeat__. In ExKaldi, we use 3 approaches to describe Kaldi archives: __Bytes Object__, __Numpy Array__, and __Index Table__. We have designed a group of classes to hold them, and will introduce them in later steps.
Here, __BytesFeat__ is one of the __Bytes Object__ classes, and its objects hold acoustic feature data in bytes format. You can use the __.data__ attribute to get it, but we do not recommend this if you just want to inspect it, because bytes are not a human-readable format.
The ___feat___ object has some useful attributes and methods. For example, use __.dim__ to view the feature dimension.
```
feat.dim
```
Use __.utts__ to get its utterance IDs.
```
feat.utts[0:5]
```
Randomly sample 10 utterances.
```
samplingFeat = feat.subset(nRandom=10)
samplingFeat
```
Here, ___samplingFeat___ is also a __BytesFeat__ object.
In ExKaldi, an object's name records the operations applied to it. For example, the ___samplingFeat___ generated above has a new name now.
```
samplingFeat.name
del samplingFeat
```
Besides the __BytesFeat__ class, these classes can hold other Kaldi archive tables in bytes format:
__BytesCMVN__: to hold the CMVN statistics.
__BytesProb__: to hold the Neural Network output.
__BytesAliTrans__: to hold the transition-ID Alignment.
__BytesFmllr__: to hold the fmllr transform matrices.
All these classes have similar properties. For more information, please check the [ExKaldi Documents](https://wangyu09.github.io/exkaldi/#/). Here we only focus on feature processing.
By the way, in ExKaldi, we sort these archives rigorously in order to reduce buffer cost and accelerate processing.
```
featTemp = feat.sort(by="utt", reverse=True)
featTemp.utts[0:5]
del featTemp
```
Raw features can be further optimized, typically by applying CMVN. Here we first compute the CMVN statistics.
```
spk2uttFile = os.path.join(dataDir, "train", "spk2utt")
cmvn = exkaldi.compute_cmvn_stats(feat, spk2utt=spk2uttFile, name="cmvn")
cmvn
```
___cmvn___ is an exkaldi __BytesCMVN__ object. It holds the CMVN statistics in binary format. Then we use it to normalize the feature.
```
utt2spkFile = os.path.join(dataDir, "train", "utt2spk")
feat = exkaldi.use_cmvn(feat, cmvn, utt2spk=utt2spkFile)
feat.name
```
We save this feature into a file; it will be restored in later steps. ExKaldi bytes archives can be saved in the same format as Kaldi archive files.
```
featFile = os.path.join(dataDir, "exp", "train_mfcc_cmvn.ark")
exkaldi.utils.make_dependent_dirs(path=featFile, pathIsFile=True)
featIndex = feat.save(featFile, returnIndexTable=True)
#del feat
```
If you set the option __returnIndexTable__ to True, an __IndexTable__ object will be returned. As introduced above, this is our second approach to describe archives: the __index table__. It plays almost the same role as the original feature object. __IndexTable__ is a subclass of the Python dict class, so you can view its data directly.
When training on a large corpus or using multiple processes, __IndexTable__ will become the main currency.
```
featIndex
```
Of course, original archives can also be loaded into memory again. For example, features can be loaded from a Kaldi binary archive file (__.ark__) or a script table file (__.scp__).
In particular, we can fetch the data via the index table directly.
```
feat = featIndex.fetch(arkType="feat")
del featIndex
feat
```
All bytes archives can be transformed to __Numpy__ format. So if you want to train an NN acoustic model with TensorFlow or another framework, you can use the Numpy-format data.
```
feat = feat.to_numpy()
feat
```
By calling the __.to_numpy()__ function, ___feat___ becomes an exkaldi __NumpyFeat__ object. It shares some attributes and methods with __BytesFeat__, but also has its own properties. Let's skip the details here.
This is the third way to describe archives: __Numpy Array__. __NumpyFeat__ is one of the Numpy archive classes.
Here we will introduce some methods to use its data.
```
sampleFeat = feat.subset(nHead=2)
```
1. use __.data__ to get the dict object whose keys are utterance IDs and values are data arrays.
```
sampleFeat.data
```
2. use __.array__ to get the arrays only.
```
sampleFeat.array
```
3. use getitem function to get a specified utterance.
```
sampleFeat['103-1240-0000']
```
4. like a dict object, __.keys()__, __.values()__ and __.items()__ are available to iterate over it.
```
for key in sampleFeat.keys():
    print(sampleFeat[key].shape)
```
5. item assignment is also available, provided you set the array with the right format.
```
sampleFeat['103-1240-0000'] *= 2
sampleFeat['103-1240-0000']
del sampleFeat
```
Similarly, ExKaldi Numpy archives can be transformed back to bytes archives easily.
```
tempFeat = feat.to_bytes()
tempFeat
del tempFeat
```
Numpy data can also be saved to .npy file with a specified format.
```
tempFile = os.path.join(dataDir, "exp", "temp_mfcc.npy")
feat.save(tempFile)
del feat
```
And it can also be restored into memory again.
```
feat = exkaldi.load_feat(tempFile, name="mfcc")
feat
```
Besides __NumpyFeat__ class, these classes hold Kaldi archives in Numpy format.
__NumpyCMVN__: to hold CMVN statistics data.
__NumpyProb__: to hold NN output data.
__NumpyAli__: to hold Users' own Alignment data.
__NumpyAliTrans__: to hold Transition-ID alignment.
__NumpyAliPhone__: to hold Phone-ID alignment.
__NumpyAliPdf__: to hold Pdf-ID alignment.
__NumpyFmllr__: to hold fmllr transform matrices.
They have similar properties to __NumpyFeat__. We will introduce them in the next steps.
```
try:
    from .environment import HarnessEnvironment
    from .base import AttributeObject
except SystemError:
    from python.environment import HarnessEnvironment
    from python.base import AttributeObject
import abc, builtins, collections, contextlib, inspect, jinja2, operator, pandas, \
sklearn.base, time, toolz.curried, typing
import numpy as np
from toolz.curried import (
complement, compose, concat, concatv, do, filter,
first, get, identity, itemmap, juxt, keyfilter, last, map,
merge, partial, pipe, valfilter, valmap
)
__all__ = ['Harness']
class DataFrameEstimatorMixin(pandas.DataFrame, sklearn.base.BaseEstimator):
    """Combine a DataFrame and BaseEstimator. def __init__ must start
    with the DataFrame keyword spec."""
    _series = pandas.Series
    _blacklist = []

    @property
    def _constructor(self):
        return self.__class__

    @property
    def _constructor_expanddim(self):
        return self.__class__

    @property
    def _constructor_sliced(self):
        return self._series

    @property
    def _metadata(self):
        return self._get_param_names()

    def __dir__(self):
        # extend completion with the estimator parameters
        return concatv(super().__dir__(), list(self.get_params()))

    def __finalize__(self, other=None, method=None):
        """__finalize__ must be at the __class__ level."""
        if method == 'merge':
            other = other.left
        if method == 'concat':
            other = other.objs[0]
        self.set_params(**other.get_params(deep=False))
        return self

    def set_params(self, **kwargs):
        params = []
        try:
            params = self.estimator.get_params()
        except:
            pass
        for key, value in kwargs.items():
            if key in params:
                self.estimator.set_params(**{key: value})
            else:
                super().set_params(**{key: value})
        return self

    @classmethod
    def _get_param_names(cls):
        """Ignore the parameters that are specific to the dataframe."""
        return pipe(
            super()._get_param_names(), filter(
                complement(partial(operator.contains, cls._blacklist))
            ), list
        )
class HarnessBase(DataFrameEstimatorMixin):
    @property
    def column_names(self):
        """Include the index names in the column names."""
        return tuple(concatv(self.index.names, self.columns))

    def __getattr__(self, attr):
        # Try to do the dataframe things first.
        try:
            value = super().__getattr__(attr)
            if isinstance(value, pandas.DataFrame):
                value = value.pipe(self.__class__)
            return value
        except AttributeError:
            pass
        super().__getattribute__(
            first(self._get_param_names())
        )
        # If it ain't a dataframe thing then
        # try each of the extensions.
        if not attr.startswith('_'):
            try:
                return self.pipe(self.env.pipes, attr)
            except:
                pass
        return super().__getattr__(attr)

    def __dir__(self):
        """Extend the completer."""
        return list(
            concatv(
                super().__dir__(), dir(self.estimator), concat(
                    map(dir, self.env.extensions.values())
                )
            )
        )

    def do(self, func, *args, **kwargs):
        return self.pipe(do(func), *args, **kwargs)

    @property
    def Index(self):
        return self.index.get_level_values
class Harness(HarnessBase):
    # Make ScikitLearn ignore some stuff
    _blacklist = ['data', 'index', 'columns', 'copy']
    env = HarnessEnvironment(loader=jinja2.ChoiceLoader([
        jinja2.DictLoader({}),
    ]))
    env.filters.update(vars(operator))
    env.filters.update(vars(builtins))

    def __init__(
        self, data=None,
        index=None, columns=None,
        estimator=None,
        parent=None, feature_level=None,
        copy=False,
        extensions=[
            'harness.python.ext.base.JinjaExtension',
            'harness.python.ext.SciKit.SciKitExtension',
            'harness.python.ext.Bokeh.BokehModelsExtension',
            'harness.python.ext.Bokeh.BokehPlottingExtension',
            'harness.python.ext.Bokeh.BokehChartsExtension'
        ],
    ):
        kwargs = dict(
            estimator=estimator,
            parent=parent,
            feature_level=feature_level,
            extensions=extensions,
        )
        self.set_params(**kwargs)
        for ext in self.extensions:
            if not ext in self.env.extensions:
                self.env.add_extension(ext)
            ext = self.env.extensions[ext]
            if (
                not(ext.mixin is None)
                and
                not(ext.mixin in self.__class__.__bases__)
            ):
                self.__class__.__bases__ += (ext.mixin,)
        kwargs = pipe(
            locals(), keyfilter(
                partial(operator.contains, self._blacklist)
            ), valfilter(complement(lambda x: x is None))
        )
        super().__init__(**kwargs)
```
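The load-bearing trick in the mixin above is pandas' `_constructor` hook: pandas calls it whenever an operation builds a new frame, which is how slicing a `Harness` yields another `Harness` rather than a plain `DataFrame`. A minimal, standalone sketch of that pattern (the class name is illustrative, not part of the harness code):

```python
import pandas as pd

class MyFrame(pd.DataFrame):
    _metadata = ["tag"]  # attributes pandas should try to propagate

    @property
    def _constructor(self):
        # pandas uses this to build the results of operations,
        # so filtering/arithmetic return MyFrame instead of DataFrame
        return MyFrame

frame = MyFrame({"a": [1, 2, 3]})
sub = frame[frame["a"] > 1]
print(type(sub).__name__)  # MyFrame
```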
# GLM: Poisson Regression
```
## Interactive magics
%matplotlib inline
import re
import sys
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import patsy as pt
import pymc3 as pm
import seaborn as sns
plt.style.use('seaborn-darkgrid')
plt.rcParams['figure.figsize'] = 14, 6
np.random.seed(0)
print('Running on PyMC3 v{}'.format(pm.__version__))
```
This is a minimal reproducible example of Poisson regression to predict counts using dummy data.
This notebook is basically an excuse to demo Poisson regression using PyMC3, both manually and via the `glm` submodule, and to demo interactions using the `patsy` library. We will create some dummy data, Poisson distributed according to a linear model, and try to recover the coefficients of that linear model through inference.
For more statistical detail see:
+ Basic info on [Wikipedia](https://en.wikipedia.org/wiki/Poisson_regression)
+ GLMs: Poisson regression, exposure, and overdispersion in Chapter 6.2 of [ARM, Gelmann & Hill 2006](http://www.stat.columbia.edu/%7Egelman/arm/)
+ This worked example from ARM 6.2 by [Clay Ford](http://www.clayford.net/statistics/poisson-regression-ch-6-of-gelman-and-hill/)
This very basic model is inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/), which is concerned with understanding the various effects of external environmental factors upon the allergic sneezing of a test subject.
## Local Functions
```
def strip_derived_rvs(rvs):
    '''Convenience fn: remove PyMC3-generated RVs from a list'''
    ret_rvs = []
    for rv in rvs:
        if not (re.search('_log', rv.name) or re.search('_interval', rv.name)):
            ret_rvs.append(rv)
    return ret_rvs

def plot_traces_pymc(trcs, varnames=None):
    '''Convenience fn: plot traces with overlaid means and values'''
    nrows = len(trcs.varnames)
    if varnames is not None:
        nrows = len(varnames)
    ax = pm.traceplot(trcs, var_names=varnames, figsize=(12, nrows*1.4),
                      lines=tuple([(k, {}, v['mean'])
                                   for k, v in pm.summary(trcs, varnames=varnames).iterrows()]))
    for i, mn in enumerate(pm.summary(trcs, varnames=varnames)['mean']):
        ax[i, 0].annotate('{:.2f}'.format(mn), xy=(mn, 0), xycoords='data',
                          xytext=(5, 10), textcoords='offset points', rotation=90,
                          va='bottom', fontsize='large', color='#AA0022')
```
## Generate Data
This dummy dataset is created to emulate data gathered as part of a study into quantified self; the real data is more complicated than this. Ask Ian Osvald if you'd like to know more: https://twitter.com/ianozsvald
### Assumptions:
+ The subject sneezes N times per day, recorded as `nsneeze (int)`
+ The subject may or may not drink alcohol during that day, recorded as `alcohol (boolean)`
+ The subject may or may not take an antihistamine medication during that day, recorded as the negative action `nomeds (boolean)`
+ I postulate (probably incorrectly) that sneezing occurs at some baseline rate, which increases if an antihistamine is not taken, and further increased after alcohol is consumed.
+ The data is aggregated per day, to yield a total count of sneezes on that day, with a boolean flag for alcohol and antihistamine usage, with the big assumption that nsneezes have a direct causal relationship.
Create 4000 days of data: daily sneeze counts that are Poisson-distributed conditional on alcohol consumption and antihistamine usage
```
# decide poisson theta values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# create samples
q = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df.tail()
```
##### View means of the various combinations (Poisson mean values)
```
df.groupby(['alcohol','nomeds']).mean().unstack()
```
### Briefly Describe Dataset
```
g = sns.catplot(x='nsneeze', row='nomeds', col='alcohol', data=df,
                kind='count', height=4, aspect=1.5)
```
**Observe:**
+ This looks a lot like Poisson-distributed count data (because it is)
+ With `nomeds == False` and `alcohol == False` (top-left, aka antihistamines WERE used and alcohol was NOT drunk) the mean of the Poisson distribution of sneeze counts is low.
+ Changing to `alcohol == True` (top-right) increases the sneeze count `nsneeze` slightly
+ Changing to `nomeds == True` (lower-left) increases the sneeze count `nsneeze` further
+ Changing both `alcohol == True` and `nomeds == True` (lower-right) increases the sneeze count `nsneeze` a lot, increasing both the mean and variance.
---
## Poisson Regression
Our model here is a very simple Poisson regression, allowing for interaction of terms:
$$ \theta = \exp(\beta X) $$
$$ Y_{sneeze\_count} \sim \text{Poisson}(\theta) $$
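The exp link guarantees a positive Poisson rate for every combination of flags; a quick numeric sketch of the linear predictor and link (the coefficient values below are chosen to reproduce the generating thetas, not fitted values):

```python
import numpy as np

# Coefficients chosen to reproduce the generating thetas (illustrative, not fitted)
beta = np.array([0.0, np.log(3), np.log(6), np.log(2)])

# Design-matrix rows for the four (alcohol, nomeds) combinations:
# columns are [intercept, alcohol, nomeds, alcohol:nomeds]
X = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 1, 1, 1]])

theta = np.exp(X @ beta)  # exp link keeps each Poisson rate positive
print(theta)              # approximately [1, 3, 6, 36]
```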
**Create linear model for interaction of terms**
```
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds' # full patsy formulation
fml = 'nsneeze ~ alcohol * nomeds' # lazy, alternative patsy formulation
```
### 1. Manual method, create design matrices and manually specify model
**Create Design Matrices**
```
(mx_en, mx_ex) = pt.dmatrices(fml, df, return_type='dataframe', NA_action='raise')
pd.concat((mx_ex.head(3),mx_ex.tail(3)))
```
**Create Model**
```
with pm.Model() as mdl_fish:
# define priors, weakly informative Normal
b0 = pm.Normal('b0_intercept', mu=0, sigma=10)
b1 = pm.Normal('b1_alcohol[T.True]', mu=0, sigma=10)
b2 = pm.Normal('b2_nomeds[T.True]', mu=0, sigma=10)
b3 = pm.Normal('b3_alcohol[T.True]:nomeds[T.True]', mu=0, sigma=10)
# define linear model and exp link function
theta = (b0 +
b1 * mx_ex['alcohol[T.True]'] +
b2 * mx_ex['nomeds[T.True]'] +
b3 * mx_ex['alcohol[T.True]:nomeds[T.True]'])
## Define Poisson likelihood
y = pm.Poisson('y', mu=pm.math.exp(theta), observed=mx_en['nsneeze'].values)
```
**Sample Model**
```
with mdl_fish:
trc_fish = pm.sample(1000, tune=1000, cores=4)
```
**View Diagnostics**
```
rvs_fish = [rv.name for rv in strip_derived_rvs(mdl_fish.unobserved_RVs)]
plot_traces_pymc(trc_fish, varnames=rvs_fish)
```
**Observe:**
+ The model converges quickly and traceplots looks pretty well mixed
### Transform coeffs and recover theta values
```
np.exp(pm.summary(trc_fish, var_names=rvs_fish)[['mean','hpd_2.5','hpd_97.5']])
```
**Observe:**
+ The contributions from each feature as a multiplier of the baseline sneezecount appear to be as per the data generation:
1. exp(b0_intercept): mean=1.02 cr=[0.96, 1.08]
Baseline sneeze count when no alcohol is drunk and meds are taken, as per the generated data:
theta_noalcohol_meds = 1 (as set above)
theta_noalcohol_meds = exp(b0_intercept)
= 1
2. exp(b1_alcohol): mean=2.88 cr=[2.69, 3.09]
non-zero positive effect of adding alcohol, a ~3x multiplier of
baseline sneeze count, as per the generated data:
theta_alcohol_meds = 3 (as set above)
theta_alcohol_meds = exp(b0_intercept + b1_alcohol)
= exp(b0_intercept) * exp(b1_alcohol)
= 1 * 3 = 3
3. exp(b2_nomeds[T.True]): mean=5.76 cr=[5.40, 6.17]
larger, non-zero positive effect of adding nomeds, a ~6x multiplier of
baseline sneeze count, as per the generated data:
theta_noalcohol_nomeds = 6 (as set above)
theta_noalcohol_nomeds = exp(b0_intercept + b2_nomeds)
= exp(b0_intercept) * exp(b2_nomeds)
= 1 * 6 = 6
4. exp(b3_alcohol[T.True]:nomeds[T.True]): mean=2.12 cr=[1.98, 2.30]
small, positive interaction effect of alcohol and nomeds, a ~2x multiplier of
baseline sneeze count, as per the generated data:
theta_alcohol_nomeds = 36 (as set above)
theta_alcohol_nomeds = exp(b0_intercept + b1_alcohol + b2_nomeds + b3_alcohol:nomeds)
= exp(b0_intercept) * exp(b1_alcohol) * exp(b2_nomeds) * exp(b3_alcohol:nomeds)
= 1 * 3 * 6 * 2 = 36
### 2. Alternative method, using `pm.glm`
**Create Model**
**Alternative automatic formulation using `pm.glm`**
```
with pm.Model() as mdl_fish_alt:
pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Poisson())
```
**Sample Model**
```
with mdl_fish_alt:
trc_fish_alt = pm.sample(2000, tune=2000)
```
**View Traces**
```
rvs_fish_alt = [rv.name for rv in strip_derived_rvs(mdl_fish_alt.unobserved_RVs)]
plot_traces_pymc(trc_fish_alt, varnames=rvs_fish_alt)
```
### Transform coeffs
```
np.exp(pm.summary(trc_fish_alt, var_names=rvs_fish_alt)[['mean','hpd_2.5','hpd_97.5']])
```
**Observe:**
+ The traceplots look well mixed
+ The transformed model coeffs look more or less the same as those generated by the manual model
+ Note also that the `mu` coeff is for the overall mean of the dataset and has an extreme skew; if we look at the median value ...
```
np.percentile(trc_fish_alt['mu'], [25,50,75])
```
We see this is pretty close to the overall mean of:
```
df['nsneeze'].mean()
```
---
Example originally contributed by Jonathan Sedar 2016-05-15 [github.com/jonsedar](https://github.com/jonsedar)
```
%load_ext watermark
%watermark -n -u -v -iv -w
```
<blockquote>
<h1>Exercise 5.7</h1>
<p>In Sections 5.3.2 and 5.3.3, we saw that the <code>cv.glm()</code> function can be
used in order to compute the LOOCV test error estimate. Alternatively, one could compute those quantities using just the <code>glm()</code> and
<code>predict.glm()</code> functions, and a for loop. You will now take this approach in order to compute the LOOCV error for a simple logistic
regression model on the <code>Weekly</code> data set. Recall that in the context
of classification problems, the LOOCV error is given in (5.4).</p>
<ol>
<li>Fit a logistic regression model that predicts $\mathrm{Direction}$ using $\mathrm{Lag1}$ and $\mathrm{Lag2}$.</li>
<li>Fit a logistic regression model that predicts $\mathrm{Direction}$ using $\mathrm{Lag1}$ and $\mathrm{Lag2}$ <i>using all but the first observation</i>.</li>
<li>Use the model from 2 to predict the direction of the first observation. You can do this by predicting that the first observation will go up if $P (\mathrm{Direction}="\mathrm{Up}"| \mathrm{Lag1}, \mathrm{Lag2}) > 0.5$. Was this observation correctly classified?</li>
<li>
Write a for loop from $i = 1$ to $i = n$, where $n$ is the number of observations in the data set, that performs each of the following steps:
<ol>
<li>Fit a logistic regression model using all but the $i$th observation to predict $\mathrm{Direction}$ using $\mathrm{Lag1}$ and $\mathrm{Lag2}$.</li>
<li>Compute the posterior probability of the market moving up for the $i$th observation.</li>
<li>Use the posterior probability for the ith observation in order to predict whether or not the market moves up.</li>
<li>Determine whether or not an error was made in predicting the direction for the $i$th observation. If an error was made, then indicate this as a $1$, and otherwise indicate it as a $0$.</li>
</ol>
</li>
<li>Take the average of the $n$ numbers obtained in 4 in order to obtain the LOOCV estimate for the test error. Comment on the results.</li>
</ol>
</blockquote>
```
import pandas as pd
import numpy as np
# https://stackoverflow.com/questions/34398054/ipython-notebook-cell-multiple-outputs
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import statsmodels.api as sm
```
<h3>Exercise 5.7.1</h3>
<blockquote>
<i>Fit a logistic regression model that predicts $\mathrm{Direction}$ using $\mathrm{Lag1}$ and $\mathrm{Lag2}$.</i>
</blockquote>
```
df = pd.read_csv("../../DataSets/Weekly/Weekly.csv")
df = df.reindex(columns=['Year', 'Today', 'Lag1', 'Lag2', 'Lag3', 'Lag4', 'Lag5', 'Volume', 'Direction'])
df['Direction01'] = np.where(df['Direction'] == 'Up', 1, 0)
df.insert(0, 'Intercept', 1)
targetColumn = ['Direction01']
descriptiveColumns = ['Intercept', 'Lag1', 'Lag2']
df_X = df[descriptiveColumns]
df_Y = df[targetColumn]
model = sm.Logit(df_Y, df_X)
fitted = model.fit()
fitted.summary()
```
<h3>Exercise 5.7.2</h3>
<blockquote>
<i>Fit a logistic regression model that predicts $\mathrm{Direction}$ using $\mathrm{Lag1}$ and $\mathrm{Lag2}$ <i>using all but the first observation</i>.</i>
</blockquote>
```
df_X_train = df[descriptiveColumns].iloc[1:]
df_Y_train = df[targetColumn].iloc[1:]
df_X_test = df[descriptiveColumns].iloc[0]
df_Y_test = df[targetColumn].iloc[0]
model = sm.Logit(df_Y_train, df_X_train)
fitted = model.fit()
fitted.summary()
```
<h3>Exercise 5.7.3</h3>
<blockquote>
<i>Use the model from 2 to predict the direction of the first observation. You can do this by predicting that the first observation will go up if $P (\mathrm{Direction}="\mathrm{Up}"| \mathrm{Lag1}, \mathrm{Lag2}) > 0.5$. Was this observation correctly classified?</i>
</blockquote>
```
df_Y_test.iloc[0]
sr_Y_pred = fitted.predict(df_X_test.to_numpy())
sr_Y_pred[0]
```
<p>The model incorrectly classifies this observation as "Up".</p>
<h3>Exercise 5.7.4</h3>
<blockquote>
<i>Write a for loop from $i = 1$ to $i = n$, where $n$ is the number of observations in the data set, that performs each of the following steps:
<ol>
<li>Fit a logistic regression model using all but the $i$th observation to predict $\mathrm{Direction}$ using $\mathrm{Lag1}$ and $\mathrm{Lag2}$.</li>
<li>Compute the posterior probability of the market moving up for the $i$th observation.</li>
<li>Use the posterior probability for the ith observation in order to predict whether or not the market moves up.</li>
<li>Determine whether or not an error was made in predicting the direction for the $i$th observation. If an error was made, then indicate this as a $1$, and otherwise indicate it as a $0$.</li>
</ol></i>
</blockquote>
```
n = df.shape[0]
total_errors = 0
for i in range(n):
    excluded = [i]
    df_X_train = df[descriptiveColumns].drop(excluded, axis=0, inplace=False)
    df_Y_train = df[targetColumn].drop(excluded, axis=0, inplace=False)
    df_X_test = df[descriptiveColumns].iloc[i]
    df_Y_test = df[targetColumn].iloc[i]
    model = sm.Logit(df_Y_train, df_X_train)
    fitted = model.fit(disp=0)  # disp=0 silences per-fit convergence output
    sr_Y_pred = fitted.predict(df_X_test.to_numpy())
    assert sr_Y_pred.shape == (1, )
    Y_pred = 1 if sr_Y_pred[0] > 0.5 else 0
    if df_Y_test.iloc[0] != Y_pred:
        total_errors += 1
```
<h3>Exercise 5.7.5</h3>
<blockquote>
<i>Take the average of the $n$ numbers obtained in 4 in order to obtain the LOOCV estimate for the test error. Comment on the results.</i>
</blockquote>
```
total_errors
(total_errors / n) * 100
```
<p>The model misclassified about $44 \%$ of the observations.</p>
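As a cross-check, the same leave-one-out procedure is available off the shelf in scikit-learn; a sketch on synthetic data (this snippet does not reload `Weekly.csv`, so the numbers are illustrative only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic stand-in for the Lag1/Lag2 predictors and Direction response
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + rng.normal(scale=2.0, size=100) > 0).astype(int)

# Each fold fits on n-1 observations and scores the single held-out one
scores = cross_val_score(LogisticRegression(), X, y, cv=LeaveOneOut())
loocv_error = 1 - scores.mean()
print(f"LOOCV error estimate: {loocv_error:.3f}")
```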
Below is code with a link to a happy-or-sad dataset, which contains 80 images: 40 happy and 40 sad.
Create a convolutional neural network that trains to 100% accuracy on these images, and cancels training once training accuracy exceeds 0.999.
Hint -- it will work best with 3 convolutional layers.
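The cancel-on-accuracy requirement reduces to a simple threshold check inside `on_epoch_end`; a minimal framework-free sketch of that decision logic (the `should_stop` helper below is illustrative, not part of the Keras API):

```python
def should_stop(logs, target=0.999):
    """Return True once the logged training accuracy exceeds the target."""
    acc = logs.get('acc')  # the metric key may be missing on some epochs
    return acc is not None and acc > target

# Example epoch logs, shaped like what Keras passes to on_epoch_end
assert should_stop({'loss': 0.01, 'acc': 0.9995}) is True
assert should_stop({'loss': 0.40, 'acc': 0.8500}) is False
assert should_stop({'loss': 0.40}) is False  # metric missing -> keep training
```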
```
import tensorflow as tf
import os
import zipfile
from os import path, getcwd, chdir
# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab happy-or-sad.zip from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
path = f"{getcwd()}/../tmp2/happy-or-sad.zip"
zip_ref = zipfile.ZipFile(path, 'r')
zip_ref.extractall("/tmp/h-or-s")
zip_ref.close()
# GRADED FUNCTION: train_happy_sad_model
def train_happy_sad_model():
# Please write your code only where you are indicated.
# please do not remove # model fitting inline comments.
DESIRED_ACCURACY = 0.999
class myCallback(tf.keras.callbacks.Callback):
# Your Code
def on_epoch_end(self, epoch, logs={}):
if logs.get('acc') is not None and logs.get('acc') > DESIRED_ACCURACY:
print("\nReached 99.9% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
# This Code Block should Define and Compile the Model. Please assume the images are 150 X 150 in your implementation.
model = tf.keras.models.Sequential([
# Your Code Here
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
# Only 1 output neuron, a value from 0-1; flow_from_directory assigns classes alphabetically ('happy' -> 0, 'sad' -> 1)
tf.keras.layers.Dense(1, activation='sigmoid')
])
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['acc'])
# This code block should create an instance of an ImageDataGenerator called train_datagen
# And a train_generator by calling train_datagen.flow_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1/255)
# Please use a target_size of 150 X 150.
train_generator = train_datagen.flow_from_directory(
# Your Code Here
'/tmp/h-or-s', # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=10,  # 80 images / 10 per batch = 8 steps per epoch
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary'
)
# Expected output: 'Found 80 images belonging to 2 classes'
# This code block should call model.fit_generator and train for
# a number of epochs.
# model fitting
history = model.fit_generator(
# Your Code Here
train_generator,
steps_per_epoch=8,
epochs=20,
callbacks=[callbacks]
)
# model fitting
return history.history['acc'][-1]
# The Expected output: "Reached 99.9% accuracy so cancelling training!""
train_happy_sad_model()
# Now click the 'Submit Assignment' button above.
# Once that is complete, please run the following two cells to save your work and close the notebook
%%javascript
// Save the notebook
IPython.notebook.save_checkpoint();
%%javascript
// Shutdown and close the notebook
window.onbeforeunload = null
window.close();
IPython.notebook.session.delete();
```
```
# from google.colab import drive
# drive.mount('/content/drive')
import torch
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torchvision import utils
import copy
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
foreground_classes = {'plane', 'car', 'bird'}
background_classes = {'cat', 'deer', 'dog', 'frog', 'horse','ship', 'truck'}
fg1,fg2,fg3 = 0,1,2
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=10
for i in range(5000):
images, labels = next(dataiter)  # dataiter.next() is deprecated
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
def create_mosaic_img(bg_idx,fg_idx,fg):
"""
bg_idx : list of indexes of background_data[] to be used as background images in mosaic
fg_idx : index of image to be used as foreground image from foreground data
fg : at what position/index foreground image has to be stored out of 0-8
"""
image_list=[]
j=0
for i in range(9):
if i != fg:
image_list.append(background_data[bg_idx[j]].type("torch.DoubleTensor"))
j+=1
else:
image_list.append(foreground_data[fg_idx].type("torch.DoubleTensor"))
label = foreground_label[fg_idx] - fg1  # shift so the foreground classes map to labels 0,1,2
#image_list = np.concatenate(image_list ,axis=0)
image_list = torch.stack(image_list)
return image_list,label
desired_num = 30000
mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images
fore_idx =[] # list of indexes at which foreground image is present in a mosaic image, i.e. from 0 to 8
mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
for i in range(desired_num):
bg_idx = np.random.randint(0,35000,8)
fg_idx = np.random.randint(0,15000)
fg = np.random.randint(0,9)
fore_idx.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
mosaic_list_of_images.append(image_list)
mosaic_label.append(label)
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx):
"""
Args:
mosaic_list_of_images (list): mosaic images, each stored as a list of 9 images.
mosaic_label (list): foreground class label for each mosaic.
fore_idx (list): position (0-8) of the foreground image within each mosaic.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx], self.fore_idx[idx]
batch = 250
msd = MosaicDataset(mosaic_list_of_images, mosaic_label , fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
class Focus(nn.Module):
def __init__(self):
super(Focus, self).__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=18, kernel_size=3, padding=0)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(in_channels=18, out_channels=6, kernel_size=3, padding=0)
# self.conv3 = nn.Conv2d(in_channels=12, out_channels=32, kernel_size=3, padding=0)
self.fc1 = nn.Linear(1014, 512)
self.fc2 = nn.Linear(512, 64)
# self.fc3 = nn.Linear(512, 64)
# self.fc4 = nn.Linear(64, 10)
self.fc3 = nn.Linear(64,1)
def forward(self,z): #y is avg image #z batch of list of 9 images
y = torch.zeros([batch,3, 32,32], dtype=torch.float64)
x = torch.zeros([batch,9],dtype=torch.float64)
y = y.to("cuda")
x = x.to("cuda")
for i in range(9):
x[:,i] = self.helper(z[:,i])[:,0]
x = F.softmax(x,dim=1)
for i in range(9):
x1 = x[:,i]
y = y + torch.mul(x1[:,None,None,None],z[:,i])
return x, y
def helper(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = (F.relu(self.conv2(x)))
# print(x.shape)
# x = (F.relu(self.conv3(x)))
x = x.view(x.size(0), -1)
# print(x.shape)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
# x = F.relu(self.fc3(x))
# x = F.relu(self.fc4(x))
x = self.fc3(x)
return x
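# Side note: the Focus module above turns the 9 per-patch scores into softmax
# weights (alphas) and forms an attention-weighted average image. A small
# self-contained numpy sketch of that weighted-average step (illustrative
# only; the real computation runs on CUDA torch tensors):
import numpy as np
_scores = np.array([0.2, 2.5, -1.0])                          # toy per-patch scores
_alphas = np.exp(_scores) / np.exp(_scores).sum()             # softmax weights
_patches = np.ones((3, 2, 2)) * np.arange(3)[:, None, None]   # toy "images"
_avg = (_alphas[:, None, None] * _patches).sum(axis=0)        # weighted average
assert np.isclose(_alphas.sum(), 1.0)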
focus_net = Focus().double()
focus_net = focus_net.to("cuda")
class Classification(nn.Module):
def __init__(self):
super(Classification, self).__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=3, padding=0)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(in_channels=6, out_channels=6, kernel_size=3, padding=0)
# self.conv3 = nn.Conv2d(in_channels=12, out_channels=20, kernel_size=3, padding=0)
self.fc1 = nn.Linear(1014, 512)
self.fc2 = nn.Linear(512, 64)
# self.fc3 = nn.Linear(512, 64)
# self.fc4 = nn.Linear(64, 10)
self.fc3 = nn.Linear(64,3)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = (F.relu(self.conv2(x)))
# print(x.shape)
# x = (F.relu(self.conv3(x)))
x = x.view(x.size(0), -1)
# print(x.shape)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
# x = F.relu(self.fc3(x))
# x = F.relu(self.fc4(x))
x = self.fc3(x)
return x
classify = Classification().double()
classify = classify.to("cuda")
test_images =[] # list of mosaic images, each mosaic image is saved as a list of 9 images
fore_idx_test =[] #list of indexes at which foreground image is present in a mosaic image
test_label=[] # label of mosaic image = foreground class present in that mosaic
for i in range(10000):
bg_idx = np.random.randint(0,35000,8)
fg_idx = np.random.randint(0,15000)
fg = np.random.randint(0,9)
fore_idx_test.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
test_images.append(image_list)
test_label.append(label)
test_data = MosaicDataset(test_images,test_label,fore_idx_test)
test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False)
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer_classify = optim.Adam(classify.parameters(), lr=0.001)#, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
optimizer_focus = optim.Adam(focus_net.parameters(), lr=0.001)#, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
col1=[]
col2=[]
col3=[]
col4=[]
col5=[]
col6=[]
col7=[]
col8=[]
col9=[]
col10=[]
col11=[]
col12=[]
col13=[]
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
count += 1
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
print(count)
print("="*100)
col1.append(0)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total test set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
nos_epochs = 200
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
for epoch in range(nos_epochs): # loop over the dataset multiple times
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
running_loss = 0.0
epoch_loss = []
cnt=0
iteration = desired_num // batch
#training data set
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
# zero the parameter gradients
optimizer_focus.zero_grad()
optimizer_classify.zero_grad()
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
# print(outputs)
# print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1))
loss = criterion(outputs, labels)
loss.backward()
optimizer_focus.step()
optimizer_classify.step()
running_loss += loss.item()
mini = 60
if cnt % mini == mini-1: # print every 60 mini-batches
print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
epoch_loss.append(running_loss/mini)
running_loss = 0.0
cnt=cnt+1
if epoch % 5 == 0:
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
if(np.mean(epoch_loss) <= 0.005):
break
if epoch % 5 == 0:
# focus_net.eval()
# classify.eval()
col1.append(epoch+1)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
#************************************************************************
#testing data set
with torch.no_grad():
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
print('Finished Training')
# torch.save(focus_net.state_dict(),"/content/drive/My Drive/Research/Cheating_data/16_experiments_on_cnn_3layers/"+name+"_focus_net.pt")
# torch.save(classify.state_dict(),"/content/drive/My Drive/Research/Cheating_data/16_experiments_on_cnn_3layers/"+name+"_classify.pt")
columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ]
df_train = pd.DataFrame()
df_test = pd.DataFrame()
df_train[columns[0]] = col1
df_train[columns[1]] = col2
df_train[columns[2]] = col3
df_train[columns[3]] = col4
df_train[columns[4]] = col5
df_train[columns[5]] = col6
df_train[columns[6]] = col7
df_test[columns[0]] = col1
df_test[columns[1]] = col8
df_test[columns[2]] = col9
df_test[columns[3]] = col10
df_test[columns[4]] = col11
df_test[columns[5]] = col12
df_test[columns[6]] = col13
df_train
# plt.figure(12,12)
plt.plot(col1,col2, label='argmax > 0.5')
plt.plot(col1,col3, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.title("On Training set")
plt.show()
plt.plot(col1,col4, label ="focus_true_pred_true ")
plt.plot(col1,col5, label ="focus_false_pred_true ")
plt.plot(col1,col6, label ="focus_true_pred_false ")
plt.plot(col1,col7, label ="focus_false_pred_false ")
plt.title("On Training set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.savefig("train_ftpt.pdf", bbox_inches='tight')
plt.show()
df_test
# plt.figure(12,12)
plt.plot(col1,col8, label='argmax > 0.5')
plt.plot(col1,col9, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.title("On Testing set")
plt.show()
plt.plot(col1,col10, label ="focus_true_pred_true ")
plt.plot(col1,col11, label ="focus_false_pred_true ")
plt.plot(col1,col12, label ="focus_true_pred_false ")
plt.plot(col1,col13, label ="focus_false_pred_false ")
plt.title("On Testing set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.savefig("test_ftpt.pdf", bbox_inches='tight')
plt.show()
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total test set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
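The train and test loops above repeat the same four-way bookkeeping. Factoring the bucket decision into a small pure function (a refactoring sketch; the name `ftpt_category` is my own, not part of the original code) keeps the two loops consistent:

```python
def ftpt_category(focus_correct, pred_correct):
    """Bucket one sample into FTPT / FFPT / FTPF / FFPF."""
    if focus_correct and pred_correct:
        return "FTPT"   # focus_true_pred_true
    if not focus_correct and pred_correct:
        return "FFPT"   # focus_false_pred_true
    if focus_correct and not pred_correct:
        return "FTPF"   # focus_true_pred_false
    return "FFPF"       # focus_false_pred_false

# mirrors the if/elif chain used in both evaluation loops
assert ftpt_category(True, True) == "FTPT"
assert ftpt_category(False, True) == "FFPT"
assert ftpt_category(True, False) == "FTPF"
assert ftpt_category(False, False) == "FFPF"
```

Inside the loops this would become `counts[ftpt_category(focus == fore_idx[j], predicted[j] == labels[j])] += 1` with a `collections.Counter`.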
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total test set images", total)
max_alpha =[]
alpha_ftpt=[]
argmax_more_than_half=0
argmax_less_than_half=0
for i, data in enumerate(test_loader):
inputs, labels,fore_idx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
alphas, avg = focus_net(inputs)
    outputs = classify(avg)
    _, predicted = torch.max(outputs.data, 1)  # recompute here; otherwise `predicted` below is stale from the previous loop
mx,_ = torch.max(alphas,1)
max_alpha.append(mx.cpu().detach().numpy())
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if (focus == fore_idx[j] and predicted[j] == labels[j]):
alpha_ftpt.append(alphas[j][focus].item())
max_alpha = np.concatenate(max_alpha,axis=0)
print(max_alpha.shape)
plt.figure(figsize=(6,6))
_,bins,_ = plt.hist(max_alpha,bins=50,color ="c")
plt.title("alpha values histogram")
plt.savefig("alpha_hist.pdf")
plt.figure(figsize=(6,6))
_,bins,_ = plt.hist(np.array(alpha_ftpt),bins=50,color ="c")
plt.title("alpha values in ftpt")
plt.savefig("alpha_hist_ftpt.pdf")
```
# Programming Exercise 4 - Neural Network Learning
```
# load libraries
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# load matlab files
from scipy.io import loadmat
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 150)
pd.set_option('display.max_seq_items', None)
# matplotlib inline
%matplotlib inline
import seaborn as sns
sns.set_context('notebook')
sns.set_style('darkgrid')
data = loadmat('data/ex4data1.mat')
data.keys()
y = data['y']
# add intercept
X = np.c_[np.ones((data['X'].shape[0], 1)), data['X']]
print('X:', X.shape, '(with intercept)')
print('y:', y.shape)
weights = loadmat('data/ex3weights.mat')
weights.keys()
theta1, theta2 = weights['Theta1'], weights['Theta2']
print('theta1 :', theta1.shape)
print('theta2 :', theta2.shape)
params = np.r_[theta1.ravel(), theta2.ravel()]
print('params :', params.shape)
```
#### Neural Network
Input layer size = 400 (20x20 pixels) <br>
Hidden layer size = 25 <br>
Number of labels = 10
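These sizes fix the length of the unrolled parameter vector loaded above (`params`); a quick sanity check:

```python
input_layer_size, hidden_layer_size, num_labels = 400, 25, 10
# each layer's weight matrix has a bias column, hence the +1
n_theta1 = hidden_layer_size * (input_layer_size + 1)  # Theta1 is 25 x 401
n_theta2 = num_labels * (hidden_layer_size + 1)        # Theta2 is 10 x 26
assert n_theta1 == 10025
assert n_theta2 == 260
assert n_theta1 + n_theta2 == 10285  # matches params.shape
```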
### Neural Networks - Feed Forward and Cost Function
```
def sigmoid(z):
return(1 / (1 + np.exp(-z)))
```
#### Sigmoid gradient
#### $$ g'(z) = g(z)(1 - g(z))$$
where $$ g(z) = \frac{1}{1+e^{-z}}$$
```
def sigmoidGradient(z):
return(sigmoid(z)*(1-sigmoid(z)))
```
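As a quick sanity check of the identity above, we can compare the analytic gradient against a central finite difference (restating `sigmoid` so the snippet runs on its own):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_gradient(z):
    return sigmoid(z) * (1 - sigmoid(z))

z = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
eps = 1e-6
# central difference approximation of g'(z)
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
assert np.allclose(numeric, sigmoid_gradient(z), atol=1e-8)
assert sigmoid_gradient(0.0) == 0.25  # the gradient peaks at z = 0
```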
#### Cost Function
#### $$ J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\big[-y^{(i)}_{k}\,\log\,((h_\theta(x^{(i)}))_k)-(1-y^{(i)}_k)\,\log\,(1-(h_\theta(x^{(i)}))_k)\big]$$
#### Regularized Cost Function
#### $$ J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\bigg[-y^{(i)}_{k}\,\log\,((h_\theta(x^{(i)}))_k)-(1-y^{(i)}_k)\,\log\,(1-(h_\theta(x^{(i)}))_k)\bigg] + \frac{\lambda}{2m}\bigg[\sum_{j=1}^{25}\sum_{k=1}^{400}(\Theta_{j,k}^{(1)})^2+\sum_{j=1}^{10}\sum_{k=1}^{25}(\Theta_{j,k}^{(2)})^2\bigg]$$
```
def nnCostFunction(nn_params, input_layer_size, hidden_layer_size, num_labels, features, classes, reg):
    # when comparing with the Octave code, note that Python uses zero-indexed arrays,
    # but because numpy slicing does not include the right endpoint, the code is the same anyway.
theta1 = nn_params[0:(hidden_layer_size*(input_layer_size+1))].reshape(hidden_layer_size, (input_layer_size+1))
theta2 = nn_params[(hidden_layer_size*(input_layer_size+1)):].reshape(num_labels, (hidden_layer_size+1))
m = features.shape[0]
    y_matrix = pd.get_dummies(classes.ravel()).to_numpy()  # .as_matrix() was removed in recent pandas
# cost
a1 = features # 5000x401
    z2 = theta1.dot(a1.T) # 25x401 * 401x5000 = 25x5000
a2 = np.c_[np.ones((features.shape[0], 1)), sigmoid(z2.T)] # 5000x26
    z3 = theta2.dot(a2.T) # 10x26 * 26x5000 = 10x5000
a3 = sigmoid(z3) # 10x5000
J = -1*(1/m)*np.sum((np.log(a3.T)*(y_matrix)+np.log(1-a3).T*(1-y_matrix))) + \
(reg/(2*m))*(np.sum(np.square(theta1[:,1:])) + np.sum(np.square(theta2[:,1:])))
# Gradients
d3 = a3.T - y_matrix # 5000x10
d2 = theta2[:,1:].T.dot(d3.T)*sigmoidGradient(z2) # 25x10 * 10x5000 = 25x5000
    delta1 = d2.dot(a1) # 25x5000 * 5000x401 = 25x401
delta2 = d3.T.dot(a2) # 10x5000 * 5000x26 = 10x26
    theta1_ = np.c_[np.zeros((theta1.shape[0],1)), theta1[:,1:]]  # zero bias column: the bias term is not regularized
    theta2_ = np.c_[np.zeros((theta2.shape[0],1)), theta2[:,1:]]
theta1_grad = delta1/m + (theta1_*reg)/m
theta2_grad = delta2/m + (theta2_*reg)/m
return(J, theta1_grad, theta2_grad)
# regularization parameter = 0
nnCostFunction(params, 400, 25, 10, X, y, 0)[0]
# regularization paramerter = 1
nnCostFunction(params, 400, 25, 10, X, y, 1)[0]
[sigmoidGradient(z) for z in [-1, -0.5, 0, 0.5, 1]]
```
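Before trusting the analytic gradients from `nnCostFunction`, it is worth comparing them against a finite-difference approximation. A minimal, self-contained checker (the helper name `numerical_gradient` is my own; here it is exercised on a function with a known gradient rather than on the full network):

```python
import numpy as np

def numerical_gradient(cost_fn, params, eps=1e-4):
    """Central-difference approximation of the gradient of cost_fn at params."""
    grad = np.zeros_like(params)
    for i in range(params.size):
        step = np.zeros_like(params)
        step[i] = eps
        grad[i] = (cost_fn(params + step) - cost_fn(params - step)) / (2 * eps)
    return grad

# check on f(p) = sum(p**2), whose gradient is 2p
p = np.array([1.0, -2.0, 3.0])
approx = numerical_gradient(lambda q: np.sum(q**2), p)
assert np.allclose(approx, 2 * p, atol=1e-6)
```

The same function applied to a tiny network (few units, a handful of samples) would flag sign or indexing errors in the backpropagated gradients.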
### Tweets processing and sentiment analysis
---
In this notebook we load the tweets we previously collected using the ```Twitter streamer.py```. Along the way, we will flatten the Twitter JSON, select the text objects among the several options (main tweet, re-tweet, quote, etc.), clean them (remove non-alphabetic characters), translate non-English tweets, compute the sentiment of the text, and associate a location given a user-defined location or an automatic geolocalization.
__Note: An accompanying ```Tweets processing and sentiment.py``` file contains all the code in this notebook and is meant to be run from the terminal.__
---
We start by loading the tweet object from the ```.json``` files in the ```Twitter/Tweets/``` directory.
```
import glob
import json
# list all files containing tweets
files = list(glob.iglob('Twitter/Tweets/*.json'))
tweets_data = []
for file in files:
tweets_file = open(file, "r", encoding = 'utf-8')
# Read in tweets and store in list: tweets_data
for line in tweets_file:
tweet = json.loads(line)
tweets_data.append(tweet)
tweets_file.close()
print('There are', len(tweets_data), 'tweets in the dataset.')
```
## Processing JSON
---
There are multiple fields in the [Twitter JSON](https://developer.twitter.com/en/docs/tweets/data-dictionary/overview/tweet-object) which contain textual data. In a typical tweet, there's the tweet text, the user description, and the user location. In a tweet longer than 140 characters, there's the extended tweet child JSON. And in a quoted tweet, there's the original tweet text and the commentary accompanying the quoted tweet. The next image shows a portion of the Twitter JSON contents:

To analyze tweets at scale, we will want to __flatten__ the tweet JSON into a single level. This will allow us to store the tweets in a DataFrame format. To do this, we will define the function ```flatten_tweets()``` which will take several fields regarding text and location (stored in ```place```).
```
def flatten_tweets(tweets):
""" Flattens out tweet dictionaries so relevant JSON is
in a top-level dictionary. """
tweets_list = []
# Iterate through each tweet
for tweet_obj in tweets:
''' User info'''
# Store the user screen name in 'user-screen_name'
tweet_obj['user-screen_name'] = tweet_obj['user']['screen_name']
# Store the user location
tweet_obj['user-location'] = tweet_obj['user']['location']
''' Text info'''
# Check if this is a 140+ character tweet
if 'extended_tweet' in tweet_obj:
# Store the extended tweet text in 'extended_tweet-full_text'
tweet_obj['extended_tweet-full_text'] = \
tweet_obj['extended_tweet']['full_text']
if 'retweeted_status' in tweet_obj:
# Store the retweet user screen name in
# 'retweeted_status-user-screen_name'
tweet_obj['retweeted_status-user-screen_name'] = \
tweet_obj['retweeted_status']['user']['screen_name']
# Store the retweet text in 'retweeted_status-text'
tweet_obj['retweeted_status-text'] = \
tweet_obj['retweeted_status']['text']
if 'extended_tweet' in tweet_obj['retweeted_status']:
# Store the extended retweet text in
#'retweeted_status-extended_tweet-full_text'
tweet_obj['retweeted_status-extended_tweet-full_text'] = \
tweet_obj['retweeted_status']['extended_tweet']['full_text']
if 'quoted_status' in tweet_obj:
# Store the retweet user screen name in
#'retweeted_status-user-screen_name'
tweet_obj['quoted_status-user-screen_name'] = \
tweet_obj['quoted_status']['user']['screen_name']
# Store the retweet text in 'retweeted_status-text'
tweet_obj['quoted_status-text'] = \
tweet_obj['quoted_status']['text']
if 'extended_tweet' in tweet_obj['quoted_status']:
# Store the extended retweet text in
#'retweeted_status-extended_tweet-full_text'
tweet_obj['quoted_status-extended_tweet-full_text'] = \
tweet_obj['quoted_status']['extended_tweet']['full_text']
''' Place info'''
if 'place' in tweet_obj:
# Store the country code in 'place-country_code'
try:
tweet_obj['place-country'] = \
tweet_obj['place']['country']
tweet_obj['place-country_code'] = \
tweet_obj['place']['country_code']
tweet_obj['location-coordinates'] = \
tweet_obj['place']['bounding_box']['coordinates']
except: pass
tweets_list.append(tweet_obj)
return tweets_list
```
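`flatten_tweets()` pulls out a hand-picked set of fields. For comparison, a fully generic alternative (a sketch, not what this notebook uses) recursively flattens every level with the same dash-joined key convention:

```python
def flatten(d, parent_key="", sep="-"):
    """Recursively flatten a nested dict, joining keys with `sep`."""
    items = {}
    for k, v in d.items():
        key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            items.update(flatten(v, key, sep))
        else:
            items[key] = v
    return items

tweet = {"text": "hi", "user": {"screen_name": "ada", "location": "UK"}}
assert flatten(tweet) == {"text": "hi",
                          "user-screen_name": "ada",
                          "user-location": "UK"}
```

The hand-picked version remains preferable here, since it skips the many Twitter JSON fields we never use.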
In the context of this project, though, we are interested in just one text field. Therefore, we now define a function ```select_text(tweets)``` that selects the main text, whether the tweet is a principal tweet or a re-tweet. We drop the quoted text, as it is usually repetitive and may not be informative.
```
def select_text(tweets):
''' Assigns the main text to only one column depending
on whether the tweet is a RT/quote or not'''
tweets_list = []
# Iterate through each tweet
for tweet_obj in tweets:
if 'retweeted_status-extended_tweet-full_text' in tweet_obj:
tweet_obj['text'] = \
tweet_obj['retweeted_status-extended_tweet-full_text']
elif 'retweeted_status-text' in tweet_obj:
tweet_obj['text'] = tweet_obj['retweeted_status-text']
elif 'extended_tweet-full_text' in tweet_obj:
tweet_obj['text'] = tweet_obj['extended_tweet-full_text']
tweets_list.append(tweet_obj)
return tweets_list
```
We now build the data frame. Notice that we choose the columns (fields) relevant for our analysis. This includes the language of the tweet, ```lang```.
We also keep ```user-location```, which is set manually by the user, and the ```country```, ```country_code``` and ```coordinates``` fields from ```place```. These fields appear when the tweet is geo-tagged, which is usually the case for less than 10% of all tweets.
```
import pandas as pd
# flatten tweets
tweets = flatten_tweets(tweets_data)
columns_all_text = ['text', 'extended_tweet-full_text', 'retweeted_status-text',
'retweeted_status-extended_tweet-full_text', 'quoted_status-text',
'quoted_status-extended_tweet-full_text', 'lang', 'user-location',
'place-country_code']
# select text
tweets = select_text(tweets)
columns = ['text', 'lang', 'user-location', 'place-country',
'place-country_code', 'location-coordinates', 'user-screen_name']
# Create a DataFrame from `tweets`
df_tweets = pd.DataFrame(tweets, columns=columns)
# replaces NaNs by Nones
df_tweets.where(pd.notnull(df_tweets), None, inplace=True)
#
df_tweets.head()
df_tweets.info()
```
__++++++++++++++++++++++++++++++++++++++++ [Take just a sample for quick checks]__
```
df_tweets_sample = df_tweets.copy()[:50]
```
__++++++++++++++++++++++++++++++++++++++++__
## Languages
---
In this part of the process we will replace the language codes in ```lang``` with the actual language names. We will do this with the auxiliary ```Countries/languages.json``` dataset.
```
with open('Countries/languages.json', 'r', encoding='utf-8') as json_file:
languages_dict = json.load(json_file)
{k: languages_dict[k] for k in list(languages_dict)[:5]}
names = []
for idx, row in df_tweets_sample.iterrows():
lang = row['lang']
if lang == 'und':
names.append(None)
elif lang == 'in':
name = languages_dict['id']['name']
names.append(name)
elif lang == 'iw':
name = languages_dict['he']['name']
names.append(name)
else:
name = languages_dict[lang]['name']
names.append(name)
df_tweets_sample['language'] = names
df_tweets_sample.drop(['lang'], axis=1, inplace=True)
#
df_tweets_sample.head()
```
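The if/elif chain above can be condensed into a lookup table. A sketch (the dict covers the legacy codes Twitter is known to emit, `in` for Indonesian and `iw` for Hebrew, plus the old Yiddish code `ji` as one more example; the helper name is my own):

```python
LEGACY_ISO_CODES = {"in": "id", "iw": "he", "ji": "yi"}  # old -> current ISO 639-1

def normalize_lang_code(code):
    """Map Twitter's legacy language codes to current ISO 639-1 codes."""
    if code == "und":  # 'undetermined'
        return None
    return LEGACY_ISO_CODES.get(code, code)

assert normalize_lang_code("in") == "id"
assert normalize_lang_code("iw") == "he"
assert normalize_lang_code("und") is None
assert normalize_lang_code("en") == "en"
```

The loop then reduces to `names.append(languages_dict[c]['name'] if (c := normalize_lang_code(lang)) else None)`.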
## Locations
---
Now we move to process the locations. We will first treat ```place``` fields and then ```user-location```.
### place-
The data in the ```place``` object is, obviously, more reliable than ```user-location```. Therefore, although it is present in only 0.91% of our tweets, we will take care of it. First, the country code in ```place-country_code``` comes in ISO 2 form, so we will translate it to ISO 3 form with [country converter](https://github.com/konstantinstadler/country_converter). Then, we will do the same to change ```place-country``` names to the standard, short names.
```
import country_converter as coco
# change codes to iso3
to_iso3_func = lambda x: coco.convert(names=x, to='iso3', not_found=None) \
if x is not None else x
df_tweets_sample['place-country_code'] = \
df_tweets_sample['place-country_code'].apply(to_iso3_func)
# change name to standard name
to_std_func = lambda x: coco.convert(names=x, to='name_short', not_found=None) \
if x is not None else x
df_tweets_sample['place-country'] = \
df_tweets_sample['place-country'].apply(to_std_func)
```
### user-locations
Here we take the manually-set ```user-locations``` and translate them to country names and codes (this involves some trust in the user). We do this using the [GeoPy](https://geopy.readthedocs.io/en/latest/#) library and, again, ```country_converter``` to find the country codes in ISO 3 form.
__A word of caution__: GeoPy connects to an API and, unfortunately, each call takes almost a second. This makes processing ~ 50 K tweets rather slow.
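Note also that Nominatim's usage policy asks clients to throttle requests; geopy ships a helper for this (`RateLimiter` in `geopy.extra.rate_limiter`, which wraps `geolocator.geocode` with a minimum delay between calls). To show the idea without a network dependency, here is a minimal stdlib-only stand-in (the decorator and `fake_geocode` are illustrative names, not geopy API):

```python
import time
from functools import wraps

def rate_limited(min_delay_seconds):
    """Decorator: enforce a minimum delay between successive calls."""
    def decorator(func):
        last_call = [0.0]
        @wraps(func)
        def wrapper(*args, **kwargs):
            wait = min_delay_seconds - (time.monotonic() - last_call[0])
            if wait > 0:
                time.sleep(wait)
            last_call[0] = time.monotonic()
            return func(*args, **kwargs)
        return wrapper
    return decorator

@rate_limited(0.1)
def fake_geocode(query):
    return query.upper()

start = time.monotonic()
results = [fake_geocode(q) for q in ["paris", "oslo", "lima"]]
elapsed = time.monotonic() - start
assert results == ["PARIS", "OSLO", "LIMA"]
assert elapsed >= 0.2  # two enforced delays between three calls
```

In production, `geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)` plays the same role.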
```
from geopy.geocoders import Nominatim
from tqdm import tqdm
tqdm.pandas()
def geo_locator(user_location):
# initialize geolocator
geolocator = Nominatim(user_agent='Tweet_locator')
if user_location is not None:
try :
# get location
location = geolocator.geocode(user_location, language='en')
# get coordinates
location_exact = geolocator.reverse(
[location.latitude, location.longitude], language='en')
# get country codes
c_code = location_exact.raw['address']['country_code']
return c_code
except:
return None
else :
return None
# apply geo locator to user-location
loc = df_tweets_sample['user-location'].progress_apply(geo_locator)
df_tweets_sample['user-country_code'] = loc
# change codes to iso3
df_tweets_sample['user-country_code'] = \
df_tweets_sample['user-country_code'].apply(to_iso3_func)
# create user-country column
df_tweets_sample['user-country'] = \
df_tweets_sample['user-country_code'].apply(to_std_func)
# drop old column
df_tweets_sample.drop(['user-location'], axis=1, inplace=True)
#
df_tweets_sample.head()
```
Finally, we reduce the ```place-country``` and ```user-country``` columns to one by keeping the former when it exists, otherwise we keep the latter. We do the same for _codes_ columns.
```
countries, codes = [], []
for idx, row in df_tweets_sample.iterrows():
if row['place-country_code'] is None:
country = row['user-country']
code = row['user-country_code']
countries.append(country)
codes.append(code)
else :
countries.append(row['place-country'])
codes.append(row['place-country_code'])
df_tweets_sample['location'] = countries
df_tweets_sample['location_code'] = codes
# drop old columns
df_tweets_sample.drop(columns=['place-country', 'place-country_code',
'user-country', 'user-country_code'],
inplace=True)
#
df_tweets_sample.head()
```
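The ```iterrows``` loop above is equivalent to pandas' built-in ```combine_first```, which takes the caller's value where present and falls back to the argument's value otherwise. A sketch on toy data:

```python
import pandas as pd

df = pd.DataFrame({
    "place-country": ["France", None, None],
    "user-country":  ["Spain", "Peru", None],
})
# prefer the geo-tagged place, fall back to the user-declared country
df["location"] = df["place-country"].combine_first(df["user-country"])
assert df["location"].tolist()[:2] == ["France", "Peru"]
assert pd.isna(df["location"].iloc[2])
```

The same one-liner applies to the ```_code``` columns.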
## Text-cleaning
---
It is now time to process the tweets' text. This involves removing non-alphabetic characters and translating non-English tweets. We will, however, retain both versions and actually use the texts with emojis and other characters, as our sentiment analyzer can handle them.
To remove non-alphabetic characters, we use [spaCy](https://spacy.io) as it is quite straightforward and we do not need to specify the regular expression.
```
import spacy
nlp = spacy.load('en_core_web_sm')
def cleaner(string):
# Generate list of tokens
doc = nlp(string)
lemmas = [token.lemma_ for token in doc]
# Remove tokens that are not alphabetic
a_lemmas = [lemma for lemma in lemmas
if lemma.isalpha() or lemma == '-PRON-']
    # Return the cleaned string
return ' '.join(a_lemmas)
df_tweets_sample['text_cleaned'] = \
df_tweets_sample['text'].progress_apply(cleaner)
#
df_tweets_sample.head()
```
To translate the non-English tweets, we use [googletrans](https://pypi.org/project/googletrans/), which also connects to an external API but is considerably faster.
__Another word of caution:__ There is a poorly documented error discussed, _e.g._, here: https://stackoverflow.com/questions/49497391/googletrans-api-error-expecting-value-line-1-column-1-char-0. To bypass this error, I use ```np.array_split()``` to divide the dataframe into several chunks and process one chunk at a time in a loop. This works fine, but I still save each chunk's translations to a csv so that if something goes wrong in any iteration, I can recompute just that chunk. I also instantiate ```Translator()``` each time.
```
import numpy as np
# select only not-null tweets not in English
mask1 = df_tweets_sample['text'].notnull()
mask2 = df_tweets_sample['language'] != 'English'
df_masked = df_tweets_sample[(mask1) & (mask2)]
# split dataframe in x equal-size pieces
df_tweets_sample_splitted = np.array_split(df_masked, 150)
def tweet_translation(df, idx):
""" Translate tweets using googletrans """
from googletrans import Translator
translator = Translator()
try:
# translate raw tweet
trans = df['text'].apply(translator.translate, dest='en')
# create column extracting the translated text
df['text_english'] = trans.apply(lambda x: x.text)
# append to empty list
translations.append(df)
# save data in case error happens
df.to_csv('Twitter/Translations/translation_{}.csv'.format(idx))
except Exception as e:
print(e, ' -- at index ', idx)
translations = []
for idx, df in enumerate(tqdm(df_tweets_sample_splitted)):
tqdm._instances.clear()
tweet_translation(df, idx)
# concatenate the chunks into a single dataframe
df_translations = pd.concat(translations)
# join it with the old one
df_english = df_tweets_sample.join(df_translations['text_english'])
#
df_english.head()
```
We finally append the original, unprocessed English texts to 'text_english'.
```
# replaces NaNs by Nones
df_english.where(pd.notnull(df_english), None, inplace=True)
# add original English tweets to text_english by replacing Nones
texts = []
for idx, row in df_english.iterrows():
if row['text_english'] is None:
text = row['text']
texts.append(text)
else :
texts.append(row['text_english'])
df_english['text_english'] = texts
#
df_english.head()
```
## Sentiment Analysis
---
We finally compute the sentiment of each tweet. For this, we use [NLTK](https://www.nltk.org)'s ```SentimentIntensityAnalyzer``` object from the ```nltk.sentiment.vader``` library.
> _VADER (Valence Aware Dictionary and sEntiment Reasoner) is a lexicon and rule-based sentiment analysis tool that is specifically attuned to sentiments expressed in social media._ [[Ref.]](https://medium.com/analytics-vidhya/simplifying-social-media-sentiment-analysis-using-vader-in-python-f9e6ec6fc52f)
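VADER's ```polarity_scores``` returns a dict with ```neg```, ```neu```, ```pos``` and the normalized aggregate ```compound``` in [-1, 1]; we keep only ```compound```. If discrete labels are ever needed downstream, a small helper might look like this (the ±0.05 cut-offs follow the thresholds commonly recommended by the VADER authors; the function name is my own):

```python
def sentiment_label(compound, pos_threshold=0.05, neg_threshold=-0.05):
    """Map VADER's compound score in [-1, 1] to a coarse label."""
    if compound >= pos_threshold:
        return "positive"
    if compound <= neg_threshold:
        return "negative"
    return "neutral"

assert sentiment_label(0.65) == "positive"
assert sentiment_label(-0.4) == "negative"
assert sentiment_label(0.0) == "neutral"
```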
```
from nltk.sentiment.vader import SentimentIntensityAnalyzer
df_sentiment = df_english.copy()
# instantiate new SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
sentiment_scores = df_sentiment['text_english'].progress_apply(
sid.polarity_scores)
sentiment = sentiment_scores.apply(lambda x: x['compound'])
df_sentiment['sentiment'] = sentiment
#
df_sentiment.head()
```
To conclude, we reorder the columns and save the dataframe to a csv file.
```
cols_order = ['text', 'language', 'location', 'location_code',
'location-coordinates', 'sentiment', 'text_english',
'text_cleaned', 'user-screen_name']
df_final = df_sentiment[cols_order]
#
df_final.head()
df_final.to_csv('Twitter/Tweets_sentiment_nb.csv')
```
---
Here is a snapshot of the bash process of ```Tweets processing and sentiment.py``` over the full ~ 50 K dataset on a MacBook Pro with a 2.2 GHz Intel Core i7 processor.

```
import matplotlib.pyplot as plt
import numpy as np
import pickle
from mayavi import mlab
import blockdiagram as bd
%matplotlib qt
# load example model
chb_3d = pickle.load(open("example_fluvial_model.p", "rb" ))
np.shape(chb_3d.strat)
```
## Create block diagram
```
mlab.figure(bgcolor=(1,1,1))
ve = 10.0 # vertical exaggeration
scale = 0.1 # scaling of diagram (important for 3D printing)
strat_switch = 1 # equals 1 if you want stratigraphy displayed on the sides
layers_switch = 0 # equals 1 if you want stratigraphic boundaries displayed on the sides
contour_switch = 0 # equals 1 if you want contours displayed on the top surface
dx = 10.0 # cell size for display
bottom = np.min(chb_3d.strat) - 1.5 # elevation of bottom side of diagram
color_mode = 'property' # determines how the stratigraphy will be colored; can be 'property', 'facies', or 'time'
colors = [[0.5,0.25,0],[0.9,0.9,0],[0.5,0.25,0]] # colors for 'facies' display
line_thickness = 1.0 # thickness of lines if 'layers_switch' is 1
gap = 20 # distance between exploded blocks (if any; in number of gridcells)
h = 5.0 # channel depth (m)
nx = 1 # number of blocks in x direction
ny = 1 # number of blocks in y direction
export = 0
bd.create_exploded_view(chb_3d.strat,chb_3d.facies,chb_3d.topo,h,nx,ny,gap,dx,ve,scale,strat_switch,
layers_switch,contour_switch,color_mode,colors,line_thickness,bottom,export)
bd.create_exploded_view(chb_3d.strat,chb_3d.facies,chb_3d.topo,h,2,2,gap,dx,ve,scale,strat_switch,
layers_switch,contour_switch,color_mode,colors,line_thickness,bottom,export)
```
## Create exploded-view diagram
```
mlab.figure(bgcolor=(1,1,1))
bd.create_exploded_view(chb_3d.strat,chb_3d.facies,chb_3d.topo,h,1,1,gap,dx,ve,scale,strat_switch,
layers_switch,contour_switch,color_mode,colors,line_thickness,bottom,export)
```
## Create random section
```
xcoords, ycoords = bd.select_random_section(chb_3d.strat) # define x and y coordinates for random section
mlab.figure(bgcolor=(1,1,1))
color_mode = 'property'
bd.create_random_section_n_points(chb_3d.strat,chb_3d.facies,chb_3d.topo,h,scale,ve,color_mode,colors,
xcoords[:-1],xcoords[1:],ycoords[:-1],ycoords[1:],dx,bottom,export)
```
## Create 'random cookie'
```
xcoords, ycoords = bd.select_random_section(chb_3d.strat) # define x and y coordinates for random section
mlab.figure(bgcolor=(1,1,1))
bd.create_random_cookie(chb_3d.strat,chb_3d.facies,chb_3d.topo,h,scale,ve,color_mode,colors,xcoords[:-1],xcoords[1:],
ycoords[:-1],ycoords[1:],dx,bottom,export)
```
## Create fence diagram
```
mlab.figure(bgcolor=(1,1,1))
bd.create_fence_diagram(chb_3d.strat,chb_3d.facies,chb_3d.topo,h,6,2,gap,dx,ve,scale,layers_switch,color_mode,colors,line_thickness,bottom,export)
```
<p><font size="6"><b>Jupyter notebook INTRODUCTION </b></font></p>
> *DS Python for GIS and Geoscience*
> *October, 2021*
>
> *© 2021, Joris Van den Bossche and Stijn Van Hoey (<mailto:jorisvandenbossche@gmail.com>, <mailto:stijnvanhoey@gmail.com>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
---
<big><center>To run a cell: push the start triangle in the menu or type **SHIFT + ENTER/RETURN** <br>

</center></big>
# Notebook cell types
We will work in **Jupyter notebooks** during this course. A notebook is a collection of `cells`, that can contain different content:
## Code
```
# Code cell, then we are using python
print('Hello DS')
DS = 10
print(DS + 5) # Yes, we advise to use Python 3 (!)
```
Writing code is what you will do most during this course!
## Markdown
Text cells, using Markdown syntax. With the syntax, you can make text **bold** or *italic*, amongst many other things...
* list
* with
* items
[Link to interesting resources](https://www.youtube.com/watch?v=z9Uz1icjwrM) or images: 
> Blockquotes if you like them
> This line is part of the same blockquote.
Mathematical formulas can also be incorporated (LaTeX it is...)
$$\frac{dBZV}{dt}=BZV_{in} - k_1 .BZV$$
$$\frac{dOZ}{dt}=k_2 .(OZ_{sat}-OZ) - k_1 .BZV$$
Or tables:
course | points
--- | ---
Math | 8
Chemistry | 4
or tables with LaTeX...
Symbol | explanation
--- | ---
$BZV_{(t=0)}$ | initial biochemical oxygen demand (7.33 mg.l-1)
$OZ_{(t=0)}$ | initial dissolved oxygen (8.5 mg.l-1)
$BZV_{in}$ | BZV input (1 mg.l-1.min-1)
$OZ_{sat}$ | dissolved oxygen saturation concentration (11 mg.l-1)
$k_1$ | bacterial degradation rate (0.3 min-1)
$k_2$ | reaeration constant (0.4 min-1)
Code can also be incorporated, but then just to illustrate:
```python
BOT = 12
print(BOT)
```
See also: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet
## HTML
You can also use HTML commands, just check this cell:
<h3> html-adapted title with <h3> </h3> <p></p>
<b> Bold text <b> </b> of <i>or italic <i> </i>
## Headings of different sizes: section
### subsection
#### subsubsection
## Raw Text
# Notebook handling ESSENTIALS
## Completion: TAB

* The **TAB** button is essential: it shows you all **possible actions** available after loading a library *AND* it is used for **autocompletion**:
```
import os
os.mkdir
os.mkdir
my_very_long_variable_name = 3
my_very_long_variable_name
```
## Help: SHIFT + TAB

* The **SHIFT-TAB** combination is ultra essential to get information/help about the current operation
```
round(3.2)
os.mkdir
# An alternative is to put a question mark behind the command
os.mkdir?
```
<div class="alert alert-success">
<b>EXERCISE</b>: What happens if you put two question marks behind the command?
</div>
```
import glob
glob.glob??
```
## *edit* mode to *command* mode
* *edit* mode means you're editing a cell, i.e. with your cursor inside a cell to type content --> <font color="green">green colored side</font>
* *command* mode means you're NOT editing(!), i.e. NOT with your cursor inside a cell to type content --> <font color="blue">blue colored side</font>
To start editing, click inside a cell or
<img src="../img/notebook/enterbutton.png" alt="Key enter" style="width:150px">
To stop editing,
<img src="../img/notebook/keyescape.png" alt="Key A" style="width:150px">
## new cell A-bove
<img src="../img/notebook/keya.png" alt="Key A" style="width:150px">
Create a new cell above with the key A... when in *command* mode
## new cell B-elow
<img src="../img/notebook/keyb.png" alt="Key B" style="width:150px">
Create a new cell below with the key B... when in *command* mode
## CTRL + SHIFT + P
Just do it!
## Trouble...
<div class="alert alert-danger">
**NOTE**: When you're stuck, or things are crashing:
* first try **Kernel** > **Interrupt** -> your cell should stop running
* if no success -> **Kernel** > **Restart** -> restart your notebook
</div>
## Overload?!?
<img src="../img/notebook/toomuch.jpg" alt="Key A" style="width:500px">
<br><br>
<center>No stress, just go to </center>
<br>
<center><p style="font-size: 200%;text-align: center;margin:500">`Help` > `Keyboard shortcuts`</p></center>
* **Stackoverflow** is really, really, really nice!
http://stackoverflow.com/questions/tagged/python
* Google search is with you!
<big><center>**REMEMBER**: To run a cell: <strike>push the start triangle in the menu or</strike> type **SHIFT + ENTER**

# some MAGIC...
## `%psearch`
```
%psearch os.*dir
```
## `%%timeit`
```
%%timeit
mylist = range(1000)
for i in mylist:
i = i**2
import numpy as np
%%timeit
np.arange(1000)**2
```
## `%lsmagic`
```
%lsmagic
```
## `%whos`
```
%whos
```
# Let's get started!
```
from IPython.display import FileLink, FileLinks
FileLinks('.', recursive=False)
```
```
from resources.workspace import *
```
$
%MACRO DEFINITION
\newcommand{\Reals}{\mathbb{R}}
\newcommand{\Imags}{i\Reals}
\newcommand{\Integers}{\mathbb{Z}}
\newcommand{\Naturals}{\mathbb{N}}
%
\newcommand{\Expect}[0]{\mathop{}\! \mathbb{E}}
\newcommand{\NormDist}{\mathop{}\! \mathcal{N}}
%
\newcommand{\mat}[1]{{\mathbf{{#1}}}}
%\newcommand{\mat}[1]{{\pmb{\mathsf{#1}}}}
\newcommand{\bvec}[1]{{\mathbf{#1}}}
%
\newcommand{\trsign}{{\mathsf{T}}}
\newcommand{\tr}{^{\trsign}}
%
\newcommand{\I}[0]{\mat{I}}
\newcommand{\K}[0]{\mat{K}}
\newcommand{\bP}[0]{\mat{P}}
\newcommand{\F}[0]{\mat{F}}
\newcommand{\bH}[0]{\mat{H}}
\newcommand{\bF}[0]{\mat{F}}
\newcommand{\R}[0]{\mat{R}}
\newcommand{\Q}[0]{\mat{Q}}
\newcommand{\B}[0]{\mat{B}}
\newcommand{\Ri}[0]{\R^{-1}}
\newcommand{\Bi}[0]{\B^{-1}}
\newcommand{\X}[0]{\mat{X}}
\newcommand{\A}[0]{\mat{A}}
\newcommand{\Y}[0]{\mat{Y}}
\newcommand{\E}[0]{\mat{E}}
\newcommand{\U}[0]{\mat{U}}
\newcommand{\V}[0]{\mat{V}}
%
\newcommand{\x}[0]{\bvec{x}}
\newcommand{\y}[0]{\bvec{y}}
\newcommand{\br}[0]{\bvec{r}}
\newcommand{\bb}[0]{\bvec{b}}
%
\newcommand{\cx}[0]{\text{const}}
\newcommand{\norm}[1]{\|{#1}\|}
%
$
In this tutorial we shall derive:
# the Kalman filter for multivariate systems.
The [forecast step](T3%20-%20Univariate%20Kalman%20filtering.ipynb#The-forecast-step) remains essentially unchanged. The only difference is the use of the transpose ${}^T$ in the covariance equation:
$\begin{align}
\hat{\x}_k^f
&= \bF_{k-1} \hat{\x}_{k-1}^a \, , \tag{1a} \\
\bP_k^f
&= \bF_{k-1} \bP_{k-1}^a \bF_{k-1}\tr + \Q_{k-1} \, . \tag{1b}
\end{align}$
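A minimal NumPy sketch of this forecast step (the matrices `F`, `Q` and the state values are invented for illustration, not taken from `resources.workspace`):

```python
import numpy as np

def forecast(xa, Pa, F, Q):
    """One KF forecast step: eqns (1a) and (1b)."""
    xf = F @ xa            # (1a): propagate the analysis mean
    Pf = F @ Pa @ F.T + Q  # (1b): propagate the analysis covariance
    return xf, Pf

# Tiny 2-state constant-velocity example
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Q = 0.1 * np.eye(2)
xa = np.array([0.0, 1.0])
Pa = np.eye(2)
xf, Pf = forecast(xa, Pa, F, Q)
```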
However, the *analysis step* gets a little more complicated.
#### Exc 2 (The likelihood):
Suppose the observations at time $k$ are related to the true state ($\x_k$) by:
\begin{align*}
\y_k &= \bH \x_k + \br_k \, , \;\; \qquad (2)
\end{align*}
where the noise follows the law $\br_k \sim \NormDist(\mathbf{0}, \R_k)$ for some $\R_k>0$ (i.e. $\mathbf{R}_k$ is symmetric-positive-definite).
<div class="alert alert-info" role="alert">
<b>NB:</b> The analysis step is only concerned with a single time (index). We therefore drop the $k$ subscript in the following.
</div>
Derive the expression for $p(\mathbf{y}|\mathbf{x})$.
```
#show_answer('Likelihood derivation')
```
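As a numerical aside (not the derivation itself), the likelihood $p(\y|\x) = \NormDist(\y \mid \bH\x, \R)$ can be evaluated directly with `scipy.stats`; the matrices below are invented for illustration:

```python
import numpy as np
from scipy.stats import multivariate_normal

H = np.array([[1.0, 0.0]])  # observe only the first state component
R = np.array([[0.25]])      # observation error covariance
x = np.array([2.0, -1.0])
y = np.array([2.3])

# p(y|x) is Gaussian in y, centred on H x, with covariance R
lik = multivariate_normal.pdf(y, mean=H @ x, cov=R)
```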
The following exercise derives the analysis step
#### Exc 4 (The 'precision' form of the KF):
Similarly to [Exc 2.18](T2%20-%20Bayesian%20inference.ipynb#Exc--2.18-'Gaussian-Bayes':),
it may be shown that the prior $p(\x) = \NormDist(\x \mid \bb,\B)$
and likelihood $p(\y|\x) = \NormDist(\y \mid \bH \x,\R)$,
yield the posterior:
\begin{align}
p(\x|\y)
&= \NormDist(\x \mid \hat{\x}, \bP) \tag{4}
\, ,
\end{align}
where the posterior/analysis mean (vector) and covariance (matrix) are given by:
\begin{align}
\bP &= (\bH\tr \Ri \bH + \Bi)^{-1} \, , \tag{5} \\
\hat{\x} &= \bP\left[\bH\tr \Ri \y + \Bi \bb\right] \, . \tag{6}
\end{align}
Prove eqns (4-6).
Hint: as in [Exc 2.18](T2%20-%20Bayesian%20inference.ipynb#Exc--2.18-'Gaussian-Bayes':), the main part lies in "completing the square" in $\x$.
```
#show_answer('KF precision')
```
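A minimal sketch of the precision-form update, eqns (5) and (6), using invented matrices (explicit inverses are fine at this toy size, though `np.linalg.solve` would be preferred in practice):

```python
import numpy as np

def analysis_precision(b, B, y, H, R):
    """Precision-form KF analysis: eqns (5) and (6)."""
    Bi = np.linalg.inv(B)
    Ri = np.linalg.inv(R)
    P = np.linalg.inv(H.T @ Ri @ H + Bi)  # (5)
    xhat = P @ (H.T @ Ri @ y + Bi @ b)    # (6)
    return xhat, P

B = np.array([[2.0, 0.5],
              [0.5, 1.0]])
R = np.array([[0.25]])
H = np.array([[1.0, 0.0]])
b = np.array([0.0, 0.0])
y = np.array([1.0])
xhat, P = analysis_precision(b, B, y, H, R)
```

Note how the single observation $y=1$ pulls the posterior mean away from the prior mean $\bb = 0$.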
<div class="alert alert-info" role="alert">
We have now derived (one form of) the Kalman filter. In the multivariate case,
we know how to:
<ul>
<li>Propagate our estimate of $\x$ to the next time step using eqns (1a) and (1b). </li>
<li>Update our estimate of $\x$ by assimilating the latest observation $\y$, using eqns (5) and (6).</li>
</ul>
</div>
However, the computations can be pretty expensive...
**Exc 5:** Suppose $\mathbf{x}$ is $M$-dimensional and has a covariance matrix $\mathbf{B}$.
* (a). What's the size of $\mathbf{B}$?
* (b). How many "flops" (approximately, i.e. to leading order) are required to compute the "precision form" of the KF update equation, eqn (5)?
* (c). How much memory (bytes) is required to hold its covariance matrix $\mathbf{B}$?
* (d). How many megabytes is this if $M$ is a million?
```
#show_answer('Cov memory')
```
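A back-of-the-envelope check for part (d), assuming 8-byte (double-precision) floats:

```python
M = 10**6                 # state dimension
bytes_B = M * M * 8       # B has M*M entries of 8 bytes each
megabytes = bytes_B / 10**6
print(megabytes)          # 8e6 MB, i.e. 8 terabytes
```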
This is one of the principal reasons why the basic extended KF is infeasible for (high-dimensional) DA.
The following exercises serve to derive another, often more practical, form of the KF analysis update.
#### Exc 6 (The "Woodbury" matrix inversion identity):
For any suitably shaped matrices $\B$, $\R$, $\U$ and $\V$ such that the expressions below exist,
$$\begin{align}
\left( \B^{-1} + \V\tr \R^{-1} \U \right)^{-1}
=
\B - \B \V\tr \left( \R + \U \B \V\tr \right)^{-1} \U \B \, ,
\tag{W}
\end{align}$$
which is known as the Sherman-Morrison-Woodbury lemma/identity.
The significance of this identity is that $\U$ and $\V$ may be rectangular matrices,
meaning that the (necessarily square) $\B$ and $\R$ may have different sizes.
Thus, assuming $\R$ is of lower rank (size) than $\B$,
the term $\V\tr \R^{-1} \U$ on the left-hand-side constitutes a lower-rank "update" (addition) to $\B^{-1}$.
Thus, if the inverse ($\B$) of $\B^{-1}$ is already known,
computing the inverse of $\B^{-1} + \V\tr \R^{-1} \U$
only requires an inversion of the size of $\R$.
Prove the identity. Hint: don't derive it, just prove it!
```
#show_answer('Woodbury')
```
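The identity (W) is easy to spot-check numerically; the small matrices below are arbitrary, but chosen so that both sides exist:

```python
import numpy as np

B = 2.0 * np.eye(3)                 # SPD, 3x3
R = np.eye(2)                       # SPD, 2x2
U = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])     # 2x3 (rectangular)
V = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])     # 2x3 (rectangular)

# Left- and right-hand sides of (W)
lhs = np.linalg.inv(np.linalg.inv(B) + V.T @ np.linalg.inv(R) @ U)
rhs = B - B @ V.T @ np.linalg.inv(R + U @ B @ V.T) @ U @ B
assert np.allclose(lhs, rhs)        # (W) holds
```

Note that the inversion on the right-hand side is $2\times 2$ (the size of $\R$), while the one on the left is $3\times 3$ (the size of $\B$).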
#### Exc 8 (Corollary 1):
Prove that, for any symmetric positive-definite (SPD) matrices $\R$ and $\B$, and any matrix $\bH$,
$$\begin{align}
\left(\bH\tr \R^{-1} \bH + \B^{-1}\right)^{-1}
&=
\B - \B \bH\tr \left( \R + \bH \B \bH\tr \right)^{-1} \bH \B \tag{C1}
\, .
\end{align}$$
```
#show_answer('Woodbury C1')
```
#### Exc 10 (Corollary 2):
Prove that, for the same matrices as in Corollary C1,
$$\begin{align}
\left(\bH\tr \R^{-1} \bH + \B^{-1}\right)^{-1}\bH\tr \R^{-1}
&= \B \bH\tr \left( \R + \bH \B \bH\tr \right)^{-1}
\tag{C2}
\, .
\end{align}$$
```
#show_answer('Woodbury C2')
```
#### Exc 12 (The "gain" form of the KF):
Now, let's go back to the KF, eqns (5) and (6).
Since $\B$ and $\R$ are covariance matrices, they are symmetric positive-semidefinite.
In addition, we will assume that they are full-rank, making them SPD and invertible.
Define the Kalman gain by:
$$\begin{align}
\K &= \B \bH\tr \big(\bH \B \bH\tr + \R\big)^{-1} \, . \tag{K1}
\end{align}$$
* (a) Apply (C1) to eqn (5) to obtain the Kalman gain form of analysis/posterior covariance matrix:
$$\begin{align}
\bP &= [\I_M - \K \bH]\B \, . \tag{8}
\end{align}$$
* (b) Apply (C2) to eqn (5) to obtain the identity
$$\begin{align}
\K &= \bP \bH\tr \Ri \, . \tag{K2}
\end{align}$$
* (c) Show that $\bP \Bi = [\I_M - \K \bH]$.
* (d) Use (b) and (c) to obtain the Kalman gain form of the analysis/posterior mean
$$\begin{align}
\hat{\x} &= \bb + \K\left[\y - \bH \bb\right] \, . \tag{9}
\end{align}$$
Together, eqns (8) and (9) define the Kalman gain form of the KF update.
The inversion involved, in eqn (K1), is of the size of $\R$, while in eqn (5) it is of the size of $\B$.
## In summary:
We have derived two forms of the multivariate KF analysis update step: the "precision matrix" form, and the "Kalman gain" form. The latter is especially practical when the number of observations is smaller than the length of the state vector.
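As a final sanity check, the two forms can be verified to agree numerically on an invented toy problem:

```python
import numpy as np

B = np.array([[2.0, 0.5],
              [0.5, 1.0]])    # prior covariance (SPD)
R = np.array([[0.25]])        # observation covariance
H = np.array([[1.0, 0.0]])
b = np.array([0.0, 0.0])      # prior mean
y = np.array([1.0])           # observation

# Precision form: eqns (5) and (6)
Bi, Ri = np.linalg.inv(B), np.linalg.inv(R)
P_prec = np.linalg.inv(H.T @ Ri @ H + Bi)
x_prec = P_prec @ (H.T @ Ri @ y + Bi @ b)

# Gain form: eqns (K1), (8) and (9)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
P_gain = (np.eye(2) - K @ H) @ B
x_gain = b + K @ (y - H @ b)

assert np.allclose(P_prec, P_gain) and np.allclose(x_prec, x_gain)
```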
### Next: [Dynamical systems, chaos, Lorenz](T6 - Dynamical systems, chaos, Lorenz.ipynb)
# Seaborn
- Built on Matplotlib
- Forms cool visualizations with less coding
```
# Notebook Magic Line
# create visualizations in the notebook itself
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
sns.set(style='darkgrid') # play around with other styles
from google.colab import drive
drive.mount('/content/drive')
df = pd.read_csv("/content/drive/My Drive/Colab Notebooks/PA Projects/ML Class/Week 4/Top_AI_scientists_cleaned.csv")
df
```
# Basic Plots
## Line Chart
```
plt.rcParams["figure.figsize"] = plt.rcParamsDefault["figure.figsize"]
plt.rcParams["figure.figsize"] = (15,5)
sns.lineplot(x="World Rank", y="#DBLP",data=df)
```
## Bar Plot
```
plt.rcParams["figure.figsize"] = (20,10)
country_cnt = df['Country'].value_counts().to_dict()
countries = list(country_cnt.keys())
cnt = list(country_cnt.values())
ax = sns.barplot(x=countries,y=cnt)
ax.set_title("Country Presence")
ax.set_xticklabels(ax.get_xticklabels(), rotation=90);
```
## Histogram
```
sns.histplot(df['#DBLP'],kde=True)
```
## Boxplot
```
sns.boxplot(data=df['#DBLP']);
```
Multiple Boxplots
```
sns.boxplot(data=df[['#DBLP','H-index']],orient='horizontal');
```
## Violin Plot
```
sns.violinplot(data=df[['#DBLP','H-index']])
```
## Scatter Plots
```
sns.relplot(x="#DBLP",y="World Rank",data=df[:200],kind='scatter',height=10,aspect=2);
sns.relplot(x="Citations",y="World Rank",data=df[:200],kind='scatter',height=10,aspect=2);
sns.relplot(x="Citations",y="#DBLP",hue="Country",data=df[:200],height=10,aspect=2);
```
## Bubble Plot
```
sns.relplot(x="Citations",y="#DBLP",data=df[:200],height=10,aspect=2,size="World Rank",hue="Country");
```
## Subplots
```
sns.relplot(x="Citations",y="#DBLP",hue="World Rank",col="Country",col_wrap=2,data=df[:200],height=10,aspect=2,size="World Rank");
```
# Advanced Plots
## Categorical Scatter Plots
### Strip Plot
```
sns.catplot(x="#DBLP",y="Country",kind='strip',data=df[:200],height=10,aspect=2);
```
### Swarm Plot
```
sns.catplot(x="#DBLP",y="Country",kind='swarm',data=df[:200],height=10,aspect=2);
```
## Categorical Distribution Plots
### Box Plot
```
sns.catplot(x="Citations",y="Country",kind='box',data=df,height=10,aspect=2);
```
### Violin Plot
```
sns.catplot(x="H-index",y="Country",kind='violin',data=df,height=10,aspect=2);
```
## Categorical Estimate Plots
### Bar Plot
```
sns.catplot(x="#DBLP",y="Country",kind='bar',data=df,height=10,aspect=2);
```
## Density Plot
KDE Plot - Kernel density estimate plots
```
plt.rcParams["figure.figsize"] = (10,5)
sns.kdeplot(data=df['#DBLP'],shade=True);
plt.rcParams["figure.figsize"] = (10,5)
sns.kdeplot(data=df['Citations'],shade=True);
plt.rcParams["figure.figsize"] = (10,5)
sns.kdeplot(data=df['#DBLP'],data2=df['Citations'],shade=True);
```
## Pair Plots
can visualize multidimensional relationships
```
data = df[['#DBLP','Citations','H-index','Country','World Rank']]
data.head()
sns.pairplot(data,hue='Country', height=5);
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('/content/collegePlace.csv')
df.head()
df.info()
df.plot()
df.isna().sum()
df.Stream.unique()
plt.xticks(rotation = 90)
sns.barplot(x = df.Stream, y = df.PlacedOrNot)
plt.figure(figsize = (12,7))
sns.barplot(x = df.Age, y = df.PlacedOrNot, hue = df.Gender)
plt.figure(figsize = (7,5))
sns.countplot(x = df.Age)
plt.figure(figsize = (15,8))
sns.barplot(x='PlacedOrNot', y='CGPA', data=df, palette='gist_earth', hue='Stream')
fig, ax = plt.subplots(figsize=(10,7))
sns.countplot(data=df,x='Stream', order = df['Stream'].value_counts().index,palette='rocket',hue='PlacedOrNot')
plt.xticks(rotation=70)
plt.show()
df.Age.value_counts()
sns.barplot(x = df.Internships, y = df.PlacedOrNot)
df.Internships.value_counts()
df.CGPA.value_counts()
sns.barplot(x = df.CGPA, y = df.PlacedOrNot,palette='rocket_r')
sns.barplot(x = df.Hostel, y = df.PlacedOrNot)
correlation = df.corr()
plt.figure(figsize = (15,8))
sns.heatmap(correlation,annot = True, cmap = 'rocket')
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df.Gender = le.fit_transform(df.Gender)
df.Stream = le.fit_transform(df.Stream)
df.head()
x = df.drop(['PlacedOrNot'], axis = 1)
y = df.PlacedOrNot
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.2)
models_accuracy = {}
logr = LogisticRegression(solver='liblinear')
logr.fit(X_train, y_train)
logr_score = logr.score(X_test, y_test)
models_accuracy['Logistic Regression'] = logr_score*100
logr_score*100
y_pred = logr.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot = True)
ran_model = RandomForestClassifier(n_estimators = 40)
ran_model.fit(X_train, y_train)
ran_score = ran_model.score(X_test, y_test)
models_accuracy['RanForest'] = ran_score*100
ran_score*100
y_pred = ran_model.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot = True)
kn_model = KNeighborsClassifier(n_neighbors=5)
kn_model.fit(X_train, y_train)
kn_score = kn_model.score(X_test, y_test)
models_accuracy['Knn'] = kn_score*100
kn_score*100
y_pred = kn_model.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot = True)
```
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
from numpy.fft import fft, ifft, fftfreq
from scipy import signal
from astropy.stats import LombScargle
from nfft import ndft, nfft, ndft_adjoint, nfft_adjoint
from gatspy.periodic import LombScargleFast
import time
import pdb
plt.style.use('seaborn')
help(LombScargle.autopower)
```
## SNR for evenly spaced data
```
# sinusoidal signal
def signal_equip(N):
dt = 2 / N
t = np.linspace(0, N * dt, N)
t = t + dt
# print("min of t: ", min(t))
temp = np.zeros(N)
segment_duration = int(N/7)
freq_sin = 1 / (segment_duration * dt)
print("chosen freq is: ", freq_sin)
temp[int(N/4): int(N/4) + segment_duration] = np.sin(freq_sin * 2 * np.pi * t[:segment_duration])
return temp, freq_sin, dt, t
# set data
# np.random.seed(13241)
N = 1024 # should be even for simplicity
temp, freq_sin, dt, t = signal_equip(N)
data = np.random.normal(0, 0.6, N) + temp
plt.plot(t, data)
plt.plot(t, temp)
# obtain the PSD using the numpy.fft frequencies
def get_scipy_psd(N, dt, t, data):
freqs = fftfreq(N, d=dt)
freqs_abs = np.abs(freqs)
freqs_lomb = np.delete(freqs_abs, 0)
pgram_fft = signal.lombscargle(t, data, freqs_lomb * 2 * np.pi, normalize=True)
return freqs_lomb, pgram_fft
freqs_lomb, pgram_fft = get_scipy_psd(N, dt, t, data)
plt.plot(freqs_lomb, pgram_fft)
# the FFTs
def get_fft(N, temp, data):
dwindow = signal.tukey(len(temp), alpha=1./8)
# pgram = np.mean(pgram)
fft_d = fft(dwindow * data)
fft_t = fft(dwindow * temp)
return fft_d, fft_t
fft_d, fft_t = get_fft(N, temp, data)
freqs = fftfreq(N, d=dt)
plt.plot(freqs[:int(N/2)], np.abs(fft_t[:int(N/2)]), 'o--')
plt.xlim([0,10])
print(np.abs(fft_t[0:10]))
print(freqs[0:10])
# a quick check of Parseval's theorem
sum_time = (np.abs(temp)**2).sum()
sum_freq = (np.abs(fft_t)**2).sum()
print(sum_time, sum_freq/N)
# the sums match when the frequency-domain sum is divided by N, i.e. when the coefficients are divided by sqrt(N)
sum_freq = (np.abs(fft_t/np.sqrt(N))**2).sum()
print(sum_time, sum_freq/N)
print("\n ------------ \n")
# let's also check the inverse transform
temp_inv = ifft(fft_t)
sum_time_inv = (np.abs(temp_inv)**2).sum()
print(sum_time, sum_time_inv)
# the inverse corrects for this; on the other hand, if the inverse is fed an FFT divided by sqrt(N)
temp_inv = ifft(fft_t/np.sqrt(N))
sum_time_inv = (np.abs(temp_inv)**2).sum()
print(sum_time, sum_time_inv*N)
# it becomes necessary to multiply by sqrt(N); this is clear because constants can be pulled out of the Fourier transform.
# hence, if the end goal is to apply ifft(fft(...)), no normalization is needed
# the FFTs
def snr_equip(N, only_noise=False):
temp, freq_sin, dt, t = signal_equip(N)
if only_noise:
data = np.random.normal(0, 0.6, N)
else:
data = np.random.normal(0, 0.6, N) + temp
freqs_lomb, pgram_fft = get_scipy_psd(N, dt, t, data)
dwindow = signal.tukey(len(temp), alpha=1./8)
# pgram = np.mean(pgram)
fft_d = fft(dwindow * data)
fft_t = fft(dwindow * temp)
fft_d = np.delete(fft_d, 0) # removing the value corresponding to the 0 frequency
fft_t = np.delete(fft_t, 0)
df = 2 / (N * dt)
norm_sigma = 4 * df
h_norm = (fft_t * fft_t.conjugate() / pgram_fft).sum()
norm_corr = 4 * df / np.sqrt(h_norm.real * norm_sigma)
corr = fft_d * fft_t.conjugate() / pgram_fft
snr = ifft(corr) * norm_corr * dt * (len(fft_d) - 1)
snr = np.roll(snr, len(snr) // 2)
return t, np.abs(snr)
N = 1024
t, snr = snr_equip(N, only_noise=True)
plt.figure(2)
t = np.delete(t, len(t)-1)
plt.plot(t - t[len(t)//2], snr)
print(np.mean(snr), max(snr))
# analyze the error over many repetitions
def repeat_snr(N, n_repeat, only_noise=False):
mean_snr = []
max_snr = []
median_snr = []
std_snr = []
for i in range(n_repeat):
_, snr = snr_equip(N, only_noise=only_noise)
mean_snr.append(np.mean(snr))
median_snr.append(np.median(snr))
max_snr.append(np.max(snr))
std_snr.append(np.std(snr))
return mean_snr, median_snr, max_snr, std_snr
rep = 100
mean, median, maxx, stdd = repeat_snr(1024, rep, only_noise=True)
print(":::::SNR parameters for {} repetitions for signal of only noise:::::".format(rep))
print("mean snr over is: {} +- {}".format(np.mean(mean), np.std(mean)))
print("median snr is: {} +- {}".format(np.mean(median), np.std(median)))
print("max snr is: {} +- {}".format(np.mean(maxx),np.std(maxx)))
print("std snr is: {} +- {}".format(np.mean(stdd),np.std(stdd)))
print("\n -------------------------- \n")
mean, median, maxx, stdd = repeat_snr(1024, rep, only_noise=False)
print(":::::SNR parameters for {} repetitions for signal of noise + template:::::".format(rep))
print("mean snr over is: {} +- {}".format(np.mean(mean), np.std(mean)))
print("median snr is: {} +- {}".format(np.mean(median), np.std(median)))
print("max snr is: {} +- {}".format(np.mean(maxx),np.std(maxx)))
print("std snr is: {} +- {}".format(np.mean(stdd),np.std(stdd)))
# defining a new SNR
def snr_test(temp, data, dt, fs=1, N0=1, whitened=False, noise=None, bandPass=False, cutoff=None):
N = len(temp)
if whitened:
if noise is None:
_, Pxx = signal.welch(data, fs, noverlap=int(N * 5 / 10),
nperseg=N, return_onesided=False)
else:
_, Pxx = signal.welch(noise, fs, noverlap=int(N * 5 / 10),
nperseg=N, return_onesided=False)
else:
Pxx = 1
if bandPass:
cutoff = fs/2 - 1 if cutoff is None else cutoff
bb, ab = signal.butter(4, cutoff*2./fs, btype='lowpass')
data = signal.filtfilt(bb, ab, data)
temp = signal.filtfilt(bb, ab, temp)
fft_d = fft(data)
fft_t = fft(temp)
df = 1 / (N * dt)
sigma_square = df * (fft_t * fft_t.conjugate() / Pxx).sum()
corr = ifft(fft_d * fft_t.conjugate() / Pxx)
snr = corr / np.sqrt(sigma_square) / np.sqrt(N0/2)
return snr
N = 1024 * 4
dwindow = signal.tukey(N, alpha=1./8)
temp, freq_sin, dt, t = signal_equip(N)
temp = -temp
print(len(t), len(temp))
data_noise = np.random.normal(0, 0.2, N)
data = data_noise + temp
# window data/temp
temp = dwindow * temp
data = dwindow * data
data_noise = dwindow * data_noise
# time reverse of template
temp2 = temp
plt.figure()
plt.plot(t, temp, 'r', label="original template")
plt.plot(t, temp2, 'g', label="time reverse template")
plt.plot(t, data, 'b', alpha=0.5, label="data")
plt.legend()
# some params
fs = N / (max(t) - min(t))
whitened = True
bandPass = False
cutoff = 10
# estimate n0
snr_max_arr = []
for i in range(100):
only_noise = np.random.normal(0, 0.2, N)
snr = snr_test(only_noise, np.flip(only_noise, 0), dt, fs=fs, whitened=whitened,
noise=only_noise, bandPass=bandPass, cutoff=cutoff)
snr_max_arr.append(max(np.abs(snr)))
snr_max = np.mean(snr_max_arr)
snr_max_dev = np.std(snr_max_arr)
print("mean is ", snr_max, "and std is ", snr_max_dev)
n0 = (snr_max**2)/2
n0 = 2  # override the estimate above with a fixed value
print(n0)
snr = snr_test(data_noise, np.flip(data_noise, 0), dt, N0=n0, fs=fs, whitened=whitened,
noise=data_noise, bandPass=bandPass, cutoff=cutoff)
plt.figure()
plt.plot(t - t[N//2], np.roll(np.abs(snr.real), N//2))
print("mean value is ", np.mean(snr.real))
snr2 = snr_test(temp2, temp, dt, N0=n0, fs=fs, whitened=whitened, noise=data,
bandPass=bandPass, cutoff=cutoff)
snr3 = snr_test(temp2, data, dt, N0=n0, fs=fs, whitened=False, noise=None,
bandPass=bandPass, cutoff=cutoff)
plt.figure()
plt.plot(t - t[N//2], np.roll(snr2.real, N//2), 'r')
# plt.plot(t, snr3.real / min(snr3.real), 'g')
# match optimal to see
idx_snr_max = np.abs(snr2.real).argmax()
time_of_max = t[idx_snr_max]
print("duration of template is: ", max(t))
print("matched filter occurs when the template has peak at time ", (max(t) - time_of_max))
print("and originally the matched filter has the peak at", t[(temp).argmax()])
print("and originally the matched filter has the peak at", t[(temp).argmin()])
freq_pxx, Pxx = signal.welch(data, fs, noverlap=int(N * 5 / 10),
nperseg=N, return_onesided=False, scaling="spectrum")
power = LombScargle(t, data).power(freq_pxx, normalization='psd')
fft_d = fft(data)
T = max(t) - min(t)
plt.plot(freq_pxx, fs*Pxx, 'r')
plt.plot(freq_pxx, power, 'b')
plt.plot(fftfreq(N, d=dt), (np.abs(fft_d)**2)/N, 'g')
plt.xlim([0, 100])
freq_pxx, Pxx = signal.welch(data_noise, fs, noverlap=int(N * 5 / 10),
nperseg=N, return_onesided=False, scaling="density")
plt.plot(freq_pxx, Pxx)
E = np.sum(Pxx)
expo = np.exp((-1/2) * (1**2)/E)
expo / np.sqrt(2 * np.pi * E)
def get_data(t, f=None, n_peaks=1, amp=1, noise=None):
baseline = max(t) - min(t)
idx1 = (np.abs(t - min(t) - baseline/3)).argmin()
idx2 = (np.abs(t - min(t) - 2 * baseline/3)).argmin()
start = idx1 if idx1<idx2 else idx2
end = idx2 if idx1<idx2 else idx1
if f is None:
f = n_peaks / (t[end] - t[start])
print(start, end)
data = np.zeros(len(t))
print(t[start:end+1] - min([t[start], t[end]]))
data[start:end+1] = amp * np.sin(2 * np.pi * f * (t[start:end+1] - min([t[start], t[end]])))
if noise is None:
noise = np.random.normal(0, 0.3, len(t))
return data + noise
N = 1024 * 4
dt = 0.1
t = np.arange(N) * dt
data = get_data(t, noise=np.zeros(N))
data_reverse = get_data(max(t) - t, noise=np.zeros(N))
plt.plot(t, data)
plt.plot(max(t) - t, data, 'r')
plt.plot(t, np.flip(data, 0), 'g')
# def get_n0(repetitions=10):
N = 1024 * 4
dwindow = signal.tukey(N, alpha=1./8)
temp, freq_sin, dt, t = signal_equip(N)
temp = -temp
# time reverse of template
temp2 = np.flip(temp, 0)
print(len(t), len(temp))
# some params
fs = N / (max(t) - min(t))
whitened = True
bandPass = False
cutoff = 10
plt.figure()
for i in range(200):
data_noise = np.random.normal(0, 0.2, N)
data = data_noise + temp
snr2 = snr_test(temp2, data, dt, N0=n0, fs=fs, whitened=whitened, noise=None,
bandPass=bandPass, cutoff=cutoff)
plt.plot(t, np.abs(snr2.real), 'r', alpha=0.05)
# plt.ylim([0, 500])
```
# SNR for unevenly spaced data
```
def signal_no_equip(N, fixed=True):
# 3 parts separated in time, one with slight irregularities in time sampling
# another with change of spacing and the last one with big outlier in spacing
T = np.zeros(N)
dt_implicit = 1 / N
t0 = np.linspace(0, 2*int(N/6)-1, 2*int(N/6))
if fixed:
np.random.seed(1)
e = np.random.normal(0, dt_implicit * 0.5, 2*int(N/6))
T[0:2*int(N/6)] = t0 * dt_implicit + e
shift = 30 * dt_implicit
if fixed:
np.random.seed(2)
t0 = np.linspace(2*int(N/6), 3*int(N/6)-1, int(N/6))
e = np.random.normal(0, dt_implicit * 0.5, int(N/6))
T[2*int(N/6):3*int(N/6)] = shift + t0 * dt_implicit / 2 + e
if fixed:
np.random.seed(3)
t0 = np.linspace(3*int(N/6), 4*int(N/6)-1, int(N/6))
e = np.random.normal(0, dt_implicit * 0.5, int(N/6))
T[3*int(N/6):4*int(N/6)] = t0 * 2 * dt_implicit + e
if fixed:
np.random.seed(4)
t0 = np.linspace(4*int(N/6), N-1, N - 4*int(N/6))
e = np.random.normal(0, dt_implicit * 0.5, N - 4*int(N/6))
T[4*int(N/6):N] = 2 * shift + t0 * dt_implicit / 2 + e
T.sort()
# signal is sinusoidal again with same frequency
temp = np.zeros(N)
segment_duration = int(N/3)
init = int(N/10)
times_segment = T[init: init + segment_duration]
times_segment = times_segment - min(times_segment)
freq_sin = 2 / (max(times_segment) - min(times_segment))
# print("chosen freq is: ", freq_sin)
temp[init: init + segment_duration] = np.sin(freq_sin * 2 * np.pi * times_segment)
return temp, freq_sin, T
N = 1200
temp2, freq_sin, t2 = signal_no_equip(N, fixed=False)
print(freq_sin)
data2 = np.random.normal(0, 0.3, N) + temp2
plt.plot(t2, temp2, '.')
plt.plot(t2, data2, alpha=0.5)
# obtain the PSD letting LombScargle compute its own frequencies
df = 1 / (max(t2) - min(t2))
freqs = fftfreq(N, d=1/df)
freqs_lomb = np.delete(np.abs(freqs), 0)
frequency, power = LombScargle(t2, data2).autopower(maximum_frequency=1000, minimum_frequency=1)
pgram = signal.lombscargle(t2, data2, frequency * 2 * np.pi, normalize=True)
print("mean value of the PSD: ", np.mean(pgram), "or ", np.mean(power))
plt.figure(1)
plt.plot(frequency, pgram, "b")
plt.plot(frequency, power, 'r')
plt.axvline(freq_sin, color='k')
plt.axvline(freq_sin, color='g')
plt.xlim([0, 30])
plt.show()
# similar results are obtained, but astropy implements this in O(N log N) time while scipy takes O(N^2)
# Nf comes from -(Nf // 2) + np.arange(Nf)
def get_psd(k, t, data, min_freq=None, data_per_peak=1):
df = 1 / ((max(t) - min(t)) * data_per_peak)
if min_freq is None:
min_freq = 0.5 * df
NK = len(k)
if NK % 2 == 0: # even
N = int(NK / 2)
else:
N = int((NK-1) / 2)
max_freq = (N - 1) * df + min_freq
frequency, power = LombScargle(t, data).autopower(maximum_frequency=max_freq, minimum_frequency=min_freq,
samples_per_peak=data_per_peak)
if len(frequency) != N:
raise ValueError("frequency grid length mismatch")
return frequency, power, df
Nf = 2 * N
k = -(Nf // 2) + np.arange(Nf)
freqs, pw, df = get_psd(k, t2, data2)
plt.plot(freqs, pw, 'g')
plt.axvline(freq_sin, color='k')
# plt.xlim([0, 30])
# compute the NFFTs
def get_nfft(Nf, data, temp, t):
dwindow = signal.tukey(len(temp), alpha=1./8)
nfft_d = nfft_adjoint(t, dwindow * data, Nf)
nfft_t = nfft_adjoint(t, dwindow * temp, Nf)
k = (-(Nf // 2) + np.arange(Nf)) / (max(t) - min(t)) * (2 *np.pi)
return nfft_d, nfft_t, k
Nf = N
nfft_d, nfft_t, k = get_nfft(Nf, data2, temp2, t2)
plt.plot(k[Nf//2-1:] * df, np.abs(nfft_t[Nf//2-1:]))
plt.plot(k[Nf//2-1:] * df, np.abs(nfft_d[Nf//2-1:]))
plt.axvline(freq_sin, color='k', alpha=0.5)
print(np.abs(nfft_t[(Nf//2)+7:(Nf//2) + 20]))
print(k[(Nf//2)+7:(Nf//2) + 20])
t = np.linspace(0.1, 1.1, 100)
freqq = 1 / (max(t) - min(t))
print(freqq)
d = np.sin(2 * np.pi * freqq * t)
plt.figure()
plt.plot(t, d)
nfft_freqs = (-(2*len(t)//2) + np.arange(2 * len(t))) / (max(t) - min(t))
fft_freqs = fftfreq(len(t), d=((max(t) - min(t)) / len(t)))
fft_d_test = fft(d)
nfft_d_test = nfft_adjoint(t, d, 2*len(t))
plt.figure()
plt.plot(fft_freqs, np.abs(fft_d_test), 'bo')
plt.plot(nfft_freqs, np.abs(nfft_d_test), 'r*')
plt.xlim([-10, 10])
# erase 0 frequency
# fft_d_test = np.delete(fft_d_test, 0)
# nfft_d_test = np.delete(fft_d_test, (2 * len(t))//2)
dy = ifft(fft_d_test)
dyy = nfft(t, nfft_d_test) / (2*len(t))
plt.figure()
plt.plot(dy, 'b')
plt.plot(dyy, 'r')
nfft_d, nfft_t, k = get_nfft(Nf, data2, temp2, t2)
fft_d, fft_t = get_fft(N, data2, temp2)
plt.plot(nfft_d)
plt.plot(fft_d)
# quick check of Parseval's theorem
sum_time = (np.abs(temp2)**2).sum()
sum_freq = (np.abs(nfft_t)**2).sum()
print(sum_time, sum_freq / np.sqrt(2 * Nf))
# we see that Parseval does not hold; this was expected given that
# we use many more frequencies than time samples. For the inverse:
dwindow = signal.tukey(len(temp2), alpha=1./8)
temp_back = nfft(t2, nfft_t)
sum_time_back = (np.abs(temp_back)**2).sum()
print(sum_time, sum_time_back / (2 * Nf) )
print("-------")
# so Parseval does not hold; on the other hand, if we use a square
# matrix, i.e. the same number of frequencies as time samples, we get:
Nf_test = N
nfft_d_test, nfft_t_test, k_test = get_nfft(Nf_test, data2, temp2, t2)
sum_freq = (np.abs(nfft_t_test)**2).sum()
print(sum_time, sum_freq / N)
temp_back = nfft(t2, nfft_t_test)
sum_time_back = (np.abs(temp_back)**2).sum()
print(sum_time, sum_time_back)
# it still does not hold even with the same number of frequencies as time samples; this tells us that,
# since the sample times are unevenly spaced, there is no guarantee the transform matrix is invertible.
# Alternatively, Parseval's theorem may simply not apply in this non-uniform setting.
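# For contrast (sketch, numpy only): with a uniform-grid FFT, Parseval's
# theorem does hold exactly: sum |x[n]|^2 == (1/N) * sum |X[k]|^2.
import numpy as np
rng = np.random.default_rng(0)
x_p = rng.standard_normal(256)
X_p = np.fft.fft(x_p)
print(np.allclose((np.abs(x_p) ** 2).sum(), (np.abs(X_p) ** 2).sum() / len(x_p)))  # True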
# SNR for the unevenly spaced samples
def snr_no_equip(N, only_noise=False, fixed=False):
    temp, freq_sin, t = signal_no_equip(N, fixed=fixed)
    if only_noise:
        if fixed:
            np.random.seed(12312)
        data = np.random.normal(0, 0.3, N)
    else:
        data = np.random.normal(0, 0.3, N) + temp
    # compute the psd
    Nf = 4 * N
    k = -(Nf // 2) + np.arange(Nf)
    freqs, pw, df = get_psd(k, t, data)
    # repeat the psd to cover the negative frequencies; if Nf is even the last one is not repeated
    pw = np.append(pw, pw)
    if Nf % 2 == 0:
        pw = np.delete(pw, len(pw) - 1)
    nfft_d, nfft_t, k = get_nfft(Nf, data, temp, t)
    nfft_d = np.delete(nfft_d, 0)  # removing the value corresponding to the 0 frequency
    nfft_t = np.delete(nfft_t, 0)
    # to keep the length even, remove another frequency; this time it will be the last one
    last_one = len(pw) - 1
    nfft_d = np.delete(nfft_d, last_one)
    nfft_t = np.delete(nfft_t, last_one)
    pw = np.delete(pw, last_one)
    norm_sigma = 4 * df
    h_norm = (nfft_t * nfft_t.conjugate() / pw).sum()
    norm_corr = 4 * df / np.sqrt(h_norm.real * norm_sigma)
    corr = nfft_d * nfft_t.conjugate() / pw / (2 * Nf)
    # note: fft_d here refers to the array computed at module level above
    snr = nfft(t, corr) * norm_corr * (max(t) - min(t)) * (len(fft_d) - 1) / N
    # snr = np.roll(snr, len(snr) // 2)
    return t, np.abs(snr), data, temp
N = 200
t, snr, data, temp = snr_no_equip(N, only_noise=True, fixed=False)
plt.figure()
plt.plot(snr)
plt.figure()
plt.plot(t, temp)
plt.plot(t, data)
# let's analyze the error over many repetitions
def repeat_snr_non_unif(N, n_repeat, only_noise=False):
    mean_snr = []
    max_snr = []
    median_snr = []
    std_snr = []
    for i in range(n_repeat):
        _, snr, _, _ = snr_no_equip(N, only_noise=only_noise, fixed=False)
        mean_snr.append(np.mean(snr))
        median_snr.append(np.median(snr))
        max_snr.append(np.max(snr))
        std_snr.append(np.std(snr))
    return mean_snr, median_snr, max_snr, std_snr
# using the same number of frequencies as time samples
rep = 100
N = 1000
mean, median, maxx, stdd = repeat_snr_non_unif(N, rep, only_noise=True)
print(":::::SNR parameters for {} repetitions for signal of only noise:::::".format(rep))
print("mean snr is: {} +- {}".format(np.mean(mean), np.std(mean)))
print("median snr is: {} +- {}".format(np.mean(median), np.std(median)))
print("max snr is: {} +- {}".format(np.mean(maxx),np.std(maxx)))
print("std snr is: {} +- {}".format(np.mean(stdd),np.std(stdd)))
print("\n -------------------------- \n")
mean, median, maxx, stdd = repeat_snr_non_unif(N, rep, only_noise=False)
print(":::::SNR parameters for {} repetitions for signal of noise + template:::::".format(rep))
print("mean snr is: {} +- {}".format(np.mean(mean), np.std(mean)))
print("median snr is: {} +- {}".format(np.mean(median), np.std(median)))
print("max snr is: {} +- {}".format(np.mean(maxx),np.std(maxx)))
print("std snr is: {} +- {}".format(np.mean(stdd),np.std(stdd)))
Nf = 67
k = -(Nf // 2) + np.arange(Nf)
kk = np.abs(k)
kk.sort()
print(k)
print(kk)
print(len(np.unique(kk))-1)
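# Note (sketch, numpy only): the grid -(Nf // 2) + np.arange(Nf) is just the
# set of FFT bin indices from np.fft.fftfreq, reordered from the standard fft
# layout into ascending order.
import numpy as np
Nf_chk = 67
k_chk = -(Nf_chk // 2) + np.arange(Nf_chk)
print(np.allclose(np.sort(np.fft.fftfreq(Nf_chk) * Nf_chk), k_chk))  # True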
import scipy as sp
norm = sp.stats.norm(0, 0.2)
r = norm.rvs(1000)
plt.hist(r)
```