# Cache Dataset Tutorial and Speed Test
This tutorial shows how to accelerate a PyTorch medical deep learning program with MONAI's `CacheDataset`.
It is adapted from the Spleen 3D segmentation tutorial notebook.
```
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import glob
import time
import numpy as np
import torch
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
from monai.data import Dataset, CacheDataset
from monai.transforms import \
Compose, LoadNiftid, AddChanneld, ScaleIntensityRanged, CropForegroundd, \
RandCropByPosNegLabeld, RandAffined, Spacingd, Orientationd, ToTensord
from monai.data import list_data_collate
from monai.inferers import sliding_window_inference
from monai.networks.layers import Norm
from monai.networks.nets import UNet
from monai.losses import DiceLoss
from monai.metrics import compute_meandice
from monai.utils import set_determinism
```
## Set MSD Spleen dataset path
```
data_root = '/workspace/data/medical/Task09_Spleen'
train_images = sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz')))
train_labels = sorted(glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz')))
data_dicts = [{'image': image_name, 'label': label_name}
for image_name, label_name in zip(train_images, train_labels)]
train_files, val_files = data_dicts[:-9], data_dicts[-9:]
```
## Setup transforms for training and validation
Deterministic transforms during training:
- LoadNiftid
- AddChanneld
- Spacingd
- Orientationd
- ScaleIntensityRanged
- CropForegroundd

Non-deterministic transforms:
- RandCropByPosNegLabeld
- ToTensord

All the validation transforms are deterministic.
The results of all the deterministic transforms will be cached to accelerate training.
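The caching pattern can be sketched in plain Python (this is an illustrative stand-in, not MONAI API): deterministic preprocessing is computed once at construction time, while random transforms still run on every access.

```python
import random

class ToyCacheDataset:
    """Minimal stand-in for the CacheDataset idea."""
    def __init__(self, data, deterministic, rand_transform):
        # pay the deterministic preprocessing cost once, up front
        self.cache = [deterministic(d) for d in data]
        self.rand_transform = rand_transform

    def __len__(self):
        return len(self.cache)

    def __getitem__(self, idx):
        # only the cheap random part runs per epoch
        return self.rand_transform(self.cache[idx])

ds = ToyCacheDataset(
    data=[1, 2, 3],
    deterministic=lambda v: v * 10,                     # stands in for load / resample / scale
    rand_transform=lambda v: v + random.randint(0, 1),  # stands in for the random crop
)
sample = ds[0]
```

Repeated indexing into `ds` re-runs only the random transform, which is why caching pays off most when the deterministic prefix of the pipeline dominates the cost.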
```
def transformations():
train_transforms = Compose([
LoadNiftid(keys=['image', 'label']),
AddChanneld(keys=['image', 'label']),
Spacingd(keys=['image', 'label'], pixdim=(1.5, 1.5, 2.), interp_order=('bilinear', 'nearest')),
Orientationd(keys=['image', 'label'], axcodes='RAS'),
ScaleIntensityRanged(keys=['image'], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
CropForegroundd(keys=['image', 'label'], source_key='image'),
# randomly crop out patch samples from big image based on pos / neg ratio
# the image centers of negative samples must be in valid image area
RandCropByPosNegLabeld(keys=['image', 'label'], label_key='label', size=(96, 96, 96), pos=1,
neg=1, num_samples=4, image_key='image', image_threshold=0),
ToTensord(keys=['image', 'label'])
])
val_transforms = Compose([
LoadNiftid(keys=['image', 'label']),
AddChanneld(keys=['image', 'label']),
Spacingd(keys=['image', 'label'], pixdim=(1.5, 1.5, 2.), interp_order=('bilinear', 'nearest')),
Orientationd(keys=['image', 'label'], axcodes='RAS'),
ScaleIntensityRanged(keys=['image'], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
CropForegroundd(keys=['image', 'label'], source_key='image'),
ToTensord(keys=['image', 'label'])
])
return train_transforms, val_transforms
```
## Define a typical PyTorch training process
```
def train_process(train_ds, val_ds):
# use batch_size=2 to load images and use RandCropByPosNegLabeld
# to generate 2 x 4 images for network training
train_loader = DataLoader(train_ds, batch_size=2, shuffle=True, num_workers=4, collate_fn=list_data_collate)
val_loader = DataLoader(val_ds, batch_size=1, num_workers=4)
device = torch.device('cuda:0')
model = UNet(dimensions=3, in_channels=1, out_channels=2, channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2), num_res_units=2, norm=Norm.BATCH).to(device)
loss_function = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-4)
epoch_num = 2
val_interval = 1 # do validation for every epoch
best_metric = -1
best_metric_epoch = -1
epoch_loss_values = list()
metric_values = list()
epoch_times = list()
total_start = time.time()
for epoch in range(epoch_num):
epoch_start = time.time()
print('-' * 10)
print('epoch {}/{}'.format(epoch + 1, epoch_num))
model.train()
epoch_loss = 0
step = 0
for batch_data in train_loader:
step_start = time.time()
step += 1
inputs, labels = batch_data['image'].to(device), batch_data['label'].to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_function(outputs, labels)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
print('{}/{}, train_loss: {:.4f}, step time: {:.4f}'.format(
step, len(train_ds) // train_loader.batch_size, loss.item(), time.time() - step_start))
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print('epoch {} average loss: {:.4f}'.format(epoch + 1, epoch_loss))
if (epoch + 1) % val_interval == 0:
model.eval()
with torch.no_grad():
metric_sum = 0.
metric_count = 0
for val_data in val_loader:
val_inputs, val_labels = val_data['image'].to(device), val_data['label'].to(device)
roi_size = (160, 160, 160)
sw_batch_size = 4
val_outputs = sliding_window_inference(val_inputs, roi_size, sw_batch_size, model)
value = compute_meandice(y_pred=val_outputs, y=val_labels, include_background=False,
to_onehot_y=True, mutually_exclusive=True)
metric_count += len(value)
metric_sum += value.sum().item()
metric = metric_sum / metric_count
metric_values.append(metric)
if metric > best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(model.state_dict(), 'best_metric_model.pth')
print('saved new best metric model')
print('current epoch: {} current mean dice: {:.4f} best mean dice: {:.4f} at epoch {}'.format(
epoch + 1, metric, best_metric, best_metric_epoch))
        print('epoch {} time: {:.4f}'.format(epoch + 1, time.time() - epoch_start))
epoch_times.append(time.time() - epoch_start)
print('train completed, best_metric: {:.4f} at epoch: {}, total time: {:.4f}'.format(
best_metric, best_metric_epoch, time.time() - total_start))
return epoch_num, time.time() - total_start, epoch_loss_values, metric_values, epoch_times
```
## Enable deterministic training and define regular Datasets
```
set_determinism(seed=0)
train_trans, val_trans = transformations()
train_ds = Dataset(data=train_files, transform=train_trans)
val_ds = Dataset(data=val_files, transform=val_trans)
```
## Train with regular Dataset
```
epoch_num, total_time, epoch_loss_values, metric_values, epoch_times = train_process(train_ds, val_ds)
print('Total training time of {} epochs with regular Dataset: {:.4f}'.format(epoch_num, total_time))
```
## Enable deterministic training and define Cache Datasets
```
set_determinism(seed=0)
train_trans, val_trans = transformations()
cache_init_start = time.time()
cache_train_ds = CacheDataset(data=train_files, transform=train_trans, cache_rate=1.0, num_workers=4)
cache_val_ds = CacheDataset(data=val_files, transform=val_trans, cache_rate=1.0, num_workers=4)
cache_init_time = time.time() - cache_init_start
```
## Train with Cache Dataset
```
epoch_num, cache_total_time, cache_epoch_loss_values, cache_metric_values, cache_epoch_times = \
train_process(cache_train_ds, cache_val_ds)
print('Total training time of {} epochs with CacheDataset: {:.4f}'.format(epoch_num, cache_total_time))
```
## Plot training loss and validation metrics
```
plt.figure('train', (12, 12))
plt.subplot(2, 2, 1)
plt.title('Regular Epoch Average Loss')
x = [i + 1 for i in range(len(epoch_loss_values))]
y = epoch_loss_values
plt.xlabel('epoch')
plt.grid(alpha=0.4, linestyle=':')
plt.plot(x, y, color='red')
plt.subplot(2, 2, 2)
plt.title('Regular Val Mean Dice')
x = [i + 1 for i in range(len(metric_values))]
y = metric_values
plt.xlabel('epoch')
plt.grid(alpha=0.4, linestyle=':')
plt.plot(x, y, color='red')
plt.subplot(2, 2, 3)
plt.title('Cache Epoch Average Loss')
x = [i + 1 for i in range(len(cache_epoch_loss_values))]
y = cache_epoch_loss_values
plt.xlabel('epoch')
plt.grid(alpha=0.4, linestyle=':')
plt.plot(x, y, color='green')
plt.subplot(2, 2, 4)
plt.title('Cache Val Mean Dice')
x = [i + 1 for i in range(len(cache_metric_values))]
y = cache_metric_values
plt.xlabel('epoch')
plt.grid(alpha=0.4, linestyle=':')
plt.plot(x, y, color='green')
plt.show()
```
## Plot total time and per-epoch time
```
plt.figure('train', (12, 6))
plt.subplot(1, 2, 1)
plt.title('Total Train Time ({} epochs)'.format(epoch_num))
plt.bar('regular', total_time, 1, label='Regular Dataset', color='red')
plt.bar('cache', cache_init_time + cache_total_time, 1, label='Cache Dataset', color='green')
plt.bar('cache', cache_init_time, 1, label='Cache Init', color='orange')
plt.ylabel('secs')
plt.grid(alpha=0.4, linestyle=':')
plt.legend(loc='best')
plt.subplot(1, 2, 2)
plt.title('Epoch Time')
x = [i + 1 for i in range(len(epoch_times))]
plt.xlabel('epoch')
plt.ylabel('secs')
plt.plot(x, epoch_times, label='Regular Dataset', color='red')
plt.plot(x, cache_epoch_times, label='Cache Dataset', color='green')
plt.grid(alpha=0.4, linestyle=':')
plt.legend(loc='best')
plt.show()
```
# Random Signals
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Auto Power Spectral Density
The (auto-) [power spectral density](https://en.wikipedia.org/wiki/Spectral_density#Power_spectral_density) (PSD) is defined as the Fourier transformation of the [auto-correlation function](correlation_functions.ipynb) (ACF).
### Definition
For a continuous-amplitude, real-valued, wide-sense stationary (WSS) random signal $x[k]$ the PSD is given as
\begin{equation}
\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \mathcal{F}_* \{ \varphi_{xx}[\kappa] \},
\end{equation}
where $\mathcal{F}_* \{ \cdot \}$ denotes the [discrete-time Fourier transformation](https://en.wikipedia.org/wiki/Discrete-time_Fourier_transform) (DTFT) and $\varphi_{xx}[\kappa]$ the ACF of $x[k]$. Note that the DTFT is performed with respect to $\kappa$. The ACF of a random signal of finite length $N$ can be expressed by way of a linear convolution
\begin{equation}
\varphi_{xx}[\kappa] = \frac{1}{N} \cdot x_N[k] * x_N[-k].
\end{equation}
Taking the DTFT of the left- and right-hand side results in
\begin{equation}
\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{N} \, X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})\, X_N(\mathrm{e}^{-\,\mathrm{j}\,\Omega}) =
\frac{1}{N} \, | X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) |^2.
\end{equation}
The last equality results from the definition of the magnitude and the symmetry of the DTFT for real-valued signals. The spectrum $X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ quantifies the amplitude density of the signal $x_N[k]$. It can be concluded from above result that the PSD quantifies the squared amplitude or power density of a random signal. This explains the term power spectral density.
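As a quick numerical sanity check (a sketch outside the original notebook), the linear-convolution form of the biased ACF estimate above agrees with a direct correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
x = rng.normal(size=N)

# biased ACF estimate written as the linear convolution x_N[k] * x_N[-k]
acf_conv = np.convolve(x, x[::-1]) / N
# ...and written as a direct correlation, as used later in this notebook
acf_corr = np.correlate(x, x, mode='full') / N

assert np.allclose(acf_conv, acf_corr)  # same 2N-1 lag values
```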
### Properties
The properties of the PSD can be deduced from the properties of the ACF and the DTFT as:
1. From the link between the PSD $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ and the spectrum $X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ derived above it can be concluded that the PSD is real valued
$$\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \in \mathbb{R}$$
2. From the even symmetry $\varphi_{xx}[\kappa] = \varphi_{xx}[-\kappa]$ of the ACF it follows that
$$ \Phi_{xx}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \Phi_{xx}(\mathrm{e}^{\,-\mathrm{j}\, \Omega}) $$
3. The PSD of an uncorrelated random signal is given as
$$ \Phi_{xx}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = (\sigma_x^2 + \mu_x^2) \cdot {\bot \!\! \bot \!\! \bot}\left( \frac{\Omega}{2 \pi} \right) ,$$
which can be deduced from the [ACF of an uncorrelated signal](correlation_functions.ipynb#Properties).
4. The quadratic mean of a random signal is given as
$$ E\{ x[k]^2 \} = \varphi_{xx}[\kappa=0] = \frac{1}{2\pi} \int\limits_{-\pi}^{\pi} \Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\, \Omega}) \,\mathrm{d} \Omega $$
The last relation can be found by expressing the ACF via the inverse DTFT of $\Phi_{xx}$ and considering that $\mathrm{e}^{\mathrm{j} \Omega \kappa} = 1$ when evaluating the integral for $\kappa=0$.
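Property 4 has a direct discrete counterpart via Parseval's relation for the DFT: the quadratic mean of a finite signal equals the average of its periodogram over all frequency bins, mirroring the $\frac{1}{2\pi}$-normalized integral over $\Omega$. A quick numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
x = rng.normal(size=N)

# N-point periodogram as a discrete stand-in for the PSD
psd = np.abs(np.fft.fft(x)) ** 2 / N

# quadratic mean = average of the PSD over all N frequency bins
assert np.allclose(np.mean(x ** 2), np.mean(psd))
```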
### Example - Power Spectral Density of a Speech Signal
In this example the PSD $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j} \,\Omega})$ of a speech signal of length $N$ is estimated by applying a discrete Fourier transformation (DFT) to its ACF. For a better interpretation of the PSD, the frequency axis $f = \frac{\Omega}{2 \pi} \cdot f_s$ has been chosen for illustration, where $f_s$ denotes the sampling frequency of the signal. The speech signal constitutes a recording of the vowel 'o' spoken by a German male speaker, loaded into the variable `x`.
In Python the ACF is stored in a vector with indices $0, 1, \dots, 2N - 2$ corresponding to the lags $\kappa = (0, 1, \dots, 2N - 2)^\mathrm{T} - (N-1)$. When computing the discrete Fourier transform (DFT) of the ACF numerically by the fast Fourier transform (FFT) one has to take this shift into account. For instance, by multiplying the DFT $\Phi_{xx}[\mu]$ by $\mathrm{e}^{\mathrm{j} \mu \frac{2 \pi}{2N - 1} (N-1)}$.
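The phase-factor compensation described above is equivalent to rotating the ACF array so that lag $\kappa = 0$ sits at index 0 before the FFT; either way, the result equals the periodogram $\frac{1}{N} |X_N|^2$ on the $(2N-1)$-point grid. A small numerical check (not part of the original notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
x = rng.normal(size=N)
L = 2 * N - 1

# biased ACF estimate; index j corresponds to lag j - (N - 1)
acf = np.correlate(x, x, mode='full') / N

# option 1: FFT, then multiply by the phase factor from the text
psd_shifted = np.fft.fft(acf) * np.exp(1j * np.arange(L) * 2 * np.pi * (N - 1) / L)
# option 2: rotate lag 0 to index 0 first, then FFT
psd_rolled = np.fft.fft(np.roll(acf, -(N - 1)))
# both equal the periodogram |X_N|^2 / N on the L-point grid
periodogram = np.abs(np.fft.fft(x, L)) ** 2 / N

assert np.allclose(psd_shifted, psd_rolled)
assert np.allclose(psd_rolled.real, periodogram)
```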
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
%matplotlib inline
# read audio file
fs, x = wavfile.read('../data/vocal_o_8k.wav')
x = np.asarray(x, dtype=float)
N = len(x)
# compute ACF
acf = 1/N * np.correlate(x, x, mode='full')
# compute PSD
psd = np.fft.fft(acf)
psd = psd * np.exp(1j*np.arange(2*N-1)*2*np.pi*(N-1)/(2*N-1))
f = np.fft.fftfreq(2*N-1, d=1/fs)
# plot PSD
plt.figure(figsize = (10, 4))
plt.plot(f, np.real(psd))
plt.title('Estimated power spectral density')
plt.ylabel(r'$\hat{\Phi}_{xx}(e^{j \Omega})$')
plt.xlabel(r'$f / Hz$')
plt.axis([0, 500, 0, 1.1*max(np.abs(psd))])
plt.grid()
```
**Exercise**
* What does the PSD tell you about the average spectral contents of a speech signal?
Solution: The speech signal exhibits a harmonic structure with a dominant fundamental frequency $f_0 \approx 100$ Hz and a number of harmonics $f_n \approx n \cdot f_0$ for $n > 0$. This is because vowels are random signals which are, to a good approximation, periodic. To generate vowels, the sound produced by the periodically vibrating vocal folds is filtered by the resonance volumes and articulators above the voice box. The spectrum of a periodic signal is a line spectrum.
## Cross-Power Spectral Density
The cross-power spectral density is defined as the Fourier transformation of the [cross-correlation function](correlation_functions.ipynb#Cross-Correlation-Function) (CCF).
### Definition
For two continuous-amplitude, real-valued, wide-sense stationary (WSS) random signals $x[k]$ and $y[k]$, the cross-power spectral density is given as
\begin{equation}
\Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \mathcal{F}_* \{ \varphi_{xy}[\kappa] \},
\end{equation}
where $\varphi_{xy}[\kappa]$ denotes the CCF of $x[k]$ and $y[k]$. Note again, that the DTFT is performed with respect to $\kappa$. The CCF of two random signals of finite length $N$ and $M$ can be expressed by way of a linear convolution
\begin{equation}
\varphi_{xy}[\kappa] = \frac{1}{N} \cdot x_N[k] * y_M[-k].
\end{equation}
Note that the chosen $\frac{1}{N}$-averaging convention corresponds to the length of signal $x$. If $N \neq M$, care should be taken in interpreting this normalization. In the case $N=M$, the $\frac{1}{N}$-averaging yields a [biased estimator](https://en.wikipedia.org/wiki/Bias_of_an_estimator) of the CCF, which, for consistency, should be denoted $\hat{\varphi}_{xy,\mathrm{biased}}[\kappa]$.
Taking the DTFT of the left- and right-hand side from above cross-correlation results in
\begin{equation}
\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{N} \, X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})\, Y_M(\mathrm{e}^{-\,\mathrm{j}\,\Omega}).
\end{equation}
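This identity can be checked numerically on a common $L = N + M - 1$ point DFT grid; the sketch below (not part of the original notebook) uses `np.roll` to compensate the lag shift of $M-1$ samples in the `np.correlate` output:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 9
x = rng.normal(size=N)
y = rng.normal(size=M)
L = N + M - 1

# biased CCF estimate; index j corresponds to lag j - (M - 1)
ccf = np.correlate(x, y, mode='full') / N
# rotate lag 0 to index 0, then take the DFT on the L-point grid
Phi_xy = np.fft.fft(np.roll(ccf, -(M - 1)))
# the identity above: Phi_xy = (1/N) * X_N * conj(Y_M)
reference = np.fft.fft(x, L) * np.conj(np.fft.fft(y, L)) / N

assert np.allclose(Phi_xy, reference)
```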
### Properties
1. The symmetries of $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ can be derived from the symmetries of the CCF and the DTFT as
$$ \underbrace {\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega}) = \Phi_{xy}^*(\mathrm{e}^{-\,\mathrm{j}\, \Omega})}_{\varphi_{xy}[\kappa] \in \mathbb{R}} =
\underbrace {\Phi_{yx}(\mathrm{e}^{\,- \mathrm{j}\, \Omega}) = \Phi_{yx}^*(\mathrm{e}^{\,\mathrm{j}\, \Omega})}_{\varphi_{yx}[-\kappa] \in \mathbb{R}},$$
from which $|\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega})| = |\Phi_{yx}(\mathrm{e}^{\,\mathrm{j}\, \Omega})|$ can be concluded.
2. The cross PSD of two uncorrelated random signals is given as
$$ \Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \mu_x^2 \mu_y^2 \cdot {\bot \!\! \bot \!\! \bot}\left( \frac{\Omega}{2 \pi} \right) $$
which can be deduced from the CCF of an uncorrelated signal.
### Example - Cross-Power Spectral Density
The following example estimates and plots the cross PSD $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ of two random signals $x_N[k]$ and $y_M[k]$ of finite lengths $N = 64$ and $M = 512$.
```
N = 64 # length of x
M = 512 # length of y
# generate two uncorrelated random signals
np.random.seed(1)
x = 2 + np.random.normal(size=N)
y = 3 + np.random.normal(size=M)
N = len(x)
M = len(y)
# compute cross PSD via CCF
acf = 1/N * np.correlate(x, y, mode='full')
psd = np.fft.fft(acf)
# compensate the lag shift of M-1 samples on the (N+M-1)-point grid
psd = psd * np.exp(1j*np.arange(N+M-1)*2*np.pi*(M-1)/(N+M-1))
psd = np.fft.fftshift(psd)
Om = 2*np.pi * np.arange(0, N+M-1) / (N+M-1)
Om = Om - np.pi
# plot results
plt.figure(figsize=(10, 4))
plt.stem(Om, np.abs(psd), basefmt='C0:', use_line_collection=True)
plt.title('Biased estimator of cross power spectral density')
plt.ylabel(r'$|\hat{\Phi}_{xy}(e^{j \Omega})|$')
plt.xlabel(r'$\Omega$')
plt.grid()
```
**Exercise**
* What does the cross PSD $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega})$ tell you about the statistical properties of the two random signals?
Solution: The cross PSD $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega})$ is essentially non-zero only for $\Omega=0$. It can hence be concluded that the two random signals are uncorrelated with each other but not mean-free.
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
# Politeness strategies in MT-mediated communication
In this notebook, we demo how to extract politeness strategies using ConvoKit's `PolitenessStrategies` module both in English and in Chinese. We will make use of this functionality to assess the degree to which politeness strategies are preserved in machine-translated texts.
The politeness strategies considered are adapted from operationalizations in the following papers:
- English: [A computational approach to politeness with application to social factors](https://www.cs.cornell.edu/~cristian/Politeness.html), [The politeness Package: Detecting Politeness in Natural Language](https://journal.r-project.org/archive/2018/RJ-2018-079/RJ-2018-079.pdf)
- Chinese: [Studying Politeness across Cultures using English Twitter and Mandarin Weibo](https://dl.acm.org/doi/abs/10.1145/3415190)
```
import os
from collections import defaultdict, Counter
from tqdm import tqdm
import pandas as pd
import numpy as np
from scipy.stats import pearsonr
import spacy
from convokit import Corpus, Speaker, Utterance, download
from convokit import TextParser, PolitenessStrategies
import seaborn as sns
from matplotlib import pyplot as plt
%matplotlib inline
```
## 1. Preparing diagnostic test sets
We sample utterances from Wikipedia Talkpages discussions in both English and Chinese. In particular, we use the medium-sized `wiki-corpus` shipped by ConvoKit as the source for sampling English utterances (as shown below), and we sampled a subset of utterances from [WikiConv](https://www.cs.cornell.edu/~cristian/index_files/wikiconv-conversation-corpus.pdf) (Chinese) as shared in [figshare](https://figshare.com/articles/dataset/WikiConv_-_Chinese/7376012).
For those who would like to skip the preparatory steps and go straight to our analysis exploring how to assess the permeability of politeness signals in machine-translated communication ([Part 2 of this notebook](#2.-Computing-permeability-for-politeness-strategies)), we have made the sampled corpora directly downloadable via ConvoKit as `wiki-sampled-en-corpus` and `wiki-sampled-zh-corpus`.
### 1.1. English data: `wiki-corpus`
The medium-sized Wikipedia dataset is provided by ConvoKit as `wiki-corpus` ([documentation](https://convokit.cornell.edu/documentation/wiki.html)). Note that ConvoKit also offers a more complete collection of Wikipedia Talkpage discussions: [the Cornell Wikiconv Dataset](https://convokit.cornell.edu/documentation/wikiconv.html). We choose to use `wiki-corpus` as it is already sufficiently large for our purpose.
To load the corpus, see options in the cell below.
```
# OPTION 1: DOWNLOAD CORPUS
# UNCOMMENT THESE LINES TO DOWNLOAD CORPUS
# DATA_DIR = '<YOUR DIRECTORY>'
# WIKI_ROOT_DIR = download('wiki-corpus', data_dir=DATA_DIR)
# OPTION 2: READ PREVIOUSLY-DOWNLOADED CORPUS FROM DISK
# UNCOMMENT THIS LINE AND REPLACE WITH THE DIRECTORY WHERE THE WIKI-CORPUS IS LOCATED
# WIKI_ROOT_DIR = '<YOUR DIRECTORY>'
corpus = Corpus(filename=WIKI_ROOT_DIR)
# load parses
corpus.load_info('utterance',['parsed'])
# Overall stats of the dataset
corpus.print_summary_stats()
```
#### Extracting strategies for sampling
If a corpus has not been dependency-parsed, it needs to go through an additional parsing step, which can be achieved via `TextParser`. See [this demo](https://github.com/CornellNLP/Cornell-Conversational-Analysis-Toolkit/blob/master/examples/politeness-strategies/politeness_demo.ipynb) for an example. Since the `wiki-corpus` ships with pre-computed dependency parses (which we have already loaded, as you may have noticed), we can go straight to politeness strategy extraction.
Here, we will focus on a set of local strategies, and hence specify the strategy collection _politeness_local_ for extraction. For other available options, refer to the [documentation](https://convokit.cornell.edu/documentation/politenessStrategies.html) for details.
```
ps_local = PolitenessStrategies(strategy_collection="politeness_local", verbose=10000)
# By default, strategy extraction results are saved under "politeness_strategies".
corpus = ps_local.transform(corpus, markers=True)
```
#### Computing strategy prevalence
We can first take a glance at utterance-level strategy prevalence, i.e., the proportion of utterances in the dataset that use each politeness strategy. This can be easily done using `summarize()`.
```
df_prevalence = ps_local.summarize(corpus)
df_prevalence
```
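For intuition, prevalence here amounts to the column-wise mean of a binary utterance-by-strategy matrix; a toy sketch with made-up values (the strategy names and counts below are purely illustrative):

```python
import pandas as pd

# hypothetical binary use matrix: rows are utterances, columns are strategies
df = pd.DataFrame({'Gratitude': [1, 0, 1, 1], 'Apology': [0, 0, 1, 0]})
prevalence = df.mean()  # fraction of utterances using each strategy
```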
#### Sampling
To assess the permeability of these strategies, we sample 1000 instances for each strategy. The results are saved to a smaller `wiki-sampled-en` corpus, which may be downloaded directly via ConvoKit if one wants to skip the intermediate steps (which take a while to run); see [Part 2 of this notebook](#2.-Computing-permeability-for-politeness-strategies).
```
# utterance-level strategy uses
df_feat = pd.DataFrame.from_dict({utt.id: utt.meta['politeness_strategies'] \
for utt in corpus.iter_utterances()}, orient='index')
# sampling from least common to most
sorted_strategies = df_prevalence.sort_values().index
sampled_ids, samples = set(), []
for k in sorted_strategies:
df_sample = df_feat[(~df_feat.index.isin(sampled_ids)) & (df_feat[k]==1)].sample(1000, random_state=42)
df_sample['strategy'] = k
samples.append(df_sample[['strategy']])
sampled_ids.update(df_sample.index)
df_en_sample = pd.concat(samples)
# saving as a convokit corpus
for i, info in df_en_sample.itertuples():
utt = corpus.get_utterance(i)
utt.add_meta('selected', True)
utt.add_meta('strategy', info)
# filter only selected utterances
# (note that this does not maintain conversation structure)
wiki_sampled_en = corpus.filter_utterances_by(lambda utt:'selected' in utt.meta and utt.meta['selected'])
```
#### Translating
To determine the degree to which politeness markers are preserved in translation, we compare original and translated texts. To set this up, we perform two rounds of translation, forming an English -> Chinese -> English loop: we first translate the English texts into Chinese, and then translate the Chinese translations back into English.
We use [EasyNMT](https://github.com/UKPLab/EasyNMT) to perform translations between English and Chinese, using models from [Opus-MT](https://github.com/Helsinki-NLP/Opus-MT) from [Helsinki-NLP](https://blogs.helsinki.fi/language-technology/).
```
from easynmt import EasyNMT
# texts to be translated
df_utts = wiki_sampled_en.get_utterances_dataframe(exclude_meta=True)
# translation model
model = EasyNMT('opus-mt', cache_folder="/belafonte_sauna/liye_translations/easynmt/")
df_utts['en-zh'] = model.translate(list(df_utts['text']), \
target_lang='zh', \
source_lang='en', \
show_progress_bar=True,
batch_size=8, \
perform_sentence_splitting=False)
df_utts['en-back'] = model.translate(list(df_utts['en-zh']), \
target_lang='en', \
source_lang='zh', \
show_progress_bar=True,
batch_size=8, \
perform_sentence_splitting=False)
```
We add these translated texts as metadata to our sampled corpus, and parse them in preparation for the later strategy extraction.
```
from convokit.text_processing.textParser import TextParser
for row in df_utts[['text', 'en-zh', 'en-back']].itertuples():
idx, trans, backtrans = row[0], row[2], row[3]
utt = wiki_sampled_en.get_utterance(idx)
utt.add_meta('en-zh', trans)
utt.add_meta('en-back', backtrans)
# parser to parse back-translated English texts
en_parser = TextParser(output_field='en_parsed', input_field='en-back', \
verbosity=5000)
# parser to parse translated texts in Chinese
spacy_zh = spacy.load('zh_core_web_sm', disable=['ner'])
zh_parser = TextParser(output_field='zh_parsed', input_field='en-zh', \
spacy_nlp=spacy_zh, verbosity=5000)
wiki_sampled_en = en_parser.transform(wiki_sampled_en)
wiki_sampled_en = zh_parser.transform(wiki_sampled_en)
# We can then save the corpus using wiki_sampled_en.dump(YOUR_OUT_DIR)
```
### 1.2 Chinese data: [WikiConv](https://www.cs.cornell.edu/~cristian/index_files/wikiconv-conversation-corpus.pdf)
For the Chinese data, we start with utterances from [WikiConv](https://figshare.com/articles/dataset/WikiConv_-_Chinese/7376012) and similarly sampled 1000 instances for a subset of strategies from the collection "_politeness-cscw-zh_". The corpus is saved as `wiki-sampled-zh-corpus`, with all textual data (i.e., both the original utterance texts and the corresponding translations) tokenized and parsed.
```
wiki_sampled_zh = Corpus(download('wiki-sampled-zh-corpus'))
# Inspect the metadata available; it should contain the following:
# 'parsed' contains the dependency parses for the utterance text
# 'zh-en' and 'zh-back' contain the translations and back-translations of the utterance texts, respectively
# 'en_parsed' and 'zh_parsed' contain the respective parses, which we will use for strategy extraction
wiki_sampled_zh.meta_index
```
## 2. Computing permeability for politeness strategies
With the two sampled datasets tokenized and parsed, we are now ready to assess the degree to which strategies are preserved vs. lost in different translation directions.
We make two types of comparisons:
* First, we consider a direct comparison between the original vs. translated texts. In particular, we check strategies used in utterances in English texts and Chinese texts with respective politeness strategy operationalizations to make comparisons.
* Second, we consider comparing the original vs. the backtranslated texts using the same strategy operationalization and compare strategies detected.
```
# Download the data if Part 1 of the notebook is skipped
# replace with where you'd like the corpora to be saved
DATA_DIR = '/belafonte_sauna/liye_translations/convokit_mt/test/'
wiki_sampled_en = Corpus(download('wiki-sampled-en-corpus', data_dir=DATA_DIR))
wiki_sampled_zh = Corpus(download('wiki-sampled-zh-corpus', data_dir=DATA_DIR))
wiki_sampled_en.print_summary_stats()
wiki_sampled_zh.print_summary_stats()
```
### Extracting strategies
As a first step, we extract strategies for all translations and back-translations. We will need two politeness strategy transformers:
* for texts in English, we will again use the strategy collection _politeness_local_
* for texts in Chinese, we will be using the strategy collection _politeness-cscw-zh_.
More details of different politeness strategy collections can be found at the [documentation page]( https://convokit.cornell.edu/documentation/politenessStrategies.html).
```
ps_zh = PolitenessStrategies(parse_attribute_name='zh_parsed', \
strategy_attribute_name="zh_strategies", \
strategy_collection="politeness_cscw_zh",
verbose=5000)
ps_en = PolitenessStrategies(parse_attribute_name='en_parsed', \
strategy_attribute_name="en_strategies", \
strategy_collection="politeness_local",
verbose=5000)
# extracting for English samples
wiki_sampled_en = ps_zh.transform(wiki_sampled_en)
wiki_sampled_en = ps_en.transform(wiki_sampled_en)
# extracting for Chinese samples
wiki_sampled_zh = ps_zh.transform(wiki_sampled_zh)
wiki_sampled_zh = ps_en.transform(wiki_sampled_zh)
```
### Making comparisons
We define the permeability of a politeness strategy _s_ as the percentage of utterances in a given collection containing markers of _s_ for which the translated version also contains (potentially different) markers from the same set.
As mentioned earlier, we estimate permeability with both translations and back-translations. Note that each approach has its own limitations; both are thus at best _proxies_ for strategy permeability and should not be read as ground-truth values.
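The permeability computation itself reduces to a conditional fraction; a hedged sketch (the function and argument names here are illustrative, not ConvoKit API):

```python
def permeability(orig_has_marker, trans_has_marker):
    """Fraction of utterances containing a strategy marker whose
    translated version also contains a marker from the same set."""
    pairs = [(o, t) for o, t in zip(orig_has_marker, trans_has_marker) if o]
    return sum(t for _, t in pairs) / len(pairs)

# 4 utterances use the strategy; it survives translation in 3 of them
result = permeability([1, 1, 0, 1, 1], [1, 0, 1, 1, 1])  # → 0.75
```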
```
# Mapping between strategy names in different collections
# Note that the collections are not exactly equivalent,
# i.e., there are strategies we can't find a close match between the two collections
en2zh = {'Actually': 'factuality',
'Adverb.Just': None,
'Affirmation': 'praise',
'Apology': 'apologetic',
'By.The.Way': 'indirect_btw',
'Conj.Start': 'start_so',
'Filler': None,
'For.Me': None,
'For.You': None,
'Gratitude': 'gratitude',
'Greeting':'greeting',
'Hedges':'hedge',
'Indicative':'can_you',
'Please': 'please',
'Please.Start': 'start_please',
'Reassurance': None,
'Subjunctive': 'could_you',
'Swearing': 'taboo'
}
zh2en = {v:k for k,v in en2zh.items() if v}
# add utterance-level assessing result to utterance metadata for the English corpus
for utt in wiki_sampled_en.iter_utterances():
# strategy names in English and Chinese
en_name = utt.retrieve_meta('strategy')
zh_name = en2zh[en_name]
# translations
if zh_name:
trans_status = utt.retrieve_meta('zh_strategies')[zh_name]
utt.add_meta('translation_result', trans_status)
else:
# when a comparison isn't applicable, we use the value -1
utt.add_meta('translation_result', -1)
# back translations
backtrans_status = utt.retrieve_meta('en_strategies')[en_name]
utt.add_meta('backtranslation_result', backtrans_status)
# add utterance-level assessing result to utterance metadata for the Chinese corpus
for utt in wiki_sampled_zh.iter_utterances():
    # strategy names in English and Chinese
    zh_name = utt.retrieve_meta('strategy')
    en_name = zh2en[zh_name]
    # translations
    if en_name:
        trans_status = utt.retrieve_meta('en_strategies')[en_name]
        utt.add_meta('translation_result', trans_status)
    # back translations
    backtrans_status = utt.retrieve_meta('zh_strategies')[zh_name]
    utt.add_meta('backtranslation_result', backtrans_status)
```
We can then export these utterance-level assessment results to pandas DataFrames (via `get_attribute_table`) for easy aggregation and plotting. The utterance metadata we need are:
* strategy: the strategy to be checked for the utterance
* translation_result: whether the checked strategy remains in the translated text
* backtranslation_result: whether the checked strategy remains in the back-translated text
#### A. English -> Chinese
```
# results for the English corpus
res_df_en = wiki_sampled_en.get_attribute_table(obj_type='utterance', \
attrs=['strategy', \
'translation_result', \
'backtranslation_result'])
res_df_en.columns = ['strategy', 'en->zh', 'en->zh->en']
# strategy-level permeability, -1 means the strategy is not applicable
permeability_df_en = res_df_en.groupby('strategy').sum() / 1000
# As a reference, we include permeability computed through an informal small-scale human annotation
# (50 instances, one annotator)
reference = {'Actually': 0.7, 'Adverb.Just': 0.62, 'Affirmation': 0.8, 'Apology': 0.94, 'By.The.Way': 0.42,
'Conj.Start': 0.66, 'Filler': 0.58, 'For.Me': 0.62, 'For.You': 0.52, 'Gratitude': 0.86,
'Greeting': 0.52, 'Hedges': 0.68, 'Indicative': 0.64, 'Please': 0.72, 'Please.Start': 0.82,
'Reassurance': 0.88, 'Subjunctive': 0.0, 'Swearing': 0.3}
permeability_df_en['reference'] = [reference[name] for name in permeability_df_en.index]
# As further context, we can include information about strategy prevalence on our plot
prevalence_en = dict(df_prevalence*100)
permeability_df_en.index = [f"{name} ({prevalence_en[name]:.1f}%)" for name in permeability_df_en.index]
plt.figure(figsize=(9, 12))
sns.set(font_scale=1.2)
# cells that are not applicable are masked in white
with sns.axes_style("white"):
    sns.heatmap(permeability_df_en, annot=True, cmap="Greens", fmt=".1%", mask=permeability_df_en==-1)
```
#### B. Chinese -> English
```
# results for the Chinese corpus
res_df_zh = wiki_sampled_zh.get_attribute_table(obj_type='utterance', \
attrs=['strategy', \
'translation_result', \
'backtranslation_result'])
# convert names to make it easier to compare between directions
res_df_zh['strategy'] = res_df_zh['strategy'].apply(lambda name:zh2en[name])
res_df_zh.columns = ['strategy', 'zh->en', 'zh->en->zh']
permeability_df_zh = res_df_zh.groupby('strategy').sum() / 1000
# as the original dataset for the Chinese corpus is quite large
# we present strategy prevalence results directly
prevalence_zh = {'apologetic': 0.6, 'can_you': 0.3, 'could_you': 0.0,
'factuality': 0.4,'gratitude': 3.1, 'greeting': 0.0,
'hedge': 42.8, 'indirect_btw': 0.1,
'praise': 0.4, 'please': 25.4,
'start_please': 17.7, 'start_so': 0.7, 'taboo': 0.4}
permeability_df_zh.index = [f"{name} ({prevalence_zh[en2zh[name]]:.1f}%)" for name in permeability_df_zh.index]
plt.figure(figsize=(6, 9))
sns.set(font_scale=1.2)
with sns.axes_style("white"):
    sns.heatmap(permeability_df_zh, annot=True, cmap="Blues", fmt=".1%")
```
| github_jupyter |
<div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="https://cocl.us/topNotebooksPython101Coursera">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center">
</a>
</div>
<a href="https://cognitiveclass.ai/">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center">
</a>
<h1>Write and Save Files in Python</h1>
<p><strong>Welcome!</strong> This notebook will teach you how to write text to a file in the Python programming language. By the end of this lab, you'll know how to write to a file and how to copy a file.</p>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li><a href="#write">Writing Files</a></li>
    <li><a href="#copy">Copy a File</a></li>
</ul>
<p>
Estimated time needed: <strong>15 min</strong>
</p>
</div>
<hr>
<h2 id="write">Writing Files</h2>
We can write text to a file by opening it with the <code>open()</code> function and calling the method <code>write()</code> on the file object. To write, the mode argument must be set to <b>w</b>. Let’s write a file <b>Example2.txt</b> with the line: <b>“This is line A”</b>
```
# Write line to file
with open('/resources/data/Example2.txt', 'w') as writefile:
    writefile.write("This is line A")
```
We can read the file to see if it worked:
```
# Read file
with open('/resources/data/Example2.txt', 'r') as testwritefile:
    print(testwritefile.read())
```
We can write multiple lines:
```
# Write lines to file
with open('/resources/data/Example2.txt', 'w') as writefile:
    writefile.write("This is line A\n")
    writefile.write("This is line B\n")
```
The method <code>.write()</code> works similarly to the method <code>.readline()</code>, except instead of reading a new line it writes a new line. The process is illustrated in the figure below; the different colour coding of the grid represents a new line added to the file after each method call.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/WriteLine.png" width="500" />
You can check the file to see if your results are correct
```
# Check whether write to file
with open('/resources/data/Example2.txt', 'r') as testwritefile:
    print(testwritefile.read())
```
By setting the mode argument to append **a** you can append a new line as follows:
```
# Write a new line to text file
with open('/resources/data/Example2.txt', 'a') as testwritefile:
    testwritefile.write("This is line C\n")
```
You can verify the file has changed by running the following cell:
```
# Verify if the new line is in the text file
with open('/resources/data/Example2.txt', 'r') as testwritefile:
    print(testwritefile.read())
```
We write a list to a <b>.txt</b> file as follows:
```
# Sample list of text
Lines = ["This is line A\n", "This is line B\n", "This is line C\n"]
Lines
# Write the strings in the list to text file
with open('Example2.txt', 'w') as writefile:
    for line in Lines:
        print(line)
        writefile.write(line)
```
We can verify the file is written by reading it and printing out the values:
```
# Verify if writing to file is successfully executed
with open('Example2.txt', 'r') as testwritefile:
    print(testwritefile.read())
```
We can again append to the file by changing the second parameter to <b>a</b>. This appends the line to the file:
```
# Append the line to the file
with open('Example2.txt', 'a') as testwritefile:
    testwritefile.write("This is line D\n")
```
We can see the results of appending the file:
```
# Verify if the appending is successfully executed
with open('Example2.txt', 'r') as testwritefile:
    print(testwritefile.read())
```
<hr>
<h2 id="copy">Copy a File</h2>
Let's copy the file <b>Example2.txt</b> to the file <b>Example3.txt</b>:
```
# Copy file to another
with open('Example2.txt','r') as readfile:
    with open('Example3.txt','w') as writefile:
        for line in readfile:
            writefile.write(line)
```
We can read the file to see if everything works:
```
# Verify if the copy is successfully executed
with open('Example3.txt','r') as testwritefile:
    print(testwritefile.read())
```
After reading files, we can also write data into files and save them in different file formats like <b>.txt, .csv, .xls (for Excel files)</b>, etc. Let's take a look at some examples.
Now go to the directory to ensure the <b>.txt</b> file exists and contains the data that we wrote.
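For instance, here is a sketch of saving rows of data to a <b>.csv</b> file with the standard library's <code>csv</code> module (the file name <b>Example4.csv</b> and the rows are chosen here purely for illustration):

```
import csv

# Hypothetical rows of data: a header followed by two records
rows = [['name', 'score'], ['alice', 90], ['bob', 85]]

# newline='' is recommended by the csv module to avoid blank lines on Windows
with open('Example4.csv', 'w', newline='') as csvfile:
    csv.writer(csvfile).writerows(rows)

# Read it back to verify
with open('Example4.csv', 'r') as csvfile:
    print(csvfile.read())
```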
<hr>
<h2>The last exercise!</h2>
<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.</p>
<hr>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<h2>Get IBM Watson Studio free of charge!</h2>
<p><a href="https://cocl.us/bottemNotebooksPython101Coursera"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p>
</div>
<h3>About the Authors:</h3>
<p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>
Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
| github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Text generation with an RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/text/text_generation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.
Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware accelerator > GPU*. If running locally, make sure TensorFlow version >= 1.11.
This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). The following is sample output when the model in this tutorial was trained for 30 epochs and started with the string "Q":
<pre>
QUEENE:
I had thought thou hadst a Roman; for the oracle,
Thus by All bids the man against the word,
Which are so weak of care, by old care done;
Your children were in your holy love,
And the precipitation through the bleeding throne.
BISHOP OF ELY:
Marry, and will, my lord, to weep in such a one were prettiest;
Yet now I was adopted heir
Of the world's lamentable day,
To watch the next way with his father with his face?
ESCALUS:
The cause why then we are all resolved more sons.
VOLUMNIA:
O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,
And love and pale as any will to that word.
QUEEN ELIZABETH:
But how long have I heard the soul for this world,
And show his hands of life be proved to stand.
PETRUCHIO:
I say he look'd on, if I must be content
To stay him from the fatal of our country's bliss.
His lordship pluck'd from this sentence then for prey,
And then let us twain, being the moon,
were she such a case as fills m
</pre>
While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:
* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.
* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.
* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure.
## Setup
### Import TensorFlow and other libraries
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass
import tensorflow as tf
import numpy as np
import os
import time
```
### Download the Shakespeare dataset
Change the following line to run this code on your own data.
```
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
```
### Read the data
First, look in the text:
```
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print ('Length of text: {} characters'.format(len(text)))
# Take a look at the first 250 characters in text
print(text[:250])
# The unique characters in the file
vocab = sorted(set(text))
print ('{} unique characters'.format(len(vocab)))
```
## Process the text
### Vectorize the text
Before training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.
```
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
text_as_int = np.array([char2idx[c] for c in text])
```
Now we have an integer representation for each character. Notice that we mapped each character to an index from 0 to `len(unique)`.
```
print('{')
for char,_ in zip(char2idx, range(20)):
    print(' {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print(' ...\n}')
# Show how the first 13 characters from the text are mapped to integers
print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:13]), text_as_int[:13]))
```
### The prediction task
Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.
Since RNNs maintain an internal state that depends on the previously seen elements, the question becomes: given all the characters computed up to this moment, what is the next character?
### Create training examples and targets
Next divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text.
For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.
So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
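In plain Python, the "Hello" example above looks like this:

```
# seq_length = 4, so each chunk has seq_length + 1 = 5 characters
chunk = "Hello"
input_text = chunk[:-1]   # drop the last character
target_text = chunk[1:]   # drop the first character
print(input_text, target_text)  # Hell ello
```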
To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices.
```
# The maximum length sentence we want for a single input in characters
seq_length = 100
examples_per_epoch = len(text)//(seq_length+1)
# Create training examples / targets
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
for i in char_dataset.take(5):
    print(idx2char[i.numpy()])
```
The `batch` method lets us easily convert these individual characters to sequences of the desired size.
```
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)
for item in sequences.take(5):
    print(repr(''.join(idx2char[item.numpy()])))
```
For each sequence, duplicate and shift it to form the input and target text by using the `map` method to apply a simple function to each batch:
```
def split_input_target(chunk):
    input_text = chunk[:-1]
    target_text = chunk[1:]
    return input_text, target_text
dataset = sequences.map(split_input_target)
```
Print the first examples input and target values:
```
for input_example, target_example in dataset.take(1):
    print ('Input data: ', repr(''.join(idx2char[input_example.numpy()])))
    print ('Target data:', repr(''.join(idx2char[target_example.numpy()])))
```
Each index of these vectors is processed as one time step. For the input at time step 0, the model receives the index for "F" and tries to predict the index for "i" as the next character. At the next timestep, it does the same thing, but the `RNN` considers the previous step's context in addition to the current input character.
```
for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
    print("Step {:4d}".format(i))
    print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
    print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
```
### Create training batches
We used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.
```
# Batch size
BATCH_SIZE = 64
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
dataset
```
## Build The Model
Use `tf.keras.Sequential` to define the model. For this simple example three layers are used to define our model:
* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map the numbers of each character to a vector with `embedding_dim` dimensions;
* `tf.keras.layers.GRU`: A type of RNN with size `units=rnn_units` (You can also use a LSTM layer here.)
* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs.
```
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  batch_input_shape=[batch_size, None]),
        tf.keras.layers.GRU(rnn_units,
                            return_sequences=True,
                            stateful=True,
                            recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dense(vocab_size)
    ])
    return model
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
```
For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:

## Try the model
Now run the model to see that it behaves as expected.
First check the shape of the output:
```
for input_example_batch, target_example_batch in dataset.take(1):
    example_batch_predictions = model(input_example_batch)
    print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
```
In the above example the sequence length of the input is `100` but the model can be run on inputs of any length:
```
model.summary()
```
To get actual predictions from the model, we need to sample from the output distribution to obtain concrete character indices. This distribution is defined by the logits over the character vocabulary.
Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.
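As a standalone NumPy sketch (independent of the model code in this tutorial) of why sampling matters: argmax always returns the same index for the same logits, while sampling from the softmax distribution varies:

```
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5])

# Greedy decoding: the same logits always yield the same index,
# which is what lets generation get stuck repeating itself
greedy = int(np.argmax(logits))

# Sampling: turn logits into probabilities and draw indices
probs = np.exp(logits) / np.exp(logits).sum()
samples = rng.choice(len(logits), size=1000, p=probs)

print(greedy)                # always index 0 for these logits
print(np.bincount(samples))  # every index gets drawn, roughly in proportion to probs
```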
Try it for the first example in the batch:
```
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
```
This gives us, at each timestep, a prediction of the next character index:
```
sampled_indices
```
Decode these to see the text predicted by this untrained model:
```
print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ])))
```
## Train the model
At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.
### Attach an optimizer, and a loss function
The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions.
Because our model returns logits, we need to set the `from_logits` flag.
```
def loss(labels, logits):
    return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
example_batch_loss = loss(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_loss: ", example_batch_loss.numpy().mean())
```
Configure the training procedure using the `tf.keras.Model.compile` method. We'll use `tf.keras.optimizers.Adam` with default arguments and the loss function.
```
model.compile(optimizer='adam', loss=loss)
```
### Configure checkpoints
Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training:
```
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True)
```
### Execute the training
To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.
```
EPOCHS=10
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
```
## Generate text
### Restore the latest checkpoint
To keep this prediction step simple, use a batch size of 1.
Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built.
To run the model with a different `batch_size`, we need to rebuild the model and restore the weights from the checkpoint.
```
tf.train.latest_checkpoint(checkpoint_dir)
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
model.summary()
```
### The prediction loop
The following code block generates the text:
* It starts by choosing a start string, initializing the RNN state, and setting the number of characters to generate.
* Get the prediction distribution of the next character using the start string and the RNN state.
* Then, use a categorical distribution to calculate the index of the predicted character. Use this predicted character as our next input to the model.
* The RNN state returned by the model is fed back into the model so that it now has more context, instead of only one character. After predicting the next character, the modified RNN states are again fed back into the model, which is how it learns as it gets more context from the previously predicted characters.

Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
```
def generate_text(model, start_string):
    # Evaluation step (generating text using the learned model)
    # Number of characters to generate
    num_generate = 1000
    # Converting our start string to numbers (vectorizing)
    input_eval = [char2idx[s] for s in start_string]
    input_eval = tf.expand_dims(input_eval, 0)
    # Empty string to store our results
    text_generated = []
    # A low temperature results in more predictable text.
    # A higher temperature results in more surprising text.
    # Experiment to find the best setting.
    temperature = 1.0
    # Here batch size == 1
    model.reset_states()
    for i in range(num_generate):
        predictions = model(input_eval)
        # remove the batch dimension
        predictions = tf.squeeze(predictions, 0)
        # using a categorical distribution to predict the character returned by the model
        predictions = predictions / temperature
        predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy()
        # We pass the predicted character as the next input to the model
        # along with the previous hidden state
        input_eval = tf.expand_dims([predicted_id], 0)
        text_generated.append(idx2char[predicted_id])
    return (start_string + ''.join(text_generated))
print(generate_text(model, start_string=u"ROMEO: "))
```
The easiest thing you can do to improve the results is to train for longer (try `EPOCHS=30`).
You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions.
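A rough NumPy illustration of what the temperature parameter does (this is independent of the trained model): dividing the logits by a temperature below 1 sharpens the distribution, while a temperature above 1 flattens it:

```
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5])

# Low temperature -> peakier distribution (more predictable text);
# high temperature -> flatter distribution (more surprising text)
for temperature in [0.5, 1.0, 2.0]:
    print(temperature, softmax(logits / temperature).round(3))
```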
## Advanced: Customized Training
The above training procedure is simple, but does not give you much control.
So now that you've seen how to run the model manually, let's unpack the training loop and implement it ourselves. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output.
We will use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager).
The procedure works as follows:
* First, initialize the RNN state. We do this by calling the `tf.keras.Model.reset_states` method.
* Next, iterate over the dataset (batch by batch) and calculate the *predictions* associated with each.
* Open a `tf.GradientTape`, and calculate the predictions and loss in that context.
* Calculate the gradients of the loss with respect to the model variables using the `tf.GradientTape.gradient` method.
* Finally, take a gradient descent step using the optimizer's `apply_gradients` method.
```
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
optimizer = tf.keras.optimizers.Adam()
@tf.function
def train_step(inp, target):
    with tf.GradientTape() as tape:
        predictions = model(inp)
        loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(
                target, predictions, from_logits=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
# Training step
EPOCHS = 10
for epoch in range(EPOCHS):
    start = time.time()
    # initializing the hidden state at the start of every epoch
    # initially hidden is None
    hidden = model.reset_states()
    for (batch_n, (inp, target)) in enumerate(dataset):
        loss = train_step(inp, target)
        if batch_n % 100 == 0:
            template = 'Epoch {} Batch {} Loss {}'
            print(template.format(epoch+1, batch_n, loss))
    # saving (checkpoint) the model every 5 epochs
    if (epoch + 1) % 5 == 0:
        model.save_weights(checkpoint_prefix.format(epoch=epoch))
    print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
    print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
model.save_weights(checkpoint_prefix.format(epoch=epoch))
```
| github_jupyter |
```
import pandas as pd
import os
import datetime as dt
import random as rn
from alpha_vantage.timeseries import TimeSeries
def getStoredData(srtdt, enddt, ticker):
    #currently assumes that csv data is organised in format: Date,Open,High,Low,Close,Adj Close,Volume
    #also assumes that the name of the csv is the same as that of the ticker
    path = r'C:\Users\Edward Stables\Documents\Programming\Jupyter\Man AHL\Data\Initial Datasets'
    beginning_of_time = '2000-01-01' #sets the earliest date required to be stored
    os.chdir(path) #sets the directory for the files
    try:
        #attempt to load the csv file from the path directory
        frame = pd.read_csv(ticker+'.csv')
        #set 'date' as index
        frame = frame.set_index(['date'])
        if dt.datetime.strptime(frame.index[-1], '%Y-%m-%d').date() < dt.datetime.today().date():
            #if the data is not up-to-date:
            updateData = getData(dt.datetime.strptime(frame.index[-1], '%Y-%m-%d').date().strftime('%Y-%m-%d'), dt.datetime.today().date().strftime('%Y-%m-%d'), ticker)
            #appends the new data to the imported data
            frame = frame.append(updateData[1:])
            frame = partition(frame)
            #saves the data back to storage
            frame.to_csv(ticker+'.csv')
    except FileNotFoundError:
        #if the file doesn't exist then send a request to get the data for the ticker's dates
        with open(ticker+'.csv', "w"):
            pass
        #gets the new data, from 1/1/2000 to the current day
        frame = getData(beginning_of_time, dt.datetime.today().date().strftime('%Y-%m-%d'), ticker)
        frame = partition(frame)
        #saves it to disk
        frame.to_csv(ticker+'.csv')
    #reads the requested values from disk
    frame = pd.read_csv(ticker+'.csv')
    #sets date as index
    frame = frame.set_index(['date'])
    #returns the selected values
    return frame[(frame.index >= srtdt) & (frame.index <= enddt)]
def getData(strdt, enddt, tic):
    API_KEY = '4U0DSJC208E4D8R7'
    #makes api call
    ts = TimeSeries(key=API_KEY, output_format='pandas')
    data, meta_data = ts.get_daily(symbol=tic, outputsize='full')
    #selects required data from full datarange returned
    dataRange = data.loc[strdt: enddt]
    return dataRange
#adds the flag values for whether data should be used for training or testing
def partition(table):
    rowcount = len(table.index) #number of rows in the dataset
    flags = [0]*rowcount
    #flagsimple gives a train/test split of 70/30 from the start of the dataset to the end
    #this division is not recommended, use for validation purposes
    divide = round(rowcount*0.7)
    index = 0
    while index < rowcount:
        if index >= divide:
            flags[index] = 1
        index += 1
    table = table.assign(flagsimple=flags)
    #yearly 70:30 split, gives the same split as above, but performs it over a yearly interval
    #252 business days in a year, therefore roughly a 177:76 split
    flags = [0]*rowcount
    index = 0
    for i in range(rowcount):
        if index < 177:
            flags[i] = 0
            index += 1
        elif index < 253:
            flags[i] = 1
            index += 1
        else:
            index = 0
    table = table.assign(flagyearly=flags)
    #gaussian based distribution of values
    #forms a skewed gaussian distribution limited to between 5 and 150, with mean 30 and standard deviation 10
    #the train and test interval lengths are currently drawn from the same distribution.
    index = 0
    flags = [0]*rowcount
    while index < rowcount:
        rand = 0
        while rand < 5 or rand > 150:
            rand = round(rn.gauss(30, 10))
        #flag the next `rand` rows as train (0), then the next `rand` rows as test (1)
        for i in range(rand):
            if index < rowcount:
                flags[index] = 0
                index += 1
        for i in range(rand):
            if index < rowcount:
                flags[index] = 1
                index += 1
    table = table.assign(flaggaussian=flags)
    return table

#example call, placed after all function definitions so partition() is defined when it runs
print(getStoredData('2000-01-01', '2018-04-20', 'MSFT'))
```
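A quick, self-contained check of the simple 70/30 flag idea on a toy frame (re-implemented here with a list comprehension rather than calling `partition`, since that function also depends on the API code above):

```
import pandas as pd

# Toy frame standing in for the downloaded price data
table = pd.DataFrame({'close': range(10)})
rowcount = len(table.index)
divide = round(rowcount * 0.7)

# First 70% of rows flagged 0 (train), remaining 30% flagged 1 (test)
table = table.assign(flagsimple=[0 if i < divide else 1 for i in range(rowcount)])
print(table['flagsimple'].tolist())  # [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
```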
| github_jupyter |
---
_You are currently looking at **version 1.5** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._
---
# Assignment 3 - More Pandas
This assignment requires more individual learning than the last one did - you are encouraged to check out the [pandas documentation](http://pandas.pydata.org/pandas-docs/stable/) to find functions or methods you might not have used yet, or ask questions on [Stack Overflow](http://stackoverflow.com/) and tag them as pandas and python related. And of course, the discussion forums are open for interaction with your peers and the course staff.
### Question 1 (20%)
Load the energy data from the file `Energy Indicators.xls`, which is a list of indicators of [energy supply and renewable electricity production](Energy%20Indicators.xls) from the [United Nations](http://unstats.un.org/unsd/environment/excel_file_tables/2013/Energy%20Indicators.xls) for the year 2013, and should be put into a DataFrame with the variable name of **energy**.
Keep in mind that this is an Excel file, and not a comma separated values file. Also, make sure to exclude the footer and header information from the datafile. The first two columns are unnecessary, so you should get rid of them, and you should change the column labels so that the columns are:
`['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']`
Convert `Energy Supply` to gigajoules (there are 1,000,000 gigajoules in a petajoule). For all countries which have missing data (e.g. data with "...") make sure this is reflected as `np.NaN` values.
Rename the following list of countries (for use in later questions):
```"Republic of Korea": "South Korea",
"United States of America": "United States",
"United Kingdom of Great Britain and Northern Ireland": "United Kingdom",
"China, Hong Kong Special Administrative Region": "Hong Kong"```
There are also several countries with numbers and/or parenthesis in their name. Be sure to remove these,
e.g.
`'Bolivia (Plurinational State of)'` should be `'Bolivia'`,
`'Switzerland17'` should be `'Switzerland'`.
<br>
Next, load the GDP data from the file `world_bank.csv`, which is a csv containing countries' GDP from 1960 to 2015 from [World Bank](http://data.worldbank.org/indicator/NY.GDP.MKTP.CD). Call this DataFrame **GDP**.
Make sure to skip the header, and rename the following list of countries:
```"Korea, Rep.": "South Korea",
"Iran, Islamic Rep.": "Iran",
"Hong Kong SAR, China": "Hong Kong"```
<br>
Finally, load the [Scimago Journal and Country Rank data for Energy Engineering and Power Technology](http://www.scimagojr.com/countryrank.php?category=2102) from the file `scimagojr-3.xlsx`, which ranks countries based on their journal contributions in the aforementioned area. Call this DataFrame **ScimEn**.
Join the three datasets: GDP, Energy, and ScimEn into a new dataset (using the intersection of country names). Use only the last 10 years (2006-2015) of GDP data and only the top 15 countries by Scimagojr 'Rank' (Rank 1 through 15).
The index of this DataFrame should be the name of the country, and the columns should be ['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations',
'Citations per document', 'H index', 'Energy Supply',
'Energy Supply per Capita', '% Renewable', '2006', '2007', '2008',
'2009', '2010', '2011', '2012', '2013', '2014', '2015'].
*This function should return a DataFrame with 20 columns and 15 entries.*
```
import pandas as pd
import numpy as np
def load_energy():
skiprows = list(range(16)) + [17]
    df1 = pd.read_excel('../data/course1_downloads/Energy Indicators.xls', skiprows=skiprows, header=0, skipfooter=38,
                        usecols=[2,3,4,5])
df1.replace(to_replace=r'\([^)]*\)', value='', inplace=True, regex=True)
    df1.replace(to_replace=r'\d+$', value='', inplace=True, regex=True)
    df1.replace(to_replace='...', value=np.nan, inplace=True)
    df1.columns = ['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']
    df1['Country'] = df1['Country'].str.strip()  # drop whitespace left over by the removals above
    df1['Energy Supply'] = df1['Energy Supply'] * 1000000  # petajoules -> gigajoules
to_replace = ["Republic of Korea", "United States of America", "United Kingdom of Great Britain and Northern Ireland", "China, Hong Kong Special Administrative Region"]
value = ["South Korea", "United States", "United Kingdom", "Hong Kong"]
energy = df1.replace(to_replace=to_replace, value=value)
return energy
def load_gdp():
df = pd.read_csv('../data/course1_downloads/world_bank.csv', skiprows=4)
    to_replace = ["Korea, Rep.", "Iran, Islamic Rep.", "Hong Kong SAR, China"]
    value = ["South Korea", "Iran", "Hong Kong"]  # order must match to_replace
gdp = df.replace(to_replace=to_replace, value=value)
return gdp
def load_scimen():
df = pd.read_excel('../data/course1_downloads/scimagojr-3.xlsx')
return df
def answer_one():
    energy = load_energy()
    GDP = load_gdp().rename(columns={'Country Name': 'Country'})  # assumes the World Bank csv's country column name
    ScimEn = load_scimen()
    # inner-join the three datasets on country, then keep the top 15 ranks,
    # the indicator columns and the 2006-2015 GDP columns
    merged = ScimEn.merge(energy, on='Country').merge(GDP, on='Country')
    merged = merged[merged['Rank'] <= 15].set_index('Country')
    cols = ['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations',
            'Citations per document', 'H index', 'Energy Supply',
            'Energy Supply per Capita', '% Renewable'] + [str(y) for y in range(2006, 2016)]
    return merged[cols]
answer_one()
```
### Question 2 (6.6%)
The previous question joined three datasets then reduced this to just the top 15 entries. When you joined the datasets, but before you reduced this to the top 15 items, how many entries did you lose?
*This function should return a single number.*
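One hedged way to count the lost entries is to compare the sizes of an outer join and an inner join; the mini-frames below are purely illustrative stand-ins for the three country-keyed datasets:

```python
import pandas as pd

# hypothetical stand-ins for Energy, GDP and ScimEn
a = pd.DataFrame({'Country': ['A', 'B', 'C'], 'x': [1, 2, 3]})
b = pd.DataFrame({'Country': ['B', 'C', 'D'], 'y': [4, 5, 6]})
c = pd.DataFrame({'Country': ['C', 'D', 'E'], 'z': [7, 8, 9]})

outer = a.merge(b, on='Country', how='outer').merge(c, on='Country', how='outer')
inner = a.merge(b, on='Country').merge(c, on='Country')  # merge defaults to how='inner'
print(len(outer) - len(inner))  # -> 4 (countries A, B, D and E fall outside the intersection)
```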
```
%%HTML
<svg width="800" height="300">
<circle cx="150" cy="180" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="blue" />
<circle cx="200" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="red" />
<circle cx="100" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="green" />
<line x1="150" y1="125" x2="300" y2="150" stroke="black" stroke-width="2" fill="black" stroke-dasharray="5,3"/>
<text x="300" y="165" font-family="Verdana" font-size="35">Everything but this!</text>
</svg>
def answer_two():
return "ANSWER"
```
<br>
Answer the following questions in the context of only the top 15 countries by Scimagojr Rank (aka the DataFrame returned by `answer_one()`)
### Question 3 (6.6%)
What is the average GDP over the last 10 years for each country? (exclude missing values from this calculation.)
*This function should return a Series named `avgGDP` with 15 countries and their average GDP sorted in descending order.*
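A sketch of the row-wise mean over the year columns, on a hypothetical three-country slice (`mean` skips NaNs by default):

```python
import pandas as pd
import numpy as np

# hypothetical slice with two GDP year columns and one missing value
toy = pd.DataFrame({'2014': [1.0, np.nan, 3.0],
                    '2015': [2.0, 4.0, 7.0]},
                   index=['A', 'B', 'C'])
avgGDP = toy[['2014', '2015']].mean(axis=1).sort_values(ascending=False)
print(avgGDP.index.tolist())  # -> ['C', 'B', 'A']  (means 5.0, 4.0, 1.5)
```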
```
def answer_three():
Top15 = answer_one()
return "ANSWER"
```
### Question 4 (6.6%)
By how much had the GDP changed over the 10 year span for the country with the 6th largest average GDP?
*This function should return a single number.*
```
def answer_four():
Top15 = answer_one()
return "ANSWER"
```
### Question 5 (6.6%)
What is the mean `Energy Supply per Capita`?
*This function should return a single number.*
```
def answer_five():
Top15 = answer_one()
return "ANSWER"
```
### Question 6 (6.6%)
What country has the maximum % Renewable and what is the percentage?
*This function should return a tuple with the name of the country and the percentage.*
```
def answer_six():
Top15 = answer_one()
return "ANSWER"
```
### Question 7 (6.6%)
Create a new column that is the ratio of Self-Citations to Total Citations.
What is the maximum value for this new column, and what country has the highest ratio?
*This function should return a tuple with the name of the country and the ratio.*
```
def answer_seven():
Top15 = answer_one()
return "ANSWER"
```
### Question 8 (6.6%)
Create a column that estimates the population using Energy Supply and Energy Supply per capita.
What is the third most populous country according to this estimate?
*This function should return a single string value.*
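The population estimate is the ratio of the two energy columns; a hypothetical sketch (all values invented):

```python
import pandas as pd

toy = pd.DataFrame({'Energy Supply': [9e9, 8e9, 6e9, 1e9],
                    'Energy Supply per Capita': [90.0, 10.0, 30.0, 4.0]},
                   index=['A', 'B', 'C', 'D'])
toy['PopEst'] = toy['Energy Supply'] / toy['Energy Supply per Capita']
third = toy['PopEst'].sort_values(ascending=False).index[2]
print(third)  # -> 'C' (estimates: B 8e8, D 2.5e8, C 2e8, A 1e8)
```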
```
def answer_eight():
Top15 = answer_one()
return "ANSWER"
```
### Question 9 (6.6%)
Create a column that estimates the number of citable documents per person.
What is the correlation between the number of citable documents per capita and the energy supply per capita? Use the `.corr()` method, (Pearson's correlation).
*This function should return a single number.*
*(Optional: Use the built-in function `plot9()` to visualize the relationship between Energy Supply per Capita vs. Citable docs per Capita)*
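A minimal sketch of the Pearson correlation between two derived columns, on hypothetical perfectly linear values:

```python
import pandas as pd

toy = pd.DataFrame({'Citable docs per Capita': [0.0001, 0.0002, 0.0003],
                    'Energy Supply per Capita': [100.0, 200.0, 300.0]})
corr = toy['Citable docs per Capita'].corr(toy['Energy Supply per Capita'])
print(corr)  # perfectly linear toy data, so the correlation is 1
```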
```
def answer_nine():
Top15 = answer_one()
return "ANSWER"
def plot9():
    import matplotlib.pyplot as plt
%matplotlib inline
Top15 = answer_one()
Top15['PopEst'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
Top15['Citable docs per Capita'] = Top15['Citable documents'] / Top15['PopEst']
Top15.plot(x='Citable docs per Capita', y='Energy Supply per Capita', kind='scatter', xlim=[0, 0.0006])
#plot9() # Be sure to comment out plot9() before submitting the assignment!
```
### Question 10 (6.6%)
Create a new column with a 1 if the country's % Renewable value is at or above the median for all countries in the top 15, and a 0 if the country's % Renewable value is below the median.
*This function should return a series named `HighRenew` whose index is the country name sorted in ascending order of rank.*
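A hedged sketch of the median threshold on a toy `% Renewable` column (four invented countries):

```python
import pandas as pd

toy = pd.DataFrame({'Rank': [1, 2, 3, 4],
                    '% Renewable': [10.0, 40.0, 20.0, 30.0]},
                   index=['A', 'B', 'C', 'D'])
median = toy['% Renewable'].median()                    # 25.0
HighRenew = (toy['% Renewable'] >= median).astype(int)
HighRenew = HighRenew[toy['Rank'].sort_values().index]  # keep ascending-Rank order
print(HighRenew.tolist())  # -> [0, 1, 0, 1]
```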
```
def answer_ten():
Top15 = answer_one()
return "ANSWER"
```
### Question 11 (6.6%)
Use the following dictionary to group the Countries by Continent, then create a DataFrame that displays the sample size (the number of countries in each continent bin), and the sum, mean, and std deviation for the estimated population of each country.
```python
ContinentDict = {'China':'Asia',
'United States':'North America',
'Japan':'Asia',
'United Kingdom':'Europe',
'Russian Federation':'Europe',
'Canada':'North America',
'Germany':'Europe',
'India':'Asia',
'France':'Europe',
'South Korea':'Asia',
'Italy':'Europe',
'Spain':'Europe',
'Iran':'Asia',
'Australia':'Australia',
'Brazil':'South America'}
```
*This function should return a DataFrame with index named Continent `['Asia', 'Australia', 'Europe', 'North America', 'South America']` and columns `['size', 'sum', 'mean', 'std']`*
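A sketch of the continent aggregation on a hypothetical population-estimate column; grouping by a Series aligns it on the index:

```python
import pandas as pd

cont = {'A': 'Asia', 'B': 'Asia', 'C': 'Europe'}  # toy continent map
toy = pd.DataFrame({'PopEst': [10.0, 30.0, 5.0]}, index=['A', 'B', 'C'])
grouped = toy.groupby(pd.Series(cont))['PopEst'].agg(['size', 'sum', 'mean', 'std'])
grouped.index.name = 'Continent'
print(grouped.loc['Asia', 'mean'])  # -> 20.0
```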
```
def answer_eleven():
Top15 = answer_one()
return "ANSWER"
```
### Question 12 (6.6%)
Cut % Renewable into 5 bins. Group Top15 by the Continent, as well as these new % Renewable bins. How many countries are in each of these groups?
*This function should return a __Series__ with a MultiIndex of `Continent`, then the bins for `% Renewable`. Do not include groups with no countries.*
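The binning step can be sketched with `pd.cut` on invented values; grouping by a column name and the binned Series together yields the MultiIndex:

```python
import pandas as pd

toy = pd.DataFrame({'Continent': ['Asia', 'Asia', 'Europe', 'Europe'],
                    '% Renewable': [5.0, 95.0, 50.0, 55.0]})
bins = pd.cut(toy['% Renewable'], 5)           # 5 equal-width bins
counts = toy.groupby(['Continent', bins]).size()
counts = counts[counts > 0]                    # drop empty (Continent, bin) groups
print(counts.tolist())  # -> [1, 1, 2]
```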
```
def answer_twelve():
Top15 = answer_one()
return "ANSWER"
```
### Question 13 (6.6%)
Convert the Population Estimate series to a string with thousands separator (using commas). Do not round the results.
e.g. 317615384.61538464 -> 317,615,384.61538464
*This function should return a Series `PopEst` whose index is the country name and whose values are the population estimate string.*
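Python's `,` format specifier does the thousands grouping without rounding; a sketch on hypothetical estimates:

```python
import pandas as pd

PopEst = pd.Series([317615384.61538464, 1367645161.2903225],
                   index=['United States', 'China'])
formatted = PopEst.apply(lambda x: '{:,}'.format(x))  # commas in the integer part only
print(formatted['United States'])  # -> 317,615,384.61538464
```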
```
def answer_thirteen():
Top15 = answer_one()
return "ANSWER"
```
### Optional
Use the built in function `plot_optional()` to see an example visualization.
```
def plot_optional():
    import matplotlib.pyplot as plt
%matplotlib inline
Top15 = answer_one()
ax = Top15.plot(x='Rank', y='% Renewable', kind='scatter',
c=['#e41a1c','#377eb8','#e41a1c','#4daf4a','#4daf4a','#377eb8','#4daf4a','#e41a1c',
'#4daf4a','#e41a1c','#4daf4a','#4daf4a','#e41a1c','#dede00','#ff7f00'],
xticks=range(1,16), s=6*Top15['2014']/10**10, alpha=.75, figsize=[16,6]);
for i, txt in enumerate(Top15.index):
ax.annotate(txt, [Top15['Rank'][i], Top15['% Renewable'][i]], ha='center')
print("This is an example of a visualization that can be created to help understand the data. \
This is a bubble chart showing % Renewable vs. Rank. The size of the bubble corresponds to the countries' \
2014 GDP, and the color corresponds to the continent.")
#plot_optional() # Be sure to comment out plot_optional() before submitting the assignment!
```
Notebook to plot the histogram of the power criterion values of the Rel-UME test.
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
#%config InlineBackend.figure_format = 'svg'
#%config InlineBackend.figure_format = 'pdf'
import freqopttest.tst as tst
import kmod
import kgof
import kgof.goftest as gof
# submodules
from kmod import data, density, kernel, util, plot, glo, log
from kmod.ex import cifar10 as cf10
import kmod.ex.exutil as exu
from kmod import mctest as mct
import matplotlib
import matplotlib.pyplot as plt
import pickle
import os
import autograd.numpy as np
import scipy.stats as stats
import numpy.testing as testing
# plot.set_default_matplotlib_options()
# font options
font = {
#'family' : 'normal',
#'weight' : 'bold',
'size' : 20,
}
plt.rc('font', **font)
plt.rc('lines', linewidth=2)
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
# def store_path(fname):
# """
# Construct a full path for saving/loading files.
# """
# return os.path.join('cifar10', fname)
display(list(zip(range(10), cf10.cifar10_classes)))
```
# Histogram of power criterion values
First construct four samples: $X \sim P, Y \sim Q, Z \sim R$, and a pool W to be used as test location candidates.
```
# class_spec = [
# # (class, #points for p, #points for q, #points for r, #points for the pool)
# ('airplane', 2000, 0, 0, 1500),
# ('cat', 0, 2000, 2000, 1500),
# ('truck', 1500, 1500, 1500, 1500),
# ]
# class_spec = [
# # (class, #points for p, #points for q, #points for r, #points for the pool)
# ('airplane', 1000, 0, 0, 300),
# ('cat', 0, 1000, 1000, 300),
# ('truck', 1500, 1500, 1500, 300),
# ]
class_spec = [
# (class, #points for p, #points for q, #points for r, #points for the pool)
('ship', 2000, 0, 0, 1000),
('airplane', 0, 2000, 1500, 1000),
('dog', 1500, 1500, 1500, 1000),
('bird', 0, 0, 500, 1000),
]
# class_spec = [
# # (class, #points for p, #points for q, #points for r, #points for the pool)
# ('horse', 2000, 0, 0, 1000),
# ('deer', 0, 2000, 1500, 1000),
# ('dog', 1500, 1500, 1500, 1000),
# ('automobile', 0, 0, 500, 1000),
# ]
# class_spec = [
# # (class, #points for p, #points for q, #points for r, #points for the pool)
# ('airplane', 2000, 0, 0, 1000),
# ('automobile', 0, 2000, 1500, 1000),
# ('cat', 1500, 1500, 1500, 1000),
# ('frog', 0, 0, 500, 1000),
# ]
#class_spec = [
# (class, #points for p, #points for q, #points for r, #points for the pool)
# ('airplane', 2000, 0, 0, 1000),
# ('automobile', 0, 2000, 2000, 1000),
# ('cat', 1500, 1500, 1500, 1000),
#]
# class_spec = [
# # (class, #points for p, #points for q, #points for r, #points for the pool)
# ('airplane', 200, 0, 0, 150),
# ('cat', 0, 200, 200, 150),
# ('truck', 150, 150, 150, 150),
# ]
# check sizes
hist_classes = [z[0] for z in class_spec]
p_sizes = [z[1] for z in class_spec]
q_sizes = [z[2] for z in class_spec]
r_sizes = [z[3] for z in class_spec]
pool_sizes = [z[4] for z in class_spec]
# make sure p,q,r have the same sample size
assert sum(p_sizes) == sum(q_sizes)
assert sum(q_sizes) == sum(r_sizes)
# cannot use more than 6000 from each class
for i, cs in enumerate(class_spec):
class_used = sum(cs[1:])
if class_used > 6000:
raise ValueError('class "{}" requires more than 6000 points. Was {}.'.format(cs[0], class_used))
# images as numpy arrays
list_Ximgs = []
list_Yimgs = []
list_Zimgs = []
list_poolimgs = []
# features
list_X = []
list_Y = []
list_Z = []
list_pool = []
# class labels
list_Xlabels = []
list_Ylabels = []
list_Zlabels = []
list_poollabels = []
# seed used for subsampling
seed = 368
with util.NumpySeedContext(seed=seed):
for i, cs in enumerate(class_spec):
# load class data
class_i = cs[0]
imgs_i = cf10.load_data_array(class_i)
feas_i = cf10.load_feature_array(class_i)
# split each class according to the spec
class_sizes_i = cs[1:]
# imgs_i, feas_i may contain more than what we need in total for a class. Subsample
sub_ind = util.subsample_ind(imgs_i.shape[0], sum(class_sizes_i), seed=seed+1)
sub_ind = list(sub_ind)
assert len(sub_ind) == sum(class_sizes_i)
xyzp_imgs_i = util.multi_way_split(imgs_i[sub_ind,:], class_sizes_i)
xyzp_feas_i = util.multi_way_split(feas_i[sub_ind,:], class_sizes_i)
# assignment
list_Ximgs.append(xyzp_imgs_i[0])
list_Yimgs.append(xyzp_imgs_i[1])
list_Zimgs.append(xyzp_imgs_i[2])
list_poolimgs.append(xyzp_imgs_i[3])
list_X.append(xyzp_feas_i[0])
list_Y.append(xyzp_feas_i[1])
list_Z.append(xyzp_feas_i[2])
list_pool.append(xyzp_feas_i[3])
# class labels
class_ind_i = cf10.cifar10_class_ind_dict[class_i]
list_Xlabels.append(np.ones(class_sizes_i[0])*class_ind_i)
list_Ylabels.append(np.ones(class_sizes_i[1])*class_ind_i)
list_Zlabels.append(np.ones(class_sizes_i[2])*class_ind_i)
list_poollabels.append(np.ones(class_sizes_i[3])*class_ind_i)
```
Finally we have the samples (features and images)
```
# stack the lists. For the "histogram" purpose, we don't actually need
# images for X, Y, Z. Only images for the pool.
Ximgs = np.vstack(list_Ximgs)
Yimgs = np.vstack(list_Yimgs)
Zimgs = np.vstack(list_Zimgs)
poolimgs = np.vstack(list_poolimgs)
# features
X = np.vstack(list_X)
Y = np.vstack(list_Y)
Z = np.vstack(list_Z)
pool = np.vstack(list_pool)
# labels
Xlabels = np.hstack(list_Xlabels)
Ylabels = np.hstack(list_Ylabels)
Zlabels = np.hstack(list_Zlabels)
poollabels = np.hstack(list_poollabels)
# sanity check
XYZP = [(X, Ximgs, Xlabels), (Y, Yimgs, Ylabels), (Z, Zimgs, Zlabels), (pool, poolimgs, poollabels)]
for f, fimgs, flabels in XYZP:
assert f.shape[0] == fimgs.shape[0]
assert fimgs.shape[0] == flabels.shape[0]
assert X.shape[0] == sum(p_sizes)
assert Y.shape[0] == sum(q_sizes)
assert Z.shape[0] == sum(r_sizes)
assert pool.shape[0] == sum(pool_sizes)
```
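For intuition, the MMD witness functions evaluated later (witness(p, r) and friends) can be sketched directly in NumPy with a Gaussian kernel; the helper names, sample sizes and distributions below are illustrative, not the kmod implementation:

```python
import numpy as np

def gauss_kernel(a, b, sigma2):
    # k(a, b) = exp(-||a - b||^2 / (2 * sigma2)), all pairs of rows
    d2 = np.sum((a[:, None, :] - b[None, :, :])**2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma2))

def mmd_witness(locs, X, Z, sigma2=1.0):
    # witness(v) = mean_x k(v, x) - mean_z k(v, z), evaluated at each row of locs
    return gauss_kernel(locs, X, sigma2).mean(axis=1) - gauss_kernel(locs, Z, sigma2).mean(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))   # sample from P
Z = rng.normal(2.0, 1.0, size=(200, 2))   # sample from R
locs = np.array([[0.0, 0.0], [2.0, 2.0]])
w = mmd_witness(locs, X, Z)
print(w)  # positive near P's mass, negative near R's
```

The candidate pool plays the role of `locs`: each candidate location is scored individually by such a function.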
## The actual histogram
```
def eval_test_locations(X, Y, Z, loc_pool, k, func_inds, reg=1e-6):
"""
Use X, Y, Z to estimate the Rel-UME power criterion function and evaluate
the function at each point (individually) in loc_pool (2d numpy array).
* k: a kernel
* func_inds: list of indices of the functions to evaluate. See below.
* reg: regularization parameter in the power criterion
Return an m x (up to) 5 numpy array where m = number of candidates in the
pool. The columns can be (as specified in func_inds):
0. power criterion
1. evaluation of the relative witness (or the test statistic of UME_SC)
2. evaluation of MMD witness(p, r) (not squared)
3. evaluation of witness(q, r)
4. evaluate of witness(p, q)
"""
datap = data.Data(X)
dataq = data.Data(Y)
datar = data.Data(Z)
    powcri_func = mct.SC_UME.get_power_criterion_func(datap, dataq, datar, k, k, reg=reg)
relwit_func = mct.SC_UME.get_relative_sqwitness(datap, dataq, datar, k, k)
witpr = tst.MMDWitness(k, X, Z)
witqr = tst.MMDWitness(k, Y, Z)
witpq = tst.MMDWitness(k, X, Y)
funcs = [powcri_func, relwit_func, witpr, witqr, witpq]
# select the functions according to func_inds
list_evals = [funcs[i](loc_pool) for i in func_inds]
stack_evals = np.vstack(list_evals)
return stack_evals.T
# Gaussian kernel with median heuristic
medxz = util.meddistance(np.vstack((X, Z)), subsample=1000)
medyz = util.meddistance(np.vstack((Y, Z)), subsample=1000)
k = kernel.KGauss(np.mean([medxz, medyz])**2)
print('Gaussian width: {}'.format(k.sigma2**0.5))
# histogram. This will take some time.
func_inds = np.array([0, 1, 2, 3, 4])
pool_evals = eval_test_locations(X, Y, Z, loc_pool=pool, k=k, func_inds=func_inds, reg=1e-6)
pow_cri_values = pool_evals[:, func_inds==0].reshape(-1)
test_stat_values = pool_evals[:, func_inds==1].reshape(-1)
witpr_values = pool_evals[:, func_inds==2].reshape(-1)
witqr_values = pool_evals[:, func_inds==3].reshape(-1)
witpq_values = pool_evals[:, func_inds==4].reshape(-1)
a = 0.6
plt.figure(figsize=(4, 4))
plt.hist(pow_cri_values, bins=15, label='Power Criterion', alpha=a);
plt.hist(witpr_values, bins=15, label='witness(P, R)', alpha=a);
plt.hist(witqr_values, bins=15, label='witness(Q, R)', alpha=a);
plt.hist(witpq_values, bins=15, label='witness(P, Q)', alpha=a);
plt.legend()
# Save the results
# package things to save
datapack = {
'class_spec': class_spec,
'seed': seed,
'poolimgs': poolimgs,
'X': X,
'Y': Y,
'Z': Z,
'pool': pool,
'medxz': medxz,
'medyz': medyz,
'func_inds': func_inds,
'pool_evals': pool_evals,
}
lines = [ '_'.join(str(x) for x in cs) for cs in class_spec]
fname = '-'.join(lines) + '-seed{}.pkl'.format(seed)
with open(fname, 'wb') as f:
# expect result to be a dictionary
pickle.dump(datapack, f)
```
Code for running the experiment ends here.
## Plot the results
This section can be run by loading the previously saved results.
```
# load the results
# fname = 'airplane_2000_0_0_1000-automobile_0_2000_1500_1000-cat_1500_1500_1500_1000-frog_0_0_500_1000-seed368.pkl'
# fname = 'ship_2000_0_0_1000-airplane_0_2000_1500_1000-automobile_1500_1500_1500_1000-bird_0_0_500_1000-seed368.pkl'
# fname = 'ship_2000_0_0_1000-dog_0_2000_1500_1000-automobile_1500_1500_1500_1000-bird_0_0_500_1000-seed368.pkl'
fname = 'ship_2000_0_0_1000-airplane_0_2000_1500_1000-dog_1500_1500_1500_1000-bird_0_0_500_1000-seed368.pkl'
# fname = 'horse_2000_0_0_1000-deer_0_2000_1500_1000-dog_1500_1500_1500_1000-airplane_0_0_500_1000-seed368.pkl'
# fname = 'horse_2000_0_0_1000-deer_0_2000_1500_1000-dog_1500_1500_1500_1000-automobile_0_0_500_1000-seed368.pkl'
# fname = 'horse_2000_0_0_1000-deer_0_2000_2000_1000-dog_1500_1500_1500_1000-seed368.pkl'
#fname = 'airplane_2000_0_0_1000-automobile_0_2000_2000_1000-cat_1500_1500_1500_1000-seed368.pkl'
with open(fname, 'rb') as f:
# expect a dictionary
L = pickle.load(f)
# load the variables
class_spec = L['class_spec']
seed = L['seed']
poolimgs = L['poolimgs']
X = L['X']
Y = L['Y']
Z = L['Z']
pool = L['pool']
medxz = L['medxz']
medyz = L['medyz']
func_inds = L['func_inds']
pool_evals = L['pool_evals']
pow_cri_values = pool_evals[:, func_inds==0].reshape(-1)
test_stat_values = pool_evals[:, func_inds==1].reshape(-1)
witpq_values = pool_evals[:, func_inds==4].reshape(-1)
# plot the histogram
plt.figure(figsize=(6, 4))
a = 0.6
plt.figure(figsize=(4,4))
plt.hist(pow_cri_values, bins=15, label='Power Criterion', alpha=a);
# plt.hist(test_stat_values, label='Stat.', alpha=a);
# plt.legend()
plt.savefig('powcri_hist_locs_pool.pdf', bbox_inches='tight')
plt.figure(figsize=(12, 4))
plt.hist(test_stat_values, label='Stat.', alpha=a);
plt.legend()
def reshape_3c_rescale(img_in_stack):
img = img_in_stack.reshape([3, 32, 32])
# h x w x c
img = img.transpose([1, 2, 0])/255.0
return img
def plot_lowzerohigh(images, values, text_in_title='', grid_rows=2,
grid_cols=10, figsize=(13, 3)):
"""
Sort the values in three different ways (ascending, descending, absolute ascending).
Plot the images corresponding to the top-k sorted values. k is determined
by the grid size.
"""
low_inds, zeros_inds, high_inds = util.top_lowzerohigh(values)
plt.figure(figsize=figsize)
exu.plot_images_grid(images[low_inds], reshape_3c_rescale, grid_rows, grid_cols)
# plt.suptitle('{} Low'.format(text_in_title))
plt.savefig('powcri_low_region.pdf', bbox_inches='tight')
plt.figure(figsize=figsize)
exu.plot_images_grid(images[zeros_inds], reshape_3c_rescale, grid_rows, grid_cols)
# plt.suptitle('{} Near Zero'.format(text_in_title))
plt.savefig('powcri_zero_region.pdf', bbox_inches='tight')
plt.figure(figsize=figsize)
exu.plot_images_grid(images[high_inds], reshape_3c_rescale, grid_rows, grid_cols)
# plt.suptitle('{} High'.format(text_in_title))
plt.savefig('powcri_high_region.pdf', bbox_inches='tight')
grid_rows = 2
grid_cols = 5
figsize = (5, 3)
plot_lowzerohigh(poolimgs, pow_cri_values, 'Power Criterion.', grid_rows, grid_cols, figsize)
# plot_lowzerohigh(poolimgs, rel_wit_values, 'Test statistic.', grid_rows, grid_cols, figsize)
import matplotlib.gridspec as gridspec
def plot_images_grid_witness(images, func_img=None, grid_rows=4, grid_cols=4, witness_pq=None, scale=100.):
"""
Plot images in a grid, starting from index 0 to the maximum size of the
grid.
images: stack of images images[i] is one image
func_img: function to run on each image before plotting
"""
gs1 = gridspec.GridSpec(grid_rows, grid_cols)
gs1.update(wspace=0.2, hspace=0.8) # set the spacing between axes.
    wit_sign = np.sign(witness_pq) if witness_pq is not None else None
for i in range(grid_rows*grid_cols):
if func_img is not None:
img = func_img(images[i])
else:
img = images[i]
if witness_pq is not None:
sign = wit_sign[i]
if sign > 0:
color = 'red'
else:
color = 'blue'
# plt.subplot(grid_rows, grid_cols, i+1)
ax = plt.subplot(gs1[i])
if witness_pq is not None:
ax.text(0.5, -0.6, "{:1.2f}".format(scale*witness_pq[i]), ha="center",
color=color, transform=ax.transAxes)
plt.imshow(img)
plt.axis('off')
def plot_lowzerohigh(images, values, text_in_title='', grid_rows=2,
grid_cols=10, figsize=(13, 3), wit_pq=None, skip_length=1):
"""
Sort the values in three different ways (ascending, descending, absolute ascending).
Plot the images corresponding to the top-k sorted values. k is determined
by the grid size.
"""
low_inds, zeros_inds, high_inds = util.top_lowzerohigh(values)
low_inds = low_inds[::skip_length]
zeros_inds = zeros_inds[::skip_length]
high_inds = high_inds[::skip_length]
plt.figure(figsize=figsize)
plot_images_grid_witness(images[low_inds], reshape_3c_rescale, grid_rows, grid_cols, wit_pq[low_inds])
# plt.suptitle('{} Low'.format(text_in_title))
# plt.savefig('powcri_low_region.pdf', bbox_inches='tight')
plt.figure(figsize=figsize)
plot_images_grid_witness(images[zeros_inds], reshape_3c_rescale, grid_rows, grid_cols, wit_pq[zeros_inds])
# plt.suptitle('{} Near Zero'.format(text_in_title))
# plt.savefig('powcri_zero_region.pdf', bbox_inches='tight')
plt.figure(figsize=figsize)
plot_images_grid_witness(images[high_inds[:]], reshape_3c_rescale, grid_rows, grid_cols, wit_pq[high_inds])
# plt.suptitle('{} High'.format(text_in_title))
# plt.savefig('powcri_high_region.pdf', bbox_inches='tight')
grid_rows = 3
grid_cols = 5
figsize = (8, 3)
plot_lowzerohigh(poolimgs, pow_cri_values, 'Power Criterion.', grid_rows, grid_cols, figsize, witpq_values, skip_length=40)
```
# Integrated gradients for text classification on the IMDB dataset
In this example, we apply the integrated gradients method to a sentiment analysis model trained on the IMDB dataset. In text classification models, integrated gradients define an attribution value for each word in the input sentence. The attributions are calculated considering the integral of the model gradients with respect to the word embedding layer along a straight path from a baseline instance $x^\prime$ to the input instance $x$. A description of the method can be found [here](https://docs.seldon.io/projects/alibi/en/latest/methods/IntegratedGradients.html). Integrated gradients was originally proposed in Sundararajan et al., ["Axiomatic Attribution for Deep Networks"](https://arxiv.org/abs/1703.01365).
The IMDB data set contains 50K movie reviews labelled as positive or negative.
We train a convolutional neural network classifier with a single 1-d convolutional layer followed by a fully connected layer. The reviews in the dataset are truncated at 100 words and each word is represented by a 50-dimensional word embedding vector. We calculate attributions for the elements of the embedding layer.
```
import tensorflow as tf
import numpy as np
import os
import pandas as pd
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Embedding, Conv1D, GlobalMaxPooling1D, Dropout
from tensorflow.keras.utils import to_categorical
from alibi.explainers import IntegratedGradients
import matplotlib.pyplot as plt
print('TF version: ', tf.__version__)
print('Eager execution enabled: ', tf.executing_eagerly()) # True
```
## Load data
Loading the imdb dataset.
```
max_features = 10000
maxlen = 100
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
test_labels = y_test.copy()
train_labels = y_train.copy()
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
y_train, y_test = to_categorical(y_train), to_categorical(y_test)
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
index = imdb.get_word_index()
reverse_index = {value: key for (key, value) in index.items()}
```
A sample review from the test set. Note that unknown words are replaced with 'UNK'
```
def decode_sentence(x, reverse_index):
# the `-3` offset is due to the special tokens used by keras
# see https://stackoverflow.com/questions/42821330/restore-original-text-from-keras-s-imdb-dataset
return " ".join([reverse_index.get(i - 3, 'UNK') for i in x])
print(decode_sentence(x_test[1], reverse_index))
```
## Train Model
The model includes one convolutional layer and reaches a test accuracy of 0.85. If `save_model = True`, a local folder `./model_imdb` will be created and the trained model will be saved in that folder. If the model was previously saved, it can be loaded by setting `load_model = True`.
```
batch_size = 32
embedding_dims = 50
filters = 250
kernel_size = 3
hidden_dims = 250
load_model = False
save_model = True
filepath = './model_imdb/' # change to directory where model is downloaded
if load_model:
model = tf.keras.models.load_model(os.path.join(filepath, 'model.h5'))
else:
print('Build model...')
inputs = Input(shape=(maxlen,), dtype='int32')
embedded_sequences = Embedding(max_features,
embedding_dims)(inputs)
out = Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1)(embedded_sequences)
out = Dropout(0.4)(out)
out = GlobalMaxPooling1D()(out)
out = Dense(hidden_dims,
activation='relu')(out)
out = Dropout(0.4)(out)
outputs = Dense(2, activation='softmax')(out)
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=256,
epochs=3,
validation_data=(x_test, y_test))
if save_model:
if not os.path.exists(filepath):
os.makedirs(filepath)
model.save(os.path.join(filepath, 'model.h5'))
```
## Calculate integrated gradients
The integrated gradients attributions are calculated with respect to the embedding layer for 10 samples from the test set. Since the model uses a word to vector embedding with vector dimensionality of 50 and sequence length of 100 words, the dimensionality of the attributions is (10, 100, 50). In order to obtain a single attribution value for each word, we sum all the attribution values for the 50 elements of each word's vector representation.
The default baseline is used in this example, which is internally defined as a sequence of zeros. In this case, this corresponds to a sequence of padding characters (**NB:** in general the numerical value corresponding to a "non-informative" baseline such as the PAD token will depend on the tokenizer used, so make sure that the numerical value of the baseline used corresponds to your desired token value to avoid surprises). The path integral is defined as a straight line from the baseline to the input instance. The path is approximated by choosing 50 discrete steps according to the Gauss-Legendre method.
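The straight-line path and the completeness property (attributions sum to $f(x) - f(x^\prime)$) can be illustrated with a toy differentiable function; the midpoint rule below stands in for the Gauss-Legendre quadrature used by alibi, and all names are invented:

```python
import numpy as np

# integrated gradients for the toy function f(x) = x0 * x1, whose gradient is known analytically
def integrated_gradients(f_grad, baseline, x, n_steps=50):
    alphas = (np.arange(n_steps) + 0.5) / n_steps       # midpoint rule on [0, 1]
    path = baseline + alphas[:, None] * (x - baseline)  # points on the straight-line path
    grads = np.array([f_grad(p) for p in path])
    return (x - baseline) * grads.mean(axis=0)

f = lambda v: v[0] * v[1]
f_grad = lambda v: np.array([v[1], v[0]])
x = np.array([2.0, 3.0])
baseline = np.zeros(2)
attrs = integrated_gradients(f_grad, baseline, x)
print(attrs, attrs.sum(), f(x) - f(baseline))  # attributions sum to f(x) - f(baseline) = 6
```

In the text model, the same integral is taken over the embedding layer's activations instead of raw inputs, which is why the attribution tensor has an embedding dimension to sum over.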
```
n_steps = 50
method = "gausslegendre"
internal_batch_size = 100
nb_samples = 10
ig = IntegratedGradients(model,
layer=model.layers[1],
n_steps=n_steps,
method=method,
internal_batch_size=internal_batch_size)
x_test_sample = x_test[:nb_samples]
predictions = model(x_test_sample).numpy().argmax(axis=1)
explanation = ig.explain(x_test_sample,
baselines=None,
target=predictions)
# Metadata from the explanation object
explanation.meta
# Data fields from the explanation object
explanation.data.keys()
# Get attributions values from the explanation object
attrs = explanation.attributions[0]
print('Attributions shape:', attrs.shape)
```
## Sum attributions
```
attrs = attrs.sum(axis=2)
print('Attributions shape:', attrs.shape)
```
## Visualize attributions
```
i = 1
x_i = x_test_sample[i]
attrs_i = attrs[i]
pred = predictions[i]
pred_dict = {1: 'Positive review', 0: 'Negative review'}
print('Predicted label = {}: {}'.format(pred, pred_dict[pred]))
```
We can visualize the attributions for the text instance by mapping the values of the attributions onto a matplotlib colormap. Below we define some utility functions for doing this.
```
from IPython.display import HTML
def hlstr(string, color='white'):
    """
    Return HTML markup highlighting text with the desired color.
    """
    return f"<mark style=background-color:{color}>{string} </mark>"

def colorize(attrs, cmap='PiYG'):
    """
    Compute hex colors based on the attributions for a single instance.
    Uses a diverging colorscale by default and normalizes and scales
    the colormap so that colors are consistent with the attributions.
    """
    import matplotlib as mpl
    cmap_bound = np.abs(attrs).max()
    norm = mpl.colors.Normalize(vmin=-cmap_bound, vmax=cmap_bound)
    cmap = mpl.cm.get_cmap(cmap)
    # now compute hex values of colors
    colors = list(map(lambda x: mpl.colors.rgb2hex(cmap(norm(x))), attrs))
    return colors
```
Below we visualize the attribution values by highlighting the words in the text. Words with high positive attribution are highlighted in shades of green and words with negative attribution in shades of pink. Stronger shading corresponds to larger attribution magnitudes. Positive attributions can be interpreted as increasing the probability of the predicted class ("Positive sentiment"), while negative attributions correspond to decreasing it.
```
words = decode_sentence(x_i, reverse_index).split()
colors = colorize(attrs_i)
HTML("".join(list(map(hlstr, words, colors))))
```
---
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
from google.colab import files
import io
import pandas as pd
import math
uploaded = files.upload()
hp = pd.read_csv(io.BytesIO(uploaded['hpq.us.txt']))
hp['Close'].isnull().sum()
hp
#walmart['ret'] = walmart.close.pct_change(1).mul(100)
series = hp['Close']
print(series)
type(series)
time = np.arange(0,1259,1)
print("time_shape :", time.shape)
print("series_shape :", series.shape)
'''using returns instead of close price
series = walmart['ret']
print(series)
type(series)
time = np.arange(0,1259,1)
print(time)
'''
'''
##adjusting series for using return
series = series[1:]
##slicing time for using with returns
time = np.arange(0,1258,1)
'''
series = pd.Series.to_numpy(series)
##plot the price
df = hp.copy()
plt.figure(figsize = (22,12))
plt.plot(hp.index, hp['Close'])
plt.title('HP Stock Price')
plt.xticks(range(0,hp.shape[0],180),hp['Date'].loc[::180],rotation=40)
plt.ylabel('Price ($)');
plt.show()
time = np.arange(0, len(series), 1)  # 1259 trading days in this dataset
split_time = 1000                    # hold out the final ~20% for validation
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 64
batch_size = 128
shuffle_buffer_size = 1000
time.shape
type(x_train)
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
    dataset = tf.data.Dataset.from_tensor_slices(series)
    dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
    dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
    dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))
    dataset = dataset.batch(batch_size).prefetch(1)
    return dataset
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                           input_shape=[None]),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1),
    tf.keras.layers.Lambda(lambda x: x * 400.0)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
              optimizer=optimizer,
              metrics=["mse"])
history = model.fit(dataset, epochs=100, callbacks=[lr_schedule])
plt.figure(figsize=(15,10))
plt.semilogx(history.history["lr"], history.history["loss"])
plt.title("HP learning rate plot")
plt.axis([1e-8, 1e-4, 0, 30])
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                           input_shape=[None]),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1),
    tf.keras.layers.Lambda(lambda x: x * 400.0)
])
model.compile(loss=tf.keras.losses.Huber(),
              optimizer=tf.keras.optimizers.SGD(learning_rate=1e-6, momentum=0.9),
              metrics=["mse"])
history = model.fit(dataset,epochs=300,verbose=1)
def plot_series(time, series, format="-", start=0, end=None):
    plt.plot(time[start:end], series[start:end], format)
    plt.xlabel("Time")
    plt.ylabel("Value")
    plt.grid(True)
forecast = []
for step in range(len(series) - window_size):  # use `step`, not `time`, to avoid shadowing the time array
    forecast.append(model.predict(series[step:step + window_size][np.newaxis]))
forecast = forecast[split_time - window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(17,11))
plt.title("HP close price prediction")
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
time_valid.shape
plt.figure(figsize=(17,11))
plt.title("HP close price prediction (zoom)")
plot_series(time_valid[150:], x_valid[150:])
plot_series(time_valid[150:], results[150:])
tf.keras.metrics.mean_squared_error(x_valid, results).numpy()
print("RMSE= " , math.sqrt(tf.keras.metrics.mean_squared_error(x_valid, results).numpy()))
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mse=history.history['mse']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MSE and Loss
#------------------------------------------------
plt.figure(figsize=(15,10))
plt.plot(epochs, mse, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MSE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MSE / Loss")
plt.legend(["MSE", "Loss"])
plt.figure()
epochs_zoom = epochs[80:]
mse_zoom = mse[80:]
loss_zoom = loss[80:]
#------------------------------------------------
# Plot Zoomed MSE and Loss
#------------------------------------------------
plt.figure(figsize=(15,10))
plt.plot(epochs_zoom, mse_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MSE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MSE / Loss")
plt.legend(["MSE", "Loss"])
```
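As a sanity check on the `tf.data` windowing pipeline above, the same logic can be reproduced with plain NumPy (shuffling and batching omitted; the helper name is illustrative):

```python
import numpy as np

def windowed_pairs(series, window_size):
    """Plain-NumPy equivalent of the tf.data pipeline (minus shuffle/batch):
    each window of `window_size` values predicts the value that follows it."""
    X = np.array([series[i:i + window_size] for i in range(len(series) - window_size)])
    y = series[window_size:]
    return X, y

series = np.arange(10, dtype=float)
X, y = windowed_pairs(series, window_size=4)
# X[0] is [0, 1, 2, 3] and its target y[0] is 4
```

Checking the pipeline on a tiny synthetic series like this makes it easy to confirm that windows and targets are aligned before training on real data.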
---
# analysis_cdo_nco_draft2017b.ipynb
## Purpose
Use CDO and NCO to analyse CESM simulation output from project [p17c-marc-comparison](https://github.com/grandey/p17c-marc-comparison).
## Requirements
- Climate Data Operators (CDO)
- NetCDF Operators (NCO)
- CESM output data, post-processed to time-series format, as described in [data_management.org](https://github.com/grandey/p17c-marc-comparison/blob/master/manage_data/data_management.org#syncing-to-local-machine-for-analysis). These data are available via https://doi.org/10.6084/m9.figshare.5687812.
## Author
Benjamin S. Grandey, 2017-2018
```
! date
from glob import glob
import os
import re
import shutil
```
## CDO and NCO version information
Useful to record for reproducibility
```
! cdo --version
! ncks --version
```
## Directory locations for input and output NetCDF files
The data in the input directory (*in_dir*) are available via Figshare: https://doi.org/10.6084/m9.figshare.5687812.
```
# Input data directory
in_dir = os.path.expandvars('$HOME/data/figshare/figshare5687812/')
#in_dir = os.path.expandvars('$HOME/data/projects/p17c_marc_comparison/output_timeseries/')
# Output data directory
out_dir = os.path.expandvars('$HOME/data/projects/p17c_marc_comparison/analysis_cdo_nco_draft2017b/')
```
## Clean output data directory
```
for filename in glob('{}/*.nc'.format(out_dir)):
print('Deleting {}'.format(filename.split('/')[-1]))
os.remove(filename)
for filename in glob('{}/*.nco'.format(out_dir)):
print('Deleting {}'.format(filename.split('/')[-1]))
os.remove(filename)
for filename in glob('{}/*.tmp'.format(out_dir)):
print('Deleting {}'.format(filename.split('/')[-1]))
os.remove(filename)
! date
```
## Calculate annual means of standard 2D atmosphere variables
```
variable_list = ['OC_LDG', # MARC pure OC burden, for checking burdens derived using mass-mixing ratios
'BURDENSO4', 'BURDENPOM', 'BURDENSOA', 'BURDENBC', # MAM burdens
'BURDENSEASALT', 'BURDENDUST', # MAM burdens cont.
'TAU_tot', 'AEROD_v', # AOD in MARC, MAM
'CDNUMC', # vertically-integrated CDNC
'CLDTOT', 'CLDLOW', 'CLDMED', 'CLDHGH', # cloud fraction
'TGCLDLWP', 'TGCLDIWP', 'TGCLDCWP', # grid-box average water path
'PRECC', 'PRECL', # convective and large-scale precipitation rate (in m/s)
'PRECSC', 'PRECSL', # convective and large-scale snow rate (in m/s)
'U10', # 10m wind speed
'FSNTOA', 'FSNTOANOA', 'FSNTOA_d1', # SW flux at TOA, including clean-sky for MARC, MAM
                 'FSNS', 'FSNSNOA', 'FSNS_d1', # SW flux at surface, including clean-sky for MARC, MAM
'CRF', 'SWCF_d1', # SW cloud radiative effect in MARC, MAM
'LWCF', # LW cloud radiative effect
'FSNTOACNOA', 'FSNTOAC_d1', # SW clear-sky clean-sky flux at TOA in MARC, MAM
]
for variable in variable_list:
for model in ['marc_s2', 'mam3', 'mam7']:
for year in ['1850', '2000']:
# Check if input file exists
in_filename = '{}/p17c_{}_{}.cam.h0.{}.nc'.format(in_dir, model, year, variable)
if os.path.isfile(in_filename):
print('{}, {}, {}'.format(variable, model, year))
# Calculate annual means using NCO (with years starting in January)
annual_filename = '{}/{}_{}_{}_ANN.nc'.format(out_dir, model, year, variable)
! ncra -O --mro -d time,,,12,12 {in_filename} {annual_filename}
print(' Written {}'.format(annual_filename.split('/')[-1]))
! date
```
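The `ncra -O --mro -d time,,,12,12` invocation above averages each consecutive block of 12 monthly records into one annual record. The following NumPy sketch (with made-up data) mirrors that arithmetic:

```python
import numpy as np

def annual_means(monthly, months_per_year=12):
    """NumPy analogue of `ncra --mro -d time,,,12,12`: average each
    consecutive block of 12 monthly records into one annual record."""
    n_years = monthly.shape[0] // months_per_year
    trimmed = monthly[:n_years * months_per_year]  # drop any incomplete final year
    return trimmed.reshape(n_years, months_per_year, *monthly.shape[1:]).mean(axis=1)

monthly = np.arange(24, dtype=float)  # two years of a toy scalar time series
ann = annual_means(monthly)
# ann == [5.5, 17.5]
```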
## Extract data on specific atmosphere model levels
```
variable_list = ['CCN3', ]
for variable in variable_list:
for model in ['marc_s2', 'mam3', 'mam7']:
for year in ['1850', '2000']:
# Check if input file exists
in_filename = '{}/p17c_{}_{}.cam.h0.{}.nc'.format(in_dir, model, year, variable)
if os.path.isfile(in_filename): # there is no mam7_1850 simulation
# Loop over model levels of interest
for level in [30, 24, 19]: # 30 is bottom level; 24 is ~860hpa; 19 is ~525hPa
print('{}, {}, {}, ml{}'.format(variable, model, year, level))
# Select data for model level using CDO
print(' Selecting data for model level {}'.format(level))
level_filename = '{}/temp_{}_{}_{}_ml{}.nc'.format(out_dir, model, year, variable, level)
! cdo -s sellevidx,{level} {in_filename} {level_filename}
# Rename variable using NCO
print(' Renaming variable to {}_ml{}'.format(variable, level))
! ncrename -v {variable},{variable}_ml{level} {level_filename} >/dev/null 2>/dev/null
# Calculate annual means using NCO (with years starting in January)
print(' Calculating annual means')
annual_filename = '{}/{}_{}_{}_ml{}_ANN.nc'.format(out_dir, model, year, variable, level)
! ncra -O --mro -d time,,,12,12 {level_filename} {annual_filename}
print(' Written {}'.format(annual_filename.split('/')[-1]))
# Remove temporary file
for filename in [level_filename, ]:
print(' Removing {}'.format(filename.split('/')[-1]))
os.remove(filename)
! date
```
## Calculate column loadings from MARC mass mixing ratios
- This is to facilitate comparison with MAM. Although MARC diagnoses some column loadings, the column loadings are not available for every aerosol component.
- Regarding the calculation of the mass in each level, the following reference is helpful: http://nco.sourceforge.net/nco.html#Left-hand-casting. A description of the hybrid vertical coordinate system can be found here: http://www.cesm.ucar.edu/models/atm-cam/docs/usersguide/node25.html.
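Before running the NCO script, it may help to see the same arithmetic in NumPy: interface pressures from the hybrid coefficients, then the pressure thickness of each level divided by g. The coefficients below are made up for a toy three-level column:

```python
import numpy as np

def air_mass_per_level(hyai, hybi, p0, ps, g=9.807):
    """Air mass (kg/m2) in each hybrid level of a single column:
    interface pressure is hyai*P0 + hybi*PS; the mass in a level is the
    pressure difference across it divided by g."""
    p_bnds = hyai * p0 + hybi * ps  # interface pressures, shape (nlev+1,)
    p_delta = np.diff(p_bnds)       # pressure thickness of each level
    return p_delta / g

# made-up coefficients for a toy three-level column, with PS = P0 = 1000 hPa
hyai = np.array([0.0, 0.1, 0.05, 0.0])
hybi = np.array([0.0, 0.2, 0.6, 1.0])
mass = air_mass_per_level(hyai, hybi, p0=100000.0, ps=100000.0)
# the column total equals PS/g, since the interfaces span 0..PS
```

Multiplying this per-level air mass by a mass mixing ratio and summing over levels gives the column loading, which is exactly what the `ncap2`/`ncwa` steps below do.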
```
# Calculate column loadings from mass mixing ratios
for year in ['1850', '2000']: # loop over emission years
! date
print('year = {}'.format(year))
# Copy the surface pressure file - necessary for decoding of the hybrid coordinates
print(' Copying surface pressure file')
in_filename = '{}/p17c_marc_s2_{}.cam.h0.PS.nc'.format(in_dir, year)
ps_filename = '{}/temp_marc_s2_{}_PS.nc'.format(out_dir, year)
shutil.copy2(in_filename, ps_filename)
# Create file containing NCO commands for calculation of air mass in each model level
nco_filename = '{}/temp_marc_s2_{}.nco'.format(out_dir, year)
nco_file = open(nco_filename, 'w')
nco_file.writelines(['*P_bnds[time,ilev,lat,lon]=hyai*P0+hybi*PS;\n', # pressures at bounds
'*P_delta[time,lev,lat,lon]=P_bnds(:,1:30,:,:)-P_bnds(:,0:29,:,:);\n', # deltas
'mass_air=P_delta/9.807;']) # mass of air
nco_file.close()
# Calculate mass of air in each model level
print(' Calculating mass of air in each model level')
mass_air_filename = '{}/temp_marc_s2_{}_mass_air.nc'.format(out_dir, year)
! ncap2 -O -v -S {nco_filename} {ps_filename} {mass_air_filename}
# Loop over mass mixing ratios for different aerosol components
for aerosol in ['OC', 'MOS', 'OIM', 'BC', 'MBS', 'BIM',
'NUC', 'AIT', 'ACC',
'SSLT01', 'SSLT02', 'SSLT03', 'SSLT04', # sea-salt
'DST01', 'DST02', 'DST03', 'DST04']: # dust
! date
print(' aerosol = {}'.format(aerosol))
# Name of corresponding mass mixing ratio variable
if aerosol[0:3] in ['SSL', 'DST']:
mmr_aerosol = aerosol
else:
mmr_aerosol = 'm{}'.format(aerosol)
# Copy the mass mixing ratio file
print(' Copying the file for {}'.format(mmr_aerosol))
in_filename = '{}/p17c_marc_s2_{}.cam.h0.{}.nc'.format(in_dir, year, mmr_aerosol)
mmr_filename = '{}/temp_marc_s2_{}_{}.nc'.format(out_dir, year, mmr_aerosol)
shutil.copy2(in_filename, mmr_filename)
# Append the mass of air in each model level
print(' Appending mass_air')
! ncks -A {mass_air_filename} {mmr_filename}
# Calculate the mass of the aerosol
print(' Calculating the mass of {} in each model level'.format(aerosol))
mass_aerosol_filename = '{}/temp_marc_s2_{}_mass_{}.nc'.format(out_dir, year, aerosol)
! ncap2 -O -s 'mass_{aerosol}=mass_air*{mmr_aerosol}' {mmr_filename} {mass_aerosol_filename}
# Sum over levels to calculate column loading (and exclude unwanted variables)
print(' Summing over levels')
column_filename = '{}/temp_marc_s2_{}_column_{}.nc'.format(out_dir, year, aerosol)
! ncwa -O -x -v mass_air,{mmr_aerosol} -a lev -y sum {mass_aerosol_filename} {column_filename}
# Rename variable
print(' Renaming variable to c{}_LDG'.format(aerosol))
! ncrename -v mass_{aerosol},c{aerosol}_LDG {column_filename} >/dev/null 2>/dev/null
# Set units and long_name
print(' Setting units and long_name')
! ncatted -a 'units',c{aerosol}_LDG,o,c,'kg/m2' {column_filename}
! ncatted -a 'long_name',c{aerosol}_LDG,o,c,'{aerosol} column loading' {column_filename}
# Calculate annual means (with years starting in January)
print(' Calculating annual means')
annual_filename = '{}/marc_s2_{}_c{}_LDG_ANN.nc'.format(out_dir, year, aerosol)
! ncra -O --mro -d time,,,12,12 {column_filename} {annual_filename}
print(' Written {}'.format(annual_filename.split('/')[-1]))
# Remove three temporary files
for filename in [mmr_filename, mass_aerosol_filename, column_filename]:
print(' Removing {}'.format(filename.split('/')[-1]))
os.remove(filename)
# Remove another two temporary files
for filename in [ps_filename, mass_air_filename, nco_filename]:
print(' Removing {}'.format(filename.split('/')[-1]))
os.remove(filename)
! date
# Compare cOC_LDG (calculated above) with standard OC_LDG
for year in ['1850', '2000']:
# Input files
in_filename_1 = '{}/marc_s2_{}_OC_LDG_ANN.nc'.format(out_dir, year)
in_filename_2 = '{}/marc_s2_{}_cOC_LDG_ANN.nc'.format(out_dir, year)
# Rename cOC_LDG to OC_LDG to enable comparison
temp_filename = '{}/temp_marc_s2_{}_cOC_LDG_ANN.nc'.format(out_dir, year)
! ncrename -O -v cOC_LDG,OC_LDG {in_filename_2} {temp_filename} >/dev/null 2>/dev/null
# Compare using CDO
! cdo diffv {in_filename_1} {temp_filename}
# Remove temporary file
for filename in [temp_filename, ]:
print(' Removing {}'.format(filename.split('/')[-1]))
os.remove(filename)
! date
```
In the results above, the maximum relative difference is less than 1%. This shows that the calculation of the column loadings works as intended.
## Derive additional atmosphere variables
```
# Derived variables that require *two* input variables
derived_variable_dict = {'ctBURDENOM=BURDENPOM+BURDENSOA': ['mam3', 'mam7'], # total OM loading (MAM)
'ctOC_LDG=cOC_LDG+cOIM_LDG': ['marc_s2',], # total OC loading (MARC)
'ctBC_LDG=cBC_LDG+cBIM_LDG': ['marc_s2',], # total BC loading
'cSIMOS_LDG=cMOS_LDG-cOIM_LDG': ['marc_s2',], # SO4 in MOS loading
'cSIMBS_LDG=cMBS_LDG-cBIM_LDG': ['marc_s2',], # SO4 in MBS loading
'cPRECT=PRECC+PRECL': ['marc_s2', 'mam3', 'mam7'], # total precipitation rate
'cPRECST=PRECSC+PRECSL': ['marc_s2', 'mam3', 'mam7'], # total snow rate
'cFNTOA=FSNTOA+LWCF': ['marc_s2', 'mam3', 'mam7'], # net (SW + LW) radiative effect
'cDRE=FSNTOA-FSNTOANOA': ['marc_s2',], # direct radiative effect at TOA
'cDRE=FSNTOA-FSNTOA_d1': ['mam3', 'mam7'],
'cDREsurf=FSNS-FSNSNOA': ['marc_s2',], # direct radiative effect at surface
'cDREsurf=FSNS-FSNS_d1': ['mam3', 'mam7'],
'cAAA=cDRE-cDREsurf': ['marc_s2', 'mam3', 'mam7'], # absorption by aerosols in atmosphere
}
for derived_variable, model_list in derived_variable_dict.items():
for model in model_list:
year_list = ['1850', '2000']
for year in year_list:
print('{}, {}, {}'.format(derived_variable, model, year))
# Merge input files
            variable_list = re.split(r'=|\+|-', derived_variable)
in_filename_1 = '{}/{}_{}_{}_ANN.nc'.format(out_dir, model, year, variable_list[1])
in_filename_2 = '{}/{}_{}_{}_ANN.nc'.format(out_dir, model, year, variable_list[2])
merge_filename = '{}/temp_{}_{}_merge_ANN.nc'.format(out_dir, model, year)
! cdo -s merge {in_filename_1} {in_filename_2} {merge_filename} >/dev/null 2>/dev/null
# Calculate derived variable
out_filename = '{}/{}_{}_{}_ANN.nc'.format(out_dir, model, year, variable_list[0])
! cdo -s expr,'{derived_variable}' {merge_filename} {out_filename}
if os.path.isfile(out_filename):
print(' Written {}'.format(out_filename.split('/')[-1]))
# Remove temporary file
for filename in [merge_filename, ]:
print(' Removing {}'.format(filename.split('/')[-1]))
os.remove(filename)
! date
# ctSSLT_LDG and ctDST_LDG require *four* input variables
for sslt_or_dst in ['SSLT', 'DST']: # sea-salt or dust
variable_list = ['ct{}_LDG'.format(sslt_or_dst), ]
for size_bin in ['01', '02', '03', '04']:
variable_list.append('c{}{}_LDG'.format(sslt_or_dst, size_bin))
derived_variable = '{}={}+{}+{}+{}'.format(*variable_list)
model = 'marc_s2' # MARC only
year_list = ['1850', '2000']
for year in year_list:
print('{}, {}, {}'.format(derived_variable, model, year))
# Merge input files
in_filename_list = []
for variable in variable_list[1:]:
in_filename_list.append('{}/{}_{}_{}_ANN.nc'.format(out_dir, model,
year, variable))
merge_filename = '{}/temp_{}_{}_merge_ANN.nc'.format(out_dir, model, year)
! cdo -s merge {in_filename_list[0]} {in_filename_list[1]} \
{in_filename_list[2]} {in_filename_list[3]} {merge_filename} >/dev/null 2>/dev/null
# Calculate derived variable
out_filename = '{}/{}_{}_{}_ANN.nc'.format(out_dir, model, year, variable_list[0])
! cdo -s expr,'{derived_variable}' {merge_filename} {out_filename}
if os.path.isfile(out_filename):
print(' Written {}'.format(out_filename.split('/')[-1]))
# Remove temporary file
for filename in [merge_filename, ]:
print(' Removing {}'.format(filename.split('/')[-1]))
os.remove(filename)
! date
# ctSUL_LDG requires *seven* input variables
derived_variable_dict = {'ctSUL_LDG=cACC_LDG+cAIT_LDG+cNUC_LDG+cMOS_LDG+cMBS_LDG-cOIM_LDG-cBIM_LDG':
['marc_s2',],} # total SO4 loading
for derived_variable, model_list in derived_variable_dict.items():
for model in model_list:
year_list = ['1850', '2000']
for year in year_list:
print('{}, {}, {}'.format(derived_variable, model, year))
# Merge input files
            variable_list = re.split(r'=|\+|-', derived_variable)
in_filename_list = []
for variable in variable_list[1:]:
in_filename_list.append('{}/{}_{}_{}_ANN.nc'.format(out_dir, model, year, variable))
merge_filename = '{}/temp_{}_{}_merge_ANN.nc'.format(out_dir, model, year)
! cdo -s merge {in_filename_list[0]} {in_filename_list[1]} {in_filename_list[2]} \
{in_filename_list[3]} {in_filename_list[4]} {in_filename_list[5]} \
{in_filename_list[6]} {merge_filename} >/dev/null 2>/dev/null
# Calculate derived variable
out_filename = '{}/{}_{}_{}_ANN.nc'.format(out_dir, model, year, variable_list[0])
! cdo -s expr,'{derived_variable}' {merge_filename} {out_filename}
if os.path.isfile(out_filename):
print(' Written {}'.format(out_filename.split('/')[-1]))
# Remove temporary file
for filename in [merge_filename, ]:
print(' Removing {}'.format(filename.split('/')[-1]))
os.remove(filename)
! date
```
## Calculate annual means of land variables
```
variable_list = ['FSNO', # fraction of ground covered by snow
'SNOBCMSL', # mass of BC in top layer of snow
'BCDEP' # total BC deposition (dry+wet) from atmosphere
]
for variable in variable_list:
for model in ['marc_s2', 'mam3', 'mam7']:
for year in ['1850', '2000']:
# Check if input file exists
in_filename = '{}/p17c_{}_{}.clm2.h0.{}.nc'.format(in_dir, model, year, variable)
if os.path.isfile(in_filename):
print('{}, {}, {}'.format(variable, model, year))
# Calculate annual means using NCO (with years starting in January)
temp_filename = '{}/temp_{}_{}_{}_ANN.nc'.format(out_dir, model, year, variable)
! ncra -O --mro -d time,,,12,12 {in_filename} {temp_filename}
# Replace missing values with zero
out_filename = '{}/{}_{}_{}_ANN.nc'.format(out_dir, model, year, variable)
! cdo -s setmisstoc,0 {temp_filename} {out_filename}
print(' Written {}'.format(out_filename.split('/')[-1]))
# Remove temporary file
for filename in [temp_filename, ]:
print(' Removing {}'.format(filename.split('/')[-1]))
os.remove(filename)
! date
```
## Calculate annual means of sea ice variables and remap to lonlat grid
Note: the data are initially on a curvilinear grid.
```
# Create grid file
# Note: this grid is only approximately the same as the land grid
grid_filename = '{}/grid.txt'.format(in_dir)
grid_file = open(grid_filename, 'w')
grid_file.writelines(['gridtype = lonlat\n',
'xsize = 144\n',
'ysize = 96\n',
'xfirst = 0\n',
'xinc = 2.5\n',
'yfirst = -90\n',
'yinc = 1.89473724\n'])
grid_file.close()
print('Written {}'.format(grid_filename.split('/')[-1]))
!date
variable_list = ['fs', # grid-cell mean snow fraction over ice
]
for variable in variable_list:
for model in ['marc_s2', 'mam3', 'mam7']:
for year in ['1850', '2000']:
# Check if input file exists
in_filename = '{}/p17c_{}_{}.cice.h.{}.nc'.format(in_dir, model, year, variable)
if os.path.isfile(in_filename):
print('{}, {}, {}'.format(variable, model, year))
# Calculate annual means using NCO (with years starting in January)
annual_filename = '{}/temp_{}_{}_{}_ANN.nc'.format(out_dir, model, year, variable)
! ncra -O --mro -d time,,,12,12 {in_filename} {annual_filename}
# Apply distance weighted remapping to new grid using CDO
regrid_filename = '{}/temp_{}_{}_{}_ANN_regrid.nc'.format(out_dir, model, year, variable)
! cdo -s remapdis,{grid_filename} {annual_filename} {regrid_filename}
# Replace missing values with zero
out_filename = '{}/{}_{}_{}_ANN.nc'.format(out_dir, model, year, variable)
! cdo -s setmisstoc,0 {regrid_filename} {out_filename}
print(' Written {}'.format(out_filename.split('/')[-1]))
# Remove temporary file
for filename in [annual_filename, regrid_filename]:
print(' Removing {}'.format(filename.split('/')[-1]))
os.remove(filename)
! date
```
## Combine snow cover over land and sea ice
```
for model in ['marc_s2', 'mam3', 'mam7']:
for year in ['1850', '2000']:
# Check if input files exists
lnd_filename = '{}/{}_{}_{}_ANN.nc'.format(out_dir, model, year, 'FSNO')
ice_filename = '{}/{}_{}_{}_ANN.nc'.format(out_dir, model, year, 'fs')
        if os.path.isfile(lnd_filename) and os.path.isfile(ice_filename):
print('{}, {}'.format(model, year))
# Remap land data to identical grid as was used for ice
lnd_regrid_filename = '{}/temp_{}_{}_{}_ANN_regrid.nc'.format(out_dir, model, year, 'FSNO')
! cdo -s remapnn,{grid_filename} {lnd_filename} {lnd_regrid_filename}
# Merge land and ice data into one file
merge_filename = '{}/temp_{}_{}_merge_ANN.nc'.format(out_dir, model, year)
! cdo -s merge {lnd_regrid_filename} {ice_filename} {merge_filename}
# Combine snow cover over land and ice, weighting by land fraction
out_filename = '{}/{}_{}_{}_ANN.nc'.format(out_dir, model, year, 'cSnowCover')
derivation_str = '_oceanfrac=1-landfrac;cSnowCover=landfrac*FSNO+_oceanfrac*fs'
! cdo -s expr,'{derivation_str}' {merge_filename} {out_filename}
print(' Written {}'.format(out_filename.split('/')[-1]))
# Remove temporary files
for filename in [lnd_regrid_filename, merge_filename]:
print(' Removing {}'.format(filename.split('/')[-1]))
os.remove(filename)
! date
! date
```
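As a sanity check, the weighting performed by the CDO `expr` above can be reproduced in NumPy with hypothetical grid-cell values:

```python
import numpy as np

# hypothetical grid-cell values
landfrac = np.array([0.0, 0.25, 1.0])  # fraction of each cell that is land
FSNO = np.array([0.0, 0.8, 0.5])       # snow fraction over land
fs = np.array([0.6, 0.4, 0.0])         # snow fraction over sea ice

# same arithmetic as the CDO expr: weight land and ocean contributions
oceanfrac = 1.0 - landfrac
cSnowCover = landfrac * FSNO + oceanfrac * fs
# cSnowCover == [0.6, 0.5, 0.5]
```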
---
# A practical introduction to Reinforcement Learning
Most of you have probably heard of AI learning to play computer games on its own, a well-known example being DeepMind, whose AlphaGo program hit the news in 2016 when it defeated the South Korean Go world champion. There have also been many successful attempts to develop agents that play Atari games like Breakout, Pong, and Space Invaders.
You know what's common in most of these programs? A paradigm of Machine Learning known as **Reinforcement Learning**. For those of you who are new to RL, let's build some understanding with a few analogies.
## Reinforcement Learning Analogy
Consider the scenario of teaching a dog new tricks. The dog doesn't understand our language, so we can't tell it what to do. Instead, we follow a different strategy: we emulate a situation (or a cue), and the dog tries to respond in many different ways. If the dog's response is the desired one, we reward it with snacks. Now guess what: the next time the dog is exposed to the same situation, it executes a similar action with even more enthusiasm in expectation of more food. That's learning "what to do" from positive experiences. Similarly, dogs tend to learn what not to do when faced with negative experiences.
That's exactly how Reinforcement Learning works in a broader sense:
- Your dog is an "agent" that is exposed to the **environment**. The environment could be your house, with you in it.
- The situations they encounter are analogous to a **state**. An example of a state could be your dog standing in your living room while you use a specific word in a certain tone.
- Our agents react by performing an **action** to transition from one "state" to another "state," your dog goes from standing to sitting, for example.
- After the transition, they may receive a **reward** or **penalty** in return. You give them a treat! Or a "No" as a penalty.
- The **policy** is the strategy of choosing an action given a state in expectation of better outcomes.
Reinforcement Learning lies on the spectrum between Supervised Learning and Unsupervised Learning, and there are a few important things to note:
1. **Being greedy doesn't always work.** Some actions bring instant gratification, while others provide long-term rewards. The goal is not to chase quick immediate rewards, but to optimize for maximum reward over the whole training.
2. **Sequence matters in Reinforcement Learning.** The agent's reward does not depend only on the current state, but on the entire history of states. Unlike supervised and unsupervised learning, time is important here.
### The Reinforcement Process
In a way, Reinforcement Learning is the science of making optimal decisions using experiences.
Breaking it down, the process of Reinforcement Learning involves these simple steps:
1. Observation of the environment
2. Deciding how to act using some strategy
3. Acting accordingly
4. Receiving a reward or penalty
5. Learning from the experiences and refining our strategy
6. Iterate until an optimal strategy is found
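The six steps above can be sketched as a generic agent-environment loop. The tiny environment and the random action choice below are illustrative stand-ins (a real agent would update its strategy in step 5 rather than keep acting randomly):

```python
import random

class ToyEnv:
    """Minimal illustrative environment: states 0..3 on a line, goal at state 3."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action 1 moves right, action 0 moves left (clamped to the line)
        self.state = max(0, min(3, self.state + (1 if action == 1 else -1)))
        reward = 1 if self.state == 3 else -1  # reward at the goal, penalty otherwise
        done = self.state == 3
        return self.state, reward, done

random.seed(0)
env = ToyEnv()
state = env.reset()                         # 1. observe the environment
done = False
while not done:
    action = random.choice([0, 1])          # 2./3. decide on an action and act
    state, reward, done = env.step(action)  # 4. receive a reward or penalty
# 5./6. a learning agent would now refine its strategy and iterate
```

This reset/step/done structure is the same interface shape used by Gym environments later in this tutorial.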
Let's now understand Reinforcement Learning by actually developing an agent to learn to play a game automatically on its own.
## Example Design: Self-Driving Cab
Let's design a simulation of a self-driving cab. The major goal is to demonstrate, in a simplified environment, how you can use RL techniques to develop an efficient and safe approach for tackling this problem.
The Smartcab's job is to pick up the passenger at one location and drop them off in another. Here are a few things that we'd love our Smartcab to take care of:
- Drop off the passenger at the right location.
- Save the passenger's time by taking the minimum time possible for the drop-off.
- Take care of the passenger's safety and obey traffic rules.
There are different aspects that need to be considered here while modeling an RL solution to this problem: rewards, states, and actions.
### 1. Rewards
Since the agent (the imaginary driver) is reward-motivated and is going to learn how to control the cab through trial experiences in the environment, we need to decide the **rewards** and/or **penalties** and their magnitudes accordingly. Here are a few points to consider:
- The agent should receive a high positive reward for a successful dropoff because this behavior is highly desired
- The agent should be penalized if it tries to drop off a passenger at a wrong location
- The agent should get a slight negative reward for every time-step it has not yet reached the destination. "Slight" negative because we would prefer our agent to arrive late rather than make wrong moves trying to reach the destination as fast as possible
### 2. State Space
In Reinforcement Learning, the agent encounters a state, and then takes action according to the state it's in.
The **State Space** is the set of all possible situations our taxi could inhabit. The state should contain useful information the agent needs to make the right action.
Let's say we have a training area for our Smartcab where we are teaching it to transport people in a parking lot to four different locations (R, G, Y, B):

Let's assume Smartcab is the only vehicle in this parking lot. We can break up the parking lot into a 5x5 grid, which gives us 25 possible taxi locations. These 25 locations are one part of our state space. Notice the current location state of our taxi is coordinate (3, 1).
You'll also notice there are four (4) locations that we can pick up and drop off a passenger: R, G, Y, B or `[(0,0), (0,4), (4,0), (4,3)] ` in (row, col) coordinates. Our illustrated passenger is in location **Y** and they wish to go to location **R**.
When we also account for one (1) additional passenger state of being inside the taxi, we can take all combinations of passenger locations and destination locations to come to a total number of states for our taxi environment: there are four (4) destinations and five (4 + 1) passenger locations.
So, our taxi environment has $5 \times 5 \times 5 \times 4 = 500$ total possible states.
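The state-space arithmetic can be checked directly. The helper below mirrors the row-major enumeration that Gym's Taxi environment uses internally (treat the exact encoding order as an assumption; the count is what matters):

```python
def encode_state(taxi_row, taxi_col, passenger_loc, destination):
    """Flatten (row, col, passenger, destination) into one index in 0..499,
    following the 5 * 5 * 5 * 4 enumeration described above."""
    i = taxi_row
    i = i * 5 + taxi_col
    i = i * 5 + passenger_loc
    i = i * 4 + destination
    return i

n_states = len({encode_state(r, c, p, d)
                for r in range(5) for c in range(5)
                for p in range(5) for d in range(4)})
# n_states == 500, so every combination maps to a unique state index
```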
### 3. Action Space
The agent encounters one of the 500 states and it takes an action. The action in our case can be to move in a direction or decide to pickup/dropoff a passenger.
In other words, we have six possible actions:
1. `south`
2. `north`
3. `east`
4. `west`
5. `pickup`
6. `dropoff`
This is the **action space**: the set of all the actions that our agent can take in a given state.
You'll notice in the illustration above that the taxi cannot perform certain actions in certain states due to walls. In the environment's code, we will simply provide a -1 penalty for every wall hit, and the taxi won't move anywhere. This will just rack up penalties, causing the taxi to learn to go around the wall.
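One common way to represent a policy over such a small discrete state/action space is a table of action values, as in Q-learning. A minimal sketch (the table is all zeros here, a placeholder rather than a trained policy):

```python
import numpy as np

n_states, n_actions = 500, 6
q_table = np.zeros((n_states, n_actions))  # one learned value per (state, action) pair

# acting greedily: in state s, pick the action with the highest value so far
s = 42
best_action = int(np.argmax(q_table[s]))
```

Training would repeatedly update `q_table` from observed rewards; with an untrained all-zero table, the greedy choice is just the first action.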
## Implementation with Python
Fortunately, [OpenAI Gym](https://gym.openai.com/) has this exact environment already built for us.
Gym provides different game environments which we can plug into our code and test an agent. The library takes care of the API for providing all the information that our agent would require, like possible actions, score, and current state. We just need to focus on the algorithm part for our agent.
We'll be using the Gym environment called `Taxi-v3`, from which all of the details explained above were pulled. The objectives, rewards, and actions are all the same.
### Gym's interface
We need to install `gym` first. Executing the following in a Jupyter notebook should work:
```
!pip install cmake 'gym[atari]' scipy
```
Once installed, we can load the game environment and render what it looks like:
```
import gym
env = gym.make("Taxi-v3").env
env.render()
```
The core gym interface is `env`, which is the unified environment interface. The following are the `env` methods that would be quite helpful to us:
- `env.reset`: Resets the environment and returns a random initial state.
- `env.step(action)`: Step the environment by one timestep. Returns
+ **observation**: Observations of the environment
+ **reward**: If your action was beneficial or not
+ **done**: Indicates if we have successfully picked up and dropped off a passenger, also called one *episode*
+ **info**: Additional info such as performance and latency for debugging purposes
- `env.render`: Renders one frame of the environment (helpful in visualizing the environment)
Note: We are using the `.env` on the end of `make` to avoid training stopping at 200 iterations, which is the default for the new version of Gym ([reference](https://stackoverflow.com/a/42802225)).
### Reminder of our problem
Here's our restructured problem statement (from Gym docs):
> There are 4 locations (labeled by different letters), and our job is to pick up the passenger at one location and drop him off at another. We receive +20 points for a successful drop-off and lose 1 point for every time-step it takes. There is also a 10 point penalty for illegal pick-up and drop-off actions.
Let's dive more into the environment.
```
env.reset() # reset environment to a new, random state
env.render()
print("Action Space {}".format(env.action_space))
print("State Space {}".format(env.observation_space))
```
- The **filled square** represents the taxi, which is yellow without a passenger and green with a passenger.
- The **pipe ("|")** represents a wall which the taxi cannot cross.
- **R, G, Y, B** are the possible pickup and destination locations. The **blue letter** represents the current passenger pick-up location, and the **purple letter** is the current destination.
As verified by the prints, we have an **Action Space** of size 6 and a **State Space** of size 500. As you'll see, our RL algorithm won't need any more information than these two things. All we need is a way to identify a state uniquely by assigning a unique number to every possible state, and RL learns to choose an action number from 0-5 where:
- 0 = south
- 1 = north
- 2 = east
- 3 = west
- 4 = pickup
- 5 = dropoff
Recall that the 500 states correspond to an encoding of the taxi's location, the passenger's location, and the destination location.
Reinforcement Learning will learn a mapping of **states** to the optimal **action** to perform in that state by *exploration*, i.e. the agent explores the environment and takes actions based off rewards defined in the environment.
The optimal action for each state is the action that has the **highest cumulative long-term reward**.
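The cumulative long-term reward from time step $t$, often called the *return*, can be written as a discounted sum of future rewards (using the discount factor $\gamma$ introduced in the Q-learning section below):

$$\Large G_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}$$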
#### Back to our illustration
We can actually take our illustration above, encode its state, and give it to the environment to render in Gym. Recall that we have the taxi at row 3, column 1, our passenger is at location 2, and our destination is location 0. Using the Taxi-v3 state encoding method, we can do the following:
```
state = env.encode(3, 1, 2, 0) # (taxi row, taxi column, passenger index, destination index)
print("State:", state)
env.s = state
env.render()
```
We are using our illustration's coordinates to generate a number corresponding to a state between 0 and 499, which turns out to be **328** for our illustration's state.
Then we can set the environment's state manually with `env.s` using that encoded number. You can play around with the numbers and you'll see the taxi, passenger, and destination move around.
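The encoding itself is just mixed-radix arithmetic over (row, col, passenger index, destination index). A minimal re-implementation (our own sketch for illustration, not Gym's source code) reproduces the number:

```python
def encode(taxi_row, taxi_col, pass_loc, dest_idx):
    """Mixed-radix encoding: 5 rows x 5 cols x 5 passenger locations x 4 destinations."""
    state = taxi_row
    state = state * 5 + taxi_col   # fold in the column
    state = state * 5 + pass_loc   # fold in the passenger location
    state = state * 4 + dest_idx   # fold in the destination
    return state

print(encode(3, 1, 2, 0))  # 328
```

Running it on the illustration's coordinates gives 328, matching `env.encode(3, 1, 2, 0)`.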
#### The Reward Table
When the Taxi environment is created, there is an initial Reward table that's also created, called `P`. We can think of it like a matrix that has the number of states as rows and number of actions as columns, i.e. a $states \ \times \ actions$ matrix.
Since every state is in this matrix, we can see the default reward values assigned to our illustration's state:
```
env.P[328]
```
This dictionary has the structure `{action: [(probability, nextstate, reward, done)]}`.
A few things to note:
- The 0-5 corresponds to the actions (south, north, east, west, pickup, dropoff) the taxi can perform at our current state in the illustration.
- In this env, `probability` is always 1.0.
- The `nextstate` is the state we would be in if we take the action at this index of the dict.
- All the movement actions have a -1 reward and the pickup/dropoff actions have a -10 reward in this particular state. If we are in a state where the taxi has a passenger and is at the right destination, we would see a reward of 20 at the dropoff action (5).
- `done` is used to tell us when we have successfully dropped off a passenger in the right location. Each successful dropoff is the end of an **episode**.
Note that if our agent chose to explore action two (2) in this state, it would be going east into a wall. The source code has made it impossible to actually move the taxi across a wall, so if the taxi chooses that action, it will just keep accruing -1 penalties, which affects the **long-term reward**.
### Solving the environment without Reinforcement Learning
Let's see what would happen if we try to brute-force our way to solving the problem without RL.
Since we have our `P` table for default rewards in each state, we can try to have our taxi navigate just using that.
We'll create an infinite loop which runs until one passenger reaches one destination (one **episode**), or in other words, until the received reward is 20. The `env.action_space.sample()` method automatically selects one random action from the set of all possible actions.
Let's see what happens:
```
env.s = 328 # set environment to illustration's state
epochs = 0
penalties, reward = 0, 0
frames = [] # for animation
done = False
while not done:
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if reward == -10:
penalties += 1
# Put each rendered frame into dict for animation
frames.append({
'frame': env.render(mode='ansi'),
'state': state,
'action': action,
'reward': reward
}
)
epochs += 1
print("Timesteps taken: {}".format(epochs))
print("Penalties incurred: {}".format(penalties))
from IPython.display import clear_output
from time import sleep
def print_frames(frames):
for i, frame in enumerate(frames):
clear_output(wait=True)
#print(frame['frame'].getvalue())
print(f"Timestep: {i + 1}")
print(f"State: {frame['state']}")
print(f"Action: {frame['action']}")
print(f"Reward: {frame['reward']}")
sleep(.1)
print_frames(frames)
```
Not good. Our agent takes thousands of timesteps and makes lots of wrong drop offs to deliver just one passenger to the right destination.
This is because we aren't *learning* from past experience. We can run this over and over, and it will never optimize. The agent has no memory of which action was best for each state, which is exactly what Reinforcement Learning will do for us.
### Enter Reinforcement Learning
We are going to use a simple RL algorithm called *Q-learning* which will give our agent some memory.
#### Intro to Q-learning
Essentially, Q-learning lets the agent use the environment's rewards to learn, over time, the best action to take in a given state.
In our Taxi environment, we have the reward table, `P`, that the agent will learn from. It does this by receiving a reward for taking an action in the current state, then updating a *Q-value* to remember if that action was beneficial.
The values stored in the Q-table are called *Q-values*, and they map to a `(state, action)` combination.
A Q-value for a particular state-action combination is representative of the "quality" of an action taken from that state. Better Q-values imply better chances of getting greater rewards.
For example, if the taxi is faced with a state that includes a passenger at its current location, it is highly likely that the Q-value for `pickup` is higher when compared to other actions, like `dropoff` or `north`.
Q-values are initialized to an arbitrary value, and as the agent exposes itself to the environment and receives different rewards by executing different actions, the Q-values are updated using the equation:
$$\Large Q({\small state}, {\small action}) \leftarrow (1 - \alpha) Q({\small state}, {\small action}) + \alpha \Big({\small reward} + \gamma \max_{a} Q({\small next \ state}, {\small all \ actions})\Big)$$
Where:
- $\Large \alpha$ (alpha) is the learning rate ($0 < \alpha \leq 1$) - Just like in supervised learning settings, $\alpha$ is the extent to which our Q-values are being updated in every iteration.
- $\Large \gamma$ (gamma) is the discount factor ($0 \leq \gamma \leq 1$) - determines how much importance we want to give to future rewards. A high value for the discount factor (close to **1**) captures the long-term effective reward, whereas a discount factor of **0** makes our agent consider only immediate reward, hence making it greedy.
**What is this saying?**
We are assigning ($\leftarrow$), or updating, the Q-value of the agent's current *state* and *action* by first taking a weight ($1-\alpha$) of the old Q-value, then adding the learned value. The learned value is a combination of the reward for taking the current action in the current state, and the discounted maximum reward from the next state we will be in once we take the current action.
Basically, we are learning the proper action to take in the current state by looking at the reward for the current state/action combo, and the max rewards for the next state. This will eventually cause our taxi to consider the route with the best rewards strung together.
The Q-value of a state-action pair is the sum of the instant reward and the discounted future reward (of the resulting state).
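The update rule above is a single line of arithmetic. Here is a sketch with made-up numbers (the reward and Q-values below are illustrative, not taken from the Taxi environment):

```python
def q_update(old_value, reward, next_max, alpha=0.1, gamma=0.6):
    """Blend the old Q-value with the newly observed (reward + discounted best next Q)."""
    return (1 - alpha) * old_value + alpha * (reward + gamma * next_max)

# Example: old Q = 0, reward = -1, best Q-value in the next state = 2
print(q_update(0.0, -1, 2.0))  # 0.1 * (-1 + 0.6 * 2) = approximately 0.02
```

With `alpha = 1` the old value is discarded entirely; with `alpha` close to 0, new information barely moves the estimate.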
The way we store the Q-values for each state and action is through a **Q-table**
##### Q-Table
The Q-table is a matrix where we have a row for every state (500) and a column for every action (6). It's first initialized to 0, and then values are updated after training. Note that the Q-table has the same dimensions as the reward table, but it has a completely different purpose.
<img src="assets/q-matrix-initialized-to-learned.png" width=500px>
#### Summing up the Q-Learning Process
Breaking it down into steps, we get:
- Initialize the Q-table to all zeros.
- Start exploring actions: for the current state (S), select any one among all possible actions.
- Travel to the next state (S') as a result of that action (a).
- For all possible actions from the state (S'), select the one with the highest Q-value.
- Update the Q-table values using the equation above.
- Set the next state as the current state.
- If the goal state is reached, end the episode and repeat the process.
##### Exploiting learned values
After enough random exploration of actions, the Q-values tend to converge, serving our agent as an action-value function which it can exploit to pick the optimal action from a given state.
There's a tradeoff between exploration (choosing a random action) and exploitation (choosing actions based on already learned Q-values). We want to prevent the agent from always taking the same route, and possibly overfitting, so we'll introduce another parameter, $\Large \epsilon$ ("epsilon"), to handle this during training.
Instead of just selecting the best learned Q-value action, we'll sometimes favor exploring the action space further. A higher epsilon value results in episodes with more penalties (on average), which makes sense: we are exploring and making random decisions more often.
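The epsilon-greedy choice can be sketched as a small helper (our own function for illustration, assuming a NumPy Q-table; it is not part of Gym's API):

```python
import numpy as np

def choose_action(q_table, state, epsilon, rng=None):
    """Epsilon-greedy: explore with probability epsilon, otherwise exploit."""
    rng = rng or np.random.default_rng()
    if rng.uniform(0, 1) < epsilon:
        return int(rng.integers(q_table.shape[1]))  # explore: random action
    return int(np.argmax(q_table[state]))           # exploit: best known action
```

With `epsilon = 0` this always picks the best learned action; with `epsilon = 1` it always acts randomly.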
### Implementing Q-learning in Python
#### Training the Agent
First, we'll initialize the Q-table to a $500 \times 6$ matrix of zeros:
```
import numpy as np
q_table = np.zeros([env.observation_space.n, env.action_space.n])
q_table
```
We can now create the training algorithm that will update this Q-table as the agent explores the environment over thousands of episodes.
In the first part of `while not done`, we decide whether to pick a random action or to exploit the already computed Q-values. This is done simply by using the `epsilon` value and comparing it to the `random.uniform(0, 1)` function, which returns an arbitrary number between 0 and 1.
We execute the chosen action in the environment to obtain the `next_state` and the `reward` from performing the action. After that, we calculate the maximum Q-value for the actions corresponding to the `next_state`, and with that, we can easily update our Q-value to the `new_q_value`:
```
%%time
"""Training the agent"""
import random
from IPython.display import clear_output
import matplotlib.pyplot as plt
import seaborn as sns
from time import sleep
%matplotlib inline
# Hyperparameters
alpha = 0.1
gamma = 0.6
epsilon = 0.1
# For plotting metrics
all_epochs = []
all_penalties = []
for i in range(1, 100001):
state = env.reset()
epochs, penalties, reward = 0, 0, 0
done = False
while not done:
if random.uniform(0, 1) < epsilon:
action = env.action_space.sample() # Explore the action space
else:
action = np.argmax(q_table[state]) # Exploit learned values
next_state, reward, done, info = env.step(action)
old_value = q_table[state, action] # Current Q-value for (state, action)
next_max = np.max(q_table[next_state])
new_value = (1 - alpha) * old_value + alpha * (reward + gamma * next_max)
q_table[state, action] = new_value
if reward == -10:
penalties += 1
state = next_state
epochs += 1
if i % 100 == 0:
clear_output(wait=True)
print(f"Episode: {i}")
print("Training finished.\n")
```
Now that the Q-table has been established over 100,000 episodes, let's see what the Q-values are at our illustration's state:
```
q_table[328]
```
The max Q-value corresponds to "north" (-1.971), so it looks like Q-learning has effectively learned the best action to take in our illustration's state!
### Evaluating the agent
Let's evaluate the performance of our agent. We don't need to explore actions any further, so now the next action is always selected using the best Q-value:
```
"""Evaluate agent's performance after Q-learning"""
total_epochs, total_penalties = 0, 0
episodes = 100
for _ in range(episodes):
state = env.reset()
epochs, penalties, reward = 0, 0, 0
done = False
while not done:
action = np.argmax(q_table[state])
state, reward, done, info = env.step(action)
if reward == -10:
penalties += 1
epochs += 1
total_penalties += penalties
total_epochs += epochs
print(f"Results after {episodes} episodes:")
print(f"Average timesteps per episode: {total_epochs / episodes}")
print(f"Average penalties per episode: {total_penalties / episodes}")
```
We can see from the evaluation that the agent's performance improved significantly and it incurred no penalties, which means it performed the correct pickup/dropoff actions with 100 different passengers.
#### Comparing our Q-learning agent to no Reinforcement Learning
With Q-learning, the agent commits errors initially during exploration, but once it has explored enough (seen most of the states), it can act wisely, maximizing rewards by making smart moves. Let's see how much better our Q-learning solution is compared to an agent making just random moves.
We evaluate our agents according to the following metrics:
- **Average number of penalties per episode:** The smaller the number, the better the performance of our agent. Ideally, we would like this metric to be zero or very close to zero.
- **Average number of timesteps per trip:** We want a small number of timesteps per episode as well, since we want our agent to take the minimum number of steps (i.e. the shortest path) to reach the destination.
- **Average rewards per move:** A larger reward means the agent is doing the right thing. That's why deciding rewards is a crucial part of Reinforcement Learning. In our case, as both timesteps and penalties are negatively rewarded, a higher average reward means the agent reaches the destination as fast as possible with the fewest penalties.
| Measure | Random agent's performance | Q-learning agent's performance |
|----------------------------------------- |-------------------------- |-------------------------------- |
| Average rewards per move | -3.9012092102214075 | 0.6962843295638126 |
| Average number of penalties per episode | 920.45 | 0.0 |
| Average number of timesteps per trip | 2848.14 | 12.38 |
These metrics were computed over 100 episodes. And as the results show, our Q-learning agent nailed it!
#### Hyperparameters and optimizations
The values of `alpha`, `gamma`, and `epsilon` were based mostly on intuition and some trial and error, but there are better ways to come up with good values.
Ideally, all three should decrease over time because as the agent continues to learn, it actually builds up more resilient priors:
- $\Large \alpha$: (the learning rate) should decrease as you continue to gain a larger and larger knowledge base.
- $\Large \gamma$: as you get closer and closer to the deadline, your preference for near-term reward should increase, as you won't be around long enough to get the long-term reward, which means your gamma should decrease.
- $\Large \epsilon$: as we develop our strategy, we have less need of exploration and more exploitation to get more utility from our policy, so as trials increase, epsilon should decrease.
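One common way to implement "decrease over time" is an exponential decay schedule. This is a sketch: the start value, floor, and decay rate below are assumptions for illustration, not values used in this tutorial.

```python
def decayed(start, floor, decay_rate, episode):
    """Exponentially decay a hyperparameter from `start` toward `floor` as episodes progress."""
    return floor + (start - floor) * (decay_rate ** episode)

# e.g. epsilon decaying from 1.0 toward 0.05 across episodes
for ep in (0, 100, 1000):
    print(ep, round(decayed(1.0, 0.05, 0.995, ep), 3))
```

The same schedule can be applied to `alpha` and `gamma`, each with its own floor and rate.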
#### Tuning the hyperparameters
A simple way to programmatically come up with the best set of hyperparameter values is to create a comprehensive search function (similar to [grid search](https://en.wikipedia.org/wiki/Hyperparameter_optimization#Grid_search)) that selects the parameters resulting in the best `reward/time_steps` ratio. The reason for `reward/time_steps` is that we want to choose parameters which enable us to get the maximum reward as fast as possible. We may want to track the number of penalties corresponding to each hyperparameter value combination as well, because this can also be a deciding factor (we don't want our smart agent to violate rules just to reach the destination faster). A fancier way to get the right combination of hyperparameter values would be to use Genetic Algorithms.
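Such a search might be sketched as follows. Note that `train_and_score` is a hypothetical stand-in: in practice it would run the Q-learning loop above with the given hyperparameters and return the measured `reward/time_steps` ratio, whereas here it is a dummy scoring function so the skeleton runs standalone:

```python
from itertools import product

# Candidate hyperparameter values (illustrative choices, not tuned)
alphas   = [0.1, 0.3, 0.5]
gammas   = [0.6, 0.8, 0.99]
epsilons = [0.05, 0.1, 0.2]

def train_and_score(alpha, gamma, epsilon):
    # Hypothetical placeholder: a real version would train an agent and
    # return its average reward/time_steps over evaluation episodes.
    return -(alpha - 0.1)**2 - (gamma - 0.8)**2 - (epsilon - 0.1)**2

# Exhaustively try every combination and keep the best-scoring one
best = max(product(alphas, gammas, epsilons),
           key=lambda params: train_and_score(*params))
print("best (alpha, gamma, epsilon):", best)
```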
## Conclusion and What's Ahead
Alright! We began with understanding Reinforcement Learning with the help of real-world analogies. We then dived into the basics of Reinforcement Learning and framed a Self-driving cab as a Reinforcement Learning problem. We then used OpenAI's Gym in python to provide us with a related environment, where we can develop our agent and evaluate it. Then we observed how terrible our agent was without using any algorithm to play the game, so we went ahead to implement the Q-learning algorithm from scratch. The agent's performance improved significantly after Q-learning. Finally, we discussed better approaches for deciding the hyperparameters for our algorithm.
Q-learning is one of the easiest Reinforcement Learning algorithms. The problem with Q-learning, however, is that once the number of states in the environment becomes very large, it becomes difficult to implement with a Q-table, as its size would become very, very large. State-of-the-art techniques use deep neural networks instead of a Q-table (Deep Reinforcement Learning). The neural network feeds state information and actions to the input layer and learns to output the right action over time. Deep learning techniques (like Convolutional Neural Networks) are also used to interpret the pixels on the screen and extract information out of the game (like scores), letting the agent control the game.
We have discussed a lot about Reinforcement Learning and games, but Reinforcement Learning is not just limited to games. It is used for managing stock portfolios and finances, for making humanoid robots, for manufacturing and inventory management, and to develop general AI agents, which are agents that can perform multiple things with a single algorithm, like the same agent playing multiple Atari games. OpenAI also has a platform called Universe for measuring and training an AI's general intelligence across a myriad of games, websites, and other applications.
```
#install.packages("caTools", repo="http://cran.itam.mx")
#R.version
path<- "C:/Users/Martin/Documents/Tareas UNISON/Termodinamica/Laboratorio/Informe 5/Datos"
setwd(path)
library("ggplot2")
library("reshape2")
library("dplyr")
library("plotly")
D1 <- read.csv("Datos1.csv" ,header=TRUE, sep="," , stringsAsFactors=FALSE)
#head(D1,8)
D2 <- read.csv("Datos2.csv" ,header=TRUE, sep="," , stringsAsFactors=FALSE)
#head(D1,8)
D3 <- read.csv("Datos3.csv" ,header=TRUE, sep="," , stringsAsFactors=FALSE)
#head(D1,8)
D4 <- read.csv("Datos4-1.csv" ,header=TRUE, sep="," , stringsAsFactors=FALSE)
#head(D1,8)
D5 <- read.csv("Datos5-1.csv" ,header=TRUE, sep="," , stringsAsFactors=FALSE)
#head(D1,8)
D6 <- read.csv("Datos6-1.csv" ,header=TRUE, sep="," , stringsAsFactors=FALSE)
#head(D1,8)
# Rename the mangled CSV column names to short ones for all six data frames
fix_names <- function(df) {
  names(df)[names(df)=="Tiempo...s.."] <- "Tiempo"
  names(df)[names(df)=="T...K.."] <- "T"
  names(df)[names(df)=="V...m3.."] <- "V"
  names(df)[names(df)=="P...kPa.."] <- "P"
  df
}
D1 <- fix_names(D1); D2 <- fix_names(D2); D3 <- fix_names(D3)
D4 <- fix_names(D4); D5 <- fix_names(D5); D6 <- fix_names(D6)
#head(D1); tail(D1)
k1<-ggplot(D1, aes(x=Tiempo,y=T)) +
#theme_bw() +
geom_point(col="blue") +
geom_line(col="black")+
#stat_smooth(method = "loess", span=0.1,colour="black")+
#geom_hline(aes(yintercept = MT), linetype= "dashed", size =1.25)+
labs( x="Tiempo(s)", y="Temperatura (K)", title= "Datos 1")
#ggplotly(k)
k1
k2<-ggplot(D2, aes(x=Tiempo,y=T)) +
#theme_bw() +
geom_point(col="blue") +
geom_line(col="black")+
#stat_smooth(method = "loess", span=0.1,colour="black")+
#geom_hline(aes(yintercept = MT), linetype= "dashed", size =1.25)+
labs( x="Tiempo(s)", y="Temperatura (K)", title= "Datos 2")
#ggplotly(k)
k2
k3<-ggplot(D3, aes(x=Tiempo,y=T)) +
#theme_bw() +
geom_point(col="blue") +
geom_line(col="black")+
#stat_smooth(method = "loess", span=0.1,colour="black")+
#geom_hline(aes(yintercept = MT), linetype= "dashed", size =1.25)+
labs( x="Tiempo(s)", y="Temperatura (K)", title= "Datos 3")
#ggplotly(k)
k3
k4<-ggplot(D4, aes(x=Tiempo,y=T)) +
#theme_bw() +
geom_point(col="blue", size = 0.5) +
#geom_line(col="black")+
#stat_smooth(method = "loess", span=0.1,colour="black")+
#geom_hline(aes(yintercept = MT), linetype= "dashed", size =1.25)+
labs( x="Tiempo(s)", y="Temperatura (K)", title= "Datos 4")
#ggplotly(k4)
k4
k5<-ggplot(D5, aes(x=Tiempo,y=T)) +
#theme_bw() +
geom_point(col="blue") +
geom_line(col="black")+
#stat_smooth(method = "loess", span=0.1,colour="black")+
#geom_hline(aes(yintercept = MT), linetype= "dashed", size =1.25)+
labs( x="Tiempo(s)", y="Temperatura (K)", title= "Datos 5")
#ggplotly(k)
k5
k6<-ggplot(D6, aes(x=Tiempo,y=T)) +
#theme_bw() +
geom_point(col="blue") +
geom_line(col="black")+
#stat_smooth(method = "loess", span=0.1,colour="black")+
#geom_hline(aes(yintercept = MT), linetype= "dashed", size =1.25)+
labs( x="Tiempo(s)", y="Temperatura (K)", title= "Datos 6")
#ggplotly(k)
k6
ggplot() + theme_bw() +
geom_point(data=D1, aes(x=Tiempo, y=T), color='blue', size =1) +
geom_point(data=D2, aes(x=Tiempo, y=T), color='green', size =1) +
geom_point(data=D3, aes(x=Tiempo, y=T), color='red', size =1)+
geom_line(data=D1, aes(x=Tiempo, y=T), color='darkblue', size =0.75) +
geom_line(data=D2, aes(x=Tiempo, y=T), color='darkgreen', size =0.75) +
geom_line(data=D3, aes(x=Tiempo, y=T), color='darkred', size =0.75)+
labs( x="Tiempo(s)", y="Temperatura (K)", title= "Procesos Isotermicos")
ggplot() + theme_bw() +
geom_point(data=D4, aes(x=Tiempo, y=T), color='blue', size =1) +
geom_point(data=D5, aes(x=Tiempo, y=T), color='green', size =1) +
geom_point(data=D6, aes(x=Tiempo, y=T), color='red', size =1)+
geom_line(data=D4, aes(x=Tiempo, y=T), color='darkblue', size =0.75) +
geom_line(data=D5, aes(x=Tiempo, y=T), color='darkgreen', size =0.75) +
geom_line(data=D6, aes(x=Tiempo, y=T), color='darkred', size =0.75)+
labs( x="Tiempo(s)", y="Temperatura (K)", title= "Procesos Adiabaticos")
iso<-ggplot() + theme_bw() +
geom_point(data = D1, aes(x=V,y=P), color = 1, size = 0.75) +
geom_point(data = D2, aes(x=V,y=P), color = 2, size = 0.75) +
geom_point(data = D3, aes(x=V,y=P), color = 3, size = 0.75) +
scale_x_log10() +
scale_y_log10() +
labs( x="Volumen ( m3 )", y="Presion ( kPa )", title= "Proceso Isotermico")
#ggplotly(iso)
iso
require(caTools)
x <- trapz(D1$V, D1$P)
y <- trapz(D2$V, D2$P)
z <- trapz(D3$V, D3$P)
x;y;z
(x+y+z)/3
adb<-ggplot() + theme_bw() +
geom_point(data = D4, aes(x=V,y=P), color = 1, size = 0.75) +
geom_point(data = D5, aes(x=V,y=P), color = 2, size = 0.75) +
geom_point(data = D6, aes(x=V,y=P), color = 3, size = 0.75) +
scale_x_log10() +
scale_y_log10() +
labs( x="Volumen ( m3 )", y="Presion ( kPa )", title= "Proceso Adiabatico")
#ggplotly(adb)
adb
require(caTools)
a <- trapz(D4$V, D4$P)
b <- trapz(D5$V, D5$P)
c <- trapz(D6$V, D6$P)
a;b;c
(a+b+c)/3
```
## Convolutional Neural Networks
## Project: Write an Algorithm for a Dog Identification App
---
In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'(IMPLEMENTATION)'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
The rubric contains _optional_ "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this IPython notebook.
---
### Why We're Here
In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).

In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!
### The Road Ahead
We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.
* [Step 0](#step0): Import Datasets
* [Step 1](#step1): Detect Humans
* [Step 2](#step2): Detect Dogs
* [Step 3](#step3): Create a CNN to Classify Dog Breeds (from Scratch)
* [Step 4](#step4): Use a CNN to Classify Dog Breeds (using Transfer Learning)
* [Step 5](#step5): Create a CNN to Classify Dog Breeds (using Transfer Learning)
* [Step 6](#step6): Write your Algorithm
* [Step 7](#step7): Test Your Algorithm
---
<a id='step0'></a>
## Step 0: Import Datasets
### Import Dog Dataset
In the code cell below, we import a dataset of dog images. We populate a few variables through the use of the `load_files` function from the scikit-learn library:
- `train_files`, `valid_files`, `test_files` - numpy arrays containing file paths to images
- `train_targets`, `valid_targets`, `test_targets` - numpy arrays containing onehot-encoded classification labels
- `dog_names` - list of string-valued dog breed names for translating labels
```
from sklearn.datasets import load_files
from keras.utils import np_utils
import numpy as np
from glob import glob
# define function to load train, test, and validation datasets
def load_dataset(path):
data = load_files(path)
dog_files = np.array(data['filenames'])
dog_targets = np_utils.to_categorical(np.array(data['target']), 133)
return dog_files, dog_targets
# load train, test, and validation datasets
train_files, train_targets = load_dataset('/data/dog_images/train')
valid_files, valid_targets = load_dataset('/data/dog_images/valid')
test_files, test_targets = load_dataset('/data/dog_images/test')
# load list of dog names
dog_names = [item[20:-1] for item in sorted(glob("/data/dog_images/train/*/"))]
# print statistics about the dataset
print('There are %d total dog categories.' % len(dog_names))
print('There are %s total dog images.\n' % len(np.hstack([train_files, valid_files, test_files])))
print('There are %d training dog images.' % len(train_files))
print('There are %d validation dog images.' % len(valid_files))
print('There are %d test dog images.'% len(test_files))
!ls /data/lfw/Monica_Bellucci/Monica_Bellucci_0002.jpg
train_files.shape
```
### Import Human Dataset
In the code cell below, we import a dataset of human images, where the file paths are stored in the numpy array `human_files`.
```
import random
random.seed(8675309)
# load filenames in shuffled human dataset
human_files = np.array(glob("/data/lfw/*/*"))
random.shuffle(human_files)
# print statistics about the dataset
print('There are %d total human images.' % len(human_files))
```
---
<a id='step1'></a>
## Step 1: Detect Humans
We use OpenCV's implementation of [Haar feature-based cascade classifiers](http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html) to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on [github](https://github.com/opencv/opencv/tree/master/data/haarcascades). We have downloaded one of these detectors and stored it in the `haarcascades` directory.
In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.
```
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[3])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x, y, w, h) in faces:
    # add bounding box to color image
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
```
Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The `detectMultiScale` function executes the classifier stored in `face_cascade` and takes the grayscale image as a parameter.
In the above code, `faces` is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as `x` and `y`) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as `w` and `h`) specify the width and height of the box.
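Put concretely, since `x`/`y` index columns/rows respectively while numpy arrays index rows first, cropping a detected face out of the image is a two-slice operation. A minimal sketch on a synthetic array (no OpenCV needed; the box values are made up):

```python
import numpy as np

# Toy 100x100 grayscale "image" standing in for the real photo.
img = np.zeros((100, 100), dtype=np.uint8)

# One detection in the (x, y, w, h) format returned by detectMultiScale.
x, y, w, h = 10, 20, 30, 40

# Rows come from the vertical coordinate y, columns from the horizontal x.
face_roi = img[y:y + h, x:x + w]
print(face_roi.shape)  # (40, 30), i.e. (height, width)
```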
### Write a Human Face Detector
We can use this procedure to write a function that returns `True` if a human face is detected in an image and `False` otherwise. This function, aptly named `face_detector`, takes a string-valued file path to an image as input and appears in the code block below.
```
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0
```
### (IMPLEMENTATION) Assess the Human Face Detector
__Question 1:__ Use the code cell below to test the performance of the `face_detector` function.
- What percentage of the first 100 images in `human_files` have a detected human face?
- What percentage of the first 100 images in `dog_files` have a detected human face?
Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays `human_files_short` and `dog_files_short`.
__Answer:__
```
human_files_short = human_files[:100]
dog_files_short = train_files[:100]
# Do NOT modify the code above this line.
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
human_correct_label = 0
dog_incorrect_label = 0
for human, dog in zip(human_files_short, dog_files_short):
    human_correct_label += int(face_detector(human))
    dog_incorrect_label += int(face_detector(dog))
print(f'Total humans correctly labeled out of 100 is {human_correct_label}')
print(f'Total dogs incorrectly labeled as human out of 100 is {dog_incorrect_label}')
```
__Question 2:__ This algorithmic choice necessitates that we communicate to the user that we accept human images only when they provide a clear view of a face (otherwise, we risk having unnecessarily frustrated users!). In your opinion, is this a reasonable expectation to pose on the user? If not, can you think of a way to detect humans in images that does not necessitate an image with a clearly presented face?
__Answer:__
We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this _optional_ task, report performance on each of the datasets.
```
## (Optional) TODO: Report the performance of another
## face detection algorithm on the LFW dataset
### Feel free to use as many code cells as needed.
%ls data
```
---
<a id='step2'></a>
## Step 2: Detect Dogs
In this section, we use a pre-trained [ResNet-50](http://ethereon.github.io/netscope/#/gist/db945b393d40bfa26006) model to detect dogs in images. Our first line of code downloads the ResNet-50 model, along with weights that have been trained on [ImageNet](http://www.image-net.org/), a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of [1000 categories](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a). Given an image, this pre-trained ResNet-50 model returns a prediction (derived from the available categories in ImageNet) for the object that is contained in the image.
```
from keras.applications.resnet50 import ResNet50
# define ResNet50 model
ResNet50_model = ResNet50(weights='imagenet')
```
### Pre-process the Data
When using TensorFlow as backend, Keras CNNs require a 4D array (which we'll also refer to as a 4D tensor) as input, with shape
$$
(\text{nb_samples}, \text{rows}, \text{columns}, \text{channels}),
$$
where `nb_samples` corresponds to the total number of images (or samples), and `rows`, `columns`, and `channels` correspond to the number of rows, columns, and channels for each image, respectively.
The `path_to_tensor` function below takes a string-valued file path to a color image as input and returns a 4D tensor suitable for supplying to a Keras CNN. The function first loads the image and resizes it to a square image that is $224 \times 224$ pixels. Next, the image is converted to an array, which is then resized to a 4D tensor. In this case, since we are working with color images, each image has three channels. Likewise, since we are processing a single image (or sample), the returned tensor will always have shape
$$
(1, 224, 224, 3).
$$
The `paths_to_tensor` function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape
$$
(\text{nb_samples}, 224, 224, 3).
$$
Here, `nb_samples` is the number of samples, or number of images, in the supplied array of image paths. It is best to think of `nb_samples` as the number of 3D tensors (where each 3D tensor corresponds to a different image) in your dataset!
```
from keras.preprocessing import image
from tqdm import tqdm
def path_to_tensor(img_path):
    # loads RGB image as PIL.Image.Image type
    img = image.load_img(img_path, target_size=(224, 224))
    # convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
    x = image.img_to_array(img)
    # convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
    return np.expand_dims(x, axis=0)

def paths_to_tensor(img_paths):
    list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
    return np.vstack(list_of_tensors)
```
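The shape bookkeeping in those two functions can be checked with plain numpy, using zero arrays as stand-ins for decoded images:

```python
import numpy as np

# Stand-ins for three decoded 224x224 RGB images (normally produced by
# image.img_to_array); zero arrays are enough to verify the shapes.
images = [np.zeros((224, 224, 3), dtype=np.float32) for _ in range(3)]

# path_to_tensor's last step: prepend a sample axis.
tensors = [np.expand_dims(x, axis=0) for x in images]
print(tensors[0].shape)  # (1, 224, 224, 3)

# paths_to_tensor's last step: stack along the sample axis.
batch = np.vstack(tensors)
print(batch.shape)  # (3, 224, 224, 3)
```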
### Making Predictions with ResNet-50
Getting the 4D tensor ready for ResNet-50, and for any other pre-trained model in Keras, requires some additional processing. First, the RGB image is converted to BGR by reordering the channels. All pre-trained models have the additional normalization step that the mean pixel (expressed in BGR order as $[103.939, 116.779, 123.68]$ and calculated from all pixels in all images in ImageNet) must be subtracted from every pixel in each image. This is implemented in the imported function `preprocess_input`. If you're curious, you can check the code for `preprocess_input` [here](https://github.com/fchollet/keras/blob/master/keras/applications/imagenet_utils.py).
Now that we have a way to format our image for supplying to ResNet-50, we are now ready to use the model to extract the predictions. This is accomplished with the `predict` method, which returns an array whose $i$-th entry is the model's predicted probability that the image belongs to the $i$-th ImageNet category. This is implemented in the `ResNet50_predict_labels` function below.
By taking the argmax of the predicted probability vector, we obtain an integer corresponding to the model's predicted object class, which we can identify with an object category through the use of this [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a).
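As a rough numpy sketch of the two preprocessing steps just described (channel reordering plus mean subtraction; the real `preprocess_input` also handles data formats and dtypes, so this is illustrative only):

```python
import numpy as np

def preprocess_input_sketch(x):
    # Reorder RGB -> BGR along the last axis, then subtract the ImageNet
    # mean pixel (given here in BGR order).
    x = x[..., ::-1].astype(np.float64)
    return x - np.array([103.939, 116.779, 123.68])

batch = np.full((1, 224, 224, 3), 128.0)  # dummy mid-grey image batch
out = preprocess_input_sketch(batch)
print(out.shape)  # (1, 224, 224, 3)
```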
```
from keras.applications.resnet50 import preprocess_input, decode_predictions
def ResNet50_predict_labels(img_path):
    # returns prediction vector for image located at img_path
    img = preprocess_input(path_to_tensor(img_path))
    return np.argmax(ResNet50_model.predict(img))
```
### Write a Dog Detector
While looking at the [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from `'Chihuahua'` to `'Mexican hairless'`. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained ResNet-50 model, we need only check if the `ResNet50_predict_labels` function above returns a value between 151 and 268 (inclusive).
We use these ideas to complete the `dog_detector` function below, which returns `True` if a dog is detected in an image (and `False` if not).
```
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
    prediction = ResNet50_predict_labels(img_path)
    return 151 <= prediction <= 268
```
### (IMPLEMENTATION) Assess the Dog Detector
__Question 3:__ Use the code cell below to test the performance of your `dog_detector` function.
- What percentage of the images in `human_files_short` have a detected dog?
- What percentage of the images in `dog_files_short` have a detected dog?
__Answer:__
```
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
human_detected_as_dog = 0
dog_detected_as_dog = 0
for human, dog in zip(human_files_short, dog_files_short):
    human_detected_as_dog += int(dog_detector(human))
    dog_detected_as_dog += int(dog_detector(dog))
print(f'Humans incorrectly detected as dogs out of 100: {human_detected_as_dog}')
print(f'Dogs correctly detected as dogs out of 100: {dog_detected_as_dog}')
```
---
<a id='step3'></a>
## Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN _from scratch_ (so, you can't use transfer learning _yet_!), and you must attain a test accuracy of at least 1%. In Step 5 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.
Be careful with adding too many trainable layers! More parameters means longer training, which means you are more likely to need a GPU to accelerate the training process. Thankfully, Keras provides a handy estimate of the time that each epoch is likely to take; you can extrapolate this estimate to figure out how long it will take for your algorithm to train.
We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that *even a human* would have great difficulty in distinguishing between a Brittany and a Welsh Springer Spaniel.
Brittany | Welsh Springer Spaniel
- | -
<img src="images/Brittany_02625.jpg" width="100"> | <img src="images/Welsh_springer_spaniel_08203.jpg" width="200">
It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).
Curly-Coated Retriever | American Water Spaniel
- | -
<img src="images/Curly-coated_retriever_03896.jpg" width="200"> | <img src="images/American_water_spaniel_00648.jpg" width="200">
Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.
Yellow Labrador | Chocolate Labrador | Black Labrador
- | - | -
<img src="images/Labrador_retriever_06457.jpg" width="150"> | <img src="images/Labrador_retriever_06455.jpg" width="240"> | <img src="images/Labrador_retriever_06449.jpg" width="220">
We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.
Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!
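The chance baseline above is easy to confirm by simulation; a quick numpy sketch (the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_images = 133, 100_000

# Guess a uniformly random breed for every image and score the guesses.
truth = rng.integers(0, n_classes, size=n_images)
guess = rng.integers(0, n_classes, size=n_images)
accuracy = (truth == guess).mean()
print(round(accuracy, 4))  # close to 1/133 ~ 0.0075
```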
### Pre-process the Data
We rescale the images by dividing every pixel in every image by 255.
```
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
# pre-process the data for Keras
train_tensors = paths_to_tensor(train_files).astype('float32')/255
valid_tensors = paths_to_tensor(valid_files).astype('float32')/255
test_tensors = paths_to_tensor(test_files).astype('float32')/255
train_tensors.shape
```
### (IMPLEMENTATION) Model Architecture
Create a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line:
model.summary()
We have imported some Python modules to get you started, but feel free to import as many modules as you need. If you end up getting stuck, here's a hint that specifies a model that trains relatively fast on CPU and attains >1% test accuracy in 5 epochs:

__Question 4:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. If you chose to use the hinted architecture above, describe why you think that CNN architecture should work well for the image classification task.
__Answer:__
```
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential
model = Sequential()
### TODO: Define your architecture.
model.add(Conv2D(filters=16, kernel_size=(2, 2), strides=(1, 1), padding='same',
                 activation='relu', input_shape=(224, 224, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=32, kernel_size=(2, 2), strides=(1, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=64, kernel_size=(2, 2), strides=(1, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(232, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(133, activation='softmax'))
model.summary()
```
### Compile the Model
```
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
```
### (IMPLEMENTATION) Train the Model
Train your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.
You are welcome to [augment the training data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html), but this is not a requirement.
```
from keras.callbacks import ModelCheckpoint
### TODO: specify the number of epochs that you would like to use to train the model.
epochs = 10
### Do NOT modify the code below this line.
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5',
                               verbose=1, save_best_only=True)
model.fit(train_tensors, train_targets,
          validation_data=(valid_tensors, valid_targets),
          epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)
# from keras.preprocessing.image import ImageDataGenerator
# batch_size = 16
# train_datagen = ImageDataGenerator(
#     shear_range=0.2,
#     zoom_range=0.2,
#     horizontal_flip=True)
# validation_datagen = ImageDataGenerator()
# test_datagen = ImageDataGenerator()
# # this is a generator that will read pictures found in
# # subfolders of 'data/train', and indefinitely generate
# # batches of augmented image data
# train_generator = train_datagen.flow_from_directory(
#     '/data/dog_images/train',
#     target_size=(224, 224),
#     batch_size=batch_size,
#     class_mode='categorical')
# validation_generator = validation_datagen.flow_from_directory(
#     '/data/dog_images/valid/',
#     target_size=(224, 224),
#     batch_size=batch_size,
#     class_mode='categorical')
# test_generator = test_datagen.flow_from_directory(
#     '/data/dog_images/test/',
#     target_size=(224, 224),
#     batch_size=batch_size,
#     class_mode='categorical')
# ### TODO: specify the number of epochs that you would like to use to train the model.
# epochs = 10
# ### Do NOT modify the code below this line.
# checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch_aug.hdf5',
#                                verbose=1, save_best_only=True)
# model.fit_generator(
#     train_generator,
#     steps_per_epoch=2000 // batch_size,
#     epochs=epochs,
#     validation_data=validation_generator,
#     validation_steps=800 // batch_size,
#     callbacks=[checkpointer], verbose=1)
```
### Load the Model with the Best Validation Loss
```
model.load_weights('saved_models/weights.best.from_scratch.hdf5')
```
### Test the Model
Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 1%.
```
# get index of predicted dog breed for each image in test set
dog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0))) for tensor in test_tensors]
# report test accuracy
test_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
```
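Predicting one image at a time is simple but slow; given the full probability matrix (e.g. from `model.predict(test_tensors)`), the same accuracy drops out of one vectorized pass. A numpy sketch with random stand-in data:

```python
import numpy as np

# Stand-ins: one probability row per image plus integer ground-truth labels
# (in practice probs = model.predict(test_tensors), labels from test_targets).
rng = np.random.default_rng(1)
probs = rng.random((50, 133))
true_labels = rng.integers(0, 133, size=50)

predicted = probs.argmax(axis=1)                      # predicted breed index per image
test_accuracy = 100 * (predicted == true_labels).mean()
print('Test accuracy: %.4f%%' % test_accuracy)
```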
---
<a id='step4'></a>
## Step 4: Use a CNN to Classify Dog Breeds
To reduce training time without sacrificing accuracy, we show you how to train a CNN using transfer learning. In the following step, you will get a chance to use transfer learning to train your own CNN.
### Obtain Bottleneck Features
```
bottleneck_features = np.load('/data/bottleneck_features/DogVGG16Data.npz')
train_VGG16 = bottleneck_features['train']
valid_VGG16 = bottleneck_features['valid']
test_VGG16 = bottleneck_features['test']
```
### Model Architecture
The model uses the pre-trained VGG-16 model as a fixed feature extractor, where the last convolutional output of VGG-16 is fed as input to our model. We only add a global average pooling layer and a fully connected layer, where the latter contains one node for each dog category and is equipped with a softmax.
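Global average pooling simply averages each channel's spatial feature map down to one number; in numpy terms (the `(7, 7, 512)` shape assumed here is what VGG-16's last convolutional block produces for 224×224 inputs):

```python
import numpy as np

# Stand-in for the bottleneck features of a batch of 2 images:
# (samples, rows, cols, channels) = (2, 7, 7, 512).
features = np.random.default_rng(2).random((2, 7, 7, 512))

# Average over the two spatial axes, collapsing (2, 7, 7, 512) to (2, 512).
pooled = features.mean(axis=(1, 2))
print(pooled.shape)  # (2, 512)
```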
```
VGG16_model = Sequential()
VGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))
VGG16_model.add(Dense(133, activation='softmax'))
VGG16_model.summary()
```
### Compile the Model
```
VGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
```
### Train the Model
```
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.VGG16.hdf5',
                               verbose=1, save_best_only=True)
VGG16_model.fit(train_VGG16, train_targets,
                validation_data=(valid_VGG16, valid_targets),
                epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)
```
### Load the Model with the Best Validation Loss
```
VGG16_model.load_weights('saved_models/weights.best.VGG16.hdf5')
```
### Test the Model
Now, we can use the CNN to test how well it identifies breed within our test dataset of dog images. We print the test accuracy below.
```
# get index of predicted dog breed for each image in test set
VGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG16]
# report test accuracy
test_accuracy = 100*np.sum(np.array(VGG16_predictions)==np.argmax(test_targets, axis=1))/len(VGG16_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
```
### Predict Dog Breed with the Model
```
from extract_bottleneck_features import *
def VGG16_predict_breed(img_path):
    # extract bottleneck features
    bottleneck_feature = extract_VGG16(path_to_tensor(img_path))
    # obtain predicted vector
    predicted_vector = VGG16_model.predict(bottleneck_feature)
    # return dog breed that is predicted by the model
    return dog_names[np.argmax(predicted_vector)]
```
---
<a id='step5'></a>
## Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)
You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.
In Step 4, we used transfer learning to create a CNN using VGG-16 bottleneck features. In this section, you must use the bottleneck features from a different pre-trained model. To make things easier for you, we have pre-computed the features for all of the networks that are currently available in Keras. These are already in the workspace, at /data/bottleneck_features. If you wish to download them on a different machine, they can be found at:
- [VGG-19](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogVGG19Data.npz) bottleneck features
- [ResNet-50](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogResnet50Data.npz) bottleneck features
- [Inception](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogInceptionV3Data.npz) bottleneck features
- [Xception](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogXceptionData.npz) bottleneck features
The files are encoded as such:
Dog{network}Data.npz
where `{network}`, in the above filename, can be one of `VGG19`, `Resnet50`, `InceptionV3`, or `Xception`.
The above architectures are downloaded and stored for you in the `/data/bottleneck_features/` folder.
This means the following will be in the `/data/bottleneck_features/` folder:
`DogVGG19Data.npz`
`DogResnet50Data.npz`
`DogInceptionV3Data.npz`
`DogXceptionData.npz`
### (IMPLEMENTATION) Obtain Bottleneck Features
In the code block below, extract the bottleneck features corresponding to the train, test, and validation sets by running the following:
bottleneck_features = np.load('/data/bottleneck_features/Dog{network}Data.npz')
train_{network} = bottleneck_features['train']
valid_{network} = bottleneck_features['valid']
test_{network} = bottleneck_features['test']
```
### TODO: Obtain bottleneck features from another pre-trained CNN.
bottleneck_features = np.load('/data/bottleneck_features/DogResnet50Data.npz')
train_res50 = bottleneck_features['train']
valid_res50 = bottleneck_features['valid']
test_res50 = bottleneck_features['test']
```
### (IMPLEMENTATION) Model Architecture
Create a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line:
<your model's name>.summary()
__Question 5:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.
__Answer:__
```
### TODO: Define your architecture.
res50_model = Sequential()
res50_model.add(GlobalAveragePooling2D(input_shape=train_res50.shape[1:]))
res50_model.add(Dense(512, activation='relu'))
res50_model.add(Dropout(0.5))
res50_model.add(Dense(133, activation='softmax'))
res50_model.summary()
```
### (IMPLEMENTATION) Compile the Model
```
### TODO: Compile the model.
res50_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
```
### (IMPLEMENTATION) Train the Model
Train your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.
You are welcome to [augment the training data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html), but this is not a requirement.
```
### TODO: Train the model.
### TODO: specify the number of epochs that you would like to use to train the model.
epochs = 100
### Do NOT modify the code below this line.
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.res50.hdf5',
                               verbose=1, save_best_only=True)
res50_model.fit(train_res50, train_targets,
                validation_data=(valid_res50, valid_targets),
                epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)
```
### (IMPLEMENTATION) Load the Model with the Best Validation Loss
```
### TODO: Load the model weights with the best validation loss.
res50_model.load_weights('saved_models/weights.best.res50.hdf5')
```
### (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 60%.
```
### TODO: Calculate classification accuracy on the test dataset.
resnet50_predictions = [np.argmax(res50_model.predict(np.expand_dims(feature, axis=0))) for feature in test_res50]
test_accuracy = 100 * np.sum(np.array(resnet50_predictions) == np.argmax(test_targets, axis=1))/len(resnet50_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
```
### (IMPLEMENTATION) Predict Dog Breed with the Model
Write a function that takes an image path as input and returns the dog breed (`Affenpinscher`, `Afghan_hound`, etc) that is predicted by your model.
Similar to the analogous function in Step 4, your function should have three steps:
1. Extract the bottleneck features corresponding to the chosen CNN model.
2. Supply the bottleneck features as input to the model to return the predicted vector. Note that the argmax of this prediction vector gives the index of the predicted dog breed.
3. Use the `dog_names` array defined in Step 0 of this notebook to return the corresponding breed.
The functions to extract the bottleneck features can be found in `extract_bottleneck_features.py`, and they have been imported in an earlier code cell. To obtain the bottleneck features corresponding to your chosen CNN architecture, you need to use the function
extract_{network}
where `{network}`, in the above filename, should be one of `VGG19`, `Resnet50`, `InceptionV3`, or `Xception`.
```
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
def Resnet50_predict_breed(img_path):
    bottleneck_feature = extract_Resnet50(path_to_tensor(img_path))
    predicted_vector = res50_model.predict(bottleneck_feature)
    return dog_names[np.argmax(predicted_vector)]
```
---
<a id='step6'></a>
## Step 6: Write your Algorithm
Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,
- if a __dog__ is detected in the image, return the predicted breed.
- if a __human__ is detected in the image, return the resembling dog breed.
- if __neither__ is detected in the image, provide output that indicates an error.
You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the `face_detector` and `dog_detector` functions developed above. You are __required__ to use your CNN from Step 5 to predict dog breed.
Some sample output for our algorithm is provided below, but feel free to design your own user experience!

### (IMPLEMENTATION) Write your Algorithm
```
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
def prediction_maker(img_path):
    img = cv2.imread(img_path)
    cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    plt.imshow(cv_rgb)
    if dog_detector(img_path):
        print("Dog")
        print("Guessed breed is {}".format(Resnet50_predict_breed(img_path)))
    elif face_detector(img_path):
        print("Human")
        print("Resembling breed is {}".format(Resnet50_predict_breed(img_path)))
    else:
        print("Neither a human nor a dog was detected")
```
---
<a id='step7'></a>
## Step 7: Test Your Algorithm
In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that __you__ look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?
### (IMPLEMENTATION) Test Your Algorithm on Sample Images!
Test your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.
__Question 6:__ Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.
__Answer:__
```
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
test_samples = glob("images/*")
test_samples
prediction_maker('images/American_water_spaniel_00648.jpg')
prediction_maker('images/sample_human_output.png')
prediction_maker('/data/lfw/Monica_Bellucci/Monica_Bellucci_0002.jpg')
```
# Please download your notebook to submit
In order to submit, please do the following:
1. Download an HTML version of the notebook to your computer using 'File: Download as...'
2. Click on the orange Jupyter circle on the top left of the workspace.
3. Navigate into the dog-project folder to ensure that you are using the provided dog_images, lfw, and bottleneck_features folders; this means that those folders will *not* appear in the dog-project folder. If they do appear because you downloaded them, delete them.
4. While in the dog-project folder, upload the HTML version of this notebook you just downloaded. The upload button is on the top right.
5. Navigate back to the home folder by clicking on the two dots next to the folder icon, and then open up a terminal under the 'new' tab on the top right
6. Zip the dog-project folder with the following command in the terminal:
`zip -r dog-project.zip dog-project`
7. Download the zip file by clicking on the square next to it and selecting 'download'. This will be the zip file you turn in on the next node after this workspace!
```
from google.colab import drive
drive.mount('/content/drive')
path = '/content/drive/MyDrive/Research/AAAI/complexity/50_200/'
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```
# Generate dataset
```
mu1 = np.array([3,3,3,3,0])
sigma1 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu2 = np.array([4,4,4,4,0])
sigma2 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu3 = np.array([10,5,5,10,0])
sigma3 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu4 = np.array([-10,-10,-10,-10,0])
sigma4 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu5 = np.array([-21,4,4,-21,0])
sigma5 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu6 = np.array([-10,18,18,-10,0])
sigma6 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu7 = np.array([4,20,4,20,0])
sigma7 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu8 = np.array([4,-20,-20,4,0])
sigma8 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu9 = np.array([20,20,20,20,0])
sigma9 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu10 = np.array([20,-10,-10,20,0])
sigma10 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
np.random.seed(12)
sample1 = np.random.multivariate_normal(mean=mu1,cov= sigma1,size=500)
sample2 = np.random.multivariate_normal(mean=mu2,cov= sigma2,size=500)
sample3 = np.random.multivariate_normal(mean=mu3,cov= sigma3,size=500)
sample4 = np.random.multivariate_normal(mean=mu4,cov= sigma4,size=500)
sample5 = np.random.multivariate_normal(mean=mu5,cov= sigma5,size=500)
sample6 = np.random.multivariate_normal(mean=mu6,cov= sigma6,size=500)
sample7 = np.random.multivariate_normal(mean=mu7,cov= sigma7,size=500)
sample8 = np.random.multivariate_normal(mean=mu8,cov= sigma8,size=500)
sample9 = np.random.multivariate_normal(mean=mu9,cov= sigma9,size=500)
sample10 = np.random.multivariate_normal(mean=mu10,cov= sigma10,size=500)
X = np.concatenate((sample1,sample2,sample3,sample4,sample5,sample6,sample7,sample8,sample9,sample10),axis=0)
Y = np.concatenate((np.zeros((500,1)),np.ones((500,1)),2*np.ones((500,1)),3*np.ones((500,1)),4*np.ones((500,1)),
5*np.ones((500,1)),6*np.ones((500,1)),7*np.ones((500,1)),8*np.ones((500,1)),9*np.ones((500,1))),axis=0).astype(int)
print(X[0], Y[0])
print(X[500], Y[500])
class SyntheticDataset(Dataset):
    """Synthetic Gaussian-mixture dataset."""
def __init__(self, x, y):
"""
Args:
x: list of instance
y: list of instance label
"""
self.x = x
self.y = y
#self.fore_idx = fore_idx
def __len__(self):
return len(self.y)
def __getitem__(self, idx):
return self.x[idx] , self.y[idx] #, self.fore_idx[idx]
trainset = SyntheticDataset(X,Y)
classes = ('zero','one','two','three','four','five','six','seven','eight','nine')
foreground_classes = {'zero','one','two'}
fg_used = '012'
fg1, fg2, fg3 = 0,1,2
all_classes = {'zero','one','two','three','four','five','six','seven','eight','nine'}
background_classes = all_classes - foreground_classes
print("background classes ",background_classes)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=False)
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=100
for i in range(50):
    images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch versions
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
print(foreground_data[0], foreground_label[0] )
def create_mosaic_img(bg_idx,fg_idx,fg):
"""
bg_idx : list of indexes of background_data[] to be used as background images in mosaic
fg_idx : index of image to be used as foreground image from foreground data
fg : at what position/index foreground image has to be stored out of 0-8
"""
image_list=[]
j=0
for i in range(9):
if i != fg:
image_list.append(background_data[bg_idx[j]])
j+=1
else:
image_list.append(foreground_data[fg_idx])
    label = foreground_label[fg_idx] - fg1  # subtract fg1 because our foreground classes are fg1,fg2,fg3 but we store them as 0,1,2
#image_list = np.concatenate(image_list ,axis=0)
image_list = torch.stack(image_list)
return image_list,label
desired_num = 6000
mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images
fore_idx =[] # list of positions at which the foreground image is present in a mosaic image, i.e. from 0 to 8
mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
list_set_labels = []
for i in range(desired_num):
set_idx = set()
np.random.seed(i)
bg_idx = np.random.randint(0,3500,8)
set_idx = set(background_label[bg_idx].tolist())
fg_idx = np.random.randint(0,1500)
set_idx.add(foreground_label[fg_idx].item())
fg = np.random.randint(0,9)
fore_idx.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
mosaic_list_of_images.append(image_list)
mosaic_label.append(label)
list_set_labels.append(set_idx)
len(mosaic_list_of_images), mosaic_list_of_images[0]
```
# load mosaic data
```
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
    def __init__(self, mosaic_list, mosaic_label, fore_idx):
        """
        Args:
            mosaic_list: list of mosaic images, each saved as a stack of 9 elemental images
            mosaic_label: label of each mosaic image (its foreground class)
            fore_idx: position (0-8) of the foreground image within each mosaic
        """
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]
batch = 250
msd1 = MosaicDataset(mosaic_list_of_images[0:3000], mosaic_label[0:3000] , fore_idx[0:3000])
train_loader = DataLoader( msd1 ,batch_size= batch ,shuffle=True)
batch = 250
msd2 = MosaicDataset(mosaic_list_of_images[3000:6000], mosaic_label[3000:6000] , fore_idx[3000:6000])
test_loader = DataLoader( msd2 ,batch_size= batch ,shuffle=True)
```
# models
```
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer with input-50-output architecture
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,50, bias=False) #,self.output)
self.linear2 = nn.Linear(50,self.output, bias=False)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.xavier_normal_(self.linear2.weight)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,self.d], dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
for i in range(self.K):
x[:,i] = self.helper(z[:,i] )[:,0] # self.d*i:self.d*i+self.d
log_x = F.log_softmax(x,dim=1)
x = F.softmax(x,dim=1) # alphas
x1 = x[:,0]
for i in range(self.K):
x1 = x[:,i]
y = y+torch.mul(x1[:,None],z[:,i]) # self.d*i:self.d*i+self.d
return y , x , log_x
def helper(self,x):
x = F.relu(self.linear1(x))
x = self.linear2(x)
return x
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer with input-50-output architecture
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,200)
self.linear2 = nn.Linear(200,self.output)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.zeros_(self.linear2.bias)
def forward(self,x):
x = F.relu(self.linear1(x))
x = self.linear2(x)
return x
torch.manual_seed(12)
focus_net = Focus_deep(2,1,9,2).double()
focus_net = focus_net.to("cuda")
focus_net.linear1.weight.shape,focus_net.linear2.weight.shape
focus_net.linear1.weight.data[25:,:] = focus_net.linear1.weight.data[:25,:] #torch.nn.Parameter(torch.tensor([last_layer]) )
(focus_net.linear1.weight[:25,:]== focus_net.linear1.weight[25:,:] )
focus_net.linear2.weight.data[:,25:] = -focus_net.linear2.weight.data[:,:25] #torch.nn.Parameter(torch.tensor([last_layer]) )
focus_net.linear2.weight
focus_net.helper( torch.randn((1,5,2)).double().to("cuda") )
criterion = nn.CrossEntropyLoss()
def my_cross_entropy(x, y,alpha,log_alpha,k):
# log_prob = -1.0 * F.log_softmax(x, 1)
# loss = log_prob.gather(1, y.unsqueeze(1))
# loss = loss.mean()
loss = criterion(x,y)
#alpha = torch.clamp(alpha,min=1e-10)
b = -1.0* alpha * log_alpha
b = torch.mean(torch.sum(b,dim=1))
closs = loss
entropy = b
loss = (1-k)*loss + ((k)*b)
return loss,closs,entropy
def calculate_attn_loss(dataloader,what,where,criter,k):
what.eval()
where.eval()
r_loss = 0
cc_loss = 0
cc_entropy = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha,log_alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
#ent = np.sum(entropy(alpha.cpu().detach().numpy(), base=2, axis=1))/batch
# mx,_ = torch.max(alpha,1)
# entropy = np.mean(-np.log2(mx.cpu().detach().numpy()))
# print("entropy of batch", entropy)
#loss = (1-k)*criter(outputs, labels) + k*ent
loss,closs,entropy = my_cross_entropy(outputs,labels,alpha,log_alpha,k)
r_loss += loss.item()
cc_loss += closs.item()
cc_entropy += entropy.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
    return r_loss/(i+1),cc_loss/(i+1),cc_entropy/(i+1),analysis  # average over the number of batches (i is the last batch index)
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
```
# training
```
number_runs = 10
full_analysis =[]
FTPT_analysis = pd.DataFrame(columns = ["FTPT","FFPT", "FTPF","FFPF"])
k = 0
for n in range(number_runs):
print("--"*40)
# instantiate focus and classification Model
torch.manual_seed(n)
where = Focus_deep(5,1,9,5).double()
where.linear1.weight.data[25:,:] = where.linear1.weight.data[:25,:]
where.linear2.weight.data[:,25:] = -where.linear2.weight.data[:,:25]
where = where.double().to("cuda")
    print(where.helper( torch.randn((1,9,5)).double().to("cuda")))  # helper expects the last dimension to equal the input size (5)
what = Classification_deep(5,3).double()
where = where.to("cuda")
what = what.to("cuda")
# instantiate optimizer
optimizer_where = optim.Adam(where.parameters(),lr =0.001)
optimizer_what = optim.Adam(what.parameters(), lr=0.001)
#criterion = nn.CrossEntropyLoss()
acti = []
analysis_data = []
loss_curi = []
epochs = 2000
# calculate zeroth epoch loss and FTPT values
running_loss ,_,_,anlys_data= calculate_attn_loss(train_loader,what,where,criterion,k)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
# training starts
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha,log_alpha = where(inputs)
outputs = what(avg)
my_loss,_,_ = my_cross_entropy(outputs,labels,alpha,log_alpha,k)
# print statistics
running_loss += my_loss.item()
my_loss.backward()
optimizer_where.step()
optimizer_what.step()
#break
running_loss,ccloss,ccentropy,anls_data = calculate_attn_loss(train_loader,what,where,criterion,k)
analysis_data.append(anls_data)
if(epoch % 200==0):
print('epoch: [%d] loss: %.3f celoss: %.3f entropy: %.3f' %(epoch + 1,running_loss,ccloss,ccentropy))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.01:
print('breaking in epoch: ', epoch)
break
print('Finished Training run ' +str(n))
#break
analysis_data = np.array(analysis_data)
FTPT_analysis.loc[n] = analysis_data[-1,:4]/30
full_analysis.append((epoch, analysis_data))
correct = 0
total = 0
with torch.no_grad():
for data in test_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha,log_alpha = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 test images: %f %%' % ( 100 * correct / total))
print(np.mean(np.array(FTPT_analysis),axis=0))
FTPT_analysis
FTPT_analysis[FTPT_analysis['FTPT']+FTPT_analysis['FFPT'] > 90 ]
print(np.mean(np.array(FTPT_analysis[FTPT_analysis['FTPT']+FTPT_analysis['FFPT'] > 90 ]),axis=0))
cnt=1
for epoch, analysis_data in full_analysis:
analysis_data = np.array(analysis_data)
# print("="*20+"run ",cnt,"="*20)
plt.figure(figsize=(6,5))
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0]/30,label="FTPT")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1]/30,label="FFPT")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2]/30,label="FTPF")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3]/30,label="FFPF")
plt.title("Training trends for run "+str(cnt))
plt.grid()
# plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.legend()
plt.xlabel("epochs", fontsize=14, fontweight = 'bold')
plt.ylabel("percentage train data", fontsize=14, fontweight = 'bold')
plt.savefig(path + "run"+str(cnt)+".png",bbox_inches="tight")
plt.savefig(path + "run"+str(cnt)+".pdf",bbox_inches="tight")
cnt+=1
FTPT_analysis.to_csv(path+"synthetic_zeroth.csv",index=False)
```
<a href="https://colab.research.google.com/github/MonitSharma/Learn-Quantum-Computing/blob/main/Circuit_Basics.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install qiskit
```
# Qiskit Basics
```
import numpy as np
from qiskit import QuantumCircuit
# building a circuit
qc = QuantumCircuit(3)
# adding gates
qc.h(0)
qc.cx(0,1)
qc.cx(0,2)
qc.draw('mpl')
```
## Simulating the Circuits
```
from qiskit.quantum_info import Statevector
# setting the initial state to 0
state = Statevector.from_int(0,2**3)
state = state.evolve(qc)
state.draw('latex')
from qiskit.quantum_info import Statevector
# setting the initial state to 1
state = Statevector.from_int(1,2**3)
state = state.evolve(qc)
state.draw('latex')
```
Below we use the visualization functions to plot the qsphere and a Hinton diagram representing the real and imaginary components of the state density matrix $\rho$
```
state.draw('qsphere')
state.draw('hinton')
```
## Unitary Representation of a Circuit
The `quantum_info` module of Qiskit provides an `Operator` class that can be used to build the unitary operator of a circuit.
```
from qiskit.quantum_info import Operator
U = Operator(qc)
U.data
```
## Open QASM backend
The simulators above are useful because they provide information about the output state and the matrix representation of the circuit.
Next we look at simulators that let us measure the circuit.
```
qc2 = QuantumCircuit(3,3)
qc2.barrier(range(3))
# do the measurement
qc2.measure(range(3), range(3))
qc2.draw('mpl')
# now, if we want to add both the qc and qc2 circuit
circ = qc2.compose(qc, range(3), front = True)
circ.draw('mpl')
```
This circuit adds a classical register and three measurements that map the qubit outcomes to the classical bits.
To simulate this circuit we use the 'qasm_simulator' in Qiskit Aer. Each single run will yield a bit string $000$ or $111$. To build up statistics about the distribution, we need to repeat the circuit many times.
The number of times the circuit is repeated is specified via the 'shots' keyword when running the backend.
```
from qiskit import transpile
# import the qasm simulator
from qiskit.providers.aer import QasmSimulator
backend = QasmSimulator()
# first transpile the quantum circuit to low level QASM instructions
qc_compiled = transpile(circ, backend)
# execute the circuit
job_sim = backend.run(qc_compiled, shots=1024)
# get the result
result_sim = job_sim.result()
```
Now that the job has run, we can count the specific outputs it received and plot them.
```
counts = result_sim.get_counts(qc_compiled)
print(counts)
from qiskit.visualization import plot_histogram
plot_histogram(counts)
```
This script performs analyses to check how many mice pass the currently set criterion for ephys.
```
import datajoint as dj
dj.config['database.host'] = 'datajoint.internationalbrainlab.org'
from ibl_pipeline import subject, acquisition, action, behavior, reference, data
from ibl_pipeline.analyses.behavior import PsychResults, SessionTrainingStatus
from ibl_pipeline.utils import psychofit as psy
from ibl_pipeline.analyses import behavior as behavior_analysis
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Get list of subjects associated to the repeated site probe trajectory from ONE (original snippet from Gaelle Chapuis)
from oneibl.one import ONE
one = ONE()
traj = one.alyx.rest('trajectories', 'list', provenance='Planned',
x=-2243, y=-2000, # repeated site coordinate
project='ibl_neuropixel_brainwide_01')
sess = [p['session'] for p in traj]
first_pass_map_repeated = [(s['subject'],s['start_time'][0:10]) for s in sess]
# Download all ephys sessions from DataJoint
sess_ephys = (acquisition.Session * subject.Subject * behavior_analysis.SessionTrainingStatus ) & 'task_protocol LIKE "%ephys%"'
# & 'task_protocol LIKE "%biased%"' & 'session_start_time < "2019-09-30"')
df = pd.DataFrame(sess_ephys)
```
The following code computes how many `ephys` sessions are considered `good_enough_for_brainwide_map`:
- across *all* ephys sessions;
- across the ephys sessions in the first-pass map for the repeated site.
```
session_dates = df['session_start_time'].apply(lambda x : x.strftime("%Y-%m-%d"))
# First, count all mice
total = len(df.index)
good_enough = np.sum(df['good_enough_for_brainwide_map'])
prc = good_enough / total * 100
print('Total # of ephys sessions: '+ str(total))
print('Total # of sessions good_enough_for_brainwide_map: ' + str(good_enough) + ' (' + "{:.1f}".format(prc) + ' %)')
# Now, consider only mice in the first pass map, repeated site
count = 0
for (mouse_name,session_date) in first_pass_map_repeated:
tmp = df[(df['subject_nickname'] == mouse_name) & (session_dates == session_date)]
count = count + np.sum(tmp['good_enough_for_brainwide_map'])
total = len(first_pass_map_repeated)
good_enough = count
prc = good_enough / total * 100
print('Total # of ephys sessions in first pass map, repeated site: '+ str(total))
print('Total # of sessions good_enough_for_brainwide_map in first pass map, repeated site: ' + str(good_enough) + ' (' + "{:.1f}".format(prc) + ' %)')
```
The following code computes how many sessions are required for a mouse to reach certain levels of training or protocols, in particular:
- from `trained` status to `biased` protocol
- from `biased` protocol to `ready4ephys` status
```
mice_list = set(df['subject_nickname'])
trained2biased = []
biased2ready4ephys = []
for mouse_name in mice_list:
subj_string = 'subject_nickname LIKE "' + mouse_name + '"'
sess_mouse = (acquisition.Session * subject.Subject * behavior_analysis.SessionTrainingStatus ) & subj_string
df1 = pd.DataFrame(sess_mouse)
# Find first session of training
trained_start = np.argmax(df1['training_status'].apply(lambda x: 'trained' in x))
if 'trained' not in df1['training_status'][trained_start]:
trained_start = None
# Find first session of biased protocol
biased_start = np.argmax(df1['task_protocol'].apply(lambda x: 'biased' in x))
if 'biased' not in df1['task_protocol'][biased_start]:
biased_start = None
# Find first session of ephys
ready4ephys_start = np.argmax(df1['training_status'].apply(lambda x: 'ready4ephys' in x))
if 'ready4ephys' not in df1['training_status'][ready4ephys_start]:
ready4ephys_start = None
    if ready4ephys_start is not None and trained_start is not None and biased_start is not None:
trained2biased.append(biased_start - trained_start)
biased2ready4ephys.append(ready4ephys_start - biased_start)
trained2biased = np.array(trained2biased)
biased2ready4ephys = np.array(biased2ready4ephys)
flag = trained2biased > 0
print('# Mice: ' + str(np.sum(flag)))
print('# Sessions from "trained" to "biased": ' + "{:.2f}".format(np.mean(trained2biased[flag])) + ' +/- '+ "{:.2f}".format(np.std(trained2biased[flag])))
print('# Sessions from "biased" to "ready4ephys": ' + "{:.2f}".format(np.mean(biased2ready4ephys[flag])) + ' +/- '+ "{:.2f}".format(np.std(biased2ready4ephys[flag])))
```
```
# %load ../../templates/load_libs.py
import sys
from pyspark.ml.classification import LogisticRegression, NaiveBayes, DecisionTreeClassifier, GBTClassifier, \
RandomForestClassifier
# set project directory for shared library
PROJECT_DIR='/home/jovyan/work/amazon-review-validator'
if PROJECT_DIR not in sys.path:
sys.path.insert(0, PROJECT_DIR)
from libs.utils import hello
hello()
# %load ../../templates/load_data.py
import pyspark as ps
from pyspark.sql.types import StructField, StructType, StringType, IntegerType
DATA_FILE = '../../data/amazon_reviews_us_Camera_v1_00.tsv.gz'
APP_NAME = 'EDA'
FEATURES = ['star_rating', 'review_body', 'helpful_votes', 'total_votes', 'verified_purchase', 'review_date']
SAMPLE_SIZE = 10000
review_schema = StructType(
[StructField('marketplace', StringType(), True),
StructField('customer_id', StringType(), True),
StructField('review_id', StringType(), True),
StructField('product_id', StringType(), True),
StructField('product_parent', StringType(), True),
StructField('product_title', StringType(), True),
StructField('product_category', StringType(), True),
StructField('star_rating', IntegerType(), True),
StructField('helpful_votes', IntegerType(), True),
StructField('total_votes', IntegerType(), True),
StructField('vine', StringType(), True),
StructField('verified_purchase', StringType(), True),
StructField('review_headline', StringType(), True),
StructField('review_body', StringType(), True),
StructField('review_date', StringType(), True)])
spark = (ps.sql.SparkSession.builder
.master("local[1]")
.appName(APP_NAME)
.getOrCreate()
)
sc = spark.sparkContext
df = spark.read.format("csv") \
.option("header", "true") \
.option("sep", "\t") \
.schema(review_schema) \
.load(DATA_FILE)
df.createOrReplaceTempView("dfTable")
# the SQL queries below reference "eda_sql_view", so register that view as well
df.createOrReplaceTempView("eda_sql_view")
review_all = df.select(FEATURES)
review_sample = df.select(FEATURES).limit(SAMPLE_SIZE).cache()
spark.sql(
"select customer_id, count(if (star_rating==1,1,NULL)) as one_star, count(if (star_rating==2,1,NULL)) as two_star, count(if (star_rating==3,1,NULL)) as tre_star, count(if (star_rating==4,1,NULL)) as qua_star, count(if (star_rating==5,1,NULL)) as cin_star from eda_sql_view group by customer_id order by one_star desc ").show()
spark.sql(
"select customer_id, count(if (star_rating==1,1,NULL)) as one_star, count(if (star_rating==2,1,NULL)) as two_star, count(if (star_rating==3,1,NULL)) as tre_star, count(if (star_rating==4,1,NULL)) as qua_star, count(if (star_rating==5,1,NULL)) as cin_star from eda_sql_view group by customer_id order by cin_star desc ").show()
spark.sql(
"SELECT (COUNT(IF (verified_purchase == 'Y', 1, NULL))/COUNT(*)) as percentage_verified_purchase FROM eda_sql_view").show()
spark.sql(
"SELECT (COUNT(IF (star_rating >3, 1, NULL))/COUNT(*)) as percentage_star_rating FROM eda_sql_view").show()
spark.sql(
"select verified_purchase, count(verified_purchase) as counts from EDA_sql_view group by verified_purchase order by counts desc ").show()
spark.sql(
"select star_rating, count(star_rating) as counts from EDA_sql_view group by star_rating order by counts desc ").show()
spark.sql("select avg(helpful_votes) as average, min(helpful_votes) as min, max(helpful_votes) as max from eda_sql_view").show()
spark.sql("select avg(review_body) as average, min(review_body) as min, max(review_body) as max from eda_sql_view").show()
spark.sql(
"select avg(length(review_body)) as average, min(length(review_body)) as min, max(length(review_body)) as max from eda_sql_view").show()
len(df.columns)
spark.sql(
"Select count( distinct product_id) from eda_sql_view").show()
```
Link for exercises
https://python-textbok.readthedocs.io/en/1.0/Classes.html
Classes and types are themselves objects, and they are of type type. You can find out the type of any object using the type function:
type(any_object)
The data values which we store inside an object are called attributes, and the functions which are associated with the object are called methods. We have already used the methods of some built-in objects, like strings and lists.
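The claim above — that classes are themselves objects of type `type` — can be checked directly. A minimal sketch (the `Person` name here is just a placeholder):

```python
class Person:
    pass

p = Person()
print(type(p))       # the instance's type is the class: <class '__main__.Person'>
print(type(Person))  # the class itself is an object of type 'type'
print(type(type))    # type is even its own type
```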
In the example below we import the datetime module so that we can work with date objects. Inside the class we define __init__, whose first parameter is conventionally named self: whenever a method is called on an instance, that instance is passed in automatically as self, so inside the method body we write self.name = name, self.surname = surname, and so on to attach each constructor argument to the instance. Accessing attributes through self also means that a missing attribute raises an AttributeError for that specific object, so mistakes surface instead of passing silently.
```
import datetime # we will use this for date objects
class Person:
def __init__(self, name, surname, birthdate, address, telephone, email): #_init_ is everything that it wil try to run
self.name = name
self.surname = surname
self.birthdate = birthdate
self.address = address
self.telephone = telephone
self.email = email
def age(self):
today = datetime.date.today()
age = today.year - self.birthdate.year
if today < datetime.date(today.year, self.birthdate.month, self.birthdate.day):
age -= 1
return age
person = Person(
"Jane",
"Doe",
datetime.date(1981, 10, 12), # year, month, day
"No. 12 Short Street, Greenville",
"555 456 0987",
"jane.doe@example.com"
)
print(person.name)
print(person.email)
print(person.age())
```
INFORMATION
1. Explain what the following variables refer to, and their scope:
2. Person # the name of the class
3. person # the instance of the class that we created
4. surname # one of the object's attributes (data values), passed in when the class object is called
5. self # the instance itself, automatically passed as the first parameter of every method; accessing attributes through self ties them to the specific object, so a missing attribute raises an error instead of passing silently
6. age (the function name) # the method that computes the person's age
7. age (the variable used inside the function) # a local variable; its scope is limited to the method body
8. self.email # the email attribute accessed from inside the object's methods
9. person.email # the same attribute accessed from outside, through the instance
Note:
num - 1: produces the result of subtracting one from num; num is not changed
num -= 1: subtracts one from num and stores that result (equivalent to num = num - 1 when num is a number)
Note that you can use num - 1 as an expression since it produces a result, e.g. foo = num - 1, or print(num - 1), but you cannot use num -= 1 as an expression in Python.
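The difference can be checked directly:

```python
num = 5
print(num - 1)   # expression: prints 4, num itself is unchanged
print(num)       # still 5
num -= 1         # statement: rebinds num to num - 1
print(num)       # now 4
```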
CLASS DEFINITION
We start the class definition with the class keyword, followed by the class name and a colon. We would list any parent classes in between round brackets before the colon, but this class doesn’t have any, so we can leave them out.
Inside the class body, we define two functions – these are our object’s methods.
The first is called __init__, which is a special method. When we call the class object, a new instance of the class is created, and the __init__ method on this new object is immediately executed with all the parameters that we passed to the class object. The purpose of this method is thus to set up a new object using data that we have provided.
The second method is a custom method which calculates the age of our person using the birthdate and the current date.
You may have noticed that both of these method definitions have self as the first parameter, and we use this variable inside the method bodies – but we don’t appear to pass this parameter in. This is because whenever we call a method on an object, the object itself is automatically passed in as the first parameter. This gives us a way to access the object’s properties from inside the object’s methods.
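A minimal sketch of this point (the `Greeter` class is just an illustration): calling a method on an instance is equivalent to calling the function on the class and passing the instance in explicitly.

```python
class Greeter:
    def hello(self):
        # `self` is the instance the method was called on
        return "hello from " + type(self).__name__

g = Greeter()
# These two calls are equivalent: the instance is passed in as `self`.
print(g.hello())
print(Greeter.hello(g))
```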
# Datasets processing
## Import and preprocessing
```
import pandas as pd
pd.set_option('display.max_colwidth', None)
import warnings
warnings.filterwarnings("ignore")
#INPS
ht_inps=pd.read_csv('../data/raw/Enti/INPS/Hashtags.csv')
ht_inps['type'] = 'hashtag'
mn_inps=pd.read_csv('../data/raw/Enti/INPS/Mentions.csv')
mn_inps['type']='mention'
#INAIL
ht_inail=pd.read_csv('../data/raw/Enti/INAIL/Hashtags.csv')
ht_inail['type'] = 'hashtag'
mn_inail=pd.read_csv('../data/raw/Enti/INAIL/Mentions.csv')
mn_inail['type']='mention'
#Protezione Civile
ht_pc=pd.read_csv('../data/raw/Enti/Protezione Civile/Hashtags.csv')
ht_pc['type'] = 'hashtag'
mn_pc=pd.read_csv('../data/raw/Enti/Protezione Civile/Mentions.csv')
mn_pc['type']='mention'
import numpy as np
#INPS
#concatenation of hashtags and mentions dataframes indicating the type of tweet in a column
frames_inps = [ht_inps, mn_inps]
df_inps = pd.concat(frames_inps)
#column definition to identify retweets
df_inps['retweet'] = np.where(df_inps['tweet'].str.contains('RT @'), True, False)
#INAIL
#concatenation of hashtags and mentions dataframes indicating the type of tweet in a column
frames_inail = [ht_inail, mn_inail]
df_inail = pd.concat(frames_inail)
#column definition to identify retweets
df_inail['retweet'] = np.where(df_inail['tweet'].str.contains('RT @'), True, False)
#Protezione Civile
#concatenation of hashtags and mentions dataframes indicating the type of tweet in a column
frames_pc = [ht_pc, mn_pc]
df_pc = pd.concat(frames_pc)
#column definition to identify retweets
df_pc['retweet'] = np.where(df_pc['tweet'].str.contains('RT @'), True, False)
#Dataset infos
def get_stats(df):
print("Dataset Shape: ",df.shape)
print("\t Mentions - Hashtags")
print("#Mentions:",df.loc[df['type'] == 'mention'].shape)
print("#Hashtags:",df.loc[df['type'] == 'hashtag'].shape)
print(df['type'].value_counts(normalize=True) * 100)
if "retweet" in df:
print("\t Retweet")
print("#Retweet:",df.loc[df['retweet'] == True].shape)
print(df['retweet'].value_counts(normalize=True) * 100)
get_stats(df_inps)
df_inps.head()
#Removing retweets and unnecessary columns
#INPS
df_inps=df_inps.loc[df_inps['retweet'] == False]
df_inps=df_inps[['created_at','tweet_id','tweet','type']]
#INAIL
df_inail=df_inail.loc[df_inail['retweet'] == False]
df_inail=df_inail[['created_at','tweet_id','tweet','type']]
#Protezione Civile
df_pc=df_pc.loc[df_pc['retweet'] == False]
df_pc=df_pc[['created_at','tweet_id','tweet','type']]
get_stats(df_inps)
```
# Silver labelling
```
#Emoji lists
positive_emoticons=["😀","😃","😄","😁","😆","🤣","😂","🙂","😊","😍","🥰","🤩","☺","🥳"]
negative_emoticons=["😒","😔","😟","🙁","☹","😥","😢","😭","😱","😞","😓","😩","😫","😡","😠","🤬"]
#Definition of silver labels based on the presence / absence of emojis within the entire tweet
#INPS
pos_df_inps = df_inps.loc[df_inps['tweet'].str.contains('|'.join(positive_emoticons))].copy()
neg_df_inps = df_inps.loc[df_inps['tweet'].str.contains('|'.join(negative_emoticons))].copy()
neutral_df_inps = pd.concat([df_inps, pos_df_inps, neg_df_inps]).drop_duplicates(keep=False)
#INAIL
pos_df_inail = df_inail.loc[df_inail['tweet'].str.contains('|'.join(positive_emoticons))].copy()
neg_df_inail = df_inail.loc[df_inail['tweet'].str.contains('|'.join(negative_emoticons))].copy()
neutral_df_inail = pd.concat([df_inail, pos_df_inail, neg_df_inail]).drop_duplicates(keep=False)
#Protezione Civile
pos_df_pc = df_pc.loc[df_pc['tweet'].str.contains('|'.join(positive_emoticons))].copy()
neg_df_pc = df_pc.loc[df_pc['tweet'].str.contains('|'.join(negative_emoticons))].copy()
neutral_df_pc = pd.concat([df_pc, pos_df_pc, neg_df_pc]).drop_duplicates(keep=False)
get_stats(neutral_df_pc)
#tweets containing both positive and negative emoticons
int_df_inps = pd.merge(pos_df_inps, neg_df_inps, how ='inner')
int_df_inail = pd.merge(pos_df_inail, neg_df_inail, how ='inner')
int_df_pc = pd.merge(pos_df_pc, neg_df_pc, how ='inner')
int_df_inps.shape
#Sampling neutral datasets to balance classes
#INPS
sample_neutral_df_inps = neutral_df_inps.sample(frac=0.015)
#INAIL
sample_neutral_df_inail = neutral_df_inail.sample(frac=0.015)
#Protezione Civile
sample_neutral_df_pc = neutral_df_pc.sample(frac=0.015)
neg_df_inail.shape
#Added polarity and topic column
#INPS
pos_df_inps['polarity']='positive'
pos_df_inps['topic']='inps'
neg_df_inps['polarity']='negative'
neg_df_inps['topic']='inps'
sample_neutral_df_inps['polarity']='neutral'
sample_neutral_df_inps['topic']='inps'
#INAIL
pos_df_inail['polarity']='positive'
pos_df_inail['topic']='inail'
neg_df_inail['polarity']='negative'
neg_df_inail['topic']='inail'
sample_neutral_df_inail['polarity']='neutral'
sample_neutral_df_inail['topic']='inail'
#Protezione civile
pos_df_pc['polarity']='positive'
pos_df_pc['topic']='pc'
neg_df_pc['polarity']='negative'
neg_df_pc['topic']='pc'
sample_neutral_df_pc['polarity']='neutral'
sample_neutral_df_pc['topic']='pc'
#concatenation of all dataframes
df_total = pd.concat([pos_df_inps, pos_df_inail, pos_df_pc,neg_df_inps,neg_df_inail,neg_df_pc,sample_neutral_df_inps,sample_neutral_df_inail,sample_neutral_df_pc])
df_total.head()
#Dataset shuffling
df_total_shuffle = df_total.sample(frac=1)
#Duplicate removal
df_total_shuffle = df_total_shuffle.drop_duplicates(keep='first',subset=['tweet_id'])
#First round annotation file
ann_1 = df_total_shuffle.sample(n=322,random_state=1)
ann_1.to_csv("../data/interim/Annotazione_1.csv")
#Second round annotation file
ann_2 = pd.concat([df_total_shuffle,ann_1]).drop_duplicates(keep=False,subset=['tweet_id'])
ann_2.to_csv("../data/interim/Annotazione_2.csv")
```
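The neutral sets above rely on the `pd.concat(...).drop_duplicates(keep=False)` idiom: any tweet that also appears in the positive or negative frame occurs at least twice in the concatenation and is dropped entirely, so only emoji-free tweets survive. A toy sketch of that idiom (invented tweets):

```
import pandas as pd

pos_emo = ["😀", "😍"]
neg_emo = ["😡", "😭"]

toy = pd.DataFrame({'tweet': ['grazie 😀', 'che rabbia 😡', 'orari di apertura?']})
pos = toy.loc[toy['tweet'].str.contains('|'.join(pos_emo))]
neg = toy.loc[toy['tweet'].str.contains('|'.join(neg_emo))]

# rows found in pos or neg appear more than once in the concatenation,
# so keep=False removes every copy of them
neutral = pd.concat([toy, pos, neg]).drop_duplicates(keep=False)
print(neutral['tweet'].tolist())  # ['orari di apertura?']
```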
| github_jupyter |
# Pipeline Analysis for CSM Model
- Plot Heatmaps of the model results using Z-normalization
- CEZ/OEZ Pooled Patient Analysis
- CEZ/OEZ IRR Metric
```
import os
import sys
import collections
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings("ignore")
import scipy.stats
from sklearn.metrics import roc_curve, auc, precision_recall_curve, \
average_precision_score, confusion_matrix, accuracy_score
from pprint import pprint
import copy
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import dendrogram
sys.path.append("../../")
%matplotlib inline
import matplotlib as mp
import matplotlib.pyplot as plt
import seaborn as sns
import dabest
from eztrack.edm.classifiers.evaluate.dataset import Dataset, Patient
from eztrack.edm.classifiers.evaluate.pipeline import EvaluationFramework
from eztrack.edm.classifiers.evaluate.model_selection import get_clinical_split, compute_category_regression, \
compute_splits_train, large_scale_train
from eztrack.edv.results.plot_distributions import PlotDistributions
from eztrack.edv.base.utils import plot_baseline, plot_boxplot, plot_pr, \
plot_roc, plot_confusion_matrix, plot_boxplot_withdf
from eztrack.base.utils.data_science_utils import cutoff_youdens, split_inds_engel, \
split_inds_clindiff, split_inds_outcome, get_numerical_outcome, compute_minmaxfragilitymetric, compute_fragilitymetric,\
compute_znormalized_fragilitymetric, split_inds_modality
from eztrack.edm.classifiers.model.cez_oez_analyzer import FragilitySplitAnalyzer
from eztrack.pipeline.experiments.cez_vs_oez.center_cezvsoez import plot_results
from eztrack.edp.objects.clinical.master_clinical import MasterClinicalSheet
from eztrack.edp.loaders.dataset.clinical.excel_meta import ExcelReader
# Import magic commands for jupyter notebook
# - autoreloading a module
# - profiling functions for memory usage and scripts
%load_ext autoreload
%autoreload 2
def get_per_patient_results(timewarpdict_dataset):
    # reorder them into patients
    timewarp_patient = collections.defaultdict(list)
    datasetids = []
    for datasetid in sorted(timewarpdict_dataset.keys()):
        # extract the patient id
        patid = datasetid.split("_")[0]
        datasetids.append(patid)
        # extract the data from each dataset and the corresponding cez/oez matrix
        data = timewarpdict_dataset[datasetid]
        cezmat = data['cezmat']
        oezmat = data['oezmat']
        if oezmat.shape[0] == 0 or cezmat.shape[0] == 0:
            print(cezmat.shape, oezmat.shape)
            print(patid, datasetid)
            continue
        # add to patient's list of datasets
        timewarp_patient[patid].append((cezmat, oezmat))
    totaldatasets = 0
    for pat in timewarp_patient.keys():
        totaldatasets += len(timewarp_patient[pat])
    return timewarp_patient, datasetids, totaldatasets
datadir = "/Users/adam2392/Dropbox/phd_research/Fragility_Analysis_Project/"
# datadir = "/home/adam2392/Documents/Dropbox/phd_research/Fragility_Analysis_Project/"
excelfilename = "organized_clinical_datasheet_raw.xlsx"
excelfilepath = os.path.join(datadir, excelfilename)
outputexcelfilename = "organized_clinical_datasheet_formatted.xlsx"
outputexcelfilepath = os.path.join(datadir, outputexcelfilename)
print(os.path.exists(excelfilepath))
print(excelfilepath)
clinreader = ExcelReader(excelfilepath)
ieegdf, datasetdf, scalpdf = clinreader.read_formatted_df()
mastersheet = MasterClinicalSheet(ieegdf, datasetdf, scalpdf)
figdir = "/Users/adam2392/Downloads/journalfigs/"
```
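`get_per_patient_results` above groups `"patientid_datasetid"` keys by their patient prefix; the core grouping idiom reduces to a `collections.defaultdict`, sketched here with invented ids:

```
import collections

dataset_ids = ['pt1_sz1', 'pt1_sz2', 'pt2_sz1']
by_patient = collections.defaultdict(list)
for dataset_id in sorted(dataset_ids):
    patid = dataset_id.split('_')[0]  # patient prefix before the first underscore
    by_patient[patid].append(dataset_id)
print(dict(by_patient))  # {'pt1': ['pt1_sz1', 'pt1_sz2'], 'pt2': ['pt2_sz1']}
```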
# Load In Data
```
modality = 'ieeg'
# modality = 'scalp'
# reference = "common_avg"
reference = "monopolar"
modelname = "impulse"
networkmodelname = ""
freqband = ""
expname = "trimmed"
datadir = f"/Users/adam2392/Downloads/output_new/{expname}/{modelname}{networkmodelname}/{reference}/{modality}/"
resultfilepath = os.path.join(datadir, f"{modelname}_responses.npz")
if not os.path.exists(resultfilepath):
    resultfilepath = os.path.join(datadir, "networkstatic_responses.npz")
allfiles = os.listdir(datadir)
print(allfiles)
# data that is only timewarped, but without threshold applied yet
# datadir = "/Users/adam2392/Downloads/output_new/joined_results/timewarp_nothreshold/"
# datadir = "/Users/adam2392/Downloads/output_new/common_avg_timewarp_nothresh/"
```
# Create Plots of Data
First create plots for successful patients, then for failure patients.
```
COMBINE_SEPARATE_PATS = [
'pt11',
'nl22',
'ummc007',
'tvb7',
'nl02', 'nl06', 'nl11', # no resection
]
ignore_pats = [
# 'pt11',
# 'jh107'
'la01-2','la01',
'la03', 'la05',
# 'la09',
'la23',
'nl22',
]
center = 'nih'
dict_dataset = dict()
centerdir = os.path.join(datadir, center)
if freqband != "":
    centerdir = os.path.join(centerdir, freqband)
resultfilepath = os.path.join(centerdir, f"{modelname}_responses.npz")
if not os.path.exists(resultfilepath):
    resultfilepath = os.path.join(centerdir, "networkstatic_responses.npz")
if not os.path.exists(resultfilepath):
    resultfilepath = os.path.join(centerdir, "impulsemodel_magnitude1_responses.npz")
allfiles = os.listdir(os.path.join(centerdir))
# load in the dataset
trainresult = np.load(resultfilepath, allow_pickle=True)
dict_dataset.update(**trainresult['timewarpdict'].item())
dataset_patient, datasetids, numdatasets = get_per_patient_results(dict_dataset)
print(dataset_patient.keys())
print(numdatasets)
dict_dataset = dict()
centers = [
# 'clevelandnl',
'cleveland',
'nih',
'jhu',
'ummc'
]
for center in centers:
    centerdir = os.path.join(datadir, center)
    if freqband != "":
        centerdir = os.path.join(centerdir, freqband)
    resultfilepath = os.path.join(centerdir, f"{modelname}_responses.npz")
    # print(resultfilepath)
    if not os.path.exists(resultfilepath):
        resultfilepath = os.path.join(centerdir, "networkstatic_responses.npz")
    if not os.path.exists(resultfilepath):
        resultfilepath = os.path.join(centerdir, "impulsemodel_magnitude1_responses.npz")
    allfiles = os.listdir(centerdir)
    # load in the dataset
    result = np.load(resultfilepath, allow_pickle=True)
    dict_dataset.update(**result['timewarpdict'].item())
print(dict_dataset.keys())
dataset_patient, datasetids, totaldatasets = get_per_patient_results(dict_dataset)
print(totaldatasets)
plotter = PlotDistributions(figdir)
print(dataset_patient.keys())
jhcount = 0
umcount = 0
nihcount = 0
cccount = 0
for key in dataset_patient.keys():
    if 'jh' in key:
        jhcount += 1
    elif 'ummc' in key:
        umcount += 1
    elif 'pt' in key:
        nihcount += 1
    elif 'la' in key:
        cccount += 1
print(jhcount)
print(umcount, nihcount, cccount)
print(6+9+13+10)
```
# Dataset Summary
```
failcount = 0
successcount = 0
engel_count_dict = dict()
for patient in patientlist:
    if patient.outcome == 'nr':
        continue
    elif patient.outcome == 'f':
        failcount += 1
    else:
        successcount += 1
    if str(patient.engelscore) not in engel_count_dict.keys():
        engel_count_dict[str(patient.engelscore)] = 0
    engel_count_dict[str(patient.engelscore)] += 1
print(failcount, successcount)
print(engel_count_dict)
print(4+19+8+2)
cez_chs = []
other_chs = []
allpats = []
for pat in dataset_patient.keys():
    datasets = dataset_patient[pat]
    if pat in ignore_pats:
        continue
    cezs = []
    oezs = []
    print(pat)
    # normalize
    print(len(datasets))
    # for i in range(len(datasets)):
    #     cezmat, oezmat = datasets[i]
    #     # print(cezmat.shape, oezmat.shape)
    #     mat = np.concatenate((cezmat, oezmat), axis=0)
    #     mat = compute_minmaxfragilitymetric(mat)
    #     cezmat = mat[:cezmat.shape[0], :]
    #     oezmat = mat[cezmat.shape[0]:, :]
    #     print(cezmat.shape, oezmat.shape)
    for i in range(len(datasets)):
        cezmat, oezmat = datasets[i]
        mat = np.concatenate((cezmat, oezmat), axis=0)
        # mat = compute_fragilitymetric(mat)
        cezmat = mat[:cezmat.shape[0], :]
        oezmat = mat[cezmat.shape[0]:, :]
        if pat in COMBINE_SEPARATE_PATS:
            cezs.append(np.mean(cezmat, axis=0))
            oezs.append(np.mean(oezmat, axis=0))
        else:
            cezs.append(cezmat)
            oezs.append(oezmat)
    if pat not in COMBINE_SEPARATE_PATS:
        cezs = np.nanmedian(np.array(cezs), axis=0)
        oezs = np.nanmedian(np.array(oezs), axis=0)
    # print(np.array(cezs).shape)
    # store the entire patient concatenated vector
    cez_chs.append(np.mean(cezs, axis=0))
    other_chs.append(np.mean(oezs, axis=0))
    allpats.append(pat)
cez_chs = np.array(cez_chs)
other_chs = np.array(other_chs)
print(cez_chs.shape, other_chs.shape)
# split by outcome
succ_inds, fail_inds = split_inds_outcome(allpats, mastersheet)
print(len(succ_inds), len(fail_inds))
print(totaldatasets)
center = ", ".join(centers)
print(center)
sns.set(font_scale=1.75)
cez_mat_fail = cez_chs[fail_inds,...]
oez_mat_fail = other_chs[fail_inds,...]
# take the average across all patients
mean_onset = np.nanmean(cez_mat_fail, axis=0)
mean_other = np.nanmean(oez_mat_fail, axis=0)
stderr_onset = scipy.stats.sem(cez_mat_fail, nan_policy='omit', axis=0)
stderr_other = scipy.stats.sem(oez_mat_fail, nan_policy='omit', axis=0)
# mean_onset[mean_onset > 3] = 5
# mean_other[mean_other > 3] = 5
# stderr_onset[np.abs(stderr_onset) > 3] = 3
# stderr_other[np.abs(stderr_other) > 3] = 3
xs = [np.arange(len(mean_onset)), np.arange(len(mean_other))]
ys = [mean_onset, mean_other]
errors = [stderr_onset, stderr_other]
labels = ['clinez (n={})'.format(len(cez_mat_fail)),
'others (n={})'.format(len(oez_mat_fail))]
threshstr = "\n Thresh=0.7"
# threshstr = ""
titlestr="{center} {reference} Failure Fragile Channels".format(center=center,
reference=reference)
xlabel = "Normalized Window Around Seizure Onset (+/- 10 secs)"
vertline = [30,130]
# vertline = [offsetwin]
fig, ax = plotter.plot_comparison_distribution(xs, ys, labels=labels, alpha=0.5,
save=True,
# ylim=[0,7.5],
figure_name=titlestr,
errors=errors,
titlestr=titlestr,
ylabel="DOA +/- stderr",
xlabel="Time (a.u.)",
vertlines=vertline)
print(cez_chs.shape, other_chs.shape)
cez_mat = cez_chs[succ_inds,...]
oez_mat = other_chs[succ_inds,...]
# take the average across all patients
mean_onset = np.mean(cez_mat, axis=0)
mean_other = np.mean(oez_mat, axis=0)
stderr_onset = scipy.stats.sem(cez_mat, axis=0)
stderr_other = scipy.stats.sem(oez_mat, axis=0)
# mean_onset[mean_onset>5] = 5
# mean_other[mean_other>5] = 5
# stderr_onset[stderr_onset > 5] = 5
# stderr_other[stderr_other > 5] = 5
xs = [np.arange(len(mean_onset)), np.arange(len(mean_other))]
ys = [mean_onset, mean_other]
errors = [stderr_onset, stderr_other]
labels = ['clinez (n={})'.format(len(cez_mat)),
'others (n={})'.format(len(oez_mat))]
threshstr = "\n Thresh=0.7"
# threshstr = ""
titlestr="{center} {reference} Success Fragile Channels".format(center=center,
reference=reference)
xlabel = "Normalized Window Around Seizure Onset (+/- 10 secs)"
vertline = [30,130]
# vertline = [offsetwin]
fig, ax = plotter.plot_comparison_distribution(xs, ys, labels=labels,
save=True,
# ylim=[0, 7],
figure_name=titlestr,
errors=errors,
titlestr=titlestr,
ylabel="DOA +/- stderr",
xlabel="Time (a.u.)",
vertlines=vertline)
```
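The group curves above plot `np.nanmean` ± `scipy.stats.sem(..., nan_policy='omit')` across patients; the same statistic written out in plain NumPy makes the formula explicit (toy patients-by-time matrix with invented values):

```
import numpy as np

mat = np.array([[1.0, 2.0],
                [3.0, np.nan]])  # patients x time points, with a missing value

mean = np.nanmean(mat, axis=0)                      # per-time-point mean, ignoring NaNs
n = np.sum(~np.isnan(mat), axis=0)                  # valid samples per time point
sem = np.nanstd(mat, axis=0, ddof=1) / np.sqrt(n)   # standard error of the mean

print(mean)  # [2. 2.]
# the second column has only one valid sample, so its SEM comes out NaN
```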
# Create Pipeline Object
```
def plot_summary(succ_ezratios, fail_ezratios, clinical_baseline,
                 engelscore_box, clindiff_box,
                 fpr, tpr, precision, recall, average_precision,
                 youdenind, youdenpred, titlestr, clf_auc,
                 Y_pred_engel, Y_pred_clindiff):
    ylabel = "DOA Metric"
    # plotting for baselines
    baselinex_roc = [0, 1 - (clinical_baseline - 0.5)]
    baseliney_roc = [0 + (clinical_baseline - 0.5), 1]
    baselinex_pr = [0, 1]
    baseliney_pr = [clinical_baseline, clinical_baseline]
    # make box plot
    plt.style.use("classic")
    sns.set_style("white")
    fig, axs = plt.subplots(2, 3, figsize=(25, 15))
    axs = axs.flatten()
    ax = axs[0]
    titlestr = f"Outcome Split N={numdatasets} P={numpats}"
    boxdict = [
        [fail_ezratios, succ_ezratios],
        ['Fail', 'Success']
    ]
    plot_boxplot(ax, boxdict, titlestr, ylabel)
    outcome_df = create_df_from_outcome(succ_ezratios, fail_ezratios)
    outcome_dabest = dabest.load(data=outcome_df,
                                 x='outcome', y="ezr",
                                 idx=('failure', 'success')
                                 )
    # Produce a Cumming estimation plot.
    outcome_dabest.mean_diff.plot();
    ax = axs[1]
    titlestr = f"Engel Score Split N={numdatasets} P={numpats}"
    plot_boxplot(ax, engelscore_box,
                 titlestr, ylabel="")
    xticks = ax.get_xticks()
    ax.plot(xticks, Y_pred_engel,
            color='red',
            label=f"y={engel_intercept:.2f} + {engel_slope:.2f}x"
            )
    ax.legend()
    ax = axs[2]
    titlestr = f"Clin Difficulty Split N={numdatasets} P={numpats}"
    plot_boxplot(ax, clindiff_box, titlestr,
                 ylabel="")
    ax.plot(xticks, Y_pred_clindiff, color='red',
            label=f"y={clindiff_intercept:.2f} + {clindiff_slope:.2f}x")
    ax.legend()
    # make ROC Curve plot
    ax = axs[3]
    titlestr = f"ROC Curve N={numdatasets} P={numpats}"
    label = "ROC Curve (AUC = %0.2f)" % (clf_auc)
    plot_roc(ax, fpr, tpr, label, titlestr)
    plot_baseline(ax, baselinex_roc, baseliney_roc)
    ax.legend(loc='lower right')
    ax.plot(np.mean(baselinex_roc).squeeze(), np.mean(baseliney_roc).squeeze(),
            'k*', linewidth=4, markersize=12,
            label=f"Clinical-Baseline {np.round(clinical_baseline, 2)}"
            )
    ax.plot(fpr[youdenind], tpr[youdenind],
            'r*', linewidth=4, markersize=12,
            label=f"Youden-Index {np.round(youdenacc, 2)}")
    ax.legend(loc='lower right')
    # make PR Curve
    ax = axs[4]
    label = 'PR Curve (AP = %0.2f)' % (average_precision)
    titlestr = f"PR-Curve N={numdatasets} P={numpats}"
    plot_pr(ax, recall, precision, label, titlestr)
    plot_baseline(ax, baselinex_pr, baseliney_pr)
    ax.legend(loc='lower right')
    # Confusion Matrix
    ax = axs[5]
    titlestr = "Confusion matrix Youdens-cutoff"
    plot_confusion_matrix(ax, ytrue, youdenpred, classes=[0., 1.],
                          title=titlestr, normalize=True)
    # titlestr = f"{modelname}{networkmodelname}-{freqband} {center} N={numdatasets} P={numpats}"
    # plt.savefig(os.path.join(figdir, normname, titlestr + ".png"),
    #             bbox_inches='tight')
%%time
# create patient list for all datasets
patientlist = []
for patientid in dataset_patient.keys():
    # initialize empty list to store datasets per patient
    datasetlist = []
    if patientid in ignore_pats:
        continue
    # get metadata for patient
    center = mastersheet.get_patient_center(patientid)
    outcome = mastersheet.get_patient_outcome(patientid)
    engelscore = mastersheet.get_patient_engelscore(patientid)
    clindiff = mastersheet.get_patient_clinicaldiff(patientid)
    modality = mastersheet.get_patient_modality(patientid)
    for datasetname, result in dict_dataset.items():
        # get the patient/dataset id
        patid = datasetname.split("_")[0]
        datasetid = datasetname.split(patid + "_")[1]
        # print(patid, datasetid)
        if patid != patientid:
            continue
        # format the matrix and the indices
        mat = np.concatenate((result['cezmat'], result['oezmat']), axis=0)
        cezinds = np.arange(0, result['cezmat'].shape[0])
        # create dataset object
        dataset_obj = Dataset(mat=mat,
                              patientid=patid,
                              name=datasetid,
                              datatype='ieeg',
                              cezinds=cezinds,
                              markeron=30,
                              markeroff=130)
        datasetlist.append(dataset_obj)
        if patientid == 'pt2':
            print(mat.shape)
            ax = sns.heatmap(mat, cmap='inferno',
                             yticklabels=[],
                             # vmax=3,
                             # vmin=-3
                             )
            ax.axhline(len(cezinds), linewidth=5, color='white')
            ax.set_ylabel("CEZ vs OEZ Map")
            ax.axvline(30, linewidth=4, linestyle='--', color='red')
            ax.axvline(130, linewidth=4, linestyle='--', color='black')
    # create patient object
    patient_obj = Patient(datasetlist,
                          name=patientid,
                          center=center,
                          outcome=outcome,
                          engelscore=engelscore,
                          clindiff=clindiff,
                          modality=modality)
    patientlist.append(patient_obj)
    # print(patient_obj, len(datasetlist))
evalpipe = EvaluationFramework(patientlist)
print(patient_obj)
print(evalpipe.centers, evalpipe.modalities)
print(evalpipe)
COMBINE_SEPARATE_PATS = [
'pt11',
# 'nl22',
'ummc007',
# 'tvb7',
# 'nl02', 'nl06', 'nl11', # no resection
]
ignore_pats = [
# 'pt11',
# 'jh107'
# 'jh102', 'jh104',
'la01-2','la01',
'la03', 'la05',
# 'la09',
'la23',
'nl22',
]
# evalpipe.apply_normalization(normalizemethod=None)
ezr_list = evalpipe.compute_ezratios(
# threshold=0.5,
ignore_pats=ignore_pats,
combine_sep_pats=COMBINE_SEPARATE_PATS
)
nr_inds = evalpipe.remove_nr_inds()
surgery_inds = evalpipe.get_surgery_inds()
ezratios = ezr_list[surgery_inds]
patlist = evalpipe.patientlist[surgery_inds]
# split by outcome
succ_inds, fail_inds = split_inds_outcome(patlist, mastersheet)
ytrue = get_numerical_outcome(patlist, mastersheet)
engel_inds_dict = split_inds_engel(patlist, mastersheet)
clindiff_inds_dict = split_inds_clindiff(patlist, mastersheet)
roc_dict, cm = evalpipe.evaluate_roc_performance(ezratios, ytrue,
normalize=True)
pr_dict = evalpipe.evaluate_pr_performance(ezratios, ytrue, pos_label=1)
# extract data from dictionaries
fpr = roc_dict['fpr']
tpr = roc_dict['tpr']
clf_auc = roc_dict['auc']
youdenthreshold = roc_dict['youdenthresh']
youdenacc = roc_dict['youdenacc']
youdenind = roc_dict['youdenind']
precision = pr_dict['prec']
recall = pr_dict['recall']
average_precision = pr_dict['avgprec']
clinical_baseline = pr_dict['baseline']
# youden prediction
youdenpred = ezratios >= youdenthreshold
youdenpred = [int(y == True) for y in youdenpred]
# evaluate box plot separation using wilcoxon rank-sum
succ_ezratios, fail_ezratios, \
stat, pval = evalpipe.evaluate_metric_separation(ytrue, ezratios, pos_label=1, neg_label=0)
print("Wilcoxon Rank-sum: ", stat, pval)
print("Clinical baseline: ", clinical_baseline)
print(sum(ytrue))
# pprint(pr_dict)
engelscore_box = {}
for i in sorted(engel_inds_dict.keys()):
    if i == -1:
        continue
    if np.isnan(i):
        continue
    this_fratio = ezratios[engel_inds_dict[i]]
    engelscore_box[f"ENG{int(i)}"] = this_fratio
clindiff_box = {}
for i in sorted(clindiff_inds_dict.keys()):
    this_fratio = ezratios[clindiff_inds_dict[i]]
    clindiff_box[f"CD{int(i)}"] = this_fratio
print("Total amount of data: ", len(ezratios), len(patlist))
linear_regressor = LinearRegression() # create object for the class
X = []
y = []
for idx, engelscore in enumerate(engelscore_box.keys()):
    print(engelscore)
    y.append(np.mean(engelscore_box[engelscore]))
    X.append(idx + 1)
X = np.array(X)[:, np.newaxis]
linear_regressor.fit(X, y) # perform linear regression
engel_intercept = linear_regressor.intercept_
engel_slope = linear_regressor.coef_[0]
Y_pred_engel = linear_regressor.predict(X) # make predictions
X = []
y = []
for idx, clindiff in enumerate(clindiff_box.keys()):
    print(clindiff)
    y.append(np.mean(clindiff_box[clindiff]))
    X.append(idx + 1)
X = np.array(X)[:, np.newaxis]
linear_regressor.fit(X, y) # perform linear regression
clindiff_intercept = linear_regressor.intercept_
clindiff_slope = linear_regressor.coef_[0]
Y_pred_clindiff = linear_regressor.predict(X) # make predictions
print(X, y)
print("Slope and intercept: ", clindiff_slope, clindiff_intercept)
sns.set(font_scale=2.5)
centernames = "UMMC, JHH, CC"
numpats = len(patlist)
numdatasets = totaldatasets
# titlestr = f"{modelname}{networkmodelname}-{freqband} {center} N={numdatasets} P={numpats}"
titlestr= f"{modelname}{networkmodelname}-{freqband} {centernames} N={numdatasets} P={numpats}"
titlestr = ""
plot_summary(succ_ezratios, fail_ezratios, clinical_baseline,
engelscore_box, clindiff_box,
fpr, tpr, precision, recall, average_precision,
youdenind, youdenpred, titlestr, clf_auc,
Y_pred_engel, Y_pred_clindiff)
print("Outlier min on fratio_succ: ",
patlist[ezratios==min(succ_ezratios)])
print("Outlier max on fratio_fail: ",
patlist[ezratios==max(fail_ezratios)])
argsort_succ = np.sort(succ_ezratios)
topinds = [ezratios.tolist().index(argsort_succ[i]) for i in range(10)]
succ_bad_pats = patlist[topinds]
print("\n\n Outlier of success patients:")
print(succ_bad_pats)
argsort_fail = np.sort(fail_ezratios)[::-1]
topinds = [ezratios.tolist().index(argsort_fail[i]) for i in range(10)]
fail_bad_pats = patlist[topinds]
print("\n\n Outlier of failed patients:")
print(fail_bad_pats)
```
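The evaluation above picks an operating point via Youden's J statistic (J = TPR − FPR, maximized along the ROC curve — the `youdenthresh` value pulled out of `roc_dict`). A minimal sketch of that selection with scikit-learn on toy scores:

```
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])

fpr, tpr, thresholds = roc_curve(y_true, scores)
j = tpr - fpr                 # Youden's J at each candidate threshold
best = np.argmax(j)
print(j[best])                # 0.5
print(thresholds[best])       # threshold maximizing sensitivity + specificity - 1
```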
# Train/Test Split
```
# traininds, testinds = train_test_split(np.arange(len(y)), test_size=0.6, random_state=98765)
traininds, testinds = evalpipe.train_test_split(method='engel',
trainsize=0.50)
print(len(traininds), len(testinds))
''' RUN TRAINING '''
ezratios = ezr_list[surgery_inds]
# ezratios = ezratios[traininds]
patlist = evalpipe.patientlist[surgery_inds]
# patlist = patlist[traininds]
numpats = len(patlist)
print(len(patlist), len(ezratios))
# split by outcome
succ_inds, fail_inds = split_inds_outcome(patlist, mastersheet)
ytrue = get_numerical_outcome(patlist, mastersheet)
engel_inds_dict = split_inds_engel(patlist, mastersheet)
clindiff_inds_dict = split_inds_clindiff(patlist, mastersheet)
succ_ezratios = ezratios[succ_inds]
fail_ezratios = ezratios[fail_inds]
# engel / clindiff metric split into dictionary
engel_metric_dict = get_clinical_split(ezratios, 'engel', engel_inds_dict)
clindiff_metric_dict = get_clinical_split(ezratios, 'clindiff', clindiff_inds_dict)
# create dictionary split engel and clindiff classes
engel_metric_dict = get_clinical_split(ezratios, 'engel', engel_inds_dict)
clindiff_metric_dict = get_clinical_split(ezratios, 'clindiff', clindiff_inds_dict)
Y_pred_engel, engel_intercept, engel_slope = compute_category_regression(engel_metric_dict)
Y_pred_clindiff, clindiff_intercept, clindiff_slope = compute_category_regression(clindiff_metric_dict)
ezrcolvals = np.concatenate((succ_ezratios, fail_ezratios), axis=-1)[:, np.newaxis]
scorevals = np.array(['Success']*len(succ_ezratios) + ['Failure']*len(fail_ezratios))[:, np.newaxis]
outcome_df = pd.DataFrame(data=ezrcolvals, columns=['ezr'])
outcome_df['Outcome'] = scorevals
ezrcolvals = []
scorevals = []
for key, vals in engel_metric_dict.items():
    scorevals.extend([key] * len(vals))
    ezrcolvals.extend(vals)
engel_df = pd.DataFrame(data=ezrcolvals, columns=['ezr'])
engel_df['Engel Score'] = scorevals
ezrcolvals = []
scorevals = []
for key, vals in clindiff_metric_dict.items():
    scorevals.extend([key] * len(vals))
    ezrcolvals.extend(vals)
clindiff_df = pd.DataFrame(data=ezrcolvals, columns=['ezr'])
clindiff_df['Epilepsy Category'] = scorevals
print("converted clinical categorizations into dataframes!")
display(outcome_df.head())
display(engel_df.head())
display(clindiff_df.head())
outcome_df.to_csv("/Users/adam2392/Downloads/outcome_impulsemodel.csv")
engel_df.to_csv("/Users/adam2392/Downloads/engel_impulsemodel.csv")
clindiff_df.to_csv("/Users/adam2392/Downloads/clindiff_impulsemodel.csv")
ylabel = "Degree of Agreement (CEZ)"
outcome_dabest = dabest.load(data=outcome_df, x='Outcome', y="ezr",
idx=outcome_df['Outcome'].unique()
)
engel_dabest = dabest.load(data=engel_df, x='Engel Score', y="ezr",
idx=engel_df['Engel Score'].unique()
)
clindiff_dabest = dabest.load(data=clindiff_df, x='Epilepsy Category', y="ezr",
idx=clindiff_df['Epilepsy Category'].unique()
)
# make box plot
plt.style.use("classic")
sns.set(font_scale=1.75)
sns.set_style("white")
cols = 3
rows = 1
ylim = [0.3, 0.7]
ylim = None
fig, axs = plt.subplots(rows, cols, figsize=(24,8), constrained_layout=True)
# ax1 = fig.add_subplot(cols, rows, 1)
axs = axs.flatten()
ax = axs[0]
titlestr = f"Outcome Split N={numdatasets} P={numpats}"
titlestr = ""
plot_boxplot_withdf(ax, outcome_df, df_xlabel='Outcome', df_ylabel='ezr', color='black',
ylabel=ylabel, titlestr=titlestr, ylim=ylim, yticks=np.linspace(0.3, 0.7, 5))
ax = axs[1]
titlestr = f"Engel Score Split N={numdatasets} P={numpats}"
titlestr = ""
plot_boxplot_withdf(ax, engel_df, df_xlabel='Engel Score', df_ylabel='ezr', color='black',
ylabel="", titlestr=titlestr, ylim=ylim, yticks=np.linspace(0.3, 0.7, 5))
xticks = ax.get_xticks()
ax.plot(xticks, Y_pred_engel, color='red', label=f"y={engel_intercept:.2f} + {engel_slope:.2f}x")
ax.legend()
ax = axs[2]
titlestr = f"Clin Difficulty Split N={numdatasets} P={numpats}"
titlestr = ""
plot_boxplot_withdf(ax, clindiff_df, df_xlabel='Epilepsy Category', df_ylabel='ezr',color='black',
ylabel="", titlestr=titlestr, ylim=ylim, yticks=np.linspace(0.3, 0.7, 5))
xticks = ax.get_xticks()
ax.plot(xticks, Y_pred_clindiff, color='red', label=f"y={clindiff_intercept:.2f} + {clindiff_slope:.2f}x")
ax.legend()
# fig.tight_layout()
suptitle = f"Clinical Categories Split N={numdatasets}, P={numpats}"
st = fig.suptitle(suptitle)
figpath = os.path.join(figdir, suptitle+".png")
plt.savefig(figpath, bbox_extra_artists=[st], bbox_inches='tight')
# Produce a Cumming estimation plot.
fig1 = outcome_dabest.median_diff.plot()
ax1_list = fig1.axes
ax1 = ax1_list[0]
fig1.suptitle("SRR of Success vs Failure Outcomes", fontsize=20)
fig1.tight_layout()
# print(fig1, ax1)
# print(ax1.)
fig2 = engel_dabest.median_diff.plot()
ax2_list = fig2.axes
ax2 = ax2_list[0]
fig2.suptitle("SRR of Outcomes Stratified By Engel Class", fontsize=20)
fig2.tight_layout()
print("Done")
# clindiff_dabest.mean_diff.plot()
```
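`compute_category_regression` above fits a line through the per-category means so a trend across Engel or difficulty classes can be overlaid on the box plots; a stripped-down sketch of that idea (hypothetical per-class values):

```
import numpy as np
from sklearn.linear_model import LinearRegression

# hypothetical metric values per Engel class
category_box = {'ENG1': [0.30, 0.34], 'ENG2': [0.40, 0.44], 'ENG3': [0.50, 0.54]}

X = np.arange(1, len(category_box) + 1)[:, np.newaxis]  # category index 1..k
y = [np.mean(vals) for vals in category_box.values()]   # mean per category

reg = LinearRegression().fit(X, y)
print(reg.coef_[0], reg.intercept_)  # slope ≈ 0.1, intercept ≈ 0.22
```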
# Load in Previous Analysis
```
from eztrack.edv.plot_fragility_heatmap import PlotFragilityHeatmap
from eztrack.edv.baseplot import BasePlotter
plotter = BasePlotter(figdir)
trimmed_dataset_dict = np.load("/Users/adam2392/Downloads/improved_allmap_embc_datasets.npy", allow_pickle=True)
trimmed_dataset_ids = np.load("/Users/adam2392/Downloads/improved_allmap_embc_datasetids.npy", allow_pickle=True)
trimmed_patient_ids = np.load("/Users/adam2392/Downloads/improved_allmap_embc_patientids.npy", allow_pickle=True)
trimmed_chanlabels = np.load("/Users/adam2392/Downloads/improved_allmap_embc_chanlabels.npy", allow_pickle=True)
trimmed_cezcontacts = np.load("/Users/adam2392/Downloads/improved_allmap_embc_cezcontacts.npy", allow_pickle=True)
print(trimmed_dataset_dict.shape)
print(len(trimmed_patient_ids))
# print(trimmed_cezcontacts[0])
for i, dataset in enumerate(trimmed_dataset_dict):
    patient_id = trimmed_patient_ids[i]
    dataset_id = trimmed_dataset_ids[i]
    print(dataset.shape)
    break
```
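Each `.npy` file above stores Python objects (dicts, string arrays), which is why every `np.load` passes `allow_pickle=True`; a self-contained round trip showing the `.item()` unwrapping used earlier with the result dictionaries:

```
import os
import tempfile
import numpy as np

results = {'pt1_sz1': {'cezmat': np.zeros((2, 3))}}  # toy stand-in for a result dict

path = os.path.join(tempfile.mkdtemp(), 'results.npy')
np.save(path, results)  # the dict gets wrapped in a 0-d object array

loaded = np.load(path, allow_pickle=True).item()  # .item() unwraps the original dict
print(loaded['pt1_sz1']['cezmat'].shape)  # (2, 3)
```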
# Simple RNN
In this notebook, we're going to train a simple RNN to do **time-series prediction**. Given some set of input data, it should be able to generate a prediction for the next time step!
<img src='assets/time_prediction.png' width=40% />
> * First, we'll create our data
> * Then, define an RNN in PyTorch
> * Finally, we'll train our network and see how it performs
### Import resources and create data
```
import torch
from torch import nn
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(8,5))
# how many time steps/data pts are in one batch of data
seq_length = 20
# generate evenly spaced data pts
time_steps = np.linspace(0, np.pi, seq_length + 1)
data = np.sin(time_steps)
data.resize((seq_length + 1, 1)) # size becomes (seq_length+1, 1), adds an input_size dimension
x = data[:-1] # all but the last piece of data
y = data[1:] # all but the first
# display the data
plt.plot(time_steps[1:], x, 'r.', label='input, x') # x
plt.plot(time_steps[1:], y, 'b.', label='target, y') # y
plt.legend(loc='best')
plt.show()
```
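Since `y` is just `data` shifted one step ahead of `x`, every target equals the next input; a quick check of that alignment:

```
import numpy as np

seq_length = 20
time_steps = np.linspace(0, np.pi, seq_length + 1)
data = np.sin(time_steps).reshape(-1, 1)

x, y = data[:-1], data[1:]
# each target is the input one step later: y[i] == x[i + 1]
print(np.allclose(x[1:], y[:-1]))  # True
```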
---
## Define the RNN
Next, we define an RNN in PyTorch. We'll use `nn.RNN` to create an RNN layer, then we'll add a last, fully-connected layer to get the output size that we want. An RNN takes in a number of parameters:
* **input_size** - the size of the input
* **hidden_dim** - the number of features in the RNN output and in the hidden state
* **n_layers** - the number of layers that make up the RNN, typically 1-3; greater than 1 means that you'll create a stacked RNN
* **batch_first** - whether or not the input/output of the RNN will have the batch_size as the first dimension (batch_size, seq_length, hidden_dim)
Take a look at the [RNN documentation](https://pytorch.org/docs/stable/nn.html#rnn) to read more about recurrent layers.
```
class RNN(nn.Module):
    def __init__(self, input_size, output_size, hidden_dim, n_layers):
        super(RNN, self).__init__()
        self.hidden_dim = hidden_dim
        # define an RNN with specified parameters
        # batch_first means that the first dim of the input and output will be the batch_size
        self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
        # last, fully-connected layer
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, x, hidden):
        # x (batch_size, seq_length, input_size)
        # hidden (n_layers, batch_size, hidden_dim)
        # r_out (batch_size, seq_length, hidden_dim)
        batch_size = x.size(0)
        # get RNN outputs
        r_out, hidden = self.rnn(x, hidden)
        # shape output to be (batch_size*seq_length, hidden_dim)
        r_out = r_out.view(-1, self.hidden_dim)
        # get final output
        output = self.fc(r_out)
        return output, hidden
```
### Check the input and output dimensions
As a check that your model is working as expected, test out how it responds to input data.
```
# test that dimensions are as expected
test_rnn = RNN(input_size=1, output_size=1, hidden_dim=10, n_layers=2)
# generate evenly spaced, test data pts
time_steps = np.linspace(0, np.pi, seq_length)
data = np.sin(time_steps)
data.resize((seq_length, 1))
test_input = torch.Tensor(data).unsqueeze(0) # give it a batch_size of 1 as first dimension
print('Input size: ', test_input.size())
# test out rnn sizes
test_out, test_h = test_rnn(test_input, None)
print('Output size: ', test_out.size())
print('Hidden state size: ', test_h.size())
```
---
## Training the RNN
Next, we'll instantiate an RNN with some specified hyperparameters. Then train it over a series of steps, and see how it performs.
```
# decide on hyperparameters
input_size=1
output_size=1
hidden_dim=32
n_layers=1
# instantiate an RNN
rnn = RNN(input_size, output_size, hidden_dim, n_layers)
print(rnn)
```
### Loss and Optimization
This is a regression problem: can we train an RNN to accurately predict the next data point, given a current data point?
> * The data points are coordinate values, so to compare a predicted and ground-truth point, we'll use a regression loss: the mean squared error.
> * It's typical to use an Adam optimizer for recurrent models.
```
# MSE loss and Adam optimizer with a learning rate of 0.01
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(rnn.parameters(), lr=0.01)
```
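With default settings, `nn.MSELoss` averages the squared error over all elements, i.e. `mean((prediction - target) ** 2)`; a quick NumPy check of that formula (kept NumPy-only so the sketch has no PyTorch dependency):

```
import numpy as np

pred = np.array([0.5, 1.5, 2.0])
target = np.array([1.0, 1.0, 2.0])

# what criterion(prediction, y_tensor) computes under the hood
mse = np.mean((pred - target) ** 2)
print(mse)  # 0.16666666666666666
```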
### Defining the training function
This function takes in an rnn, a number of steps to train for, and returns a trained rnn. This function is also responsible for displaying the loss and the predictions, every so often.
#### Hidden State
Pay close attention to the hidden state, here:
* Before looping over a batch of training data, the hidden state is initialized
* After a new hidden state is generated by the rnn, we get the latest hidden state, and use that as input to the rnn for the following steps
```
# train the RNN
def train(rnn, n_steps, print_every):
# initialize the hidden state
hidden = None
for batch_i, step in enumerate(range(n_steps)):
# defining the training data
time_steps = np.linspace(step * np.pi, (step+1)*np.pi, seq_length + 1)
data = np.sin(time_steps)
data.resize((seq_length + 1, 1)) # input_size=1
x = data[:-1]
y = data[1:]
# convert data into Tensors
        x_tensor = torch.Tensor(x).unsqueeze(0) # unsqueeze adds a batch dimension of size 1
y_tensor = torch.Tensor(y)
# outputs from the rnn
prediction, hidden = rnn(x_tensor, hidden)
## Representing Memory ##
# make a new variable for hidden and detach the hidden state from its history
# this way, we don't backpropagate through the entire history
hidden = hidden.data
# calculate the loss
loss = criterion(prediction, y_tensor)
# zero gradients
optimizer.zero_grad()
# perform backprop and update weights
loss.backward()
optimizer.step()
# display loss and predictions
if batch_i % print_every == 0:
print('Loss: ', loss.item())
plt.plot(time_steps[1:], x, 'r.') # input
plt.plot(time_steps[1:], prediction.data.numpy().flatten(), 'b.') # predictions
plt.show()
return rnn
# train the rnn and monitor results
n_steps = 75
print_every = 15
trained_rnn = train(rnn, n_steps, print_every)
```
### Time-Series Prediction
Time-series prediction can be applied to many tasks. Think about weather forecasting or predicting the ebb and flow of stock market prices. You can even try to generate predictions much further in the future than just one time step!
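Predicting further into the future typically works autoregressively: feed each prediction back in as the next input. Below is a minimal sketch of that rollout loop, using a hypothetical stand-in one-step predictor (`next_point`, a simple linear extrapolator defined here for illustration) in place of the trained RNN so the sketch stays self-contained:

```python
def next_point(window):
    # Hypothetical stand-in for the trained RNN's one-step prediction:
    # a simple linear extrapolation from the last two points.
    return 2 * window[-1] - window[-2]

def rollout(seed_window, n_future):
    """Predict n_future steps by feeding each prediction back in as input."""
    window = list(seed_window)
    predictions = []
    for _ in range(n_future):
        p = next_point(window)
        predictions.append(p)
        window = window[1:] + [p]   # slide the input window forward
    return predictions

print(rollout([0, 1, 2], 3))  # -> [3, 4, 5]
```

With the trained model, `next_point` would instead run the RNN on the current window (carrying the hidden state forward between calls).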
Welcome to day 5 of the Python Challenge! If you missed any of the previous days, here are the links:
- [Day 1 (syntax, variable assignment, numbers)](https://www.kaggle.com/colinmorris/learn-python-challenge-day-1)
- [Day 2 (functions and getting help)](https://www.kaggle.com/colinmorris/learn-python-challenge-day-2)
- [Day 3 (booleans and conditionals)](https://www.kaggle.com/colinmorris/learn-python-challenge-day-3)
- [Day 4 (lists and objects)](https://www.kaggle.com/colinmorris/learn-python-challenge-day-4)
Today we'll be talking about **loops**.
Loops are a way to repeatedly execute some code statement.
```
planets = ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune']
for planet in planets:
print(planet, end=' ') # print all on same line
```
Notice the simplicity of the ``for`` loop: we specify the variable we want to use, the sequence we want to loop over, and use the "``in``" keyword to link them together in an intuitive and readable way.
The object to the right of the "``in``" can be any object that supports iteration. Basically, if it can be thought of as a sequence or collection of things, you can probably loop over it. In addition to lists, we can iterate over the elements of a tuple:
```
multiplicands = (2, 2, 2, 3, 3, 5)
product = 1
for mult in multiplicands:
product = product * mult
product
```
And even iterate over each character in a string:
```
s = 'steganograpHy is the practicE of conceaLing a file, message, image, or video within another fiLe, message, image, Or video.'
msg = ''
# print all the uppercase letters in s, one at a time
for char in s:
if char.isupper():
print(char, end='')
```
### range()
`range()` is a function that returns a sequence of numbers. It turns out to be very useful for writing loops.
For example, if we want to repeat some action 5 times:
```
for i in range(5):
print("Doing important work. i =", i)
```
You might assume that `range(5)` returns the list `[0, 1, 2, 3, 4]`. The truth is a little bit more complicated:
```
r = range(5)
r
```
`range` returns a "range object". It acts a lot like a list (it's iterable), but doesn't have all the same capabilities. As we saw yesterday, we can call `help()` on an object like `r` to see Python's documentation on that object, including all of its methods. Click the 'output' button if you're curious about what the help page for a range object looks like.
```
help(range)
```
Just as we can use `int()`, `float()`, and `bool()` to convert objects to another type, we can use `list()` to convert a list-like thing into a list, which shows a more familiar (and useful) representation:
```
list(range(5))
```
Note that the range starts at zero, and that by convention the top of the range is not included in the output. `range(5)` gives the numbers from 0 up to *but not including* 5.
This may seem like a strange way to do things, but the documentation (accessed via `help(range)`) alludes to the reasoning when it says:
> `range(4)` produces 0, 1, 2, 3. These are exactly the valid indices for a list of 4 elements.
So for any list `L`, `for i in range(len(L)):` will iterate over all its valid indices.
```
nums = [1, 2, 4, 8, 16]
for i in range(len(nums)):
nums[i] = nums[i] * 2
nums
```
This is the classic way of iterating over the indices of a list or other sequence.
> **Aside**: `for i in range(len(L)):` is analogous to constructs like `for (int i = 0; i < L.length; i++)` in other languages.
### `enumerate`
`for foo in x` loops over the elements of a list and `for i in range(len(x))` loops over the indices of a list. What if you want to do both?
Enter the `enumerate` function, one of Python's hidden gems:
```
def double_odds(nums):
for i, num in enumerate(nums):
if num % 2 == 1:
nums[i] = num * 2
x = list(range(10))
double_odds(x)
x
```
Given a list, `enumerate` returns an object which iterates over the indices *and* the values of the list.
(Like the `range()` function, it returns an iterable object. To see its contents as a list, we can call `list()` on it.)
```
list(enumerate(['a', 'b']))
```
We can see that the things we were iterating over are tuples. This helps explain that `for i, num` syntax. We're "unpacking" the tuple, just like in this example from yesterday:
```
x = 0.125
numerator, denominator = x.as_integer_ratio()
```
We can use this unpacking syntax any time we iterate over a collection of tuples.
```
nums = [
('one', 1, 'I'),
('two', 2, 'II'),
('three', 3, 'III'),
('four', 4, 'IV'),
]
for word, integer, roman_numeral in nums:
print(integer, word, roman_numeral, sep=' = ', end='; ')
```
This is equivalent to the following (more tedious) code:
```
for tup in nums:
word = tup[0]
integer = tup[1]
roman_numeral = tup[2]
print(integer, word, roman_numeral, sep=' = ', end='; ')
```
## ``while`` loops
The other type of loop in Python is a ``while`` loop, which iterates until some condition is met:
```
i = 0
while i < 10:
print(i, end=' ')
i += 1
```
The condition of the ``while`` loop is evaluated as a boolean expression, and the loop body runs until that condition evaluates to False.
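The condition doesn't have to be a simple counter check. As a small extra example (not from the original lesson), here's a loop whose condition depends on a value computed inside the body, the classic Collatz iteration:

```python
n = 6
steps = 0
while n != 1:
    # halve even numbers; map odd numbers to 3n + 1
    n = n // 2 if n % 2 == 0 else 3 * n + 1
    steps += 1
print(steps)  # -> 8
```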
## List comprehensions
List comprehensions are one of Python's most beloved and unique features. The easiest way to understand them is probably to just look at a few examples:
```
squares = [n**2 for n in range(10)]
squares
```
Here's how we would do the same thing without a list comprehension:
```
squares = []
for n in range(10):
squares.append(n**2)
squares
```
We can also add an `if` condition:
```
short_planets = [planet for planet in planets if len(planet) < 6]
short_planets
```
(If you're familiar with SQL, you might think of this as being like a "WHERE" clause)
Here's an example of filtering with an `if` condition *and* applying some transformation to the loop variable:
```
# str.upper() returns an all-caps version of a string
loud_short_planets = [planet.upper() + '!' for planet in planets if len(planet) < 6]
loud_short_planets
```
People usually write these on a single line, but you might find the structure clearer when it's split up over 3 lines:
```
[
planet.upper() + '!'
for planet in planets
if len(planet) < 6
]
```
(Continuing the SQL analogy, you could think of these three lines as SELECT, FROM, and WHERE)
The expression on the left doesn't technically have to involve the loop variable (though it'd be pretty unusual for it not to). What do you think the expression below will evaluate to? Press the 'output' button to check.
```
[32 for planet in planets]
```
List comprehensions combined with some of the functions we've seen like `min`, `max`, `sum`, `len`, and `sorted`, can lead to some pretty impressive one-line solutions for problems that would otherwise require several lines of code.
For example, yesterday's exercise included a brainteaser asking you to write a function to count the number of negative numbers in a list *without using loops* (or any other syntax we hadn't seen). Here's how we might solve the problem now that we have loops in our arsenal:
```
def count_negatives(nums):
"""Return the number of negative numbers in the given list.
>>> count_negatives([5, -1, -2, 0, 3])
2
"""
n_negative = 0
for num in nums:
if num < 0:
n_negative = n_negative + 1
return n_negative
```
Here's a solution using a list comprehension:
```
def count_negatives(nums):
return len([num for num in nums if num < 0])
```
Much better, right?
Well if all we care about is minimizing the length of our code, this third solution is better still!
```
def count_negatives(nums):
# Reminder: in the day 3 exercise we learned about a quirk of Python where it calculates
# something like True + True + False + True to be equal to 3.
return sum([num < 0 for num in nums])
```
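As an aside, `sum` can consume a generator expression directly, so the square brackets (and the intermediate list they build) can be dropped:

```python
def count_negatives(nums):
    # generator expression: no intermediate list is built
    return sum(num < 0 for num in nums)

count_negatives([5, -1, -2, 0, 3])  # -> 2
```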
Which of these solutions is the "best" is entirely subjective. Solving a problem with less code is always nice, but it's worth keeping in mind the following lines from [The Zen of Python](https://en.wikipedia.org/wiki/Zen_of_Python):
> Readability counts.
> Explicit is better than implicit.
The last definition of `count_negatives` might be the shortest, but will other people reading your code understand how it works?
Writing Pythonic code doesn't mean never using for loops!
# Your turn
Head over to [the Exercises notebook](https://www.kaggle.com/kernels/fork/961955) to get some hands-on practice working with loops and list comprehensions.
# Basic Examples with Different Protocols
## Prerequisites
* A kubernetes cluster with kubectl configured
* curl
* grpcurl
* pygmentize
## Setup Seldon Core
Use the [Setup Cluster](seldon_core_setup.ipynb) notebook to set up Seldon Core with an ingress - either Ambassador or Istio.
Then port-forward to that ingress on localhost:8003 in a separate terminal either with:
* Ambassador: `kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080`
* Istio: `kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80`
```
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
import json
```
## Seldon Protocol REST Model
```
!pygmentize resources/model_seldon_rest.yaml
!kubectl apply -f resources/model_seldon_rest.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=rest-seldon \
-o jsonpath='{.items[0].metadata.name}')
X=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/rest-seldon/api/v1.0/predictions \
-H "Content-Type: application/json"
d=json.loads(X[0])
print(d)
assert(d["data"]["ndarray"][0][0] > 0.4)
!kubectl delete -f resources/model_seldon_rest.yaml
```
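The same REST call can also be made from Python with `requests` instead of `curl`. A sketch, assuming the model above is still deployed and the port-forward from the setup step is running on `localhost:8003`; the `seldon_predict_url` helper is illustrative, not part of any Seldon client library:

```python
def seldon_predict_url(host, namespace, deployment):
    # mirrors the path used by the curl call above
    return f"http://{host}/seldon/{namespace}/{deployment}/api/v1.0/predictions"

url = seldon_predict_url("localhost:8003", "seldon", "rest-seldon")
payload = {"data": {"ndarray": [[1.0, 2.0, 5.0]]}}
print(url)

# to actually send the request (needs the running cluster and port-forward):
# import requests
# r = requests.post(url, json=payload)
# assert r.json()["data"]["ndarray"][0][0] > 0.4
```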
## Seldon Protocol GRPC Model
```
!pygmentize resources/model_seldon_grpc.yaml
!kubectl apply -f resources/model_seldon_grpc.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=grpc-seldon \
-o jsonpath='{.items[0].metadata.name}')
X=!cd ../executor/proto && grpcurl -d '{"data":{"ndarray":[[1.0,2.0,5.0]]}}' \
-rpc-header seldon:grpc-seldon -rpc-header namespace:seldon \
-plaintext \
-proto ./prediction.proto 0.0.0.0:8003 seldon.protos.Seldon/Predict
d=json.loads("".join(X))
print(d)
assert(d["data"]["ndarray"][0][0] > 0.4)
!kubectl delete -f resources/model_seldon_grpc.yaml
```
## Tensorflow Protocol REST Model
```
!pygmentize resources/model_tfserving_rest.yaml
!kubectl apply -f resources/model_tfserving_rest.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=rest-tfserving \
-o jsonpath='{.items[0].metadata.name}')
X=!curl -s -d '{"instances": [1.0, 2.0, 5.0]}' \
-X POST http://localhost:8003/seldon/seldon/rest-tfserving/v1/models/halfplustwo/:predict \
-H "Content-Type: application/json"
d=json.loads("".join(X))
print(d)
assert(d["predictions"][0] == 2.5)
!kubectl delete -f resources/model_tfserving_rest.yaml
```
## Tensorflow Protocol GRPC Model
```
!pygmentize resources/model_tfserving_grpc.yaml
!kubectl apply -f resources/model_tfserving_grpc.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=grpc-tfserving \
-o jsonpath='{.items[0].metadata.name}')
X=!cd ../executor/proto && grpcurl \
-d '{"model_spec":{"name":"halfplustwo"},"inputs":{"x":{"dtype": 1, "tensor_shape": {"dim":[{"size": 3}]}, "floatVal" : [1.0, 2.0, 3.0]}}}' \
-rpc-header seldon:grpc-tfserving -rpc-header namespace:seldon \
-plaintext -proto ./prediction_service.proto \
0.0.0.0:8003 tensorflow.serving.PredictionService/Predict
d=json.loads("".join(X))
print(d)
assert(d["outputs"]["x"]["floatVal"][0] == 2.5)
!kubectl delete -f resources/model_tfserving_grpc.yaml
```
```
import pandas as pd
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from scipy.spatial import distance
import scipy
import math
import scipy.spatial
from collections import Counter
treino = pd.read_csv("dados/3.fit", sep=" ")
treino.head()
teste = pd.read_csv("dados/3.test", sep=" ")
teste.head()
m, n, k = 30, 10, 3
neigh = KNeighborsClassifier(n_neighbors=k)
def create_2D_array_from_1d_array(n, a, b):
array = []
for i in range(n):
array.append([a[i], b[i]])
return array
X_train = create_2D_array_from_1d_array(m, treino.horas.to_numpy(), treino.media.to_numpy())
X_test = create_2D_array_from_1d_array(n, teste.horas.to_numpy(), teste.media.to_numpy())
y_train = treino.aprovado.array
y_test = []
neigh.fit(X_train, y_train)
predict = neigh.predict(X_test)
predict
for i in range(n):
media = teste.media.array[i]
horas = teste.horas.array[i]
aprovacao = ("Aprovado" if predict[i] == 1 else "Reprovado")
print("Aluno {}: ({}, {}) = {}".format(i, media, horas, aprovacao))
def dist_euclidiana(v1, v2):
dim, soma = len(v1), 0
for i in range(dim):
soma += math.pow(v1[i] - v2[i], 2)
#print("v1[{}]:{} - v2[{}]:{}".format(i, v1[i], i, v2[i]))
return math.sqrt(soma)
#float p1 = dadosTreino[i].notaMediaNaEscola;
#float p2 = dadoTeste.notaMediaNaEscola;
#float q1 = dadosTreino[i].horasDeEstudosSemanais;
#float q2 = dadoTeste.horasDeEstudosSemanais;
for i in range(n):
    for j in range(m):
        # training point j and test point i, each represented as (media, horas)
        a = [treino.media.array[j], treino.horas.array[j]]
        b = [teste.media.array[i], teste.horas.array[i]]
        d = distance.euclidean(a, b)
        d1 = dist_euclidiana(a, b)
        #print("distance.euclidean: {:.6f} dist_euclidiana: {:.6f}".format(d, d1))
    media = teste.media.array[i]
    horas = teste.horas.array[i]
    aprovacao = ("Aprovado" if predict[i] == 1 else "Reprovado")
    print("Aluno {}: ({}, {}) = {}".format(i, media, horas, aprovacao))
i = 4
for j in range(m):
    # training point j and test point i, each represented as (media, horas)
    a = [treino.media.array[j], treino.horas.array[j]]
    b = [teste.media.array[i], teste.horas.array[i]]
    d = distance.euclidean(a, b)
    d1 = dist_euclidiana(a, b)
    #print("distance.euclidean: {:.6f} dist_euclidiana: {:.6f}".format(d, d1))
media = teste.media.array[i]
horas = teste.horas.array[i]
aprovacao = ("Aprovado" if predict[i] == 1 else "Reprovado")
print("Aluno {}: ({}, {}) = {}".format(i, media, horas, aprovacao))
def classificacao(i):
    lista = list()
    for j in range(m):
        # distance between training point j and test point i, each as (media, horas)
        a = [treino.media.array[j], treino.horas.array[j]]
        b = [teste.media.array[i], teste.horas.array[i]]
        d = distance.euclidean(a, b)
        rotulo = treino.aprovado.array[j]
        lista.append({"i": j, "d": d, "a": rotulo})
    # sort neighbours by distance and keep the k nearest
    lista_ordenada = sorted(lista, key=lambda item: item['d'])
    lista_k = lista_ordenada[0:k]
    count_aprovados = 0
    count_reprovados = 0
    for vizinho in lista_k:
        if vizinho['a'] == 1:
            count_aprovados = count_aprovados + 1
        else:
            count_reprovados = count_reprovados + 1
    if count_aprovados > count_reprovados:
        return "Aprovado"
    else:
        return "Reprovado"
for i in range(n):
aprovacao = classificacao(i)
media = teste.media.array[i]
horas = teste.horas.array[i]
print("Aluno {}: ({}, {}) = {}".format(i, media, horas, aprovacao))
class KNN:
    def __init__(self, k):
        self.k = k

    def fit(self, X, y):
        self.X_train = X
        self.y_train = y

    def distance(self, X1, X2):
        return scipy.spatial.distance.euclidean(X1, X2)

    def predict(self, X_test):
        final_output = []
        for i in range(len(X_test)):
            # distance from test point i to every training point
            d = []
            for j in range(len(self.X_train)):
                dist = self.distance(self.X_train[j], X_test[i])
                d.append([dist, j])
            d.sort()
            # majority vote among the labels of the k nearest neighbours
            votes = [self.y_train[j] for _, j in d[0:self.k]]
            ans = Counter(votes).most_common(1)[0][0]
            final_output.append(ans)
        return final_output

    def score(self, X_test, y_test):
        predictions = np.array(self.predict(X_test))
        return (predictions == np.array(y_test)).sum() / len(y_test)
clf = KNN(3)
clf.fit(X_train, y_train)
prediction = clf.predict(X_test)
for i in range(n):
media = teste.media.array[i]
horas = teste.horas.array[i]
aprovacao = ("Aprovado" if prediction[i] == 1 else "Reprovado")
print("Aluno {}: ({}, {}) = {}".format(i, media, horas, aprovacao))
```
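The distance/sort/vote logic implemented above can also be written as one small standalone function and sanity-checked on toy points, independent of the course data files (a sketch; `knn_predict` is a name chosen here, not from the notebook):

```python
import math
from collections import Counter

def knn_predict(train_points, train_labels, query, k=3):
    """Majority vote among the k training points nearest to `query`."""
    dists = sorted(
        (math.dist(point, query), label)
        for point, label in zip(train_points, train_labels)
    )
    votes = [label for _, label in dists[:k]]
    return Counter(votes).most_common(1)[0][0]

points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = [0, 0, 0, 1, 1, 1]
print(knn_predict(points, labels, (0.5, 0.5)))  # -> 0 (nearest cluster)
```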
Configurations:
* install tensorflow 2.1
* install matplotlib
* install pandas
* install scikit-learn
* install nltk
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
import re
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.preprocessing import text, sequence
from keras import utils
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
from keras.layers.embeddings import Embedding
from keras.layers.core import SpatialDropout1D
from keras.layers import LSTM
from keras.callbacks import EarlyStopping
from numpy.random import seed
#Load Data
df_train = pd.read_csv('../data/deep-learning-datasets/twitter-sentiment-analysis/train_E6oV3lV.csv')
df_train.columns = ["id", "label", "text"]
df_test = pd.read_csv('../data/deep-learning-datasets/twitter-sentiment-analysis/test_tweets_anuFYb8.csv')
df_test.columns = ["id","text"]
df_train
# clean data
REPLACE_BY_SPACE_RE = re.compile(r'[/(){}\[\]\|@,;]')
BAD_SYMBOLS_RE = re.compile(r'[^0-9a-z #+_]')
def clean_text(text):
"""
text: a string
return: modified initial string
"""
text = text.lower() # lowercase text
    text = REPLACE_BY_SPACE_RE.sub(' ', text)  # replace REPLACE_BY_SPACE_RE symbols with a space
    text = BAD_SYMBOLS_RE.sub('', text)  # remove any remaining symbols matched by BAD_SYMBOLS_RE
    text = ' '.join(word for word in text.split() if len(word) > 2)  # drop words shorter than 3 characters
return text
def preprocess_text(df):
df = df.reset_index(drop=True)
df['text'] = df['text'].apply(clean_text)
    df['text'] = df['text'].str.replace(r'\d+', '', regex=True)  # strip digits
return df
df_train = preprocess_text(df_train)
df_test = preprocess_text(df_test)
df_train
# The maximum number of words to be used. (most frequent)
MAX_NB_WORDS = 30000
# Max number of words in each complaint.
MAX_SEQUENCE_LENGTH = 50
# This is fixed.
EMBEDDING_DIM = 100
tokenizer = text.Tokenizer(num_words=MAX_NB_WORDS, filters='#!"$%&()*+,-./:;<=>?@[\]^_`{|}~', lower=True, split= ' ')
tokenizer.fit_on_texts(pd.concat([df_train['text'], df_test['text']]).values)  # fit on train + test text together
word_index = tokenizer.word_index
word_index['study']
def fromTextToFeatures(df_text):
# gives you a list of integer sequences encoding the words in your sentence
X = tokenizer.texts_to_sequences(df_text.values)
    # pad/truncate every index sequence to MAX_SEQUENCE_LENGTH values,
    # left-padded with zeros, giving a 2-D array of shape (n_samples, 50)
X = pad_sequences(X, maxlen=MAX_SEQUENCE_LENGTH)
return X
X = fromTextToFeatures(df_train['text'])
print('Shape of data tensor:', X.shape)
#X
X_test_ex = fromTextToFeatures(df_test['text'])
print('Shape of data tensor:', X_test_ex.shape)
Y = pd.get_dummies(df_train['label']).values
# each label becomes a one-hot row, e.g. label 0 -> [1, 0] and label 1 -> [0, 1]
print('Shape of label tensor:', Y.shape)
X_train, X_test, Y_train, Y_test = train_test_split(X,Y, test_size = 0.10, random_state = 42)
print(X_train.shape,Y_train.shape)
print(X_test.shape,Y_test.shape)
seed(100)
model = Sequential()
# The Embedding layer is used to create word vectors for incoming words.
# It sits between the input and the LSTM layer, i.e.
# the output of the Embedding layer is the input to the LSTM layer.
model.add(Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_length=X.shape[1]))
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
epochs = 3
batch_size = 64
history = model.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size,validation_split=0.1,callbacks=[EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0001)])
accr = model.evaluate(X_test,Y_test)
print('Test set\n Loss: {:0.3f}\n Accuracy: {:0.3f}'.format(accr[0],accr[1]))
pred_y = model.predict(X_test)
```
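As a quick sanity check of the preprocessing, the cleaning rules can be exercised in isolation. This is a self-contained copy of the `clean_text` logic above (with raw-string regexes), so it runs without the dataset:

```python
import re

REPLACE_BY_SPACE_RE = re.compile(r'[/(){}\[\]\|@,;]')
BAD_SYMBOLS_RE = re.compile(r'[^0-9a-z #+_]')

def clean_text(text):
    text = text.lower()
    text = REPLACE_BY_SPACE_RE.sub(' ', text)  # punctuation-like symbols -> space
    text = BAD_SYMBOLS_RE.sub('', text)        # drop any other disallowed characters
    return ' '.join(word for word in text.split() if len(word) > 2)

print(clean_text("@user I LOVED it!!"))  # -> user loved
```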
# Tutorial 4 - Setting parameter values
In [Tutorial 1](./Tutorial%201%20-%20How%20to%20run%20a%20model.ipynb) and [Tutorial 2](./Tutorial%202%20-%20Compare%20models.ipynb), we saw how to run a PyBaMM model with all the default settings. However, PyBaMM also allows you to tweak these settings for your application. In this tutorial, we will see how to change the parameters in PyBaMM.
```
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
```
## Change the whole parameter set
PyBaMM has a number of in-built parameter sets (check the list [here](https://pybamm.readthedocs.io/en/latest/source/parameters/parameter_sets.html)), which can be selected doing
```
chemistry = pybamm.parameter_sets.Chen2020
```
This variable is a dictionary with the corresponding parameter subsets for each component.
```
chemistry
```
More details on each subset can be found [here](https://github.com/pybamm-team/PyBaMM/tree/master/pybamm/input/parameters).
Now we can pass `'chemistry'` into `ParameterValues` to create the dictionary of parameter values
```
parameter_values = pybamm.ParameterValues(chemistry=chemistry)
```
We can see all the parameters stored in the dictionary
```
parameter_values
```
or we can search for a particular parameter
```
parameter_values.search("electrolyte")
```
To run a simulation with this parameter set, we can proceed as usual but passing the parameters as a keyword argument
```
model = pybamm.lithium_ion.DFN()
sim = pybamm.Simulation(model, parameter_values=parameter_values)
sim.solve([0, 3600])
sim.plot()
```
## Change individual parameters
We often want to quickly change a small number of parameter values to investigate how the behaviour of the battery changes. In such cases, we can change parameter values without having to leave the notebook or script we are working in.
We start by initialising the model and the parameter values
```
model = pybamm.lithium_ion.DFN()
parameter_values = pybamm.ParameterValues(chemistry=pybamm.parameter_sets.Chen2020)
```
In this example we will change the current to 10 A
```
parameter_values["Current function [A]"] = 10
```
Now we just need to run the simulation with the new parameter values
```
sim = pybamm.Simulation(model, parameter_values=parameter_values)
sim.solve([0, 3600])
sim.plot()
```
Note that we still passed the interval `[0, 3600]` to `sim.solve()`, but the simulation terminated early as the lower voltage cut-off was reached.
You can also simulate drive cycles by passing the data directly
```
parameter_values["Current function [A]"] = "[current data]US06"
```
Then we can just define the model and solve it
```
model = pybamm.lithium_ion.SPMe()
sim = pybamm.Simulation(model, parameter_values=parameter_values)
sim.solve()
sim.plot(["Current [A]", "Terminal voltage [V]"])
```
Alternatively, we can define the current to be an arbitrary function of time
```
import numpy as np
def my_current(t):
return pybamm.sin(2 * np.pi * t / 60)
parameter_values["Current function [A]"] = my_current
```
and we can now solve the model again. In this case, we can pass `t_eval` to the solver to make sure we have enough time points to resolve the function in our output.
```
model = pybamm.lithium_ion.SPMe()
sim = pybamm.Simulation(model, parameter_values=parameter_values)
t_eval = np.arange(0, 121, 1)
sim.solve(t_eval=t_eval)
sim.plot(["Current [A]", "Terminal voltage [V]"])
```
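As a quick back-of-the-envelope check (plain Python, no PyBaMM required) that this `t_eval` resolves the 60-second sine period well:

```python
t_eval = list(range(0, 121))   # same grid as above: 0, 1, ..., 120 s
dt = t_eval[1] - t_eval[0]     # 1 s spacing
period = 60                    # my_current above repeats every 60 s
print(len(t_eval), period / dt)  # -> 121 60.0
```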
In this notebook we have seen how we can change the parameters of our model. In [Tutorial 5](./Tutorial%205%20-%20Run%20experiments.ipynb) we show how can we define and run experiments.
<table align="center">
<td align="center"><a target="_blank" href="http://introtodeeplearning.com">
<img src="http://introtodeeplearning.com/images/colab/mit.png" style="padding-bottom:5px;" />
Visit MIT Deep Learning</a></td>
<td align="center"><a target="_blank" href="https://colab.research.google.com/github/aamini/introtodeeplearning/blob/master/lab1/solutions/Part2_Music_Generation_Solution.ipynb">
<img src="http://introtodeeplearning.com/images/colab/colab.png?v2.0" style="padding-bottom:5px;" />Run in Google Colab</a></td>
<td align="center"><a target="_blank" href="https://github.com/aamini/introtodeeplearning/blob/master/lab1/solutions/Part2_Music_Generation_Solution.ipynb">
<img src="http://introtodeeplearning.com/images/colab/github.png" height="70px" style="padding-bottom:5px;" />View Source on GitHub</a></td>
</table>
# Copyright Information
```
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
```
# Lab 1: Intro to TensorFlow and Music Generation with RNNs
# Part 2: Music Generation with RNNs
In this portion of the lab, we will explore building a Recurrent Neural Network (RNN) for music generation. We will train a model to learn the patterns in raw sheet music in [ABC notation](https://en.wikipedia.org/wiki/ABC_notation) and then use this model to generate new music.
## 2.1 Dependencies
First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
```
# Import Tensorflow 2.0
import tensorflow as tf
# Download and import the MIT 6.S191 package
!pip install mitdeeplearning
import mitdeeplearning as mdl
# Import all remaining packages
import numpy as np
import os
import time
import functools
from IPython import display as ipythondisplay
from tqdm import tqdm
!apt-get install abcmidi timidity > /dev/null 2>&1
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
```
## 2.2 Dataset

We've gathered a dataset of thousands of Irish folk songs, represented in the ABC notation. Let's download the dataset and inspect it:
```
# Download the dataset
songs = mdl.lab1.load_training_data()
# Print one of the songs to inspect it in greater detail!
example_song = songs[0]
print("\nExample song: ")
print(example_song)
```
We can easily convert a song in ABC notation to an audio waveform and play it back. Be patient for this conversion to run, it can take some time.
```
# Convert the ABC notation to audio file and listen to it
mdl.lab1.play_song(example_song)
```
One important thing to think about is that this notation of music does not simply contain information on the notes being played, but additionally there is meta information such as the song title, key, and tempo. How does the number of different characters that are present in the text file impact the complexity of the learning problem? This will become important soon, when we generate a numerical representation for the text data.
```
# Join our list of song strings into a single string containing all songs
songs_joined = "\n\n".join(songs)
# Find all unique characters in the joined string
vocab = sorted(set(songs_joined))
print("There are", len(vocab), "unique characters in the dataset")
```
## 2.3 Process the dataset for the learning task
Let's take a step back and consider our prediction task. We're trying to train a RNN model to learn patterns in ABC music, and then use this model to generate (i.e., predict) a new piece of music based on this learned information.
Breaking this down, what we're really asking the model is: given a character, or a sequence of characters, what is the most probable next character? We'll train the model to perform this task.
To achieve this, we will input a sequence of characters to the model, and train the model to predict the output, that is, the following character at each time step. RNNs maintain an internal state that depends on previously seen elements, so information about all characters seen up until a given moment will be taken into account in generating the prediction.
### Vectorize the text
Before we begin training our RNN model, we'll need to create a numerical representation of our text-based dataset. To do this, we'll generate two lookup tables: one that maps characters to numbers, and a second that maps numbers back to characters. Recall that we just identified the unique characters present in the text.
```
### Define numerical representation of text ###
# Create a mapping from character to unique index.
# For example, to get the index of the character "d",
# we can evaluate `char2idx["d"]`.
char2idx = {u:i for i, u in enumerate(vocab)}
# Create a mapping from indices to characters. This is
# the inverse of char2idx and allows us to convert back
# from unique index to the character in our vocabulary.
idx2char = np.array(vocab)
```
This gives us an integer representation for each character. Observe that the unique characters (i.e., our vocabulary) in the text are mapped as indices from 0 to `len(unique)`. Let's take a peek at this numerical representation of our dataset:
```
print('{')
for char,_ in zip(char2idx, range(20)):
print(' {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print(' ...\n}')
### Vectorize the songs string ###
'''TODO: Write a function to convert the all songs string to a vectorized
(i.e., numeric) representation. Use the appropriate mapping
above to convert from vocab characters to the corresponding indices.
NOTE: the output of the `vectorize_string` function
should be a np.array with `N` elements, where `N` is
the number of characters in the input string
'''
def vectorize_string(string):
vectorized_output = np.array([char2idx[char] for char in string])
return vectorized_output
# def vectorize_string(string):
# TODO
vectorized_songs = vectorize_string(songs_joined)
```
We can also look at how the first part of the text is mapped to an integer representation:
```
print ('{} ---- characters mapped to int ----> {}'.format(repr(songs_joined[:10]), vectorized_songs[:10]))
# check that vectorized_songs is a numpy array
assert isinstance(vectorized_songs, np.ndarray), "returned result should be a numpy array"
```
### Create training examples and targets
Our next step is to actually divide the text into example sequences that we'll use during training. Each input sequence that we feed into our RNN will contain `seq_length` characters from the text. We'll also need to define a target sequence for each input sequence, which will be used in training the RNN to predict the next character. For each input, the corresponding target will contain the same length of text, except shifted one character to the right.
To do this, we'll break the text into chunks of `seq_length+1`. Suppose `seq_length` is 4 and our text is "Hello". Then, our input sequence is "Hell" and the target sequence is "ello".
The batch method will then let us convert this stream of character indices to sequences of the desired size.
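Before looking at the batch function, here is the "Hello" example from above as a tiny standalone sketch (plain Python strings, independent of the lab's scaffolded code), showing how one chunk of `seq_length + 1` characters yields an (input, target) pair:

```python
text = "Hello"
seq_length = 4

# A chunk of seq_length + 1 characters gives one training pair, where the
# target is the input shifted one character to the right.
chunk = text[:seq_length + 1]
input_seq, target_seq = chunk[:-1], chunk[1:]
print(input_seq, target_seq)  # Hell ello
```

The same shift-by-one idea is what `get_batch` below applies to the vectorized song data, just with integer indices instead of raw characters.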
```
### Batch definition to create training examples ###
def get_batch(vectorized_songs, seq_length, batch_size):
    # the length of the vectorized songs string, minus one so that every
    # input character has a next-character target
    n = vectorized_songs.shape[0] - 1
    # randomly choose the starting indices for the examples in the training batch
    idx = np.random.choice(n - seq_length, batch_size)
    '''TODO: construct a list of input sequences for the training batch'''
    input_batch = [vectorized_songs[i : i + seq_length] for i in idx]
    # input_batch = # TODO
    '''TODO: construct a list of output sequences for the training batch'''
    output_batch = [vectorized_songs[i + 1 : i + seq_length + 1] for i in idx]
    # output_batch = # TODO
    # x_batch, y_batch provide the true inputs and targets for network training
    x_batch = np.reshape(input_batch, [batch_size, seq_length])
    y_batch = np.reshape(output_batch, [batch_size, seq_length])
    return x_batch, y_batch
# Perform some simple tests to make sure your batch function is working properly!
test_args = (vectorized_songs, 10, 2)
if not mdl.lab1.test_batch_func_types(get_batch, test_args) or \
   not mdl.lab1.test_batch_func_shapes(get_batch, test_args) or \
   not mdl.lab1.test_batch_func_next_step(get_batch, test_args):
    print("======\n[FAIL] could not pass tests")
else:
    print("======\n[PASS] passed all tests!")
```
For each of these vectors, each index is processed at a single time step. So, for the input at time step 0, the model receives the index for the first character in the sequence, and tries to predict the index of the next character. At the next timestep, it does the same thing, but the RNN considers the information from the previous step, i.e., its updated state, in addition to the current input.
We can make this concrete by taking a look at how this works over the first several characters in our text:
```
x_batch, y_batch = get_batch(vectorized_songs, seq_length=5, batch_size=1)
for i, (input_idx, target_idx) in enumerate(zip(np.squeeze(x_batch), np.squeeze(y_batch))):
    print("Step {:3d}".format(i))
    print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
    print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
```
## 2.4 The Recurrent Neural Network (RNN) model
Now we're ready to define and train an RNN model on our ABC music dataset, and then use that trained model to generate a new song. We'll train our RNN using batches of song snippets from our dataset, which we generated in the previous section.
The model is based on the LSTM architecture, where we use a state vector to maintain information about the temporal relationships between consecutive characters. The final output of the LSTM is then fed into a fully connected [`Dense`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layer where we'll output a softmax over each character in the vocabulary, and then sample from this distribution to predict the next character.
As we introduced in the first portion of this lab, we'll be using the Keras API, specifically, [`tf.keras.Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential), to define the model. Three layers are used to define the model:
* [`tf.keras.layers.Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding): This is the input layer, consisting of a trainable lookup table that maps the index of each character to a vector with `embedding_dim` dimensions.
* [`tf.keras.layers.LSTM`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM): Our LSTM network, with size `units=rnn_units`.
* [`tf.keras.layers.Dense`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense): The output layer, with `vocab_size` outputs.
<img src="https://raw.githubusercontent.com/aamini/introtodeeplearning/2019/lab1/img/lstm_unrolled-01-01.png" alt="Drawing"/>
### Define the RNN model
Now, we will define a function that we will use to actually build the model.
```
def LSTM(rnn_units):
    return tf.keras.layers.LSTM(
        rnn_units,
        return_sequences=True,
        recurrent_initializer='glorot_uniform',
        recurrent_activation='sigmoid',
        stateful=True,
    )
```
The time has come! Fill in the `TODOs` to define the RNN model within the `build_model` function, and then call the function you just defined to instantiate the model!
```
### Defining the RNN Model ###
'''TODO: Add LSTM and Dense layers to define the RNN model using the Sequential API.'''
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = tf.keras.Sequential([
        # Layer 1: Embedding layer to transform indices into dense vectors
        # of a fixed embedding size
        tf.keras.layers.Embedding(vocab_size, embedding_dim, batch_input_shape=[batch_size, None]),
        # Layer 2: LSTM with `rnn_units` number of units.
        # TODO: Call the LSTM function defined above to add this layer.
        LSTM(rnn_units),
        # LSTM('''TODO'''),
        # Layer 3: Dense (fully-connected) layer that transforms the LSTM output
        # into the vocabulary size.
        # TODO: Add the Dense layer.
        tf.keras.layers.Dense(vocab_size)
        # '''TODO: DENSE LAYER HERE'''
    ])
    return model
# Build a simple model with default hyperparameters. You will get the
# chance to change these later.
model = build_model(len(vocab), embedding_dim=256, rnn_units=1024, batch_size=32)
```
### Test out the RNN model
It's always a good idea to run a few simple checks on our model to see that it behaves as expected.
First, we can use the `Model.summary` function to print out a summary of our model's internal workings. Here we can check the layers in the model, the shape of the output of each of the layers, the batch size, etc.
```
model.summary()
```
We can also quickly check the dimensionality of our output, using a sequence length of 100. Note that the model can be run on inputs of any length.
```
x, y = get_batch(vectorized_songs, seq_length=100, batch_size=32)
pred = model(x)
print("Input shape: ", x.shape, " # (batch_size, sequence_length)")
print("Prediction shape: ", pred.shape, "# (batch_size, sequence_length, vocab_size)")
```
### Predictions from the untrained model
Let's take a look at what our untrained model is predicting.
To get actual predictions from the model, we sample from the output distribution, which is defined by a `softmax` over our character vocabulary. This gives us actual character indices: we use a [categorical distribution](https://en.wikipedia.org/wiki/Categorical_distribution) to sample over the example prediction, yielding a prediction of the next character (specifically its index) at each timestep.
Note here that we sample from this probability distribution, as opposed to simply taking the `argmax`, which can cause the model to get stuck in a loop.
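To see concretely why sampling matters, here is a small NumPy sketch (with made-up logits, independent of the model above): `argmax` always returns the same index for a fixed distribution, while categorical sampling visits the whole vocabulary in proportion to the probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5])
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over three "characters"

greedy = int(np.argmax(probs))                 # always index 0, every single time
samples = rng.choice(len(probs), size=1000, p=probs)
# sampling produces all three indices, roughly in proportion to probs
```

In text generation, the deterministic `greedy` choice is what leads to repetitive loops, whereas `samples` shows the variety that categorical sampling provides.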
Let's try this sampling out for the first example in the batch.
```
sampled_indices = tf.random.categorical(pred[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
sampled_indices
```
We can now decode these to see the text predicted by the untrained model:
```
print("Input: \n", repr("".join(idx2char[x[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices])))
```
As you can see, the text predicted by the untrained model is pretty nonsensical! How can we do better? We can train the network!
## 2.5 Training the model: loss and training operations
Now it's time to train the model!
At this point, we can think of our next character prediction problem as a standard classification problem. Given the previous state of the RNN, as well as the input at a given time step, we want to predict the class of the next character -- that is, to actually predict the next character.
To train our model on this classification task, we can use a form of the `crossentropy` loss (negative log likelihood loss). Specifically, we will use the [`sparse_categorical_crossentropy`](https://www.tensorflow.org/api_docs/python/tf/keras/backend/sparse_categorical_crossentropy) loss, as it utilizes integer targets for categorical classification tasks. We will want to compute the loss using the true targets -- the `labels` -- and the predicted targets -- the `logits`.
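Under the hood, this loss is just a negative log likelihood evaluated at integer targets. Here is a minimal NumPy equivalent (an illustrative sketch, not TensorFlow's actual implementation):

```python
import numpy as np

def sparse_xent(labels, logits):
    """labels: (n,) integer class ids; logits: (n, vocab) unnormalized scores."""
    shifted = logits - logits.max(axis=-1, keepdims=True)  # for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels]      # NLL of the true class

# With uniform logits over a vocabulary of size 3, the loss is log(3) per example
loss = sparse_xent(np.array([0, 2]), np.zeros((2, 3)))
```

The `log(3)` value for uniform logits is also a useful baseline: an untrained model should produce a loss near `log(vocab_size)`.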
Let's first compute the loss using our example predictions from the untrained model:
```
### Defining the loss function ###
'''TODO: define the loss function to compute and return the loss between
the true labels and predictions (logits). Set the argument from_logits=True.'''
def compute_loss(labels, logits):
    loss = tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
    # loss = tf.keras.losses.sparse_categorical_crossentropy('''TODO''', '''TODO''', from_logits=True) # TODO
    return loss
'''TODO: compute the loss using the true next characters from the example batch
and the predictions from the untrained model several cells above'''
example_batch_loss = compute_loss(y, pred)
# example_batch_loss = compute_loss('''TODO''', '''TODO''') # TODO
print("Prediction shape: ", pred.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_loss: ", example_batch_loss.numpy().mean())
```
Let's start by defining some hyperparameters for training the model. To start, we have provided some reasonable values for some of the parameters. It is up to you to use what we've learned in class to help optimize the parameter selection here!
```
### Hyperparameter setting and optimization ###
# Optimization parameters:
num_training_iterations = 2000 # Increase this to train longer
batch_size = 4 # Experiment between 1 and 64
seq_length = 100 # Experiment between 50 and 500
learning_rate = 5e-3 # Experiment between 1e-5 and 1e-1
# Model parameters:
vocab_size = len(vocab)
embedding_dim = 256
rnn_units = 1024 # Experiment between 1 and 2048
# Checkpoint location:
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "my_ckpt")
```
Now, we are ready to define our training operation -- the optimizer and duration of training -- and use this function to train the model. You will experiment with the choice of optimizer and the duration for which you train your models, and see how these changes affect the network's output. Some optimizers you may like to try are [`Adam`](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam?version=stable) and [`Adagrad`](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adagrad?version=stable).
First, we will instantiate a new model and an optimizer. Then, we will use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) method to perform the backpropagation operations.
We will also generate a print-out of the model's progress through training, which will help us easily visualize whether or not we are minimizing the loss.
```
### Define optimizer and training operation ###
'''TODO: instantiate a new model for training using the `build_model`
function and the hyperparameters created above.'''
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size)
# model = build_model('''TODO: arguments''')
'''TODO: instantiate an optimizer with its learning rate.
Checkout the tensorflow website for a list of supported optimizers.
https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/
Try using the Adam optimizer to start.'''
optimizer = tf.keras.optimizers.Adam(learning_rate)
# optimizer = # TODO
@tf.function
def train_step(x, y):
    # Use tf.GradientTape()
    with tf.GradientTape() as tape:
        '''TODO: feed the current input into the model and generate predictions'''
        y_hat = model(x)  # TODO
        # y_hat = model('''TODO''')
        '''TODO: compute the loss!'''
        loss = compute_loss(y, y_hat)  # TODO
        # loss = compute_loss('''TODO''', '''TODO''')
    # Now, compute the gradients
    '''TODO: complete the function call for gradient computation.
    Remember that we want the gradient of the loss with respect to all
    of the model parameters.
    HINT: use `model.trainable_variables` to get a list of all model
    parameters.'''
    grads = tape.gradient(loss, model.trainable_variables)  # TODO
    # grads = tape.gradient('''TODO''', '''TODO''')
    # Apply the gradients to the optimizer so it can update the model accordingly
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
##################
# Begin training!#
##################
history = []
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss')
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for iter in tqdm(range(num_training_iterations)):
    # Grab a batch and propagate it through the network
    x_batch, y_batch = get_batch(vectorized_songs, seq_length, batch_size)
    loss = train_step(x_batch, y_batch)
    # Update the progress bar
    history.append(loss.numpy().mean())
    plotter.plot(history)
    # Periodically save the model's weights as a checkpoint
    if iter % 100 == 0:
        model.save_weights(checkpoint_prefix)
# Save the trained model and the weights
model.save_weights(checkpoint_prefix)
```
## 2.6 Generate music using the RNN model
Now, we can use our trained RNN model to generate some music! When generating music, we'll have to feed the model some sort of seed to get it started (because it can't predict anything without something to start with!).
Once we have a generated seed, we can then iteratively predict each successive character (remember, we are using the ABC representation for our music) using our trained RNN. More specifically, recall that our RNN outputs a `softmax` over possible successive characters. For inference, we iteratively sample from these distributions, and then use our samples to encode a generated song in the ABC format.
Then, all we have to do is write it to a file and listen!
### Restore the latest checkpoint
To keep this inference step simple, we will use a batch size of 1. Because of how the RNN state is passed from timestep to timestep, the model will only be able to accept a fixed batch size once it is built.
To run the model with a different `batch_size`, we'll need to rebuild the model and restore the weights from the latest checkpoint, i.e., the weights after the last checkpoint during training:
```
'''TODO: Rebuild the model using a batch_size=1'''
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1) # TODO
# model = build_model('''TODO''', '''TODO''', '''TODO''', batch_size=1)
# Restore the model weights for the last checkpoint after training
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
model.summary()
```
Notice that we have fed in a fixed `batch_size` of 1 for inference.
### The prediction procedure
Now, we're ready to write the code to generate text in the ABC music format:
* Initialize a "seed" start string and the RNN state, and set the number of characters we want to generate.
* Use the start string and the RNN state to obtain the probability distribution over the next predicted character.
* Sample from the resulting multinomial (categorical) distribution to obtain the index of the predicted character. This predicted character is then used as the next input to the model.
* At each time step, the updated RNN state is fed back into the model, so that it now has more context in making the next prediction. After predicting the next character, the updated RNN states are again fed back into the model, which is how it learns sequence dependencies in the data, as it gets more information from the previous predictions.

Complete and experiment with this code block (as well as some of the aspects of network definition and training!), and see how the model performs. How do songs generated after training with a small number of epochs compare to those generated after a longer duration of training?
```
### Prediction of a generated song ###
def generate_text(model, start_string, generation_length=1000):
    # Evaluation step (generating ABC text using the learned RNN model)
    '''TODO: convert the start string to numbers (vectorize)'''
    input_eval = [char2idx[s] for s in start_string]  # TODO
    # input_eval = ['''TODO''']
    input_eval = tf.expand_dims(input_eval, 0)
    # Empty string to store our results
    text_generated = []
    # Here batch size == 1
    model.reset_states()
    tqdm._instances.clear()
    for i in tqdm(range(generation_length)):
        '''TODO: evaluate the inputs and generate the next character predictions'''
        predictions = model(input_eval)
        # predictions = model('''TODO''')
        # Remove the batch dimension
        predictions = tf.squeeze(predictions, 0)
        '''TODO: use a multinomial distribution to sample'''
        predicted_id = tf.random.categorical(predictions, num_samples=1)[-1, 0].numpy()
        # predicted_id = tf.random.categorical('''TODO''', num_samples=1)[-1,0].numpy()
        # Pass the prediction along with the previous hidden state
        # as the next inputs to the model
        input_eval = tf.expand_dims([predicted_id], 0)
        '''TODO: add the predicted character to the generated text!'''
        # Hint: consider what format the prediction is in vs. the output
        text_generated.append(idx2char[predicted_id])  # TODO
        # text_generated.append('''TODO''')
    return (start_string + ''.join(text_generated))
'''TODO: Use the model and the function defined above to generate ABC format text of length 1000!
As you may notice, ABC files start with "X" - this may be a good start string.'''
generated_text = generate_text(model, start_string="X", generation_length=1000) # TODO
# generated_text = generate_text('''TODO''', start_string="X", generation_length=1000)
```
### Play back the generated music!
We can now call a function to convert the ABC format text to an audio file, and then play that back to check out our generated music! Try training longer if the resulting song is not long enough, or re-generating the song!
```
### Play back generated songs ###
generated_songs = mdl.lab1.extract_song_snippet(generated_text)
for i, song in enumerate(generated_songs):
    # Synthesize the waveform from a song
    waveform = mdl.lab1.play_song(song)
    # If it's a valid song (correct syntax), let's play it!
    if waveform:
        print("Generated song", i)
        ipythondisplay.display(waveform)
```
## 2.7 Experiment and **get awarded for the best songs**!!
Congrats on making your first sequence model in TensorFlow! It's a pretty big accomplishment, and hopefully you have some sweet tunes to show for it.
If you want to go further, try to optimize your model and submit your best song! Tweet us at [@MITDeepLearning](https://twitter.com/MITDeepLearning) or [email us](mailto:introtodeeplearning-staff@mit.edu) a copy of the song (if you don't have Twitter), and we'll give out prizes to our favorites!
Consider how you may improve your model and what seems to be most important in terms of performance. Here are some ideas to get you started:
* How does the number of training epochs affect the performance?
* What if you alter or augment the dataset?
* Does the choice of start string significantly affect the result?
Have fun and happy listening!

```
# Example submission by a previous 6.S191 student (credit: Christian Adib)
%%html
<blockquote class="twitter-tweet"><a href="https://twitter.com/AdibChristian/status/1090030964770783238?ref_src=twsrc%5Etfw">January 28, 2019</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
```
# Training the extended rough Bergomi model part 3
In this notebook we train a neural network for the extended rough Bergomi model for expiries in the range (0.03,0.12].
Be aware that the datasets are rather large.
### Load, split and scale the datasets
```
import os, pandas as pd, numpy as np
wd = os.getcwd()
# Load contract grid:
logMoneyness = pd.read_csv(wd + '\\data\\logMoneyness.txt', delimiter=",", header = None).values
expiries = pd.read_csv(wd + '\\data\\expiries.txt', delimiter=",", header = None).values
# Set useful parameters:
nIn = 17
nOut = 275
nXi = 13
# Load training data:
data_train = pd.read_csv(wd + '\\data\\training_and_test_data\\rbergomi_extended\\rbergomi_extended_training_data_3.csv', delimiter=",").values
x_train = data_train[:,:nIn]
y_train = data_train[:,nIn:nIn+nOut]
data_train = None
# Load test data:
data_test = pd.read_csv(wd + '\\data\\training_and_test_data\\rbergomi_extended\\rbergomi_extended_test_data_3.csv', delimiter=",").values
x_valid = data_test[:,:nIn]
y_valid = data_test[:,nIn:nIn+nOut]
data_test = None
# Normalise data:
from sklearn.preprocessing import StandardScaler
tmp1 = np.reshape(np.array([3.50,0.00,0.50,0.50]), (1, 4))
tmp2 = np.reshape(np.array([0.75,-1.00,-0.50,-0.50]), (1, 4))
ub = np.concatenate((tmp1,np.tile(1,(1,nXi))),1)
lb = np.concatenate((tmp2,np.tile(0.0025,(1,nXi))),1)
def myscale(x):
    res = np.zeros(nIn)
    for i in range(nIn):
        res[i] = (x[i] - (ub[0,i] + lb[0,i])*0.5) * 2 / (ub[0,i] - lb[0,i])
    return res

def myinverse(x):
    res = np.zeros(nIn)
    for i in range(nIn):
        res[i] = x[i]*(ub[0,i] - lb[0,i])*0.5 + (ub[0,i] + lb[0,i])*0.5
    return res
# Scale inputs:
x_train_mod = np.array([myscale(x) for x in x_train])
x_valid_mod = np.array([myscale(x) for x in x_valid])
# Scale and normalise output:
scale_y = StandardScaler()
y_train_mod = scale_y.fit_transform(y_train)
y_valid_mod = scale_y.transform(y_valid)
```
### Define utility functions
```
import keras
from keras.layers import Activation
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects
keras.backend.set_floatx('float64')
def GetNetwork(nIn, nOut, nNodes, nLayers, actFun):
    # Description: Creates a neural network of a specified structure
    input1 = keras.layers.Input(shape=(nIn,))
    layerTmp = keras.layers.Dense(nNodes, activation=actFun)(input1)
    for i in range(nLayers - 1):
        layerTmp = keras.layers.Dense(nNodes, activation=actFun)(layerTmp)
    output1 = keras.layers.Dense(nOut, activation='linear')(layerTmp)
    return keras.models.Model(inputs=input1, outputs=output1)

def TrainNetwork(nn, batchsize, numEpochs, objFun, optimizer, xTrain, yTrain, xTest, yTest):
    # Description: Trains a neural network and returns the network including
    # the history of the training process.
    nn.compile(loss=objFun, optimizer=optimizer)
    history = nn.fit(xTrain, yTrain, batch_size=batchsize,
                     validation_data=(xTest, yTest),
                     epochs=numEpochs, verbose=True, shuffle=True)
    return nn, history.history['loss'], history.history['val_loss']

def root_mean_squared_error(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_pred - y_true)))
```
### Define and train neural network
<span style="color:red">This section can be skipped! Just go straight to "Load network" and load the already trained model</span>
```
# Define model:
model = GetNetwork(nIn,nOut,200,3,'elu')
# Set seed
import random
random.seed(455165)
# Train network
model,loss1,vloss1 = TrainNetwork(model,32,500,root_mean_squared_error,'adam',x_train_mod,y_train_mod,x_valid_mod,y_valid_mod)
model,loss2,vloss2 = TrainNetwork(model,5000,200,root_mean_squared_error,'adam',x_train_mod,y_train_mod,x_valid_mod,y_valid_mod)
```
### Save network
<span style="color:red">This section can be skipped! Just go straight to "Load network" and load the already trained model</span>
```
# Save model:
model.save(wd + '\\data\\neural_network_weights\\rbergomi_extended\\rbergomi_extended_model_3.h5')
# Save weights (and scalings) in JSON format:
# - You need to install 'json-tricks' first.
# - We need this file for proper import into Matlab, R... etc.
weights_and_more = model.get_weights()
weights_and_more.append(0.5*(ub + lb))
weights_and_more.append(np.power(0.5*(ub - lb),2))
weights_and_more.append(scale_y.mean_)
weights_and_more.append(scale_y.var_)
import codecs, json
for idx, val in enumerate(weights_and_more):
    weights_and_more[idx] = weights_and_more[idx].tolist()
json_str = json.dumps(weights_and_more)
text_file = open(wd + "\\data\\neural_network_weights\\rbergomi_extended\\rbergomi_extended_weights_3.json", "w")
text_file.write(json_str)
text_file.close()
```
### Load network
```
# Load already trained neural network:
model = keras.models.load_model(wd + '\\data\\neural_network_weights\\rbergomi_extended\\rbergomi_extended_model_3.h5',
custom_objects={'root_mean_squared_error': root_mean_squared_error})
```
### Validate approximation
```
# Specify test sample to plot:
sample_ind = 5006
# Print parameters of test sample:
print("Model Parameters (eta,rho,alpha,beta,xi1,xi2,...): ",myinverse(x_valid_mod[sample_ind,:]))
import scipy, matplotlib.pyplot as plt
npts = 25
x_sample = x_valid_mod[sample_ind,:]
y_sample = y_valid_mod[sample_ind,:]
prediction = scale_y.inverse_transform(model.predict(x_valid_mod))
plt.figure(1,figsize=(14,12))
for i in range(13):
    plt.subplot(4, 4, i + 1)
    plt.plot(logMoneyness[i*npts:(i+1)*npts], y_valid[sample_ind, i*npts:(i+1)*npts], 'b', label="True")
    plt.plot(logMoneyness[i*npts:(i+1)*npts], prediction[sample_ind, i*npts:(i+1)*npts], '--r', label="Neural network")
    plt.title("Maturity=%1.3f" % expiries[i*npts])
    plt.xlabel("log-moneyness")
    plt.ylabel("Implied volatility")
    plt.legend()
plt.tight_layout()
plt.show()
```
# Problem Set 06 - Gradient Descent and Multivariate Regression
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from numpy.testing import *
plt.ion()
```
Today we will implement gradient descent for a linear regression with multiple variables.
For this, we will use the cars dataset, ``hybrid.csv``. Its columns are defined as follows:
* vehicle: the car model
* year: the year of manufacture
* msrp: manufacturer's suggested retail price in US dollars, as of 2013.
* acceleration: acceleration rate in km per hour per second
* mpg: fuel economy in miles per gallon
* class: the model's class.
Our goal is to estimate the suggested price of the cars from the remaining attributes (excluding the vehicle name and the class).
Therefore, the regression is defined by the formula:
$$ Y = X\Theta + \epsilon $$
where Y corresponds to the ``msrp`` column of the data, and X corresponds to the ``year,acceleration,mpg`` columns.
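As a sanity check for the gradient-descent implementation below, note that the least-squares $\Theta$ can also be obtained in closed form via the normal equations. Here is a quick sketch on synthetic data (the coefficients and sizes are made up, unrelated to `hybrid.csv`):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic design matrix with an intercept column, and a known Theta
X = np.column_stack([np.ones(200), rng.normal(size=(200, 3))])
true_theta = np.array([0.5, 1.0, -2.0, 0.3])
y = X @ true_theta + 0.01 * rng.normal(size=200)

# Closed-form least squares (equivalent to Theta = (X^T X)^{-1} X^T y)
theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
```

With well-conditioned data, a correct gradient-descent implementation should converge to essentially the same `theta_hat`.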
```
df = pd.read_csv('./hybrid.csv')
df.head()
import seaborn as sns
sns.pairplot(df, diag_kws={'edgecolor':'k'}, plot_kws={'alpha':0.5, 'edgecolor':'k'})
```
We select only the columns that will be used.
We normalize the data so that gradient descent runs without problems.
```
y = df['msrp']
X = df[['year','acceleration','mpg']]
X -= X.mean()
X /= X.std(ddof=1)
y -= y.mean()
y /= y.std(ddof=1)
X.insert(0, 'intercept', 1.0)
X = X.values
y = y.values
```
__IMPORTANT:__
Do not create or use any variable or function whose name starts with ``_teste_``.
A) Implement the gradient function for the regression parameters, returning an array with the gradient value for each theta parameter.
```
def gradients(theta, X, y):
    # X : n x m matrix
    # y : length-n array of targets
    # theta : length-m array of parameters
    theta = np.array(theta)
    X = np.array(X)
    y = np.array(y)
    return -2 * ((y - X @ theta) * X.T).mean(axis=1)
```
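An analytic gradient like the one above can be verified against a central finite-difference approximation. The sketch below is a generic check with its own local copy of the MSE loss and random data (not the car dataset):

```python
import numpy as np

def mse(theta, X, y):
    return ((y - X @ theta) ** 2).mean()

def numeric_grad(f, theta, eps=1e-6):
    # Central differences: (f(theta + eps*e_i) - f(theta - eps*e_i)) / (2*eps)
    g = np.zeros_like(theta, dtype=float)
    for i in range(len(theta)):
        step = np.zeros_like(theta, dtype=float)
        step[i] = eps
        g[i] = (f(theta + step) - f(theta - step)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))
y = rng.normal(size=30)
theta = rng.normal(size=4)

analytic = -2 * ((y - X @ theta) * X.T).mean(axis=1)
numeric = numeric_grad(lambda t: mse(t, X, y), theta)
```

If the two vectors agree to several decimal places, the analytic gradient is almost certainly correct.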
B) Implement the gradient descent function for the linear regression parameters.
Return a list with the alpha value and the beta values for each column, in that order.
```
def descent(theta0, X, y, learning_rate=0.005, tolerance=0.0000001):
    theta = np.asarray(theta0, dtype=float).copy()  # accept lists as well as arrays
    oldError, error = np.inf, 0
    while abs(error - oldError) > tolerance:
        dTheta = gradients(theta, X, y)
        theta -= learning_rate * dTheta
        oldError = error
        error = ((X.dot(theta) - y) ** 2).mean()
    return theta
```
C) Now let's evaluate the linear regression model obtained with gradient descent.
First, implement a function that computes the total sum of squares (SST) from the data.
```
def sst(y):
    y = np.array(y)
    return sum((y - y.mean()) ** 2)
```
D) To compute the sum of squared errors (SSE), we first need predictions for the car prices.
Implement a function that obtains the price estimates from the remaining attributes, according to the linear regression model.
The function should return a list with the predicted values.
```
def predict(X, theta):
    return [sum(x_i * theta) for x_i in X]
```
E) Now implement the function that computes the sum of squared errors (SSE).
```
def sse(X, y, theta):
    yPredict = predict(X, theta)
    return sum((yPredict - y) ** 2)
```
F) Finally, implement the function that computes the coefficient of determination (R2).
```
def r2(X, y, theta):
    return 1 - sse(X, y, theta) / sst(y)

r2(X, y, descent([1, 1, 1, 1], X, y))
```
G) Looking at the plots generated at the beginning of the notebook, we can see that not all attributes have a linear relationship with the target. Let's transform the data of one of the car attributes so that linear regression can be applied with better results.
Take the logarithm of the ```mpg``` attribute before z-normalizing.
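To see why a log transform can help, here is a synthetic illustration (made-up numbers, not the car data): when the target depends linearly on log(x) rather than on x, the linear correlation with log(x) is stronger than with the raw variable.

```python
import numpy as np

x = np.linspace(1, 100, 200)
y = 2.0 * np.log(x) + 1.0          # y is linear in log(x), not in x

r_raw = abs(np.corrcoef(x, y)[0, 1])
r_log = abs(np.corrcoef(np.log(x), y)[0, 1])
# r_log is exactly 1 here, while r_raw is noticeably smaller
```

A higher correlation after the transform is exactly the situation in which log-transforming a predictor improves a linear fit.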
```
y = df['msrp']
X = df[['year','acceleration','mpg']]
from math import log
X['mpg'] = X['mpg'].apply(lambda p: log(p))
X -= X.mean()
X /= X.std(ddof=1)
y -= y.mean()
y /= y.std(ddof=1)
X.insert(0, 'intercept', 1.0)
X = X.values
y = y.values
```
Note that the gradient descent code can be run without any changes.
Check whether the regression R2 improved or worsened after transforming the data.
```
r2(X,y,descent([1,1,1,1], X, y))
```
```
import pandas
import re
table_5_2018 = pandas.read_excel('Table_5_Offenses_Known_Offenders_Race_and_Ethnicity_by_Bias_Motivation_2018.xls')
new_5_2018 = (table_5_2018
    .rename(columns={
        'Table 5': 'Bias Motivation',
        'Unnamed: 1': 'Total Offenses',
        'Unnamed: 2': 'White',
        'Unnamed: 3': 'Black or African American',
        'Unnamed: 4': 'American Indian or Alaska Native',
        'Unnamed: 5': 'Asian',
        'Unnamed: 6': 'Native Hawaiian or Other Pacific Islander',
        'Unnamed: 7': 'Group of Multiple Races',
        'Unnamed: 8': 'Unknown Race',
        'Unnamed: 9': 'Hispanic or Latino',
        'Unnamed: 10': 'Not Hispanic or Latino',
        'Unnamed: 11': 'Group of Multiple Ethnicities',
        'Unnamed: 12': 'Unknown Ethnicity',
        'Unnamed: 13': 'Unknown Offender'})
    .drop(index=[0, 1, 2, 3, 4, 48, 49]))
new_5_2018 = new_5_2018.reset_index(drop=True)
# Strip the trailing footnote marker from the last bias-motivation label
new_5_2018['Bias Motivation'][42] = new_5_2018['Bias Motivation'][42][:-2]
new_5_2018
table_9_2018 =pandas.read_excel('Table_9_Known_Offenders_Known_Offenders_Race_Ethnicity_and_Age_2018.xls')
table_2018_9 = (table_9_2018
    .drop(['Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4', 'Unnamed: 5', 'Unnamed: 6'], axis=1)
    .rename(columns={'Table 9': 'Known Offenders', 'Unnamed: 1': 'Total'})
    .drop(index=[0, 1, 2, 19, 20, 21, 22, 23, 24]))
table_2018_9
# Note: the original lines here were copied from the table-5 cell above, but
# table_2018_9 has no 'Bias Motivation' column (its label column is
# 'Known Offenders') and row 42 was dropped, so they would raise a KeyError.
# Strip the trailing footnote marker from the last row label of the actual
# column instead.
last_row = table_2018_9.index[-1]
table_2018_9['Known Offenders'][last_row] = table_2018_9['Known Offenders'][last_row][:-2]
table_2018_9
table_10_2018 =pandas.read_excel('Table_10_Incidents_Bias_Motivation_by_Location_2018.xls')
(table_10_2018
    .rename(columns={
        'Table 10': 'Location',
        'Unnamed: 1': 'Total Incidents',
        'Unnamed: 2': 'Race/Ethnicity/Ancestry',
        'Unnamed: 3': 'Religion',
        'Unnamed: 4': 'Sexual Orientation',
        'Unnamed: 5': 'Disability',
        'Unnamed: 6': 'Gender',
        'Unnamed: 7': 'Gender Identity',
        'Unnamed: 8': 'Multiple Bias Incidents'})
    .drop(columns=['Unnamed: 9', 'Unnamed: 10', 'Unnamed: 11'])
    .drop(index=[0, 1, 2, 3, 4, 53, 54]))
table_10_2017 = pandas.read_excel('Table_10_Incidents_Bias_Motivation_by_Location_2017.xls')
new_table_10_2017 = table_10_2017.drop(0).drop(1).drop(2).rename(columns ={'Table 10': 'Location'}).rename(columns = {'Unnamed: 1' : 'Total Incidents'}).rename(columns = {'Unnamed: 2' : 'Race/Ethnicity/Ancestry'}).rename(columns = {'Unnamed: 3' : 'Religion'}).rename(columns = {'Unnamed: 4' : 'Sexual Orientation'}).rename(columns = {'Unnamed: 5' : 'Disability'}).rename(columns = {'Unnamed: 6' : 'Gender'}).rename(columns = {'Unnamed: 7' : 'Gender Identity'}).rename(columns = {'Unnamed: 8' : 'Multiple Bias Incidents'})
new_table_10_2017.drop(3).drop(4).drop(52).drop(53)
table_5_2017 = pandas.read_excel('Table_5_Offenses_Known_Offenders_Race_and_Ethnicity_by_Bias_Motivation_2017.xls')
new_table_5_2017 = table_5_2017.drop(0).drop(1).drop(2).drop(48).drop(49)
new_2017_5 = new_table_5_2017.rename(columns = {'Table 5' : 'Bias Motivation'}).rename(columns = {'Unnamed: 1' : 'Total Offenses'}).rename(columns = {'Unnamed: 2' : "White"}).rename(columns = {'Unnamed: 3' : 'Black or African American'}).rename(columns = {'Unnamed: 4' : 'American Indian or Alaska Native'}).rename(columns = {'Unnamed: 5' : 'Asian'}).rename(columns = {'Unnamed: 6' : 'Native Hawaiian or Other Pacific Islander'}).rename(columns = {'Unnamed: 7': 'Group of Multiple Races'}).rename(columns= {'Unnamed: 8': 'Unknown Race'}).rename(columns = {'Unnamed: 9' : 'Hispanic or Latino'}).rename(columns = {'Unnamed: 10' : 'Not Hispanic or Latino'}).rename(columns = {'Unnamed: 11' : 'Group of multiple ethnicites'}).rename(columns = {'Unnamed: 12' : 'Unknown Ethnicity'}).rename(columns = {'Unnamed: 13' : 'Unknown Offender'}).drop(3).drop(4)
new_2017_5 = new_2017_5.reset_index(drop=True)
# Strip the trailing footnote marker; use .loc to avoid chained-assignment issues.
new_2017_5.loc[42, 'Bias Motivation'] = new_2017_5.loc[42, 'Bias Motivation'][:-2]
new_2017_5
table_9_2017 = pandas.read_excel('Table_9_Known_Offenders_Known_Offenders_Race_Ethnicity_and_Age_2017.xls')
table_9_2017.drop(0).drop(1).rename(columns = {'Table 9' : 'Race/Ethnicity/Age'}).rename(columns = {'Unnamed: 1' : 'Total'}).drop(2).drop(19).drop(20).drop(21).drop(22).drop(23).drop(24)
table_10_2016 = pandas.read_excel('Table_10_Incidents_Bias_Motivation_by_Location_2016.xls')
new_10_2016 = table_10_2016.drop(0).drop(1).drop(2).rename(columns = {'Table 10' : 'Location'}).rename(columns = {'Unnamed: 1' : 'Total Incidents'}).rename(columns = {'Unnamed: 2': 'Race/Ethnicity/Ancestry'}).rename(columns = {'Unnamed: 3': 'Religion'}).rename(columns = {'Unnamed: 4': 'Sexual Orientation'}).rename(columns = {'Unnamed: 5': 'Disability'}).rename(columns = {'Unnamed: 6': 'Gender'}).rename(columns = {'Unnamed: 7': 'Gender Identity'}).rename(columns = {'Unnamed: 8': 'Multiple Bias Incidents'}).drop(3).drop(4)
new_10_2016.drop(['Unnamed: 9', 'Unnamed: 10', 'Unnamed: 11', 'Unnamed: 12'], axis =1).drop(52).drop(53)
table_5_2016 = pandas.read_excel('Table_5_Offenses_Known_Offenders_Race_and_Ethnicity_by_Bias_Motivation_2016.xls')
new_5_2016 =table_5_2016.drop(0).drop(1).drop(2).drop(['Unnamed: 14', 'Unnamed: 15', 'Unnamed: 16'], axis= 1)
new_2016_5 = new_5_2016.rename(columns = {'Table 5': 'Bias Motivation'}).rename(columns = {'Unnamed: 1': 'Total Offenses'}).rename(columns = {'Unnamed: 2': 'White'}).rename(columns = {'Unnamed: 3': 'Black or African American'}).rename(columns = {'Unnamed: 4': 'American Indian or Alaska Native'}).rename(columns = {'Unnamed: 5': 'Asian'}).rename(columns = {'Unnamed: 6': 'Native Hawaiian or Other Pacific Islander'}).rename(columns = {'Unnamed: 7': 'Group of Multiple races'}).rename(columns = {'Unnamed: 8': 'Unknown Race'}).rename(columns = {'Unnamed: 9': 'Hispanic or Latino'}).rename(columns = {'Unnamed: 10': 'Not Hispanic or Latino'}).rename(columns = {'Unnamed: 11':'Group of multiple ethnicities'}).rename(columns = {'Unnamed: 12': 'Unknown Ethnicity'}).rename(columns = {'Unnamed: 13': 'Unknown Offender'}).drop(3).drop(4).drop(48).drop(49)
new_2016_5 = new_2016_5.reset_index(drop=True)
# Strip the trailing footnote marker; use .loc to avoid chained-assignment issues.
new_2016_5.loc[42, 'Bias Motivation'] = new_2016_5.loc[42, 'Bias Motivation'][:-2]
new_2016_5
table_9_2016 = pandas.read_excel('Table_9_Known_Offenders_Known_Offenders_Race_Ethnicity_and_Age_2016.xls')
table_9_2016.drop(0).drop(1).drop(['Unnamed: 2'], axis=1).rename(columns = {'Table 9':'Race/Ethnicity/Age'}).rename(columns = {'Unnamed: 1':'Total'}).drop(2).drop(19).drop(20).drop(21).drop(22).drop(23).drop(24)
table_10_2015 = pandas.read_excel('Table_10_Incidents_Bias_Motivation_by_Location_2015.xls')
table_10_2015.rename(columns ={'Table 10': 'Location'}).rename(columns = {'Unnamed: 1' : 'Total Incidents'}).rename(columns = {'Unnamed: 2' : 'Race/Ethnicity/Ancestry'}).rename(columns = {'Unnamed: 3' : 'Religion'}).rename(columns = {'Unnamed: 4' : 'Sexual Orientation'}).rename(columns = {'Unnamed: 5' : 'Disability'}).rename(columns = {'Unnamed: 6' : 'Gender'}).rename(columns = {'Unnamed: 7' : 'Gender Identity'}).rename(columns = {'Unnamed: 8' : 'Multiple Bias Incidents'}).drop(0).drop(1).drop(2).drop(51).drop(52).drop(3).drop(4)
table_5_2015 = pandas.read_excel('Table_5_Offenses_Known_Offenders_Race_and_Ethnicity_by_Bias_Motivation_2015.xls')
table_5_2015.rename(columns = {'Table 5': 'Bias Motivation'}).rename(columns = {'Unnamed: 1': 'Total Offenses'}).rename(columns = {'Unnamed: 2': 'White'}).rename(columns = {'Unnamed: 3': 'Black or African American'}).rename(columns = {'Unnamed: 4': 'American Indian or Alaska Native'}).rename(columns = {'Unnamed: 5': 'Asian'}).rename(columns = {'Unnamed: 6': 'Native Hawaiian or Other Pacific Islander'}).rename(columns = {'Unnamed: 7': 'Group of Multiple races'}).rename(columns = {'Unnamed: 8': 'Unknown Race'}).rename(columns = {'Unnamed: 9': 'Hispanic or Latino'}).rename(columns = {'Unnamed: 10': 'Not Hispanic or Latino'}).rename(columns = {'Unnamed: 11':'Group of multiple ethnicities'}).rename(columns = {'Unnamed: 12': 'Unknown Ethnicity'}).rename(columns = {'Unnamed: 13': 'Unknown Offender'}).drop(3).drop(48).drop(49).drop(0).drop(1).drop(2).drop(4)
table_9_2015 = pandas.read_excel('Table_9_Known_Offenders_Known_Offenders_Race_Ethnicity_and_Age_2015.xls')
table_9_2015.drop(0).drop(1).rename(columns = {'Table 9':'Race/Ethnicity/Age'}).rename(columns = {'Unnamed: 1':'Total'}).drop(2).drop(19).drop(20).drop(21).drop(22).drop(23).drop(24)
table_10_2014 = pandas.read_excel('Table_10_Incidents_Bias_Motivation_by_Location_2014.xls')
new_2014_10 = table_10_2014.rename(columns = {'Table 10' : 'Location'}).rename(columns = {'Unnamed: 1' : 'Total Incidents'}).rename(columns = {'Unnamed: 2': 'Race'}).rename(columns = {'Unnamed: 3': 'Religion'}).rename(columns = {'Unnamed: 4': 'Sexual Orientation'}).rename(columns = {'Unnamed: 5': 'Ethnicity'}).rename(columns = {'Unnamed: 6': 'Disability'}).rename(columns = {'Unnamed: 7': 'Gender'}).rename(columns = {'Unnamed: 8': 'Gender Identity'}).rename(columns = {'Unnamed: 9':'Multiple Bias Incidents'})
new_2014_10.drop(0).drop(1).drop(2).drop(48).drop(49).drop(3).drop(4)
table_5_2014 = pandas.read_excel('Table_5_Offenses_Known_Offenders_Race_and_Ethnicity_by_Bias_Motivation_2014.xls')
new_5_2014 = table_5_2014.rename(columns = {'Table 5': 'Bias Motivation'}).rename(columns = {'Unnamed: 1': 'Total Offenses'}).rename(columns = {'Unnamed: 2': 'White'}).rename(columns = {'Unnamed: 3': 'Black or African American'}).rename(columns = {'Unnamed: 4': 'American Indian or Alaska Native'}).rename(columns = {'Unnamed: 5': 'Asian'}).rename(columns = {'Unnamed: 6': 'Native Hawaiian or Other Pacific Islander'}).rename(columns = {'Unnamed: 7': 'Group of Multiple races'}).rename(columns = {'Unnamed: 8': 'Unknown Race'}).rename(columns = {'Unnamed: 9': 'Hispanic or Latino'}).rename(columns = {'Unnamed: 10': 'Not Hispanic or Latino'}).rename(columns = {'Unnamed: 11':'Group of multiple ethnicities'}).rename(columns = {'Unnamed: 12': 'Unknown Ethnicity'}).rename(columns = {'Unnamed: 13': 'Unknown Offender'})
new_5_2014.drop(0).drop(1).drop(2).drop(3).drop(41).drop(42).drop(43).drop(4)
table_9_2014 = pandas.read_excel('Table_9_Known_Offenders_Known_Offenders_Race_Ethnicity_and_Age_2014.xls')
table_9_2014.drop(['Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4', 'Unnamed: 5', 'Unnamed: 6'], axis =1).rename(columns = {'Table 9':'Race/Ethnicity/Age'}).rename(columns = {'Unnamed: 1':'Total'}).drop(0).drop(1).drop(2).drop(19).drop(20).drop(21).drop(22).drop(23).drop(24)
table_10_2013 = pandas.read_excel('Table_10_Incidents_Bias_Motivation_by_Location_2013.xls')
table_10_2013.rename(columns = {'Table 10' : 'Location'}).rename(columns = {'Unnamed: 1' : 'Total Incidents'}).rename(columns = {'Unnamed: 2': 'Race'}).rename(columns = {'Unnamed: 3': 'Religion'}).rename(columns = {'Unnamed: 4': 'Sexual Orientation'}).rename(columns = {'Unnamed: 5': 'Ethnicity'}).rename(columns = {'Unnamed: 6': 'Disability'}).rename(columns = {'Unnamed: 7': 'Gender'}).rename(columns = {'Unnamed: 8': 'Gender Identity'}).rename(columns = {'Unnamed: 9':'Multiple Bias Incidents'}).drop(0).drop(1).drop(2).drop(3).drop(4).drop(48).drop(49)
table_9_2013 = pandas.read_excel('Table_9_Known_Offenders_Known_Offenders_Race_2013.xls')
table_9_2013.rename(columns = {'Table 9' : 'Race/Ethnicity/Age'}).rename(columns = {'Unnamed: 1' : 'Total'}).drop(2).drop(19).drop(20).drop(21).drop(22).drop(23).drop(24).drop(['Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4', 'Unnamed: 5', 'Unnamed: 6'], axis=1)
table_5_2013 = pandas.read_excel('Table_5_Offenses_Known_Offenders_Race_by_Bias_Motivation_2013.xls')
table_5_2013.rename(columns = {'Table 5': 'Bias Motivation'}).rename(columns = {'Unnamed: 1': 'Total Offenses'}).rename(columns = {'Unnamed: 2': 'White'}).rename(columns = {'Unnamed: 3': 'Black or African American'}).rename(columns = {'Unnamed: 4': 'American Indian or Alaska Native'}).rename(columns = {'Unnamed: 5': 'Asian'}).rename(columns = {'Unnamed: 6': 'Native Hawaiian or Other Pacific Islander'}).rename(columns = {'Unnamed: 7': 'Group of Multiple races'}).rename(columns = {'Unnamed: 8': 'Unknown Race'}).rename(columns = {'Unnamed: 9': 'Hispanic or Latino'}).rename(columns = {'Unnamed: 10': 'Not Hispanic or Latino'}).rename(columns = {'Unnamed: 11':'Group of multiple ethnicities'}).rename(columns = {'Unnamed: 12': 'Unknown Ethnicity'}).rename(columns = {'Unnamed: 13': 'Unknown Offender'}).drop(0).drop(1).drop(2).drop(3).drop(4).drop(41).drop(42).drop(43)
table_10_2012 = pandas.read_excel('Table_10_Incidents_Bias_Motivation_by_Location_2012.xls')
table_10_2012.rename(columns = {'Table 10' : 'Location'}).rename(columns = {'Unnamed: 1' : 'Total Incidents'}).rename(columns = {'Unnamed: 2': 'Race'}).rename(columns = {'Unnamed: 3': 'Religion'}).rename(columns = {'Unnamed: 4': 'Sexual Orientation'}).rename(columns = {'Unnamed: 5': 'Ethnicity'}).rename(columns = {'Unnamed: 6': 'Disability'}).rename(columns = {'Unnamed: 7': 'Gender'}).rename(columns = {'Unnamed: 8': 'Gender Identity'}).rename(columns = {'Unnamed: 9':'Multiple Bias Incidents'}).drop(0).drop(1).drop(2).drop(3).drop(4).drop(48).drop(49)
table_9_2012 = pandas.read_excel('Table_9_Known_Offenders_Known_Offenders_Race_2012.xls')
table_9_2012.drop(9).drop(10).rename(columns = {'Table 9': "Known Offenders"}).rename(columns = {'Unnamed: 1': 'Total'}).drop(0).drop(1)
# NOTE: the read_excel call for this table was missing; the file name below is assumed to follow the adjacent years' naming pattern.
table_5_2012 = pandas.read_excel('Table_5_Offenses_Known_Offenders_Race_by_Bias_Motivation_2012.xls')
table_5_2012.rename(columns = {'Table 5': 'Bias Motivation'}).rename(columns = {'Unnamed: 1': 'Total Offenses'}).rename(columns = {'Unnamed: 2': 'White'}).rename(columns = {'Unnamed: 3': 'Black or African American'}).rename(columns = {'Unnamed: 4': 'American Indian or Alaska Native'}).rename(columns = {'Unnamed: 5': 'Asian and Pacific Islander'}).rename(columns = {'Unnamed: 6': 'Multiple Races'}).rename(columns = {'Unnamed: 7': 'Unknown Race'}).rename(columns = {'Unnamed: 8': 'Unknown Offender'}).drop(0).drop(1).drop(2).drop(3).drop(4).drop(34)
table_10_2011 = pandas.read_excel('Table_10_Incidents_Bias_Motivation_by_Location_2011.xls')
table_10_2011.rename(columns = {'Table 10': 'Location'}).rename(columns = {'Unnamed: 1': 'Total Incidents'}).rename(columns = {'Unnamed: 2':'Race'}).rename(columns = {'Unnamed: 3':'Religion'}).rename(columns = {'Unnamed: 4':'Sexual Orientation'}).rename(columns = {'Unnamed: 5': 'Ethnicity'}).rename(columns = {'Unnamed: 6':'Disability'}).rename(columns = {'Unnamed: 7': 'Multiple Bias Incidents'}).drop(0).drop(1).drop(2).drop(3).drop(48).drop(49)
table_9_2011 = pandas.read_excel('Table_9_Known_Offenders_Known_Offenders_Race_2011.xls')
table_9_2011.rename(columns = {'Table 9':'Known Offender Race'}).rename(columns = {'Unnamed: 1':'Total'}).drop(0).drop(1).drop(9).drop(10)
table_5_2011 = pandas.read_excel('Table_5_Offenses_Known_Offenders_Race_by_Bias_Motivation_2011.xls')
table_5_2011.rename(columns = {'Table 5':'Bias Motivation'}).rename(columns = {'Unnamed: 1':'Total Offenses'}).rename(columns = {'Unnamed: 2':'White'}).rename(columns = {'Unnamed: 3':'Black'}).rename(columns = {'Unnamed: 4': 'American Indian or Alaska Native'}).rename(columns = {'Unnamed: 5': 'Asian or Pacific Islander'}).rename(columns = {'Unnamed: 6': 'Multiple Race Groups'}).rename(columns = {'Unnamed: 7': 'Unknown Race'}).rename(columns = {'Unnamed: 8': 'Unknown Offender'}).drop(0).drop(1).drop(2).drop(3).drop(33)
table_10_2010 = pandas.read_excel('Table 10-Incidents-Bias Motivation-by Location 2010.xls')
table_10_2010.drop(0).drop(1).drop(2).drop(35).rename(columns = {'Table 10': 'Location'}).rename(columns = {'Unnamed: 1': 'Total Incidents'}).rename(columns = {'Unnamed: 2':'Race'}).rename(columns = {'Unnamed: 3':'Religion'}).rename(columns = {'Unnamed: 4':'Sexual Orientation'}).rename(columns = {'Unnamed: 5': 'Ethnicity'}).rename(columns = {'Unnamed: 6':'Disability'}).rename(columns = {'Unnamed: 7': 'Multiple Bias Incidents'})
table_9_2010 = pandas.read_excel('Table 9-Known Offenders-Known Offenders Race 2010.xls')
table_9_2010.rename(columns = {'Table 9': 'Known Offender Race'}).rename(columns = {'Unnamed: 1': 'Total'}).drop(0).drop(8).drop(9)
table_5_2010 = pandas.read_excel('Table 5-Offenses-Known Offenders Race-by Bias Motivation 2010.xls')
table_5_2010.rename(columns = {'Table 5':'Bias Motivation'}).rename(columns = {'Unnamed: 1':'Total Offenses'}).rename(columns = {'Unnamed: 2':'White'}).rename(columns = {'Unnamed: 3':'Black'}).rename(columns = {'Unnamed: 4': 'American Indian and Alaska Native'}).rename(columns = {'Unnamed: 5': 'Asian and Pacific Islander'}).rename(columns = {'Unnamed: 6': 'Multiple Race Groups'}).rename(columns = {'Unnamed: 7': 'Unknown Race'}).rename(columns = {'Unnamed: 8': 'Unknown Offender'}).drop(0).drop(1).drop(2).drop(32)
```
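The year-by-year cells above rename one column at a time and drop rows one label at a time. `DataFrame.rename` accepts a full mapping and `DataFrame.drop` accepts a list of labels, so each table's clean-up can be expressed in a single pass. The helper below is an illustrative sketch only; the name `tidy_table` and its arguments are not part of the original code:

```python
import pandas as pd

def tidy_table(df, column_map, drop_rows):
    # Rename all raw 'Unnamed: N' columns in one pass, drop the header/footer
    # rows by index label, and renumber the remaining rows from zero.
    return (
        df.rename(columns=column_map)
          .drop(index=drop_rows)
          .reset_index(drop=True)
    )

# Toy frame shaped like the raw FBI spreadsheets read above.
raw = pd.DataFrame({
    'Table 5': ['(header note)', 'Total', 'Anti-White'],
    'Unnamed: 1': [None, 100, 40],
})
tidy = tidy_table(
    raw,
    {'Table 5': 'Bias Motivation', 'Unnamed: 1': 'Total Offenses'},
    [0],
)
```

With a helper like this, each year's table becomes one `tidy_table(...)` call with its own mapping and row list instead of a long method chain.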
# An Introduction To `aima-python`
The [aima-python](https://github.com/aimacode/aima-python) repository implements, in Python code, the algorithms in the textbook *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. A typical module in the repository has the code for a single chapter in the book, but some modules combine several chapters. See [the index](https://github.com/aimacode/aima-python#index-of-code) if you can't find the algorithm you want. The code in this repository attempts to mirror the pseudocode in the textbook as closely as possible and to stress readability foremost; if you are looking for high-performance code with advanced features, there are other repositories for you. For each module, there are three/four files, for example:
- [**`nlp.py`**](https://github.com/aimacode/aima-python/blob/master/nlp.py): Source code with data types and algorithms for natural language processing; functions have docstrings explaining their use.
- [**`nlp.ipynb`**](https://github.com/aimacode/aima-python/blob/master/nlp.ipynb): A notebook like this one; gives more detailed examples and explanations of use.
- [**`nlp_apps.ipynb`**](https://github.com/aimacode/aima-python/blob/master/nlp_apps.ipynb): A Jupyter notebook that gives example applications of the code.
- [**`tests/test_nlp.py`**](https://github.com/aimacode/aima-python/blob/master/tests/test_nlp.py): Test cases, used to verify the code is correct, and also useful to see examples of use.
There is also an [aima-java](https://github.com/aimacode/aima-java) repository, if you prefer Java.
## What version of Python?
The code is tested in Python [3.4](https://www.python.org/download/releases/3.4.3/) and [3.5](https://www.python.org/downloads/release/python-351/). If you try a different version of Python 3 and find a problem, please report it as an [Issue](https://github.com/aimacode/aima-python/issues).
We recommend the [Anaconda](https://www.anaconda.com/download/) distribution of Python 3.5. It comes with additional tools like the powerful IPython interpreter, the Jupyter Notebook and many helpful packages for scientific computing. After installing Anaconda, you will be good to go to run all the code and all the IPython notebooks.
## IPython notebooks
The IPython notebooks in this repository explain how to use the modules, and give examples of usage.
You can use them in three ways:
1. View static HTML pages. (Just browse to the [repository](https://github.com/aimacode/aima-python) and click on a `.ipynb` file link.)
2. Run, modify, and re-run code, live. (Download the repository (by [zip file](https://github.com/aimacode/aima-python/archive/master.zip) or by `git` commands), start a Jupyter notebook server with the shell command "`jupyter notebook`" (issued from the directory where the files are), and click on the notebook you want to interact with.)
3. Binder - Click on the binder badge on the [repository](https://github.com/aimacode/aima-python) main page to open the notebooks in an executable environment, online. This method does not require any extra installation. The code can be executed and modified from the browser itself. Note that this is an unstable option; there is a chance the notebooks will never load.
You can [read about notebooks](https://jupyter-notebook-beginner-guide.readthedocs.org/en/latest/) and then [get started](https://nbviewer.jupyter.org/github/jupyter/notebook/blob/master/docs/source/examples/Notebook/Running%20Code.ipynb).
# Helpful Tips
Most of these notebooks start by importing all the symbols in a module:
```
from logic import *
```
From there, the notebook alternates explanations with examples of use. You can run the examples as they are, and you can modify the code cells (or add new cells) and run your own examples. If you have some really good examples to add, you can make a github pull request.
If you want to see the source code of a function, you can open a browser or editor and see it in another window, or from within the notebook you can use the IPython magic function `%psource` (for "print source") or the function `psource` from `notebook.py`. Also, if the algorithm has pseudocode available, you can read it by calling the `pseudocode` function with the name of the algorithm passed as a parameter.
```
%psource WalkSAT
from notebook import psource, pseudocode
psource(WalkSAT)
pseudocode("WalkSAT")
```
Or see an abbreviated description of an object with a trailing question mark:
```
WalkSAT?
```
# Authors
This notebook is written by [Chirag Vertak](https://github.com/chiragvartak) and [Peter Norvig](https://github.com/norvig).
# A case study in screening for new enzymatic reactions
In this example, we show how to search the KEGG database for a reaction of interest based on user requirements. At specific points we highlight how our code could be used for arbitrary molecules that the user is interested in. This is crucial because the KEGG database is not exhaustive, and we only accessed a portion of the database that has no ambiguities (to avoid the need for manual filtering).
Requirements to run this script:
* rdkit (2019.09.2.0)
* matplotlib (3.1.1)
* numpy (1.17.4)
* enzyme_screen
* Clone source code and run this notebook in its default directory.
# This notebook requires data from screening, which is not uploaded!
## The idea:
We want to screen all collected reactions for a reaction that fits these constraints (automatic or manual application is noted):
1. Maximum component size within 5-7 Angstrom (automatic)
2. *One* component on *one* side of the reaction contains a nitrile group (automatic)
3. Value added from reactant to product (partially manual) e.g.:
- cost of the reactants being much less than the products
- products being unpurchasable and reactants being purchasable
Constraint *2* affords potential reaction monitoring through the isolated FT-IR signal of the nitrile group.
Constraint *3* is vague, but generally aims to capture some value added by using an enzyme for a given reaction. This usually means that an encapsulated enzyme overcomes the cost of purchasing or synthesising the product through a non-enzymatic pathway. In this case, we use the primary literature on a selected reaction, plus some intuition, to guide our efforts: we select a reaction (directionality determined from KEGG) whose reactant is an amino acid, which we can fairly assume to be relatively cheap.
The alternative to this process would be to select a target reactant or product and search all reactions that include that target and apply similar constraints to test the validity of those reactions.
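The target-first alternative can be sketched in a few lines. This is an illustrative sketch only: `reactions_involving`, its arguments, and the attribute names are hypothetical, chosen to mirror the reaction-system objects used later in this notebook (`components`, `max_mid_diam`):

```python
# Hypothetical sketch of a target-first search (all names are illustrative).
def reactions_involving(reactions, target_name, max_diam=7.0):
    """Keep reactions that contain the target and fit the size constraint."""
    hits = []
    for rs in reactions:
        names = {c.name for c in rs.components}
        if target_name in names and rs.max_mid_diam < max_diam:
            hits.append(rs)
    return hits
```

The same size and functional-group constraints applied below could then be applied to this pre-filtered list instead of the full collection.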
### Provide directory to reaction data and molecule data, and parameter file.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
import sys
reaction_dir = (
'/data/atarzia/projects/psp_phd/production/rxn_collection'
)
molecule_dir = (
'/data/atarzia/projects/psp_phd/molecules/molecule_DBs/production'
)
# Handle import directories.
module_path = os.path.abspath(os.path.join('../src'))
if module_path not in sys.path:
sys.path.append(module_path)
import utilities
param_file = '../data/param_file.txt'
params = utilities.read_params(param_file)
```
### Find reaction systems with max component sizes within threshold
Using a threshold of 5 to 7 angstrom.
Results in a plot of reaction distributions.
```
import plotting_fn as pfn
threshold_min = 5
threshold_max = 7
# Read in reaction collection CSV: rs_properties.csv
# from running RS_analysis.py.
rs_properties = pd.read_csv(
os.path.join(reaction_dir, 'rs_properties.csv')
)
rs_within_threshold = rs_properties[
rs_properties['max_mid_diam'] < threshold_max
]
rs_within_threshold = rs_within_threshold[
rs_within_threshold['max_mid_diam'] >= threshold_min
]
print(f'{len(rs_within_threshold)} reactions in threshold')
fig, ax = plt.subplots()
alpha = 1.0
width = 0.25
X_bins = np.arange(0, 20, width)
# All reactions.
hist, bin_edges = np.histogram(
a=list(rs_properties['max_mid_diam']),
bins=X_bins
)
ax.bar(
bin_edges[:-1],
hist,
align='edge',
alpha=alpha,
width=width,
color='lightgray',
edgecolor='lightgray',
label='all reactions'
)
# Within threshold.
hist, bin_edges = np.histogram(
a=list(rs_within_threshold['max_mid_diam']),
bins=X_bins
)
ax.bar(
bin_edges[:-1],
hist,
align='edge',
alpha=alpha,
width=width,
color='firebrick',
edgecolor='firebrick',
label='within threshold'
)
pfn.define_standard_plot(
ax,
xtitle='$d$ of largest component [$\mathrm{\AA}$]',
ytitle='count',
xlim=(0, 20),
ylim=None
)
fig.legend(fontsize=16)
fig.savefig(
os.path.join(reaction_dir, 'screen_example_distribution.pdf'),
dpi=720,
bbox_inches='tight'
)
plt.show()
```
### Find reaction systems with at least one nitrile functionality on one side of the reaction
```
import reaction
from rdkit.Chem import AllChem as rdkit
from rdkit.Chem import Fragments
# Handle some warnings for flat molecules.
from rdkit import RDLogger
RDLogger.DisableLog('rdApp.*')
# Needed to show molecules
from rdkit.Chem import Draw
from rdkit.Chem.Draw import IPythonConsole
def has_nitrile(mol_file):
    """
    Return True if RDKit finds at least one nitrile fragment in the molecule.
    """
    mol = rdkit.MolFromMolFile(mol_file)
    return Fragments.fr_nitrile(mol) > 0
# Define generator over reactions.
generator = reaction.yield_rxn_syst(
output_dir=reaction_dir,
pars=params,
)
# Iterate over reactions, checking for validity.
target_reaction_ids = []
molecules_with_nitriles = []
for i, (count, rs) in enumerate(generator):
if 'KEGG' not in rs.pkl:
continue
if rs.skip_rxn:
continue
if rs.components is None:
continue
# Check components for nitrile groups.
reactants_w_nitriles = 0
products_w_nitriles = 0
for m in rs.components:
mol_file = os.path.join(
molecule_dir,
m.name+'_opt.mol'
)
if has_nitrile(mol_file):
if mol_file not in molecules_with_nitriles:
molecules_with_nitriles.append(mol_file)
if m.role == 'reactant':
reactants_w_nitriles += 1
elif m.role == 'product':
products_w_nitriles += 1
# Get both directions.
if products_w_nitriles == 1 and reactants_w_nitriles == 0:
target_reaction_ids.append(rs.DB_ID)
if products_w_nitriles == 0 and reactants_w_nitriles == 1:
target_reaction_ids.append(rs.DB_ID)
```
### Draw nitrile containing molecules
```
print(
f'There are {len(molecules_with_nitriles)} molecules '
f'with nitrile groups, corresponding to '
f'{len(target_reaction_ids)} reactions '
'out of all.'
)
molecules = [
rdkit.MolFromSmiles(rdkit.MolToSmiles(rdkit.MolFromMolFile(i)))
for i in molecules_with_nitriles
]
mol_names = [
i.replace(molecule_dir+'/', '').replace('_opt.mol', '')
for i in molecules_with_nitriles
]
img = Draw.MolsToGridImage(
molecules,
molsPerRow=6,
subImgSize=(100, 100),
legends=mol_names,
)
img
```
## Update dataframe to have target reaction ids only.
```
target_reactions = rs_within_threshold[
rs_within_threshold['db_id'].isin(target_reaction_ids)
]
print(
f'There are {len(target_reactions)} reactions '
'that fit all constraints so far.'
)
target_reactions
```
## Select reaction based on bertzCT and SAScore, plus intuition from visualisation
Plotting the measures of reaction productivity is useful, but so is looking manually through the small subset.
Both methods highlight R02846 (https://www.genome.jp/dbget-bin/www_bget?rn:R02846) as a good candidate:
- High deltaSA and deltaBertzCT
- The main reactant is a natural amino acid (cysteine). Note that the chirality is not defined in this specific KEGG Reaction, however, the chirality is defined as L-cysteine in the Enzyme entry (https://www.genome.jp/dbget-bin/www_bget?ec:4.4.1.9)
```
fig, ax = plt.subplots()
ax.scatter(
target_reactions['deltasa'],
target_reactions['deltabct'],
alpha=1.0,
c='#ff3b3b',
edgecolor='none',
label='target reactions',
s=100,
)
pfn.define_standard_plot(
ax,
xtitle=r'$\Delta$ SAscore',
ytitle=r'$\Delta$ BertzCT',
xlim=(-10, 10),
ylim=None,
)
fig.legend(fontsize=16)
fig.savefig(
os.path.join(
reaction_dir,
'screen_example_complexity_targets.pdf'
),
dpi=720,
bbox_inches='tight'
)
plt.show()
fig, ax = plt.subplots()
ax.scatter(
rs_properties['deltasa'],
rs_properties['deltabct'],
alpha=1.0,
c='lightgray',
edgecolor='none',
label='all reactions',
s=40,
)
ax.scatter(
rs_within_threshold['deltasa'],
rs_within_threshold['deltabct'],
alpha=1.0,
c='#2c3e50',
edgecolor='none',
label='within threshold',
s=40,
)
ax.scatter(
target_reactions['deltasa'],
target_reactions['deltabct'],
alpha=1.0,
c='#ff3b3b',
edgecolor='k',
label='target reactions',
marker='P',
s=60,
)
pfn.define_standard_plot(
ax,
xtitle=r'$\Delta$ SAscore',
ytitle=r'$\Delta$ BertzCT',
xlim=(-10, 10),
ylim=(-850, 850),
)
fig.legend(fontsize=16)
fig.savefig(
os.path.join(
reaction_dir,
'screen_example_complexity_all.pdf'
),
dpi=720,
bbox_inches='tight'
)
plt.show()
```
## Visualise properties of chosen reaction
Reaction: R02846 (https://www.genome.jp/dbget-bin/www_bget?rn:R02846)
```
# Read in reaction system.
rs = reaction.get_RS(
filename=os.path.join(
reaction_dir, 'sRS-4_4_1_9-KEGG-R02846.gpkl'
),
output_dir=reaction_dir,
pars=params,
verbose=True
)
# Print properties and collate components.
print(rs)
if rs.skip_rxn:
print(f'>>> {rs.skip_reason}')
print(
f'max intermediate diameter = {rs.max_min_mid_diam} angstrom'
)
print(
f'deltaSA = {rs.delta_SA}'
)
print(
f'deltaBertzCT = {rs.delta_bCT}'
)
print('--------------------------\n')
print('Components:')
# Output molecular components and their properties.
reacts = []
reactstr = []
prodstr = []
prods = []
for rsc in rs.components:
prop_dict = rsc.read_prop_file()
print(rsc)
print(f"SA = {round(prop_dict['Synth_score'], 3)}")
print(f"BertzCT = {round(prop_dict['bertzCT'], 3)}")
print('\n')
if rsc.role == 'product':
prods.append(
rdkit.MolFromMolFile(rsc.structure_file)
)
prodstr.append(f'{rsc.name}')
if rsc.role == 'reactant':
reacts.append(
rdkit.MolFromMolFile(rsc.structure_file)
)
reactstr.append(f'{rsc.name}')
img = Draw.MolsToGridImage(
reacts,
molsPerRow=2,
subImgSize=(300, 300),
legends=reactstr,
)
img.save(
os.path.join(
reaction_dir,
'screen_example_reactants.png'
)
)
img
img = Draw.MolsToGridImage(
prods,
molsPerRow=2,
subImgSize=(300, 300),
legends=prodstr,
)
img.save(
os.path.join(
reaction_dir,
'screen_example_products.png'
)
)
img
```
## Manually obtaining the cost of molecules
In this example, we will assume C00283 and C00177 are obtainable/purchasable through some means and that only C00736 and C02512 are relevant to the productivity of the reaction.
Note that the change in synthetic accessibility is 'large' for this reaction because of the two small molecules, while the change in BertzCT comes from the two larger molecules.
- Get CAS number from KEGG Compound pages:
- KEGG: C00736, CAS: 3374-22-9
- KEGG: C02512, CAS: 6232-19-5
- Use CAS number in some supplier website (using http://astatechinc.com/ here for no particular reason)
- KEGG: C00736, Price: \\$69 for 10 gram = \\$6.9 per gram
- KEGG: C02512, Price: \\$309 for 1 gram = \\$309 per gram
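The per-gram figures above are simple unit conversions; the short sketch below (prices exactly as quoted above) also computes the rough price ratio between the two compounds:

```python
# Supplier prices quoted above: (USD, pack size in grams).
prices = {
    'C00736': (69.0, 10.0),   # $69 for 10 g
    'C02512': (309.0, 1.0),   # $309 for 1 g
}

# Normalise to USD per gram.
per_gram = {kegg: usd / grams for kegg, (usd, grams) in prices.items()}

# Rough price ratio between the two compounds of interest.
ratio = per_gram['C02512'] / per_gram['C00736']
```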
```
import matplotlib.pyplot as plt
import numpy as np
import scipy.io as scio
import displayData as dd
import lrCostFunction as lCF
import oneVsAll as ova
import predictOneVsAll as pova
import scipy.optimize as opt
# Setup the parameters you will use for this part of the exercise
input_layer_size = 400 # 20x20 input images of Digits
num_labels = 10 # 10 labels, from 0 to 9
# Note that we have mapped "0" to label 10
# ===================== Part 1: Loading and Visualizing Data =====================
# We start the exercise by first loading and visualizing the dataset.
# You will be working with a dataset that contains handwritten digits.
#
# Load Training Data
print('Loading and Visualizing Data ...')
data = scio.loadmat('ex3data1.mat')
X = data['X']
y = data['y'].flatten()
m = y.size
def display_data(x):
(m, n) = x.shape
# Set example_width automatically if not passed in
example_width = np.round(np.sqrt(n)).astype(int)
example_height = (n / example_width).astype(int)
# Compute the number of items to display
display_rows = np.floor(np.sqrt(m)).astype(int)
display_cols = np.ceil(m / display_rows).astype(int)
# Between images padding
pad = 1
# Setup blank display
    display_array = - np.ones((pad + display_rows * (example_height + pad),
                               pad + display_cols * (example_width + pad)))
# Copy each example into a patch on the display array
curr_ex = 0
for j in range(display_rows):
for i in range(display_cols):
            if curr_ex >= m:
break
# Copy the patch
# Get the max value of the patch
max_val = np.max(np.abs(x[curr_ex]))
display_array[pad + j * (example_height + pad) + np.arange(example_height),
pad + i * (example_width + pad) + np.arange(example_width)[:, np.newaxis]] = \
x[curr_ex].reshape((example_height, example_width)) / max_val
curr_ex += 1
        if curr_ex >= m:
break
# Display image
plt.figure()
plt.imshow(display_array, cmap='gray', extent=[-1, 1, -1, 1])
plt.axis('off')
rand_indices = np.random.permutation(range(m))
selected = X[rand_indices[0:100], :]
display_data(selected)
# ===================== Part 2-a: Vectorize Logistic Regression =====================
# In this part of the exercise, you will reuse your logistic regression
# code from the last exercise. Your task here is to make sure that your
# regularized logistic regression implementation is vectorized. After
# that, you will implement one-vs-all classification for the handwritten
# digit dataset
#
# Test case for lrCostFunction
print('Testing lrCostFunction()')
theta_t = np.array([-2, -1, 1, 2])
X_t = np.c_[np.ones(5), np.arange(1, 16).reshape((3, 5)).T/10]
y_t = np.array([1, 0, 1, 0, 1])
lmda_t = 3
# sigmoid function
sigmoid_func = lambda x: 1 / (1 + np.exp(-x))
# sigmoid_func = lambda x: np.exp(x) / (np.exp(x)+1)
# hypothesis function
h_func = lambda theta, X: sigmoid_func(theta @ X.transpose())
def lr_cost_function(theta, X, y, lmd):
m = y.size
# You need to return the following values correctly
cost = 0
grad = np.zeros(theta.shape)
# ===================== Your Code Here =====================
# Instructions : Compute the cost of a particular choice of theta
# You should set cost and grad correctly.
#
cost = (-y @ np.log(h_func(theta, X)) - (1-y) @ np.log((1-h_func(theta, X)))) / m + (theta[1:] @ theta[1:]) * lmd / (2*m)
for idx in range(theta.size):
grad[idx] += ((h_func(theta, X) - y) @ X[:, idx]) / m + (0 if idx == 0 else lmd * theta[idx] / m)
# =========================================================
return cost, grad
cost, grad = lr_cost_function(theta_t, X_t, y_t, lmda_t)
np.set_printoptions(formatter={'float': '{: 0.6f}'.format})
print('Cost: {:0.7f}'.format(cost))
print('Expected cost: 2.534819')
print('Gradients:\n{}'.format(grad))
print('Expected gradients:\n[ 0.146561 -0.548558 0.724722 1.398003]')
def one_vs_all(X, y, num_labels, lmd):
# Some useful variables
(m, n) = X.shape
# You need to return the following variables correctly
all_theta = np.zeros((num_labels, n + 1))
# Add ones to the X data 2D-array
X = np.c_[np.ones(m), X]
# Optimize
def cost_func(t):
return lr_cost_function(t, X, y_flag, lmd)[0]
def grad_func(t):
return lr_cost_function(t, X, y_flag, lmd)[1]
for i in range(num_labels):
        print('Optimizing for handwritten number {}...'.format((i + 1) % 10))
# ===================== Your Code Here =====================
# Instructions : You should complete the following code to train num_labels
# logistic regression classifiers with regularization
# parameter lambda
#
#
# Hint: you can use y == c to obtain a vector of True(1)'s and False(0)'s that tell you
# whether the ground truth is true/false for this class
#
        # Note: the original assignment recommends opt.fmin_cg to optimize the cost
        #       function; opt.fmin_bfgs is used below instead. It is okay to use a
        #       for-loop (for c in range(num_labels)) to loop over the different classes
#
y_flag = (y == (i+1)) * 1
theta = all_theta[i]
theta, cost, *unused = opt.fmin_bfgs(f=cost_func, fprime=grad_func, x0=theta, maxiter=400, full_output=True, disp=False)
all_theta[i] = theta
# ============================================================
print('Done')
return all_theta
num_labels = 10  # ten classes: digits 1-9, plus "0" stored as label 10
# ===================== Part 2-b: One-vs-All Training =====================
print('Training One-vs-All Logistic Regression ...')
lmd = 0.1
all_theta = one_vs_all(X, y, num_labels, lmd)
def predict_one_vs_all(theta, X):
m = X.shape[0]
# Return the following variable correctly
p = np.zeros(m)
# Add ones to the X data 2D-array
X = np.c_[np.ones(m), X]
# ===================== Your Code Here =====================
# Instructions : Complete the following code to make predictions using
# your learned logistic regression parameters.
# You should set p to a 1D-array of 0's and 1's
#
    p = np.argmax(X @ theta.T, axis=1) + 1
# ===========================================================
return p
# ===================== Part 3: Predict for One-Vs-All =====================
pred = predict_one_vs_all(all_theta, X)
print('Training set accuracy: {}'.format(np.mean(pred == y)*100))
print('ex3 Finished. Press ENTER to exit')
```
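The per-parameter gradient loop in `lr_cost_function` can be collapsed into a single matrix expression. A possible fully vectorized version, checked against the same test case used above, might look like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lr_cost_vectorized(theta, X, y, lmd):
    """Regularized logistic-regression cost and gradient, with no Python loops."""
    m = y.size
    h = sigmoid(X @ theta)                           # hypothesis for all samples at once
    reg = (lmd / (2 * m)) * (theta[1:] @ theta[1:])  # regularization skips the bias term
    cost = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m + reg
    grad = (X.T @ (h - y)) / m                       # unregularized gradient
    grad[1:] += (lmd / m) * theta[1:]                # regularize all but theta[0]
    return cost, grad

# Same test case as in the notebook
theta_t = np.array([-2, -1, 1, 2])
X_t = np.c_[np.ones(5), np.arange(1, 16).reshape((3, 5)).T / 10]
y_t = np.array([1, 0, 1, 0, 1])
cost, grad = lr_cost_vectorized(theta_t, X_t, y_t, 3)
```

The expected values match the looped version: cost close to 2.534819 and gradients close to [0.146561, -0.548558, 0.724722, 1.398003].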
# Get the data
```
from google.colab import drive
drive.mount('/content/gdrive')
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler,MinMaxScaler,StandardScaler
from sklearn.model_selection import KFold, cross_val_score, train_test_split,GridSearchCV
from sklearn.metrics import mean_squared_error,mean_absolute_error
import numpy as np
import warnings
def ignore_warn(*args, **kwargs):
pass
warnings.warn = ignore_warn
import pandas as pd
dataset = pd.read_csv('/content/gdrive/MyDrive/Dockship/learn_ml_2021_grand_ai_challenge-dataset/new_train.csv')
#dataset.tail()
dataset.corr()
```
# for stocks 1-5
```
inp1= dataset[["Open-Stock-1","High-Stock-1","Low-Stock-1","VWAP-Stock-1"]]
op1 = dataset['Close-Stock-1']
inp2 = dataset[["Open-Stock-2","High-Stock-2","Low-Stock-2","VWAP-Stock-2"]]
op2 = dataset['Close-Stock-2']
inp3 = dataset[["Open-Stock-3","High-Stock-3","Low-Stock-3","VWAP-Stock-3"]]
op3 = dataset['Close-Stock-3']
inp4 = dataset[["Open-Stock-4","High-Stock-4","Low-Stock-4","VWAP-Stock-4"]]
op4 = dataset['Close-Stock-4']
inp5 = dataset[["Open-Stock-5","High-Stock-5","Low-Stock-5","VWAP-Stock-5"]]
op5 = dataset['Close-Stock-5']
n_folds = 5
def rmsle_cv(model):
    kf = KFold(n_folds, shuffle=True, random_state=42)  # pass the splitter object itself, not get_n_splits()
    rmse = np.sqrt(-cross_val_score(model, x_train, y_train, scoring="neg_mean_squared_error", cv=kf, n_jobs=-1))
return(rmse)
x,y = inp5,op5
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 101)
lasso = make_pipeline(RobustScaler(), Lasso(alpha =0.0005, random_state=1,normalize=True,max_iter=100000))
ENet = make_pipeline(RobustScaler(), ElasticNet(alpha=0.0005, l1_ratio=.99, random_state=3,max_iter=100000))
score = rmsle_cv(lasso)
print("\nLasso score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
score = rmsle_cv(ENet)
print("ElasticNet score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
l1 = make_pipeline(RobustScaler(),ElasticNet(alpha=0.000001, l1_ratio=0.5, random_state=3,max_iter=100000,selection='random'))
score = rmsle_cv(l1)
print("\nUpdated-Enet score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
l2 = make_pipeline(StandardScaler(), Lasso(alpha =0.0001, random_state=1,normalize=True,max_iter=100000))
score = rmsle_cv(l2)
print("\nUpdated-lasso score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
l3 = make_pipeline(RobustScaler(),ElasticNet(alpha=0.0000001, l1_ratio=0.000009, random_state=1,max_iter=1000000,selection='random',normalize=True))
score = rmsle_cv(l3)
print("\nUpdated-Enet score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
l4 = make_pipeline(RobustScaler(),Lasso(alpha =0.0035, random_state=1,max_iter=10000000))
score = rmsle_cv(l4)
print("\nUpdated-lasso score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
print(rmsle_cv(Lasso(alpha =0.0035, random_state=1,max_iter=10000000)))
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import PolynomialFeatures
l5 = make_pipeline(RobustScaler(),SGDRegressor(max_iter=1000000,alpha=0.000001))
score = rmsle_cv(l5)
print("\nUpdated-lasso score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
```
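The `rmsle_cv` helper above leans on scikit-learn; as a sanity check of what it computes, here is a dependency-light sketch of shuffled k-fold RMSE using only numpy and a least-squares fit. The data below is synthetic, purely for illustration:

```python
import numpy as np

def kfold_rmse(x, y, n_folds=5, seed=42):
    """Shuffle indices, split into folds, fit least squares on the other folds,
    and return the RMSE on each held-out fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    rmses = []
    for k in range(n_folds):
        val = folds[k]
        trn = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        A = np.c_[np.ones(len(trn)), x[trn]]              # design matrix with intercept
        coef, *_ = np.linalg.lstsq(A, y[trn], rcond=None)
        pred = np.c_[np.ones(len(val)), x[val]] @ coef
        rmses.append(np.sqrt(np.mean((pred - y[val]) ** 2)))
    return np.array(rmses)

# Synthetic example: y is a noisy linear function of x
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 4))
y = x @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=200)
scores = kfold_rmse(x, y)
```

Each entry of `scores` plays the same role as one value in the array returned by `rmsle_cv`.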
# Prediction for Test data
```
dataset2 = pd.read_csv('/content/gdrive/MyDrive/Dockship/learn_ml_2021_grand_ai_challenge-dataset/new_test.csv')
#dataset2.head()
model = l4
model.fit(x_train, y_train)
pred = model.predict(dataset2[list(x.columns)])
id_sv = dataset2['Date']
sol = pd.DataFrame()
sol['Date']=id_sv
#sol["Close-Stock-1"]=pred
#sol.to_csv('sol.csv',index = False)
sol['Close-Stock-5'] = pred
#pred2 = pred
# Format only the close-price columns that have actually been filled in
for col in [c for c in sol.columns if c.startswith('Close-Stock-')]:
    sol[col] = sol[col].apply(lambda v: '%.2f' % v)
sol.head()
sol.to_csv('sol.csv',index = False)
```
# LSTM
```
#Create a new dataframe with only the 'Close-Stock-3' column
data = dataset.filter(['Close-Stock-3'])
#Convert the dataframe to a numpy array
dataset_new = data.values
#Get the number of rows to train the model on
training_data_len = int(np.ceil( len(dataset_new) * .8 ))
training_data_len
#Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaled_data = scaler.fit_transform(dataset_new)
#scaled_data
#Create the scaled training data set
train_data = scaled_data[0:int(training_data_len), :]
#Split the data into x_train and y_train data sets
x_train = []
y_train = []
for i in range(120, len(train_data)):
x_train.append(train_data[i-120:i, 0])
y_train.append(train_data[i, 0])
    if i <= 121:  # print only the first window as a sanity check
print(x_train)
print(y_train)
print()
# Convert the x_train and y_train to numpy arrays
x_train, y_train = np.array(x_train), np.array(y_train)
#Reshape the data
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
# x_train.shape
from keras.models import Sequential
from keras.layers import Dense, LSTM,Dropout
#Defining the LSTM Recurrent Model
#Step 2 Build Model
regressor = Sequential()
regressor.add(LSTM(units = 50, return_sequences = True, input_shape = (x_train.shape[1], 1)))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
regressor.add(Dense(units = 1))
regressor.compile(optimizer = 'rmsprop', loss = 'mean_squared_error')
regressor.fit(x_train, y_train, epochs = 15, batch_size = 32)
#Create the testing data set
#Create a new array containing the last 120 training points plus the test range
test_data = scaled_data[training_data_len - 120: , :]
#Create the data sets x_test and y_test
x_test = []
y_test = dataset_new[training_data_len:, :]
for i in range(120, len(test_data)):
x_test.append(test_data[i-120:i, 0])
# Convert the data to a numpy array
x_test = np.array(x_test)
# Reshape the data
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1 ))
# Get the models predicted price values
predictions = regressor.predict(x_test)
predictions = scaler.inverse_transform(predictions)
# Get the root mean squared error (RMSE)
rmse = np.sqrt(mean_squared_error(y_test, predictions))
rmse
# Plot the data
import matplotlib.pyplot as plt
train = data[:training_data_len]
valid = data[training_data_len:].copy()  # copy to avoid SettingWithCopyWarning
valid['Predictions'] = predictions
# Visualize the data
plt.figure(figsize=(16,8))
plt.title('Model')
plt.xlabel('Date', fontsize=18)
plt.ylabel('Close Price USD ($)', fontsize=18)
plt.plot(train['Close-Stock-3'])
plt.plot(valid[['Close-Stock-3', 'Predictions']])
plt.legend(['Train', 'Val', 'Predictions'], loc='lower right')
plt.show()
#Create the prediction data set
#Create a new array containing scaled values
pred_data = scaled_data[len(scaled_data)-97 - 120: , :]
#Create the data sets x_test and y_test
x_pred = []
#y_test = dataset_new[training_data_len:, :]
for i in range(120, len(pred_data)):
x_pred.append(pred_data[i-120:i, 0])
# Convert the data to a numpy array
x_pred = np.array(x_pred)
# Reshape the data
x_pred = np.reshape(x_pred, (x_pred.shape[0], x_pred.shape[1], 1 ))
# Get the models predicted price values
predictions = regressor.predict(x_pred)  # use the trained LSTM, not the earlier linear model
predictions = scaler.inverse_transform(predictions)
id_sv = dataset2['Date']
sol = pd.DataFrame()
sol['Date']=id_sv
sol["Close-Stock-1"]=predictions
#sol.to_csv('sol.csv',index = False)
sol["Close-Stock-5"]=predictions
sol.to_csv('sol.csv',index = False)
```
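The 120-step windowing used above to build `x_train`, `x_test` and `x_pred` is the same pattern each time, so it can be factored into one helper. A minimal sketch (the lookback of 120 matches the notebook; the helper name and the toy data are illustrative):

```python
import numpy as np

def make_windows(series, lookback=120):
    """Turn a 1-D series into (samples, lookback) inputs and next-step targets."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i - lookback:i] for i in range(lookback, len(series))])
    y = series[lookback:]
    return X, y

# Tiny example with a lookback of 3: 10 points give 7 windows
X, y = make_windows(np.arange(10), lookback=3)
```

For the LSTM input, `X` would still need the trailing feature axis, e.g. `X[..., np.newaxis]`, matching the `np.reshape` calls in the cells above.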
# New Section
```
dataset.head()
df = dataset.iloc[:,[0,18]]
df['pred'] = lasso.predict(x)
df.head()
df.corr()
x = dataset[["Open-Stock-3","High-Stock-3","Low-Stock-3","VWAP-Stock-3"]]
y = dataset['Close-Stock-3']
from sklearn.neighbors import LocalOutlierFactor
from sklearn.linear_model import LinearRegression
for i in range(1,6):
x = dataset[["Open-Stock-"+str(i),"High-Stock-"+str(i),"Low-Stock-"+str(i),"VWAP-Stock-"+str(i)]]
#x = df
y = dataset['Close-Stock-'+str(i)]
x_train, x_test, y_train, y_test = train_test_split(x.values, y.values, test_size = 0.2, random_state = 101)
lof = LocalOutlierFactor(n_neighbors=25,n_jobs=-1)
yhat = lof.fit_predict(x_train)
mask = yhat != -1
x_train, y_train = x_train[mask, :], y_train[mask]
lasso = Lasso(alpha =0.0005, random_state=1,normalize=True,max_iter=100000)
    lasso.fit(x_train, y_train)  # fit on the outlier-filtered training split, not the full data
print(np.sqrt(mean_squared_error(y_test,lasso.predict(x_test))))
sol['Close-Stock-'+str(i)] = lasso.predict(dataset2[list(x.columns)])
x = dataset[["Open-Stock-3","High-Stock-3","Low-Stock-3","VWAP-Stock-3"]]
y = dataset['Close-Stock-3']
import statsmodels.api as sm
x_train, x_test, y_train, y_test = train_test_split(x.values, y.values, test_size = 0.2, random_state = 101)
X_train_lm = sm.add_constant(x_train)
lr_1 = sm.OLS(y_train, X_train_lm).fit()
lr_1.summary()
#print(x_train.shape, y_train.shape)
#lasso = Lasso(alpha =0.0005, random_state=1,normalize=True,max_iter=100000)
#lasso.fit(x,y)
#print(np.sqrt(mean_squared_error(y_test,lasso.predict(x_test))))
import matplotlib.pyplot as plt
plt.plot(y_train)
sol.head()
```
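The `LocalOutlierFactor` step in the loop above yields a boolean mask that drops suspected outliers before fitting. Isolated on a toy dataset (the point values and neighbor count here are made up for illustration):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# 20 well-behaved points plus one obvious outlier at (10, 10)
inliers = np.c_[np.linspace(0, 1, 20), np.linspace(0, 1, 20)]
X = np.vstack([inliers, [[10.0, 10.0]]])

lof = LocalOutlierFactor(n_neighbors=5)
yhat = lof.fit_predict(X)   # -1 marks an outlier, +1 an inlier
mask = yhat != -1
X_clean = X[mask]           # same filtering step as in the training loop
```

Only the masked (clean) rows would then be passed to the regression fit.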
# Assignment 1.1 - K-Nearest Neighbor Classifier
In this first assignment you will implement one of the simplest machine learning algorithms: a classifier based on the K-nearest neighbors method.
We will apply it to two tasks:
- binary classification (i.e., only two classes)
- multi-class classification (i.e., several classes)
Since the method requires a hyperparameter (the number of neighbors), we will choose it via cross-validation.
Our main goal is to learn to use numpy and to express computations in vectorized form, as well as to get familiar with the key metrics used in classification tasks.
Before starting the assignment:
- run `download_data.sh` to download the data we will use for training
- install the required libraries with `pip install -r requirements.txt` (if you have not worked with `pip` before, see https://pip.pypa.io/en/stable/quickstart/)
If you have not worked with numpy before, a tutorial may help, for example:
http://cs231n.github.io/python-numpy-tutorial/
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
from dataset import load_svhn
from knn import KNN
from metrics import binary_classification_metrics, multiclass_accuracy
```
# Load and visualize the data
The assignment already provides the `load_svhn` function, which loads the data from disk and returns the training and test sets as numpy arrays.
We will use digits from the Street View House Numbers dataset (SVHN, http://ufldl.stanford.edu/housenumbers/) so that the task is at least somewhat harder than MNIST.
```
train_X, train_y, test_X, test_y = load_svhn("data", max_train=1000, max_test=100)
samples_per_class = 5 # Number of samples per class to visualize
plot_index = 1
for example_index in range(samples_per_class):
for class_index in range(10):
plt.subplot(5, 10, plot_index)
image = train_X[train_y == class_index][example_index]
plt.imshow(image.astype(np.uint8))
plt.axis('off')
plot_index += 1
```
# First, KNN for binary classification
As a binary classification task, we will train a model that distinguishes the digit 0 from the digit 9.
```
# First, let's prepare the labels and the source data
# Only select 0s and 9s
binary_train_mask = (train_y == 0) | (train_y == 9)
binary_train_X = train_X[binary_train_mask]
binary_train_y = train_y[binary_train_mask] == 0
binary_test_mask = (test_y == 0) | (test_y == 9)
binary_test_X = test_X[binary_test_mask]
binary_test_y = test_y[binary_test_mask] == 0
# Reshape to 1-dimensional array [num_samples, 32*32*3]
binary_train_X = binary_train_X.reshape(binary_train_X.shape[0], -1)
binary_test_X = binary_test_X.reshape(binary_test_X.shape[0], -1)
# Create the classifier and call fit to train the model
# KNN just remembers all the data
knn_classifier = KNN(k=1)
knn_classifier.fit(binary_train_X, binary_train_y)
```
## Time to write some code!
Implement the functions `compute_distances_two_loops`, `compute_distances_one_loop`, and `compute_distances_no_loops`, one after another,
in the file `knn.py`.
These functions build an array of distances between all vectors in the test set and all vectors in the training set.
The result should be an array of shape `(num_test, num_train)`, where entry `[i][j]` is the distance between the i-th vector in test (`test[i]`) and the j-th vector in train (`train[j]`).
**Note:** for simplicity of implementation we will use the L1 metric (also known as the [Manhattan distance](https://ru.wikipedia.org/wiki/%D0%A0%D0%B0%D1%81%D1%81%D1%82%D0%BE%D1%8F%D0%BD%D0%B8%D0%B5_%D0%B3%D0%BE%D1%80%D0%BE%D0%B4%D1%81%D0%BA%D0%B8%D1%85_%D0%BA%D0%B2%D0%B0%D1%80%D1%82%D0%B0%D0%BB%D0%BE%D0%B2)) as the distance.
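A fully vectorized L1 distance matrix (the target of `compute_distances_no_loops`) can be written with broadcasting. A possible sketch, checked against the naive double loop:

```python
import numpy as np

def l1_distances_no_loops(test_X, train_X):
    """All pairwise L1 distances, shape (num_test, num_train), via broadcasting."""
    return np.abs(test_X[:, None, :] - train_X[None, :, :]).sum(axis=2)

# Check against an explicit double loop on small random data
rng = np.random.default_rng(0)
test_X, train_X = rng.normal(size=(3, 5)), rng.normal(size=(4, 5))
dists = l1_distances_no_loops(test_X, train_X)
naive = np.array([[np.sum(np.abs(t - s)) for s in train_X] for t in test_X])
```

Note that broadcasting allocates a `(num_test, num_train, num_features)` intermediate array, so this trades memory for speed.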

```
# TODO: implement compute_distances_two_loops in knn.py
dists = knn_classifier.compute_distances_two_loops(binary_test_X)
assert np.isclose(dists[0, 10], np.sum(np.abs(binary_test_X[0] - binary_train_X[10])))
# TODO: implement compute_distances_one_loop in knn.py
dists = knn_classifier.compute_distances_one_loop(binary_test_X)
assert np.isclose(dists[0, 10], np.sum(np.abs(binary_test_X[0] - binary_train_X[10])))
# TODO: implement compute_distances_no_loops in knn.py
dists = knn_classifier.compute_distances_no_loops(binary_test_X)
assert np.isclose(dists[0, 10], np.sum(np.abs(binary_test_X[0] - binary_train_X[10])))
# Lets look at the performance difference
%timeit knn_classifier.compute_distances_two_loops(binary_test_X)
%timeit knn_classifier.compute_distances_one_loop(binary_test_X)
%timeit knn_classifier.compute_distances_no_loops(binary_test_X)
# TODO: implement predict_labels_binary in knn.py
prediction = knn_classifier.predict(binary_test_X)
# TODO: implement binary_classification_metrics in metrics.py
precision, recall, f1, accuracy = binary_classification_metrics(prediction, binary_test_y)
print("KNN with k = %s" % knn_classifier.k)
print("Accuracy: %4.2f, Precision: %4.2f, Recall: %4.2f, F1: %4.2f" % (accuracy, precision, recall, f1))
# Let's put everything together and run KNN with k=3 and see how we do
knn_classifier_3 = KNN(k=3)
knn_classifier_3.fit(binary_train_X, binary_train_y)
prediction = knn_classifier_3.predict(binary_test_X)
precision, recall, f1, accuracy = binary_classification_metrics(prediction, binary_test_y)
print("KNN with k = %s" % knn_classifier_3.k)
print("Accuracy: %4.2f, Precision: %4.2f, Recall: %4.2f, F1: %4.2f" % (accuracy, precision, recall, f1))
```
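For reference, `binary_classification_metrics` in `metrics.py` is expected to return precision, recall, F1 and accuracy. A possible implementation over boolean arrays (only a sketch of the assignment function, not the reference solution) might be:

```python
import numpy as np

def binary_classification_metrics(prediction, ground_truth):
    """Precision, recall, F1 and accuracy for boolean predictions and labels."""
    prediction = np.asarray(prediction, dtype=bool)
    ground_truth = np.asarray(ground_truth, dtype=bool)
    tp = np.sum(prediction & ground_truth)
    fp = np.sum(prediction & ~ground_truth)
    fn = np.sum(~prediction & ground_truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = np.mean(prediction == ground_truth)
    return precision, recall, f1, accuracy

# One TP, FP, FN and TN each, so every metric comes out to 0.5
p, r, f1, acc = binary_classification_metrics([1, 1, 0, 0], [1, 0, 1, 0])
```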
# Cross-validation
Let's find the best value of the parameter k for the KNN algorithm!
We will use k-fold cross-validation (https://en.wikipedia.org/wiki/Cross-validation_(statistics)#k-fold_cross-validation). We split the training data into 5 folds and use each fold in turn as validation data, with the remaining folds used as training data.
As the final score for a given k we average the F1 score over all folds.
We then simply pick the value of k with the best metric.
*Bonus*: are there other ways to aggregate the F1 score across folds? Write the pros and cons in the cell below.
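One simple way to produce the five folds asked for here is `np.array_split` over shuffled indices. A sketch (the fold count matches the assignment; the sample count of 23 is a stand-in for `binary_train_X.shape[0]`):

```python
import numpy as np

num_folds = 5
rng = np.random.default_rng(42)
indices = rng.permutation(23)            # stand-in for the real training indices
folds = np.array_split(indices, num_folds)

# Every sample lands in exactly one fold, even when sizes are uneven
sizes = [len(f) for f in folds]
```

For fold `k`, `folds[k]` gives the validation indices and the concatenation of the remaining folds gives the training indices.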
```
# Find the best k using cross-validation based on F1 score
num_folds = 5
train_folds_X = []
train_folds_y = []
# TODO: split the training data in 5 folds and store them in train_folds_X/train_folds_y
train_indexes = np.arange(binary_train_X.shape[0])
test_size = binary_train_X.shape[0] // num_folds
k_choices = [1, 2, 3, 5, 8, 10, 15, 20, 25, 50]
k_to_f1 = {} # dict mapping k values to mean F1 scores (int -> float)
for k in k_choices:
# TODO: perform cross-validation
# Go through every fold and use it for testing and all other folds for training
# Perform training and produce F1 score metric on the validation dataset
# Average F1 from all the folds and write it into k_to_f1
f1_arr = []
for i_fold in range(num_folds):
#split indexes
if (i_fold == 0):
test_slice, remainder = np.split(train_indexes, [test_size], axis=0)
else:
remainder[(i_fold-1)*test_size:i_fold*test_size], test_slice = test_slice, remainder[(i_fold-1)*test_size:i_fold*test_size].copy()
        # Assemble the training and validation folds from the index slices
train_folds_X = binary_train_X[remainder]
train_folds_y = binary_train_y[remainder]
validation_folds_X = binary_train_X[test_slice]
validation_folds_y = binary_train_y[test_slice]
# train & predict
knn_classifier = KNN(k=k)
knn_classifier.fit(train_folds_X, train_folds_y)
prediction = knn_classifier.predict(validation_folds_X)
precision, recall, f1, accuracy = binary_classification_metrics(prediction, validation_folds_y)
#print(precision, recall, f1, accuracy)
f1_arr = np.append(f1_arr, f1)
k_to_f1[k] = np.mean(f1_arr)
print('----')
for k in sorted(k_to_f1):
print('k = %d, f1 = %f' % (k, k_to_f1[k]))
```
### Let's check how well the best value of k performs on the test data
```
# TODO Set the best k to the best value found by cross-validation
best_k = 1
best_knn_classifier = KNN(k=best_k)
best_knn_classifier.fit(binary_train_X, binary_train_y)
prediction = best_knn_classifier.predict(binary_test_X)
precision, recall, f1, accuracy = binary_classification_metrics(prediction, binary_test_y)
print("Best KNN with k = %s" % best_k)
print("Accuracy: %4.2f, Precision: %4.2f, Recall: %4.2f, F1: %4.2f" % (accuracy, precision, recall, f1))
```
# Multi-class classification
On to the next stage: classifying every digit.
```
# Now let's use all 10 classes
train_X = train_X.reshape(train_X.shape[0], -1)
test_X = test_X.reshape(test_X.shape[0], -1)
knn_classifier = KNN(k=1)
knn_classifier.fit(train_X, train_y)
# TODO: Implement predict_labels_multiclass
predict = knn_classifier.predict(test_X)
# TODO: Implement multiclass_accuracy
accuracy = multiclass_accuracy(predict, test_y)
print("Accuracy: %4.2f" % accuracy)
```
Cross-validation again. This time our main metric is accuracy, and we will likewise average it over all folds.
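The `multiclass_accuracy` metric used below is just the fraction of exact matches; a one-line sketch of what the assignment function in `metrics.py` is expected to compute:

```python
import numpy as np

def multiclass_accuracy(prediction, ground_truth):
    """Fraction of samples where the predicted class equals the true class."""
    return np.mean(np.asarray(prediction) == np.asarray(ground_truth))

# Three of four predictions match, so accuracy is 0.75
acc = multiclass_accuracy([0, 1, 2, 2], [0, 1, 1, 2])
```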
```
# Find the best k using cross-validation based on accuracy
num_folds = 5
train_folds_X = []
train_folds_y = []
# TODO: split the training data in 5 folds and store them in train_folds_X/train_folds_y
k_choices = [1, 2, 3, 5, 8, 10, 15, 20, 25, 50]
k_to_accuracy = {}
for k in k_choices:
# TODO: perform cross-validation
# Go through every fold and use it for testing and all other folds for validation
# Perform training and produce accuracy metric on the validation dataset
# Average accuracy from all the folds and write it into k_to_accuracy
pass
for k in sorted(k_to_accuracy):
print('k = %d, accuracy = %f' % (k, k_to_accuracy[k]))
```
### Final test: 10-class classification on the test data
If everything is implemented correctly, you should see an accuracy of at least **0.2**.
```
# TODO Set the best k as a best from computed
best_k = 1
best_knn_classifier = KNN(k=best_k)
best_knn_classifier.fit(train_X, train_y)
prediction = best_knn_classifier.predict(test_X)
# Accuracy should be around 20%!
accuracy = multiclass_accuracy(prediction, test_y)
print("Accuracy: %4.2f" % accuracy)
```
This notebook was created to convert the original VQC notebook to follow the routines of the new Qiskit version.
```
import logging
import numpy as np
from sklearn.metrics import f1_score
import matplotlib.pyplot as plt
plt.style.use('dark_background')
import qiskit
from qiskit import IBMQ, Aer, QuantumCircuit
from qiskit.circuit import ParameterVector
from qiskit.providers.aer import AerSimulator
from qiskit.providers.aer.noise import NoiseModel
from qiskit.utils import QuantumInstance
from qiskit.circuit.library import TwoLocal, ZZFeatureMap
from qiskit.algorithms.optimizers import COBYLA, L_BFGS_B
from qiskit_machine_learning.algorithms.classifiers import VQC, NeuralNetworkClassifier
from qiskit_machine_learning.neural_networks import CircuitQNN
from qiskit_machine_learning.datasets import breast_cancer
# provider = IBMQ.load_account() # Load account to use cloud-simulator.
from IPython.display import clear_output
# Load data. PCA reduced to four dimensions.
Xtrain, Ytrain, Xtest, Ytest = breast_cancer(training_size=4,
test_size=40, n = 4, plot_data=True)
# We create the feature map and ansatz to be used.
feature_map = ZZFeatureMap(4, reps=2,
entanglement="linear") # Our data is two dimensional, hence, two qubits.
parameters = ParameterVector('θ', 16)
ansatz = QuantumCircuit(4)
for i in range(4):
ansatz.ry(parameters[i], i)
ansatz.crx(parameters[4], 3, 0)
ansatz.crx(parameters[5], 2, 3)
ansatz.crx(parameters[6], 1, 2)
ansatz.crx(parameters[7], 0, 1)
ansatz.barrier()
for ii in range(8,12):
ansatz.ry(parameters[ii], ii-8)
ansatz.crx(parameters[12], 3, 2)
ansatz.crx(parameters[13], 0, 3)
ansatz.crx(parameters[14], 1, 0)
ansatz.crx(parameters[15], 2, 1)
total = feature_map.compose(ansatz)
total.draw(output='mpl', filename="full_expressive_circuit.png")
# ansatz.draw(output='mpl', filename='circ14.png')
# # Build a noisy simulator
# quantum_backend = provider.get_backend("ibmq_manila")
# # Generate a noise model based of a real quantum computer.
# noise_model = NoiseModel.from_backend(quantum_backend)
# # Get coupling map from backend
# coupling_map = quantum_backend.configuration().coupling_map
# # Get basis gates from noise model
# basis_gates = noise_model.basis_gates
# cloud_simulator = provider.get_backend('simulator_statevector') # Use cloud-basesd-simulator.
# quantum_instance = QuantumInstance(backend=simulator, coupling_map=coupling_map,
# basis_gates=basis_gates, noise_model=noise_model)
# Noiseless simulator.
quantum_instance = Aer.get_backend("aer_simulator")
# Quantum computer
# quantum_instance = QuantumInstance(quantum_backend)
# callback function that draws a live plot when the .fit() method is called
def callback_graph(weights, obj_func_eval):
clear_output(wait=True)
objective_func_vals.append(obj_func_eval)
plt.title("Objective function value against iteration")
plt.xlabel("Iteration")
plt.ylabel("Objective function value")
plt.plot(range(len(objective_func_vals)), objective_func_vals)
plt.show()
# Create our VQC instance.
vqc = VQC(feature_map=feature_map,
ansatz=ansatz,
loss='cross_entropy',
optimizer=COBYLA(),
quantum_instance=quantum_instance,
callback=callback_graph)
# create empty list for callback to store evaluations of the objective function
objective_func_vals = []
plt.rcParams["figure.figsize"] = (12, 6)
# fit classifier to data
vqc.fit(Xtrain, Ytrain)
# return to default figsize
plt.rcParams["figure.figsize"] = (6, 4)
# score classifier
# vqc.score(Xtest, Ytest)
# with open('barren_model.npy', 'wb') as f:
# np.save(f, np.array(objective_func_vals))
with open('barren_model.npy', 'rb') as f:
qc_expressive_results = np.load(f)
# plt.rcdefaults()
with plt.style.context('seaborn-colorblind'):
plt.title("Convergence with an expressive ansatz on a QC")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.plot(range(len(qc_expressive_results)), qc_expressive_results)
plt.xticks(range(0,len(qc_expressive_results),2))
# plt.savefig("qc_expressive.png", dpi=200)
def reverse_one_hot(x):
return x[:,0]
predictions = vqc.predict(Xtest)
f1_score(reverse_one_hot(Ytest), reverse_one_hot(predictions))
```
0.46153846153846156 F1 score for expressive setup executed on quantum computer.
```
# Cross entropy loss, COBYLA and circuit 14 from Sim et al. gives a score of 0.6 :(.
# 0.675 with cross entropy loss and AQGD and the same circuit as the first example. BOTH ENDED UP
# IN A BARREN PLATEAU!
```
We also perform a simulated run to see what we can expect without noise.
```
with plt.style.context('seaborn-colorblind'):
plt.title("Convergence with an expressive ansatz on an ideal simulator")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.plot(range(len(objective_func_vals)), objective_func_vals, label="Ideal Simulator")
plt.plot(range(len(qc_expressive_results)), qc_expressive_results, '--', label="QC")
plt.xticks(range(0,len(qc_expressive_results),2))
plt.legend()
# plt.savefig("expressive_comparison.png", dpi=200)
predictions = vqc.predict(Xtest)
f1_score(reverse_one_hot(Ytest), reverse_one_hot(predictions))
# 0.6410256410256411 f1 score with ideal simulator.
```
Since we ended up in a barren plateau, we will now try much simpler conditions to see whether we can escape it. We next run the simple circuit for an increasing number of qubits, to show that we get better results but also that the variance of the gradient drops as the number of qubits increases.
```
all_loss_values = [] # Store loss values for each iteration.
f1_scores = [] # Store respective f1 score for each iteration.
for i in range(2,10,2):
objective_func_vals = [] # Callback function expects a list with this name.
# Scale dimension and data size with ratio defined in paper.
Xtrain, Ytrain, Xtest, Ytest = breast_cancer(training_size=i,
test_size=40, n = i, one_hot=True)
# Redefine circuit to match larger dimension size.
feature_map = ZZFeatureMap(feature_dimension=i, reps=2, entanglement='linear')
ansatz = TwoLocal(num_qubits=i, rotation_blocks='ry', entanglement_blocks='cx',
entanglement='linear', reps=2)
# Define our vqc.
vqc = VQC(feature_map=feature_map,
ansatz=ansatz,
loss='cross_entropy',
optimizer=L_BFGS_B(),
quantum_instance=quantum_instance,
callback=callback_graph)
vqc.fit(Xtrain, Ytrain)
    predictions = vqc.predict(Xtest)
    all_loss_values.append(objective_func_vals)
    f1_scores.append(f1_score(reverse_one_hot(Ytest), reverse_one_hot(predictions)))
epochs = range(0,21,2)
labels = range(2,10,2)
with plt.style.context('seaborn-colorblind'):
plt.title("Convergence with an expressive ansatz on an ideal simulator")
plt.xlabel("Epochs")
plt.ylabel("Loss")
for i in range(len(all_loss_values)):
plt.plot(range(len(all_loss_values[i])), all_loss_values[i],
label="{} qubits".format(labels[i]))
plt.xticks(range(0,len(all_loss_values[-1]),2)) # last run was the longest.
plt.yticks(range(0,17,2))
plt.legend()
# plt.savefig("vanishing_loss.png", dpi=200)
```
F1 scores for the above evaluations: 0.6486 (2 qubits), 0.5556 (4 qubits), 0.5238 (6 qubits), 0.5570 (8 qubits).
A score of 0.7 with `ZZFeatureMap` (reps=2) and `TwoLocal` with `ry`/`cx` blocks, also at reps=2 - possibly a noise-induced barren plateau. With reps=1 the score improves to 0.725, but the loss-logging issue makes it hard to see what is happening. It does seem to be working: see https://quantumcomputing.stackexchange.com/questions/15166/callback-function-in-vqe-do-not-return-the-intermediate-values-of-the-parameters for the check that was used to confirm the callback (the printed values matched the final list). Without noise: 0.5 for reps=1 and 0.725 for reps=2.
A quick test with 2 and 4 qubits at reps=1: the simple configuration gives better results for 2 qubits, worse for 4 qubits at reps=2, and the same performance for 2 qubits.
```
# For dissertaion.
qc = QuantumCircuit(2)
parameters = ParameterVector('θ', 4)
qc.ry(parameters[0], 0)
qc.ry(parameters[1], 1)
qc.cx(0,1)
qc.ry(parameters[2], 0)
qc.ry(parameters[3], 1)
qc.draw(output='mpl', scale=2, filename='simplecirc.png')
```
# API demonstration for paper of v1.0
_the LSST-DESC CLMM team_
Here we demonstrate how to use `clmm` to estimate a WL halo mass from observations of a galaxy cluster when source galaxies follow a given distribution (The LSST DESC Science Requirements Document - arXiv:1809.01669, implemented in `clmm`). It uses several functionalities of the support `mock_data` module to produce mock datasets.
- Setting things up, with the proper imports.
- Computing the binned reduced tangential shear profile, for the 2 datasets, using logarithmic binning.
- Setting up a model accounting for the redshift distribution.
- Perform a simple fit using `scipy.optimize.curve_fit` included in `clmm` and visualize the results.
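Before diving in, the log-binned profile step in the outline above can be illustrated with plain numpy. In `clmm` itself this is handled by its data-ops utilities; the function below is only an illustrative stand-in, and the radii and shear values are made up:

```python
import numpy as np

def log_binned_profile(radius, values, rmin, rmax, nbins):
    """Average `values` in logarithmically spaced radial bins (illustrative sketch)."""
    edges = np.logspace(np.log10(rmin), np.log10(rmax), nbins + 1)
    which = np.digitize(radius, edges) - 1              # bin index for each galaxy
    centers, means = [], []
    for b in range(nbins):
        sel = which == b
        if sel.any():
            centers.append(np.sqrt(edges[b] * edges[b + 1]))  # geometric midpoint
            means.append(np.mean(values[sel]))
    return np.array(centers), np.array(means)

r = np.array([0.15, 0.5, 5.0])      # made-up radii in Mpc
g = np.array([0.10, 0.06, 0.01])    # made-up reduced-shear values
centers, means = log_binned_profile(r, g, rmin=0.1, rmax=10.0, nbins=2)
```

With two log bins over [0.1, 10] Mpc, the first two galaxies fall into the inner bin and the third into the outer one.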
## Setup
First, we import some standard packages.
```
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams['font.family'] = ['gothambook','gotham','gotham-book','serif']
```
## Generating mock data
`clmm` has a support code to generate a mock catalog given a input cosmology and cluster parameters. We will use this to generate a data sample to be used in this example:
```
from clmm import Cosmology
import clmm.support.mock_data as mock
np.random.seed(14) # For reproducibility
# Set cosmology of mock data
cosmo = Cosmology(H0=70.0, Omega_dm0=0.27-0.045, Omega_b0=0.045, Omega_k0=0.0)
# Cluster info
cluster_m = 1.e15 # Cluster mass - ($M200_m$) [Msun]
concentration = 4 # Cluster concentration
cluster_z = 0.3 # Cluster redshift
cluster_ra = 0. # Cluster Ra in deg
cluster_dec = 0. # Cluster Dec in deg
# Catalog info
field_size = 10 # i.e. 10 x 10 Mpc field at the cluster redshift, cluster in the center
# Make mock galaxies
mock_galaxies = mock.generate_galaxy_catalog(
cluster_m=cluster_m, cluster_z=cluster_z, cluster_c=concentration, # Cluster data
cosmo=cosmo, # Cosmology object
zsrc='desc_srd', # Galaxy redshift distribution,
zsrc_min=0.4, # Minimum redshift of the galaxies
shapenoise=0.05, # Gaussian shape noise to the galaxy shapes
photoz_sigma_unscaled=0.05, # Photo-z errors to source redshifts
field_size=field_size,
ngal_density=20 # number of gal/arcmin2 for z in [0, infty]
)['ra', 'dec', 'e1', 'e2', 'z', 'ztrue', 'pzbins', 'pzpdf', 'id']
print(f'Catalog table with the columns: {", ".join(mock_galaxies.colnames)}')
ngals_init = len(mock_galaxies)
print(f'Initial number of galaxies: {ngals_init:,}')
# Keeping only galaxies with "measured" redshift greater than cluster redshift
mock_galaxies = mock_galaxies[(mock_galaxies['z']>cluster_z)]
ngals_good = len(mock_galaxies)
if ngals_good < ngals_init:
print(f'Number of excluded galaxies (with photoz < cluster_z): {ngals_init-ngals_good:,}')
# reset galaxy id for later use
mock_galaxies['id'] = np.arange(ngals_good)
# Check final density
from clmm.utils import convert_units
field_size_arcmin = convert_units(field_size, 'Mpc', 'arcmin', redshift=cluster_z, cosmo=cosmo)
print(f'Background galaxy density = {ngals_good/field_size_arcmin**2:.2f} gal/arcmin2\n')
```
We can extract the columns of this mock catalog to show explicitly how these quantities can be used with `clmm` functionality and how to add them to a `GalaxyCluster` object:
```
# Put galaxy values on arrays
gal_ra = mock_galaxies['ra'] # Galaxies Ra in deg
gal_dec = mock_galaxies['dec'] # Galaxies Dec in deg
gal_e1 = mock_galaxies['e1'] # Galaxies ellipticity 1
gal_e2 = mock_galaxies['e2'] # Galaxies ellipticity 2
gal_z = mock_galaxies['z'] # Galaxies observed redshift
gal_ztrue = mock_galaxies['ztrue'] # Galaxies true redshift
gal_pzbins = mock_galaxies['pzbins'] # Galaxies P(z) bins
gal_pzpdf = mock_galaxies['pzpdf'] # Galaxies P(z)
gal_id = mock_galaxies['id'] # Galaxies ID
```
## Measuring shear profiles
From the source galaxy quantities, we can compute the ellipticities and the corresponding radial profile using `clmm.dataops` functions:
```
import clmm.dataops as da
# Convert ellipticities into shears
gal_ang_dist, gal_gt, gal_gx = da.compute_tangential_and_cross_components(cluster_ra, cluster_dec,
gal_ra, gal_dec,
gal_e1, gal_e2,
geometry="flat")
# Measure profile
profile = da.make_radial_profile([gal_gt, gal_gx, gal_z],
gal_ang_dist, "radians", "Mpc",
bins=da.make_bins(0.01, field_size/2., 50),
cosmo=cosmo,
z_lens=cluster_z,
include_empty_bins=False)
print(f'Profile table has columns: {", ".join(profile.colnames)},')
print('where p_(0, 1, 2) = (gt, gx, z)')
```
The other possibility is to use the `GalaxyCluster` object. This is the main approach to handle data with `clmm`, and also the simplest. For that you just have to provide the following information about the cluster:
* Ra, Dec [deg]
* Mass - ($M200_m$) [Msun]
* Concentration
* Redshift
and the source galaxies:
* Ra, Dec [deg]
* 2 axes of ellipticity
* Redshift
```
import clmm
# Create a GCData with the galaxies
galaxies = clmm.GCData([gal_ra, gal_dec, gal_e1, gal_e2, gal_z,
gal_ztrue, gal_pzbins, gal_pzpdf, gal_id],
names=['ra', 'dec', 'e1', 'e2', 'z',
'ztrue', 'pzbins', 'pzpdf', 'id'])
# Create a GalaxyCluster
cluster = clmm.GalaxyCluster("Name of cluster", cluster_ra, cluster_dec,
cluster_z, mock_galaxies)
# Convert ellipticities into shears for the members
cluster.compute_tangential_and_cross_components(geometry="flat")
print(cluster.galcat.colnames)
# Measure profile and add profile table to the cluster
seps = convert_units(cluster.galcat['theta'], 'radians', 'mpc',cluster.z, cosmo)
cluster.make_radial_profile(bins=da.make_bins(0.1, field_size/2., 25, method='evenlog10width'),
bin_units="Mpc",
cosmo=cosmo,
include_empty_bins=False,
gal_ids_in_bins=True,
)
print(cluster.profile.colnames)
```
This results in a `profile` attribute (a table) added to the `cluster` object.
```
from paper_formating import prep_plot
prep_plot(figsize=(9, 9))
errorbar_kwargs = dict(linestyle='', marker='o',
markersize=1, elinewidth=.5, capthick=.5)
plt.errorbar(cluster.profile['radius'], cluster.profile['gt'],
cluster.profile['gt_err'], c='k', **errorbar_kwargs)
plt.xlabel('r [Mpc]', fontsize = 10)
plt.ylabel(r'$g_t$', fontsize = 10)
plt.xscale('log')
plt.yscale('log')
```
## Theoretical predictions
We consider 3 models:
1. One model where all sources are considered at the same redshift
2. One model using the overall source redshift distribution to predict the reduced tangential shear
3. A more accurate model, relying on the fact that we have access to the individual redshifts of the sources, where the reduced tangential shear is averaged independently in each bin, accounting for the actual population of sources in each bin.
All models rely on `clmm` theory functions, such as `clmm.compute_reduced_tangential_shear`, to make a prediction in each radial bin:
### Model considering all sources located at the average redshift
\begin{equation}
g_{t,i}^{\rm{avg(z)}} = g_t(R_i, \langle z \rangle)\;,
\label{eq:wrong_gt_model}
\end{equation}
```
def predict_reduced_tangential_shear_mean_z(profile, logm):
return clmm.compute_reduced_tangential_shear(
r_proj=profile['radius'], # Radial component of the profile
mdelta=10**logm, # Mass of the cluster [M_sun]
cdelta=4, # Concentration of the cluster
z_cluster=cluster_z, # Redshift of the cluster
z_source=np.mean(cluster.galcat['z']), # Mean value of source galaxies redshift
cosmo=cosmo,
delta_mdef=200,
halo_profile_model='nfw'
)
```
### Model relying on the overall redshift distribution of the sources N(z), not using individual redshift information (eq. (6) from Applegate et al. 2014, MNRAS, 439, 48)
\begin{equation}
g_{t,i}^{N(z)} = \frac{\langle\beta_s\rangle \gamma_t(R_i, z\rightarrow\infty)}{1-\frac{\langle\beta_s^2\rangle}{\langle\beta_s\rangle}\kappa(R_i, z\rightarrow\infty)}
\label{eq:approx_model}
\end{equation}
```
z_inf = 1000
dl_inf = cosmo.eval_da_z1z2(cluster_z, z_inf)
d_inf = cosmo.eval_da(z_inf)
def betas(z):
dls = cosmo.eval_da_z1z2(cluster_z, z)
ds = cosmo.eval_da(z)
return dls * d_inf / (ds * dl_inf)
def predict_reduced_tangential_shear_approx(profile, logm):
bs_mean = np.mean(betas(cluster.galcat['z']))
bs2_mean = np.mean(betas(cluster.galcat['z'])**2)
gamma_t_inf = clmm.compute_tangential_shear(
r_proj=profile['radius'], # Radial component of the profile
mdelta=10**logm, # Mass of the cluster [M_sun]
cdelta=4, # Concentration of the cluster
z_cluster=cluster_z, # Redshift of the cluster
z_source=z_inf, # Redshift value at infinity
cosmo=cosmo,
delta_mdef=200,
halo_profile_model='nfw')
convergence_inf = clmm.compute_convergence(
r_proj=profile['radius'], # Radial component of the profile
mdelta=10**logm, # Mass of the cluster [M_sun]
cdelta=4, # Concentration of the cluster
z_cluster=cluster_z, # Redshift of the cluster
z_source=z_inf, # Redshift value at infinity
cosmo=cosmo,
delta_mdef=200,
halo_profile_model='nfw')
return bs_mean*gamma_t_inf/(1-(bs2_mean/bs_mean)*convergence_inf)
```
### Model using individual redshift and radial information, to compute the averaged shear in each radial bin, based on the galaxies actually present in that bin.
\begin{equation}
g_{t,i}^{z, R} = \frac{1}{N_i}\sum_{{\rm gal\,}j\in {\rm bin\,}i} g_t(R_j, z_j)
\label{eq:exact_model}
\end{equation}
```
cluster.galcat['theta_mpc'] = convert_units(cluster.galcat['theta'], 'radians', 'mpc',cluster.z, cosmo)
def predict_reduced_tangential_shear_exact(profile, logm):
return np.array([np.mean(
clmm.compute_reduced_tangential_shear(
# Radial component of each source galaxy inside the radial bin
r_proj=cluster.galcat[radial_bin['gal_id']]['theta_mpc'],
mdelta=10**logm, # Mass of the cluster [M_sun]
cdelta=4, # Concentration of the cluster
z_cluster=cluster_z, # Redshift of the cluster
# Redshift value of each source galaxy inside the radial bin
z_source=cluster.galcat[radial_bin['gal_id']]['z'],
cosmo=cosmo,
delta_mdef=200,
halo_profile_model='nfw'
)) for radial_bin in profile])
```
## Mass fitting
We estimate the best-fit mass using `scipy.optimize.curve_fit`. Fitting $\log M$ instead of $M$ reduces the span of the pre-defined fitting bounds from several orders of magnitude in mass to order unity. From the associated error $\sigma_{\log M}$ we propagate the error on the mass as $\sigma_M = M_{fit}\ln(10)\sigma_{\log M}$.
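As a quick sanity check of that error propagation, a standalone sketch with illustrative values (not the fit results obtained below):

```python
import numpy as np

# Hypothetical fit result: log10(M) = 15.0 +/- 0.05 (illustrative only).
logm, logm_err = 15.0, 0.05
m = 10**logm
m_err = m * np.log(10) * logm_err  # sigma_M = M_fit ln(10) sigma_logM
print(f'{m:.2e} +/- {m_err:.2e} Msun')
```

A 0.05 dex uncertainty thus maps to a roughly 12% uncertainty on the mass itself.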
#### First, identify bins with sufficient galaxy statistics to be kept for the fit
(For small samples, error bars should not be computed using the simple error-on-the-mean approach available so far in CLMM.)
```
mask_for_fit = cluster.profile['n_src'] > 5
data_for_fit = cluster.profile[mask_for_fit]
```
#### Perform the fits
```
from clmm.support.sampler import fitters
def fit_mass(predict_function):
popt, pcov = fitters['curve_fit'](predict_function,
data_for_fit,
data_for_fit['gt'],
data_for_fit['gt_err'], bounds=[10.,17.])
logm, logm_err = popt[0], np.sqrt(pcov[0][0])
return {'logm':logm, 'logm_err':logm_err,
'm': 10**logm, 'm_err': (10**logm)*logm_err*np.log(10)}
fit_mean_z = fit_mass(predict_reduced_tangential_shear_mean_z)
fit_approx = fit_mass(predict_reduced_tangential_shear_approx)
fit_exact = fit_mass(predict_reduced_tangential_shear_exact)
print(f'Input mass = {cluster_m:.2e} Msun\n')
print(f'Best fit mass for average redshift = {fit_mean_z["m"]:.3e} +/- {fit_mean_z["m_err"]:.3e} Msun')
print(f'Best fit mass for N(z) model = {fit_approx["m"]:.3e} +/- {fit_approx["m_err"]:.3e} Msun')
print(f'Best fit mass for individual redshift and radius = {fit_exact["m"]:.3e} +/- {fit_exact["m_err"]:.3e} Msun')
```
As expected, the reconstructed mass is biased when the redshift distribution is not accounted for in the model.
## Visualization of the results
For visualization purposes, we calculate the reduced tangential shear predicted by each model with its best-fit mass.
```
def get_predicted_shear(predict_function, fit_values):
gt_est = predict_function(data_for_fit, fit_values['logm'])
gt_est_err = [predict_function(data_for_fit, fit_values['logm']+i*fit_values['logm_err'])
for i in (-3, 3)]
return gt_est, gt_est_err
gt_mean_z, gt_err_mean_z = get_predicted_shear(predict_reduced_tangential_shear_mean_z, fit_mean_z)
gt_approx, gt_err_approx = get_predicted_shear(predict_reduced_tangential_shear_approx, fit_approx)
gt_exact, gt_err_exact = get_predicted_shear(predict_reduced_tangential_shear_exact, fit_exact)
```
Check the reduced chi2 values of the best-fit models:
```
chi2_mean_z_dof = np.sum((gt_mean_z-data_for_fit['gt'])**2/(data_for_fit['gt_err'])**2)/(len(data_for_fit)-1)
chi2_approx_dof = np.sum((gt_approx-data_for_fit['gt'])**2/(data_for_fit['gt_err'])**2)/(len(data_for_fit)-1)
chi2_exact_dof = np.sum((gt_exact-data_for_fit['gt'])**2/(data_for_fit['gt_err'])**2)/(len(data_for_fit)-1)
print(f'Reduced chi2 (mean z model) = {chi2_mean_z_dof}')
print(f'Reduced chi2 (N(z) model) = {chi2_approx_dof}')
print(f'Reduced chi2 (individual (R,z) model) = {chi2_exact_dof}')
```
We compare to the tangential shear obtained with the input mass. We plot the reduced tangential shear models, first when the redshift distribution is accounted for in the model, then for the naive approach, with their respective best-fit masses.
```
from matplotlib.ticker import MultipleLocator
prep_plot(figsize=(9 , 9))
gt_ax = plt.axes([.25, .42, .7, .55])
gt_ax.errorbar(data_for_fit['radius'],data_for_fit['gt'], data_for_fit['gt_err'],
c='k', label=rf'$M_{{input}} = {cluster_m*1e-15}\times10^{{{15}}} M_\odot$',
**errorbar_kwargs)
# Points in grey have not been used for the fit
gt_ax.errorbar(cluster.profile['radius'][~mask_for_fit], cluster.profile['gt'][~mask_for_fit],
cluster.profile['gt_err'][~mask_for_fit],
c='grey',**errorbar_kwargs)
pow10 = 15
mlabel = lambda name, fits: fr'$M_{{fit}}^{{{name}}} = {fits["m"]/10**pow10:.3f}\pm{fits["m_err"]/10**pow10:.3f}\times 10^{{{pow10}}} M_\odot$'
# Avg z
gt_ax.loglog(data_for_fit['radius'], gt_mean_z,'-C0',
label=mlabel('avg(z)', fit_mean_z),lw=.5)
gt_ax.fill_between(data_for_fit['radius'], *gt_err_mean_z, lw=0, color='C0', alpha=.2)
# Approx model
gt_ax.loglog(data_for_fit['radius'], gt_approx,'-C1',
label=mlabel('N(z)', fit_approx),
lw=.5)
gt_ax.fill_between(data_for_fit['radius'], *gt_err_approx, lw=0, color='C1', alpha=.2)
# Exact model
gt_ax.loglog(data_for_fit['radius'], gt_exact,'-C2',
label=mlabel('z,R', fit_exact),
lw=.5)
gt_ax.fill_between(data_for_fit['radius'], *gt_err_exact, lw=0, color='C2', alpha=.2)
gt_ax.set_ylabel(r'$g_t$', fontsize = 8)
gt_ax.legend(fontsize=6)
gt_ax.set_xticklabels([])
gt_ax.tick_params('x', labelsize=8)
gt_ax.tick_params('y', labelsize=8)
#gt_ax.set_yscale('log')
errorbar_kwargs2 = {k:v for k, v in errorbar_kwargs.items() if 'marker' not in k}
errorbar_kwargs2['markersize'] = 3
errorbar_kwargs2['markeredgewidth'] = .5
res_ax = plt.axes([.25, .2, .7, .2])
delta = (cluster.profile['radius'][1]/cluster.profile['radius'][0])**.25
res_err = data_for_fit['gt_err']/data_for_fit['gt']
res_ax.errorbar(data_for_fit['radius']/delta, gt_mean_z/data_for_fit['gt']-1,
yerr=res_err, marker='.', c='C0', **errorbar_kwargs2)
errorbar_kwargs2['markersize'] = 1.5
res_ax.errorbar(data_for_fit['radius'], gt_approx/data_for_fit['gt']-1,
yerr=res_err, marker='s', c='C1', **errorbar_kwargs2)
errorbar_kwargs2['markersize'] = 3
errorbar_kwargs2['markeredgewidth'] = .5
res_ax.errorbar(data_for_fit['radius']*delta, gt_exact/data_for_fit['gt']-1,
yerr=res_err, marker='*', c='C2', **errorbar_kwargs2)
res_ax.set_xlabel(r'$R$ [Mpc]', fontsize = 8)
res_ax.set_ylabel(r'$g_t^{mod.}/g_t^{data}-1$', fontsize = 8)
res_ax.set_xscale('log')
res_ax.set_xlim(gt_ax.get_xlim())
res_ax.set_ylim(-0.65,0.65)
res_ax.yaxis.set_minor_locator(MultipleLocator(.1))
res_ax.tick_params('x', labelsize=8)
res_ax.tick_params('y', labelsize=8)
for p in (gt_ax, res_ax):
p.xaxis.grid(True, which='major', lw=.5)
p.yaxis.grid(True, which='major', lw=.5)
p.xaxis.grid(True, which='minor', lw=.1)
p.yaxis.grid(True, which='minor', lw=.1)
plt.savefig('r_gt.png')
```
```
#export
from local.torch_basics import *
from local.test import *
from local.core import *
from local.layers import *
from local.data.all import *
from local.text.core import *
from local.notebook.showdoc import show_doc
#default_exp text.models.awdlstm
#default_cls_lvl 3
```
# AWD-LSTM
> AWD LSTM from [Smerity et al.](https://arxiv.org/pdf/1708.02182.pdf)
## Basic NLP modules
On top of the PyTorch or the fastai [`layers`](/layers.html#layers), the language models use some custom layers specific to NLP.
```
#export
def dropout_mask(x, sz, p):
"Return a dropout mask of the same type as `x`, size `sz`, with probability `p` to cancel an element."
return x.new(*sz).bernoulli_(1-p).div_(1-p)
t = dropout_mask(torch.randn(3,4), [4,3], 0.25)
test_eq(t.shape, [4,3])
assert ((t == 4/3) + (t==0)).all()
#export
class RNNDropout(Module):
"Dropout with probability `p` that is consistent on the seq_len dimension."
def __init__(self, p=0.5): self.p=p
def forward(self, x):
if not self.training or self.p == 0.: return x
return x * dropout_mask(x.data, (x.size(0), 1, x.size(2)), self.p)
dp = RNNDropout(0.3)
tst_inp = torch.randn(4,3,7)
tst_out = dp(tst_inp)
for i in range(4):
for j in range(7):
if tst_out[i,0,j] == 0: assert (tst_out[i,:,j] == 0).all()
else: test_close(tst_out[i,:,j], tst_inp[i,:,j]/(1-0.3))
#export
import warnings
#export
class WeightDropout(Module):
"A module that warps another layer in which some weights will be replaced by 0 during training."
def __init__(self, module, weight_p, layer_names='weight_hh_l0'):
self.module,self.weight_p,self.layer_names = module,weight_p,L(layer_names)
for layer in self.layer_names:
#Makes a copy of the weights of the selected layers.
w = getattr(self.module, layer)
self.register_parameter(f'{layer}_raw', nn.Parameter(w.data))
self.module._parameters[layer] = F.dropout(w, p=self.weight_p, training=False)
def _setweights(self):
"Apply dropout to the raw weights."
for layer in self.layer_names:
raw_w = getattr(self, f'{layer}_raw')
self.module._parameters[layer] = F.dropout(raw_w, p=self.weight_p, training=self.training)
def forward(self, *args):
self._setweights()
with warnings.catch_warnings():
#To avoid the warning that comes because the weights aren't flattened.
warnings.simplefilter("ignore")
return self.module.forward(*args)
def reset(self):
for layer in self.layer_names:
raw_w = getattr(self, f'{layer}_raw')
self.module._parameters[layer] = F.dropout(raw_w, p=self.weight_p, training=False)
if hasattr(self.module, 'reset'): self.module.reset()
module = nn.LSTM(5,7).cuda()
dp_module = WeightDropout(module, 0.4)
wgts = getattr(dp_module.module, 'weight_hh_l0')
tst_inp = torch.randn(10,20,5).cuda()
h = torch.zeros(1,20,7).cuda(), torch.zeros(1,20,7).cuda()
x,h = dp_module(tst_inp,h)
new_wgts = getattr(dp_module.module, 'weight_hh_l0')
test_eq(wgts, getattr(dp_module, 'weight_hh_l0_raw'))
assert 0.2 <= (new_wgts==0).sum().float()/new_wgts.numel() <= 0.6
#export
class EmbeddingDropout(Module):
"Apply dropout with probabily `embed_p` to an embedding layer `emb`."
def __init__(self, emb, embed_p):
self.emb,self.embed_p = emb,embed_p
def forward(self, words, scale=None):
if self.training and self.embed_p != 0:
size = (self.emb.weight.size(0),1)
mask = dropout_mask(self.emb.weight.data, size, self.embed_p)
masked_embed = self.emb.weight * mask
else: masked_embed = self.emb.weight
if scale: masked_embed.mul_(scale)
return F.embedding(words, masked_embed, ifnone(self.emb.padding_idx, -1), self.emb.max_norm,
self.emb.norm_type, self.emb.scale_grad_by_freq, self.emb.sparse)
enc = nn.Embedding(10, 7, padding_idx=1)
enc_dp = EmbeddingDropout(enc, 0.5)
tst_inp = torch.randint(0,10,(8,))
tst_out = enc_dp(tst_inp)
for i in range(8):
assert (tst_out[i]==0).all() or torch.allclose(tst_out[i], 2*enc.weight[tst_inp[i]])
#export
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
#export
class AWD_LSTM(Module):
"AWD-LSTM inspired by https://arxiv.org/abs/1708.02182"
initrange=0.1
def __init__(self, vocab_sz, emb_sz, n_hid, n_layers, pad_token=1, hidden_p=0.2, input_p=0.6, embed_p=0.1,
weight_p=0.5, bidir=False, packed=False):
store_attr(self, 'emb_sz,n_hid,n_layers,pad_token,packed')
self.bs = 1
self.n_dir = 2 if bidir else 1
self.encoder = nn.Embedding(vocab_sz, emb_sz, padding_idx=pad_token)
self.encoder_dp = EmbeddingDropout(self.encoder, embed_p)
self.rnns = nn.ModuleList([self._one_rnn(emb_sz if l == 0 else n_hid, (n_hid if l != n_layers - 1 else emb_sz)//self.n_dir,
bidir, weight_p, l) for l in range(n_layers)])
self.encoder.weight.data.uniform_(-self.initrange, self.initrange)
self.input_dp = RNNDropout(input_p)
self.hidden_dps = nn.ModuleList([RNNDropout(hidden_p) for l in range(n_layers)])
def forward(self, inp, from_embeds=False):
bs,sl = inp.shape[:2] if from_embeds else inp.shape
if bs!=self.bs:
self.bs=bs
self.reset()
if self.packed: inp,lens = self._pack_sequence(inp, sl)
raw_output = self.input_dp(inp if from_embeds else self.encoder_dp(inp))
new_hidden,raw_outputs,outputs = [],[],[]
for l, (rnn,hid_dp) in enumerate(zip(self.rnns, self.hidden_dps)):
if self.packed: raw_output = pack_padded_sequence(raw_output, lens, batch_first=True)
raw_output, new_h = rnn(raw_output, self.hidden[l])
if self.packed: raw_output = pad_packed_sequence(raw_output, batch_first=True)[0]
new_hidden.append(new_h)
raw_outputs.append(raw_output)
if l != self.n_layers - 1: raw_output = hid_dp(raw_output)
outputs.append(raw_output)
self.hidden = to_detach(new_hidden, cpu=False)
return raw_outputs, outputs
def _one_rnn(self, n_in, n_out, bidir, weight_p, l):
"Return one of the inner rnn"
rnn = nn.LSTM(n_in, n_out, 1, batch_first=True, bidirectional=bidir)
return WeightDropout(rnn, weight_p)
def _one_hidden(self, l):
"Return one hidden state"
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return (one_param(self).new_zeros(self.n_dir, self.bs, nh), one_param(self).new_zeros(self.n_dir, self.bs, nh))
def reset(self):
"Reset the hidden states"
[r.reset() for r in self.rnns if hasattr(r, 'reset')]
self.hidden = [self._one_hidden(l) for l in range(self.n_layers)]
def _pack_sequence(self, inp, sl):
mask = (inp == self.pad_token)
lens = sl - mask.long().sum(1)
n_empty = (lens == 0).sum()
if n_empty > 0:
inp,lens = inp[:-n_empty],lens[:-n_empty]
self.hidden = [(h[0][:,:inp.size(0)], h[1][:,:inp.size(0)]) for h in self.hidden]
return (inp,lens)
```
This is the core of an AWD-LSTM model, with embeddings from `vocab_sz` and `emb_sz`, and `n_layers` LSTMs (potentially `bidir`) stacked: the first one goes from `emb_sz` to `n_hid`, the last one from `n_hid` to `emb_sz`, and all the inner ones from `n_hid` to `n_hid`. `pad_token` is passed to the PyTorch embedding layer. The dropouts are applied as such:
- the embeddings are wrapped in `EmbeddingDropout` of probability `embed_p`;
- the result of this embedding layer goes through an `RNNDropout` of probability `input_p`;
- each LSTM has `WeightDropout` applied with probability `weight_p`;
- between two of the inner LSTMs, an `RNNDropout` is applied with probability `hidden_p`.

The module returns two lists: the raw outputs of each inner LSTM (without the `hidden_p` dropout applied) and the list of outputs with dropout. Since no dropout is applied on the last output, those two lists have the same last element, which is the output that should be fed to a decoder (in the case of a language model).
```
tst = AWD_LSTM(100, 20, 10, 2)
x = torch.randint(0, 100, (10,5))
r = tst(x)
test_eq(tst.bs, 10)
test_eq(len(tst.hidden), 2)
test_eq([h_.shape for h_ in tst.hidden[0]], [[1,10,10], [1,10,10]])
test_eq([h_.shape for h_ in tst.hidden[1]], [[1,10,20], [1,10,20]])
test_eq(len(r), 2)
test_eq(r[0][-1], r[1][-1]) #No dropout for last output
for i in range(2): test_eq([h_.shape for h_ in r[i]], [[10,5,10], [10,5,20]])
for i in range(2): test_eq(r[0][i][:,-1], tst.hidden[i][0][0]) #hidden state is the last timestep in raw outputs
#test packed with padding
tst = AWD_LSTM(100, 20, 10, 2, packed=True)
x = torch.randint(2, 100, (10,5))
x[9,3:] = 1
r = tst(x)
test_eq(tst.bs, 10)
test_eq(len(tst.hidden), 2)
test_eq([h_.shape for h_ in tst.hidden[0]], [[1,10,10], [1,10,10]])
test_eq([h_.shape for h_ in tst.hidden[1]], [[1,10,20], [1,10,20]])
test_eq(len(r), 2)
test_eq(r[0][-1], r[1][-1]) #No dropout for last output
for i in range(2): test_eq([h_.shape for h_ in r[i]], [[10,5,10], [10,5,20]])
#hidden state is the last timestep in raw outputs
for i in range(2): test_eq(r[0][i][:,-1][:9], tst.hidden[i][0][0][:9])
for i in range(2): test_eq(r[0][i][:,-3][9], tst.hidden[i][0][0][9])
#export
def awd_lstm_lm_split(model):
"Split a RNN `model` in groups for differential learning rates."
groups = [nn.Sequential(rnn, dp) for rnn, dp in zip(model[0].rnns, model[0].hidden_dps)]
groups = L(groups + [nn.Sequential(model[0].encoder, model[0].encoder_dp, model[1])])
return groups.mapped(trainable_params)
splits = awd_lstm_lm_split
#export
awd_lstm_lm_config = dict(emb_sz=400, n_hid=1152, n_layers=3, pad_token=1, bidir=False, output_p=0.1, packed=False,
hidden_p=0.15, input_p=0.25, embed_p=0.02, weight_p=0.2, tie_weights=True, out_bias=True)
#export
def awd_lstm_clas_split(model):
"Split a RNN `model` in groups for differential learning rates."
groups = [nn.Sequential(model[0].module.encoder, model[0].module.encoder_dp)]
groups += [nn.Sequential(rnn, dp) for rnn, dp in zip(model[0].module.rnns, model[0].module.hidden_dps)]
groups = L(groups + [model[1]])
return groups.mapped(trainable_params)
#export
awd_lstm_clas_config = dict(emb_sz=400, n_hid=1152, n_layers=3, pad_token=1, bidir=False, output_p=0.4,
hidden_p=0.3, input_p=0.4, embed_p=0.05, weight_p=0.5, packed=True)
```
## QRNN
```
#export
class AWD_QRNN(AWD_LSTM):
"Same as an AWD-LSTM, but using QRNNs instead of LSTMs"
def _one_rnn(self, n_in, n_out, bidir, weight_p, l):
from local.text.models.qrnn import QRNN
rnn = QRNN(n_in, n_out, 1, save_prev_x=True, zoneout=0, window=2 if l == 0 else 1, output_gate=True, bidirectional=bidir)
rnn.layers[0].linear = WeightDropout(rnn.layers[0].linear, weight_p, layer_names='weight')
return rnn
def _one_hidden(self, l):
"Return one hidden state"
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return one_param(self).new_zeros(self.n_dir, self.bs, nh)
#export
awd_qrnn_lm_config = dict(emb_sz=400, n_hid=1552, n_layers=4, pad_token=1, bidir=False, output_p=0.1,
hidden_p=0.15, input_p=0.25, embed_p=0.02, weight_p=0.2, tie_weights=True, out_bias=True)
#export
awd_qrnn_clas_config = dict(emb_sz=400, n_hid=1552, n_layers=4, pad_token=1, bidir=False, output_p=0.4,
hidden_p=0.3, input_p=0.4, embed_p=0.05, weight_p=0.5)
```
## Export -
```
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
```
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# CI/CD - Make sure all notebooks respects our format policy
**Tags:** #naas
**Author:** [Maxime Jublou](https://www.linkedin.com/in/maximejublou/)
## Input
### Import libraries
```
import json
import glob
from rich import print
import pydash
import re
```
## Model
### Utility functions
These functions are used by the others so we don't repeat ourselves.
```
def tag_exists(tagname, cells):
for cell in cells:
if tagname in pydash.get(cell, 'metadata.tags', []):
return True
return False
def regexp_match(regex, string):
matches = re.finditer(regex, string, re.MULTILINE)
return len(list(matches)) >= 1
def check_regexp(cells, regex, source):
cell_str = pydash.get(cells, source, '')
return regexp_match(regex, cell_str)
def check_title_exists(cells, title):
for cell in cells:
if pydash.get(cell, 'cell_type') == 'markdown' and regexp_match(rf"^## *{title}", pydash.get(cell, 'source[0]')):
return True
return False
```
### Check functions
These functions check whether a notebook contains the right cells with proper formatting.
```
def check_naas_logo(cells):
logo_content = '<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>'
if pydash.get(cells, '[0].cell_type') == 'markdown' and pydash.get(cells, '[0].source[0]', '').startswith(logo_content):
return (True, '')
return (False, f'''
Requirements:
- Cell number: 1
- Cell type: Markdown
- Shape: {logo_content}
''')
def check_title_match_regexp(cells):
return (check_regexp(cells, r"markdown", '[1].cell_type') and check_regexp(cells, r"^#.*-.*", '[1].source[0]'), '''
Requirements:
- Cell number: 2
- Cell type: Markdown
- Shape: "# something - some other thing"
''')
def check_tool_tags(cells):
return (check_regexp(cells, r"markdown", '[2].cell_type') and check_regexp(cells, r"^\*\*Tags:\*\* (#[1-9,a-z,A-Z]*( *|$))*", '[2].source[0]'), '''
Requirements:
- Cell number: 3
- Cell type: Markdown
- Shape: "**Tags:** #atLeastOneTool"
''')
def check_author(cells):
return (check_regexp(cells, r"markdown", '[3].cell_type') and check_regexp(cells, r"^\*\*Author:\*\* *.*", '[3].source[0]'), '''
Requirements:
- Cell number: 4
- Cell type: Markdown
- Shape: "**Author:** At least one author name"
''')
def check_input_title_exists(cells):
return (check_title_exists(cells, 'Input'), '''
Requirements:
- Cell number: Any
- Cell type: Markdown
- Shape: "## Input"
''')
def check_model_title_exists(cells):
return (check_title_exists(cells, 'Model'), '''
Requirements:
- Cell number: Any
- Cell type: Markdown
- Shape: "## Model"
''')
def check_output_title_exists(cells):
return (check_title_exists(cells, 'Output'), '''
Requirements:
- Cell number: Any
- Cell type: Markdown
- Shape: "## Output"
''')
```
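To see what the tag pattern accepts and rejects, here is a standalone sanity check (the regex is copied verbatim from `check_tool_tags` above):

```python
import re

# Same pattern as used by check_tool_tags: a "**Tags:**" prefix followed
# by zero or more space-separated #tags.
TAG_RE = r"^\*\*Tags:\*\* (#[1-9,a-z,A-Z]*( *|$))*"

print(bool(re.match(TAG_RE, "**Tags:** #naas #ci")))  # True
print(bool(re.match(TAG_RE, "Tags: naas")))           # False
```

Note the character class allows letters and the digits 1-9, so a tag containing `0` would not match in full; that mirrors the behavior of the check above.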
## Output
```
got_errors = False
error_counter = 0
for file in glob.glob('../../**/*.ipynb', recursive=True):
# Do not check notebooks in .github or at the root of the project.
if '.github' in file or len(file.split('/')) == 3:
continue
notebook = json.load(open(file))
cells = notebook.get('cells')
filename = "[dark_orange]" + file.replace("../../", "") + "[/dark_orange]"
outputs = [f'Errors found in: {filename}']
should_display_debug = False
for checkf in [
check_naas_logo,
check_title_match_regexp,
check_tool_tags,
check_author,
check_input_title_exists,
check_model_title_exists,
check_output_title_exists]:
result, msg = checkf(cells)
if result is False:
should_display_debug = True
status_msg = "[bright_green]OK[/bright_green]" if result is True else f"[bright_red]KO {msg}[/bright_red]"
outputs.append(f'{checkf.__name__} ... {status_msg}')
if should_display_debug:
got_errors = True
error_counter += 1
for msg in outputs:
print(msg)
print("\n")
if got_errors == True:
print(f'[bright_red]You have {error_counter} notebooks having errors!')
exit(1)
```
# Concise Implementation of Softmax Regression
:label:`sec_softmax_concise`
Just as high-level APIs of deep learning frameworks
made it much easier
to implement linear regression in :numref:`sec_linear_concise`,
we will find it similarly (or possibly more)
convenient for implementing classification models. Let us stick with the Fashion-MNIST dataset
and keep the batch size at 256 as in :numref:`sec_softmax_scratch`.
```
from d2l import mxnet as d2l
from mxnet import gluon, init, npx
from mxnet.gluon import nn
npx.set_np()
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
```
## Initializing Model Parameters
As mentioned in :numref:`sec_softmax`,
the output layer of softmax regression
is a fully-connected layer.
Therefore, to implement our model,
we just need to add one fully-connected layer
with 10 outputs to our `Sequential`.
Again, here, the `Sequential` is not really necessary,
but we might as well form the habit since it will be ubiquitous
when implementing deep models.
Again, we initialize the weights at random
with zero mean and standard deviation 0.01.
```
net = nn.Sequential()
net.add(nn.Dense(10))
net.initialize(init.Normal(sigma=0.01))
```
## Softmax Implementation Revisited
:label:`subsec_softmax-implementation-revisited`
In the previous example of :numref:`sec_softmax_scratch`,
we calculated our model's output
and then ran this output through the cross-entropy loss.
Mathematically, that is a perfectly reasonable thing to do.
However, from a computational perspective,
exponentiation can be a source of numerical stability issues.
Recall that the softmax function calculates
$\hat y_j = \frac{\exp(o_j)}{\sum_k \exp(o_k)}$,
where $\hat y_j$ is the $j^\mathrm{th}$ element of
the predicted probability distribution $\hat{\mathbf{y}}$
and $o_j$ is the $j^\mathrm{th}$ element of the logits
$\mathbf{o}$.
If some of the $o_k$ are very large (i.e., very positive),
then $\exp(o_k)$ might be larger than the largest number
we can have for certain data types (i.e., *overflow*).
This would make the denominator (and/or numerator) `inf` (infinity)
and we wind up encountering either 0, `inf`, or `nan` (not a number) for $\hat y_j$.
In these situations we do not get a well-defined
return value for cross-entropy.
One trick to get around this is to first subtract $\max(o_k)$
from all $o_k$ before proceeding with the softmax calculation.
You can verify that shifting each $o_k$ by a constant
does not change the return value of softmax.
After the subtraction and normalization step,
it might be possible that some $o_j$ have large negative values
and thus that the corresponding $\exp(o_j)$ will take values close to zero.
These might be rounded to zero due to finite precision (i.e., *underflow*),
making $\hat y_j$ zero and giving us `-inf` for $\log(\hat y_j)$.
A few steps down the road in backpropagation,
we might find ourselves faced with a screenful
of the dreaded `nan` results.
Fortunately, we are saved by the fact that
even though we are computing exponential functions,
we ultimately intend to take their log
(when calculating the cross-entropy loss).
By combining these two operators,
softmax and cross-entropy,
we can escape the numerical stability issues
that might otherwise plague us during backpropagation.
As shown in the equation below, we avoid calculating $\exp(o_j)$
and can use instead $o_j$ directly due to the canceling in $\log(\exp(\cdot))$.
$$
\begin{aligned}
\log{(\hat y_j)} & = \log\left( \frac{\exp(o_j)}{\sum_k \exp(o_k)}\right) \\
& = \log{(\exp(o_j))}-\log{\left( \sum_k \exp(o_k) \right)} \\
& = o_j -\log{\left( \sum_k \exp(o_k) \right)}.
\end{aligned}
$$
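The two stabilization steps just described, subtracting $\max(o_k)$ and then cancelling $\log(\exp(\cdot))$, can be sketched in plain NumPy; this is an illustration only, not Gluon's internal implementation:

```python
import numpy as np

def log_softmax(logits):
    # Subtracting the per-row max shifts every o_k by a constant,
    # which leaves softmax unchanged but keeps exp() from overflowing.
    shifted = logits - logits.max(axis=1, keepdims=True)
    # o_j - log(sum_k exp(o_k)), computed on the shifted logits.
    return shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

# Logits this large would overflow a naive exp(o_j),
# yet the stabilized result stays finite.
o = np.array([[1000.0, 1000.5, 999.0]])
print(log_softmax(o))
```

Exponentiating the result recovers the ordinary softmax probabilities, so the conventional function remains available whenever we want to inspect the model's predicted distribution.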
We will want to keep the conventional softmax function handy
in case we ever want to evaluate the output probabilities by our model.
But instead of passing softmax probabilities into our new loss function,
we will just pass the logits and compute the softmax and its log
all at once inside the cross-entropy loss function,
which does smart things like the ["LogSumExp trick"](https://en.wikipedia.org/wiki/LogSumExp).
```
loss = gluon.loss.SoftmaxCrossEntropyLoss()
```
## Optimization Algorithm
Here, we use minibatch stochastic gradient descent
with a learning rate of 0.1 as the optimization algorithm.
Note that this is the same as we applied in the linear regression example
and it illustrates the general applicability of the optimizers.
```
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
```
## Training
Next we call the training function defined in :numref:`sec_softmax_scratch` to train the model.
```
num_epochs = 10
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
```
As before, this algorithm converges to a solution
that achieves a decent accuracy,
albeit this time with fewer lines of code than before.
## Summary
* Using high-level APIs, we can implement softmax regression much more concisely.
* From a computational perspective, implementing softmax regression has intricacies. Note that in many cases, a deep learning framework takes additional precautions beyond these most well-known tricks to ensure numerical stability, saving us from even more pitfalls that we would encounter if we tried to code all of our models from scratch in practice.
## Exercises
1. Try adjusting the hyperparameters, such as the batch size, number of epochs, and learning rate, to see what the results are.
1. Increase the number of epochs for training. Why might the test accuracy decrease after a while? How could we fix this?
[Discussions](https://discuss.d2l.ai/t/52)
# Bioinfomatic central script
```
# Source the utility functions file, which should be in the scripts folder with this file
source('scripts/meg_utility_functions.R')
source('scripts/load_libraries.R')
```
## USER Controls
First, we'll need to specify the location of important files on your machine.
You'll need to input files associated with the microbiome and resistome separately. This allows for the option of including microbiome results from qiime2 or kraken2.
For the resistome:
> Metadata file for all resistome samples (.csv)
> Megares annotation file (.csv)
> Count table results from the AMRplusplus pipeline (.csv)
For the microbiome
> Metadata file for all microbiome samples (.tsv)
> etc.
```
# In which column of the metadata file are the sample IDs stored?
sample_column_id = 'ID'
# Set the output directory for graphs:
graph_output_dir = 'graphs'
# Set the output directory for statistics:
stats_output_dir = 'stats'
```
# For the resistome:
```
# Load the data, MEGARes annotations, and metadata
amr_count_matrix_filepath = 'data/test_data/strict_SNP_confirmed_AMR_analytic_matrix.csv'
# Where is the metadata file stored on your machine?
amr_metadata_filepath = 'data/test_data/FC_meat_AMR_metadata.csv'
# Name of the megares annotation file used for this project
megares_annotation_filename = 'data/amr/megares_annotations_v1.03.csv'
```
# For the microbiome:
```
##### Two options for microbiome analysis, either 16S (below) or shotgun (I'll add)
microbiome_temp_metadata_file <- "data/test_data/FC_meat_metadata.csv"
# Now, specify file location for 16S
biom_file <- "data/test_data/exported-biom-table/otu_table_json.biom"
tre_file <- "data/test_data/exported-tree/tree.nwk"
tax_fasta <- "data/test_data/exported-rep-seqs/dna-sequences.fasta" #https://data.qiime2.org/2017.6/tutorials/training-feature-classifiers/85_otus.fasta
taxa_file <- "data/test_data/exported-biom-table-taxa/taxonomy.tsv" #https://data.qiime2.org/2017.6/tutorials/training-feature-classifiers/85_otu_taxonomy.txt
### or Shotgun analysis
#microbiome_temp_metadata_file = "../FC_meat_metadata.csv"
#kraken_temp_file <- read.table('microbiome_analytic_matrix.csv', header=T, row.names=1, sep=',')
```
# Next, we have to specify which variables you want to create exploratory graphs with
We should try to make this a click-through option, and some users might not need both the AMR and microbiome analyses.
```
# The following is a list of analyses based on variables in
# your metadata.csv file that you want
# to use for EXPLORATORY analysis (NMDS, PCA, alpha rarefaction, barplots)
# NOTE: Exploratory variables cannot be numeric.
AMR_exploratory_analyses = list(
# Analysis Store
# Description:
list(
name = 'Store',
subsets = list(),
exploratory_var = 'Blinded_Store',
order = ''
),
# Analysis Dilution
# Description:
list(
name = 'Dilution',
subsets = list(),
exploratory_var = 'Dilution',
order = ''
),
# Analysis 2
# Description:
list(
name = 'Treatment',
subsets = list(),
exploratory_var = 'Treatment',
order = ''
),
# Analysis 3
# Description:
list(
name = 'Packaging',
subsets = list(),
exploratory_var = 'Packaging',
order = ''
)
)
microbiome_exploratory_analyses = list(
# Analysis Store
# Description:
list(
name = 'Store',
subsets = list(),
exploratory_var = 'Blinded_Store',
order = ''
),
# Analysis ID
# Description:
list(
name = 'ID',
subsets = list(),
exploratory_var = 'ID',
order = ''
),
# Analysis 2
# Description:
list(
name = 'Treatment',
subsets = list(),
exploratory_var = 'Treatment',
order = ''
),
# Analysis 3
# Description:
list(
name = 'Packaging',
subsets = list(),
exploratory_var = 'Packaging',
order = ''
)
)
```
# Zero-inflated Gaussian model
```
# Each analysis you wish to perform should have its own list in the following
# statistical_analyses list. A template is provided to get you started.
# Multiple analyses, subsets, and contrasts are valid, but only one random
# effect can be used per analysis. The contrasts of interest must have their
# parent variable in the model matrix equation. Contrasts are named by
# parent variable then child variable without a space in between, for example:
# Pvar1Cvar1 where the model matrix equation is ~ 0 + Pvar1.
AMR_statistical_analyses = list(
# Analysis 1
# Description:
list(
name = 'Treatment',
subsets = list(),
model_matrix = '~ 0 + Treatment ',
contrasts = list('TreatmentCONV - TreatmentRWA'),
random_effect = NA
)
)
microbiome_statistical_analyses = list(
# Analysis 1
# Description:
list(
name = 'Treatment',
subsets = list(),
model_matrix = '~ 0 + Treatment ',
contrasts = list('TreatmentCONV - TreatmentRWA'),
random_effect = NA
)
)
```
# Run main script to get convenient R objects for further analysis
* You have to select which script you need to run, based on what data you are providing. In the example data provided, we used qiime2 for microbiome analysis and AMR++ with the megares database. Therefore, we will run this script:
* source('scripts/metagenomeSeq_megares_qiime.R')
* This is the other option (more in development):
* source('scripts/metagenomeSeq_megares_kraken.R')
### After running the next code block, you can explore your data using following R objects
* AMR
* amr_melted_analytic/amr_melted_raw_analytic
* Object of all counts in long form
* AMR_analytic_data
* List of MRexperiment objects at each level; Class, Mechanism, Group, Gene
* Microbiome
* microbiome_melted_analytic/microbiome_raw_melted_analytic
* microbiome_analytic_data
```
#### If 16S microbiome and megares analysis, run:
source('scripts/metagenomeSeq_megares_qiime.R')
```
# Print exploratory figures
* ## Don't use these figures for publication unless you fully understand how the functions in the script "meg_utility_functions.R" process your data.
```
## Run code to make some exploratory figures, zero inflated gaussian model, and output count matrices.
suppressMessages(source('scripts/print_figures.R'))
```
# Everything after this is where we can get creative to summarize our results.
## Area to show them how to play around with ggplot2
First, combine the normalized count tables with the metadata file.
```
head(amr_melted_analytic)
### Start of code for figures, combine table objects to include meta
setkey(amr_melted_raw_analytic,ID)
setkey(amr_melted_analytic,ID)
setkey(microbiome_melted_analytic,ID)
# Set keys for both metadata files
setkey(metadata,ID)
setkey(microbiome_metadata,ID)
microbiome_melted_analytic <- microbiome_melted_analytic[microbiome_metadata]
amr_melted_raw_analytic <- amr_melted_raw_analytic[metadata]
amr_melted_analytic <- amr_melted_analytic[metadata]
head(amr_melted_analytic)
```
# Create plots below
```
## Figure showing resistome composition
AMR_class_sum <- amr_melted_analytic[Level_ID=="Class", .(sum_class= sum(Normalized_Count)),by=.(ID, Name, Packaging, Treatment)][order(-Packaging )]
AMR_class_sum[,total:= sum(sum_class), by=.(ID)]
AMR_class_sum[,percentage:= sum_class/total ,by=.(ID, Name) ]
AMR_class_sum$Class <- AMR_class_sum$Name
fig1 <- ggplot(AMR_class_sum, aes(x = ID, y = percentage, fill = Class)) +
geom_bar(stat = "identity",colour = "black")+
facet_wrap( ~ Treatment, scales='free',ncol = 2) +
theme(
panel.grid.major=element_blank(),
panel.grid.minor=element_blank(),
strip.text.x=element_text(size=22),
strip.text.y=element_text(size=22, angle=0),
axis.text.x=element_blank(),
axis.text.y=element_text(size=20),
axis.title=element_text(size=22),
legend.position="right",
panel.spacing=unit(0.1, "lines"),
plot.title=element_text(size=22, hjust=0.5),
legend.text=element_text(size=10),
legend.title=element_text(size=20),
panel.background = element_rect(fill = "white")
) +
ggtitle("\t\tResistome composition by sample") +
xlab('Sample') +
ylab('Relative abundance') +
scale_fill_tableau("Tableau 20")
fig1
```
<a href="https://colab.research.google.com/github/Fuenfgeld/2022TeamADataManagementBC/blob/main/Tutorial-Metadaten/structureData_task.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Structural Data and Metadata Schemas
#### REFERENCE MODEL FOR AN OPEN ARCHIVAL INFORMATION SYSTEM (OAIS)

Structure Information, as defined by the [Open Archival Information System](https://public.ccsds.org/pubs/650x0m2.pdf):
- It does this by **describing the format**, or data structure concepts, which are to be applied to the bit sequences and that in turn result in more meaningful values such as characters, numbers, pixels, arrays, tables, etc.
- These common computer **data types**, **aggregations** of these data types, and **mapping rules** which map from the underlying data types to the higher level concepts needed to understand the Digital Object are referred to as the Structure Information of the Representation Information object.
Examples:
- A reference to the ASCII standard (ISO 9660) to interpret bits as characters.
- A reference to ISO/TS 22028-4 (Digital images encoded using eciRGB) to interpret bits as images.
## import required libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
## Data exploration
The original data can be found, as mentioned, at https://www.kaggle.com/datasets/sulianova/cardiovascular-disease-dataset.
Features:
- Age | Objective Feature | age | int (days)
- Height | Objective Feature | height | int (cm) |
- Weight | Objective Feature | weight | float (kg) |
- Systolic blood pressure | Examination Feature | ap_hi | int |
- Diastolic blood pressure | Examination Feature | ap_lo | int |
- Smoking | Subjective Feature | smoke | 0-2 |
- Presence or absence of cardiovascular disease | Target Variable | cardio | 0-9 |
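A quick way to catch loading mistakes is to assert the documented ranges after reading the file. The sketch below uses a tiny hand-made DataFrame in place of `new_data.csv`; the values are invented, and the column names follow the feature table above:

```python
import pandas as pd

# Stand-in for df = pd.read_csv("./data/new_data.csv", ...)
df = pd.DataFrame({
    "age": [18250, 20075],   # int, in days
    "height": [165, 180],    # int, cm
    "weight": [70.0, 82.5],  # float, kg
    "ap_hi": [120, 140],     # systolic blood pressure
    "ap_lo": [80, 90],       # diastolic blood pressure
    "smoke": [0, 2],         # coded 0-2
    "cardio": [0, 9],        # target, coded 0-9
})

# Check that every coded column stays inside its documented range.
assert df["smoke"].between(0, 2).all()
assert df["cardio"].between(0, 9).all()
# Systolic pressure should not be below diastolic pressure.
assert (df["ap_hi"] >= df["ap_lo"]).all()
print("all range checks passed")
```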
```
df = pd.read_csv("./data/new_data.csv", sep = ',', index_col = 0)
df.head(5)
```
## What do these data mean?


### Metadata schemas
- There are many kinds of metadata standards/schemas (generic vs. domain-specific).
- Generic: [Dublin Core](https://www.dublincore.org/), [MODS](http://format.gbv.de/mods) (Metadata Object Description Schema) are usually easy to use and widely adopted, but often need to be extended to cover more specific information.
- Domain-specific schemas: offer a much richer vocabulary and structure, but are usually highly specialized and understandable only to researchers in the field. [Examples here](https://fairsharing.org/search?fairsharingRegistry=Standard)
## [Dublin Core](https://www.dublincore.org/specifications/dublin-core/dcmi-terms/#section-1)
- Dublin Core goes back to the so-called Dublin Core Metadata Initiative (DCMI)
- founded in Chicago in 1994
- in 1995, uniform standards for marking up metadata were defined in Dublin.
- Goal: make it easier for search engines to index documents by storing key content directly in the metadata.
- Used wherever search engines are in play: the internet, libraries, administrations, or museums.
- Today the standards are maintained by a group of volunteers.
### Breakdown into 15 core elements [here](https://www.dublincore.org/specifications/dublin-core/dcmi-terms/):
- contributor
- coverage
- creator
- date
- description
- format
- identifier
- language
- publisher
- relation
- rights
- source
- subject
- title
- type
- **contributor** *WHO?*: <br>
Names the person(s) or organization(s) who contributed to the creation of the resource (content).
- **coverage (place and time)** *WHERE/WHEN?*: <br>
Holds information about the [place](http://www.getty.edu/research/tools/vocabularies/tgn/?find=&place=Heidelberg&nation=&prev_page=1&english=Y&popup=P) and temporal scope of validity. Valid names should be used for places, and time intervals (e.g., 07.07 - 12.07) for temporal extent.
- **creator** *WHO?*: <br>
Names the original author of a resource. Authors can be person(s) or organization(s).
- **[date](https://www.w3.org/TR/NOTE-datetime)** *WHEN?*:<br>
Holds information about the creation date, modification date, embargo period, and deletion date.
- **description** *WHY/WHAT?*:<br>
Additional information that describes the resource in more detail, e.g., an abstract or a table of contents.
- **format** *WHAT/HOW?*:<br>
Details about the resource's [MIME type](https://www.iana.org/assignments/media-types/media-types.xhtml), such as pixel dimensions, file format, duration, etc.
- **identifier** *WHAT?*: <br>
Contains a unique identifier for the resource, e.g., a URL ([DOI](https://www.doi.org/)), article number, or UID.
- **language** *WHAT/HOW?*:<br>
Holds a language code. Language codes according to [ISO 639](https://www.loc.gov/standards/iso639-2/php/code_list.php) or RFC 3066 should be used.
- **publisher** *WHO?*: <br>
Contains information about the publisher, which can be person(s) or organization(s).
- **relation** *WHAT?*:<br>
Records information about relationships to other resources.
- **rights** *WHO/WHERE?*: <br>
Holds information about the rights to resources, for example the copyright holder or the [license type](https://opensource.org/licenses) (GPL, LGPL, ZPL, etc.).
- **source** *WHAT?*: <br>
A related resource from which the described resource is derived. The described resource may be derived from the related resource in whole or in part.
- **subject (keywords)** *WHAT?*:<br>
Keywords or entire identifying phrases for a resource can be stored here.
- **title** *WHAT?*:<br>
Holds the resource title (e.g., the document title).
- **[type](https://www.dublincore.org/specifications/dublin-core/dcmi-terms/#section-7)** *WHAT/HOW?*:<br>
Assigns a media category to a resource, such as image, article, folder, etc.
### Task
Using the linked encoding standards, fill in the core elements that are still missing for the dataset presented here. Additional information about the dataset can be found [here](https://github.com/Fuenfgeld/2022TeamADataManagementBC/wiki/3.-Datensatz#zus%C3%A4tzliche-beschreibung).
Example encoding standards:
- [Thesaurus of Geographic Names (TGN)](http://www.getty.edu/research/tools/vocabularies/tgn/?find=&place=Heidelberg&nation=&prev_page=1&english=Y&popup=P)
- [Date and Time Formats](https://www.w3.org/TR/NOTE-datetime)
- [Media types](https://www.iana.org/assignments/media-types/media-types.xhtml)
- [Codes for the Representation of Names of Languages (ISO 639-2)](https://www.loc.gov/standards/iso639-2/php/code_list.php)
- [List of popular Licenses](https://opensource.org/licenses)
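For the task above, a Simple Dublin Core record can be assembled with Python's standard library. The element values below are placeholders to be replaced with the answers worked out from the dataset description:

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

# Placeholder values: replace these with the answers from the task.
record = {
    "title": "Cardiovascular Disease dataset",
    "date": "2019-01-20",   # W3C date format (YYYY-MM-DD)
    "format": "text/csv",   # IANA media type
    "language": "en",       # ISO 639 code
}

# Build one <dc:...> child element per core element.
metadata = ET.Element("metadata")
for element, value in record.items():
    child = ET.SubElement(metadata, f"{{{DC_NS}}}{element}")
    child.text = value

print(ET.tostring(metadata, encoding="unicode"))
```

The output follows the Simple Dublin Core XML shape shown in the example below, with each property as a repeated child element of `<metadata>`.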
### XML Schema
- [Guidelines](https://www.dublincore.org/specifications/dublin-core/dc-xml-guidelines/2003-04-02/)
- Difference between **Simple** and **Qualified** Dublin Core
#### Simple Dublin Core
- Consists of one or more properties and their associated values.
- Each property is an attribute of the described resource.
- Each property must be one of the 15 DCMES [DCMES] elements.
- Properties may be repeated.
- Each value is a string.
- Each string value may have an associated language (e.g., en-GB).
```xml
<?xml version="1.0"?>
<metadata
xmlns="http://example.org/myapp/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://example.org/myapp/ http://example.org/myapp/schema.xsd"
xmlns:dc="http://purl.org/dc/elements/1.1/">
<dc:title>
UKOLN
</dc:title>
<dc:description>
UKOLN is a national focus of expertise in digital information
management. It provides policy, research and awareness services
to the UK library, information and cultural heritage communities.
UKOLN is based at the University of Bath.
</dc:description>
<dc:publisher>
UKOLN, University of Bath
</dc:publisher>
<dc:identifier>
http://www.ukoln.ac.uk/
</dc:identifier>
</metadata>
```
#### Qualified Dublin Core
- Consists of one or more properties and their associated values. **✓**
- Each property is an attribute of the described resource. **✓**
- Each property must be either:
  - one of the 15 DC elements, **✓**
  - one of the other elements recommended by DCMI (e.g., audience) [DCTERMS],
  - one of the element refinements listed in the DCMI Metadata Terms recommendation [DCTERMS].
- Properties may be repeated. **✓**
- Each value is a string. **✓**
- Each value may have an associated encoding scheme.
- Each encoding scheme has a name.
- Each string value may have an associated language (e.g., en-GB). **✓**
```xml
<?xml version="1.0"?>
<metadata
xmlns="http://example.org/myapp/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://example.org/myapp/ http://example.org/myapp/schema.xsd"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dcterms="http://purl.org/dc/terms/">
<dc:title>
UKOLN
</dc:title>
<dcterms:alternative>
UK Office for Library and Information Networking
</dcterms:alternative>
<dc:subject>
national centre, network information support, library
community, awareness, research, information services,public
library networking, bibliographic management, distributed
library systems, metadata, resource discovery,
conferences,lectures, workshops
</dc:subject>
<dc:subject xsi:type="dcterms:DDC">
062
</dc:subject>
<dc:subject xsi:type="dcterms:UDC">
061(410)
</dc:subject>
<dc:description>
UKOLN is a national focus of expertise in digital information
management. It provides policy, research and awareness services
to the UK library, information and cultural heritage communities.
UKOLN is based at the University of Bath.
</dc:description>
<dc:description xml:lang="fr">
UKOLN est un centre national d'expertise dans la gestion de l'information
digitale.
</dc:description>
<dc:publisher>
UKOLN, University of Bath
</dc:publisher>
<dcterms:isPartOf xsi:type="dcterms:URI">
http://www.bath.ac.uk/
</dcterms:isPartOf>
<dc:identifier xsi:type="dcterms:URI">
http://www.ukoln.ac.uk/
</dc:identifier>
<dcterms:modified xsi:type="dcterms:W3CDTF">
2001-07-18
</dcterms:modified>
<dc:format xsi:type="dcterms:IMT">
text/html
</dc:format>
<dcterms:extent>
14 Kbytes
</dcterms:extent>
</metadata>
```
# Predict H1N1 and Seasonal Flu Vaccines
## Preprocessing
### Import libraries
```
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
```
### Import data
```
features_raw_df = pd.read_csv("data/training_set_features.csv", index_col="respondent_id")
labels_raw_df = pd.read_csv("data/training_set_labels.csv", index_col="respondent_id")
print("features_raw_df.shape", features_raw_df.shape)
features_raw_df.head()
features_raw_df.dtypes
print("labels_raw_df.shape", labels_raw_df.shape)
labels_raw_df.head()
labels_raw_df.dtypes
features_df = features_raw_df.copy()
labels_df = labels_raw_df.copy()
```
### Exploratory Data Analysis
```
fig, ax = plt.subplots(2, 1, sharex=True)
n_entries = labels_df.shape[0]
(labels_df['h1n1_vaccine'].value_counts().div(n_entries)
.plot.barh(title="Proportion of H1N1 Vaccine", ax=ax[0]))
ax[0].set_ylabel("h1n1_vaccine")
(labels_df['seasonal_vaccine'].value_counts().div(n_entries)
.plot.barh(title="Proportion of Seasonal Vaccine", ax=ax[1]))
ax[1].set_ylabel("seasonal_vaccine")
fig.tight_layout()
pd.crosstab(
labels_df["h1n1_vaccine"],
labels_df["seasonal_vaccine"],
margins=True,
normalize=True
)
(labels_df["h1n1_vaccine"]
.corr(labels_df["seasonal_vaccine"], method="pearson")
)
```
### Features
```
df = features_df.join(labels_df)
print(df.shape)
df.head()
h1n1_concern_vaccine = df[['h1n1_concern', 'h1n1_vaccine']].groupby(['h1n1_concern', 'h1n1_vaccine']).size().unstack()
h1n1_concern_vaccine
ax = h1n1_concern_vaccine.plot.barh()
ax.invert_yaxis()
h1n1_concern_counts = h1n1_concern_vaccine.sum(axis='columns')
h1n1_concern_counts
h1n1_concern_vaccine_prop = h1n1_concern_vaccine.div(h1n1_concern_counts, axis='index')
h1n1_concern_vaccine_prop
ax = h1n1_concern_vaccine_prop.plot.barh(stacked=True)
ax.invert_yaxis()
ax.legend(loc='center left', bbox_to_anchor=(1.05, 0.5), title='h1n1_vaccine')
plt.show()
def vaccination_rate_plot(vaccine, feature, df, ax=None):
feature_vaccine = df[[feature, vaccine]].groupby([feature, vaccine]).size().unstack()
counts = feature_vaccine.sum(axis='columns')
proportions = feature_vaccine.div(counts, axis='index')
ax = proportions.plot.barh(stacked=True, ax=ax)
ax.invert_yaxis()
ax.legend(loc='center left', bbox_to_anchor=(1.05, 0.5), title=vaccine)
ax.legend().remove()
vaccination_rate_plot('seasonal_vaccine', 'h1n1_concern', df)
cols_to_plot = [
'h1n1_concern',
'h1n1_knowledge',
'opinion_h1n1_vacc_effective',
'opinion_h1n1_risk',
'opinion_h1n1_sick_from_vacc',
'opinion_seas_vacc_effective',
'opinion_seas_risk',
'opinion_seas_sick_from_vacc',
'sex',
'age_group',
'race',
]
fig, ax = plt.subplots(len(cols_to_plot), 2, figsize=(10,len(cols_to_plot)*2.5))
for idx, col in enumerate(cols_to_plot):
vaccination_rate_plot('h1n1_vaccine', col, df, ax=ax[idx, 0])
vaccination_rate_plot('seasonal_vaccine', col, df, ax=ax[idx, 1])
ax[0, 0].legend(loc='lower center', bbox_to_anchor=(0.5, 1.05), title='h1n1_vaccine')
ax[0, 1].legend(loc='lower center', bbox_to_anchor=(0.5, 1.05), title='seasonal_vaccine')
fig.tight_layout()
```
### Categorical columns
```
features_df = features_raw_df.copy()
labels_df = labels_raw_df.copy()
features_df.dtypes == object
# All categorical columns considered apart from employment-related
categorical_cols = features_df.columns[features_df.dtypes == "object"].values[:-2]
categorical_cols
categorical_cols = np.delete(categorical_cols, np.where(categorical_cols == 'hhs_geo_region'))
categorical_cols
features_df.employment_occupation.unique()
features_df.hhs_geo_region.unique()
features_df[categorical_cols].head()
for col in categorical_cols:
col_dummies = pd.get_dummies(features_df[col], drop_first = True)
features_df = features_df.drop(col, axis=1)
features_df = pd.concat([features_df, col_dummies], axis=1)
features_df.head()
features_df.isna().sum()
def preprocess_categorical(df):
categorical_cols = df.columns[df.dtypes == "object"].values[:-2]
categorical_cols = np.delete(categorical_cols, np.where(categorical_cols == 'hhs_geo_region'))
for col in categorical_cols:
col_dummies = pd.get_dummies(df[col], drop_first = True)
df = df.drop(col, axis=1)
df = pd.concat([df, col_dummies], axis=1)
df = df.drop(['hhs_geo_region', 'employment_industry', 'employment_occupation'], axis=1)
return df
```
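The dummy-encoding step inside `preprocess_categorical` is easiest to see on a toy frame with made-up values; `drop_first=True` drops one level per variable, which becomes the implicit baseline and avoids perfect collinearity among the dummy columns:

```python
import pandas as pd

# Invented example values; the real notebook applies this per column.
toy = pd.DataFrame({"sex": ["Male", "Female", "Female"],
                    "age_group": ["18 - 34 Years", "65+ Years", "18 - 34 Years"]})

# One dummy column per remaining level; the dropped (first) level
# is represented by all-zero rows.
encoded = pd.get_dummies(toy, drop_first=True)
print(encoded.columns.tolist())
```

Note that calling `pd.get_dummies` on a whole frame prefixes each dummy with its source column name, whereas the notebook's per-column calls produce columns named by level alone.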
## MACHINE LEARNING
### Machine Learning Model
```
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, roc_auc_score
RANDOM_SEED = 6
features_raw_df.dtypes != "object"
numeric_cols = features_raw_df.columns[features_raw_df.dtypes != "object"].values
print(numeric_cols)
```
### Features Preprocessing
```
# chain preprocessing into a Pipeline object
numeric_preprocessing_steps = Pipeline([
('standard_scaler', StandardScaler()),
('simple_imputer', SimpleImputer(strategy='median'))
])
# create the preprocessor stage of final pipeline
preprocessor = ColumnTransformer(
transformers = [
("numeric", numeric_preprocessing_steps, numeric_cols)
],
remainder = "passthrough"
)
estimators = MultiOutputClassifier(
estimator=LogisticRegression(penalty="l2", C=1)
)
full_pipeline = Pipeline([
("preprocessor", preprocessor),
("estimators", estimators),
])
features_df_trans = preprocess_categorical(features_df)
X_train, X_test, y_train, y_test = train_test_split(
features_df_trans,
labels_df,
test_size=0.33,
shuffle=True,
stratify=labels_df,
random_state=RANDOM_SEED
)
X_train
# Train model
full_pipeline.fit(X_train, y_train)
# Predict on evaluation set
# This competition wants probabilities, not labels
preds = full_pipeline.predict_proba(X_test)
preds
print("test_probas[0].shape", preds[0].shape)
print("test_probas[1].shape", preds[1].shape)
y_pred = pd.DataFrame(
{
"h1n1_vaccine": preds[0][:, 1],
"seasonal_vaccine": preds[1][:, 1],
},
index = y_test.index
)
print("y_pred.shape:", y_pred.shape)
y_pred.head()
fig, ax = plt.subplots(1, 2, figsize=(7, 3.5))
fpr, tpr, thresholds = roc_curve(y_test['h1n1_vaccine'], y_pred['h1n1_vaccine'])
ax[0].plot(fpr, tpr)
ax[0].plot([0, 1], [0, 1], color='grey', linestyle='--')
ax[0].set_ylabel('TPR')
ax[0].set_xlabel('FPR')
ax[0].set_title(f"{'h1n1_vaccine'}: AUC = {roc_auc_score(y_test['h1n1_vaccine'], y_pred['h1n1_vaccine']):.4f}")
fpr, tpr, thresholds = roc_curve(y_test['seasonal_vaccine'], y_pred['seasonal_vaccine'])
ax[1].plot(fpr, tpr)
ax[1].plot([0, 1], [0, 1], color='grey', linestyle='--')
ax[1].set_xlabel('FPR')
ax[1].set_title(f"{'seasonal_vaccine'}: AUC = {roc_auc_score(y_test['seasonal_vaccine'], y_pred['seasonal_vaccine']):.4f}")
fig.tight_layout()
roc_auc_score(y_test, y_pred)
```
### Retrain on full Dataset
```
full_pipeline.fit(features_df_trans, labels_df);
```
## PREDICTIONS FOR THE TEST SET
```
test_features_df = pd.read_csv('data/test_set_features.csv', index_col='respondent_id')
test_features_df
test_features_df_trans = preprocess_categorical(test_features_df)
test_preds = full_pipeline.predict_proba(test_features_df_trans)
submission_df = pd.read_csv('data/submission_format.csv', index_col='respondent_id')
# Save predictions to submission data frame
submission_df["h1n1_vaccine"] = test_preds[0][:, 1]
submission_df["seasonal_vaccine"] = test_preds[1][:, 1]
submission_df.head()
submission_df.to_csv('data/my_submission.csv', index=True)
```
In this chapter you will:
* Clean and prepare text data
* Build feature vectors from text documents
* Train a machine learning model to classify positive and negative movie reviews
* Work with large text datasets using out-of-core learning
```
## Will be working with movie reviews from IMDB database
## Dataset is 50,000 reviews labeled as positive or negative
## Positive was rated with more than 6 stars on IMDb
## Read movie reviews into a Dataframe- may take 10 minutes
import pyprind
import pandas as pd
import os
pbar = pyprind.ProgBar(50000)
labels = {'pos':1, 'neg':0}
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = './aclImdb/%s/%s' % (s,l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index = True)
pbar.update()
df.columns = ['review', 'sentiment']
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
df.to_csv('./movie_data.csv', index = False)
df = pd.read_csv('./movie_data.csv')
df.head(3)
```
### Bag-of-words model
1. Create a vocabulary of unique tokens i.e., words from the entire set of documents
2. Construct a feature vector from each document that contains the counts of how often each word occurs in the particular document
This will result in sparse vectors
```
### Transforming words into feature vectors
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining and the weather is sweet'])
bag = count.fit_transform(docs)
print (count.vocabulary_)
print(bag.toarray())
## referred to as "raw term frequencies": tf(t,d) the number of times
## term t appeared in document d
### Assessing word relevancy via term frequency-inverse document frequeny
## TF-idf - downweights frequently occurring words
## Defined as the product of term frequency and inverse document frequency
## Inverse doc frequency is
## log [(n_d)/(1+df(d,t))]
## where n_d is the total number of docuements
## and df(d,t) is the number of docs d that contain term t
## Scikit-learn has a transformer for this
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer()
np.set_printoptions(precision=2)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
```
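The idf values scikit-learn produces can be checked by hand. Note that `TfidfTransformer`'s default (`smooth_idf=True`) differs slightly from the formula quoted in the comments above: it uses $\ln((1+n_d)/(1+df(d,t))) + 1$, and the resulting tf-idf rows are additionally l2-normalized. A small verification sketch on the same three documents:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

docs = np.array(['The sun is shining',
                 'The weather is sweet',
                 'The sun is shining and the weather is sweet'])
counts = CountVectorizer().fit_transform(docs)

tfidf = TfidfTransformer()  # smooth_idf=True, norm='l2' by default
tfidf.fit(counts)

# Recompute idf by hand: ln((1 + n_docs) / (1 + document frequency)) + 1
n_docs = counts.shape[0]
df = np.asarray((counts > 0).sum(axis=0)).ravel()
manual_idf = np.log((1 + n_docs) / (1 + df)) + 1

print(np.allclose(tfidf.idf_, manual_idf))  # True
```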
### Cleaning Text Data
```
## strip out all unwanted characters
df.loc[0, 'review'][-50:]
### Had to edit from book- mistake in book near .join
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-) !")
df['review']= df['review'].apply(preprocessor)
```
### Processing documents into tokens
```
## Need to figure out how to split the text into individual elements
## Can tokenize words by splitting at the whitespace characters
def tokenizer(text):
    return text.split()
tokenizer('runners like running and thus they run')
## Word stemming maps related words to a common root form
## nltk implements the Porter stemming algorithm
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer_porter(text):
    return [porter.stem(word) for word in text.split()]
tokenizer_porter('runners like running and thus they run')
## lemmatization aims to obtain the canonical forms of individual words
## stemming and lemmatization have little effect on performance
### Stop word removal
## Because they are so common, stop words only have a minimal effect
## on the classification
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:] if w not in stop]
```
### Training a logistic regression model for document classification
```
## Divide the dataframe into 25,000 training and 25,000 test reviews
## (.loc slicing is inclusive, so the training slice ends at index 24999)
X_train = df.loc[:24999, 'review'].values
y_train = df.loc[:24999, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search in scikit-learn < 0.18
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(strip_accents=None,
                        lowercase=False,
                        preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
               'vect__stop_words': [stop, None],
               'vect__tokenizer': [tokenizer, tokenizer_porter],
               'clf__penalty': ['l1', 'l2'],
               'clf__C': [1.0, 10.0, 100.0]},
              {'vect__ngram_range': [(1, 1)],
               'vect__stop_words': [stop, None],
               'vect__tokenizer': [tokenizer, tokenizer_porter],
               'vect__use_idf': [False],
               'vect__norm': [None],
               'clf__penalty': ['l1', 'l2'],
               'clf__C': [1.0, 10.0, 100.0]}]
lr_tfidf = Pipeline([('vect', tfidf), ('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid, scoring='accuracy', cv=5, verbose=1, n_jobs=-1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best Parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
```
Naive Bayes classifiers are also popular for this kind of work. You can read about them in:
S. Raschka. Naive Bayes and Text Classification I - Introduction and Theory. Computing Research Repository (CoRR), abs/1410.5329, 2014. http://arxiv.org/pdf/1410.5329v3.pdf
### Out of core learning
Can stream little bits of data at a time to train the model and update the estimates
```
import numpy as np
import re
from nltk.corpus import stopwords
stop = stopwords.words('english')
def tokenizer(text):
    # remove HTML markup (note: the regex here was broken in the original cell)
    text = re.sub('<[^>]*>', '', text)
    emoticons = re.findall(r'(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
    text = re.sub(r'[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
    tokenized = [w for w in text.split() if w not in stop]
    return tokenized
def stream_docs(path):
    with open(path, 'r') as csv:
        next(csv)  # skip header
        for line in csv:
            text, label = line[:-3], int(line[-2])
            yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
    docs, y = [], []
    try:
        for _ in range(size):
            text, label = next(doc_stream)
            docs.append(text)
            y.append(label)
    except StopIteration:
        return None, None
    return docs, y
### Use a hashing trick to be able to calculate counts out of memory
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
                         n_features=2**21,
                         preprocessor=None,
                         tokenizer=tokenizer)
## note: newer scikit-learn versions use max_iter=1 (and loss='log_loss')
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
## Initialize a progress bar over 45 minibatches of 1,000 docs each
## use the last 5,000 for performance evaluation
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0,1])
for _ in range(45):
    X_train, y_train = get_minibatch(doc_stream, size=1000)
    if not X_train:
        break
    X_train = vect.transform(X_train)
    clf.partial_fit(X_train, y_train, classes=classes)
    pbar.update()
X_test, y_test = get_minibatch(doc_stream, size = 5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
## Add last 5000 docs to update the model
clf = clf.partial_fit(X_test, y_test)
```
A popular extension of this model that accounts for structure and grammar is LDA, or Latent Dirichlet Allocation.
Word2vec is a more modern alternative to the bag-of-words model;
it uses neural networks to automatically learn relationships between words.
# Chapter 9 Embedding a Machine Learning Model into a Web Application
The session was kept open, as the authors suggest, because we reuse the model that was generated in the previous chapter.
One way to achieve "model persistence" (being able to reuse a trained model) is serializing and deserializing our Python objects. This allows us to save and reload the current state of our model.
```
import pickle
import os
dest = os.path.join('movieclassifier', 'pkl_objects')
if not os.path.exists(dest):
os.makedirs(dest)
pickle.dump(stop,
open(os.path.join(dest, 'stopwords.pkl'), 'wb'), protocol = 2)
pickle.dump(clf, open(os.path.join(dest, 'classifier.pkl'), 'wb'), protocol = 2)
```
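The save/reload round trip can be checked without leaving the notebook. The sketch below pickles and reloads a small stand-in object in a temporary directory (the `movieclassifier` paths above are left untouched; `stop_words` here is an illustrative stand-in for the nltk stop list):

```python
import os
import pickle
import tempfile

stop_words = ['a', 'the', 'is']  # stand-in for the nltk stop-word list

with tempfile.TemporaryDirectory() as dest:
    path = os.path.join(dest, 'stopwords.pkl')
    # serialize with the same protocol used above
    with open(path, 'wb') as f:
        pickle.dump(stop_words, f, protocol=2)
    # deserialize and verify the object survived intact
    with open(path, 'rb') as f:
        restored = pickle.load(f)

print(restored == stop_words)  # True
```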
The next step, testing the serializer and vectorizer, was done in an IPython session.
### Setting up a SQLite database for data storage
```
### Create a new SQLite database inside movieclassifier to
### collect optional feedback about predictions from users
import sqlite3
import os
conn = sqlite3.connect('reviews.sqlite')
c = conn.cursor()
c.execute('CREATE TABLE review_db (review TEXT, sentiment INTEGER, date TEXT)')
example1 = 'I love this movie'
c.execute("INSERT INTO review_db (review, sentiment, date) VALUES (?, ?, DATETIME('now'))", (example1, 1))
example2 = 'I disliked this movie'
c.execute("INSERT INTO review_db (review, sentiment, date) VALUES (?, ?, DATETIME('now'))", (example2, 0))
conn.commit()
conn.close()
conn = sqlite3.connect('reviews.sqlite')
c = conn.cursor()
c.execute("SELECT * FROM review_db where date BETWEEN '2018-01-01 00:00:00' AND DATETIME('now')")
results = c.fetchall()
conn.close()
print(results)
import flask
import wtforms
```
# QUANTUM PHASE ESTIMATION
This tutorial provides a detailed implementation of the Quantum Phase Estimation (QPE) algorithm using the Amazon Braket SDK.
The QPE algorithm is designed to estimate the eigenvalues of a unitary operator $U$ [1, 2];
it is a very important subroutine to many quantum algorithms, most famously Shor's algorithm for factoring and the HHL algorithm (named after the physicists Harrow, Hassidim and Lloyd) for solving linear systems of equations on a quantum computer [1, 2].
Moreover, eigenvalue problems can be found across many disciplines and application areas, including (for example) principal component analysis (PCA) as used in machine learning or the solution of differential equations as relevant across mathematics, physics, engineering and chemistry.
We first review the basics of the QPE algorithm.
We then implement the QPE algorithm in code using the Amazon Braket SDK, and we illustrate the application thereof with simple examples.
This notebook also showcases the Amazon Braket `circuit.subroutine` functionality, which allows us to use custom-built gates as if they were any other built-in gates.
This tutorial is set up to run either on the local simulator or the managed simulators; switching between these devices requires changing only one line of code, as demonstrated in cell [4] below.
## TECHNICAL BACKGROUND OF QPE
__Introduction__: A unitary matrix is a complex, square matrix whose adjoint (or conjugate transpose) is equal to its inverse. Unitary matrices have many nice properties, including the fact that their eigenvalues are always roots of unity (that is, phases). Given a unitary matrix $U$ (satisfying $U^{\dagger}U=\mathbb{1}=UU^{\dagger}$) and an eigenstate $|\psi \rangle$ with $U|\psi \rangle = e^{2\pi i\varphi}|\psi \rangle$, the Quantum Phase Estimation (QPE) algorithm provides an estimate $\tilde{\varphi} \approx \varphi$ for the phase $\varphi$ (with $\varphi \in [0,1]$ since the eigenvalues $\lambda = \exp(2\pi i\varphi)$ of a unitary have modulus one).
The QPE works with high probability within an additive error $\varepsilon$ using $O(\log(1/\varepsilon))$ qubits (without counting the qubits used to encode the eigenstate) and $O(1/\varepsilon)$ controlled-$U$ operations [1].
__Quantum Phase Estimation Algorithm__:
The QPE algorithm takes a unitary $U$ as input. For the sake of simplicity (we will generalize the discussion below), suppose that the algorithm also takes as input an eigenstate $|\psi \rangle$ fulfilling
$$U|\psi \rangle = \lambda |\psi \rangle,$$
with $\lambda = \exp(2\pi i\varphi)$.
QPE uses two registers of qubits: we refer to the first register as *precision* qubits (as the number of qubits $n$ in the first register sets the achievable precision of our results) and the second register as *query* qubits (as the second register hosts the eigenstate $|\psi \rangle$).
Suppose we have prepared this second register in $|\psi \rangle$. We then prepare a uniform superposition of all basis vectors in the first register using a series of Hadamard gates.
Next, we apply a series of controlled-unitaries $C-U^{2^{k}}$ for different powers of $k=0,1,\dots, n-1$ (as illustrated in the circuit diagram that follows).
For example, for $k=1$ we get
\begin{equation}
\begin{split}
(|0 \rangle + |1 \rangle) |\psi \rangle & \rightarrow |0 \rangle |\psi \rangle + |1 \rangle U|\psi \rangle \\
& = (|0 \rangle + e^{2\pi i \varphi}|1 \rangle) |\psi \rangle.
\end{split}
\end{equation}
Note that the second register remains unaffected as it stays in the eigenstate $|\psi \rangle$.
However, we managed to transfer information about the phase of the eigenvalue of $U$ (that is, $\varphi$) into the first *precision* register by encoding it as a relative phase in the state of the qubits in the first register.
Similarly, for $k=2$ we obtain
\begin{equation}
\begin{split}
(|0 \rangle + |1 \rangle) |\psi \rangle & \rightarrow |0 \rangle |\psi \rangle + |1 \rangle U^{2}|\psi \rangle \\
& = (|0 \rangle + e^{2\pi i 2\varphi}|1 \rangle) |\psi \rangle,
\end{split}
\end{equation}
where this time we wrote $2\varphi$ into the precision register. The process is similar for all $k>2$.
Introducing the following notation for binary fractions
$$[0. \varphi_{l}\varphi_{l+1}\dots \varphi_{m}] = \frac{\varphi_{l}}{2^{1}} + \frac{\varphi_{l+1}}{2^{2}} + \dots + \frac{\varphi_{m}}{2^{m-l+1}},$$
one can show that the application of a controlled unitary $C-U^{2^{k}}$ leads to the following transformation
\begin{equation}
\begin{split}
(|0 \rangle + |1 \rangle) |\psi \rangle & \rightarrow |0 \rangle |\psi \rangle + |1 \rangle U^{2^{k}}|\psi \rangle \\
& = (|0 \rangle + e^{2\pi i 2^{k}\varphi}|1 \rangle) |\psi \rangle \\
& = (|0 \rangle + e^{2\pi i [0.\varphi_{k+1}\dots \varphi_{n}]}|1 \rangle) |\psi \rangle,
\end{split}
\end{equation}
where the first $k$ bits of precision in the binary expansion (that is, those bits to the left of the decimal) can be dropped, because $e^{2\pi i \theta} = 1$ for any whole number $\theta$.
The QPE algorithm implements a series of these transformations for $k=0, 1, \dots, n-1$, using $n$ qubits in the precision register.
In its entirety, this sequence of controlled unitaries leads to the transformation
$$ |0, \dots, 0 \rangle \otimes |\psi \rangle \longrightarrow
(|0 \rangle + e^{2\pi i [0.\varphi_{n}]}|1 \rangle)
\otimes (|0 \rangle + e^{2\pi i [0.\varphi_{n-1}\varphi_{n}]}|1 \rangle)
\otimes \dots
\otimes (|0 \rangle + e^{2\pi i [0.\varphi_{1}\dots\varphi_{n}]}|1 \rangle)
\otimes |\psi \rangle.
$$
By inspection, one can see that the state of the register qubits above corresponds to a quantum Fourier transform of the state $|\varphi_1,\dots,\varphi_n\rangle$. Thus, the final step of the QPE algorithm is to run the *inverse* Quantum Fourier Transform (QFT) algorithm on the precision register to extract the phase information from this state. The resulting state is
$$|\varphi_{1}, \varphi_{2}, \dots, \varphi_{n} \rangle \otimes |\psi\rangle.$$
Measuring the precision qubits in the computational basis then gives the classical bitstring $\varphi_{1}, \varphi_{2}, \dots, \varphi_{n}$, from which we can readily infer the phase estimate $\tilde{\varphi} = 0.\varphi_{1} \dots \varphi_{n}$ with the corresponding eigenvalue $\tilde{\lambda} = \exp(2\pi i \tilde{\varphi})$.
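Reading off the estimate from a measurement amounts to interpreting the measured bitstring as a binary fraction. A minimal sketch (pure Python; these helper names are illustrative, not part of the Braket SDK or `utils_qpe.py`):

```python
import cmath

def bitstring_to_phase(bits):
    # interpret '101' as the binary fraction 0.101 = 1/2 + 0/4 + 1/8
    return sum(int(b) / 2 ** (k + 1) for k, b in enumerate(bits))

def phase_to_eigenvalue(phi):
    # lambda = exp(2 pi i phi)
    return cmath.exp(2j * cmath.pi * phi)

print(bitstring_to_phase('10'))   # 0.5
print(phase_to_eigenvalue(0.5))   # approximately -1
```

This is exactly the conversion performed by the `binaryToDecimal` helper shown in the Appendix.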
__Simple example for illustration__: For concreteness, consider a simple example with the unitary given by the Pauli $X$ gate, $U=X$, for which $|\Psi \rangle = |+\rangle = (|0 \rangle + |1 \rangle)/\sqrt{2}$ is an eigenstate with eigenvalue $\lambda = 1$, i.e., $\varphi=0$.
This state can be prepared with a Hadamard gate as $|\Psi \rangle = H|0 \rangle$.
We take a precision register consisting of just two qubits ($n=2$).
Thus, after the first layer of Hadamard gates, the quantum state is
$$|0,0,0 \rangle \rightarrow |+,+,+\rangle.$$
Next, the applications of the controlled-$U$ gates (equal to $C-X$ operations, or CNOT gates in this example) leave this state untouched, because $|+\rangle$ is an eigenstate of $X$ with eigenvalue $+1$.
Finally, applying the inverse QFT leads to
$$\mathrm{QFT}^{\dagger}|+++\rangle=\mathrm{QFT}^\dagger\frac{|00\rangle + |01\rangle + |10\rangle + |11\rangle}{2}\otimes |+\rangle = |00\rangle \otimes |+\rangle,$$
from which we deduce $\varphi = [0.00]=0$ and therefore $\lambda=1$, as expected.
Here, in the last step we have used $|00\rangle + |01\rangle + |10\rangle + |11\rangle = (|0\rangle + e^{2\pi i[0.0]}|1\rangle)(|0\rangle + e^{2\pi i[0.00]}|1\rangle)$, which makes the effect of the inverse QFT more apparent.
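The effect of the inverse QFT in this example can be reproduced numerically by building the $n=2$ QFT matrix directly (a standalone numpy sketch, independent of the Braket implementation used later):

```python
import numpy as np

n = 2                      # two precision qubits
N = 2 ** n
omega = np.exp(2j * np.pi / N)

# QFT matrix: F[j, k] = omega^(j*k) / sqrt(N)
F = np.array([[omega ** (j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)

plus_plus = np.ones(N) / np.sqrt(N)      # |++> on the precision register
result = F.conj().T @ plus_plus          # apply the inverse QFT
print(np.round(np.abs(result) ** 2, 6))  # all measurement weight on |00>
```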
__Initial state of query register__: So far, we have assumed that the query register is prepared in an eigenstate $|\Psi\rangle$ of $U$. What happens if this is not the case? Let's reconsider the simple example given previously.
Suppose now that the query register is instead prepared in the state $|\Psi\rangle = |1\rangle$.
We can always express this state in the eigenbasis of $U$, that is, $|1\rangle = \frac{1}{\sqrt{2}}(|+\rangle - |-\rangle)$.
By linearity, application of the QPE algorithm then gives (up to normalization)
\begin{equation}
\begin{split}
\mathrm{QPE}(|0,0,\dots\rangle \otimes |1\rangle) & = \mathrm{QPE}(|0,0,\dots\rangle \otimes |+\rangle)
- \mathrm{QPE}(|0,0,\dots\rangle \otimes |-\rangle) \\
& = |\varphi_{+}\rangle \otimes |+\rangle - |\varphi_{-}\rangle \otimes |-\rangle. \\
\end{split}
\end{equation}
When we measure the precision qubits in this state, 50% of the time we will observe the eigenphase $\varphi_{+}$ and 50% of the time we will measure $\varphi_{-}$. We illustrate this example numerically as follows.
This example motivates the general case: we can pass a state that is not an eigenstate of $U$ to the QPE algorithm, but we may need to repeat our measurements several times in order to obtain an estimate of the desired phase.
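The 50/50 measurement statistics can be checked classically by expanding the query state in the eigenbasis of $U$ (a numpy sketch with illustrative variable names):

```python
import numpy as np

X = np.array([[0., 1.],
              [1., 0.]])   # Pauli X
psi = np.array([0., 1.])   # query state |1>

# eigendecomposition of U: the columns of vecs are the eigenstates |+-|->
vals, vecs = np.linalg.eigh(X)

# probability of observing each eigenphase = |<eigenstate|psi>|^2
probs = np.abs(vecs.conj().T @ psi) ** 2
print(np.round(probs, 3))  # 0.5 for each eigenstate
```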
## CIRCUIT IMPLEMENTATION OF QPE
The QPE circuit can be implemented using Hadamard gates, controlled-$U$ unitaries, and the inverse QFT (denoted as $\mathrm{QFT}^{-1}$).
The details of the calculation can be found in a number of resources (such as, [1]); we omit them here.
Following the previous discussion, the circuit that implements the QPE algorithm reads as below, where $m$ is the size of the lower query register and $n$ is the size of the upper precision register.

## IMPORTS and SETUP
```
# general imports
import numpy as np
import math
import matplotlib.pyplot as plt
# magic word for producing visualizations in notebook
%matplotlib inline
# AWS imports: Import Amazon Braket SDK modules
from braket.circuits import Circuit, circuit
from braket.devices import LocalSimulator
from braket.aws import AwsDevice
# local imports
from utils_qpe import qpe, run_qpe
%load_ext autoreload
%autoreload 2
```
__NOTE__: Enter your desired device and S3 location (bucket and key) in the following area. If you are working with the local simulator ```LocalSimulator()``` you do not need to specify any S3 location. However, if you are using the managed (cloud-based) device or any QPU devices, you must specify the S3 location where your results will be stored. In this case, you must replace the API call ```device.run(circuit, ...)``` in the example that follows with ```device.run(circuit, s3_folder, ...)```.
```
# set up device: local simulator or the managed cloud-based simulator
# device = LocalSimulator()
device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")
# Enter the S3 bucket you created during onboarding into the code that follows
my_bucket = "amazon-braket-Your-Bucket-Name" # the name of the bucket
my_prefix = "Your-Folder-Name" # the name of the folder in the bucket
s3_folder = (my_bucket, my_prefix)
```
### Pauli Matrices:
In some of our examples, we choose the unitary $U$ to be given by the **Pauli Matrices**, which we thus define as follows:
```
# Define Pauli matrices
Id = np.eye(2) # Identity matrix
X = np.array([[0., 1.],
[1., 0.]]) # Pauli X
Y = np.array([[0., -1.j],
[1.j, 0.]]) # Pauli Y
Z = np.array([[1., 0.],
[0., -1.]]) # Pauli Z
```
## IMPLEMENTATION OF THE QPE CIRCUIT
In ```utils_qpe.py``` we provide simple helper functions to implement the quantum circuit for the QPE algorithm.
Specifically, we demonstrate that such modular building blocks can be registered as subroutines, using ```@circuit.subroutine(register=True)```.
Moreover, we provide a helper function (called ```get_qpe_phases```) to perform postprocessing based on the measurement results to extract the phase. The details of ```utils_qpe.py``` are shown in the Appendix.
To implement the unitary $C-U^{2^k}$, one can use the fact that $C-U^{2} = (C-U)(C-U)$, so that $C-U^{2^{k}}$ can be constructed by repeatedly applying the core building block $C-U$.
However, the circuit generated using this approach will have a significantly larger depth. In our implementation, we instead define the matrix $U^{2^k}$ and create the controlled $C-(U^{2^k})$ gate from that.
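The equivalence of the two constructions is easy to confirm numerically: for $U=X$ and $k=1$, a single controlled $U^{2^k}$ gate equals two applications of $C-U$ (a numpy sketch, independent of the Braket circuit objects):

```python
import numpy as np

X = np.array([[0., 1.],
              [1., 0.]])  # Pauli X

# projectors onto the control qubit's computational basis states
p0 = np.diag([1., 0.])
p1 = np.diag([0., 1.])

def controlled(U):
    # block matrix |0><0| (x) I + |1><1| (x) U
    return np.kron(p0, np.eye(len(U))) + np.kron(p1, U)

k = 1
# alternative 1: one controlled gate built from U^(2^k)
cU_pow = controlled(np.linalg.matrix_power(X, 2 ** k))
# alternative 2: the core building block C-U applied 2^k times
cU_rep = np.linalg.matrix_power(controlled(X), 2 ** k)
print(np.allclose(cU_pow, cU_rep))  # True
```

The matrices agree, but alternative 2 costs $2^k$ gates where alternative 1 costs one.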
## VISUALIZATION OF THE QPE CIRCUIT
To check our implementation of the QPE circuit, we visualize this circuit for a small number of qubits.
```
# set total number of qubits
precision_qubits = [0, 1]
query_qubits = [2]
# prepare query register
my_qpe_circ = Circuit().h(query_qubits)
# set unitary
unitary = X
# show small QPE example circuit
my_qpe_circ = my_qpe_circ.qpe(precision_qubits, query_qubits, unitary)
print('QPE CIRCUIT:')
print(my_qpe_circ)
```
As shown in the following code, the two registers can be distributed anywhere across the circuit, with arbitrary indices for the precision and the query registers.
```
# set qubits
precision_qubits = [1, 3]
query_qubits = [5]
# prepare query register
my_qpe_circ = Circuit().i(range(7))
my_qpe_circ.h(query_qubits)
# set unitary
unitary = X
# show small QPE example circuit
my_qpe_circ = my_qpe_circ.qpe(precision_qubits, query_qubits, unitary)
print('QPE CIRCUIT:')
print(my_qpe_circ)
```
As follows, we set up the same circuit, this time implementing the unitary $C-U^{2^k}$, by repeatedly applying the core building block $C-U$.
This operation can be done by setting the parameter ```control_unitary=False``` (default is ```True```).
```
# set qubits
precision_qubits = [1, 3]
query_qubits = [5]
# prepare query register
my_qpe_circ = Circuit().i(range(7))
my_qpe_circ.h(query_qubits)
# set unitary
unitary = X
# show small QPE example circuit
my_qpe_circ = my_qpe_circ.qpe(precision_qubits, query_qubits, unitary, control_unitary=False)
print('QPE CIRCUIT:')
print(my_qpe_circ)
```
In the circuit diagram, we can visually infer the exponents for $k=0,1$, at the expense of a larger circuit depth.
## NUMERICAL TEST EXPERIMENTS
In the following section, we verify that our QPE implementation works as expected with a few test examples:
1. We run QPE with $U=X$ and prepare the eigenstate $|\Psi\rangle = |+\rangle = H|0\rangle$ with phase $\varphi=0$ and eigenvalue $\lambda=1$.
2. We run QPE with $U=X$ and prepare the eigenstate $|\Psi\rangle = |-\rangle = HX|0\rangle$ with phase $\varphi=0.5$ and eigenvalue $\lambda=-1$.
3. We run QPE with $U=X$ and prepare $|\Psi\rangle = |1\rangle = X|0\rangle$ which is *not* an eigenstate of $U$.
Because $|1\rangle = (|+\rangle - |-\rangle)/\sqrt{2}$, we expect to measure both $\varphi=0$ and $\varphi=0.5$ associated with the two eigenstates $|\pm\rangle$.
4. We run QPE with unitary $U=X \otimes Z$, and prepare the query register in the eigenstate $|\Psi\rangle = |+\rangle \otimes |1\rangle = H|0\rangle \otimes X|0\rangle$.
Here, we expect to measure the phase $\varphi=0.5$ (giving the corresponding eigenvalue $\lambda=-1$).
5. We run QPE with a _random_ two qubit unitary, diagonal in the computational basis, and prepare the query register in the eigenstate $|11\rangle$.
In this case, we should be able to read off the eigenvalue and phase from $U$ and verify QPE gives the right answer (with high probability) up to a small error (that depends on the number of qubits in the precision register).
## HELPER FUNCTIONS FOR NUMERICAL TESTS
Because we will run the same code repeatedly, let's first create a helper function we can use to keep the notebook clean.
```
def postprocess_qpe_results(out):
    """
    Function to postprocess the dictionary returned by run_qpe
    Args:
        out: dictionary containing results/information associated with the QPE run as produced by run_qpe
    """
    # unpack results
    circ = out['circuit']
    measurement_counts = out['measurement_counts']
    bitstring_keys = out['bitstring_keys']
    probs_values = out['probs_values']
    precision_results_dic = out['precision_results_dic']
    phases_decimal = out['phases_decimal']
    eigenvalues = out['eigenvalues']
    # print the circuit
    print('Printing circuit:')
    print(circ)
    # print measurement results
    print('Measurement counts:', measurement_counts)
    # plot probabilities
    plt.bar(bitstring_keys, probs_values)
    plt.xlabel('bitstrings')
    plt.ylabel('probability')
    plt.xticks(rotation=90)
    # print results
    print('Results in precision register:', precision_results_dic)
    print('QPE phase estimates:', phases_decimal)
    print('QPE eigenvalue estimates:', np.round(eigenvalues, 5))
```
### NUMERICAL TEST EXAMPLE 1
First, apply the QPE algorithm to the simple single-qubit unitary $U=X$, with eigenstate $|\Psi\rangle = |+\rangle = H|0\rangle$. Here, we expect to measure the phase $\varphi=0$ (giving the corresponding eigenvalue $\lambda=1$).
We show that this result stays the same as we increase the number of qubits $n$ for the top register.
```
# Set total number of precision qubits: 2
number_precision_qubits = 2
# Define the set of precision qubits
precision_qubits = range(number_precision_qubits)
# Define the query qubits. We'll have them start after the precision qubits
query_qubits = [number_precision_qubits]
# State preparation for eigenstate of U=X
query = Circuit().h(query_qubits)
# Run the test with U=X
out = run_qpe(X, precision_qubits, query_qubits, query, device, s3_folder)
# Postprocess results
postprocess_qpe_results(out)
```
Next, check that we get the same result for a larger precision (top) register.
```
# Set total number of precision qubits: 3
number_precision_qubits = 3
# Define the set of precision qubits
precision_qubits = range(number_precision_qubits)
# Define the query qubits. We'll have them start after the precision qubits
query_qubits = [number_precision_qubits]
# State preparation for eigenstate of U=X
query = Circuit().h(query_qubits)
# Run the test with U=X
out = run_qpe(X, precision_qubits, query_qubits, query, device, s3_folder)
# Postprocess results
postprocess_qpe_results(out)
```
### NUMERICAL TEST EXAMPLE 2
Next, apply the QPE algorithm to the simple single-qubit unitary $U=X$, with eigenstate $|\Psi\rangle = |-\rangle = HX|0\rangle$.
Here, we expect to measure the phase $\varphi=0.5$ (giving the corresponding eigenvalue $\lambda=-1$).
```
# Set total number of precision qubits: 2
number_precision_qubits = 2
# Define the set of precision qubits
precision_qubits = range(number_precision_qubits)
# Define the query qubits. We'll have them start after the precision qubits
query_qubits = [number_precision_qubits]
# State preparation for eigenstate of U=X
query = Circuit().x(query_qubits).h(query_qubits)
# Run the test with U=X
out = run_qpe(X, precision_qubits, query_qubits, query, device, s3_folder)
# Postprocess results
postprocess_qpe_results(out)
```
### NUMERICAL TEST EXAMPLE 3
Next, apply the QPE algorithm again to the simple single-qubit unitary $U=X$, but we initialize the query register in the state $|\Psi\rangle = |1\rangle$ which is *not* an eigenstate of $U$.
Here, following the previous discussion, we expect to measure the phases $\varphi=0, 0.5$ (giving the corresponding eigenvalue $\lambda=\pm 1$). Accordingly, here we set ```items_to_keep=2```.
```
# Set total number of precision qubits: 2
number_precision_qubits = 2
# Define the set of precision qubits
precision_qubits = range(number_precision_qubits)
# Define the query qubits. We'll have them start after the precision qubits
query_qubits = [number_precision_qubits]
# State preparation for |1>, which is not an eigenstate of U=X
query = Circuit().x(query_qubits)
# Run the test with U=X
out = run_qpe(X, precision_qubits, query_qubits, query, device, s3_folder, items_to_keep=2)
# Postprocess results
postprocess_qpe_results(out)
```
### NUMERICAL TEST EXAMPLE 4
Next, apply the QPE algorithm to the two-qubit unitary $U=X \otimes Z$, and prepare the query register in the eigenstate $|\Psi\rangle = |+\rangle \otimes |1\rangle = H|0\rangle \otimes X|0\rangle$.
Here, we expect to measure the phase $\varphi=0.5$ (giving the corresponding eigenvalue $\lambda=-1$).
```
# set unitary matrix U
u1 = np.kron(X, Id)
u2 = np.kron(Id, Z)
unitary = np.dot(u1, u2)
print('Two-qubit unitary (XZ):\n', unitary)
# get example eigensystem
eig_values, eig_vectors = np.linalg.eig(unitary)
print('Eigenvalues:', eig_values)
# print('Eigenvectors:', eig_vectors)
# Set total number of precision qubits: 2
number_precision_qubits = 2
# Define the set of precision qubits
precision_qubits = range(number_precision_qubits)
# Define the query qubits. We'll have them start after the precision qubits
query_qubits = [number_precision_qubits, number_precision_qubits+1]
# State preparation for eigenstate |+,1> of U=X \otimes Z
query = Circuit().h(query_qubits[0]).x(query_qubits[1])
# Run the test with the two-qubit unitary U = XZ
out = run_qpe(unitary, precision_qubits, query_qubits, query, device, s3_folder)
# Postprocess results
postprocess_qpe_results(out)
```
### NUMERICAL TEST EXAMPLE 5
In this example, we choose the unitary to be a _random_ two-qubit unitary, diagonal in the computational basis. We initialize the query register to be in the eigenstate $|11\rangle$ of $U$, which we can prepare using that $|11\rangle = X\otimes X|00\rangle$.
In this case we should be able to read off the eigenvalue and phase from $U$ and verify that QPE gives the right answer.
```
# Generate a random 2 qubit unitary matrix:
from scipy.stats import unitary_group
# Fix random seed for reproducibility
np.random.seed(seed=42)
# Get random two-qubit unitary
random_unitary = unitary_group.rvs(2**2)
# Let's diagonalize this
evals = np.linalg.eig(random_unitary)[0]
# Since we want to be able to read off the eigenvalues of the unitary in question
# let's choose our unitary to be diagonal in this basis
unitary = np.diag(evals)
# Check that this is indeed unitary, and print it out:
print('Two-qubit random unitary:\n', np.round(unitary, 3))
print('Check for unitarity: ', np.allclose(np.eye(len(unitary)), unitary.dot(unitary.T.conj())))
# Print eigenvalues
print('Eigenvalues:', np.round(evals, 3))
```
When we execute the QPE circuit, we expect the following (approximate) result for the eigenvalue estimate:
```
print('Target eigenvalue:', np.round(evals[-1], 3))
# Set total number of precision qubits
number_precision_qubits = 3
# Define the set of precision qubits
precision_qubits = range(number_precision_qubits)
# Define the query qubits. We'll have them start after the precision qubits
query_qubits = [number_precision_qubits, number_precision_qubits+1]
# State preparation for eigenstate |1,1> of diagonal U
query = Circuit().x(query_qubits[0]).x(query_qubits[1])
# Run the test with the random diagonal unitary
out = run_qpe(unitary, precision_qubits, query_qubits, query, device, s3_folder)
# Postprocess results
postprocess_qpe_results(out)
# compare output to exact target values
print('Target eigenvalue:', np.round(evals[-1], 3))
```
We can easily improve the precision of our parameter estimate by increasing the number of qubits in the precision register, as shown in the following example.
```
# Set total number of precision qubits
number_precision_qubits = 10
# Define the set of precision qubits
precision_qubits = range(number_precision_qubits)
# Define the query qubits. We'll have them start after the precision qubits
query_qubits = [number_precision_qubits, number_precision_qubits+1]
# State preparation for eigenstate |1,1> of diagonal U
query = Circuit().x(query_qubits[0]).x(query_qubits[1])
# Run the test with the random diagonal unitary
out = run_qpe(unitary, precision_qubits, query_qubits, query, device, s3_folder)
# Postprocess results
eigenvalues = out['eigenvalues']
print('QPE eigenvalue estimates:', np.round(eigenvalues, 5))
# compare output to exact target values
print('Target eigenvalue:', np.round(evals[-1], 5))
```
---
## APPENDIX
```
# Check SDK version
# alternative: braket.__version__
!pip show amazon-braket-sdk | grep Version
```
## Details of the ```utils_qpe.py``` module
### Imports, including inverse QFT
```python
# general imports
import numpy as np
import math
from collections import Counter
from datetime import datetime
import pickle
# AWS imports: Import Braket SDK modules
from braket.circuits import Circuit, circuit
# local imports
from utils_qft import inverse_qft
```
### QPE Subroutine
```python
@circuit.subroutine(register=True)
def controlled_unitary(control, target_qubits, unitary):
    """
    Construct a circuit object corresponding to the controlled unitary
    Args:
        control: The qubit on which to control the gate
        target_qubits: List of qubits on which the unitary U acts
        unitary: matrix representation of the unitary we wish to implement in a controlled way
    """
    # Define projectors onto the computational basis
    p0 = np.array([[1., 0.],
                   [0., 0.]])
    p1 = np.array([[0., 0.],
                   [0., 1.]])
    # Instantiate circuit object
    circ = Circuit()
    # Construct numpy matrix: |0><0| (x) I + |1><1| (x) U
    id_matrix = np.eye(len(unitary))
    controlled_matrix = np.kron(p0, id_matrix) + np.kron(p1, unitary)
    # Set all target qubits
    targets = [control] + target_qubits
    # Add controlled unitary
    circ.unitary(matrix=controlled_matrix, targets=targets)
    return circ
@circuit.subroutine(register=True)
def qpe(precision_qubits, query_qubits, unitary, control_unitary=True):
"""
Function to implement the QPE algorithm using two registers for precision (read-out) and query.
Register qubits need not be contiguous.
Args:
precision_qubits: list of qubits defining the precision register
query_qubits: list of qubits defining the query register
unitary: Matrix representation of the unitary whose eigenvalues we wish to estimate
control_unitary: Optional boolean flag for controlled unitaries,
with C-(U^{2^k}) by default (default is True),
or C-U controlled-unitary (2**power) times
"""
qpe_circ = Circuit()
# Get number of qubits
num_precision_qubits = len(precision_qubits)
num_query_qubits = len(query_qubits)
# Apply Hadamard across precision register
qpe_circ.h(precision_qubits)
# Apply controlled unitaries. Start with the last precision_qubit, and end with the first
for ii, qubit in enumerate(reversed(precision_qubits)):
# Set power exponent for unitary
power = ii
        # Alternative 1: Implement C-(U^{2^k})
if control_unitary:
# Define the matrix U^{2^k}
Uexp = np.linalg.matrix_power(unitary,2**power)
# Apply the controlled unitary C-(U^{2^k})
qpe_circ.controlled_unitary(qubit, query_qubits, Uexp)
        # Alternative 2: One can instead apply controlled-unitary (2**power) times to get C-U^{2^power}
else:
for _ in range(2**power):
qpe_circ.controlled_unitary(qubit, query_qubits, unitary)
# Apply inverse qft to the precision_qubits
qpe_circ.inverse_qft(precision_qubits)
return qpe_circ
```
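The projector construction in `controlled_unitary` can be checked numerically: for a single-qubit U = X, P0 ⊗ I + P1 ⊗ U is exactly the CNOT matrix. A dependency-free sketch (the `kron` helper here is an illustrative stand-in for `np.kron`):

```python
# Numeric sanity check of the construction in controlled_unitary: for a
# single-qubit U = X, P0 (x) I + P1 (x) U is exactly the CNOT matrix.
def kron(a, b):
    rows_b, cols_b = len(b), len(b[0])
    return [[a[i // rows_b][j // cols_b] * b[i % rows_b][j % cols_b]
             for j in range(len(a[0]) * cols_b)]
            for i in range(len(a) * rows_b)]

p0 = [[1, 0], [0, 0]]          # |0><0|
p1 = [[0, 0], [0, 1]]          # |1><1|
eye = [[1, 0], [0, 1]]
x_gate = [[0, 1], [1, 0]]

cu = [[kron(p0, eye)[i][j] + kron(p1, x_gate)[i][j] for j in range(4)]
      for i in range(4)]
print(cu)  # [[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], the CNOT matrix
```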
### QPE postprocessing helper functions
```python
# helper function to remove query bits from bitstrings
def substring(key, precision_qubits):
"""
Helper function to get substring from keys for dedicated string positions as given by precision_qubits.
    This function is necessary to allow for arbitrary qubit mappings in the precision and query registers
(that is, so that the register qubits need not be contiguous.)
Args:
key: string from which we want to extract the substring supported only on the precision qubits
precision_qubits: List of qubits corresponding to precision_qubits.
Currently assumed to be a list of integers corresponding to the indices of the qubits
"""
short_key = ''
for idx in precision_qubits:
short_key = short_key + key[idx]
return short_key
# helper function to convert binary fractional to decimal
# reference: https://www.geeksforgeeks.org/convert-binary-fraction-decimal/
def binaryToDecimal(binary):
"""
Helper function to convert binary string (example: '01001') to decimal
Args:
binary: string which to convert to decimal fraction
"""
length = len(binary)
fracDecimal = 0
    # Convert fractional part of binary to decimal equivalent
    twos = 2.0
    for ii in range(length):
        fracDecimal += (ord(binary[ii]) - ord('0')) / twos
        twos *= 2.0
# return fractional part
return fracDecimal
# helper function for postprocessing based on measurement shots
def get_qpe_phases(measurement_counts, precision_qubits, items_to_keep=1):
"""
Get QPE phase estimate from measurement_counts for given number of precision qubits
Args:
measurement_counts: measurement results from a device run
precision_qubits: List of qubits corresponding to precision_qubits.
Currently assumed to be a list of integers corresponding to the indices of the qubits
items_to_keep: number of items to return (topmost measurement counts for precision register)
"""
# Aggregate the results (that is, ignore the query register qubits):
# First get bitstrings with corresponding counts for precision qubits only
bitstrings_precision_register = [substring(key, precision_qubits) for key in measurement_counts.keys()]
# Then keep only the unique strings
bitstrings_precision_register_set = set(bitstrings_precision_register)
# Cast as a list for later use
bitstrings_precision_register_list = list(bitstrings_precision_register_set)
# Now create a new dict to collect measurement results on the precision_qubits.
# Keys are given by the measurement count substrings on the register qubits. Initialize the counts to zero.
precision_results_dic = {key: 0 for key in bitstrings_precision_register_list}
# Loop over all measurement outcomes
for key in measurement_counts.keys():
# Save the measurement count for this outcome
counts = measurement_counts[key]
# Generate the corresponding shortened key (supported only on the precision_qubits register)
count_key = substring(key, precision_qubits)
# Add these measurement counts to the corresponding key in our new dict
precision_results_dic[count_key] += counts
    # Keep only the topmost measurement outcomes
    c = Counter(precision_results_dic)
    topmost = c.most_common(items_to_keep)
# get decimal phases from bitstrings for topmost bitstrings
phases_decimal = [binaryToDecimal(item[0]) for item in topmost]
# Get decimal phases from bitstrings for all bitstrings
# number_precision_qubits = len(precision_qubits)
# Generate binary decimal expansion
# phases_decimal = [int(key, 2)/(2**number_precision_qubits) for key in precision_results_dic]
# phases_decimal = [binaryToDecimal(key) for key in precision_results_dic]
return phases_decimal, precision_results_dic
```
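To see the postprocessing end to end, here is a small offline walk-through with hand-made measurement counts. The helper names below mirror, but re-implement, the ones defined in this notebook, so the snippet runs standalone:

```python
# Offline walk-through of the QPE postprocessing with toy measurement counts.
from collections import Counter

def binary_fraction_to_decimal(bits):
    # '01' -> 0*(1/2) + 1*(1/4) = 0.25, as in binaryToDecimal
    return sum(int(b) / 2 ** (k + 1) for k, b in enumerate(bits))

# Toy counts over 3 qubits; qubits 0 and 1 form the precision register.
measurement_counts = {'010': 700, '011': 200, '110': 100}
precision_qubits = [0, 1]

# Aggregate over the query qubit, exactly as get_qpe_phases does.
aggregated = Counter()
for key, n in measurement_counts.items():
    aggregated[''.join(key[i] for i in precision_qubits)] += n

top_bitstring, _ = aggregated.most_common(1)[0]
phase = binary_fraction_to_decimal(top_bitstring)
print(top_bitstring, phase)  # '01' (900 shots) -> phase 0.25
```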
### Run QPE experiments:
```python
def run_qpe(unitary, precision_qubits, query_qubits, query_circuit,
device, s3_folder, items_to_keep=1, shots=1000, save_to_pck=False):
"""
Function to run QPE algorithm end-to-end and return measurement counts.
Args:
precision_qubits: list of qubits defining the precision register
query_qubits: list of qubits defining the query register
unitary: Matrix representation of the unitary whose eigenvalues we wish to estimate
query_circuit: query circuit for state preparation of query register
items_to_keep: (optional) number of items to return (topmost measurement counts for precision register)
device: Braket device backend
shots: (optional) number of measurement shots (default is 1000)
save_to_pck: (optional) save results to pickle file if True (default is False)
"""
# get size of precision register and total number of qubits
number_precision_qubits = len(precision_qubits)
num_qubits = len(precision_qubits) + len(query_qubits)
    # Define the circuit. Start from the query_circuit (note that this is a
    # reference, not a copy, so adding the QPE mutates the circuit passed in):
    circ = query_circuit
circ.qpe(precision_qubits, query_qubits, unitary)
# Add desired results_types
circ.probability()
# Run the circuit with all zeros input.
# The query_circuit subcircuit generates the desired input from all zeros.
# The following code executes the correct device.run call, depending on whether the backend is local or managed (cloud-based)
if device.name == 'DefaultSimulator':
task = device.run(circ, shots=shots)
else:
task = device.run(circ, s3_folder, shots=shots)
# get result for this task
result = task.result()
# get metadata
metadata = result.task_metadata
# get output probabilities (see result_types above)
probs_values = result.values[0]
# get measurement results
measurements = result.measurements
measured_qubits = result.measured_qubits
measurement_counts = result.measurement_counts
measurement_probabilities = result.measurement_probabilities
# bitstrings
format_bitstring = '{0:0' + str(num_qubits) + 'b}'
bitstring_keys = [format_bitstring.format(ii) for ii in range(2**num_qubits)]
# QPE postprocessing
phases_decimal, precision_results_dic = get_qpe_phases(measurement_counts, precision_qubits, items_to_keep)
eigenvalues = [np.exp(2*np.pi*1j*phase) for phase in phases_decimal]
# aggregate results
out = {'circuit': circ,
'task_metadata': metadata,
'measurements': measurements,
'measured_qubits': measured_qubits,
'measurement_counts': measurement_counts,
'measurement_probabilities': measurement_probabilities,
'probs_values': probs_values,
'bitstring_keys': bitstring_keys,
'precision_results_dic': precision_results_dic,
'phases_decimal': phases_decimal,
'eigenvalues': eigenvalues}
if save_to_pck:
# store results: dump output to pickle with timestamp in filename
time_now = datetime.strftime(datetime.now(), '%Y%m%d%H%M%S')
results_file = 'results-'+time_now+'.pck'
        with open(results_file, "wb") as f:
            pickle.dump(out, f)
# you can load results as follows
# out = pickle.load(open(results_file, "rb"))
return out
```
---
## REFERENCES
[1] Wikipedia: https://en.wikipedia.org/wiki/Quantum_phase_estimation_algorithm
[2] Nielsen, Michael A., Chuang, Isaac L. (2010). Quantum Computation and Quantum Information (2nd ed.). Cambridge: Cambridge University Press.
# Module 4: APIs
## Spotify
<img src="https://developer.spotify.com/assets/branding-guidelines/logo@2x.png" width=400></img>
In this module we will use APIs to obtain information about the artists, albums, and tracks available on Spotify. But first... what is an **API**?<br>
An API is an interface for programming applications (*Application Programming Interface*). That is, it is a set of functions, methods, rules, and definitions that will let us develop applications (in this case, a scraper) that communicate with Spotify's servers. APIs are designed and developed by companies interested in having applications (public or private) built on top of their services. Spotify has public, well-documented APIs that we will be using throughout this project.
#### REST
A term you will surely come across when looking for information online is **REST** (or *RESTful*). It stands for *representational state transfer*, and an API being REST or RESTful implies that it respects certain architectural principles, such as a client/server communication protocol (HTTP) and, among other things, a defined set of operations known as **methods**. We have already been using the GET method to make requests to web servers.
#### Documentation
As mentioned before, APIs are designed by the same companies that are interested in having applications (public or private) consume their services or data. That is why the way to use an API varies depending on the service we want to consume: using Spotify's APIs is not the same as using Twitter's. For this reason it is very important to read the available documentation, usually found in the developers section of each site. Here is the [link to Spotify's documentation](https://developer.spotify.com/documentation/)
#### JSON
JSON stands for *JavaScript Object Notation* and is a format for describing objects that became so popular that it is now considered language-independent. In fact, we will use it in this project even though we are working in Python, because it is the format in which we will receive the responses to the requests we make through the APIs. For us, it will be nothing more (and nothing less) than a dictionary with a few particularities that we will see throughout the course.
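To make that concrete, here is a hedged illustration (a dummy payload, not a real API response) of how a JSON body maps onto Python dictionaries and lists:

```python
# A dummy Spotify-style JSON payload, parsed into nested dicts and lists.
import json

payload = '{"artists": {"items": [{"name": "Iron Maiden", "popularity": 77}]}}'
data = json.loads(payload)  # JSON text -> nested dict/list structure
print(data['artists']['items'][0]['name'])  # Iron Maiden
```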
Useful links for this class:
- [Spotify documentation - Artists](https://developer.spotify.com/documentation/web-api/reference/artists/)
- [Iron Maiden on Spotify](https://open.spotify.com/artist/6mdiAmATAx73kdxrNrnlao)
```
import requests
id_im = '6mdiAmATAx73kdxrNrnlao'
url_base = 'https://api.spotify.com/v1'
ep_artist = '/artists/{artist_id}'
url_base+ep_artist.format(artist_id=id_im)
r = requests.get(url_base+ep_artist.format(artist_id=id_im))
r.status_code
r.json()
token_url = 'https://accounts.spotify.com/api/token'
params = {'grant_type': 'client_credentials'}
headers = {'Authorization': 'Basic NDRiN2IzNmVjMTQ1NDY3ZjlhOWVlYWY3ZTQxN2NmOGI6N2I0YWE3YTBlZjQ4NDQwNDhhYjFkMjI0MzBhMWViMWY='}
r = requests.post(token_url, data=params, headers=headers)
r.status_code
r.json()
token = r.json()['access_token']
token
header = {"Authorization": "Bearer {}".format(token)}
r = requests.get(url_base+ep_artist.format(artist_id=id_im), headers=header)
r.status_code
r.json()
url_busqueda = 'https://api.spotify.com/v1/search'
search_params = {'q': "Iron+Maiden", 'type':'artist', 'market':'AR'}
busqueda = requests.get(url_busqueda, headers=header, params=search_params)
busqueda.status_code
busqueda.json()
import pandas as pd
df = pd.DataFrame(busqueda.json()['artists']['items'])
df.head()
df.sort_values(by='popularity', ascending=False).iloc[0]['id']
import base64
def get_token(client_id, client_secret):
encoded = base64.b64encode(bytes(client_id+':'+client_secret, 'utf-8'))
params = {'grant_type':'client_credentials'}
header={'Authorization': 'Basic ' + str(encoded, 'utf-8')}
r = requests.post('https://accounts.spotify.com/api/token', headers=header, data=params)
if r.status_code != 200:
        print('Error in request.', r.json())
return None
    print('Token valid for {} seconds.'.format(r.json()['expires_in']))
return r.json()['access_token']
client_id = '44b7b36ec145467f9a9eeaf7e417cf8b'
client_secret = '7b4aa7a0ef4844048ab1d22430a1eb1f'
token = get_token(client_id, client_secret)
header = {"Authorization": "Bearer {}".format(token)}
id_im
artist_im = requests.get(url_base+ep_artist.format(artist_id=id_im), headers=header)
artist_im.status_code
artist_im.json()
params = {'country': 'AR'}
albums_im = requests.get(url_base+ep_artist.format(artist_id=id_im)+'/albums', headers=header, params=params)
albums_im.status_code
albums_im.json()['items']
[(album['id'], album['name']) for album in albums_im.json()['items']]
bnw_id = '1hDF0QPIHVTnSJtxyQVguB'
album_ep = '/albums/{album_id}'
album_params = {'market':'AR'}
bnw = requests.get(url_base+album_ep.format(album_id=bnw_id)+'/tracks', headers=header, params=album_params)
bnw
bnw.json()
bnw.json()['items']
[(track['id'], track['name']) for track in bnw.json()['items']]
```
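The `'Basic ...'` authorization value used in the token request above is just the base64 encoding of `client_id:client_secret`, which is what `get_token` builds. A standalone sketch with dummy credentials:

```python
# Building a Basic authorization header from client credentials
# (dummy values, for illustration only).
import base64

client_id = 'my-client-id'
client_secret = 'my-client-secret'
encoded = base64.b64encode('{}:{}'.format(client_id, client_secret).encode('utf-8'))
header = {'Authorization': 'Basic ' + encoded.decode('utf-8')}
print(header['Authorization'][:6])  # Basic
```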
## Class 5
```
def obtener_discografia(artist_id, token, return_name=False, page_limit=50, country=None):
url = f'https://api.spotify.com/v1/artists/{artist_id}/albums'
header = {'Authorization': f'Bearer {token}'}
params = {'limit': page_limit,
'offset': 0,
'country': country}
lista = []
r = requests.get(url, params=params, headers=header)
if r.status_code != 200:
        print('Error in request.', r.json())
return None
if return_name:
lista += [(item['id'], item['name']) for item in r.json()['items']]
else:
lista += [item['id'] for item in r.json()['items']]
while r.json()['next']:
        r = requests.get(r.json()['next'], headers=header)  # The remaining parameters are already inside the URL
if return_name:
lista += [(item['id'], item['name']) for item in r.json()['items']]
else:
lista += [item['id'] for item in r.json()['items']]
return lista
def obtener_tracks(album_id, token, return_name=False, page_limit=50, market=None):
url=f'https://api.spotify.com/v1/albums/{album_id}/tracks'
header = {'Authorization': f'Bearer {token}'}
params = {'limit': page_limit,
'offset': 0,
'market': market}
lista = []
r = requests.get(url, params=params, headers=header)
if r.status_code != 200:
        print('Error in request.', r.json())
return None
if return_name:
lista += [(item['id'], item['name']) for item in r.json()['items']]
else:
lista += [item['id'] for item in r.json()['items']]
while r.json()['next']:
        r = requests.get(r.json()['next'], headers=header)  # The remaining parameters are already inside the URL
if return_name:
lista += [(item['id'], item['name']) for item in r.json()['items']]
else:
lista += [item['id'] for item in r.json()['items']]
return lista
```
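Both functions above rely on the same pagination pattern: keep following the response's `next` URL until it is `None`. An offline sketch of that loop, where `fake_get` stands in for `requests.get(...).json()`:

```python
# Offline sketch of the 'next'-link pagination loop used above.
fake_pages = iter([
    {'items': [{'id': 'a1'}, {'id': 'a2'}], 'next': 'page-2'},
    {'items': [{'id': 'a3'}], 'next': None},
])

def fake_get(url):
    # Stand-in for requests.get(url, ...).json()
    return next(fake_pages)

ids = []
response = fake_get('page-1')
ids += [item['id'] for item in response['items']]
while response['next']:
    response = fake_get(response['next'])
    ids += [item['id'] for item in response['items']]
print(ids)  # ['a1', 'a2', 'a3']
```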
# Data Inputs and Display Libraries
```
import pandas as pd
import numpy as np
import pickle
pd.set_option('display.float_format', lambda x: '%.5f' % x)
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
```
# Modeling Libraries
```
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn import model_selection
from xgboost import XGBClassifier
import pickle
from sklearn.model_selection import GridSearchCV
```
# Metrics Libraries
```
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix,ConfusionMatrixDisplay
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
from sklearn.metrics import f1_score
from matplotlib import pyplot
import matplotlib.pyplot as plt
from sklearn.metrics import average_precision_score
from sklearn.metrics import precision_recall_curve
# Accessing the data
!wget "https://github.com/univai-ghf/ghfmedia/raw/main/data/Trees_and_Ensembles/datasets.rar"
!wget "https://github.com/univai-ghf/ghfmedia/raw/main/data/Trees_and_Ensembles/prep_file.rar"
!wget "https://github.com/univai-ghf/ghfmedia/raw/main/data/Trees_and_Ensembles/num_cols.csv"
!wget "https://github.com/univai-ghf/ghfmedia/raw/main/data/Trees_and_Ensembles/str_cols.csv"
#unziping the rar
!unrar x './datasets.rar'
!unrar x './prep_file.rar'
def pick_in(obj_name):
    # load a pickled object from disk
    with open(obj_name, "rb") as pickle_in:
        return pickle.load(pickle_in)
list_objs = ["df_all_train2","y_train1","df_all_test2","y_test1"]
for i in list_objs:
globals()[i]= pick_in(i)
def auc1_scr(mod1, test_set, actual1):
    mod = eval(mod1)  # resolve the fitted model from its variable name (a string)
pred1=mod.predict_proba(test_set)[:,1]
fpr, tpr, thresholds = roc_curve(actual1, pred1)
auc1 = auc(fpr, tpr)
return auc1
# AdaBoost Classifier
ab = AdaBoostClassifier(n_estimators=100, random_state=0)
ab.fit(df_all_train2,y_train1)
auc1_te = auc1_scr("ab",df_all_test2,y_test1)
auc1_tr = auc1_scr("ab",df_all_train2,y_train1)
auc1_te,auc1_tr
```
# Grid Search
```
# This will take around 1hr+ to execute on standard colab runtime
# AB_grid= AdaBoostClassifier(random_state=42)
# params = {
# 'n_estimators': [100,500],
# 'learning_rate': [0.2,0.5,1],
# 'algorithm': ['SAMME','SAMME.R'],
# 'base_estimator' : [DecisionTreeClassifier(max_depth=1),DecisionTreeClassifier(max_depth=2),DecisionTreeClassifier(max_depth=5)]
# }
# grid_search = GridSearchCV(estimator=AB_grid,
# param_grid=params,
# cv=2, n_jobs=5, verbose=1, scoring = "roc_auc")
# grid_search.fit(df_all_test2,y_test1)
# score_df = pd.DataFrame(grid_search.cv_results_)
# score_df.head()
# score_df.sort_values(["rank_test_score"]).head(5)
```
# Gradient Boosting
```
# GradientBoosting Classifier
# It will take around 9 mins for execution
gb = GradientBoostingClassifier(max_depth=5,n_estimators=300, learning_rate=0.5)
gb.fit(df_all_train2,y_train1)
auc1_te = auc1_scr("gb",df_all_test2,y_test1)
auc1_tr = auc1_scr("gb",df_all_train2,y_train1)
auc1_te,auc1_tr
# XGB Classifier
# It will take around 4 mins for execution
xgb = XGBClassifier()
xgb.fit(df_all_train2,y_train1)
auc1_te = auc1_scr("xgb",df_all_test2,y_test1)
auc1_tr = auc1_scr("xgb",df_all_train2,y_train1)
auc1_te,auc1_tr
class_weights = [0.1,0.9]
# Note: 'class_weights' is not an XGBoost parameter, so it has no weighting
# effect here; for imbalanced binary targets use scale_pos_weight instead.
xgb_param = XGBClassifier(n_estimators=300, max_depth=5, class_weights=class_weights,
                          subsample=0.2, colsample_bytree=0.3, random_state=0)
xgb_param.fit(df_all_train2,y_train1)
auc1_te = auc1_scr("xgb_param",df_all_test2,y_test1)
auc1_tr = auc1_scr("xgb_param",df_all_train2,y_train1)
auc1_te,auc1_tr
```
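The AUC that `auc1_scr` reports can be written out in its equivalent rank form: the probability that a randomly chosen positive example scores above a randomly chosen negative one (ties count half). A pure-Python illustration, with no sklearn required:

```python
# Rank formulation of ROC AUC on hand-made scores.
def roc_auc(actual, scores):
    pos = [s for a, s in zip(actual, scores) if a == 1]
    neg = [s for a, s in zip(actual, scores) if a == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```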
# Notes
This project requires the creation of an **assets** and **outputs** folder in the same directory as the notebook. The assets folder should contain the WikiLarge_Train.csv file available from [Kaggle](https://www.kaggle.com/c/umich-siads-695-predicting-text-difficulty).
Several files are written to the **outputs** folder during processing because of the long run times of different parts of the script.
# Imports
```
import pandas as pd
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
import time
import pyLDAvis.sklearn
from pylab import bone, pcolor, colorbar, plot, show, rcParams, savefig
from itertools import chain
import pickle
import spacy
from collections.abc import Iterable
import ast
from sklearn.model_selection import GridSearchCV
from sklearn.cluster import AffinityPropagation, KMeans, DBSCAN, Birch, MiniBatchKMeans, OPTICS
from sklearn.metrics import silhouette_score
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE
from wordcloud import WordCloud
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
```
# Import Dataset and Build Tokenset
This section uses the nlp.pipe method to build out the document set and then passes the tokenset to a separate function that lemmatizes the words, removes stopwords, and removes punctuation.
```
def lemmatizer_alt(input):
lemma_list = [token.lemma_ for token in input if token.is_stop == False and token.is_punct==False]
return lemma_list
def build_dataset():
wiki = pd.read_csv('assets/WikiLarge_Train.csv')
l_start = time.time()
nlp = spacy.load("en_core_web_sm", exclude=['parser', "ner"])
wiki['nlp_text'] = [doc for doc in nlp.pipe(wiki["original_text"].tolist())]
wiki['tokenized_text'] = wiki["nlp_text"].apply(lemmatizer_alt)
l_duration = time.time() - l_start
print('Pipe Model Timing: {:.2f} seconds'.format(l_duration), flush=True)
wiki = wiki[[
'original_text',
'label',
'tokenized_text'
]]
wiki.to_csv('outputs/wiki_tokenized.csv')
return 0
build_dataset()
```
# Build LDA
The LDA build proceeds through the following steps:
1. Vectorization of the tokenset built in the previous step.
2. Setup of GridSearch parameters.
3. Search of the LDA model through the parameters.
4. Pickling of the final best model, vectorizer, grid search results, and others.
The GridSearch will default to the model's log-likelihood score, which should be sufficient here.
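The vectorizer in this step keeps only contiguous alphanumeric runs of at least 4 characters via its `token_pattern` (TfidfVectorizer applies the pattern with `re.findall`). A standalone demonstration of that filter:

```python
# What the vectorizer's token_pattern keeps: alphanumeric runs of 4+ chars.
import re

pattern = re.compile(r'[a-zA-Z0-9]{4,}')
doc = "the LDA model uses 20 topics over long tokens"
print(pattern.findall(doc))  # ['model', 'uses', 'topics', 'over', 'long', 'tokens']
```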
```
def lda_build():
l_start = time.time()
wiki = pd.read_csv('outputs/wiki_tokenized.csv')
wiki['token_list'] = wiki['tokenized_text'].apply(ast.literal_eval)
input_list = wiki['token_list'].str.join(" ")
l_duration = time.time() - l_start
print('List Construction: {:.2f} seconds'.format(l_duration), flush=True)
l_start = time.time()
vectorizer = TfidfVectorizer(
analyzer='word',
min_df=10,
token_pattern='[a-zA-Z0-9]{4,}' # Ensure every token is at least 4 char long
)
data_vectorized = vectorizer.fit_transform(input_list)
l_duration = time.time() - l_start
print("Number of topics: {:.0f}".format(data_vectorized.shape[1]))
print('Vector Construction: {:.2f} seconds'.format(l_duration), flush=True)
search_params = {
'n_components': [10, 15, 20],
'learning_decay': [.5, .7, .9]
}
l_start = time.time()
lda = LatentDirichletAllocation(
max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=42,
verbose=1
)
model = GridSearchCV(
lda,
param_grid=search_params,
verbose=1,
n_jobs=1
)
model.fit(data_vectorized)
l_duration = time.time() - l_start
print('LDA Grid Search: {:.2f} seconds'.format(l_duration), flush=True)
l_start = time.time()
best_lda_model = model.best_estimator_
data_lda = best_lda_model.transform(data_vectorized)
search_results = model.cv_results_
l_duration = time.time() - l_start
print('Grid Search Data Extraction: {:.2f} seconds'.format(l_duration), flush=True)
pickle.dump(vectorizer, open("outputs/vectorizer.pkl", "wb"))
pickle.dump(data_vectorized, open("outputs/data_vectorized.pkl", "wb"))
pickle.dump(data_lda, open("outputs/data_lda.pkl", "wb"))
pickle.dump(best_lda_model, open("outputs/lda_model.pkl", 'wb'))
pickle.dump(search_results, open("outputs/grid_search_results.pkl", 'wb'))
return 0
lda_build()
```
# Grid Search Review
This section plots the log-likelihood across models from grid_search_results.pkl
```
grid_data = pickle.load(open("outputs/grid_search_results.pkl", "rb"))
search_params = {
'n_components': [10, 15, 20],
'learning_decay': [.5, .7, .9]
}
def plot_grid_search(cv_results, grid_param_1, grid_param_2, name_param_1, name_param_2):
scores_mean = cv_results['mean_test_score']
scores_mean = np.array(scores_mean).reshape(len(grid_param_2),len(grid_param_1))
scores_sd = cv_results['std_test_score']
scores_sd = np.array(scores_sd).reshape(len(grid_param_2),len(grid_param_1))
_, ax = plt.subplots(1,1)
for idx, val in enumerate(grid_param_2):
ax.plot(grid_param_1, scores_mean[idx,:], '-o', label= name_param_2 + ': ' + str(val))
ax.set_xlabel(name_param_1)
    ax.set_ylabel('Log Likelihood')
ax.legend(loc="best")
ax.grid('on')
plot_grid_search(grid_data, search_params['n_components'], search_params['learning_decay'], 'N Components', 'Learning Decay')
```
# LDA Plotting
This section creates an interactive plot of the LDA model for examination of the topics extracted from the document.
```
def LDA_plot():
lda_model = pickle.load(open("outputs/lda_model.pkl", "rb"))
data_vectorized = pickle.load(open("outputs/data_vectorized.pkl", "rb"))
vectorizer = pickle.load(open("outputs/vectorizer.pkl", "rb"))
pyLDAvis.enable_notebook()
dash = pyLDAvis.sklearn.prepare(lda_model, data_vectorized, vectorizer, mds='tsne')
return dash
LDA_plot()
```
# Tag Application
Next, the LDA model is used to generate the top n tags for each article. Only articles with at least n tags are kept, because otherwise vectorization and clustering would not work: the tag vectors would have different sizes.
```
def tag_df(n_items):
wiki = pd.read_csv('outputs/wiki_tokenized.csv')
lda_model = pickle.load(open("outputs/lda_model.pkl", "rb"))
data_vectorized = pickle.load(open("outputs/data_vectorized.pkl", "rb"))
vectorizer = pickle.load(open("outputs/vectorizer.pkl", "rb"))
threshold = 0 #Arbitrary, might change
n_topics = 10 # Change to best fit
list_scores = []
list_words = []
feature_names = np.array(vectorizer.get_feature_names())
lda_components = lda_model.components_ #/ lda_model.components_.sum(axis=1)[:, np.newaxis] # normalization
total_length = len(wiki)
return_df = {
'index': [],
'tag': []
}
for index, row in wiki.iterrows():
if index % 1000 == 0 and index > 0:
print('Percent complete: {:.2%}'.format(index/total_length))
text_projection = data_vectorized[index,:].toarray()
element_score = np.multiply(text_projection[0],lda_components)
non_zero = np.nonzero(element_score)
l_words = {}
for i,j in zip(non_zero[0],non_zero[1]):
if feature_names[j] in l_words:
l_words[feature_names[j] ] += element_score[i,j]
else:
l_words[feature_names[j] ] = element_score[i,j]
l_words = [k for k, v in sorted(l_words.items(), key=lambda item: item[1], reverse=True)]
if len(l_words) >= n_items:
l_words = l_words[:n_items]
return_df['index'].append(index)
return_df['tag'].append(" ".join(list(l_words)))
return_df = pd.DataFrame(return_df).set_index('index')
wiki = wiki.join(return_df)
return wiki
tagged_df = tag_df(5)
tagged_df.to_csv('outputs/tagged_df.csv')
tagged_df.head(5)
```
# Tag Grouping
This next section applies a number of clustering methods to the tags generated in the previous section. In order to reduce the search space the tags are vectorized and then transformed according to the LDA number of topics.
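The simplest of the methods below is the LDA "clustering" step, which just assigns each document to the topic with the largest projected weight (an `np.argmax` along axis 1). A hand-rolled, dependency-free version of that assignment:

```python
# Per-row argmax, as used for the LDA cluster assignment in other_tags.
rows = [
    [0.1, 0.7, 0.2],   # doc 0 -> topic 1
    [0.5, 0.3, 0.2],   # doc 1 -> topic 0
]
labels = [max(range(len(r)), key=r.__getitem__) for r in rows]
print(labels)  # [1, 0]
```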
```
tagged_df = pd.read_csv('outputs/tagged_df.csv')
tagged_df.head(5)
def other_tags(input_df, tag_label, return_scores = None):
method = []
method_scores = []
l_start = time.time()
lda_model = pickle.load(open("outputs/lda_model.pkl", "rb"))
vectorizer = pickle.load(open("outputs/vectorizer.pkl", "rb"))
input_df = input_df[input_df[tag_label].str.len()>0]
X_i = vectorizer.transform(input_df[tag_label])
print(X_i.shape)
print(lda_model.components_.shape)
X = X_i @ lda_model.components_.T
print(X.shape)
l_duration = time.time() - l_start
print('Vectorization: {:.2f} seconds'.format(l_duration), flush=True)
l_start = time.time()
lda_cluster = np.argmax(X, axis=1)
l_duration = time.time() - l_start
print('LDA clustering: {:.2f} seconds'.format(l_duration), flush=True)
print(lda_cluster.shape)
lda_ss = silhouette_score(X, lda_cluster, sample_size=10000) #Depending on speed, may need to change sample size
print('Lda score: {:.4f}'.format(lda_ss), flush=True)
method.append('LDA')
method_scores.append(lda_ss)
#DBSCAN
l_start = time.time()
db_clustering = DBSCAN(eps=3, min_samples=2).fit(X)
l_duration = time.time() - l_start
print('DBSCAN: {:.2f} seconds'.format(l_duration), flush=True)
num_clusters = len(list(set(db_clustering.labels_)))
db_ss = silhouette_score(X, db_clustering.labels_, sample_size=10000) #Depending on speed, may need to change sample size
print('DBSCAN score: {:.4f}'.format(db_ss), flush=True)
print("DBSCAN number of clusters: " +str(num_clusters))
method.append('DBSCAN')
method_scores.append(db_ss)
index_range = np.arange(0,X.shape[0],1000)
#BIRCH
l_start = time.time()
brc = Birch(n_clusters=None)
for index,val in enumerate(index_range):
#print('Birch Current index: '+str(index))
#l_start = time.time()
if index+1 >= len(index_range):
brc = brc.partial_fit(X[val:X.shape[0],:])
else:
brc = brc.partial_fit(X[val:index_range[index+1],:])
#l_duration = time.time() - l_start
l_duration = time.time() - l_start
print('BIRCH fit: {:.2f} seconds'.format(l_duration), flush=True)
l_start = time.time()
brc_labels = brc.predict(X)
num_clusters = len(list(set(brc_labels)))
l_duration = time.time() - l_start
print('BIRCH predict: {:.2f} seconds'.format(l_duration), flush=True)
print("Birch number of clusters: " +str(num_clusters))
birch_ss = silhouette_score(X, brc_labels, sample_size=10000) #Depending on speed, may need to change sample size
print('Birch score: {:.4f}'.format(birch_ss), flush=True)
method.append('BIRCH')
method_scores.append(birch_ss)
if return_scores is None:
return_scores = pd.DataFrame({
'method':method,
'method_scores':method_scores,
})
else:
l_return_scores = pd.DataFrame({
'method':method,
'method_scores':method_scores,
})
        return_scores = return_scores.append(l_return_scores, ignore_index=True)
return return_scores
def k_means_tags(input_df, tag_label, return_scores = None):
method = []
method_scores = []
interial_clusters = []
intertia = []
l_start = time.time()
lda_model = pickle.load(open("outputs/lda_model.pkl", "rb"))
vectorizer = pickle.load(open("outputs/vectorizer.pkl", "rb"))
input_df = input_df[input_df[tag_label].str.len()>0]
X_i = vectorizer.transform(input_df[tag_label])
print(X_i.shape)
print(lda_model.components_.shape)
X = X_i @ lda_model.components_.T
print(X.shape)
l_duration = time.time() - l_start
print('Vectorization: {:.2f} seconds'.format(l_duration), flush=True)
index_range = np.arange(0,X.shape[0],1000)
cluster_range = [10,30,50,70,100,300,400,500,600,700,800,900,1000]
for el in cluster_range:
kmeans = MiniBatchKMeans(n_clusters=el,random_state=0,batch_size=6)
l_start = time.time()
for index,val in enumerate(index_range):
if index+1 >= len(index_range):
kmeans = kmeans.partial_fit(X[val:X.shape[0],:])
else:
kmeans = kmeans.partial_fit(X[val:index_range[index+1],:])
l_duration = time.time() - l_start
print('Kmeans fit: {:.2f} seconds'.format(l_duration), flush=True)
l_start = time.time()
kmeans_labels = kmeans.predict(X)
l_duration = time.time() - l_start
print('Kmeans predict: {:.2f} seconds'.format(l_duration), flush=True)
kmeans_ss = silhouette_score(X, kmeans_labels, sample_size=10000) #Depending on speed, may need to change sample size
print('Kmeans score: {:.4f}'.format(kmeans_ss), flush=True)
print("Number of clusters: " +str(el))
method.append('K Means '+str(el)+' clusters')
method_scores.append(kmeans_ss)
interial_clusters.append(el)
intertia.append(kmeans.inertia_)
inertia_scores = pd.DataFrame({
'interia_clusters': interial_clusters,
'interia_score': intertia
})
if return_scores is None:
return_scores = pd.DataFrame({
'method':method,
'method_scores':method_scores,
})
else:
l_return_scores = pd.DataFrame({
'method':method,
'method_scores':method_scores,
})
return_scores = return_scores.append(l_return_scores, ignore_index=True)
return pd.DataFrame(inertia_scores),return_scores
oth_scores = other_tags(tagged_df, 'tags')
inertia_dict, score_dict = k_means_tags(tagged_df, 'tags', oth_scores)
score_dict.head()
```
# Clustering Evaluation
The next section evaluates the results of the clustering methods applied previously and then selects the best model for use in the final section of building a word cloud.
```
method = score_dict['method'].to_list()
method_scores = score_dict['method_scores'].to_list()
plt.bar(method, method_scores)
plt.xlabel("Clustering Method")
plt.xticks(rotation=90)
plt.ylabel("Silhouette Score")
plt.show()
#Plot kmeans inertia
range_n_clusters = inertia_dict['interia_clusters'].to_list()
avg_distance = inertia_dict['interia_score'].to_list()
plt.plot(range_n_clusters, avg_distance)
plt.xlabel("Number of Clusters (k)")
plt.ylabel("Distance")
plt.show()
def tag_best_k_means(input_df, tag_label, num_clusters):
l_start = time.time()
lda_model = pickle.load(open("outputs/lda_model.pkl", "rb"))
vectorizer = pickle.load(open("outputs/vectorizer.pkl", "rb"))
input_df = input_df[input_df[tag_label].str.len()>0]
X_i = vectorizer.transform(input_df[tag_label])
print(X_i.shape)
print(lda_model.components_.shape)
X = X_i @ lda_model.components_.T
print(X.shape)
l_duration = time.time() - l_start
print('Vectorization: {:.2f} seconds'.format(l_duration), flush=True)
index_range = np.arange(0,X.shape[0],1000)
kmeans = MiniBatchKMeans(n_clusters=num_clusters,random_state=0,batch_size=6)
l_start = time.time()
for index,val in enumerate(index_range):
if index+1 >= len(index_range):
kmeans = kmeans.partial_fit(X[val:X.shape[0],:])
else:
kmeans = kmeans.partial_fit(X[val:index_range[index+1],:])
l_duration = time.time() - l_start
print('Kmeans fit: {:.2f} seconds'.format(l_duration), flush=True)
l_start = time.time()
kmeans_labels = kmeans.predict(X)
l_duration = time.time() - l_start
print('Kmeans predict: {:.2f} seconds'.format(l_duration), flush=True)
input_df['cluster'] = kmeans_labels
return input_df
tag_df = tag_best_k_means(tagged_df, 'tags', 400)
```
# Word Clouds
This next section generates word clouds for a particular cluster.
```
def gen_word_cloud(df, tag_col, cluster, cluster_col):
# Concatenate all tags for the requested cluster into one text blob
text = df[df[cluster_col]==cluster][tag_col].str.cat(sep=' ')
# Generate the word cloud image (white background, repeating words to fill space)
wordcloud = WordCloud(background_color="white", repeat=True).generate(text)
plt.figure()
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
#plt.show()
return plt
gen_word_cloud(tag_df, 'tags',3,'cluster').show()
tag_df.to_csv('outputs/tagged_df.csv')
```
| github_jupyter |
## Computer Vision Learner
[`vision.learner`](/vision.learner.html#vision.learner) is the module that defines the [`cnn_learner`](/vision.learner.html#cnn_learner) method, to easily get a model suitable for transfer learning.
```
from fastai.gen_doc.nbdoc import *
from fastai.vision import *
```
## Transfer learning
Transfer learning is a technique where you use a model trained on a very large dataset (usually [ImageNet](http://image-net.org/) in computer vision) and then adapt it to your own dataset. The idea is that the model has learned to recognize many features on all of this data, and you will benefit from this knowledge, especially if your dataset is small, compared to starting from a randomly initialized model. It has been shown in [this article](https://arxiv.org/abs/1805.08974), on a wide range of tasks, that transfer learning nearly always gives better results.
In practice, you need to change the last part of your model to adapt it to your own number of classes. Most convolutional models end with a few linear layers (a part we will call the head). The last convolutional layer will have analyzed features in the image that went through the model, and the job of the head is to convert those into predictions for each of our classes. In transfer learning we keep all the convolutional layers (called the body or the backbone of the model) with their weights pretrained on ImageNet, but define a new head initialized randomly.
Then we will train the model we obtain in two phases: first we freeze the body weights and only train the head (to convert those analyzed features into predictions for our own data), then we unfreeze the layers of the backbone (gradually if necessary) and fine-tune the whole model (possibly using differential learning rates).
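This two-phase recipe can be sketched in plain PyTorch (a minimal illustration with made-up layer sizes; in fastai this is handled for you by `freeze`/`unfreeze` on the `Learner`):

```python
import torch.nn as nn

# Toy "pretrained" body and freshly initialized head (sizes are illustrative)
body = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(8, 2)
model = nn.Sequential(body, head)

def freeze(module):
    # Phase 1: stop gradients flowing into these weights
    for p in module.parameters():
        p.requires_grad = False

def unfreeze(module):
    # Phase 2: allow end-to-end fine-tuning of the whole model
    for p in module.parameters():
        p.requires_grad = True

freeze(body)  # only the head's weights will be updated by the optimizer
```

After phase 1 converges, calling `unfreeze(body)` (gradually, layer group by layer group, if necessary) lets the backbone fine-tune as well.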
The [`cnn_learner`](/vision.learner.html#cnn_learner) factory method helps you to automatically get a pretrained model from a given architecture with a custom head that is suitable for your data.
```
show_doc(cnn_learner)
```
This method creates a [`Learner`](/basic_train.html#Learner) object from the [`data`](/vision.data.html#vision.data) object and model inferred from it with the backbone given in `arch`. Specifically, it will cut the model defined by `arch` (randomly initialized if `pretrained` is False) at the last convolutional layer by default (or as defined in `cut`, see below) and add:
- an [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d) layer,
- a [`Flatten`](/layers.html#Flatten) layer,
- blocks of \[[`nn.BatchNorm1d`](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm1d), [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout), [`nn.Linear`](https://pytorch.org/docs/stable/nn.html#torch.nn.Linear), [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU)\] layers.
The blocks are defined by the `lin_ftrs` and `ps` arguments. Specifically, the first block has a number of inputs inferred from the backbone `arch`, the last one has a number of outputs equal to `data.c` (the number of classes in the data), and the intermediate blocks have inputs/outputs determined by `lin_ftrs` (each block, of course, has as many inputs as the previous block has outputs). The default is a single intermediate hidden size of 512, which makes two blocks: `model_activation` -> 512 -> `n_classes`. If you pass a float for `ps`, the final dropout layer will have that value and the remaining ones `ps/2`; if you pass a list, the values are used as dropout probabilities directly.
Note that the very last block doesn't have a [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU) activation, to allow you to use any final activation you want (generally included in the loss function in pytorch). Also, the backbone will be frozen if you choose `pretrained=True` (so only the head will train if you call [`fit`](/basic_train.html#fit)) so that you can immediately start phase one of training as described above.
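As a rough plain-PyTorch sketch of this head structure (illustrative only, not fastai's actual `create_head`; the sizes are made up):

```python
import torch
import torch.nn as nn

def simple_head(n_in, n_out, lin_ftrs=(512,), ps=0.5):
    """Stack [BatchNorm1d, Dropout, Linear, ReLU] blocks ending in n_out,
    with no ReLU after the final Linear (sketch of the structure above)."""
    sizes = [n_in, *lin_ftrs, n_out]
    # final dropout gets ps, earlier ones ps/2, mirroring the convention above
    drops = [ps / 2] * (len(sizes) - 2) + [ps]
    layers = []
    for i, (ni, no) in enumerate(zip(sizes, sizes[1:])):
        layers += [nn.BatchNorm1d(ni), nn.Dropout(drops[i]), nn.Linear(ni, no)]
        if i < len(sizes) - 2:   # no activation on the very last block
            layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

head = simple_head(1024, 10)  # e.g. 1024 backbone features -> 10 classes
```

With the defaults this produces exactly two blocks, `1024 -> 512 -> 10`, and the output is left un-activated so a loss function like `nn.CrossEntropyLoss` can apply its own softmax.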
Alternatively, you can define your own `custom_head` to put on top of the backbone. If you want to specify where to split `arch`, you should do so in the argument `cut`, which can either be the index of a specific layer (the result will not include that layer) or a function that, when passed the model, will return the backbone you want.
The final model obtained by stacking the backbone and the head (custom or defined as we saw) is then separated in groups for gradual unfreezing or differential learning rates. You can specify how to split the backbone in groups with the optional argument `split_on` (should be a function that returns those groups when given the backbone).
The `kwargs` will be passed on to [`Learner`](/basic_train.html#Learner), so you can put here anything that [`Learner`](/basic_train.html#Learner) will accept ([`metrics`](/metrics.html#metrics), `loss_func`, `opt_func`...)
```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learner = cnn_learner(data, models.resnet18, metrics=[accuracy])
learner.fit_one_cycle(1,1e-3)
learner.save('one_epoch')
show_doc(unet_learner)
```
This time the model will be a [`DynamicUnet`](/vision.models.unet.html#DynamicUnet) with an encoder based on `arch` (possibly `pretrained`) that is cut depending on `split_on`. `blur_final`, `norm_type`, `blur`, `self_attention`, `y_range`, `last_cross` and `bottle` are passed to the unet constructor, and the `kwargs` are passed to the initialization of the [`Learner`](/basic_train.html#Learner).
```
jekyll_warn("The models created with this function won't work with pytorch `nn.DataParallel`, you have to use distributed training instead!")
```
### Get predictions
Once you've actually trained your model, you may want to use it on a single image. This is done by using the following method.
```
show_doc(Learner.predict)
img = learner.data.train_ds[0][0]
learner.predict(img)
```
Here the predicted class for our image is '3', which corresponds to a label of 0. The probabilities the model found for each class are 99.65% and 0.35% respectively, so its confidence is pretty high.
Note that if you want to load your trained model and use it on inference mode with the previous function, you should export your [`Learner`](/basic_train.html#Learner).
```
learner.export()
```
And then you can load it with an empty data object that has the same internal state, like this:
```
learn = load_learner(path)
```
### Customize your model
You can customize [`cnn_learner`](/vision.learner.html#cnn_learner) for your own model's default `cut` and `split_on` functions by adding them to the dictionary `model_meta`. The key should be your model and the value should be a dictionary with the keys `cut` and `split_on` (see the source code for examples). The constructor will call [`create_body`](/vision.learner.html#create_body) and [`create_head`](/vision.learner.html#create_head) for you based on `cut`; you can also call them yourself, which is particularly useful for testing.
```
show_doc(create_body)
show_doc(create_head, doc_string=False)
```
Model head that takes `nf` features, runs through `lin_ftrs`, and ends with `nc` classes. `ps` is the probability of the dropouts, as documented above in [`cnn_learner`](/vision.learner.html#cnn_learner).
```
show_doc(ClassificationInterpretation, title_level=3)
```
This provides a confusion matrix and visualization of the most incorrect images. Pass in your [`data`](/vision.data.html#vision.data), calculated `preds`, actual `y`, and your `losses`, and then use the methods below to view the model interpretation results. For instance:
```
learn = cnn_learner(data, models.resnet18)
learn.fit(1)
preds,y,losses = learn.get_preds(with_loss=True)
interp = ClassificationInterpretation(learn, preds, y, losses)
```
The following factory method gives a more convenient way to create an instance of this class:
```
show_doc(ClassificationInterpretation.from_learner, full_name='from_learner')
```
You can also use a shortcut `learn.interpret()` to do the same.
```
show_doc(Learner.interpret, full_name='interpret')
```
Note that this shortcut is a [`Learner`](/basic_train.html#Learner) object/class method that can be called as: `learn.interpret()`.
```
show_doc(ClassificationInterpretation.plot_top_losses, full_name='plot_top_losses')
```
The `k` items are arranged as a square, so it will look best if `k` is a square number (4, 9, 16, etc.). The title of each image shows: prediction, actual, loss, probability of actual class. When `heatmap` is True (the default), [Grad-CAM](http://openaccess.thecvf.com/content_ICCV_2017/papers/Selvaraju_Grad-CAM_Visual_Explanations_ICCV_2017_paper.pdf) heatmaps are overlaid on each image. `plot_top_losses` should be used with single-labeled datasets. See `plot_multi_top_losses` below for a version capable of handling multi-labeled datasets.
```
interp.plot_top_losses(9, figsize=(7,7))
show_doc(ClassificationInterpretation.top_losses)
```
Returns tuple of *(losses,indices)*.
```
interp.top_losses(9)
show_doc(ClassificationInterpretation.plot_multi_top_losses, full_name='plot_multi_top_losses')
```
Similar to `plot_top_losses()` but aimed at multi-labeled datasets. It plots misclassified samples sorted by their respective loss.
Since you can have multiple labels for a single sample, they can easily overlap in a grid plot. So it plots just one sample per row.
Note that you can pass `save_misclassified=True` (by default it's `False`). In such case, the method will return a list containing the misclassified images which you can use to debug your model and/or tune its hyperparameters.
```
show_doc(ClassificationInterpretation.plot_confusion_matrix)
```
If [`normalize`](/vision.data.html#normalize), plots the percentages with `norm_dec` digits. `slice_size` can be used to avoid an out-of-memory error if your set is too big. `kwargs` are passed to `plt.figure`.
```
interp.plot_confusion_matrix()
show_doc(ClassificationInterpretation.confusion_matrix)
interp.confusion_matrix()
show_doc(ClassificationInterpretation.most_confused)
```
#### Working with large datasets
When working with large datasets, memory problems can arise when computing the confusion matrix. For example, an error can look like this:
RuntimeError: $ Torch: not enough memory: you tried to allocate 64GB. Buy new RAM!
In this case it is possible to force [`ClassificationInterpretation`](/train.html#ClassificationInterpretation) to compute the confusion matrix on slices of the data and then aggregate the results by specifying the `slice_size` parameter.
```
interp.confusion_matrix(slice_size=10)
interp.plot_confusion_matrix(slice_size=10)
interp.most_confused(slice_size=10)
```
## Undocumented Methods - Methods moved below this line will intentionally be hidden
## New Methods - Please document or move to the undocumented section
| github_jupyter |
```
import numpy as np
import pandas as pd
import pyspark
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
from pyspark.sql.types import StringType
conf = SparkConf().setAppName("My App").setMaster("local[*]")
sc.stop()  # stop the existing SparkContext before creating a new one
sc = SparkContext(conf = conf)
sqlContext = SQLContext(sc)
```
## Read both the csv files
```
movies = sc.textFile('/Users/jitu/Documents/Fall/Big Data/Assignments/ml-latest-small/movies.csv')
ratings = sc.textFile('/Users/jitu/Documents/Fall/Big Data/Assignments/ml-latest-small/ratings.csv')
##### ------------- user input ----------------
userID = '1'
movieID = '31'
avgrating = '3.0'
```
## Split the input on commas and drop the genre and timestamp columns, which are not used for the recommendation
```
movie_rdd = movies.map(lambda line: line.split(',')[:-1]) ## Excluding Genre column
ratings_rdd = ratings.map(lambda line: line.split(',')[:-1]) ## Excluding Timestamp column
movie_rdd.collect()
```
## Filtering out header from movie RDD and selecting movie ID and movie name
```
movi = movie_rdd.filter(lambda x: x[0]!='movieId')
#all_users = users.filter(lambda x: x[0:-2])
all_movi = movi.map(lambda x: (x[0],x[1]))
all_movi.collect()
```
## Filter users whose movie ID equals the given movie ID and whose user ID differs from the given user ID, since we don't want movies the user has already seen
```
users = ratings_rdd.filter(lambda x: x[1]==movieID and x[0]!=userID)
#all_users = users.filter(lambda x: x[0:-2])
all_users = users.map(lambda x: (x[0],1))
all_users.collect()
```
## Filter out all movies the given user has seen by removing that user ID from the RDD, and select only the user ID and movie ID columns
```
users1 = ratings_rdd.filter(lambda x: x[0]!=userID)
#all_users1 = users.filter(lambda x: x[0:-2])
all_users1 = users1.map(lambda x: (x[0],x[1]))
all_users1.collect()
```
## Join all users who have seen the given movie with the RDD of all users and movies
## then select only the movie column, map it to `(movie, 1)` tuples, and reduce by key to count how many of those users saw each movie. The result is a list of movies with a user count for each
```
movies_list = all_users.join(all_users1).map(lambda x: (x[1][1],1)).reduceByKey(lambda amt1,amt2 : amt1+amt2)
movies_list.collect()
```
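The map to `(movie, 1)` followed by `reduceByKey` is just a per-key count; in plain Python the same pattern looks like this (illustrative data, not Spark):

```python
from collections import defaultdict

# Hypothetical movie IDs seen by users who also watched the given movie
joined_movies = ['31', '1029', '31', '1061', '31']

pairs = [(movie, 1) for movie in joined_movies]   # the map step
movie_counts = defaultdict(int)
for movie, n in pairs:                            # the reduceByKey step
    movie_counts[movie] += n
print(dict(movie_counts))
```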
# Calculate average rating of a movie
## First filter the given user and the header out of the RDD, then select only movie ID and rating
```
ratings = ratings_rdd.filter(lambda x: (x[0]!=userID and x[0]!='userId'))
#all_users1 = users.filter(lambda x: x[0:-2])
all_ratings = ratings.map(lambda x: (x[1],x[2]))
all_ratings.collect()
```
## Now, join the movie list with the ratings RDD above,
## then select only the movie ID and reduce by key to get a count per movie (used below to compute the average rating)
```
count = movies_list.join(all_ratings).map(lambda x: (x[0],1)).reduceByKey(lambda amt1,amt2 : amt1+amt2)
count.collect()
```
## Join the two RDDs again and sum the ratings per movie
```
avg_ratings = movies_list.join(all_ratings).map(lambda x: (x[0],x[1][1])).reduceByKey(lambda amt1,amt2 : float(amt1)+float(amt2))
avg_ratings.collect()
```
## Join the rating totals with the counts to get each movie's average rating, and sort by rating
```
avg_by_key = avg_ratings.join(count).map(lambda x: (x[0],(float(x[1][0])/float(x[1][1]))))
avg_by_key = avg_by_key.sortBy(lambda x: x[1], False)
avg_by_key.collect()
```
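The joins above boil down to dividing each movie's rating total by its rating count; a plain-Python sketch with made-up numbers:

```python
# Hypothetical per-movie rating sums and counts (as if from the two RDDs)
rating_sums   = {'31': 10.5, '1061': 8.0}
rating_counts = {'31': 3, '1061': 2}

# Join on the movie key and divide to get each movie's average rating
avg_by_movie = {m: rating_sums[m] / rating_counts[m] for m in rating_sums}

# Sort descending by average, as sortBy(lambda x: x[1], False) does
ranked = sorted(avg_by_movie.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```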
## Finally, join the RDD above with the movie-name RDD to get the names of the recommended movies, and select the top 5
```
final_recom = avg_by_key.join(all_movi).map(lambda x: (x[1][0],x[1][1]))
final_recom = final_recom.sortBy(lambda x: x[0], False)
final_recom.top(5)
```
| github_jupyter |
# The Atoms of Computation
Programming a quantum computer is now something that anyone can do in the comfort of their own home.
But what to create? What is a quantum program anyway? In fact, what is a quantum computer?
These questions can be answered by making comparisons to standard digital computers. Unfortunately, most people don’t actually understand how digital computers work either. In this article, we’ll look at the basic principles behind these devices. To help us transition over to quantum computing later on, we’ll do it using the same tools as we'll use for quantum.
## Contents
1. [Splitting information into bits](#bits)
2. [Computation as a Diagram](#diagram)
3. [Your First Quantum Circuit](#first-circuit)
4. [Example: Adder Circuit](#adder)
4.1 [Encoding an Input](#encoding)
4.2 [Remembering how to Add](#remembering-add)
4.3 [Adding with Qiskit](#adding-qiskit)
Below is some Python code we'll need to run if we want to use the code in this page:
```
from qiskit import QuantumCircuit, assemble, Aer
from qiskit.visualization import plot_histogram
```
## 1. Splitting information into bits <a id="bits"></a>
The first thing we need to know about is the idea of bits. These are designed to be the world’s simplest alphabet. With only two characters, 0 and 1, we can represent any piece of information.
One example is numbers. You are probably used to representing a number through a string of the ten digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. In this string of digits, each digit represents how many times the number contains a certain power of ten. For example, when we write 9213, we mean
$$ 9000 + 200 + 10 + 3 $$
or, expressed in a way that emphasizes the powers of ten
$$ (9\times10^3) + (2\times10^2) + (1\times10^1) + (3\times10^0) $$
Though we usually use this system based on the number 10, we can just as easily use one based on any other number. The binary number system, for example, is based on the number two. This means using the two characters 0 and 1 to express numbers as multiples of powers of two. For example, 9213 becomes 10001111111101, since
$$ 9213 = (1 \times 2^{13}) + (0 \times 2^{12}) + (0 \times 2^{11})+ (0 \times 2^{10}) +(1 \times 2^9) + (1 \times 2^8) + (1 \times 2^7) \\\\ \,\,\, + (1 \times 2^6) + (1 \times 2^5) + (1 \times 2^4) + (1 \times 2^3) + (1 \times 2^2) + (0 \times 2^1) + (1 \times 2^0) $$
In this we are expressing numbers as multiples of 2, 4, 8, 16, 32, etc. instead of 10, 100, 1000, etc.
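We can verify this expansion directly in Python:

```python
# The binary digits of 9213, most significant bit first
digits = [1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1]   # 10001111111101

# Sum each digit times its power of two (least significant digit is 2^0)
n = sum(bit * 2**k for k, bit in enumerate(reversed(digits)))
print(n)
```

Python's built-ins agree: `int('10001111111101', 2)` gives 9213, and `bin(9213)` gives the string back.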
<a id="binary_widget"></a>
```
from qiskit_textbook.widgets import binary_widget
binary_widget(nbits=5)
```
These strings of bits, known as binary strings, can be used to represent more than just numbers. For example, there is a way to represent any text using bits. For any letter, number, or punctuation mark you want to use, you can find a corresponding string of at most eight bits using [this table](https://www.ibm.com/support/knowledgecenter/en/ssw_aix_72/com.ibm.aix.networkcomm/conversion_table.htm). Though these are quite arbitrary, this is a widely agreed-upon standard. In fact, it's what was used to transmit this article to you through the internet.
This is how all information is represented in computers. Whether numbers, letters, images, or sound, it all exists in the form of binary strings.
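For instance, Python can show the 8-bit string behind each character of a word:

```python
word = "qubit"
# format(..., '08b') writes each character code as eight binary digits
bits = [format(ord(c), '08b') for c in word]
print(bits)
```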
Like our standard digital computers, quantum computers are based on this same basic idea. The main difference is that they use *qubits*, an extension of the bit to quantum mechanics. In the rest of this textbook, we will explore what qubits are, what they can do, and how they do it. In this section, however, we are not talking about quantum at all. So, we just use qubits as if they were bits.
### Quick Exercises
1. Think of a number and try to write it down in binary.
2. If you have $n$ bits, how many different states can they be in?
## 2. Computation as a diagram <a id="diagram"></a>
Whether we are using qubits or bits, we need to manipulate them in order to turn the inputs we have into the outputs we need. For the simplest programs with very few bits, it is useful to represent this process in a diagram known as a *circuit diagram*. These have inputs on the left, outputs on the right, and operations represented by arcane symbols in between. These operations are called 'gates', mostly for historical reasons.
Here's an example of what a circuit looks like for standard, bit-based computers. You aren't expected to understand what it does. It should simply give you an idea of what these circuits look like.

For quantum computers, we use the same basic idea but have different conventions for how to represent inputs, outputs, and the symbols used for operations. Here is the quantum circuit that represents the same process as above.

In the rest of this section, we will explain how to build circuits. At the end, you'll know how to create the circuit above, what it does, and why it is useful.
## 3. Your first quantum circuit <a id="first-circuit"></a>
In a circuit, we typically need to do three jobs: First, encode the input, then do some actual computation, and finally extract an output. For your first quantum circuit, we'll focus on the last of these jobs. We start by creating a circuit with eight qubits and eight outputs.
```
n = 8
n_q = n
n_b = n
qc_output = QuantumCircuit(n_q,n_b)
```
This circuit, which we have called `qc_output`, is created by Qiskit using `QuantumCircuit`. The number `n_q` defines the number of qubits in the circuit. With `n_b` we define the number of output bits we will extract from the circuit at the end.
The extraction of outputs in a quantum circuit is done using an operation called `measure`. Each measurement tells a specific qubit to give an output to a specific output bit. The following code adds a `measure` operation to each of our eight qubits. The qubits and bits are both labelled by the numbers from 0 to 7 (because that’s how programmers like to do things). The command `qc.measure(j,j)` adds a measurement to our circuit `qc` that tells qubit `j` to write an output to bit `j`.
```
for j in range(n):
qc_output.measure(j,j)
```
Now that our circuit has something in it, let's take a look at it.
```
qc_output.draw()
```
Qubits are always initialized to give the output ```0```. Since we don't do anything to our qubits in the circuit above, this is exactly the result we'll get when we measure them. We can see this by running the circuit many times and plotting the results in a histogram. We will find that the result is always ```00000000```: a ```0``` from each qubit.
```
sim = Aer.get_backend('qasm_simulator') # this is the simulator we'll use
qobj = assemble(qc_output) # this turns the circuit into an object our backend can run
result = sim.run(qobj).result() # we run the experiment and get the result from that experiment
# from the results, we get a dictionary containing the number of times (counts)
# each result appeared
counts = result.get_counts()
# and display it on a histogram
plot_histogram(counts)
```
The reason for running many times and showing the result as a histogram is because quantum computers may have some randomness in their results. In this case, since we aren’t doing anything quantum, we get just the ```00000000``` result with certainty.
Note that this result comes from a quantum simulator, which is a standard computer calculating what an ideal quantum computer would do. Simulations are only possible for small numbers of qubits (~30 qubits), but they are nevertheless a very useful tool when designing your first quantum circuits. To run on a real device you simply need to replace ```Aer.get_backend('qasm_simulator')``` with the backend object of the device you want to use.
## 4. Example: Creating an Adder Circuit <a id="adder"></a>
### 4.1 Encoding an input <a id="encoding"></a>
Now let's look at how to encode a different binary string as an input. For this, we need what is known as a NOT gate. This is the most basic operation that you can do in a computer. It simply flips the bit value: ```0``` becomes ```1``` and ```1``` becomes ```0```. For qubits, it is an operation called ```x``` that does the job of the NOT.
Below we create a new circuit dedicated to the job of encoding and call it `qc_encode`. For now, we only specify the number of qubits.
```
qc_encode = QuantumCircuit(n)
qc_encode.x(7)
qc_encode.draw()
```
Extracting results can be done using the circuit we have from before: `qc_output`. Adding the two circuits using `qc_encode + qc_output` creates a new circuit with everything needed to extract an output added at the end.
```
qc = qc_encode + qc_output
qc.draw()
```
Now we can run the combined circuit and look at the results.
```
qobj = assemble(qc)
counts = sim.run(qobj).result().get_counts()
plot_histogram(counts)
```
Now our computer outputs the string ```10000000``` instead.
The bit we flipped, which comes from qubit 7, lives on the far left of the string. This is because Qiskit numbers the bits in a string from right to left. Some prefer to number their bits the other way around, but Qiskit's system certainly has its advantages when we are using the bits to represent numbers. Specifically, it means that qubit 7 is telling us about how many $2^7$s we have in our number. So by flipping this bit, we’ve now written the number 128 in our simple 8-bit computer.
Now try out writing another number for yourself. You could do your age, for example. Just use a search engine to find out what the number looks like in binary (if it includes a ‘0b’, just ignore it), and then add some 0s to the left side if you are younger than 64.
```
qc_encode = QuantumCircuit(n)
qc_encode.x(1)
qc_encode.x(5)
qc_encode.draw()
```
Now we know how to encode information in a computer. The next step is to process it: To take an input that we have encoded, and turn it into an output that we need.
### 4.2 Remembering how to add <a id="remembering-add"></a>
To look at turning inputs into outputs, we need a problem to solve. Let’s do some basic maths. In primary school, you will have learned how to take large mathematical problems and break them down into manageable pieces. For example, how would you go about solving the following?
```
9213
+ 1854
= ????
```
One way is to do it digit by digit, from right to left. So we start with 3+4
```
9213
+ 1854
= ???7
```
And then 1+5
```
9213
+ 1854
= ??67
```
Then we have 2+8=10. Since this is a two-digit answer, we need to carry the one over to the next column.
```
9213
+ 1854
= ?067
¹
```
Finally we have 9+1+1=11, and get our answer
```
9213
+ 1854
= 11067
¹
```
This may just be simple addition, but it demonstrates the principles behind all algorithms. Whether the algorithm is designed to solve mathematical problems or process text or images, we always break big tasks down into small and simple steps.
To run on a computer, algorithms need to be compiled down to the smallest and simplest steps possible. To see what these look like, let’s do the above addition problem again but in binary.
```
10001111111101
+ 00011100111110
= ??????????????
```
Note that the second number has a bunch of extra 0s on the left. This just serves to make the two strings the same length.
Our first task is to do the 1+0 for the column on the right. In binary, as in any number system, the answer is 1. We get the same result for the 0+1 of the second column.
```
10001111111101
+ 00011100111110
= ????????????11
```
Next, we have 1+1. As you’ll surely be aware, 1+1=2. In binary, the number 2 is written ```10```, and so requires two bits. This means that we need to carry the 1, just as we would for the number 10 in decimal.
```
10001111111101
+ 00011100111110
= ???????????011
¹
```
The next column now requires us to calculate ```1+1+1```. This means adding three numbers together, so things are getting complicated for our computer. But we can still compile it down to simpler operations, and do it in a way that only ever requires us to add two bits together. For this, we can start with just the first two 1s.
```
1
+ 1
= 10
```
Now we need to add this ```10``` to the final ```1``` , which can be done using our usual method of going through the columns.
```
10
+ 01
= 11
```
The final answer is ```11``` (also known as 3).
Now we can get back to the rest of the problem. With the answer of ```11```, we have another carry bit.
```
10001111111101
+ 00011100111110
= ??????????1011
¹¹
```
So now we have another 1+1+1 to do. But we already know how to do that, so it’s not a big deal.
In fact, everything left so far is something we already know how to do. This is because, if you break everything down into adding just two bits, there are only four possible things you’ll ever need to calculate. Here are the four basic sums (we’ll write all the answers with two bits to be consistent).
```
0+0 = 00 (in decimal, this is 0+0=0)
0+1 = 01 (in decimal, this is 0+1=1)
1+0 = 01 (in decimal, this is 1+0=1)
1+1 = 10 (in decimal, this is 1+1=2)
```
This is called a *half adder*. If our computer can implement this, and if it can chain many of them together, it can add anything.
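Classically, this table is small enough to write as a two-line Python function (an ordinary classical sketch, not the quantum circuit we build below):

```python
def half_adder(a, b):
    """Add two single bits: return (carry, sum)."""
    return a & b, a ^ b   # carry is 1 only for 1+1; sum flips when the bits differ

# Reproduce the four basic sums from the table above
for a in (0, 1):
    for b in (0, 1):
        carry, s = half_adder(a, b)
        print(f'{a}+{b} = {carry}{s}')
```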
### 4.3 Adding with Qiskit <a id="adding-qiskit"></a>
Let's make our own half adder using Qiskit. This will include a part of the circuit that encodes the input, a part that executes the algorithm, and a part that extracts the result. The first part will need to be changed whenever we want to use a new input, but the rest will always remain the same.

The two bits we want to add are encoded in the qubits 0 and 1. The above example encodes a ```1``` in both these qubits, and so it seeks to find the solution of ```1+1```. The result will be a string of two bits, which we will read out from the qubits 2 and 3. All that remains is to fill in the actual program, which lives in the blank space in the middle.
The dashed lines in the image are just to distinguish the different parts of the circuit (although they can have more interesting uses too). They are made by using the `barrier` command.
The basic operations of computing are known as logic gates. We’ve already used the NOT gate, but this is not enough to make our half adder. We could only use it to manually write out the answers. Since we want the computer to do the actual computing for us, we’ll need some more powerful gates.
To see what we need, let’s take another look at what our half adder needs to do.
```
0+0 = 00
0+1 = 01
1+0 = 01
1+1 = 10
```
The rightmost bit in all four of these answers is completely determined by whether the two bits we are adding are the same or different. So for ```0+0``` and ```1+1```, where the two bits are equal, the rightmost bit of the answer comes out ```0```. For ```0+1``` and ```1+0```, where we are adding different bit values, the rightmost bit is ```1```.
To get this part of our solution correct, we need something that can figure out whether two bits are different or not. Traditionally, in the study of digital computation, this is called an XOR gate.
| Input 1 | Input 2 | XOR Output |
|:-------:|:-------:|:------:|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
In quantum computers, the job of the XOR gate is done by the controlled-NOT gate. Since that's quite a long name, we usually just call it the CNOT. In Qiskit its name is ```cx```, which is even shorter. In circuit diagrams, it is drawn as in the image below.
```
qc_cnot = QuantumCircuit(2)
qc_cnot.cx(0,1)
qc_cnot.draw()
```
This is applied to a pair of qubits. One acts as the *control qubit* (the one with the little dot). The other acts as the *target qubit* (with the big circle).
There are multiple ways to explain the effect of the CNOT. One is to say that it looks at its two input bits to see whether they are the same or different. Next, it overwrites the target qubit with the answer. The target becomes ```0``` if they are the same, and ```1``` if they are different.
<img src="images/cnot_xor.svg">
Another way of explaining the CNOT is to say that it does a NOT on the target if the control is ```1```, and does nothing otherwise. This explanation is just as valid as the previous one (in fact, it’s the one that gives the gate its name).
Try the CNOT out for yourself by trying each of the possible inputs. For example, here's a circuit that tests the CNOT with the input ```01```.
```
qc = QuantumCircuit(2,2)
qc.x(0)
qc.cx(0,1)
qc.measure(0,0)
qc.measure(1,1)
qc.draw()
```
If you execute this circuit, you’ll find that the output is ```11```. We can think of this happening because of either of the following reasons.
- The CNOT calculates whether the input values are different and finds that they are, which means that it wants to output ```1```. It does this by writing over the state of qubit 1 (which, remember, is on the left of the bit string), turning ```01``` into ```11```.
- The CNOT sees that qubit 0 is in state ```1```, and so applies a NOT to qubit 1. This flips the ```0``` of qubit 1 into a ```1```, and so turns ```01``` into ```11```.
Here is a table showing all the possible inputs and corresponding outputs of the CNOT gate:
| Input (q1 q0) | Output (q1 q0) |
|:-------------:|:--------------:|
| 00 | 00 |
| 01 | 11 |
| 10 | 10 |
| 11 | 01 |
For our half adder, we don’t want to overwrite one of our inputs. Instead, we want to write the result on a different pair of qubits. For this, we can use two CNOTs.
```
qc_ha = QuantumCircuit(4,2)
# encode inputs in qubits 0 and 1
qc_ha.x(0) # For a=0, remove this line. For a=1, leave it.
qc_ha.x(1) # For b=0, remove this line. For b=1, leave it.
qc_ha.barrier()
# use cnots to write the XOR of the inputs on qubit 2
qc_ha.cx(0,2)
qc_ha.cx(1,2)
qc_ha.barrier()
# extract outputs
qc_ha.measure(2,0) # extract XOR value
qc_ha.measure(3,1)
qc_ha.draw()
```
We are now halfway to a fully working half adder. We just have the other bit of the output left to do: the one that will live on qubit 3.
If you look again at the four possible sums, you’ll notice that there is only one case for which this is ```1``` instead of ```0```: ```1+1```=```10```. It happens only when both the bits we are adding are ```1```.
To calculate this part of the output, we could just get our computer to look at whether both of the inputs are ```1```. If they are — and only if they are — we need to do a NOT gate on qubit 3. That will flip it to the required value of ```1``` for this case only, giving us the output we need.
For this, we need a new gate: like a CNOT but controlled on two qubits instead of just one. This will perform a NOT on the target qubit only when both controls are in state ```1```. This new gate is called the *Toffoli*. For those of you who are familiar with Boolean logic gates, it is basically an AND gate.
In Qiskit, the Toffoli is represented with the `ccx` command.
```
qc_ha = QuantumCircuit(4,2)
# encode inputs in qubits 0 and 1
qc_ha.x(0) # For a=0, remove this line. For a=1, leave it.
qc_ha.x(1) # For b=0, remove this line. For b=1, leave it.
qc_ha.barrier()
# use cnots to write the XOR of the inputs on qubit 2
qc_ha.cx(0,2)
qc_ha.cx(1,2)
# use ccx to write the AND of the inputs on qubit 3
qc_ha.ccx(0,1,3)
qc_ha.barrier()
# extract outputs
qc_ha.measure(2,0) # extract XOR value
qc_ha.measure(3,1) # extract AND value
qc_ha.draw()
```
In this example, we are calculating ```1+1```, because the two input bits are both ```1```. Let's see what we get.
```
qobj = assemble(qc_ha)
counts = sim.run(qobj).result().get_counts()
plot_histogram(counts)
```
The result is ```10```, which is the binary representation of the number 2. We have built a computer that can solve the famous mathematical problem of 1+1!
Now you can try it out with the other three possible inputs, and show that our algorithm gives the right results for those too.
The half adder contains everything you need for addition. With the NOT, CNOT, and Toffoli gates, we can create programs that add any set of numbers of any size.
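As a classical illustration of that claim (plain Python, not a quantum circuit): chaining half adders into full adders gives multi-bit addition from nothing more than XOR and AND.

```python
def half_adder(a, b):
    return a ^ b, a & b            # (sum bit, carry bit)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2             # (sum bit, carry out)

def add_bits(x_bits, y_bits):
    """Ripple-carry addition of two equal-length little-endian bit lists."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 3 + 5 = 8, with bits stored least-significant first
print(add_bits([1, 1, 0], [1, 0, 1]))  # -> [0, 0, 0, 1]
```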
These three gates are enough to do everything else in computing too. In fact, we can even do without the CNOT. Additionally, the NOT gate is only really needed to create bits with value ```1```. The Toffoli gate is essentially the atom of mathematics. It is the simplest element, from which every other problem-solving technique can be compiled.
As we'll see, in quantum computing we split the atom.
```
import qiskit
qiskit.__qiskit_version__
```
```
%cd -q data/actr_reco
import matplotlib.pyplot as plt
import tqdm
import numpy as np
with open("users.txt", "r") as f:
users = f.readlines()
hist = []
for user in tqdm.tqdm(users):
user = user.strip()
ret = !wc -l user_split/listening_events_2019_{user}.tsv
    lc, _ = ret[0].split()
hist.append(int(lc))
len(hist), sum(hist)
plt.hist(hist, bins=100)
plt.show()
subset = [x for x in hist if x < 30_000 and x >= 1_000]
len(subset)
plt.hist(subset, bins=100)
plt.show()
plt.hist(subset, bins=5)
plt.show()
plt.hist(subset, bins=10)
plt.show()
plt.hist(subset, bins=10)
```
# Stratification
```
def stratification_numbers(data, min_value, max_value, bins, num_samples):
subset = [x for x in data if x >= min_value and x < max_value]
percentage = num_samples / len(subset)
bin_size = int((max_value-min_value)/bins)
num_per_bin = []
old_boundary = min_value
for new_boundary in range(min_value+bin_size, max_value+1, bin_size):
data_in_bin = [x for x in subset if x >= old_boundary and x < new_boundary]
num_per_bin.append(len(data_in_bin))
old_boundary = new_boundary
assert sum(num_per_bin) == len(subset)
samples_per_bin = np.array(num_per_bin)*percentage
floor_samples_per_bin = np.floor(samples_per_bin)
error = int(round(sum(samples_per_bin) - sum(floor_samples_per_bin)))
if error == 0:
assert sum(floor_samples_per_bin) == num_samples
return floor_samples_per_bin
remainders = np.remainder(samples_per_bin, 1)
to_adjust = np.argsort(remainders)[::-1][:error]
for ta in to_adjust:
floor_samples_per_bin[ta] += 1
assert sum(floor_samples_per_bin) == num_samples
return floor_samples_per_bin
samples_per_bin = stratification_numbers(hist, 1_000, 30_000, 10, num_samples=100)
samples_per_bin, sum(samples_per_bin)
stratification_numbers(hist, 1_000, 30_000, 10, 2)
```
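The largest-remainder rounding used above (floor each bin's ideal share first, then hand the leftover samples to the bins with the biggest fractional parts) can be illustrated on a toy example, independent of the listening-events data:

```python
import math

counts = [7, 5, 3]                    # items per bin (made-up values)
target = 5                            # total number of samples wanted
raw = [c * target / sum(counts) for c in counts]   # ideal fractional shares
alloc = [math.floor(r) for r in raw]
leftover = round(sum(raw)) - sum(alloc)
# give the remaining samples to the bins with the largest fractional parts
order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
for i in order[:leftover]:
    alloc[i] += 1
print(alloc)  # the allocations sum exactly to the target
```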
# Iterative Stratified Sampling
```
test_hist = hist[len(test_users):]
assert len(test_hist) == len(test_users)
test_user_interaction = list(zip(test_users, test_hist))
test_user_interaction[:2]
!wc -l user_split/listening_events_2019_61740.tsv
def get_bin_boundaries_from_config(bin_config=None):
if not bin_config:
bin_config = {"min_value": 1_000, "max_value": 30_000, "bins": 10}
bin_size = int((bin_config["max_value"]-bin_config["min_value"])/bin_config["bins"])
return list(range(bin_config["min_value"], bin_config["max_value"]+1, bin_size))
def check_in_bin(item_value, target_bin, bin_config=None):
bin_boundaries = get_bin_boundaries_from_config()
return item_value >= bin_boundaries[target_bin] and item_value < bin_boundaries[target_bin+1]
assert check_in_bin(2400, 0)
assert not check_in_bin(5000, 0)
assert check_in_bin(29_000, 9)
def get_next_for_bin(user_interactions, target_bin):
iterlist = user_interactions.copy()
for ui in user_interactions:
if check_in_bin(ui[1], target_bin):
iterlist.remove(ui)
return ui[0], iterlist
    raise StopIteration("No remaining items for bin.")
def list_index_difference(list1, list2):
changed_indices = []
for index, (first, second) in enumerate(zip(list1, list2)):
if first != second:
changed_indices.append(index)
return changed_indices
assert list_index_difference([0,1], [0,0]) == [1]
def iterative_sampling(user_interactions, max_size=1000, num_bins=10):
iterlist = user_interactions.copy()
bins = num_bins*[0]
sampled_list = []
mult_index_changes = []
for i in tqdm.tqdm(range(1, max_size+1)):
updated_bins = stratification_numbers(hist, 1_000, 30_000, 10, num_samples=i)
changed_indices = list_index_difference(bins, updated_bins)
if len(changed_indices) != 1:
mult_index_changes.append(i)
# print(f"Multi-index change at pos {i}: {changed_indices} (old: {bins} vs new: {updated_bins}")
target_bin = changed_indices[0] # empirically increase the first change index, assuming items are in descending order
bins[target_bin] += 1
item, iterlist = get_next_for_bin(iterlist, target_bin)
sampled_list.append(item)
print(len(mult_index_changes))
print(mult_index_changes[-3:])
print(bins)
return sampled_list
sampled_list = iterative_sampling(test_user_interaction, 150)
len(sampled_list)
# overlap
len(set(test_users[:300]).intersection(set(sampled_list[:150])))
with open("sampled.txt", "w") as f:
f.write("".join(sampled_list))
!head sampled.txt
!wc -l sampled.txt
```
```
import pandas as pd
```
## Load in the "rosetta stone" file
I made this file using QGIS, the open-source mapping software. I loaded in the US Census 2010 block-level shapefile for Hennepin County. I then used the block centroids, provided by the census, to collect them within each zone. Since the centroids are, by nature, "half a block" from the nearest street, this is more reliable than a polygon-in-polygon calculation. I then inspected the map visually for outliers.
I'll write up my steps for that soon.
```
rosetta_df = pd.read_csv('../data/minneapolis/rosetta_nabes.csv')
rosetta_df
```
## Load in the population data
I downloaded the population files from [census.data.gov](https://census.data.gov).
Here are the [P3 and P5 census table files for Cook County](https://s3.amazonaws.com/media.johnkeefe.net/census-by-precinct/17031_Cook_County.zip). And here is the ["productDownload_2020-06-07T173132" zip file](https://s3.amazonaws.com/media.johnkeefe.net/census-by-precinct/productDownload_2020-06-07T173132.zip). It's a little messy, and the census doesn't label the files well, but I'm providing them as I got them. The CSVs you need are in there! Adjust your paths accordingly.
```
# census P3 for county by block
p3_df = pd.read_csv('/Volumes/JK_Smarts_Data/precinct_project/MN/productDownload_2020-06-19T224000/DECENNIALSF12010.P3_data_with_overlays_2020-06-19T223910.csv')
p3_df
p3_df.reset_index()
p3_df.drop(0, inplace=True)
p5_df = pd.read_csv('/Volumes/JK_Smarts_Data/precinct_project/MN/productDownload_2020-06-19T224000/DECENNIALSF12010.P5_data_with_overlays_2020-06-19T223910.csv')
p5_df.reset_index()
p5_df.drop(0, inplace=True)
p3_df.shape, p5_df.shape
population_df = p3_df.merge(p5_df, on='GEO_ID')
population_df.shape
population_df
rosetta_df.shape
rosetta_df.dtypes
population_df.dtypes
population_df['GEOID10'] = population_df['GEO_ID'].str[9:].astype(int)
population_df.drop(columns=['NAME_y'], inplace = True)
# add demographic data to each Minneapolis neighborhood block
block_data = rosetta_df.merge(population_df, on="GEOID10", how="left")
block_data.shape
block_data
# need to make all those columns numeric
block_data[['P003001', 'P003002', 'P003003', 'P003004',
'P003005', 'P003006', 'P003007', 'P003008', 'P005001', 'P005002',
'P005003', 'P005004', 'P005005', 'P005006', 'P005007', 'P005008',
'P005009', 'P005010', 'P005011', 'P005012', 'P005013', 'P005014',
'P005015', 'P005016', 'P005017']] = block_data[['P003001', 'P003002', 'P003003', 'P003004',
'P003005', 'P003006', 'P003007', 'P003008', 'P005001', 'P005002',
'P005003', 'P005004', 'P005005', 'P005006', 'P005007', 'P005008',
'P005009', 'P005010', 'P005011', 'P005012', 'P005013', 'P005014',
'P005015', 'P005016', 'P005017']].apply(pd.to_numeric)
block_data.to_csv('./temp_data/mpls_2010blocks_2020nabes_population.csv', index=False)
```
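The `GEOID10` derivation above relies on the census `GEO_ID` format: a summary-level prefix (e.g. `1000000US`, nine characters) followed by the FIPS-based block identifier, so slicing off the first nine characters leaves the numeric GEOID. A quick illustration with a made-up Hennepin County block ID:

```python
geo_id = "1000000US270531044001021"  # hypothetical block: 27 = MN, 053 = Hennepin
geoid10 = int(geo_id[9:])            # drop the '1000000US' prefix
print(geoid10)                       # -> 270531044001021
```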
-----------------------
**Note**: I stopped here because I'm going to publish the rest using Datasette
Done!
# Sample for KFServing SDK
This is a sample for KFServing SDK.
The notebook shows how to use KFServing SDK to create, get, rollout_canary, promote and delete InferenceService.
```
from kubernetes import client
from kfserving import KFServingClient
from kfserving import constants
from kfserving import utils
from kfserving import V1alpha2EndpointSpec
from kfserving import V1alpha2PredictorSpec
from kfserving import V1alpha2TensorflowSpec
from kfserving import V1alpha2InferenceServiceSpec
from kfserving import V1alpha2InferenceService
from kubernetes.client import V1ResourceRequirements
```
Define the namespace where the InferenceService will be deployed. If not specified, the function below uses the current namespace when the SDK is running inside the cluster; otherwise it deploys to the `default` namespace.
```
namespace = utils.get_default_target_namespace()
```
## Define InferenceService
First define the default endpoint spec, and then define the InferenceService based on that endpoint spec.
```
api_version = constants.KFSERVING_GROUP + '/' + constants.KFSERVING_VERSION
default_endpoint_spec = V1alpha2EndpointSpec(
predictor=V1alpha2PredictorSpec(
tensorflow=V1alpha2TensorflowSpec(
storage_uri='gs://kfserving-samples/models/tensorflow/flowers',
resources=V1ResourceRequirements(
requests={'cpu':'100m','memory':'1Gi'},
limits={'cpu':'100m', 'memory':'1Gi'}))))
isvc = V1alpha2InferenceService(api_version=api_version,
kind=constants.KFSERVING_KIND,
metadata=client.V1ObjectMeta(
name='flower-sample', namespace=namespace),
spec=V1alpha2InferenceServiceSpec(default=default_endpoint_spec))
```
## Create InferenceService
Call KFServingClient to create InferenceService.
```
KFServing = KFServingClient()
KFServing.create(isvc)
```
## Check the InferenceService
```
KFServing.get('flower-sample', namespace=namespace, watch=True, timeout_seconds=120)
```
## Add Canary to InferenceService
First define the canary endpoint spec, then roll out 10% of the traffic to the canary version and watch the rollout process.
```
canary_endpoint_spec = V1alpha2EndpointSpec(
predictor=V1alpha2PredictorSpec(
tensorflow=V1alpha2TensorflowSpec(
storage_uri='gs://kfserving-samples/models/tensorflow/flowers-2',
resources=V1ResourceRequirements(
requests={'cpu':'100m','memory':'1Gi'},
limits={'cpu':'100m', 'memory':'1Gi'}))))
KFServing.rollout_canary('flower-sample', canary=canary_endpoint_spec, percent=10,
namespace=namespace, watch=True, timeout_seconds=120)
```
## Roll out more traffic to the canary of the InferenceService
Roll out 50% of the traffic to the canary version.
```
KFServing.rollout_canary('flower-sample', percent=50, namespace=namespace,
watch=True, timeout_seconds=120)
```
## Promote Canary to Default
```
KFServing.promote('flower-sample', namespace=namespace, watch=True, timeout_seconds=120)
```
## Delete the InferenceService
```
KFServing.delete('flower-sample', namespace=namespace)
```
# Assignment Submission for FMUP
## Kishlaya Jaiswal
### Chennai Mathematical Institute - MCS201909
---
# Solution 1
I have chosen the following stocks from Nifty50:
- Kotak Mahindra Bank Ltd (KOTAKBANK)
- Hindustan Unilever Ltd (HINDUNILVR)
- Nestle India Limited (NESTLEIND)
Note:
- I am doing these computations on Apr 2, 2021, and hence using the closing price for this day as my strike price.
- I am using the historical data for the month of February to find the volatility of each of these stocks (volatility computation is done at the end)
```
import QuantLib as ql
# function to find the price and greeks for a given option
# with it's strike/spot price and it's volatility
def find_price_greeks(spot_price, strike_price, volatility, option_type):
# construct the European Option
payoff = ql.PlainVanillaPayoff(option_type, strike_price)
exercise = ql.EuropeanExercise(maturity_date)
european_option = ql.VanillaOption(payoff, exercise)
# quote the spot price
spot_handle = ql.QuoteHandle(
ql.SimpleQuote(spot_price)
)
flat_ts = ql.YieldTermStructureHandle(
ql.FlatForward(calculation_date, risk_free_rate, day_count)
)
dividend_yield = ql.YieldTermStructureHandle(
ql.FlatForward(calculation_date, dividend_rate, day_count)
)
flat_vol_ts = ql.BlackVolTermStructureHandle(
ql.BlackConstantVol(calculation_date, calendar, volatility, day_count)
)
# create the Black Scholes process
bsm_process = ql.BlackScholesMertonProcess(spot_handle,
dividend_yield,
flat_ts,
flat_vol_ts)
# set the engine to use the above process
european_option.setPricingEngine(ql.AnalyticEuropeanEngine(bsm_process))
return european_option
tickers = ["KOTAKBANK", "HINDUNILVR", "NESTLEIND"]
# spot price = closing price as on Mar 1, 2021
spot = {"KOTAKBANK":1845.35,
"HINDUNILVR":2144.70,
"NESTLEIND":16288.20}
# strike price = closing price as on Apr 2, 2021
strike = {"KOTAKBANK":1804.45,
"HINDUNILVR":2399.45,
"NESTLEIND":17102.15}
# historical volatility from the past month's data
vol = {"KOTAKBANK":0.38,
"HINDUNILVR":0.15,
"NESTLEIND":0.18}
# date of option purchase
calculation_date = ql.Date(1,3,2021)
# exercise date
# this excludes the holidays in the Indian calendar
calendar = ql.India()
period = ql.Period(65, ql.Days)
maturity_date = calendar.advance(calculation_date, period)
# rate of interest
risk_free_rate = 0.06
# other settings
dividend_rate = 0.0
day_count = ql.Actual365Fixed()
ql.Settings.instance().evaluationDate = calculation_date
# store final variables for future calculations
delta = {}
gamma = {}
vega = {}
# print settings
format_type_head = "{:<15}" + ("{:<12}" * 7)
format_type = "{:<15}{:<12}" + ("{:<12.2f}" * 6)
print(format_type_head.format("Name", "Type", "Price", "Delta", "Gamma", "Rho", "Theta", "Vega"))
print()
for ticker in tickers:
option = find_price_greeks(spot[ticker], strike[ticker], vol[ticker], ql.Option.Call)
print(format_type.format(ticker, "Call", option.NPV(),
option.delta(), option.gamma(),
option.rho(), option.theta(), option.vega()))
delta[ticker] = option.delta()
gamma[ticker] = option.gamma()
vega[ticker] = option.vega()
option = find_price_greeks(spot[ticker], strike[ticker], vol[ticker], ql.Option.Put)
print(format_type.format(ticker, "Put", option.NPV(),
option.delta(), option.gamma(),
option.rho(), option.theta(), option.vega()))
print()
```
### Delta Gamma Vega neutrality
First we make the Gamma and Vega neutral by taking
- x units of KOTAKBANK
- y units of HINDUNILVR
- 1 unit of NESTLEIND
To solve for x,y we have the following:
```
import numpy as np
G1, G2, G3 = gamma["KOTAKBANK"], gamma["HINDUNILVR"], gamma["NESTLEIND"]
V1, V2, V3 = vega["KOTAKBANK"], vega["HINDUNILVR"], vega["NESTLEIND"]
# Solve the following equation:
# G1 x + G2 y + G3 = 0
# V1 x + V2 y + V3 = 0
A = np.array([[G1, G2], [V1, V2]])
b = np.array([-G3, -V3])
z = np.linalg.solve(A, b)
print("x = {:.2f}".format(z[0]))
print("y = {:.2f}".format(z[1]))
print()
final_delta = z[0]*delta["KOTAKBANK"] + z[1]*delta["HINDUNILVR"] + delta["NESTLEIND"]
print("Delta of portfolio is {:.2f}".format(final_delta))
```
## Final Strategy
- Take a short position of 18.46 units of Kotak Mahindra Bank Ltd Call Option
- Take a long position of 17.34 units of Hindustan Unilever Ltd Call Option
- Take a long position of 1 unit of Nestle India Limited Call Option
- Take a long position of 9.13 units of Nestle India Limited Stock
This will yield a portfolio with Delta, Gamma and Vega neutral.
# Solution 2
Using Taylor expansion, we get
$$\Delta P = \frac{\partial P}{\partial y} \Delta y + \frac12 \frac{\partial^2 P}{\partial y^2}(\Delta y)^2$$
$$\implies \frac{\Delta P}{P} = -D \Delta y + \frac12 C (\Delta y)^2$$
where $D$ denotes duration and $C$ denotes convexity of a bond.
We remark that the durations of the bonds we are comparing are the same and fixed.
---
<p>With that being said, let's say the interest rates fall. Then we have $$\Delta y < 0 \implies - D \Delta y + \frac12 C (\Delta y)^2 > 0 \implies \Delta P > 0$$
Now for the bond with greater convexity, $\frac12 C (\Delta y)^2$ is larger, hence $\Delta P$ is larger, and so we get that "Greater convexity translates into greater price gains as interest rates fall"
</p>
---
Now suppose interest rates rise, that is $\Delta y > 0$. Then $-D \Delta y < 0$, that is, the price of the bond decreases. But for the bond with greater convexity, the larger $\frac12 C (\Delta y)^2$ term partly offsets this, so the price decline is smaller for the bond with higher convexity.
This explains "Lessened price declines as interest rates rise"
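The two effects can be checked numerically. A minimal sketch with made-up values D = 5 and C = 60 (not taken from the assignment data):

```python
D, C = 5.0, 60.0  # duration and convexity (hypothetical values)

def price_change(dy):
    """Relative price change from the second-order Taylor expansion."""
    return -D * dy + 0.5 * C * dy**2

print(price_change(-0.01))  # rates fall 1%: +0.053, gain boosted by convexity
print(price_change(+0.01))  # rates rise 1%: -0.047, loss cushioned by convexity
```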
# Solution 3
```
import QuantLib as ql
# function to calculate coupon value
def find_coupon(pv, r, m, n):
discount_factor = (r/m) / (1 - (1 + r/m)**(-n*m))
C = pv * discount_factor
return C
# loan settings
loan_amt = 0.8*1000000
rate = 0.12
pay = find_coupon(loan_amt, rate, 12, 5)
month = ql.Date(15,8,2021)
period = ql.Period('1m')
# print settings
print("Monthly coupon is: {:.2f}".format(pay))
print()
format_type = "{:<15}" * 4
print(format_type.format("Date", "Interest", "Principal", "Remaining"))
while loan_amt > 0:
interest = loan_amt * rate / 12
principal = pay - interest
loan_amt = loan_amt - principal
print(format_type.format(month.ISO(), "{:.2f}".format(interest), "{:.2f}".format(principal), "{:.2f}".format(loan_amt)))
if round(loan_amt) == 0:
break
month = month + period
```
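As a sanity check on the annuity formula used in `find_coupon`: discounting all n·m coupon payments at the periodic rate r/m should recover the loan amount exactly. A pure-Python sketch with the same numbers (80% of 1,000,000 at 12% for 5 years, monthly):

```python
pv, r, m, n = 800_000, 0.12, 12, 5
C = pv * (r / m) / (1 - (1 + r / m) ** (-n * m))   # same formula as find_coupon
# discount all 60 payments back at the monthly rate; should recover the loan
pv_check = sum(C / (1 + r / m) ** k for k in range(1, n * m + 1))
print(round(C, 2), round(pv_check, 2))  # pv_check rounds back to 800000.0
```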
### Volatility Computation for Problem 1
```
import math
def get_volatility(csv):
data = csv.split('\n')[1:]
data = map(lambda x: x.split(','), data)
closing_prices = list(map(lambda x: float(x[-2]), data))
n = len(closing_prices)
log_returns = []
for i in range(1,n):
log_returns.append(math.log(closing_prices[i]/closing_prices[i-1]))
mu = sum(log_returns)/(n-1)
tmp = map(lambda x: (x-mu)**2, log_returns)
vol = math.sqrt(sum(tmp)/(n-1)) * math.sqrt(252)
return vol
kotak_csv = '''Date,Open,High,Low,Close,Adj Close,Volume
2021-02-01,1730.000000,1810.000000,1696.250000,1801.349976,1801.349976,220763
2021-02-02,1825.000000,1878.650024,1801.349976,1863.500000,1863.500000,337556
2021-02-03,1875.000000,1882.349976,1820.099976,1851.849976,1851.849976,147146
2021-02-04,1857.900024,1914.500000,1831.050049,1911.250000,1911.250000,188844
2021-02-05,1921.000000,1997.900024,1915.000000,1982.550049,1982.550049,786773
2021-02-08,1995.000000,2029.949951,1951.949951,1956.300049,1956.300049,212114
2021-02-09,1950.000000,1975.000000,1938.000000,1949.199951,1949.199951,62613
2021-02-10,1954.550049,1961.849976,1936.300049,1953.650024,1953.650024,143830
2021-02-11,1936.000000,1984.300049,1936.000000,1961.300049,1961.300049,120121
2021-02-12,1966.000000,1974.550049,1945.599976,1951.449951,1951.449951,86860
2021-02-15,1954.000000,1999.000000,1954.000000,1986.199951,1986.199951,135074
2021-02-16,1995.000000,2048.949951,1995.000000,2021.650024,2021.650024,261589
2021-02-17,2008.500000,2022.400024,1969.500000,1989.150024,1989.150024,450365
2021-02-18,1980.000000,1982.349976,1938.000000,1945.300049,1945.300049,193234
2021-02-19,1945.000000,1969.599976,1925.050049,1937.300049,1937.300049,49189
2021-02-22,1941.000000,1961.650024,1921.650024,1948.550049,1948.550049,44651
2021-02-23,1955.000000,1961.900024,1867.000000,1873.150024,1873.150024,118138
2021-02-24,1875.199951,1953.949951,1852.000000,1919.000000,1919.000000,454695
2021-02-25,1935.000000,1964.949951,1886.900024,1895.349976,1895.349976,195212
2021-02-26,1863.000000,1868.000000,1773.099976,1782.349976,1782.349976,180729'''
hind_csv = '''Date,Open,High,Low,Close,Adj Close,Volume
2021-02-01,2265.000000,2286.000000,2226.550049,2249.149902,2249.149902,130497
2021-02-02,2271.000000,2275.000000,2207.699951,2231.850098,2231.850098,327563
2021-02-03,2234.000000,2256.699951,2218.199951,2232.600098,2232.600098,121232
2021-02-04,2234.000000,2258.449951,2226.949951,2247.050049,2247.050049,533609
2021-02-05,2252.000000,2285.000000,2241.000000,2270.350098,2270.350098,254911
2021-02-08,2275.000000,2287.000000,2233.000000,2237.800049,2237.800049,211465
2021-02-09,2247.000000,2254.000000,2211.199951,2216.649902,2216.649902,171285
2021-02-10,2216.649902,2240.000000,2213.449951,2235.899902,2235.899902,185915
2021-02-11,2245.000000,2267.500000,2235.000000,2262.399902,2262.399902,121168
2021-02-12,2270.000000,2270.649902,2232.199951,2241.899902,2241.899902,33016
2021-02-15,2252.000000,2261.500000,2212.100098,2215.850098,2215.850098,91240
2021-02-16,2225.000000,2228.399902,2190.500000,2196.899902,2196.899902,101652
2021-02-17,2191.000000,2200.000000,2160.300049,2164.649902,2164.649902,138504
2021-02-18,2165.000000,2168.449951,2143.050049,2147.750000,2147.750000,110272
2021-02-19,2150.000000,2193.649902,2148.000000,2181.149902,2181.149902,150398
2021-02-22,2200.000000,2201.699951,2161.100098,2167.250000,2167.250000,98782
2021-02-23,2173.550049,2192.000000,2169.399902,2177.949951,2177.949951,22743
2021-02-24,2179.000000,2183.949951,2104.250000,2181.600098,2181.600098,329265
2021-02-25,2190.000000,2190.000000,2160.000000,2163.600098,2163.600098,357853
2021-02-26,2151.149902,2182.000000,2122.000000,2132.050049,2132.050049,158925'''
nestle_csv = '''Date,Open,High,Low,Close,Adj Close,Volume
2021-02-01,17162.099609,17277.000000,16996.449219,17096.949219,17096.949219,3169
2021-02-02,17211.000000,17328.099609,16800.000000,17189.349609,17189.349609,3852
2021-02-03,17247.449219,17284.000000,17064.349609,17155.400391,17155.400391,2270
2021-02-04,17250.000000,17250.000000,17054.800781,17073.199219,17073.199219,13193
2021-02-05,17244.000000,17244.000000,17019.949219,17123.300781,17123.300781,2503
2021-02-08,17199.949219,17280.000000,17107.349609,17213.550781,17213.550781,7122
2021-02-09,17340.000000,17510.699219,17164.050781,17325.800781,17325.800781,2714
2021-02-10,17396.900391,17439.300781,17083.800781,17167.699219,17167.699219,3341
2021-02-11,17167.699219,17442.000000,17165.550781,17416.650391,17416.650391,2025
2021-02-12,17449.849609,17500.000000,17241.000000,17286.099609,17286.099609,3486
2021-02-15,17290.000000,17500.000000,17280.000000,17484.500000,17484.500000,1927
2021-02-16,17600.000000,17634.599609,17141.250000,17222.449219,17222.449219,7901
2021-02-17,16900.000000,16900.000000,16360.000000,16739.900391,16739.900391,28701
2021-02-18,17050.000000,17050.000000,16307.000000,16374.150391,16374.150391,13711
2021-02-19,16395.000000,16477.599609,16214.450195,16386.099609,16386.099609,5777
2021-02-22,16400.000000,16531.050781,16024.599609,16099.200195,16099.200195,9051
2021-02-23,16123.000000,16250.000000,16003.000000,16165.250000,16165.250000,6261
2021-02-24,16249.000000,16800.000000,15900.000000,16369.950195,16369.950195,18003
2021-02-25,16394.699219,16394.699219,16102.000000,16114.349609,16114.349609,18735
2021-02-26,16075.000000,16287.200195,16010.000000,16097.700195,16097.700195,13733'''
print("Annualized Volatility of KOTAKBANK is {:.2f}%".format(get_volatility(kotak_csv)*100))
print("Annualized Volatility of HINDUNILVR is {:.2f}%".format(get_volatility(hind_csv)*100))
print("Annualized Volatility of NESTLEIND is {:.2f}%".format(get_volatility(nestle_csv)*100))
```
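To make the annualisation step concrete, here is the same computation on a tiny made-up price series (not the market data above): take daily log returns, compute their sample standard deviation, and scale by √252 trading days.

```python
import math

prices = [100.0, 102.0, 101.0, 103.0, 104.0]  # hypothetical closing prices
rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
mu = sum(rets) / len(rets)
var = sum((r - mu) ** 2 for r in rets) / (len(rets) - 1)  # sample variance
annualised = math.sqrt(var) * math.sqrt(252)
print(f"{annualised:.2%}")
```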
# Sequana_coverage versus CNVnator (viral genome)
This notebook compares CNVnator, CNOGpro and sequana_coverage behaviour on a viral genome instance (same as in the virus notebook).
Versions used:
- sequana 0.7.0
```
%pylab inline
matplotlib.rcParams['figure.figsize'] = [10,7]
```
Here below, we provide the results of the sequana_coverage and CNVnator analyses as files within this directory. If you need to rerun the analysis yourself, you can, but you will need to generate the BAM file for CNVnator (see the virus notebook). For sequana_coverage, you can use the BED file directly, which is also used below for plotting. This BED file can simply be downloaded as follows.
```
!wget https://github.com/sequana/resources/raw/master/coverage/JB409847.bed.bz2
!bunzip2 JB409847.bed.bz2
```
# The coverage signal
Let us have a quick look at the coverage itself.
```
from sequana import GenomeCov
b = GenomeCov("JB409847.bed")
chromosome = b.chr_list[0]
chromosome.run(4001, 2, circular=True)
chromosome.plot_coverage()
_ = ylim([0,1500])
```
What you see are two long deleted regions of about 700 bases (one fully deleted) and two 3-bases deleted regions on the right hand side. Our goal is to detect those 4 events automatically.
# Get ROIs using sequana_coverage
We ran the analysis with circular chromosome set on and a window parameter of
- 1000
- 2000
- 3000
- 4000
For example:
```
sequana_coverage -o --input JB409847.bed --no-html --no-multiqc -w 3000
```
Results are available in 4 files for your convenience: e.g. rois_1000.csv, rois_4000.csv
Note that, from the command line, the window size cannot be larger than a fifth of the genome length. Should you need to force it, you would have to use the library itself.
# Get ROIs using CNVnator
```
cnvnator -root out1.root -tree JB409847.bed
cnvnator -root out1.root -his 1
cnvnator -root out1.root -stat 1
cnvnator -root out1.root -partition 1 -ngc
cnvnator -root out1.root -call 1 -ngc > events_bin1.txt
```
The default parameter of 100 gives no detection. We then used bin=1, 5, 10, 20 and stored the results in:
- events_bin1.txt
- events_bin5.txt
- events_bin10.txt
- events_bin20.txt
# How are events detected using CNVnator?
```
from sequana.cnv import CNVnator
cnv1 = CNVnator("events_bin1.txt").df
cnv5 = CNVnator("events_bin5.txt").df
cnv10 = CNVnator("events_bin10.txt").df
cnv20 = CNVnator("events_bin20.txt").df
def plot_rois_cnv(cnv):
chromosome.plot_coverage()
for _, this in cnv.iterrows():
type_ = this['type']
positions = [this.start, this.end]
if type_ == "deletion":
fill_between(positions, 0, 1000)
else:
fill_between(positions, 1000,2000)
ylim([0,2000])
```
## Binning of 20 and 10
```
plot_rois_cnv(cnv10)
```
Here, the two main events on the left hand side (deleted regions of several hundreds of bases) are detected. However, the short deleted regions on the right hand side (a few bases) are not.
## Binning of 5
```
plot_rois_cnv(cnv5)
```
Here again, the two main deleted regions are detected, but the two short events are not. There are also false detections at position 0 and around position 4000.
## Binning of 1
```
plot_rois_cnv(cnv1)
```
A binning of 1 is too small: the long deleted regions are detected, but the small deleted regions are not, and many regions are spuriously classified as duplicated CNVs.
# How are events detected using sequana_coverage?
```
# W can be 1000,2000,3000,4000
import pandas as pd
def plot_rois_sequana(W):
assert W in [2000, 3000, 4000, 1000,5000, "4000_t3"]
if W == "4000_t3":
chromosome.run(4001, circular=True)
else:
chromosome.run(W+1, circular=True)
chromosome.plot_coverage(sample=False)
rois = pd.read_csv("rois_{}.csv".format(W))
for _, this in rois.iterrows():
#if this.max_zscore >-4.5 and this.max_zscore<4.5 and this.max_cov!=0:
# continue
positions = [this.start, this.end]
if abs(this.mean_zscore) >12: col="red"
elif abs(this.mean_zscore) >8: col="orange"
elif abs(this.mean_zscore) >4: col="yellow"
if this.mean_zscore > 0:
fill_between(positions, 1000, 2000, color=col)
else:
fill_between(positions, 0, 1000, color=col)
if this.end-this.start < 100:
axvline(this.start, ymax=0.5, lw=3, color="r")
ylim([0,2000])
plot_rois_sequana(5000)
plot_rois_sequana(4000)
plot_rois_sequana(3000)
```
Using W=2000, 3000 or 4000, the results are robust and consistent: the same ROIs are detected. In particular, the 2 long and 2 short deleted regions are all found.
In addition, some short events that are not fully deleted are also systematically detected. The detection seems visually coherent, except maybe for the duplicated event at position 5000, which could be considered a false detection. Note, however, that the zscore associated with this event could be used to discard it (mean zscore of 5).
```
rois = pd.read_csv("rois_4000.csv")
rois[["start", "end", "size", "mean_rm", "max_zscore", "log2_ratio"]]
```
Now, if we decrease the W parameter to 1000, we may miss the large deleted regions,
whose lengths are 711 and 782 (to detect such events we recommend setting W to about
twice this length, i.e., roughly 1500).
```
plot_rois_sequana(1000)
```
So, here we still detect the two deleted regions, but we see that the running
median fits the data too closely. Consequently, we get some
false detections at positions 0, 6600 and 17923, to cite a few examples.
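The effect of the window size on a running median can be illustrated with a toy signal, independent of the coverage data: a short window follows a dip in the signal (so the dip no longer stands out against the baseline), while a long window stays flat across it.

```python
from statistics import median

signal = [10] * 5 + [0] * 3 + [10] * 5   # a short dip in otherwise flat coverage

def running_median(x, w):
    """Running median with window w (truncated at the boundaries)."""
    h = w // 2
    return [median(x[max(0, i - h):i + h + 1]) for i in range(len(x))]

print(running_median(signal, 3))   # short window: dips to 0 with the signal
print(running_median(signal, 11))  # long window: stays at 10 across the dip
```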
# CNOGpro detection
Running CNOGpro manually with window lengths of 100 and 10, we get the following
results:
```
CNOGpro_10 = [[3521, 4220, 0], [4221, 4230, 5],
[4241,4250,0], [4251,4260,3],
[5681,5740,0], [19771,19795,0]]
CNOGpro_100 = [[3601,4200,0], [4201,4300,3], [4301,4400,2]]
plot_rois_sequana(4000)
for this in CNOGpro_10:
plot(this[0:2], [1000, 1000], "ob-")
plot_rois_sequana(4000)
for this in CNOGpro_100:
axhline(1000, this[0], 1000, color="r", lw=20)
plot(this[0:2], [1000, 1000], "ob-")
```
# Conclusions
On a viral genome (length 18000), we used sequana_coverage and CNVnator to detect events of interest that can be seen by eye in the coverage signal: one deleted region of about 700 bases, one depleted region of about 700 bases, and two short deleted regions of a few bases each.
Sequana_coverage detects those 4 events with the default parameter (window length set to a fifth of the full genome length). It is also robust with respect to the parameter (2000, 3000, 4000 that give the same results). Note, however, that a window of 1000 is too short and lead to possibly false detections. Those false detections could be easily removed using the mean zscore associated with these events.
CNVnator detects the two long deleted events. However, the two short deleted ones are systematically missed, which is not surprising since CNVnator is designed to detect long regions. Depending of the bin parameter, there are a few false detections. Decreasing to a bin of 1, the results are difficult to assess since most of the genome is classified as a mix of regions of interests.
CNOGpro detects the two long deleted events. However, their exact duration is not correct. Short events are missed using bins of 5, 10, 100, 200.
**computation time:**
- sequana_coverage: (with --no-html and --no-multiqc options), sequana_coverage takes 2.5 seconds irrespective of the window parameter
- cnvnator: 8.5 s for bin=5 or 10, 18 s for bin=1
<a href="https://colab.research.google.com/github/gcfer/reinforcement-learning/blob/main/RL_A2C_2N_TF2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Reinforcement Learning: A2C (Actor-Critic Method) — Two Networks
## Overview
In this notebook, we'll cover the actor-critic framework and implement `A2C`, a state-of-the-art actor-critic algorithm. We'll test it by solving the cartpole problem in the OpenAI Gym.
```
import numpy as np
import pandas as pd
import datetime
# Import tensorflow
#!pip install tensorflow-gpu==1.14.0 > /dev/null 2>&1
import tensorflow as tf
import tensorflow.keras as K
print(tf.__version__)
# Check that tf sees the GPU
device_name = tf.test.gpu_device_name()
print(device_name)
# Import libraries for plotting
import matplotlib as mpl
import matplotlib.pyplot as plt
plt.style.use('seaborn-pastel')
%matplotlib inline
%config InlineBackend.figure_format = 'retina' # this makes plot in high res
```
Since we are in a remote notebook, we cannot display the progress of the environment in real time. Instead, we store the renderings and show a video at the end of the episode (refer to [this](https://star-ai.github.io/Rendering-OpenAi-Gym-in-Colaboratory/) guide in case you need it). The only advice that I can give is to import `gym` _after_ the update below.
```
#remove " > /dev/null 2>&1" to see what is going on under the hood
!pip install gym pyvirtualdisplay > /dev/null 2>&1
!apt-get install -y xvfb python-opengl ffmpeg > /dev/null 2>&1
# Maybe
# !apt-get update > /dev/null 2>&1
# !apt-get install cmake > /dev/null 2>&1
# !pip install --upgrade setuptools 2>&1
# !pip install ez_setup > /dev/null 2>&1
# !pip install gym[atari] > /dev/null 2>&1
# Open AI gym
import gym
from gym import logger as gymlogger
from gym.wrappers import Monitor
gymlogger.set_level(40) #error only
import math
import random
import glob
import io
import base64
from IPython.display import HTML
from IPython import display as ipythondisplay
from pyvirtualdisplay import Display
display = Display(visible=0, size=(2880, 1800))
display.start()
```
The function below is needed to display the video. I slightly modified it from the original one (which you can find in the guide I linked above) to avoid the infinite repetition loop of the video.
```
"""
Utility functions to enable video recording of gym environment and displaying it
To enable video, just do "env = wrap_env(env)""
"""
def show_video():
mp4list = glob.glob('video/*.mp4')
if len(mp4list) > 0:
mp4 = mp4list[0]
video = io.open(mp4, 'r+b').read()
encoded = base64.b64encode(video)
ipythondisplay.display(HTML(data='''<video alt="test" autoplay
loop controls style="height: 400px;">
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii'))))
else:
print("Could not find video")
def wrap_env(env):
env = Monitor(env, './video', force=True)
return env
```
## OpenAI Gym Cartpole
The Cartpole problem is a discrete control problem where we try to keep the pole vertical by moving the cart below it.
Upon loading the environment, we launch a simulation where the agent chooses at random from the action sample space the next action. Finally, we show the video of the result. What happens is that the problem is considered unsolved (= game over) if the angle between pole and the line orthogonal to the cart axis is larger than a threshold. The parameter `done` specifies when the experiment is over.
```
# Load the environment and start
env = wrap_env(gym.make("CartPole-v0"))
observation = env.reset()

while True:
    env.render()
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        break

env.close()
show_video()
```
To better understand the inputs and outputs of the environment, let us consider the action space and the observation space.
The action space is of type `Discrete(n)` where `n` is the number of actions. This is equivalent to the discrete set $\{ 0, 1, 2, \dotsc, n-1 \}$.
```
env.action_space
```
The observation space is of type `Box(n)`, which means that it is the Cartesian product of `n` intervals.
```
env.observation_space
[env.observation_space.low, env.observation_space.high]
```
When we make a step in the environment, the feedback that we get includes the observation:
```
#env = gym.make('CartPole-v0')
# env = Monitor(env, './video', force=True)
env.reset()
obs, r, done, _ = env.step(0)
print(obs)
```
## Actor-Critic Algorithms
Actor-critic algorithms are policy gradient algorithms where the actor is driven by the value- or Q-function estimated by the critic. The role of the actor is to find the policy according to the policy gradient method. The role of the critic is to discover the value- or Q-function, and feed back the information about “how good” the action taken in the current state was.
In policy gradient algorithms, the policy is updated on the basis of the discounted reward that is experienced. Let us recall that the policy $\pi_\theta$ is fit such that
$$ \theta^* = \arg\max_\theta \mathbf{E}\!\left[R(\tau)\sum_{t=0}^T \log \pi_\theta(a_t|s_t)\right] $$
where
$$ R(\tau) =\sum_{t=0}^T\gamma^t r(s_t,a_t) $$
and $\tau=(s_0, a_0, r_0, \cdots, s_{T}, a_{T}, r_{T})$.
In practice, we fit the network so as to minimize the loss
$$ L(\tau) = - \sum_{t=0}^T R(\tau_{t}) \log (\pi_\theta(a_t)), $$
where $\tau_t=(s_t, a_t, r_t, \cdots, s_{T}, a_{T}, r_{T})$, at the end of each episode, and repeat the process over many episodes. We also know that, instead of using $R$, we can use a modified pre-log factor so as to reduce the variance in the learning process. This is accomplished through baselines, that is some function $b(s_t)$ that is removed from $R$:
$$ L(\tau) = - \sum_{t=0}^T [R(\tau_{t})-b(s_t)] \log (\pi_\theta(a_t)). $$
The fundamental problem with this approach is that the actor just tries actions and waits to see the reward at the end of the episode. This is equivalent to performing a task and learning by trial and error what to do better next time.
It would be much better if, while performing the task, we could receive feedback. The critic network is introduced to do exactly that. How does the critic perform its critique, though? It implements a DQN to find the Q-function or, in a simpler version, the value function.
In other words, this is equivalent to having a teacher, but the teacher is learning while teaching. It is intuitive then that the critic needs to learn sooner than the actor how to perform its task, and so we'll set the learning rate of the critic larger than the one of the actor.
What the critic does is to infer the value function, which is $V(s_t)=\mathbf{E}[R(s_t)]$ under the best policy. Suppose the critic converges and learns it. Then we can set the value function as the baseline, $b(s):=V(s)$. The resulting pre-log factor, $A(\tau_t) = R(\tau_t) - V(s_t)$, is called the advantage, and the loss becomes
$$ L(\tau) = - \sum_{t=0}^T A(\tau_{t}) \log (\pi_\theta(a_t)). $$
This is why `A2C` is called advantage actor-critic.
```
# A2C - Two networks
class A2C_2N:

    def __init__(self, state_size, action_size, gamma=None, max_steps=None):
        # max_steps is the maximum number of batches [s, a, r, s_] or epochs remembered
        # Parameters
        self.state_size = state_size
        self.action_size = action_size
        self.memory = list()
        if gamma is None:
            self.gamma = 0.99
        else:
            self.gamma = gamma
        if max_steps is None:
            self.max_steps = 200
        else:
            self.max_steps = max_steps
        # learning rates
        self.actor_lr = 0.0008
        self.critic_lr = 0.0025
        # actor network
        self.actor = self.build_actor()
        # critic network
        self.critic = self.build_critic()

    def remember(self, s, a, r, s_, done):
        self.memory.append([s, a, r, s_, done])
        if len(self.memory) > self.max_steps:  # if too long
            self.memory.pop(0)  # forget the oldest

    def forget(self):
        self.memory = list()

    # actor learns the policy: input is state; output is distribution over actions (policy)
    def build_actor(self, n_hidden_1=None, n_hidden_2=None):
        if n_hidden_1 is None:
            n_hidden_1 = 6 * self.state_size
        if n_hidden_2 is None:
            n_hidden_2 = 6 * self.state_size
        model = K.Sequential()
        model.add(K.layers.Dense(n_hidden_1, activation=tf.nn.elu, input_dim=self.state_size))  # first hidden layer
        model.add(K.layers.Dense(n_hidden_2, activation=tf.nn.elu))
        model.add(K.layers.Dense(self.action_size, activation='softmax'))  # output
        # loss is categorical_crossentropy since pi_theta (vector) should eventually equal the one-hot action (vector)
        # because there is always a best action to be taken
        model.compile(optimizer=K.optimizers.RMSprop(lr=self.actor_lr), loss='categorical_crossentropy')
        return model

    # critic network
    def build_critic(self, n_hidden_1=None, n_hidden_2=None):
        if n_hidden_1 is None:
            n_hidden_1 = 6 * self.state_size
        if n_hidden_2 is None:
            n_hidden_2 = 6 * self.state_size
        model = K.Sequential()
        model.add(K.layers.Dense(n_hidden_1, activation=tf.nn.elu, input_dim=self.state_size))  # first hidden layer
        model.add(K.layers.Dense(n_hidden_2, activation=tf.nn.elu))
        model.add(K.layers.Dense(1, activation=tf.nn.elu))  # output
        model.compile(optimizer=K.optimizers.Adam(lr=self.critic_lr), loss='mse')
        return model

    # actor implements policy gradient
    def policy(self, s):
        policy = self.actor.predict(s, batch_size=1).flatten()
        a = np.random.choice(self.action_size, 1, p=policy)[0]
        return a

    # learn from memory
    def learn(self):
        # replay the entire episode
        s, a, r, s_, done = zip(*self.memory)
        a = np.reshape(a, (-1, 1))
        T = a.shape[0]  # epochs in memory
        a_one_hot = np.zeros((T, self.action_size))
        a_one_hot[np.arange(T), a.reshape(-1)] = 1  # size: T x action_size
        s = np.concatenate(s)  # or np.vstack(s)
        target_actor = a_one_hot  # actions
        cum_reward = np.cumsum((self.gamma ** np.arange(0, T)) * r) / (self.gamma ** np.arange(0, T))
        R = np.flip(cum_reward).reshape(-1, 1)
        v = self.critic.predict(s)
        A = R - v  # theoretical advantage (infinite-horizon problems)
        # s_ = np.concatenate(s_)
        # v_ = self.critic.predict(s_)
        # r = np.reshape(r, (-1, 1))
        # A = r + self.gamma * v_ - v  # advantage (same as above but works better in finite-horizon problems)
        self.actor.fit(s, target_actor, sample_weight=A, epochs=1, verbose=0)  # uses advantages
        self.critic.fit(s, R, epochs=1, verbose=0)  # trained to get the value function
```
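The vectorized `cumsum`/`flip` computation of `R` above is compact but hard to read; the discounted reward-to-go it is meant to represent, $R_t = r_t + \gamma R_{t+1}$, can also be written as a simple backward recursion (a standalone sketch, separate from the class):

```python
import numpy as np

def reward_to_go(r, gamma):
    """R[t] = r[t] + gamma * R[t+1], accumulated backwards from the episode end."""
    R = np.zeros(len(r))
    running = 0.0
    for t in reversed(range(len(r))):
        running = r[t] + gamma * running
        R[t] = running
    return R

print(reward_to_go([1.0, 2.0, 3.0], 0.5))  # R = [2.75, 3.5, 3.0]
```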
## Training
```
seed = 0
np.random.seed(seed)
tf.random.set_seed(seed)

# Restart environment
# env = Monitor(env, './video', force=True)
MAX_REWARD = 200
env._max_episode_steps = MAX_REWARD

# Parameters
n_episodes = 350
winning_streak = 10  # after this number of successive successes, training stops
reward_history = np.zeros(n_episodes)
gamma = 0.99
steps_in_memory = 200  # number of steps to remember
A = np.arange(env.action_space.n)
dim_state_space = env.observation_space.shape[0]

# Start training
agent = A2C_2N(dim_state_space, env.action_space.n, gamma, steps_in_memory)

# init
s = env.reset()
s = np.reshape(s, [1, dim_state_space])

template = "\rEpisode: {:3d}/{:3d} | Reward: {:3.0f} | Duration: {:.2f} s"

for e in range(n_episodes):
    start_time = datetime.datetime.now()
    s = env.reset()
    s = np.reshape(s, [1, dim_state_space])
    done = False
    cum_reward = 0
    while not done:
        a = agent.policy(s)
        s_, r, done, _ = env.step(a)
        s_ = np.reshape(s_, [1, dim_state_space])
        agent.remember(s, a, r, s_, done)
        cum_reward += r
        s = s_
    agent.learn()
    agent.forget()
    dt = datetime.datetime.now() - start_time
    print(template.format(e+1, n_episodes, cum_reward, dt.total_seconds()), end='')
    reward_history[e] = cum_reward

plt.plot(reward_history[0:e], label='Reward')
plt.xlabel('Episodes')
plt.ylabel('Cumulative reward')
plt.tight_layout()
plt.show()
```
## Trying it
```
env = wrap_env(gym.make("CartPole-v0"))
s = env.reset()
s = np.reshape(s, [1, dim_state_space])
done = False
cum_reward = 0

while not done:
    env.render()
    a = agent.policy(s)
    s_, r, done, _ = env.step(a)
    s_ = np.reshape(s_, [1, dim_state_space])
    agent.remember(s, a, r, s_, done)
    cum_reward += r
    s = s_

env.close()
print('We got a reward equal to {:.0f}'.format(cum_reward))
show_video()
```
# QCoDeS Example with Lakeshore 325
This is an example session with the model 325 Lakeshore temperature controller.
```
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from qcodes.instrument_drivers.Lakeshore.Model_325 import Model_325
lake = Model_325("lake", "GPIB0::12::INSTR")
```
## Sensor commands
```
# Check that the sensor is in the correct status
lake.sensor_A.status()
# What temperature is it reading?
lake.sensor_A.temperature()
lake.sensor_A.temperature.unit
# We can access the sensor objects through the sensor list as well
assert lake.sensor_A is lake.sensor[0]
```
## Heater commands
```
# In a closed loop configuration, heater 1 reads from...
lake.heater_1.input_channel()
lake.heater_1.unit()
# Get the PID values
print("P = ", lake.heater_1.P())
print("I = ", lake.heater_1.I())
print("D = ", lake.heater_1.D())
# Is the heater on?
lake.heater_1.output_range()
```
## Loading and updating sensor calibration values
```
curve = lake.sensor_A.curve
curve_data = curve.get_data()
curve_data.keys()
fig, ax = plt.subplots()
ax.plot(curve_data["Temperature (K)"], curve_data['log Ohm'], '.')
plt.show()
curve.curve_name()
curve_x = lake.curve[23]
curve_x_data = curve_x.get_data()
curve_x_data.keys()
temp = np.linspace(0, 100, 200)
new_data = {"Temperature (K)": temp, "log Ohm": 1/(temp+1)+2}
fig, ax = plt.subplots()
ax.plot(new_data["Temperature (K)"], new_data["log Ohm"], '.')
plt.show()
curve_x.format("log Ohm/K")
curve_x.set_data(new_data)
curve_x.format()
curve_x_data = curve_x.get_data()
fig, ax = plt.subplots()
ax.plot(curve_x_data["Temperature (K)"], curve_x_data['log Ohm'], '.')
plt.show()
```
## Go to a set point
```
import time
import numpy
from IPython.display import display
from ipywidgets import interact, widgets
from matplotlib import pyplot as plt
def live_plot_temperature_reading(channel_to_read, read_period=0.2, n_reads=1000):
    """
    Live plot the temperature reading from a Lakeshore sensor channel

    Args:
        channel_to_read
            Lakeshore channel object to read the temperature from
        read_period
            time in seconds between two reads of the temperature
        n_reads
            total number of reads to perform
    """
    # Make a widget for a text display that is constantly being updated
    text = widgets.Text()
    display(text)

    fig, ax = plt.subplots(1)
    line, = ax.plot([], [], '*-')
    ax.set_xlabel('Time, s')
    ax.set_ylabel(f'Temperature, {channel_to_read.temperature.unit}')
    fig.show()
    plt.ion()

    for i in range(n_reads):
        time.sleep(read_period)
        # Update the text field
        text.value = f'T = {channel_to_read.temperature()}'
        # Add new point to the data that is being plotted
        line.set_ydata(numpy.append(line.get_ydata(), channel_to_read.temperature()))
        line.set_xdata(numpy.arange(0, len(line.get_ydata()), 1)*read_period)
        ax.relim()  # Recalculate limits
        ax.autoscale_view(True, True, True)  # Autoscale
        fig.canvas.draw()  # Redraw
lake.heater_1.control_mode("Manual PID")
lake.heater_1.output_range("Low (2.5W)")
lake.heater_1.input_channel("A")
# The following seem to be good settings for our setup
lake.heater_1.P(400)
lake.heater_1.I(40)
lake.heater_1.D(10)
lake.heater_1.setpoint(15.0) # <- temperature
live_plot_temperature_reading(lake.sensor_A, n_reads=400)
```
## Querying the resistance and heater output
```
# to get the resistance of the system (25 or 50 Ohm)
lake.heater_1.resistance()
# to set the resistance of the system (25 or 50 Ohm)
lake.heater_1.resistance(50)
lake.heater_1.resistance()
# output in percent (%) of current or power, depending on setting, which can be queried by lake.heater_1.output_metric()
lake.heater_1.heater_output() # in %, 50 means 50%
```
Below is code with a link to a happy-or-sad dataset which contains 80 images: 40 happy and 40 sad.
Create a convolutional neural network that trains to 100% accuracy on these images, and that cancels training upon hitting a training accuracy of >.999.
Hint -- it will work best with 3 convolutional layers.
```
import tensorflow as tf
import os
import zipfile
from os import path, getcwd, chdir
# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab happy-or-sad.zip from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
path = f"{getcwd()}/../tmp2/happy-or-sad.zip"
zip_ref = zipfile.ZipFile(path, 'r')
zip_ref.extractall("/tmp/h-or-s")
zip_ref.close()
# GRADED FUNCTION: train_happy_sad_model
def train_happy_sad_model():
    # Please write your code only where you are indicated.
    # please do not remove # model fitting inline comments.

    DESIRED_ACCURACY = 0.999

    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            if logs.get('acc') > DESIRED_ACCURACY:
                print("\nReached {}% accuracy, stopping training".format(DESIRED_ACCURACY*100))
                self.model.stop_training = True

    callbacks = myCallback()

    # This Code Block should Define and Compile the Model. Please assume the images are 150 X 150 in your implementation.
    model = tf.keras.models.Sequential([
        # Your Code Here
        tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150,150,3)),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Conv2D(16, (3,3), activation='relu'),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Conv2D(16, (3,3), activation='relu'),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])

    from tensorflow.keras.optimizers import RMSprop
    model.compile(loss='binary_crossentropy', optimizer=RMSprop(lr=0.001), metrics=['acc'])

    # This code block should create an instance of an ImageDataGenerator called train_datagen
    # And a train_generator by calling train_datagen.flow_from_directory
    from tensorflow.keras.preprocessing.image import ImageDataGenerator
    train_datagen = ImageDataGenerator(rescale=1./255)

    # Please use a target_size of 150 X 150.
    train_generator = train_datagen.flow_from_directory(
        '/tmp/h-or-s',
        target_size=(150, 150),
        batch_size=10,
        class_mode='binary'
    )
    # Expected output: 'Found 80 images belonging to 2 classes'

    # This code block should call model.fit_generator and train for
    # a number of epochs.
    # model fitting
    history = model.fit_generator(
        train_generator,
        steps_per_epoch=8,
        epochs=15,
        verbose=1,
        callbacks=[callbacks]
    )
    # model fitting
    return history.history['acc'][-1]
# The expected output: "Reached 99.9% accuracy so cancelling training!"
train_happy_sad_model()
# Now click the 'Submit Assignment' button above.
# Once that is complete, please run the following two cells to save your work and close the notebook
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
```
## Single image processing [resize, crop]
```
import numpy as np
#from PIL import Image
import os, glob
import cv2
pic = cv2.imread('../../../data/data/1_d.jpg')
#img = cv2.cvtColor(pic, cv2.COLOR_GRAY2RGB)
# cv2.imshow('image', pic)
# cv2.waitKey(0)
ih, iw = pic.shape[0:2]  # shape is (height, width, channels)
w = h = 256
ul_img = pic[:h, :w, :]     # upper-left crop
ur_img = pic[:h, iw-w:, :]  # upper-right crop
cv2.imshow("upper-left", ul_img)
cv2.imshow("upper-right", ur_img)
cv2.imshow('image', pic)
cv2.waitKey(0)
im256 = cv2.resize(pic, (256, 256), interpolation=cv2.INTER_LANCZOS4)
im128 = cv2.resize(pic, (128, 128), interpolation=cv2.INTER_LANCZOS4)
cv2.imwrite('../../../data/data/1_128.jpg', im128)
cv2.imwrite('../../../data/data/1_256.jpg', im256)
cv2.imshow("128", im128)
cv2.imshow("256", im256)
cv2.imshow('image', pic)
cv2.waitKey(0)
img_shape = (256, 256)
im256_IN = cv2.resize(pic, img_shape, interpolation=cv2.INTER_NEAREST)
im256_IL = cv2.resize(pic, img_shape, interpolation=cv2.INTER_LINEAR)
im256_IA = cv2.resize(pic, img_shape, interpolation=cv2.INTER_AREA)
im256_IC = cv2.resize(pic, img_shape, interpolation=cv2.INTER_CUBIC)
im256_ILZ = cv2.resize(pic, img_shape, interpolation=cv2.INTER_LANCZOS4)  # kept separate from the INTER_LINEAR result
img_lis = [im256_ILZ, im256_IC, im256_IA, im256_IL, im256_IN]
n = 0
for i in img_lis:
    n += 1
    cv2.imshow(f"{n}", i)
cv2.imshow('image', pic)
cv2.waitKey(0)
```
## sample image style transform by standard model
```
import tensorflow as tf
import tensorflow_hub as hub
# hub_module = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2')
model = tf.keras.models.load_model('../../../data/models/magenta/')
model.summary()
img_shape = (256, 256)
style_img = cv2.imread('../../../data/data/3.jpg')
cnt_img = cv2.imread('../../../data/data/1.jpg')
st256_img = cv2.resize(style_img, img_shape, interpolation=cv2.INTER_LANCZOS4).astype(np.float32)[np.newaxis, ...]/255.
cnt256_img = cv2.resize(cnt_img, img_shape, interpolation=cv2.INTER_LANCZOS4).astype(np.float32)[np.newaxis, ...]/255.
type(st256_img)
print(st256_img.shape, cnt256_img.shape)
outputs = model(tf.constant(cnt256_img), tf.constant(st256_img))
outputs
out_img = np.squeeze(np.asarray(outputs))
from matplotlib import pyplot as plt
plt.imshow(out_img)
plt.axis("off")
```
## MSO dataset EDA
```
from os import listdir
from numpy import asarray
from numpy import savez_compressed
from PIL import Image
from matplotlib import pyplot
def load_image(filename):
    image = Image.open(filename)
    image = image.convert('RGB')
    pixels = asarray(image)
    return pixels

def load_imgs(dir):
    shapes = []
    for filename in listdir(dir):
        pixels = load_image(dir + filename)
        if pixels.shape not in shapes:
            shapes.append(pixels.shape)
    return shapes
shapes = load_imgs('../../../data/data/BAAT_dataset/')
print(shapes)
len(shapes)
```
```
import glob
import os.path as osp
import random
import numpy as np
import json
from PIL import Image
from tqdm import tqdm
import matplotlib.pyplot as plt
%matplotlib inline
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data
import torchvision
from torchvision import models, transforms
from utils.dataloader_image_classification import ImageTransform, make_datapath_list, HymenopteraDataset
# Set random seeds
torch.manual_seed(1234)
np.random.seed(1234)
random.seed(1234)
```
### Creating the dataset and data loaders
```
train_list = make_datapath_list('train')
val_list = make_datapath_list('val')
size = 224
mean = (0.485, 0.456, 0.406)
std = (0.229, 0.224, 0.225)
train_dataset = HymenopteraDataset(file_list = train_list, transform = ImageTransform(size, mean, std), phase='train')
val_dataset = HymenopteraDataset(file_list = val_list, transform = ImageTransform(size, mean, std), phase='val')
# Create the DataLoaders
batch_size = 32
train_dataloader = data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
val_dataloader = data.DataLoader(dataset=val_dataset, batch_size=batch_size, shuffle=False)
# Collect them in a dictionary
dataloaders_dict = {'train': train_dataloader, 'val': val_dataloader}
```
### Creating the network model
```
# Create an instance of the VGG-16 model
use_pretrained = True  # use pretrained parameters
net = models.vgg16(pretrained=use_pretrained)

# Replace the last output layer of VGG with two output units, for ants and bees (fully connected layer)
net.classifier[6] = nn.Linear(in_features=4096, out_features=2)

# Set the network to training mode
net.train()

print('Network setup complete: loaded pretrained weights and switched to training mode.')
```
### Defining the loss function
```
criterion = nn.CrossEntropyLoss()
```
### Setting up the optimizer
```
# Store the parameters to be fine-tuned in the params_to_update lists
params_to_update_1 = []
params_to_update_2 = []
params_to_update_3 = []

# Names of the parameters to train
update_param_names_1 = ['features']
update_param_names_2 = ['classifier.0.weight', 'classifier.0.bias', 'classifier.3.weight', 'classifier.3.bias']
update_param_names_3 = ['classifier.6.weight', 'classifier.6.bias']

# Store each parameter in the corresponding list
for name, param in net.named_parameters():
    if update_param_names_1[0] in name:
        param.requires_grad = True
        params_to_update_1.append(param)
        print('stored in params_to_update_1: ', name)
    elif name in update_param_names_2:
        param.requires_grad = True
        params_to_update_2.append(param)
        print('stored in params_to_update_2: ', name)
    elif name in update_param_names_3:
        param.requires_grad = True
        params_to_update_3.append(param)
        print('stored in params_to_update_3: ', name)
    else:
        param.requires_grad = False
        print('no gradient computed; not trained: ', name)

# Set up the optimizer
optimizer = optim.SGD([
    {'params': params_to_update_1, 'lr': 1e-4},
    {'params': params_to_update_2, 'lr': 5e-4},
    {'params': params_to_update_3, 'lr': 1e-3}
], momentum=0.9)
```
### Running training and validation
```
def train_model(net, dataloaders_dict, criterion, optimizer, num_epochs):

    # Initial setup
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print('device in use: ', device)

    # Move the network to the GPU
    net.to(device)

    # Speed things up once the network structure is more or less fixed
    torch.backends.cudnn.benchmark = True

    # Epoch loop
    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch+1, num_epochs))
        print('--------------------------------')

        # Training and validation loop for each epoch
        for phase in ['train', 'val']:
            if phase == 'train':
                net.train()
            else:
                net.eval()

            epoch_loss = 0.0    # running loss for the epoch
            epoch_corrects = 0  # running number of correct predictions

            # Skip training at epoch 0 to check the validation performance before any training
            if (epoch == 0) and (phase == 'train'):
                continue

            # Loop over mini-batches from the dataloader
            for inputs, labels in tqdm(dataloaders_dict[phase]):
                # Send the data to the GPU
                inputs = inputs.to(device)
                labels = labels.to(device)

                optimizer.zero_grad()

                # Forward pass
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = net(inputs)
                    loss = criterion(outputs, labels)
                    _, preds = torch.max(outputs, 1)

                    # Backpropagate during training
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                    # Accumulate results
                    # Update the running loss
                    epoch_loss += loss.item() * inputs.size(0)
                    # Update the number of correct predictions
                    epoch_corrects += torch.sum(preds == labels.data)

            # Report loss and accuracy for the epoch
            epoch_loss = epoch_loss / len(dataloaders_dict[phase].dataset)
            epoch_acc = epoch_corrects.double() / len(dataloaders_dict[phase].dataset)

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))

# Run training and validation
num_epochs = 2
train_model(net, dataloaders_dict, criterion, optimizer, num_epochs)
```
### Saving and loading the trained network
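A minimal sketch of the usual PyTorch save/load pattern (the file name `weights_fine_tuning.pth` and the stand-in layer are arbitrary choices; in this notebook it would apply to the fine-tuned `net`):

```python
import torch
import torch.nn as nn

# Stand-in for the fine-tuned network; any nn.Module works the same way
net = nn.Linear(4096, 2)

# Save only the learned parameters (state_dict), not the whole module
save_path = './weights_fine_tuning.pth'
torch.save(net.state_dict(), save_path)

# Load them back into a freshly constructed network with the same architecture
net_loaded = nn.Linear(4096, 2)
net_loaded.load_state_dict(torch.load(save_path))
```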
# Capstone Part 2a - Classical ML Models (MFCCs with Offset)
___
## Setup
```
# Basic packages
import numpy as np
import pandas as pd
# For splitting the data into training and test sets
from sklearn.model_selection import train_test_split
# For scaling the data as necessary
from sklearn.preprocessing import StandardScaler
# For doing principal component analysis as necessary
from sklearn.decomposition import PCA
# For visualizations
import matplotlib.pyplot as plt
%matplotlib inline
# For building a variety of models
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from sklearn.neighbors import KNeighborsClassifier
# For hyperparameter optimization
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
# For caching pipeline and grid search results
from tempfile import mkdtemp
# For model evaluation
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
# For getting rid of warning messages
import warnings
warnings.filterwarnings('ignore')
# For pickling models
import joblib
# Loading in the finished dataframe from part 1
ravdess_mfcc_df = pd.read_csv('C:/Users/Patrick/Documents/Capstone Data/ravdess_mfcc.csv')
```
___
# Building Models for Classifying Gender (Regardless of Emotion)
```
# Splitting the dataframe into features and target
X = ravdess_mfcc_df.iloc[:, :-2]
g = ravdess_mfcc_df['Gender']
```
The convention is to name the target variable 'y', but I will be declaring many different target variables throughout the notebook, so I opted for 'g' for simplicity instead of 'y_g' or 'y_gen', for example.
```
# # Encoding the genders
# gender_encoder = LabelEncoder()
# g = gender_encoder.fit_transform(g)
# # Checking the results
# g
# # Which number represents which gender?
# for num in np.unique(g):
# print(f'{num} represents {gender_encoder.inverse_transform([num])[0]}.')
```
Note: I realized that encoding the target is unnecessary; it is done automatically by the models.
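As a quick check of this (a toy example, not the notebook's data — the feature values are made up), scikit-learn estimators accept string labels directly and expose the discovered classes through `classes_`:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features and string targets; no LabelEncoder needed
X_toy = np.array([[0.0], [0.2], [0.8], [1.0]])
y_toy = np.array(['male', 'male', 'female', 'female'])

clf = LogisticRegression()
clf.fit(X_toy, y_toy)

# The model stores the class labels it discovered, in sorted order
print(clf.classes_)  # → ['female' 'male']
print(clf.predict([[0.9]])[0])
```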
```
# What test size should I use?
print(f'Length of g: {len(g)}')
print(f'30% of {len(g)} is {len(g)*0.3}')
```
I will use 30%.
```
# Splitting the data into training and test sets
X_train, X_test, g_train, g_test = train_test_split(X, g, test_size=0.3, stratify=g, random_state=1)
# Checking the shapes
print(X_train.shape)
print(X_test.shape)
print(g_train.shape)
print(g_test.shape)
```
I want to build a simple, initial classifier to get a sense of the performances I might get in more optimized models. To this end, I will build a logistic regression model without doing any cross-validation or hyperparameter optimization.
```
# Instantiate the model
initial_logreg = LogisticRegression()
# Fit to training set
initial_logreg.fit(X_train, g_train)
# Score on training set
print(f'Model accuracy on training set: {initial_logreg.score(X_train, g_train)*100}%')
# Score on test set
print(f'Model accuracy on test set: {initial_logreg.score(X_test, g_test)*100}%')
```
These are extremely high accuracies. The model has most likely overfit to the training set, but the accuracy on the test set is still surprisingly high.
Here are some possible explanations:
- The dataset (RAVDESS) is relatively small, with only 1440 data points (1438 if I do not count the two very short clips that I excluded). This model is likely not very robust and has easily overfit to the training set.
- The features I have extracted could be excellent predictors of gender.
- This could be a very simple classification task. After all, there are only two classes, and theoretically, features extracted from male and female voice clips should have distinguishable patterns.
I had originally planned to build more gender classification models for this dataset, but I will forgo this for now. In part 4, I will try using this model to classify clips from another dataset and examine its performance.
```
# Pickling the model for later use
joblib.dump(initial_logreg, 'pickle1_gender_logreg.pkl')
```
___
# Building Models for Classifying Emotion for Males
```
# Making a new dataframe that contains only male recordings
ravdess_mfcc_m_df = ravdess_mfcc_df[ravdess_mfcc_df['Gender'] == 'male'].reset_index().drop('index', axis=1)
ravdess_mfcc_m_df
# Splitting the dataframe into features and target
Xm = ravdess_mfcc_m_df.iloc[:, :-2]
em = ravdess_mfcc_m_df['Emotion']
# # Encoding the emotions
# emotion_encoder = LabelEncoder()
# em = emotion_encoder.fit_transform(em)
# # Checking the results
# em
# # Which number represents which emotion?
# for num in np.unique(em):
# print(f'{num} represents {emotion_encoder.inverse_transform([num])[0]}.')
```
Note: I realized that encoding the target is unnecessary; it is done automatically by the models.
```
# Splitting the data into training and test sets
Xm_train, Xm_test, em_train, em_test = train_test_split(Xm, em, test_size=0.3, stratify=em, random_state=1)
# Checking the shapes
print(Xm_train.shape)
print(Xm_test.shape)
print(em_train.shape)
print(em_test.shape)
```
As before, I will try building an initial model.
```
# Instantiate the model
initial_logreg_em = LogisticRegression()
# Fit to training set
initial_logreg_em.fit(Xm_train, em_train)
# Score on training set
print(f'Model accuracy on training set: {initial_logreg_em.score(Xm_train, em_train)*100}%')
# Score on test set
print(f'Model accuracy on test set: {initial_logreg_em.score(Xm_test, em_test)*100}%')
```
The model has overfit to the training set yet again, and this time the accuracy on the test set leaves a lot to be desired. Let's evaluate the model further using a confusion matrix and a classification report.
```
# Having initial_logreg_em make predictions based on the test set features
em_pred = initial_logreg_em.predict(Xm_test)
# Building the confusion matrix as a dataframe
emotions = ['angry', 'calm', 'disgusted', 'fearful', 'happy', 'neutral', 'sad', 'surprised']
em_confusion_df = pd.DataFrame(confusion_matrix(em_test, em_pred))
em_confusion_df.columns = [f'Predicted {emotion}' for emotion in emotions]
em_confusion_df.index = [f'Actual {emotion}' for emotion in emotions]
em_confusion_df
# Classification report
print(classification_report(em_test, em_pred))
```
In a binary classification problem, there is one negative class and one positive class. This is not the case here, because this is a multiclass classification problem. In the table above, each row of precision and recall scores assumes the corresponding emotion is the positive class, and groups all other emotions as the negative class.
Precision is the following measure: Of all the data points that the model classified as belonging to the positive class (i.e., the true and false positives), what proportion is correct (i.e., truly positive)?
Recall is the following measure: Of all the data points that are truly positive (i.e., the true positives plus the false negatives), what proportion did the model correctly classify as positive (i.e., the true positives)?
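As a concrete check of these two definitions, the scores for a single class can be computed by hand and compared against scikit-learn. The labels below are made up for illustration, treating 'calm' as the positive class:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array(['calm', 'calm', 'calm', 'calm', 'angry', 'sad', 'angry'])
y_pred = np.array(['calm', 'calm', 'calm', 'sad', 'calm', 'sad', 'angry'])

tp = np.sum((y_pred == 'calm') & (y_true == 'calm'))  # 3 true positives
fp = np.sum((y_pred == 'calm') & (y_true != 'calm'))  # 1 false positive
fn = np.sum((y_pred != 'calm') & (y_true == 'calm'))  # 1 false negative

precision = tp / (tp + fp)  # of everything predicted 'calm', what was right
recall = tp / (tp + fn)     # of everything truly 'calm', what was found

print(precision, recall)  # 0.75 0.75

# Cross-check against scikit-learn's per-class scores
assert precision == precision_score(y_true, y_pred, labels=['calm'], average=None)[0]
assert recall == recall_score(y_true, y_pred, labels=['calm'], average=None)[0]
```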
It appears that the initial model is strongest at classifying calm voice clips, and weakest at classifying neutral voice clips. In order of strongest to weakest: calm, angry, fearful, disgusted, surprised, happy, sad, and neutral.
I will now try building new models and optimizing hyperparameters to obtain better performance. I will use a pipeline and multiple grid searches to accomplish this.
Before I build all my models in bulk, I want to see if doing principal component analysis (PCA) could be beneficial. I will do PCA on both unscaled and scaled features, and plot the resulting explained variance ratios. I have two goals here:
- Get a sense of whether scaling would be beneficial for model performance
- Get a sense of how many principal components I should use
```
# PCA on unscaled features
# Instantiate PCA and fit to Xm_train
pca = PCA().fit(Xm_train)
# Transform Xm_train
Xm_train_pca = pca.transform(Xm_train)
# Transform Xm_test
Xm_test_pca = pca.transform(Xm_test)
# Standard scaling
# Instantiate the scaler and fit to Xm_train
scaler = StandardScaler().fit(Xm_train)
# Transform Xm_train
Xm_train_scaled = scaler.transform(Xm_train)
# Transform Xm_test
Xm_test_scaled = scaler.transform(Xm_test)
# PCA on scaled features
# Instantiate PCA and fit to Xm_train_scaled
pca_scaled = PCA().fit(Xm_train_scaled)
# Transform Xm_train_scaled
Xm_train_scaled_pca = pca_scaled.transform(Xm_train_scaled)
# Transform Xm_test_scaled
Xm_test_scaled_pca = pca_scaled.transform(Xm_test_scaled)
# Plot the explained variance ratios
plt.subplots(1, 2, figsize = (15, 5))
# Unscaled
plt.subplot(1, 2, 1)
plt.bar(np.arange(1, len(pca.explained_variance_ratio_)+1), pca.explained_variance_ratio_)
plt.xlabel('Principal Component')
plt.ylabel('Explained Variance Ratio')
plt.title('PCA on Unscaled Features')
plt.ylim(top = 0.5) # Equalizing the y-axes
# Scaled
plt.subplot(1, 2, 2)
plt.bar(np.arange(1, len(pca_scaled.explained_variance_ratio_)+1), pca_scaled.explained_variance_ratio_)
plt.xlabel('Principal Component')
plt.ylabel('Explained Variance Ratio')
plt.title('PCA on Scaled Features')
plt.ylim(top = 0.5) # Equalizing the y-axes
plt.tight_layout()
plt.show()
```
Principal components are linear combinations of the original features, ordered by how much of the dataset's variance they explain. Looking at the two plots above, it appears that for the same number of principal components, those using unscaled features are able to explain more variance (i.e., capture more information) than those using scaled features. For example, looking at the first ~25 principal components of each plot, the bars of the left plot (unscaled) are higher and skewed more to the left than those of the right plot (scaled). Since the purpose of PCA is to reduce dimensionality of the data by keeping the components that explain the most variance and discarding the rest, the unscaled principal components might benefit my models more than the scaled principal components will.
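Two properties are being relied on here: the components come out ordered from most to least explained variance, and when every component is kept the ratios sum to 1. A quick sketch on random data (unrelated to the notebook's features) confirms both:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X_demo = rng.randn(200, 10) @ rng.randn(10, 10)  # 10 correlated features

ratios = PCA().fit(X_demo).explained_variance_ratio_

print(np.all(np.diff(ratios) <= 0))   # True: sorted largest to smallest
print(np.isclose(ratios.sum(), 1.0))  # True: all variance accounted for
```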
However, I have to be mindful of the underlying variance in my features. Some features have values in the -800s, while others are close to 0.
```
# Examining the variances
var_df = pd.DataFrame(ravdess_mfcc_m_df.var(numeric_only=True)).T
var_df
```
Since PCA is looking for high variance directions, it can become biased by the underlying variance in a given feature if I do not scale it down first. I can see that some features have much higher variance than others do, so there is likely a lot of bias in the unscaled principal components above.
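This bias is easy to demonstrate on synthetic data: when one feature's variance dwarfs the rest, the first unscaled principal component essentially reproduces that one feature, while standardizing first spreads the explained variance out. (A sketch only; it assumes nothing about the actual MFCC features.)

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(42)
X_demo = rng.randn(500, 3)
X_demo[:, 0] *= 100.0  # give one feature an enormous variance

# Unscaled: the first component is dominated by feature 0
first_pc = PCA().fit(X_demo).components_[0]
print(np.abs(first_pc))  # weight on feature 0 is ~1, the others ~0

# Scaled: no single feature can dominate
X_scaled = StandardScaler().fit_transform(X_demo)
ratios_scaled = PCA().fit(X_scaled).explained_variance_ratio_
print(ratios_scaled)  # roughly equal thirds
```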
How much variance is explained by certain numbers of unscaled and scaled principal components? This will help me determine how many principal components to try in my grid searches later.
```
# Unscaled
num_components = [503, 451, 401, 351, 301, 251, 201, 151, 101, 51]
for n in num_components:
    print(f'Variance explained by {n-1} unscaled principal components: {np.round(np.sum(pca.explained_variance_ratio_[:n-1])*100, 2)}%')
# Scaled
num_components = [503, 451, 401, 351, 301, 251, 201, 151, 101, 51]
for n in num_components:
    print(f'Variance explained by {n-1} scaled principal components: {np.round(np.sum(pca_scaled.explained_variance_ratio_[:n-1])*100, 2)}%')
```
I will now build a pipeline and multiple grid searches with five-fold cross-validation to optimize the hyperparameters. I will try five types of classifiers: logistic regression, support vector machine, random forest, XGBoost, and k-nearest neighbours. To get a better sense of how each type performs, I will make a grid search for each one. I will also try different numbers of principal components for unscaled and scaled features.
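The model counts noted in the comments below come from the Cartesian product that each dictionary in a parameter grid defines; `sklearn.model_selection.ParameterGrid` can enumerate them. A sketch with one illustrative block (each candidate is additionally fit once per cross-validation fold, so `cv=5` multiplies the fit count by five):

```python
from sklearn.model_selection import ParameterGrid
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# One grid block: 2 scalers * 1 reducer * 1 model * 9 C values = 18 candidates
block = {'scaler': [None, StandardScaler()],
         'dim_reducer': [None],
         'model': [LogisticRegression()],
         'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}

n_candidates = len(list(ParameterGrid(block)))
print(n_candidates)  # 18
```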
```
# Cache
cachedir = mkdtemp()
# Pipeline (these values are placeholders)
my_pipeline = Pipeline(steps=[('scaler', StandardScaler()), ('dim_reducer', PCA()), ('model', LogisticRegression())], memory=cachedir)
# Parameter grid for log reg
logreg_param_grid = [
# l1 without PCA
# unscaled and scaled * 9 regularization strengths = 18 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [LogisticRegression(penalty='l1', solver='liblinear')],
'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l1 unscaled with PCA
# 5 PCAs * 9 regularization strengths = 45 models
{'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 251, 50),
'model': [LogisticRegression(penalty='l1', solver='liblinear')], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l1 scaled with PCA
# 4 PCAs * 9 regularization strengths = 36 models
{'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50),
'model': [LogisticRegression(penalty='l1', solver='liblinear')], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l2 (default) without PCA
# unscaled and scaled * 9 regularization strengths = 18 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)],
'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l2 (default) unscaled with PCA
# 5 PCAs * 9 regularization strengths = 45 models
{'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 251, 50),
'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l2 (default) scaled with PCA
# 4 PCAs * 9 regularization strengths = 36 models
{'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50),
'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}
]
# Instantiate the log reg grid search
logreg_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=logreg_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the log reg grid search
fitted_logreg_grid_em = logreg_grid_search.fit(Xm_train, em_train)
# What was the best log reg?
fitted_logreg_grid_em.best_estimator_
print(f"The best log reg's accuracy on the training set: {fitted_logreg_grid_em.score(Xm_train, em_train)*100}%")
print(f"The best log reg's accuracy on the test set: {fitted_logreg_grid_em.score(Xm_test, em_test)*100}%")
# Pickling the best log reg for later use
joblib.dump(fitted_logreg_grid_em.best_estimator_, 'pickle2_male_emotion_logreg.pkl')
# Parameter grid for SVM
svm_param_grid = [
# unscaled and scaled * 9 regularization strengths = 18 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [SVC()], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# unscaled
# 5 PCAs * 9 regularization strengths = 45 models
{'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 251, 50), 'model': [SVC()],
'model__C':[0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# scaled
# 4 PCAs * 9 regularization strengths = 36 models
{'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50), 'model': [SVC()],
'model__C':[0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}
]
# Instantiate the SVM grid search
svm_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=svm_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the SVM grid search
fitted_svm_grid_em = svm_grid_search.fit(Xm_train, em_train)
# What was the best SVM?
fitted_svm_grid_em.best_estimator_
print(f"The best SVM's accuracy on the training set: {fitted_svm_grid_em.score(Xm_train, em_train)*100}%")
print(f"The best SVM's accuracy on the test set: {fitted_svm_grid_em.score(Xm_test, em_test)*100}%")
# Pickling the best SVM for later use
joblib.dump(fitted_svm_grid_em.best_estimator_, 'pickle3_male_emotion_svm.pkl')
# Parameter grid for random forest (scaling is unnecessary)
rf_param_grid = [
# 5 numbers of estimators * 5 max depths = 25 models
{'scaler': [None], 'dim_reducer': [None], 'model': [RandomForestClassifier(n_jobs=-1)], 'model__n_estimators': np.arange(100, 501, 100),
'model__max_depth': np.arange(5, 26, 5)},
# 5 PCAs * 5 numbers of estimators * 5 max depths = 125 models
{'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 251, 50), 'model': [RandomForestClassifier(n_jobs=-1)],
'model__n_estimators': np.arange(100, 501, 100), 'model__max_depth': np.arange(5, 26, 5)}
]
# Instantiate the rf grid search
rf_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=rf_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the rf grid search
fitted_rf_grid_em = rf_grid_search.fit(Xm_train, em_train)
# What was the best rf?
fitted_rf_grid_em.best_estimator_
print(f"The best random forest's accuracy on the training set: {fitted_rf_grid_em.score(Xm_train, em_train)*100}%")
print(f"The best random forest's accuracy on the test set: {fitted_rf_grid_em.score(Xm_test, em_test)*100}%")
# # Parameter grid for XGBoost (scaling is unnecessary)
# xgb_param_grid = [
# # 5 numbers of estimators * 5 max depths = 25 models
# {'scaler': [None], 'dim_reducer': [None], 'model': [XGBClassifier(n_jobs=-1)], 'model__n_estimators': np.arange(100, 501, 100),
# 'model__max_depth': np.arange(5, 26, 5)},
# # 3 PCAs * 5 numbers of estimators * 5 max depths = 75 models
# # I am trying fewer PCAs for XGBoost
# {'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': [200, 250, 300], 'model': [XGBClassifier(n_jobs=-1)],
# 'model__n_estimators': np.arange(100, 501, 100), 'model__max_depth': np.arange(5, 26, 5)}
# ]
# # Instantiate the XGB grid search
# xgb_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=xgb_param_grid, cv=5, n_jobs=-1, verbose=5)
# # Fit the XGB grid search
# fitted_xgb_grid_em = xgb_grid_search.fit(Xm_train, em_train)
```
The grid search above never finished, so I commented it out. I will try again without passing `n_jobs=-1` into `XGBClassifier()` (nesting a parallel estimator inside `GridSearchCV`'s own `n_jobs=-1` can oversubscribe the available cores), and with a higher number (10 instead of 5) for `verbose` in `GridSearchCV()`.
```
# Parameter grid for XGBoost (scaling is unnecessary)
xgb_param_grid = [
# 5 numbers of estimators * 5 max depths = 25 models
{'scaler': [None], 'dim_reducer': [None], 'model': [XGBClassifier()], 'model__n_estimators': np.arange(100, 501, 100),
'model__max_depth': np.arange(5, 26, 5)},
# 3 PCAs * 5 numbers of estimators * 5 max depths = 75 models
# I am trying fewer PCAs for XGBoost
{'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': [200, 250, 300], 'model': [XGBClassifier()],
'model__n_estimators': np.arange(100, 501, 100), 'model__max_depth': np.arange(5, 26, 5)}
]
# Instantiate the XGB grid search
xgb_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=xgb_param_grid, cv=5, n_jobs=-1, verbose=10)
# Fit the XGB grid search
fitted_xgb_grid_em = xgb_grid_search.fit(Xm_train, em_train)
# What was the best XGB model?
fitted_xgb_grid_em.best_estimator_
print(f"The best XGB model's accuracy on the training set: {fitted_xgb_grid_em.score(Xm_train, em_train)*100}%")
print(f"The best XGB model's accuracy on the test set: {fitted_xgb_grid_em.score(Xm_test, em_test)*100}%")
# Parameter grid for KNN
knn_param_grid = [
# unscaled and scaled * 10 Ks = 20 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [KNeighborsClassifier(n_jobs=-1)], 'model__n_neighbors': np.arange(3, 22, 2)},
# unscaled
# 5 PCAs * 10 Ks = 50 models
{'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 251, 50), 'model': [KNeighborsClassifier(n_jobs=-1)],
'model__n_neighbors': np.arange(3, 22, 2)},
# scaled
# 4 PCAs * 10 Ks = 40 models
{'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50), 'model': [KNeighborsClassifier(n_jobs=-1)],
'model__n_neighbors': np.arange(3, 22, 2)}
]
# Instantiate the grid search
knn_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=knn_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the KNN grid search
fitted_knn_grid_em = knn_grid_search.fit(Xm_train, em_train)
# What was the best KNN model?
fitted_knn_grid_em.best_estimator_
print(f"The best KNN model's accuracy on the training set: {fitted_knn_grid_em.score(Xm_train, em_train)*100}%")
print(f"The best KNN model's accuracy on the test set: {fitted_knn_grid_em.score(Xm_test, em_test)*100}%")
```
### Conclusions for classifying emotions for males
- Of the five classifier types I tried in my grid searches, SVM had the highest accuracy on the test set (60.19%), followed by logistic regression (58.80%), XGBoost (51.39%), random forest (46.76%), and lastly, KNN (45.37%).
- Based on these results, I have pickled the best SVM and logistic regression. In part 4, I will try them on a new, male-only dataset.
- Except for the best KNN model, all the best models found in the grid searches had training accuracies of 100%, indicating that they overfit to the training set.
- The best KNN model had a training accuracy of 76.29%, but this was still much higher than its test accuracy of 45.37%.
- For the classifier types in which scaling the features matters (logistic regression, SVM, and KNN), all the best models made use of the standard scaler.
- Of the five best-in-type models, random forest and KNN were the only two which made use of principal components.
___
# Building Models for Classifying Emotion for Females
I will follow the same steps I took in classifying emotions for males, with one difference: This time I will not try XGBoost, due to its long computation time and comparatively low performance.
```
# Making a new dataframe that contains only female recordings
ravdess_mfcc_f_df = ravdess_mfcc_df[ravdess_mfcc_df['Gender'] == 'female'].reset_index(drop=True)
ravdess_mfcc_f_df
# Splitting the dataframe into features and target
Xf = ravdess_mfcc_f_df.iloc[:, :-2]
ef = ravdess_mfcc_f_df['Emotion']
# Splitting the data into training and test sets
Xf_train, Xf_test, ef_train, ef_test = train_test_split(Xf, ef, test_size=0.3, stratify=ef, random_state=1)
# Checking the shapes
print(Xf_train.shape)
print(Xf_test.shape)
print(ef_train.shape)
print(ef_test.shape)
```
Here is an initial model:
```
# Instantiate the model
initial_logreg_ef = LogisticRegression()
# Fit to training set
initial_logreg_ef.fit(Xf_train, ef_train)
# Score on training set
print(f'Model accuracy on training set: {initial_logreg_ef.score(Xf_train, ef_train)*100}%')
# Score on test set
print(f'Model accuracy on test set: {initial_logreg_ef.score(Xf_test, ef_test)*100}%')
```
The model has overfit to the training set yet again. Interestingly, this initial accuracy on the female test set is noticeably higher than the initial accuracy on the male test set, which was 56.48%. Again, let's evaluate the model further using a confusion matrix and a classification report.
```
# Having initial_logreg_ef make predictions based on the test set features
ef_pred = initial_logreg_ef.predict(Xf_test)
# Building the confusion matrix as a dataframe
emotions = ['angry', 'calm', 'disgusted', 'fearful', 'happy', 'neutral', 'sad', 'surprised']
ef_confusion_df = pd.DataFrame(confusion_matrix(ef_test, ef_pred))
ef_confusion_df.columns = [f'Predicted {emotion}' for emotion in emotions]
ef_confusion_df.index = [f'Actual {emotion}' for emotion in emotions]
ef_confusion_df
# Classification report
print(classification_report(ef_test, ef_pred))
```
It appears that the initial model is strongest at classifying calm voice clips, and weakest at classifying fearful voice clips. In order of strongest to weakest: calm, neutral, happy, surprised, angry, disgusted, sad, and fearful.
Performance varies less across the emotions here than it did for the initial male-emotion model.
Although I found that none of the best male emotion classifiers made use of PCA, I will still examine the explained variance ratios like I did before.
```
# PCA on unscaled features
# Instantiate PCA and fit to Xf_train
pca = PCA().fit(Xf_train)
# Transform Xf_train
Xf_train_pca = pca.transform(Xf_train)
# Transform Xf_test
Xf_test_pca = pca.transform(Xf_test)
# Standard scaling
# Instantiate the scaler and fit to Xf_train
scaler = StandardScaler().fit(Xf_train)
# Transform Xf_train
Xf_train_scaled = scaler.transform(Xf_train)
# Transform Xf_test
Xf_test_scaled = scaler.transform(Xf_test)
# PCA on scaled features
# Instantiate PCA and fit to Xf_train_scaled
pca_scaled = PCA().fit(Xf_train_scaled)
# Transform Xf_train_scaled
Xf_train_scaled_pca = pca_scaled.transform(Xf_train_scaled)
# Transform Xf_test_scaled
Xf_test_scaled_pca = pca_scaled.transform(Xf_test_scaled)
# Plot the explained variance ratios
plt.subplots(1, 2, figsize = (15, 5))
# Unscaled
plt.subplot(1, 2, 1)
plt.bar(np.arange(1, len(pca.explained_variance_ratio_)+1), pca.explained_variance_ratio_)
plt.xlabel('Principal Component')
plt.ylabel('Explained Variance Ratio')
plt.title('PCA on Unscaled Features')
plt.ylim(top = 0.5) # Equalizing the y-axes
# Scaled
plt.subplot(1, 2, 2)
plt.bar(np.arange(1, len(pca_scaled.explained_variance_ratio_)+1), pca_scaled.explained_variance_ratio_)
plt.xlabel('Principal Component')
plt.ylabel('Explained Variance Ratio')
plt.title('PCA on Scaled Features')
plt.ylim(top = 0.5) # Equalizing the y-axes
plt.tight_layout()
plt.show()
```
These are the same trends I saw previously for male emotions.
How much variance is explained by certain numbers of unscaled and scaled principal components? This will help me determine how many principal components to try in my grid searches later.
```
# Unscaled
num_components = [503, 451, 401, 351, 301, 251, 201, 151, 101, 51]
for n in num_components:
    print(f'Variance explained by {n-1} unscaled principal components: {np.round(np.sum(pca.explained_variance_ratio_[:n-1])*100, 2)}%')
# Scaled
num_components = [503, 451, 401, 351, 301, 251, 201, 151, 101, 51]
for n in num_components:
    print(f'Variance explained by {n-1} scaled principal components: {np.round(np.sum(pca_scaled.explained_variance_ratio_[:n-1])*100, 2)}%')
```
Like before, I will now do a grid search for each classifier type, with five-fold cross-validation to optimize the hyperparameters.
```
# Cache
cachedir = mkdtemp()
# Pipeline (these values are placeholders)
my_pipeline = Pipeline(steps=[('scaler', StandardScaler()), ('dim_reducer', PCA()), ('model', LogisticRegression())], memory=cachedir)
# Parameter grid for log reg
logreg_param_grid = [
# l1 without PCA
# unscaled and scaled * 9 regularization strengths = 18 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [LogisticRegression(penalty='l1', solver='liblinear')],
'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l1 unscaled with PCA
# 6 PCAs * 9 regularization strengths = 54 models
{'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 301, 50),
'model': [LogisticRegression(penalty='l1', solver='liblinear')], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l1 scaled with PCA
# 4 PCAs * 9 regularization strengths = 36 models
{'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50),
'model': [LogisticRegression(penalty='l1', solver='liblinear')], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l2 (default) without PCA
# unscaled and scaled * 9 regularization strengths = 18 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)],
'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l2 (default) unscaled with PCA
# 6 PCAs * 9 regularization strengths = 54 models
{'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 301, 50),
'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l2 (default) scaled with PCA
# 4 PCAs * 9 regularization strengths = 36 models
{'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50),
'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}
]
# Instantiate the log reg grid search
logreg_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=logreg_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the log reg grid search
fitted_logreg_grid_ef = logreg_grid_search.fit(Xf_train, ef_train)
# What was the best log reg?
fitted_logreg_grid_ef.best_estimator_
print(f"The best log reg's accuracy on the training set: {fitted_logreg_grid_ef.score(Xf_train, ef_train)*100}%")
print(f"The best log reg's accuracy on the test set: {fitted_logreg_grid_ef.score(Xf_test, ef_test)*100}%")
# Parameter grid for SVM
svm_param_grid = [
# unscaled and scaled * 9 regularization strengths = 18 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [SVC()], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# unscaled
# 6 PCAs * 9 regularization strengths = 54 models
{'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 301, 50), 'model': [SVC()],
'model__C':[0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# scaled
# 4 PCAs * 9 regularization strengths = 36 models
{'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50), 'model': [SVC()],
'model__C':[0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}
]
# Instantiate the SVM grid search
svm_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=svm_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the SVM grid search
fitted_svm_grid_ef = svm_grid_search.fit(Xf_train, ef_train)
# What was the best SVM?
fitted_svm_grid_ef.best_estimator_
print(f"The best SVM's accuracy on the training set: {fitted_svm_grid_ef.score(Xf_train, ef_train)*100}%")
print(f"The best SVM's accuracy on the test set: {fitted_svm_grid_ef.score(Xf_test, ef_test)*100}%")
# Parameter grid for random forest (scaling is unnecessary)
rf_param_grid = [
# 5 numbers of estimators * 5 max depths = 25 models
{'scaler': [None], 'dim_reducer': [None], 'model': [RandomForestClassifier(n_jobs=-1)], 'model__n_estimators': np.arange(100, 501, 100),
'model__max_depth': np.arange(5, 26, 5)},
# 6 PCAs * 5 numbers of estimators * 5 max depths = 150 models
{'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 301, 50), 'model': [RandomForestClassifier(n_jobs=-1)],
'model__n_estimators': np.arange(100, 501, 100), 'model__max_depth': np.arange(5, 26, 5)}
]
# Instantiate the rf grid search
rf_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=rf_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the rf grid search
fitted_rf_grid_ef = rf_grid_search.fit(Xf_train, ef_train)
# What was the best rf?
fitted_rf_grid_ef.best_estimator_
print(f"The best random forest's accuracy on the training set: {fitted_rf_grid_ef.score(Xf_train, ef_train)*100}%")
print(f"The best random forest's accuracy on the test set: {fitted_rf_grid_ef.score(Xf_test, ef_test)*100}%")
# Parameter grid for KNN
knn_param_grid = [
# unscaled and scaled * 10 Ks = 20 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [KNeighborsClassifier(n_jobs=-1)], 'model__n_neighbors': np.arange(3, 22, 2)},
# unscaled
# 6 PCAs * 10 Ks = 60 models
{'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 301, 50), 'model': [KNeighborsClassifier(n_jobs=-1)],
'model__n_neighbors': np.arange(3, 22, 2)},
# scaled
# 4 PCAs * 10 Ks = 40 models
{'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50), 'model': [KNeighborsClassifier(n_jobs=-1)],
'model__n_neighbors': np.arange(3, 22, 2)}
]
# Instantiate the grid search
knn_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=knn_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the KNN grid search
fitted_knn_grid_ef = knn_grid_search.fit(Xf_train, ef_train)
# What was the best KNN model?
fitted_knn_grid_ef.best_estimator_
print(f"The best KNN model's accuracy on the training set: {fitted_knn_grid_ef.score(Xf_train, ef_train)*100}%")
print(f"The best KNN model's accuracy on the test set: {fitted_knn_grid_ef.score(Xf_test, ef_test)*100}%")
```
### Conclusions for classifying emotions for females
- Of the four classifier types I tried in my grid searches, logistic regression had the highest accuracy on the test set (71.29%), followed by SVM (70.83%), random forest (61.57%), and lastly, KNN (55.56%).
- Except for the best KNN model, all the best models found in the grid searches had training accuracies of 100%, indicating that they overfit to the training set.
- The best KNN model had a training accuracy of 59.33%, which was not much higher than its test accuracy of 55.56%. A much wider gap was found in the best KNN model for male emotions.
- For the classifier types in which scaling the features matters (logistic regression, SVM, and KNN), the best logistic regression and SVM models made use of the standard scaler, while the best KNN model did not.
- All the best-in-type models made use of principal components, except SVM.
- Interestingly, the female emotion classifiers achieved higher accuracies than their male counterparts. It appears that for the RAVDESS dataset, the differences between female emotions are greater than the differences between male emotions.
- Based on this alone, I cannot extrapolate and conclude that women are socially more expressive than men are, although this is an interesting thought.
# Minimal tutorial on packing and unpacking sequences in PyTorch, aka how to use `pack_padded_sequence` and `pad_packed_sequence`
This is a Jupyter version of [@Tushar-N's gist](https://gist.github.com/Tushar-N/dfca335e370a2bc3bc79876e6270099e) with comments from [@HarshTrivedi's repo](https://github.com/HarshTrivedi/packing-unpacking-pytorch-minimal-tutorial)
```
# from https://github.com/HarshTrivedi/packing-unpacking-pytorch-minimal-tutorial
import torch
from torch import LongTensor
from torch.nn import Embedding, LSTM
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
## We want to run LSTM on a batch of 3 character sequences ['long_str', 'tiny', 'medium']
#
# Step 1: Construct Vocabulary
# Step 2: Load indexed data (list of instances, where each instance is list of character indices)
# Step 3: Make Model
# * Step 4: Pad instances with 0s till max length sequence
# * Step 5: Sort instances by sequence length in descending order
# * Step 6: Embed the instances
# * Step 7: Call pack_padded_sequence with embedded instances and sequence lengths
# * Step 8: Forward with LSTM
# * Step 9: Call pad_packed_sequence if required / or just pick last hidden vector
# * Summary of Shape Transformations
# We want to run an LSTM on a batch of the following 3 character sequences
seqs = ['long_str', # len = 8
'tiny', # len = 4
'medium'] # len = 6
## Step 1: Construct Vocabulary ##
##------------------------------##
# make sure <pad> idx is 0
vocab = ['<pad>'] + sorted(set([char for seq in seqs for char in seq]))
# => ['<pad>', '_', 'd', 'e', 'g', 'i', 'l', 'm', 'n', 'o', 'r', 's', 't', 'u', 'y']
vocab
## Step 2: Load indexed data (list of instances, where each instance is list of character indices) ##
##-------------------------------------------------------------------------------------------------##
vectorized_seqs = [[vocab.index(tok) for tok in seq] for seq in seqs]
# vectorized_seqs => [[6, 9, 8, 4, 1, 11, 12, 10],
# [12, 5, 8, 14],
# [7, 3, 2, 5, 13, 7]]
vectorized_seqs
## Step 3: Make Model ##
##--------------------##
embed = Embedding(len(vocab), 4) # embedding_dim = 4
lstm = LSTM(input_size=4, hidden_size=5, batch_first=True) # input_dim = 4, hidden_dim = 5
## Step 4: Pad instances with 0s till max length sequence ##
##--------------------------------------------------------##
# get the length of each seq in your batch
seq_lengths = LongTensor(list(map(len, vectorized_seqs)))
# seq_lengths => [ 8, 4, 6]
# batch_sum_seq_len: 8 + 4 + 6 = 18
# max_seq_len: 8
seq_tensor = torch.zeros((len(vectorized_seqs), seq_lengths.max())).long()
# seq_tensor => [[0 0 0 0 0 0 0 0]
# [0 0 0 0 0 0 0 0]
# [0 0 0 0 0 0 0 0]]
for idx, (seq, seqlen) in enumerate(zip(vectorized_seqs, seq_lengths)):
    seq_tensor[idx, :seqlen] = LongTensor(seq)
# seq_tensor => [[ 6 9 8 4 1 11 12 10] # long_str
# [12 5 8 14 0 0 0 0] # tiny
# [ 7 3 2 5 13 7 0 0]] # medium
# seq_tensor.shape : (batch_size X max_seq_len) = (3 X 8)
seq_lengths
seq_tensor
## Step 5: Sort instances by sequence length in descending order ##
##---------------------------------------------------------------##
seq_lengths, perm_idx = seq_lengths.sort(0, descending=True)
seq_tensor = seq_tensor[perm_idx]
# seq_tensor => [[ 6 9 8 4 1 11 12 10] # long_str
# [ 7 3 2 5 13 7 0 0] # medium
# [12 5 8 14 0 0 0 0]] # tiny
# seq_tensor.shape : (batch_size X max_seq_len) = (3 X 8)
perm_idx
seq_tensor
## Step 6: Embed the instances ##
##-----------------------------##
embedded_seq_tensor = embed(seq_tensor)
# embedded_seq_tensor =>
# [[[-0.77578706 -1.8080667 -1.1168439 1.1059115 ] l
# [-0.23622951 2.0361056 0.15435742 -0.04513785] o
# [-0.6000342 1.1732816 0.19938554 -1.5976517 ] n
# [ 0.40524676 0.98665565 -0.08621677 -1.1728264 ] g
# [-1.6334635 -0.6100042 1.7509955 -1.931793 ] _
# [-0.6470658 -0.6266589 -1.7463604 1.2675372 ] s
# [ 0.64004815 0.45813003 0.3476034 -0.03451729] t
# [-0.22739866 -0.45782727 -0.6643252 0.25129375]] r
# [[ 0.16031227 -0.08209462 -0.16297023 0.48121014] m
# [-0.7303265 -0.857339 0.58913064 -1.1068314 ] e
# [ 0.48159844 -1.4886451 0.92639893 0.76906884] d
# [ 0.27616557 -1.224429 -1.342848 -0.7495876 ] i
# [ 0.01795524 -0.59048957 -0.53800726 -0.6611691 ] u
# [ 0.16031227 -0.08209462 -0.16297023 0.48121014] m
# [ 0.2691206 -0.43435425 0.87935454 -2.2269666 ] <pad>
# [ 0.2691206 -0.43435425 0.87935454 -2.2269666 ]] <pad>
# [[ 0.64004815 0.45813003 0.3476034 -0.03451729] t
# [ 0.27616557 -1.224429 -1.342848 -0.7495876 ] i
# [-0.6000342 1.1732816 0.19938554 -1.5976517 ] n
# [-1.284392 0.68294704 1.4064184 -0.42879772] y
# [ 0.2691206 -0.43435425 0.87935454 -2.2269666 ] <pad>
# [ 0.2691206 -0.43435425 0.87935454 -2.2269666 ] <pad>
# [ 0.2691206 -0.43435425 0.87935454 -2.2269666 ] <pad>
# [ 0.2691206 -0.43435425 0.87935454 -2.2269666 ]]] <pad>
# embedded_seq_tensor.shape : (batch_size X max_seq_len X embedding_dim) = (3 X 8 X 4)
embedded_seq_tensor
## Step 7: Call pack_padded_sequence with embedded instances and sequence lengths ##
##-------------------------------------------------------------------------------##
packed_input = pack_padded_sequence(embedded_seq_tensor, seq_lengths.cpu().numpy(), batch_first=True)
# packed_input (PackedSequence) is a NamedTuple with 2 attributes: data and batch_sizes
#
# packed_input.data =>
# [[-0.77578706 -1.8080667 -1.1168439 1.1059115 ] l
# [ 0.01795524 -0.59048957 -0.53800726 -0.6611691 ] m
# [-0.6470658 -0.6266589 -1.7463604 1.2675372 ] t
# [ 0.16031227 -0.08209462 -0.16297023 0.48121014] o
# [ 0.40524676 0.98665565 -0.08621677 -1.1728264 ] e
# [-1.284392 0.68294704 1.4064184 -0.42879772] i
# [ 0.64004815 0.45813003 0.3476034 -0.03451729] n
# [ 0.27616557 -1.224429 -1.342848 -0.7495876 ] d
# [ 0.64004815 0.45813003 0.3476034 -0.03451729] n
# [-0.23622951 2.0361056 0.15435742 -0.04513785] g
# [ 0.16031227 -0.08209462 -0.16297023 0.48121014] i
# [-0.22739866 -0.45782727 -0.6643252 0.25129375]] y
# [-0.7303265 -0.857339 0.58913064 -1.1068314 ] _
# [-1.6334635 -0.6100042 1.7509955 -1.931793 ] u
# [ 0.27616557 -1.224429 -1.342848 -0.7495876 ] s
# [-0.6000342 1.1732816 0.19938554 -1.5976517 ] m
# [-0.6000342 1.1732816 0.19938554 -1.5976517 ] t
# [ 0.48159844 -1.4886451 0.92639893 0.76906884] r
# packed_input.data.shape : (batch_sum_seq_len X embedding_dim) = (18 X 4)
#
# packed_input.batch_sizes => [ 3, 3, 3, 3, 2, 2, 1, 1]
# visualization :
# l o n g _ s t r #(long_str)
# m e d i u m #(medium)
# t i n y #(tiny)
# 3 3 3 3 2 2 1 1 (sum = 18 [batch_sum_seq_len])
packed_input.data.shape
## Step 8: Forward with LSTM ##
##---------------------------##
packed_output, (ht, ct) = lstm(packed_input)
# packed_output (PackedSequence) is a NamedTuple with 2 attributes: data and batch_sizes
#
# packed_output.data :
# [[-0.00947162 0.07743231 0.20343193 0.29611713 0.07992904] l
# [ 0.08596145 0.09205993 0.20892891 0.21788561 0.00624391] o
# [ 0.16861682 0.07807446 0.18812777 -0.01148055 -0.01091915] n
# [ 0.20994528 0.17932937 0.17748171 0.05025435 0.15717036] g
# [ 0.01364102 0.11060348 0.14704391 0.24145307 0.12879576] _
# [ 0.02610307 0.00965587 0.31438383 0.246354 0.08276576] s
# [ 0.09527554 0.14521319 0.1923058 -0.05925677 0.18633027] t
# [ 0.09872741 0.13324396 0.19446367 0.4307988 -0.05149471] r
# [ 0.03895474 0.08449443 0.18839942 0.02205326 0.23149511] m
# [ 0.14620507 0.07822411 0.2849248 -0.22616537 0.15480657] e
# [ 0.00884941 0.05762182 0.30557525 0.373712 0.08834908] d
# [ 0.12460691 0.21189159 0.04823487 0.06384943 0.28563985] i
# [ 0.01368293 0.15872964 0.03759198 -0.13403234 0.23890573] u
# [ 0.00377969 0.05943518 0.2961751 0.35107893 0.15148178] m
# [ 0.00737647 0.17101538 0.28344846 0.18878219 0.20339936] t
# [ 0.0864429 0.11173367 0.3158251 0.37537992 0.11876849] i
# [ 0.17885767 0.12713005 0.28287745 0.05562563 0.10871304] n
# [ 0.09486895 0.12772645 0.34048414 0.25930756 0.12044918]] y
# packed_output.data.shape : (batch_sum_seq_len X hidden_dim) = (18 X 5)
# packed_output.batch_sizes => [ 3, 3, 3, 3, 2, 2, 1, 1] (same as packed_input.batch_sizes)
# visualization :
# l o n g _ s t r #(long_str)
# m e d i u m #(medium)
# t i n y #(tiny)
# 3 3 3 3 2 2 1 1 (sum = 18 [batch_sum_seq_len])
packed_output.data.shape
ht
ct
## Step 9: Call pad_packed_sequence if required / or just pick the last hidden vector ##
##-------------------------------------------------------------------------------------##
# unpack your output if required
output, input_sizes = pad_packed_sequence(packed_output, batch_first=True)
# output:
# output =>
# [[[-0.00947162 0.07743231 0.20343193 0.29611713 0.07992904] l
# [ 0.20994528 0.17932937 0.17748171 0.05025435 0.15717036] o
# [ 0.09527554 0.14521319 0.1923058 -0.05925677 0.18633027] n
# [ 0.14620507 0.07822411 0.2849248 -0.22616537 0.15480657] g
# [ 0.01368293 0.15872964 0.03759198 -0.13403234 0.23890573] _
# [ 0.00737647 0.17101538 0.28344846 0.18878219 0.20339936] s
# [ 0.17885767 0.12713005 0.28287745 0.05562563 0.10871304] t
# [ 0.09486895 0.12772645 0.34048414 0.25930756 0.12044918]] r
# [[ 0.08596145 0.09205993 0.20892891 0.21788561 0.00624391] m
# [ 0.01364102 0.11060348 0.14704391 0.24145307 0.12879576] e
# [ 0.09872741 0.13324396 0.19446367 0.4307988 -0.05149471] d
# [ 0.00884941 0.05762182 0.30557525 0.373712 0.08834908] i
# [ 0.00377969 0.05943518 0.2961751 0.35107893 0.15148178] u
# [ 0.0864429 0.11173367 0.3158251 0.37537992 0.11876849] m
# [ 0. 0. 0. 0. 0. ] <pad>
# [ 0. 0. 0. 0. 0. ]] <pad>
# [[ 0.16861682 0.07807446 0.18812777 -0.01148055 -0.01091915] t
# [ 0.02610307 0.00965587 0.31438383 0.246354 0.08276576] i
# [ 0.03895474 0.08449443 0.18839942 0.02205326 0.23149511] n
# [ 0.12460691 0.21189159 0.04823487 0.06384943 0.28563985] y
# [ 0. 0. 0. 0. 0. ] <pad>
# [ 0. 0. 0. 0. 0. ] <pad>
# [ 0. 0. 0. 0. 0. ] <pad>
# [ 0. 0. 0. 0. 0. ]]] <pad>
# output.shape : ( batch_size X max_seq_len X hidden_dim) = (3 X 8 X 5)
output
# Or if you just want the final hidden state?
print(ht[-1])
## Summary of Shape Transformations ##
##----------------------------------##
# (batch_size X max_seq_len X embedding_dim) --> Sort by seqlen ---> (batch_size X max_seq_len X embedding_dim)
# (batch_size X max_seq_len X embedding_dim) ---> Pack ---> (batch_sum_seq_len X embedding_dim)
# (batch_sum_seq_len X embedding_dim) ---> LSTM ---> (batch_sum_seq_len X hidden_dim)
# (batch_sum_seq_len X hidden_dim) ---> UnPack ---> (batch_size X max_seq_len X hidden_dim)
```
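The whole Step 2–9 pipeline above condenses into a short, self-contained sketch (same toy sequences; shapes match the summary, exact numbers will differ since the weights are random):

```python
import torch
from torch import LongTensor
from torch.nn import Embedding, LSTM
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

seqs = ['long_str', 'tiny', 'medium']
vocab = ['<pad>'] + sorted(set(''.join(seqs)))          # index 0 reserved for padding
vectorized = [[vocab.index(tok) for tok in seq] for seq in seqs]

lengths = LongTensor(list(map(len, vectorized)))        # [8, 4, 6]
padded = torch.zeros(len(vectorized), lengths.max(), dtype=torch.long)
for i, (seq, n) in enumerate(zip(vectorized, lengths)):
    padded[i, :n] = LongTensor(seq)

lengths, perm = lengths.sort(0, descending=True)        # sort batch by length
padded = padded[perm]

embedded = Embedding(len(vocab), 4)(padded)             # (3, 8, 4)
packed = pack_padded_sequence(embedded, lengths.cpu().numpy(), batch_first=True)
packed_out, (ht, ct) = LSTM(input_size=4, hidden_size=5, batch_first=True)(packed)
unpacked, sizes = pad_packed_sequence(packed_out, batch_first=True)
print(unpacked.shape)   # torch.Size([3, 8, 5])
```

Note that `Variable`, used in the notebook above, is a no-op wrapper in modern PyTorch and can be dropped; plain tensors work directly.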
```
! pip install opencv-python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import cv2
#tensorflow packages
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image
# Face Emotion Recognition
#Here I am using my trained model, which was trained and saved as an .h5 file
#(raw strings avoid backslash-escape issues in Windows paths)
faceDetection_model = r'D:\pavi\DeepLearningProjects\Face_Emosion_Recognition\pretrained_model\Face_Detection_TrainedModel\haarcascade_frontalface_default.xml'
Emotion_Detction_model = r'D:\pavi\DeepLearningProjects\Face_Emosion_Recognition\pretrained_model\Face_Emotion_model\FER_vggnet.h5'
vggnet = load_model(Emotion_Detction_model)
vggnet.summary()
#defining the emotion classes for classification
classes = np.array(("Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"))
#video capturing and classifying
faceCascade = cv2.CascadeClassifier(faceDetection_model)
video_capture = cv2.VideoCapture(0)
while True:
ret,frame = video_capture.read()
cv2.imshow('Original Video' , frame)
gray = cv2.cvtColor(frame , cv2.COLOR_BGR2GRAY)
face = faceCascade.detectMultiScale(gray ,scaleFactor=1.1 , minNeighbors=5,)
#draw rectangle around the face and cut the face only
    for (x, y, w, h) in face:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 255), 2)
        face_img = gray[y:(y+h), x:(x+w)]
        # keep a separate name for the resized crop so the (x, y) face
        # coordinates are not overwritten by the image array
        roi = cv2.resize(face_img, (48, 48), interpolation=cv2.INTER_AREA)
        label_position = (x-10, y-10)
        if np.sum([roi]) != 0:
            #preprocessing
            roi = roi.astype('float')/255.0
            roi = image.img_to_array(roi)
            roi = np.expand_dims(roi, axis=0)
            # prediction
            p = vggnet.predict(roi)
            a = np.argmax(p, axis=1)
            label = str(classes[a][0])
            print('prediction', label)
            fontScale = 0.6
            thickness = 3
            cv2.putText(frame, label, label_position, cv2.FONT_HERSHEY_SIMPLEX, fontScale, (0, 255, 0), thickness, cv2.LINE_AA)
        else:
            cv2.putText(frame, 'No Face Detected', label_position, cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 3, cv2.LINE_AA)
#cv2.imshow('croped image' , face_img)
#display the resulting frame
cv2.imshow('Face Detected Video' , frame)
#break the capturing
if cv2.waitKey(1) & 0xFF == ord('q'):
break
video_capture.release()
cv2.destroyAllWindows()
```
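The per-face preprocessing inside the loop can be exercised without a webcam or a trained model; the sketch below substitutes a random array for a real Haar-cascade crop and mimics `cv2.resize` and `image.img_to_array` with plain NumPy, so it runs without OpenCV or Keras:

```python
import numpy as np

# Hypothetical stand-in for a grayscale face crop returned by the detector.
face_img = np.random.randint(0, 256, size=(120, 90), dtype=np.uint8)

# Nearest-neighbour downsample to 48x48 via index sampling (cv2.resize stand-in).
rows = np.arange(48) * face_img.shape[0] // 48
cols = np.arange(48) * face_img.shape[1] // 48
roi = face_img[np.ix_(rows, cols)]

# Same preprocessing as the notebook: scale to [0, 1], add channel and batch axes.
x = roi.astype('float32') / 255.0
x = x[..., np.newaxis]          # (48, 48, 1), what image.img_to_array produces
x = np.expand_dims(x, axis=0)   # (1, 48, 48, 1), ready for vggnet.predict
print(x.shape)  # (1, 48, 48, 1)
```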
```
import pandas as pd
try:
import pickle5 as pickle
except:
!pip install pickle5
import pickle5 as pickle
import os
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, GlobalMaxPooling1D, Flatten
from keras.layers import Conv1D, MaxPooling1D, Embedding, Concatenate, Lambda
from keras.models import Model
from sklearn.metrics import roc_auc_score,roc_curve, auc
from numpy import random
from keras.layers import LSTM, Bidirectional, GlobalMaxPool1D, Dropout
from keras.optimizers import Adam
from keras.utils.vis_utils import plot_model
import seaborn as sns
import sys
sys.path.insert(0,'/content/drive/MyDrive/ML_Data/')
import functions as f
def load_data(randomize=False):
try:
with open("/content/drive/MyDrive/ML_Data/hyppi-train.pkl", "rb") as fh:
df_train = pickle.load(fh)
except:
df_train = pd.read_pickle("C:/Users/nik00/py/proj/hyppi-train.pkl")
try:
with open("/content/drive/MyDrive/ML_Data/hyppi-independent.pkl", "rb") as fh:
df_test = pickle.load(fh)
except:
df_test = pd.read_pickle("C:/Users/nik00/py/proj/hyppi-independent.pkl")
if randomize:
return shuff_together(df_train,df_test)
else:
return df_train,df_test
df_train,df_test = load_data()
print('The data used will be:')
df_train[['Human','Yersinia']]
lengths = sorted(len(s) for s in df_train['Human'])
print("Median length of Human sequence is",lengths[len(lengths)//2])
_ = sns.displot(lengths)
_=plt.title("Most Human sequences seem to be less than 2000 in length")
lengths = sorted(len(s) for s in df_train['Yersinia'])
print("Median length of Yersinia sequence is",lengths[len(lengths)//2])
_ = sns.displot(lengths)
_=plt.title("Most Yersinia sequences seem to be less than 1000 in length")
data_1D_join_pre,data_test_1D_join_pre,num_words_1D_join,MAX_SEQUENCE_LENGTH_1D_J,MAX_VOCAB_SIZE_1D = f.get_seq_data_join(1000,1000,df_train,df_test, pad='pre', show=True)
data_1D_join_center,data_test_1D_join_center,num_words_1D_join,MAX_SEQUENCE_LENGTH_1D_J,MAX_VOCAB_SIZE_1D = f.get_seq_data_join(1000,1000,df_train,df_test, pad='center')
data_1D_join_post,data_test_1D_join_post,num_words_1D_join,MAX_SEQUENCE_LENGTH_1D_J,MAX_VOCAB_SIZE_1D = f.get_seq_data_join(1000,1000,df_train,df_test, pad='post')
data1_1D_doubleip_pre,data2_1D_doubleip_pre,data1_test_1D_doubleip_pre,data2_test_1D_doubleip_pre,num_words_1D,MAX_SEQUENCE_LENGTH_1D_dIP,MAX_VOCAB_SIZE_1D = f.get_seq_data_doubleip(100,1000,df_train,df_test,pad = 'pre', show=True)
data1_1D_doubleip_center,data2_1D_doubleip_center,data1_test_1D_doubleip_center,data2_test_1D_doubleip_center,num_words_1D,MAX_SEQUENCE_LENGTH_1D_dIP,MAX_VOCAB_SIZE_1D = f.get_seq_data_doubleip(100,1000,df_train,df_test)
data1_1D_doubleip_post,data2_1D_doubleip_post,data1_test_1D_doubleip_post,data2_test_1D_doubleip_post,num_words_1D,MAX_SEQUENCE_LENGTH_1D_dIP,MAX_VOCAB_SIZE_1D = f.get_seq_data_doubleip(100,1000,df_train,df_test,pad = 'post')
EMBEDDING_DIM_1D = 5
DROP = 0.2
BATCH_SIZE = 128
EPOCHS = 50
M_1D=10
x1_join = f.BiLSTM_model(MAX_SEQUENCE_LENGTH_1D_J,EMBEDDING_DIM_1D,num_words_1D,M_1D,DROP)
x2_join = f.BiLSTM_model(MAX_SEQUENCE_LENGTH_1D_J,EMBEDDING_DIM_1D,num_words_1D,M_1D,DROP)
x3_join = f.BiLSTM_model(MAX_SEQUENCE_LENGTH_1D_J,EMBEDDING_DIM_1D,num_words_1D,M_1D,DROP)
x1_doubleip = f.BiLSTM_model(MAX_SEQUENCE_LENGTH_1D_dIP,EMBEDDING_DIM_1D,num_words_1D,M_1D,DROP)
x2_doubleip = f.BiLSTM_model(MAX_SEQUENCE_LENGTH_1D_dIP,EMBEDDING_DIM_1D,num_words_1D,M_1D,DROP)
x3_doubleip = f.BiLSTM_model(MAX_SEQUENCE_LENGTH_1D_dIP,EMBEDDING_DIM_1D,num_words_1D,M_1D,DROP)
x4_doubleip = f.BiLSTM_model(MAX_SEQUENCE_LENGTH_1D_dIP,EMBEDDING_DIM_1D,num_words_1D,M_1D,DROP)
x5_doubleip = f.BiLSTM_model(MAX_SEQUENCE_LENGTH_1D_dIP,EMBEDDING_DIM_1D,num_words_1D,M_1D,DROP)
x6_doubleip = f.BiLSTM_model(MAX_SEQUENCE_LENGTH_1D_dIP,EMBEDDING_DIM_1D,num_words_1D,M_1D,DROP)
concatenator = Concatenate(axis=1)
x = concatenator([x1_join.output, x2_join.output, x3_join.output, x1_doubleip.output, x2_doubleip.output, x3_doubleip.output, x4_doubleip.output, x5_doubleip.output, x6_doubleip.output])
x = Dense(128)(x)
x = Dropout(0.2)(x)
output = Dense(1, activation="sigmoid",name="Final")(x)
model1D_combine = Model(inputs=[x1_join.input, x2_join.input, x3_join.input, x1_doubleip.input, x2_doubleip.input, x3_doubleip.input, x4_doubleip.input, x5_doubleip.input, x6_doubleip.input], outputs=output)
model1D_combine.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
trains = [data_1D_join_pre,data_1D_join_center,data_1D_join_post, data1_1D_doubleip_pre,data1_1D_doubleip_center,data1_1D_doubleip_post, data2_1D_doubleip_pre,data2_1D_doubleip_center,data2_1D_doubleip_post]
tests = [data_test_1D_join_pre,data_test_1D_join_center,data_test_1D_join_post, data1_test_1D_doubleip_pre,data1_test_1D_doubleip_center,data1_test_1D_doubleip_post, data2_test_1D_doubleip_pre,data2_test_1D_doubleip_center,data2_test_1D_doubleip_post]
model1D_combine.fit(trains, df_train['label'].values, epochs=EPOCHS, validation_data=(tests,df_test['label'].values),batch_size=BATCH_SIZE)
print(roc_auc_score(df_test['label'].values, model1D_combine.predict(tests)))
#model1D_doubleip.save('/content/drive/MyDrive/ML_Data/model1D_doubleip.h5')
```
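The `'center'` padding option above is handled inside the author's `functions` module, which is not shown; a plausible sketch of what such a helper might do (the name `pad_center` is hypothetical — Keras' own `pad_sequences` only offers `'pre'` and `'post'`):

```python
def pad_center(seq, maxlen, value=0):
    """Pad (or truncate) a sequence to maxlen, centering its content."""
    seq = list(seq)[:maxlen]          # truncate if too long
    total = maxlen - len(seq)
    left = total // 2                 # split the padding between both sides
    return [value] * left + seq + [value] * (total - left)

print(pad_center([1, 2, 3], 8))  # [0, 0, 1, 2, 3, 0, 0, 0]
```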
```
from mplsoccer import Pitch, VerticalPitch
from mplsoccer.dimensions import valid, size_varies
import matplotlib.pyplot as plt
import numpy as np
import random
np.random.seed(42)
```
# Test five points are same in both orientations
```
for pitch_type in valid:
if pitch_type in size_varies:
kwargs = {'pitch_length': 105, 'pitch_width': 68}
else:
kwargs = {}
pitch = Pitch(pitch_type=pitch_type, line_zorder=2, **kwargs)
pitch_vertical = VerticalPitch(pitch_type=pitch_type, line_zorder=2, **kwargs)
fig, ax = plt.subplots(ncols=2, figsize=(12, 7))
fig.suptitle(pitch_type)
x = np.random.uniform(low=pitch.dim.pitch_extent[0], high=pitch.dim.pitch_extent[1], size=5)
y = np.random.uniform(low=pitch.dim.pitch_extent[2], high=pitch.dim.pitch_extent[3], size=5)
pitch.draw(ax[0])
pitch.scatter(x, y, ax=ax[0], color='red', zorder=3)
stats = pitch.bin_statistic(x, y)
stats['statistic'][stats['statistic'] == 0] = np.nan
hm = pitch.heatmap(stats, ax=ax[0])
txt = pitch.label_heatmap(stats, color='white', ax=ax[0])
pitch_vertical.draw(ax[1])
pitch_vertical.scatter(x, y, ax=ax[1], color='red', zorder=3)
stats_vertical = pitch_vertical.bin_statistic(x, y)
stats_vertical['statistic'][stats_vertical['statistic'] == 0] = np.nan
hm_vertical = pitch_vertical.heatmap(stats_vertical, ax=ax[1])
    txt_vertical = pitch_vertical.label_heatmap(stats_vertical, color='white', ax=ax[1])
```
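What this test asserts — that binning the five points gives the same counts in both orientations — can be seen with a library-free sketch using `np.histogram2d` (the 120 × 80 extent and 6 × 4 grid are illustrative, not mplsoccer defaults):

```python
import numpy as np

np.random.seed(42)
x = np.random.uniform(0, 120, size=5)   # five points on a nominal 120 x 80 pitch
y = np.random.uniform(0, 80, size=5)

# Horizontal binning: x along the first axis, y along the second...
counts, _, _ = np.histogram2d(x, y, bins=(6, 4), range=((0, 120), (0, 80)))

# ...must equal the transpose of the binning with the axes swapped,
# which is exactly what the horizontal/vertical pitch comparison checks.
counts_v, _, _ = np.histogram2d(y, x, bins=(4, 6), range=((0, 80), (0, 120)))
print(np.array_equal(counts, counts_v.T))  # True
```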
# Test five points are same in both orientations - positional
```
for pitch_type in valid:
if pitch_type in size_varies:
kwargs = {'pitch_length': 105, 'pitch_width': 68}
else:
kwargs = {}
pitch = Pitch(pitch_type=pitch_type, line_zorder=2, **kwargs)
pitch_vertical = VerticalPitch(pitch_type=pitch_type, line_zorder=2, **kwargs)
fig, ax = plt.subplots(ncols=2, figsize=(12, 7))
fig.suptitle(pitch_type)
x = np.random.uniform(low=pitch.dim.pitch_extent[0], high=pitch.dim.pitch_extent[1], size=5)
y = np.random.uniform(low=pitch.dim.pitch_extent[2], high=pitch.dim.pitch_extent[3], size=5)
pitch.draw(ax[0])
pitch.scatter(x, y, ax=ax[0], color='red', zorder=3)
stats = pitch.bin_statistic_positional(x, y)
hm = pitch.heatmap_positional(stats, ax=ax[0])
txt = pitch.label_heatmap(stats, color='white', ax=ax[0])
pitch_vertical.draw(ax[1])
pitch_vertical.scatter(x, y, ax=ax[1], color='red', zorder=3)
stats_vertical = pitch_vertical.bin_statistic_positional(x, y)
hm_vertical = pitch_vertical.heatmap_positional(stats_vertical, ax=ax[1])
    txt_vertical = pitch_vertical.label_heatmap(stats_vertical, color='white', ax=ax[1])
```
# Test edges - positional x
```
for pitch_type in valid:
if pitch_type in size_varies:
kwargs = {'pitch_length': 105, 'pitch_width': 68}
else:
kwargs = {}
pitch = Pitch(pitch_type=pitch_type, line_zorder=2, pitch_color='None', axis=True, label=True, **kwargs)
pitch_vertical = VerticalPitch(pitch_type=pitch_type, line_zorder=2, pitch_color='None', axis=True, label=True, **kwargs)
fig, ax = plt.subplots(ncols=2, figsize=(12, 7))
fig.suptitle(pitch_type)
x = pitch.dim.positional_x
y = np.random.uniform(low=pitch.dim.pitch_extent[2], high=pitch.dim.pitch_extent[3], size=x.size)
pitch.draw(ax[0])
pitch.scatter(x, y, ax=ax[0], color='red', zorder=3)
stats = pitch.bin_statistic_positional(x, y)
hm = pitch.heatmap_positional(stats, ax=ax[0], edgecolors='yellow')
txt = pitch.label_heatmap(stats, color='white', ax=ax[0])
pitch_vertical.draw(ax[1])
pitch_vertical.scatter(x, y, ax=ax[1], color='red', zorder=3)
stats_vertical = pitch_vertical.bin_statistic_positional(x, y)
hm_vertical = pitch_vertical.heatmap_positional(stats_vertical, ax=ax[1], edgecolors='yellow')
txt_vertical = pitch_vertical.label_heatmap(stats_vertical, color='white', ax=ax[1])
```
# Test edges - positional y
```
for pitch_type in valid:
if pitch_type in size_varies:
kwargs = {'pitch_length': 105, 'pitch_width': 68}
else:
kwargs = {}
pitch = Pitch(pitch_type=pitch_type, line_zorder=2, pitch_color='None', axis=True, label=True, **kwargs)
pitch_vertical = VerticalPitch(pitch_type=pitch_type, line_zorder=2, pitch_color='None', axis=True, label=True, **kwargs)
fig, ax = plt.subplots(ncols=2, figsize=(12, 7))
fig.suptitle(pitch_type)
y = pitch.dim.positional_y
x = np.random.uniform(low=pitch.dim.pitch_extent[0], high=pitch.dim.pitch_extent[1], size=y.size)
pitch.draw(ax[0])
pitch.scatter(x, y, ax=ax[0], color='red', zorder=3)
stats = pitch.bin_statistic_positional(x, y)
hm = pitch.heatmap_positional(stats, ax=ax[0], edgecolors='yellow')
txt = pitch.label_heatmap(stats, color='white', ax=ax[0])
pitch_vertical.draw(ax[1])
pitch_vertical.scatter(x, y, ax=ax[1], color='red', zorder=3)
stats_vertical = pitch_vertical.bin_statistic_positional(x, y)
hm_vertical = pitch_vertical.heatmap_positional(stats_vertical, ax=ax[1], edgecolors='yellow')
txt_vertical = pitch_vertical.label_heatmap(stats_vertical, color='white', ax=ax[1])
```
# Pipelines for classifiers using Balanced Accuracy
For each dataset, classifier and fold count:
- Robust scaling
- 2-, 3-, 5- and 10-fold outer CV
- Balanced accuracy as the score
We will use folders *datasets2* and *results2*.
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
# remove warnings
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import numpy as np
import pandas as pd
import time
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score, GridSearchCV, StratifiedKFold, LeaveOneOut
from sklearn.metrics import confusion_matrix,accuracy_score, roc_auc_score,f1_score, recall_score, precision_score
from sklearn.utils import class_weight
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression, LassoCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from xgboost import XGBClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process.kernels import RBF
from sklearn.svm import LinearSVC
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler
from sklearn.feature_selection import RFECV, VarianceThreshold, SelectKBest, chi2
from sklearn.feature_selection import SelectFromModel, SelectPercentile, f_classif
import os
!ls ./datasets2/*
!ls ./results2/*
# get list of files in datasets2 = all datasets
dsList = os.listdir('./datasets2')
print('--> Found', len(dsList), 'dataset files')
# create a list with all output variable names
outVars = []
for eachdsFile in dsList:
outVars.append( (eachdsFile[:-4])[3:] )
```
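The slicing `(eachdsFile[:-4])[3:]` assumes file names of the form `ds.<OutVar>.csv`; a quick check with a hypothetical name:

```python
fname = 'ds.AD_duration.csv'   # hypothetical dataset file name
out_var = fname[:-4][3:]       # drop the '.csv' suffix, then the 'ds.' prefix
print(out_var)  # AD_duration
```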
### Define script parameters
```
# define list of folds
foldTypes = [2,3,5,10]
# define a label for output files
targetName = '_Outer'
seed = 42
```
### Function definitions
```
def set_weights(y_data, option='balanced'):
    """Estimate class weights for an unbalanced dataset.
    If 'balanced', class weights are given by n_samples / (n_classes * np.bincount(y)).
    If a dictionary is given, keys are classes and values are the corresponding class weights.
    If None is given, the class weights will be uniform."""
    cw = class_weight.compute_class_weight(class_weight=option, classes=np.unique(y_data), y=y_data)
    w = {i: j for i, j in zip(np.unique(y_data), cw)}
    return w
def getDataFromDataset(sFile, OutVar):
# read details file
print('\n-> Read dataset', sFile)
df = pd.read_csv(sFile)
#df = feather.read_dataframe(sFile)
print('Shape', df.shape)
# print(list(df.columns))
# select X and Y
ds_y = df[OutVar]
ds_X = df.drop(OutVar,axis = 1)
Xdata = ds_X.values # get values of features
Ydata = ds_y.values # get output values
print('Shape X data:', Xdata.shape)
print('Shape Y data:',Ydata.shape)
# return data for X and Y, feature names as list
return (Xdata, Ydata, list(ds_X.columns))
def Pipeline_OuterCV(Xdata, Ydata, label = 'my', class_weights = {0: 1, 1: 1}, folds = 3, seed = 42):
# inputs:
# data for X, Y; a label about data, number of folds, seeed
# default: 3-fold CV
# define classifiers
names = ['KNN', 'SVM linear', 'SVM', 'LR', 'DT', 'RF', 'XGB']
classifiers = [KNeighborsClassifier(3),
SVC(kernel="linear",random_state=seed,gamma='scale'),
SVC(kernel = 'rbf', random_state=seed,gamma='auto'),
LogisticRegression(solver='lbfgs',random_state=seed),
DecisionTreeClassifier(random_state = seed),
RandomForestClassifier(n_estimators=50,n_jobs=-1,random_state=seed),
XGBClassifier(n_jobs=-1,seed=seed)
]
# results dataframe: each column for a classifier
df_res = pd.DataFrame(columns=names)
# build each classifier
print('* Building scaling+feature selection+outer '+str(folds)+'-fold CV for '+str(len(names))+' classifiers:', str(names))
total = time.time()
# define a fold-CV for all the classifier
outer_cv = StratifiedKFold(n_splits=folds,shuffle=True,random_state=seed)
# use each ML
for name, clf in zip(names, classifiers):
start = time.time()
# create pipeline: scaler + classifier
estimators = []
# SCALER
estimators.append(('Scaler', RobustScaler() ))
# add Classifier
estimators.append(('Classifier', clf))
# create pipeline
model = Pipeline(estimators)
# evaluate pipeline
scores = cross_val_score(model, Xdata, Ydata, cv=outer_cv, scoring='balanced_accuracy', n_jobs=-1)
df_res[name] = scores
print('%s, MeanScore=%0.2f, Time:%0.1f mins' % (name, scores.mean(), (time.time() - start)/60))
# save results
resFile = './results2/'+str(label)+str(targetName)+'_Outer-'+str(folds)+'-foldCV.csv'
df_res.to_csv(resFile, index=False)
print('* Scores saved', resFile)
print('Total time:', (time.time() - total)/60, ' mins')
# return scores for all classifiers as dataframe (each column a classifier)
return df_res
```
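Before looping over all datasets, the scaler + classifier + outer-CV recipe inside `Pipeline_OuterCV` can be exercised on synthetic data (the toy `X`, `y` below are invented for illustration):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.RandomState(42)
X = rng.randn(120, 5)
y = (X[:, 0] + 0.5 * rng.randn(120) > 0).astype(int)   # noisy linear target

model = Pipeline([('Scaler', RobustScaler()),
                  ('Classifier', LogisticRegression(solver='lbfgs', random_state=42))])
outer_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=outer_cv, scoring='balanced_accuracy')
print(scores.shape)   # one balanced-accuracy score per fold
```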
### Calculations
```
df_results = None # all results
# apply MLs to each data
for OutVar in outVars:
sFile = './datasets2/ds.'+str(OutVar)+'.csv'
# get data from file
Xdata, Ydata, Features = getDataFromDataset(sFile,OutVar)
# Calculate class weights
class_weights = set_weights(Ydata)
print("Class weights = ", class_weights)
# try different folds for each subset -> box plots
for folds in foldTypes:
# calculate outer CV for different binary classifiers
df_fold = Pipeline_OuterCV(Xdata, Ydata, label = OutVar, class_weights = class_weights, folds = folds, seed = seed)
df_fold['Dataset'] = OutVar
df_fold['folds'] = folds
# add each result to a summary dataframe
df_results = pd.concat([df_results,df_fold])
# save the results to file
resFile = './results2/'+'ML_Outer-n-foldCV.csv'
df_results.to_csv(resFile, index=False)
```
### Mean scores
```
# calculate means of ACC scores for each ML
df_means =df_results.groupby(['Dataset','folds'], as_index = False).mean()[['Dataset', 'folds','KNN', 'SVM linear', 'SVM', 'LR', 'DT', 'RF', 'XGB']]
# save averaged values
resFile_means = './results2/'+'ML_Outer-n-foldCV_means.csv'
df_means.to_csv(resFile_means, index=False)
```
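The next cells rely on `idxmax` to find the best-scoring row per classifier; a toy preview of the pattern (values invented):

```python
import pandas as pd

df = pd.DataFrame({'Dataset': ['A', 'A', 'B', 'B'],
                   'folds':   [5, 10, 5, 10],
                   'KNN':     [0.60, 0.65, 0.70, 0.72],
                   'RF':      [0.75, 0.71, 0.68, 0.80]})
best = df[['KNN', 'RF']].idxmax()          # row index of the maximum per column
print(list(df.loc[best['RF'], ['Dataset', 'folds', 'RF']]))  # ['B', 10, 0.8]
```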
### Best ML results
```
# find the maximum value rows for all MLs
bestMLs = df_means[['KNN', 'SVM linear', 'SVM', 'LR', 'DT', 'RF', 'XGB']].idxmax()
print(bestMLs)
# get the best score by ML method
for ML in ['KNN', 'SVM linear', 'SVM', 'LR', 'DT', 'RF', 'XGB']:
print(ML, '\t', list(df_means.iloc[df_means[ML].idxmax()][['Dataset', 'folds', ML]]))
# Add a new column with the original output name (get first 2 characters from Dataset column)
getOutOrig = []
for each in df_means['Dataset']:
getOutOrig.append(each[:2])
df_means['Output'] = getOutOrig
df_means
# save new results including extra column with output variable name
resFile_means2 = './results2/'+'ML_Outer-n-foldCV_means2.csv'
df_means.to_csv(resFile_means2, index=False)
```
### Get the best ML for each type of output
We are checking all 2, 3, 5, 10-fold CV results:
```
for outName in list(set(df_means['Output'])):
print('*********************')
print('OUTPUT =', outName)
df_sel = df_means[df_means['Output'] == outName].copy()
for ML in ['KNN', 'SVM linear', 'SVM', 'LR', 'DT', 'RF', 'XGB']:
print(ML, '\t', list(df_sel.loc[df_sel[ML].idxmax(),:][['Dataset', 'folds', ML]]))
df_sel.loc[df_sel[ML].idxmax(),:]
```
### Get the best ML for each type of output for 10-fold CV
```
df_10fold = df_means[df_means['folds']==10].copy()
df_10fold.head()
for outName in list(set(df_10fold['Output'])):
print('*********************')
print('OUTPUT =', outName)
df_sel = df_10fold[df_10fold['Output'] == outName].copy()
print('MAX =',df_sel[['KNN', 'SVM linear', 'SVM', 'LR', 'DT', 'RF', 'XGB']].max().max())
for ML in ['KNN', 'SVM linear', 'SVM', 'LR', 'DT', 'RF', 'XGB']:
print(ML, '\t', list(df_sel.loc[df_sel[ML].idxmax(),:][['Dataset', 'folds', ML]]))
```
### Get the best ML for each type of output for 5-fold CV
```
df_5fold = df_means[df_means['folds']==5].copy()
df_5fold.head()
for outName in list(set(df_5fold['Output'])):
print('*********************')
print('OUTPUT =', outName)
df_sel = df_5fold[df_5fold['Output'] == outName].copy()
print('MAX =',df_sel[['KNN', 'SVM linear', 'SVM', 'LR', 'DT', 'RF', 'XGB']].max().max())
for ML in ['KNN', 'SVM linear', 'SVM', 'LR', 'DT', 'RF', 'XGB']:
print(ML, '\t', list(df_sel.loc[df_sel[ML].idxmax(),:][['Dataset', 'folds', ML]]))
```
Get only the best values from all MLs for 5- and 10-fold CV:
```
print('5-fold CV')
for outName in list(set(df_5fold['Output'])):
df_sel = df_5fold[df_5fold['Output'] == outName].copy()
print(outName,df_sel[['KNN', 'SVM linear', 'SVM', 'LR', 'DT', 'RF', 'XGB']].max().max())
print('10-fold CV')
for outName in list(set(df_10fold['Output'])):
df_sel = df_10fold[df_10fold['Output'] == outName].copy()
print(outName,df_sel[['KNN', 'SVM linear', 'SVM', 'LR', 'DT', 'RF', 'XGB']].max().max())
```
**Conclusion**: even with **5- and 10-fold CV** we are able to obtain classification models with **ACC > 0.70**, and in one case with **ACC > 0.81**.
|<img style="float:left;" src="http://pierreproulx.espaceweb.usherbrooke.ca/images/usherb_transp.gif" > |Pierre Proulx, ing., professor|
|:---|:---|
|Department of Chemical and Biotechnological Engineering |**GCH200 - Transport Phenomena I**|
### Section 10.6, Heat conduction in a sphere
```
#
# Pierre Proulx
#
# Set up the display and the symbolic computation tools
#
import sympy as sp
from IPython.display import *
sp.init_printing(use_latex=True)
%matplotlib inline
# Parameters, variables and functions
r,k01,k12,k23,h0,h3=sp.symbols('r k_1 k_2 k_3 h_0 h_3')
r0,r1,r2,r3,Ta,Tb=sp.symbols('r_0 r_1 r_2 r_3 T_a T_b')
q=sp.symbols('q')
T=sp.Function('T')(r)
eq1=sp.Eq(k01/r**2*sp.Derivative(r**2*sp.Derivative(T,r)),0)
eq2=sp.Eq(k12/r**2*sp.Derivative(r**2*sp.Derivative(T,r)),0)
eq3=sp.Eq(k23/r**2*sp.Derivative(r**2*sp.Derivative(T,r)),0)
T1=sp.dsolve(eq1).rhs
T2=sp.dsolve(eq2)
T2=T2.subs(sp.symbols('C1'),sp.symbols('C3'))
T2=T2.subs(sp.symbols('C2'),sp.symbols('C4')).rhs
T3=sp.dsolve(eq3)
T3=T3.subs(sp.symbols('C1'),sp.symbols('C5'))
T3=T3.subs(sp.symbols('C2'),sp.symbols('C6')).rhs
display(T1)
display(T2)
display(T3)
# Now apply the boundary conditions to find the 6 constants
cl1=sp.Eq(T1.subs(r,r1)-T2.subs(r,r1), 0)   # equal temperatures at the interior interfaces
cl2=sp.Eq(T2.subs(r,r2)-T3.subs(r,r2), 0)
# equal fluxes at the interior interfaces
cl3=sp.Eq(k01*T1.diff(r).subs(r,r1)-k12*T2.diff(r).subs(r,r1), 0)
cl4=sp.Eq(k12*T2.diff(r).subs(r,r2)-k23*T3.diff(r).subs(r,r2), 0)
# fluxes given by Newton's law of cooling at the outer walls
cl5=sp.Eq(-k01*T1.diff(r).subs(r,r0)+h0*(T1.subs(r,r0)-Ta), 0)
cl6=sp.Eq(-k23*T3.diff(r).subs(r,r3)+h3*(Tb-T3.subs(r,r3)), 0)
constantes=sp.solve((cl1,cl2,cl3,cl4,cl5,cl6),sp.symbols('C1 C2 C3 C4 C5 C6'))
T1=T1.subs(constantes)
T2=T2.subs(constantes)
T3=T3.subs(constantes)
dico={'k_1':4,'k_2':25,'k_3':1,
'h_0':100,'h_3':20,'r_0':0.020,'r_1':0.025,'r_2':0.026,'r_3':0.035,'T_a':100,'T_b':20}
T1p=T1.subs(dico)
T2p=T2.subs(dico)
T3p=T3.subs(dico)
# Compute the heat rates at surfaces 0 and 3 (they must be equal) (watts per metre of length)
#
taux3=(h3*(T3-Tb)*2*sp.pi*r3).subs(dico)  # substitute the numerical values into
taux0=(h0*(Ta-T1)*2*sp.pi*r0).subs(dico)  # the symbolic expression with subs(dico)
#
#
print(taux3.subs(r,r3.subs(dico)), taux0.subs(r,r0.subs(dico)))
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 12, 10
#plt.figure(figsize=(12,10))
p=sp.plot((T1p,(r,r0.subs(dico),r1.subs(dico)))
,(T2p,(r,r1.subs(dico),r2.subs(dico)))
,(T3p,(r,r2.subs(dico),r3.subs(dico)))
          ,legend=True,ylabel='T(r)',xlabel='r',show=False)  # do not display yet
p[0].line_color = 'red'
p[0].label='from r = r_0 to r = r_1'
p[1].line_color = 'black'
p[1].label='from r = r_1 to r = r_2'
p[2].line_color = 'green'
p[2].label='from r = r_2 to r = r_3'
p.show()  # now we are ready to display
```
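The `dsolve` calls above hinge on the general solution of steady one-dimensional conduction in spherical coordinates; a standalone check, independent of the notebook's symbols:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
T = sp.Function('T')

# Steady spherical conduction: d/dr(r^2 dT/dr) = 0, general solution C1 + C2/r
ode = sp.Eq(sp.diff(r**2 * sp.diff(T(r), r), r), 0)
sol = sp.dsolve(ode, T(r))

# Substituting back must make the equation identically zero
residual = sp.simplify(sp.diff(r**2 * sp.diff(sol.rhs, r), r))
print(residual)  # 0
```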
# TTI pure qP-wave equation implementation
The aim of this notebook is to show how to solve the pure qP-wave equation using the finite-difference (FD) scheme. The 2D TTI pure qP-wave equation can be written as ([Mu et al., 2020](https://library.seg.org/doi/10.1190/geo2019-0320.1))
$$\begin{align}
\frac{1}{v_{p}^{2}}\frac{\partial^{2}p(\textbf{x},t)}{\partial t^{2}} = & \,\, (1+2\delta\sin^{2}\theta\cos^{2}\theta + 2\epsilon\cos^{4}\theta)\frac{\partial^{4}q(\textbf{x},t)}{\partial x^{4}} \\
& + (1+2\delta\sin^{2}\theta\cos^{2}\theta + 2\epsilon\sin^{4}\theta)\frac{\partial^{4}q(\textbf{x},t)}{\partial z^{4}} \\
& + (2 - \delta\sin^{2}2\theta+3\epsilon\sin^{2}2\theta+2\delta\cos^{2}\theta)\frac{\partial^{4}q(\textbf{x},t)}{\partial x^{2}\partial z^{2}} \\
& +(\delta\sin4\theta-4\epsilon\sin2\theta\cos^{2}\theta)\frac{\partial^4 q(\textbf{x},t)}{\partial x^{3}\partial z} \\
& +(-\delta\sin4\theta-4\epsilon\sin2\theta\cos^{2}\theta)\frac{\partial^4 q(\textbf{x},t)}{\partial x\partial z^{3}} \\
& + f(\textbf{x}_{s},t),
\end{align}$$
$$
\frac{\partial^{2}q(\textbf{x},t)}{\partial x^{2}} + \frac{\partial^{2}q(\textbf{x},t)}{\partial z^{2}} = p(\textbf{x},t),
$$
where $q(\textbf{x},t)$ is an auxiliary wavefield, which is introduced for implementing the FD scheme.
First of all, it is necessary to import some Devito modules and other packages that will be used in the implementation. We set the Devito logging `configuration['log-level'] = 'DEBUG'` to view all processing times (i.e., compilation and execution of `Operators`).
```
import numpy as np
from devito import (Function, TimeFunction, cos, sin, solve,
Eq, Operator, configuration, norm)
from examples.seismic import TimeAxis
from examples.seismic import RickerSource
from examples.seismic import Receiver
from examples.seismic import demo_model
from matplotlib import pyplot as plt
# Set logging to debug, captures statistics on the performance of operators
#configuration['log-level'] = 'INFO'
configuration['log-level']='DEBUG'
```
We will start with the definitions of the grid and the physical parameters $v_{p}, \theta, \epsilon, \delta$. For simplicity, we don't use any absorbing boundary conditions. We use a homogeneous model. The model is discretized with a grid of $101 \times 101$ and spacing of 10 m. The $v_{p}, \epsilon, \delta$ and $\theta$ parameters of this model are 3600 m/s, 0.23, 0.17, and 45°, respectively.
```
# NBVAL_IGNORE_OUTPUT
dtype = np.float32 # 32 bit floating point as the precision type
space_order = 8
shape = (101,101) # 101x101 grid
spacing = (10.,10.) # spacing of 10 meters
origin = (0.,0.)
nbl = 0 # number of pad points
model = demo_model('constant-tti', spacing=spacing, space_order=8,
shape=shape, nbl=nbl, dtype=dtype)
# initialize Thomsen parameters to those used in Mu et al., (2020)
model.update('vp', np.ones(shape)*3.6)
model.update('epsilon', np.ones(shape)*0.23)
model.update('delta', np.ones(shape)*0.17)
model.update('theta', np.ones(shape)*(45.*(np.pi/180.)))
```
In the cell below, the symbols used in the PDE definition are obtained from the `model` object. Note that Devito's own trigonometric functions are exploited.
```
# Get symbols from model
theta = model.theta
delta = model.delta
epsilon = model.epsilon
m = model.m
# Use trigonometric functions from Devito
costheta = cos(theta)
sintheta = sin(theta)
cos2theta = cos(2*theta)
sin2theta = sin(2*theta)
sin4theta = sin(4*theta)
```
According to [Mu et al., (2020)](https://library.seg.org/doi/10.1190/geo2019-0320.1), the time sampling can be chosen as
$$
\Delta t < \frac{\Delta d}{\pi \cdot (v_{p})_{max}}\sqrt{\dfrac{1}{1+\eta_{max}|\cos\theta-\sin\theta|_{max}^{2}}},
$$
where $\eta_{max}$ denotes the maximum value between $|\epsilon|_{max}$ and $|\delta|_{max}$, and $|\cos\theta - \sin\theta|_{max}$ is the maximum value of $|\cos\theta - \sin\theta|$.
```
# NBVAL_IGNORE_OUTPUT
# Values used to compute the time sampling
epsilonmax = np.max(np.abs(epsilon.data[:]))
deltamax = np.max(np.abs(delta.data[:]))
etamax = max(epsilonmax, deltamax)
vmax = model._max_vp
max_cos_sin = np.amax(np.abs(np.cos(theta.data[:]) - np.sin(theta.data[:])))
dvalue = min(spacing)
```
The next step is to define the simulation time. It has to be short enough to avoid reflections from the borders. Note that we will use the `dt` computed below rather than the one provided by the `critical_dt` property of the `SeismicModel` class, as the latter only works for the coupled pseudoacoustic equation.
```
# Compute the dt and set time range
t0 = 0. # Simulation time start
tn = 160. # Simulation time end (0.16 second = 160 msec)
dt = (dvalue/(np.pi*vmax))*np.sqrt(1/(1+etamax*(max_cos_sin)**2)) # eq. above (cell 3)
time_range = TimeAxis(start=t0,stop=tn,step=dt)
print("time_range; ", time_range)
```
In exactly the same form as in the [Cavity flow with Navier-Stokes]() tutorial, we will use two operators: one for solving the Poisson equation in pseudotime and one for advancing in time. But unlike that tutorial, here we write the FD solution of the Poisson equation manually, without using the `laplace` shortcut or the `solve` functionality (just to vary the routine). The internal time loop can be controlled by supplying the number of pseudotime steps (`niter_poisson` iterations) as a `time` argument to the operator. A Ricker wavelet source with a peak frequency of 20 Hz is located at the center of the model.
```
# NBVAL_IGNORE_OUTPUT
# time stepping
p = TimeFunction(name="p", grid=model.grid, time_order=2, space_order=2)
q = Function(name="q", grid=model.grid, space_order=8)
# Main equations
term1_p = (1 + 2*delta*(sintheta**2)*(costheta**2) + 2*epsilon*costheta**4)*q.dx4
term2_p = (1 + 2*delta*(sintheta**2)*(costheta**2) + 2*epsilon*sintheta**4)*q.dy4
term3_p = (2-delta*(sin2theta)**2 + 3*epsilon*(sin2theta)**2 + 2*delta*(cos2theta)**2)*((q.dy2).dx2)
term4_p = ( delta*sin4theta - 4*epsilon*sin2theta*costheta**2)*((q.dy).dx3)
term5_p = (-delta*sin4theta - 4*epsilon*sin2theta*sintheta**2)*((q.dy3).dx)
stencil_p = solve(m*p.dt2 - (term1_p + term2_p + term3_p + term4_p + term5_p), p.forward)
update_p = Eq(p.forward, stencil_p)
# Poisson eq. (following notebook 6 from CFD examples)
b = Function(name='b', grid=model.grid, space_order=2)
pp = TimeFunction(name='pp', grid=model.grid, space_order=2)
# Create stencil and boundary condition expressions
x, z = model.grid.dimensions
t = model.grid.stepping_dim
update_q = Eq( pp[t+1,x,z],((pp[t,x+1,z] + pp[t,x-1,z])*z.spacing**2 + (pp[t,x,z+1] + pp[t,x,z-1])*x.spacing**2 -
b[x,z]*x.spacing**2*z.spacing**2) / (2*(x.spacing**2 + z.spacing**2)))
bc = [Eq(pp[t+1,x, 0], 0.)]
bc += [Eq(pp[t+1,x, shape[1]+2*nbl-1], 0.)]
bc += [Eq(pp[t+1,0, z], 0.)]
bc += [Eq(pp[t+1,shape[0]-1+2*nbl, z], 0.)]
# set source and receivers
src = RickerSource(name='src',grid=model.grid,f0=0.02,npoint=1,time_range=time_range)
src.coordinates.data[:,0] = model.domain_size[0]* .5
src.coordinates.data[:,1] = model.domain_size[0]* .5
# Define the source injection
src_term = src.inject(field=p.forward,expr=src * dt**2 / m)
rec = Receiver(name='rec',grid=model.grid,npoint=shape[0],time_range=time_range)
rec.coordinates.data[:, 0] = np.linspace(model.origin[0],model.domain_size[0], num=model.shape[0])
rec.coordinates.data[:, 1] = 2*spacing[1]
# Create interpolation expression for receivers
rec_term = rec.interpolate(expr=p.forward)
# Operators
optime=Operator([update_p] + src_term + rec_term)
oppres=Operator([update_q] + bc)
# you can print the generated code for both operators by typing print(optime) and print(oppres)
```
The time steps are advanced through a Python loop in which both operators `optime` and `oppres` are called. Note the use of modulo indices to select the proper buffers. As the operators are applied at each step, the logging detail would be excessive, so it is more convenient to switch from `DEBUG` back to `INFO`.
```
# NBVAL_IGNORE_OUTPUT
configuration['log-level'] = 'INFO'
psave =np.empty ((time_range.num,model.grid.shape[0],model.grid.shape[1]))
niter_poisson = 1200
# This is the time loop.
for step in range(0,time_range.num-2):
q.data[:,:]=pp.data[(niter_poisson+1)%2,:,:]
optime(time_m=step, time_M=step, dt=dt)
pp.data[:,:]=0.
b.data[:,:]=p.data[(step+1)%3,:,:]
oppres(time_M = niter_poisson)
psave[step,:,:]=p.data[(step+1)%3,:,:]
# Some useful definitions for plotting if nbl is set to any other value than zero
nxpad,nzpad = shape[0] + 2 * nbl, shape[1] + 2 * nbl
shape_pad = np.array(shape) + 2 * nbl
origin_pad = tuple([o - s*nbl for o, s in zip(origin, spacing)])
extent_pad = tuple([s*(n-1) for s, n in zip(spacing, shape_pad)])
```
We can plot equally spaced snaps (by `factor`) from the full history saved in `psave` using matplotlib.
```
# NBVAL_IGNORE_OUTPUT
# Note: flip sense of second dimension to make the plot positive downwards
plt_extent = [origin_pad[0], origin_pad[0] + extent_pad[0],
origin_pad[1] + extent_pad[1], origin_pad[1]]
# Plot the wavefields, each normalized to scaled maximum of last time step
kt = (time_range.num - 2) - 1
amax = 0.05 * np.max(np.abs(psave[kt,:,:]))
print("amax; %12.6f" % (amax))
nsnaps = 10
factor = round(time_range.num/nsnaps)
fig, axes = plt.subplots(2, 5, figsize=(18, 7), sharex=True)
fig.suptitle("Snapshots", size=14)
for count, ax in enumerate(axes.ravel()):
snapshot = factor*count
ax.imshow(np.transpose(psave[snapshot,:,:]), cmap="seismic",
vmin=-amax, vmax=+amax, extent=plt_extent)
ax.plot(model.domain_size[0]* .5, model.domain_size[1]* .5, \
'red', linestyle='None', marker='*', markersize=15, label="Source")
ax.grid()
ax.tick_params('both', length=2, width=0.5, which='major',labelsize=10)
ax.set_title("Wavefield at t=%.2fms" % snapshot,fontsize=12)
for ax in axes[1, :]:
ax.set_xlabel("X Coordinate (m)",fontsize=10)
for ax in axes[:, 0]:
ax.set_ylabel("Z Coordinate (m)",fontsize=10)
```
## References
- **Least-squares reverse time migration in TTI media using a pure qP-wave equation** (2020)
<br> Xinru Mu, Jianping Huang, Jidong Yang, Xu Guo, and Yundong Guo
<br> Geophysics, Vol. 85, No. 4
<br> https://doi.org/10.1190/geo2019-0320.1
# 1-1 Intro Python Practice
## Getting started with Python in Jupyter Notebooks
### notebooks, comments, print(), type(), addition, errors and art
<font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
- use Python 3 in Jupyter notebooks
- write working code using `print()` and `#` comments
- write working code using `type()` and variables
- combine strings using string addition (+)
- add numbers in code (+)
- troubleshoot errors
- create character art
#
>**note:** the **[ ]** indicates student has a task to complete
>**reminder:** to run code and save changes: student should upload or clone a copy of notebooks
#### notebook use
- [ ] insert a **code cell** below
- [ ] enter the following Python code, including the comment:
```python
# [ ] print 'Hello!' and remember to save notebook!
print('Hello!')
```
Then run the code - the output should be:
`Hello!`
#### run the cell below
- [ ] use **Ctrl + Enter**
- [ ] use **Shift + Enter**
```
print('watch for the cat')
```
#### Student's Notebook editing
- [ ] Edit **this** notebook Markdown cell replacing the word "Student's" above with your name
- [ ] Run the cell to display the formatted text
- [ ] Run any 'markdown' cells that are in edit mode, so they are easier to read
#### [ ] convert \*this\* cell from markdown to a code cell, then run it
print('Run as a code cell')
## # comments
create a code comment that identifies this notebook, containing your name and the date
#### use print() to
- [ ] print [**your_name**]
- [ ] print **is using python!**
```
# [ ] print your name
# [ ] print "is using Python"
```
Output above should be:
`Your Name
is using Python!`
#### use variables in print()
- [ ] create a variable **your_name** and assign it a string containing your name
- [ ] print **your_name**
```
# [ ] create a variable your_name and assign it a string containing your name
# [ ] print your_name
```
#### create more string variables
- **[ ]** create variables as directed below
- **[ ]** print the variables
```
# [ ] create variables and assign values for: favorite_song, shoe_size, lucky_number
# [ ] print the value of each variable favorite_song, shoe_size, and lucky_number
```
#### use string addition
- **[ ]** print the above string variables (favorite_song, shoe_size, lucky_number) combined with a description by using **string addition**
>for example favorite_song displayed as:
`favorite song is happy birthday`
```
# [ ] print favorite_song with description
# [ ] print shoe_size with description
# [ ] print lucky_number with description
```
##### more string addition
- **[ ]** make a single string (sentence) in a variable called favorite_lucky_shoe using **string addition** with favorite_song, shoe_size, lucky_number variables and other strings as needed
- **[ ]** print the value of the favorite_lucky_shoe variable string
> sample output:
`For singing happy birthday 8.5 times, you will be fined $25`
```
# assign favorite_lucky_shoe using
```
### print() art
#### use `print()` and the asterisk **\*** to create the following shapes
- [ ] diagonal line
- [ ] rectangle
- [ ] smiley face
```
# [ ] print a diagonal using "*"
# [ ] rectangle using "*"
# [ ] smiley using "*"
```
#### Using `type()`
- **[ ]** determine the *type* using `type()`
```
# [ ] display the type of 'your name' (use single quotes)
# [ ] display the type of "save your notebook!" (use double quotes)
# [ ] display the type of "25" (use quotes)
# [ ] display the type of "save your notebook " + 'your name'
# [ ] display the type of 25 (no quotes)
# [ ] display the type of 25 + 10
# [ ] display the type of 1.55
# [ ] display the type of 1.55 + 25
```
#### Find the type of variables
- **[ ]** run the cell below to make the variables available to be used in other code
- **[ ]** display the data type as directed in the cells that follow
```
# assignments ***RUN THIS CELL*** before starting the section
student_name = "Gus"
student_age = 16
student_grade = 3.5
student_id = "ABC-000-000"
# [ ] display the current type of the variable student_name
# [ ] display the type of student_age
# [ ] display the type of student_grade
# [ ] display the type of student_age + student_grade
# [ ] display the current type of student_id
# assign new value to student_id
# [ ] display the current type of student_id
```
#### number integer addition
- **[ ]** create variables (x, y, z) with integer values
```
# [ ] create integer variables (x, y, z) and assign them 1-3 digit integers (no decimals - no quotes)
```
- **[ ]** insert a **code cell** below
- **[ ]** create an integer variable named **xyz_sum** equal to the sum of x, y, and z
- **[ ]** print the value of **xyz_sum**
```
```
### Errors
- **[ ]** troubleshoot and fix the errors below
```
# [ ] fix the error
print("Hello World!"")
# [ ] fix the error
print(strings have quotes and variables have names)
# [ ] fix the error
print( "I have $" + 5)
# [ ] fix the error
print('always save the notebook")
```
## ASCII art
- **[ ]** Display first name or initials as ASCII Art
- **[ ]** Challenge: insert an additional code cell to make an ASCII picture
```
# [ ] ASCII ART
# [ ] ASCII ART
```
[Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft
# An exercise in discretisation and the CFL criterion
*These notebooks have been built from Lorena Barba's Computational Fluid Dynamics module. Here we are going to go from a (simple) equation, to a numerical solution of it. We are then going to look at how changing the resolution impacts the speed and validity of the program.*
*Barba, Lorena A., and Forsyth, Gilbert F. (2018). CFD Python: the 12 steps to Navier-Stokes equations. Journal of Open Source Education, 1(9), 21, https://doi.org/10.21105/jose.00021*
## Step 1: 1-D Linear Convection
The 1-D Linear Convection equation is the simplest, most basic model that can be used to learn something about CFD.
Here it is, where $w$ is the vertical velocity and we're using height, $z$, as the vertical coordinate:
$$\frac{\partial w}{\partial t} + c \frac{\partial w}{\partial z} = 0$$
With given initial conditions (understood as a wave), the equation represents the propagation of that initial wave with speed $c$, without change of shape. Let the initial condition be $w(z,0)=w_0(z)$. Then the exact solution of the equation is $w(z,t)=w_0(z-ct)$.
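As a quick aside, the exact solution is easy to evaluate directly. A standalone sketch (assuming the hat-function initial condition introduced later in this notebook) just shifts the initial profile by $ct$:

```python
import numpy as np

def w_exact(w0, z, t, c):
    # Exact solution of 1-D linear convection: the initial profile
    # translated by c*t, with no change of shape.
    return w0(z - c * t)

# Hat-function initial condition: w = 2 for 0.5 <= z <= 1, w = 1 elsewhere
def w0(z):
    return np.where((z >= 0.5) & (z <= 1.0), 2.0, 1.0)

z = np.linspace(0.0, 2.0, 5)
print(w_exact(w0, z, t=0.25, c=1.0))  # the hat has moved up by c*t = 0.25
```

This gives a reference to compare the numerical scheme against: any deviation from a pure shift is error introduced by the discretisation.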
We discretise this equation in both space and time, using the Forward Difference scheme for the time derivative and the Backward Difference scheme for the space derivative. Consider discretising the spatial coordinate $z$ into points that we index from $i=0$ to $N$, and stepping in discrete time intervals of size $\Delta t$.
From the definition of a derivative (and simply removing the limit), we know that:
$$\frac{\partial w}{\partial z}\approx \frac{w(z+\Delta z)-w(z)}{\Delta z}$$
Our discrete equation, then, is:
$$\frac{w_i^{n+1}-w_i^n}{\Delta t} + c \frac{w_i^n - w_{i-1}^n}{\Delta z} = 0 $$
Where $n$ and $n+1$ are two consecutive steps in time, while $i-1$ and $i$ are two neighboring points of the discretized $z$ coordinate. If there are given initial conditions, then the only unknown in this discretization is $w_i^{n+1}$.
We can solve for our unknown to get an equation that allows us to advance in time, as follows:
$$w_i^{n+1} = w_i^n - c \frac{\Delta t}{\Delta z}(w_i^n-w_{i-1}^n)$$
Now let's try implementing this in Python.
```
import numpy as np
import time
import matplotlib.pyplot as plt
%matplotlib inline
```
First, define a few variables...
(1) Define an evenly spaced grid of points within a spatial domain that is 2 units of length wide, i.e., $z_i\in(0,2)$.
(2) Define a variable nz, which will be the number of grid points we want, and dz, which will be the distance between any pair of adjacent grid points.
```
total_height = 2.0 # height of the model (in m)
dt = 0.025 # dt is the length of each timestep
nz = 41 # define the number of grid points
dz = total_height / (nz-1) # define the distance between any pair of adjacent grid points (delta z)
nt = 20 #nt is the number of timesteps we want to calculate
c = 1. #assume wavespeed of c = 1 m/s
```
Then we need to set up our initial conditions...
The initial velocity $w_0$ is given as $w = 2$ in the interval $0.5 \leq z \leq 1$ and $w = 1$ everywhere else in $(0,2)$ (i.e., a hat function).
```
w_0 = np.ones(nz) #numpy function ones() makes an array
w_0[int(.5 / dz):int(1 / dz + 1)] = 2. #setting w_0 = 2 if 0.5<=z<=1, setting w_0=1 elsewhere
print(w_0) # it shows us a hat function
# Let's take a look at those initial conditions
plt.plot(w_0, np.linspace(0, total_height, nz))
```
Now it's time to implement the discretisation of the convection equation using a finite-difference scheme.
For every element of our array `w`, we need to perform the operation $$w_i^{n+1} = w_i^n - c \frac{\Delta t}{\Delta z}(w_i^n-w_{i-1}^n)$$
We'll store the result in a new (temporary) array `wn`, which will hold the solution $w$ for the next time-step. We will repeat this operation for as many time-steps as we specify and then we can see how far the wave has convected.
(1) Initialise our placeholder array `wn` to hold the values we calculate for the $n+1$ timestep.
(2) We have two iterative operations: one in space and one in time (we'll learn to do this differently later), so we'll start by nesting one loop inside the other. Note: when we write `for i in range(1, nz)` we will iterate through the `w` array, but we'll be skipping the first (zero-th) element.
```
wn = np.ones(nz) #Set the velocity as the initial conditions at the beginning of the run
w = w_0.copy()
# In each timestep(20 timesteps in total), iterate through all the grid points...
#...then repeat the iteration for all the timesteps
for n in range(0, nt): #loop for values of n from 0 to nt, so it will run nt times
wn = w.copy() #copy the existing values of w into wn
    for i in range(1, nz): # starting from the zero-th element would crash, because wn[i-1] wouldn't exist
        w[i] = wn[i] - c * dt / dz * (wn[i] - wn[i-1])
# Now let's try plotting our w array after advancing in time.
plt.plot(w_0,np.linspace(0, total_height, nz),label = "initial conditions")
plt.plot(w,np.linspace(0, total_height, nz),label = "At end of run")
plt.legend()
```
# Exploring convergence and the CFL criterion
Above we used a grid with 41 points (nz = 41) and a timestep is 0.025 seconds (dt = 0.025). You can see that the "hat" function has not just been pushed upwards (as the analytical solution of the equation suggests should happen). It has also been smoothed out a bit, because of a process called ["numerical diffusion"](https://en.wikipedia.org/wiki/Numerical_diffusion). This is where the discretisation we used introduces a spurious spreading out of the single pulse.
The amount of numerical diffusion will depend on the coarseness of our grid. So now, we're going to experiment with increasing the size of our grid to get a more accurate solution.
We can do this by defining a new function, so that we can easily examine what happens as we adjust just one variable: the grid size (nz).
```
# define a function called 'linearconv()'; it allows us to change the number of grid points over a 2 m layer
def linearconv(nz):
dz = 2 / (nz - 1) #dz is the distance between any pair of adjacent grid points
nt = 20 #nt is the number of timesteps we want to calculate
dt = .025 #dt is the amount of time each timestep covers
c = 1
    w = np.ones(nz) #defining a numpy array which is nz elements long with every value equal to 1.
w[int(.5/dz):int(1 / dz + 1)] = 2 #setting w = 2 if 0.5<=z<=1, setting w=1 if 0<z<0.5 or 1<z<2
w_0=w.copy()
    wn = np.ones(nz) #initializing our placeholder array, wn, to hold the values we calculate for the n+1 timestep
for n in range(0, nt): #iterate through time
wn = w.copy() #copy the existing values of w into wn
for i in range(1, nz):
w[i] = wn[i] - c * dt / dz * (wn[i] - wn[i-1]) # using 1-D linear convection equation
plt.plot(w_0,np.linspace(0, 2, nz),label = "initial conditions")
plt.plot(w,np.linspace(0, 2, nz),label = "At end of run")
plt.legend()
```
Now let's examine the results of our linear convection problem with an increasingly fine mesh
```
# Now reproduce the plot above for reference:
linearconv(41) #convection using 41 grid points
# Increase the number of grid points
# still numerical diffusion present, but it is less severe (the curve is less smoothed out).
linearconv(61)
# the same pattern is present -- the wave is more square than in the previous runs
linearconv(71)
#completely changed to square curves
linearconv(81)
linearconv(85)
# This doesn't look anything like our original hat function.
```
Why does this happen?
In each iteration of our time loop, we use the existing data about our wave to estimate the speed of the wave in the subsequent time step. Initially, the increase in the number of grid points returned more accurate answers. There was less numerical diffusion and the square wave looked much more like a square wave than it did in our first example.
Each iteration of our time loop covers a time-step of length $\Delta t$, which we have been defining as 0.025.
During this iteration, we evaluate the speed of the wave at each of the $z$ points we've created. In the last plot, something has clearly gone wrong.
What has happened is that over the time period $\Delta t$, the wave is travelling a distance which is greater than dz.
The length dz of each grid box is related to the number of total points nz, so stability can be enforced if the $\Delta t$ step size is calculated with respect to the size of $dz$.
$$\sigma = \frac{c \Delta t}{\Delta z} \leq \sigma_{\max}$$
where $c$ is the speed of the wave; $\sigma$ is called the Courant number and the value of $\sigma_{\max}$ that will ensure stability depends on the kind of discretisation used. Overall this equation is called the CFL criterion. We will use it to calculate the appropriate time-step $dt$ depending on the vertical resolution.
```
# Re-define the function 'linearconv()' as 'linearconv_CFL(nz)' but make the timestep change dynamically with the grid resolution
def linearconv_CFL(nz):
dz = 2 / (nz - 1) #dz is the distance between two adjacent grid points
run_length = 0.5 # which is the same as before - i.e. 20*0.025
c = 1
sigma = .5 # sigma is a Courant number
dt = sigma * dz # now, the amount of time that each timestep covers, is calculated with respect to the size of dz...
# ...so, stability is enforced (the value of dt now depends on dz)
nt = int(1 + run_length / dt)
    w = np.ones(nz) #defining a numpy array which is nz elements long with every value equal to 1.
w[int(.5/dz):int(1 / dz + 1)] = 2 #setting w = 2 if 0.5<=z<=1, setting w=1 if 0<z<0.5 or 1<z<2
w_0=w.copy()
wn = np.ones(nz)
tic = time.perf_counter() # store the time at the beginning of the loop
for n in range(nt): #iterate through timestep
wn = w.copy()
for i in range(1, nz):
w[i] = wn[i] - c * dt / dz * (wn[i] - wn[i-1])
toc = time.perf_counter() # store the time at the end of the loop
    time_taken_millisec = (toc - tic) * 1e3 # convert the elapsed time from seconds to milliseconds
print(f"The model took {time_taken_millisec:0.4f} milliseconds to run")
plt.plot(w_0,np.linspace(0, 2, nz),label = "initial conditions")
plt.plot(w,np.linspace(0, 2, nz),label = "At end of run")
plt.legend()
return(time_taken_millisec) # return the wallclock time for the model to complete
runtime_nz41=linearconv_CFL(41)
runtime_nz61=linearconv_CFL(61)
runtime_nz81=linearconv_CFL(81)
# Compared to linearconv(41), the number of grid points (nz) has roughly doubled (from 41 to 81)...
# ...which means you have changed to a higher resolution
# The distance between any pair of adjacent grid points (dz) has halved (from 0.05 to 0.025)
# Then, the amount of time each timestep covers (dt) will change as well...
# ...it depends on dz and is also controlled by the value of sigma (in order to enforce stability)...
# ...so, in this example, dt has halved (from 0.025 sec to 0.0125 sec)
# After changing all the variables (nz, dz, dt), iterate through all the grid points in the first timestep...
# ...then do the same iteration for the second timestep...
# ...until all the timesteps are complete
runtime_nz101=linearconv_CFL(101)
runtime_nz121=linearconv_CFL(121)
```
### Summary
Looking at all the plots above, you can see that as the number of grid points ($nz$) increases, the convected wave is resolved more accurately (i.e. it becomes more square).
However there is a serious downside to increasing the resolution - it takes much longer to compute. As the number of vertical grid points is increased, it intuitively makes sense that the model will need more computations. For example, as $nz$ increases from 41 to 121 we have tripled the number of gridpoints. So does the time taken for the computation increase by a factor of 3 as well? Let's find out...
```
factor=runtime_nz121/runtime_nz41
print(factor)
```
No, it isn't just a tripling. And the reason for this is again down to the CFL criterion.
When I ran the code, I got a factor of around 10. The actual value you find will depend on what machine you're using to run this notebook and what else is happening on the machine at the time.
As we reduced the distance between grid points to a third of its original value, we also needed to reduce the timestep by the same factor. Therefore the number of computations has gone up by roughly $3^2$. And that's a theoretical baseline: more computations can run into more inefficiencies and make the run length even longer.
Finally, remember that this is a simple example in 1D. A real climate model is 3D meaning that if you increase the grid resolution by a factor of 3, your number of computations would go up by $3^4$. So you would expect the run to take 81 times as long!
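This back-of-the-envelope scaling can be sketched in a few lines (a hypothetical helper, not part of the notebooks above):

```python
def cost_scaling(refinement, spatial_dims):
    # Refining the grid spacing by `refinement` in every spatial dimension
    # multiplies the number of points by refinement**spatial_dims, and the
    # CFL criterion forces refinement-times more timesteps, so the total
    # work grows by refinement**(spatial_dims + 1).
    return refinement ** (spatial_dims + 1)

print(cost_scaling(3, 1))  # this notebook's 1-D example: 9x the work
print(cost_scaling(3, 3))  # a 3-D model: 81x the work
```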
```
import plotly.express as px
import pandas as pd
import plotly.graph_objects as go
import pickle
from plotly.subplots import make_subplots
import numpy as np
import os
import Loader
from scipy.spatial import ConvexHull, distance_matrix
loader = Loader.Loader(r"C:\Users\logiusti\Lorenzo\Data\ups")
def remap(x):
"""
    TODO: parameterize
"""
return max(0,x-3.5)
def get_one_pof(p0, p, eta, clicks):
"""
    TODO: parameterize
"""
distance = 1-(1/(1+np.linalg.norm(p0-p, 1)))
pof_eta_load = remap(np.sqrt(eta**.5 + clicks**.5))
pof = distance*pof_eta_load**.5
return pof
def get_p0_name(df):
# test points
pts = df[[2, 3, 4]].to_numpy()
    # two points which are furthest apart will occur as vertices of the convex hull
candidates = pts[ConvexHull(pts).vertices]
# get distances between each pair of candidate points
dist_mat = distance_matrix(candidates, candidates)
# get indices of candidates that are furthest apart
i, j = np.unravel_index(dist_mat.argmax(), dist_mat.shape)
    # get the rows of the df corresponding to the two most distant points
tmp_df = df[(df[[2, 3, 4]].to_numpy() == candidates[j]) |
(df[[2, 3, 4]].to_numpy() == candidates[i])]
    # return the one with the lowest clicks and age
return tmp_df.assign(f=tmp_df['eta']**2 * tmp_df['clicks']**2)\
.sort_values('f')\
.drop('f', axis=1)\
.iloc[0]['UPS']
def get_all_pofs(df):
v = []
p0 = df.loc[df['UPS'] == get_p0_name(df)][[2, 3, 4]].to_numpy()
for _, row in df.iterrows():
p = np.array([row[2], row[3], row[4]])
v.append(get_one_pof(p0, p, row['eta'], row['clicks']))
return pd.Series(v)
def load_df(path):
if os.path.isfile(r""+path):
with open(r""+path, "rb") as input_file:
df = pickle.load(input_file)
    ups_to_clicks = pd.DataFrame(list(loader.count_clicks().items()), columns=['UPS', 'clicks'])
    df = df.merge(ups_to_clicks, how='inner', on='UPS')
columns = df.columns.tolist()
    columns = columns[:2] + [columns[-1]] + columns[2:5]#-1] ## this -1 is the desired level of trimming
df['pof'] = get_all_pofs(df)
thermal_runaways = df.loc[df['UPS'] == "EBS2C06_SLASH_BL1"]
thermal_runaways = thermal_runaways.append(df.loc[df['UPS'] == "ESS328_SLASH_5E"])
thermal_runaways = thermal_runaways.append(df.loc[df['UPS'] == "ESS329_SLASH_7E"])
return (thermal_runaways, df)
def make_plot(path, title, use_out=True):
fig = make_subplots(
rows=1, cols=2,
specs=[[{'type': 'scatter3d'}, {'type': 'scatter3d'}]]
)
thermal_runaways, df = load_df(path)
fig.add_scatter3d(x=df[2], y = df[3], z = df[4], marker=dict(color=df['eta'], colorscale='Tealrose'),
hovertext=df['UPS'] + "_" + df['eta'].map(str) + "_" + df['clicks'].map(str),
showlegend=False, name="", mode='markers', row=1,col=1, )
fig.add_scatter3d(x=thermal_runaways[2], y = thermal_runaways[3], z = thermal_runaways[4],
marker=dict(color='rgb(255,0,0)'),
hovertext=thermal_runaways['UPS'] + "_" +
thermal_runaways['eta'].map(str) + "_" +
thermal_runaways['clicks'].map(str),
showlegend=False, name="", mode='markers', row=1,col=1)
fig.add_scatter3d(x=df[2], y = df[3], z = df[4], marker=dict(color=df['pof'], colorscale='Tealrose'),
hovertext=df['UPS'] + "_" + df['pof'].map(str), hoverlabel=dict(bgcolor=px.colors.diverging.Tealrose) ,
showlegend=False, name="", mode='markers', row=1,col=2)
fig.add_scatter3d(x=thermal_runaways[2], y = thermal_runaways[3], z = thermal_runaways[4], marker=dict(color='rgb(255,0,0)'),
hovertext=thermal_runaways['UPS'] + "_" + thermal_runaways['pof'].map(str),
showlegend=False, name="", mode='markers', row=1,col=2)
fig.update_layout(title_text=title)
fig.show()
make_plot(r"C:\Users\logiusti\Lorenzo\PyWorkspace\scripts\Wrapper\data\filtered_dT.pickle", 'Grad')
make_plot(r"C:\Users\logiusti\Lorenzo\PyWorkspace\scripts\Wrapper\data\filtered_energy_of_dTemperature.pickle", 'E')
make_plot(r"C:\Users\logiusti\Lorenzo\PyWorkspace\scripts\Wrapper\data\filtered_signed_total_variation.pickle", 'STV')
make_plot(r"C:/Users/logiusti/Lorenzo/PyWorkspace/scripts/Wrapper/data/filtered_dEnergy.pickle", 'dE')
make_plot(r"C:/Users/logiusti/Lorenzo/PyWorkspace/scripts/Wrapper/data/filtered_dSTV.pickle", 'dSTV')
th, df = load_df(r"C:/Users/logiusti/Lorenzo/PyWorkspace/scripts/Wrapper/data/filtered_dEnergy.pickle")
df['zeta'] = 0.75*df['eta']**.5 + 0.6*df['clicks']**.5
th['zeta'] = 0.75*th['eta']**.5 + 0.6*th['clicks']**.5
fig = go.Figure()
fig.add_trace(go.Scatter(x=df['zeta'], y=df['pof'],hovertext=df['UPS'] + "_" + df['eta'].map(str) + "_" + df['clicks'].map(str),
mode='markers',
name=r'$\frac{\partial T}{\partial t}$'))
fig.add_trace(go.Scatter(x=th['zeta'], y=th['pof'],hovertext=th['UPS'] + "_" + th['eta'].map(str) + "_" + th['clicks'].map(str) ,marker=dict(color='rgb(255,0,0)'),
mode='markers',
name=r'$\frac{\partial T}{\partial t}$'))
fig.show()
dE = set(df.loc[df['pof'] >= .75]['UPS'])
E = {'EAS11_SLASH_8H',
'EAS1_SLASH_8H',
'EAS212_SLASH_MS1',
'EBS11_SLASH_15',
'EBS11_SLASH_25',
'EBS11_SLASH_28',
'EBS11_SLASH_33',
'EBS11_SLASH_45',
'EBS11_SLASH_63',
'EBS11_SLASH_65',
'EBS11_SLASH_67',
'EBS131_STAR_60',
'EBS2C06_SLASH_BL1',
'EBS2Z06_SLASH_BL3',
'EBS31_SLASH_83',
'ESS02_SLASH_15A',
'ESS103_SLASH_1R',
'ESS103_SLASH_2R',
'ESS103_SLASH_3R',
'ESS103_SLASH_4R',
'ESS103_SLASH_5E',
'ESS103_SLASH_6R',
'ESS103_SLASH_7R',
'ESS103_SLASH_8R',
'ESS11_SLASH_5H',
'ESS11_SLASH_P18',
'ESS11_STAR_59',
'ESS1_SLASH_5H',
'ESS21_SLASH_65',
'ESS21_SLASH_83',
'ESS2_SLASH_Y83',
'ESS316_SLASH_7E',
'ESS328_SLASH_5E',
'ESS329_SLASH_7E',
'ESS331_SLASH_5E',
'ESS3_SLASH_Y83',
'ESS406_SLASH_E91',
'ESS407_SLASH_E91'}
E.difference(dE)
th, df = load_df(r"C:/Users/logiusti/Lorenzo/PyWorkspace/scripts/Wrapper/data/filtered_signed_total_variation.pickle")
df['zeta'] = 0.75*df['eta']**.5 + 0.6*df['clicks']**.5
th['zeta'] = 0.75*th['eta']**.5 + 0.6*th['clicks']**.5
fig = go.Figure()
fig.add_trace(go.Scatter(x=df['zeta'], y=df['pof'],hovertext=df['UPS'] + "_" + df['eta'].map(str) + "_" + df['clicks'].map(str),
mode='markers',
name=r'$\frac{\partial T}{\partial t}$'))
fig.add_trace(go.Scatter(x=th['zeta'], y=th['pof'],hovertext=th['UPS'] + "_" + th['eta'].map(str) + "_" + th['clicks'].map(str) ,marker=dict(color='rgb(255,0,0)'),
mode='markers',
name=r'$\frac{\partial T}{\partial t}$'))
fig.show()
STV= set(df.loc[df['pof'] >= .75]['UPS'])
STV
```
| github_jupyter |
Week 7 Notebook: Optimizing Other Objectives
===============================================================
This week, we will look at optimizing multiple objectives simultaneously. In particular, we will look at pivoting with adversarial neural networks {cite:p}`Louppe:2016ylz,ganin2014unsupervised,Sirunyan:2019nfw`.
We will borrow the implementation from: <https://github.com/glouppe/paper-learning-to-pivot>
```
import tensorflow.keras as keras
import numpy as np
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
import uproot
from tqdm.notebook import tqdm
import yaml
with open('definitions.yml') as file:
# The FullLoader parameter handles the conversion from YAML
# scalar values to Python the dictionary format
definitions = yaml.load(file, Loader=yaml.FullLoader)
features = definitions['features']
spectators = definitions['spectators']
labels = definitions['labels']
nfeatures = definitions['nfeatures']
nspectators = definitions['nspectators']
nlabels = definitions['nlabels']
ntracks = definitions['ntracks']
```
## Define discriminator, regression, and combined adversarial models
The combined loss function is $$L = L_\mathrm{class} - \lambda L_\mathrm{reg}$$
- $L_\mathrm{class}$ is the loss function for the classification part (categorical cross entropy)
- $L_\mathrm{reg}$ is the loss function for the adversarial part (in this case a regression)
- $\lambda$ is a hyperparameter that controls how important the adversarial part of the loss is relative to the classification part; we nominally set it to 1
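As a quick numeric illustration of the combined loss, here is a minimal NumPy sketch with made-up predictions and targets (not the notebook's actual batches — the values below are purely illustrative):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    # mean over samples of -sum(y_true * log(y_pred))
    return float(np.mean(-np.sum(y_true * np.log(y_pred + eps), axis=1)))

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

y_class = np.array([[1.0, 0.0], [0.0, 1.0]])  # one-hot class labels
p_class = np.array([[0.8, 0.2], [0.3, 0.7]])  # classifier softmax output
y_reg   = np.array([[1.0, 0.5], [0.2, 0.9]])  # scaled mass/pT targets
p_reg   = np.array([[0.9, 0.6], [0.1, 1.0]])  # adversary predictions

lam = 1.0
L_class = categorical_crossentropy(y_class, p_class)
L_reg   = mse(y_reg, p_reg)
L_total = L_class - lam * L_reg  # the combined loss above
print(L_class, L_reg, L_total)
```

A good classifier drives $L_\mathrm{class}$ down, while a classifier whose output carries no mass/pT information makes the adversary's $L_\mathrm{reg}$ large, so minimizing the difference encourages both.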
```
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, BatchNormalization, Concatenate, GlobalAveragePooling1D
import tensorflow.keras.backend as K
# define Deep Sets model with Dense Keras layer
inputs = Input(shape=(ntracks, nfeatures,), name='input')
x = BatchNormalization(name='bn_1')(inputs)
x = Dense(64, name='dense_1', activation='relu')(x)
x = Dense(32, name='dense_2', activation='relu')(x)
x = Dense(32, name='dense_3', activation='relu')(x)
# sum over tracks
x = GlobalAveragePooling1D(name='pool_1')(x)
x = Dense(100, name='dense_4', activation='relu')(x)
output = Dense(nlabels, name = 'output', activation='softmax')(x)
keras_model_disc = Model(inputs=inputs, outputs=output)
keras_model_disc.compile(optimizer='adam',
loss='categorical_crossentropy')
# regressor
x = Dense(100, name='dense_5', activation='relu')(keras_model_disc(inputs))
x = Dense(100, name='dense_6', activation='relu')(x)
output_reg = Dense(2, activation='linear', name='mass_pt_reg')(x)
sgd_opt = keras.optimizers.SGD(momentum=0)
keras_model_reg = Model(inputs=inputs, outputs=output_reg)
keras_model_reg.compile(optimizer=sgd_opt,
loss='mse')
# combined model
lam = 1
keras_model_adv = Model(inputs=inputs, outputs=[keras_model_disc(inputs), keras_model_reg(inputs)])
keras_model_adv.compile(optimizer=sgd_opt,
loss=['categorical_crossentropy', 'mse'],
loss_weights = [1, -lam])
print(keras_model_disc.summary())
print(keras_model_reg.summary())
print(keras_model_adv.summary())
```
## Load data
```
from DataGenerator import DataGenerator
# load training and validation generators
train_files = ['root://eospublic.cern.ch//eos/opendata/cms/datascience/HiggsToBBNtupleProducerTool/HiggsToBBNTuple_HiggsToBB_QCD_RunII_13TeV_MC/train/ntuple_merged_10.root']
val_files = ['root://eospublic.cern.ch//eos/opendata/cms/datascience/HiggsToBBNtupleProducerTool/HiggsToBBNTuple_HiggsToBB_QCD_RunII_13TeV_MC/train/ntuple_merged_11.root']
train_generator = DataGenerator(train_files, features, labels, spectators, batch_size=1024, n_dim=ntracks,
remove_mass_pt_window=False,
remove_unlabeled=True, max_entry=5000,
return_spectators=True, scale_mass_pt=[100., 10000.])
val_generator = DataGenerator(val_files, features, labels, spectators, batch_size=1024, n_dim=ntracks,
remove_mass_pt_window=False,
remove_unlabeled=True, max_entry=5000,
return_spectators=True, scale_mass_pt=[100., 10000.])
```
## Pretrain discriminator and regressor models
```
# pretrain discriminator
keras_model_disc.trainable = True
keras_model_disc.compile(optimizer='adam',
loss='categorical_crossentropy')
for n_epoch in tqdm(range(20)):
for t in tqdm(train_generator, total=len(train_generator), leave=bool(n_epoch==19)):
keras_model_disc.fit(t[0], t[1][0],verbose=0)
# pretrain regressor
keras_model_reg.trainable = True
keras_model_disc.trainable = False
keras_model_reg.compile(optimizer=sgd_opt, loss='mse')
for n_epoch in tqdm(range(20)):
for t in tqdm(train_generator, total=len(train_generator), leave=bool(n_epoch==19)):
keras_model_reg.fit(t[0], t[1][1], verbose=0)
```
## Main training loop
During the main training loop, we do two things:
1. Train the discriminator model with the combined loss function $$L = L_\mathrm{class} - \lambda L_\mathrm{reg}$$
1. Train the regression model to learn the mass with the standard MSE loss function $$L_\mathrm{reg}$$
```
# alternate training discriminator and regressor
for n_epoch in tqdm(range(40)):
for t in tqdm(train_generator, total=len(train_generator), leave=bool(n_epoch==39)):
# train discriminator
keras_model_reg.trainable = False
keras_model_disc.trainable = True
keras_model_adv.compile(optimizer=sgd_opt,
loss=['categorical_crossentropy', 'mse'],
loss_weights=[1, -lam])
keras_model_adv.fit(t[0], t[1], verbose=0)
# train regressor
keras_model_reg.trainable = True
keras_model_disc.trainable = False
keras_model_reg.compile(optimizer=sgd_opt, loss='mse')
keras_model_reg.fit(t[0], t[1][1],verbose=0)
keras_model_adv.save_weights('keras_model_adv_best.h5')
```
## Test
```
# load testing file
test_files = ['root://eospublic.cern.ch//eos/opendata/cms/datascience/HiggsToBBNtupleProducerTool/HiggsToBBNTuple_HiggsToBB_QCD_RunII_13TeV_MC/test/ntuple_merged_0.root']
test_generator = DataGenerator(test_files, features, labels, spectators, batch_size=8192, n_dim=ntracks,
remove_mass_pt_window=True,
remove_unlabeled=True,
return_spectators=True,
max_entry=200000) # basically, no maximum
# run model inference on test data set
predict_array_adv = []
label_array_test = []
spec_array_test = []
for t in tqdm(test_generator, total=len(test_generator)):
label_array_test.append(t[1][0])
spec_array_test.append(t[1][1])
predict_array_adv.append(keras_model_adv.predict(t[0])[0])
predict_array_adv = np.concatenate(predict_array_adv, axis=0)
label_array_test = np.concatenate(label_array_test, axis=0)
spec_array_test = np.concatenate(spec_array_test, axis=0)
# create ROC curves
print(label_array_test.shape)
print(spec_array_test.shape)
print(predict_array_adv.shape)
fpr_adv, tpr_adv, threshold_adv = roc_curve(label_array_test[:,1], predict_array_adv[:,1])
# plot ROC curves
plt.figure()
plt.plot(tpr_adv, fpr_adv, lw=2.5, label="Adversarial, AUC = {:.1f}%".format(auc(fpr_adv,tpr_adv)*100))
plt.xlabel(r'True positive rate')
plt.ylabel(r'False positive rate')
plt.semilogy()
plt.ylim(0.001, 1)
plt.xlim(0, 1)
plt.grid(True)
plt.legend(loc='upper left')
plt.show()
from utils import find_nearest
plt.figure()
for wp in [1.0, 0.5, 0.3, 0.1, 0.05]:
idx, val = find_nearest(fpr_adv, wp)
plt.hist(spec_array_test[:,0], bins=np.linspace(40, 200, 21),
weights=label_array_test[:,0]*(predict_array_adv[:,1] > threshold_adv[idx]),
alpha=0.4, density=True, label='QCD, {}% FPR cut'.format(int(wp*100)),linestyle='-')
plt.legend()
plt.xlabel(r'$m_{SD}$')
plt.ylabel(r'Normalized probability')
plt.xlim(40, 200)
plt.figure()
for wp in [1.0, 0.5, 0.3, 0.1, 0.05]:
idx, val = find_nearest(fpr_adv, wp)
plt.hist(spec_array_test[:,0], bins=np.linspace(40, 200, 21),
weights=label_array_test[:,1]*(predict_array_adv[:,1] > threshold_adv[idx]),
alpha=0.4, density=True, label='H(bb), {}% FPR cut'.format(int(wp*100)),linestyle='-')
plt.legend()
plt.xlabel(r'$m_{SD}$')
plt.ylabel(r'Normalized probability')
plt.xlim(40, 200)
plt.show()
plt.figure()
plt.hist(predict_array_adv[:,1], bins = np.linspace(0, 1, 21),
weights=label_array_test[:,1]*0.1,
alpha=0.4, linestyle='-', label='H(bb)')
plt.hist(predict_array_adv[:,1], bins = np.linspace(0, 1, 21),
weights=label_array_test[:,0],
alpha=0.4, linestyle='-', label='QCD')
plt.legend()
plt.show()
plt.figure()
plt.hist(spec_array_test[:,0], bins = np.linspace(40, 200, 21),
weights = label_array_test[:,1]*0.1,
alpha=0.4, linestyle='-', label='H(bb)')
plt.hist(spec_array_test[:,0], bins = np.linspace(40, 200, 21),
weights = label_array_test[:,0],
alpha=0.4, linestyle='-', label='QCD')
plt.legend()
plt.show()
```
| github_jupyter |
## TASK-1: Make a class to calculate the maximum height, time of flight and horizontal range of a projectile fired from the ground.
## TASK-2: Use lists to find the maximum height, time of flight and horizontal range for varying values of the angle from 1 degree to 90 degrees.
## TASK-3: Make a plot to show the variation of maximum height, time of flight and horizontal range with the angle of projection.
## TASK-4: Change the lists of [angle], [maximum height], [time of flight] and [horizontal range] into a dictionary and finally into a dataframe using pandas. Save the file on your PC as a csv file.
### Required formula:
### Horizontal range: $R = u^2\sin(2A)/g$
### Time of flight: $T = 2u\sin(A)/g$
### Maximum Height: $H = u^2\sin^2(A)/(2g)$
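A quick numeric sanity check of these formulas (illustrative values: u = 36 m/s, A = 45°, g = 9.8 m/s², matching the speed and gravity used later in this notebook):

```python
import math

u, A, g = 36.0, 45.0, 9.8
rad = math.radians(A)

R = u**2 * math.sin(2 * rad) / g      # horizontal range
T = 2 * u * math.sin(rad) / g         # time of flight
H = (u * math.sin(rad))**2 / (2 * g)  # maximum height

print(round(R, 2), round(T, 2), round(H, 2))
```

At 45° the range is maximal, and H = R/4 there, which is an easy cross-check on the implementation.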
```
import math
import numpy as np
class Projectile():
def __init__(self,u,A,g):
self.u=u
self.A=A
self.g=g
def HorizontalRange(self):
R= (self.u**2) * math.sin(2 * self.A * math.pi/180)/ (self.g)
return R
def TimeofFlight(self):
T= (self.u*2) * math.sin(self.A* math.pi/180) / (self.g)
return T
def MaximumHeight(self):
H=(self.u * math.sin(self.A* math.pi/180))**2 / (self.g*2)
return H
def update_A(self,A):
self.A=A
u=36 #in m/s
g=9.8 #in m/s^2
P = Projectile(36, 0, 9.8 )
R=[] #empty list to collect horizontal range
T=[] #empty list to collect the time of flight
H=[] #empty list to collect the maximum height
N=[] #empty list to collect angle of projection
x=np.arange(0,90+0.1,0.1)
for i in x:
N.append(i)
P.update_A(i)
r=P.HorizontalRange()
t=P.TimeofFlight()
h=P.MaximumHeight()
R.append(r)
T.append(t)
H.append(h)
import matplotlib.pyplot as plt
plt.subplot(2,2,1)
plt.plot(N,R)
plt.xlabel('N')
plt.ylabel('R')
plt.title("Angle of projection with Horizontal Range")
plt.subplot(2,2,2)
plt.plot(N,T)
plt.xlabel('N')
plt.ylabel('T')
plt.title("Angle of projection with Time of Flight")
plt.subplot(2,2,3)
plt.plot(N,H)
plt.xlabel('N')
plt.ylabel('H')
plt.title("Angle of projection with Maximum Distance")
data={} #empty list
data.update({"Angle_of_projection":N,"Horizontal_Range":R,"Time_of_Flight":T,"Maximum_Distance":H})
print(data)
import pandas as pd
Df=pd.DataFrame(data)
print(Df)
Df.to_csv('Projectile.csv')
df=pd.read_csv('Projectile.csv')
df.head()
plt.figure(figsize=[10,10])
plt.subplot(2,2,1)
plt.semilogy(df.Angle_of_projection,df.Horizontal_Range)
plt.xlabel('N')
plt.ylabel('R')
plt.title('Angle of projection with Horizontal Range')
plt.subplot(2,2,2)
plt.semilogy(df.Angle_of_projection,df.Time_of_Flight)
plt.xlabel('N')
plt.ylabel('T')
plt.title('Angle of projecton with Time of Flight')
plt.subplot(2,2,3)
plt.semilogy(df.Angle_of_projection,df.Maximum_Distance)
plt.xlabel('N')
plt.ylabel('H')
plt.title('Angle of projection with Maximum Distance')
```
| github_jupyter |
#Improving Computer Vision Accuracy using Convolutions
In the previous lessons you saw how to do fashion recognition using a Deep Neural Network (DNN) containing three layers -- the input layer (in the shape of the data), the output layer (in the shape of the desired output) and a hidden layer. You experimented with the impact of different sizes of hidden layers, number of training epochs etc. on the final accuracy.
For convenience, here's the entire code again. Run it and take a note of the test accuracy that is printed out at the end.
```
import tensorflow as tf
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images / 255.0
test_images=test_images / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
test_loss = model.evaluate(test_images, test_labels)
```
Your accuracy is probably about 89% on training and 87% on validation...not bad...But how do you make that even better? One way is to use something called Convolutions. I'm not going into detail on Convolutions here, but the ultimate concept is that they narrow down the content of the image to focus on specific, distinct details.
If you've ever done image processing using a filter (like this: https://en.wikipedia.org/wiki/Kernel_(image_processing)) then convolutions will look very familiar.
In short, you take an array (usually 3x3 or 5x5) and pass it over the image. By changing the underlying pixels based on the formula within that matrix, you can do things like edge detection. So, for example, if you look at the above link, you'll see a 3x3 that is defined for edge detection where the middle cell is 8, and all of its neighbors are -1. In this case, for each pixel, you would multiply its value by 8, then subtract the value of each neighbor. Do this for every pixel, and you'll end up with a new image that has the edges enhanced.
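To make this concrete, here is a minimal NumPy sketch of the edge-detection kernel described above, applied to a hypothetical 5x5 image containing a vertical edge (plain loops, no padding — not what Keras does internally, just the arithmetic):

```python
import numpy as np

# 3x3 edge-detection kernel: center 8, all neighbors -1
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])

image = np.zeros((5, 5))
image[:, 2:] = 1.0  # right half bright -> a vertical edge at column 2

# slide the kernel over every 3x3 patch and sum the products
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)

print(out)
# the uniform patch gives 0; patches straddling the edge give -3 and +3
```

Uniform regions cancel out to zero, while the columns straddling the edge produce large responses — exactly the "edges enhanced" effect described above.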
This is perfect for computer vision, because often it's features that can get highlighted like this that distinguish one item from another, and the amount of information needed is then much less...because you'll just train on the highlighted features.
That's the concept of Convolutional Neural Networks. Add some layers to do convolution before you have the dense layers, and then the information going to the dense layers is more focussed, and possibly more accurate.
Run the below code -- this is the same neural network as earlier, but this time with Convolutional layers added first. It will take longer, but look at the impact on the accuracy:
```
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=5)
test_loss = model.evaluate(test_images, test_labels)
```
It's likely gone up to about 93% on the training data and 91% on the validation data.
That's significant, and a step in the right direction!
Try running it for more epochs -- say about 20, and explore the results! But while the results might seem really good, the validation results may actually go down, due to something called 'overfitting' which will be discussed later.
(In a nutshell, 'overfitting' occurs when the network learns the data from the training set really well, but it's too specialised to only that data, and as a result is less effective at seeing *other* data. For example, if all your life you only saw red shoes, then when you see a red shoe you would be very good at identifying it, but blue suede shoes might confuse you...and you know you should never mess with my blue suede shoes.)
Then, look at the code again, and see, step by step how the Convolutions were built:
Step 1 is to gather the data. You'll notice that there's a bit of a change here in that the training data needed to be reshaped. That's because the first convolution expects a single tensor containing everything, so instead of 60,000 28x28x1 items in a list, we have a single 4D list that is 60,000x28x28x1, and the same for the test images. If you don't do this, you'll get an error when training as the Convolutions do not recognize the shape.
```
import tensorflow as tf
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
```
Next is to define your model. Now instead of the input layer at the top, you're going to add a Convolution. The parameters are:
1. The number of convolutions you want to generate. Purely arbitrary, but good to start with something in the order of 32
2. The size of the Convolution, in this case a 3x3 grid
3. The activation function to use -- in this case we'll use relu, which you might recall is the equivalent of returning x when x>0, else returning 0
4. In the first layer, the shape of the input data.
You'll follow the Convolution with a MaxPooling layer which is then designed to compress the image, while maintaining the content of the features that were highlighted by the convolution. By specifying (2,2) for the MaxPooling, the effect is to quarter the size of the image. Without going into too much detail here, the idea is that it creates a 2x2 array of pixels, and picks the biggest one, thus turning 4 pixels into 1. It repeats this across the image, and in so doing halves the number of horizontal pixels and halves the number of vertical pixels, effectively reducing the image to 25% of its original size.
You can call model.summary() to see the size and shape of the network, and you'll notice that after every MaxPooling layer, the image size is reduced in this way.
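The 2x2 max-pooling step can be sketched in plain NumPy on an illustrative 4x4 array (Keras's MaxPooling2D does the equivalent per channel):

```python
import numpy as np

x = np.array([[1, 3, 2, 4],
              [5, 7, 6, 8],
              [9, 2, 1, 0],
              [3, 4, 5, 6]])

# split into non-overlapping 2x2 blocks, then take the max of each block
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[7 8]
#  [9 6]]
```

Each 2x2 block collapses to its largest value, so the 4x4 input becomes 2x2 — a quarter of the original number of pixels, as described above.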
```
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
```
Add another convolution
```
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2)
```
Now flatten the output. After this you'll just have the same DNN structure as the non convolutional version
```
tf.keras.layers.Flatten(),
```
The same 128 dense layers, and 10 output layers as in the pre-convolution example:
```
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
```
Now compile the model, call the fit method to do the training, and evaluate the loss and accuracy from the test set.
```
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)
```
# Visualizing the Convolutions and Pooling
This code will show us the convolutions graphically. The print (test_labels[:100]) shows us the first 100 labels in the test set, and you can see that the ones at index 0, index 23 and index 28 are all the same value (9). They're all shoes. Let's take a look at the result of running the convolution on each, and you'll begin to see common features between them emerge. Now, when the DNN is training on that data, it's working with a lot less, and it's perhaps finding a commonality between shoes based on this convolution/pooling combination.
```
print(test_labels[:100])
import matplotlib.pyplot as plt
f, axarr = plt.subplots(3,4)
FIRST_IMAGE=0
SECOND_IMAGE=7
THIRD_IMAGE=26
CONVOLUTION_NUMBER = 1
from tensorflow.keras import models
layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)
for x in range(0,4):
f1 = activation_model.predict(test_images[FIRST_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[0,x].grid(False)
f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[1,x].grid(False)
f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[2,x].grid(False)
```
EXERCISES
1. Try editing the convolutions. Change the 32s to either 16 or 64. What impact will this have on accuracy and/or training time.
2. Remove the final Convolution. What impact will this have on accuracy or training time?
3. How about adding more Convolutions? What impact do you think this will have? Experiment with it.
4. Remove all Convolutions but the first. What impact do you think this will have? Experiment with it.
5. In the previous lesson you implemented a callback to check on the loss function and to cancel training once it hit a certain amount. See if you can implement that here!
```
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=10)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)
```
| github_jupyter |
Deep Learning Course HSE 2016 fall:
* Arseniy Ashuha, you can text me ```ars.ashuha@gmail.com```,
* ```https://vk.com/ars.ashuha```
* partially reusing https://github.com/ebenolson/pydata2015
<h1 align="center"> Image Captioning </h1>
In this seminar you'll be going through the image captioning pipeline.
To begin with, let us download the dataset of image features from a pre-trained GoogleNet.
```
!wget https://www.dropbox.com/s/3hj16b0fj6yw7cc/data.tar.gz?dl=1 -O data.tar.gz
!tar -xvzf data.tar.gz
```
### Data preprocessing
```
%%time
# Read Dataset
import numpy as np
import pickle
img_codes = np.load("data/image_codes.npy")
captions = pickle.load(open('data/caption_tokens.pcl', 'rb'))
print "each image code is a 1000-unit vector:", img_codes.shape
print img_codes[0,:10]
print '\n\n'
print "for each image there are 5-7 descriptions, e.g.:\n"
print '\n'.join(captions[0])
#split descriptions into tokens
for img_i in range(len(captions)):
for caption_i in range(len(captions[img_i])):
sentence = captions[img_i][caption_i]
captions[img_i][caption_i] = ["#START#"]+sentence.split(' ')+["#END#"]
# Build a Vocabulary
from collections import Counter
word_counts = Counter()
<Compute word frequencies for each word in captions. See code above for data structure>
vocab = ['#UNK#', '#START#', '#END#']
vocab += [k for k, v in word_counts.items() if v >= 5]
n_tokens = len(vocab)
assert 10000 <= n_tokens <= 10500
word_to_index = {w: i for i, w in enumerate(vocab)}
PAD_ix = -1
UNK_ix = vocab.index('#UNK#')
#good old as_matrix for the third time
def as_matrix(sequences,max_len=None):
max_len = max_len or max(map(len,sequences))
matrix = np.zeros((len(sequences),max_len),dtype='int32')+PAD_ix
for i,seq in enumerate(sequences):
row_ix = [word_to_index.get(word,UNK_ix) for word in seq[:max_len]]
matrix[i,:len(row_ix)] = row_ix
return matrix
#try it out on several descriptions of a random image
as_matrix(captions[1337])
```
### Mah Neural Network
```
# network shapes.
CNN_FEATURE_SIZE = img_codes.shape[1]
EMBED_SIZE = 128 #pls change me if u want
LSTM_UNITS = 200 #pls change me if u want
import theano
import theano.tensor as T
# Input Variable
sentences = T.imatrix()# [batch_size x time] of word ids
image_vectors = T.matrix() # [batch size x unit] of CNN image features
sentence_mask = T.neq(sentences,PAD_ix)
import lasagne
from lasagne.layers import *
#network inputs
l_words = InputLayer((None,None),sentences )
l_mask = InputLayer((None,None),sentence_mask )
#embeddings for words
l_word_embeddings = <apply word embedding. use EMBED_SIZE>
#cudos for using some pre-trained embedding :)
# input layer for image features
l_image_features = InputLayer((None,CNN_FEATURE_SIZE),image_vectors )
#convert 1000 image features from googlenet to whatever LSTM_UNITS you have set
#it's also a good idea to add some dropout here and there
l_image_features_small = <convert l_image features to a shape equal to rnn hidden state. Also play with dropout/noize>
assert l_image_features_small.output_shape == (None,LSTM_UNITS)
# Concatenate image features and word embeddings in one sequence
decoder = <a recurrent layer (gru/lstm) with the following checklist>
# * takes word embeddings as an input
# * has LSTM_UNITS units in the final layer
# * has cell_init (or hid init for gru) set to converted image features
# * mask_input = input_mask
# * don't forget the grad clipping (~5-10)
#find out better recurrent architectures for bonus point
# Decoding of rnn hidden states
from broadcast import BroadcastLayer,UnbroadcastLayer
#apply whatever comes next to each tick of each example in a batch. Equivalent to 2 reshapes
broadcast_decoder_ticks = BroadcastLayer(decoder,(0,1))
print "broadcasted decoder shape = ",broadcast_decoder_ticks.output_shape
#predict probabilities for next tokens
predicted_probabilities_each_tick = <predict probabilities for each tick, using broadcasted_decoder_shape as an input. No reshaping needed here.>
# maybe a more complicated architecture will work better?
#un-broadcast back into (batch,tick,probabilities)
predicted_probabilities = UnbroadcastLayer(predicted_probabilities_each_tick,
broadcast_layer=broadcast_decoder_ticks)
print "output shape = ",predicted_probabilities.output_shape
#remove if you know what you're doing (e.g. 1d convolutions or fixed shape)
assert predicted_probabilities.output_shape == (None, None, 10373)
```
### Some tricks
* If you train a large network, it is usually a good idea to make a 2-stage prediction
1. (large recurrent state) -> (bottleneck e.g. 256)
2. (bottleneck) -> (vocabulary size)
* this way you won't need to store/train (large_recurrent_state x vocabulary size) matrix
* Also maybe use Hierarchical Softmax?
* https://gist.github.com/justheuristic/581853c6d6b87eae9669297c2fb1052d
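A back-of-the-envelope on why the bottleneck trick saves parameters (illustrative sizes: `hidden = 2000` is a hypothetically large recurrent state, not this notebook's LSTM_UNITS; `n_tokens` matches the vocabulary size used here):

```python
hidden = 2000     # hypothetical large recurrent state
bottleneck = 256  # intermediate layer from the trick above
n_tokens = 10373  # vocabulary size

direct = hidden * n_tokens                               # one big matrix
two_stage = hidden * bottleneck + bottleneck * n_tokens  # two small ones
print(direct, two_stage)  # 20746000 vs 3167488 -- ~6.5x fewer weights
```

Note the trick only pays off when the recurrent state is larger than the bottleneck; with this notebook's 200 LSTM units a 256-unit bottleneck would actually add parameters.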
```
next_word_probas = <get network output>
predictions_flat = next_word_probas[:,:-1].reshape((-1,n_tokens))
reference_answers = sentences[:,1:].reshape((-1,))
#write symbolic loss function to minimize over NN params
loss = <compute elementwise loss function>
#trainable NN weights
weights = get_all_params(predicted_probabilities,trainable=True)
updates = <parameter updates using your favorite algoritm>
#compile a functions for training and evaluation
#please note that your functions must accept image features as the FIRST param and sentences as the second one
train_step = <function that takes input sentence and image mask, outputs loss and updates weights>
val_step = <function that takes input sentence and image mask and outputs loss>
#for val_step use deterministic=True if you have any dropout/noize
```
# Training
* You first have to implement a batch generator
* Then the network will get trained the usual way
```
captions = np.array(captions)
from random import choice
def generate_batch(images,captions,batch_size,max_caption_len=None):
#sample random numbers for image/caption indices
random_image_ix = np.random.randint(0,len(images),size=batch_size)
#get images
batch_images = images[random_image_ix]
#5-7 captions for each image
captions_for_batch_images = captions[random_image_ix]
#pick 1 from 5-7 captions for each image
batch_captions = map(choice,captions_for_batch_images)
#convert to matrix
batch_captions_ix = as_matrix(batch_captions,max_len=max_caption_len)
return batch_images, batch_captions_ix
generate_batch(img_codes,captions,3)
```
### Main loop
* We recommend you to periodically evaluate the network using the next "apply trained model" block
* it's safe to interrupt training, run a few examples and start training again
```
batch_size=50 #adjust me
n_epochs=100 #adjust me
n_batches_per_epoch = 50 #adjust me
n_validation_batches = 5 #how many batches are used for validation after each epoch
from tqdm import tqdm
for epoch in range(n_epochs):
train_loss=0
for _ in tqdm(range(n_batches_per_epoch)):
train_loss += train_step(*generate_batch(img_codes,captions,batch_size))
train_loss /= n_batches_per_epoch
val_loss=0
for _ in range(n_validation_batches):
val_loss += val_step(*generate_batch(img_codes,captions,batch_size))
val_loss /= n_validation_batches
print('\nEpoch: {}, train loss: {}, val loss: {}'.format(epoch, train_loss, val_loss))
print("Finish :)")
```
### apply trained model
```
#the same kind you did last week, but a bit smaller
from pretrained_lenet import build_model,preprocess,MEAN_VALUES
# build googlenet
lenet = build_model()
#load weights
lenet_weights = pickle.load(open('data/blvc_googlenet.pkl'))['param values']
#python3: pickle.load(open('data/blvc_googlenet.pkl', 'rb'), encoding='latin1')['param values']
set_all_param_values(lenet["prob"], lenet_weights)
#compile get_features
cnn_input_var = lenet['input'].input_var
cnn_feature_layer = lenet['loss3/classifier']
get_cnn_features = theano.function([cnn_input_var], lasagne.layers.get_output(cnn_feature_layer))
from matplotlib import pyplot as plt
%matplotlib inline
#sample image
img = plt.imread('data/Dog-and-Cat.jpg')
img = preprocess(img)
#deprocess and show, one line :)
from pretrained_lenet import MEAN_VALUES
plt.imshow(np.transpose((img[0] + MEAN_VALUES)[::-1],[1,2,0]).astype('uint8'))
```
## Generate caption
```
last_word_probas = <get network-predicted probas at last tick>
#TRY OUT deterministic=True if you want more steady results
get_probs = theano.function([image_vectors,sentences], last_word_probas)
#this is exactly the generation function from week5 classwork,
#except now we condition on image features instead of words
def generate_caption(image,caption_prefix = ("#START#",),t=1,sample=True,max_len=100):
image_features = get_cnn_features(image)
caption = list(caption_prefix)
for _ in range(max_len):
next_word_probs = <obtain probabilities for next words>
assert len(next_word_probs.shape) ==1 #must be one-dimensional
#apply temperature
next_word_probs = next_word_probs**t / np.sum(next_word_probs**t)
if sample:
next_word = np.random.choice(vocab,p=next_word_probs)
else:
next_word = vocab[np.argmax(next_word_probs)]
caption.append(next_word)
if next_word=="#END#":
break
return caption
for i in range(10):
print ' '.join(generate_caption(img,t=5.)[1:-1])
```
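The temperature trick used inside `generate_caption` can be illustrated on its own (a hedged sketch; `apply_temperature` is not part of the notebook's code — it just isolates the `probs**t / sum(probs**t)` step):

```python
import numpy as np

def apply_temperature(probs, t):
    # raise probabilities to the power t and renormalize;
    # t > 1 sharpens the distribution toward the argmax, t < 1 flattens it
    p = np.asarray(probs, dtype=float) ** t
    return p / p.sum()

p = np.array([0.5, 0.3, 0.2])
print(apply_temperature(p, 5.0))  # peaked: most mass on the first word
print(apply_temperature(p, 0.1))  # nearly uniform
```

This is why `t=5.` in the sampling loop below yields fairly deterministic captions, while small `t` produces more diverse (and noisier) ones.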
# Demo
### Find at least 10 images to test it on.
* Seriously, that's part of an assignment. Go get at least 10 pictures to get captioned
* Make sure it works okay on __simple__ images before going to something more complex
* Photos, not animation/3d/drawings, unless you want to train CNN network on anime
* Mind the aspect ratio (see what `preprocess` does to your image)
```
#apply your network on image sample you found
#
#
```
# grading
* base 5 if it compiles and trains without exploding
* +1 for finding representative set of reference examples
* +2 for providing 10+ examples where network provides reasonable captions (at least sometimes :) )
* you may want to predict with sample=False and deterministic=True for consistent results
* kudos for submitting network params that reproduce it
* +2 for providing 10+ examples where network fails IF you also got previous 10 examples right
* bonus points for experiments with architecture and initialization (see above)
* bonus points for trying out other pre-trained nets for captioning
* a whole lot of bonus points if you also train via metric learning
* image -> vec
* caption -> vec (encoder, not decoder)
* loss = correct captions must be closer, wrong ones must be farther
* prediction = choose caption that is closest to image
* a freaking whole lot of points if you also obtain statistically significant results the other way round
* take caption, get closest image
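The metric-learning variant above can be sketched with a hinge-based loss over a batch of matched image/caption embeddings. This is a NumPy sketch under the assumption of L2-normalised embedding vectors; `margin` is a hypothetical hyperparameter, not part of the assignment:

```python
import numpy as np

def contrastive_loss(img_vecs, cap_vecs, margin=0.2):
    """Hinge loss over a batch of matched (image, caption) pairs.

    img_vecs, cap_vecs: (batch, dim) L2-normalised embeddings; row i of each
    array is a matching pair. Correct pairs are pushed to be more similar
    than wrong pairs by at least `margin`.
    """
    scores = img_vecs @ cap_vecs.T          # cosine similarities
    diag = np.diag(scores)                  # similarity of the correct pairs
    # penalties for wrong captions per image, and wrong images per caption
    cost_cap = np.maximum(0, margin + scores - diag[:, None])
    cost_img = np.maximum(0, margin + scores - diag[None, :])
    np.fill_diagonal(cost_cap, 0)           # don't penalise correct pairs
    np.fill_diagonal(cost_img, 0)
    return cost_cap.sum() + cost_img.sum()

# Perfectly aligned embeddings give zero loss:
print(contrastive_loss(np.eye(3), np.eye(3)))  # 0.0
```

Prediction then amounts to choosing the caption whose embedding is closest to the image embedding (or vice versa for the reverse direction).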
<a href="https://colab.research.google.com/github/ZinnurovArtur/Colour-Match/blob/main/Outfit_neural_network.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
import numpy as np
import cv2
import random
from collections import Counter
from tensorflow.keras.models import load_model
import tensorflow as tf
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from tensorflow.keras import backend as K
from tensorflow.keras import losses
from tensorflow.keras.optimizers import Adam
import os
%matplotlib inline
from google.colab import drive
from keras.datasets import fashion_mnist
drive.mount('/content/drive')
image = cv2.imread('/content/drive/MyDrive/Colab Notebooks/datasets/pictures_outfit/bauman/bauman-yeallow.jpg')
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))  # OpenCV loads BGR; convert for correct display colours
path = "/content/drive/MyDrive/datasetTemp/"
originals = []
images_toget = []
mean = np.zeros((224,224,3))
number_ofim = 0
# Accumulate resized originals and a running per-channel pixel sum over both folders
for subdir in ("original2/", "original/"):
    for filename in os.listdir(path + subdir):
        if filename.endswith("png"):
            number_ofim += 1
            print(filename)
            original = cv2.imread(path + subdir + filename)
            original = cv2.resize(original, (224, 224))
            originals.append(original)
            mean += original  # same as summing each of the three channels separately
arrDress = []
arrBody = []
# Collect mask paths from both body folders and both dress folders, in matching order
for subdir in ("body2/", "body/"):
    for filename in os.listdir(path + subdir):
        if filename.endswith("png"):
            arrBody.append(path + subdir + filename)
for subdir in ("dress2/", "dress/"):
    for filename in os.listdir(path + subdir):
        if filename.endswith("png"):
            arrDress.append(path + subdir + filename)
for i in range(len(arrBody)):
body = cv2.imread(arrBody[i],0)
dress = cv2.imread(arrDress[i],0)
dress[dress == 255] = 0
dress[dress > 0] = 255
dress = cv2.resize(dress,(224,224))
body[body == 255] = 0
body[body > 0] = 255
body = cv2.resize(body,(224,224))
skin = body - dress
bg = (255 - body)/255
skin = (255 - skin)/255
dress = (255 - dress)/255
gt = np.zeros((224,224,3))
gt[:,:,0] = (1-skin)
gt[:,:,1] = (1-dress)
gt[:,:,2] = bg
images_toget.append(gt)
mean = mean / number_ofim
print(number_ofim)
mean = mean.astype('int')
import pickle
# pixel mean array
pickle.dump(mean, open(path+"meanArrpixels.pkl", "wb"))
def get_unet():
inputs = Input((None, None, 3))
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1)
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2)
conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool3)
conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool4)
conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5)
up6 = concatenate([Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(conv5), conv4], axis=3)
conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(up6)
conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv6)
up7 = concatenate([Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(conv6), conv3], axis=3)
conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(up7)
conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv7)
up8 = concatenate([Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(conv7), conv2], axis=3)
conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(up8)
conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv8)
up9 = concatenate([Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(conv8), conv1], axis=3)
conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(up9)
conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv9)
conv10 = Conv2D(3, (1, 1), activation='sigmoid')(conv9)
model = Model(inputs=[inputs], outputs=[conv10])
    model.compile(optimizer=Adam(learning_rate=1e-3), loss=losses.binary_crossentropy, metrics=['accuracy'])
return model
model = get_unet()
model.summary()
Xtrain = np.asarray(originals) - mean.reshape(-1,224,224,3)
Ytrain = np.asarray(images_toget).reshape(-1,224,224,3)  # ground-truth masks (targets), not a held-out test set
print(Ytrain.shape)
model = get_unet()
history = model.fit(Xtrain, Ytrain, epochs=120)
model.summary()
model.evaluate(Xtrain, Ytrain)
plt.figure(figsize=[10,5])
plt.subplot(121)
plt.plot(history.history['accuracy'])
print(history.history.keys())
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(['Training Accuracy'])  # only training metrics were recorded (no validation split)
plt.title('Accuracy Curves')
plt.subplot(122)
plt.plot(history.history['loss'])
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(['Training Loss'])
plt.title('Loss Curves')
plt.show()
model.save("/content/drive/MyDrive/Colab Notebooks/datasets/"+"unet.h5")
```
# Edge Computing using Tensorflow and Neural Compute Stick
## "Generate piano sounds using EEG capturing the rhythmic activity of the brain"
### Contents
#### 1. Motivation
#### 2. Signal acquisition
#### 3. Signal postprocessing
#### 4. Synthesize music
##### 4.1 Training Data
##### 4.2 Training data preprocessing
##### 4.3 Neural Network architecture
##### 4.4 Training methodology
#### 5. Error definition and Further development
### 1. Motivation
The following work is inspired by EEG. EEG can be described as rhythmic cortical electrical activity of the brain triggered by perceived sensory stimuli, where that rhythmic activity falls into certain frequency bands (delta to gamma). In sound engineering, signals with dominant frequencies make a pitch, and sequences of pitches create rhythm. Combining these concepts suggests that, by detecting those dominant frequencies, it is possible to listen to our brain through the signals it generates for different stimuli. Using principles of sound synthesis and sampling along with deep neural networks (DNNs), this project attempts to extract the rhythm or pitch hiding within brain waves and reproduce it as piano music.
### 2. Signal acquisition: (Not available)
EEG/EOG recordings are not available. For simplicity, and to build a general working prototype of the model, randomly auto-generated signals are used for testing. This works because the trained DNN is not constrained to brain waves: it applies to any kind of signal with dominant frequencies. A piano dataset available for non-commercial use is used during the training and evaluation phases.
### 3. Signal Postprocessing (idea proposed)
Research has established that "brain waves are rhythmic" [2] and that they fall in frequency bands from delta (<4 Hz) to gamma (>30-100 Hz). The human audible frequency range is 20 Hz - 20 kHz. Hence, increasing the acquired EEG frequencies by a constant factor and preprocessing them at a sampling rate of 44100 Hz makes them resemble piano sounds (fundamental frequency range 27.5 - 4186.01 Hz), which lie within the human audible range. The processed brain signals are then saved as NumPy arrays and converted to .wav files to reproduce the sound, from which the signal's (now sound's) information (frequencies, sampling rate and pitch) can be extracted. If we succeed in regenerating the sounds, then since the signal frequencies were multiplied by a constant (to fit the piano data), the sounds play faster; the playback rate therefore has to be divided by the same factor to recover the timing of the original brain signal.
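A minimal sketch of this frequency-shifting idea, assuming a synthetic EEG-like signal: the sampling rate at which the samples are written (not the samples themselves) determines playback speed, so declaring a higher rate multiplies every frequency component. The sampling rate and shift factor here are assumptions for illustration, not values from this report:

```python
import numpy as np
from scipy.io import wavfile

fs_eeg = 256                                          # assumed EEG sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs_eeg)
eeg = np.sin(2 * np.pi * 10 * t).astype(np.float32)   # synthetic 10 Hz alpha-like wave

shift = 100                                           # assumed constant multiplier
# Declaring a higher sampling rate on write plays the same samples faster,
# multiplying every frequency component by `shift` (10 Hz -> 1 kHz, audible).
wavfile.write('eeg_shifted.wav', fs_eeg * shift, eeg)
# To match the original brain signal on playback, divide the rate by `shift` again.
rate, data = wavfile.read('eeg_shifted.wav')
print(rate, len(data) / rate)                         # 25600 samples/s, 0.02 s of audio
```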
### 4. Synthesize music
#### 4.1 Training data
A piano-chords dataset available to the public for non-commercial purposes [3]. Each piano .wav file in the dataset is sampled at 44100 Hz and has a varying data length. The data are analysed and studied in more detail in the code blocks below.
#### 4.2 Training data preprocessing
###### Import required python libraries and add the current working directory to python path and system paths
Directory structure
<br>
<br>
Wavenet/
- dataset/ (downloaded piano chords)
  - UMAPiano-DB-Poly-1/UMAPiano-DB-A0-NO-F.wav
- clipped_data/ (clipped piano sounds are here)
- wavenet_logs/ (TensorFlow checkpoints and logs)
```
%matplotlib inline
from __future__ import division
import numpy as np
import tensorflow as tf
import scipy.io
import matplotlib
import matplotlib.pyplot as plt
import os
import sys
import random
import scipy.io.wavfile
import scipy
matplotlib.rcParams['figure.figsize'] = (8.0, 6.0)
#-------------------------------------Add working directory to path-----------------------------------------------
cwd = os.getcwd()
sys.path.append(cwd)
sys.path.insert(0,'E:/!CogSci/!!!WS2017/Edge_computing/Wavenet')
sys.path.insert(0,'C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/dataset')
sys.path.insert(0,'E:/!CogSci/!!!WS2017/Edge_computing/Wavenet/clipped_data')
# Save the variables in a log/directory during training
save_path = "C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/wavenet_logs"
if not os.path.exists(save_path):
os.makedirs(save_path)
```
Each piano file in the dataset is approximately 1-2 seconds long. We used SciPy to read each music file and obtain its sampling rate and data array, and found that all audio files have a sampling rate of 44100 Hz while the data length varies with the audio duration. To train the DNN, all training data must have the same length, and the sampling rate is increased to prevent signal loss/corruption. The code below gathers this first information about the piano dataset.
```
# Location of the wav file in the file system.
fileName1 = 'E:/!CogSci/!!!WS2017/Edge_computing/Wavenet/dataset/UMAPiano-DB-Poly-1/UMAPiano-DB-A0-NO-F.wav'
fileName2 = 'E:/!CogSci/!!!WS2017/Edge_computing/Wavenet/dataset/UMAPiano-DB-Poly-1/UMAPiano-DB-A0-NO-M.wav'
# Loads sample rate (bps) and signal data (wav).
sample_rate1, data1 = scipy.io.wavfile.read(fileName1)
sample_rate2, data2 = scipy.io.wavfile.read(fileName2)
# Print to stdout the sample rate, number of items and duration in seconds of the wav file
print("Sample rate1 %s data size1 %s duration1: %s seconds"%(sample_rate1,data1.shape,len(data1)/sample_rate1))
print("Sample rate2 %s data size2 %s duration2: %s seconds"%(sample_rate2,data2.shape,len(data2)/sample_rate2))
print("DATA SIZES ARE DIFFERENT NEEDS TO BE CONSIDERED")
# Plot both wave files to get insight about the samples
plt.plot(data1)
plt.plot(data2)
plt.show()
```
Looking at the plot above, it is clear that there is no signal information at the head and tail of the piano data. We can clip it safely, which reduces computation and memory usage. I also renamed all the data files to numbers for convenience. Then, in the code block below, I found the shortest and longest files in order to fix the varying-length problem.
```
"""
dataset_path = 'E:/!CogSci/!!!WS2017/Edge_computing/Wavenet/dataset/UMAPiano-DB-Poly-1'
dir_list_len = len(os.listdir(dataset_path))
print("Number of files in the Dataset ",dir_list_len)
# Change file names to be easily recognized
def change_filenames(dataset_path):
i = 0 # Counter and target filename
for old_name in os.listdir(dataset_path):
# os.rename(dataset_path + "/" + old_name, dataset_path + "/" + str(i) + '.wav')
os.rename(os.path.join(dataset_path, old_name), os.path.join(dataset_path, str(i) + '.wav'))
i+=1
change_filenames(dataset_new)
list_sizes_new =[]
for data_new in os.listdir(dataset_new):
_,data_new = scipy.io.wavfile.read(dataset_new+'/'+data_new)
list_sizes_new.append(data_new.shape[0])
print("Maximum size %s and the music file is",np.argmax(list_sizes_new))
"""
dataset_new = 'C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/dataset'
list_sizes =[]
for datas in os.listdir(dataset_new):
_,data_new = scipy.io.wavfile.read(os.path.join(dataset_new,datas))
list_sizes.append(data_new.shape[0])
if data_new.shape[0]== 39224:
print("Minimum sized file is",datas)
if data_new.shape[0] == 181718:
print("Max sized file is",datas)
print("Maximum size %s "%(max(list_sizes)))
print("Minimum size %s "%(min(list_sizes)))
print("Dataset is in C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/dataset and all the files are numbered")
# -------------------------Get some insights and information about the max and min sized data-----------------------------
# Location of the wav file in the file system.
fileName3 = 'C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/dataset/356.wav'
fileName4 = 'C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/dataset/722.wav'
# Loads sample rate (bps) and signal data (wav).
sample_rate3, data3 = scipy.io.wavfile.read(fileName3)
sample_rate4, data4 = scipy.io.wavfile.read(fileName4)
# Print to stdout the sample rate, number of items and duration in seconds of the wav file
print("Sample rate3 %s data size3 %s duration3: %s seconds"%(sample_rate3,data3.shape,len(data3)/sample_rate3))
print("Sample rate4 %s data size4 %s duration4: %s seconds"%(sample_rate4,data4.shape,len(data4)/sample_rate4))
print("Data sizes are different")
# Plot the largest wave file to get insight about the sample
plt.plot(data4)
plt.show()
print("Safe to clip first 10000 sample points out from the array and convert them back to .wav file")
```
As we can see, even the smallest piano file has 20k zero values at head and tail combined. Hence it is safe to clip the first and last 10k indices from all files and save them back as .wav files. We could also add a small amount of noise to the training data at this step; the reason is discussed briefly later.
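One possible way to inject that noise, as a rough sketch; the `sigma` scale is an assumption for illustration, not a value from this report:

```python
import numpy as np

def add_noise(data, sigma=0.005):
    """Additive Gaussian noise scaled to the signal's peak amplitude (light augmentation)."""
    scale = sigma * np.abs(data).max()
    noisy = data + np.random.normal(0.0, scale, size=data.shape)
    return noisy.astype(data.dtype)            # keep the original int16 wav dtype

# Demo on a fake int16 signal the same shape as a clipped wav array
clean = np.random.randint(-20000, 20000, size=1000).astype(np.int16)
noisy = add_noise(clean)
print(noisy.shape, noisy.dtype)
```

In the clipping function above, this would be applied to `data` just before `scipy.io.wavfile.write`, at the point marked by the "IF ADD NOISE" comment.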
```
#----------------------- .WAV training data preprocessing steps ----------------------
import IPython
# Clip the first and last 10000 values which doesn't show any informations
"""
def clip_write_wav(dataset_path):
i = 0 # Counter and target filename
for datas in os.listdir(dataset_path):
_,data = scipy.io.wavfile.read(dataset_path+'/'+datas)
data= data[:-10000] # Slice out last 10000 elements in data
data= data[10000:] # Slice out first 10000 elements in the data
#IF ADD NOISE DO it here in the data which is an array.
scipy.io.wavfile.write('C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/clipped_data/%i.wav'%i, 44100, data)
i+=1
"""
_dataset = 'C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/dataset'
_target = 'C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/clipped_data'
clip_points = 10000
_sampling_rate = 44100
# clip_write_wav(_dataset) # Uncomment this line to clip and write the wav files again
# Verify required informations again
sample_rate3, data3 = scipy.io.wavfile.read('C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/clipped_data/3.wav')
print("Sample rate %s data size %s duration: %s seconds"%(sample_rate3,data3.shape,len(data3)/sample_rate3))
plt.plot(data3)
plt.show()
#Play the audio inline
IPython.display.Audio('C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/clipped_data/3.wav')
```
The data are clipped and now have shorter heads and tails. Next we increase the sampling rate (using the `write_wav` function below) and fix the varying lengths by taking the longest file as a reference and zero-padding the other files to match it; the padding is done while feeding the DNN, in the `get_training_data` function below.
<br>
However, the SciPy read/write steps do not preserve the file indices in the dataset, as the largest and smallest file names reported by the code blocks above and below differ. So I hard-coded the smallest and largest sizes and searched for the corresponding files.
```
# ------------- Search for the largest and smallest files --------------
_dataset_new = 'C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/clipped_data'
_list_sizes =[]
for datas in os.listdir(_dataset_new):
_,_data_new = scipy.io.wavfile.read(os.path.join(_dataset_new,datas))
_list_sizes.append(_data_new.shape[0])
if _data_new.shape[0]== 19224:
print("Minimum sized file is",datas)
if _data_new.shape[0] == 161718:
print("Max sized file is",datas)
print("Maximum size %s "%(max(_list_sizes)))
print("Minimum size %s "%(min(_list_sizes)))
print("Notice that io read and write doesn't preserve the index of files in the directory")
# ------------------------ Upsample the data -----------------------------
"""
def write_wav(dataset_path):
i=0
for datas in os.listdir(dataset_path):
_,data = scipy.io.wavfile.read(dataset_path+'/'+datas)
#IF ADD NOISE DO it here in the data which is an array.
scipy.io.wavfile.write('C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/upsampled_data/%i.wav'%i, 88000, data)
i+=1
write_wav(_dataset_new)
"""
# ----------------- Verifying data integrity again -----------------------
sampled_datapath ='C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/upsampled_data'
_list_sizes =[]
for datas in os.listdir(sampled_datapath):
sampling_rate,_data_new = scipy.io.wavfile.read(os.path.join(sampled_datapath,datas))
_list_sizes.append(_data_new.shape[0])
if _data_new.shape[0]== 19224:
print("Minimum sized file is %s and sampling rate"%datas,sampling_rate)
elif _data_new.shape[0] == 161718:
print("Max sized file is %s and sampling rate"%datas,sampling_rate)
print("Maximum size %s "%(max(_list_sizes)))
print("Minimum size %s "%(min(_list_sizes)))
# Verify required informations again
sample_rate5, data5 = scipy.io.wavfile.read('C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/upsampled_data/3.wav')
print("Sample rate %s data size %s duration: %s seconds"%(sample_rate5,data5.shape,len(data5)/sample_rate5))
plt.plot(data5)
plt.show()
#Play the audio inline
IPython.display.Audio('C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/clipped_data/3.wav')
```
Since we use stacks of CNNs in the encoder, I decided to convert each file into a 512*512 matrix, which requires each file to have 262144 entries. So, instead of using the largest file as the reference, I chose 262144 as the length limit for all files. The `get_training_data` function serves this purpose.
```
# Each audio file should have 262144 entries. Extend them all with zeros in the tail
# Convert all audio files as matrices of 512x512 shape
def get_training_data(dataset_path):
training_data = []
for datas in os.listdir(dataset_path):
_,data = scipy.io.wavfile.read(dataset_path+'/'+datas)
# Add Zeros at the tail until 262144
temp_zeros = [0]*262144
temp_zeros[:len(data)] = data # Slice temp_zeros and add the data into the slice
# Reshape the data as square matrix of 512*512 of size 262144
data_ = np.reshape(temp_zeros,(512,512))
training_data.append(data_)
return training_data
training_data = get_training_data('C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/upsampled_data')
print(training_data[0].shape)
# Expand the dims # The third dimension represents number of channels
for i in range(len(training_data)):
training_data[i] = training_data[i][:,:,np.newaxis]
print(training_data[0].shape)
```
The training data is ready to be fed into the network, but we still require the pitch information for each training sample, since the network architecture we use requires it during training. The `HarmonicPowerSpectrum` class and the next two code blocks bandpass-filter the signal to ease pitch detection.
```
# Get pitch of corresponding data
"""
Steps to extract the pitches of input signal:
Reference:
https://stackoverflow.com/questions/43946112/slicing-audio-signal-to-detect-pitch
1. Detect the fundamental frequencies "f0 estimation" (For piano, lowest freq - 27.5 and highest - 4186.01 Hz)
2. Get ride of garbage transients and low frequency noise using bandpass filter
3. After filtering do the peak detection using fft to find the pitches
"""
# 1. Fundamental frequencies [27.5,4186.01] Hz
# 2. Build bandpass fileter
from scipy.signal import butter, lfilter
def butter_bandpass(f0, fs, order):
"""Give the Sampling freq(fs),Bandpass window(f0) of filter, build the bandpass filter"""
nyq = 0.5 * fs
low = f0[0] / nyq
high = f0[1] / nyq
b, a = butter(order, [low, high], btype='band') # Numerator (b) and denominator (a) polynomials of the IIR filter
return b, a
def butter_bandpass_filter(sig, f0, fs, order):
""" Apply bandpass filter to the given signal"""
b, a = butter_bandpass(f0, fs,order)
y = lfilter(b, a, sig) # Apply the filter to the signal
return y
# Verify filter signal
sig = data5
f0= (27.5, 4186.01) # Fundamental freq of piano
fs = sample_rate5 # sampling rate of .wav files in the preprocessed training dataset
order = 1
b, a = butter_bandpass(f0, fs, order=1) # Numerator (b) and denominator (a) polynomials of the IIR filter
filtered_sig= butter_bandpass_filter(sig, f0,fs,order=1)
# Plot some range of samples from both raw signal and bandpass fitered signal.
plt.plot(sig[10000:10500], label='training signal')
plt.plot(filtered_sig[10000:10500], label='Bandpass filtered signal with order %d'% order)
plt.legend(loc='upper left')
# orders = [1,2,3,4,5]
# for order in orders:
# filtered_sig= butter_bandpass_filter(sig, f0,fs,order) # Bandpass filtered signal
# plt.plot(data5[10000:10500], label='training signal')
# plt.plot(filtered_sig[10000:10500], label='Bandpass filtered signal with order %d'% order)
# plt.legend(loc='upper left')
print("A bandpass filter of order 1 looks okay. We do not want to lose much information in the data by filtering it with higher orders")
# Reference :https://github.com/pydanny/pydanny-event-notes/blob/master/Pycon2008/intro_to_numpy/files/pycon_demos/windowed_fft/short_time_fft_solution.py
# Get frequency components of the data using Short time fourier transform
from scipy.fftpack import fft, fftfreq, fftshift
from scipy.signal import get_window
from math import ceil
from pylab import figure, imshow, clf, gray, xlabel, ylabel
sig = data5
f0= (27.5, 4186.01) # Fundamental freq of piano
fs = sample_rate5 # sampling rate of .wav files in the preprocessed training dataset
def freq_comp(signal,sample_rate):
# Define the sample spacing and window size.
dT = 1.0/sample_rate
T_window = 50e-3 # 50ms ; window time frame
    N_window = int(T_window * sample_rate)  # e.g. 4400 samples at 88000 Hz
N_data = len(signal)
# 1. Get the window profile
window = get_window('hamming', N_window) # Multiply the segments of data using hamming window func
# 2. Set up the FFT
result = []
start = 0
while (start < N_data - N_window):
end = start + N_window
result.append(fftshift(fft(window*signal[start:end])))
start = end
result.append(fftshift(fft(window*signal[-N_window:])))
result = np.array(result,result[0].dtype)
return result
freq_comp_unfiltered = freq_comp(sig,fs)
freq_comp_filtered = freq_comp(filtered_sig,fs)
plt.figure(1)
plt.plot(freq_comp_unfiltered)
plt.title("Unfiltered Frequency componenets of the training signal")
plt.show()
plt.figure(2)
plt.plot(freq_comp_filtered)
plt.title("Filtered frequency component of the training signal")
plt.show()
# # Display results
# freqscale = fftshift(fftfreq(N_window,dT))[150:-150]/1e3
# figure(1)
# clf()
# imshow(abs(result[:,150:-150]),extent=(freqscale[-1],freqscale[0],(N_data*dT-T_window/2.0),T_window/2.0))
# xlabel('Frequency (kHz)')
# ylabel('Time (sec.)')
# gray()
# Reference: http://musicweb.ucsd.edu/~trsmyth/analysis/Harmonic_Product_Spectrum.html
# Get the fundamental frequency(peak frequency) of the training data
import parabolic
from pylab import subplot, plot, log, copy, show
# def hps(sig,fs,maxharms):
# """
# Estimate peak frequency using harmonic product spectrum (HPS)
# """
# window = sig * scipy.signal.blackmanharris(len(sig))
# # Harmonic product spectrum: Measures the maximum coincidence for harmonics for each spectral frame
# c = abs(np.fft.rfft(window)) # Compute the one-dimensional discrete Fourier Transform for real input.
# plt.plot(c)
# plt.title("Discrete fourier transform of signal")
# plt.figure()
# pitch = np.log(c)
# plt.plot(pitch)
# plt.title("Max Harmonics for the range same as fundamental frequencies")
# # Search for a maximum value of a range of possible fundamental frequencies
# # for x in range(2, maxharms):
# # a = copy(c[::x]) # Should average or maximum instead of decimating
# # c = c[:len(a)]
# # i = np.argmax(abs(c))
# # c *= a
# # plt.title("Max Harmonics for the range of %d times the fundamental frequencies"%x)
# # plt.plot(maxharms, x)
# # plt.plot(np.log(c))
# # show()
# hps(butter_bandpass_filter(sig,f0, fs,order = 1),fs,maxharms=0)
# print(" As usual we opt to choose the same range as fundamental frequecies to make sure we dont loss much informations")
# Wrap them all in one class HarmonicPowerSpectrum
class HarmonicPowerSpectrum(object):
def __init__(self,sig,f0,fs,order,maxharms):
self.sig = sig
self.f0 = f0
self.fs = fs
self.order = order
self.maxharms = maxharms
@property
def butter_bandpass(self):
"""Give the Sampling freq(fs),Bandpass window(f0) of filter, build the bandpass filter"""
        nyq = 0.5 * self.fs  # Nyquist frequency
low = self.f0[0] / nyq
high = self.f0[1] / nyq
b, a = butter(self.order, [low, high], btype='band') # Numerator (b) and denominator (a) polynomials of the IIR filter
return b, a
@property
def butter_bandpass_filter(self):
""" Apply bandpass filter to the given signal"""
b, a = self.butter_bandpass
y = lfilter(b, a, self.sig) # Apply the filter to the signal
return y
@property
def hps(self):
"""Estimate peak frequency using harmonic product spectrum (HPS)"""
y = self.butter_bandpass_filter
window = y * scipy.signal.blackmanharris(len(y)) #Create window to search harmonics in signal slices
# Harmonic product spectrum: Measures the maximum coincidence for harmonics for each spectral frame
c = abs(np.fft.rfft(window)) # Compute the one-dimensional discrete Fourier Transform for real input.
z = np.log(c) # Fundamental frequency or pitch of the given signal
return z
z = HarmonicPowerSpectrum(sig, f0, fs, order = 1,maxharms=0)
harm_pow_spec = z.hps
plt.figure(1)
plt.plot(harm_pow_spec)
plt.title("Max harmonics for the same range as the fundamental frequencies, bandpass filtered with order 1 and max harmonic spectrum 0")
freq_comp_hps = freq_comp(harm_pow_spec,fs)
plt.figure(2)
plt.plot(freq_comp_hps)
plt.title("""Frequency components (in logarithmic scale) of the harmonic spectrum of the filtered training data.
A harmonic set of two pitches contributes significantly to this piano chord""")
plt.show()
```
Hence, I updated the `get_training_data` function to perform pitch detection using the `HarmonicPowerSpectrum` analyser, as seen below.
```
# Each audio file should have 262144 entries. Extend them all with zeros in the tail
# Convert all audio files as matrices of 512x512 shape
def get_training_data(dataset_path, f0, fs, order = 1,maxharms=0):
training_data = []
pitch_data = []
for datas in os.listdir(dataset_path):
_,data = scipy.io.wavfile.read(dataset_path+'/'+datas)
        # Add zeros at the tail until 262144
temp_zeros_data = [0]*262144
# print("Unpadded data len",len(data))
# print(len(temp_zeros))
temp_zeros_data[:len(data)] = data # Slice temp_zeros and add the data into the slice
# print("Padded data len",len(temp_zeros))
# print(np.shape(temp_zeros))
        # Reshape the data as a square matrix of 512*512 (size 262144)
data_ = np.reshape(temp_zeros_data,(512,512))
# Get pitch of the signal
z = HarmonicPowerSpectrum(temp_zeros_data, f0, fs, order = 1,maxharms=0)
harm_pow_spec = z.hps
training_data.append(data_)
pitch_data.append(harm_pow_spec)
return training_data,pitch_data
training_data,pitch_data = get_training_data('C:/Users/Saran/!!!!!!!!!!!!!Edge_computing/Wavenet/upsampled_data',f0, fs, order = 1,maxharms=0)
print(training_data[0].shape)
# Expand the dims # The third dimension represents number of channels
for i in range(len(training_data)):
training_data[i] = training_data[i][:,:,np.newaxis]
print(training_data[0].shape)
```
# Customizing and controlling xclim
xclim's behaviour can be controlled globally or contextually through `xclim.set_options`, which acts the same way as `xarray.set_options`. For the extension of xclim with the addition of indicators, see the [Extending xclim](extendxclim.ipynb) notebook.
```
import xarray as xr
import xclim
from xclim.testing import open_dataset
```
Let's create fake data with some missing values and mask every 10th, 20th and 30th day of the month. This represents 9.7-10% masked data for all months except February, where it is 7.1%.
```
tasmax = (
xr.tutorial.open_dataset("air_temperature")
.air.resample(time="D")
.max(keep_attrs=True)
)
tasmax = tasmax.where(tasmax.time.dt.day % 10 != 0)
```
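The masked-data percentages quoted above follow from quick arithmetic (a sanity check, not code from the original notebook):

```python
# Days 10, 20 and 30 are masked; February (28 days here) only has days 10 and 20.
for n_days, n_masked in [(31, 3), (30, 3), (28, 2)]:
    print(f"{n_days}-day month: {100 * n_masked / n_days:.1f}% masked")
# 31-day month: 9.7% masked
# 30-day month: 10.0% masked
# 28-day month: 7.1% masked
```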
## Checks
Above, we created fake temperature data from an xarray tutorial dataset that doesn't have all the standard CF attributes. By default, when triggering a computation with an Indicator from xclim, warnings will be raised:
```
tx_mean = xclim.atmos.tx_mean(tasmax=tasmax, freq="MS") # compute monthly max tasmax
```
Setting `cf_compliance` to `'log'` mutes those warnings and sends them to the log instead.
```
xclim.set_options(cf_compliance="log")
tx_mean = xclim.atmos.tx_mean(tasmax=tasmax, freq="MS") # compute monthly max tasmax
```
## Missing values
For example, one can globally change the missing method.
Change the default missing method to "pct" and set its tolerance to 8%:
```
xclim.set_options(check_missing="pct", missing_options={"pct": {"tolerance": 0.08}})
tx_mean = xclim.atmos.tx_mean(tasmax=tasmax, freq="MS") # compute monthly max tasmax
tx_mean.sel(time="2013", lat=75, lon=200)
```
Only February has non-masked data. Let's say we want to use the "wmo" method (and its default options), but only once, we can do:
```
with xclim.set_options(check_missing="wmo"):
tx_mean = xclim.atmos.tx_mean(
tasmax=tasmax, freq="MS"
) # compute monthly max tasmax
tx_mean.sel(time="2013", lat=75, lon=200)
```
This method checks that there are fewer than `nm=5` invalid values in a month and that there is no run of `nc>=4` consecutive invalid values. Thus, every month is now valid.
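That rule can be illustrated with a small standalone sketch. This is not xclim's actual implementation (xclim applies the check per resampled period internally); it only shows why the three isolated masked days per month pass:

```python
import numpy as np

def wmo_like_invalid(mask, nm=5, nc=4):
    """True if a month fails a WMO-style rule: at least nm missing days in
    total, or a run of at least nc consecutive missing days (simplified)."""
    if int(mask.sum()) >= nm:
        return True
    longest = run = 0
    for missing in mask:
        run = run + 1 if missing else 0
        longest = max(longest, run)
    return longest >= nc

# Masking days 10, 20 and 30 gives 3 isolated missing days: the month stays valid.
month = np.zeros(30, dtype=bool)
month[[9, 19, 29]] = True
print(wmo_like_invalid(month))   # False
```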
Finally, it is possible for advanced users to register their own method. Xclim's missing methods are in fact based on class instances. Thus, to create a custom missing class, one should implement a subclass of `xclim.core.checks.MissingBase`, overriding at least the `is_missing` method. The method should take a `null` argument and a `count` argument.
- `null` is a `DataArrayResample` instance of the resampled mask of invalid values in the input dataarray.
- `count` is the number of days in each resampled period. The method may also accept any number of other keyword arguments.
The `is_missing` method should return a boolean mask, at the same frequency as the indicator output (same as `count`), where True values are for elements that are considered missing and masked on the output.
When registering the class with the `xclim.core.missing.register_missing_method` decorator, the keyword arguments will be registered as options for the missing method. One can also implement a `validate` static method that receives only those options and returns whether they should be considered valid or not.
```
from xclim.core.missing import register_missing_method
from xclim.core.missing import MissingBase
from xclim.indices.run_length import longest_run
@register_missing_method("consecutive")
class MissingConsecutive(MissingBase):
"""Any period with more than max_n consecutive missing values is considered invalid"""
def is_missing(self, null, count, max_n=5):
return null.map(longest_run, dim="time") >= max_n
@staticmethod
def validate(max_n):
return max_n > 0
```
The new method is now accessible and usable with:
```
with xclim.set_options(
check_missing="consecutive", missing_options={"consecutive": {"max_n": 2}}
):
tx_mean = xclim.atmos.tx_mean(
tasmax=tasmax, freq="MS"
)  # monthly mean of daily max temperature
tx_mean.sel(time="2013", lat=75, lon=200)
```
# Predicting Student Admissions with Neural Networks in Keras
In this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:
- GRE Scores (Test)
- GPA Scores (Grades)
- Class rank (1-4)
The dataset originally came from here: http://www.ats.ucla.edu/
## Loading the data
To load the data and format it nicely, we will use two very useful packages called Pandas and NumPy. You can read the documentation here:
- https://pandas.pydata.org/pandas-docs/stable/
- https://docs.scipy.org/
```
# Importing pandas and numpy
%matplotlib inline
import pandas as pd
import numpy as np
# Reading the csv file into a pandas DataFrame
data = pd.read_csv('student_data.csv')
# Printing out the first 10 rows of our data
data[:10]
```
## Plotting the data
First, let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ignore the rank.
```
# Importing matplotlib
import matplotlib.pyplot as plt
# Function to help us plot
def plot_points(data):
X = np.array(data[["gre","gpa"]])
y = np.array(data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
# Plotting the points
plot_points(data)
plt.show()
```
Roughly, it looks like the students with high grades and test scores passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would be. Maybe it would help to take the rank into account? Let's make four plots, one for each rank.
```
# Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
```
This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.
## One-hot encoding the rank
For this, we'll use the `get_dummies` function in pandas.
```
# Make dummy variables for rank
one_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)
# Drop the previous rank column
one_hot_data = one_hot_data.drop('rank', axis=1)
# Print the first 10 rows of our data
one_hot_data[:10]
```
## Scaling the data
The next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. Features on such different scales are hard for a neural network to handle. Let's scale both features into the range 0-1, by dividing the grades by 4.0 and the test scores by 800.
```
# Copying our data
processed_data = one_hot_data.copy()
# Scaling the columns
processed_data['gre'] = processed_data['gre']/800
processed_data['gpa'] = processed_data['gpa']/4.0
processed_data[:10]
```
## Splitting the data into Training and Testing
In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.
```
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)
print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10])
```
## Splitting the data into features and targets (labels)
Now, as a final step before the training, we'll split the data into features (X) and targets (y).
Also, in Keras, we need to one-hot encode the output. We'll do this with the `to_categorical` function.
```
import keras
# Separate data and one-hot encode the output
# Note: We're also turning the data into numpy arrays, in order to train the model in Keras
features = np.array(train_data.drop('admit', axis=1))
targets = np.array(keras.utils.to_categorical(train_data['admit'], 2))
features_test = np.array(test_data.drop('admit', axis=1))
targets_test = np.array(keras.utils.to_categorical(test_data['admit'], 2))
print(features[:10])
print(targets[:10])
```
## Defining the model architecture
Here's where we use Keras to build our neural network.
```
# Imports
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
# Building the model
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(6,)))
model.add(Dropout(.2))
model.add(Dense(64, activation='relu'))
model.add(Dropout(.1))
model.add(Dense(2, activation='softmax'))
# Compiling the model
model.compile(loss = 'categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
```
## Training the model
```
# Training the model
model.fit(features, targets, epochs=200, batch_size=100, verbose=0)
```
## Scoring the model
```
# Evaluating the model on the training and testing set
score = model.evaluate(features, targets)
print("\n Training Accuracy:", score[1])
score = model.evaluate(features_test, targets_test)
print("\n Testing Accuracy:", score[1])
```
## Challenge: Play with the parameters!
You can see that we made several decisions in our training. For instance, the number of layers, the sizes of the layers, the number of epochs, etc.
It's your turn to play with parameters! Can you improve the accuracy? The following are other suggestions for these parameters. We'll learn the definitions later in the class:
- Activation function: relu and sigmoid
- Loss function: categorical_crossentropy, mean_squared_error
- Optimizer: rmsprop, adam, adagrad
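One way to work through these suggestions is to lay the options out as a small search grid and try each combination in turn. This is a sketch: the strings are the Keras identifiers for the suggested settings, and the training calls are only indicated in comments.

```
from itertools import product

# Candidate settings from the lists above
activations = ['relu', 'sigmoid']
losses = ['categorical_crossentropy', 'mean_squared_error']
optimizers = ['rmsprop', 'adam', 'adagrad']

grid = list(product(activations, losses, optimizers))
print(len(grid))  # 12 combinations to try

# For each (activation, loss, optimizer) tuple: rebuild the model with the
# chosen activation in its Dense layers, then compile and fit as above with
#     model.compile(loss=loss, optimizer=optimizer, metrics=['accuracy'])
# and compare the resulting test accuracies.
```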
```
# Import plotting modules
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
df = [4.7, 4.5, 4.9, 4.0, 4.6, 4.5, 4.7, 3.3, 4.6, 3.9, 3.5, 4.2, 4.0, 4.7, 3.6, 4.4, 4.5, 4.1, 4.5, 3.9, 4.8, 4.0, 4.9, 4.7, 4.3, 4.4, 4.8, 5.0, 4.5, 3.5, 3.8, 3.7, 3.9, 5.1, 4.5, 4.5, 4.7, 4.4, 4.1, 4.0, 4.4, 4.6, 4.0, 3.3, 4.2, 4.2, 4.2, 4.3, 3.0, 4.1]
versicolor_petal_length = np.array(df)
# Set default Seaborn style
sns.set()
# Plot histogram of versicolor petal lengths
_ = plt.hist(versicolor_petal_length, ec='white')
# Show histogram
plt.show()
# Plot histogram of versicolor petal lengths
_ = plt.hist(versicolor_petal_length, ec='black')
# Label axes
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('count')
# Show histogram
plt.show()
# Import numpy
import numpy as np
# Compute number of data points: n_data
n_data = len(versicolor_petal_length)
print(n_data)
# Number of bins is the square root of number of data points: n_bins
n_bins = np.sqrt(n_data)
print(n_bins)
# Convert number of bins to integer: n_bins
n_bins = int(n_bins)
# Plot the histogram
_ = plt.hist(versicolor_petal_length, bins=n_bins, ec='black')
# Label axes
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('count')
# Show histogram
plt.show()
import pandas as pd
sepal1 = [5.1, 4.9, 4.7, 4.6, 5.0, 5.4, 4.6, 5.0, 4.4, 4.9, 5.4, 4.8, 4.8, 4.3, 5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1, 4.8, 5.0, 5.0, 5.2, 5.2, 4.7, 4.8, 5.4, 5.2, 5.5, 4.9, 5.0, 5.5, 4.9, 4.4, 5.1, 5.0, 4.5, 4.4, 5.0, 5.1, 4.8, 5.1, 4.6, 5.3, 5.0, 7.0, 6.4, 6.9, 5.5, 6.5, 5.7, 6.3, 4.9, 6.6, 5.2, 5.0, 5.9, 6.0, 6.1, 5.6, 6.7, 5.6, 5.8, 6.2, 5.6, 5.9, 6.1, 6.3, 6.1, 6.4, 6.6, 6.8, 6.7, 6.0, 5.7, 5.5, 5.5, 5.8, 6.0, 5.4, 6.0, 6.7, 6.3, 5.6, 5.5, 5.5, 6.1, 5.8, 5.0, 5.6, 5.7, 5.7, 6.2, 5.1, 5.7, 6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3, 6.7, 7.2, 6.5, 6.4, 6.8, 5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6.0, 6.9, 5.6, 7.7, 6.3, 6.7, 7.2, 6.2, 6.1, 6.4, 7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6.0, 6.9, 6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9]
sepal2 = [3.5, 3.0, 3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7, 3.4, 3.0, 3.0, 4.0, 4.4, 3.9, 3.5, 3.8, 3.8, 3.4, 3.7, 3.6, 3.3, 3.4, 3.0, 3.4, 3.5, 3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5, 3.1, 3.0, 3.4, 3.5, 2.3, 3.2, 3.5, 3.8, 3.0, 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1, 2.3, 2.8, 2.8, 3.3, 2.4, 2.9, 2.7, 2.0, 3.0, 2.2, 2.9, 2.9, 3.1, 3.0, 2.7, 2.2, 2.5, 3.2, 2.8, 2.5, 2.8, 2.9, 3.0, 2.8, 3.0, 2.9, 2.6, 2.4, 2.4, 2.7, 2.7, 3.0, 3.4, 3.1, 2.3, 3.0, 2.5, 2.6, 3.0, 2.6, 2.3, 2.7, 3.0, 2.9, 2.9, 2.5, 2.8, 3.3, 2.7, 3.0, 2.9, 3.0, 3.0, 2.5, 2.9, 2.5, 3.6, 3.2, 2.7, 3.0, 2.5, 2.8, 3.2, 3.0, 3.8, 2.6, 2.2, 3.2, 2.8, 2.8, 2.7, 3.3, 3.2, 2.8, 3.0, 2.8, 3.0, 2.8, 3.8, 2.8, 2.8, 2.6, 3.0, 3.4, 3.1, 3.0, 3.1, 3.1, 3.1, 2.7, 3.2, 3.3, 3.0, 2.5, 3.0, 3.4, 3.0]
petal = [1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4, 1.5, 1.4, 1.5, 1.5, 1.6, 1.4, 1.1, 1.2, 1.5, 1.3, 1.4, 1.7, 1.5, 1.7, 1.5, 1.0, 1.7, 1.9, 1.6, 1.6, 1.5, 1.4, 1.6, 1.6, 1.5, 1.5, 1.4, 1.5, 1.2, 1.3, 1.5, 1.3, 1.5, 1.3, 1.3, 1.3, 1.6, 1.9, 1.4, 1.6, 1.4, 1.5, 1.4, 4.7, 4.5, 4.9, 4.0, 4.6, 4.5, 4.7, 3.3, 4.6, 3.9, 3.5, 4.2, 4.0, 4.7, 3.6, 4.4, 4.5, 4.1, 4.5, 3.9, 4.8, 4.0, 4.9, 4.7, 4.3, 4.4, 4.8, 5.0, 4.5, 3.5, 3.8, 3.7, 3.9, 5.1, 4.5, 4.5, 4.7, 4.4, 4.1, 4.0, 4.4, 4.6, 4.0, 3.3, 4.2, 4.2, 4.2, 4.3, 3.0, 4.1, 6.0, 5.1, 5.9, 5.6, 5.8, 6.6, 4.5, 6.3, 5.8, 6.1, 5.1, 5.3, 5.5, 5.0, 5.1, 5.3, 5.5, 6.7, 6.9, 5.0, 5.7, 4.9, 6.7, 4.9, 5.7, 6.0, 4.8, 4.9, 5.6, 5.8, 6.1, 6.4, 5.6, 5.1, 5.6, 6.1, 5.6, 5.5, 4.8, 5.4, 5.6, 5.1, 5.1, 5.9, 5.7, 5.2, 5.0, 5.2, 5.4, 5.1]
petal2 = [0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.4, 0.4, 0.3, 0.3, 0.3, 0.2, 0.4, 0.2, 0.5, 0.2, 0.2, 0.4, 0.2, 0.2, 0.2, 0.2, 0.4, 0.1, 0.2, 0.1, 0.2, 0.2, 0.1, 0.2, 0.2, 0.3, 0.3, 0.2, 0.6, 0.4, 0.3, 0.2, 0.2, 0.2, 0.2, 1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6, 1.0, 1.3, 1.4, 1.0, 1.5, 1.0, 1.4, 1.3, 1.4, 1.5, 1.0, 1.5, 1.1, 1.8, 1.3, 1.5, 1.2, 1.3, 1.4, 1.4, 1.7, 1.5, 1.0, 1.1, 1.0, 1.2, 1.6, 1.5, 1.6, 1.5, 1.3, 1.3, 1.3, 1.2, 1.4, 1.2, 1.0, 1.3, 1.2, 1.3, 1.3, 1.1, 1.3, 2.5, 1.9, 2.1, 1.8, 2.2, 2.1, 1.7, 1.8, 1.8, 2.5, 2.0, 1.9, 2.1, 2.0, 2.4, 2.3, 1.8, 2.2, 2.3, 1.5, 2.3, 2.0, 2.0, 1.8, 2.1, 1.8, 1.8, 1.8, 2.1, 1.6, 1.9, 2.0, 2.2, 1.5, 1.4, 2.3, 2.4, 1.8, 1.8, 2.1, 2.4, 2.3, 1.9, 2.3, 2.5, 2.3, 1.9, 2.0, 2.3, 1.8]
species = ['setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'setosa', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'versicolor', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica', 'virginica']
df = list(zip(sepal1, sepal2, petal, petal2, species))
df = pd.DataFrame(df)
df.columns = ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)', 'species']
# Create bee swarm plot with Seaborn's default settings
_ = sns.swarmplot(x='species', y='petal length (cm)', data=df)
# Label the axes
_ = plt.xlabel('species')
_ = plt.ylabel('petal length (cm)')
# Show the plot
plt.show()
def ecdf(data):
"""Compute ECDF for a one-dimensional array of measurements."""
# Number of data points: n
n = len(data)
# x-data for the ECDF: x
x = np.sort(data)
# y-data for the ECDF: y
y = np.arange(1, n+1) / n
return x, y
# Compute ECDF for versicolor data: x_vers, y_vers
x_vers, y_vers = ecdf(versicolor_petal_length)
# Generate plot
_ = plt.plot(x_vers, y_vers, marker='.', linestyle='none')
# Label the axes
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('ECDF')
# Display the plot
plt.show()
# Plotting the ECDF
# You will now use your ecdf() function to compute the ECDF for the
# petal lengths of Anderson's Iris versicolor flowers. You will then plot the ECDF.
# Recall that your ecdf() function returns two arrays
# so you will need to unpack them. An example of such unpacking is x, y = foo(data), for some function foo().
setosa_petal_length = [1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4, 1.5, 1.4, 1.5, 1.5, 1.6, 1.4,
1.1, 1.2, 1.5, 1.3, 1.4, 1.7, 1.5, 1.7, 1.5, 1. , 1.7, 1.9, 1.6,
1.6, 1.5, 1.4, 1.6, 1.6, 1.5, 1.5, 1.4, 1.5, 1.2, 1.3, 1.5, 1.3,
1.5, 1.3, 1.3, 1.3, 1.6, 1.9, 1.4, 1.6, 1.4, 1.5, 1.4]
versicolor_petal_length = [4.7, 4.5, 4.9, 4.0, 4.6, 4.5, 4.7, 3.3, 4.6, 3.9, 3.5, 4.2, 4.0, 4.7, 3.6, 4.4, 4.5, 4.1, 4.5, 3.9, 4.8, 4.0, 4.9, 4.7, 4.3, 4.4, 4.8, 5.0, 4.5, 3.5, 3.8, 3.7, 3.9, 5.1, 4.5, 4.5, 4.7, 4.4, 4.1, 4.0, 4.4, 4.6, 4.0, 3.3, 4.2, 4.2, 4.2, 4.3, 3.0, 4.1]
virginica_petal_length = [6. , 5.1, 5.9, 5.6, 5.8, 6.6, 4.5, 6.3, 5.8, 6.1, 5.1, 5.3, 5.5,
5. , 5.1, 5.3, 5.5, 6.7, 6.9, 5. , 5.7, 4.9, 6.7, 4.9, 5.7, 6. ,
4.8, 4.9, 5.6, 5.8, 6.1, 6.4, 5.6, 5.1, 5.6, 6.1, 5.6, 5.5, 4.8,
5.4, 5.6, 5.1, 5.1, 5.9, 5.7, 5.2, 5. , 5.2, 5.4, 5.1]
setosa_petal_length = np.array(setosa_petal_length)
versicolor_petal_length = np.array(versicolor_petal_length)
virginica_petal_length = np.array(virginica_petal_length)
# Compute ECDFs
x_set, y_set = ecdf(setosa_petal_length)
x_vers, y_vers = ecdf(versicolor_petal_length)
x_virg, y_virg = ecdf(virginica_petal_length)
# Plot all ECDFs on the same plot
_ = plt.plot(x_set, y_set, marker='.', linestyle='none')
_ = plt.plot(x_vers, y_vers, marker='.', linestyle='none')
_ = plt.plot(x_virg, y_virg, marker='.', linestyle='none')
# Annotate the plot
_ = plt.legend(('setosa', 'versicolor', 'virginica'), loc='lower right')
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('ECDF')
# Display the plot
plt.show()
# ECDFs also allow you to compare two or more distributions
# (though plots get cluttered if you have too many). Here, you will plot ECDFs
# for the petal lengths of all three iris species.
# You already wrote a function to generate ECDFs so you can put it to good use!
# Compute the mean
mean_length_vers = np.mean(versicolor_petal_length)
# Print the results with some nice formatting
print('I. versicolor:', mean_length_vers, 'cm')
# The mean of all measurements gives an indication of
# the typical magnitude of a measurement. It is computed using np.mean().
# Specify array of percentiles: percentiles
percentiles = np.array([2.5, 25, 50, 75, 97.5])
# Compute percentiles: ptiles_vers
ptiles_vers = np.percentile(versicolor_petal_length, percentiles)
# Print the result
print(ptiles_vers)
# In this exercise, you will compute the percentiles of petal length of Iris versicolor.
# Plot the ECDF
_ = plt.plot(x_vers, y_vers, '.')
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('ECDF')
# Overlay percentiles as red x's
_ = plt.plot(ptiles_vers, percentiles/100, marker='D', color='red',
linestyle='none')
# Show the plot
plt.show()
# To see how the percentiles relate to the ECDF, you will plot the percentiles of
# Iris versicolor petal lengths you calculated in the last exercise on the ECDF plot you generated in chapter 1.
# The percentile variables from the previous exercise are available in the workspace as ptiles_vers and percentiles.
# Create box plot with Seaborn's default settings
_ = sns.boxplot(x='species', y='petal length (cm)', data=df)
# Label the axes
_ = plt.xlabel('species')
_ = plt.ylabel('petal length (cm)')
# Show the plot
plt.show()
# Making a box plot for the petal lengths is unnecessary because the iris data set is
# not too large and the bee swarm plot works fine. However, it is always good to get some practice.
# Standard deviation is a reasonable metric for the typical spread of the data
# Array of differences to mean: differences
differences = versicolor_petal_length - np.mean(versicolor_petal_length)
# Square the differences: diff_sq
diff_sq = differences**2
# Compute the mean square difference: variance_explicit
variance_explicit = np.mean(diff_sq)
# Compute the variance using NumPy: variance_np
variance_np = np.var(versicolor_petal_length)
# Print the results
print(variance_explicit, variance_np)
# Compute the variance: variance
variance = np.var(versicolor_petal_length)
# Print the square root of the variance
print(np.sqrt(variance))
# Print the standard deviation
print(np.std(versicolor_petal_length))
# the standard deviation is the square root of the variance
# the variance is how far a set of random data points are spread out from the mean
# The variance measures how far each number in the set is from the mean
# A low standard deviation means that most of the numbers are very close to the average.
# A high standard deviation means that the numbers are spread out.
# Covariance is the mean, over all points, of the product of each point's
# difference from the x mean and its difference from the y mean.
# A point above both the x mean and the y mean contributes positively;
# where x is high and y is low (or vice versa), the contribution is negative.
# covariance / (std of x * std of y) = Pearson correlation r
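# The identity above, Pearson r = covariance / (std(x) * std(y)), can be
# checked numerically on a small made-up sample (illustrative values only):
x_demo = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_demo = np.array([2.0, 4.1, 5.9, 8.2, 9.8])
cov_xy = np.cov(x_demo, y_demo, ddof=0)[0, 1]          # population covariance
r_manual = cov_xy / (np.std(x_demo) * np.std(y_demo))  # manual Pearson r
r_numpy = np.corrcoef(x_demo, y_demo)[0, 1]            # NumPy's Pearson r
print(r_manual, r_numpy)  # the two values agree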
versicolor_petal_width = np.array([1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6, 1. , 1.3, 1.4, 1. , 1.5, 1. ,
1.4, 1.3, 1.4, 1.5, 1. , 1.5, 1.1, 1.8, 1.3, 1.5, 1.2, 1.3, 1.4,
1.4, 1.7, 1.5, 1. , 1.1, 1. , 1.2, 1.6, 1.5, 1.6, 1.5, 1.3, 1.3,
1.3, 1.2, 1.4, 1.2, 1. , 1.3, 1.2, 1.3, 1.3, 1.1, 1.3])
# Make a scatter plot
_ = plt.plot(versicolor_petal_length, versicolor_petal_width,
marker='.', linestyle='none')
# Label the axes
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('petal width (cm)')
# Show the result
plt.show()
# When you made bee swarm plots, box plots, and ECDF plots in previous exercises, you compared
# the petal lengths of different species of iris. But what if you want to compare
# two properties of a single species? This is exactly what we will do in this exercise. We will make a scatter
# plot of the petal length and width measurements of Anderson's Iris versicolor flowers. If the flower scales
# (that is, it preserves its proportion as it grows), we would expect the length and width to be correlated.
# the highest variance in the variable x,
# the highest covariance,
# negative covariance?
# Compute the covariance matrix: covariance_matrix
covariance_matrix = np.cov(versicolor_petal_length, versicolor_petal_width)
# Print covariance matrix
print(covariance_matrix)
# Extract covariance of length and width of petals: petal_cov
petal_cov = covariance_matrix[0,1]
# Print the length/width covariance
print(petal_cov)
# The covariance may be computed using the Numpy function np.cov(). For example, we have two sets of
# data x and y, np.cov(x, y) returns a 2D array where entries [0,1] and
# [1,0] are the covariances. Entry [0,0] is the variance of the data in x, and entry [1,1] is the variance of
# the data in y. This 2D output array is called the covariance matrix, since it organizes the self- and covariance.
def pearson_r(x, y):
"""Compute Pearson correlation coefficient between two arrays."""
# Compute correlation matrix: corr_mat
corr_mat = np.corrcoef(x, y)
# Return entry [0,1]
return corr_mat[0,1]
# Compute Pearson correlation coefficient for I. versicolor
r = pearson_r(versicolor_petal_width, versicolor_petal_length)
# Print the result
print(r)
# Computing the Pearson correlation coefficient
# As mentioned in the video, the Pearson correlation coefficient, also called the Pearson r, is
# often easier to interpret than the covariance. It is computed using the np.corrcoef() function. Like np.cov(),
# it takes two arrays as arguments and returns a 2D array. Entries [0,0] and [1,1] are necessarily equal to 1
# (can you think about why?), and the value we are after is entry [0,1].
# In this exercise, you will write a function, pearson_r(x, y) that takes in two arrays
# and returns the Pearson correlation coefficient. You will then use this function to compute it for the
# petal lengths and widths of I. versicolor.
# Why do we do statistical inference?
# To draw probabilistic conclusions about what we might expect if we collected the same data again.
# To draw actionable conclusions from data.
# To draw more general conclusions from relatively few data or observations.
# Summary: Correct! Statistical inference involves taking your data to probabilistic
# conclusions about what you would expect if you
# took even more data, and you can make decisions based on these conclusions.
# Seed the random number generator
np.random.seed(42)
# Initialize random numbers: random_numbers
random_numbers = np.empty(100000)
# Generate random numbers by looping over range(100000)
for i in range(100000):
random_numbers[i] = np.random.random()
# Plot a histogram
_ = plt.hist(random_numbers, ec='white')
# Show the plot
plt.show()
def perform_bernoulli_trials(n, p):
"""Perform n Bernoulli trials with success probability p
and return number of successes."""
# Initialize number of successes: n_success
n_success = 0
# Perform trials
for i in range(n):
# Choose random number between zero and one: random_number
random_number = np.random.random()
# If less than p, it's a success so add one to n_success
if random_number < p:
n_success += 1
return n_success
# The np.random module and Bernoulli trials
# You can think of a Bernoulli trial as a flip of a possibly biased coin. Specifically, each coin flip
# has a probability p of landing heads (success) and probability 1−p of landing tails (failure).
# In this exercise, you will write a function to perform n Bernoulli trials, perform_bernoulli_trials(n, p),
# which returns the number of successes out of n Bernoulli trials, each of which has probability p of success.
# To perform each Bernoulli trial,
# use the np.random.random() function, which returns a random number between zero and one.
# Seed random number generator
np.random.seed(42)
# Initialize the number of defaults: n_defaults
n_defaults = np.empty(1000)
# Compute the number of defaults
for i in range(1000):
n_defaults[i] = perform_bernoulli_trials(100, 0.05)
# Plot the histogram with default number of bins; label your axes
_ = plt.hist(n_defaults, density=True)
_ = plt.xlabel('number of defaults out of 100 loans')
_ = plt.ylabel('probability')
# Show the plot
plt.show()
# How many defaults might we expect?
# Let's say a bank made 100 mortgage loans. It is possible that anywhere between 0 and 100 of the loans will be defaulted upon.
# You would like to know the probability of getting a given number of defaults, given that the probability of a
# default is p = 0.05. To investigate this, you will do a simulation. You will perform 100 Bernoulli trials using
# the perform_bernoulli_trials() function you wrote in the previous exercise and record how many defaults we get. Here, a success
# is a default. (Remember that the word "success" just means that the Bernoulli trial evaluates to True, i.e.,
# did the loan recipient default?) You will do this for another 100 Bernoulli trials. And again and again until we
# have tried it 1000 times. Then, you will plot a histogram describing the probability of the number of defaults.
# Compute ECDF: x, y
x, y = ecdf(n_defaults)
# Plot the CDF with labeled axes
_ = plt.plot(x, y, marker='.', linestyle='none')
_ = plt.xlabel('number of defaults out of 100')
_ = plt.ylabel('CDF')
# Show the plot
plt.show()
# Compute the number of 100-loan simulations with 10 or more defaults: n_lose_money
n_lose_money = np.sum(n_defaults >= 10)
# Compute and print probability of losing money
print('Probability of losing money =', n_lose_money / len(n_defaults))
# Take 10,000 samples out of the binomial distribution: n_defaults
n_defaults = np.random.binomial(n=100, p=0.05, size=10000)
# Compute CDF: x, y
x, y = ecdf(n_defaults)
# Plot the CDF with axis labels
_ = plt.plot(x, y, marker='.', linestyle='none')
_ = plt.xlabel('number of defaults out of 100 loans')
_ = plt.ylabel('CDF')
# Show the plot
plt.show()
# Sampling out of the Binomial distribution
# Compute the probability mass function for the number of defaults we would expect for 100 loans as in the last
# section, but instead of simulating all of the Bernoulli trials, perform the sampling using np.random.binomial().
# This is identical to the calculation you did in the last set of exercises using your custom-written
# perform_bernoulli_trials() function, but far more computationally efficient. Given this extra efficiency, we will
# take 10,000 samples instead of 1000. After
# taking the samples, plot the CDF as last time. This CDF that you are plotting is that of the Binomial distribution.
# Compute bin edges: bins
bins = np.arange(0, max(n_defaults) + 1.5) - 0.5
# Generate histogram
_ = plt.hist(n_defaults, density=True, bins=bins)
# Label axes
_ = plt.xlabel('number of defaults out of 100 loans')
_ = plt.ylabel('PMF')
# Show the plot
plt.show()
# Plotting the Binomial PMF
# As mentioned in the video, plotting a nice looking PMF requires a bit of matplotlib trickery that we will not
# go into here. Instead, we will plot the PMF of the Binomial distribution as a histogram with skills you have already
# learned. The trick is setting up the edges of the bins to pass to plt.hist() via the bins keyword argument.
# We want the bins centered on the integers. So, the edges of the bins should be -0.5, 0.5, 1.5, 2.5, ... up to
# max(n_defaults) + 1.5. You can generate an array like this using np.arange() and then subtracting 0.5 from the array.
# Draw 10,000 samples out of Poisson distribution: samples_poisson
samples_poisson = np.random.poisson(10, size=10000)
# Print the mean and standard deviation
print('Poisson: ', np.mean(samples_poisson),
np.std(samples_poisson))
# Specify values of n and p to consider for Binomial: n, p
n = [20, 100, 1000]
p = [0.5, 0.1, 0.01]
# Draw 10,000 samples for each n,p pair: samples_binomial
for i in range(3):
samples_binomial = np.random.binomial(n[i], p[i], size=10000)
# Print results
print('n =', n[i], 'Binom:', np.mean(samples_binomial),
np.std(samples_binomial))
# Relationship between Binomial and Poisson distributions
# You just heard that the Poisson distribution is a limit of the Binomial distribution for rare events.
# This makes sense if you think about the stories. Say we do a Bernoulli trial every minute for an hour,
# each with a success probability of 0.1. We would do 60 trials, and the number of successes is Binomially distributed,
# and we would expect to get about 6 successes. This is just like the Poisson story we discussed in the video,
# where we get on average 6 hits on a website per hour. So, the Poisson distribution with arrival rate equal
# to np approximates a Binomial distribution for n Bernoulli trials with probability p of success
# (with n large and p small). Importantly, the Poisson distribution is often simpler to work
# with because it has only one parameter instead of two for the Binomial distribution.
# Possible Answers
# Discrete uniform
# Binomial
# Poisson
# Both Binomial and Poisson, though Poisson is easier to model and compute.
# Both Binomial and Poisson, though Binomial is easier to model and compute.
# Correct! When we have rare events (low p, high n), the Binomial distribution approaches the Poisson.
# This has a single parameter,
# the mean number of successes per time interval, in our case the mean number of no-hitters per season.
# Draw 10,000 samples out of Poisson distribution: n_nohitters
n_nohitters = np.random.poisson(251/115, size=10000)
# Compute number of samples that are seven or greater: n_large
n_large = np.sum(n_nohitters >= 7)
# Compute probability of getting seven or more: p_large
p_large = n_large / 10000
# Print the result
print('Probability of seven or more no-hitters:', p_large)
# 1990 and 2015 featured the most no-hitters of any season of baseball (there were seven). Given that
# there are on average 251/115 no-hitters per season, what is the probability of having seven or more in a season?
# a discrete quantity is like a dice roll
# a continuous quantity is like light
# The value of the CDF at x = 10 is 0.75, so the probability that x <= 10 is 0.75.
# Thus, the probability that x > 10 is 0.25.
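# Quick numeric check of the complement rule P(x > a) = 1 - CDF(a), using a
# synthetic Normal sample (illustrative values, not course data):
rng_demo = np.random.default_rng(0)
sample_demo = rng_demo.normal(10, 2, size=100000)
cdf_at_10 = np.mean(sample_demo <= 10)  # empirical CDF at x = 10
p_greater = np.mean(sample_demo > 10)   # empirical P(x > 10)
print(cdf_at_10 + p_greater)  # sums to 1: the two events partition the sample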
# Draw 100000 samples from Normal distribution with stds of interest: samples_std1, samples_std3, samples_std10
samples_std1 = np.random.normal(20, 1, size=100000)
samples_std3 = np.random.normal(20, 3, size=100000)
samples_std10 = np.random.normal(20, 10, size=100000)
# Make histograms
_ = plt.hist(samples_std1, bins=100, density=True, histtype='step')
_ = plt.hist(samples_std3, bins=100, density=True, histtype='step')
_ = plt.hist(samples_std10, bins=100, density=True, histtype='step')
# Make a legend, set limits and show plot
_ = plt.legend(('std = 1', 'std = 3', 'std = 10'))
plt.ylim(-0.01, 0.42)
plt.show()
# In this exercise, you will explore the Normal PDF and also learn a way to plot a PDF of a known distribution
# using hacker statistics. Specifically, you will plot a Normal PDF for various values of the variance.
# You can see how the different standard deviations result
# in PDFS of different widths. The peaks are all centered at the mean of 20.
# Generate CDFs
x_std1, y_std1 = ecdf(samples_std1)
x_std3, y_std3 = ecdf(samples_std3)
x_std10, y_std10 = ecdf(samples_std10)
# Plot CDFs
_ = plt.plot(x_std1, y_std1, marker='.', linestyle='none')
_ = plt.plot(x_std3, y_std3, marker='.', linestyle='none')
_ = plt.plot(x_std10, y_std10, marker='.', linestyle='none')
# Make a legend and show the plot
_ = plt.legend(('std = 1', 'std = 3', 'std = 10'), loc='lower right')
plt.show()
# Now that you have a feel for how the Normal PDF looks, let's consider
# its CDF. Using the samples you generated in the last exercise
# (in your namespace as samples_std1, samples_std3, and samples_std10), generate and plot the CDFs.
# The CDFs all pass through the mean at the 50th percentile; the
# mean and median of a Normal distribution are equal. The width of the CDF varies with the standard deviation.
belmont = [148.51, 146.65, 148.52, 150.7, 150.42000000000002, 150.88, 151.57, 147.54, 149.65, 148.74, 147.86, 148.75, 147.5, 148.26, 149.71, 146.56, 151.19, 147.88, 149.16, 148.82, 148.96, 152.02, 146.82, 149.97, 146.13, 148.1, 147.2, 146.0, 146.4, 148.2, 149.8, 147.0, 147.2, 147.8, 148.2, 149.0, 149.8, 148.6, 146.8, 149.6, 149.0, 148.2, 149.2, 148.0, 150.4, 148.8, 147.2, 148.8, 149.6, 148.4, 148.4, 150.2, 148.8, 149.2, 149.2, 148.4, 150.2, 146.6, 149.8, 149.0, 150.8, 148.6, 150.2, 149.0, 148.6, 150.2, 148.2, 149.4, 150.8, 150.2, 152.2, 148.2, 149.2, 151.0, 149.6, 149.6, 149.4, 148.6, 150.0, 150.6, 149.2, 152.6, 152.8, 149.6, 151.6, 152.8, 153.2, 152.4, 152.2]
belmont_no_outliers = np.array(belmont)
# Compute mean and standard deviation: mu, sigma
mu = np.mean(belmont_no_outliers)
sigma = np.std(belmont_no_outliers)
# Sample out of a normal distribution with this mu and sigma: samples
samples = np.random.normal(mu, sigma, size=10000)
# Get the CDF of the samples and of the data
x_theor, y_theor = ecdf(samples)
x, y = ecdf(belmont_no_outliers)
# Plot the CDFs and show the plot
_ = plt.plot(x_theor, y_theor)
_ = plt.plot(x, y, marker='.', linestyle='none')
_ = plt.xlabel('Belmont winning time (sec.)')
_ = plt.ylabel('CDF')
plt.show()
# Since 1926, the Belmont Stakes has been a 1.5-mile race for 3-year-old thoroughbred horses.
# Secretariat ran the fastest Belmont Stakes in history in 1973. While that was the fastest year, 1970 was
# the slowest because of unusually wet and sloppy conditions. With these two outliers removed from the data
# set, compute the mean and standard deviation of the Belmont winners' times. Sample out of a Normal
# distribution with this mean and standard deviation using the np.random.normal() function and plot a CDF.
# Overlay the ECDF from the winning Belmont times. Are these close to Normally distributed?
# Note: Justin scraped the data concerning the Belmont Stakes from the Belmont Wikipedia page.
# The theoretical CDF and the ECDF of the data suggest that the winning Belmont times are, indeed, Normally
# distributed. This also suggests that in the last 100 years or so, there have not been major
# technological or training advances that have significantly affected the speed at which horses can run this race.
# What are the chances of a horse matching or beating Secretariat's record?
# Assume that the Belmont winners' times are Normally distributed (with the 1970 and 1973 years removed), what
# is the probability that the winner of a given Belmont Stakes will run it as fast or faster than Secretariat?
# Take a million samples out of the Normal distribution: samples
samples = np.random.normal(mu, sigma, size=1000000)
# Compute the fraction that are as fast or faster than 144 seconds: prob
prob = np.sum(samples <= 144) / len(samples)
# Print the result
print('Probability of besting Secretariat:', prob)
# Great work! We had to take a million samples because the probability of
# a fast time is very low and we had to be sure to sample enough.
# We get that there is only a 0.06% chance of a horse running the Belmont as fast as Secretariat.
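# Under the Normal assumption the same probability is also available in closed
# form. The loc/scale values below are approximate stand-ins for the mu and
# sigma computed from the Belmont data above, for illustration only:
from scipy import stats
p_analytic = stats.norm.cdf(144, loc=149.22, scale=1.6)
print('Analytic probability:', p_analytic)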
# Matching a story and a distribution
# How might we expect the time between Major League no-hitters to be distributed?
# Be careful here: a few exercises ago, we considered the probability distribution
# for the number of no-hitters in a season.
# Now, we are looking at the probability distribution of the time between no hitters.
# Possible Answers
# Normal
# Exponential
# Poisson
# Uniform
# Waiting for the next Secretariat
# Unfortunately, Justin was not alive when Secretariat ran the Belmont in 1973.
# Do you think he will get to see a performance like that?
# To answer this, you are interested in how many years you would expect to wait until you see another
# performance like Secretariat's. How is the waiting time
# until the next performance as good or better than Secretariat's distributed? Choose the best answer.
# Possible Answers
# Normal, because the distribution of Belmont winning times are Normally distributed.
# Normal, because there is a most-expected waiting time, so there should be a single peak to the distribution.
# Exponential: It is very unlikely for a horse to be faster than Secretariat, so the distribution should decay
# away to zero for high waiting time.
# Exponential: A horse as fast as Secretariat is a rare event, which can be modeled as a Poisson process,
# and the waiting time between arrivals of a Poisson process is Exponentially distributed.
# Correct! The Exponential distribution describes the waiting times between rare events, and Secretariat is rare!
def successive_poisson(tau1, tau2, size=1):
"""Compute time for arrival of 2 successive Poisson processes."""
# Draw samples out of first exponential distribution: t1
t1 = np.random.exponential(tau1, size=size)
# Draw samples out of second exponential distribution: t2
t2 = np.random.exponential(tau2, size=size)
return t1 + t2
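# A quick sanity check on the sampler: the mean of the summed waiting times
# should sit near tau1 + tau2, since expectations add.
check = successive_poisson(764, 715, size=100000)
print('Mean total wait:', np.mean(check))  # roughly 764 + 715 = 1479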
# If you have a story, you can simulate it!
# Sometimes, the story describing our probability distribution does not
# have a named distribution to go along with it. In these cases, fear not!
# You can always simulate it. We'll do that in this and the next exercise.
# In earlier exercises, we looked at the rare event of no-hitters in Major
# League Baseball. Hitting the cycle is another rare baseball event. When a
# batter hits the cycle, he gets all four kinds of hits, a single, double,
# triple, and home run, in a single game. Like no-hitters, this can be modeled
# as a Poisson process, so the time between hits of the cycle are also Exponentially distributed.
# How long must we wait to see both a no-hitter and then a batter hit
# the cycle? The idea is that we have to wait some time for the no-hitter,
# and then after the no-hitter, we have to wait for hitting the cycle. Stated
# another way, what is the total waiting time for the arrival of two different
# Poisson processes? The total waiting time is the time waited for the no-hitter,
# plus the time waited for hitting the cycle.
# Now, you will write a function to sample out of the distribution described by this story.
# Distribution of no-hitters and cycles
# Now, you'll use your sampling function to compute the waiting time to observe a no-hitter and hitting of the cycle.
# The mean waiting time for a no-hitter is 764 games, and the mean waiting time for hitting the cycle is 715 games.
# Draw samples of waiting times
waiting_times = successive_poisson(764, 715, size=100000)
# Make the histogram
_ = plt.hist(waiting_times, bins=100, histtype='step', density=True)
# Label axes
_ = plt.xlabel('total waiting time (games)')
_ = plt.ylabel('PDF')
# Show the plot
plt.show()
```
# Introduction
## Research Question
What is the information flow from the visual stream to motor processing, and how early in processing can we predict behavioural outcomes?
- Can decoding models be trained by region
- How accurate are the modeled regions at predicting a behaviour
- Possible behaviours (correct vs. incorrect)
- Movement of wheel
## Brief background
The Steinmetz (2018) study reported that neurons with action correlates are found globally and that neurons in nearly every brain region are non-selectively activated in the moments leading up to movement onset; however, it is currently not known how information integration occurs across the motor areas, or how that integration gives rise to motor behaviour.
Neuron population coding has been robustly used to decode motor behaviours across various species (Georgopoulos et al., 1986), and recent literature has suggested that motor preparation and planning use distributed populations in corticomotor areas to plan movements. However, this previous work has been limited by the number of electrodes, and therefore the number of areas, measured in a single task.
The following assignment seeks to take advantage of the multi-array recordings in the Steinmetz (2018) Neuropixels dataset to investigate temporal aspects of motor behaviours.
# Data Analyses
:brain: :mouse: :brain:
## Set Up
```
import pandas as pd
import numpy as np
import dataframe_image as dfi
import pathlib
from matplotlib import rcParams
from matplotlib import pyplot as plt
import emoji
rcParams['figure.figsize'] = [15, 5]
rcParams['font.size'] = 15
rcParams['axes.spines.top'] = False
rcParams['axes.spines.right'] = False
rcParams['figure.autolayout'] = True
import os, requests
fname = []
for i in range(3):
fname.append('steinmetz_part%d.npz'%i)
url = ['https://osf.io/agvxh/download']
url.append('https://osf.io/uv3mw/download')
url.append('https://osf.io/ehmw2/download')
for i in range(len(url)):
if not os.path.isfile(fname[i]):
try:
r = requests.get(url[i])
except requests.ConnectionError:
print("Data could not download!")
else:
if r.status_code != requests.codes.ok:
print("Data could not download!")
else:
with open(fname[i], "wb") as fid:
fid.write(r.content)
steinmetz_data = np.array([])
for i in range(len(fname)):
steinmetz_data = np.hstack((steinmetz_data, np.load('steinmetz_part%d.npz'%i, allow_pickle=True)['dat']))
```
## Exploring the data
```
# choose one recording session (20) to get labels
session_20 = steinmetz_data[20]
keys = session_20.keys()
print(keys)
for key in session_20.keys():
dataset_info = session_20[key]
if isinstance (dataset_info, np.ndarray):
print(key, dataset_info.shape, " - array")
elif isinstance (dataset_info, list):
print(key, len(dataset_info), " - list")
else:
print(key, type(dataset_info), " - other")
brain_areas = []
for i in range(steinmetz_data.shape[0]):
unique_area = np.unique(steinmetz_data[i]['brain_area']) # unique brain areas recorded in session i
for u in unique_area:
brain_areas.append(u)
ubs = list(np.unique(brain_areas))
table = pd.DataFrame(columns=['session', 'mouse_name', 'n_neuron'] + ubs)
for i in range(steinmetz_data.shape[0]):
this_session: dict = {}
unique_barea = list(np.unique(steinmetz_data[i]['brain_area']))
this_session['session'] = i
this_session['mouse_name'] = steinmetz_data[i]['mouse_name']
this_session['n_neuron'] = steinmetz_data[i]['spks'].shape[0]
this_session['n_trial'] = steinmetz_data[i]['spks'].shape[1]
for ubrea in unique_barea:
n_neuron, n_trial, _ = (steinmetz_data[i]['spks'][steinmetz_data[i]['brain_area'] == ubrea]).shape
this_session[ubrea] = n_neuron
table = pd.concat([table, pd.DataFrame([this_session])], ignore_index=True) # DataFrame.append was removed in pandas 2.0
table = table.fillna(0)
pathlib.Path('/Users/sophiabatchelor/Code/SteinmetzAnalyses/Images').mkdir(parents=True, exist_ok=True)
dfi.export(table, '/Users/sophiabatchelor/Code/SteinmetzAnalyses/Images/steinmetz_all_data_table.png', max_cols=77)
table
```
## Investigate Spiking Responses
```
# groupings of brain regions
brain_regions = ["vis ctx", "thal", "hipp", "other ctx", "midbrain", "basal ganglia", "cortical subplate", "other"]
brain_groupings = [["VISa", "VISam", "VISl", "VISp", "VISpm", "VISrl"], # visual cortex
["CL", "LD", "LGd", "LH", "LP", "MD", "MG", "PO", "POL", "PT", "RT", "SPF", "TH", "VAL", "VPL", "VPM"], # thalamus
["CA", "CA1", "CA2", "CA3", "DG", "SUB", "POST"], # hippocampal
["ACA", "AUD", "COA", "DP", "ILA", "MOp", "MOs", "OLF", "ORB", "ORBm", "PIR", "PL", "SSp", "SSs", "RSP"," TT"], # non-visual cortex
["APN", "IC", "MB", "MRN", "NB", "PAG", "RN", "SCs", "SCm", "SCig", "SCsg", "ZI"], # midbrain
["ACB", "CP", "GPe", "LS", "LSc", "LSr", "MS", "OT", "SNr", "SI"], # basal ganglia
["BLA", "BMA", "EP", "EPd", "MEA"] # cortical subplate
]
mouse_dict = {} # create a dictionary
for session, dat_i in enumerate(steinmetz_data):
name = dat_i["mouse_name"]
if name not in mouse_dict.keys():
mouse_dict[name] = [dat_i]
else:
lst = mouse_dict[name]
lst.append(dat_i)
mouse_dict[name] = lst
assigned_region = "VISp"
# analyse for all runs of a single mouse
for mouse in ["Cori"]:
mouse_data = mouse_dict[mouse] #list of the sessions corresponding to this mouse, [alldat[0], alldat[1], alldat[2]]
num_sessions = len(mouse_dict[mouse])
thing = None
for trial in mouse_data:
spk_trial = trial['spks']
if assigned_region in trial["brain_area"]:
spk_trial_region = spk_trial[trial["brain_area"] == assigned_region]
# average over trials
spk_trial_region_avg = np.mean(spk_trial_region, axis=1)
# take only values that are average above 0.2
spk_trial_region_avg_good = spk_trial_region_avg[np.mean(spk_trial_region_avg, axis=1) >= 0.2,:]
if thing is not None:
thing = np.concatenate((thing, spk_trial_region_avg_good))
else:
thing = spk_trial_region_avg_good
plot = plt.figure()
plt.plot(thing.T)
plot.suptitle("High Spiking Neurons in Cori's Primary Visual Cortex")
plt.xlabel("Timebins")
plt.ylabel("Average Number of Spikes")
plt.savefig('/Users/sophiabatchelor/Code/SteinmetzAnalyses/Images/Plots/cori_v1_spks.png')
plt.show(plot)
# Group The data by mouse
for session, dat_i in enumerate(steinmetz_data):
name = dat_i["mouse_name"]
if name not in mouse_dict.keys():
mouse_dict[name] = [dat_i]
else:
lst = mouse_dict[name]
lst.append(dat_i)
mouse_dict[name] = lst
names = []
for dat_i in steinmetz_data:
name = dat_i["mouse_name"]
if name not in names:
names.append(name)
print("Mice: {}".format(names))
assigned_regions = ['CA1', 'CA3',"VISp", "VISpm", "VISrl", "VISam", "VISa", "DG", "MD", "MOs", "MG", "MOp" ,]
# change this to be whichever regions are of interest
# !! NOTE !! the order matters
### Note ###
# LIST OF AREAS
# "VISp", "VISpm", "VISI", "VISrl", "VISam", "VISa", 'CA1', 'CA3', "DG", "CP", "SCm", "SCs", "SNr", "SSp", "ACA", "ILA", "GPe", "ACB", "APN", "BLA", "LD", "LGd", "LP", "LS", "MD", "MG", "MOp", "MOs", "MRN", "OLF", "ORB", "PAG", "PL", "PO", "POL", "POST", "RSP", "RT", "SUB", "ZI", "VPL", "VPM"
# "VISI" throws an error: the actual area code is "VISl" (lowercase L), not "VISI"
for assigned_region in assigned_regions:
all_mice_names = []
all_mice_lines = None
for mouse in mouse_dict.keys():
mouse_data = mouse_dict[mouse]
num_sessions = len(mouse_dict[mouse])
spk_all_sessions = None
for session in mouse_data:
spk_session = session['spks']
if assigned_region in session['brain_area']:
spk_session_region = spk_session[session['brain_area'] == assigned_region]
# average over trials
spk_session_region_avg = np.mean(spk_session_region, axis=1)
if spk_all_sessions is not None:
spk_all_sessions = np.concatenate((spk_all_sessions, spk_session_region_avg))
else:
spk_all_sessions = spk_session_region_avg
# average over all neurons
if spk_all_sessions is not None:
name_i = mouse
all_mice_names.append(name_i)
mouse_i = np.mean(spk_all_sessions, axis=0)
mouse_i = np.expand_dims(mouse_i, 0)
if all_mice_lines is not None:
all_mice_lines = np.concatenate((all_mice_lines, mouse_i), axis = 0)
else:
all_mice_lines = mouse_i
plot = plt.figure(figsize=(10, 5))
plt.plot(all_mice_lines.T) # had to transpose so that time was on the x axis
plot.suptitle("Average Spiking of {}".format(assigned_region))
plt.xlabel("Timebins") # change axis labels if you need reminders
plt.ylabel("Average Number of Spikes per time bin")
plt.legend(all_mice_names, loc = "upper right")
pathlib.Path('/Users/sophiabatchelor/Code/SteinmetzAnalyses/Images/Plots').mkdir(parents=True, exist_ok=True)
plt.savefig('/Users/sophiabatchelor/Code/SteinmetzAnalyses/Images/Plots/Plotof{}.png'.format(assigned_region))
plt.show()
```
## Relationship between spiking and behaviour
```
# analyses for Lederberg : session 11
session_11 = steinmetz_data[11]
dt = session_11['bin_size'] # 10ms bins
NT = session_11['spks'].shape[-1]
# ax = plt.subplot(1,5,1)
response = session_11['response'] # right - nogo - left (-1, 0, 1)
vis_right = session_11['contrast_right'] # 0 - low - high
vis_left = session_11['contrast_left'] # 0 - low - high
avg_gocue = (np.mean(session_11["gocue"]))
plt.plot(dt * np.arange(NT), 1 / dt * session_11['spks'][:,response>=0].mean(axis=(0,1))) # left responses
plt.plot(dt * np.arange(NT), 1 / dt * session_11['spks'][:,response<0].mean(axis=(0,1))) # right responses
plt.plot(dt * np.arange(NT), 1 / dt * session_11['spks'][:,vis_right>0].mean(axis=(0,1))) # right stimuli
plt.plot(dt * np.arange(NT), 1 / dt * session_11['spks'][:,vis_right==0].mean(axis=(0,1))) # left stimuli
plt.axvline(avg_gocue, color='black')
plt.title("Session 11 Spike Frequency")
plt.xlabel("Time (sec)") # change axis labels if you need reminders
plt.ylabel("Firing rate (Hz)")
plt.legend(['left resp', 'right resp', 'right stim', 'left stim', 'avg go cue'], fontsize=14)
pathlib.Path('/Users/sophiabatchelor/Code/SteinmetzAnalyses/Images/Plots/ResponseSpikeAnalyses').mkdir(parents=True, exist_ok=True)
plt.savefig('/Users/sophiabatchelor/Code/SteinmetzAnalyses/Images/Plots/ResponseSpikeAnalyses/session_11_spikes.png')
plt.show()
regions = ["vis ctx", "thal", "hipp", "other ctx", "midbrain", "basal ganglia", "cortical subplate", "other"]
brain_groups = [["VISa", "VISam", "VISl", "VISp", "VISpm", "VISrl"], # visual cortex
["CL", "LD", "LGd", "LH", "LP", "MD", "MG", "PO", "POL", "PT", "RT", "SPF", "TH", "VAL", "VPL", "VPM"], # thalamus
["CA", "CA1", "CA2", "CA3", "DG", "SUB", "POST"], # hippocampal
["ACA", "AUD", "COA", "DP", "ILA", "MOp", "MOs", "OLF", "ORB", "ORBm", "PIR", "PL", "SSp", "SSs", "RSP"," TT"], # non-visual cortex
["APN", "IC", "MB", "MRN", "NB", "PAG", "RN", "SCs", "SCm", "SCig", "SCsg", "ZI"], # midbrain
["ACB", "CP", "GPe", "LS", "LSc", "LSr", "MS", "OT", "SNr", "SI"], # basal ganglia
["BLA", "BMA", "EP", "EPd", "MEA"] # cortical subplate
]
num_good_areas = 4 # only the top 4 regions are in this particular mouse
neurons = len(session_11['brain_area']) # gives the number of neurons
good_areas = num_good_areas * np.ones(neurons, ) # note: last brain region is "other"
for i in range(num_good_areas):
good_areas[np.isin(session_11['brain_area'], brain_groups[i])] = i # assign a number to each region
# Neural response to visual stimuli
for i in range(num_good_areas):
fig, axs = plt.subplots(sharey = True)
plt.plot(1 / dt * session_11['spks'][good_areas == i][:,np.logical_and(vis_left == 0, vis_right > 0)].mean(axis=(0,1)))
plt.plot(1 / dt * session_11['spks'][good_areas == i][:,np.logical_and(vis_left == 0, vis_right == 0)].mean(axis=(0,1)))
plt.plot(1 / dt * session_11['spks'][good_areas == i][:,np.logical_and(vis_left > 0, vis_right == 0)].mean(axis=(0,1)))
plt.plot(1 / dt * session_11['spks'][good_areas == i][:,np.logical_and(vis_left > 0, vis_right > 0)].mean(axis=(0,1)))
fig.suptitle('{} response to visual stimuli'.format(regions[i]))
plt.xlabel('Time (ms)')
plt.ylabel('Spike rate (Hz)')
plt.legend(['right stim only', 'no stim', 'left stim only', 'both stim'], fontsize=12) # legend order matches the plot calls above
plt.savefig('/Users/sophiabatchelor/Code/SteinmetzAnalyses/Images/Plots/ResponseSpikeAnalyses/session11_{}_vep.png'.format(regions[i]))
```
## Now let's model
```
print(emoji.emojize(':bug: :bug: :bug: :bug: :bug: :bug: :bug: :bug: :bug: :bug: :bug: :bug: :bug: :bug: :bug: :bug: :bug: :bug:'))
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
session_data = steinmetz_data[11]
num_timebins = session_data['spks'].shape[2]
num_trials = session_data['spks'].shape[1]
move = session_data['response'] # right - nogo - left (-1, 0, 1)
regions = np.unique(session_data['brain_area'])
spikes_in_a_region = {}
for region in regions:
spikes_in_a_region[region] = session_data['spks'][np.where(session_data['brain_area']==region)]
session_spikes = session_data['spks']
quick_info = session_spikes.shape
print("Number of neurons recorded in all sessions: {}, Number of Trials: {}, Number of timebins: {}".format(quick_info[0], quick_info[1], quick_info[2]))
Y = (move != 0).astype(int) # 1 = movement (left or right), 0 = no-go
Y # 1D array of length num_trials
target_regions = spikes_in_a_region.keys()
scores = np.zeros((len(target_regions),num_timebins))
for region_idx, (_, spikes) in enumerate(spikes_in_a_region.items()):
for t in range(num_timebins):
X = spikes[:,:,t].T
X.shape
# check if the function is actually reading in the files okay
alldata = np.array([])
for j in range(2):
alldata = np.hstack((alldata, np.load('/Users/sophiabatchelor/Code/SteinmetzAnalyses/steinmetz_part%d.npz'%(j+1), allow_pickle=True)['dat']))
data = alldata[11]
print(data.keys())
## BUGS ALL THE WAY DOWN
# Note: data isn't the same shape
### nextsteps: ###
# - strip back the functions
# - reshape or Transpose data
def prepare_data(session=11):
    model_data = np.array([])
    for j in range(3):  # load all three parts so session indices line up
        model_data = np.hstack((model_data, np.load('/Users/sophiabatchelor/Code/SteinmetzAnalyses/steinmetz_part%d.npz'%j, allow_pickle=True)['dat']))
    data = model_data[session]
    n_timebins = data['spks'].shape[2]
    move = data['response'] # right - nogo - left (-1, 0, 1)
    labels = (move != 0).astype(int) # 1 = movement, 0 = no-go
    regions = np.unique(data['brain_area'])
    spikes_per_region = dict()
    for region in regions:
        spikes_per_region[region] = data['spks'][np.where(data['brain_area']==region)]
    return spikes_per_region, labels, n_timebins
def simple_decoder(session=11):
    model = LogisticRegression(penalty='l2', multi_class='ovr', solver='liblinear')
    spikes_per_region, Y, n_timebins = prepare_data(session=session)
    regions = spikes_per_region.keys()
    scores = np.zeros((len(regions), n_timebins))
    for region, (_, spikes) in enumerate(spikes_per_region.items()):
        for t in range(n_timebins):
            X = spikes[:,:,t].T # (n_trials, n_neurons): one sample per trial
            score = cross_val_score(model, X, Y, cv=5)
            scores[region,t] = np.mean(score)
    return scores
def plot_scores(scores,session,save_name):
spikes_per_region, _, n_timebins = prepare_data(session=session)
regions = spikes_per_region.keys()
fig = plt.figure(figsize=[10,5])
contour = plt.contourf(scores)
cb = fig.colorbar(contour, shrink = 0.5, aspect = 5)
cb.set_label('Accuracy')
tick_marks = np.arange(len(regions))
plt.yticks(tick_marks, regions)
plt.xticks(np.arange(0,n_timebins,20), np.arange(0,n_timebins*10,200))
plt.ylabel('Brain area')
plt.xlabel('Time (ms)')
plt.tight_layout()
plt.show()
# TODO create a dir in Images for the plot to be saved in
# fig.savefig(<path> + save_name, format='png')
if __name__=="__main__":
scores = simple_decoder(session = 12)
plot_scores(scores,12,'scores_s12.png')
def plot_all_sessions():
n_sessions = 39
for i in range(n_sessions):
scores = simple_decoder(session = i)
plot_scores(scores,i,'scores_s%d.png'%i)
if __name__=="__main__":
scores = simple_decoder(session=12)
plot_scores(scores,12,'scores_s12.png')
# plot_all_sessions()
for trial in range(num_trials): # this will run 340 times
# find the avg spike per time bin
# get the avg spk_per_time_bin
list_of_spikes_in_a_trial = []
list_spk_avg_per_trial= []
for t in range(num_timebins):
spikes_in_a_trial = session_spikes[t,t,:]
list_of_spikes_in_a_trial.append(spikes_in_a_trial)
trial_spk_avg = np.mean(spikes_in_a_trial)
list_spk_avg_per_trial.append(trial_spk_avg)
len(list_of_spikes_in_a_trial)
num_trials
avg_spks_per_timebin = []
for t in range(num_timebins):
    spikes_in_bin = session_spikes[:,:,t] # all neurons, all trials, one timebin
    avg_per_bin = np.mean(spikes_in_bin)
    avg_spks_per_timebin.append(avg_per_bin)
avg_spks_per_timebin
for t in range(num_timebins):
test_spks = test_set[t,t,:]
test_spks
print(test_spks.ndim)
print(test_spks.shape)
for t in range(num_timebins):
test_bin_piece = test_set[:,:,t]
test_bin_piece
print(test_bin_piece.ndim)
print(test_bin_piece.shape)
hat1 = test_set[0,0,:]
hat1
# 250 values -> the spike counts across timebins for one neuron on one trial
hat2 = test_set[1,1,:]
hat2
hat3 = test_set[2,2,:]
hat3
np.mean(hat1)
np.mean(hat2)
np.mean(hat3)
list_the_spikes_in_a_session = []
list_bin_means = []
for t in range(num_timebins):
the_spikes_in_a_session = test_set[t,t,:]
list_the_spikes_in_a_session.append(the_spikes_in_a_session)
avg_per_bin = np.mean(the_spikes_in_a_session)
list_bin_means.append(avg_per_bin)
print(list_the_spikes_in_a_session)
len(list_the_spikes_in_a_session)
Lederb = table.iloc[11]
Lederb
list_bin_means
```
<center>
<img src="../../img/ods_stickers.jpg">
## Open Machine Learning Course
<center>Author: Michael Kazachok (@miklgr500)
# <center>The Other Side of tensorflow: KMeans
## <center>Introduction
<p style="text-indent:20px;"> Many people know <strong>tensorflow</strong> as one of the best libraries for training neural networks, but tensorflow has grown considerably of late. It now offers <a href='https://www.tensorflow.org/programmers_guide/estimators'>Estimators</a>, which are more convenient than the old paradigm that serves as the foundation for the new one.</p>
<p style="text-indent:20px;"> The <a href = 'https://www.tensorflow.org/'>tensorflow</a> site has a good installation guide for each operating system and covers the use of <a href = 'https://ru.wikipedia.org/wiki/GPGPU'>GPGPU</a>. I will not burden this work with the inner "kitchen" of tensorflow (for that, I suggest reading at least the basics in the <a href='https://www.tensorflow.org/tutorials/'>official tutorial</a> and looking at <a href='https://github.com/aymericdamien/TensorFlow-Examples'>TensorFlow Tutorial and Examples for Beginners with Latest APIs</a>, which also has examples that will help later with studying neural networks); instead I will walk through the clustering algorithms already built into this library (of which there are effectively only two so far).</p>
<p style="text-indent:20px;"> We will use the Kaggle dataset from the <a href = 'https://www.kaggle.com/chicago/chicago-taxi-rides-2016'>Chicago Taxi Rides 2016</a> competition, which was used in one of the homework assignments (<span style='color:green'>I recommend using no more than two months of data</span>).</p>
<p style="text-indent:20px;"> Applying the simplest clustering algorithm in tensorflow will be accompanied by a look at the elegant visualizations (which I first saw this summer in the Kaggle <a href = 'https://www.kaggle.com/c/nyc-taxi-trip-duration'>New York City Taxi Trip</a> competition) presented by <a href = 'https://www.kaggle.com/drgilermo'>DrGuillermo</a> and <a href = 'https://www.kaggle.com/maheshdadhich'>BuryBuryZymon</a> in their kernels <a href = 'https://www.kaggle.com/drgilermo/dynamics-of-new-york-city-animation'>Dynamics of New York city - Animation</a> and <a href = 'https://www.kaggle.com/maheshdadhich/strength-of-visualization-python-visuals-tutorial'>Strength of visualization-python visuals tutorial</a>.</p>
<p style="text-indent:20px;"><i>P.S. The author was prompted to write this tutorial by the rather poor coverage of tensorflow's facilities for building simple, well-known machine learning algorithms, which for certain tasks can be more effective than complex ones.</i></p>
## <center>Loading the libraries used in this work and the data
```
FIG_SIZE = (12,12)
PATH_DATA_JSON = '../../data/column_remapping.json'
PATH_DATA_CSV = '../../data/chicago_taxi_trips_2016_*.csv'
GIF_PATH = '../../img/animation.gif'
KMEANS_GIF_PATH='../../img/kmeans_animation.gif'
NUM_CLUSTERS = 5
BATCH_SIZE = 5
NUM_STEPS = 50
LON_CONST = -87.623177
LAT_CONST = 41.881832
LON_ANI_CENTER = [-87.73, -87.60]
LAT_ANI_CENTER = [41.85, 42.00]
import json
import pandas as pd
from glob import glob
from joblib import Parallel, delayed
import folium
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import animation
from matplotlib.patches import Ellipse
from IPython.display import HTML
plt.rcParams.update({'figure.max_open_warning': 0})
import numpy as np
import tensorflow as tf
from geopy.geocoders import Nominatim
import io
import base64
from dateutil import parser
%load_ext watermark
```
Versions of the main libraries and system parameters.
```
%watermark -v -m -p numpy,pandas,matplotlib,tensorflow -g
```
We load the data for the first two months. Be careful with the paths to the data files.
```
# the kernel we will use
# to load and preprocess one month of data
def preproc_kernel(path):
with open(PATH_DATA_JSON) as json_file:
column_remapping = json.load(json_file)
df = pd.read_csv(path)
# later we will only need the geo data
# and the trip start time
df = df.loc[:, [
'trip_start_timestamp',
'pickup_latitude',
'pickup_longitude',
'dropoff_latitude',
'dropoff_longitude']].dropna()
geo_labels = ['pickup_latitude',
'pickup_longitude',
'dropoff_latitude',
'dropoff_longitude']
for g in geo_labels:
df[g] = df[g].apply(lambda x: float(column_remapping[g].get(str(int(x)))))
return df
dataset_files = sorted(glob(PATH_DATA_CSV))
# load the data in parallel
# on two cores, one file each
dfs = Parallel(n_jobs=2)(delayed(preproc_kernel)(path) for path in dataset_files)
# concatenate the data
df = pd.concat(dfs, ignore_index=True)
df.head()
```
## <center> Визуализация данных
Произведем предварительную визуализацию всех гео данных и выявим их границы.
```
# combine the geo data for pickup and dropoff points
longitude = list(df.pickup_longitude)+list(df.dropoff_longitude)
print('max_long:'+str(max(longitude)))
print('min_long:'+str(min(longitude)))
latitude = list(df.pickup_latitude)+list(df.dropoff_latitude)
print('max_lat:'+str(max(latitude)))
print('min_lat:'+str(min(latitude)))
loc_df = pd.DataFrame()
loc_df['longitude'] = longitude
loc_df['latitude'] = latitude
# visualize the combined geo data
fig, ax = plt.subplots(1,1, figsize = FIG_SIZE)
plt.plot(longitude,
latitude,
'.',
color = 'orangered',
markersize = 1.5,
axes = ax,
figure = fig
)
ax.set_axis_off()
plt.show();
```
<p style="text-indent:20px;">Мало что можно сказать про количество кластеров из графика выше. Но если вывести рапределение по широте и долготе, то картина немного прояснится.</p>
```
fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, figsize=FIG_SIZE)
sns.distplot(loc_df['longitude'], bins=300, kde=False, ax=ax1)
sns.distplot(loc_df['latitude'], bins=300, kde=False, ax=ax2)
plt.show();
```
<p style="text-indent:20px;">Из графиков выше видно, что наибольший трафик приходится практически на центр города. При этом стоит отметить, наличее довольно сильно выделяющегося трафика на долготе -87.90, а по долготе правея центра выделятся три центра с ярко выраженным трафиков. Таким образом кроме одного основного яровыделяющего по трафику центра есть еще как миниму четыре центра, которые можно выделить в отдельный кластер. В итоге можно выделить пять кластеров, которые имеют ярковыраженый трафик.</p>
## <center>KMeans in tensorflow
<p style="text-indent:20px;">This is probably one of the most in-demand clustering algorithms at the moment. I don't think the theory needs restating here (given that it was covered in a <a href='https://habrahabr.ru/company/ods/blog/325654/'>course lecture</a>); if you want to read more on this algorithm and on clustering in general, I can recommend <a href='http://www.machinelearning.ru/wiki/images/2/28/Voron-ML-Clustering-slides.pdf'>K.V. Vorontsov's lectures</a>.</p>
```
# build the data array in the required format,
# i.e. form [lon, lat] pairs.
# For the algorithm to work correctly,
# the constant component must
# be removed
data = [[(lon-LON_CONST), (lat-LAT_CONST)] for lon, lat in zip(longitude, latitude)]
data = np.array(data)
```
<p style="text-indent:20px;">В качестве основы выберем уже прошитый в tensorflow алгоритм <a href='https://www.tensorflow.org/versions/master/api_docs/python/tf/contrib/factorization/KMeans'>KMeans</a>(<a href='https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/contrib/factorization/python/ops/clustering_ops.py'>люблю открытый код</a>). Те кто разобрал открытый код, мог заметить, что из большого набора функций вызвать можем только <i>training_graph(self)</i>. Обратите внимание возвращается ли в вашей версии tensorflow данная функция переменную <i>cluster_centers_var</i>(в 1.3 она не возвращается).</p>
```
def KMeans_clustering(num_clusters=NUM_CLUSTERS, flag_print=True):
    # create the placeholder X;
    # substituting it in place of concrete values
    # tells the computational graph
    # that these values will be supplied later,
    # during training and/or initialization
X = tf.placeholder(tf.float32, shape=[None, 2])
    # build the computational graph for KMeans
kmeans = tf.contrib.factorization.KMeans(
inputs=X,
num_clusters=num_clusters,
initial_clusters="kmeans_plus_plus",
mini_batch_steps_per_iteration=BATCH_SIZE,
random_seed=29,
use_mini_batch=True
)
(all_scores,cluster_idx, scores,cluster_centers_initialized,\
cluster_centers_var,init_op,train_op) = kmeans.training_graph()
    # since a tuple is returned initially,
    # take only its first member
cluster_idx = cluster_idx[0]
    # compute the mean distance from the points
    # to their own cluster center
avg_distance = tf.reduce_mean(scores)
    # create the session and initialize
init_vars = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init_vars)
sess.run(init_op, feed_dict={X: data})
    # train the model step by step,
    # obtaining at every step
    # d: the mean distance from a point
    # to the center of its cluster
    #----------------------------
    # set the stopping criterion
for i in range(1,NUM_STEPS+1):
_, d, idx, cl_c = sess.run([train_op,
avg_distance,
cluster_idx,
cluster_centers_var],
feed_dict={X: data}
)
if (i%10==0)&(flag_print):
print('Step %i, Average Distance %.8f'%(i, d))
sess.close()
return d,idx,cl_c
```
<p style="text-indent:20px;">Let's visualize the algorithm at work by initializing all the clusters at the coordinate [LON_CONST, LAT_CONST], which is the city center.</p>
```
# make an animation of the training process
num_clusters = 8
# array for initializing the clusters
# at the point [LON_CONST, LAT_CONST]; but
# since all our data is shifted by
# the value of this coordinate,
# the initialization has to be done
# at the point [0, 0]
init_cl = np.array([[0, 0] for i in range(num_clusters)],
dtype=np.float32
)
X = tf.placeholder(tf.float32, shape=[None, 2])
# build the computational graph for KMeans
kmeans = tf.contrib.factorization.KMeans(
inputs=X,
num_clusters=num_clusters,
initial_clusters=init_cl,
mini_batch_steps_per_iteration=2,
random_seed=29,
use_mini_batch=False
)
(all_scores,cluster_idx, scores,cluster_centers_initialized,\
cluster_centers_var,init_op,train_op) = kmeans.training_graph()
# since a tuple is returned initially,
# take only its first member
cluster_idx = cluster_idx[0]
avg_distance = tf.reduce_mean(scores)
# create the session and initialize
init_vars = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init_vars)
sess.run(init_op, feed_dict={X: data})
fig, ax = plt.subplots(1,1, figsize = FIG_SIZE)
# define the function that we will pass to animation.FuncAnimation;
# it recomputes and redraws the plot at every step, but since
# mini_batch_steps_per_iteration=2 the picture only changes
# every 2 steps; the step values themselves are passed to
# FuncAnimation as a list, and FuncAnimation feeds them
# one by one into animate_kmeans
def animate_kmeans(step):
_, d, idx, cl_c = sess.run([train_op,
avg_distance,
cluster_idx,
cluster_centers_var],
feed_dict={X: data}
)
    # to simplify working with the data obtained after training,
    # a DataFrame is created and deleted at the end of this code;
    # this solution may not be entirely optimal,
    # it just makes life easier for yours truly =)
loc_df['labels'] = idx
cl_df = pd.DataFrame()
cl_df['longitude'] = cl_c[:,0]+LON_CONST
cl_df['latitude'] = cl_c[:,1]+LAT_CONST
cl_df['labels'] = cl_df.index
    # be sure to clear the previous frame
ax.clear()
ax.set_title('Step: '+str(step))
for l in cl_df['labels']:
ax.plot(loc_df.loc[loc_df['labels'] == l, 'longitude'],
loc_df.loc[loc_df['labels'] == l, 'latitude'],
'.',
markersize = 1.5
)
ax.plot(cl_df.loc[cl_df['labels'] == l, 'longitude'],
cl_df.loc[cl_df['labels'] == l, 'latitude'],
'ro'
)
ax.annotate(s=str(l),
xy=(cl_df.loc[cl_df['labels'] == l, 'longitude'],
cl_df.loc[cl_df['labels'] == l, 'latitude'])
)
ax.set_axis_off()
del cl_df
ani = animation.FuncAnimation(fig,
animate_kmeans,
list(range(0, 20)),
interval=500
)
# close the rendered figures
plt.close()
# path where the gif will be saved
gif_path = KMEANS_GIF_PATH
# save the gif
ani.save(gif_path,
writer='imagemagick',
fps=1
)
# open the saved gif and base64-encode it
# to build a data URL and embed it in HTML
video = io.open(gif_path,
'r+b'
).read()
encoded = base64.b64encode(video)
# render the animation in the notebook
HTML(data='''<img src="data:image/gif;base64,{0}"type="gif"/>'''.format(
encoded.decode('ascii')))
```
<p style="text-indent:20px;">You can see that the update happens every 2 steps, thanks to setting mini_batch_steps_per_iteration=2. Feel free to play with the code above! Try a different initialization ("kmeans_plus_plus", "random"), tweak the mini_batch parameters, or even change the number of clusters!</p>
<p style="text-indent:20px;">Let's find the optimal number of clusters using the method proposed in the lecture; while the computation runs you can brew a cup of coffee and study a new algorithm =)</p>
```
n_cluster = range(1,15,1)
avg_distance = []
for i in n_cluster:
d,idx,cl_c = KMeans_clustering(num_clusters=i, flag_print=False)
avg_distance.append(d)
plt.plot([i for i in n_cluster], avg_distance, color = 'seagreen')
plt.xlabel('number of cluster')
plt.ylabel('avg_distance')
plt.title('Optimal Number Of Cluster')
plt.show();
```
<p style="text-indent:20px;">The plot shows that... nothing can be seen =). So we guess again =) I would take 4 clusters, which agrees fairly well with the previous estimate, so let's take 5 clusters (in this case it is better to take the larger number, as it gives a more detailed picture of the traffic).</p>
```
NUM_CLUSTERS = 5
d,idx,cl_c = KMeans_clustering(num_clusters=NUM_CLUSTERS, flag_print=True)
```
<p style="text-indent:20px;">Let's add the cluster labels to loc_df and create a new DataFrame with the parameters (longitude, latitude and label for each cluster).</p>
```
loc_df['labels'] = idx
cl_df = pd.DataFrame()
cl_df['longitude'] = cl_c[:,0]+LON_CONST
cl_df['latitude'] = cl_c[:,1]+LAT_CONST
cl_df['labels'] = cl_df.index
cl_df.tail()
```
## <center> Visualization of the resulting clusters
```
fig, ax = plt.subplots(1,1, figsize = FIG_SIZE)
for l in cl_df['labels']:
plt.plot(loc_df.loc[loc_df['labels'] == l, 'longitude'],
loc_df.loc[loc_df['labels'] == l, 'latitude'],
'.',
markersize = 1.5,
axes = ax,
figure = fig
)
plt.plot(cl_df.loc[cl_df['labels'] == l, 'longitude'],
cl_df.loc[cl_df['labels'] == l, 'latitude'],
'ro',
axes = ax,
figure = fig
)
ax.annotate(s=str(l),
xy=(cl_df.loc[cl_df['labels'] == l, 'longitude'],
cl_df.loc[cl_df['labels'] == l, 'latitude'])
)
ax.set_axis_off()
plt.show();
# let's see where our clusters ended up on the map
chikago_map = folium.Map(location=[LAT_CONST, LON_CONST],
zoom_start=10,
tiles='OpenStreetMap'
)
# place the markers on the map of Chicago
for lon, lat in zip(cl_df['longitude'], cl_df['latitude']):
folium.Marker(location=[lat, lon]).add_to(chikago_map)
chikago_map
```
<p style="text-indent:20px;">Notice that the two cluster centroids farthest from the mass of pickup and drop-off locations sit right next to the airports (1, 3), one belongs to the northern residential areas of Chicago (2), and two centroids can be attributed to the business and cultural parts of Chicago (4, 0).</p>
<p style="text-indent:20px;">It may seem strange that the southern residential areas of Chicago have no pronounced centroid, but if you learn a bit more about the city it turns out not to be so strange. The southern neighborhoods of Chicago are Mexican and Irish districts where the standard of living is lower than in the northern part of the city.</p>
## <center>Visualizing traffic between the centers
<p style="text-indent:20px;">To forecast the traffic between clusters by hour we need to: extract the pickup hour and assign cluster membership labels to the pickup and drop-off locations.</p>
```
df['pickup_hour'] = df['trip_start_timestamp'].apply(lambda x: parser.parse(x).hour)
df['pickup_cluster'] = loc_df.loc[:len(df)-1,'labels'].values
df['dropoff_cluster'] = loc_df.loc[len(df):, 'labels'].values
```
<p style="text-indent:20px;">Let's start making something pretty (i.e. an animation of the traffic between clusters). If you want to get a better grip on animation in matplotlib, you can read the documentation on the <a href='https://matplotlib.org/api/animation_api.html'>official site</a>.</p>
```
def trafic_animation(lon_ani_lim=None, lat_ani_lim=None, strong=6):
    # by passing limits you can restrict the area
    # shown in the animation;
    # the strong parameter is also important:
    # it is a scaling coefficient
    # that controls the width of the arrows
if (lon_ani_lim==None)|(lat_ani_lim==None):
lim_cl_df = cl_df
elif (len(lon_ani_lim)!=2)|(len(lat_ani_lim)!=2):
lim_cl_df = cl_df
else:
lim_cl_df = cl_df[
((cl_df['longitude']>lon_ani_lim[0])&(cl_df['longitude']<lon_ani_lim[1]))&
((cl_df['latitude']>lat_ani_lim[0])&(cl_df['latitude']<lat_ani_lim[1]))
]
fig, ax = plt.subplots(1,1, figsize = FIG_SIZE)
    # the function that will be passed to animation.FuncAnimation
def animate(hour):
        # clear everything drawn so far
ax.clear()
        # redraw everything from scratch
ax.set_title('Absolute Traffic - Hour' + str(int(hour)) + ':00')
plt.figure(figsize = FIG_SIZE)
        # the static part stays unchanged,
        # but since we clear everything beforehand
        # it has to be redrawn on every frame
for l in lim_cl_df['labels']:
ax.plot(loc_df.loc[loc_df['labels'] == l, 'longitude'],
loc_df.loc[loc_df['labels'] == l, 'latitude'],
'.',
markersize = 1.5
)
ax.plot(cl_df.loc[cl_df['labels'] == l, 'longitude'],
cl_df.loc[cl_df['labels'] == l, 'latitude'],
'ro'
)
ax.annotate(s=str(l),
xy=(cl_df.loc[cl_df['labels'] == l, 'longitude'],
cl_df.loc[cl_df['labels'] == l, 'latitude'])
)
        # the dynamic part (the arrows);
        # it changes over time
for first_label in lim_cl_df['labels']:
for second_label in lim_cl_df['labels']:
                # count the number of rides in the given hour
                # from the first cluster to the second
num_of_rides = len(df[(df['pickup_cluster'] == first_label)&
(df['dropoff_cluster'] == second_label)&
(df['pickup_hour'] == hour)])
                # an arrow, like a vector, is defined by two points:
                # the first is given by the starting coordinates,
                # the second is passed as the offset from the first
                # along both axes
dist_x = cl_df.longitude[cl_df['labels'] == first_label].values[0] - \
cl_df.longitude[cl_df['labels'] == second_label].values[0]
dist_y = cl_df.latitude[cl_df['labels'] == first_label].values[0] - \
cl_df.latitude[cl_df['labels'] == second_label].values[0]
                # ride counts are normalized by the total number of rides
pct = np.true_divide(num_of_rides, len(df))
                # create the Arrow object
                # and draw it
arr = plt.Arrow(cl_df.longitude[cl_df['labels'] == first_label].values,
cl_df.latitude[cl_df['labels'] == first_label].values,
-dist_x,
-dist_y,
edgecolor='white',
width=strong*pct
)
ax.add_patch(arr)
arr.set_facecolor('g')
ax.set_axis_off()
ani = animation.FuncAnimation(fig,
animate,
sorted(df['pickup_hour'].unique()),
interval=1000
)
    # close the rendered figures
plt.close()
    # path where the gif will be saved
gif_path = GIF_PATH
    # save the gif
ani.save(gif_path,
writer='imagemagick',
fps=1
)
    # open the saved gif and base64-encode it
    # to build a data URL and embed it in HTML
video = io.open(gif_path,
'r+b'
).read()
encoded = base64.b64encode(video)
return encoded
encoded = trafic_animation()
# render the animation
HTML(data='''<img src="data:image/gif;base64,{0}"type="gif"/>'''.format(
encoded.decode('ascii')))
# zoom in on the city center
encoded = trafic_animation(lon_ani_lim=LON_ANI_CENTER,
lat_ani_lim=LAT_ANI_CENTER,
strong=2
)
HTML(data='''<img src="data:image/gif;base64,{0}"type="gif"/>'''.format(
encoded.decode('ascii')))
```
<p style="text-indent:20px;">The beauty of this kind of visualization is that even a child can interpret it.</p>
## <center> Conclusion
<p style="text-indent:20px;">Tensorflow is a fairly powerful API that works well for much more than training neural networks, although it must be said that the documentation for some parts of the library is sparse (compared to sklearn). One such part was covered in this tutorial. I also hope you liked the visualizations and fell in love with them the way I did when I first saw them. If this kind of tutorial proves popular, I will consider turning it into an article on Habrahabr and starting a series of such posts.</p>
<p style="text-indent:20px;">Thank you for your attention!</p>
# Generating counterfactual explanations with any ML model
The goal of this notebook is to show how to generate CFs for ML models using frameworks other than TensorFlow or PyTorch. This is a work in progress and here we show a method to generate diverse CFs by independent random sampling of features. We use scikit-learn models for demonstration.
```
# import DiCE
import dice_ml
from dice_ml.utils import helpers # helper functions
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report, accuracy_score
```
## Loading dataset
We use the "adult" income dataset from UCI Machine Learning Repository (https://archive.ics.uci.edu/ml/datasets/adult). For demonstration purposes, we transform the data as described in dice_ml.utils.helpers module.
```
dataset = helpers.load_adult_income_dataset()
dataset.head()
d = dice_ml.Data(dataframe=dataset, continuous_features=['age', 'hours_per_week'], outcome_name='income')
```
## Training a custom ML model
Below, we build an Artificial Neural Network using *MLPClassifier* in scikit-learn. We try to use the same set of parameters as used in this advanced [notebook](DiCE_with_private_data.ipynb), however, there are other framework-dependent parameters that can't be easily ported, so the accuracy/performance of the two models will be different.
```
train, test = d.split_data(d.normalize_data(d.one_hot_encoded_data))
X_train = train.loc[:, train.columns != 'income']
y_train = train.loc[:, train.columns == 'income']
X_test = test.loc[:, test.columns != 'income']
y_test = test.loc[:, test.columns == 'income']
mlp = MLPClassifier(hidden_layer_sizes=(20), alpha=0.001, learning_rate_init=0.01, batch_size=32, random_state=17,
max_iter=20, verbose=False, validation_fraction=0.2, ) #max_iter is epochs in TF
mlp.fit(X_train, y_train.values.ravel())
# provide the trained ML model to DiCE's model object
backend = None
m = dice_ml.Model(model=mlp, backend=backend)
```
## Generate diverse counterfactuals
```
# initiate DiCE
exp = dice_ml.Dice(d, m)
# query instance in the form of a dictionary; keys: feature name, values: feature value
query_instance = {'age':22,
'workclass':'Private',
'education':'HS-grad',
'marital_status':'Single',
'occupation':'Service',
'race': 'White',
'gender':'Female',
'hours_per_week': 45}
# generate counterfactuals
dice_exp = exp.generate_counterfactuals(query_instance, total_CFs=4, desired_class="opposite")
dice_exp.visualize_as_dataframe(show_only_changes=True)
```
It can be observed that the random sampling method produces less sparse CFs than DiCE's current implementation, and the sparsity issue worsens as *total_CFs* increases.
Further, different sets of counterfactuals can be generated with different random seeds.
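The idea behind independent random sampling can be sketched in a few lines of plain NumPy. This is a toy illustration under a hypothetical one-feature-threshold model, not DiCE's actual implementation:

```python
import numpy as np

def sample_counterfactuals(predict, x, ranges, total_cfs=4, n_samples=1000, seed=17):
    # draw candidates by sampling each feature independently
    # and uniformly from its permitted range
    rng = np.random.default_rng(seed)
    lows = np.array([lo for lo, hi in ranges])
    highs = np.array([hi for lo, hi in ranges])
    candidates = rng.uniform(lows, highs, size=(n_samples, len(ranges)))
    # keep only candidates whose predicted class differs from x's
    flipped = candidates[predict(candidates) != predict(x[None, :])]
    return flipped[:total_cfs]

# hypothetical toy model: class 1 iff the first feature exceeds 0.5
predict = lambda X: (X[:, 0] > 0.5).astype(int)
x = np.array([0.2, 0.9])
cfs = sample_counterfactuals(predict, x, ranges=[(0, 1), (0, 1)])
```

Because every feature is sampled independently, each candidate typically changes many features at once, which is exactly the sparsity drawback noted above.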
```
# generate counterfactuals
dice_exp = exp.generate_counterfactuals(query_instance, total_CFs=4, desired_class="opposite", random_seed=9) # default random seed is 17
dice_exp.visualize_as_dataframe(show_only_changes=True)
```
### Selecting the features to vary
When a few features are fixed, random sampling is often unable to generate valid CFs, whereas current DiCE still produces valid, diverse CFs.
```
# generate counterfactuals
dice_exp = exp.generate_counterfactuals(query_instance, total_CFs=4, desired_class="opposite",
features_to_vary=['workclass','education','occupation','hours_per_week'])
dice_exp.visualize_as_dataframe(show_only_changes=True)
```
### Choosing feature ranges
Since the features are sampled randomly, they can freely vary across their entire range. In the example below, we show how the range of continuous features can be controlled using the *permitted_range* parameter, which can now be passed during CF generation.
```
# generate counterfactuals
dice_exp = exp.generate_counterfactuals(query_instance, total_CFs=4, desired_class="opposite",
permitted_range={'age':[22,50],'hours_per_week':[40,60]})
dice_exp.visualize_as_dataframe(show_only_changes=True)
```
# NumPy
NumPy is the fundamental package for scientific computing in Python. It is a Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more.
- NumPy is a Python library; the name stands for Numerical Python
- Used for working with arrays. It is very useful in numerical calculations: matrices, linear algebra, etc.
- The array object in NumPy is called ndarray (n-dimensional array). Arrays are frequently used in data science, where speed and accuracy matter. It is similar to a list but much faster.
- Elements in a NumPy array cannot be heterogeneous as in lists. The elements of a NumPy array are all required to be of the same data type, and thus will be the same size in memory.
- NumPy arrays have a fixed size at creation, unlike Python lists (which can grow dynamically). Changing the size of an ndarray will create a new array and delete the original.
- NumPy library was written partially in Python, but most of the parts that require fast computation are written in C or C++.
- For detailed information you can go through the [official documentation](https://numpy.org/doc/stable/user/absolute_beginners.html#numpy-the-absolute-basics-for-beginners)
- [Source code for NumPy](https://github.com/numpy/numpy)
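A quick illustration of the homogeneity rule mentioned above: when you mix types, NumPy upcasts everything to one common dtype rather than storing heterogeneous elements.

```python
import numpy as np
a = np.array([1, 2, 3])        # all ints -> an integer dtype
b = np.array([1, 2, 3.0])      # one float -> everything becomes float64
print(a.dtype)   # an integer dtype, e.g. int64 (platform dependent)
print(b.dtype)   # float64
```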
```
# To import the library use
import numpy
# add keyword numpy before using
a = numpy.array([1,2,3,4,5]) # defines a as numpy object
# array is enclosed in ([])
```
NumPy is usually imported under an alias using the keyword "as" - import numpy as np
This shortens the syntax: instead of numpy.array we can type np.array
```
import numpy as np
a = np.array([1,2,3,4,5])
b = [1,2,3,4,5]
print(a)
print(b)
print(type(a)) # shows the type
print(type(b))
```
Notice the output of print(a): it is enclosed in square brackets like a list, but the elements are not separated by commas as in a list. Hence the output is a NumPy array.
```
#Use Tuple to create numpy array
import numpy as np
a = np.array((1,2,3,4,5))
print(a)
print(type(a))
# To create an ndarray, we can pass a list, tuple or any array-like object into the array() method.
```
## Dimensions in Array
A dimension in array is one level of array depth
- Nested arrays are arrays that have arrays as elements.
#### Check Number of Dimensions of array
The *ndim* attribute returns an integer that tells us how many dimensions an array has.
If a is defined as an array, check its dimensions with a.ndim
### 0-D Arrays
- 0-D Arrays or scalars are elements in array, each value in array is 0-D array.
```
import numpy as np
a = np.array(9) # single element
print(a)
print(a.ndim) #prints the dimension of an array
```
### 1-D Arrays
An array that has 0D Arrays as its elements.
```
a = np.array([1,2,3,4,5])
print(a)
print(a.ndim)
```
### 2-D Arrays
An array that has 1-D arrays as its elements is called a 2-D array.
It represents a matrix.
Note: NumPy also has a submodule dedicated for matrix operations called numpy.mat (go through [documentation](https://numpy.org/doc/stable/reference/generated/numpy.mat.html))
```
import numpy as np
a = np.array([[1,2,3],[4,5,6]])
print(a)
print(a.ndim)
```
### 3-D Arrays
An array of 2D arrays is called a 3D array.
```
import numpy as np
a = np.array([[[1,2,3],[4,5,6],[7,8,9]],[[9,8,7],[6,5,4],[3,2,1]]])
print(a)
print(a.ndim)
# Common example to demonstrate dimensions
import numpy as np
a = np.array(45)
b = np.array([1,2,3,4,5])
c = np.array([[1,2,3],[4,5,6]])
d = np.array([[1,2,3],[4,5,6],[7,8,9]])
e = np.array([[[1,2,3],[4,5,6]],[[1,2,3],[4,5,6]]])
# Pay very close attention to the number of square brackets.
# One neat trick: the number of square brackets at the beginning equals the number of dimensions of that array.
print(a,'\n')
print(b,'\n')
print(c,'\n')
print(d,'\n')
print(e,'\n')
print("The dimension of",'\n',a,"is --",a.ndim)
print("The dimension of",'\n',b,"is --",b.ndim)
print("The dimension of",'\n',c,"is --",c.ndim)
print("The dimension of",'\n',d,"is --",d.ndim)
print("The dimension of",'\n',e,"is --",e.ndim)
# To make an array of desired dimensions
a = np.array([1,2,3,4],ndmin=7)
print(a)
print("Number of dimensions: ",a.ndim)
```
## Access Array Elements
Array indexing is the same as accessing an array element.
You can access an array element by referring to its index number.
```
import numpy as np
a = np.array([1,2,3,4])
print(a[0]) # Remember! first element has 0 index in python
'''To access elements from 2-D arrays we can use comma separated integers representing
the dimension and the index of the element.'''
a = np.array([[1,2,3,4,5],[6,7,8,9,10]])
#[1,2,3,4,5] = 0th dimension, [6,7,8,9,10] = 1st dimension
print(a[0,1]) # first index = 0 ->selects 1st array, second index = 1 ->selects second element of first array
print(a[1,3]) #syntax - a[dimension,element]
a = np.array([[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]]])
print(a[0,1,1])
'''
first index = 0 -> Selects [[1,2,3],[4,5,6]]
second index = 1 -> Selects [4,5,6]
third index = 1 -> Selects 5
'''
print("Dimensions of a: ",a.ndim)
a.shape
```
a has 2 elements `[[1,2,3],[4,5,6]]` & `[[7,8,9],[10,11,12]]`
of which each has 2 elements `[1,2,3]` & `[4,5,6]` of 1st element; `[7,8,9]` & `[10,11,12]` of 2nd element
of which each has 3 elements `1,2,3` .... and so on you get the point
`a.shape` returns (2,2,3) which is the shape of an array
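Since `shape` fully describes the layout of an array, the same data can be viewed with a different shape using `reshape` (as long as the total number of elements matches):

```python
import numpy as np
a = np.arange(12)           # 1-D array of 12 elements
b = a.reshape(2, 2, 3)      # the same 12 elements viewed as a (2, 2, 3) array
print(b.shape)              # (2, 2, 3)
print(b.ndim)               # 3
```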
## Slicing Arrays
Syntax: [start_inclusive:end_exclusive]
or, with a step:
[start:end:step]
Leaving the start or end index blank means starting from the beginning or going to the end, respectively
```
a = np.array([1,2,3,4,5,6,7,8,9])
print(a[1:5]) # From 1st index to 4th index
a[:5] # From beginning to 4th index
a[5:]
a[2:6:2] # from index 2 to 5 in steps of 2
b = np.array([1,2,3,4,5,6,7])
c = np.array_split(b,3) # array_split(array,no. of splits)
print(c)
```
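One idiom the examples above don't show: negative indices count from the end of the array, and they also work inside slices:

```python
import numpy as np
a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
print(a[-1])     # 9  (last element)
print(a[-3:])    # [7 8 9]  (last three elements)
print(a[::-1])   # [9 8 7 6 5 4 3 2 1]  (reversed)
```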
## Random
NumPy has a function `random` which creates an array of given shape and populate it with random samples from a uniform distribution over `[0,1)` [Documentation](https://numpy.org/doc/stable/reference/random/generated/numpy.random.rand.html)
```
# import random from numpy so that we don't have to write np.random.rand()
from numpy import random
x = random.rand() #returns a random float between 0 and 1
x
```
`random.randint(low, high=None, size=None, dtype=int)`
Return random integers from low (inclusive) to high (exclusive).
Return random integers from the “discrete uniform” distribution of the specified dtype in the “half-open” interval [low, high). If high is None (the default), then results are from [0, low).
[Documentation](https://numpy.org/doc/stable/reference/random/generated/numpy.random.randint.html#numpy-random-randint)
```
x = random.randint(100, size=(5)) #gives an array of 5 random integers between 0 and 100
x
x = random.randint(100, size=(3,3)) # gives a 3 x 3 array
x
x = random.choice([3,5,7,9]) # chooses a random value from given array
x
x = random.choice([3,5,7,9],size=(3,3)) # creates a 3 x 3 array by choosing values randomly from given array
x
x = random.randint(100, size=(5))
random.shuffle(x) # shuffles x in place and returns None
print(x)
x = random.randint(1000,size=(10)) # 10 random values between 0 and 1000
print(x)
print(np.var(x)) # Variance
print(np.std(x)) # Standard Deviation
print(np.average(x)) # Average
```
`np.random.randn()` returns a sample(or samples) from the "Standard Normal" Distribution.
If positive int_like arguments are provided, `randn` generates an array of shape (d0,d1,...,dn), filled with random floats sampled from a univariate "normal" (Gaussian) distribution of mean 0 and variance 1. A single float randomly sampled from the distribution is returned if no argument is provided.
```
x = np.random.randn(10)
x
```
Modify a sequence in-place by shuffling its contents.
This function only shuffles the array along the first axis of a multi-dimensional array. The order of sub-arrays is changed but their contents remains the same.
```
x = np.array([1,2,3,4,5,6,7,8,9,10])
random.shuffle(x)
x
```
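To see the "first axis only" behaviour described above, shuffle a 2-D array: the rows change order, but each row's contents stay intact.

```python
import numpy as np
m = np.arange(12).reshape(4, 3)  # 4 rows of 3 elements
np.random.shuffle(m)             # reorders whole rows in place
# each original row [0 1 2], [3 4 5], [6 7 8], [9 10 11]
# is still present, only possibly in a new position
print(m)
```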
## Products
```
p1 = np.inner(2,2) # gives inner product
v_a = 9 + 6j
v_b = 5 + 2j
p2 = np.inner(v_a,v_b) # inner product of 2 vectors
print(p1)
print(p2)
a1 = np.array([[2,6],[7,8]])
a2 = np.array([[5,10],[-2,3]])
p3 = np.inner(a1,a2)
print(p3)
# Cross Product
p4 = np.cross(a1,a2)
print(p4)
# Dot Product
p5 = np.dot(a1,a2)
p5
```
If we just want the indices where a certain condition is satisfied, we can use `np.where( )`. This function is used to filter out data.
```
x = np.array([0,1,2,3,4,5,6,7,8,9])
indices = np.where(x<5)
x[indices]
```
**Functions like np.arange, np.linspace are very useful:**
np.arange (read as 'a range') gives an array of numbers within a given range and stepsize
np.linspace gives an array of linearly spaced numbers
```
np.arange(0,10,3) # syntax - (inclusive_start, exclusive_stop, stepsize)
#This will give an array of values from 0 to 10 in steps of 3
np.arange(-np.pi, np.pi, 1)
np.linspace(-np.pi, np.pi, 7) # linearly spaced values - difference between 2 consecutive values not necessarily 1
```
**Notice** the difference between `np.arange()` function and `np.linspace()` function:
`np.arange` function gives values which have same difference but doesn't include the last value, whereas `np.linspace` function first sets start and end value and divides the numbers linearly.
This changes the output of both of these function significantly.
In the syntax of `np.arange` function the **last value denotes the difference between each element**. But in `np.linspace` function the **last value denotes the number of elements desired in the given range**, and the difference between each element is determined accordingly by the system.
```
np.linspace(0,np.pi,10) #syntax - (inclusive_start, INCLUSIVE_stop, Number of elements)
```
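The contrast described above is easiest to see side by side with the same endpoints:

```python
import numpy as np
print(np.arange(0, 1, 0.25))    # step 0.25, stop EXCLUDED -> [0.   0.25 0.5  0.75]
print(np.linspace(0, 1, 5))     # 5 points, stop INCLUDED  -> [0.   0.25 0.5  0.75 1.  ]
```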
## NumPy Logarithms
NumPy has functions to perform log at base 2, e and 10
- `log2()` - log to the base 2
- `log10()` - log to the base 10
- `log()` - natural log / base $\mathcal{e}$
```
#log to the base 2
x = np.arange(1,10)
print(x)
print(np.log2(x))
# log to the base 10
print(np.log10(x))
# log to the base e or natural log (ln)
print(np.log(x))
```
## NumPy LCM and GCD
NumPy has the functions `np.lcm` and `np.gcd`.
We can also use these functions to find the LCM and GCD of all the elements in an array using the `reduce()` method
```
x = 4
y = 12
lcm = np.lcm(x,y)
lcm
gcd = np.gcd(x,y)
gcd
x = np.arange(2,10)
y = np.lcm.reduce(x) # use reduce() when the element is an array
y
x = np.array([4,44,40,20,22])
np.gcd.reduce(x)
x = np.random.randint(100,size=2)
print(x)
print(np.gcd.reduce(x))
```
## Convert Degrees into Radians and Radians to Degrees
By default the values are in radians, but we can convert them into degrees and vice versa if required
$$180^\circ=\pi\;rad$$
$$\therefore 1\;rad=\Big(\frac{180}{\pi}\Big)^\circ$$
```
# Suppose we have array of values in degrees
import numpy as np
x = np.array([0,30,45,60,90,180,270,360])
radian = np.deg2rad(x)
print(radian)
degree = np.rad2deg(radian)
print(degree)
y = np.array([np.pi/2, np.pi, 4*np.pi/3, 2*np.pi])
y_degree = np.rad2deg(y)
print(y_degree)
```
NumPy also has functions to find angles, i.e. the inverse trig functions:
arcsin( ), arccos( ), arctan( )
```
x = np.arcsin(0.8)
x_deg = np.rad2deg(x)
print(x)
print(round(x_deg,2)) # round(x_deg,2) rounds off the value of x_deg to 2 decimal places
```
# Analysis of the results of the survey on knowledge and attitudes toward the use of achiote
#### Load the libraries we will use
```
library("dplyr")
library("tidytext")
library("tm")
library("ggplot2")
library("stringr")
library("corrplot")
library("cluster")
```
#### Read the CSV files
```
achiote1 <- read.csv("first_chunk.csv", header = FALSE)
achiote2 <- read.csv("second_chunk.csv", header = FALSE)
```
### Clean up to prepare the data
#### Drop unnecessary columns
```
a1_clean <- achiote1[c(-1, -2),-c(1,68)]
a2_clean <- achiote2[c(-1, -2),-c(1,68)]
```
#### Because some questions allow multiple answers, we need to merge the relevant columns
```
a1_clean$V18 <- paste(a1_clean$V18, a1_clean$V19, a1_clean$V20, a1_clean$V21, a1_clean$V22, a1_clean$V23, sep = " ")
a1_clean$V24 <- paste(a1_clean$V24, a1_clean$V25, a1_clean$V26, a1_clean$V27, a1_clean$V28, a1_clean$V29, a1_clean$V30, a1_clean$V31, a1_clean$V32, a1_clean$V33, a1_clean$V34, sep = " ")
a1_clean$V36 <- paste(a1_clean$V36, a1_clean$V37, a1_clean$V38, a1_clean$V39, a1_clean$V40, a1_clean$V41, a1_clean$V42, sep = " ")
a1_clean$V47 <- paste(a1_clean$V47, a1_clean$V48, a1_clean$V49, a1_clean$V50, a1_clean$V51, a1_clean$V52, a1_clean$V53, a1_clean$V54, a1_clean$V55, sep = " ")
a1_clean$V58 <- paste(a1_clean$V58, a1_clean$V59, a1_clean$V60, a1_clean$V61, a1_clean$V62, a1_clean$V63, a1_clean$V64, a1_clean$V65, sep = " ")
a2_clean$V18 <- paste(a2_clean$V18, a2_clean$V19, a2_clean$V20, a2_clean$V21, a2_clean$V22, a2_clean$V23, sep = " ")
a2_clean$V24 <- paste(a2_clean$V24, a2_clean$V25, a2_clean$V26, a2_clean$V27, a2_clean$V28, a2_clean$V29, a2_clean$V30, a2_clean$V31, a2_clean$V32, a2_clean$V33, a2_clean$V34, sep = " ")
a2_clean$V36 <- paste(a2_clean$V36, a2_clean$V37, a2_clean$V38, a2_clean$V39, a2_clean$V40, a2_clean$V41, a2_clean$V42, sep = " ")
a2_clean$V47 <- paste(a2_clean$V47, a2_clean$V48, a2_clean$V49, a2_clean$V50, a2_clean$V51, a2_clean$V52, a2_clean$V53, a2_clean$V54, a2_clean$V55, sep = " ")
a2_clean$V58 <- paste(a2_clean$V58, a2_clean$V59, a2_clean$V60, a2_clean$V61, a2_clean$V62, a2_clean$V63, a2_clean$V64, a2_clean$V65, sep = " ")
```
#### Remove the duplicated columns
```
a1_clean <- subset(a1_clean, select=-c(V19,V20,V21,V22,V23,V25,V26,V27,V28,V29,V30,V31,V32,V33,V34,V37,V38,V39,V40,V41,V42,V48,V49,V50,V51,V52,V53,V54,V55,V59,V60,V61,V62,V63,V64,V65))
a2_clean <- subset(a2_clean, select=-c(V19,V20,V21,V22,V23,V25,V26,V27,V28,V29,V30,V31,V32,V33,V34,V37,V38,V39,V40,V41,V42,V48,V49,V50,V51,V52,V53,V54,V55,V59,V60,V61,V62,V63,V64,V65))
```
#### Rename the columns of the data sets
```
colnames(a1_clean) <- c("Id",
"Ip",
"Acceso",
"Fecha_inicio",
"Fecha_finalizacion",
"Procedencia",
"Municipio",
"Edad",
"Sexo",
"Q1",
"Q2",
"Q3",
"Q4",
"Justificacion_1",
"He leido",
"Q5",
"Q6",
"Q7",
"Q8",
"Q9",
"Q10",
"Q11",
"Q12",
"Q13",
"Q14",
"Q15",
"Justificacion_2",
"Q16",
"Q17",
"Justificacion_3")
colnames(a2_clean) <- c("Id",
"Ip",
"Acceso",
"Fecha_inicio",
"Fecha_finalizacion",
"Procedencia",
"Municipio",
"Edad",
"Sexo",
"Q1",
"Q2",
"Q3",
"Q4",
"Justificacion_1",
"He leido",
"Q5",
"Q6",
"Q7",
"Q8",
"Q9",
"Q10",
"Q11",
"Q12",
"Q13",
"Q14",
"Q15",
"Justificacion_2",
"Q16",
"Q17",
"Justificacion_3")
```
#### Combine our separate datasets into one
```
achiote <- rbind(a1_clean, a2_clean)
```
### Remove odd characters from the survey text
#### Set the encoding to UTF-8 to accept accents and other characters
```
Encoding(achiote$Edad) <- "UTF-8"
Encoding(achiote$Municipio) <- "UTF-8"
Encoding(achiote$Q1) <- "UTF-8"
Encoding(achiote$Q6) <- "UTF-8"
Encoding(achiote$Q7) <- "UTF-8"
Encoding(achiote$Q11) <- "UTF-8"
Encoding(achiote$Q12) <- "UTF-8"
Encoding(achiote$Q13) <- "UTF-8"
Encoding(achiote$Q14) <- "UTF-8"
Encoding(achiote$Q16) <- "UTF-8"
Encoding(achiote$Justificacion_1) <- "UTF-8"
Encoding(achiote$Justificacion_2) <- "UTF-8"
Encoding(achiote$Justificacion_3) <- "UTF-8"
```
#### Remove boilerplate phrases that add noise
```
achiote$Q1 <- gsub("Si, ¿cuál? ¿cómo?:", "", achiote$Q1, fixed=TRUE)
achiote$Q5 <- gsub("Si, especifique:", "", achiote$Q5, fixed=TRUE)
achiote$Q7 <- gsub("Otro::", "", achiote$Q7, fixed=TRUE)
achiote$Q12 <- gsub("Conozco otra/otras, ¿cuál/cuáles?:", "", achiote$Q12, fixed=TRUE)
achiote$Q13 <- gsub("Si, ¿cuál? ¿cómo?:", "", achiote$Q13, fixed=TRUE)
```
### Inspect the final product of the cleaning
```
head(achiote)
tail(achiote)
```
#### Handle dates and times
```
# dates
fecha_inicio <- as.Date(substr(achiote$Fecha_inicio, 0, 8), "%m/%d/%y")
fecha_final <- as.Date(substr(achiote$Fecha_finalizacion, 0, 8), "%m/%d/%y")
# hours and minutes
hora_inicio <- as.POSIXct(substr(achiote$Fecha_inicio, 11, 15), format = "%H:%M")
hora_final <- as.POSIXct(substr(achiote$Fecha_finalizacion, 11, 15), format = "%H:%M")
# new columns
achiote$Fecha_inicio <- fecha_inicio
achiote$Fecha_finalizacion <- fecha_final
achiote$hora_inicio <- hora_inicio
achiote$hora_final <- hora_final
dif <- hora_final - hora_inicio
achiote$tiempo_terminar <- dif
```
#### Convert the character columns to factors
```
achiote[sapply(achiote, is.character)] <- lapply(achiote[sapply(achiote, is.character)],
as.factor)
summary(achiote)
str(achiote)
```
## Results
#### Colors for the plots
```
color.function <- colorRampPalette(c("#FFFFFF" , "#45094f" ))
color.ramp <- color.function(n = 10)
```
#### Procedencia
```
info <- table(achiote$Procedencia)
xx <- barplot(info,main="Procedencia", col=color.ramp)
info
```
#### Municipio
```
achiote$Municipio[achiote$Municipio == 'Ciudad'] <- 'Guatemala'
achiote$Municipio[achiote$Municipio == 'Ciudad de Guatemala'] <- 'Guatemala'
achiote$Municipio[achiote$Municipio == 'GUATEMALA'] <- 'Guatemala'
achiote$Municipio[achiote$Municipio == 'guatemala'] <- 'Guatemala'
achiote$Municipio[achiote$Municipio == 'Ciudad Guatemala'] <- 'Guatemala'
achiote$Municipio[achiote$Municipio == 'En Asunción Mita Jutiapa'] <- 'Asunción Mita'
achiote$Municipio[achiote$Municipio == 'Guatemala, Fraijanes'] <- 'Fraijanes'
achiote$Municipio[achiote$Municipio == 'Sta Catarina Pinula'] <- 'Santa Catarina Pinula'
achiote$Municipio[achiote$Municipio == 'Sta. Catarina Pinula'] <- 'Santa Catarina Pinula'
achiote$Municipio[achiote$Municipio == 'Villa Canales'] <- 'Villa canales'
achiote$Municipio[achiote$Municipio == 'Sacatepéquez'] <- 'San Lucas Sacatepéquez'
achiote$Municipio <- droplevels(achiote$Municipio)
info <- table(achiote$Municipio)
xx <- barplot(info,main="Municipio", col=color.ramp, cex.lab=2)
info
```
#### Edad
```
info <- table(achiote$Edad)
xx <- barplot(info,main="Edad", col=color.ramp, cex.lab=2)
info
```
#### Sexo
```
info <- table(achiote$Sexo)
xx <- barplot(info,main="Sexo", col=color.ramp, cex.lab=2)
info
```
#### Have you used any plant to treat an illness or ailment?
```
text <- toupper(as.character(levels(achiote$Q1)))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(line = seq_along(text), text = text) # avoid hardcoding the number of levels
custom_stop_words <- bind_rows(stop_words,
tibble(word = c(tm::stopwords("spanish"), "dolor", "estomago", "infusion", "colicos"),
lexicon = "custom"))
text_df <- text_df %>%
unnest_tokens(word, text) %>%
anti_join(custom_stop_words) %>%
count(word, sort = TRUE)
text_df %>%
filter(n > 3) %>%
mutate(word = reorder(word, n)) %>%
ggplot(aes(word, n)) +
geom_col(fill = "#FF6666") +
labs(x = "Plantas utilizadas", y = "Repeticiones") +
coord_flip()
```
#### Are you familiar with achiote?
```
info <- table(achiote$Q2)
xx <- barplot(info,main="¿Conoce usted el achiote?", col=color.ramp, cex.lab=2)
info
```
#### Would you be willing to learn more about the ancestral uses of achiote?
```
temp <- as.character(achiote$Q3)
si <- grep("Si", temp, value = TRUE)
no <- grep("No", temp, value = TRUE)
info <- table(c(si, no))
xx <- barplot(info,main="¿Estaría dispuesto a conocer más sobre los usos del achiote?", col=color.ramp, cex.lab=2)
info
```
#### How effective do you believe alternative achiote treatments are?
```
temp <- as.character(achiote$Q4)
uno <- grep(1, temp, value = TRUE)
dos <- grep(2, temp, value = TRUE)
tres <- grep(3, temp, value = TRUE)
cuatro <- grep(4, temp, value = TRUE)
cinco <- grep(5, temp, value = TRUE)
info <- table(c(uno, dos, tres, cuatro, cinco))
print("1 - No produce ningun efecto")
print("2 - El efecto no es notable")
print("3 - Levemente efectivo")
print("4 - Efecto notable")
print("5 - Efecto significativo")
xx <- barplot(info,main="¿Qué tan efectivo cree son los tratamientos alternativos de achiote?", col=color.ramp, cex.lab=2)
info
```
#### Justification for the belief level
```
text <- toupper(as.character(levels(achiote$Justificacion_1)))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(text = text)
custom_stop_words <- bind_rows(stop_words,
tibble(word = c(tm::stopwords("spanish"), "dolor", "estomago", "infusion", "colicos"),
lexicon = "custom"))
text_df %>%
unnest_tokens(ngram, text, token = "ngrams", n = 2) %>%
count(ngram, sort = TRUE)
```
#### Do you know another name for achiote?
```
info <- table(achiote$Q5)
xx <- barplot(info,main="¿Conoce usted otro nombre para el achiote?", col=color.ramp, cex.lab=2)
info
```
#### What uses of achiote have you heard of? What uses have you given achiote?
```
text <- toupper(as.character(achiote$Q6))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(text = text)
text_df <- text_df %>%
unnest_tokens(word, text) %>%
count(word, sort = TRUE)
text_df %>%
filter(n > 1) %>%
mutate(word = reorder(word, n)) %>%
ggplot(aes(word, n)) +
geom_col(fill = "#AFC8F5") +
labs(x = "Usos", y = "Ocurrencias") +
coord_flip()
text_df
```
#### Applications of achiote known to respondents
```
text <- toupper(as.character(achiote$Q7))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(text = text)
custom_stop_words <- bind_rows(stop_words,
tibble(word = c(tm::stopwords("spanish"), "quita", "aumenta", "materna", "flujo", "achiote", "aplicacion", "conozco", "consumo", "control", "solar", "mosquitos"),
lexicon = "custom"))
text_df <- text_df %>%
unnest_tokens(word, text) %>%
anti_join(custom_stop_words) %>%
count(word, sort = TRUE)
text_df %>%
filter(n > 1) %>%
mutate(word = reorder(word, n)) %>%
ggplot(aes(word, n)) +
geom_col(fill = "#9299DE") +
labs(x = "Aplicaciones", y = "Ocurrencias") +
coord_flip()
text_df
```
#### How often do you use achiote in any of its presentations?
```
info <- table(achiote$Q8)
xx <- barplot(info,main="¿Con qué frecuencia utiliza el achiote en cualquiera de sus presentaciones?", col=color.ramp, cex.lab=2)
info
```
#### How do you obtain achiote?
```
text <- toupper(as.character(achiote$Q9))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(text = text)
custom_stop_words <- bind_rows(stop_words,
tibble(word = c(tm::stopwords("spanish"), "conveniencia"),
lexicon = "custom"))
text_df <- text_df %>%
unnest_tokens(word, text) %>%
anti_join(custom_stop_words) %>%
count(word, sort = TRUE)
text_df %>%
filter(n > 11) %>%
mutate(word = reorder(word, n)) %>%
ggplot(aes(word, n)) +
geom_col(fill = "#9299DE") +
labs(x = "Lugar obtencion", y = "Ocurrencias") +
coord_flip()
text_df
```
#### Are you aware that achiote can be found in convenience stores under the spices section?
```
info <- table(achiote$Q10)
xx <- barplot(info,main="¿conseguir achiote en los centro de conveniencia en especias?", col=color.ramp, cex.lab=2)
info
```
#### Select which part of the Bixa orellana L. (achiote) plant is used to obtain the commonly marketed red powder
```
info <- table(achiote$Q11)
xx <- barplot(info,
main="qué parte se utiliza para obtener el polvo rojo comúnmente comercializado",
col=color.ramp,
cex.lab=2)
info
```
#### Do you know any other food-dyeing plant?
```
text <- toupper(as.character(achiote$Q12))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(text = text)
custom_stop_words <- bind_rows(stop_words,
tibble(word = c(tm::stopwords("spanish"), "otra", "no","planta","tintorea", "de", "cascara", "conozco"),
lexicon = "custom"))
text_df <- text_df %>%
unnest_tokens(word, text) %>%
anti_join(custom_stop_words) %>%
count(word, sort = TRUE)
text_df %>%
mutate(word = reorder(word, n)) %>%
ggplot(aes(word, n)) +
geom_col(fill = "#D9C089") +
labs(x = "Planta", y = "Veces mencionada") +
coord_flip()
text_df
```
#### Have you used any food-dyeing plant?
```
text <- toupper(as.character(achiote$Q13))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(text = text)
custom_stop_words <- bind_rows(stop_words,
tibble(word = c(tm::stopwords("spanish"), "curtido", "hirviendo","40", "agua","anteriores","azul","bebidas","cocinar","color","colorear","dar","ensaladas","enchiladas","material","menos","min","morada","morado","obtiene","puede","textiles","usarse","vertiendo","viena"),
lexicon = "custom"))
text_df <- text_df %>%
unnest_tokens(word, text) %>%
anti_join(custom_stop_words) %>%
count(word, sort = TRUE)
text_df %>%
mutate(word = reorder(word, n)) %>%
ggplot(aes(word, n)) +
geom_col(fill = "#8EF0D0") +
labs(x = "Planta", y = "Veces mencionada") +
coord_flip()
text_df
```
#### What preparation methods do you know for using achiote as medicine?
```
text <- toupper(as.character(achiote$Q14))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(text = text)
custom_stop_words <- bind_rows(stop_words,
tibble(word = c(tm::stopwords("spanish"), "conozco","preparacion"),
lexicon = "custom"))
text_df <- text_df %>%
unnest_tokens(word, text) %>%
anti_join(custom_stop_words) %>%
count(word, sort = TRUE)
text_df %>%
mutate(word = reorder(word, n)) %>%
ggplot(aes(word, n)) +
geom_col(fill = "#F0B081") +
labs(x = "Preparacion", y = "Veces mencionada") +
coord_flip()
text_df
```
#### How willing are you to use achiote as a substitute for Western medicine?
```
temp <- as.character(achiote$Q15)
uno <- grep(1, temp, value = TRUE)
dos <- grep(2, temp, value = TRUE)
tres <- grep(3, temp, value = TRUE)
cuatro <- grep(4, temp, value = TRUE)
cinco <- grep(5, temp, value = TRUE)
info <- table(c(uno, dos, tres, cuatro, cinco))
print("1 - No lo utilizaria")
print("2 - Lo utilizaría como último recurso")
print("3 - Lo utilizaría")
print("4 - Totalmente de acuerdo en utilizarlo")
print("5 - Promuevo el uso del achiote como alternativa")
xx <- barplot(info,main="¿utilizar el achiote como sustituto de la medicina occidental?", col=color.ramp, cex.lab=2)
info
```
#### Justification: how willing are you to use achiote as a substitute for Western medicine?
```
text <- toupper(as.character(levels(achiote$Justificacion_2)))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(text = text)
text_df %>%
unnest_tokens(ngram, text, token = "ngrams", n = 3) %>%
count(ngram, sort = TRUE)
```
#### Ways in which you would agree to administer a medicine made from achiote
```
text <- toupper(as.character(achiote$Q16))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(text = text)
custom_stop_words <- bind_rows(stop_words,
tibble(word = c(tm::stopwords("spanish"), "aplicacion","anteriores","maneras","acuerdo"),
lexicon = "custom"))
text_df <- text_df %>%
unnest_tokens(word, text) %>%
anti_join(custom_stop_words) %>%
count(word, sort = TRUE)
text_df %>%
mutate(word = reorder(word, n)) %>%
ggplot(aes(word, n)) +
geom_col(fill = "#BCF048") +
labs(x = "Planta", y = "Veces mencionada") +
coord_flip()
text_df
```
#### How effective are treatments with achiote?
```
temp <- as.character(achiote$Q17)
uno <- grep(1, temp, value = TRUE)
dos <- grep(2, temp, value = TRUE)
tres <- grep(3, temp, value = TRUE)
cuatro <- grep(4, temp, value = TRUE)
cinco <- grep(5, temp, value = TRUE)
info <- table(c(uno, dos, tres, cuatro, cinco))
print("1 - No produce ningun efecto")
print("2 - El efecto no es notable")
print("3 - Levemente efectivo")
print("4 - Efecto notable")
print("5 - Efecto significativo")
xx <- barplot(info,main="¿Qué tan efectivo cree son los tratamientos alternativos de achiote?", col=color.ramp)
info
```
#### Justification for the achiote effectiveness question
```
text <- toupper(as.character(levels(achiote$Justificacion_3)))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(text = text)
custom_stop_words <- bind_rows(stop_words,
tibble(word = c(tm::stopwords("spanish")),
lexicon = "custom"))
text_df %>%
unnest_tokens(ngram, text, token = "ngrams", n = 3) %>%
count(ngram, sort = TRUE)
```
#### Survey completion times
```
# fastest response
min(achiote$tiempo_terminar)
# mean completion time
mean(achiote$tiempo_terminar)
# slowest response
max(achiote$tiempo_terminar)
```
### Split into knowers/enthusiasts and non-knowers
```
A <- split(achiote, achiote$Q2)
A_no <- A$No
B <- split(A_no, A_no$Q3, drop = TRUE)
conocedores <- rbind(A$Si, B$Si)
no_conocedores <- B$No
str(conocedores)
str(no_conocedores)
```
### Split into quantitative and qualitative variables
```
conocedores_cuanti <- conocedores[c("Procedencia", "Edad", "Sexo", "Q5", "Q8", "Q10", "Q11", "Q15", "Q17")]
conocedores_cuali <- conocedores[c("Municipio", "Q6", "Q7", "Q9", "Q12", "Q13", "Q14", "Q16")]
conocedores_cuanti$Procedencia <- as.numeric(conocedores_cuanti$Procedencia)
conocedores_cuanti$Edad <- as.numeric(conocedores_cuanti$Edad)
conocedores_cuanti$Sexo <- as.numeric(conocedores_cuanti$Sexo)
conocedores_cuanti$Q5 <- as.numeric(conocedores_cuanti$Q5)
conocedores_cuanti$Q8 <- as.numeric(conocedores_cuanti$Q8)
conocedores_cuanti$Q10 <- as.numeric(conocedores_cuanti$Q10)
conocedores_cuanti$Q11 <- as.numeric(conocedores_cuanti$Q11)
corr_cono <- cor(data.matrix(conocedores_cuanti), method = "pearson") # cor() takes a single method, not a vector
corr_cono
corrplot(corr_cono, type = "upper", order = "hclust",
tl.col = "black", tl.srt = 45)
cono_cuanti <- conocedores_cuanti
```
### Clustering
```
conocedores_cuanti <- data.matrix(conocedores_cuanti)
conocedores_cuanti <- na.omit(conocedores_cuanti) # listwise deletion of missing
conocedores_cuanti <- scale(conocedores_cuanti) # standardize variables
# choose the number of clusters (elbow method)
wss <- (nrow(conocedores_cuanti)-1)*sum(apply(conocedores_cuanti,2,var))
for (i in 2:15) wss[i] <- sum(kmeans(conocedores_cuanti,
centers=i)$withinss)
plot(1:15, wss, type="b", xlab="Number of Clusters",
ylab="Within groups sum of squares")
## K-Means Cluster Analysis
fit <- kmeans(conocedores_cuanti, 4) # 4-cluster solution
# get cluster means
aggregate(conocedores_cuanti,by=list(fit$cluster),FUN=mean)
# plot the clusters
clusplot(conocedores_cuanti, fit$cluster, color=TRUE, shade=TRUE,
labels=2, lines=0)
# append cluster assignment
cono_cuanti <- data.frame(cono_cuanti, fit$cluster)
fit
B <- split(cono_cuanti, cono_cuanti$fit.cluster)
group_1 <- B$"1"
temp <- as.character(group_1$Q17)
info <- table(temp)
print("1 - No produce ningun efecto")
print("2 - El efecto no es notable")
print("3 - Levemente efectivo")
print("4 - Efecto notable")
print("5 - Efecto significativo")
xx <- barplot(info,main="¿Qué tan efectivo cree son los tratamientos alternativos de achiote?", col=color.ramp)
info
fit$cluster
```
### Split by age range
```
Y <- split(achiote, achiote$Edad)
quince35 <- Y$"15-35"
tresseis55 <- Y$"36-55"
cincoseis75 <- Y$"56-75"
str(tresseis55)
```
### Ages 15-35
```
text <- toupper(as.character(quince35$Q7))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(text = text)
custom_stop_words <- bind_rows(stop_words,
tibble(word = c(tm::stopwords("spanish"), "quita", "aumenta", "materna", "flujo", "achiote", "aplicacion", "conozco", "consumo", "control", "solar", "mosquitos"),
lexicon = "custom"))
text_df <- text_df %>%
unnest_tokens(word, text) %>%
anti_join(custom_stop_words) %>%
count(word, sort = TRUE)
text_df %>%
filter(n > 1) %>%
mutate(word = reorder(word, n)) %>%
ggplot(aes(word, n)) +
geom_col(fill = "#9299DE") +
labs(x = "Aplicaciones", y = "Ocurrencias") +
coord_flip()
text_df
info <- table(quince35$Q10)
xx <- barplot(info,main="¿conseguir achiote en los centro de conveniencia en especias?", col=color.ramp, cex.lab=2)
info
info <- table(quince35$Q11)
xx <- barplot(info,
main="qué parte se utiliza para obtener el polvo rojo comúnmente comercializado",
col=color.ramp,
cex.lab=2)
info
```
### Ages 36-55
```
str(tresseis55)
text <- toupper(as.character(tresseis55$Q7))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(text = text)
custom_stop_words <- bind_rows(stop_words,
tibble(word = c(tm::stopwords("spanish"), "quita", "aumenta", "materna", "flujo", "achiote", "aplicacion", "conozco", "consumo", "control", "solar", "mosquitos"),
lexicon = "custom"))
text_df <- text_df %>%
unnest_tokens(word, text) %>%
anti_join(custom_stop_words) %>%
count(word, sort = TRUE)
text_df %>%
filter(n > 1) %>%
mutate(word = reorder(word, n)) %>%
ggplot(aes(word, n)) +
geom_col(fill = "#9299DE") +
labs(x = "Aplicaciones", y = "Ocurrencias") +
coord_flip()
text_df
info <- table(tresseis55$Q10)
xx <- barplot(info,main="¿conseguir achiote en los centro de conveniencia en especias?", col=color.ramp, cex.lab=2)
info
info <- table(tresseis55$Q11)
xx <- barplot(info,
main="qué parte se utiliza para obtener el polvo rojo comúnmente comercializado",
col=color.ramp,
cex.lab=2)
info
```
### Ages 56-75
```
text <- toupper(as.character(cincoseis75$Q7))
text <- chartr("ÁÉÍÓÚ", "AEIOU", text)
text_df <- tibble(text = text)
custom_stop_words <- bind_rows(stop_words,
tibble(word = c(tm::stopwords("spanish"), "quita", "aumenta", "materna", "flujo", "achiote", "aplicacion", "conozco", "consumo", "control", "solar", "mosquitos"),
lexicon = "custom"))
text_df <- text_df %>%
unnest_tokens(word, text) %>%
anti_join(custom_stop_words) %>%
count(word, sort = TRUE)
text_df %>%
filter(n > 1) %>%
mutate(word = reorder(word, n)) %>%
ggplot(aes(word, n)) +
geom_col(fill = "#9299DE") +
labs(x = "Aplicaciones", y = "Ocurrencias") +
coord_flip()
text_df
info <- table(cincoseis75$Q10)
xx <- barplot(info,main="¿conseguir achiote en los centro de conveniencia en especias?", col=color.ramp, cex.lab=2)
info
info <- table(cincoseis75$Q11)
xx <- barplot(info,
main="qué parte se utiliza para obtener el polvo rojo comúnmente comercializado",
col=color.ramp,
cex.lab=2)
info
```
### Variable correlation
```
colnames(achiote)
```
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/figures/PDSH-cover-small.png?raw=1">
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
*The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
# In Depth: Principal Component Analysis
Up until now, we have been looking in depth at supervised learning estimators: those estimators that predict labels based on labeled training data.
Here we begin looking at several unsupervised estimators, which can highlight interesting aspects of the data without reference to any known labels.
In this section, we explore what is perhaps one of the most broadly used of unsupervised algorithms, principal component analysis (PCA).
PCA is fundamentally a dimensionality reduction algorithm, but it can also be useful as a tool for visualization, for noise filtering, for feature extraction and engineering, and much more.
After a brief conceptual discussion of the PCA algorithm, we will see a couple examples of these further applications.
We begin with the standard imports:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
```
## Introducing Principal Component Analysis
Principal component analysis is a fast and flexible unsupervised method for dimensionality reduction in data, which we saw briefly in [Introducing Scikit-Learn](05.02-Introducing-Scikit-Learn.ipynb).
Its behavior is easiest to visualize by looking at a two-dimensional dataset.
Consider the following 200 points:
```
rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T
plt.scatter(X[:, 0], X[:, 1])
plt.axis('equal');
```
By eye, it is clear that there is a nearly linear relationship between the x and y variables.
This is reminiscent of the linear regression data we explored in [In Depth: Linear Regression](05.06-Linear-Regression.ipynb), but the problem setting here is slightly different: rather than attempting to *predict* the y values from the x values, the unsupervised learning problem attempts to learn about the *relationship* between the x and y values.
In principal component analysis, this relationship is quantified by finding a list of the *principal axes* in the data, and using those axes to describe the dataset.
Using Scikit-Learn's ``PCA`` estimator, we can compute this as follows:
```
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
```
The fit learns some quantities from the data, most importantly the "components" and "explained variance":
```
print(pca.components_)
print(pca.explained_variance_)
```
To see what these numbers mean, let's visualize them as vectors over the input data, using the "components" to define the direction of the vector, and the "explained variance" to define the squared-length of the vector:
```
def draw_vector(v0, v1, ax=None):
ax = ax or plt.gca()
arrowprops=dict(arrowstyle='->',
linewidth=2,
shrinkA=0, shrinkB=0)
ax.annotate('', v1, v0, arrowprops=arrowprops)
# plot data
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
draw_vector(pca.mean_, pca.mean_ + v)
plt.axis('equal');
```
These vectors represent the *principal axes* of the data, and the length of the vector is an indication of how "important" that axis is in describing the distribution of the data—more precisely, it is a measure of the variance of the data when projected onto that axis.
The projections of each data point onto the principal axes are the "principal components" of the data.
If we plot these principal components beside the original data, we see the plots shown here:

[figure source in Appendix](06.00-Figure-Code.ipynb#Principal-Components-Rotation)
This transformation from data axes to principal axes is an *affine transformation*, which basically means it is composed of a translation, rotation, and uniform scaling.
While this algorithm to find principal components may seem like just a mathematical curiosity, it turns out to have very far-reaching applications in the world of machine learning and data exploration.
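As a quick numerical check (a sketch reusing the same random dataset as above, not part of the original text), the fitted components form an orthonormal basis, and `transform` is exactly centering followed by projection onto that basis:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T

pca = PCA(n_components=2).fit(X)

# the principal axes are orthonormal: components_ @ components_.T == identity
print(np.allclose(pca.components_ @ pca.components_.T, np.eye(2)))  # True

# transform(X) is just (X - mean_) projected onto the components
manual = (X - pca.mean_) @ pca.components_.T
print(np.allclose(manual, pca.transform(X)))  # True
```

This is the "translation plus rotation" view of the affine transformation made concrete.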
### PCA as dimensionality reduction
Using PCA for dimensionality reduction involves zeroing out one or more of the smallest principal components, resulting in a lower-dimensional projection of the data that preserves the maximal data variance.
Here is an example of using PCA as a dimensionality reduction transform:
```
pca = PCA(n_components=1)
pca.fit(X)
X_pca = pca.transform(X)
print("original shape: ", X.shape)
print("transformed shape:", X_pca.shape)
```
The transformed data has been reduced to a single dimension.
To understand the effect of this dimensionality reduction, we can perform the inverse transform of this reduced data and plot it along with the original data:
```
X_new = pca.inverse_transform(X_pca)
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
plt.scatter(X_new[:, 0], X_new[:, 1], alpha=0.8)
plt.axis('equal');
```
The light points are the original data, while the dark points are the projected version.
This makes clear what a PCA dimensionality reduction means: the information along the least important principal axis or axes is removed, leaving only the component(s) of the data with the highest variance.
The fraction of variance that is cut out (proportional to the spread of points about the line formed in this figure) is roughly a measure of how much "information" is discarded in this reduction of dimensionality.
This reduced-dimension dataset is in some senses "good enough" to encode the most important relationships between the points: despite reducing the dimension of the data by 50%, the overall relationships between the data points are mostly preserved.
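The retained fraction can be read off directly from the estimator's `explained_variance_ratio_` attribute (a sketch with the same toy data; for this dataset the single retained axis accounts for well over 90% of the variance):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T

pca = PCA(n_components=1).fit(X)
# fraction of the total variance captured by the one retained component
print(pca.explained_variance_ratio_[0])
```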
### PCA for visualization: Hand-written digits
The usefulness of the dimensionality reduction may not be entirely apparent in only two dimensions, but becomes much more clear when looking at high-dimensional data.
To see this, let's take a quick look at the application of PCA to the digits data we saw in [In-Depth: Decision Trees and Random Forests](05.08-Random-Forests.ipynb).
We start by loading the data:
```
from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape
```
Recall that the data consists of 8×8 pixel images, meaning that they are 64-dimensional.
To gain some intuition into the relationships between these points, we can use PCA to project them to a more manageable number of dimensions, say two:
```
pca = PCA(2) # project from 64 to 2 dimensions
projected = pca.fit_transform(digits.data)
print(digits.data.shape)
print(projected.shape)
```
We can now plot the first two principal components of each point to learn about the data:
```
plt.scatter(projected[:, 0], projected[:, 1],
c=digits.target, edgecolor='none', alpha=0.5,
cmap=plt.get_cmap('Spectral', 10))  # the colormap is named 'Spectral' (capitalized) in current matplotlib
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();
```
Recall what these components mean: the full data is a 64-dimensional point cloud, and these points are the projection of each data point along the directions with the largest variance.
Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits in two dimensions, and have done this in an unsupervised manner—that is, without reference to the labels.
### What do the components mean?
We can go a bit further here, and begin to ask what the reduced dimensions *mean*.
This meaning can be understood in terms of combinations of basis vectors.
For example, each image in the training set is defined by a collection of 64 pixel values, which we will call the vector $x$:
$$
x = [x_1, x_2, x_3 \cdots x_{64}]
$$
One way we can think about this is in terms of a pixel basis.
That is, to construct the image, we multiply each element of the vector by the pixel it describes, and then add the results together to build the image:
$$
{\rm image}(x) = x_1 \cdot{\rm (pixel~1)} + x_2 \cdot{\rm (pixel~2)} + x_3 \cdot{\rm (pixel~3)} \cdots x_{64} \cdot{\rm (pixel~64)}
$$
One way we might imagine reducing the dimension of this data is to zero out all but a few of these basis vectors.
For example, if we use only the first eight pixels, we get an eight-dimensional projection of the data, but it is not very reflective of the whole image: we've thrown out nearly 90% of the pixels!

[figure source in Appendix](06.00-Figure-Code.ipynb#Digits-Pixel-Components)
The upper row of panels shows the individual pixels, and the lower row shows the cumulative contribution of these pixels to the construction of the image.
Using only eight of the pixel-basis components, we can only construct a small portion of the 64-pixel image.
Were we to continue this sequence and use all 64 pixels, we would recover the original image.
But the pixel-wise representation is not the only choice of basis. We can also use other basis functions, which each contain some pre-defined contribution from each pixel, and write something like
$$
image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots
$$
PCA can be thought of as a process of choosing optimal basis functions, such that adding together just the first few of them is enough to suitably reconstruct the bulk of the elements in the dataset.
The principal components, which act as the low-dimensional representation of our data, are simply the coefficients that multiply each of the elements in this series.
This figure shows a similar depiction of reconstructing this digit using the mean plus the first eight PCA basis functions:

[figure source in Appendix](06.00-Figure-Code.ipynb#Digits-PCA-Components)
Unlike the pixel basis, the PCA basis allows us to recover the salient features of the input image with just a mean plus eight components!
The amount of each pixel in each component is the corollary of the orientation of the vector in our two-dimensional example.
This is the sense in which PCA provides a low-dimensional representation of the data: it discovers a set of basis functions that are more efficient than the native pixel-basis of the input data.
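We can make that efficiency concrete with a small comparison (a sketch, not from the original text): reconstruct one digit from the mean plus eight PCA components, and compare the error against keeping only the first eight raw pixels:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()
pca = PCA(n_components=8).fit(digits.data)

# reconstruction from the mean plus 8 PCA basis functions
recon_pca = pca.inverse_transform(pca.transform(digits.data[:1]))

# "reconstruction" from the first 8 raw pixels only (the pixel basis)
recon_pix = np.zeros_like(digits.data[:1])
recon_pix[0, :8] = digits.data[0, :8]

err_pca = np.linalg.norm(digits.data[0] - recon_pca[0])
err_pix = np.linalg.norm(digits.data[0] - recon_pix[0])
print(err_pca < err_pix)  # the PCA basis recovers far more of the image
```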
### Choosing the number of components
A vital part of using PCA in practice is the ability to estimate how many components are needed to describe the data.
This can be determined by looking at the cumulative *explained variance ratio* as a function of the number of components:
```
pca = PCA().fit(digits.data)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
```
This curve quantifies how much of the total, 64-dimensional variance is contained within the first $N$ components.
For example, we see that with the digits the first 10 components contain approximately 75% of the variance, while you need around 50 components to describe close to 100% of the variance.
Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations.
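Rather than reading the curve by eye, `PCA` also accepts a float between 0 and 1 as `n_components` and chooses the component count for you (a sketch; the exact count may vary slightly across scikit-learn versions):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()
# a fractional n_components asks PCA to keep enough components to
# reach that cumulative explained-variance ratio
pca = PCA(0.90).fit(digits.data)
print(pca.n_components_)  # roughly 20 components for 90% of the variance
```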
## PCA as Noise Filtering
PCA can also be used as a filtering approach for noisy data.
The idea is this: any components with variance much larger than the effect of the noise should be relatively unaffected by the noise.
So if you reconstruct the data using just the largest subset of principal components, you should be preferentially keeping the signal and throwing out the noise.
Let's see how this looks with the digits data.
First we will plot several of the input noise-free data:
```
def plot_digits(data):
fig, axes = plt.subplots(4, 10, figsize=(10, 4),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
ax.imshow(data[i].reshape(8, 8),
cmap='binary', interpolation='nearest',
clim=(0, 16))
plot_digits(digits.data)
```
Now let's add some random noise to create a noisy dataset, and re-plot it:
```
np.random.seed(42)
noisy = np.random.normal(digits.data, 4)
plot_digits(noisy)
```
It's clear by eye that the images are noisy, and contain spurious pixels.
Let's train a PCA on the noisy data, requesting that the projection preserve 50% of the variance:
```
pca = PCA(0.50).fit(noisy)
pca.n_components_
```
Here 50% of the variance amounts to 12 principal components.
Now we compute these components, and then use the inverse of the transform to reconstruct the filtered digits:
```
components = pca.transform(noisy)
filtered = pca.inverse_transform(components)
plot_digits(filtered)
```
This signal preserving/noise filtering property makes PCA a very useful feature selection routine—for example, rather than training a classifier on very high-dimensional data, you might instead train the classifier on the lower-dimensional representation, which will automatically serve to filter out random noise in the inputs.
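To make this concrete, here is a minimal sketch of that workflow on the Scikit-Learn digits data; the choice of a Gaussian naive Bayes classifier is purely illustrative:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

digits = load_digits()

# Reduce the 64 pixel features to 20 principal components before classifying;
# the projection discards much of the pixel-level noise.
model = make_pipeline(PCA(n_components=20), GaussianNB())
scores = cross_val_score(model, digits.data, digits.target, cv=5)
print(scores.mean())
```

The classifier never sees the raw pixels, only the denoised low-dimensional projection.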
## Example: Eigenfaces
Earlier we explored an example of using a PCA projection as a feature selector for facial recognition with a support vector machine (see [In-Depth: Support Vector Machines](05.07-Support-Vector-Machines.ipynb)).
Here we will take a look back and explore a bit more of what went into that.
Recall that we were using the Labeled Faces in the Wild dataset made available through Scikit-Learn:
```
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people(min_faces_per_person=60)
print(faces.target_names)
print(faces.images.shape)
```
Let's take a look at the principal axes that span this dataset.
Because this is a large dataset, we will use the randomized solver in ``PCA``—it uses a randomized method to approximate the first $N$ principal components much more quickly than the full SVD, and thus is very useful for high-dimensional data (here, a dimensionality of nearly 3,000). (In older Scikit-Learn releases this solver lived in a separate ``RandomizedPCA`` class, which has since been removed.)
We will take a look at the first 150 components:
```
from sklearn.decomposition import PCA
pca = PCA(150, svd_solver='randomized')
pca.fit(faces.data)
```
In this case, it can be interesting to visualize the images associated with the first several principal components (these components are technically known as "eigenvectors,"
so these types of images are often called "eigenfaces").
As you can see in this figure, they are as creepy as they sound:
```
fig, axes = plt.subplots(3, 8, figsize=(9, 4),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
ax.imshow(pca.components_[i].reshape(62, 47), cmap='bone')
```
The results are very interesting, and give us insight into how the images vary: for example, the first few eigenfaces (from the top left) seem to be associated with the angle of lighting on the face, and later principal vectors seem to be picking out certain features, such as eyes, noses, and lips.
Let's take a look at the cumulative variance of these components to see how much of the data information the projection is preserving:
```
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
```
We see that these 150 components account for just over 90% of the variance.
That would lead us to believe that using these 150 components, we would recover most of the essential characteristics of the data.
To make this more concrete, we can compare the input images with the images reconstructed from these 150 components:
```
# Compute the components and projected faces
pca = PCA(150, svd_solver='randomized').fit(faces.data)
components = pca.transform(faces.data)
projected = pca.inverse_transform(components)
# Plot the results
fig, ax = plt.subplots(2, 10, figsize=(10, 2.5),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i in range(10):
ax[0, i].imshow(faces.data[i].reshape(62, 47), cmap='binary_r')
ax[1, i].imshow(projected[i].reshape(62, 47), cmap='binary_r')
ax[0, 0].set_ylabel('full-dim\ninput')
ax[1, 0].set_ylabel('150-dim\nreconstruction');
```
The top row here shows the input images, while the bottom row shows the reconstruction of the images from just 150 of the ~3,000 initial features.
This visualization makes clear why the PCA feature selection used in [In-Depth: Support Vector Machines](05.07-Support-Vector-Machines.ipynb) was so successful: although it reduces the dimensionality of the data by nearly a factor of 20, the projected images contain enough information that we might, by eye, recognize the individuals in the image.
What this means is that our classification algorithm needs to be trained on 150-dimensional data rather than 3,000-dimensional data, which depending on the particular algorithm we choose, can lead to a much more efficient classification.
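The shapes involved in this compression are worth spelling out. A small sketch using random data as a stand-in for the face images (so it runs without downloading LFW; the array sizes mirror the real dataset):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
faces_like = rng.randn(300, 62 * 47)   # 300 "images" of 2,914 pixels each

pca = PCA(n_components=150, svd_solver='randomized', random_state=0).fit(faces_like)
projected = pca.transform(faces_like)             # down to 150 numbers per image
reconstructed = pca.inverse_transform(projected)  # back to pixel space

print(projected.shape, reconstructed.shape)
```

Each image is compressed by nearly a factor of 20, and the reconstruction lives back in the original 2,914-dimensional pixel space.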
## Principal Component Analysis Summary
In this section we have discussed the use of principal component analysis for dimensionality reduction, for visualization of high-dimensional data, for noise filtering, and for feature selection within high-dimensional data.
Because of the versatility and interpretability of PCA, it has been shown to be effective in a wide variety of contexts and disciplines.
Given any high-dimensional dataset, I tend to start with PCA in order to visualize the relationship between points (as we did with the digits), to understand the main variance in the data (as we did with the eigenfaces), and to understand the intrinsic dimensionality (by plotting the explained variance ratio).
Certainly PCA is not useful for every high-dimensional dataset, but it offers a straightforward and efficient path to gaining insight into high-dimensional data.
PCA's main weakness is that it tends to be highly affected by outliers in the data.
For this reason, many robust variants of PCA have been developed, many of which act to iteratively discard data points that are poorly described by the initial components.
Scikit-Learn contains a couple interesting variants on PCA, including the randomized solver we used earlier and ``SparsePCA``, both in the ``sklearn.decomposition`` submodule.
The randomized approach uses a non-deterministic method to quickly approximate the first few principal components in very high-dimensional data, while ``SparsePCA`` introduces a regularization term (see [In Depth: Linear Regression](05.06-Linear-Regression.ipynb)) that serves to enforce sparsity of the components.
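A small sketch of ``SparsePCA`` on synthetic data (the parameter values here are illustrative assumptions, not recommendations):

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.RandomState(0)
X = rng.randn(100, 30)

# The alpha parameter controls the strength of the L1 penalty:
# larger alpha drives more component loadings to exactly zero.
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(X)
sparsity = np.mean(spca.components_ == 0)
print(sparsity)   # fraction of exactly-zero loadings
```

Unlike ordinary PCA, many loadings come out exactly zero, which makes each component depend on only a handful of input features.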
In the following sections, we will look at other unsupervised learning methods that build on some of the ideas of PCA.
<!--NAVIGATION-->
< [In-Depth: Decision Trees and Random Forests](05.08-Random-Forests.ipynb) | [Contents](Index.ipynb) | [In-Depth: Manifold Learning](05.10-Manifold-Learning.ipynb) >
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.09-Principal-Component-Analysis.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for regions selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
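As an illustration of the color-selection idea, here is a NumPy sketch of the contract `cv2.inRange` implements (written without OpenCV so it stands alone; the threshold values are illustrative):

```python
import numpy as np

def in_range(img, lower, upper):
    """Return a binary mask (255 where every channel of img lies in
    [lower, upper], 0 elsewhere) -- the same contract as cv2.inRange."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    inside = np.all((img >= lower) & (img <= upper), axis=-1)
    return (inside * 255).astype(np.uint8)

# A tiny 2x2 RGB "image": two near-white pixels, two darker ones.
img = np.array([[[250, 250, 250], [100, 100, 100]],
                [[210, 205, 200], [30, 30, 30]]], dtype=np.uint8)

# Keep only near-white pixels (candidate white lane markings).
mask = in_range(img, (200, 200, 200), (255, 255, 255))
print(mask)   # → [[255 0], [255 0]]
```

Combining such a mask with `cv2.bitwise_and` keeps only the lane-colored pixels before edge detection.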
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
#return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
if x2 != x1:
theta = np.arctan((y2 - y1) / (x2 - x1)) * 180 / np.pi
"""
theta < 0 - left line
theta > 0 - right line
"""
if 20. < abs(theta) < 40.:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, ρ, Θ, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, ρ, Θ, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α = 0.8, β = 1., γ = 0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
```
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
os.listdir("test_images/")
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
```
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
gray = grayscale(image)
blur_gray = gaussian_blur(
img = gray,
kernel_size = 5)
edges = canny(
img = blur_gray,
low_threshold = 100,
high_threshold = 200)
imshape = image.shape
masked_image = region_of_interest(
img = edges,
vertices = np.array([[ (0 , imshape[0] ),
(imshape[1] / 2, 1.15 * imshape[0] / 2 ),
(imshape[1] / 2, 1.15 * imshape[0] / 2 ),
(imshape[1] , imshape[0] )
]], dtype=np.int32))
line_img = hough_lines(
img = masked_image,
ρ = 1,
Θ = np.pi/180,
threshold = 1,
min_line_len = 20,
max_line_gap = 10)
lines_edges = weighted_img(
img = line_img,
initial_img = image
)
return lines_edges
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
test_folder = "test_images/"
output_folder = "test_images_output/"
if not os.path.exists(output_folder):
os.mkdir(output_folder)
for name in os.listdir(test_folder):
image = cv2.imread(os.path.join(test_folder, name))
cv2.imwrite(os.path.join(output_folder, name), process_image(image))
```
## Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
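One possible averaging/extrapolation scheme is sketched below in plain NumPy (the slope threshold and the fit-x-as-a-function-of-y choice are assumptions, not the only valid design): group segments by slope sign, fit one line per side with `np.polyfit`, then evaluate it at the bottom of the image and at the top of the region of interest.

```python
import numpy as np

def average_lane_lines(lines, y_bottom, y_top):
    """Collapse Hough segments into one (x1, y1, x2, y2) line per side.

    lines: iterable of (x1, y1, x2, y2) segments.
    y_bottom / y_top: image rows to extrapolate each fitted line to.
    Returns a dict with 'left' and/or 'right' entries.
    """
    sides = {'left': ([], []), 'right': ([], [])}
    for x1, y1, x2, y2 in lines:
        if x2 == x1:
            continue                      # skip vertical segments
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < 0.3:
            continue                      # skip near-horizontal noise
        side = 'left' if slope < 0 else 'right'   # image y-axis points down
        sides[side][0].extend([x1, x2])
        sides[side][1].extend([y1, y2])
    result = {}
    for side, (xs, ys) in sides.items():
        if len(xs) >= 4:
            # Fit x as a function of y, so extrapolating to fixed rows is easy.
            fit = np.polyfit(ys, xs, 1)
            x_at = lambda y, f=fit: int(round(np.polyval(f, y)))
            result[side] = (x_at(y_bottom), y_bottom, x_at(y_top), y_top)
    return result

# Two left-lane segments and two right-lane segments:
segments = [(100, 540, 200, 440), (120, 520, 180, 460),
            (700, 540, 600, 440), (680, 520, 620, 460)]
print(average_lane_lines(segments, y_bottom=540, y_top=330))
```

Inside `draw_lines`, you would call something like this and then draw the two returned endpoints with `cv2.line` instead of drawing every raw segment.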
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
# Santander Value Prediction Challenge
According to Epsilon research, 80% of customers are more likely to do business with you if you provide **personalized service**. Banking is no exception.
The digitalization of everyday lives means that customers expect services to be delivered in a personalized and timely manner… and often before they've even realized they need the service. In their 3rd Kaggle competition, Santander Group aims to go a step beyond recognizing that there is a need to provide a customer a financial service and **intends to determine the amount or value of the customer's transaction**. This means anticipating customer needs in a more concrete, but also simple and personal way. With so many choices for financial services, this need is greater now than ever before.
In this competition, **Santander Group is asking Kagglers to help them identify the value of transactions for each potential customer**. This is a first step that Santander needs to nail in order to personalize their services at scale.
The evaluation metric for this competition is Root Mean Squared Logarithmic Error. **RMSLE**
**You are provided with an anonymized dataset containing numeric feature variables, the numeric target column, and a string ID column.**
**The task is to predict the value of target column in the test set**
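Since the metric drives everything else, it helps to write it down. A minimal NumPy implementation of RMSLE (note that training on a log-transformed target with RMSE, as this notebook does, optimizes essentially the same quantity):

```python
import numpy as np

def rmsle(y_true, y_pred):
    """Root Mean Squared Logarithmic Error."""
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

y_true = np.array([100.0, 1000.0, 10000.0])
y_pred = np.array([110.0, 900.0, 12000.0])
print(rmsle(y_true, y_pred))
```

Because the error is computed on log-scaled values, being off by a fixed percentage costs roughly the same whether the target is 100 or 10,000.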
## Load Required Libraries
```
# #Python Libraries
import numpy as np
import scipy as sp
import pandas as pd
import statsmodels
import pandas_profiling
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import os
import sys
import time
import json
import random
import requests
import datetime
import missingno as msno
import math
import sys
import gc
import os
# #sklearn
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.ensemble import RandomForestRegressor
# #sklearn - preprocessing
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
# #sklearn - metrics
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
from sklearn.metrics import roc_auc_score
# #XGBoost & LightGBM
import xgboost as xgb
import lightgbm as lgb
# #Missing value imputation
from fancyimpute import KNN, MICE  # note: newer fancyimpute releases replaced MICE with IterativeImputer
# #Hyperparameter Optimization
from hyperopt.pyll.base import scope
from hyperopt.pyll.stochastic import sample
from hyperopt import STATUS_OK, Trials, fmin, hp, tpe
pd.options.display.max_columns = 150
```
## EDA
```
!ls ../
!ls ../data
df_train = pd.read_csv("../data/train.csv")
df_test = pd.read_csv("../data/test.csv")
df_train.shape
df_test.shape
df_train.head()
```
ID, target, everything else is anonymized
```
df_train.info()
df_test.info()
```
### Missing Data
```
df_train.isnull().sum(axis = 0).sum()
df_test.isnull().sum(axis = 0).sum()
```
Yes!! No missing data
### Distributions
```
sns.distplot(df_train['target'])
sns.distplot(np.log(1+df_train['target']))
```
After the log transform, the distribution looks much more normal.
### Hypothesis: do any of the columns have a constant value?
This is worth checking, since the dataset is small and has fewer rows than columns.
```
constant_train = df_train.loc[:, (df_train == df_train.iloc[0]).all()].columns.tolist()
constant_test = df_test.loc[:, (df_test == df_test.iloc[0]).all()].columns.tolist()
len(constant_train)
len(constant_test)
```
There are 256 constant columns in the training dataset, but none in the test dataset. These constant columns are thus most likely an artifact of the way the train and test sets were constructed. Let's remove them from our train set, since they will not add any value.
```
columns_to_use = df_test.columns.tolist() # #Target variable is not considered
del columns_to_use[0] # #Remove 'ID'
columns_to_use = [x for x in columns_to_use if x not in constant_train] #Remove all 0 columns
len(columns_to_use)
```
### Measure of sparsity
```
((df_train[columns_to_use].values.flatten())==0).mean()
```
97% of values in the train set are zeros, indicating that it is a very sparse matrix
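Given that level of sparsity, a compressed sparse representation can save substantial memory; LightGBM's `Dataset` also accepts SciPy sparse matrices directly. A quick sketch on a small synthetic matrix with a similar zero fraction:

```python
import numpy as np
from scipy import sparse

rng = np.random.RandomState(0)
dense = rng.rand(1000, 500)
dense[dense < 0.97] = 0          # make ~97% of entries zero, like the train set

csr = sparse.csr_matrix(dense)
print(dense.nbytes)              # bytes for the dense float64 array
print(csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes)
```

The CSR copy stores only the nonzero values plus their indices, so at ~3% density it uses a small fraction of the dense array's memory.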
## Modelling
```
# #Log Transform the target variable
y = np.log(1+df_train.target.values)
X = lgb.Dataset(df_train[columns_to_use], y, feature_name = "auto")
```
### Model 1 - LightGBM (My Favourite :P)
```
params = {'boosting_type': 'gbdt',
'objective': 'regression',
'metric': 'rmse',
'learning_rate': 0.01,
'num_leaves': 100,
'feature_fraction': 0.4,
'bagging_fraction': 0.6,
'max_depth': 5,
'min_child_weight': 10}
clf = lgb.train(params,
X,
num_boost_round = 400,
verbose_eval=True)
preds = clf.predict(df_test[columns_to_use])
preds
sample_submission = pd.read_csv("../data/sample_submission.csv")
sample_submission.target = np.exp(preds)-1
sample_submission.to_csv('../submissions/model1_lightgbm_01.csv', index=False)
sample_submission.shape
nr_splits = 5
random_state = 1054
y_oof = np.zeros((y.shape[0]))
total_preds = 0
kf = KFold(n_splits=nr_splits, shuffle=True, random_state=random_state)
for i, (train_index, val_index) in enumerate(kf.split(y)):
print('Fitting fold', i+1, 'out of', nr_splits)
X_train, X_val = df_train[columns_to_use].iloc[train_index], df_train[columns_to_use].iloc[val_index]
y_train, y_val = y[train_index], y[val_index]
train = lgb.Dataset(X_train,y_train ,feature_name = "auto")
val = lgb.Dataset(X_val ,y_val ,feature_name = "auto")
clf = lgb.train(params,train,num_boost_round = 400,verbose_eval=True)
total_preds += clf.predict(df_test[columns_to_use])/nr_splits
pred_oof = clf.predict(X_val)
y_oof[val_index] = pred_oof
print('Fold error', np.sqrt(mean_squared_error(y_val, pred_oof)))
print('Total error', np.sqrt(mean_squared_error(y, y_oof)))
sample_submission.target = np.exp(total_preds)-1
sample_submission.to_csv('../submissions/model1_lightgbm_02.csv', index=False)
sample_submission.head()
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Save and restore models
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/keras/save_and_load"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Community translations are **best-effort**, so we cannot guarantee that they are accurate or reflect the latest [official English documentation](https://www.tensorflow.org/?hl=en). To help improve this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, please contact the [docs-ja@tensorflow.org mailing list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ja).
Model progress can be saved during and after training. This means a model can resume where it left off, avoiding long training times, and that you can share your model so others can reproduce your work. When publishing research models and techniques, most machine learning practitioners share:
* the code to create the model, and
* the trained weights, or parameters, for the model
Sharing this data helps others understand how the model works and try it for themselves with new data.
Caution: be careful with untrusted code; TensorFlow models are code. See [Using TensorFlow Securely](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md) for details.
### Options
There are different ways to save TensorFlow models, depending on the API you are using. This guide uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API for building and training models in TensorFlow. For other approaches, see the TensorFlow [Save and Restore](https://www.tensorflow.org/guide/saved_model) guide or [Saving in eager](https://www.tensorflow.org/guide/eager#object-based_saving).
## Setup
### Installs and imports
Install and import TensorFlow and its dependencies:
```
!pip install h5py pyyaml
```
### Get an example dataset
To demonstrate saving and loading weights while training a model, we will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). To speed things up, we will use only the first 1,000 examples:
```
from __future__ import absolute_import, division, print_function, unicode_literals
import os
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow import keras
tf.__version__
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
```
### Define a model
Let's define a simple model to demonstrate saving and loading weights.
```
# Returns a short sequential model
def create_model():
    model = tf.keras.models.Sequential([
        keras.layers.Dense(512, activation='relu', input_shape=(784,)),
        keras.layers.Dropout(0.2),
        keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
# Create a basic model instance
model = create_model()
model.summary()
```
## Save checkpoints during training
The primary use case is to automatically save checkpoints **during** and at **the end of** training. This way you can use a trained model without having to retrain it, or pick up training where you left off if the process was interrupted.
`tf.keras.callbacks.ModelCheckpoint` is the callback that performs this task. It takes several arguments to configure checkpointing.
### Checkpoint callback usage
Train the model, passing it the `ModelCheckpoint` callback:
```
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create a checkpoint callback
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
                                                 save_weights_only=True,
                                                 verbose=1)
model = create_model()
model.fit(train_images, train_labels, epochs=10,
          validation_data=(test_images, test_labels),
          callbacks=[cp_callback])  # pass the callback to training
# You may see warnings about saving the optimizer state.
# These warnings (and similar ones throughout this notebook)
# are there to discourage outdated usage and can be ignored.
```
This creates a collection of TensorFlow checkpoint files that are updated at the end of each epoch:
```
!ls {checkpoint_dir}
```
Create a new, untrained model. When restoring a model from weights only, you must have a model with the same architecture as the original. Since the architecture is the same, the weights can be shared even though it is a different **instance** of the model.
Now rebuild a fresh, untrained model and evaluate it on the test set. An untrained model will perform at chance level (10% accuracy or less):
```
model = create_model()
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Untrained model, accuracy: {:5.2f}%".format(100*acc))
```
Then load the weights from the checkpoint and re-evaluate:
```
model.load_weights(checkpoint_path)
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
```
### Checkpoint callback options
The callback provides options to give the resulting checkpoints unique names and to adjust the checkpointing frequency.
Train a new model, and save uniquely named checkpoints once every 5 epochs:
```
# Embed the epoch number in the file name (using `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    checkpoint_path, verbose=1, save_weights_only=True,
    # save the weights every 5 epochs
    period=5)
model = create_model()
model.fit(train_images, train_labels,
          epochs=50, callbacks=[cp_callback],
          validation_data=(test_images, test_labels),
          verbose=0)
```
Now look at the resulting checkpoints and pick the latest one:
```
! ls {checkpoint_dir}
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
```
Note: the default TensorFlow format only keeps the 5 most recent checkpoints.
To test, reset the model and load the latest checkpoint:
```
model = create_model()
model.load_weights(latest)
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
```
## What are these files?
The above code stores the weights in a collection of [checkpoint](https://www.tensorflow.org/guide/saved_model#save_and_restore_variables)-formatted files that contain only the trained weights in a binary format. Checkpoints contain:
* One or more shards that contain your model's weights
* An index file that indicates which weights are stored in which shard
If you are only training a model on a single machine, you will have a single shard with the suffix `.data-00000-of-00001`.
## Manually save weights
Above you saw how to load saved weights into a model.
Manually saving the weights is just as simple: use the `Model.save_weights` method.
```
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')
# Restore the weights
model = create_model()
model.load_weights('./checkpoints/my_checkpoint')
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
```
## Save the entire model
The model and optimizer can be saved to a single file that contains both their state (weights and variables) and the model configuration. This lets you export a model so it can be used without access to the original Python code. Because the optimizer state is recovered, you can even resume training from exactly where you left off.
Saving a fully functional model is very useful. You can load it in TensorFlow.js ([HDF5](https://js.tensorflow.org/tutorials/import-keras.html), [Saved Model](https://js.tensorflow.org/tutorials/import-saved-model.html)) and then train and run it in the browser, or convert it to run on mobile devices using TensorFlow Lite ([HDF5](https://www.tensorflow.org/lite/convert/python_api#exporting_a_tfkeras_file_), [Saved Model](https://www.tensorflow.org/lite/convert/python_api#exporting_a_savedmodel_)).
### As an HDF5 file
Keras provides a basic save format using the [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) standard. For our purposes, the saved model can be treated as a single binary large object (blob).
```
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# Save the entire model to a single HDF5 file
model.save('my_model.h5')
```
Now recreate the model from that file:
```
# Recreate the exact same model, including weights and optimizer
new_model = keras.models.load_model('my_model.h5')
new_model.summary()
```
Check its accuracy:
```
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
```
This technique saves everything:
* The weight values
* The model's configuration (architecture)
* The optimizer configuration
Keras saves models by inspecting their architecture. Currently, it is not able to save TensorFlow optimizers (from `tf.train`). When using those, you will need to re-compile the model after loading, and you will lose the state of the optimizer.
### As a `saved_model`
Caution: This method of saving a `tf.keras` model is experimental and may change in future versions.
Build a new model:
```
model = create_model()
model.fit(train_images, train_labels, epochs=5)
```
Create a `saved_model` and place it in a time-stamped directory:
```
import time
saved_model_path = "./saved_models/{}".format(int(time.time()))
tf.keras.experimental.export_saved_model(model, saved_model_path)
saved_model_path
```
List your saved models:
```
!ls saved_models/
```
Reload a fresh Keras model from the saved model (SavedModel):
```
new_model = tf.keras.experimental.load_from_saved_model(saved_model_path)
new_model.summary()
```
Run the restored model:
```
model.predict(test_images).shape
# The model must be compiled before evaluating.
# This step is not required if you are only deploying the model.
new_model.compile(optimizer=model.optimizer,  # keep the optimizer that was loaded
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Evaluate the model.
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
```
## What's next?
That was a quick guide to saving and loading with `tf.keras`.
* The [tf.keras guide](https://www.tensorflow.org/guide/keras) has more on saving and loading with `tf.keras`
* See [Saving in eager](https://www.tensorflow.org/guide/eager#object_based_saving) for saving during eager execution
* The [Save and Restore](https://www.tensorflow.org/guide/saved_model) guide covers the low-level details of saving in TensorFlow
| github_jupyter |
```
import sys, random, os, json
sys.path.append(os.path.join(os.getcwd(), '..'))
from datamart.augment import Augment
import pandas as pd
es_index = "datamart"
augment = Augment(es_index=es_index)
```
### Initialize a dataframe
```
old_df = pd.DataFrame(data={
'city': ["los angeles", "New york", "Shanghai", "SAFDA", "manchester"],
'country': ["US", "US", "China", "fwfb", "UK"],
})
print(old_df)
```
### Search metadata
Query by a column, which queries on `variable.named_entities`. By default, a metadata record is a hit if it matches more than half of the cells in the original dataframe. You can specify the minimum number of matches with the `minimum_should_match` parameter.
```
hitted_metadatas = augment.query_by_column(
col=old_df.loc[:, "city"],
minimum_should_match=len(old_df.loc[:, 'city'].unique().tolist())//2)
print(len(hitted_metadatas))
```
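The majority-match rule described above can be sketched in plain Python; `is_hit` below is a hypothetical helper for illustration, not part of the datamart API:

```python
# Sketch of the matching rule: a candidate dataset "hits" when its named
# entities cover at least minimum_should_match of the query column's
# unique values (default: half of them).
def is_hit(query_values, candidate_entities, minimum_should_match=None):
    unique = set(query_values)
    if minimum_should_match is None:
        minimum_should_match = len(unique) // 2  # default threshold
    matches = sum(1 for v in unique if v in candidate_entities)
    return matches >= minimum_should_match

cities = ["los angeles", "New york", "Shanghai", "SAFDA", "manchester"]
candidate = {"los angeles", "New york", "manchester", "paris"}
print(is_hit(cities, candidate))  # 3 of 5 unique cities match -> True
```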
### Query by key value pairs
```
hitted_metadatas = augment.query_by_key_value_pairs(key_value_pairs=[
("description", "average")
])
print(len(hitted_metadatas))
```
### Query by temporal coverage
```
hitted_metadatas = augment.query_by_temporal_coverage(
start="2018-09-23",
end="2018-09-30T00:00:00")
print(len(hitted_metadatas))
```
After applying some ranking method, suppose we want to augment with one specific metadata record, datamart id 1230000:
```
metadata = augment.query_by_datamart_id(datamart_id=1230000)[0]
# Take a look at some metadata
print(json.dumps(metadata, indent=2))
```
### Materialize dataset with constraints
```
# Get a subset of the dataset restricted to the cities in old_df and the time range 2018-09-23 to 2018-09-30
new_df = augment.get_dataset(metadata=metadata, variables=None, constrains={
"locations": old_df.loc[:, 'city'].unique().tolist(),
"date_range": {
"start": "2018-09-23T00:00:00",
"end": "2018-09-30T00:00:00"
}
})
print(new_df.iloc[random.sample(range(1, new_df.shape[0]), 10), :])
```
### Join
There are many ways to join the original dataframe with the new dataframe.
The simplest is a plain merge (an outer join here), which will produce many rows for the same city.
```
df = pd.merge(left=old_df, right=new_df, left_on='city', right_on='city', how='outer')
print(df)
```
#### Aggregation
The join can also be performed on aggregated data.
```
# Aggregate on city
new_df_aggregated = new_df.groupby(["city"], as_index=False)["TAVG"].mean()
print(new_df_aggregated)
df = pd.merge(left=old_df, right=new_df_aggregated, left_on='city', right_on='city', how='outer')
print(df)
# Aggregate on city and date
new_df_aggregated = new_df.groupby(["city", "date"], as_index=False)["TAVG"].mean()
print(new_df_aggregated)
df = pd.merge(left=old_df, right=new_df_aggregated, left_on='city', right_on='city', how='outer')
print(df)
```
We can also unstack the new dataframe to form more columns, so that we do not produce extra rows.
```
new_df_unstacked = new_df.groupby(["city", "date"])["TAVG"].mean().unstack().reset_index(level=['city'])
print(new_df_unstacked)
df = pd.merge(left=old_df, right=new_df_unstacked, left_on='city', right_on='city', how='outer')
print(df)
```
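The groupby-then-unstack idea can also be shown without pandas; a plain-Python sketch with made-up temperature readings:

```python
from collections import defaultdict

# (city, date, value) rows, like the TAVG readings in new_df (made-up numbers)
rows = [
    ("Shanghai", "2018-09-23", 21.0),
    ("Shanghai", "2018-09-24", 23.0),
    ("manchester", "2018-09-23", 12.0),
    ("manchester", "2018-09-23", 14.0),  # duplicate day -> averaged
]

# group values per (city, date), then average: the groupby().mean() step
grouped = defaultdict(list)
for city, date, value in rows:
    grouped[(city, date)].append(value)

# "unstack": one row per city, one column per date
pivot = defaultdict(dict)
for (city, date), values in grouped.items():
    pivot[city][date] = sum(values) / len(values)

print(dict(pivot))
```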
| github_jupyter |
# 2022/01/01/SAT(HappyNewYear)
Let's do a detailed review
- feature_names = attributes such as height and width; data = the values of each feature; target = 0, 1, 2, ... (e.g., stand-ins for the iris species); target_names = the name each target value refers to
---
The model_selection module provides various functions and classes for splitting data into training and test sets, for cross-validation splits and evaluation, and for tuning an Estimator's hyperparameters. Let's start with train_test_split(), which splits the full dataset into training and test sets.
```
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
iris=load_iris() # load the iris dataset
dt_clf=DecisionTreeClassifier()
train_data=iris.data # ndarray holding only the features of the dataset
train_label=iris.target # label data of the dataset
dt_clf.fit(train_data, train_label) # perform training
pred=dt_clf.predict(train_data) # perform prediction // note: we reused train_data from training -> accuracy will be 1
print('Accuracy: ',accuracy_score(train_label,pred))
```
- Accuracy came out as 100% $\to$ because we predicted on the same data the model was trained on. It is like taking an exam after already seeing the answers.
- Therefore, prediction should be performed on a dedicated test dataset, not on the training dataset used for learning.
```
from sklearn.model_selection import train_test_split
dt_clf=DecisionTreeClassifier()
iris=load_iris()
# train_test_split() returns a tuple; it returns four elements in order
X_train,X_test,y_train,y_test=train_test_split(iris.data, iris.target,test_size=0.3,random_state=121)
dt_clf.fit(X_train,y_train)
pred = dt_clf.predict(X_test)
print('Test accuracy: {:.4f}'.format(accuracy_score(y_test,pred)))
```
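Under the hood, `train_test_split` is essentially shuffling row indices and slicing; a stdlib-only sketch (illustrative, not sklearn's actual implementation):

```python
import random

def manual_train_test_split(X, y, test_size=0.3, seed=121):
    # shuffle row indices with a fixed seed for reproducibility
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    n_test = int(len(X) * test_size)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    X_train = [X[i] for i in train_idx]
    X_test = [X[i] for i in test_idx]
    y_train = [y[i] for i in train_idx]
    y_test = [y[i] for i in test_idx]
    return X_train, X_test, y_train, y_test

X = [[i] for i in range(10)]
y = list(range(10))
X_train, X_test, y_train, y_test = manual_train_test_split(X, y)
print(len(X_train), len(X_test))  # 7 3
```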
---
The approach so far can produce a model that is excessively optimized to the training data, so that predictive performance drops sharply when predicting on other data; this is called `overfitting`. A model overfit to one particular test set will perform worse when other test data comes in. $\to$ To improve on this, we use `cross-validation` to perform a variety of training and evaluation rounds.
> What is cross-validation?
: It is like taking several mock exams before the real one. If the real exam is the evaluation on the test dataset, the mock exams are the many rounds of training and evaluation on training/validation splits during cross-validation.
: Split the training dataset into a training set and a validation set, and once all training/validation rounds are complete, keep a separate test dataset for the final performance evaluation.
> What is K-fold cross-validation?
: Create K fold sets and repeat training, validation, and evaluation K times, once per fold set / see p. 104 of the textbook for an overview of the process
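The index bookkeeping behind K-fold can be sketched in plain Python (a stdlib-only illustration, not sklearn's implementation):

```python
def kfold_indices(n_samples, n_splits):
    # contiguous folds, like sklearn's KFold(shuffle=False):
    # each fold is the validation set once, the rest is training data
    fold_sizes = [n_samples // n_splits] * n_splits
    for i in range(n_samples % n_splits):
        fold_sizes[i] += 1  # spread the remainder over the first folds
    splits, start = [], 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = [i for i in range(n_samples) if i < start or i >= start + size]
        splits.append((train_idx, test_idx))
        start += size
    return splits

# 150 iris samples, 5 folds -> 120 train / 30 validation per round
for train_idx, test_idx in kfold_indices(150, 5):
    print(len(train_idx), len(test_idx))  # 120 30, five times
```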
- Let's try it out
```
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold # earlier we imported train_test_split instead
iris=load_iris() # load the iris dataset
features=iris.data
label=iris.target
dt_clf=DecisionTreeClassifier(random_state=156)
kfold=KFold(n_splits=5) # create a KFold object
cv_accuracy=[] # list to hold the accuracy of each fold set
print('Iris dataset size:',features.shape[0])
```
---
```python
kfold=KFold(n_splits=5)
```
Having created the KFold object, let's call its split() method to divide the full iris dataset into 5 fold sets. Since the iris dataset contains 150 samples, 120 go to the training set and 30 to the validation test set.
```
n_iter=0
for train_index,test_index in kfold.split(features):
    # use the indices returned by kfold.split() to extract the training and validation data
    X_train, X_test = features[train_index], features[test_index]
    y_train, y_test = label[train_index], label[test_index]
    # train and predict
    dt_clf.fit(X_train, y_train)
    pred = dt_clf.predict(X_test)
    n_iter+=1
    # measure accuracy on each iteration
    accuracy = np.round(accuracy_score(y_test,pred),4)
    train_size = X_train.shape[0]
    test_size = X_test.shape[0]
    print('\n#{0} CV accuracy :{1}, train data size :{2}, validation data size :{3}'.format(n_iter,accuracy,train_size,test_size))
    print('#{0} validation set indices:{1}'.format(n_iter, test_index))
    cv_accuracy.append(accuracy)
# average the per-iteration accuracies
print('\n *Conclusion* mean validation accuracy:', np.mean(cv_accuracy))
```
----
- The validation set indices change on every cross-validation round.
- The validation set indices match the pattern of the diagram explained on p. 104.
---
> Stratified K-fold
: A K-fold variant for label (target class) datasets with an imbalanced distribution. An imbalanced label distribution means that particular label values are unusually frequent or unusually rare, skewing the distribution to one side.
For example, suppose we predict loan fraud. The dataset has 100 million records, with dozens of features and a label indicating fraud (0: normal loan, 1: fraud). Even if K-fold picks the train/test indices randomly, it may fail to reflect the true ratio of 0s and 1s. It is therefore crucial that the train/test sets preserve a fraud-label distribution similar to the original data.
- ***Stratified K-fold solves exactly this problem, where plain K-fold fails to distribute the original label distribution properly across the training and test sets***
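A stdlib-only sketch of the stratified idea (an illustration, not sklearn's implementation): deal each class's indices round-robin across the folds so every fold preserves the class proportions.

```python
from collections import defaultdict

def stratified_folds(labels, n_splits):
    # collect the row indices of each class, then deal them out
    # round-robin so every fold keeps the original label proportions
    by_class = defaultdict(list)
    for i, label in enumerate(labels):
        by_class[label].append(i)
    folds = [[] for _ in range(n_splits)]
    for indices in by_class.values():
        for j, i in enumerate(indices):
            folds[j % n_splits].append(i)
    return folds

labels = [0] * 50 + [1] * 50 + [2] * 50  # iris-like: 50 of each class
for fold in stratified_folds(labels, 3):
    counts = {c: sum(1 for i in fold if labels[i] == c) for c in (0, 1, 2)}
    print(counts)  # each fold holds equal numbers of every class
```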
Let's create a DataFrame from the iris dataset and check the label distribution first.
```
import pandas as pd
iris=load_iris()
iris_df=pd.DataFrame(data=iris.data,columns=iris.feature_names)
iris_df['label']=iris.target
print(iris_df['label'].value_counts(),'\n')
```
- The label values are evenly distributed, 50 each
```
kfold=KFold(n_splits=3)
n_iter=0
for train_index, test_index in kfold.split(iris_df):
    n_iter+=1
    label_train = iris_df['label'].iloc[train_index]
    label_test=iris_df['label'].iloc[test_index]
    print('## Cross-validation: {}'.format(n_iter))
    print('Train label distribution:\n', label_train.value_counts())
    print('Validation label distribution:\n', label_test.value_counts())
    print('------------------------------------------------------')
```
- On each cross-validation round, the 3 fold sets produce training and validation labels with completely disjoint values. For example, in the first round the training labels contain 50 each of values 1 and 2, while the validation labels contain 50 of value 0. Since the training labels contain only 1 and 2, class 0 is never learned; conversely, the validation set contains only 0, so the trained model can never predict it. Splitting the cross-validation data this way forces the validation accuracy to 0.
- StratifiedKFold solves this problem, where the label sets split by KFold fail to reflect the overall label distribution.
---
Let's try it out
```
from sklearn.model_selection import StratifiedKFold
skf=StratifiedKFold(n_splits=3)
n_iter=0
# the split() method requires the label dataset as an argument in addition to the feature dataset
for train_index,test_index in skf.split(iris_df,iris_df['label']):
    n_iter+=1
    label_train=iris_df['label'].iloc[train_index]
    label_test=iris_df['label'].iloc[test_index]
    print('## Cross-validation: {}'.format(n_iter))
    print('Train label distribution: \n', label_train.value_counts())
    print('Validation label distribution: \n', label_test.value_counts())
    print('--------------------------------------------------------')
```
- The train and validation label distributions are now allocated identically. Only with such a split can all label values 0, 1, and 2 be learned, and validation performed on that basis.
- Now let's cross-validate the iris data using StratifiedKFold
```
dt_clf=DecisionTreeClassifier(random_state=156)
skfold=StratifiedKFold(n_splits=3)
n_iter=0
cv_accuracy=[]
# StratifiedKFold's split() requires the label dataset as an additional argument
for train_index, test_index in skfold.split(features, label):
    # use the indices returned by split() to extract the training and validation data
    X_train,X_test=features[train_index],features[test_index]
    y_train,y_test=label[train_index], label[test_index]
    # train and predict
    dt_clf.fit(X_train,y_train)
    pred=dt_clf.predict(X_test)
    # measure accuracy on each iteration
    n_iter+=1
    accuracy=np.around(accuracy_score(y_test,pred),4)
    train_size=X_train.shape[0]
    test_size = X_test.shape[0]
    print('\n#{} CV accuracy : {}, train data size : {}, validation data size : {}'.format(n_iter,accuracy,train_size,test_size))
    print('#{} validation set indices: {}'.format(n_iter, test_index))
    cv_accuracy.append(accuracy)
# per-round accuracies and their mean
print('\n## CV accuracy per round:', np.around(cv_accuracy,4))
print('## Mean validation accuracy:',np.mean(cv_accuracy))
```
----
> ### ***`Cross-validation made easier - cross_val_score()`***
```
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score,cross_validate
from sklearn.datasets import load_iris
iris_data=load_iris()
dt_clf = DecisionTreeClassifier(random_state=156)
data= iris_data.data
label=iris_data.target
# performance metric is accuracy; 3 cross-validation sets
scores = cross_val_score(dt_clf, data, label, scoring='accuracy', cv=3)
print('CV accuracy per round: ',np.round(scores,4))
print('Mean validation accuracy: ',np.round(np.mean(scores),4))
```
- Returns an array of evaluation results, computed cv times using the metric given by the scoring parameter
----
> ### ***`GridSearchCV - cross-validation and optimal hyperparameter tuning at once`***
- Hyperparameters? Major components of a machine learning algorithm; adjusting their values can improve the algorithm's predictive performance
```
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV
# load the data and split into training and test sets
iris_data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris_data.data,iris_data.target, test_size=0.2, random_state=121)
dtree= DecisionTreeClassifier()
# define the parameters as a dictionary
parameters = {'max_depth' : [1,2,3], 'min_samples_split' : [2,3]}
import pandas as pd
# configure testing the param_grid hyperparameters over 3 train/test set folds
# refit=True is the default; it retrains with the best parameter setting
grid_dtree = GridSearchCV(dtree, param_grid=parameters, cv=3, refit=True)
# sequentially train/evaluate the param_grid hyperparameters on the iris training data
grid_dtree.fit(X_train,y_train)
# extract the GridSearchCV results into a DataFrame
scores_df = pd.DataFrame(grid_dtree.cv_results_)
scores_df[['params','mean_test_score','rank_test_score','split0_test_score','split1_test_score','split2_test_score']]
print('GridSearchCV best parameters:', grid_dtree.best_params_)
print('GridSearchCV best accuracy:{:.4f}'.format(grid_dtree.best_score_))
```
- Rows 4 and 5 have rank_test_score 1, meaning they tie for first place, i.e., the best predictive performance.
- The split0/1/2_test_score columns exist because cv=3, and mean_test_score is the average of those three
```
# GridSearchCV's refit returns the already-trained estimator
estimator = grid_dtree.best_estimator_
# GridSearchCV's best_estimator_ is already optimally trained, so no separate training is needed
pred = estimator.predict(X_test)
print('Test set accuracy: {:.4f}'.format(accuracy_score(y_test,pred)))
```
- The standard way to apply a machine learning model is to tune the optimal hyperparameters on the training data with GridSearchCV, and then evaluate the result on a separate test set.
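The grid enumeration at the heart of GridSearchCV can be sketched with `itertools.product` (an illustration of the search space, not sklearn code):

```python
from itertools import product

# The parameter grid from above; GridSearchCV tries every combination,
# cross-validating each one and keeping the best-scoring setting.
parameters = {'max_depth': [1, 2, 3], 'min_samples_split': [2, 3]}

keys = list(parameters)
candidates = [dict(zip(keys, values)) for values in product(*parameters.values())]
for params in candidates:
    print(params)
print(len(candidates), "candidate settings, each evaluated cv=3 times")
```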
| github_jupyter |
```
from IPython import display
```
# What to expect from the Python lessons
- Get you started with python through a little project
- Showcase relevant use cases of python for exploratory data analysis
- Provide you with exercises during and after the lessons so that you practice and experience python
- Provide you with good reads to develop further the wonderful craft of programming
# Coding project introduction
We will work with hypothetical clinical trial data about a drug that aims to reduce arthritis inflammation.
The data analysis goal is to assess the effectiveness of the treatment and inform the clinical trial with data visualization, taking advantage of interactive notebooks.
More details on this later ;)
## Software Carpentry - Python part 1
12:45 - 0.Introduction to Jupyter notebooks
13:00 - 1.Python fundamentals
13:45 - Exercises in Breakout Room
**14:00 - Coffee break**
14:15 - 2.Loading data
14:45 - 3.Visualizing data
**15:30 - Coffee break**
15:45 - 4.Repeating actions
16:30 - Exercises in Breakout Room
16:50 - Wrap up
# Quick tour to Anaconda (Individual edition)
"**Your data science toolkit**: With over 25 million users worldwide, the open-source Individual Edition (Distribution) is the easiest way to perform Python/R data science and machine learning on a single machine. Developed for solo practitioners, it is the toolkit that equips you to work with thousands of open-source packages and libraries."
https://www.anaconda.com/products/individual
## Anaconda installation comes with:
* Latest version of Python
* JupyterLab, Jupyter notebooks and several other IDE options
* An easy-to-install collection of high performance Python libraries
* Conda, tool for managing packages and environments
* Collection of open source packages and tools to install over 1.5k additional packages
* Anaconda Navigator is a graphical user interface to the conda package and environment manager and a free scientific-computing environment.
# Let's open up a jupyter notebook!
## Method 1: Anaconda Navigator
## Method 2: Terminal
Open notebook example: [Gravitational Wave Open Science Center](https://mybinder.org/v2/gh/losc-tutorial/quickview/master?filepath=index.ipynb)
# Why are we using notebooks instead of an IDE like Spyder
- Storytelling with documented and re-executable analysis
- Great for exploring libraries, commands and code snippets
- Persistence of code snippets you want to reuse, compared with a plain terminal or ipython session
```
# This is a code snippet
def say_hello(your_name): # define say_hello function
print('Hello ' + your_name)
say_hello('Jose') # Here we call the function
```
## Don't confuse the interactive notebook documents with the "jupyter notebook" application
- The first one is a document format that you can find with an `.ipynb` file extension
- The second is an application where you can use such documents
- Interactive notebooks or `ipynb`s are closely related to `ipython`, an interactive shell.
# 1. Python fundamentals
## Let's get started with Python and see what it can do
- Open your jupyter notebook application
- Go to `day_1/` directory and open the `exercises.ipynb` notebook.
### How to use the `exercises.ipynb`
- Let's write our first code snippet in a cell
- Let's execute code (pressing Run or pressing `Shift + Enter`)
- Let's write some markdown in another cell
```
# Write your first code snippet
# Write your first markdown cell
```
# Things you should know about python for Today
* Easy to code: it is a very easy language to learn.
* It has a lot of reusable code that lets you do powerful things (libraries)
* High-level: when writing Python code you don't have to think about the system's architecture
* Open source and highly extensible!
* Indentation and spaces make a difference
```
# In this snippet we demonstrate how indentation works in python
names = ['Curie', 'Darwin', 'Turing']
for name in names:
    print(name)
```
## Variables and data types in python
Create variables to describe patient characteristics:
- Patient identifier
- Patient name
- Age
- Weight
- Specification of inflammation status (inflamed or not inflamed)
```
# Demonstrate multiple assignment with two patient names
# Demonstrate variables valid and invalid names
# Examples 1_patient vs patient_1
## patient name = "Peter Pan" # This will not work
## 1_patient_name = "Peter Pan" # This will not work
# Trying data types
# Try integer type with patient_1 age
# Try floating points with patient's weight
# a patient_id string
# Boolean, does patient "x" has had inflammation?
# Get information about data types using a built in function
```
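One possible fill-in for the prompts above; all names and values are illustrative:

```python
# One possible fill-in for the prompts above (values are illustrative)
patient_id = "p001"                                          # string
patient_name, patient_2_name = "Peter Pan", "Wendy Fuller"   # multiple assignment
age = 40                                                     # integer
weight = 80.5                                                # floating point
is_inflamed = True                                           # boolean

# inspect data types with the built-in type() function
for value in (patient_id, patient_name, age, weight, is_inflamed):
    print(type(value).__name__)
```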
# Combining everything we have learned into a patients dataset
- We will create a list of patients
- And start populating and manipulating that list with python
```
# Working with lists data types
# First lets make an empty list
# Lets populate the list with our previous values assigned to the patients variables
# Grouping more data points per patient, we only have names
# A patients list with different data points per patient
# Demonstrate a two dimensional list: [[], [], []]
# Organizing the same data using dictionaries
# Manipulating a list and adding entries to the list
```
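One possible fill-in for these prompts (values are illustrative):

```python
# Possible fill-in for the prompts above (values are illustrative)
patients = []                    # start with an empty list
patients.append("Peter Pan")     # populate with names only

# one inner list per patient: [id, name, age, weight, is_inflamed]
patients_2d = [
    [0, "Peter Pan", 40, 80.0, True],
    [1, "Wendy Fuller", 30, 60.0, False],
]

# the same record as a dictionary, with named fields
patient = {"id": 0, "name": "Peter Pan", "age": 40,
           "weight": 80.0, "is_inflamed": True}

patients_2d.append([2, "Mariam McWire", 37, 70.0, False])  # add an entry
print(len(patients_2d), patient["name"])
```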
# What have we done so far?
- We know how to assign different data types to different values
- We know how to compose more complex data types by aggregating the basic data types or primitive data types
- We can use the notebook now to play around and try code snippets
- We know how to use `print()` python builtin function to interact display results as outputs
# Let's put everything we have learned together into a little app to enter patient data into a list
- We will introduce 4 patients to the patients list
- Try to stick to the same names I am using...
```
# We can use input like this
# patient_name = input()
# Lets clear our list again and start using the input() to populate the list
# patients = []
# Create a patient id
# Using the input() function:
# Enter patients name and add the entry to patients list
# Enter patients age and add the entry to patients list
# Enter patients weight and add the entry to patients list
# Specify if patient was inflammated
## Lets run the cell 4 times to get a dummy dataset
# Saving the data into a CSV file
# We can recreate our list in case it disappears from memory
patients = [[0, 'Peter Pan', '40', '80', 'True'],
[1, 'Wendy Fuller', '30', '60', 'False'],
[2, 'Mariam McWire', '37', '70', 'False'],
[3, 'Robin Hood', '60', '80', 'True']]
# Check and read the file we created
```
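A sketch of the CSV-saving step, using the stdlib `csv` module and an in-memory buffer so it runs anywhere; swap the buffer for `open('patients.csv', 'w', newline='')` to write a real file:

```python
import csv
import io

patients = [[0, 'Peter Pan', '40', '80', 'True'],
            [1, 'Wendy Fuller', '30', '60', 'False']]

# write to an in-memory buffer (stand-in for a file on disk)
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["id", "name", "age", "weight", "is_inflamed"])  # header row
writer.writerows(patients)                                       # one row per patient

print(buffer.getvalue())
```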
# Checking data types to store data according to our original specifications
```
# Let's check the types and see if the data complies with the requirements we defined upfront
# Another way of checking with the == operator
# Do it with a for loop to inspect the data
```
## Here we have a problem of types that don't match what we defined at the beginning
- Age should be an integer
- Weight should be a float
- And is_inflammated should be a boolean type
Hold this issue, we will come back to it later ;)
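When we do come back to it, the clean-up can look like this sketch (`parse_patient` is a hypothetical helper for illustration):

```python
def parse_patient(record):
    # convert the string fields back to the types specified up front
    pid, name, age, weight, is_inflamed = record
    return [int(pid), name, int(age), float(weight), is_inflamed == "True"]

raw = [0, 'Peter Pan', '40', '80', 'True']
clean = parse_patient(raw)
print([type(v).__name__ for v in clean])  # ['int', 'str', 'int', 'float', 'bool']
```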
# Breakout Session 1
# Exercise 1 - Slicing strings
A section of an array is called a _slice_. We can take slices of character strings as well:
```
element = "oxygen"
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
```
What is the value of `element[4]`? What about `element[4:]`? Or `element[:]`?
What is `element[-1]`? What is `element[-3:]`?
```
# Your solution
```
# Exercise 2 - Slicing lists with steps
So far we’ve seen how to use slicing to take single blocks of successive entries from a sequence. But what if we want to take a subset of entries that aren’t next to each other in the sequence? You can achieve this by providing a third argument to the range within the brackets, called the step size.
The full syntax for creating slices is `[begin:end:step]`, although you will most often find the short-hand notation we've seen in Exercise 1.
The example below shows how you can take every third entry in a list:
```
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
subset = primes[0:12:3]
print('subset', subset)
```
Given the following list of months:
```
months = ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov', 'dec']
```
### Questions
1. What slice of `months` will produce the following output `['jan', 'mar', 'may', 'jul', 'sep', 'nov']`?
1. Given the short-hand notation we used for the character string in Exercise 1 (i.e. `element[:2] == element[0:2]`), can you find the short-hand notation for question 1? Which do you find easier to read?
1. Using the step size parameter, can you think of a way to reverse the list?
```
# Your solution
```
# Back to our code project: analysing patient data
### Arthritis Inflammation
We are studying inflammation in patients who have been given a new treatment for arthritis.
There are 60 patients, who had their inflammation levels recorded for 40 days. We want to analyze these recordings to study the effect of the new arthritis treatment.
To see how the treatment is affecting the patients in general, we would like to:
1. Calculate the average inflammation per day across all patients.
1. Plot the result to discuss and share with colleagues.
```
display.Image("../img/lesson-overview.png")
```
What if we need the maximum inflammation for each patient over all days (as in the next diagram on the left) or the average for each day (as in the diagram on the right)?
```
display.Image("../img/python-operations-across-axes.png")
```
# 2. Analysing patient data
Let's have a look at the dataset first
```
# NumPy is a Python library used for operations with matrices and arrays
# Numpy comes pre-installed with anaconda
import numpy
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
print(data) # Shows small part of the dataset
# check the type of value held by the data variable
print(type(data))
print(data.dtype) # Highlight OOP
print(data.shape)
display.Image("../img/accessing_elements.png", width="500")
# Indexing an array
# Slicing data
display.Image("../img/slicing-a-2d-numpy-array.png", width="200")
```
```python
square_array[1:3,1:4] # slice the array as it is in the figure
```
# Talk about libraries and OOP a bit
- For today, the most important thing to know about object-oriented programming is that it is a way of structuring code.
- Python is OOP, and this is particularly noticeable when you reuse libraries like numpy.
```
# Using methods from the numpy module
# Useful statistics, assign the variables using multi assignment
```
When analysing data, though, we often want to look at variations in statistical values, such as the maximum inflammation per patient or the average inflammation per day.
```
# Mean per day over all patients
# The average inflammation per patient across all days
# As a quick check, we can look at the shape
```
## Using pandas to show how libraries build on top of each other to add value
Pandas is built on top of NumPy, which means the pandas package depends on the NumPy package, and pandas also interoperates with many other third-party libraries.
```
# Use pandas to show how to handle array data
# Explain that pandas are based on numpy arrays
```
### Key points
* Import a library into a program using import libraryname.
* Use the `numpy` library to work with arrays in Python.
* Use array `[x, y]` to select a single element from a 2D array.
* Use `numpy.mean(array)`, `numpy.max(array)`, and `numpy.min(array)` to calculate simple statistics.
* Use `numpy.mean(array, axis=0)` or `numpy.mean(array, axis=1)` to calculate statistics across the specified axis.
### Questions?
# 3. Visualizing Tabular Data
```
import matplotlib.pyplot
# Let's make our lives easier with an alias
## import matplotlib.pyplot as plt
# Plot entire dataset as a heatmap
import matplotlib.pyplot as plt
# Plot the mean inflammation per day over all patients
```
This data looks suspicious! I would not expect to find a sharp peak in an average of the dataset. Very unlikely that the inflammation of all patients spikes on day 18. Let's look at two other statistics: max and min
```
# max method of numpy
# min method of numpy
# Setup subplots
```
### Try to do this on your own later:
Create `inflammation_analysis.ipynb`
Let's organize our analysis in a new notebook
**Steps:**
1. Set up subplots
2. Load data
3. Plot data
4. Add labels
5. Save the figure
### Key points
* Use the `pyplot` module from the `matplotlib` library for creating simple visualizations.
* Use an alias when importing lengthy library names, e.g. `import matplotlib.pyplot as plt`
* Create subplots and add labels
### Questions?
# 4. Repeating actions
```
# Simple examples of for loops
```
### Looping over multiple data files
As a final piece to processing our inflammation data, we need a way to get a list of all the files in our data directory whose names start with `inflammation-` and end with `.csv`. The following library will help us to achieve this:
```
# Loop analysis over filenames
```
### Key points
* Use `for variable in sequence` to process the elements of a sequence one at a time.
* The body of a `for` loop must be indented.
* Use `glob.glob(pattern)` to create a list of files whose names match a pattern.
* Use `*` in a pattern to match zero or more characters, and `?` to match any single character.
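The same wildcard rules can be tried on plain strings with the stdlib `fnmatch` module, which implements the pattern matching that `glob` uses:

```python
import fnmatch

filenames = ["inflammation-01.csv", "inflammation-02.csv",
             "small-01.csv", "inflammation-1.txt"]

# '*' matches zero or more characters, '?' matches exactly one
print(fnmatch.filter(filenames, "inflammation-*.csv"))
print(fnmatch.filter(filenames, "inflammation-0?.csv"))
```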
### Questions?
# Breakout Session 2
# Exercise 3 - Change in inflammation
The patient data is longitudinal in the sense that each row represents a series of observations relating to one individual. This means that the change in inflammation over time is a meaningful concept.
The `numpy.diff()` function takes an array and returns the differences between two successive values. Let’s use it to examine the changes each day across the first week of patient 3 from our inflammation dataset:
```
# Load data
import numpy
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
patient3_week1 = data[3, :7]
print(patient3_week1)
```
Calling `numpy.diff(patient3_week1)` performs the following calculation:
```
numpy.diff(patient3_week1)
```
### Questions
1. When calling `numpy.diff()` with a multi-dimensional array, an axis argument may be passed to the function to specify which axis to process. When applying `numpy.diff()` to our 2D inflammation array data, which axis would we specify?
```
# Your solution
```
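One possible solution sketch: each row is a patient and each column a day, so day-to-day changes run along `axis=1`. A tiny synthetic array (an assumption, standing in for the real data) makes this concrete:

```python
import numpy

# a small 2-patient, 4-day stand-in for the inflammation array
data = numpy.array([[0, 2, 5, 3],
                    [1, 1, 4, 2]])

# rows are patients and columns are days, so day-to-day changes run
# along axis=1; the result has one fewer column than the input
daily_change = numpy.diff(data, axis=1)
print(daily_change)
```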
# Exercise 4 - Plotting differences
Plot the difference between the average inflammations reported in the first and second datasets (stored in `inflammation-01.csv` and `inflammation-02.csv`, correspondingly), i.e., the difference between the leftmost plots of the first two figures we have plotted so far.
Steps:
1. Import libraries
1. Import data
1. Calculate difference
1. Create and annotate figure
```
# Your solution
```
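A hedged sketch of those steps; the two synthetic arrays stand in for `inflammation-01.csv` and `inflammation-02.csv`, and the headless `Agg` backend and output file name are assumptions for a self-contained script:

```python
import numpy
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# 1.-2. in the lesson these would be numpy.loadtxt on the two CSV files;
# synthetic arrays keep the sketch self-contained
rng = numpy.random.default_rng(1)
data1 = rng.integers(0, 20, size=(60, 40))
data2 = rng.integers(0, 20, size=(60, 40))

# 3. difference between the two per-day averages
difference = numpy.mean(data1, axis=0) - numpy.mean(data2, axis=0)

# 4. create and annotate the figure, then save it
fig, ax = plt.subplots()
ax.plot(difference)
ax.set_xlabel("day")
ax.set_ylabel("difference in average inflammation")
fig.savefig("difference.png")
```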
# End of lesson
```
import numpy as np
from tqdm import tqdm
from time import time
import torchvision
from torchvision import models, transforms
import torch
from torch import nn
from torch.utils.tensorboard import SummaryWriter
def accuracy(yhat,y):
    # if y encodes class indices
    if len(y.shape)==1 or y.size(1)==1:
        return (torch.argmax(yhat,1).view(y.size(0),-1)== y.view(-1,1)).double().mean()
    # if y is one-hot encoded
    return (torch.argmax(yhat,1).view(-1) == torch.argmax(y,1).view(-1)).double().mean()
def train(model,epochs,train_loader,test_loader,feature_extract=False):
    model = model.to(device)
    writer = SummaryWriter(f"{TB_PATH}/{model.name}")
    params_to_update = model.parameters()
    print("params to learn:")
    if feature_extract:
        params_to_update = []
        for name,param in model.named_parameters():
            if param.requires_grad == True:
                params_to_update.append(param)
                print("\t",name)
    else:
        for name,param in model.named_parameters():
            if param.requires_grad == True:
                print("\t",name)
    optim = torch.optim.Adam(params_to_update,lr=1e-3)
    print(f"running {model.name}")
    loss = nn.CrossEntropyLoss()
    for epoch in tqdm(range(epochs)):
        cumloss, cumacc, count = 0, 0, 0
        model.train()
        for x,y in train_loader:
            optim.zero_grad()
            x,y = x.to(device), y.to(device)
            yhat = model(x)
            l = loss(yhat,y)
            l.backward()
            optim.step()
            cumloss += l.item()*len(x)  # .item() detaches the loss from the graph
            cumacc += accuracy(yhat,y)*len(x)
            count += len(x)
        writer.add_scalar('loss/train',cumloss/count,epoch)
        writer.add_scalar('accuracy/train',cumacc/count,epoch)
        if epoch % 1 == 0:
            model.eval()
            with torch.no_grad():
                cumloss, cumacc, count = 0, 0, 0
                for x,y in test_loader:
                    x,y = x.to(device), y.to(device)
                    yhat = model(x)
                    cumloss += loss(yhat,y).item()*len(x)
                    cumacc += accuracy(yhat,y)*len(x)
                    count += len(x)
                writer.add_scalar('loss/test',cumloss/count,epoch)
                writer.add_scalar('accuracy/test',cumacc/count,epoch)
def set_parameter_requires_grad(model, feature_extract):
    if feature_extract:
        for name,p in model.named_parameters():
            if "fc" not in name:
                p.requires_grad = False
            else:
                p.requires_grad = True
def get_test_data(dataloader, size):
    # accumulate batches until at least `size` examples have been collected
    xs, ys = [], []
    count = 0
    for X_tmp, Y_tmp in dataloader:
        xs.append(X_tmp)
        ys.append(Y_tmp)
        count += len(X_tmp)
        if count >= size:
            break
    return torch.cat(xs, 0), torch.cat(ys, 0)
TB_PATH = "/tmp/logs/sceance2"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
resnet = models.resnet18(pretrained=True)
resnet.fc = nn.Linear(512, 10)
print(resnet.eval())
set_parameter_requires_grad(resnet, True)
input_size = 224
batch_size = 128
mean=[0.485, 0.456, 0.406]
std=[0.229, 0.224, 0.225]
transformresnetTrain=transforms.Compose([ # no grayscale this time: we have a large pretrained model
    transforms.RandomResizedCrop(input_size), # random crop of the desired size (training-time data augmentation)
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean, std)
])
transformresnetTest=transforms.Compose([
    transforms.Resize(input_size), # resize, then take the central area of the desired size
    transforms.CenterCrop(input_size),
    transforms.ToTensor(),
    transforms.Normalize(mean, std)
])
resnet_trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transformresnetTrain)
resnet_trainloader = torch.utils.data.DataLoader(resnet_trainset, batch_size=batch_size, pin_memory=True, shuffle=True)
resnet_testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transformresnetTest)
resnet_testloader = torch.utils.data.DataLoader(resnet_testset, batch_size=batch_size, pin_memory=True, shuffle=True)
## Training the network
resnet.name = "resnet"
train(resnet, 1, resnet_trainloader, resnet_testloader, feature_extract=True)
## Accuracy
X_test, Y_test = get_test_data(resnet_testloader, 1000)
X_test, Y_test = X_test.to(device), Y_test.to(device)
print("Acc for resnet transfer learning :", accuracy(resnet(X_test), Y_test))
for t in (20,40,60,80,100,120):
    t0 = time()
    resnet(X_test[:t])
    dt = time() - t0
    print(f"batch of {t} images -> {dt:.3f} s ({t/dt:.1f} images/s)")
import os
PATH = "./"
torch.save(resnet.state_dict(), os.path.join(PATH,"resnet.pth"))
PATH = "./"
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(512, 10)
# load the weights saved above
model.load_state_dict(torch.load(os.path.join(PATH,"resnet.pth"),map_location='cpu'))
model.eval()
dummy_input = torch.randn(batch_size, 3, input_size, input_size)
torch.onnx.export(model,
                  dummy_input,
                  "resnet.onnx",
                  export_params=True,
                  do_constant_folding=True,
                  input_names = ['modelInput'],
                  output_names = ['modelOutput'])
print('Model has been converted to ONNX')
```
<a href="https://www.bigdatauniversity.com"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/Images/CCLog.png" width="300" align="center"></a>
<h1 align=center><font size=5>Data Analysis with Python</font></h1>
<h1>Data Wrangling</h1>
<h3>Welcome!</h3>
By the end of this notebook, you will have learned the basics of Data Wrangling!
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li><a href="#identify_handle_missing_values">Identify and handle missing values</a>
<ul>
<li><a href="#identify_missing_values">Identify missing values</a></li>
<li><a href="#deal_missing_values">Deal with missing values</a></li>
<li><a href="#correct_data_format">Correct data format</a></li>
</ul>
</li>
<li><a href="#data_standardization">Data standardization</a></li>
<li><a href="#data_normalization">Data Normalization (centering/scaling)</a></li>
<li><a href="#binning">Binning</a></li>
<li><a href="#indicator">Indicator variable</a></li>
</ul>
Estimated Time Needed: <strong>30 min</strong>
</div>
<hr>
<h2>What is the purpose of Data Wrangling?</h2>
Data Wrangling is the process of converting data from the initial format to a format that may be better for analysis.
<h3>What is the fuel consumption (L/100k) rate for the diesel car?</h3>
<h3>Import data</h3>
<p>
You can find the "Automobile Data Set" from the following link: <a href="https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data">https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data</a>.
We will be using this data set throughout this course.
</p>
<h4>Import pandas</h4>
```
import pandas as pd
import matplotlib.pyplot as plt
```
<h2>Reading the data set from the URL and adding the related headers.</h2>
The dataset is hosted on IBM Cloud Object Storage. Click <a href="https://cocl.us/corsera_da0101en_notebook_bottom">HERE</a> for free storage.
```
filename = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/auto.csv"
```
Create a Python list <b>headers</b> containing the column names:
```
headers = ["symboling","normalized-losses","make","fuel-type","aspiration", "num-of-doors","body-style",
"drive-wheels","engine-location","wheel-base", "length","width","height","curb-weight","engine-type",
"num-of-cylinders", "engine-size","fuel-system","bore","stroke","compression-ratio","horsepower",
"peak-rpm","city-mpg","highway-mpg","price"]
```
Use the Pandas method <b>read_csv()</b> to load the data from the web address. Set the parameter "names" equal to the Python list "headers".
```
df = pd.read_csv(filename, names = headers)
```
Use the method <b>head()</b> to display the first five rows of the dataframe.
```
# To see what the data set looks like, we'll use the head() method.
df.head()
```
As we can see, several question marks appeared in the dataframe; those are missing values which may hinder our further analysis.
<div>So, how do we identify all those missing values and deal with them?</div>
<b>How to work with missing data?</b>
Steps for working with missing data:
<ol>
<li>identify missing data</li>
<li>deal with missing data</li>
<li>correct data format</li>
</ol>
<h2 id="identify_handle_missing_values">Identify and handle missing values</h2>
<h3 id="identify_missing_values">Identify missing values</h3>
<h4>Convert "?" to NaN</h4>
In the car dataset, missing data comes with the question mark "?".
We replace "?" with NaN (Not a Number), which is Python's default missing value marker, for reasons of computational speed and convenience. Here we use the function:
<pre>.replace(A, B, inplace = True) </pre>
to replace A by B
```
import numpy as np
# replace "?" to NaN
df.replace("?", np.nan, inplace = True)
df.head(5)
```
<h4>Evaluating for Missing Data</h4>
The missing values are converted to Python's default. We use Python's built-in functions to identify these missing values. There are two methods to detect missing data:
<ol>
<li><b>.isnull()</b></li>
<li><b>.notnull()</b></li>
</ol>
The output is a boolean value indicating whether the value that is passed into the argument is in fact missing data.
```
missing_data = df.isnull()
missing_data.head(5)
```
"True" stands for a missing value, while "False" means the value is present.
<h4>Count missing values in each column</h4>
<p>
Using a for loop in Python, we can quickly figure out the number of missing values in each column. As mentioned above, "True" represents a missing value, "False" means the value is present in the dataset. In the body of the for loop the method ".value_counts()" counts the number of "True" values.
</p>
```
for column in missing_data.columns.values.tolist():
print(column)
print (missing_data[column].value_counts())
print("")
```
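A more compact pandas idiom for the same per-column counts (not part of the original notebook) chains `.sum()` onto `.isnull()`, since `True` counts as 1; sketched here on a tiny stand-in frame:

```python
import numpy as np
import pandas as pd

# tiny frame standing in for df, with "?" already replaced by NaN
demo = pd.DataFrame({"bore": [3.1, np.nan, 2.9],
                     "price": [13495.0, 16500.0, np.nan]})

# isnull() gives a boolean frame; summing counts the True values per column
missing_counts = demo.isnull().sum()
print(missing_counts)
```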
Based on the summary above, each column has 205 rows of data, and seven columns contain missing data:
<ol>
<li>"normalized-losses": 41 missing data</li>
<li>"num-of-doors": 2 missing data</li>
<li>"bore": 4 missing data</li>
<li>"stroke" : 4 missing data</li>
<li>"horsepower": 2 missing data</li>
<li>"peak-rpm": 2 missing data</li>
<li>"price": 4 missing data</li>
</ol>
<h3 id="deal_missing_values">Deal with missing data</h3>
<b>How to deal with missing data?</b>
<ol>
<li>drop data<br>
a. drop the whole row<br>
b. drop the whole column
</li>
<li>replace data<br>
a. replace it by mean<br>
b. replace it by frequency<br>
c. replace it based on other functions
</li>
</ol>
Whole columns should be dropped only if most entries in the column are empty. In our dataset, none of the columns are empty enough to drop entirely.
We have some freedom in choosing which method to replace data; however, some methods may seem more reasonable than others. We will apply each method to many different columns:
<b>Replace by mean:</b>
<ul>
<li>"normalized-losses": 41 missing data, replace them with mean</li>
<li>"stroke": 4 missing data, replace them with mean</li>
<li>"bore": 4 missing data, replace them with mean</li>
<li>"horsepower": 2 missing data, replace them with mean</li>
<li>"peak-rpm": 2 missing data, replace them with mean</li>
</ul>
<b>Replace by frequency:</b>
<ul>
<li>"num-of-doors": 2 missing data, replace them with "four".
<ul>
<li>Reason: 84% of sedans have four doors. Since four doors is the most frequent value, it is the most likely to occur</li>
</ul>
</li>
</ul>
<b>Drop the whole row:</b>
<ul>
<li>"price": 4 missing data, simply delete the whole row
<ul>
<li>Reason: price is what we want to predict. Any data entry without price data cannot be used for prediction; therefore any row now without price data is not useful to us</li>
</ul>
</li>
</ul>
<h4>Calculate the average of the column </h4>
```
avg_norm_loss = df["normalized-losses"].astype("float").mean(axis=0)
print("Average of normalized-losses:", avg_norm_loss)
```
<h4>Replace "NaN" by mean value in "normalized-losses" column</h4>
```
df["normalized-losses"].replace(np.nan, avg_norm_loss, inplace=True)
```
<h4>Calculate the mean value for 'bore' column</h4>
```
avg_bore=df['bore'].astype('float').mean(axis=0)
print("Average of bore:", avg_bore)
```
<h4>Replace NaN by mean value</h4>
```
df["bore"].replace(np.nan, avg_bore, inplace=True)
```
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #1: </h1>
<b>According to the example above, replace NaN in "stroke" column by mean.</b>
</div>
```
# Write your code below and press Shift+Enter to execute
avg_stroke = df['stroke'].astype("float").mean(axis=0)
df['stroke'].replace(np.nan, avg_stroke, inplace=True)
```
Double-click <b>here</b> for the solution.
<!-- The answer is below:
# calculate the mean vaule for "stroke" column
avg_stroke = df["stroke"].astype("float").mean(axis = 0)
print("Average of stroke:", avg_stroke)
# replace NaN by mean value in "stroke" column
df["stroke"].replace(np.nan, avg_stroke, inplace = True)
-->
<h4>Calculate the mean value for the 'horsepower' column:</h4>
```
avg_horsepower = df['horsepower'].astype('float').mean(axis=0)
print("Average horsepower:", avg_horsepower)
```
<h4>Replace "NaN" by mean value:</h4>
```
df['horsepower'].replace(np.nan, avg_horsepower, inplace=True)
```
<h4>Calculate the mean value for 'peak-rpm' column:</h4>
```
avg_peakrpm=df['peak-rpm'].astype('float').mean(axis=0)
print("Average peak rpm:", avg_peakrpm)
```
<h4>Replace NaN by mean value:</h4>
```
df['peak-rpm'].replace(np.nan, avg_peakrpm, inplace=True)
```
To see which values are present in a particular column, we can use the ".value_counts()" method:
```
df['num-of-doors'].value_counts()
```
We can see that four doors is the most common type. We can also use the ".idxmax()" method to calculate the most common type automatically:
```
df['num-of-doors'].value_counts().idxmax()
```
The replacement procedure is very similar to what we have seen previously
```
#replace the missing 'num-of-doors' values by the most frequent
df["num-of-doors"].replace(np.nan, "four", inplace=True)
```
Finally, let's drop all rows that do not have price data:
```
# simply drop whole row with NaN in "price" column
df.dropna(subset=["price"], axis=0, inplace=True)
# reset index, because we dropped four rows
df.reset_index(drop=True, inplace=True)
df.head()
```
<b>Good!</b> Now, we obtain the dataset with no missing values.
<h3 id="correct_data_format">Correct data format</h3>
<b>We are almost there!</b>
<p>The last step in data cleaning is checking and making sure that all data is in the correct format (int, float, text or other).</p>
In Pandas, we use
<p><b>.dtypes</b> to check the data type</p>
<p><b>.astype()</b> to change the data type</p>
<h4>Lets list the data types for each column</h4>
```
df.dtypes
```
<p>As we can see above, some columns are not of the correct data type. Numerical variables should have type 'float' or 'int', and variables with strings such as categories should have type 'object'. For example, 'bore' and 'stroke' variables are numerical values that describe the engines, so we should expect them to be of the type 'float' or 'int'; however, they are shown as type 'object'. We have to convert data types into a proper format for each column using the "astype()" method.</p>
<h4>Convert data types to proper format</h4>
```
df[["bore", "stroke"]] = df[["bore", "stroke"]].astype("float")
df[["normalized-losses"]] = df[["normalized-losses"]].astype("int")
df[["price"]] = df[["price"]].astype("float")
df[["peak-rpm"]] = df[["peak-rpm"]].astype("float")
```
<h4>Let us list the columns after the conversion</h4>
```
df.dtypes
```
<b>Wonderful!</b>
Now, we finally obtain the cleaned dataset with no missing values and all data in its proper format.
<h2 id="data_standardization">Data Standardization</h2>
<p>
Data is usually collected from different agencies with different formats.
(Data Standardization is also a term for a particular type of data normalization, where we subtract the mean and divide by the standard deviation)
</p>
<b>What is Standardization?</b>
<p>Standardization is the process of transforming data into a common format which allows the researcher to make meaningful comparisons.
</p>
<b>Example</b>
<p>Transform mpg to L/100km:</p>
<p>In our dataset, the fuel consumption columns "city-mpg" and "highway-mpg" are represented in mpg (miles per gallon) units. Assume we are developing an application for a country that uses the L/100km standard for fuel consumption.</p>
<p>We will need to apply a <b>data transformation</b> to convert mpg into L/100km.</p>
<p>The formula for the unit conversion is:</p>
L/100km = 235 / mpg
<p>We can do many mathematical operations directly in Pandas.</p>
```
df.head()
# Convert mpg to L/100km by mathematical operation (235 divided by mpg)
df['city-L/100km'] = 235/df["city-mpg"]
# check your transformed data
df.head()
```
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #2: </h1>
<b>According to the example above, transform mpg to L/100km in the column of "highway-mpg", and change the name of column to "highway-L/100km".</b>
</div>
```
# Write your code below and press Shift+Enter to execute
df["highway-mpg"] = 235/df["highway-mpg"]
df.rename(columns={'highway-mpg':'highway-L/100km'}, inplace=True)
df.head()
```
Double-click <b>here</b> for the solution.
<!-- The answer is below:
# transform mpg to L/100km by mathematical operation (235 divided by mpg)
df["highway-mpg"] = 235/df["highway-mpg"]
# rename column name from "highway-mpg" to "highway-L/100km"
df.rename(columns={'highway-mpg':'highway-L/100km'}, inplace=True)
# check your transformed data
df.head()
-->
<h2 id="data_normalization">Data Normalization</h2>
<b>Why normalization?</b>
<p>Normalization is the process of transforming values of several variables into a similar range. Typical normalizations include scaling a variable so its average is 0, scaling it so its variance is 1, or scaling it so its values range from 0 to 1
</p>
<b>Example</b>
<p>To demonstrate normalization, let's say we want to scale the columns "length", "width" and "height" </p>
<p><b>Target:</b> we would like to normalize those variables so their values range from 0 to 1.</p>
<p><b>Approach:</b> replace original value by (original value)/(maximum value)</p>
```
# replace (original value) by (original value)/(maximum value)
df['length'] = df['length']/df['length'].max()
df['width'] = df['width']/df['width'].max()
```
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #3: </h1>
<b>According to the example above, normalize the column "height".</b>
</div>
```
# Write your code below and press Shift+Enter to execute
df['height']=df['height']/df['height'].max()
df[['height']].head()
```
Double-click <b>here</b> for the solution.
<!-- The answer is below:
df['height'] = df['height']/df['height'].max()
# show the scaled columns
df[["length","width","height"]].head()
-->
Here we can see, we've normalized "length", "width" and "height" in the range of [0,1].
<h2 id="binning">Binning</h2>
<b>Why binning?</b>
<p>
Binning is a process of transforming continuous numerical variables into discrete categorical 'bins', for grouped analysis.
</p>
<b>Example: </b>
<p>In our dataset, "horsepower" is a real-valued variable ranging from 48 to 288 with 57 unique values. What if we only care about the price difference between cars with high horsepower, medium horsepower, and little horsepower (3 types)? Can we rearrange them into three 'bins' to simplify analysis? </p>
<p>We will use the Pandas method 'cut' to segment the 'horsepower' column into 3 bins </p>
<h3>Example of Binning Data In Pandas</h3>
Convert data to correct format
```
df["horsepower"]=df["horsepower"].astype(int, copy=True)
```
Let's plot the histogram of horsepower to see what its distribution looks like.
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(df["horsepower"])
# set x/y labels and plot title
plt.xlabel("horsepower")
plt.ylabel("count")
plt.title("horsepower bins")
```
<p>We would like 3 bins of equal size bandwidth so we use numpy's <code>linspace(start_value, end_value, numbers_generated)</code> function.</p>
<p>Since we want to include the minimum value of horsepower we want to set start_value=min(df["horsepower"]).</p>
<p>Since we want to include the maximum value of horsepower we want to set end_value=max(df["horsepower"]).</p>
<p>Since we are building 3 bins of equal length, there should be 4 dividers, so numbers_generated=4.</p>
We build a bin array, with a minimum value to a maximum value, with bandwidth calculated above. The bins will be values used to determine when one bin ends and another begins.
```
bins = np.linspace(min(df["horsepower"]), max(df["horsepower"]), 4)
bins
```
We set group names:
```
group_names = ['Low', 'Medium', 'High']
```
We apply the function "cut" to determine what bin each value of "df['horsepower']" belongs to.
```
df['horsepower-binned'] = pd.cut(df['horsepower'], bins, labels=group_names, include_lowest=True )
df[['horsepower','horsepower-binned']].head(20)
```
Let's see the number of vehicles in each bin.
```
df["horsepower-binned"].value_counts()
```
Let's plot the distribution of each bin.
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.bar(group_names, df["horsepower-binned"].value_counts())
# set x/y labels and plot title
plt.xlabel("horsepower")
plt.ylabel("count")
plt.title("horsepower bins")
```
<p>
Check the dataframe above carefully, you will find the last column provides the bins for "horsepower" with 3 categories ("Low","Medium" and "High").
</p>
<p>
We successfully narrowed the 57 unique values down to 3 bins!
</p>
<h3>Bins visualization</h3>
Normally, a histogram is used to visualize the distribution of bins we created above.
```
%matplotlib inline
import matplotlib.pyplot as plt
# draw histogram of attribute "horsepower" with bins = 3
plt.hist(df["horsepower"], bins = 3)
# set x/y labels and plot title
plt.xlabel("horsepower")
plt.ylabel("count")
plt.title("horsepower bins")
```
The plot above shows the binning result for attribute "horsepower".
<h2 id="indicator">Indicator variable (or dummy variable)</h2>
<b>What is an indicator variable?</b>
<p>
An indicator variable (or dummy variable) is a numerical variable used to label categories. They are called 'dummies' because the numbers themselves don't have inherent meaning.
</p>
<b>Why do we use indicator variables?</b>
<p>
So we can use categorical variables for regression analysis in the later modules.
</p>
<b>Example</b>
<p>
We see the column "fuel-type" has two unique values, "gas" or "diesel". Regression doesn't understand words, only numbers. To use this attribute in regression analysis, we convert "fuel-type" into indicator variables.
</p>
<p>
We will use the pandas method 'get_dummies' to assign numerical values to different categories of fuel type.
</p>
```
df.columns
```
Get the indicator variables and assign them to the data frame "dummy_variable_1":
```
dummy_variable_1 = pd.get_dummies(df["fuel-type"])
dummy_variable_1.head()
```
Change the column names for clarity:
```
dummy_variable_1.rename(columns={'gas':'fuel-type-gas', 'diesel':'fuel-type-diesel'}, inplace=True)
dummy_variable_1.head()
```
The two new columns are 0/1 indicators for "gas" and "diesel". We will now insert them back into our original dataset.
```
# merge data frame "df" and "dummy_variable_1"
df = pd.concat([df, dummy_variable_1], axis=1)
# drop original column "fuel-type" from "df"
df.drop("fuel-type", axis = 1, inplace=True)
df.head()
```
The last two columns are now the indicator variable representation of the fuel-type variable. It's all 0s and 1s now.
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #4: </h1>
<b>As above, create an indicator variable for the column "aspiration": "std" to 0, "turbo" to 1.</b>
</div>
```
# Write your code below and press Shift+Enter to execute
dummy_variable_2=pd.get_dummies(df['aspiration'])
dummy_variable_2.rename(columns={'std':'aspiration-std','turbo':'aspiration-turbo'},inplace=True)
dummy_variable_2.head()
```
Double-click <b>here</b> for the solution.
<!-- The answer is below:
# get indicator variables of aspiration and assign it to data frame "dummy_variable_2"
dummy_variable_2 = pd.get_dummies(df['aspiration'])
# change column names for clarity
dummy_variable_2.rename(columns={'std':'aspiration-std', 'turbo': 'aspiration-turbo'}, inplace=True)
# show first 5 instances of data frame "dummy_variable_1"
dummy_variable_2.head()
-->
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #5: </h1>
<b>Merge the new dataframe to the original dataframe then drop the column 'aspiration'</b>
</div>
```
# Write your code below and press Shift+Enter to execute
df=pd.concat([df,dummy_variable_2],axis=1)
df.head()
```
Double-click <b>here</b> for the solution.
<!-- The answer is below:
#merge the new dataframe to the original datafram
df = pd.concat([df, dummy_variable_2], axis=1)
# drop original column "aspiration" from "df"
df.drop('aspiration', axis = 1, inplace=True)
-->
Save the new CSV:
```
df.to_csv('clean_df.csv')
```
## PureFoodNet implementation
```
#libraries
from tensorflow import keras
from tensorflow.keras.optimizers import Adam, RMSprop
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, Flatten, Conv2D
from tensorflow.keras.layers import MaxPool2D, BatchNormalization, GlobalAveragePooling2D
from tensorflow.keras.regularizers import l2
from tensorflow.keras import backend as K
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ModelCheckpoint, CSVLogger, EarlyStopping, ReduceLROnPlateau, LearningRateScheduler
K.clear_session()
class PureFoodNet:
    # The model (a static builder, callable as PureFoodNet.getModel(...))
    @staticmethod
    def getModel(input_shape=(224,224,3), num_classes=3):
        model = Sequential()
        # Block 1
        model.add(Conv2D(input_shape=input_shape,
                         filters=128, kernel_size=(5,5), strides=2, padding='Same', name='block1_conv1',
                         activation='relu', kernel_initializer='he_normal'))
        model.add(Conv2D(filters=128, kernel_size=(5,5), strides=2, padding='Same', name='block1_conv2',
                         activation='relu', kernel_initializer='he_normal'))
        model.add(MaxPool2D(strides=(2, 2), name='block1_pool'))
        model.add(BatchNormalization())
        model.add(Dropout(0.25))
        # Block 2
        model.add(Conv2D(filters=256, kernel_size=(3,3), padding='Same', name='block2_conv1',
                         activation='relu', kernel_initializer='he_normal'))
        model.add(Conv2D(filters=256, kernel_size=(3,3), padding='Same', name='block2_conv2',
                         activation='relu', kernel_initializer='he_normal'))
        model.add(Conv2D(filters=256, kernel_size=(3,3), padding='Same', name='block2_conv3',
                         activation='relu', kernel_initializer='he_normal'))
        model.add(MaxPool2D(strides=(2, 2), name='block2_pool'))
        model.add(BatchNormalization())
        model.add(Dropout(0.35))
        # Block 3
        model.add(Conv2D(filters=512, kernel_size=(3,3), padding='Same', name='block3_conv1',
                         activation='relu', kernel_initializer='he_normal'))
        model.add(Conv2D(filters=512, kernel_size=(3,3), padding='Same', name='block3_conv2',
                         activation='relu', kernel_initializer='he_normal'))
        model.add(Conv2D(filters=512, kernel_size=(3,3), padding='Same', name='block3_conv3',
                         activation='relu', kernel_initializer='he_normal'))
        model.add(MaxPool2D(strides=(2, 2), name='block3_pool'))
        model.add(BatchNormalization())
        model.add(Dropout(0.35))
        # Block 4 (classifier head)
        model.add(GlobalAveragePooling2D())
        model.add(Dense(512, activation="relu", kernel_initializer='he_normal'))
        model.add(Dropout(0.4))
        model.add(Dense(num_classes,
                        activation="softmax",
                        kernel_initializer='he_normal',
                        kernel_regularizer=l2()))
        return model
img_width, img_height = 299, 299
train_data_dir = 'food-101/train/'
validation_data_dir = 'food-101/test/'
specific_classes = None #['apple_pie', 'greek_salad', 'baklava']
batch_size = 128
train_datagen = ImageDataGenerator(
rescale=1. / 255,
rotation_range=10,
width_shift_range=0.05,
height_shift_range=0.05,
shear_range=0.2,
zoom_range=0.2,
channel_shift_range=10,
horizontal_flip=True,
fill_mode='constant'
)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
classes = specific_classes,
directory = train_data_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
classes = specific_classes,
directory = validation_data_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='categorical')
nb_train_samples = train_generator.n
nb_validation_samples = validation_generator.n
n_classes = train_generator.num_classes
model_name = 'PureFoodNet_299x299Nadam_2'
epoch_num = 50
model = PureFoodNet.getModel(input_shape=train_generator.image_shape,
num_classes = n_classes)
model.summary()
# learning rate scheduler
def schedule(epoch):
    if epoch < 10:
        new_lr = .001
    elif epoch < 14:
        new_lr = .0006
    elif epoch < 17:
        new_lr = .0003
    elif epoch < 20:
        new_lr = .0001
    elif epoch < 23:
        new_lr = .00005
    else:
        new_lr = .00001
    print("\nLR at epoch {} = {} \n".format(epoch,new_lr))
    return new_lr
lr_scheduler = LearningRateScheduler(schedule)
model.compile(optimizer='Nadam',
loss='categorical_crossentropy',
metrics=['accuracy','top_k_categorical_accuracy'])
checkpointer = ModelCheckpoint(filepath='best_model_food101_'+model_name+'.hdf5',
verbose=1,
save_best_only=True)
csv_logger = CSVLogger('hist_food101_'+model_name+'.log')
hist = model.fit_generator(train_generator,
steps_per_epoch = nb_train_samples // batch_size,
validation_data = validation_generator,
validation_steps = nb_validation_samples // batch_size,
epochs = epoch_num,
verbose = 1,
callbacks = [csv_logger, checkpointer, lr_scheduler]
)
```
```
from IPython.display import HTML
css_file = './custom.css'
HTML(open(css_file, "r").read())
```
# Norms and Distances
© 2018 Daniel Voigt Godoy
## 1. Definition
From [Wikipedia](https://en.wikipedia.org/wiki/Norm_(mathematics)):
...a norm is a function that assigns a strictly positive length or size to each vector in a vector space — except for the zero vector, which is assigned a length of zero.
### 1.1 Euclidean Distance
You probably know the most common norm of them all: $\ell_2$ norm (or distance). This is the ***Euclidean Distance*** commonly referred to as the distance between two points:
$$
\ell_2 = ||x||_2 = \sqrt{|x_1|^2 + \dots + |x_n|^2} = \sqrt{\sum_{i=1}^n|x_i|^2}
$$

<center>Source: Wikipedia</center>
### 1.2 Manhattan Distance
You may also have heard of the $\ell_1$ norm (or distance). This is called ***Manhattan Distance***:
$$
\ell_1 = ||x||_1 = |x_1| + \dots + |x_n| = \sum_{i=1}^n|x_i|
$$

<center>Source: Wikipedia</center>
### 1.3 Minkowski Distance of order *p*
There is a pattern to it... you add up all elements exponentiated to the "number" of the norm (1 or 2 in the examples above), then you take the "number"-root of the result.
If we say this "number" is $p$, we can write the formula like this:
$$
||\boldsymbol{x}||_p = \bigg(\sum_{i=1}^{n}|x_i|^p\bigg)^{\frac{1}{p}}
$$
### 1.4 Infinity Norm
This is a special case, which is equivalent to taking the maximum absolute value of all values:
$$
||\boldsymbol{x}||_{\infty} = \max(|x_1|, \dots, |x_n|)
$$
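The formulas above are easy to check numerically. Here is a quick NumPy sketch (not part of the original notebook) showing that the Minkowski formula reproduces the Manhattan and Euclidean distances, while the infinity norm is simply the maximum absolute value:

```python
import numpy as np

def minkowski_norm(x, p):
    """Computes (sum_i |x_i|^p)^(1/p) straight from the definition."""
    return np.sum(np.abs(x) ** p) ** (1 / p)

x = np.array([3.0, -4.0])

print(minkowski_norm(x, 1))   # l1 (Manhattan): |3| + |-4| = 7.0
print(minkowski_norm(x, 2))   # l2 (Euclidean): sqrt(9 + 16) = 5.0
print(np.max(np.abs(x)))      # infinity norm: max(|3|, |-4|) = 4.0
```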
## 2. Experiment
Time to try it yourself!
The slider below allows you to change $p$ to get the contour plots for different norms.
Use the slider to play with different configurations and answer the ***questions*** below.
```
from intuitiveml.algebra.Norm import *
from intuitiveml.utils import gen_button
norm = plotNorm()
vb = VBox(build_figure(norm), layout={'align_items': 'center'})
vb
```
#### Questions
1. What happens to the general ***level*** of values (look at the colorscale) as $p$ increases?
2. Let's compare Manhattan to Euclidean distances:
- Using ***Manhattan Distance***, hover your mouse over any point along the ***x axis*** (y = 0) and note its coordinates: its Z value is the computed distance.
- Using ***Euclidean Distance***, go to the same point and note its coordinates. What happens to the computed distance? Did it get bigger / smaller?
- Repeat the process, but this time choose a point along the ***diagonal*** (x and y having the same value). How do the distances compare to each other?
1.) At $\ell_1$ the plot spans the full range of color values; increasing $p$ reduces the overall scale of the distances.
2.) The x and y axes of the graph above are the coordinates of a point, and its Z value is its distance to the origin under the chosen norm (e.g. Manhattan distance for $\ell_1$). Along the axes the Manhattan and Euclidean distances agree, while along the diagonal the Manhattan distance is the larger of the two.
## 3. Comparing Norms
Here are plots for different $p$-norms, side by side, for easier comparison.
It is also possible to have $p$ values smaller than one, which yield "pointy" figures like the first one.
On the opposite end, if we use a $p$ value of 100, the result is already pretty close to depicting the ***maximum*** absolute value of the coordinates (as expected for the ***infinity norm***).
```
f = plot_norms()
```
## 4. Numpy
[np.linalg.norm](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.norm.html)
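In NumPy all of these are available through `np.linalg.norm`, whose `ord` argument selects the order $p$ (a short sketch):

```python
import numpy as np

x = np.array([3.0, -4.0])

print(np.linalg.norm(x, ord=1))       # 7.0 (Manhattan)
print(np.linalg.norm(x, ord=2))       # 5.0 (Euclidean; the default for vectors)
print(np.linalg.norm(x, ord=np.inf))  # 4.0 (infinity norm)

# A large finite p already approximates the infinity norm well:
print(np.linalg.norm(x, ord=100))     # ~4.0
```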
#### This material is copyright Daniel Voigt Godoy and made available under the Creative Commons Attribution (CC-BY) license ([link](https://creativecommons.org/licenses/by/4.0/)).
#### Code is also made available under the MIT License ([link](https://opensource.org/licenses/MIT)).
```
from IPython.display import HTML
HTML('''<script>
function code_toggle() {
if (code_shown){
$('div.input').hide('500');
$('#toggleButton').val('Show Code')
} else {
$('div.input').show('500');
$('#toggleButton').val('Hide Code')
}
code_shown = !code_shown
}
$( document ).ready(function(){
code_shown=false;
$('div.input').hide()
});
</script>
<form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>''')
```
```
import numpy as np
from collections import defaultdict
from torch.utils import data
import matplotlib.pyplot as plt
%matplotlib inline
# Generate Dataset
np.random.seed(42)
def generate_dataset(num_sequences=2**8):
sequences = []
for _ in range(num_sequences):
token_length = np.random.randint(1, 12)
sequence = f'{"a"*token_length}{"b"*token_length}EOS'
sequences.append(sequence)
return sequences
def word_encoding(sequences):
# Get 1D list of all words in all sequences
flatten = lambda l: [item for sublist in l for item in sublist]
all_words = flatten(sequences)
# Create dictionary mapping word to word frequency across all sequences
word_to_count = defaultdict(int)
for word in all_words:
word_to_count[word] += 1
word_to_count = sorted(list(word_to_count.items()), key=lambda l: -l[1]) # sorting according to frequency
# List of unique words
dictionary = [item[0] for item in word_to_count]
dictionary.append('UNK')
# Calculate lengths
num_sequences = len(sequences)
vocab_size = len(dictionary)
# Make word to index and index to word mappings
word_to_idx = defaultdict(lambda: vocab_size-1)
idx_to_word = defaultdict(lambda: 'UNK')
for idx, word in enumerate(dictionary):
word_to_idx[word] = idx
idx_to_word[idx] = word
return word_to_idx, idx_to_word, vocab_size
def one_hot_encode(idx, vocab_size):
"""
One-hot encodes a single word given its index and the size of the vocabulary.
Args:
`idx`: the index of the given word
`vocab_size`: the size of the vocabulary
Returns a 1-D numpy array of length `vocab_size`.
"""
# Initialize the encoded array
one_hot = np.zeros(vocab_size)
# Set the appropriate element to one
one_hot[idx] = 1.0
return one_hot
def one_hot_encode_sequence(sequence, vocab_size, word_to_idx):
"""
One-hot encodes a sequence of words given a fixed vocabulary size.
Args:
`sentence`: a list of words to encode
`vocab_size`: the size of the vocabulary
Returns a 3-D numpy array of shape (num words, vocab size, 1).
"""
# Encode each word in the sentence
encoding = np.array([one_hot_encode(word_to_idx[word], vocab_size) for word in sequence])
# Reshape encoding s.t. it has shape (num words, vocab size, 1)
encoding = encoding.reshape(encoding.shape[0], encoding.shape[1], 1)
return encoding
class Dataset(data.Dataset):
def __init__(self, inputs, targets):
self.X = inputs
self.y = targets
def __len__(self):
return len(self.X)
def __getitem__(self, index):
return self.X[index], self.y[index]
def prepare_data(sequences, train_size=0.8, test_size=0.1, val_size=0.1):
# Split data
num_train = int(train_size*len(sequences))
num_test = int(test_size*len(sequences))
num_val = int(val_size*len(sequences))
# print(f'{num_train}, {num_test}, {num_val}')
train_seq = sequences[:num_train]
test_seq = sequences[num_train:num_train+num_test]
val_seq = sequences[-num_val:]
# print(f'{len(train_seq)}, {len(test_seq)}, {len(val_seq)}')
# prepare input & target sequences
def prepare_sequences(sequences):
inputs = []
targets = []
for sequence in sequences:
inputs.append(sequence[:-1])
targets.append(sequence[1:])
return inputs, targets
train_inputs, train_targets = prepare_sequences(train_seq)
test_inputs, test_targets = prepare_sequences(test_seq)
val_inputs, val_targets = prepare_sequences(val_seq)
# print(f'{len(train_inputs)}, {len(test_inputs)}, {len(val_inputs)}')
# create datasets
train_set = Dataset(train_inputs, train_targets)
test_set = Dataset(test_inputs, test_targets)
val_set = Dataset(val_inputs, val_targets)
return train_set, test_set, val_set
# RNN from scratch
def init_orthogonal_weights(dim1, dim2):
# initialize
weights = np.random.randn(dim1, dim2)
# print(f'inital random: {weights}')
if dim1 < dim2:
weights = weights.T
# QR factorization (Q = orthogonal)
q, r = np.linalg.qr(weights)
# print(f'q: {q}')
# Make Q uniform according to https://arxiv.org/pdf/math-ph/0609050.pdf
d = np.diag(r, 0)
ph = np.sign(d)
q *= ph
# print(f'q final: {q}')
if dim1 < dim2:
q = q.T
return q # q is orthogonal
def init_rnn(hidden_size, vocab_size):
'''
Initializes RNN
Args:
hidden_size --> hidden state dimensions
vocab_size --> input vector dimensions
Returns:
U --> Weight matrix applied to input, passed to hidden state
V --> Weight matrix from previous hidden state passed to hidden state
W --> Weight matrix applied to output from hidden state to give final output
bias_hidden = bias applied in hidden state
bias_output = bias applied to output
'''
U = init_orthogonal_weights(hidden_size, vocab_size)
V = init_orthogonal_weights(hidden_size, hidden_size)
W = init_orthogonal_weights(vocab_size, hidden_size)
    # Biases are zero column vectors (their shapes must match the gradient
    # accumulators in backward_pass, not square orthogonal matrices)
    bias_hidden = np.zeros((hidden_size, 1))
    bias_output = np.zeros((vocab_size, 1))
return (U, V, W, bias_hidden, bias_output)
# Activation Functions
def sigmoid(x, derivative=False):
"""
Computes sigmoid of array x
Args:
x --> input array
    derivative --> when True, x is treated as the already-activated value
                   sigmoid(z), and the derivative sigmoid(z) * (1 - sigmoid(z)) is returned
    """
    if derivative:
        return x * (1 - x)
    return 1 / (1 + np.exp(-(x + 1e-12)))
def tanh(x, derivative=False):
"""
Computes tanh of array x
Args:
x --> input array
    derivative --> when True, x is treated as the already-activated value
                   tanh(z), and the derivative 1 - tanh(z)**2 is returned
    """
    if derivative:
        return 1 - x ** 2
    x_safe = x + 1e-12
    return (np.exp(x_safe) - np.exp(-x_safe)) / (np.exp(x_safe) + np.exp(-x_safe))
def softmax(x):
    """
    Computes a numerically stable softmax of array x
    Args:
    x --> input array
    """
    e_x = np.exp(x - np.max(x))
    return e_x / np.sum(e_x)
# Forward Pass
def forward_pass(inputs, hidden_state, parameters):
U, V, W, bias_hidden, bias_output = parameters
outputs, hidden_states = [], [hidden_state]
    for i in range(len(inputs)):
        # New hidden state from the current input and the previous hidden state
        hidden_state = tanh(np.dot(U, inputs[i]) + np.dot(V, hidden_states[-1]))
        # Output (logits) from the new hidden state
        output = np.dot(W, hidden_state)
        hidden_states.append(hidden_state)
        outputs.append(output)
return outputs, hidden_states
def clip_gradient_norm(grads, max_norm=0.25):
"""
Prevents exploding gradient by clipping
Clips gradients to have max norm of max_norm
"""
max_norm = float(max_norm)
total_norm = 0
# Using L2 norm squared
for grad in grads:
grad_norm = np.sum(np.power(grad, 2))
total_norm += grad_norm
total_norm = np.sqrt(total_norm)
clip_coef = max_norm / (total_norm + 1e-6)
if clip_coef < 1:
for grad in grads:
grad *= clip_coef
return grads
def cross_entropy_loss(output, target):
loss = 0
for j in range(len(output)):
# print(f'target: {target[j]}, out: {output[j]}, val: {output[j]}, log: {np.log(output[j] + 1e-9)}')
loss += target[j] * np.log(output[j] + 1e-9)
return -loss
def backward_pass(inputs, outputs, hidden_states, targets, params):
U, V, W, bias_hidden, bias_output = params
# Initialize gradients as zero
d_U, d_V, d_W = np.zeros_like(U), np.zeros_like(V), np.zeros_like(W)
d_bias_hidden, d_bias_output = np.zeros_like(bias_hidden), np.zeros_like(bias_output)
d_hidden_next = np.zeros_like(hidden_states[0])
loss = 0
    # Iterate backwards through the elements of the sequence
    for i in reversed(range(len(outputs))):
        # Calculate loss
        loss += cross_entropy_loss(softmax(outputs[i]), targets[i]) / len(targets)
        # Backpropagate into output: for softmax cross-entropy, the gradient
        # w.r.t. the logits is softmax(logits) - one_hot(target)
        d_output = softmax(outputs[i])
        d_output[np.argmax(targets[i])] -= 1
        # Backpropagate into W (hidden_states[i + 1] is the state produced at
        # step i, since hidden_states[0] holds the initial state)
        d_W += np.dot(d_output, hidden_states[i + 1].T)
        d_bias_output += d_output
        # Backpropagate into h
        d_h = np.dot(W.T, d_output) + d_hidden_next
        # Backpropagate through the tanh non-linearity: tanh'(z) = 1 - tanh(z)**2
        d_f = (1 - hidden_states[i + 1] ** 2) * d_h
        d_bias_hidden += d_f
        # Backpropagate into U
        d_U += np.dot(d_f, inputs[i].T)
        # Backpropagate into V (hidden_states[i] is the previous step's state)
        d_V += np.dot(d_f, hidden_states[i].T)
        d_hidden_next = np.dot(V.T, d_f)
# Clip gradients
grads = d_U, d_V, d_W, d_bias_hidden, d_bias_output
grads = clip_gradient_norm(grads)
return loss, grads
def optimizer(parameters, gradients, learning_rate=1e-3):
for parameter, gradient in zip(parameters, gradients):
parameter -= learning_rate * gradient
return parameters
def encode_data(dataset, vocab_size, word_to_idx):
x, y = [], []
for inputs, targets in dataset:
# print(f'input: {len(inputs)}\ntargets{len(targets)}\n')
x.append(one_hot_encode_sequence(inputs, vocab_size, word_to_idx))
y.append(one_hot_encode_sequence(targets, vocab_size, word_to_idx))
# print(f'lengths {len(x)}, {len(y)}')
return (x, y)
def train(training_set, hidden_state, parameters, epochs=1000):
training_loss = []
inputs, targets = training_set
for i in range(epochs):
epoch_training_loss = 0
for x, y in zip(inputs, targets):
hidden_state = np.zeros_like(hidden_state)
# Forward pass
outputs, hidden_states = forward_pass(x, hidden_state, parameters)
# Backward pass
loss, gradients = backward_pass(x, outputs, hidden_states, y, parameters)
if np.isnan(loss):
raise ValueError('ERROR: Gradients have vanished')
# Update parameters (optimizer)
parameters = optimizer(parameters, gradients)
epoch_training_loss += loss
        training_loss.append(epoch_training_loss / len(inputs))
if i%100 == 0:
print(f'Epoch {i}, training loss: {training_loss[-1]}')
return parameters, training_loss
def validate(val_set, hidden_state, parameters, epochs=100):
validation_loss = []
inputs, targets = val_set
for i in range(epochs):
epoch_validation_loss = 0
for x, y in zip(inputs, targets):
hidden_state = np.zeros_like(hidden_state)
#Forward pass
outputs, hidden_states = forward_pass(x, hidden_state, parameters)
            # Backward pass (gradients are discarded; only the loss is needed)
            loss, _ = backward_pass(x, outputs, hidden_states, y, parameters)
            if np.isnan(loss):
                raise ValueError('ERROR: Gradients have vanished')
            epoch_validation_loss += loss
        validation_loss.append(epoch_validation_loss / len(inputs))
        if i % 100 == 0:
            print(f'Epoch {i}, validation loss: {validation_loss[-1]}')
return validation_loss
def test(test_set, hidden_state, parameters, idx_to_word):
inputs, targets = test_set
results = defaultdict()
for x in inputs:
hidden_state = np.zeros_like(hidden_state)
outputs, hidden_states = forward_pass(x, hidden_state, parameters)
        x_decoded = [idx_to_word[np.argmax(x[i])] for i in range(len(x))]
        y_decoded = [idx_to_word[np.argmax(output)] for output in outputs]
x_decoded = ('').join(x_decoded)
y_decoded = ('').join(y_decoded)
results[x_decoded] = y_decoded
return results
def rnn():
# Constants
epochs = 100
hidden_size = 50
hidden_state = np.zeros((hidden_size, 1))
# Data Preparation
sequences = generate_dataset()
word_to_idx, idx_to_word, vocab_size = word_encoding(sequences)
train_set, test_set, val_set = prepare_data(sequences)
# Data encoding
train_set = encode_data(train_set, vocab_size, word_to_idx)
test_set = encode_data(test_set, vocab_size, word_to_idx)
val_set = encode_data(val_set, vocab_size, word_to_idx)
# Initialize rnn
parameters = init_rnn(hidden_size, vocab_size)
training_loss, validation_loss = [], []
# Train
parameters, training_loss = train(train_set, hidden_state, parameters, epochs)
# Validate
validation_loss = validate(val_set, hidden_state, parameters, epochs)
# Test
results = test(test_set, hidden_state, parameters, idx_to_word)
# Print results
for key in results:
print(f'Input: {key}, Output: {results[key]}')
# rnn()
# LSTM
def init_lstm(hidden_size, vocab_size):
z_size = hidden_size + vocab_size
# Forget gate
W_forget = np.zeros((hidden_size, z_size))
b_forget = np.zeros((hidden_size, 1))
# Update gate
W_update = np.zeros((hidden_size, z_size))
b_update = np.zeros((hidden_size, 1))
# Output gate
W_output = np.zeros((hidden_size, z_size))
b_output = np.zeros((hidden_size, 1))
# Candidate
W_g = np.zeros((hidden_size, z_size))
b_g = np.zeros((hidden_size, 1))
# Output: output = W_v * h(t) + b_v
W_v = np.zeros((vocab_size, hidden_size))
b_v = np.zeros((vocab_size, 1))
# Initialize weights
W_forget = init_orthogonal_weights(W_forget.shape[0], W_forget.shape[1])
W_update = init_orthogonal_weights(W_update.shape[0], W_update.shape[1])
W_output = init_orthogonal_weights(W_output.shape[0], W_output.shape[1])
W_g = init_orthogonal_weights(W_g.shape[0], W_g.shape[1])
W_v = init_orthogonal_weights(W_v.shape[0], W_v.shape[1])
return W_forget, W_update, W_output, W_g, W_v, b_forget, b_update, b_output, b_g, b_v
def forward_lstm(inputs, prev_hidden, prev_cell, parameters, activation='softmax'):
# Unpack parameters
W_forget, W_update, W_output, W_g, W_v, b_forget, b_update, b_output, b_g, b_v = parameters
# Lists for computations to be saved
inputs_list = []
forget_gate, update_gate, output_gate = [], [], []
g_comp, v_comp = [], []
hidden_state, cell_state = [], []
outputs_list = []
# Hidden and cell states
hidden_state.append(prev_hidden)
cell_state.append(prev_cell)
    # Iterate through the input sequence
    for x in inputs:
        # Concatenate the previous hidden state and the current input
        z = np.row_stack((hidden_state[-1], x))
        inputs_list.append(z)
        # Forget gate
        f = sigmoid(np.dot(W_forget, z) + b_forget)
        forget_gate.append(f)
        # Update gate (input gate)
        u = sigmoid(np.dot(W_update, z) + b_update)
        update_gate.append(u)
        # Candidate (g)
        g = tanh(np.dot(W_g, z) + b_g)
        g_comp.append(g)
        # Memory (cell) state: keep part of the previous cell state, add the gated candidate
        c = cell_state[-1] * f + g * u
        cell_state.append(c)
        # Output gate
        o = sigmoid(np.dot(W_output, z) + b_output)
        output_gate.append(o)
        # Hidden state
        h = o * tanh(c)
        hidden_state.append(h)
        # Logits computed from the *current* hidden state
        v = np.dot(W_v, h) + b_v
        v_comp.append(v)
        # Final output
        if activation == 'softmax':
            outputs_list.append(softmax(v))
        elif activation == 'linear':
            outputs_list.append(v)
return inputs_list, forget_gate, update_gate, g_comp, cell_state, output_gate, hidden_state, v_comp, outputs_list
def backward_lstm(computation_lists, targets, parameters):
# Unpack inputs
inputs_list, forget_gate, update_gate, g_comp, cell_state, output_gate, hidden_state, v_comp, outputs_list = computation_lists
W_forget, W_update, W_output, W_g, W_v, b_forget, b_update, b_output, b_g, b_v = parameters
# Initialize gradients (as zero) & other variables
W_f_d = np.zeros_like(W_forget)
b_f_d = np.zeros_like(b_forget)
W_u_d = np.zeros_like(W_update)
b_u_d = np.zeros_like(b_update)
W_g_d = np.zeros_like(W_g)
b_g_d = np.zeros_like(b_g)
W_o_d = np.zeros_like(W_output)
b_o_d = np.zeros_like(b_output)
W_v_d = np.zeros_like(W_v)
b_v_d = np.zeros_like(b_v)
d_hidden_prev = np.zeros_like(hidden_state[0])
d_cell_prev = np.zeros_like(cell_state[0])
    hidden_size = hidden_state[0].shape[0]  # dimension of the hidden state, not the sequence length
loss = 0
for i in reversed(range(len(outputs_list))):
        # Cross-entropy loss
        loss += cross_entropy_loss(outputs_list[i], targets[i]) / len(targets)
        # Cell state entering step i (cell_state[0] holds the initial state)
        prev_cell = cell_state[i]
        # Derivative for v: softmax cross-entropy gradient w.r.t. the logits
        dv = np.copy(outputs_list[i])
        dv[np.argmax(targets[i])] -= 1
        W_v_d += np.dot(dv, hidden_state[i + 1].T)
        b_v_d += dv
        # Derivative for hidden state (h)
        dh = np.dot(W_v.T, dv)
        dh += d_hidden_prev
        # Derivative for output gate (o)
        do = dh * tanh(cell_state[i + 1])
        do = sigmoid(output_gate[i], derivative=True) * do
        W_o_d += np.dot(do, inputs_list[i].T)
        b_o_d += do
        # Derivative for cell state (c)
        dC = np.copy(d_cell_prev)
        dC += dh * output_gate[i] * tanh(tanh(cell_state[i + 1]), derivative=True)
# Derivative for candidate (g)
dg = dC * update_gate[i]
dg = tanh(g_comp[i], derivative=True) * dg
W_g_d += np.dot(dg, inputs_list[i].T)
b_g_d += dg
# Derivative for update gate (input gate)
du = dC * g_comp[i]
du = sigmoid(update_gate[i], True) * du
W_u_d += np.dot(du, inputs_list[i].T)
b_u_d += du
        # Derivative for forget gate (f)
        df = dC * prev_cell
        df = sigmoid(forget_gate[i], derivative=True) * df
W_f_d += np.dot(df, inputs_list[i].T)
b_f_d += df
# Update derivatives of prev cell and hidden states
dz = (np.dot(W_forget.T, df)
+ np.dot(W_update.T, du)
+ np.dot(W_g.T, dg)
+ np.dot(W_output.T, do))
d_hidden_prev = dz[:hidden_size, :]
d_cell_prev = forget_gate[i] * dC
# Clip gradients
gradients = W_f_d, W_u_d, W_g_d, W_o_d, W_v_d, b_f_d, b_u_d, b_g_d, b_o_d, b_v_d
gradients = clip_gradient_norm(gradients)
return loss, gradients
def train_lstm(train_set, parameters, hidden_size, epochs, activation='softmax'):
training_loss = []
inputs, targets = train_set
for i in range(epochs):
epoch_training_loss = 0
for x, y in zip(inputs, targets):
hidden_state = np.zeros((hidden_size, 1))
cell_state = np.zeros((hidden_size, 1))
# Forward pass
computation_lists = forward_lstm(x, hidden_state, cell_state, parameters, activation)
# Backward pass
loss, gradients = backward_lstm(computation_lists, y, parameters)
# Update parameters (optimizer)
parameters = optimizer(parameters, gradients)
# Update loss
epoch_training_loss += loss
training_loss.append(epoch_training_loss)
return training_loss, parameters
def validate_lstm(val_set, parameters, hidden_size, epochs, activation='softmax'):
validation_loss = []
inputs, targets = val_set
for i in range(epochs):
epoch_validation_loss = 0
for x, y in zip(inputs, targets):
hidden_state = np.zeros((hidden_size, 1))
cell_state = np.zeros((hidden_size, 1))
# Forward pass
computation_lists = forward_lstm(x, hidden_state, cell_state, parameters, activation)
# Backward pass
loss, gradients = backward_lstm(computation_lists, y, parameters)
# Update loss
epoch_validation_loss += loss
validation_loss.append(epoch_validation_loss)
return validation_loss
def test_lstm(test_set, parameters, hidden_size, ind_to_word=None, activation='softmax'):
inputs, targets = test_set
results1 = defaultdict()
results2 = []
for x in inputs:
hidden_state = np.zeros((hidden_size, 1))
cell_state = np.zeros((hidden_size, 1))
computation_lists = forward_lstm(x, hidden_state, cell_state, parameters, activation)
inputs_list, forget_gate, update_gate, g_comp, cell_state, output_gate, hidden_state, v_comp, outputs = computation_lists
if ind_to_word:
x_decoded = [ind_to_word[np.argmax(x[i])] for i in range(len(x))]
y_decoded = [ind_to_word[np.argmax(output)] for output in outputs]
x_decoded = ('').join(x_decoded)
y_decoded = ('').join(y_decoded)
results1[x_decoded] = y_decoded
else:
results2.append(outputs)
if ind_to_word:
return results1
else:
return results2
def lstm():
# Data Preparation
sequences = generate_dataset()
word_to_idx, idx_to_word, vocab_size = word_encoding(sequences)
train_set, test_set, val_set = prepare_data(sequences)
print(vocab_size)
# Data encoding
train_set = encode_data(train_set, vocab_size, word_to_idx)
test_set = encode_data(test_set, vocab_size, word_to_idx)
val_set = encode_data(val_set, vocab_size, word_to_idx)
# Initialize network
epochs = 20
hidden_size = 50
z_size = hidden_size + vocab_size
parameters = init_lstm(hidden_size, vocab_size)
# Train
training_loss, parameters = train_lstm(train_set, parameters, hidden_size, epochs, 'softmax')
# Validate
validation_loss = validate_lstm(val_set, parameters, hidden_size, epochs)
# Test
results = test_lstm(test_set, parameters, hidden_size, idx_to_word)
# Print results
for key in results:
print(f'Input: {key}, Output: {results[key]}')
return train_set
# lstm()
```
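As a quick sanity check of the gradient-clipping scheme used in the notebook above, the following standalone sketch (with its own copy of the helper, so it runs independently) confirms that clipped gradients end up with a joint L2 norm of at most `max_norm`:

```python
import numpy as np

def clip_gradient_norm(grads, max_norm=0.25):
    """Scales all gradients in place so their joint L2 norm is at most max_norm."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    clip_coef = max_norm / (total_norm + 1e-6)
    if clip_coef < 1:
        for g in grads:
            g *= clip_coef
    return grads

# Deliberately oversized gradients
grads = [np.ones((3, 3)) * 10, np.ones((3, 1)) * 10]
grads = clip_gradient_norm(grads, max_norm=0.25)

total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
print(total)  # just under 0.25
```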
<a href="https://colab.research.google.com/github/macscheffer/DS-Sprint-01-Dealing-With-Data/blob/master/DS_Unit_1_Sprint_Challenge_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Data Science Unit 1 Sprint Challenge 1
## Loading, cleaning, visualizing, and analyzing data
In this sprint challenge you will look at a dataset of the survival of patients who underwent surgery for breast cancer.
http://archive.ics.uci.edu/ml/datasets/Haberman%27s+Survival
Data Set Information:
The dataset contains cases from a study that was conducted between 1958 and 1970 at the University of Chicago's Billings Hospital on the survival of patients who had undergone surgery for breast cancer.
Attribute Information:
1. Age of patient at time of operation (numerical)
2. Patient's year of operation (year - 1900, numerical)
3. Number of positive axillary nodes detected (numerical)
4. Survival status (class attribute)
-- 1 = the patient survived 5 years or longer
-- 2 = the patient died within 5 years
Sprint challenges are evaluated based on satisfactory completion of each part. It is suggested you work through it in order, getting each aspect reasonably working, before trying to deeply explore, iterate, or refine any given step. Once you get to the end, if you want to go back and improve things, go for it!
## Part 1 - Load and validate the data
- Load the data as a `pandas` data frame.
- Validate that it has the appropriate number of observations (you can check the raw file, and also read the dataset description from UCI).
- Validate that you have no missing values.
- Add informative names to the features.
- The survival variable is encoded as 1 for surviving >5 years and 2 for not - change this to be 0 for not surviving and 1 for surviving >5 years (0/1 is a more traditional encoding of binary variables)
At the end, print the first five rows of the dataset to demonstrate the above.
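As a hint for the recoding step, here is a minimal sketch on a toy frame (the column names mirror the task, but the data is made up):

```python
import pandas as pd

# Toy stand-in: survival is 1 (survived >= 5 years) or 2 (died within 5 years)
toy = pd.DataFrame({'age': [30, 62, 45], 'survival': [1, 2, 1]})

# Recode to the more traditional 0/1 encoding: 0 = did not survive, 1 = survived
toy['survival'] = toy['survival'].map({1: 1, 2: 0})
print(toy['survival'].tolist())  # [1, 0, 1]
```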
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
cols = [
'age', # age of patient at the time of operation
'operation_year', # year that the operation took place
'positive_axillary_nodes', # positive axillary nodes that were detected
'survival' # Survival status, 1 == the patient survived for >= 5 years after operation, 2 == the patient died within 5 years of the operation
]
df = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/haberman/haberman.data', names=cols)
df.shape
# the first sum is a method for each series in the df, the second sum adds together the total for each series.
df.isna().sum().sum()
# replacing 2s in the survival column with 0s, meaning they passed away within 5 years after the surgery.
df.survival = df.survival.replace(to_replace=2, value=0)
df.head()
```
## Part 2 - Examine the distribution and relationships of the features
Explore the data - create at least *2* tables (can be summary statistics or crosstabulations) and *2* plots illustrating the nature of the data.
This is open-ended, so to remind - first *complete* this task as a baseline, then go on to the remaining sections, and *then* as time allows revisit and explore further.
Hint - you may need to bin some variables depending on your chosen tables/plots.
```
# 73.53 % survival rate
df.describe()
# creating age bucket, and operation year columns
age_bins = pd.cut(df.age, bins=6)
operation_year_bins = pd.cut(df.operation_year, bins=6)
df['age_bins'] = age_bins
df['operation_year_bins'] = operation_year_bins
# showing, for each age bin, the proportion of its patients falling in each operation-year span
# (each row sums to 1; e.g. a large value in the top row means most of the youngest patients were operated on in that span)
pd.crosstab(df.age_bins,df.operation_year_bins, normalize='index')
# in general, as age goes up, survival rates go down.
df.pivot_table(values='survival', index='age_bins')
# as a graph
df.pivot_table(values='survival', index='age_bins').plot.bar()
# as the year of operation goes up, the trend seems to be volatile, but up
df.pivot_table(values='survival', index='operation_year_bins').plot.bar()
# messy at lower x values, but we can generally see a slight negative correlation between age and positive_axillary_nodes
plt.scatter(x=df.positive_axillary_nodes[df.positive_axillary_nodes > 3], y=df.age[df.positive_axillary_nodes > 3])
plt.xlabel('Positive Axillary Nodes')
```
## Part 3 - Analysis and Interpretation
Now that you've looked at the data, answer the following questions:
- What is at least one feature that looks to have a positive relationship with survival?
- What is at least one feature that looks to have a negative relationship with survival?
- How are those two features related with each other, and what might that mean?
Answer with text, but feel free to intersperse example code/results or refer to it from earlier.
One feature that looks to have a positive relationship with survival is operation year, while both age and positive_axillary_nodes seem to have a negative correlation with survival.
It's important to note that age and operation year are correlated: as the operation year goes up, the patients tend to be older. This may make the overall percentage of survivors year over year look discouraging, i.e. not rising as fast as researchers (or people in general) would hope.
Another interesting observation, though it would need more digging before drawing a conclusion, is that age and positive axillary nodes are negatively correlated. Since positive axillary nodes have the strongest negative (and strongest absolute) correlation with survival, this may tell us something about our definition of survival; it needs to be thought about in the scope of the question we are asking.
```
df.corr()
print('The correlation between age and operation year:',round(df.age.corr(df.operation_year),4))
df.pivot_table(values='age', index='operation_year').plot.bar()
plt.title('Average Age in Each Operation Year')
plt.ylabel('Age')
plt.show()
```