Author: **First and last name, student ID number**
Date: 1 October 2018
*I confirm that I am the author of this project assignment and that I prepared all of its content myself. I am aware that if plagiarism is established, I will not meet the requirements to sit the exam.*
---
# *Instructions for using this template*<span class="tocSkip"></span>
*The individual seminar must contain a **table of contents** with working links to the chapters. The easiest way to prepare it is with the [toc2](https://github.com/ipython-contrib/jupyter_contrib_nbextensions/tree/master/src/jupyter_contrib_nbextensions/nbextensions/toc2) extension. To install it, run the cell below and then restart `jupyter notebook`:*
```
# Install toc2
%sx conda install -c conda-forge jupyter_contrib_nbextensions -y
%sx jupyter contrib nbextension install --user
%sx jupyter nbextension enable toc2/main
```
*Once you have installed the extension above, enable it in the report (**step 1**), open the table-of-contents settings (**step 2**), and in the window that opens select the option "Add notebook ToC cell" (**step 3**).*
<img src="kazalo.gif" width=700>
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Task-definition" data-toc-modified-id="Task-definition-1"><span class="toc-item-num">1 </span>Task definition</a></span><ul class="toc-item"><li><span><a href="#Symbolic-computation" data-toc-modified-id="Symbolic-computation-1.1"><span class="toc-item-num">1.1 </span>Symbolic computation</a></span></li><li><span><a href="#..." data-toc-modified-id="...-1.2"><span class="toc-item-num">1.2 </span>...</a></span></li></ul></li><li><span><a href="#Systems-of-linear-equations" data-toc-modified-id="Systems-of-linear-equations-2"><span class="toc-item-num">2 </span>Systems of linear equations</a></span><ul class="toc-item"><li><span><a href="#..." data-toc-modified-id="...-2.1"><span class="toc-item-num">2.1 </span>...</a></span></li></ul></li><li><span><a href="#Interpolation-/-approximation" data-toc-modified-id="Interpolation-/-approximation-3"><span class="toc-item-num">3 </span>Interpolation / approximation</a></span><ul class="toc-item"><li><span><a href="#..." data-toc-modified-id="...-3.1"><span class="toc-item-num">3.1 </span>...</a></span></li></ul></li><li><span><a href="#Solving-equations-(root-finding)" data-toc-modified-id="Solving-equations-(root-finding)-4"><span class="toc-item-num">4 </span>Solving equations (root finding)</a></span><ul class="toc-item"><li><span><a href="#..." data-toc-modified-id="...-4.1"><span class="toc-item-num">4.1 </span>...</a></span></li></ul></li><li><span><a href="#Integration-/-differentiation" data-toc-modified-id="Integration-/-differentiation-5"><span class="toc-item-num">5 </span>Integration / differentiation</a></span><ul class="toc-item"><li><span><a href="#..." data-toc-modified-id="...-5.1"><span class="toc-item-num">5.1 </span>...</a></span></li></ul></li><li><span><a href="#Solving-differential-equations" data-toc-modified-id="Solving-differential-equations-6"><span class="toc-item-num">6 </span>Solving differential equations</a></span><ul class="toc-item"><li><span><a href="#..." data-toc-modified-id="...-6.1"><span class="toc-item-num">6.1 </span>...</a></span></li></ul></li></ul></div>
---
# Task definition
The task definition should include a short description of the problem and a figure!
Figure 3.22 from Slavič 2014:
<img src="slika1.png" width=250>
The purpose of the individual seminar is to demonstrate **an appropriate choice and use of numerical methods, to comment on how they work, and to evaluate the results obtained**. Choose a physical problem accordingly; solving it should cover all of the prescribed chapters:
* *symbolic computation*,
* *systems of linear equations*,
* *interpolation or approximation*,
* *root finding*,
* *integration or differentiation*,
* *solving differential equations*.
The grade for the individual project consists of:
* *numerical correctness* (30%),
* *structure, tidiness, use of modules, code style (docstrings)* (30%),
* *personal contribution / creative addition* (30%),
* *code tests and/or a user interface* (10%).
**Submission**: send a zip file named **`First and last name, student ID number.zip`** to your assistant's email address.
---
## Symbolic computation
## ...
---
# Systems of linear equations
## ...
---
# Interpolation / approximation
## ...
---
# Solving equations (root finding)
## ...
---
# Integration / differentiation
## ...
---
# Solving differential equations
## ...
---
```
# Remember: library imports are ALWAYS at the top of the script, no exceptions!
import sqlite3
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from datetime import datetime
from sklearn.impute import KNNImputer
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler, StandardScaler, OneHotEncoder
from pandas_profiling import ProfileReport
%matplotlib inline
sns.set()
```
# Context
The data we will be using throughout the practical classes comes from a small relational database whose schema can be seen below:

# Reading the Data
```
# path to database
my_path = os.path.join("..", "data", "datamining.db")
# connect to the database
conn = sqlite3.connect(my_path)
# the query
query = """
select
age,
income,
frq,
rcn,
mnt,
clothes,
kitchen,
small_appliances,
toys,
house_keeping,
dependents,
per_net_purchase,
g.gender,
e.education,
m.status,
r.description
from customers as c
join genders as g on g.id = c.gender_id
join education_levels as e on e.id = c.education_id
join marital_status as m on m.id = c.marital_status_id
join recommendations as r on r.id = c.recommendation_id
order by c.id;
"""
df = pd.read_sql_query(query, conn)
```
## Make a copy of your original dataset
Why? So that you can always compare intermediate results against, or revert to, the untouched data.
```
df_original = df.copy()
```
# Metadata
- *id* - The unique identifier of the customer
- *age* - The year of birth of the customer
- *income* - The income of the customer
- *frq* - Frequency: number of purchases made by the customer
- *rcn* - Recency: number of days since last customer purchase
- *mnt* - Monetary: amount of € spent by the customer in purchases
- *clothes* - Number of clothes items purchased by the customer
- *kitchen* - Number of kitchen items purchased by the customer
- *small_appliances* - Number of small_appliances items purchased by the customer
- *toys* - Number of toys items purchased by the customer
- *house_keeping* - Number of house_keeping items purchased by the customer
- *dependents* - Binary. Whether or not the customer has dependents
- *per_net_purchase* - Percentage of purchases made online
- *education* - Education level of the customer
- *status* - Marital status of the customer
- *gender* - Gender of the customer
- *description* - Last customer's recommendation description
## Problems:
- Duplicates?
- Data types?
- Missing values?
- Strange values?
- Descriptive statistics?
### Take a closer look and point out possible problems:
(hint: a missing value in pandas is represented with a NaN value)
```
# replace "" by nans
df.replace("", np.nan, inplace=True)
# check dataset data types again
df.dtypes
# check descriptive statistics again
df.describe(include="all").T
# Define metric and non-metric features. Why?
non_metric_features = ["education", "status", "gender", "dependents", "description"]
metric_features = df.columns.drop(non_metric_features).to_list()
```
## Fill missing values (Data imputation)
How can we fill missing values?
```
# Creating a copy to apply central tendency measures imputation
df_central = df.copy()
# count of missing values
df_central.isna().sum()
df_central[metric_features].median()  # medians only make sense for the metric features
modes = df_central[non_metric_features].mode().loc[0]
modes
df_central.fillna(df_central[metric_features].median(), inplace=True)
df_central.fillna(modes, inplace=True)
df_central.isna().sum() # checking how many NaNs we still have
# Creating new df copy to explore neighborhood imputation
df_neighbors = df.copy()
# Seeing rows with NaNs
nans_index = df_neighbors.isna().any(axis=1)
df_neighbors[nans_index]
# KNNImputer - only works for numerical variables
imputer = KNNImputer(n_neighbors=5, weights="uniform")
df_neighbors[metric_features] = imputer.fit_transform(df_neighbors[metric_features])
# See rows with NaNs imputed
df_neighbors.loc[nans_index, metric_features]
# let's keep the central imputation
df = df_central.copy()
```
## An overview of our previous data exploration
You can also explore this dataset using the exported `pandas-profiling` report.





## Outlier removal
Why do we need to remove outliers? Which methods can we use?
Let's start by "manually" filtering the dataset's outliers
```
# This may vary from session to session, and is prone to varying interpretations.
# A simple example is provided below:
filters1 = (
    (df['house_keeping'] <= 50)
    &
    (df['kitchen'] <= 40)
    &
    (df['toys'] <= 35)
    &
    (df['education'] != 'OldSchool')
)
df_1 = df[filters1]
print('Percentage of data kept after removing outliers:', np.round(df_1.shape[0] / df_original.shape[0], 4))
```
### Outlier removal using only the IQR method
Why should you use/not use this method?
```
q25 = df[metric_features].quantile(.25)  # quantiles only make sense for the metric features
q75 = df[metric_features].quantile(.75)
iqr = (q75 - q25)
upper_lim = q75 + 1.5 * iqr
lower_lim = q25 - 1.5 * iqr
filters2 = []
for metric in metric_features:
    llim = lower_lim[metric]
    ulim = upper_lim[metric]
    filters2.append(df[metric].between(llim, ulim, inclusive="both"))  # inclusive="both" replaces inclusive=True in pandas >= 1.3
filters2 = pd.Series(np.all(filters2, 0))
df_2 = df[filters2]
print('Percentage of data kept after removing outliers:', np.round(df_2.shape[0] / df_original.shape[0], 4))
```
What do you think about this percentage?
## Combining different outlier methods
Combining methods gives a more robust and consistent outlier detection:
```
df_3 = df[(filters1 | filters2)]
print('Percentage of data kept after removing outliers:', np.round(df_3.shape[0] / df_original.shape[0], 4))
# Get the manual filtering version
df = df_1.copy()
```
## Feature Engineering
A reminder of our metadata:
- *id* - The unique identifier of the customer
- *age* - The year of birth of the customer
- *income* - The income of the customer
- *frq* - Frequency: number of purchases made by the customer
- *rcn* - Recency: number of days since last customer purchase
- *mnt* - Monetary: amount of € spent by the customer in purchases
- *clothes* - Number of clothes items purchased by the customer
- *kitchen* - Number of kitchen items purchased by the customer
- *small_appliances* - Number of small_appliances items purchased by the customer
- *toys* - Number of toys items purchased by the customer
- *house_keeping* - Number of house_keeping items purchased by the customer
- *dependents* - Binary. Whether or not the customer has dependents
- *per_net_purchase* - Percentage of purchases made online
- *education* - Education level of the customer
- *status* - Marital status of the customer
- *gender* - Gender of the customer
- *description* - Last customer's recommendation description
```
df['birth_year'] = df['age']
df['age'] = datetime.now().year - df['birth_year']
df['spent_online'] = (df['per_net_purchase'] / 100) * df['mnt']
# How can we avoid having as many extreme values in 'rcn'?
print((df['rcn']>100).value_counts())
rcn_t = df['rcn'].copy()
rcn_t.loc[rcn_t>100] = 100
df['rcn'] = rcn_t
```
## Variable selection: Redundancy VS Relevancy
### Redundancy
We already saw our original correlation matrix:

```
# Select variables according to their correlations
df.drop(columns=['birth_year', 'age', 'mnt'], inplace=True)
# Updating metric_features
metric_features.append("spent_online")
metric_features.remove("mnt")
metric_features.remove("age")
```
### Relevancy
Selecting variables based on how relevant each one is to the task. Examples: removing variables that are uncorrelated with the target, stepwise regression, choosing variables for product clustering, choosing variables for socio-demographic clustering, ...
Variables that aren't correlated with any other variable are often not relevant either. We will not focus on this much here, since we don't have a defined task yet.
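As a sketch of relevancy-based filtering (using a synthetic `target` column — this dataset has none): keep only the features whose absolute correlation with the target exceeds a threshold.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
toy = pd.DataFrame({"x1": rng.normal(size=500), "x2": rng.normal(size=500)})
toy["target"] = 3 * toy["x1"] + rng.normal(scale=0.1, size=500)  # only x1 drives the target

# Keep features whose absolute correlation with the target exceeds a threshold
corr = toy.corr()["target"].drop("target").abs()
relevant = corr[corr > 0.3].index.to_list()
print(relevant)
```

The threshold (0.3 here) is arbitrary and would normally be chosen by inspecting the correlation values themselves.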
## Data Normalization
```
df_minmax = df.copy()
# Use MinMaxScaler to scale the data
scaler = MinMaxScaler()
scaled_feat = scaler.fit_transform(df_minmax[metric_features])
scaled_feat
# See what the fit method is doing (notice the trailing underscore):
print("Parameters fitted:\n", scaler.data_min_, "\n", scaler.data_max_)
df_minmax[metric_features] = scaled_feat
df_minmax.head()
# Checking max and min of minmaxed variables
df_minmax[metric_features].describe().round(2)
df_standard = df.copy()
scaler = StandardScaler()
scaled_feat = scaler.fit_transform(df_standard[metric_features])
scaled_feat
# See what the fit method is doing (notice the trailing underscore):
print("Parameters fitted:\n", scaler.mean_, "\n", scaler.var_)
df_standard[metric_features] = scaled_feat
df_standard.head()
# Checking mean and variance of standardized variables
df_standard[metric_features].describe().round(2)
```
**Important**: What if we had a training and test set? Should we fit a Scaler in both? What about other Sklearn objects?
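One common answer, sketched here (not part of the original notebook): fit the scaler on the training set only, then reuse the fitted parameters on the test set, so that no information from the test set leaks into preprocessing. The same fit-on-train rule applies to other sklearn transformers (imputers, encoders, PCA).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.arange(20, dtype=float).reshape(10, 2)  # toy data
X_train, X_test = train_test_split(X, test_size=0.3, random_state=42)

scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)  # fit ONLY on the training data
X_test_s = scaler.transform(X_test)        # reuse the training mean/variance
```

Fitting a second scaler on the test set would silently evaluate the model in a differently scaled space.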
```
df = df_standard.copy()
```
## One-hot encoding
```
df_ohc = df.copy()
# First let's remove status=Whatever
df_ohc.loc[df_ohc['status'] == 'Whatever', 'status'] = df['status'].mode()[0]
# Use OneHotEncoder to encode the categorical features. Get feature names and create a DataFrame
# with the one-hot encoded categorical features (pass feature names)
ohc = OneHotEncoder(sparse_output=False, drop="first")  # `sparse_output` replaces `sparse` in sklearn >= 1.2
ohc_feat = ohc.fit_transform(df_ohc[non_metric_features])
ohc_feat_names = ohc.get_feature_names_out()  # `get_feature_names` was removed in newer sklearn
ohc_df = pd.DataFrame(ohc_feat, index=df_ohc.index, columns=ohc_feat_names) # Why the index=df_ohc.index?
ohc_df
# Reassigning df to contain ohc variables
df_ohc = pd.concat([df_ohc.drop(columns=non_metric_features), ohc_df], axis=1)
df_ohc.head()
df = df_ohc.copy()
```
## Dimensionality Reduction
```
df_pca = df.copy()
```
### [A more specific explanation of PCA](https://builtin.com/data-science/step-step-explanation-principal-component-analysis)

```
# Use PCA to reduce dimensionality of data
pca = PCA()
pca_feat = pca.fit_transform(df_pca[metric_features])
pca_feat # What is this output?
```
### How many Principal Components to retain?
```
# Output PCA table
pd.DataFrame(
{"Eigenvalue": pca.explained_variance_,
"Difference": np.insert(np.diff(pca.explained_variance_), 0, 0),
"Proportion": pca.explained_variance_ratio_,
"Cumulative": np.cumsum(pca.explained_variance_ratio_)},
index=range(1, pca.n_components_ + 1)
)
# figure and axes
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
# draw plots
ax1.plot(pca.explained_variance_, marker=".", markersize=12)
ax2.plot(pca.explained_variance_ratio_, marker=".", markersize=12, label="Proportion")
ax2.plot(np.cumsum(pca.explained_variance_ratio_), marker=".", markersize=12, linestyle="--", label="Cumulative")
# customizations
ax2.legend()
ax1.set_title("Scree Plot", fontsize=14)
ax2.set_title("Variance Explained", fontsize=14)
ax1.set_ylabel("Eigenvalue")
ax2.set_ylabel("Proportion")
ax1.set_xlabel("Components")
ax2.set_xlabel("Components")
ax1.set_xticks(range(0, pca.n_components_, 2))
ax1.set_xticklabels(range(1, pca.n_components_ + 1, 2))
ax2.set_xticks(range(0, pca.n_components_, 2))
ax2.set_xticklabels(range(1, pca.n_components_ + 1, 2))
plt.show()
# Perform PCA again with the number of principal components you want to retain
pca = PCA(n_components=4)
pca_feat = pca.fit_transform(df_pca[metric_features])
pca_feat_names = [f"PC{i}" for i in range(pca.n_components_)]
pca_df = pd.DataFrame(pca_feat, index=df_pca.index, columns=pca_feat_names) # remember index=df_pca.index
pca_df
# Reassigning df to contain pca variables
df_pca = pd.concat([df_pca, pca_df], axis=1)
df_pca.head()
```
### How do we interpret each Principal Component (with style)?
```
def _color_red_or_green(val):
    if val < -0.45:
        color = 'background-color: red'
    elif val > 0.45:
        color = 'background-color: green'
    else:
        color = ''
    return color
# Interpreting each Principal Component
loadings = df_pca[metric_features + pca_feat_names].corr().loc[metric_features, pca_feat_names]
loadings.style.applymap(_color_red_or_green)
df = df_pca.copy()
```
**Some final data preprocessing**
```
# Do this after checking the new pandas profiling report
df.drop(columns=['PC3'], inplace=True)
```
## Redo data exploration
Check if the data looks the way you expect it to.
- Have you missed some outliers?
- Are there still missing values?
- Is the data normalized?
This is an iterative process. It is likely you will change your preprocessing steps frequently throughout your group work.
```
ProfileReport(
df,
title='Tugas Customer Data Preprocessed',
correlations={
"pearson": {"calculate": True},
"spearman": {"calculate": False},
"kendall": {"calculate": False},
"phi_k": {"calculate": False},
"cramers": {"calculate": False},
},
)
```
**Is everything as you expect it to be? Save the data for later use.**
```
df.to_csv(os.path.join("..", "data", "tugas_preprocessed.csv"), index=False)
```
---
#### Xl Juleoriansyah Nksrsb / 13317005
#### Muhamad Asa Nurrizqita Adhiem / 13317018
#### Oktoni Nur Pambudi / 13317022
#### Bernardus Rendy / 13317041
# Problem Definition
#### Consider a tank with base area A and outlet area a, under gravitational acceleration g [parameters A, a, g]
#### The tank is filled by a fluid flow Vin (the fluid is assumed incompressible), producing a liquid height h [input variable Vin, output variable h]
#### The outflow $V_{out}$ is
$V_{out} = a \sqrt{2gh} $
#### which leads to the non-linear differential equation
$ \frac {dh}{dt} = \frac {V_{in}}{A} - \frac {a}{A}\sqrt{2gh}$
<img src="./dinsis_nonlinear.png" style="width:50%;">
#### Image source: lecture slides for Dinamika Sistem dan Simulasi (Eko M. Budi & Estiyanti Ekawati), System Modeling: Fluidic Systems
```
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def dhdt_non(h, t, Vin, A, a, g):
    return (Vin / A) - (a / A) * np.sqrt(2 * g * h)
# initial condition
h0 = 0
# Parameter
A = 1
g = 9.8
Vin =100
a = 1
# time points
t = np.linspace(0,100)
# solve ODEs
hnon = odeint(dhdt_non,h0,t,args=(Vin,A,a,g))
# plot results
plt.plot(t,hnon,'r-',linewidth=2,label='h_non_linear')
plt.xlabel('time')
plt.ylabel('h(t)')
plt.legend()
plt.show()
```
# Alternative Approach: Linearization
#### For a non-linear differential equation it is hard to obtain a transfer function (h sits under a square root and cannot easily be isolated) or an analytic solution without numerics, so a method called linearization was developed. Linearization also makes the subsequent mathematics much simpler
#### Linearization uses a first-order Taylor expansion to turn the differential equation $ \frac {dh(t)}{dt} = \frac {q_i(t)}{A} - \frac {a}{A}\sqrt{2gh(t)}$ into a linear one
<img src="./dinsis_linear1.png" style="width:50%">
#### Image source: lecture slides for Dinamika Sistem dan Simulasi (Eko M. Budi & Estiyanti Ekawati), System Modeling: Fluidic Systems
#### This yields (noting that qi is Vin)
# $ \frac {dh}{dt} - \frac {d\bar h}{dt} = \frac {V_{in}- \bar {V_{in}}}{A} - (\frac {a \sqrt {2g}}{2A \sqrt {\bar h}})(h - \bar h) $
#### After linearization we obtain a linear differential equation that is valid near $ \bar h $
#### Written compactly,
# $ \frac {d\hat h}{dt} = \frac {\hat {V_{in}}}{A} - \frac{\hat h}{R} $
#### where
### $ \hat h = h-\bar h $
### $ \hat {V_{in}} = V_{in} - \bar {V_{in}} $
### $ R=\frac {A \sqrt {2 \bar {h}}}{a \sqrt{g}} $
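The first-order Taylor expansion above can be checked symbolically; a minimal sketch with sympy (which is not used elsewhere in this notebook):

```python
import sympy as sp

h, hbar, g, A, a = sp.symbols('h hbar g A a', positive=True)

# Non-linear outflow term of dh/dt
f = -(a / A) * sp.sqrt(2 * g * h)

# First-order Taylor expansion around the operating point h = hbar
slope = sp.diff(f, h).subs(h, hbar)
f_lin = f.subs(h, hbar) + slope * (h - hbar)

# The slope should equal -1/R with R = A*sqrt(2*hbar)/(a*sqrt(g))
R = A * sp.sqrt(2 * hbar) / (a * sp.sqrt(g))
print(sp.simplify(slope + 1 / R))  # 0
```

This confirms that the coefficient of $(h - \bar h)$ in the expansion is exactly $-1/R$ as defined above.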
#### The operating point $ \bar h $ must therefore be chosen to match the region where the equation will be used
#### Here the equation is used from 0 up to steady state; at steady state
# $ \frac {dh}{dt} = 0 $
#### From the equation
# $ \frac {dh}{dt} = \frac {V_{in}}{A} - \frac {a}{A}\sqrt{2gh}$
# $ 0 = V_{in} - a \sqrt{2g\bar {h}} $
# $ \bar {h} = \frac {V_{in}^2}{2ga^2} $
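The steady-state level derived above, $ \bar h = V_{in}^2/(2ga^2) $, can also be checked numerically against the non-linear model; a sketch using the same parameters as the first simulation:

```python
import numpy as np
from scipy.integrate import odeint

def dhdt_non(h, t, Vin, A, a, g):
    return (Vin / A) - (a / A) * np.sqrt(2 * g * h)

A, a, g, Vin = 1, 1, 9.8, 100
hbar = Vin**2 / (2 * g * a**2)          # analytic steady state
t = np.linspace(0, 300, 3000)
h = odeint(dhdt_non, 0, t, args=(Vin, A, a, g))
print(hbar, float(h[-1, 0]))             # the simulation settles at hbar
```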
#### The operating point $ \bar V_{in} $ must likewise be chosen to match the region where the equation is used
#### If the input is a step function, then
# $ \bar V_{in} = V_{in} $
#### Since $ V_{in} $ is constant, in the final condition where $ \bar V_{in} $ operates it also remains equal to $ V_{in} $
#### Using the equation derived earlier
# $ \frac {d\hat h}{dt} = \frac {\hat {V_{in}}}{A} - \frac{\hat h}{R} $
#### where
### $ \hat h = h-\bar h $
### $ \hat {V_{in}} = V_{in} - \bar {V_{in}} $
### $ R=\frac {A \sqrt {2 \bar {h}}}{a \sqrt{g}} $
```
def dhhatdt_lin(hhat, t, Vinhat, A, a, g, R):
    return (Vinhat / A) - (hhat / R)
# Initial condition
h0 = 0
# Input
Vin=100
# Parameter
A = 1
g = 9.8
a = 1
hbar = Vin**2/(2*g*a**2)
R=(A*np.sqrt(2*hbar))/(a*np.sqrt(g))
hhat0 = h0-hbar
Vinbar= Vin
Vinhat= Vin-Vinbar
# time points
t = np.linspace(0,100)
# solve the ODE; the solution is for hhat, so add hbar back, since h = hhat + hbar
hlin = odeint(dhhatdt_lin,hhat0,t,args=(Vinhat,A,a,g,R))
hlin = hlin+hbar
# plot results
plt.plot(t,hlin,'b-',linewidth=2,label='h_linear')
plt.xlabel('time')
plt.ylabel('h(t)')
plt.legend()
plt.show()
```
# Comparing the Non-linear and Linearized Models
```
plt.plot(t,hnon,'r-',linewidth=2,label='h_non_linear')
plt.plot(t,hlin,'b-',linewidth=2,label='h_linear')
plt.xlabel('time')
plt.ylabel('h(t)')
plt.legend()
plt.show()
```
#### The system responses differ when the linearized approximation of dh/dt is used
#### Although there is a difference, it is not significant for this system, with a Sum Squared Error of:
```
err=hnon-hlin
err=err*err
sum(err)
```
# Parameter Interface
```
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from ipywidgets import interact, fixed, widgets, Button, Layout

def dhhatdt_lin(hhat, t, Vinhat, A, a, g, R):
    return (Vinhat / A) - (hhat / R)

def dhdt_non(h, t, Vin, A, a, g):
    return (Vin / A) - (a / A) * np.sqrt(2 * g * h)
g = 9.8
range_A = widgets.FloatSlider(
    value=2.,
    min=1.,
    max=10.0,
    step=0.1,
    description='Tank base area ($dm^2$):',
    layout=Layout(width='80%', height='50px'),
    style={'description_width': '200px'},
    readout_format='.1f',
)
range_a = widgets.FloatSlider(
    value=2.,
    min=0.1, max=+3., step=0.1,
    description='Outlet pipe area ($dm^2$):',
    layout=Layout(width='80%', height='50px'),
    style={'description_width': '200px'},
    readout_format='.1f',
)
range_Vin = widgets.FloatSlider(
    value=2.,
    min=0.1,
    max=100.0,
    step=0.1,
    description='Inlet flow rate ($dm^3 / s$):',
    layout=Layout(width='80%', height='50px'),
    style={'description_width': '200px'},
    readout_format='.1f',
)
range_h0 = widgets.FloatSlider(
    value=2.,
    min=0.,
    max=500.0,
    step=0.1,
    description='Initial height ($dm$):',
    layout=Layout(width='80%', height='50px'),
    style={'description_width': '200px'},
    readout_format='.1f',
)
range_amplitude = widgets.FloatSlider(
    value=2.,
    min=0.,
    max=100.0,
    step=0.1,
    description='Sinusoidal disturbance amplitude:',
    layout=Layout(width='80%', height='50px'),
    style={'description_width': '200px'},
    readout_format='.1f',
)
time_slider = widgets.IntSlider(
    min=100, max=1000, step=1, value=100,
    description='Maximum time ($s$):',
    layout=Layout(width='80%', height='50px'),
    style={'description_width': '200px'},
    readout_format='.1f',
)
max_err_button = widgets.Button(
    description='Maximum error',
)
max_err_sin_button = widgets.Button(
    description='Maximum sinusoidal error',
)
min_err_button = widgets.Button(
    description='Minimum error',
)
tab1 = widgets.VBox(children=[range_A,range_a,range_Vin,range_h0,time_slider,max_err_button,min_err_button])
tab2 = widgets.VBox(children=[range_A,range_a,range_Vin,range_h0,range_amplitude,time_slider,max_err_sin_button,min_err_button])
tab = widgets.Tab(children=[tab1, tab2])
tab.set_title(0, 'Step')
tab.set_title(1, 'Sinusoidal disturbance')
A = range_A.value
a = range_a.value
Vin = range_Vin.value
h0 = range_h0.value
tmax = time_slider.value
amp = range_amplitude.value
# Max-error preset for the step input
def max_err_set(b=None):
    range_A.value = 10.0
    range_a.value = 0.1
    range_Vin.value = 100
    range_h0.value = 0
    time_slider.value = 1000
@max_err_button.on_click
def maximum_err_set(b):
    max_err_set()
# Max-error preset for the sinusoidal disturbance
def max_err_sin_set(b=None):
    range_A.value = 10.0
    range_a.value = 2.9
    range_Vin.value = 100
    range_h0.value = 0
    time_slider.value = 150
    range_amplitude.value = 100
@max_err_sin_button.on_click
def maximum_err_sin_set(b):
    max_err_sin_set()
# Min-error preset for both step and sinusoidal inputs
def min_err_set(b=None):
    range_A.value = 1.0
    range_a.value = 2.9
    range_Vin.value = 100
    range_h0.value = 50
    time_slider.value = 100
    range_amplitude.value = 0
@min_err_button.on_click
def minimum_err_set(b):
    min_err_set()
def plot3(A, a, Vin, h0, amp, tmax):
    t = np.linspace(50, tmax, 1000)
    f, ax = plt.subplots(1, 1, figsize=(8, 6))
    if tab.selected_index == 1:
        def dhdt_non_sin(h, t, Vin, A, a, g, amp):
            return ((Vin + abs(amp * np.sin(np.pi * t))) / A) - (a / A) * np.sqrt(2 * g * h)
        def dhhatdt_lin_sin(hhat, t, Vin, A, a, g, amp):
            V = Vin + abs(amp * np.sin(np.pi * t))
            R = (A * np.sqrt(2 * hbar)) / (a * np.sqrt(g))
            Vinbar = Vin
            Vinhat = V - Vinbar
            return (Vinhat / A) - (hhat / R)
        hbar = Vin**2 / (2 * g * a**2)
        hhat0 = h0 - hbar
        hlin = odeint(dhhatdt_lin_sin, hhat0, t, args=(Vin, A, a, g, amp))
        hlin = hlin + hbar
        hnon = odeint(dhdt_non_sin, h0, t, args=(Vin, A, a, g, amp))
        ax.plot(t, hlin, color='blue', label='linear')
        ax.plot(t, hnon, color='red', label='non-linear')
        ax.title.set_text('Step input with sinusoidal disturbance')
        ax.legend()
    if tab.selected_index == 0:
        hbar = Vin**2 / (2 * g * a**2)
        R = (A * np.sqrt(2 * hbar)) / (a * np.sqrt(g))
        hhat0 = h0 - hbar
        Vinbar = Vin
        Vinhat = Vin - Vinbar
        hlin = odeint(dhhatdt_lin, hhat0, t, args=(Vinhat, A, a, g, R))
        hlin = hlin + hbar
        hnon = odeint(dhdt_non, h0, t, args=(Vin, A, a, g))
        ax.plot(t, hlin, color='blue', label='linear')
        ax.plot(t, hnon, color='red', label='non-linear')
        ax.title.set_text('Step input')
        ax.legend()
ui = tab
out = widgets.interactive_output(plot3,{'A':range_A,'a':range_a,'Vin':range_Vin,'h0':range_h0,'amp':range_amplitude,'tmax':time_slider})
display(ui,out)
```
# Discussion
From the plots above, with the blue (linear) and red (non-linear) curves, we can see that the two curves sometimes coincide or nearly overlap, meaning the error between the linear and non-linear models is small, while at other times they separate and the error becomes large. The interactive controls can be used to study how changing the parameters affects the responses of the non-linear and linearized models. To see the difference between the linear and non-linear equations and to establish the limits of the linearized equation, we deliberately make this error large. The "Maximum error" and "Minimum error" buttons set parameter presets for the maximum and minimum error. The error can be increased by:
#### 1) Decreasing the initial fluid height (h0) in the tank, so that the range between h0 and hfinal grows
This makes h and hbar differ strongly during the transient. As the range between h0 and hfinal widens, h moves further away from hbar before the response settles, because hbar is taken at steady state.
#### 2) Increasing the tank base area (A)
The larger A is, the longer it takes to reach steady state, so h takes longer to approach hbar and the error grows.
#### 3) Decreasing the outlet pipe area (a) [while the system response is rising]
#### 4) Increasing the outlet pipe area (a) [while the system response is falling]
The outlet area a, which determines how much fluid leaves the tank, sets whether the response rises or falls. Factors 2, 3, and 4 also show that the error appears during the transient, because hbar is assumed to be the steady-state value. While the response is rising, a smaller a makes the difference
## $ \frac{dh}{dt} - \frac{d\bar h}{dt} $
larger, so the transient error grows. The opposite holds while the response is falling.
#### 5) Increasing Vin (while the response is rising)
#### 6) Decreasing Vin (while the response is falling)
Factors 5 and 6 show that increasing Vin makes the blue (linear) curve approach steady state more slowly (the curve shifts to the right), which means the linearization error grows. This again follows from hbar being taken at steady state.
#### 7) Increasing the amplitude of the sinusoidal disturbance
Factor 7 shows that Vinbar and Vin must differ as little as possible and must match the system's operating range.
---
##### Copyright 2019 Qiyang Hu
```
#@title Licensed under MIT License (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://huqy.github.io/idre_learning_machine_learning/LICENSE.md
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Useful Routines for Colab
## Start Colab by requesting a free GPU from Google Cloud
Edit -> Notebook settings -> Select GPU as Hardware accelerator
**OR**
1. Google Drive -> New -> More -> Connect more apps -> Collaboratory
2. Google Drive -> New -> More -> Collaboratory
3. Runtime -> Interrupt execution
4. Runtime -> Change runtime type -> Select GPU as Hardware accelerator
## Check the resources obtained from Colab
Google Colab is a free-to-use Jupyter notebook environment. It provides a free GPU (e.g. a Tesla T4), about 12 GB of RAM in total, and sessions of up to 12 hours in a row.
```
!lsb_release -a
!uname -r
!lscpu | grep 'Model name'
!lscpu | grep 'Socket(s):'
!lscpu | grep 'Thread(s) per core'
!lscpu | grep "L3 cache"
!cat /proc/meminfo | grep 'MemAvailable'
!df -h / | awk '{print $4}'
!nvidia-smi
import tensorflow as tf
tf.test.gpu_device_name()
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
```
## Check the libs in Colab
```
import sys
print('The python version is', sys.version)
import sklearn
print('The scikit-learn version is {}.'.format(sklearn.__version__))
try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass
```
## Mounting Google Drive to Colab's /content/drive
```
from google.colab import drive
drive.mount('/content/drive')
```
## Using Kaggle API for Google Colaboratory
First, we need to download the API token from Kaggle:
1. Go to kaggle.com -> log in -> click "my account"
2. Scroll down to API and hit “Create New API Token.” It will prompt you to download a file called **kaggle.json** to your local computer.
```
from google.colab import files
files.upload()
#!pip install -q kaggle
!mkdir -p /root/.kaggle
!cp kaggle.json /root/.kaggle
!chmod 600 /root/.kaggle/kaggle.json
!kaggle config set -n path -v /content
!kaggle competitions list -s titanic
!kaggle competitions download -c titanic -p /content
```
## Directly open a Jupyter notebook with Colab
1. Change URL from "https://github.com/..." to "https://colab.research.google.com/github/..."
2. OR: just use the "[Open in Colab](https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo)" Chrome extension
---
## References:
[1] https://www.kdnuggets.com/2018/02/google-colab-free-gpu-tutorial-tensorflow-keras-pytorch.html
[2] https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb#scrollTo=K-NVg7RjyeTk
---
<a href="https://colab.research.google.com/github/Ayanlola2002/Image-Classifier/blob/master/finalpytorchnew.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install Pillow==4.1.1
#http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision
import torch
!wget -O cat_to_name.json "https://raw.githubusercontent.com/GabrielePicco/deep-learning-flower-identifier/master/cat_to_name.json"
!wget "https://s3.amazonaws.com/content.udacity-data.com/courses/nd188/flower_data.zip"
!unzip flower_data.zip
!pip install --no-cache-dir -I pillow
# Imports here
%matplotlib inline
import time
import os
import json
import copy
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from PIL import Image
from collections import OrderedDict
import torch
from torch import nn, optim
from torch.optim import lr_scheduler
from torch.autograd import Variable
from torchvision import datasets, models, transforms
from google.colab import files
data_dir = './flower_data'
train_dir = os.path.join(data_dir, 'train')
valid_dir = os.path.join(data_dir, 'valid')
dirs = {'train': train_dir,
        'valid': valid_dir}
size = 224
data_transforms = data_transforms = {
'train': transforms.Compose([
transforms.RandomRotation(45),
transforms.RandomResizedCrop(size),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
]),
'valid': transforms.Compose([
transforms.Resize(size + 32),
transforms.CenterCrop(size),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
]),
}
image_datasets = {x: datasets.ImageFolder(dirs[x], transform=data_transforms[x]) for x in ['train', 'valid']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=32, shuffle=True) for x in ['train', 'valid']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'valid']}
class_names = image_datasets['train'].classes
with open('cat_to_name.json', 'r') as f:
    cat_to_name = json.load(f)
model = models.vgg19(pretrained=True)
# freeze all pretrained model parameters
for param in model.parameters():
    param.requires_grad_(False)
print(model)
classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(25088, 4096)),
    ('relu', nn.ReLU()),
    ('fc2', nn.Linear(4096, 102)),
    ('output', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier
def train_model(model, criteria, optimizer, scheduler, num_epochs=25, device='cuda'):
    model.to(device)
    since = time.time()
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0
    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)
        # Each epoch has a training and validation phase
        for phase in ['train', 'valid']:
            if phase == 'train':
                scheduler.step()
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode
            running_loss = 0.0
            running_corrects = 0
            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)
                # zero the parameter gradients
                optimizer.zero_grad()
                # forward
                # track history only if in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criteria(outputs, labels)
                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()
                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]
            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
            # deep copy the model
            if phase == 'valid' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
        print()
    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))
    # load best model weights
    model.load_state_dict(best_model_wts)
    return model
# Criteria NLLLoss which is recommended with Softmax final layer
criteria = nn.NLLLoss()
# Observe that all parameters are being optimized
optimizer = optim.Adam(model.classifier.parameters(), lr=0.0001)
# Decay LR by a factor of 0.1 every 4 epochs
sched = lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.1)
# Number of epochs
eps=5
device = "cuda" if torch.cuda.is_available() else "cpu"
model_ft = train_model(model, criteria, optimizer, sched, eps, device)
#save check point
model_file_name = 'classifier1.pth'
model.class_to_idx = image_datasets['train'].class_to_idx
model.cpu()
torch.save({'arch': 'vgg19',
            'state_dict': model.state_dict(),
            'class_to_idx': model.class_to_idx},
           model_file_name)
from google.colab import drive
drive.mount('/content/gdrive')
model_save_name = 'classifier1.pt'
path = F"/content/gdrive/My Drive/{model_save_name}"
torch.save(model.state_dict(), path)
```
**Chapter 3 – Classification**
_This notebook contains all the sample code and solutions to the exercises in chapter 3._
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/03_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
# Setup
First, let's import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated, so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "classification"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
    path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format=fig_extension, dpi=resolution)
```
# MNIST
**Warning:** since Scikit-Learn 0.24, `fetch_openml()` returns a Pandas `DataFrame` by default. To avoid this and keep the same code as in the book, we use `as_frame=False`.
```
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
mnist.keys()
X, y = mnist["data"], mnist["target"]
X.shape
y.shape
28 * 28
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
some_digit = X[0]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap=mpl.cm.binary)
plt.axis("off")
save_fig("some_digit_plot")
plt.show()
y[0]
y = y.astype(np.uint8)
def plot_digit(data):
    image = data.reshape(28, 28)
    plt.imshow(image, cmap=mpl.cm.binary, interpolation="nearest")
    plt.axis("off")

# EXTRA
def plot_digits(instances, images_per_row=10, **options):
    size = 28
    images_per_row = min(len(instances), images_per_row)
    images = [instance.reshape(size, size) for instance in instances]
    n_rows = (len(instances) - 1) // images_per_row + 1
    row_images = []
    n_empty = n_rows * images_per_row - len(instances)
    images.append(np.zeros((size, size * n_empty)))
    for row in range(n_rows):
        rimages = images[row * images_per_row : (row + 1) * images_per_row]
        row_images.append(np.concatenate(rimages, axis=1))
    image = np.concatenate(row_images, axis=0)
    plt.imshow(image, cmap=mpl.cm.binary, **options)
    plt.axis("off")
plt.figure(figsize=(9,9))
example_images = X[:100]
plot_digits(example_images, images_per_row=10)
save_fig("more_digits_plot")
plt.show()
y[0]
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
```
# Binary classifier
```
y_train_5 = (y_train == 5)
y_test_5 = (y_test == 5)
```
**Note**: some hyperparameters will have a different default value in future versions of Scikit-Learn, such as `max_iter` and `tol`. To be future-proof, we explicitly set these hyperparameters to their future default values. For simplicity, this is not shown in the book.
```
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(max_iter=1000, tol=1e-3, random_state=42)
sgd_clf.fit(X_train, y_train_5)
sgd_clf.predict([some_digit])
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
for train_index, test_index in skfolds.split(X_train, y_train_5):
    clone_clf = clone(sgd_clf)
    X_train_folds = X_train[train_index]
    y_train_folds = y_train_5[train_index]
    X_test_fold = X_train[test_index]
    y_test_fold = y_train_5[test_index]
    clone_clf.fit(X_train_folds, y_train_folds)
    y_pred = clone_clf.predict(X_test_fold)
    n_correct = sum(y_pred == y_test_fold)
    print(n_correct / len(y_pred))
```
**Note**: `shuffle=True` was omitted by mistake in previous releases of the book.
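As a quick sanity check on toy data (not MNIST), stratified splitting preserves the positive-class ratio in every fold, which matters for an imbalanced target like "5 vs. not-5":

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Toy imbalanced labels: 10% positives, mirroring "5 vs. not-5"
y = np.array([1] * 10 + [0] * 90)
X = np.zeros((100, 1))  # features are irrelevant to the split itself

skfolds = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for _, test_index in skfolds.split(X, y):
    print(y[test_index].mean())  # each fold keeps the 10% positive ratio
```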
```
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
    def fit(self, X, y=None):
        pass
    def predict(self, X):
        return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring="accuracy")
```
**Warning**: this output (and many others in this notebook and other notebooks) may differ slightly from those in the book. Don't worry, that's okay! There are several reasons for this:
* first, Scikit-Learn and other libraries evolve, and algorithms get tweaked a bit, which may change the exact result you get. If you use the latest Scikit-Learn version (and in general, you really should), you probably won't be using the exact same version I used when I wrote the book or this notebook, hence the difference. I try to keep this notebook reasonably up to date, but I can't change the numbers on the pages in your copy of the book.
* second, many training algorithms are stochastic, meaning they rely on randomness. In principle, it's possible to get consistent outputs from a random number generator by setting the seed from which it generates the pseudo-random numbers (which is why you will see `random_state=42` or `np.random.seed(42)` pretty often). However, sometimes this does not suffice due to the other factors listed here.
* third, if the training algorithm runs across multiple threads (as do some algorithms implemented in C) or across multiple processes (e.g., when using the `n_jobs` argument), then the precise order in which operations will run is not always guaranteed, and thus the exact result may vary slightly.
* lastly, other things may prevent perfect reproducibility, such as Python dicts and sets whose order is not guaranteed to be stable across sessions, or the order of files in a directory which is also not guaranteed.
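The second point above can be illustrated in isolation: resetting the seed makes NumPy's pseudo-random draws repeat exactly, while leaving it unset gives different draws on each run.

```python
import numpy as np

np.random.seed(42)
a = np.random.rand(3)   # first run

np.random.seed(42)      # reset to the same seed...
b = np.random.rand(3)   # ...and we get identical draws

print(np.array_equal(a, b))  # True
```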
```
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
y_train_perfect_predictions = y_train_5 # pretend we reached perfection
confusion_matrix(y_train_5, y_train_perfect_predictions)
from sklearn.metrics import precision_score, recall_score
precision_score(y_train_5, y_train_pred)
cm = confusion_matrix(y_train_5, y_train_pred)
cm[1, 1] / (cm[0, 1] + cm[1, 1])
recall_score(y_train_5, y_train_pred)
cm[1, 1] / (cm[1, 0] + cm[1, 1])
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
cm[1, 1] / (cm[1, 1] + (cm[1, 0] + cm[0, 1]) / 2)
y_scores = sgd_clf.decision_function([some_digit])
y_scores
threshold = 0
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
threshold = 8000
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3,
method="decision_function")
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
    plt.plot(thresholds, precisions[:-1], "b--", label="Precision", linewidth=2)
    plt.plot(thresholds, recalls[:-1], "g-", label="Recall", linewidth=2)
    plt.legend(loc="center right", fontsize=16)  # Not shown in the book
    plt.xlabel("Threshold", fontsize=16)         # Not shown
    plt.grid(True)                               # Not shown
    plt.axis([-50000, 50000, 0, 1])              # Not shown
recall_90_precision = recalls[np.argmax(precisions >= 0.90)]
threshold_90_precision = thresholds[np.argmax(precisions >= 0.90)]
plt.figure(figsize=(8, 4)) # Not shown
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.plot([threshold_90_precision, threshold_90_precision], [0., 0.9], "r:") # Not shown
plt.plot([-50000, threshold_90_precision], [0.9, 0.9], "r:") # Not shown
plt.plot([-50000, threshold_90_precision], [recall_90_precision, recall_90_precision], "r:")# Not shown
plt.plot([threshold_90_precision], [0.9], "ro") # Not shown
plt.plot([threshold_90_precision], [recall_90_precision], "ro") # Not shown
save_fig("precision_recall_vs_threshold_plot") # Not shown
plt.show()
(y_train_pred == (y_scores > 0)).all()
def plot_precision_vs_recall(precisions, recalls):
    plt.plot(recalls, precisions, "b-", linewidth=2)
    plt.xlabel("Recall", fontsize=16)
    plt.ylabel("Precision", fontsize=16)
    plt.axis([0, 1, 0, 1])
    plt.grid(True)
plt.figure(figsize=(8, 6))
plot_precision_vs_recall(precisions, recalls)
plt.plot([recall_90_precision, recall_90_precision], [0., 0.9], "r:")
plt.plot([0.0, recall_90_precision], [0.9, 0.9], "r:")
plt.plot([recall_90_precision], [0.9], "ro")
save_fig("precision_vs_recall_plot")
plt.show()
threshold_90_precision = thresholds[np.argmax(precisions >= 0.90)]
threshold_90_precision
y_train_pred_90 = (y_scores >= threshold_90_precision)
precision_score(y_train_5, y_train_pred_90)
recall_score(y_train_5, y_train_pred_90)
```
# ROC curves
```
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
def plot_roc_curve(fpr, tpr, label=None):
    plt.plot(fpr, tpr, linewidth=2, label=label)
    plt.plot([0, 1], [0, 1], 'k--')  # dashed diagonal
    plt.axis([0, 1, 0, 1])                                     # Not shown in the book
    plt.xlabel('False Positive Rate (Fall-Out)', fontsize=16)  # Not shown
    plt.ylabel('True Positive Rate (Recall)', fontsize=16)     # Not shown
    plt.grid(True)                                             # Not shown
plt.figure(figsize=(8, 6)) # Not shown
plot_roc_curve(fpr, tpr)
fpr_90 = fpr[np.argmax(tpr >= recall_90_precision)] # Not shown
plt.plot([fpr_90, fpr_90], [0., recall_90_precision], "r:") # Not shown
plt.plot([0.0, fpr_90], [recall_90_precision, recall_90_precision], "r:") # Not shown
plt.plot([fpr_90], [recall_90_precision], "ro") # Not shown
save_fig("roc_curve_plot") # Not shown
plt.show()
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
```
**Note**: we set `n_estimators=100` to be future-proof since this will be the default value in Scikit-Learn 0.22.
```
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,
method="predict_proba")
y_scores_forest = y_probas_forest[:, 1] # score = proba of positive class
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5, y_scores_forest)
recall_for_forest = tpr_forest[np.argmax(fpr_forest >= fpr_90)]
plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, "b:", linewidth=2, label="SGD")
plot_roc_curve(fpr_forest, tpr_forest, "Random Forest")
plt.plot([fpr_90, fpr_90], [0., recall_90_precision], "r:")
plt.plot([0.0, fpr_90], [recall_90_precision, recall_90_precision], "r:")
plt.plot([fpr_90], [recall_90_precision], "ro")
plt.plot([fpr_90, fpr_90], [0., recall_for_forest], "r:")
plt.plot([fpr_90], [recall_for_forest], "ro")
plt.grid(True)
plt.legend(loc="lower right", fontsize=16)
save_fig("roc_curve_comparison_plot")
plt.show()
roc_auc_score(y_train_5, y_scores_forest)
y_train_pred_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3)
precision_score(y_train_5, y_train_pred_forest)
recall_score(y_train_5, y_train_pred_forest)
```
# Multiclass classification
```
from sklearn.svm import SVC
svm_clf = SVC(gamma="auto", random_state=42)
svm_clf.fit(X_train[:1000], y_train[:1000]) # y_train, not y_train_5
svm_clf.predict([some_digit])
some_digit_scores = svm_clf.decision_function([some_digit])
some_digit_scores
np.argmax(some_digit_scores)
svm_clf.classes_
svm_clf.classes_[5]
from sklearn.multiclass import OneVsRestClassifier
ovr_clf = OneVsRestClassifier(SVC(gamma="auto", random_state=42))
ovr_clf.fit(X_train[:1000], y_train[:1000])
ovr_clf.predict([some_digit])
len(ovr_clf.estimators_)
sgd_clf.fit(X_train, y_train)
sgd_clf.predict([some_digit])
sgd_clf.decision_function([some_digit])
```
**Warning**: the following two cells may take close to 30 minutes to run, or more depending on your hardware.
```
cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring="accuracy")
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring="accuracy")
```
# Error analysis
```
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
# since sklearn 0.22, you can use sklearn.metrics.plot_confusion_matrix()
def plot_confusion_matrix(matrix):
    """If you prefer color and a colorbar"""
    fig = plt.figure(figsize=(8, 8))
    ax = fig.add_subplot(111)
    cax = ax.matshow(matrix)
    fig.colorbar(cax)
plt.matshow(conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_plot", tight_layout=False)
plt.show()
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_errors_plot", tight_layout=False)
plt.show()
cl_a, cl_b = 3, 5
X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize=(8,8))
plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)
plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)
plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)
plt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)
save_fig("error_analysis_digits_plot")
plt.show()
```
# Multilabel classification
```
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
knn_clf.predict([some_digit])
```
**Warning**: the following cell may take a very long time (possibly hours depending on your hardware).
```
y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3)
f1_score(y_multilabel, y_train_knn_pred, average="macro")
```
# Multioutput classification
```
noise = np.random.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
noise = np.random.randint(0, 100, (len(X_test), 784))
X_test_mod = X_test + noise
y_train_mod = X_train
y_test_mod = X_test
some_index = 0
plt.subplot(121); plot_digit(X_test_mod[some_index])
plt.subplot(122); plot_digit(y_test_mod[some_index])
save_fig("noisy_digit_example_plot")
plt.show()
knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plot_digit(clean_digit)
save_fig("cleaned_digit_example_plot")
```
# Extra material
## Dummy (i.e. random) classifier
```
from sklearn.dummy import DummyClassifier
dmy_clf = DummyClassifier(strategy="prior")
y_probas_dmy = cross_val_predict(dmy_clf, X_train, y_train_5, cv=3, method="predict_proba")
y_scores_dmy = y_probas_dmy[:, 1]
fprr, tprr, thresholdsr = roc_curve(y_train_5, y_scores_dmy)
plot_roc_curve(fprr, tprr)
```
## KNN classifier
```
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(weights='distance', n_neighbors=4)
knn_clf.fit(X_train, y_train)
y_knn_pred = knn_clf.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_knn_pred)
from scipy.ndimage.interpolation import shift
def shift_digit(digit_array, dx, dy, new=0):
    return shift(digit_array.reshape(28, 28), [dy, dx], cval=new).reshape(784)
plot_digit(shift_digit(some_digit, 5, 1, new=100))
X_train_expanded = [X_train]
y_train_expanded = [y_train]
for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
    shifted_images = np.apply_along_axis(shift_digit, axis=1, arr=X_train, dx=dx, dy=dy)
    X_train_expanded.append(shifted_images)
    y_train_expanded.append(y_train)
X_train_expanded = np.concatenate(X_train_expanded)
y_train_expanded = np.concatenate(y_train_expanded)
X_train_expanded.shape, y_train_expanded.shape
knn_clf.fit(X_train_expanded, y_train_expanded)
y_knn_expanded_pred = knn_clf.predict(X_test)
accuracy_score(y_test, y_knn_expanded_pred)
ambiguous_digit = X_test[2589]
knn_clf.predict_proba([ambiguous_digit])
plot_digit(ambiguous_digit)
```
# Exercise solutions
## 1. An MNIST Classifier With Over 97% Accuracy
**Warning**: the next cell may take close to 16 hours to run, or more depending on your hardware.
```
from sklearn.model_selection import GridSearchCV
param_grid = [{'weights': ["uniform", "distance"], 'n_neighbors': [3, 4, 5]}]
knn_clf = KNeighborsClassifier()
grid_search = GridSearchCV(knn_clf, param_grid, cv=5, verbose=3)
grid_search.fit(X_train, y_train)
grid_search.best_params_
grid_search.best_score_
from sklearn.metrics import accuracy_score
y_pred = grid_search.predict(X_test)
accuracy_score(y_test, y_pred)
```
## 2. Data Augmentation
```
from scipy.ndimage.interpolation import shift
def shift_image(image, dx, dy):
    image = image.reshape((28, 28))
    shifted_image = shift(image, [dy, dx], cval=0, mode="constant")
    return shifted_image.reshape([-1])
image = X_train[1000]
shifted_image_down = shift_image(image, 0, 5)
shifted_image_left = shift_image(image, -5, 0)
plt.figure(figsize=(12,3))
plt.subplot(131)
plt.title("Original", fontsize=14)
plt.imshow(image.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.subplot(132)
plt.title("Shifted down", fontsize=14)
plt.imshow(shifted_image_down.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.subplot(133)
plt.title("Shifted left", fontsize=14)
plt.imshow(shifted_image_left.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.show()
X_train_augmented = [image for image in X_train]
y_train_augmented = [label for label in y_train]
for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
    for image, label in zip(X_train, y_train):
        X_train_augmented.append(shift_image(image, dx, dy))
        y_train_augmented.append(label)
X_train_augmented = np.array(X_train_augmented)
y_train_augmented = np.array(y_train_augmented)
shuffle_idx = np.random.permutation(len(X_train_augmented))
X_train_augmented = X_train_augmented[shuffle_idx]
y_train_augmented = y_train_augmented[shuffle_idx]
knn_clf = KNeighborsClassifier(**grid_search.best_params_)
knn_clf.fit(X_train_augmented, y_train_augmented)
```
**Warning**: the following cell may take close to an hour to run, depending on your hardware.
```
y_pred = knn_clf.predict(X_test)
accuracy_score(y_test, y_pred)
```
By simply augmenting the data, we got a 0.5% accuracy boost. :)
## 3. Tackle the Titanic dataset
The goal is to predict whether or not a passenger survived based on attributes such as their age, sex, passenger class, where they embarked and so on.
First, login to [Kaggle](https://www.kaggle.com/) and go to the [Titanic challenge](https://www.kaggle.com/c/titanic) to download `train.csv` and `test.csv`. Save them to the `datasets/titanic` directory.
Next, let's load the data:
```
import os
TITANIC_PATH = os.path.join("datasets", "titanic")
import pandas as pd
def load_titanic_data(filename, titanic_path=TITANIC_PATH):
    csv_path = os.path.join(titanic_path, filename)
    return pd.read_csv(csv_path)
train_data = load_titanic_data("train.csv")
test_data = load_titanic_data("test.csv")
```
The data is already split into a training set and a test set. However, the test data does *not* contain the labels: your goal is to train the best model you can using the training data, then make your predictions on the test data and upload them to Kaggle to see your final score.
Let's take a peek at the top few rows of the training set:
```
train_data.head()
```
The attributes have the following meaning:
* **Survived**: that's the target; 0 means the passenger did not survive, while 1 means he/she survived.
* **Pclass**: passenger class.
* **Name**, **Sex**, **Age**: self-explanatory.
* **SibSp**: number of siblings & spouses of the passenger aboard the Titanic.
* **Parch**: number of children & parents of the passenger aboard the Titanic.
* **Ticket**: ticket id.
* **Fare**: price paid (in pounds).
* **Cabin**: the passenger's cabin number.
* **Embarked**: where the passenger embarked the Titanic.
Let's get more info to see how much data is missing:
```
train_data.info()
```
Okay, the **Age**, **Cabin** and **Embarked** attributes are sometimes null (less than 891 non-null), especially the **Cabin** (77% are null). We will ignore the **Cabin** for now and focus on the rest. The **Age** attribute has about 19% null values, so we will need to decide what to do with them. Replacing null values with the median age seems reasonable.
The **Name** and **Ticket** attributes may have some value, but they will be a bit tricky to convert into useful numbers that a model can consume. So for now, we will ignore them.
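As a quick illustration of the median strategy described above (using a toy stand-in for the **Age** column, not the actual data):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the "Age" column (not the actual Titanic data)
ages = pd.Series([22.0, 38.0, np.nan, 35.0, np.nan, 54.0], name="Age")
median_age = ages.median()        # NaNs are ignored: median of [22, 35, 38, 54]
filled = ages.fillna(median_age)  # replace both missing values
print(median_age, filled.isna().sum())  # 36.5 0
```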
Let's take a look at the numerical attributes:
```
train_data.describe()
```
* Yikes, only 38% **Survived**. :( That's close enough to 40%, so accuracy will be a reasonable metric to evaluate our model.
* The mean **Fare** was £32.20, which does not seem so expensive (but it was probably a lot of money back then).
* The mean **Age** was less than 30 years old.
Let's check that the target is indeed 0 or 1:
```
train_data["Survived"].value_counts()
```
Now let's take a quick look at all the categorical attributes:
```
train_data["Pclass"].value_counts()
train_data["Sex"].value_counts()
train_data["Embarked"].value_counts()
```
The Embarked attribute tells us where the passenger embarked: C=Cherbourg, Q=Queenstown, S=Southampton.
**Note**: the code below uses a mix of `Pipeline`, `FeatureUnion` and a custom `DataFrameSelector` to preprocess some columns differently. Since Scikit-Learn 0.20, it is preferable to use a `ColumnTransformer`, like in the previous chapter.
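For reference, here is a sketch of the `ColumnTransformer` alternative mentioned in the note, on toy rows in the Titanic schema (not the real dataset). In recent Scikit-Learn versions, `SimpleImputer` with `strategy="most_frequent"` also accepts string columns, so no custom imputer is needed in this variant:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

num_attribs = ["Age", "SibSp", "Parch", "Fare"]
cat_attribs = ["Pclass", "Sex", "Embarked"]

preprocess = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), num_attribs),
    ("cat", Pipeline([
        ("imputer", SimpleImputer(strategy="most_frequent")),
        ("encode", OneHotEncoder(handle_unknown="ignore")),
    ]), cat_attribs),
])

# Two toy rows in the Titanic schema (not the real dataset)
toy = pd.DataFrame({"Age": [22.0, np.nan], "SibSp": [1, 0], "Parch": [0, 0],
                    "Fare": [7.25, 71.28], "Pclass": [3, 1],
                    "Sex": ["male", "female"], "Embarked": ["S", np.nan]})
X_toy = preprocess.fit_transform(toy)
print(X_toy.shape)  # (2, 9): 4 numeric + 2+2+1 one-hot columns
```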
Now let's build our preprocessing pipelines. We will reuse the `DataFrameSelector` we built in the previous chapter to select specific attributes from the `DataFrame`:
```
from sklearn.base import BaseEstimator, TransformerMixin
class DataFrameSelector(BaseEstimator, TransformerMixin):
    def __init__(self, attribute_names):
        self.attribute_names = attribute_names
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return X[self.attribute_names]
```
Let's build the pipeline for the numerical attributes:
```
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
num_pipeline = Pipeline([
    ("select_numeric", DataFrameSelector(["Age", "SibSp", "Parch", "Fare"])),
    ("imputer", SimpleImputer(strategy="median")),
])
num_pipeline.fit_transform(train_data)
```
We will also need an imputer for the string categorical columns (the regular `SimpleImputer` does not work on those):
```
# Inspired from stackoverflow.com/questions/25239958
class MostFrequentImputer(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        self.most_frequent_ = pd.Series([X[c].value_counts().index[0] for c in X],
                                        index=X.columns)
        return self
    def transform(self, X, y=None):
        return X.fillna(self.most_frequent_)
from sklearn.preprocessing import OneHotEncoder
```
Now we can build the pipeline for the categorical attributes:
```
cat_pipeline = Pipeline([
    ("select_cat", DataFrameSelector(["Pclass", "Sex", "Embarked"])),
    ("imputer", MostFrequentImputer()),
    ("cat_encoder", OneHotEncoder(sparse=False)),
])
cat_pipeline.fit_transform(train_data)
```
Finally, let's join the numerical and categorical pipelines:
```
from sklearn.pipeline import FeatureUnion
preprocess_pipeline = FeatureUnion(transformer_list=[
    ("num_pipeline", num_pipeline),
    ("cat_pipeline", cat_pipeline),
])
```
Cool! Now we have a nice preprocessing pipeline that takes the raw data and outputs numerical input features that we can feed to any Machine Learning model we want.
```
X_train = preprocess_pipeline.fit_transform(train_data)
X_train
```
Let's not forget to get the labels:
```
y_train = train_data["Survived"]
```
We are now ready to train a classifier. Let's start with an `SVC`:
```
from sklearn.svm import SVC
svm_clf = SVC(gamma="auto")
svm_clf.fit(X_train, y_train)
```
Great, our model is trained, let's use it to make predictions on the test set:
```
X_test = preprocess_pipeline.transform(test_data)
y_pred = svm_clf.predict(X_test)
```
And now we could just build a CSV file with these predictions (respecting the format expected by Kaggle), then upload it and hope for the best. But wait! We can do better than hope. Why don't we use cross-validation to get an idea of how good our model is?
```
from sklearn.model_selection import cross_val_score
svm_scores = cross_val_score(svm_clf, X_train, y_train, cv=10)
svm_scores.mean()
```
Okay, over 73% accuracy, clearly better than random chance, but it's not a great score. Looking at the [leaderboard](https://www.kaggle.com/c/titanic/leaderboard) for the Titanic competition on Kaggle, you can see that you need to reach above 80% accuracy to be within the top 10% Kagglers. Some reached 100%, but since you can easily find the [list of victims](https://www.encyclopedia-titanica.org/titanic-victims/) of the Titanic, it seems likely that there was little Machine Learning involved in their performance! ;-) So let's try to build a model that reaches 80% accuracy.
Let's try a `RandomForestClassifier`:
```
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
forest_scores = cross_val_score(forest_clf, X_train, y_train, cv=10)
forest_scores.mean()
```
That's much better!
Instead of just looking at the mean accuracy across the 10 cross-validation folds, let's plot all 10 scores for each model, along with a box plot highlighting the lower and upper quartiles, and "whiskers" showing the extent of the scores (thanks to Nevin Yilmaz for suggesting this visualization). Note that the `boxplot()` function detects outliers (called "fliers") and does not include them within the whiskers. Specifically, if the lower quartile is $Q_1$ and the upper quartile is $Q_3$, then the interquartile range $IQR = Q_3 - Q_1$ (this is the box's height), and any score lower than $Q_1 - 1.5 \times IQR$ is a flier, and so is any score greater than $Q3 + 1.5 \times IQR$.
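The flier rule described above can be checked directly with NumPy (illustrative scores, not the actual cross-validation results):

```python
import numpy as np

# Ten illustrative accuracy scores, with one clear outlier at the top
scores = np.array([0.70, 0.72, 0.73, 0.74, 0.75,
                   0.76, 0.77, 0.78, 0.79, 0.95])
q1, q3 = np.percentile(scores, [25, 75])
iqr = q3 - q1                # height of the box
lower = q1 - 1.5 * iqr       # lower whisker limit
upper = q3 + 1.5 * iqr       # upper whisker limit
fliers = scores[(scores < lower) | (scores > upper)]
print(fliers)                # only 0.95 lies beyond the whiskers
```

(Matplotlib's `boxplot()` applies this same `whis=1.5` rule by default.)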
```
plt.figure(figsize=(8, 4))
plt.plot([1]*10, svm_scores, ".")
plt.plot([2]*10, forest_scores, ".")
plt.boxplot([svm_scores, forest_scores], labels=("SVM","Random Forest"))
plt.ylabel("Accuracy", fontsize=14)
plt.show()
```
To improve this result further, you could:
* Compare many more models and tune hyperparameters using cross-validation and grid search,
* Do more feature engineering, for example:
  * replace **SibSp** and **Parch** with their sum,
  * try to identify parts of names that correlate well with the **Survived** attribute (e.g. if the name contains "Countess", then survival seems more likely),
  * try to convert numerical attributes to categorical attributes: for example, different age groups had very different survival rates (see below), so it may help to create an age bucket category and use it instead of the age. Similarly, it may be useful to have a special category for people traveling alone, since only 30% of them survived (see below).
```
train_data["AgeBucket"] = train_data["Age"] // 15 * 15
train_data[["AgeBucket", "Survived"]].groupby(['AgeBucket']).mean()
train_data["RelativesOnboard"] = train_data["SibSp"] + train_data["Parch"]
train_data[["RelativesOnboard", "Survived"]].groupby(['RelativesOnboard']).mean()
```
## 4. Spam classifier
First, let's fetch the data:
```
import os
import tarfile
import urllib.request
DOWNLOAD_ROOT = "http://spamassassin.apache.org/old/publiccorpus/"
HAM_URL = DOWNLOAD_ROOT + "20030228_easy_ham.tar.bz2"
SPAM_URL = DOWNLOAD_ROOT + "20030228_spam.tar.bz2"
SPAM_PATH = os.path.join("datasets", "spam")
def fetch_spam_data(ham_url=HAM_URL, spam_url=SPAM_URL, spam_path=SPAM_PATH):
    if not os.path.isdir(spam_path):
        os.makedirs(spam_path)
    for filename, url in (("ham.tar.bz2", ham_url), ("spam.tar.bz2", spam_url)):
        path = os.path.join(spam_path, filename)
        if not os.path.isfile(path):
            urllib.request.urlretrieve(url, path)
        tar_bz2_file = tarfile.open(path)
        tar_bz2_file.extractall(path=spam_path)
        tar_bz2_file.close()
fetch_spam_data()
```
Next, let's load all the emails:
```
HAM_DIR = os.path.join(SPAM_PATH, "easy_ham")
SPAM_DIR = os.path.join(SPAM_PATH, "spam")
ham_filenames = [name for name in sorted(os.listdir(HAM_DIR)) if len(name) > 20]
spam_filenames = [name for name in sorted(os.listdir(SPAM_DIR)) if len(name) > 20]
len(ham_filenames)
len(spam_filenames)
```
We can use Python's `email` module to parse these emails (this handles headers, encoding, and so on):
```
import email
import email.policy
def load_email(is_spam, filename, spam_path=SPAM_PATH):
    directory = "spam" if is_spam else "easy_ham"
    with open(os.path.join(spam_path, directory, filename), "rb") as f:
        return email.parser.BytesParser(policy=email.policy.default).parse(f)
ham_emails = [load_email(is_spam=False, filename=name) for name in ham_filenames]
spam_emails = [load_email(is_spam=True, filename=name) for name in spam_filenames]
```
Let's look at one example of ham and one example of spam, to get a feel of what the data looks like:
```
print(ham_emails[1].get_content().strip())
print(spam_emails[6].get_content().strip())
```
Some emails are actually multipart, with images and attachments (which can have their own attachments). Let's look at the various types of structures we have:
```
def get_email_structure(email):
    if isinstance(email, str):
        return email
    payload = email.get_payload()
    if isinstance(payload, list):
        return "multipart({})".format(", ".join([
            get_email_structure(sub_email)
            for sub_email in payload
        ]))
    else:
        return email.get_content_type()

from collections import Counter

def structures_counter(emails):
    structures = Counter()
    for email in emails:
        structure = get_email_structure(email)
        structures[structure] += 1
    return structures
structures_counter(ham_emails).most_common()
structures_counter(spam_emails).most_common()
```
It seems that the ham emails are more often plain text, while spam has quite a lot of HTML. Moreover, quite a few ham emails are signed using PGP, while no spam is. In short, it seems that the email structure is useful information to have.
Now let's take a look at the email headers:
```
for header, value in spam_emails[0].items():
    print(header, ":", value)
```
There's probably a lot of useful information in there, such as the sender's email address (12a1mailbot1@web.de looks fishy), but we will just focus on the `Subject` header:
```
spam_emails[0]["Subject"]
```
Okay, before we learn too much about the data, let's not forget to split it into a training set and a test set:
```
import numpy as np
from sklearn.model_selection import train_test_split
X = np.array(ham_emails + spam_emails, dtype=object)
y = np.array([0] * len(ham_emails) + [1] * len(spam_emails))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```
Okay, let's start writing the preprocessing functions. First, we will need a function to convert HTML to plain text. Arguably the best way to do this would be to use the great [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) library, but I would like to avoid adding another dependency to this project, so let's hack a quick & dirty solution using regular expressions (at the risk of [un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment](https://stackoverflow.com/a/1732454/38626)). The following function first drops the `<head>` section, then converts all `<a>` tags to the word HYPERLINK, then it gets rid of all HTML tags, leaving only the plain text. For readability, it also replaces multiple newlines with single newlines, and finally it unescapes HTML entities (such as `&gt;` or `&nbsp;`):
```
import re
from html import unescape
def html_to_plain_text(html):
    text = re.sub(r'<head.*?>.*?</head>', '', html, flags=re.M | re.S | re.I)
    text = re.sub(r'<a\s.*?>', ' HYPERLINK ', text, flags=re.M | re.S | re.I)
    text = re.sub(r'<.*?>', '', text, flags=re.M | re.S)
    text = re.sub(r'(\s*\n)+', '\n', text, flags=re.M | re.S)
    return unescape(text)
```
Let's see if it works. This is HTML spam:
```
html_spam_emails = [email for email in X_train[y_train == 1]
                    if get_email_structure(email) == "text/html"]
sample_html_spam = html_spam_emails[7]
print(sample_html_spam.get_content().strip()[:1000], "...")
```
And this is the resulting plain text:
```
print(html_to_plain_text(sample_html_spam.get_content())[:1000], "...")
```
Great! Now let's write a function that takes an email as input and returns its content as plain text, whatever its format is:
```
def email_to_text(email):
    html = None
    for part in email.walk():
        ctype = part.get_content_type()
        if not ctype in ("text/plain", "text/html"):
            continue
        try:
            content = part.get_content()
        except Exception:  # in case of encoding issues
            content = str(part.get_payload())
        if ctype == "text/plain":
            return content
        else:
            html = content
    if html:
        return html_to_plain_text(html)

print(email_to_text(sample_html_spam)[:100], "...")
```
Let's throw in some stemming! For this to work, you need to install the Natural Language Toolkit ([NLTK](http://www.nltk.org/)). It's as simple as running the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the `--user` option):
`$ pip3 install nltk`
```
try:
    import nltk
    stemmer = nltk.PorterStemmer()
    for word in ("Computations", "Computation", "Computing", "Computed", "Compute", "Compulsive"):
        print(word, "=>", stemmer.stem(word))
except ImportError:
    print("Error: stemming requires the NLTK module.")
    stemmer = None
```
We will also need a way to replace URLs with the word "URL". For this, we could use hard core [regular expressions](https://mathiasbynens.be/demo/url-regex) but we will just use the [urlextract](https://github.com/lipoja/URLExtract) library. You can install it with the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the `--user` option):
`$ pip3 install urlextract`
```
# if running this notebook on Colab, we just pip install urlextract
try:
import google.colab
!pip install -q -U urlextract
except ImportError:
pass # not running on Colab
try:
import urlextract # may require an Internet connection to download root domain names
url_extractor = urlextract.URLExtract()
print(url_extractor.find_urls("Will it detect github.com and https://youtu.be/7Pq-S557XQU?t=3m32s"))
except ImportError:
print("Error: replacing URLs requires the urlextract module.")
url_extractor = None
```
We are ready to put all this together into a transformer that we will use to convert emails to word counters. Note that we split sentences into words using Python's `split()` method, which uses whitespace as the word boundary. This works for many written languages, but not all. For example, Chinese and Japanese scripts generally don't use spaces between words, and Vietnamese often uses spaces even between syllables. It's okay in this exercise, because the dataset is (mostly) in English.
```
from sklearn.base import BaseEstimator, TransformerMixin

class EmailToWordCounterTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, strip_headers=True, lower_case=True, remove_punctuation=True,
                 replace_urls=True, replace_numbers=True, stemming=True):
        self.strip_headers = strip_headers
        self.lower_case = lower_case
        self.remove_punctuation = remove_punctuation
        self.replace_urls = replace_urls
        self.replace_numbers = replace_numbers
        self.stemming = stemming

    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        X_transformed = []
        for email in X:
            text = email_to_text(email) or ""
            if self.lower_case:
                text = text.lower()
            if self.replace_urls and url_extractor is not None:
                urls = list(set(url_extractor.find_urls(text)))
                urls.sort(key=lambda url: len(url), reverse=True)
                for url in urls:
                    text = text.replace(url, " URL ")
            if self.replace_numbers:
                text = re.sub(r'\d+(?:\.\d*)?(?:[eE][+-]?\d+)?', 'NUMBER', text)
            if self.remove_punctuation:
                text = re.sub(r'\W+', ' ', text, flags=re.M)
            word_counts = Counter(text.split())
            if self.stemming and stemmer is not None:
                stemmed_word_counts = Counter()
                for word, count in word_counts.items():
                    stemmed_word = stemmer.stem(word)
                    stemmed_word_counts[stemmed_word] += count
                word_counts = stemmed_word_counts
            X_transformed.append(word_counts)
        return np.array(X_transformed)
```
Let's try this transformer on a few emails:
```
X_few = X_train[:3]
X_few_wordcounts = EmailToWordCounterTransformer().fit_transform(X_few)
X_few_wordcounts
```
This looks about right!
Now we have the word counts, and we need to convert them to vectors. For this, we will build another transformer whose `fit()` method will build the vocabulary (an ordered list of the most common words) and whose `transform()` method will use the vocabulary to convert word counts to vectors. The output is a sparse matrix.
```
from scipy.sparse import csr_matrix

class WordCounterToVectorTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, vocabulary_size=1000):
        self.vocabulary_size = vocabulary_size

    def fit(self, X, y=None):
        total_count = Counter()
        for word_count in X:
            for word, count in word_count.items():
                total_count[word] += min(count, 10)
        most_common = total_count.most_common()[:self.vocabulary_size]
        self.vocabulary_ = {word: index + 1 for index, (word, count) in enumerate(most_common)}
        return self

    def transform(self, X, y=None):
        rows = []
        cols = []
        data = []
        for row, word_count in enumerate(X):
            for word, count in word_count.items():
                rows.append(row)
                cols.append(self.vocabulary_.get(word, 0))
                data.append(count)
        return csr_matrix((data, (rows, cols)), shape=(len(X), self.vocabulary_size + 1))
vocab_transformer = WordCounterToVectorTransformer(vocabulary_size=10)
X_few_vectors = vocab_transformer.fit_transform(X_few_wordcounts)
X_few_vectors
X_few_vectors.toarray()
```
What does this matrix mean? Well, the 99 in the second row, first column, means that the second email contains 99 words that are not part of the vocabulary. The 11 next to it means that the first word in the vocabulary is present 11 times in this email. The 9 next to it means that the second word is present 9 times, and so on. You can look at the vocabulary to know which words we are talking about. The first word is "the", the second word is "of", etc.
```
vocab_transformer.vocabulary_
```
We are now ready to train our first spam classifier! Let's transform the whole dataset:
```
from sklearn.pipeline import Pipeline
preprocess_pipeline = Pipeline([
    ("email_to_wordcount", EmailToWordCounterTransformer()),
    ("wordcount_to_vector", WordCounterToVectorTransformer()),
])
X_train_transformed = preprocess_pipeline.fit_transform(X_train)
```
**Note**: to be future-proof, we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22.
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
log_clf = LogisticRegression(solver="lbfgs", max_iter=1000, random_state=42)
score = cross_val_score(log_clf, X_train_transformed, y_train, cv=3, verbose=3)
score.mean()
```
Over 98.5%, not bad for a first try! :) However, remember that we are using the "easy" dataset. You can try with the harder datasets, the results won't be so amazing. You would have to try multiple models, select the best ones and fine-tune them using cross-validation, and so on.
But you get the picture, so let's stop now, and just print out the precision/recall we get on the test set:
```
from sklearn.metrics import precision_score, recall_score
X_test_transformed = preprocess_pipeline.transform(X_test)
log_clf = LogisticRegression(solver="lbfgs", max_iter=1000, random_state=42)
log_clf.fit(X_train_transformed, y_train)
y_pred = log_clf.predict(X_test_transformed)
print("Precision: {:.2f}%".format(100 * precision_score(y_test, y_pred)))
print("Recall: {:.2f}%".format(100 * recall_score(y_test, y_pred)))
```
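Precision and recall trade off against each other, so it is often convenient to combine them into the F1 score, their harmonic mean. A self-contained sketch with toy labels standing in for `y_test` and `y_pred`:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Toy labels standing in for y_test / y_pred above.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_hat = [1, 0, 0, 1, 0, 1, 1, 0]
p = precision_score(y_true, y_hat)  # TP / (TP + FP)
r = recall_score(y_true, y_hat)     # TP / (TP + FN)
f1 = f1_score(y_true, y_hat)        # 2 * p * r / (p + r)
print("Precision: {:.2f}%  Recall: {:.2f}%  F1: {:.2f}%".format(100 * p, 100 * r, 100 * f1))
```

Because it is a harmonic mean, F1 is only high when both precision and recall are high.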
| github_jupyter |
# Simple ray tracing
```
# set up the path to the ray_tracing package
import os
import sys
# note: sys.path does not expand '~', hence expanduser
sys.path.append(os.path.expanduser('~/Documents/python/ray_tracing/'))
import ray_tracing as rt
from matplotlib import rcParams
rcParams['figure.figsize'] = [8, 4]
import matplotlib.pyplot as plt
plt.ion()
```
## Principle
The package 'ray_tracing.py' provides you with the means to quickly plot an optical system. Currently included are lenses (L), with or without an aperture, and simple apertures (A) such as irises, separated by distances (D). All ray tracing is done in the paraxial approximation, $\sin \alpha \approx \tan \alpha \approx \alpha$: the larger $\alpha$, the larger the error!
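To get a feel for how fast the paraxial approximation degrades, here is a quick check (plain NumPy, independent of the package) of the relative error of $\sin \alpha \approx \alpha$ and $\tan \alpha \approx \alpha$:

```python
import numpy as np

for deg in (1, 5, 10, 20):
    a = np.deg2rad(deg)
    sin_err = abs(np.sin(a) - a) / a * 100  # relative error of sin(a) ~ a, in %
    tan_err = abs(np.tan(a) - a) / a * 100  # relative error of tan(a) ~ a, in %
    print(f"{deg:2d} deg: sin error {sin_err:.3f}%, tan error {tan_err:.3f}%")
```

At a few degrees the error is far below a percent; beyond roughly 20° it grows to several percent.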
### Example 1: one lens
Let's look at one lens of focal length 100 mm with an object placed 150 mm in front of it, and trace two rays, __[the marginal and the chief (principal) ray](https://en.wikipedia.org/wiki/Ray_(optics))__. A ray is given as a vector $(h, \varphi)$, where $h$ is the height of its starting point and $\varphi$ is the angle measured against the optical axis, in rad.
```
osys = rt.OpticalSystem(' d150 | L100 | d500 ')
height_1 = 0.0; phi_1 = 0.005;
ray_1 = (height_1, phi_1)
height_2 = 1.0; phi_2 = -1/150;
ray_2 = (height_2, phi_2)
ax = osys.plot_statics()
osys.plot_ray(ray_1, label="marginal ray")
osys.plot_ray(ray_2, label="chief ray")
ax.legend()
```
You can see that the marginal ray (blue) crosses the optical axis again at 450 mm; this is where the image is formed. The height of the chief ray (orange) at that position is 2.0 mm. Let's check that:
```
rt.get_image_pos(object_distance=150, focal_length=100)
rt.get_image_size(object_size=1.0, object_distance=150, focal_length=100)
```
The image is formed 300 mm after the lens, i.e. at 450 mm, and it is magnified by a factor of two.
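The same numbers drop out of the ray-transfer ("ABCD") matrix formalism, which is presumably what the package implements internally (an assumption on my part): free space of length $d$ is $\begin{pmatrix}1 & d\\ 0 & 1\end{pmatrix}$ and a thin lens of focal length $f$ is $\begin{pmatrix}1 & 0\\ -1/f & 1\end{pmatrix}$:

```python
import numpy as np

def space(d):  # free-space propagation over distance d
    return np.array([[1.0, d], [0.0, 1.0]])

def lens(f):  # thin lens with focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

system = space(300) @ lens(100) @ space(150)   # d150 | L100 | d300, applied right to left
h_m, _ = system @ np.array([0.0, 0.005])       # marginal ray
h_c, _ = system @ np.array([1.0, -1.0 / 150])  # chief ray
print(h_m, h_c)
```

The marginal ray height comes out as 0 (it crosses the axis at the image plane) and the chief ray height as -2 mm; the sign depends on the convention, but the magnitude matches the plot.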
### Example 2: two lens system
```
osys = rt.OpticalSystem(' d150 | L100 | d400 | L50 | d150 ')
height_1 = 0.0; phi_1 = 0.005;
ray_1 = (height_1, phi_1)
height_2 = 1.0; phi_2 = -1/150;
ray_2 = (height_2, phi_2)
ax = osys.plot_statics()
osys.plot_ray(ray_1, label="marginal ray")
osys.plot_ray(ray_2, label="meridional ray")
ax.legend();
```
### Example 3: two lens system with apertures
Let's now consider an optical system with lenses of finite size. A lens aperture is added by appending '/' followed by its size. Lens apertures are plotted as thick black lines.
```
osys = rt.OpticalSystem(' d150 | L100/1 | d400 | L50/2 | d150 ')
height_1 = 0.0; phi_1 = 0.005;
ray_1 = (height_1, phi_1)
height_2 = 1.0; phi_2 = -1/150;
ray_2 = (height_2, phi_2)
height_3 = 0.5; phi_3 = -0.5/150;
ray_3 = (height_3, phi_3)
ax = osys.plot_statics()
osys.plot_ray(ray_1, label="marginal ray")
osys.plot_ray(ray_2, label="chief ray")
osys.plot_ray(ray_3, label="meridional ray")
ax.legend();
```
Rays that do not pass through an aperture are blocked:
```
"""
ray traycing: led | d0 | l1 | d1 | l2 | d2 | d3 | l3 | d4 | l4 | d5 | d6 | obj | d7
"""
trace = 'd15 | L15/5.5 | d10 | L40/12.5 | d40 | d80 | L80/15 | d60 | L300/16 | d300 | d3.33 | L3.33/4.4 | d3.33'
sequence = rt.trace_parser(trace)
from numpy import arange
plt.ion()
plt.close('all')
fig = plt.figure()
ax = fig.add_subplot(111)
for idx, h in enumerate(arange(-0.5, 0.6, 0.125)):
    rt.plot_ray(h, sequence, axis=ax)
fig.subplots_adjust(right=0.8)
ax.legend(loc='center right', bbox_to_anchor=(1.3, 0.5));
```
| github_jupyter |
```
#default_exp utils
```
# Utility Functions
> Utility functions to help with downstream tasks
```
#hide
from nbdev.showdoc import *
from self_supervised.byol import *
from self_supervised.simclr import *
from self_supervised.swav import *
#export
from fastai.vision.all import *
```
## Loading Weights for Downstream Tasks
```
#export
def transfer_weights(learn:Learner, weights_path:Path, device:torch.device=None):
    "Load and freeze pretrained weights inplace from `weights_path` using `device`"
    if device is None: device = learn.dls.device
    new_state_dict = torch.load(weights_path, map_location=device)
    # allow for simply exporting the raw PyTorch model
    if 'model' in new_state_dict.keys(): new_state_dict = new_state_dict['model']
    learn_state_dict = learn.model.state_dict()
    matched_layers = 0
    for name, param in learn_state_dict.items():
        name = 'encoder.'+name[2:]
        if name in new_state_dict:
            matched_layers += 1
            input_param = new_state_dict[name]
            if input_param.shape == param.shape:
                param.copy_(input_param)
            else:
                raise ValueError(f'Shape mismatch at {name}, please ensure you have the same backbone')
        else:
            pass  # these are weights that weren't in the original model, such as a new head
    if matched_layers == 0: raise Exception("No shared weight names were found between the models")
    learn.model.load_state_dict(learn_state_dict)
    learn.freeze()
    print("Weights successfully transferred!")
```
When training models with this library, the `state_dict` will change, so loading it back into `fastai` as an encoder won't be a perfect match. This helper function aims to make that simple.
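As a concrete illustration of the renaming done inside `transfer_weights` (the key names here are hypothetical examples, not taken from a real model): fastai wraps the backbone in a `Sequential`, so downstream keys start with `0.`, while this library saves the encoder's keys under an `encoder.` prefix.

```python
# Hypothetical downstream key, as produced by fastai's Sequential wrapper.
learn_key = "0.4.1.weight"
# transfer_weights drops the leading "0." and prefixes "encoder." to look
# the parameter up in the saved state_dict:
saved_key = "encoder." + learn_key[2:]
print(saved_key)  # encoder.4.1.weight
```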
Example usage:
First prepare the downstream-task dataset (`ImageWoof` is shown here):
```
def get_dls(bs:int=32):
    "Prepare `IMAGEWOOF` `DataLoaders` with `bs`"
    path = untar_data(URLs.IMAGEWOOF)
    tfms = [[PILImage.create], [parent_label, Categorize()]]
    item_tfms = [ToTensor(), Resize(224)]
    batch_tfms = [FlipItem(), RandomResizedCrop(224, min_scale=0.35),
                  IntToFloatTensor(), Normalize.from_stats(*imagenet_stats)]
    items = get_image_files(path)
    splits = GrandparentSplitter(valid_name='val')(items)
    dsets = Datasets(items, tfms, splits=splits)
    dls = dsets.dataloaders(after_item=item_tfms, after_batch=batch_tfms, bs=bs)
    return dls
dls = get_dls(bs=32)
```
For the sake of example, we will create and save a SwAV model trained for one epoch (in reality you'd want to train for many more):
```
net = create_swav_model(arch=xresnet34, pretrained=False)
learn = Learner(dls, net, SWAVLoss(), cbs=[SWAV()])
learn.save('../../../swav_test');
```
Followed by a `Learner` designed for classification with a simple custom head for our `xresnet`:
```
learn = cnn_learner(dls, xresnet34, pretrained=False)
```
Before loading in all the weights:
```
transfer_weights(learn, '../../swav_test.pth')
```
Now we can do downstream tasks with our pretrained models!
| github_jupyter |
```
import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
# https://discuss.pytorch.org/t/how-to-implement-keras-layers-core-lambda-in-pytorch/5903
class BestMeanPooling(nn.Module):
    def __init__(self, n_topN):
        super(BestMeanPooling, self).__init__()
        self.topN = n_topN

    def forward(self, aX_tr):
        """Average the topN largest values along dim 1."""
        aX_tr_sorted, _ = torch.sort(aX_tr, dim=1)
        return torch.mean(aX_tr_sorted[:, -self.topN:, :], dim=1)
# Test
aA = torch.randn(1, 200).reshape(4, 10, 5)
mean_pool = BestMeanPooling(2)
aA = mean_pool(aA)
# print(aA)
class SyntheticData(Dataset):
    def __init__(self, upper, x, y, z):
        self.aX_tr_sy = np.random.random_sample(upper).reshape(x, y, z)
        self.ay_tr_sy = np.random.randint(0, 2, x)

    def __len__(self):
        return len(self.aX_tr_sy)

    def __getitem__(self, idx):
        return self.aX_tr_sy[idx], self.ay_tr_sy[idx]
class MassCNN(nn.Module):
    def __init__(self, n_in, n_out, num_event=10, topN=5, prob=0.00):
        super(MassCNN, self).__init__()
        self.n_filter = n_out
        # first layer; its output size is the number of filters
        self.conv_1 = nn.Conv1d(n_in, n_out, kernel_size=1)
        self.bmpool_1 = BestMeanPooling(topN)
        self.dropout_L1 = nn.Dropout(p=prob)
        # fully connected layer
        self.fcn_1 = nn.Linear(3, 2)
        # self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, aX_tr):
        aX_tr = self.conv_1(aX_tr)
        aX_tr = torch.relu(aX_tr)
        # this collapses one dimension
        aX_tr = self.bmpool_1(aX_tr)
        if self.n_filter > 4:
            aX_tr = self.dropout_L1(aX_tr)
        aX_tr = aX_tr.view(aX_tr.size(0), -1)
        aX_tr = self.fcn_1(aX_tr)
        # aX_tr = self.softmax(aX_tr)
        return aX_tr
def train(n_epochs=10, lr=0.1, filter=12, maxpool_percent=100.00, n_iter=0,
          check_point=False):
    """
    Train the model with Monte Carlo sampling in which hyperparameters
    are randomly generated. Multiple simulations are run to obtain
    statistically robust models.

    params:
    returns:
        list of candidate solutions (models)
    """
    # Random data
    d_trains = SyntheticData(750, 5, 50, 3)
    d_valids = SyntheticData(750, 5, 50, 3)
    t_cost_per_epoch = []
    af_Tr = DataLoader(d_trains, batch_size=100, shuffle=True)
    af_Va = DataLoader(d_valids, batch_size=10, shuffle=True)
    t_solutions = []
    for epoch in range(n_epochs):
        # model, optimizer, and loss function
        o_mass_model = MassCNN(50, filter, 3, 5)
        o_optimizer = torch.optim.SGD(o_mass_model.parameters(), lr=lr)
        o_cost = torch.nn.CrossEntropyLoss()
        m_models = dict()
        fgr_cost = 0
        for af_X, af_y in af_Tr:  # iterate over batches of data
            o_optimizer.zero_grad()
            af_y_pr = o_mass_model(af_X.float())
            o_loss = o_cost(af_y_pr, af_y)
            o_loss.backward()     # back-propagate the loss
            o_optimizer.step()
            fgr_cost += o_loss.data
        t_cost_per_epoch.append(fgr_cost)
        # validation
        o_mass_model.eval()
        f_correct = 0
        for af_x_va, af_y_va in af_Va:
            z = o_mass_model(af_x_va.float()).data
            _, yhat = torch.max(z.data, 1)
            f_correct += (yhat == af_y_va).sum().item()
        f_accuracy = "%0.3f" % (f_correct / len(d_valids))  # normalize by sample size
        m_models["epoch"] = epoch
        m_models["iteration"] = n_iter
        m_models["val_acc"] = f_accuracy
        m_models["model_state"] = o_mass_model.state_dict()
        m_models["optimizer_state"] = o_optimizer.state_dict()
        m_models["loss"] = o_cost.state_dict()
        m_models["model"] = o_mass_model
        t_solutions.append(m_models.copy())
        del m_models
    return t_solutions
def get_best_models(t_models):
    """
    Select the best solutions from the pool.

    params:
        t_models: list of models. Each model is a dictionary containing
            parameters and hyperparameters.
    return:
        list of best models
    """
    n_best_sol = 5
    t_best_val_accs = []
    t_best_models = []
    for o_model in t_models:
        b_update = True
        f_val_acc = float(o_model["val_acc"])
        if len(t_best_val_accs) >= n_best_sol:
            f_lowest_acc = np.min(t_best_val_accs)
            if f_val_acc > f_lowest_acc:
                # replace the currently worst of the kept models
                i_cur_idx = t_best_val_accs.index(f_lowest_acc)
                t_best_val_accs.pop(i_cur_idx)
                t_best_models.pop(i_cur_idx)
            else:
                b_update = False
        if b_update:
            t_best_val_accs.append(f_val_acc)
            t_best_models.append(o_model)
    return t_best_models
# load the multiple models for testing
def test_model(d_tests, t_best_sols):
    """
    Test model performance on independent data. Since multiple models were
    generated, each is evaluated against the independent data and their
    average accuracy is returned as the final result.

    params:
        d_tests: data for the independent test
        t_best_sols: list of best solutions
    """
    t_test_accs = []
    for o_best_sol in t_best_sols:
        o_best_model = o_best_sol["model"]
        o_best_model.eval()
        f_correct = 0
        n_test_count = len(d_tests)
        for d_test_X, d_target_Y in d_tests:
            z = o_best_model(d_test_X.float()).data
            _, yhat = torch.max(z.data, 1)
            f_correct += (yhat == d_target_Y).sum().item()
        f_accuracy = "%0.3f" % (f_correct / n_test_count)
        t_test_accs.append(float(f_accuracy))
    print("Test accuracies: ", t_test_accs)
    return np.mean(t_test_accs)
# get 20% of the best solutions after iterating
# 15 to 30 times (these numbers are picked at random)
n_cell = 10
f_maxpools = [0.01, 1.0, 5.0, 20.0, 100.0]
n_maxpool_len = len(f_maxpools)
t_models = []
for n_trial in range(10):
    f_lr = 10 ** np.random.uniform(-3, -2)
    i_filter = np.random.choice(range(3, 10))
    f_max_pool = f_maxpools[n_trial % n_maxpool_len]
    f_maxpool = max(1, int(f_max_pool / 100. * n_cell))
    t_models += train(10, f_lr, i_filter, f_maxpool)
t_best_models = get_best_models(t_models)
# generate test data
d_tests = SyntheticData(750, 5, 50, 3)
af_test = DataLoader(d_tests)
f_mean_test_acc = test_model(af_test, t_best_models)
print("Mean test accuracy: ", f_mean_test_acc)
```
### Keras
```
from keras.layers import Input, Dense, Lambda, Activation, Dropout
from keras.layers.convolutional import Convolution1D
from keras.models import Model
from keras.optimizers import Adam
# the input layer
ncell = 20
nmark = 5
data_input = Input(shape=(ncell, nmark))
nfilter = 10
# the filters
print(data_input.shape)
conv = Convolution1D(nfilter, 1, activation='linear', name='conv1')(data_input)
print(conv.shape)
import keras.backend as K
K.print_tensor(conv)
a = nn.Conv1d(in_channels=1, out_channels=32, kernel_size=7, stride=1, padding=3)
"""
in_channel = 16
out_channel = 33
filter/kernal = 3
batch = 20
"""
m = nn.Conv1d(3, 6, 2, stride=1)
input = torch.randn(5, 3, 4)
print(input.shape)
output = m(input)
print(output.shape)
output.shape
input
output
from heapq import heappush, heappop
heap = []
data = [(1, 'J'), (4, 'N'), (3, 'H'), (2, 'O')]
for item in data:
    heappush(heap, item)
    print("item ", item, "heap ", heap)
while heap:
    print(heappop(heap)[1])
import numpy as np
a = list(np.random.uniform(10, 15, 10))
np.min(a)
a
```
| github_jupyter |
```
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.proportion import proportion_confint
from statsmodels.stats.weightstats import CompareMeans, DescrStatsW, ztest
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from statsmodels.stats.weightstats import *
from statsmodels.stats.proportion import proportion_confint
import warnings
warnings.filterwarnings('ignore')
```
First of all, let's copy the functions we need from the course notebook; we will need them below.
```
def proportions_diff_confint_ind(sample1, sample2, alpha=0.05):
    z = stats.norm.ppf(1 - alpha / 2.)
    p1 = float(sum(sample1)) / len(sample1)
    p2 = float(sum(sample2)) / len(sample2)
    left_boundary = (p1 - p2) - z * np.sqrt(p1 * (1 - p1) / len(sample1) + p2 * (1 - p2) / len(sample2))
    right_boundary = (p1 - p2) + z * np.sqrt(p1 * (1 - p1) / len(sample1) + p2 * (1 - p2) / len(sample2))
    return (left_boundary, right_boundary)

def proportions_diff_z_stat_ind(sample1, sample2):
    n1 = len(sample1)
    n2 = len(sample2)
    p1 = float(sum(sample1)) / n1
    p2 = float(sum(sample2)) / n2
    P = float(p1 * n1 + p2 * n2) / (n1 + n2)
    return (p1 - p2) / np.sqrt(P * (1 - P) * (1. / n1 + 1. / n2))

def proportions_diff_z_test(z_stat, alternative='two-sided'):
    if alternative not in ('two-sided', 'less', 'greater'):
        raise ValueError("alternative not recognized\n"
                         "should be 'two-sided', 'less' or 'greater'")
    if alternative == 'two-sided':
        return 2 * (1 - stats.norm.cdf(np.abs(z_stat)))
    if alternative == 'less':
        return stats.norm.cdf(z_stat)
    if alternative == 'greater':
        return 1 - stats.norm.cdf(z_stat)

def proportions_diff_confint_rel(sample1, sample2, alpha=0.05):
    z = stats.norm.ppf(1 - alpha / 2.)
    sample = list(zip(sample1, sample2))
    n = len(sample)
    f = sum([1 if (x[0] == 1 and x[1] == 0) else 0 for x in sample])
    g = sum([1 if (x[0] == 0 and x[1] == 1) else 0 for x in sample])
    left_boundary = float(f - g) / n - z * np.sqrt(float(f + g) / n**2 - float((f - g)**2) / n**3)
    right_boundary = float(f - g) / n + z * np.sqrt(float(f + g) / n**2 - float((f - g)**2) / n**3)
    return (left_boundary, right_boundary)

def proportions_diff_z_stat_rel(sample1, sample2):
    sample = list(zip(sample1, sample2))
    n = len(sample)
    f = sum([1 if (x[0] == 1 and x[1] == 0) else 0 for x in sample])
    g = sum([1 if (x[0] == 0 and x[1] == 1) else 0 for x in sample])
    return float(f - g) / np.sqrt(f + g - float((f - g)**2) / n)
```
One episode of "MythBusters" tested whether yawning really is contagious. Fifty subjects took part, each applying for a spot on the show. Each of them talked to a recruiter; at the end of 34 of the 50 interviews the recruiter yawned. The subjects were then asked to wait for the recruiter's decision in an adjacent empty room.
While waiting, 10 of the 34 subjects in the experimental group and 4 of the 16 subjects in the control group started yawning, so the difference in the proportion of yawners between the two groups was about 4.4%. The hosts concluded that the myth of contagious yawning was confirmed.
Can we claim that the proportions of yawners in the control and experimental groups differ statistically significantly? Compute the achieved significance level under the one-sided alternative that yawning is contagious, rounded to four decimal places.
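With the helper functions above, the computation is short. The sketch below inlines the z-statistic so it is self-contained; the counts (10 of 34, 4 of 16) come straight from the problem statement:

```python
import numpy as np
from scipy import stats

# 1 = yawned, 0 = did not yawn
experimental = np.array([1] * 10 + [0] * 24)  # 10 of 34 subjects yawned
control = np.array([1] * 4 + [0] * 12)        # 4 of 16 subjects yawned

def z_stat_ind(sample1, sample2):
    n1, n2 = len(sample1), len(sample2)
    p1, p2 = sum(sample1) / n1, sum(sample2) / n2
    P = (p1 * n1 + p2 * n2) / (n1 + n2)  # pooled proportion
    return (p1 - p2) / np.sqrt(P * (1 - P) * (1. / n1 + 1. / n2))

z = z_stat_ind(experimental, control)
p_value = 1 - stats.norm.cdf(z)  # one-sided 'greater' alternative
print(round(p_value, 4))  # 0.3729 -- far above 0.05, so the difference is not significant
```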
We have measurement data for two hundred Swiss 1000-franc banknotes that were in circulation in the first half of the 20th century. One hundred of the banknotes were genuine, and one hundred were counterfeit.
Set aside 50 random observations as a test sample using $\textbf{sklearn.cross_validation.train_test_split}$ (fix $\textbf{random state = 1}$). On the remaining $150$, fit two classifiers of banknote authenticity:
1. logistic regression on features $X_1, X_2, X_3$
2. logistic regression on features $X_4, X_5, X_6$
With each classifier, predict the class labels on the test sample. Are the two classifiers' error rates the same? Test the hypothesis and compute the achieved significance level. Enter the position of the first significant digit (for example, if you got $5.5\times10^{-8}$, enter 8).
```
df = pd.read_table('banknotes.txt')
y = df['real']
X = df.drop(['real'], axis=1)
X_train, X_test, y_train, y_test = train_test_split(X,y, random_state = 1, test_size = 50)
X1_train = X_train[['X1', 'X2','X3']]
X2_train = X_train[['X4','X5','X6']]
X1_test = X_test[['X1', 'X2','X3']]
X2_test = X_test[['X4','X5','X6']]
logreg = LogisticRegression()
logreg.fit(X1_train, y_train)
pred1 = logreg.predict(X1_test)
logreg.fit(X2_train, y_train)
pred2 = logreg.predict(X2_test)
pred1_acc = np.array([1 if pred1[i] == np.array(y_test)[i] else 0 for i in range(len(pred1))])
pred2_acc = np.array([1 if pred2[i] == np.array(y_test)[i] else 0 for i in range(len(pred2))])
print('First prediction accuracy:', sum(pred1_acc) / len(pred1_acc),
      '\n', 'Second prediction accuracy:', sum(pred2_acc) / len(pred2_acc))
```
Conclusion: the error rates are not the same.
For the previous problem, compute the $95\%$ confidence interval for the difference between the two classifiers' error rates. What is its boundary closest to zero? Round to four decimal places.
Let's build a $95\%$ confidence interval for the difference between the predictions.
```
print('95%% confidence interval for the difference in predictions: [%.4f, %.4f]' %
      proportions_diff_confint_rel(pred1_acc, pred2_acc))
print("p-value: %f" % proportions_diff_z_test(proportions_diff_z_stat_rel(pred1_acc, pred2_acc)))
```
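The `proportions_diff_*` helpers used above come from the course materials and are not defined in this notebook; a minimal sketch of the paired (related-samples) z-test and confidence interval, under the standard formulas for dependent proportions, might look like this:

```python
import numpy as np
from scipy import stats

def proportions_diff_z_stat_rel(sample1, sample2):
    # f: pairs where only the first sample is 1; g: pairs where only the second is 1
    pairs = list(zip(sample1, sample2))
    n = len(pairs)
    f = sum(1 for x, y in pairs if x == 1 and y == 0)
    g = sum(1 for x, y in pairs if x == 0 and y == 1)
    return (f - g) / np.sqrt(f + g - (f - g) ** 2 / n)

def proportions_diff_z_test(z_stat, alternative='two-sided'):
    # p-value under the normal approximation
    if alternative == 'two-sided':
        return 2 * (1 - stats.norm.cdf(np.abs(z_stat)))
    if alternative == 'less':
        return stats.norm.cdf(z_stat)
    return 1 - stats.norm.cdf(z_stat)

def proportions_diff_confint_rel(sample1, sample2, alpha=0.05):
    # confidence interval for the difference of two dependent proportions
    z = stats.norm.ppf(1 - alpha / 2.0)
    pairs = list(zip(sample1, sample2))
    n = len(pairs)
    f = sum(1 for x, y in pairs if x == 1 and y == 0)
    g = sum(1 for x, y in pairs if x == 0 and y == 1)
    half = z * np.sqrt(f + g - (f - g) ** 2 / n) / n
    return (f - g) / n - half, (f - g) / n + half
```

With `pred1_acc` and `pred2_acc` as the 0/1 correctness arrays above, these reproduce the test and interval printed in the previous cell.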
Every year, more than 200,000 people worldwide take the standardized GMAT exam when applying to MBA programs. The mean score is 525 points, with a standard deviation of 100 points.
One hundred students completed special preparatory courses and took the exam. Their mean score was 541.4. Test the null hypothesis that the program is ineffective against the one-sided alternative that the program works. Is the null hypothesis rejected at the 0.05 significance level? Enter the achieved significance level, rounded to 4 decimal places.
```
n = 100            # number of students who took the courses
mean_result = 525  # population mean score (mu_0)
stand_dev = 100    # population standard deviation (sigma)
mean_spec = 541.4  # sample mean after the preparatory courses
alpha = 0.05
```
We implement the formula: $Z(X^n) = \frac{\overline{X}-\mu_0}{\frac{\sigma}{\sqrt{n}}}$
```
# np and scipy.stats are assumed imported earlier in the notebook
def z_conf(mu, sigma, n, x_mean):
    # Z-statistic for a sample mean with known population sigma
    return (x_mean - mu) / (sigma / np.sqrt(n))

print(z_conf(mu=mean_result, x_mean=mean_spec, n=n, sigma=stand_dev))
# one-sided p-value: P(Z >= z)
print(round(1 - stats.norm.cdf(z_conf(mu=mean_result, x_mean=mean_spec, n=n, sigma=stand_dev)), 4))
```
Now evaluate the effectiveness of preparatory courses whose 100 graduates scored 541.5 on average. Is the same null hypothesis rejected at the 0.05 significance level against the same alternative? Enter the achieved significance level, rounded to 4 decimal places.
```
print(z_conf(mu=mean_result, x_mean=541.5, n=n, sigma=stand_dev))
print(round(1 - stats.norm.cdf(z_conf(mu=mean_result, x_mean=541.5, n=n, sigma=stand_dev)), 4))
```
# Team Ares -- Task 1 Report -- Fall 2020
## Contributions:
### Cody Shearer
- Created/managed team repository.
- Helped set up development environments.
- Added setup instructions for PyCharm jupyter notebooks.
- Organized team meetings.
- Created experiments, evaluation, and report on BIM attacks.
### Zhymir Thompson
### Mahmudul Hasan
### Vincent Davidson
___
## Additional setup for PyCharm Jupyter notebooks (optional)
While Jupyter notebooks can be opened and run in the browser, running them in PyCharm requires the Professional edition of PyCharm (free for students). Once you have it installed, opening this notebook should prompt you to install a Jupyter notebook extension. After that extension is installed, run the following in the terminal [(solution from here)](https://youtrack.jetbrains.com/issue/PY-36913), replacing "athena" with the name of your conda environment if it differs:
```
conda activate athena
python -m ipykernel install --user --name athena --display-name "Python (athena)"
```
Finally, select `Python (athena)` as the jupyter interpreter (visible in the bar above, when the notebook is open).
## BIM Attack and Evaluation
Here we consider an adversarial attack on a convolutional neural network (CNN) trained on a subset (10%) of the MNIST dataset using five variations of the [basic iterative method](https://arxiv.org/pdf/1607.02533.pdf) (BIM). We hold the epsilon value constant at 0.10 while varying the maximum number of iterations to explore how this may impact the error rate of the undefended model (UM), an athena ensemble, and PGD-ADT. We test BIM with the following parameters:
- epsilon: 0.10
- max_iter: 100, 90, 80, 70, 60
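For reference, here is a minimal NumPy sketch of the BIM update rule, assuming an L-infinity ball of radius epsilon around the input and a generic `grad_fn` that returns the loss gradient with respect to the input (both names are ours, not from the project code):

```python
import numpy as np

def bim_attack(x, grad_fn, epsilon=0.10, alpha=0.01, max_iter=100):
    """Basic Iterative Method (sketch): repeatedly step in the sign of the
    loss gradient, then clip back into the epsilon-ball around the original x."""
    x_adv = x.copy()
    for _ in range(max_iter):
        # one FGSM-style step of size alpha
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        # project back into the L-infinity ball of radius epsilon around x
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
        # keep pixels in the valid [0, 1] range
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

Once enough steps have been taken to reach the ball boundary, extra iterations only refine the perturbation inside the same budget, which is consistent with `max_iter` having a small effect at fixed epsilon.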
Using the following configurations, we generate AEs and evaluate their effectiveness against the UM, the ensemble model, and PGD-ADT, using `notebooks/Task1_GenerateAEs_ZeroKnowledgeModel.ipynb`:
- `src/configs/task1/athena-mnist.json`
- `src/configs/task1/attack-bim-mnist.json`
- `src/configs/task1/data-bim-mnist.json`
The AEs can be found at:
- `AE-mnist-cnn-clean-bim_eps0.1_maxiter60.npy`
- `AE-mnist-cnn-clean-bim_eps0.1_maxiter70.npy`
- `AE-mnist-cnn-clean-bim_eps0.1_maxiter80.npy`
- `AE-mnist-cnn-clean-bim_eps0.1_maxiter90.npy`
- `AE-mnist-cnn-clean-bim_eps0.1_maxiter100.npy`
### Undefended Model Results
We find that the error rate changes only for the UM, and only twice. We might have expected any drops to appear at the lowest iteration counts (70 and 60), perhaps as some upper bound is reached. Instead, the drops occur from 100 to 90 iterations and from 70 to 60; the error rate is identical at 90, 80, and 70.
### Ensemble and PGD-ADT Results
The ensemble has nearly the same error rate as PGD-ADT, which in all cases is about 2%.
| BIM Error Rate (epsilon=0.1) | | | | | |
|------------------------|-------------|-------------|-------------|---------------------------------------------|---------------------------------------------|
| Max Iterations | UM | Ensemble | PGD-ADT | 9->1 | 4->9 |
| 100 | 0.933534743 | 0.022155086 | 0.025176234 |  |  |
| 90 | 0.930513595 | 0.022155086 | 0.025176234 |  |  |
| 80 | 0.930513595 | 0.022155086 | 0.025176234 |  |  |
| 70 | 0.930513595 | 0.022155086 | 0.025176234 |  |  |
| 60 | 0.926485398 | 0.022155086 | 0.025176234 |  |  |
In conclusion, BIM is only effective against the UM, with the error rates of the ensemble model and PGD-ADT staying around 2%. Changes to the maximum iterations for BIM have only a slight effect on the UM, with the ensemble model and PGD-ADT defenses seeing no change.
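The error rates reported in the table are simply the fraction of misclassified test samples; a quick sketch with made-up labels and predictions:

```python
import numpy as np

# hypothetical true labels and model predictions, purely for illustration
y_true = np.array([0, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1])

# error rate = fraction of samples where the prediction disagrees with the label
error_rate = np.mean(y_pred != y_true)
print(error_rate)  # 2 of 6 predictions are wrong
```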
### PGD Results
| PGD Error Rate | | | |
|------------------------|-------------|-------------|-------------|
| | UM | Ensemble | PGD-ADT |
| Epsilon=0.1 | 0.9 | 1.0 | 1.0 |
| Epsilon=0.3 | 0.8 | 0.9 | 1.0 |
| Epsilon=0.5 | 0.7 | 0.9 | 0.8 |
| Epsilon=0.7 | 0.7 | 0.9 | 0.7 |
| Epsilon=0.8 | 0.7 | 0.9 | 0.7 |
# An Introduction to Jupyter Notebooks
You are now officially using a Jupyter notebook! This tutorial will show you some of the basics of using a notebook, including how to create the cells, run code, and save files for future use.
Jupyter notebooks are based on IPython, whose notebook interface entered development in the 2006/7 timeframe. The existing Python interpreter was limited in functionality, and work was started to create a richer development environment. By 2011 those development efforts resulted in the release of the IPython Notebook (http://blog.fperez.org/2012/01/ipython-notebook-historical.html).
Jupyter notebooks were a spinoff (2014) from the original IPython project. IPython continues to be the kernel that Jupyter runs on, but the notebooks are now a project on their own.
Jupyter notebooks run in a browser and communicate to the backend IPython server which renders this content. These notebooks are used extensively by data scientists and anyone wanting to document, plot, and execute their code in an interactive environment. The beauty of Jupyter notebooks is that you document what you do as you go along.
## A Quick Tour
This brief introduction will explain the various parts of a Jupyter notebook and how you interact with it. The remainder of the labs in this series will be using Jupyter notebooks so you will have to become really familiar with them!
## File Menu
You may have started this notebook by selecting it from the table of contents, but if you use standard Jupyter notebooks, then you would be presented with a similar file menu.

To start using a notebook, all you need to do is click on its name. So for this notebook, you would have selected _An Introduction to Jupyter Notebooks_. If you want to manage the notebooks (i.e., delete them or place them into a folder), you can select them on the left-hand side and then execute the action from the pull-down list (the arrow just below the Running tab).
The Running tab shows you which notebooks are currently active or running. Each notebook is independent from the others. This means that there is no sharing of data or variables between each notebook because they are running on different threads. When you shut down a notebook, you are stopping its process or thread in the system.
If you need to upload a new notebook (or replace an existing one), you can use the Upload button on the far right hand side. This will give you a file menu on your local system where you can select a notebook to upload. Jupyter notebooks have the extension .ipynb (IPython Notebook) which contains all of the notebook information in a JSON format.
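As a small illustration of that format, the sketch below builds a minimal notebook dictionary and round-trips it through JSON the way Jupyter does when saving a file; the field names follow the nbformat v4 schema, while the cell contents are invented:

```python
import json

# A minimal notebook in the .ipynb (JSON) format, per the nbformat v4 schema
nb = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": ["# Hello"]},
        {"cell_type": "code", "metadata": {}, "execution_count": None,
         "outputs": [], "source": ["print('Hello World')"]},
    ],
}

# Round-trip through JSON, as Jupyter does when saving/loading the file
text = json.dumps(nb, indent=1)
loaded = json.loads(text)
print([c["cell_type"] for c in loaded["cells"]])  # → ['markdown', 'code']
```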
If you want to create a brand new notebook, you would select the New button that is beside the Upload button. The New button may ask what type of notebook you want. It could be Python 2 or 3, or even a different language-based notebook (Scala, for instance). This image only has Python 3 installed, so that will be your only choice when creating a notebook.
## The Tool Bar
At the top of this page you should see the following toolbar.

The tool bar is found at the top of your Jupyter Notebook. There are three sections that you need to be familiar with.
* Title (An Introduction...)
* File/Edit/View... Menu
* Save/Add Cell/... Icons
### Title
The top of the notebook has the title of the contents. The name of the notebook can be changed by clicking on the title. This will open up a dialog which gives you the option of changing the name of the notebook.

Note that this will create a new copy of the notebook with this name. One important behavior of Jupyter notebooks is that notebooks "autosave" the contents every few minutes (you know how much we hate losing work in the event of a crash). Changing the name of the title will make sure any changes get saved under the new name. However, changes will probably have been saved to the old name up to this point because of autosave. For that reason, it is better to make a new copy of the notebook before starting to edit it.
### File/Edit/View Menu
The menu bar contains options to `File`, `Edit`, `View` and perform other administrative actions within the Jupyter notebook. The `File` option gives you options to save the file as a checkpoint (a version that you can revert to), make a copy of the notebook, rename it, or download it. Of particular interest is the `Copy` command. This will make a copy of the existing notebook and start that up in a separate tab in the browser. You can then view and edit this copy rather than changing the original.
The other option is to checkpoint your progress at regular intervals and then use the `Revert to Checkpoint` to restore the notebook to a previous version. The `Download` option is also very important to be familiar with. The notebook lives within your Jupyter environment so you may not know what the full file path is to access it. Use this option to download the file to your local operating system for safe keeping.
<p>

The seven additional menu items are:
* **Edit** - These menu items are used for editing the cells. The icons below the menus are equivalent to most of these menus.
* **View** - View will turn on Header information, Line numbers, Tool bars and additional Cell information
* **Insert** - Insert new cells above or below the current cell
* **Cell** - This menu item lets you run code in a cell, run all cells, remove output from the cells or change the cell type
* **Kernel** - The kernel that is running the current notebook can be restarted or stopped if there appears to be a problem with it
* **Widgets** - Widgets are add-ons for Jupyter notebooks.
* **Help** - If you need help, check out this menu.
Some important menu items that you may want to use:
- **View/Toggle Line Numbers** should be turned on if you have a substantial amount of code. This makes it easier to find errors when Python generates an error message with a line number. Note that this only applies to code cells.
- **Insert/Above** is useful if you don't want to move your cursor to the cell above before hitting the [+] icon.
- **Cell/All Output/Clear** will get rid of any output that your notebook has produced so that you can start over again.
- **Cell/Run All** is useful if you are lazy or just want to test your entire notebook at once!
- **Kernel/Restart** & Clear Output should be used if the notebook appears to hang and you want to start from scratch again.
### Menu Bar Icons
The icons below the menu bar are used to edit the cells within the notebook.

The icons from left to right are:
* **Save** - save the current notebook. Note: This is not a checkpoint save so you are saving a new copy.
* **[+]** - Add a new cell below the current cell
* **Cut** - Delete the current cell, but a copy is kept in the event you want to paste it somewhere else in the notebook
* **Copy** - Copy the current cell
* **Paste** - Paste a cell below the current cell
* **Up** - Move the current cell up one
* **Down** - Move the current cell down one
* **[>|]** - Execute the current code in the cell, or render the markdown content
* **Stop** - Stop any execution that is currently taking place
* **Restart** - Restart the kernel (to clear out all previous variables, etc.)
* **Cell Type** - Select the type of Cell: Code, Markdown, Raw, or Heading
* **Command** - Show a list of commands
When you create a new cell it will default to containing code, so if you want it to contain Markdown, you will need to select `Markdown` from the cell type list. The cut/copy/paste are similar to any other program with one exception. You can't use Ctrl-C and Ctrl-V to cut and paste cells. These shortcuts can be used for text, but not for an entire cell. Jupyter notebooks will issue a warning if you try to use these to manipulate cells. In addition, if you copy a cell in one notebook, it will not paste into a different notebook! You need to select the contents of the cell and then paste into the cell in the other notebook.
### Cell Contents
A Jupyter notebook contains multiple "cells" which can contain one of three different types of objects:
- **Code** - A cell that contains code that will run (usually Python)
- **Markdown** - A cell that contains text and formatting using a language called Markdown
- **Raw NBConvert** - A specialized cell that is rendered (displayed) using an extension, like mathematical formulas
We are going to keep it simple and only look at the two most common types of cells: code and markdown. The first example below is a code cell.
```
print('Hello World')
```
You can tell that this is a code cell because of the **"`In [ ]:`"** beside the cell and probably because it has some code in the cell! To "execute" the contents of the cell, you must click on the cell (place focus on the cell) and then either hit the run button icon **`[>|]`** or press `Shift-Return` (or `Shift-Enter`) on your keyboard. You can tell when the focus is on the code cell because it will be highlighted with a thin green box. Cells that contain text will be highlighted with a blue box.
**Action:** Try executing the code in the cell above.
If you were successful it should have printed "Hello World" below the cell. Any output, including errors, from a cell is placed immediately below the cell for you to view. If the code is running for an extended period of time, the notebook will display **`[*]`** until the statement completes.
The contents of the code cell can contain anything that the Jupyter/IPython interpreter can execute. Usually this is Python code, magic extensions, or even Scala, Java, etc... as long as the proper extensions have been added to the notebook.
## Summary
In summary, you've learned how to start, create, update and edit Jupyter notebooks. Jupyter notebooks are used extensively by the data science community, but they are finding their way into many other areas as well. If you are interested in what other applications use Jupyter notebooks, take a look at the list maintained on this web site: https://github.com/jupyter/jupyter/wiki/A-gallery-of-interesting-Jupyter-Notebooks.
#### Credits: IBM 2019, George Baklarz [baklarz@ca.ibm.com]
## How-to guide for Customer Churn use-case on Abacus.AI platform
This notebook provides you with a hands on environment to build a customer churn prediction model using the Abacus.AI Python Client Library.
We'll be using the [Telco Customer Churn Dataset](https://s3.amazonaws.com//realityengines.exampledatasets/customer_churn/telco.csv), which contains information about multiple users, their attributes, and whether or not they churned.
1. Install the Abacus.AI library
```
!pip install abacusai
```
We'll also import pandas and pprint tools for visualization in this notebook.
```
import pandas as pd # A tool we'll use to download and preview CSV files
import pprint # A tool to pretty print dictionary outputs
pp = pprint.PrettyPrinter(indent=2)
```
2. Add your Abacus.AI [API Key](https://abacus.ai/app/profile/apikey) generated using the API dashboard as follows:
```
#@title Abacus.AI API Key
api_key = '' #@param {type: "string"}
```
3. Import the Abacus.AI library and instantiate a client.
```
from abacusai import ApiClient
client = ApiClient(api_key)
```
## 1. Create a Project
Abacus.AI projects are containers that have ML features and trained models. By specifying a business **Use Case**, Abacus.AI tailors the deep learning algorithms to produce the best performing model catered specifically for your data.
We'll call the `list_use_cases` method to retrieve a list of the Use Cases currently available on the Abacus.AI platform.
```
client.list_use_cases()
```
In this notebook, we're going to create a customer churn prediction model using the [Telco Customer Churn Dataset](https://s3.amazonaws.com//realityengines.exampledatasets/customer_churn/telco.csv), which has user information, attributes, and whether or not each user churned. The 'CUSTOMER_CHURN' use case is best tailored for this situation.
```
#@title Abacus.AI Use Case
use_case = 'CUSTOMER_CHURN' #@param {type: "string"}
```
By calling the `describe_use_case_requirements` method we can view what features are required for this use_case and what features are recommended.
```
for requirement in client.describe_use_case_requirements(use_case):
pp.pprint(requirement.to_dict())
```
Finally, let's create the project.
```
churn_project = client.create_project(name='Customer Churn Prediction', use_case=use_case)
churn_project.to_dict()
```
**Note: When feature_groups_enabled is True then the use case supports feature groups (collection of ML features). Feature groups are created at the organization level and can be tied to a project to further use it for training ML models**
## 2. Add Datasets to your Project
Abacus.AI can read datasets directly from `AWS S3`, `Google Cloud Storage`, and other cloud storage buckets. You can also set up a dataset connector and pull your data from sources such as BigQuery or Snowflake, or upload and store your datasets directly with Abacus.AI. For this notebook, we will have Abacus.AI read the dataset directly from a public S3 bucket.
We are using one dataset for this notebook. We'll tell Abacus.AI how the dataset should be used when creating it by tagging the dataset with a special Abacus.AI **Dataset Type**.
- [Telco Customer Churn Dataset](https://s3.amazonaws.com//realityengines.exampledatasets/customer_churn/telco.csv) (**USER_ATTRIBUTES**):
This dataset contains information about multiple users for a specified company, along with whether or not they churned.
### Add the dataset to Abacus.AI
First we'll use Pandas to preview the file, then add it to Abacus.AI.
```
pd.read_csv('https://s3.amazonaws.com//realityengines.exampledatasets/customer_churn/telco.csv')
```
Using the Create Dataset API, we can tell Abacus.AI the public S3 URI of where to find the datasets. We will also give each dataset a Refresh Schedule, which tells Abacus.AI when it should refresh the dataset (take an updated/latest copy of the dataset).
If you're unfamiliar with Cron Syntax, Crontab Guru can help translate the syntax back into natural language: [https://crontab.guru/#0_12_\*_\*_\*](https://crontab.guru/#0_12_*_*_*)
**Note: This cron string will be evaluated in UTC time zone**
```
churn_dataset = client.create_dataset_from_file_connector(name='Telco Customer Churn',
location='s3://realityengines.exampledatasets/customer_churn/telco.csv', table_name='churn_prediction', refresh_schedule = '0 12 * * *')
datasets = [churn_dataset]
for dataset in datasets:
dataset.wait_for_inspection()
```
## 3. Create Feature Groups and add them to your Project
Datasets are created at the organization level and can be used to create feature groups as follows:
```
feature_group = client.create_feature_group(table_name='churn_pred_fg', sql='SELECT * FROM churn_prediction')
```
Adding Feature Group to the project:
```
client.add_feature_group_to_project(feature_group_id=feature_group.feature_group_id,project_id = churn_project.project_id)
```
Setting the Feature Group type according to the use case requirements:
```
client.set_feature_group_type(feature_group_id=feature_group.feature_group_id, project_id = churn_project.project_id, feature_group_type= "USER_ATTRIBUTES")
```
Check current Feature Group schema:
```
client.get_feature_group_schema(feature_group_id=feature_group.feature_group_id)
```
#### For each **Use Case**, there are special **Column Mappings** that must be applied to a column to fulfill use case requirements. We can find the list of available **Column Mappings** by calling the *Describe Use Case Requirements* API:
```
client.describe_use_case_requirements(use_case)[0].allowed_feature_mappings
client.set_feature_mapping(project_id = churn_project.project_id,feature_group_id= feature_group.feature_group_id, feature_name='Churn',feature_mapping='CHURNED_YN')
client.set_feature_group_column_mapping(project_id = churn_project.project_id,feature_group_id= feature_group.feature_group_id, column='customerID',column_mapping='USER_ID')
```
For each required Feature Group Type within the use case, you must assign the Feature group to be used for training the model:
```
client.use_feature_group_for_training(project_id = churn_project.project_id,feature_group_id= feature_group.feature_group_id)
```
Now that we have our feature groups assigned, we're almost ready to train a model!
To be sure that our project is ready to go, let's call project.validate to confirm that all the project requirements have been met:
```
churn_project.validate()
```
## 4. Train a Model
For each **Use Case**, Abacus.AI has a bunch of options for training. We can call the *Get Training Config Options* API to see the available options.
```
churn_project.get_training_config_options()
```
In this notebook, we'll just train with the default options, but definitely feel free to experiment, especially if you have familiarity with Machine Learning.
```
churn_model = churn_project.train_model(training_config={})
churn_model
```
After we start training the model, we can call this blocking call that routinely checks the status of the model until it is trained and evaluated:
```
churn_model.wait_for_evaluation()
```
**Note that model training might take anywhere from minutes to hours depending on the size of the datasets, the complexity of the models being trained, and a variety of other factors.**
## **Checkpoint** [Optional]
As model training can take hours to complete, your page could time out or you might end up hitting the refresh button; this section helps you restore your progress:
```
!pip install abacusai
import pandas as pd
import pprint
pp = pprint.PrettyPrinter(indent=2)
api_key = '' #@param {type: "string"}
from abacusai import ApiClient
client = ApiClient(api_key)
churn_project = next(project for project in client.list_projects() if project.name == 'Customer Churn Prediction')
churn_model = churn_project.list_models()[-1]
churn_model.wait_for_evaluation()
```
## 5. Evaluate your Model Metrics
After your model is done training you can inspect the model's quality by reviewing the model's metrics:
```
pp.pprint(churn_model.get_metrics().to_dict())
```
To get a better understanding on what these metrics mean, visit our [documentation](https://abacus.ai/app/help/useCases/CUSTOMER_CHURN/training) page.
## 6. Deploy Model
After the model has been trained, we need to deploy the model to be able to start making predictions. Deploying a model will reserve cloud resources to host the model for Realtime and/or batch predictions.
```
churn_deployment = client.create_deployment(name='Customer Churn Deployment', description='Customer Churn Prediction Model Deployment', model_id=churn_model.model_id)
churn_deployment.wait_for_deployment()
```
After the model is deployed, we need to create a deployment token for authenticating prediction requests. This token is only authorized to predict on deployments in this project, so it's safe to embed this token inside of a user-facing application or website.
```
deployment_token = churn_project.create_deployment_token().deployment_token
deployment_token
```
## 7. Predict
Now that you have an active deployment and a deployment token to authenticate requests, you can make the `predict_churn` API call below.
This command will return the probability of a user with specified attributes churning. The prediction would be performed based on the specified dataset, which, in this case, contains information about the user, their attributes, and whether or not they churned.
```
ApiClient().predict_churn(deployment_token=deployment_token,
deployment_id=churn_deployment.deployment_id,
query_data={"MonthlyCharges":69.7,"TotalCharges":560.85,"gender":"Male","SeniorCitizen":"1","Partner":"No","Dependents":"No","tenure":"8","PhoneService":"Yes","MultipleLines":"No","InternetService":"Fiber optic","OnlineSecurity":"No","OnlineBackup":"No","DeviceProtection":"No","TechSupport":"No","StreamingTV":"No","StreamingMovies":"No","Contract":"Month-to-month","PaperlessBilling":"Yes","PaymentMethod":"Electronic check"})
```
```
from jupyter_plotly_dash import JupyterDash
import dash
import dash_leaflet as dl
import dash_core_components as dcc
import dash_html_components as html
import plotly.express as px
import dash_table as dt
from dash.dependencies import Input, Output, State
import os
import numpy as np
import pandas as pd
from pymongo import MongoClient
from bson.json_util import dumps
#### DONE #####
# change animal_shelter and AnimalShelter to match your CRUD Python module file name and class name
from crud2 import AnimalShelter
# image encoder
import base64
###########################
# Data Manipulation / Model
###########################
# DONE: change for your username and password and CRUD Python module name
username = "aacuser"
password = "cs340"
dbname = "AAC"
shelter = AnimalShelter(username, password, dbname)
# class read method must support return of cursor object
df = pd.DataFrame.from_records(shelter.read({}))
#########################
# Dashboard Layout / View
#########################
app = JupyterDash('SimpleExample',)
#DONE: Add in Grazioso Salvare’s logo
image_filename = 'GraziosoSalvareLogo.png'
encoded_image = base64.b64encode(open(image_filename, 'rb').read())
app.layout = html.Div([
html.Div(id='hidden-div', style={'display':'none'}),
#DONE: Place the HTML image tag in the line below into the app.layout code according to your design
html.Center([
# customer image location with anchor tag to the client’s home page: www.snhu.edu.
html.A([
html.Img(id='customer-image',
src='data:image/png;base64,{}'.format(encoded_image.decode()),
alt='Grazioso Salvare Logo',
style={'width': 225})
], href="https://www.snhu.edu", target="_blank"),
#DONE: Also remember to include a unique identifier such as your name or date
html.H1("Animal Shelter Search Dashboard"),
html.H5("Developed by Arturo Santiago-Rivera", style={'color': 'green'})
]),
html.Hr(),
#DONE: Add in code for the interactive filtering options. For example, Radio buttons, drop down, checkboxes, etc.
# buttons at top of table to filter the data set to find cats or dogs
html.Div(className='row',
style={'display' : 'flex'},
children=[
html.Span("Filter by:", style={'margin': 6}),
html.Span(
html.Button(id='submit-button-one', n_clicks=0, children='Cats'),
style={'margin': 6}
),
html.Span(
html.Button(id='submit-button-two', n_clicks=0, children='Dogs'),
style={'margin': 6}
),
html.Span(
html.Button(id='reset-buttons', n_clicks=0, children='Reset', style={'background-color': 'red', 'color': 'white'}),
style={'margin': 6,}
),
html.Span("or", style={'margin': 6}),
html.Span([
dcc.Dropdown(
id='filter-type',
options=[
{'label': 'Water Rescue', 'value': 'wr'},
{'label': 'Mountain or Wilderness Rescue', 'value': 'mwr'},
{'label': 'Disaster Rescue or Individual Tracking', 'value': 'drit'}
],
placeholder="Select a Dog Category Filter",
style={'marginLeft': 5, 'width': 350}
)
])
]
),
html.Hr(),
dt.DataTable(
id='datatable-id',
columns=[
{"name": i, "id": i, "deletable": False, "selectable": True} for i in df.columns
],
data=df.to_dict('records'),
#DONE: Set up the features for your interactive data table to make it user-friendly for your client
#If you completed the Module Six Assignment, you can copy in the code you created here
editable = False,
filter_action = "native",
sort_action = "native",
sort_mode = "multi",
column_selectable = False,
row_selectable = False,
row_deletable = False,
selected_columns = [],
selected_rows = [0],
page_action = "native",
page_current = 0,
page_size = 10,
),
html.Br(),
html.Hr(),
#This sets up the dashboard so that your chart and your geolocation chart are side-by-side
html.Div(className='row',
style={'display' : 'flex'},
children=[
html.Div(
id='graph-id',
className='col s12 m6',
),
html.Div(
id='map-id',
className='col s12 m6',
)
]
),
#DONE: Also remember to include a unique identifier such as your name or date (footer identifier)
html.Div([
html.Hr(),
html.P([
"Module 7-1 Project Two Submission - Prof. Tad Kellogg M.S.",
html.Br(),
"CS-340-T3237 Client/Server Development 21EW3 - Southern New Hampshire University",
html.Br(),
"February 21, 2021"
], style={'fontSize': 12})
])
])
#############################################
# Interaction Between Components / Controller
#############################################
# DONE: This callback add interactive dropdown filter option to the dashboard to find dogs per category
# or interactive button filter option to the dashboard to find all cats or all dogs
@app.callback(
Output('datatable-id', 'data'),
[Input('filter-type', 'value'),
Input('submit-button-one', 'n_clicks'),
Input('submit-button-two', 'n_clicks')]
)
def update_dashboard(selected_filter, btn1, btn2):
    if selected_filter == 'drit':
        df = pd.DataFrame(list(shelter.read(
            {
                "animal_type": "Dog",
                "breed": {"$in": ["Doberman Pinscher", "German Shepherd", "Golden Retriever", "Bloodhound", "Rottweiler"]},
                "sex_upon_outcome": "Intact Male",
                # both bounds must share one key; duplicate dict keys silently drop the first
                "age_upon_outcome_in_weeks": {"$gte": 20, "$lte": 300}
            }
        )))
    elif selected_filter == 'mwr':
        df = pd.DataFrame(list(shelter.read(
            {
                "animal_type": "Dog",
                "breed": {"$in": ["German Shepherd", "Alaskan Malamute", "Old English Sheepdog", "Siberian Husky", "Rottweiler"]},
                "sex_upon_outcome": "Intact Male",
                "age_upon_outcome_in_weeks": {"$gte": 26, "$lte": 156}
            }
        )))
    elif selected_filter == 'wr':
        df = pd.DataFrame(list(shelter.read(
            {
                "animal_type": "Dog",
                "breed": {"$in": ["Labrador Retriever Mix", "Chesapeake Bay Retriever", "Newfoundland"]},
                "sex_upon_outcome": "Intact Female",
                "age_upon_outcome_in_weeks": {"$gte": 26, "$lte": 156}
            }
        )))
# higher number of button clicks to determine filter type
elif (int(btn1) > int(btn2)):
df = pd.DataFrame(list(shelter.read({"animal_type":"Cat"})))
elif (int(btn2) > int(btn1)):
df = pd.DataFrame(list(shelter.read({"animal_type":"Dog"})))
else:
df = pd.DataFrame.from_records(shelter.read({}))
data = df.to_dict('records')
return data
# This callback reset the clicks of the cat and dog filter button
@app.callback(
[Output('submit-button-one', 'n_clicks'),
Output('submit-button-two', 'n_clicks')],
[Input('reset-buttons', 'n_clicks')]
)
def update(reset):
return 0, 0
# This callback will highlight a column or row on the data table when the user, at first, selects it on the currently visible page
@app.callback(
Output('datatable-id', 'style_data_conditional'),
[Input('datatable-id', 'selected_columns'),
Input('datatable-id', "derived_viewport_selected_rows"),
Input('datatable-id', 'active_cell')]
)
def update_styles(selected_columns, selected_rows, active_cell):
if active_cell is not None:
style = [{
'if': { 'row_index': active_cell['row'] },
'background_color':'#a5d6a7'
}]
else:
style = [{
'if': { 'row_index': i },
'background_color':'#a5d6a7'
} for i in selected_rows]
return (style +
[{
'if': { 'column_id': i },
'background_color': '#80deea'
} for i in selected_columns]
)
# This callback adds a pie chart displaying breed percentages from the interactive data table
@app.callback(
Output('graph-id', "children"),
[Input('datatable-id', "derived_viewport_data")]
)
def update_graphs(viewData):
### DONE: ####
dff = pd.DataFrame.from_dict(viewData)
# code for pie chart
fig = px.pie(
dff,
names='breed',
title='Animal Breeds Pie Chart'
)
return [dcc.Graph(figure=fig)]
# This callback adds a geolocation chart that displays data from the interactive data table
@app.callback(
Output('map-id', "children"),
[Input('datatable-id', "derived_viewport_data"),
Input('datatable-id', "derived_viewport_selected_rows"),
Input('datatable-id', "active_cell")]
)
def update_map(viewData, selected_rows, active_cell):
# DONE: Add in the code for your geolocation chart
dff = pd.DataFrame.from_dict(viewData)
# define marker position of one selected row
if active_cell is not None:
row = active_cell['row']
else:
row = selected_rows[0]
lat = dff.loc[row,'location_lat']
long = dff.loc[row,'location_long']
name = dff.loc[row,'name']
breed = dff.loc[row,'breed']
animal = dff.loc[row, 'animal_type']
age = dff.loc[row, 'age_upon_outcome']
if name == "":
name = "No Name"
return [
dl.Map(
style={'width': '1000px', 'height': '500px'},
center=[lat,long], zoom=10,
children=[
dl.TileLayer(id="base-layer-id"),
# Marker with tool tip and popup
dl.Marker(
position=[lat,long],
children=[
dl.Tooltip("({:.3f}, {:.3f})".format(lat,long)),
dl.Popup([
html.H2(name),
html.P([
html.Strong("{} | Age: {}".format(animal,age)),
html.Br(),
breed])
])
]
)
]
)
]
# App execution
app
```
| github_jupyter |
```
import sys # for automation and parallelisation
manual, scenario = (True, 'base') if 'ipykernel' in sys.argv[0] else (False, sys.argv[1])
if manual:
%matplotlib inline
import numpy as np
import pandas as pd
from quetzal.model import stepmodel
```
# Modelling steps 1 and 2.
## Saves transport demand between zones
## Needs zones
```
input_path = '../input/transport_demand/'
output_path = '../output/'
model_path = '../model/'
sm = stepmodel.read_json(model_path + 'de_zones')
```
### Emission and attraction with quetzal
Steps: generation and distribution --> transport demand in volumes.

Transport volumes can be generated with the function `step_distribution(impedance_matrix=None, **od_volume_from_zones_kwargs)`:

- `impedance_matrix`: an OD unstacked friction dataframe used to compute the distribution.
- `od_volume_from_zones_kwargs`: if the friction matrix is not provided, it will be computed automatically using a gravity distribution with the following parameters:
  - `power` (int): the gravity exponent
  - `intrazonal` (bool): set the intrazonal distance to 0 if False, compute a characteristic distance otherwise.

Alternatively, the volumes can be created from input data, as done below.
### Load transport demand data from VP2030
The German federal government's transport study "[Bundesverkehrswegeplan 2030](https://www.bmvi.de/SharedDocs/DE/Artikel/G/BVWP/bundesverkehrswegeplan-2030-inhalte-herunterladen.html)" uses origin-destination matrices at NUTS3-level resolution and makes them accessible, under copyright restrictions, for the base year and the prognosis year. These matrices cannot be published in their original form.
```
vp2010 = pd.read_excel(input_path + 'PVMatrix_BVWP15_A2010.xlsx')
vp2030 = pd.read_excel(input_path + 'PVMatrix_BVWP15_P2030.xlsx')
#print(vp2010.shape)
vp2010[vp2010.isna().any(axis=1)]
for df in [vp2010, vp2030]:
df.rename(columns={'# Quelle': 'origin', 'Ziel': 'destination'}, inplace=True)
def get_vp2017(vp2010_i, vp2030_i):
return vp2010_i + (vp2030_i - vp2010_i) * (7/20)
# Calculate an OD table for the year 2017 by linear interpolation
vp2017 = get_vp2017(vp2010.set_index(['origin', 'destination']),
vp2030.set_index(['origin', 'destination']))
vp2017.dropna(how='all', inplace=True)
#print(vp2010.shape)
vp2017[vp2017.isna().any(axis=1)]
vp2017 = vp2017[list(vp2017.columns)].astype(int)
#vp2017.head()
```
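The linear interpolation in `get_vp2017` can be sanity-checked in isolation: 2017 lies 7 years into the 20-year span between 2010 and 2030, so each cell moves 35% of the way from its 2010 value to its 2030 value. A minimal sketch with made-up trip counts:

```python
def get_vp2017(vp2010_i, vp2030_i):
    # 2017 is 7/20 of the way from 2010 to 2030
    return vp2010_i + (vp2030_i - vp2010_i) * (7 / 20)

# a cell growing from 100 trips in 2010 to 200 in 2030 gets ~135 in 2017
assert abs(get_vp2017(100, 200) - 135.0) < 1e-9
# a constant cell stays constant
assert get_vp2017(50, 50) == 50.0
```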
### Create the volumes table
```
# Sum up trips by purpose
for suffix in ['Fz1', 'Fz2', 'Fz3', 'Fz4', 'Fz5', 'Fz6']:
vp2017[suffix] = vp2017[[col for col in list(vp2017.columns) if col[-3:] == suffix]].sum(axis=1)
# Merge purpose 5 and 6 due to calibration data limitations
vp2017['Fz6'] = vp2017['Fz5'] + vp2017['Fz6']
# Replace LAU IDs with NUTS IDs in origin and destination
nuts_lau_dict = sm.zones.set_index('lau_id')['NUTS_ID'].to_dict()
vp2017.reset_index(level=['origin', 'destination'], inplace=True)
# Zones that appear in the VP (within Germany) but not in the model
sorted([i for i in set(list(vp2017['origin'])+list(vp2017['destination'])) -
set([int(k) for k in nuts_lau_dict.keys()]) if i<=16077])
# Most of the above numbers are airports in the VP, however
# NUTS3-level zones changed after the VP2030
# Thus the VP table needs to be updated manually
update_dict = {3156: 3159, 3152: 3159, # Göttingen
13001: 13075, 13002: 13071, 13005: 13073, 13006: 13074,
13051: 13072, 13052: 13071, 13053: 13072, 13054: 13076, 13055: 13071, 13056: 13071,
13057: 13073, 13058: 13074, 13059: 13075, 13060: 13076, 13061: 13073, 13062: 13075}
# What is the sum of all trips? For Validation
cols = [c for c in vp2017.columns if c not in ['origin', 'destination']]
orig_sum = vp2017[cols].sum().sum()
orig_sum
# Update LAU codes
vp2017['origin'] = vp2017['origin'].replace(update_dict)
vp2017['destination'] = vp2017['destination'].replace(update_dict)
sorted([i for i in set(list(vp2017['origin'])+list(vp2017['destination'])) -
set([int(k) for k in nuts_lau_dict.keys()]) if i<=16077])
# Replace LAU with NUTS
vp2017['origin'] = vp2017['origin'].astype(str).map(nuts_lau_dict)
vp2017['destination'] = vp2017['destination'].astype(str).map(nuts_lau_dict)
# Restrict to cells in the model
vp2017 = vp2017[~vp2017.isna().any(axis=1)]
vp2017.shape
# What is the sum of all trips after dropping trips outside Germany?
vp_sum = vp2017[cols].sum().sum()
vp_sum / orig_sum
# Aggregate OD pairs
vp2017 = vp2017.groupby(['origin', 'destination']).sum().reset_index()
vp2017[cols].sum().sum() / orig_sum
```
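The two-step recoding above (first `replace` for outdated codes, then `map` to NUTS IDs) relies on the dictionary built from `sm.zones` having *string* keys, which is why the integer codes are cast with `astype(str)` first; codes without a match become `NaN` and are removed by the subsequent `isna()` filter. A toy illustration with made-up codes and NUTS IDs:

```python
import pandas as pd

# made-up LAU -> NUTS mapping, for illustration only
nuts_lau_dict = {"3159": "NUTS_A", "9162": "NUTS_B"}

origin = pd.Series([3159, 9162, 99999])
# cast to str before mapping, because the dict keys are strings
mapped = origin.astype(str).map(nuts_lau_dict)

assert list(mapped[:2]) == ["NUTS_A", "NUTS_B"]
assert pd.isna(mapped[2])  # unknown codes become NaN and are dropped later
```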
### Add car ownership segments
```
sm.volumes = vp2017[['origin', 'destination', 'Fz1', 'Fz2', 'Fz3', 'Fz4', 'Fz6']
].copy().set_index(['origin', 'destination'], drop=True)
# Car availabilities from MiD2017 data
av = dict(zip(list(sm.volumes.columns),
[0.970375, 0.965208, 0.968122, 0.965517, 0.95646]))
# Split purpose cells into car ownership classes
# nested loops: split each purpose column into car-availability classes
for col in list(sm.volumes.columns):
    for car in [0, 1]:
        sm.volumes[(col, car)] = sm.volumes[col] * abs((1 - car) - av[col])
    sm.volumes.drop(columns=col, inplace=True)
sm.volumes.reset_index(inplace=True)
```
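The split above can be checked on a toy frame: for each purpose column, the `car = 1` class receives the availability share `av[col]` and the `car = 0` class the remainder, so the two new columns always sum to the original. A minimal sketch with made-up numbers:

```python
import pandas as pd

volumes = pd.DataFrame({"Fz1": [100.0]})
av = {"Fz1": 0.97}  # made-up car-availability share

for col in list(volumes.columns):
    for car in [0, 1]:
        # car = 0 gets the share (1 - av), car = 1 gets the share av
        volumes[(col, car)] = volumes[col] * abs((1 - car) - av[col])
    volumes.drop(columns=col, inplace=True)

assert abs(volumes[("Fz1", 0)].iloc[0] - 3.0) < 1e-9
assert abs(volumes[("Fz1", 1)].iloc[0] - 97.0) < 1e-9
```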
## Save model
```
sm.volumes.shape
sm.volumes.columns
# Empty rows?
assert len(sm.volumes.loc[sm.volumes.sum(axis=1)==0])==0
# Saving volumes
sm.to_json(model_path + 'de_volumes', only_attributes=['volumes'], encoding='utf-8')
```
## Create validation table
Generate a normalised matrix for the year 2017 in order to validate model results against each other. It is needed for the calibration step.
```
# Merge purpose 5 and 6
for prefix in ['Bahn', 'MIV', 'Luft', 'OESPV', 'Rad', 'Fuß']:
vp2017[prefix + '_Fz6'] = vp2017[prefix + '_Fz5'] + vp2017[prefix + '_Fz6']
vp2017 = vp2017[[col for col in list(vp2017.columns) if col[-1]!='5']]
# Merge bicycle and foot
for p in [1,2,3,4,6]:
vp2017['non_motor_Fz' + str(p)] = vp2017['Rad_Fz' + str(p)] + vp2017['Fuß_Fz' + str(p)]
vp2017 = vp2017[[col for col in list(vp2017.columns) if not col[:3] in ['Rad', 'Fuß']]]
# Prepare columns
vp2017.set_index(['origin', 'destination'], drop=True, inplace=True)
vp2017 = vp2017[[col for col in vp2017.columns if col[:2]!='Fz']]
vp2017.columns
# Normalise
vp2017_norm = (vp2017-vp2017.min())/(vp2017.max()-vp2017.min()).max()
vp2017_norm.sample(5)
# Save normalised table
vp2017_norm.to_csv(input_path + 'vp2017_validation_normalised.csv')
vp2017_norm.columns = pd.MultiIndex.from_tuples(
[(col.split('_')[0], col.split('_')[-1]) for col in vp2017_norm.columns],
names=['mode', 'segment'])
if manual:
vp2017_norm.T.sum(axis=1).unstack('segment').plot.pie(
subplots=True, figsize=(16, 4), legend=False)
# Restrict to inter-cell traffic and cells of the model
vp2017_norm.reset_index(level=['origin', 'destination'], inplace=True)
vp2017_norm = vp2017_norm.loc[(vp2017_norm['origin']!=vp2017_norm['destination']) &
(vp2017_norm['origin'].notna()) &
(vp2017_norm['destination'].notna())]
vp2017_norm.set_index(['origin', 'destination'], drop=True, inplace=True)
if manual:
vp2017_norm.T.sum(axis=1).unstack('segment').plot.pie(
subplots=True, figsize=(16, 4), legend=False)
# Clear the RAM if notebook stays open
vp2010 = None
vp2030 = None
```
| github_jupyter |
# Identifying country names from incomplete house addresses
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc">
<ul class="toc-item">
<li><span><a href="#Introduction" data-toc-modified-id="Introduction-1">Introduction</a></span></li>
<li><span><a href="#Prerequisites" data-toc-modified-id="Prerequisites-2">Prerequisites</a></span></li>
<li><span><a href="#Imports" data-toc-modified-id="Imports-3">Imports</a></span></li>
<li><span><a href="#Data-preparation" data-toc-modified-id="Data-preparation-4">Data preparation</a></span></li>
<li><span><a href="#TextClassifier-model" data-toc-modified-id="TextClassifier-model-5">TextClassifier model</a></span></li>
<ul class="toc-item">
<li><span><a href="#Load-model-architecture" data-toc-modified-id="Load-model-architecture-5.1">Load model architecture</a></span></li>
<li><span><a href="#Model-training" data-toc-modified-id="Model-training-5.2">Model training</a></span></li>
<li><span><a href="#Validate-results" data-toc-modified-id="Validate-results-5.3">Validate results</a></span></li>
<li><span><a href="#Model-metrics" data-toc-modified-id="Model-metrics-5.4">Model metrics</a></span></li>
<li><span><a href="#Get-misclassified-records" data-toc-modified-id="Get-misclassified-records-5.5">Get misclassified records</a></span></li>
<li><span><a href="#Saving-the-trained-model" data-toc-modified-id="Saving-the-trained-model-5.6">Saving the trained model</a></span></li>
</ul>
<li><span><a href="#Model-inference" data-toc-modified-id="Model-inference-6">Model inference</a></span></li>
<li><span><a href="#Conclusion" data-toc-modified-id="Conclusion-7">Conclusion</a></span></li>
<li><span><a href="#References" data-toc-modified-id="References-8">References</a></span></li>
</ul></div>
# Introduction
[Geocoding](https://en.wikipedia.org/wiki/Geocoding) is the process of taking input text, such as an **address** or the name of a place, and returning a **latitude/longitude** location for that place. In this notebook, we will be picking up a dataset consisting of incomplete house addresses from 10 countries. We will build a classifier using `TextClassifier` class of `arcgis.learn.text` module to predict the country for these incomplete house addresses.
The house addresses in the dataset consist of text in multiple languages like English, Japanese, French, Spanish, etc. The dataset is a small subset of the house addresses taken from [OpenAddresses data](http://results.openaddresses.io/).
**A note on the dataset**
- The data is collected around 2020-05-27 by [OpenAddresses](http://openaddresses.io).
- The data licenses can be found in `data/country-classifier/LICENSE.txt`.
# Prerequisites
- Data preparation and model training workflows using arcgis.learn have a dependency on [transformers](https://huggingface.co/transformers/v3.0.2/index.html). Refer to the section **"Install deep learning dependencies of arcgis.learn module"** [on this page](https://developers.arcgis.com/python/guide/install-and-set-up/#Install-deep-learning-dependencies) for detailed documentation on the installation of the dependencies.
- **Labeled data**: For `TextClassifier` to learn, it needs to see documents/texts that have been assigned a label. Labeled data for this sample notebook is located at `data/country-classifier/house-addresses.csv`
- To learn more about how `TextClassifier` works, please see the guide on [Text Classification with arcgis.learn](https://developers.arcgis.com/python/guide/text-classification).
# Imports
```
import os
import zipfile
import pandas as pd
from pathlib import Path
from arcgis.gis import GIS
from arcgis.learn import prepare_textdata
from arcgis.learn.text import TextClassifier
gis = GIS('home')
```
# Data preparation
Data preparation involves splitting the data into training and validation sets, creating the necessary data structures for loading data into the model and so on. The `prepare_data()` function can directly read the training samples and automate the entire process.
```
training_data = gis.content.get('ab36969cfe814c89ba3b659cf734492a')
training_data
filepath = training_data.download(file_name=training_data.name)
with zipfile.ZipFile(filepath, 'r') as zip_ref:
zip_ref.extractall(Path(filepath).parent)
DATA_ROOT = Path(os.path.join(os.path.splitext(filepath)[0]))
data = prepare_textdata(DATA_ROOT, "classification", train_file="house-addresses.csv",
text_columns="Address", label_columns="Country", batch_size=64)
```
The `show_batch()` method can be used to see the training samples, along with labels.
```
data.show_batch(10)
```
# TextClassifier model
`TextClassifier` model in `arcgis.learn.text` is built on top of [Hugging Face Transformers](https://huggingface.co/transformers/v3.0.2/index.html) library. The model training and inferencing workflow are similar to computer vision models in `arcgis.learn`.
Run the command below to see what backbones are supported for the text classification task.
```
print(TextClassifier.supported_backbones)
```
Call the model's `available_backbone_models()` method with the backbone name to get the available models for that backbone. The call to the **available_backbone_models** method lists only a few of the available models for each backbone. Visit [this](https://huggingface.co/transformers/pretrained_models.html) link to get a complete list of models for each backbone.
```
print(TextClassifier.available_backbone_models("xlm-roberta"))
```
## Load model architecture
Invoke the `TextClassifier` class by passing the data and the backbone you have chosen. The dataset consists of house addresses in multiple languages like Japanese, English, French, Spanish, etc., hence we will use a [multi-lingual transformer backbone](https://huggingface.co/transformers/v3.0.2/multilingual.html) to train our model.
```
model = TextClassifier(data, backbone="xlm-roberta-base")
```
## Model training
The `learning rate`[[1]](#References) is a **tuning parameter** that determines the step size at each iteration while moving toward a minimum of a loss function; it represents the speed at which a machine learning model **"learns"**. `arcgis.learn` includes a learning rate finder, accessible through the model's `lr_find()` method, which can automatically select an **optimum learning rate** without requiring repeated experiments.
```
model.lr_find()
```
Training the model is an iterative process. We can keep training the model with its `fit()` method as long as the validation loss (or error rate) continues to go down with each training pass, also known as an epoch. This is indicative of the model learning the task.
```
model.fit(epochs=6, lr=0.001)
```
## Validate results
Once we have the trained model, we can look at its results to see how it performs.
```
model.show_results(15)
```
### Test the model prediction on an input text
```
text = """1016, 8A, CL RICARDO LEON - SANTA ANA (CARTAGENA), 30319"""
print(model.predict(text))
```
## Model metrics
To get a sense of how well the model is trained, we will calculate some important metrics for our `text-classifier` model. First, to find how accurate[[2]](#References) the model is in correctly predicting the classes in the dataset, we will call the model's `accuracy()` method.
```
model.accuracy()
```
Other important metrics to look at are Precision, Recall & F1-measures [[3]](#References). To find `precision`, `recall` & `f1` scores per label/class we will call the model's `metrics_per_label()` method.
```
model.metrics_per_label()
```
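As a refresher on what these per-label scores mean (independent of `arcgis.learn`): precision is the share of predictions for a label that were correct, recall is the share of true instances of that label that were found, and F1 is their harmonic mean. A pure-Python sketch with made-up country predictions:

```python
def per_label_scores(y_true, y_pred, labels):
    """Return {label: (precision, recall, f1)} for a toy classification."""
    scores = {}
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        predicted = sum(p == lab for p in y_pred)   # predicted positives
        actual = sum(t == lab for t in y_true)      # actual positives
        precision = tp / predicted if predicted else 0.0
        recall = tp / actual if actual else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores[lab] = (precision, recall, f1)
    return scores

scores = per_label_scores(["US", "FR", "US", "JP"],
                          ["US", "US", "US", "JP"],
                          ["US", "FR", "JP"])
# "US": precision 2/3 (one FR mislabeled as US), recall 1, F1 = 0.8
assert abs(scores["US"][2] - 0.8) < 1e-9
```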
## Get misclassified records
It's always a good idea to examine the cases where your model is not performing well. This step will help us to:
- Identify if there is a problem in the dataset.
- Identify if there is a problem with text/documents belonging to a specific label/class.
- Identify if there is a class imbalance in the dataset, in which case the model saw little labeled data for a particular class and hence could not learn that class properly.
To get the **misclassified records** we will call the model's `get_misclassified_records` method.
```
misclassified_records = model.get_misclassified_records()
misclassified_records.style.set_table_styles([dict(selector='th', props=[('text-align', 'left')])])\
.set_properties(**{'text-align': "left"}).hide_index()
```
## Saving the trained model
Once you are satisfied with the model, you can save it using the `save()` method. This creates an Esri Model Definition (EMD) file that can be used for inferencing on unseen data.
```
model.save("country-classifier")
```
# Model inference
The trained model can be used to classify new text documents using the predict method. This method accepts a string or a list of strings to predict the labels of these new documents/text.
```
text_list = data._train_df.sample(15).Address.values
result = model.predict(text_list)
df = pd.DataFrame(result, columns=["Address", "CountryCode", "Confidence"])
df.style.set_table_styles([dict(selector='th', props=[('text-align', 'left')])])\
.set_properties(**{'text-align': "left"}).hide_index()
```
# Conclusion
In this notebook, we have built a text classifier using `TextClassifier` class of `arcgis.learn.text` module. The dataset consisted of house addresses of 10 countries written in languages like English, Japanese, French, Spanish, etc. To achieve this we used a [multi-lingual transformer backbone](https://huggingface.co/transformers/v3.0.2/multilingual.html) like `XLM-RoBERTa` to build a classifier to predict the country for an input house address.
# References
[1] [Learning Rate](https://en.wikipedia.org/wiki/Learning_rate)
[2] [Accuracy](https://en.wikipedia.org/wiki/Accuracy_and_precision)
[3] [Precision, recall and F1-measures](https://scikit-learn.org/stable/modules/model_evaluation.html#precision-recall-and-f-measures)
| github_jupyter |
<a href="https://colab.research.google.com/github/Vladm0z/HSE_Biotech/blob/main/seminar3_CpG_freq.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install biopython
```
#Seq object
```
from Bio.Seq import Seq
gene = Seq("GTGAAAAAGATGCAATCTATCGTACTCGCACTTTCCCTGGTTCTGGTCGCTCCCATGGCA"
"GCACAGGCTGCGGAAATTACGTTAGTCCCGTCAGTAAAATTACAGATAGGCGATCGTGAT"
"AATCGTGGCTATTACTGGGATGGAGGTCACTGGCGCGACCACGGCTGGTGGAAACAACAT"
"TATGAATGGCGAGGCAATCGCTGGCACCTACACGGACCGCCGCCACCGCCGCGCCACCAT"
"AAGAAAGCTCCTCATGATCATCACGGCGGTCATGGTCCAGGCAAACATCACCGCTAA")
float(gene.count("G") + gene.count("C"))/len(gene)
gene = Seq("CGCCGCGcg")
gene.upper().count("CG")
gene = Seq("AAAA")
gene.upper().count("AA")
```
#Functions
```
def init_di_nt_dict():
all_nt = ['A', 'C', 'G', 'T']
di_nt_dict = {}
for nt1 in all_nt:
for nt2 in all_nt:
di_nt_dict[nt1+nt2] = 0
return di_nt_dict
def count_di_nt_in_seq(seq):
di_nt_dict = init_di_nt_dict()
for di_nt in di_nt_dict.keys():
di_nt_dict[di_nt] = seq.count(di_nt)
return di_nt_dict
def count_mono_nt_in_seq(seq):
all_nt = ['A', 'C', 'G', 'T']
count_dict = {}
for nt in all_nt:
count_dict[nt] = seq.count(nt)
return count_dict
def mono_nt_freq_in_seq(seq):
counts_dict = count_mono_nt_in_seq(seq)
total_counts = sum(counts_dict.values())
freq_dict = {}
for k in counts_dict.keys():
freq_dict[k] = counts_dict[k]/total_counts
return freq_dict
def di_nt_freq_in_seq(seq):
di_nt_counts = count_di_nt_in_seq(seq)
total_counts = sum(di_nt_counts.values())
di_nt_freq = {}
for di_nt in di_nt_counts.keys():
di_nt_freq[di_nt] = di_nt_counts[di_nt]/total_counts
return di_nt_freq
```
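Building on the idea behind the helpers above, the classic way to quantify CpG depletion is the observed/expected CpG ratio: the CG count divided by what the C and G content alone would predict. This self-contained helper is an added sketch (following the standard Gardiner-Garden & Frommer style definition), not part of the seminar code; note that `str.count` counts non-overlapping matches, as elsewhere in this notebook:

```python
def cpg_obs_exp_ratio(seq):
    # observed CG dinucleotides vs. the count expected from C and G content:
    # expected = (#C * #G) / length
    seq = str(seq).upper()
    expected = seq.count("C") * seq.count("G") / len(seq)
    return seq.count("CG") / expected if expected else 0.0

# CG-rich toy sequence: 4 observed CG vs. 4*4/8 = 2 expected
assert cpg_obs_exp_ratio("CGCGCGCG") == 2.0
```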
#Analysis
```
# Human: https://www.ncbi.nlm.nih.gov/genome/?term=human
from Bio import Entrez
from Bio import SeqIO
Entrez.email = "A.N.Other@example.com"
with Entrez.efetch(db="nucleotide", id="NC_000022.11", rettype="fasta", retmode="text") as handle:
chr22_record = SeqIO.read(handle, "fasta")
chr22_record
with Entrez.efetch(db="nucleotide", id="NC_000085.7", rettype="fasta", retmode="text") as handle:
mouse_record = SeqIO.read(handle, "fasta")
mouse_record
with Entrez.efetch(db="nucleotide", id="NC_000913", rettype="fasta", retmode="text") as handle:
bact_record = SeqIO.read(handle, "fasta")
bact_record
with Entrez.efetch(db="nucleotide", id="NC_003076.8", rettype="fasta", retmode="text") as handle:
arabidosis_record = SeqIO.read(handle, "fasta")
arabidosis_record
# target_record = chr22_record
#target_record = mouse_record
#target_record = bact_record
target_record = arabidosis_record
target_seq = target_record.seq.upper()
with Entrez.efetch(db="nucleotide", id="JMSD01000001.1", rettype="fasta", retmode="text") as handle:
target_record = SeqIO.read(handle, "fasta")
target_seq = target_record.seq.upper()
target_record
target_seq.count('N')
len(target_seq)
target_seq.count("CG")
target_seq.count("GC")
di_nt_freq_dict = di_nt_freq_in_seq(target_seq)
di_nt_freq_dict
import pandas as pd
plot_df = pd.DataFrame(di_nt_freq_dict.items())
plot_df.columns = ['di_nt', 'freq']
plot_df['type'] = 'seq_di_nt'
plot_df
import seaborn as sns
valor_plot = sns.barplot(
data= plot_df,
x= 'di_nt',
y= 'freq',
hue='type')
```
##Computing expected di-nucleotide frequencies
```
mono_nt_freqs = mono_nt_freq_in_seq(target_seq)
mono_nt_freqs
expected_di_nt_freqs = init_di_nt_dict()
expected_di_nt_freqs
for k in expected_di_nt_freqs.keys():
freq1 = mono_nt_freqs[k[0]]
freq2 = mono_nt_freqs[k[1]]
expected_di_nt_freqs[k] = freq1*freq2
#print('%s = %s * %s' % (k, freq1, freq2))
expected_di_nt_freqs
expected_df = pd.DataFrame(expected_di_nt_freqs.items())
expected_df.columns = ['di_nt', 'freq']
expected_df['type'] = 'expected'
expected_df
# Fig size: https://stackoverflow.com/a/47955814/310453
sns.set(rc={'figure.figsize':(11.7,8.27)})
sns.barplot(
data= pd.concat([plot_df, expected_df]),
x= 'di_nt',
y= 'freq',
hue='type'
).set_title(target_record.description)
```
##Shuffling nucleotides in the original sequence
```
import re
rand_seq = re.compile(r'[^ACGT]').sub('', str(target_seq))
print("%s => %s" % (len(target_seq), len(rand_seq)))
rand_seq[1:20]
from random import shuffle
nt_list = list(rand_seq)
shuffle(nt_list) # shuffles in place!
rand_seq = ''.join(nt_list)
rand_seq[1:20]
rand_freq_dict = di_nt_freq_in_seq(rand_seq)
rand_freq_dict
rand_df = pd.DataFrame(rand_freq_dict.items())
rand_df.columns = ['di_nt', 'freq']
rand_df['type'] = 'random'
rand_df.head()
_ = sns.barplot(
data= pd.concat([plot_df, expected_df, rand_df]),
x= 'di_nt',
y= 'freq',
hue='type'
).set_title(target_record.description)
```
| github_jupyter |
# SageMaker Serverless Inference
## HuggingFace Text Classification example
Amazon SageMaker Serverless Inference is a purpose-built inference option that makes it easy for you to deploy and scale ML models. Serverless Inference is ideal for workloads which have idle periods between traffic spurts and can tolerate cold starts. Serverless endpoints automatically launch compute resources and scale them in and out depending on traffic, eliminating the need to choose instance types or manage scaling policies. This takes away the undifferentiated heavy lifting of selecting and managing servers. Serverless Inference integrates with AWS Lambda to offer you high availability, built-in fault tolerance and automatic scaling.
Serverless Inference is a great choice for customers that have intermittent or unpredictable prediction traffic. For example, a document processing service used to extract and analyze data on a periodic basis. Customers that choose Serverless Inference should make sure that their workloads can tolerate cold starts. A cold start can occur when your endpoint doesn’t receive traffic for a period of time. It can also occur when your concurrent requests exceed the current request usage. The cold start time will depend on your model size, how long it takes to download, and your container startup time.
## Introduction
Text Classification can be used to solve various use-cases like sentiment analysis, spam detection, hashtag prediction etc.
This notebook demonstrates the use of the [HuggingFace `transformers` library](https://huggingface.co/transformers/) together with a custom Amazon sagemaker-sdk extension to fine-tune a pre-trained transformer on multi class text classification. In particular, the pre-trained model will be fine-tuned using the [`20 newsgroups dataset`](http://qwone.com/~jason/20Newsgroups/). To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.
<b>Notebook Setting</b>
- <b>SageMaker Classic Notebook Instance</b>: `ml.m5.xlarge` Notebook Instance & `conda_pytorch_p36 Kernel`
- <b>SageMaker Studio</b>: `Python 3 (PyTorch 1.6 Python 3.6 CPU Optimized)`
- <b>Regions Available</b>: SageMaker Serverless Inference is currently available in the following regions: US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Tokyo) and Asia Pacific (Sydney)
## Table of Contents
- Setup
- Data Preprocessing
- Model Training
- Deployment
- Endpoint Configuration (Adjust for Serverless)
- Serverless Endpoint Creation
- Endpoint Invocation
- Cleanup
- Conclusion
# Development Environment and Permissions
## Setup
If you run this notebook in SageMaker Studio, you need to make sure `ipywidgets` is installed and restart the kernel, so please uncomment the code in the next cell, and run it.
```
# %%capture
# import IPython
# import sys
# !{sys.executable} -m pip install ipywidgets
# IPython.Application.instance().kernel.do_shutdown(True) # has to restart kernel so changes are used
```
Let's install the required packages from HuggingFace and SageMaker
```
import sys
!{sys.executable} -m pip install "scikit_learn==0.20.0" "sagemaker>=2.86.1" "transformers==4.6.1" "datasets==1.6.2" "nltk==3.4.4"
```
Make sure SageMaker version is >= 2.86.1
```
import sagemaker
print(sagemaker.__version__)
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket = None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
s3_prefix = "huggingface_serverless/20_newsgroups"
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
### Data Preparation
Now we'll download a dataset from the web on which we want to train the text classification model.
In this example, let us train the text classification model on the [`20 newsgroups dataset`](http://qwone.com/~jason/20Newsgroups/). The `20 newsgroups dataset` consists of 20000 messages taken from 20 Usenet newsgroups.
```
import os
import shutil
data_dir = "20_newsgroups_bulk"
if os.path.exists(data_dir): # cleanup existing data folder
shutil.rmtree(data_dir)
!aws s3 cp s3://sagemaker-sample-files/datasets/text/20_newsgroups/20_newsgroups_bulk.tar.gz .
!tar xzf 20_newsgroups_bulk.tar.gz
!ls 20_newsgroups_bulk
file_list = [os.path.join(data_dir, f) for f in os.listdir(data_dir)]
print("Number of files:", len(file_list))
import pandas as pd
documents_count = 0
for file in file_list:
df = pd.read_csv(file, header=None, names=["text"])
documents_count = documents_count + df.shape[0]
print("Number of documents:", documents_count)
```
Let's inspect the dataset files and analyze the categories.
```
categories_list = [f.split("/")[1] for f in file_list]
categories_list
```
We can see that the dataset consists of 20 topics, each in a different file.
Let us inspect the dataset to get some understanding about how the data and the label is provided in the dataset.
```
df = pd.read_csv("./20_newsgroups_bulk/rec.motorcycles", header=None, names=["text"])
df
df["text"][0]
df = pd.read_csv("./20_newsgroups_bulk/comp.sys.mac.hardware", header=None, names=["text"])
df
df["text"][0]
```
As we can see from the above, there is a single file for each class in the dataset, and each record is a plain-text paragraph with a header, body, footer, and quotes. We will need to process them into a suitable data format.
## Data Preprocessing
We need to preprocess the dataset to remove the header, footer, quotes, leading/trailing whitespace, extra spaces, tabs, and HTML tags/markups.
Download the `nltk` tokenizer and other libraries
```
import nltk
from nltk.tokenize import word_tokenize
import re
import string
nltk.download("punkt")
from sklearn.datasets.twenty_newsgroups import (
strip_newsgroup_header,
strip_newsgroup_quoting,
strip_newsgroup_footer,
)
```
The following function will remove the header, footer and quotes (of earlier messages in each text).
```
def strip_newsgroup_item(item):
item = strip_newsgroup_header(item)
item = strip_newsgroup_quoting(item)
item = strip_newsgroup_footer(item)
return item
```
The following function will take care of removing leading/trailing whitespace, extra spaces, tabs, and HTML tags/markups.
```
def process_text(texts):
final_text_list = []
for text in texts:
# Check if the sentence is a missing value
if not isinstance(text, str):
text = ""
filtered_sentence = []
# Lowercase
text = text.lower()
# Remove leading/trailing whitespace, extra space, tabs, and HTML tags/markups
text = text.strip()
text = re.sub("\[.*?\]", "", text)
text = re.sub("https?://\S+|www\.\S+", "", text)
text = re.sub("<.*?>+", "", text)
text = re.sub("[%s]" % re.escape(string.punctuation), "", text)
text = re.sub("\n", "", text)
text = re.sub("\w*\d\w*", "", text)
for w in word_tokenize(text):
# We are applying some custom filtering here, feel free to try different things
# Check if it is not numeric
if not w.isnumeric():
filtered_sentence.append(w)
final_string = " ".join(filtered_sentence) # final string of cleaned words
final_text_list.append(final_string)
return final_text_list
```
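The chain of substitutions above can be verified on a single toy string; the following is a pure-`re` sketch that skips the `nltk` tokenization step:

```python
import re
import string

text = "Visit https://example.com <b>NOW</b>!!!"
text = text.lower().strip()
text = re.sub(r"https?://\S+|www\.\S+", "", text)                 # strip URLs
text = re.sub(r"<.*?>+", "", text)                                # strip HTML tags
text = re.sub("[%s]" % re.escape(string.punctuation), "", text)   # strip punctuation
text = " ".join(text.split())                                     # collapse whitespace

assert text == "visit now"
```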
Now we will read each of the `20_newsgroups` dataset files, call `strip_newsgroup_item` and `process_text` functions we defined earlier, and then aggregate all data into one dataframe.
```
dfs = []
for file in file_list:
    print(f"Processing {file}")
    label = file.split("/")[1]
    df = pd.read_csv(file, header=None, names=["text"])
    df["text"] = df["text"].apply(strip_newsgroup_item)
    df["text"] = process_text(df["text"].tolist())
    df["label"] = label
    dfs.append(df)
all_categories_df = pd.concat(dfs, ignore_index=True)
```
Let's inspect how many categories there are in our dataset.
```
all_categories_df["label"].value_counts()
```
In our dataset there are 20 categories, which is too many, so we will combine related sub-categories.
```
# Map the 20 newsgroups into 6 broader categories
category_map = {
    # politics
    "talk.politics.misc": "politics",
    "talk.politics.guns": "politics",
    "talk.politics.mideast": "politics",
    # recreational
    "rec.sport.hockey": "recreational",
    "rec.sport.baseball": "recreational",
    "rec.autos": "recreational",
    "rec.motorcycles": "recreational",
    # religion
    "soc.religion.christian": "religion",
    "talk.religion.misc": "religion",
    "alt.atheism": "religion",
    # computer
    "comp.windows.x": "computer",
    "comp.sys.ibm.pc.hardware": "computer",
    "comp.os.ms-windows.misc": "computer",
    "comp.graphics": "computer",
    "comp.sys.mac.hardware": "computer",
    # sales
    "misc.forsale": "sales",
    # science
    "sci.crypt": "science",
    "sci.electronics": "science",
    "sci.med": "science",
    "sci.space": "science",
}
all_categories_df["label"] = all_categories_df["label"].replace(category_map)
```
Now we are left with 6 categories, which is much better.
```
all_categories_df["label"].value_counts()
```
Let's calculate number of words for each row.
```
all_categories_df["word_count"] = all_categories_df["text"].apply(lambda x: len(str(x).split()))
all_categories_df.head()
```
Let's get basic statistics about the dataset.
```
all_categories_df["word_count"].describe()
```
We can see that the mean is around 159 words. However, there are outliers, such as a text with 11,351 words. Such outliers can make it harder for the model to reach good performance, so we will drop those rows.
Let's drop empty rows first.
```
no_text = all_categories_df[all_categories_df["word_count"] == 0]
print(len(no_text))
# drop these rows
all_categories_df.drop(no_text.index, inplace=True)
```
Let's also drop rows longer than 256 words, a round cutoff somewhat above the mean word count. Removing these outliers makes training easier for the model.
```
long_text = all_categories_df[all_categories_df["word_count"] > 256]
print(len(long_text))
# drop these rows
all_categories_df.drop(long_text.index, inplace=True)
all_categories_df["label"].value_counts()
```
Let's get basic statistics about the dataset after our outlier fixes.
```
all_categories_df["word_count"].describe()
```
This looks much more balanced.
Now we drop the `word_count` column, as we will not need it anymore.
```
all_categories_df.drop(columns="word_count", axis=1, inplace=True)
all_categories_df
```
Let's convert the categorical labels to integers, in order to prepare the dataset for training.
```
categories = all_categories_df["label"].unique().tolist()
categories
categories.index("recreational")
all_categories_df["label"] = all_categories_df["label"].apply(lambda x: categories.index(x))
all_categories_df["label"].value_counts()
```
We partition the dataset into 80% training and 20% validation set and save to `csv` files.
```
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(all_categories_df, test_size=0.2)
train_df.to_csv("train.csv", index=None)
test_df.to_csv("test.csv", index=None)
```
Let's inspect the label distribution in the training dataset
```
train_df["label"].value_counts()
```
Let's inspect the label distribution in the test dataset
```
test_df["label"].value_counts()
```
## Tokenization
A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most of the tokenizers are available in two flavors: a full python implementation and a “Fast” implementation based on the Rust library [tokenizers](https://github.com/huggingface/tokenizers). The “Fast” implementations allows:
- A significant speed-up in particular when doing batched tokenization.
- Additional methods to map between the original string (character and words) and the token space (e.g. getting the index of the token comprising a given character or the span of characters corresponding to a given token).
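The offset-mapping idea is not specific to the Rust tokenizers; the concept can be sketched with a plain whitespace tokenizer (a toy stand-in for illustration, not the HuggingFace API):

```python
import re

def tokenize_with_offsets(text):
    # Return (token, start, end) triples so characters can be mapped back to tokens
    return [(m.group(), m.start(), m.end()) for m in re.finditer(r"\S+", text)]

def char_to_token(offsets, char_index):
    # Find the index of the token that contains a given character position
    for i, (_, start, end) in enumerate(offsets):
        if start <= char_index < end:
            return i
    return None  # e.g. whitespace between tokens

offsets = tokenize_with_offsets("fast tokenizers map offsets")
print(offsets[0])                 # ('fast', 0, 4)
print(char_to_token(offsets, 6))  # character 6 is inside "tokenizers" -> token 1
```

The "Fast" tokenizers provide this mapping (and its inverse) out of the box, even after subword splitting.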
```
from datasets import load_dataset
from transformers import AutoTokenizer
# tokenizer used in preprocessing
tokenizer_name = "distilbert-base-uncased"
# download tokenizer
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
```
### Load train and test datasets
Let's create a [Dataset](https://huggingface.co/docs/datasets/loading_datasets.html) from our local `csv` files for training and test we saved earlier.
```
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
dataset
dataset["train"]
dataset["train"][0]
dataset["test"]
dataset["test"][0]
# tokenizer helper function
def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)
train_dataset = dataset["train"]
test_dataset = dataset["test"]
```
### Tokenize train and test datasets
Let's tokenize the train dataset
```
train_dataset = train_dataset.map(tokenize, batched=True)
```
Let's tokenize the test dataset
```
test_dataset = test_dataset.map(tokenize, batched=True)
```
### Set format for PyTorch
```
train_dataset = train_dataset.rename_column("label", "labels")
train_dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])
test_dataset = test_dataset.rename_column("label", "labels")
test_dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])
```
## Uploading data to `sagemaker_session_bucket`
After processing the datasets, we upload them to S3.
```
import botocore
from datasets.filesystems import S3FileSystem
s3 = S3FileSystem()
# save train_dataset to s3
training_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/train"
train_dataset.save_to_disk(training_input_path, fs=s3)
# save test_dataset to s3
test_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/test"
test_dataset.save_to_disk(test_input_path, fs=s3)
print(training_input_path)
print(test_input_path)
```
## Training the HuggingFace model for supervised text classification
In order to create a SageMaker training job we need a `HuggingFace` Estimator. The Estimator handles end-to-end Amazon SageMaker training and deployment tasks. In the Estimator we define which fine-tuning script should be used as `entry_point`, which `instance_type` should be used, which `hyperparameters` are passed in, and so on.
```python
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./code',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    volume_size=256,
    role=role,
    transformers_version='4.6',
    pytorch_version='1.7',
    py_version='py36',
    hyperparameters={'epochs': 1,
                     'model_name': 'distilbert-base-uncased',
                     'num_labels': 6})
```
When we create a SageMaker training job, SageMaker starts and manages all the required EC2 instances for us with the `huggingface` container, uploads the provided fine-tuning script `train.py`, and downloads the data from our `sagemaker_session_bucket` into the container at `/opt/ml/input/data`. Then it starts the training job by running:
```
/opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --num_labels 6
```
The `hyperparameters` you define in the `HuggingFace` estimator are passed in as named arguments.
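Inside `train.py`, these named arguments are typically read with `argparse`; a minimal sketch (the flag names mirror the hyperparameters above, but the actual script may differ):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=3)
parser.add_argument("--model_name", type=str)
parser.add_argument("--num_labels", type=int)

# Parse the same flags SageMaker passes on the command line
args = parser.parse_args(
    ["--epochs", "1", "--model_name", "distilbert-base-uncased", "--num_labels", "6"]
)
print(args.epochs, args.model_name, args.num_labels)
```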
SageMaker is providing useful properties about the training environment through various environment variables, including the following:
* `SM_MODEL_DIR`: A string that represents the path where the training job writes the model artifacts to. After training, artifacts in this directory are uploaded to S3 for model hosting.
* `SM_NUM_GPUS`: An integer representing the number of GPUs available to the host.
* `SM_CHANNEL_XXXX:` A string that represents the path to the directory that contains the input data for the specified channel. For example, if you specify two input channels in the HuggingFace estimator’s fit call, named `train` and `test`, the environment variables `SM_CHANNEL_TRAIN` and `SM_CHANNEL_TEST` are set.
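A training script usually picks these variables up via `os.environ`; a sketch with the documented default container paths as fallbacks:

```python
import os

# Fall back to SageMaker's documented container paths when the variables are unset
model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
train_dir = os.environ.get("SM_CHANNEL_TRAIN", "/opt/ml/input/data/train")
test_dir = os.environ.get("SM_CHANNEL_TEST", "/opt/ml/input/data/test")
num_gpus = int(os.environ.get("SM_NUM_GPUS", "0"))
print(model_dir, train_dir, test_dir, num_gpus)
```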
To run your training job locally you can define `instance_type='local'` or `instance_type='local-gpu'` for `gpu` usage. _Note: this does not work within SageMaker Studio_
We create a `metric_definitions` list of regex-based definitions that SageMaker uses to parse the job logs and extract metrics:
```
metric_definitions = [
    {"Name": "loss", "Regex": "'loss': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "learning_rate", "Regex": "'learning_rate': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "eval_loss", "Regex": "'eval_loss': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "eval_accuracy", "Regex": "'eval_accuracy': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "eval_f1", "Regex": "'eval_f1': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "eval_precision", "Regex": "'eval_precision': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "eval_recall", "Regex": "'eval_recall': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "eval_runtime", "Regex": "'eval_runtime': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "eval_samples_per_second", "Regex": "'eval_samples_per_second': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "epoch", "Regex": "'epoch': ([0-9]+(.|e\-)[0-9]+),?"},
]
```
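We can sanity-check one of these patterns against a log line of the shape the HuggingFace `Trainer` prints (the sample line below is made up):

```python
import re

# A made-up log line in the Trainer's usual format
log_line = "{'loss': 0.6931, 'learning_rate': 4.9e-05, 'epoch': 0.25}"
loss_pattern = r"'loss': ([0-9]+(.|e\-)[0-9]+),?"  # same pattern as in metric_definitions
match = re.search(loss_pattern, log_line)
print(match.group(1))  # -> 0.6931
```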
## Creating an Estimator and start a training job
```
from sagemaker.huggingface import HuggingFace
# hyperparameters, which are passed into the training job
hyperparameters = {"epochs": 1, "model_name": "distilbert-base-uncased", "num_labels": 6}
```
Now, let's define the SageMaker `HuggingFace` estimator with resource configurations and hyperparameters to train Text Classification on `20 newsgroups` dataset, running on a `p3.2xlarge` instance.
```
huggingface_estimator = HuggingFace(
    entry_point="train.py",
    source_dir="./code",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    volume_size=256,
    role=role,
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
    hyperparameters=hyperparameters,
    metric_definitions=metric_definitions,
)
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit({"train": training_input_path, "test": test_input_path})
```
## Deployment
### Serverless Configuration
#### Memory size - `memory_size_in_mb`
Your serverless endpoint has a minimum RAM size of <b>1024 MB (1 GB)</b>, and the maximum RAM size you can choose is 6144 MB (6 GB). The memory sizes you can select are <b>1024 MB</b>, <b>2048 MB</b>, <b>3072 MB</b>, <b>4096 MB</b>, <b>5120 MB</b>, or <b>6144 MB</b>. Serverless Inference auto-assigns compute resources proportional to the memory you select. If you select a larger memory size, your container has access to more `vCPUs`. Select your endpoint's memory size according to your model size. Generally, the memory size should be at least as large as your model size. You may need to benchmark in order to select the right memory size for your model based on your latency SLAs. The memory size increments have different pricing; see the Amazon SageMaker pricing page for more information.
#### Concurrent invocations - `max_concurrency`
Serverless Inference manages predefined scaling policies and quotas for the capacity of your endpoint. Serverless endpoints have a quota for how many concurrent invocations can be processed at the same time. If the endpoint is invoked before it finishes processing the first request, then it handles the second request concurrently. You can set the maximum concurrency for a <b>single endpoint up to 200</b>, and the total number of serverless endpoint variants you can host in a Region is 50. The total concurrency you can share between all serverless endpoints per Region in your account is 200. The maximum concurrency for an individual endpoint prevents that endpoint from taking up all the invocations allowed for your account, and any endpoint invocations beyond the maximum are throttled.
```
from sagemaker.serverless.serverless_inference_config import ServerlessInferenceConfig
serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=6144,
    max_concurrency=1,
)
```
### Serverless Endpoint Creation
Now that we have a `ServerlessInferenceConfig`, we can create a serverless endpoint and deploy our model to it.
```
%%time
predictor = huggingface_estimator.deploy(serverless_inference_config=serverless_config)
```
## Endpoint Invocation
Using a few samples, you can now invoke the SageMaker endpoint to get predictions.
```
def predict_sentence(sentence):
    result = predictor.predict({"inputs": sentence})
    index = int(result[0]["label"].split("LABEL_")[1])
    print(categories[index])

sentences = [
    "The modem is an internal AT/(E)ISA 8-bit card (just a little longer than a half-card).",
    "In the cage I usually wave to bikers. They usually don't wave back. My wife thinks it's strange but I don't care.",
    "Voyager has the unusual luck to be on a stable trajectory out of the solar system.",
]
# using the same processing logic that we used during data preparation for training
processed_sentences = process_text(sentences)
for sentence in processed_sentences:
    predict_sentence(sentence)
```
## Clean up
Endpoints should be deleted when no longer in use, since (per the [SageMaker pricing page](https://aws.amazon.com/sagemaker/pricing/)) they're billed by time deployed.
```
predictor.delete_endpoint()
```
## Conclusion
In this notebook you successfully ran a SageMaker Training Job with the HuggingFace framework to fine-tune a pre-trained transformer for text classification on the `20 newsgroups` dataset.
Then, you prepared the Serverless configuration required, and deployed your model to SageMaker Serverless Endpoint. Finally, you invoked the Serverless endpoint with sample data and got the prediction results.
As next steps, you can try running SageMaker Training Jobs with your own algorithm and your own data, and deploy the model to SageMaker Serverless Endpoint.
# Phones
- Updated May 3
- https://ac.scmor.com/
- https://github.com.cnpmjs.org/
- http://g.widyun.com/ (a GitHub download tool)
```
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import pandas as pd
import numpy as np
```
I. JD crawler
- https://zhuanlan.zhihu.com/p/56441988
- https://club.jd.com/comment/productCommentSummaries.action?referenceIds=100006945233
The comment-count API endpoint
```
import requests
r = requests.get('https://club.jd.com/comment/productCommentSummaries.action?referenceIds=100006945233')
r.headers
r.text
```
Crawler-related modules in the Python standard library
- urllib.request
For HTTP and HTTPS URLs, this function returns a http.client.HTTPResponse object slightly modified. In addition to the three new methods above, the msg attribute contains the same information as the reason attribute — the reason phrase returned by server — instead of the response headers as it is specified in the documentation for HTTPResponse.
```
import urllib
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
r = opener.open('https://list.jd.com/list.html?cat=9987,653,655')
r.getheaders()
t = r.read()
t.decode()
labels = '11', '11pro', '11max', 'SE', 'Mi10Q', 'Mi10', 'S20 5G', 'S20 5G+'  # phone model labels used by the plotting cells below
height = [150.9, 144, 158, 138.4, 164.02, 162.6, 151.7, 161.9]
width = [75.7, 71.4, 77.8, 67.3, 74.77, 74.8, 69.1, 73.7]
depth = [8.3, 8.1, 8.1, 7.3, 7.88, 8.96, 7.9, 7.8]
weight = [194, 188, 226, 148, 192, 208, 163, 186]
```
### How do you specify column types when reading Excel with pandas?
There are two approaches: dtype specifications and cell converters.
Reference: https://pandas.pydata.org/docs/user_guide/io.html#parsing-dates
1. `dtype` specifies the type of each column, as `{"column name": "type"}`
2. The `converters` parameter can transform the contents of Excel cells during parsing
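The same two parameters exist for `read_csv`, which makes them easy to demonstrate without an Excel file (the CSV sample is made up):

```python
import io
import pandas as pd

csv_data = "SKU,price\n007,3.5\n042,7.0"

# dtype keeps the SKU column as strings, preserving leading zeros
df_dtype = pd.read_csv(io.StringIO(csv_data), dtype={"SKU": str})

# a converter can transform the cell contents while parsing
df_conv = pd.read_csv(io.StringIO(csv_data), converters={"SKU": lambda s: "SKU-" + s})

print(df_dtype["SKU"].tolist())  # ['007', '042']
print(df_conv["SKU"].tolist())   # ['SKU-007', 'SKU-042']
```

Without `dtype={'SKU': str}`, the SKUs would be parsed as integers and the leading zeros would be lost.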
```
df = pd.read_excel('phone.ods', sheet_name='工作表1', engine='odf', dtype={'SKU':str})
df.head()
df.dtypes
%matplotlib widget
```
The P40 Pro screen resolution is 2640 × 1200.
```
pow(2640 ** 2 + 1200 ** 2, 0.5) / 441
16 / 10
16 / 9
158.2 / 72.6
2640 / 1200
fig, ax = plt.subplots(figsize=(6,2))
ax.stem(np.asarray(width), bottom=70, use_line_collection=True)
xs = range(len(labels))
def format_fn(tick_val, tick_pos):
    if int(tick_val) in xs:
        return labels[int(tick_val)]
    else:
        return ''
ax.get_xaxis().set_major_formatter(ticker.FuncFormatter(format_fn))
```
- [Colored label texts in a matplotlib stem plot](https://stackoverflow.com/questions/48423074/colored-label-texts-in-a-matplotlib-stem-plot)
```
n = np.arange(0, 10)
x1 = np.sin(n)
x2 = np.cos(n)
fig, ax = plt.subplots()
ax.stem(n, x1, 'b', markerfmt='bo', label="First")
ax.stem(n, x2, 'g', markerfmt='go', label="Second")
fig
ax.cla()
ax.plot([i for i in range(len(height))], .1 * np.asarray(height))
ax.plot([i for i in range(len(height))], -.1 * np.asarray(weight))
ax.stem(-1 * np.asarray(depth))
ax.stem(.1 * np.asarray(width))
dates = ['2019-02-26', '2019-02-26', '2018-11-10', '2018-11-10',
         '2018-09-18', '2018-08-10', '2018-03-17', '2018-03-16',
         '2018-03-06', '2018-01-18', '2017-12-10', '2017-10-07',
         '2017-05-10', '2017-05-02', '2017-01-17', '2016-09-09',
         '2016-07-03', '2016-01-10', '2015-10-29', '2015-02-16',
         '2014-10-26', '2014-10-18', '2014-08-26']
levels = np.tile([-5, 5, -3, 3, -1, 1],
                 int(np.ceil(len(dates)/6)))[:len(dates)]
levels
np.ceil(len(dates)/6)
5 * np.asarray(depth)
[i for i in range(4)]
d = {"CommentsCount":[{"SkuId":100008348534,"ProductId":100008348534,"ShowCount":77951,"ShowCountStr":"7.7万+","CommentCountStr":"291万+","CommentCount":2914618,"AverageScore":5,"DefaultGoodCountStr":"234万+","DefaultGoodCount":2347161,"GoodCountStr":"53万+","GoodCount":538793,"AfterCount":18632,"OneYear":0,"AfterCountStr":"1.8万+","VideoCount":8265,"VideoCountStr":"8200+","GoodRate":0.94,"GoodRateShow":94,"GoodRateStyle":141,"GeneralCountStr":"9800+","GeneralCount":9811,"GeneralRate":0.017,"GeneralRateShow":2,"GeneralRateStyle":3,"PoorCountStr":"1.8万+","PoorCount":18853,"SensitiveBook":0,"PoorRate":0.043,"PoorRateShow":4,"PoorRateStyle":6}]}
d
```
{"CommentsCount":[{"SkuId":100008348542,"ProductId":100008348542,"ShowCount":77951,"ShowCountStr":"7.7万+","CommentCountStr":"291万+","CommentCount":2914618,"AverageScore":5,"DefaultGoodCountStr":"234万+","DefaultGoodCount":2347160,"GoodCountStr":"53万+","GoodCount":538794,"AfterCount":18632,"OneYear":0,"AfterCountStr":"1.8万+","VideoCount":8265,"VideoCountStr":"8200+","GoodRate":0.94,"GoodRateShow":94,"GoodRateStyle":141,"GeneralCountStr":"9800+","GeneralCount":9811,"GeneralRate":0.017,"GeneralRateShow":2,"GeneralRateStyle":3,"PoorCountStr":"1.8万+","PoorCount":18853,"SensitiveBook":0,"PoorRate":0.043,"PoorRateShow":4,"PoorRateStyle":6}]}
```
import os, sys
os.getcwd()
#!pip install azure-storage-blob --user
#!pip install storefact --user
import os, sys
import configparser
sys.path.append('/home/jovyan/.local/lib/python3.6/site-packages/')
print(sys.path)
os.path.abspath("AzureDownload/config.txt")
os.getcwd()
config = configparser.ConfigParser()
config.read("/home/jovyan/AzureDownload/config.txt")
config.sections()
```
### Credentials setup: read the WoS journal name mapped table from Azure
```
import time
from azure.storage.blob import BlockBlobService
CONTAINERNAME = "mag-2019-01-25"
BLOBNAME= "MAGwosJournalMatch/OpenSci3Journal.csv/part-00000-tid-8679026268804875386-7586e989-d017-4b12-9d5a-53fc6497ec02-1116-c000.csv"
LOCALFILENAME= "/home/jovyan/openScience/code-data/OpenSci3Journal.csv"
block_blob_service=BlockBlobService(account_name=config.get("configuration","account"),account_key=config.get("configuration","password"))
#download from blob
t1=time.time()
block_blob_service.get_blob_to_path(CONTAINERNAME,BLOBNAME,LOCALFILENAME)
t2=time.time()
print(("It takes %s seconds to download "+BLOBNAME) % (t2 - t1))
import pandas as pd
import numpy as np
openJ = pd.read_csv('OpenSci3Journal.csv', escapechar='\\', encoding='utf-8')
openJ.count()
```
### To verify that the Spark output is consistent, we compare the pandas dataframes before and after the WoS journal mapping
```
open0 = pd.read_csv('OpenSci3.csv', escapechar='\\', encoding='utf-8')
open0.count()
```
### Compare matched MAG journal names and WoS journal names
```
openJ['Journal'] = openJ.Journal.str.lower()
openJ['WoSjournal'] = openJ.WoSjournal.str.lower()
matched = openJ[openJ['Journal'] == openJ['WoSjournal']]
matched.count()
```
### Matching with UCSD map of science journal names
```
journalMap = pd.read_csv('WoSmatch/journalName.csv')
journalMap['journal_name'] = journalMap.journal_name.str.lower()
JwosMap = journalMap[journalMap['source_type']=="Thomson"]
MAGmatched = pd.merge(openJ, JwosMap, left_on=['Journal'], right_on=['journal_name'], how='left')
MAGmatched.count()
WoSmatched = pd.merge(openJ, JwosMap, left_on=['WoSjournal'], right_on=['journal_name'], how='left')
WoSmatched.count()
```
### Combining matched journal names from WoS and MAG to the UCSD map of science
```
MAGmatched.update(WoSmatched)
MAGmatched.count()
```
### Mapping from matched journals to subdisciplines
```
JsubMap = pd.read_csv('WoSmatch/jounral-subdiscipline.csv')
JsubMap.journ_id = JsubMap.journ_id.astype('float64')
subMatched = pd.merge(MAGmatched, JsubMap, left_on=['journ_id'], right_on=['journ_id'], how='left').drop(columns='formal_name')
subMatched.count()
#subMatched.dtypes
subTable = pd.read_csv('WoSmatch/subdiscipline.csv')
subTable.subd_id = subTable.subd_id.astype('float64')
subNameMatched = pd.merge(subMatched, subTable, left_on=['subd_id'], right_on=['subd_id'], how='left').drop(columns=['size','x','y'])
subNameMatched.count()
```
### Since each journal has a distribution of corresponding disciplines, we will collect the discipline vectors into new columns
```
majTable = pd.read_csv('WoSmatch/discipline.csv')
majTable.disc_id = majTable.disc_id.astype('float64')
discMatched = pd.merge(subNameMatched, majTable, left_on=['disc_id'], right_on=['disc_id'], how='left').drop(columns=['color','x','y'])
discMatched.jfraction = discMatched.jfraction.astype('str')
discMatched.subd_name = discMatched.subd_name.astype('str')
discMatched.disc_name = discMatched.disc_name.astype('str')
temp = pd.DataFrame()
temp = discMatched[['PaperId','jfraction','subd_name','disc_name']]
temp['jfraction'] = discMatched.groupby(['PaperId'])['jfraction'].transform(lambda x: ';'.join(x)).replace('nan', np.nan)
temp['subd_name'] = discMatched.groupby(['PaperId'])['subd_name'].transform(lambda x: ';'.join(x)).replace('nan', np.nan)
temp['disc_name'] = discMatched.groupby(['PaperId'])['disc_name'].transform(lambda x: ';'.join(x)).replace('nan', np.nan)
temp2 = temp.drop_duplicates()
temp2.count()
OpenSci3Disc = pd.merge(MAGmatched, temp2, left_on=['PaperId'], right_on=['PaperId'], how='left').drop(columns=['source_type','journ_id','journal_name'])
OpenSci3Disc
OpenSci3Disc.to_csv('OpenSci3Discipline.csv',index=False, sep=',', encoding='utf-8')
```
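The groupby/transform/join pattern used above, which collapses one row per (paper, discipline) pair into a single row per paper, can be checked on a tiny made-up frame:

```python
import pandas as pd

# Toy example: paper 1 belongs to two disciplines, paper 2 to one
toy = pd.DataFrame({
    "PaperId": [1, 1, 2],
    "disc_name": ["Biology", "Chemistry", "Physics"],
})
# Concatenate all discipline names belonging to the same paper
toy["disc_name"] = toy.groupby("PaperId")["disc_name"].transform(lambda x: ";".join(x))
toy = toy.drop_duplicates()
print(toy)
```

`transform` broadcasts the joined string back onto every row of the group, so `drop_duplicates` then leaves one row per paper.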
In the [previous part](http://earthpy.org/pandas-basics.html) we looked at very basic ways of work with pandas. Here I am going to introduce couple of more advance tricks. We will use very powerful pandas IO capabilities to create time series directly from the text file, try to create seasonal means with *resample* and multi-year monthly means with *groupby*.
Import usual suspects and change some output formatting:
```
import pandas as pd
import numpy as np
%matplotlib inline
pd.set_option('display.max_rows', 15)  # limit the maximum number of rows displayed
```
## Load data
We load data from two files, parse their dates and create Dataframe
```
ham_tmin = pd.read_csv('./Ham_tmin.txt', parse_dates=True, index_col=0, names=['Time','tmin'])
ham_tmax = pd.read_csv('./Ham_tmax.txt', parse_dates=True, index_col=0, names=['Time','tmax'])
tm = pd.DataFrame({'TMAX':ham_tmax.tmax/10.,'TMIN':ham_tmin.tmin/10.})
tm
```
## Seasonal means with resample
Initially pandas was created for analysis of financial information and it thinks not in seasons, but in quarters. So we have to resample our data to quarters. We also need to make a shift from standard quarters, so they correspond with seasons. This is done by using 'Q-NOV' as a time frequency, indicating that year in our case ends in November:
```
tmd = tm.to_period(freq='D')
tmd.resample('Q-NOV').mean().head()
q_mean = tm.resample('Q-NOV').mean()
q_mean.head()
```
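We can confirm the season alignment on a toy daily series: with `Q-NOV`, the first bin (winter) closes at the end of February.

```python
import pandas as pd

# One year of daily data starting at the beginning of winter (1 December)
s = pd.Series(1.0, index=pd.date_range("2000-12-01", periods=365, freq="D"))
q = s.resample("Q-NOV").mean()
# With the year ending in November, the first quarter bin is labeled 2001-02-28
print(q.index[0])
```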
Winter temperatures
```
q_mean.index.quarter
q_mean[q_mean.index.quarter==1].plot(figsize=(8,5))
```
## Exercise
Plot the summer mean
If you don't mind to sacrifice first two months (that strictly speaking can't represent the whole winter of 1890-1891), there is another way to do similar thing by just resampling to 3M (3 months) interval starting from March (third data point):
```
tm[59:63]
m3_mean = tm[59:].resample('3M', closed='left').mean()
m3_mean.head()
```
Results are different; let's find out which one is wrong, or maybe we did something silly?
```
tm[59:151]
```
Now in order to select all winter months we have to choose Februaries (last month of the season):
```
m3_mean[m3_mean.index.month==2].plot(figsize=(8,5))
```
Result is the same except for the first point.
## Exercise
Calculate 10-day interval means
```
tm['TMAX'].resample('10D', closed='left').mean().plot()
```
## Multi-year monthly means with *groupby*
<img src="files/splitApplyCombine.png">
First step will be to add another column to our DataFrame with month numbers:
```
tm['mon'] = tm.index.month
tm
```
Now we can use [*groupby*](http://pandas.pydata.org/pandas-docs/stable/groupby.html) to group our values by months and calculate mean for each of the groups (month in our case):
```
monmean = tm.groupby('mon').aggregate(np.mean)
monmean.plot(kind='bar')
```
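As a sanity check of the split-apply-combine step, here is synthetic data where the answer is known in advance:

```python
import pandas as pd

# Two years of daily data where the "temperature" always equals the month number
idx = pd.date_range("2000-01-01", "2001-12-31", freq="D")
df = pd.DataFrame({"temp": idx.month}, index=idx)
df["mon"] = df.index.month
monmean = df.groupby("mon")["temp"].mean()
print(monmean.loc[1], monmean.loc[12])  # 1.0 12.0
```

Grouping by month and averaging recovers exactly the month number, confirming the mechanics.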
## Exercise
- Calculate and plot monthly mean temperatures for 1891-1950 and 1951-2010
- Calculate and plot the differences between these two variables
Sometimes it is useful to look at the [box plots](http://en.wikipedia.org/wiki/Box_plot) for every month:
```
ax = tm.boxplot(column=['TMAX'], by='mon')
ax = tm.boxplot(column=['TMIN'], by='mon')
```
# Motivation
Hi everyone. I think that _every_ student should have some ability to write code. In our digital world, being fluent in some programming language is almost as important as your ability to read and write. Sure, you can survive _without_ it ... but you are so much more valuable as a scientist and employee _with_ it. According to some research, high-paying jobs with coding requirements pay about \\$22,000 more (on average) than jobs that don't require coding. Be aware that this is correlation, not causation. Is a chemist who knows how to code going to make an extra \\$22,000 per year compared to a chemist who doesn't know how to code? That seems unlikely. However, I am confident that, with all other variables being equal, a ~~chemist~~ scientist who can code will make more money and have more job opportunities than a ~~chemist~~ scientist who cannot code.
So, am I expecting this one excercise to make you into a marketable coder? Absolutely not. I know that coding is a skill and it takes time to develop that skill. But you need some place to start...
Here are the things that I want you to take home from this exercise:
* See the differences between real and complex Fourier transforms
* Gain a deeper understanding of quadrature detection
* Make connections between the theories and equations that we discuss in class and the real life phenomenon and observables that you see in lab
* See coding as a tool to help you simulate complex numerical problems
* See coding as an extension or alternative to Excel
* See how easy it is to produce consistent, publication-quality figures
* See that coding a simulation like this is a great way to understand all of the theory behind that simulation
To that end, I am introducing you to the Python programming language - arguably the hottest, most in-demand, and easiest language to learn. And I'm doing that with something called Jupyter. Jupyter is a web-based programming environment that was designed to:
* run in a web browser
* combine code, documentation, comments, and graphics
* focus on the programming languages **JU**lia, **PYT**hon, and **R**
* emulate the look and feel of software like Mathematica
To give you some context, Instagram uses [Python](https://www.zdnet.com/article/programming-languages-how-instagrams-taming-a-multimillion-line-python-monster/) and Netflix uses [Jupyter](https://medium.com/netflix-techblog/notebook-innovation-591ee3221233). Want to work for either of those companies? In general, scientists are good with data and statistics; these are exactly the job skills that hiring managers are looking for at companies like Netflix and Instagram. It's important to make note that I'm not talking about computer scientists or software engineers. I'm talking about data scientists - LITERALLY THE STUFF THAT WE DO EVERY DAY!
# Task 0
This is called a _markdown_ cell. It can contain text, links, graphics, equations, documentation, comments, etc. I'm going to use these cells to give you instructions.
Click on it. See that blue vertical bar that appeared in the left margin? That tells you that this cell is "active".
Double-click this cell. See how the text changed from nicely formatted text to ugly, unformatted text? This is analogous to looking at a web page versus looking at the HTML code that makes that web page. Press **SHIFT+ENTER** to execute this cell and go back to the formatted text.
The cell below this one is called a _code_ cell. It contains Python code and can contain a few commands or an entire program. Run that cell (remember, **SHIFT+ENTER**) and see what happens. After you do that, go back to that cell and change the code so that it prints **Hello, [YOUR NAME]**. Run it again. Did it work?
```
print("Hello, world!")
```
# Scientific background
We have been talking about the **FID** (**F**ree **I**nduction **D**ecay). This is the _time-domain_ signal that we get straight out of the spectrometer. We can represent the FID mathematically as a damped cosine function. Technically, it should be a sum of damped cosine functions (one for each peak that we see in our spectrum) but we're trying to simplify things here. If you need an example, consider a sample that contains $H_2O$ as the analyte. There are two equivalent ${^1}H's$ so we see one peak in our spectrum. We can model the FID for this sample as:
$$f(x)=cos({\omega}2{\pi}t)e^{\frac{-t}{T_2}}$$
After we do the Fourier transform of our spectrum, we get a peak with a frequency of $\omega$. Right? Let's find out...
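As a quick numerical warm-up (with toy constants, independent of the simulation below), we can locate the peak of the Fourier transform of a 20 Hz damped cosine:

```python
import numpy as np

NP, SW = 4096, 200.0        # toy constants: number of points and sweep width in Hz
t = np.arange(NP) / SW      # the dwell time is 1/SW
fid = np.cos(2 * np.pi * 20.0 * t) * np.exp(-t * 3.0)
spectrum = np.fft.rfft(fid)               # FT over non-negative frequencies only
freq = np.fft.rfftfreq(NP, d=1 / SW)
peak_freq = freq[np.argmax(np.abs(spectrum))]
print(peak_freq)  # very close to 20 Hz
```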
# Task 1
Run the cell below. This should produce two plots:
* a damped cosine with a frequency of _OMEGA_ (in this example, it's set to 20 Hz)
* the Fourier transform of that signal
```
# import the libraries so we can do cool math stuff and make pretty plots
import numpy as np
import matplotlib.pyplot as plt
# define a standard look for all of the plots
%matplotlib inline
font = {'size' : 20}
plt.rc('font', **font)
# define all of our constants for these simulations
NP = 16384 # number of data points; must be 2^n
OMEGA = 20.0 # the frequency of our peak
DECAY = 3.0 # this is R2 which equals 1/T2
DWELL = 1 / 8000.0 # calculate the dwell time based on the sweep width
AQ = NP * DWELL # the acquisition time is just the number of data points times the delay between points
LLIM = 30.0 # left limit
RLIM = -30.0 # right limit - yes, I know negative numbers should be on the left, but NMR has weird conventions
BLIM = -1500.0 # bottom limit
TLIM = 1500.0 # top limit
# create the time points that we want to use for the sin and cos calculations; create NP data points between 0.00 - AQ seconds
x = np.linspace(0.00, AQ, NP)
# calculate the damped oscillation - we use 2*pi*omega to get units of Hertz instead of rad/s
y = np.cos(OMEGA * 2 * np.pi * x) * np.exp(-x * DECAY)
# do the Fourier transform and calculate the frequencies for the x-axis
ft_y = np.fft.fft(y)
freq = np.fft.fftfreq(NP) / DWELL
# make the stacked plots
fig, axs = plt.subplots(2, 1, figsize=(15,7.5))
axs[0].plot(x, y)
axs[0].set_xlim(0.00, AQ)
axs[0].set_xlabel('time')
axs[0].set_ylabel('intensity')
axs[0].set_ylim(-1.00, 1.00)
axs[0].grid(True)
axs[1].plot(freq, ft_y.real)
axs[1].set_xlim(LLIM, RLIM)
axs[1].set_xlabel('frequency')
axs[1].set_ylabel('intensity')
axs[1].set_ylim(BLIM, TLIM)
axs[1].grid(True)
fig.tight_layout()
plt.show()
```
# Reflection 1
Let's talk about what you see and what you expected to see. We'll play a game to see if we can figure out what happened.
# Task 2
Hmm. So a _real_ Fourier transform has ambiguity that you observe as peaks at $\pm\omega$. Let's see if we can fix that. We could cut the spectrum in half and discard the right half. Wouldn't that get rid of the false peak? Maybe there's a better way?
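As a quick aside, the "cut the spectrum in half" idea is exactly what NumPy's real FFT does: it returns only the non-negative frequencies. Here is a minimal self-contained sketch (the parameters mirror the cell above, but this is just an illustration):

```python
import numpy as np

# Recreate a small damped cosine (same form as the FID above).
NP = 1024
DWELL = 1 / 8000.0
t = np.linspace(0.0, NP * DWELL, NP)
y = np.cos(20.0 * 2 * np.pi * t) * np.exp(-t * 3.0)

ft_full = np.fft.fft(y)                # NP complex points, +/- frequencies
ft_half = np.fft.rfft(y)               # NP//2 + 1 points, frequencies >= 0 only
freq_half = np.fft.rfftfreq(NP, d=DWELL)

print(ft_half.shape, freq_half.min())  # the negative-frequency half is gone
```

Note that this only hides the mirror peak; a true signal at $-\omega$ would now be indistinguishable from one at $+\omega$, which is why we need something better.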
I've already given you a clue by emphasizing that we used a _real_ Fourier transform. What about a _complex_ Fourier transform? Where do we get the imaginary component to do a complex FT? First, we'll check the math to make sure it's feasible. If it works out, then we'll worry about how to make it happen physically.
Let's compare the real Fourier transforms of cos and sin and see if that helps us.
Run the code cell below
```
yc = np.cos(OMEGA * 2 * np.pi * x) * np.exp(-x * DECAY)
ys = np.sin(OMEGA * 2 * np.pi * x) * np.exp(-x * DECAY)
ft_yc = np.fft.fft(yc)
ft_ys = np.fft.fft(ys)
fig, axs = plt.subplots(2, 1, figsize=(15,7.5))
axs[0].plot(freq, ft_yc.real)
axs[0].set_xlim(LLIM, RLIM)
axs[0].set_xlabel('frequency')
axs[0].set_ylabel('intensity')
axs[0].set_ylim(BLIM, TLIM)
axs[0].grid(True)
axs[1].plot(freq, ft_ys.imag)
axs[1].set_xlim(LLIM, RLIM)
axs[1].set_xlabel('frequency')
axs[1].set_ylabel('intensity')
axs[1].set_ylim(BLIM, TLIM)
axs[1].grid(True)
fig.tight_layout()
plt.show()
```
# Reflection 2
Interesting. Remember that we defined _OMEGA_ to be 20 Hz at the very beginning of this exercise. The true signals at $\omega$ are both positive but the false signals at $-\omega$ have opposite signs. What if we take the difference of these two signals?!? Mathematically, that's very easy to do: $\cos(\omega t)-\sin(\omega t)$. Done.
But how do we generate those two signals in the instrument so that we can treat one as the real component and one as the imaginary component? If you were going to explain the difference between sin and cos to someone, what would you tell them?
Let's go back to my game ...
# Task 3
Let's bring all of this together and see what we end up with.
* We know that the Fourier transform of a real signal is no good.
* We know that we have to generate a complex signal
* We know that it's as simple as putting two detectors around the probe: one at $0^{\circ}$ and one at $90^{\circ}$ - one for the sin component and the other for the cos component
Run the code cell below
```
ycomp = yc + (ys * 1j)
ft_yc = np.fft.fft(ycomp)
fig, axs = plt.subplots(2, 1, figsize=(15,7.5))
axs[0].plot(x, yc, x, ys)
axs[0].set_xlim(0.00, AQ)
axs[0].set_xlabel('time')
axs[0].set_ylabel('intensity')
axs[0].set_ylim(-1.00, 1.00)
axs[0].grid(True)
axs[1].plot(freq, ft_yc.real)
axs[1].set_xlim(LLIM, RLIM)
axs[1].set_xlabel('frequency')
axs[1].set_ylabel('intensity')
axs[1].set_ylim(BLIM, 2 * TLIM)
axs[1].grid(True)
fig.tight_layout()
plt.show()
```
# Reflection 3
Is this better? Is this what you expected for a sample that contains only $H_2O$ as the analyte?
Would you be mad if I told you that there was no "second detector"? :-)
# Semantic querying of earth observation data
Semantique (to be pronounced with a sophisticated French accent) is a structured framework for semantic querying of earth observation data.
The core of a semantic query is the **query recipe**. It contains instructions that together formulate a recipe for inference of new knowledge. These instructions can be grouped into multiple **results**, each representing a distinct piece of knowledge. A semantic query recipe is different from a regular data cube query statement because it allows you to refer directly to real-world concepts by their name, without having to be aware of how these concepts are actually represented by the underlying data, and all the technical implications that come along with that. For example, you can ask how often *water* was observed at certain locations during a certain timespan, without the need to specify the rules that define how the collected data should be used to infer if an observation can actually be classified as being *water*.
These rules are instead specified in a separate component which we call the **ontology**. It maps *a priori* knowledge of the real world to the data values in the image domain. Hence, an ontology is a repository of rulesets. Each ruleset uniquely defines a **semantic concept** that exists in the real world, by formulating how this concept is represented by collected data (which may possibly be [semantically enriched](https://doi.org/10.3390/data4030102) to some extent). Usually, these rules describe a binary relationship between the data values and the semantic concepts (i.e. the rules can be evaluated to either "true" or "false"). For example:
> IF data value a > x AND data value b < y THEN water
The data and information layers are stored together in a **factbase**. A factbase is described by its layout, which is a repository of metadata objects. Each metadata object describes the content of a specific **resource** of data or information.
An ontology and a factbase should be provided when executing a semantic query recipe, together with the spatio-temporal extent in which the query should be evaluated. The query recipe itself is independent from these components. To some extent, at least. Of course, when you refer to a concept named "water" in your query recipe, it can only be executed alongside an ontology that defines how "water" can be represented by collected data, and a factbase that actually contains these data. Unfortunately, we can't do magic. However, the query recipe itself does not contain any information nor cares about how "water" is defined, and all the technical details that come along with that. There is a clear separation between the *definitions of the concepts* (these are stored as rules in the ontology) and *how these definitions are applied to infer new knowledge* (this is specified as instructions in the query recipe).
That also means that query recipes remain fairly stable even when concepts are defined in a different way. For example, if we have a new technique that utilizes a novel data source for water detection from space, the factbase and the ontology change. The factbase needs to contain these novel data sources, and the ontology needs to implement rules that use the new technique for water detection. However, the query *how often was water observed* remains the same, since in itself it does not contain any information on how water is defined. This is in line with the separation between the *world domain* and the *image domain*. Concepts in the world domain are fairly stable, while data and techniques in the image domain constantly change.
Hence, the explicitly separated structure makes the semantic EO data querying process as implemented in semantique different from regular EO data querying, where this separation is usually not clear, and the different components are woven together into a single query statement. Thanks to this structure, semantique is useful for user groups that lack advanced technical knowledge of EO data but can benefit from its applications in their specific domain. Furthermore, it eases the interoperability of EO data analysis workflows, also for expert users.
This notebook introduces the semantique package and provides basic examples of how to use it in a common semantic querying workflow.
## Content
- [Components](#Components)
- [The query recipe](#The-query-recipe)
- [The factbase](#The-factbase)
- [The ontology](#The-ontology)
- [The spatio-temporal extent](#The-spatio-temporal-extent)
- [Additional configuration parameters](#Additional-configuration-parameters)
- [Processing](#Processing)
## Prepare
Import the semantique package:
```
import semantique as sq
```
Import other packages we will use in this demo:
```
import xarray as xr
import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
import json
```
## Components
In semantique, a semantic query is processed by a query processor, with respect to a given ontology and factbase, and within the bounds of a given spatio-temporal extent. Below we will describe in more detail how semantique allows you to construct the required components for query processing.
### The query recipe
The first step in the semantic querying process is to construct the query recipe for inference of new knowledge. That is, you have to write *instructions* that tell the query processor what steps it should take to obtain your desired result. In semantique you can do this in a flexible manner, by combining basic building blocks with each other. Each building block represents a specific component of a result instruction, like a reference to a semantic concept or a certain processing task.
We start with an empty query recipe:
```
recipe = sq.QueryRecipe()
```
Such a [QueryRecipe](https://zgis.github.io/semantique/_generated/semantique.QueryRecipe.html) object has the same structure as a dictionary, with each element containing the instructions for a specific result. You can request as many results as you want.
Now we have to fill the empty query recipe by adding the instructions for all of our desired results one by one to our initialized recipe object. We do this by combining semantique's building blocks together in **processing chains**. A processing chain always has a *with-do structure*.
In the *with* part, you attach a block that contains a **reference** to an object that contains data or information. We call this the *input object* of the processing chain. The query processor will evaluate this reference into a multi-dimensional array containing a set of data values, and usually having at least a spatial and a temporal dimension. Each cell in this array is called a *pixel* and represents an observation on a specific location in space at a specific moment in time. We also call the array a **data cube**.
In most cases the reference in the *with* part of the processing chain will be a reference to a real-world semantic concept defined in an ontology. If the rules in the ontology describe *binary relationships* between the semantic concepts and the pixel values, the corresponding data cube will be boolean, with "true" values (i.e. 1) for those pixels that are identified as being an observation of the referenced concept, and "false" values (i.e. 0) for all other pixels in the spatio-temporal extent. In the [References notebook](references.ipynb) you can find an overview of all other types of references a processing chain may start with.
In the *do* part, you specify one or more **actions** that should be applied to the input object. Each action is a well-defined data cube operation that performs a *single* task. For example, applying a function to each pixel of a data cube, reducing a particular dimension of a data cube, filtering the pixels of a data cube based on some condition, et cetera. Each building block that represents such an action is labeled by an action word that should intuitively describe the operation it performs. Therefore we also call these type of building blocks **verbs**. In the [Verbs notebook](verbs.ipynb) you can find an overview of all implemented verbs and their functionalities.
> WITH input_object DO apply_first_action THEN apply_second_action THEN apply_third_action
So let's show a basic example of how to construct such a processing chain. You can refer to any semantic concept by using the [concept()](https://zgis.github.io/semantique/_generated/semantique.concept.html#semantique.concept) function. How to specify the reference depends on the structure of the ontology that the query will be processed against. Usually, an ontology does not only list rulesets of semantic concepts, but also formalizes a categorization of these concepts. That is, a reference to a specific semantic concept usually consists of the name of that concept, *and* the name of the category it belongs to. Optionally there can be multiple hierarchies of categories, for example to group concepts of different semantic levels (e.g. an entity *water body* is of a lower semantic level than an entity *lake*, since a lake is by definition always a water body, but a water body is not necessarily a lake). See the [Ontology section](#The-ontology) for details. The [concept()](https://zgis.github.io/semantique/_generated/semantique.concept.html#semantique.concept) function lets you specify as many levels as you need, starting with the lowest-level category, and ending with the name of the semantic concept itself.
The common lowest-level categorization groups the semantic concepts into very abstract types. For example, a semantic concept might be an *entity* (a phenomenon with a distinct and independent *existence*, e.g. a forest or a lake) or an *event* (a phenomenon that *takes place*, e.g. a fire or a flood). If the semantic concepts are stored as direct elements of these lowest-level categories without any further subdivision, we can refer to a semantic concept such as *water* as follows.
> **NOTE** <br/> Currently we only focus on pixel-based queries. Hence, the query processor evaluates for each pixel if the observed phenomenon in that pixel is *part of* a given entity or not, considering only the data value of the pixel itself. The semantique framework is flexible enough to also support object-based approaches. In that case the rulesets of concepts would have to look beyond individual pixels. Creating such rulesets is still a challenge.
```
water = sq.concept("entity", "water")
print(json.dumps(water, indent = 2))
```
If you use ontologies that include sub-categories, you can simply use the same function to refer to them, in a form as below. There is no limit on how many sub-categories you can use in a reference. Of course, this all depends on the categorization of the ontology that you will use.
```
lake = sq.concept("entity", "natural_entities", "water_bodies", "lake")
```
Note that each reference is nothing more than a textual reference. At the construction stage, no data processing is done at all. More specifically: the reference is an object of class [CubeProxy](https://zgis.github.io/semantique/_generated/semantique.CubeProxy.html), meaning that it will be evaluated into a data cube, but only when executing the query recipe.
```
type(water)
```
For convenience, commonly used lowest-level semantic concept categories (e.g. entities) are also implemented as separate construction functions, such that you can call them directly. Hence, the code below produces the same output as above.
```
water = sq.entity("water")
print(json.dumps(water, indent = 2))
```
The *do* part of the processing chain can be formulated by applying the actions as methods to the input object. Just as in the *with* part, this will not perform any action just yet. It only constructs the textual recipe for the result, which will be executed at the processing stage.
The code below shows a simple set of instructions that form the recipe for a result. The instructions consist of a single processing chain, starting with a reference to the concept "water", and subsequently applying a single action to it. During processing, this will be evaluated into a two-dimensional data cube containing, for each location in space, the number of times water was observed. Right now, it is nothing more than a textual recipe.
```
water_count = sq.entity("water").reduce("time", "count")
print(json.dumps(water_count, indent = 2))
```
Instead of saving result instructions as separate objects, we include them as elements in our recipe object. We can include as many result instructions in a single query as we want.
```
recipe["water_map"] = sq.entity("water").reduce("time", "count")
recipe["vegetation_map"] = sq.entity("vegetation").reduce("time", "count")
recipe["water_time_series"] = sq.entity("water").reduce("space", "percentage")
```
You can apply as many actions as you want simply by adding more action blocks to the chain.
```
recipe["avg_water_count"] = sq.entity("water").\
reduce("time", "count").\
reduce("space", "mean")
```
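To make the semantics of such a chain concrete, here is what the processor would conceptually compute for `avg_water_count`, sketched with a plain NumPy boolean cube. The array values are invented, and this bypasses semantique entirely; it only illustrates the two reductions:

```python
import numpy as np

# Invented boolean cube with dimensions (time, y, x):
# 4 time steps on a 2x2 spatial grid, 1 = "water observed".
water = np.array([
    [[1, 0], [1, 1]],
    [[1, 0], [0, 1]],
    [[0, 0], [1, 1]],
    [[1, 1], [1, 0]],
])

count_per_pixel = water.sum(axis=0)  # reduce("time", "count")
avg_count = count_per_pixel.mean()   # reduce("space", "mean")
print(count_per_pixel)               # [[3 1] [3 3]]
print(avg_count)                     # -> 2.5
```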
Some of the action blocks allow you to join information from other objects into the active evaluation object. For example, instead of only calculating the water count as shown above, we might be interested in the summed count of the concepts water and vegetation. Such an instruction can be modelled by nesting multiple processing chains.
```
recipe["summed_count"] = sq.entity("water").\
reduce("time", "count").\
evaluate("add", sq.entity("vegetation").reduce("time", "count"))
```
Again, it is important to notice that the query construction phase does not include *any* loading nor analysis of data or information. It simply creates a textual query recipe, which will be executed at a later stage. The query we constructed in all the examples above looks like [this](https://github.com/ZGIS/semantique/blob/main/demo/files/recipe.json).
We can export and share this query recipe as a JSON file.
```
with open("files/recipe.json", "w") as file:
json.dump(recipe, file, indent = 2)
```
### The factbase
The factbase is the place where the raw EO data and possibly derived information layers are stored. As mentioned before, the factbase is supposed to have a *layout* file that describes its content. This file has a dictionary-like structure. Each of its elements is again a dictionary, and represents the highest-level category of resources. This nested, hierarchical structure continues depending on the amount of sub-categories, until the point where you reach a metadata object belonging to a specific resource. It summarizes the data values of that resource, and also contains information on where to find this resource inside the storage structure of the factbase. Unless you create your own factbase, you will usually not write a layout file from scratch. Instead, the factbase you are using should already come with a layout file.
Semantique utilizes the layout file to create an internal model of the factbase. It pairs it with a **retriever function**. This function is able to read a reference to a specific resource, lookup its metadata object in the layout file, and use these metadata to retrieve the corresponding data values as a data cube from the actual data storage location.
However, the exact structure of a layout file (i.e. what metadata keys it exactly contains), as well as the way the retriever function has to retrieve the actual data values, heavily depends on the format of the data storage. Data may be stored on a database server utilizing some specific database management system, simply as files on disk, or whatever else.
Therefore, semantique offers a flexible structure in which different factbase formats are modelled by different classes, with different retriever functions. All these classes inherit from an abstract base class named [Factbase](https://zgis.github.io/semantique/_generated/semantique.factbase.Factbase.html#semantique.factbase.Factbase), which serves as a general template for how a factbase should be modelled.
Currently semantique contains two built-in factbase formats. The first one is called [Opendatacube](https://zgis.github.io/semantique/_generated/semantique.factbase.Opendatacube.html) and is tailored to usage with the EO-specific [OpenDataCube](https://www.opendatacube.org/) database management system. This class has an OpenDataCube-specific retriever function that knows exactly how to retrieve data from this system. You initialize an instance of this class by providing it the layout file, as well as an OpenDataCube connection object. This object allows the retriever function to connect to the database server and actually retrieve data from it. Probably all factbase formats that store the data on a server will need this kind of connection object.
```python
factbase = sq.factbase.Opendatacube(layout, connection = datacube.Datacube())
```
The second one is called [GeotiffArchive](https://zgis.github.io/semantique/_generated/semantique.factbase.GeotiffArchive.html) and has a much simpler format that assumes each resource is stored as a GeoTIFF file within a single ZIP archive. This class contains a retriever function that knows how to load GeoTIFF files as multi-dimensional arrays in Python, and how to subset (and possibly also resample and/or reproject) them to a given spatio-temporal extent. Instead of a database connection, we provide the initializer with the location of the ZIP file in which the resources are stored.
```python
factbase = sq.factbase.GeotiffArchive(layout, src = "foo.zip")
```
In the future more built-in factbase formats might be added, but as a user you can also write your own class for a specific factbase format that you use. See the [Advanced usage notebook](https://zgis.github.io/semantique/_notebooks/advanced.html#Creating-custom-factbase-classes) for details. It is important to note that the query processor does not care at all what the format of the factbase is and how resources are retrieved from the factbase. It only cares about what input the retriever function accepts, and in what format it returns the retrieved resource.
In our examples we will use the simpler [GeotiffArchive](https://zgis.github.io/semantique/_generated/semantique.factbase.GeotiffArchive.html) factbase format. We have a set of [example resources](https://github.com/ZGIS/semantique/blob/main/demo/files/resources.zip) for a tiny [spatial extent](https://github.com/ZGIS/semantique/blob/main/demo/files/footprint.json) and only three different timestamps, as well as a [layout file](https://github.com/ZGIS/semantique/blob/main/demo/files/factbase.json) that contains all necessary metadata entries the retriever function of this format needs.
```
with open("files/factbase.json", "r") as file:
layout = json.load(file)
factbase = sq.factbase.GeotiffArchive(layout, src = "files/resources.zip")
```
The retriever function is a method of this factbase instance, which will internally be called by the query processor whenever a specific resource is referenced.
```
hasattr(factbase, "retrieve")
```
### The ontology
The ontology plays an essential role in the semantic querying framework. It serves as the mapping between the image-domain and the real-world domain. That is, it contains rulesets that define how real-world concepts and their properties are represented by the data in the factbase. By doing that, it also formalizes how concepts are categorized and how the relations between multiple concepts and/or their properties are structured.
These rulesets are stored in a dictionary-like structure. Each of its elements is again a dictionary, and represents the highest-level category of concepts. This nested, hierarchical structure continues depending on the amount of sub-categories, until the point where you reach a ruleset defining a specific concept.
In semantique, an ontology is always paired with a **translator function**. This function is able to read a reference to a specific concept, lookup its ruleset object in the ontology, and use these rules to translate the reference into a data cube. When the rules describe *binary relationships* between the semantic concepts and the data values, this data cube will be boolean, where pixels that are identified as being an observation of the concept get a "true" value (i.e. 1), and the other pixels get a "false" value (i.e. 0).
However, the way the rules are specified, and therefore also the way they should be evaluated by the translator function, are not fixed. Basically, you can do this in any way you want. For example, your rules could be a set of parameters for a given machine learning model, and your translator a function that knows how to run that model with those parameters. Your rules could also be paths or download links to some Python scripts, and your translator a function that knows how to execute these scripts. Hence, just as the factbase models described before, the ontology models in semantique can have many different formats.
Therefore, semantique offers a flexible structure in which different ontology formats are modelled by different classes, with different translator functions. All these classes inherit from an abstract base class named [Ontology](https://zgis.github.io/semantique/_generated/semantique.ontology.Ontology.html), which serves as a general template for how an ontology should be modelled. Currently there is only one built-in ontology format in semantique, called (unsurprisingly) [Semantique](https://zgis.github.io/semantique/_generated/semantique.ontology.Semantique.html). We will introduce this format below. As a user you can also write your own class for a specific ontology format that you use by inheriting from the abstract [Ontology](https://zgis.github.io/semantique/_generated/semantique.ontology.Ontology.html) class. See the [Advanced usage notebook](https://zgis.github.io/semantique/_notebooks/advanced.html#Creating-custom-ontology-classes) for details. It is important to note that the query processor does not care at all what the format of the ontology is and how it translates concept references. It only cares about what input the translator function accepts, and in what format it returns the translated concepts.
Back to the semantique-specific ontology format. We can create an instance of it by providing a dictionary of rulesets that was shared with us. However, expert users can also create their own ontology from scratch. In that case, you'll start with an empty ontology and iteratively fill it with rules afterwards:
```
ontology = sq.ontology.Semantique()
```
The translator function is a method of this ontology instance, which will internally be called by the query processor whenever a specific concept is referenced.
```
hasattr(ontology, "translate")
```
In this example, we will focus solely on defining entities, and use a one-layer categorization. That is, our only category is *entity*. The first step is to add this category as an element to the ontology. Its value can still be an empty dictionary. We will add the concept definitions afterwards.
> **NOTE** <br/> The examples we use below are heavily simplified and don't always make sense, but are meant mainly to get an idea of how the package works.
```
ontology["entity"] = {}
```
Let's first look deeper into the structure of concept definitions. Each concept is defined by one or more named **properties** it has. For example, an entity *lake* may be defined by its *color* (a blueish, water-like color) in combination with its *texture* (it has an approximately flat surface). That is, the ruleset of a semantic concept definition is a set of distinct property definitions.
Now, we need to construct rules that define a binary relationship between a property and the data values in the factbase. That is, the rules should define for each pixel in our data if it meets a specific property ("true"), or not ("false"). In the Semantique-format, we can do this by utilizing the same building blocks as we did for constructing our query recipe. The only difference is that a processing chain will now usually start with a reference to a factbase resource. During query processing, this reference will be sent to the retriever function of the factbase, which will return a data cube filled with the requested data values. Then, pre-defined actions will be applied to this data cube. Usually these actions will encompass the evaluation of a comparison operator, in which the value of each pixel is compared to some constant (set of) value(s), returning a "true" value (i.e. 1) when the comparison holds, and a "false" value (i.e. 0) otherwise.
For example: we utilize the "Color type" resource to define if a pixel has a water-like color. This resource is a layer of semantically enriched data and contains categorical values. The categories with indices 21, 22, 23 and 24 correspond to color combinations that *appear* to be water. Hence, we state that a pixel meets the color property of a lake when its value in this "Color type" resource corresponds with one of the above mentioned indices. Furthermore, we state that a pixel meets the texture property of a lake when its value in the "slope" resource equals 0.
```
ontology["entity"]["lake"] = {
"color": sq.appearance("Color type").evaluate("in", [21, 22, 23, 24]),
"texture": sq.topography("slope").evaluate("equal", 0)
}
```
To define the entity, its property cubes are combined using an [all()](https://zgis.github.io/semantique/_generated/semantique.processor.reducers.all_.html) merger. That means that a pixel is evaluated as being part of an entity if and only if it meets *all* properties of that entity.
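The effect of the `all()` merger can be sketched with NumPy: a pixel belongs to the entity only when every one of its property masks is true (the masks below are invented; this is not the package's internal implementation):

```python
import numpy as np

# Invented boolean property cubes for a 2x2 spatial grid.
color = np.array([[True, True], [False, True]])
texture = np.array([[True, False], [False, True]])

# "all" merger: a pixel is part of the entity iff it meets every property.
lake = np.logical_and.reduce([color, texture])
print(lake)  # [[ True False] [False  True]]
```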
Now we define a second entity *river*, which we say has the same color property as a lake, but instead has a non-zero slope.
> **NOTE** <br/> Different entities do not *need* to have the same properties defined.
```
ontology["entity"]["river"] = {
"color": sq.appearance("Color type").evaluate("in", [21, 22, 23, 24]),
"texture": sq.topography("slope").evaluate("not_equal", 0)
}
```
As you see, there is a relation between the entities *lake* and *river*. They share a property. However, we defined the same property twice. This is not needed, because in the Semantique-format, you can always refer to other entities in your ontology, as well as to properties in these entities. In this way, you can intuitively model relations between different semantic concepts. Hence, the same *river* definition can also be structured as follows:
```
ontology["entity"]["river"] = {
"color": sq.entity("lake", property = "color"),
"texture": sq.entity("lake", property = "texture").evaluate("invert")
}
```
Or, to take it a step further, as below. Basically we are saying here that a *lake* has the color of *water* and the texture of a *plain* (again, we oversimplify here!).
```
ontology["entity"]["water"] = {
"color": sq.appearance("Color type").evaluate("in", [21, 22, 23, 24]),
}
ontology["entity"]["vegetation"] = {
"color": sq.appearance("Color type").evaluate("in", [1, 2, 3, 4, 5, 6]),
}
ontology["entity"]["plain"] = {
"color": sq.entity("vegetation", property = "color"),
"texture": sq.topography("slope").evaluate("equal", 0)
}
ontology["entity"]["lake"] = {
"color": sq.entity("water", property = "color"),
"texture": sq.entity("plain", property = "texture")
}
ontology["entity"]["river"] = {
"color": sq.entity("water", property = "color"),
"texture": sq.entity("plain", property = "texture").evaluate("invert")
}
```
We can also model relationships in a way where some entity is the union of other entities.
```
ontology["entity"]["natural_area"] = {
"members": sq.collection(sq.entity("water"), sq.entity("vegetation")).merge("or")
}
```
It is also possible to include temporal information. For example, we only consider an observation to be part of a lake when over time more than 80% of the observations at that location are identified as water, excluding those observations that are identified as a cloud.
```
ontology["entity"]["lake"] = {
"color": sq.entity("water", property = "color"),
"texture": sq.entity("plain", property = "texture"),
"continuity": sq.entity("water", property = "color").\
filter(sq.entity("cloud").evaluate("invert")).\
reduce("time", "percentage").\
evaluate("greater", 80)
}
```
The flexible structure of semantique's building blocks makes many more constructions possible. Now that you have an idea of how to construct an ontology from scratch using the built-in Semantique-format, we move on and construct a complete ontology in one go. We use simpler rulesets than above, since our demo factbase only contains a very limited set of resources.
```
ontology = sq.ontology.Semantique()
ontology["entity"] = {}
ontology["entity"]["water"] = {"color": sq.appearance("Color type").evaluate("in", [21, 22, 23, 24])}
ontology["entity"]["vegetation"] = {"color": sq.appearance("Color type").evaluate("in", [1, 2, 3, 4, 5, 6])}
ontology["entity"]["builtup"] = {"color": sq.appearance("Color type").evaluate("in", [13, 14, 15, 16, 17])}
ontology["entity"]["cloud"] = {"color": sq.atmosphere("Color type").evaluate("equal", 25)}
ontology["entity"]["snow"] = {"color": sq.appearance("Color type").evaluate("in", [29, 30])}
```
Our constructed ontology looks like [this](https://github.com/ZGIS/semantique/blob/main/demo/files/ontology.json). We can export and share this ontology as a JSON file.
```
with open("files/ontology.json", "w") as file:
    json.dump(ontology, file, indent = 2)
```
That also means that as non-experts we don't have to worry about constructing our own ontology from scratch. We can simply load a shared ontology in the same way as we loaded the layout file of the factbase, and construct the ontology object accordingly.
```
with open("files/ontology.json", "r") as file:
    rules = json.load(file)
ontology = sq.ontology.Semantique(rules)
```
### The spatio-temporal extent
Semantic query recipes are general recipes for inference of new knowledge. In theory, they are not restricted to specific areas or specific timespans. However, the recipes are executed with respect to given spatio-temporal bounds. That is, we need to provide both a spatial and temporal extent when executing a semantic query recipe.
To model a spatial extent, semantique contains the [SpatialExtent](https://zgis.github.io/semantique/_generated/semantique.extent.SpatialExtent.html) class. An instance of this class can be initialized by providing it any object that can be read by the [GeoDataFrame](https://geopandas.org/docs/reference/api/geopandas.GeoDataFrame.html) initializer of the [geopandas](https://geopandas.org/en/stable/) package. Any additional keyword arguments will be forwarded to this initializer. In practice, this means you can read any GDAL-supported file format with [geopandas.read_file()](https://geopandas.org/en/stable/docs/reference/api/geopandas.read_file.html), and then use that object to initialize a spatial extent. In this demo we use a small, rectangular area around Zell am See in Salzburger Land, Austria.
```
geodf = gpd.read_file("files/footprint.geojson")
geodf.explore()
space = sq.SpatialExtent(geodf)
```
To model a temporal extent, semantique contains the [TemporalExtent](https://zgis.github.io/semantique/_generated/semantique.extent.TemporalExtent.html) class. An instance of this class can be initialized by providing it the first timestamp of the timespan, and the last timestamp of the timespan. The given interval is treated as being closed at both sides.
```
time = sq.TemporalExtent("2019-01-01", "2020-12-31")
```
Just as with the spatial extent, there is a lot of flexibility in how you can provide your timestamps. You can provide dates in formats such as "2020-12-31" or "2020/12/31", but also complete ISO8601 timestamps such as "2020-12-31T14:37:22". As long as the [Timestamp](https://pandas.pydata.org/docs/reference/api/pandas.Timestamp.html) initializer of the [pandas](https://pandas.pydata.org/) package can understand it, it is supported by semantique. Any additional keyword arguments will be forwarded to this initializer.
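A quick way to convince yourself of this flexibility is to feed the different formats to pandas directly; the check below uses only the pandas `Timestamp` initializer mentioned above.

```python
import pandas as pd

# All three formats mentioned above parse fine; the first two denote the
# same date, while the third additionally carries a time-of-day component.
a = pd.Timestamp("2020-12-31")
b = pd.Timestamp("2020/12/31")
c = pd.Timestamp("2020-12-31T14:37:22")
print(a == b)   # True: the two date-only formats are equivalent
print(c.hour)   # 14
```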
### Additional configuration parameters
The last thing left before executing our semantic query recipe is to define some additional configuration parameters. This includes the desired coordinate reference system (CRS) in which spatial coordinates should be represented, as well as the time zone in which temporal coordinates should be represented. You should also provide the desired spatial resolution of your output, as a list containing respectively the y and x resolution in CRS units (i.e. usually meters for a projected CRS and degrees for a geographic CRS), including direction. Note that for most CRS this means that the first value (i.e. the y-value) of the resolution will always be negative.
There are also other configuration parameters that can be included to tune the behaviour of the query processor. See the [Advanced usage notebook](advanced.ipynb) for details.
```
config = {"crs": 3035, "tz": "UTC", "spatial_resolution": [-10, 10]}
```
## Processing
Now that we have all components constructed, we are ready to execute our semantic query recipe. Hooray! This step is quite simple. You call the [execute()](https://zgis.github.io/semantique/_generated/semantique.QueryRecipe.execute.html#semantique.QueryRecipe.execute) method of our recipe object, and provide it the factbase object, the ontology object, the spatial and temporal extents, and the additional configuration parameters. Then, just be a bit patient... Internally, the query processor will solve all references, evaluate them into data cubes, and apply the defined actions to them. In the [Advanced usage notebook](https://zgis.github.io/semantique/_notebooks/advanced.html#The-query-processor-class) the implementation of query processing is described in some more detail.
```
response = recipe.execute(factbase, ontology, space, time, **config)
```
The response of the query is a dictionary with one element per result.
```
for key in response.keys():
    print(key)
```
Each result is stored as an instance of the [DataArray](http://xarray.pydata.org/en/stable/user-guide/data-structures.html#dataarray) class from the [xarray](https://docs.xarray.dev/en/stable/) package, which serves as the backbone for most of the analysis tasks the query processor performs.
```
for x in response.values():
    print(type(x))
```
The dimensions of the arrays depend on the actions that were called in the result instruction. Some results might only have spatial dimensions (i.e. a map).
```
response["water_map"]
```
Other results might only have the temporal dimension (i.e. a time series).
```
response["water_time_series"]
```
And other results might even be dimensionless (i.e. a single aggregated value).
```
response["avg_water_count"]
```
There may also be results that contain both the spatial and temporal dimensions, as well as results that contain an additional, thematic dimension.
Since the result objects are [DataArray](http://xarray.pydata.org/en/stable/user-guide/data-structures.html#dataarray) objects, we can use xarray for any further processing, and also to visualize the results. Again, see the [xarray documentation](http://xarray.pydata.org/en/stable/index.html) for more details on what that package has to offer (which is a lot!). For now, we will just plot some of our obtained results to give an impression. In the [Gallery notebook](gallery.ipynb) you can find much more of such examples.
```
f, (ax1, ax2) = plt.subplots(1, 2, figsize = (15, 5))
water_count = response["water_map"]
values = list(range(int(np.nanmin(water_count)), int(np.nanmax(water_count)) + 1))
levels = [x - 0.5 for x in values + [max(values) + 1]]
colors = plt.cm.Blues
water_count.plot(ax = ax1, levels = levels, cmap = colors, cbar_kwargs = {"ticks": values, "label": "count"})
ax1.set_title("Water")
vegetation_count = response["vegetation_map"]
values = list(range(int(np.nanmin(vegetation_count)), int(np.nanmax(vegetation_count)) + 1))
levels = [x - 0.5 for x in values + [max(values) + 1]]
colors = plt.cm.Greens
vegetation_count.plot(ax = ax2, levels = levels, cmap = colors, cbar_kwargs = {"ticks": values, "label": "count"})
ax2.set_title("Vegetation")
plt.tight_layout()
plt.draw()
```
Do note how the water count map contains many pixels that are counted as water but are clearly not water in the real world. Instead, these pixels correspond to observations in the shadow of a mountain. The color of water and shadow on a satellite image is very similar. Since in our ontology we only defined water based on its *color* property, the query processor cannot differentiate it from shadow. This shows how important it is for accurate results to use multiple properties in entity definitions!
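Within the Semantique-format shown earlier, such a refinement could look as follows. This is only an illustrative sketch: the extra *texture* rule mirrors the *plain* definition above and has not been validated against the demo factbase.

```python
# Hypothetical refinement: also require flat terrain, so that mountain
# shadows on steep slopes no longer qualify as water.
ontology["entity"]["water"] = {
    "color": sq.appearance("Color type").evaluate("in", [21, 22, 23, 24]),
    "texture": sq.topography("slope").evaluate("equal", 0)
}
```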
| github_jupyter |
# Saving and Loading Models
In this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data.
```
from google.colab import drive
drive.mount("/content/gdrive")
import sys
sys.path.append("/content/gdrive/My Drive/intro-to-pytorch")
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import helper
import fc_model
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here we can see one of the images.
```
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
```
# Train a network
To make things more concise here, I moved the model architecture and training code from the last part to a file called `fc_model`. Importing this, we can easily create a fully-connected network with `fc_model.Network`, and train the network using `fc_model.train`. I'll use this model (once it's trained) to demonstrate how we can save and load models.
```
# Create the network, define the criterion and optimizer
model = fc_model.Network(784, 10, [512, 256, 128])
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=2)
```
## Saving and loading networks
As you can imagine, it's impractical to train a network every time you need to use it. Instead, we can save trained networks then load them later to train more or use them for predictions.
The parameters for PyTorch networks are stored in a model's `state_dict`. We can see the state dict contains the weight and bias matrices for each of our layers.
```
print("Our model: \n\n", model, '\n')
print("The state dict keys: \n\n", model.state_dict().keys())
```
The simplest thing to do is simply save the state dict with `torch.save`. For example, we can save it to a file `'checkpoint.pth'`.
```
torch.save(model.state_dict(), 'checkpoint.pth')
```
Then we can load the state dict with `torch.load`.
```
state_dict = torch.load('checkpoint.pth')
print(state_dict.keys())
```
And to load the state dict in to the network, you do `model.load_state_dict(state_dict)`.
```
model.load_state_dict(state_dict)
```
Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails.
```
# Try this
model = fc_model.Network(784, 10, [400, 200, 100])
# This will throw an error because the tensor sizes are wrong!
model.load_state_dict(state_dict)
```
This means we need to rebuild the model exactly as it was when trained. Information about the model architecture needs to be saved in the checkpoint, along with the state dict. To do this, you build a dictionary with all the information you need to completely rebuild the model.
```
checkpoint = {'input_size': 784,
'output_size': 10,
'hidden_layers': [each.out_features for each in model.hidden_layers],
'state_dict': model.state_dict()}
torch.save(checkpoint, 'checkpoint.pth')
```
Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints.
```
def load_checkpoint(filepath):
    checkpoint = torch.load(filepath)
    model = fc_model.Network(checkpoint['input_size'],
                             checkpoint['output_size'],
                             checkpoint['hidden_layers'])
    model.load_state_dict(checkpoint['state_dict'])
    return model

model = load_checkpoint('checkpoint.pth')
print(model)
```
| github_jupyter |
# NLP Using PySpark
## Objective:
- The objective of this project is to create a <b>Spam filter using a NaiveBayes classifier</b>.
- It is required to obtain an <b>f1_score > 0.9</b>.
- We'll use a dataset from UCI Repository. SMS Spam Detection: https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection
### Create a spark session and import the required libraries
```
import findspark
findspark.init()
import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
```
### Read the readme file to learn more about the data
### Read the data into a DataFrame
```
df = spark.read.format("csv") \
.option("delimiter", "\t")\
.load("SMSSpamCollection")
```
### Print the schema
```
df.printSchema()
```
### Rename the first column to 'class' and second column to 'text'
```
df = df.withColumnRenamed("_c0","class")
df = df.withColumnRenamed("_c1","text")
df.printSchema()
```
### Show the first 10 rows from the dataframe
- Show once with truncate=True and once with truncate=False
```
df.show(10)
df.show(10, truncate=False)
```
## Clean and Prepare the Data
### Create a new feature column that contains the length of the text column
```
import pyspark.sql.functions as F
df = df.withColumn('length', F.length('text'))
```
### Show the new dataframe
```
df.show()
```
### Get the average text length for each class
```
avg_length = df.groupBy("class").agg(F.mean('length').alias('Avg. Length'))
avg_length.show()
```
## Feature Transformations
### Perform the following steps to obtain TF-IDF:
1. Import the required transformers/estimators for the subsequent steps.
2. Create a <b>Tokenizer</b> from the text column.
3. Create a <b>StopWordsRemover</b> to remove the <b>stop words</b> from the column obtained from the <b>Tokenizer</b>.
4. Create a <b>CountVectorizer</b> after removing the <b>stop words</b>.
5. Create the <b>TF-IDF</b> from the <b>CountVectorizer</b>.
```
from pyspark.ml.feature import Tokenizer, StopWordsRemover, CountVectorizer,IDF,StringIndexer
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.linalg import Vector
tokenizer = Tokenizer(inputCol="text", outputCol="token_text")
stop_word_remover = StopWordsRemover(inputCol='token_text',outputCol='stop_tokens')
count_vec = CountVectorizer(inputCol='stop_tokens',outputCol='c_vec')
idf = IDF(inputCol="c_vec", outputCol="tf_idf")
```
- Convert the <b>class column</b> to an index using <b>StringIndexer</b>
- Create a feature column from the <b>TF-IDF</b> and <b>length</b> columns.
```
stringIndexer = StringIndexer(inputCol='class',outputCol='label')
vecAssembler = VectorAssembler(inputCols=['tf_idf','length'],outputCol='features')
```
## The Model
- Create a <b>NaiveBayes</b> classifier with the default parameters.
```
from pyspark.ml.classification import NaiveBayes
nb = NaiveBayes()
```
## Pipeline
### Create a pipeline model that contains all the steps, from the Tokenizer to the NaiveBayes classifier.
```
from pyspark.ml import Pipeline
pipeline = Pipeline(stages=[stringIndexer, tokenizer, stop_word_remover, count_vec,idf, vecAssembler, nb])
```
### Split your data into train and test data with ratios 0.7 and 0.3 respectively.
```
X_train, X_test = df.randomSplit([0.7, 0.3],seed = 42)
```
### Fit your Pipeline model to the training data
```
model = pipeline.fit(X_train)
```
### Perform predictions on the test dataframe
```
predictions = model.transform(X_test)
```
### Print the schema of the prediction dataframe
```
predictions.printSchema()
```
## Model Evaluation
- Use <b>MulticlassClassificationEvaluator</b> to calculate the <b>f1_score</b>.
```
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator = MulticlassClassificationEvaluator()
f1_sc = evaluator.evaluate(predictions, {evaluator.metricName: "f1"})
print('f1_score = ', f1_sc)
```
| github_jupyter |
# Customize a TabNet Model
## This tutorial gives examples on how to easily customize a TabNet Model
### 1 - Customizing your learning rate scheduler
Almost all classical pytorch schedulers are now easy to integrate with pytorch-tabnet
### 2 - Use your own loss function
It's really easy to use any pytorch loss function with TabNet, we'll walk you through that
### 3 - Customizing your evaluation metric and evaluation sets
Like XGBoost, you can easily monitor different metrics on different evaluation sets with pytorch-tabnet
```
from pytorch_tabnet.tab_model import TabNetClassifier
import torch
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import roc_auc_score
import pandas as pd
import numpy as np
np.random.seed(0)
import os
import wget
from pathlib import Path
from matplotlib import pyplot as plt
%matplotlib inline
```
### Download census-income dataset
```
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
dataset_name = 'census-income'
out = Path(os.getcwd()+'/data/'+dataset_name+'.csv')
out.parent.mkdir(parents=True, exist_ok=True)
if out.exists():
    print("File already exists.")
else:
    print("Downloading file...")
    wget.download(url, out.as_posix())
```
### Load data and split
```
train = pd.read_csv(out)
target = ' <=50K'
if "Set" not in train.columns:
    train["Set"] = np.random.choice(["train", "valid", "test"], p=[.8, .1, .1], size=(train.shape[0],))
train_indices = train[train.Set=="train"].index
valid_indices = train[train.Set=="valid"].index
test_indices = train[train.Set=="test"].index
```
### Simple preprocessing
Label encode categorical features and fill empty cells.
```
nunique = train.nunique()
types = train.dtypes
categorical_columns = []
categorical_dims = {}
for col in train.columns:
    if types[col] == 'object' or nunique[col] < 200:
        print(col, train[col].nunique())
        l_enc = LabelEncoder()
        train[col] = train[col].fillna("VV_likely")
        train[col] = l_enc.fit_transform(train[col].values)
        categorical_columns.append(col)
        categorical_dims[col] = len(l_enc.classes_)
    else:
        train.fillna(train.loc[train_indices, col].mean(), inplace=True)
```
### Define categorical features for categorical embeddings
```
unused_feat = ['Set']
features = [ col for col in train.columns if col not in unused_feat+[target]]
cat_idxs = [ i for i, f in enumerate(features) if f in categorical_columns]
cat_dims = [ categorical_dims[f] for i, f in enumerate(features) if f in categorical_columns]
```
# 1 - Customizing your learning rate scheduler
TabNetClassifier, TabNetRegressor and TabNetMultiTaskClassifier all takes two arguments:
- scheduler_fn : Any torch.optim.lr_scheduler should work
- scheduler_params : A dictionary that contains the parameters of your scheduler (without the optimizer)
----
NB1 : Some schedulers like torch.optim.lr_scheduler.ReduceLROnPlateau depend on the evolution of a metric; pytorch-tabnet will use the early-stopping metric you specified (the last eval_metric, see section 3) to perform the scheduler updates
EX1 :
```
scheduler_fn=torch.optim.lr_scheduler.ReduceLROnPlateau
scheduler_params={"mode":'max', # max because default eval metric for binary is AUC
"factor":0.1,
"patience":1}
```
-----
NB2 : Some schedulers require updates at batch level; they can be used very easily, the only thing to do is to set `is_batch_level` to True in your `scheduler_params`
EX2:
```
scheduler_fn=torch.optim.lr_scheduler.CyclicLR
scheduler_params={"is_batch_level":True,
"base_lr":1e-3,
"max_lr":1e-2,
"step_size_up":100
}
```
-----
NB3: Note that you can also customize your optimizer function, any torch optimizer should work
```
# Network parameters
max_epochs = 20 if not os.getenv("CI", False) else 2
batch_size = 1024
clf = TabNetClassifier(cat_idxs=cat_idxs,
cat_dims=cat_dims,
cat_emb_dim=1,
optimizer_fn=torch.optim.Adam, # Any optimizer works here
optimizer_params=dict(lr=2e-2),
scheduler_fn=torch.optim.lr_scheduler.OneCycleLR,
scheduler_params={"is_batch_level":True,
"max_lr":5e-2,
"steps_per_epoch":int(train.shape[0] / batch_size)+1,
"epochs":max_epochs
},
mask_type='entmax', # "sparsemax",
)
```
### Training
```
X_train = train[features].values[train_indices]
y_train = train[target].values[train_indices]
X_valid = train[features].values[valid_indices]
y_valid = train[target].values[valid_indices]
X_test = train[features].values[test_indices]
y_test = train[target].values[test_indices]
```
# 2 - Use your own loss function
The default loss for classification is torch.nn.functional.cross_entropy
The default loss for regression is torch.nn.functional.mse_loss
Any differentiable loss function of the type `lambda y_pred, y_true: loss(y_pred, y_true)` should work if it uses torch computations (to allow gradient computation).
In particular, any pytorch loss function should work.
Once your loss is defined, simply pass it as the loss_fn argument when defining your model.
/!\ : One important thing to keep in mind is that when computing the loss for TabNetClassifier and TabNetMultiTaskClassifier you'll need to first apply torch.nn.Softmax() to y_pred, as the final model prediction is softmaxed automatically.
NB : Tabnet also has an internal loss (the sparsity loss) which is summed to the loss_fn, the importance of the sparsity loss can be mitigated using `lambda_sparse` parameter
```
def my_loss_fn(y_pred, y_true):
    """
    Dummy example similar to using default torch.nn.functional.cross_entropy
    """
    softmax_pred = torch.nn.Softmax(dim=-1)(y_pred)
    logloss = (1-y_true)*torch.log(softmax_pred[:,0])
    logloss += y_true*torch.log(softmax_pred[:,1])
    return -torch.mean(logloss)
```
# 3 - Customizing your evaluation metric and evaluation sets
When calling the `fit` method you can specify:
- eval_set : a list of tuples like (X_valid, y_valid)
Note that the last value of this list will be used for early stopping
- eval_name : a list to name each eval set
default will be val_0, val_1 ...
- eval_metric : a list of default metrics or custom metrics
Default : "auc", "accuracy", "logloss", "balanced_accuracy", "mse", "rmse"
NB : If no eval_set is given, no early stopping will occur (patience is then ignored) and the weights used will be the last epoch's weights
NB2 : If `patience<=0` this will disable early stopping
NB3 : Setting `patience` to `max_epochs` ensures that training won't be early stopped, but the weights from the best epoch will be used (instead of the last epoch's weights, as happens when early stopping is disabled)
```
from pytorch_tabnet.metrics import Metric
class my_metric(Metric):
    """
    2xAUC.
    """
    def __init__(self):
        self._name = "custom" # write an understandable name here
        self._maximize = True

    def __call__(self, y_true, y_score):
        """
        Compute AUC of predictions.

        Parameters
        ----------
        y_true: np.ndarray
            Target matrix or vector
        y_score: np.ndarray
            Score matrix or vector

        Returns
        -------
        float
            AUC of predictions vs targets.
        """
        return 2*roc_auc_score(y_true, y_score[:, 1])
clf.fit(
X_train=X_train, y_train=y_train,
eval_set=[(X_train, y_train), (X_valid, y_valid)],
eval_name=['train', 'val'],
eval_metric=["auc", my_metric],
max_epochs=max_epochs , patience=0,
batch_size=batch_size,
virtual_batch_size=128,
num_workers=0,
weights=1,
drop_last=False,
loss_fn=my_loss_fn
)
# plot losses
plt.plot(clf.history['loss'])
# plot auc
plt.plot(clf.history['train_auc'])
plt.plot(clf.history['val_auc'])
# plot learning rates
plt.plot(clf.history['lr'])
```
## Predictions
```
preds = clf.predict_proba(X_test)
test_auc = roc_auc_score(y_score=preds[:,1], y_true=y_test)
preds_valid = clf.predict_proba(X_valid)
valid_auc = roc_auc_score(y_score=preds_valid[:,1], y_true=y_valid)
print(f"FINAL VALID SCORE FOR {dataset_name} : {clf.history['val_auc'][-1]}")
print(f"FINAL TEST SCORE FOR {dataset_name} : {test_auc}")
# check that last epoch's weight are used
assert np.isclose(valid_auc, clf.history['val_auc'][-1], atol=1e-6)
```
# Save and load Model
```
# save tabnet model
saving_path_name = "./tabnet_model_test_1"
saved_filepath = clf.save_model(saving_path_name)
# define new model with basic parameters and load state dict weights
loaded_clf = TabNetClassifier()
loaded_clf.load_model(saved_filepath)
loaded_preds = loaded_clf.predict_proba(X_test)
loaded_test_auc = roc_auc_score(y_score=loaded_preds[:,1], y_true=y_test)
print(f"FINAL TEST SCORE FOR {dataset_name} : {loaded_test_auc}")
assert(test_auc == loaded_test_auc)
```
# Global explainability : feat importance summing to 1
```
clf.feature_importances_
```
# Local explainability and masks
```
explain_matrix, masks = clf.explain(X_test)
fig, axs = plt.subplots(1, 3, figsize=(20,20))
for i in range(3):
    axs[i].imshow(masks[i][:50])
    axs[i].set_title(f"mask {i}")
```
# XGB
```
from xgboost import XGBClassifier
clf_xgb = XGBClassifier(max_depth=8,
learning_rate=0.1,
n_estimators=1000,
verbosity=0,
silent=None,
objective='binary:logistic',
booster='gbtree',
n_jobs=-1,
nthread=None,
gamma=0,
min_child_weight=1,
max_delta_step=0,
subsample=0.7,
colsample_bytree=1,
colsample_bylevel=1,
colsample_bynode=1,
reg_alpha=0,
reg_lambda=1,
scale_pos_weight=1,
base_score=0.5,
random_state=0,
seed=None,)
clf_xgb.fit(X_train, y_train,
eval_set=[(X_valid, y_valid)],
early_stopping_rounds=40,
verbose=10)
preds = np.array(clf_xgb.predict_proba(X_valid))
valid_auc = roc_auc_score(y_score=preds[:,1], y_true=y_valid)
print(valid_auc)
preds = np.array(clf_xgb.predict_proba(X_test))
test_auc = roc_auc_score(y_score=preds[:,1], y_true=y_test)
print(test_auc)
```
| github_jupyter |
# ANDES Demonstration of `DGPRCTExt` on IEEE 14-Bus System
Prepared by Jinning Wang. Last revised 12 September 2021.
## Background
The voltage signal is set manually to demonstrate `DGPRCTExt`.
In the modified IEEE 14-bus system, 10 `PVD1` devices are connected to `Bus4`, and 1 `DGPRCTExt` is added targeting `PVD1_2`.
## Conclusion
`DGPRCTExt` can be used to implement protection on `DG` models, where the voltage signal can be manipulated manually. This feature allows co-simulation, where you can feed an external voltage signal into ANDES via the `set` function.
```
import andes
from andes.utils.paths import get_case
andes.config_logger(stream_level=30)
ss = andes.load(get_case('ieee14/ieee14_dgprctext.xlsx'),
setup=False,
no_output=True)
ss.setup()
# use constant power model for PQ
ss.PQ.config.p2p = 1
ss.PQ.config.q2q = 1
ss.PQ.config.p2z = 0
ss.PQ.config.q2z = 0
# turn off under-voltage PQ-to-Z conversion
ss.PQ.pq2z = 0
ss.PFlow.run()
```
## Simulation
Let's run the simulation and manipulate the voltage signal manually.
1) run the TDS to 1s.
```
ss.TDS.config.tf = 1
ss.TDS.run()
```
2) store initial Bus4 voltage value.
```
bus4v0 = ss.Bus.v.v[3]
```
3) set the external voltage at 0.7 manually.
```
ss.DGPRCTExt.set(src='v', idx='DGPRCTExt_1', attr='v', value=0.7)
```
4) continue the TDS to 5s.
```
ss.TDS.config.tf = 5
ss.TDS.run()
```
5) reset the external voltage back to normal manually.
```
ss.DGPRCTExt.set(src='v', idx='DGPRCTExt_1', attr='v', value=bus4v0)
```
6) continue the TDS to 10s.
```
ss.TDS.config.tf = 10
ss.TDS.run()
```
## Results
### system frequency
```
ss.TDS.plt.plot(ss.GENROU.omega,
ycalc=lambda x:60*x,
title='Generator Speed $\omega$')
```
### Lock flag
The lock flag is raised after `TVl1` when the voltage drops below `Vl1`.
```
ss.TDS.plt.plot(ss.DGPRCTExt.ue,
title='DGPRCTExt\_1 lock flag (applied on PVD1\_2)')
```
### PVD1_2 read frequency and frequency signal source
The `PVD1_2` read frequency is locked, but the signal source (the `BusFreq 4` device) remains unchanged.
```
ss.TDS.plt.plot(ss.PVD1.fHz,
a=(0,1),
title='PVD1 Read f')
ss.TDS.plt.plot(ss.DGPRCTExt.fHz,
title='BusFreq 4 Output f')
```
### PVD1_2 power command
`PVD1_2` power commands are locked to 0 **immediately**.
Once the protection is released, they return to normal **immediately**.
```
ss.TDS.plt.plot(ss.PVD1.Psum,
a=(0,1),
title='PVD1 $P_{tot}$ (active power command)')
ss.TDS.plt.plot(ss.PVD1.Qsum,
a=(0,1),
title='PVD1 $Q_{tot}$ (reactive power command)')
```
### PVD1_2 current command
Consequently, `PVD1_2` current commands are locked to 0 **immediately**.
Once the protection is released, they return to normal **immediately**.
```
ss.TDS.plt.plot(ss.PVD1.Ipul,
a=(0,1),
title='PVD1 $I_{p,ul}$ (current command before hard limit)')
ss.TDS.plt.plot(ss.PVD1.Iqul,
a=(0,1),
title='PVD1 $I_{q,ul}$ (current command before hard limit)')
```
### PVD1_2 output current
As a result, the `PVD1_2` output currents decrease to 0 **gradually**.
When the protection is released, they return to normal **gradually**.
Here, the `PVD1` output current `Lag` time constants (`tip` and `tiq`) are modified to 0.5, purely to make the response observable.
Usually, power electronic devices can respond at the millisecond level.
```
ss.TDS.plt.plot(ss.PVD1.Ipout_y,
a=(0,1),
title='PVD1 $I_{p,out}$ (actual output current)')
ss.TDS.plt.plot(ss.PVD1.Iqout_y,
a=(0,1),
title='PVD1 $I_{q,out}$ (actual output current)')
```
## Cleanup
```
!andes misc -C
```
| github_jupyter |
# Am I feeding my network crap
Given that my research on the image content of optical flow images shows such huge variety, is my image generation doing anything useful with it? Perhaps experiment with a very small network for, say, only 10 classes?
First let's look at the output for something relatively easy like cricket.
```
import os
import sys
up1 = os.path.abspath('../../utils/')
up2 = os.path.abspath('../../models/')
sys.path.insert(0, up1)
sys.path.insert(0, up2)
from optical_flow_data_gen import DataGenerator
from ucf101_data_utils import get_test_data_opt_flow, get_train_data_opt_flow
from motion_network import getKerasCifarMotionModel2, getKerasCifarMotionModelOnly
from keras.optimizers import SGD
from matplotlib import pyplot as plt
import cv2
import numpy as np
```
# Is it the data or my classifier
I am starting to wonder what it is about my optical flow data that might be making this so hard. Regardless of the unconverged flow images, I feel the author of the dataset still managed with it. So there are essentially two possibilities: either I can get a large amount of improvement simply from how I train my classifier (slower?), or my data set is not quite right. I've already seen that I wasn't even doing any random transforms on my opt flow images, courtesy of my badly written opt flow data generator.
Anyhow, what I am aiming to do is use a stinkingly cheap model to explore what might be wrong.
```
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.initializers import Ones
from keras import optimizers
def getModel(lr=1e-2):
    model = Sequential()
    model.add(Conv2D(32, (3, 3), input_shape=(224, 224, 2)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(32, (3, 3)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(64, (3, 3)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors
    model.add(Dense(64))
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(101))
    model.add(Activation('sigmoid'))  # NB: softmax is more usual with categorical_crossentropy
    sgd = optimizers.SGD(lr=lr)  # pass the optimizer instance, otherwise lr is ignored
    model.compile(loss='categorical_crossentropy',
                  optimizer=sgd,
                  metrics=['accuracy'])
    return model
model_fast_lr = getModel(lr=1e-2)
from optical_flow_data_gen import DataGenerator
from ucf101_data_utils import get_test_data_opt_flow, get_train_data_opt_flow
training_options = { 'rescale' : 1./255,
'shear_range' : 0.2,
'zoom_range' : 0.2,
'horizontal_flip' : True,
'rotation_range':20,
'width_shift_range':0.2,
'height_shift_range':0.2}
validation_options = { 'rescale' : 1./255 }
params_train = { 'data_dir' : "/data/tvl1_flow",
'dim': (224,224),
'batch_size': 128,
'n_frames': 1,
'n_frequency': 1,
'shuffle': True,
'n_classes' : 101,
'validation' : False}
params_valid = { 'data_dir' : "/data/tvl1_flow",
'dim': (224,224),
'batch_size':128,
'n_frames': 1,
'n_frequency': 1,
'shuffle': True,
'n_classes' : 101,
'validation' : True}
id_labels_train = get_train_data_opt_flow('../../data/ucf101_splits/trainlist01.txt')
labels = id_labels_train[1]
id_test = get_test_data_opt_flow('../../data/ucf101_splits/testlist01.txt', \
'../../data/ucf101_splits/classInd.txt')
training_generator = DataGenerator(*id_labels_train, **params_train)
validation_generator = DataGenerator(id_test[0], id_test[1], **params_valid)
mod1 = model_fast_lr.fit_generator(generator=training_generator, steps_per_epoch=64,
validation_data=validation_generator, validation_steps=64,
use_multiprocessing=True,
workers=2, epochs=5,
verbose=1)
plt.plot(mod1.history['acc'])
plt.plot(mod1.history['val_acc'])
mod2 = model_fast_lr.fit_generator(generator=training_generator, steps_per_epoch=64,
validation_data=validation_generator, validation_steps=32,
use_multiprocessing=True,
workers=2, epochs=20,
verbose=1)
plt.plot(mod2.history['acc'])
plt.plot(mod2.history['val_acc'])
from motion_network import getSimonyanOxfordModel
from keras.optimizers import SGD
simonyan_model=getSimonyanOxfordModel((224,224,2), 101, printmod=0)
mypotim = SGD(lr=0.5e-2, momentum=0.9)
simonyan_model.compile(loss='categorical_crossentropy',
optimizer=mypotim,
metrics=['accuracy'])
from optical_flow_data_gen import DataGenerator
from ucf101_data_utils import get_test_data_opt_flow, get_train_data_opt_flow
params_train = { 'data_dir' : "/data/tvl1_flow",
'dim': (224,224),
'batch_size': 256,
'n_frames': 1,
'n_frequency': 1,
'shuffle': True,
'n_classes' : 101,
'validation' : False}
params_valid = { 'data_dir' : "/data/tvl1_flow",
'dim': (224,224),
'batch_size':256,
'n_frames': 1,
'n_frequency': 1,
'shuffle': True,
'n_classes' : 101,
'validation' : True}
id_labels_train = get_train_data_opt_flow('../../data/ucf101_splits/trainlist01.txt')
labels = id_labels_train[1]
id_test = get_test_data_opt_flow('../../data/ucf101_splits/testlist01.txt', \
'../../data/ucf101_splits/classInd.txt')
training_generator = DataGenerator(*id_labels_train, **params_train)
validation_generator = DataGenerator(id_test[0], id_test[1], **params_valid)
mod2 = simonyan_model.fit_generator(generator=training_generator,
validation_data=validation_generator,
use_multiprocessing=True,
workers=2, epochs=5,
verbose=1)
mod2 = simonyan_model.fit_generator(generator=training_generator,
validation_data=validation_generator,
use_multiprocessing=True,
workers=2, epochs=20,
verbose=1)
from optical_flow_data_gen import DataGenerator
from ucf101_data_utils import get_test_data_opt_flow, get_train_data_opt_flow
params_train = { 'data_dir' : "/data/tvl1_flow",
'dim': (224,224),
'batch_size': 32,
'n_frames': 1,
'n_frequency': 1,
'shuffle': True,
'n_classes' : 101,
'validation' : False}
params_valid = { 'data_dir' : "/data/tvl1_flow",
'dim': (224,224),
'batch_size':32,
'n_frames': 1,
'n_frequency': 1,
'shuffle': True,
'n_classes' : 101,
'validation' : True}
id_labels_train = get_train_data_opt_flow('../../data/ucf101_splits/trainlist01.txt')
labels = id_labels_train[1]
id_test = get_test_data_opt_flow('../../data/ucf101_splits/testlist01.txt', \
'../../data/ucf101_splits/classInd.txt')
training_generator = DataGenerator(*id_labels_train, **params_train)
validation_generator = DataGenerator(id_test[0], id_test[1], **params_valid)
simonyan_model2=getSimonyanOxfordModel((224,224,2), 101, printmod=0)
mypotim = SGD(lr=1e-3, momentum=0.9)
simonyan_model2.compile(loss='categorical_crossentropy',
optimizer=mypotim,
metrics=['accuracy'])
mod2 = simonyan_model2.fit_generator(generator=training_generator,
validation_data=validation_generator,
use_multiprocessing=True,
workers=2, epochs=20,
verbose=1)
from keras import regularizers
def getSimonyanOxfordModelNoBN(input_shape, n_classes, printmod=1, dropout=1):
    model = Sequential()
    weight_decay = 1e-4
    model.add(Conv2D(96, (7, 7), strides=2, padding='same', kernel_regularizer=regularizers.l2(weight_decay), input_shape=input_shape))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(256, (5, 5), strides=2, padding='same'))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(512, (3, 3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
    model.add(Activation('relu'))
    model.add(Conv2D(512, (3, 3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
    model.add(Activation('relu'))
    model.add(Conv2D(512, (3, 3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
    model.add(Activation('relu'))
    model.add(Conv2D(64, (3, 3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(4096))
    model.add(Activation('relu'))
    model.add(Dropout(0.9))
    model.add(Dense(2048))
    model.add(Activation('relu'))
    model.add(Dropout(0.9))
    model.add(Dense(n_classes, activation='softmax'))
    if printmod == 1:
        model.summary()
    return model
simonyan_model_no_bn=getSimonyanOxfordModelNoBN((224,224,2), 101, printmod=0)
mypotim = SGD(lr=1e-2, momentum=0.9)
simonyan_model_no_bn.compile(loss='categorical_crossentropy',
optimizer=mypotim,
metrics=['accuracy'])
mod2 = simonyan_model_no_bn.fit_generator(generator=training_generator,
validation_data=validation_generator,
use_multiprocessing=True,
workers=2, epochs=20,
verbose=1)
```
---
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn as sk
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
# Dataset taken from https://www.kaggle.com/aashi20/top-50-spotify-songs
# Top 50 Spotify songs of 2019.
data = pd.read_csv("datasets/top50.csv")
print(data)
data.head(15)
NewData = pd.DataFrame()
Dta = pd.get_dummies(data['Genre'])
NewData = pd.concat([NewData, Dta])
NewData.head()
dat = data['Beats.Per.Minute']
plt.hist(np.log(dat), bins=50)
plt.show()
dat = np.array(np.log(dat)).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Beats.Per.Minute'] = dat
NewData.head(15)
dat = data['Energy']
plt.hist(np.log(dat), bins=50)
plt.show()
dat = np.array(np.log(dat)).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Energy'] = dat
NewData.head(15)
dat = data['Danceability']
dat = np.clip(dat, 55, 100)
plt.hist(dat**0.5, bins=50)
plt.show()
dat = np.array(dat**0.5).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Danceability'] = dat
NewData.head(15)
dat = data['Loudness..dB..']
dat = np.clip(dat, -10, 0)
dat = np.array(dat).reshape(-1, 1)
dat= MinMaxScaler().fit_transform(dat).flatten()
NewData['Loudness..dB..'] = dat
NewData.head(15)
dat = data['Liveness']
dat = np.clip(dat, 0, 25)
plt.hist(np.log(dat), bins=50)
plt.show()
dat = np.array(np.log(dat)).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Liveness'] = dat
NewData.head(15)
dat = data['Valence.']
plt.hist(dat**0.5, bins=50)
plt.show()
dat = np.array(dat**0.5).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Valence.'] = dat
NewData.head(15)
dat = data['Length.']
dat = np.clip(dat, 140, 500)
plt.hist(dat**0.5, bins=50)
plt.show()
dat = np.array(dat**0.5).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Length.'] = dat
NewData.head(15)
dat = data['Acousticness..']
plt.hist(dat**0.5, bins=50)
plt.show()
dat = np.array(dat**0.5).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Acousticness..'] = dat
NewData.head(15)
dat = data['Speechiness.']
dat = np.clip(dat, 0, 40)
plt.hist(dat**0.5, bins=50)
plt.show()
dat = np.array(dat**0.5).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Speechiness.'] = dat
NewData.head(15)
dat = data['Popularity']
dat = np.clip(dat, 77, 100)
plt.hist(dat**0.5, bins=50)
plt.show()
dat = np.array(dat**0.5).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Popularity'] = dat
NewData.head(15)
```
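Every cell above repeats the same clip → transform → min-max pattern with different constants. A numpy-only helper (hypothetical; it mirrors rather than replaces the `MinMaxScaler` calls above) makes the intent clearer:

```python
import numpy as np

def scale_feature(values, transform=None, clip=None):
    """Optionally clip, optionally transform (log, sqrt, ...),
    then min-max scale the values to [0, 1].

    `transform` and `clip` are hypothetical knobs mirroring the
    repeated per-column pattern in the cells above.
    """
    v = np.asarray(values, dtype=float)
    if clip is not None:
        v = np.clip(v, *clip)
    if transform is not None:
        v = transform(v)
    lo, hi = v.min(), v.max()
    return (v - lo) / (hi - lo)

scaled = scale_feature([1.0, 2.0, 3.0])
```

With it, a cell like the Energy one reduces to something like `NewData['Energy'] = scale_feature(data['Energy'], transform=np.log)`.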
---
```
import numpy as np
import pandas as pd
from numpy import array
from numpy import cumsum
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import TimeDistributed
from keras.layers import Bidirectional
from keras.layers import Embedding
# word embedding
from gensim.models import Word2Vec
import multiprocessing
from keras.optimizers import Adam
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
```
https://machinelearningmastery.com/develop-bidirectional-lstm-sequence-classification-python-keras/#:~:text=Bidirectional%20LSTMs%20are%20an%20extension,LSTMs%20on%20the%20input%20sequence.
```
X_train = pd.read_pickle('../X_train.pickle')
X_test = pd.read_pickle('../X_test.pickle')
X_train['tokenized_text'] = X_train['tokenized_text'].apply(lambda x: ' '.join(x))
X_test['tokenized_text'] = X_test['tokenized_text'].apply(lambda x: ' '.join(x))
max_features = 100
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(X_train['tokenized_text']))
list_tokenized_train = tokenizer.texts_to_sequences(X_train['tokenized_text'])
list_tokenized_test = tokenizer.texts_to_sequences(X_test['tokenized_text'])
pad_train = pad_sequences(list_tokenized_train, maxlen=300, padding='post')
pad_test = pad_sequences(list_tokenized_test, maxlen=300, padding='post')
vocab_size = len(tokenizer.word_index)+1
# load model
cbow = Word2Vec.load('../CBOW300.bin')
print(cbow)
word_vec = cbow.wv
# create a weight matrix for the Embedding layer from a loaded embedding
def get_weight_matrix(embedding, vocab):
    # total vocabulary size plus 1: row 0 is reserved for unknown words
    vocab_size = len(vocab) + 1
    # define weight matrix dimensions with all 0
    weight_matrix = np.zeros((vocab_size, 300))
    # step through the vocab, storing vectors using the Tokenizer's integer mapping
    for word, i in vocab.items():
        try:
            weight_matrix[i] = embedding[word]
        except KeyError:
            pass  # out-of-vocabulary words keep the zero vector
    return weight_matrix
# get vectors in the right order
embedding_vectors = get_weight_matrix(word_vec, tokenizer.word_index)
embedding = Embedding(vocab_size,300,weights = [embedding_vectors],input_length=300,trainable = False)
# define problem properties
n_timesteps = 300
# define LSTM
model = Sequential()
model.add(embedding)
model.add(Bidirectional(LSTM(20, return_sequences=True), input_shape=(n_timesteps, 1)))
model.add(TimeDistributed(Dense(1, activation='sigmoid')))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# train LSTM
# fit model for one epoch on this sequence
model.fit(pad_train, X_train['target'], epochs=1, batch_size=1, verbose=2, validation_split=0.2)
# evaluate LSTM (predict on the padded test sequences, not the raw DataFrame)
yhat = model.predict_classes(pad_test, verbose=0)
for i in range(n_timesteps):
    print('Expected:', X_test['target'].iloc[0], 'Predicted:', yhat[0, i])
```
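The `pad_sequences(..., maxlen=300, padding='post')` call above fixes every tweet sequence to the same length. A minimal pure-Python re-implementation of that behavior (illustration only, not a Keras replacement; like Keras, sequences longer than `maxlen` keep their last `maxlen` tokens, since the default `truncating` is `'pre'`):

```python
def pad_post(seqs, maxlen, value=0):
    """Right-pad integer sequences to a fixed length.

    Mimics keras pad_sequences(padding='post') with the default
    truncating='pre' (over-long sequences keep their tail).
    """
    out = []
    for s in seqs:
        s = list(s)[-maxlen:]
        out.append(s + [value] * (maxlen - len(s)))
    return out

padded = pad_post([[1, 2], [1, 2, 3, 4]], maxlen=3)
```

Post-padding matters here because the `TimeDistributed` head emits one prediction per timestep, including the padded tail.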
---
```
import pandas as pd
import numpy as np
#for data visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
#for evaluation
from sklearn.metrics import mean_absolute_error, r2_score, classification_report,confusion_matrix , accuracy_score, f1_score
import time
import warnings
warnings.filterwarnings('ignore')
df = pd.read_excel('/content/data_epilepsy.xlsx',sheet_name='SZONF')
df.head()
target = pd.read_excel('/content/data_epilepsy.xlsx',sheet_name='targetS-ZONF')
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(df)
scaled_data = scaler.transform(df)
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
pca.fit(scaled_data)
x_pca = pca.transform(scaled_data)
scaled_data.shape
x_pca.shape
plt.figure(figsize=(8,6))
plt.scatter(x_pca[:,0],x_pca[:,1],c=df['Unnamed: 1'],cmap='plasma')
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
x=df.iloc[:,0:]
y=target
from sklearn.model_selection import train_test_split
x_Train,x_Test,y_Train,y_Test =train_test_split(x,y,train_size =.8)
```
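The PCA cell above keeps 3 components without checking how much variance they actually capture; scikit-learn reports this via `pca.explained_variance_ratio_`. For intuition, the same quantity can be computed from the covariance eigenvalues directly (numpy-only sketch):

```python
import numpy as np

def explained_variance_ratio(X, k):
    """Fraction of total variance captured by the top-k principal
    components, from the eigenvalues of the covariance matrix."""
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending
    return eigvals[:k].sum() / eigvals.sum()

# data that is almost one-dimensional: one component explains nearly everything
X = np.array([[0.0, 0.0], [1.0, 0.01], [2.0, 0.02], [3.0, 0.03]])
ratio = explained_variance_ratio(X, 1)
```

If the top-3 ratio here were low, the 2-D scatter plot above would be a poor summary of the EEG feature space.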
# DecisionTreeClassifier
```
from sklearn.tree import DecisionTreeClassifier
model=DecisionTreeClassifier()
import numpy as np
from sklearn.utils.multiclass import is_multilabel
model.fit(x_Train,y_Train)
# Necessary imports
from scipy.stats import randint
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RandomizedSearchCV
# Creating the hyperparameter grid
param_dist = {"max_depth": [3, None],
"max_features": randint(1, 9),
"min_samples_leaf": randint(1, 9),
"criterion": ["gini", "entropy"]}
# Instantiating Decision Tree classifier
tree = DecisionTreeClassifier()
# Instantiating RandomizedSearchCV object
tree_cv = RandomizedSearchCV(tree, param_dist, cv = 5)
tree_cv.fit(x_Train, y_Train)
# Print the tuned parameters and score
print("Tuned Decision Tree Parameters: {}".format(tree_cv.best_params_))
print("Best score is {}".format(tree_cv.best_score_))
y_pred = tree_cv.predict(x_Test)
y_pred
from sklearn.metrics import accuracy_score
print(accuracy_score(y_Test,y_pred))
dtc_acc = accuracy_score(y_Test,y_pred)
print(dtc_acc)
results = pd.DataFrame()
results
tempResults = pd.DataFrame({'Algorithm':['Decision tree Classifier Method'], 'Accuracy':[dtc_acc]})
results = pd.concat( [results, tempResults] )
results = results[['Algorithm','Accuracy']]
results
```
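The `tempResults` / `pd.concat` bookkeeping above is repeated verbatim after every classifier in this notebook; a small hypothetical helper would keep it in one place:

```python
import pandas as pd

def record_result(results, algorithm, accuracy):
    """Append one (algorithm, accuracy) row to the running results table."""
    row = pd.DataFrame({'Algorithm': [algorithm], 'Accuracy': [accuracy]})
    return pd.concat([results, row], ignore_index=True)

results_demo = record_result(pd.DataFrame(), 'Decision tree Classifier Method', 0.91)
```

Each section below could then be reduced to a single `results = record_result(results, name, acc)` call.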
## Logistic Regression
```
# Necessary imports
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
# Creating the hyperparameter grid
c_space = np.logspace(-5, 9, 13)
param_grid = {'C': c_space}
# Instantiating logistic regression classifier
logreg = LogisticRegression()
# Instantiating the GridSearchCV object
logreg_cv = GridSearchCV(logreg, param_grid, cv = 6)
logreg_cv.fit(x_Train, y_Train)
# Print the tuned parameters and score
print("Tuned Logistic Regression Parameters: {}".format(logreg_cv.best_params_))
print("Best score is {}".format(logreg_cv.best_score_))
y_pred = logreg_cv.predict(x_Test)
y_pred
y_Test
from sklearn.metrics import accuracy_score
print(accuracy_score(y_Test, y_pred))
lr_acc = accuracy_score(y_Test, y_pred)
print(lr_acc)
tempResults = pd.DataFrame({'Algorithm':['Logistic Regression Method'], 'Accuracy':[lr_acc]})
results = pd.concat( [results, tempResults] )
results = results[['Algorithm','Accuracy']]
results
```
# SVM-Linear
```
from sklearn import svm
#Create a svm Classifier
clf = svm.SVC(kernel='linear') # Linear Kernel
#Train the model using the training sets
clf.fit(x_Train, y_Train)
#Predict the response for test dataset
y_pred = clf.predict(x_Test)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_Test, y_pred)
cm
from sklearn.metrics import accuracy_score
print(accuracy_score(y_Test,y_pred))
svm_lin_acc = accuracy_score(y_Test,y_pred)
print(svm_lin_acc)
tempResults = pd.DataFrame({'Algorithm':['SVM-Linear Karnel Classifier Method'], 'Accuracy':[svm_lin_acc]})
results = pd.concat( [results, tempResults] )
results = results[['Algorithm','Accuracy']]
results
```
# KNN
```
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
#making the instance
model = KNeighborsClassifier(n_jobs=-1)
#Hyper Parameters Set
params = {'n_neighbors':[9,10,11,12,13,14],
'leaf_size':[5,6,7,8,9],
'weights':['uniform', 'distance'],
'algorithm':['auto', 'ball_tree','kd_tree','brute'],
'n_jobs':[-1]}
#Making models with hyper parameters sets
model1 = GridSearchCV(model, param_grid=params, n_jobs=1)
#Learning
model1.fit(x_Train, y_Train)
#The best hyper parameters set
print("Best Hyper Parameters:\n",model1.best_params_)
# Predicting the Test set results
y_pred = model1.predict(x_Test)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_Test, y_pred)
cm
from sklearn.metrics import accuracy_score
print(accuracy_score(y_Test, y_pred))
knn_acc = accuracy_score(y_Test, y_pred)
print(knn_acc)
tempResults = pd.DataFrame({'Algorithm':['KNN Classifier Method'], 'Accuracy':[knn_acc]})
results = pd.concat( [results, tempResults] )
results = results[['Algorithm','Accuracy']]
results
```
## Random Forest
```
from sklearn.ensemble import RandomForestClassifier
# Fit on the actual train split; the original cell (copied from the sklearn
# docs example) trained on synthetic make_classification data, which would
# make the test accuracy below meaningless.
clf = RandomForestClassifier(max_depth=2, random_state=0)
clf.fit(x_Train, y_Train)
print(clf.feature_importances_)
y_pred = clf.predict(x_Test)
y_pred
accuracy_score(y_Test, y_pred)
rfc_acc = accuracy_score(y_Test,y_pred)
print(rfc_acc)
tempResults = pd.DataFrame({'Algorithm':['Random Forest Classifier Method'], 'Accuracy':[rfc_acc]})
results = pd.concat( [results, tempResults] )
results = results[['Algorithm','Accuracy']]
results
```
# XGBoost
```
import numpy as np
from sklearn.datasets import load_svmlight_files
from sklearn.metrics import accuracy_score
from xgboost.sklearn import XGBClassifier
model =XGBClassifier()
eval_set = [(x_Train, y_Train)]  # note: early stopping here monitors the training set; a held-out validation set would be better
model.fit(x_Train, y_Train, early_stopping_rounds=10, eval_metric='logloss', eval_set=eval_set, verbose=True)
#make predictions for test data
predictions = model.predict(x_Test)
y_xgb_pred =model.predict(x_Test)
print(y_Test)
print(y_xgb_pred)
cm = confusion_matrix(y_xgb_pred,y_Test)
print(cm)
# evaluate predictions
accuracy = accuracy_score(y_Test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
xgb_acc = accuracy_score(y_Test, y_xgb_pred)  # use the XGBoost predictions, not the stale y_pred from earlier cells
print(xgb_acc)
# note: do not re-initialize `results` here; that would discard the rows accumulated above
tempResults = pd.DataFrame({'Algorithm':['XGBoost Classifier Method'], 'Accuracy':[xgb_acc]})
results = pd.concat( [results, tempResults] )
results = results[['Algorithm','Accuracy']]
results
```
## Perform K-Means Clustering
```
from sklearn.cluster import KMeans
from scipy.stats import zscore
# Scale the Dataset
df_scaled = df.apply(zscore)
# Find the optimal number of clusters with an elbow plot;
# pair-panel inspection suggests 3-4 clusters, but we scan k = 1..9 to be safe
cluster_range = range(1, 10)
cluster_errors = []
for num_clusters in cluster_range:
    clusters = KMeans(num_clusters, n_init=15, random_state=2)
    clusters.fit(df_scaled)
    # capture the cluster labels
    labels = clusters.labels_
    # capture the centroids
    centroids = clusters.cluster_centers_
    # capture the inertia
    cluster_errors.append(clusters.inertia_)
# combine the cluster_range and cluster_errors into a dataframe by combining them
clusters_df = pd.DataFrame( { "num_clusters":cluster_range, "cluster_errors": cluster_errors } )
clusters_df[0:10]
# Number of clusters
kmeans = KMeans(n_clusters=3, n_init = 15, random_state=2)
# Fitting the input data
kmeans.fit(df_scaled)
#Centroids
centroids=kmeans.cluster_centers_
centroid_df = pd.DataFrame(centroids, columns = list(df_scaled) )
centroid_df
# Elbow plot
plt.figure(figsize=(12,6))
plt.plot( clusters_df.num_clusters, clusters_df.cluster_errors, marker = "o" )
plt.show()
```
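The elbow plot above is read by eye; a crude automatic pick is the k with the largest second difference of the inertia curve (the sharpest "kink"). This is a heuristic sketch only, not a substitute for inspecting the plot:

```python
def elbow_k(ks, inertias):
    """Pick the k where the improvement in inertia drops off most
    sharply (maximum second difference). A rough heuristic."""
    drops = [inertias[i] - inertias[i + 1] for i in range(len(inertias) - 1)]
    kinks = [drops[i] - drops[i + 1] for i in range(len(drops) - 1)]
    return ks[kinks.index(max(kinks)) + 1]

# synthetic inertia curve with an obvious elbow at k = 2
k_best = elbow_k([1, 2, 3, 4, 5], [100.0, 40.0, 35.0, 32.0, 30.0])
```

Applied to `clusters_df` it would be `elbow_k(list(clusters_df.num_clusters), list(clusters_df.cluster_errors))`.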
# Feature importance using XGBoost
```
from xgboost import plot_importance
from matplotlib import pyplot
# plot feature importance
plot_importance(model)
pyplot.show()
```
---
```
import numpy as np
import ipyvolume as ipv
import symfit as sf
import ectopylasm as ep
xyz = np.array((np.random.random(1000), np.random.normal(0, 0.01, 1000), np.random.random(1000)))
ipv.clear()
ipv.scatter(*xyz, marker='circle_2d')
ipv.show()
a, b, c, x0, y0, z0 = sf.parameters('a, b, c, x0, y0, z0')
x, y, z = sf.variables('x, y, z')
plane_model = {x: (x0 * a + y0 * b + z0 * c - y * b - z * c) / a}
plane_fit = sf.Fit(plane_model, x=xyz[0], y=xyz[1], z=xyz[2])
plane_fit_result = plane_fit.execute()
print(plane_fit_result)
ipv.clear()
ipv.scatter(*xyz, marker='circle_2d')
p_fit = plane_fit_result.params
ep.plot_plane((p_fit['x0'], p_fit['y0'], p_fit['z0']), (p_fit['a'], p_fit['b'], p_fit['c']), (0, 1), (0, 1))
ipv.show()
```
That's not really a great fit. y0 should be about 0, certainly not 0.75. Also the stds seem weird and chi_squared is high.
Let's try again with initial values for x0, y0 and z0. We can set it to any of our random points.
```
initial_guess = xyz.T[0]
a, b, c, x0, y0, z0 = sf.parameters('a, b, c, x0, y0, z0')
x0.value = initial_guess[0]
y0.value = initial_guess[1]
z0.value = initial_guess[2]
x, y, z = sf.variables('x, y, z')
plane_model = {x: (x0 * a + y0 * b + z0 * c - y * b - z * c) / a}
plane_fit = sf.Fit(plane_model, x=xyz[0], y=xyz[1], z=xyz[2])
plane_fit_result = plane_fit.execute()
print(plane_fit_result)
```
Hmm, weird.
```
ipv.clear()
ipv.scatter(*xyz, marker='circle_2d')
p_fit = plane_fit_result.params
ep.plot_plane((p_fit['x0'], p_fit['y0'], p_fit['z0']), (p_fit['a'], p_fit['b'], p_fit['c']), (0, 1), (0, 1))
ipv.show()
```
Hmm, ok, it's actually not totally off, at least it goes through the actual plane of points. The angle is just pretty much off.
Let's try including some limits, because x and z are also waaaaay way out there.
```
initial_guess = xyz.T[0]
a, b, c, x0, y0, z0 = sf.parameters('a, b, c, x0, y0, z0')
x0.value = initial_guess[0]
x0.min, x0.max = (0, 1)
y0.value = initial_guess[1]
z0.value = initial_guess[2]
z0.min, z0.max = (0, 1)
x, y, z = sf.variables('x, y, z')
plane_model = {x: (x0 * a + y0 * b + z0 * c - y * b - z * c) / a}
plane_fit = sf.Fit(plane_model, x=xyz[0], y=xyz[1], z=xyz[2])
plane_fit_result = plane_fit.execute()
print(plane_fit_result)
```
Again, pretty crappy.
```
ipv.clear()
ipv.scatter(*xyz, marker='circle_2d')
p_fit = plane_fit_result.params
ep.plot_plane((p_fit['x0'], p_fit['y0'], p_fit['z0']), (p_fit['a'], p_fit['b'], p_fit['c']), (0, 1), (0, 1))
ipv.show()
```
Let's try with initial values for a b c as well that together I think should be a pretty good fit already.
```
initial_guess = xyz.T[0]
a, b, c, x0, y0, z0 = sf.parameters('a, b, c, x0, y0, z0')
a.value = 0.0001
b.value = 1
c.value = 0.0001
x0.value = initial_guess[0]
x0.min, x0.max = (0, 1)
y0.value = initial_guess[1]
z0.value = initial_guess[2]
z0.min, z0.max = (0, 1)
x, y, z = sf.variables('x, y, z')
plane_model = {x: (x0 * a + y0 * b + z0 * c - y * b - z * c) / a}
plane_fit = sf.Fit(plane_model, x=xyz[0], y=xyz[1], z=xyz[2])
plane_fit_result = plane_fit.execute()
print(plane_fit_result)
ipv.clear()
ipv.scatter(*xyz, marker='circle_2d')
p_fit = plane_fit_result.params
ep.plot_plane((p_fit['x0'], p_fit['y0'], p_fit['z0']), (p_fit['a'], p_fit['b'], p_fit['c']), (0, 1), (0, 1))
ipv.show()
```
Crap!
Perhaps I should try to parameterize the plane differently... Is the division by `a` the problem, since it blows up when `a` is near zero?
```
initial_guess = xyz.T[0]
a, b, c, x0, y0, z0 = sf.parameters('a, b, c, x0, y0, z0')
a.value = 0
b.value = 1
c.value = 0
x0.value = initial_guess[0]
x0.min, x0.max = (0, 1)
y0.value = initial_guess[1]
z0.value = initial_guess[2]
z0.min, z0.max = (0, 1)
x, y, z = sf.variables('x, y, z')
plane_model = {y: (x0 * a + y0 * b + z0 * c - x * a - z * c) / b}
plane_fit = sf.Fit(plane_model, x=xyz[0], y=xyz[1], z=xyz[2])
plane_fit_result = plane_fit.execute()
print(plane_fit_result)
```
Ahhh, that was it! Coolio.
```
ipv.clear()
ipv.scatter(*xyz, marker='circle_2d')
p_fit = plane_fit_result.params
ep.plot_plane((p_fit['x0'], p_fit['y0'], p_fit['z0']), (p_fit['a'], p_fit['b'], p_fit['c']), (0, 1), (0, 1))
ipv.show()
```
Does this also work without the initial guesses and limits?
```
a, b, c, x0, y0, z0 = sf.parameters('a, b, c, x0, y0, z0')
x, y, z = sf.variables('x, y, z')
plane_model = {y: (x0 * a + y0 * b + z0 * c - x * a - z * c) / b}
plane_fit = sf.Fit(plane_model, x=xyz[0], y=xyz[1], z=xyz[2])
plane_fit_result = plane_fit.execute()
print(plane_fit_result)
ipv.clear()
ipv.scatter(*xyz, marker='circle_2d')
p_fit = plane_fit_result.params
ep.plot_plane((p_fit['x0'], p_fit['y0'], p_fit['z0']), (p_fit['a'], p_fit['b'], p_fit['c']), (0, 1), (0, 1))
ipv.show()
```
Indeed it does, although it takes 4 times as many iterations. Still, good to know both ways work.
Ok, but still, this business with using x vs y because of the division by zero is not ideal, because you don't know in advance which direction should be used.
Two possible solutions I can see:
1. Find a better parameterization within symfit
2. Code two parameterizations and when one fit fails to converge, try the other.
Let's try the first option first.
```
a, b, c, x0, y0, z0 = sf.parameters('a, b, c, x0, y0, z0')
x, y, z, lhs, rhs = sf.variables('x, y, z, lhs, rhs')
plane_model = {lhs: x * a + y * b + z * c,
rhs: x0 * a + y0 * b + z0 * c}
plane_fit = sf.Fit(plane_model, x=xyz[0], y=xyz[1], z=xyz[2], constraints=[sf.Equality(lhs, rhs)])
plane_fit_result = plane_fit.execute()
print(plane_fit_result)
a, b, c, x0, y0, z0, lhs, rhs = sf.parameters('a, b, c, x0, y0, z0, lhs, rhs')
x, y, z = sf.variables('x, y, z')
plane_model = {lhs: x * a + y * b + z * c,
rhs: x0 * a + y0 * b + z0 * c}
plane_fit = sf.Fit(plane_model, x=xyz[0], y=xyz[1], z=xyz[2], constraints=[sf.Equality(lhs, rhs)])
plane_fit_result = plane_fit.execute()
print(plane_fit_result)
a, b, c, x0, y0, z0 = sf.parameters('a, b, c, x0, y0, z0')
x, y, z = sf.variables('x, y, z')
plane_model = {x * a + y * b + z * c: x0 * a + y0 * b + z0 * c}
plane_fit = sf.Fit(plane_model, x=xyz[0], y=xyz[1], z=xyz[2])
plane_fit_result = plane_fit.execute()
print(plane_fit_result)
```
Martin Roelfs instead suggested the following approach (https://github.com/tBuLi/symfit/issues/254#issuecomment-503474091), except with `d` instead of `x0, y0, z0`:
```
a, b, c, x0, y0, z0 = sf.parameters('a, b, c, x0, y0, z0')
x, y, z, f = sf.variables('x, y, z, f')
plane_model = {f: x * a + y * b + z * c - (x0 * a + y0 * b + z0 * c)}
plane_fit = sf.Fit(plane_model, x=xyz[0], y=xyz[1], z=xyz[2], f=np.zeros_like(xyz[0]))
plane_fit_result = plane_fit.execute()
print(plane_fit_result)
ipv.clear()
ipv.scatter(*xyz, marker='circle_2d')
p_fit = plane_fit_result.params
ep.plot_plane((p_fit['x0'], p_fit['y0'], p_fit['z0']), (p_fit['a'], p_fit['b'], p_fit['c']), (0, 1), (0, 1))
ipv.show()
```
That doesn't work so well... For completeness sake, let's also try with `d` then.
```
a, b, c, d = sf.parameters('a, b, c, d')
x, y, z, f = sf.variables('x, y, z, f')
plane_model = {f: x * a + y * b + z * c - d}
plane_fit = sf.Fit(plane_model, x=xyz[0], y=xyz[1], z=xyz[2], f=np.zeros_like(xyz[0]))
plane_fit_result = plane_fit.execute()
print(plane_fit_result)
```
We'll have to modify `plot_plane` to directly take `d`... done.
```
ipv.clear()
ipv.scatter(*xyz, marker='circle_2d')
p_fit = plane_fit_result.params
ep.plot_plane(None, (p_fit['a'], p_fit['b'], p_fit['c']), (0, 1), (0, 1), d=p_fit['d'])
ipv.show()
```
Excellent, problem solved!
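For comparison: the same fit can also be done in closed form with no axis-dependent parameterization at all. The plane normal is the right singular vector of the centered point cloud with the smallest singular value, and the plane passes through the centroid. A numpy sketch (this is total least squares, so the objective is not identical to the symfit fits above):

```python
import numpy as np

def fit_plane_svd(points):
    """Total-least-squares plane fit to an (N, 3) point cloud.

    Returns (centroid, unit normal); the normal is the singular
    direction of minimum variance of the centered data.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# same shape of data as the notebook: a noisy plane around y = 0
rng = np.random.default_rng(0)
pts = np.column_stack((rng.random(1000), rng.normal(0, 0.01, 1000), rng.random(1000)))
centroid, normal = fit_plane_svd(pts)
```

This sidesteps both the division-by-`a` issue and the need for initial guesses, at the cost of losing symfit's parameter uncertainties.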
---
# SVM based Sentiment Analysis
Let's perform sentiment analysis with a Support Vector Machine model on the Twitter sentiments of US airline passengers.
```
import nltk
nltk.download('stopwords')
```
## Import Libraries
```
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer
from nltk.tokenize import TweetTokenizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline, Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, accuracy_score, f1_score
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import confusion_matrix, roc_auc_score, recall_score, precision_score
```
## Import data
```
data = pd.read_csv("Tweets_Airline.csv")
data.head()
```
## We take only the tweets whose sentiment label we are confident about. We use the BeautifulSoup library to process the HTML encoding present in some tweets.
```
data_clean = data.copy()
data_clean = data_clean[data_clean['airline_sentiment_confidence'] > 0.65]
data_clean['text_clean'] = data_clean['text'].apply(lambda x: BeautifulSoup(x, "lxml").text)
```
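BeautifulSoup's `.text` both strips tags and decodes HTML entities. When a tweet contains only entities (e.g. `&amp;`, `&lt;`) and no actual markup, the standard library alone would suffice:

```python
import html

# stdlib entity decoding; BeautifulSoup is still needed when real tags occur
decoded = html.unescape("I &lt;3 flying w/ @united &amp; friends")
```

Keeping the BeautifulSoup pass is the safer default here, since we don't know which tweets carry markup.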
## For simplicity we are going to distinguish two cases: tweets with negative sentiment and tweets with non-negative sentiment
```
data_clean['sentiment'] = data_clean['airline_sentiment'].apply(lambda s : 1 if s == 'negative' else 0) #Hint: Assign 1 to negative class and 0 to rest
data_clean = data_clean.loc[:, ['text_clean', 'sentiment']]
data_clean.head()
```
## We split the data into training and testing set:
```
train, test = train_test_split(data_clean, test_size=0.2, random_state=1)
X_train = train['text_clean'].values
X_test = test['text_clean'].values
y_train = train['sentiment']
y_test = test['sentiment']
```
## Preprocessing the Data
```
def tokenize(text):
    tknzr = TweetTokenizer()
    return tknzr.tokenize(text)

stemmer = SnowballStemmer("english")

def stem(doc):
    # unused in the pipeline below; assumes `analyzer` (e.g. vectorizer.build_analyzer()) is defined first
    return (stemmer.stem(w) for w in analyzer(doc))

en_stopwords = set(stopwords.words("english"))
vectorizer = CountVectorizer(
    analyzer='word',
    tokenizer=tokenize,
    lowercase=True,
    ngram_range=(1, 1),
    stop_words=en_stopwords)
```
## We are going to use cross validation and grid search to find good hyperparameters for our SVM model. We need to build a pipeline.
```
kfolds = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
np.random.seed(1)
pipeline_svm = make_pipeline(vectorizer, SVC(probability=True,
kernel= 'linear',
class_weight= 'balanced'))
#Hint : Linear kernel with balanced class weights
grid_svm = GridSearchCV(pipeline_svm,
param_grid = {'svc__C': [0.01, 0.1, 1]},
cv = kfolds,
scoring="roc_auc",
verbose=1,
n_jobs=-1)
grid_svm.fit(X_train, y_train)
grid_svm.score(X_test, y_test)
print(grid_svm.best_params_)
print(grid_svm.best_score_)
```
## Let's see how the model (with the best hyperparameters) works on the test data:
```
def report_results(model, X, y):
    pred_proba = model.predict_proba(X)[:, 1]
    pred = model.predict(X)
    auc = roc_auc_score(y, pred_proba)
    acc = accuracy_score(y, pred)
    f1 = f1_score(y, pred)
    prec = precision_score(y, pred)
    rec = recall_score(y, pred)
    result = {'auc': auc, 'f1': f1, 'acc': acc, 'precision': prec, 'recall': rec}
    return result
report_results(grid_svm.best_estimator_, X_test, y_test)
```
## ROC Curve
```
def get_roc_curve(model, X, y):
    pred_proba = model.predict_proba(X)[:, 1]
    fpr, tpr, _ = roc_curve(y, pred_proba)
    return fpr, tpr
fpr, tpr = get_roc_curve(grid_svm.best_estimator_, X_test, y_test)
plt.figure(figsize=(14,8))
plt.plot(fpr, tpr, color="red")
plt.plot([0, 1], [0, 1], color='black', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Roc curve')
plt.show()
```
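The `roc_auc_score` used above has a simple probabilistic meaning: the chance that a randomly chosen positive tweet receives a higher score than a randomly chosen negative one. A brute-force sketch of that definition (O(n²), for illustration only):

```python
def auc_from_scores(y_true, scores):
    """Rank-based AUC: P(score(pos) > score(neg)), ties count as 1/2."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc_demo = auc_from_scores([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

This is why AUC was a sensible grid-search metric for the imbalanced negative/non-negative split: it depends only on the ranking of scores, not on a threshold.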
## Prediction
```
pred = grid_svm.predict(["flying with @united is always a great experience."])
print('negative' if pred == np.array([1]) else 'not negative')
pred = grid_svm.predict(["flying with @united is always a great experience. If you don't lose your luggage"])
print('negative' if pred == np.array([1]) else 'not negative')
```
**The model distinguishes the sentiment of these two texts based on context.**
| github_jupyter |
```
import camelot
import pandas as pd
import requests
import zipfile
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen
from datetime import date
#Define date of today used format
d=date.today()
date_str=d.strftime("%d%m%Y")
#Define a manual date (e.g. if running on a date where no new data is uploaded)
#date_str="15012021"
# Find link to latest file from the web page
# assumes a stable initial part of the filename and webpage location
#req = Request("https://www.ssi.dk/sygdomme-beredskab-og-forskning/sygdomsovervaagning/c/covid19-overvaagning")
req = Request("https://covid19.ssi.dk/overvagningsdata/private-tests")
html_page = urlopen(req)
soup = BeautifulSoup(html_page, "html.parser")
for link in soup.findAll('a'):
ref=link.get('href')
if isinstance(ref, str) and "opgoerelse-pcr-og-antigentest" in ref.lower() and date_str in ref.lower(): # NB compare in lower case
print("url for zip file: " + link.get('href'))
url=link.get('href')
#date_str = url[50:58]
# Direct download data and unpacking
r = requests.get(url, allow_redirects=True)
open('data_privat.pdf', 'wb').write(r.content)
PU_file='data_privat.pdf'
# Manually define PDF file to extract tables from in case above download breaks
#PU_file = "Opgoerelse-pcr-og-antigentest-05012021-fsda.pdf"
PU_tables = camelot.read_pdf(PU_file, pages = '2', flavor='stream')
PU_data=PU_tables[0].df
PU_data
#remove wrong rows
PU_data=PU_data[5:-1]
PU_data
PU_data.columns = ['Date', 'Antal testudbydere', 'Antal PCR tests' , 'Antal positiv PCR tests','Positiv procent PCR' ,'Antal antigen tests','Antal positiv antigen tests','Positiv procent antigen']
PU_data['Date']=pd.to_datetime(PU_data['Date'], format='%d.%m.%Y')
# regex=False: '.' and other characters must be treated literally, not as regex patterns
PU_data['Antal PCR tests'] = PU_data['Antal PCR tests'].str.replace('.', '', regex=False).astype(float)
PU_data['Antal positiv PCR tests'] = PU_data['Antal positiv PCR tests'].str.replace('.', '', regex=False).astype(float)
PU_data['Positiv procent PCR'] = PU_data['Positiv procent PCR'].str.replace(',', '.', regex=False).str.replace('%', '', regex=False).astype(float)
PU_data['Antal antigen tests'] = PU_data['Antal antigen tests'].str.replace('.', '', regex=False).astype(float)
PU_data['Antal positiv antigen tests'] = PU_data['Antal positiv antigen tests'].str.replace('.', '', regex=False).astype(float)
PU_data['Positiv procent antigen'] = PU_data['Positiv procent antigen'].str.replace(',', '.', regex=False).str.replace('%', '', regex=False).astype(float)
PU_data=PU_data.set_index(['Date'])
PU_data
PU_data.to_pickle('data_private.dat')
PU_data.plot(y='Positiv procent antigen',style='.',color='green',label='From "Antigen Tests" (Quick tests)');
```
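A pitfall worth noting in the cleaning step above: `pandas.Series.str.replace` interprets its pattern as a regular expression by default, and in a regex `'.'` matches *any* character, so `str.replace('.', '')` would empty the strings entirely. Passing `regex=False` treats the Danish thousands separator literally. A minimal sketch:

```
import pandas as pd

# Danish-formatted numbers: '.' is a thousands separator
s = pd.Series(["1.234", "12.345"])

# regex=False treats '.' as a literal character; with a regex,
# '.' would match (and remove) every character in the string
cleaned = s.str.replace(".", "", regex=False).astype(float)
print(cleaned.tolist())  # [1234.0, 12345.0]
```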
| github_jupyter |
# Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
## Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
* A really good [conceptual overview](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) of word2vec from Chris McCormick
* [First word2vec paper](https://arxiv.org/pdf/1301.3781.pdf) from Mikolov et al.
* [NIPS paper](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) with improvements for word2vec also from Mikolov et al.
* An [implementation of word2vec](http://www.thushv.com/natural_language_processing/word2vec-part-1-nlp-with-deep-learning-with-tensorflow-skip-gram/) from Thushan Ganegedara
* TensorFlow [word2vec tutorial](https://www.tensorflow.org/tutorials/word2vec)
## Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.

To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.

Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an **embedding lookup** and the number of hidden units is the **embedding dimension**.
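The one-hot/lookup equivalence can be verified on a toy matrix (the numbers here are made up purely for illustration):

```
import numpy as np

# Toy embedding matrix: 5 words, 3 embedding dimensions
embedding = np.arange(15).reshape(5, 3)

# One-hot vector for word index 2
one_hot = np.zeros(5)
one_hot[2] = 1

# The matrix product equals plain row indexing -- the "embedding lookup"
assert np.array_equal(one_hot @ embedding, embedding[2])
print(embedding[2])  # [6 7 8]
```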
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called **Word2Vec** uses the embedding layer to find vector representations of words that contain semantic meaning.
## Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
```
import time
import numpy as np
import tensorflow as tf
import utils
```
Load the [text8 dataset](http://mattmahoney.net/dc/textdata.html), a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the `data` folder. Then you can extract it and delete the archive file to save storage space.
```
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
```
## Preprocessing
Here I'm fixing up the text to make training easier. This comes from the `utils` module I wrote. The `preprocess` function converts any punctuation into tokens, so a period is changed to ` <PERIOD> `. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
```
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
```
And here I'm creating dictionaries to convert words to integers and back again (integers to words). The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0, the next most frequent is 1, and so on. The words are converted to integers and stored in the list `int_words`.
```
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
```
## Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge than a deep learning one, but being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
> **Exercise:** Implement subsampling for the words in `int_words`. That is, go through `int_words` and discard each word with the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to `train_words`.
```
## Your code here
from collections import Counter
import random
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
```
## Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From [Mikolov et al.](https://arxiv.org/pdf/1301.3781.pdf):
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
> **Exercise:** Implement a function `get_target` that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
```
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
# random.randint includes both endpoints, so this samples R in [1, window_size]
R = random.randint(1, window_size)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
# idx is excluded
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
print(train_words[100])
print(get_target(train_words, 100, 10))
```
Here's a function that returns batches for our network. The idea is that it grabs `batch_size` words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
```
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
```
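The one-row-per-input-target-pair flattening can be sketched on a toy batch. Here every other word in the batch stands in for `get_target`, so the pairing is deterministic:

```
# Toy batch; each word is paired with every other word in the batch,
# a deterministic stand-in for the random get_target window
batch = ["a", "b", "c"]
x, y = [], []
for ii, word in enumerate(batch):
    targets = [t for j, t in enumerate(batch) if j != ii]
    y.extend(targets)
    x.extend([word] * len(targets))  # repeat the input once per target
print(x)  # ['a', 'a', 'b', 'b', 'c', 'c']
print(y)  # ['b', 'c', 'a', 'c', 'a', 'b']
```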
## Building the graph
From [Chris McCormick's blog](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), we can see the general structure of our network.

The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the `inputs` and `labels` placeholders like normal.
> **Exercise:** Assign `inputs` and `labels` using `tf.placeholder`. We're going to be passing in integers, so set the data types to `tf.int32`. The batches we're passing in will have varying sizes, so set the batch sizes to [`None`]. To make things work later, you'll need to set the second dimension of `labels` to `None` or `1`.
```
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
```
## Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
> **Exercise:** TensorFlow provides a convenient function [`tf.nn.embedding_lookup`](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup) that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use `tf.nn.embedding_lookup` to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using [tf.random_uniform](https://www.tensorflow.org/api_docs/python/tf/random_uniform).
```
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
```
## Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called ["negative sampling"](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). Tensorflow has a convenient function to do this, [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss).
> **Exercise:** Below, create weights and biases for the softmax layer. Then, use [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss) to calculate the loss. Be sure to read the documentation to figure out how it works.
```
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) # create softmax weight matrix here
softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b,
labels, embed,
n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
```
## Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
```
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
```
## Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
```
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
```
Restore the trained network if you need to:
```
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
```
## Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out [this post from Christopher Olah](http://colah.github.io/posts/2014-10-Visualizing-MNIST/) to learn more about T-SNE and other ways to visualize high-dimensional data.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
```
| github_jupyter |
# Changing K
To get started, let's read in our necessary libraries.
```
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
import helper2 as h
import tests as t
from IPython import display
%matplotlib inline
# Make the images larger
plt.rcParams['figure.figsize'] = (16, 9)
```
1. To get started, there is a function called `simulate_data` within the `helper2` module. Read the documentation on the function by running the cell below. Then use the function to simulate a dataset with 200 data points (rows), 5 features (columns), and 4 centers.
```
help(h.simulate_data)
data = h.simulate_data(200, 5, 4)
k_value = 4
# This will check that your dataset appears to match ours before moving forward
t.test_question_1(data)
k_value = 4
# Check your solution against ours.
t.test_question_2(k_value)
```
3. Let's try a few different values for k and fit them to our data using KMeans.
To use KMeans, you need to follow three steps:
I. Instantiate your model.
II. Fit your model to the data.
III. Predict the labels for the data.
```
#Try instantiating a model with 4 centers
kmeans_4 = KMeans(n_clusters=4)
# Then fit the model to your data using the fit method
model_4 = kmeans_4.fit(data)
# Finally predict the labels on the same data to show the category that point belongs to
labels_4 = model_4.predict(data)
# If you did all of that correctly, this should provide a plot of your data colored by center
h.plot_data(data, labels_4)
```
4. Now try again, but this time fit kmeans to your data using 2 clusters instead of 4.
```
# Try instantiating a model with 2 centers
kmeans_2 = KMeans(n_clusters=2)
# Then fit the model to your data using the fit method
model_2 = kmeans_2.fit(data)
# Finally predict the labels on the same data to show the category that point belongs to
labels_2 = model_2.predict(data)
# If you did all of that correctly, this should provide a plot of your data colored by center
h.plot_data(data, labels_2)
# Try instantiating a model with 7 centers
kmeans_7 = KMeans(n_clusters=7)
# Then fit the model to your data using the fit method
model_7 = kmeans_7.fit(data)
# Finally predict the labels on the same data to show the category that point belongs to
labels_7 = model_7.predict(data)
# If you did all of that correctly, this should provide a plot of your data colored by center
h.plot_data(data, labels_7)
```
Visually, we get some indication of how well our model is doing, but it isn't totally apparent. Each time additional centers are considered, the distances between the points and the center will decrease. However, at some point, that decrease is not substantial enough to suggest the need for an additional cluster.
Using a scree plot is a common method for deciding whether an additional cluster center is needed. The elbow method, which relies on reading a scree plot, is still fairly subjective, but let's take a look to see how many cluster centers might be indicated.
6. Once you have fit a kmeans model to some data in sklearn, there is a score method, which takes the data. This score is an indication of how far the points are from the centroids. By fitting models for centroids from 1-10, and keeping track of the score and the number of centroids, you should be able to build a scree plot.
This plot should have the number of centroids on the x-axis, and the absolute value of the score result on the y-axis. You can see the plot I retrieved by running the solution code. Try creating your own scree plot, as you will need it for the final questions.
```
# A place for your work - create a scree plot - you will need to
# Fit a kmeans model with changing k from 1-10
# Obtain the score for each model (take the absolute value)
# Plot the score against k
def get_kmeans_score(data, center):
'''
returns the kmeans score regarding SSE for points to centers
INPUT:
data - the dataset you want to fit kmeans to
center - the number of centers you want (the k value)
OUTPUT:
score - the SSE score for the kmeans model fit to the data
'''
#instantiate kmeans
kmeans = KMeans(n_clusters=center)
# Then fit the model to your data using the fit method
model = kmeans.fit(data)
# Obtain a score related to the model fit
score = np.abs(model.score(data))
return score
scores = []
centers = list(range(1,11))
for center in centers:
scores.append(get_kmeans_score(data, center))
plt.plot(centers, scores, linestyle='--', marker='o', color='b');
plt.xlabel('K');
plt.ylabel('SSE');
plt.title('SSE vs. K');
# Run our solution
centers, scores = h.fit_mods()
#Your plot should look similar to the below
plt.plot(centers, scores, linestyle='--', marker='o', color='b');
plt.xlabel('K');
plt.ylabel('SSE');
plt.title('SSE vs. K');
value_for_k = 4
# Test your solution against ours
display.HTML(t.test_question_7(value_for_k))
```
| github_jupyter |
# CH. 7 - TOPIC MODELS
## Activities
#### Activity 1
```
# not necessary
# added to suppress warnings coming from pyLDAvis
import warnings
warnings.filterwarnings('ignore')
import langdetect # language detection
import matplotlib.pyplot # plotting
import nltk # natural language processing
import numpy # arrays and matrices
import pandas # dataframes
import pyLDAvis # plotting
import pyLDAvis.sklearn # plotting
import regex # regular expressions
import sklearn # machine learning
# define path
path = '~/packt-data/topic-model-health-tweets/latimeshealth.txt'
# load data
df = pandas.read_csv(path, sep="|", header=None)
df.columns = ["id", "datetime", "tweettext"]
# define quick look function for data frame
def dataframe_quick_look(df, nrows):
print("SHAPE:\n{shape}\n".format(shape=df.shape))
print("COLUMN NAMES:\n{names}\n".format(names=df.columns))
print("HEAD:\n{head}\n".format(head=df.head(nrows)))
dataframe_quick_look(df, nrows=2)
# view final data that will be carried forward
raw = df['tweettext'].tolist()
print("HEADLINES:\n{lines}\n".format(lines=raw[:5]))
print("LENGTH:\n{length}\n".format(length=len(raw)))
# define function for checking language of tweets
# filter to english only
def do_language_identifying(txt):
try:
the_language = langdetect.detect(txt)
except:
the_language = 'none'
return the_language
# define function to perform lemmatization
def do_lemmatizing(wrd):
out = nltk.corpus.wordnet.morphy(wrd)
return (wrd if out is None else out)
# define function to clean tweet data
def do_tweet_cleaning(txt):
# identify language of tweet
# return null if language not english
lg = do_language_identifying(txt)
if lg != 'en':
return None
# split the string on whitespace
out = txt.split(' ')
# identify screen names
# replace with SCREENNAME
out = ['SCREENNAME' if i.startswith('@') else i for i in out]
# identify urls
# replace with URL
out = [
'URL' if bool(regex.search('http[s]?://', i))
else i for i in out
]
# remove all punctuation
out = [regex.sub('[^\\w\\s]|\n', '', i) for i in out]
# make all non-keywords lowercase
keys = ['SCREENNAME', 'URL']
out = [i.lower() if i not in keys else i for i in out]
# remove keywords
out = [i for i in out if i not in keys]
# remove stopwords
list_stop_words = nltk.corpus.stopwords.words('english')
list_stop_words = [regex.sub('[^\\w\\s]', '', i) for i in list_stop_words]
out = [i for i in out if i not in list_stop_words]
# lemmatizing
out = [do_lemmatizing(i) for i in out]
# keep only words 5 or more characters long
out = [i for i in out if len(i) >= 5]
return out
# apply cleaning function to every tweet
clean = list(map(do_tweet_cleaning, raw))
# remove none types
clean = list(filter(None.__ne__, clean))
print("HEADLINES:\n{lines}\n".format(lines=clean[:5]))
print("LENGTH:\n{length}\n".format(length=len(clean)))
# turn tokens back into strings
# concatenate using whitespaces
clean_sentences = [" ".join(i) for i in clean]
print(clean_sentences[0:10])
```
#### Activity 2
```
# define global variables
number_words = 10
number_docs = 10
number_features = 1000
# bag of words conversion
# count vectorizer (raw counts)
vectorizer1 = sklearn.feature_extraction.text.CountVectorizer(
analyzer="word",
max_df=0.95,
min_df=10,
max_features=number_features
)
clean_vec1 = vectorizer1.fit_transform(clean_sentences)
print(clean_vec1[0])
feature_names_vec1 = vectorizer1.get_feature_names()
# define function to calculate perplexity based on number of topics
def perplexity_by_ntopic(data, ntopics):
output_dict = {
"Number Of Topics": [],
"Perplexity Score": []
}
for t in ntopics:
lda = sklearn.decomposition.LatentDirichletAllocation(
n_components=t,
learning_method="online",
random_state=0
)
lda.fit(data)
output_dict["Number Of Topics"].append(t)
output_dict["Perplexity Score"].append(lda.perplexity(data))
output_df = pandas.DataFrame(output_dict)
index_min_perplexity = output_df["Perplexity Score"].idxmin()
output_num_topics = output_df.loc[
index_min_perplexity, # index
"Number Of Topics" # column
]
return (output_df, output_num_topics)
# execute function on vector of numbers of topics
# takes several minutes
df_perplexity, optimal_num_topics = perplexity_by_ntopic(
clean_vec1,
ntopics=[i for i in range(1, 21) if i % 2 == 0]
)
print(df_perplexity)
# define and fit lda model
lda = sklearn.decomposition.LatentDirichletAllocation(
n_components=optimal_num_topics,
learning_method="online",
random_state=0
)
lda.fit(clean_vec1)
# define function to format raw output into nice tables
def get_topics(mod, vec, names, docs, ndocs, nwords):
# word to topic matrix
W = mod.components_
W_norm = W / W.sum(axis=1)[:, numpy.newaxis]
# topic to document matrix
H = mod.transform(vec)
W_dict = {}
H_dict = {}
for tpc_idx, tpc_val in enumerate(W_norm):
topic = "Topic{}".format(tpc_idx)
# formatting w
W_indices = tpc_val.argsort()[::-1][:nwords]
W_names_values = [
(round(tpc_val[j], 4), names[j])
for j in W_indices
]
W_dict[topic] = W_names_values
# formatting h
H_indices = H[:, tpc_idx].argsort()[::-1][:ndocs]
H_names_values = [
(round(H[:, tpc_idx][j], 4), docs[j])
for j in H_indices
]
H_dict[topic] = H_names_values
W_df = pandas.DataFrame(
W_dict,
index=["Word" + str(i) for i in range(nwords)]
)
H_df = pandas.DataFrame(
H_dict,
index=["Doc" + str(i) for i in range(ndocs)]
)
return (W_df, H_df)
# get nice tables
W_df, H_df = get_topics(
mod=lda,
vec=clean_vec1,
names=feature_names_vec1,
docs=raw,
ndocs=number_docs,
nwords=number_words
)
# word-topic table
print(W_df)
# document-topic table
print(H_df)
# interactive plot
# pca biplot and histogram
lda_plot = pyLDAvis.sklearn.prepare(lda, clean_vec1, vectorizer1, R=10)
pyLDAvis.display(lda_plot)
```
#### Activity 3
```
# bag of words conversion
# tf-idf method
vectorizer2 = sklearn.feature_extraction.text.TfidfVectorizer(
analyzer="word",
max_df=0.5,
min_df=20,
max_features=number_features,
smooth_idf=False
)
clean_vec2 = vectorizer2.fit_transform(clean_sentences)
print(clean_vec2[0])
feature_names_vec2 = vectorizer2.get_feature_names()
# define and fit nmf model
nmf = sklearn.decomposition.NMF(
n_components=optimal_num_topics,
init="nndsvda",
solver="mu",
beta_loss="frobenius",
random_state=0,
alpha=0.1,
l1_ratio=0.5
)
nmf.fit(clean_vec2)
# get nicely formatted result tables
W_df, H_df = get_topics(
mod=nmf,
vec=clean_vec2,
names=feature_names_vec2,
docs=raw,
ndocs=number_docs,
nwords=number_words
)
# word-topic table
print(W_df)
# document-topic table
print(H_df)
```
| github_jupyter |
# Intro to GIS with Python
## What is GIS?
GIS stands for _geographic information system_. Colloquially, it's the process of presenting and analyzing data on maps. GIS allows us to visualize and characterize the nature of spatially distributed data, including weather, infrastructure, and populations. As you can imagine, this is key for disaster response scenarios for both diagnosing the situation, as well as planning and monitoring the response.
There are dozens of different GIS software options, both free and commercial. In this course, we will focus on free, python-based tools and packages. The principles taught in this course should carry over to most common GIS implementations.
In particular, we will be using:
- GDAL
- geopandas
This content is based on the [Automating GIS Processes course](https://automating-gis-processes.github.io/2018/) from the University of Helsinki.
```
import geopandas as gpd
import contextily as ctx # for basemaps
from shapely.geometry import Point, LineString, Polygon
from matplotlib import pyplot as plt
```
## Reading in GIS data
For this lesson we are using data in Shapefile format representing distributions of specific beautifully colored fish species called Damselfish and the country borders of Europe.
We're going to use the `wget` terminal command to download a file from a url.
We then use `unzip` to unzip the archive into a folder of the same name. The `-o` option is used to overwrite the folder if it already exists
We then use `ls` to see the contents of the folder.
```
!wget https://github.com/Automating-GIS-processes/FEC/raw/master/data/DAMSELFISH.zip -O fish_data.zip
!unzip -o fish_data.zip -d fish_data
!ls fish_data
```
Typically, reading the data into Python is the first step of the analysis pipeline. GIS data comes in various formats; [Shapefile](https://en.wikipedia.org/wiki/Shapefile), [GeoJSON](https://en.wikipedia.org/wiki/GeoJSON), [KML](https://en.wikipedia.org/wiki/Keyhole_Markup_Language), and [GPKG](https://en.wikipedia.org/wiki/GeoPackage) are probably the most common vector data formats. Geopandas is capable of reading data from all of these formats (plus many more). Reading spatial data can be done easily with the `gpd.read_file()` function:
```
# path to shapefile
filepath = "fish_data/DAMSELFISH_distributions.shp"
# Read file using gpd.read_file()
data = gpd.read_file(filepath)
data.head() #look at top entries - looks like a pandas dataframe
data.columns
# Note the column 'geometry' is full of shapely Polygon objects
type(data['geometry'].iloc[0])
```
Note that the data are in (lon, lat) ordering --- this is because the convention is (x, y) for computers, but (lat, lon) for coordinates. This is a frequent cause of error.
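Since this ordering trips people up so often, here is a minimal illustration of the (x, y) = (lon, lat) convention that shapely and geopandas use. The coordinates below (roughly Helsinki) are chosen purely as an example:

```python
from shapely.geometry import Point

lat, lon = 60.17, 24.94  # written (lat, lon), the way you'd read it off a map
pt = Point(lon, lat)     # shapely expects (x, y), i.e. (lon, lat)
print(pt.x, pt.y)        # x is the longitude, y is the latitude
```

Passing the pair in (lat, lon) order silently produces a point in the wrong place, which is why this is such a common source of error.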
```
data['geometry']
# geopandas adds useful attributes to the geodataframe, such as the ability to get bounds
# of all the geometry data
data.bounds
# similary, we can get attributes such as boundary
data.boundary
```
## Coordinate reference systems
There are many different coordinate reference systems (CRS), which refer to different ways of indicating where on the earth you are referring to when you give a coordinate. Different CRS use different models of the earth's surface, map projections, units, and origin points (where 0,0 is). The discussion of the specifics is beyond the scope of this course.
For the purposes of this course, we will primarily use the two following:
### WGS 84: https://epsg.io/4326
```
The CRS used by the GPS system
units: degrees
0,0 is the intersection of the Greenwich meridian and the equator
epsg code: 4326
```
### Web Mercator: https://epsg.io/3857
```
The CRS used by most web maps, such as Google maps, OSM, Bing, etc.
Not accurate at high latitudes >85 degrees, <-85 degrees
units: meters
0,0 is the intersection of the Greenwich meridian and the equator
epsg code: 3857
```
```
# area will warn you if you're trying to do area calculations in geographic CRS
data.area
data_in_3857 = data.to_crs('epsg:3857')
data_in_3857.area
# we can check which species can be found between latitudes 10 and 20 degrees north
data.intersects(Polygon([(-180,10),(180,10),(180,20),(-180,20)]))
```
## Exercises
Using the polygon objects in the `geometry` column of the data frame:
- create a new column called `area` which represent the areas of each row in the shapefile
- What are the max, min, median, and quartiles values of the areas?
- What fraction of the areas are greater than 25 square degrees?
- What species has the largest total area?
```
```
## Plotting
Geopandas provides a useful `.plot()` function which creates a matplotlib figure and returns an axes object.
There's a ton of additional libraries that provide more plotting functionality, and we'll explore a few of them here. There's no "correct" set of libraries to use for GIS in python, and it's up to you to figure out which ones fit the best into your workflow.
The `cmap` option to the `.plot()` function allows you to pass in a [matplotlib colormap name](https://matplotlib.org/gallery/color/colormap_reference.html), which are collections of colors used to visualize data
```
# we can use the built-in geopandas plot function to visualize
ax = data.plot(figsize=(10,5), alpha=0.6, cmap='Set2')
```
currently the colors are assigned arbitrarily. However, we can also use colors to encode information.
Let's first use colors to categorize by endangerment status. To do so, we pass the `column` argument to `plot()`. For reference, we also set `legend=True`
```
ax = data.plot(figsize=(10,5), alpha=0.6, cmap='Set2', column='category', legend=True)
```
Another common use of colors to encode data is to represent numerical data in an area with colors. This is known as a [choropleth](https://en.wikipedia.org/wiki/Choropleth_map).
Let's use this to encode the areas of each region
```
#then pass the area column as an argument
ax = data.plot(figsize=(10,5), alpha=0.6, column='shape_Area', legend=True)
```
The colorbar legend is too big relative to the figure. We'll have to do some manual adjustments. There are tools to create axes grids for colorbars available in:
https://matplotlib.org/3.1.0/tutorials/toolkits/axes_grid.html
```
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig, ax = plt.subplots(1, 1)
divider = make_axes_locatable(ax) #makes it so you can append to the axes
#put another axes to the right of it, at 5% of the total width with 0.1 points of padding in between
cax = divider.append_axes("right", size="5%", pad=0.1)
# note that you have to specify both ax and cax as arguments for it to work
data.plot(figsize=(10,5), alpha=0.6, column='area',
legend=True, ax=ax, cax=cax)
```
The data by itself looks just like a bunch of blobs. Let's put it on a map for context
[Contextily](https://github.com/geopandas/contextily) is a library for creating basemaps. It pulls data from a host of different basemap providers - see [documentation](https://contextily.readthedocs.io/en/latest/) for more details.
```
# the data is currently in WGS84 (epsg:4326)
data.crs
ax = data.plot(figsize=(10,5), alpha=0.6, cmap='Set2', column='category')
# now we add a basemap. ctx finds a basemap for a background from
# an online repository.
# It assumes the data is in web mercator (epsg:3857) unless you specify otherwise
ctx.add_basemap(ax, crs=data.crs)
# we can set bounds using matplotlib
ax = data.plot(figsize=(10,5), alpha=0.6, cmap='Set2', column='category')
ax.set_xlim([-180,180])
ax.set_ylim([-85,85])
ctx.add_basemap(ax, crs=data.crs)
```
We can use different background styles:
Note that some styles only contain labels or lines.
```
# to look at all of the different providers, check:
ctx.providers
```
previews of the different basemap styles can be viewed at: http://leaflet-extras.github.io/leaflet-providers/preview/
```
ax = data.plot(figsize=(10,5), alpha=0.6, cmap='Set2', column='category')
ax.set_xlim([-180,180])
ax.set_ylim([-85,85])
# to specify the type of basemap, specify the source argument
# the syntax is ctx.providers.{provider name}.{provider style}
ctx.add_basemap(ax, crs=data.crs, source=ctx.providers.Stamen.Watercolor)
# you can add labels independently of the background
ctx.add_basemap(ax, crs=data.crs, source=ctx.providers.CartoDB.DarkMatterOnlyLabels)
# we can download background tiles as images for quicker loading (don't need to keep redownloading)
# let's use the bounds of one of the fish locations as an example
w,s,e,n = data.loc[25,'geometry'].bounds
data.loc[25,'geometry'].bounds
```
the function bounds2img takes coordinates and [zoom level](https://wiki.openstreetmap.org/wiki/Zoom_levels) and downloads the corresponding tiles of the map as images
```
img, ext = ctx.bounds2img(w, s, e, n, 6, ll=True) #ll means coordinates are in lat-lon
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.imshow(img, extent=ext)
# bounds2img returns things in epsg:3857, so we need to plot the data in the same crs
data.to_crs(epsg=3857).plot(ax=ax, cmap='Set3', alpha=0.8)
ax_bounds = data.to_crs(epsg=3857).loc[25,'geometry'].bounds
ax.set(xlim=[ax_bounds[0],ax_bounds[2]],ylim=[ax_bounds[1],ax_bounds[3]])
```
## Writing to a shapefile
First we'll make a directory for outputting data to. We use the `mkdir` command which makes an empty folder. The `-p` option will skip it if the directory already exists
```
!mkdir output_data -p
# let's write the first 50 rows of the shapefile to a new file
outfp = "output_data/DAMSELFISH_distributions_SELECTION.shp"
# Select first 50 rows
selection = data[0:50]
# Write those rows into a new Shapefile (the default output file format is Shapefile)
selection.to_file(outfp)
```
## Converting shapes to GeoDataFrames
You can use Shapely geometric objects to create a GeoDataFrame from scratch.
```
# Create an empty geopandas GeoDataFrame
newdata = gpd.GeoDataFrame()
# add a geometry column (necessary for shapefile)
newdata['geometry'] = None
# Let's see what we have at the moment
print(newdata)
# Coordinates of the MIT main campus in Decimal Degrees
coordinates = [(-71.092562, 42.357602), ( -71.080155, 42.361553), ( -71.089817, 42.362584), (-71.094688, 42.360198)]
# Create a Shapely polygon from the coordinate-tuple list
poly = Polygon(coordinates)
# Let's see what we have
poly
```
Quick checkpoint! Find the coordinates of the corners of a place that has significant meaning to you. Just like we did above, make a Shapely polygon from the coordinate-tuple list of the corners of your personal landmark.
Display it! It can be as big as you want. If you want, share out with the class the place and why it is significant to you.
```
# Coordinates of a place of significance to you, in decimal degrees
coordinates_personal = []
# Create a Shapely polygon from the coordinate-tuple list
poly_personal = Polygon(coordinates_personal)
# Show the place and share out its significance if you want
# Insert the polygon into 'geometry' -column at index 0
newdata.loc[0, 'geometry'] = poly
newdata
newdata.loc[0, 'location'] = 'MIT main campus'
newdata
```
Before exporting the data it is necessary to set the coordinate reference system (projection) for the GeoDataFrame.
We will set the CRS using geopandas' `set_crs()` method.
```
# Set the GeoDataFrame's coordinate system to WGS84 (i.e. epsg code 4326)
newdata = newdata.set_crs('epsg:4326')
# Let's see how the crs definition looks like
newdata.crs
outfp = "output_data/MIT_campus.shp"
# Write the data into that Shapefile
newdata.to_file(outfp)
# Let's plot it
ax = newdata.to_crs(epsg=3857).plot(figsize=(10,5),alpha = 0.5, color='#FF55FF')
ctx.add_basemap(ax)
ax.set_axis_off() # remove the x-y axes
```
# Exercise
Find an interesting GIS dataset and:
- visualize some raw data
- ask an interesting analysis question about it:
- intersections, sizes, quantities
- relationships
- e.g. which latitudes contain the most endangered species? what countries have the most ports per km of coastline?
- Visualize some of your analysis
As per usual, we'll ask a few volunteers to present their results.
Here are some resources to look for GIS datasets:
- Cambridge, MA GIS data: http://cambridgegis.github.io/gisdata.html
- Free GIS data: https://freegisdata.rtwilson.com/
- Data.gov: https://www.data.gov/
```
```
An important responsibility of GIS engineers during the pandemic is to visualize spread and case intensity. Using datasets from the following sources:
* Visualize raw data collected from sources around the world about the state of the pandemic
* Explore connections between various factors and come up with a hypothesis for your research. Some ideas could be connecting COVID data in different counties to socioeconomic, age, or building-architecture data. Remember, mapped data speaks louder than graphs or raw tables.
* Present your findings to the rest of the class and come up with a possible solution to the problem or connection that you explored
COVID-19 Datasets:
* COVID-19 Dataset (Kaggle): www.kaggle.com/imdevskp/corona-virus-report
* New York Times Dataset: https://github.com/nytimes/covid-19-data
* JHU Dataset: https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data
* Feel free to explore more area specific datasets or datasets which outline other conditions. These are just suggestions.
To make your research connections, be sure to explore population and demographic datasets of different counties around the country. Be creative with your research!
```
```
Map a shape of your hometown onto the map. Similar to how we mapped the coordinates of the MIT campus on a map, map the coordinates of your hometown onto a map. It doesn't have to exact, but just take a couple rough coordinates and visualize your place on the map. The TAs will try to map these shapes onto a full map so that we can get an idea of where everyone is from and visualize how geographically diverse our class is.
```
```
# `pymdptoolbox` demo
```
import warnings
from mdptoolbox import mdp
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
```
## The problem
* You have a 20-sided die, and you may roll repeatedly, trying to get the sum of your rolls as close as possible to 21; if the sum exceeds 21, you bust.
* Your score is the numerical value of the sum of your rolls; if you bust, you get zero.
* What is the optimal strategy?

## The solution
Let's look at what we have to deal with:
* State space is 23-dimensional (sum of rolls can be 0-21 inclusive, plus the terminal state)
* Action space is 2-dimensional (roll/stay)
* State transitions are stochastic; requires transition matrix $T(s^\prime;s,a)$
* $T$ is mildly sparse (some transitions like 9->5 or 0->21 are impossible)
* Rewards depend on both state and action taken from that state, but are not stochastic (only ever get positive reward when choosing "stay")
We're going to use the [*value iteration*](https://pymdptoolbox.readthedocs.io/en/latest/api/mdp.html#mdptoolbox.mdp.ValueIteration) algorithm. Looking at the documentation, we can see that it requires as input a transition matrx, a reward matrix, and a discount factor (we will use $\gamma = 1$).
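For intuition, here is a sketch of the Bellman backup that value iteration repeats until convergence, written in plain NumPy. The 2-state, 2-action MDP below is invented purely for illustration (it is not the dice game); the general shapes `T[a, s, s']` and `R[s, a]` match what we build next.

```python
import numpy as np

# Hypothetical toy MDP: state 1 is absorbing with zero reward.
# T[a, s, s'] : transition probabilities; R[s, a] : rewards
T = np.array([[[0.9, 0.1],
               [0.0, 1.0]],
              [[0.5, 0.5],
               [0.0, 1.0]]])
R = np.array([[0.0, 1.0],
              [0.0, 0.0]])
gamma, eps = 0.95, 1e-6

V = np.zeros(2)
while True:
    # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_s' T[a, s, s'] * V[s']
    Q = R + gamma * np.einsum("asp,p->sa", T, V)
    V_new = Q.max(axis=1)          # greedy value
    if np.abs(V_new - V).max() < eps:
        V = V_new
        break
    V = V_new
policy = Q.argmax(axis=1)          # greedy policy w.r.t. the converged Q
print(V, policy)
```

`mdp.ValueIteration` performs essentially this loop for us, with the same `(A, S, S)` transition tensor and `(S, A)` reward matrix as inputs.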
Let's first specify the transition "matrix". It's going to be a 3-dimensional tensor of shape $(|\mathcal{A}|,|\mathcal{S}|,|\mathcal{S}|) = (2, 23, 23)$. Most entries are probably zero, so let's start with a zero matrix and fill in the blanks. I'm going to reserve the very last state (the 23rd entry) for the terminal state.
```
def make_transition_matrix(n_sides=20, max_score=21):
    """Constructs the transition matrix for the MDP.

    Arguments:
        n_sides: number of sides on the die being rolled
        max_score: the maximum score of the game before going bust

    Returns:
        np.ndarray: array of shape (A,S,S), where A=2 and S=max_score+2,
            representing the transition matrix for the MDP
    """
    A = 2
    S = max_score + 2
    T = np.zeros(shape=(A, S, S))
    p = 1/n_sides
    # All the "roll" action transitions
    # First, the transition from state s to any non-terminal state s' has
    # probability 1/n_sides unless s' <= s or s' > s + n_sides
    for s in range(0, S-1):
        for sprime in range(s+1, S-1):
            if sprime <= s + n_sides:
                T[0, s, sprime] = p
    # The rows of T[0] must all sum to one, so all the remaining probability
    # goes to the terminal state
    for s in range(0, S-1):
        T[0, s, S-1] = 1 - T[0, s].sum()
    # It is impossible to transition out of the terminal state; it is "absorbing"
    T[0, S-1, S-1] = 1
    # All the "stay" action transitions
    # This one is simple - all "stay" transitions dump you in the terminal
    # state, regardless of starting state
    T[1, :, S-1] = 1
    T[T < 0] = 0  # There may be some very small negative probabilities due to
                  # rounding errors - this fixes them
    return T
# Take a peek at a smaller version
T = make_transition_matrix(n_sides=4, max_score=5)
print("roll transitions:")
print(T[0])
print("\nstay transitions:")
print(T[1])
```
Now let's build the reward matrix. This is going to be a tensor of shape $(|\mathcal{S}|,|\mathcal{A}|) = (23,2)$. This one is even simpler than the transition matrix because only "stay" actions generate nonzero rewards, which are equal to the index of the state itself.
```
def make_reward_matrix(max_score=21):
    """Create the reward matrix for the MDP.

    Arguments:
        max_score: the maximum score of the game before going bust

    Returns:
        np.ndarray: array of shape (S,A), where A=2 and S=max_score+2,
            representing the reward matrix for the MDP
    """
    A = 2
    S = max_score + 2
    R = np.zeros(shape=(S, A))
    # Only need to create rewards for the "stay" action
    # Rewards are equal to the state index, except for the terminal state,
    # which always returns zero
    for s in range(0, S-1):
        R[s, 1] = s
    return R
# Take a peek at a smaller version
R = make_reward_matrix(max_score=5)
print("roll rewards:")
print(R[:,0])
print("\nstay rewards:")
print(R[:,1])
```
## The algorithm
Alright, now that we have the transition and reward matrices, our MDP is completely defined, and we can use the `pymdptoolbox` to help us figure out the optimal policy/strategy.
```
n_sides = 20
max_score = 21
T = make_transition_matrix(n_sides, max_score)
R = make_reward_matrix(max_score)
model = mdp.ValueIteration(
transitions=T,
reward=R,
discount=1,
epsilon=0.001,
max_iter=1000,
)
model.setVerbose()
model.run()
print(f"Algorithm finished running in {model.time:.2e} seconds")
```
That ran pretty fast, didn't it? Unfortunately, most realistic MDP problems have millions or billions of possible states (or more!), so this approach doesn't really scale. But it works very well for our small problem.
## The results
Now let's analyze the results. The `ValueIteration` object gives us easy access to the optimal value function and policy.
```
plt.plot(model.V, marker='o')
x = np.linspace(0, max_score, 10)
plt.plot(x, x, linestyle="--", color='black')
ticks = list(range(0, max_score+1, 5)) + [max_score+1]
labels = [str(x) for x in ticks[:-1]] + ["\u2205"]
plt.xticks(ticks, labels)
plt.xlim(-1, max_score+2)
plt.xlabel("State sum of rolls $s$")
plt.ylabel("State value $V$")
plt.title("MDP optimal value function $V^*(s)$")
plt.show()
plt.plot(model.policy, marker='o')
ticks = list(range(0, max_score+1, 5)) + [max_score+1]
labels = [str(x) for x in ticks[:-1]] + ["\u2205"]
plt.xticks(ticks, labels)
plt.xlim(-1, max_score+2)
ticks = [0, 1]
labels = ["roll", "stay"]
plt.yticks(ticks, labels)
plt.ylim(-0.25, 1.25)
plt.xlabel("State sum of rolls $s$")
plt.ylabel("Policy $\pi$")
plt.title("MDP optimal policy $\pi^*(s)$")
plt.show()
```
Looks like the optimal policy is to keep rolling until the sum reaches 10. This is why $V(s) = s$ for $s \ge 10$ (black dashed line): that's the score you end up with when following this policy. For $s < 10$, the value is actually a bit higher than $s$ because you get an opportunity to roll again for a higher score, and the sum is low enough that your chances of busting are relatively low. We can see the slope is positive for $s \le 21 - 20 = 1$ because it's impossible to bust below that point, but the slope becomes negative for $1 \le s \le 10$ because the higher you get, the more likely you are to bust.
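As a quick sanity check (an addition, not part of the value-iteration machinery itself), we can Monte Carlo-simulate the "roll while the sum is below 10" policy and compare the empirical mean score against the optimal value $V^*(0)$ reported by the model:

```python
import random

def play(threshold=10, n_sides=20, max_score=21, rng=random):
    """Play one game under a fixed 'roll while sum < threshold' policy."""
    total = 0
    while total < threshold:
        total += rng.randint(1, n_sides)
        if total > max_score:
            return 0  # bust
    return total

random.seed(0)
scores = [play() for _ in range(100_000)]
print(sum(scores) / len(scores))  # should land close to model.V[0]
```

Every game ends either in a bust (score 0) or with a sum in the 10–21 range, so the empirical mean should agree with the optimal value function to within Monte Carlo noise.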
We can also calculate the state distribution $\rho_\pi(s_0 \rightarrow s,t)$, which tells us the probability to be in any one of the states $s$ after a time $t$ when starting from state $s_0$:
$$
\rho_\pi(s_0 \rightarrow s,t) = \sum_{s^\prime} T(s;s^\prime,\pi(s^\prime)) \rho_\pi(s_0 \rightarrow s^\prime, t-1) \\
\text{where }\rho_\pi(s_0 \rightarrow s, 0) = \delta_{s, s_0}
$$
```
def calculate_state_distribution(policy, T, t_max=10):
    S = len(policy)
    # Reduce transition matrix to T(s';s) since policy is fixed
    T_ = np.zeros(shape=(S, S))
    for s in range(S):
        for sprime in range(S):
            T_[s, sprime] = T[policy[s], s, sprime]
    T = T_
    # Initialize rho
    rho = np.zeros(shape=(S, S, t_max+1))
    for s in range(0, S):
        rho[s, s, 0] = 1
    # Use the iterative update equation
    for t in range(1, t_max+1):
        rho[:, :, t] = np.einsum("ji,kj->ki", T, rho[:, :, t-1])
    return rho
rho = calculate_state_distribution(model.policy, T, 5)
with warnings.catch_warnings():
    warnings.simplefilter('ignore')  # Ignore the divide-by-zero warning from taking log(0)
    plt.imshow(np.log10(rho[0].T), cmap='viridis')
cbar = plt.colorbar(shrink=0.35, aspect=9)
cbar.ax.set_title(r"$\log_{10}(\rho)$")
ticks = list(range(0, max_score+1, 5)) + [max_score+1]
labels = [str(x) for x in ticks[:-1]] + ["\u2205"]
plt.xticks(ticks, labels)
plt.xlabel("State sum of rolls $s$")
plt.ylabel("Number of rolls/turns $t$")
plt.title(r"Optimal state distribution $\rho_{\pi^*}(s_0\rightarrow s;t)$")
plt.subplots_adjust(right=2, top=2)
plt.show()
```
<p style = "font-size : 50px; color : #532e1c ; font-family : 'Comic Sans MS'; text-align : center; background-color : #bedcfa; border-radius: 5px 5px;"><strong>Titanic EDA and Prediction</strong></p>
<img style="float: center; border:5px solid #ffb037; width:100%" src = https://sn56.scholastic.com/content/dam/classroom-magazines/sn56/issues/2018-19/020419/the-titanic-sails-again/SN56020919_Titanic-Hero.jpg>
<a id = '0'></a>
<p style = "font-size : 35px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #f9b208; border-radius: 5px 5px;"><strong>Table of Contents</strong></p>
* [Data Description](#1.0)
* [EDA](#2.0)
* [Survived Column](#2.1)
* [Pclass Column](#2.2)
* [Name Column](#2.3)
* [Sex Column](#2.4)
* [Age Column](#2.5)
* [Fare Column](#2.6)
* [SibSp Column](#2.7)
* [Parch Column](#2.8)
* [Ticket Column](#2.9)
* [Embarked Column](#2.10)
* [Findings From EDA](#3.0)
* [Data Preprocessing](#4.0)
* [Models](#5.0)
* [Logistic Regression](#5.1)
* [Knn](#5.2)
* [Decision Tree Classifier](#5.3)
* [Random Forest Classifier](#5.4)
* [Ada Boost Classifier](#5.5)
* [Gradient Boosting Classifier](#5.6)
* [Stochastic Gradient Boosting (SGB)](#5.7)
* [XgBoost](#5.8)
* [Cat Boost Classifier](#5.9)
* [Extra Trees Classifier](#5.10)
* [LGBM Classifier](#5.11)
* [Voting Classifier](#5.12)
* [Models Comparison](#6.0)
<a id = '1.0'></a>
<p style = "font-size : 30px; color : #4e8d7c ; font-family : 'Comic Sans MS'; "><strong>Data Description :-</strong></p>
<ul>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Survival : 0 = No, 1 = Yes</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>pclass(Ticket Class) : 1 = 1st, 2 = 2nd, 3 = 3rd</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Sex(Gender) : Male, Female</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Age : Age in years</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>SibSp : Number of siblings/spouses aboard the Titanic</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Parch : Number of parents/children aboard the Titanic</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Ticket : Ticket Number</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Fare : Passenger fare</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Cabin : Cabin Number</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Embarked : Port of Embarkation, C = Cherbourg, Q = Queenstown, S = Southampton</strong></li>
</ul>
```
# necessary imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
plt.style.use('fivethirtyeight')
%matplotlib inline
train_df = pd.read_csv('../input/titanic/train.csv')
train_df.head()
train_df.describe()
train_df.var()
train_df.info()
# Checking for null values
train_df.isna().sum()
```
<a id = '2.0'></a>
<p style = "font-size : 35px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #f9b208; border-radius: 5px 5px;"><strong>Exploratory Data Analysis (EDA)</strong></p>
```
# visualizing null values
import missingno as msno
msno.bar(train_df)
plt.show()
# heatmap
plt.figure(figsize = (18, 8))
corr = train_df.corr()
mask = np.triu(np.ones_like(corr, dtype = bool))
sns.heatmap(corr, mask = mask, annot = True, fmt = '.2f', linewidths = 1, annot_kws = {'size' : 15})
plt.show()
```
<p style = "font-size : 20px; color : #34656d ; font-family : 'Comic Sans MS'; "><strong>A heatmap is not useful in the case of categorical variables, so we will analyse each column to see how it contributes to prediction.</strong></p>
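One simple way to inspect a categorical column's relationship to survival is a normalized cross-tabulation. The sketch below uses a tiny stand-in frame so it runs on its own; with the real data you would pass `train_df['Sex']` and `train_df['Survived']` instead:

```python
import pandas as pd

# Tiny stand-in frame (invented for illustration); use train_df with real data
df = pd.DataFrame({'Sex': ['male', 'female', 'female', 'male'],
                   'Survived': [0, 1, 1, 0]})
# normalize='index' turns counts into per-category survival rates
rates = pd.crosstab(df['Sex'], df['Survived'], normalize='index')
print(rates)
```

Each row then sums to 1, so the column labelled `1` reads directly as the survival rate within that category.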
<a id = '2.1'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Survived Column</strong></p>
```
plt.figure(figsize = (12, 7))
sns.countplot(y = 'Survived', data = train_df)
plt.show()
values = train_df['Survived'].value_counts()
labels = ['Not Survived', 'Survived']
fig, ax = plt.subplots(figsize = (5, 5), dpi = 100)
explode = (0, 0.06)
patches, texts, autotexts = ax.pie(values, labels = labels, autopct = '%1.2f%%', shadow = True,
startangle = 90, explode = explode)
plt.setp(texts, color = 'grey')
plt.setp(autotexts, size = 12, color = 'white')
autotexts[1].set_color('black')
plt.show()
```
<a id = '2.2'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Pclass Column</strong></p>
```
train_df.Pclass.value_counts()
train_df.groupby(['Pclass', 'Survived'])['Survived'].count()
plt.figure(figsize = (16, 8))
sns.countplot('Pclass', hue = 'Survived', data = train_df)
plt.show()
values = train_df['Pclass'].value_counts()
labels = ['Third Class', 'Second Class', 'First Class']
explode = (0, 0, 0.08)
fig, ax = plt.subplots(figsize = (5, 6), dpi = 100)
patches, texts, autotexts = ax.pie(values, labels = labels, autopct = '%1.2f%%', shadow = True,
startangle = 90, explode = explode)
plt.setp(texts, color = 'grey')
plt.setp(autotexts, size = 13, color = 'white')
autotexts[2].set_color('black')
plt.show()
sns.catplot('Pclass', 'Survived', kind = 'point', data = train_df, height = 6, aspect = 2)
plt.show()
```
<a id = '2.3'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Name Column</strong></p>
```
train_df.Name.value_counts()
len(train_df.Name.unique()), train_df.shape
```
<a id = '2.4'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Sex Column</strong></p>
```
train_df.Sex.value_counts()
train_df.groupby(['Sex', 'Survived'])['Survived'].count()
plt.figure(figsize = (16, 7))
sns.countplot('Sex', hue = 'Survived', data = train_df)
plt.show()
sns.catplot(x = 'Sex', y = 'Survived', data = train_df, kind = 'bar', col = 'Pclass')
plt.show()
sns.catplot(x = 'Sex', y = 'Survived', data = train_df, kind = 'point', height = 6, aspect =2)
plt.show()
plt.figure(figsize = (15, 6))
sns.catplot(x = 'Pclass', y = 'Survived', kind = 'point', data = train_df, hue = 'Sex', height = 6, aspect = 2)
plt.show()
```
<a id = '2.5'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Age Column</strong></p>
```
plt.figure(figsize = (15, 6))
plt.style.use('ggplot')
sns.distplot(train_df['Age'])
plt.show()
sns.catplot(x = 'Sex', y = 'Age', kind = 'box', data = train_df, height = 5, aspect = 2)
plt.show()
sns.catplot(x = 'Sex', y = 'Age', kind = 'box', data = train_df, col = 'Pclass')
plt.show()
```
<a id = '2.6'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Fare Column</strong></p>
```
plt.figure(figsize = (14, 6))
plt.hist(train_df.Fare, bins = 60, color = 'orange')
plt.xlabel('Fare')
plt.show()
```
<p style = "font-size : 20px; color : #34656d ; font-family : 'Comic Sans MS'; "><strong>We can see that there are a lot of zero values in the Fare column, so we will replace them with the column's mean value later.</strong></p>
```
sns.catplot(x = 'Sex', y = 'Fare', data = train_df, kind = 'box', col = 'Pclass')
plt.show()
```
<a id = '2.7'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>SibSp Column</strong></p>
```
train_df['SibSp'].value_counts()
plt.figure(figsize = (16, 5))
sns.countplot(x = 'SibSp', data = train_df, hue = 'Survived')
plt.show()
sns.catplot(x = 'SibSp', y = 'Survived', kind = 'bar', data = train_df, height = 5, aspect =2)
plt.show()
sns.catplot(x = 'SibSp', y = 'Survived', kind = 'bar', hue = 'Sex', data = train_df, height = 6, aspect = 2)
plt.show()
sns.catplot(x = 'SibSp', y = 'Survived', kind = 'bar', col = 'Sex', data = train_df)
plt.show()
sns.catplot(x = 'SibSp', y = 'Survived', col = 'Pclass', kind = 'bar', data = train_df)
plt.show()
sns.catplot(x = 'SibSp', y = 'Survived', kind = 'point', hue = 'Sex', data = train_df, height = 6, aspect = 2)
plt.show()
```
<a id = '2.8'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Parch Column</strong></p>
```
train_df.Parch.value_counts()
sns.catplot(x = 'Parch', y = 'Survived', data = train_df, hue = 'Sex', kind = 'bar', height = 6, aspect = 2)
plt.show()
```
<a id = '2.9'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Ticket Column</strong></p>
```
train_df.Ticket.value_counts()
len(train_df.Ticket.unique())
```
<a id = '2.10'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Embarked Column</strong></p>
```
train_df['Embarked'].value_counts()
plt.figure(figsize = (14, 6))
sns.countplot('Embarked', hue = 'Survived', data = train_df)
plt.show()
sns.catplot(x = 'Embarked', y = 'Survived', kind = 'bar', data = train_df, col = 'Sex')
plt.show()
```
<a id = '3.0'></a>
<p style = "font-size : 30px; color : #4e8d7c ; font-family : 'Comic Sans MS';"><strong>Findings From EDA :-</strong></p>
<ul>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Females Survived more than Males.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Passengers Travelling in Higher Class Survived More than Passengers travelling in Lower Class.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Name column is having all unique values so this column is not suitable for prediction, we have to drop it.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>In First Class there were more Females than Males, which is why the Fare for Female passengers was higher.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Survival Rate is higher for those who were travelling with siblings or spouses.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Passengers travelling with parents or children have higher survival rate.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Ticket column is not useful and does not have an impact on survival.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Cabin column has a lot of null values, so it is better to drop this column.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Passengers travelling from Cherbourg port survived more than passengers travelling from other two ports.</strong></li>
</ul>
<a id = '4.0'></a>
<p style = "font-size : 35px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #f9b208; border-radius: 5px 5px;"><strong>Data Pre-Processing</strong></p>
```
# dropping useless columns
train_df.drop(['PassengerId', 'Name', 'Ticket', 'Cabin'], axis = 1, inplace = True)
train_df.head()
train_df.isna().sum()
# replacing Zero values of "Fare" column with mean of column
train_df['Fare'] = train_df['Fare'].replace(0, train_df['Fare'].mean())
# filling null values of "Age" column with mean value of the column
train_df['Age'].fillna(train_df['Age'].mean(), inplace = True)
# filling null values of "Embarked" column with mode value of the column
train_df['Embarked'].fillna(train_df['Embarked'].mode()[0], inplace = True)
# checking for null values after filling null values
train_df.isna().sum()
train_df.head()
train_df['Sex'] = train_df['Sex'].apply(lambda val: 1 if val == 'male' else 0)
train_df['Embarked'] = train_df['Embarked'].map({'S' : 0, 'C': 1, 'Q': 2})
train_df.head()
train_df.describe()
train_df.var()
```
<p style = "font-size : 20px; color : #34656d ; font-family : 'Comic Sans MS'; "><strong>Variance in the "Fare" column is very high, so we normalize the "Age" and "Fare" columns with a log transform.</strong></p>
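A quick standalone illustration of why the log transform helps (the numbers below are synthetic, not the actual Fare column): a log transform compresses a right-skewed scale, so the variance shrinks by orders of magnitude.

```python
import numpy as np

# synthetic fare-like values with a long right tail
fares = np.array([7.25, 8.05, 26.0, 71.28, 263.0, 512.33])
log_fares = np.log(fares)

print(fares.var())      # very large on the raw scale
print(log_fares.var())  # only a few units after the log transform
```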
```
train_df['Age'] = np.log(train_df['Age'])
train_df['Fare'] = np.log(train_df['Fare'])
train_df.head()
```
<p style = "font-size : 20px; color : #34656d ; font-family : 'Comic Sans MS'; "><strong>Now the training data looks much better; let's take a look at the test data.</strong></p>
```
test_df = pd.read_csv('../input/titanic/test.csv')
test_df.head()
```
<p style = "font-size : 20px; color : #34656d ; font-family : 'Comic Sans MS'; "><strong>Performing the same steps on the test data.</strong></p>
```
# dropping useless columns
test_df.drop(['PassengerId', 'Name', 'Ticket', 'Cabin'], axis = 1, inplace = True)
# replacing Zero values of "Fare" column with mean of column
test_df['Fare'] = test_df['Fare'].replace(0, test_df['Fare'].mean())
# filling null values of "Age" column with mean value of the column
test_df['Age'].fillna(test_df['Age'].mean(), inplace = True)
# filling null values of "Embarked" column with mode value of the column
test_df['Embarked'].fillna(test_df['Embarked'].mode()[0], inplace = True)
test_df.isna().sum()
# filling null values of "Fare" column with mean value of the column
test_df['Fare'].fillna(test_df['Fare'].mean(), inplace = True)
test_df['Sex'] = test_df['Sex'].apply(lambda val: 1 if val == 'male' else 0)
test_df['Embarked'] = test_df['Embarked'].map({'S' : 0, 'C': 1, 'Q': 2})
test_df.head()
test_df['Age'] = np.log(test_df['Age'])
test_df['Fare'] = np.log(test_df['Fare'])
test_df.var()
test_df.isna().any()
test_df.head()
```
<p style = "font-size : 20px; color : #34656d ; font-family : 'Comic Sans MS'; "><strong>Now both the training and test data are cleaned and preprocessed; let's start with model building.</strong></p>
```
# creating X and y
X = train_df.drop('Survived', axis = 1)
y = train_df['Survived']
# splitting data into training and test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.30, random_state = 0)
```
<a id = '5.0'></a>
<p style = "font-size : 35px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #f9b208; border-radius: 5px 5px;"><strong> Models</strong></p>
<a id = '5.1'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Logistic Regression</strong></p>
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of logistic regression
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
lr_acc = accuracy_score(y_test, lr.predict(X_test))
print(f"Training Accuracy of Logistic Regression is {accuracy_score(y_train, lr.predict(X_train))}")
print(f"Test Accuracy of Logistic Regression is {lr_acc}")
print(f"Confusion Matrix :- \n {confusion_matrix(y_test, lr.predict(X_test))}")
print(f"Classification Report :- \n {classification_report(y_test, lr.predict(X_test))}")
# hyper parameter tuning of logistic regression
from sklearn.model_selection import GridSearchCV
grid_param = {
'penalty': ['l1', 'l2'],
'C' : [0.001, 0.01, 0.1, 0.005, 0.5, 1, 10]
}
grid_search_lr = GridSearchCV(lr, grid_param, cv = 5, n_jobs = -1, verbose = 1)
grid_search_lr.fit(X_train, y_train)
# best parameters and best score
print(grid_search_lr.best_params_)
print(grid_search_lr.best_score_)
# best estimator
lr = grid_search_lr.best_estimator_
# accuracy score, confusion matrix and classification report of logistic regression
lr_acc = accuracy_score(y_test, lr.predict(X_test))
print(f"Training Accuracy of Logistic Regression is {accuracy_score(y_train, lr.predict(X_train))}")
print(f"Test Accuracy of Logistic Regression is {lr_acc}")
print(f"Confusion Matrix :- \n {confusion_matrix(y_test, lr.predict(X_test))}")
print(f"Classification Report :- \n {classification_report(y_test, lr.predict(X_test))}")
```
<a id = '5.2'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>KNN</strong></p>
```
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of knn
knn_acc = accuracy_score(y_test, knn.predict(X_test))
print(f"Training Accuracy of KNN is {accuracy_score(y_train, knn.predict(X_train))}")
print(f"Test Accuracy of KNN is {knn_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, knn.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, knn.predict(X_test))}")
```
<a id = '5.3'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Decision Tree Classifier</strong></p>
```
from sklearn.tree import DecisionTreeClassifier
dtc = DecisionTreeClassifier()
dtc.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of decision tree
dtc_acc = accuracy_score(y_test, dtc.predict(X_test))
print(f"Training Accuracy of Decision Tree Classifier is {accuracy_score(y_train, dtc.predict(X_train))}")
print(f"Test Accuracy of Decision Tree Classifier is {dtc_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, dtc.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, dtc.predict(X_test))}")
# hyper parameter tuning of decision tree
grid_param = {
'criterion' : ['gini', 'entropy'],
'max_depth' : [3, 5, 7, 10],
'splitter' : ['best', 'random'],
'min_samples_leaf' : [1, 2, 3, 5, 7],
'min_samples_split' : [2, 3, 5, 7],  # must be >= 2 in scikit-learn
'max_features' : ['auto', 'sqrt', 'log2']
}
grid_search_dtc = GridSearchCV(dtc, grid_param, cv = 5, n_jobs = -1, verbose = 1)
grid_search_dtc.fit(X_train, y_train)
# best parameters and best score
print(grid_search_dtc.best_params_)
print(grid_search_dtc.best_score_)
# best estimator
dtc = grid_search_dtc.best_estimator_
# accuracy score, confusion matrix and classification report of decision tree
dtc_acc = accuracy_score(y_test, dtc.predict(X_test))
print(f"Training Accuracy of Decision Tree Classifier is {accuracy_score(y_train, dtc.predict(X_train))}")
print(f"Test Accuracy of Decision Tree Classifier is {dtc_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, dtc.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, dtc.predict(X_test))}")
```
<a id = '5.4'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Random Forest Classifier</strong></p>
```
from sklearn.ensemble import RandomForestClassifier
rd_clf = RandomForestClassifier()
rd_clf.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of random forest
rd_clf_acc = accuracy_score(y_test, rd_clf.predict(X_test))
print(f"Training Accuracy of Random Forest Classifier is {accuracy_score(y_train, rd_clf.predict(X_train))}")
print(f"Test Accuracy of Random Forest Classifier is {rd_clf_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, rd_clf.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, rd_clf.predict(X_test))}")
```
<a id = '5.5'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Ada Boost Classifier</strong></p>
```
from sklearn.ensemble import AdaBoostClassifier
ada = AdaBoostClassifier(base_estimator = dtc)
ada.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of ada boost
ada_acc = accuracy_score(y_test, ada.predict(X_test))
print(f"Training Accuracy of Ada Boost Classifier is {accuracy_score(y_train, ada.predict(X_train))}")
print(f"Test Accuracy of Ada Boost Classifier is {ada_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, ada.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, ada.predict(X_test))}")
# hyper parameter tuning ada boost
grid_param = {
'n_estimators' : [100, 120, 150, 180, 200],
'learning_rate' : [0.01, 0.1, 1, 10],
'algorithm' : ['SAMME', 'SAMME.R']
}
grid_search_ada = GridSearchCV(ada, grid_param, cv = 5, n_jobs = -1, verbose = 1)
grid_search_ada.fit(X_train, y_train)
# best parameter and best score
print(grid_search_ada.best_params_)
print(grid_search_ada.best_score_)
ada = grid_search_ada.best_estimator_
# accuracy score, confusion matrix and classification report of ada boost
ada_acc = accuracy_score(y_test, ada.predict(X_test))
print(f"Training Accuracy of Ada Boost Classifier is {accuracy_score(y_train, ada.predict(X_train))}")
print(f"Test Accuracy of Ada Boost Classifier is {ada_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, ada.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, ada.predict(X_test))}")
```
<a id = '5.6'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Gradient Boosting Classifier</strong></p>
```
from sklearn.ensemble import GradientBoostingClassifier
gb = GradientBoostingClassifier()
gb.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of gradient boosting classifier
gb_acc = accuracy_score(y_test, gb.predict(X_test))
print(f"Training Accuracy of Gradient Boosting Classifier is {accuracy_score(y_train, gb.predict(X_train))}")
print(f"Test Accuracy of Gradient Boosting Classifier is {gb_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, gb.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, gb.predict(X_test))}")
```
<a id = '5.7'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Stochastic Gradient Boosting (SGB)</strong></p>
```
sgb = GradientBoostingClassifier(subsample = 0.90, max_features = 0.70)
sgb.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of stochastic gradient boosting classifier
sgb_acc = accuracy_score(y_test, sgb.predict(X_test))
print(f"Training Accuracy of Stochastic Gradient Boosting is {accuracy_score(y_train, sgb.predict(X_train))}")
print(f"Test Accuracy of Stochastic Gradient Boosting is {sgb_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, sgb.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, sgb.predict(X_test))}")
```
<a id = '5.8'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>XgBoost</strong></p>
```
from xgboost import XGBClassifier
xgb = XGBClassifier(booster = 'gbtree', learning_rate = 0.1, max_depth = 5, n_estimators = 180)
xgb.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of xgboost
xgb_acc = accuracy_score(y_test, xgb.predict(X_test))
print(f"Training Accuracy of XgBoost is {accuracy_score(y_train, xgb.predict(X_train))}")
print(f"Test Accuracy of XgBoost is {xgb_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, xgb.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, xgb.predict(X_test))}")
```
<a id = '5.9'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Cat Boost Classifier</strong></p>
```
from catboost import CatBoostClassifier
cat = CatBoostClassifier(iterations=10)
cat.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of cat boost
cat_acc = accuracy_score(y_test, cat.predict(X_test))
print(f"Training Accuracy of Cat Boost Classifier is {accuracy_score(y_train, cat.predict(X_train))}")
print(f"Test Accuracy of Cat Boost Classifier is {cat_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, cat.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, cat.predict(X_test))}")
```
<a id = '5.10'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Extra Trees Classifier</strong></p>
```
from sklearn.ensemble import ExtraTreesClassifier
etc = ExtraTreesClassifier()
etc.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of extra trees classifier
etc_acc = accuracy_score(y_test, etc.predict(X_test))
print(f"Training Accuracy of Extra Trees Classifier is {accuracy_score(y_train, etc.predict(X_train))}")
print(f"Test Accuracy of Extra Trees Classifier is {etc_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, etc.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, etc.predict(X_test))}")
```
<a id = '5.11'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>LGBM Classifier</strong></p>
```
from lightgbm import LGBMClassifier
lgbm = LGBMClassifier(learning_rate = 1)
lgbm.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of lgbm classifier
lgbm_acc = accuracy_score(y_test, lgbm.predict(X_test))
print(f"Training Accuracy of LGBM Classifier is {accuracy_score(y_train, lgbm.predict(X_train))}")
print(f"Test Accuracy of LGBM Classifier is {lgbm_acc} \n")
print(f"{confusion_matrix(y_test, lgbm.predict(X_test))}\n")
print(classification_report(y_test, lgbm.predict(X_test)))
```
<a id = '5.12'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Voting Classifier</strong></p>
```
from sklearn.ensemble import VotingClassifier
classifiers = [('Gradient Boosting Classifier', gb), ('Stochastic Gradient Boosting', sgb), ('Cat Boost Classifier', cat),
('XGboost', xgb), ('Decision Tree', dtc), ('Extra Tree', etc), ('Light Gradient', lgbm),
('Random Forest', rd_clf), ('Ada Boost', ada), ('Logistic', lr)]
vc = VotingClassifier(estimators = classifiers)
vc.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of voting classifier
vc_acc = accuracy_score(y_test, vc.predict(X_test))
print(f"Training Accuracy of Voting Classifier is {accuracy_score(y_train, vc.predict(X_train))}")
print(f"Test Accuracy of Voting Classifier is {vc_acc} \n")
print(f"{confusion_matrix(y_test, vc.predict(X_test))}\n")
print(classification_report(y_test, vc.predict(X_test)))
```
<a id = '6.0'></a>
<p style = "font-size : 35px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #f9b208; border-radius: 5px 5px;"><strong> Models Comparison</strong></p>
```
models = pd.DataFrame({
'Model' : ['Logistic Regression', 'KNN', 'Decision Tree Classifier', 'Random Forest Classifier', 'Ada Boost Classifier',
'Gradient Boosting Classifier', 'Stochastic Gradient Boosting', 'XgBoost', 'Cat Boost', 'Extra Trees Classifier', 'LGBM Classifier', 'Voting Classifier'],
'Score' : [lr_acc, knn_acc, dtc_acc, rd_clf_acc, ada_acc, gb_acc, sgb_acc, xgb_acc, cat_acc, etc_acc, lgbm_acc, vc_acc]
})
models.sort_values(by = 'Score', ascending = False)
plt.figure(figsize = (15, 10))
sns.barplot(x = 'Score', y = 'Model', data = models)
plt.show()
final_prediction = sgb.predict(test_df)
prediction = pd.DataFrame(final_prediction)
submission = pd.read_csv('../input/titanic/gender_submission.csv')
submission['Survived'] = prediction
submission.to_csv('Submission.csv', index = False)
```
<p style = "font-size : 25px; color : #f55c47 ; font-family : 'Comic Sans MS'; "><strong>If you like my work, please do Upvote.</strong></p>
# Find Pairwise Interactions
This notebook demonstrates how to calculate pairwise intra- and inter-molecular interactions at specified levels of granularity within biological assemblies and asymmetric units.
```
from pyspark.sql import SparkSession
from mmtfPyspark.io import mmtfReader
from mmtfPyspark.utils import ColumnarStructure
from mmtfPyspark.interactions import InteractionExtractorPd
```
### Start a Spark Session
```
spark = SparkSession.builder.appName("Interactions").getOrCreate()
```
## Define Interaction Partners
Interactions are defined by specifying two subsets of atoms, named **query** and **target**. Once defined, interactions can be calculated between these two subsets.
### Use Pandas Dataframes to Create Subsets
The InteractionExtractorPd internally uses Pandas dataframe queries to create query and target atom sets. Any of the Pandas column names below can be used to create subsets.
Example of a structure represented in a Pandas dataframe.
```
structures = mmtfReader.download_mmtf_files(["1OHR"]).cache()
# get first structure from Spark RDD (keys = PDB IDs, value = mmtf structures)
first_structure = structures.values().first()
# convert to a Pandas dataframe
df = ColumnarStructure(first_structure).to_pandas()
df.head(5)
```
### Create a subset of atoms using boolean expressions
The following query creates a subset of ligand (non-polymer) atoms that are not water (HOH) or heavy water (DOD).
```
query = "not polymer and (group_name not in ['HOH','DOD'])"
df_lig = df.query(query)
df_lig.head(5)
```
## Calculate Interactions
The following boolean expressions specify two subsets: ligands (query) and polymer groups (target). In this example, interactions within a distance cutoff of 4 Å are calculated.
```
query = "not polymer and (group_name not in ['HOH','DOD'])"
target = "polymer"
distance_cutoff = 4.0
# the result is a Spark dataframe
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target)
# get the first 5 rows of the Spark dataframe and display it as a Pandas dataframe
interactions.limit(5).toPandas()
```
## Calculate all interactions
If query and target are not specified, all interactions are calculated. By default, intermolecular interactions are calculated.
```
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff)
interactions.limit(5).toPandas()
```
## Aggregate Interactions at Different Levels of Granularity
Pairwise interactions can be listed at different levels of granularity by setting the **level**:
* **level='coord'**: pairwise atom interactions, distances, and coordinates
* **level='atom'**: pairwise atom interactions and distances
* **level='group'**: pairwise atom interactions aggregated at the group (residue) level (default)
* **level='chain'**: pairwise atom interactions aggregated at the chain level
The next example lists the interactions at the **coord** level, the level of highest granularity. You need to scroll in the dataframe to see all columns.
```
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target, level='coord')
interactions.limit(5).toPandas()
```
## Calculate Inter- vs Intra-molecular Interactions
Inter- and intra-molecular interactions can be calculated by explicitly setting the **inter** and **intra** flags.
* **inter=True** (default)
* **intra=False** (default)
### Find intermolecular salt-bridges
This example uses the default settings, i.e., it finds intermolecular salt-bridges.
```
query = "polymer and (group_name in ['ASP', 'GLU']) and (atom_name in ['OD1', 'OD2', 'OE1', 'OE2'])"
target = "polymer and (group_name in ['ARG', 'LYS', 'HIS']) and (atom_name in ['NH1', 'NH2', 'NZ', 'ND1', 'NE2'])"
distance_cutoff = 3.5
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target, level='atom')
interactions.limit(5).toPandas()
```
### Find intramolecular hydrogen bonds
In this example, the inter and intra flags have been set to find intramolecular hydrogen bonds.
```
query = "polymer and element in ['N','O']"
target = "polymer and element in ['N','O']"
distance_cutoff = 3.5
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target,
inter=False, intra=True,
level='atom')
interactions.limit(5).toPandas()
```
## Calculate Interaction in the Biological Assembly vs. Asymmetric Unit
```
structures = mmtfReader.download_mmtf_files(["1STP"]).cache()
```
By default, interactions in the first biological assembly are calculated. The **bio** parameter specifies the biological assembly number. Most PDB structures have only one biological assembly (bio=1); a few have more than one.
* **bio=1** use first biological assembly (default)
* **bio=2** use second biological assembly
* **bio=None** use the asymmetric unit
```
query = "not polymer and (group_name not in ['HOH','DOD'])"
target = "polymer"
distance_cutoff = 4.0
# The asymmetric unit is a monomer (1 ligand, 1 protein chain)
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target, bio=None)
print("Ligand interactions in asymmetric unit (monomer) :", interactions.count())
# The first biological assembly is a tetramer (4 ligands, 4 protein chains)
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target, bio=1)
print("Ligand interactions in 1st bio assembly (tetramer) :", interactions.count())
# There is no second biological assembly, in that case zero interactions are returned
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target, bio=2)
print("Ligand interactions in 2nd bio assembly (does not exist):", interactions.count())
```
The 1st biological assembly contains 68 interactions. Four copies of the asymmetric unit (16 interactions each) account for 64 of them, so the remaining 68 - 4x16 = 4 interactions occur only at the interfaces of the assembled tetramer.
## Stop Spark!
```
spark.stop()
```
[](https://colab.research.google.com/github/real-itu/modern-ai-course/blob/master/lecture-02/lab.ipynb)
# Lab 2 - Adversarial Search
[Connect 4](https://en.wikipedia.org/wiki/Connect_Four) is a classic board game in which 2 players alternate placing markers in columns, and the goal is to get 4 in a row, either horizontally, vertically or diagonally. See the short video below
```
from IPython.display import YouTubeVideo
YouTubeVideo("ylZBRUJi3UQ")
```
The game is implemented below. It will play a game where both players take random (legal) actions. The MAX player is represented with an X and the MIN player with an O. The MAX player starts. Execute the code.
```
import random
from copy import deepcopy
from typing import Sequence
NONE = '.'
MAX = 'X'
MIN = 'O'
COLS = 7
ROWS = 6
N_WIN = 4
class ArrayState:
def __init__(self, board, heights, n_moves):
self.board = board
self.heights = heights
self.n_moves = n_moves
@staticmethod
def init():
board = [[NONE] * ROWS for _ in range(COLS)]
return ArrayState(board, [0] * COLS, 0)
def result(state: ArrayState, action: int) -> ArrayState:
"""Insert in the given column."""
assert 0 <= action < COLS, "action must be a column number"
if state.heights[action] >= ROWS:
raise Exception('Column is full')
player = MAX if state.n_moves % 2 == 0 else MIN
board = deepcopy(state.board)
board[action][ROWS - state.heights[action] - 1] = player
heights = deepcopy(state.heights)
heights[action] += 1
return ArrayState(board, heights, state.n_moves + 1)
def actions(state: ArrayState) -> Sequence[int]:
return [i for i in range(COLS) if state.heights[i] < ROWS]
def utility(state: ArrayState) -> float:
"""Get the winner on the current board."""
board = state.board
def diagonalsPos():
"""Get positive diagonals, going from bottom-left to top-right."""
for di in ([(j, i - j) for j in range(COLS)] for i in range(COLS + ROWS - 1)):
yield [board[i][j] for i, j in di if i >= 0 and j >= 0 and i < COLS and j < ROWS]
def diagonalsNeg():
"""Get negative diagonals, going from top-left to bottom-right."""
for di in ([(j, i - COLS + j + 1) for j in range(COLS)] for i in range(COLS + ROWS - 1)):
yield [board[i][j] for i, j in di if i >= 0 and j >= 0 and i < COLS and j < ROWS]
lines = board + \
list(zip(*board)) + \
list(diagonalsNeg()) + \
list(diagonalsPos())
max_win = MAX * N_WIN
min_win = MIN * N_WIN
for line in lines:
str_line = "".join(line)
if max_win in str_line:
return 1
elif min_win in str_line:
return -1
return 0
def terminal_test(state: ArrayState) -> bool:
return state.n_moves >= COLS * ROWS or utility(state) != 0
def printBoard(state: ArrayState):
board = state.board
"""Print the board."""
print(' '.join(map(str, range(COLS))))
for y in range(ROWS):
print(' '.join(str(board[x][y]) for x in range(COLS)))
print()
if __name__ == '__main__':
s = ArrayState.init()
while not terminal_test(s):
a = random.choice(actions(s))
s = result(s, a)
printBoard(s)
print(utility(s))
```
The last number (0, -1 or 1) is the utility or score of the game: 0 means a draw, 1 means the MAX player won, and -1 means the MIN player won.
### Exercise 1
Modify the code so that you can play manually as the MIN player against the random AI.
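One possible shape for a manual move chooser (a sketch, not a full solution): `legal` is the list returned by `actions(state)` from the game code above, and the `read` function is injectable so the loop can be exercised without a real console.

```python
def human_action(legal, read=input):
    """Keep asking until the user enters a legal column number."""
    while True:
        choice = read(f"Your move {sorted(legal)}: ")
        if choice.isdigit() and int(choice) in legal:
            return int(choice)
        print("Illegal move, try again.")
```

In the game loop, MIN's turn would then use `a = human_action(actions(s))` while MAX keeps `random.choice(actions(s))`.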
### Exercise 2
Implement standard minimax with a fixed depth search. Modify the utility function to handle non-terminal positions using heuristics. Find a value for the depth such that moves don't take longer than approx. 1s to evaluate. See if you can beat your connect4 AI.
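A bare depth-limited minimax skeleton to start from (a sketch, not a full solution: the game functions are passed in as parameters so the snippet stands on its own; for Connect 4 you would pass `actions`, `result`, `terminal_test` and your heuristic, which must score non-terminal boards).

```python
def minimax(state, depth, maximizing, actions, result, terminal, heuristic):
    """Return (value, best_action) for the player to move."""
    if terminal(state) or depth == 0:
        return heuristic(state), None
    best_value, best_action = None, None
    for a in actions(state):
        value, _ = minimax(result(state, a), depth - 1, not maximizing,
                           actions, result, terminal, heuristic)
        if best_value is None or (maximizing and value > best_value) \
                or (not maximizing and value < best_value):
            best_value, best_action = value, a
    return best_value, best_action
```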
### Exercise 3
Add alpha/beta pruning to your minimax. Change your depth so that moves still take approx. 1 second to evaluate. How much deeper can you search? See if you can beat your connect4 AI.
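The same sketch interface with alpha/beta pruning added (again a starting point, not a full solution). Pruning never changes the value at the root; it only skips branches that cannot influence the result.

```python
def alphabeta(state, depth, alpha, beta, maximizing,
              actions, result, terminal, heuristic):
    if terminal(state) or depth == 0:
        return heuristic(state)
    if maximizing:
        value = float("-inf")
        for a in actions(state):
            value = max(value, alphabeta(result(state, a), depth - 1,
                                         alpha, beta, False,
                                         actions, result, terminal, heuristic))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: MIN would never let play reach here
        return value
    value = float("inf")
    for a in actions(state):
        value = min(value, alphabeta(result(state, a), depth - 1,
                                     alpha, beta, True,
                                     actions, result, terminal, heuristic))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cutoff: MAX already has a better line
    return value
```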
### Exercise 4
Add move ordering. The middle columns are often "better" since there are more winning positions that contain them. Evaluate the moves in this order: [3,2,4,1,5,0,6]. How much deeper can you search now? See if you can beat your connect4 AI.
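A small helper that generates the centre-out ordering for any board width (a sketch; for `COLS = 7` it produces exactly `[3, 2, 4, 1, 5, 0, 6]`).

```python
def center_out(cols):
    """Column indices ordered from the centre outwards."""
    mid = cols // 2
    return sorted(range(cols), key=lambda c: (abs(c - mid), c))
```

In the search, iterate `for a in center_out(COLS)` and skip illegal columns, instead of iterating `actions(state)` left to right.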
### Exercise 5 - Optional
Improve your AI somehow. Consider
* Better heuristics
* Faster board representations (look up bitboards)
* Adding a transposition table (see class below)
* Better move ordering
```
class TranspositionTable:
def __init__(self, size=1_000_000):
self.size = size
self.vals = [None] * size
def board_str(self, state: ArrayState):
return ''.join([''.join(c) for c in state.board])
def put(self, state: ArrayState, utility: float):
bstr = self.board_str(state)
idx = hash(bstr) % self.size
self.vals[idx] = (bstr, utility)
def get(self, state: ArrayState):
bstr = self.board_str(state)
idx = hash(bstr) % self.size
stored = self.vals[idx]
if stored is None:
return None
if stored[0] == bstr:
return stored[1]
else:
return None
```
# Building deep retrieval models
**Learning Objectives**
1. Converting raw input examples into feature embeddings.
2. Splitting the data into a training set and a testing set.
3. Configuring the deeper model with losses and metrics.
## Introduction
In [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) we incorporated multiple features into our models, but the models consist of only an embedding layer. We can add more dense layers to our models to increase their expressive power.
In general, deeper models are capable of learning more complex patterns than shallower models. For example, our [user model](https://www.tensorflow.org/recommenders/examples/featurization#user_model) incorporates user ids and timestamps to model user preferences at a point in time. A shallow model (say, a single embedding layer) may only be able to learn the simplest relationships between those features and movies: a given movie is most popular around the time of its release, and a given user generally prefers horror movies to comedies. To capture more complex relationships, such as user preferences evolving over time, we may need a deeper model with multiple stacked dense layers.
Of course, complex models also have their disadvantages. The first is computational cost, as larger models require both more memory and more computation to fit and serve. The second is the requirement for more data: in general, more training data is needed to take advantage of deeper models. With more parameters, deep models might overfit or even simply memorize the training examples instead of learning a function that can generalize. Finally, training deeper models may be harder, and more care needs to be taken in choosing settings like regularization and learning rate.
Finding a good architecture for a real-world recommender system is a complex art, requiring good intuition and careful [hyperparameter tuning](https://en.wikipedia.org/wiki/Hyperparameter_optimization). For example, factors such as the depth and width of the model, activation function, learning rate, and optimizer can radically change the performance of the model. Modelling choices are further complicated by the fact that good offline evaluation metrics may not correspond to good online performance, and that the choice of what to optimize for is often more critical than the choice of model itself.
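As a conceptual illustration of "stacking dense layers" (a numpy sketch with made-up shapes, not the tutorial's Keras code): each dense layer is a matrix multiply plus bias, and the non-linearity between layers is what lets the stack represent patterns a single embedding/linear layer cannot.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 32))            # a batch of 4 query embeddings

W1, b1 = rng.normal(size=(32, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 16)), np.zeros(16)

hidden = np.maximum(x @ W1 + b1, 0.0)   # dense layer + ReLU non-linearity
output = hidden @ W2 + b2               # final projection to the output space
print(output.shape)                     # (4, 16)
```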
Each learning objective will correspond to a _#TODO_ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/deep_recommenders.ipynb)
## Preliminaries
We first import the necessary packages.
```
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
```
**NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.**
```
!pip install tensorflow==2.5.0
```
**NOTE: Please ignore any incompatibility warnings and errors.**
**NOTE: Restart your kernel to use updated packages.**
```
import os
import tempfile
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
plt.style.use('seaborn-whitegrid')
```
This notebook uses TF2.x.
Please check your tensorflow version using the cell below.
```
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
```
In this tutorial we will use the models from [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) to generate embeddings. Hence we will only be using the user id, timestamp, and movie title features.
```
ratings = tfds.load("movielens/100k-ratings", split="train")
movies = tfds.load("movielens/100k-movies", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"timestamp": x["timestamp"],
})
movies = movies.map(lambda x: x["movie_title"])
```
We also do some housekeeping to prepare feature vocabularies.
```
timestamps = np.concatenate(list(ratings.map(lambda x: x["timestamp"]).batch(100)))
max_timestamp = timestamps.max()
min_timestamp = timestamps.min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000,
)
unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map(
lambda x: x["user_id"]))))
```
## Model definition
### Query model
We start with the user model defined in [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) as the first layer of our model, tasked with converting raw input examples into feature embeddings.
```
class UserModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.user_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_user_ids) + 1, 32),
])
self.timestamp_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32),
])
self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()
self.normalized_timestamp.adapt(timestamps)
def call(self, inputs):
# Take the input dictionary, pass it through each input layer,
# and concatenate the result.
return tf.concat([
self.user_embedding(inputs["user_id"]),
self.timestamp_embedding(inputs["timestamp"]),
self.normalized_timestamp(inputs["timestamp"]),
], axis=1)
```
Defining deeper models will require us to stack more layers on top of this first input. A progressively narrower stack of layers, separated by an activation function, is a common pattern:
```
+----------------------+
| 128 x 64 |
+----------------------+
| relu
+--------------------------+
| 256 x 128 |
+--------------------------+
| relu
+------------------------------+
| ... x 256 |
+------------------------------+
```
Since the expressive power of deep linear models is no greater than that of shallow linear models, we use ReLU activations for all but the last hidden layer. The final hidden layer does not use any activation function: using an activation function would limit the output space of the final embeddings and might negatively impact the performance of the model. For instance, if ReLUs are used in the projection layer, all components in the output embedding would be non-negative.
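The two points above (stacked linear layers collapse into a single linear map, and a ReLU on the output would confine embeddings to the non-negative orthant) can be checked with a few lines of plain Python. This is an illustrative sketch, independent of TensorFlow:

```python
# Sketch: why ReLU between layers matters, and why the final layer is left linear.
def relu(x):
    return [max(0.0, v) for v in x]

def linear(weights, x):
    # weights: list of rows; plain matrix-vector product
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

# Two stacked linear maps collapse into one linear map: A @ (B @ x) == (A @ B) @ x,
# so without a nonlinearity, extra depth adds no expressive power.
A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.5, -1.0], [2.0, 0.0]]
x = [1.0, -2.0]
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert linear(A, linear(B, x)) == linear(AB, x)

# A ReLU on the *output* layer would clip every embedding component to >= 0:
out = relu([-0.7, 1.3, -0.1])
assert all(v >= 0 for v in out)
```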
We're going to try something similar here. To make experimentation with different depths easy, let's define a model whose depth (and width) is defined by a set of constructor parameters.
```
class QueryModel(tf.keras.Model):
"""Model for encoding user queries."""
def __init__(self, layer_sizes):
"""Model for encoding user queries.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
# We first use the user model for generating embeddings.
# TODO 1a -- your code goes here
# Then construct the layers.
# TODO 1b -- your code goes here
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
```
The `layer_sizes` parameter gives us the depth and width of the model. We can vary it to experiment with shallower or deeper models.
### Candidate model
We can adopt the same approach for the movie model. Again, we start with the `MovieModel` from the [featurization](https://www.tensorflow.org/recommenders/examples/featurization) tutorial:
```
class MovieModel(tf.keras.Model):
def __init__(self):
super().__init__()
max_tokens = 10_000
self.title_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles,mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, 32)
])
self.title_vectorizer = tf.keras.layers.experimental.preprocessing.TextVectorization(
max_tokens=max_tokens)
self.title_text_embedding = tf.keras.Sequential([
self.title_vectorizer,
tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
tf.keras.layers.GlobalAveragePooling1D(),
])
self.title_vectorizer.adapt(movies)
def call(self, titles):
return tf.concat([
self.title_embedding(titles),
self.title_text_embedding(titles),
], axis=1)
```
And expand it with hidden layers:
```
class CandidateModel(tf.keras.Model):
"""Model for encoding movies."""
def __init__(self, layer_sizes):
"""Model for encoding movies.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
self.embedding_model = MovieModel()
# Then construct the layers.
self.dense_layers = tf.keras.Sequential()
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
```
### Combined model
With both `QueryModel` and `CandidateModel` defined, we can put together a combined model and implement our loss and metrics logic. To make things simple, we'll enforce that the model structure is the same across the query and candidate models.
```
class MovielensModel(tfrs.models.Model):
def __init__(self, layer_sizes):
super().__init__()
self.query_model = QueryModel(layer_sizes)
self.candidate_model = CandidateModel(layer_sizes)
self.task = tfrs.tasks.Retrieval(
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(self.candidate_model),
),
)
def compute_loss(self, features, training=False):
# We only pass the user id and timestamp features into the query model. This
# is to ensure that the training inputs would have the same keys as the
# query inputs. Otherwise the discrepancy in input structure would cause an
# error when loading the query model after saving it.
query_embeddings = self.query_model({
"user_id": features["user_id"],
"timestamp": features["timestamp"],
})
movie_embeddings = self.candidate_model(features["movie_title"])
return self.task(
query_embeddings, movie_embeddings, compute_metrics=not training)
```
## Training the model
### Prepare the data
We first split the data into a training set and a testing set.
```
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
# Split the data into a training set and a testing set
# TODO 2a -- your code goes here
```
### Shallow model
We're ready to try out our first, shallow, model!
**NOTE: The cell below takes approximately 15-20 minutes to run to completion.**
```
num_epochs = 300
model = MovielensModel([32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
one_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
```
This gives us a top-100 accuracy of around 0.27. We can use this as a reference point for evaluating deeper models.
### Deeper model
What about a deeper model with two layers?
**NOTE: The cell below takes approximately 15-20 minutes to run to completion.**
```
model = MovielensModel([64, 32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
two_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
```
The accuracy here is 0.29, quite a bit better than the shallow model.
We can plot the validation accuracy curves to illustrate this:
```
num_validation_runs = len(one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"])
epochs = [(x + 1)* 5 for x in range(num_validation_runs)]
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
```
Even early on in the training, the larger model has a clear and stable lead over the shallow model, suggesting that adding depth helps the model capture more nuanced relationships in the data.
However, even deeper models are not necessarily better. The following model extends the depth to three layers:
**NOTE: The cell below takes approximately 15-20 minutes to run to completion.**
```
# Model extends the depth to three layers
# TODO 3a -- your code goes here
```
In fact, we don't see improvement over the shallow model:
```
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.plot(epochs, three_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="3 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
```
This is a good illustration of the fact that deeper and larger models, while capable of superior performance, often require very careful tuning. For example, throughout this tutorial we used a single, fixed learning rate. Alternative choices may give very different results and are worth exploring.
With appropriate tuning and sufficient data, the effort put into building larger and deeper models is in many cases well worth it: larger models can lead to substantial improvements in prediction accuracy.
## Next Steps
In this tutorial we expanded our retrieval model with dense layers and activation functions. To see how to create a model that can perform not only retrieval tasks but also rating tasks, take a look at [the multitask tutorial](https://www.tensorflow.org/recommenders/examples/multitask).
### Recommendations with MovieTweetings: Getting to Know The Data
Throughout this lesson, you will be working with the [MovieTweetings Data](https://github.com/sidooms/MovieTweetings/tree/master/recsyschallenge2014). To get started, you can read more about this project and the dataset from the [publication here](http://crowdrec2013.noahlab.com.hk/papers/crowdrec2013_Dooms.pdf).
**Note:** There are solutions to each of the notebooks available by hitting the orange jupyter logo in the top left of this notebook. Additionally, you can watch me work through the solutions on the screencasts that follow each workbook.
To get started, read in the libraries and the two datasets you will be using throughout the lesson using the code below.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tests as t
%matplotlib inline
# Read in the datasets
movies = pd.read_csv('https://raw.githubusercontent.com/sidooms/MovieTweetings/master/latest/movies.dat', delimiter='::', header=None, names=['movie_id', 'movie', 'genre'], dtype={'movie_id': object}, engine='python')
reviews = pd.read_csv('https://raw.githubusercontent.com/sidooms/MovieTweetings/master/latest/ratings.dat', delimiter='::', header=None, names=['user_id', 'movie_id', 'rating', 'timestamp'], dtype={'movie_id': object, 'user_id': object, 'timestamp': object}, engine='python')
```
#### 1. Take a Look At The Data
Take a look at the data and use your findings to fill in the dictionary below with the correct responses to show your understanding of the data.
```
# number of movies
print("The number of movies is {}.".format(movies.shape[0]))
# number of ratings
print("The number of ratings is {}.".format(reviews.shape[0]))
# unique users
print("The number of unique users is {}.".format(reviews.user_id.nunique()))
# missing ratings
print("The number of missing reviews is {}.".format(int(reviews.rating.isnull().mean()*reviews.shape[0])))
# the average, min, and max ratings given
print("The average, minimum, and max ratings given are {}, {}, and {}, respectively.".format(np.round(reviews.rating.mean(), 0), reviews.rating.min(), reviews.rating.max()))
# number of different genres
genres = []
for val in movies.genre:
try:
genres.extend(val.split('|'))
except AttributeError:
pass
# we end up needing this later
genres = set(genres)
print("The number of genres is {}.".format(len(genres)))
# Use your findings to match each variable to the correct statement in the dictionary
a = 53968
b = 10
c = 7
d = 31245
e = 15
f = 0
g = 4
h = 712337
i = 28
dict_sol1 = {
'The number of movies in the dataset': d,
'The number of ratings in the dataset': h,
'The number of different genres': i,
'The number of unique users in the dataset': a,
'The number missing ratings in the reviews dataset': f,
'The average rating given across all ratings': c,
'The minimum rating given across all ratings': f,
'The maximum rating given across all ratings': b
}
# Check your solution
t.q1_check(dict_sol1)
```
#### 2. Data Cleaning
Next, we need to pull some additional relevant information out of the existing columns.
For each of the datasets, there are a couple of cleaning steps we need to take care of:
#### Movies
* Pull the date from the title and create new column
* Dummy the date column with 1's and 0's for each century of a movie (1800's, 1900's, and 2000's)
* Dummy the genre column with 1's and 0's for each genre
#### Reviews
* Create a date out of time stamp
* Create month and year 1/0 dummy columns from the timestamp
You can check your results against the header of my solution by running the cell below with the **show_clean_dataframes** function.
```
# pull date if it exists
create_date = lambda val: val[-5:-1] if val[-1] == ')' else np.nan
# apply the function to pull the date
movies['date'] = movies['movie'].apply(create_date)
# Return century of movie as a dummy column
def add_movie_year(val):
    try:
        return 1 if val[:2] == yr else 0
    except TypeError:
        # missing dates are NaN (a float), which can't be sliced
        return 0
# Apply function
for yr in ['18', '19', '20']:
movies[str(yr) + "00's"] = movies['date'].apply(add_movie_year)
# Function to split and return values for columns
def split_genres(val):
try:
if val.find(gene) >-1:
return 1
else:
return 0
except AttributeError:
return 0
# Apply function for each genre
for gene in genres:
movies[gene] = movies['genre'].apply(split_genres)
movies.head() #Check what it looks like
import datetime
change_timestamp = lambda val: datetime.datetime.fromtimestamp(int(val)).strftime('%Y-%m-%d %H:%M:%S')
reviews['date'] = reviews['timestamp'].apply(change_timestamp)
reviews.date[0][:4] # year
reviews.date[0][5:7] # month
# Create month dummy columns
for month in range(1,13):
    # compare as integers: x[5:7] is zero-padded ('01'), while str(month) is not
    reviews['month_' + str(month)] = reviews['date'].apply(lambda x: 1 if int(x[5:7]) == month else 0)
# Create year dummy columns
for yr in range(2013, 2019):
reviews['year_' + str(yr)] = reviews['date'].apply(lambda x: 1 if x[:4] == str(yr) else 0)
reviews.head()
# now reviews and movies are the final dataframes with the necessary columns
reviews.to_csv('./reviews_clean.csv')
movies.to_csv('./movies_clean.csv')
# pass your movies and reviews dataframes
reviews_new, movies_new = t.show_clean_dataframes()
```
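As a standalone sanity check of the timestamp conversion used above: the notebook's `change_timestamp` uses `datetime.datetime.fromtimestamp`, which applies the local timezone, so this sketch uses the UTC variant to keep the result machine-independent.

```python
import datetime

# UTC variant of the notebook's change_timestamp helper
to_date_utc = lambda val: datetime.datetime.utcfromtimestamp(int(val)).strftime('%Y-%m-%d %H:%M:%S')

stamp = '1388534400'  # 2014-01-01 00:00:00 UTC
assert to_date_utc(stamp) == '2014-01-01 00:00:00'
```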
```
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append('../')
import wtools
#%matplotlib notebook
# Make the random numbers predictable for testing
np.random.seed(0)
```
# Making Gridded/Mesh Data
## A simple case
First, create a dictionary of your model data. For this example, we create a uniformly discretized 3D volume of data. The first array is some random data of shape `nx` by `ny` by `nz` (10 by 10 by 10 in the snippet below); the second array holds spatial reference data ranging from 0 to 1000, reshaped in a contiguous fashion (x, then y, then z), which we can use as a reference when checking how our data is displayed.
```
models = {
'rand': np.random.randn(10,10,10),
'spatial': np.linspace(0, 1000, 1000).reshape((10,10,10)),
}
```
Once you have your model dictionary created, create a `Grid` object and feed it your models as below. Note that we print this object to ensure it was constructed properly; if something is missing, fill it in. On the backend, this print/output of the object calls `grid.validate()`, which ensures the grid is ready for use!
```
grid = wtools.Grid(models=models)
grid
```
Now let's use this new `Grid` object. Please reference `Grid`'s code docs on https://wtools.readthedocs.io/en/latest/ to understand what attributes and methods are present.
```
grid.keys
grid.x0
grid.hx
_ = grid.save('output/simple.json')
grid.plot_3d_slicer('spatial', yslice=3.5)
```
## Spatially Referenced Grids
Now, what if you know the spatial reference of your grid? Then go ahead and pass the origin and cell spacings to the `Grid` object upon initialization. For this example, we will recreate some volumetric data and build a spatial reference frame.
```
nx, ny, nz = 12, 20, 15
models = {
'rand': np.random.randn(nx,ny,nz),
'spatial': np.linspace(0, nx*ny*nz, nx*ny*nz).reshape((nx,ny,nz)),
}
```
Now let's build the cell spacings along each axis for our gridded data. It is very important to note that the cells do NOT have to be uniformly sized.
```
origin = (100.0, 350.0, -1000.0)
xs = np.array([100, 50] + [10]*(nx-4) + [50, 100])
ys = np.array([100, 50] + [10]*(ny-4) + [50, 100])
zs = np.array([10]*(nz-6) + [25, 50, 75, 100, 150, 200])
grid = wtools.Grid(models=models, x0=origin, h=[xs, ys, zs])
grid
```
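Since the grid is defined by an origin plus per-axis cell widths, the cell-edge coordinates can be recovered with a cumulative sum. A quick sketch (using a shortened, illustrative set of widths, not the exact arrays above):

```python
import numpy as np

origin = (100.0, 350.0, -1000.0)
xs = np.array([100, 50, 10, 10, 50, 100], dtype=float)  # non-uniform cell widths

# Edges along x: the origin, then origin + running total of the widths
x_edges = origin[0] + np.concatenate(([0.0], np.cumsum(xs)))
assert x_edges[0] == 100.0               # first edge is the origin
assert x_edges[-1] == 100.0 + xs.sum()   # last edge is origin + total extent

# Cell centers sit midway between consecutive edges
x_centers = 0.5 * (x_edges[:-1] + x_edges[1:])
assert len(x_centers) == len(xs)
```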
Now let's display this meshed data with a plotting resolution that represents the model discretization.
```
grid.plot_3d_slicer('spatial')
```
## Now Check that File I/O works both ways
```
_ = grid.save('output/advanced.json')
load = wtools.Grid.load_mesh('output/advanced.json')
load
load.equal(grid), grid.equal(load)
```
# PVGeo
Note that we have also overridden the `toVTK` method so that serialized `Grid` objects can be loaded directly into ParaView using the `wplugins.py` file delivered in this repo.
```
#type(load.toVTK())
```
# [How to train an object detection model with mmdetection](https://www.dlology.com/blog/how-to-train-an-object-detection-model-with-mmdetection/) | DLology blog
```
# You can add more model configs like below.
MODELS_CONFIG = {
'faster_rcnn_r50_fpn_1x': {
'config_file': 'configs/pascal_voc/faster_rcnn_r50_fpn_1x_voc0712.py'
},
'cascade_rcnn_r50_fpn_1x': {
'config_file': 'configs/cascade_rcnn_r50_fpn_1x.py',
},
'retinanet_r50_fpn_1x': {
'config_file': 'configs/retinanet_r50_fpn_1x.py',
}
}
```
## Your settings
```
# TODO: change URL to your fork of my repository if necessary.
git_repo_url = 'https://github.com/Leo10y/mmdetection_object_detection_demo'
# Pick the model you want to use
# Select a model in `MODELS_CONFIG`.
selected_model = 'faster_rcnn_r50_fpn_1x' # 'cascade_rcnn_r50_fpn_1x'
# Total training epochs.
total_epochs = 8
# Name of the config file.
config_file = MODELS_CONFIG[selected_model]['config_file']
```
## Install Open MMLab Detection Toolbox
Restart the runtime if you have issues importing `mmdet` later on.
```
import os
from os.path import exists, join, basename, splitext
%cd /content
project_name = os.path.abspath(splitext(basename(git_repo_url))[0])
mmdetection_dir = os.path.join(project_name, "mmdetection")
if not exists(project_name):
# clone "depth 1" will only get the latest copy of the relevant files.
!git clone -q --recurse-submodules --depth 1 $git_repo_url
print("Update mmdetection repo")
!cd {mmdetection_dir} && git checkout master && git pull
# dependencies
!pip install -q mmcv terminaltables
# build
!cd {mmdetection_dir} && python setup.py install
!pip install -r {os.path.join(mmdetection_dir, "requirements.txt")}
import sys
sys.path.append(mmdetection_dir)
import time
import matplotlib
import matplotlib.pylab as plt
plt.rcParams["axes.grid"] = False
```
## Stash the repo if you want to re-modify `voc.py` and the config file.
```
!cd {mmdetection_dir} && git config --global user.email "leonidas.katsaitis@web.de" && git config --global user.name "Leo10y" && git stash
```
## Modify `voc.py`
### parse data classes
```
%cd {project_name}
import os
import glob
import pandas as pd
import xml.etree.ElementTree as ET
anno_path = os.path.join(project_name, "data/VOC2007/Annotations")
voc_file = os.path.join(mmdetection_dir, "voc.py")
classes_names = []
xml_list = []
for xml_file in glob.glob(anno_path + "/*.xml"):
tree = ET.parse(xml_file)
root = tree.getroot()
for member in root.findall("object"):
classes_names.append(member[0].text)
classes_names = list(set(classes_names))
classes_names.sort()
classes_names
import re
fname = voc_file
with open(fname) as f:
s = f.read()
s = re.sub('CLASSES = \(.*?\)',
'CLASSES = ({})'.format(", ".join(["\'{}\'".format(name) for name in classes_names])), s, flags=re.S)
with open(fname, 'w') as f:
f.write(s)
!cat {voc_file}
```
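The `re.sub` call above can be exercised on a toy snippet to confirm what it rewrites (the sample string here is hypothetical, not the real `voc.py`):

```python
import re

# Hypothetical slice of a voc.py-style file, with CLASSES spanning two lines
sample = "class VOCDataset:\n    CLASSES = ('aeroplane',\n               'bicycle')\n"
names = ['cat', 'dog']

# Same pattern as above: re.S lets .*? match across newlines, and the
# non-greedy quantifier stops at the first closing parenthesis.
rewritten = re.sub("CLASSES = \\(.*?\\)",
                   'CLASSES = ({})'.format(", ".join("'{}'".format(n) for n in names)),
                   sample, flags=re.S)
assert "CLASSES = ('cat', 'dog')" in rewritten
```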
## Modify config file
```
import os
config_fname = os.path.join(project_name, 'mmdetection', config_file)
assert os.path.isfile(config_fname), '`{}` not exist'.format(config_fname)
config_fname
fname = config_fname
with open(fname) as f:
s = f.read()
work_dir = re.findall(r"work_dir = \'(.*?)\'", s)[0]
# Update `num_classes` including `background` class.
s = re.sub('num_classes=.*?,',
'num_classes={},'.format(len(classes_names) + 1), s)
s = re.sub('ann_file=.*?\],',
"ann_file=data_root + 'VOC2007/ImageSets/Main/trainval.txt',", s, flags=re.S)
s = re.sub('total_epochs = \d+',
'total_epochs = {} #'.format(total_epochs), s)
if "CocoDataset" in s:
s = re.sub("dataset_type = 'CocoDataset'",
"dataset_type = 'VOCDataset'", s)
s = re.sub("data_root = 'data/coco/'",
"data_root = 'data/VOCdevkit/'", s)
s = re.sub("annotations/instances_train2017.json",
"VOC2007/ImageSets/Main/trainval.txt", s)
s = re.sub("annotations/instances_val2017.json",
"VOC2007/ImageSets/Main/test.txt", s)
s = re.sub("train2017", "VOC2007", s)
s = re.sub("val2017", "VOC2007", s)
else:
    s = re.sub('img_prefix=.*?\],',
               "img_prefix=data_root + 'VOC2007/',", s)
with open(fname, 'w') as f:
f.write(s)
!cat {config_fname}
%cd {mmdetection_dir}
!python setup.py install
os.makedirs("data/VOCdevkit", exist_ok=True)
voc2007_dir = os.path.join(project_name, "data/VOC2007")
os.system("ln -s {} data/VOCdevkit".format(voc2007_dir))
!python tools/train.py {config_fname}
checkpoint_file = os.path.join(mmdetection_dir, work_dir, "latest.pth")
assert os.path.isfile(
checkpoint_file), '`{}` not exist'.format(checkpoint_file)
checkpoint_file
```
## Test predict
Turn down the `score_thr` if you think the model is missing bboxes.
Turn up the `score_thr` if you see too many overlapping bboxes with low scores.
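In miniature, `score_thr` just filters detections by confidence. The toy data below is illustrative and does not use mmdetection's real result structures:

```python
# Keep only detections whose confidence clears the threshold
detections = [('car', 0.95), ('car', 0.42), ('person', 0.81), ('dog', 0.15)]
score_thr = 0.8
kept = [(label, s) for label, s in detections if s >= score_thr]
assert kept == [('car', 0.95), ('person', 0.81)]
```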
```
import time
import matplotlib
import matplotlib.pylab as plt
plt.rcParams["axes.grid"] = False
import mmcv
from mmcv.runner import load_checkpoint
import mmcv.visualization.image as mmcv_image
# fix for colab
def imshow(img, win_name='', wait_time=0): plt.figure(
figsize=(50, 50)); plt.imshow(img)
mmcv_image.imshow = imshow
from mmdet.models import build_detector
from mmdet.apis import inference_detector, show_result, init_detector
%cd {mmdetection_dir}
score_thr = 0.8
# build the model from a config file and a checkpoint file
model = init_detector(config_fname, checkpoint_file)
# test a single image and show the results
img = 'data/VOCdevkit/VOC2007/JPEGImages/15.jpg'
result = inference_detector(model, img)
show_result(img, result, model.CLASSES,
score_thr=score_thr, out_file="result.jpg")
from IPython.display import Image
Image(filename='result.jpg')
```
## Download the config file
```
from google.colab import files
files.download(config_fname)
```
## Download checkpoint file.
### Option 1: Upload the checkpoint file to your Google Drive
Then download it from your Google Drive to your local file system.
During this step, you will be prompted to enter a token.
```
# Install the PyDrive wrapper & import libraries.
# This only needs to be done once in a notebook.
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once in a notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
fname = os.path.basename(checkpoint_file)
# Create & upload a text file.
uploaded = drive.CreateFile({'title': fname})
uploaded.SetContentFile(checkpoint_file)
uploaded.Upload()
print('Uploaded file with ID {}'.format(uploaded.get('id')))
```
### Option 2: Download the checkpoint file directly to your local file system
This method may not be stable when downloading large files like the model checkpoint file. Try **option 1** instead if it does not work.
```
files.download(checkpoint_file)
```
# 9. Files
## 9.1 What is a file?
A file is a container of information. In a file, information is stored as a set of consecutive bytes. Inside the file, the information is organized according to a specific format (text, binary, executable, etc.).
Files are represented as sequences of ones (1) and zeros (0) so they can be processed by the system (the computer).
A file is organized into three parts:
1. Header - holds metadata about the file's contents (name, size, type, etc.).
2. Data - the contents of the file.
3. End of file - EOF (End-Of-File).
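These three parts can be observed from Python's standard library: the filesystem keeps header-like metadata (exposed via `os.stat`), `read()` returns the data, and a read past the end returns an empty string, signalling EOF. A minimal sketch using a temporary file:

```python
import os
import tempfile

# Create a small temporary file to inspect
with tempfile.NamedTemporaryFile('w+t', suffix='.txt', delete=False) as f:
    f.write('hello')
    path = f.name

meta = os.stat(path)            # header-like metadata: size, timestamps, etc.
assert meta.st_size == 5

with open(path, 'rt') as f:
    assert f.read() == 'hello'  # the data
    assert f.read() == ''       # EOF: reading past the end returns ''

os.remove(path)
```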
## 9.2 Basic operations on a file
Example 9.2.1: Get the current path of the file being edited.
```
import pathlib
resultado = pathlib.Path().resolve()
resultado
```
Example 9.2.2: Get the name of the current file.
```
%%javascript
IPython.notebook.kernel.execute(`notebookname = '${window.document.getElementById("notebook_name").innerHTML}'`)
notebookname
nombre_archivo = notebookname + '.ipynb'
nombre_archivo
```
Example 9.2.3: Check whether a file exists.
```
dir(resultado)
resultado.absolute
resultado.absolute()
resultado = str(resultado)
resultado
nombre_archivo
import os
ruta_absoluta = os.path.join(resultado, nombre_archivo)
ruta_absoluta
os.path.exists(ruta_absoluta)
ruta_absoluta_no_existente = os.path.join(resultado, 'taller01_archivos.ipynb')
ruta_absoluta_no_existente
os.path.exists(ruta_absoluta_no_existente)
```
**Example 9.2.4**:
Read the contents of a file.
```
ruta_absoluta
def leer_contenido_archivo(ruta_archivo):
    """
    Reads the contents of the file at the given path.
    :param ruta_archivo:string: Path of the file to read.
    :return NoneType.
    """
    if os.path.exists(ruta_archivo):
        if os.path.isfile(ruta_archivo):
            with open(ruta_archivo, 'rt', encoding='utf-8') as f:
                for l in f.readlines():
                    print(l)
        else:
            print('ERROR: The given path is not a file.')
    else:
        print('ERROR: The file does not exist.')
help(leer_contenido_archivo)
leer_contenido_archivo(ruta_absoluta_no_existente)
leer_contenido_archivo(resultado)
leer_contenido_archivo(ruta_absoluta)
```
**Example 9.2.5**
Access the contents of an existing plain-text file.
```
ruta_archivo_paises = 'T001-09-paises.txt'
ruta_archivo_paises
os.path.exists(ruta_archivo_paises)
os.path.isdir(ruta_archivo_paises)
os.path.isfile(ruta_archivo_paises)
with open(ruta_archivo_paises, 'rt', encoding='utf-8') as f:
for l in f.readlines():
print(l, end='')
```
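Note why `print(l, end='')` is used above: each line returned by `readlines()` keeps its trailing `'\n'`, and the default `print` would add a second newline, double-spacing the output. A quick demonstration with an in-memory file:

```python
import io

# io.StringIO behaves like an open text file
f = io.StringIO('Colombia\nPeru\n')
lines = f.readlines()
assert lines == ['Colombia\n', 'Peru\n']   # trailing newlines are preserved
assert lines[0].strip() == 'Colombia'      # strip() removes them if needed
```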
## 9.3 Writing files
**Example 9.3.1**
Ask the user to enter ten numbers and store them in a list. Once those values have been captured, we will save them to a plain-text file.
```
numeros = []
for i in range(10):
    while True:
        try:
            numero = float(input('Enter a number: '))
            break
        except ValueError:
            print()
            print('MESSAGE: You must enter a value that corresponds to a number.')
            print()
    print()
    numeros.append(numero)
print()
ruta_archivo_numeros = 'T001-09-numeros.txt'
with open(ruta_archivo_numeros, 'wt', encoding='utf-8') as f:
for n in numeros:
f.write(f'{n}\n')
with open(ruta_archivo_numeros, 'rt', encoding='utf-8') as f:
for l in f.readlines():
print(l, end='')
with open(ruta_archivo_numeros, 'rt', encoding='utf-8') as f:
linea = f.readline()
print(linea, end='')
with open(ruta_archivo_numeros, 'rt', encoding='utf-8') as f:
    # Calling readline() repeatedly returns one line per call,
    # and an empty string ('') once the end of the file is reached.
    linea = f.readline()
    while linea != '':
        print(linea, end='')
        linea = f.readline()
len(linea)  # 0: the last readline() returned '' (end of file)
```
**Example 9.3.2**
Sum the contents of the file that holds the numbers (`T001-09-numeros.txt`).
```
def leer_contenido_archivo(ruta_archivo):
    """
    Reads the contents of a file.
    :param ruta_archivo:str: Absolute or relative path of the file to read.
    :return list: Contents of the file.
    """
    contenido = []
    with open(ruta_archivo, 'rt', encoding='utf-8') as f:
        for l in f.readlines():
            contenido.append(l.strip())
    return contenido
help(leer_contenido_archivo)
ruta_archivo_numeros
resultado = leer_contenido_archivo(ruta_archivo_numeros)
resultado
help(sum)
# suma_numeros = sum(resultado) # TypeError
type(resultado[0])
type(resultado[-1])
suma_numeros = sum(float(e) for e in resultado)
suma_numeros
type(suma_numeros)
```
**Example 9.3.3**
Ask the user to type the names of countries from anywhere in the world.
The program ends when the user types the word `FIN`.
After that, we will create a file to store every country the user typed.
```
paises = []
pais = ''
while pais != 'FIN':
    while True:
        pais = input('Enter the name of a country (FIN to finish): ')
        pais = pais.strip()
        if len(pais):
            break
        else:
            print()
            print('MESSAGE: You must type a string that does not consist only of spaces.')
            print()
    if pais != 'FIN':
        paises.append(pais)
    print()
paises
len(paises)
archivo_paises = 'T001-09-paises.txt'
with open(archivo_paises, 'wt', encoding='utf-8') as f:
    for p in paises:
        f.write(f'{p}\n')
with open(archivo_paises, 'rt', encoding='utf-8', newline='') as f:
    for l in f.readlines():
        # print(l.replace('\n', ''))
        print(l, end='')
help(open)
# The default mode when opening a file for reading or writing is text ('t'):
with open(archivo_paises, 'r', encoding='utf-8', newline='') as f:
    for l in f.readlines():
        print(l, end='')
otros_paises = ['Guatemala', 'España', 'India', 'Grecia', 'El Congo', 'Sur África', 'Panamá', 'Uruguay', 'Canadá']
otros_paises
len(otros_paises)
archivo_paises
with open(archivo_paises, 'at', encoding='utf-8') as f:
    for p in otros_paises:
        f.write(f'{p}\n')
```
**Example 9.3.4**
Select a directory of the system (a folder), and save the listing of its files and subdirectories to a plain-text file.
```
help(os.listdir)
os.listdir()
ruta_directorio = r'C:\Windows'
ruta_archivos_directorio = 'T001-09-archivos.txt'
if os.path.exists(ruta_directorio):
    with open(ruta_archivos_directorio, 'wt', encoding='utf-8') as f:
        for a in os.listdir(ruta_directorio):
            f.write(f'{a}\n')
```
**Example 9.3.5**
Read the first n lines of a text file. A function must be defined.
```
from itertools import islice
def leer_n_lineas_archivo(ruta_archivo, n):
    """
    Reads the first n lines of a plain-text file.
    ruta_archivo: Path of the file to read.
    n: number of lines to read.
    """
    with open(ruta_archivo, 'rt', encoding='utf-8') as f:
        for l in islice(f, n):
            print(l, end='')
help(leer_n_lineas_archivo)
ruta_archivos_directorio
leer_n_lineas_archivo(ruta_archivos_directorio, 5)
leer_n_lineas_archivo(archivo_paises, 3)
leer_n_lineas_archivo(archivo_paises, 10)
leer_n_lineas_archivo(ruta_archivos_directorio, 20)
```
**Example 9.3.6**
Read the last n lines of a text file. A function must be defined.
```
import os
def leer_n_ultimas_lineas(ruta_archivo, n):
    """
    Reads the last n lines of a plain-text file by seeking backwards in
    ever larger steps from the end of the file.
    ruta_archivo: Path of the file to read.
    n: number of lines to read.
    """
    tamagnio_bufer = 8192
    tamagnio_archivo = os.stat(ruta_archivo).st_size
    contador = 0
    datos = []
    with open(ruta_archivo, 'rt', encoding='utf-8') as f:
        while True:
            contador += 1
            posicion = tamagnio_archivo - tamagnio_bufer * contador
            if posicion <= 0:
                # We reached the beginning of the file: read it all.
                f.seek(0)
                datos = f.readlines()
                break
            # Note: seeking to an arbitrary offset in text mode assumes the
            # offset does not land in the middle of a multi-byte character.
            f.seek(posicion)
            datos = f.readlines()
            if len(datos) >= n:
                break
    return datos[-n:]
help(leer_n_ultimas_lineas)
resultado = leer_n_ultimas_lineas(ruta_archivos_directorio, 5)
resultado = [r.strip() for r in resultado]
resultado
resultado = leer_n_ultimas_lineas(archivo_paises, 5)
resultado = [r.strip() for r in resultado]
resultado
```
**Example 9.3.7**
Read a file of words and determine which word is the longest (greatest number of characters).
```
def palabra_mas_extensa(ruta_archivo):
    """
    Finds the longest word(s) in a text file.
    ruta_archivo: Path of the file to read.
    return: A list with the longest word(s). If the path does not exist
            or is not a file, returns None.
    """
    if os.path.exists(ruta_archivo) and os.path.isfile(ruta_archivo):
        with open(ruta_archivo, 'rt', encoding='utf-8') as f:
            palabras = f.read().split('\n')
        mayor_longitud = len(max(palabras, key=len))
        return [p for p in palabras if len(p) == mayor_longitud]
    return None
help(palabra_mas_extensa)
archivo_paises
resultado = palabra_mas_extensa(archivo_paises)
resultado
```
**Example 9.3.8**
Write a function that returns the size (in bytes) of a plain-text file.
```
import os
def obtener_tamagnio_archivo(ruta_archivo):
    """
    Returns the number of bytes a file occupies.
    ruta_archivo: Path of the file.
    return: Size of the file in bytes, or None if the path does not
            exist or is not a file.
    """
    if os.path.exists(ruta_archivo) and os.path.isfile(ruta_archivo):
        return os.stat(ruta_archivo).st_size
    return None
help(obtener_tamagnio_archivo)
obtener_tamagnio_archivo(archivo_paises)
obtener_tamagnio_archivo(ruta_archivos_directorio)
```
## 9.4 Writing and reading binary files with the `pickle` module
The `pickle` module lets us write Python objects in a binary representation and read them back.
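As a minimal in-memory sketch of that round trip (before writing any file), `pickle.dumps()` serializes an object to `bytes` and `pickle.loads()` restores it:

```python
import pickle

# Serialize a dictionary to bytes and restore it.
capitales = {'Colombia': 'Bogotá', 'Perú': 'Lima'}
datos = pickle.dumps(capitales)       # bytes
restaurado = pickle.loads(datos)      # an equal dict
print(restaurado == capitales)        # True
```

`pickle.dump()` / `pickle.load()` do the same thing against a file object opened in binary mode, as the example below shows.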
**Example 9.4.1**
Create a dictionary with country names (keys) and their respective capitals (values).
Then create a binary file using the `pickle` module.
Finally, read that file back to restore the contents of the `paises` dictionary.
```
paises = {
    'Colombia': 'Bogotá',
    'Perú': 'Lima',
    'Alemania': 'Berlín',
    'Argentina': 'Buenos Aires',
    'Estados Unidos': 'Washington',
    'Rusia': 'Moscú',
    'Ecuador': 'Quito'
}
type(paises)
len(paises)
paises
def es_ruta_valida(ruta):
    """
    Checks whether a given path is valid and writable.
    WARNING: opening the file in 'w' mode truncates it if it already
    exists, so only use this check right before overwriting the file.
    ruta: Path to validate.
    return: True if the path is valid, False otherwise.
    """
    try:
        archivo = open(ruta, 'w')
        archivo.close()
        return True
    except IOError:
        return False
import os
import pickle
def guardar_datos_archivo_binario(ruta_archivo, contenido):
    """
    Saves the data of a Python object to a file.
    ruta_archivo: Path of the file where the data will be stored.
    contenido: Python object with the information to write.
    return: True once the content has been written to disk.
    raises: Exception when the path does not correspond to a writable file.
    """
    if es_ruta_valida(ruta_archivo):
        with open(ruta_archivo, 'wb') as f:
            pickle.dump(contenido, f)
        return True
    else:
        raise Exception(f'The path ({ruta_archivo}) does not correspond to a writable file.')
help(guardar_datos_archivo_binario)
archivo_objeto_paises = 'T001-09-objeto-paises.pkl'
guardar_datos_archivo_binario(archivo_objeto_paises, paises)
import pickle
def leer_contenido_archivo_binario(ruta_archivo):
    """
    Reads the contents of a binary file.
    ruta_archivo: Path of the binary file to read.
    return: Python object recovered from the binary file, or None if
            the path does not exist or is not a file.
    """
    if os.path.exists(ruta_archivo) and os.path.isfile(ruta_archivo):
        with open(ruta_archivo, 'rb') as f:
            return pickle.load(f)
    return None
help(leer_contenido_archivo_binario)
archivo_objeto_paises
resultado = leer_contenido_archivo_binario(archivo_objeto_paises)
type(resultado)
len(resultado)
resultado
```
## 9.5 Reading and Writing CSV Files
In a CSV file (*Comma Separated Values*), the content (the records or rows) is structured by specifying a separator character between the individual pieces of data that make up each record (row).
id;marca;cpu;ram;ssd<br>
1001;MSi;Intel;32;500<br>
1002;Apple;Intel;16;720<br>
1003;Clone;Intel;128;10000
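Each of the rows above is simply a line of text; before turning to the `csv` module, a single record can be split into its fields with `str.split()` on the `;` delimiter:

```python
fila = '1001;MSi;Intel;32;500'
campos = fila.split(';')
print(campos)  # ['1001', 'MSi', 'Intel', '32', '500']
```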
### 9.5.1 Reading a CSV file
A CSV file can be opened directly with the `open()` function and its content explored with a `for` loop over the result of `readlines()`.
```
archivo_computadores = 'T001-09-computadores.csv'
with open(archivo_computadores, 'rt', encoding='utf-8') as f:
    for l in f.readlines():
        print(l, end='')
import csv
with open(archivo_computadores, 'rt', encoding='utf-8') as f:
    archivo_csv = csv.reader(f, delimiter=';')
    for r in archivo_csv:
        print(r)
```
Reading a CSV file with the `DictReader` class:
```
import csv
with open(archivo_computadores, 'rt', encoding='utf-8') as f:
    registros = csv.DictReader(f, delimiter=';')
    for r in registros:
        print(r['id'], r['marca'])
import csv
with open(archivo_computadores, 'rt', encoding='utf-8') as f:
    registros = csv.DictReader(f, delimiter=';')
    total_ssd = 0
    for r in registros:
        total_ssd += int(r['ssd'])
print('Total storage across the three computers:', total_ssd, 'GB.')
```
### 9.5.2 Using the `quotechar` argument of the `csv.reader()` function
The `quotechar` argument specifies the character that encloses text which itself contains the delimiter character.
documento,nombre_completo,direccion<br>
123456789,Daniela Ortiz,Carrera 10 #75-43, Casa 38<br>
654987321,Julio Ordoñez,Vereda El Mortiño
```
encabezado = ['documento', 'nombre_completo', 'direccion']
datos = [
    ['123456789', 'Daniela Ortiz', 'Carrera 10 #75-43, Casa 38'],
    ['654987321', 'Julio Ordoñez', 'Vereda El Mortiño']
]
```
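To see `quotechar` in action without touching the file system, the quoted record can be parsed from an in-memory string via `io.StringIO` (a minimal sketch):

```python
import csv
import io

linea = '123456789,Daniela Ortiz,"Carrera 10 #75-43, Casa 38"\n'
registros = list(csv.reader(io.StringIO(linea), delimiter=',', quotechar='"'))
print(registros[0])
# ['123456789', 'Daniela Ortiz', 'Carrera 10 #75-43, Casa 38']
```

The comma inside the quoted address stays part of a single field instead of splitting it in two.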
Write the contents of several lists to a CSV file:
```
import csv
personas = 'T001-09-personas.csv'
with open(personas, 'wt', encoding='utf-8', newline='') as f:
    escritura_csv = csv.writer(f, delimiter=',')
    escritura_csv.writerow(encabezado)
    for d in datos:
        escritura_csv.writerow(d)
```
Open the newly created CSV file:
```
with open(personas, 'rt', encoding='utf-8') as f:
    registros = csv.DictReader(f, quotechar='"')
    for r in registros:
        print(r)
```
## 9.6 Reading CSV files with the Pandas library
```
import pandas as pd
pd.__version__
help(pd.read_csv)
df = pd.read_csv(personas)
df
df.info()
df = pd.read_csv(archivo_computadores)
df
df = pd.read_csv(archivo_computadores, sep=None, engine='python')  # let pandas sniff the delimiter
df
df = pd.read_csv(archivo_computadores, sep=';')
df
```
Reading a CSV file from a URL:
```
df = pd.read_csv('https://raw.githubusercontent.com/favstats/demdebates2020/master/data/debates.csv')
df.head()
df.tail()
df.info()
df.head(20)
df.tail(30)
df.describe()
```
## 9.7 Writing CSV files with the Pandas library
```
help(df.to_csv)
type(df)
df.to_csv('T001-09-debate.csv', index=False)
```
---
```
from google.colab import drive
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
import seaborn as sns
import os
drive.mount('/content/drive')
# Importing Deep Learning Libraries
from keras.preprocessing.image import load_img, img_to_array
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense,Input,Dropout,GlobalAveragePooling2D,Flatten,Conv2D,BatchNormalization,Activation,MaxPooling2D
from keras.models import Model,Sequential
from keras.optimizers import Adam,SGD,RMSprop
from keras import callbacks
from sklearn.model_selection import train_test_split
%cd drive/MyDrive/facial_expression/fer2013
df = pd.read_csv('fer2013.csv')
label_to_text = {0:'anger', 1:'disgust', 2:'fear', 3:'happiness', 4: 'sadness', 5: 'surprise', 6: 'neutral'}
img_array = np.stack(df.pixels.apply(lambda x: np.array(x.split(' ')).reshape(48, 48, 1).astype('float32')), axis=0)
labels = df.emotion.values
X_train, X_test, y_train, y_test = train_test_split(img_array, labels, test_size=0.1, random_state=2)
X_train = X_train/255
X_test = X_test/255
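# The `pixels` column stores each image as one space-separated string of grey
# values; the same parse / reshape / normalise pipeline used above can be
# sketched on a toy 2x2 "image" (hypothetical values, not taken from fer2013):
pixeles_demo = '0 64 128 255'
img_demo = np.array(pixeles_demo.split(' ')).reshape(2, 2, 1).astype('float32') / 255
print(img_demo.shape, img_demo.max())  # (2, 2, 1) 1.0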
from keras.models import Sequential, model_from_json
from keras.layers import Dense, Activation, Dropout, Flatten, BatchNormalization, Conv2D, MaxPooling2D, AveragePooling2D
from keras.metrics import categorical_accuracy
from keras.callbacks import ModelCheckpoint
from keras.optimizers import *
basemodelRelu = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(48, 48, 1)),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Conv2D(512, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Dense(7, activation='softmax')
])
basemodelTanh = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), activation='tanh', input_shape=(48, 48, 1)),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Conv2D(128, (3, 3), activation='tanh'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Conv2D(512, (3, 3), activation='tanh'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='tanh'),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Dense(512, activation='tanh'),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Dense(7, activation='softmax')
])
basemodelSigmoid = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), activation='sigmoid', input_shape=(48, 48, 1)),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Conv2D(128, (3, 3), activation='sigmoid'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Conv2D(512, (3, 3), activation='sigmoid'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='sigmoid'),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Dense(512, activation='sigmoid'),
    tf.keras.layers.Dropout(0.25),
    #
    tf.keras.layers.Dense(7, activation='softmax')
])
basemodelRelu.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), metrics=["accuracy"])
basemodelTanh.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), metrics=["accuracy"])
basemodelSigmoid.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), metrics=["accuracy"])
try:
    os.mkdir("checkpoint")
except FileExistsError:
    pass
file_name = 'best_model.h5'
checkpoint_path= os.path.join('checkpoint',file_name)
print(checkpoint_path)
call_back = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                               monitor='val_accuracy',
                                               verbose=1,
                                               save_best_only=True,
                                               save_weights_only=False,
                                               mode='max')
from keras import callbacks
filenameRelu='model_train_newRelu.csv'
filenameTanh='model_train_newTanh.csv'
filenameSigmoid='model_train_newSigmoid.csv'
filepathRelu = os.path.join('checkpoint', filenameRelu)
filepathTanh = os.path.join('checkpoint', filenameTanh)
filepathSigmoid = os.path.join('checkpoint', filenameSigmoid)
csv_logRelu=callbacks.CSVLogger(filenameRelu, separator=',', append=False)
csv_logTanh=callbacks.CSVLogger(filenameTanh, separator=',', append=False)
csv_logSigmoid=callbacks.CSVLogger(filenameSigmoid, separator=',', append=False)
checkpointRelu = callbacks.ModelCheckpoint(filepathRelu, monitor='val_accuracy', verbose=1, save_best_only=True, save_weights_only=False, mode='max')
checkpointTanh = callbacks.ModelCheckpoint(filepathTanh, monitor='val_accuracy', verbose=1, save_best_only=True, save_weights_only=False, mode='max')
checkpointSigmoid = callbacks.ModelCheckpoint(filepathSigmoid, monitor='val_accuracy', verbose=1, save_best_only=True, save_weights_only=False, mode='max')
callbacks_listRelu = [csv_logRelu,checkpointRelu]
callbacks_listTanh = [csv_logTanh,checkpointTanh]
callbacks_listSigmoid = [csv_logSigmoid,checkpointSigmoid]
callbacks_listRelu = [csv_logRelu]
callbacks_listTanh = [csv_logTanh]
callbacks_listSigmoid = [csv_logSigmoid]
# fit() expects `callbacks` to be a list of Callback instances:
histTanh = basemodelTanh.fit(X_train, y_train, epochs=30, validation_data=(X_test, y_test), callbacks=[checkpointTanh])
histRelu = basemodelRelu.fit(X_train, y_train, epochs=30, validation_data=(X_test, y_test), callbacks=[checkpointRelu])
histSigmoid = basemodelSigmoid.fit(X_train, y_train, epochs=30, validation_data=(X_test, y_test), callbacks=[checkpointSigmoid])
from matplotlib import pyplot
%matplotlib inline
train_loss=histRelu.history['loss']
val_loss=histRelu.history['val_loss']
train_acc=histRelu.history['accuracy']
val_accTanh=histTanh.history['val_accuracy']
val_accRelu=histRelu.history['val_accuracy']
val_accSigmoid=histSigmoid.history['val_accuracy']
epochs = range(len(train_acc))
pyplot.plot(epochs,val_accRelu,'r', label='ReLU')
pyplot.plot(epochs,val_accTanh,'b', label='Tanh')
pyplot.plot(epochs,val_accSigmoid,'g', label='Sigmoid')
pyplot.title('Activations')
pyplot.ylabel('Accuracy')
pyplot.xlabel('Epochs')
pyplot.legend()
pyplot.figure()
```
---
<img src="../../../images/qiskit_header.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" align="middle">
## Purity Randomized Benchmarking
- Last Updated: July 25, 2019
- Requires: qiskit-terra 0.9, qiskit-ignis 0.2, qiskit-aer 0.3
## Introduction
**Purity Randomized Benchmarking** is a variant of the Randomized Benchmarking (RB) method, which quantifies how *coherent* the errors are. The protocol executes RB sequences consisting of Clifford gates, then calculates the *purity* $Tr(\rho^2)$ and fits the purity results to an exponentially decaying curve.
This notebook gives an example for how to use the ``ignis.verification.randomized_benchmarking`` module in order to perform purity RB.
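As a standalone NumPy illustration of the quantity being fitted (this is not Ignis code): a pure state has purity $1$, while the maximally mixed two-qubit state has purity $1/4$:

```python
import numpy as np

def purity(rho):
    """Tr(rho^2) of a density matrix rho."""
    return float(np.real(np.trace(rho @ rho)))

pure = np.zeros((4, 4)); pure[0, 0] = 1.0   # |00><00|, a pure 2-qubit state
mixed = np.eye(4) / 4                       # maximally mixed 2-qubit state
print(purity(pure), purity(mixed))          # 1.0 0.25
```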
```
#Import general libraries (needed for functions)
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
#Import the RB functions
import qiskit.ignis.verification.randomized_benchmarking as rb
#Import the measurement mitigation functions
import qiskit.ignis.mitigation.measurement as mc
#Import Qiskit classes
import qiskit
from qiskit.providers.aer import noise
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, coherent_unitary_error
from qiskit.quantum_info import state_fidelity
```
## Select the Parameters of the Purity RB Run
First, we need to choose the regular RB parameters:
- **nseeds**: The number of seeds. For each seed you will get a separate list of output circuits.
- **length_vector**: The length vector of Clifford lengths. Must be in ascending order. RB sequences of increasing length grow on top of the previous sequences.
- **rb_pattern**: A list of the form [[i],[j],[k],...] or [[i,j],[k,l],...], etc. which will make simultaneous RB sequences. All the patterns should have the same dimension, namely only 1-qubit sequences Qk or only 2-qubit sequences Qi,Qj, etc. The number of qubits is the sum of the entries.
- **length_multiplier = None**: No length_multiplier for purity RB.
- **seed_offset**: What to start the seeds at (e.g. if we want to add more seeds later).
- **align_cliffs**: If true adds a barrier across all qubits in rb_pattern after each set of Cliffords.
As well as another parameter for purity RB:
- **is_purity = True**
In this example we run 2Q purity RB (on qubits Q0,Q1).
```
# Example of 2-qubits Purity RB
#Number of qubits
nQ = 2
#Number of seeds (random sequences)
nseeds = 3
#Number of Cliffords in the sequence (start, stop, steps)
nCliffs = np.arange(1,200,20)
#2Q RB on Q0,Q1
rb_pattern = [[0,1]]
```
## Generate Purity RB sequences
We generate purity RB sequences. We start with a small example (so it doesn't take too long to run).
In order to generate the purity RB sequences **rb_purity_circs**, which is a list of lists of lists of quantum circuits, we run the function rb.randomized_benchmarking_seq.
This function returns:
- **rb_purity_circs**: A list of lists of lists of circuits for the purity rb sequences (separate list for each of the $3^n$ options and for each seed).
- **xdata**: The Clifford lengths (with multiplier if applicable).
- **rb_opts_dict**: Option dictionary back out with default options appended.
As well as:
- **npurity**: the number of purity RB circuits (per seed), which equals $3^n$, where $n$ is the dimension, e.g. npurity=3 for 1-qubit RB, npurity=9 for 2-qubit RB.
In order to generate each of the $3^n$ circuits, we need to do (per each of the $n$ qubits) either:
- nothing (Pauli-$Z$), or
- $\pi/2$-rotation around $x$ (Pauli-$X$), or
- $\pi/2$-rotation around $y$ (Pauli-$Y$),
and then measure the result.
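The resulting $3^n$ measurement settings are just the Cartesian product of the three per-qubit choices; for $n=2$ qubits (a small standalone illustration, not Ignis code):

```python
from itertools import product

# One basis choice (Z, X or Y) per qubit, for n = 2 qubits:
ajustes = list(product(['Z', 'X', 'Y'], repeat=2))
print(len(ajustes))   # 9
print(ajustes[0])     # ('Z', 'Z')
```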
```
rb_opts = {}
rb_opts['length_vector'] = nCliffs
rb_opts['nseeds'] = nseeds
rb_opts['rb_pattern'] = rb_pattern
rb_opts['is_purity'] = True
rb_purity_circs, xdata, npurity = rb.randomized_benchmarking_seq(**rb_opts)
print (npurity)
```
To illustrate, we print the circuit names for purity RB (for length=0 and seed=0)
```
for j in range(len(rb_purity_circs[0])):
    print(rb_purity_circs[0][j][0].name)
```
As an example, we print the circuit corresponding to the first RB sequences, for the first and last parameter.
```
for i in {0, npurity-1}:
    print("circ no. ", i)
    print(rb_purity_circs[0][i][0])
```
## Define a non-coherent noise model
We define a non-coherent noise model for the simulator. To simulate decay, we add depolarizing error probabilities to the CNOT gate.
```
noise_model = noise.NoiseModel()
p2Q = 0.01
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2Q, 2), 'cx')
```
We can execute the purity RB sequences either using the Qiskit Aer simulator (with some noise model) or using the IBMQ provider, and obtain a list of results, `purity_result_list`.
```
#Execute purity RB circuits
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 200
purity_result_list = []
import time
for rb_seed in range(len(rb_purity_circs)):
    for d in range(npurity):
        print('Executing seed %d purity %d length %d' % (rb_seed, d, len(nCliffs)))
        new_circ = rb_purity_circs[rb_seed][d]
        job = qiskit.execute(new_circ, backend=backend, noise_model=noise_model, shots=shots, basis_gates=['u1','u2','u3','cx'])
        purity_result_list.append(job.result())
print("Finished Simulating Purity RB Circuits")
```
## Fit the results
Calculate the *purity* $Tr(\rho^2)$ as the sum $\sum_k \langle P_k \rangle ^2/2^n$, and fit the purity results to an exponentially decaying function to obtain $\alpha$.
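As a single-qubit sanity check of this formula (standalone NumPy, not Ignis code): for $\rho = |0\rangle\langle 0|$ we have $\langle I\rangle = \langle Z\rangle = 1$ and $\langle X\rangle = \langle Y\rangle = 0$, so the sum gives purity $1$:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

rho = np.array([[1, 0], [0, 0]], dtype=complex)      # |0><0|, a pure state
expvals = [float(np.real(np.trace(rho @ P))) for P in (I, X, Y, Z)]
purity_check = sum(e ** 2 for e in expvals) / 2      # sum_k <P_k>^2 / 2^n, n=1
print(purity_check)  # 1.0
```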
```
rbfit_purity = rb.PurityRBFitter(purity_result_list, npurity, xdata, rb_opts['rb_pattern'])
```
Print the fit results (separately for each pattern):
```
print ("fit:", rbfit_purity.fit)
```
## Plot the results and the fit
```
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rbfit_purity.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit Purity RB'%(nQ), fontsize=18)
plt.show()
```
## Standard RB results
For comparison, we also print the standard RB fit results
```
standard_result_list = []
count = 0
for rb_seed in range(len(rb_purity_circs)):
    for d in range(npurity):
        if d == 0:
            standard_result_list.append(purity_result_list[count])
        count += 1
rbfit_standard = rb.RBFitter(standard_result_list, xdata, rb_opts['rb_pattern'])
print (rbfit_standard.fit)
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rbfit_standard.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit Standard RB'%(nQ), fontsize=18)
plt.show()
```
## Measurement noise model and measurement error mitigation
Since part of the noise might be due to measurement errors and not only due to coherent errors, we repeat the example with measurement noise and demonstrate mitigation of the measurement errors before computing the purity RB fitter.
```
#Add measurement noise
for qi in range(nQ):
    read_err = noise.errors.readout_error.ReadoutError([[0.75, 0.25], [0.1, 0.9]])
    noise_model.add_readout_error(read_err, [qi])
#Generate the calibration circuits
meas_calibs, state_labels = mc.complete_meas_cal(qubit_list=[0,1])
backend = qiskit.Aer.get_backend('qasm_simulator')
shots = 200
#Execute the calibration circuits
job_cal = qiskit.execute(meas_calibs, backend=backend, shots=shots, noise_model=noise_model)
meas_result = job_cal.result()
#Execute the purity RB circuits
meas_purity_result_list = []
for rb_seed in range(len(rb_purity_circs)):
    for d in range(npurity):
        print('Executing seed %d purity %d length %d' % (rb_seed, d, len(nCliffs)))
        new_circ = rb_purity_circs[rb_seed][d]
        job_pur = qiskit.execute(new_circ, backend=backend, shots=shots, noise_model=noise_model, basis_gates=['u1','u2','u3','cx'])
        meas_purity_result_list.append(job_pur.result())
#Fitters
meas_fitter = mc.CompleteMeasFitter(meas_result, state_labels)
rbfit_purity = rb.PurityRBFitter(meas_purity_result_list, npurity, xdata, rb_opts['rb_pattern'])
#no correction
rho_pur = rbfit_purity.fit
print('Fit (no correction) =', rho_pur)
#correct data
correct_purity_result_list = []
for meas_result in meas_purity_result_list:
    correct_purity_result_list.append(meas_fitter.filter.apply(meas_result))
#with correction
rbfit_cor = rb.PurityRBFitter(correct_purity_result_list, npurity, xdata, rb_opts['rb_pattern'])
rho_pur = rbfit_cor.fit
print('Fit (w/ correction) =', rho_pur)
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rbfit_purity.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit Purity RB'%(nQ), fontsize=18)
plt.show()
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rbfit_cor.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit Mitigated Purity RB'%(nQ), fontsize=18)
plt.show()
```
## Define a coherent noise model
We define a coherent noise model for the simulator. In this example we expect the purity RB to measure no errors, but standard RB will still measure a non-zero error.
```
err_unitary = np.zeros([2, 2], dtype=complex)
angle_err = 0.1
for i in range(2):
    err_unitary[i, i] = np.cos(angle_err)
    err_unitary[i, (i+1) % 2] = np.sin(angle_err)
err_unitary[0, 1] *= -1.0
error = coherent_unitary_error(err_unitary)
noise_model = noise.NoiseModel()
noise_model.add_all_qubit_quantum_error(error, 'u3')
#Execute purity RB circuits
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 200
coherent_purity_result_list = []
import time
for rb_seed in range(len(rb_purity_circs)):
    for d in range(npurity):
        print('Executing seed %d purity %d length %d' % (rb_seed, d, len(nCliffs)))
        new_circ = rb_purity_circs[rb_seed][d]
        job = qiskit.execute(new_circ, backend=backend, shots=shots, noise_model=noise_model, basis_gates=['u1','u2','u3','cx'])
        coherent_purity_result_list.append(job.result())
print("Finished Simulating Purity RB Circuits")
rbfit_purity = rb.PurityRBFitter(coherent_purity_result_list, npurity, xdata, rb_opts['rb_pattern'])
```
Print the fit results (separately for each pattern):
```
print ("fit:", rbfit_purity.fit)
```
## Plot the results and the fit
```
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rbfit_purity.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit Purity RB'%(nQ), fontsize=18)
plt.show()
```
## Standard RB results
For comparison, we also print the standard RB fit results
```
standard_result_list = []
count = 0
for rb_seed in range(len(rb_purity_circs)):
    for d in range(npurity):
        if d == 0:
            standard_result_list.append(coherent_purity_result_list[count])
        count += 1
rbfit_standard = rb.RBFitter(standard_result_list, xdata, rb_opts['rb_pattern'])
print (rbfit_standard.fit)
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rbfit_standard.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit Standard RB'%(nQ), fontsize=18)
plt.show()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
---
# Settings
```
%env TF_KERAS = 1
import os
sep_local = os.path.sep
import sys
sys.path.append('..'+sep_local+'..')
print(sep_local)
os.chdir('..'+sep_local+'..'+sep_local+'..'+sep_local+'..'+sep_local+'..')
print(os.getcwd())
import tensorflow as tf
print(tf.__version__)
```
# Dataset loading
```
dataset_name='pokemon'
images_dir = 'C:\\Users\\Khalid\\Documents\\projects\\pokemon\\DS06\\'
validation_percentage = 20
valid_format = 'png'
from training.generators.file_image_generator import create_image_lists, get_generators
imgs_list = create_image_lists(
    image_dir=images_dir,
    validation_pct=validation_percentage,
    valid_imgae_formats=valid_format
)
inputs_shape = image_size = (200, 200, 3)
batch_size = 32
latents_dim = 32
intermediate_dim = 50
training_generator, testing_generator = get_generators(
    images_list=imgs_list,
    image_dir=images_dir,
    image_size=image_size,
    batch_size=batch_size,
    class_mode=None
)
import tensorflow as tf
train_ds = tf.data.Dataset.from_generator(
    lambda: training_generator,
    output_types=tf.float32,
    output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
test_ds = tf.data.Dataset.from_generator(
    lambda: testing_generator,
    output_types=tf.float32,
    output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
_instance_scale = 1.0
for data in train_ds:
    _instance_scale = float(data[0].numpy().max())
    break
_instance_scale
import numpy as np
from collections.abc import Iterable
if isinstance(inputs_shape, Iterable):
    _outputs_shape = np.prod(inputs_shape)
_outputs_shape
```
# Model's Layers definition
```
units=20
c=50
menc_lays = [
    tf.keras.layers.Conv2D(filters=units//2, kernel_size=3, strides=(2, 2), activation='relu'),
    tf.keras.layers.Conv2D(filters=units*9//2, kernel_size=3, strides=(2, 2), activation='relu'),
    tf.keras.layers.Flatten(),
    # No activation
    tf.keras.layers.Dense(latents_dim)
]
venc_lays = [
    tf.keras.layers.Conv2D(filters=units//2, kernel_size=3, strides=(2, 2), activation='relu'),
    tf.keras.layers.Conv2D(filters=units*9//2, kernel_size=3, strides=(2, 2), activation='relu'),
    tf.keras.layers.Flatten(),
    # No activation
    tf.keras.layers.Dense(latents_dim)
]
dec_lays = [
    tf.keras.layers.Dense(units=units*c*c, activation=tf.nn.relu),
    tf.keras.layers.Reshape(target_shape=(c, c, units)),
    tf.keras.layers.Conv2DTranspose(filters=units, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'),
    tf.keras.layers.Conv2DTranspose(filters=units*3, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'),
    # No activation
    tf.keras.layers.Conv2DTranspose(filters=3, kernel_size=3, strides=(1, 1), padding="SAME")
]
```
# Model definition
```
model_name = dataset_name+'VAE_Convolutional_reconst_1ell_01ssmi'
experiments_dir='experiments'+sep_local+model_name
from training.autoencoding_basic.autoencoders.VAE import VAE as AE
inputs_shape=image_size
variables_params = [
    {
        'name': 'inference_mean',
        'inputs_shape': inputs_shape,
        'outputs_shape': latents_dim,
        'layers': menc_lays
    },
    {
        'name': 'inference_logvariance',
        'inputs_shape': inputs_shape,
        'outputs_shape': latents_dim,
        'layers': venc_lays
    },
    {
        'name': 'generative',
        'inputs_shape': latents_dim,
        'outputs_shape': inputs_shape,
        'layers': dec_lays
    }
]
from utils.data_and_files.file_utils import create_if_not_exist
_restore = os.path.join(experiments_dir, 'var_save_dir')
create_if_not_exist(_restore)
_restore
#to restore trained model, set filepath=_restore
ae = AE(
    name=model_name,
    latents_dim=latents_dim,
    batch_size=batch_size,
    variables_params=variables_params,
    filepath=None
)
from evaluation.quantitive_metrics.structural_similarity import prepare_ssim_multiscale
from statistical.losses_utilities import similarity_to_distance
from statistical.ae_losses import expected_loglikelihood_with_lower_bound as ellwlb
ae.compile(loss={'x_logits': lambda x_true, x_logits: ellwlb(x_true, x_logits)+ 0.1*similarity_to_distance(prepare_ssim_multiscale([ae.batch_size]+ae.get_inputs_shape()))(x_true, x_logits)})
```
# Callbacks
```
from training.callbacks.sample_generation import SampleGeneration
from training.callbacks.save_model import ModelSaver
es = tf.keras.callbacks.EarlyStopping(
    monitor='loss',
    min_delta=1e-12,
    patience=12,
    verbose=1,
    restore_best_weights=False
)
ms = ModelSaver(filepath=_restore)
csv_dir = os.path.join(experiments_dir, 'csv_dir')
create_if_not_exist(csv_dir)
csv_dir = os.path.join(csv_dir, ae.name+'.csv')
csv_log = tf.keras.callbacks.CSVLogger(csv_dir, append=True)
csv_dir
image_gen_dir = os.path.join(experiments_dir, 'image_gen_dir')
create_if_not_exist(image_gen_dir)
sg = SampleGeneration(latents_shape=latents_dim, filepath=image_gen_dir, gen_freq=5, save_img=True, gray_plot=False)
```
# Model Training
```
ae.fit(
    x=train_ds,
    input_kw=None,
    steps_per_epoch=int(1e4),
    epochs=int(1e6),
    verbose=2,
    callbacks=[es, ms, csv_log, sg],
    workers=-1,
    use_multiprocessing=True,
    validation_data=test_ds,
    validation_steps=int(1e4)
)
```
# Model Evaluation
## inception_score
```
from evaluation.generativity_metrics.inception_metrics import inception_score
is_mean, is_sigma = inception_score(ae, tolerance_threshold=1e-6, max_iteration=200)
print(f'inception_score mean: {is_mean}, sigma: {is_sigma}')
```
## Frechet_inception_distance
```
from evaluation.generativity_metrics.inception_metrics import frechet_inception_distance
fis_score = frechet_inception_distance(ae, training_generator, tolerance_threshold=1e-6, max_iteration=10, batch_size=32)
print(f'frechet inception distance: {fis_score}')
```
## perceptual_path_length_score
```
from evaluation.generativity_metrics.perceptual_path_length import perceptual_path_length_score
ppl_mean_score = perceptual_path_length_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200, batch_size=32)
print(f'perceptual path length score: {ppl_mean_score}')
```
## precision score
```
from evaluation.generativity_metrics.precision_recall import precision_score
_precision_score = precision_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'precision score: {_precision_score}')
```
## recall score
```
from evaluation.generativity_metrics.precision_recall import recall_score
_recall_score = recall_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'recall score: {_recall_score}')
```
# Image Generation
## image reconstruction
### Training dataset
```
%load_ext autoreload
%autoreload 2
from training.generators.image_generation_testing import reconstruct_from_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, testing_generator, save_dir)
```
## with Randomness
```
from training.generators.image_generation_testing import generate_images_like_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, testing_generator, save_dir)
```
### Complete Randomness
```
from training.generators.image_generation_testing import generate_images_randomly
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'random_synthetic_dir')
create_if_not_exist(save_dir)
generate_images_randomly(ae, save_dir)
from training.generators.image_generation_testing import interpolate_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'interpolate_dir')
create_if_not_exist(save_dir)
interpolate_a_batch(ae, testing_generator, save_dir)
```
```
import cv2
import numpy as np
import dlib
from tkinter import *
import time
from PIL import Image, ImageTk
cap = cv2.VideoCapture(0)
ret,frame=cap.read()
detector = dlib.get_frontal_face_detector()
count =0
marks =0
root = Tk()
root.geometry("975x585")
root.title("Exam Cheating Identifier v1.1")
root.iconbitmap('fav.ico')
#i = StrVar()
#i = 0
#j = StrVar()
#j = 0
tbt= StringVar()
#functions
def tick():
    time_string = time.strftime("%H:%M:%S")
    clock.config(text=time_string)
    fd()
    clock.after(200, tick)

def fd():
    global count
    global marks
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    x = 0
    y = 0
    for face in faces:
        x, y = face.left(), face.top()
        w, h = face.right(), face.bottom()
        cv2.rectangle(frame, (x, y), (w, h), (0, 225, 0), 3)
    if (x == 0) and (y == 0):
        # no face found in this frame
        print((x, y), "No Face")
        count = count - 1
        print(count)
        # deduct one mark for every 5 consecutive missed frames (up to 5 marks)
        if count in (-5, -10, -15, -20, -25):
            marks = marks - 1
            txt2.delete(0.0, 'end')
            txt2.insert(0.0, marks)
    else:
        print((x, y), "Face")
    # refresh the on-screen counter and the live video frame
    txt.delete(0.0, 'end')
    txt.insert(0.0, count)
    im1 = Image.fromarray(frame)
    photo_root = ImageTk.PhotoImage(im1)
    img_root.config(image=photo_root)
    img_root.image = photo_root
f2 = Frame(root, bg = "silver", borderwidth = 6 , relief = GROOVE)
f2.pack(side = TOP, fill="x")
f3 = Frame(root, bg = "silver", borderwidth = 6 , relief = GROOVE)
f3.pack(side = BOTTOM, fill="x")
f1 = Frame(root, bg = "silver", borderwidth = 6 , relief = GROOVE)
f1.pack(side = RIGHT, fill="y")
#Labels
l1 = Label(f1, text = " "*2, bg = "silver" )
l1.pack()
l1a = Label(f1, text = " STUDENTS RECORD ",
bg = "silver" , fg = "black" , font = ("Berlin Sans FB Demi",20,"bold") )
l1a.pack()
l1b = Label(f1, text = " Marks Deduction ",
bg = "silver" , fg = "black" , font = ("Arial",10,"bold") )
l1b.pack()
l1c = Label(f1, text = " ",
bg = "silver" , fg = "black" , font = ("Arial",10,"bold") )
l1c.pack()
l2 = Label(f2, text = " EXAM CHEATING IDENTIFIER ",
bg = "silver" , fg = "black" , font = ("Berlin Sans FB Demi",30,"bold") )
l2.pack()
l3 = Label(f3, text = "Members: M.Hamza, Fouzan, Waqas, Haris, Zeeshan ", bg = "silver" )
l3.pack(side=LEFT)
l3a = Label(f3, text = "Instructor: Sir Roohan ❤ ", bg = "silver" )
l3a.pack(side=RIGHT)
clock=Label(f3, font=("times", 10, "bold"), fg="green", bg="silver")
clock.pack(anchor=S,side=BOTTOM )
# Student images icon
photo = PhotoImage(file="2.png")
img1 = Label(f1, image=photo, bg="silver")
img1.pack(pady=2,padx=15)
seat1 = Label(f1, text="Deducted Points",bg="silver",font=("Arial",10,"italic")).pack(pady=0,padx=18)
img_root = Label(root, text = "Live Streaming")
img_root.pack()
### Text
txt = Text(f1, height = 1, width = 3,bg = "silver",fg = "red", font=("Arial",30,"bold"))
txt.pack()
l1c = Label(f1, text = " Marks Deducted ",
bg = "silver" , fg = "black" , font = ("Arial",10,"bold") )
l1c.pack()
txt2 = Text(f1, height = 1, width = 3,bg = "silver",fg = "red", font=("Arial",30,"bold"))
txt2.pack(pady = 5)
#Buttons
B2 = Button(f1,text="Start", bg="gray", fg="white", height=2 , width=15,font=("Arial",10,"bold"), command=tick)
B2.pack(side=LEFT,pady=15, padx=15, anchor="se")
B3 = Button(f1,text="Quit", bg="gray", fg="white", height=2 , width=15,font=("Arial",10,"bold"),command=root.destroy)
B3.pack(side=RIGHT,pady=15,padx=15,anchor="sw")
root.mainloop()
cap.release()
```
# Working with Sparse Layouts in the GraphBLAS Dialect
This example will go over how to use the `--graphblas-lower` pass from `graphblas-opt` to lower the GraphBLAS dialect ops that directly manipulate the layouts of sparse tensors. In particular, we'll focus on the `graphblas.convert_layout` and `graphblas.transpose` ops.
Since the [ops reference](../../ops_reference.rst) already documents these ops with examples, we'll only briefly describe them here.
Let’s first import some necessary modules and generate an instance of our JIT engine.
```
import mlir_graphblas
from mlir_graphblas.tools.utils import sparsify_array
import numpy as np
engine = mlir_graphblas.MlirJitEngine()
```
## Overview of graphblas.convert_layout
Here, we'll show how to use the `graphblas.convert_layout` op.
This op takes one sparse matrix in CSR or CSC format and creates a new sparse matrix in the desired format.
We'll give several examples below of how this will work.
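Conceptually, converting a layout amounts to re-sorting the stored entries by the other dimension. Below is a pure-NumPy sketch of that idea (illustrative only; the actual op is lowered to MLIR loops, and the helper name here is our own):

```python
import numpy as np

def csr_to_csc(indptr, indices, data, shape):
    # Counting sort of the CSR entries by column: in spirit, this is the
    # work a CSR-to-CSC layout conversion performs.
    n_rows, n_cols = shape
    nnz = len(data)
    col_counts = np.bincount(indices, minlength=n_cols)   # entries per column
    col_ptr = np.concatenate(([0], np.cumsum(col_counts)))
    row_idx = np.empty(nnz, dtype=np.int64)
    out_data = np.empty(nnz, dtype=data.dtype)
    nxt = col_ptr[:-1].copy()          # next free slot in each column
    for r in range(n_rows):
        for k in range(indptr[r], indptr[r + 1]):
            c = indices[k]
            row_idx[nxt[c]] = r
            out_data[nxt[c]] = data[k]
            nxt[c] += 1
    return col_ptr, row_idx, out_data

# CSR arrays for a 4x4 matrix with nonzeros 1.1 at (0,0) and 2.2 at (1,2)
indptr  = np.array([0, 1, 2, 2, 2])
indices = np.array([0, 2])
data    = np.array([1.1, 2.2])
col_ptr, row_idx, csc_data = csr_to_csc(indptr, indices, data, (4, 4))
```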
First, we'll define an example input CSR matrix.
```
dense_matrix = np.array(
[
[1.1, 0. , 0. , 0. ],
[0. , 0. , 2.2, 0. ],
[0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. ]
],
dtype=np.float64,
)
csr_matrix = sparsify_array(dense_matrix, [False, True])
```
## graphblas.convert_layout (CSR->CSC)
Let's convert this matrix to CSC format.
```
mlir_text = """
#CSR64 = #sparse_tensor.encoding<{
dimLevelType = [ "dense", "compressed" ],
dimOrdering = affine_map<(i,j) -> (i,j)>,
pointerBitWidth = 64,
indexBitWidth = 64
}>
#CSC64 = #sparse_tensor.encoding<{
dimLevelType = [ "dense", "compressed" ],
dimOrdering = affine_map<(i,j) -> (j,i)>,
pointerBitWidth = 64,
indexBitWidth = 64
}>
func @csr_to_csc(%sparse_tensor: tensor<?x?xf64, #CSR64>) -> tensor<?x?xf64, #CSC64> {
%answer = graphblas.convert_layout %sparse_tensor : tensor<?x?xf64, #CSR64> to tensor<?x?xf64, #CSC64>
return %answer : tensor<?x?xf64, #CSC64>
}
"""
```
Here are the passes we'll use.
```
passes = [
"--graphblas-structuralize",
"--graphblas-optimize",
"--graphblas-lower",
"--sparsification",
"--sparse-tensor-conversion",
"--linalg-bufferize",
"--func-bufferize",
"--tensor-constant-bufferize",
"--tensor-bufferize",
"--finalizing-bufferize",
"--convert-linalg-to-loops",
"--convert-scf-to-std",
"--convert-memref-to-llvm",
"--convert-math-to-llvm",
"--convert-openmp-to-llvm",
"--convert-arith-to-llvm",
"--convert-math-to-llvm",
"--convert-std-to-llvm",
"--reconcile-unrealized-casts"
]
engine.add(mlir_text, passes)
csc_matrix = engine.csr_to_csc(csr_matrix)
csc_matrix.toarray()
np.all(dense_matrix == csc_matrix.toarray())
```
## graphblas.convert_layout (CSC->CSR)
Let's convert the CSC matrix back to CSR format.
First, let's get rid of our original `csr_matrix` so we don't get correct results purely by accident.
```
del csr_matrix
```
Here's the MLIR code to convert from CSC to CSR.
```
mlir_text = """
#CSR64 = #sparse_tensor.encoding<{
dimLevelType = [ "dense", "compressed" ],
dimOrdering = affine_map<(i,j) -> (i,j)>,
pointerBitWidth = 64,
indexBitWidth = 64
}>
#CSC64 = #sparse_tensor.encoding<{
dimLevelType = [ "dense", "compressed" ],
dimOrdering = affine_map<(i,j) -> (j,i)>,
pointerBitWidth = 64,
indexBitWidth = 64
}>
func @csc_to_csr(%sparse_tensor: tensor<?x?xf64, #CSC64>) -> tensor<?x?xf64, #CSR64> {
%answer = graphblas.convert_layout %sparse_tensor : tensor<?x?xf64, #CSC64> to tensor<?x?xf64, #CSR64>
return %answer : tensor<?x?xf64, #CSR64>
}
"""
engine.add(mlir_text, passes)
csr_matrix = engine.csc_to_csr(csc_matrix)
csr_matrix.toarray()
np.all(dense_matrix == csr_matrix.toarray())
```
## graphblas.convert_layout (CSC->CSC, CSR->CSR)
For completeness, we'll show how to convert to and from the same exact layouts.
The MLIR code to do so is shown below.
```
mlir_text = """
#CSR64 = #sparse_tensor.encoding<{
dimLevelType = [ "dense", "compressed" ],
dimOrdering = affine_map<(i,j) -> (i,j)>,
pointerBitWidth = 64,
indexBitWidth = 64
}>
#CSC64 = #sparse_tensor.encoding<{
dimLevelType = [ "dense", "compressed" ],
dimOrdering = affine_map<(i,j) -> (j,i)>,
pointerBitWidth = 64,
indexBitWidth = 64
}>
func @csc_to_csc(%sparse_tensor: tensor<?x?xf64, #CSC64>) -> tensor<?x?xf64, #CSC64> {
%answer = graphblas.convert_layout %sparse_tensor : tensor<?x?xf64, #CSC64> to tensor<?x?xf64, #CSC64>
return %answer : tensor<?x?xf64, #CSC64>
}
func @csr_to_csr(%sparse_tensor: tensor<?x?xf64, #CSR64>) -> tensor<?x?xf64, #CSR64> {
%answer = graphblas.convert_layout %sparse_tensor : tensor<?x?xf64, #CSR64> to tensor<?x?xf64, #CSR64>
return %answer : tensor<?x?xf64, #CSR64>
}
"""
engine.add(mlir_text, passes)
```
Let's verify that converting to and from the same layout give correct results.
```
csc_result = engine.csc_to_csc(csc_matrix)
csc_result.toarray()
np.all(dense_matrix == csc_result.toarray())
csr_result = engine.csr_to_csr(csr_matrix)
csr_result.toarray()
np.all(dense_matrix == csr_result.toarray())
```
## Overview of graphblas.transpose
Here, we'll show how to use the `graphblas.transpose` op.
`graphblas.transpose` returns a new sparse matrix that’s the transpose of the input matrix. Note that the behavior of this op differs depending on the sparse encoding of the specified output tensor type.
The input/output behavior of `graphblas.transpose` is fairly simple. Our examples here aren't intended to show anything interesting but merely to act as reproducible references.
The important thing to know about `graphblas.transpose` is how it is implemented.
When transposing a CSR matrix to a CSC matrix, we simply need to swap the dimension sizes and reverse the indexing. Thus, the only "real" work done here is changing metadata. The same goes for transposing a CSC matrix to a CSR matrix.
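To see why this is metadata-only, here is a small pure-NumPy sketch (separate from the MLIR pipeline): the CSR arrays of a matrix `A`, reinterpreted as CSC arrays, already describe `A.T` without moving any data.

```python
import numpy as np

# CSR arrays for the 2x4 matrix A = [[1.1, 0, 0, 0], [0, 0, 2.2, 0]]
indptr  = np.array([0, 1, 2])
indices = np.array([0, 2])
data    = np.array([1.1, 2.2])

def csc_to_dense(col_ptr, row_idx, vals, shape):
    # expand CSC arrays into a dense matrix
    out = np.zeros(shape)
    for c in range(shape[1]):
        for k in range(col_ptr[c], col_ptr[c + 1]):
            out[row_idx[k], c] = vals[k]
    return out

# Reinterpreting A's CSR arrays as CSC arrays yields A.T: only the metadata
# (which array plays which role, and the dimension sizes) has changed.
A = np.array([[1.1, 0, 0, 0], [0, 0, 2.2, 0]])
AT = csc_to_dense(indptr, indices, data, (4, 2))
```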
Here's an example of transposing a CSR matrix to a CSC matrix.
```
dense_matrix = np.array(
[
[1.1, 0. , 0. , 0. ],
[0. , 0. , 2.2, 0. ],
],
dtype=np.float64,
)
csr_matrix = sparsify_array(dense_matrix, [False, True])
mlir_text = """
#CSR64 = #sparse_tensor.encoding<{
dimLevelType = [ "dense", "compressed" ],
dimOrdering = affine_map<(i,j) -> (i,j)>,
pointerBitWidth = 64,
indexBitWidth = 64
}>
#CSC64 = #sparse_tensor.encoding<{
dimLevelType = [ "dense", "compressed" ],
dimOrdering = affine_map<(i,j) -> (j,i)>,
pointerBitWidth = 64,
indexBitWidth = 64
}>
func @transpose_csr_to_csc(%sparse_tensor: tensor<?x?xf64, #CSR64>) -> tensor<?x?xf64, #CSC64> {
%answer = graphblas.transpose %sparse_tensor : tensor<?x?xf64, #CSR64> to tensor<?x?xf64, #CSC64>
return %answer : tensor<?x?xf64, #CSC64>
}
"""
engine.add(mlir_text, passes)
csc_matrix_transpose = engine.transpose_csr_to_csc(csr_matrix)
csc_matrix_transpose.toarray()
np.all(dense_matrix.T == csc_matrix_transpose.toarray())
```
However, when we're transposing a CSR matrix and want to return a CSR matrix as well, there is "real" work that is done. This "real" work involves doing exactly what `graphblas.convert_layout` does under the covers, in addition to changing the metadata. The same goes for transposing a CSC matrix to a CSC matrix.
The example below shows how to transpose a CSC matrix to a CSC matrix.
```
mlir_text = """
#CSR64 = #sparse_tensor.encoding<{
dimLevelType = [ "dense", "compressed" ],
dimOrdering = affine_map<(i,j) -> (i,j)>,
pointerBitWidth = 64,
indexBitWidth = 64
}>
#CSC64 = #sparse_tensor.encoding<{
dimLevelType = [ "dense", "compressed" ],
dimOrdering = affine_map<(i,j) -> (j,i)>,
pointerBitWidth = 64,
indexBitWidth = 64
}>
func @transpose_csc_to_csc(%sparse_tensor: tensor<?x?xf64, #CSC64>) -> tensor<?x?xf64, #CSC64> {
%answer = graphblas.transpose %sparse_tensor : tensor<?x?xf64, #CSC64> to tensor<?x?xf64, #CSC64>
return %answer : tensor<?x?xf64, #CSC64>
}
"""
engine.add(mlir_text, passes)
csc_matrix = engine.transpose_csc_to_csc(csc_matrix_transpose)
csc_matrix.toarray()
np.all(dense_matrix == csc_matrix.toarray())
```
## Histograms of Oriented Gradients (HOG)
As we saw with the ORB algorithm, we can use keypoints in images to do keypoint-based matching to detect objects in images. These types of algorithms work great when you want to detect objects that have a lot of consistent internal features that are not affected by the background. For example, these algorithms work well for facial detection because faces have a lot of consistent internal features that don’t get affected by the image background, such as the eyes, nose, and mouth. However, these types of algorithms don’t work so well when attempting more general object recognition, say, for example, pedestrian detection in images. The reason is that people don’t have consistent internal features like faces do, because the body shape and style of every person is different (see Fig. 1). This means that every person is going to have a different set of internal features, so we need something that can describe a person more generally.
<br>
<figure>
<img src = "./in_cell_images/pedestrians.jpeg" width = "100%" style = "border: thin silver solid; padding: 10px">
<figcaption style = "text-align:left; font-style:italic">Fig. 1. - Pedestrians.</figcaption>
</figure>
<br>
One option is to try to detect pedestrians by their contours instead. Detecting objects in images by their contours (boundaries) is very challenging because we have to deal with the difficulties brought about by the contrast between the background and the foreground. For example, suppose you wanted to detect a pedestrian who is walking in front of a white building while wearing a white coat and black pants (see Fig. 2). We can see in Fig. 2 that since the background of the image is mostly white, the black pants are going to have very high contrast, but the coat, since it is white as well, is going to have very low contrast. In this case, detecting the edges of the pants is going to be easy, but detecting the edges of the coat is going to be very difficult. This is where **HOG** comes in. HOG stands for **Histograms of Oriented Gradients** and it was first introduced by Navneet Dalal and Bill Triggs in 2005.
<br>
<figure>
<img src = "./in_cell_images/woman.jpg" width = "100%" style = "border: thin silver solid; padding: 10px">
<figcaption style = "text-align:left; font-style:italic">Fig. 2. - High and Low Contrast.</figcaption>
</figure>
<br>
The HOG algorithm works by creating histograms of the distribution of gradient orientations in an image and then normalizing them in a very special way. This special normalization is what makes HOG so effective at detecting the edges of objects even in cases where the contrast is very low. These normalized histograms are put together into a feature vector, known as the HOG descriptor, that can be used to train a machine learning algorithm, such as a Support Vector Machine (SVM), to detect objects in images based on their boundaries (edges). Due to its great success and reliability, HOG has become one of the most widely used algorithms in computer vision for object detection.
In this notebook, you will learn:
* How the HOG algorithm works
* How to use OpenCV to create a HOG descriptor
* How to visualize the HOG descriptor.
# The HOG Algorithm
As its name suggests, the HOG algorithm, is based on creating histograms from the orientation of image gradients. The HOG algorithm is implemented in a series of steps:
1. Given the image of particular object, set a detection window (region of interest) that covers the entire object in the image (see Fig. 3).
2. Calculate the magnitude and direction of the gradient for each individual pixel in the detection window.
3. Divide the detection window into connected *cells* of pixels, with all cells being of the same size (see Fig. 3). The size of the cells is a free parameter and it is usually chosen so as to match the scale of the features that want to be detected. For example, in a 64 x 128 pixel detection window, square cells 6 to 8 pixels wide are suitable for detecting human limbs.
4. Create a Histogram for each cell, by first grouping the gradient directions of all pixels in each cell into a particular number of orientation (angular) bins; and then adding up the gradient magnitudes of the gradients in each angular bin (see Fig. 3). The number of bins in the histogram is a free parameter and it is usually set to 9 angular bins.
5. Group adjacent cells into *blocks* (see Fig. 3). The number of cells in each block is a free parameter and all blocks must be of the same size. The distance between each block (known as the stride) is a free parameter but it is usually set to half the block size, in which case you will get overlapping blocks (*see video below*). The HOG algorithm has been shown empirically to work better with overlapping blocks.
6. Use the cells contained within each block to normalize the cell histograms in that block (see Fig. 3). If you have overlapping blocks this means that most cells will be normalized with respect to different blocks (*see video below*). Therefore, the same cell may have several different normalizations.
7. Collect all the normalized histograms from all the blocks into a single feature vector called the HOG descriptor.
8. Use the resulting HOG descriptors from many images of the same type of object to train a machine learning algorithm, such as an SVM, to detect those types of objects in images. For example, you could use the HOG descriptors from many images of pedestrians to train an SVM to detect pedestrians in images. The training is done with both positive and negative examples of the object you want to detect in the image.
9. Once the SVM has been trained, a sliding window approach is used to try to detect and locate objects in images. Detecting an object in the image entails finding the part of the image that looks similar to the HOG pattern learned by the SVM.
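Steps 2 and 4 above can be sketched in a few lines of NumPy. This is a simplified illustration (OpenCV additionally interpolates each magnitude between the two neighboring bins):

```python
import numpy as np

def cell_histogram(mag, ang, num_bins=9):
    # Add each pixel's gradient magnitude to the angular bin containing its
    # orientation. Unsigned gradients: angles are folded into [0, 180) degrees.
    bin_width = 180.0 / num_bins
    hist = np.zeros(num_bins)
    for m, a in zip(mag.ravel(), ang.ravel()):
        hist[int((a % 180.0) // bin_width) % num_bins] += m
    return hist

# gradients of a toy 2x2 cell: two vertical edges, two horizontal edges
gx = np.array([[1.0, 0.0], [1.0, 0.0]])
gy = np.array([[0.0, 1.0], [0.0, 1.0]])
mag = np.hypot(gx, gy)                  # step 2: gradient magnitude
ang = np.degrees(np.arctan2(gy, gx))    # step 2: gradient direction
hist = cell_histogram(mag, ang)         # step 4: histogram for one cell
```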
<br>
<figure>
<img src = "./in_cell_images/HOG Diagram2.png" width = "100%" style = "border: thin silver solid; padding: 1px">
<figcaption style = "text-align:left; font-style:italic">Fig. 3. - HOG Diagram.</figcaption>
</figure>
<br>
<figure>
<video src = "./in_cell_images/HOG Animation - Medium.mp4" width="100%" controls autoplay loop> </video>
<figcaption style = "text-align:left; font-style:italic">Vid. 1. - HOG Animation.</figcaption>
</figure>
# Why The HOG Algorithm Works
As we learned above, HOG creates histograms by adding the magnitude of the gradients in particular orientations in localized portions of the image called *cells*. By doing this we guarantee that stronger gradients will contribute more to the magnitude of their respective angular bin, while the effects of weak and randomly oriented gradients resulting from noise are minimized. In this manner the histograms tell us the dominant gradient orientation of each cell.
### Dealing with contrast
Now, the magnitude of the dominant orientation can vary widely due to variations in local illumination and the contrast between the background and the foreground.
To account for the background-foreground contrast differences, the HOG algorithm tries to detect edges locally. In order to do this, it defines groups of cells, called **blocks**, and normalizes the histograms using this local group of cells. By normalizing locally, the HOG algorithm can detect the edges in each block very reliably; this is called **block normalization**.
In addition to using block normalization, the HOG algorithm also uses overlapping blocks to increase its performance. By using overlapping blocks, each cell contributes several independent components to the final HOG descriptor, where each component corresponds to a cell being normalized with respect to a different block. This may seem redundant but, it has been shown empirically that by normalizing each cell several times with respect to different local blocks, the performance of the HOG algorithm increases dramatically.
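Here is a minimal sketch of overlapping block normalization, assuming a toy 3 x 3 grid of per-cell histograms and 2 x 2-cell blocks with a one-cell stride (plain L2 normalization is used for simplicity; OpenCV's default is the L2-Hys variant):

```python
import numpy as np

# toy 3x3 grid of per-cell histograms, 9 orientation bins each
rng = np.random.default_rng(0)
cell_hists = rng.random((3, 3, 9))

# 2x2-cell blocks with a 1-cell stride -> 2x2 = 4 overlapping blocks;
# each block concatenates its 4 cell histograms and L2-normalizes them together
eps = 1e-7
blocks = []
for by in range(2):
    for bx in range(2):
        v = cell_hists[by:by + 2, bx:bx + 2].ravel()
        blocks.append(v / np.sqrt(np.sum(v ** 2) + eps ** 2))

# the center cell belongs to all 4 blocks, so its histogram appears in the
# final descriptor 4 times, each copy normalized with respect to a different block
descriptor = np.concatenate(blocks)
```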
### Loading Images and Importing Resources
The first step in building our HOG descriptor is to load the required packages into Python and to load our image.
We start by using OpenCV to load an image of a triangle tile. Since the `cv2.imread()` function loads images as BGR, we will convert our image to RGB so we can display it with the correct colors. As usual, we will convert our BGR image to gray scale for analysis.
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Load the image
image = cv2.imread('./images/triangle_tile.jpeg')
# Convert the original image to RGB
original_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Convert the original image to gray scale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Print the shape of the original and gray scale images
print('The original image has shape: ', original_image.shape)
print('The gray scale image has shape: ', gray_image.shape)
# Display the images
plt.subplot(121)
plt.imshow(original_image)
plt.title('Original Image')
plt.subplot(122)
plt.imshow(gray_image, cmap='gray')
plt.title('Gray Scale Image')
plt.show()
```
# Creating The HOG Descriptor
We will be using OpenCV’s `HOGDescriptor` class to create the HOG descriptor. The parameters of the HOG descriptor are set up using the `HOGDescriptor()` function. The parameters of the `HOGDescriptor()` function and their default values are given below:
`cv2.HOGDescriptor(win_size = (64, 128),
block_size = (16, 16),
block_stride = (8, 8),
cell_size = (8, 8),
nbins = 9,
win_sigma = DEFAULT_WIN_SIGMA,
threshold_L2hys = 0.2,
gamma_correction = true,
nlevels = DEFAULT_NLEVELS)`
Parameters:
* **win_size** – *Size*
Size of detection window in pixels (*width, height*). Defines the region of interest. Must be an integer multiple of cell size.
* **block_size** – *Size*
Block size in pixels (*width, height*). Defines how many cells are in each block. Must be an integer multiple of cell size and it must be smaller than the detection window. The smaller the block the finer detail you will get.
* **block_stride** – *Size*
Block stride in pixels (*horizontal, vertical*). It must be an integer multiple of cell size. The `block_stride` defines the distance between adjacent blocks, for example, 8 pixels horizontally and 8 pixels vertically. Longer `block_strides` make the algorithm run faster (because fewer blocks are evaluated) but the algorithm may not perform as well.
* **cell_size** – *Size*
Cell size in pixels (*width, height*). Determines the size of your cell. The smaller the cell, the finer detail you will get.
* **nbins** – *int*
Number of bins for the histograms. Determines the number of angular bins used to make the histograms. With more bins you capture more gradient directions. HOG uses unsigned gradients, so the angular bins will have values between 0 and 180 degrees.
* **win_sigma** – *double*
Gaussian smoothing window parameter. The performance of the HOG algorithm can be improved by smoothing the pixels near the edges of the blocks by applying a Gaussian spatial window to each pixel before computing the histograms.
* **threshold_L2hys** – *double*
L2-Hys (Lowe-style clipped L2 norm) normalization method shrinkage. The L2-Hys method is used to normalize the blocks and it consists of an L2-norm followed by clipping and a renormalization. The clipping limits the maximum value of the descriptor vector for each block to have the value of the given threshold (0.2 by default). After the clipping the descriptor vector is renormalized as described in *IJCV*, 60(2):91-110, 2004.
* **gamma_correction** – *bool*
Flag to specify whether the gamma correction preprocessing is required or not. Performing gamma correction slightly increases the performance of the HOG algorithm.
* **nlevels** – *int*
Maximum number of detection window increases.
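Of the parameters above, `threshold_L2hys` is perhaps the least obvious, so here is a small sketch of the L2-Hys normalization it controls (the input values are illustrative):

```python
import numpy as np

def l2_hys(block_vector, clip=0.2, eps=1e-7):
    # L2-normalize, clip every component at `clip`, then renormalize.
    v = block_vector / np.sqrt(np.sum(block_vector ** 2) + eps ** 2)
    v = np.minimum(v, clip)
    return v / np.sqrt(np.sum(v ** 2) + eps ** 2)

# a block histogram dominated by one strong gradient component
block = np.array([10.0, 1.0, 1.0, 1.0])
normalized = l2_hys(block)
```

The clipping step caps the influence of any single very strong gradient, which makes the descriptor more robust to local illumination changes.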
As we can see, the `cv2.HOGDescriptor()` function supports a wide range of parameters. The first few arguments (`block_size`, `block_stride`, `cell_size`, and `nbins`) are probably the ones you are most likely to change. The other parameters can be safely left at their default values and you will get good results.
In the code below, we will use the `cv2.HOGDescriptor()` function to set the cell size, block size, block stride, and the number of bins for the histograms of the HOG descriptor. We will then use the `.compute(image)` method to compute the HOG descriptor (feature vector) for the given `image`.
```
# Specify the parameters for our HOG descriptor
# Cell Size in pixels (width, height). Must be smaller than the size of the detection window
# and must be chosen so that the resulting Block Size is smaller than the detection window.
cell_size = (6, 6)
# Number of cells per block in each direction (x, y). Must be chosen so that the resulting
# Block Size is smaller than the detection window
num_cells_per_block = (2, 2)
# Block Size in pixels (width, height). Must be an integer multiple of Cell Size.
# The Block Size must be smaller than the detection window
block_size = (num_cells_per_block[0] * cell_size[0],
num_cells_per_block[1] * cell_size[1])
# Calculate the number of cells that fit in our image in the x and y directions
x_cells = gray_image.shape[1] // cell_size[0]
y_cells = gray_image.shape[0] // cell_size[1]
# Horizontal distance between blocks in units of Cell Size. Must be an integer and it must
# be set such that (x_cells - num_cells_per_block[0]) / h_stride = integer.
h_stride = 1
# Vertical distance between blocks in units of Cell Size. Must be an integer and it must
# be set such that (y_cells - num_cells_per_block[1]) / v_stride = integer.
v_stride = 1
# Block Stride in pixels (horizontal, vertical). Must be an integer multiple of Cell Size
block_stride = (cell_size[0] * h_stride, cell_size[1] * v_stride)
# Number of gradient orientation bins
num_bins = 9
# Specify the size of the detection window (Region of Interest) in pixels (width, height).
# It must be an integer multiple of Cell Size and it must cover the entire image. Because
# the detection window must be an integer multiple of cell size, depending on the size of
# your cells, the resulting detection window might be slightly smaller than the image.
# This is perfectly ok.
win_size = (x_cells * cell_size[0] , y_cells * cell_size[1])
# Print the shape of the gray scale image for reference
print('\nThe gray scale image has shape: ', gray_image.shape)
print()
# Print the parameters of our HOG descriptor
print('HOG Descriptor Parameters:\n')
print('Window Size:', win_size)
print('Cell Size:', cell_size)
print('Block Size:', block_size)
print('Block Stride:', block_stride)
print('Number of Bins:', num_bins)
print()
# Set the parameters of the HOG descriptor using the variables defined above
hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, num_bins)
# Compute the HOG Descriptor for the gray scale image
hog_descriptor = hog.compute(gray_image)
```
# Number of Elements In The HOG Descriptor
The resulting HOG Descriptor (feature vector), contains the normalized histograms from all cells from all blocks in the detection window concatenated in one long vector. Therefore, the size of the HOG feature vector will be given by the total number of blocks in the detection window, multiplied by the number of cells per block, times the number of orientation bins:
<span class="mathquill">
\begin{equation}
\mbox{total_elements} = (\mbox{total_number_of_blocks})\mbox{ } \times \mbox{ } (\mbox{number_cells_per_block})\mbox{ } \times \mbox{ } (\mbox{number_of_bins})
\end{equation}
</span>
If we don’t have overlapping blocks (*i.e.* the `block_stride` equals the `block_size`), the total number of blocks can be easily calculated by dividing the size of the detection window by the block size. However, in the general case we have to take into account the fact that we have overlapping blocks. To find the total number of blocks in the general case (*i.e.* for any `block_stride` and `block_size`), we can use the formula given below:
<span class="mathquill">
\begin{equation}
\mbox{Total}_i = \left( \frac{\mbox{block_size}_i}{\mbox{block_stride}_i} \right)\left( \frac{\mbox{window_size}_i}{\mbox{block_size}_i} \right) - \left [\left( \frac{\mbox{block_size}_i}{\mbox{block_stride}_i} \right) - 1 \right]; \mbox{ for } i = x,y
\end{equation}
</span>
Where <span class="mathquill">Total$_x$</span> is the total number of blocks along the width of the detection window, and <span class="mathquill">Total$_y$</span> is the total number of blocks along the height of the detection window. This formula for <span class="mathquill">Total$_x$</span> and <span class="mathquill">Total$_y$</span> takes into account the extra blocks that result from overlapping. After calculating <span class="mathquill">Total$_x$</span> and <span class="mathquill">Total$_y$</span>, we can get the total number of blocks in the detection window by multiplying <span class="mathquill">Total$_x$ $\times$ Total$_y$</span>. The above formula can be simplified considerably because the `block_size`, `block_stride`, and `window_size` are all defined in terms of the `cell_size`. By making all the appropriate substitutions and cancellations the above formula reduces to:
<span class="mathquill">
\begin{equation}
\mbox{Total}_i = \left(\frac{\mbox{cells}_i - \mbox{num_cells_per_block}_i}{N_i}\right) + 1\mbox{ }; \mbox{ for } i = x,y
\end{equation}
</span>
Where <span class="mathquill">cells$_x$</span> is the total number of cells along the width of the detection window, and <span class="mathquill">cells$_y$</span> is the total number of cells along the height of the detection window. And <span class="mathquill">$N_x$</span> is the horizontal block stride in units of `cell_size`, while <span class="mathquill">$N_y$</span> is the vertical block stride in units of `cell_size`.
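As a quick sanity check of the simplified formula, we can plug in the classic 64 × 128 pedestrian-detection window with 8 × 8 cells, 2 × 2 cells per block, and a block stride of one cell. These parameter values are illustrative and not necessarily the ones used elsewhere in this notebook:

```python
# Hypothetical example: the classic 64 x 128 detection window
window_size = (64, 128)        # (width, height) in pixels
cell_size = (8, 8)             # pixels per cell
num_cells_per_block = (2, 2)   # cells per block along x and y
block_stride_cells = (1, 1)    # block stride in units of cell_size (N_x, N_y)

# Number of cells along each dimension of the detection window
cells_x = window_size[0] // cell_size[0]   # 8
cells_y = window_size[1] // cell_size[1]   # 16

# Simplified block-count formula from the text
total_x = (cells_x - num_cells_per_block[0]) // block_stride_cells[0] + 1  # 7
total_y = (cells_y - num_cells_per_block[1]) // block_stride_cells[1] + 1  # 15

num_bins = 9
total_elements = (total_x * total_y *
                  num_cells_per_block[0] * num_cells_per_block[1] * num_bins)
print(total_elements)  # 7 * 15 * 2 * 2 * 9 = 3780
```

This reproduces the well-known 3780-element HOG feature vector for a 64 × 128 window with those standard parameters.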
Let's calculate what the number of elements for the HOG feature vector should be and check that it matches the shape of the HOG Descriptor calculated above.
```
# Calculate the total number of blocks along the width of the detection window
tot_bx = np.uint32(((x_cells - num_cells_per_block[0]) / h_stride) + 1)

# Calculate the total number of blocks along the height of the detection window
tot_by = np.uint32(((y_cells - num_cells_per_block[1]) / v_stride) + 1)

# Calculate the total number of elements in the feature vector
tot_els = (tot_bx) * (tot_by) * num_cells_per_block[0] * num_cells_per_block[1] * num_bins

# Print the total number of elements the HOG feature vector should have
print('\nThe total number of elements in the HOG Feature Vector should be: ',
      tot_bx, 'x',
      tot_by, 'x',
      num_cells_per_block[0], 'x',
      num_cells_per_block[1], 'x',
      num_bins, '=',
      tot_els)

# Print the shape of the HOG Descriptor to see that it matches the above
print('\nThe HOG Descriptor has shape:', hog_descriptor.shape)
print()
```
# Visualizing The HOG Descriptor
We can visualize the HOG Descriptor by plotting the histogram associated with each cell as a collection of vectors. To do this, we will plot each bin in the histogram as a single vector whose magnitude is given by the height of the bin and whose orientation is given by the angular bin it's associated with. Since any given cell might have multiple histograms associated with it, due to the overlapping blocks, we will average all the histograms for each cell to produce a single histogram per cell.
OpenCV has no easy way to visualize the HOG Descriptor, so we have to do some manipulation first in order to visualize it. We will start by reshaping the HOG Descriptor in order to make our calculations easier. We will then compute the average histogram of each cell and finally we will convert the histogram bins into vectors. Once we have the vectors, we plot the corresponding vectors for each cell in an image.
The code below produces an interactive plot so that you can interact with the figure. The figure contains:
* the grayscale image,
* the HOG Descriptor (feature vector),
* a zoomed-in portion of the HOG Descriptor, and
* the histogram of the selected cell.
**You can click anywhere on the gray scale image or the HOG Descriptor image to select a particular cell**. Once you click on either image a *magenta* rectangle will appear showing the cell you selected. The Zoom Window will show you a zoomed in version of the HOG descriptor around the selected cell; and the histogram plot will show you the corresponding histogram for the selected cell. The interactive window also has buttons at the bottom that allow for other functionality, such as panning, and giving you the option to save the figure if desired. The home button returns the figure to its default value.
**NOTE**: If you are running this notebook in the Udacity workspace, there is around a 2 second lag in the interactive plot. This means that if you click in the image to zoom in, it will take about 2 seconds for the plot to refresh.
```
%matplotlib notebook

import copy
import matplotlib.patches as patches

# Set the default figure size
plt.rcParams['figure.figsize'] = [9.8, 9]

# Reshape the feature vector to [blocks_y, blocks_x, num_cells_per_block_x, num_cells_per_block_y, num_bins].
# The blocks_x and blocks_y will be transposed so that the first index (blocks_y) refers to the row number
# and the second index to the column number. This will be useful later when we plot the feature vector, so
# that the feature vector indexing matches the image indexing.
hog_descriptor_reshaped = hog_descriptor.reshape(tot_bx,
                                                 tot_by,
                                                 num_cells_per_block[0],
                                                 num_cells_per_block[1],
                                                 num_bins).transpose((1, 0, 2, 3, 4))

# Print the shape of the feature vector for reference
print('The feature vector has shape:', hog_descriptor.shape)

# Print the shape of the reshaped feature vector
print('The reshaped feature vector has shape:', hog_descriptor_reshaped.shape)

# Create an array that will hold the average gradients for each cell
ave_grad = np.zeros((y_cells, x_cells, num_bins))

# Print the shape of the ave_grad array for reference
print('The average gradient array has shape: ', ave_grad.shape)

# Create an array that will count the number of histograms per cell
hist_counter = np.zeros((y_cells, x_cells, 1))

# Add up all the histograms for each cell and count the number of histograms per cell
for i in range(num_cells_per_block[0]):
    for j in range(num_cells_per_block[1]):
        ave_grad[i:tot_by + i,
                 j:tot_bx + j] += hog_descriptor_reshaped[:, :, i, j, :]

        hist_counter[i:tot_by + i,
                     j:tot_bx + j] += 1

# Calculate the average gradient for each cell
ave_grad /= hist_counter

# Calculate the total number of vectors we have in all the cells.
len_vecs = ave_grad.shape[0] * ave_grad.shape[1] * ave_grad.shape[2]

# Create an array that has num_bins equally spaced between 0 and 180 degrees in radians.
deg = np.linspace(0, np.pi, num_bins, endpoint = False)

# Each cell will have a histogram with num_bins. For each cell, plot each bin as a vector (with its magnitude
# equal to the height of the bin in the histogram, and its angle corresponding to the bin in the histogram).
# To do this, create rank 1 arrays that will hold the (x,y)-coordinate of all the vectors in all the cells in the
# image. Also, create the rank 1 arrays that will hold all the (U,V)-components of all the vectors in all the
# cells in the image. Create the arrays that will hold all the vector positions and components.
U = np.zeros((len_vecs))
V = np.zeros((len_vecs))
X = np.zeros((len_vecs))
Y = np.zeros((len_vecs))

# Set the counter to zero
counter = 0

# Use the cosine and sine functions to calculate the vector components (U,V) from their magnitudes. Remember the
# cosine and sine functions take angles in radians. Calculate the vector positions and magnitudes from the
# average gradient array
for i in range(ave_grad.shape[0]):
    for j in range(ave_grad.shape[1]):
        for k in range(ave_grad.shape[2]):
            U[counter] = ave_grad[i,j,k] * np.cos(deg[k])
            V[counter] = ave_grad[i,j,k] * np.sin(deg[k])

            X[counter] = (cell_size[0] / 2) + (cell_size[0] * i)
            Y[counter] = (cell_size[1] / 2) + (cell_size[1] * j)

            counter = counter + 1

# Create the bins in degrees to plot our histogram.
angle_axis = np.linspace(0, 180, num_bins, endpoint = False)
angle_axis += ((angle_axis[1] - angle_axis[0]) / 2)

# Create a figure with 4 subplots arranged in 2 x 2
fig, ((a,b),(c,d)) = plt.subplots(2,2)

# Set the title of each subplot
a.set(title = 'Gray Scale Image\n(Click to Zoom)')
b.set(title = 'HOG Descriptor\n(Click to Zoom)')
c.set(title = 'Zoom Window', xlim = (0, 18), ylim = (0, 18), autoscale_on = False)
d.set(title = 'Histogram of Gradients')

# Plot the gray scale image
a.imshow(gray_image, cmap = 'gray')
a.set_aspect(aspect = 1)

# Plot the feature vector (HOG Descriptor)
b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
b.invert_yaxis()
b.set_aspect(aspect = 1)
b.set_facecolor('black')

# Define function for interactive zoom
def onpress(event):

    # Unless the left mouse button is pressed do nothing
    if event.button != 1:
        return

    # Only accept clicks for subplots a and b
    if event.inaxes in [a, b]:

        # Get mouse click coordinates
        x, y = event.xdata, event.ydata

        # Select the cell closest to the mouse click coordinates
        cell_num_x = np.uint32(x / cell_size[0])
        cell_num_y = np.uint32(y / cell_size[1])

        # Set the edge coordinates of the rectangle patch
        edgex = x - (x % cell_size[0])
        edgey = y - (y % cell_size[1])

        # Create a rectangle patch that matches the cell selected above
        rect = patches.Rectangle((edgex, edgey),
                                 cell_size[0], cell_size[1],
                                 linewidth = 1,
                                 edgecolor = 'magenta',
                                 facecolor = 'none')

        # A single patch can only be used in a single plot. Create copies
        # of the patch to use in the other subplots
        rect2 = copy.copy(rect)
        rect3 = copy.copy(rect)

        # Update all subplots
        a.clear()
        a.set(title = 'Gray Scale Image\n(Click to Zoom)')
        a.imshow(gray_image, cmap = 'gray')
        a.set_aspect(aspect = 1)
        a.add_patch(rect)

        b.clear()
        b.set(title = 'HOG Descriptor\n(Click to Zoom)')
        b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
        b.invert_yaxis()
        b.set_aspect(aspect = 1)
        b.set_facecolor('black')
        b.add_patch(rect2)

        c.clear()
        c.set(title = 'Zoom Window')
        c.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 1)
        c.set_xlim(edgex - cell_size[0], edgex + (2 * cell_size[0]))
        c.set_ylim(edgey - cell_size[1], edgey + (2 * cell_size[1]))
        c.invert_yaxis()
        c.set_aspect(aspect = 1)
        c.set_facecolor('black')
        c.add_patch(rect3)

        d.clear()
        d.set(title = 'Histogram of Gradients')
        d.grid()
        d.set_xlim(0, 180)
        d.set_xticks(angle_axis)
        d.set_xlabel('Angle')
        d.bar(angle_axis,
              ave_grad[cell_num_y, cell_num_x, :],
              180 // num_bins,
              align = 'center',
              alpha = 0.5,
              linewidth = 1.2,
              edgecolor = 'k')

        fig.canvas.draw()

# Create a connection between the figure and the mouse click
fig.canvas.mpl_connect('button_press_event', onpress)

plt.show()
```
# Understanding The Histograms
Let's take a look at a couple of snapshots of the above figure to see if the histograms for the selected cell make sense. Let's start looking at a cell that is inside a triangle and not near an edge:
<br>
<figure>
<img src = "./in_cell_images/snapshot1.png" width = "70%" style = "border: thin silver solid; padding: 1px">
<figcaption style = "text-align:center; font-style:italic">Fig. 4. - Histograms Inside a Triangle.</figcaption>
</figure>
<br>
In this case, since the triangle is nearly all the same color, there shouldn't be any dominant gradient in the selected cell. As we can clearly see in the Zoom Window and the histogram, this is indeed the case: we have many gradients, but none of them clearly dominates over the others.
Now let’s take a look at a cell that is near a horizontal edge:
<br>
<figure>
<img src = "./in_cell_images/snapshot2.png" width = "70%" style = "border: thin silver solid; padding: 1px">
<figcaption style = "text-align:center; font-style:italic">Fig. 5. - Histograms Near a Horizontal Edge.</figcaption>
</figure>
<br>
Remember that edges are areas of an image where the intensity changes abruptly. In these cases, we will have a high intensity gradient in some particular direction. This is exactly what we see in the corresponding histogram and Zoom Window for the selected cell. In the Zoom Window, we can see that the dominant gradient is pointing up, almost at 90 degrees, since that’s the direction in which there is a sharp change in intensity. Therefore, we should expect to see the 90-degree bin in the histogram to dominate strongly over the others. This is in fact what we see.
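To make the direction convention concrete, here is a small standalone sketch (not part of the original notebook) that folds a gradient's (gx, gy) components into the unsigned 0–180 degree range that HOG uses:

```python
import math

def unsigned_orientation(gx, gy):
    """Gradient direction folded into [0, 180) degrees, plus the gradient magnitude."""
    angle = math.degrees(math.atan2(gy, gx)) % 180.0
    magnitude = math.hypot(gx, gy)
    return angle, magnitude

# A sharp vertical intensity change (a horizontal edge) gives a gradient along y
print(unsigned_orientation(0.0, 5.0))   # approximately (90.0, 5.0)

# A sharp horizontal intensity change (a vertical edge) gives a gradient along x;
# 0 and 180 degrees are treated as the same unsigned direction
print(unsigned_orientation(5.0, 0.0))   # approximately (0.0, 5.0)
```

The 90-degree direction for a horizontal edge is exactly the dominant bin seen in the histogram above.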
Now let’s take a look at a cell that is near a vertical edge:
<br>
<figure>
<img src = "./in_cell_images/snapshot3.png" width = "70%" style = "border: thin silver solid; padding: 1px">
<figcaption style = "text-align:center; font-style:italic">Fig. 6. - Histograms Near a Vertical Edge.</figcaption>
</figure>
<br>
In this case, we expect the dominant gradient in the cell to be horizontal, close to 180 degrees, since that’s the direction in which there is a sharp change in intensity. Therefore, we should expect the 170-degree bin in the histogram to dominate strongly over the others. This is what we see in the histogram, but we also see another dominant gradient in the cell, namely the one in the 10-degree bin. The reason for this is that the HOG algorithm is using unsigned gradients, which means 0 degrees and 180 degrees are considered the same. Therefore, when the histograms are being created, angles between 160 and 180 degrees contribute proportionally to both the 10-degree bin and the 170-degree bin. This results in there being two dominant gradients in the cell near the vertical edge instead of just one.
To conclude let’s take a look at a cell that is near a diagonal edge.
<br>
<figure>
<img src = "./in_cell_images/snapshot4.png" width = "70%" style = "border: thin silver solid; padding: 1px">
<figcaption style = "text-align:center; font-style:italic">Fig. 7. - Histograms Near a Diagonal Edge.</figcaption>
</figure>
<br>
To understand what we are seeing, let’s first remember that gradients have an *x*-component and a *y*-component, just like vectors. Therefore, the resulting orientation of a gradient is going to be given by the vector sum of its components. For this reason, on vertical edges the gradients are horizontal, because they only have an *x*-component, as we saw in Figure 6. On horizontal edges the gradients are vertical, because they only have a *y*-component, as we saw in Figure 5. Consequently, on diagonal edges, the gradients are also going to be diagonal because both the *x* and *y* components are non-zero. Since the diagonal edges in the image are close to 45 degrees, we should expect to see a dominant gradient orientation in the 50-degree bin. This is in fact what we see in the histogram but, just like in Figure 6, we see there are two dominant gradients instead of just one. The reason for this is that when the histograms are being created, angles that are near the boundaries of bins contribute proportionally to the adjacent bins. For example, a gradient with an angle of 40 degrees is right in the middle of the 30-degree and 50-degree bins. Therefore, the magnitude of the gradient is split evenly between the 30-degree and 50-degree bins. This results in there being two dominant gradients in the cell near the diagonal edge instead of just one.
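The proportional voting described above can be sketched in a few lines. This is an illustrative re-implementation of the splitting rule, not OpenCV's internal code; the bin centers (10°, 30°, …, 170°) follow the 9-bin layout used in this notebook:

```python
def split_vote(angle, magnitude, num_bins=9, bin_width=20):
    """Return {bin_center: contribution} for an unsigned angle in [0, 180)."""
    # Index of the nearest bin center at or below the angle (wraps around)
    lower_idx = int((angle - bin_width / 2) // bin_width) % num_bins
    upper_idx = (lower_idx + 1) % num_bins
    lower_center = lower_idx * bin_width + bin_width / 2
    upper_center = upper_idx * bin_width + bin_width / 2
    # Fraction of the way from the lower bin center toward the upper one
    frac = ((angle - lower_center) % 180) / bin_width
    return {lower_center: magnitude * (1 - frac), upper_center: magnitude * frac}

print(split_vote(40, 1.0))   # 40 deg sits midway: {30.0: 0.5, 50.0: 0.5}
print(split_vote(175, 1.0))  # near 180 deg, the vote wraps: {170.0: 0.75, 10.0: 0.25}
```

Note how an angle of 175° splits its vote between the 170° bin and, wrapping around, the 10° bin, which is exactly the two-peak effect seen in the vertical-edge histogram.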
Now that you know how HOG is implemented, in the workspace you will find a notebook named *Examples*. In there, you will be able to set your own parameters for the HOG descriptor for various images. Have fun!
# KNeighborsClassifier with MaxAbsScaler
This code template is for a classification task using a simple KNeighborsClassifier, based on the K-Nearest Neighbors algorithm, combined with the MaxAbsScaler scaling technique.
### Required Packages
```
!pip install imblearn
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from imblearn.over_sampling import RandomOverSampler
from sklearn.preprocessing import LabelEncoder,MaxAbsScaler
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
file_path= ""
```
List of features which are required for model training.
```
features = []
```
Target feature for prediction.
```
target = ''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the sklearn library don't handle string category data or null values, we have to explicitly remove or replace them. The snippet below defines functions which remove null values if any exist, and convert string class data in the dataset by encoding it to integer classes.
```
def NullClearner(df):
    if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
        df.fillna(df.mean(),inplace=True)
        return df
    elif(isinstance(df, pd.Series)):
        df.fillna(df.mode()[0],inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)

def EncodeY(df):
    if len(df.unique())<=2:
        return df
    else:
        un_EncodedT=np.sort(pd.unique(df), kind='mergesort')
        df=LabelEncoder().fit_transform(df)
        EncodedT=[xi for xi in range(len(un_EncodedT))]
        print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
        return df
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
    X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
#### Handling Target Imbalance
The challenge of working with imbalanced datasets is that most machine learning techniques will ignore the minority class and in turn perform poorly on it, even though performance on the minority class is often what matters most.
One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class. We will perform oversampling using the imblearn library.
```
x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
```
### Model
KNN is one of the simplest machine learning algorithms, based on the supervised learning technique. The algorithm stores all the available data and classifies a new data point based on its similarity to the stored data, putting the new case into the category most similar to the available categories. At the training phase, KNN simply stores the dataset; when it receives new data, it classifies that data into the category it most resembles.
#### Model Tuning Parameters
> - **n_neighbors** -> Number of neighbors to use by default for kneighbors queries.
> - **weights** -> weight function used in prediction. {**uniform,distance**}
> - **algorithm**-> Algorithm used to compute the nearest neighbors. {**‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’**}
> - **p** -> Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
> - **leaf_size** -> Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
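As a quick illustration of these parameters, here is a hedged sketch on a made-up toy dataset. The parameter values below are arbitrary choices for demonstration, not recommendations; the pipeline later in this template keeps the library defaults:

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical parameter choices purely for illustration
knn = KNeighborsClassifier(n_neighbors=3, weights='distance',
                           algorithm='auto', p=2, leaf_size=30)

# Tiny toy dataset: class 0 clusters near the origin, class 1 near (10, 10)
X_toy = [[0, 0], [1, 0], [0, 1], [10, 10], [9, 10], [10, 9]]
y_toy = [0, 0, 0, 1, 1, 1]
knn.fit(X_toy, y_toy)

# Each query point is assigned the class of its nearest neighbors
print(knn.predict([[0.5, 0.5], [9.5, 9.5]]))  # [0 1]
```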
## Data Rescaling
MaxAbsScaler scales each feature by its maximum absolute value.
This estimator scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any sparsity.
[For More Reference](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html)
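A tiny numeric example (toy data, not from this template) makes the behavior concrete: each column is divided by its maximum absolute value, so all entries land in [-1, 1] and sparsity is preserved.

```python
from sklearn.preprocessing import MaxAbsScaler

# Toy feature matrix: the max absolute values per column are 4 and 100
data = [[ 1.0,  -50.0],
        [-4.0,  100.0],
        [ 2.0,   25.0]]

scaled = MaxAbsScaler().fit_transform(data)
print(scaled.tolist())  # [[0.25, -0.5], [-1.0, 1.0], [0.5, 0.25]]
```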
```
model=make_pipeline(MaxAbsScaler(),KNeighborsClassifier(n_jobs=-1))
model.fit(x_train,y_train)
```
#### Model Accuracy
The score() method returns the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires each sample's full label set to be correctly predicted.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions were correct and how many were not, broken down per class, where:
- Precision:- accuracy of positive predictions.
- Recall:- fraction of positives that were correctly identified.
- F1-score:- the harmonic mean of precision and recall.
- Support:- the number of actual occurrences of the class in the specified dataset.
```
print(classification_report(y_test,model.predict(x_test)))
```
#### Creator: Vikas Mishra, Github: [Profile](https://github.com/Vikaas08)
# 63-inference-zero-shot-model
> Exploring influence of model selection on zero shot classification inference.
In this notebook, we perform zero-shot classification on our transcripts. Since at this point in our work, we have access to the CSV files, we will use them directly. We'll start with one, which can then be expanded upon.
#### Common helpful packages
```
#all_no_test
#Data analysis and processing
import pandas as pd
import numpy as np
#ml and dl
from transformers import pipeline
from sklearn.metrics import confusion_matrix
#plotting
import matplotlib.pyplot as plt
import seaborn as sns
# file system and python operations
import glob
import os.path
import re
%matplotlib inline
```
# Load data
In this section, we're going to use glob to list all available CSV files, and then pick a subset to process. Keep in mind that once you have a *list* of filenames, you can use a list comprehension or for loop to apply the same operation to them in batch.
#### Filename constants
```
# Box prefix (uncomment and use the lower variable if the Box directory on your computer is called "Box Sync")
box_prefix = '/data/p_dsi/wise/data/'
#box_prefix = '~/Box Sync/DSI Documents/'
# Data filepath
csv_filepath = os.path.expanduser(box_prefix + 'cleaned_data/csv_files')
# get list of all csvs in directory
all_transcript_csvs_list = glob.glob(csv_filepath + '/*.csv')
# show the names of the first 4
all_transcript_csvs_list[0:4]
```
Now, I'm going to use the filenames to load _the first two CSVs_ and concatenate them into a single dataframe. This is indicated by the slice `[:2]`.
```
#read the csv
transcript_df_list = [pd.read_csv(transcript_csv) for transcript_csv in all_transcript_csvs_list[:2]]
#concatenate list of dfs into a single df
transcript_df= pd.concat(transcript_df_list, ignore_index=True)
#show the first 5 rows
display(transcript_df.head())
#print size
transcript_df.shape
```
## A bit of pre-processing
As you can see in the dataframe above, some of the labels are `NaN`, essentially meaning that they're empty. Since these rows don't have labels, we're going to drop them from the dataframe. Then, we'll reset the index (this basically means renumbering the index: since we took out some rows, those row indices would be missing, so we renumber to make the index contiguous).
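On a tiny made-up frame (a stand-in for the transcript data, not the real file), the two steps look like this:

```python
import pandas as pd

# Hypothetical mini-transcript: the middle row has no label
df_toy = pd.DataFrame({'speech': ['good job', 'sit down', 'next page'],
                       'label': ['PRS', None, 'NEU']})

df_toy = df_toy.dropna()                 # drops the unlabeled row (old index 1)
df_toy = df_toy.reset_index(drop=True)   # renumbers the index contiguously

print(df_toy.index.tolist())     # [0, 1]
print(df_toy['label'].tolist())  # ['PRS', 'NEU']
```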
```
# drop NA rows
transcript_df = transcript_df.dropna()
# reset the index
# the drop=True argument means that we don't make the old index a new column in our dataframe
transcript_df = transcript_df.reset_index(drop=True)
# Add a column named sample_id with the reset_index trick
transcript_df = transcript_df.reset_index().rename(columns={'index':'sample_id'})
# see the new size
print(transcript_df.shape)
```
After processing, we can see that there were 7 unlabeled examples in this dataset, which have been removed. Let's take another look just to make sure everything looks good:
```
transcript_df.head(3)
```
Great.
# Zero-shot classification
Now, we're going to actually perform the zero-shot classification using the text from above. We'll first declare some parameters and create the zero-shot pipeline for easy modification of the labels and model type.
```
# these are the labels that were added to the transcripts (don't change these or the order)
transcript_labels = ['PRS', 'REP', 'NEU', 'OTR']
# define list of candidate labels (change these, but not the order)
candidate_labels = ["praise", "reprimand", "neutral", "opportunity to respond"]
# define and create pipeline
classifier = pipeline("zero-shot-classification", model = "typeform/distilbert-base-uncased-mnli", device=0)
```
Now, we're going to build a dictionary of correspondences using the labels above (which is why we don't change the order). If you do change the order, just know that the position in one set of labels must have its match at the same position in the other labels. See below.
We'll use this later to match between our names for the labels (`candidate_labels`) and the transcript labels (`transcript_labels`).
```
labels_lookup = dict(zip(candidate_labels, transcript_labels))
labels_lookup
rev_labels_lookup = dict(zip(transcript_labels, candidate_labels))
rev_labels_lookup
```
## Make predictions on one cleaned transcript
Now, we'll then use the `speech` column of `transcript_df` to provide inputs to the classifier in batch, and get the results in batch.
```
# Get all the rows of text in the speech column, and convert the data structure into a list. Use this as the sequences (text) argument to the classifier
# Use the candidate_labels variable we defined above, and use that as the candidate_labels ARGUMENT to the classifier
results_list = classifier(sequences = transcript_df['speech'].tolist(),
                          candidate_labels = candidate_labels)
```
## Convert to a reasonable representation
The results returned are a list of dictionaries. Each dictionary contains the sequence, the labels, and the probabilities associated with each label. This will be shown below. Dictionaries are easily converted to pandas dataframes, where each key becomes a column of the new dataframe. In the same way, lists of dictionaries are also easily converted to pandas dataframes.
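As a standalone illustration of that conversion (with made-up scores, not the classifier's actual output), a list of such dictionaries concatenates cleanly into one long-format dataframe:

```python
import pandas as pd

# Made-up results mimicking the zero-shot output structure
toy_results = [{'sequence': 'hi there', 'labels': ['praise', 'neutral'], 'scores': [0.9, 0.1]},
               {'sequence': 'sit down', 'labels': ['neutral', 'praise'], 'scores': [0.7, 0.3]}]

# Each dict becomes a small dataframe; scalar values broadcast across the list columns
toy_df = pd.concat([pd.DataFrame(r) for r in toy_results], ignore_index=True)

print(toy_df.shape)  # (4, 3) -- one row per (sequence, label) pair
```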
```
#look at the first 2 elements of results_list
results_list[:2]
#example of converting a single element of this list to a dataframe
pd.DataFrame(results_list[0])
#use a list comprehension to generate a list of these dataframes
results_df_list = [pd.DataFrame(result) for result in results_list]
#add the sample_id column for each dataframe
results_df_list = [df.assign(sample_id = ind) for ind, df in enumerate(results_df_list)]
#concatenate this list together to form one single dataframe
results_df = pd.concat(results_df_list)
results_df.head()
#pivot_wider and take `sequence` off the index
results_df = results_df.pivot(index=['sample_id', 'sequence'], columns='labels', values='scores').reset_index(level='sequence')
results_df
#get the max of the labels columns (using candidate_labels) and add a column called 'pred_zslabels'
results_df['pred_zslabels'] = results_df[candidate_labels].idxmax(axis=1)
#now we'll use the lookup dictionary we made above to create the pred column to correspond to the real labels
results_df['pred'] = results_df['pred_zslabels'].replace(labels_lookup)
results_df.head()
#now, we'll join on the real labels and some other columns just to make sure things look right
results_df = results_df.merge(transcript_df[['sample_id', 'label', 'speech']], left_index=True, right_on='sample_id')
#there's a current challenge in the data with leading and trailing whitespace so let's make sure to strip that off our labels
results_df['label'] = results_df['label'].str.strip()
#let's rename the 'label' column to be 'truth'
results_df = results_df.rename(columns={'label':'truth'})
#let's look at the first 15 rows of the result
results_df.head(15)
```
Great! This looks correct. Let's start with the performance evaluation.
## Evaluate performance using confusion matrix
Given the structure our results, let's finish up with a confusion matrix.
```
#use confusion matrix function from scikit-learn metrics
c_ma = confusion_matrix(results_df.dropna()['truth'], results_df.dropna()['pred'], labels=transcript_labels)
#create a dataframe from the confusion matrix
c_df = pd.DataFrame(c_ma,
                    columns = transcript_labels,
                    index = transcript_labels)
#use reverse lookup table from above to use the actual labels that we assigned for zero shot
c_df.rename(columns=rev_labels_lookup, index=rev_labels_lookup, inplace=True)
#add axis labels for seaborn
c_df.index.name = 'Actual Labels'
c_df.columns.name = 'Predicted Labels'
#inspect
c_df
#use seaborn to display as heatmap
ax = sns.heatmap(c_df, cmap='Blues', annot=True);
ax.set_xticklabels(ax.get_xticklabels(),rotation = 45);
ax.set_yticklabels(ax.get_yticklabels(),rotation = 45);
```
Let's inspect this confusion matrix.
* First, we can see that there are no reprimand labels in this transcript.
* It is hard for zero shot to distinguish neutral. Looking across the row, neutral is incorrectly predicted to be every class, but ESPECIALLY opportunity to respond.
* The model is great at classifying opportunity to respond (high sensitivity), but that's mostly because it seems to think EVERYTHING is an opportunity to respond (i.e., low precision): it predicts 124 out of the 163 examples to be opportunity to respond (to be fair, correctly classifying 49 of them).
Great steps to improve from here include:
* Employing different labels
* Using different models
* Inspecting the eyeball set of misclassified samples to identify if there's any similarities or improvements which can be proposed.
### Quick sanity checks
Here, we'll just do a few quick sanity checks to make sure things look right
```
#this should be the total number of rows in the results and transcripts dataframe (after dropping NAs)
c_df.sum().sum() #passed
#this should be equal to the sum of the rows of the confusion matrix
results_df['truth'].value_counts() #passed
```
We can also see that there are twice as many `neutral` as there are `opportunity to respond`, and twice as many `opportunity to respond` as there are `positive`.
```
#this should be equal to the sum of the columns of the confusion matrix
results_df['pred'].value_counts() #passed
```
# Load data and libraries
```
from google.colab import drive
drive.mount('/content/drive')
!pip install shap
!pip install pyitlib
import os
os.path.abspath(os.getcwd())
os.chdir('/content/drive/My Drive/Protein project')
os.path.abspath(os.getcwd())
from __future__ import division ###for float operation
from collections import Counter
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import recall_score ##tp / (tp + fn)
from sklearn.metrics import precision_score #tp / (tp + fp)
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import KFold, StratifiedKFold
from pyitlib import discrete_random_variable as drv
import time
import timeit
import networkx as nx
import matplotlib.pyplot as plt
def readData(filename):
    fr = open(filename)
    returnData = []
    headerLine = fr.readline()  ### move cursor past the header line
    for line in fr.readlines():
        lineStrip = line.strip().replace('"', '')
        lineList = lineStrip.split('\t')
        returnData.append(lineList)  ### e.g. ['3','2',...]
    return returnData
"""first case P450 = [['1','1',....],[],[].....,[]] second case P450 = array([['1','1',....],[],[].....,[]]), third case P450 = """
P450 = readData('P450.txt') ### [[],[],[],....[]]
P450 = np.array(P450) ### either [['1','1',....],[],[].....,[]] or array([['1','1',....],[],[].....,[]]) works, but note that keys are '1', '0'
#P450 = P450.astype(int) ### for shap array [[1,1,....],[],[].....,[]], keys are 1, 0
M=np.matrix([[245, 9, 0, 3, 0, 2, 65, 8],
[9, 218, 17, 17, 49, 10, 50, 17],
[0, 17, 175, 16, 25, 13, 0, 46],
[3, 17, 16, 194, 19, 0, 0, 3],
[0, 49, 25, 19, 199, 10, 0, 3],
[2, 10, 13, 0, 10, 249, 50, 74],
[65, 50, 0, 0, 0, 50, 262, 11],
[8, 17, 46, 3, 3, 74, 11, 175]])
X = P450[:,0:8]
y = P450[:,-1]
def readData2(filename):
    """Read a comma-separated file of single characters, skip the header, and return rows as lists."""
    fr = open(filename)
    returnData = []
    headerLine = fr.readline()  # move the cursor past the header
    for line in fr.readlines():
        linestr = line.strip().replace(', ', '')
        lineList = list(linestr)
        returnData.append(lineList)  # e.g. ['3', '2', ...]
    return returnData
lactamase = readData2('lactamase.txt')
lactamase = np.array(lactamase)
#lactamase = lactamase.astype(int)
M2 = np.matrix([[101, 5, 0, 2, 0, 14, 4, 37],
                [5, 15, 14, 1, 7, 7, 0, 19],
                [0, 14, 266, 15, 14, 2, 26, 4],
                [2, 1, 15, 28, 2, 15, 4, 0],
                [0, 7, 14, 2, 32, 9, 0, 8],
                [14, 7, 2, 15, 9, 29, 7, 9],
                [4, 0, 26, 4, 0, 7, 72, 21],
                [37, 19, 4, 0, 8, 9, 21, 211]])
X2 = lactamase[:,0:8]
y2 = lactamase[:,-1]
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted ### Checks if the estimator is fitted by verifying the presence of fitted attributes (ending with a trailing underscore)
#from sklearn.utils.multiclass import unique_labels, not necessary, can be replaced by array(list(set()))
```
# Bayesian network
```
"""
Bayesian network implementation
API inspired by SciKit-learn.
"""
class Bayes_net(BaseEstimator, ClassifierMixin):
def fit(self,X,y,M = None):
raise NotImplementedError
def predict_proba(self, X): ### key prediction methods, all other prediction methods will use it first.
raise NotImplementedError
def predict_binary(self,X):
"""
Perform classification on an array of test vectors X, predict P(C1|X), works only for binary classifcation
Parameters
----------
X : array-like of shape (n_samples, n_features)
Returns
-------
C : ndarray of shape (n_samples,)
Predicted P(C1|X)
"""
Prob_C = self.predict_proba(X) ### Prob_C is n*|C| np.array
return(Prob_C[:,0])
def predict(self, X):
"""
Perform classification on an array of test vectors X.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Returns
-------
C : ndarray of shape (n_samples,)
Predicted target values for X
"""
Prob_C = self.predict_proba(X) ## Prob_C is |C|*n np.array ,C is self.C
return( np.array([self.classes_[ele] for ele in np.argmax(Prob_C, axis=1)] ) )
def Conditional_log_likelihood_general(self,y_true,y_pred_prob,C):
"""Calculate the conditional log likelihood.
:param y_true: The true class labels. e.g ['1','1',.....'0','0']
:param y_pred_prob: np.array shows prob of each class for each instance. ith column is the predicted prob for class C[i]
:param C: Class labels e.x array(['1','0']), C has to use same labels as y_true.
:return: CLL. A scalar.
"""
C = list(C) ## only list can use .index
cll = []
for i in range(len(y_true)):
cll.append( y_pred_prob[i,C.index(y_true[i])] ) ## \hat p(c_true|c_true)
cll = [np.log2(ele) for ele in cll]
cll = np.array(cll)
return(sum(cll))
def plot_tree_structure(self,mapping = None,figsize = (5,5)):
check_is_fitted(self)
parent = self.parent_
egdes = [(k,v) for v,k in parent.items() if k is not None]
G = nx.MultiDiGraph()
G.add_edges_from(egdes)
#mapping=dict(zip(range(8),['b0','b1','b2','b3','b4','b5','b6','b7']))
plt.figure(figsize=figsize)
nx.draw_networkx(G,nx.shell_layout(G))
```
## Naive Bayes
```
class NB(Bayes_net):
    name = "NB"

    def __init__(self, alpha=1):
        self.alpha = alpha

    def fit(self, X, y, M=None):
        """Implementation of the fitting function.
        Parameters
        ----------
        X : {array-like, sparse matrix}, shape (n_samples, n_features)
            The training input samples.
        y : array-like, shape (n_samples,) or (n_samples, n_outputs)
            The target values (class labels in classification, real numbers in
            regression).
        Returns
        -------
        self : object
            Returns self.
        """
        # countDict_, classes_, p_, P_class_prior_, Dict_C_, K_, training_time_ and
        # is_fitted_ are the fitted attributes; they must be refreshed on every fit.
        X, y = check_X_y(X, y)
        t = time.process_time()  # start timing
        countDict = Counter(y)  # {c1: n1, c2: n2, c3: n3}
        C = list(countDict.keys())  # [class1, class2, class3] in order of appearance
        n, p = X.shape  # n samples, p features
        # Laplace-smoothed class priors [p1, p2, p3]; .values() follows the order of .keys()
        P_class_prior = [(ele + self.alpha) / (n + self.alpha * len(C)) for ele in countDict.values()]
        P_class_prior = dict(zip(C, P_class_prior))  # {c1: p1, c2: p2, c3: p3}
        Dict_C = {}  # {c1: [counter1, ..., counter_p], c2: [...], ...}
        K = {}  # number of unique values of each feature
        for c in C:
            ListCounter_c = []
            for i in range(p):
                row_inx_c = [row for row in range(n) if y[row] == c]
                x_i_c = X[row_inx_c, i]
                ListCounter_c.append(Counter(x_i_c))
                if c == C[0]:
                    x_i = X[:, i]
                    K[i] = len(Counter(x_i))
            Dict_C[c] = ListCounter_c
        CP_time = np.array(time.process_time() - t)
        self.is_fitted_ = True
        self.Dict_C_, self.p_, self.P_class_prior_, self.K_, self.classes_, self.countDict_, self.training_time_ = \
            Dict_C, p, P_class_prior, K, np.array(C), countDict, CP_time
        return self

    def predict_proba(self, X):
        """
        Return probability estimates for the test vectors X.
        Parameters
        ----------
        X : array-like of shape (n_samples, n_features)
        Returns
        -------
        C : array-like of shape (n_samples, n_classes)
            The probability of the samples for each class in the model.
            The columns correspond to the classes as they appear in the
            attribute :term:`classes_`.
        """
        check_is_fitted(self)
        X = check_array(X)
        Prob_C = []
        for ins in X:
            P_class = self.P_class_prior_.copy()  # {c1: p1, c2: p2}; .copy() avoids mutating the prior
            for c in self.classes_:
                ListCounter_c = self.Dict_C_[c]
                for i in range(self.p_):
                    P_class[c] = P_class[c] * (ListCounter_c[i][ins[i]] + self.alpha) / (self.countDict_[c] + self.alpha * self.K_[i])
            # normalize P_class
            P_class = {key: P_class[key] / sum(list(P_class.values())) for key in P_class.keys()}
            Prob_C.append(list(P_class.values()))  # class order follows self.classes_
        return np.array(Prob_C)  # ndarray, as required by shap
nb = NB()
nb.fit(X,y)
nb.predict_proba(X)
#nb.get_params()
#nb.classes_
print(nb.name)
print(nb.predict_proba(X))
nb.score(X,y)
```
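The core of `NB.fit`/`NB.predict_proba` above is the Laplace-smoothed estimate P(x = v | c) = (count + alpha) / (n_c + alpha * K), where K is the number of distinct values of the feature. A minimal standalone illustration on toy data (the `smoothed` helper is written here for exposition only; it is not part of the class):

```python
from collections import Counter

# Toy data: one categorical feature, binary class, matching the smoothing used above
X_col = ['a', 'a', 'b', 'a', 'b', 'b']
y     = ['1', '1', '1', '0', '0', '0']
alpha = 1
K = len(set(X_col))  # number of distinct feature values (2)

def smoothed(value, cls):
    # P(x = value | class = cls) with Laplace smoothing: (count + alpha) / (n_cls + alpha * K)
    rows = [x for x, c in zip(X_col, y) if c == cls]
    return (Counter(rows)[value] + alpha) / (len(rows) + alpha * K)

print(smoothed('a', '1'))  # (2 + 1) / (3 + 2) = 0.6
print(smoothed('b', '1'))  # (1 + 1) / (3 + 2) = 0.4
```

With alpha > 0, a value never seen for a class still gets a small non-zero probability, which is what keeps the product over features in `predict_proba` from collapsing to zero.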
## TAN
```
class TAN(Bayes_net):
    name = "TAN"

    def __init__(self, alpha=1, starting_node=0):
        self.starting_node = starting_node
        self.alpha = alpha

    def To_CAT(self, X_i):
        """Convert X_i, e.g. ['a','b','a'] or ['0','1','0'], to [0, 1, 0] for the CMI computation.
        :param X_i: one feature column.
        :return: list of ints
        """
        X_i_list = list(set(X_i))
        X_i_dict = dict(zip(X_i_list, range(len(X_i_list))))
        return [X_i_dict[ele] for ele in X_i]

    def get_mutual_inf(self, X, Y):
        """Get the conditional mutual information of all pairs of features (part of training).
        :return: np.array matrix and the time spent.
        """
        t = time.process_time()
        n, p = X.shape
        M = np.zeros((p, p))
        Y = self.To_CAT(Y)
        for i in range(p):
            X_i = self.To_CAT(X[:, i])
            for j in range(p):
                X_j = self.To_CAT(X[:, j])
                M[i, j] = drv.information_mutual_conditional(X_i, X_j, Y)
        mutual_inf_time = time.process_time() - t
        return M, mutual_inf_time

    def Findparent(self, X, Y):
        """Build a maximum-weight spanning tree (Prim's algorithm) over the CMI matrix."""
        M, mutual_inf_time = self.get_mutual_inf(X, Y)
        t = time.process_time()
        np.fill_diagonal(M, 0)
        p = int(M.shape[0])
        V = range(p)  # set of all nodes
        st = self.starting_node
        Vnew = [st]  # vertices that already have a parent; initialised with the starting node (TAN picks one arbitrarily)
        parent = {st: None}  # dict encoding the nodes' interdependency
        while set(Vnew) != set(V):  # while there are still nodes whose parent is unknown
            index_i = []  # after the loop, same length as Vnew: for each member of Vnew, the closest node not yet in Vnew
            max_inf = []  # the corresponding CMI values
            for i in range(len(Vnew)):  # could be parallelised
                vnew = Vnew[i]
                ListToSorted = [int(e) for e in M[:, vnew]]
                index = sorted(range(len(ListToSorted)), key=lambda k: ListToSorted[k], reverse=True)
                index_i.append([ele for ele in index if ele not in Vnew][0])
                max_inf.append(M[index_i[-1], vnew])
            # position of the best candidate within Vnew / index_i / max_inf
            index1 = sorted(range(len(max_inf)), key=lambda k: max_inf[k], reverse=True)[0]
            Vnew.append(index_i[index1])  # add that node
            # the newly added node must be the child; otherwise some node would end up with two parents
            parent[index_i[index1]] = Vnew[index1]
        prim_time = time.process_time() - t
        return parent, mutual_inf_time, prim_time

    def fit(self, X, y, M=None):  # based on the training data
        X, y = check_X_y(X, y)
        parent, mutual_inf_time, prim_time = self.Findparent(X, y)
        t = time.process_time()
        countDict = Counter(y)
        C = list(countDict.keys())  # [class1, class2, class3] in order of appearance
        n, p = X.shape
        # Laplace-smoothed class priors; .values() follows the order of .keys()
        P_class = [(ele + self.alpha) / (n + self.alpha * len(C)) for ele in list(countDict.values())]
        P_class = dict(zip(C, P_class))  # {c1: p1, c2: p2, c3: p3}
        Dict_C = {}  # {c1: {feature index: counters}, c2: {...}, ...}
        K = {}
        root_i = self.starting_node  # a position (0, 1, 2, ...), hence an int
        x_i = X[:, root_i]
        K[root_i] = len(Counter(x_i))
        for c in C:  # c is the original class label, e.g. '1', not 1
            ListCounter_c = {}
            row_inx_c = [row for row in range(n) if y[row] == c]
            x_i_c = X[row_inx_c, root_i]
            # ListCounter_c keys are positions (0, 1, 2, ...), hence ints; Counter keys are the
            # original feature values, hence not necessarily ints
            ListCounter_c[root_i] = Counter(x_i_c)
            for i in [e for e in range(0, p) if e != root_i]:
                if c == C[0]:
                    x_i = X[:, i]
                    K[i] = len(Counter(x_i))
                x_parent = X[:, parent[i]]
                x_parent_counter = Counter(x_parent)
                x_parent_counter_length = len(x_parent_counter)
                x_parent_value = list(x_parent_counter.keys())
                dict_i_c = {}
                for j in range(x_parent_counter_length):
                    row_inx_c_parent_j = [row for row in range(n) if y[row] == c and x_parent[row] == x_parent_value[j]]
                    x_i_c_p_j = X[row_inx_c_parent_j, i]
                    dict_i_c[x_parent_value[j]] = Counter(x_i_c_p_j)  # x_parent_value[j] guarantees the right key
                ListCounter_c[i] = dict_i_c
            Dict_C[c] = ListCounter_c
        CP_time = time.process_time() - t
        self.is_fitted_ = True
        self.Dict_C_, self.p_, self.P_class_prior_, self.K_, self.classes_, self.countDict_, self.parent_ = \
            Dict_C, p, P_class, K, np.array(C), countDict, parent
        self.training_time_ = np.array([mutual_inf_time, prim_time, CP_time])
        return self

    def predict_proba(self, X):
        check_is_fitted(self)
        X = check_array(X)
        Prob_C = []
        root_i = self.starting_node
        for ins in X:
            P_class = self.P_class_prior_.copy()
            for c in self.classes_:
                ListCounter_c = self.Dict_C_[c]
                P_class[c] = P_class[c] * (ListCounter_c[root_i][ins[root_i]] + self.alpha) / (self.countDict_[c] + self.alpha * self.K_[root_i])
                for i in [e for e in range(0, self.p_) if e != root_i]:
                    pValue = ins[self.parent_[i]]
                    try:  # the parent value was seen during training
                        Deno = sum(list(ListCounter_c[i][pValue].values()))  # count of class == c and x_parent == pValue
                        P_class[c] = P_class[c] * (ListCounter_c[i][pValue][ins[i]] + self.alpha) / (Deno + self.alpha * self.K_[i])
                    except KeyError:  # the parent value never appeared during training
                        Deno = 0
                        P_class[c] = P_class[c] * (0 + self.alpha) / (Deno + self.alpha * self.K_[i])
            # normalize P_class
            P_class = {key: P_class[key] / sum(list(P_class.values())) for key in P_class.keys()}
            Prob_C.append(list(P_class.values()))  # class order follows self.classes_
        return np.array(Prob_C)  # ndarray, as required by shap
tan = TAN()
tan.get_params()
tan.fit(X,y)
#tan.fit(X,y)
print(tan.predict_proba(X))
tan.score(X,y)
```
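The `Findparent` method above is Prim's algorithm for a maximum-weight spanning tree over the conditional-mutual-information matrix, expressed through sorted index lists. A simplified, self-contained sketch of the same idea (not the exact class code — for instance it skips the `int` cast the class applies when ranking candidates):

```python
import numpy as np

def find_parent(M, start=0):
    """Maximum-weight spanning tree over a symmetric weight matrix (Prim's algorithm),
    returned as a {child: parent} dict — the same structure TAN.Findparent builds."""
    M = M.astype(float).copy()
    np.fill_diagonal(M, 0)
    p = M.shape[0]
    in_tree = [start]
    parent = {start: None}
    while len(in_tree) < p:
        best = None  # (weight, child, parent) of the heaviest edge leaving the tree
        for v in in_tree:
            for u in range(p):
                if u not in in_tree and (best is None or M[u, v] > best[0]):
                    best = (M[u, v], u, v)
        parent[best[1]] = best[2]  # the newly added node becomes the child
        in_tree.append(best[1])
    return parent

W = np.array([[0, 3, 1],
              [3, 0, 2],
              [1, 2, 0]])
print(find_parent(W))  # {0: None, 1: 0, 2: 1}
```

Directing every new edge from the tree toward the newly added node is what guarantees each feature ends up with exactly one parent, as the comment in `Findparent` notes.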
## STAN
```
class STAN(Bayes_net):
    name = "STAN"

    def __init__(self, alpha=1, starting_node=0):
        self.starting_node = starting_node
        self.alpha = alpha

    def Findparent(self, M):
        """Build a maximum-weight spanning tree (Prim's algorithm) over the supplied matrix M."""
        M = M.copy()
        np.fill_diagonal(M, 0)
        p = int(M.shape[0])
        V = range(p)  # set of all nodes
        st = self.starting_node
        Vnew = [st]  # vertices that already have a parent; initialised with the starting node
        parent = {st: None}  # dict encoding the nodes' interdependency
        while set(Vnew) != set(V):  # while there are still nodes whose parent is unknown
            index_i = []  # for each member of Vnew, the closest node not yet in Vnew
            max_inf = []  # the corresponding weights
            for i in range(len(Vnew)):  # could be parallelised
                vnew = Vnew[i]
                ListToSorted = [int(e) for e in M[:, vnew]]
                index = sorted(range(len(ListToSorted)), key=lambda k: ListToSorted[k], reverse=True)
                index_i.append([ele for ele in index if ele not in Vnew][0])
                max_inf.append(M[index_i[-1], vnew])
            # position of the best candidate within Vnew / index_i / max_inf
            index1 = sorted(range(len(max_inf)), key=lambda k: max_inf[k], reverse=True)[0]
            Vnew.append(index_i[index1])  # add that node
            # the newly added node must be the child; otherwise some node would end up with two parents
            parent[index_i[index1]] = Vnew[index1]
        return parent

    def fit(self, X, y, M):  # based on the training data
        X, y = check_X_y(X, y)
        parent = self.Findparent(M)
        t = time.process_time()
        countDict = Counter(y)
        C = list(countDict.keys())  # [class1, class2, class3] in order of appearance
        n, p = X.shape
        # Laplace-smoothed class priors; .values() follows the order of .keys()
        P_class = [(ele + self.alpha) / (n + self.alpha * len(C)) for ele in list(countDict.values())]
        P_class = dict(zip(C, P_class))
        Dict_C = {}  # {c1: {feature index: counters}, c2: {...}, ...}
        K = {}
        root_i = self.starting_node  # a position (0, 1, 2, ...), hence an int
        x_i = X[:, root_i]
        K[root_i] = len(Counter(x_i))
        for c in C:  # c is the original class label, e.g. '1', not 1
            ListCounter_c = {}
            row_inx_c = [row for row in range(n) if y[row] == c]
            x_i_c = X[row_inx_c, root_i]
            # ListCounter_c keys are positions (ints); Counter keys are the original
            # feature values, hence not necessarily ints
            ListCounter_c[root_i] = Counter(x_i_c)
            for i in [e for e in range(0, p) if e != root_i]:
                if c == C[0]:
                    x_i = X[:, i]
                    K[i] = len(Counter(x_i))
                x_parent = X[:, parent[i]]
                x_parent_counter = Counter(x_parent)
                x_parent_counter_length = len(x_parent_counter)
                x_parent_value = list(x_parent_counter.keys())
                dict_i_c = {}
                for j in range(x_parent_counter_length):
                    row_inx_c_parent_j = [row for row in range(n) if y[row] == c and x_parent[row] == x_parent_value[j]]
                    x_i_c_p_j = X[row_inx_c_parent_j, i]
                    dict_i_c[x_parent_value[j]] = Counter(x_i_c_p_j)  # x_parent_value[j] guarantees the right key
                ListCounter_c[i] = dict_i_c
            Dict_C[c] = ListCounter_c
        CP_time = np.array(time.process_time() - t)
        self.is_fitted_ = True
        self.Dict_C_, self.p_, self.P_class_prior_, self.K_, self.classes_, self.countDict_, self.parent_ = \
            Dict_C, p, P_class, K, np.array(C), countDict, parent
        self.training_time_ = CP_time
        return self

    def predict_proba(self, X):
        check_is_fitted(self)
        X = check_array(X)
        Prob_C = []
        root_i = self.starting_node
        for ins in X:
            P_class = self.P_class_prior_.copy()
            for c in self.classes_:
                ListCounter_c = self.Dict_C_[c]
                P_class[c] = P_class[c] * (ListCounter_c[root_i][ins[root_i]] + self.alpha) / (self.countDict_[c] + self.alpha * self.K_[root_i])
                for i in [e for e in range(0, self.p_) if e != root_i]:
                    pValue = ins[self.parent_[i]]
                    try:  # the parent value was seen during training
                        Deno = sum(list(ListCounter_c[i][pValue].values()))  # count of class == c and x_parent == pValue
                        P_class[c] = P_class[c] * (ListCounter_c[i][pValue][ins[i]] + self.alpha) / (Deno + self.alpha * self.K_[i])
                    except KeyError:  # the parent value never appeared during training
                        Deno = 0
                        P_class[c] = P_class[c] * (0 + self.alpha) / (Deno + self.alpha * self.K_[i])
            # normalize P_class
            P_class = {key: P_class[key] / sum(list(P_class.values())) for key in P_class.keys()}
            Prob_C.append(list(P_class.values()))  # class order follows self.classes_
        return np.array(Prob_C)  # ndarray, as required by shap
stan = STAN()
stan.get_params()
stan.fit(X,y,M)
print(stan.predict_proba(X))
print(stan.name)
stan.score(X,y)
from sklearn.utils.estimator_checks import check_estimator
#check_estimator(NB)
```
## TAN_bagging
```
class TAN_bagging(Bayes_net):
    name = "TAN_bagging"

    def __init__(self, alpha=1):
        self.alpha = alpha

    def fit(self, X, y, M=None):
        """Fit one TAN per possible starting node and average the training time."""
        X, y = check_X_y(X, y)
        n, p = X.shape  # p is the number of features
        training_time = 0
        models = []
        for i in range(p):
            model = TAN(self.alpha, starting_node=i)
            model.fit(X, y)
            models.append(model)
            training_time += model.training_time_
        self.models_, self.p_ = models, p
        # fitting could be parallelised, hence the average training time for this bagging
        self.training_time_ = training_time / p
        self.is_fitted_ = True
        self.classes_ = model.classes_
        return self

    def predict_proba(self, X):
        check_is_fitted(self)
        X = check_array(X)
        Prob_C = 0
        for model in self.models_:
            Prob_C += model.predict_proba(X)  # np.array
        return Prob_C / self.p_
tan_bag = TAN_bagging()
print(tan_bag.name)
tan_bag.fit(X,y)
tan_bag.predict_proba(X)
```
## STAN bagging
```
class STAN_bagging(Bayes_net):
    name = "STAN_bagging"

    def __init__(self, alpha=1):
        self.alpha = alpha

    def fit(self, X, y, M):
        X, y = check_X_y(X, y)
        n, p = X.shape
        training_time = 0
        models = []
        for i in range(p):
            model = STAN(self.alpha, starting_node=i)
            model.fit(X, y, M)
            models.append(model)
            training_time += model.training_time_
        self.models_, self.p_ = models, p
        # fitting could be parallelised, hence the average training time for this bagging
        self.training_time_ = training_time / p
        self.is_fitted_ = True
        self.classes_ = model.classes_
        return self

    def predict_proba(self, X):
        check_is_fitted(self)
        X = check_array(X)
        Prob_C = 0
        for model in self.models_:
            Prob_C += model.predict_proba(X)  # np.array
        return Prob_C / self.p_
stan_bag = STAN_bagging()
stan_bag.fit(X,y,M)
stan_bag.predict_proba(X)
```
## Ensemble TAN (STAN_TAN_bagging)
```
class STAN_TAN_bagging(Bayes_net):
    name = "STAN_TAN_bagging"

    def __init__(self, alpha=1):
        self.alpha = alpha

    def fit(self, X, y, M):
        X, y = check_X_y(X, y)
        n, p = X.shape
        training_time = 0
        models = []
        # train p TAN base models, one per starting node
        for i in range(p):
            model = TAN(self.alpha, starting_node=i)
            model.fit(X, y)
            models.append(model)
            training_time += model.training_time_
        # append a STAN; its starting node has little effect, so node 0 is used
        model = STAN(self.alpha, starting_node=0)
        model.fit(X, y, M)
        models.append(model)
        self.models_, self.p_ = models, p
        # with parallel fitting, report the average over the p TAN models;
        # the single STAN fit is ignored since it is cheaper
        self.training_time_ = training_time / p
        self.is_fitted_ = True
        self.classes_ = model.classes_
        return self

    def predict_proba(self, X):
        check_is_fitted(self)
        X = check_array(X)
        Prob_C = 0
        for model in self.models_:
            Prob_C += model.predict_proba(X)  # np.array
        return Prob_C / (self.p_ + 1)
stan_tan_bag = STAN_TAN_bagging()
stan_tan_bag.fit(X,y,M)
stan_tan_bag.predict_proba(X)
```
# Cross validation
```
import warnings
warnings.filterwarnings("ignore")
def get_cv(cls, X, Y, M, n_splits=10, cv_type="KFold", verbose=True):
    """Cross-validation returning CLL, accuracy, training time, precision and recall."""
    if cv_type == "StratifiedKFold":
        # folds preserve the percentage of samples of each class
        cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    else:
        cv = KFold(n_splits=n_splits, shuffle=True, random_state=42)
    model = cls()
    X, Y = check_X_y(X, Y)
    binarizer = MultiLabelBinarizer()  # needed for the recall and precision scores
    binarizer.fit(Y)
    Accuracy, Precision, Recall, CLL, training_time = [], [], [], [], []
    for folder, (train_index, val_index) in enumerate(cv.split(X, Y)):
        X_train, X_val = X[train_index], X[val_index]
        y_train, y_val = Y[train_index], Y[val_index]
        model.fit(X_train, y_train, M)  # list or array does not matter; only the labels must match
        training_time.append(model.training_time_)
        y_pred_prob = model.predict_proba(X_val)
        y_pred_class = model.predict(X_val)
        accuracy = accuracy_score(y_val, y_pred_class)
        precision = precision_score(binarizer.transform(y_val),
                                    binarizer.transform(y_pred_class),
                                    average='macro')
        recall = recall_score(binarizer.transform(y_val),
                              binarizer.transform(y_pred_class),
                              average='macro')
        cll = model.Conditional_log_likelihood_general(y_val, y_pred_prob, model.classes_)
        if verbose:
            print("accuracy in fold %s is %s" % (folder + 1, accuracy))
            print("CLL in fold %s is %s" % (folder + 1, cll))
            print("precision in fold %s is %s" % (folder + 1, precision))
            print("recall in fold %s is %s" % (folder + 1, recall))
            print("training time in fold %s is %s" % (folder + 1, training_time[-1]))
            print(10 * '__')
        CLL.append(cll)
        Accuracy.append(accuracy)
        Recall.append(recall)
        Precision.append(precision)
    return Accuracy, CLL, training_time, Precision, Recall
# Run the same 10-fold CV report for every model
for cls in (NB, TAN, STAN, TAN_bagging, STAN_bagging, STAN_TAN_bagging):
    Accuracy, CLL, training_time, Precision, Recall = get_cv(cls, X2, y2, M2)
    print(np.mean(Accuracy))
    print(np.mean(CLL))
    print(np.mean(Precision))
    print(np.mean(Recall))
    print(np.mean(np.array(training_time)))
```
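As a side note on the scoring inside `get_cv()`: the labels are passed through `MultiLabelBinarizer` before `precision_score`/`recall_score`. With single-character labels such as `'0'`/`'1'`, each label is treated as a one-element set, which amounts to ordinary one-hot encoding. A small sketch with synthetic labels (not data from this notebook):

```python
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import precision_score

y_val = ['1', '0', '1', '1']  # synthetic "truth"
y_hat = ['1', '0', '0', '1']  # synthetic predictions
binarizer = MultiLabelBinarizer().fit(y_val)  # classes_ becomes ['0', '1']
print(binarizer.transform(y_val).tolist())  # [[0, 1], [1, 0], [0, 1], [0, 1]]
# macro precision: 0.5 for class '0' (one false positive), 1.0 for class '1' -> 0.75
print(precision_score(binarizer.transform(y_val),
                      binarizer.transform(y_hat),
                      average='macro'))  # 0.75
```

This only works because every label is a single character; multi-character labels would be split into their characters, so labelled data of that shape would need a different encoder (e.g. `LabelBinarizer`).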
# plot the Bayesian network
```
tan0 = TAN(starting_node=0)
tan0.fit(X,y)
tan0.plot_tree_structure()
tan1 = TAN(starting_node=1)
tan1.fit(X,y)
tan1.plot_tree_structure()
tan4 = TAN(starting_node=4)
tan4.fit(X,y)
tan4.plot_tree_structure()
tan7 = TAN(starting_node=7)
tan7.fit(X,y)
tan7.plot_tree_structure()
stan0 = STAN(starting_node = 0)
stan0.fit(X,y,M)
stan0.plot_tree_structure()
stan1 = STAN(starting_node = 1)
stan1.fit(X,y,M)
stan1.plot_tree_structure()
stan4 = STAN(starting_node = 4)
stan4.fit(X,y,M)
stan4.plot_tree_structure()
```
| github_jupyter |
# Connect 4 on a SenseHat
---
## Introduction
### Game Rules
Connect 4, also known as Four in a Row (Puissance 4 in French), is played on a grid of 6 rows and 7 columns. Players take turns inserting a colored token into the top row; the token then falls to the lowest available slot in its column. Each player tries to line up four tokens of their color horizontally, vertically, or diagonally.
If every cell is filled without a winner, the game is declared a draw.
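The rules above can be sketched in a few lines. This toy model uses a plain Python grid with hypothetical `drop`/`wins` helpers, unlike the SenseHat code below, which stores the game state directly in the display pixels:

```python
# Minimal, illustrative model of the rules (not part of the SenseHat game code)
ROWS, COLS = 6, 7
board = [[0] * COLS for _ in range(ROWS)]  # 0 = empty, 1/2 = player tokens

def drop(col, player):
    """Drop a token in `col`; it lands in the lowest empty row. Returns the row, or None if full."""
    for row in range(ROWS - 1, -1, -1):
        if board[row][col] == 0:
            board[row][col] = player
            return row
    return None  # column full

def wins(row, col, player):
    """Check the four directions through (row, col) for four tokens in a row."""
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        count = 1
        for sign in (1, -1):  # walk both ways along the direction
            r, c = row + sign * dr, col + sign * dc
            while 0 <= r < ROWS and 0 <= c < COLS and board[r][c] == player:
                count += 1
                r, c = r + sign * dr, c + sign * dc
        if count >= 4:
            return True
    return False

for col in (0, 1, 0, 2, 0, 3, 0):  # player 1 stacks column 0, player 2 plays elsewhere
    player = 1 if col == 0 else 2
    row = drop(col, player)
print(wins(row, 0, 1))  # True: four vertical tokens in column 0
```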
### Setup on the SenseHat
Since the SenseHat display is an 8\*8 pixel matrix, this surface is used as follows:
- A **playing field** of 6*7 blue pixels
- A selection row with a **cursor** in the color of the player whose turn it is

## Installation
### 1. Import SenseHat & other modules
The first step in programming this game is importing the sense_hat module in order to communicate with the SenseHat.
```
from sense_hat import SenseHat
#from sense_emu import SenseHat
from time import sleep, time
from gamelib import *
sense = SenseHat()
```
```from sense_hat import SenseHat``` enables interaction with the SenseHat module. <br/>
```#from sense_emu import SenseHat``` switches to the SenseHat emulator if the line is uncommented <br/>
```from time import sleep, time``` provides the sleep(time) function used to slow the program down <br/>
```from gamelib import *``` imports the colors from ```gamelib``` <br/>
<br/>
```sense = SenseHat()``` gives access to the functions tied to the SenseHat.
### 2. Define and initialise the global variables
These variables are crucial to the proper working of the game.
```
repeat = 1 # Repeats the program if launched as standalone
playerScore = [0, 0] # Score of the players
turns = 0 # Amount of turns passed
gameOver = 0 # Is the game over?
stopGame = 0 # =1 makes main() stop the game
# Creates two lists of 4 pixels to make winning streaks detection easier
fourYellow = [[248, 252, 0]] * 4
fourRed = [[248, 0, 0]] * 4
# Puts BLUE, RED and YELLOW from gamelib into a list
colors = (BLUE, RED, YELLOW)
```
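The `fourYellow` and `fourRed` templates hint at how winning streaks can be detected: slide a four-pixel window along a line of pixels and compare it with the template. A small self-contained illustration (the `has_streak` helper and the sample `row` are hypothetical, not part of the game code):

```python
# Illustration of template-based streak detection (hypothetical helper, not the game's code)
fourYellow = [[248, 252, 0]] * 4  # same template as above
BLUE, YELLOW = [0, 0, 248], [248, 252, 0]
row = [BLUE, YELLOW, YELLOW, YELLOW, YELLOW, BLUE, BLUE]  # one row of playing-field pixels

def has_streak(line, template):
    """Slide a 4-wide window over a line of pixels and compare it with the template."""
    return any(line[i:i + 4] == template for i in range(len(line) - 3))

print(has_streak(row, fourYellow))  # True: pixels 1..4 are yellow
```

The same comparison can be applied to columns and diagonals read out of the display.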
### 3. The ```main()``` function
The ```main()``` function is the game's entry point: it starts the game, keeps it running, and stops it when needed.
```
def main():
    """
    Main function, initialises the game, starts it, and stops it when needed.
    """
    global gameOver
    global playerScore
    global stopGame
    global turns
    turns = 0  # Resets the turns passed
    # Stops the game if a player has 2 points or if stop_game() set
    # stopGame to 1 and the game is supposed to stop now
    if (
        repeat == 0 and
        (playerScore[0] == 2 or playerScore[1] == 2 or stopGame == 1)
    ):
        stopGame = 0  # Resets stopGame
        gameOver = 0  # Resets gameOver
        return
    # If the game should continue, resets gameOver and playerScore to 0
    else:
        gameOver = 0  # Resets gameOver
        if playerScore[0] == 2 or playerScore[1] == 2 or stopGame == 1:
            stopGame = 0  # Resets stopGame
            playerScore = [0, 0]  # Resets the playerScore
        show()  # Resets the display for a new game
        turn()  # Starts a new turn
```
The code fragment <br/>
```
if (
    repeat == 0 and
    (playerScore[0] == 2 or playerScore[1] == 2 or stopGame == 1)
):
```
is indented this way to follow the PEP8 standard while keeping every line under 79 characters.
The ```main()``` function calls ```show()``` and ```turn()```, described in sections 4 and 5 below.
### 4. The ```show()``` function
The ```show()``` function resets the display, then draws the blue 6\*7 playing field on it.
```
def show():
    """
    Sets up the playing field : 6*7 blue pixels
    """
    sense.clear()  # Resets the pixels
    # Creates the 6*7 blue playing field
    for y in range(6):
        for x in range(7):
            sense.set_pixel(x, 7 - y, colors[0])
```
### 5. The ```turn()``` function
The ```turn()``` function manages the turns, calls ```select_column(p)``` so that player `p` can choose where to place their token, and declares a draw once every cell is full (42 turns played).
```
def turn():
    """
    Decides whose turn it is, then calls select_column(p) to allow the player p
    to make their selection
    """
    global turns
    if gameOver == 0:  # Checks that the game isn't over
        if turns % 2 == 0 and turns != 42:  # If the turn is even it's p1's
            turns += 1  # Increments turns
            select_column(1)  # Asks p1 to select a column for their token
        elif turns % 2 == 1 and turns != 42:  # If the turn is odd, it's p2's
            turns += 1  # Increments turns
            select_column(2)  # Asks p2 to select a column for their token
        elif turns == 42:  # If 42 turns have passed..
            player_scored(0)  # ..then it's a draw
```
### 6. The ```player_scored(p)``` function
The ```player_scored(p)``` function is called when a player ```p``` scores a point, or when there is a draw (p is then 0). <br/>
When a player scores their first point, their score is shown in their color on the display before the game restarts. <br/>
When a player scores their second point, their score is shown in their color, then the whole screen fills with that color, before the game and the scores are reset. If the game was launched as a module, control returns to the game selection; otherwise the game starts over.
```
def player_scored(p):
    """
    Manages the scoring system.
    p in player_scored(p) is the player who just scored.
    p == 0 -> draw
    p == 1 -> p1 scored
    p == 2 -> p2 scored
    If one of the players won the round, show their score in their color and
    prepare the field for the next round. If one of the players has two points,
    they win the game, the screen turns to their color and the game is reset.
    If it's a draw, no points are given and the field gets prepared for the
    next round.
    """
    global gameOver
    gameOver = 1  # The game has ended
    global playerScore
    if p != 0:  # Checks if it's a draw
        playerScore[p - 1] += 1  # Increments the winner's score
        sense.show_letter(str(playerScore[p - 1]), colors[p])  # Shows score
        # Ends the game if the player already had a point
        if playerScore[0] == 2 or playerScore[1] == 2 or stopGame == 1:
            sleep(1.5)  # Pauses long enough to see the score
            sense.clear(colors[p])  # Turns the screen into the winner's color
    sleep(1.5)  # Pauses long enough to see the winner's screen
    sense.clear()  # Clears the display
    main()  # Calls the main game function
```
### 7. The ```select_column(p)``` function
The ```select_column(p)``` function lets player ```p``` choose the column in which to drop their token by moving the joystick left or right. For convenience, the selection starts in the middle. <br/>
<br/>
```x = (x + 1) % 7``` ensures that `x` stays inside the 7-pixel-wide playing field.<br/>
Once the choice is made and the player has pushed the joystick down, ```put_down(x, p)``` is called with ```x``` as the chosen column. That function checks that the slot is free and, if it is not, calls ```select_column(p)``` again so that the player does not waste their turn.
```
def select_column(p):
    """
    Asks the player to select a column with the joystick, then calls for the
    function to drop the token if it is clear.
    p is the player whose turn it is.
    If the joystick is moved upwards, the game is ended.
    The function calls put_down(x,p) in order to drop the token down.
    If it turns out the column is full,
    put_down(x,p) will call select_column(p) back.
    show_selection(x,p) is used to show the current selection.
    Returns the selected column with x.
    """
    x = 3  # Starts the selection in the middle of the playing field
    selection = True  # Is the player selecting?
    while selection:
        for event in sense.stick.get_events():  # Listens for joystick events
            if event.action == 'pressed':  # When the joystick is moved..
                if event.direction == 'right':  # ..to the right..
                    x = (x + 1) % 7  # ..then move the cursor to the right
                elif event.direction == 'left':  # ..to the left..
                    x = (x - 1) % 7  # ..then move the cursor to the left
                elif event.direction == 'down':  # Pressing down confirms
                    selection = False  # Ends selection
                    put_down(x, p)  # Calls the function that drops the token
                elif event.direction == 'up':  # Pressing up..
                    global stopGame
                    stopGame = 1  # ..will make main() end the game..
                    player_scored(0)  # ..and causes a draw
        show_selection(x, p)  # Calls the function that shows the selection
    return x  # Returns which column was selected
```
If the player pushes up, `stopGame` is set to `1`, which makes the game stop at the next check in `main()`, right after `player_scored(0)` has been called. <br/>
<br/>
The function returns `x`, i.e. the coordinate of the chosen column, and calls ```show_selection(x, p)``` so that the player's cursor is displayed correctly during the selection.
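The wrap-around of the cursor can be checked in isolation. A minimal sketch of the `% 7` arithmetic used above (the `move_cursor` helper is hypothetical, not part of the game code):

```python
def move_cursor(x, direction, width=7):
    """Move a selection cursor left or right, wrapping around the edges."""
    step = 1 if direction == 'right' else -1
    return (x + step) % width  # modulo keeps the cursor inside [0, width)

# Moving right from the last column wraps back to column 0,
# and moving left from column 0 wraps to the last column.
print(move_cursor(6, 'right'))  # 0
print(move_cursor(0, 'left'))   # 6
```

This is why the cursor never leaves the board, no matter how many times the joystick is pushed in one direction.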
### 8. Function ```show_selection(x, p)```
The ```show_selection(x, p)``` function displays the position of player `p`'s cursor in the appropriate color, and restores the pixels to their original color after the cursor has passed.
```
def show_selection(x, p):
    """
    Shows the cursor for the column selection.
    x is the currently selected column
    p is the player playing
    Ensures that the replacement to black stops when the game is over in order
    to prevent conflict with the score display.
    """
    for i in range(7):
        if i == x and gameOver == 0:  # Checks that i is in the playing field
            # Colors the selection with the player p's color
            sense.set_pixel(i, 0, colors[p])
        elif gameOver == 0:
            # Resets the pixels once the cursor has moved
            sense.set_pixel(i, 0, (0, 0, 0))
```
When the game is no longer running (```gameOver != 0```), the function does nothing at all, so that it cannot interfere with, for example, the display of the results.
### 9. Function ```put_down(x, p)```
The ```put_down(x, p)``` function checks that the column `x` chosen by the player has free space, finds the lowest free spot, calls ```animate_down(x, y, p)``` to animate the fall, and then displays the player's token there.<br/>
If the column is full, ```put_down(x, p)``` calls ```select_column(p)``` again so that the player does not waste their turn.<br/>
Once the token is placed, the function calls ```check_connectfour(x, y)``` to check whether the newly placed token creates a run of four. If there is no connection, the other player takes their turn via ```turn()```.
```
def put_down(x, p):
    """
    Puts the token down in the selected column.
    x is the selected column
    p is the player playing
    If the selected column is full, select_column(p) is called back to ensure
    the player doesn't waste their turn.
    The token is animated down with animate_down(x,y,p) before being set.
    If the token is not a winning one, calls for the next turn with turn().
    """
    # Checks that the column is free (BLUE)
    if sense.get_pixel(x, 2) == [0, 0, 248]:
        for y in range(7):  # Finds the lowest available spot
            if sense.get_pixel(x, 7 - y) == [0, 0, 248]:  # If it's free then..
                animate_down(x, y, p)  # ..calls for the animation down and..
                sense.set_pixel(x, 7 - y, colors[p])  # ..puts the token there
                # Checks if it's a winning move
                if check_connectfour(x, 7 - y) is False:
                    turn()  # If not, starts the next turn
                    return
                return
    else:
        select_column(p)  # If there is no free spot, restarts selection
        return
```
The ```sense.get_pixel(x, y)``` function does not return the exact value that was assigned to the pixel; the value passes through another operation first, which is why the blue checked for here (```[0, 0, 248]```) is not ```BLUE``` itself.
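The bottom-up scan that `put_down` performs can be illustrated without the LED matrix. A hedged sketch using a plain list for one column, indexed top-to-bottom like the display rows (`lowest_free_row` and the `None`-marks-free convention are illustrative assumptions, not the game's code):

```python
def lowest_free_row(column, free=None):
    """Return the index of the lowest free cell in a column.

    Scans from the bottom up, like put_down does with 7 - y.
    Returns None when the column is full.
    """
    for row in range(len(column) - 1, -1, -1):  # bottom row first
        if column[row] == free:
            return row
    return None

column = [None, None, 'red', 'yellow']  # two tokens stacked at the bottom
print(lowest_free_row(column))  # 1
```

A new token would therefore land at row 1, directly on top of the existing stack.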
### 10. Function ```animate_down(x, y, p)```
The ```animate_down(x, y, p)``` function makes a pixel in player `p`'s color appear and then disappear in each cell of column `x` down to row `y`, before restoring the pixels to their original color (black `[0, 0, 0]` or `BLUE`).
```
def animate_down(x, y, p):
    """
    Creates an animation that makes a pixel move down the selected column to
    the lowest available spot.
    x is the selected column
    y is the lowest available spot
    p is the player playing
    Ensures that the first two rows stay black, and that the others turn BLUE
    again after the animation.
    """
    # For each available spot from the top of the column
    for z in range(7 - y):
        sense.set_pixel(x, z, colors[p])  # Set the pixel to the player's color
        sleep(0.03)  # Wait long enough for it to be noticeable
        if z != 1 and z != 0:  # If it's not the first two rows
            sense.set_pixel(x, z, colors[0])  # Set the pixel back to BLUE
        else:  # Otherwise
            sense.set_pixel(x, 1, [0, 0, 0])  # Set it to black
```
### 11. Function ```check_connectfour(x, y)```
The ```check_connectfour(x, y)``` function runs a series of tests to check whether the token placed at `x, y` creates a run of 4 pixels horizontally, vertically, or diagonally.
```
def check_connectfour(x, y):
    """
    Checks if there is four same-colored token next to each other.
    x is the last played token's column
    y is the last played token's row
    Returns False if there is no winning move this turn. Return True and thus
    makes the game end if it was a winning move.
    """
    # First asks if there is a win horizontally and vertically
    if check_horizontal(x, y) is False and check_vertical(x, y) is False:
        # Then diagonally from the bottom left to the upper right
        if check_diagonal_downleft_upright(x, y) is False:
            # And then diagonally the other way
            if check_diagonal_downright_upleft(x, y) is False:
                # If not, then continue playing by returning False
                return False
```
The function first calls 1) ```check_horizontal(x, y)``` and 2) ```check_vertical(x, y)```, then checks both diagonals with 3) ```check_diagonal_downleft_upright(x, y)``` and 4) ```check_diagonal_downright_upleft(x, y)```. <br/>
<br/>
If the pixel completes no run, all of these conditions are `False`, which the function then returns, and it becomes the other player's turn.

#### 11.1 ```check_horizontal(x, y)```
The ```check_horizontal(x, y)``` function builds a list `horizontal` of all the pixels in the row `y` where the token was placed, then compares it against `fourYellow` and `fourRed` in groups of four pixels, four times in a row, so as to cover the entire row.<br/>
If one of the comparisons matches, the player `p` who placed the token is awarded a point through `player_scored(p)`, and the function returns `True`. Otherwise it returns `False`.
```
def check_horizontal(x, y):
    """
    Checks if there is four same-colored tokens in the same row.
    x is the last played token's column
    y is the last played token's row
    Returns False if there isn't four same-colored tokens on the same row.
    Returns True if there are, and calls player_scored(p) for the appropriate
    player based on color (RED == p1, YELLOW == p2)
    """
    # Makes a list out of the row
    horizontal = sense.get_pixels()[8 * y:8 * y + 7]
    for z in range(4):  # Checks the row by four groups of four tokens
        if horizontal[z:z + 4] == fourYellow:  # Is there four yellow tokens?
            player_scored(2)  # If yes, p2 scored
            return True  # Returns that there was a winning move
        if horizontal[z:z + 4] == fourRed:  # Is there four red tokens?
            player_scored(1)  # If yes, p1 scored
            return True  # Returns that there was a winning move.
    return False  # Returns that there were no winning move.
```
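The group-of-four comparison is a sliding window over the list. A standalone sketch of the same idea, using single-letter color names instead of the game's RGB triples (the `four_in_a_row` helper and its values are illustrative assumptions):

```python
def four_in_a_row(cells, color):
    """Check whether `cells` contains four consecutive entries equal to `color`."""
    target = [color] * 4
    # Slide a window of width 4 over the list, as check_horizontal does with z.
    return any(cells[z:z + 4] == target for z in range(len(cells) - 3))

row = ['B', 'R', 'R', 'R', 'R', 'Y', 'B']
print(four_in_a_row(row, 'R'))  # True
print(four_in_a_row(row, 'Y'))  # False
```

A 7-cell row has exactly four windows of width 4, which is why the loop runs `for z in range(4)`.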
#### 11.2 ```check_vertical(x, y)```
The ```check_vertical(x, y)``` function builds a list `vertical` of all the pixels in the column `x` where the token was placed, then compares it against `fourYellow` and `fourRed` in groups of four pixels, three times in a row, so as to cover the entire column.<br/>
If one of the comparisons matches, the player `p` who placed the token is awarded a point through `player_scored(p)`, and the function returns `True`. Otherwise it returns `False`.
```
def check_vertical(x, y):
    """
    Checks if there is four same-colored tokens in the same column.
    x is the last played token's column
    y is the last played token's row
    Returns False if there isn't four same-colored tokens in the column.
    Returns True if there are, and calls player_scored(p) for the appropriate
    player based on color (RED == p1, YELLOW == p2)
    """
    # Makes a list out of the column
    vertical = [sense.get_pixel(x, 2), sense.get_pixel(x, 3),
                sense.get_pixel(x, 4), sense.get_pixel(x, 5),
                sense.get_pixel(x, 6), sense.get_pixel(x, 7)]
    for z in range(3):  # Checks the column by three groups of four tokens
        if vertical[z:z + 4] == fourYellow:  # Is there four yellow tokens?
            player_scored(2)  # If yes, p2 scored
            return True  # Returns that there was a winning move
        if vertical[z:z + 4] == fourRed:  # Is there four red tokens?
            player_scored(1)  # If yes, p1 scored
            return True  # Returns that there was a winning move
    return False  # Returns that there were no winning move
```
#### 11.3 ```check_diagonal_downleft_upright(x, y)```
The ```check_diagonal_downleft_upright(x, y)``` function uses ```create_diagonal_downleft_upright(diagonal, x, y)``` to build a list `diagonal` of all the pixels on the diagonal running from the bottom left to the upper right through the point `x, y` where the token was placed, then compares it against `fourYellow` and `fourRed` in groups of four pixels, four times in a row, so as to cover the entire diagonal.<br/>
If one of the comparisons matches, the player `p` who placed the token is awarded a point through `player_scored(p)`, and the function returns `True`. Otherwise it returns `False`.
```
def check_diagonal_downleft_upright(x, y):
    """
    Checks if there is four same-colored token in the bottom-left to
    upper-right diagonal.
    x is the last played token's column
    y is the last played token's row
    Calls create_diagonal_downleft_upright to create a list from the diagonal.
    Returns False if there isn't four same-colored tokens in the diagonal.
    Returns True if there are, and calls player_scored(p) for the appropriate
    player based on color (RED == p1, YELLOW == p2)
    """
    diagonal = []  # Resets the list
    # Calls a function to create a list from the pixels in a bottom-left to
    # upper-right diagonal
    create_diagonal_downleft_upright(diagonal, x, y)
    for z in range(4):  # Checks the diagonal by four groups of four tokens
        if diagonal[z:z + 4] == fourYellow:  # Is there four yellow tokens?
            player_scored(2)  # If yes, p2 scored
            return True  # Returns that there was a winning move
        if diagonal[z:z + 4] == fourRed:  # Is there four red tokens?
            player_scored(1)  # If yes, p1 scored
            return True  # Returns that there was a winning move
    return False  # Returns that there were no winning move
```
##### 11.3.1 ```create_diagonal_downleft_upright(diagonal, x, y)```
Using `try` and `except`, the ```create_diagonal_downleft_upright(diagonal, x, y)``` function attempts to build a list of 7 pixels running diagonally through the point `x, y` from the bottom left to the upper right.<br/>
Using `try` and `except` keeps the program from crashing when the function tries to append an out-of-bounds pixel to the list. <br/><br/>
The function returns the list `diagonal`, however long it managed to make it.
```
def create_diagonal_downleft_upright(diagonal, x, y):
    """
    Creates a list of seven pixels in a bottom left to upper right diagonal
    centered around the last placed token.
    diagonal is the list
    x is the last played token's column
    y is the last played token's row
    As the function might try to take into account pixels that are out of
    bounds, there is a try/except ValueError in order to prevent out of bounds
    errors. The list might be shorter than seven pixels, but the function works
    anyway.
    Returns the list of diagonal pixels.
    """
    for z in range(7):  # To have a 7 pixel list
        # Tries to get values that might be out of bounds, three pixels down
        # left and three pixels up right in a diagonal from the token
        try:
            diagonal.append(sense.get_pixel(x - z + 3, y + z - 3))
        except ValueError:  # Catches out of bounds errors
            pass
    return diagonal  # Returns the list of pixels
```
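An explicit bounds check is an alternative to catching the exception. A sketch that builds the same bottom-left-to-upper-right coordinate list while filtering out-of-range points, assuming an 8×8 grid (`diagonal_coords` is a hypothetical helper, not the game's code):

```python
def diagonal_coords(x, y, size=8):
    """Coordinates of the bottom-left to upper-right diagonal through (x, y).

    Mirrors create_diagonal_downleft_upright: z runs over the same seven
    offsets, but out-of-bounds points are filtered instead of raising.
    """
    coords = []
    for z in range(7):
        cx, cy = x - z + 3, y + z - 3
        if 0 <= cx < size and 0 <= cy < size:
            coords.append((cx, cy))
    return coords

# A token near a corner yields a shorter diagonal, just like the
# try/except version yields a shorter list.
print(len(diagonal_coords(3, 3)))  # 7
print(len(diagonal_coords(0, 7)))  # 4
```

Both styles are valid; the exception-based version leans on `sense.get_pixel` rejecting bad coordinates, while this one never asks for them in the first place.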
#### 11.4 ```check_diagonal_downright_upleft(x, y)```
The ```check_diagonal_downright_upleft(x, y)``` function uses ```create_diagonal_downright_upleft(diagonal, x, y)``` to build a list `diagonal` of all the pixels on the diagonal running from the bottom right to the upper left through the point `x, y` where the token was placed, then compares it against `fourYellow` and `fourRed` in groups of four pixels, four times in a row, so as to cover the entire diagonal.<br/>
If one of the comparisons matches, the player `p` who placed the token is awarded a point through `player_scored(p)`, and the function returns `True`. Otherwise it returns `False`.
```
def check_diagonal_downright_upleft(x, y):
    """
    Checks if there is four same-colored token in the bottom-right to
    upper-left diagonal.
    x is the last played token's column
    y is the last played token's row
    Calls create_diagonal_downright_upleft to create a list from the diagonal.
    Returns False if there isn't four same-colored tokens in the diagonal.
    Returns True if there are, and calls player_scored(p) for the appropriate
    player based on color (RED == p1, YELLOW == p2)
    """
    diagonal = []  # Resets the list
    # Calls a function to create a list from the pixels in a bottom-right to
    # upper-left diagonal
    create_diagonal_downright_upleft(diagonal, x, y)
    for z in range(4):  # Checks the diagonal by four groups of four tokens
        if diagonal[z:z + 4] == fourYellow:  # Is there four yellow tokens?
            player_scored(2)  # If yes, p2 scored
            return True  # Returns that there was a winning move
        if diagonal[z:z + 4] == fourRed:  # Is there four red tokens?
            player_scored(1)  # If yes, p1 scored
            return True  # Returns that there was a winning move
    return False  # Returns that there were no winning move
```
##### 11.4.1 ```create_diagonal_downright_upleft(diagonal, x, y)```
Using `try` and `except`, the ```create_diagonal_downright_upleft(diagonal, x, y)``` function attempts to build a list of 7 pixels running diagonally through the point `x, y` from the bottom right to the upper left.<br/>
Using `try` and `except` keeps the program from crashing when the function tries to append an out-of-bounds pixel to the list.<br/>
<br/>
The function returns the list `diagonal`, however long it managed to make it.
```
def create_diagonal_downright_upleft(diagonal, x, y):
    """
    Creates a list of seven pixels in a bottom right to upper left diagonal
    centered around the last placed token.
    diagonal is the list
    x is the last played token's column
    y is the last played token's row
    As the function might try to take into account pixels that are out of
    bounds, there is a try/except ValueError in order to prevent out of bounds
    errors. The list might be shorter than seven pixels, but the function works
    anyway.
    Returns the list of diagonal pixels.
    """
    for z in range(7):  # To have a 7 pixel list
        # Tries to get values that might be out of bounds, three pixels down
        # right and three pixels up left in a diagonal from the token
        try:
            diagonal.append(sense.get_pixel(x - z + 3, y - z + 3))
        except ValueError:  # Catches out of bounds errors
            pass
    return diagonal  # Returns the list of pixels
```
### 12. Module or standalone?
This piece of code makes the game repeat when it is run standalone (`repeat = 1`), but not when it is imported as a module (`repeat = 0`), so that the player can return to the game selection menu.
```
# Execute the main() function when the file is executed,
# but do not execute when the module is imported as a module.
print('module name =', __name__)
if __name__ == '__main__':
    main()
    repeat = 1  # If the game is played as standalone, make it repeat
else:
    repeat = 0  # If the game is played as a module, make it quit when over
```
# Text classification with Reuters-21578 datasets
### See: https://kdd.ics.uci.edu/databases/reuters21578/README.txt for more information
```
%pylab inline
import re
import xml.sax.saxutils as saxutils
from BeautifulSoup import BeautifulSoup
from gensim.models.word2vec import Word2Vec
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, LSTM
from multiprocessing import cpu_count
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer, sent_tokenize
from nltk.stem import WordNetLemmatizer
from pandas import DataFrame
from sklearn.cross_validation import train_test_split
```
## General constants (modify them according to your environment)
```
# Set Numpy random seed
random.seed(1000)
# Newsline folder and format
data_folder = 'd:\\ml_data\\reuters\\'
sgml_number_of_files = 22
sgml_file_name_template = 'reut2-NNN.sgm'
# Category files
category_files = {
    'to_': ('Topics', 'all-topics-strings.lc.txt'),
    'pl_': ('Places', 'all-places-strings.lc.txt'),
    'pe_': ('People', 'all-people-strings.lc.txt'),
    'or_': ('Organizations', 'all-orgs-strings.lc.txt'),
    'ex_': ('Exchanges', 'all-exchanges-strings.lc.txt')
}
# Word2Vec number of features
num_features = 500
# Limit each newsline to a fixed number of words
document_max_num_words = 100
# Selected categories
selected_categories = ['pl_usa']
```
## Prepare documents and categories
```
# Create category dataframe
# Read all categories
category_data = []

for category_prefix in category_files.keys():
    with open(data_folder + category_files[category_prefix][1], 'r') as file:
        for category in file.readlines():
            category_data.append([category_prefix + category.strip().lower(),
                                  category_files[category_prefix][0],
                                  0])

# Create category dataframe
news_categories = DataFrame(data=category_data, columns=['Name', 'Type', 'Newslines'])

def update_frequencies(categories):
    for category in categories:
        idx = news_categories[news_categories.Name == category].index[0]
        f = news_categories.get_value(idx, 'Newslines')
        news_categories.set_value(idx, 'Newslines', f + 1)

def to_category_vector(categories, target_categories):
    vector = zeros(len(target_categories)).astype(float32)
    for i in range(len(target_categories)):
        if target_categories[i] in categories:
            vector[i] = 1.0
    return vector

# Parse SGML files
document_X = {}
document_Y = {}

def strip_tags(text):
    return re.sub('<[^<]+?>', '', text).strip()

def unescape(text):
    return saxutils.unescape(text)

# Iterate all files
for i in range(sgml_number_of_files):
    if i < 10:
        seq = '00' + str(i)
    else:
        seq = '0' + str(i)

    file_name = sgml_file_name_template.replace('NNN', seq)
    print('Reading file: %s' % file_name)

    with open(data_folder + file_name, 'r') as file:
        content = BeautifulSoup(file.read().lower())

        for newsline in content('reuters'):
            document_categories = []

            # News-line Id
            document_id = newsline['newid']

            # News-line text
            document_body = strip_tags(str(newsline('text')[0].body)).replace('reuter\n', '')
            document_body = unescape(document_body)

            # News-line categories
            topics = newsline.topics.contents
            places = newsline.places.contents
            people = newsline.people.contents
            orgs = newsline.orgs.contents
            exchanges = newsline.exchanges.contents

            for topic in topics:
                document_categories.append('to_' + strip_tags(str(topic)))
            for place in places:
                document_categories.append('pl_' + strip_tags(str(place)))
            for person in people:
                document_categories.append('pe_' + strip_tags(str(person)))
            for org in orgs:
                document_categories.append('or_' + strip_tags(str(org)))
            for exchange in exchanges:
                document_categories.append('ex_' + strip_tags(str(exchange)))

            # Create new document
            update_frequencies(document_categories)
            document_X[document_id] = document_body
            document_Y[document_id] = to_category_vector(document_categories, selected_categories)
```
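The manual zero-padding of `seq` above can be written more compactly with `str.zfill`. A small equivalence check (the `seq_manual` helper just restates the padding logic from the loop):

```python
def seq_manual(i):
    # The padding logic used in the parsing loop above.
    return '00' + str(i) if i < 10 else '0' + str(i)

# zfill pads a string with leading zeros to the requested width,
# producing the same file sequence numbers 000..021.
for i in range(22):
    assert seq_manual(i) == str(i).zfill(3)

print('reut2-NNN.sgm'.replace('NNN', str(7).zfill(3)))  # reut2-007.sgm
```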
## Top 20 categories (by number of newslines)
```
news_categories.sort_values(by='Newslines', ascending=False, inplace=True)
news_categories.head(20)
```
## Tokenize newsline documents
```
# Load stop-words
stop_words = set(stopwords.words('english'))
# Initialize tokenizer
# It's also possible to try with a stemmer or to mix a stemmer and a lemmatizer
tokenizer = RegexpTokenizer('[\'a-zA-Z]+')
# Initialize lemmatizer
lemmatizer = WordNetLemmatizer()
# Tokenized document collection
newsline_documents = []
def tokenize(document):
    words = []
    for sentence in sent_tokenize(document):
        tokens = [lemmatizer.lemmatize(t.lower()) for t in tokenizer.tokenize(sentence)
                  if t.lower() not in stop_words]
        words += tokens
    return words

# Tokenize
for key in document_X.keys():
    newsline_documents.append(tokenize(document_X[key]))
number_of_documents = len(document_X)
```
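To make the shape of the tokenizer's output concrete, here is a stdlib-only stand-in for the nltk pipeline above: it uses the same `['a-zA-Z]+` pattern and a stop-word filter, but omits sentence splitting and lemmatization. The tiny `stop_words` set is illustrative, not nltk's English list:

```python
import re

stop_words = {'the', 'a', 'of', 'and', 'to'}  # tiny illustrative list

def tokenize_simple(document):
    """Lowercase, split on the ['a-zA-Z]+ pattern, drop stop words."""
    tokens = re.findall(r"['a-zA-Z]+", document)
    return [t.lower() for t in tokens if t.lower() not in stop_words]

print(tokenize_simple("The price of gold rose to a record."))
# ['price', 'gold', 'rose', 'record']
```

Each newsline thus becomes a flat list of content words, which is exactly what Word2Vec consumes in the next section.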
## Word2Vec Model
### See: https://radimrehurek.com/gensim/models/word2vec.html and https://code.google.com/p/word2vec/ for more information
```
# Load an existing Word2Vec model
w2v_model = Word2Vec.load(data_folder + 'reuters.word2vec')
# Create new Gensim Word2Vec model
w2v_model = Word2Vec(newsline_documents, size=num_features, min_count=1, window=10, workers=cpu_count())
w2v_model.init_sims(replace=True)
w2v_model.save(data_folder + 'reuters.word2vec')
```
## Vectorize each document
```
num_categories = len(selected_categories)
X = zeros(shape=(number_of_documents, document_max_num_words, num_features)).astype(float32)
Y = zeros(shape=(number_of_documents, num_categories)).astype(float32)
empty_word = zeros(num_features).astype(float32)
for idx, document in enumerate(newsline_documents):
    for jdx, word in enumerate(document):
        if jdx == document_max_num_words:
            break
        else:
            if word in w2v_model:
                X[idx, jdx, :] = w2v_model[word]
            else:
                X[idx, jdx, :] = empty_word

for idx, key in enumerate(document_Y.keys()):
    Y[idx, :] = document_Y[key]
```
## Split training and test sets
```
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3)
```
## Create Keras model
```
model = Sequential()
model.add(LSTM(int(document_max_num_words*1.5), input_shape=(document_max_num_words, num_features)))
model.add(Dropout(0.3))
model.add(Dense(num_categories))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```
## Train and evaluate model
```
# Train model
model.fit(X_train, Y_train, batch_size=128, nb_epoch=5, validation_data=(X_test, Y_test))
# Evaluate model
score, acc = model.evaluate(X_test, Y_test, batch_size=128)
print('Score: %1.4f' % score)
print('Accuracy: %1.4f' % acc)
```
# Comparison of dgemm calibrations w.r.t. how the matrices are generated
In these experiments, we perform calls to dgemm with three square matrices of order 2,048.
We will look at dgemm performance and the node temperatures and frequencies for different matrix generation methods. Each element of the matrix is set to some random number between 0 and 1. Then, we apply a mask to set its N lowest order bits to 1.
```
import io
import os
import zipfile
import pandas
import yaml
import datetime
import re
from plotnine import *
import plotnine
plotnine.options.figure_size = (12, 8)
import warnings
warnings.simplefilter(action='ignore') # removing annoying warning
import json
import cashew
print(cashew.__git_version__)
from cashew import archive_extraction as ae
def read_csv(archive_name, csv_name, columns=None, filter_func=lambda x: x):
    archive = zipfile.ZipFile(archive_name)
    df = pandas.read_csv(io.BytesIO(filter_func(archive.read(csv_name))), names=columns)
    df.columns = df.columns.str.strip()
    df['jobid'] = int(get_yaml(archive_name, 'info.yaml')['jobid'])
    df['index'] = range(len(df))
    return df

def get_yaml(archive_name, yaml_name):
    archive = zipfile.ZipFile(archive_name)
    return yaml.load(io.BytesIO(archive.read(yaml_name)))
import numpy
directories = ['matrix_generation/pyxis/2/']
def get_monitoring(archive_name, csv_name, min_start=None, max_stop=None):
    df = read_csv(archive_name, csv_name)
    for col in ['start', 'stop']:
        df[col] = pandas.to_datetime(df[col])
    first = df['start'].min()
    df['start'] -= first
    df['stop'] -= first
    if min_start is not None:
        old_len = len(df)
        df = df[df['start'] >= pandas.to_timedelta(min_start, unit='s')]
        # print('Archive %s: removed %d entries that happened before time %.2f s' % (archive_name, old_len-len(df), min_start))
    if max_stop is not None:
        old_len = len(df)
        df = df[df['stop'] <= pandas.to_timedelta(max_stop, unit='s')]
        # print('Archive %s: removed %d entries that happened after time %.2f s' % (archive_name, old_len-len(df), max_stop))
    if min_start is not None:
        first = df['start'].min()
        df['start'] -= first
        df['stop'] -= first
    return df
def my_read_monitoring(archive_name, columns=None):
    '''
    Custom implementation of read_monitoring, to *not* read the temperature, as it is not available for Pyxis cluster.
    '''
    csv_name = 'monitoring.csv'
    df = ae.read_archive_csv_enhanced(archive_name, csv_name, columns=columns)
    df['timestamp'] = pandas.to_datetime(df['timestamp'])
    core_mapping = ae.platform_to_cpu_mapping(ae.get_platform(archive_name))
    columns = ['timestamp', 'cluster', 'node', 'jobid', 'start_time', 'expfile_hash']
    frequency = ae.my_melt(df, 'frequency_core_', columns)
    # removing the cores with unknown IDs (they are not real cores, just hyperthreads)
    frequency = frequency[frequency['group'].isin(core_mapping)]
    for frame, val in [(frequency, 'frequency')]:
        frame['value'] = frame[f'{val}_core_']
        frame.drop(f'{val}_core_', axis=1, inplace=True)
        frame['cpu'] = frame.apply(lambda row: core_mapping[row['group']], axis=1)
        frame['core'] = frame['group']
        frame.drop('group', axis=1, inplace=True)
        frame['kind'] = val
    frequency['value'] *= 1e-9  # Hz → GHz
    df = pandas.concat([frequency])
    info = ae.read_yaml(archive_name, 'info.yaml')
    timestamps = info['timestamp']
    for step in ['start', 'stop']:
        df[f'{step}_exp'] = pandas.to_datetime(timestamps['run_exp'][step]).timestamp()
    df['timestamp'] = df['timestamp'].astype(numpy.int64) / 10 ** 9
    return df
def read_archive(archive_name):
    df = read_csv(archive_name, 'result.csv')
    df['start'] = df['timestamp']
    df['end'] = df['start'] + df['duration']
    df['mnk'] = df['m'] * df['n'] * df['k']
    df['gflops'] = 2 * df['mnk'] / df['duration'] * 1e-9
    core_mapping = ae.platform_to_cpu_mapping(ae.get_platform(archive_name))
    df['cpu'] = df.apply(lambda row: core_mapping[row.core], axis=1)
    info = get_yaml(archive_name, 'info.yaml')
    installfile = info['installfile']
    matrix_init = get_yaml(archive_name, installfile)['matrix_initialization']
    try:
        mask_size = get_yaml(archive_name, installfile)['matrix_initialization_mask_size']
    except KeyError:
        mask_size = 0
    hosts = [key for key in info.keys() if key.endswith('grid5000.fr')]
    assert len(hosts) == 1
    host = hosts[0]
    host = host[:-len('@lyon.grid5000.fr')]
    start_time = df['start'].min()
    stop_time = df['end'].max()
    df['start'] -= start_time
    df['start'] = pandas.to_timedelta(df['start'], unit='s')
    df['end'] -= start_time
    monitoring = my_read_monitoring(archive_name)
    monitoring['date'] = pandas.to_datetime(monitoring['start_time'], unit='s').astype(str)
    for date in monitoring['date'].unique():
        monitoring.loc[monitoring['date'] == date, 'real_start_time'] = monitoring[monitoring['date'] == date]['timestamp'].min()
    monitoring['start_exp'] -= monitoring['real_start_time']
    monitoring['stop_exp'] -= monitoring['real_start_time']
    monitoring['timestamp'] -= monitoring['real_start_time']
    monitoring['cpu_id'] = monitoring['node'].astype(str) + ':' + monitoring['cpu'].astype(str)
    for tmp in [df, monitoring]:
        tmp['matrix_content'] = matrix_init
        tmp['mask_size'] = mask_size
        tmp['host'] = host
    return df, monitoring
dataframes = []
cluster = set()
for directory in directories:
    for filename in os.listdir(directory):
        if not filename.endswith('.zip'):
            continue
        path = os.path.join(directory, filename)
        cluster.add(get_yaml(path, 'info.yaml')['cluster'])
        dataframes.append(read_archive(path))
assert len(cluster) == 1
cluster = cluster.pop()
#node = 'dahu-1'
#cpu = 0
performance = pandas.concat([t[0] for t in dataframes])
#performance = performance[(performance['host'] == node) & (performance['cpu'] == cpu)]
monitoring = pandas.concat([t[1] for t in dataframes])
#frequency = frequency[(frequency['host'] == node) & (frequency['cpu'] == cpu)]
#performance = performance[performance['matrix_content'] == 'random']
#frequency = frequency[frequency['matrix_content'] == 'random']
performance.head()
monitoring.head()
```
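The `gflops` column computed in `read_archive` uses the standard `2*m*n*k` floating-point operation count for a matrix product. As a standalone sketch of the formula (the helper name and the example duration are illustrative):

```python
def dgemm_gflops(m, n, k, duration):
    """GFLOPS rate of a dgemm call: 2*m*n*k flops divided by the duration."""
    return 2 * m * n * k / duration * 1e-9

# A 2048x2048x2048 product completing in 0.2 s:
rate = dgemm_gflops(2048, 2048, 2048, 0.2)
print(round(rate, 2))  # 85.9
```

Since all calls here use square matrices of order 2,048, duration and GFLOPS carry the same information up to this constant factor, which is why the plots below can show durations directly.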
## Performance
```
(ggplot(performance)
+ aes(x='factor(mask_size)', y='duration', fill='factor(mask_size)')
+ theme_bw()
+ geom_boxplot(outlier_alpha=0, size=1)
+ xlab('Mask size (bits)')
+ ylab('Duration (s)')
+ labs(color='Mask size (bits)')
+ scale_fill_brewer(palette='Blues')
+ ggtitle('Distribution of DGEMM durations (matrices of size 2048×2048)')
+ facet_wrap(['host', 'cpu'], labeller='label_both')
)
(ggplot(performance)
+ aes(x='timestamp', y='duration', color='factor(mask_size)')
+ theme_bw()
+ geom_point(alpha=0.3)
+ xlab('Timestamp (s)')
+ ylab('Duration (s)')
+ labs(color='Mask size (bits)')
+ scale_color_brewer(palette='Blues')
+ ggtitle('Evolution of DGEMM durations (matrices of size 2048×2048)')
+ facet_wrap(['host', 'cpu'], labeller='label_both')
+ guides(color=guide_legend(override_aes={'alpha': 1, 'size': 4}))
)
(ggplot(performance)
+ aes(x='factor(core)', y='duration', fill='factor(mask_size)')
+ theme_bw()
+ geom_boxplot(outlier_alpha=0, size=1)
+ xlab('Core n°')
+ ylab('Duration (s)')
+ labs(color='Mask size (bits)')
+ scale_fill_brewer(palette='Blues')
+ ggtitle('Distribution of DGEMM durations (matrices of size 2048×2048)')
+ facet_wrap(['mask_size', 'host', 'cpu'], labeller='label_both', scales='free_x')
)
```
## Frequency
```
(ggplot(monitoring[(monitoring['kind'] == 'frequency') & (monitoring['timestamp'] > monitoring['start_exp'] + 10) & (monitoring['timestamp'] < monitoring['stop_exp'] - 200)])
+ aes(x='factor(mask_size)', y='value', fill='factor(mask_size)')
+ theme_bw()
+ geom_boxplot(outlier_alpha=0, size=1)
+ xlab('Mask size (bits)')
+ ylab('Frequency (GHz)')
+ labs(color='Mask size (bits)')
+ scale_fill_brewer(palette='Blues')
+ ggtitle('Distribution of core frequencies during the experiments')
+ facet_wrap(['host', 'cpu'], labeller='label_both')
)
(ggplot(monitoring[(monitoring['kind'] == 'frequency') & (monitoring['timestamp'] > monitoring['start_exp'] + 10) & (monitoring['timestamp'] < monitoring['stop_exp'] - 200)])
+ aes(x='timestamp', y='value', color='factor(mask_size)')
+ theme_bw()
+ geom_point(alpha=0.3)
+ xlab('Timestamp (s)')
+ ylab('Frequency (GHz)')
+ labs(color='Mask size (bits)')
+ scale_color_brewer(palette='Blues')
+ ggtitle('Evolution of core frequencies during the experiments')
+ facet_wrap(['host', 'cpu'], labeller='label_both')
+ guides(color=guide_legend(override_aes={'alpha': 1, 'size': 4}))
)
```
There are a lot of sections in here, so be sure to browse around - pandas, matplotlib, Illustrator, and some specifically for items in the homework.
# General pandas
## Importing
What's the full import statement? You'll want: pandas, matplotlib, pyplot, and you'll want to make sure graphs are going to be displayed in your notebook. And that fonts are going to save right in PDFs.
## Reading in files
When files are in a subdirectory, you can't just say `pd.read_csv("filename.csv")` - you need to say "oh you're in a subdirectory, let me go in there, too." Usually something like `pd.read_csv("my_folder/filename.csv")`.
## Data types
Be sure to check your data types! Numbers should be ints or floats, dates should be datetimes (although usually it's okay for years to not be). You convert between most with `.astype`, but for dates you need to do something like `df['new_column'] = pd.to_datetime(df['date_column'], format="....")`.
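For example, with made-up column names and a made-up date format:

```python
import pandas as pd

df = pd.DataFrame({
    'count': ['1', '2', '3'],  # numbers stuck as strings
    'date': ['01/15/2020', '02/20/2020', '03/25/2020'],  # dates as strings
})

# Numbers: convert with .astype
df['count'] = df['count'].astype(int)

# Dates: convert with pd.to_datetime and an explicit format string
df['date'] = pd.to_datetime(df['date'], format='%m/%d/%Y')

print(df.dtypes)
```

After this, `df['count']` is an integer column and `df['date']` supports datetime operations like `df['date'].dt.year`.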
## Fixing things up
Sometimes it's easier to do things in pandas/matplotlib, sometimes it's easier to do in Illustrator. As a general rule, sorting, grids, and sizing are all easier in pandas and annotations are easier in Illustrator.
## Combining datasets
To combine datasets in pandas, you use `.merge`. In a perfect world, you'd do this to combine two dataframes named df1 and df2:
```python
df1.merge(df2)
```
to merge the LEFT dataset (df1) with the RIGHT dataset (df2). Unfortunately this rarely works as-is, because it requires your datasets to have column names in common so pandas can guess how to join them together.
Instead, you need to tell it two things:
* `left_on`, the name of the join column for the first dataframe
* `right_on`, the name of the join column for the second dataframe.
It will usually be something like, "I want to join `city` in df1 with `municipality_name` in df2," in which case you'd run:
```python
df1.merge(df2, left_on='city', right_on='municipality_name')
```
This looks cool, but you also need to save your new merged dataframe somewhere. Your end result will probably look like this:
```python
merged = df1.merge(df2, left_on='city', right_on='municipality_name')
merged.head()
```
## Reversing
Usually you can use `.sort_values()` to sort things, but every now and again you just want it to go in reverse. You can use `ascending=False` or you can lose your mind and do something like:
```python
df.iloc[::-1]
```
Scary, right? You can also do it on a single column, `df['Name'].iloc[::-1]`.
# Graphing
## Saving graphs
Make sure you're always doing the same imports up top!
```python
import pandas as pd
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['pdf.fonttype'] = 42
```
Especially that last one, so when you save your PDFs they'll have editable text. You can save using this command:
```python
plt.savefig("output.pdf")
```
It will only work if it's in the **same cell** as where you're graphing.
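So a complete saving cell might look like this (hypothetical data, just to have something to draw):

```python
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt

matplotlib.rcParams['pdf.fonttype'] = 42

df = pd.DataFrame({'month': ['Jan', 'Feb'], 'tea': [30, 45]})

# Plot and save in the SAME cell, or the PDF comes out blank
df.plot(kind='bar', x='month', y='tea')
plt.savefig('output.pdf')
```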
## Multiple and stacked bar charts
A normal bar chart is just a bar for every row. But sometimes you want TWO bars for every row!
Let's say we had a coffee shop, and every row was a month. If we had a `tea` column, we could plot tea sales like this:
```python
df.plot(kind='bar', x='month', y='tea')
```
If we also had a `coffee` column, though, we could also make a grouped bar chart, where each month gets a bar for tea AND a bar for coffee.
```python
df.plot(kind='bar', x='month', y=['tea', 'coffee'])
```
If we were interested in the total between tea and coffee, we could also stack the tea and coffee on top of each other by adding `stacked=True`
```python
df.plot(kind='bar', x='month', y=['tea', 'coffee'], stacked=True)
```
Sometimes you want something stacked out of 100% but your columns don't add up to 100%. In that case, do math.
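The "math" is just dividing each column by the row total so every row sums to 100 - a sketch with hypothetical `tea`/`coffee` columns:

```python
import pandas as pd

df = pd.DataFrame({'month': ['Jan', 'Feb'], 'tea': [30, 10], 'coffee': [10, 30]})

total = df['tea'] + df['coffee']
df['tea_pct'] = df['tea'] / total * 100
df['coffee_pct'] = df['coffee'] / total * 100

# Now each row adds up to 100, so the stack fills the full height
df.plot(kind='bar', x='month', y=['tea_pct', 'coffee_pct'], stacked=True)
```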
## Putting multiple graphs on the same chart
If you have two separate plots you want to put on top of each other, you can do it two different ways:
1) You can just save them separately and cut and paste in Illustrator to stack them on top of each other. This way sucks.
2) You know how we made a plot and saved it to `ax` and then did like `ax.set_ylim(...)` and `ax.set_title(...)` and stuff? You can make a blank `ax`, and then keep telling graphs to go on top of it.
For example, this will create a separate plot for each continent:
```python
df.groupby('Continent').plot(x='GDP_per_capita', y='life_expectancy', marker='.', linestyle='')
```
It's kind of like a for loop, making a new plot for each group. Instead, we need to create a blank graph, and then say "hey `.plot`, use this graph!!!" We do that like this:
```python
fig, ax = plt.subplots()
df.groupby('Continent').plot(ax=ax, x='GDP_per_capita', y='life_expectancy', marker='.', linestyle='')
```
In the first line, `fig, ax = plt.subplots()`, we are building a blank canvas. Why does the code look like that? I don't know, because we aren't allowed to have nice things.
Then each time we want to draw a chart, we pass `ax=ax` to `.plot` to tell it where to draw instead of making a totally new graph.
Sometimes this works when you're stacking separate graphs on top of each other, too (like layering), but sometimes it doesn't. I'd like to explain more but it seriously changes every 6 months and I have no idea where we're at right now.
## Highlighting points/categories/etc
There are a lot of ways to do this!
```python
# Building colors to pass to matplotlib or Seaborn
# Use as color=colors for matplotlib, palette=colors for Seaborn
def build_colors(row):
if row['Country'] == 'Switzerland':
return 'red'
elif row['Country'] == 'Germany':
return 'red'
else:
return 'lightgrey'
colors = df.reset_index().apply(build_colors, axis=1)
```
And then you use it by passing `color=` to your `.plot` method:
```python
df.plot(x='Country', y='life_expectancy', color=colors)
```
## Line graphs (and other types)
If you want a line graph, `style='-'`. If you want dots, `style='.'`. If you want lines AND dots, you want `style='.-'`. Why can't it all just be `kind='bar'` and `kind='scatter'` and `kind='line'` and `kind='line-with-dots'`? I don't know, life sucks.
You can also look at https://matplotlib.org/api/markers_api.html - instead of `.` you can use `o` or `v` or a million other things.
## Coloring your dots and such
* `color='red'` to make everything red
* `markeredgecolor='black'` to make the edges black
* `markerfacecolor='blue'` to make the fill color blue (why is it called face color and not fill color????)
* `markersize=10` or `size=10` to make the circles 10 pixels (depending on the kind of graph you're making)
If you want the marker fill or the marker edge to disappear entirely, set that color to `'none'`.
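Putting those options together on a hypothetical line-with-markers plot:

```python
import pandas as pd

df = pd.DataFrame({'month': [1, 2, 3], 'tea': [30, 45, 38]})

ax = df.plot(x='month', y='tea', style='o-',
             color='red',                 # the line
             markerfacecolor='blue',      # marker fill
             markeredgecolor='black',     # marker outline
             markersize=10)
```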
## Size of the graph
`figsize=(10, 20)` when you're graphing, or when you're doing `plt.subplots()` if you're doing the `ax=ax` trick.
## Grid Lines
To add/remove/adjust grid lines, follow this guide: http://jonathansoma.com/lede/data-studio/matplotlib/adding-grid-lines-to-a-matplotlib-chart/
Or just note you can do things like this:
```python
ax.grid('on', axis='y', which='major', linestyle='-', linewidth='0.5', color='red')
```
To turn on the grid for a specific part of an axis and give it a specific style/color.
`linestyle` can be `:` or `-` or `--` or `-.` - read here if you don't believe me: https://matplotlib.org/devdocs/gallery/lines_bars_and_markers/line_styles_reference.html. You can also change the dashes in Illustrator, which might be easier.
"WHAT DOES 'major' MEAN???" - read the next section.
## Tick Marks
There are two kinds of tick marks on an axis. MINOR and MAJOR. MAJOR get labels. MINOR do not.
When setting tick marks on the x or y axis, you can use `.set_xticks` and `.set_yticks` to make tick mark labels appear in specific places. For example:
```python
ax.set_xticks([2005, 2006, 2007, 2008])
```
But!!! The above doesn't work if your data is actual dates, only if it's integers. If you try to do the above with a datetime, you'll get an error.
Instead, you need to reach in and adjust the MAJOR ticks (because you want labels). If you wanted to make there be a tick mark WITH A LABEL every single month, it would look like this:
```python
import matplotlib.dates as dates
ax.xaxis.set_major_locator(dates.MonthLocator())
```
If you also wanted there to be a tick mark WITHOUT a label (minor) every 5 days, it would look like this:
```python
ax.xaxis.set_minor_locator(dates.DayLocator(interval=5))
```
## Rotating tick marks
`rot=0`, `rot=90`, etc when using `plot`, OR select them all in Illustrator (hold down SHIFT when clicking them, or just draw a box that touches them all), `Object > Transform > Transform Each`, check 'Preview', and then play around with 'Angle'
## Adding an extra axis
Sometimes you just want more labels! Let's say you graphed something very long where you had everyone's name on the left-hand side but also wanted it on the right-hand side. You'd build your graph, save it as `ax`, then use the following code:
```python
# Duplicate the graph, giving it the same x axis
alt = ax.twinx()
# Set the new graph to have its tick marks in the same position
alt.set_yticks(ax.yaxis.get_ticklocs())
alt.set_ylim(ax.get_ylim())
# Set the labels for the tick marks to be the same, too
alt.set_yticklabels(df['Name'])
```
Why does it work? Who knows. But be careful, you might need to sort/reverse to make the labels correct.
# Illustrator
## Learning on Lynda
You can log onto Lynda.com (for free!) and find some tutorials with this link: https://ctl.columbia.edu/resources-and-technology/teaching-with-technology/tech-resources/lynda/
There are a _lot_ of lessons (and series) on there about using Illustrator, I'm honestly not sure what the good ones are! You could try something like the below:
* https://www.lynda.com/Illustrator-tutorials/Illustrator-CC-2019-Essential-Training/756294-2.html
* https://www.lynda.com/Illustrator-tutorials/Illustrator-CC-2019-One-One-Fundamentals-Revision/784289-2.html
* https://www.lynda.com/learning-paths/Design/become-a-digital-illustrator
* https://www.lynda.com/Illustrator-training-tutorials/227-0.html?category=beginner_337
Love something on there? Hate something? **Let everyone else know on Slack** so we can make sure we're using good tutorials!
## Opening things in Illustrator
Before you do _ANYTHING_ you should remove clipping masks.
1. Select everything (Command+A)
2. `Object > Clipping Mask > Release` (Command + Option + 7)
3. Keep releasing clipping masks again and again and again until it doesn't work any more
## Fill vs. stroke colors
Fill is the inside, stroke is the outline.

You select them separately. The white-with-a-red-line color means no color.
## Background colors
In Illustrator, draw a square as big as your entire artboard, then do `Select > Arrange > Send to Back` to make it go behind everything else.
## Editing lines
`Window > Stroke` to open up the stroke menu, then you can change the size with "Weight" or make it dashed with "Dashed line" (you might need to click the little... thing in the upper right-hand corner of the Stroke window and pick 'Show options' to be able to see that)

## Selecting multiple things in Illustrator
Hold shift, click multiple things. Or click and drag a box around them.
## Selecting all of the _____
Things that look like what you have selected: `Select > Same > Appearance` or `Fill Color` or `Stroke Color` or whatever
Text: `Select > Object > All Text Objects`
## My grid/axis lines are on top of my chart!
Select the line, then `Object > Arrange > Send to Back`
## I sent something to the back and it disappeared!!!
Maybe you have a white rectangle as a background? Try clicking the background and hitting delete.
## Rotating text or other things
Click it (black arrow), then move your mouse around its edge until you see a thing that kind of implies you can rotate it. Click and drag.
## Drawing straight lines or rotating nicely
Hold shift while you draw the line or rotate or move a thing and it will go straight.
## Lining up things
When you have multiple things selected, the `Align` bar becomes active at the top. You can... align things with other things using it instead of manually pushing things around. You might want to play around with the different "Align to..." options.

The "key object" one can be pretty good, as it uses the "key object" as an anchor and moves everything around it. You select the key object by clicking (without holding shift) after you've made your selection. Key object = blue box.
# Tips specifically for Part 2
## Making donut charts
Donut charts are just pies missing the center. So if you make a pie and just _draw a big white circle in the middle_, it suddenly becomes a donut.
## Making pie charts
If `kind='scatter'` makes a scatter and `kind='bar'` makes a bar, how do you think you make a pie? Well, [`kind='pie'`](https://pandas.pydata.org/pandas-docs/stable/visualization.html#pie-plot)!
If you pass `labels=` to the pie chart maker it'll label your data. Try passing `labels='region'` and see the totally-wrong-but-kind-of-funny thing that happens. Then fix it! (or ask me about it).
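`labels=` wants a list of actual labels, not a column name - passing the string `'region'` labels the wedges with the individual letters r, e, g, and so on. Instead, pass the column itself (hypothetical data):

```python
import pandas as pd

df = pd.DataFrame({'region': ['North', 'South', 'East'], 'profit': [40, 35, 25]})

# labels='region' would label wedges with single letters; pass the column instead
ax = df.plot(kind='pie', y='profit', labels=df['region'], legend=False)
```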
## Multiple pie charts
If you're trying to do multiple pie charts from the same data (which you are), you have a few options
**Combining in Illustrator**
Make a pie chart for each graph, combine them in Illustrator.
**Building them in the same matplotlib graph**
First, select _only_ the columns you want to make the pie from. You can do this two ways:
1. Using the super-weird `df[['col1','col2']]` style multiple-column selection
2. You could also use `df.drop('col3', axis=1)` (without `inplace=True`, you don't want it to last forever!)
Try that in a cell by itself to make sure you've gotten rid of `region`. Once it looks okay, put a `.plot` right after that and tell it to make a pie, but with `subplots=True`.
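Putting those pieces together might look like this (hypothetical columns; each numeric column becomes its own pie):

```python
import pandas as pd

df = pd.DataFrame({'region': ['North', 'South'], 'tea': [30, 70], 'coffee': [60, 40]})

# Select only the numeric columns, then one pie per column
axes = df[['tea', 'coffee']].plot(kind='pie', subplots=True, legend=False)
```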
## The Economist arrow chart??
We aren't making an arrow chart, it's ugly. We're doing another kind of chart!
### Part One: The lines
It doesn't work with pandas, so we're going to be talking right to matplotlib. We're going to use something called `ax.hlines`.
To see how `hlines` works, try this out:
```python
fig, ax = plt.subplots()
ax.hlines(xmin=[1, 2, 3, 4], xmax=[7, 8, 9, 20], y=[1, 2, 3, 4])
```
The lines start from `xmin` and go to `xmax`. First xmin + first xmax + first y, second xmin + second xmax + second y, etc.
Start by adapting the code above to build the chart below.

While the y axis is _obviously_ the region, it won't let you do it! It wants numbers! So we'll cheat and use `df.index`, which is the `0, 1, 2...` thing on the left-hand side.
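A self-contained sketch of that trick (the `profit_*` column names here are made up):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({'region': ['North', 'South', 'East'],
                   'profit_2007': [1, 2, 3],
                   'profit_2011': [7, 8, 9]})

fig, ax = plt.subplots()
# y has to be numeric, so cheat and use df.index (0, 1, 2...)
ax.hlines(xmin=df['profit_2007'], xmax=df['profit_2011'], y=df.index)
```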
### Part Two: The dots
`ax.scatter` is going to be your new friend. It takes an `x` and a `y`, and you'll use it to add little nubs for the 2011 profits. Remember, since it's `ax.scatter` we're talking directly to matplotlib and we can't just use column names, we have to give it the `df`, too.

### Part Three: The labels
Just copy and paste this part! First we're saying "put the y ticks at these numbers" then we're saying "oh wait use these words instead." Why do we need both? I don't know, programming.
```python
ax.set_yticks(df.index)
ax.yaxis.set_ticklabels(df['region'])
```

### Part Four: Your options from here
You're picking the style, right? So do whatever you want!
1. You could also do a dot for 2007, just in a different color. It would also use `ax.scatter`.
2. If you really wanted to do arrows you could, get rid of the dots and use the 'Arrowheads' part of the 'Stroke' menu in Illustrator
3. Would any annotations or grids be useful here? What are you trying to stress?
4. How do you feel about the size of the dots and the lines?
5. You probably wanted to sort it (especially since I told you to). If you use `.sort_values` it actually won't work because we keep using the index in a kind of unusual way (the left-hand-side number, the index, gets out of order). So after you sort, try `.reset_index(drop=True)`. Save that back into your `df` to update it forever.
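For example (hypothetical data):

```python
import pandas as pd

df = pd.DataFrame({'region': ['North', 'South', 'East'], 'profit': [2, 9, 5]})

# Sort, then rebuild the 0, 1, 2... index so df.index is in order again
df = df.sort_values('profit').reset_index(drop=True)
print(df.index.tolist())
```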
## Colors won't change from grey
Open up the colors menu, click the thing in the upper right-hand corner, and select 'RGB' instead of 'Greyscale'.
## Bar plots with multiple things going on
You can give multiple `y` values, like `y=['col1','col2']` if you want to have a grouped bar plot. You add `stacked=True` if you want them stacked on top of each other.
## Invisible grids
Sometimes instead of having a grid you can see, you only see the grid lines when it intersects with your data. I'd probably graph the grid in an ugly color, then select them and `Object > Arrange > Move to Front`, then make it white.
## More arrow tips for The Guardian one
Okay, maybe we didn't like the arrow chart before, but I guess we're doing one now. Make it similarly to how you make the one for the Economist.
It sure seems like all of the arrows are coming from one place. Maybe `hlines` is fine with that?
Make the arrowheads in Illustrator. `Window > Stroke` to open the stroke menu, then click the upper right-hand corner to make sure **Show Options** is turned on. Then you can play with the **Arrowheads** options.
## Tips for the Guardian commuting one
There are a _lot_ of ways to do all the bits and pieces for this one! If you have a thought and want to talk through it before you start, feel free to chat me up in Slack.
By the way, did you know Illustrator has a graph tool?

Sometimes instead of fighting with matplotlib it's an easy way to get lines or boxes or whatever that are the right relative size. Feel free to play around with it.
<a href="https://colab.research.google.com/github/technologyhamed/Neuralnetwork/blob/Single/ArticleSummarization/ArticleSummarization.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**Part 1: Finding the TF-IDF score of each word**
First the file is read and all the strings are stored in a Pandas DataFrame.
```
#Import libraries
%matplotlib inline
import pandas as pd
import numpy as np
import os
import glob
import requests as requests
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn.datasets import load_files
import nltk
nltk.download('stopwords')
# Just making the plots look better
mpl.style.use('ggplot')
mpl.rcParams['figure.figsize'] = (8,6)
mpl.rcParams['font.size'] = 12
url='https://raw.githubusercontent.com/technologyhamed/Neuralnetwork/Single/Datasets/Article_Summarization_project/Article.txt'
filename='../content/sample_data/Article.txt'
df = pd.read_csv(url)
df.to_csv(filename)
str_article = list()
article_files = glob.glob(filename)
d = list()
for article in article_files:
    with open(article, encoding='utf-8') as f:
        filename = os.path.basename(article.split('.')[0])
        lines = (line.rstrip() for line in f)  # All lines including the blank ones
        lines = list(line for line in lines if line)  # Non-blank lines
        #str_article.rstrip()
        d.append(pd.DataFrame({'article': "اخبار", 'paragraph': lines}))
doc = pd.concat(d)
doc
#doc['article'].value_counts().plot.bar();
```
Importing NLTK corpus to remove stop words from the vector.
```
from nltk.corpus import stopwords
```
Split the lines into sentences/words.
```
doc['sentences'] = doc.paragraph.str.rstrip('.').str.split('[\.]\s+')
doc['words'] = doc.paragraph.str.strip().str.split('[\W_]+')
#This line is used to remove the English stop words
stop = stopwords.words('english')
doc['words'] = doc['words'].apply(lambda x: [item for item in x if item not in stop])
#doc.head()
doc
```
Split the paragraph into sentences.
```
rows = list()
for row in doc[['paragraph', 'sentences']].iterrows():
    r = row[1]
    for sentence in r.sentences:
        rows.append((r.paragraph, sentence))
sentences = pd.DataFrame(rows, columns=['paragraph', 'sentences'])
#sentences = sentences[sentences.sentences.str.len() > 0]
sentences.head()
```
Split the paragraph into words.
```
rows = list()
for row in doc[['paragraph', 'words']].iterrows():
    r = row[1]
    for word in r.words:
        rows.append((r.paragraph, word))
words = pd.DataFrame(rows, columns=['paragraph', 'words'])
#remove empty spaces and change words to lower case
words = words[words.words.str.len() > 0]
words['words'] = words.words.str.lower()
#words.head()
#words
```
Calculate word counts in the article.
```
rows = list()
for row in doc[['article', 'words']].iterrows():
    r = row[1]
    for word in r.words:
        rows.append((r.article, word))
wordcount = pd.DataFrame(rows, columns=['article', 'words'])
wordcount['words'] = wordcount.words.str.lower()
wordcount.words = wordcount.words.str.replace('\d+', '')
wordcount.words = wordcount.words.str.replace(r'^the', '')
wordcount = wordcount[wordcount.words.str.len() > 2]
counts = wordcount.groupby('article')\
.words.value_counts()\
.to_frame()\
.rename(columns={'words':'n_w'})
#counts.head()
counts
#wordcount
#wordcount.words.tolist()
#counts.columns
```
Plot number frequency graph.
```
def pretty_plot_top_n(series, top_n=20, index_level=0):
    r = series\
        .groupby(level=index_level)\
        .nlargest(top_n)\
        .reset_index(level=index_level, drop=True)
    r.plot.bar()
    return r.to_frame()
pretty_plot_top_n(counts['n_w'])
word_sum = counts.groupby(level=0)\
.sum()\
.rename(columns={'n_w': 'n_d'})
word_sum
tf = counts.join(word_sum)
tf['tf'] = tf.n_w/tf.n_d
tf.head()
#tf
```
Plot top 20 words based on TF
```
pretty_plot_top_n(tf['tf'])
c_d = wordcount.article.nunique()
c_d
idf = wordcount.groupby('words')\
.article\
.nunique()\
.to_frame()\
.rename(columns={'article':'i_d'})\
.sort_values('i_d')
idf.head()
idf['idf'] = np.log(c_d/idf.i_d.values)
idf.head()
#idf
```
IDF values are all zeros because in this example, only 1 article is considered & all unique words appeared in the same article. IDF values are 0 if it appears in all the documents.
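This can be checked directly against the formula used above (`c_d` articles, word appearing in `i_d` of them):

```python
import numpy as np

# idf = log(c_d / i_d): c_d = number of articles, i_d = articles containing the word
c_d = 1
i_d = 1
idf = np.log(c_d / i_d)
print(idf)  # a word appearing in every article gets idf = 0
```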
```
tf_idf = tf.join(idf)
tf_idf.head()
#tf_idf
tf_idf['tf_idf'] = tf_idf.tf * tf_idf.idf
tf_idf.head()
#tf_idf
```
-------------------------------------------------
**Part 2: Using Hopfield Network to find the most important words**
In this part, the TF scores are treated as the Frequency Vector i.e. the input to Hopfield Network.
Frequency Matrix is constructed to be treated as Hopfield Network weights.
```
freq_matrix = pd.DataFrame(np.outer(tf_idf["tf"], tf_idf["tf"]), tf_idf["tf"].index, tf_idf["tf"].index)
#freq_matrix.head()
freq_matrix
```
Finding the maximum of the frequency vector and matrix
```
vector_max = tf_idf['tf'].max()
print(vector_max)
matrix_max = freq_matrix.max().max()
print(matrix_max)
```
Normalizing the frequency vector
```
tf_idf['norm_freq'] = tf_idf.tf / vector_max
temp_df = tf_idf[['tf', 'norm_freq']]
#temp_df
temp_df.head(20)
#tf_idf.head()
#tf_idf
```
Normalizing the frequency matrix
```
freq_matrix_norm = freq_matrix.div(matrix_max)
freq_matrix_norm
np.fill_diagonal(freq_matrix_norm.values, 0)
freq_matrix_norm
```
Run the Hopfield update: multiply the normalized weight matrix by the state vector, pass the result through a `tanh` activation, and repeat until the values settle.
```
#define sigmoid function
#currently just a placeholder because tanh activation function is selected instead
def sigmoid(x):
    beta = 1
    return 1 / (1 + np.exp(-x * beta))

tf_idf["hopfield_value"] = np.tanh(freq_matrix_norm @ tf_idf["norm_freq"])

# Apply the same tanh update 14 times instead of copy-pasting the line
temp = tf_idf["hopfield_value"]
for _ in range(14):
    temp = np.tanh(freq_matrix_norm @ temp)
temp.head(20)
```
# **The Hopfield Algorithm**
```
#safe limit
itr = 0
zero_itr = 0
max_itr = 5 #maximum iteration where Delta Energy is 0
char_list = []
delta_energy = 0
threshold = 0
energy = 0
init_energy = 0
tf_idf["hopfield_value"] = np.tanh(freq_matrix_norm @ tf_idf["norm_freq"])
while (delta_energy < 0.0001):
    itr = itr + 1
    # Calculation of output vector from Hopfield Network
    # y = activation_function(sum(W * x))
    tf_idf["hopfield_value"] = np.tanh(freq_matrix_norm @ tf_idf["hopfield_value"])
    # Calculation of Hopfield Energy Function and its Delta
    # E = [-1/2 * sum(Wij * xi * xj)] + [sum(threshold*xi)]
    energy = (-0.5 * tf_idf["hopfield_value"] @ freq_matrix_norm @ tf_idf["hopfield_value"]) \
             + (np.sum(threshold * tf_idf["hopfield_value"]))
    # Append to list for characterization
    char_list.append(energy)
    # Find Delta for Energy
    delta_energy = energy - init_energy
    #print ('Energy = {}'.format(energy))
    #print ('Init_Energy = {}'.format(init_energy))
    #print ('Delta_Energy = {}'.format(delta_energy))
    init_energy = energy  # Current energy becomes previous energy in the next iteration
    # Break the loop if Delta Energy stays at zero for max_itr iterations
    if (delta_energy == 0):
        zero_itr = zero_itr + 1
        if (zero_itr == max_itr):
            print("Hopfield Loop exited at Iteration {}".format(itr))
            break
big_grid = np.arange(0,itr)
plt.plot(big_grid,char_list, color ='blue')
plt.suptitle('Hopfield Energy Value After Each Iteration')
# Customize the major grid
plt.grid(which='major', linestyle='-', linewidth='0.5', color='red')
# Customize the minor grid
plt.grid(which='minor', linestyle=':', linewidth='0.5', color='black')
plt.minorticks_on()
plt.rcParams['figure.figsize'] = [13, 6]
plt.show()
#tf_idf.head()
#tf_idf
#final_hopfield_output = tf_idf["hopfield_value"]
final_output_vector = tf_idf["hopfield_value"]
final_output_vector.head(10)
#final_output_vector.head()
#final_output_vector
#tf_idf
```
Once again, it is shown that the words <font color=green>***kipchoge***</font> and <font color=green>***marathon***</font> are the most important words. This is very likely accurate, because the article was about the performance of Eliud Kipchoge running a marathon.
-------------------------------------------------
**Part 3: Article Summary**
```
txt_smr_sentences = pd.DataFrame({'sentences': sentences.sentences})
txt_smr_sentences['words'] = txt_smr_sentences.sentences.str.strip().str.split('[\W_]+')
rows = list()
for row in txt_smr_sentences[['sentences', 'words']].iterrows():
    r = row[1]
    for word in r.words:
        rows.append((r.sentences, word))
txt_smr_sentences = pd.DataFrame(rows, columns=['sentences', 'words'])
#remove empty spaces and change words to lower case
txt_smr_sentences['words'].replace('', np.nan, inplace=True)
txt_smr_sentences.dropna(subset=['words'], inplace=True)
txt_smr_sentences.reset_index(drop=True, inplace=True)
txt_smr_sentences['words'] = txt_smr_sentences.words.str.lower()
##Initialize 3 new columns
# w_ind = New word index
# s_strt = Starting index of a sentence
# s_stp = Stopping index of a sentence
# w_scr = Hopfield Value for words
txt_smr_sentences['w_ind'] = txt_smr_sentences.index + 1
txt_smr_sentences['s_strt'] = 0
txt_smr_sentences['s_stp'] = 0
txt_smr_sentences['w_scr'] = 0
#Iterate through the rows to check if the current sentence is equal to
#previous sentence. If not equal, determine the "start" & "stop"
start = 0
stop = 0
prvs_string = ""
for i in txt_smr_sentences.index:
    #print (i)
    if (i == 0):
        start = 1
        txt_smr_sentences.iloc[i,3] = 1
        prvs_string = txt_smr_sentences.iloc[i,0]
    else:
        if (txt_smr_sentences.iloc[i,0] != prvs_string):
            stop = txt_smr_sentences.iloc[i-1,2]
            txt_smr_sentences.iloc[i-(stop-start)-1:i,4] = stop
            start = txt_smr_sentences.iloc[i,2]
            txt_smr_sentences.iloc[i,3] = start
            prvs_string = txt_smr_sentences.iloc[i,0]
        else:
            txt_smr_sentences.iloc[i,3] = start
    if (i == len(txt_smr_sentences.index)-1):
        last_ind = txt_smr_sentences.w_ind.max()
        txt_smr_sentences.iloc[i-(last_ind-start):i+1,4] = last_ind
#New Column for length of sentence
txt_smr_sentences['length'] = txt_smr_sentences['s_stp'] - txt_smr_sentences['s_strt'] + 1
#Rearrange the Columns
txt_smr_sentences = txt_smr_sentences[['sentences', 's_strt', 's_stp', 'length', 'words', 'w_ind', 'w_scr']]
txt_smr_sentences.head(100)
#txt_smr_sentences
```
Check if word has Hopfield Score value, and update *txt_smr_sentences*
```
for index, value in final_output_vector.items():
    for i in txt_smr_sentences.index:
        if(index[1] == txt_smr_sentences.iloc[i,4]):
            txt_smr_sentences.iloc[i,6] = value
#New Column for placeholder of sentences score
txt_smr_sentences['s_scr'] = txt_smr_sentences.w_scr
txt_smr_sentences.head(100)
# three_sigma = 3 * math.sqrt((tf_idf.loc[:,"hopfield_value"].var()))
# three_sigma
# tf_idf["hopfield_value"]
aggregation_functions = {'s_strt': 'first', \
's_stp': 'first', \
'length': 'first', \
's_scr': 'sum'}
tss_new = txt_smr_sentences.groupby(txt_smr_sentences['sentences']).aggregate(aggregation_functions)\
.sort_values(by='s_scr', ascending=False).reset_index()
tss_new
import math
max_word = math.floor(0.1 * tss_new['s_stp'].max())
print("Max word amount for summary: {}\n".format(max_word))
summary = tss_new.loc[tss_new['s_strt'] == 1, 'sentences'].iloc[0] + ". " ##Consider the Title of the Article
length_printed = 0
for i in tss_new.index:
    if (length_printed <= max_word):
        summary += tss_new.iloc[i,0] + ". "
        length_printed += tss_new.iloc[i,3]  ##Consider the sentence where max_word appears in the middle
    else:
        break
class style:
    BOLD = '\033[1m'
    END = '\033[0m'
print('\n','--------------------------------------------------------')
s = pd.Series([style.BOLD+summary+style.END])
print(s.str.split(' '))
print('\n')
#!jupyter nbconvert --to html ./ArticleSummarization.ipynb
```
```
import numpy as np
from scipy.stats import norm
from stochoptim.scengen.scenario_tree import ScenarioTree
from stochoptim.scengen.scenario_process import ScenarioProcess
from stochoptim.scengen.variability_process import VariabilityProcess
from stochoptim.scengen.figure_of_demerit import FigureOfDemerit
```
We illustrate on a Geometric Brownian Motion (GBM) the two ways (forward vs. backward) to build a scenario tree with **optimized scenarios**.
# Define a `ScenarioProcess` instance for the GBM
```
S_0 = 2 # initial value (at stage 0)
delta_t = 1 # time lag between 2 stages
mu = 0 # drift
sigma = 1 # volatility
```
The `gbm_recurrence` function below implements the dynamic relation of a GBM:
* $S_{t} = S_{t-1} \exp[(\mu - \sigma^2/2) \Delta t + \sigma \epsilon_t\sqrt{\Delta t}], \quad t=1,2,\dots$
where $\epsilon_t$ is a standard normal random variable $N(0,1)$.
The discretization of $\epsilon_t$ is done by quasi-Monte Carlo (QMC) and is implemented by the `epsilon_sample_qmc` method.
```
def gbm_recurrence(stage, epsilon, scenario_path):
    if stage == 0:
        return {'S': np.array([S_0])}
    else:
        return {'S': scenario_path[stage-1]['S'] \
                * np.exp((mu - sigma**2 / 2) * delta_t + sigma * np.sqrt(delta_t) * epsilon)}

def epsilon_sample_qmc(n_samples, stage, u=0.5):
    return norm.ppf(np.linspace(0, 1-1/n_samples, n_samples) + u / n_samples).reshape(-1, 1)
scenario_process = ScenarioProcess(gbm_recurrence, epsilon_sample_qmc)
```
# Define a `VariabilityProcess` instance
A `VariabilityProcess` provides the *variability* of a stochastic problem along the stages and the scenarios. What we call 'variability' is a positive number that indicates how variable the future is given the present scenario.
Mathematically, a `VariabilityProcess` must implement one of the following two methods:
* the `lookback_fct` method which corresponds to the function $\mathcal{V}_{t}(S_{1}, ..., S_{t})$ that provides the variability at stage $t+1$ given the whole past scenario,
* the `looknow_fct` method which corresponds to the function $\mathcal{\tilde{V}}_{t}(\epsilon_t)$ that provides the variability at stage $t+1$ given the present random perturbation $\epsilon_t$.
If the `lookback_fct` method is provided, the scenarios can be optimized using the keyword argument `optimized='forward'`.
If the `looknow_fct` method is provided, the scenarios can be optimized using the keyword argument `optimized='backward'`.
```
def lookback_fct(stage, scenario_path):
    return scenario_path[stage]['S'][0]

def looknow_fct(stage, epsilon):
    return np.exp(epsilon[0])
my_variability = VariabilityProcess(lookback_fct, looknow_fct)
```
# Define a `FigureOfDemerit` instance
```
def demerit_fct(stage, epsilons, weights):
    return 1 / len(epsilons)
my_demerit = FigureOfDemerit(demerit_fct, my_variability)
```
# Optimized Assignment of Scenarios to Nodes
### `optimized='forward'`
```
scen_tree = ScenarioTree.from_recurrence(last_stage=3, init=3, recurrence={1: (2,), 2: (1,2), 3: (1,2,3)})
scen_tree.fill(scenario_process,
optimized='forward',
variability_process=my_variability,
demerit=my_demerit)
scen_tree.plot('S')
scen_tree.plot_scenarios('S')
```
### `optimized='backward'`
```
scen_tree = ScenarioTree.from_recurrence(last_stage=3, init=3, recurrence={1: (2,), 2: (1,2), 3: (1,2,3)})
scen_tree.fill(scenario_process,
optimized='backward',
variability_process=my_variability,
demerit=my_demerit)
scen_tree.plot('S')
scen_tree.plot_scenarios('S')
```
```
import wandb
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Flatten, Lambda
from tensorflow.keras.optimizers import Adam
import gym
import argparse
import numpy as np
from collections import deque
import random
tf.keras.backend.set_floatx('float64')
parser = argparse.ArgumentParser()
parser.add_argument('--gamma', type=float, default=0.95)
parser.add_argument('--lr', type=float, default=0.005)
parser.add_argument('--batch_size', type=int, default=32)
parser.add_argument('--eps', type=float, default=1.0)
parser.add_argument('--eps_decay', type=float, default=0.995)
parser.add_argument('--eps_min', type=float, default=0.01)
args = parser.parse_args([])  # pass an empty list so this also runs inside a notebook
gamma = 0.95
lr = 0.005
batch_size = 32
eps = 1.0
eps_decay = 0.995
eps_min = 0.01
class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def put(self, state, action, reward, next_state, done):
        self.buffer.append([state, action, reward, next_state, done])

    def sample(self):
        sample = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, done = map(np.asarray, zip(*sample))
        states = np.array(states).reshape(batch_size, -1)
        next_states = np.array(next_states).reshape(batch_size, -1)
        return states, actions, rewards, next_states, done

    def size(self):
        return len(self.buffer)
class ActionStateModel:
    def __init__(self, state_dim, action_dim):
        self.state_dim = state_dim
        self.action_dim = action_dim
self.epsilon = eps
self.model = self.create_model()
def create_model(self):
model = tf.keras.Sequential([
Input((self.state_dim,)),
Dense(32, activation='relu'),
Dense(16, activation='relu'),
Dense(self.action_dim)
])
model.compile(loss='mse', optimizer=Adam(lr))
return model
def predict(self, state):
return self.model.predict(state)
def get_action(self, state):
state = np.reshape(state, [1, self.state_dim])
self.epsilon *= eps_decay
self.epsilon = max(self.epsilon, eps_min)
q_value = self.predict(state)[0]
if np.random.random() < self.epsilon:
return random.randint(0, self.action_dim-1)
return np.argmax(q_value)
def train(self, states, targets):
self.model.fit(states, targets, epochs=1, verbose=0)
class Agent:
def __init__(self, env):
self.env = env
self.state_dim = self.env.observation_space.shape[0]
self.action_dim = self.env.action_space.n
self.model = ActionStateModel(self.state_dim, self.action_dim)
self.target_model = ActionStateModel(self.state_dim, self.action_dim)
self.target_update()
self.buffer = ReplayBuffer()
def target_update(self):
weights = self.model.model.get_weights()
self.target_model.model.set_weights(weights)
def replay(self):
for _ in range(10):
states, actions, rewards, next_states, done = self.buffer.sample()
targets = self.target_model.predict(states)
next_q_values = self.target_model.predict(next_states).max(axis=1)
targets[range(batch_size), actions] = rewards + (1-done) * next_q_values * gamma
self.model.train(states, targets)
def train(self, max_episodes=1000):
for ep in range(max_episodes):
done, total_reward = False, 0
state = self.env.reset()
while not done:
action = self.model.get_action(state)
next_state, reward, done, _ = self.env.step(action)
self.buffer.put(state, action, reward*0.01, next_state, done)
total_reward += reward
state = next_state
if self.buffer.size() >= batch_size:
self.replay()
self.target_update()
print('EP{} EpisodeReward={}'.format(ep, total_reward))
wandb.log({'Reward': total_reward})
def main():
    wandb.init(project='dqn-cartpole')  # wandb.log below requires an active run; project name is arbitrary
    env = gym.make('CartPole-v1')
    agent = Agent(env)
    agent.train(max_episodes=1000)
if __name__ == "__main__":
main()
# Scratch cells: inspect an environment's spaces before building an agent.
# Note: Pendulum-v0 has a continuous (Box) action space and no `action_space.n`,
# so this discrete-action Agent only works on environments like CartPole.
env = gym.make('CartPole-v1')
agent = Agent(env)
state = env.reset()
action = agent.model.get_action(state)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.n
print(state_dim, action_dim)
print(env.observation_space)
print(env.action_space)
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from collections import defaultdict
from bitdotio_pandas import BitDotIOPandas
import seaborn as sns
import os
sns.set(font_scale=1.5, style='whitegrid')
plt.rcParams["font.family"] = "sans-serif"
RDATA = os.path.join('data', 'raw')
PDATA = os.path.join('data', 'processed')
# Style configuration
COLORS = [
'#0059ff',
'#fdbd28',
'#28D9AA',
'#EE5149',
'#060F41',
'#788995',
'#FF69B4',
'#7F00FF',
]
GREY = '#788995'
DARK_GREY = '#060F41'
BLUE = '#0059ff'
DBLUE = '#060F41'
GOLD = '#fdbd28'
GREEN = '#28D9AA'
RED = '#EE5149'
BLACK = '#000000'
WHITE = '#FFFFFF'
LINEWIDTH = 5
LINESPACING = 1.25
FS_SUPTITLE = 30
FS_CAPTION = 24
FS_LABEL = 24
FS_FOOTNOTE = 20
```
## Part 1: Summer Reading Pipeline
Caution: this dataset is too large to download on Deepnote, but is provided here for reference or for download to use locally.
The purpose of this notebook is to download raw checkout records from the Seattle Public Library and publish prepared datasets back to bit.io.
### Download data from SPL
```
for directory in ['data/raw', 'data/processed']:
if not os.path.exists(directory):
os.makedirs(directory)
# This step will take a while, the raw file is ~8.3 GB
! curl -o data/raw/checkouts_by_title.csv "https://data.seattle.gov/api/views/tmmm-ytt6/rows.csv?accessType=DOWNLOAD"
```
### Load data into pandas
```
df = pd.read_csv(os.path.join(RDATA, 'checkouts_by_title.csv'))
df.head()
```
### Filtering the data
I will look at the last ~5 years of ebook and audiobook checkouts.
```
df = df.loc[(df['MaterialType'].isin(['EBOOK', 'AUDIOBOOK'])) & (df['CheckoutYear'] >= 2016)].copy()
df.head()
```
### Getting the top subjects
Each title can have many subject labels. I will find the most popular subject labels for both ebooks and audiobooks.
```
# Count subjects and map subjects to rows, this is crude/slow but works, could be parallelized easily
subject_counts_ab = defaultdict(int)
subject_counts_eb = defaultdict(int)
i = 0
for idx, row in df.iterrows():
if isinstance(row.Subjects, str):
temp_subjects = row.Subjects.split(', ')
for subject in temp_subjects:
if row.MaterialType == 'EBOOK':
subject_counts_eb[subject] += row.Checkouts
elif row.MaterialType == 'AUDIOBOOK':
subject_counts_ab[subject] += row.Checkouts
i += 1
if i % 1000000 == 0:
print(f'{100*i/df.shape[0]:.3f}%')
# Construct series with subject counts
sca_series = pd.Series(subject_counts_ab)
sce_series = pd.Series(subject_counts_eb)
# View top subjects for audio books
top_subjects_a = list(sca_series.sort_values(ascending=False).iloc[:15].index)
top_subjects_a
# View top subjects for e-books
top_subjects_e = list(sce_series.sort_values(ascending=False).iloc[:15].index)
top_subjects_e
# Get the top subjects common to both formats
top_subjects = set(top_subjects_a).intersection(top_subjects_e)
top_subjects
# Create convenience columns for top subjects of interest, further denormalizing the dataset
for subject in top_subjects:
    df[subject] = df['Subjects'].str.contains(subject, regex=False)
# Drop unneeded columns and clean up column names
df = df.drop(columns=['UsageClass', 'CheckoutType'])
df.columns = [col.lower() for col in df.columns]
df = df.rename(columns = {'materialtype': 'material_type',
'checkoutyear': 'checkout_year',
'checkoutmonth': 'checkout_month',
'publicationyear': 'publication_year',
'juvenile fiction': 'juvenile_fiction',
'biography & autobiography': 'biography_autobiography',
'science fiction': 'science_fiction',
'historical fiction': 'historical_fiction'})
df.head()
# Split up the audiobook and ebook tables
# (filter on material_type first, then drop it from each split)
dfa = df.loc[df['material_type'] == 'AUDIOBOOK'].drop(columns='material_type')
dfe = df.loc[df['material_type'] == 'EBOOK'].drop(columns='material_type')
```
### Store the cleaned-up data
To get an API key to connect to bit.io, go [here](https://bit.io/bitdotio/seattle_library) (sign up for a free account if needed, and click "connect" above the data preview).
```
# Create connection to bit.io, you will need your own API key to connect
bpd = BitDotIOPandas(username="bitdotio", repo="seattle_library")
# Upload audiobook records in chunks, if you'd like to write to bit.io you'll need to point this at a repo that you can write to
if 'audiobook_checkouts_by_title_test' in bpd.list_tables():
bpd.delete_table('audiobook_checkouts_by_title_test')
bpd.to_table(dfa, 'audiobook_checkouts_by_title_test', chunksize=50000)
# Upload ebook records in chunks, if you'd like to write to bit.io you'll need to point this at a repo that you can write to
if 'ebook_checkouts_by_title_test' in bpd.list_tables():
bpd.delete_table('ebook_checkouts_by_title_test')
bpd.to_table(dfe, 'ebook_checkouts_by_title_test', chunksize=50000)
```
### Next, we continue our work in summer_reading_analysis.ipynb
| github_jupyter |
### What does XML stand for?
XML stands for Extensible Markup Language
### What is a markup language?
According to _Wikipedia_
> In computer text processing, a markup language is a system for annotating a document in a way that is syntactically distinguishable from the text, meaning when the document is processed for display, the markup language is not shown, and is only used to format the text.
### What is the purpose of XML?
- XML is used for sharing data.
- It transmits data in a structured format.
### Basic Terminology
#### Tag
Generally, a string that begins with `<` and ends with `>`.
#### Types of tags
- start-tag, such as `<section>`
- end-tag, such as `</section>`
- empty-element tag, such as `<line-break />`
#### Element
- Logical document component that either begins with a start-tag and ends with a matching end-tag
OR
- consists only of an empty-element tag.
##### Examples
- `<greeting>Hello, world!</greeting>`.
- `<line-break />`.
#### Attribute
- A name-value pair that exists within a start-tag or empty-element tag.
##### Examples
- `<img src="foo.jpg" alt="foo" />`
- `<div class="inner outer-box"> FooBar</div>` (an attribute with a list of values)
### Example
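A small, made-up document ties the terminology together (the tag names and values below are invented purely for illustration):

```python
import xml.etree.ElementTree as ET

# A made-up XML document illustrating tags, elements, and attributes
val = """<library>
  <book id="1" genre="fiction"><title>Foo</title></book>
  <book id="2" genre="history"><title>Bar</title></book>
</library>"""

root = ET.fromstring(val)
print(root.tag)                                # the root element's tag
for book in root:                              # iterate over the child elements
    print(book.get('id'), book.find('title').text)
```

Here `<library>`/`</library>` form a start/end tag pair, each `<book ...>` element carries `id` and `genre` attributes, and the `<title>` elements wrap text content.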
### Basic Parsing
```
import xml.etree.ElementTree as ET
# for parsing a document from a string (`val` holds the XML text)
root = ET.fromstring(val)
# for parsing from a file
root = ET.parse('file.xml').getroot()
```
#### Getting Interesting Elements
```
for child in root.iter():
print(child.tag, child.attrib)
for neighbor in root.iter('neighbor'):
print(neighbor.tag, neighbor.attrib)
```
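Besides `iter`, the `find`/`findall` methods and the `.get`/`.text` accessors cover most lookups; a sketch on a small invented document:

```python
import xml.etree.ElementTree as ET

# Illustrative document (element and attribute names are invented)
root = ET.fromstring(
    '<data>'
    '<country name="A"><rank>1</rank><neighbor name="B"/></country>'
    '<country name="C"><rank>2</rank><neighbor name="A"/></country>'
    '</data>'
)

for country in root.findall('country'):   # matches direct children only
    name = country.get('name')            # attribute access
    rank = country.find('rank').text      # text of the first matching child
    print(name, rank)
```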
### More realistic example
```
import urllib.request
URL = 'https://www.hackadda.com/latest/feed/'
# fetch the feed from this URL and extract some interesting data,
# e.g. entry URLs that contain 'django' in them
# (assumes an RSS-style feed where <link> elements hold the URL as text)
root = ET.fromstring(urllib.request.urlopen(URL).read())
print([l.text for l in root.iter('link') if l.text and 'django' in l.text])
```
#### Iterating through every element
```
import xml.etree.ElementTree as ET
tree = ET.iterparse('data.xml')
# first element is event
for _, ele in tree:
print(ele.tag, ele.attrib)
```
### SAX (Simple API for XML)
- This can be used to parse XML element by element, as a stream of events, without building the whole document in memory.
- In Python it is generally slower and clumsier to use than `ET.iterparse`, which is usually preferred.
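A minimal `xml.sax` sketch (the element names are made up): the parser streams through the document and calls the handler for each event, never building a full tree:

```python
import xml.sax
from io import BytesIO

class CountHandler(xml.sax.ContentHandler):
    """Counts start-tags per element name while the parser streams the input."""
    def __init__(self):
        super().__init__()
        self.counts = {}

    def startElement(self, name, attrs):
        self.counts[name] = self.counts.get(name, 0) + 1

handler = CountHandler()
xml.sax.parse(BytesIO(b'<root><item/><item/><other/></root>'), handler)
print(handler.counts)  # {'root': 1, 'item': 2, 'other': 1}
```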
### DOM (Document Object Model) API
- DOM is a cross-language API from the W3C (World Wide Web Consortium) for accessing and modifying XML documents.
```
from xml.dom.minidom import parse

# `source` is a filename or an open file containing the XML document
tree = parse(source)
countries = tree.getElementsByTagName('country')
for country in countries:
    tag = country.tagName          # element name, e.g. 'country'
    children = country.childNodes  # list of this element's child nodes
    print('name:', country.getAttribute('name'))
```
#### Side notes
- DOM loads the entire document into memory, so `xml.etree.ElementTree` is generally preferred over it.
### Security Considerations
From the official `python` documentation
<div class="alert alert-danger">
> Warning: The XML modules are not secure against erroneous or maliciously constructed data. If you need to parse untrusted or unauthenticated data see the [XML vulnerabilities](https://docs.python.org/3/library/xml.html#xml-vulnerabilities) and The [defusedxml](https://docs.python.org/3/library/xml.html#defusedxml-package) Package sections.
</div>
### Other useful parsing libraries
- [`xmltodict`](https://github.com/martinblech/xmltodict)
  - converts XML to a JSON-like dictionary
- [`untangle`](https://github.com/stchris/untangle)
  - converts XML to a Python-like object
| github_jupyter |
Lambda School Data Science
*Unit 2, Sprint 2, Module 3*
---
# Cross-Validation
## Assignment
- [x] [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.
- [x] Continue to participate in our Kaggle challenge.
- [x] Use scikit-learn for hyperparameter optimization with RandomizedSearchCV.
- [x] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
- [x] Commit your notebook to your fork of the GitHub repo.
You won't be able to just copy from the lesson notebook to this assignment.
- Because the lesson was ***regression***, but the assignment is ***classification.***
- Because the lesson used [TargetEncoder](https://contrib.scikit-learn.org/categorical-encoding/targetencoder.html), which doesn't work as-is for _multi-class_ classification.
So you will have to adapt the example, which is good real-world practice.
1. Use a model for classification, such as [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)
2. Use hyperparameters that match the classifier, such as `randomforestclassifier__ ...`
3. Use a metric for classification, such as [`scoring='accuracy'`](https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values)
4. If you’re doing a multi-class classification problem — such as whether a waterpump is functional, functional needs repair, or nonfunctional — then use a categorical encoding that works for multi-class classification, such as [OrdinalEncoder](https://contrib.scikit-learn.org/categorical-encoding/ordinal.html) (not [TargetEncoder](https://contrib.scikit-learn.org/categorical-encoding/targetencoder.html))
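A minimal sketch of steps 1-3 on synthetic data — note that scikit-learn's own `OrdinalEncoder` is used here as a stand-in for the category_encoders version, and the features, labels, and search grid are placeholders, not the competition's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder

# Tiny synthetic multi-class problem with categorical features
rng = np.random.RandomState(42)
X = rng.choice(['a', 'b', 'c'], size=(200, 3))
y = rng.choice(['functional', 'needs repair', 'non functional'], size=200)

pipeline = make_pipeline(OrdinalEncoder(), RandomForestClassifier(random_state=42))
search = RandomizedSearchCV(
    pipeline,
    param_distributions={'randomforestclassifier__n_estimators': [10, 20, 30]},
    n_iter=3, cv=3, scoring='accuracy', random_state=42,
)
search.fit(X, y)
print(search.best_params_)
```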
## Stretch Goals
### Reading
- Jake VanderPlas, [Python Data Science Handbook, Chapter 5.3](https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html), Hyperparameters and Model Validation
- Jake VanderPlas, [Statistics for Hackers](https://speakerdeck.com/jakevdp/statistics-for-hackers?slide=107)
- Ron Zacharski, [A Programmer's Guide to Data Mining, Chapter 5](http://guidetodatamining.com/chapter5/), 10-fold cross validation
- Sebastian Raschka, [A Basic Pipeline and Grid Search Setup](https://github.com/rasbt/python-machine-learning-book/blob/master/code/bonus/svm_iris_pipeline_and_gridsearch.ipynb)
- Peter Worcester, [A Comparison of Grid Search and Randomized Search Using Scikit Learn](https://blog.usejournal.com/a-comparison-of-grid-search-and-randomized-search-using-scikit-learn-29823179bc85)
### Doing
- Add your own stretch goals!
- Try other [categorical encodings](https://contrib.scikit-learn.org/categorical-encoding/). See the previous assignment notebook for details.
- In addition to `RandomizedSearchCV`, scikit-learn has [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html). Another library called scikit-optimize has [`BayesSearchCV`](https://scikit-optimize.github.io/notebooks/sklearn-gridsearchcv-replacement.html). Experiment with these alternatives.
- _[Introduction to Machine Learning with Python](http://shop.oreilly.com/product/0636920030515.do)_ discusses options for "Grid-Searching Which Model To Use" in Chapter 6:
> You can even go further in combining GridSearchCV and Pipeline: it is also possible to search over the actual steps being performed in the pipeline (say whether to use StandardScaler or MinMaxScaler). This leads to an even bigger search space and should be considered carefully. Trying all possible solutions is usually not a viable machine learning strategy. However, here is an example comparing a RandomForestClassifier and an SVC ...
The example is shown in [the accompanying notebook](https://github.com/amueller/introduction_to_ml_with_python/blob/master/06-algorithm-chains-and-pipelines.ipynb), code cells 35-37. Could you apply this concept to your own pipelines?
### BONUS: Stacking!
Here's some code you can use to "stack" multiple submissions, which is another form of ensembling:
```python
import pandas as pd
# Filenames of your submissions you want to ensemble
files = ['submission-01.csv', 'submission-02.csv', 'submission-03.csv']
target = 'status_group'
submissions = (pd.read_csv(file)[[target]] for file in files)
ensemble = pd.concat(submissions, axis='columns')
majority_vote = ensemble.mode(axis='columns')[0]
sample_submission = pd.read_csv('sample_submission.csv')
submission = sample_submission.copy()
submission[target] = majority_vote
submission.to_csv('my-ultimate-ensemble-submission.csv', index=False)
```
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
import pandas as pd
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
import category_encoders as ce
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.impute import KNNImputer
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import cross_val_score
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
train.shape, test.shape
def wrangle(X):
"""Wrangle train, validate, and test sets in the same way"""
# Prevent SettingWithCopyWarning
X = X.copy()
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these values like zero.
X['latitude'] = X['latitude'].replace(-2e-08, 0)
# When columns have zeros and shouldn't, they are like null values.
# So we will replace the zeros with nulls, and impute missing values later.
# Also create a "missing indicator" column, because the fact that
# values are missing may be a predictive signal.
cols_with_zeros = ['longitude', 'latitude', 'construction_year',
'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
X[col+'_MISSING'] = X[col].isnull()
# Drop duplicate columns
duplicates = ['quantity_group', 'payment_type']
X = X.drop(columns=duplicates)
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
X['years_MISSING'] = X['years'].isnull()
# return the wrangled dataframe
return X
train = wrangle(train)
test = wrangle(test)
train
target = 'status_group'
features = train.columns.drop('status_group')
X_train = train[features]
y_train = train[target]
X_test = test[features]
%config IPCompleter.greedy=True
pipeline = make_pipeline(
ce.OrdinalEncoder(),
KNNImputer(n_neighbors=3),
RandomForestClassifier(n_jobs=-2, random_state = 42)
)
k = 5
scores = cross_val_score(pipeline, X_train, y_train, cv=k,
scoring='accuracy')
print(f'Accuracy for {k} folds:', scores)
# That took a crazy amount of time, wasn't even any improvement haha
pipeline.named_steps.randomforestclassifier
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(random_state = 42)
)
param_distributions = {
'simpleimputer__strategy': ['mean', 'median'],
'randomforestclassifier__n_estimators' : [10,15,30,44],
'randomforestclassifier__min_samples_leaf' : [5,10,30]
}
search = RandomizedSearchCV(
pipeline,
param_distributions=param_distributions,
    n_iter=24,  # the grid only contains 2 * 4 * 3 = 24 combinations
cv=5,
scoring='accuracy',
verbose=10,
return_train_score=True,
n_jobs=-1
)
search.fit(X_train, y_train);
print('Best hyperparameters', search.best_params_)
print('Cross-validation accuracy', search.best_score_)
# must look into cleaning up the wrangling
```
| github_jupyter |
The normal distribution is most commonly used to model stock market returns. However, the market is well known to exhibit rare disastrous events (black-swan events). To incorporate these into the model, a fat-tailed distribution is used.
There are various fat-tailed distributions, such as Student's t-distribution, the Pareto distribution, the exponential distribution, and many more. This article explores the normal and t-distributions, using S&P 500 monthly data from 1871 to 2018. The data comes from Robert Shiller and was retrieved from <a href="https://datahub.io/core/s-and-p-500">here</a>.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats as stats
sns.set()
sp500 = pd.read_csv("dataset/sp500.csv",index_col='Date',parse_dates=True)
print(sp500.info())
sp500['return'] = sp500['SP500'].pct_change()
ret_arr = sp500['return'].values
# remove nan
ret_arr = ret_arr[~np.isnan(ret_arr)]
mu = np.mean(ret_arr)
sigma = np.std(ret_arr)
print(ret_arr)
print("Average monthly return: {}".format(mu))
print("Standard deviation of monthly return: {}".format(sigma))
num_bins = 300
x = np.linspace(np.min(ret_arr), np.max(ret_arr), num_bins)
normal_dist = stats.norm.pdf(x, mu, sigma)
t_dist = stats.t.pdf(x, 1, mu, sigma)
fig,ax = plt.subplots(nrows=2, ncols=1, figsize=(12,12))
ax[0].hist(ret_arr, bins=x, density=True)
ax[0].set_title('SP500 monthly return density', size='large')
ax[1].plot(x, normal_dist, label='normal distribution')
ax[1].plot(x, t_dist, label='t-distribution')
ax[1].set_title("Normal distribution pdf & Student's t-distribution pdf (df = 1)",size='large')
ax[1].legend()
plt.tight_layout()
```
The normal distribution takes the mean and standard deviation as inputs: $r \sim N(\mu, \sigma)$. The t-distribution takes the mean, standard deviation, and degrees of freedom (df) as inputs: $r \sim T(\mu, \sigma, \nu)$. The t-distribution above uses df = 1 and clearly has a fatter tail than the normal distribution. As df increases, the tail of the t-distribution gets thinner, and it converges to the normal distribution as df approaches infinity:
$$\lim_{\nu \to \infty} T(\mu, \sigma, \nu) = N(\mu, \sigma)$$
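This limit can be checked numerically with just the standard library; below, the standard normal pdf and the (standardized) Student's t pdf are compared, with df = 200 as an arbitrary stand-in for "large":

```python
import math

def norm_pdf(x):
    # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def t_pdf(x, df):
    # Student's t density, using log-gamma to avoid overflow for large df
    log_c = (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
             - 0.5 * math.log(df * math.pi))
    return math.exp(log_c - (df + 1) / 2 * math.log1p(x * x / df))

for x in (0.0, 1.0, 2.5):
    print(x, norm_pdf(x), t_pdf(x, 200))  # the two densities nearly coincide
```

For a small df such as 5, `t_pdf(2.5, 5)` is roughly twice `norm_pdf(2.5)`, which is the fat tail in action.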
```
fig,ax = plt.subplots(nrows=1, ncols=1, figsize=(12,8))
ax.hist(ret_arr, bins=x, density=True, label='S&P 500 returns')
ax.plot(x, normal_dist, label='normal distribution')
ax.plot(x, t_dist, label='t-distribution')
ax.set_title('Comparison with normal and t-distribution', size='large')
ax.axvline(x=mu+2.5*sigma, color='r', linestyle='--', label=r'$\mu \pm 2.5\sigma$')
ax.axvline(x=mu-2.5*sigma, color='r', linestyle='--')
ax.legend()
plt.tight_layout()
tail_count = len(ret_arr[ret_arr > (mu + 2.5*sigma)]) + len(ret_arr[ret_arr < (mu - 2.5*sigma)])
print("Count of return at tail: ",tail_count)
print("Percent of tail return: {} %".format(tail_count/len(ret_arr)*100))
```
Most of the middle portion of stock market returns can be described by the normal distribution, while the tails are better described by the t-distribution. Hence, in most cases (about 98% of the time) stock market returns can be modelled with the normal distribution, while about 2% of the time the market is better modelled with the t-distribution. Though 2% seems like a tiny probability, over a large sample that 2% will certainly occur, and when it does it may harm (depending on the position direction) the unprepared. It is therefore prudent not to mistake low probability for impossibility.
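For reference, the normal model itself puts about 1.24% of its mass outside $\mu \pm 2.5\sigma$, which the standard library can compute; comparing this with the empirical tail fraction above shows how much extra mass real returns carry in the tails:

```python
import math

# Two-sided normal tail probability beyond 2.5 standard deviations:
# P(|Z| > 2.5) = 2 * (1 - Phi(2.5)) = erfc(2.5 / sqrt(2))
p_tail = math.erfc(2.5 / math.sqrt(2))
print(f"{p_tail:.4%}")  # about 1.24%
```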
Thus, there is no perfect way to model stock market returns:
- Modelling with a fat-tailed distribution incurs a lot of opportunity cost because you are being conservative, but you are safeguarded from tail risk.
- Modelling with the normal distribution will be accurate most of the time, but you are exposed to tail risk, which can be catastrophic.
In conclusion, knowing when and which distribution to use is what differentiates the sophisticated trader from the average one.
| github_jupyter |
```
# Importing all dependencies needed to conduct analysis
import matplotlib.pyplot as plt
from matplotlib import colors
from matplotlib.ticker import PercentFormatter
import pandas as pd
import numpy as np
from password import password
from sqlalchemy import create_engine
engine = create_engine(f'postgresql://postgres:{password}@localhost/EmployeeSQL')
# Accessing SQL database table named 'employees'
connection = engine.connect()
data1 = pd.read_sql("SELECT * FROM Employee_info.employees", connection)
data1.head()
# Accessing SQL database table named 'salaries'
data2 = pd.read_sql("SELECT * FROM Employee_info.salaries", connection)
# data2.describe()
data2.head()
# Accessing SQL database table named 'titles'
data3 = pd.read_sql("SELECT * FROM Employee_info.titles", connection)
data3.head()
# Creating an exploratory histogram of salary frequency in the database
data2['salary'].plot.hist(title="Salary by Employee")
# Merging 'employee' table with 'salary' table
merged_data12 = pd.merge(data1, data2, how = 'left', on = "emp_no")
merged_data12.head()
# Renaming 'emp_title_id' to 'title_id' to be able to merge with 'title' table
merged_data12 = merged_data12.rename(columns={"emp_title_id":"title_id"})
merged_data12.head()
# Merging 'employee/salary' table with 'title' table
merged_data123 = pd.merge(merged_data12, data3, how = 'left', on = "title_id")
merged_data123.head()
# Extracting 'title' and 'salary' columns into a dataframe
salary_title = merged_data123[["title","salary"]]
salary_title = salary_title.set_index("title")
# Grouping by 'title' and summarizing groupby by mean salary of each title, sorted desc
avg_salary_title = salary_title.groupby(by="title").mean().sort_values("salary")
avg_salary_title.head()
# Creating a bar graph of average salaries by title
# Looks like fake data, because salaries that are supposed to be higher,
# like "Senior Engineer", come out lower than "Staff"
avg_salary_title.plot(kind='bar', title="Average Salary by Title")
plt.xlabel("Employee Titles")
plt.ylabel("Average Salary ($)")
plt.savefig("Images/Average_Salary_by_Title.png")
# Searching for my salary on the database, employee # 499942
# I find my name is "April Foolsday" and I realized the data is fake!!!
my_salary = merged_data123.loc[merged_data123["emp_no"] == 499942]
my_salary
```
| github_jupyter |
### 3. Tackle the Titanic dataset
```
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
# Set the figure save path; a helper function is defined below and called later
PROJECT_ROOT_DIR = r"F:\ML\Machine learning\Hands-on machine learning with scikit-learn and tensorflow"
CHAPTER_ID = "Classification_MNIST_03"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
# Ignore useless warnings (see SciPy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", message="^internal gelsd")
```
The **goal** is to predict whether or not a passenger survived based on attributes such as their age, sex, passenger class, and where they embarked.
* First, log in to Kaggle, go to the Titanic challenge, and download train.csv and test.csv. Save them to the datasets/titanic directory.
* Next, let's load the data:
```
import os
TITANIC_PATH = os.path.join("datasets", "titanic")
import pandas as pd
def load_titanic_data(filename, titanic_path=TITANIC_PATH):
csv_path = os.path.join(titanic_path, filename)
return pd.read_csv(csv_path)
train_data = load_titanic_data("train.csv")
test_data = load_titanic_data("test.csv")
```
The data is already split into a training set and a test set. However, the test data does not contain the labels:
* your goal is to train the best model you can using the training data,
* then make predictions on the test data and upload them to Kaggle to see your final score.
Let's take a peek at the top few rows of the training set:
```
train_data.head()
```
* **Survived**: that's the target; 0 means the passenger did not survive, while 1 means he/she survived.
* **Pclass**: passenger class.
* **Name, Sex, Age**: self-explanatory.
* **SibSp**: how many siblings and spouses of the passenger were aboard the Titanic.
* **Parch**: how many children and parents of the passenger were aboard the Titanic.
* **Ticket**: ticket id.
* **Fare**: price paid (in pounds).
* **Cabin**: the passenger's cabin number.
* **Embarked**: where the passenger embarked the Titanic.
```
train_data.info()
```
Okay, the **Age, Cabin** and **Embarked** attributes are sometimes null (less than 891 non-null), especially the **Cabin** (77% are null). We will **ignore the Cabin for now and focus on the rest**. The **Age** attribute has about 19% null values, so we will need to decide what to do with them.
* Replacing null values with the median age seems reasonable.
The **Name** and **Ticket** attributes may have some value, but they will be a bit tricky to convert into useful numbers that a model can consume. So for now, we will **ignore them**.
Let's take a look at the **numerical attributes**:
```
train_data.describe()
# only in a Jupyter notebook
# another quick way to get a feel for the data is to plot a histogram for each numerical attribute
%matplotlib inline
import matplotlib.pyplot as plt
train_data.hist(bins=50, figsize=(20,15))
plt.show()
```
* Only 38% survived. :( That's close enough to 40%, so accuracy will be a reasonable metric to evaluate our model.
* The mean fare was £32.20, which does not sound too expensive (but it was probably a lot of money back then).
* The mean age was less than 30 years old.
Let's check that the target is indeed 0 or 1:
```
train_data["Survived"].value_counts()
```
Now let's take a quick look at all the categorical attributes:
```
train_data["Pclass"].value_counts()
train_data["Sex"].value_counts()
train_data["Embarked"].value_counts()
```
The **Embarked** attribute tells us where the passenger embarked: C = Cherbourg, Q = Queenstown, S = Southampton.
Now let's build our preprocessing pipelines. We will reuse the DataFrameSelector we built in the previous chapter to select specific attributes from the DataFrame:
```
from sklearn.base import BaseEstimator, TransformerMixin
# A class to select numerical or categorical columns
# since Scikit-Learn doesn't handle DataFrames yet
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names]
```
Let's build the pipeline for the numerical attributes:
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer
imputer = Imputer(strategy="median")
num_pipeline = Pipeline([
("select_numeric", DataFrameSelector(["Age", "SibSp", "Parch", "Fare"])),
("imputer", Imputer(strategy="median")),
])
num_pipeline.fit_transform(train_data)
```
We will also need an imputer for the string categorical columns (the regular `Imputer` does not work on those):
```
# Inspired from stackoverflow.com/questions/25239958
class MostFrequentImputer(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
self.most_frequent_ = pd.Series([X[c].value_counts().index[0] for c in X],
index=X.columns)
return self
def transform(self, X, y=None):
return X.fillna(self.most_frequent_)
```
We can convert each categorical value to a **one-hot vector** using **OneHotEncoder**.
This class used to handle only integer categorical inputs; since Scikit-Learn 0.20 it also handles string categorical inputs (see PR #10521), so the cell below imports it directly from sklearn.preprocessing:
```
from sklearn.preprocessing import OneHotEncoder
```
Now we can build the pipeline for the categorical attributes:
```
cat_pipeline = Pipeline([
("select_cat", DataFrameSelector(["Pclass", "Sex", "Embarked"])),
("imputer", MostFrequentImputer()),
("cat_encoder", OneHotEncoder(sparse=False)),
])
cat_pipeline.fit_transform(train_data)
```
Finally, let's join the numerical and categorical pipelines:
```
from sklearn.pipeline import FeatureUnion
preprocess_pipeline = FeatureUnion(transformer_list=[
("num_pipeline", num_pipeline),
("cat_pipeline", cat_pipeline),
])
```
Now we have a nice preprocessing pipeline that takes the raw data and outputs numerical input features that we can feed to any Machine Learning model we want.
```
X_train = preprocess_pipeline.fit_transform(train_data)
X_train
```
Let's not forget to get the labels:
```
y_train = train_data["Survived"]
```
We are now ready to train a classifier. Let's start with an SVC:
```
from sklearn.svm import SVC
svm_clf = SVC()
svm_clf.fit(X_train, y_train)
```
The model is trained; let's use it to make predictions on the test set:
```
X_test = preprocess_pipeline.transform(test_data)
y_pred = svm_clf.predict(X_test)
```
Now we could:
* build a CSV file with these predictions (respecting the format expected by Kaggle),
* then upload it and hope for the best.
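That submission step can be sketched as follows. This is a minimal illustration, not part of the original exercise: `PassengerId` and `Survived` are the column names the Titanic competition expects, and you would pass in the `test_data` ids and the `y_pred` predictions from the previous cells.

```python
import pandas as pd

def make_submission(passenger_ids, predictions, path="submission.csv"):
    # Kaggle expects a two-column CSV: one id column and one prediction column.
    submission = pd.DataFrame({
        "PassengerId": passenger_ids,
        "Survived": predictions,
    })
    submission.to_csv(path, index=False)
    return submission

# e.g. make_submission(test_data["PassengerId"], y_pred)
```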
But wait! We can do better than hope. Why don't we use cross-validation to get an idea of how good our model is?
```
from sklearn.model_selection import cross_val_score
svm_scores = cross_val_score(svm_clf, X_train, y_train, cv=10)
svm_scores.mean()
```
Okay, over 73% accuracy, clearly better than random chance, but it's not a great score. Looking at the leaderboard for the Titanic competition on Kaggle, you can see that you need to reach above 80% accuracy to be within the top 10% Kagglers. Some reached 100%, but since you can easily find the list of Titanic victims, it seems likely that little Machine Learning was involved in their performance! ;-) So let's try to build a model that reaches 80% accuracy.
Let's try a **RandomForestClassifier**:
```
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
forest_scores = cross_val_score(forest_clf, X_train, y_train, cv=10)
forest_scores.mean()
```
That's much better!
* Instead of just looking at the mean accuracy across the 10 cross-validation folds, let's plot all 10 scores for each model,
* along with a box plot highlighting the lower and upper quartiles, and "whiskers" showing the extent of the scores (thanks to Nevin Yilmaz for suggesting this visualization).
Note that the **boxplot() function** detects outliers (called "fliers") and does not include them within the whiskers. Specifically:
* if the lower quartile is $Q_1$ and the upper quartile is $Q_3$,
* then the interquartile range is $IQR = Q_3 - Q_1$ (that's the box's height),
* and any score lower than $Q_1 - 1.5 \times IQR$ is an **outlier**, and so is any score greater than $Q_3 + 1.5 \times IQR$.
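As a quick sanity check of that rule, here is a small sketch (with made-up scores) that computes the whisker fences the same way:

```python
import numpy as np

def iqr_fences(scores):
    # Fliers are scores outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
    q1, q3 = np.percentile(scores, [25, 75])
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

scores = np.array([0.70, 0.72, 0.73, 0.74, 0.75, 0.76, 0.95])
low, high = iqr_fences(scores)
outliers = scores[(scores < low) | (scores > high)]  # only 0.95 lies outside the fences
```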
```
plt.figure(figsize=(8, 4))
plt.plot([1]*10, svm_scores, ".")
plt.plot([2]*10, forest_scores, ".")
plt.boxplot([svm_scores, forest_scores],
labels=("SVM","Random Forest"))
plt.ylabel("Accuracy", fontsize=14)
plt.show()
```
To improve this result further, you could: compare many more models and tune hyperparameters using cross-validation and grid search, or do more feature engineering, for example:
* replace SibSp and Parch with their sum,
* try to identify parts of names that correlate well with the Survived attribute (e.g., if the name contains "Countess", then survival seems more likely),
* try to convert numerical attributes to categorical attributes: for example,
* different age groups had very different survival rates (see below), so it may help to create an age bucket category and use it instead of the age,
* similarly, it may be useful to have a special category for people traveling alone, since only 30% of them survived (see below).
```
train_data["AgeBucket"] = train_data["Age"] // 15 * 15
train_data[["AgeBucket", "Survived"]].groupby(['AgeBucket']).mean()
train_data["RelativesOnboard"] = train_data["SibSp"] + train_data["Parch"]
train_data[["RelativesOnboard", "Survived"]].groupby(['RelativesOnboard']).mean()
```
### 4. Spam classifier
Download examples of spam and ham from Apache SpamAssassin's public datasets.
* Unzip the datasets and familiarize yourself with the data format.
* Split the datasets into a training set and a test set.
* Write a data preparation pipeline to convert each email into a feature vector. Your preparation pipeline should transform an email into a (sparse) vector indicating the presence or absence of each possible word. For example, if all emails only ever contain four words,
"Hello," "how," "are," "you,"
then the email "Hello you Hello Hello you" would be converted into a vector [1, 0, 0, 1]
(meaning ["Hello" is present, "how" is absent, "are" is absent, "you" is present]),
or [3, 0, 0, 2] if you prefer to count the number of occurrences of each word.
* You may want to add hyperparameters to your preparation pipeline to control whether or not to strip off email headers, convert each email to lowercase, remove punctuation, replace all URLs with "URL", replace all numbers with "NUMBER", or even perform *stemming* (i.e., trim off word endings; there are Python libraries available to do this).
* Then try out several classifiers and see if you can build a great spam classifier, with both high recall and high precision.
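As a tiny illustration of that vector representation (using the four-word vocabulary from the example above):

```python
from collections import Counter

vocabulary = ["Hello", "how", "are", "you"]

def to_vector(email_text, use_counts=False):
    # Presence/absence vector by default, or a word-count vector if use_counts=True.
    word_counts = Counter(email_text.split())
    if use_counts:
        return [word_counts[word] for word in vocabulary]
    return [int(word_counts[word] > 0) for word in vocabulary]

to_vector("Hello you Hello Hello you")                   # -> [1, 0, 0, 1]
to_vector("Hello you Hello Hello you", use_counts=True)  # -> [3, 0, 0, 2]
```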
First, let's fetch the data:
```
import os
import tarfile
from six.moves import urllib
DOWNLOAD_ROOT = "http://spamassassin.apache.org/old/publiccorpus/"
HAM_URL = DOWNLOAD_ROOT + "20030228_easy_ham.tar.bz2"
SPAM_URL = DOWNLOAD_ROOT + "20030228_spam.tar.bz2"
SPAM_PATH = os.path.join("datasets", "spam")
def fetch_spam_data(spam_url=SPAM_URL, spam_path=SPAM_PATH):
    if not os.path.isdir(spam_path):
        os.makedirs(spam_path)
    for filename, url in (("ham.tar.bz2", HAM_URL), ("spam.tar.bz2", SPAM_URL)):
        path = os.path.join(spam_path, filename)
        if not os.path.isfile(path):
            urllib.request.urlretrieve(url, path)
        tar_bz2_file = tarfile.open(path)
        tar_bz2_file.extractall(path=SPAM_PATH)
        tar_bz2_file.close()
fetch_spam_data()
```
Next, let's load all the emails:
```
HAM_DIR = os.path.join(SPAM_PATH, "easy_ham")
SPAM_DIR = os.path.join(SPAM_PATH, "spam")
ham_filenames = [name for name in sorted(os.listdir(HAM_DIR)) if len(name) > 20]
spam_filenames = [name for name in sorted(os.listdir(SPAM_DIR)) if len(name) > 20]
len(ham_filenames)
len(spam_filenames)
```
We can use Python's email module to parse these emails (this handles headers, encoding, and so on):
```
import email
import email.policy
def load_email(is_spam, filename, spam_path=SPAM_PATH):
    directory = "spam" if is_spam else "easy_ham"
    with open(os.path.join(spam_path, directory, filename), "rb") as f:
        return email.parser.BytesParser(policy=email.policy.default).parse(f)
ham_emails = [load_email(is_spam=False, filename=name) for name in ham_filenames]
spam_emails = [load_email(is_spam=True, filename=name) for name in spam_filenames]
```
Let's look at one example of ham and one example of spam, to get a feel of what the data looks like:
```
print(ham_emails[1].get_content().strip())
```
Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/
```
print(spam_emails[6].get_content().strip())
```
Some emails are actually multipart, with images and attachments (which can have their own attachments). Let's look at the various types of structures we have:
```
def get_email_structure(email):
    if isinstance(email, str):
        return email
    payload = email.get_payload()
    if isinstance(payload, list):
        return "multipart({})".format(", ".join([
            get_email_structure(sub_email)
            for sub_email in payload
        ]))
    else:
        return email.get_content_type()

from collections import Counter

def structures_counter(emails):
    structures = Counter()
    for email in emails:
        structure = get_email_structure(email)
        structures[structure] += 1
    return structures
structures_counter(ham_emails).most_common()
structures_counter(spam_emails).most_common()
```
It seems that the ham emails are more often plain text, while spam has quite a lot of HTML. Moreover, quite a few ham emails are signed using PGP, while no spam is. In short, it seems that the email structure is useful information to have.
Now let's take a look at the email headers:
```
for header, value in spam_emails[0].items():
    print(header, ":", value)
```
There's probably a lot of useful information in there, such as the sender's email address (12a1mailbot1@web.de looks fishy), but we will just focus on the Subject header:
```
spam_emails[0]["Subject"]
```
Okay, before we learn too much about the data, let's not forget to split it into a training set and a test set:
```
import numpy as np
from sklearn.model_selection import train_test_split
X = np.array(ham_emails + spam_emails)
y = np.array([0] * len(ham_emails) + [1] * len(spam_emails))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```
Okay, let's start writing the preprocessing functions. First, we will need a function to convert HTML to plain text. Arguably the best way to do this would be to use the great BeautifulSoup library, but I would like to avoid adding another dependency to this project, so let's hack a quick & dirty solution using regular expressions (at the risk of un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment). The following function first drops the `<head>` section, then converts all `<a>` tags to the word HYPERLINK, then it gets rid of all HTML tags, leaving only the plain text. For readability, it also replaces multiple newlines with single newlines, and finally it unescapes HTML entities (such as `&gt;` or `&nbsp;`):
```
import re
from html import unescape
def html_to_plain_text(html):
    text = re.sub('<head.*?>.*?</head>', '', html, flags=re.M | re.S | re.I)
    text = re.sub(r'<a\s.*?>', ' HYPERLINK ', text, flags=re.M | re.S | re.I)
    text = re.sub('<.*?>', '', text, flags=re.M | re.S)
    text = re.sub(r'(\s*\n)+', '\n', text, flags=re.M | re.S)
    return unescape(text)
```
Let's see if it works. This is HTML spam:
```
html_spam_emails = [email for email in X_train[y_train==1]
                    if get_email_structure(email) == "text/html"]
sample_html_spam = html_spam_emails[7]
print(sample_html_spam.get_content().strip()[:1000], "...")
```
And this is the resulting plain text:
```
print(html_to_plain_text(sample_html_spam.get_content())[:1000], "...")
```
Great! Now let's write a function that takes an email as input and returns its content as plain text, whatever its format is:
```
def email_to_text(email):
    html = None
    for part in email.walk():
        ctype = part.get_content_type()
        if not ctype in ("text/plain", "text/html"):
            continue
        try:
            content = part.get_content()
        except: # in case of encoding issues
            content = str(part.get_payload())
        if ctype == "text/plain":
            return content
        else:
            html = content
    if html:
        return html_to_plain_text(html)
print(email_to_text(sample_html_spam)[:100], "...")
```
Let's throw in some stemming! For this to work, you need to install the Natural Language Toolkit (NLTK). It's as simple as running the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the --user option):
$ pip3 install nltk
```
try:
    import nltk
    stemmer = nltk.PorterStemmer()
    for word in ("Computations", "Computation", "Computing", "Computed", "Compute", "Compulsive"):
        print(word, "=>", stemmer.stem(word))
except ImportError:
    print("Error: stemming requires the NLTK module.")
    stemmer = None
```
We will also need a way to replace URLs with the word "URL". For this, we could use hard core regular expressions but we will just use the urlextract library. You can install it with the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the --user option):
$ pip3 install urlextract
```
try:
    import urlextract # may require an Internet connection to download root domain names
    url_extractor = urlextract.URLExtract()
    print(url_extractor.find_urls("Will it detect github.com and https://youtu.be/7Pq-S557XQU?t=3m32s"))
except ImportError:
    print("Error: replacing URLs requires the urlextract module.")
    url_extractor = None
```
We are ready to put all this together into a transformer that we will use to convert emails to word counters. Note that we split sentences into words using Python's split() method, which uses whitespaces for word boundaries. This works for many written languages, but not all. For example, Chinese and Japanese scripts generally don't use spaces between words, and Vietnamese often uses spaces even between syllables. It's okay in this exercise, because the dataset is (mostly) in English.
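The word-splitting step described above boils down to something like this:

```python
from collections import Counter

text = "the quick brown fox jumps over the lazy fox"
word_counts = Counter(text.split())  # split() uses whitespace as the word boundary
# word_counts["the"] -> 2, word_counts["fox"] -> 2, word_counts["quick"] -> 1
```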
```
from sklearn.base import BaseEstimator, TransformerMixin
class EmailToWordCounterTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, strip_headers=True, lower_case=True, remove_punctuation=True,
                 replace_urls=True, replace_numbers=True, stemming=True):
        self.strip_headers = strip_headers
        self.lower_case = lower_case
        self.remove_punctuation = remove_punctuation
        self.replace_urls = replace_urls
        self.replace_numbers = replace_numbers
        self.stemming = stemming
    def fit(self, X, y=None):
        return self
    def transform(self, X, y=None):
        X_transformed = []
        for email in X:
            text = email_to_text(email) or ""
            if self.lower_case:
                text = text.lower()
            if self.replace_urls and url_extractor is not None:
                urls = list(set(url_extractor.find_urls(text)))
                urls.sort(key=lambda url: len(url), reverse=True)
                for url in urls:
                    text = text.replace(url, " URL ")
            if self.replace_numbers:
                text = re.sub(r'\d+(?:\.\d*(?:[eE]\d+))?', 'NUMBER', text)
            if self.remove_punctuation:
                text = re.sub(r'\W+', ' ', text, flags=re.M)
            word_counts = Counter(text.split())
            if self.stemming and stemmer is not None:
                stemmed_word_counts = Counter()
                for word, count in word_counts.items():
                    stemmed_word = stemmer.stem(word)
                    stemmed_word_counts[stemmed_word] += count
                word_counts = stemmed_word_counts
            X_transformed.append(word_counts)
        return np.array(X_transformed)
```
Let's try this transformer on a few emails:
```
X_few = X_train[:3]
X_few_wordcounts = EmailToWordCounterTransformer().fit_transform(X_few)
X_few_wordcounts
```
This looks about right!
Now we have the word counts, and we need to convert them to vectors. For this, we will build another transformer whose fit() method will build the vocabulary (an ordered list of the most common words) and whose transform() method will use the vocabulary to convert word counts to vectors. The output is a sparse matrix.
```
from scipy.sparse import csr_matrix
class WordCounterToVectorTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, vocabulary_size=1000):
        self.vocabulary_size = vocabulary_size
    def fit(self, X, y=None):
        total_count = Counter()
        for word_count in X:
            for word, count in word_count.items():
                total_count[word] += min(count, 10)
        most_common = total_count.most_common()[:self.vocabulary_size]
        self.most_common_ = most_common
        self.vocabulary_ = {word: index + 1 for index, (word, count) in enumerate(most_common)}
        return self
    def transform(self, X, y=None):
        rows = []
        cols = []
        data = []
        for row, word_count in enumerate(X):
            for word, count in word_count.items():
                rows.append(row)
                cols.append(self.vocabulary_.get(word, 0))
                data.append(count)
        return csr_matrix((data, (rows, cols)), shape=(len(X), self.vocabulary_size + 1))
vocab_transformer = WordCounterToVectorTransformer(vocabulary_size=10)
X_few_vectors = vocab_transformer.fit_transform(X_few_wordcounts)
X_few_vectors
X_few_vectors.toarray()
```
What does this matrix mean? Well, the 64 in the third row, first column, means that the third email contains 64 words that are not part of the vocabulary. The 1 next to it means that the first word in the vocabulary is present once in this email. The 2 next to it means that the second word is present twice, and so on. You can look at the vocabulary to know which words we are talking about. The first word is "of", the second word is "and", etc.
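To make that interpretation concrete, here is a small hypothetical sketch that turns one row of such a matrix back into a description. The three-word vocabulary below is made up for illustration; column 0 is reserved for out-of-vocabulary words, mirroring the `vocabulary_` mapping above.

```python
vocabulary = {"of": 1, "and": 2, "to": 3}  # word -> column index (hypothetical)

def describe_row(row_values, vocabulary):
    # Invert the mapping so we can go from column index back to word.
    index_to_word = {index: word for word, index in vocabulary.items()}
    parts = ["{} out-of-vocabulary words".format(row_values[0])]
    for col, count in enumerate(row_values[1:], start=1):
        if count:
            parts.append("'{}' x{}".format(index_to_word[col], count))
    return ", ".join(parts)

describe_row([64, 1, 2, 0], vocabulary)
# -> "64 out-of-vocabulary words, 'of' x1, 'and' x2"
```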
```
vocab_transformer.vocabulary_
```
We are now ready to train our first spam classifier! Let's transform the whole dataset:
```
from sklearn.pipeline import Pipeline
preprocess_pipeline = Pipeline([
("email_to_wordcount", EmailToWordCounterTransformer()),
("wordcount_to_vector", WordCounterToVectorTransformer()),
])
X_train_transformed = preprocess_pipeline.fit_transform(X_train)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
log_clf = LogisticRegression(random_state=42)
score = cross_val_score(log_clf, X_train_transformed, y_train, cv=3, verbose=3)
score.mean()
```
Over 98.7%, not bad for a first try! :) However, remember that we are using the "easy" dataset. You can try with the harder datasets, the results won't be so amazing. You would have to try multiple models, select the best ones and fine-tune them using cross-validation, and so on.
But you get the picture, so let's stop now, and just print out the precision/recall we get on the test set:
```
from sklearn.metrics import precision_score, recall_score
X_test_transformed = preprocess_pipeline.transform(X_test)
log_clf = LogisticRegression(random_state=42)
log_clf.fit(X_train_transformed, y_train)
y_pred = log_clf.predict(X_test_transformed)
print("Precision: {:.2f}%".format(100 * precision_score(y_test, y_pred)))
print("Recall: {:.2f}%".format(100 * recall_score(y_test, y_pred)))
```
# Which Whiskey Kaggle Challenge
## Imports
```
import pandas as pd
import zipfile
import numpy as np
import gensim
import os
import re
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RandomizedSearchCV
from sklearn.decomposition import TruncatedSVD
from scipy.stats import randint, uniform
import seaborn as sns
import matplotlib.pyplot as plt
from bs4 import BeautifulSoup
import requests
import spacy
from spacy.tokenizer import Tokenizer
from collections import Counter
import squarify
import warnings
import pyLDAvis.gensim
```
## Create Dataframe
```
train = pd.read_csv('train.csv.zip')
test = pd.read_csv('test.csv')
train.head()
```
## Clean Dataframe
### Load Functions
```
nlp = spacy.load("en_core_web_lg")
def clean_soup(df_column, spec_chars_remove = []):
    """
    Input: dataframe column and list of specific characters to remove,
    Output: List of cleaned observations
    """
    soupy = [BeautifulSoup(df_column[ii], 'lxml').get_text()
             for ii in range(df_column.shape[0])]
    for char in spec_chars_remove:
        soupy = [soupy[ii].replace(char, ' ') for ii in range(len(soupy))]
    to_clean = ['[^A-Za-z ]+', ' ', ' ']
    for char in to_clean:
        soupy = [re.sub(char, ' ', soupy[ii]) for ii in range(len(soupy))]
    df_feature = pd.Series([nlp(soupy[ii].lower().strip()) for ii in range(len(soupy))])
    for row in range(df_feature.shape[0]):
        df_feature[row] = " ".join([token.lemma_ for token in df_feature[row]])
    return df_feature
def make_tokens(df_feature, addl_stop_words = ['-PRON-']):
    """
    Input: Column of a dataframe/ Pandas Series,
    stop words you'd like to add to nlp's defaults
    Output: List consisting of tokens for each observation
    Assumes: nlp object initialized as nlp
    """
    tokens = []
    tokenizer = Tokenizer(nlp.vocab)
    STOP_WORDS = nlp.Defaults.stop_words.union(addl_stop_words)
    for doc in tokenizer.pipe(df_feature, batch_size=500):
        doc_tokens = []
        for token in doc:
            if token.text not in STOP_WORDS:
                doc_tokens.append(token.text.lower())
        tokens.append(doc_tokens)
    return tokens
def whiskey_wrangle(df, stop_words = [], spec_chars = []):
    df = df.copy()
    df['description_redo'] = clean_soup(df['description'], spec_chars)
    df['description_tokens'] = make_tokens(df['description_redo'], stop_words)
    df['description_processed'] = df['description_tokens'].copy()
    for row in range(df['description_tokens'].shape[0]):
        df['description_processed'][row] = " ".join(df['description_tokens'][row])
    df = df.drop(columns = ['description'])  # drop() returns a copy, so assign it back
    return df
def count(docs):
    """
    Input: Series of spacy docs objects / dataframe column
    Output: Pandas dataframe consisting of words and their
    stats based on how many times they appear in the series
    """
    word_counts = Counter()
    appears_in = Counter()
    total_docs = len(docs)
    for doc in docs:
        word_counts.update(doc)
        appears_in.update(set(doc))
    temp = zip(word_counts.keys(), word_counts.values())
    wc = pd.DataFrame(temp, columns = ['word', 'count'])
    wc['rank'] = wc['count'].rank(method='first', ascending=False)
    total = wc['count'].sum()
    wc['pct_total'] = wc['count'].apply(lambda x: x / total)
    wc = wc.sort_values(by='rank')
    wc['cul_pct_total'] = wc['pct_total'].cumsum()
    t2 = zip(appears_in.keys(), appears_in.values())
    ac = pd.DataFrame(t2, columns=['word', 'appears_in'])
    wc = ac.merge(wc, on='word')
    wc['appears_in_pct'] = wc['appears_in'].apply(lambda x: x / total_docs)
    return wc.sort_values(by='rank')
```
### Tune dataframe
```
stop_words = ['-PRON-', 'pron', 's', 't', 'whiskey',
'whisky', 'bottle', 'year', 'hint', 'note', 'finish',
'palate', 'nose', 'like', 'good', 'new', 'aroma',
'slightly', 'release', 'long', 'subtle', 'balance',
'rich', 'age', 'single', 'fruit', 'add', 'light',
'clean', 'distillery', 'flavor', 'cask', 'wood',
'sorft','water', 'time', 'distil', 'bit', 'bottling',
'old', 'young', 'fresh', 'hot', 'soft', 'mature',
'complex','']
train_wrangled = whiskey_wrangle(train, stop_words)
test_wrangled = whiskey_wrangle(test, stop_words)
wc_cat_1 = count(train_wrangled[train_wrangled['category']==1]['description_tokens'])
wc_cat_2 = count(train_wrangled[train_wrangled['category']==2]['description_tokens'])
wc_cat_3 = count(train_wrangled[train_wrangled['category']==3]['description_tokens'])
wc_cat_4 = count(train_wrangled[train_wrangled['category']==4]['description_tokens'])
wc = count(train_wrangled['description_tokens'])
wc_top50 = wc[wc['rank'] <= 50] #explore what the top words are and help find additional stop_words
plt.rcParams['figure.figsize'] = (12,8)
squarify.plot(sizes=wc_top50['pct_total'], label=wc_top50['word'], alpha=.8 )
plt.axis('off')
plt.show()
wc_cat_1
X_train, X_val, y_train, y_val = train_test_split(train_wrangled['description_processed'], train_wrangled['category'],
test_size=0.20, random_state=42, stratify = train_wrangled['category'])
X_train.shape, X_val.shape, y_train.shape, y_val.shape
vect = TfidfVectorizer(stop_words=stop_words)
rfc = RandomForestClassifier()
svd = TruncatedSVD(algorithm='randomized')
# pipline using latent semantic indexing
lsi = Pipeline([('vect', vect),
('svd', svd)])
pipeline = Pipeline([('lsi', lsi),
('clf', rfc)])
# The pipeline puts together a bunch fit then transform,fit then predict.
parameters = {
'lsi__vect__ngram_range': [(1,1),(1,2),(1,3)],
'lsi__vect__max_df': uniform( 0.5, 1.0),
'lsi__vect__min_df': uniform(.01, .05),
'lsi__vect__max_features': randint(500,10000),
'lsi__svd__n_components': randint(5, 90),
'clf__n_estimators': randint(50, 500),
'clf__max_depth': [5, 10, 20, 40, None],
'clf__min_samples_leaf': randint(5,50),
'clf__max_features': uniform(0, 1),
'clf__class_weight':['balanced','balanced_subsample',None]
}
search = RandomizedSearchCV(
pipeline,
param_distributions=parameters,
n_iter=30,
cv=5,
return_train_score=True,
verbose = 10,
n_jobs = -1
)
search.fit(X_train, y_train)
print(f'Best score: {search.best_score_}\n')
print(f'Best hyperparameters: \n{search.best_params_}\n')
best_pipeline = search.best_estimator_
best_pipeline.fit(X_train, y_train)
print(f'Validation Accuracy: \n{best_pipeline.score(X_val, y_val)}\n')
len(best_pipeline.steps[0][1].steps[0][1].get_feature_names())
X_test = test_wrangled['description_processed']
test_pred = best_pipeline.predict(X_test)
submission = pd.DataFrame({'id': test_wrangled['id'], 'category':test_pred})
submission['category'] = submission['category'].astype('int64')
submission.head()
submission.to_csv(f'submission_1.csv', index=False)
```
```
import os, sys, time, importlib
import geopandas as gpd
import pandas as pd
import networkx as nx
sys.path.append('/home/wb514197/Repos/GOSTnets')
import GOSTnets as gn
import GOSTnets.calculate_od_raw as calcOD
from GOSTnets.load_osm import *
import rasterio as rio
from osgeo import gdal
import numpy as np
from shapely.geometry import Point
sys.path.append('/home/wb514197/Repos/INFRA_SAP')
from infrasap import aggregator
%load_ext autoreload
%autoreload 2
import glob
import os
import numpy as np
import pandas as pd
import geopandas as gpd
import rasterio as rio
from rasterio import features
from rasterstats import zonal_stats
from rasterio.warp import reproject, Resampling
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from infrasap import rasterMisc
country = 'zimbabwe'
iso3 = 'ZWE'
epsg = 32736
base_in = "/home/public/Data/PROJECTS/INFRA_SAP"
in_folder = os.path.join(base_in, iso3)
# define data paths
focal_admin2 = os.path.join(in_folder, "admin.shp")
focal_osm = os.path.join(in_folder, f"{country}-latest.osm.pbf")
pop_name = "WP_2020_1km"
wp_1km = os.path.join(in_folder, f"{pop_name}.tif")
urban_extents = os.path.join(in_folder, "urban_extents.shp")
airports = os.path.join(in_folder, "airports.shp")
ports = os.path.join(in_folder, "ports.shp")
borders = os.path.join(in_folder, "borders.shp")
base_out = "/home/wb514197/data/INFRA_SAP" # GOT permission denied using public
out_folder = os.path.join(base_out, iso3)
if not os.path.exists(out_folder):
    os.makedirs(out_folder)
targets_rio = rio.open("/home/wb514197/data/ENERGY/targets.tif") # from GRIDFINDER Paper
# targets = targets_rio.read(1)
wp_100m = rio.open(os.path.join(out_folder, "zwe_ppp_2020_UNadj.tif"))
wp_arr = wp_100m.read(1, masked=True)
pop_fb = rio.open(os.path.join(out_folder, "population_zwe_2019-07-01.tif"))
fb_arr = pop_fb.read(1, masked=True)
wp_arr.shape
fb_arr.shape
rasterMisc.standardizeInputRasters(targets_rio, wp_100m, os.path.join(out_folder, "energy", "targets_ZWE_wp.tif"), data_type='C')
rasterMisc.standardizeInputRasters(targets_rio, pop_fb, os.path.join(out_folder, "energy", "targets_ZWE_fb.tif"), data_type='C')
targets_fb = rio.open(os.path.join(out_folder, "energy", "targets_ZWE_fb.tif"))
targets_fb_arr = targets_fb.read(1)
targets_wp = rio.open(os.path.join(out_folder, "energy", "targets_ZWE_wp.tif"))
targets_wp_arr = targets_wp.read(1)
targets_wp.shape == wp_100m.shape
targets_fb.shape == pop_fb.shape
intersect_wp = targets_wp_arr*wp_arr
intersect_fb = targets_fb_arr*fb_arr
admin = gpd.read_file(focal_admin2)
zs_sum_wp = pd.DataFrame(zonal_stats(admin, wp_arr, affine=wp_100m.transform, stats='sum', nodata=wp_100m.nodata)).rename(columns={'sum':'pop_wp'})
zs_electrified_wp = pd.DataFrame(zonal_stats(admin, intersect_wp, affine=wp_100m.transform, stats='sum', nodata=wp_100m.nodata)).rename(columns={'sum':'pop_electrified_wp'})
zs_sum_fb = pd.DataFrame(zonal_stats(admin, fb_arr, affine=pop_fb.transform, stats='sum', nodata=pop_fb.nodata)).rename(columns={'sum':'pop_fb'})
zs_electrified_fb = pd.DataFrame(zonal_stats(admin, intersect_fb, affine=pop_fb.transform, stats='sum', nodata=pop_fb.nodata)).rename(columns={'sum':'pop_electrified_fb'})
res = pd.concat([admin, zs_sum_wp, zs_electrified_wp, zs_sum_fb, zs_electrified_fb], axis=1)
res['pct_access_wp'] = res['pop_electrified_wp']/res['pop_wp']
res['pct_access_fb'] = res['pop_electrified_fb']/res['pop_fb']
res.columns
res.to_file(os.path.join(out_folder, "energy", "ElectricityAccess.shp"), driver='ESRI Shapefile')
res_table = res.drop(['geometry','Shape_Leng','Shape_Area'], axis=1)
res_table.to_csv(os.path.join(out_folder, "energy", "ElectricityAccess2.csv"), index=False)
transmission = gpd.read_file(os.path.join(in_folder, 'transmission_lines.shp'))
transmission = transmission.to_crs(epsg)
transmission['buffer'] = transmission.buffer(10000)
transmission_buff = transmission.set_geometry('buffer')
transmission_buff = transmission_buff.unary_union
transmission_buff
transmission_buff_gdf = gpd.GeoDataFrame(geometry=[transmission_buff],
crs=epsg)
transmission_buff_gdf
admin_proj = admin.to_crs(epsg)
admin_buff = gpd.overlay(admin_proj, transmission_buff_gdf, how='intersection')
admin_buff.plot('OBJECTID')
admin_buff_wgs = admin_buff.to_crs('EPSG:4326')
zs_10k_trans_wp = pd.DataFrame(zonal_stats(admin_buff_wgs, wp_arr, affine=wp_100m.transform, stats='sum', nodata=wp_100m.nodata)).rename(columns={'sum':'pop_transmission_wp'})
zs_10k_trans_fb = pd.DataFrame(zonal_stats(admin_buff_wgs, fb_arr, affine=pop_fb.transform, stats='sum', nodata=pop_fb.nodata)).rename(columns={'sum':'pop_transmission_fb'})
res = pd.concat([admin, zs_sum_wp, zs_electrified_wp, zs_sum_fb, zs_electrified_fb, zs_10k_trans_wp, zs_10k_trans_fb], axis=1)
res['pct_access_wp'] = res['pop_electrified_wp']/res['pop_wp']
res['pct_access_fb'] = res['pop_electrified_fb']/res['pop_fb']
res['pct_transmission_10k_wp'] = res['pop_transmission_wp']/res['pop_wp']
res['pct_transmission_10k_fb'] = res['pop_transmission_fb']/res['pop_fb']
res.to_file(os.path.join(out_folder, "energy", "ElectricityAccess.shp"), driver='ESRI Shapefile')
res_table = res.drop(['geometry','Shape_Leng','Shape_Area'], axis=1)
res_table.to_csv(os.path.join(out_folder, "energy", "ElectricityAccess3.csv"), index=False)
```
[Binary Tree Tilt](https://leetcode.com/problems/binary-tree-tilt/). The tilt of a node is defined as the absolute difference between the sum of its left subtree's node values and the sum of its right subtree's node values; the tilt of the whole tree is the sum of the tilts of all its nodes. Compute the tilt of a given tree.
Approach: since computing the tilt involves cumulative sums of node values, design the recursive function so that it returns the subtree's cumulative sum.
```
def findTilt(root: TreeNode) -> int:
    res = 0
    def rec(root):  # recursive helper that returns the subtree's sum
        if not root:
            return 0
        nonlocal res
        left_sum = rec(root.left)
        right_sum = rec(root.right)
        res += abs(left_sum - right_sum)
        return left_sum + right_sum + root.val
    rec(root)
    return res
```
A problem from the JD.com 2019 internship written exam:
A stadium suddenly catches fire and must be evacuated urgently, but the aisles are so narrow that only one person can pass through at a time. The seating layout is known and forms a tree, with one person in every seat; the safety exit is at the root of the tree, i.e., at node 1. Every second, each person can advance one node toward the root, but apart from the safety exit, no node can hold two or more people at the same time. We need a strategy that evacuates the crowd as fast as possible: under the optimal strategy, how quickly can the stadium be fully evacuated?
Sample data:
6
2 1
3 2
4 3
5 2
6 1
Approach: below the second level, each branch can only advance one node per second, so the evacuation time is determined by the number of nodes in each branch below the root. Find the branch with the most nodes and return its node count.
```
n = int(input())
branches = list()
for _ in range(n-1):
    a, b = map(int, input().split())
    if b == 1:  # a new branch hanging directly off the root
        branches.append(set([a]))
    for branch in branches:
        if b in branch:
            branch.add(a)
print(branches)
print(max(map(len, branches)))
```
[Leaf-Similar Trees](https://leetcode.com/problems/leaf-similar-trees/). Scanning a binary tree from left to right, the leaf nodes encountered form its leaf value sequence. Given two binary trees, determine whether their leaf value sequences are the same.
Approach: the leaf value sequence is easily obtained with an in-order traversal.
```
def leafSimilar(root1: TreeNode, root2: TreeNode) -> bool:
    def get_leaf_seq(root):
        res = list()
        if not root:
            return res
        s = list()
        while root or s:
            while root:
                s.append(root)
                root = root.left
            vis_node = s.pop()
            if not vis_node.left and not vis_node.right:
                res.append(vis_node.val)
            if vis_node.right:
                root = vis_node.right
        return res
    seq_1, seq_2 = get_leaf_seq(root1), get_leaf_seq(root2)
    if len(seq_1) != len(seq_2):
        return False
    for val_1, val_2 in zip(seq_1, seq_2):
        if val_1 != val_2:
            return False
    return True
```
[Increasing Order Search Tree](https://leetcode.com/problems/increasing-order-search-tree/). Given a BST, rearrange it into a tree that has only right children.
Approach: in a BST with only right branches, the root is the smallest node and values keep increasing to the right. The increasing sequence of a BST comes from an in-order traversal, so just build a new tree while traversing.
```
def increasingBST(root: TreeNode) -> TreeNode:
    res = None
    s = list()
    while s or root:
        while root:
            s.append(root)
            root = root.left
        vis_node = s.pop()
        if res is None:  # handle the first (smallest) node specially
            res = TreeNode(vis_node.val)
            ptr = res
        else:
            ptr.right = TreeNode(vis_node.val)
            ptr = ptr.right
        if vis_node.right:
            root = vis_node.right
    return res
```
# All
## Set Up
```
print("Installing dependencies...")
%tensorflow_version 2.x
!pip install -q t5
import functools
import os
import time
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import tensorflow.compat.v1 as tf
import tensorflow_datasets as tfds
import t5
```
## Set UP TPU Runtime
```
ON_CLOUD = True
if ON_CLOUD:
    print("Setting up GCS access...")
    import tensorflow_gcs_config
    from google.colab import auth
    # Set credentials for GCS reading/writing from Colab and TPU.
    TPU_TOPOLOGY = "v3-8"
    try:
        tpu = tf.distribute.cluster_resolver.TPUClusterResolver()  # TPU detection
        TPU_ADDRESS = tpu.get_master()
        print('Running on TPU:', TPU_ADDRESS)
    except ValueError:
        raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')
    auth.authenticate_user()
    tf.config.experimental_connect_to_host(TPU_ADDRESS)
    tensorflow_gcs_config.configure_gcs_from_colab_auth()

tf.disable_v2_behavior()

# Improve logging.
from contextlib import contextmanager
import logging as py_logging

if ON_CLOUD:
    tf.get_logger().propagate = False
    py_logging.root.setLevel('INFO')

@contextmanager
def tf_verbosity_level(level):
    og_level = tf.logging.get_verbosity()
    tf.logging.set_verbosity(level)
    yield
    tf.logging.set_verbosity(og_level)
```
## 4b
```
def dumping_dataset(split, shuffle_files = False):
    del shuffle_files
    if split == 'train':
        ds = tf.data.TextLineDataset(
            [
                'gs://scifive/finetune/bioasq4b/bioasq_4b_train_1.tsv',
            ]
        )
    else:
        ds = tf.data.TextLineDataset(
            [
                'gs://scifive/finetune/bioasq4b/bioasq_4b_test.tsv',
            ]
        )
    # Split each "<t1>\t<t2>" example into an (input, target) tuple.
    ds = ds.map(
        functools.partial(tf.io.decode_csv, record_defaults=["", ""],
                          field_delim="\t", use_quote_delim=False),
        num_parallel_calls=tf.data.experimental.AUTOTUNE)
    # Map each tuple to a {"input": ..., "target": ...} dict.
    ds = ds.map(lambda *ex: dict(zip(["input", "target"], ex)))
    return ds

print("A few raw validation examples...")
for ex in tfds.as_numpy(dumping_dataset("train").take(5)):
    print(ex)
def ner_preprocessor(ds):
    def normalize_text(text):
        text = tf.strings.lower(text)
        text = tf.strings.regex_replace(text, "'(.*)'", r"\1")
        return text
    def to_inputs_and_targets(ex):
        """Map {"inputs": ..., "targets": ...} -> {"inputs": ner..., "targets": ...}."""
        return {
            "inputs":
                tf.strings.join(
                    ["bioasq4b: ", normalize_text(ex["input"])]),
            "targets": normalize_text(ex["target"])
        }
    return ds.map(to_inputs_and_targets,
                  num_parallel_calls=tf.data.experimental.AUTOTUNE)
t5.data.TaskRegistry.remove('bioasq4b')
t5.data.TaskRegistry.add(
    "bioasq4b",
    # Supply a function which returns a tf.data.Dataset.
    dataset_fn=dumping_dataset,
    splits=["train", "validation"],
    # Supply a function which preprocesses text from the tf.data.Dataset.
    text_preprocessor=[ner_preprocessor],
    # Lowercase targets before computing metrics.
    postprocess_fn=t5.data.postprocessors.lower_text,
    # We'll use accuracy as our evaluation metric.
    metric_fns=[t5.evaluation.metrics.accuracy,
                t5.evaluation.metrics.sequence_accuracy],
    # output_features=t5.data.Feature(vocabulary=t5.data.SentencePieceVocabulary(vocab))
)
nq_task = t5.data.TaskRegistry.get("bioasq4b")
ds = nq_task.get_dataset(split="train", sequence_length={"inputs": 128, "targets": 128})
print("A few preprocessed validation examples...")
for ex in tfds.as_numpy(ds.take(5)):
    print(ex)
```
## Dataset Mixture
```
t5.data.MixtureRegistry.remove("bioasqb")
t5.data.MixtureRegistry.add(
    "bioasqb",
    ["bioasq4b"],
    default_rate=1.0
)
```
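The mixture above contains a single task, so `default_rate=1.0` simply means every training example is drawn from `bioasq4b`; with several tasks registered, examples are drawn in proportion to each task's rate. A minimal pure-Python sketch of rate-proportional task sampling (the `rates` dict is illustrative, not part of the notebook):

```python
import random

# Hypothetical mixing rates: with default_rate=1.0 every task in the
# mixture gets the same weight; with several tasks, the next example
# would be drawn from a task chosen proportionally to these rates.
rates = {"bioasq4b": 1.0}
tasks, weights = zip(*rates.items())

# Choose which task to sample the next 5 training examples from.
picked = random.choices(tasks, weights=weights, k=5)
# With a single task, every draw returns "bioasq4b".
```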
## Define Model
```
# Using pretrained_models from wiki + books
MODEL_SIZE = "base"
# BASE_PRETRAINED_DIR = "gs://t5-data/pretrained_models"
BASE_PRETRAINED_DIR = "gs://t5_training/models/bio/pmc_v1"
PRETRAINED_DIR = os.path.join(BASE_PRETRAINED_DIR, MODEL_SIZE)
# MODEL_DIR = "gs://t5_training/models/bio/bioasq4b_pmc_v2"
MODEL_DIR = "gs://t5_training/models/bio/bioasq4b_pmc_v2"
MODEL_DIR = os.path.join(MODEL_DIR, MODEL_SIZE)
# Set parallelism and batch size to fit on v2-8 TPU (if possible).
# Limit number of checkpoints to fit within 5GB (if possible).
model_parallelism, train_batch_size, keep_checkpoint_max = {
"small": (1, 256, 16),
"base": (2, 128*2, 8),
"large": (8, 64, 4),
"3B": (8, 16, 1),
"11B": (8, 16, 1)}[MODEL_SIZE]
tf.io.gfile.makedirs(MODEL_DIR)
# The models from our paper are based on the Mesh Tensorflow Transformer.
model = t5.models.MtfModel(
    model_dir=MODEL_DIR,
    tpu=TPU_ADDRESS,
    tpu_topology=TPU_TOPOLOGY,
    model_parallelism=model_parallelism,
    batch_size=train_batch_size,
    sequence_length={"inputs": 512, "targets": 52},
    learning_rate_schedule=0.001,
    save_checkpoints_steps=1000,
    keep_checkpoint_max=keep_checkpoint_max if ON_CLOUD else None,
    iterations_per_loop=100,
)
```
## Finetune
```
FINETUNE_STEPS = 45000
model.finetune(
    mixture_or_task_name="bioasqb",
    pretrained_model_dir=PRETRAINED_DIR,
    finetune_steps=FINETUNE_STEPS
)
```
## Predict
```
year = 4
output_dir = 'predict_output_pmc_lower'
import tensorflow.compat.v1 as tf
# for year in range(4, 7):
for batch in range(1, 6):
    task = "%dB%d" % (year, batch)
    dir = "bioasq%db" % (year)
    input_file = task + '_factoid_predict_input_lower.txt'
    output_file = task + '_predict_output.txt'
    predict_inputs_path = os.path.join('gs://t5_training/t5-data/bio_data', dir, 'eval_data', input_file)
    print(predict_inputs_path)
    predict_outputs_path = os.path.join('gs://t5_training/t5-data/bio_data', dir, output_dir, MODEL_SIZE, output_file)
    with tf_verbosity_level('ERROR'):
        # prediction_files = sorted(tf.io.gfile.glob(predict_outputs_path + "*"))
        model.batch_size = 8  # Min size for small model on v2-8 with parallelism 1.
        model.predict(
            input_file=predict_inputs_path,
            output_file=predict_outputs_path,
            # inputs=predict_inputs_path,
            # targets=predict_outputs_path + '-' + str(prediction_files[-1].split("-")[-1]),
            # Select the most probable output token at each step.
            temperature=0,
        )
    print("Predicted task : " + task)
    prediction_files = sorted(tf.io.gfile.glob(predict_outputs_path + "*"))
    # print('score', score)
    print("\nPredictions using checkpoint %s:\n" % prediction_files[-1].split("-")[-1])
# t5_training/t5-data/bio_data/bioasq4b/eval_data/4B1_factoid_predict_input.txt
```
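`temperature=0` in `model.predict` corresponds to greedy decoding: the most probable token is taken at each step. As a rough, self-contained illustration (not the model's actual sampler), temperature-controlled token selection can be sketched as:

```python
import math
import random

def sample_token(logits, temperature):
    """Pick a token index from unnormalized logits."""
    # temperature == 0 -> greedy: always take the most probable token.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise soften/sharpen the distribution and sample from it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    probs = [math.exp(s - m) for s in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

print(sample_token([0.1, 2.0, -1.0], 0))  # → 1 (index of the largest logit)
```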
---
```
import tensorflow as tf
import math
print('TensorFlow version: ' + tf.__version__)
from tensorflow.examples.tutorials.mnist import input_data as mnist_data
mnist = mnist_data.read_data_sets("../MNIST_data", one_hot=True, reshape=True, validation_size=0)
x_train = mnist.train.images # we will not be using these to feed in data
y_train = mnist.train.labels # instead we will be using next_batch function to train in batches
x_test = mnist.test.images
y_test = mnist.test.labels
print ('We have '+str(x_train.shape[0])+' training examples in dataset')
TUTORIAL_NAME = 'Tutorial3a'
MODEL_NAME = 'convnetTFonAndroid'
SAVED_MODEL_PATH = '../' + TUTORIAL_NAME+'_Saved_model/'
BATCH_SIZE = 100
KEEP_PROB = 0.75
LEARNING_RATE_MAX = 1e-1 #1e-2
LEARNING_RATE_MIN = 1e-1 #1e-5
TRAIN_STEPS = 600 #600 * 8
def getExponentiallyDecayingLR(TRAIN_STEPS):
    # Note: reads the loop counter `i` from the enclosing scope.
    up = -math.log(LEARNING_RATE_MAX, 10)
    lp = -math.log(LEARNING_RATE_MIN, 10)
    return LEARNING_RATE_MIN + (LEARNING_RATE_MAX - LEARNING_RATE_MIN) * math.pow(10, (up - lp) * (i / (TRAIN_STEPS + 1e-9)))

for i in range(TRAIN_STEPS + 1):
    LEARNING_RATE = getExponentiallyDecayingLR(TRAIN_STEPS)
    if i % (TRAIN_STEPS / 10) == 0:
        print('Learning Rate: ' + str(LEARNING_RATE))
keepProb = tf.placeholder(tf.float32)
lRate = tf.placeholder(tf.float32)
W1 = tf.Variable(tf.truncated_normal([6, 6, 1, 6], stddev=0.1))
B1 = tf.Variable(tf.constant(0.1, tf.float32, [6]))
W2 = tf.Variable(tf.truncated_normal([5, 5, 6, 12], stddev=0.1))
B2 = tf.Variable(tf.constant(0.1, tf.float32, [12]))
W3 = tf.Variable(tf.truncated_normal([4, 4, 12, 24], stddev=0.1))
B3 = tf.Variable(tf.constant(0.1, tf.float32, [24]))
W4 = tf.Variable(tf.truncated_normal([7 * 7 * 24, 200], stddev=0.1))
B4 = tf.Variable(tf.constant(0.1, tf.float32, [200]))
W5 = tf.Variable(tf.truncated_normal([200, 10], stddev=0.1))
B5 = tf.Variable(tf.constant(0.1, tf.float32, [10]))
# The model
X = tf.placeholder(tf.float32, [None, 28*28], name='modelInput')
X_image = tf.reshape(X, [-1, 28, 28, 1])
Y_ = tf.placeholder(tf.float32, [None, 10])
Y1 = tf.nn.relu(tf.nn.conv2d(X_image, W1, strides=[1, 1, 1, 1], padding='SAME') + B1)
Y2 = tf.nn.relu(tf.nn.conv2d(Y1, W2, strides=[1, 2, 2, 1], padding='SAME') + B2)
Y3 = tf.nn.relu(tf.nn.conv2d(Y2, W3, strides=[1, 2, 2, 1], padding='SAME') + B3)
YY = tf.reshape(Y3, shape=[-1, 7 * 7 * 24])
Y4 = tf.nn.relu(tf.matmul(YY, W4) + B4)
YY4 = tf.nn.dropout(Y4, keepProb)
Ylogits = tf.matmul(YY4, W5) + B5
Y = tf.nn.softmax(Ylogits)
#########
Ylogits = tf.matmul(Y4, W5) + B5
Y_inf = tf.nn.softmax(Ylogits, name='modelOutput')
#########
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_))*100
correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
train_step = tf.train.AdamOptimizer(lRate).minimize(cross_entropy)
tf.set_random_seed(0)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
saver = tf.train.Saver()
for i in range(TRAIN_STEPS + 1):
    LEARNING_RATE = getExponentiallyDecayingLR(TRAIN_STEPS)
    batch_X, batch_Y = mnist.train.next_batch(BATCH_SIZE)
    sess.run(train_step, feed_dict={X: batch_X, Y_: batch_Y, lRate: LEARNING_RATE, keepProb: KEEP_PROB})
    if i % 100 == 0:
        print('Latest learning rate is: ' + str(LEARNING_RATE))
        print('Training Step:' + str(i)
              + ' Accuracy = ' + str(sess.run(accuracy, feed_dict={X: x_test, Y_: y_test, keepProb: 1.0}))
              + ' Loss = ' + str(sess.run(cross_entropy, {X: x_test, Y_: y_test}))
              )
    # Uncomment this when the learning rate is decreasing to save checkpoints.
    # if i % 600 == 0:
    #     out = saver.save(sess, SAVED_MODEL_PATH + MODEL_NAME + '.ckpt', global_step=i)
tf.train.write_graph(sess.graph_def, SAVED_MODEL_PATH , MODEL_NAME + '.pbtxt')
tf.train.write_graph(sess.graph_def, SAVED_MODEL_PATH , MODEL_NAME + '.pb',as_text=False)
from tensorflow.python.tools import freeze_graph
# Freeze the graph
input_graph = SAVED_MODEL_PATH+MODEL_NAME+'.pb'
input_saver = ""
input_binary = True
input_checkpoint = SAVED_MODEL_PATH+MODEL_NAME+'.ckpt-'+str(TRAIN_STEPS) # change this value TRAIN_STEPS here as per your latest checkpoint saved
output_node_names = 'modelOutput'
restore_op_name = 'save/restore_all'
filename_tensor_name = 'save/Const:0'
output_graph = SAVED_MODEL_PATH+'frozen_'+MODEL_NAME+'.pb'
clear_devices = True
initializer_nodes = ""
variable_names_blacklist = ""
freeze_graph.freeze_graph(
input_graph,
input_saver,
input_binary,
input_checkpoint,
output_node_names,
restore_op_name,
filename_tensor_name,
output_graph,
clear_devices,
initializer_nodes,
variable_names_blacklist
)
```
---
Fig. 1c-f. See Fig. 1b for the GPS data download. The seismicity catalogue and the InSAR displacement time series for a selected point are available on the GitHub page.
```
import numpy as np
import matplotlib.pyplot as plt
#import matplotlib.cm as cm
import datetime
import pandas as pd
import matplotlib.dates as mdates
import time
from datetime import datetime as dt
```
Functions to read GPS data from file and reference it to 'MKEA' station
```
def read_ref_data(time1, time2):
    filename = '../GPS_data/' + 'MKEA.txt'
    dfin = pd.read_csv(filename, header=0, delimiter=r"\s+")
    dataval = pd.concat([dfin['YYMMMDD'].rename('date'),
                         (dfin['_e0(m)'] + dfin['__east(m)']).rename('east'),
                         (dfin['____n0(m)'] + dfin['_north(m)']).rename('north'),
                         (dfin['u0(m)'] + dfin['____up(m)']).rename('up'),
                         dfin['yyyy.yyyy'].rename('dateval')], axis=1)
    dataerr = pd.concat([dfin['YYMMMDD'].rename('date'), dfin['sig_e(m)'],
                         dfin['sig_n(m)'], dfin['sig_u(m)']], axis=1, ignore_index=False)
    dataval['date'] = pd.to_datetime(dataval['date'], format='%y%b%d', errors='ignore')
    dataerr['date'] = pd.to_datetime(dataval['date'], format='%y%b%d', errors='ignore')
    mask = (dataval['date'] > time1) & (dataval['date'] < time2)
    dataval = dataval[mask]
    dataerr = dataerr[mask]
    dataval = dataval.set_index(['date'])
    dataval = dataval.resample('D').interpolate(method='linear')
    dataval = dataval.reset_index()
    return dataval


def read_data(sitename, time1, time2):
    filename = '../GPS_data/' + sitename + '.txt'
    dfin = pd.read_csv(filename, header=0, delimiter=r"\s+")
    dataval = pd.concat([dfin['YYMMMDD'].rename('date'),
                         (dfin['_e0(m)'] + dfin['__east(m)']).rename('east'),
                         (dfin['____n0(m)'] + dfin['_north(m)']).rename('north'),
                         (dfin['u0(m)'] + dfin['____up(m)']).rename('up'),
                         dfin['yyyy.yyyy'].rename('dateval')], axis=1)
    dataerr = pd.concat([dfin['YYMMMDD'].rename('date'), dfin['sig_e(m)'],
                         dfin['sig_n(m)'], dfin['sig_u(m)']], axis=1, ignore_index=False)
    dataval['date'] = pd.to_datetime(dataval['date'], format='%y%b%d', errors='ignore')
    dataerr['date'] = pd.to_datetime(dataval['date'], format='%y%b%d', errors='ignore')
    mask = (dataval['date'] > time1) & (dataval['date'] < time2)
    dataval = dataval[mask]
    dataerr = dataerr[mask]
    # Reference to the MKEA station.
    dataval2 = read_ref_data(time1, time2)
    merged = pd.merge(dataval, dataval2, how='inner', on=['date'])
    merged['east'] = merged['east_x'] - merged['east_y']
    merged['north'] = merged['north_x'] - merged['north_y']
    merged['up'] = merged['up_x'] - merged['up_y']
    merged['dateval'] = (merged['dateval_x'] + merged['dateval_y']) * 0.5
    merged = merged[['date', 'east', 'north', 'up', 'dateval']]
    return merged
years = mdates.YearLocator(1) # every year
months = mdates.MonthLocator(interval=3) # every 3 months
yearsFmt = mdates.DateFormatter('%Y')
data = pd.read_csv('eq_catalogs/eq_summit_2011_2020.csv',header=0, parse_dates=['time'],index_col='time')
data2=data.resample('M')['mag'].count().to_frame()
data = pd.read_csv('eq_catalogs/eq_east_decollement_2011_2020.csv',header=0, parse_dates=['time'],index_col='time')
data3=data.resample('M')['mag'].count().to_frame()
data = pd.read_csv('eq_catalogs/eq_west_decollement_2011_2020.csv',header=0, parse_dates=['time'],index_col='time')
data4=data.resample('M')['mag'].count().to_frame()
InSAR_ts = pd.read_csv('y1494_x1673_ts.txt',skiprows=6,header=None,delimiter=r"\s+",
names=['date','disp'],parse_dates=['date'])
InSAR_ts['date'] = [pd.to_datetime(d) for d in InSAR_ts['date']];
```
Plot figure
```
x=[datetime.date(2012, 1, 1), datetime.date(2020, 6, 30)]
y=[0,40];
fig, axes = plt.subplots(nrows=5, ncols=1, sharex=True,figsize=(10, 15), gridspec_kw = {'height_ratios':[1,1.5,1,1,1]});
#customize axes
fig.subplots_adjust(hspace=0.05)
ff=20;ffti=20;
for i in range(5):
    axes[i].tick_params(labelsize=ff)
    axes[i].tick_params(axis='y', length=10, width=3)
    axes[i].tick_params(which='minor', length=0, width=0)
    axes[i].xaxis.set_major_locator(years)
    axes[i].xaxis.set_major_formatter(yearsFmt)
    if i > 0:
        axes[i].spines['top'].set_color('none')
axes[0].spines['bottom'].set_color('none')
axes[0].set_yticks([0,10,20]);axes[1].set_yticks([0,10,20,30]);
axes[4].set_xlim([x[0], x[-1]]);axes[4].set_yticks(np.arange(0, 80, 20));
axes[4].tick_params(axis='x',length=13, width=3);axes[4].tick_params(which='minor',length=7, width=2);
#plot InSAR timeseries data
axes[0].plot_date(InSAR_ts['date'],InSAR_ts['disp'],marker='o',label='InSAR',markersize=8.0);
axes[0].set_ylabel('InSAR LOS (cm)',fontsize=ffti)
axes[0].set_ylim([-3,30]);
#plot GPS timeseries
site_list=['MOKP','MLSP','SLPC','PHAN','ALAL','PAT3'];
time1=x[0];time2=x[1];t=0;
for site in site_list:
    gps_data = read_data(site, time1, time2)
    gps_data['up'] = (gps_data['up'] - gps_data['up'].iloc[0:10].mean()) * 100
    gps_data['north'] = (gps_data['north'] - gps_data['north'].iloc[0:10].mean()) * 100  # +t; t=t+15;
    gps_data['east'] = (gps_data['east'] - gps_data['east'].iloc[0:10].mean()) * 100  # +t; t=t+15;
    gps_data['horz'] = (gps_data['east'] ** 2 + gps_data['north'] ** 2) ** 0.5 + t
    t = t + 2
    axes[1].plot_date(gps_data['date'], gps_data['horz'], marker='o', markersize=2.0, label=site)
axes[1].set_ylim([-2, 53]); axes[1].set_yticks([10, 20, 30, 40, 50])
axes[1].set_ylabel('GPS Horz.(cm)', fontsize=ffti)
lgnd = axes[1].legend(labelspacing=-2.5, fontsize=10, bbox_to_anchor=(0.31, 0.65),
                      bbox_transform=plt.gcf().transFigure, frameon=False)
for handle in lgnd.legendHandles:
    handle._legmarker.set_markersize(6)
# Plot seismicity rate
axes[2].bar(data2.index,data2.mag,color='black',edgecolor='k',linewidth=5);
axes[3].bar(data3.index,data3.mag,color='black',edgecolor='k',linewidth=5);
axes[4].bar(data4.index,data4.mag,color='black',edgecolor='k',linewidth=5);
axes[2].set_ylim([0, 80]);axes[2].set_yticks(np.arange(0, 80, 20));
axes[3].set_ylim([0, 40]);axes[3].set_yticks(np.arange(0, 40, 10));
axes[4].set_ylim([0, 40]);axes[4].set_yticks(np.arange(0, 40, 10));
axes[3].set_ylabel('No. of quakes/month',fontsize=ffti);
vlines=[datetime.date(2014, 1, 31),datetime.date(2015, 8, 15),datetime.date(2018, 5, 1)];
for i in range(len(vlines)):
    for j in range(5):
        axes[j].axvline(vlines[i], color='r', linestyle='--')
marker=datetime.date(2017, 8, 1);
axes[2].text(marker, 60, 'Summit', fontsize=15,style='italic');
axes[3].text(marker, 35, 'Eastern decollement', fontsize=15,style='italic');
axes[4].text(marker, 35, 'Western decollement', fontsize=15,style='italic');
#save plot
#plt.savefig('Fig.1c-f_displacement_over_SeismicityRates.pdf',dpi=300,bbox_inches='tight',transparent=True)
```
---
# Object Detection using Managed Spot Training
The example here is almost the same as [Amazon SageMaker Object Detection using the RecordIO format](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_recordio_format.ipynb).
This notebook tackles the exact same problem with the same solution, but it has been modified to be able to run using SageMaker Managed Spot infrastructure. SageMaker Managed Spot uses [EC2 Spot Instances](https://aws.amazon.com/ec2/spot/) to run Training at a lower cost.
Please read the original notebook and try it out to gain an understanding of the ML use-case and how it is being solved. We will not delve into that here in this notebook.
## Setup
Again, we won't go into detail explaining the code below, it has been lifted verbatim from [Amazon SageMaker Object Detection using the RecordIO format](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_recordio_format.ipynb).
```
!pip install -qU awscli boto3 sagemaker
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
role = get_execution_role()
sess = sagemaker.Session()
bucket = sess.default_bucket()
prefix = 'DEMO-ObjectDetection'
training_image = get_image_uri(sess.boto_region_name, 'object-detection', repo_version="latest")
```
### Download And Prepare Data
Note: this notebook downloads and uses the Pascal VOC dataset; please be aware of the dataset usage rights:
"The VOC data includes images obtained from the "flickr" website. Use of these images must respect the corresponding terms of use:
* "flickr" terms of use (https://www.flickr.com/help/terms)"
```
# Download the dataset
!wget -P /tmp http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
!wget -P /tmp http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
!wget -P /tmp http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
# Extract the data.
!tar -xf /tmp/VOCtrainval_11-May-2012.tar && rm /tmp/VOCtrainval_11-May-2012.tar
!tar -xf /tmp/VOCtrainval_06-Nov-2007.tar && rm /tmp/VOCtrainval_06-Nov-2007.tar
!tar -xf /tmp/VOCtest_06-Nov-2007.tar && rm /tmp/VOCtest_06-Nov-2007.tar
# Convert data into RecordIO
!python tools/prepare_dataset.py --dataset pascal --year 2007,2012 --set trainval --target VOCdevkit/train.lst
!rm -rf VOCdevkit/VOC2012
!python tools/prepare_dataset.py --dataset pascal --year 2007 --set test --target VOCdevkit/val.lst --no-shuffle
!rm -rf VOCdevkit/VOC2007
```
### Upload data to S3
```
# Upload the RecordIO files to train and validation channels
train_channel = prefix + '/train'
validation_channel = prefix + '/validation'
sess.upload_data(path='VOCdevkit/train.rec', bucket=bucket, key_prefix=train_channel)
sess.upload_data(path='VOCdevkit/val.rec', bucket=bucket, key_prefix=validation_channel)
s3_train_data = 's3://{}/{}'.format(bucket, train_channel)
s3_validation_data = 's3://{}/{}'.format(bucket, validation_channel)
```
# Object Detection using Managed Spot Training
For Managed Spot Training using MXNet we need to configure three things:
1. Enable the `train_use_spot_instances` constructor arg - a simple self-explanatory boolean.
2. Set the `train_max_wait` constructor arg - this is an int arg representing the amount of time you are willing to wait for Spot infrastructure to become available. Some instance types are harder to get at Spot prices and you may have to wait longer. You are not charged for time spent waiting for Spot infrastructure to become available, you're only charged for actual compute time spent once Spot instances have been successfully procured.
3. Set up a `checkpoint_s3_uri` constructor arg. This arg tells SageMaker an S3 location where to save checkpoints (assuming your algorithm has been modified to save checkpoints periodically). While not strictly necessary, checkpointing is highly recommended for Managed Spot Training jobs, because Spot instances can be interrupted on short notice, and resuming from the last checkpoint ensures you don't lose any progress made before the interruption.
Feel free to toggle the `train_use_spot_instances` variable to see the effect of running the same job using regular (a.k.a. "On Demand") infrastructure.
Note that `train_max_wait` can be set if and only if `train_use_spot_instances` is enabled and **must** be greater than or equal to `train_max_run`.
```
train_use_spot_instances = True
train_max_run=3600
train_max_wait = 7200 if train_use_spot_instances else None
import uuid
checkpoint_suffix = str(uuid.uuid4())[:8]
checkpoint_s3_uri = 's3://{}/artifacts/object-detection-checkpoint-{}/'.format(bucket, checkpoint_suffix) if train_use_spot_instances else None
```
## Training
Now that we are done with all the setup that is needed, we are ready to train our object detector. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator will launch the training job.
```
s3_output_location = 's3://{}/{}/output'.format(bucket, prefix)
od_model = sagemaker.estimator.Estimator(training_image,
role,
train_instance_count=1,
train_instance_type='ml.p3.2xlarge',
train_volume_size = 50,
input_mode= 'File',
output_path=s3_output_location,
sagemaker_session=sess,
train_use_spot_instances=train_use_spot_instances,
train_max_run=train_max_run,
train_max_wait=train_max_wait,
checkpoint_s3_uri=checkpoint_s3_uri)
od_model.set_hyperparameters(base_network='resnet-50',
use_pretrained_model=1,
num_classes=20,
mini_batch_size=32,
epochs=1,
learning_rate=0.001,
lr_scheduler_step='3,6',
lr_scheduler_factor=0.1,
optimizer='sgd',
momentum=0.9,
weight_decay=0.0005,
overlap_threshold=0.5,
nms_threshold=0.45,
image_shape=300,
label_width=350,
num_training_samples=16551)
train_data = sagemaker.session.s3_input(s3_train_data, distribution='FullyReplicated',
content_type='application/x-recordio', s3_data_type='S3Prefix')
validation_data = sagemaker.session.s3_input(s3_validation_data, distribution='FullyReplicated',
content_type='application/x-recordio', s3_data_type='S3Prefix')
data_channels = {'train': train_data, 'validation': validation_data}
od_model.fit(inputs=data_channels, logs=True)
```
# Savings
Towards the end of the job you should see two lines of output printed:
- `Training seconds: X` : This is the actual compute-time your training job spent
- `Billable seconds: Y` : This is the time you will be billed for after Spot discounting is applied.
If you enabled the `train_use_spot_instances` var then you should see a notable difference between `X` and `Y` signifying the cost savings you will get for having chosen Managed Spot Training. This should be reflected in an additional line:
- `Managed Spot Training savings: (1-Y/X)*100 %`
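For example, with hypothetical values of `X` and `Y` (not taken from an actual job log), the reported savings percentage is computed as:

```python
# Hypothetical values read from the end of a training job's output.
training_seconds = 3600   # X: actual compute time spent
billable_seconds = 1200   # Y: time billed after Spot discounting

savings = (1 - billable_seconds / training_seconds) * 100
print('Managed Spot Training savings: %.1f %%' % savings)  # → 66.7 %
```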
---
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import malaya_speech.train.model.alconformer as conformer
import malaya_speech.train.model.transducer as transducer
import malaya_speech
import tensorflow as tf
import numpy as np
subwords = malaya_speech.subword.load('transducer.subword')
featurizer = malaya_speech.tf_featurization.STTFeaturizer(
normalize_per_feature = True
)
X = tf.compat.v1.placeholder(tf.float32, [None, None], name = 'X_placeholder')
X_len = tf.compat.v1.placeholder(tf.int32, [None], name = 'X_len_placeholder')
batch_size = tf.shape(X)[0]
features = tf.TensorArray(dtype = tf.float32, size = batch_size, dynamic_size = True, infer_shape = False)
features_len = tf.TensorArray(dtype = tf.int32, size = batch_size)
init_state = (0, features, features_len)
def condition(i, features, features_len):
    return i < batch_size

def body(i, features, features_len):
    f = featurizer(X[i, :X_len[i]])
    f_len = tf.shape(f)[0]
    return i + 1, features.write(i, f), features_len.write(i, f_len)
_, features, features_len = tf.while_loop(condition, body, init_state)
features_len = features_len.stack()
padded_features = tf.TensorArray(dtype = tf.float32, size = batch_size)
padded_lens = tf.TensorArray(dtype = tf.int32, size = batch_size)
maxlen = tf.reduce_max(features_len)
init_state = (0, padded_features, padded_lens)
def condition(i, padded_features, padded_lens):
    return i < batch_size

def body(i, padded_features, padded_lens):
    f = features.read(i)
    len_f = tf.shape(f)[0]
    f = tf.pad(f, [[0, maxlen - tf.shape(f)[0]], [0, 0]])
    return i + 1, padded_features.write(i, f), padded_lens.write(i, len_f)
_, padded_features, padded_lens = tf.while_loop(condition, body, init_state)
padded_features = padded_features.stack()
padded_lens = padded_lens.stack()
padded_lens.set_shape((None,))
padded_features.set_shape((None, None, 80))
padded_features = tf.expand_dims(padded_features, -1)
padded_features, padded_lens
padded_features = tf.identity(padded_features, name = 'padded_features')
padded_lens = tf.identity(padded_lens, name = 'padded_lens')
config = malaya_speech.config.conformer_small_encoder_config
config['dropout'] = 0.0
conformer_model = conformer.Model(**config)
decoder_config = malaya_speech.config.conformer_small_decoder_config
decoder_config['embed_dropout'] = 0.0
transducer_model = transducer.rnn.Model(
conformer_model, vocabulary_size = subwords.vocab_size, **decoder_config
)
p = tf.compat.v1.placeholder(tf.int32, [None, None])
z = tf.zeros((tf.shape(p)[0], 1),dtype=tf.int32)
c = tf.concat([z, p], axis = 1)
p_len = tf.compat.v1.placeholder(tf.int32, [None])
c
training = True
logits = transducer_model([padded_features, c, p_len], training = training)
logits
sess = tf.Session()
sess.run(tf.global_variables_initializer())
var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
saver = tf.train.Saver(var_list = var_list)
saver.restore(sess, 'asr-small-alconformer-transducer/model.ckpt-500000')
decoded = transducer_model.greedy_decoder(padded_features, padded_lens, training = training)
decoded = tf.identity(decoded, name = 'greedy_decoder')
decoded
encoded = transducer_model.encoder(padded_features, training = training)
encoded = tf.identity(encoded, name = 'encoded')
encoded_placeholder = tf.placeholder(tf.float32, [config['dmodel']], name = 'encoded_placeholder')
predicted_placeholder = tf.placeholder(tf.int32, None, name = 'predicted_placeholder')
t = transducer_model.predict_net.get_initial_state().shape
states_placeholder = tf.placeholder(tf.float32, [int(i) for i in t], name = 'states_placeholder')
ytu, new_states = transducer_model.decoder_inference(
encoded=encoded_placeholder,
predicted=predicted_placeholder,
states=states_placeholder,
training = training
)
ytu = tf.identity(ytu, name = 'ytu')
new_states = tf.identity(new_states, name = 'new_states')
ytu, new_states
initial_states = transducer_model.predict_net.get_initial_state()
initial_states = tf.identity(initial_states, name = 'initial_states')
# sess = tf.Session()
# sess.run(tf.global_variables_initializer())
# var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
# saver = tf.train.Saver(var_list = var_list)
# saver.restore(sess, 'asr-small-conformer-transducer/model.ckpt-325000')
files = [
'speech/record/savewav_2020-11-26_22-36-06_294832.wav',
'speech/record/savewav_2020-11-26_22-40-56_929661.wav',
'speech/record/675.wav',
'speech/record/664.wav',
'speech/example-speaker/husein-zolkepli.wav',
'speech/example-speaker/mas-aisyah.wav',
'speech/example-speaker/khalil-nooh.wav',
'speech/example-speaker/shafiqah-idayu.wav',
'speech/khutbah/wadi-annuar.wav',
]
ys = [malaya_speech.load(f)[0] for f in files]
padded, lens = malaya_speech.padding.sequence_1d(ys, return_len = True)
import collections
import numpy as np
import tensorflow as tf
BeamHypothesis = collections.namedtuple(
'BeamHypothesis', ('score', 'prediction', 'states')
)
def transducer(
    enc,
    total,
    initial_states,
    encoded_placeholder,
    predicted_placeholder,
    states_placeholder,
    ytu,
    new_states,
    sess,
    beam_width=10,
    norm_score=True,
):
    kept_hyps = [
        BeamHypothesis(score=0.0, prediction=[0], states=initial_states)
    ]
    B = kept_hyps
    for i in range(total):
        A = B
        B = []
        while True:
            y_hat = max(A, key=lambda x: x.score)
            A.remove(y_hat)
            ytu_, new_states_ = sess.run(
                [ytu, new_states],
                feed_dict={
                    encoded_placeholder: enc[i],
                    predicted_placeholder: y_hat.prediction[-1],
                    states_placeholder: y_hat.states,
                },
            )
            for k in range(ytu_.shape[0]):
                beam_hyp = BeamHypothesis(
                    score=(y_hat.score + float(ytu_[k])),
                    prediction=y_hat.prediction,
                    states=y_hat.states,
                )
                if k == 0:
                    B.append(beam_hyp)
                else:
                    beam_hyp = BeamHypothesis(
                        score=beam_hyp.score,
                        prediction=(beam_hyp.prediction + [int(k)]),
                        states=new_states_,
                    )
                    A.append(beam_hyp)
            if len(B) > beam_width:
                break
    if norm_score:
        kept_hyps = sorted(
            B, key=lambda x: x.score / len(x.prediction), reverse=True
        )[:beam_width]
    else:
        kept_hyps = sorted(B, key=lambda x: x.score, reverse=True)[:beam_width]
    return kept_hyps[0].prediction
%%time
r = sess.run(decoded, feed_dict = {X: padded, X_len: lens})
for row in r:
    print(malaya_speech.subword.decode(subwords, row[row > 0]))
%%time
encoded_, padded_lens_ = sess.run([encoded, padded_lens], feed_dict = {X: padded, X_len: lens})
padded_lens_ = padded_lens_ // conformer_model.conv_subsampling.time_reduction_factor
s = sess.run(initial_states)
for i in range(len(encoded_)):
    r = transducer(
        enc=encoded_[i],
        total=padded_lens_[i],
        initial_states=s,
        encoded_placeholder=encoded_placeholder,
        predicted_placeholder=predicted_placeholder,
        states_placeholder=states_placeholder,
        ytu=ytu,
        new_states=new_states,
        sess=sess,
        beam_width=1,
    )
    print(malaya_speech.subword.decode(subwords, r))
encoded = transducer_model.encoder_inference(padded_features[0])
g = transducer_model._perform_greedy(encoded, tf.shape(encoded)[0],
tf.constant(0, dtype = tf.int32),
transducer_model.predict_net.get_initial_state())
g
indices = g.prediction
minus_one = -1 * tf.ones_like(indices, dtype=tf.int32)
blank_like = 0 * tf.ones_like(indices, dtype=tf.int32)
indices = tf.where(indices == minus_one, blank_like, indices)
num_samples = tf.cast(tf.shape(X[0])[0], dtype=tf.float32)
total_time_reduction_factor = featurizer.frame_step
stime = tf.range(0, num_samples, delta=total_time_reduction_factor, dtype=tf.float32)
stime /= tf.cast(featurizer.sample_rate, dtype=tf.float32)
etime = tf.range(total_time_reduction_factor, num_samples, delta=total_time_reduction_factor, dtype=tf.float32)
etime /= tf.cast(featurizer.sample_rate, dtype=tf.float32)
non_blank = tf.where(tf.not_equal(indices, 0))
non_blank_transcript = tf.gather_nd(indices, non_blank)
non_blank_stime = tf.gather_nd(tf.repeat(tf.expand_dims(stime, axis=-1), tf.shape(indices)[-1], axis=-1), non_blank)[:,0]
non_blank_transcript = tf.identity(non_blank_transcript, name = 'non_blank_transcript')
non_blank_stime = tf.identity(non_blank_stime, name = 'non_blank_stime')
%%time
r = sess.run([non_blank_transcript, non_blank_stime], feed_dict = {X: padded[:1], X_len: lens[:1]})
list(zip([subwords._id_to_subword(row - 1) for row in r[0]], r[1]))
saver = tf.train.Saver()
saver.save(sess, 'output-small-alconformer/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'gather' in n.op.lower()
or 'placeholder' in n.name
or 'encoded' in n.name
or 'decoder' in n.name
or 'ytu' in n.name
or 'new_states' in n.name
or 'padded_' in n.name
or 'initial_states' in n.name
or 'non_blank' in n.name)
and 'adam' not in n.name
and 'global_step' not in n.name
and 'Assign' not in n.name
and 'ReadVariableOp' not in n.name
and 'Gather' not in n.name
]
)
strings.split(',')
def freeze_graph(model_dir, output_node_names):
    if not tf.gfile.Exists(model_dir):
        raise AssertionError(
            "Export directory doesn't exist. Please specify an export "
            'directory: %s' % model_dir
        )
    checkpoint = tf.train.get_checkpoint_state(model_dir)
    input_checkpoint = checkpoint.model_checkpoint_path
    absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
    output_graph = absolute_model_dir + '/frozen_model.pb'
    clear_devices = True
    with tf.Session(graph=tf.Graph()) as sess:
        saver = tf.train.import_meta_graph(
            input_checkpoint + '.meta', clear_devices=clear_devices
        )
        saver.restore(sess, input_checkpoint)
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess,
            tf.get_default_graph().as_graph_def(),
            output_node_names.split(','),
        )
        with tf.gfile.GFile(output_graph, 'wb') as f:
            f.write(output_graph_def.SerializeToString())
        print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('output-small-alconformer', strings)
def load_graph(frozen_graph_filename):
    with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)
    return graph
g = load_graph('output-small-alconformer/frozen_model.pb')
input_nodes = [
'X_placeholder',
'X_len_placeholder',
'encoded_placeholder',
'predicted_placeholder',
'states_placeholder',
]
output_nodes = [
'greedy_decoder',
'encoded',
'ytu',
'new_states',
'padded_features',
'padded_lens',
'initial_states',
'non_blank_transcript',
'non_blank_stime'
]
inputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in input_nodes}
outputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in output_nodes}
test_sess = tf.Session(graph = g)
r = test_sess.run(outputs['greedy_decoder'], feed_dict = {inputs['X_placeholder']: padded,
inputs['X_len_placeholder']: lens})
for row in r:
    print(malaya_speech.subword.decode(subwords, row[row > 0]))
encoded_, padded_lens_, s = test_sess.run([outputs['encoded'], outputs['padded_lens'], outputs['initial_states']],
feed_dict = {inputs['X_placeholder']: padded,
inputs['X_len_placeholder']: lens})
padded_lens_ = padded_lens_ // conformer_model.conv_subsampling.time_reduction_factor
i = 0
r = transducer(
enc = encoded_[i],
total = padded_lens_[i],
initial_states = s,
encoded_placeholder = inputs['encoded_placeholder'],
predicted_placeholder = inputs['predicted_placeholder'],
states_placeholder = inputs['states_placeholder'],
ytu = outputs['ytu'],
new_states = outputs['new_states'],
sess = test_sess,
beam_width = 1,
)
malaya_speech.subword.decode(subwords, r)
from tensorflow.tools.graph_transforms import TransformGraph
transforms = ['add_default_attributes',
              'remove_nodes(op=Identity, op=CheckNumerics, op=Dropout)',
              'fold_batch_norms',
              'fold_old_batch_norms',
              'quantize_weights(fallback_min=-10, fallback_max=10)',
              'strip_unused_nodes',
              'sort_by_execution_order']
pb = 'output-small-alconformer/frozen_model.pb'
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
input_graph_def.ParseFromString(f.read())
transformed_graph_def = TransformGraph(input_graph_def,
input_nodes,
output_nodes, transforms)
with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
f.write(transformed_graph_def.SerializeToString())
g = load_graph('output-small-alconformer/frozen_model.pb.quantized')
inputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in input_nodes}
outputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in output_nodes}
test_sess = tf.Session(graph = g)
r = test_sess.run(outputs['greedy_decoder'], feed_dict = {inputs['X_placeholder']: padded,
inputs['X_len_placeholder']: lens})
for row in r:
    print(malaya_speech.subword.decode(subwords, row[row > 0]))
encoded_, padded_lens_, s = test_sess.run([outputs['encoded'], outputs['padded_lens'], outputs['initial_states']],
feed_dict = {inputs['X_placeholder']: padded,
inputs['X_len_placeholder']: lens})
padded_lens_ = padded_lens_ // conformer_model.conv_subsampling.time_reduction_factor
i = 0
r = transducer(
enc = encoded_[i],
total = padded_lens_[i],
initial_states = s,
encoded_placeholder = inputs['encoded_placeholder'],
predicted_placeholder = inputs['predicted_placeholder'],
states_placeholder = inputs['states_placeholder'],
ytu = outputs['ytu'],
new_states = outputs['new_states'],
sess = test_sess,
beam_width = 1,
)
malaya_speech.subword.decode(subwords, r)
```

---

#### import external libraries
```
import numpy as np
import pandas as pd
from sklearn import set_config
from sklearn import metrics
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
# setup the directory
import sys
sys.path.append('../../')
```
#### IPython extensions
```
%load_ext autoreload
%autoreload 2
```
#### import internal libraries
```
from preprocess.utils import preprocess as ps
from preprocess.utils import visualizations as vs
from preprocess.utils import exploratory_analysis as EDA
```
## Constants
```
PATH_FILE = '../../data/datasets/'
FILE_NAME = 'Base_de_Datos.csv'
DROP_COLS = ['Year','Publisher']
```
## Data gathering
```
df = pd.read_csv(PATH_FILE + FILE_NAME)
df.head(n=2)
```
### First filter before splitting and preprocessing
```
def filter_dataframe(df, col, val):
    '''
    Filter dataframes per column and value
    '''
    col = df.columns[col]
    val = str(val)
    df = df.query(f"{col}==@val")
    return df
df_nintendo = filter_dataframe(df, col=1, val='Nintendo')
df_nintendo.head(n=8)
df_nintendo.shape
```
## Data split
```
df_nintendo_train, df_nintendo_test = train_test_split(df_nintendo, random_state=42, test_size=0.15)
df_nintendo_train.shape , df_nintendo_test.shape
```
## Preprocessing
```
# Change the data type: from object to datetime
df_nintendo_train = ps.preprocess(df=df_nintendo_train,
drop_cols=DROP_COLS)
df_nintendo_train.head(n=2)
```
## Multiple linear regression
#### Training
```
# Dimensions of the df
df_nintendo_train.shape
# Separate the predictive and target features; contains train and validation data
y = df_nintendo_train['y']
X = df_nintendo_train.drop(['y'], axis = 1)
X.head(n=2)
# Select the optimal number of bins
length_df = df_nintendo_train.shape[0]
nbins = EDA.number_of_bins(n=length_df)
# Histogram of monthly averages
vs.histogram_plot(df=df_nintendo_train,
rand_var='y',
nbins=nbins,
title='Histogram of the target feature')
# Boxplot of global sales
vs.box_plot(df=df_nintendo_train,
rand_var='y',
title='Boxplot of target variable')
# set up the model
model = LinearRegression()
model
# train the model
model.fit(X, y)
model.get_params()
```
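The `EDA.number_of_bins` helper comes from this project's internal library and isn't shown here. A common heuristic such a helper might implement is Sturges' rule; the sketch below is illustrative (the name `sturges_bins` is an assumption, not the actual helper):

```python
import math

def sturges_bins(n):
    """Sturges' rule: k = ceil(log2(n)) + 1 bins for n observations."""
    return math.ceil(math.log2(n)) + 1

print(sturges_bins(100))   # 8 bins for 100 rows
```

Sturges' rule tends to under-bin large samples, which is why libraries also offer Freedman-Diaconis and similar alternatives.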
#### Testing
```
# test dataset
df_nintendo_test = ps.preprocess(df=df_nintendo_test,
drop_cols=DROP_COLS)
df_nintendo_test.head(n=2)
# Separate the variables
y_nintendo_test = df_nintendo_test['y'].reset_index(drop=True)
x_nintendo_test = df_nintendo_test.drop(['y'], axis = 1)
x_nintendo_test.head(n=2)
# predict
y_nintendo_pred = model.predict(x_nintendo_test)
y_nintendo_pred
# Check the Mean Absolute Error (MAE)
MAE = metrics.mean_absolute_error(y_true=y_nintendo_test, y_pred=y_nintendo_pred)
print(f"Mean Absolute Error of linear regression Nintendo: {MAE:.3f}")
# coefficients of the model
W = model.coef_
b = model.intercept_
final_model = {'W' : W,
'b' : b}
final_model
```
## Multiple linear regression for all publisher types
```
# Unique values
unique_publisher = df.Publisher.unique()
# recycling code
MAE_linear_reg = {}
predictions_linear_reg = {}
for publisher in unique_publisher:
    # training section
    df_temp = filter_dataframe(df, col=1, val=publisher)
    df_train, df_test = train_test_split(df_temp, random_state=42, test_size=0.15)
    df_train = ps.preprocess(df=df_train,
                             drop_cols=DROP_COLS)
    y = df_train['y']
    X = df_train.drop(['y'], axis=1)
    # set up the model
    model = LinearRegression()
    model.fit(X, y)
    # testing section
    df_test = ps.preprocess(df=df_test,
                            drop_cols=DROP_COLS)
    y_test = df_test['y'].reset_index(drop=True)
    x_test = df_test.drop(['y'], axis=1)
    y_pred = model.predict(x_test)
    predictions_linear_reg[publisher] = y_pred
    MAE = metrics.mean_absolute_error(y_true=y_test, y_pred=y_pred)
    MAE_linear_reg[publisher] = MAE
    print(f"Mean Absolute Error (MAE) of {publisher}: {MAE:.4f}")
# dict to dataframe: predictions
df_predictions = pd.DataFrame.from_dict(predictions_linear_reg, orient='index').reset_index()
df_predictions = df_predictions.rename(columns = {0 : 'prediction1', 1 : 'prediction2', 2 : 'prediction3',
3 : 'prediction4', 4 : 'prediction5', 'index' : 'Publisher'})
df_predictions
# dict to dataframe: MAE
df_MAE = pd.DataFrame.from_dict(MAE_linear_reg, orient='index').reset_index()
df_MAE = df_MAE.rename(columns = {0 : 'MAE', 'index' : 'Publisher'})
df_MAE
# Merge df
df_final_results = df_MAE.merge(df_predictions, on = 'Publisher')
df_final_results
# save the predictions
df_final_results.to_csv('../../analysis/predictions/df_predictions_linearreg_standard.csv', index=False)
```

---

```
import os
import glob
import numpy as np
import random
import h5py
extension = ".jpg"
folder_name = "image_raw_GT"
annotations_filename = "annotations.txt"
images_names = glob.glob(folder_name+"/*"+extension)
number_digits = 5
n = len(images_names)
names_list = [str(i).zfill(number_digits) + extension for i in range(n)]
#print(names_list)
array_from_txt = np.loadtxt(annotations_filename)
print(array_from_txt.shape)
centers = array_from_txt[0:n, 1:3]
scales = array_from_txt[0:n, 3]
parts_raw = array_from_txt[0:n, 4:20]
parts = np.reshape(parts_raw, (n, 8, 2))
rotations_raw = array_from_txt[0:n, 20:29]
rotations = np.reshape(rotations_raw, (n, 3, 3))
translations = array_from_txt[0:n, 29:33]
print(rotations_raw.shape)
print(rotations.shape)
print(translations.shape)
indices = list(range(0, n))
indices_shuffled = random.sample(indices, n)
#print(indices_shuffled)
#names_list_shuffled = names_list[indices_shuffled]
#print(names_list_shuffled)
names_list_shuffled = [names_list[i] for i in indices_shuffled]
#print(names_list_shuffled)
centers_shuffled = centers[indices_shuffled]
scales_shuffled = scales[indices_shuffled]
parts_shuffled = parts[indices_shuffled]
rotations_shuffled = rotations[indices_shuffled]
translations_shuffled = translations[indices_shuffled]
train_ratio = 0.7
train_n = round(train_ratio*n)
valid_n = n - train_n
#print(train_n, valid_n)
#train_indices = indices_shuffled[0:train_n]
#valid_indices = indices_shuffled[train_n:n]
names_list_train = names_list_shuffled[0:train_n]
names_list_valid = names_list_shuffled[train_n:n]
centers_train = centers_shuffled[0:train_n]
centers_valid = centers_shuffled[train_n:n]
scales_train = scales_shuffled[0:train_n]
scales_valid = scales_shuffled[train_n:n]
parts_train = parts_shuffled[0:train_n]
parts_valid = parts_shuffled[train_n:n]
rotations_train = rotations_shuffled[0:train_n]
rotations_valid = rotations_shuffled[train_n:n]
translations_train = translations_shuffled[0:train_n]
translations_valid = translations_shuffled[train_n:n]
with h5py.File('drones_train.h5', 'w') as hdf_drones_train:
    hdf_drones_train.create_dataset('center', data=centers_train)
    hdf_drones_train.create_dataset('scale', data=scales_train)
    hdf_drones_train.create_dataset('part', data=parts_train)
    hdf_drones_train.create_dataset('rotation', data=rotations_train)
    hdf_drones_train.create_dataset('translation', data=translations_train)
with h5py.File('drones_valid.h5', 'w') as hdf_drones_valid:
    hdf_drones_valid.create_dataset('center', data=centers_valid)
    hdf_drones_valid.create_dataset('scale', data=scales_valid)
    hdf_drones_valid.create_dataset('part', data=parts_valid)
    hdf_drones_valid.create_dataset('rotation', data=rotations_valid)
    hdf_drones_valid.create_dataset('translation', data=translations_valid)
with open('drones_train.txt', 'w') as f:
    for train_name in names_list_train:
        f.write(train_name + "\n")
with open('drones_valid.txt', 'w') as f:
    for valid_name in names_list_valid:
        f.write(valid_name + "\n")
```

---

# Working with Lists - Part 6
The light at the end of the tunnel.
```
header = ['lastName','firstName', 'timesAtBat', 'hits', 'homeRuns',
'runs', 'rbi', 'walks', 'years', 'careerTimesAtBat',
'careerHits', 'careerHomeRuns', 'careerRuns', 'careerRBI',
'careerWalks']
steve = ['Balboni','Steve',512,117,29,54,88,43,6,1750,412,100,204,276,155]
bruce = ['Bochte','Bruce',407,104,6,57,43,65,12,5233,1478,100,643,658,653]
sid = ['Bream','Sid',522,140,16,73,77,60,4,730,185,22,93,106,86]
players =[
('Balboni','Steve',[512,117,29,54,88,43,6,1750,412,100,204,276,155]),
('Bochte','Bruce',[407,104,6,57,43,65,12,5233,1478,100,643,658,653]),
('Bream','Sid',[522,140,16,73,77,60,4,730,185,22,93,106,86])
]
data = players[2][2]
```
## Filter/Map (we won't use reduce)
You can filter data from a list or map it to something else with filter and map.
### Filter
The code below shows how the `filter` function accepts a function and a list and returns only the elements for which the function returns a truthy value (wrapped in `list()` here to materialize the result).
```
def greater_fiveh(x):
    return x > 500

data
list(filter(greater_fiveh, data))
```
### Map
The code below shows how the **map** function accepts a function and a list and returns the function applied to each element (again wrapped in `list()`). In this example we map radii to areas.
```
from math import pi
def calc_area(r):
    return pi * r**2
radii = [10,20,34,23,67,55,87,98]
list(map(calc_area,radii))
```
## Lambdas
Map radii to areas using lambda instead - this works for one-line functions, but if you need more complicated processing, write the function and pass that to the map.
```
list(map(lambda r: pi *r**2,radii))
```
Lambdas can be used in the filter function as well.
```
list(filter(lambda r: r>50,radii))
```
## Two-Dimensional Lists
Python supports two-dimensional lists (list of lists). In this sample, let's make a list of lists for population growth over time.
```
population = [
    [106, 107, 111, 133, 221, 767, 1766],
    [502, 635, 809, 947, 1402, 3634, 5268],
    [2, 2, 2, 6, 13, 30, 46],
    [163, 203, 276, 408, 547, 729, 628],
    [2, 7, 26, 82, 172, 307, 392],
    [16, 24, 38, 74, 167, 511, 809]
]
continents =['Africa','Asia','Australia','Europe','North America','South America']
header = ['Continent','1750','1800','1850','1900','1950','2000','2050']
```
Build a table with a two-dimensional list and nested for loops. I had to play around with the width values to get everything to line up; it's still not perfect.
```
for h in header:
    print("%13s" % h, end=' ')
print()
for i in range(6):
    print("%13s" % continents[i], end=' ')
    for j in range(7):
        print("%14s" % population[i][j], end=' ')
    print()
```
Here is a version using PrettyTable. Look at the 10/7 notebook for discussion about a 'bug' with this code.
```
from prettytable import PrettyTable

tab = PrettyTable()
tab.field_names = header
for h in header:
    tab.align[h] = 'r'
for i in range(len(population)):
    population[i].insert(0, continents[i])
    tab.add_row(population[i])
print(tab)
population
```
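The loop above mutates `population` in place with `insert`, which is a likely source of the 'bug' mentioned: re-running the cell keeps prepending continent names to each row. A sketch of a non-mutating alternative (plain lists here so it runs without PrettyTable; each element of `rows` would go to `tab.add_row` exactly as before):

```python
continents = ['Africa', 'Asia', 'Australia', 'Europe', 'North America', 'South America']
population = [
    [106, 107, 111, 133, 221, 767, 1766],
    [502, 635, 809, 947, 1402, 3634, 5268],
    [2, 2, 2, 6, 13, 30, 46],
    [163, 203, 276, 408, 547, 729, 628],
    [2, 7, 26, 82, 172, 307, 392],
    [16, 24, 38, 74, 167, 511, 809],
]

# Build display rows by concatenation instead of insert(), so the
# original population lists are untouched and the cell is re-runnable.
rows = [[continent] + counts for continent, counts in zip(continents, population)]
print(rows[0][:2])   # ['Africa', 106]
```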
## Zipping
- first_names = ['Steve','Bruce','Sid']
- last_names = ['Balboni','Bochte','Bream']
- home_runs = [29,6,16]
The zip() function lets you collect parallel lists and merge them into a list of tuples.
```
first_names = ['Steve','Bruce','Sid','Anthony']
last_names = ['Balboni','Bochte','Bream']
home_runs = [29,6,16,17]
list(zip(first_names,last_names,home_runs))
```
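Note that the lists above are deliberately mismatched: `first_names` and `home_runs` have four entries but `last_names` only three, and `zip` silently stops at the shortest input. If you would rather pad than truncate, `itertools.zip_longest` does that:

```python
from itertools import zip_longest

first_names = ['Steve', 'Bruce', 'Sid', 'Anthony']
last_names = ['Balboni', 'Bochte', 'Bream']
home_runs = [29, 6, 16, 17]

# zip stops at the shortest list, so Anthony is dropped:
print(len(list(zip(first_names, last_names, home_runs))))   # 3

# zip_longest pads missing values instead:
padded = list(zip_longest(first_names, last_names, home_runs, fillvalue='?'))
print(padded[-1])   # ('Anthony', '?', 17)
```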

---

# Providing your notebook
## It's all JSON
```
!head -20 tour.ipynb
```
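A notebook file is just a JSON document with a `cells` list. A minimal example of the schema, built and round-tripped with the standard `json` module (the cell contents here are made up for illustration):

```python
import json

# Minimal notebook-format-4 document (hypothetical contents)
nb = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": ["# A heading"]},
        {"cell_type": "code", "metadata": {}, "execution_count": None,
         "outputs": [], "source": ["print('hello')"]},
    ],
}

text = json.dumps(nb, indent=1)   # roughly what you'd see in the .ipynb file
roundtrip = json.loads(text)
print([cell["cell_type"] for cell in roundtrip["cells"]])   # ['markdown', 'code']
```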
# Deliver as HTML
## nbconvert
```
$ jupyter nbconvert tour.ipynb
[NbConvertApp] Converting notebook tour.ipynb to html
[NbConvertApp] Writing 219930 bytes to tour.html
```
[tour.html](tour.html)
## hosted static
### Raw HTML online
### [nbviewer](https://github.com/jupyter/nbviewer)
### Github
## Download link
# Live notebooks
## Hosted live notebooks
[Wakari](https://wakari.io/)
## Notebook server
```
$ jupyter notebook
[I 09:28:21.390 NotebookApp] Serving notebooks from local directory: /Users/catherinedevlin/werk/tech-talks/jupyter-notebook
[I 09:28:21.390 NotebookApp] 0 active kernels
[I 09:28:21.390 NotebookApp] The IPython Notebook is running at: http://localhost:8888/
[I 09:28:21.390 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
```
## [tmpnb](https://github.com/jupyter/tmpnb)
Service that launches Docker containers
[try.jupyter.org](https://try.jupyter.org)
## [Thebe](https://oreillymedia.github.io/thebe/)
## S5 slideshow
```
$ jupyter nbconvert --to slides tour.ipynb
[NbConvertApp] Converting notebook tour.ipynb to slides
[NbConvertApp] Writing 223572 bytes to tour.slides.html
```
[tour.slides.html](tour.slides.html)
<table> <tr><th></th><th>Description</th><th>Power</th>
<th>Ease for user</th><th>Ease for you</th></tr>
<tr><td><a href="https://ipython.org/ipython-doc/3/notebook/nbconvert.html">jupyter nbconvert</a></td><td>Generate static HTML from notebook</td>
<td class="rating power">+</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">++</td>
</tr>
<tr><td><a href="http://nbviewer.ipython.org">nbviewer</a></td><td>Renders from a JSON URL</td>
<td class="rating power">+</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">+++</td>
</tr>
<tr><td>GitHub</td>
<td>Automatic rendering at GitHub URL</td>
<td class="rating power">+</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">++++</td>
</tr>
<tr><td>Download link</td><td>From any rendered notebook</td>
<td class="rating power">++++</td>
<td class="rating user-ease">+</td>
<td class="rating dev-ease">+++</td>
</tr>
<tr><td>Commercially hosted</td><td><a href="https://wakari.io/">Wakari</a>, etc.</td>
<td class="rating power">+++</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">+++</td>
</tr>
<tr><td><a href="http://jupyter-notebook.readthedocs.org/en/latest/public_server.html">Notebook server</a></td><td></td>
<td class="rating power">+++</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">++</td>
</tr>
<tr><td><a href="https://github.com/jupyter/tmpnb">tmpnb</a>
<a href="https://try.jupyter.org/">(try.jupyter.org)</a></td>
<td>Dockerized notebook server per connection</td>
<td class="rating power">+++</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">++</td>
</tr>
<tr><td><a href="https://oreillymedia.github.io/thebe/">Thebe</a></td><td>Connects HTML to notebook server</td>
<td class="rating power">+++</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">++</td>
</tr>
<tr><td><a href="https://ipython.org/ipython-doc/3/notebook/nbconvert.html">jupyter convert --to slides</a></td><td>S5</td>
<td class="rating power">+</td>
<td class="rating user-ease">++++</td>
<td class="rating dev-ease">+++</td>
</tr>
<tr><td><a href="https://github.com/damianavila/RISE">RISE</a></td><td>S5 with executable cells</td>
<td class="rating power">+++</td>
<td class="rating user-ease">++++</td>
<td class="rating dev-ease">++</td>
</tr>
</table>

---

```
from pathlib import Path
from matplotlib import rcParams
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = ['Arial']
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pyprojroot
import seaborn as sns
import searchnets
def cm_to_inches(cm):
    return cm / 2.54
mpl.style.use(['seaborn-darkgrid', 'seaborn-paper'])
```
paths
```
SOURCE_DATA_ROOT = pyprojroot.here('results/searchstims/source_data/3stims_white_background')
FIGURES_ROOT = pyprojroot.here('docs/paper/figures/experiment-1/searchstims-3stims-white-background')
```
constants
```
LEARNING_RATE = 1e-3
NET_NAMES = [
'alexnet',
]
METHODS = [
'initialize',
'transfer'
]
MODES = [
'classify',
]
```
## load source data
Get just the transfer learning results, then group by network, stimulus, and set size, and compute the mean accuracy for each set size.
```
df_all = pd.read_csv(
SOURCE_DATA_ROOT.joinpath('all.csv')
)
stim_acc_diff_df = pd.read_csv(
SOURCE_DATA_ROOT.joinpath('stim_acc_diff.csv')
)
net_acc_diff_df = pd.read_csv(
SOURCE_DATA_ROOT.joinpath('net_acc_diff.csv')
)
df_acc_diff_by_stim = pd.read_csv(
SOURCE_DATA_ROOT.joinpath('acc_diff_by_stim.csv'),
index_col='net_name'
)
```
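The grouping described above can be sketched with a toy frame (the column names `net_name`, `method`, `stimulus`, `set_size`, and `accuracy` match `all.csv`; the values here are invented):

```python
import pandas as pd

df_toy = pd.DataFrame({
    'net_name': ['alexnet'] * 4,
    'method':   ['transfer', 'transfer', 'initialize', 'transfer'],
    'stimulus': ['RVvGV'] * 4,
    'set_size': [1, 1, 1, 2],
    'accuracy': [0.9, 0.8, 0.7, 0.6],
})

# keep only the transfer-learning results, then mean accuracy per set size
transfer = df_toy[df_toy['method'] == 'transfer']
mean_acc = transfer.groupby(['net_name', 'stimulus', 'set_size'])['accuracy'].mean()
print(mean_acc)
```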
columns will be stimuli, in increasing order of accuracy drop across models
```
# not sure why my sorting isn't working right in the script that generates source data,
# but it's clear by eye the effect size is in the right order
# so I'm just typing them manually
FIG_COLUMNS = ['RVvGV', 'RVvRHGV', '2_v_5']
```
rows will be nets, in decreasing order of accuracy drops across stimuli
```
FIG_ROWS = net_acc_diff_df['net_name'].values.tolist()
print(FIG_ROWS)
```
## plot figure
```
pal = sns.color_palette("rocket", n_colors=6)
len(pal)
cmaps = {}
for net in ('alexnet', 'CORnet_Z', 'VGG16', 'CORnet_S'):
    cmaps[net] = {
        'transfer': {
            'unit_both': pal[3],
            'mn_both': pal[2],
        },
        'initialize': {
            'unit_both': pal[5],
            'mn_both': pal[4],
        }
    }
UNIT_COLORS = {
'present': 'violet',
'absent': 'lightgreen',
'both': 'darkgrey'
}
# default colors used for plotting mean across sampling units in each condition
MN_COLORS = {
'present': 'magenta',
'absent': 'lawngreen',
'both': 'black'
}
def metric_v_set_size_df(df, net_name, method, stimulus, metric, conditions,
                         unit_colors=UNIT_COLORS, mn_colors=MN_COLORS,
                         ax=None, title=None, save_as=None, figsize=(10, 5),
                         set_xlabel=False, set_ylabel=False, set_ylim=True,
                         ylim=(0, 1.1), yticks=None, plot_mean=True, add_legend=False, dpi=600):
    """plot accuracy as a function of visual search task set size
    for models trained on a single task or dataset

    Accepts a Pandas dataframe and column names that determine what to plot.
    Dataframe is produced by searchstims.utils.general.results_csv function.

    Parameters
    ----------
    df : pandas.Dataframe
        path to results.gz file saved after measuring accuracy of trained networks
        on test set of visual search stimuli
    net_name : str
        name of neural net architecture. Must be a value in the 'net_name' column
        of df.
    method : str
        method used for training. Must be a value in the 'method' column of df.
    stimulus : str
        type of visual search stimulus, e.g. 'RVvGV', '2_v_5'. Must be a value in
        the 'stimulus' column of df.
    metric : str
        metric to plot. One of {'acc', 'd_prime'}.
    conditions : list, str
        conditions to plot. One of {'present', 'absent', 'both'}. Corresponds to
        'target_condition' column in df.

    Other Parameters
    ----------------
    unit_colors : dict
        mapping of conditions to colors used for plotting 'sampling units', i.e. each trained
        network. Default is UNIT_COLORS defined in this module.
    mn_colors : dict
        mapping of conditions to colors used for plotting mean across 'sampling units'
        (i.e., each trained network). Default is MN_COLORS defined in this module.
    ax : matplotlib.Axis
        axis on which to plot figure. Default is None, in which case a new figure with
        a single axis is created for the plot.
    title : str
        string to use as title of figure. Default is None.
    save_as : str
        path to directory where figure should be saved. Default is None, in which
        case figure is not saved.
    figsize : tuple
        (width, height) in inches. Default is (10, 5). Only used if ax is None and a new
        figure is created.
    set_xlabel : bool
        if True, set the value of xlabel to "set size". Default is False.
    set_ylabel : bool
        if True, set the value of ylabel to metric. Default is False.
    set_ylim : bool
        if True, set the y-axis limits to the value of ylim.
    ylim : tuple
        with two elements, limits for y-axis. Default is (0, 1.1).
    plot_mean : bool
        if True, find mean accuracy and plot as a separate solid line. Default is True.
    add_legend : bool
        if True, add legend to axis. Default is False.

    Returns
    -------
    None
    """
    if ax is None:
        fig, ax = plt.subplots(dpi=dpi, figsize=figsize)
    df = df[(df['net_name'] == net_name)
            & (df['method'] == method)
            & (df['stimulus'] == stimulus)]
    if not all(
        [df['target_condition'].isin([targ_cond]).any() for targ_cond in conditions]
    ):
        raise ValueError(f'not all target conditions specified were found in dataframe.'
                         f'Target conditions specified were: {conditions}')
    handles = []
    labels = []
    set_sizes = None  # because we verify set sizes is the same across conditions
    net_nums = df['net_number'].unique()
    # get metric across set sizes for each training replicate
    # we end up with a list of vectors we can pass to ax.plot,
    # so that the 'line' for each training replicate gets plotted
    for targ_cond in conditions:
        metric_vals = []
        for net_num in net_nums:
            metric_vals.append(
                df[(df['net_number'] == net_num)
                   & (df['target_condition'] == targ_cond)][metric].values
            )
            curr_set_size = df[(df['net_number'] == net_num)
                               & (df['target_condition'] == targ_cond)]['set_size'].values
            if set_sizes is None:
                set_sizes = curr_set_size
            else:
                if not np.array_equal(set_sizes, curr_set_size):
                    raise ValueError(
                        f'set size for net number {net_num}, '
                        f'target condition {targ_cond}, did not match others'
                    )
        for row_num, arr_metric in enumerate(metric_vals):
            x = np.arange(1, len(set_sizes) + 1) * 2
            # just label first row, so only one entry shows up in legend
            if row_num == 0:
                label = f'training replicate, {method}'
            else:
                label = None
            ax.plot(x, arr_metric, color=unit_colors[targ_cond], linewidth=1,
                    linestyle='--', alpha=0.95, label=label)
            ax.set_xticks(x)
            ax.set_xticklabels(set_sizes)
            ax.set_xlim([0, x.max() + 2])
        if plot_mean:
            mn_metric = np.asarray(metric_vals).mean(axis=0)
            if targ_cond == 'both':
                mn_metric_label = f'mean, {method}'
            else:
                mn_metric_label = f'mean {metric}, {targ_cond}, {method}'
            labels.append(mn_metric_label)
            mn_metric_line, = ax.plot(x, mn_metric,
                                      color=mn_colors[targ_cond], linewidth=1.5,
                                      alpha=0.65,
                                      label=mn_metric_label)
            ax.set_xticks(x)
            ax.set_xticklabels(set_sizes)
            ax.set_xlim([0, x.max() + 2])
            handles.append(mn_metric_line)
    if title:
        ax.set_title(title)
    if set_xlabel:
        ax.set_xlabel('set size')
    if set_ylabel:
        ax.set_ylabel(metric)
    if yticks is not None:
        ax.set_yticks(yticks)
    if set_ylim:
        ax.set_ylim(ylim)
    if add_legend:
        ax.legend(handles=handles,
                  labels=labels,
                  loc='lower left')
    if save_as:
        plt.savefig(save_as)
FIGSIZE = tuple(cm_to_inches(size) for size in (7, 2.5))
DPI = 300
n_rows = len(FIG_ROWS)
n_cols = len(FIG_COLUMNS)
fig, ax = plt.subplots(n_rows, n_cols, sharey=True, figsize=FIGSIZE, dpi=DPI)
ax = ax.reshape(n_rows, n_cols)
fig.subplots_adjust(hspace=0.5)
LABELSIZE = 6
XTICKPAD = 2
YTICKPAD = 1
for ax_ in ax.ravel():
    ax_.xaxis.set_tick_params(pad=XTICKPAD, labelsize=LABELSIZE)
    ax_.yaxis.set_tick_params(pad=YTICKPAD, labelsize=LABELSIZE)

STIM_FONTSIZE = 4
add_legend = False

for row, net_name in enumerate(FIG_ROWS):
    df_this_net = df_all[df_all['net_name'] == net_name]
    for method in ['transfer', 'initialize']:
        for col, stim_name in enumerate(FIG_COLUMNS):
            unit_colors = {'both': cmaps[net_name][method]['unit_both']}
            mn_colors = {'both': cmaps[net_name][method]['mn_both']}
            ax[row, col].set_axisbelow(True)  # so grid is behind
            metric_v_set_size_df(df=df_this_net,
                                 net_name=net_name,
                                 method=method,
                                 stimulus=stim_name,
                                 metric='accuracy',
                                 conditions=['both'],
                                 unit_colors=unit_colors,
                                 mn_colors=mn_colors,
                                 set_ylim=True,
                                 ax=ax[row, col],
                                 ylim=(0.4, 1.1),
                                 yticks=(0.5, 0.6, 0.7, 0.8, 0.9, 1.0),
                                 add_legend=add_legend)
            if row == 0:
                title = stim_name.replace('_', ' ')
                ax[row, col].set_title(title,
                                       fontsize=STIM_FONTSIZE,
                                       pad=5)  # pad so we can put image over title without it showing
            if col == 0:
                ax[row, col].set_ylabel('accuracy')
                net_name_for_fig = net_name.replace('_', ' ')
                ax[row, col].text(0, 0.15, net_name_for_fig, fontweight='bold', fontsize=8)

# add a big axis, hide frame
big_ax = fig.add_subplot(111, frameon=False)
# hide tick and tick label of the big axis
big_ax.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
big_ax.grid(False)
handles, labels = ax[0, 0].get_legend_handles_labels()
LEGEND_FONTSIZE = 4
BBOX_TO_ANCHOR = (0.0125, 0.2, 0.8, .075)
big_ax.legend(handles, labels,
              bbox_to_anchor=BBOX_TO_ANCHOR,
              ncol=2, mode="expand", frameon=True,
              borderaxespad=0., fontsize=LEGEND_FONTSIZE);
big_ax.set_xlabel("set size", labelpad=0.1);

for ext in ('svg', 'png'):
    fig_path = FIGURES_ROOT.joinpath(
        f'acc-v-set-size/acc-v-set-size.{ext}'
    )
    plt.savefig(fig_path)
```

---

# Modeling and Simulation in Python
Chapter 8
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# If we're running on Colab, install modsimpy
# https://pypi.org/project/modsimpy/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
    !pip install pint
    !pip install modsimpy
    !mkdir figs
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# import functions from the modsim.py module
from modsim import *
```
### Functions from the previous chapter
```
def plot_results(census, un, timeseries, title):
    """Plot the estimates and the model.

    census: TimeSeries of population estimates
    un: TimeSeries of population estimates
    timeseries: TimeSeries of simulation results
    title: string
    """
    plot(census, ':', label='US Census')
    plot(un, '--', label='UN DESA')
    plot(timeseries, color='gray', label='model')
    decorate(xlabel='Year',
             ylabel='World population (billion)',
             title=title)

def run_simulation(system, update_func):
    """Simulate the system using any update function.

    system: System object
    update_func: function that computes the population next year

    returns: TimeSeries
    """
    results = TimeSeries()
    results[system.t_0] = system.p_0
    for t in linrange(system.t_0, system.t_end):
        results[t+1] = update_func(results[t], t, system)
    return results
```
### Reading the data
```
# Get the data file
import os
filename = 'World_population_estimates2.csv'
if not os.path.exists(filename):
    !wget https://raw.githubusercontent.com/AllenDowney/ModSimPy/master/notebooks/data/World_population_estimates2.csv

def read_table2(filename):
    tables = pd.read_html(filename, header=0, index_col=0, decimal='M')
    table2 = tables[2]
    table2.columns = ['census', 'prb', 'un', 'maddison',
                      'hyde', 'tanton', 'biraben', 'mj',
                      'thomlinson', 'durand', 'clark']
    return table2
#table2 = read_table2()
#table2.to_csv('data/World_population_estimates2.csv')
table2 = pd.read_csv('World_population_estimates2.csv')
table2.index = table2.Year
table2.head()
un = table2.un / 1e9
census = table2.census / 1e9
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Estimated world population')
```
### Running the quadratic model
Here's the update function for the quadratic growth model with parameters `alpha` and `beta`.
```
def update_func_quad(pop, t, system):
    """Update population based on a quadratic model.

    pop: current population in billions
    t: what year it is
    system: system object with model parameters
    """
    net_growth = system.alpha * pop + system.beta * pop**2
    return pop + net_growth
```
Extract the starting time and population.
```
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = get_first_value(census)
```
Initialize the system object.
```
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha=0.025,
beta=-0.0018)
```
Run the model and plot results.
```
results = run_simulation(system, update_func_quad)
plot_results(census, un, results, 'Quadratic model')
```
### Generating projections
To generate projections, all we have to do is change `t_end`
```
system.t_end = 2250
results = run_simulation(system, update_func_quad)
plot_results(census, un, results, 'World population projection')
savefig('figs/chap08-fig01.pdf')
```
The population in the model converges on the equilibrium population, `-alpha/beta`
```
results[system.t_end]
-system.alpha / system.beta
```
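Why `-alpha/beta`: the equilibrium follows from setting net growth to zero in the quadratic model,

$$\alpha p + \beta p^2 = p\,(\alpha + \beta p) = 0,$$

so besides the trivial $p = 0$ the fixed point is $p^* = -\alpha/\beta$. With $\alpha = 0.025$ and $\beta = -0.0018$ that is about $13.9$ billion, matching the value the simulation converges to.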
**Exercise:** What happens if we start with an initial population above the carrying capacity, like 20 billion? Run the model with initial populations between 1 and 20 billion, and plot the results on the same axes.
```
# Solution
p0_array = linspace(1, 25, 11)

for system.p_0 in p0_array:
    results = run_simulation(system, update_func_quad)
    plot(results)

decorate(xlabel='Year',
         ylabel='Population (billions)',
         title='Projections with hypothetical starting populations')
```
### Comparing projections
We can compare the projection from our model with projections produced by people who know what they are doing.
```
# Get the data file
import os
filename = 'World_population_estimates3.csv'
if not os.path.exists(filename):
    !wget https://raw.githubusercontent.com/AllenDowney/ModSimPy/master/notebooks/data/World_population_estimates3.csv

def read_table3(filename='data/World_population_estimates.html'):
    tables = pd.read_html(filename, header=0, index_col=0, decimal='M')
    table3 = tables[3]
    table3.columns = ['census', 'prb', 'un']
    return table3
#table3 = read_table3()
#table3.to_csv('data/World_population_estimates3.csv')
table3 = pd.read_csv('World_population_estimates3.csv')
table3.index = table3.Year
table3.head()
```
`NaN` is a special value that represents missing data, in this case because some agencies did not publish projections for some years.
This function plots projections from the UN DESA and U.S. Census. It uses `dropna` to remove the `NaN` values from each series before plotting it.
```
def plot_projections(table):
    """Plot world population projections.

    table: DataFrame with columns 'un' and 'census'
    """
    census_proj = table.census / 1e9
    un_proj = table.un / 1e9
    plot(census_proj.dropna(), ':', color='C0', label='US Census')
    plot(un_proj.dropna(), '--', color='C1', label='UN DESA')
```
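A quick self-contained illustration of what `dropna` does to a projection series with gaps (the numbers here are made up):

```python
import numpy as np
import pandas as pd

proj = pd.Series([6.1, np.nan, 8.0, np.nan, 9.7],
                 index=[2000, 2010, 2020, 2030, 2040])

cleaned = proj.dropna()   # rows with NaN are removed, the index is kept
print(cleaned.index.tolist())   # [2000, 2020, 2040]
```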
Run the model until 2100, which is as far as the other projections go.
```
system = System(t_0=t_0,
t_end=2100,
p_0=p_0,
alpha=0.025,
beta=-0.0018)
results = run_simulation(system, update_func_quad)
plt.axvspan(1950, 2016, color='C0', alpha=0.05)
plot_results(census, un, results, 'World population projections')
plot_projections(table3)
savefig('figs/chap08-fig02.pdf')
```
People who know what they are doing expect the growth rate to decline more sharply than our model projects.
## Exercises
**Exercise:** The net growth rate of world population has been declining for several decades. That observation suggests one more way to generate projections, by extrapolating observed changes in growth rate.
The `modsim` library provides a function, `compute_rel_diff`, that computes relative differences of the elements in a sequence.
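Assuming `compute_rel_diff` computes the relative change `(x[t+1] - x[t]) / x[t]` between consecutive elements, a minimal equivalent looks like this (a hypothetical re-implementation for illustration, not the `modsim` source):

```python
import pandas as pd

def rel_diff(series):
    """Relative change between consecutive elements: (x[t+1] - x[t]) / x[t].

    Returns a Series indexed by the earlier label of each pair.
    """
    diffs = series.values[1:] - series.values[:-1]
    return pd.Series(diffs / series.values[:-1], index=series.index[:-1])

pop = pd.Series([3.0, 3.06, 3.12], index=[1960, 1961, 1962])
print(rel_diff(pop))  # 1960 -> ~0.0200, 1961 -> ~0.0196
```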
Here's how we can use it to compute the relative differences in the `census` and `un` estimates:
```
alpha_census = compute_rel_diff(census)
plot(alpha_census, label='US Census')
alpha_un = compute_rel_diff(un)
plot(alpha_un, label='UN DESA')
decorate(xlabel='Year', ylabel='Net growth rate')
```
Other than a bump around 1990, net growth rate has been declining roughly linearly since 1965. As an exercise, you can use this data to make a projection of world population until 2100.
1. Define a function, `alpha_func`, that takes `t` as a parameter and returns an estimate of the net growth rate at time `t`, based on a linear function `alpha = intercept + slope * t`. Choose values of `slope` and `intercept` to fit the observed net growth rates since 1965.
2. Call your function with a range of `ts` from 1960 to 2020 and plot the results.
3. Create a `System` object that includes `alpha_func` as a system variable.
4. Define an update function that uses `alpha_func` to compute the net growth rate at the given time `t`.
5. Test your update function with `t_0 = 1960` and `p_0 = census[t_0]`.
6. Run a simulation from 1960 to 2100 with your update function, and plot the results.
7. Compare your projections with those from the US Census and UN.
```
# Solution

def alpha_func(t):
    intercept = 0.02
    slope = -0.00021
    return intercept + slope * (t - 1970)

# Solution

ts = linrange(1960, 2020)
alpha_model = TimeSeries(alpha_func(ts), ts)

plot(alpha_model, color='gray', label='model')
plot(alpha_census)
plot(alpha_un)
decorate(xlabel='Year', ylabel='Net growth rate')

# Solution

t_0 = 1960
t_end = 2100
p_0 = census[t_0]

# Solution

system = System(t_0=t_0,
                t_end=t_end,
                p_0=p_0,
                alpha_func=alpha_func)

# Solution

def update_func_alpha(pop, t, system):
    """Update population based on a linear model of net growth rate.

    pop: current population in billions
    t: what year it is
    system: system object with model parameters
    """
    net_growth = system.alpha_func(t) * pop
    return pop + net_growth

# Solution

update_func_alpha(p_0, t_0, system)

# Solution

results = run_simulation(system, update_func_alpha);

# Solution

plot_results(census, un, results, 'World population projections')
plot_projections(table3)
```
**Related viewing:** You might be interested in this [video by Hans Rosling about the demographic changes we expect in this century](https://www.youtube.com/watch?v=ezVk1ahRF78).
# Oddstradamus
### Good odds and where to find them
### Modelling (Away)
In this notebook, the three remaining data frames are used to classify matches as either an away win ('2') or the double chance of a draw or home win ('1X'). The procedure is the same as in the [previous notebook](https://github.com/mue94/oddstradamus/blob/main/05_Model_Home.ipynb).
Here is an overview of the machine learning algorithms used within this notebook. Point 3 for each data frame is the dimensionality reduction (PCA) step.
| 1 Full-Dataframe | 2 50%-Dataframe | 3 Favorite-Dataframe |
| - | - | - |
| 1 Dummy Classifier | 1 Dummy Classifier | 1 Dummy Classifier |
| 2 Logistic Regression | 2 Logistic Regression | 2 Logistic Regression |
| 3 Dimensionality Reduction (PCA) | 3 Dimensionality Reduction (PCA) | 3 Dimensionality Reduction (PCA) |
| 4 Logistic Regression (PCA) | 4 Logistic Regression (PCA) | 4 Logistic Regression (PCA) |
| 5 Support Vector Machine (PCA) | 5 Support Vector Machine (PCA) | 5 Support Vector Machine (PCA) |
| 6 Random Forest (PCA) | 6 Random Forest (PCA) | 6 Random Forest (PCA) |
| 7 Extra Trees (PCA) | 7 Extra Trees (PCA) | 7 Extra Trees (PCA) |
| 8 KNN (PCA) | 8 KNN (PCA) | 8 KNN (PCA) |
| 9 AdaBoost (PCA) | 9 AdaBoost (PCA) | 9 AdaBoost (PCA) |
| 10 XGBoost (PCA) | 10 XGBoost (PCA) | 10 XGBoost (PCA) |
```
# import packages
# basic modules
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
from joblib import dump, load
# scikit-learn
from sklearn.dummy import DummyClassifier
from sklearn.metrics import classification_report
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
# models
import xgboost as xgb
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, ExtraTreesClassifier
#settings
warnings.filterwarnings('ignore')
```
### Metric
```
# defining the metrics

# risk-averse
def profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out, insert=1):
    profit = 0
    for i in range(len(actual)):
        if predicted[i] == '2':
            if actual[i] == predicted[i]:
                profit = profit + (((insert - (1/dc_odd[i])) * (1/avg_odd_fav[i])) * float(away_odd[i])) - ((insert - (1/dc_odd[i])) * (1/avg_odd_fav[i]))
            else:
                profit = profit - ((insert - (1/dc_odd[i])) * (1/avg_odd_fav[i]))
        if predicted[i] == '1X':
            if actual[i] == predicted[i]:
                profit = profit + (((insert - (1/away_odd[i])) * (1/avg_odd_out[i])) * float(dc_odd[i])) - ((insert - (1/away_odd[i])) * (1/avg_odd_out[i]))
            else:
                profit = profit - ((insert - (1/away_odd[i])) * (1/avg_odd_out[i]))
    print(round(profit, 2), 'units in', len(actual), 'games')
    return print(round(profit / len(actual) * 100, 2), '% gain per game')

# risk-taking
def profit_metric_fav(actual, predicted, away_odd, dc_odd, insert=1):
    profit = 0
    for i in range(len(actual)):
        if predicted[i] == '2':
            if actual[i] == predicted[i]:
                profit = profit + ((insert - (1/dc_odd[i])) * float(away_odd[i]) - (insert - (1/dc_odd[i])))
            else:
                profit = profit - (insert - (1/dc_odd[i]))
        if predicted[i] == '1X':
            if actual[i] == predicted[i]:
                profit = profit + (insert - (1/away_odd[i])) * float(dc_odd[i]) - (insert - (1/away_odd[i]))
            else:
                profit = profit - (insert - (1/away_odd[i]))
    print(round(profit, 2), 'units in', len(actual), 'games')
    return print(round(profit / len(actual) * 100, 2), '% gain per game')
```
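To make the staking arithmetic concrete, here is a standalone sketch of the payoff for a single bet at decimal odds (my own helper, mirroring the `stake * odd - stake` pattern in the metric above; the example numbers are hypothetical):

```python
def bet_profit(odd, stake, won):
    """Profit of one bet at decimal odds: stake*(odd-1) if it wins, -stake if it loses."""
    return stake * (odd - 1) if won else -stake

# Risk-averse stake as used above: scaled down by the implied probabilities
insert, dc_odd, avg_odd_fav = 1, 1.5, 2.0
stake = (insert - 1/dc_odd) * (1/avg_odd_fav)   # (1 - 0.667) * 0.5 = 1/6

print(round(bet_profit(2.5, stake, won=True), 3))   # win at odds 2.5 -> 0.25
print(round(bet_profit(2.5, stake, won=False), 3))  # loss -> -0.167
```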
## 1 Full Dataframe
```
# loading the dataframe
X_train_full = pd.read_csv('Data/model_away_csv/X_train_away_full.csv')
X_test_full = pd.read_csv('Data/model_away_csv/X_test_away_full.csv')
y_train_full = pd.read_csv('Data/model_away_csv/y_train_away_full.csv')
y_test_full = pd.read_csv('Data/model_away_csv/y_test_away_full.csv')
y_train_full = y_train_full['two_way_a_odd']
y_test_full = y_test_full['two_way_a_odd']
# assigning the variables of the profit metric
actual = y_test_full
away_odd = X_test_full.MaxAway
dc_odd = X_test_full.dc_home
avg_odd_fav = X_test_full.awaywin_avg_odd
avg_odd_out = X_test_full.homewin_avg_odd
```
#### 1.1 Dummy Classifier
```
# create a Dummy-Model
model_dummy_full = DummyClassifier(strategy = 'most_frequent')
# fit the model
model_dummy_full.fit(X_train_full, y_train_full)
# predict the test data / assign the predicted-variable for the profit metric
y_pred_dummy_full = model_dummy_full.predict(X_test_full)
predicted = y_pred_dummy_full.tolist()
# print the classification report
print(classification_report(y_test_full, y_pred_dummy_full))
# print the result of the profit metric
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
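As a reminder of what this baseline does: `most_frequent` ignores the features entirely and always predicts the majority class, so any real model should at least beat it. A minimal sketch with made-up labels:

```python
import numpy as np
from sklearn.dummy import DummyClassifier

X = np.zeros((5, 2))              # features are ignored entirely
y = ['1X', '1X', '1X', '2', '2']  # majority class is '1X'

dummy = DummyClassifier(strategy='most_frequent')
dummy.fit(X, y)
print(dummy.predict(X))           # ['1X' '1X' '1X' '1X' '1X']
```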
#### 1.2 Logistic Regression
```
# create a Logistic Regression Model / fit the model
logreg_full = LogisticRegression()
logreg_full.fit(X_train_full, y_train_full)
# predict the test data / assign the predicted-variable for the profit metric
y_pred_logreg_full = logreg_full.predict(X_test_full)
predicted = y_pred_logreg_full.tolist()
# print the classification report
print(classification_report(y_test_full, y_pred_logreg_full))
# print the result of the profit metric
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 1.3 Dimensionality Reduction
##### Data Scaling
PCA requires scaling/normalization of the data to work properly.
```
# assign the standard scaler
scaler = StandardScaler()
# scale train data with standard scaler
X_train_full_scaled = scaler.fit_transform(X_train_full)
df_full_scaled = pd.DataFrame(data=X_train_full_scaled,columns=X_train_full.columns)
```
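Why this matters: PCA picks directions of maximum variance, so an unscaled feature measured in large units would dominate the components. A small sketch of what `StandardScaler` does, on made-up numbers:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.50, 2000.0],
              [1.80, 3500.0],
              [1.65, 2750.0]])   # two features on wildly different scales

X_scaled = StandardScaler().fit_transform(X)

# Each column now has mean ~0 and unit variance, so PCA treats them equally
print(X_scaled.mean(axis=0))  # ~ [0, 0]
print(X_scaled.std(axis=0))   # ~ [1, 1]
```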
##### Principal Component Analysis
```
# create a PCA object
pca_full = PCA(n_components=None)
# fit the PCA object
df_full_scaled_pca = pca_full.fit(df_full_scaled)
# plot the explained variance ratio for each principal component
plt.figure(figsize=(10,6))
plt.scatter(x=[i+1 for i in range(len(df_full_scaled_pca.explained_variance_ratio_))],
            y=df_full_scaled_pca.explained_variance_ratio_,
            s=200, alpha=0.75, c='orange', edgecolor='k')
plt.grid(True)
plt.title("Explained variance ratio of the \nfitted principal component vector\n",fontsize=25)
plt.xlabel("Principal components",fontsize=15)
plt.xticks([i+1 for i in range(len(df_full_scaled_pca.explained_variance_ratio_))],fontsize=15)
plt.yticks(fontsize=15)
plt.ylabel("Explained variance ratio",fontsize=15)
plt.show()
# transform the scaled data set using the fitted PCA object
X_train_full_scaled_trans = pca_full.transform(df_full_scaled)
# put it in a data frame
X_train_full_scaled_trans = pd.DataFrame(data=X_train_full_scaled_trans)
# assign the most meaningful variables
variables_full = [0,1,2,3,4,5,6]
```
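The choice of the first seven components here is read off the scree plot by eye. A more mechanical alternative (a sketch on synthetic data, not what this notebook does) is to keep the smallest number of components that covers a target share of the variance:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))
X[:, 0] *= 5  # give one direction much more variance than the rest

pca = PCA(n_components=None).fit(X)

# Smallest number of components whose cumulative explained variance reaches 90%
cumvar = np.cumsum(pca.explained_variance_ratio_)
n_keep = int(np.searchsorted(cumvar, 0.90) + 1)
print(n_keep, 'components cover', round(cumvar[n_keep - 1], 3), 'of the variance')
```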
#### 1.4 Logistic Regression (PCA)
```
# create a Logistic Regression Model / fit the model (PCA data)
logreg_full_pca = LogisticRegression()
logreg_full_pca.fit(X_train_full_scaled_trans[variables_full], y_train_full)
# scale test data with standard scaler / transform the scaled data set using the fitted PCA object
X_test_full_scaled = scaler.transform(X_test_full)
X_test_full_scaled_trans = pca_full.transform(X_test_full_scaled)
# put it in a data frame
X_test_full_scaled_trans = pd.DataFrame(data=X_test_full_scaled_trans)
# predict the test data / assign the predicted-variable for the profit metric
y_pred_logreg_full_pca = logreg_full_pca.predict(X_test_full_scaled_trans[[0,1,2,3,4,5,6]])
predicted = y_pred_logreg_full_pca.tolist()
# print the classification report
print(classification_report(y_test_full, y_pred_logreg_full_pca))
# print the result of the profit metric
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 1.5 Support Vector Machine (PCA)
```
# create a SVM-Model
model_full_svm_pca = SVC(kernel='linear', verbose = 1)
# fit the model
#model_full_svm_pca.fit(X_train_full_scaled_trans[variables_full], y_train_full)
# save the fitted model
#dump(model_full_svm_pca, 'SVM_full_away.pickle')
# load the fitted model
model_full_svm_pca = load('SVM_full_away.pickle')
# predict the test data
y_pred_full_svm_pca = model_full_svm_pca.predict(X_test_full_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_full, y_pred_full_svm_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_full_svm_pca
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
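The commented-out `dump`/`load` pair above caches the slow SVM fit so it only has to run once. The same persistence pattern in isolation (using a small logistic regression and a temporary file rather than the notebook's pickle path):

```python
import os
import tempfile

import numpy as np
from joblib import dump, load
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)

# Persist the fitted model, reload it, and check predictions are identical
path = os.path.join(tempfile.mkdtemp(), 'model.pickle')
dump(model, path)
restored = load(path)
print((restored.predict(X) == model.predict(X)).all())  # True
```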
#### 1.6 Random Forest (PCA)
```
# create a Random Forest-Model
model_full_rf_pca = RandomForestClassifier(n_estimators=120,
                                           random_state=42,
                                           max_features='sqrt',
                                           n_jobs=-1, verbose=1)
# fit the model
model_full_rf_pca.fit(X_train_full_scaled_trans[variables_full], y_train_full)
# predict the test data
y_pred_full_rf_pca = model_full_rf_pca.predict(X_test_full_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_full, y_pred_full_rf_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_full_rf_pca
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 1.7 Extra Trees (PCA)
```
# create a Extra Trees-Model
model_full_extra_pca = ExtraTreesClassifier(n_estimators=100, random_state=42)
# fit the model
model_full_extra_pca.fit(X_train_full_scaled_trans[variables_full], y_train_full)
# predict the test data
y_pred_full_extra_pca = model_full_extra_pca.predict(X_test_full_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_full, y_pred_full_extra_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_full_extra_pca
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 1.8 KNN (PCA)
```
# create a KNN-Model
model_full_knn_pca = KNeighborsClassifier(n_neighbors=5, metric='minkowski', n_jobs=-1)
# fit the model
model_full_knn_pca.fit(X_train_full_scaled_trans[variables_full], y_train_full)
# predict the test data
y_pred_full_knn_pca = model_full_knn_pca.predict(X_test_full_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_full, y_pred_full_knn_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_full_knn_pca
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 1.9 AdaBoost (PCA)
```
# create a AdaBoost-Model
model_full_ada_pca = AdaBoostClassifier()
# fit the model
model_full_ada_pca.fit(X_train_full_scaled_trans[variables_full], y_train_full)
# predict the test data
y_pred_full_ada_pca = model_full_ada_pca.predict(X_test_full_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_full, y_pred_full_ada_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_full_ada_pca
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 1.10 XGBoost (PCA)
```
# create a XGBoost-Model
model_full_xgb_pca = xgb.XGBClassifier()
# fit the model
model_full_xgb_pca.fit(X_train_full_scaled_trans[variables_full], y_train_full)
# predict the test data
y_pred_full_xgb_pca = model_full_xgb_pca.predict(X_test_full_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_full, y_pred_full_xgb_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_full_xgb_pca
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
## 2 50% Dataframe
```
# loading the dataframe
X_train_50 = pd.read_csv('Data/model_away_csv/X_train_away_50.csv')
X_test_50 = pd.read_csv('Data/model_away_csv/X_test_away_50.csv')
y_train_50 = pd.read_csv('Data/model_away_csv/y_train_away_50.csv')
y_test_50 = pd.read_csv('Data/model_away_csv/y_test_away_50.csv')
y_train_50 = y_train_50['two_way_a_odd']
y_test_50 = y_test_50['two_way_a_odd']
# assigning the variables of the profit metric
actual = y_test_50
away_odd = X_test_50.MaxAway
dc_odd = X_test_50.dc_home
avg_odd_fav = X_test_50.awaywin_avg_odd
avg_odd_out = X_test_50.homewin_avg_odd
```
#### 2.1 Dummy Classifier
```
# create a Dummy-Model
model_50_dummy = DummyClassifier(strategy = 'most_frequent')
# fit the model
model_50_dummy.fit(X_train_50, y_train_50)
# predict the test data / assign the predicted-variable for the profit metric
y_pred_50_dummy = model_50_dummy.predict(X_test_50)
predicted = y_pred_50_dummy.tolist()
# print the classification report
print(classification_report(y_test_50, y_pred_50_dummy))
# print the result of the profit metric
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 2.2 Logistic Regression
```
# create a Logistic Regression Model / fit the model
logreg_50 = LogisticRegression()
logreg_50.fit(X_train_50, y_train_50)
# predict the test data / assign the predicted-variable for the profit metric
y_pred_logreg_50 = logreg_50.predict(X_test_50)
predicted = y_pred_logreg_50.tolist()
# print the classification report
print(classification_report(y_test_50, y_pred_logreg_50))
# print the result of the profit metric
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 2.3 Dimensionality Reduction
##### Data scaling
PCA requires scaling/normalization of the data to work properly.
```
# scale train data with standard scaler
X_train_50_scaled = scaler.fit_transform(X_train_50)
df_50_scaled = pd.DataFrame(data=X_train_50_scaled,columns=X_train_50.columns)
```
##### Principal Component Analysis
```
# create a PCA object
pca_50 = PCA(n_components=None)
# fit the PCA object
df_50_scaled_pca = pca_50.fit(df_50_scaled)
# plot the explained variance ratio for each principal component
plt.figure(figsize=(10,6))
plt.scatter(x=[i+1 for i in range(len(df_50_scaled_pca.explained_variance_ratio_))],
            y=df_50_scaled_pca.explained_variance_ratio_,
            s=200, alpha=0.75, c='orange', edgecolor='k')
plt.grid(True)
plt.title("Explained variance ratio of the \nfitted principal component vector\n",fontsize=25)
plt.xlabel("Principal components",fontsize=15)
plt.xticks([i+1 for i in range(len(df_50_scaled_pca.explained_variance_ratio_))],fontsize=15)
plt.yticks(fontsize=15)
plt.ylabel("Explained variance ratio",fontsize=15)
plt.show()
# transform the scaled data set using the fitted PCA object
X_train_50_scaled_trans = pca_50.transform(df_50_scaled)
# put it in a data frame
X_train_50_scaled_trans = pd.DataFrame(data=X_train_50_scaled_trans)
# assign the most meaningful variables
variables_50 = [0,1,2,3,4,5,6]
```
#### 2.4 Logistic Regression (PCA)
```
# create a Logistic Regression Model / fit the model (PCA data)
logreg_50_pca = LogisticRegression()
logreg_50_pca.fit(X_train_50_scaled_trans[variables_50], y_train_50)
# scale test data with standard scaler / transform the scaled data set using the fitted PCA object
X_test_50_scaled = scaler.transform(X_test_50)
X_test_50_scaled_trans = pca_50.transform(X_test_50_scaled)
# put it in a data frame
X_test_50_scaled_trans = pd.DataFrame(data=X_test_50_scaled_trans)
# predict the test data / assign the predicted-variable for the profit metric
y_pred_logreg_50_pca = logreg_50_pca.predict(X_test_50_scaled_trans[[0,1,2,3,4,5,6]])
predicted = y_pred_logreg_50_pca.tolist()
# print the classification report
print(classification_report(y_test_50, y_pred_logreg_50_pca))
# print the result of the profit metric
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 2.5 Support Vector Machine (PCA)
```
# create a SVM-Model
model_50_svm_pca = SVC(kernel='linear', verbose = 1)
# fit the model
model_50_svm_pca.fit(X_train_50_scaled_trans[variables_50], y_train_50)
# predict the test data
y_pred_50_svm_pca = model_50_svm_pca.predict(X_test_50_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_50, y_pred_50_svm_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_50_svm_pca
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 2.6 Random Forest (PCA)
```
# create a Random Forest-Model
model_50_rf = RandomForestClassifier(n_estimators=120,
                                     random_state=42,
                                     max_features='sqrt',
                                     n_jobs=-1, verbose=1)
# fit the model
model_50_rf.fit(X_train_50_scaled_trans[variables_50], y_train_50)
# predict the test data
y_pred_50_rf = model_50_rf.predict(X_test_50_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_50, y_pred_50_rf))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_50_rf
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 2.7 Extra Trees (PCA)
```
# create a Extra Trees-Model
model_50_extra_pca = ExtraTreesClassifier(n_estimators=100, random_state=42)
# fit the model
model_50_extra_pca.fit(X_train_50_scaled_trans[variables_50], y_train_50)
# predict the test data
y_pred_50_extra_pca = model_50_extra_pca.predict(X_test_50_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_50, y_pred_50_extra_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_50_extra_pca
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 2.8 KNN (PCA)
```
# create a KNN-Model
model_50_knn_pca = KNeighborsClassifier(n_neighbors=5, metric='minkowski', n_jobs=-1)
# fit the model
model_50_knn_pca.fit(X_train_50_scaled_trans[variables_50], y_train_50)
# predict the test data
y_pred_50_knn_pca = model_50_knn_pca.predict(X_test_50_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_50, y_pred_50_knn_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_50_knn_pca
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 2.9 AdaBoost (PCA)
```
# create a AdaBoost-Model
model_50_ada_pca = AdaBoostClassifier()
# fit the model
model_50_ada_pca.fit(X_train_50_scaled_trans[variables_50], y_train_50)
# predict the test data
y_pred_50_ada_pca = model_50_ada_pca.predict(X_test_50_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_50, y_pred_50_ada_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_50_ada_pca
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
#### 2.10 XGBoost (PCA)
```
# create a XGBoost-Model
model_50_xgb_pca = xgb.XGBClassifier()
# fit the model
model_50_xgb_pca.fit(X_train_50_scaled_trans[variables_50], y_train_50)
# predict the test data
y_pred_50_xgb_pca = model_50_xgb_pca.predict(X_test_50_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_50, y_pred_50_xgb_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_50_xgb_pca
profit_metric(actual, predicted, away_odd, dc_odd, avg_odd_fav, avg_odd_out)
```
## 3 Favorite Dataframe
```
# loading the dataframe
X_train_fav = pd.read_csv('Data/model_away_csv/X_train_away_fav.csv')
X_test_fav = pd.read_csv('Data/model_away_csv/X_test_away_fav.csv')
y_train_fav = pd.read_csv('Data/model_away_csv/y_train_away_fav.csv')
y_test_fav = pd.read_csv('Data/model_away_csv/y_test_away_fav.csv')
y_train_fav = y_train_fav['two_way_a_odd']
y_test_fav = y_test_fav['two_way_a_odd']
# assigning the variables of the profit metric
actual = y_test_fav
away_odd = X_test_fav.MaxAway
dc_odd = X_test_fav.dc_home
```
#### 3.1 Dummy Classifier
```
# create a Dummy-Model
model_dummy_fav = DummyClassifier(strategy = 'most_frequent')
# fit the model
model_dummy_fav.fit(X_train_fav, y_train_fav)
# predict the test data / assign the predicted-variable for the profit metric
y_pred_dummy_fav = model_dummy_fav.predict(X_test_fav)
predicted = y_pred_dummy_fav.tolist()
# print the classification report
print(classification_report(y_test_fav, y_pred_dummy_fav))
# print the result of the profit metric
profit_metric_fav(actual, predicted, away_odd, dc_odd)
```
#### 3.2 Logistic Regression
```
# create a Logistic Regression Model
logreg_fav = LogisticRegression()
# fit the model
logreg_fav.fit(X_train_fav, y_train_fav)
# predict the test data / assign the predicted-variable for the profit metric
y_pred_logreg_fav = logreg_fav.predict(X_test_fav)
predicted = y_pred_logreg_fav.tolist()
# print the classification report
print(classification_report(y_test_fav, y_pred_logreg_fav))
# print the result of the profit metric
profit_metric_fav(actual, predicted, away_odd, dc_odd)
```
#### 3.3 Dimensionality Reduction
##### Data scaling
PCA requires scaling/normalization of the data to work properly.
```
# scale train data with standard scaler
X_train_fav_scaled = scaler.fit_transform(X_train_fav)
df_fav_scaled = pd.DataFrame(data=X_train_fav_scaled,columns=X_train_fav.columns)
```
##### Principal Component Analysis
```
# create a PCA object
pca_fav = PCA(n_components=None)
# fit the PCA object
df_fav_scaled_pca = pca_fav.fit(df_fav_scaled)
# plot the explained variance ratio for each principal component
plt.figure(figsize=(10,6))
plt.scatter(x=[i+1 for i in range(len(df_fav_scaled_pca.explained_variance_ratio_))],
            y=df_fav_scaled_pca.explained_variance_ratio_,
            s=200, alpha=0.75, c='orange', edgecolor='k')
plt.grid(True)
plt.title("Explained variance ratio of the \nfitted principal component vector\n",fontsize=25)
plt.xlabel("Principal components",fontsize=15)
plt.xticks([i+1 for i in range(len(df_fav_scaled_pca.explained_variance_ratio_))],fontsize=15)
plt.yticks(fontsize=15)
plt.ylabel("Explained variance ratio",fontsize=15)
plt.show()
# transform the scaled data set using the fitted PCA object
X_train_fav_scaled_trans = pca_fav.transform(df_fav_scaled)
# put it in a data frame
X_train_fav_scaled_trans = pd.DataFrame(data=X_train_fav_scaled_trans)
# assign the most meaningful variables
variables_fav = [0,1,2,3,4,5,6]
```
#### 3.4 Logistic Regression (PCA)
```
# create a Logistic Regression Model / fit the model (PCA data)
logreg_fav_pca = LogisticRegression()
logreg_fav_pca.fit(X_train_fav_scaled_trans[variables_fav], y_train_fav)
# scale test data with standard scaler / transform the scaled data set using the fitted PCA object
X_test_fav_scaled = scaler.transform(X_test_fav)
X_test_fav_scaled_trans = pca_fav.transform(X_test_fav_scaled)
# put it in a data frame
X_test_fav_scaled_trans = pd.DataFrame(data=X_test_fav_scaled_trans)
# predict the test data / assign the predicted-variable for the profit metric
y_pred_logreg_fav_pca = logreg_fav_pca.predict(X_test_fav_scaled_trans[[0,1,2,3,4,5,6]])
predicted = y_pred_logreg_fav_pca.tolist()
# print the classification report
print(classification_report(y_test_fav, y_pred_logreg_fav_pca))
# print the result of the profit metric
profit_metric_fav(actual, predicted, away_odd, dc_odd)
```
#### 3.5 Support Vector Machine (PCA)
```
# create a SVM-Model
model_fav_svm_pca = SVC(kernel='linear', verbose = 1)
# fit the model
model_fav_svm_pca.fit(X_train_fav_scaled_trans[variables_fav], y_train_fav)
# predict the test data
y_pred_fav_svm_pca = model_fav_svm_pca.predict(X_test_fav_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_fav, y_pred_fav_svm_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_fav_svm_pca
profit_metric_fav(actual, predicted, away_odd, dc_odd)
```
#### 3.6 Random Forest (PCA)
```
# create a Random Forest-Model
model_fav_rf = RandomForestClassifier(n_estimators=120,
                                      random_state=42,
                                      max_features='sqrt',
                                      n_jobs=-1, verbose=1)
# fit the model
model_fav_rf.fit(X_train_fav_scaled_trans[variables_fav], y_train_fav)
# predict the test data
y_pred_fav_rf = model_fav_rf.predict(X_test_fav_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_fav, y_pred_fav_rf))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_fav_rf
profit_metric_fav(actual, predicted, away_odd, dc_odd)
```
#### 3.7 Extra Trees (PCA)
```
# create a Extra Trees-Model
model_fav_extra_pca = ExtraTreesClassifier(n_estimators=100, random_state=42)
# fit the model
model_fav_extra_pca.fit(X_train_fav_scaled_trans[variables_fav], y_train_fav)
# predict the test data
y_pred_fav_extra_pca = model_fav_extra_pca.predict(X_test_fav_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_fav, y_pred_fav_extra_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_fav_extra_pca
profit_metric_fav(actual, predicted, away_odd, dc_odd)
```
#### 3.8 KNN (PCA)
```
# create a KNN-Model
model_fav_knn_pca = KNeighborsClassifier(n_neighbors=5, metric='minkowski', n_jobs=-1)
# fit the model
model_fav_knn_pca.fit(X_train_fav_scaled_trans[variables_fav], y_train_fav)
# predict the test data
y_pred_fav_knn_pca = model_fav_knn_pca.predict(X_test_fav_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_fav, y_pred_fav_knn_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_fav_knn_pca
profit_metric_fav(actual, predicted, away_odd, dc_odd)
```
#### 3.9 AdaBoost (PCA)
```
# create a AdaBoost-Model
model_fav_ada_pca = AdaBoostClassifier()
# fit the model
model_fav_ada_pca.fit(X_train_fav_scaled_trans[variables_fav], y_train_fav)
# predict the test data
y_pred_fav_ada_pca = model_fav_ada_pca.predict(X_test_fav_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_fav, y_pred_fav_ada_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_fav_ada_pca
profit_metric_fav(actual, predicted, away_odd, dc_odd)
```
#### 3.10 XGBoost (PCA)
```
# create a XGBoost-Model
model_fav_xgb_pca = xgb.XGBClassifier()
# fit the model
model_fav_xgb_pca.fit(X_train_fav_scaled_trans[variables_fav], y_train_fav)
# predict the test data
y_pred_fav_xgb_pca = model_fav_xgb_pca.predict(X_test_fav_scaled_trans[[0,1,2,3,4,5,6]])
# print the classification report
print(classification_report(y_test_fav, y_pred_fav_xgb_pca))
# assign the predicted-variable for the profit metric / print the result of the profit metric
predicted = y_pred_fav_xgb_pca
profit_metric_fav(actual, predicted, away_odd, dc_odd)
```
## Converting the paired data from "Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-faith Online Discussions" into ConvoKit format (the data used in section 4 of their paper).
#### Note: we are only converting the subset of the data used to measure successful vs. unsuccessful arguments. All data was provided by the authors of the paper below.
--------------------
Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-faith Online Discussions
Chenhao Tan, Vlad Niculae, Cristian Danescu-Niculescu-Mizil, Lillian Lee.
In Proceedings of the 25th International World Wide Web Conference (WWW'2016).
The paper, data, and associated materials can be found at:
http://chenhaot.com/pages/changemyview.html
If you use this data, please cite:
```
@inproceedings{tan+etal:16a,
    author = {Chenhao Tan and Vlad Niculae and Cristian Danescu-Niculescu-Mizil and Lillian Lee},
    title = {Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-faith Online Discussions},
    year = {2016},
    booktitle = {Proceedings of WWW}
}
```
Note: per the blog at the hyperlink above, we used the original data (linked with the corresponding README, PDF, and slides). We did *not* use the updated data provided on 11/11/2016.
Before starting the data conversion, you need to download the data, linked above, and extract the data from the tar archive.
------------------------------------
```
import os
#here I set the working directory to where I store the convokit package
os.chdir('C:\\Users\\Andrew\\Desktop\\Cornell-Conversational-Analysis-Toolkit')
from convokit import Corpus, User, Utterance, meta_index
import pandas as pd
```
Load the original pair data:
```
pairDFtrain=pd.read_json('C:\\Users\\Andrew\\Documents\\pair_task\\train_pair_data.jsonlist',lines=True)
print(len(pairDFtrain))
pairDFtrain['train']=1
pairDFtrain.tail()
pairDFhold=pd.read_json('C:\\Users\\Andrew\\Documents\\pair_task\\heldout_pair_data.jsonlist',lines=True)
print(len(pairDFhold))
pairDFhold['train']=0
pairDFhold.head()
pairDF=pd.concat([pairDFtrain,pairDFhold])
len(pairDF)
```
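One thing worth noting about `pd.concat` (a general pandas sketch with made-up rows, not this notebook's data): by default it keeps each frame's original row labels, so the combined frame can contain duplicate index values unless `ignore_index=True` is passed:

```python
import pandas as pd

train = pd.DataFrame({'op_name': ['t3_a', 't3_b']})
hold = pd.DataFrame({'op_name': ['t3_c']})

combined = pd.concat([train, hold])
print(combined.index.tolist())   # [0, 1, 0] -- duplicated labels

reindexed = pd.concat([train, hold], ignore_index=True)
print(reindexed.index.tolist())  # [0, 1, 2]
```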
Note: Each observation has the reply comments in a conversation that changes the OP's (OP: original poster) mind (positive column) and a conversation that does not change the OP's mind (negative column). Unfortunately, this does not include the comments that OP made after their original post: the comments made by the OP in response to the second conversant's arguments. To find the comments made by OP (i.e. the other half of the conversation), we need to retrieve them from the 'all' dataset.
First: collect the unique identifiers for each original post in our dataset
```
nyms = list(set(pairDF.op_name))
len(nyms)
```
Collect each post from the full dataset (this has the full comment threads, whereas the pair data above only has the first response):
Note: if you have not run this notebook before, then you will need to uncomment the following seven code cells. It will load the full dataset into your working memory and save only the observations that match with the posts in the pair_data above.
```
# #note: this is over 2 GB of data, uncomment the following two lines to read in the data
# dataT = pd.read_json('C:\\Users\\Andrew\\Documents\\all\\train_period_data.jsonlist', lines=True)
# # len(dataT)
```
Keep only the posts that are identified in our original dataset:
```
# #note: this reduces the 2 GB dataset to a similar size as our original dataset
# dataT=dataT[dataT.name.isin(nyms)]
# len(dataT)
# # do the same for the holdout data
# dataH = pd.read_json('C:\\Users\\Andrew\\Documents\\all\\heldout_period_data.jsonlist', lines=True)
# len(dataH)
# dataH=dataH[dataH.name.isin(nyms)]
# len(dataH)
# #combine holdout and train datasets
# data = pd.concat([dataT,dataH])
# len(data)
```
Saving the posts from the full dataset that are the same as posts in our pair data.
```
# #note: I save the data as a pickle file so I don't have to reload the 2 GB dataset in my working memory
# data.to_pickle('C:\\Users\\Andrew\\Downloads\\pairAll.pkl')
```
Here, I have already run this notebook, so I can just load this dataset back into working memory.
```
data = pd.read_pickle('C:\\Users\\Andrew\\Downloads\\pairAll.pkl')
data.tail()
len(data)
len(pairDF)
data.columns
```
Only keep the comments and the identifier for merging with the original dataset:
```
data=data[['comments','name']]
pairDF.columns
```
This joins the comments in the 'all' data, with the posts we are interested in studying:
```
pairDF=pairDF.join(data.set_index('name'), on='op_name')
len(pairDF)
pairDF.tail()
```
Now that we have all comments made within every CMV post in our dataset, we need to extract only the comments that correspond to a positive argument and negative argument (i.e. the ones recorded as either changing OP's mind or not).
First, collect the identifiers for each comment made by the respondent attempting to change the OP's mind (there is a respondent in both the positive and negative columns).
```
def collectResponses(responseList):
    iDs = []
    if len(responseList['comments']) > 0:
        for each in responseList['comments']:
            iDs.append(each['id'])
    return iDs

pairDF['negIDs'] = pairDF.negative.apply(lambda x: collectResponses(x))
pairDF['posIDs'] = pairDF.positive.apply(lambda x: collectResponses(x))
```
Now collect each of the comment identifiers that signify a response to the challenger by the OP:
```
def collectOPcommentIDs(op_auth, allComments, replyIDs):
    opIds = []
    for comment in allComments:
        if comment['parent_id'].split('_')[1] in replyIDs:
            if 'author' in comment.keys():
                if comment['author'] == op_auth:
                    opIds.append(comment['id'])
    return opIds

pairDF['opRepliesPos'] = pairDF[['op_author', 'comments', 'posIDs']].apply(
    lambda x: collectOPcommentIDs(x['op_author'], x['comments'], x['posIDs']), axis=1)
pairDF['opRepliesNeg'] = pairDF[['op_author', 'comments', 'negIDs']].apply(
    lambda x: collectOPcommentIDs(x['op_author'], x['comments'], x['negIDs']), axis=1)
```
Here I collect and properly order each of the comment IDs made in the thread _only_ by either the OP or the 2nd conversant, for both successful and unsuccessful arguments:
```
def orderThreadids(comments, replyIDs, opCommentIDs):
    threadIDs = list(replyIDs)
    for comment in comments:
        if comment['id'] in opCommentIDs:
            pID = comment['parent_id'].split('_')[1]
            if pID in replyIDs:
                threadIDs.insert(threadIDs.index(pID) + 1, comment['id'])
    return threadIDs

pairDF['posOrder'] = pairDF[['comments', 'posIDs', 'opRepliesPos']].apply(
    lambda x: orderThreadids(x['comments'], x['posIDs'], x['opRepliesPos']), axis=1)
pairDF['negOrder'] = pairDF[['comments', 'negIDs', 'opRepliesNeg']].apply(
    lambda x: orderThreadids(x['comments'], x['negIDs'], x['opRepliesNeg']), axis=1)
```
This function takes the ordered thread IDs for only the successful and unsuccessful arguments measured in the original paper (although, note: I have also collected the OP replies from the 'all' data, which weren't included in the smaller pair_data).
Note: I don't convert this section into convokit format, but instead I convert the full comment threads later in this notebook. If you are interested in looking at the successful and unsuccessful arguments in the convokit format, see the 'success' attribute in each utterance's metadata
```
def collectThread(comments, orderedThreadids):
    threadComments = []
    for iD in orderedThreadids:
        for comment in comments:
            if iD == comment['id']:
                threadComments.append(comment)
    return threadComments

pairDF['positiveThread'] = pairDF[['comments', 'posOrder']].apply(
    lambda x: collectThread(x['comments'], x['posOrder']), axis=1)
pairDF['negativeThread'] = pairDF[['comments', 'negOrder']].apply(
    lambda x: collectThread(x['comments'], x['negOrder']), axis=1)
```
Note above: I have just collected each individual thread (with OP comments). However, when studying this data, we may be interested in looking at the entire conversation. Therefore, instead of only converting the positive threads and negative threads into convokit format, here I simply add an attribute to the comments if they are part of either the positive or negative thread.
Here I add the success attribute and the pair identification (see my readme file for a more detailed explanation of 'success' and 'pair_ids'):
```
# Create an identification number for the paired unsuccessful/successful
# arguments. The pair number will be the same for successful-unsuccessful
# matched pairs, with the prefix 'p_' for pair. If there is no paired
# argument for the comment (i.e. it was either the original post by OP or
# an uncategorized comment), then pair_id = None.
c = 0
pairIDS = {}
for i, r in pairDF.iterrows():
    c = c + 1
    for comment in r.comments:
        if comment['id'] in r.posOrder:
            comment['success'] = 1
            if comment['name'] in pairIDS.keys():
                pairIDS[comment['name']].append('p_' + str(c))
                pairIDS[comment['name']] = list(set(pairIDS[comment['name']]))
            else:
                pairIDS[comment['name']] = ['p_' + str(c)]
        elif comment['id'] in r.negOrder:
            comment['success'] = 0
            if comment['name'] in pairIDS.keys():
                pairIDS[comment['name']].append('p_' + str(c))
                pairIDS[comment['name']] = list(set(pairIDS[comment['name']]))
            else:
                pairIDS[comment['name']] = ['p_' + str(c)]
        if comment['name'] not in pairIDS.keys():
            pairIDS[comment['name']] = []
        if 'success' not in comment.keys():
            comment['success'] = None

# Make a column for pair_ids collected at the OP post level.
# Note: this won't be unique at the observation level in our pairDF
# dataframe; it is done for quick conversion. After converting into
# convokit, the list is added to the conversation-level metadata,
# where it is unique per conversation.
threads = list(set(pairDF.op_name))
pids = []
for thread in threads:
    pid = []
    for i, r in pairDF[pairDF.op_name == thread].iterrows():
        for comment in r.comments:
            if len(pairIDS[comment['name']]) > 0:
                for p in pairIDS[comment['name']]:
                    pid.append(p)
    pid = list(set(pid))
    pids.append(pid)
pairDF['pIDs'] = pairDF.op_name.apply(lambda x: pids[threads.index(x)])
```
Now the data is collected in a pandas dataframe with each thread's comments fully accounted for. Convert it into convokit format:
The first step is to create a list of all Redditors, or 'users' in convokit parlance:
```
users = list(set(pairDF.op_author))
for i, r in pairDF.iterrows():
    for comment in r.comments:
        if 'author' in comment.keys():
            if comment['author'] not in users:
                users.append(comment['author'])
len(users)
```
Note: I don't have metadata on individual users. I briefly considered creating a unique identifier for each user and including the 'username' as metadata, but since each Reddit username is already unique, that would be superfluous. I believe other relevant information (such as whether a Redditor is the original poster) is specific to individual conversations and utterances.
Two metadata fields of note: 'author_flair_css_class' and 'author_flair_text' both describe flags that appear next to an author in a subreddit. In the changemyview subreddit, the moderators use this to indicate whether the author has changed someone's mind, and it can be seen as both an award and evidence of credibility in the subreddit. While I could include this as author metadata, I believe it is actually 'conversation' metadata, because this flag is updated over time as the author changes multiple people's minds over the course of many conversations. Since this data was collected over time, the flag is likely to change per user across multiple conversations, possibly across utterances.
I will include the user_meta dictionary, just in case, so data can be added to it later.
```
user_meta={}
for user in users:
    user_meta[user] = {}

corpus_users = {k: User(name=k, meta=v) for k, v in user_meta.items()}
print("number of users in the data = {0}".format(len(corpus_users)))
```
Next: create utterances
```
c = 0
errors = []
utterance_corpus = {}
for i, r in pairDF.iterrows():
    # Create an Utterance for the original post in each observation,
    # using the metadata provided in the original file.
    utterance_corpus[r.op_name] = Utterance(
        id=r.op_name,
        user=corpus_users[r.op_author],
        root=r.op_name,
        reply_to=None,
        timestamp=None,
        text=r.op_text,
        meta={'pair_ids': [],
              'success': None,
              'approved_by': None,
              'author_flair_css_class': None,
              'author_flair_text': None,
              'banned_by': None,
              'controversiality': None,
              'distinguished': None,
              'downs': None,
              'edited': None,
              'gilded': None,
              'likes': None,
              'mod_reports': None,
              'num_reports': None,
              'replies': [com['id'] for com in r.comments
                          if com['parent_id'] == r.op_name],
              'report_reasons': None,
              'saved': None,
              'score': None,
              'score_hidden': None,
              'subreddit': None,
              'subreddit_id': None,
              'ups': None,
              'user_reports': None})
    # Now, for every comment in the original thread, make an Utterance.
    for comment in r.comments:
        try:
            utterance_corpus[comment['name']] = Utterance(
                id=comment['name'],
                user=corpus_users[comment['author']],
                root=r.op_name,
                reply_to=comment['parent_id'],
                timestamp=int(comment['created']),
                text=comment['body'],
                meta={'pair_ids': pairIDS[comment['name']],
                      'success': comment['success'],
                      'approved_by': comment['approved_by'],
                      'author_flair_css_class': comment['author_flair_css_class'],
                      'author_flair_text': comment['author_flair_text'],
                      'banned_by': comment['banned_by'],
                      'controversiality': comment['controversiality'],
                      'distinguished': comment['distinguished'],
                      'downs': comment['downs'],
                      'edited': comment['edited'],
                      'gilded': comment['gilded'],
                      'likes': comment['likes'],
                      'mod_reports': comment['mod_reports'],
                      'num_reports': comment['num_reports'],
                      'replies': comment['replies'],
                      'report_reasons': comment['report_reasons'],
                      'saved': comment['saved'],
                      'score': comment['score'],
                      'score_hidden': comment['score_hidden'],
                      'subreddit': comment['subreddit'],
                      'subreddit_id': comment['subreddit_id'],
                      'ups': comment['ups'],
                      'user_reports': comment['user_reports']})
        # This except catches comments that have no text body
        # (see the error examples below).
        except:
            c = c + 1
            errors.append(comment)
            utterance_corpus[comment['name']] = Utterance(
                id=comment['name'],
                user=User(name='[missing]'),
                root=r.op_name,
                reply_to=comment['parent_id'],
                timestamp=None,
                text=None,
                meta={'pair_ids': pairIDS[comment['name']],
                      'success': comment['success'],
                      'approved_by': None,
                      'author_flair_css_class': None,
                      'author_flair_text': None,
                      'banned_by': None,
                      'controversiality': None,
                      'distinguished': None,
                      'downs': None,
                      'edited': None,
                      'gilded': None,
                      'likes': None,
                      'mod_reports': None,
                      'num_reports': None,
                      'replies': None,
                      'report_reasons': None,
                      'saved': None,
                      'score': None,
                      'score_hidden': None,
                      'subreddit': None,
                      'subreddit_id': None,
                      'ups': None,
                      'user_reports': None})
print('there were ' + str(c) + ' comments that were missing common attributes')
```
The 530 comments missing common attributes (note that none of them have a text body or author) have been included in the corpus for completeness (each was caught by the exception in the code above but still included). Here are some examples of these comments:
```
errors[22]
errors[99]
errors[395]
len(utterance_corpus)
```
Note above: the number of individual utterances is smaller than the total number of recorded comments in our dataset. This stands up to scrutiny when reviewing the dataset, for two reasons:
1. each positive and negative thread corresponds to the same original post.
2. original posts were re-used to compare different successful/non-successful arguments.
##### Creating a corpus from a list of utterances:
```
utterance_list = [utterance for k,utterance in utterance_corpus.items()]
change_my_view_corpus = Corpus(utterances=utterance_list, version=1)
print("number of conversations in the dataset = {}".format(len(change_my_view_corpus.get_conversation_ids())))
```
Note: 3051 is the number of original posts recorded in the dataset (both train and hold out data)
```
convo_ids = change_my_view_corpus.get_conversation_ids()
for i, convo_idx in enumerate(convo_ids[0:2]):
    print("sample conversation {}:".format(i))
    print(change_my_view_corpus.get_conversation(convo_idx).get_utterance_ids())
```
##### Add conversation-level metadata:
```
convos = change_my_view_corpus.iter_conversations()
for convo in convos:
    row = pairDF[pairDF.op_name == convo._id]
    convo.add_meta('op-userID', row.op_author[row.index[0]])
    convo.add_meta('op-text-body', row.op_text[row.index[0]])
    convo.add_meta('op-title', row.op_title[row.index[0]])
    convo.add_meta('pair_ids', row.pIDs[row.index[0]])

convo_ids = change_my_view_corpus.get_conversation_ids()
for cv in convo_ids:
    row = pairDF[pairDF.op_name == cv]
    change_my_view_corpus.get_conversation(cv).add_meta('train', int(row.train[row.index[0]]))
```
##### Add corpus title:
```
change_my_view_corpus.meta['name'] = "Change My View Corpus"
change_my_view_corpus.print_summary_stats()
change_my_view_corpus.dump('change-my-view-corpus', base_path='C:\\Users\\Andrew\\Desktop\\CMV data')
```
### Notes
- The original data compiled by the authors only included the challenger replies. To extract the full argument (i.e. a conversation between the OP and the challenger), we selected the comments by the OP for inclusion in a successful or unsuccessful argument (i.e. "success" = 1 or 0) by collecting all OP replies to any of the corresponding successful/unsuccessful comments by the challenger. This is a conservative measure of the overall "argument." It does not include comments made in response to the challenger's posts by other individuals, nor does it include comments made by the OP in reply to those outside individuals. All other comments in the thread (including separate comments made by the OP) have the "success" field set to None.
- If you are interested in expanding the 'arguments' to ensure all conversants are included, then I would suggest the following method:
1. Collect all originally provided successful and unsuccessful comments (collected at the Utterance-level conditioning on both "success" = 1 or 0 and user_id != the OP's user_id).
2. Collect all comments made by the OP.
3. Using the reply_to identifier, recur up the comments made in the full comment thread for each original post; collecting every comment thread that OP has made a comment in.
4. Select any comment thread from step 3 for inclusion in a successful/unsuccessful argument if the challenger has also made a comment in that thread.
- Overall, I believe the conservative measurement of 'argument' that I have used is better because the second method (above) would include argument threads where a challenger is only minimally relevant.
- Details on the collection procedure:
We started from the data collected by the Winning Arguments paper (cited above). The data was collected from their host at this blog:
https://chenhaot.com/pages/changemyview.html (note: data used in this corpus is from the original data collection -- NOT the updated data on 11/11/16)
Note: we originally intended to only convert their pair_data into Convokit format (i.e. the data they use in Section 4 of the paper, which looks at differences between arguments that were convincing/unconvincing to the OP in changing their mind). However, the pair_data only included the replies to the original post (not OP's other comments in the thread -- so there was no conversation, nor did they have all comments in the thread). Therefore, we matched the OP posts in the pair_data with the same observation in their 'all' data, from which we collected all comments for each thread.
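The expanded-argument method sketched in the notes above (steps 1-4) could be prototyped with a small helper like the one below. This is a hypothetical sketch, not part of the conversion pipeline; it only assumes each comment dict carries the `id`, `parent_id`, and `author` fields used throughout this notebook.

```python
def collect_op_threads(comments, op_author):
    """Walk up the reply chain from every OP comment (steps 2-3 above),
    returning the ids of all comments lying on such a chain."""
    by_id = {c['id']: c for c in comments}
    thread_ids = set()
    for c in comments:
        if c.get('author') != op_author:
            continue
        node = c
        # Follow parent_id upward until we leave the comment set
        # (the original post itself is not in by_id, so the walk stops).
        while node is not None:
            thread_ids.add(node['id'])
            parent = node['parent_id'].split('_')[-1]
            node = by_id.get(parent)
    return thread_ids
```

Step 4 would then keep a chain only if the challenger also made a comment on it.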
### The problem: Given a dataset with sensitive personal information, how can one compute and release functions of the dataset while protecting individual privacy?
### SDL techniques are no longer sufficient.
Recent privacy failures have shown us that redaction of identifiers is insufficient for protecting privacy, and auxiliary information needs to be taken into account. We need to recognize that *any* useful analysis of personal data *must* leak some information about individuals, and these leakages *accumulate* over multiple analyses.
### Differential Privacy
- mathematically provable privacy guarantee
- states that any information-related risk to a person should not change significantly as a result of that person's information being included, or not, in the analysis
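Stated formally (this is the standard pure ε-differential-privacy definition, which the informal statement above paraphrases): a randomized mechanism $\mathcal{M}$ is ε-differentially private if, for every pair of neighboring datasets $D, D'$ differing in one individual's record and every set of outputs $S$,

$$\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S].$$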
### [A Privacy "Budget"](https://github.com/umadesai/census-dp/blob/master/notebooks/dp-budget.ipynb)
- DP provides provable privacy guarantees with respect to the cumulative risk from successive data releases
- Setting the budget is a policy question
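The bookkeeping behind a privacy budget under sequential composition can be sketched in a few lines. The `spend_budget` helper and the epsilon values below are illustrative only; the budget itself is a policy choice, as noted above.

```python
def spend_budget(total_epsilon, query_epsilons):
    """Return the budget remaining after a sequence of releases,
    raising once the cumulative epsilon would exceed the total.
    Under sequential composition, epsilons simply add up."""
    spent = sum(query_epsilons)
    if spent > total_epsilon:
        raise ValueError("privacy budget exhausted")
    return total_epsilon - spent

# Three releases at epsilon 0.25, 0.25, and 0.1 against a budget of 1.0
print(spend_budget(1.0, [0.25, 0.25, 0.1]))  # 0.4 of the budget remains
```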
### Differentially Private Computations
Algorithms maintain differential privacy via the introduction of carefully crafted random noise into the computation.
Types of computations that can be made differentially private:
- descriptive statistics
- [counts](https://github.com/umadesai/census-dp/blob/master/notebooks/dp-count.ipynb)
- [mean](https://github.com/umadesai/census-dp/blob/master/notebooks/dp-mean.ipynb)
- [median](https://github.com/umadesai/census-dp/blob/master/notebooks/dp-median.ipynb)
- histograms
- boxplots
- [cdf](https://github.com/umadesai/census-dp/blob/master/notebooks/dp-mm.ipynb)
- supervised and unsupervised ML tasks
- [regression](https://github.com/umadesai/census-dp/blob/master/notebooks/dp-regression.ipynb)
- classification
- generation of synthetic data
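As a concrete illustration of the noise addition described above: a counting query has sensitivity 1, so adding Laplace noise with scale 1/ε yields ε-differential privacy. This is a hedged sketch, not the implementation behind the notebooks linked above; the `dp_count` helper and the sample data are hypothetical.

```python
import numpy as np

def dp_count(records, epsilon, rng=None):
    """Release a noisy count: true count plus Laplace(1/epsilon) noise.
    A count has sensitivity 1, so scale 1/epsilon gives epsilon-DP."""
    rng = np.random.default_rng() if rng is None else rng
    return len(records) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [34, 29, 51, 42, 67]
print(dp_count(ages, epsilon=0.5))  # a noisy value near 5
```

Smaller ε means more noise and stronger privacy; the accuracy/privacy trade-off is governed entirely by the budget spent on the query.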
### Read more
- [Differential Privacy: An Introduction For Statistical Agencies](https://gss.civilservice.gov.uk/wp-content/uploads/2018/12/12-12-18_FINAL_Privitar_Kobbi_Nissim_article.pdf) Page et al.
- [Differential Privacy: A Primer for a Non-technical Audience](http://www.jetlaw.org/wp-content/uploads/2018/12/4_Wood_Final.pdf) Wood et al.
- [A Firm Foundation for Private Data Analysis](http://delivery.acm.org/10.1145/1870000/1866758/p86-dwork.pdf?ip=108.28.104.96&id=1866758&acc=OPEN&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E6D218144511F3437&__acm__=1562937387_c049c03e734df8e04aac19ab857b3961) Dwork.
- [The Algorithmic Foundations of Differential Privacy](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) Dwork & Roth.
```
import matplotlib.pyplot as plt
from matplotlib import style
import numpy as np
import pandas

style.use('ggplot')

# Importation of data
filename = 'ModelData.csv'
trainingdata = pandas.read_csv(filename)
print(trainingdata.shape)
print(trainingdata)
X = trainingdata.values[:, :2]
Y = trainingdata.values[:, 8]

filename2 = 'Testdata.csv'
TTdata = pandas.read_csv(filename2)
XT = TTdata.values[:, :2]
YT = TTdata.values[:, :8]


class SVM:
    def __init__(self, visualization=True):
        # Initialization of the classifier
        self.visualization = visualization
        self.colors = {1: 'r', -1: 'b'}
        if self.visualization:
            # Visualization of the classifier framework on a grid
            self.fig = plt.figure()
            self.ax = self.fig.add_subplot(1, 1, 1)

    def train(self, fitdata):
        # Build the model using the training data
        self.fitdata = fitdata
        Group = {}  # candidate solutions keyed by ||w||, values are [w, b]
        transforms = [[1, -1],
                      [1, 1],
                      [-1, 1],
                      [-1, -1]]

        # Search process: find the range of feature values
        alldata = []
        for yi in self.fitdata:
            for featureset in self.fitdata[yi]:
                for feature in featureset:
                    alldata.append(feature)
        self.max_feature_value = max(alldata)
        self.min_feature_value = min(alldata)
        alldata = None

        stepsize = [self.max_feature_value * 0.05]
        b_multiple = 5
        latest_optimum = self.max_feature_value * 10

        for step in stepsize:  # optimization process
            w = np.array([latest_optimum, latest_optimum])
            optimized = False
            while not optimized:
                for b in np.arange(-1 * (self.max_feature_value * b_multiple),
                                   self.max_feature_value * b_multiple,
                                   step * b_multiple):
                    for transformation in transforms:
                        wt = w * transformation
                        found_option = True
                        # Check the constraint yi*(w.xi + b) >= 1 for every point
                        for i in self.fitdata:
                            for xi in self.fitdata[i]:
                                yi = i
                                if not yi * (np.dot(wt, xi) + b) >= 1:
                                    found_option = False
                                    break
                        if found_option:
                            Group[np.linalg.norm(wt)] = [wt, b]
                if w[0] < 0:
                    optimized = True
                    print('Process Optimized')
                else:
                    w = w - step

            norms = sorted([n for n in Group])  # sorted list of the magnitudes
            Group_choice = Group[norms[0]]
            self.w = Group_choice[0]
            self.b = Group_choice[1]
            latest_optimum = Group_choice[0][0] + step * 2

        for i in self.fitdata:
            for xi in self.fitdata[i]:
                yi = i
                print(xi, ':', yi * (np.dot(self.w, xi) + self.b))

    def predict(self, features):
        # Prediction of unlabelled data against the model built
        classification = np.sign(np.dot(np.array(features), self.w) + self.b)
        if classification != 0 and self.visualization:
            self.ax.scatter(features[0], features[1], marker='*', s=100,
                            c=self.colors[classification])
        return classification

    def visualize(self):
        # Displays the points of each class and the separating hyperplanes
        [[self.ax.scatter(x[0], x[1], s=50, color=self.colors[i])
          for x in self.fitdata[i]] for i in self.fitdata]

        def hyperplane(x, w, b, c):
            # Gives the hyperplane value c = x.w + b (decision boundary,
            # positive or negative margin) for the graph
            return (-w[0] * x - b + c) / w[1]

        datarange = (self.min_feature_value * 0.9, self.max_feature_value * 1.1)
        hyper_x_min, hyper_x_max = datarange

        # Positive support vector, (w.x + b) = 1
        positivesv1 = hyperplane(hyper_x_min, self.w, self.b, 1)
        positivesv2 = hyperplane(hyper_x_max, self.w, self.b, 1)
        self.ax.plot([hyper_x_min, hyper_x_max], [positivesv1, positivesv2], 'k')

        # Negative support vector, (w.x + b) = -1
        negativesv1 = hyperplane(hyper_x_min, self.w, self.b, -1)
        negativesv2 = hyperplane(hyper_x_max, self.w, self.b, -1)
        self.ax.plot([hyper_x_min, hyper_x_max], [negativesv1, negativesv2], 'k')

        # Decision boundary, (w.x + b) = 0
        decisionsv1 = hyperplane(hyper_x_min, self.w, self.b, 0)
        decisionsv2 = hyperplane(hyper_x_max, self.w, self.b, 0)
        self.ax.plot([hyper_x_min, hyper_x_max], [decisionsv1, decisionsv2], 'k--')

        plt.show()


# Split the feature vectors by class label.
# NOTE: this assumes the label column Y uses -1/1 class labels.
labels = {-1: X[Y == -1], 1: X[Y == 1]}

classifier = SVM()
classifier.train(fitdata=labels)
classifier.visualize()

# Prediction of the unlabelled test data against the model built
predictTest = TTdata.values[:, :2]
for f in predictTest:
    classifier.predict(f)
classifier.visualize()
```
```
import pandas as pd
from pulp import *
from random import normalvariate
d = {'Supply_Region':['USA', 'Germany', 'Japan', 'Brazil', 'India'], 'Dmd':[2719.6,84.1,1676.8,145.4,156.4]}
v = {'Supply_Region':['USA', 'Germany', 'Japan', 'Brazil', 'India'],
'USA':[6,13,20,12,22],'Germany':[13,6,14,14,13],'Japan':[20,14,3,21,10],
'Brazil':[12,14,21,8,23], 'India':[17,13,9,21,8]}
f = {'Supply_Region':['USA', 'Germany', 'Japan', 'Brazil', 'India'],
'Low_Cap':[6500,4980,6230,3230,2110], 'High_Cap':[9500,7270,9100,4730,3080]}
p = {'Supply_Region':['USA', 'Germany', 'Japan', 'Brazil', 'India'],
'Low_Cap':[500,500,500,500,500], 'High_Cap':[1500,1500,1500,1500,1500]}
demand = pd.DataFrame(data = d)
demand = demand.set_index('Supply_Region')
var_cost = pd.DataFrame(data = v)
var_cost = var_cost.set_index('Supply_Region')
fix_cost = pd.DataFrame(data = f)
fix_cost = fix_cost.set_index('Supply_Region')
cap = pd.DataFrame(data = p)
cap = cap.set_index('Supply_Region')
print(fix_cost)
print(demand)
print(cap)
print(var_cost)
def pulp_model_(fix_cost, var_cost, demand, cap):
    # Initialize the model, define decision variables and the objective function
    model = LpProblem("Capacitated Plant Location Model", LpMinimize)
    loc = ['USA', 'Germany', 'Japan', 'Brazil', 'India']
    size = ['Low_Cap', 'High_Cap']
    x = LpVariable.dicts("production", [(i, j) for i in loc for j in loc],
                         lowBound=0, upBound=None, cat='Continuous')
    y = LpVariable.dicts("plant",
                         [(i, s) for s in size for i in loc], cat='Binary')
    model += (lpSum([fix_cost.loc[i, s] * y[(i, s)] for s in size for i in loc])
              + lpSum([(var_cost.loc[i, j] + normalvariate(0.5, 0.5)) * x[(i, j)]
                       for i in loc for j in loc]))
    # Define the constraints
    for j in loc:
        rd = normalvariate(0, demand.loc[j, 'Dmd'] * .05)
        model += lpSum([x[(i, j)] for i in loc]) == (demand.loc[j, 'Dmd'] + rd)
    for i in loc:
        model += lpSum([x[(i, j)] for j in loc]) <= lpSum([cap.loc[i, s] * y[(i, s)]
                                                           for s in size])
    model.solve()
    o = {}
    for i in loc:
        o[i] = value(lpSum([x[(i, j)] for j in loc]))
    o['Obj'] = value(model.objective)
    return o

output = []
for i in range(100):
    output.append(pulp_model_(fix_cost, var_cost, demand, cap))
df = pd.DataFrame(output)

import matplotlib.pyplot as plt

# Histogram of India production
df['India'].hist()
plt.title('Histogram of Prod. At India Region')
plt.show()

# Histogram of Germany production
plt.hist(df['Germany'])
plt.title('Histogram of Prod. At Germany Region')
plt.show()

# Histogram of USA production
plt.hist(df['USA'])
plt.title('Histogram of Prod. At USA Region')
plt.show()

# Histogram of Japan production
plt.hist(df['Japan'])
plt.title('Histogram of Prod. At Japan Region')
plt.show()
```
<!---------------------- Introduction Section ------------------->
<h1> PTRAIL: A <b><i>P</i></b>arallel
<b><i>TR</i></b>ajectory
d<b><i>A</i></b>ta
preprocess<b><i>I</i></b>ng
<b><i>L</i></b>ibrary
</h1>
<h2> Introduction </h2>
<p align='justify'>
PTRAIL is a state-of-the-art Mobility Data Preprocessing Library that mainly deals with filtering data, generating features, and interpolating Trajectory Data.
<b><i> The main features of PTRAIL are: </i></b>
<ol align='justify'>
<li> PTRAIL primarily uses parallel computation based on
python pandas and NumPy, which makes it very fast compared
to other libraries available.
</li>
<li> PTRAIL harnesses the full power of the machine that
it is running on by using all the cores available in the
computer.
</li>
<li> PTRAIL uses a customized DataFrame built on top of python
pandas for representation and storage of Trajectory Data.
</li>
<li> PTRAIL also provides several Temporal and kinematic features
which are calculated mostly using parallel computation for very
fast and accurate calculations.
</li>
<li> Moreover, PTRAIL also provides several filtering and
outlier detection methods for cleaning and noise reduction of
the Trajectory Data.
</li>
<li> Apart from the features mentioned above, <i><b> four </b></i>
different kinds of Trajectory Interpolation techniques are
offered by PTRAIL which is a first in the community.
</li>
</ol>
</p>
<!----------------- Dataset Link Section --------------------->
<hr style="height:6px;background-color:black">
<p align='justify'>
In the introduction of the library, the seagulls dataset is used
which can be downloaded from the link below: <br>
<span> ↪ </span>
<a href="https://github.com/YakshHaranwala/PTRAIL/blob/main/examples/data/gulls.csv" target='_blank'> Seagulls Dataset </a>
</p>
<!----------------- NbViewer Link ---------------------------->
<hr style="height:6px;background-color:black">
<p align='justify'>
Note: Viewing this notebook in GitHub will not render JavaScript
elements. Hence, for a better experience, click the link below
to open the Jupyter notebook in NB viewer.
<span> ↪ </span>
<a href="https://nbviewer.jupyter.org/github/YakshHaranwala/PTRAIL/blob/main/examples/0.%20Intro%20to%20PTRAIL.ipynb" target='_blank'> Click Here </a>
</p>
<!------------------------- Documentation Link ----------------->
<hr style="height:6px;background-color:black">
<p align='justify'>
The Link to PTRAIL's Documentation is: <br>
<span> ↪ </span>
<a href='https://PTRAIL.readthedocs.io/en/latest/' target='_blank'> <i> PTRAIL Documentation </i> </a>
<hr style="height:6px;background-color:black">
</p>
<h2> Importing Trajectory Data into a PTRAILDataFrame Dataframe </h2>
<p align='justify'>
The PTRAIL Library stores Mobility Data (Trajectories) in a specialised
pandas Dataframe structure called PTRAILDataFrame. As a result, the following
constraints are enforced for the data to be stored in a PTRAILDataFrame.
<ol align='justify'>
<li>
Firstly, for a mobility dataset to work with the PTRAIL Library, it needs
to have the following mandatory columns present:
<ul type='square'>
<li> DateTime </li>
<li> Trajectory ID </li>
<li> Latitude </li>
<li> Longitude </li>
</ul>
</li>
<li>
Secondly, PTRAILDataFrame enforces a very specific constraint on the index of the
dataframe: a multi-index consisting of the
<b><i> Trajectory ID, DateTime </i></b> columns, because the operations of the
library depend on these two columns. As a result, it is recommended
not to change the index and to keep the multi-index of <b><i> Trajectory ID, DateTime </i></b>
at all times.
</li>
<li>
Note that since the PTRAILDataFrame is built on top of
python pandas, it does not have any restrictions on the number
of columns that the dataset has. The only requirement is that
the dataset should contain at least the above-mentioned four columns.
</li>
</ol>
</p>
<hr style="height:6px;background-color:black">
```
"""
METHOD - I:
1. Enter the trajectory data into a list.
2. Then, convert the list into a PTRAILDataFrame
Dataframe to be used with PTRAIL Library.
"""
import pandas as pd
from ptrail.core.TrajectoryDF import PTRAILDataFrame
list_data = [
[39.984094, 116.319236, '2008-10-23 05:53:05', 1],
[39.984198, 116.319322, '2008-10-23 05:53:06', 1],
[39.984224, 116.319402, '2008-10-23 05:53:11', 1],
[39.984224, 116.319404, '2008-10-23 05:53:11', 1],
[39.984224, 116.568956, '2008-10-23 05:53:11', 1],
[39.984224, 116.568956, '2008-10-23 05:53:11', 1]
]
list_df = PTRAILDataFrame(data_set=list_data,
latitude='lat',
longitude='lon',
datetime='datetime',
traj_id='id')
print(f"The dimensions of the dataframe:{list_df.shape}")
print(f"Type of the dataframe: {type(list_df)}")
"""
METHOD - II:
1. Enter the trajectory data into a dictionary.
2. Then, convert the dictionary into a PTRAILDataFrame
Dataframe to be used with PTRAIL Library.
"""
dict_data = {
'lat': [39.984198, 39.984224, 39.984094, 40.98, 41.256],
'lon': [116.319402, 116.319322, 116.319402, 116.3589, 117],
'datetime': ['2008-10-23 05:53:11', '2008-10-23 05:53:06', '2008-10-23 05:53:30', '2008-10-23 05:54:06', '2008-10-23 05:59:06'],
'id' : [1, 1, 1, 3, 3],
}
dict_df = PTRAILDataFrame(data_set=dict_data,
latitude='lat',
longitude='lon',
datetime='datetime',
traj_id='id')
print(f"The dimensions of the dataframe: {dict_df.shape}")
print(f"Type of the dataframe: {type(dict_df)}")
# Now, printing the head of the dataframe with data
# imported from a list.
list_df.head()
# Now, printing the head of the dataframe with data
# imported from a dictionary.
dict_df.head()
"""
METHOD - III:
1. First, import the seagulls dataset from the csv file
using pandas into a pandas dataframe.
2. Then, convert the pandas dataframe into a PTRAILDataFrame
DataFrame to be used with PTRAIL library.
"""
df = pd.read_csv('https://raw.githubusercontent.com/YakshHaranwala/PTRAIL/main/examples/data/gulls.csv')
seagulls_df = PTRAILDataFrame(data_set=df,
latitude='location-lat',
longitude='location-long',
datetime='timestamp',
traj_id='tag-local-identifier',
rest_of_columns=[])
print(f"The dimensions of the dataframe: {seagulls_df.shape}")
print(f"Type of the dataframe: {type(seagulls_df)}")
# Now, print the head of the seagulls_df dataframe.
seagulls_df.head()
```
<h1>Trajectory Feature Extraction </h1>
<p align='justify'>
As mentioned above, PTRAIL offers a multitude of features
calculated from both the DateTime and the geographical coordinates
of the points given in the data. The two feature modules are named
as follows:
</p>
<ul align='justify'>
<li> temporal_features (based on DateTime) </li>
<li> kinematic_features (based on geographical coordinates) </li>
</ul>
<hr style="background-color:black; height:7px">
<h2> PTRAIL Temporal Features </h2>
<p align='justify'>
The following steps are performed to demonstrate the usage of
Temporal features present in PTRAIL:
<ul type='square' align='justify'>
<li>Various features such as Date, Time, Week-day, and Time of Day are
calculated using the temporal_features.py module functions, and
the results are appended to the original dataframe.
</li>
<li> Not all the functions present in the module are demonstrated
here; only a few are shown, to keep the length of the notebook
manageable. Further functions can be explored in the library's
documentation, linked in the introduction section of this notebook.
</li>
</ul>
</p>
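Conceptually, features like these reduce to pandas datetime accessors. The sketch below is a simplified stand-in, not PTRAIL's implementation; in particular, the time-of-day boundaries are an assumption:

```python
import pandas as pd

ts = pd.Series(pd.to_datetime(['2008-10-23 05:53:05',
                               '2008-10-24 18:10:00']))

features = pd.DataFrame({
    'Date': ts.dt.date,
    'Day_Of_Week': ts.dt.day_name(),
    # Assumed bucketing of hours into times of day; PTRAIL's own
    # boundaries may differ.
    'Time_Of_Day': pd.cut(ts.dt.hour,
                          bins=[-1, 5, 11, 17, 23],
                          labels=['Night', 'Morning', 'Afternoon', 'Evening']),
})
```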
```
%%time
"""
To demonstrate the temporal features, we will:
1. First, import the temporal_features.py module from the
features package.
2. Generate Date, Day_Of_Week, Time_Of_day features on
the seagulls dataset.
3. Print the execution time of the code.
4. Finally, check the head of the dataframe to
see the results of feature generation.
"""
from ptrail.features.temporal_features import TemporalFeatures as temporal
temporal_features_df = temporal.create_date_column(seagulls_df)
temporal_features_df = temporal.create_day_of_week_column(temporal_features_df)
temporal_features_df = temporal.create_time_of_day_column(temporal_features_df)
temporal_features_df.head()
```
<h2> PTRAIL Kinematic Features </h2>
<p align='justify'>
The following steps are performed to demonstrate the usage of
kinematic features present in PTRAIL:
</p>
<ul type='square' align='justify'>
<li>Various features like Distance, Jerk, and rate of bearing rate are
calculated using the kinematic_features.py module functions, and
the results are appended to the original dataframe.
</li>
<li> While calculating jerk, the Acceleration, Speed, and
Distance columns are all added to the dataframe. Similarly, when calculating
the rate of bearing rate, the Bearing and Bearing Rate columns are
added to the dataframe.
</li>
<li> Not all the functions present in the module are demonstrated
here; only a few are shown, to keep the length of the notebook
manageable. Further functions can be explored in the library's
documentation, linked in the introduction section of this notebook.
</li>
</ul>
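The dependency chain described above, where distance leads to speed, then acceleration, and finally jerk, can be sketched with successive differences. This is a simplified, assumed formulation using one-dimensional path distance instead of the geodesic distances PTRAIL computes:

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # timestamps in seconds
x = np.array([0.0, 1.0, 4.0, 9.0, 16.0])  # cumulative distance in metres

dist = np.diff(x)                  # distance covered per step
dt = np.diff(t)                    # time elapsed per step
speed = dist / dt                  # m/s
accel = np.diff(speed) / dt[1:]    # m/s^2
jerk = np.diff(accel) / dt[2:]     # m/s^3
```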
```
%%time
"""
To demonstrate the kinematic features, we will:
1. First, import the kinematic_features.py module from the
features package.
2. Calculate Distance, Jerk and Rate of bearing rate features on
the seagulls dataset.
3. Print the execution time of the code.
4. Finally, check the head of the dataframe to
see the results of feature generation.
"""
from ptrail.features.kinematic_features import KinematicFeatures as kinematic
kinematic_features_df = kinematic.create_distance_column(seagulls_df)
kinematic_features_df = kinematic.create_jerk_column(kinematic_features_df)
kinematic_features_df = kinematic.create_rate_of_br_column(kinematic_features_df)
kinematic_features_df.head()
```
<h1> Trajectory Data Preprocessing </h1>
<p align = 'justify'>
For preprocessing, PTRAIL provides:
</p>
<ul align = 'justify'>
<li> Outlier Detection </li>
<li> Data filtering based on various constraints </li>
<li> Trajectory Interpolation </li>
</ul>
<p> For interpolation, the user needs to specify the type of interpolation to perform. </p>
<hr style="background-color:black; height:7px">
<h2> Outlier Detection and Data Filtering </h2>
<p> PTRAIL provides several outlier detection methods based on: </p>
<ul align='justify'>
<li>Distance</li>
<li> Speed </li>
<li> Trajectory length </li>
<li> Hampel filter algorithm </li>
</ul>
<p> It also provides several filtering methods based on various constraints such as: </p>
<ul type = 'square' align='justify'>
<li> Date </li>
<li> Speed </li>
<li> Distance, etc. </li>
</ul>
<p align = 'justify'> Not all the functions present in the module are demonstrated
here; only a few are shown, to keep the length of the notebook
manageable. Further functions can be explored in the library's
documentation, linked in the introduction section of this notebook.
</p>
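The Hampel filter used below can itself be sketched in a few lines of pandas: a point is flagged as an outlier when it deviates from a rolling median by more than a multiple of the median absolute deviation. This is a generic illustration of the algorithm, not PTRAIL's implementation; the window size and threshold are assumptions:

```python
import pandas as pd

def hampel_mask(series, window=5, n_sigmas=3):
    """Return a boolean mask that is True where a point is a Hampel outlier."""
    k = 1.4826  # scales the MAD to estimate the standard deviation
    med = series.rolling(window, center=True, min_periods=1).median()
    dev = (series - med).abs()
    mad = dev.rolling(window, center=True, min_periods=1).median()
    return dev > n_sigmas * k * mad

# A latitude series with one obvious jump at index 3.
lat = pd.Series([39.98, 39.99, 39.98, 45.00, 39.99, 39.98, 39.99])
mask = hampel_mask(lat)
clean = lat[~mask]
```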
```
%%time
"""
To demonstrate outlier detection and filtering, we will:
1. First, import the filters.py module from the
preprocessing package.
2. Detect outliers and Filter out data based on
bounding box, date and distance.
3. Print the execution time of the code.
"""
from ptrail.preprocessing.filters import Filters as filt
# Makes use of hampel filter from preprocessing package for outlier removal
outlier_df = filt.hampel_outlier_detection(seagulls_df, column_name='lat')
print(f"Length of original: {len(seagulls_df)}")
print(f"Length after outlier removal: {len(outlier_df)}")
print(f"Number of points removed: {len(seagulls_df) - len(outlier_df)}")
%%time
# Filters and gives out data contained within the given bounding box
filter_df_bbox = filt.filter_by_bounding_box(seagulls_df, (61, 24, 65, 25))
print(f"Length of original: {len(seagulls_df)}")
print(f"Length of data in the bounding box: {len(filter_df_bbox)}")
print(f"Number of points removed: {len(seagulls_df) - len(filter_df_bbox)}")
%%time
# Gives out the points contained within the given date range
filter_df_date = filt.filter_by_date(temporal_features_df, start_date='2009-05-28', end_date='2009-06-30')
print(f"Length of original: {len(temporal_features_df)}")
print(f"Length of data within specified date: {len(filter_df_date)}")
print(f"Number of points removed: {len(temporal_features_df) - len(filter_df_date)}")
%%time
# Filtered dataset with a given maximum distance
filter_df_distance = filt.filter_by_max_consecutive_distance(kinematic_features_df, max_distance=4500)
print(f"Length of original: {len(kinematic_features_df)}")
print(f"Length of Max distance Filtered DF: {len(filter_df_distance)}")
print(f"Number of points removed: {len(kinematic_features_df) - len(filter_df_distance)}")
```
<h2> Trajectory Interpolation </h2>
<p align='justify'>
As mentioned above, for the first time in the community, PTRAIL
offers <b><i> four different Interpolation Techniques </i></b> built
into it.
Trajectory interpolation is widely used when the trajectory data
on hand is not clean and contains several jumps that make the
trajectory abrupt. Using interpolation techniques, those jumps
between trajectory points can be filled in to make the
trajectory smoother.
PTRAIL offers the following four interpolation techniques:
</p>
<ol align='justify'>
<i>
<li> Linear Interpolation </li>
<li> Cubic Interpolation </li>
<li> Random-Walk Interpolation </li>
<li> Kinematic Interpolation </li>
</i>
</ol>
In the interpolation examples provided in this notebook, a single
trajectory selected from the seagulls dataset is used. It is to be
noted, however, that the interpolation works equally well on datasets
containing several trajectories.
<h3> Note </h3>
<p align='justify'>
Furthermore, it is also worth noting that PTRAIL only
interpolates the 4 fundamental columns, i.e., <i><b> Latitude, Longitude,
DateTime and Trajectory ID </b></i>. Hence, the dataframe returned
by the interpolation methods contains only these 4 columns. This
decision was made because it is not possible to meaningfully
interpolate the other kinds of data that may be present in the dataset.
</p>
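The idea behind the simplest technique, linear interpolation, can be sketched with pandas resampling. This toy example is not PTRAIL's `interpolate_position` (which also handles trajectory IDs and the `time_jump` threshold); it just shows how points inside a time gap are filled linearly:

```python
import pandas as pd

# A short trajectory with a 4-hour time jump between the last two points.
traj = pd.DataFrame(
    {'lat': [39.0, 40.0, 44.0], 'lon': [116.0, 117.0, 121.0]},
    index=pd.to_datetime(['2008-10-23 00:00',
                          '2008-10-23 01:00',
                          '2008-10-23 05:00']),
)

# Insert hourly timestamps inside the gap and fill them linearly in time.
smooth = traj.resample('1h').mean().interpolate(method='time')
```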
```
"""
First, the following operations are performed:
1. Import the necessary interpolation modules
from the preprocessing package.
2. Select a single trajectory ID from the seagulls
dataset and then plot it using folium.
3. The number of points having time jump greater than
4 hours between 2 points is also calculated and shown.
"""
import ptrail.utilities.constants as const
import folium
from ptrail.preprocessing.interpolation import Interpolation as ip
small_seagulls = seagulls_df.reset_index().loc[seagulls_df.reset_index()[const.TRAJECTORY_ID] == '91732'][[const.TRAJECTORY_ID, const.DateTime, const.LAT, const.LONG]]
time_del = small_seagulls.reset_index()[const.DateTime].diff().dt.total_seconds()
print((time_del > 3600*4).value_counts())
# Here, we plot the smaller trajectory on a folium map.
sw = small_seagulls[['lat', 'lon']].min().values.tolist()
ne = small_seagulls[['lat', 'lon']].max().values.tolist()
coords = [zip(small_seagulls[const.LAT], small_seagulls[const.LONG])]
m1 = folium.Map(location=[small_seagulls[const.LAT].iloc[0], small_seagulls[const.LONG].iloc[0]], zoom_start=1000)
folium.PolyLine(coords,
color='blue',
weight=2,
opacity=0.7).add_to(m1)
m1.fit_bounds([sw, ne])
m1
%%time
"""
Now, to demonstrate interpolation, the following steps
are taken:
1. Interpolate the selected trajectory using all of
the above mentioned techniques and print their
execution times along with them.
2. Then, plot all the trajectories side by side along
with the original trajectory on a scatter plot to
see how the time jumps are filled and the points
are inserted into the trajectory.
"""
# First, linear interpolation is performed.
linear_ip_gulls = ip.interpolate_position(dataframe=small_seagulls,
time_jump=3600*4,
ip_type='linear')
print(f"Original DF Length: {len(small_seagulls)}")
print(f"Interpolated DF Length: {len(linear_ip_gulls)}")
%%time
# Now, cubic interpolation is performed.
cubic_ip_gulls = ip.interpolate_position(dataframe=small_seagulls,
time_jump=3600*4,
ip_type='cubic')
print(f"Original DF Length: {len(small_seagulls)}")
print(f"Interpolated DF Length: {len(cubic_ip_gulls)}")
%%time
# Now, random-walk interpolation is performed.
rw_ip_gulls = ip.interpolate_position(dataframe=small_seagulls,
time_jump=3600*4,
ip_type='random-walk')
print(f"Original DF Length: {len(small_seagulls)}")
print(f"Interpolated DF Length: {len(rw_ip_gulls)}")
%%time
# Now, kinematic interpolation is performed.
kin_ip_gulls = ip.interpolate_position(dataframe=small_seagulls,
time_jump=3600*4,
ip_type='kinematic')
print(f"Original DF Length: {len(small_seagulls)}")
print(f"Interpolated DF Length: {len(kin_ip_gulls)}")
"""
Now, plotting all the scatter plots side by side
in order to compare the interpolation techniques.
"""
import matplotlib.pyplot as plt
plt.scatter(small_seagulls[const.LAT],
small_seagulls[const.LONG],
s=15, color='purple')
plt.title('Original Trajectory', color='black', size=15)
plt.show()
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(25, 20))
ax[0, 0].scatter(linear_ip_gulls[const.LAT],
linear_ip_gulls[const.LONG],
s=50, color='red')
ax[0, 0].set_title('Linear Interpolation', color='black', size=40)
ax[0, 1].scatter(cubic_ip_gulls[const.LAT],
cubic_ip_gulls[const.LONG],
s=50, color='orange')
ax[0, 1].set_title('Cubic Interpolation', color='black', size=40)
ax[1, 0].scatter(rw_ip_gulls[const.LAT],
rw_ip_gulls[const.LONG],
s=50, color='blue')
ax[1, 0].set_title('Random-Walk Interpolation', color='black', size=40)
ax[1, 1].scatter(kin_ip_gulls[const.LAT],
kin_ip_gulls[const.LONG],
s=50, color='brown')
ax[1, 1].set_title('Kinematic Interpolation', color='black', size=40)
for axis in ax.flat:
    axis.set_xlabel('Latitude', color='grey', size=25)
    axis.set_ylabel('Longitude', color='grey', size=25)
```
# Getting Started with Boutiques
As you've seen from our documentation, Boutiques is a flexible way to represent command line executables and distribute them across compute ecosystems consistently. A Boutiques tool descriptor is a JSON file that fully describes the input and output parameters and files for a given command line call (or calls, as you can include pipes (`|`) and ampersands (`&`)). There are several ways Boutiques helps you build a tool descriptor for your tool:
- The [boutiques command-line utility](https://github.com/boutiques/boutiques/) contains a validator, simulator, and other tools which can help you either find an existing descriptor you wish to model yours after, or build and test your own.
- The [examples](https://github.com/aces/cbrain-plugins-neuro/tree/master/cbrain_task_descriptors) provide useful references for development.
To aid in this process, we will walk through making a tool descriptor for [FSL's BET](http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/BET) (finished product found [here](https://github.com/aces/cbrain-plugins-neuro/blob/master/cbrain_task_descriptors/fsl_bet.json)).
## Step 1: Describing the command line
The first step in creating a tool descriptor for your command line call is creating a fully descriptive list of your command line options. If your tool is written in Python and uses the `argparse` library, then this is already largely done for you. For many tools (bash, Python, or otherwise) this list can be obtained by executing the tool with the `-h` flag. In the case of FSL's BET, we get the following:
```
%%bash
bet -h
```
Looking at all of these flags, we see a list of options which can be summarized by:
```
bet [INPUT_FILE] [MASK] [FRACTIONAL_INTENSITY] [VERTICAL_GRADIENT] [CENTER_OF_GRAVITY] [OVERLAY_FLAG] [BINARY_MASK_FLAG] [APPROX_SKULL_FLAG] [NO_SEG_OUTPUT_FLAG] [VTK_VIEW_FLAG] [HEAD_RADIUS] [THRESHOLDING_FLAG] [ROBUST_ITERS_FLAG] [RES_OPTIC_CLEANUP_FLAG] [REDUCE_BIAS_FLAG] [SLICE_PADDING_FLAG] [MASK_WHOLE_SET_FLAG] [ADD_SURFACES_FLAG] [ADD_SURFACES_T2] [VERBOSE_FLAG] [DEBUG_FLAG]
```
Now that we have summarized all command line options for our tool - some of which describe inputs and others, outputs - we can begin to craft our JSON Boutiques tool descriptor.
## Step 2: Understanding Boutiques + JSON
For those unfamiliar with JSON, we recommend following this [3 minute JSON tutorial](http://www.secretgeek.net/json_3mins) to get you up to speed. In short, a JSON file is a dictionary object which contains *keys* and associated *values*. A *key* informs us what is being described, and a *value* is the description (which, importantly, can be arbitrarily typed). The Boutiques tool descriptor is a JSON file which requires the following keys, or, properties:
- `name`
- `description`
- `schema-version`
- `command-line`
- `inputs`
- `output-files`
Some additional, optional properties that a Boutiques file will recognize are:
- `groups`
- `tool-version`
- `suggested-resources`
- `container-image`:
- `type`
- `image`
- `index`
In the case of BET, we will of course populate the required elements, but will also include `tool-version` and `groups`.
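A quick way to sanity-check a descriptor against this list of required properties is a simple set comparison. This is a minimal sketch; the real validator checks the descriptor against the complete JSON schema:

```python
import json

REQUIRED = {"name", "description", "schema-version",
            "command-line", "inputs", "output-files"}

descriptor = json.loads("""
{
  "name": "fsl_bet",
  "description": "Automated brain extraction tool for FSL",
  "schema-version": "0.4",
  "command-line": "bet [INPUT_FILE] [MASK]",
  "inputs": [],
  "output-files": []
}
""")

missing = REQUIRED - descriptor.keys()
```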
## Step 3: Populating the tool descriptor
We will break-up populating the tool descriptor into two sections: adding meta-parameters (such as `name`, `description`, `schema-version`, `command-line`, `tool-version`, and `docker-image`, `docker-index` if we were to include them) and i/o-parameters (such as `inputs`, `output-files`, and `groups`).
Currently, before adding any details, our tool descriptor looks like this:
```
{
"name" : TODO,
"tool-version": TODO,
"description": TODO,
"command-line": TODO,
"schema-version": TODO,
"inputs": TODO,
"output-files": TODO,
}
```
### Step 3.1: Adding meta-parameters
Many of the meta-parameters will be obvious to you if you're familiar with the tool, or can be extracted from the help message received earlier when you passed the `-h` flag to your program. We can update our JSON to be the following:
```
{
"name" : "fsl_bet",
"tool-version" : "1.0.0",
"description" : "Automated brain extraction tool for FSL",
"command-line" : "bet [INPUT_FILE] [MASK] [FRACTIONAL_INTENSITY] [VERTICAL_GRADIENT] [CENTER_OF_GRAVITY] [OVERLAY_FLAG] [BINARY_MASK_FLAG] [APPROX_SKULL_FLAG] [NO_SEG_OUTPUT_FLAG] [VTK_VIEW_FLAG] [HEAD_RADIUS] [THRESHOLDING_FLAG] [ROBUST_ITERS_FLAG] [RES_OPTIC_CLEANUP_FLAG] [REDUCE_BIAS_FLAG] [SLICE_PADDING_FLAG] [MASK_WHOLE_SET_FLAG] [ADD_SURFACES_FLAG] [ADD_SURFACES_T2] [VERBOSE_FLAG] [DEBUG_FLAG]",
"schema-version" : "0.4",
"inputs": TODO,
"output-files": TODO,
"groups": TODO
}
```
### Step 3.2: Adding i/o parameters
Inputs and outputs of many applications are complicated - outputs can be dependent upon input flags, flags can be mutually exclusive or require at least one option, etc. The way Boutiques handles this is with a detailed schema which consists of options for inputs and outputs, as well as optionally specifying groups of inputs which may add additional layers of input complexity.
As you have surely noted, a tool only contains a single "name" or "version", but may have many input and output parameters. This means that inputs, outputs, and groups will each be described as a list. Each element of these lists is a dictionary following the input, output, or group schema, respectively. This means that our JSON actually looks more like this:
```
{
"name" : "fsl_bet",
"tool-version" : "1.0.0",
"description" : "Automated brain extraction tool for FSL",
"command-line" : "bet [INPUT_FILE] [MASK] [FRACTIONAL_INTENSITY] [VERTICAL_GRADIENT] [CENTER_OF_GRAVITY] [OVERLAY_FLAG] [BINARY_MASK_FLAG] [APPROX_SKULL_FLAG] [NO_SEG_OUTPUT_FLAG] [VTK_VIEW_FLAG] [HEAD_RADIUS] [THRESHOLDING_FLAG] [ROBUST_ITERS_FLAG] [RES_OPTIC_CLEANUP_FLAG] [REDUCE_BIAS_FLAG] [SLICE_PADDING_FLAG] [MASK_WHOLE_SET_FLAG] [ADD_SURFACES_FLAG] [ADD_SURFACES_T2] [VERBOSE_FLAG] [DEBUG_FLAG]",
"schema-version" : "0.4",
"inputs": [
{TODO},
{TODO},
...
],
"output-files": [
{TODO},
{TODO},
...
],
}
```
As the file is beginning to grow considerably in length, we will no longer show the full JSON at each step, but will simply show the dictionaries responsible for the input, output, and group entries.
#### Step 3.2.1: Specifying inputs
The input schema contains several options, many of which can be ignored in this first example with the exception of `id`, `name`, and `type`. For BET, there are several input values we can choose to demonstrate this for you. We have chosen three with considerably different functionality and therefore schemas. In particular:
- `[INPUT_FILE]`
- `[FRACTIONAL_INTENSITY]`
- `[CENTER_OF_GRAVITY]`
**`[INPUT_FILE]`** The simplest of these is `[INPUT_FILE]`, which is a required parameter that simply expects a qualified path to a file. The dictionary entry is:
```
{
"id" : "infile",
"name" : "Input file",
"type" : "File",
"description" : "Input image (e.g. img.nii.gz)",
"optional": false,
"value-key" : "[INPUT_FILE]"
}
```
**`[FRACTIONAL_INTENSITY]`** This parameter documents an optional flag that can be passed to the executable. When the flag is passed, it is accompanied by a floating point value that can range from 0 to 1. Boutiques can validate whether a valid input was passed, so that invalid jobs are flagged during input validation rather than being submitted to the execution engine, where they would fail. This dictionary is:
```
{
"id" : "fractional_intensity",
"name" : "Fractional intensity threshold",
"type" : "Number",
"description" : "Fractional intensity threshold (0->1); default=0.5; smaller values give larger brain outline estimates",
"command-line-flag": "-f",
"optional": true,
"value-key" : "[FRACTIONAL_INTENSITY]",
"integer" : false,
"minimum" : 0,
"maximum" : 1
}
```
**`[CENTER_OF_GRAVITY]`** The center of gravity option expects a triple (i.e. an [X, Y, Z] position) when the flag is specified. Here we are able to enforce that the list received after this flag has exactly 3 entries, by specifying that the input is a list with both a minimum and a maximum length.
```
{
"id" : "center_of_gravity",
"name" : "Center of gravity vector",
"type" : "Number",
"description" : "The xyz coordinates of the center of gravity (voxels, not mm) of initial mesh surface. Must have exactly three numerical entries in the list (3-vector).",
"command-line-flag": "-c",
"optional": true,
"value-key" : "[CENTER_OF_GRAVITY]",
"list" : true,
"min-list-entries" : 3,
"max-list-entries" : 3
}
```
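Constraints like these (numeric bounds, list lengths) are straightforward to check against a candidate value. The function below is a hedged sketch of the kind of validation Boutiques performs, not its actual code:

```python
def check_input(spec, value):
    """Return a list of constraint violations for one input value."""
    errors = []
    if spec.get("list"):
        if "min-list-entries" in spec and len(value) < spec["min-list-entries"]:
            errors.append("too few entries")
        if "max-list-entries" in spec and len(value) > spec["max-list-entries"]:
            errors.append("too many entries")
    else:
        if "minimum" in spec and value < spec["minimum"]:
            errors.append("below minimum")
        if "maximum" in spec and value > spec["maximum"]:
            errors.append("above maximum")
    return errors

fractional = {"minimum": 0, "maximum": 1}
cog = {"list": True, "min-list-entries": 3, "max-list-entries": 3}
```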
For further examples of different types of inputs, feel free to explore [more examples](https://github.com/aces/cbrain-plugins-neuro/tree/master/cbrain_task_descriptors).
#### Step 3.2.2: Specifying outputs
The output schema also contains several options, with the only mandatory ones being `id`, `name`, and `path-template`. We again demonstrate an example from BET:
- `outfile`
**`outfile`** All of the output parameters in BET are similarly structured, and exploit the same core functionality of basing the output file, described by `path-template`, on the value of an input on the command line, here given by `[MASK]`. The `optional` flag describes whether a derivative should always be produced, and whether Boutiques should raise an error if the file isn't found. The output descriptor is thus:
```
{
"id" : "outfile",
"name" : "Output mask file",
"description" : "Main default mask output of BET",
"path-template" : "[MASK].nii.gz",
"optional" : true
}
```
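The `path-template` mechanism boils down to value-key substitution: once the input values are known, each key in the template is replaced with its value. A minimal sketch (the input value here is hypothetical):

```python
def resolve_path(template, values):
    """Substitute each [VALUE_KEY] in a path-template with its input value."""
    for key, value in values.items():
        template = template.replace(key, str(value))
    return template

out = resolve_path("[MASK].nii.gz", {"[MASK]": "sub01_brain"})
```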
An extension of this feature of naming outputs based on inputs exists in newer versions of the schema than the one this example was originally developed against, and enables stripping the extension of the input values used as well. An example of this can be seen [here](https://github.com/neurodata/boutiques-tools/blob/master/cbrain_task_descriptors/ndmg.json#L158).
#### Step 3.2.3: Specifying groups
The group schema provides an additional layer of complexity when considering the relationships between inputs. For instance, if multiple inputs within a set are mutually exclusive, they may be grouped with a flag set indicating that only one can be selected. Alternatively, if at least one option within a group must be specified, the user can also set a flag indicating such. The following group from the BET implementation is used to illustrate this:
- `variational_params_group`
**`variational_params_group`** Many flags exist in BET, and each of them is represented in the command line we specified earlier. However, as you may have noticed when reading the output of `bet -h`, several of these options are mutually exclusive to one another. In order to again prevent jobs from being submitted to a scheduler and failing there, Boutiques enables grouping of inputs and enforcing such mutual exclusivity, so that invalid inputs are flagged in the validation stage. This group dictionary is:
```
{
"id" : "variational_params_group",
"name" : "Variations on Default Functionality",
"description" : "Mutually exclusive options that specify variations on how BET should be run.",
"members" : ["robust_iters_flag", "residual_optic_cleanup_flag", "reduce_bias_flag", "slice_padding_flag", "whole_set_mask_flag", "additional_surfaces_flag", "additional_surfaces_t2"],
"mutually-exclusive" : true
}
```
Though an example of `one-is-required` input groups is not available in our BET example, you can investigate a validated tool descriptor [here](https://github.com/neurodata/boutiques-tools/blob/master/cbrain_task_descriptors/ndmg.json#L13) to see how it is implemented.
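Enforcing `mutually-exclusive` amounts to counting how many of a group's members appear in a given invocation; a hedged sketch of that check:

```python
def group_is_valid(group, invocation):
    """Return True if the invocation respects a mutually-exclusive group."""
    used = [m for m in group["members"] if m in invocation]
    if group.get("mutually-exclusive") and len(used) > 1:
        return False
    return True

group = {"members": ["robust_iters_flag", "reduce_bias_flag"],
         "mutually-exclusive": True}

ok = group_is_valid(group, {"robust_iters_flag": True})
bad = group_is_valid(group, {"robust_iters_flag": True,
                             "reduce_bias_flag": True})
```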
### Step 3.3: (optional) Extending the tool descriptor
Now that the basic implementation of this tool has been done, you can check out the [schema](https://github.com/boutiques/boutiques/blob/master/tools/python/boutiques/schema/descriptor.schema.json) to explore deeper functionality of Boutiques. For example, if you have created a Docker or Singularity container, you can associate an image with your tool descriptor and any compute resource with Docker or Singularity installed will launch the executable through them (an example of using Docker can be found [here](https://github.com/neurodata/boutiques-tools/blob/master/cbrain_task_descriptors/ndmg.json#L6)).
## Step 4: Validating the tool descriptor
Once you've completed your Boutiques tool descriptor, you should run the [validator](https://github.com/boutiques/boutiques#validation) to ensure that you have created it correctly. The `README.md` [here](https://github.com/boutiques/boutiques/) describes how to install and use the validator and remainder of the Boutiques shell (`bosh`) tools on your tool descriptor.
## Step 5: Using the tool descriptor
Once the tool descriptor has been validated, your tool is now ready to be integrated in a platform that supports Boutiques. You can use the `localExec.py` tool described [here](https://github.com/boutiques/boutiques/tree/master/tools) to launch your container locally for preliminary testing. Once you feel comfortable with your tool, you can contact your system administrator and have them integrate it into their compute resources so you can test and use it to process your data.
# Modeling and Simulation in Python
Chapter 10
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
from pandas import read_html
```
### Under the hood
To get a `DataFrame` and a `Series`, I'll read the world population data and select a column.
`DataFrame` and `Series` contain a variable called `shape` that indicates the number of rows and columns.
```
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
table2.shape
census = table2.census / 1e9
census.shape
un = table2.un / 1e9
un.shape
```
A `DataFrame` contains `index`, which labels the rows. It is an `Int64Index`, which is similar to a NumPy array.
```
table2.index
```
And `columns`, which labels the columns.
```
table2.columns
```
And `values`, which is an array of values.
```
table2.values
```
A `Series` does not have `columns`, but it does have `name`.
```
census.name
```
It contains `values`, which is an array.
```
census.values
```
And it contains `index`:
```
census.index
```
If you ever wonder what kind of object a variable refers to, you can use the `type` function. The result indicates what type the object is, and the module where that type is defined.
`DataFrame`, `Int64Index`, `Index`, and `Series` are defined by Pandas.
`ndarray` is defined by NumPy.
```
type(table2)
type(table2.index)
type(table2.columns)
type(table2.values)
type(census)
type(census.index)
type(census.values)
```
## Optional exercise
The following exercise provides a chance to practice what you have learned so far, and maybe develop a different growth model. If you feel comfortable with what we have done so far, you might want to give it a try.
**Optional Exercise:** On the Wikipedia page about world population estimates, the first table contains estimates for prehistoric populations. The following cells process this table and plot some of the results.
Select `tables[1]`, which is the second table on the page.
```
table1 = tables[1]
table1.head()
```
Not all agencies and researchers provided estimates for the same dates. Again `NaN` is the special value that indicates missing data.
```
table1.tail()
```
Some of the estimates are in a form we can't read as numbers. We could clean them up by hand, but for simplicity I'll replace any value that has an `M` in it with `NaN`.
```
table1.replace('M', np.nan, regex=True, inplace=True)
```
Again, we'll replace the long column names with more convenient abbreviations.
```
table1.columns = ['prb', 'un', 'maddison', 'hyde', 'tanton',
'biraben', 'mj', 'thomlinson', 'durand', 'clark']
```
This function plots selected estimates.
```
def plot_prehistory(table):
    """Plots population estimates.

    table: DataFrame
    """
    plot(table.prb, 'ro', label='PRB')
    plot(table.un, 'co', label='UN')
    plot(table.hyde, 'yo', label='HYDE')
    plot(table.tanton, 'go', label='Tanton')
    plot(table.biraben, 'bo', label='Biraben')
    plot(table.mj, 'mo', label='McEvedy & Jones')
```
Here are the results. Notice that we are working in millions now, not billions.
```
plot_prehistory(table1)
decorate(xlabel='Year',
ylabel='World population (millions)',
title='Prehistoric population estimates')
```
We can use `xlim` to zoom in on everything after Year 0.
```
plot_prehistory(table1)
decorate(xlim=[0, 2000], xlabel='Year',
ylabel='World population (millions)',
title='Prehistoric population estimates')
```
See if you can find a model that fits these data well from Year -1000 to 1940, or from Year 1 to 1940.
How well does your best model predict actual population growth from 1950 to the present?
```
# Solution
def update_func_prop(pop, t, system):
    """Compute the population next year with proportional growth.

    pop: current population
    t: current year
    system: system object containing parameters of the model

    returns: population next year
    """
    net_growth = system.alpha * pop
    return pop + net_growth
# Solution
t_0 = 1
p_0 = table1.biraben[t_0]
prehistory = System(t_0=t_0,
t_end=2016,
p_0=p_0,
alpha=0.0011)
# Solution
def run_simulation(system, update_func):
    """Simulate the system using any update function.

    system: System object
    update_func: function that computes the population next year

    returns: TimeSeries
    """
    results = TimeSeries()
    results[system.t_0] = system.p_0
    for t in linrange(system.t_0, system.t_end):
        results[t+1] = update_func(results[t], t, system)
    return results
# Solution
results = run_simulation(prehistory, update_func_prop)
plot_prehistory(table1)
plot(results, color='gray', label='model')
decorate(xlim=[0, 2000], xlabel='Year',
ylabel='World population (millions)',
title='Prehistoric population estimates')
# Solution
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results / 1000, color='gray', label='model')
decorate(xlim=[1950, 2016], xlabel='Year',
ylabel='World population (billions)',
title='Prehistoric population estimates')
```
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from scipy import stats as st
filename = "../data/Klout_scores.csv"
klout = pd.read_csv(filename)
klout.info()
klout.head()
klout.describe()
klout.hist(bins=20)
```
# Sampling distribution
```
def s_sample_mean(s_population, n):
    """For a sample of size n, calculate the standard deviation of the
    sample mean, given the standard deviation of the population.
    """
    return s_population / np.sqrt(n)
std = klout.std()[0]
s = s_sample_mean(std, 35)
print(s)
```
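The formula can be checked empirically: draw many samples of size n from a population and compare the standard deviation of their means against s/sqrt(n). The sketch below uses simulated data in place of the Klout scores:

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=40, scale=16, size=100_000)

n = 35
sample_means = [rng.choice(population, size=n).mean() for _ in range(2000)]

empirical = np.std(sample_means)
theoretical = population.std() / np.sqrt(n)
# The two values should agree to within a few percent.
```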
### `scipy.stats.norm.cdf`
```
Signature: st.norm.cdf(x, *args, **kwds)
Docstring:
Cumulative distribution function of the given RV.
Parameters
----------
x : array_like
quantiles
arg1, arg2, arg3,... : array_like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array_like, optional
location parameter (default=0)
scale : array_like, optional
scale parameter (default=1)
Returns
-------
cdf : ndarray
Cumulative distribution function evaluated at `x`
```
```
z_score = (40 - 37.72) / 2.71
# Normal distribution
1 - st.norm.cdf(z_score)
# standard deviation of the mean of a random sample of 250 users
s = s_sample_mean(std, 250)
print(s)
z_score = (40 - 37.72) / s
print("zscore = {:.3f}".format(z_score))
# Normal distribution
1 - st.norm.cdf(z_score)
```
### `scipy.stats.norm.ppf`
```
Signature: st.norm.ppf(q, *args, **kwds)
Docstring:
Percent point function (inverse of `cdf`) at q of the given RV.
Parameters
----------
q : array_like
lower tail probability
arg1, arg2, arg3,... : array_like
The shape parameter(s) for the distribution (see docstring of the
instance object for more information)
loc : array_like, optional
location parameter (default=0)
scale : array_like, optional
scale parameter (default=1)
Returns
-------
x : array_like
quantile corresponding to the lower tail probability q.
```
```
st.norm.ppf(0.975)
print(40 - 1.96 * 1.015)
print(40 + 1.96 * 1.015)
st.norm.ppf(0.99)
print(40 - 2.326 * 1.015)
print(40 + 2.326 * 1.015)
filename = "../data/Engagement ratio.csv"
engagement = pd.read_csv(filename)
engagement.describe()
std = engagement.std()[0]
mean = engagement.mean()[0]
n = 20
sample_std = s_sample_mean(std, n)
print(sample_std)
sample_mean = 0.13
print(sample_mean - 1.96 * sample_std)
print(sample_mean + 1.96 * sample_std)
sample_std = s_sample_mean(0.64, 20)
print(sample_std)
sample_std = s_sample_mean(0.73, 20)
print(sample_std)
(8.94 - 7.5) / 0.143
(8.35 - 8.2) / .163
1 - st.norm.cdf(10.06)
1 - st.norm.cdf(.92)
def ci(mean, std, confidence):
    '''Calculate the confidence interval for the normal distribution N(mean, std),
    where `confidence` is the upper-tail cumulative probability
    (e.g. 0.975 for a two-sided 95% interval).
    '''
    z_crit = st.norm.ppf(confidence)  # critical z value, not a standard error
    return mean - z_crit * std, mean + z_crit * std
sample_mean = np.mean([8, 9, 12, 13, 14, 16])
sample_std = s_sample_mean(2.8, 6)
ci(sample_mean, sample_std, .975)
sample_std = s_sample_mean(10, 25)
print(sample_std)
z_score = (75 - 68) / sample_std
print(z_score)
1 - st.norm.cdf(3.5)
ci(75, 2, .975)
78.92 - 75
st.norm.ppf(.995)
sample_std = s_sample_mean(18, 9)
print(sample_std)
z_score = (175 - 180) / sample_std
print(z_score)
1 - st.norm.cdf(.833)
ci(175, 6, .995)
```
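Note that `ci` takes the upper-tail cumulative probability, so 0.975 yields a two-sided 95% interval. The critical values used throughout this notebook (1.96 and 2.326) can be reproduced without SciPy using the standard library's `statistics.NormalDist`:

```python
from statistics import NormalDist

z95 = NormalDist().inv_cdf(0.975)  # two-sided 95% critical value
z99 = NormalDist().inv_cdf(0.99)   # upper-tail 0.99 quantile (two-sided: 98%)
print(round(z95, 3), round(z99, 3))         # 1.96 and 2.326, as used above
print(40 - z95 * 1.015, 40 + z95 * 1.015)   # reproduces the 95% interval above
```

This also makes explicit that `st.norm.ppf(0.99)` gives a two-sided 98% (not 99%) interval; a two-sided 99% interval would use `inv_cdf(0.995)` ≈ 2.576.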
| github_jupyter |
# Response to Christian Vogel's Dissertation
# Prelude
```
%load_ext autoreload
%autoreload 2
# First, I have to load the different modules that I use for analyzing the data and for plotting:
import sys, os, collections
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt; plt.rcdefaults()
from matplotlib.pyplot import figure
from collections import Counter
# Second, I have to load the Text Fabric app
from tf.fabric import Fabric
from tf.app import use
A = use('bhsa', hoist=globals())
```
## Introduction and Laudatio
1. This defense is for me like a continuation of a long conversation about exegetical methodology that Chris and I started several years ago. In the last couple of days, Chris and I have enjoyed continuing this tradition by talking and discussing his dissertation quite a bit. It has been a good, fruitful, and thought-provoking dialogue among friends, colleagues, and brethren of faith. Your openness, vulnerability, passion for the word and the exegetical task always inspire me!
2. But this is a formal defense of your dissertation, and so I will try to do my best to undergo a temporary metamorphosis from being your friend to becoming your academic opponent. 😉
3. But before I share some of my questions and disagreements, let me say again that it was a delight and a pleasure to read through your book. I wish your English proficiency and eloquent writing skill could easily be transferred to me.
4. And your dissertation makes me proud of this building (seminary) and the OT department, allowing and stimulating dissertations like yours!
5. Chris, thank you for your hard work with the biblical text, allowing me to see things that I have not seen before and to establish connections that I would not have made before.
6. But the fact that I am not able to see some things that you see raises a methodological question. Why do you see things that I do not see, and why are you making connections that I cannot make? Some of the answers may have to do with reading skills, biography, and spirituality. And there is much to learn on that front. But some of the answers have to do with the scholarly exercise of exegetical methodology. Your dissertation has helped me to (re)formulate the quest for objectivity on the one hand (think of Christof Hardmeier’s “empirische Textpragmatik”), and the quest for legitimate subjectivity on the other hand (think of Paul Ricoeur’s “surplus of meaning”).
7. To me, your dissertation reads like the digital post-processing of an old Qumran fragment (2 Sam 2-5) with Photoshop. You are playing with the color values, contrast, and light spectra in such a way that you can make visible things that got lost through the ages (in your case: the tradition of reading). But at the same time, we all know from image processing that you can over-process an image to such an extent that the image gets distorted. When one does that, you can see things in the image that the original image did not show. You can make an image differ significantly from its original by making it look much brighter and sharper than it was in reality. The irony, of course, is that texts themselves function like image processors, as they sharpen the contrasts, colors, and dynamics of history’s bruta facta (and yes, I am aware of the complications of that term… [cf. Nagel’s “What is it like to be a bat?”]). Exegetical method, then, seeks to recover the textual product of post-processed historical events.
To me, however, your dissertation often reads as a post-processing of the text’s post-processing of the historical datum. It’s like interpretation on steroids.
8. So, let me first start with 2-3 concrete examples of your interpretation that I am methodologically challenged by, before asking the larger methodological question. Here comes the inductive approach:
## Chris’ Method in Practice
### "who were with him": detecting significance...
You often label phenomena as “significant”. In this way you seek to register intentionality (in fact: authorial intent). But how do you measure whether something is significant (a method chapter is missing!)? For example, on p75 you argue that it “is not coincidental” for the phrases אִ֣ישׁ וּבֵיתֹ֑ו and אֲשֶׁר־עִמֹּ֛ו to occur together. However, is this really significant? One can find almost anywhere a combination of two phrases in a clause that appears together only once in the OT. Why does it have significance here and not elsewhere? You might want to argue that אִ֣ישׁ וּבֵיתֹ֑ו is actually more significant, as it appears only in the Davidic cycle (1 Sam 27:3, 2 Sam 2:3) and the opening of Ex 1:1 (though the question that would still have to be answered is “in what way would it be significant?”). For example: if you just look at the next clause in 2 Sam 2:3 (וַיֵּשְׁב֖וּ בְּעָרֵ֥י חֶבְרֹֽון), a pl wayyiqtol followed by a prepositional complement phrase (בְּ with a pl construct of עִיר [which cities belong to Hebron?]) appears only here in v3 and nowhere else in the TNK. I think this has no significance, and thus you rightly do not discuss it. But then, what justifies granting significance to the previous phenomenon (the phrases אִ֣ישׁ וּבֵיתֹ֑ו and אֲשֶׁר־עִמֹּ֛ו) when its singularity cannot be used as an argument? Every day is individual and unrepeated (as well as unrepeatable). But that does not yet make it a special day… And if one really wanted to argue for the equation “singularity = significance”, then the text would be inflated with significance to such an extent that the whole concept of rhetoric and textual intentionality would collapse.
```
WithHim = '''
verse
  w1:word lex=>CR
  <: w2:word lex=<M prs_ps=p3
'''
WithHim = A.search(WithHim)
A.table(WithHim, start=1, end=48, condensed=True)
```
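The point that uniqueness is statistically cheap can be illustrated with a toy simulation (an entirely hypothetical corpus, with no relation to the BHSA data): draw random two-phrase "clauses" from a modest vocabulary and ask what share of the attested phrase pairs co-occur exactly once.

```python
import random
from collections import Counter

def unique_pair_share(n_clauses=5000, vocab=1000, seed=0):
    """Share of observed phrase pairs that co-occur in exactly one clause
    of a randomly generated corpus."""
    rng = random.Random(seed)
    pairs = Counter()
    for _ in range(n_clauses):
        a, b = rng.sample(range(vocab), 2)  # two distinct phrases per clause
        pairs[frozenset((a, b))] += 1
    return sum(1 for c in pairs.values() if c == 1) / len(pairs)

share = unique_pair_share()
print(share)  # the vast majority of attested pairs occur exactly once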
### נגד as "significant"
נגד: You argue that the term נגד “provides an interesting link between David and Saul” (p81, 1 Sam 11 => 2 Sam 2:4). However, the term appears 52x in 1 Sam, not just in 1 Sam 11:9. Consequently, נגד does not really establish any strong link. Now, it must be said that you do create another link with 1 Sam 11: Jabesh-Gilead. And this is the really strong link. But the fact that you mention the word-usage-based link (נגד) first is representative of much of your approach. It seems that in your argumentation verbal repetitions – even when insignificant – are prioritized over stronger textual phenomena (Jabesh-Gilead). Thus, for the reader, the arguments seem to be presented in the wrong order. This order potentially reveals your methodological bias: lexical connections receive preference over those phenomena that weigh heavier in the actual reading process (see cognitive linguistics, cognitive literary studies).
```
NGD = '''
clause typ=Way0|WayX
  word lex=NGD[
  word lex=L
'''
NGD = A.search(NGD)
A.table(NGD, start=1, end=5, condensed=True)
```
### "lets make strong your hand"
תֶּחֱזַ֣קְנָה יְדֵיכֶ֗ם: The phraseological link between 2 Sam 2:7 and Judg 7:11 is – in my eyes – quite problematic and calls for an inquiry into the method of “link establishment” (you say “It does not seem accidental, for example, that the almost identical phrase occurs only one time previously in the Old Testament, namely in Judg 7:11”, p89). What is presented as a very rare phrase appears in the TNK as a valence construction at least 46x. Yes, there are variations of this valence regarding tense and pronominal suffix, but those do not contribute to the essential meaning of the valence. Therefore, how can a very normal expression be of strong interpretative significance when its specialness is marked only by tense and pronominal suffix? As an example, the phrase “had raised hands” has no more significance in a text in which the construction “to raise” + “hand” appears multiple times but not in a perfect tense.
```
StrongHand = '''
clause
  phrase
    word lex=HJH[
  phrase
    word lex=MSPR/
    word lex=JWM/
'''
StrongHand = A.search(StrongHand)
A.table(StrongHand, start=1, end=45, condensed=True)
```
### "His Firstborn"
בְכֹורֹו֙ (p166, 2 Sam 3:2): You argue that the “designation ‘his firstborn’ (בְכֹורֹו֙) occurs only twice previously in the Old Testament” (the firstborn of Gideon [Judg 8:20], the firstborn of Judah [Gen 38:6]). You then state: “The verbal link to Judg 8 connects David with Gideon at the very point in the Gideon narrative where the issue of kingship first comes up […]” (p166). However, the formulation does not appear just 3 times in total but at least 20 times. With a 3sgM suffix it appears 13 times, and with a 3sgM suffix + apposition to a proper name (as in 2 Sam 3:2) 9 times. Basically, the phrase is very normal and regularly used. Consequently, it does not create a specific connection to Gideon and Judah. Are there some analogies to the Gideon account? Yes, I think one could argue that. But are these analogies intentional? That is, in my opinion, not to be determined on objective grounds – and definitely not with the help of a phrase that is used often and generically…
```
FirstBorn = '''
phrase_atom rela=Appo
  word lex=BKR/ prs_ps=p3
'''
FirstBorn = A.search(FirstBorn)
A.table(FirstBorn, start=1, end=95, condensed=True)
```
### Hebron as "Paradise"?
Another example is your important suggestion that Hebron functions as a new paradise (p77-79). Besides mentioning the city of Hebron (no appositions, no attributive clauses are used), the text in 2 Sam 2:1 nowhere makes a single explicit link to the narratives in Gen 13 (and you seem to acknowledge that in your discussion of 2 Sam 2:1). The idea of Hebron representing the garden of Eden is potentially present in Gen 13 – I agree with that – but importing that meaning-connection into 2 Sam 2:1 without a single explicit allusion is problematic. The only implicit connection that you seek to establish is the one between Hebron as a city of refuge and David as a priestly figure (p79, but that connection is not present in 2 Sam 2:1). But even this connection is a difficult one, since (a) Hebron does not receive any distinct prominence in Josh 20-21 (it appears in a list of many cities, neither at the beginning nor at the end) and (b) none of this connection is made when Hebron is introduced in the narration. Although your statements on pp78-79 are cautious (“raises the possibility”, “seems to be associated”), these “possibilities” are transformed into “obviousness” at the end of your book (cf. pp301, 314, 316 [“it is significant that David reigns at Hebron, which was both a city of refuge and a place of worship linked to the original garden sanctuary”]). Your “garden” interpretation – considering that the Gen 2-3 and Gen 13 allusions are absent (see my discussion on “Bones and Flesh” below) or at best meager – could be taken as a product of eisegesis. Let me be clear: I would love to see the garden connection with Hebron, my theology wants it, but scholarly methodology prohibits it – at least any strong statement about it.
### "Number of Days"
וַֽיְהִי֙ מִסְפַּ֣ר הַיָּמִ֔ים: You seek to establish a connection between 1 Sam 27:9 and 2 Sam 2:1-11 with the help of the phrase וַֽיְהִי֙ מִסְפַּ֣ר הַיָּמִ֔ים by arguing for its significance (“does not seem accidental”, p103) on the basis of its appearing only twice in the TNK (1 Sam 27:7 and 2 Sam 2:11). But, again, this clause/formulation is rather standard. From a cognitive perspective, what is more important is the relative clause that follows: אֲשֶׁר֩ הָיָ֨ה דָוִ֥ד מֶ֛לֶךְ בְּחֶבְרֹ֖ון עַל־בֵּ֣ית יְהוּדָ֑ה שֶׁ֥בַע שָׁנִ֖ים וְשִׁשָּׁ֥ה חֳדָשִֽׁים. And it is that information that contrasts David’s time in Ziklag with that in Hebron. While I do not have a problem with וַֽיְהִי֙ מִסְפַּ֣ר הַיָּמִ֔ים being used as a comparative connector with 1 Sam 27:9, I do have a problem with the prioritization of word usage over information that is of greater cognitive value. I think a reordering of the argumentative chain would be helpful. This would help prevent the impression that you choose word connections naively.
### "Son of" [Saul] without a proper name following
בֶּן־שָׁא֗וּל (p254, 2 Sam 4:1): It is not initially clear to the reader whether you are discussing the lexical realization of the construct “son of Saul” or the grammatical phenomenon “son of [proper name]”. Since you detect only 2 occurrences, the reader can disambiguate your claim quickly. Interestingly, however, the grammatical construction “son of [proper name]” appears regularly (not just twice). This raises the methodological question why lexical connections dominate over grammatical constructions. Is this methodological reduction a way to create (artificial, I would say) significance? If grammatical constructions had been included in your “close reading” approach, you could have seen that the only cases in which the construction is used in the books of Samuel outside of 2 Sam 4:1 are in connection with David, “son of Isai” (1 Sam 20:27; 20:31; 22:7; 25:10). In all cases it is used negatively, by either Saul or Nabal, as they avoid putting the word “David” into their mouths… I would say that this is a more significant (because well-distributed) observation that could indicate authorial intent, compared to the lexical realization of the single grammatical construction with “Saul” (“son of Saul” vs. “son of Isai”). With this as background, one could disagree with your functional analysis of the phrase “son of Saul” on p259, where you argue that it indicates “his weakness […] merely the son of another”, and rather suggest that the author’s bias becomes visible here, as he (the author) treats Ish-Boshet the same way Saul treats David.
```
SonSaul = '''
book
  clause
    p1:phrase function=Subj|Objc
      w1:word lex=BN/ st=c nu=sg
      w2:word sp=nmpr
p1 =: w1
p1 := w2
w1 <: w2
'''
SonSaul = A.search(SonSaul)
A.table(SonSaul, start=1, end=48)
```
### "your Bones and your Flesh"
עַצְמְךָ֥ וּֽבְשָׂרְךָ֖ (p287-288, 2 Sam 5:1): You discuss the construction at length and argue for a connection with Gen 2. But should the discussion of potential idiomatic usage not have priority over the designation of a textual connection? Looking not only at previous usages of the construction but also at its appearances after 2 Sam 5 suggests an idiomatic usage of the phrase. If that is the case, Gen 2 could have used the phrase idiomatically as well. In that case, no significant connection with Gen 2 would be established by 2 Sam 5:1. Of course, one could also argue that the idiomatic use of the construction was initiated by the non-idiomatic appearance in Gen 2. Whatever position one takes, a discussion of idioms needs to be included in order to prevent the accusation of interpretative naivete.
```
BoneFlesh = '''
book
  clause
    w1:word lex=BFR/
    w2:word lex=<YM/
'''
BoneFlesh = A.search(BoneFlesh)
A.table(BoneFlesh, start=1, end=48, condensed=True)
```
### The "Amalekite man"
The Amalekite man (p80): You state that the Amalekite man was lying, but this is not clear on the basis of the textual account. One could match the account of the Amalekite man in 2 Sam 1 with the account of Saul’s death in 1 Sam 31 (see particularly the expression אַחֲרֵ֣י נִפְלֹ֑ו in 2 Sam 1:10). This, then, would have effects on the claim that David is represented as a righteous judge…
### grammatical codes trigger functions
“by parenthesis in v4” (p262): Why not mention the grammatical codes (e.g. fronting of the subject and interruption of the wayyiqtol chain) that trigger a parenthesis?
### With his House
```
WithHouse = '''
verse
  w1:word lex=>JC/
  <: w2:word lex=W
  <: w3:word lex=BJT/ prs_ps=p3|p2|p1
'''
WithHouse = A.search(WithHouse)
A.table(WithHouse, start=1, end=48, condensed=True)
```
### And they dwelt in the cities of...
```
SatinCities = '''
clause typ=Way0
  phrase function=Pred
    word lex=JCB[ nu=pl
  <: phrase
    word lex=B
    word lex=<JR/ nu=pl st=c
'''
SatinCities = A.search(SatinCities)
A.table(SatinCities, start=1, end=5, condensed=True)
```
### Misc
- “heard that he had died” (p259)
- Mahanaim (p98). Correlation – Causation. Death Metaphors…
- “Going up to Hebron”: Well, that is a normal reality, since Hebron lies on the crest of the Judean mountain ridge.
## Chris’ Method in Theory
1. You often use the term “close reading” (cf. p69), and at times it reads as a contrast to the historical-critical attitude towards the text. However, nowhere do you really define what “close reading” actually means in contrast to other methods. Many readers would expect, with the use of the term “close reading”, a methodological reflection on Ricoeur (think of the “surplus of meaning”) and Gadamer (think of “Wirkungsgeschichte”) in order to protect oneself from scholarly critique by relativizing and contextualizing one’s own claims and understanding of “close reading”. This is, however, missing, and thus the often-used declarative statements (e.g. “very intentional”, “obvious parallels”, “obvious contrast”, etc.) in the dissertation can be taken as reflecting a hermeneutical naivete. You could protect yourself by using more subjunctive language like “increasing the likelihood that the author is intentionally alluding to…” (p70).
2. You quite often use the phrase “intention” (or similar phrases) and often operate with the idea of “author’s intent” (cf. p1 “showing their intentional design”, p165 “intentionally placed”, “intentionally ambiguous”, …). This is, of course, a risky approach in the world of critical biblical scholarship. While I think we can and must uphold the concept as such, we cannot do so without defining what it means and how the subjective intentionality (of the author/editor) is objectively materialized in the text. I miss such a discussion and framework. Without defining that relationship, it becomes difficult to test an interpretative approach like the one you take. How can I measure, test, and when needed disagree with an interpretation if you do not provide a framework by which you allow yourself to be tested? Only pieces of art evade the controls of the scientific method. But you want to contribute to the academic realm. Thus a discussion of art and method is necessary.
3. While you are eager to detect intentionality in the text, it seems that you remain very much on the lexical and general thematic level (word usage [“verbal repetition”, p65], allusion, phrases, themes) but do not fully take into account matters of syntax (e.g. fronting) and text-syntax (narrative background, interruption of wayyiqtol chains, etc.), which guide interpretation and speech acts on an even more foundational level (think of cognitive linguistics, cognitive literary studies). Readers’ conclusions are fundamentally generated by the general cognitive process (“linguistic system”) rather than by rhetorical means (“literary design”).
4. If “close reading” means to observe closely all textual elements, why do we, for example, not find a discussion of the interesting use of Qal passive forms being rendered by the Masoretes as Niphal/Pual forms (cf. 2 Sam 3:2 [ketiv: וילדו; qere: וַיִּוָּלְד֧וּ; compare with 4QSama: ויולד – obviously representing a later stage in the development of the language]; 2 Sam 3:5 [יֻלְּד֥וּ]; obviously the Masoretes try to create a passive stem form that was used in their own times by changing an old Qal passive either into a Niphal or into a Pual)? Is this not part of a close reading, where we pick up materialized hints that suggest a very old age of the text? I understand that you want to stay away from matters of textual reconstruction, but such forms should not be neglected, as you can still protect yourself from historical speculation. One could (and should) argue that the dichotomy between “close reading” and “textual criticism” is false.
5. Since you explain that you will be working with the final text, you should have explained how you relate to the Masoretic traces within the final text (Qere – Ketiv). You ignore the final text’s division markers (Setumas, Petuchas, etc.), while one can make a good argument that these text-dividers are not just part of the final text but even pre-Masoretic. Are they excluded from your definition of the “final text”? For example, on p282-283 you discuss the different options for dividing the text into meaningful units. You disagree with those scholars who see the end of the section at 2 Sam 5:3 (Petucha) and 2 Sam 5:12 (Setuma) and rather suggest ending with 2 Sam 5:5. But by doing so, you override the Setumas and Petuchas that are placed exactly where several scholars see the demarcations of the textual units.
## In Sum...
In sum: What I miss methodologically is your definition of intentionality and its workings:
1. What is authorial intent (think of the work of speech act theory)?
2. How do textual intentionality and authorial intent relate, since they are not identical?
3. How is the author’s intentionality textually materialized and encoded? What of that materialization can be generalized and thus objectively/scientifically traced and measured, and what cannot be generalized and thus remains in the sphere of the “surplus of meaning” that can only be excavated subjectively, with – if one decides to do so – the necessary help of the Holy Spirit? This is an important discussion if our works are intended not only for the community of faith but also for the community of scholars.
4. There do not seem to be dead metaphors in your approach to the text. Every word, place name, and personal name is metaphorical (e.g. Mahanaim) and has meaning. The meaning of proper names can, however, only be strongly suggested after a discussion of dead metaphors. (Today, “Frankfurt”, like most city names, is a dead metaphor. Nobody thinks of the place where the Franks had their ford over the Main river. “Redbud” is a dead metaphor for 11 months of the year. Only in springtime does one realize why the road is called “Redbud”. Throughout the other 11 months people do not know – when asked – what the name of the road means. But the metaphor resurrects in spring…)
5. What does 2 Sam 2-5 tell us about the author? Would you say that he made his comparative links very visible, or would you say his links are very subtle, inviting the reader to investigate potential comparative links (think of the “lying Amalekite”)?
## And on a more general level...
1. How is the reader’s intentionality, when he/she decides matters of textual ambiguity and surplus of meaning, to be “benchmarked” in a community of faith (denomination, tradition, etc.) that seeks to embrace the idea of sola, tota, prima scriptura?
2. If we categorize your book as a product of the university (the humanities), how would a Christian university have to position the humanities on the spectrum between the arts and the sciences?
| github_jupyter |
## Regression (Classic Dataset = Boston House Price)
Problem: Build a model to predict the price of a home in the city of Boston, USA. To train our predictive model, we will use the Boston House Price dataset from the UCI repository.
Dataset Attributes:
1. CRIM: per capita crime rate by town
2. ZN: proportion of residential land zoned for lots over 25,000 sq.ft.
3. INDUS: proportion of non-retail business acres per town
4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
5. NOX: nitric oxides concentration (parts per 10 million)
6. RM: average number of rooms per dwelling
7. AGE: proportion of owner-occupied units built prior to 1940
8. DIS: weighted distances to five Boston employment centers
9. RAD: index of accessibility to radial highways
10. TAX: full-value property-tax rate per 10,000 USD
11. PTRATIO: pupil-teacher ratio by town
12. B: 1000(Bk - 0.63)^2, where Bk is the proportion of Black residents by town
13. LSTAT: % lower status of the population
14. MEDV: median value of owner-occupied homes in 1000s of USD
## Packages
```
from sklearn.preprocessing import StandardScaler
# Note: this notebook uses pre-0.20 scikit-learn APIs; sklearn.grid_search and
# sklearn.cross_validation were later merged into sklearn.model_selection.
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import mean_squared_error
from sklearn import cross_validation
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.ensemble import AdaBoostRegressor
#from xgboost import XGBClassifier
from pandas import read_csv
# pandas.tools.plotting moved to pandas.plotting in pandas 0.20+.
from pandas.tools.plotting import scatter_matrix
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## Loading
```
file = "boston-houses.csv"
columns = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
data = read_csv(file, delim_whitespace = True, names = columns)
```
## Summarizing
```
# Shape
print(data.shape)
# Types
print(data.dtypes)
# Head
print(data.head(10))
# Describe
print(data.describe())
# Correlation
print(data.corr(method = 'pearson'))
```
## Visualization
```
data.hist()
plt.show()
# Correlation Matrix
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(data.corr(), vmin = -1, vmax = 1, interpolation = 'none')
fig.colorbar(cax)
ticks = np.arange(0,14,1)
ax.set_xticks(ticks)
ax.set_yticks(ticks)
ax.set_xticklabels(columns)
ax.set_yticklabels(columns)
plt.show()
```
## Data Preparation
```
# Separation in Training and Test Data
data_values = data.values
# Independent variables
X = data_values[:,0:13]
#Dependent variable
Y = data_values[:,13]
# Training and Test Data Sets
X_train, X_test, Y_train, Y_test = cross_validation.train_test_split(X, Y,
                                                                     test_size = 0.20,
                                                                     random_state = 7)
```
## Algorithm Evaluation
```
pipelines = []
pipelines.append(('Scaled-LR', Pipeline([('Scaler', StandardScaler()),('LR', LinearRegression())])))
pipelines.append(('Scaled-LASSO', Pipeline([('Scaler', StandardScaler()),('LASSO', Lasso())])))
pipelines.append(('Scaled-EN', Pipeline([('Scaler', StandardScaler()),('EN', ElasticNet())])))
pipelines.append(('Scaled-KNN', Pipeline([('Scaler', StandardScaler()),('KNN', KNeighborsRegressor())])))
pipelines.append(('Scaled-CART', Pipeline([('Scaler', StandardScaler()),('CART', DecisionTreeRegressor())])))
pipelines.append(('Scaled-SVR', Pipeline([('Scaler', StandardScaler()),('SVR', SVR())])))
results = []
names = []
for name, model in pipelines:
    kfold = cross_validation.KFold(n = len(X_train), n_folds = 10, random_state = 7)
    cross_val_result = cross_validation.cross_val_score(model,
                                                        X_train,
                                                        Y_train,
                                                        cv = kfold,
                                                        scoring = 'neg_mean_squared_error')
    results.append(cross_val_result)
    names.append(name)
    text = "%s: %f (%f)" % (name, cross_val_result.mean(), cross_val_result.std())
    print(text)
fig = plt.figure()
fig.suptitle('Comparing the algorithms')
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()
```
## Tuning KNN (best result)
```
# Scale
scaler = StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
#k
k_values = np.array([1,3,5,7,9,11,13,15,17,19,21])
val_grid = dict(n_neighbors = k_values)
#model
model = KNeighborsRegressor()
#K
kfold = cross_validation.KFold(n = len(X_train), n_folds = 10, random_state = 7)
# Tuning
grid = GridSearchCV(estimator = model, param_grid = val_grid, scoring = 'neg_mean_squared_error', cv = kfold)
grid_result = grid.fit(rescaledX, Y_train)
print("Best MSE: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
for params, mean_score, scores in grid_result.grid_scores_:
    print("%f (%f) with: %r" % (scores.mean(), scores.std(), params))
```
## Ensemble methods
```
ensembles = []
ensembles.append(('Scaled-AB', Pipeline([('Scaler', StandardScaler()),('AB', AdaBoostRegressor())])))
ensembles.append(('Scaled-GBM', Pipeline([('Scaler', StandardScaler()),('GBM', GradientBoostingRegressor())])))
ensembles.append(('Scaled-RF', Pipeline([('Scaler', StandardScaler()),('RF', RandomForestRegressor())])))
ensembles.append(('Scaled-ET', Pipeline([('Scaler', StandardScaler()),('ET', ExtraTreesRegressor())])))
results = []
names = []
for name, model in ensembles:
    kfold = cross_validation.KFold(n = len(X_train), n_folds = 10, random_state = 7)
    cross_val_result = cross_validation.cross_val_score(model,
                                                        X_train,
                                                        Y_train,
                                                        cv = kfold, scoring = 'neg_mean_squared_error')
    results.append(cross_val_result)
    names.append(name)
    text = "%s: %f (%f)" % (name, cross_val_result.mean(), cross_val_result.std())
    print(text)
fig = plt.figure()
fig.suptitle('Comparing the algorithms')
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()
```
## Tuning GBM (Gradient Boosting Method)
```
# Scale
scaler = StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
# Number of estimators (trees).
val_grid = dict(n_estimators = np.array([50,100,150,200,250,300,350,400]))
# Model
model = GradientBoostingRegressor(random_state = 7)
# K
kfold = cross_validation.KFold(n = len(X_train), n_folds = 10, random_state = 7)
# Tuning
grid = GridSearchCV(estimator = model, param_grid = val_grid, cv = kfold, scoring = 'neg_mean_squared_error')
grid_result = grid.fit(rescaledX, Y_train)
print("Best MSE: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
for params, mean_score, scores in grid_result.grid_scores_:
    print("%f (%f) with: %r" % (scores.mean(), scores.std(), params))
```
## Finishing the Predictive Model
```
scaler = StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
model = GradientBoostingRegressor(random_state = 7, n_estimators = 400)
model.fit(rescaledX, Y_train)
# Applying the template to test data
rescaledValidationX = scaler.transform(X_test)
prediction = model.predict(rescaledValidationX)
print(mean_squared_error(Y_test, prediction))
```
# End
---
Resources Used
- wget.download('https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/_downloads/da4babe668a8afb093cc7776d7e630f3/generate_tfrecord.py')
- Setup https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html
# 0. Setup Paths
```
WORKSPACE_PATH = 'Tensorflow/workspace'
SCRIPTS_PATH = 'Tensorflow/scripts'
APIMODEL_PATH = 'Tensorflow/models'
ANNOTATION_PATH = WORKSPACE_PATH+'/annotations'
IMAGE_PATH = WORKSPACE_PATH+'/images'
MODEL_PATH = WORKSPACE_PATH+'/models'
PRETRAINED_MODEL_PATH = WORKSPACE_PATH+'/pre-trained-models'
CONFIG_PATH = MODEL_PATH+'/my_ssd_mobnet/pipeline.config'
CHECKPOINT_PATH = MODEL_PATH+'/my_ssd_mobnet/'
```
# 1. Create Label Map
```
labels = [{'name':'Mask', 'id':1}, {'name':'No mask', 'id':2}]
with open(ANNOTATION_PATH + '/label_map.pbtxt', 'w') as f:
    for label in labels:
        f.write('item { \n')
        f.write('\tname:\'{}\'\n'.format(label['name']))
        f.write('\tid:{}\n'.format(label['id']))
        f.write('}\n')
```
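For reference, here is the label-map writer as a self-contained snippet that targets a temporary directory, so it can be run and inspected anywhere; the labels match the cell above, but the path is illustrative only:

```python
# Self-contained version of the pbtxt label-map writer above,
# writing into a throwaway temp directory and reading the result back.
import os
import tempfile

labels = [{'name': 'Mask', 'id': 1}, {'name': 'No mask', 'id': 2}]
path = os.path.join(tempfile.mkdtemp(), 'label_map.pbtxt')
with open(path, 'w') as f:
    for label in labels:
        f.write('item { \n')
        f.write("\tname:'%s'\n" % label['name'])
        f.write('\tid:%d\n' % label['id'])
        f.write('}\n')

with open(path) as f:
    content = f.read()
print(content)
```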
# 2. Create TF records
```
!python {SCRIPTS_PATH + '/generate_tfrecord.py'} -x {IMAGE_PATH + '/train'} -l {ANNOTATION_PATH + '/label_map.pbtxt'} -o {ANNOTATION_PATH + '/train.record'}
!python {SCRIPTS_PATH + '/generate_tfrecord.py'} -x {IMAGE_PATH + '/test'} -l {ANNOTATION_PATH + '/label_map.pbtxt'} -o {ANNOTATION_PATH + '/test.record'}
```
# 3. Download TF Models Pretrained Models from Tensorflow Model Zoo
```
!cd Tensorflow && git clone https://github.com/tensorflow/models
#wget.download('http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz')
#!mv ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz {PRETRAINED_MODEL_PATH}
#!cd {PRETRAINED_MODEL_PATH} && tar -zxvf ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz
```
# 4. Copy Model Config to Training Folder
```
CUSTOM_MODEL_NAME = 'my_ssd_mobnet'
!mkdir {MODEL_PATH+'/'+CUSTOM_MODEL_NAME}
!cp {PRETRAINED_MODEL_PATH+'/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/pipeline.config'} {MODEL_PATH+'/'+CUSTOM_MODEL_NAME}
```
# 5. Update Config For Transfer Learning
```
import tensorflow as tf
from object_detection.utils import config_util
from object_detection.protos import pipeline_pb2
from google.protobuf import text_format
CONFIG_PATH = MODEL_PATH+'/'+CUSTOM_MODEL_NAME+'/pipeline.config'
config = config_util.get_configs_from_pipeline_file(CONFIG_PATH)
config
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(CONFIG_PATH, "r") as f:
    proto_str = f.read()
    text_format.Merge(proto_str, pipeline_config)
pipeline_config.model.ssd.num_classes = 2
pipeline_config.train_config.batch_size = 4
pipeline_config.train_config.fine_tune_checkpoint = PRETRAINED_MODEL_PATH+'/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/checkpoint/ckpt-0'
pipeline_config.train_config.fine_tune_checkpoint_type = "detection"
pipeline_config.train_input_reader.label_map_path= ANNOTATION_PATH + '/label_map.pbtxt'
pipeline_config.train_input_reader.tf_record_input_reader.input_path[:] = [ANNOTATION_PATH + '/train.record']
pipeline_config.eval_input_reader[0].label_map_path = ANNOTATION_PATH + '/label_map.pbtxt'
pipeline_config.eval_input_reader[0].tf_record_input_reader.input_path[:] = [ANNOTATION_PATH + '/test.record']
config_text = text_format.MessageToString(pipeline_config)
with tf.io.gfile.GFile(CONFIG_PATH, "wb") as f:
    f.write(config_text)
```
# 6. Train the model
```
print("""python {}/research/object_detection/model_main_tf2.py --model_dir={}/{} --pipeline_config_path={}/{}/pipeline.config --num_train_steps=5000""".format(APIMODEL_PATH, MODEL_PATH,CUSTOM_MODEL_NAME,MODEL_PATH,CUSTOM_MODEL_NAME))
```
# 7. Load Train Model From Checkpoint
```
import os
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.builders import model_builder
# Load pipeline config and build a detection model
configs = config_util.get_configs_from_pipeline_file(CONFIG_PATH)
detection_model = model_builder.build(model_config=configs['model'], is_training=False)
# Restore checkpoint
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(os.path.join(CHECKPOINT_PATH, 'ckpt-6')).expect_partial()
@tf.function
def detect_fn(image):
    image, shapes = detection_model.preprocess(image)
    prediction_dict = detection_model.predict(image, shapes)
    detections = detection_model.postprocess(prediction_dict, shapes)
    return detections
```
# 8. Detect in Real-Time
```
import cv2
import numpy as np
category_index = label_map_util.create_category_index_from_labelmap(ANNOTATION_PATH+'/label_map.pbtxt')
# Setup capture
cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
while True:
    ret, frame = cap.read()
    image_np = np.array(frame)
    input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
    detections = detect_fn(input_tensor)
    num_detections = int(detections.pop('num_detections'))
    detections = {key: value[0, :num_detections].numpy()
                  for key, value in detections.items()}
    detections['num_detections'] = num_detections
    # detection_classes should be ints.
    detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
    label_id_offset = 1
    image_np_with_detections = image_np.copy()
    viz_utils.visualize_boxes_and_labels_on_image_array(
        image_np_with_detections,
        detections['detection_boxes'],
        detections['detection_classes']+label_id_offset,
        detections['detection_scores'],
        category_index,
        use_normalized_coordinates=True,
        max_boxes_to_draw=5,
        min_score_thresh=.5,
        agnostic_mode=False)
    cv2.imshow('object detection', cv2.resize(image_np_with_detections, (800, 600)))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        cap.release()
        break
```
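The visualizer is called with `use_normalized_coordinates=True`, meaning each detection box is `[ymin, xmin, ymax, xmax]` in the 0-1 range relative to the frame. A minimal sketch of mapping one such box back to pixel coordinates for a given frame size:

```python
# Convert one normalized [ymin, xmin, ymax, xmax] box to integer
# pixel coordinates for a frame of the given height and width.
def to_pixels(box, height, width):
    ymin, xmin, ymax, xmax = box
    return (int(ymin * height), int(xmin * width),
            int(ymax * height), int(xmax * width))

print(to_pixels([0.25, 0.5, 0.75, 1.0], height=480, width=640))  # → (120, 320, 360, 640)
```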
---
```
import pandas as pd
bx_ratings = pd.read_csv('BX-Book-Ratings.csv')
bx_books = pd.read_csv('BX-Books.csv')
bx_users = pd.read_csv('BX-Users.csv')
bx_ratings.head()
print len(bx_ratings), len(bx_books), len(bx_users)
print len(bx_ratings)
#bx_ratings = bx_ratings[ bx_ratings['Book-Rating'] != 0]
print len(bx_ratings)
map_books = {}
for book in bx_ratings['ISBN']:
    if( map_books.get(book) == None):
        map_books[book] = 1
    else:
        map_books[book] += 1
count = 0
for key in map_books.keys():
    if( map_books[key] >= 20):
        count += 1
print count
map_users = {}
for book in bx_ratings['User-ID']:
    if( map_users.get(book) == None):
        map_users[book] = 1
    else:
        map_users[book] += 1
count = 0
for key in map_users.keys():
    if( map_users[key] >= 5):
        count += 1
print count
for key in map_books.keys():
    if( map_books[key] < 20 ):
        map_books.pop(key)
print len(map_books)
bx_ratings = bx_ratings[ bx_ratings.apply( lambda x: x['ISBN'] in map_books , axis = 1) ]
len(bx_ratings)
map_users = {}
for book in bx_ratings['User-ID']:
    if( map_users.get(book) == None):
        map_users[book] = 1
    else:
        map_users[book] += 1
count = 0
for key in map_users.keys():
    if( map_users[key] >= 5):
        count += 1
print count
for key in map_users.keys():
    if( map_users[key] < 5 ):
        map_users.pop(key)
print len(map_users)
bx_ratings = bx_ratings[ bx_ratings.apply( lambda x: x['User-ID'] in map_users, axis = 1) ]
print len(bx_ratings), len(map_users), len(map_books)
average_user = {}
bx_ratings.head()
i = 0
for row in bx_ratings.iterrows():
    #print row[1][0], row[1][1] , row[1][2]
    if( row[1][2] > 5 ):
        if( average_user.get(row[1][0]) == None ):
            average_user[row[1][0]] = [row[1][2], 1]
        else:
            average_user[row[1][0]][0] += row[1][2]
            average_user[row[1][0]][1] += 1
final_average = {}
i = 0
for item in average_user.iterkeys():
    final_average[item] = average_user[item][0]*1.0 / average_user[item][1]
    if( i <= 10):
        print type(item), final_average[item]
    i += 1
cnt1 = 0
cnt2 = 0
for i,row in bx_ratings.iterrows():
    if( row['Book-Rating'] == 0 ):
        if( final_average.get(row['User-ID']) != None):
            bx_ratings.set_value(i, 'Book-Rating', int(final_average[ row['User-ID'] ]))
            cnt1 += 1
    cnt2 += 1
print cnt1
cnt = 0
for row in bx_ratings.iterrows():
    if( row[1][2] == 0):
        cnt += 1
print cnt
bx_ratings = bx_ratings[ bx_ratings.apply( lambda x: x['Book-Rating'] != 0 , axis = 1) ]
cnt = 0
for row in bx_ratings.iterrows():
    if( row[1][2] == 0):
        cnt += 1
print cnt
bx_ratings.to_csv("newData/df_ratings.csv", sep=',',index=False)
user_list = {}
book_list = {}
for i,item in bx_ratings.iterrows():
    user_list[item['User-ID']] = 1
    book_list[item['ISBN'] ] = 1
bx_books.head()
bx_books = bx_books[ bx_books.apply( lambda x: book_list.get( x['ISBN']) != None , axis = 1) ]
bx_users = bx_users[ bx_users.apply( lambda x: user_list.get( x['User-ID']) != None ,axis =1 )]
bx_books.head()
bx_users.head()
print len(bx_ratings), len(bx_books), len(bx_users)
bx_users = bx_users.drop( [ 'Unnamed: 3' , 'Unnamed: 4' , 'Unnamed: 5' ] , axis = 1)
bx_users.head()
for i,row in bx_users.iterrows():
    tmp = row['Location'].split(',')[-1]
    bx_users.set_value(i,'Location',tmp)
bx_users.head()
country_cnt = {}
country_average = {}
for i,row in bx_users.iterrows():
    if( str(row['Age']) == "nan"):
        continue
    if( country_average.get(row['Location'] ) != None ):
        country_average[ row['Location'] ] += int(row['Age'])
        country_cnt[ row['Location'] ] += 1
    else:
        country_average[row['Location'] ] = int(row['Age'])
        country_cnt[row['Location'] ] = 1
for keys in country_average.iterkeys():
    country_average[keys] /= country_cnt[keys]
for i,row in bx_users.iterrows():
    if( str(row['Age']) == "nan" ):
        if( country_average.get( row['Location']) == None ):
            bx_users.set_value(i, 'Age' , 25)
        else:
            bx_users.set_value(i, 'Age' , int(country_average[ row['Location'] ] ) )
bx_users.head()
for i,item in bx_users.iterrows():
    tmp = int( item['Age'])
    bx_users.set_value(i, 'Age', tmp)
bx_users['Age'].value_counts()
set( bx_users['Age'] )
for i,item in bx_users.iterrows():
    if( item['Location'] == "" ):
        bx_users.set_value(i, 'Location', "Global")
bx_users.to_csv("newData/df_userss.csv", sep=',',index=False)
bx_books.to_csv("newData/df_bookss.csv" , sep = ',' , index = False)
```
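The manual counting loops above (build a dict, increment per key, then filter on a threshold) are exactly what `collections.Counter` from the standard library does. A sketch with toy data and the same `>=` threshold filter:

```python
# Counting occurrences and keeping only keys above a threshold,
# the Counter equivalent of the map_books / map_users loops above.
from collections import Counter

isbns = ['a', 'a', 'b', 'a', 'c', 'b']     # toy stand-in for bx_ratings['ISBN']
counts = Counter(isbns)
popular = {isbn: n for isbn, n in counts.items() if n >= 2}
print(popular)  # → {'a': 3, 'b': 2}
```

The same one-liner replaces the user-count loops by passing `bx_ratings['User-ID']` to `Counter` instead.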
---
# <font color=red>Submission of VAT return to Altinn3</font>
Test-script to file a VAT return to the app on Altinn3, which is running on the tt02 environment on Altinn Studio.
## <font color=red>Check that required libraries are installed</font>
```
try:
    import requests
    import base64
    import xmltodict
    import xml.dom.minidom
    import polling
    from Steg import load_xml_files, get_altinntoken, get_instance, upload_vatreturnsubmission, upload_vatreturn
    from Steg import upload_attachment, log_in_idporten, update_process, create_instance, get_feedback
    from Steg.SubmissionServices import Environment, LoginMethod, App
    from Steg.validation import validate_vat_return, validate_example_file
except ImportError as e:
    try:
        print("Missing dependencies, installing with pip")
        !pip install python-jose[cryptography] cryptography
        !pip install xmltodict
        !pip install requests
        !pip install polling
        import xmltodict
        import requests
        import polling
        from Steg import load_xml_files, get_altinntoken, get_instance, upload_vatreturnsubmission, upload_vatreturn
        from Steg import upload_attachment, log_in_idporten, update_process, create_instance, get_feedback
        from Steg.SubmissionServices import Environment, LoginMethod, App
        from Steg.validation import validate_vat_return, validate_example_file
    except ImportError as err:
        print("Missing dependencies, installing with pip")
        !python3 -m pip install 'python-jose[cryptography]' cryptography
        !python3 -m pip install xmltodict
        !python3 -m pip install requests
        !python3 -m pip install polling
        import xmltodict
        import requests
        import polling
        from Steg import load_xml_files, get_altinntoken, get_instance, upload_vatreturnsubmission, upload_vatreturn
        from Steg import upload_attachment, log_in_idporten, update_process, create_instance, get_feedback
        from Steg.SubmissionServices import Environment, LoginMethod, App
        from Steg.validation import validate_vat_return, validate_example_file
```
## <font color=red>Define variables</font>
```
# No need to touch these
environment = Environment.tt02 # tt02
login_method = LoginMethod.idporten # idporten
domain = environment.value # Domain to the app on Altinn Studio
app = App.ETM2.value # Which app repository that is running
# File name for VAT Return Submission ("envelope") and VAT Return that is going to be uploaded.
# Located in ./eksempler/konvolutt/ and ./eksempler/melding/
VatReturnSubmission_filename = "mvakonvolutt1.xml"
VatReturn_filename = "mvakode3.xml"
# Insert organization number you wish to send in for.
# Make note that the person that is going to file needs to have the appropriate roles and rights.
org_number = "123456789"
```
## <font color=red>Define VAT Return and VAT Return Submission</font>
```
vat_return_xml, envelope_xml = load_xml_files.get(VatReturnSubmission_filename, VatReturn_filename)
```
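The `Steg` helpers are project-specific, so as an illustration only, here is how an XML payload like the VAT return could be inspected with the standard library before upload. The element names in the sample are made up for the sketch, not the real mvamelding schema:

```python
# Parse an XML string and pull out a single element's text with
# the standard library's ElementTree (element names are hypothetical).
import xml.etree.ElementTree as ET

sample = ("<mvaMelding>"
          "<beregnetAvgift>1000</beregnetAvgift>"
          "</mvaMelding>")
root = ET.fromstring(sample)
value = root.findtext('.//beregnetAvgift')
print(root.tag, value)
```

In practice the `validate_vat_return` step below plays this role against the actual schema.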
## <font color=red>Generate ID-porten token</font>
The token is valid for 300 seconds, re-run this part if you haven't reached the altinntoken in time.
```
header = log_in_idporten.get_idtoken()
```
## <font color=red>Validate message (optional)</font>
```
validation = validate_vat_return(dict(header), xml=vat_return_xml)
```
## <font color=red>Get AltinnToken</font>
```
altinn_token = get_altinntoken.get(header)
```
## <font color=red>Create new instance</font>
```
instance = create_instance.create(domain, altinn_token, app, org_number)
partyId = instance['instanceOwner']['partyId']
instanceGuid = instance['data'][0]['instanceGuid']
dataId = instance['data'][0]['id']
instanceUrl = domain + instance['appId'] + "/instances/" + partyId + "/" + instanceGuid
VATReturnSubmissionURL = instanceUrl + "/data/" + dataId
VATReturnURL = instanceUrl + "/data?dataType=mvamelding"
VATREturnAttachmentURL = instanceUrl + "/data?dataType=binaerVedlegg"
```
## <font color=red>Upload VAT Return Submission</font>
```
upload_vatreturnsubmission.upload(VATReturnSubmissionURL, envelope_xml, altinn_token)
```
## <font color=red>Upload VAT Return</font>
```
upload_vatreturn.upload(VATReturnURL, vat_return_xml, altinn_token)
```
## <font color=red>Upload Attachments (optional)</font>
#### .xml attachment
```
xml_attachment = upload_attachment.upload(VATREturnAttachmentURL, "mva-vedlegg.xml", "text/xml", altinn_token)
```
#### .pdf attachment
```
pdf_attachment = upload_attachment.upload(VATREturnAttachmentURL, "pdf-vedlegg.pdf", "application/pdf", altinn_token)
```
#### .png attachment
```
png_attachment = upload_attachment.upload(VATREturnAttachmentURL, "png-vedlegg.png", "image/png", altinn_token)
```
## <font color=red>File VAT Return Submission</font>
```
FillingProcess = update_process.next_step(instanceUrl, altinn_token)
```
## <font color=red>Confirm VAT Return Submission</font>
```
ConfirmationProcess = update_process.next_step(instanceUrl, altinn_token)
```
## <font color=red>Feedback VAT Return Submission</font>
```
feedback = get_feedback.get_feedback_sync(partyId + "/" + instanceGuid, altinn_token, domain, app)
get_feedback.save_to_disk(feedback.json(), altinn_token,
                          "./eksempler/tilbakemelding/" +
                          partyId + "/" + instanceGuid + "/")
```
---
<h1 style="font-size:40px;"><center>Exercise II:<br> Convolutional Neural Networks
</center></h1>
## Short summary
In this exercise, we will design a CNN to classify RGB images. This folder has **three files**:
- **configClassifier.py:** this involves definitions of all parameters and data paths
- **utilsClassifier.py:** includes utility functions required to grab and visualize data
- **runClassifier.ipynb:** contains the script to design, train and test the network
Make sure that before running this script, you created an environment and **installed all required libraries** such
as keras.
## The data
There is also a subfolder called **data** which contains the training, validation, and testing data; each set has RGB input images together with the corresponding ground-truth images.
## The exercises
As for the previous lab all exercises are found below.
## The different 'Cells'
This notebook contains several cells with python code, together with the markdown cells (like this one) with only text. Each of the cells with python code has a "header" markdown cell with information about the code. The table below provides a short overview of the code cells.
| # | CellName | CellType | Comment |
| :--- | :-------- | :-------- | :------- |
| 1 | Init | Needed | Sets up the environment|
| 2 | Ex | Exercise 1| A class definition of a CNN model |
| 3 | Loading | Needed | Loading parameters and initializing the model |
| 4 | Stats | Needed | Show data distribution |
| 5 | Ex | Exercise 2 | Data augmentation |
| 6 | Data | Needed | Generating the data batches |
| 7 | Debug | Needed | Debugging the data |
| 8 | Device | Needed | Selecting CPU/GPU |
| 9 | Optimization | Exercise 2 | Selecting an optimization method |
| 10 | Training | Exercise 1-2-3 | Training the model |
| 11 | Testing | Exercise 1-2-3| Testing the method |
| 12 | Confusion matrix | Information | Plotting the confusion matrix|
| 13 | Plotting | Information | View some of test samples |
| 14 | Plotting | Information | View layer activations |
In order for you to start with the exercise you need to run all cells. It is important that you do this in the correct order, starting from the top and work you way down the cells. Later when you have started to work with the notebook it may be easier to use the command "Run All" found in the "Cell" dropdown menu.
## Writing the report
First the report should be written within this notebook. We have prepared the last cell in this notebook for you where you should write the report. The report should contain 4 parts:
* Name:
* Introduction: A **few** sentences where you give a small introduction of what you have done in the lab.
* Answers to questions: For each of the questions provide an answer. It can be short answers or a longer ones depending on the nature of the questions, but try to be effective in your writing.
* Conclusion: Summarize your findings in a few sentences.
1) We first start with importing all required modules
```
import os
from configClassifier import *
from utilsClassifier import *
cfg = flying_objects_config()
if cfg.GPU >= 0:
    print("creating network model using gpu " + str(cfg.GPU))
    os.environ['CUDA_VISIBLE_DEVICES'] = str(cfg.GPU)
elif cfg.GPU >= -1:
    print("creating network model using cpu ")
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"  # see issue #152
    os.environ["CUDA_VISIBLE_DEVICES"] = ""
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
import tensorflow as tf
from tensorflow import keras
from sklearn.metrics import confusion_matrix
import seaborn as sns
from datetime import datetime
import pprint
# import the necessary packages
from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dropout
from keras.layers.core import Dense
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import Adam
from keras.models import Model
from keras.callbacks import TensorBoard
```
2) Here, we have the network model class definition. In this class, the most important function is the one called **create_model()**. As defined in the exercises section, your task is to update the network architecture defined in this function such that the network will return the highest accuracy for the given training, validation, and testing data.
```
class ClassifierDNNModel():
    def __init__(self, num_classes=10, batch_size=32, inputShape=(64,64,3), dropout_prob=0.25):
        self.num_classes = num_classes
        self.batch_size = batch_size
        self.inputShape = inputShape
        self.dropout_prob = dropout_prob

    def create_model(self):
        model = Sequential()
        chanDim = -1
        # CONV => RELU => POOL
        model.add(Conv2D(25, (3, 3), padding="same",
                         input_shape=self.inputShape))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(3, 3)))
        # CONV => RELU => POOL
        model.add(Conv2D(50, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        # first (and only) set of FC => RELU layers
        model.add(Flatten())
        model.add(Dense(2048))
        model.add(Activation("relu"))
        # softmax classifier
        model.add(Dense(self.num_classes))
        model.add(Activation("softmax"))
        # return the constructed network architecture
        return model

    def display_activation(self, activations, col_size, row_size, act_index):
        activation = activations[act_index]
        activation_index = 0
        fig, ax = plt.subplots(row_size, col_size, figsize=(row_size * 2.5, col_size * 1.5))
        fig.suptitle("activations in layer " + str(act_index+1))
        for row in range(0, row_size):
            for col in range(0, col_size):
                ax[row][col].imshow(activation[0, :, :, activation_index], cmap='gray')
                activation_index += 1
        plt.show()
```
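A quick, dependency-free sanity check of the spatial sizes this architecture produces: `"same"`-padded convolutions keep height and width unchanged, and each `MaxPooling2D` with pool size `p` divides them by `p` (floor division). For the 64x64 inputs assumed here, the feature maps shrink 64 → 21 → 10 before flattening:

```python
# Track the spatial size through 'same' convolutions (no change)
# and max-pooling layers (floor division by the pool size).
def output_size(size, pools):
    for p in pools:
        size = size // p
    return size

# 64x64 input -> MaxPooling2D(3,3) -> MaxPooling2D(2,2)
print(output_size(64, [3, 2]))  # → 10
```

So the `Flatten()` layer above sees a 10x10 map with 50 channels, i.e. 5000 features feeding the `Dense(2048)` layer.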
3) We import the network **hyperparameters** and build a simple cnn by calling the class introduced in the previous step. Please note that to change the hyperparameters, you just need to change the values in the file called **configClassifier.py.**
```
image_shape = (cfg.IMAGE_HEIGHT, cfg.IMAGE_WIDTH, cfg.IMAGE_CHANNEL)
modelObj = ClassifierDNNModel(num_classes=len(cfg.CLASSES), batch_size=cfg.BATCH_SIZE, inputShape=image_shape, dropout_prob=cfg.DROPOUT_PROB)
model = modelObj.create_model()
print(cfg)
```
4) We call the utility function **show_statistics** to display the data distribution. This is just for debugging purpose.
```
#### show how the data looks like
show_statistics(cfg.training_data_dir, fineGrained=cfg.fineGrained, title=" Training Data Statistics ")
show_statistics(cfg.validation_data_dir, fineGrained=cfg.fineGrained, title=" Validation Data Statistics ")
show_statistics(cfg.testing_data_dir, fineGrained=cfg.fineGrained, title=" Testing Data Statistics ")
```
5) We **augment** the data by flipping the image horizontally or vertically. As described in the exercises section below, one of your tasks is to update this data augmentation part in order to increase the network efficiency.
```
# setup data
if cfg.DATA_AUGMENTATION:
    print("Data is being augmented!")
    aug_parameters = ImageDataGenerator(
        # zoom_range=0.2,         # randomly zoom into images
        # rotation_range=10,      # randomly rotate images in the range (degrees, 0 to 180)
        # width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
        # height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
        horizontal_flip=True,  # randomly flip images
        vertical_flip=True)    # randomly flip images
else:
    print("Data will not be augmented!")
    aug_parameters = ImageDataGenerator(
        horizontal_flip=False,  # randomly flip images
        vertical_flip=False)    # randomly flip images
```
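To see concretely what `horizontal_flip=True` does to a single image, here is the operation sketched with NumPy alone: the columns are mirrored left-to-right, which is label-preserving for shape classes like circles and triangles:

```python
# Horizontal flip = mirror the columns of the image array.
import numpy as np

img = np.arange(6).reshape(2, 3)   # toy 2x3 "image"
flipped = np.fliplr(img)           # what horizontal_flip applies per sample
print(flipped.tolist())            # → [[2, 1, 0], [5, 4, 3]]
```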
6) We now create batch generators to get small batches from the entire dataset. There is no need to change these functions as they already return **normalized inputs as batches**.
```
nbr_train_data = get_dataset_size(cfg.training_data_dir)
nbr_valid_data = get_dataset_size(cfg.validation_data_dir)
nbr_test_data = get_dataset_size(cfg.testing_data_dir)
train_batch_generator = generate_classification_batches(cfg.training_data_dir, image_shape, cfg.BATCH_SIZE, cfg.CLASSES, fineGrained=cfg.fineGrained)
valid_batch_generator = generate_classification_batches(cfg.validation_data_dir, image_shape, cfg.BATCH_SIZE, cfg.CLASSES, fineGrained=cfg.fineGrained)
test_batch_generator = generate_classification_batches(cfg.testing_data_dir, image_shape, cfg.BATCH_SIZE, cfg.CLASSES, fineGrained=cfg.fineGrained)
aug_train_batch_generator = generate_augmented_classification_batches(train_batch_generator, aug_parameters)
aug_valid_batch_generator = generate_augmented_classification_batches(valid_batch_generator, aug_parameters)
print("Data batch generators are created!")
```
7) We can visualize how the data looks like for debugging purpose
```
if cfg.DEBUG_MODE:
    t_x, t_y = next(train_batch_generator)
    print('x', t_x.shape, t_x.dtype, t_x.min(), t_x.max())
    print('y', t_y.shape, t_y.dtype, t_y.min(), t_y.max())
    a_x, a_y = next(aug_train_batch_generator)
    print('x', a_x.shape, a_x.dtype, a_x.min(), a_x.max())
    print('y', a_y.shape, a_y.dtype, a_y.min(), a_y.max())
```
8) We select which processing unit to use, either CPU or GPU. In case of having multiple GPUs, we can still select which GPU to use.
9) We set the training configuration. As a part of the exercises, this function can also be updated to test different **optimization methods** such as **SGD, ADAM,** etc.
```
opt = tf.optimizers.Adam(cfg.LEARNING_RATE)
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=opt, metrics=['accuracy'])
model.summary()
```
10) We can now feed the training and validation data to the network. This will train the network for **some epochs**. Note that the epoch number is also predefined in the file called **configClassifier.py.**
```
history = model.fit(aug_train_batch_generator,
                    epochs=cfg.NUM_EPOCHS,
                    verbose=1,
                    steps_per_epoch=(nbr_train_data//cfg.BATCH_SIZE),   # total batch number
                    validation_steps=(nbr_valid_data//cfg.BATCH_SIZE),  # total batch number
                    validation_data=aug_valid_batch_generator,
                    callbacks=[TensorBoard(log_dir="logs/{}".format(datetime.now().strftime("%Y%m%d-%H%M%S")),
                                           write_graph=True, write_images=False, histogram_freq=0)])
```
11) We can test the model with the test data
```
# testing model
test_result = model.evaluate(test_batch_generator,
                             steps=(nbr_test_data//cfg.BATCH_SIZE))
test_loss = round(test_result[0], 4)
test_acc = round(test_result[1], 4)
print("Test Loss: ", str(test_loss), "Test Accuracy: ", str(test_acc))
```
12) We can plot a confusion matrix showing **the class-wise accuracies**
```
true_classes = []
pred_classes = []
for i in range(0, nbr_test_data//cfg.BATCH_SIZE + 1):
    t_data, t_label = next(test_batch_generator)
    pred_labels = model.predict(t_data, batch_size=cfg.BATCH_SIZE)
    pred_classes.extend(np.argmax(pred_labels, axis=1))
    true_classes.extend(np.argmax(t_label, axis=1))
#print (" true classes: " + str(len(true_classes)) + " pred classes: " + str(len(pred_classes)))
confusion_mtx = confusion_matrix(np.array(true_classes), np.array(pred_classes))
plt.figure(figsize=(10, 8))
plt.title("normalized confusion matrix")
norm_confusion_mtx = 100* confusion_mtx.astype('float') / confusion_mtx.sum(axis=1)[:, np.newaxis]
sns.heatmap(norm_confusion_mtx, annot=True, fmt="f")
plt.show()
```
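The row normalization used for the heatmap, shown in isolation: each row of the confusion matrix is divided by that row's total, so every row of the result sums to 100 and each cell reads as a class-wise percentage:

```python
# Row-normalize a confusion matrix into percentages, as done above
# before passing it to the heatmap.
import numpy as np

confusion_mtx = np.array([[8, 2],
                          [1, 9]])  # toy 2-class confusion matrix
norm = 100 * confusion_mtx.astype('float') / confusion_mtx.sum(axis=1)[:, np.newaxis]
print(norm.tolist())  # → [[80.0, 20.0], [10.0, 90.0]]
```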
13) We can also show sample classification results
```
t_data, t_label = next(test_batch_generator)
print(t_data[0].shape)
plt.imshow(t_data[0])
pred_labels = model.predict(t_data, batch_size=cfg.BATCH_SIZE)
plot_sample_classification_results(t_data, t_label, cfg.CLASSES, pred_labels, test_acc)
```
14) Finally, we can visualize CNN layer activations for a given sample input
```
# Visualize CNN Layers
t_data, t_label = next(test_batch_generator)
layer_outputs = [layer.output for layer in model.layers]
activation_model = Model(inputs=model.input, outputs=layer_outputs)
activations = activation_model.predict(t_data[10].reshape(1, cfg.IMAGE_HEIGHT, cfg.IMAGE_WIDTH, cfg.IMAGE_CHANNEL))
plt.imshow(t_data[10])
plt.title("sample input for the activation test")
modelObj.display_activation(activations, 4, 4, 1) # Displaying output of layer 2
modelObj.display_activation(activations, 4, 4, 3) # Displaying output of layer 4
modelObj.display_activation(activations, 4, 4, 4) # Displaying output of layer 5
```
## EXERCISES
Please do all exercises described below. Note that all your source code as well as the log folders must be provided as final results **before April 05, 2019.**
#### Exercise 1)
Update the network architecture given in the function **create_model** of the class ClassifierDNNModel.
**Hint:** You can add more convolution, max pooling layers etc. Batch normalization and dropout are other options to be considered. You can also try applying different activation functions.
#### Exercise 2)
Use different **optimization** (e.g. ADAM, SGD, etc) and **regularization** (e.g. data augmentation, dropout) methods to increase the network accuracy.
#### Exercise 3)
In the file **configClassifier.py**, there is a flag named as **cfg.fineGrained** which is set to **False**. This flag defines the classification granularity level. In the default setting, i.e. when it is **False**, there exist 3 class types: **Square**, **Triangle**, and **Circle**. In case of switching this flag to **True**, the class number goes to 15. Repeat previous exercises 1) and 2) after setting this flag to **True** and provide results.
#### Hint:
All network responses are stored in a **log folder** which is created automatically. To visualize these responses, we can use TensorBoard as follows:
- First make sure that there is a new folder created with **a date and time stamp** under folder **logs**
- Next, open a terminal and type
> tensorboard --logdir=./logs
- Finally, open a web browser and type
> http://localhost:6006
- You can have an overview of all accuracies on the tensorboard. For more information about tensorboard, please see https://www.tensorflow.org/guide/summaries_and_tensorboard
# The report!
### Name
### Introduction
### Answers to questions
### Summary
---
# Autonomous driving - Car detection
Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: [Redmon et al., 2016](https://arxiv.org/abs/1506.02640) and [Redmon and Farhadi, 2016](https://arxiv.org/abs/1612.08242).
**You will learn to**:
- Use object detection on a car detection dataset
- Deal with bounding boxes
## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "3a".
* You can find your original work saved in the notebook with the previous version name ("v3")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of updates
* Clarified "YOLO" instructions preceding the code.
* Added details about anchor boxes.
* Added explanation of how score is calculated.
* `yolo_filter_boxes`: added additional hints. Clarify syntax for argmax and max.
* `iou`: clarify instructions for finding the intersection.
* `iou`: give variable names for all 8 box vertices, for clarity. Adds `width` and `height` variables for clarity.
* `iou`: add test cases to check handling of non-intersecting boxes, intersection at vertices, or intersection at edges.
* `yolo_non_max_suppression`: clarify syntax for tf.image.non_max_suppression and keras.gather.
* "convert output of the model to usable bounding box tensors": Provides a link to the definition of `yolo_head`.
* `predict`: hint on calling sess.run.
* Spelling, grammar, wording and formatting updates to improve clarity.
## Import libraries
Run the following cell to load the packages and dependencies that you will find useful as you build the object detector!
```
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
```
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`.
## 1 - Problem Statement
You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.
<center>
<video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We thank [drive.ai](https://www.drive.ai/) for providing this dataset.
</center></caption>
You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.
<img src="nb_images/box_label.png" style="width:500px;height:250;">
<caption><center> <u> **Figure 1** </u>: **Definition of a box**<br> </center></caption>
If you have 80 classes that you want the object detector to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step.
In this exercise, you will learn how "You Only Look Once" (YOLO) performs object detection, and then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use.
## 2 - YOLO
"You Only Look Once" (YOLO) is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.
### 2.1 - Model details
#### Inputs and outputs
- The **input** is a batch of images, and each image has the shape (m, 608, 608, 3)
- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers.
#### Anchor Boxes
* Anchor boxes are chosen by exploring the training data to choose reasonable height/width ratios that represent the different classes. For this assignment, 5 anchor boxes were chosen for you (to cover the 80 classes), and stored in the file './model_data/yolo_anchors.txt'
* The dimension for anchor boxes is the second to last dimension in the encoding: $(m, n_H,n_W,anchors,classes)$.
* The YOLO architecture is: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).
#### Encoding
Let's look in greater detail at what this encoding represents.
<img src="nb_images/architecture.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 2** </u>: **Encoding architecture for YOLO**<br> </center></caption>
If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.
Since we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.
For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).
<img src="nb_images/flatten.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 3** </u>: **Flattening the last two dimensions**<br> </center></caption>
#### Class score
Now, for each box (of each cell) we will compute the following element-wise product and extract a probability that the box contains a certain class.
The class score is $score_{c,i} = p_{c} \times c_{i}$: the probability that there is an object $p_{c}$ times the probability that the object is a certain class $c_{i}$.
<img src="nb_images/probability_extraction.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 4** </u>: **Find the class detected by each box**<br> </center></caption>
##### Example of figure 4
* In figure 4, let's say for box 1 (cell 1), the probability that an object exists is $p_{1}=0.60$. So there's a 60% chance that an object exists in box 1 (cell 1).
* The probability that the object is the class "category 3 (a car)" is $c_{3}=0.73$.
* The score for box 1 and for category "3" is $score_{1,3}=0.60 \times 0.73 = 0.44$.
* Let's say we calculate the score for all 80 classes in box 1, and find that the score for the car class (class 3) is the maximum. So we'll assign the score 0.44 and class "3" to this box "1".
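To make the arithmetic concrete, here is the same calculation with numpy on made-up numbers (the 0.60 and 0.73 from the example above; the rest of the class vector is hypothetical):

```python
import numpy as np

# Hypothetical values for one box in one cell (not from the dataset):
p_c = 0.60                      # probability that some object exists in the box
class_probs = np.zeros(80)
class_probs[2] = 0.73           # index 2 plays the role of "category 3" in Figure 4

scores = p_c * class_probs      # element-wise product: one score per class
best_class = np.argmax(scores)  # index of the highest-scoring class
best_score = scores[best_class]

print(best_class)               # 2
print(round(best_score, 2))     # 0.44
```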
#### Visualizing classes
Here's one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across the 80 classes, one maximum for each of the 5 anchor boxes).
- Color that grid cell according to what object that grid cell considers the most likely.
Doing this results in this picture:
<img src="nb_images/proba_map.png" style="width:300px;height:300;">
<caption><center> <u> **Figure 5** </u>: Each one of the 19x19 grid cells is colored according to which class has the largest predicted probability in that cell.<br> </center></caption>
Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm.
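The coloring step above can be sketched with numpy on a hypothetical score tensor (the real scores come out of the network; this is only an illustration of the max/argmax logic):

```python
import numpy as np

np.random.seed(0)
# Hypothetical class scores: 19x19 cells, 5 anchor boxes, 80 classes
box_scores = np.random.rand(19, 19, 5, 80)

# For each cell, find the overall best (anchor, class) pair, and keep
# the class index that achieved it -- that class "colors" the cell.
flat = box_scores.reshape(19, 19, 5 * 80)   # flat index = anchor * 80 + class
class_map = flat.argmax(axis=-1) % 80       # (19, 19) class index per cell

print(class_map.shape)  # (19, 19)
```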
#### Visualizing bounding boxes
Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:
<img src="nb_images/anchor_map.png" style="width:200px;height:200;">
<caption><center> <u> **Figure 6** </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>
#### Non-Max suppression
In the figure above, we plotted only boxes for which the model had assigned a high probability, but this is still too many boxes. You'd like to reduce the algorithm's output to a much smaller number of detected objects.
To do so, you'll use **non-max suppression**. Specifically, you'll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class; either due to the low probability of any object, or low probability of this particular class).
- Select only one box when several boxes overlap with each other and detect the same object.
### 2.2 - Filtering with a threshold on class scores
You are going to first apply a filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold.
The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It is convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing the midpoint and dimensions $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes in each cell.
- `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the "class probabilities" $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.
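As a numpy sketch of that rearrangement, assuming the 85 numbers per box are ordered $(p_c, b_x, b_y, b_h, b_w, c_1, \dots, c_{80})$ as described earlier (the real split is done for you by `yolo_head`):

```python
import numpy as np

# Hypothetical network output, with the 19x19 grid flattened to 361 cells
encoding = np.random.rand(19 * 19, 5, 85)

box_confidence  = encoding[..., 0:1]   # (361, 5, 1)  -> p_c
boxes           = encoding[..., 1:5]   # (361, 5, 4)  -> (b_x, b_y, b_h, b_w)
box_class_probs = encoding[..., 5:]    # (361, 5, 80) -> (c_1, ..., c_80)

print(box_confidence.shape, boxes.shape, box_class_probs.shape)
```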
#### **Exercise**: Implement `yolo_filter_boxes()`.
1. Compute box scores by doing the elementwise product as described in Figure 4 ($p \times c$).
The following code may help you choose the right operator:
```python
a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b # shape of c will be (19*19, 5, 80)
```
This is an example of **broadcasting** (multiplying vectors of different sizes).
2. For each box, find:
- the index of the class with the maximum box score
- the corresponding box score
**Useful references**
* [Keras argmax](https://keras.io/backend/#argmax)
* [Keras max](https://keras.io/backend/#max)
**Additional Hints**
* For the `axis` parameter of `argmax` and `max`, if you want to select the **last** axis, one way to do so is to set `axis=-1`. This is similar to Python array indexing, where you can select the last position of an array using `arrayname[-1]`.
* Applying `max` normally collapses the axis along which the maximum is taken. `keepdims=False` is the default, so that dimension is removed. We don't need to keep the last dimension after taking the maximum here.
* Even though the documentation shows `keras.backend.argmax`, use `K.argmax` here (we imported the backend as `K`). Similarly, use `K.max`.
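The same `axis`/`keepdims` behavior can be checked quickly with numpy:

```python
import numpy as np

a = np.random.rand(19 * 19, 5, 80)

print(np.max(a, axis=-1).shape)                 # (361, 5): last axis collapsed
print(np.max(a, axis=-1, keepdims=True).shape)  # (361, 5, 1): last axis kept
print(np.argmax(a, axis=-1).shape)              # (361, 5)
```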
3. Create a mask by using a threshold. As a reminder, `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns `[False, True, False, False, True]`. Your mask should instead use `>= threshold`, so that it is True for the boxes you want to keep.
4. Use TensorFlow to apply the mask to `box_class_scores`, `boxes` and `box_classes` to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep.
**Useful reference**:
* [boolean mask](https://www.tensorflow.org/api_docs/python/tf/boolean_mask)
**Additional Hints**:
* For the `tf.boolean_mask`, we can keep the default `axis=None`.
**Reminder**: to call a Keras function, you should use `K.function(...)`.
```
# GRADED FUNCTION: yolo_filter_boxes

def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
    """Filters YOLO boxes by thresholding on object and class confidence.

    Arguments:
    box_confidence -- tensor of shape (19, 19, 5, 1)
    boxes -- tensor of shape (19, 19, 5, 4)
    box_class_probs -- tensor of shape (19, 19, 5, 80)
    threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box

    Returns:
    scores -- tensor of shape (None,), containing the class probability score for selected boxes
    boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
    classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes

    Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
    For example, the actual output size of scores would be (10,) if there are 10 boxes.
    """

    # Step 1: Compute box scores
    ### START CODE HERE ### (≈ 1 line)
    box_scores = box_confidence * box_class_probs
    ### END CODE HERE ###

    # Step 2: Find the box_classes using the max box_scores, keep track of the corresponding score
    ### START CODE HERE ### (≈ 2 lines)
    box_classes = K.argmax(box_scores, axis=-1)
    box_class_scores = K.max(box_scores, axis=-1)
    ### END CODE HERE ###

    # Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
    # same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
    ### START CODE HERE ### (≈ 1 line)
    filtering_mask = box_class_scores >= threshold
    ### END CODE HERE ###

    # Step 4: Apply the mask to box_class_scores, boxes and box_classes
    ### START CODE HERE ### (≈ 3 lines)
    scores = tf.boolean_mask(box_class_scores, filtering_mask)
    boxes = tf.boolean_mask(boxes, filtering_mask)
    classes = tf.boolean_mask(box_classes, filtering_mask)
    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_a:
    box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
    boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
    box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
    scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.shape))
    print("boxes.shape = " + str(boxes.shape))
    print("classes.shape = " + str(classes.shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
10.7506
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 8.42653275 3.27136683 -0.5313437 -4.94137383]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
7
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(?,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(?, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(?,)
</td>
</tr>
</table>
**Note**: In the test for `yolo_filter_boxes`, we're using random numbers to test the function. In real data, `box_class_probs` would contain non-zero values between 0 and 1 for the probabilities. The box coordinates in `boxes` would also be chosen so that widths and heights are non-negative.
### 2.3 - Non-max suppression ###
Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).
<img src="nb_images/non-max-suppression.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 7** </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) of the 3 boxes. <br> </center></caption>
Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU.
<img src="nb_images/iou.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 8** </u>: Definition of "Intersection over Union". <br> </center></caption>
#### **Exercise**: Implement iou(). Some hints:
- In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) is the lower-right corner. In other words, the (0,0) origin starts at the top left corner of the image. As x increases, we move to the right. As y increases, we move down.
- For this exercise, we define a box using its two corners: upper left $(x_1, y_1)$ and lower right $(x_2,y_2)$, instead of using the midpoint, height and width. (This makes it a bit easier to calculate the intersection).
- To calculate the area of a rectangle, multiply its height $(y_2 - y_1)$ by its width $(x_2 - x_1)$. (Since $(x_1,y_1)$ is the top left and $(x_2,y_2)$ is the bottom right, these differences should be non-negative.)
- To find the **intersection** of the two boxes $(xi_{1}, yi_{1}, xi_{2}, yi_{2})$:
- Feel free to draw some examples on paper to clarify this conceptually.
- The top left corner of the intersection $(xi_{1}, yi_{1})$ is found by comparing the top left corners $(x_1, y_1)$ of the two boxes and finding a vertex that has an x-coordinate that is closer to the right, and y-coordinate that is closer to the bottom.
- The bottom right corner of the intersection $(xi_{2}, yi_{2})$ is found by comparing the bottom right corners $(x_2,y_2)$ of the two boxes and finding a vertex whose x-coordinate is closer to the left, and the y-coordinate that is closer to the top.
- The two boxes **may have no intersection**. You can detect this by computing the width $(xi_{2} - xi_{1})$ and height $(yi_{2} - yi_{1})$ of the candidate intersection: if at least one of these lengths is negative, the boxes do not overlap and the intersection area is zero.
- The two boxes may intersect at the **edges or vertices**, in which case the intersection area is still zero. This happens when either the height or width (or both) of the calculated intersection is zero.
**Additional Hints**
- `xi1` = **max**imum of the x1 coordinates of the two boxes
- `yi1` = **max**imum of the y1 coordinates of the two boxes
- `xi2` = **min**imum of the x2 coordinates of the two boxes
- `yi2` = **min**imum of the y2 coordinates of the two boxes
- `inter_area` = You can use `max(height, 0)` and `max(width, 0)`
```
# GRADED FUNCTION: iou

def iou(box1, box2):
    """Implement the intersection over union (IoU) between box1 and box2

    Arguments:
    box1 -- first box, list object with coordinates (box1_x1, box1_y1, box1_x2, box1_y2)
    box2 -- second box, list object with coordinates (box2_x1, box2_y1, box2_x2, box2_y2)
    """

    # Assign variable names to coordinates for clarity
    (box1_x1, box1_y1, box1_x2, box1_y2) = box1
    (box2_x1, box2_y1, box2_x2, box2_y2) = box2

    # Calculate the (xi1, yi1, xi2, yi2) coordinates of the intersection of box1 and box2. Calculate its area.
    ### START CODE HERE ### (≈ 7 lines)
    xi1 = max(box1_x1, box2_x1)
    yi1 = max(box1_y1, box2_y1)
    xi2 = min(box1_x2, box2_x2)
    yi2 = min(box1_y2, box2_y2)
    inter_width = max(xi2 - xi1, 0)
    inter_height = max(yi2 - yi1, 0)
    inter_area = inter_width * inter_height
    ### END CODE HERE ###

    # Calculate the union area by using the formula: Union(A,B) = A + B - Inter(A,B)
    ### START CODE HERE ### (≈ 3 lines)
    box1_area = (box1_y2 - box1_y1) * (box1_x2 - box1_x1)
    box2_area = (box2_y2 - box2_y1) * (box2_x2 - box2_x1)
    union_area = box1_area + box2_area - inter_area
    ### END CODE HERE ###

    # Compute the IoU
    ### START CODE HERE ### (≈ 1 line)
    iou = inter_area / union_area
    ### END CODE HERE ###

    return iou

## Test case 1: boxes intersect
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou for intersecting boxes = " + str(iou(box1, box2)))

## Test case 2: boxes do not intersect
box1 = (1, 2, 3, 4)
box2 = (5, 6, 7, 8)
print("iou for non-intersecting boxes = " + str(iou(box1, box2)))

## Test case 3: boxes intersect at vertices only
box1 = (1, 1, 2, 2)
box2 = (2, 2, 3, 3)
print("iou for boxes that only touch at vertices = " + str(iou(box1, box2)))

## Test case 4: boxes intersect at edges only
box1 = (1, 1, 3, 3)
box2 = (2, 3, 3, 4)
print("iou for boxes that only touch at edges = " + str(iou(box1, box2)))
```
**Expected Output**:
```
iou for intersecting boxes = 0.14285714285714285
iou for non-intersecting boxes = 0.0
iou for boxes that only touch at vertices = 0.0
iou for boxes that only touch at edges = 0.0
```
#### YOLO non-max suppression
You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute the overlap of this box with all other boxes, and remove boxes that overlap significantly (iou >= `iou_threshold`).
3. Go back to step 1 and iterate until no boxes remain to be processed.
This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.
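The loop itself can be sketched in plain Python, with a local IoU-style helper (this is only an illustration of the three steps on made-up boxes, not the graded implementation, which delegates the work to TensorFlow):

```python
def iou_sketch(box1, box2):
    # Boxes are (x1, y1, x2, y2) with (x1, y1) top-left, (x2, y2) bottom-right.
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

def nms_sketch(boxes, scores, iou_threshold=0.5):
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)   # step 1: highest-scoring remaining box
        keep.append(best)
        # step 2: drop boxes that overlap it too much; step 3: repeat
        order = [i for i in order if iou_sketch(boxes[best], boxes[i]) < iou_threshold]
    return keep

boxes = [(0, 0, 2, 2), (0.1, 0.1, 2, 2), (3, 3, 5, 5)]
scores = [0.9, 0.8, 0.7]
print(nms_sketch(boxes, scores))  # [0, 2]: the best box suppresses its near-duplicate
```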
**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):
** Reference documentation **
- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)
```
tf.image.non_max_suppression(
    boxes,
    scores,
    max_output_size,
    iou_threshold=0.5,
    name=None
)
```
Note that in the version of TensorFlow used here, there is no `score_threshold` parameter (it appears only in the documentation for the latest version), so trying to set it will result in the error message: *got an unexpected keyword argument 'score_threshold'*.
- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/keras/backend/gather)
Even though the documentation shows `tf.keras.backend.gather()`, you can use `K.gather()` (we imported the backend as `K`).
```
K.gather(
    reference,
    indices
)
```
```
# GRADED FUNCTION: yolo_non_max_suppression

def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
    """
    Applies Non-max suppression (NMS) to a set of boxes

    Arguments:
    scores -- tensor of shape (None,), output of yolo_filter_boxes()
    boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
    classes -- tensor of shape (None,), output of yolo_filter_boxes()
    max_boxes -- integer, maximum number of predicted boxes you'd like
    iou_threshold -- real value, "intersection over union" threshold used for NMS filtering

    Returns:
    scores -- tensor of shape (, None), predicted score for each box
    boxes -- tensor of shape (4, None), predicted box coordinates
    classes -- tensor of shape (, None), predicted class for each box

    Note: The "None" dimension of the output tensors will obviously be less than or equal to max_boxes. Note also that this
    function will transpose the shapes of scores, boxes, classes. This is done for convenience.
    """

    max_boxes_tensor = K.variable(max_boxes, dtype='int32')  # tensor to be used in tf.image.non_max_suppression()
    K.get_session().run(tf.variables_initializer([max_boxes_tensor]))  # initialize variable max_boxes_tensor

    # Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
    ### START CODE HERE ### (≈ 1 line)
    nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes, iou_threshold)
    ### END CODE HERE ###

    # Use K.gather() to select only nms_indices from scores, boxes and classes
    ### START CODE HERE ### (≈ 3 lines)
    scores = K.gather(scores, nms_indices)
    boxes = K.gather(boxes, nms_indices)
    classes = K.gather(classes, nms_indices)
    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_b:
    scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
    boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
    classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.eval().shape))
    print("boxes.shape = " + str(boxes.eval().shape))
    print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
6.9384
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[-5.299932 3.13798141 4.45036697 0.95942086]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
-2.24527
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
### 2.4 - Wrapping up the filtering
It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented.
**Exercise**: Implement `yolo_eval()`, which takes the output of the YOLO encoding and filters the boxes using score thresholding and NMS. There's just one last implementation detail you have to know: there are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):
```python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
```
which converts the YOLO box coordinates (x, y, w, h) to box corner coordinates (x1, y1, x2, y2), to fit the input format of `yolo_filter_boxes`.
```python
boxes = scale_boxes(boxes, image_shape)
```
YOLO's network was trained to run on 608x608 images. If you are testing the model on images of a different size--for example, the car detection dataset has 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.
Don't worry about these two functions; we'll show you where they need to be called.
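As a rough sketch of what such a rescaling does -- assuming box corner coordinates expressed as fractions of the image in $(y_1, x_1, y_2, x_2)$ order, which is an assumption for illustration, not taken from the provided helper's source:

```python
import numpy as np

def scale_boxes_sketch(boxes, image_shape):
    """Rescale boxes from fractions of the image to pixel coordinates.

    Assumes boxes is an (N, 4) array with values in [0, 1] ordered
    (y1, x1, y2, x2) -- a hypothetical convention for this sketch.
    """
    height, width = image_shape
    return boxes * np.array([height, width, height, width])

boxes = np.array([[0.5, 0.5, 1.0, 1.0]])
print(scale_boxes_sketch(boxes, (720., 1280.)).tolist())  # [[360.0, 640.0, 720.0, 1280.0]]
```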
```
# GRADED FUNCTION: yolo_eval

def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
    """
    Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.

    Arguments:
    yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
                    box_confidence: tensor of shape (None, 19, 19, 5, 1)
                    box_xy: tensor of shape (None, 19, 19, 5, 2)
                    box_wh: tensor of shape (None, 19, 19, 5, 2)
                    box_class_probs: tensor of shape (None, 19, 19, 5, 80)
    image_shape -- tensor of shape (2,) containing the input shape; in this notebook we use (720., 1280.) (has to be float32 dtype)
    max_boxes -- integer, maximum number of predicted boxes you'd like
    score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
    iou_threshold -- real value, "intersection over union" threshold used for NMS filtering

    Returns:
    scores -- tensor of shape (None,), predicted score for each box
    boxes -- tensor of shape (None, 4), predicted box coordinates
    classes -- tensor of shape (None,), predicted class for each box
    """

    ### START CODE HERE ###

    # Retrieve outputs of the YOLO model (≈1 line)
    box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs

    # Convert boxes to be ready for filtering functions (convert box_xy and box_wh to corner coordinates)
    boxes = yolo_boxes_to_corners(box_xy, box_wh)

    # Use one of the functions you've implemented to perform score-filtering with a threshold of score_threshold (≈1 line)
    scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, score_threshold)

    # Scale boxes back to the original image shape.
    boxes = scale_boxes(boxes, image_shape)

    # Use one of the functions you've implemented to perform non-max suppression with
    # maximum number of boxes set to max_boxes and a threshold of iou_threshold (≈1 line)
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)

    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_b:
    yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
    scores, boxes, classes = yolo_eval(yolo_outputs)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.eval().shape))
    print("boxes.shape = " + str(boxes.eval().shape))
    print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
138.791
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
54
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
## Summary for YOLO:
- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
- Each cell in a 19x19 grid over the input image gives 425 numbers.
- 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only a few boxes based on:
- Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
- Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
- This gives you YOLO's final output.
## 3 - Test YOLO pre-trained model on images
In this part, you are going to use a pre-trained model and test it on the car detection dataset. We'll need a session to execute the computation graph and evaluate the tensors.
```
sess = K.get_session()
```
### 3.1 - Defining classes, anchors and image shape
* Recall that we are trying to detect 80 classes, and are using 5 anchor boxes.
* We have gathered the information on the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt".
* We'll read class names and anchors from text files.
* The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
```
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
```
### 3.2 - Loading a pre-trained model
* Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes.
* You are going to load an existing pre-trained Keras YOLO model stored in "yolo.h5".
* These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will simply refer to it as "YOLO" in this notebook.
Run the cell below to load the model from this file.
```
yolo_model = load_model("model_data/yolo.h5")
```
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
```
yolo_model.summary()
```
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.
**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).
### 3.3 - Convert output of the model to usable bounding box tensors
The output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
If you are curious about how `yolo_head` is implemented, you can find the function definition in the file ['keras_yolo.py'](https://github.com/allanzelener/YAD2K/blob/master/yad2k/models/keras_yolo.py). The file is located in your workspace in this path 'yad2k/models/keras_yolo.py'.
```
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
```
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function.
### 3.4 - Filtering boxes
`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you had previously implemented, to do this.
```
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
```
### 3.5 - Run the graph on an image
Let the fun begin. You have created a graph that can be summarized as follows:
1. <font color='purple'> yolo_model.input </font> is given to `yolo_model`. The model is used to compute the output <font color='purple'> yolo_model.output </font>
2. <font color='purple'> yolo_model.output </font> is processed by `yolo_head`. It gives you <font color='purple'> yolo_outputs </font>
3. <font color='purple'> yolo_outputs </font> goes through a filtering function, `yolo_eval`. It outputs your predictions: <font color='purple'> scores, boxes, classes </font>
**Exercise**: Implement predict() which runs the graph to test YOLO on an image.
You will need to run a TensorFlow session, to have it compute `scores, boxes, classes`.
The code below also uses the following function:
```python
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
```
which outputs:
- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.
- image_data: a numpy-array representing the image. This will be the input to the CNN.
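As a rough sketch of what this preprocessing amounts to (illustrative only; the real `preprocess_image` helper resizes the image with PIL, which is omitted here), the pixel values are scaled to [0, 1] and a batch dimension is added:

```python
import numpy as np

def preprocess_sketch(pixels):
    """Illustrative stand-in for preprocess_image: takes an already-resized
    (608, 608, 3) uint8 array, scales it to [0, 1] and adds a batch axis."""
    image_data = pixels.astype(np.float32) / 255.0
    return np.expand_dims(image_data, axis=0)  # shape (1, 608, 608, 3)

fake_image = np.random.randint(0, 256, size=(608, 608, 3), dtype=np.uint8)
batch = preprocess_sketch(fake_image)
print(batch.shape)  # (1, 608, 608, 3)
```

The batch axis is what lets the same graph run on a single image or on a mini-batch of images.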
**Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict: `{K.learning_phase(): 0}`.
#### Hint: Using the TensorFlow Session object
* Recall that above, we called `K.get_session()` and saved the Session object in `sess`.
* To evaluate a list of tensors, we call `sess.run()` like this:
```
sess.run(fetches=[tensor1, tensor2, tensor3],
         feed_dict={yolo_model.input: the_input_variable,
                    K.learning_phase(): 0
                   })
```
* Notice that the variables `scores, boxes, classes` are not passed into the `predict` function, but these are global variables that you will use within the `predict` function.
```python
def predict(sess, image_file):
    """
    Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.

    Arguments:
    sess -- your tensorflow/Keras session containing the YOLO graph
    image_file -- name of an image stored in the "images" folder.

    Returns:
    out_scores -- tensor of shape (None, ), scores of the predicted boxes
    out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
    out_classes -- tensor of shape (None, ), class index of the predicted boxes

    Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
    """

    # Preprocess your image
    image, image_data = preprocess_image("images/" + image_file, model_image_size=(608, 608))

    # Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
    # You'll need to use feed_dict={yolo_model.input: ..., K.learning_phase(): 0}
    ### START CODE HERE ### (≈ 1 line)
    feed_dict = {yolo_model.input: image_data, K.learning_phase(): 0}
    out_scores, out_boxes, out_classes = sess.run(fetches=[scores, boxes, classes], feed_dict=feed_dict)
    ### END CODE HERE ###

    # Print predictions info
    print('Found {} boxes for {}'.format(len(out_boxes), image_file))
    # Generate colors for drawing bounding boxes.
    colors = generate_colors(class_names)
    # Draw bounding boxes on the image file
    draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
    # Save the predicted bounding box on the image
    image.save(os.path.join("out", image_file), quality=90)
    # Display the results in the notebook
    output_image = scipy.misc.imread(os.path.join("out", image_file))
    imshow(output_image)

    return out_scores, out_boxes, out_classes
```
Run the following cell on the "test.jpg" image to verify that your function is correct.
```
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
```
**Expected Output**:
<table>
<tr>
<td>
**Found 7 boxes for test.jpg**
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.60 (925, 285) (1045, 374)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.66 (706, 279) (786, 350)
</td>
</tr>
<tr>
<td>
**bus**
</td>
<td>
0.67 (5, 266) (220, 407)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.70 (947, 324) (1280, 705)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.74 (159, 303) (346, 440)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.80 (761, 282) (942, 412)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.89 (367, 300) (745, 648)
</td>
</tr>
</table>
The model you've just run is actually able to detect 80 different classes listed in "coco_classes.txt". To test the model on your own images:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the code cell above
4. Run the code and see the output of the algorithm!
If you were to run your session in a for loop over all your images, here's what you would get:
<center>
<video width="400" height="200" src="nb_images/pred_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley <br> Thanks [drive.ai](https://www.drive.ai/) for providing this dataset! </center></caption>
## <font color='darkblue'>What you should remember:</font>
- YOLO is a state-of-the-art object detection model that is fast and accurate
- It runs an input image through a CNN which outputs a 19x19x5x85 dimensional volume.
- The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes.
- You filter through all the boxes using non-max suppression. Specifically:
- Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes
- Intersection over Union (IoU) thresholding to eliminate overlapping boxes
- Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as a lot of computation, we used previously trained model parameters in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise.
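The IoU thresholding mentioned above can be restated in a few lines of plain Python (an illustrative sketch, with boxes given as `(x1, y1, x2, y2)` corners; the notebook's own implementation operates on TensorFlow tensors):

```python
def iou_sketch(box1, box2):
    """Intersection over Union of two (x1, y1, x2, y2) boxes;
    illustrative re-statement of the IoU used by non-max suppression."""
    # Corners of the intersection rectangle
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    # Clamp to 0 so disjoint boxes give zero intersection area
    inter = max(0, xi2 - xi1) * max(0, yi2 - yi1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

print(iou_sketch((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429
```

Non-max suppression keeps the highest-scoring box and discards any other box whose IoU with it exceeds the chosen threshold.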
**References**: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener's GitHub repository. The pre-trained weights used in this exercise came from the official YOLO website.
- Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - [You Only Look Once: Unified, Real-Time Object Detection](https://arxiv.org/abs/1506.02640) (2015)
- Joseph Redmon, Ali Farhadi - [YOLO9000: Better, Faster, Stronger](https://arxiv.org/abs/1612.08242) (2016)
- Allan Zelener - [YAD2K: Yet Another Darknet 2 Keras](https://github.com/allanzelener/YAD2K)
- The official YOLO website (https://pjreddie.com/darknet/yolo/)
**Car detection dataset**:
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The Drive.ai Sample Dataset</span> (provided by drive.ai) is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. We are grateful to Brody Huval, Chih Hu and Rahul Patel for providing this data.
## Using a color palette
Using a palette can help discriminate between several groups and get a better sense of the data. You may refer to the [Seaborn documentation](https://seaborn.pydata.org/tutorial/color_palettes.html) for extensive information on this topic.
```
# libraries & dataset
import seaborn as sns
import matplotlib.pyplot as plt
# set a grey background (use sns.set_theme() if seaborn version 0.11.0 or above)
sns.set(style="darkgrid")
df = sns.load_dataset('iris')
sns.boxplot(x=df["species"], y=df["sepal_length"], palette="Blues")
plt.show()
```
## Applying a uniform color
Of course, you can easily apply a uniform color to all boxes. Matplotlib accepts many named colors; the most common single-letter codes are:

- `b`: blue
- `g`: green
- `r`: red
- `c`: cyan
- `m`: magenta
- `y`: yellow
- `k`: black
- `w`: white
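These single-letter codes are just shorthand for RGB triples; you can inspect what each one resolves to with matplotlib's standard `matplotlib.colors.to_rgb` helper:

```python
import matplotlib.colors as mcolors

# Each code resolves to an (r, g, b) tuple of floats in [0, 1]
for code in ["b", "g", "r", "c", "m", "y", "k", "w"]:
    print(code, mcolors.to_rgb(code))
# e.g. 'r' -> (1.0, 0.0, 0.0), 'k' -> (0.0, 0.0, 0.0)
```

Any of these forms — letter codes, full names like `'skyblue'`, or raw tuples — can be passed to the `color` argument below.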
```
# libraries & dataset
import seaborn as sns
import matplotlib.pyplot as plt
# set a grey background (use sns.set_theme() if seaborn version 0.11.0 or above)
sns.set(style="darkgrid")
df = sns.load_dataset('iris')
sns.boxplot(x=df["species"], y=df["sepal_length"], color='skyblue')
plt.show()
```
## Specifying a color for each distribution
Specifying colors 'by hand' is easily done by creating a dictionary of `'category': 'color'` pairs, as we did in the following example with `my_pal`.
```
# libraries & dataset
import seaborn as sns
import matplotlib.pyplot as plt
# set a grey background (use sns.set_theme() if seaborn version 0.11.0 or above)
sns.set(style="darkgrid")
df = sns.load_dataset('iris')
my_pal = {"versicolor": "g", "setosa": "b", "virginica":"m"}
sns.boxplot(x=df["species"], y=df["sepal_length"], palette=my_pal)
plt.show()
```
## Highlighting a particular group
You may want to highlight one distribution among the others; this can again be done with a custom palette dictionary, as before.
```
# libraries & dataset
import seaborn as sns
import matplotlib.pyplot as plt
# set a grey background (use sns.set_theme() if seaborn version 0.11.0 or above)
sns.set(style="darkgrid")
df = sns.load_dataset('iris')
my_pal = {species: "r" if species == "versicolor" else "b" for species in df.species.unique()}
sns.boxplot( x=df["species"], y=df["sepal_length"], palette=my_pal)
plt.show()
```
## Adding transparency to your figure
I personally think that charts look better with transparency. I found out how to do it from [mwaskom's GitHub post](https://github.com/mwaskom/seaborn/issues/979).
If you want to dig deeper on the matter, you can start with [matplotlib documentation on Artist objects](https://matplotlib.org/tutorials/intermediate/artists.html).
```
# libraries & dataset
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="darkgrid")
df = sns.load_dataset('iris')
# usual boxplot, plotted on a matplotlib.axes object named ax
ax = sns.boxplot(x='species', y='sepal_length', data=df)
# adding transparency to colors
for patch in ax.artists:
    r, g, b, a = patch.get_facecolor()
    patch.set_facecolor((r, g, b, .3))
plt.show()
```
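The `(r, g, b, .3)` tuple being set here is an RGBA color whose fourth component is the alpha (opacity). If you'd rather build such a tuple from a color name, matplotlib's `matplotlib.colors.to_rgba` helper does exactly that:

```python
import matplotlib.colors as mcolors

# Convert any color spec to an (r, g, b, a) tuple with a chosen alpha
rgba = mcolors.to_rgba("blue", alpha=0.3)
print(rgba)  # (0.0, 0.0, 1.0, 0.3)
```

Passing such a tuple to `patch.set_facecolor` gives the same semi-transparent result as modifying the components by hand.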