```
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)
# Hide the code completely
from IPython.display import HTML
tag = HTML('''<style>div.input{display:none;}</style>''')
display(tag)
```
<table>
<td style="width:140px; height:140px"><img src='examples/02/img/logo-ICCT.PNG'></td>
</table>
<center><h1>Interactive Course in Automatic Control (ICCT)</h1></center><br>
Welcome to the Interactive Course in Automatic Control (ICCT) platform!
The interactive examples are organised into four chapters: mathematical examples, time-domain examples, frequency-domain examples, and state-space examples.
## Contents
### 1. Mathematical examples
1.1 Complex numbers<br>
1.1.1 [Complex numbers in the Cartesian system](examples/01/M-01-Kompleksni_brojevi_Kartezijev_sustav.ipynb)<br>
1.1.2 [Complex numbers in the polar system](examples/01/M-02-Kompleksni_brojevi_polarni_sustav.ipynb)<br>
1.1.3 [Powers of complex numbers](examples/01/M-03-Potenciranje_kompleksnih_brojeva.ipynb)<br>
1.2 [Differentiation of polynomials](examples/01/M-04-Deriviranje_polinoma.ipynb)<br>
1.3 [Integration of polynomials](examples/01/M-05-Integriranje_polinoma.ipynb)<br>
1.4 [Matrix operations](examples/01/M-06-Operacije_nad_matricama.ipynb)<br>
1.5 [Graphs of functions](examples/01/M-07-Grafovi_funkcija.ipynb)<br>
1.6 [Fast Fourier transform (FFT)](examples/01/M-08-Brza_Fourierova_transformacija.ipynb)<br>
1.7 [Laplace transform](examples/01/M-09-Laplaceova_transformacija.ipynb)<br>
### 2. Time-domain examples
2.1 [Water tank level control system](examples/02/TD-01-Sustav_za_kontrolu_razine_vode.ipynb)<br>
2.2 [Antenna position/azimuth control system](examples/02/TD-02-Sustav_upravljanja_polozajem_azimuta_antene.ipynb)<br>
2.3 [Mechanical systems](examples/02/TD-03-Mehanicki_sustavi.ipynb)<br>
2.4 [Differential equations](examples/02/TD-04-Diferencijalne_jednadzbe.ipynb)<br>
2.5 Linearization<br>
2.5.1 [Linearization of a function](examples/02/TD-05-Linearizacija_funkcije.ipynb)<br>
2.5.2 [Linearization of a simple pendulum](examples/02/TD-06-Linearizacija_njihalo.ipynb)<br>
2.6 [Poles and zeros - basics](examples/02/TD-07-Polovi_nule_odziv_sustava.ipynb)<br>
2.7 [Pole and zero locations](examples/02/TD-08-Polozaj_polova_i_nula.ipynb)<br>
2.8 [Partial fraction decomposition](examples/02/TD-09-Rastav_na_parcijalne_razlomke.ipynb)<br>
2.9 [Basics of first- and second-order systems](examples/02/TD-10-Osnove_sustava_prvog_i_drugog_reda.ipynb)<br>
2.10 [Time response of a first-order system](examples/02/TD-11-Vremenski_odziv_sustava_prvog_reda.ipynb)<br>
2.11 [Dominant pole approximation](examples/02/TD-12-Aproksimacija_dominantnim_polom.ipynb)<br>
2.12 [Loading problem](examples/02/TD-13-Loading_Problem.ipynb)<br>
2.13 [Routh and Hurwitz stability criteria](examples/02/TD-14-Routh_Hurwitz_kriteriji_stabilnosti.ipynb)<br>
2.14 PID controller<br>
2.14.1 [Time response](examples/02/TD-15-PID_regulator_vremenski_odziv.ipynb)<br>
2.14.2 [Closed-loop system](examples/02/TD-16-PID_regulator_model_zatvorene_petlje.ipynb)<br>
2.15 [Steady-state error](examples/02/TD-17-Pogreska_stacionarnog_stanja.ipynb)<br>
2.16 [Root locus](examples/02/TD-18-Geometrijsko_mjesto_korijena.ipynb)<br>
### 3. Frequency-domain examples
3.1 [Transfer functions](examples/03/FD-01-Prijenosne_funkcije.ipynb)<br>
3.2 [Bode plot](examples/03/FD-02-Formulacija_Bodeovog_dijagrama.ipynb)<br>
3.3 [Nyquist plot](examples/03/FD-03-Formulacija_Nyquistovog_dijagrama.ipynb)<br>
3.4 [Negative-feedback systems](examples/03/FD-04-Sustavi_s_negativnom_povratnom_vezom.ipynb)<br>
3.5 [Gain and phase margin](examples/03/FD-05-Amplitudna_i_fazna_rezerva.ipynb)<br>
3.6 PID controller - basics<br>
3.6.1 [PID controller tuning](examples/03/FD-06-Formulacija_PID_regulatora.ipynb)<br>
3.6.2 [PID control of first-order systems](examples/03/FD-07-PID_upravljanje_sustavima_prvog_reda.ipynb)<br>
3.6.3 [PID control of a first-order system with an integrator](examples/03/FD-08-PID_upravljanje_sustavom_prvog_reda_s_integratorom.ipynb)<br>
3.6.4 [PID control of a first-order system with time delay](examples/03/FD-09-PID_upravljanje_sustavom_prvog_reda_s_vremenom_kasnjenja.ipynb)<br>
3.6.5 [PID control of undamped and critically damped second-order systems](examples/03/FD-10-PID_upravljanje_neprigusenim_i_kriticki_prigusenim_sustavima_drugog_reda.ipynb)<br>
3.6.6 [PID control of an underdamped second-order system](examples/03/FD-11-PID_upravljanje_podprigusenim_sustavom_drugog_reda.ipynb)<br>
3.6.7 [PID control of an overdamped second-order system](examples/03/FD-12-PID_upravljanje_nadprigusenim_sustavom_drugog_reda.ipynb)<br>
3.6.8 [Disturbance rejection with a PID controller](examples/03/FD-13-Odbacivanje_smetnji_PID_regulatorom.ipynb)<br>
3.6.9 [Discrete PID control of a first-order system](examples/03/FD-14-Upravljanje_sustavom_prvog_reda_diskretnim_PID_regulatorom.ipynb)<br>
3.6.10 [Discrete PID control of a second-order system](examples/03/FD-15-Upravljanje_sustavom_drugog_reda_diskretnim_PID_regulatorom.ipynb)<br>
3.7 PID controller design<br>
3.7.1 [P controller with an operational amplifier](examples/03/FD-16-P_regulator_s_operacijskim_pojacalom.ipynb)<br>
3.7.2 [PI controller with an operational amplifier](examples/03/FD-17-PI_regulator_s_operacijskim_pojacalom.ipynb)<br>
3.7.3 [PD controller with an operational amplifier](examples/03/FD-18-PD_regulator_s_operacijskim_pojacalom.ipynb)<br>
3.7.4 [PID controller with an operational amplifier](examples/03/FD-19-PID_regulator_s_operacijskim_pojacalom.ipynb)<br>
3.8 Real systems<br>
3.8.1 [Mass-spring-damper](examples/03/FD-20-1DoF_masa_opruga_prigusivac.ipynb)<br>
3.8.2 [Ball and beam](examples/03/FD-21-Kugla_i_greda.ipynb)<br>
3.8.3 [DC motor](examples/03/FD-22-Kaskadno_upravljanje_DC_motorom.ipynb)<br>
3.8.4 [Ball-screw positioning](examples/03/FD-23-DC_sustav_pozicioniranja_s_kuglicnim_vijkom.ipynb)<br>
3.8.5 [Pendulum on a cart](examples/03/FD-24-Njihalo_na_kolicima.ipynb)<br>
### 4. State-space examples
4.1 [Solution of differential equations in matrix form](examples/04/SS-01-Rjesenje_diferencijalne_jednadzbe_u_matricnoj_formi.ipynb)<br>
4.2 [Modal analysis](examples/04/SS-02-Modalna_analiza.ipynb)<br>
4.3 Diagonal matrices<br>
4.3.1 [Convergent modes only](examples/04/SS-03-Dijagonalne_matrice_s_konvergentnim_modovima.ipynb)<br>
4.3.2 [Divergent modes](examples/04/SS-04-Dijagonalna_matrica_divergentni_mod.ipynb)<br>
4.4 Jordan form<br>
4.4.1 [Jordan form with real eigenvalues](examples/04/SS-05-Jordanova_forma_s_realnim_svojstvenim_vrijednostima.ipynb)<br>
4.4.2 [Jordan form with complex eigenvalues](examples/04/SS-06-Jordanova_forma_s_kompleksnim_svojstvenim_vrijednostima.ipynb)<br>
4.5 [From differential equation to state space](examples/04/SS-07-Od_diferencijalne_jednadzbe_do_prostora_stanja.ipynb)<br>
4.6 [Modal analysis of a mass-spring-damper system](examples/04/SS-08-Modalna_analiza_sustava_masa_opruga_prigusivac.ipynb)<br>
4.7 [Vehicle speed dynamics](examples/04/SS-09-Dinamika_brzine_vozila.ipynb)<br>
4.7.1 [Modal analysis of vehicle speed dynamics](examples/04/SS-10-Modalna_analiza_dinamike_brzine_vozila.ipynb)<br>
4.8 [Lateral position dynamics of a lunar lander](examples/04/SS-11-Dinamika_bocnog_polozaja_lunarnog_prizemljivaca.ipynb)<br>
4.8.1 [Modal analysis of a lunar lander](examples/04/SS-12-Modalna_analiza_lunarnog_prizemljivaca.ipynb)<br>
4.9 [Equilibrium points](examples/04/SS-13_Tocke_ravnoteze.ipynb)<br>
4.9.1 [Example 1](examples/04/SS-14-Tocke_ravnoteze_primjer_1.ipynb)<br>
4.9.2 [Example 2](examples/04/SS-15-Tocke_ravnoteze_primjer_2.ipynb)<br>
4.9.3 [Example 3](examples/04/SS-16-Tocke_ravnoteze_primjer_3.ipynb)<br>
4.10 [Internal stability](examples/04/SS-17-Unutarnja_stabilnost.ipynb)<br>
4.10.1 [Example 1](examples/04/SS-18-Unutarnja_stabilnost_primjer_1.ipynb)<br>
4.10.2 [Example 2](examples/04/SS-19-Unutarnja_stabilnost_primjer_2.ipynb)<br>
4.10.3 [Example 3](examples/04/SS-20-Unutarnja_stabilnost_primjer_3.ipynb)<br>
4.10.4 [Example 4](examples/04/SS-21-Unutarnja_stabilnost_primjer_4.ipynb)<br>
4.11 [Observability](examples/04/SS-22-Osmotrivost.ipynb)<br>
4.12 [Controllability](examples/04/SS-23-Upravljivost.ipynb)<br>
4.13 [State-space formulation and transfer function](examples/04/SS-24-Formulacija_prostora_stanja_i_prijenosna_funkcija.ipynb)<br>
4.14 [Internal and external stability](examples/04/SS-25-Unutarnja_i_vanjska_stabilnost.ipynb)<br>
4.15 [Asymptotic observers](examples/04/SS-26-Asimptotski_promatraci.ipynb)<br>
4.16 [Luenberger observer with dynamic requirements](examples/04/SS-27-Luenbergerov_promatrac_s_dinamickim_zahtjevima.ipynb)<br>
4.17 [Observer for a mass-spring-damper system](examples/04/SS-28-Promatrac_za_sustav_masa_opruga_prigusivac.ipynb)<br>
4.18 [Observers for unobservable systems](examples/04/SS-29-Promatrac_za_neosmotrive_sustave.ipynb)<br>
4.19 [State feedback control](examples/04/SS-30-Upravljanje_povratnom_vezom_stanja.ipynb)<br>
4.19.1 [State feedback - performance](examples/04/SS-31-Povratna_veza_stanja_Performanse.ipynb)<br>
4.19.2 [State feedback - tracking specification](examples/04/SSS-32-Povratna_veza_stanja_Specifikacija_pracenja.ipynb)<br>
4.19.3 [State feedback control of a mass-spring-damper system](examples/04/SS-33-Upravljanje_povratnom_vezom_stanja_za_sustav_masa_opruga_prigusivac.ipynb)<br>
4.20 [Controller design](examples/04/SS-34-Dizajn_regulatora.ipynb)<br>
4.20.1 [Controller design for a mass-spring-damper system](examples/04/SS-35-Dizajn_regulatora_za_sustav_masa_opruga_prigusivac.ipynb)<br>
4.21 Real-system examples<br>
4.21.1 [Control of a satellite in orbit](examples/04/SS-36-Upravljanje_satelitom_u_orbiti.ipynb)<br>
4.21.2 [Longitudinal speed control of a quadcopter](examples/04/SS-37-Upravljanje_uzduznom_brzinom_kvadkoptera.ipynb)<br>
4.21.3 [Lateral position control of a lunar lander](examples/04/SS-38-Upravljanje_bocnog_polozaja_lunarnog_prizemljivaca.ipynb)<br>
4.21.4 [Crane load position control](examples/04/SS-39-Upravljanje_polozajem_tereta_dizalice.ipynb)<br>
4.21.5 [Control of a robot arm with a flexible joint](examples/04/SS-40-Upravljanje_robotskom_rukom_s_fleksibilnim_zglobom.ipynb)<br>
4.21.6 [Position control of a rotary actuator](examples/04/SS-41-Upravljanje_polozajem_rotacijskog_aktuatora.ipynb)<br>
4.21.7 [Missile position control](examples/04/SS-42-Upravljanje_polozajem_projektila.ipynb)<br>
4.21.8 [Hard disk head control](examples/04/SS-43-Upravljanje_glavom_cvrstog_diska.ipynb)<br>
4.21.9 [Vehicle cruise control](examples/04/SS-44-Tempomat_vozila.ipynb)<br>
4.21.10 [Aircraft taxiing path control](examples/04/SS-45-Upravljanje_putanjom_zrakoplova.ipynb)<br>
4.21.11 [Lateral position control of a quadcopter](examples/04/SS-46-Upravljanje_bocnim_polozajem_kvadkoptera.ipynb)<br>
4.21.12 [Pneumatic position control](examples/04/SS-47-Pneumatsko_upravljanje_polozaja.ipynb)<br>
4.21.13 [Speed control of an autonomous underwater vehicle](examples/04/SS-48-Upravljanje_brzinom_autonomnog_podvodnog_vozila.ipynb)<br>
4.21.14 [Heading control of an autonomous underwater vehicle](examples/04/SS-49-Upravljanje_smjerom_autonomnog_podvodnog_vozila.ipynb)<br>
4.21.15 [Depth control of an autonomous underwater vehicle](examples/04/SS-50-Upravljanje_dubinom_autonomnog_podvodnog_vozila.ipynb)<br>
<br>
<div align="center">Read more about the ICCT project on our <a href="https://icct.cafre.unipi.it/">website.</a></div>
<!-- ## FAQ
### Can I use the developed Jupyter notebooks for my school project? -->
<table>
<td style="width:250px; height:51px"><img src='https://mobilnost.hr/cms_files/2018/04/1524218777_logosbeneficaireserasmus-right-hr.jpg'></td>
</table>
# MagPySV example workflow - high latitude observatories
# Setup
```
# Setup python paths and import some modules
from IPython.display import Image
import sys
import os
import datetime as dt
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
# Import all of the MagPySV modules
import magpysv.denoise as denoise
import magpysv.io as io
import magpysv.model_prediction as model_prediction
import magpysv.plots as plots
import magpysv.tools as tools
%matplotlib notebook
```
# Downloading data
```
from gmdata_webinterface import consume_webservices as cws
# Required dataset - only the hourly WDC dataset is currently supported
cadence = 'hour'
service = 'WDC'
# Start and end dates of the data download
start_date = dt.date(1980, 1, 1)
end_date = dt.date(2010, 12, 31)
# Observatories of interest
observatory_list = ['BLC', 'BRW', 'MBC', 'OTT', 'RES', 'STJ', 'THL', 'VIC', 'YKC']
# Output path for data
download_dir = 'data'
cws.fetch_data(start_date= start_date, end_date=end_date,
station_list=observatory_list, cadence=cadence,
service=service, saveroot=download_dir)
```
# Initial processing
Extract all data from the WDC files, convert into the proper hourly means using the tabular base and save the X, Y and Z components to CSV files. This may take a few minutes.
```
io.wdc_to_hourly_csv(wdc_path=download_dir, write_dir=os.path.join(download_dir, 'hourly'), obs_list=observatory_list,
print_obs=True)
# Path to file containing baseline discontinuity information
baseline_data = tools.get_baseline_info()
# Loop over all observatories and calculate SV series as annual differences of monthly means (ADMM) for each
for observatory in observatory_list:
print(observatory)
# Load hourly data
data_file = observatory + '.csv'
hourly_data = io.read_csv_data(
fname=os.path.join(download_dir, 'hourly', data_file),
data_type='mf')
# Discard days with Ap > threshold (where Ap is the daily average of the 3-hourly ap values) - optional,
# uncomment the next two lines
# hourly_data = tools.apply_Ap_threshold(obs_data=hourly_data, Ap_file=os.path.join('index_data', 'ap_daily.csv'),
# threshold=30.0)
# Resample to monthly means
resampled_field_data = tools.data_resampling(hourly_data, sampling='MS', average_date=True)
# Correct documented baseline changes
tools.correct_baseline_change(observatory=observatory,
field_data=resampled_field_data,
baseline_data=baseline_data, print_data=True)
# Write out the monthly means for magnetic field
io.write_csv_data(data=resampled_field_data,
write_dir=os.path.join(download_dir, 'monthly_mf'),
obs_name=observatory)
# Calculate SV from monthly field means
sv_data = tools.calculate_sv(resampled_field_data,
mean_spacing=12)
# Write out the SV data
io.write_csv_data(data=sv_data,
write_dir=os.path.join(download_dir, 'monthly_sv', 'admm'),
obs_name=observatory)
```
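For reference, the annual-differences-of-monthly-means (ADMM) step that `tools.calculate_sv` performs with `mean_spacing=12` can be sketched in plain pandas. This is our own minimal sketch with a synthetic series and a hypothetical helper name, not MagPySV's implementation:

```python
# Minimal ADMM sketch (not MagPySV's implementation): SV as first differences
# of monthly means taken 12 months apart, dated at the centre of each window.
import numpy as np
import pandas as pd

def annual_differences_of_monthly_means(monthly):
    sv = monthly.diff(12).dropna()                 # X(t) - X(t - 12 months), in nT/yr
    sv.index = sv.index - pd.DateOffset(months=6)  # date each estimate mid-window
    return sv

# Synthetic field component drifting by exactly 1 nT/month
dates = pd.date_range('2000-01-01', periods=24, freq='MS')
mf = pd.Series(np.linspace(100.0, 123.0, 24), index=dates)
sv = annual_differences_of_monthly_means(mf)       # every estimate is 12 nT/yr
```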
# High latitude regions
```
from IPython.display import Image
Image("zonemap.png")
```
Rerun the analysis below for each of the three high latitude regions. Apart from the Setup section, everything preceding this cell needs to be run only once.
## Concatenate the data for our selected observatories
Select observatories in one high latitude region.
```
observatory_list = ['MBC', 'RES', 'THL'] # Polar cap
#observatory_list = ['BLC', 'BRW', 'YKC'] # Auroral zone
#observatory_list = ['OTT', 'STJ', 'VIC'] # Sub-auroral zone
```
Concatenate the data for our selected observatories.
```
# Where the data are stored
download_dir = 'data'
# Start and end dates of the analysis as (year, month, day)
start = dt.datetime(1980, 1, 1)
end = dt.datetime(2010, 12, 31)
obs_data, model_sv_data, model_mf_data = io.combine_csv_data(
start_date=start, end_date=end, obs_list=observatory_list,
data_path=os.path.join(download_dir, 'monthly_sv', 'admm'),
model_path='model_predictions', day_of_month=15)
dates = obs_data['date']
obs_data
```
# SV plots
```
for observatory in observatory_list:
fig = plots.plot_sv(dates=dates, sv=obs_data.filter(regex=observatory),
model=model_sv_data.filter(regex=observatory),
fig_size=(6, 6), font_size=10, label_size=16, plot_legend=False,
obs=observatory, model_name='COV-OBS')
```
# Outlier detection
Optionally remove spikes in the data before denoising. Large outliers can affect the denoising process, so for some series (e.g. those from high latitude observatories) it is better to remove them beforehand. Try changing the threshold or window length to see how this affects which points are identified as outliers.
```
obs_data.drop(['date'], axis=1, inplace=True)
for column in obs_data:
obs_data[column] = denoise.detect_outliers(dates=dates, signal=obs_data[column], obs_name=column,
threshold=4,
window_length=120, plot_fig=True, fig_size=(10,3))
obs_data.insert(0, 'date', dates)
```
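The idea behind `denoise.detect_outliers` can be illustrated with a small robust-statistics sketch. This is our own illustration, not MagPySV's actual implementation: points further from a running median than `threshold` robust standard deviations (estimated from the median absolute deviation) are masked.

```python
# Illustrative window-based outlier masking (not MagPySV's implementation)
import numpy as np
import pandas as pd

def detect_outliers_sketch(signal, threshold=4.0, window_length=120):
    s = pd.Series(signal, dtype=float)
    med = s.rolling(window_length, center=True, min_periods=1).median()
    dev = (s - med).abs()
    mad = dev.rolling(window_length, center=True, min_periods=1).median()
    # 1.4826 * MAD estimates the standard deviation for Gaussian noise
    return s.mask(dev > threshold * 1.4826 * mad)

clean = detect_outliers_sketch([0.0, 0.1, -0.2, 50.0, 0.05, -0.1, 0.2, 0.0],
                               threshold=4, window_length=5)  # masks the 50.0 spike
```

Using the median and MAD rather than the mean and standard deviation keeps the spike itself from inflating the rejection threshold.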
# Residuals
To calculate SV residuals, we need SV predictions from a geomagnetic field model. This example uses output from the COV-OBS model by Gillet et al. (2013, Geochem. Geophys. Geosyst., https://doi.org/10.1002/ggge.20041; 2015, Earth, Planets and Space, https://doi.org/10.1186/s40623-015-0225-z) to obtain model predictions for these observatory locations. The code can be obtained from http://www.spacecenter.dk/files/magnetic-models/COV-OBSx1/ and no modifications are necessary to run it using functions found in MagPySV's model_prediction module. For convenience, model output for the locations used in this notebook is included in the examples directory.
```
residuals = tools.calculate_residuals(obs_data=obs_data, model_data=model_sv_data)
model_sv_data.drop(['date'], axis=1, inplace=True)
obs_data.drop(['date'], axis=1, inplace=True)
```
# External noise removal
Compute covariance matrix of the residuals (for all observatories combined) and its eigenvalues and eigenvectors. Since the residuals represent signals present in the data, but not the internal field model, we use them to find a proxy for external magnetic fields (Wardinski & Holme, 2011, GJI, https://doi.org/10.1111/j.1365-246X.2011.04988.x).
```
denoised, proxy, eigenvals, eigenvecs, projected_residuals, corrected_residuals, cov_mat = denoise.eigenvalue_analysis(
dates=dates, obs_data=obs_data, model_data=model_sv_data, residuals=residuals,
proxy_number=1)
```
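The essence of `denoise.eigenvalue_analysis` can be sketched in plain numpy. This is a simplification of the Wardinski & Holme (2011) approach, not MagPySV's implementation: diagonalise the residual covariance matrix, treat the projection onto the largest-eigenvalue direction(s) as the external-field proxy, and subtract that component from the data.

```python
# Simplified eigenvalue-based denoising sketch (not MagPySV's implementation)
import numpy as np

def eigenvalue_denoise_sketch(obs, residuals, proxy_number=1):
    cov = np.cov(residuals, rowvar=False)        # covariance across the SV series
    eigenvals, eigenvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    order = np.argsort(eigenvals)[::-1]          # reorder: largest first
    eigenvals, eigenvecs = eigenvals[order], eigenvecs[:, order]
    noisy = eigenvecs[:, :proxy_number]          # noisiest direction(s)
    proxy = residuals @ noisy                    # external-field proxy time series
    denoised = obs - proxy @ noisy.T             # remove that component from the data
    return denoised, proxy, eigenvals

# Rank-1 "noise" lies along a single direction and is removed entirely
t = np.arange(10.0) - 4.5
residuals = np.outer(t, np.array([1.0, 2.0, -1.0]))
denoised, proxy, eigenvals = eigenvalue_denoise_sketch(residuals, residuals)
```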
# Denoised SV plots
Plots showing the original SV data, the denoised data (optionally with a running average) and the field model predictions.
```
for observatory in observatory_list:
xratio, yratio, zratio = plots.plot_sv_comparison(dates=dates, denoised_sv=denoised.filter(regex=observatory),
residuals=residuals.filter(regex=observatory),
corrected_residuals = corrected_residuals.filter(regex=observatory),
noisy_sv=obs_data.filter(regex=observatory), model=model_sv_data.filter(regex=observatory),
model_name='COV-OBS',
fig_size=(6,6), font_size=10, label_size=14, obs=observatory, plot_rms=True)
```
Plots showing the denoised data (optionally with a running average) and the field model predictions.
```
for observatory in observatory_list:
plots.plot_sv(dates=dates, sv=denoised.filter(regex=observatory), model=model_sv_data.filter(regex=observatory),
fig_size=(6, 6), font_size=10, label_size=14, plot_legend=False, obs=observatory,
model_name='COV-OBS')
```
# Plot proxy signal, eigenvalues and eigenvectors
Compare the proxy signal used to denoise the data with a geomagnetic index at the same temporal resolution. Dst measures the intensity of the ring current; AE measures the intensity of the auroral electrojet. Files included with this notebook for annual differences: dst_admm.csv, ap_admm.csv and ae_admm.csv.
```
plots.plot_index_dft(index_file=os.path.join('index_data', 'dst_admm.csv'), dates=denoised.date, signal=proxy.astype('float'),
fig_size=(6, 6), font_size=10, label_size=14, plot_legend=True, index_name='Dst')
```
Plot the eigenvalues of the covariance matrix of the residuals
```
plots.plot_eigenvalues(values=eigenvals, font_size=12, label_size=16, fig_size=(6, 3))
```
Plot the eigenvectors corresponding to the three largest eigenvalues. The noisiest direction (v_000, used to denoise in this example) is mostly:
Z in the polar region (no strong correlation with the Dst or AE indices), X and Z in the auroral zone (correlates with the AE index), and X and Z in the sub-auroral zone (correlates with the Dst index, similar to European observatories).
```
plots.plot_eigenvectors(obs_names=observatory_list, eigenvecs=eigenvecs[:,0:3], fig_size=(6, 4),
font_size=10, label_size=14)
```
# Outlier detection
Remove remaining spikes in the time series (if needed).
```
denoised.drop(['date'], axis=1, inplace=True)
for column in denoised:
denoised[column] = denoise.detect_outliers(dates=dates, signal=denoised[column], obs_name=column, threshold=5,
window_length=120, plot_fig=False, fig_size=(10, 3), font_size=10,
label_size=14)
denoised.insert(0, 'date', dates)
```
# Write denoised data to file
```
for observatory in observatory_list:
print(observatory)
sv_data=denoised.filter(regex=observatory)
sv_data.insert(0, 'date', dates)
sv_data.columns = ["date", "dX", "dY", "dZ"]
io.write_csv_data(data=sv_data, write_dir=os.path.join(download_dir, 'denoised', 'highlat'),
obs_name=observatory, decimal_dates=False)
```
```
%reload_ext autoreload
%autoreload 2
from fastai.basics import *
```
# Rossmann
## Data preparation / Feature engineering
In addition to the provided data, we will be using external datasets put together by participants in the Kaggle competition. You can download all of them [here](http://files.fast.ai/part2/lesson14/rossmann.tgz). Then you should untar them in the directory to which `PATH` points below.
For completeness, the implementation used to put them together is included below.
```
PATH=Config().data_path()/Path('rossmann/')
table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']
tables = [pd.read_csv(PATH/f'{fname}.csv', low_memory=False) for fname in table_names]
train, store, store_states, state_names, googletrend, weather, test = tables
len(train),len(test)
```
We turn state holidays into booleans to make them more convenient for modeling. We can do calculations on pandas fields using notation very similar (often identical) to numpy.
```
train.StateHoliday = train.StateHoliday!='0'
test.StateHoliday = test.StateHoliday!='0'
```
`join_df` is a function for joining tables on specific fields. By default, we'll be doing a left outer join of `right` on the `left` argument using the given fields for each table.
Pandas does joins using the `merge` method. The `suffixes` argument describes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left untouched, and append a "\_y" to those on the right.
```
def join_df(left, right, left_on, right_on=None, suffix='_y'):
if right_on is None: right_on = left_on
return left.merge(right, how='left', left_on=left_on, right_on=right_on,
suffixes=("", suffix))
```
Join weather/state names.
```
weather = join_df(weather, state_names, "file", "StateName")
```
In pandas you can add new columns to a dataframe by simply defining it. We'll do this for googletrends by extracting dates and state names from the given data and adding those columns.
We're also going to replace all instances of state name 'NI' to match the usage in the rest of the data: 'HB,NI'. This is a good opportunity to highlight pandas indexing. We can use `.loc[rows, cols]` to select a list of rows and a list of columns from the dataframe. In this case, we're selecting rows w/ statename 'NI' by using a boolean list `googletrend.State=='NI'` and selecting "State".
```
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
```
The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.
You should *always* consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll add to every table with a date field.
```
def add_datepart(df, fldname, drop=True, time=False):
"Helper function that adds columns relevant to a date."
fld = df[fldname]
fld_dtype = fld.dtype
if isinstance(fld_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):
fld_dtype = np.datetime64
if not np.issubdtype(fld_dtype, np.datetime64):
df[fldname] = fld = pd.to_datetime(fld, infer_datetime_format=True)
targ_pre = re.sub('[Dd]ate$', '', fldname)
attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
'Is_month_end', 'Is_month_start', 'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
if time: attr = attr + ['Hour', 'Minute', 'Second']
for n in attr: df[targ_pre + n] = getattr(fld.dt, n.lower())
df[targ_pre + 'Elapsed'] = fld.astype(np.int64) // 10 ** 9
if drop: df.drop(fldname, axis=1, inplace=True)
add_datepart(weather, "Date", drop=False)
add_datepart(googletrend, "Date", drop=False)
add_datepart(train, "Date", drop=False)
add_datepart(test, "Date", drop=False)
```
The Google trends data has a special category for the whole of Germany - we'll pull that out so we can use it explicitly.
```
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
```
Now we can outer join all of our data into a single dataframe. Recall that in outer joins, every time a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields. One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.
*Aside*: Why not just do an inner join?
If you are assuming that all records are complete and match on the field you desire, an inner join will do the same thing as an outer join. However, in the event you are wrong or a mistake is made, an outer join followed by a null-check will catch it. (Comparing the number of rows before and after an inner join is equivalent, but requires keeping track of the row counts; the outer join is easier.)
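The null-check pattern can be seen on a toy join (hypothetical data, mirroring the store/state joins below): after a left join, unmatched keys surface as Null on the right-hand columns, and the length of the null-filtered frame counts them.

```python
# Toy left join: store 3 has no state record, so its State column comes back NaN
import pandas as pd

left = pd.DataFrame({'Store': [1, 2, 3]})
right = pd.DataFrame({'Store': [1, 2], 'State': ['HE', 'BY']})
merged = left.merge(right, how='left', on='Store')
n_unmatched = len(merged[merged.State.isnull()])  # count of unmatched rows
```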
```
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])
joined = join_df(train, store, "Store")
joined_test = join_df(test, store, "Store")
len(joined[joined.StoreType.isnull()]),len(joined_test[joined_test.StoreType.isnull()])
joined = join_df(joined, googletrend, ["State","Year", "Week"])
joined_test = join_df(joined_test, googletrend, ["State","Year", "Week"])
len(joined[joined.trend.isnull()]),len(joined_test[joined_test.trend.isnull()])
joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
joined_test = joined_test.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()]),len(joined_test[joined_test.trend_DE.isnull()])
joined = join_df(joined, weather, ["State","Date"])
joined_test = join_df(joined_test, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()]),len(joined_test[joined_test.Mean_TemperatureC.isnull()])
for df in (joined, joined_test):
for c in df.columns:
if c.endswith('_y'):
if c in df.columns: df.drop(c, inplace=True, axis=1)
```
Next we'll fill in missing values to avoid complications with `NA`'s. `NA` (not available) is how Pandas indicates missing values; many models have problems when missing values are present, so it's always important to think about how to deal with them. In these cases, we are picking an arbitrary *signal value* that doesn't otherwise appear in the data.
```
for df in (joined,joined_test):
df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
df['CompetitionOpenSinceMonth'] = df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)
```
Next we'll extract features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of `apply()` in mapping a function across dataframe values.
```
for df in (joined,joined_test):
df["CompetitionOpenSince"] = pd.to_datetime(dict(year=df.CompetitionOpenSinceYear,
month=df.CompetitionOpenSinceMonth, day=15))
df["CompetitionDaysOpen"] = df.Date.subtract(df.CompetitionOpenSince).dt.days
```
We'll replace some erroneous / outlying data.
```
for df in (joined,joined_test):
df.loc[df.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
df.loc[df.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
```
We add "CompetitionMonthsOpen" field, limiting the maximum to 2 years to limit number of unique categories.
```
for df in (joined,joined_test):
df["CompetitionMonthsOpen"] = df["CompetitionDaysOpen"]//30
df.loc[df.CompetitionMonthsOpen>24, "CompetitionMonthsOpen"] = 24
joined.CompetitionMonthsOpen.unique()
```
Same process for Promo dates. You may need to install the `isoweek` package first.
```
# If needed, uncomment:
# ! pip install isoweek
from isoweek import Week
for df in (joined,joined_test):
df["Promo2Since"] = pd.to_datetime(df.apply(lambda x: Week(
x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1))
df["Promo2Days"] = df.Date.subtract(df["Promo2Since"]).dt.days
for df in (joined,joined_test):
df.loc[df.Promo2Days<0, "Promo2Days"] = 0
df.loc[df.Promo2SinceYear<1990, "Promo2Days"] = 0
df["Promo2Weeks"] = df["Promo2Days"]//7
df.loc[df.Promo2Weeks<0, "Promo2Weeks"] = 0
df.loc[df.Promo2Weeks>25, "Promo2Weeks"] = 25
df.Promo2Weeks.unique()
joined.to_pickle(PATH/'joined')
joined_test.to_pickle(PATH/'joined_test')
```
## Durations
It is common when working with time series data to extract data that explains relationships across rows as opposed to columns, e.g.:
* Running averages
* Time until next event
* Time since last event
This is often difficult to do with most table manipulation frameworks, since they are designed to work with relationships across columns. As such, we've written a function to handle this type of data.
We'll define a function `get_elapsed` for cumulative counting across a sorted dataframe. Given a particular field `fld` to monitor, this function will start tracking time since the last occurrence of that field. When the field is seen again, the counter is set to zero.
Until the field is first encountered, this produces datetime NaNs; the counter is also reset every time a new store is seen. We'll see how to use this shortly.
```
def get_elapsed(fld, pre):
day1 = np.timedelta64(1, 'D')
last_date = np.datetime64()
last_store = 0
res = []
for s,v,d in zip(df.Store.values,df[fld].values, df.Date.values):
if s != last_store:
last_date = np.datetime64()
last_store = s
if v: last_date = d
res.append(((d-last_date).astype('timedelta64[D]') / day1))
df[pre+fld] = res
```
We'll be applying this to a subset of columns:
```
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
#df = train[columns]
df = train[columns].append(test[columns])
```
Let's walk through an example.
Say we're looking at School Holiday. We'll first sort by Store, then Date, and then call `get_elapsed('SchoolHoliday', 'After')`:
This will:
* be applied to every row of the dataframe, in order of store and date
* add to the dataframe the number of days since the last School Holiday
* count the days until the next holiday instead, if we sort in the other direction
```
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
```
We'll do this for two more fields.
```
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
```
We're going to set the active index to Date.
```
df = df.set_index("Date")
```
Then set null values from elapsed field calculations to 0.
```
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
for o in ['Before', 'After']:
    for p in columns:
        a = o+p
        df[a] = df[a].fillna(0).astype(int)
```
Next we'll demonstrate window functions in pandas to calculate rolling quantities.
Here we're sorting by date (`sort_index()`) and counting the number of events of interest (`sum()`) defined in `columns` in the following week (`rolling()`), grouped by Store (`groupby()`). We do the same in the opposite direction.
```
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False).groupby("Store").rolling(7, min_periods=1).sum()
```
Next we want to drop the Store indices grouped together in the window function.
Often in pandas, there is an option to do this in place. This is time and memory efficient when working with large datasets.
```
bwd.drop('Store', axis=1, inplace=True)
bwd.reset_index(inplace=True)
fwd.drop('Store', axis=1, inplace=True)
fwd.reset_index(inplace=True)
df.reset_index(inplace=True)
```
Now we'll merge these values onto the df.
```
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns, axis=1, inplace=True)
df.head()
```
It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one; that way you can easily go back to them if you need to make changes.
```
df.to_pickle(PATH/'df')
df["Date"] = pd.to_datetime(df.Date)
df.columns
joined = pd.read_pickle(PATH/'joined')
joined_test = pd.read_pickle(PATH/'joined_test')
joined = join_df(joined, df, ['Store', 'Date'])
joined_test = join_df(joined_test, df, ['Store', 'Date'])
```
The authors also removed all instances where the store had zero sales / was closed. We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little exploratory data analysis reveals that there are often periods where stores are closed, typically for refurbishment. Before and after these periods, there are natural spikes in sales, as one might expect. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
```
joined = joined[joined.Sales!=0]
```
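As a hedged sketch of the alternative (hypothetical column names, not the authors' code): one could instead keep the closed-store rows and expose closures as a feature, e.g. the length of the closure run each row belongs to, so the model can associate long closures with the surrounding sales spikes:

```python
import pandas as pd

# Toy 'Open' column for one store: 0 while the store is closed (hypothetical data)
s = pd.DataFrame({'Open': [1, 1, 0, 0, 0, 1, 1]})

# Label each contiguous run of open/closed days
s['run'] = (s.Open != s.Open.shift()).cumsum()

# Length of the closure run each row belongs to (0 for open days)
s['closure_len'] = s.groupby('run').Open.transform(lambda x: (1 - x).sum())
print(s.closure_len.tolist())  # [0, 0, 3, 3, 3, 0, 0]
```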
We'll back this up as well.
```
joined.reset_index(inplace=True)
joined_test.reset_index(inplace=True)
joined.to_pickle(PATH/'train_clean')
joined_test.to_pickle(PATH/'test_clean')
```
# Introduction to the Jupyter notebook
**Authors**: Thierry D.G.A Mondeel, Stefania Astrologo, Ewelina Weglarz-Tomczak & Hans V. Westerhoff <br/>
University of Amsterdam <br/>
2016 - 2019
**Acknowledgements:** This material is heavily based on [Learning IPython for Interactive Computing and Data Visualization, second edition](https://github.com/ipython-books/minibook-2nd-code).
# Why are we here?
This first notebook is a fast introduction to the user interface of the Jupyter notebook. The point is to get you comfortable with executing cells of code, adding new ones and finding your way around. Do not spend more than 20 minutes on this part of the tutorial.
# The Jupyter notebook: What is it for?
The Jupyter notebook is a flexible tool that helps you create readable analyses. You can keep data, code, images, comments, formulae and plots together.
# Take the user interface tour
<span style="color:red">**Assignment (5 min):**</span> Take the user interface tour by clicking "Help > User Interface Tour"
# Cells and cells
Note that Jupyter notebooks have confusing terminology: the boxes of text and code (like this one) are referred to as cells. Not to be confused with biological cells ;)
## The most important buttons of the menu at the top of the screen
At a minimum familiarize yourself with these buttons and what they do:
* The "Run" button: run the currently selected cell
* The "+" button: adds a new empty cell
* In the "Cell" menu > "Run all": runs all cells in the notebook.
* The "File > Download as" menu
* The "Cell type" toolbar button (the one labeled "Markdown" if this cell is selected). This button switches a cell between text (Markdown) and code.
# Running snippets of code
First and foremost, the Jupyter Notebook is an interactive environment for writing and running code. The notebook is capable of running code in a wide range of languages. However, each notebook is associated with a single "kernel" for a specific language. This notebook is associated with the Python kernel and therefore runs Python code.
## Code cells allow you to enter and run code
Run the "code cell" below by selecting it (click on it with your mouse) and using `Shift-Enter` or pressing the <button class='btn btn-default btn-xs'><i class="icon-step-forward fa fa-step-forward"></i></button> button in the toolbar above.
The cell defines a variable `a`, assigns the value 10 to it, and displays the variable.
```
a = 10
a
```
# A notebook may have two useful types of cells
* A **Markdown cell** contains text (like this one). In addition to formatting options like bold or italics, we can add links, images, HTML elements, mathematical equations, and more.
* A **code cell** contains code to be executed by the Python kernel. There are also kernels for other languages like [R](https://www.r-project.org/).
<span style="color:red">**Assignment (2 min):**</span> Try adding a code cell and a markdown cell below.
In the toolbar use the "+" to add new cells. Focus on a cell and use the toolbar to make it a code or markdown cell.
- In the code cell try computing 2*2
- Write some text, e.g. your name, in the markdown cell
# Keyboard shortcuts
If you are on a Mac, replace `Ctrl` with `Cmd`.
* `Shift`-`Enter`: run the cell and select the cell below
* `Ctrl`-`s`: save the notebook
<span style="color:red">**Assignment (1 min):**</span> Try these out on the cells above
# Memory and the kernel
Code is run in a separate process called the Kernel. The Kernel can be interrupted or restarted.
<span style="color:red">**Assignment (1 min):**</span>
Try running the following cell, it contains a sleep command that will do absolutely nothing for 20 seconds. During this time the kernel will be busy. Notice that in the top-right corner the circle will be black to indicate this. Hit the <button class='btn btn-default btn-xs'><i class='icon-stop fa fa-stop'></i></button> button in the toolbar above to interrupt the computation.
```
import time
time.sleep(20)
```
**Key takeaway:** the circular indicator shows you if Jupyter is busy computing something, and you can interrupt this if needed.
# Plots: The power of powers
As a fun introduction to doing science using Python let's look at exponential growth.
"The greatest shortcoming of the human race is our inability to understand the exponential function." --Albert Allen Bartlett (https://en.wikipedia.org/wiki/Albert_Allen_Bartlett)
Exponential growth is fast. Consider a population of bacteria or cancer cells. Each generation, every bacterium in the population divides into two. The code below shows the (perhaps surprising) rate of growth in the number of bacteria.
<span style="color:red">**Assignment (3 min):**</span>
* Execute the two cells below.
* Write down the number of bacteria/cells after 25 generations. Look carefully at the y-axis.
* Change the number of generations in the code cell to 50. Execute the cell again and notice the change on the y-axis.
* You doubled the number of generations. By how much did the number of bacteria increase?
* Are you surprised or not?
```
import matplotlib.pyplot as plt
population_size = {0:1} # in generation 0 there is one bacterium
for generation in range(1,25): # simulation of generations 1-24
    population_size[generation] = population_size[generation-1]*2
plt.plot(list(population_size.values()))
ax = plt.gca() # plt.gca gets the current axes so that we can alter their properties
ax.set_xlabel('Generations')
ax.set_ylabel('# bacteria')
plt.show()
```
# The end
You should now be comfortable with the interface and running code cells.
Return to the "tutorial hub" notebook and continue with the next part of the tutorial.
# Neuromorphic engineering I
## Lab 8: Silicon Synaptic Circuits
Team member 1: Jan Hohenheim
Team member 2: Maxim Gärtner
Date:
----------------------------------------------------------------------------------------------------------------------
This week, we will see how synaptic circuits generate currents when stimulated by voltage pulses. Specifically, we will measure the response of the synapse to a single pulse, and to a sequence of spikes.
The objectives of this lab are to:
- Analyze log-domain synapse circuits.
- Measure the response properties of the diff-pair integrator (DPI) synapse and of the dual diff-pair integrator (DDI) synapse.
## 1. Prelab
**A Differential Pair Integrator circuit**

**(1)** Write the equations characterizing $I_{w}, I_{thr} , I_{in}, I_{\tau}, I_{syn}, I_C$ assuming all corresponding FETs are in saturation and operate in weak-inversion.
> - $I_w = I_0 e^\frac{\kappa V_w}{U_T}$
> - $I_{thr} = I_0 e^\frac{\kappa V_{thr} - V_s}{U_T}$
> - $I_{in} = I_0 e^\frac{\kappa V_{syn} - V_s}{U_T} = I_w \frac{e^\frac{\kappa V_{syn}}{U_T}}{e^\frac{\kappa V_{syn}}{U_T} + e^\frac{\kappa V_{thr}}{U_T}}$
> - $I_{\tau} = I_0 e^\frac{\kappa(V_{dd} - V_\tau)}{U_T}$
> - $I_{syn} = I_0 e^\frac{\kappa(V_{dd} - V_{syn})}{U_T}$
> - $I_C = C \frac{d}{dt} (V_{dd} - V_{syn}) = -C \frac{d V_{syn}}{dt}$
>
> (with $V_s$ the common source voltage of the differential pair)
**(2)** What is the time constant of the circuit?
> $\tau = \frac{CU_T}{\kappa I_\tau}$
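As a numeric sanity check of $\tau = \frac{C U_T}{\kappa I_\tau}$ (the parameter values below are illustrative textbook values, not chip measurements):

```python
# Illustrative values (assumptions, not measured on the chip):
C = 2e-12       # synapse capacitance [F], 2 pF as stated in the DPI section
U_T = 0.025     # thermal voltage [V] at room temperature
kappa = 0.7     # typical subthreshold slope factor
I_tau = 60e-12  # leak current [A], e.g. the 60 pA master current

# tau = C * U_T / (kappa * I_tau)
tau = C * U_T / (kappa * I_tau)
print(f"tau = {tau*1e3:.2f} ms")  # on the order of a millisecond
```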
**(3)** Derive the circuit's response to a step input assuming $I_{w}(t < 0) = 0, I_{w}(t > 0) \gg I_{\tau}$.
> - $I_w \ll I_\tau \Rightarrow \tau \frac{d}{dt}I_{syn} + I_{syn} = 0 \Rightarrow \frac{d}{dt}I_{syn} = - \frac{I_{syn}}{\tau}$
> - $I_w \gg I_\tau \Rightarrow \tau \frac{d}{dt}I_{syn} + I_{syn} = \frac{I_w I_{thr}}{I_\tau} \Rightarrow \frac{d}{dt}I_{syn} = \frac{I_w I_{thr} - I_{syn}I_\tau}{\tau I_\tau}$
```
import numpy as np
import matplotlib.pyplot as plt
def get_next_I_syn(I_syn, tau, I_tau, I_thr, I_w, dt):
    return I_syn + (I_w*I_thr - I_syn*I_tau)/(tau * I_tau)*dt
tau = 0.3
I_tau = 5e-9
I_w = 5e-7
I_thr = 5e-6
x = np.linspace(0, 2, 100)
dt = x[1] - x[0]
y = [0]
for _ in range(len(x[1:])):
    I_syn = get_next_I_syn(y[-1], tau, I_tau, I_thr, I_w, dt)
    y.append(I_syn)
plt.plot(x, y, label="$I_{syn}$")
plt.title(r"$I_{syn}$ with $I_{w}(t < 0) = 0, I_{w}(t > 0) \gg I_{\tau}$")
plt.ylabel("$I_{syn}$ [A]")
plt.xlabel("t [s]")
plt.legend()
plt.show()
```
**(4)** Derive the circuit's response to a step input assuming $I_{w}(t < 0) \gg I_{\tau}, I_{w}(t > 0) = 0$.
> - $I_w \ll I_\tau \Rightarrow \tau \frac{d}{dt}I_{syn} + I_{syn} = 0 \Rightarrow \frac{d}{dt}I_{syn} = - \frac{I_{syn}}{\tau}$
> - $I_w \gg I_\tau \Rightarrow \tau \frac{d}{dt}I_{syn} + I_{syn} = \frac{I_w I_{thr}}{I_\tau} \Rightarrow \frac{d}{dt}I_{syn} = \frac{I_w I_{thr} - I_{syn}I_\tau}{\tau I_\tau}$
```
import numpy as np
import matplotlib.pyplot as plt
def get_next_I_syn(I_syn, tau, I_tau, I_thr, I_w, dt):
    return I_syn + (I_w*I_thr - I_syn*I_tau)/(tau * I_tau)*dt
tau = 0.3
I_tau = 5e-7
I_w = 5e-9
I_thr = 5e-6
x = np.linspace(0, 2, 100)
dt = x[1] - x[0]
y = [5e-4]
for _ in range(len(x[1:])):
    I_syn = get_next_I_syn(y[-1], tau, I_tau, I_thr, I_w, dt)
    y.append(I_syn)
plt.plot(x, y, label="$I_{syn}$")
plt.title(r"$I_{syn}$ with $I_{w}(t < 0) \gg I_{\tau}, I_{w}(t > 0) = 0$")
plt.ylabel("$I_{syn}$ [A]")
plt.xlabel("t [s]")
plt.legend()
plt.show()
```
**(5)** Suppose we stimulate the circuit with a regular spike train of frequency $f$ (high enough). What happens to $I_{syn}$ in steady-state (average value)?
> $\tau \frac{d}{dt}I_{syn} + I_{syn} = \frac{I_w I_{thr}}{I_\tau}$
> Steady-state $\Rightarrow \frac{d}{dt}I_{syn} = 0\frac{A}{s}$
> $\Rightarrow I_{syn} = \frac{I_w I_{thr}}{I_\tau}$
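A quick Euler check (same illustrative parameter values as in the prelab simulations above, with $I_w$ held high to mimic the high-frequency-input limit) confirms this steady-state value:

```python
# Euler integration of tau*dI_syn/dt + I_syn = I_w*I_thr/I_tau with I_w on.
# Parameter values are illustrative, not chip measurements.
tau, I_tau, I_w, I_thr = 0.3, 5e-9, 5e-7, 5e-6
dt = 1e-3
I_syn = 0.0
for _ in range(10000):  # simulate 10 s >> tau = 0.3 s
    I_syn += (I_w*I_thr - I_syn*I_tau) / (tau*I_tau) * dt
print(I_syn)  # approaches the predicted steady state I_w*I_thr/I_tau = 5e-4 A
```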
**(6)** In what conditions (tau and thr) is the step response dependent only on $I_{w}$?
> Per the formula above, when $I_{thr} = I_\tau$
# 2 Setup
## 2.1 Connect the device
```
# import the necessary libraries
import pyplane
import time
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
# create a Plane object and open the communication
if 'p' not in locals():
    p = pyplane.Plane()
    try:
        p.open('/dev/ttyACM0')
    except RuntimeError as e:
        del p
        print(e)
p.get_firmware_version()
# Send a reset signal to the board, check if the LED blinks
p.reset(pyplane.ResetType.Soft)
time.sleep(0.5)
# NOTE: You must send these request events every time you do a reset operation, otherwise the received data is noisy,
# because the chip needs to do a handshake to get the communication correct.
p.request_events(1)
# Try to read something, make sure the chip responds
p.read_current(pyplane.AdcChannel.GO0_N)
# If any of the above steps fail, delete the object, close and halt, stop the server and ask the TA to restart
# please also say your board number: ttyACMx
# del p
```
## 2.2 Chip configuration
* To measure DPI synapse:
```
p.send_coach_events([pyplane.Coach.generate_aerc_event(
    pyplane.Coach.CurrentOutputSelect.SelectLine5,
    pyplane.Coach.VoltageOutputSelect.SelectLine2,
    pyplane.Coach.VoltageInputSelect.NoneSelected,
    pyplane.Coach.SynapseSelect.DPI, 0)])
```
## 2.3 C2F
* To set up the C2F circuit:
```
# setup C2F
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.C2F_HYS_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.C2F_BIAS_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I240nA, 255)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.C2F_PWLK_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I240nA, 255)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.C2F_REF_L, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I240nA, 255)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.C2F_REF_H, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I240nA, 255)])
# setup output rail-to-rail buffer
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.RR_BIAS_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I240nA, 255)])
```
## 2.4 BiasGen
In a simplified form, the output of a branch of the BiasGen will be the gate voltage $V_b$ for the bias current $I_b$, and if the current mirror has a ratio of $w$ and the bias transistor operates in subthreshold-saturation:
\begin{equation}
I_b = w\frac{BG_{fine}}{256}I_{BG_{master}}
\end{equation}
Where $I_{BG_{master}}$ is the `BiasGenMasterCurrent` $\in \left\{ 60~\rm{pA}, 460~\rm{pA}, 3.8~\rm{nA}, 30~\rm{nA}, 240~\rm{nA} \right\}$ and $BG_{fine}$ is the integer fine value $\in [0, 256)$.
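As a numeric sanity check of the formula (assuming a mirror ratio $w = 1$; the helper function is ours for illustration, not part of `pyplane`):

```python
# Hypothetical helper illustrating I_b = w * (BG_fine / 256) * I_BG_master
def bias_current(fine, master_current, w=1):
    assert 0 <= fine < 256, "fine value is an integer in [0, 256)"
    return w * fine / 256 * master_current

# e.g. fine value 100 on the 60 pA master current:
print(bias_current(100, 60e-12))  # about 23 pA
```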
To set a bias, use the function similar to the following:
```
p.send_coach_event(pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.BIAS_NAME, \
pyplane.Coach.BiasType.BIAS_TYPE, \
pyplane.Coach.BiasGenMasterCurrent.MASTER_CURRENT, FINE_VALUE))
```
**You may have noticed that there are some biases that are not used to directly generate a current, but rather what matters is the voltage, e.g. $V_{gain}$, $V_{ex}$ and $V_{inh}$ in our HWTA circuit. Even though they may have a `BIAS_NAME` ending with `_N` or `_P` it only indicates that they are connected to the gate of an N- or a P-FET, but the `BIAS_TYPE` parameter can be both `_N` or `_P`. For example, setting a `_N` bias to `BIAS_TYPE = P` will only make this voltage very close to GND, which _is_ sometimes the designed use case.**
## 2.5 Pulse extender circuit
In case you didn't look into the last problem in the prelab: the pulse extender circuit defines the pulse width, which is inversely proportional to the parameter `PEX_VTAU_N`.
# 3 DPI synapse
The **DPI synapse** receives a voltage pulse train, $V_{pulse}$, as input and
outputs a corresponding synaptic current, $I_{syn}$. Additionally, the synaptic voltage, $V_{syn}$, is provided.
Bias parameters $V_{weight}$ & $V_{tau}$ affect the amplitude and decay of the response, while $V_{thr}$ acts as an additional weight bias. $C_{syn}$ sizing was chosen for a capacitance of 2pF.

**Pin map**
**$V_{syn}$ = adc[14]**
**$I_{syn}$ = c2f[9]**
The task of this exercise is to tune the parameters and observe the behavior of the DPI synapse.
## 3.1 Basic impulse response
- **Set parameters**
```
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
```
- **Data acquisition**
```
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
    p.send_coach_events([pyplane.Coach.generate_pulse_event()])
    for i in range(N_samples_per_pulse):
        vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
        inter = p.read_c2f_output(dT)
        isyn[k*N_samples_per_pulse+i] += inter[9]
```
- **Plot the data**
```
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams.update({'font.size': 15})
t,vsyn,isyn = np.loadtxt('data/data_ex_3_1.csv',delimiter=',')
plt.plot(t,vsyn,'-')
plt.xlabel('t [s]')
plt.ylabel('$V_{syn}$ [V]')
plt.legend(['$V_{syn}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 1: Measured values of $V_{syn}$ as a function of time')
plt.grid()
plt.show()
plt.plot(t,isyn,'-')
plt.xlabel('t [s]')
plt.ylabel('C2F [Hz]')
plt.legend(['C2F$(I_{syn})$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 2: Measured C2F values of $I_{syn}$ as a function of time')
plt.grid()
plt.show()
```
- **Save the data**
```
np.savetxt('data/data_ex_3_1.csv',[t,vsyn,isyn] , delimiter=',')
```
## 3.2 Different $I_{weight}$
Repeat 3.1 with a smaller and a larger $I_{weight}$, compare the three curves in the same plot.
- **Set smaller bias**
```
## REMINDER: RESET ALL PARAMETERS AS IN 3.1
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 50)]) #change weight
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
```
- **Data acquisition**
```
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
    p.send_coach_events([pyplane.Coach.generate_pulse_event()])
    for i in range(N_samples_per_pulse):
        vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
        inter = p.read_c2f_output(dT)
        isyn[k*N_samples_per_pulse+i] += inter[9]
```
- **Save data**
```
np.savetxt('data/data_ex_3_2_smaller.csv',[t,vsyn,isyn] , delimiter=',')
```
- **Set larger bias**
```
#Insert a bigger I weight
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 150)]) #change weight
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
```
- **Data acquisition**
```
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
    p.send_coach_events([pyplane.Coach.generate_pulse_event()])
    for i in range(N_samples_per_pulse):
        vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
        inter = p.read_c2f_output(dT)
        isyn[k*N_samples_per_pulse+i] += inter[9]
```
- **Save data**
```
np.savetxt('data/data_ex_3_2_bigger.csv',[t,vsyn,isyn] , delimiter=',')
```
- **Plot**
```
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams.update({'font.size': 15})
t,vsyn_smaller,isyn_smaller = np.loadtxt('data/data_ex_3_2_smaller.csv',delimiter=',')
_,vsyn_normal,isyn_normal = np.loadtxt('data/data_ex_3_1.csv',delimiter=',')
_,vsyn_bigger,isyn_bigger = np.loadtxt('data/data_ex_3_2_bigger.csv',delimiter=',')
plt.plot(t,vsyn_smaller,t,vsyn_normal,t,vsyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('$V_{syn}$ [V]')
plt.legend(['$V_{syn}$ - Smaller $I_w$','$V_{syn}$ - Normal $I_w$','$V_{syn}$ - Larger $I_w$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 3: Measured values of $V_{syn}$ as function of time for varying $I_{w}$')
plt.grid()
plt.show()
plt.plot(t[1:],isyn_smaller[1:],t,isyn_normal,t,isyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('C2F [Hz]')
plt.legend(['C2F$(I_{syn})$ - Smaller $I_w$','C2F$(I_{syn})$ - Normal $I_w$','C2F$(I_{syn})$ - Larger $I_w$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 4: Measured values of $I_{syn}$ as function of time for varying $I_{w}$')
plt.grid()
plt.show()
```
## 3.3 Different $I_{tau}$
Repeat 3.1 with a smaller and a larger $I_{tau}$, compare the three curves in the same plot.
```
## REMINDER: RESET ALL PARAMETERS AS IN 3.1
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)]) #change tau
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
    p.send_coach_events([pyplane.Coach.generate_pulse_event()])
    for i in range(N_samples_per_pulse):
        vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
        inter = p.read_c2f_output(dT)
        isyn[k*N_samples_per_pulse+i] += inter[9]
np.savetxt('data/data_ex_3_3_smaller.csv',[t,vsyn,isyn] , delimiter=',')
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 40)]) #change tau
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
    p.send_coach_events([pyplane.Coach.generate_pulse_event()])
    for i in range(N_samples_per_pulse):
        vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
        inter = p.read_c2f_output(dT)
        isyn[k*N_samples_per_pulse+i] += inter[9]
np.savetxt('data/data_ex_3_3_bigger.csv',[t,vsyn,isyn] , delimiter=',')
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams.update({'font.size': 15})
t,vsyn_smaller,isyn_smaller = np.loadtxt('data/data_ex_3_3_smaller.csv',delimiter=',')
_,vsyn_normal,isyn_normal = np.loadtxt('data/data_ex_3_1.csv',delimiter=',')
_,vsyn_bigger,isyn_bigger = np.loadtxt('data/data_ex_3_3_bigger.csv',delimiter=',')
plt.plot(t,vsyn_smaller,t,vsyn_normal,t,vsyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('$V_{syn}$ [V]')
plt.legend(['$V_{syn}$ - Smaller $I_{𝜏}$','$V_{syn}$ - Normal $I_{𝜏}$','$V_{syn}$ - Larger $I_{𝜏}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 5: Measured values of $V_{syn}$ as function of time for varying $I_{𝜏}$')
plt.grid()
plt.show()
plt.plot(t,isyn_smaller,t,isyn_normal,t,isyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('C2F [Hz]')
plt.legend(['C2F$(I_{syn})$ - Smaller $I_{𝜏}$','C2F$(I_{syn})$ - Normal $I_{𝜏}$','C2F$(I_{syn})$ - Larger $I_{𝜏}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 6: Measured values of $I_{syn}$ as function of time for varying $I_{𝜏}$')
plt.grid()
plt.show()
```
## 3.4 Different $I_{thr}$
Repeat 3.1 with a smaller and a larger $I_{thr}$, compare the three curves in the same plot.
```
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)]) #change threshold
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
    p.send_coach_events([pyplane.Coach.generate_pulse_event()])
    for i in range(N_samples_per_pulse):
        vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
        inter = p.read_c2f_output(dT)
        isyn[k*N_samples_per_pulse+i] += inter[9]
np.savetxt('data/data_ex_3_4_smaller.csv',[t,vsyn,isyn] , delimiter=',')
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 80)]) #change threshold
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
    p.send_coach_events([pyplane.Coach.generate_pulse_event()])
    for i in range(N_samples_per_pulse):
        vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
        inter = p.read_c2f_output(dT)
        isyn[k*N_samples_per_pulse+i] += inter[9]
np.savetxt('data/data_ex_3_4_bigger.csv',[t,vsyn,isyn] , delimiter=',')
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams.update({'font.size': 15})
t,vsyn_smaller,isyn_smaller = np.loadtxt('data/data_ex_3_4_smaller.csv',delimiter=',')
_,vsyn_normal,isyn_normal = np.loadtxt('data/data_ex_3_1.csv',delimiter=',')
_,vsyn_bigger,isyn_bigger = np.loadtxt('data/data_ex_3_4_bigger.csv',delimiter=',')
plt.plot(t,vsyn_smaller,t,vsyn_normal,t,vsyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('$V_{syn}$ [V]')
plt.legend(['$V_{syn}$ - Smaller $I_{thr}$','$V_{syn}$ - Normal $I_{thr}$','$V_{syn}$ - Larger $I_{thr}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 7: Measured values of $V_{syn}$ as function of time for varying $I_{thr}$')
plt.grid()
plt.show()
plt.plot(t[1:],isyn_smaller[1:],t,isyn_normal,t,isyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('C2F [Hz]')
plt.legend(['C2F$(I_{syn})$ - Smaller $I_{thr}$','C2F$(I_{syn})$ - Normal $I_{thr}$','C2F$(I_{syn})$ - Larger $I_{thr}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 8: Measured values of $I_{syn}$ as function of time for varying $I_{thr}$')
plt.grid()
plt.show()
```
## 3.5 Different pulse width
Repeat 3.1 with a smaller and a larger pulse width, compare the three curves in the same plot.
```
p.send_coach_events([pyplane.Coach.generate_biasgen_event(
    pyplane.Coach.BiasAddress.DPI_VTAU_P,
    pyplane.Coach.BiasType.P,
    pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(
    pyplane.Coach.BiasAddress.DPI_VTHR_N,
    pyplane.Coach.BiasType.P,
    pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(
    pyplane.Coach.BiasAddress.DPI_VWEIGHT_N,
    pyplane.Coach.BiasType.N,
    pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(
    pyplane.Coach.BiasAddress.PEX_VTAU_N,
    pyplane.Coach.BiasType.N,
    pyplane.Coach.BiasGenMasterCurrent.I60pA, 6)])  # Change pulse width
N_pulses = 2  # for each trial, send 2 input pulses
N_samples_per_pulse = 10  # for each input pulse, sample 10 points
N_samples = N_pulses * N_samples_per_pulse
dT = 0.02  # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples) * dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
    p.send_coach_events([pyplane.Coach.generate_pulse_event()])
    for i in range(N_samples_per_pulse):
        vsyn[k*N_samples_per_pulse + i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
        inter = p.read_c2f_output(dT)
        isyn[k*N_samples_per_pulse + i] += inter[9]
np.savetxt('data/data_ex_3_5_smaller.csv', [t, vsyn, isyn], delimiter=',')
p.send_coach_events([pyplane.Coach.generate_biasgen_event(
    pyplane.Coach.BiasAddress.DPI_VTAU_P,
    pyplane.Coach.BiasType.P,
    pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(
    pyplane.Coach.BiasAddress.DPI_VTHR_N,
    pyplane.Coach.BiasType.P,
    pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(
    pyplane.Coach.BiasAddress.DPI_VWEIGHT_N,
    pyplane.Coach.BiasType.N,
    pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(
    pyplane.Coach.BiasAddress.PEX_VTAU_N,
    pyplane.Coach.BiasType.N,
    pyplane.Coach.BiasGenMasterCurrent.I60pA, 14)])  # Change pulse width
N_pulses = 2  # for each trial, send 2 input pulses
N_samples_per_pulse = 10  # for each input pulse, sample 10 points
N_samples = N_pulses * N_samples_per_pulse
dT = 0.02  # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples) * dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
    p.send_coach_events([pyplane.Coach.generate_pulse_event()])
    for i in range(N_samples_per_pulse):
        vsyn[k*N_samples_per_pulse + i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
        inter = p.read_c2f_output(dT)
        isyn[k*N_samples_per_pulse + i] += inter[9]
np.savetxt('data/data_ex_3_5_bigger.csv', [t, vsyn, isyn], delimiter=',')
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams.update({'font.size': 15})
t,vsyn_smaller,isyn_smaller = np.loadtxt('data/data_ex_3_5_smaller.csv',delimiter=',')
_,vsyn_normal,isyn_normal = np.loadtxt('data/data_ex_3_1.csv',delimiter=',')
_,vsyn_bigger,isyn_bigger = np.loadtxt('data/data_ex_3_5_bigger.csv',delimiter=',')
plt.plot(t,vsyn_smaller,t,vsyn_normal,t,vsyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('$V_{syn}$ [V]')
plt.legend(['$V_{syn}$ - Smaller $I_{\\rm{pulse\ width}}$','$V_{syn}$ - Normal $I_{\\rm{pulse\ width}}$','$V_{syn}$ - Larger $I_{\\rm{pulse\ width}}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 9: Measured values of $V_{syn}$ as function of time for varying $I_{\\rm{pulse\ width}}$')
plt.grid()
plt.show()
plt.plot(t[1:],isyn_smaller[1:],t,isyn_normal,t,isyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('C2F [Hz]')
plt.legend(['C2F$(I_{syn})$ - Smaller $I_{\\rm{pulse\ width}}$','C2F$(I_{syn})$ - Normal $I_{\\rm{pulse\ width}}$','C2F$(I_{syn})$ - Larger $I_{\\rm{pulse\ width}}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 10: Measured values of $I_{syn}$ as function of time for varying $I_{\\rm{pulse\ width}}$')
plt.grid()
plt.show()
```
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_01_ai_gym.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 12: Reinforcement Learning**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 12 Video Material
* **Part 12.1: Introduction to the OpenAI Gym** [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)
* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)
* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)
* Part 12.4: Atari Games with Keras Neural Networks [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)
* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_apply_rl.ipynb)
# Part 12.1: Introduction to the OpenAI Gym
[OpenAI Gym](https://gym.openai.com/) aims to provide an easy-to-setup general-intelligence benchmark with a wide variety of different environments. The goal is to standardize how environments are defined in AI research publications so that published research becomes more easily reproducible. The project claims to provide the user with a simple interface. As of June 2017, developers can only use Gym with Python.
OpenAI gym is pip-installed onto your local machine. There are a few significant limitations to be aware of:
* OpenAI Gym Atari only **directly** supports Linux and Macintosh
* OpenAI Gym Atari can be used with Windows; however, it requires a particular [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30)
* OpenAI Gym cannot directly render animated games in Google CoLab.
Because OpenAI Gym requires a graphics display, the only way to display Gym in Google CoLab is an embedded video. The presentation of OpenAI Gym game animations in Google CoLab is discussed later in this module.
### OpenAI Gym Leaderboard
The OpenAI Gym does have a leaderboard, similar to Kaggle; however, the OpenAI Gym's leaderboard is much more informal compared to Kaggle. All scoring is performed on the user's local machine, so the leaderboard is strictly an "honor system." The leaderboard is maintained in the following GitHub repository:
* [OpenAI Gym Leaderboard](https://github.com/openai/gym/wiki/Leaderboard)
If you submit a score, you are required to provide a writeup with sufficient instructions to reproduce your result. A video of your results is suggested, but not required.
### Looking at Gym Environments
The centerpiece of Gym is the environment, which defines the "game" in which your reinforcement algorithm will compete. An environment does not need to be a game; however, it describes the following game-like features:
* **action space**: What actions can we take on the environment, at each step/episode, to alter the environment.
* **observation space**: What is the current state of the portion of the environment that we can observe. Usually, we can see the entire environment.
Before we begin to look at Gym, it is essential to understand some of the terminology used by this library.
* **Agent** - The machine learning program or model that controls the actions.
* **Step** - One round of issuing actions that affect the observation space.
* **Episode** - A collection of steps that terminates when the agent fails to meet the environment's objective, or the episode reaches the maximum number of allowed steps.
* **Render** - Gym can render one frame for display after each episode.
* **Reward** - A positive reinforcement that can occur at the end of each episode, after the agent acts.
* **Nondeterministic** - For some environments, randomness is a factor in deciding what effects actions have on reward and changes to the observation space.
It is important to note that many of the gym environments specify that they are not nondeterministic even though they make use of random numbers to process actions. It is generally agreed (based on the gym GitHub issue tracker) that the nondeterministic property means an environment will behave randomly even when given a consistent seed value. A program can call an environment's seed method to seed its random number generator.
The Gym library allows us to query some of these attributes from environments. I created the following function to query gym environments.
```
import gym
def query_environment(name):
    env = gym.make(name)
    spec = gym.spec(name)
    print(f"Action Space: {env.action_space}")
    print(f"Observation Space: {env.observation_space}")
    print(f"Max Episode Steps: {spec.max_episode_steps}")
    print(f"Nondeterministic: {spec.nondeterministic}")
    print(f"Reward Range: {env.reward_range}")
    print(f"Reward Threshold: {spec.reward_threshold}")
```
We will begin by looking at the MountainCar-v0 environment, which challenges an underpowered car to escape the valley between two mountains. The following code describes the Mountain Car environment.
```
query_environment("MountainCar-v0")
```
There are three distinct actions that can be taken: accelerate forward, decelerate, or accelerate backward. The observation space contains two continuous (floating-point) values, as evidenced by the Box object: the position and velocity of the car. The car has 200 steps to escape for each episode. You would have to look at the code to know this, but the mountain car receives no incremental reward; the only reward is given when it escapes the valley.
```
query_environment("CartPole-v1")
```
The CartPole-v1 environment challenges the agent to move a cart while keeping a pole balanced. The environment has an observation space of 4 continuous numbers:
* Cart Position
* Cart Velocity
* Pole Angle
* Pole Velocity At Tip
To achieve this goal, the agent can take the following actions:
* Push cart to the left
* Push cart to the right
There is also a continuous variant of the mountain car. This version does not simply have the motor on or off. For the continuous car the action space is a single floating point number that specifies how much forward or backward force is being applied.
```
query_environment("MountainCarContinuous-v0")
```
Note: ignore the warning above; it is a relatively inconsequential bug in OpenAI Gym.
Atari games, like Breakout, can use an observation space that is either the size of the Atari screen (210x160 pixels) or the RAM memory of the Atari (128 bytes) to determine the state of the game. Yes, that's bytes, not kilobytes!
```
query_environment("Breakout-v0")
query_environment("Breakout-ram-v0")
```
### Render OpenAI Gym Environments from CoLab
It is possible to visualize the game your agent is playing, even on CoLab. This section provides information on how to generate a video in CoLab that shows you an episode of the game your agent is playing. This video process is based on suggestions found [here](https://colab.research.google.com/drive/1flu31ulJlgiRL1dnN2ir8wGh9p7Zij2t).
Begin by installing **pyvirtualdisplay** and **python-opengl**.
```
!pip install gym pyvirtualdisplay > /dev/null 2>&1
!apt-get install -y xvfb python-opengl ffmpeg > /dev/null 2>&1
```
Next, we install needed requirements to display an Atari game.
```
!apt-get update > /dev/null 2>&1
!apt-get install cmake > /dev/null 2>&1
!pip install --upgrade setuptools 2>&1
!pip install ez_setup > /dev/null 2>&1
!pip install gym[atari] > /dev/null 2>&1
```
Next we define functions used to show the video by adding it to the CoLab notebook.
```
import gym
from gym.wrappers import Monitor
import glob
import io
import base64
from IPython.display import HTML
from pyvirtualdisplay import Display
from IPython import display as ipythondisplay
display = Display(visible=0, size=(1400, 900))
display.start()
"""
Utility functions to enable video recording of gym environment
and displaying it.
To enable video, just do "env = wrap_env(env)"
"""
def show_video():
    mp4list = glob.glob('video/*.mp4')
    if len(mp4list) > 0:
        mp4 = mp4list[0]
        video = io.open(mp4, 'r+b').read()
        encoded = base64.b64encode(video)
        ipythondisplay.display(HTML(data='''<video alt="test" autoplay
            loop controls style="height: 400px;">
            <source src="data:video/mp4;base64,{0}" type="video/mp4" />
        </video>'''.format(encoded.decode('ascii'))))
    else:
        print("Could not find video")

def wrap_env(env):
    env = Monitor(env, './video', force=True)
    return env
```
Now we are ready to play the game. We use a simple random agent.
```
#env = wrap_env(gym.make("MountainCar-v0"))
env = wrap_env(gym.make("Atlantis-v0"))
observation = env.reset()
while True:
    env.render()
    # your agent goes here
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        break
env.close()
show_video()
```
# Using Python and NumPy more efficiently
As with any programming language, there are more efficient and less efficient ways to write code that has the same functional behavior. In Python, it can be particularly jarring that `for` loops have a relatively high per-loop cost. For simple `for` loops, there can be alternative approaches using regular Python that are both better performing and easier to read. For numerical calculations, `NumPy` provides additional capabilities that can dramatically improve performance.
```
# Math libraries
import math
import numpy as np
# Create a convenience function for using the Python `timeit` module
import timeit
def ms_from_timeit(function_as_string, argument_as_string, runs=100, repeat=10):
    """Returns the milliseconds per function call"""
    timer = timeit.Timer(function_as_string + '(' + argument_as_string + ')',
                         setup='from __main__ import ' + function_as_string + ', ' + argument_as_string)
    return min(timer.repeat(repeat, runs)) / runs * 1000
```
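To see the per-loop overhead directly, here is a small standalone timing comparison using the `timeit` module (a toy expression, not part of the notebook's benchmark):

```python
import timeit

# Time an explicit for loop vs. a list comprehension doing the same work
loop_s = timeit.timeit(
    "out = []\nfor i in range(1000):\n    out.append(i * i)", number=200)
comp_s = timeit.timeit("[i * i for i in range(1000)]", number=200)

print("loop: {0:.4f} s, comprehension: {1:.4f} s".format(loop_s, comp_s))
```

On most CPython builds the comprehension variant runs noticeably faster, which is the effect the benchmarks below quantify.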
## Calling a function on 10,000 values
Let's start with a simple task: calculate the square root on 10,000 randomly generated values.
```
# Create a list of 10000 random floats in [0, 1)
import random
random_list = [random.random() for i in range(10000)]
```
### Using a `for` loop
A simple implementation is to use a `for` loop to step through the input list and append each square-root value to an output list.
```
def sqrt_python_loop(python_list):
    result = []
    for value in python_list:
        result.append(math.sqrt(value))
    return result

print("Using a Python loop takes {0:5.3f} ms".format(ms_from_timeit('sqrt_python_loop', 'random_list')))
```
### Using list comprehension
For `for` loops that only need to operate on an element-by-element basis, we can use Python's list comprehension for a significant performance boost.
```
def sqrt_python_list_comprehension(python_list):
    result = [math.sqrt(value) for value in python_list]
    return result

print("Using Python list comprehension takes {0:5.3f} ms".format(ms_from_timeit('sqrt_python_list_comprehension', 'random_list')))
```
### Using `map`
One can also use the built-in function `map` to obtain faster performance, although it may be less readable than using list comprehension.
```
def sqrt_python_map(python_list):
    # list() forces evaluation; in Python 3, map returns a lazy iterator
    result = list(map(math.sqrt, python_list))
    return result

print("Using Python map takes {0:5.3f} ms".format(ms_from_timeit('sqrt_python_map', 'random_list')))
```
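One caveat worth noting (an aside, not part of the original benchmark): in Python 3, `map` returns a lazy iterator, so no work is done until the result is consumed, e.g. by `list`:

```python
import math

m = map(math.sqrt, [1.0, 4.0, 9.0])  # no square roots computed yet
values = list(m)                     # forces evaluation
print(values)  # [1.0, 2.0, 3.0]
```

This laziness means that timing a bare `map` call can dramatically understate the real cost of producing the values.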
## Calling a numerical function on 10,000 numbers
The above examples have significant overhead due to the adherence to "vanilla" Python. For numerical calculations, use NumPy.
```
# Create a NumPy ndarray equivalent for the same list of random floats
random_ndarray = np.array(random_list)
```
### Using NumPy incorrectly
While NumPy is quite powerful, it's entirely possible to use it sub-optimally. In the following example, which sticks with using `map`, the additional overhead of converting to/from NumPy ndarrays completely dominates the run time.
```
def sqrt_numpy_map(numpy_array):
    # Going element-by-element through map defeats NumPy's vectorization;
    # list() forces evaluation before converting back to an ndarray
    result = np.array(list(map(np.sqrt, numpy_array)))
    return result

print("Using NumPy with map takes {0:5.3f} ms".format(ms_from_timeit('sqrt_numpy_map', 'random_ndarray')))
```
### Using NumPy correctly
Most of NumPy's functions are already designed to act element-wise on NumPy arrays, so there's actually no need to use `map`.
```
def sqrt_numpy_ufunc(numpy_array):
    result = np.sqrt(numpy_array)
    return result

print("Using NumPy universal function takes {0:5.3f} ms".format(ms_from_timeit('sqrt_numpy_ufunc', 'random_ndarray')))
```
## Using NumPy on two-dimensional arrays
```
# Create a 2D NumPy ndarray from the same list of random floats
random_ndarray_2d = np.array(random_list).reshape(100, 100)
def std_1d(numpy_2d_array):
    # Explicit loop: one std per column (same result as np.std with axis=0)
    result = np.zeros(numpy_2d_array.shape[1])
    for index in np.arange(numpy_2d_array.shape[1]):
        result[index] = np.std(numpy_2d_array[:, index])
    return result

print("Using NumPy avoiding `axis` takes {0:5.3f} ms".format(ms_from_timeit('std_1d', 'random_ndarray_2d')))

def std_1d_axis(numpy_2d_array):
    result = np.std(numpy_2d_array, axis=0)
    return result

print("Using NumPy using `axis` takes {0:5.3f} ms".format(ms_from_timeit('std_1d_axis', 'random_ndarray_2d')))
```
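As a quick reminder of NumPy's `axis` semantics (a toy example added for illustration): `axis=0` aggregates down the rows, producing one value per column, while `axis=1` produces one value per row:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)       # [[0, 1, 2], [3, 4, 5]]
col_std = np.std(a, axis=0)          # one value per column -> shape (3,)
row_std = np.std(a, axis=1)          # one value per row    -> shape (2,)
print(col_std.shape, row_std.shape)  # (3,) (2,)
```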
# Bias, variance, K-fold cross-validation and learning curves
This notebook explores the relationship between the number of folds K, the bias, the variance, and the learning curve for a simple toy data set. The Python code here was used to generate the plots and simulations for the following stats.stackexchange post:
- https://stats.stackexchange.com/questions/61546/optimal-number-of-folds-in-k-fold-cross-validation-is-leave-one-out-cv-always/357572?noredirect=1#comment672417_357572
## Question: how to choose K in K-fold cross-validation
### Libraries
```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
plt.style.use('seaborn-white')
%matplotlib inline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.model_selection import train_test_split, ShuffleSplit, KFold
from sklearn.metrics import mean_squared_error
from scipy import interpolate
```
### Viewing the toy data set and degree 4 polynomial regression
```
#Utility variables
degs = np.arange(0,11)
degrees = [4]
Train_MSE_list, Test_MSE_list = [], []
#Initializing noisy non linear data
n = 10000
x = np.linspace(0,1,n)
x_plot = np.linspace(0,1,10*n)
noise = np.random.uniform(-.5,.5, size = n)
y = np.sin(x * 1 * np.pi - .5)
y_noise = y + noise
Y = (y + noise).reshape(-1,1)
X = x.reshape(-1,1)
rs = ShuffleSplit(n_splits=1, train_size = 15, test_size=5)
rs.get_n_splits(X)
for train_index, test_index in rs.split(X):
    X_train, X_test, y_train, y_test = X[train_index], X[test_index], Y[train_index], Y[test_index]
#Setup plot figures
fig = plt.figure(figsize=(16,8))
ax = fig.add_subplot(1, 2, 1)
for d in degs:
    #Create an sklearn pipeline, fit and plot result
    pipeline = Pipeline([('polynomialfeatures', PolynomialFeatures(degree=d, include_bias=True, interaction_only=False)),
                         ('linearregression', LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=True))])
    pipeline.fit(X_train, y_train)
    Train_MSE = mean_squared_error(y_train, pipeline.predict(X_train))
    Test_MSE = mean_squared_error(y_test, pipeline.predict(X_test))
    Train_MSE_list.append(Train_MSE)
    Test_MSE_list.append(Test_MSE)
    if d in degrees:
        plt.plot(x_plot, pipeline.predict(x_plot.reshape(-1,1)), label = 'd = {}'.format(d), color = 'red')
#First plot left hand side
ax.plot(x,y,color = 'darkblue',linestyle = '--', label = 'f(x)')
ax.scatter(X_train,y_train, facecolors = 'none', edgecolor = 'darkblue')
ax.set_title('Noisy sine curve, 15 data points')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_ylim(-1.5,1.5)
ax.legend()
#========================== RHS plot ====================#
rs = ShuffleSplit(n_splits=1, train_size = 60, test_size=15)
rs.get_n_splits(X)
for train_index, test_index in rs.split(X):
    X_train, X_test, y_train, y_test = X[train_index], X[test_index], Y[train_index], Y[test_index]
ax = fig.add_subplot(1, 2, 2)
for d in degs:
    #Create an sklearn pipeline, fit and plot result
    pipeline = Pipeline([('polynomialfeatures', PolynomialFeatures(degree=d, include_bias=True, interaction_only=False)),
                         ('linearregression', LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=True))])
    pipeline.fit(X_train, y_train)
    Train_MSE = mean_squared_error(y_train, pipeline.predict(X_train))
    Test_MSE = mean_squared_error(y_test, pipeline.predict(X_test))
    Train_MSE_list.append(Train_MSE)
    Test_MSE_list.append(Test_MSE)
    if d in degrees:
        plt.plot(x_plot, pipeline.predict(x_plot.reshape(-1,1)), label = 'd = {}'.format(d), color = 'red')
#Second plot, right hand side
ax.plot(x,y,color = 'darkblue',linestyle = '--', label = 'f(x)')
ax.scatter(X_train,y_train, facecolors = 'none', edgecolor = 'darkblue')
ax.set_title('Noisy sine curve, 60 data points')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_ylim(-1.5,1.5)
ax.legend()
plt.show()
```
### Learning curve
```
#Utility variables
CV_Mean_MSE, CV_Std_MSE = [],[]
train_sizes=np.array([5,10,15,20,25,30,35,40,50,60,70,80,90,100])
test_sizes = np.array([1,2,3,4,5,6,7,8,10,12,14,16,18,20])
for s in range(len(train_sizes)):
    Test_MSE_list = []
    rs = ShuffleSplit(n_splits=300, train_size=train_sizes[s], test_size=test_sizes[s])
    rs.get_n_splits(X)
    for train_index, test_index in rs.split(X):
        #print("TRAIN:", train_index, "TEST:", test_index)
        X_train, X_test, y_train, y_test = X[train_index], X[test_index], Y[train_index], Y[test_index]
        pipeline = Pipeline([('polynomialfeatures', PolynomialFeatures(degree=4, include_bias=True, interaction_only=False)),
                             ('linearregression', LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=True))])
        pipeline.fit(X_train, y_train)
        #Inner loop results
        Test_MSE_list.append(mean_squared_error(y_test, pipeline.predict(X_test)))
    #Calculating loop results: mean and std
    CV_Mean_MSE.append(np.mean(Test_MSE_list))
    CV_Std_MSE.append(np.std(Test_MSE_list))
#Converting to numpy for convenience
CV_Mean_MSE = np.asarray(CV_Mean_MSE)
CV_Std_MSE = np.asarray(CV_Std_MSE)
#Plotting
plt.figure(figsize = (7,7))
plt.fill_between(train_sizes, 1 - (CV_Mean_MSE - CV_Std_MSE),
1 - (CV_Mean_MSE + CV_Std_MSE), alpha=0.1, color="g")
plt.plot(train_sizes, 1 - CV_Mean_MSE, 'o-', color="g",
label="Cross-validation")
plt.hlines(1 - 1/12 , 0,100, linestyle = '--', color = 'gray', alpha = .5, label = 'True noise $\epsilon$')
plt.legend(loc="best")
plt.ylim(0.4,1)
plt.ylabel('1 - MSE')
plt.xlabel('Size of training set')
plt.title('1 - Error (MSE) vs Training size ')
```
# Approach 1) Re-sampling from the 10,000 points at each iteration
- Iterate i times (e.g. 100 or 200 times). At each iteration, change the data set by sampling N data points from the original dataset (the code below uses `ShuffleSplit`, which samples without replacement)
- For each dataset i: Perform K fold CV for one value of K
- Calculate the mean MSE of the K fold CV
- Calculate the mean and standard deviation across the i iterations for the same value of K
- Repeat the above steps for different k = 5.. N
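The inner "K-fold mean MSE" step of this procedure can be sketched in plain NumPy; the following is a simplified stand-in (straight-line fit on toy data instead of the degree-4 polynomial pipeline used below):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(40)
y = 2 * X + rng.normal(scale=0.1, size=40)

k = 5
folds = np.array_split(np.arange(40), k)  # contiguous folds, as in KFold
fold_mse = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    a, b = np.polyfit(X[train], y[train], 1)  # fit on the other K-1 folds
    fold_mse.append(np.mean((y[test] - (a * X[test] + b)) ** 2))
mean_mse = float(np.mean(fold_mse))  # the CV estimate for this dataset
```

In the full experiment this quantity is recomputed for each resampled dataset, and its mean and standard deviation across iterations are what get plotted.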
## Small data set: Increasing K improves bias slightly
### Small dataset - 40 points
```
#Utility variables
CV_Mean_MSE_small, CV_Var_MSE_small = [],[]
k_folds_range = np.array([2,4,6,8,10,15,20,25,29,35,39])
for k in k_folds_range:
    #Reset list at start of loop
    i_Mean_MSE = []
    #Repeat experiment i times
    for i in range(300):
        #Reset list at start of loop
        Kfold_MSE_list = []
        #Subsample from the original dataset (ShuffleSplit samples without replacement)
        rs = ShuffleSplit(n_splits=1, train_size=40, test_size=1)
        rs.get_n_splits(X)
        for subset_index, _ in rs.split(X):
            X_subset, Y_subset = X[subset_index], Y[subset_index]
        #Loop over kfold splits
        kf = KFold(n_splits=k)
        for train_index, test_index in kf.split(X_subset):
            X_train, X_test = X_subset[train_index], X_subset[test_index]
            y_train, y_test = Y_subset[train_index], Y_subset[test_index]
            #Fit model on X_train
            pipeline = Pipeline([('polynomialfeatures', PolynomialFeatures(degree=4, include_bias=True, interaction_only=False)),
                                 ('linearregression', LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=True))])
            pipeline.fit(X_train, y_train)
            #Store each Kfold MSE value on X_test
            Kfold_MSE_list.append(mean_squared_error(y_test, pipeline.predict(X_test)))
        #Average over the K folds for a single "i" iteration
        i_Mean_MSE.append(np.mean(Kfold_MSE_list))
    #Average and std for a particular k value over all i iterations
    CV_Mean_MSE_small.append(np.mean(i_Mean_MSE))
    CV_Var_MSE_small.append(np.var(i_Mean_MSE, ddof=1))
#Convert to numpy for convenience
CV_Mean_MSE_small = np.asarray(CV_Mean_MSE_small)
CV_Var_MSE_small = np.asarray(CV_Var_MSE_small)
CV_Std_MSE_small = np.sqrt(CV_Var_MSE_small)
#Plotting result - LHS - 1 - MSE
fig = plt.figure(figsize=(16,8))
fig.add_subplot(1, 2, 1)
plt.fill_between(k_folds_range, 1 - (CV_Mean_MSE_small - CV_Std_MSE_small),
1 - (CV_Mean_MSE_small + CV_Std_MSE_small), alpha=0.1, color="g", label = '$\pm 1$ std')
plt.plot(k_folds_range, 1 - CV_Mean_MSE_small, 'o-', color="g",
label="Cross-validation mean")
plt.hlines(1 - 1/12 , min(k_folds_range),max(k_folds_range), linestyle = '--', color = 'gray', alpha = .5, label = 'True noise $\epsilon$')
plt.legend(loc="lower right"),
plt.ylim(0.7,1)
plt.ylabel('1 - MSE'), plt.xlabel('Kfolds')
plt.title('1 - MSE vs Number of Kfolds: 40 data points, 300 iterations bootstrap ')
```
### Printing the standard deviation for each K value
```
pd.DataFrame(data = {'K = ':k_folds_range,'Mean MSE': CV_Mean_MSE_small,'Std MSE': CV_Std_MSE_small })
```
### Viewing variance as a function of k
```
plt.figure(figsize = (7,7))
plt.plot(k_folds_range, CV_Std_MSE_small, 'o-', color="g",
label="Cross-validation Variance")
plt.legend(loc="best")
plt.ylabel('Std MSE')
plt.xlabel('Kfolds')
plt.ylim(0,.05)
plt.title('Var MSE vs Number of Kfolds: 40 data points, 100 iterations bootstrap ')
```
## Large dataset: Increasing K increases the variance, constant bias
### Large data set: 200 points
```
#Utility variables
CV_Mean_MSE_larger, CV_Std_MSE_larger = [],[]
k_folds_range = np.array([5,20,40,80,125,175,199])
for k in k_folds_range:
    #Reset list at start of loop
    i_Mean_MSE = []
    #Repeat experiment i times
    for i in range(50):
        #Reset list at start of loop
        Kfold_MSE_list = []
        #Subsample from the original dataset (ShuffleSplit samples without replacement)
        rs = ShuffleSplit(n_splits=1, train_size=200, test_size=1)
        rs.get_n_splits(X)
        for subset_index, _ in rs.split(X):
            X_subset, Y_subset = X[subset_index], Y[subset_index]
        #Loop over kfold splits
        kf = KFold(n_splits=k)
        for train_index, test_index in kf.split(X_subset):
            X_train, X_test = X_subset[train_index], X_subset[test_index]
            y_train, y_test = Y_subset[train_index], Y_subset[test_index]
            #Fit model on X_train
            pipeline = Pipeline([('polynomialfeatures', PolynomialFeatures(degree=4, include_bias=True, interaction_only=False)),
                                 ('linearregression', LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=True))])
            pipeline.fit(X_train, y_train)
            #Store each Kfold MSE value on X_test
            Kfold_MSE_list.append(mean_squared_error(y_test, pipeline.predict(X_test)))
        #Average over the K folds for a single "i" iteration
        i_Mean_MSE.append(np.mean(Kfold_MSE_list))
    #Average and std for a particular k value over all i iterations
    CV_Mean_MSE_larger.append(np.mean(i_Mean_MSE))
    CV_Std_MSE_larger.append(np.std(i_Mean_MSE))
#Convert to numpy for convenience
CV_Mean_MSE_larger = np.asarray(CV_Mean_MSE_larger)
CV_Std_MSE_larger = np.asarray(CV_Std_MSE_larger)
#Plotting result - LHS
fig = plt.figure(figsize=(16,8))
fig.add_subplot(1, 2, 1)
k_folds_range = np.array([5,20,40,80,125,175,199])
plt.fill_between(k_folds_range, 1 - (CV_Mean_MSE_larger - CV_Std_MSE_larger),
1 - (CV_Mean_MSE_larger + CV_Std_MSE_larger), alpha=0.1, color="g")
plt.plot(k_folds_range, 1 - CV_Mean_MSE_larger, 'o-', color="g",
label="Cross-validation")
plt.hlines(1 - 1/12 , min(k_folds_range),max(k_folds_range), linestyle = '--', color = 'gray', alpha = .5, label = 'True noise $\epsilon$')
plt.legend(loc="best")
plt.ylim(0.7,1)
plt.ylabel('1 - MSE')
plt.xlabel('Kfolds')
plt.title('1 - MSE vs Number of Kfolds: 200 data points ')
```
### Printing the standard deviation for each K value
```
pd.DataFrame(data = {'K = ':k_folds_range,'Mean MSE': CV_Mean_MSE_larger,'Std MSE': CV_Std_MSE_larger })
plt.figure(figsize = (7,7))
plt.plot(k_folds_range, CV_Std_MSE_larger, 'o-', color="g",
label="Cross-validation Variance")
#plt.hlines(1 - 1/12 , min(split_range),max(split_range), linestyle = '--', color = 'gray', alpha = .5, label = 'True noise $\epsilon$')
plt.legend(loc="best")
#plt.ylim(0.8,1)
plt.ylabel('Std MSE')
plt.xlabel('Kfolds')
plt.title('Var MSE vs Number of Kfolds: 200 data points, 100 iterations bootstrap ')
```
# Approach 2) Repeated K-fold with shuffle = True with the same dataset
- Iterate i times (e.g. 50 times). At each iteration, keep the same dataset but reshuffle it
- For each i: Perform K fold CV for one value of K
- Calculate the mean MSE of the K fold CV
- Calculate the mean and standard deviation across the i iterations for the same value of K
- Repeat the above steps for different k = 5.. N
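The only difference from Approach 1 is that each repetition keeps the same dataset and reshuffles it before splitting. In index terms, that reshuffling step looks like the following toy sketch (illustration only, not part of the original code):

```python
import numpy as np

rng = np.random.RandomState(0)
n, k, repeats = 40, 5, 3
per_repeat_folds = []
for r in range(repeats):
    perm = rng.permutation(n)        # same dataset, new shuffle each repeat
    folds = np.array_split(perm, k)  # equivalent to KFold(..., shuffle=True)
    per_repeat_folds.append(folds)

# Every repeat still partitions all n indices exactly once
all_covered = all(
    np.array_equal(np.sort(np.concatenate(f)), np.arange(n))
    for f in per_repeat_folds)
```

Because each shuffle yields a different partition into folds, the variation across repeats measures the sensitivity of the CV estimate to the split itself, rather than to resampling the data.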
## Small dataset
```
#Utility variables
CV_Mean_MSE_small, CV_Var_MSE_small = [],[]
k_folds_range = np.array([2,4,6,8,10,15,20,25,29,35,39])
#Subsample from original dataset
rs = ShuffleSplit(n_splits=1, train_size = 40, test_size=1)
rs.get_n_splits(X)
for subset_index, _ in rs.split(X):
    X_subset, Y_subset = X[subset_index], Y[subset_index]
for k in k_folds_range:
    #Reset list at start of loop
    i_Mean_MSE = []
    #Repeat experiment i times
    for i in range(50):
        #Reset list at start of loop
        Kfold_MSE_list = []
        #Loop over kfold splits
        kf = KFold(n_splits=k, shuffle=True)
        for train_index, test_index in kf.split(X_subset):
            X_train, X_test = X_subset[train_index], X_subset[test_index]
            y_train, y_test = Y_subset[train_index], Y_subset[test_index]
            #Fit model on X_train
            pipeline = Pipeline([('polynomialfeatures', PolynomialFeatures(degree=4, include_bias=True, interaction_only=False)),
                                 ('linearregression', LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=True))])
            pipeline.fit(X_train, y_train)
            #Store each Kfold MSE value on X_test
            Kfold_MSE_list.append(mean_squared_error(y_test, pipeline.predict(X_test)))
        #Average over the K folds for a single "i" iteration
        i_Mean_MSE.append(np.mean(Kfold_MSE_list))
    #Average and std for a particular k value over all i iterations
    CV_Mean_MSE_small.append(np.mean(i_Mean_MSE))
    CV_Var_MSE_small.append(np.var(i_Mean_MSE, ddof=1))
#Convert to numpy for convenience
CV_Mean_MSE_small = np.asarray(CV_Mean_MSE_small)
CV_Var_MSE_small = np.asarray(CV_Var_MSE_small)
CV_Std_MSE_small = np.sqrt(CV_Var_MSE_small)
#Plotting result - LHS - 1 - MSE
fig = plt.figure(figsize=(16,8))
fig.add_subplot(1, 2, 1)
plt.fill_between(k_folds_range, 1 - (CV_Mean_MSE_small - CV_Std_MSE_small),
1 - (CV_Mean_MSE_small + CV_Std_MSE_small), alpha=0.1, color="g", label = '$\pm 1$ std')
plt.plot(k_folds_range, 1 - CV_Mean_MSE_small, 'o-', color="g",
label="Cross-validation mean")
plt.hlines(1 - 1/12 , min(k_folds_range),max(k_folds_range), linestyle = '--', color = 'gray', alpha = .5, label = 'True noise $\epsilon$')
plt.legend(loc="lower right"),
plt.ylim(0.7,1)
plt.ylabel('1 - MSE'), plt.xlabel('Kfolds')
plt.title('1 - MSE vs Number of Kfolds: 40 data points, 100 iterations bootstrap ')
plt.figure(figsize = (7,7))
plt.plot(k_folds_range, CV_Std_MSE_small, 'o-', color="g",
label="Cross-validation Variance")
plt.legend(loc="best")
plt.ylabel('Std MSE')
plt.xlabel('Kfolds')
plt.ylim(0,.05)
plt.title('Var MSE vs Number of Kfolds: 40 data points, 100 iterations bootstrap ')
```
## Large dataset
```
#Utility variables
CV_Mean_MSE_larger, CV_Std_MSE_larger = [],[]
k_folds_range = np.array([5,20,40,80,125,175,199])
#Resample with replacement from original dataset
rs = ShuffleSplit(n_splits=1, train_size = 200, test_size=1)
rs.get_n_splits(X)
for subset_index, _ in rs.split(X):
X_subset, Y_subset, = X[subset_index],Y[subset_index]
for k in k_folds_range:
#Reset list at start of loop
i_Mean_MSE = []
#Repeat experiment i times
for i in range(50):
#Reset list at start of loop
Kfold_MSE_list = []
#Loop over kfold splits
kf = KFold(n_splits = k, shuffle = True)
for train_index, test_index in kf.split(X_subset):
X_train, X_test = X_subset[train_index], X_subset[test_index]
y_train, y_test = Y_subset[train_index], Y_subset[test_index]
#Fit model on X_train
pipeline = Pipeline([('polynomialfeatures', PolynomialFeatures(degree=4, include_bias=True, interaction_only=False)),
                     ('linearregression', LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1))])  # note: the `normalize` argument was removed from LinearRegression in scikit-learn 1.2; add a scaling step to the pipeline instead if needed
pipeline.fit(X_train,y_train)
#Store each Kfold MSE values on X_test
Kfold_MSE_list.append(mean_squared_error(y_test, pipeline.predict(X_test)))
#Average over the K folds for a single "i" iteration
i_Mean_MSE.append(np.mean(Kfold_MSE_list))
#Average and std for a particular k value over all i iterations
CV_Mean_MSE_larger.append(np.mean(i_Mean_MSE))
CV_Std_MSE_larger.append(np.std(i_Mean_MSE))
#Convert to numpy for convenience
CV_Mean_MSE_larger = np.asarray(CV_Mean_MSE_larger)
CV_Std_MSE_larger = np.asarray(CV_Std_MSE_larger)
#Plotting result - LHS
fig = plt.figure(figsize=(16,8))
fig.add_subplot(1, 2, 1)
k_folds_range = np.array([5,20,40,80,125,175,199])
plt.fill_between(k_folds_range, 1 - (CV_Mean_MSE_larger - CV_Std_MSE_larger),
1 - (CV_Mean_MSE_larger + CV_Std_MSE_larger), alpha=0.1, color="g")
plt.plot(k_folds_range, 1 - CV_Mean_MSE_larger, 'o-', color="g",
label="Cross-validation")
plt.hlines(1 - 1/12, min(k_folds_range), max(k_folds_range), linestyle = '--', color = 'gray', alpha = .5, label = r'True noise $\epsilon$')
plt.legend(loc="best")
plt.ylim(0.9,.93)
plt.ylabel('1 - MSE')
plt.xlabel('Kfolds')
plt.title('1 - MSE vs Number of Kfolds: 200 data points ')
plt.figure(figsize = (7,7))
plt.plot(k_folds_range, CV_Std_MSE_larger, 'o-', color="g",
         label="Cross-validation std")
plt.legend(loc="best")
plt.ylabel('Std MSE')
plt.xlabel('Kfolds')
plt.title('Std of MSE vs Number of Kfolds: 200 data points, 50 iterations bootstrap')
CV_Std_MSE_larger
```
# Python
Kevin J. Walchko
created 16 Nov 2017
----
Here we will use python as our programming language. Python, like any other language, is really vast and complex. We will just cover the basics we need.
## Objectives
- Understand
- general syntax
- for/while loops
- if/elif/else
- functions
- data types: tuples, list, strings, etc
- intro to classes
## References
- [Python tutorialspoint](https://www.tutorialspoint.com/python/)
- [Python classes/objects](https://www.tutorialspoint.com/python/python_classes_objects.htm)
## Setup
```
from __future__ import print_function
from __future__ import division
import numpy as np
```
# Python
Python is a widely used high-level programming language for general-purpose programming, created by Guido van Rossum and first released in 1991. An interpreted language, Python has a design philosophy which emphasizes code readability (notably using whitespace indentation to delimit code blocks rather than curly brackets or keywords), and a syntax which allows programmers to express concepts in fewer lines of code than might be used in languages such as C++ or Java. The language provides constructs intended to enable writing clear programs on both a small and large scale.
<img src="rossum.png" width="300px">
### Python’s Benevolent Dictator For Life!
“Python is an experiment in how much freedom programmers need. Too much freedom and nobody can read another's code; too little and expressiveness is endangered.”
- Guido van Rossum
## Why Use It?
- Simple and easy to use and very efficient
- What you can do in 100 lines of Python could take you 1000 in C++ … this is the reason many startups (e.g., Instagram) use Python and keep using it
- 90% of robotics uses either C++ or python
- Although C++ is faster in run-time, development (write, compile, link, etc) is much slower due to complex syntax, memory management, pointers (they can be fun!) and difficulty in debugging any sort of real program
- Java is dying (or dead)
- Microsoft is still struggling to get people outside of the Windows OS to embrace C#
- Apple's swift is too new and constantly making major changes ... maybe some day
## Who Uses It?
- Industrial Light & Magic (the Star Wars people): used in post-production scripting to tie together outputs from other C++ programs
- Eve Online (big MMORPG game): used for both the client and server aspects of the game
- Instagram, Spotify, SurveyMonkey, The Onion, Bitbucket, Pinterest, and more use Django (python website template framework) to create/serve millions of users
- Dropbox, Paypal, Walmart and Google (YouTube)
- Note: Guido van Rossum worked for Google and now works for Dropbox
## Running Programs on UNIX (or your robot)
- Call python program via the python interpreter: `python my_program.py`
- This is kind of the stupid way
- Make a python file directly executable
- Add a shebang (it’s a Unix thing) to the top of your program: `#!/usr/bin/env python`
- Make the file executable: `chmod a+x my_program.py`
- Invoke file from Unix command line: `./my_program.py`
## Enough to Understand Code (Short Version)
- Indentation matters for functions, loops, classes, etc
- First assignment to a variable creates it
- Variable types (int, float, etc) don’t need to be declared.
- Assignment is = and comparison is ==
- For numbers + - * % are as expected
- modulo (%) returns the remainder: 5%3 => 2
- Logical operators are words (and, or, not) not symbols
- We are using `__future__` for python 2 / 3 compatibility
- The basic printing command is print(‘hello’)
- Division works like expected:
- Float division: 5/2 = 2.5
- Integer division: 5//2 = 2
- Start comments with #, rest of line is ignored
- Can include a “documentation string” as the first line of a new function or class you define
```python
def my_function(n):
"""
my_function(n) takes a positive integer and returns n + 5
"""
# assert ... remember this from ECE281?
assert n>0, "crap, n is 0 or negative!"
return n+5
```
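The operator rules in the list above (division, modulo, comparison, and the word-based logical operators) can be checked in a few lines:

```python
# quick checks of the operator rules listed above
print(5 % 3)               # modulo: remainder of 5/3 -> 2
print(5 / 2)               # float division -> 2.5
print(5 // 2)              # integer (floor) division -> 2
print(3 == 3.0)            # comparison uses ==, not = -> True
print(True and not False)  # logical operators are words -> True
```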
# Printing
Again, to have Python 3 compatibility and to help you in the future, we are going to print things using the print function. Python 2 by default uses a print statement. Also, it is good form to use the newer `format()` method on strings rather than the old C style `%s` for a string or `%d` for an integer. There are lots of cool things you can do with `format()` but we won't dive too far into it ... just the basics.
**WARNING:** Your homework with Codecademy uses the old way to `print`; just do it that way there and get through it. For this class we are doing it this way!
```
from __future__ import division # fix division
from __future__ import print_function # use print function
print('hello world') # single quotes
print("hello world") # double quotes
print('3/4 is', 3/4) # this prints 0.75
print('I am {} ... for {} yrs I have been training Jedi'.format("Yoda", 853))
print('float: {:5.1f}'.format(3.1424567)) # prints float: 3.1
```
## Unicode
Unicode sucks in python 2.7, but if you want to use it:
- [alphabets](https://en.wikipedia.org/wiki/List_of_Unicode_characters)
- [arrows](https://en.wikipedia.org/wiki/List_of_Unicode_characters#Arrows)
- [emoji](https://en.wikipedia.org/wiki/Emoji#Unicode_blocks)
```
print(u'\u21e6 \u21e7 \u21e8 \u21e9')
print(u'\u2620')
# this is a dictionary, we will talk about it next ... sorry for the out of order
uni = {
'left': u'\u21e6',
'up': u'\u21e7',
'right': u'\u21e8',
'down': u'\u21e9',
}
print(u'\nYou must go {}'.format(uni['up'])) # notice all strings have u on the front
```
# Data Types
Python is dynamically typed, so you don't really need to keep track of variables and declare them as ints, floats, doubles, unsigned, etc. There are a few places where this isn't true, but we will deal with those as we encounter them.
```
# bool
z = True # or False
# integers (default)
z = 3
# floats
z = 3.124
z = 5/2
print('z =', z)
# dictionary or hash tables
bob = {'a': 5, 'b': 6}
print('bob["a"]:', bob['a'])
# you can assign a new key/values pair
bob['c'] = 'this is a string!!'
print(bob)
print('len(bob) =', len(bob))
# you can also access what keys are in a dict
print('bob.keys() =', bob.keys())
# let's get crazy and do different types and have a key that is an int
bob = {'a': True, 11: [1,2,3]}
print('bob = ', bob)
print('bob[11] = ', bob[11]) # don't do this, it is confusing!!
# arrays or lists are mutable (changable)
# the first element is 0 like all good programming languages
bob = [1,2,3,4,5]
bob[2] = 'tom'
print('bob list', bob)
print('bob list[3]:', bob[3]) # remember it is zero indexed
# repetition (*) is a quick way to build a list ... it works for tuples too
bob = [1]*5
print('bob one-liner version 2:', bob)
print('len(bob) =', len(bob))
# strings
z = 'hello world!!'
z = 'hello' + ' world' # concatenate
z = 'hhhello world!@#$'[2:13] # strings are just an array of letters
print('my crazy string:', z)
print('{}: {} {:.2f}'.format('formatting', 3.1234, 6.6666))
print('len(z) =', len(z))
# tuples are immutable (not changable which makes them faster/smaller)
bob = (1,2,3,4)
print('bob tuple', bob)
print('bob tuple*3', bob*3) # repeats tuple 3x
print('len(bob) =', len(bob))
# since tuples are immutable, this will throw an error
bob[1] = 'tom'
# assign multiple variables at once
bob = (4,5,)
x,y = bob
print(x,y)
# wait, I changed my mind ... easy to swap
x,y = y,x
print(x,y)
```
# Flow Control
## Logic Operators
Flow control is generally done via some math operator or boolean logic operator.
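The comparison and boolean operators that drive flow control can be summarized directly in code:

```python
x = 5
print(x > 3, x >= 5, x < 3, x != 4, x == 5)  # comparison operators
print(x > 3 and x < 10)  # and: both sides must be True
print(x < 3 or x == 5)   # or: at least one side must be True
print(not x == 5)        # not: negation
```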
## For Loop
```
# range(start, stop, step) # this only works for integer values
range(3,10) # jupyter cell will always print the last thing
# iterates from start (default 0) to less than the highest number
for i in range(5):
print(i)
# you can also create simple arrays like this:
bob = [2*x+3 for x in range(4)]
print('bob one-liner:', bob)
for i in range(2,8,2): # start=2, stop<8, step=2, so notice the last value is 6 NOT 8
print(i)
# I have a list of things ... maybe images or something else.
# A for loop can iterate through the list. Here, each time
# through, i is set to the next letter in my array 'dfec'
things = ['d', 'e', 'f', 'c']
for ltr in things:
print(ltr)
# enumerate()
# sometimes you need a counter in your for loop, use enumerate
things = ['d', 'e', 'f', 3.14] # LOOK! the last element is a float not a letter ... that's OK
for i, ltr in enumerate(things):
print('things[{}]: {}'.format(i, ltr))
# zip()
# sometimes you have a couple arrays that you want to work on at the same time, use zip
# to combine them together
# NOTE: zip stops at the end of the SHORTEST array, so keep them the SAME LENGTH
a = ['bob', 'tom', 'sally']
b = ['good', 'evil', 'nice']
c = [10, 20, 15]
for name, age, status in zip(a, c, b): # notice I mixed up a, b, c
status = status.upper()
name = name[0].upper() + name[1:] # strings are immutable
print('{} is {} yrs old and totally {}'.format(name, age, status))
```
## if / elif / else
```
# classic if/then statements work the same as other languages.
# if the statement is True, then do something, if it is False, then skip over it.
if False:
print('should not get here')
elif True:
print('this should print')
else:
print('this is the default if all else fails')
n = 5
n = 3 if n==1 else n-1 # one line if/then statement
print(n)
```
## While
```
x = 3
while True: # while loop runs while value is True
if not x: # I will enter this if statement when x = False or 0
break # breaks me out of a loop
else:
print(x)
x -= 1
```
# Exception Handling
When you write code you should think about how you could break it, then design it so you can't. Now, you don't necessarily need to write bullet-proof code ... that takes a lot of time (and time is money), but you should make an effort to reduce your debug time.
A list of Python 2.7 exceptions is [here](https://docs.python.org/2/library/exceptions.html). **KeyboardInterrupt** is a common one, raised when a user presses ctrl-C to quit the program. Some others:
```
BaseException
+-- SystemExit
+-- KeyboardInterrupt
+-- GeneratorExit
+-- Exception
+-- StopIteration
+-- StandardError
| +-- BufferError
| +-- ArithmeticError
| | +-- FloatingPointError
| | +-- OverflowError
| | +-- ZeroDivisionError
| +-- AssertionError
| +-- AttributeError
| +-- EnvironmentError
| | +-- IOError
| | +-- OSError
| | +-- WindowsError (Windows)
| | +-- VMSError (VMS)
| +-- EOFError
| +-- ImportError
| +-- LookupError
| | +-- IndexError
| | +-- KeyError
| +-- MemoryError
| +-- NameError
| | +-- UnboundLocalError
| +-- ReferenceError
| +-- RuntimeError
| | +-- NotImplementedError
| +-- SyntaxError
| | +-- IndentationError
| | +-- TabError
| +-- SystemError
| +-- TypeError
| +-- ValueError
| +-- UnicodeError
| +-- UnicodeDecodeError
| +-- UnicodeEncodeError
| +-- UnicodeTranslateError
+-- Warning
+-- DeprecationWarning
+-- PendingDeprecationWarning
+-- RuntimeWarning
+-- SyntaxWarning
+-- UserWarning
+-- FutureWarning
+-- ImportWarning
+-- UnicodeWarning
+-- BytesWarning
```
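The point of this hierarchy is that an `except` clause also catches all subclasses of the listed exception. A quick illustration:

```python
d = {'a': 1}
try:
    d['missing']          # raises KeyError
except LookupError as e:  # LookupError is the parent of KeyError and IndexError
    print('caught:', type(e).__name__)  # caught: KeyError
```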
```
# exception handling ... use in your code in smart places
try:
a = (1,2,) # tuple ... notice the extra comma after the 2
a[0] = 1 # this won't work!
except: # this catches any exception thrown
print('you idiot ... you cannot modify a tuple!!')
# error
5/0
try:
5/0
except ZeroDivisionError as e:
print(e)
# raise # this raises the error to the next
# level so I don't have to handle it here
try:
5/0
except ZeroDivisionError as e:
print(e)
raise # this raises the error to the next level (in this case, the Jupyter GUI handles it)
```
- When would you want to use `raise`?
- Why not *always* handle the error here?
- What is different when the `raise` command is used?
```
# Honestly, I generally just use Exception, from which most other exceptions
# are derived, but I am lazy and it works fine for what I do
try:
5/0
except Exception as e:
print(e)
# all is right with the world ... these will work, nothing will print
assert True
assert 3 > 1
# this will fail ... and we can add a message if we want to
assert 3 < 1, 'hello ... this should fail'
```
# Libraries
We will need to import `math` to have access to trig and other functions. There will be other libraries like `numpy`, `cv2`, etc. you will need too.
```
import math
print('messy', math.cos(math.pi/4))
# that looks clumsy ... let's do this instead
from math import cos, pi
print('simpler math:', cos(pi/4))
# or we just want to shorten the name to reduce typing ... good programmers are lazy!
import numpy as np
# well what is in the math library I might want to use????
dir(math)
# what is tanh???
help(math.tanh)
print(math.__doc__) # print the doc string for the library ... what does it do?
```
# Functions
There isn't too much that is special about python functions, just the format.
```
def my_cool_function(x):
"""
This is my cool function which takes an argument x
and returns a value
"""
return 2*x/3
my_cool_function(6) # 2*6/3 = 4
```
# Classes and Object Oriented Programming (OOP)
Ok, we don't have time to really teach you how to do this. It would be better if your real programming classes did this. So we are just going to [kludge](https://www.merriam-webster.com/dictionary/kludge) this together here, because classes could be useful in this class. In fact I (and 99% of the world) generally use OOP.
Classes are awesome because of a few reasons. First, they help you reuse code instead of duplicating the code in other places all over your program. Classes will save your life when you realize you want to change a function and you will only change it in one spot instead of 10 different spots with slightly different code. You can also put a bunch of related functions together because they make sense. Another important part of Classes is that they allow you to create more flexible functions.
We are going to keep it simple and basically show you how to do OOP in python very simply. This will be a little familiar from ECE382 with structs (sort of).
```
class ClassName(object):
"""
So this is my cool class
"""
def __init__(self, x):
"""
This is called a constructor in OOP. When I make an object
this function is called.
self = contains all of the objects values
x = an argument to pass something into the constructor
"""
self.x = x
print('> Constructor called', x)
def my_cool_function(self, y):
"""
This is called a method (function) that works on
the class. It always needs self to access class
values, but can also have as many arguments as you want.
I only have 1 arg called y"""
self.x = y
print('> called function: {}'.format(self.x))
def __del__(self):
"""
Destructor. This is called when the object goes out of scope
and is destroyed. It takes NO arguments other than self.
Note, this is hard to demonstrate in jupyter, because it will probably
only get called when the program (notebook) shuts down.
"""
pass
a = ClassName('bob')
a.my_cool_function(3.14)
b = ClassName(28)
b.my_cool_function('tom')
for i in range(3):
a = ClassName('bob')
```
There are tons of things you can do with objects. Here is one example. Say we have a ball class and for some reason we want to be able to add balls together.
```
class Ball(object):
def __init__(self, color, radius):
# this ball always has the color and radius set below
self.radius = radius
self.color = color
def __str__(self):
"""
When something tries to turn this object into a string,
this function gets called
"""
s = 'Ball {}, radius: {:.1f}'.format(self.color, self.radius)
return s
def __add__(self, a):
c = Ball('gray', a.radius + self.radius)
return c
r = Ball('red', 3)
g = Ball('green', radius=4)
b = Ball(radius=5, color='blue')
print(r)
print(g)
print(b)
print('total size:', r.radius+b.radius+g.radius)
print('Add method:', r+b+g)
# the base class of all objects in Python should be
# object. It comes with these methods already defined.
dir(object)
```
Now you can have classes with functions that make intuitive sense! If I want to calculate the area of a shape, call function `area()`. I don't need a function `areaCircle()` and `areaSquare()`. Or no, maybe the author named the function `area_circle()` or `AreaCircle()` or `areacircle()` or ...
```python
from math import pi
class Circle(object):
def __init__(self, radius):
self.radius = radius
def area(self):
return pi*self.radius**2
class Square(object):
def __init__(self, length, width):
self.length = length
self.width = width
def area(self):
return self.length*self.width
```
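Putting it together, a usage sketch (the classes are repeated here so the snippet is self-contained, with `Square.area` referencing `self`): the same method name works for both shapes, so there is no need for `areaCircle()` vs `areaSquare()`.

```python
from math import pi

class Circle(object):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return pi * self.radius ** 2

class Square(object):
    def __init__(self, length, width):
        self.length = length
        self.width = width
    def area(self):
        return self.length * self.width  # note: attributes need self.

# same method name, different shapes -- this is the point of the section above
for s in [Circle(1.0), Square(2.0, 3.0)]:
    print(type(s).__name__, s.area())
```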
# Exercises
- Please run this notebook and change numbers/variables/etc so you understand how they work ... your grade depends on your understanding!
# Questions
1. What is the difference between `/` and `//`?
1. How do you use the `.format()` command on a string?
1. What does mutable/immutable mean for datatypes?
1. What is a hash table and how do you add new values and retrieve (or access) values in it?
1. On one line, how would I do a for loop that returns a new array of : [2,4,8,16]?
1. Write a function that takes a value in the range [-5, 5] and returns the value divided by 2. Make sure to check the input meets the bounds and throw an error if it is wrong
1. Write a class for a `Circle`. Have the constructor take a radius value and if it is not given, have a default radius of 1.0. Also include 2 methods: area(), circumference(). Make sure it inherits from `object` (the base class).
-----------
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
# GDL - Steerable CNNs
**Filled notebook:**
[](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/DL2/Geometric_deep_learning/tutorial2_steerable_cnns.ipynb)
[](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/DL2/Geometric_deep_learning/tutorial2_steerable_cnns.ipynb)
**Empty notebook:**
[](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/DL2/Geometric_deep_learning/tutorial2_steerable_cnns_unanswered.ipynb)
[](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/DL2/Geometric_deep_learning/tutorial2_steerable_cnns_unanswered.ipynb)
**Authors:** Gabriele Cesa
During the lectures, you have learnt that the symmetries of a machine learning task can be modelled with **groups**.
In the previous tutorial, you have also studied the framework of *Group-Convolutional Neural Networks* (**GCNNs**), which describes a neural architecture design equivariant to general groups.
The feature maps of a GCNN are functions over the elements of the group.
A naive implementation of group-convolution requires computing and storing a response for each group element.
For this reason, the GCNN framework is not particularly convenient to implement networks equivariant to groups with infinite elements.
Steerable CNNs are a more general framework which solves this issue.
The key idea is that, instead of storing the value of a feature map on each group element, the model stores the *Fourier transform* of this feature map, up to a finite number of frequencies.
In this tutorial, we will first introduce some Representation theory and Fourier theory (*non-commutative harmonic analysis*) and, then, we will explore how this idea is used in practice to implement Steerable CNNs.
## Prerequisite Knowledge
Throughout this tutorial, we will assume you are already familiar with some concepts of **group theory**, such as *groups*, *group actions* (in particular *on functions*), *semi-direct product* and *order of a group*, as well as basic **linear algebra**.
We start by importing the necessary packages.
You can run the following command to install all the requirements:
`> pip install torch torchvision numpy matplotlib escnn scipy`
```
import torch
import numpy as np
import scipy
import os
np.set_printoptions(precision=3, suppress=True, linewidth=10000, threshold=100000)
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
# If the fonts in the plots are incorrectly rendered, comment out the next two lines
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg', 'pdf') # For export
matplotlib.rcParams['lines.linewidth'] = 2.0
import urllib.request
from urllib.error import HTTPError
CHECKPOINT_PATH = "../../saved_models/DL2/GDL"
# Create checkpoint path if it doesn't exist yet
os.makedirs(CHECKPOINT_PATH, exist_ok=True)
# Files to download
pretrained_files = [
"steerable_c4-pretrained.ckpt",
"steerable_so2-pretrained.ckpt",
"steerable_c4-accuracies.npy",
"steerable_so2-accuracies.npy",
]
# Github URL where saved models are stored for this tutorial
base_url = "https://raw.githubusercontent.com/phlippe/saved_models/main/DL2/GDL/"
# For each file, check whether it already exists. If not, try downloading it.
for file_name in pretrained_files:
file_path = os.path.join(CHECKPOINT_PATH, file_name)
if not os.path.isfile(file_path):
file_url = base_url + file_name
print(f"Downloading {file_url}...")
try:
urllib.request.urlretrieve(file_url, file_path)
except HTTPError as e:
print("Something went wrong. Please contact the author with the full output including the following error:\n", e)
```
## 1. Representation Theory and Harmonic Analysis of Compact Groups
We will make use of the `escnn` [library](https://github.com/QUVA-Lab/escnn) throughout this tutorial.
You can also find its documentation [here](https://quva-lab.github.io/escnn/).
```
try:
from escnn.group import *
except ModuleNotFoundError: # Google Colab does not have escnn installed by default. Hence, we do it here if necessary
!pip install --quiet escnn
from escnn.group import *
```
First, let's create a group.
We will use the *Cyclic Group* $G=C_8$ as an example.
This group contains the $8$ planar rotations by multiples of $\frac{2\pi}{8}$.
In `escnn`, groups are instances of the abstract class `escnn.group.Group`, which provides some useful functionalities.
We instantiate groups via a *factory method*.
To build the cyclic group of order $8$, we use this factory method:
```
G = cyclic_group(N=8)
# We can verify that the order of this group is 8:
G.order()
```
A group is a collection of group elements together with a binary operation to combine them.
This is implemented in the class `escnn.group.GroupElement`.
We can access the *identity* element $e \in G$ as
```
G.identity
```
or sample a random element as
```
G.sample()
```
Group elements can be combined via the binary operator `@`; we can also take the inverse of an element using `~`:
```
a = G.sample()
b = G.sample()
print(a)
print(b)
print(a @ b)
print(~a)
```
Representation theory is a fundamental element in Steerable CNNs and to construct a Fourier theory over groups.
In this first section, we will introduce the essential concepts.
### 1.1 Group Representation
A **linear group representation** $\rho$ of a compact group $G$ on a vector space (called *representation space*) $\mathbb{R}^d$ is a *group homomorphism* from $G$ to the general linear group $GL(\mathbb{R}^d)$, i.e. it is a map $\rho : G \to \mathbb{R}^{d \times d}$ such that:
$$\rho(g_1 g_2) = \rho(g_1) \rho(g_2) \quad \forall g_1,g_2 \in G \ .$$
In other words, $\rho(g)$ is a $d \times d$ *invertible* matrix.
We refer to $d$ as the *size* of the representation.
#### Example: the Trivial Representation
The simplest example of *group representation* is the **trivial representation** which maps every element to $1 \in \mathbb{R}$, i.e. $\rho: g \mapsto 1$.
One can verify that it satisfies the condition above.
We can construct this representation as follows:
```
rho = G.trivial_representation
```
`rho` is an instance of `escnn.group.Representation`. This class provides some functionalities to work with group representations. We can also use it as a callable function to compute the representation of a group element; this will return a square matrix as a `numpy.array`.
Let's verify that the trivial representation does indeed satisfy the condition above:
```
g1 = G.sample()
g2 = G.sample()
print(rho(g1) @ rho(g2))
print(rho(g1 @ g2))
```
Note that the trivial representation has size $1$:
```
rho.size
```
#### Example: rotations
Another common example of group representations is given by 2D rotations.
Let $SO(2)$ be the group of all planar rotations; note that we can identify each rotation by an angle $\theta \in [0, 2\pi)$.
Then, the standard representation of planar rotations as $2\times 2$ rotation matrices is a representation of $SO(2)$:
$$
\rho: r_{\theta} \mapsto \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix}
$$
where $r_\theta \in SO(2)$ is a counter-clockwise rotation by $\theta$.
Let's try to build this group and, then, verify that this is a representation:
```
G = so2_group()
rho = G.standard_representation()
g1 = G.sample()
g2 = G.sample()
print(f'g1={g1}, g2={g2}, g1 * g2 = {g1 @ g2}')
print()
print('rho(g1) @ rho(g2)')
print(rho(g1) @ rho(g2))
print()
print('rho(g1 * g2)')
print(rho(g1 @ g2))
```
---
#### QUESTION 1
Show that any representation $\rho: G \to \mathbb{R}^{d \times d}$ also satisfies the following two properties:
- let $e \in G$ be the identity element. Then, $\rho(e)$ is the identity matrix of size $d$.
- let $g \in G$ and $g^{-1}$ be its inverse (i.e. $g \cdot g^{-1} = e$). Then, $\rho(g^{-1}) = \rho(g)^{-1}$.
#### ANSWER 1
First question.
First, note that for any $g \in G$:
$$
\rho(g) = \rho(g \cdot e) = \rho(g) \rho(e)
$$
Because $\rho(g)$ is invertible, we can left-multiply by $\rho(g)^{-1}$ to find that $\rho(e)$ is the identity.
Second question.
Note that
$$
\rho(e) = \rho(g \cdot g^{-1}) = \rho(g) \rho(g^{-1})
$$
Using the fact that $\rho(e)$ is the identity, left-multiplying by $\rho(g)^{-1}$ recovers the original statement.
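Both properties are easy to check numerically. A minimal sketch using plain `numpy` rotation matrices as a representation of $SO(2)$ (independent of `escnn`):

```python
import numpy as np

def rho(theta):
    """Standard 2x2 rotation-matrix representation of SO(2)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

g = 0.7  # an arbitrary rotation angle
# rho(e) is the identity matrix (e is the rotation by 0)
assert np.allclose(rho(0.0), np.eye(2))
# rho(g^{-1}) equals rho(g)^{-1} (the inverse rotation)
assert np.allclose(rho(-g), np.linalg.inv(rho(g)))
print("both properties hold")
```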
---
#### Direct Sum
We can combine representations to build a larger representation via the **direct sum**.
Given representations $\rho_1 : G \to \mathbb{R}^{d_1 \times d_1}$ and $\rho_2 : G \to \mathbb{R}^{d_2 \times d_2}$, their *direct sum* $\rho_1 \oplus \rho_2: G \to \mathbb{R}^{(d_1 + d_2) \times (d_1 + d_2)}$ is defined as
$$
(\rho_1 \oplus \rho_2)(g) = \begin{bmatrix}\rho_1(g) & 0 \\ 0 & \rho_2(g) \end{bmatrix}
$$
Its action is therefore given by the independent actions of $\rho_1$ and $\rho_2$ on the orthogonal subspaces $\mathbb{R}^{d_1}$ and $\mathbb{R}^{d_2}$ of $\mathbb{R}^{d_1 + d_2}$.
Let's see an example:
```
rho_sum = rho + rho
g = G.sample()
print(rho(g))
print()
print(rho_sum(g))
```
Note that the direct sum of two representations has size equal to the sum of their sizes:
```
rho.size, rho_sum.size
```
We can combine arbitrary many representations in this way, e.g. $\rho \oplus \rho \oplus \rho \oplus \rho$:
```
rho_sum = rho + rho + rho + rho
# or, more simply:
rho_sum = directsum([rho, rho, rho, rho])
rho_sum.size
```
#### The Regular Representation
Another important representation is the **regular representation**.
The regular representation describes the action of a group $G$ on the vector space of functions over the group $G$.
Assume for the moment that the group $G$ is *finite*, i.e. $|G| < \infty$.
The set of functions over $G$ is equivalent to the vector space $\mathbb{R}^{|G|}$.
We can indeed interpret a vector $\mathbf{f} \in \mathbb{R}^{|G|}$ as a function over $G$, where the $i$-th entry of $\mathbf{f}$ is interpreted as the value of the function on the $i$-th element $g_i \in G$.
The **regular representation** of $G$ is a $|G|$ dimensional representation.
Recall the left action of $G$ on a function $f: G \to \mathbb{R}$:
$$
[g.f](h) := f(g^{-1} h)
$$
The new function $g.f$ is still a function over $G$ and belongs to the same vector space.
If we represent the function $f$ as a vector $\mathbf{f}$, the vector representing the function $g.f$ will have permuted entries with respect to $\mathbf{f}$.
This permutation is the regular representation of $g \in G$.
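For a small cyclic group this permutation can be written down explicitly. A minimal `numpy` sketch, identifying $C_n$ with $\mathbb{Z}_n$ as in the text (the $i$-th entry corresponds to the rotation by $i\frac{2\pi}{n}$):

```python
import numpy as np

def regular_rep(j, n=8):
    """Permutation matrix of the regular representation of C_n:
    (rho(j) f)[i] = f[(i - j) % n], i.e. a cyclic shift by j."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, (i - j) % n] = 1
    return P

# homomorphism property: rho(a) rho(b) == rho(a + b mod n)
a, b, n = 3, 5, 8
assert np.allclose(regular_rep(a) @ regular_rep(b), regular_rep((a + b) % n))

# acting on a delta function at the identity moves the peak to entry j
f = np.zeros(n); f[0] = 1
print(regular_rep(2) @ f)  # peak is now at index 2
```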
---
#### QUESTION 2
Show that the space of functions over $G$ is a vector space.
To do so, show that functions satisfy the properties of a vector space; see [here](https://en.wikipedia.org/wiki/Vector_space#Notation_and_definition).
#### ANSWER 2
Let $f_1, f_2, f_3: G \to \mathbb{R}$ be three functions and $\alpha, \beta \in \mathbb{R}$ scalars.
The point-wise sum of two functions is the function $[f_1 + f_2]: G \to \mathbb{R}$ defined as
$$
[f_1 + f_2](g) = f_1(g) + f_2(g)
$$
The scalar multiplication is also defined pointwise as
$$
[\alpha \cdot f_1](g) = \alpha f_1(g)
$$
We now verify the required properties of a vector space.
- associativity: $[f_1 + (f_2 + f_3)](g) = f_1(g) + f_2(g) + f_3(g) = [(f_1 + f_2) + f_3](g)$
- commutativity: $[f_1 + f_2](g) = f_1(g) + f_2(g) = f_2(g) + f_1(g) = [f_2 + f_1](g)$
- identity: define the zero function $\mathbf{O}: G \to \mathbb{R},\ g \mapsto 0$; then $[f_1 + \mathbf{O}](g) = f_1(g) + \mathbf{O}(g) = f_1(g)$
- inverse: define $[-f_1](g) = -1 \cdot f_1(g)$; then $[f_1 + (-f_1)](g) = f_1(g) - f_1(g) = 0$
- compatibility: $[\alpha \cdot (\beta \cdot f_1)](g) = \alpha \beta f_1(g) = [(\alpha \beta)\cdot f_1](g)$
- identity (multiplication): $[1 \cdot f_1](g) = 1 f_1(g) = f_1(g)$
- distributivity (vector): $[\alpha \cdot (f_1 + f_2)](g) = \alpha (f_1 + f_2)(g) = \alpha f_1(g) + \alpha f_2(g)$
- distributivity (scalar): $[(\alpha + \beta) \cdot f_1](g) = (\alpha + \beta) f_1(g) = \alpha f_1(g) + \beta f_1(g)$
---
For finite groups, we can generate this representation.
We assume that the $i$-th entry is associated with the element of $G=C_8$ corresponding to a rotation by $i \frac{2\pi}{8}$.
```
G = cyclic_group(8)
rho = G.regular_representation
# note that the size of the representation is equal to the group's order |G|
rho.size
```
The identity element maps a function to itself, so the entries are not permuted:
```
rho(G.identity)
```
The regular representation of the rotation by $1\frac{2\pi}{8}$ just cyclically shifts each entry to the next position since $r_{1\frac{2\pi}{8}}^{-1} r_{i\frac{2\pi}{8}} = r_{(i-1)\frac{2\pi}{8}}$:
```
rho(G.element(1))
```
Let's see an example of the action on a function.
We consider a function which is zero on all group elements apart from the identity ($i=0$).
```
f = np.zeros(8)
f[0] = 1
f
```
Observe that $\rho(e) \mathbf{f} = \mathbf{f}$, where $e = 0\frac{2\pi}{8}$ is the identity element.
```
rho(G.identity) @ f
```
$\mathbf{f}$ is non-zero only on the element $e$.
If an element $g$ acts on this function, it moves the non-zero value to the entry associated with $g$:
```
rho(G.element(1)) @ f
rho(G.element(6)) @ f
```
---
#### QUESTION 3
Prove the result above.
#### ANSWER 3
Let's call $\delta_g: G \to \mathbb{R}$ the function defined as
$$
\delta_g(h) = \begin{cases} 1 & \text{if } h = g \\ 0 & \text{otherwise}\end{cases}
$$
which is zero everywhere apart from $g \in G$, where it is $1$.
The function $\delta_e$ is represented by the vector $\mathbf{f}$ above.
We now want to show that $[g.\delta_e](h) = \delta_g(h)$:
$$
[g.\delta_e](h) = \delta_e(g^{-1}h)
= \begin{cases} 1 & \text{if } g^{-1}h = e \\ 0 & \text{otherwise}\end{cases}
= \begin{cases} 1 & \text{if } h = g \\ 0 & \text{otherwise}\end{cases}
= \delta_g(h)
$$
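This can also be verified numerically without any library support: for $G = C_8$ the regular representation of the $k$-th rotation is the cyclic-shift permutation matrix, built here by hand with `np.roll` (these are illustrative stand-ins, not the library's objects):

```python
import numpy as np

N = 8

# regular representation of the k-th rotation of C_8: a cyclic shift by k,
# since [g_k.f](g_i) = f(g_k^{-1} g_i) = f(g_{i-k})
def rho_reg(k):
    return np.roll(np.eye(N), k, axis=0)

delta_e = np.zeros(N)
delta_e[0] = 1                    # the function delta_e, non-zero only on the identity

for k in range(N):
    delta_k = np.zeros(N)
    delta_k[k] = 1                # the function delta_{g_k}
    # acting with g_k moves the non-zero value to the entry associated with g_k
    assert np.allclose(rho_reg(k) @ delta_e, delta_k)
```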
---
#### Equivalent Representations
Two representations $\rho$ and $\rho'$ of a group $G$ on the same vector space $\mathbb{R}^d$ are called *equivalent* (or **isomorphic**) if and only if they are related by a change of basis $Q \in \mathbb{R}^{d \times d}$, i.e.
$$ \forall g \in G \quad \rho(g) = Q \rho'(g) Q^{-1} \ . $$
Equivalent representations behave similarly since their composition is *basis-independent*, as seen by
$$ \rho'(g_1) \rho'(g_2) = Q^{-1} \rho(g_1) Q Q^{-1} \rho(g_2) Q = Q^{-1} \rho(g_1)\rho(g_2) Q \ .$$
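A quick numerical sanity check of this fact, using hand-built permutation matrices for the regular representation of $C_4$ and a random (hence, almost surely invertible) change of basis $Q$ (plain `numpy`, not escnn objects):

```python
import numpy as np

N = 4
rho = lambda k: np.roll(np.eye(N), k, axis=0)   # regular representation of C_4

rng = np.random.default_rng(1)
Q = rng.standard_normal((N, N))                 # random, hence almost surely invertible
Qinv = np.linalg.inv(Q)
rho_eq = lambda k: Qinv @ rho(k) @ Q            # an equivalent representation

g1, g2 = 1, 3
# composition is basis-independent:
assert np.allclose(rho_eq(g1) @ rho_eq(g2), Qinv @ rho(g1) @ rho(g2) @ Q)
# and rho_eq is still a homomorphism: composing rotations adds their indices mod N
assert np.allclose(rho_eq(g1) @ rho_eq(g2), rho_eq((g1 + g2) % N))
```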
*Direct sum* and *change of basis matrices* provide a way to combine representations to construct larger and more complex representations.
In the next example, we concatenate two trivial representations and two regular representations and apply a random change of basis $Q$.
The final representation is formally defined as:
$$
\rho(g) = Q
\left(
\rho_\text{trivial} \oplus
\rho_\text{regular} \oplus
\rho_\text{regular} \oplus
\rho_\text{trivial}
\right)
Q^{-1}
$$
```
d = G.trivial_representation.size * 2 + G.regular_representation.size * 2
Q = np.random.randn(d, d)
rho = directsum(
[G.trivial_representation, G.regular_representation, G.regular_representation, G.trivial_representation],
change_of_basis=Q
)
rho.size
```
#### Irreducible Representations (or *Irreps*)
Under mild conditions, any representation can be decomposed in this way, that is, any representation $\rho$ of a compact group $G$ can be written as a *direct sum* of a number of smaller representations, up to a *change of basis*.
These "smaller representations" can not be decomposed further and play a very important role in the theory of group representations and steerable CNNs and are called **irreducible representations**, or simply **irreps**.
The set of *irreducible representations* of a group $G$ is generally denoted as $\hat{G}$.
We will often use the notation $\hat{G} = \{\rho_j\}_j$ to index this set.
We can access the irreps of a group via the `irrep()` method.
The *trivial representation* is *always* an irreducible representation.
For $G=C_8$, we access it with the index $j=0$:
```
rho_0 = G.irrep(0)
print(rho_0 == G.trivial_representation)
rho_0(G.sample())
```
The next irrep $j=1$ gives the representation of $i\frac{2\pi}{8}$ as the $2 \times 2$ rotation matrix by $\theta = i\frac{2\pi}{8}$:
```
rho = G.irrep(1)
g = G.sample()
print(g)
print()
print(rho(g))
```
Irreducible representations provide the building blocks to construct any representation $\rho$ via direct sums and changes of basis, i.e.:
$$ \rho = Q \left( \bigoplus_{j \in \mathcal{I}} \rho_j \right) Q^{-1} $$
where $\mathcal{I}$ is an index set (possibly with repetitions) over $\hat{G}$.
Internally, any `escnn.group.Representation` is indeed implemented as a list of irreps (representing the index set $\mathcal{I}$) and a change of basis $Q$.
An irrep is identified by a *tuple* `id`.
Let's see an example.
Let's take the regular representation of $C_8$ and check its decomposition into irreps:
```
rho = G.regular_representation
rho.irreps
rho.change_of_basis
# let's access second irrep
rho_id = rho.irreps[1]
rho_1 = G.irrep(*rho_id)
# we verify it is the irrep j=1 we described before
rho_1(g)
```
Finally, let's verify that this direct sum and this change of basis indeed yield the regular representation
```
# evaluate all the irreps in rho.irreps:
irreps = [
G.irrep(*irrep)(g) for irrep in rho.irreps
]
# build the direct sum
direct_sum = np.asarray(scipy.sparse.block_diag(irreps, format='csc').todense())
print('Regular representation of', g)
print(rho(g))
print()
print('Direct sum of the irreps:')
print(direct_sum)
print()
print('Apply the change of basis on the direct sum of the irreps:')
print(rho.change_of_basis @ direct_sum @ rho.change_of_basis_inv)
print()
print('Are the two representations equal?', np.allclose(rho(g), rho.change_of_basis @ direct_sum @ rho.change_of_basis_inv))
```
### 1.2 Fourier Transform
We can finally approach the harmonic analysis of functions over a group $G$.
Note that a representation $\rho: G \to \mathbb{R}^{d \times d}$ can be interpreted as a collection of $d^2$ functions over $G$, one for each matrix entry of $\rho$.
The **Peter-Weyl theorem** states that the collection of functions in the matrix entries of all irreps $\hat{G}$ of a group $G$ spans the space of all (square-integrable) functions over $G$.
This result gives us a way to parameterize functions over the group. This is the focus of this section.
In particular, this is useful to parameterize functions over groups with infinitely many elements.
In this section, we will first consider the *dihedral group* $D_8$ as example.
This is the group containing the $8$ planar rotations by angles that are multiples of $\frac{2\pi}{8}$, together with the *reflection* along the $X$ axis.
The group contains $16$ elements in total ($8$ pure rotations and $8$ rotations preceded by the reflection).
```
G = dihedral_group(8)
G.order()
# element representing the reflection (-) and no rotations
G.reflection
# element representing a rotation by pi/2 (i.e. 2 * 2pi/8) and no reflections (+)
G.element((0, 2))
# reflection followed by a rotation by pi/2
print(G.element((0, 2)) @ G.reflection)
# we can also directly generate this element as
print(G.element((1, 2)))
# a rotation by pi/2 followed by a reflection is equivalent to a reflection followed by a rotation by 6*2pi/8
G.reflection @ G.element((0, 2))
```
The list of all elements in the group is obtained as:
```
G.elements
```
#### Fourier and Inverse Fourier Transform
For most groups, the entries of the irreps not only span the space of functions but also form a basis (i.e. these functions are mutually orthogonal to each other).
Therefore, we can write a function $f: G \to \mathbb{R}$ as
$$ f(g) = \sum_{\rho_j \in \hat{G}} \sum_{m,n < d_j} w_{j,m,n} \cdot \sqrt{d_j} [\rho_j(g)]_{mn}$$
where $d_j$ is the dimension of the irrep $\rho_j$, while $m, n$ index the $d_j^2$ entries of $\rho_j$.
The coefficients $\{ w_{j, m, n} \in \mathbb{R} \}_{j, m, n}$ parameterize the function $f$ on this basis.
The $\sqrt{d_j}$ is a scalar factor to ensure the basis is normalized.
We rewrite this expression in a cleaner form by using the following fact.
If $A, B \in \mathbb{R}^{d \times d}$, then
$$\text{Tr}(A^T B) = \sum_{m, n < d} A_{mn} B_{mn} \in \mathbb{R} \ .$$
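This identity is straightforward to confirm numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((2, 3, 3))

# Tr(A^T B) equals the sum of the entry-wise products of A and B
assert np.isclose(np.trace(A.T @ B), (A * B).sum())
```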
By defining $\hat{f}(\rho_j) \in \mathbb{R}^{d_j \times d_j}$ as the matrix containing the $d_j^2$ coefficients $\{ w_{j, m, n} \in \mathbb{R} \}_{m, n < d_j}$, we can express the **Inverse Fourier Transform** as:
$$ f(g) = \sum_{\rho_j \in \hat{G}} \sqrt{d_j} \text{Tr}\left(\rho_j(g)^T \hat{f}(\rho_j)\right) $$
Similarly, we can project a general function $f: G \to \mathbb{R}$ on an element $\rho_{j,m,n}: G \to \mathbb{R}$ of the basis via:
$$ w_{j,m,n} = \frac{1}{|G|} \sum_{g \in G} f(g) \sqrt{d_j} [\rho_j(g)]_{m, n} \ . $$
The projection over all entries of $\rho_j$ can be more cleanly written as follows:
$$ \hat{f}(\rho_j) = \frac{1}{|G|} \sum_{g \in G} f(g) \sqrt{d_j} \rho_j(g) \ . $$
which we refer to as **Fourier Transform**.
If the group $G$ is *infinite*, we replace the average over the group elements with an *integral* over them:
$$ \hat{f}(\rho_j) = \int_G f(g) \sqrt{d_j} \rho_j(g) dg \ . $$
For a finite group $G$, we can access all its irreps by using the ``Group.irreps()`` method.
Let's see an example:
```
irreps = G.irreps()
print(f'The dihedral group D8 has {len(irreps)} irreps')
# the first one, is the 1-dimensional trivial representation
print(irreps[0] == G.trivial_representation == G.irrep(0, 0))
```
---
#### QUESTION 4
We can now implement the Fourier Transform and the Inverse Fourier Transform for the Dihedral Group $D_8$.
Using the equations above, implement the following methods:
---
```
def fourier_transform_D8(f: np.array):
# the method gets in input a function on the elements of D_8
# and should return a dictionary mapping each irrep's `id` to the corresponding Fourier Transform
# The i-th element of `f` stores the value of the function on the group element `G.elements[i]`
G = dihedral_group(8)
assert f.shape == (16,), f.shape
ft = {}
########################
# INSERT YOUR CODE HERE:
for rho in G.irreps():
d = rho.size
rho_g = np.stack([rho(g) for g in G.elements], axis=0)
ft[rho.id] = (f.reshape(-1, 1, 1) * rho_g).mean(0) * np.sqrt(d)
########################
return ft
def inverse_fourier_transform_D8(ft: dict):
# the method gets in input a dictionary mapping each irrep's `id` to the corresponding Fourier Transform
# and should return the function `f` on the elements of D_8
# The i-th element of `f` stores the value of the function on the group element `G.elements[i]`
G = dihedral_group(8)
f = np.zeros(16)
########################
# INSERT YOUR CODE HERE:
for rho in G.irreps():
d = rho.size
for i, g in enumerate(G.elements):
f[i] += np.sqrt(d) * (ft[rho.id] * rho(g)).sum()
########################
return f
```
We now want to verify that the **Fourier Transform** and the **Inverse Fourier Transform** are inverse of each other:
```
f = np.random.randn(16)
ft = fourier_transform_D8(f)
new_f = inverse_fourier_transform_D8(ft)
assert np.allclose(f, new_f)
```
#### Parameterizing functions over infinite groups
The Fourier transform also allows us to parameterize functions over infinite groups, such as $O(2)$, i.e. the group of all planar rotations and reflections.
```
G = o2_group()
# the group has infinitely many elements, so the `order` method just returns -1
G.order()
```
The equations remain the same, but this group has an *infinite* number of *irreps*.
We can, however, parameterize a function over the group by only considering a finite number of irreps in the sum inside the definition of *Inverse Fourier Transform*.
Let $\tilde{G} \subset \hat{G}$ be a finite subset of the irreps of $G$.
We can then write the following transforms within the subspace of functions spanned only by the entries of the irreps in $\tilde{G}$.
**Inverse Fourier Transform**:
$$ f(g) = \sum_{\rho_j \in \tilde{G}} \sqrt{d_j} \text{Tr}\left(\rho_j(g)^T \hat{f}(\rho_j)\right) $$
and **Fourier Transform**:
$$ \hat{f}(\rho_j) = \int_G f(g) \sqrt{d_j} \rho_j(g) dg \ . $$
---
#### QUESTION 5
We can now implement the Inverse Fourier Transform for the Orthogonal Group $O(2)$.
Since the group has infinitely many elements, we cannot store the values the function takes on each element.
Instead, we just sample the function on a particular element of the group:
---
```
def inverse_fourier_transform_O2(g: GroupElement, ft: dict):
# the method gets in input a dictionary mapping each irrep's `id` to the corresponding Fourier Transform
# and a group element `g`
# The method should return the value of the function evaluated on `g`.
G = o2_group()
f = 0
########################
# INSERT YOUR CODE HERE:
for rho, ft_rho in ft.items():
rho = G.irrep(*rho)
d = rho.size
f += np.sqrt(d) * (ft_rho * rho(g)).sum()
########################
return f
```
Let's plot a function.
First we generate a random function by using a few irreps.
```
irreps = [G.irrep(0, 0)] + [G.irrep(1, j) for j in range(3)]
ft = {
rho.id: np.random.randn(rho.size, rho.size)
for rho in irreps
}
```
Then, we generate a grid on the group on which to evaluate the function, i.e. we choose a finite set of elements of $G$.
Like the Dihedral group, $O(2)$ contains rotations (parameterized by an angle $\theta \in [0, 2\pi)$) and a reflection followed by any rotation.
For example:
```
G.sample()
```
To build our grid, we sample $100$ rotations and $100$ rotations preceded by a reflection:
```
N = 100
thetas = [i*2*np.pi/N for i in range(N)]
grid_rot = [G.element((0, theta)) for theta in thetas]
grid_refl = [G.element((1, theta)) for theta in thetas]
```
We now evaluate the function over all these elements and, finally, plot it:
```
f_rot = [
inverse_fourier_transform_O2(g, ft) for g in grid_rot
]
f_refl = [
inverse_fourier_transform_O2(g, ft) for g in grid_refl
]
plt.plot(thetas, f_rot, label='rotations')
plt.plot(thetas, f_refl, label='reflection + rotations')
plt.xlabel('theta [0, 2pi)')
plt.ylabel('f(g)')
plt.legend()
plt.show()
```
Observe that using more irreps allows one to parameterize more flexible functions.
Let's try to add some more:
```
irreps = [G.irrep(0, 0)] + [G.irrep(1, j) for j in range(8)]
ft = {
rho.id: np.random.randn(rho.size, rho.size)
for rho in irreps
}
f_rot = [
inverse_fourier_transform_O2(g, ft) for g in grid_rot
]
f_refl = [
inverse_fourier_transform_O2(g, ft) for g in grid_refl
]
plt.plot(thetas, f_rot, label='rotations')
plt.plot(thetas, f_refl, label='reflection + rotations')
plt.xlabel('theta [0, 2pi)')
plt.ylabel('f(g)')
plt.legend()
plt.show()
```
#### Fourier Transform of shifted functions
Recall that a group element $g \in G$ can act on a function $f: G \to \mathbb{R}$ as:
$$ [g.f](h) = f(g^{-1}h) \ .$$
The Fourier transform defined before has the convenient property that the Fourier transform of $f$ and of $[g.f]$ are related as follows:
$$\widehat{g.f}(\rho_j) = \rho_j(g) \hat{f}(\rho_j) $$
for any irrep $\rho_j$.
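For a cyclic group, this property specializes to the classical DFT shift theorem: cyclically shifting a function on $\mathbb{Z}_N$ multiplies its $m$-th Fourier coefficient by the phase $e^{-2\pi i mk/N}$, which plays the role of $\rho_j(g)$ for the (complex) irreps of $C_N$. A quick check with `np.fft`:

```python
import numpy as np

N, k = 8, 3
rng = np.random.default_rng(3)
f = rng.standard_normal(N)

gf = np.roll(f, k)      # [g.f](n) = f(n - k): the action of the k-th rotation of C_8

# the m-th Fourier coefficient of the shifted function picks up the phase e^{-2 pi i m k / N}
phase = np.exp(-2j * np.pi * k * np.arange(N) / N)
assert np.allclose(np.fft.fft(gf), phase * np.fft.fft(f))
```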
---
#### QUESTION 6
Prove the property above.
#### ANSWER 6
$$
\begin{align}
\widehat{g.f}(\rho_j)
&= \int_G [g.f](h) \sqrt{d_j} \rho_j(h) dh \\
&= \int_G f(g^{-1}h) \sqrt{d_j} \rho_j(h) dh \\
\text{Define $t = g^{-1}h$ and, therefore, $h=gt$:} \\
&= \int_G f(t) \sqrt{d_j} \rho_j(gt) dt \\
&= \int_G f(t) \sqrt{d_j} \rho_j(g)\rho_j(t) dt \\
&= \rho_j(g) \int_G f(t) \sqrt{d_j} \rho_j(t) dt \\
&= \rho_j(g) \hat{f}(\rho_j)
\end{align}
$$
---
We can verify this property visually:
```
irreps = [G.irrep(0, 0)] + [G.irrep(1, j) for j in range(8)]
# first, we generate a random function, as earlier
ft = {
rho.id: np.random.randn(rho.size, rho.size)
for rho in irreps
}
# second, we sample a random group element `g`
g = G.sample()
print(f'Transforming the function with g={g}')
# finally, we transform the Fourier coefficients as in the equations above:
gft = {
rho.id: rho(g) @ ft[rho.id]
for rho in irreps
}
# Let's now visualize the two functions:
f_rot = [
inverse_fourier_transform_O2(g, ft) for g in grid_rot
]
f_refl = [
inverse_fourier_transform_O2(g, ft) for g in grid_refl
]
gf_rot = [
inverse_fourier_transform_O2(g, gft) for g in grid_rot
]
gf_refl = [
inverse_fourier_transform_O2(g, gft) for g in grid_refl
]
plt.plot(thetas, f_rot, label='rotations')
plt.plot(thetas, f_refl, label='reflection + rotations')
plt.xlabel('theta [0, 2pi)')
plt.ylabel('f(g)')
plt.title('f')
plt.legend()
plt.show()
plt.plot(thetas, gf_rot, label='rotations')
plt.plot(thetas, gf_refl, label='reflection + rotations')
plt.xlabel('theta [0, 2pi)')
plt.ylabel('f(g)')
plt.title('g.f')
plt.legend()
plt.show()
```
#### From the Fourier Transform to the Regular Representation
For simplicity, we can stack all the Fourier coefficients (the output of the Fourier transform, that is, the input of the inverse Fourier transform) into a single vector.
We define the vector $\mathbf{f}$ as the stack of the columns of each Fourier coefficient matrix $\hat{f}(\rho_j)$.
Let's first introduce some notation.
We denote the stack of two vectors $\mathbf{v_1}, \mathbf{v_2}$ as $\mathbf{v_1} \oplus \mathbf{v_2}$.
The use of $\oplus$ is not random: if $\rho_1$ is a representation acting on $\mathbf{v_1}$ and $\rho_2$ is a representation acting on $\mathbf{v_2}$, then the *direct sum* $\rho_1 \oplus \rho_2$ acts on the concatenated vector $\mathbf{v_1} \oplus \mathbf{v_2}$.
Second, we denote by $\text{vec}(A)$ the vector which is the stack of the columns of a matrix $A$.
In `numpy`, this is written as `A.T.reshape(-1)`, where the transpose is necessary since `numpy` stacks rows by default.
Then, we write:
$$ \mathbf{f} = \bigoplus_{\rho_j} \text{vec}(\hat{f}(\rho_j)) \ .$$
Moreover, by using $\widehat{g.f}(\rho_j) = \rho_j(g) \hat{f}(\rho_j)$, we see that the vector containing the coefficients of the function $[g.f]$ will be:
$$
\bigoplus_{\rho_j} \text{vec}(\rho_j(g) \hat{f}(\rho_j)) =
\bigoplus_{\rho_j} \left(\bigoplus^{d_j} \rho_j(g)\right) \text{vec}(\hat{f}(\rho_j))
$$
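This identity is the column-stacking rule $\text{vec}(RA) = (I_d \otimes R)\,\text{vec}(A)$, where $I_d \otimes R$ is exactly a block-diagonal matrix with $d$ copies of $R$. A quick `numpy` check, with a $2\times 2$ rotation matrix playing the role of $\rho_j(g)$:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(4)
theta = 0.8
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # an irrep matrix rho_j(g)
A = rng.standard_normal((2, 2))                   # Fourier coefficients f_hat(rho_j)

vec = lambda M: M.T.reshape(-1)                   # stack the columns of M

# d = 2 copies of R on the diagonal act on the stacked columns of A:
assert np.allclose(vec(R @ A), block_diag(R, R) @ vec(A))
```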
In other words, the group $G$ is acting on the vector $\mathbf{f}$ with the following representation:
$$
\rho(g) = \bigoplus_{\rho_j} \bigoplus^{d_j} \rho_j(g)
$$
i.e. $\rho(g) \mathbf{f}$ is the vector containing the Fourier coefficients of the function $[g.f]$.
Note that, essentially, the representation $\rho$ acts on a vector space containing functions over $G$.
This should remind you of the **regular representation** we defined for *finite groups* earlier.
Indeed, it turns out that, if $G$ is finite, the representation $\rho$ we have just constructed is **isomorphic** (*equivalent*) to the *regular representation* defined earlier.
The change of basis $Q^{-1}$ is a matrix which performs the Fourier transform (mapping the function's values to its Fourier coefficients), while $Q$ performs the inverse Fourier transform.
More formally:
$$ \rho_\text{reg}(g) = Q \left(\bigoplus_{\rho_j} \bigoplus^{d_j} \rho_j(g) \right) Q^{-1} $$
where each irrep $\rho_j$ is repeated $d_j$ times, i.e. a number of times equal to its size.
**Intuition**: recall that a function $f : G \to \mathbb{R}$ is just a vector living in a vector space. Such a vector can be expressed with respect to any basis of this vector space. When we first introduced the *regular representation* for finite groups, we chose a basis where each axis is associated with a group element; in this basis, the action of $G$ is realized by a permutation of the axes. Here, instead, we defined a basis for the same vector space in which $G$ acts independently on different subsets of the axes, i.e. the action of $G$ is a block-diagonal matrix (the direct sum of irreps). This is often a more convenient choice of basis, as we will see later.
Let's verify this equivalence for the Dihedral group $D_8$:
```
G = dihedral_group(8)
rho_irreps = []
for rho_j in G.irreps():
d_j = rho_j.size
# repeat each irrep a number of times equal to its size
rho_irreps += [rho_j]*d_j
rho = directsum(rho_irreps)
print('The representations have the same size:')
print(rho.size, G.regular_representation.size)
print('And contain the same irreps:')
print(rho.irreps)
print(G.regular_representation.irreps)
# inverse Fourier transform matrix:
Q = G.regular_representation.change_of_basis
# Fourier transform matrix:
Qinv = G.regular_representation.change_of_basis_inv
# let's check that the two representations are indeed equivalent
g = G.sample()
rho_g = rho(g)
reg_g = G.regular_representation(g)
print()
print('Are the two representations equivalent?', np.allclose(Q @ rho_g @ Qinv, reg_g))
```
When $G$ is not finite, we cannot explicitly store the regular representation $\rho_\text{reg}$ or the change-of-basis matrix $Q$, since they are infinite-dimensional.
Nevertheless, as we have done earlier, we can just consider a subset of all functions, spanned only by a finite number of irreps.
We can then sample the function on any group element via the inverse Fourier transform when needed, without having to store all of its values.
This is the underlying idea we will exploit later to build GCNNs equivariant to infinite groups.
We can easily generate this representation as:
```
G = o2_group()
irreps = [G.irrep(0, 0)] + [G.irrep(1, j) for j in range(8)]
rho = G.spectral_regular_representation(*irreps, name='regular_representation')
rho.irreps
```
#### Irreps with redundant entries: the case of $SO(2)$
We conclude with a final note about the Fourier transform.
When we introduced it earlier, we said that the entries of the irreps form a **basis** for the functions over *most* groups.
Indeed, there exist some groups where the entries of the irreps are partially redundant and, therefore, form an *overcomplete* basis.
This is the case, for example, of the group of planar rotations $SO(2)$ (or the group of $N$ discrete rotations $C_N$).
Indeed, an irrep of $SO(2)$ has the form:
$$
\rho_j(r_\theta) = \begin{bmatrix}
\cos(j \cdot \theta) & -\sin(j \cdot \theta) \\
\sin(j \cdot \theta) & \cos(j \cdot \theta) \\
\end{bmatrix}
$$
for $\theta \in [0, 2\pi)$, where the integer $j \in \mathbb{N}$ is interpreted as the rotational *frequency*.
You can observe that the two columns of $\rho_j(r_\theta)$ contain redundant elements and span the same $2$-dimensional space of functions.
It is indeed sufficient to consider only one of the two columns to parameterize functions over $SO(2)$.
This also means that the irrep $\rho_j$ appears only once (instead of $d_j=2$ times) in the regular representation.
We don't generally need to worry much about this, since we can generate the representation as earlier:
```
G = so2_group()
irreps = [G.irrep(j) for j in range(8)]
rho = G.spectral_regular_representation(*irreps, name='regular_representation')
# observe that each irrep is now repeated only once, even if some are 2-dimensional
rho.irreps
```
## 2. From Group CNNs to Steerable CNNs
We consider a GCNN equivariant to a *semi-direct* product group $\mathbb{R}^n \rtimes G$, with compact group $G \leq O(n)$.
This setting covers equivariance to **isometries** (distance preserving transformations) of the Euclidean space $\mathbb{R}^n$; in particular, it includes equivariance to *translations* in $\mathbb{R}^n$ and to an origin-preserving symmetry $G$ (e.g. rotations or reflections in $n$ dimensions).
We call $G$ a **point group**.
If $G=O(n)$, the group of all rotations and reflections in $\mathbb{R}^n$, then $E(n) = \mathbb{R}^n \rtimes O(n)$ is called the **Euclidean group**, and includes all isometries of $\mathbb{R}^n$.
### 2.1 Feature Fields
In a GCNN, a feature map is a signal $f: \mathbb{R}^n \times G \to \mathbb{R}$.
The action of an element $(x, g) \in \mathbb{R}^n \rtimes G$ is:
$$ [(x, g).f](y,h):= f(g^{-1}(y-x), g^{-1}h) $$
where $x, y \in \mathbb{R}^n$ and $g, h \in G$.
---
#### QUESTION 7
Prove the action has indeed this form.
#### ANSWER 7
First, recall the group law: for any $(x, g)$ and $(y, h) \in \mathbb{R}^n \rtimes G$
$$
(x, g) \cdot (y, h) = (x + g.y, gh)
$$
where $x, y, g.y \in \mathbb{R}^n$ and $g, h \in G$.
Second, recall the inverse element is $(x, g)^{-1} = (-g^{-1}.x, g^{-1})$.
Then:
$$
[(x, g).f](y, h) = f((x, g)^{-1} \cdot (y, h)) = f(-g^{-1}.x + g^{-1}.y, g^{-1}h) = f(g^{-1}.(y-x), g^{-1}h)
$$
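The group law and the inverse used above can be checked numerically by realizing $G = SO(2)$ as $2\times 2$ rotation matrices; the helper functions below are ad-hoc illustrations, not part of escnn:

```python
import numpy as np

def rot(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# elements of R^2 x| SO(2) stored as pairs (x, g), with g a rotation matrix
def compose(a, b):
    (x, g), (y, h) = a, b
    return (x + g @ y, g @ h)          # (x, g) . (y, h) = (x + g.y, gh)

def inverse(a):
    x, g = a
    return (-g.T @ x, g.T)             # (x, g)^{-1} = (-g^{-1}.x, g^{-1}); g^{-1} = g^T

rng = np.random.default_rng(5)
a = (rng.standard_normal(2), rot(0.7))

# a^{-1} . a is the identity element e = (0, I):
x_e, g_e = compose(inverse(a), a)
assert np.allclose(x_e, 0) and np.allclose(g_e, np.eye(2))
```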
---
In a GCNN, a feature map $f$ is stored as a multi-dimensional array with an axis for each of the $n$ spatial dimensions and one for the group $G$.
In a steerable CNN, we replace the $G$ axis with a "Fourier" axis, which contains $c$ Fourier coefficients used to parameterize a function over $G$, as described in the previous section.
Again, let's call $\rho: G \to \mathbb{R}^{c \times c}$ the representation of $G$ acting on these $c$ coefficients.
The result is equivalent to a standard GCNN if $G$ is finite (and we have $c = |G|$), but we can now also use infinite $G$, such as $SO(2)$.
A feature map $f$ can now be interpreted as a vector field on the space $\mathbb{R}^n$, i.e.:
$$ f: \mathbb{R}^n \to \mathbb{R}^c $$
which assigns a $c$-dimensional feature vector $f(x)\in\mathbb{R}^c$ to each spatial position $x\in\mathbb{R}^n$.
We call such vector field a **feature vector field**.
The action of $\mathbb{R}^n \rtimes G$ on one such feature vector field is defined as:
$$ [(x, g).f](y):= \rho(g) f(g^{-1}(y-x)) $$
where $x, y \in \mathbb{R}^n$ and $g \in G$.
---
#### QUESTION 8
Prove that this is indeed the right action of $\mathbb{R}^n \rtimes G$ on the feature vector field $f: \mathbb{R}^n \to \mathbb{R}^c$.
Recall the action of this group over the functions of the form $\underline{f}: \mathbb{R}^n \rtimes G \to \mathbb{R}$ that we described earlier.
Moreover, note that the vector $f(x) \in \mathbb{R}^c$ contains the $c$ Fourier coefficients of the function $\underline{f}(x, \cdot) : G \to \mathbb{R}$ along its $G$ axis, i.e.:
$$
f(x) = \bigoplus_{\rho_j} \text{vec}\left(\widehat{\underline{f}(x, \cdot)}(\rho_j)\right)
$$
#### ANSWER 8:
We know from the previous question that
$$
[(x, g).\underline{f}](y, h) = \underline{f}(g^{-1}(y-x), g^{-1}h)
$$
Recall also that $\rho(g) = \bigoplus_{\rho_j} \bigoplus^{d_j} \rho_j(g) \in \mathbb{R}^{c \times c}$ is the regular representation of $G$ acting on the vector of Fourier coefficients.
Then:
$$
\begin{align}
[(x, g).f](y)
&= \bigoplus_{\rho_j} \text{vec}\left(\left[\widehat{[(x, g).\underline{f}](y, \cdot)}\right](\rho_j)\right) \\
&= \bigoplus_{\rho_j} \text{vec}\left(\left[\widehat{\underline{f}(g^{-1}(y-x), g^{-1}\cdot)}\right](\rho_j)\right) \\
&= \bigoplus_{\rho_j} \text{vec}\left(\rho_j(g) \left[\widehat{\underline{f}(g^{-1}(y-x), \cdot)}\right](\rho_j)\right) \\
&= \rho(g) f(g^{-1}(y-x))
\end{align}
$$
Note that in the equations above, the square brackets in $[\widehat{\cdot}]$ indicate that the hat covers the whole content of the brackets.
---
### General Steerable CNNs
The framework of Steerable CNNs is actually more general and allows for any representation $\rho$ of $G$.
A different choice of $\rho$ generally requires some structural change in the architecture, e.g. adapting the non-linearities used to ensure equivariance.
For simplicity, however, we will stick with the Fourier example in this tutorial.
Throughout the rest of this tutorial, we will assume $n=2$ for simplicity.
That means we will be working for example with planar images and with the isometries of the plane (2D rotations or mirroring).
The actions of $g \in G=SO(2)$ on two examples of feature vector fields over $\mathbb{R}^2$ are shown next.
On the left, $\rho$ is the trivial representation of $SO(2)$ while, on the right, $\rho$ is the representation of $SO(2)$ as $2\times 2$ rotation matrices.

### 2.2 Defining a Steerable CNN
We can now proceed with building a Steerable CNN.
First we import some other useful packages.
```
from escnn import group
from escnn import gspaces
from escnn import nn
```
First, we need to choose the group $G$ of point symmetries (reflections and rotations) which are being considered.
All of these choices are subgroups $G\leq O(2)$ of the orthogonal group.
For simplicity, we first consider the *finite* group $G=C_4$, which models the $4$ *rotations* by angle $\theta \in \big\{0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}\big\}$.
Because these are perfect symmetries of the grid, transforming an image with this group does not require any interpolation.
We will later extend our examples to an infinite group such as $SO(2)$ or $O(2)$.
Recall that a semi-direct product $\mathbb{R}^2 \rtimes G$ is defined by $G$ but also by the action of $G$ on $\mathbb{R}^2$.
We determine both the **point group** $G$ and its **action on the space** $\mathbb{R}^2$ by instantiating a subclass of `gspace.GSpace`.
For the rotational action of $G=C_4$ on $\mathbb{R}^2$ this is done by:
```
r2_act = gspaces.rot2dOnR2(N=4)
r2_act
# we can access the group G as
G = r2_act.fibergroup
G
```
Having specified the symmetry transformation on the *base space* $\mathbb{R}^2$, we next need to define the representation $\rho: G \to \mathbb{R}^{c \times c}$ which describes how a **feature vector field** $f : \mathbb{R}^2 \to \mathbb{R}^c$ transforms under the action of $G$.
This transformation law of feature fields is implemented by ``nn.FieldType``.
We instantiate the `nn.FieldType` modeling a GCNN feature by passing it the `gspaces.GSpace` instance and the *regular representation* of $G=C_4$.
We call a feature field associated with the regular representation $\rho_\text{reg}$ a **regular feature field**.
```
feat_type = nn.FieldType(r2_act, [G.regular_representation])
feat_type
```
Recall that the regular representation of a finite group $G$ built by `G.regular_representation` is a permutation matrix of shape $|G| \times |G|$:
```
G.regular_representation(G.sample())
```
#### Deep Feature spaces
The deep feature spaces of a GCNN typically comprise multiple channels.
Similarly, the feature spaces of a steerable CNN can include multiple independent feature fields.
This is achieved via the **direct sum**, by stacking multiple copies of $\rho$.
For example, we can use $3$ copies of the regular representation $\rho_\text{reg}: G \to \mathbb{R}^{|G| \times |G|}$.
The full feature space is in this case modeled as a *stacked* field $f: \mathbb{R}^2 \to \mathbb{R}^{3|G|}$ which transforms according to the **direct sum** of three regular representations:
$$
\rho(r_\theta)
\ =\ \rho_\text{reg}(r_\theta) \oplus \rho_\text{reg}(r_\theta) \oplus \rho_\text{reg}(r_\theta)
\ =\ \begin{bmatrix}
\rho_\text{reg}(r_\theta) & 0 & 0 \\
0 & \rho_\text{reg}(r_\theta) & 0 \\
0 & 0 & \rho_\text{reg}(r_\theta) \\
\end{bmatrix}
\quad\in\ \mathbb{R}^{3|G| \times 3|G|}
$$
We instantiate a `nn.FieldType` composed of $3$ regular representations by passing the full field representation as a list of three regular representations:
```
# Technically, one can also construct the direct-sum representation G.regular_representation + G.regular_representation + G.regular_representation as done
# before. Passing a list containing 3 copies of G.regular_representation allows for more efficient implementation of certain operations internally.
feat_type = nn.FieldType(r2_act, [G.regular_representation]*3)
feat_type
```
#### Input Features
Each hidden layer of a steerable CNN has its own transformation law which the user needs to specify (equivalent to the choice of number of channels in each layer of a conventional CNN).
The *input* and *output* of a steerable CNN are also feature fields and their type (i.e. transformation law) is typically determined by the inference task.
The most common example is that of gray-scale input images.
A rotation of a gray-scale image is performed by moving each pixel to a new position without changing their intensity values.
The invariance of the scalar pixel values under rotations is modeled by the **trivial representation** $\rho_0: G\to\mathbb{R},\ g\mapsto 1$ of $G$ and identifies them as **scalar fields**.
Formally, a scalar field is a function $f: \mathbb{R}^2 \to \mathbb{R}$ mapping to a feature vector with $c=1$ channels.
A rotation $r_\theta \in C_4$ transforms this scalar field as
$$ \big[r_{\theta}\,. f\big](x)
\ :=\ \rho_0(r_\theta)\,f\big(r_\theta^{-1}x\big)
\ =\ 1\cdot f\big(r_\theta^{-1}x\big)
\ =\ f\big(r_\theta^{-1}x\big) \ .
$$
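For a $90°$ rotation on a pixel grid this transformation is a pure permutation of the pixels, e.g. `np.rot90`: the intensity values are only moved around, never changed. A small sanity check:

```python
import numpy as np

rng = np.random.default_rng(6)
f = rng.standard_normal((4, 4))     # a scalar field sampled on a 4x4 grid
rf = np.rot90(f)                    # [r_{pi/2}.f](x) = f(r_{pi/2}^{-1} x)

# rotating a scalar field only permutes the pixels; the intensities are untouched
assert np.allclose(np.sort(f.ravel()), np.sort(rf.ravel()))
# four successive 90-degree rotations give back the original field
assert np.allclose(np.rot90(f, k=4), f)
```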
We instantiate the `nn.FieldType` modeling a gray-scale image by passing it the trivial representation of $G$:
```
feat_type_in = nn.FieldType(r2_act, [G.trivial_representation])
feat_type_in
```
#### Equivariant Layers
When we build a model **equivariant** to a group $G$, we require that the output produced by the model transforms consistently when the input transforms under the action of an element $g \in G$.
For a function $F$ (e.g. a neural network), the **equivariance constraint** requires:
$$ \mathcal{T}^\text{out}_g \big[F(x)\big]\ =\ F\big(\mathcal{T}^\text{in}_g[x]\big) \quad \forall g\in G$$
where $\mathcal{T}^\text{in}_g$ is the transformation of the input by the group element $g$ while $\mathcal{T}^\text{out}_g$ is the transformation of the output by the same element.
The *field type* `feat_type_in` we have just defined above precisely describes $\mathcal{T}^\text{in}$.
The transformation law $\mathcal{T}^\text{out}$ of the output of the first layer is similarly chosen by defining an instance `feat_type_out` of `nn.FieldType`.
For example, let's use $3$ *regular feature fields* in output:
```
feat_type_out = nn.FieldType(r2_act, [G.regular_representation]*3)
```
As a shortcut, we can also use:
```
feat_type_in = nn.FieldType(r2_act, [r2_act.trivial_repr])
feat_type_out = nn.FieldType(r2_act, [r2_act.regular_repr]*3)
```
Once having defined how the input and output feature spaces should transform, we can build neural network functions as **equivariant modules**.
These are implemented as subclasses of an abstract base class `nn.EquivariantModule` which itself inherits from `torch.nn.Module`.
**Equivariant Convolution Layer**: We start by instantiating a convolutional layer that maps between fields of types `feat_type_in` and `feat_type_out`.
Let $\rho_\text{in}: G \to \mathbb{R}^{c_\text{in} \times c_\text{in}}$ and $\rho_\text{out}: G \to \mathbb{R}^{c_\text{out} \times c_\text{out}}$ be respectively the representations of $G$ associated with `feat_type_in` and `feat_type_out`.
Then, an equivariant convolution layer is a standard convolution layer with a filter $k: \mathbb{R}^2 \to \mathbb{R}^{c_\text{out} \times c_\text{in}}$ (note the number of input and output channels) which satisfies a particular **steerability constraint**:
$$
\forall g \in G, x \in \mathbb{R}^2 \quad k(g.x) = \rho_\text{out}(g) k(x) \rho_\text{in}(g)^{-1}
$$
In particular, the use of convolutions guarantees translation equivariance, while the fact that the filters satisfy this steerability constraint guarantees $G$-equivariance.
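To make the constraint concrete, here is a toy numerical check (a sketch with plain numpy, not escnn's actual solver): for $G=C_4$ with a trivial input type and one regular output field, a filter whose four output channels are the four $90°$-rotated copies of a single base filter satisfies the constraint, and correlating with it is exactly equivariant on the pixel grid.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlate2d_valid(img, ker):
    # plain 'valid' cross-correlation, enough for this toy check
    k = ker.shape[0]
    H, W = img.shape
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * ker)
    return out

# a C4-steerable filter (trivial input type -> one regular output field):
# the four output channels are the four 90-degree rotations of a base filter
k0 = rng.standard_normal((3, 3))
kernels = [np.rot90(k0, c) for c in range(4)]

f = rng.standard_normal((8, 8))
out = np.stack([correlate2d_valid(f, k) for k in kernels])                # F(x)
out_rot = np.stack([correlate2d_valid(np.rot90(f), k) for k in kernels])  # F(g.x)

# equivariance: F(g.x) equals g.F(x), i.e. a 90-degree rotation of the
# feature maps combined with a cyclic permutation of the 4 channels
shifts = [s for s in range(4)
          if all(np.allclose(out_rot[c], np.rot90(out[(c + s) % 4])) for c in range(4))]
print("consistent channel shift:", shifts)
```

Exactly one cyclic channel shift makes all four channels match, which is precisely the action of the regular representation on the output field.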
---
#### QUESTION 9
Show that if a filter $k: \mathbb{R}^2 \to \mathbb{R}^{c_\text{out} \times c_\text{in}}$ satisfies the constraint above, the convolution with it is equivariant to $G$, i.e. show that
$$
f_\text{out} = k \star f_\text{in} \implies [g.f_\text{out}] = k \star [g.f_\text{in}]
$$
for all $g \in G$.
The action on the features $f_\text{in}$ and $f_\text{out}$ is the one previously defined, i.e:
$$
[g.f_\text{in}](x) = \rho_\text{in}(g) f_\text{in}(g^{-1}x)
$$
and
$$
[g.f_\text{out}](x) = \rho_\text{out}(g) f_\text{out}(g^{-1}x)
$$
while the convolution is defined as
$$
f_\text{out}(y) = [k \star f_\text{in}](y) = \int_{\mathbb{R}^2} k(x-y) f_\text{in}(x) dx
$$
#### ANSWER 9
Note that, because $k$ satisfies the steerability constraint, it follows that $k(x) = \rho_\text{out}(g) k(g^{-1}.x) \rho_\text{in}(g)^{-1}$.
Then:
$$
\begin{align}
k \star [g.f_\text{in}](y)
&= \int_{\mathbb{R}^2} k(x-y) [g.f_\text{in}](x) dx \\
&= \int_{\mathbb{R}^2} k(x-y) \rho_\text{in}(g)f_\text{in}(g^{-1}x) dx \\
&= \rho_\text{out}(g) \int_{\mathbb{R}^2} k(g^{-1}.(x-y))f_\text{in}(g^{-1}x) dx \\
\text{Substituting $z = g^{-1}.x$ (a rotation preserves the measure, so $dz = dx$):} \\
&= \rho_\text{out}(g) \int_{\mathbb{R}^2} k(z - g^{-1}.y)f_\text{in}(z) dz \\
&= \rho_\text{out}(g) f_\text{out}(g^{-1}.y) \\
&= [g.f_\text{out}](y)
\end{align}
$$
---
The steerability constraint restricts the space of possible learnable filters to a smaller space of equivariant filters.
Solving this constraint goes beyond the scope of this tutorial; fortunately, the `nn.R2Conv` module takes care of properly parameterizing the filter $k$ such that it satisfies the constraint.
```
conv = nn.R2Conv(feat_type_in, feat_type_out, kernel_size=3)
```
Each equivariant module has an input and output type.
As a function (`.forward()`), it *requires* its inputs to transform according to its input type and is guaranteed to return feature fields associated with its output type.
To prevent the user from accidentally feeding an incorrectly transforming input field into an equivariant module, the library performs dynamic type checking.
To do so, feature fields are passed around as **geometric tensors**: data containers which wrap a *PyTorch* `torch.Tensor` and augment it with an instance of `FieldType`.
Let's build a few random 32x32 gray-scale images and wrap them into an `nn.GeometricTensor`:
```
x = torch.randn(4, 1, 32, 32)
# FieldType is a callable object; its call method can be used to wrap PyTorch tensors into GeometricTensors
x = feat_type_in(x)
assert isinstance(x.tensor, torch.Tensor)
assert isinstance(x, nn.GeometricTensor)
```
As usually done in *PyTorch*, an image or feature map is stored in a 4-dimensional array of shape BxCxHxW, where B is the batch size, C is the number of channels, and H and W are the spatial dimensions.
We can feed a geometric tensor to an equivariant module as we feed normal tensors in *PyTorch*'s modules:
```
y = conv(x)
```
We can verify that the output is indeed associated with the output type of the convolutional layer:
```
assert y.type == feat_type_out
```
Let's check whether the output transforms as described by the output type when the input transforms according to the input type.
The $G$-transformation of a geometric tensor is hereby conveniently done by calling `nn.GeometricTensor.transform()`.
```
# for each group element
for g in G.elements:
    # transform the input with the current group element according to the input type
    x_transformed = x.transform(g)
    # feed the transformed input in the convolutional layer
    y_from_x_transformed = conv(x_transformed)
    # the result should be equivalent to rotating the output produced in the
    # previous block according to the output type
    y_transformed_from_x = y.transform(g)
    assert torch.allclose(y_from_x_transformed.tensor, y_transformed_from_x.tensor, atol=1e-5), g
Any network operation is required to be equivariant.
`escnn.nn` provides a wide range of equivariant network modules which guarantee this behavior.
**Non-Linearities**:
As an example, we will next apply an *equivariant nonlinearity* to the output feature field of the convolution.
Since the regular representation of a finite group $G$ consists of permutation matrices, any pointwise nonlinearity like *ReLU* is equivariant.
Note that this is *not* the case for many other choices of representations / field types!
We instantiate an `escnn.nn.ReLU` which, as an `nn.EquivariantModule`, needs to be informed about its input type in order to perform type checking.
Here we pass `feat_type_out`, the output type of the equivariant convolution layer, as input type.
It is not necessary to pass an output type to the nonlinearity since it is determined by the input type.
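Both facts are easy to verify numerically (a minimal numpy sketch; the $2\times 2$ rotation matrix plays the role of the frequency-$1$ irrep of $SO(2)$):

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda t: np.maximum(t, 0.0)

# regular representation of C4: a cyclic permutation matrix
P = np.roll(np.eye(4), 1, axis=0)
v = rng.standard_normal(4)
# pointwise nonlinearities commute with permutations...
assert np.allclose(relu(P @ v), P @ relu(v))

# ...but not with a generic rotation matrix (frequency-1 irrep of SO(2))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
w = np.array([1.0, -1.0])
print(np.allclose(relu(R @ w), R @ relu(w)))  # False: equivariance is broken
```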
```
relu = nn.ReLU(feat_type_out)
z = relu(y)
```
We can verify the equivariance again:
```
# for each group element
for g in G.elements:
    y_transformed = y.transform(g)
    z_from_y_transformed = relu(y_transformed)
    z_transformed_from_y = z.transform(g)
    assert torch.allclose(z_from_y_transformed.tensor, z_transformed_from_y.tensor, atol=1e-5), g
```
**Deeper Models**: In *deep learning* we usually want to stack multiple layers to build a deep model.
As long as each layer is equivariant and consecutive layers are compatible, the equivariance property is preserved by induction.
The compatibility of two consecutive layers requires the output type of the first layer to be equal to the input type of the second layer.
In case we feed an input with the wrong type to a module, an error is raised:
```
layer1 = nn.R2Conv(feat_type_in, feat_type_out, kernel_size=3)
layer2 = nn.ReLU(feat_type_in) # the input type of the ReLU should be the output type of the convolution
x = feat_type_in(torch.randn(3, 1, 7, 7))
try:
    y = layer2(layer1(x))
except AssertionError as e:
    print(e)
```
Simple deeper architectures can be built using a **SequentialModule**:
```
feat_type_in = nn.FieldType(r2_act, [r2_act.trivial_repr])
feat_type_hid = nn.FieldType(r2_act, 8*[r2_act.regular_repr])
feat_type_out = nn.FieldType(r2_act, 2*[r2_act.regular_repr])
model = nn.SequentialModule(
nn.R2Conv(feat_type_in, feat_type_hid, kernel_size=3),
nn.InnerBatchNorm(feat_type_hid),
nn.ReLU(feat_type_hid, inplace=True),
nn.R2Conv(feat_type_hid, feat_type_hid, kernel_size=3),
nn.InnerBatchNorm(feat_type_hid),
nn.ReLU(feat_type_hid, inplace=True),
nn.R2Conv(feat_type_hid, feat_type_out, kernel_size=3),
).eval()
```
As every layer is equivariant and consecutive layers are compatible, the whole model is equivariant.
```
x = torch.randn(1, 1, 17, 17)
x = feat_type_in(x)
y = model(x)
# for each group element
for g in G.elements:
    x_transformed = x.transform(g)
    y_from_x_transformed = model(x_transformed)
    y_transformed_from_x = y.transform(g)
    assert torch.allclose(y_from_x_transformed.tensor, y_transformed_from_x.tensor, atol=1e-5), g
```
**Invariant Pooling Layer**: Usually, at the end of the model we want to produce a single feature vector to use for classification.
To do so, it is common to pool over the spatial dimensions, e.g. via average pooling.
This produces (approximately) translation-invariant feature vectors.
```
# average pooling with window size 11
avgpool = nn.PointwiseAvgPool(feat_type_out, 11)
y = avgpool(model(x))
print(y.shape)
```
In our case, the feature vectors $f(x)\in\mathbb{R}^c$ associated to each point $x\in\mathbb{R}^2$ have a well defined transformation law.
The output of the model now transforms according to `feat_type_out` (here two $C_4$ regular fields, i.e. 8 channels).
For our choice of regular representations (which are permutation representations) the channels in the feature vectors associated to each point permute when the input is rotated.
```
for g in G.elements:
    print(f'rotation by {g}:', y.transform(g).tensor[0, ...].detach().numpy().squeeze())
```
Many learning tasks require to build models which are **invariant** under rotations.
We can compute invariant features from the output of the model using an **invariant map**.
For instance, we can take the maximum value within each regular field.
We do so using `nn.GroupPooling`:
```
invariant_map = nn.GroupPooling(feat_type_out)
y = invariant_map(avgpool(model(x)))
for g in G.elements:
    print(f'rotation by {g}:', y.transform(g).tensor[0, ...].detach().numpy().squeeze())

# for each group element
for g in G.elements:
    # rotate the input image
    x_transformed = x.transform(g)
    y_from_x_transformed = invariant_map(avgpool(model(x_transformed)))
    y_transformed_from_x = y  # no .transform(g) needed since y should be invariant!
    # check that the output did not change
    # note that here we are not rotating the original output y as before
    assert torch.allclose(y_from_x_transformed.tensor, y_transformed_from_x.tensor, atol=1e-6), g
```
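The invariance of group pooling is easy to see in isolation (a toy numpy sketch, assuming only that the channels of each regular field cyclically permute under a rotation, as discussed above):

```python
import numpy as np

rng = np.random.default_rng(4)

# two regular C4 fields: 4 channels each, which cyclically permute under r_{pi/2}
y = rng.standard_normal((2, 4))
y_rot = np.roll(y, 1, axis=1)  # action of a rotation: shift channels within each field

# group pooling: max over the channels of each field -> invariant features
assert np.allclose(y.max(axis=1), y_rot.max(axis=1))
print(y.max(axis=1))
```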
### 2.3 Steerable CNN with infinite group $G$
We can now repeat the same constructions with $G$ being an infinite group, e.g. the group of all planar rotations $G=SO(2)$.
```
# use N=-1 to indicate all rotations
r2_act = gspaces.rot2dOnR2(N=-1)
r2_act
G = r2_act.fibergroup
G
# For simplicity we take a single-channel gray-scale image in input and we output a single-channel gray-scale image, i.e. we use scalar fields in input and output
feat_type_in = nn.FieldType(r2_act, [G.trivial_representation])
feat_type_out = nn.FieldType(r2_act, [G.trivial_representation])
```
As intermediate feature types, we again want to use the *regular representation*.
Because $G$ has an infinite number of elements, we use the Fourier transform idea described earlier.
For example, we will use the first three irreps of $G=SO(2)$, which contain cosines and sines of frequencies $0$, $1$ and $2$.
Earlier, we built this representation as
``rho = G.spectral_regular_representation(*[G.irrep(f) for f in range(3)])``
To apply a non-linearity, e.g. ELU, we can use the *inverse Fourier transform* to sample the function, apply the non-linearity and, finally, compute the *Fourier transform* to recover the coefficients.
Because $G$ has infinitely many elements, the Fourier transform requires an integral over $G$; this can be **approximated** by a sum over a finite number of samples.
The more samples one takes, the better the approximation, although this also increases the computational cost.
Fortunately, the class `nn.FourierELU` takes care of most of these details.
We can just specify which `irreps` to consider, the number of `channels` (i.e. copies of the regular representation) and the number `N` of elements of $G$ where to sample the function:
```
nonlinearity = nn.FourierELU(r2_act, 16, irreps=[(f,) for f in range(3)], N=12)
# we do not need to pre-define the feature type: FourierELU will create it internally and we can just access it as
feat_type_hid = nonlinearity.in_type
# note also that its input and output types are the same
assert nonlinearity.in_type == nonlinearity.out_type
```
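The sample-apply-project idea behind `FourierELU` can be sketched in plain numpy (a hand-built real Fourier basis on $SO(2)$; a toy illustration, not the library's implementation):

```python
import numpy as np

# A band-limited function on SO(2) is stored via its Fourier coefficients.
# To apply ELU we sample it on N angles (inverse FT), apply ELU pointwise,
# and project back onto the basis (FT). The projection discards the higher
# frequencies the nonlinearity generates, which is where the approximation lies.
def elu(x, a=1.0):
    return np.where(x > 0, x, a * (np.exp(x) - 1.0))

N, max_freq = 12, 2
thetas = 2 * np.pi * np.arange(N) / N
# real Fourier basis: 1, cos(f t), sin(f t) for f = 1..max_freq
basis = np.stack([np.ones(N)] + [tr for f in range(1, max_freq + 1)
                                 for tr in (np.cos(f * thetas), np.sin(f * thetas))],
                 axis=1)
coeffs = np.random.default_rng(3).standard_normal(basis.shape[1])

samples = basis @ coeffs                  # inverse Fourier transform (sampling)
new_samples = elu(samples)                # pointwise nonlinearity on the samples
new_coeffs, *_ = np.linalg.lstsq(basis, new_samples, rcond=None)  # project back
print(new_coeffs.shape)                   # same number of coefficients as before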
Let's build a simple $G=SO(2)$ equivariant model:
```
equivariant_so2_model = nn.SequentialModule(
nn.R2Conv(feat_type_in, feat_type_hid, kernel_size=7),
nn.IIDBatchNorm2d(feat_type_hid),
nonlinearity,
nn.R2Conv(feat_type_hid, feat_type_hid, kernel_size=7),
nn.IIDBatchNorm2d(feat_type_hid),
nonlinearity,
nn.R2Conv(feat_type_hid, feat_type_out, kernel_size=7),
).eval()
```
and check its equivariance to a few elements of $SO(2)$:
```
x = torch.randn(1, 1, 23, 23)
x = feat_type_in(x)
y = equivariant_so2_model(x)
# check equivariance to N=16 rotations
N = 16
try:
    for i in range(N):
        g = G.element(i*2*np.pi/N)
        x_transformed = x.transform(g)
        y_from_x_transformed = equivariant_so2_model(x_transformed)
        y_transformed_from_x = y.transform(g)
        assert torch.allclose(y_from_x_transformed.tensor, y_transformed_from_x.tensor, atol=1e-3), g
except AssertionError:
    print('Error! The model is not equivariant!')
```
---
#### QUESTION 10
The model is not perfectly equivariant to $G=SO(2)$! Why is this expected behaviour?
#### ANSWER 10
The group $SO(2)$ includes all continuous planar rotations.
However, when an image is represented on a pixel grid, only the $4$ rotations by angles that are multiples of $\pi/2$ are exact, while other rotations involve some form of interpolation and generally introduce some noise.
This prevents perfect equivariance to all rotations, since rotated versions of the same image inherently include some noise.
A similar argument applies to the filters used during convolution: the steerability constraint described before involves a rotation of the filter $k$ itself, but the filter also needs to be represented on a discrete grid.
---
While the model cannot be perfectly equivariant, we can compare it with a *conventional CNN* baseline.
Let's build a CNN similar to our equivariant model but which is not constrained to be equivariant:
```
conventional_model = torch.nn.Sequential(
torch.nn.Conv2d(feat_type_in.size, feat_type_hid.size, kernel_size=7),
torch.nn.BatchNorm2d(feat_type_hid.size),
torch.nn.ELU(),
torch.nn.Conv2d(feat_type_hid.size, feat_type_hid.size, kernel_size=7),
torch.nn.BatchNorm2d(feat_type_hid.size),
torch.nn.ELU(),
torch.nn.Conv2d(feat_type_hid.size, feat_type_out.size, kernel_size=7),
).eval()
```
To compare the two models, we compute their *equivariance error* for a few elements of $G$.
We define the equivariance error of a model $F$ with respect to a group element $g \in G$ and an input $x$ as:
$$
\epsilon_g(F) = \frac{||F(g.x) - g.F(x)||_2}{||F(x)||_2}
$$
Note that this is a form of *relative* error.
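As a toy sketch of this metric (a hypothetical numpy helper, not the tutorial's actual code below), here we evaluate it on a function that is exactly invariant to $90°$ rotations, so the error vanishes:

```python
import numpy as np

def equivariance_error(F, g_in, g_out, x):
    # eps_g(F) = ||F(g.x) - g.F(x)|| / ||F(x)||, a *relative* error
    Fx = F(x)
    return np.linalg.norm(F(g_in(x)) - g_out(Fx)) / np.linalg.norm(Fx)

# the mean absolute intensity of an image is exactly invariant to 90-degree
# rotations, so its equivariance error (with trivial output action) vanishes
x = np.random.default_rng(2).standard_normal((8, 8))
err = equivariance_error(lambda t: np.array([np.abs(t).mean()]),
                         np.rot90, lambda y: y, x)
assert err < 1e-12
```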
Let's now compute the equivariance error of the two models:
```
# let's generate a random image of shape W x W
W = 37
x = torch.randn(1, 1, W, W)
# Because a rotation by an angle smaller than 90 degrees moves pixels outside the image, we mask out all pixels outside the central disk
# We need to do this both for the input and the output
def build_mask(W):
    center_mask = np.zeros((2, W, W))
    center_mask[1, :, :] = np.arange(0, W) - W // 2
    center_mask[0, :, :] = np.arange(0, W) - W // 2
    center_mask[0, :, :] = center_mask[0, :, :].T
    center_mask = center_mask[0, :, :] ** 2 + center_mask[1, :, :] ** 2 < .9*(W // 2) ** 2
    center_mask = torch.tensor(center_mask.reshape(1, 1, W, W), dtype=torch.float)
    return center_mask
# create the mask for the input
input_center_mask = build_mask(W)
# mask the input image
x = x * input_center_mask
x = feat_type_in(x)
# compute the output of both models
y_equivariant = equivariant_so2_model(x)
y_conventional = feat_type_out(conventional_model(x.tensor))
# create the mask for the output images
output_center_mask = build_mask(y_equivariant.shape[-1])
# We evaluate the equivariance error on N=100 rotations
N = 100
error_equivariant = []
error_conventional = []
# for each of the N rotations
for i in range(N+1):
    g = G.element(i / N * 2*np.pi)
    # rotate the input
    x_transformed = x.transform(g)
    x_transformed.tensor *= input_center_mask
    # F(g.x): feed the transformed images to both models
    y_from_x_transformed_equivariant = equivariant_so2_model(x_transformed).tensor
    y_from_x_transformed_conventional = conventional_model(x_transformed.tensor)
    # g.F(x): transform the output of both models
    y_transformed_from_x_equivariant = y_equivariant.transform(g)
    y_transformed_from_x_conventional = y_conventional.transform(g)
    # mask all the outputs
    y_from_x_transformed_equivariant = y_from_x_transformed_equivariant * output_center_mask
    y_from_x_transformed_conventional = y_from_x_transformed_conventional * output_center_mask
    y_transformed_from_x_equivariant = y_transformed_from_x_equivariant.tensor * output_center_mask
    y_transformed_from_x_conventional = y_transformed_from_x_conventional.tensor * output_center_mask
    # compute the relative error of both models
    rel_error_equivariant = torch.norm(y_from_x_transformed_equivariant - y_transformed_from_x_equivariant).item() / torch.norm(y_equivariant.tensor).item()
    rel_error_conventional = torch.norm(y_from_x_transformed_conventional - y_transformed_from_x_conventional).item() / torch.norm(y_conventional.tensor).item()
    error_equivariant.append(rel_error_equivariant)
    error_conventional.append(rel_error_conventional)
# plot the error of both models as a function of the rotation angle theta
fig, ax = plt.subplots(figsize=(10, 6))
xs = [i*2*np.pi / N for i in range(N+1)]
plt.plot(xs, error_equivariant, label='SO(2)-Steerable CNN')
plt.plot(xs, error_conventional, label='Conventional CNN')
plt.title('Equivariant vs Conventional CNNs', fontsize=20)
plt.xlabel(r'$g = r_\theta$', fontsize=20)
plt.ylabel('Equivariance Error', fontsize=20)
ax.tick_params(axis='both', which='major', labelsize=15)
plt.legend(fontsize=20)
plt.show()
```
## 3. Build and Train Steerable CNNs
Finally, we will proceed with implementing a **Steerable CNN** and train it on rotated MNIST.
### Dataset
We will evaluate the model on the *rotated* MNIST dataset.
First, we download the (non-rotated) MNIST 12k data:
```
# download the dataset
!wget -nc http://www.iro.umontreal.ca/~lisa/icml2007data/mnist.zip
# uncompress the zip file
!unzip -n mnist.zip -d mnist
```
Then, we build the dataset and some utility functions:
```
from torch.utils.data import Dataset
from torchvision.transforms import RandomRotation
from torchvision.transforms import Pad
from torchvision.transforms import Resize
from torchvision.transforms import ToTensor
from torchvision.transforms import Compose
from tqdm.auto import tqdm
from PIL import Image
device = 'cuda' if torch.cuda.is_available() else 'cpu'
class MnistDataset(Dataset):
def __init__(self, mode, rotated: bool = True):
assert mode in ['train', 'test']
if mode == "train":
file = "mnist/mnist_train.amat"
else:
file = "mnist/mnist_test.amat"
data = np.loadtxt(file)
images = data[:, :-1].reshape(-1, 28, 28).astype(np.float32)
# images are padded to have shape 29x29.
# this allows to use odd-size filters with stride 2 when downsampling a feature map in the model
pad = Pad((0, 0, 1, 1), fill=0)
# to reduce interpolation artifacts (e.g. when testing the model on rotated images),
# we upsample an image by a factor of 3, rotate it and finally downsample it again
resize1 = Resize(87) # to upsample
resize2 = Resize(29) # to downsample
totensor = ToTensor()
if rotated:
self.images = torch.empty((images.shape[0], 1, 29, 29))
for i in tqdm(range(images.shape[0]), leave=False):
img = images[i]
img = Image.fromarray(img, mode='F')
r = (np.random.rand() * 360.)
self.images[i] = totensor(resize2(resize1(pad(img)).rotate(r, Image.BILINEAR))).reshape(1, 29, 29)
else:
self.images = torch.zeros((images.shape[0], 1, 29, 29))
self.images[:, :, :28, :28] = torch.tensor(images).reshape(-1, 1, 28, 28)
self.labels = data[:, -1].astype(np.int64)
self.num_samples = len(self.labels)
def __getitem__(self, index):
image, label = self.images[index], self.labels[index]
return image, label
def __len__(self):
return len(self.labels)
# Set the random seed for reproducibility
np.random.seed(42)
# build the rotated training and test datasets
mnist_train = MnistDataset(mode='train', rotated=True)
train_loader = torch.utils.data.DataLoader(mnist_train, batch_size=64)
mnist_test = MnistDataset(mode='test', rotated=True)
test_loader = torch.utils.data.DataLoader(mnist_test, batch_size=64)
# for testing purpose, we also build a version of the test set with *non*-rotated digits
raw_mnist_test = MnistDataset(mode='test', rotated=False)
```
### $SO(2)$ equivariant architecture
We now build an $SO(2)$ equivariant CNN.
Because the inputs are still gray-scale images, the input type of the model is again a *scalar field*.
In the intermediate layers, we will use *regular fields*, such that the model is equivalent to a *group-equivariant convolutional neural network* (GCNN).
The final classification is performed by a fully connected layer.
```
class SO2SteerableCNN(torch.nn.Module):
def __init__(self, n_classes=10):
super(SO2SteerableCNN, self).__init__()
# the model is equivariant under all planar rotations
self.r2_act = gspaces.rot2dOnR2(N=-1)
# the input image is a scalar field, corresponding to the trivial representation
in_type = nn.FieldType(self.r2_act, [self.r2_act.trivial_repr])
# we store the input type for wrapping the images into a geometric tensor during the forward pass
self.input_type = in_type
# We need to mask the input image since the corners are moved outside the grid under rotations
self.mask = nn.MaskModule(in_type, 29, margin=1)
# convolution 1
# first we build the non-linear layer, which also constructs the right feature type
# we choose 8 feature fields, each transforming under the regular representation of SO(2) up to frequency 3
# When taking the ELU non-linearity, we sample the feature fields on N=16 points
activation1 = nn.FourierELU(self.r2_act, 8, irreps=[(f,) for f in range(4)], N=16, inplace=True)
out_type = activation1.in_type
self.block1 = nn.SequentialModule(
nn.R2Conv(in_type, out_type, kernel_size=7, padding=1, bias=False),
nn.IIDBatchNorm2d(out_type),
activation1,
)
# convolution 2
# the old output type is the input type to the next layer
in_type = self.block1.out_type
# the output type of the second convolution layer are 16 regular feature fields
activation2 = nn.FourierELU(self.r2_act, 16, irreps=[(f,) for f in range(4)], N=16, inplace=True)
out_type = activation2.in_type
self.block2 = nn.SequentialModule(
nn.R2Conv(in_type, out_type, kernel_size=5, padding=2, bias=False),
nn.IIDBatchNorm2d(out_type),
activation2
)
# to reduce the downsampling artifacts, we use a Gaussian smoothing filter
self.pool1 = nn.SequentialModule(
nn.PointwiseAvgPoolAntialiased(out_type, sigma=0.66, stride=2)
)
# convolution 3
# the old output type is the input type to the next layer
in_type = self.block2.out_type
# the output type of the third convolution layer are 32 regular feature fields
activation3 = nn.FourierELU(self.r2_act, 32, irreps=[(f,) for f in range(4)], N=16, inplace=True)
out_type = activation3.in_type
self.block3 = nn.SequentialModule(
nn.R2Conv(in_type, out_type, kernel_size=5, padding=2, bias=False),
nn.IIDBatchNorm2d(out_type),
activation3
)
# convolution 4
# the old output type is the input type to the next layer
in_type = self.block3.out_type
# the output type of the fourth convolution layer are 32 regular feature fields
activation4 = nn.FourierELU(self.r2_act, 32, irreps=[(f,) for f in range(4)], N=16, inplace=True)
out_type = activation4.in_type
self.block4 = nn.SequentialModule(
nn.R2Conv(in_type, out_type, kernel_size=5, padding=2, bias=False),
nn.IIDBatchNorm2d(out_type),
activation4
)
self.pool2 = nn.SequentialModule(
nn.PointwiseAvgPoolAntialiased(out_type, sigma=0.66, stride=2)
)
# convolution 5
# the old output type is the input type to the next layer
in_type = self.block4.out_type
# the output type of the fifth convolution layer are 64 regular feature fields
activation5 = nn.FourierELU(self.r2_act, 64, irreps=[(f,) for f in range(4)], N=16, inplace=True)
out_type = activation5.in_type
self.block5 = nn.SequentialModule(
nn.R2Conv(in_type, out_type, kernel_size=5, padding=2, bias=False),
nn.IIDBatchNorm2d(out_type),
activation5
)
# convolution 6
# the old output type is the input type to the next layer
in_type = self.block5.out_type
# the output type of the sixth convolution layer are 64 regular feature fields
activation6 = nn.FourierELU(self.r2_act, 64, irreps=[(f,) for f in range(4)], N=16, inplace=True)
out_type = activation6.in_type
self.block6 = nn.SequentialModule(
nn.R2Conv(in_type, out_type, kernel_size=5, padding=1, bias=False),
nn.IIDBatchNorm2d(out_type),
activation6
)
self.pool3 = nn.PointwiseAvgPoolAntialiased(out_type, sigma=0.66, stride=1, padding=0)
# number of output invariant channels
c = 64
# last 1x1 convolution layer, which maps the regular fields to c=64 invariant scalar fields
# this is essential to provide *invariant* features in the final classification layer
output_invariant_type = nn.FieldType(self.r2_act, c*[self.r2_act.trivial_repr])
self.invariant_map = nn.R2Conv(out_type, output_invariant_type, kernel_size=1, bias=False)
# Fully Connected classifier
self.fully_net = torch.nn.Sequential(
torch.nn.BatchNorm1d(c),
torch.nn.ELU(inplace=True),
torch.nn.Linear(c, n_classes),
)
def forward(self, input: torch.Tensor):
# wrap the input tensor in a GeometricTensor
# (associate it with the input type)
x = self.input_type(input)
# mask out the corners of the input image
x = self.mask(x)
# apply each equivariant block
# Each layer has an input and an output type
# A layer takes a GeometricTensor in input.
# This tensor needs to be associated with the same representation of the layer's input type
#
# Each layer outputs a new GeometricTensor, associated with the layer's output type.
# As a result, consecutive layers need to have matching input/output types
x = self.block1(x)
x = self.block2(x)
x = self.pool1(x)
x = self.block3(x)
x = self.block4(x)
x = self.pool2(x)
x = self.block5(x)
x = self.block6(x)
# pool over the spatial dimensions
x = self.pool3(x)
# extract invariant features
x = self.invariant_map(x)
# unwrap the output GeometricTensor
# (take the Pytorch tensor and discard the associated representation)
x = x.tensor
# classify with the final fully connected layer
x = self.fully_net(x.reshape(x.shape[0], -1))
return x
```
#### Equivariance Test before training
Let's instantiate the model:
```
model = SO2SteerableCNN().to(device)
```
The model is now randomly initialized.
Therefore, we do not expect it to produce the right class probabilities.
However, the model should still produce the same output for rotated versions of the same image.
This is true for rotations by multiples of $\frac{\pi}{2}$, but is only approximate for other rotations.
Let's test it on a random test image:
we feed $N=20$ rotated versions of the first image in the test set and print the output logits of the model for each of them.
```
def test_model_single_image(model: torch.nn.Module, x: torch.Tensor, N: int = 8):
np.set_printoptions(linewidth=10000)
x = Image.fromarray(x.cpu().numpy()[0], mode='F')
# to reduce interpolation artifacts (e.g. when testing the model on rotated images),
# we upsample an image by a factor of 3, rotate it and finally downsample it again
resize1 = Resize(87) # to upsample
resize2 = Resize(29) # to downsample
totensor = ToTensor()
x = resize1(x)
# evaluate the `model` on N rotated versions of the input image `x`
model.eval()
print()
print('##########################################################################################')
header = 'angle | ' + ' '.join(["{:5d}".format(d) for d in range(10)])
print(header)
with torch.no_grad():
for r in range(N):
x_transformed = totensor(resize2(x.rotate(r*360./N, Image.BILINEAR))).reshape(1, 1, 29, 29)
x_transformed = x_transformed.to(device)
y = model(x_transformed)
y = y.to('cpu').numpy().squeeze()
angle = r * 360. / N
print("{:6.1f} : {}".format(angle, y))
print('##########################################################################################')
print()
# retrieve the first image from the test set
x, y = next(iter(raw_mnist_test))
# evaluate the model
test_model_single_image(model, x, N=20)
```
The output of the model is already almost invariant, but we observe small fluctuations in the outputs.
This is an effect of discretization artifacts (e.g. the pixel grid cannot be perfectly rotated by an arbitrary angle without interpolation) and cannot be completely removed.
#### Training the model
Let's train the model now.
The procedure is the same used to train a normal *PyTorch* architecture:
```
# build the training and test function
def test(model: torch.nn.Module):
# test over the full rotated test set
total = 0
correct = 0
with torch.no_grad():
model.eval()
for i, (x, t) in enumerate(test_loader):
x = x.to(device)
t = t.to(device)
y = model(x)
_, prediction = torch.max(y.data, 1)
total += t.shape[0]
correct += (prediction == t).sum().item()
return correct/total*100.
def train(model: torch.nn.Module, lr=1e-4, wd=1e-4, checkpoint_path: str = None):
if checkpoint_path is not None:
checkpoint_path = os.path.join(CHECKPOINT_PATH, checkpoint_path)
if checkpoint_path is not None and os.path.isfile(checkpoint_path):
model.load_state_dict(torch.load(checkpoint_path))
model.eval()
return
loss_function = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)
for epoch in tqdm(range(21)):
model.train()
for i, (x, t) in enumerate(train_loader):
optimizer.zero_grad()
x = x.to(device)
t = t.to(device)
y = model(x)
loss = loss_function(y, t)
loss.backward()
optimizer.step()
del x, y, t, loss
if epoch % 10 == 0:
accuracy = test(model)
print(f"epoch {epoch} | test accuracy: {accuracy}")
if checkpoint_path is not None:
torch.save(model.state_dict(), checkpoint_path)
```
Finally, train the $SO(2)$ equivariant model:
```
# set the seed manually for reproducibility
torch.manual_seed(42)
model = SO2SteerableCNN().to(device)
train(model, checkpoint_path="steerable_so2-pretrained.ckpt")
accuracy = test(model)
print(f"Test accuracy: {accuracy}")
def test_model_rotations(model: torch.nn.Module, N: int = 24, M: int = 2000, checkpoint_path: str = None):
# evaluate the `model` on N rotated versions of the first M images in the test set
if checkpoint_path is not None:
checkpoint_path = os.path.join(CHECKPOINT_PATH, checkpoint_path)
if checkpoint_path is not None and os.path.isfile(checkpoint_path):
accuracies = np.load(checkpoint_path)
return accuracies.tolist()
model.eval()
# to reduce interpolation artifacts (e.g. when testing the model on rotated images),
# we upsample an image by a factor of 3, rotate it and finally downsample it again
resize1 = Resize(87) # to upsample
resize2 = Resize(29) # to downsample
totensor = ToTensor()
accuracies = []
with torch.no_grad():
model.eval()
for r in tqdm(range(N)):
total = 0
correct = 0
for i in range(M):
x, t = raw_mnist_test[i]
x = Image.fromarray(x.numpy()[0], mode='F')
x = totensor(resize2(resize1(x).rotate(r*360./N, Image.BILINEAR))).reshape(1, 1, 29, 29).to(device)
x = x.to(device)
y = model(x)
_, prediction = torch.max(y.data, 1)
total += 1
correct += (prediction == t).sum().item()
accuracies.append(correct/total*100.)
if checkpoint_path is not None:
np.save(checkpoint_path, np.array(accuracies))
return accuracies
accs_so2 = test_model_rotations(model, 16, 10000, checkpoint_path="steerable_so2-accuracies.npy")
# plot the accuracy as a function of the rotation angle theta applied to the test set
fig, ax = plt.subplots(figsize=(10, 6))
N = 16
xs = [i*2*np.pi / N for i in range(N+1)]
plt.plot(xs, accs_so2 + [accs_so2[0]])
plt.title('SO(2)-Steerable CNN', fontsize=20)
plt.xlabel(r'Test rotation $\theta \in [0, 2\pi)$', fontsize=20)
plt.ylabel('Accuracy', fontsize=20)
ax.tick_params(axis='both', which='major', labelsize=15)
plt.show()
```
Even after training, the model is not perfectly $SO(2)$ equivariant, but we observe that its accuracy is rather stable under rotations.
#### $C_4$ equivariant architecture
For comparison, let's build a similar architecture equivariant only to $N=4$ rotations.
```
class CNSteerableCNN(torch.nn.Module):
def __init__(self, n_classes=10):
super(CNSteerableCNN, self).__init__()
# the model is equivariant to rotations by multiples of 2pi/N
self.r2_act = gspaces.rot2dOnR2(N=4)
# the input image is a scalar field, corresponding to the trivial representation
in_type = nn.FieldType(self.r2_act, [self.r2_act.trivial_repr])
# we store the input type for wrapping the images into a geometric tensor during the forward pass
self.input_type = in_type
# We need to mask the input image since the corners are moved outside the grid under rotations
self.mask = nn.MaskModule(in_type, 29, margin=1)
# convolution 1
# first we build the non-linear layer, which also constructs the right feature type
# we choose 8 feature fields, each transforming under the regular representation of C_4
activation1 = nn.ELU(nn.FieldType(self.r2_act, 8*[self.r2_act.regular_repr]), inplace=True)
out_type = activation1.in_type
self.block1 = nn.SequentialModule(
nn.R2Conv(in_type, out_type, kernel_size=7, padding=1, bias=False),
nn.IIDBatchNorm2d(out_type),
activation1,
)
# convolution 2
# the old output type is the input type to the next layer
in_type = self.block1.out_type
# the output type of the second convolution layer are 16 regular feature fields
activation2 = nn.ELU(nn.FieldType(self.r2_act, 16*[self.r2_act.regular_repr]), inplace=True)
out_type = activation2.in_type
self.block2 = nn.SequentialModule(
nn.R2Conv(in_type, out_type, kernel_size=5, padding=2, bias=False),
nn.IIDBatchNorm2d(out_type),
activation2
)
self.pool1 = nn.SequentialModule(
nn.PointwiseAvgPoolAntialiased(out_type, sigma=0.66, stride=2)
)
# convolution 3
# the old output type is the input type to the next layer
in_type = self.block2.out_type
# the output type of the third convolution layer are 32 regular feature fields
activation3 = nn.ELU(nn.FieldType(self.r2_act, 32*[self.r2_act.regular_repr]), inplace=True)
out_type = activation3.in_type
self.block3 = nn.SequentialModule(
nn.R2Conv(in_type, out_type, kernel_size=5, padding=2, bias=False),
nn.IIDBatchNorm2d(out_type),
activation3
)
# convolution 4
# the old output type is the input type to the next layer
in_type = self.block3.out_type
# the output type of the fourth convolution layer are 32 regular feature fields
activation4 = nn.ELU(nn.FieldType(self.r2_act, 32*[self.r2_act.regular_repr]), inplace=True)
out_type = activation4.in_type
self.block4 = nn.SequentialModule(
nn.R2Conv(in_type, out_type, kernel_size=5, padding=2, bias=False),
nn.IIDBatchNorm2d(out_type),
activation4
)
self.pool2 = nn.SequentialModule(
nn.PointwiseAvgPoolAntialiased(out_type, sigma=0.66, stride=2)
)
# convolution 5
# the old output type is the input type to the next layer
in_type = self.block4.out_type
# the output type of the fifth convolution layer are 64 regular feature fields
activation5 = nn.ELU(nn.FieldType(self.r2_act, 64*[self.r2_act.regular_repr]), inplace=True)
out_type = activation5.in_type
self.block5 = nn.SequentialModule(
nn.R2Conv(in_type, out_type, kernel_size=5, padding=2, bias=False),
nn.IIDBatchNorm2d(out_type),
activation5
)
# convolution 6
# the old output type is the input type to the next layer
in_type = self.block5.out_type
# the output type of the sixth convolution layer are 64 regular feature fields
activation6 = nn.ELU(nn.FieldType(self.r2_act, 64*[self.r2_act.regular_repr]), inplace=True)
out_type = activation6.in_type
self.block6 = nn.SequentialModule(
nn.R2Conv(in_type, out_type, kernel_size=5, padding=1, bias=False),
nn.IIDBatchNorm2d(out_type),
activation6
)
self.pool3 = nn.PointwiseAvgPoolAntialiased(out_type, sigma=0.66, stride=1, padding=0)
# number of output invariant channels
c = 64
output_invariant_type = nn.FieldType(self.r2_act, c*[self.r2_act.trivial_repr])
self.invariant_map = nn.R2Conv(out_type, output_invariant_type, kernel_size=1, bias=False)
# Fully Connected classifier
self.fully_net = torch.nn.Sequential(
torch.nn.BatchNorm1d(c),
torch.nn.ELU(inplace=True),
torch.nn.Linear(c, n_classes),
)
def forward(self, input: torch.Tensor):
# wrap the input tensor in a GeometricTensor
# (associate it with the input type)
x = self.input_type(input)
# mask out the corners of the input image
x = self.mask(x)
# apply each equivariant block
# Each layer has an input and an output type
# A layer takes a GeometricTensor in input.
# This tensor needs to be associated with the same representation of the layer's input type
#
# Each layer outputs a new GeometricTensor, associated with the layer's output type.
# As a result, consecutive layers need to have matching input/output types
x = self.block1(x)
x = self.block2(x)
x = self.pool1(x)
x = self.block3(x)
x = self.block4(x)
x = self.pool2(x)
x = self.block5(x)
x = self.block6(x)
# pool over the spatial dimensions
x = self.pool3(x)
# extract invariant features
x = self.invariant_map(x)
# unwrap the output GeometricTensor
# (take the Pytorch tensor and discard the associated representation)
x = x.tensor
# classify with the final fully connected layer
x = self.fully_net(x.reshape(x.shape[0], -1))
return x
```
Instantiate and train the $C_4$ equivariant model:
```
torch.manual_seed(42)
model_c4 = CNSteerableCNN().to(device)
train(model_c4, checkpoint_path="steerable_c4-pretrained.ckpt")
accuracy = test(model_c4)
print(f"Test accuracy: {accuracy}")
accs_c4 = test_model_rotations(model_c4, 16, 10000, checkpoint_path="steerable_c4-accuracies.npy")
```
Finally, let's compare the performance of both models on the rotated test sets:
```
# plot the accuracy as a function of the rotation angle theta applied to the test set
fig, ax = plt.subplots(figsize=(10, 6))
N=16
xs = [i*2*np.pi / N for i in range(N+1)]
plt.plot(xs, accs_so2 + [accs_so2[0]], label=r'$SO(2)$-Steerable CNN')
plt.plot(xs, accs_c4 + [accs_c4[0]], label=r'$C_4$-Steerable CNN')
plt.title(r'$C_4$ vs $SO(2)$ Steerable CNNs', fontsize=20)
plt.xlabel(r'Test rotation ($\theta \in [0, 2\pi)$)', fontsize=20)
plt.ylabel('Accuracy', fontsize=20)
ax.tick_params(axis='both', which='major', labelsize=15)
plt.legend(fontsize=20)
plt.show()
```
While perfect equivariance to $SO(2)$ is not achievable due to the unavoidable discretization, the $SO(2)$ equivariant architecture is considerably more stable under rotations of the test set than the $C_4$ model.
Moreover, since $C_4$ is the largest rotation group that is an exact symmetry of the pixel grid and since $C_4 < SO(2)$, the $SO(2)$ equivariant architecture is also perfectly equivariant to rotations by multiples of $\pi/2$.
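The special role of quarter-turns can be checked directly: a rotation by a multiple of 90° is a pure permutation of the pixels, needing no interpolation, while any other angle has no exact counterpart on the grid. A small stdlib-only sketch (`rot90` is a helper of ours for this check, unrelated to the library code above):

```
# Rotating a pixel grid by a multiple of 90 degrees is an exact permutation of
# the pixels, with no interpolation and hence no information loss.
def rot90(grid):
    """Rotate a square grid (list of rows) by 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*grid)][::-1]

grid = [[1, 2],
        [3, 4]]

once = rot90(grid)
four_times = grid
for _ in range(4):
    four_times = rot90(four_times)

print(once)                # [[2, 4], [1, 3]]
print(four_times == grid)  # True: four quarter-turns give the identity
```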
## Conclusion
In this tutorial, you first learnt about *group representation theory* and the *Fourier transform* over compact groups.
These are the mathematical tools used to formalize Steerable CNNs.
In the second part of this tutorial, you learnt about *steerable feature fields* and *steerable CNNs*.
In particular, the previously defined Fourier transform allowed us to build a steerable CNN which is equivalent to a Group-Convolutional Neural Network (GCNN) equivariant to translations and the continuous group $G=SO(2)$ of rotations.
In our steerable CNNs, we mostly leveraged the *regular representation* of the group $G$, but the framework of steerable CNNs allows for a variety of representations.
If you are interested in knowing more about steerable CNNs, this is a (non-exhaustive) list of relevant works you can check out:
- [Steerable CNNs](https://arxiv.org/abs/1612.08498)
- [Harmonic Networks: Deep Translation and Rotation Equivariance](https://arxiv.org/abs/1612.04642)
- [3D Steerable CNNs](https://arxiv.org/abs/1807.02547)
- [Tensor Field Networks](https://arxiv.org/abs/1802.08219)
- [A General Theory of Equivariant CNNs on Homogeneous Spaces](https://arxiv.org/abs/1811.02017)
- [Cormorant: Covariant Molecular Neural Networks](https://arxiv.org/abs/1906.04015)
- [General E(2)-Equivariant Steerable CNNs](https://arxiv.org/abs/1911.08251)
- [A Program to Build E(N)-Equivariant Steerable CNNs](https://openreview.net/forum?id=WE4qe9xlnQw)
### What is a Jupyter Notebook?
Jupyter is a web-based interactive development environment that supports multiple programming languages, though it is most commonly used with the Python programming language.
The interactive environment that Jupyter provides enables students, scientists, and researchers to create reproducible analysis and formulate a story within a single document.
Let's take a look at an example of a completed Jupyter Notebook: [Example Notebook](http://nbviewer.jupyter.org/github/cossatot/lanf_earthquake_likelihood/blob/master/notebooks/lanf_manuscript_notebook.ipynb)
### Jupyter Notebook Features
* File Browser
* Markdown Cells & Syntax
* Kernels, Variables, & Environment
* Command vs. Edit Mode & Shortcuts
### What is Markdown?
Markdown is a markup language that uses plain text formatting syntax. This means that we can modify the formatting of our text with the use of various symbols on our keyboard as indicators.
Some examples include:
* Headers
* Text modifications such as italics and bold
* Ordered and Unordered lists
* Links
* Tables
* Images
* Etc.
Now I'll showcase some examples of how this formatting is done:
Headers:
# H1
## H2
### H3
#### H4
##### H5
###### H6
Text modifications:
Emphasis, aka italics, with *asterisks* or _underscores_.
Strong emphasis, aka bold, with **asterisks** or __underscores__.
Combined emphasis with **asterisks and _underscores_**.
Strikethrough uses two tildes. ~~Scratch this.~~
Lists:
1. First ordered list item
2. Another item
* Unordered sub-list.
1. Actual numbers don't matter, just that it's a number
1. Ordered sub-list
4. And another item.
* Unordered list can use asterisks
- Or minuses
+ Or pluses
Links:
http://www.umich.edu
<http://www.umich.edu>
[The University of Michigan's Homepage](http://www.umich.edu/)
To look into more examples of Markdown syntax and features such as tables, images, etc. head to the following link: [Markdown Reference](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet)
### Kernels, Variables, and Environment
A notebook kernel is a “computational engine” that executes the code contained in a notebook document. There are kernels for various programming languages; however, we are solely using the Python kernel, which executes Python code.
When a notebook is opened, the associated kernel is automatically launched for our convenience.
```
### This is python
print("This is a python code cell")
```
A kernel is the back-end of our notebook which not only executes our python code, but stores our initialized variables.
```
### For example, let's initialize variable x
x = 1738
print("x has been set to " + str(x))
### Print x
print(x)
```
Issues arise when we restart our kernel and attempt to run code with variables that have not been reinitialized.
If the kernel is reset, make sure to rerun the code where variables are initialized.
```
## We can also run code that accepts input
name = input("What is your name? ")
print("The name you entered is " + name)
```
It is important to note that Jupyter Notebooks execute cells in-line. This means that a cell must complete its operations before another cell can be executed. A cell that is still executing is indicated by the `[*]` on its left-hand side.
```
print("This won't print until all prior cells have finished executing.")
```
### Command vs. Edit Mode & Shortcuts
There is an edit mode and a command mode for Jupyter Notebooks. The mode is easily identifiable by the color of the left border of the cell.
Blue = Command Mode.
Green = Edit Mode.
Command Mode can be toggled by pressing **esc** on your keyboard.
Commands can be used to execute notebook functions. For example, changing the format of a markdown cell or adding line numbers.
Let's toggle line numbers while in command mode by pressing **L**.
#### Additional Shortcuts
There are a lot of shortcuts that can be used to improve productivity while using Jupyter Notebooks.
Here is a list:

### How do you install Jupyter Notebooks?
**Note:** *Coursera provides embedded jupyter notebooks within the course, thus the download is not a requirement unless you wish to explore jupyter further on your own computer.*
Official Installation Guide: https://jupyter.readthedocs.io/en/latest/install.html
Jupyter recommends utilizing Anaconda, which is a platform compatible with Windows, macOS, and Linux systems.
Anaconda Download: https://www.anaconda.com/download/#macos
```
import pandas as pd
import numpy as np
import math
from IPython.display import display
from bokeh.io import show, output_notebook
from bokeh.plotting import figure, ColumnDataSource
from bokeh.models import HoverTool, ranges
output_notebook()
def readtrace(infile):
ret = {}
name = None
cols = None
types = None
data = []
for line in infile:
line = line.strip()
if line == "": continue
elif line == "%":
df = pd.DataFrame.from_records(data, columns=cols)
for i, col in enumerate(df.columns.values):
if types[i] == "num": df[col] = pd.to_numeric(df[col])
elif types[i] == "time": df[col] = pd.to_datetime(df[col], unit="ms")
ret[name] = df
name = None
cols = None
types = None
data = []
elif name is None: name = line
elif cols is None: cols = line.split("\t")
elif types is None: types = line.split("\t")
else: data.append(line.split("\t"))
return ret
class reader:
def __init__(self, infile):
self.data = readtrace(infile)
def stats(self):
ret = []
for name in ["count", "dist"]:
df = self.data["stat:" + name].set_index(["trace", "stat"])
df.index.names = [None, None]
ret.append(df)
return ret
def seqplot(self):
rpcs = self.data["rpcs"]
events = self.data["events"]
spans = self.data["spans"]
axes = self.data["axes"]["name"].tolist()
timerange = self.data["timerange"]["time"].tolist()
hover = HoverTool()
hover.tooltips = "<div style='max-width: 400px; word-wrap: wrap-all'>@content</div>"
p = figure(y_axis_type="datetime", x_range=axes, tools=["ypan", "ywheel_zoom", hover, "reset"], active_scroll="ywheel_zoom")
p.segment(y0="start", y1="end", x0="location", x1="location", source=ColumnDataSource(spans), line_width=4, color="lime", alpha=0.6)
p.triangle("location", "end", source=ColumnDataSource(spans), size=12, color="green")
p.inverted_triangle("location", "start", source=ColumnDataSource(spans), size=8, color="lime")
p.circle("origin", "time", size=8, source=ColumnDataSource(rpcs), color="blue")
p.segment(y0="time", y1="time", x0="origin", x1="destination", source=ColumnDataSource(rpcs), color="blue")
p.circle("location", "time", size=8, source=ColumnDataSource(events), color="red")
p.y_range = ranges.Range1d(timerange[1], timerange[0])
p.xaxis.major_label_orientation = math.pi/6
p.sizing_mode = "scale_width"
p.height = 400
return p
with open("/tmp/spark-trace.out") as infile:
trace = reader(infile)
for stat in trace.stats(): display(stat)
show(trace.seqplot())
```
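For reference, `readtrace` above assumes a block-delimited text format: each block is a section name, a tab-separated header row, a row of column types (`num`, `time`, or plain strings), data rows, and a terminating `%`. The following stdlib-only sketch (a simplified, illustrative parser returning plain dicts instead of pandas DataFrames) shows that format on a made-up sample:

```
import io

sample = """\
stat:count
trace\tstat
str\tnum
t1\t42
%
"""

def parse_blocks(infile):
    ret = {}
    name = cols = types = None
    rows = []
    for line in infile:
        line = line.strip()
        if not line:
            continue
        if line == "%":  # end of the current block
            ret[name] = {"columns": cols, "types": types, "rows": rows}
            name = cols = types = None
            rows = []
        elif name is None:
            name = line
        elif cols is None:
            cols = line.split("\t")
        elif types is None:
            types = line.split("\t")
        else:
            rows.append(line.split("\t"))
    return ret

blocks = parse_blocks(io.StringIO(sample))
print(blocks["stat:count"]["columns"])  # ['trace', 'stat']
print(blocks["stat:count"]["rows"])     # [['t1', '42']]
```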
```
import collections
import numpy as np
import pickle
experiments = ['BM25', 'PACRR', 'MP', 'KNRM', 'ConvKNRM']
metrics = ['RaB', 'ARaB']
methods = ['tf', 'bool']
qry_bias_paths = {}
for metric in metrics:
qry_bias_paths[metric] = {}
for exp_name in experiments:
qry_bias_paths[metric][exp_name] = {}
for _method in methods:
qry_bias_paths[metric][exp_name][_method] = 'data/msmarco_passage/run_bias_%s_%s_%s.pkl' % (exp_name, _method, metric)
queries_gender_annotated_path = "resources/queries_gender_annotated.csv"
at_ranklist = [5, 10, 20, 30, 40]
qry_bias_perqry = {}
for metric in metrics:
qry_bias_perqry[metric] = {}
for exp_name in experiments:
qry_bias_perqry[metric][exp_name] = {}
for _method in methods:
_path = qry_bias_paths[metric][exp_name][_method]
print (_path)
with open(_path, 'rb') as fr:
qry_bias_perqry[metric][exp_name][_method] = pickle.load(fr)
queries_effective = {}
with open(queries_gender_annotated_path, 'r') as fr:
for line in fr:
vals = line.strip().split(',')
qryid = int(vals[0])
qrytext = ' '.join(vals[1:-1])
qrygender = vals[-1]
if qrygender == 'n':
queries_effective[qryid] = qrytext
len(queries_effective)
eval_results_bias = {}
eval_results_feml = {}
eval_results_male = {}
for metric in metrics:
eval_results_bias[metric] = {}
eval_results_feml[metric] = {}
eval_results_male[metric] = {}
for exp_name in experiments:
eval_results_bias[metric][exp_name] = {}
eval_results_feml[metric][exp_name] = {}
eval_results_male[metric][exp_name] = {}
for _method in methods:
eval_results_bias[metric][exp_name][_method] = {}
eval_results_feml[metric][exp_name][_method] = {}
eval_results_male[metric][exp_name][_method] = {}
for at_rank in at_ranklist:
_bias_list = []
_feml_list = []
_male_list = []
for qryid in queries_effective.keys():
if qryid in qry_bias_perqry[metric][exp_name][_method][at_rank]:
_bias_list.append(qry_bias_perqry[metric][exp_name][_method][at_rank][qryid][0])
_feml_list.append(qry_bias_perqry[metric][exp_name][_method][at_rank][qryid][1])
_male_list.append(qry_bias_perqry[metric][exp_name][_method][at_rank][qryid][2])
else:
pass
#print ('missing', metric, exp_name, _method, at_rank, qryid)
eval_results_bias[metric][exp_name][_method][at_rank] = np.mean([(_male_x-_feml_x) for _male_x, _feml_x in zip(_male_list, _feml_list)])
eval_results_feml[metric][exp_name][_method][at_rank] = np.mean(_feml_list)
eval_results_male[metric][exp_name][_method][at_rank] = np.mean(_male_list)
for metric in metrics:
print (metric)
for at_rank in at_ranklist:
for _method in methods:
for exp_name in experiments:
print ("%25s\t%2d %5s\t%f\t%f\t%f" % (exp_name, at_rank, _method, eval_results_bias[metric][exp_name][_method][at_rank], eval_results_feml[metric][exp_name][_method][at_rank], eval_results_male[metric][exp_name][_method][at_rank]))
print ("==========")
```
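The reported bias above is the mean per-query male-minus-female gap at each cut-off. A toy sketch of that aggregation with made-up scores (not real RaB values):

```
# Per-query female and male scores are paired, and the reported bias is the
# mean male-minus-female gap over the queries.
feml_scores = [1, 2, 3]
male_scores = [2, 4, 3]

bias = sum(m - f for m, f in zip(male_scores, feml_scores)) / len(male_scores)
print(bias)  # 1.0
```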
```
import sys
from pathlib import Path
curr_path = str(Path().absolute())
parent_path = str(Path().absolute().parent)
sys.path.append(parent_path) # add the parent directory to the system path
import gym
import torch
import math
import datetime
import numpy as np
from collections import defaultdict
from envs.gridworld_env import CliffWalkingWapper
from QLearning.agent import QLearning
from common.utils import plot_rewards
from common.utils import save_results,make_dir
```
## Q-learning algorithm
```
class QLearning(object):
def __init__(self,state_dim,
action_dim,cfg):
self.action_dim = action_dim
        self.lr = cfg.lr # learning rate
self.gamma = cfg.gamma
self.epsilon = 0
self.sample_count = 0
self.epsilon_start = cfg.epsilon_start
self.epsilon_end = cfg.epsilon_end
self.epsilon_decay = cfg.epsilon_decay
        self.Q_table = defaultdict(lambda: np.zeros(action_dim)) # nested dict mapping state -> action -> Q-value, i.e. the Q-table
def choose_action(self, state):
self.sample_count += 1
        self.epsilon = self.epsilon_end + (self.epsilon_start - self.epsilon_end) * \
            math.exp(-1. * self.sample_count / self.epsilon_decay) # epsilon decays over time; here we use exponential decay
        # epsilon-greedy policy
        if np.random.uniform(0, 1) > self.epsilon:
            action = np.argmax(self.Q_table[str(state)]) # pick the action with the largest Q(s,a)
        else:
            action = np.random.choice(self.action_dim) # pick a random action
return action
def predict(self,state):
action = np.argmax(self.Q_table[str(state)])
return action
def update(self, state, action, reward, next_state, done):
Q_predict = self.Q_table[str(state)][action]
        if done: # terminal state
Q_target = reward
else:
Q_target = reward + self.gamma * np.max(self.Q_table[str(next_state)])
self.Q_table[str(state)][action] += self.lr * (Q_target - Q_predict)
def save(self,path):
import dill
torch.save(
obj=self.Q_table,
f=path+"Qleaning_model.pkl",
pickle_module=dill
)
        print("Model saved successfully!")
def load(self, path):
import dill
self.Q_table =torch.load(f=path+'Qleaning_model.pkl',pickle_module=dill)
        print("Model loaded successfully!")
```
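The exponential epsilon schedule in `choose_action` can be inspected in isolation. A small sketch, assuming the default hyperparameters set later in this notebook (`epsilon_start=0.95`, `epsilon_end=0.01`, `epsilon_decay=300`); `epsilon_at` is a helper of ours, not part of the agent class:

```
import math

def epsilon_at(sample_count, start=0.95, end=0.01, decay=300):
    # epsilon decays exponentially from `start` towards `end`
    return end + (start - end) * math.exp(-sample_count / decay)

print(round(epsilon_at(0), 3))     # 0.95: mostly exploration at the start
print(round(epsilon_at(300), 3))   # 0.356: after `decay` steps, roughly end + (start - end)/e
print(round(epsilon_at(3000), 3))  # 0.01: almost pure exploitation
```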
## Training
```
def train(cfg,env,agent):
    print('Start training!')
    print(f'Env: {cfg.env_name}, Algorithm: {cfg.algo_name}, Device: {cfg.device}')
    rewards = [] # record the reward of each episode
    ma_rewards = [] # record the moving-average reward
    for i_ep in range(cfg.train_eps):
        ep_reward = 0 # reward accumulated over this episode
        state = env.reset() # reset the environment to start a new episode
        while True:
            action = agent.choose_action(state) # choose an action according to the algorithm
            next_state, reward, done, _ = env.step(action) # take one step in the environment
            agent.update(state, action, reward, next_state, done) # Q-learning update
            state = next_state # move on to the next state
ep_reward += reward
if done:
break
rewards.append(ep_reward)
if ma_rewards:
ma_rewards.append(ma_rewards[-1]*0.9+ep_reward*0.1)
else:
ma_rewards.append(ep_reward)
if (i_ep+1)%20 == 0:
            print('Episode: {}/{}, Reward: {}'.format(i_ep+1, cfg.train_eps, ep_reward))
    print('Training finished!')
return rewards,ma_rewards
```
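The `ma_rewards` bookkeeping above is an exponential moving average that blends each new episode reward in with weight 0.1. A minimal stand-alone sketch (the `moving_average` helper is illustrative, not part of the training code):

```
def moving_average(rewards, alpha=0.1):
    # exponential moving average: new value = old * (1 - alpha) + reward * alpha
    ma = []
    for r in rewards:
        if ma:
            ma.append(ma[-1] * (1 - alpha) + r * alpha)
        else:
            ma.append(r)
    return ma

print([round(x, 3) for x in moving_average([0, 10, 10, 10])])  # [0, 1.0, 1.9, 2.71]
```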
## Testing
```
def test(cfg,env,agent):
# env = gym.make("FrozenLake-v0", is_slippery=False) # 0 left, 1 down, 2 right, 3 up
# env = FrozenLakeWapper(env)
    print('Start testing!')
    print(f'Env: {cfg.env_name}, Algorithm: {cfg.algo_name}, Device: {cfg.device}')
    # testing does not use the epsilon-greedy policy, so set the corresponding values to 0
    cfg.epsilon_start = 0.0 # initial epsilon of the epsilon-greedy policy
    cfg.epsilon_end = 0.0 # final epsilon of the epsilon-greedy policy
    rewards = [] # record the reward of each episode
    ma_rewards = [] # record the moving-average reward
    for i_ep in range(cfg.test_eps):
        ep_reward = 0 # reward accumulated over this episode
        state = env.reset() # reset the environment to start a new episode
while True:
            action = agent.predict(state) # choose an action according to the learned policy
            next_state, reward, done, _ = env.step(action) # take one step in the environment
            state = next_state # move on to the next state
ep_reward += reward
if done:
break
rewards.append(ep_reward)
if ma_rewards:
ma_rewards.append(ma_rewards[-1]*0.9+ep_reward*0.1)
else:
ma_rewards.append(ep_reward)
        print(f"Episode: {i_ep+1}/{cfg.test_eps}, Reward: {ep_reward:.1f}")
    print('Testing finished!')
return rewards,ma_rewards
```
## Parameter settings
```
curr_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S") # get the current timestamp
algo_name = 'Q-learning' # algorithm name
env_name = 'CliffWalking-v0' # environment name
device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # use GPU if available
class QlearningConfig:
    '''Training-related parameters'''
def __init__(self):
        self.algo_name = algo_name # algorithm name
        self.env_name = env_name # environment name
        self.device = device # use GPU if available
        self.train_eps = 400 # number of training episodes
        self.test_eps = 20 # number of testing episodes
        self.gamma = 0.9 # discount factor for rewards
        self.epsilon_start = 0.95 # initial epsilon of the epsilon-greedy policy
        self.epsilon_end = 0.01 # final epsilon of the epsilon-greedy policy
        self.epsilon_decay = 300 # decay rate of epsilon in the epsilon-greedy policy
        self.lr = 0.1 # learning rate
class PlotConfig:
    ''' Plotting-related parameters
    '''
def __init__(self) -> None:
        self.algo_name = algo_name # algorithm name
        self.env_name = env_name # environment name
        self.device = device # use GPU if available
        self.result_path = curr_path + "/outputs/" + self.env_name + \
            '/' + curr_time + '/results/' # path for saving results
        self.model_path = curr_path + "/outputs/" + self.env_name + \
            '/' + curr_time + '/models/' # path for saving models
        self.save = True # whether to save figures
```
## Create the environment and the agent
```
def env_agent_config(cfg,seed=1):
    '''Create the environment and the agent
    Args:
        cfg ([type]): [description]
        seed (int, optional): random seed. Defaults to 1.
    Returns:
        env [type]: the environment
        agent : the agent
    '''
env = gym.make(cfg.env_name)
env = CliffWalkingWapper(env)
    env.seed(seed) # set the random seed
    state_dim = env.observation_space.n # dimension of the state space
    action_dim = env.action_space.n # dimension of the action space
agent = QLearning(state_dim,action_dim,cfg)
return env,agent
```
## Run training and output the results
```
cfg = QlearningConfig()
plot_cfg = PlotConfig()
# training
env, agent = env_agent_config(cfg, seed=1)
rewards, ma_rewards = train(cfg, env, agent)
make_dir(plot_cfg.result_path, plot_cfg.model_path) # create folders for saving results and models
agent.save(path=plot_cfg.model_path) # save the model
save_results(rewards, ma_rewards, tag='train',
             path=plot_cfg.result_path) # save the results
plot_rewards(rewards, ma_rewards, plot_cfg, tag="train") # plot the results
# testing
env, agent = env_agent_config(cfg, seed=10)
agent.load(path=plot_cfg.model_path) # load the model
rewards, ma_rewards = test(cfg, env, agent)
save_results(rewards, ma_rewards, tag='test', path=plot_cfg.result_path) # save the results
plot_rewards(rewards, ma_rewards, plot_cfg, tag="test") # plot the results
```
## Multi-Fidelity BO with Discrete Fidelities using KG
In this tutorial, we show how to do multi-fidelity BO with discrete fidelities based on [1], where each fidelity is a different "information source." This tutorial uses the same setup as the [continuous multi-fidelity BO tutorial](https://botorch.org/tutorials/multi_fidelity_bo), except with discrete fidelity parameters that are interpreted as multiple information sources.
We use a GP model with a single task that models the design and fidelity parameters jointly. In some cases, where there is not a natural ordering in the fidelity space, it may be more appropriate to use a multi-task model (with, say, an ICM kernel). We will provide a tutorial once this functionality is in place.
[1] [M. Poloczek, J. Wang, P.I. Frazier. Multi-Information Source Optimization. NeurIPS, 2017](https://papers.nips.cc/paper/2017/file/df1f1d20ee86704251795841e6a9405a-Paper.pdf)
[2] [J. Wu, S. Toscano-Palmerin, P.I. Frazier, A.G. Wilson. Practical Multi-fidelity Bayesian Optimization for Hyperparameter Tuning. Conference on Uncertainty in Artificial Intelligence (UAI), 2019](https://arxiv.org/pdf/1903.04703.pdf)
### Set dtype and device
```
import os
import torch
tkwargs = {
"dtype": torch.double,
"device": torch.device("cuda" if torch.cuda.is_available() else "cpu"),
}
SMOKE_TEST = os.environ.get("SMOKE_TEST")
```
### Problem setup
We'll consider the Augmented Hartmann multi-fidelity synthetic test problem. This function is a version of the Hartmann6 test function with an additional dimension representing the fidelity parameter; details are in [2]. The function takes the form $f(x,s)$ where $x \in [0,1]^6$ and $s \in \{0.5, 0.75, 1\}$. The target fidelity is 1.0, which means that our goal is to solve $\max_x f(x,1.0)$ by making use of cheaper evaluations $f(x,s)$ for $s \in \{0.5, 0.75\}$. In this example, we'll assume that the cost function takes the form $5.0 + s$, illustrating a situation where the fixed cost is $5.0$.
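To make the cost structure concrete, here is a tiny sketch of the assumed affine cost $5.0 + s$ (the `evaluation_cost` helper is illustrative only; BoTorch's `AffineFidelityCostModel` plays this role in the tutorial code below):

```
FIXED_COST = 5.0

def evaluation_cost(s):
    # affine cost model: a fixed cost plus the fidelity parameter
    return FIXED_COST + s

for s in (0.5, 0.75, 1.0):
    print(f"fidelity {s}: cost {evaluation_cost(s)}")
# The fixed cost dominates: the target fidelity (cost 6.0) is only about
# 9% more expensive than the cheapest fidelity (cost 5.5).
```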
```
from botorch.test_functions.multi_fidelity import AugmentedHartmann
problem = AugmentedHartmann(negate=True).to(**tkwargs)
fidelities = torch.tensor([0.5, 0.75, 1.0], **tkwargs)
```
#### Model initialization
We use a `SingleTaskMultiFidelityGP` as the surrogate model, which uses a kernel from [2] that is well-suited for multi-fidelity applications. The `SingleTaskMultiFidelityGP` models the design and fidelity parameters jointly, so its domain is $[0,1]^7$.
```
from botorch.models.gp_regression_fidelity import SingleTaskMultiFidelityGP
from botorch.models.transforms.outcome import Standardize
from gpytorch.mlls.exact_marginal_log_likelihood import ExactMarginalLogLikelihood
from botorch.utils.transforms import unnormalize, standardize
from botorch.utils.sampling import draw_sobol_samples
def generate_initial_data(n=16):
# generate training data
train_x = torch.rand(n, 6, **tkwargs)
train_f = fidelities[torch.randint(3, (n,1))]
train_x_full = torch.cat((train_x, train_f), dim=1)
train_obj = problem(train_x_full).unsqueeze(-1) # add output dimension
return train_x_full, train_obj
def initialize_model(train_x, train_obj):
# define a surrogate model suited for a "training data"-like fidelity parameter
# in dimension 6, as in [2]
model = SingleTaskMultiFidelityGP(
train_x,
train_obj,
outcome_transform=Standardize(m=1),
data_fidelity=6
)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
return mll, model
```
#### Define a helper function to construct the MFKG acquisition function
The helper function illustrates how one can initialize a $q$MFKG acquisition function. In this example, we assume that the affine cost is known. We then use the notion of a `CostAwareUtility` in BoTorch to scalarize the "competing objectives" of information gain and cost. The MFKG acquisition function optimizes the ratio of information gain to cost, which is captured by the `InverseCostWeightedUtility`.
In order for MFKG to evaluate the information gain, it uses the model to predict the function value at the highest fidelity after conditioning on the observation. This is handled by the `project` argument, which specifies how to transform a tensor `X` to its target fidelity. We use a default helper function called `project_to_target_fidelity` to achieve this.
An important point to keep in mind: in the case of standard KG, one can ignore the current value and simply optimize the expected maximum posterior mean of the next stage. However, for MFKG, since the goal is to optimize information *gain* per cost, it is important to first compute the current value (i.e., maximum of the posterior mean at the target fidelity). To accomplish this, we use a `FixedFeatureAcquisitionFunction` on top of a `PosteriorMean`.
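Conceptually, `project` just pins the fidelity entry of each candidate to the target value while leaving the six design dimensions untouched. A plain-Python sketch of that idea (lists instead of tensors; `project_to_target` is illustrative, not the BoTorch helper itself):

```
FIDELITY_COLUMN = 6

def project_to_target(X, target=1.0):
    # replace the fidelity entry of each candidate row with the target fidelity
    return [row[:FIDELITY_COLUMN] + [target] + row[FIDELITY_COLUMN + 1:] for row in X]

X = [[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.5]]  # last entry is the fidelity s
print(project_to_target(X))  # [[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 1.0]]
```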
```
from botorch import fit_gpytorch_model
from botorch.models.cost import AffineFidelityCostModel
from botorch.acquisition.cost_aware import InverseCostWeightedUtility
from botorch.acquisition import PosteriorMean
from botorch.acquisition.knowledge_gradient import qMultiFidelityKnowledgeGradient
from botorch.acquisition.fixed_feature import FixedFeatureAcquisitionFunction
from botorch.optim.optimize import optimize_acqf
from botorch.acquisition.utils import project_to_target_fidelity
bounds = torch.tensor([[0.0] * problem.dim, [1.0] * problem.dim], **tkwargs)
target_fidelities = {6: 1.0}
cost_model = AffineFidelityCostModel(fidelity_weights={6: 1.0}, fixed_cost=5.0)
cost_aware_utility = InverseCostWeightedUtility(cost_model=cost_model)
def project(X):
return project_to_target_fidelity(X=X, target_fidelities=target_fidelities)
def get_mfkg(model):
curr_val_acqf = FixedFeatureAcquisitionFunction(
acq_function=PosteriorMean(model),
d=7,
columns=[6],
values=[1],
)
_, current_value = optimize_acqf(
acq_function=curr_val_acqf,
bounds=bounds[:,:-1],
q=1,
num_restarts=10 if not SMOKE_TEST else 2,
raw_samples=1024 if not SMOKE_TEST else 4,
options={"batch_limit": 10, "maxiter": 200},
)
return qMultiFidelityKnowledgeGradient(
model=model,
num_fantasies=128 if not SMOKE_TEST else 2,
current_value=current_value,
cost_aware_utility=cost_aware_utility,
project=project,
)
```
#### Define a helper function that performs the essential BO step
This helper function optimizes the acquisition function and returns the batch $\{x_1, x_2, \ldots x_q\}$ along with the observed function values. The function `optimize_acqf_mixed` sequentially optimizes the acquisition function over $x$ for each value of the fidelity $s \in \{0.5, 0.75, 1.0\}$.
```
from botorch.optim.initializers import gen_one_shot_kg_initial_conditions
from botorch.optim.optimize import optimize_acqf_mixed
torch.set_printoptions(precision=3, sci_mode=False)
NUM_RESTARTS = 10 if not SMOKE_TEST else 2
RAW_SAMPLES = 512 if not SMOKE_TEST else 4
def optimize_mfkg_and_get_observation(mfkg_acqf):
"""Optimizes MFKG and returns a new candidate, observation, and cost."""
    X_init = gen_one_shot_kg_initial_conditions(
        acq_function=mfkg_acqf,
        bounds=bounds,
        q=4,
        num_restarts=NUM_RESTARTS,
        raw_samples=RAW_SAMPLES,
    )
candidates, _ = optimize_acqf_mixed(
acq_function=mfkg_acqf,
bounds=bounds,
fixed_features_list=[{6: 0.5}, {6: 0.75}, {6: 1.0}],
q=4,
num_restarts=NUM_RESTARTS,
raw_samples=RAW_SAMPLES,
batch_initial_conditions=X_init,
options={"batch_limit": 5, "maxiter": 200},
)
# observe new values
cost = cost_model(candidates).sum()
new_x = candidates.detach()
new_obj = problem(new_x).unsqueeze(-1)
print(f"candidates:\n{new_x}\n")
print(f"observations:\n{new_obj}\n\n")
return new_x, new_obj, cost
```
### Perform a few steps of multi-fidelity BO
First, let's generate some initial random data and fit a surrogate model.
```
train_x, train_obj = generate_initial_data(n=16)
```
We can now use the helper functions above to run a few iterations of BO.
```
cumulative_cost = 0.0
N_ITER = 3 if not SMOKE_TEST else 1

for _ in range(N_ITER):
    mll, model = initialize_model(train_x, train_obj)
    fit_gpytorch_model(mll)
    mfkg_acqf = get_mfkg(model)
    new_x, new_obj, cost = optimize_mfkg_and_get_observation(mfkg_acqf)
    train_x = torch.cat([train_x, new_x])
    train_obj = torch.cat([train_obj, new_obj])
    cumulative_cost += cost
```
### Make a final recommendation
In multi-fidelity BO, there are usually fewer observations of the function at the target fidelity, so it is important to use a recommendation function that uses the correct fidelity. Here, we maximize the posterior mean with the fidelity dimension fixed to the target fidelity of 1.0.
```
def get_recommendation(model):
    rec_acqf = FixedFeatureAcquisitionFunction(
        acq_function=PosteriorMean(model),
        d=7,
        columns=[6],
        values=[1],
    )
    final_rec, _ = optimize_acqf(
        acq_function=rec_acqf,
        bounds=bounds[:, :-1],
        q=1,
        num_restarts=NUM_RESTARTS,
        raw_samples=RAW_SAMPLES,
        options={"batch_limit": 5, "maxiter": 200},
    )
    final_rec = rec_acqf._construct_X_full(final_rec)
    objective_value = problem(final_rec)
    print(f"recommended point:\n{final_rec}\n\nobjective value:\n{objective_value}")
    return final_rec


final_rec = get_recommendation(model)
print(f"\ntotal cost: {cumulative_cost}\n")
```
### Comparison to standard EI (always use target fidelity)
Let's now repeat the same steps using a standard EI acquisition function (note that this is not a rigorous comparison as we are only looking at one trial in order to keep computational requirements low).
```
from botorch.acquisition import qExpectedImprovement


def get_ei(model, best_f):
    return FixedFeatureAcquisitionFunction(
        acq_function=qExpectedImprovement(model=model, best_f=best_f),
        d=7,
        columns=[6],
        values=[1],
    )


def optimize_ei_and_get_observation(ei_acqf):
    """Optimizes EI and returns a new candidate, observation, and cost."""
    candidates, _ = optimize_acqf(
        acq_function=ei_acqf,
        bounds=bounds[:, :-1],
        q=4,
        num_restarts=NUM_RESTARTS,
        raw_samples=RAW_SAMPLES,
        options={"batch_limit": 5, "maxiter": 200},
    )
    # add the fidelity parameter
    candidates = ei_acqf._construct_X_full(candidates)
    # observe new values
    cost = cost_model(candidates).sum()
    new_x = candidates.detach()
    new_obj = problem(new_x).unsqueeze(-1)
    print(f"candidates:\n{new_x}\n")
    print(f"observations:\n{new_obj}\n\n")
    return new_x, new_obj, cost


cumulative_cost = 0.0

train_x, train_obj = generate_initial_data(n=16)

for _ in range(N_ITER):
    mll, model = initialize_model(train_x, train_obj)
    fit_gpytorch_model(mll)
    ei_acqf = get_ei(model, best_f=train_obj.max())
    new_x, new_obj, cost = optimize_ei_and_get_observation(ei_acqf)
    train_x = torch.cat([train_x, new_x])
    train_obj = torch.cat([train_obj, new_obj])
    cumulative_cost += cost

final_rec = get_recommendation(model)
print(f"\ntotal cost: {cumulative_cost}\n")
```
___
<a href='https://www.prosperousheart.com/'> <img src='files/learn to code online.png' /></a>
___
DataFrames are the true workhorse of pandas. You'll learn the essentials of working with them here.
The DataFrame has the following input options:
- data
- index
- columns
- dtype
- copy
Learn more about these options <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html">here</a>.
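To make that option list concrete, here is a small sketch (with hypothetical toy data) that names each constructor option explicitly:

```python
import pandas as pd

# Each DataFrame constructor option from the list above, passed by keyword
df = pd.DataFrame(
    data=[[1, 2], [3, 4]],        # the values themselves
    index=["row1", "row2"],       # row labels
    columns=["col1", "col2"],     # column labels
    dtype=float,                  # force a dtype for all columns
    copy=True,                    # copy the input data rather than referencing it
)
print(df)
```

In practice you'll often pass only `data` (and sometimes `index`/`columns`), as in the cell below.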
```
import numpy as np
import pandas as pd
# https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.randn.html
from numpy.random import randn
# https://stackoverflow.com/questions/36847022/what-numbers-that-i-can-put-in-numpy-random-seed
# setting the seed ensures we get the same random numbers
np.random.seed(101)
df = pd.DataFrame(randn(5,4), ["A", "B", "C", "D", "E"], ["W", "X", "Y", "Z"]) # there will be 5 rows & 4 columns as per randn
df
```
# Accessing Data From A DataFrame
Each column is a Pandas series with common indexes. You will use the same bracket notation to pull data.
```
# get column (series) W
print(type(df['W']))
df['W']
# There is another way, similar to SQL, to get the column. The prior way is the norm.
# This is not suggested as it can get confused with the various methods of a DataFrame
df.W
# To get multiple columns back, you need to pass in a list of column names
df[["W", "Z"]]
# You can use this same notation for a single column - suggested to do this to create the habit
df[["W",]]
```
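One practical reason to build the double-bracket habit: the two notations return different types. A quick check, rebuilding the same seeded `df` as above:

```python
import numpy as np
import pandas as pd
from numpy.random import randn

np.random.seed(101)
df = pd.DataFrame(randn(5, 4), ["A", "B", "C", "D", "E"], ["W", "X", "Y", "Z"])

# Single brackets return a Series; a list of column names returns a DataFrame
assert isinstance(df["W"], pd.Series)
assert isinstance(df[["W"]], pd.DataFrame)
```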
# Add New Columns To A DF
To create a new column, you can assign it directly to the DataFrame or include it when the DataFrame is created. If you try to call a column that doesn't exist, you will get an error.
<div class="alert alert-block alert-warning">The following 3 examples give you <b>NaN</b> - why do you think that is?</div>
```
df["new"] = df["W"] + df[["Z",]]
df
df["new"] = df[["W",]] + df["Z"]
df
df["new"] = df[["W",]] + df[["Z",]]
df
```
<div class="alert alert-block alert-warning">Use the block below to see if you can figure it out.</div>
When adding columns, this is proper notation.
```
df["new"] = df["W"] + df["Z"]
df
```
# Dropping Data From A DataFrame
By default, the DataFrame <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html">drop function</a> is set to look at the index or labels first. If you try to drop a column (a whole series) without changing **axis** to 1, you will receive an error.
```
df.drop('new') # this is expecting a label or row name
print(df)
df.drop('new', axis=1) # this is expecting a column
```
You'll notice in the cell below that **df** still has the "new" column. This is because the **drop** function does not happen in place. It returns a DataFrame - it does not change the original one unless you reassign it back to the variable **OR** change the _inplace_ input to True.
```
df
df.drop('new', axis=1, inplace=True) # this is expecting a column & done in place
df
```
Pandas has this set to False as default to ensure you don't lose information.
What should you do to drop row C?
```
df.drop('C') # same as df.drop('C', axis=0)
```
<div class="alert alert-block alert-warning">Why does the axis use 0 for rows and 1 for columns?</div>
DataFrames are essentially fancy index markers on top of a NumPy array.
You access rows from axis 0, as rows are represented in the 0th place of the shape tuple.
And you access columns from axis 1, the next place in the tuple.
```
df.shape # shows tuple (# of rows, # of columns)
```
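A small sketch tying the axis numbers back to the shape tuple (rebuilding the same seeded `df`; `drop` returns a copy, so `df` itself is unchanged):

```python
import numpy as np
import pandas as pd
from numpy.random import randn

np.random.seed(101)
df = pd.DataFrame(randn(5, 4), ["A", "B", "C", "D", "E"], ["W", "X", "Y", "Z"])

assert df.shape == (5, 4)                    # (rows, columns)
assert df.drop("C", axis=0).shape == (4, 4)  # axis 0 removes a row
assert df.drop("W", axis=1).shape == (5, 3)  # axis 1 removes a column
```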
# Selecting Rows From A DataFrame
There are 2 ways to get rows from a DF.
## loc
`df.loc[idx_label]`
```
df.loc['B'] # returns a series
```
## iloc
This is based on numerical positioning in the DataFrame - regardless of labels. Top row is generally 0.
`df.iloc[num]`
```
df.iloc[1] # same as saying df.loc['B'] for this example
```
# Selecting Subsets Of Rows & Columns
## Single Item
`df.loc[row, col]`
```
print(df)
df.loc['B', 'X']
```
## Multiple Item
`df.loc[[list_of_rows], [list_of_cols]]`
This basically takes all the rows and only returns the matching columns for those rows. It is a subset of data, not just a single piece of data.
```
print(df)
df.loc[['B', 'D'], ['X', 'Z']]
```
<div class="alert alert-block alert-info">DIV option 1: alert-info</div>
<div class="alert alert-block alert-success">DIV option 2: success</div>
<div class="alert alert-block alert-warning">DIV option 3: warning</div>
# Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis.
>Using an RNN rather than a strictly feedforward network is more accurate since we can include information about the *sequence* of words.
Here we'll use a dataset of movie reviews, accompanied by sentiment labels: positive or negative.
<img src="assets/reviews_ex.png" width=40%>
### Network Architecture
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=40%>
>**First, we'll pass in words to an embedding layer.** We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the Word2Vec lesson. You can actually train an embedding with the Skip-gram Word2Vec model and use those embeddings as input, here. However, it's good enough to just have an embedding layer and let the network learn a different embedding table on its own. *In this case, the embedding layer is for dimensionality reduction, rather than for learning semantic representations.*
>**After input words are passed to an embedding layer, the new embeddings will be passed to LSTM cells.** The LSTM cells will add *recurrent* connections to the network and give us the ability to include information about the *sequence* of words in the movie review data.
>**Finally, the LSTM outputs will go to a sigmoid output layer.** We're using a sigmoid function because positive and negative = 1 and 0, respectively, and a sigmoid will output predicted, sentiment values between 0-1.
We don't care about the sigmoid outputs except for the **very last one**; we can ignore the rest. We'll calculate the loss by comparing the output at the last time step and the training label (pos or neg).
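A tiny NumPy sketch of that "keep only the last output" step (the values here are made up for illustration): for a batch of sigmoid outputs with shape `(batch_size, seq_length)`, we slice out the final column.

```python
import numpy as np

# Made-up sigmoid outputs for a batch of 3 reviews, each 5 time steps long
sig_out = np.array([
    [0.2, 0.4, 0.6, 0.7, 0.9],   # review 1
    [0.5, 0.3, 0.2, 0.2, 0.1],   # review 2
    [0.5, 0.5, 0.6, 0.8, 0.7],   # review 3
])

# Keep only the output at the last time step for each review
last_outputs = sig_out[:, -1]
print(last_outputs)  # one predicted sentiment value per review
```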
---
### Load in and visualize the data
```
import numpy as np

# read data from text files
with open('data/reviews.txt', 'r') as f:
    reviews = f.read()
with open('data/labels.txt', 'r') as f:
    labels = f.read()

print(reviews[:1000])
print()
print(labels[:20])
```
## Data pre-processing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. Here are the processing steps we'll want to take:
>* We'll want to get rid of periods and extraneous punctuation.
* Also, you might notice that the reviews are delimited with newline characters `\n`. To deal with those, I'm going to split the text into each review using `\n` as the delimiter.
* Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
```
from string import punctuation
# get rid of punctuation
reviews = reviews.lower() # lowercase, standardize
all_text = ''.join([c for c in reviews if c not in punctuation])
# split by new lines and spaces
reviews_split = all_text.split('\n')
all_text = ' '.join(reviews_split)
# create a list of words
words = all_text.split()
words[:30]
```
### Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
> **Exercise:** Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers **start at 1, not 0**.
> Also, convert the reviews to integers and store the reviews in a new list called `reviews_ints`.
```
# feel free to use this import
from collections import Counter
## Build a dictionary that maps words to integers
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
## use the dict to tokenize each review in reviews_split
## store the tokenized reviews in reviews_ints
reviews_ints = []
for review in reviews_split:
    reviews_ints.append([vocab_to_int[word] for word in review.split()])
```
**Test your code**
As a test that you've implemented the dictionary correctly, print out the number of unique words in your vocabulary and the contents of the first, tokenized review.
```
# stats about vocabulary
print('Unique words: ', len((vocab_to_int))) # should ~ 74000+
print()
# print tokens in first review
print('Tokenized review: \n', reviews_ints[:1])
```
### Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
> **Exercise:** Convert labels from `positive` and `negative` to 1 and 0, respectively, and place those in a new list, `encoded_labels`.
```
# 1=positive, 0=negative label conversion
labels_split = labels.split('\n')
encoded_labels = np.array([1 if label == 'positive' else 0 for label in labels_split])
```
### Removing Outliers
As an additional pre-processing step, we want to make sure that our reviews are in good shape for standard processing. That is, our network will expect a standard input text size, and so, we'll want to shape our reviews into a specific length. We'll approach this task in two main steps:
1. Getting rid of extremely long or short reviews; the outliers
2. Padding/truncating the remaining data so that we have reviews of the same length.
Before we pad our review text, we should check for reviews of extremely short or long lengths; outliers that may mess with our training.
```
# outlier review stats
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
```
Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. We'll have to remove any super short reviews and truncate super long reviews. This removes outliers and should allow our model to train more efficiently.
> **Exercise:** First, remove *any* reviews with zero length from the `reviews_ints` list and their corresponding label in `encoded_labels`.
```
print('Number of reviews before removing outliers: ', len(reviews_ints))
## remove any reviews/labels with zero length from the reviews_ints list.
# get indices of any reviews with length 0
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
# remove 0-length reviews and their labels
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
encoded_labels = np.array([encoded_labels[ii] for ii in non_zero_idx])
print('Number of reviews after removing outliers: ', len(reviews_ints))
```
---
## Padding sequences
To deal with both short and very long reviews, we'll pad or truncate all our reviews to a specific length. For reviews shorter than some `seq_length`, we'll pad with 0s. For reviews longer than `seq_length`, we can truncate them to the first `seq_length` words. A good `seq_length`, in this case, is 200.
> **Exercise:** Define a function that returns an array `features` that contains the padded data, of a standard size, that we'll pass to the network.
* The data should come from `review_ints`, since we want to feed integers to the network.
* Each row should be `seq_length` elements long.
* For reviews shorter than `seq_length` words, **left pad** with 0s. That is, if the review is `['best', 'movie', 'ever']`, `[117, 18, 128]` as integers, the row will look like `[0, 0, 0, ..., 0, 117, 18, 128]`.
* For reviews longer than `seq_length`, use only the first `seq_length` words as the feature vector.
As a small example, if the `seq_length=10` and an input review is:
```
[117, 18, 128]
```
The resultant, padded sequence should be:
```
[0, 0, 0, 0, 0, 0, 0, 117, 18, 128]
```
**Your final `features` array should be a 2D array, with as many rows as there are reviews, and as many columns as the specified `seq_length`.**
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
```
def pad_features(reviews_ints, seq_length):
    ''' Return features of review_ints, where each review is padded with 0's
        or truncated to the input seq_length.
    '''
    # getting the correct rows x cols shape
    features = np.zeros((len(reviews_ints), seq_length), dtype=int)

    # for each review, left-pad with zeros or truncate to seq_length
    for i, row in enumerate(reviews_ints):
        features[i, -len(row):] = np.array(row)[:seq_length]

    return features
# Test your implementation!
seq_length = 200
features = pad_features(reviews_ints, seq_length=seq_length)
## test statements - do not change - ##
assert len(features)==len(reviews_ints), "Your features should have as many rows as reviews."
assert len(features[0])==seq_length, "Each feature row should contain seq_length values."
# print first 10 values of the first 30 batches
print(features[:30,:10])
```
## Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
> **Exercise:** Create the training, validation, and test sets.
* You'll need to create sets for the features and the labels, `train_x` and `train_y`, for example.
* Define a split fraction, `split_frac` as the fraction of data to **keep** in the training set. Usually this is set to 0.8 or 0.9.
* Whatever data is left will be split in half to create the validation and *testing* data.
```
split_frac = 0.8
## split data into training, validation, and test data (features and labels, x and y)
split_idx = int(len(features)*split_frac)
train_x, remaining_x = features[:split_idx], features[split_idx:]
train_y, remaining_y = encoded_labels[:split_idx], encoded_labels[split_idx:]
test_idx = int(len(remaining_x)*0.5)
val_x, test_x = remaining_x[:test_idx], remaining_x[test_idx:]
val_y, test_y = remaining_y[:test_idx], remaining_y[test_idx:]
## print out the shapes of your resultant feature data
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
# print(type(train_x))
```
**Check your work**
With train, validation, and test fractions equal to 0.8, 0.1, 0.1, respectively, the final, feature data shapes should look like:
```
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
```
---
## DataLoaders and Batching
After creating training, test, and validation data, we can create DataLoaders for this data by following two steps:
1. Create a known format for accessing our data, using [TensorDataset](https://pytorch.org/docs/stable/data.html#) which takes in an input set of data and a target set of data with the same first dimension, and creates a dataset.
2. Create DataLoaders and batch our training, validation, and test Tensor datasets.
```
train_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))
train_loader = DataLoader(train_data, batch_size=batch_size)
```
This is an alternative to creating a generator function for batching our data into full batches.
```
import torch
from torch.utils.data import TensorDataset, DataLoader
# create Tensor datasets
train_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))
valid_data = TensorDataset(torch.from_numpy(val_x), torch.from_numpy(val_y))
test_data = TensorDataset(torch.from_numpy(test_x), torch.from_numpy(test_y))
# dataloaders
batch_size = 50
# make sure to SHUFFLE your training data
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
valid_loader = DataLoader(valid_data, shuffle=True, batch_size=batch_size)
test_loader = DataLoader(test_data, shuffle=True, batch_size=batch_size)
# obtain one batch of training data
dataiter = iter(train_loader)
sample_x, sample_y = next(dataiter)  # the built-in next() works across PyTorch versions
print('Sample input size: ', sample_x.size()) # batch_size, seq_length
print('Sample input: \n', sample_x)
print()
print('Sample label size: ', sample_y.size()) # batch_size
print('Sample label: \n', sample_y)
```
---
# Sentiment Network with PyTorch
Below is where you'll define the network.
<img src="assets/network_diagram.png" width=40%>
The layers are as follows:
1. An [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) that converts our word tokens (integers) into embeddings of a specific size.
2. An [LSTM layer](https://pytorch.org/docs/stable/nn.html#lstm) defined by a hidden_state size and number of layers
3. A fully-connected output layer that maps the LSTM layer outputs to a desired output_size
4. A sigmoid activation layer which turns all outputs into a value 0-1; return **only the last sigmoid output** as the output of this network.
### The Embedding Layer
We need to add an [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) because there are 74000+ words in our vocabulary. It is massively inefficient to one-hot encode that many classes. So, instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using Word2Vec, then load it here. But, it's fine to just make a new layer, using it for only dimensionality reduction, and let the network learn the weights.
### The LSTM Layer(s)
We'll create an [LSTM](https://pytorch.org/docs/stable/nn.html#lstm) to use in our recurrent network, which takes in an input_size, a hidden_dim, a number of layers, a dropout probability (for dropout between multiple layers), and a batch_first parameter.
Most of the time, your network will have better performance with more layers; typically 2-3. Adding more layers allows the network to learn really complex relationships.
> **Exercise:** Complete the `__init__`, `forward`, and `init_hidden` functions for the SentimentRNN model class.
Note: `init_hidden` should initialize the hidden and cell state of an lstm layer to all zeros, and move those state to GPU, if available.
```
# First checking if GPU is available
train_on_gpu = torch.cuda.is_available()

if train_on_gpu:
    print('Training on GPU.')
else:
    print('No GPU available, training on CPU.')

import torch.nn as nn

class SentimentRNN(nn.Module):
    """
    The RNN model that will be used to perform Sentiment analysis.
    """

    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
        """
        Initialize the model by setting up the layers.
        """
        super(SentimentRNN, self).__init__()

        self.output_size = output_size
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim

        # embedding and LSTM layers
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
                            dropout=drop_prob, batch_first=True)

        # dropout layer
        self.dropout = nn.Dropout(0.3)

        # linear and sigmoid layers
        self.fc = nn.Linear(hidden_dim, output_size)
        self.sig = nn.Sigmoid()

    def forward(self, x, hidden):
        """
        Perform a forward pass of our model on some input and hidden state.
        """
        batch_size = x.size(0)

        # embeddings and lstm_out
        x = x.long()
        embeds = self.embedding(x)
        lstm_out, hidden = self.lstm(embeds, hidden)

        # stack up lstm outputs
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)

        # dropout and fully-connected layer
        out = self.dropout(lstm_out)
        out = self.fc(out)

        # sigmoid function
        sig_out = self.sig(out)

        # reshape to be batch_size first
        sig_out = sig_out.view(batch_size, -1)
        sig_out = sig_out[:, -1]  # get the last time step's output for each sequence

        # return last sigmoid output and hidden state
        return sig_out, hidden

    def init_hidden(self, batch_size):
        ''' Initializes hidden state '''
        # Create two new tensors with sizes n_layers x batch_size x hidden_dim,
        # initialized to zero, for hidden state and cell state of LSTM
        weight = next(self.parameters()).data

        if train_on_gpu:
            hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
                      weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
        else:
            hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
                      weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())

        return hidden
```
## Instantiate the network
Here, we'll instantiate the network. First up, defining the hyperparameters.
* `vocab_size`: Size of our vocabulary or the range of values for our input, word tokens.
* `output_size`: Size of our desired output; the number of class scores we want to output (pos/neg).
* `embedding_dim`: Number of columns in the embedding lookup table; size of our embeddings.
* `hidden_dim`: Number of units in the hidden layers of our LSTM cells. Usually, larger is better performance-wise. Common values are 128, 256, 512, etc.
* `n_layers`: Number of LSTM layers in the network. Typically between 1-3
> **Exercise:** Define the model hyperparameters.
```
# Instantiate the model w/ hyperparams
vocab_size = len(vocab_to_int)+1 # +1 for the 0 padding + our word tokens
output_size = 1
embedding_dim = 400
hidden_dim = 256
n_layers = 2
net = SentimentRNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)
print(net)
```
---
## Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. You can also add code to save a model by name.
>We'll also be using a new kind of cross entropy loss, which is designed to work with a single Sigmoid output. [BCELoss](https://pytorch.org/docs/stable/nn.html#bceloss), or **Binary Cross Entropy Loss**, applies cross entropy loss to a single value between 0 and 1.
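To make the loss concrete, here is a small hand-computed sketch of binary cross-entropy for a single sigmoid output `p` and target `y` (pure Python, independent of the `BCELoss` module):

```python
import math

def bce(p, y):
    # loss = -(y*log(p) + (1-y)*log(1-p)), for prediction p in (0, 1) and target y in {0, 1}
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# Confident, correct predictions incur low loss; confident, wrong ones incur high loss
print(bce(0.9, 1))  # small
print(bce(0.1, 1))  # large
```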
We also have some data and training hyperparameters:
* `lr`: Learning rate for our optimizer.
* `epochs`: Number of times to iterate through the training dataset.
* `clip`: The maximum gradient value to clip at (to prevent exploding gradients).
```
# loss and optimization functions
lr = 0.001

criterion = nn.BCELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=lr)

# training params
epochs = 4  # 3-4 is approx where I noticed the validation loss stop decreasing

counter = 0
print_every = 100
clip = 5  # gradient clipping

# move model to GPU, if available
if train_on_gpu:
    net.cuda()

net.train()
# train for some number of epochs
for e in range(epochs):
    # initialize hidden state
    h = net.init_hidden(batch_size)

    # batch loop
    for inputs, labels in train_loader:
        counter += 1

        if train_on_gpu:
            inputs, labels = inputs.cuda(), labels.cuda()

        # Creating new variables for the hidden state, otherwise
        # we'd backprop through the entire training history
        h = tuple([each.data for each in h])

        # zero accumulated gradients
        net.zero_grad()

        # get the output from the model
        output, h = net(inputs, h)

        # calculate the loss and perform backprop
        loss = criterion(output.squeeze(), labels.float())
        loss.backward()
        # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
        nn.utils.clip_grad_norm_(net.parameters(), clip)
        optimizer.step()

        # loss stats
        if counter % print_every == 0:
            # Get validation loss
            val_h = net.init_hidden(batch_size)
            val_losses = []
            net.eval()
            for inputs, labels in valid_loader:

                # Creating new variables for the hidden state, otherwise
                # we'd backprop through the entire training history
                val_h = tuple([each.data for each in val_h])

                if train_on_gpu:
                    inputs, labels = inputs.cuda(), labels.cuda()

                output, val_h = net(inputs, val_h)
                val_loss = criterion(output.squeeze(), labels.float())

                val_losses.append(val_loss.item())

            net.train()
            print("Epoch: {}/{}...".format(e+1, epochs),
                  "Step: {}...".format(counter),
                  "Loss: {:.6f}...".format(loss.item()),
                  "Val Loss: {:.6f}".format(np.mean(val_losses)))
```
---
## Testing
There are a few ways to test your network.
* **Test data performance:** First, we'll see how our trained model performs on all of our defined test_data, above. We'll calculate the average loss and accuracy over the test data.
* **Inference on user-generated data:** Second, we'll see if we can input just one example review at a time (without a label), and see what the trained model predicts. Looking at new, user input data like this, and predicting an output label, is called **inference**.
```
# Get test data loss and accuracy

test_losses = []  # track loss
num_correct = 0

# init hidden state
h = net.init_hidden(batch_size)

net.eval()
# iterate over test data
for inputs, labels in test_loader:

    # Creating new variables for the hidden state, otherwise
    # we'd backprop through the entire training history
    h = tuple([each.data for each in h])

    if train_on_gpu:
        inputs, labels = inputs.cuda(), labels.cuda()

    # get predicted outputs
    output, h = net(inputs, h)

    # calculate loss
    test_loss = criterion(output.squeeze(), labels.float())
    test_losses.append(test_loss.item())

    # convert output probabilities to predicted class (0 or 1)
    pred = torch.round(output.squeeze())  # rounds to the nearest integer

    # compare predictions to true label
    correct_tensor = pred.eq(labels.float().view_as(pred))
    correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
    num_correct += np.sum(correct)

# -- stats! -- ##
# avg test loss
print("Test loss: {:.3f}".format(np.mean(test_losses)))

# accuracy over all test data
test_acc = num_correct/len(test_loader.dataset)
print("Test accuracy: {:.3f}".format(test_acc))
```
### Inference on a test review
You can change this test_review to any text that you want. Read it and think: is it pos or neg? Then see if your model predicts correctly!
> **Exercise:** Write a `predict` function that takes in a trained net, a plain text_review, and a sequence length, and prints out a custom statement for a positive or negative review!
* You can use any functions that you've already defined or define any helper functions you want to complete `predict`, but it should just take in a trained net, a text review, and a sequence length.
```
# negative test review
test_review_neg = 'The worst movie I have seen; acting was terrible and I want my money back. This movie had bad acting and the dialogue was slow.'

from string import punctuation

def tokenize_review(test_review):
    test_review = test_review.lower()  # lowercase
    # get rid of punctuation
    test_text = ''.join([c for c in test_review if c not in punctuation])

    # splitting by spaces
    test_words = test_text.split()

    # tokens
    test_ints = []
    test_ints.append([vocab_to_int[word] for word in test_words])

    return test_ints

# test code and generate tokenized review
test_ints = tokenize_review(test_review_neg)
print(test_ints)

# test sequence padding
seq_length = 200
features = pad_features(test_ints, seq_length)

print(features)

# test conversion to tensor and pass into your model
feature_tensor = torch.from_numpy(features)
print(feature_tensor.size())

def predict(net, test_review, sequence_length=200):

    net.eval()

    # tokenize review
    test_ints = tokenize_review(test_review)

    # pad tokenized sequence
    seq_length = sequence_length
    features = pad_features(test_ints, seq_length)

    # convert to tensor to pass into your model
    feature_tensor = torch.from_numpy(features)

    batch_size = feature_tensor.size(0)

    # initialize hidden state
    h = net.init_hidden(batch_size)

    if train_on_gpu:
        feature_tensor = feature_tensor.cuda()

    # get the output from the model
    output, h = net(feature_tensor, h)

    # convert output probabilities to predicted class (0 or 1)
    pred = torch.round(output.squeeze())
    # printing output value, before rounding
    print('Prediction value, pre-rounding: {:.6f}'.format(output.item()))

    # print custom response
    if pred.item() == 1:
        print("Positive review detected!")
    else:
        print("Negative review detected.")

# positive test review
test_review_pos = 'This movie had the best acting and the dialogue was so good. I loved it.'

# call function
seq_length = 200  # good to use the length that was trained on
predict(net, test_review_neg, seq_length)
```
### Try out test_reviews of your own!
Now that you have a trained model and a predict function, you can pass in _any_ kind of text and this model will predict whether the text has a positive or negative sentiment. Push this model to its limits and try to find what words it associates with positive or negative.
Later, you'll learn how to deploy a model like this to a production environment so that it can respond to any kind of user data put into a web app!
```
import pandas as pd
import librosa
import numpy as np
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
import IPython.display as ipd
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import shuffle
# Import for local library
import os
import sys
sys.path.insert(0, os.path.abspath('../lib'))
import util
```
```
df = pd.read_csv('../../SQLqueries/fine_path_species_label.csv', header = None)
df.columns = ['label_id', 'audio_id', 'fine_start_time', 'fine_end_time', 'species', 'sound_type', 'path']
df = util.remove_label_bug(df, verbose=0)
df_all = pd.read_csv('../../SQLqueries/fine_mosquito_species_inc_none.csv', header = None)
df_all.columns = ['label_id', 'audio_id', 'fine_start_time', 'fine_end_time', 'species', 'sound_type', 'path']
df_all = util.remove_label_bug(df_all, verbose=1)
df_noise = pd.read_csv('../../SQLqueries/fine_path_non_mosquito.csv', header = None)
df_noise.columns = ['label_id', 'audio_id', 'fine_start_time', 'fine_end_time', 'species', 'sound_type', 'path']
df_noise = util.remove_label_bug(df_noise, verbose=1)
# Append larvae recordings from dataframe to mosquito instead of noise.
# This is a hotfix to merge the two types from the database
# Note that species information has gone missing for this field.
df = pd.concat([df, df_noise[df_noise["sound_type"] == " 'larvae'"]])  # DataFrame.append was removed in pandas 2.x
# Select noise data as labelled entries of all except larvae type
df_noise = df_noise[df_noise["sound_type"] != " 'larvae'"]
# See list of paths used in dataframe
print('Dataframe original:', df["path"].unique())
print('Dataframe noise:', df_noise["path"].unique())
# Choose to train on data which contains the strings below in the path filename for true holdout data split
index_list_train = []
index_list_test = []
for index, path in enumerate(df["path"]):
if 'Thai' not in path and 'Culex/sounds/00' not in path:
index_list_train.append(index)
else:
# Removing duplicate entries for "xxxx.wav" and "June/Julyxxx_COW.wav"
if 'cow' not in path and 'COW' not in path and 'HLC' not in path and 'hlc' not in path and 'June' not in path:
index_list_test.append(index)
print('Number of clips chosen for training:', len(index_list_train))
print('Number of clips chosen for testing:', len(index_list_test))
df_train = df.iloc[index_list_train]
df_test = df.iloc[index_list_test]
# Choose to train on noise which contains the strings below in the path filename for true holdout data split
index_list_train = []
index_list_test = []
for index, path in enumerate(df_noise["path"]):
if 'Thai' not in path and 'Culex/sounds/00' not in path:
index_list_train.append(index)
else:
index_list_test.append(index)
print('Number of clips chosen for training:', len(index_list_train))
print('Number of clips chosen for testing:', len(index_list_test))
df_train_noise = df_noise.iloc[index_list_train]
df_test_noise = df_noise.iloc[index_list_test]
# See list of paths used in train/test/noise dataframes
print('Train:', df_train["path"].unique())
print('Test:', df_test["path"].unique())
print('Train noise:', df_train_noise["path"].unique())
print('Test noise:', df_test_noise["path"].unique())
x_s_tr, x_s_tr_l = util.get_wav_for_df(df_train, 8000)
x_s_te, x_s_te_l = util.get_wav_for_df(df_test, 8000)
x_n_tr, x_n_tr_l = util.get_wav_for_df(df_train_noise, 8000)
```
## Augmenting dataset with extra data and splitting
It now remains to decide how to split the data into comprehensive training and test sets. We can use cross-validation on a subset of the training set to further select model hyperparameters. We will explicitly split into:
* Train signal, train noise
* Test signal, test noise
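The path-substring holdout logic used above can be sketched in isolation. The file paths below are made up; the marker substrings mirror the `'Thai'` and `'Culex/sounds/00'` filters from the code:

```python
# Hypothetical paths standing in for df["path"]; the markers mirror the
# filters that define the holdout set in this notebook.
paths = [
    '/Culex/sounds/0003_norm.wav',
    '/CDC/sounds/rec1.wav',
    '/Thai/sounds/larvae_rec1.wav',
    '/Experiments/sounds/noise0.wav',
]

holdout_markers = ('Thai', 'Culex/sounds/00')
train_idx = [i for i, p in enumerate(paths)
             if not any(m in p for m in holdout_markers)]
test_idx = [i for i in range(len(paths)) if i not in train_idx]

print(train_idx)  # [1, 3]
print(test_idx)   # [0, 2]
```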
```
noise_path_names_Culex = [ '/Culex/sounds/0001_norm.wav', '/Culex/sounds/0002_norm.wav', '/Culex/sounds/0003_norm.wav',
'/Culex/sounds/0004_norm.wav', '/Culex/sounds/0005_norm.wav', '/Culex/sounds/0006_norm.wav', '/Culex/sounds/0007_norm.wav',
'/Culex/sounds/0008_norm.wav', '/Culex/sounds/0009_norm.wav', '/Culex/sounds/0010_norm.wav', '/Culex/sounds/0011_norm.wav',
'/Culex/sounds/0012_norm.wav', '/Culex/sounds/0013_norm.wav', '/Culex/sounds/0014_norm.wav', '/Culex/sounds/0015_norm.wav',
'/Culex/sounds/0016_norm.wav', '/Culex/sounds/0017_norm.wav', '/Culex/sounds/0018_norm.wav', '/Culex/sounds/0019_norm.wav',
'/Culex/sounds/0020_norm.wav', '/Culex/sounds/0025_norm.wav', '/Culex/sounds/0041_norm.wav', '/Culex/sounds/0042_norm.wav',
'/Culex/sounds/0043_norm.wav', '/Culex/sounds/0044_norm.wav', '/Culex/sounds/0046_norm.wav', '/Culex/sounds/0047_norm.wav',
'/Culex/sounds/0051_norm.wav', '/Culex/sounds/0053_norm.wav', '/Culex/sounds/0054_norm.wav', '/Culex/sounds/0056_norm.wav']
x_n_add_Culex, s_n_add_Culex = util.get_wav_for_path(noise_path_names_Culex, sr=8000)
x_n_add_CDC, s_n_add_CDC = util.get_wav_for_path(['/CDC/sounds/background.wav'], sr=8000)
# Confirmed noise (Dav):
dav_noise = ['/Experiments/sounds/noise0.wav', '/Experiments/sounds/noise1.wav', '/Experiments/sounds/noise2.wav',
'/Experiments/sounds/noise3.wav', '/Experiments/sounds/noise4.wav', '/Experiments/sounds/noise5.wav',
'/Experiments/sounds/noise6.wav']
x_n_add_dav, s_n_add_dav = util.get_wav_for_path(dav_noise, sr=8000)
# Important: need to filter out duplicates. MUST UPDATE df_all
x_n_Thai, s_n_Thai = util.get_noise_wav_for_df(df_all, ['Thai'], 0.75, 8000, verbose=1)
# x_n_Thai_July, s_n_Thai_July = get_noise_wav_for_df(df, ['July'], 8000)
x_n_Culex, s_n_Culex = util.get_noise_wav_for_df(df, ['Culex/sounds/00'], 0.00, 8000, verbose=2)
# Print unique species in set(s)
print('Training species')
print(df_train["species"].unique())
print('Testing species')
print(df_test["species"].unique())
```
## Split signal
We hold out some data for testing. There is no obvious natural split at present, so the choice below is somewhat ad hoc.
```
# Run a few files to verify
ipd.Audio(x_s_te[20], rate = 8000)
```
### Features for signal from query
```
X_s_tr = util.get_feat(x_s_tr, sr=8000)
X_s_te = util.get_feat(x_s_te, sr=8000)
X_n_tr = util.get_feat(x_n_tr, sr=8000)
```
### Features for noise from Culex Q home data and Thai
```
X_n_Thai = util.get_feat(x_n_Thai, sr=8000)
X_n_Culex = util.get_feat(x_n_Culex, sr=8000)
```
### Features for noise from CDC and remaining Culex data, and other noise
```
X_n_add_Culex = util.get_feat(x_n_add_Culex, sr=8000)
X_n_add_CDC = util.get_feat(x_n_add_CDC, sr=8000)
X_n_add_dav = util.get_feat(x_n_add_dav, sr=8000)
```
### Concatenate and trim+shuffle for balanced signal and noise
```
X_tr = np.vstack([X_s_tr, shuffle(X_n_tr)[:len(X_s_tr)]])
y_tr = np.zeros(len(X_tr))
y_tr[:len(X_s_tr)] = 1
plt.plot(y_tr)
# y_te = np.hstack([np.ones(len(X_s_te)), np.zeros(len(X_n_Culex)), np.zeros(len(X_n_add_Culex))])
y_te = np.ones(len(X_s_te))
```
# Train model
```
clf = RandomForestClassifier(max_depth=2, random_state=0, verbose=0, n_jobs=-1)
# clf = SVC(verbose=2, probability=False)
clf.fit(X_tr,y_tr)
preds = clf.predict(np.vstack([X_s_te]))
print('Signal acc', accuracy_score(y_te, preds))
plt.figure()
plt.plot(preds, '.')
plt.plot(y_te)
preds = clf.predict(np.vstack([X_n_Culex, X_n_add_Culex]))
y_te = np.hstack([np.zeros(len(X_n_Culex)), np.zeros(len(X_n_add_Culex))])
print('Noise acc', accuracy_score(y_te, preds))
plt.figure()
plt.plot(preds, '.')
plt.plot(y_te)
```
# Evaluate Model
```
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 16})
_,_ = util.df_metadata(df_train, plot=True, filepath='Graphs/Train2.pdf')
_,_ = util.df_metadata(df_test, plot=True, filepath='Graphs/Test2.pdf')
print('Test set processing (i) ...')
species_wav_dict = {}
for species in df_test["species"].unique():
x, signal_length = util.get_wav_for_df(df_test[df_test["species"] == species], 8000)
species_wav_dict[species.strip().strip("\'")] = [x, signal_length]  # Correct for odd spacing and quoting in the database (maybe check the SQL query?)
print('Completed wav extraction for species', species)
print('Test set processing (ii) ...')
accs = []
for species in species_wav_dict.keys():
X_test = util.get_feat(species_wav_dict[species][0], sr=8000)
y_test = np.ones(np.shape(X_test)[0])
preds = clf.predict(X_test)
acc = accuracy_score(y_test, preds)
accs.append(acc)
print('Seconds for species:', species, species_wav_dict[species][1], 'acc', acc)
plt.bar([*list(species_wav_dict.keys())[:-1], 'larvae'], accs)
plt.xticks(rotation=90)
plt.grid()
plt.ylabel('Accuracy per species')
plt.savefig('Graphs/RFtestsignal2.pdf', bbox_inches='tight')
plt.show()
preds = clf.predict(X_n_Culex)
acc = accuracy_score(np.zeros(len(X_n_Culex)),preds)
print('Seconds for Culex noise:', s_n_Culex, 'acc', acc)
plt.bar('Culex implied noise', acc)
preds = clf.predict(X_n_add_Culex)
acc = accuracy_score(np.zeros(len(X_n_add_Culex)),preds)
print('Seconds for Culex additional noise files:', s_n_add_Culex, 'acc', acc)
plt.bar('Culex files noise', acc)
preds = clf.predict(X_n_Thai)
acc = accuracy_score(np.zeros(len(X_n_Thai)),preds)
print('Seconds for Thai assumed 0 noise files:', s_n_Thai, 'acc', acc)
plt.bar('Thai implied noise', acc)
preds = clf.predict(X_n_add_dav)
acc = accuracy_score(np.zeros(len(X_n_add_dav)),preds)
print('Seconds for Davide varied 0, noise files:', s_n_add_dav, 'acc', acc)
plt.bar('Davide speech', acc)
preds = clf.predict(X_n_add_CDC)
acc = accuracy_score(np.zeros(len(X_n_add_CDC)),preds)
print('Seconds for CDC files 0, noise files:', s_n_add_CDC, 'acc', acc)
plt.bar('CDC files noise', acc)
plt.xticks(rotation=90)
plt.grid(axis = 'y', which='both')
plt.ylabel('Accuracy per class')
plt.minorticks_on()
# plt.savefig('Graphs/SVMtestnoise.pdf', bbox_inches='tight')
plt.show()
```
## Create overall confusion matrix
```
# Stack full data together
y_te = np.hstack([np.ones(len(X_s_te)), np.zeros(len(X_n_Culex)), np.zeros(len(X_n_add_Culex)),
np.zeros(len(X_n_Thai)), np.zeros(len(X_n_add_dav))])
X_te = np.vstack([X_s_te, X_n_Culex, X_n_add_Culex,
X_n_Thai, X_n_add_dav])
# preds = clf.predict(X_te)
# accuracy_score(y_te, preds)
# plot_confusion_matrix(clf, X_test, y_test) # doctest: +SKIP
# plt.show() # doctest: +SKIP
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(clf, X_te, y_te, normalize='true', cmap=plt.cm.Blues) # doctest: +SKIP
# plt.savefig('Graphs/SVMconfnormunbalanced.pdf', bbox_inches='tight')
plt.show() # doctest: +SKIP
plot_confusion_matrix(clf, X_te, y_te, normalize=None, cmap=plt.cm.Blues, values_format='d') # doctest: +SKIP
# plt.savefig('Graphs/SVMconfunbalanced.pdf', bbox_inches='tight')
plt.show() # doctest: +SKIP
# Stack balanced test data together
y_te = np.hstack([np.ones(len(X_s_te[:18063])), np.zeros(len(X_s_te[:18063]))])
X_te = np.vstack([shuffle(X_s_te)[:18063], shuffle(X_n_add_Culex)[:6021],
shuffle(X_n_Thai)[:6021], shuffle(X_n_add_dav)[:]])
plot_confusion_matrix(clf, X_te, y_te, normalize='true',cmap=plt.cm.Blues) # doctest: +SKIP
# plt.savefig('Graphs/RFconfnormbalanced.pdf', bbox_inches='tight')
plt.show() # doctest: +SKIP
plot_confusion_matrix(clf, X_te, y_te, normalize=None,cmap=plt.cm.Blues, values_format='d') # doctest: +SKIP
# plt.savefig('Graphs/RFconfbalanced.pdf', bbox_inches='tight')
plt.show() # doctest: +SKIP
preds = clf.predict(X_te)
accuracy_score(y_te, preds)
```
# Bug reports
* ```19 June 2018_359_379cow```: End time of signal is longer than actual signal?
* Events are marked with mosquito "deaths" as point labels, where end time = start time:
* ```Label duration of 0.0 seconds at path '/Thai/sounds/larvae_#12-20_rec1.wav' ... deleting index 933```
* ```Label duration of 0.0 seconds at path '/Thai/sounds/larvae_#12-20_rec1.wav' ... deleting index 935```
* ```Label duration of 0.0 seconds at path '/Thai/sounds/larvae_#12-20_rec1.wav' ... deleting index 936```
* ```Label duration of 0.0 seconds at path '/Thai/sounds/larvae_#12-20_rec1.wav' ... deleting index 938```
* Larvae are not marked as mosquito, species information absent from labels
* Noise labels are not consistent in experiments: some data is marked as "background", some is untagged, some recordings contain positive labels with the implication that the negative labels are noise, whereas other recordings only strongly label positive events, ignoring lower SNR positives
* Errors in recorded datetime (30/07 vs 30/06); not all entries have a datetime label
* Some extracted Thai noise contains mosquito: maybe write padding to cut down the assumed noise with a small margin for error to be safe
* Labels 645-657 are missing species for:
`98.014746 105.544811 #651cow
109.454388 131.549004 #652cow
136.725064 156.988787 #653cow
161.201218 177.720557 #654cow
182.111948 194.818073 #655cow
197.571296 206.064990 #656cow
209.602882 223.313930 #657cow`
* Species information is present in:
`26.413735 37.526432 #645COW
42.499442 55.591018 #646COW
60.715454 68.906293 #647COW`
* ```#235-239.txt``` and ```17 June 2018_235-239cow.txt```: the recordings appear very similar apart from sample rate, yet two label tracks are supplied (unclear why)
* ```235-239``` is missing its last label: a 20-second mosquito event is labelled as noise
* We cannot assume non-1 labels are noise: in the Thai data we filtered all entries by *species info having to be in the original database entry*, which discarded a significant amount of training data and made the "non-1 is noise" assumption false for Thai, though it holds for Culex
# Fixes
Load a dataframe with mosquito-positive entries but no species information in the same query. We can then filter further in pandas to test by species.
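One of the fixes implied above — screening out the zero-duration point labels before feature extraction — can be sketched as follows (the label rows are made up; `util.remove_label_bug` presumably does something along these lines):

```python
# Hypothetical label rows mirroring the start/end time columns used above
labels = [
    {'fine_start_time': 98.0, 'fine_end_time': 105.5},
    {'fine_start_time': 31.2, 'fine_end_time': 31.2},   # point label, duration 0.0
    {'fine_start_time': 109.4, 'fine_end_time': 131.5},
]

# Keep only labels with strictly positive duration
kept = [l for l in labels if l['fine_end_time'] - l['fine_start_time'] > 0.0]
print(len(kept))  # 2
```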
| github_jupyter |
# Supervised Learning - Linear Regression
Do you remember the recipe for Machine Learning? Let me remind you once again!
* Define Problem : We start by defining the problem we are trying to solve. This can be as simple as prediction of your next semester's result based on your previous results.
* Collect Data : Next step is to collect relevant data based on the problem definition. This can be your grades in different semesters.
* Prepare Data : The data collected for our problem is preprocessed. This can be removing redundant grades and replacing the missing ones.
* Select Model (Algorithm) : After the data is ready, we proceed to select the machine learning model. The selection is based on the problem type (e.g. classification or regression) and the data that is available to us. The model can be a linear regression model in our case.
* Train Model : The selected model is then trained to learn from the data we have collected.
* Evaluate Model : Final step is to evaluate the model that we have trained for accuracy and view the results.
This is exactly what we are going to do here.
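As a warm-up, the core of linear regression — fitting a slope and intercept by least squares — can be sketched in a few lines of plain Python (made-up data; the notebook itself works with pandas and scikit-learn):

```python
# Made-up data roughly following y = 2x; pure-Python least-squares fit
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 8.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# slope = cov(x, y) / var(x); intercept from the point of means
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # 1.99 0.05
```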
## Step 1 - Define Problem
The data scientists at AwesomeMart have collected 2013 sales data for 1559 products across 10 stores in different cities. The aim is to build a predictive model and find out the sales of each product at a particular store using machine learning.
Using this model, AwesomeMart will try to understand the properties of products and stores which play a key role in increasing sales.
## Step 2 - Collect & Prepare Data
## Step 2.1 - Import Data & Primary Data Analysis
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
#Reading the dataset in a dataframe using Pandas
df = pd.read_csv("train.csv")
```
Now let us do some quick data analysis!
```
df.head()
df.shape
df.describe()
```
Here are a few inferences you can draw from the output of the describe() function:
* Average cost of an item is 140
* AwesomeMart was first established in 1985
* They have a max sales of 13,086 and min of 33
* There are about 8,523 products in store and 12 features.
For the non-numerical values (e.g. Item_Fat_Content, Item_Type etc.), we can look at frequency distribution to understand whether they make sense or not. The frequency table can be printed by following command:
```
df['Item_Fat_Content'].value_counts()
```
## Step 2.2 - Finding & Imputing Missing Values
```
df.isnull().sum()
from sklearn.impute import SimpleImputer  # sklearn.preprocessing.Imputer was removed in newer scikit-learn
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer = imputer.fit(df.iloc[:, 1:2])
df.iloc[:, 1:2] = imputer.transform(df.iloc[:, 1:2])
df['Outlet_Size'] = df['Outlet_Size'].fillna('Medium')
df.isnull().sum()
```
Awesome! Now we don't have any missing values.
## Step 2.3 - Data Visualization
```
# plt.figure(figsize=(6,6))
# sns.boxplot(x = 'Item_MRP', y = 'Item_Outlet_Sales', data = df)
# plt.figure(figsize=(6,6))
# sns.barplot(x = 'Item_Weight', y = 'Item_Outlet_Sales', data = df)
# plt.figure(figsize=(6,6))
# sns.violinplot(x = 'Outlet_Size', y = 'Item_Outlet_Sales', data = df)
```
## Step 3 - Modeling
Since sklearn requires all inputs to be numeric, we should convert all our categorical variables to numeric by encoding the categories.
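The idea behind dummy encoding can be sketched by hand before reaching for `pd.get_dummies` (the example values are illustrative):

```python
# One-hot ("dummy") encoding by hand for one categorical column.
# pd.get_dummies does the same, adding column-name prefixes.
values = ['Low Fat', 'Regular', 'Low Fat', 'low fat']

categories = sorted(set(values))  # note: 'low fat' sorts after the capitalised labels
encoded = [[1 if v == c else 0 for c in categories] for v in values]

print(categories)  # ['Low Fat', 'Regular', 'low fat']
print(encoded)     # [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
```

Inconsistent spellings such as `'Low Fat'` versus `'low fat'` become separate columns, which is why the category values should be cleaned first.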
```
df.head()
df.Item_Identifier.value_counts()
Item_Identifier_New = pd.get_dummies(df.Item_Identifier, prefix='Item_Identifier')
df.Item_Fat_Content.value_counts()
# Dummy column names containing spaces must be selected with bracket notation
Item_Fat_Content_New = pd.get_dummies(df.Item_Fat_Content, prefix='Item_Fat_Content')['Item_Fat_Content_Low Fat']
df.Item_Type.value_counts()
Item_Type_New = pd.get_dummies(df.Item_Type, prefix='Item_Type').Item_Type_Dairy
```
| github_jupyter |
# Clustered Multitask GP (w/ Pyro/GPyTorch High-Level Interface)
## Introduction
In this example, we use the Pyro integration for a GP model with additional latent variables.
We are modelling a multitask GP in this example. Rather than assuming a linear correlation among the different tasks, we assume that there is cluster structure for the different tasks. Let's assume there are $k$ different clusters of tasks. The generative model for task $i$ is:
$$
p(\mathbf y_i \mid \mathbf x_i) = \int \sum_{z_i=1}^k p(\mathbf y_i \mid \mathbf f (\mathbf x_i), z_i) \: p(z_i) \: p(\mathbf f (\mathbf x_i) ) \: d \mathbf f
$$
where $z_i$ is the cluster assignment for task $i$. There are therefore $k$ latent functions $\mathbf f = [f_1 \ldots f_k]$, each modelled by a GP, representing each cluster.
Our goal is therefore to infer:
- The latent functions $f_1 \ldots f_k$
- The cluster assignments $z_i$ for each task
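A toy simulation of this generative process (plain-Python stand-ins for the GP draws; all sizes and values are made up) might look like:

```python
import random

random.seed(0)
k = 3           # number of clusters (made up)
num_tasks = 5   # number of tasks (made up)
n_points = 4    # observations per task (made up)

# Hypothetical latent functions, one per cluster (stand-ins for GP draws)
f = [[random.gauss(0, 1) for _ in range(n_points)] for _ in range(k)]

tasks = []
for i in range(num_tasks):
    z_i = random.randrange(k)  # uniform cluster assignment, p(z_i) = 1/k
    y_i = [v + random.gauss(0, 0.1) for v in f[z_i]]  # Gaussian noise around f_{z_i}
    tasks.append((z_i, y_i))

print(len(tasks), len(tasks[0][1]))  # 5 4
```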
```
import math
import time  # used below to build a unique name_prefix
import torch
import pyro
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
```
## Adding additional latent variables to the likelihood
The standard GPyTorch variational objects will take care of inferring the latent functions $f_1 \ldots f_k$. However, we do need to add the additional latent variables $z_i$ to the models. We will do so by creating a custom likelihood that models:
$$
\sum_{z_i=1}^k p(\mathbf y_i \mid \mathbf f (\mathbf x_i), z_i) \: p(z_i)
$$
GPyTorch's likelihoods are capable of modeling additional latent variables. Our custom likelihood needs to define the following three functions:
- `pyro_model` (needs to call through to `super().pyro_model` at the end), which defines the prior distribution for additional latent variables
- `pyro_guide` (needs to call through to `super().pyro_guide` at the end), which defines the variational (guide) distribution for additional latent variables
- `forward`, which defines the observation distributions conditioned on $\mathbf f (\mathbf x_i)$ and any additional latent variables.
### The pyro_model function
For each task, we will model the cluster assignment with a `OneHotCategorical` variable, where each cluster has equal probability. The `pyro_model` function will make a `pyro.sample` call to this prior distribution and then call the super method:
```python
# self.prior_cluster_logits = torch.zeros(num_tasks, num_clusters)
def pyro_model(self, function_dist, target):
cluster_assignment_samples = pyro.sample(
self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP
pyro.distributions.OneHotCategorical(logits=self.prior_cluster_logits).to_event(1)
)
return super().pyro_model(
function_dist,
target,
cluster_assignment_samples=cluster_assignment_samples
)
```
Note that we are adding an additional argument `cluster_assignment_samples` to the `super().pyro_model` call. This will pass the cluster assignment samples to the `forward` call, which is necessary for inference.
### The pyro_guide function
For each task, the variational (guide) distribution will also be a `OneHotCategorical` variable, which will be defined by the parameter `self.variational_cluster_logits`. The `pyro_guide` function will make a `pyro.sample` call to this distribution and then call the super method:
```python
def pyro_guide(self, function_dist, target):
pyro.sample(
self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP
pyro.distributions.OneHotCategorical(logits=self.variational_cluster_logits).to_event(1)
)
return super().pyro_guide(function_dist, target)
```
Note that, unlike `pyro_model`, the guide does not pass any additional arguments to the super call; Pyro matches the sampled cluster assignments between model and guide by their site name.
### The forward function
The `pyro_model` function passes the additional keyword argument `cluster_assignment_samples` to the `forward` call. Therefore, our forward method will define the conditional probability $p(\mathbf y_i \mid \mathbf f(\mathbf x), z_i)$, where $\mathbf f(\mathbf x)$ corresponds to the variable `function_samples` and $z_i$ corresponds to the variable `cluster_assignment_samples`.
In our example $p(\mathbf y_i \mid \mathbf f(\mathbf x), z_i)$ corresponds to a Gaussian noise model.
```python
# self.raw_noise is the Gaussian noise parameter
# function_samples is `n x k`
# cluster_assignment_samples is `t x k`, where `t` is the number of tasks
def forward(self, function_samples, cluster_assignment_samples):
return pyro.distributions.Normal(
loc=(function_samples.unsqueeze(-2) * cluster_assignment_samples).sum(-1),
scale=torch.nn.functional.softplus(self.raw_noise).sqrt()
).to_event(1)
# The to_event call is necessary because we are returning a multitask distribution,
# where each task dimension corresponds to each of the `t` tasks
```
This is all we need for inference! However, if we want to use this model to make predictions, the `cluster_assignment_samples` keyword argument will not be passed into the function. Therefore, we need to make sure that `forward` can handle both inference and predictions:
```python
def forward(self, function_samples, cluster_assignment_samples=None):
if cluster_assignment_samples is None:
# We'll get here at prediction time
# We'll use the variational distribution when making predictions
cluster_assignment_samples = pyro.sample(
self.name_prefix + ".cluster_logits", self._cluster_dist(self.variational_cluster_logits)
)
return pyro.distributions.Normal(
loc=(function_samples.unsqueeze(-2) * cluster_assignment_samples).sum(-1),
scale=torch.nn.functional.softplus(self.raw_noise).sqrt()
).to_event(1)
```
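The one-hot mixing `(function_samples.unsqueeze(-2) * cluster_assignment_samples).sum(-1)` can be sketched without torch — each task's one-hot row simply selects one cluster's column of function values (toy numbers):

```python
# function_samples: n x k (n points, k clusters); each task's one-hot row
# selects one cluster's column, giving an n x t matrix of observation means.
function_samples = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]  # n=3, k=2
cluster_one_hot = [[1, 0], [0, 1], [0, 1]]                  # t=3 tasks

loc = [[sum(f * z for f, z in zip(row, task)) for task in cluster_one_hot]
       for row in function_samples]
print(loc)  # [[1.0, 10.0, 10.0], [2.0, 20.0, 20.0], [3.0, 30.0, 30.0]]
```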
```
class ClusterGaussianLikelihood(gpytorch.likelihoods.Likelihood):
def __init__(self, num_tasks, num_clusters):
super().__init__()
# These are parameters/buffers for the cluster assignment latent variables
self.register_buffer("prior_cluster_logits", torch.zeros(num_tasks, num_clusters))
self.register_parameter("variational_cluster_logits", torch.nn.Parameter(torch.randn(num_tasks, num_clusters)))
# The Gaussian observational noise
self.register_parameter("raw_noise", torch.nn.Parameter(torch.tensor(0.0)))
# Other info
self.num_tasks = num_tasks
self.num_clusters = num_clusters
self.max_plate_nesting = 1
def pyro_guide(self, function_dist, target):
# Here we add the extra variational distribution for the cluster latent variable
pyro.sample(
self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP
pyro.distributions.OneHotCategorical(logits=self.variational_cluster_logits).to_event(1)
)
return super().pyro_guide(function_dist, target)
def pyro_model(self, function_dist, target):
# Here we add the extra prior distribution for the cluster latent variable
cluster_assignment_samples = pyro.sample(
self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP
pyro.distributions.OneHotCategorical(logits=self.prior_cluster_logits).to_event(1)
)
return super().pyro_model(function_dist, target, cluster_assignment_samples=cluster_assignment_samples)
def forward(self, function_samples, cluster_assignment_samples=None):
# For inference, cluster_assignment_samples will be passed in
# This bit of code is for when we use the likelihood in the predictive mode
if cluster_assignment_samples is None:
cluster_assignment_samples = pyro.sample(
self.name_prefix + ".cluster_logits", self._cluster_dist(self.variational_cluster_logits)
)
# Now we return the observational distribution, based on the function_samples and cluster_assignment_samples
res = pyro.distributions.Normal(
loc=(function_samples.unsqueeze(-2) * cluster_assignment_samples).sum(-1),
scale=torch.nn.functional.softplus(self.raw_noise).sqrt()
).to_event(1)
return res
```
## Constructing the PyroGP model
The PyroGP model is essentially the same as the model we used in the simple example, except for two changes
- We now will use our more complicated `ClusterGaussianLikelihood`
- The latent function should be vector valued to correspond to the $k$ latent functions. As a result, we will learn a batched variational distribution, and use an `IndependentMultitaskVariationalStrategy` to convert the batched variational distribution into a `MultitaskMultivariateNormal` distribution.
```
class ClusterMultitaskGPModel(gpytorch.models.pyro.PyroGP):
def __init__(self, train_x, train_y, num_functions=2, reparam=False):
num_data = train_y.size(-2)
# Define all the variational stuff
inducing_points = torch.linspace(0, 1, 64).unsqueeze(-1)
variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
num_inducing_points=inducing_points.size(-2),
batch_shape=torch.Size([num_functions])
)
# Here we're using a IndependentMultitaskVariationalStrategy - so that the output of the
# GP latent function is a MultitaskMultivariateNormal
variational_strategy = gpytorch.variational.IndependentMultitaskVariationalStrategy(
gpytorch.variational.VariationalStrategy(self, inducing_points, variational_distribution),
num_tasks=num_functions,
)
# Standard initialization
likelihood = ClusterGaussianLikelihood(train_y.size(-1), num_functions)
super().__init__(variational_strategy, likelihood, num_data=num_data, name_prefix=str(time.time()))
self.likelihood = likelihood
self.num_functions = num_functions
# Mean, covar
self.mean_module = gpytorch.means.ZeroMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
res = gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
return res
```
This model can now be used to perform inference on cluster assignments, as well as make predictions using the inferred cluster assignments!
| github_jupyter |
```
import pandas as pd
from sklearn.model_selection import train_test_split, cross_validate, StratifiedKFold, cross_val_predict
from sklearn.neural_network import MLPClassifier
from sklearn.dummy import DummyClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import PrecisionRecallDisplay
from sklearn.feature_extraction.text import CountVectorizer
# from label_studio_converter import Converter
# c = Converter('../data/labeling_output/abbreviations/project-4-at-2021-04-17-22-20-5dc81c5e.json')
data = pd.read_json('../data/labeling_output/abbreviations/project-4-at-2021-04-17-22-20-5dc81c5e.json')
data['label'] = data['annotations'].apply(lambda x: x[0]['result'][0]['value']['choices'][0]).map({'Correct':1,'Incorrect':0})
data['appears_in_table'] = data['data'].apply(lambda x: x['appears_in_table'])
for k in data['data'][0].keys():
data[k] = data['data'].apply(lambda x: x[k])
data = data.drop_duplicates(subset=['abrv_text','abrv_long_form'])
data['label'].sum()/len(data['label'])
in_table_data = data[data.appears_in_table]
in_table_data['label'].sum()/len(in_table_data['label'])
from sklearn.ensemble import RandomForestClassifier
import spacy
import scispacy
nlp = spacy.load('en_core_sci_lg')
import numpy as np
def featurize_scispacy(data):
abrv_vectors = np.array([nlp(x).vector for x in data['abrv_text']])
long_form_vectors = np.array([nlp(x).vector for x in data['abrv_long_form']])
X = np.concatenate([abrv_vectors, long_form_vectors],axis=1)
y = data['label']
return X, y
def featurize_character_distribution(data):
    # Fit a separate vectorizer per column; re-using one vectorizer with a
    # second fit_transform would overwrite the vocabulary learned first
    vec_abrv = CountVectorizer(analyzer='char')
    vec_long = CountVectorizer(analyzer='char')
    abrv_vectors = vec_abrv.fit_transform(data['abrv_text'])
    long_form_vectors = vec_long.fit_transform(data['abrv_long_form'])
    X = np.concatenate([abrv_vectors.toarray(), long_form_vectors.toarray()], axis=1)
    y = data['label']
    return X, y
X_scispacy, y = featurize_scispacy(data)
X_chardist, y = featurize_character_distribution(data)
X = np.concatenate([X_scispacy, X_chardist],axis=1)
def split_and_train(X, y, filter_features=False, heldout_test=False):
if heldout_test:
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=42, test_size=0.3
)
else:
X_train = X
y_train = y
if filter_features:
correlation = X_train.corrwith(pd.Series(y_train))
good_features = correlation.sort_values(ascending=False).head(1000).index
X_train = X_train[good_features]
X_test = X_test[good_features]
rfc = RandomForestClassifier(max_depth=20)
mlp = MLPClassifier(
hidden_layer_sizes=(
300,
150,
50,
),
max_iter=2000,
early_stopping=False,
n_iter_no_change=500,
random_state=42,
)
dummy = DummyClassifier(strategy="most_frequent")
models = [dummy, rfc, mlp]
model_scores = {}
confusion_matrices = {}
pr_curves = {}
for model in models:
scores = cross_validate(
model,
X_train,
y_train,
scoring=['accuracy','precision','recall','f1','roc_auc','average_precision'],
cv=StratifiedKFold(5),
return_train_score=True,
)
y_pred = cross_val_predict(model, X_train, y_train, cv=StratifiedKFold(5,),)
y_pred_proba = cross_val_predict(model, X_train, y_train, cv=StratifiedKFold(5, ), method='predict_proba',)
pr_curve = precision_recall_curve(y_train, y_pred_proba[:,1])
cm = confusion_matrix(y_train, y_pred)
confusion_matrices[model.__class__.__name__] = cm
pr_curves[model.__class__.__name__] = pr_curve
model_scores[model.__class__.__name__] = scores
return model_scores, confusion_matrices, pr_curves
```
# Combined features
```
model_scores, confusion_matrices, pr_curves = split_and_train(X,y)
results_df = []
for model, scores in model_scores.items():
x = pd.DataFrame(scores).mean()
results_df.append(x)
results_df = pd.DataFrame(results_df,index=model_scores.keys())
results_df[[col for col in results_df if 'test' in col]]
```
The PR curves look a bit odd — possibly because they are computed from predictions pooled across all folds by `cross_val_predict`, rather than per fold.
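The threshold sweep that `precision_recall_curve` performs can be sketched at a single threshold with made-up scores and labels:

```python
# Pure-Python precision/recall at one decision threshold (illustrative data)
y_true = [1, 0, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]

def pr_at_threshold(y_true, scores, t):
    preds = [1 if s >= t else 0 for s in scores]
    tp = sum(1 for p, y in zip(preds, y_true) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, y_true) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, y_true) if p == 0 and y == 1)
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

precision, recall = pr_at_threshold(y_true, scores, 0.5)
print(round(precision, 3), round(recall, 3))  # 0.667 0.667
```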
```
precision, recall, thresholds = pr_curves['MLPClassifier']
disp = PrecisionRecallDisplay(precision,recall)
disp.plot()
precision, recall, thresholds = pr_curves['RandomForestClassifier']
disp = PrecisionRecallDisplay(precision,recall)
disp.plot()
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
"""
Modified from:
Hands-On Machine learning with Scikit-Learn
and TensorFlow; p.89
"""
plt.figure(figsize=(8, 8))
plt.title("Precision and Recall Scores as a function of the decision threshold")
plt.plot(thresholds, precisions[:-1], "b--", label="Precision")
plt.plot(thresholds, recalls[:-1], "g-", label="Recall")
plt.ylabel("Score")
plt.xlabel("Decision Threshold")
plt.legend(loc='best')
import matplotlib.pyplot as plt
plot_precision_recall_vs_threshold(precision, recall, thresholds)
```
# SciSpacy only
Scores drop by a few percent compared with the combined (SciSpacy + chardist) features.
```
model_scores, confusion_matrices, pr_curves = split_and_train(X_scispacy,y)
results_df = []
for model, scores in model_scores.items():
x = pd.DataFrame(scores).mean()
results_df.append(x)
results_df = pd.DataFrame(results_df,index=model_scores.keys())
results_df[[col for col in results_df if 'test' in col]]
```
# CharDist only
Performance is almost the same, and the random forest even does better — which is suspicious. It is worth checking whether fitting the `CountVectorizer` on the full dataset before cross-validation leaks information across folds.
```
model_scores, confusion_matrices, pr_curves = split_and_train(X_chardist,y)
results_df = []
for model, scores in model_scores.items():
x = pd.DataFrame(scores).mean()
results_df.append(x)
results_df = pd.DataFrame(results_df,index=model_scores.keys())
results_df[[col for col in results_df if 'test' in col]]
precision, recall, thresholds = pr_curves['RandomForestClassifier']
disp = PrecisionRecallDisplay(precision,recall)
disp.plot()
precision, recall, thresholds = pr_curves['MLPClassifier']
disp = PrecisionRecallDisplay(precision,recall)
disp.plot()
```
| github_jupyter |
```
import numpy as np
import libpysal as ps
from stwr.gwr import GWR, MGWR,STWR
from stwr.sel_bw import *
from stwr.utils import shift_colormap, truncate_colormap
import geopandas as gp
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import pyplot
import pandas as pd
import math
from matplotlib.gridspec import GridSpec
import time
import csv
import copy
import rasterio
import rasterio.plot
import rasterio.features
import rasterio.warp
import pyproj
#Read Data.
#list of coordinates
cal_coords_list =[]
#list of y
cal_y_list =[]
#list of X
cal_X_list =[]
#time interval list
delt_stwr_intervel =[0.0]
csvFile = open("../Data_STWR/RealWorldData/precip_isotope_D3.csv", "r")
df = pd.read_csv(csvFile,header = 0,names=['Longitude','Latitude','Elevation','ppt','tmean','d2h','timestamp'],
dtype = {"Longitude" : "float64","Latitude":"float64",
"Elevation":"float64","ppt":"float64","tmean":"float64","d2h":"float64",
"timestamp":"float64"},
skip_blank_lines = True,
keep_default_na = False)
df.info()
#Sort the records by observation time.
df = df.sort_values(by=['timestamp'])
all_data = df.values
tick_time = all_data[0,-1]
cal_coord_tick = []
cal_X_tick =[]
cal_y_tick =[]
#If the time interval is less than "time_tol", the observation time is considered to be the same.
time_tol = 1.0e-7
# all records
lensdata = len(all_data) # number of rows, or number of observations.
for row in range(lensdata): # traverse all observation data
cur_time = all_data[row,-1] # get current timestamp
# if the timestamp changes,
if(abs(cur_time-tick_time)>time_tol):
cal_coords_list.append(np.asarray(cal_coord_tick))
cal_X_list.append(np.asarray(cal_X_tick))
cal_y_list.append(np.asarray(cal_y_tick))
delt_t = cur_time - tick_time
delt_stwr_intervel.append(delt_t)
tick_time =cur_time
cal_coord_tick = []
cal_X_tick =[]
cal_y_tick =[]
coords_tick = np.array([all_data[row,0],all_data[row,1]]) ## find the coords in your data
cal_coord_tick.append(coords_tick)
x_tick = np.array([all_data[row,2],all_data[row,3],all_data[row,4]]) ## this is to get all your X, here we have X1, X2, X3.
cal_X_tick.append(x_tick)
y_tick = np.array([all_data[row,5]]) ## find the y in your data
cal_y_tick.append(y_tick)
#GWR only processes the last observation data point.
cal_cord_gwr = np.asarray(cal_coord_tick)
cal_X_gwr = np.asarray(cal_X_tick)
cal_y_gwr = np.asarray(cal_y_tick)
cal_coords_list.append(np.asarray(cal_coord_tick))
cal_X_list.append(np.asarray(cal_X_tick))
cal_y_list.append(np.asarray(cal_y_tick))
#spherical is the parameter to set whether it is spherical coordinates or Euclidean coordinates
stwr_selector_ = Sel_Spt_BW(cal_coords_list, cal_y_list, cal_X_list,#gwr_bw0,
delt_stwr_intervel,spherical = True)
#Search for optimal bandwidth.
optalpha,optsita,opt_btticks,opt_gwr_bw0 = stwr_selector_.search()
#Build the STWR model.
stwr_model = STWR(cal_coords_list,cal_y_list,cal_X_list,delt_stwr_intervel,optsita,opt_gwr_bw0,tick_nums=opt_btticks+1,alpha =optalpha,spherical = True,recorded=1)
#Fit the STWR model
stwr_results = stwr_model.fit()
stwr_results.summary()
stwr_scale = stwr_results.scale
stwr_residuals = stwr_results.resid_response
#GWR only processes the data points observed at the latest time stage.
gwr_selector = Sel_BW(cal_cord_gwr, cal_y_gwr, cal_X_gwr,spherical = True)
#Search the bandwidth
gwr_bw= gwr_selector.search(bw_min=2)
#build the GWR model
gwr_model = GWR(cal_cord_gwr, cal_y_gwr, cal_X_gwr, gwr_bw,spherical = True)
gwr_results = gwr_model.fit()
gwr_results.summary()
gw_rscale = gwr_results.scale
gwr_residuals = gwr_results.resid_response
#Prediction
#list of coordintates need to be predict.
Pred_Coords_list =[]
#list of X values of the coordinates.
X_pre_list = []
theight1 = rasterio.open('../Data_STWR/RealWorldData/extgmted1.tif') ## elevation
bheight1 = theight1.read(1)
ppt1 = rasterio.open('../Data_STWR/RealWorldData/extppt1.tif') ## precipitation
bppt1 = ppt1.read(1)
mean1 = rasterio.open('../Data_STWR/RealWorldData/extmean1.tif') ## Tmeans
bmean1 = mean1.read(1)
# These three TIF should have the same profile
ppt1.profile
# These three TIF should have the same profile
ppt1.profile == mean1.profile
#record the profile of the "*.tif" file, including the information about the transform and "nodata".
pf = ppt1.profile
transform =ppt1.profile['transform']
nodata = pf['nodata']
# Z is the predicted y surface of STWR model
Z = bppt1.copy()
#Z = Z.astype(np.float64)
#Z2 is the predicted y surface of GWR model
Z2 = bppt1.copy()
#Z2 = Z2.astype(np.float64)
mask = ppt1.dataset_mask()
for row in range(mask.shape[0]):
    for col in range(mask.shape[1]):
        if mask[row, col] > 0:
            X_tick = np.array([bheight1[row, col], bppt1[row, col], bmean1[row, col]])
            X_pre_list.append(X_tick)  # X used for prediction
            Pred_Coords_list.append(ppt1.xy(row, col))  # coords for prediction
X_pre_arr = np.asarray(X_pre_list)
alllen_stwr = len(Pred_Coords_list)
allklen_stwr = X_pre_arr.shape[1]+1
rec_parmas_stwr = np.ones((alllen_stwr,allklen_stwr))
calen_stwr = len(cal_y_list[-1])
prelen_stwr = X_pre_arr.shape[0]
#list of y predicted by the STWR model
Pre_y_list = np.ones_like(X_pre_arr[:,1])
#list of y predicted by the GWR model
Pre_gwr_y_list = Pre_y_list.copy()
stwr_pre_parmas = np.ones((prelen_stwr,allklen_stwr))
#If the number of points to be predicted exceeds the number used for building the model (STWR or GWR),
# the prediction needs to be split into several parts.
if (calen_stwr>=prelen_stwr):
predPointList = Pred_Coords_list
PreX_list = X_pre_arr
#Predicted result of STWR
pred_stwr_dir_result = stwr_model.predict(predPointList,PreX_list,stwr_scale,stwr_residuals)
pre_y_stwr = pred_stwr_dir_result.predictions
#Predicted result of GWR
pred_gwr_dir_result = gwr_model.predict(predPointList,PreX_list,gw_rscale,gwr_residuals)
pre_y_gwr = pred_gwr_dir_result.predictions
#gwr
else:
spl_parts_stwr = math.ceil(prelen_stwr*1.0/calen_stwr)
spl_X_stwr = np.array_split(X_pre_arr, spl_parts_stwr, axis = 0)
spl_coords_stwr = np.array_split(Pred_Coords_list, spl_parts_stwr, axis = 0)
pred_stwr_result = np.array_split(Pre_y_list, spl_parts_stwr, axis = 0)
#uncomment if you want to predict the coefficient surfaces by STWR model.
# pred_stwrparmas_result = np.array_split(stwr_pre_parmas, spl_parts_stwr, axis = 0)
#uncomment if you want to predict the coefficient surfaces by STWR model.
#Split the y to be predicted into several parts for GWR prediction
pred_gwr_result = np.array_split(Pre_gwr_y_list, spl_parts_stwr, axis = 0)
for j in range(spl_parts_stwr):
predPointList_tick = [spl_coords_stwr[j]]
PreX_list_tick = [spl_X_stwr[j]]
pred_stwr_spl_result = stwr_model.predict(predPointList_tick,PreX_list_tick,stwr_scale,stwr_residuals)
pred_stwr_result[j] =pred_stwr_spl_result.predictions
#uncomment if you want to predict the coefficient surfaces by STWR model.
# pred_stwrparmas_result[j] =np.reshape(pred_stwr_spl_result.params.flatten(),(-1,allklen_stwr))
#uncomment if you want to predict the coefficient surfaces by STWR model.
#GWR
pred_gwr_spl_result = gwr_model.predict(spl_coords_stwr[j],spl_X_stwr[j],gw_rscale,gwr_residuals)
pred_gwr_result[j] =pred_gwr_spl_result.predictions
#GWR
pre_y_stwr = pred_stwr_result[0]
# pre_parmas_stwr = pred_stwrparmas_result[0]
combnum = spl_parts_stwr-1
#gwr
pre_y_gwr=pred_gwr_result[0]
#gwr
for s in range(combnum):
pre_y_stwr = np.vstack((pre_y_stwr,pred_stwr_result[s+1]))
# pre_parmas_stwr = np.vstack((pre_parmas_stwr,pred_stwrparmas_result[s+1]))
#gwr
pre_y_gwr = np.vstack((pre_y_gwr,pred_gwr_result[s+1]))
#gwr
idx = 0
mask_ppt = ppt1.dataset_mask()
for row in range(mask_ppt.shape[0]):
    for col in range(mask_ppt.shape[1]):
        if mask_ppt[row, col] > 0:
            # Predicted y surface by STWR
            Z[row, col] = pre_y_stwr[idx]
            # Predicted y surface by GWR
            Z2[row, col] = pre_y_gwr[idx]
            idx = idx + 1
#Output the predicted y surface by STWR.
with rasterio.open('../Data_STWR/RealWorldData/output/Rst3_stwr_nd_newt.tif', 'w', driver='GTiff',
height=Z.shape[0],
width=Z.shape[1], count=1, dtype=Z.dtype,
crs='+proj=latlong', transform=transform,nodata = nodata) as dststwr:
dststwr.write(Z, 1)
#Output the predicted y surface by GWR.
with rasterio.open('../Data_STWR/RealWorldData/output/Rst3_gwr_nd_newt.tif', 'w', driver='GTiff',
height=Z2.shape[0],
width=Z2.shape[1], count=1, dtype=Z2.dtype,
crs='+proj=latlong', transform=transform,nodata = nodata) as dstgwr:
dstgwr.write(Z2, 1)
pyplot.title("Predicted δ2H Surface of STWR")
pyplot.imshow(Z,cmap='binary',vmin=-238.478, vmax=18.4553)
pyplot.show()
pyplot.title("Predicted δ2H Surface of GWR")
pyplot.imshow(Z2, cmap='binary',vmin=-238.478, vmax=18.4553)
pyplot.show()
```
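The timestamp-grouping loop above (accumulate rows until the timestamp changes, then flush a group and record the time interval, with a leading 0.0 as in `delt_stwr_intervel`) can be sketched in pure Python with made-up observations:

```
# Made-up observations: (lon, lat, elev, d2h, timestamp), already sorted by time.
rows = [
    (0.0, 0.0, 10.0, -5.0, 1.0),
    (1.0, 1.0, 20.0, -6.0, 1.0),
    (2.0, 2.0, 30.0, -7.0, 2.5),
    (3.0, 3.0, 40.0, -8.0, 2.5),
    (4.0, 4.0, 50.0, -9.0, 2.5),
]

# Collect rows that share an observation time into one group per timestamp.
groups = {}
for lon, lat, elev, d2h, t in rows:
    groups.setdefault(t, []).append((lon, lat, elev, d2h))

times = sorted(groups)
# First interval is 0.0 by convention, matching delt_stwr_intervel above.
intervals = [0.0] + [b - a for a, b in zip(times, times[1:])]

print(len(groups), intervals)  # 2 [0.0, 1.5]
```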
| github_jupyter |
```
# Before scraping a site, check its robots.txt (e.g. linkedin.com/robots.txt)
# by appending /robots.txt to any website's domain to see its scraping rules
from bs4 import BeautifulSoup
# beautiful soup 4 for web scraping
# import lxml
with open("basic_+_class_selector_vs_tag_+_web.html", encoding="utf8") as file:
contents = file.read()
soup = BeautifulSoup(contents, "html.parser")
# html.parser or lxml may not work in some websites
# print(soup.title)
# print(soup.title.name)
# print(soup.title.string)
# print(soup)
print(soup.prettify())
"""
Find the info that we look for using Beautiful Soup
"""
all_anchor_tags = soup.find_all(name="link")
print(all_anchor_tags)
for tag in all_anchor_tags:
print(tag.get("href"))
print(tag.getText())
heading = soup.find(name="h2")
print(heading)
section_heading = soup.find(name="div", class_="bacon")
print(section_heading)
```
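Following up on the robots.txt note at the top of the cell above, the standard library can check those rules programmatically with `urllib.robotparser`; the robots.txt body below is made up for illustration rather than fetched from a real site:

```
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse a sample robots.txt body directly instead of fetching over the network.
rp.parse("""User-agent: *
Disallow: /private/
""".splitlines())

print(rp.can_fetch("*", "https://example.com/public/page"))   # True
print(rp.can_fetch("*", "https://example.com/private/page"))  # False
```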
## Scrape data from a live website
- Get the most popular topic from https://news.ycombinator.com/
- Return the link of the most popular topic and its website
- Can be combined with multiple sources to populate a blog post
```
import requests
from bs4 import BeautifulSoup
# Get response from the website
response = requests.get("https://news.ycombinator.com/")
# Store the whole html to the page
yc_web_page = response.text
# Create the soup to access
soup = BeautifulSoup(yc_web_page, "html.parser")
# find all anchor tags with class = titlelink
articles = soup.find_all(name="a", class_="titlelink")
article_texts = []
article_links = []
# get the text inside the article_tag
for article_tag in articles:
article_texts.append(article_tag.getText())
article_links.append(article_tag.get("href"))
article_upvotes = [int(score.getText().split()[0]) for score in soup.find_all(name="span", class_="score")]
# print(article_upvotes)
# Find the index of the highest upvote count using max()
largest_index = article_upvotes.index(max(article_upvotes))
print(article_texts[largest_index], article_links[largest_index])
```
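One caveat about the cell above: `article_texts` and `article_upvotes` come from two separate `find_all` calls, so if any story lacks a score the indices can misalign. The max-index pattern itself, plus a pairing-based alternative, on made-up data:

```
article_texts = ["Post A", "Post B", "Post C"]
article_upvotes = [12, 87, 45]

# Pattern used above: index of the maximum score.
largest_index = article_upvotes.index(max(article_upvotes))
print(article_texts[largest_index])  # Post B

# Alternative: pair each title with its score first, then take the max pair.
best_title, best_score = max(zip(article_texts, article_upvotes), key=lambda p: p[1])
print(best_title, best_score)  # Post B 87
```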
## Scrape data and write it to a text file
- Example of web scraping and writing the results to a text file
- Data is written to the current folder
```
import requests
from bs4 import BeautifulSoup
URL="https://web.archive.org/web/20200518073855/https://www.empireonline.com/movies/features/best-movies-2/"
response = requests.get(URL)
web_site_html = response.text
soup = BeautifulSoup(web_site_html, "html.parser")
all_movies = soup.find_all(name="h3", class_="title")
# print(all_movies)
movie_titles = [movie.getText() for movie in all_movies]
movies = movie_titles[::-1]
with open("movies.txt", mode="w") as file:
for movie in movies:
file.write(f"{movie}\n")
```
| github_jupyter |
# PyQtGraph
## Fast Online Plotting in Python
---------------------------------------------
"PyQtGraph is a pure-python graphics and GUI library built on PyQt4 / PySide and numpy. It is intended for use in mathematics / scientific / engineering applications. Despite being written entirely in python, the library is very fast due to its heavy leverage of numpy for number crunching and Qt's GraphicsView framework for fast display." - http://www.pyqtgraph.org/
## PyQtGraph or Matplotlib?
If you just need to make neat publication-quality plots/figures, then Matplotlib should be your first choice. However, if you are interested in making fast plot updates (> 50 updates per sec), then PyQtGraph is probably the best library to use.
### Prerequisites for this notebook:
* Numpy
* (optional) Basics of PyQt
This notebook covers a few basic features of the library that are sufficient to get you started.
The main topics covered here are:
* Animate data stored in numpy arrays (~ a video).
* How to style your plots.
* How to setup a grid layout.
Refer to the examples provided in the package to learn different features of PyQtGraph. These examples can be accessed via a GUI by running the following in a python shell:
```
import pyqtgraph.examples
pyqtgraph.examples.run()
```
## Animate Numpy Arrays
```
import pyqtgraph as pg # pg is often used as the shorthand notation
from pyqtgraph.Qt import QtCore # import QtCore from the Qt library
```
pyqtgraph.Qt links to the PyQt library. We wish to use the timer() function of PyQt in our example. The timer function can be used if you want something to happen “in a while” or “every once in a while”.
```
app = pg.QtGui.QApplication([]) # init QApplication
```
Here, app refers to an instance of the Qt's QApplication class.
QApplication manages the GUI-application's control flow, where all events from the window system and other sources are processed and dispatched. There can be only one QApplication object; it is shared by all the plots you create.
```
x = np.random.rand(500,50,50) # create a random numpy array to display - 500 images of size 50x50
pg.setConfigOptions(antialias=True) # enable antialiasing
view = pg.GraphicsView() # create a main graphics window
view.show() # show the window
```
When displaying images at a different resolution, setting antialias to True makes the graphics appear smooth without any artifacts. Antialiasing minimizes aliasing when representing a high-resolution image at a lower resolution. Other useful config options are 'background' and 'foreground' colors.
GraphicsView generates a main graphics window. The default size is (640,480). You can change this to the size of your choice by using the resize function, e.g., view.resize(50,50).
```
p = pg.PlotItem() # add a plotItem
view.setCentralItem(p) # add the plotItem to the graphicsWindow and set it as central
```
For a given graphics window, you can create multiple plots. Here, we created a single plot item and added it to the graphics window.
```
img = pg.ImageItem(border='w', levels=(x.min(),x.max())) # create an image object
p.addItem(img) # add the imageItem to the plotItem
```
Within each plot, you can define multiple drawing items (or artists). Here, we added an image item. Examples of other items are: PlotCurveItem, ArrowItem, etc.
```
# hide axis and set title
p.hideAxis('left'); p.hideAxis('bottom'); p.hideAxis('top'); p.hideAxis('right')
p.setTitle('Array Animation', size='25px', color='y')
# data update function
cnt=0
def animLoop():
global cnt
if cnt < x.shape[0]:
img.setImage(x[cnt])
cnt+=1
```
Here, we create a function to update the image item with new data. To this end, we use a counter to iterate over each image stored within x.
```
# setup and start the timer
timer = QtCore.QTimer()
timer.timeout.connect(animLoop)
timer.start(0)
```
The timer function is used to repeatedly call the animLoop with a delay of 0 between each call.
```
app.exec_() # execute the app
```
Finally, you need to execute the QApplication. Any PyQtGraph code must be wrapped between the app initialization and the app execution. Here is the code all put together (execute and check):
```
# Animate a 3D numpy array
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore
app = pg.QtGui.QApplication([])
x = np.random.rand(500,50,50)
pg.setConfigOptions(antialias=True)
# main graphics window
view = pg.GraphicsView()
# show the window
view.show()
# add a plotItem
p = pg.PlotItem()
# add the plotItem to the graphicsWindow and set it as central
view.setCentralItem(p)
# create an image object
img = pg.ImageItem(border='w', levels=(x.min(),x.max()))
# add the imageItem to the plotItem
p.addItem(img)
# hide axis and set title
p.hideAxis('left'); p.hideAxis('bottom'); p.hideAxis('top'); p.hideAxis('right')
p.setTitle('Array Animation', size='25px', color='y')
# data generator
cnt=0
def animLoop():
global cnt
if cnt < x.shape[0]:
img.setImage(x[cnt])
cnt+=1
timer = QtCore.QTimer()
timer.timeout.connect(animLoop)
timer.start(0)
app.exec_()
```
## Exercise 1
* Animate an RGB array.
* Animate a 2D array (sequence of line plots). Use pg.PlotCurveItem instead of pg.ImageItem and setData instead of setImage to update the data.
# Styling Plots
PyQtGraph provides a function called mkPen(args) to create a drawing pen; passing the result as the pen argument (pen=pg.mkPen(...)) styles a plot item when you define it. A few examples of defining mkPen are:
* pg.mkPen('y', width=3, style=QtCore.Qt.DashLine) # Make a dashed yellow line 3px wide
* pg.mkPen(0.5) # Solid gray line 1px wide
* pg.mkPen(color=(200,200,255), style=QtCore.Qt.DotLine) # Dotted pale-blue line
## Exercise 2
Repeat Exercise 1 with a yellow dashed line plot animation.
# Plots Grid Layout
You can create a grid layout for your plots using the GraphicsLayout function. The layout can then be used as a placeholder for all your plots within the main graphics window. Here is an example with two plots placed next to each other beneath a wide text block:
```
# imports
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore
# init qApp
app = pg.QtGui.QApplication([])
# setup the main window
view = pg.GraphicsView()
view.resize(900,500)
view.setWindowTitle('Notebook')
view.show()
# main layout
layout = pg.GraphicsLayout(border='r') # with a red bordercolor
# set the layout as a central item
view.setCentralItem(layout)
# create a text block
label = pg.LabelItem('PyQtGraph Grid Layout Example', size='25px', color='y')
# create a plot with two random curves
p1 = pg.PlotItem()
curve11 = pg.PlotCurveItem(pen=pg.mkPen(color='g', width=1))
curve12 = pg.PlotCurveItem(pen=pg.mkPen(color='b', width=1, style=QtCore.Qt.DashLine))
p1.addItem(curve11); p1.addItem(curve12)
curve11.setData(np.random.rand(100))
curve12.setData(np.random.rand(100))
# create another plot with two random curves
p2 = pg.PlotItem()
curve21 = pg.PlotCurveItem(pen=pg.mkPen(color='w', width=1, style=QtCore.Qt.DotLine))
curve22 = pg.PlotCurveItem(pen=pg.mkPen(color='c', width=1, style=QtCore.Qt.DashLine))
p2.addItem(curve21); p2.addItem(curve22)
curve21.setData(np.random.rand(100))
curve22.setData(np.random.rand(100))
# Finally organize the layout
layout.addItem(label, row=0, col=0, colspan=2)
layout.addItem(p1, row=1, col=0)
layout.addItem(p2, row=1, col=1)
app.exec_()
```
The above example also shows how to draw multiple curves within the same plot.
## Exercise 3
* Create a grid layout like the example above and animate one of the curves in the left plot.
* Animate both curves within the left plot.
# Summary
In this notebook, we have covered the basics of the PyQtGraph library to make fast animations in Python. We suggest you next have a look at the main documentation of the library and also the examples provided within the library. Enjoy animating plots!
| github_jupyter |
<a id="title_ID"></a>
# Using Kepler Data to Plot a Light Curve
<br>This notebook tutorial demonstrates the process of loading and extracting information from Kepler light curve FITS files to plot a light curve and display the photometric aperture.
<img style="float: right;" src="./light_curve_tres2.png" alt="light_curve_tres2" width="800px"/>
### Table of Contents
<div style="text-align: left"> <br> [Introduction](#intro_ID) <br> [Imports](#imports_ID) <br> [Getting the Data](#data_ID) <br> [Reading FITS Extensions](#header_ID) <br> [Plotting a Light Curve](#lightcurve_ID) <br> [The Aperture Extension](#aperture_ID) <br> [Additional Resources](#resources_ID) <br> [About this Notebook](#about_ID) </div>
***
<a id="intro_ID"></a>
## Introduction
**Light curve background:**
A light curve is a plot of flux versus time that shows the variability of light output from an object. This is one way to find planets periodically transitting a star. The light curves made here will plot the corrected and uncorrected fluxes from Kepler data of object KIC 11446443 (TRES-2).
**Some notes about the file:** kplr_011446443-2009131110544_slc.fits
<br>The filename contains phrases for identification, where
- kplr = Kepler
- 011446443 = Kepler ID number
- 2009131110544 = year 2009, day 131, time 11:05:44
- slc = short cadence
**Defining some terms:**
- **Cadence:** the frequency with which summed data are read out. Files are either short cadence (a 1 minute sum) or long cadence (a 30 minute sum).
- **SAP Flux:** Simple Aperture Photometry flux; flux after summing the calibrated pixels within the optimal aperture
- **PDCSAP Flux:** Pre-search Data Conditioned Simple Aperture Photometry; these are the flux values nominally corrected for instrumental variations.
- **BJD:** Barycentric Julian Day; this is the Julian Date that has been corrected for differences in the Earth's position with respect to the Solar System Barycentre (center of mass of the Solar System).
- **HDU:** Header Data Unit; a FITS file is made up of Header or Data units that contain information, data, and metadata relating to the file. The first HDU is called the primary, and anything that follows is considered an extension.
For more information about the Kepler mission and collected data, visit the [Kepler archive page](https://archive.stsci.edu/kepler/). To read more details about light curves and relevant data terms, look in the [Kepler archive manual](https://archive.stsci.edu/kepler/manuals/archive_manual.pdf#page=16).
[Top of Page](#title_ID)
***
<a id="imports_ID"></a>
## Imports
Let's start by importing some libraries to the environment:
- *matplotlib notebook* for creating interactive plots
- *astropy.io fits* for accessing FITS files
- *astropy.table Table* for creating tidy tables of the data
- *matplotlib* for plotting data
```
%matplotlib notebook
from astropy.io import fits
from astropy.table import Table
import matplotlib.pyplot as plt
```
[Top of Page](#title_ID)
***
<a id="data_ID"></a>
## Getting the Data
Start by importing libraries from Astroquery. For a longer, more detailed description of using Astroquery, please visit this [tutorial](https://github.com/spacetelescope/MAST-API-Notebooks/blob/master/MUG2018_APITutorial_Astroquery.ipynb) or read the Astroquery [documentation](https://astroquery.readthedocs.io/en/latest/#).
```
from astroquery.mast import Mast
from astroquery.mast import Observations
```
<br>Next, we need to find the data file. This is similar to searching for the data using the [MAST Portal](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html) in that we will be using certain keywords to find the file. The target name of the object we are looking for is kplr011446443, collected by the Kepler spacecraft.
```
keplerObs = Observations.query_criteria(target_name='kplr011446443', obs_collection='Kepler')
keplerProds = Observations.get_product_list(keplerObs[1])
yourProd = Observations.filter_products(keplerProds, extension='kplr011446443-2009131110544_slc.fits',
mrp_only=False)
yourProd
```
<br>Now that we've found the data file, we can download it using the results shown in the table above:
```
Observations.download_products(yourProd, mrp_only = False, cache = False)
```
<br>Click on the blue URL above to download the file. You are now ready to complete the rest of the notebook.
[Top of Page](#title_ID)
***
<a id="header_ID"></a>
## Reading FITS Extensions
<br>Now that we have the file, we can start working with the data. We will begin by assigning a shorter name to the file to make it easier to use. Then, using the info function from astropy.io.fits, we can see some information about the FITS Header Data Units:
```
filename = "./mastDownload/Kepler/kplr011446443_sc_Q113313330333033302/kplr011446443-2009131110544_slc.fits"
fits.info(filename)
```
- **No. 0 (Primary): **
<br>This HDU contains meta-data related to the entire file.
- **No. 1 (Light curve): **
<br>This HDU contains a binary table that holds data like flux measurements and times. We will extract information from here when we define the parameters for the light curve plot.
- **No. 2 (Aperture): **
<br>This HDU contains the image extension with data collected from the aperture. We will also use this to display a bitmask plot that visually represents the optimal aperture used to create the SAP_FLUX column in HDU1.
For more detailed information about header extensions, look [here](https://archive.stsci.edu/kepler/manuals/archive_manual.pdf#page=17).
<br>Let's say we wanted to see more information about the extensions than what the fits.info command gave us. For example, we can access information stored in the header of the Binary Table extension (No. 1, LIGHTCURVE). The following line opens the FITS file, writes the first HDU extension into header1, and then closes the file. Only the first 24 header entries are displayed here, but you can view them all by adjusting the range:
```
with fits.open(filename) as hdulist:
header1 = hdulist[1].header
print(repr(header1[0:24])) #repr() prints the info into neat columns
```
<br> We can also view a table of the data from the Binary Table extension. This is where we can find the flux and time columns to be plotted later. Here only the first four rows of the table are displayed:
```
with fits.open(filename) as hdulist:
binaryext = hdulist[1].data
binarytable = Table(binaryext)
binarytable[1:5]
```
[Top of Page](#title_ID)
***
<a id="lightcurve_ID"></a>
## Plotting a Light Curve
<br>Now that we have seen and accessed the data, we can begin to plot a light curve:
1. Open the file using command fits.open. This will allow the program to read and store the data we will manipulate to be plotted. Here we've also renamed the file with a phrase that is easier to handle (see line 1).
<br>
<br>
2. Start by calibrating the time. Because the Kepler data is in BKJD (Kepler Barycentric Julian Day) we need to convert it to time in Julian Days (BJD) if we want to be able to compare it to other outside data. For a more detailed explanation about time conversions, visit the [page 13](https://archive.stsci.edu/kepler/manuals/archive_manual.pdf#page=13) or [page 17](https://archive.stsci.edu/kepler/manuals/archive_manual.pdf#page=17) of the Kepler Archive Manual.
<br>
- Read in the BJDREF times, both the integer (BJDREFI) and the floating point (BJDREFF). These are found as columns of data in the *binary extension* of the header.
<br>
<br>
3. Read in the columns of times and fluxes (both uncorrected and corrected) from the data.
```
with fits.open(filename, mode="readonly") as hdulist:
# Read in the "BJDREF" which is the time offset of the time array.
bjdrefi = hdulist[1].header['BJDREFI']
bjdreff = hdulist[1].header['BJDREFF']
# Read in the columns of data.
times = hdulist[1].data['time']
sap_fluxes = hdulist[1].data['SAP_FLUX']
pdcsap_fluxes = hdulist[1].data['PDCSAP_FLUX']
```
4. Now that the appropriate data has been read and stored, convert the times to BJDS by adding the BJDREF times to the data of times.
<br>
<br>
5. Finally, we can plot the fluxes against time. We can also set a title and add a legend to the plot. We can label our fluxes accordingly and assign them colors and styles ("-k" for a black line, "-b" for a blue line).
```
# Convert the time array to full BJD by adding the offset back in.
bjds = times + bjdrefi + bjdreff
plt.figure(figsize=(9,4))
# Plot the time, uncorrected and corrected fluxes.
plt.plot(bjds, sap_fluxes, '-k', label='SAP Flux')
plt.plot(bjds, pdcsap_fluxes, '-b', label='PDCSAP Flux')
plt.title('Kepler Light Curve')
plt.legend()
plt.xlabel('Time (days)')
plt.ylabel('Flux (electrons/second)')
plt.show()
```
[Top of Page](#title_ID)
***
<a id="aperture_ID"></a>
## The Aperture Extension
<br>We can also make a plot of the third HDU; the image extension (No. 2, APERTURE). This data is stored as an array of integers that encodes which pixels were collected from the spacecraft and which were used in the optimal aperture (look here for more information on the [aperture extension](https://archive.stsci.edu/kepler/manuals/archive_manual.pdf#page=20)).
<br>
<br>First, we need to re-open the FITS file and access the header. Next, we read in the image extension and print it as an array:
```
with fits.open(filename) as hdulist:
imgdata = hdulist[2].data
print(imgdata)
```
We can also show the data in a plot:
```
plt.figure(2)
plt.title('Kepler Aperture')
plt.imshow(imgdata, cmap=plt.cm.YlGnBu_r)
plt.xlabel('Column')
plt.ylabel('Row')
plt.colorbar()
```
[Top of Page](#title_ID)
***
<a id="resources_ID"></a>
## Additional Resources
For more information about the MAST archive and details about mission data:
<br>
<br>[MAST API](https://mast.stsci.edu/api/v0/index.html)
<br>[Kepler Archive Page (MAST)](https://archive.stsci.edu/kepler/)
<br>[Kepler Archive Manual](https://archive.stsci.edu/kepler/manuals/archive_manual.pdf)
<br>[Exo.MAST website](https://exo.mast.stsci.edu/exo/ExoMast/html/exomast.html)
***
<a id="about_ID"></a>
## About this Notebook
**Author:** Josie Bunnell, STScI SASP Intern
<br>**Updated On:** 08/10/2018
***
[Top of Page](#title_ID)
<img style="float: right;" src="https://raw.githubusercontent.com/spacetelescope/notebooks/master/assets/stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="STScI logo" width="200px"/>
| github_jupyter |
# License
***
Copyright (C) 2017 J. Patrick Hall, jphall@gwu.edu
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
***
# Simple target encoding: rate-by-level - Pandas and numpy
## Imports
```
import pandas as pd # pandas for handling mixed data sets
from numpy.random import uniform # numpy for basic math and matrix operations
```
#### Create a sample data set
```
scratch_df = pd.DataFrame({'x1': ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B'],
'x2': ['C', 'D', 'D', 'D', 'C', 'C', 'E', 'C', 'E', 'E'],
'y': [0, 0, 1, 0, 1, 1, 1, 1, 0, 1]})
scratch_df
```
#### Encode categorical variables using a rate-by-level approach
```
# make a new deep copy of scratch_df
# so you can run this cell many times w/o errors
scratch_df1 = scratch_df.copy()
# loop through columns to create new encoded columns
for col_name in scratch_df.columns[:-1]:
new_col_name = col_name + '_encode'
# create a dictionary of original categorical value:event rate for that value
row_val_dict = {}
for level in scratch_df[col_name].unique():
row_val_dict[level] = scratch_df[scratch_df[col_name] == level]['y'].mean()
# apply the transform from the dictionary on all rows in the column
scratch_df1[new_col_name] = scratch_df[col_name].apply(lambda i: row_val_dict[i])
scratch_df1
```
#### Perturb to prevent overfitting
```
# make a new deep copy of scratch_df
# so you can run this cell many times w/o errors
scratch_df2 = scratch_df.copy()
# loop through columns to create new encoded columns
for col_name in scratch_df.columns[:-1]:
    new_col_name = col_name + '_encode'
    # create a dictionary of original categorical value:event rate for that value
    row_val_dict = {}
    for level in scratch_df[col_name].unique():
        row_val_dict[level] = scratch_df[scratch_df[col_name] == level]['y'].mean()
    # apply the transform from the dictionary on all rows in the column,
    # adding a little random noise, which can prevent overfitting for rare levels
    scratch_df2[new_col_name] = scratch_df[col_name].apply(lambda i: row_val_dict[i] + uniform(low=-0.05, high=0.05))
scratch_df2
```
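The explicit dictionary loop above can also be written with pandas `groupby`/`transform`, which broadcasts each level's event rate back onto its rows in one step. A minimal equivalent sketch (without the noise term; `df` here is a fresh copy of the sample data, not the notebook's `scratch_df`):

```python
import pandas as pd

df = pd.DataFrame({'x1': ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B'],
                   'x2': ['C', 'D', 'D', 'D', 'C', 'C', 'E', 'C', 'E', 'E'],
                   'y': [0, 0, 1, 0, 1, 1, 1, 1, 0, 1]})

# groupby/transform computes the mean of y per level and aligns it to every row
for col_name in ['x1', 'x2']:
    df[col_name + '_encode'] = df.groupby(col_name)['y'].transform('mean')

print(df[['x1', 'x1_encode']].drop_duplicates())
```

This produces the same rate-by-level encoding as the loop, with less code and without building an intermediate dictionary.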
# Add model: translation attention encoder-decoder over the b5 dataset
```
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchtext import data
import pandas as pd
import unicodedata
import string
import re
import random
import copy
from contra_qa.plots.functions import simple_step_plot, plot_confusion_matrix
import matplotlib.pyplot as plt
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
from nltk.translate.bleu_score import sentence_bleu
%matplotlib inline
import time
import math
def asMinutes(s):
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)
def timeSince(since):
    now = time.time()
    s = now - since
    return '%s' % asMinutes(s)
```
### Preparing data
```
df2 = pd.read_csv("data/boolean5_train.csv")
df2_test = pd.read_csv("data/boolean5_test.csv")
df2["text"] = df2["sentence1"] + df2["sentence2"]
df2_test["text"] = df2_test["sentence1"] + df2_test["sentence2"]
all_sentences = list(df2.text.values) + list(df2_test.text.values)
df2train = df2.iloc[:8500]
df2valid = df2.iloc[8500:]
df2train.tail()
SOS_token = 0
EOS_token = 1
class Lang:
    def __init__(self, name):
        self.name = name
        self.word2index = {}
        self.word2count = {}
        self.index2word = {0: "SOS", 1: "EOS"}
        self.n_words = 2  # Count SOS and EOS
    def addSentence(self, sentence):
        for word in sentence.split(' '):
            self.addWord(word)
    def addWord(self, word):
        if word not in self.word2index:
            self.word2index[word] = self.n_words
            self.word2count[word] = 1
            self.index2word[self.n_words] = word
            self.n_words += 1
        else:
            self.word2count[word] += 1
# Turn a Unicode string to plain ASCII, thanks to
# http://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn')
# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
    s = unicodeToAscii(s.lower().strip())
    s = re.sub(r"([.!?])", r" \1", s)
    s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
    return s
example = "ddddda'''~~çãpoeéééééÈ'''#$$##@!@!@AAS@#12323fdf"
print("Before:", example)
print()
print("After:", normalizeString(example))
pairs_A = list(zip(list(df2train.sentence1.values), list(df2train.and_A.values)))
pairs_B = list(zip(list(df2train.sentence1.values), list(df2train.and_B.values)))
pairs_A = [(normalizeString(s1), normalizeString(s2)) for s1, s2 in pairs_A]
pairs_B = [(normalizeString(s1), normalizeString(s2)) for s1, s2 in pairs_B]
pairs_A_val = list(zip(list(df2valid.sentence1.values), list(df2valid.and_A.values)))
pairs_B_val = list(zip(list(df2valid.sentence1.values), list(df2valid.and_B.values)))
pairs_A_val = [(normalizeString(s1), normalizeString(s2)) for s1, s2 in pairs_A_val]
pairs_B_val = [(normalizeString(s1), normalizeString(s2)) for s1, s2 in pairs_B_val]
all_text_pairs = zip(all_sentences, all_sentences)
all_text_pairs = [(normalizeString(s1), normalizeString(s2)) for s1, s2 in all_text_pairs]
def readLangs(lang1, lang2, pairs, reverse=False):
    # Reverse pairs, make Lang instances
    if reverse:
        pairs = [tuple(reversed(p)) for p in pairs]
        input_lang = Lang(lang2)
        output_lang = Lang(lang1)
    else:
        input_lang = Lang(lang1)
        output_lang = Lang(lang2)
    return input_lang, output_lang, pairs
f = lambda x: len(x.split(" "))
MAX_LENGTH = np.max(list(map(f, all_sentences)))
MAX_LENGTH = 20  # override the data-derived value with a fixed cap
def filterPair(p):
    cond1 = len(p[0].split(' ')) < MAX_LENGTH
    cond2 = len(p[1].split(' ')) < MAX_LENGTH
    return cond1 and cond2
def filterPairs(pairs):
    return [pair for pair in pairs if filterPair(pair)]
def prepareData(lang1, lang2, pairs, reverse=False):
    input_lang, output_lang, pairs = readLangs(lang1, lang2, pairs, reverse)
    print("Read %s sentence pairs" % len(pairs))
    pairs = filterPairs(pairs)
    print("Trimmed to %s sentence pairs" % len(pairs))
    print("Counting words...")
    for pair in pairs:
        input_lang.addSentence(pair[0])
        output_lang.addSentence(pair[1])
    print("Counted words:")
    print(input_lang.name, input_lang.n_words)
    print(output_lang.name, output_lang.n_words)
    return input_lang, output_lang, pairs
_, _, training_pairs_A = prepareData("eng_enc",
"eng_dec",
pairs_A)
print()
input_lang, _, _ = prepareData("eng_enc",
"eng_dec",
all_text_pairs)
output_lang = copy.deepcopy(input_lang)
print()
print()
_, _, valid_pairs_A = prepareData("eng_enc",
"eng_dec",
pairs_A_val)
_, _, training_pairs_B = prepareData("eng_enc",
"eng_dec",
pairs_B)
print()
_, _, valid_pairs_B = prepareData("eng_enc",
"eng_dec",
pairs_B_val)
```
### sentences 2 tensors
```
def indexesFromSentence(lang, sentence):
    return [lang.word2index[word] for word in sentence.split(' ')]
def tensorFromSentence(lang, sentence):
    indexes = indexesFromSentence(lang, sentence)
    indexes.append(EOS_token)
    return torch.tensor(indexes, dtype=torch.long, device=device).view(-1, 1)
def tensorsFromPair(pair):
    input_tensor = tensorFromSentence(input_lang, pair[0])
    target_tensor = tensorFromSentence(output_lang, pair[1])
    return (input_tensor, target_tensor)
def tensorsFromTriple(triple):
    input_tensor = tensorFromSentence(input_lang, triple[0])
    target_tensor = tensorFromSentence(output_lang, triple[1])
    label_tensor = torch.tensor(triple[2], dtype=torch.long).view((1))
    return (input_tensor, target_tensor, label_tensor)
```
### models
```
class EncoderRNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(EncoderRNN, self).__init__()
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(input_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)
    def forward(self, input, hidden):
        embedded = self.embedding(input).view(1, 1, -1)
        output = embedded
        output, hidden = self.gru(output, hidden)
        return output, hidden
    def initHidden(self):
        return torch.zeros(1, 1, self.hidden_size, device=device)
class AttnDecoderRNN(nn.Module):
    def __init__(self, hidden_size, output_size, dropout_p=0.1, max_length=MAX_LENGTH):
        super(AttnDecoderRNN, self).__init__()
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.dropout_p = dropout_p
        self.max_length = max_length
        self.embedding = nn.Embedding(self.output_size, self.hidden_size)
        self.attn = nn.Linear(self.hidden_size * 2, self.max_length)
        self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)
        self.dropout = nn.Dropout(self.dropout_p)
        self.gru = nn.GRU(self.hidden_size, self.hidden_size)
        self.out = nn.Linear(self.hidden_size, self.output_size)
    def forward(self, input, hidden, encoder_outputs):
        embedded = self.embedding(input).view(1, 1, -1)
        embedded = self.dropout(embedded)
        attn_weights = F.softmax(
            self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1)
        attn_applied = torch.bmm(attn_weights.unsqueeze(0),
                                 encoder_outputs.unsqueeze(0))
        output = torch.cat((embedded[0], attn_applied[0]), 1)
        output = self.attn_combine(output).unsqueeze(0)
        output = F.relu(output)
        output, hidden = self.gru(output, hidden)
        output = F.log_softmax(self.out(output[0]), dim=1)
        return output, hidden, attn_weights
    def initHidden(self):
        return torch.zeros(1, 1, self.hidden_size, device=device)
hidden_size = 256
eng_enc_v_size = input_lang.n_words
eng_dec_v_size = output_lang.n_words
input_lang.n_words
encoderA = EncoderRNN(eng_enc_v_size, hidden_size)
decoderA = AttnDecoderRNN(hidden_size, eng_dec_v_size)
encoderA.load_state_dict(torch.load("b5_encoder1_att.pkl"))
decoderA.load_state_dict(torch.load("b5_decoder1_att.pkl"))
encoderB = EncoderRNN(eng_enc_v_size, hidden_size)
decoderB = AttnDecoderRNN(hidden_size, eng_dec_v_size)
encoderB.load_state_dict(torch.load("b5_encoder2_att.pkl"))
decoderB.load_state_dict(torch.load("b5_decoder2_att.pkl"))
```
## translating
```
def translate(encoder,
              decoder,
              sentence,
              max_length=MAX_LENGTH):
    with torch.no_grad():
        input_tensor = tensorFromSentence(input_lang, sentence)
        input_length = input_tensor.size()[0]
        encoder_hidden = encoder.initHidden()
        encoder_outputs = torch.zeros(
            max_length, encoder.hidden_size, device=device)
        for ei in range(input_length):
            encoder_output, encoder_hidden = encoder(input_tensor[ei],
                                                     encoder_hidden)
            encoder_outputs[ei] += encoder_output[0, 0]
        decoder_input = torch.tensor([[SOS_token]], device=device)  # SOS
        decoder_hidden = encoder_hidden
        decoded_words = []
        for di in range(max_length):
            decoder_output, decoder_hidden, decoder_attention = decoder(decoder_input, decoder_hidden, encoder_outputs)
            _, topone = decoder_output.data.topk(1)
            if topone.item() == EOS_token:
                decoded_words.append('<EOS>')
                break
            else:
                decoded_words.append(output_lang.index2word[topone.item()])
            decoder_input = topone.squeeze().detach()
        return " ".join(decoded_words)
```
## translation of a trained model: and A
```
for t in training_pairs_A[0:3]:
    print("input_sentence : " + t[0])
    neural_translation = translate(encoderA,
                                   decoderA,
                                   t[0],
                                   max_length=MAX_LENGTH)
    print("neural translation : " + neural_translation)
    reference = t[1] + ' <EOS>'
    print("reference translation : " + reference)
    reference = reference.split(" ")
    candidate = neural_translation.split(" ")
    score = sentence_bleu([reference], candidate)
    print("BLEU score = {:.2f}".format(score))
    print()
```
## translation of a trained model: and B
```
for t in training_pairs_B[0:3]:
    print("input_sentence : " + t[0])
    neural_translation = translate(encoderB,
                                   decoderB,
                                   t[0],
                                   max_length=MAX_LENGTH)
    print("neural translation : " + neural_translation)
    reference = t[1] + ' <EOS>'
    print("reference translation : " + reference)
    reference = reference.split(" ")
    candidate = neural_translation.split(" ")
    score = sentence_bleu([reference], candidate)
    print("BLEU score = {:.2f}".format(score))
    print()
```
## Defining the And model
Model inner workings:
- $s_1$ is the first sentence (e.g., 'penny is thankful and naomi is alive')
- $s_2$ is the second sentence (e.g., 'penny is not alive')
- $h_A = dec_{A}(enc_{A}(s_1, \vec{0}))$
- $h_B = dec_{B}(enc_{B}(s_1, \vec{0}))$
- $h_{inf} = \sigma (W[h_A ;h_B] + b)$
- $e = enc_{A}(s_2, h_{inf})$
- $\hat{y} = softmax(We + b)$
```
class AndModel(nn.Module):
    def __init__(self,
                 encoderA,
                 decoderA,
                 encoderB,
                 decoderB,
                 hidden_size,
                 output_size,
                 max_length,
                 input_lang,
                 target_lang,
                 SOS_token=0,
                 EOS_token=1):
        super(AndModel, self).__init__()
        self.max_length = max_length
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.encoderA = encoderA
        self.decoderA = decoderA
        self.encoderB = encoderB
        self.decoderB = decoderB
        self.input_lang = input_lang
        self.target_lang = target_lang
        self.SOS_token = SOS_token
        self.EOS_token = EOS_token
        self.fc_inf = nn.Linear(hidden_size * 2, hidden_size)
        self.fc_out = nn.Linear(hidden_size, output_size)
    def encode(self,
               sentence,
               encoder,
               is_tensor,
               hidden=None):
        if not is_tensor:
            input_tensor = tensorFromSentence(self.input_lang, sentence)
        else:
            input_tensor = sentence
        input_length = input_tensor.size()[0]
        if hidden is None:
            encoder_hidden = encoder.initHidden()
        else:
            encoder_hidden = hidden
        encoder_outputs = torch.zeros(self.max_length,
                                      encoder.hidden_size,
                                      device=device)
        for ei in range(input_length):
            encoder_output, encoder_hidden = encoder(input_tensor[ei],
                                                     encoder_hidden)
            encoder_outputs[ei] += encoder_output[0, 0]
        self.encoder_outputs = encoder_outputs
        return encoder_hidden
    def decode(self,
               tensor,
               decoder,
               out_tensor):
        decoder_input = torch.tensor([[self.SOS_token]], device=device)
        decoder_hidden = tensor
        decoded_words = []
        for di in range(self.max_length):
            decoder_output, decoder_hidden, decoder_attention = decoder(
                decoder_input, decoder_hidden, self.encoder_outputs)
            _, topone = decoder_output.data.topk(1)
            if topone.item() == self.EOS_token:
                decoded_words.append('<EOS>')
                break
            else:
                decoded_words.append(self.target_lang.index2word[topone.item()])
            decoder_input = topone.squeeze().detach()
        if not out_tensor:
            output = " ".join(decoded_words)
        else:
            output = decoder_hidden
        return output
    def sen2vec(self, sentence, encoder, decoder, is_tensor, out_tensor):
        encoded = self.encode(sentence, encoder, is_tensor)
        vec = self.decode(encoded, decoder, out_tensor)
        return vec
    def sen2vecA(self, sentence, is_tensor):
        encoded = self.encode(sentence, self.encoderA, is_tensor)
        vec = self.decode(encoded, self.decoderA, out_tensor=True)
        return vec
    def sen2vecB(self, sentence, is_tensor):
        encoded = self.encode(sentence, self.encoderB, is_tensor)
        vec = self.decode(encoded, self.decoderB, out_tensor=True)
        return vec
    def forward(self, s1, s2):
        hA = self.sen2vecA(s1, is_tensor=True)
        hB = self.sen2vecB(s1, is_tensor=True)
        h_inf = torch.cat([hA, hB], dim=2).squeeze(1)
        h_inf = torch.sigmoid(self.fc_inf(h_inf))
        h_inf = h_inf.view((1, h_inf.shape[0], h_inf.shape[1]))
        e = self.encode(s2,
                        self.encoderA,
                        hidden=h_inf,
                        is_tensor=True)
        output = self.fc_out(e).squeeze(1)
        return output
    def predict(self, s1, s2):
        out = self.forward(s1, s2)
        softmax = nn.Softmax(dim=1)
        out = softmax(out)
        indices = torch.argmax(out, 1)
        return indices
addmodel = AndModel(encoderA,
decoderA,
encoderB,
decoderB,
hidden_size=256,
output_size=2,
max_length=MAX_LENGTH,
input_lang=input_lang,
target_lang=output_lang)
```
Test encoding decoding
```
for ex in training_pairs_B[0:3]:
    print("===========")
    ex = ex[0]
    print("s1:\n")
    print(ex)
    print()
    ex_A = addmodel.sen2vec(ex,
                            addmodel.encoderA,
                            addmodel.decoderA,
                            is_tensor=False,
                            out_tensor=False)
    ex_B = addmodel.sen2vec(ex,
                            addmodel.encoderB,
                            addmodel.decoderB,
                            is_tensor=False,
                            out_tensor=False)
    print("inference A:\n")
    print(ex_A)
    print()
    print("inference B:\n")
    print(ex_B)
for ex in training_pairs_B[0:1]:
    print("===========")
    ex = ex[0]
    print("s1:\n")
    print(ex)
    print()
    ex_A = addmodel.sen2vecA(ex, is_tensor=False)
    ex_B = addmodel.sen2vecB(ex, is_tensor=False)
    print(ex_A)
    print()
    print(ex_B)
train_triples = zip(list(df2train.sentence1.values), list(df2train.sentence2.values), list(df2train.label.values))
train_triples = [(normalizeString(s1), normalizeString(s2), l) for s1, s2, l in train_triples]
train_triples_t = [tensorsFromTriple(t) for t in train_triples]
valid_triples = zip(list(df2valid.sentence1.values), list(df2valid.sentence2.values), list(df2valid.label.values))
valid_triples = [(normalizeString(s1), normalizeString(s2), l) for s1, s2, l in valid_triples]
valid_triples_t = [tensorsFromTriple(t) for t in valid_triples]
len(valid_triples_t)
test_triples = zip(list(df2_test.sentence1.values), list(df2_test.sentence2.values), list(df2_test.label.values))
test_triples = [(normalizeString(s1), normalizeString(s2), l) for s1, s2, l in test_triples]
test_triples_t = [tensorsFromTriple(t) for t in test_triples]
example = train_triples[0]
print(example)
example_t = train_triples_t[0]
print(example_t)
```
## Prediction BEFORE training
```
n_iters = 100
training_pairs_little = [random.choice(train_triples_t) for i in range(n_iters)]
predictions = []
labels = []
for i in range(n_iters):
    s1, s2, label = training_pairs_little[i]
    pred = addmodel.predict(s1, s2)
    label = label.item()
    pred = pred.item()
    predictions.append(pred)
    labels.append(label)
plot_confusion_matrix(labels,
predictions,
classes=["no", "yes"],
path="confusion_matrix.png")
```
### Training functions
```
def CEtrain(s1_tensor,
            s2_tensor,
            label,
            model,
            optimizer,
            criterion):
    model.train()
    optimizer.zero_grad()
    logits = model(s1_tensor, s2_tensor)
    loss = criterion(logits, label)
    loss.backward()
    optimizer.step()
    return loss
```
Test CEtrain
```
CE = nn.CrossEntropyLoss()
addmodel_opt = torch.optim.SGD(addmodel.parameters(), lr= 0.3)
loss = CEtrain(s1_tensor=example_t[0],
s2_tensor=example_t[1],
label=example_t[2],
model=addmodel,
optimizer=addmodel_opt,
criterion=CE)
assert type(loss.item()) == float
```
## Little example of training
```
epochs = 10
learning_rate = 0.1
CE = nn.CrossEntropyLoss()
encoderA = EncoderRNN(eng_enc_v_size, hidden_size)
decoderA = AttnDecoderRNN(hidden_size, eng_dec_v_size)
encoderA.load_state_dict(torch.load("b5_encoder1_att.pkl"))
decoderA.load_state_dict(torch.load("b5_decoder1_att.pkl"))
encoderB = EncoderRNN(eng_enc_v_size, hidden_size)
decoderB = AttnDecoderRNN(hidden_size, eng_dec_v_size)
encoderB.load_state_dict(torch.load("b5_encoder2_att.pkl"))
decoderB.load_state_dict(torch.load("b5_decoder2_att.pkl"))
addmodel = AndModel(encoderA,
decoderA,
encoderB,
decoderB,
hidden_size=256,
output_size=2,
max_length=MAX_LENGTH,
input_lang=input_lang,
target_lang=output_lang)
# # for model in [encoderA, decoderA, encoderB, decoderB]:
# for model in [encoderB, decoderB]:
# for param in model.parameters():
# param.requires_grad = False
# addmodel_opt = torch.optim.SGD(addmodel.parameters(), lr= learning_rate)
addmodel_opt = torch.optim.Adagrad(addmodel.parameters(), lr= learning_rate)
# addmodel_opt = torch.optim.Adadelta(addmodel.parameters(), lr= learning_rate)
# addmodel_opt = torch.optim.Adam(addmodel.parameters(), lr= learning_rate)
# addmodel_opt = torch.optim.SparseAdam(addmodel.parameters(), lr= learning_rate)
# addmodel_opt = torch.optim.RMSprop(addmodel.parameters(), lr= learning_rate)
losses_per_epoch = []
for i in range(epochs):
    losses = []
    start = time.time()
    n_iters = 1000
    training_pairs_little = [random.choice(train_triples_t) for i in range(n_iters)]
    for t in training_pairs_little:
        s1, s2, label = t
        loss = CEtrain(s1_tensor=s1,
                       s2_tensor=s2,
                       label=label,
                       model=addmodel,
                       optimizer=addmodel_opt,
                       criterion=CE)
        losses.append(loss.item())
    mean_loss = np.mean(losses)
    losses_per_epoch.append(mean_loss)
    print("epoch {}/{}".format(i + 1, epochs), timeSince(start), "mean loss = {:.2f}".format(mean_loss))
simple_step_plot([losses_per_epoch],
"loss",
"loss example ({} epochs)".format(epochs),
"loss_example.png",
figsize=(15,4))
```
## Prediction AFTER training
```
n_iters = 100
training_pairs_little = [random.choice(train_triples_t) for i in range(n_iters)]
predictions = []
labels = []
for i in range(n_iters):
    s1, s2, label = training_pairs_little[i]
    pred = addmodel.predict(s1, s2)
    label = label.item()
    pred = pred.item()
    predictions.append(pred)
    labels.append(label)
plot_confusion_matrix(labels,
predictions,
classes=["no", "yes"],
path="confusion_matrix.png")
n_iters = len(valid_triples_t)
valid_pairs_little = [random.choice(valid_triples_t) for i in range(n_iters)]
predictions = []
labels = []
for i in range(n_iters):
    s1, s2, label = valid_pairs_little[i]
    pred = addmodel.predict(s1, s2)
    label = label.item()
    pred = pred.item()
    predictions.append(pred)
    labels.append(label)
plot_confusion_matrix(labels,
predictions,
classes=["no", "yes"],
path="confusion_matrix.png")
n_iters = len(test_triples_t)
test_pairs_little = [random.choice(test_triples_t) for i in range(n_iters)]
predictions = []
labels = []
for i in range(n_iters):
    s1, s2, label = test_pairs_little[i]
    pred = addmodel.predict(s1, s2)
    label = label.item()
    pred = pred.item()
    predictions.append(pred)
    labels.append(label)
plot_confusion_matrix(labels,
predictions,
classes=["no", "yes"],
path="confusion_matrix.png")
```
```
from sklearn.neighbors import KNeighborsClassifier
from scipy.signal import resample
def squeeze_stretch(s, y, scale=1.1):
    n_old = s.shape[0]
    knn = KNeighborsClassifier(n_neighbors=3, weights='uniform')
    if scale >= 1:
        n_new = scale * s.shape[0]
        s_new = resample(s, int(n_new))
        y_new = resample(y, int(n_new))
        mid_point = int(n_new) // 2
        confident_samples = np.ceil(y_new) == np.round(y_new)
        # Get KNN on confident samples
        x_axis = np.arange(s_new.shape[0])
        X = x_axis[confident_samples].reshape(-1, 1)
        y = np.abs(np.ceil(y_new[confident_samples]))
        knn.fit(X, y)
        y_new = knn.predict(x_axis.reshape(-1, 1))
        result_x = s_new
        result_y = y_new
        result_y[result_y > 4] = 4
    else:
        n_new = scale * s.shape[0]
        s_new = resample(s, int(n_new))
        y_new = resample(y, int(n_new))
        x_axis = np.arange(s_new.shape[0])
        confident_samples = np.ceil(y_new) == np.round(y_new)
        print(confident_samples.sum())
        X_knn = x_axis[confident_samples].reshape(-1, 1)
        y_knn = np.abs(np.ceil(y_new[confident_samples]))
        print(y.shape)
        knn.fit(X_knn, y_knn)
        y_new = knn.predict(x_axis.reshape(-1, 1))
        pad_width = int(n_old - n_new)
        if pad_width % 2 == 0:
            lp = rp = pad_width // 2
        else:
            lp = pad_width // 2
            rp = lp + 1
        s_new = np.pad(s_new, (lp, rp), mode='constant')
        y_new = np.pad(y_new, (lp, rp), mode='constant')
        low = np.quantile(s[y < 1], 0.15)
        high = np.quantile(s[y < 1], 0.85)
        rand_num = np.random.uniform(low, high, lp + rp)
        s_new[:lp] = rand_num[:lp]
        s_new[-rp:] = rand_num[lp:]
        y_new[:lp] = 0
        y_new[-rp:] = 0
        result_x = s_new
        result_y = np.round(np.abs(y_new))
        result_y[result_y > 4] = 4
    return result_x, result_y
# data processing
import pandas as pd
import numpy as np
from scipy.signal import medfilt
from sklearn.preprocessing import MinMaxScaler
import pywt
#visualization
import matplotlib.pyplot as plt
#model estimation
from sklearn.metrics import accuracy_score
#custom functions
from config import *
from DataGenerator import *
```
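The heart of `squeeze_stretch` is resampling a signal to a new length. As an illustrative sketch of that core idea (using numpy linear interpolation as a stand-in assumption for the Fourier-based `scipy.signal.resample` the function actually uses):

```python
import numpy as np

def stretch(s, scale=1.1):
    # Resample s to int(scale * len(s)) points over the same span
    # using linear interpolation.
    n_old = s.shape[0]
    n_new = int(scale * n_old)
    old_x = np.linspace(0.0, 1.0, n_old)
    new_x = np.linspace(0.0, 1.0, n_new)
    return np.interp(new_x, old_x, s)

s = np.arange(10, dtype=float)
s_long = stretch(s, scale=1.5)   # stretched: 15 samples, same endpoints
s_short = stretch(s, scale=0.5)  # squeezed: 5 samples
```

The remaining logic in `squeeze_stretch` then deals with what interpolation alone cannot: snapping the resampled labels back to integer classes (via KNN on the confident samples) and padding a squeezed signal back to its original length.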
# Load the data
```
DATA_PATH = './data/raw/'
TRAIN_NAME = f'{DATA_PATH}train.csv'
train = pd.read_csv(TRAIN_NAME)
train.head()
GetData=DataGenerator()
```
```
DATA_PATH = './data/raw/'
TEST_NAME = f'{DATA_PATH}test.csv'
test = pd.read_csv(TEST_NAME)
test.head()
GetData=DataGenerator()
predictions = np.zeros((GetData.X_test.shape[0],1104,5))
for i in range(5):
    model = DL_model(input_size=INPUT_SIZE, hyperparams=HYPERPARAM)
    model.load_weights(f'./data/weights/UNET_model_{i}_.h5')
    predictions += model.predict(GetData.X_test) / 5
predictions = predictions[:, :1100, :]
def prepare_test(pred_test, df_test):
    wells = df_test['well_id'].sort_values().unique().tolist()
    list_df_wells = [df_test.loc[df_test['well_id'].isin([w]), :].copy() for w in wells]
    for df in list_df_wells:
        df.index = np.arange(df.shape[0])
    for i, df_well in enumerate(list_df_wells):
        df_well['label'] = np.argmax(pred_test[i, :], axis=1)
    result = pd.concat(list_df_wells, axis=0)
    return result
submit = prepare_test(predictions, test)
submit[['row_id', 'well_id', 'label']].to_csv('data/result/0.983_submit.csv', index=False)
submit
```
The decimal module implements fixed and floating point arithmetic using the model familiar to most people, rather than the IEEE floating point version implemented by most computer hardware and familiar to programmers. A Decimal instance can represent any number exactly, round up or down, and apply a limit to the number of significant digits.
# Decimal
```
import decimal
fmt = '{0:<25}{1:<25}'
print(fmt.format('Input', 'Output'))
print(fmt.format('-'*25, '-'*25))
#Integer
print(fmt.format(5, decimal.Decimal(5)))
#String
print(fmt.format('3.14', decimal.Decimal('3.14')))
#Float
f = 0.1
print(fmt.format(repr(f), decimal.Decimal(str(f))))
print('{:0.23g}{:<25}'.format(f, str(decimal.Decimal.from_float(f))[:25]))
```
Decimals can also be created from tuples containing a sign flag (0 for positive, 1 for negative), a tuple of digits, and an integer exponent
```
import decimal
# Tuple
t = (1, (1, 1), -2)
print('Input :', t)
print('Decimal:', decimal.Decimal(t))
```
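The tuple encodes `(sign, digits, exponent)`, and `as_tuple()` goes the other way, recovering that representation from a `Decimal`:

```python
import decimal

# (sign=1, digits=(2, 5), exponent=-1) encodes -2.5
d = decimal.Decimal((1, (2, 5), -1))
print(d)             # -2.5
print(d.as_tuple())  # DecimalTuple(sign=1, digits=(2, 5), exponent=-1)
```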
# Formatting
```
import decimal
d = decimal.Decimal(1.1)
print('Precision:')
print('{:.1}'.format(d))
print('{:.2}'.format(d))
print('{:.3}'.format(d))
print('{:.18}'.format(d))
print('\nWidth and precision combined:')
print('{:5.1f} {:5.1g}'.format(d, d))
print('{:5.2f} {:5.2g}'.format(d, d))
print('{:5.2f} {:5.2g}'.format(d, d))
print('\nZero padding:')
print('{:05.1}'.format(d))
print('{:05.2}'.format(d))
print('{:05.3}'.format(d))
```
# Arithmetic
```
import decimal
a = decimal.Decimal('5.1')
b = decimal.Decimal('3.14')
c = 4
d = 3.14
print('a =', repr(a))
print('b =', repr(b))
print('c =', repr(c))
print('d =', repr(d))
print()
print('a + b =', a + b)
print('a - b =', a - b)
print('a * b =', a * b)
print('a / b =', a / b)
print()
print('a + c =', a + c)
print('a - c =', a - c)
print('a * c =', a * c)
print('a / c =', a / c)
print()
print('a + d =', end=' ')
try:
    print(a + d)
except TypeError as e:
    print(e)
```
# Special Values
```
import decimal
for value in ['Infinity', 'NaN', '0']:
    print(decimal.Decimal(value), decimal.Decimal('-' + value))
print()
# Math with infinity
print('Infinity + 1:', (decimal.Decimal('Infinity') + 1))
print('-Infinity + 1:', (decimal.Decimal('-Infinity') + 1))
# Print comparing NaN
print(decimal.Decimal('NaN') == decimal.Decimal('Infinity'))
print(decimal.Decimal('NaN') != decimal.Decimal(1))
```
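Because `NaN` compares unequal even to itself, the reliable way to detect the special values is with the predicate methods `is_nan()` and `is_infinite()`:

```python
import decimal

nan = decimal.Decimal('NaN')
inf = decimal.Decimal('Infinity')

print(nan == nan)         # False: NaN is not equal to anything, itself included
print(nan.is_nan())       # True
print(inf.is_infinite())  # True
```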
# Context
```
import decimal
context = decimal.getcontext()
print('Emax =', context.Emax)
print('Emin =', context.Emin)
print('capitals =', context.capitals)
print('prec =', context.prec)
print('rounding =', context.rounding)
print('flags =')
for f, v in context.flags.items():
    print('  {}: {}'.format(f, v))
print('traps =')
for t, v in context.traps.items():
    print('  {}: {}'.format(t, v))
```
## Precision
```
import decimal
d = decimal.Decimal('0.123456')
for i in range(1, 5):
    decimal.getcontext().prec = i
    print(i, ':', d, d * 1)
```
## Local Context
```
import decimal
with decimal.localcontext() as c:
    c.prec = 2
    print('Local precision:', c.prec)
    print('3.14 / 3 =', (decimal.Decimal('3.14') / 3))
print()
print('Default precision:', decimal.getcontext().prec)
print('3.14 / 3 =', (decimal.Decimal('3.14') / 3))
import decimal
# Set up a context with limited precision
c = decimal.getcontext().copy()
c.prec = 3
# Create our constant
pi = c.create_decimal('3.1415')
# The constant value is rounded off
print('PI :', pi)
# The result of using the constant uses the global context
print('RESULT:', decimal.Decimal('2.01') * pi)
```
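A related context feature not shown above is the rounding mode. It is most often used through `quantize()`, which rounds a value to a fixed exponent; a small sketch:

```python
import decimal

d = decimal.Decimal('1.235')
cents = decimal.Decimal('0.01')

# ROUND_HALF_UP rounds the dropped 5 upward; ROUND_DOWN truncates toward zero
print(d.quantize(cents, rounding=decimal.ROUND_HALF_UP))  # 1.24
print(d.quantize(cents, rounding=decimal.ROUND_DOWN))     # 1.23
```

Because `'1.235'` is represented exactly, the half-way case rounds predictably, which is exactly the behavior that makes `Decimal` the usual choice for money.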
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# The XLA compiler API
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/xla/tutorials/compile"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/xla/tutorials/compile.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/xla/tutorials/compile.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/xla/tutorials/compile.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Import TensorFlow and the XLA library. XLA contains `xla.compile()`, an experimental API that compiles part or all of a model with [XLA](https://www.tensorflow.org/extend/xla/).
```
import tensorflow as tf
from tensorflow.contrib.compiler import xla
```
Define some constants we will need and prepare the MNIST dataset.
```
# Each input image is 28 x 28 pixels
IMAGE_SIZE = 28 * 28
# Number of distinct digit labels, [0..9]
NUM_CLASSES = 10
# Number of examples in each training batch (step)
TRAIN_BATCH_SIZE = 100
# Number of training steps to run
TRAIN_STEPS = 1000
# Load the MNIST dataset.
train, test = tf.keras.datasets.mnist.load_data()
train_ds = tf.data.Dataset.from_tensor_slices(train).batch(TRAIN_BATCH_SIZE).repeat()
test_ds = tf.data.Dataset.from_tensor_slices(test).batch(TRAIN_BATCH_SIZE)
iterator = tf.data.Iterator.from_structure(train_ds.output_types, train_ds.output_shapes)
images, labels = iterator.get_next()
images = tf.reshape(images, [-1, IMAGE_SIZE])
images, labels = tf.cast(images, tf.float32), tf.cast(labels, tf.int64)
```
# Defining the model-building function
The code block below contains a function that builds a simple model with a single dense layer, including both forward and backward propagation.
When called, the code returns two values. `y` is a `tf.Tensor` holding the predicted probability of each target class. `train_step` is a `tf.Operation` that increments `global_step` and updates the variables.
```
def build_mnist_model(x, y_):
    y = tf.keras.layers.Dense(NUM_CLASSES).apply(x)
    cross_entropy = tf.losses.sparse_softmax_cross_entropy(labels=y_, logits=y)
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
    return y, train_step
```
# Enabling XLA
To enable XLA, pass the `build_mnist_model` function to `xla.compile`. The code block below wraps the model in `xla.compile()`, which lets the target function with the provided inputs be executed by XLA.
```
[y] = xla.compile(build_mnist_model, inputs=[images, labels])
```
When compiling the graph, XLA replaces all of the nodes in the graph built by the target function with a handful of XLA operators.
xla.compile does not return any `tf.Operation` that can run independently of the generated XLA operators.
Instead, the `tf.Operation` nodes returned by the target function are added as control dependencies of all of the returned `tf.Tensor` values. Evaluating the returned tensors therefore triggers execution of those `tf.Operation` nodes.
In pseudo-code, the implementation of xla.compile looks like this:
---
```
# Ask TensorFlow to execute the code in an XLA-friendly manner
y, train_step = build_mnist_model(images, labels)
with tf.control_dependencies([train_step]):
    y = tf.identity(y)
# Ask TensorFlow to stop executing the code in an XLA-friendly manner
```
---
xla.compile() always returns a list of `tf.Tensor`s, even if there is only one element.
If you displayed the constructed graph now, you would see that it is not much different from a normal TensorFlow graph, and you would not find the XLA operators mentioned earlier. That is because the actual compilation happens later, when you try to execute the graph with `sess.run()`. At that point, TensorFlow triggers a series of graph-rewriting passes that actually generate the XLA operators, which compile and execute the computation once all of its inputs are available.
# Training and testing the model
```
# Create a session and initialize all variables.
# xla.compile() does not yet work with the Keras model.fit() API or TF eager mode.
sess = tf.Session()
sess.run(tf.global_variables_initializer())
```
The code block below trains the model. Evaluating `y` triggers `train_step`, its control dependency, which updates the model variables.
```
# Feed the training dataset
sess.run(iterator.make_initializer(train_ds))
# Run TRAIN_STEPS training steps
for i in range(TRAIN_STEPS):
    sess.run(y)
print("Model trained for %s steps." % TRAIN_STEPS)
# Test the trained model
# Feed the test dataset
sess.run(iterator.make_initializer(test_ds))
# Calculate the accuracy
correct_prediction = tf.equal(tf.argmax(y, 1), labels)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Prediction accuracy after training: %s" % sess.run(accuracy))
# Clean up the session
sess.close()
```
# Statistical Independence
The word “independence” generally means free from external control or influence, but it also has a lot of connotations in US culture, as it probably does throughout the world. We will apply the concept of independence to many random phenomena, and the implication of independence is generally the same as the definition above: phenomena that are independent cannot influence each other.
In fact, we have already been applying the concept of independence throughout this book when we assume that the outcome of a coin flip, die roll, or simulation does not depend on the values seen in other trials of the same type of experiment. However, now we have the mathematical tools to define the concept of independence precisely.
## Conditional probabilities and independence
Based on the discussion above, try to answer the following question about what independence should mean for conditional probabilities. (Don't worry if you don't intuitively know the answer -- you can keep trying if you don't get it right at first!)
```
from jupyterquiz import display_quiz
git_path="https://raw.githubusercontent.com/jmshea/Foundations-of-Data-Science-with-Python/main/questions/"
#display_quiz("../questions/si-conditional.json")
display_quiz(git_path + "si-conditional.json")
```
Click the “+” sign to reveal the answer and discussion -->
```{toggle}
If $B$ is independent of $A$, then knowledge of $A$ occurring should not change the probability of $B$ occurring. I.e., if we are *given* that $A$ occurred, then the conditional probability of $B$ occurring should equal the unconditional probability:
$$
P(B|A) = P(B)
$$
Let's see the implications of this by substituting the formula for $P(B|A)$ from the definition:
$$
\begin{aligned}
\frac{P(A \cap B)}{P(A)} &= P(B) \\
\Rightarrow P(A \cap B) &= P(A)P(B)
\end{aligned}
$$ (p-b-given-a)
Now we might ask: if $B$ is independent of $A$, does that imply that $A$ is independent of $B$? Let's assume that {eq}`p-b-given-a` holds and apply the result to the definition for $P(A|B)$, assuming that $P(B)>0$:
\begin{align*}
P(A|B) & =\frac{ P(A \cap B) } {P(B) } \\
& = \frac{ P(A) P( B) } {P(B) } \\
& = P(A)
\end{align*}
So if $P(B|A) = P(B)$, then $P(A|B)=P(A)$.
```
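As a numerical sanity check of this equivalence (a simulation sketch, not part of the text), consider two independent rolls of a fair die, with $A$ = first roll less than 3 and $B$ = second roll less than 3:

```python
import random

random.seed(1)
N = 200_000

# Two independent rolls of a fair six-sided die per trial.
rolls = [(random.randint(1, 6), random.randint(1, 6)) for _ in range(N)]

A  = [r for r in rolls if r[0] < 3]                 # first roll is 1 or 2
AB = [r for r in rolls if r[0] < 3 and r[1] < 3]    # both rolls are 1 or 2

p_B_given_A = len(AB) / len(A)   # conditional relative frequency
p_AB        = len(AB) / N

print("P(B|A) ~", round(p_B_given_A, 3), " vs  P(B) =", round(2/6, 3))
print("P(A.B) ~", round(p_AB, 3), " vs  P(A)P(B) =", round((2/6)**2, 3))
```

Both estimates should land close to $1/3$ and $1/9$, respectively, matching the two equivalent forms above.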
## Formal definition of statistically independent events
A simple definition of statistical independence that satisfies all the forms of independence discussed above and that can deal with events of probability zero is as follows:
```{panels}
:column: col-9
DEFINITION
^^^
statistically independent (two events)
: Given a probability space $S, \mathcal{F}, P$ and two events $A\in \mathcal{F}$ and $B \in \mathcal{F}$, $A$ and $B$ are *statistically independent* if and only if (iff)
$$
P(A \cap B) = P(A)P(B).
$$
```
If the context is clear, we will often just write “independent” instead of “statistically independent” or write *s.i.*, which is a commonly used abbreviation.
````{note}
Please take time to study the definition of *statistically independent* carefully. In particular, note the following:
* **Events** can be statistically independent or not
* Probabilities **are not** something that are statistically independent or not
* The “if and only if” statement means that the definition applies in both directions:
* If events $A$ and $B$ are statistically independent, then the probability of the intersection of the events factors as the product of the individual events, $P(A \cap B) = P(A)P(B)$.
* If we have events $A$ and $B$ for which $P(A \cap B) = P(A)P(B)$, then $A$ and $B$ are statistically independent.
````
## When can we assume independence?
Statistical independence is often assumed for many types of events. However, it is important to be careful when applying such a strong assumption because events can be coupled in ways that are subtle. For example, consider the Magician's Coin example. Many people assume that the event of getting Heads on the second flip of the chosen coin will be independent of the outcome of the first flip of the coin. However, we have seen that this assumption is wrong! So, when can we assume that events will be independent?
**Events can be assumed to be statistically independent if they arise from completely separate random phenomena.**
In the case of the Magician's Coin, this assumption is violated in a subtle way. If we knew that the two-headed coin was in use, then we would know the results completely. What is subtle is the fact that observing the outcome of the first flip may give some information about which coin is in use (although we won't be able to show this for observing heads on the first flip until Chapter 6).
Examples that are assumed to result from separate random phenomena are extensive:
* **Devices to generate randomness in games:** Independence can usually be assumed for different flips of a fair coin, rolls of a fair die, or card hands drawn from shuffled decks.
* **Failures of different devices in systems:** mechanical and electrical devices fail at random, and the failures of different devices are often assumed to be independent; examples include light bulbs in a building or computers in a lab.
* **Characteristics of people unrelated to any grouping of those people:** for example, for a group of people at a meeting, having a March birthday would generally be independent events across any two people.
Let's apply this concept to find a simpler way to solve a problem that was introduced in {doc}`../04-probability1/axiomatic-prob`:
**Example**
**(Take 3)** A fair six-sided die is rolled twice. What is the probability that either of the rolls is a value less than 3?
As before, let $E_i$ be the event that the top face on roll $i$ is less than 3, for $i=1,2$.
We assume that different rolls of the die are independent, so $E_1$ and $E_2$ are independent.
As in {doc}`../04-probability1/corollaries`, we can use Corollary 5 of the Axioms of Probability to write
$$
P(E_1 \cup E_2) = P(E_1) + P(E_2) - P(E_1 \cap E_2)
$$
Before, we had to enumerate $E_1 \cap E_2$ over the sample space for the combined roll of the dice to determine $P(E_1 \cap E_2)$. Now, we can just apply statistical independence to write $P(E_1 \cap E_2) = P(E_1)P(E_2)$, yielding
\begin{align*}
P(E_1 \cup E_2) &= P(E_1) + P(E_2) - P(E_1)P(E_2) \\
&= \frac{1}{3} + \frac{1}{3} - \left(\frac{1}{3}\right)\left(\frac{1}{3} \right) \\
&= \frac 5 9 .
\end{align*}
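The same answer can be confirmed by exact enumeration over the 36 equally likely outcomes (a sketch, not part of the text):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))      # 36 equally likely pairs
favorable = [(r1, r2) for r1, r2 in outcomes if r1 < 3 or r2 < 3]
p = Fraction(len(favorable), len(outcomes))
print(p)  # 5/9
```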
**Exercises**
Answer these questions to practice this form of statistical independence:
```
#display_quiz("../questions/si1.json")
display_quiz(git_path + "si1.json")
```
If $A$ and $B$ are s.i. events, then the following pairs of events are also s.i.:
* $A$ and $\overline{B}$
* $\overline{A}$ and $B$
* $\overline{A}$ and $\overline{B}$
I.e., if the probability of an event $A$ occurring does not depend on whether some event $B$ occurs, then it cannot depend on whether the event $B$ does not occur. This probably matches your intuition. However, we should verify it. Let's check the first example. We need to evaluate $P(A \cap \overline{B})$ to see if it factors as $P(A)P(\overline{B})$. Referring to the Venn diagram below, we can see that $A$ consists of the union of the mutually exclusive parts, $A \cap B$ and $A \cap \overline{B}$. So we can write $P\left(A \cap \overline{B} \right)= P(A) - P(A \cap B)$.
<img src="figs/si-intersection.png" alt="Venn Diagram Showing Relation of $A$, $A \cap \overline{B}$, and $A \cap B$" width="400px" style="margin-left:auto;margin-right:auto;">
Then by utilizing the fact that $A$ and $B$ are s.i., we have
\begin{align}
P\left(A \cap \overline{B} \right) &= P(A) - P(A \cap B) \\
&= P(A) - P(A) P(B) \\
&= P(A) \left[ 1- P\left(B\right) \right] \\
&= P(A) P\left( \overline{B} \right)
\end{align}
So, if $A$ and $B$ are s.i., so are $A$ and $\overline{B}$. The other expressions follow through similar manipulation. This is important because we often use this fact to simplify solving problems. We start with a simple example to demonstrate the basic technique:
**Example**
**(Take 4)** A fair six-sided die is rolled twice. What is the probability that either of the rolls is a value less than 3?
As before, let $E_i$ be the event that the top face on roll $i$ is less than 3, for $i=1,2$. Since $E_1$ and $E_2$ are s.i.,
\begin{align}
P(E_1 \cup E_2) &= 1 - P\left(\overline{E_1 \cup E_2}\right) \\
&= 1 - P\left( \overline{E_1} \cap \overline{E_2} \right) \\
&= 1 - P\left( \overline{E_1} \right) P\left( \overline{E_2} \right) \\
&= 1 - \left[ 1 - P\left( {E_1} \right)\right]
\left[ 1- P\left( {E_2} \right) \right]\\
&= 1- \left[ 1 - \left( \frac 2 6 \right) \right] \left[ 1 - \left( \frac 2 6 \right) \right] \\
&= \frac 5 9
\end{align}
Of course for this simple example, it is easiest to directly compute $P\left(\overline{E_1} \right)$, but the full approach shown here is a template that is encountered often when dealing with unions of s.i. events.
To see the power of this method, we first need to define s.i. for more than two events:
````{panels}
:column: col-9
DEFINITION
^^^
statistically independent (for any number of events)
: Given a probability space $S, \mathcal{F}, P$, a collection of events $E_0, E_1, \ldots E_{n-1}$ in $\mathcal{F}$ are *statistically independent* if and only if (iff)
\begin{align}
P(E_i \cap E_j) &= P(E_i) P(E_j), ~~ \forall i \ne j \\
P(E_i \cap E_j \cap E_k) &= P(E_i) P(E_j) P(E_k), ~~ \forall i \ne j \ne k \\
&\vdots \\
P(E_0 \cap E_1 \cap \ldots \cap E_{n-1}) &= P(E_0) P(E_1) \cdots P(E_{n-1})
\end{align}
````
It is not sufficient to just check that the probability of every pair of events factors as the product of the probabilities of the individual events. That defines a weaker form of independence:
````{panels}
:column: col-9
DEFINITION
^^^
pairwise statistically independent
: Given a probability space $S, \mathcal{F}, P$, a collection of events $E_0, E_1, \ldots E_{n-1}$ in $\mathcal{F}$ are *pairwise statistically independent* if and only if (iff)
\begin{align}
P(E_i \cap E_j) &= P(E_i) P(E_j), ~~ \forall i \ne j
\end{align}
````
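A standard illustration of the gap between these two definitions (added here for concreteness; it is not from the text) uses two fair coin flips with $A$ = first flip heads, $B$ = second flip heads, and $C$ = the two flips match. These three events are pairwise s.i. but not mutually s.i.:

```python
from fractions import Fraction
from itertools import product

space = list(product('HT', repeat=2))        # HH, HT, TH, TT -- equally likely

def P(event):
    return Fraction(len(event), len(space))

A = [w for w in space if w[0] == 'H']        # first flip is heads
B = [w for w in space if w[1] == 'H']        # second flip is heads
C = [w for w in space if w[0] == w[1]]       # the two flips match

AB  = [w for w in A if w in B]
AC  = [w for w in A if w in C]
BC  = [w for w in B if w in C]
ABC = [w for w in AB if w in C]

# Every pair factors as the product of the individual probabilities ...
print(P(AB) == P(A)*P(B), P(AC) == P(A)*P(C), P(BC) == P(B)*P(C))
# ... but the triple intersection does not (1/4 vs 1/8):
print(P(ABC), P(A)*P(B)*P(C))
```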
We want to use complements to convert the unions to intersections and the resulting general form looks like
\begin{align}
P\left( \bigcup_i E_i \right) &=
1- \prod_i \left[ 1- P\left( E_i \right) \right].
\end{align}
It may be helpful to interpret this as follows: The complement of any of a collection of events occurring is that none of those events occurs; thus the probability that any of a collection of events occurs is one minus the probability that none of those events occurs.
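The complement trick can be packaged as a small helper (a sketch; `prob_union_indep` is a name introduced here, not from the text):

```python
def prob_union_indep(probs):
    """Probability that at least one of a collection of statistically
    independent events occurs: 1 - prod(1 - p_i)."""
    none_occur = 1.0
    for p in probs:
        none_occur *= (1.0 - p)
    return 1.0 - none_occur

# Two die rolls, each with P(value < 3) = 1/3, reproduces 5/9:
print(prob_union_indep([1/3, 1/3]))
```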
Compare the simplicity of this approach to the form for directly solving for the probability of unions of events (Corollary 7 from {doc}`../04-probability1/corollaries`):
\begin{eqnarray*}
P\left( \bigcup_{k=1}^{n} A_k \right) &=&
\sum_{k=1}^{n} P\left(A_k\right)
-\sum_{j<k} P \left( A_j \cap A_k \right) + \cdots \\
&& +
(-1)^{(n+1) } P\left(A_1 \cap A_2 \cap \cdots \cap A_n \right)
\end{eqnarray*}
Now apply this approach to solve the following practice problems:
```
from jupyterquiz import display_quiz
git_path="https://raw.githubusercontent.com/jmshea/Foundations-of-Data-Science-with-Python/main/"
#display_quiz("quiz/si-unions.json")
display_quiz(git_path + "06-conditional-prob/quiz/si-unions.json")
```
## Relating Statistically Independent and Mutually Exclusive Events
```
git_path1="https://raw.githubusercontent.com/jmshea/Foundations-of-Data-Science-with-Python/main/06-conditional-prob/quiz/"
#display_quiz("quiz/si-me.json")
display_quiz(git_path1 + "si-me.json")
```
Click the “+” sign to reveal the discussion -->
```{toggle}
Suppose $A$ and $B$ are events that are both mutually exclusive and statistically independent.
Since $A$ and $B$ are m.e., $A \cap B = \emptyset$, which further implies $P(A \cap B) = P(\emptyset) =0$.
Since $A$ and $B$ are s.i., $P(A \cap B) = P(A) P(B)$.
Combining these, we have that $P(A \cap B) = P(A)P(B) = 0$, which can only occur if either or both of $P(A)=0$ or $P(B)=0$.
Thus, events **cannot be both statistically independent and mutually exclusive unless at least one of the events has probability zero**.
To gain further insight, consider the m.e. condition, $A \cap B = \emptyset$. This condition implies that if $A$ occurs, then $B$ cannot have occurred, and vice versa. Knowing that either $A$ or $B$ occurred therefore provides a lot of information about the other event, so $A$ and $B$ cannot be independent if they are m.e., except in the special case already identified.
```
## Terminology Review
Use the flashcards below to help you review the terminology introduced in this section.
```
from jupytercards import display_flashcards
#display_flashcards('flashcards/'+'independence.json')
github='https://raw.githubusercontent.com/jmshea/Foundations-of-Data-Science-with-Python/main/'
github+='06-conditional-prob/flashcards/'
display_flashcards(github+'independence.json')
```
# Locality Sensitive Hashing
Locality Sensitive Hashing (LSH) provides for a fast, efficient approximate nearest neighbor search. The algorithm scales well with respect to the number of data points as well as dimensions.
In this assignment, you will
* Implement the LSH algorithm for approximate nearest neighbor search
* Examine the accuracy for different documents by comparing against brute force search, and also contrast runtimes
* Explore the role of the algorithm’s tuning parameters in the accuracy of the method
## Import necessary packages
```
from __future__ import print_function # to conform python 2.x print to python 3.x
import numpy as np
import turicreate
from scipy.sparse import csr_matrix
from sklearn.metrics.pairwise import pairwise_distances
import time
from copy import copy
import matplotlib.pyplot as plt
%matplotlib inline
'''compute norm of a sparse vector
Thanks to: Jaiyam Sharma'''
def norm(x):
sum_sq=x.dot(x.T)
norm=np.sqrt(sum_sq)
return(norm)
```
## Load in the Wikipedia dataset
```
wiki = turicreate.SFrame('people_wiki.sframe/')
```
For this assignment, let us assign a unique ID to each document.
```
wiki = wiki.add_row_number()
```
## Extract TF-IDF matrix
We first use Turi Create to compute a TF-IDF representation for each document.
```
wiki['tf_idf'] = turicreate.text_analytics.tf_idf(wiki['text'])
wiki.head()
```
For the remainder of the assignment, we will use sparse matrices: matrices that have a small number of nonzero entries. A good data structure for sparse matrices stores only the nonzero entries, saving space and speeding up computation. SciPy provides a highly optimized library for sparse matrices, and many matrix operations available for NumPy arrays are also available for SciPy sparse matrices. We first convert the TF-IDF column (in dictionary format) into the SciPy sparse matrix format.
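As a small standalone illustration (with made-up numbers, not the Wikipedia data), the `(data, (row, col))` form of `csr_matrix` used below looks like this:

```python
import numpy as np
from scipy.sparse import csr_matrix

# A toy "corpus" of 3 documents over a 5-word vocabulary: only the
# nonzero (row, col, value) triples are stored.
rows = np.array([0, 0, 1, 2, 2])
cols = np.array([1, 4, 2, 0, 4])
vals = np.array([0.5, 1.2, 0.8, 0.3, 0.9])

mat = csr_matrix((vals, (rows, cols)), shape=(3, 5))
print(mat.nnz)        # 5 stored values instead of 15
print(mat.toarray())  # dense view, for inspection only
```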
```
def sframe_to_scipy(x, column_name):
'''
Convert a dictionary column of an SFrame into a sparse matrix format where
each (row_id, column_id, value) triple corresponds to the value of
x[row_id][column_id], where column_id is a key in the dictionary.
Example
>>> sparse_matrix, map_key_to_index = sframe_to_scipy(sframe, column_name)
'''
assert type(x[column_name][0]) == dict, \
'The chosen column must be dict type, representing sparse data.'
# Stack will transform x to have a row for each unique (row, key) pair.
x = x.stack(column_name, ['feature', 'value'])
# Map feature words to integers
unique_words = sorted(x['feature'].unique())
mapping = {word:i for i, word in enumerate(unique_words)}
x['feature_id'] = x['feature'].apply(lambda x: mapping[x])
# Create numpy arrays that contain the data for the sparse matrix.
row_id = np.array(x['id'])
col_id = np.array(x['feature_id'])
data = np.array(x['value'])
width = x['id'].max() + 1
height = x['feature_id'].max() + 1
# Create a sparse matrix.
mat = csr_matrix((data, (row_id, col_id)), shape=(width, height))
return mat, mapping
%%time
corpus, mapping = sframe_to_scipy(wiki, 'tf_idf')
assert corpus.shape == (59071, 547979)
print('Check passed correctly!')
```
## Train an LSH model
The idea behind LSH is to translate each document's tf-idf vector into a binary index (1 or 0 for each bit) by checking whether its projection falls above or below a randomly defined line. This <a href="http://ethen8181.github.io/machine-learning/recsys/content_based/lsh_text.html">link</a> is helpful for understanding LSH and our code in more detail.
LSH performs an efficient neighbor search by randomly partitioning all reference data points into different bins. Today we will build a popular variant of LSH known as <strong>random binary projection</strong>, which approximates cosine distance. There are other variants we could use for other choices of distance metrics.
The first step is to generate a collection of random vectors from the standard Gaussian distribution.
```
def generate_random_vectors(dim, n_vectors):
return np.random.randn(dim, n_vectors)
```
To visualize these Gaussian random vectors, let's look at an example in low-dimensions. Below, we generate 3 random vectors each of dimension 5.
```
# Generate 3 random vectors of dimension 5, arranged into a single 5 x 3 matrix.
generate_random_vectors(n_vectors=3, dim=5)
```
We now generate random vectors of the same dimensionality as our vocabulary size (547979). Each vector can be used to compute one bit in the bin encoding. We generate 16 vectors, leading to a 16-bit encoding of the bin index for each document.
```
# Generate 16 random vectors of dimension 547979
np.random.seed(0)
n_vectors = 16
random_vectors = generate_random_vectors(corpus.shape[1], n_vectors)
random_vectors.shape
```
Next, we partition data points into bins. Instead of using explicit loops, we'd like to utilize matrix operations for greater efficiency. Let's walk through the construction step by step.
We'd like to decide which bin document 0 should go. Since 16 random vectors were generated in the previous cell, we have 16 bits to represent the bin index. The first bit is given by the sign of the dot product between the first random vector and the document's TF-IDF vector.
```
sample = corpus[0] # vector of tf-idf values for document 0
bin_indices_bits = sample.dot(random_vectors[:,0]) >= 0
bin_indices_bits
```
Similarly, the second bit is computed as the sign of the dot product between the second random vector and the document vector.
```
sample.dot(random_vectors[:, 1]) >= 0 # True if positive sign; False if negative sign
```
We can compute all of the bin index bits at once as follows. Note the absence of the explicit `for` loop over the 16 vectors. Matrix operations let us batch dot-product computation in a highly efficient manner, unlike the `for` loop construction. Given the relative inefficiency of loops in Python, the advantage of matrix operations is even greater.
```
sample.dot(random_vectors) >= 0 # should return an array of 16 True/False bits
np.array(sample.dot(random_vectors) >= 0, dtype=int) # display index bits in 0/1's
```
All documents that obtain exactly this vector will be assigned to the same bin. We'd like to repeat the identical operation on all documents in the Wikipedia dataset and compute the corresponding bin indices. Again, we use matrix operations so that no explicit loop is needed.
```
corpus[0:2].dot(random_vectors) >= 0 # compute bit indices of first two documents
corpus.dot(random_vectors) >= 0 # compute bit indices of ALL documents
```
We're almost done! To make it convenient to refer to individual bins, we convert each binary bin index into a single integer:
```
Bin index integer
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0] => 0
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1] => 1
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0] => 2
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1] => 3
...
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0] => 65532
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1] => 65533
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0] => 65534
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1] => 65535 (= 2^16-1)
```
By the [rules of binary number representation](https://en.wikipedia.org/wiki/Binary_number#Decimal), we just need to compute the dot product between the document vector and the vector consisting of powers of 2:
```
index_bits = (sample.dot(random_vectors) >= 0)
powers_of_two = (1 << np.arange(15, -1, -1))
print(index_bits)
print(powers_of_two)
print(index_bits.dot(powers_of_two))
```
Since it's the dot product again, we batch it with a matrix operation:
```
index_bits = sample.dot(random_vectors) >= 0
index_bits.dot(powers_of_two)
```
This array gives us the integer index of the bins for all documents.
Now we are ready to complete the following function. Given the integer bin indices for the documents, you should compile a list of document IDs that belong to each bin. Since a list is to be maintained for each unique bin index, a dictionary of lists is used.
1. Compute the integer bin indices. This step is already completed.
2. For each document in the dataset, do the following:
* Get the integer bin index for the document.
* Fetch the list of document ids associated with the bin; if no list yet exists for this bin, assign the bin an empty list.
* Add the document id to the end of the list.
```
from collections import defaultdict
def train_lsh(data, n_vectors, seed=None):
if seed is not None:
np.random.seed(seed)
dim = data.shape[1]
random_vectors = generate_random_vectors(dim, n_vectors)
# Partition data points into bins,
# and encode bin index bits into integers
bin_indices_bits = data.dot(random_vectors) >= 0
powers_of_two = 1 << np.arange(n_vectors - 1, -1, step=-1)
bin_indices = bin_indices_bits.dot(powers_of_two)
# Update `table` so that `table[i]` is the list of document ids with bin index equal to i
table = defaultdict(list)
for idx, bin_index in enumerate(bin_indices):
# Fetch the list of document ids associated with the bin and add the document id to the end.
# data_index: document ids
# append() will add a list of document ids to table dict() with key as bin_index
if bin_index not in table: # YOUR CODE HERE
table[bin_index] = []
table[bin_index].append(idx)
# Note that we're storing the bin_indices here
# so we can do some ad-hoc checking with it,
# this isn't actually required
model = {'data': data,
'table': table,
'random_vectors': random_vectors,
'bin_indices': bin_indices,
'bin_indices_bits': bin_indices_bits}
return model
```
**Checkpoint**.
```
def compare_bits(model, id_1, id_2):
bits1 = model['bin_indices_bits'][id_1]
bits2 = model['bin_indices_bits'][id_2]
print('Number of agreed bits: ', np.sum(bits1 == bits2))
return np.sum(bits1 == bits2)
model = train_lsh(corpus, 16, seed=475)
obama_id = wiki[wiki['name'] == 'Barack Obama']['id'][0]
biden_id = wiki[wiki['name'] == 'Joe Biden']['id'][0]
similarity = compare_bits(model, obama_id, biden_id)
```
**Note.** We will be using the model trained here in the following sections, unless otherwise indicated.
## Inspect bins
After generating our LSH model, let's examine the generated bins to get a deeper understanding of them. Here, we will look at the bins of similar documents to see if the result matches intuition. Remember, the idea behind LSH is that similar data points will tend to fall into nearby bins.
```
# This function will help us get similar items, given the id
def get_similarity_items(X_tfidf, item_id, topn=5):
"""
Get the top similar items for a given item id.
The similarity measure here is based on cosine distance.
"""
query = X_tfidf[item_id]
scores = X_tfidf.dot(query.T).toarray().ravel()
best = np.argpartition(scores, -topn)[-topn:]
similar_items = sorted(zip(best, scores[best]), key=lambda x: -x[1])
similar_item_ids = [similar_item for similar_item, _ in similar_items]
print("Similar items to id: {}".format(item_id))
for _id in similar_item_ids:
print(wiki[_id]['name'])
print('\n')
return similar_item_ids
```
Let us look at some documents and see which bins they fall into.
```
wiki[wiki['name'] == 'Barack Obama']
```
**Quiz Question**. What is the document `id` of Barack Obama's article?
**Quiz Question**. Which bin contains Barack Obama's article? Enter its integer index.
```
model
```
Recall from the previous assignment that Joe Biden was a close neighbor of Barack Obama.
```
wiki[wiki['name'] == 'Joe Biden']
```
**Quiz Question**. Examine the bit representations of the bins containing Barack Obama and Joe Biden. In how many places do they agree?
1. 16 out of 16 places (Barack Obama and Joe Biden fall into the same bin)
2. 15 out of 16 places
3. 13 out of 16 places
4. 11 out of 16 places
5. 9 out of 16 places
```
print (wiki[wiki['name'] == 'Joe Biden']['id'][0])
print (wiki[wiki['name'] == 'Barack Obama']['id'][0])
print (np.array(model['bin_indices_bits'][24478], dtype=int))
print (model['bin_indices'][24478])
model['bin_indices_bits'][35817] == model['bin_indices_bits'][24478]
sum(model['bin_indices_bits'][35817] == model['bin_indices_bits'][24478])
```
Compare the result with a former British diplomat:
```
jones_id = wiki[wiki['name']=='Wynn Normington Hugh-Jones']['id'][0]
compare_bits(model, obama_id, jones_id)
```
How about the documents in the same bin as Barack Obama? Are they necessarily more similar to Obama than Biden? Let's look at which documents are in the same bin as the Barack Obama article.
```
model['table'][model['bin_indices'][35817]]
```
There is one more document in the same bin. Which document is it?
```
doc_ids = list(model['table'][model['bin_indices'][35817]])
doc_ids.remove(35817) # display documents other than Obama
docs = wiki.filter_by(values=doc_ids, column_name='id') # filter by id column
docs
res = compare_bits(model, obama_id, docs[0]['id']), compare_bits(model, obama_id, biden_id)
```
**In summary**, similar data points will in general _tend to_ fall into _nearby_ bins, but that's all we can say about LSH. In a high-dimensional space such as text features, we often get unlucky with our selection of only a few random vectors such that dissimilar data points go into the same bin while similar data points fall into different bins. **Given a query document, we must consider all documents in the nearby bins and sort them according to their actual distances from the query.**
## Query the LSH model
Let us first implement the logic for searching nearby neighbors, which goes like this:
```
1. Let L be the bit representation of the bin that contains the query document.
2. Consider all documents in bin L.
3. Consider documents in the bins whose bit representation differs from L by 1 bit.
4. Consider documents in the bins whose bit representation differs from L by 2 bits.
...
```
To obtain candidate bins that differ from the query bin by some number of bits, we use `itertools.combinations`, which produces all possible subsets of a given list. See [this documentation](https://docs.python.org/3/library/itertools.html#itertools.combinations) for details.
```
1. Decide on the search radius r. This will determine the number of different bits between the two vectors.
2. For each subset (n_1, n_2, ..., n_r) of the list [0, 1, 2, ..., num_vector-1], do the following:
* Flip the bits (n_1, n_2, ..., n_r) of the query bin to produce a new bit vector.
* Fetch the list of documents belonging to the bin indexed by the new bit vector.
* Add those documents to the candidate set.
```
Each line of output from the following cell is a 3-tuple indicating where the candidate bin would differ from the query bin. For instance,
```
(0, 1, 3)
```
indicates that the candidate bin differs from the query bin in the first, second, and fourth bits.
```
from itertools import combinations
num_vector = 16
search_radius = 3
for diff in combinations(range(num_vector), search_radius):
print(diff)
```
With this output in mind, implement the logic for nearby bin search:
```
def search_nearby_bins(query_bin_bits, table, search_radius=2, initial_candidates=set()):
"""
For a given query vector and trained LSH model, return all candidate neighbors for
the query among all bins within the given search radius.
Example usage
-------------
>>> model = train_lsh(corpus, num_vector=16, seed=143)
>>> q = model['bin_index_bits'][0] # vector for the first document
>>> candidates = search_nearby_bins(q, model['table'])
"""
num_vector = len(query_bin_bits)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
# Allow the user to provide an initial set of candidates.
candidate_set = copy(initial_candidates)
for different_bits in combinations(range(num_vector), search_radius):
# Flip the bits (n_1,n_2,...,n_r) of the query bin to produce a new bit vector.
## Hint: you can iterate over a tuple like a list
alternate_bits = copy(query_bin_bits)
for i in different_bits:
alternate_bits[i] = ~alternate_bits[i] # YOUR CODE HERE
# Convert the new bit vector to an integer index
nearby_bin = alternate_bits.dot(powers_of_two)
# Fetch the list of documents belonging to the bin indexed by the new bit vector.
# Then add those documents to candidate_set
# Make sure that the bin exists in the table!
# Hint: update() method for sets lets you add an entire list to the set
if nearby_bin in table:
more_docs = table[nearby_bin] # Get all document_ids of the bin
candidate_set.update(more_docs) # YOUR CODE HERE: Update candidate_set with the documents in this bin.
return candidate_set
```
**Checkpoint**. Running the function with `search_radius=0` should yield the list of documents belonging to the same bin as the query.
```
obama_bin_index = model['bin_indices_bits'][35817] # bin index of Barack Obama
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=0)
if candidate_set == set({35817, 54743}):
print('Passed test')
else:
print('Check your code')
print('List of documents in the same bin as Obama: {}'.format(candidate_set))
```
**Checkpoint**. Running the function with `search_radius=1` brings more documents into the candidate set.
```
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=1, initial_candidates=candidate_set)
if candidate_set == set({42243, 28804, 1810, 48919, 24478, 31010, 7331, 23716, 51108, 48040, 36266, 33200, 25023, 23617, 54743, 34910, 35817, 34159, 14451, 23926, 39032, 12028, 43775}):
print('Passed test')
else:
print('Check your code')
print(candidate_set)
```
**Note**. Don't be surprised if few of the candidates look similar to Obama. This is why we add as many candidates as our computational budget allows and sort them by their distance to the query.
Now we have a function that can return all the candidates from neighboring bins. Next we write a function to collect all candidates and compute their true distance to the query.
```
def query(vec, model, k, max_search_radius):
data = model['data']
table = model['table']
random_vectors = model['random_vectors']
num_vector = random_vectors.shape[1]
# Compute bin index for the query vector, in bit representation.
bin_index_bits = (vec.dot(random_vectors) >= 0).flatten()
# Search nearby bins and collect candidates
candidate_set = set()
for search_radius in range(max_search_radius+1):
candidate_set = search_nearby_bins(bin_index_bits, table, search_radius, initial_candidates=candidate_set)
# Sort candidates by their true distances from the query
nearest_neighbors = turicreate.SFrame({'id':candidate_set})
candidates = data[np.array(list(candidate_set)),:]
nearest_neighbors['distance'] = pairwise_distances(candidates, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True), len(candidate_set)
```
Let's try it out with Obama:
```
query(corpus[35817,:], model, k=10, max_search_radius=3)
```
To identify the documents, it's helpful to join this table with the Wikipedia table:
```
query(corpus[35817,:], model, k=10, max_search_radius=3)[0].join(wiki[['id', 'name']], on='id').sort('distance')
```
We have shown that we have a working LSH implementation!
# Experimenting with your LSH implementation
In the following sections we have implemented a few experiments so that you can gain intuition for how your LSH implementation behaves in different situations. This will help you understand the effect of searching nearby bins and the performance of LSH versus computing nearest neighbors using a brute force search.
## Effect of nearby bin search
How does nearby bin search affect the outcome of LSH? There are three variables that are affected by the search radius:
* Number of candidate documents considered
* Query time
* Distance of approximate neighbors from the query
Let us run LSH multiple times, each with different radii for nearby bin search. We will measure the three variables as discussed above.
```
wiki[wiki['name']=='Barack Obama']
%%time
num_candidates_history = []
query_time_history = []
max_distance_from_query_history = []
min_distance_from_query_history = []
average_distance_from_query_history = []
for max_search_radius in range(17):
start=time.time()
result, num_candidates = query(corpus[35817,:], model, k=10,
max_search_radius=max_search_radius)
end=time.time()
query_time = end-start
print('Radius:', max_search_radius)
print(result.join(wiki[['id', 'name']], on='id').sort('distance'))
average_distance_from_query = result['distance'][1:].mean()
max_distance_from_query = result['distance'][1:].max()
min_distance_from_query = result['distance'][1:].min()
num_candidates_history.append(num_candidates)
query_time_history.append(query_time)
average_distance_from_query_history.append(average_distance_from_query)
max_distance_from_query_history.append(max_distance_from_query)
min_distance_from_query_history.append(min_distance_from_query)
```
Notice that the top 10 query results become more relevant as the search radius grows. Let's plot the three variables:
```
plt.figure(figsize=(7,4.5))
plt.plot(num_candidates_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('# of documents searched')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(query_time_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(average_distance_from_query_history, linewidth=4, label='Average of 10 neighbors')
plt.plot(max_distance_from_query_history, linewidth=4, label='Farthest of 10 neighbors')
plt.plot(min_distance_from_query_history, linewidth=4, label='Closest of 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance of neighbors')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
```
Some observations:
* As we increase the search radius, we find more neighbors that are a smaller distance away.
* With increased search radius comes a greater number of documents that have to be searched, so query time increases as a consequence.
* With sufficiently high search radius, the results of LSH begin to resemble the results of brute-force search.
**Quiz Question**. What was the smallest search radius that yielded the correct nearest neighbor, namely Joe Biden?
**Quiz Question**. Suppose our goal was to produce 10 approximate nearest neighbors whose average distance from the query document is within 0.01 of the average for the true 10 nearest neighbors. For Barack Obama, the true 10 nearest neighbors are on average about 0.77. What was the smallest search radius for Barack Obama that produced an average distance of 0.78 or better?
```
for i, v in enumerate(average_distance_from_query_history):
if v <= 0.78:
print (i, v)
```
## Quality metrics for neighbors
The above analysis is limited by the fact that it was run with a single query, namely Barack Obama. We should repeat the analysis for the rest of the dataset. Iterating over all documents would take a long time, so let us randomly choose 10 documents for our analysis.
For each document, we first compute the true 25 nearest neighbors, and then run LSH multiple times with different search radii. We look at two metrics:
* Precision@10: How many of the 10 neighbors given by LSH are among the true 25 nearest neighbors?
* Average cosine distance of the neighbors from the query
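Precision@10 reduces to a set intersection. A minimal sketch (the function name and toy ids below are illustrative, not from the notebook):

```python
def precision_at_k(approx_ids, true_ids, k=10):
    # fraction of the top-k approximate neighbors that are true neighbors
    return len(set(approx_ids[:k]) & set(true_ids)) / float(k)

# toy ids: ground truth is the set of 25 true nearest neighbors (here: 0..24)
approx = [3, 7, 9, 12, 15, 2, 8, 21, 30, 41]
truth = set(range(25))
print(precision_at_k(approx, truth))  # 8 of the 10 ids are true neighbors -> 0.8
```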
```
def brute_force_query(vec, data, k):
num_data_points = data.shape[0]
# Compute distances for ALL data points in training set
nearest_neighbors = turicreate.SFrame({'id':range(num_data_points)})
nearest_neighbors['distance'] = pairwise_distances(data, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True)
```
The following cell will run LSH with multiple search radii and compute the quality metrics for each run. Allow a few minutes for it to complete.
```
%%time
max_radius = 17
precision = {i:[] for i in range(max_radius)}
average_distance = {i:[] for i in range(max_radius)}
query_time = {i:[] for i in range(max_radius)}
num_queries = 10
for i, ix in enumerate(np.random.choice(corpus.shape[0], num_queries, replace=False)):
print('%s / %s' % (i, num_queries))
ground_truth = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for r in range(1,max_radius):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=r)
end = time.time()
query_time[r].append(end-start)
# precision = (# of neighbors both in result and ground_truth)/10.0
precision[r].append(len(set(result['id']) & ground_truth)/10.0)
average_distance[r].append(result['distance'][1:].mean())
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(average_distance[i]) for i in range(1,17)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(precision[i]) for i in range(1,17)], linewidth=4, label='Precision@10')
plt.xlabel('Search radius')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(query_time[i]) for i in range(1,17)], linewidth=4, label='Query time')
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
```
The observations for Barack Obama generalize to the entire dataset.
## Effect of number of random vectors
Let us now turn our focus to the remaining parameter: the number of random vectors. We run LSH with different numbers of random vectors, ranging from 5 to 20, and fix the search radius at 3.
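Why does increasing the number of random vectors shrink the candidate set? With `k` random hyperplanes there are `2**k` bins, so under a rough uniformity assumption the expected bin occupancy falls geometrically. A back-of-the-envelope sketch, with an illustrative corpus size of 60,000 documents:

```python
def expected_docs_per_bin(num_docs, num_vectors):
    # k random hyperplanes partition the corpus into 2**k bins;
    # assuming roughly uniform occupancy, each bin holds this many docs
    return num_docs / float(2 ** num_vectors)

for k in (5, 10, 16):
    print(k, expected_docs_per_bin(60000, k))
```

In practice bins are not uniformly occupied, but the geometric decay explains why query time drops sharply as vectors are added.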
Allow a few minutes for the following cell to complete.
```
precision = {i:[] for i in range(5,20)}
average_distance = {i:[] for i in range(5,20)}
query_time = {i:[] for i in range(5,20)}
num_candidates_history = {i:[] for i in range(5,20)}
ground_truth = {}
num_queries = 10
docs = np.random.choice(corpus.shape[0], num_queries, replace=False)
for i, ix in enumerate(docs):
ground_truth[ix] = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for num_vector in range(5,20):
print('num_vector = %s' % (num_vector))
model = train_lsh(corpus, num_vector, seed=143)
for i, ix in enumerate(docs):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=3)
end = time.time()
query_time[num_vector].append(end-start)
precision[num_vector].append(len(set(result['id']) & ground_truth[ix])/10.0)
average_distance[num_vector].append(result['distance'][1:].mean())
num_candidates_history[num_vector].append(num_candidates)
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(average_distance[i]) for i in range(5,20)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('# of random vectors')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(precision[i]) for i in range(5,20)], linewidth=4, label='Precision@10')
plt.xlabel('# of random vectors')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(query_time[i]) for i in range(5,20)], linewidth=4, label='Query time (seconds)')
plt.xlabel('# of random vectors')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(num_candidates_history[i]) for i in range(5,20)], linewidth=4,
label='# of documents searched')
plt.xlabel('# of random vectors')
plt.ylabel('# of documents searched')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
```
We see a similar trade-off between quality and performance: as the number of random vectors increases, query time goes down because each bin contains fewer documents on average, but the retrieved neighbors are, on average, farther from the query. Conversely, with a small enough number of random vectors, LSH behaves much like brute-force search: many documents land in each bin, so searching the query's own bin already covers much of the corpus, and including neighboring bins may end up searching every document, just as in the brute-force approach.
```
from google.colab import drive
drive.mount('/content/gdrive')
```
## Download the image assets (FG, FG_MASK, BG) from Drive
```
# Background Images
!cp -r /content/gdrive/My\ Drive/Assignment15/A/Input/bg /content/
# Foreground Images
!cp -r /content/gdrive/My\ Drive/Assignment15/A/Input/fg150 /content/
# Foreground Masks
!cp -r /content/gdrive/My\ Drive/Assignment15/A/Input/fg_mask /content/
```
## Download Dataset.zip from Drive, which contains the overlay, mask, and depth images
```
# Dataset archive
!cp -r /content/gdrive/My\ Drive/Assignment15/A/Output/Dataset.zip /content/
! unzip -q Dataset.zip -d Dataset
!rm -r Dataset.zip
%matplotlib inline
import matplotlib.pyplot as plt
def show_images(images: list, imgName):
n: int = len(images)
f = plt.figure(figsize=(20,5))
for i in range(n):
# Debug, plot figure
f.add_subplot(1, n, i + 1).axis('off')
plt.imshow(images[i], cmap="gray")
plt.savefig(imgName)
plt.show(block=True)
from os import listdir
# from google.colab.patches import cv2_imshow
# import cv2
import matplotlib.pyplot as plt
import numpy as np
from random import randint
import PIL
from PIL import Image
import time
%matplotlib inline
import os, errno
# Paths used
DB_name = 'Dataset/'
path_BG = 'bg'
path_FG = 'fg150'
path_fg_mask = 'fg_mask'
bg_imageListDir = listdir(path_BG)
fg_imageListDir = listdir(path_FG)
program_starts = time.time()
bg_imgs = []
fg_imgs = []
fg_mask_imgs = []
overlay_imgs = []
overlay_mask_imgs = []
depth_imgs = []
for i in range(40,50):
bg_image = bg_imageListDir[i]
outputDir = DB_name
outputDir = outputDir + bg_image[:-4]
bg_img = Image.open(path_BG+'/'+bg_image).resize((224,224), Image.ANTIALIAS)
bg_imgs.append(np.asarray(bg_img))
# for fgID,fg_image in enumerate(fg_imageListDir):
fg_image = fg_imageListDir[i]
outputDir_fg = outputDir + '/' + fg_image[:-4]
fg_img = Image.open(path_FG+'/'+fg_image).resize((150,150), Image.ANTIALIAS)
fg_mask = Image.open(path_fg_mask+'/mask_'+fg_image).resize((150,150), Image.ANTIALIAS)
imgNum = np.random.randint(1,41)
overlay = Image.open(outputDir_fg + "/overlay/" + str(imgNum) + '.jpg' )
mask = Image.open(outputDir_fg + "/mask/" + str(imgNum) + '.jpg' )
depth = Image.open(outputDir_fg + "/depth/" + str(imgNum) + '.jpg' )
fg_imgs.append(np.asarray(fg_img))
fg_mask_imgs.append(np.asarray(fg_mask))
overlay_imgs.append(np.asarray(overlay))
overlay_mask_imgs.append(np.asarray(mask))
depth_imgs.append(np.asarray(depth))
print("INPUTS")
print("1. Background Images (Scenes) = 100")
show_images(bg_imgs, "bg_imgs.png")
print("2. Foreground Images with Transparent Background = 100")
show_images(fg_imgs, "fg_imgs.png")
print("3. Mask for foreground = 100")
show_images(fg_mask_imgs, "fg_mask_imgs.png")
print("OUTPUTS")
print("4. Overlay the foreground on top of the background randomly. Flip the foreground as well. We call this fg_bg = 400000")
show_images(overlay_imgs, "overlay_imgs.png")
print("5. Mask for foreground in the Overlayed Image(fg_bg) = 400000")
show_images(overlay_mask_imgs, "overlay_mask_imgs.png")
print("6. Depth map generated for Overlayed images(fg_bg) = 400000")
show_images(depth_imgs, "depth_imgs.png")
```
```
import numpy as np
import pandas as pd
import patsy as pt
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
import statsmodels.api as sm
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
import warnings
warnings.filterwarnings('ignore')
```
## 7.8 Lab: Non-Linear Modelling
Load wage dataset
```
wage_df = pd.read_csv('./data/Wage.csv')
wage_df = wage_df.drop(wage_df.columns[0], axis=1)
wage_df['education'] = wage_df['education'].map({'1. < HS Grad': 1.0,
'2. HS Grad': 2.0,
'3. Some College': 3.0,
'4. College Grad': 4.0,
'5. Advanced Degree': 5.0
})
wage_df.head()
```
### Polynomial regression
```
# Derive 4 degree polynomial features of age
degree = 4
f = ' + '.join(['np.power(age, {})'.format(i) for i in np.arange(1, degree+1)])
X = pt.dmatrix(f, wage_df)
y = np.asarray(wage_df['wage'])
# Fit linear model
model = sm.OLS(y, X).fit()
y_hat = model.predict(X)
model.summary()
# STATS
# ----------------------------------
# Reference: https://stats.stackexchange.com/questions/44838/how-are-the-standard-errors-of-coefficients-calculated-in-a-regression
# Covariance of coefficient estimates
mse = np.sum(np.square(y_hat - y)) / y.size
cov = mse * np.linalg.inv(X.T @ X)
# ...or alternatively this stat is provided by stats models:
#cov = model.cov_params()
# Calculate variance of f(x)
var_f = np.diagonal((X @ cov) @ X.T)
# Derive standard error of f(x) from variance
se = np.sqrt(var_f)
conf_int = 2*se
# PLOT
# ----------------------------------
# Setup axes
fig, ax = plt.subplots(figsize=(10,10))
# Plot datapoints
sns.scatterplot(x='age', y='wage',
color='tab:gray',
alpha=0.2,
ax=ax,
data=pd.concat([wage_df['age'], wage_df['wage']], axis=1));
# Plot estimated f(x)
sns.lineplot(x=X[:, 1], y=y_hat, ax=ax, color='blue');
# Plot confidence intervals
sns.lineplot(x=X[:, 1], y=y_hat+conf_int, color='blue');
sns.lineplot(x=X[:, 1], y=y_hat-conf_int, color='blue');
# dash confidence intervals
ax.lines[1].set_linestyle("--")
ax.lines[2].set_linestyle("--")
```
### Selecting degrees of freedom for polynomial regression with ANOVA
**ISL Authors:** In performing a polynomial regression we must decide on the degree of the polynomial to use. One way to do this is by using hypothesis tests. We now fit models ranging from linear to a degree-5 polynomial and seek to determine the simplest model which is sufficient to explain the relationship between wage and age.
```
# Derive 5 degree polynomial features of age
degree = 5
f = ' + '.join(['np.power(age, {})'.format(i) for i in np.arange(1, degree+1)])
X = pt.dmatrix(f, wage_df)
y = np.asarray(wage_df['wage'])
# Get models of increasing degrees
model_1 = sm.OLS(y, X[:, 0:2]).fit()
model_2 = sm.OLS(y, X[:, 0:3]).fit()
model_3 = sm.OLS(y, X[:, 0:4]).fit()
model_4 = sm.OLS(y, X[:, 0:5]).fit()
model_5 = sm.OLS(y, X[:, 0:6]).fit()
# Compare models with ANOVA
display(sm.stats.anova_lm(model_1, model_2, model_3, model_4, model_5))
```
**ISL Authors:** The p-value comparing the linear Model 1 to the quadratic Model 2 is essentially zero (<10^-15), indicating that a linear fit is not sufficient. Similarly the p-value comparing the quadratic Model 2 to the cubic Model 3 is very low (0.0017), so the quadratic fit is also insufficient. The p-value comparing the cubic and degree-4 polynomials, Model 3 and Model 4, is approximately 5% while the degree-5 polynomial Model 5 seems unnecessary because its p-value is 0.37. Hence, either a cubic or a quartic polynomial appear to provide a reasonable fit to the data, but lower- or higher-order models are not justified.
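The ANOVA comparisons above are nested F-tests; the statistic can be computed by hand from the residual sums of squares of the two models. The numbers below are toy values for illustration, not the notebook's output:

```python
def nested_f_stat(rss_reduced, df_reduced, rss_full, df_full):
    # F = ((RSS_r - RSS_f) / (df_r - df_f)) / (RSS_f / df_f)
    return ((rss_reduced - rss_full) / (df_reduced - df_full)) / (rss_full / df_full)

# hypothetical residual sums of squares for two nested models
f_stat = nested_f_stat(rss_reduced=5022216.0, df_reduced=2998,
                       rss_full=4793430.0, df_full=2997)
print(round(f_stat, 2))
```

A large F (and correspondingly small p-value) indicates the extra term in the fuller model explains a significant amount of variance.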
```
model_5.pvalues
```
**Revision note:** ISL suggests that the above results should be the same as the ANOVA p-values, but that isn't observed here using statsmodels. Why?
**ISL Authors:** However, the ANOVA method works whether or not we used orthogonal polynomials; it also works when we have other terms in the model as well. For example, we can use anova() to compare these three models:
```
# Derive 5 degree polynomial features of age
degree = 3
f = 'education +' + ' + '.join(['np.power(age, {})'.format(i) for i in np.arange(1, degree+1)])
X = pt.dmatrix(f, wage_df)
y = np.asarray(wage_df['wage'])
# Get models of increasing degrees
model_1 = sm.OLS(y, X[:, 0:3]).fit()
model_2 = sm.OLS(y, X[:, 0:4]).fit()
model_3 = sm.OLS(y, X[:, 0:5]).fit()
# Compare models with ANOVA
display(sm.stats.anova_lm(model_1, model_2, model_3))
```
### Polynomial logistic regression with bootstrapped confidence intervals
```
# Create logistic repsonse for wage > 250
wage_df['wage_above_250'] = (wage_df['wage'] > 250).astype(np.float64)
wage_df.head()
def logit_boot(df, idx):
# Derive 4 degree polynomial features of age
degree = 4
f = ' + '.join(['np.power(age, {})'.format(i) for i in np.arange(1, degree+1)])
X = pt.dmatrix(f, df.loc[idx])
y = np.asarray(df['wage_above_250'].loc[idx])
# Some sample observations for predictions
x1_test = np.arange(20,81)
X_test = np.array([np.ones(len(x1_test)), x1_test, np.power(x1_test, 2), np.power(x1_test, 3), np.power(x1_test, 4)]).T
# Fit logistic regression model
model = sm.Logit(y, X).fit(disp=0)
y_hat = model.predict(X_test)
return y_hat
def tenth_percentile(df, idx):
Z = np.array(df.loc[idx])
return np.percentile(Z, 10)
def boot_idx(n):
"""Return index for bootstrap sample of size n
e.g. generate array in range 0 to n, with replacement"""
return np.random.randint(low=0, high=n, size=n)
def boot(fn, data_df, samples):
"""Perform bootstrap for B number of samples"""
results = []
for s in range(samples):
Z = fn(data_df, boot_idx(data_df.shape[0]))
results += [Z]
return np.array(results)
# Get y_hat for B number of bootstrap samples
B = 1000
boot_obs = boot(logit_boot, wage_df, samples=B)
SE_pred = np.std(boot_obs, axis=0)
# Calculate 5% and 95% percentiles of y_hat across all bootstrap samples
upper = np.percentile(boot_obs, 95, axis=0)
lower = np.percentile(boot_obs, 5, axis=0)
# Derive 4 degree polynomial features of age
degree = 4
f = ' + '.join(['np.power(age, {})'.format(i) for i in np.arange(1, degree+1)])
X = pt.dmatrix(f, wage_df)
y = np.asarray(wage_df['wage_above_250'])
# Some test observations
x1_test = np.arange(20,81)
X_test = np.array([np.ones(len(x1_test)), x1_test, np.power(x1_test, 2), np.power(x1_test, 3), np.power(x1_test, 4)]).T
# Fit logistic regression model
model = sm.Logit(y, X).fit(disp=0)
y_hat = model.predict(X_test)
# Setup axes
fig, ax = plt.subplots(figsize=(10,10))
plot_df = pd.DataFrame({'Age': x1_test, 'Pr(Wage>250 | Age)': y_hat})
sns.lineplot(x='Age', y='Pr(Wage>250 | Age)', data=plot_df, color='red')
sns.lineplot(x=x1_test, y=upper, color='blue');
sns.lineplot(x=x1_test, y=lower, color='blue');
# Plot all f(x) estimations
for b in boot_obs:
#plot_df = pd.DataFrame({'Age': boot_obs[0][:, 0], 'Pr(Wage>250 | Age)': boot_obs[0][:, 1]})
sns.lineplot(x=x1_test, y=b, alpha=0.05)
```
Here I've used the bootstrap sampling method to get estimates of f(x) for 1000 samples of the dataset. The 5th and 95th percentile of these estimates are shown in blue. The estimate for f(x) using the full dataset is shown in red.
**Revision note:** I expected the 5th and 95th percentiles to correspond to the confidence intervals reported by the ISL authors. They are largely similar, except for the upper bound at high values of age, which tends to zero here but tends to 1 for the ISL authors.
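The percentile method used above can be isolated in a few lines. Here the "bootstrap estimates" are synthetic draws, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic bootstrap estimates of f(x) at a single point: B = 1000 draws
boot_estimates = rng.normal(loc=0.5, scale=0.1, size=1000)
# the 5th and 95th empirical percentiles bound a 90% percentile interval
lower, upper = np.percentile(boot_estimates, [5, 95])
print(lower, upper)
```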
### Step function
```
### Step function
steps = 6
# Segment data into 6 segments by age
cuts = pd.cut(wage_df['age'], steps)
X = np.asarray(pd.get_dummies(cuts))
y = np.asarray(wage_df['wage'])
# Fit linear regression model
model = sm.OLS(y, X).fit()
y_hat = model.predict(X)
# PLOT
# ----------------------------------
# Setup axes
fig, ax = plt.subplots(figsize=(10,10))
# Plot datapoints
sns.scatterplot(x='age', y='wage',
color='tab:gray',
alpha=0.2,
ax=ax,
data=pd.concat([wage_df['age'], wage_df['wage']], axis=1));
# Plot estimated f(x)
sns.lineplot(x=wage_df['age'], y=y_hat, ax=ax, color='blue');
```
## 7.8.2 Splines
```
# Putting confidence interval calcs into function for convenience.
def confidence_interval(X, y, y_hat):
"""Compute 5% confidence interval for linear regression"""
# STATS
# ----------------------------------
# Reference: https://stats.stackexchange.com/questions/44838/how-are-the-standard-errors-of-coefficients-calculated-in-a-regression
# Covariance of coefficient estimates
mse = np.sum(np.square(y_hat - y)) / y.size
cov = mse * np.linalg.inv(X.T @ X)
# ...or alternatively this stat is provided by stats models:
#cov = model.cov_params()
# Calculate variance of f(x)
var_f = np.diagonal((X @ cov) @ X.T)
# Derive standard error of f(x) from variance
se = np.sqrt(var_f)
conf_int = 2*se
return conf_int
# Fit spline with df=7 basis functions (cubic)
# Use patsy to generate entire matrix of basis functions
X = pt.dmatrix('bs(age, df=7, degree=3, include_intercept=True)', wage_df)
y = np.asarray(wage_df['wage'])
# Fit linear regression model
model = sm.OLS(y, X).fit()
y_hat = model.predict(X)
conf_int = confidence_interval(X, y, y_hat)
# PLOT
# ----------------------------------
# Setup axes
fig, ax = plt.subplots(figsize=(10,10))
# Plot datapoints
sns.scatterplot(x='age', y='wage',
color='tab:gray',
alpha=0.2,
ax=ax,
data=pd.concat([wage_df['age'], wage_df['wage']], axis=1));
# Plot estimated f(x)
sns.lineplot(x=wage_df['age'], y=y_hat, ax=ax, color='blue');
# Plot confidence intervals
sns.lineplot(x=wage_df['age'], y=y_hat+conf_int, color='blue');
sns.lineplot(x=wage_df['age'], y=y_hat-conf_int, color='blue');
# dash confidence intervals
ax.lines[1].set_linestyle("--")
ax.lines[2].set_linestyle("--")
# Fit a natural spline with seven degrees of freedom
# Use patsy to generate entire matrix of basis functions
X = pt.dmatrix('cr(age, df=7)', wage_df) # REVISION NOTE: Something funky happens when df=6
y = np.asarray(wage_df['wage'])
# Fit linear regression model
model = sm.OLS(y, X).fit()
y_hat = model.predict(X)
conf_int = confidence_interval(X, y, y_hat)
# PLOT
# ----------------------------------
# Setup axes
fig, ax = plt.subplots(figsize=(10,10))
# Plot datapoints
sns.scatterplot(x='age', y='wage',
color='tab:gray',
alpha=0.2,
ax=ax,
data=pd.concat([wage_df['age'], wage_df['wage']], axis=1));
# Plot estimated f(x)
sns.lineplot(x=wage_df['age'], y=y_hat, ax=ax, color='blue');
# Plot confidence intervals
sns.lineplot(x=wage_df['age'], y=y_hat+conf_int, color='blue');
sns.lineplot(x=wage_df['age'], y=y_hat-conf_int, color='blue');
# dash confidence intervals
ax.lines[1].set_linestyle("--")
ax.lines[2].set_linestyle("--")
```
Comparing the above two plots, we can see the increased linearity of the natural spline at the boundaries of age. This seems to yield a slight increase in confidence at the extremes of age.
The ISLR authors also cover smoothing splines. Smoothing splines seem to be poorly supported in Python; I could only find `scipy.interpolate.UnivariateSpline`.
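For completeness, a minimal `UnivariateSpline` sketch on synthetic data. Note its smoothing factor `s` (chosen arbitrarily here) is not equivalent to specifying degrees of freedom as in R's `smooth.spline`:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# synthetic noisy sine curve
x = np.linspace(0, 10, 50)
y = np.sin(x) + np.random.default_rng(0).normal(0, 0.1, size=50)
# s is a smoothing factor (larger -> smoother fit)
spline = UnivariateSpline(x, y, s=1.0)
y_smooth = spline(x)
print(y_smooth.shape)
```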
### 7.8.3 GAMs
**ISL Authors:** We now fit a GAM to predict wage using natural spline functions of year and age, treating education as a qualitative predictor, as in (7.16). Since this is just a big linear regression model using an appropriate choice of basis functions, we can simply do this using the lm() function.
```
# Use patsy to generate entire matrix of basis functions
X = pt.dmatrix('cr(year, df=4)+cr(age, df=5) + education', wage_df)
y = np.asarray(wage_df['wage'])
# Fit linear regression model
model = sm.OLS(y, X).fit()
y_hat = model.predict(X)
conf_int = confidence_interval(X, y, y_hat)
# Plot estimated f(year)
sns.lineplot(x=wage_df['year'], y=y_hat);
# Plot estimated f(age)
sns.lineplot(x=wage_df['age'], y=y_hat);
# Plot estimated f(education)
sns.boxplot(x=wage_df['education'], y=y_hat);
```
Not quite the same as the plots achieved by the ISL authors using R, but gives similar insight.
### Comparing GAM configurations with ANOVA
```
# Model 1
X = pt.dmatrix('cr(age, df=5) + education', wage_df)
y = np.asarray(wage_df['wage'])
model1 = sm.OLS(y, X).fit(disp=0)
# Model 2
X = pt.dmatrix('year+cr(age, df=5) + education', wage_df)
y = np.asarray(wage_df['wage'])
model2 = sm.OLS(y, X).fit(disp=0)
# Model 3
X = pt.dmatrix('cr(year, df=4)+cr(age, df=5) + education', wage_df)
y = np.asarray(wage_df['wage'])
model3 = sm.OLS(y, X).fit(disp=0)
# Compare models with ANOVA
display(sm.stats.anova_lm(model1, model2, model3))
```
The `Pr(>F)` of 0.000174 for `Model 2` suggests that it is significantly better than Model 1, whereas with a p-value > 0.05 Model 3 does not seem to be significantly better than Model 2.
We conclude that including a linear year feature improves the model, but there is no evidence that a non-linear function of year improves it.
```
display(model3.summary())
```
Inspecting the p-values for Model 3 features, we note a p-value > 0.05 for x9, which corresponds to the 5th degree of freedom for age.
**Revision note:** The ISL authors report high p-values for the year features, which would reinforce the above ANOVA result, but we can't see that here. Perhaps the OLS `.summary()` is not equivalent to R's `summary(gam)`.
### Local Regression GAM
```
x = np.asarray(wage_df['age'])
y = np.asarray(wage_df['wage'])
# Create lowess feature for age
wage_df['age_lowess'] = sm.nonparametric.lowess(y, x, frac=.7, return_sorted=False)
# Fit linear regression model
X = pt.dmatrix('cr(year, df=4)+ age_lowess + education', wage_df)
y = np.asarray(wage_df['wage'])
model = sm.OLS(y, X).fit(disp=0)
model.summary()
```
# Deploy a Trained TensorFlow V2 Model
In this notebook, we walk through the process of deploying a trained model to a SageMaker endpoint. If you recently ran [the notebook for training](get_started_mnist_deploy.ipynb) with the `%store` magic, the `model_data` can be restored. Otherwise, we retrieve the
model artifact from a public S3 bucket.
```
# setups
import os
import json
import sagemaker
from sagemaker.tensorflow import TensorFlowModel
from sagemaker import get_execution_role, Session
import boto3
# Get global config
with open('code/config.json', 'r') as f:
CONFIG=json.load(f)
sess = Session()
role = get_execution_role()
%store -r tf_mnist_model_data
try:
tf_mnist_model_data
except NameError:
import json
# copy a pretrained model from a public bucket to your default bucket
s3 = boto3.client('s3')
bucket = CONFIG['public_bucket']
key = 'datasets/image/MNIST/model/tensorflow-training-2020-11-20-23-57-13-077/model.tar.gz'
s3.download_file(bucket, key, 'model.tar.gz')
tf_mnist_model_data = sess.upload_data(
path='model.tar.gz', bucket=sess.default_bucket(), key_prefix='model/tensorflow')
os.remove('model.tar.gz')
print(tf_mnist_model_data)
```
## TensorFlow Model Object
The `TensorFlowModel` class allows you to define an environment for making inference using your
model artifact. Like the `TensorFlow` estimator class we discussed
[in this notebook for training a TensorFlow model](
get_started_mnist_train.ipynb), it is a high-level API used to set up a docker image for your model hosting service.
Once it is properly configured, it can be used to create a SageMaker
endpoint on an EC2 instance. The SageMaker endpoint is a containerized environment that uses your trained model
to make inference on incoming data via RESTful API calls.
Some common parameters used to initiate the `TensorFlowModel` class are:
- role: An IAM role to make AWS service requests
- model_data: the S3 bucket URI of the compressed model artifact. It can be a path to a local file if the endpoint
is to be deployed on the SageMaker instance you are using to run this notebook (local mode)
- framework_version: version of the TensorFlow package to be used
- py_version: python version to be used
```
model = TensorFlowModel(
role=role,
model_data=tf_mnist_model_data,
framework_version='2.3.0',
)
```
## Execute the Inference Container
Once the `TensorFlowModel` class is initiated, we can call its `deploy` method to run the container for the hosting
service. Some common parameters needed to call `deploy` methods are:
- initial_instance_count: the number of SageMaker instances to be used to run the hosting service.
- instance_type: the type of SageMaker instance to run the hosting service. Set it to `local` if you want to run the hosting service on the local SageMaker instance. Local mode is typically used for debugging.
<span style="color:red"> Note: local mode is not supported in SageMaker Studio </span>
```
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer
# set local_mode to False if you want to deploy on a remote
# SageMaker instance
local_mode=False
if local_mode:
instance_type='local'
else:
instance_type='ml.c4.xlarge'
predictor = model.deploy(
initial_instance_count=1,
instance_type=instance_type,
)
```
## Making Predictions Against a SageMaker endpoint
Once you have the `Predictor` instance returned by `model.deploy(...)`, you can send prediction requests to your endpoints. In this case, the model accepts normalized
batch images in depth-minor convention.
```
# use some dummy inputs
import numpy as np
dummy_inputs = {
'instances': np.random.rand(4, 28, 28, 1)
}
res = predictor.predict(dummy_inputs)
print(res)
```
The formats of the input and output data correspond directly to the request and response
format of the `Predict` method in [TensorFlow Serving REST API](https://www.tensorflow.org/tfx/serving/api_rest), for example, the key of the array to be
parsed to the model in the `dummy_inputs` needs to be called `instances`. Moreover, the input data needs to have a batch dimension.
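For example, a well-formed request body for a single 28x28x1 image would look like this. This is only a sketch of the payload shape; the image data is random:

```python
import json
import numpy as np

# a single hypothetical 28x28x1 image
image = np.random.rand(28, 28, 1)
# the Predict REST API wants a top-level "instances" key and a batch dim
payload = {'instances': np.expand_dims(image, 0).tolist()}  # shape (1, 28, 28, 1)
body = json.dumps(payload)
print(len(payload['instances']), len(payload['instances'][0]))  # batch of 1, 28 rows
```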
```
# Uncomment the following lines to see an example that cannot be processed by the endpoint
#dummy_data = {
# 'instances': np.random.rand(28, 28, 1).tolist()
#}
#print(predictor.predict(dummy_data))
```
Now, let's use real MNIST test data to test the endpoint. We use helper functions defined in `utils.mnist` to
download the MNIST dataset and normalize the input data.
```
from utils.mnist import mnist_to_numpy, normalize
import random
import matplotlib.pyplot as plt
%matplotlib inline
data_dir = '/tmp/data'
X, _ = mnist_to_numpy(data_dir, train=False)
# randomly sample 16 images to inspect
mask = random.sample(range(X.shape[0]), 16)
samples = X[mask]
# plot the images
fig, axs = plt.subplots(nrows=1, ncols=16, figsize=(16, 1))
for i, splt in enumerate(axs):
splt.imshow(samples[i])
```
Since the model accepts normalized input, you will need to normalize the samples before
sending them to the endpoint.
```
samples = normalize(samples, axis=(1, 2))
predictions = predictor.predict(
np.expand_dims(samples, 3) # add channel dim
)['predictions']
# convert softmax probabilities to predicted class labels
predictions = np.array(predictions, dtype=np.float32)
predictions = np.argmax(predictions, axis=1)
print("Predictions: ", predictions.tolist())
```
## (Optional) Clean up
If you do not plan to use the endpoint, you should delete it to free up some computational
resources. If you used local mode, you will need to manually delete the docker container bound
to port 8080 (the port that listens for incoming requests).
```
import os
if not local_mode:
predictor.delete_endpoint()
else:
os.system("docker container ls | grep 8080 | awk '{print $1}' | xargs docker container rm -f")
```
# Robot Class
In this project, we'll be localizing a robot in a 2D grid world. The basis for simultaneous localization and mapping (SLAM) is to gather information from a robot's sensors and motions over time, and then use information about measurements and motion to re-construct a map of the world.
### Uncertainty
As you've learned, robot motion and sensors have some uncertainty associated with them. For example, imagine a car driving uphill and downhill; the speedometer reading will likely overestimate the speed of the car going uphill and underestimate the speed of the car going downhill because it cannot perfectly account for gravity. Similarly, we cannot perfectly predict the *motion* of a robot. A robot is likely to slightly overshoot or undershoot a target location.
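As a toy illustration of sensor uncertainty, additive noise drawn uniformly from [-noise, +noise) can be simulated like this (the helper name below is mine, not part of the robot class):

```python
import random

def noisy_measurement(true_distance, measurement_noise):
    # additive noise drawn uniformly from [-measurement_noise, +measurement_noise)
    return true_distance + (random.random() * 2.0 - 1.0) * measurement_noise

random.seed(42)
reading = noisy_measurement(10.0, 1.0)
print(reading)  # a value within 1.0 of the true distance 10.0
```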
In this notebook, we'll look at the `robot` class that is *partially* given to you for the upcoming SLAM notebook. First, we'll create a robot and move it around a 2D grid world. Then, **you'll be tasked with defining a `sense` function for this robot that allows it to sense landmarks in a given world**! It's important that you understand how this robot moves, senses, and how it keeps track of different landmarks that it sees in a 2D grid world, so that you can work with its movement and sensor data.
---
Before we start analyzing robot motion, let's load in our resources and define the `robot` class. You can see that this class initializes the robot's position and adds measures of uncertainty for motion. You'll also see a `sense()` function which is not yet implemented, and you will learn more about that later in this notebook.
```
# import some resources
import numpy as np
import matplotlib.pyplot as plt
import random
%matplotlib inline
# the robot class
class robot:
# --------
# init:
# creates a robot with the specified parameters and initializes
# the location (self.x, self.y) to the center of the world
#
def __init__(self, world_size = 100.0, measurement_range = 30.0,
motion_noise = 1.0, measurement_noise = 1.0):
self.world_size = world_size
self.x = world_size / 2.0
self.y = world_size / 2.0
self.measurement_range = measurement_range
self.motion_noise = motion_noise
self.measurement_noise = measurement_noise
self.landmarks = []
self.num_landmarks = 0
    # returns a random float in the range [-1.0, 1.0)
def rand(self):
return random.random() * 2.0 - 1.0
# --------
# move: attempts to move robot by dx, dy. If outside world
# boundary, then the move does nothing and instead returns failure
#
def move(self, dx, dy):
x = self.x + dx + self.rand() * self.motion_noise
y = self.y + dy + self.rand() * self.motion_noise
if x < 0.0 or x > self.world_size or y < 0.0 or y > self.world_size:
return False
else:
self.x = x
self.y = y
return True
# --------
# sense: returns x- and y- distances to landmarks within visibility range
# because not all landmarks may be in this range, the list of measurements
# is of variable length. Set measurement_range to -1 if you want all
# landmarks to be visible at all times
#
## TODO: complete the sense function
def sense(self):
''' This function does not take in any parameters, instead it references internal variables
(such as self.landmarks) to measure the distance between the robot and any landmarks
that the robot can see (that are within its measurement range).
This function returns a list of landmark indices, and the measured distances (dx, dy)
between the robot's position and said landmarks.
This function should account for measurement_noise and measurement_range.
One item in the returned list should be in the form: [landmark_index, dx, dy].
'''
measurements = []
## TODO: iterate through all of the landmarks in a world
## TODO: For each landmark
## 1. compute dx and dy, the distances between the robot and the landmark
## 2. account for measurement noise by *adding* a noise component to dx and dy
## - The noise component should be a random value between [-1.0, 1.0)*measurement_noise
## - Feel free to use the function self.rand() to help calculate this noise component
## - It may help to reference the `move` function for noise calculation
## 3. If either of the distances, dx or dy, fall outside of the internal var, measurement_range
## then we cannot record them; if they do fall in the range, then add them to the measurements list
## as list.append([index, dx, dy]), this format is important for data creation done later
## TODO: return the final, complete list of measurements
for idx, (x, y) in enumerate(self.landmarks):
while True:
xn = x + self.rand() * self.measurement_noise
if xn >= 0.0 and xn < self.world_size:
break
while True:
yn = y + self.rand() * self.measurement_noise
if yn >= 0.0 and yn < self.world_size:
break
dx = xn - self.x
dy = yn - self.y
if np.sqrt(dx*dx + dy*dy) <= self.measurement_range:
# add landmark to list of observed landmarks
measurements.append([idx, dx, dy])
return measurements
# --------
# make_landmarks:
# make random landmarks located in the world
#
def make_landmarks(self, num_landmarks):
self.landmarks = []
for i in range(num_landmarks):
self.landmarks.append([round(random.random() * self.world_size),
round(random.random() * self.world_size)])
self.num_landmarks = num_landmarks
# called when print(robot) is called; prints the robot's location
def __repr__(self):
return 'Robot: [x=%.5f y=%.5f]' % (self.x, self.y)
```
## Define a world and a robot
Next, let's instantiate a robot object. As you can see in `__init__` above, the robot class takes in a number of parameters including a world size and some values that indicate the sensing and movement capabilities of the robot.
In the next example, we define a small 10x10 square world, a measurement range that is half that of the world, and small values for motion and measurement noise. These values will typically be about 10 times larger, but we just want to demonstrate this behavior on a small scale. You are also free to change these values and note what happens as your robot moves!
```
world_size = 10.0 # size of world (square)
measurement_range = 5.0 # range at which we can sense landmarks
motion_noise = 0.2 # noise in robot motion
measurement_noise = 0.2 # noise in the measurements
# instantiate a robot, r
r = robot(world_size, measurement_range, motion_noise, measurement_noise)
# print out the location of r
print(r)
```
## Visualizing the World
In the given example, we can see/print out that the robot is in the middle of the 10x10 world at (x, y) = (5.0, 5.0), which is exactly what we expect!
However, it's kind of hard to imagine this robot in the center of a world without visualizing the grid itself, and so in the next cell we provide a helper visualization function, `display_world`, that will display a grid world in a plot and draw a red `o` at the location of our robot, `r`. The details of how this function works can be found in the `helpers.py` file in the home directory; you do not have to change anything in this `helpers.py` file.
```
# import helper function
from helpers import display_world
# define figure size
plt.rcParams["figure.figsize"] = (5,5)
# call display_world and display the robot in its grid world
print(r)
display_world(int(world_size), [r.x, r.y])
```
## Movement
Now you can really picture where the robot is in the world! Next, let's call the robot's `move` function. We'll ask it to move some distance `(dx, dy)` and we'll see that this motion is not perfect by the placement of our robot `o` and by the printed out position of `r`.
Try changing the values of `dx` and `dy` and/or running this cell multiple times; see how the robot moves and how the uncertainty in robot motion accumulates over multiple movements.
#### For a `dx` = 1, does the robot move *exactly* one spot to the right? What about `dx` = -1? What happens if you try to move the robot past the boundaries of the world?
```
# choose values of dx and dy (negative works, too)
dx = 1
dy = 2
r.move(dx, dy)
# print out the exact location
print(r)
# display the world after movement; note that this is the same call as before
# the robot tracks its own movement
display_world(int(world_size), [r.x, r.y])
```
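To see how this uncertainty accumulates, here is a minimal standalone sketch (independent of the `robot` class, but assuming the same uniform noise model) that repeats a noisy one-unit move and tracks the drift from the ideal path:

```python
import random

motion_noise = 0.2
x_ideal, x_noisy = 0.0, 0.0

random.seed(7)  # fixed seed so the result is reproducible
for _ in range(50):
    dx = 1.0
    x_ideal += dx
    # same noise model as robot.move: uniform value in [-1.0, 1.0) * motion_noise
    x_noisy += dx + (random.random() * 2.0 - 1.0) * motion_noise

drift = abs(x_noisy - x_ideal)
print(drift)  # independent noise terms rarely cancel, so drift grows with steps
```

Each individual error is at most `motion_noise`, but over many moves the position estimate wanders; this is exactly why SLAM has to treat motion probabilistically.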
## Landmarks
Next, let's create landmarks, which are measurable features in the map. You can think of landmarks as things like notable buildings, or something smaller such as a tree, rock, or other feature.
The robot class has a function `make_landmarks` which randomly generates locations for the number of specified landmarks. Try changing `num_landmarks` or running this cell multiple times to see where these landmarks appear. We have to pass these locations as a third argument to the `display_world` function, and the list of landmark locations is accessed in a similar way to the robot position, via `r.landmarks`.
Each landmark is displayed as a purple `x` in the grid world, and we also print out the exact `[x, y]` locations of these landmarks at the end of this cell.
```
# create any number of landmarks
num_landmarks = 3
r.make_landmarks(num_landmarks)
# print out our robot's exact location
print(r)
# display the world including these landmarks
display_world(int(world_size), [r.x, r.y], r.landmarks)
# print the locations of the landmarks
print('Landmark locations [x,y]: ', r.landmarks)
```
## Sense
Once we have some landmarks to sense, we need to be able to tell our robot to *try* to sense how far they are away from it. It will be up to you to code the `sense` function in our robot class.
The `sense` function uses only internal class parameters and returns a list of the measured/sensed x and y distances to the landmarks it senses within the specified `measurement_range`.
### TODO: Implement the `sense` function
Follow the `##TODO's` in the class code above to complete the `sense` function for the robot class. Once you have tested out your code, please **copy your complete `sense` code to the `robot_class.py` file in the home directory**. By placing this complete code in the `robot_class` Python file, we will be able to reference this class in a later notebook.
The measurements have the format, `[i, dx, dy]` where `i` is the landmark index (0, 1, 2, ...) and `dx` and `dy` are the measured distance between the robot's location (x, y) and the landmark's location (x, y). This distance will not be perfect since our sense function has some associated `measurement noise`.
---
In the example in the following cell, we have given our robot a range of `5.0`, so any landmarks within that range of our robot's location should appear in a list of measurements. Not all landmarks are guaranteed to be in our visibility range, so this list will be variable in length.
*Note: the robot's location is often called the **pose** or `[Pxi, Pyi]` and the landmark locations are often written as `[Lxi, Lyi]`. You'll see this notation in the next notebook.*
```
# try to sense any surrounding landmarks
measurements = r.sense()
# this will print out an empty list if `sense` has not been implemented
print(measurements)
```
**Refer back to the grid map above. Do these measurements make sense to you? Are all the landmarks captured in this list (why/why not)?**
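To double-check a `sense` implementation by hand, you can compute the noiseless distances yourself and predict which landmarks should appear. A standalone sketch with hypothetical robot and landmark positions (not the randomly generated ones above):

```python
# hypothetical positions, chosen so one landmark falls just out of range
robot_x, robot_y = 5.0, 5.0
landmarks = [[2, 9], [9, 9], [5, 7]]
measurement_range = 5.0

visible = []
for idx, (lx, ly) in enumerate(landmarks):
    dx, dy = lx - robot_x, ly - robot_y
    # a landmark is measurable when its Euclidean distance is within range
    if (dx**2 + dy**2) ** 0.5 <= measurement_range:
        visible.append([idx, dx, dy])

print(visible)  # [[0, -3.0, 4.0], [2, 0.0, 2.0]] -- landmark 1 is too far away
```

With noise added, a landmark sitting right at the range boundary may appear in some runs and not others.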
---
## Data
#### Putting it all together
To perform SLAM, we'll collect a series of robot sensor measurements and motions, in that order, over a defined period of time. Then we'll use only this data to re-construct the map of the world with the robot and landmark locations. You can think of SLAM as performing what we've done in this notebook, only backwards. Instead of defining a world and robot and creating movement and sensor data, it will be up to you to use movement and sensor measurements to reconstruct the world!
In the next notebook, you'll see this list of movements and measurements (which you'll use to re-construct the world) listed in a structure called `data`. This is an array that holds sensor measurements and movements in a specific order, which will be useful to call upon when you have to extract this data and form constraint matrices and vectors.
`data` is constructed over a series of time steps as follows:
```
data = []
# after a robot first senses, then moves (one time step)
# that data is appended like so:
data.append([measurements, [dx, dy]])
# for our example movement and measurement
print(data)
# in this example, we have only created one time step (0)
time_step = 0
# so you can access robot measurements:
print('Measurements: ', data[time_step][0])
# and its motion for a given time step:
print('Motion: ', data[time_step][1])
```
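Here is how those entries unpack, using made-up measurements over two time steps (purely illustrative values):

```python
# made-up sensor/motion log: each entry is [measurements, [dx, dy]]
data = [
    [[[0, 1.2, -0.4], [2, -2.1, 0.8]], [1, 2]],   # time step 0: two landmarks seen
    [[[1, 0.5, 3.0]], [1, -1]],                   # time step 1: one landmark seen
]

for t, (measurements, motion) in enumerate(data):
    # the measurement list has variable length; the motion is always [dx, dy]
    print(t, len(measurements), motion)
```

This nesting is why `data[time_step][0]` is the measurement list and `data[time_step][1]` is the motion.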
### Final robot class
Before moving on to the last notebook in this series, please make sure that you have copied your final, completed `sense` function into the `robot_class.py` file in the home directory. We will be using this file in the final implementation of slam!
<a href="https://colab.research.google.com/github/imiled/DeepLearningMaster/blob/master/Tensorflow_Utils.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!apt-get update > /dev/null 2>&1
!apt-get install cmake > /dev/null 2>&1
!pip install --upgrade setuptools > /dev/null 2>&1
!pip install tensorflow-gpu==2.0.0 > /dev/null 2>&1
import tensorflow as tf
import numpy as np
```
Let's try to fit a parabolic function using the following training and test data:
```
f = lambda x: 2*x**2 + x +1
x_train = np.linspace(-100,100,1000)
y_train = f(x_train)
x_test = np.linspace(-110,-100.01,10)
y_test = f(x_test)
```
# Model Definition
### Sequential API
```
sequential_model = tf.keras.models.Sequential()
sequential_model.add(tf.keras.layers.Dense(64, input_shape=(1,), activation='relu'))
sequential_model.add(tf.keras.layers.Dense(32, activation='relu'))
sequential_model.add(tf.keras.layers.Dense(1))
sequential_model.summary()
sequential_model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.mean_squared_error)
sequential_model.fit(x_train, y_train, batch_size=8, epochs=10, validation_split=.2)
sequential_model.predict(x_test)
```
### Functional API
```
x = tf.keras.layers.Input(shape=(1,))
dense_relu_64 = tf.keras.layers.Dense(64, activation='relu')(x)
dense_relu_32 = tf.keras.layers.Dense(32, activation='relu')(dense_relu_64)
y = tf.keras.layers.Dense(1)(dense_relu_32)
functional_model = tf.keras.Model(x, y)
functional_model.summary()
functional_model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.mean_squared_error)
functional_model.fit(x_train, y_train, batch_size=8, epochs=10, validation_split=.2)
functional_model.predict(x_test)
```
### Model Subclassing
```
class NN(tf.keras.Model):
def __init__(self):
super(NN, self).__init__()
self.dense_relu_64 = tf.keras.layers.Dense(64, activation='relu')
self.dense_relu_32 = tf.keras.layers.Dense(32, activation='relu')
self.dense_linear_1 = tf.keras.layers.Dense(1)
def call(self, inputs):
x = self.dense_relu_64(inputs)
x = self.dense_relu_32(x)
x = self.dense_linear_1(x)
return x
subclassing = NN()
x_test_sub = np.expand_dims(x_test, axis=1)
print(subclassing(x_test_sub))
x_test.shape
```
# Training Model Subclassing
### Fit
```
subclassing.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.mean_squared_error)
subclassing.fit(x_train, y_train, batch_size=8, epochs=10, validation_split=.2)
subclassing.predict(x_test)
```
### tf.GradientTape
```
def optimize(model, x, y):
    with tf.GradientTape() as tape:  # record operations so gradients of the cost can be computed
pred = model(x)
loss = tf.reduce_mean(tf.keras.losses.MSE(pred, y))
grads = tape.gradient(loss, model.trainable_weights)
optimizer = tf.keras.optimizers.Adam()
optimizer.apply_gradients(zip(grads, model.trainable_weights))
return model, loss
subclassing = NN()
x_test_sub = np.expand_dims(x_test, axis=1)
epochs = 10
for i in range(epochs):
subclassing, loss = optimize(subclassing, x_test_sub, y_test)
print(i, loss)
```
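The loop above follows the pattern: compute the loss under the tape, ask the tape for gradients, then apply them. The same pattern, with the gradient derived by hand for a one-parameter model $y = wx$ under MSE, looks like this (a standalone sketch without TensorFlow, not the notebook's network):

```python
# fit y = w * x to data generated with w_true = 3, by explicit gradient descent
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0 * x for x in xs]

w, lr = 0.0, 0.01
for _ in range(200):
    # d/dw mean((w*x - y)^2) = mean(2 * (w*x - y) * x)  -- what the tape automates
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # the "apply_gradients" step

print(round(w, 3))  # converges to 3.0
```

`tf.GradientTape` does exactly this bookkeeping for every trainable weight, so you never differentiate by hand.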
# Testing Code with pytest
In this lesson we will be going over some of the things we've learned so far about testing and demonstrate how to use pytest to expand your tests. We'll start by looking at some functions which have been provided for you, and then move on to testing them.
In your repo you should find a Python script called `fibonacci.py`, which contains a couple of functions providing slightly different implementations of the [Fibonacci sequence](https://en.wikipedia.org/wiki/Fibonacci_number). Each of these should take an integer input `n` and return the first `n` Fibonacci numbers.
Once you've had a look at these functions and are happy with using them, let's move on to testing them.
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Testing functions</h2>
</div>
<div class="panel-body">
<ol>
<li>
<p>Create a new script called <code>test_fibonacci.py</code>, or similar. In this script, write a test function for each of the Fibonacci implementations. Consider the following questions when writing your tests:</p>
<ul>
<li>How many different inputs do you need to test to be confident that the function is working as expected?</li>
<li>For a given input, is there a known, well-defined answer against which you can check the output?</li>
<li>Does the function output have any other qualities which might be wrong, and which should be tested?</li>
</ul>
<p>Remember that in order for your tests to call your functions, that script will need to import them.</p>
</li>
</ol>
</div>
</section>
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Generalising the tests</h2>
</div>
<div class="panel-body">
<p>The approach we've used above, with one test for each function, is fine. But it's very specific to this particular scenario - if we introduced another implementation, we would have to write a new test function for it, which is not the point of modularity. Since our functions are supposed to give the same output, a better approach would be to have one generalised test function which could test any function we pass it.</p>
<ol>
<li>Combine your tests into one test function which takes a function as input and uses that as the function to be tested. Run your Fibonacci implementations through this new test and make sure they still pass.</li>
<li>Testing a specific input, as above, is fine in theory, but the point of tests is to find unexpected behaviour. Generalise your test function to test correct behaviour for a Fibonacci sequence of random length. You will probably want to look at the <code>numpy.random</code> module.</li>
</ol>
</div>
</section>
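One possible shape for such a generalised, randomised test is sketched below. The helper names and the convention that the sequence starts 1, 1 are assumptions, not the repo's actual code:

```python
import random

def fib_reference(n):
    # iterative reference: the first n Fibonacci numbers, starting 1, 1
    seq, a, b = [], 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

def check_fibonacci(fib_impl):
    # a random length turns the fixed-input check into a property test
    n = random.randint(1, 20)
    result = fib_impl(n)
    assert len(result) == n
    assert result == fib_reference(n)

check_fibonacci(fib_reference)  # the reference should pass its own check
```

Any implementation under test can now be passed in the same way, e.g. `check_fibonacci(fib_loop)`.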
Next, let's add a third implementation of the Fibonacci sequence.
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Testing a Third Implementation</h2>
</div>
<div class="panel-body">
<p>Copy the functions above (exactly as shown here) into your <code>fibonacci.py</code> script. Use your tests to find the bugs and compare its output to the previous implementations.</p>
</div>
</section>
<section class="solution panel panel-primary">
<div class="panel-heading">
<h2><span class="fa fa-eye"></span> Solution</h2>
</div>
</section>
The actual `fib_recursive` function should read:
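A plausible recursive implementation matching the exercises (a sketch — assuming the sequence starts 1, 1, and noting that `n == 0` never reaches a base case, hence the `RecursionError` mentioned below) is:

```python
def fib_recursive(n):
    # base cases: the first one or two Fibonacci numbers
    if n == 1:
        return [1]
    if n == 2:
        return [1, 1]
    # build the first n-1 numbers, then append the next one
    seq = fib_recursive(n - 1)
    seq.append(seq[-1] + seq[-2])
    return seq
```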
and should pass the tests.
## Introducing `pytest`
`pytest` is a Python module which contains a lot of tools for automating tests, rather than running the test for each function one at a time as we've done so far. We won't go into much detail with this, but you should know that it exists and to look into it if you need to write a large number of tests.
The most basic way to use `pytest` is with the command-line tool it provides. This command takes a filename as input, runs the functions defined there and reports whether they pass or fail.
This works in this example because I've used a file containing only our first versions of the tests, which took no input. Using the new combined test, `pytest` doesn't know what input to provide, so it reports the test as having failed. However, there is a commonly used feature in `pytest` which addresses this: the `parametrize` decorator. This allows you to specify values for the input parameters of your test functions. What makes it particularly useful, though, is that you can specify several values for each parameter, and `pytest` will automatically run the test with all of those inputs. In this way you can automate testing your functions with a wide range of inputs without having to type out many different function calls yourself.
For our example, we can use this decorator to pass in the functions we wish to test, like this:
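A sketch of what that can look like (the fence re-defines minimal Fibonacci implementations so it is self-contained; your actual script would import them from `fibonacci.py`):

```python
import pytest

def fib_loop(n):
    seq, a, b = [], 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

def fib_recursive(n):
    if n == 1:
        return [1]
    if n == 2:
        return [1, 1]
    seq = fib_recursive(n - 1)
    seq.append(seq[-1] + seq[-2])
    return seq

# pytest runs test_fibonacci once for each argument in the list
@pytest.mark.parametrize("fib", [fib_loop, fib_recursive])
def test_fibonacci(fib):
    assert fib(1) == [1]
    assert fib(6) == [1, 1, 2, 3, 5, 8]
```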
Now when we run this script with `pytest`, you'll notice that even though we have only defined one function, it still runs two tests, one with each of our Fibonacci functions as input.
This should also pass all the previous tests written. You may have also wanted to add tests that detect the `RecursionError` when $n==0$.
```
!pip install dask
import dask.array as da
a = da.arange(18,chunks=4)
a.compute()
a.chunks
a
import pandas as pd
%time temp = pd.read_csv('HR_comma_sep.csv')
import dask.dataframe as dd
%time df = dd.read_csv('HR_comma_sep.csv')
import dask.dataframe as dd
import pandas as pd
df = pd.DataFrame({'P':[10,20,30], 'Q':[40,50,60]}, index=['p','q','r'])
ddf = dd.from_pandas(df,npartitions=2)
ddf.head()
ddf[['Q','P']]
import dask.dataframe as dd
import pandas as pd
df = pd.DataFrame({'X':[11,12,13],'Y':[41,51,61]})
ddf = dd.from_pandas(df,npartitions=2)
ddf.head()
ddf.iloc[:,[1,0]].compute()
ddf.iloc[:,[1,0]].compute()
ddf[['X']].compute()
ddf = dd.read_csv('HR_comma_sep.csv')
ddf.head()
ddf2 = ddf[ddf.salary=='low']
ddf2.compute().head()
ddf2.head()
ddf.compute()
ddf.groupby('left').mean().compute()
from dask import dataframe as dd
type(ddf)
type(df)
ddf = dd.from_pandas(df,chunksize=4)
type(ddf)
pd_df = ddf.compute()
pd_df
type(pd_df)
import dask.bag as db
items_bag = db.from_sequence([1,2,3,4,5,6,7,8,9,10], npartitions=3)
items_bag.take(2)
items_odd = items_bag.filter(lambda x: x if x % 2 !=0 else None)
items_odd.compute()
items_square = items_bag.map(lambda x: x**2)
items_square.compute()
import dask.bag as db
text = db.read_text('sample.txt')
text.compute()
text.take(2)
text.to_textfiles('/path/to/data/*.text.gz')
import dask.bag as db
dict_bag = db.from_sequence([{'item_name': 'Egg', 'price':5}, {'item_name': 'Bread', 'price': 20}, {'item_name': 'Milk', 'price':54}], npartitions=2)
dict_bag.compute()
df = dict_bag.to_dataframe()
df.compute().reset_index(drop=True)
pd_df = pd_df.reset_index(drop=True)
pd_df
from dask import delayed, compute
@delayed
def cube(item):
return item**3
@delayed
def average(items):
return sum(items)/len(items)
item_list = [2,3,4]
cube_list = [cube(i) for i in item_list]
computation_graph = average(cube_list)
computation_graph.compute()
!pip install graphviz
import os
os.environ["PATH"] += os.pathsep + 'D:/Program Files (x86)/Graphviz2.38/bin/'
!conda install python-graphviz
computation_graph.visualize()
delayed?
import dask.dataframe as dd
ddf = dd.read_csv('HR_comma_sep.csv')
ddf.head()
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0,100))
scaler.fit(ddf[['last_evaluation']])
performance_score = scaler.transform(ddf[['last_evaluation']])
performance_score
import dask.dataframe as dd
ddf = dd.read_csv('HR_comma_sep.csv')
ddf.head()
from dask_ml.preprocessing import Categorizer, OneHotEncoder
from sklearn.pipeline import make_pipeline
!pip install dask-ml
from dask_ml.preprocessing import Categorizer, OneHotEncoder
from sklearn.pipeline import make_pipeline
pipe1 = make_pipeline(Categorizer())
pipe1.fit(ddf[['salary',]])
result1 = pipe1.transform(ddf[['salary',]])
result1.head()
from dask_ml.preprocessing import Categorizer, OrdinalEncoder
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(Categorizer(), OrdinalEncoder())
pipe.fit(ddf[['salary',]])
result = pipe.transform(ddf[['salary',]])
result.head(73)
import pandas as pd
df = pd.read_csv('HR_comma_sep.csv')
df.head()
data = df[['satisfaction_level', 'last_evaluation']]
label = df['left']
from dask.distributed import Client
client = Client()
import joblib
import sys
# register joblib under the legacy name before importing from it
sys.modules['sklearn.externals.joblib'] = joblib
from sklearn.externals.joblib import parallel_backend
with parallel_backend('dask'):
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data, label, test_size=0.2, random_state=0)
model = RandomForestClassifier()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('Accuracy:', accuracy_score(y_test, y_pred))
import dask.dataframe as dd
ddf = dd.read_csv('HR_comma_sep.csv')
data = ddf[['satisfaction_level', 'last_evaluation']].to_dask_array(lengths=True)
label = ddf['left'].to_dask_array(lengths=True)
from dask_ml.linear_model import LogisticRegression
from dask_ml.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data,label)
model = LogisticRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('Accuracy:', accuracy_score(y_test, y_pred))
data = ddf[['satisfaction_level', 'last_evaluation']].to_dask_array(lengths=True)
from dask_ml.cluster import KMeans
model = KMeans(n_clusters=3)
model.fit(data)
label = model.labels_
label.compute()
import matplotlib.pyplot as plt
x = data[:,0].compute()
y = data[:,1].compute()
cluster_labels = label.compute()
data.compute()
plt.scatter(x,y,c=cluster_labels)
plt.xlabel('Satisfaction Level')
plt.ylabel('Performance Level')
plt.title('Groups of employees who left the company')
plt.show()
```
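The `@delayed` functions used above do not run when called; they record a task graph that executes only on `.compute()`. A minimal stdlib stand-in (a sketch of the idea, not dask's actual implementation) makes the mechanism concrete:

```python
class Delayed:
    """Minimal stand-in for dask.delayed: record the call, evaluate it later."""
    def __init__(self, fn, args):
        self.fn, self.args = fn, args

    def compute(self):
        # recursively evaluate any Delayed arguments first (walking the graph)
        resolved = [a.compute() if isinstance(a, Delayed) else a for a in self.args]
        return self.fn(*resolved)

def delayed(fn):
    return lambda *args: Delayed(fn, args)

cube = delayed(lambda x: x ** 3)
average = delayed(lambda *items: sum(items) / len(items))

graph = average(*[cube(i) for i in [2, 3, 4]])  # nothing has executed yet
print(graph.compute())  # 33.0  ->  (8 + 27 + 64) / 3
```

dask does the same bookkeeping, but it can also parallelise independent branches of the graph across threads, processes, or a cluster.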
```
import numpy as np
from numpy.random import normal, uniform
from scipy.stats import multivariate_normal as mv_norm
from collections import OrderedDict
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits import mplot3d
%matplotlib inline
```
## Functions to Generate the Training and Test Datasets
#### Details of target function generation
The target function at each node is generated as follows:
$T = \mathbf{a}^T\phi(\mathbf{X}) + Z$, where
$\mathbf{X} = [X_1, X_2, \ldots, X_N]^T$ denotes the random data point,
$\phi(\mathbf{X}) = [1, X_1, X_2, \ldots, X_N]^T$ denotes the feature vector obtained from data point,
$\mathbf{a} = [a_0, a_1, \ldots, a_N]^T$ denotes the weight vector,
$Z$ denotes Gaussian noise with zero mean and $T$ denotes the target value.
For simplicity we assume $Z \sim \mathcal{N}(0, \beta^{-1})$, where $\beta$ denotes the precision. Hence the target values $T \sim \mathcal{N}(\mathbf{a}^T\phi(\mathbf{X}), \beta^{-1})$
Therefore the likelihood of $T = t$ given $\mathbf{X} = \mathbf{x}$ denoted by $p(t|\mathbf{x}, \mathbf{a})$ has the Gaussian distribution $\mathcal{N}(\mathbf{a}^T\phi(\mathbf{x}), \beta^{-1})$ whose likelihood is given by $G(t, \mathbf{a}^T\phi(\mathbf{x}), \beta^{-1})$
```
# x_vec = [x1, x2, ... , xi] and xi is available to node i only
def real_function(a_vec, noise_sigma, X):
N = X.shape[0]
N_samples = X.shape[1]
#Evaluates the real function
f_value = a_vec[0]
for i in range(0, N):
f_value += a_vec[i+1]*X[i,:]
if noise_sigma==0:
# Recovers the true function
return f_value
else:
return f_value + normal(0, noise_sigma, N_samples)
```
#### Details of data points generation across the network
Data point $\mathbf{X} = [X_1, X_2, \ldots, X_N]^T$ is an $N$ dimensional vector, where each $X_i \sim Unif[l_i, u_i]$.
```
# generate training set for each node
def generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_samples):
# generates N_samples copies of X which are uniformly distributed over [l,u]
N = len(l_vec)
X = np.zeros((N, N_samples), dtype=float)
for i in range(0,N):
X[i, :] = uniform(l_vec[i], u_vec[i], N_samples)
# Evaluate the real function for training example inputs
t = real_function(a_vec, noise_sigma, X)
return X, t
```
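A quick shape check of this data generation, restated inline with made-up bounds and weights so the snippet stands alone:

```python
import numpy as np
from numpy.random import uniform, normal

# N = 2 features, 5 samples; target t = a0 + a1*x1 + a2*x2 + noise
l_vec, u_vec = [0.0, -1.0], [1.0, 1.0]
a_vec, noise_sigma, n_samples = [1.0, 2.0, 3.0], 0.1, 5

N = len(l_vec)
# row i holds n_samples draws of X_i ~ Unif[l_i, u_i]
X = np.vstack([uniform(l_vec[i], u_vec[i], n_samples) for i in range(N)])
t = a_vec[0] + a_vec[1] * X[0, :] + a_vec[2] * X[1, :] + normal(0, noise_sigma, n_samples)

print(X.shape, t.shape)  # (2, 5) (5,)
```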
## Training and Testing Procedure
### Training at each node without cooperation
We consider a network of $N$ nodes and generate $N$ datasets network-wide.
For node $i$:
Each node $i$'s local and private dataset is denoted by $\mathcal{D}_i = \{(\mathbf{X}_i^{(j)}, t^{(j)}), j \in \{1,2, \ldots, N_{0}\}\}$, where each $\mathbf{X}_i^{(j)}$ is an $N$ dimensional data point.
Using the given dataset $\mathcal{D}_i$ at node $i$, we want to be able to predict $t$ given a new input $\mathbf{x}$, i.e., make a prediction based on the following predictive distribution
\begin{align}
p(t|\mathbf{x}, \mathcal{D}_i)
\end{align}
The predictive distribution can be obtained as follows
\begin{align}
p(t|\mathbf{x}, \mathcal{D}_i) &= \int p(t, \mathbf{a}|\mathbf{x}, \mathcal{D}_i)d\mathbf{a} \\
& = \int p(t|\mathbf{x}, \mathbf{a}, \mathcal{D}_i)p(\mathbf{a}|\mathcal{D}_i)d\mathbf{a} \\
& = \int p(t|\mathbf{x}, \mathbf{a})p(\mathbf{a}|\mathcal{D}_i)d\mathbf{a}
\end{align}
We train each node using the dataset $\mathcal{D}_i$ to obtain $p(\mathbf{a}|\mathcal{D}_i)$. We obtain the posterior distribution on the weight vector $\mathbf{a}$ in a Bayesian fashion, i.e., we start with a prior on $\mathbf{a}$ given by
\begin{align}
p(\mathbf{a}) = G(\mathbf{a}, \boldsymbol{\mu}_0, \boldsymbol{\Sigma}_0)
\end{align}
For simplicity we consider $\boldsymbol{\mu}_0 = 0$ and $\boldsymbol{\Sigma}_0 = \alpha^{-1}I$.
We update the posterior distribution on $\mathbf{a}$ in an online fashion or sequential fashion as we observe the data. Let $\boldsymbol{\mu}^{(k)}_i$ and $\boldsymbol{\Sigma}^{(k)}_i$ denote the mean and covariance matrix of the posterior distribution after observing $k$ samples from $\mathcal{D}_i$. Then, after observing $k+1$th point $(\mathbf{x}_i^{(k+1)}, t_i^{(k+1)})$ we use Bayes rule (for more details on Bayesian linear regression please refer to Bishop's treatment of the Bayesian approach to linear regression.) to obtain $\boldsymbol{\mu}^{(k+1)}_i$ and $\boldsymbol{\Sigma}^{(k+1)}_i$ as follows
\begin{align}
(\boldsymbol{\Sigma}^{(k+1)}_i)^{-1}
&= (\boldsymbol{\Sigma}^{(k)}_i)^{-1} + \beta \phi(\mathbf{x}_i^{(k+1)})\phi(\mathbf{x}_i^{(k+1)})^T
\\
\boldsymbol{\mu}^{(k+1)}_i
&= \boldsymbol{\Sigma}^{(k+1)}_i\left((\boldsymbol{\Sigma}^{(k)}_i)^{-1} \boldsymbol{\mu}_i^{(k)} + \beta \phi(\mathbf{x}_i^{(k+1)}) t_i^{(k+1)} \right)
\end{align}
Update using the above equations until we have looped through the entire local datasets.
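A single update step of these equations in numpy, treating $\phi(\mathbf{x})$ as a column vector; all the numbers below are arbitrary illustrations:

```python
import numpy as np

beta, alpha = 25.0, 2.0   # noise precision and prior precision (arbitrary)
M = 3                     # feature dimension: 1 bias term + 2 inputs

mu_k = np.zeros((M, 1))      # prior mean mu_0 = 0
Sigma_k = np.eye(M) / alpha  # prior covariance alpha^{-1} I

phi = np.array([[1.0], [0.4], [-0.7]])  # phi(x) as an M x 1 column vector
t = 0.9                                 # observed target value

# precision update: Sigma_{k+1}^{-1} = Sigma_k^{-1} + beta * phi phi^T
prec_next = np.linalg.inv(Sigma_k) + beta * phi @ phi.T
Sigma_next = np.linalg.inv(prec_next)
# mean update: mu_{k+1} = Sigma_{k+1} (Sigma_k^{-1} mu_k + beta * phi * t)
mu_next = Sigma_next @ (np.linalg.inv(Sigma_k) @ mu_k + beta * phi * t)

print(mu_next.shape, Sigma_next.shape)  # (3, 1) (3, 3)
```

Observing a data point can only add precision, so the posterior variance along $\phi$ strictly shrinks.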
### Training at each node with peer-to-peer cooperation
Again we want to train each node using the dataset $\mathcal{D}_i$ and cooperation with neighbors in the graph given by social interaction matrix $\mathbf{W}$ to obtain $p^{(k)}(\mathbf{a})$ after each node has observed $k$ training samples.
We obtain the posterior distribution on the weight vector $\mathbf{a}$ in a Bayesian fashion, i.e., we start with a prior on $\mathbf{a}$ given by
\begin{align}
p^{(0)}(\mathbf{a}) = G(\mathbf{a}, \boldsymbol{\mu}_0, \boldsymbol{\Sigma}_0)
\end{align}
For simplicity we consider $\boldsymbol{\mu}_0 = 0$ and $\boldsymbol{\Sigma}_0 = \alpha^{-1}I$.
$\underline{\text{Local Bayesian Update Step:}}$
We update the posterior distribution on $\mathbf{a}$ in an online fashion or sequential fashion as we observe the data. Let $\boldsymbol{\mu}^{(k)}_i$ and $\boldsymbol{\Sigma}^{(k)}_i$ denote the mean and covariance matrix of the posterior distribution after observing $k$ samples from $\mathcal{D}_i$. Then, after observing $k+1$th point $(\mathbf{x}_i^{(k+1)}, t_i^{(k+1)})$ we use Bayesian update to obtain $\boldsymbol{\mu}^{(k+1)}_i$ and $\boldsymbol{\Sigma}^{(k+1)}_i$ as follows
\begin{align}
(\boldsymbol{\Sigma}^{(k+1)}_i)^{-1}
&= (\boldsymbol{\Sigma}^{(k)}_i)^{-1} + \beta \phi(\mathbf{x}_i^{(k+1)})\phi(\mathbf{x}_i^{(k+1)})^T
\\
\boldsymbol{\mu}^{(k+1)}_i
&= \boldsymbol{\Sigma}^{(k+1)}_i\left((\boldsymbol{\Sigma}^{(k)}_i)^{-1} \boldsymbol{\mu}_i^{(k)} + \beta \phi(\mathbf{x}_i^{(k+1)}) t_i^{(k+1)} \right)
\end{align}
$\underline{\text{Consensus Step:}}$
The merged covariance matrix $\overline{\boldsymbol{\Sigma}}^{(k+1)}_i$ for node $i$ is given as
\begin{align}
(\overline{\boldsymbol{\Sigma}}^{(k+1)}_i)^{-1} = \sum_{j = 1}^N W_{ij}(\boldsymbol{\Sigma}_j^{(k+1)})^{-1}.
\end{align}
The merged mean value for node $i$ is given as
\begin{align}
\overline{\boldsymbol{\mu}}^{(k+1)}_i = \overline{\boldsymbol{\Sigma}}^{(k+1)}_i \sum_{j=1}^N W_{ij}(\boldsymbol{\Sigma}_j^{(k+1)})^{-1}\boldsymbol{\mu}_j^{(k+1)} .
\end{align}
Update using the above equations until we have looped through the entire local datasets.
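For a node with two neighbours, the consensus step can be sketched in numpy as follows (weights and posterior parameters are made up):

```python
import numpy as np

W = [0.6, 0.4]  # row i of the social interaction matrix, restricted to neighbours
mus = [np.array([[1.0], [0.0]]), np.array([[0.0], [2.0]])]
Sigmas = [np.eye(2) * 0.5, np.eye(2) * 0.25]

# merged precision: weighted sum of neighbour precisions
prec_bar = sum(w * np.linalg.inv(S) for w, S in zip(W, Sigmas))
Sigma_bar = np.linalg.inv(prec_bar)
# merged mean: each neighbour's mean weighted by its precision
mu_bar = Sigma_bar @ sum(w * np.linalg.inv(S) @ m for w, S, m in zip(W, Sigmas, mus))

print(mu_bar.flatten())  # pulled toward the more precise (0.25-covariance) neighbour
```

Precision-weighting is what makes the merge sensible: a confident neighbour moves the consensus more than an uncertain one with the same $W_{ij}$.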
### Prediction on the test dataset at each node
Plugging in the trained posterior parameters, the predictive distribution becomes
\begin{align}
p(t| \mathbf{x}) &= \int p(t| \mathbf{x}, \mathbf{a}) p^{(N_0)}(\mathbf{a})d\mathbf{a}
\\
& = \int G(t, \mathbf{a}^T\phi(\mathbf{x}), \beta^{-1}) G(\mathbf{a}, \overline{\boldsymbol{\mu}}^{(N_0)}_i, \overline{\boldsymbol{\Sigma}}^{(N_0)}_i) d\mathbf{a}
\\
& = G(t, (\overline{\boldsymbol{\mu}}^{(N_0)}_i)^T\phi(\mathbf{x}), \overline{\boldsymbol{\Sigma}}^{\ast}_i),
\end{align}
where
\begin{align}
\overline{\boldsymbol{\Sigma}}^{\ast}_i = \beta^{-1} + \phi(\mathbf{x})^T\overline{\boldsymbol{\Sigma}}^{(N_0)}_i \phi(\mathbf{x})
\end{align}
## Initialize the Linear Bayes Class Object
#### Details of each node and its posterior distribution
Each node has access to $\mathbf{X}_i = [X_{i1}, X_{i2}, \ldots, X_{iN}]^T$, which is an $N$-dimensional data point. However $\mathbf{X}_i \in \mathcal{X}_i \subset \mathbb{R}^N$, where $\mathcal{X}_i$ denotes the local data space.
```
class LinearSeqBayes(object):
"""
A class that holds parameter prior/posterior and handles
the hyper-parameter updates with new data
Note: variables ending with "_vec" indicate Nx1 dimensional
column vectors, those ending with "_mat" indicate
matrices, and those ending with "_arr" indicate
1xN dimensional arrays.
Args:
mean0_arr (np.array): prior mean vector of size 1xM
covar0_mat (np.ndarray): prior covariance matrix of size MxM
beta (float): known real-data noise precision
"""
def __init__(self, mean0_arr, covar0_mat, beta):
self.prior = mv_norm(mean=mean0_arr, cov=covar0_mat)
self.meanPrev_vec = mean0_arr.reshape(mean0_arr.shape + (1,)) #reshape to column vector
self.covarPrev_mat = covar0_mat
self.beta = beta
self.meanCurrent_vec = self.meanPrev_vec
self.covarCurrent_mat = self.covarPrev_mat
self.posterior = self.prior
self.prediction = self.prior
def get_phi_mat(self, X):
N = X.shape[0]
phi_mat = np.ones((X.shape[0]+1, X.shape[1]))
for i in range(0,N):
phi_mat[i,:] = X[i,:]
return phi_mat
def get_phi(self, x_vec):
"""
Note that the other terms in x_vec are not from other nodes
in the network; these are local N-dimensional data points.
If some dimensions are not seen at node i, they are set to zero.
"""
N = len(x_vec)
phi_vec = np.ones((1, N+1))
for i in range(0,N):
phi_vec[:, i] = x_vec[i]
return phi_vec
def set_posterior(self, x_vec, t):
"""
Updates current mean vec and covariance matrix given x and t value
"""
phi_vec = self.get_phi(x_vec)
self.covarCurrent_mat = np.linalg.inv(np.linalg.inv(self.covarPrev_mat) + self.beta*phi_vec.T.dot(phi_vec))
self.meanCurrent_vec = self.covarCurrent_mat.dot(np.linalg.inv(self.covarPrev_mat).dot(self.meanPrev_vec)) + \
self.covarCurrent_mat.dot(self.beta*phi_vec.T.dot(t))
self.posterior = mv_norm(mean=self.meanCurrent_vec.flatten(), cov=self.covarCurrent_mat)
def merge_PosteriorParams(self, W_vec, meanCurrent_dict, covarCurrent_mat_dict):
N = len(W_vec)
dummy_mean = np.zeros((N+1,1), dtype = float)
dummy_covar = np.zeros((N+1,N+1), dtype = float)
for i in range(0,N):
dummy_mean += np.linalg.inv(covarCurrent_mat_dict[i]).dot(meanCurrent_dict[i])*W_vec[i]
dummy_covar += np.linalg.inv(covarCurrent_mat_dict[i])*W_vec[i]
self.covarCurrent_mat = np.linalg.inv(dummy_covar)
self.meanCurrent_vec = self.covarCurrent_mat.dot(dummy_mean)
def update_prevPosteriorParams(self):
# update the previous mean and covariance to new updated one using one sample (x_vec,t)
self.covarPrev_mat = self.covarCurrent_mat
self.meanPrev_vec = self.meanCurrent_vec
def predict_test_set(self,X):
N_samples = X.shape[1]
x_mat = self.get_phi_mat(X)
predictions = []
for idx in range(0,N_samples):
x = x_mat[:,idx]
sig_sq_x = 1/self.beta + x.T.dot(self.covarCurrent_mat.dot(x))
mean_x = self.meanCurrent_vec.T.dot(x)
predictions.append(normal(mean_x.flatten(), np.sqrt(sig_sq_x)))
return np.array(predictions)
def compute_mse(self, t, predictions):
N = len(t)
err = np.array(t-predictions)
err = np.square(err)
return sum(err)/N
def make_scatter(self, x1_arr, x2_arr, t_arr, real_parms, samples=None, stdevs=None):
"""
A helper function to plot noisy data, the true function,
and optionally a set of lines specified by the nested array of
weights of size NxM where N is number of lines, M is 2 for
this simple model
"""
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x1_arr, x2_arr, t_arr, alpha=0.5)
ax.set_xlabel('x_1')
ax.set_ylabel('x_2')
ax.set_zlabel('t')
x1, x2 = np.mgrid[-1:1:.01, -1.5:1.5:.01]
x = np.stack((x1,x2))
ax.plot_surface(x1, x2, real_function(np.array(real_parms), 0, x), cmap=cm.coolwarm)
_ = plt.title('Real Data from Noisy Linear Function')
```
### Bayesian Linear Regression for single node
```
# Real function parameters
N_train = 500
a_0 = -0.3
a_1 = 0.5
a_2 = 0.8
a_vec = np.array([a_0, a_1, a_2])
l1 = -1
u1 = 1
l2 = -1.5
u2 = 1.5
l_vec = np.array([l1, l2])
u_vec = np.array([u1, u2])
noise_sigma = 0.8
beta = 1/noise_sigma**2
# Generate input features from uniform distribution
np.random.seed(20) # Set the seed so we can get reproducible results
# generates N training samples
[X_train_mat, t_train_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_train)
N_test = int(N_train/5)
[X_test_mat,t_test_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_test)
mse_vec = np.zeros((N_train), dtype = float)
alpha = 2.0
mean0_vec = np.array([0., 0., 0.])
covar0_mat = 1/alpha*np.identity(3)
linbayes = LinearSeqBayes(mean0_vec, covar0_mat, beta)
linbayes.make_scatter(X_train_mat[0,:], X_train_mat[1,:], t_train_vec, real_parms = [a_0, a_1, a_2])
```
#### Main Training loop: Training averaged over multiple sample paths
```
max_runs = 500
avg_mse_vec = np.zeros((N_train), dtype = float)
for t in range(0, max_runs):
# generates N training samples
[X_train_mat, t_train_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_train)
N_test = int(N_train/5)
[X_test_mat, t_test_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_test)
mse_vec = np.zeros((N_train), dtype = float)
alpha = 2.0
mean0_vec = np.array([0., 0., 0.])
covar0_mat = 1/alpha*np.identity(3)
linbayes = LinearSeqBayes(mean0_vec, covar0_mat, beta)
for n in range(0, N_train):
linbayes.set_posterior(X_train_mat[:,n], t_train_vec[n])
linbayes.update_prevPosteriorParams()
predictions_vec = linbayes.predict_test_set(X_test_mat)
mse_vec[n] = linbayes.compute_mse(t_test_vec, predictions_vec.flatten())
avg_mse_vec += mse_vec
avg_mse_vec = avg_mse_vec/max_runs
avg_mse_vec_1node = avg_mse_vec
plt.plot(np.linspace(0, N_train, num=N_train), avg_mse_vec_1node,'k', label='Mean Squared Error for Central Node')
plt.xlabel(r'Epoch', fontsize = 12)
plt.ylabel(r'MSE', fontsize = 12)
plt.legend()
plt.ylim([0.8, 3.2])
#plt.xlim([0,500])
plt.savefig('MSEVsIter_1node_LearningGlobal.eps', dpi = 450)
plt.show()
```
### Bayesian Linear Regression for two nodes without cooperation
```
# Real function parameters
N_train = 500
a_0 = -0.3
a_1 = 0.5
a_2 = 0.5
a_vec = np.array([a_0, a_1, a_2])
l1 = -1
u1 = 1
l2 = -1.5
u2 = 1.5
l_vec = np.array([l1, l2])
u_vec = np.array([u1, u2])
l1_vec = np.array([l1, 0])
u1_vec = np.array([u1, 0])
l2_vec = np.array([0, l2])
u2_vec = np.array([0, u2])
noise_sigma = 0.8
beta = 1/noise_sigma**2
# Generate input features from uniform distribution
np.random.seed(20) # Set the seed so we can get reproducible results
# generates N training samples for node 1
[X1_train_mat, t1_train_vec] = generate_training_set(l1_vec, u1_vec, a_vec, noise_sigma, N_train)
# generates N training samples for node 2
[X2_train_mat, t2_train_vec] = generate_training_set(l2_vec, u2_vec, a_vec, noise_sigma, N_train)
# common test set
N_test = int(N_train/5)
[X_test_mat, t_test_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_test)
mse_vec_node1 = np.zeros((N_train), dtype = float)
mse_vec_node2 = np.zeros((N_train), dtype = float)
alpha = 2.0
mean0_vec = np.array([0., 0., 0.])
covar0_mat = 1/alpha*np.identity(3)
linbayes_node1 = LinearSeqBayes(mean0_vec, covar0_mat, beta)
linbayes_node2 = LinearSeqBayes(mean0_vec, covar0_mat, beta)
linbayes_node1.make_scatter(X1_train_mat[0,:], X1_train_mat[1,:], t1_train_vec, real_parms = [a_0, a_1, a_2])
linbayes_node2.make_scatter(X2_train_mat[0,:], X2_train_mat[1,:], t2_train_vec, real_parms = [a_0, a_1, a_2])
```
#### Main Training loop: Training averaged over multiple sample paths
```
max_runs = 500
avg_mse_vec_node1 = np.zeros((N_train), dtype = float)
avg_mse_vec_node2 = np.zeros((N_train), dtype = float)
for t in range(0, max_runs):
# generates N training samples for node 1
[X1_train_mat, t1_train_vec] = generate_training_set(l1_vec, u1_vec, a_vec, noise_sigma, N_train)
# generates N training samples for node 2
[X2_train_mat, t2_train_vec] = generate_training_set(l2_vec, u2_vec, a_vec, noise_sigma, N_train)
# common test set
N_test = int(N_train/5)
[X_test_mat, t_test_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_test)
mse_vec_node1 = np.zeros((N_train), dtype = float)
mse_vec_node2 = np.zeros((N_train), dtype = float)
alpha = 2.0
mean0_vec = np.array([0., 0., 0.])
covar0_mat = 1/alpha*np.identity(3)
linbayes_node1 = LinearSeqBayes(mean0_vec, covar0_mat, beta)
linbayes_node2 = LinearSeqBayes(mean0_vec, covar0_mat, beta)
for n in range(0, N_train):
linbayes_node1.set_posterior(X1_train_mat[:,n], t1_train_vec[n])
linbayes_node1.update_prevPosteriorParams()
predictions_vec_node1 = linbayes_node1.predict_test_set(X_test_mat)
mse_vec_node1[n] = linbayes_node1.compute_mse(t_test_vec, predictions_vec_node1.flatten())
linbayes_node2.set_posterior(X2_train_mat[:,n], t2_train_vec[n])
linbayes_node2.update_prevPosteriorParams()
predictions_vec_node2 = linbayes_node2.predict_test_set(X_test_mat)
mse_vec_node2[n] = linbayes_node2.compute_mse(t_test_vec, predictions_vec_node2.flatten())
avg_mse_vec_node1 += mse_vec_node1
avg_mse_vec_node2 += mse_vec_node2
avg_mse_vec_node1 = avg_mse_vec_node1/max_runs
avg_mse_vec_node2 = avg_mse_vec_node2/max_runs
avg_mse_vec_node1_NoCoop = avg_mse_vec_node1
avg_mse_vec_node2_NoCoop = avg_mse_vec_node2
mse_central, = plt.plot(np.linspace(0, N_train, num=N_train), 1.27821171*np.ones((N_train), dtype = float), linestyle= '--', color = [0, 0,0],label='Mean Squared Error at Central Node')
mse_node1, = plt.plot(np.linspace(0, N_train, num=N_train), avg_mse_vec_node1_NoCoop, color = '#e41a1c',label='Mean Squared Error at Node 1')
mse_node2, = plt.plot(np.linspace(0, N_train, num=N_train), avg_mse_vec_node2_NoCoop, color = '#377eb8', label='Mean Squared Error at Node 2')
plt.xlabel(r'Number of communication rounds', fontsize=12)
plt.ylabel(r'MSE', fontsize=12)
plt.legend(fontsize=12)
plt.ylim([0.8, 3.2])
plt.savefig('MSEVsIter_2nodes_LearningNoCooperation_centralNode.eps', dpi = 450)
plt.show()
```
### Bayesian Linear Regression for two nodes with cooperation
```
# Real function parameters
N_train = 500
N = 2
W = np.array([np.array([0.9, 0.1]), np.array([0.6, 0.4])])
a_0 = -0.3
a_1 = 0.5
a_2 = 0.5
a_vec = np.array([a_0, a_1, a_2])
l1 = -1
u1 = 1
l2 = -1.5
u2 = 1.5
l_vec = np.array([l1, l2])
u_vec = np.array([u1, u2])
l1_vec = np.array([l1, 0])
u1_vec = np.array([u1, 0])
l2_vec = np.array([0, l2])
u2_vec = np.array([0, u2])
noise_sigma = 0.8
beta = 1/noise_sigma**2
# Generate input features from uniform distribution
np.random.seed(20) # Set the seed so we can get reproducible results
```
#### Main Training Loop: Training averaged over multiple sample paths
```
max_runs = 500
avg_mse_vec_node1 = np.zeros((N_train), dtype = float)
avg_mse_vec_node2 = np.zeros((N_train), dtype = float)
for t in range(0, max_runs):
# generates N training samples for node 1
[X1_train_mat, t1_train_vec] = generate_training_set(l1_vec, u1_vec, a_vec, noise_sigma, N_train)
# generates N training samples for node 2
[X2_train_mat, t2_train_vec] = generate_training_set(l2_vec, u2_vec, a_vec, noise_sigma, N_train)
# common test set
N_test = int(N_train/5)
[X_test_mat, t_test_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_test)
mse_vec_node1 = np.zeros((N_train), dtype = float)
mse_vec_node2 = np.zeros((N_train), dtype = float)
alpha = 2.0
mean0_vec = np.array([0., 0., 0.])
covar0_mat = 1/alpha*np.identity(3)
linbayes_node1 = LinearSeqBayes(mean0_vec, covar0_mat, beta)
linbayes_node2 = LinearSeqBayes(mean0_vec, covar0_mat, beta)
for n in range(0, N_train):
# perform local bayesian update at each node
linbayes_node1.set_posterior(X1_train_mat[:,n], t1_train_vec[n])
linbayes_node2.set_posterior(X2_train_mat[:,n], t2_train_vec[n])
# initialize the dictionaries with current posterior parameters
mean_dict, covar_mat_dict = {}, {}
mean_dict[0] = linbayes_node1.meanCurrent_vec
mean_dict[1] = linbayes_node2.meanCurrent_vec
covar_mat_dict[0] = linbayes_node1.covarCurrent_mat
covar_mat_dict[1] = linbayes_node2.covarCurrent_mat
# perform the consensus step
linbayes_node1.merge_PosteriorParams(W[0], mean_dict, covar_mat_dict)
linbayes_node2.merge_PosteriorParams(W[1], mean_dict, covar_mat_dict)
# update the local posteriors with merged posteriors
linbayes_node1.update_prevPosteriorParams()
linbayes_node2.update_prevPosteriorParams()
# evaluate on the test dataset
predictions_vec_node1 = linbayes_node1.predict_test_set(X_test_mat)
mse_vec_node1[n] = linbayes_node1.compute_mse(t_test_vec, predictions_vec_node1.flatten())
predictions_vec_node2 = linbayes_node2.predict_test_set(X_test_mat)
mse_vec_node2[n] = linbayes_node2.compute_mse(t_test_vec, predictions_vec_node2.flatten())
avg_mse_vec_node1 += mse_vec_node1
avg_mse_vec_node2 += mse_vec_node2
avg_mse_vec_node1 = avg_mse_vec_node1/max_runs
avg_mse_vec_node2 = avg_mse_vec_node2/max_runs
mse_central, = plt.plot(np.linspace(0, N_train, num=N_train), 1.27821171*np.ones((N_train), dtype = float), linestyle= '--', color = [0, 0,0],label='Mean Squared Error at Central Node')
mse_node1, = plt.plot(np.linspace(0, N_train, num=N_train), avg_mse_vec_node1, color = '#e41a1c', label='Mean Squared Error at Node 1')
mse_node2, = plt.plot(np.linspace(0, N_train, num=N_train), avg_mse_vec_node2, color = '#377eb8', label='Mean Squared Error at Node 2')
plt.xlabel(r'Number of communication rounds', fontsize=12)
plt.ylabel(r'MSE', fontsize=12)
plt.legend(fontsize=12)
plt.ylim([0.8, 3.2])
plt.savefig('MSEVsIter_2nodes_LearningWithCoop_centralNode.eps', dpi = 450)
plt.show()
```
## Dependencies
```
import os
import sys
import cv2
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from tensorflow import set_random_seed
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from keras import backend as K
from keras.models import Model
from keras.utils import to_categorical
from keras import optimizers, applications
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
set_random_seed(seed)
seed = 0
seed_everything(seed)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
sys.path.append(os.path.abspath('../input/efficientnet/efficientnet-master/efficientnet-master/'))
from efficientnet import *
```
## Load data
```
hold_out_set = pd.read_csv('../input/aptos-data-split/hold-out.csv')
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')
print('Number of train samples: ', X_train.shape[0])
print('Number of validation samples: ', X_val.shape[0])
print('Number of test samples: ', test.shape[0])
# Preprocess data
X_train["id_code"] = X_train["id_code"].apply(lambda x: x + ".png")
X_val["id_code"] = X_val["id_code"].apply(lambda x: x + ".png")
test["id_code"] = test["id_code"].apply(lambda x: x + ".png")
display(X_train.head())
```
# Model parameters
```
# Model parameters
BATCH_SIZE = 8
EPOCHS = 20
WARMUP_EPOCHS = 5
LEARNING_RATE = 1e-4
WARMUP_LEARNING_RATE = 1e-3
HEIGHT = 456
WIDTH = 456
CHANNELS = 3
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
LR_WARMUP_EPOCHS_1st = 2
LR_WARMUP_EPOCHS_2nd = 5
STEP_SIZE = len(X_train) // BATCH_SIZE
TOTAL_STEPS_1st = WARMUP_EPOCHS * STEP_SIZE
TOTAL_STEPS_2nd = EPOCHS * STEP_SIZE
WARMUP_STEPS_1st = LR_WARMUP_EPOCHS_1st * STEP_SIZE
WARMUP_STEPS_2nd = LR_WARMUP_EPOCHS_2nd * STEP_SIZE
```
# Pre-procecess images
```
train_base_path = '../input/aptos2019-blindness-detection/train_images/'
test_base_path = '../input/aptos2019-blindness-detection/test_images/'
train_dest_path = 'base_dir/train_images/'
validation_dest_path = 'base_dir/validation_images/'
test_dest_path = 'base_dir/test_images/'
# Making sure directories don't exist
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
# Creating train, validation and test directories
os.makedirs(train_dest_path)
os.makedirs(validation_dest_path)
os.makedirs(test_dest_path)
def crop_image(img, tol=7):
if img.ndim ==2:
mask = img>tol
return img[np.ix_(mask.any(1),mask.any(0))]
elif img.ndim==3:
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
mask = gray_img>tol
check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0]
if (check_shape == 0): # image is too dark so that we crop out everything,
return img # return original image
else:
img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))]
img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))]
img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))]
img = np.stack([img1,img2,img3],axis=-1)
return img
def circle_crop(img):
img = crop_image(img)
height, width, depth = img.shape
largest_side = np.max((height, width))
img = cv2.resize(img, (largest_side, largest_side))
height, width, depth = img.shape
x = width//2
y = height//2
r = np.amin((x, y))
circle_img = np.zeros((height, width), np.uint8)
cv2.circle(circle_img, (x, y), int(r), 1, thickness=-1)
img = cv2.bitwise_and(img, img, mask=circle_img)
img = crop_image(img)
return img
def preprocess_image(base_path, save_path, image_id, HEIGHT, WIDTH, sigmaX=10):
image = cv2.imread(base_path + image_id)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = circle_crop(image)
image = cv2.resize(image, (HEIGHT, WIDTH))
image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128)
cv2.imwrite(save_path + image_id, image)
# Pre-process train set
for i, image_id in enumerate(X_train['id_code']):
preprocess_image(train_base_path, train_dest_path, image_id, HEIGHT, WIDTH)
# Pre-process validation set
for i, image_id in enumerate(X_val['id_code']):
preprocess_image(train_base_path, validation_dest_path, image_id, HEIGHT, WIDTH)
# Pre-process test set
for i, image_id in enumerate(test['id_code']):
preprocess_image(test_base_path, test_dest_path, image_id, HEIGHT, WIDTH)
```
# Data generator
```
datagen=ImageDataGenerator(rescale=1./255,
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
train_generator=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
valid_generator=datagen.flow_from_dataframe(
dataframe=X_val,
directory=validation_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
test_generator=datagen.flow_from_dataframe(
dataframe=test,
directory=test_dest_path,
x_col="id_code",
batch_size=1,
class_mode=None,
shuffle=False,
target_size=(HEIGHT, WIDTH),
seed=seed)
def cosine_decay_with_warmup(global_step,
learning_rate_base,
total_steps,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0):
"""
Cosine decay schedule with warm up period.
In this schedule, the learning rate grows linearly from warmup_learning_rate
to learning_rate_base for warmup_steps, then transitions to a cosine decay
schedule.
:param global_step {int}: global step.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
:Returns: a float representing the learning rate.
:Raises ValueError: if warmup_learning_rate is larger than learning_rate_base, or if warmup_steps is larger than total_steps.
"""
if total_steps < warmup_steps:
raise ValueError('total_steps must be larger or equal to warmup_steps.')
learning_rate = 0.5 * learning_rate_base * (1 + np.cos(
np.pi *
(global_step - warmup_steps - hold_base_rate_steps
) / float(total_steps - warmup_steps - hold_base_rate_steps)))
if hold_base_rate_steps > 0:
learning_rate = np.where(global_step > warmup_steps + hold_base_rate_steps,
learning_rate, learning_rate_base)
if warmup_steps > 0:
if learning_rate_base < warmup_learning_rate:
raise ValueError('learning_rate_base must be larger or equal to warmup_learning_rate.')
slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
warmup_rate = slope * global_step + warmup_learning_rate
learning_rate = np.where(global_step < warmup_steps, warmup_rate,
learning_rate)
return np.where(global_step > total_steps, 0.0, learning_rate)
class WarmUpCosineDecayScheduler(Callback):
"""Cosine decay with warmup learning rate scheduler"""
def __init__(self,
learning_rate_base,
total_steps,
global_step_init=0,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0,
verbose=0):
"""
Constructor for cosine decay with warmup learning rate scheduler.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param global_step_init {int}: initial global step, e.g. from previous checkpoint.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
:param verbose {int}: 0: quiet, 1: update messages. (default: {0}).
"""
super(WarmUpCosineDecayScheduler, self).__init__()
self.learning_rate_base = learning_rate_base
self.total_steps = total_steps
self.global_step = global_step_init
self.warmup_learning_rate = warmup_learning_rate
self.warmup_steps = warmup_steps
self.hold_base_rate_steps = hold_base_rate_steps
self.verbose = verbose
self.learning_rates = []
def on_batch_end(self, batch, logs=None):
self.global_step = self.global_step + 1
lr = K.get_value(self.model.optimizer.lr)
self.learning_rates.append(lr)
def on_batch_begin(self, batch, logs=None):
lr = cosine_decay_with_warmup(global_step=self.global_step,
learning_rate_base=self.learning_rate_base,
total_steps=self.total_steps,
warmup_learning_rate=self.warmup_learning_rate,
warmup_steps=self.warmup_steps,
hold_base_rate_steps=self.hold_base_rate_steps)
K.set_value(self.model.optimizer.lr, lr)
if self.verbose > 0:
print('\nBatch %02d: setting learning rate to %s.' % (self.global_step + 1, lr))
class RAdam(optimizers.Optimizer):
"""RAdam optimizer.
# Arguments
lr: float >= 0. Learning rate.
beta_1: float, 0 < beta < 1. Generally close to 1.
beta_2: float, 0 < beta < 1. Generally close to 1.
epsilon: float >= 0. Fuzz factor. If `None`, defaults to `K.epsilon()`.
decay: float >= 0. Learning rate decay over each update.
weight_decay: float >= 0. Weight decay for each param.
amsgrad: boolean. Whether to apply the AMSGrad variant of this
algorithm from the paper "On the Convergence of Adam and
Beyond".
# References
- [Adam - A Method for Stochastic Optimization](https://arxiv.org/abs/1412.6980v8)
- [On the Convergence of Adam and Beyond](https://openreview.net/forum?id=ryQu7f-RZ)
- [On The Variance Of The Adaptive Learning Rate And Beyond](https://arxiv.org/pdf/1908.03265v1.pdf)
"""
def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999,
epsilon=None, decay=0., weight_decay=0., amsgrad=False, **kwargs):
super(RAdam, self).__init__(**kwargs)
with K.name_scope(self.__class__.__name__):
self.iterations = K.variable(0, dtype='int64', name='iterations')
self.lr = K.variable(lr, name='lr')
self.beta_1 = K.variable(beta_1, name='beta_1')
self.beta_2 = K.variable(beta_2, name='beta_2')
self.decay = K.variable(decay, name='decay')
self.weight_decay = K.variable(weight_decay, name='weight_decay')
if epsilon is None:
epsilon = K.epsilon()
self.epsilon = epsilon
self.initial_decay = decay
self.initial_weight_decay = weight_decay
self.amsgrad = amsgrad
def get_updates(self, loss, params):
grads = self.get_gradients(loss, params)
self.updates = [K.update_add(self.iterations, 1)]
lr = self.lr
if self.initial_decay > 0:
lr = lr * (1. / (1. + self.decay * K.cast(self.iterations, K.dtype(self.decay))))
t = K.cast(self.iterations, K.floatx()) + 1
ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p), name='m_' + str(i)) for (i, p) in enumerate(params)]
vs = [K.zeros(K.int_shape(p), dtype=K.dtype(p), name='v_' + str(i)) for (i, p) in enumerate(params)]
if self.amsgrad:
vhats = [K.zeros(K.int_shape(p), dtype=K.dtype(p), name='vhat_' + str(i)) for (i, p) in enumerate(params)]
else:
vhats = [K.zeros(1, name='vhat_' + str(i)) for i in range(len(params))]
self.weights = [self.iterations] + ms + vs + vhats
beta_1_t = K.pow(self.beta_1, t)
beta_2_t = K.pow(self.beta_2, t)
sma_inf = 2.0 / (1.0 - self.beta_2) - 1.0
sma_t = sma_inf - 2.0 * t * beta_2_t / (1.0 - beta_2_t)
for p, g, m, v, vhat in zip(params, grads, ms, vs, vhats):
m_t = (self.beta_1 * m) + (1. - self.beta_1) * g
v_t = (self.beta_2 * v) + (1. - self.beta_2) * K.square(g)
m_corr_t = m_t / (1.0 - beta_1_t)
if self.amsgrad:
vhat_t = K.maximum(vhat, v_t)
v_corr_t = K.sqrt(vhat_t / (1.0 - beta_2_t) + self.epsilon)
self.updates.append(K.update(vhat, vhat_t))
else:
v_corr_t = K.sqrt(v_t / (1.0 - beta_2_t) + self.epsilon)
r_t = K.sqrt((sma_t - 4.0) / (sma_inf - 4.0) *
(sma_t - 2.0) / (sma_inf - 2.0) *
sma_inf / sma_t)
p_t = K.switch(sma_t > 5, r_t * m_corr_t / v_corr_t, m_corr_t)
if self.initial_weight_decay > 0:
p_t += self.weight_decay * p
p_t = p - lr * p_t
self.updates.append(K.update(m, m_t))
self.updates.append(K.update(v, v_t))
new_p = p_t
# Apply constraints.
if getattr(p, 'constraint', None) is not None:
new_p = p.constraint(new_p)
self.updates.append(K.update(p, new_p))
return self.updates
def get_config(self):
config = {
'lr': float(K.get_value(self.lr)),
'beta_1': float(K.get_value(self.beta_1)),
'beta_2': float(K.get_value(self.beta_2)),
'decay': float(K.get_value(self.decay)),
'weight_decay': float(K.get_value(self.weight_decay)),
'epsilon': self.epsilon,
'amsgrad': self.amsgrad,
}
base_config = super(RAdam, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
```
# Model
```
def create_model(input_shape):
input_tensor = Input(shape=input_shape)
base_model = EfficientNetB5(weights=None,
include_top=False,
input_tensor=input_tensor)
base_model.load_weights('../input/efficientnet-keras-weights-b0b5/efficientnet-b5_imagenet_1000_notop.h5')
x = GlobalAveragePooling2D()(base_model.output)
final_output = Dense(1, activation='linear', name='final_output')(x)
model = Model(input_tensor, final_output)
return model
```
# Train top layers
```
model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS))
for layer in model.layers:
layer.trainable = False
for i in range(-2, 0):
model.layers[i].trainable = True
metric_list = ["accuracy"]
optimizer = RAdam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=2).history
```
# Fine-tune the complete model
```
for layer in model.layers:
layer.trainable = True
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
cosine_lr = WarmUpCosineDecayScheduler(learning_rate_base=LEARNING_RATE,
total_steps=TOTAL_STEPS_2nd,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS_2nd,
hold_base_rate_steps=(3 * STEP_SIZE))
callback_list = [es, cosine_lr]
optimizer = RAdam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=2).history
fig, ax = plt.subplots(figsize=(20, 4))
ax.plot(cosine_lr.learning_rates)
ax.set_title('Fine-tune learning rates')
plt.xlabel('Steps')
plt.ylabel('Learning rate')
sns.despine()
plt.show()
```
# Model loss graph
```
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 14))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
# Create empty arrays to keep the predictions and labels
df_preds = pd.DataFrame(columns=['label', 'pred', 'set'])
train_generator.reset()
valid_generator.reset()
# Add train predictions and labels
for i in range(STEP_SIZE_TRAIN + 1):
im, lbl = next(train_generator)
preds = model.predict(im, batch_size=train_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'train']
# Add validation predictions and labels
for i in range(STEP_SIZE_VALID + 1):
im, lbl = next(valid_generator)
preds = model.predict(im, batch_size=valid_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'validation']
df_preds['label'] = df_preds['label'].astype('int')
def classify(x):
if x < 0.5:
return 0
elif x < 1.5:
return 1
elif x < 2.5:
return 2
elif x < 3.5:
return 3
return 4
# Classify predictions
df_preds['predictions'] = df_preds['pred'].apply(lambda x: classify(x))
train_preds = df_preds[df_preds['set'] == 'train']
validation_preds = df_preds[df_preds['set'] == 'validation']
```
# Model Evaluation
## Confusion Matrix
### Original thresholds
```
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']
def plot_confusion_matrix(train, validation, labels=labels):
train_labels, train_preds = train
validation_labels, validation_preds = validation
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8),ax=ax2).set_title('Validation')
plt.show()
plot_confusion_matrix((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
## Quadratic Weighted Kappa
```
def evaluate_model(train, validation):
train_labels, train_preds = train
validation_labels, validation_preds = validation
print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))
print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))
print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(np.append(train_preds, validation_preds), np.append(train_labels, validation_labels), weights='quadratic'))
evaluate_model((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
## Apply model to test set and output predictions
```
def apply_tta(model, generator, steps=10):
    step_size = generator.n // generator.batch_size
    preds_tta = []
    for i in range(steps):
        generator.reset()
        preds = model.predict_generator(generator, steps=step_size)
        preds_tta.append(preds)
    return np.mean(preds_tta, axis=0)

preds = apply_tta(model, test_generator)
predictions = [classify(x) for x in preds]
results = pd.DataFrame({'id_code': test['id_code'], 'diagnosis': predictions})
results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])

# Cleaning created directories
if os.path.exists(train_dest_path):
    shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
    shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
    shutil.rmtree(test_dest_path)
```
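Test-time augmentation here boils down to averaging the probability arrays from several stochastic passes over the generator. The mechanics can be sketched without a model; the arrays below are synthetic stand-ins for `predict_generator` outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
steps = 10
# One (n_samples, n_classes) probability array per augmented pass
preds_tta = [rng.dirichlet(np.ones(5), size=8) for _ in range(steps)]
avg = np.mean(preds_tta, axis=0)

assert avg.shape == (8, 5)
# Averaging valid probability rows keeps each row summing to 1
assert np.allclose(avg.sum(axis=1), 1.0)
```

Averaging smooths out augmentation-induced variance before the thresholding done by `classify`.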
# Predictions class distribution
```
fig, ax = plt.subplots(figsize=(24, 8.7))
sns.countplot(x="diagnosis", data=results, palette="GnBu_d").set_title('Test')
sns.despine()
plt.show()
results.to_csv('submission.csv', index=False)
display(results.head())
```
| github_jupyter |
# Building NMF Model Using Spruce Eats Data
I used the scraped and cleaned Spruce Eats data to build a recommender engine in this notebook. It loads the **se_df.pk** pickle data created in the **scrape_spruce_eats** notebook.
### Table of Contents
* [1. Imports and Functions](#sec1)
* [2. Load DataFrame From Pickle](#sec2)
* [3. Pre-process Descriptions](#sec3)
* [4. Create Lists of Stop Words](#sec4)
* [5. Create Recommender Model](#sec5)
* [6. Recommender Testing](#sec6)
* [7. Pickle DataFrame](#sec7)
<a id='sec1'></a>
### 1. Imports and Functions
* **var_to_pickle**: Writes the given variable to a pickle file
* **read_pickle**: Reads the given pickle file
* **cocktail_recommender**: Builds recommendation engine using NMF
```
import sys
import pandas as pd
import numpy as np
import re
import spacy
from sklearn.feature_extraction import text
sys.path.append('../code')
from lw_pickle import var_to_pickle, read_pickle
from cocktail_recommender import cocktail_recommender
```
<a id='sec2'></a>
### 2. Load DataFrame From Pickle
This cell loads the final DataFrame of scraped and organized cocktail recipes.
```
df_pk = '../data/se_df.pk'
df = read_pickle(df_pk)
```
<a id='sec3'></a>
### 3. Pre-process Descriptions
In this section I created a pair of text-preprocessing functions that lemmatize words using spaCy. I then restricted drink descriptions to nouns, proper nouns, and adjectives before lemmatizing them.
```
scy = spacy.load("en_core_web_sm")
# Simple function that lemmatizes lists of names and base spirits
def list_prepro(items):
    item_str = ' '.join(set([i for row in items for i in row]))
    doc = scy(item_str)
    words = [token.lemma_ for token in doc]
    words = list(set(filter(lambda w: '-' not in w, words)))
    return words

# Simple function that lemmatizes a description
def desc_prepro(desc):
    pos_keep = ['ADJ', 'NOUN', 'PROPN']
    doc = scy(desc)
    words = [token.lemma_ for token in doc if token.pos_ in pos_keep]
    words = list(filter(lambda w: '-' not in w, words))
    return ' '.join(words)
df['description'] = df['description'].map(desc_prepro)
```
<a id='sec4'></a>
### 4. Create Lists of Stop Words
I created separate stop-word lists for two models: one contains a shared set of generic stop words; the other is more aggressive, adding drink names and base spirits as well.
```
# Manually-populated list of generic stop words
gen_stop_words = ['cocktail', 'drink', 'recipe', 'make', 'mix', 'flavor', 'good',
                  'ingredient', 'taste', 'perfect', 'little', 'bar', 'nice', 'blue',
                  'great', 'way', 'favorite', 'new', 'popular', 'delicious', 'green',
                  'party', 'fun', 'black', 'sure', 'time', 'glass', 'woo', 'year',
                  'st', 'shot', 'garnish', 'pink', 'bit', 'different', 'choice',
                  'bartender', 'fantastic', 'use', 'liquor', 'drinker', 'try']
safe_sw = text.ENGLISH_STOP_WORDS.union(gen_stop_words)
# Lemmatized lists of base spirits and drink names
base_spirits = list_prepro(df['base_spirits'].tolist())
name_words = list_prepro(df['name_words'].tolist())
fun_sw = text.ENGLISH_STOP_WORDS.union(gen_stop_words + base_spirits + name_words)
```
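`text.ENGLISH_STOP_WORDS` is a frozenset, so `union` returns a new frozenset rather than mutating the built-in list; a quick check of the semantics used above:

```python
from sklearn.feature_extraction import text

custom = ['cocktail', 'garnish']
sw = text.ENGLISH_STOP_WORDS.union(custom)

assert 'the' in sw                                 # built-in stop word survives
assert 'cocktail' in sw                            # custom word added
assert 'cocktail' not in text.ENGLISH_STOP_WORDS   # original set untouched
```

Because both `safe_sw` and `fun_sw` start from the same immutable base set, the two unions stay independent of each other.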
<a id='sec5'></a>
### 5. Create Recommender Model
The imported **cocktail_recommender** class takes the cocktail DataFrame and the two stop-word lists as input and builds an NMF vector set for each. The "safe" and "fun" vector sets are blended into a single model whose balance is adjustable (the `weirdness` argument used below). An input string is converted to an NMF vector, which is then used to find the recipes most similar to it.
```
cr = cocktail_recommender(df, safe_sw, fun_sw)
```
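The `cocktail_recommender` class lives in `../code` and isn't shown here. A minimal sketch of the same pipeline (TF-IDF, then NMF, then cosine similarity in topic space), with all documents and names made up for illustration:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

docs = ['rum lime mint sugar soda',
        'rum cola lime',
        'gin tonic lime cucumber',
        'gin vermouth olive']

tfidf = TfidfVectorizer()
V = tfidf.fit_transform(docs)
nmf = NMF(n_components=2, random_state=0, max_iter=500)
W = nmf.fit_transform(V)          # document-topic weights

# Project the query into the same topic space and rank documents
query_vec = nmf.transform(tfidf.transform(['rum cola']))
sims = cosine_similarity(query_vec, W).ravel()
best = int(np.argmax(sims))
```

Comparing in the low-dimensional topic space (rather than raw term space) is what lets the recommender match drinks that share a theme, not just exact words.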
<a id='sec6'></a>
### 6. Recommender Testing
This cell is for testing recommender calls.
```
cr.recommend('rum', exclude_inputs=False, weirdness=.5)[1]['name']
```
<a id='sec7'></a>
### 7. Pickle DataFrame
Saves the recommender to a pickle file.
```
reco_pk = '../data/reco.pk'
var_to_pickle(cr, reco_pk)
```
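`var_to_pickle`/`read_pickle` are thin project wrappers whose internals aren't shown here; the round trip they perform can be sketched with the standard library (the wrapper's exact behaviour is an assumption):

```python
import os
import pickle
import tempfile

obj = {'name': 'daiquiri', 'base': 'rum'}   # hypothetical stand-in for the recommender
path = os.path.join(tempfile.mkdtemp(), 'reco.pk')

with open(path, 'wb') as f:   # write: serialize the object to disk
    pickle.dump(obj, f)
with open(path, 'rb') as f:   # read: restore an equal object
    restored = pickle.load(f)

assert restored == obj
```

Note that unpickling a custom class (like `cocktail_recommender`) requires its module to be importable at load time.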
```
# default_exp callback.PredictionDynamics
```
# PredictionDynamics
> Callback used to visualize model predictions during training.
This is an implementation created by Ignacio Oguiza (timeseriesAI@gmail.com) based on Andrej Karpathy's blog post *A Recipe for Training Neural Networks*, which I read some time ago and really liked. One of the things he mentioned was this:
>"**visualize prediction dynamics**. I like to visualize model predictions on a fixed test batch during the course of training. The “dynamics” of how these predictions move will give you incredibly good intuition for how the training progresses. Many times it is possible to feel the network “struggle” to fit your data if it wiggles too much in some way, revealing instabilities. Very low or very high learning rates are also easily noticeable in the amount of jitter." A. Karpathy
```
#export
from fastai.callback.all import *
from tsai.imports import *
# export
class PredictionDynamics(Callback):
    order, run_valid = 65, True

    def __init__(self, show_perc=1., figsize=(6, 6), alpha=.3, size=30, color='lime', cmap='gist_rainbow'):
        """
        Args:
            show_perc: percent of samples from the valid set that will be displayed. Default: 1 (all).
                       You can reduce it if the number is too high and the chart is too busy.
            figsize:   size of the chart. You may want to expand it if there are too many classes.
            alpha:     level of transparency. Default: .3. 1 means no transparency.
            size:      size of each sample in the chart. Default: 30. You may need to decrease it a bit with many classes/samples.
            color:     color used in regression plots.
            cmap:      color map used in classification plots.
        The red line in classification tasks indicates the average probability of the true class.
        """
        store_attr("show_perc,figsize,alpha,size,color,cmap")

    def before_fit(self):
        self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds")
        if not self.run:
            return
        self.cat = True if (hasattr(self.dls, "c") and self.dls.c > 1) else False
        if self.show_perc != 1:
            valid_size = len(self.dls.valid.dataset)
            self.show_idxs = np.random.choice(valid_size, int(round(self.show_perc * valid_size)), replace=False)
        # Prepare ground truth container
        self.y_true = []

    def before_epoch(self):
        # Prepare empty pred container in every epoch
        self.y_pred = []

    def after_pred(self):
        if self.training:
            return
        # Get y_true in epoch 0
        if self.epoch == 0:
            self.y_true.extend(self.y.cpu().flatten().numpy())
        # Gather y_pred for every batch
        if self.cat:
            y_pred = torch.gather(F.softmax(self.pred.detach().cpu(), 1), -1, self.y.cpu().reshape(-1, 1).long())
        else:
            y_pred = self.pred.detach().cpu()
        self.y_pred.extend(y_pred.flatten().numpy())

    def after_epoch(self):
        # Ground truth
        if self.epoch == 0:
            self.y_true = np.array(self.y_true)
            if self.show_perc != 1:
                self.y_true = self.y_true[self.show_idxs]
            self.y_bounds = (np.min(self.y_true), np.max(self.y_true))
            self.min_x_bounds, self.max_x_bounds = np.min(self.y_true), np.max(self.y_true)
        self.y_pred = np.array(self.y_pred)
        if self.show_perc != 1:
            self.y_pred = self.y_pred[self.show_idxs]
        if self.cat:
            self.update_graph(self.y_pred, self.y_true)
        else:
            # Adjust bounds during validation
            self.min_x_bounds = min(self.min_x_bounds, np.min(self.y_pred))
            self.max_x_bounds = max(self.max_x_bounds, np.max(self.y_pred))
            x_bounds = (self.min_x_bounds, self.max_x_bounds)
            self.update_graph(self.y_pred, self.y_true, x_bounds=x_bounds, y_bounds=self.y_bounds)

    def after_fit(self):
        plt.close(self.graph_ax.figure)

    def update_graph(self, y_pred, y_true, x_bounds=None, y_bounds=None):
        if not hasattr(self, 'graph_fig'):
            self.df_out = display("", display_id=True)
            if self.cat:
                self._cl_names = self.dls.vocab
                self._classes = L(self.dls.vocab.o2i.values())
                self._n_classes = len(self._classes)
                self._h_vals = np.linspace(-.5, self._n_classes - .5, self._n_classes + 1)[::-1]
                _cm = plt.get_cmap(self.cmap)
                self._color = [_cm(1. * c / self._n_classes) for c in range(1, self._n_classes + 1)][::-1]
                self._rand = []
                for i, c in enumerate(self._classes):
                    self._rand.append(.5 * (np.random.rand(np.sum(y_true == c)) - .5))
            self.graph_fig, self.graph_ax = plt.subplots(1, figsize=self.figsize)
            self.graph_out = display("", display_id=True)
        self.graph_ax.clear()
        if self.cat:
            for i, c in enumerate(self._classes):
                self.graph_ax.scatter(y_pred[y_true == c], y_true[y_true == c] + self._rand[i], color=self._color[i],
                                      edgecolor='black', alpha=self.alpha, linewidth=.5, s=self.size)
                self.graph_ax.vlines(np.mean(y_pred[y_true == c]), i - .5, i + .5, color='r')
            self.graph_ax.vlines(.5, min(self._h_vals), max(self._h_vals), linewidth=.5)
            self.graph_ax.hlines(self._h_vals, 0, 1, linewidth=.5)
            self.graph_ax.set_xlim(0, 1)
            self.graph_ax.set_ylim(min(self._h_vals), max(self._h_vals))
            self.graph_ax.set_xticks(np.linspace(0, 1, 11))
            self.graph_ax.set_yticks(self._classes)
            self.graph_ax.set_yticklabels(self._cl_names)
            self.graph_ax.set_xlabel('probability of true class', fontsize=12)
            self.graph_ax.set_ylabel('true class', fontsize=12)
            self.graph_ax.grid(axis='x', color='gainsboro', linewidth=.2)
        else:
            self.graph_ax.scatter(y_pred, y_true, lw=1, color=self.color,
                                  edgecolor='black', alpha=self.alpha, linewidth=.5, s=self.size)
            self.graph_ax.set_xlim(*x_bounds)
            self.graph_ax.set_ylim(*y_bounds)
            self.graph_ax.plot([*x_bounds], [*x_bounds], color='gainsboro')
            self.graph_ax.set_xlabel('y_pred', fontsize=12)
            self.graph_ax.set_ylabel('y_true', fontsize=12)
            self.graph_ax.grid(color='gainsboro', linewidth=.2)
        self.graph_ax.set_title(f'Prediction Dynamics \nepoch: {self.epoch + 1}/{self.n_epoch}')
        self.df_out.update(pd.DataFrame(np.stack(self.learn.recorder.values)[-1].reshape(1, -1),
                                        columns=self.learn.recorder.metric_names[1:-1], index=[self.epoch]))
        self.graph_out.update(self.graph_ax.figure)
from fastai.data.all import *
from fastai.metrics import *
from tsai.data.all import *
from tsai.models.utils import *
from tsai.learner import *
from tsai.models.InceptionTimePlus import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, split_data=False)
check_data(X, y, splits, False)
tfms = [None, [Categorize()]]
batch_tfms = [TSStandardize(by_var=True)]
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms)
learn = ts_learner(dls, InceptionTimePlus, metrics=accuracy, cbs=PredictionDynamics())
learn.fit_one_cycle(2, 3e-3)
# hide
from tsai.imports import *
out = create_scripts(); beep(out)
```
```
import os
rutaBase = os.getcwd().replace('\\', '/') + '/'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
rutaMETEO = 'F:/OneDrive - Universidad de Cantabria/Series/AEMET/2016_pet080_UNICAN/data/Precipitacion/'
METEO = pd.read_csv(rutaMETEO + 'pcp_1950.csv', parse_dates=True, index_col=0)
stns = ['1115', '1117B', '1120', '1122I', '1127', '1127U', '1128', '1129']#, '1124'
#stns = ['1131I', '1136A', '1139E', '1140', '1144', '1151']
attrs = pd.read_csv(rutaMETEO + 'Estaciones_pcp.csv', encoding='latin1', index_col=0)
attrs = attrs.loc[stns,:]
attrs
attrs = attrs.loc[:,['NOMBRE', 'NOM_PROV', 'C_X', 'C_Y', 'ALTITUD']]
attrs.columns = ['NAME', 'PROVINCE', 'X', 'Y', 'Z']
attrs.index.name = 'CODE'
attrs
attrs.to_csv('../data/stations_pas.csv')
pcp_d = METEO.loc[:, stns]
pcp_d /= 10
pcp_d.to_csv('../data/daily_rainfall_Pas.csv', float_format='%.1f')
pcp_d.count()
annualMean = pcp_d.groupby(pcp_d.index.year).mean()
annualMean.head()
daysYear = pcp_d.groupby(pcp_d.index.year).count()
daysYear.head()
stn = stns[0]
plt.plot(annualMean[stn])
plt.plot(daysYear[stn])
annualMean.loc[daysYear[stn] > 330, stn].mean() * 365
Pan = pd.Series(index=stns, dtype=float)
for stn in stns:
    Pan[stn] = annualMean.loc[daysYear[stn] > 330, stn].mean() * 365
Pan
data = pd.concat((attrs.Z, Pan), axis=1)  # 'ALTITUD' was renamed to 'Z' above
data.columns = ['Z', 'Pan']
from scipy.stats import linregress
# fit the linear regression
m, n, *perf = linregress(data.Z, data.Pan)
print('P = {0:.3f} Z + {1:.3f}'.format(m, n))
perf
# plot the regression between elevation and annual precipitation
plt.scatter(data.Z, data.Pan)
# regression line
xlim = np.array([0, 1000])#ypso.Z.max()])
plt.plot(xlim, m * xlim + n, 'k--')
# plot settings
plt.title('', fontsize=16, weight='bold')
plt.xlabel('elevation (masl)', fontsize=13)
plt.xlim(xlim)
plt.ylabel('annual P (mm)', fontsize=13)
plt.ylim(0, 2200);
# save the figure
#plt.savefig('../output/Ex4_linear regression Z-Pannual.png', dpi=300)
```
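`linregress` returns slope and intercept first, followed by the `rvalue`, `pvalue` and `stderr` collected into `perf` above. On noise-free synthetic data it recovers the line exactly, which makes a handy sanity check (the numbers below are made up):

```python
import numpy as np
from scipy.stats import linregress

Z = np.array([100.0, 300.0, 500.0, 800.0])  # synthetic elevations (masl)
P = 1.2 * Z + 600.0                         # exact linear rainfall gradient

m, n, r, p, se = linregress(Z, P)
assert abs(m - 1.2) < 1e-9 and abs(n - 600.0) < 1e-6
assert abs(r - 1.0) < 1e-12                 # perfect correlation
```

With real station data, `r` and `stderr` are the quick indicators of how well a single elevation gradient explains annual precipitation.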
DEM
```
def read_ascii(filename, datatype='float'):
    """Import an ESRI ASCII raster. Data is returned as a 2D numpy array and the
    header attributes as a list of numbers.

    Parameters:
    -----------
    filename: string. Name (including path and extension) of the ASCII file

    Output:
    -------
    data: ndarray[nrows, ncols]. The data in the map
    attributes: list. The six header attributes:
        ncols: int. Number of columns
        nrows: int. Number of rows
        xllcorner: float. X coordinate of the lower left corner
        yllcorner: float. Y coordinate of the lower left corner
        cellsize: int. Spatial discretization
        NODATA_value: float. Value representing no data"""
    with open(filename, 'r') as file:
        # import all the lines in the file
        asc = file.readlines()
    # extract attributes
    ncols = int(asc[0].split()[1])
    nrows = int(asc[1].split()[1])
    xllcorner = float(asc[2].split()[1])
    yllcorner = float(asc[3].split()[1])
    cellsize = int(asc[4].split()[1])
    NODATA_value = float(asc[5].split()[1])
    attributes = [ncols, nrows, xllcorner, yllcorner, cellsize, NODATA_value]
    # extract data
    data = np.zeros((nrows, ncols))
    for i in range(nrows):
        data[i, :] = asc[i + 6].split()
    data[data == NODATA_value] = np.nan
    #data = np.ma.masked_invalid(data)
    data = data.astype(datatype)
    return data, attributes
def write_ascii(filename, data, attributes, format='%.0f '):
    """Export a 2D numpy array and its corresponding attributes as an ASCII raster.

    Parameters:
    -----------
    filename: string. Name (including path and extension) of the ASCII file
    data: ndarray. 2D array with the data to be exported
    attributes: list[6]. ncols, nrows, xllcorner, yllcorner, cellsize, NODATA_value
    format: string. Format in which the values in 'data' will be exported

    Output:
    -------
    An .asc raster file"""
    aux = data.copy()
    # unmask data if masked
    if np.ma.is_masked(aux):
        np.ma.set_fill_value(aux, attributes[5])
        aux = aux.filled()
    # convert NaN to NODATA_value
    aux[np.isnan(aux)] = attributes[5]
    # export ascii
    with open(filename, 'w+') as file:
        # write attributes
        file.write('ncols\t\t{0:<8}\n'.format(attributes[0]))
        file.write('nrows\t\t{0:<8}\n'.format(attributes[1]))
        file.write('xllcorner\t{0:<8}\n'.format(attributes[2]))
        file.write('yllcorner\t{0:<8}\n'.format(attributes[3]))
        file.write('cellsize\t{0:<8}\n'.format(attributes[4]))
        file.write('NODATA_value\t{0:<8}\n'.format(attributes[5]))
        # write data
        for i in range(aux.shape[0]):
            values = aux[i, :].tolist()
            file.writelines([format % item for item in values])
            file.write("\n")
dem, attributes = read_ascii('../data/dem_pas2.asc', datatype='float')
dem.shape
im = plt.imshow(dem, cmap='pink')
cb = plt.colorbar(im)
cb.set_label('elevation (masl)')
plt.axis('off');
np.nanmin(dem), np.nanmax(dem)
ncells = np.sum(~np.isnan(dem))
ncells
Zs = np.arange(start=0, stop=1701, step=100)
Zs
hypso = pd.DataFrame(index=Zs, columns=['Aac', 'A'])
for Z in Zs:
    hypso.loc[Z, 'Aac'] = np.sum(dem < Z) / ncells
    #hypso.loc[Z, 'A'] = ((np.sum(dem < Z) - np.sum(dem < Z - 100))) / ncells
hypso
area = pd.Series(index=Zs, dtype=float)
for Z in Zs:
    area[Z] = np.sum(dem < Z) - np.sum(dem < Z - 100)
plt.plot(Zs, hypso.Aac)
plt.title('Hypsometric curve', fontsize=16, weight='bold')
plt.xlabel('elevation (masl)', fontsize=13)
plt.ylabel('area (-)', fontsize=13);
```
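The hypsometric computation is just a cumulative count of cells below each elevation threshold, normalized by the number of valid cells. A self-contained sketch on a tiny synthetic DEM (values invented for illustration):

```python
import numpy as np

dem = np.array([[100., 250., np.nan],
                [400., 650., 900.],
                [1200., np.nan, 300.]])
ncells = np.sum(~np.isnan(dem))   # NaN cells fall outside the basin

Zs = np.arange(0, 1501, 100)
# NaN < Z evaluates to False, so no-data cells never enter the count
Aac = np.array([np.sum(dem < Z) / ncells for Z in Zs])

assert Aac[0] == 0.0              # nothing lies below 0 masl
assert Aac[-1] == 1.0             # everything lies below the top bound
assert np.all(np.diff(Aac) >= 0)  # cumulative curve is monotonic
```

The monotonic, 0-to-1 shape is what makes the curve comparable between basins of different sizes.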
```
import re
import numpy as np
import pandas as pd
import collections
from sklearn import metrics
from sklearn.preprocessing import LabelEncoder
import tensorflow as tf
from sklearn.model_selection import train_test_split
from unidecode import unidecode
from nltk.util import ngrams
from tqdm import tqdm
import time
permulaan = [
    'bel', 'se', 'ter', 'men', 'meng', 'mem', 'memper',
    'di', 'pe', 'me', 'ke', 'ber', 'pen', 'per',
]
hujung = ['kan', 'kah', 'lah', 'tah', 'nya', 'an', 'wan', 'wati', 'ita']
def naive_stemmer(word):
    assert isinstance(word, str), 'input must be a string'
    hujung_result = [e for e in hujung if word.endswith(e)]
    if len(hujung_result):
        hujung_result = max(hujung_result, key=len)
        word = word[: -len(hujung_result)]
    permulaan_result = [e for e in permulaan if word.startswith(e)]
    if len(permulaan_result):
        permulaan_result = max(permulaan_result, key=len)
        word = word[len(permulaan_result):]
    return word
def build_dataset(words, n_words):
    count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
    count.extend(collections.Counter(words).most_common(n_words))
    dictionary = dict()
    for word, _ in count:
        dictionary[word] = len(dictionary)
    data = list()
    unk_count = 0
    for word in words:
        index = dictionary.get(word, 3)  # 3 = UNK
        if index == 3:
            unk_count += 1
        data.append(index)
    count[3][1] = unk_count  # store the out-of-vocabulary count in the UNK entry
    reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reversed_dictionary
def classification_textcleaning(string):
    string = re.sub(
        'http\S+|www.\S+',
        '',
        ' '.join(
            [i for i in string.split() if i.find('#') < 0 and i.find('@') < 0]
        ),
    )
    string = unidecode(string).replace('.', ' . ').replace(',', ' , ')
    string = re.sub('[^A-Za-z ]+', ' ', string)
    string = re.sub(r'[ ]+', ' ', string).strip()
    string = ' '.join(
        [i for i in re.findall('[\\w\']+|[;:\-\(\)&.,!?"]', string) if len(i)]
    )
    string = string.lower().split()
    string = [naive_stemmer(word) for word in string]
    return ' '.join([word for word in string if len(word) > 1])

def str_idx(corpus, dic, maxlen, UNK=3):
    X = np.zeros((len(corpus), maxlen))
    for i in range(len(corpus)):
        for no, k in enumerate(corpus[i].split()[:maxlen][::-1]):
            X[i, -1 - no] = dic.get(k, UNK)
    return X
classification_textcleaning('kerajaan sebenarnya sangat bencikan rakyatnya, minyak naik dan segalanya')
import os
emotion_files = [f for f in os.listdir(os.getcwd()) if 'translated-' in f]
emotion_files
texts, labels = [], []
for f in emotion_files:
    with open(f) as fopen:
        dataset = list(filter(None, fopen.read().split('\n')))
    labels.extend([f.split('-')[1]] * len(dataset))
    texts.extend(dataset)
unique_labels = np.unique(labels).tolist()
labels = LabelEncoder().fit_transform(labels)
unique_labels
for i in range(len(texts)):
    texts[i] = classification_textcleaning(texts[i])
concat = ' '.join(texts).split()
vocabulary_size = len(list(set(concat)))
data, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size)
print('vocab size: %d' % (vocabulary_size))
print('Most common words', count[4:10])
print('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]])
max_features = len(dictionary)
maxlen = 100
batch_size = 32
embedded_size = 256
X = str_idx(texts, dictionary, maxlen)
train_X, test_X, train_Y, test_Y = train_test_split(X,
labels,
test_size = 0.2)
class Model:
    def __init__(self, embedded_size, dict_size, dimension_output, learning_rate):
        self.X = tf.placeholder(tf.int32, [None, None])
        self.Y = tf.placeholder(tf.int32, [None])
        encoder_embeddings = tf.Variable(
            tf.random_uniform([dict_size, embedded_size], -1, 1)
        )
        encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
        self.logits = tf.identity(
            tf.layers.dense(tf.reduce_mean(encoder_embedded, 1), dimension_output),
            name='logits',
        )
        self.cost = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                logits=self.logits, labels=self.Y
            )
        )
        self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.cost)
        correct_pred = tf.equal(
            tf.argmax(self.logits, 1, output_type=tf.int32), self.Y
        )
        self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(embedded_size, max_features, len(unique_labels), 5e-4)
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'fast-text/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'Placeholder' in n.name
or 'logits' in n.name)
and 'Adam' not in n.name
and 'beta' not in n.name
]
)
strings.split(',')
tf.trainable_variables()
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 5, 0, 0, 0
while True:
    lasttime = time.time()
    if CURRENT_CHECKPOINT == EARLY_STOPPING:
        print('break epoch:%d\n' % (EPOCH))
        break
    train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
    pbar = tqdm(range(0, len(train_X), batch_size), desc='train minibatch loop')
    for i in pbar:
        batch_x = train_X[i : min(i + batch_size, train_X.shape[0])]
        batch_y = train_Y[i : min(i + batch_size, train_X.shape[0])]
        acc, cost, _ = sess.run(
            [model.accuracy, model.cost, model.optimizer],
            feed_dict={model.X: batch_x, model.Y: batch_y},
        )
        assert not np.isnan(cost)
        train_loss += cost
        train_acc += acc
        pbar.set_postfix(cost=cost, accuracy=acc)
    pbar = tqdm(range(0, len(test_X), batch_size), desc='test minibatch loop')
    for i in pbar:
        batch_x = test_X[i : min(i + batch_size, test_X.shape[0])]
        batch_y = test_Y[i : min(i + batch_size, test_X.shape[0])]
        acc, cost = sess.run(
            [model.accuracy, model.cost],
            feed_dict={model.X: batch_x, model.Y: batch_y},
        )
        test_loss += cost
        test_acc += acc
        pbar.set_postfix(cost=cost, accuracy=acc)
    train_loss /= len(train_X) / batch_size
    train_acc /= len(train_X) / batch_size
    test_loss /= len(test_X) / batch_size
    test_acc /= len(test_X) / batch_size
    if test_acc > CURRENT_ACC:
        print(
            'epoch: %d, pass acc: %f, current acc: %f'
            % (EPOCH, CURRENT_ACC, test_acc)
        )
        CURRENT_ACC = test_acc
        CURRENT_CHECKPOINT = 0
    else:
        CURRENT_CHECKPOINT += 1
    print('time taken:', time.time() - lasttime)
    print(
        'epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'
        % (EPOCH, train_loss, train_acc, test_loss, test_acc)
    )
    EPOCH += 1
saver.save(sess, "fast-text/model.ckpt")
real_Y, predict_Y = [], []
pbar = tqdm(range(0, len(test_X), batch_size), desc='validation minibatch loop')
for i in pbar:
    batch_x = test_X[i : min(i + batch_size, test_X.shape[0])]
    batch_y = test_Y[i : min(i + batch_size, test_X.shape[0])]
    predict_Y += np.argmax(
        sess.run(model.logits, feed_dict={model.X: batch_x, model.Y: batch_y}),
        1,
    ).tolist()
    real_Y += batch_y.tolist()
from sklearn import metrics
print(metrics.classification_report(real_Y, predict_Y, target_names=unique_labels))
import json
# Save the vocabulary first, then reload it to vectorize new text
with open('fast-text-emotion.json', 'w') as fopen:
    fopen.write(json.dumps({'dictionary': dictionary, 'reverse_dictionary': rev_dictionary}))
with open('fast-text-emotion.json') as fopen:
    p = json.load(fopen)
text = 'kerajaan sebenarnya sangat sayangkan rakyatnya, tetapi sebenarnya benci'
new_vector = str_idx([classification_textcleaning(text)], p['dictionary'], len(text.split()))
#sess.run(tf.nn.softmax(model.logits), feed_dict={model.X:new_vector})
new_vector
def freeze_graph(model_dir, output_node_names):
    if not tf.gfile.Exists(model_dir):
        raise AssertionError(
            "Export directory doesn't exist. Please specify an export "
            'directory: %s' % model_dir
        )
    checkpoint = tf.train.get_checkpoint_state(model_dir)
    input_checkpoint = checkpoint.model_checkpoint_path
    absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
    output_graph = absolute_model_dir + '/frozen_model.pb'
    clear_devices = True
    with tf.Session(graph=tf.Graph()) as sess:
        saver = tf.train.import_meta_graph(
            input_checkpoint + '.meta', clear_devices=clear_devices
        )
        saver.restore(sess, input_checkpoint)
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess,
            tf.get_default_graph().as_graph_def(),
            output_node_names.split(','),
        )
        with tf.gfile.GFile(output_graph, 'wb') as f:
            f.write(output_graph_def.SerializeToString())
        print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('fast-text', strings)
def load_graph(frozen_graph_filename):
    with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)
    return graph
g = load_graph('fast-text/frozen_model.pb')
x = g.get_tensor_by_name('import/Placeholder:0')
logits = g.get_tensor_by_name('import/logits:0')
test_sess = tf.InteractiveSession(graph = g)
test_sess.run(tf.nn.softmax(logits), feed_dict = {x: new_vector})
labels
texts[0]
text = 'bodoh sial'
new_vector = str_idx([classification_textcleaning(text)],p['dictionary'], len(text.split()))
test_sess.run(tf.nn.softmax(logits), feed_dict = {x: new_vector})
new_vector
len(text.split())
classification_textcleaning(text)
```
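The `naive_stemmer` above strips the longest matching suffix (`hujung`) first, then the longest matching prefix (`permulaan`). A condensed, self-contained copy makes the behaviour concrete (same affix lists as the notebook, renamed here for clarity):

```python
prefixes = ['bel', 'se', 'ter', 'men', 'meng', 'mem', 'memper',
            'di', 'pe', 'me', 'ke', 'ber', 'pen', 'per']
suffixes = ['kan', 'kah', 'lah', 'tah', 'nya', 'an', 'wan', 'wati', 'ita']

def naive_stem(word):
    # Longest matching suffix is removed first...
    ends = [s for s in suffixes if word.endswith(s)]
    if ends:
        word = word[:-len(max(ends, key=len))]
    # ...then the longest matching prefix
    starts = [p for p in prefixes if word.startswith(p)]
    if starts:
        word = word[len(max(starts, key=len)):]
    return word

assert naive_stem('sayangkan') == 'sayang'      # 'kan' stripped, no prefix match
assert naive_stem('bencikan') == 'benci'        # 'kan' stripped
assert naive_stem('memperbaiki') == 'baiki'     # 'memper' beats 'mem' and 'me'
```

Taking the longest match is what keeps `memper-` from being truncated to just `me-`; the trade-off of this naive approach is that it also strips accidental matches in root words.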
```
import pandas as pd
import matplotlib.pylab as plt
import seaborn as sns
import numpy as np
import os
types_names = {90:'Ia', 67: '91bg', 52:'Iax', 42:'II', 62:'Ibc',
95: 'SLSN', 15:'TDE', 64:'KN', 88:'AGN', 92:'RRL', 65:'M-dwarf',
16:'EB',53:'Mira', 6:'MicroL', 991:'MicroLB', 992:'ILOT',
993:'CART', 994:'PISN',995:'MLString'}
cases = os.listdir('/media/emille/git/COIN/RESSPECT_work/PLAsTiCC/metrics_paper/resspect_metric/SALT2_fit/WFD/')
#cases.remove('fiducial.csv')
#cases.remove('random.csv')
cases.remove('.ipynb_checkpoints')
cases.remove('perfect3000_0.csv')
cases.remove('perfect1500.csv')
#cases.remove('non-survivors')
#cases.remove('fidtucial6000fail5999')
```
# Run wfit again
```
for name in cases:
    print(os.getcwd())
    print(name)
    os.chdir('/media/emille/git/COIN/RESSPECT_work/PLAsTiCC/metrics_paper/resspect_metric/posteriors/WFD/' + \
             name[:-4] + '/test_mysamples/omprior_0.01_flat/results/')
    print(os.getcwd())
    os.system('wfit.exe test_salt2mu_' + name[:-4] + '.M0DIF -ompri 0.3 -dompri 0.01 -ommin 0.299 -ommax 0.301 ' + \
              '-hmin 70 -hmax 70 -hsteps 1 -wmin -10 -wmax 9')
    os.system('wfit.exe test_salt2mu_lowz_withbias_' + name[:-4] + '.M0DIF -ompri 0.3 -dompri 0.01 -ommin 0.299 -ommax 0.301 ' + \
              '-hmin 70 -hmax 70 -hsteps 1 -wmin -10 -wmax 9')
    os.chdir('../../')
```
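Shelling out with `os.chdir` + `os.system` mutates global process state and silently ignores failures. A sketch of the same loop using `subprocess.run(..., cwd=...)`, with paths and `wfit.exe` flags taken from the cell above (whether `wfit.exe` is on `PATH` in the target environment is an assumption):

```python
import subprocess

BASE = ('/media/emille/git/COIN/RESSPECT_work/PLAsTiCC/metrics_paper/'
        'resspect_metric/posteriors/WFD')
WFIT_ARGS = ['-ompri', '0.3', '-dompri', '0.01', '-ommin', '0.299', '-ommax', '0.301',
             '-hmin', '70', '-hmax', '70', '-hsteps', '1', '-wmin', '-10', '-wmax', '9']

def wfit_cmd(m0dif_file):
    """Build the wfit.exe command line for one M0DIF file."""
    return ['wfit.exe', m0dif_file] + WFIT_ARGS

def run_wfit(name):
    workdir = f'{BASE}/{name[:-4]}/test_mysamples/omprior_0.01_flat/results'
    for prefix in ('test_salt2mu_', 'test_salt2mu_lowz_withbias_'):
        # cwd= confines the directory change to the child process,
        # and check=True raises if wfit exits with an error
        subprocess.run(wfit_cmd(prefix + name[:-4] + '.M0DIF'), cwd=workdir, check=True)

cmd = wfit_cmd('test_salt2mu_perfect.M0DIF')  # hypothetical case name
```

This keeps the notebook's working directory untouched between cells, which also removes the need for the `os.chdir('../../')` bookkeeping.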
# check percentages
```
for name in cases:
    fname = '/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/samples/' + name  # same samples directory used further below
    data = pd.read_csv(fname)
    if 'code_zenodo' in data.keys():
        types, freq = np.unique(data['code_zenodo'].values, return_counts=True)
    else:
        types, freq = np.unique(data['code'].values, return_counts=True)
    print('\n')
    print('case: ' + name)
    for i in range(len(types)):
        print('perc ' + types_names[types[i]] + ' : ', round(freq[i] / data.shape[0], 2))
    print('Total number: ', data.shape[0])
    print('\n')
names = []
pop_Ia_all = []
pop_nIa_all = []
perc_Ia_all = []
perc_nIa_all = []
wfit_w_all = []
wfit_wsig_all = []
wfit_om_all = []
wfit_omsig_all = []
wfit_w_all_lowz = []
wfit_wsig_all_lowz = []
wfit_om_all_lowz = []
wfit_omsig_all_lowz = []
stan_w_all = []
stan_wsig_all = []
stan_om_all = []
stan_omsig_all = []
stan_w_all_lowz = []
stan_wsig_all_lowz = []
stan_om_all_lowz = []
stan_omsig_all_lowz = []
other_index = []
other_name = []
for case in cases:
names.append(case[:-4])
pop = {}
perc = {}
samples_dir = '/media2/RESSPECT/data/PLAsTiCC/for_metrics/posteriors/WFD/' + case[:-4] + '/'
fname = '/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/samples/' + case
data = pd.read_csv(fname)
if 'code_zenodo' in data.keys():
stats = np.unique(data['code_zenodo'].values, return_counts=True)
else:
stats = np.unique(data['code'].values, return_counts=True)
fname_cosmo = case[:-4] + '/test_mysamples/results/test_salt2mu_' + case[:-4] + '.M0DIF.cospar'
cosmofit = pd.read_csv(fname_cosmo, delim_whitespace=True,
comment='#', names=['w', 'wsig_marg', 'OM', 'OM_sig', 'chi2',
'Ndof', 'sigint', 'wran', 'OMran', 'label'])
wfit_w_all.append(cosmofit['w'].values[0])
wfit_wsig_all.append(cosmofit['wsig_marg'].values[0])
wfit_om_all.append(cosmofit['OM'].values[0])
wfit_omsig_all.append(cosmofit['OM_sig'].values[0])
fname_cosmo_lowz = case[:-4] + '/results/test_salt2mu_lowz_withbias_' + case[:-4] + '.M0DIF.cospar'
cosmofit_lowz = pd.read_csv(fname_cosmo_lowz, delim_whitespace=True,
comment='#', names=['w', 'wsig_marg', 'OM', 'OM_sig', 'chi2',
'Ndof', 'sigint', 'wran', 'OMran', 'label'])
wfit_w_all_lowz.append(cosmofit_lowz['w'].values[0])
wfit_wsig_all_lowz.append(cosmofit_lowz['wsig_marg'].values[0])
wfit_om_all_lowz.append(cosmofit_lowz['OM'].values[0])
wfit_omsig_all_lowz.append(cosmofit_lowz['OM_sig'].values[0])
fname_stan = case[:-4] + '/results/stan_summary_' + case[:-4] + '.dat'
op1 = open(fname_stan, 'r')
lin1 = op1.readlines()
op1.close()
for i in range(len(lin1)):
if lin1[i].split(' ')[0] == 'om':
c = 3
found = False
while not found:
if lin1[i].split(' ')[c] != '':
stan_om_all.append(lin1[i].split(' ')[c])
found=True
else:
c = c + 1
d = 8
found2 = False
while not found2:
if lin1[i].split(' ')[d] != '':
stan_omsig_all.append(lin1[i].split(' ')[d])
found2 = True
else:
d = d + 1
elif lin1[i].split(' ')[0] == 'w':
c = 3
found = False
while not found:
if lin1[i].split(' ')[c] != '':
stan_w_all.append(lin1[i].split(' ')[c])
found=True
else:
c = c + 1
d = 8
found2 = False
while not found2:
if lin1[i].split(' ')[d] != '':
stan_wsig_all.append(lin1[i].split(' ')[d])
found2 = True
else:
d = d + 1
fname_stan = case[:-4] + '/results/stan_summary_' + case[:-4] + '_lowz_withbias.dat'
op2 = open(fname_stan, 'r')
lin2 = op2.readlines()
op2.close()
for j in range(len(lin2)):
if lin2[j].split(' ')[0] == 'om':
stan_om_all_lowz.append(lin2[j].split(' ')[3])
stan_omsig_all_lowz.append(lin2[j].split(' ')[8])
elif lin2[j].split(' ')[0] == 'w':
c = 3
found = False
while not found:
if lin2[j].split(' ')[c] != '':
stan_w_all_lowz.append(lin2[j].split(' ')[c])
found=True
else:
c = c + 1
d = 8
found2 = False
while not found2:
if lin2[j].split(' ')[d] != '':
stan_wsig_all_lowz.append(lin2[j].split(' ')[d])
found2 = True
else:
d = d + 1
flag_Ia = np.array(stats[0]) == 90
pop[90] = stats[1][flag_Ia][0]
perc[90] = round(100 * stats[1][flag_Ia][0]/data.shape[0])
if len(stats[0]) == 2:
other_code = [item for item in stats[0] if item != 90][0]
pop[other_code] = stats[1][~flag_Ia][0]
perc[other_code] = round(100 * stats[1][~flag_Ia][0] / data.shape[0])
pop_nIa_all.append(pop[other_code])
perc_nIa_all.append(perc[other_code])
other_index.append(other_code)
other_name.append(types_names[other_code])
elif len(stats[0]) > 2:
other_code = [item for item in stats[0] if item !=90]
for item in range(flag_Ia.shape[0]):
if not flag_Ia[item]:
pop[stats[0][item]] = stats[1][item]
perc[stats[0][item]] = round(100 * stats[1][item]/data.shape[0])
pop_nIa_all.append([pop[item] for item in other_code])
perc_nIa_all.append([perc[item] for item in other_code])
other_index.append(other_code)
other_name.append([types_names[i] for i in other_code])
elif len(stats[0]) == 1:
other_code = '--'
pop_nIa_all.append(None)
perc_nIa_all.append(None)
other_index.append(None)
other_name.append(None)
pop_Ia_all.append(pop[90])
perc_Ia_all.append(perc[90])
data_all = {}
data_all['case'] = names
data_all['other_name'] = other_name
data_all['other_code'] = other_index
data_all['nIa'] = pop_Ia_all
data_all['nothers'] = pop_nIa_all
data_all['perc_Ia'] = perc_Ia_all
data_all['perc_others'] = perc_nIa_all
data_all['wfit_w'] = wfit_w_all
data_all['wfit_wsig'] = wfit_wsig_all
data_all['wfit_om'] = wfit_om_all
data_all['wfit_omsig'] = wfit_omsig_all
data_all['wfit_w_lowz'] = wfit_w_all_lowz
data_all['wfit_wsig_lowz'] = wfit_wsig_all_lowz
data_all['wfit_om_lowz'] = wfit_om_all_lowz
data_all['wfit_omsig_lowz'] = wfit_omsig_all_lowz
data_all['stan_w'] = stan_w_all
data_all['stan_wsig'] = stan_wsig_all
data_all['stan_om'] = stan_om_all
data_all['stan_omsig'] = stan_omsig_all
data_all['stan_w_lowz'] = stan_w_all_lowz
data_all['stan_wsig_lowz'] = stan_wsig_all_lowz
data_all['stan_om_lowz'] = stan_om_all_lowz
data_all['stan_omsig_lowz'] = stan_omsig_all_lowz
data_all = pd.DataFrame(data_all)
data_all
data_all.to_csv('summary_cases.csv', index=False)
data_all.to_csv('/media2/RESSPECT2/data/posteriors/WFD/summary_cases_WFD.csv', index=False)
data_all = pd.read_csv('summary_cases.csv', index_col=False)
flag_w = data_all['wfit_w'].values < 1000
plt.figure(figsize=(16,15))
plt.subplot(3,2,1)
plt.hist(data_all['wfit_w'][~flag_w], color='darkblue')
plt.hist(data_all['wfit_w'][flag_w], color='blue')
plt.xlabel('w_from_wfit', fontsize=14)
plt.ylabel('N', fontsize=14)
plt.subplot(3,2,2)
plt.hist(data_all['wfit_wsig'][~flag_w], color='darkblue')
plt.hist(data_all['wfit_wsig'][flag_w], color='blue')
plt.xlabel('wsig_from_wfit', fontsize=14)
plt.ylabel('N', fontsize=14)
plt.subplot(3,2,3)
plt.hist(data_all['stan_w'][~flag_w], color='b')
plt.hist(data_all['stan_w'][flag_w], color='darkblue')
plt.xlabel('w_from_stan', fontsize=14)
plt.ylabel('N', fontsize=14)
plt.subplot(3,2,4)
plt.hist(data_all['stan_wsig'][~flag_w], color='b')
plt.hist(data_all['stan_wsig'][flag_w], color='darkblue')
plt.xlabel('wsig_from_stan', fontsize=14)
plt.ylabel('N', fontsize=14)
plt.subplot(3,2,5)
plt.hist(data_all['stan_w_lowz'][~flag_w], color='green', alpha=0.5)
plt.hist(data_all['stan_w_lowz'][flag_w], color='brown')
plt.xlabel('w_from_stan_with_lowz', fontsize=14)
plt.ylabel('N', fontsize=14)
plt.subplot(3,2,6)
plt.hist(data_all['stan_wsig_lowz'][~flag_w], color='green', alpha=0.5)
plt.hist(data_all['stan_wsig_lowz'][flag_w], color='brown')
plt.xlabel('wsig_from_stan_with_lowz', fontsize=14)
plt.ylabel('N', fontsize=14)
plt.show()
def highlight_col(x):
r = 'background-color: pink'
df1 = pd.DataFrame('', index=x.index, columns=x.columns)
df1.iloc[:, 5] = r
return df1
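# Usage sketch (assumption: column index 5 — 'perc_Ia' in data_all — is the
# one meant to be highlighted). Styler.apply with axis=None receives the whole
# DataFrame and must return a same-shaped DataFrame of CSS strings:
data_all.style.apply(highlight_col, axis=None)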
wdiff = data_all['wfit_w'].values - data_all['stan_w'].values
plt.figure(figsize=(16,5))
plt.subplot(1,2,1)
plt.hist(data_all['stan_w'][~flag_w], color='b')
plt.hist(data_all['stan_w'][flag_w], color='darkblue')
plt.xlabel('w_from_stan', fontsize=14)
plt.ylabel('N', fontsize=14)
plt.subplot(1,2,2)
plt.hist(wdiff[~flag_w], color='darkblue')
plt.hist(wdiff[flag_w], color='b')
plt.xlabel('wfit_w - stan_w', fontsize=14)
plt.ylabel('N', fontsize=14)
plt.show()
data_all['case']
from astropy.cosmology import FlatLambdaCDM
cosmo = FlatLambdaCDM(H0=72, Om0=0.3)
theor_dist = [cosmo.distmod(z).value for z in np.arange(0.001,1.5,0.005)]
for name in cases:
fname_fitres = name[:-4] + '/results/test_salt2mu_' + name[:-4] + '.fitres'
fitres = pd.read_csv(fname_fitres, comment='#', delim_whitespace=True)
flag = fitres['SIM_TYPE_INDEX'].values == 11
z = fitres['SIM_ZCMB'].values
mu = fitres['MU'].values
muerr = fitres['MUERR'].values
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(111)
ax.set_title(name, fontsize=26)
if sum(flag) > 0:
plt.errorbar(z[flag], mu[flag], yerr=muerr[flag], fmt='o', alpha=0.1, label='spec-Ia', color='blue')
if sum(~flag) > 0:
plt.errorbar(z[~flag], mu[~flag], yerr=muerr[~flag], fmt='^', alpha=0.1, label='photo-Ia', color='green')
plt.plot(np.arange(0.001, 1.5,0.005), theor_dist, label='w = -1', color='red')
w = str(cosmofit['w'].values[0])
if len(w) >= 6:
w1 = w[:6]
else:
w1 = w.ljust(6, '0')
werr = str(cosmofit['wsig_marg'].values[0])
if len(werr) >= 6:
werr1 = werr[:5]
else:
werr1 = werr.ljust(5, '0')
flag_case = data_all['case'].values == name[:-4]
ax.text(0.2, 32, 'stan = ' + str(data_all[flag_case]['stan_w'].values[0]) + r' $\pm$ ' + str(data_all[flag_case]['stan_wsig'].values[0]), fontsize=20)
ax.text(0.2, 30, r'wfit = ' + w1 + r' $\pm$ ' + werr1 , fontsize=20)
ax.set_xlabel('redshift', fontsize=22)
ax.set_ylabel('mu', fontsize=22)
plt.legend(fontsize=22, loc='lower right')
plt.savefig('plots/distances/dist_' + name[:-4] + '.png')
plt.close('all')
for name in cases:
fname_fitres = name[:-4] + '/results/test_salt2mu_lowz_withbias_' + name[:-4] + '.fitres'
fitres = pd.read_csv(fname_fitres, comment='#', delim_whitespace=True)
flag = np.logical_or(fitres['SIM_TYPE_INDEX'].values == 11, fitres['SIM_TYPE_INDEX'].values == 1)
z = fitres['SIM_ZCMB'].values
mu = fitres['MU'].values
muerr = fitres['MUERR'].values
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(111)
ax.set_title(name, fontsize=26)
if sum(flag) > 0:
#data2 = pd.concat([data[flag], lowz], ignore_index=True)
plt.errorbar(z[flag], mu[flag], yerr=muerr[flag], fmt='o', alpha=0.1, label='spec-Ia', color='blue')
if sum(~flag) > 0:
plt.errorbar(z[~flag], mu[~flag], yerr=muerr[~flag], fmt='^', alpha=0.1, label='photo-Ia', color='green')
plt.plot(np.arange(0.001, 1.5,0.005), theor_dist, label='w = -1', color='red')
w = str(cosmofit_lowz['w'].values[0])
if len(w) >= 6:
w1 = w[:6]
else:
w1 = w.ljust(6, '0')
werr = str(cosmofit_lowz['wsig_marg'].values[0])
if len(werr) >= 6:
werr1 = werr[:5]
else:
werr1 = werr.ljust(5, '0')
flag_case = data_all['case'].values == name[:-4]
ax.text(0.2, 32, 'stan = ' + str(data_all[flag_case]['stan_w_lowz'].values[0]) + r' $\pm$ ' + \
str(data_all[flag_case]['stan_wsig_lowz'].values[0]), fontsize=20)
ax.text(0.2, 30, r'wfit = ' + w1 + r' $\pm$ ' + werr1 , fontsize=20)
ax.set_xlabel('redshift', fontsize=22)
ax.set_ylabel('mu', fontsize=22)
plt.legend(fontsize=22, loc='lower right')
plt.savefig('plots/distances/dist_' + name[:-4] + '_lowz_withbias.png')
plt.close('all')
```
| github_jupyter |
# [Code Hello World](https://academy.dqlab.id/main/livecode/45/110/524)
```
print(10*2+5)
print("Academy DQLab")
```
# [Adding Comments in Python](https://academy.dqlab.id/main/livecode/45/110/525)
```
print(10*2+5) # arithmetic operation
print("Academy DQLab") # prints a sentence
```
# [Printing Data Type](https://academy.dqlab.id/main/livecode/45/110/527)
```
var_string="Belajar Python DQLAB"
var_int=10
var_float=3.14
var_list=[1,2,3,4]
var_tuple=("satu","dua","tiga")
var_dict={"nama":"Ali", 'umur':20}
print(var_string)
print(var_int)
print(var_float)
print(var_list)
print(var_tuple)
print(var_dict)
print(type(var_string))
print(type(var_int))
print(type(var_float))
print(type(var_list))
print(type(var_tuple))
print(type(var_dict))
```
# [IF Statement](https://academy.dqlab.id/main/livecode/45/111/529)
```
i = 7 # initialize variable i with the value 7
if(i==10): # check whether i equals 10
print("ini adalah angka 10") # if TRUE, print this sentence
```
# [IF … ELSE …](https://academy.dqlab.id/main/livecode/45/111/530)
```
i = 5 # initialize variable i with the value 5
if(i==10): # check whether i equals 10
print("ini adalah angka 10") # if TRUE, print this sentence
else:
print("bukan angka 10") # if FALSE, print this sentence
```
# [IF … ELIF … ELSE ….](https://academy.dqlab.id/main/livecode/45/111/531)
```
i=3
if(i==5):
print("ini adalah angka 5")
elif(i>5):
print("lebih besar dari 5")
else:
print("lebih kecil dari 5")
```
# [NESTED IF](https://academy.dqlab.id/main/livecode/45/111/532)
```
i = 5 # initialize i (assumed value; the original snippet relies on i from an earlier example)
if (i<7):
print("nilai i kurang dari 7")
if (i<3):
print("nilai i kurang dari 7 dan kurang dari 3")
else:
print("nilai i kurang dari 7 tapi lebih dari 3")
```
# [Math Operations Practice](https://academy.dqlab.id/main/livecode/45/112/534)
```
a=10
b=5
selisih = a-b
jumlah = a+b
kali = a*b
bagi = a/b
print("Hasil penjumlahan a dan b adalah :", jumlah)
print("Selisih a dan b adalah :",selisih)
print("Hasil perkalian a dan b adalah :",kali)
print("Hasil pembagian a dan b adalah:",bagi)
```
# [Modulus Operation](https://academy.dqlab.id/main/livecode/45/112/536)
```
c=10
d=3
modulus=c%d
print("Hasil modulus",modulus)
```
# [Mid-Practice Assignment](https://academy.dqlab.id/main/livecode/45/112/538)
```
angka=5
if(angka%2 == 0):
print("angka termasuk bilangan genap")
else:
print("angka termasuk bilangan ganjil")
```
# [while](https://academy.dqlab.id/main/livecode/45/113/540)
```
j = 0 # initial value j = 0
while j<6: # loop while j is less than 6; otherwise stop
print("Ini adalah perulangan ke -",j) # executed on every iteration
j=j+1 # at the end of each iteration, increment j by 1
```
# [for (1)](https://academy.dqlab.id/main/livecode/45/113/542)
```
for i in range (1,6): # for loop iterating from 1 up to (but not including) 6
print("Ini adalah perulangan ke -", i) # executed on each pass of the loop
```
# [for (2) with access element](https://academy.dqlab.id/main/livecode/45/113/543)
```
for i in range (1,11):
if(i%2 == 0):
print("Angka genap",i)
else:
print("Angka ganjil",i)
```
# [Creating Your Own Function](https://academy.dqlab.id/main/livecode/45/114/545)
```
# Define the function
def salam():
print("Hello, Selamat Pagi")
## Call the function
salam()
```
# [Function Parameters](https://academy.dqlab.id/main/livecode/45/114/546)
```
def luas_segitiga(alas, tinggi): # alas (base) and tinggi (height) are the incoming parameters
luas = (alas * tinggi) / 2
print("Luas segitiga: %f" % luas);
# Call the function
## 4 and 6 are the arguments passed into the luas_segitiga function
luas_segitiga(4, 6)
```
# [Functions with Return Values](https://academy.dqlab.id/main/livecode/45/114/547)
```
def luas_segitiga(alas, tinggi): # alas (base) and tinggi (height) are the incoming parameters
luas = (alas * tinggi) / 2
return luas
# Call the function
## 4 and 6 are the arguments passed into the luas_segitiga function
print("Luas segitiga: %d" % luas_segitiga(4, 6))
```
# [Importing Packages and Using Modules](https://academy.dqlab.id/main/livecode/45/115/549)
```
import math
print("Nilai pi adalah:", math.pi) # math.pi is the syntax for accessing the module's value
```
# [Importing with a Module Rename or Alias](https://academy.dqlab.id/main/livecode/45/115/550)
```
import math as m # use m as the module rename, or alias
print("Nilai pi adalah:", m.pi) # m.pi accesses the value through the alias
```
# [Importing Specific Functions](https://academy.dqlab.id/main/livecode/45/115/560)
```
from math import pi
print("Nilai pi adalah", pi)
```
# [Importing All Module Contents](https://academy.dqlab.id/main/livecode/45/115/561)
```
from math import *
print("Nilai e adalah:", e)
```
# [Reading a Text File (CSV)](https://academy.dqlab.id/main/livecode/45/116/552)
```
import csv
# specify the file location and name, and initialize the csv reader
f = open('penduduk_gender_head.csv', 'r')
reader = csv.reader(f)
# read row by row
for row in reader:
print (row)
# close the csv file
f.close()
```
# [Reading a CSV File with Pandas](https://academy.dqlab.id/main/livecode/45/116/553)
```
import pandas as pd
table = pd.read_csv("https://academy.dqlab.id/dataset/penduduk_gender_head.csv")
table.head()
print(table)
```
# [Bar Chart](https://academy.dqlab.id/main/livecode/45/117/555)
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
table = pd.read_csv("https://academy.dqlab.id/dataset/penduduk_gender_head.csv")
table.head()
x_label = table['NAMA KELURAHAN']
plt.bar(x=np.arange(len(x_label)),height=table['LAKI-LAKI WNI'])
plt.show()
```
# [Chart Parameters (Setting Axis Values from CSV Data)](https://academy.dqlab.id/main/livecode/45/117/556)
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
table = pd.read_csv("https://academy.dqlab.id/dataset/penduduk_gender_head.csv")
table.head()
x_label = table['NAMA KELURAHAN']
plt.bar(x=np.arange(len(x_label)),height=table['LAKI-LAKI WNI'])
plt.xticks(np.arange(len(x_label)), table['NAMA KELURAHAN'], rotation=30)
plt.show()
```
# [Adding a Title and Labels to a Chart](https://academy.dqlab.id/main/livecode/45/117/557)
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
table = pd.read_csv("https://academy.dqlab.id/dataset/penduduk_gender_head.csv")
table.head()
x_label = table['NAMA KELURAHAN']
plt.bar(x=np.arange(len(x_label)),height=table['LAKI-LAKI WNI'])
plt.xticks(np.arange(len(x_label)), table['NAMA KELURAHAN'], rotation=90)
plt.xlabel('Kelurahan di Jakarta Pusat')
plt.ylabel('Jumlah Penduduk Laki - Laki')
plt.title('Persebaran Jumlah Penduduk Laki-Laki di Jakarta Pusat')
plt.show()
```
| github_jupyter |
```
import random
from collections import Counter
import numpy as np
from googletrans import Translator
from nltk.tokenize import word_tokenize
import codecs
hm_lines = 5000000
translator = Translator()
stopwords = codecs.open("hindi_stopwords.txt", "r", encoding='utf-8', errors='ignore').read().split('\n')
def create_lexicon(pos_hin, neg_eng, pos_eng, neg_hin):
lexicon = []
for file_name in [pos_hin, neg_eng, pos_eng, neg_hin]:
with codecs.open(file_name, 'r',encoding='utf-8',errors='ignore') as f:
contents = f.read()
for line in contents.split('$'):
data = line.strip('\n')
if data:
all_words = word_tokenize(data)
lexicon += list(all_words)
lexicons = []
for word in lexicon:
if not word in stopwords:
lexicons.append(word)
word_counts = Counter(lexicons) # Counter: dict-like mapping of word -> frequency
l2 = []
for word in word_counts:
if 60 > word_counts[word]:
l2.append(word)
return l2
def sample_handling(sample, lexicon, classification):
featureset = []
with codecs.open(sample, 'r', encoding="utf8",errors='ignore') as f:
contents = f.read()
for line in contents.split('$'):
data = line.strip('\n')
if data:
all_words = word_tokenize(data)
all_words_new = []
for word in all_words:
if not word in stopwords:
all_words_new.append(word)
features = np.zeros(len(lexicon))
for word in all_words_new:
if word in lexicon:
idx = lexicon.index(word)
features[idx] = 1
features = list(features)
featureset.append([features, classification])
return featureset
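# Illustration (hypothetical mini-example, not part of the pipeline): how
# sample_handling encodes one tokenized line against a lexicon as a binary
# bag-of-words vector.
demo_lexicon = ['good', 'movie', 'bad']
demo_tokens = ['good', 'movie']
demo_features = np.zeros(len(demo_lexicon))
for demo_word in demo_tokens:
    if demo_word in demo_lexicon:
        demo_features[demo_lexicon.index(demo_word)] = 1
print(demo_features)  # [1. 1. 0.]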
def create_feature_set_and_labels(pos_hin, neg_eng, pos_eng, neg_hin, test_size=0.2):
lexicon = create_lexicon(pos_hin, neg_eng, pos_eng, neg_hin)
features = []
features += sample_handling(pos_hin, lexicon, 1)
features += sample_handling(neg_eng, lexicon, 0)
features += sample_handling(pos_eng, lexicon, 1)
features += sample_handling(neg_hin, lexicon, 0)
random.shuffle(features)
features = np.array(features)
#print(len(features))
testing_size = int((1 - test_size) * len(features))
x_train = list(features[:, 0][:testing_size]) # taking features array upto testing_size
y_train = list(features[:, 1][:testing_size]) # taking labels upto testing_size
x_test = list(features[:, 0][testing_size:])
y_test = list(features[:, 1][testing_size:])
return x_train, y_train, x_test, y_test
def check_class(text, lexicon):
line = translator.translate(text, dest='hi').text
# NOTE: SupervisedDBNClassification is assumed to be imported elsewhere
# (e.g. from the deep-belief-network package), with 'dbn.pkl' trained beforehand.
classifier = SupervisedDBNClassification.load('dbn.pkl')
predict_set = []
all_words = word_tokenize(line)
# all_words = [lemmatizer.lemmatize(i) for i in all_words]
features = np.zeros(len(lexicon))
for word in all_words:
if word in lexicon:
idx = lexicon.index(word)
features[idx] += 1
features = list(features)
predict_set.append(features)
predict_set = np.array(predict_set, dtype=np.float32)
predict_set = classifier.predict(predict_set)
#print(predict_set)
return predict_set
def create_feature_set_and_labels_simple(pos, neg, test_size=0.2):
# NOTE: assumes a two-argument variant of create_lexicon; the four-file
# version defined above expects all four corpus paths.
lexicon = create_lexicon(pos, neg)
features = []
features += sample_handling(pos, lexicon, [1, 0])
features += sample_handling(neg, lexicon, [0, 1])
random.shuffle(features)
features = np.array(features)
#print(len(features))
testing_size = int((1 - test_size) * len(features))
x_train = list(features[:, 0][:testing_size])
y_train = list(features[:, 1][:testing_size])
x_test = list(features[:, 0][testing_size:])
y_test = list(features[:, 1][testing_size:])
return x_train, y_train, x_test, y_test
if __name__ == '__main__':
create_lexicon('pos_hindi.txt', 'neg_hindi.txt', 'pos_eng.txt', 'neg_eng.txt')
```
| github_jupyter |
## Setting Up the Development Environment
### Anaconda
Anaconda is a free platform for the Python and R programming languages aimed at large-scale data processing, predictive analytics, and scientific computing, designed to simplify package management and deployment.
It bundles many third-party libraries for data processing and scientific computing, so we do not need to install them separately. Anaconda also provides powerful package-management tools.
Download the installer for your platform from the Anaconda website (https://www.anaconda.com/download).
### Anaconda Navigator
![image.png](attachment:image.png)
### TIPS:
Installing on Windows requires a specific option to be selected:
![image.png](attachment:image.png)
### Creating a Runtime Environment
1. In PyCharm:
![image.png](attachment:image.png)
2. In Anaconda
![image.png](attachment:image.png)
3. On the command line
![image.png](attachment:image.png)
4. Installing with pip
![image.png](attachment:image.png)
## Essential Python Syntax
### __main__
When learning basic Python syntax, you will often meet this line at the end of a program, so here is a brief explanation of what it means. In short, it ensures that the current .py file can be run directly while also being importable as a module by other .py files. A few examples will help make this clearer.
First, create a .py file — here named Hello.py — and run the following line in it.
```python
print(__name__)
```
Building on this understanding of __name__, create a new file name_main.py containing the following code.
```python
def printHello():
print("Hello World!")
print(__name__)
if __name__ == '__main__':
printHello()
```
### List Comprehensions
A list comprehension is a convenient Python construct for extracting data from a list, similar to set-builder notation in mathematics. It can always be replaced by an equivalent — if somewhat more verbose — for loop. Consider the following example.
```python
list1 = [1,2,3,4,5]
l_even = [i for i in list1 if i%2 == 0]
l_even
```
The list l_even can equally be built with a for loop:
```python
l_even = []
for i in list1:
if i%2 == 0:
l_even.append(i)
```
The latter is clearly a bit more cumbersome. Beginners should learn to understand a comprehension's intent by unrolling it this way. There is no need to chase elaborate comprehensions in your own work; with practice they become second nature. Above all, never sacrifice readability for the sake of apparent brevity.
### Decorators
Decorators are used to "decorate" — that is, to enhance — a program. They are typically applied to functions and classes; here we cover only function decoration.
If we want a function to do something extra, why not just write it inside the function? That is indeed one workable approach. But suppose we want every function to print its execution time: following that approach, each function would have to record its start and end times, compute the difference, and print it. The functions would become bloated with code unrelated to their actual purpose, and with dozens of such functions we would add dozens or even hundreds of lines for this one feature. Upgrading functions the conventional way is therefore tedious. Decorators are the tool that makes it simple: define one function, add a single line above each function to be enhanced, and the "upgrade" — the decoration — is done. This is why decorators exist.
```python
import time
def printtime(func):
def wrapper(*args, **kwargs):
print(time.ctime())
return func(*args, **kwargs)
return wrapper
@printtime
def printhello(name):
print('Hello', name)
if __name__ == '__main__':
printhello('Sam')
```
Here we define a decorator that prints the time at which a function starts executing. In the program above, @printtime is the key line. Before seeing how to achieve the same effect without it, remember that in Python functions are objects too: they can be passed around as arguments, and a decorator is, at heart, just a function.
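Removing @printtime, the same enhancement can be applied by hand — pass printhello to printtime and rebind the name to the returned wrapper (a sketch equivalent to the program above; the @ syntax is exactly this rebinding):

```python
import time

def printtime(func):
    def wrapper(*args, **kwargs):
        print(time.ctime())
        return func(*args, **kwargs)
    return wrapper

def printhello(name):
    print('Hello', name)

# Decoration by hand: printtime(printhello) returns wrapper,
# which we bind back to the name printhello.
printhello = printtime(printhello)

if __name__ == '__main__':
    printhello('Sam')   # prints the current time, then: Hello Sam
```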
### Recursive Functions
A function may call other functions from within its body. Usually those are different functions, but when a function calls itself, it is a recursive function.
A classic example is computing the factorial (for simplicity, ignoring 0!). The idea is simple: n! = n*(n-1)*(n-2)*...*2*1. Based on this, we can write a factorial function as follows.
```python
def factorial_normal(n):
result = 1
for i in range(n):
result = result*n
n = n-1
return result
factorial_normal(5)
```
That is one way to solve it, with straightforward logic. Now consider the recursive version. From the definition of the factorial we have n! = n*(n-1)!, which leads directly to the following recursive function.
```python
def factorial_recursion(n):
if n == 1:
return 1
return n*factorial_recursion(n-1)
```
Both approaches work, but the recursive one is clearly more concise and makes the computational logic easier to see.
### OOP
Python supports object-oriented programming (OOP), and the key to OOP in Python is classes and objects. This section introduces the basics so that you have a working understanding of the paradigm.
Object orientation lets us simplify programs through abstraction; one of its major benefits is code reuse (especially prominent with polymorphism and inheritance). Consider the following code.
```python
class Person:
has_hair = True
def __init__(self, name, age):
self.name = name
self.age = age
def sayhello(self, words):
print("Hello, I'm", self.name)
print(words)
if __name__ == '__main__':
Sally = Person('Sally', 20)
Sally.sayhello("Nice to meet you")
Tom = Person('Tom', 19)
Tom.sayhello("Nice to meet you too")
```
Here the class keyword defines a class named Person, where Person is the class name. Inside the class, the variable has_hair is a class attribute, and the two functions defined there are methods. Passing the required arguments to Person produces two instances, Sally and Tom; this process is called instantiation.
Note that self refers to the instance. The first method runs automatically when an instance is created; it adds the name and age attributes, which belong only to the instance itself and are called instance attributes.
Finally, the sayhello method is called on each instance to print a greeting.
### The Zen of Python
Open IPython and enter import this.

```text
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
```
| github_jupyter |
## Plotting of profile results
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# common
import os
import os.path as op
# pip
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
from matplotlib import gridspec
# DEV: override installed teslakit
import sys
sys.path.insert(0, op.join(os.path.abspath(''), '..', '..', '..'))
# teslakit
from teslakit.database import Database, hyswan_db
from teslakit.database import SplitStorage
def Load_SIM_NEARSHORE_all(db, vns=[], decode_times=False, use_cftime=False, prf=[]):
ps = db.paths.site.SIMULATION.nearshore
# locate simulations
#sims = sorted([x for x in os.listdir(ps) if x.isdigit() and op.isdir(op.join(ps, x))])
sims = sorted([x for x in os.listdir(ps) if x.endswith("_prf_"+str(prf))])
# read all simulations
for c, nm in enumerate(sims):
ps_sim = op.join(ps, nm)
s = SplitStorage(ps_sim)
# read time and variables from first file
if c==0:
ds = s.Load(vns=vns, decode_times=False)
# generate output xarray.Dataset, dims and vars
out = xr.Dataset({}, coords={'time': ds.time, 'n_sim':range(len(sims))})
for vds in ds.variables:
if vds == 'time': continue
out[vds] = (('time', 'n_sim',), np.nan*np.zeros((len(out.time), len(out.n_sim))))
out[vds].loc[dict(n_sim=c)] = ds[vds]
else:
ds = s.Load(vns=vns, decode_times=False)
# fill output xarray.Dataset
for vds in ds.variables:
if vds == 'time': continue
out[vds].loc[dict(n_sim=c)] = ds[vds]
# optional decode times to cftime
if decode_times:
out = xr.decode_cf(out, use_cftime=use_cftime)
return out
# --------------------------------------
# Teslakit database
p_data = r'/media/administrador/HD/Dropbox/Guam/teslakit/data'
# p_data=r'/Users/laurac/Dropbox/Guam/teslakit/data'
db = Database(p_data)
# set site
db.SetSite('GUAM')
# hyswan simulation database
db_sim = hyswan_db(db.paths.site.HYSWAN.sim)
```
### Set profile and load data
```
prf=8
profiles=xr.open_dataset('/media/administrador/HD/Dropbox/Guam/bati guam/Profiles_Guam_curt.nc')
profile=profiles.sel(profile=prf)
profile
def Plot_profile(profile):
colors=['royalblue','crimson','gold','darkmagenta','darkgreen','darkorange','mediumpurple','coral','pink','lightgreen','darkgreen','darkorange']
fig=plt.figure(figsize=[17,4])
gs1=gridspec.GridSpec(1,1)
ax=fig.add_subplot(gs1[0])
ax.plot(profile.Distance_profile, profile.Elevation,linewidth=3,color=colors[prf],alpha=0.7,label='Profile: ' + str(prf))
s=np.where(profile.Elevation<0)[0][0]
ax.plot(profile.Distance_profile[s],profile.Elevation[s],'s',color=colors[prf],markersize=10)
ax.plot([0,1500],[0,0],':',color='plum',alpha=0.7)
ax.plot([0,1500],[np.nanmin(profile.Elevation),np.nanmin(profile.Elevation)],':',color='plum',alpha=0.7)
ax.set_xlabel(r'Distance (m)', fontsize=14)
ax.set_ylabel(r'Elevation (m)', fontsize=14)
ax.legend()
ax.set_xlim([0,np.nanmax(profile.Distance_profile)])
Plot_profile(profile)
```
### Load waves
```
# Simulation
sim=Load_SIM_NEARSHORE_all(db,vns=['Hs','Tp','Dir'], decode_times=False, use_cftime=False, prf=prf)
print(sim)
sim2=db.Load_SIM_OFFSHORE_all(vns=['level','wind_dir','wind_speed'], decode_times=False, use_cftime=False) #Level=SS+AT+MMSL
sim['level']=sim2.level
sim['wind_dir']=sim2.wind_dir
sim['wind_speed']=sim2.wind_speed
print(sim)
SIM=sim.to_dataframe().reset_index()
print(SIM)
SIM.to_pickle(os.path.join(db.paths.site.SIMULATION.nearshore,'Simulations_profile_'+str(prf)))
```
| github_jupyter |
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU (this may not be needed on your computer)
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
print(gpu_devices)
tf.keras.backend.clear_session()
```
### load packages
```
from tfumap.umap import tfUMAP
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
import umap
import pandas as pd
```
### Load dataset
```
dataset = 'cassins'
dims = (32,32,1)
n_components = 64
from tfumap.paths import ensure_dir, MODEL_DIR, DATA_DIR
syllable_df = pd.read_pickle(DATA_DIR/'cassins'/ 'cassins.pickle')
top_labels = (
pd.DataFrame(
{i: [np.sum(syllable_df.labels.values == i)] for i in syllable_df.labels.unique()}
)
.T.sort_values(by=0, ascending=False)[:20]
.T
)
syllable_df = syllable_df[syllable_df.labels.isin(top_labels.columns)]
syllable_df[:3]
syllable_df = syllable_df.reset_index()
syllable_df['subset'] = 'train'
syllable_df.loc[:1000, 'subset'] = 'valid'
syllable_df.loc[1000:1999, 'subset'] = 'test'
specs = np.array(list(syllable_df.spectrogram.values))
specs = np.array([np.concatenate([np.zeros((32,1)), i], axis=1) for i in tqdm(specs)])
syllable_df['spectrogram'] = syllable_df['spectrogram'].astype('object')
syllable_df['spectrogram'] = list(specs)
Y_train = np.array(list(syllable_df.labels.values[syllable_df.subset == 'train']))
Y_valid = np.array(list(syllable_df.labels.values[syllable_df.subset == 'valid']))
Y_test = np.array(list(syllable_df.labels.values[syllable_df.subset == 'test']))
X_train = np.array(list(syllable_df.spectrogram.values[syllable_df.subset == 'train'])) #/ 255.
X_valid = np.array(list(syllable_df.spectrogram.values[syllable_df.subset == 'valid']))# / 255.
X_test = np.array(list(syllable_df.spectrogram.values[syllable_df.subset == 'test'])) #/ 255.
X_train_flat = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
from sklearn.preprocessing import OrdinalEncoder
enc = OrdinalEncoder()
Y_train = enc.fit_transform([[i] for i in Y_train]).astype('int').flatten()
Y_test = enc.transform([[i] for i in Y_test]).astype('int').flatten()  # reuse the fitted encoder so test labels map consistently
X_test_flat = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
```
### define networks
```
from tensorflow.keras.layers import (
Conv2D,
Reshape,
Bidirectional,
Dense,
RepeatVector,
TimeDistributed,
LSTM
)
from tfumap.vae import VAE, Sampling
encoder_inputs = tf.keras.Input(shape=dims)
x = Conv2D(
filters=32, kernel_size=3, strides=(2, 2), activation=tf.nn.leaky_relu, padding="same"
)(encoder_inputs)
x = Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation=tf.nn.leaky_relu, padding="same"
)(x)
x = Conv2D(
filters=128, kernel_size=3, strides=(2, 1), activation=tf.nn.leaky_relu, padding="same"
)(x)
x = Conv2D(
filters=128, kernel_size=3, strides=(2, 1), activation=tf.nn.leaky_relu, padding="same"
)(x)
x = Reshape(target_shape=(8, 2*128))(x)
x = Bidirectional(LSTM(units=100, activation="relu"))(x)
x = Dense(units=512)(x)
z_mean = tf.keras.layers.Dense(n_components, name="z_mean")(x)
z_log_var = tf.keras.layers.Dense(n_components, name="z_log_var")(x)
z = Sampling()([z_mean, z_log_var])
encoder = tf.keras.Model(encoder_inputs, [z_mean, z_log_var, z], name="encoder")
encoder.summary()
latent_inputs = tf.keras.Input(shape=(n_components,))
x = Dense(units=512)(latent_inputs)
x = RepeatVector(8)(x)
x = Bidirectional(LSTM(units=100, activation="relu", return_sequences=True))(x)
x = TimeDistributed(Dense(2*128))(x)
x = Reshape(target_shape=(8,2,128))(x)
x = tf.keras.layers.Conv2DTranspose(
filters=128, kernel_size=3, strides=(1, 2), padding="SAME", activation=tf.nn.leaky_relu
)(x)
x = tf.keras.layers.Conv2DTranspose(
filters=128, kernel_size=3, strides=(1, 2), padding="SAME", activation=tf.nn.leaky_relu
)(x)
x = tf.keras.layers.Conv2DTranspose(
filters=64, kernel_size=3, strides=(2, 2), padding="SAME", activation=tf.nn.leaky_relu
)(x)
x = tf.keras.layers.Conv2DTranspose(
filters=32, kernel_size=3, strides=(2, 2), padding="SAME", activation=tf.nn.leaky_relu
)(x)
x = tf.keras.layers.Conv2DTranspose(
filters=1, kernel_size=3, strides=(1, 1), padding="SAME", activation="sigmoid"
)(x)
decoder_outputs = Reshape(target_shape=(32, 32, 1))(x)
decoder = tf.keras.Model(latent_inputs, decoder_outputs, name="decoder")
decoder.summary()
```
### Create model and train
```
X_train.shape
X_train = X_train.reshape([len(X_train)]+ list(dims))
X_train.shape
vae = VAE(encoder, decoder)
vae.compile(optimizer=tf.keras.optimizers.Adam())
vae.fit(X_train, epochs=30, batch_size=128)
z = vae.encoder.predict(X_train)[0]
```
### Plot model output
```
Y_train
fig, ax = plt.subplots( figsize=(8, 8))
sc = ax.scatter(
z[:, 0],
z[:, 1],
c=Y_train.astype(int)[:len(z)].flatten(),
cmap="tab10",
s=0.1,
alpha=0.5,
rasterized=True,
)
ax.axis('equal')
ax.set_title("VAE embedding", fontsize=20)
plt.colorbar(sc, ax=ax);
```
### View loss
```
from tfumap.umap import retrieve_tensors
import seaborn as sns
```
### Save output
```
dataset = "cassins_dtw"
from tfumap.paths import ensure_dir, MODEL_DIR
output_dir = MODEL_DIR/'projections'/ dataset / '64'/ 'vae'
ensure_dir(output_dir)
#vae.save(output_dir)
vae.encoder.save(output_dir / 'encoder')
vae.decoder.save(output_dir / 'decoder')
#loss_df.to_pickle(output_dir / 'loss_df.pickle')
np.save(output_dir / 'z.npy', z)
```
### compute metrics
```
X_test.shape
z_test = encoder.predict(X_test.reshape((len(X_test), 32,32,1)))[0]
```
#### silhouette
```
from tfumap.silhouette import silhouette_score_block
ss, sil_samp = silhouette_score_block(z, Y_train, n_jobs = -1)
ss
ss_test, sil_samp_test = silhouette_score_block(z_test, Y_test, n_jobs = -1)
ss_test
fig, axs = plt.subplots(ncols = 2, figsize=(10, 5))
axs[0].scatter(z[:, 0], z[:, 1], s=0.1, alpha=0.5, c=sil_samp, cmap=plt.cm.viridis)
axs[1].scatter(z_test[:, 0], z_test[:, 1], s=1, alpha=0.5, c=sil_samp_test, cmap=plt.cm.viridis)
```
#### KNN
```
from sklearn.neighbors import KNeighborsClassifier
z
z_test
Y_train
neigh5 = KNeighborsClassifier(n_neighbors=5)
neigh5.fit(z, Y_train)
score_5nn = neigh5.score(z_test, Y_test)
score_5nn
neigh1 = KNeighborsClassifier(n_neighbors=1)
neigh1.fit(z, Y_train)
score_1nn = neigh1.score(z_test, Y_test)
score_1nn
```
#### Trustworthiness
```
from sklearn.manifold import trustworthiness
tw = trustworthiness(X_train_flat[:10000], z[:10000])
tw_test = trustworthiness(X_test_flat[:10000], z_test[:10000])
tw, tw_test
```
### Save output metrics
```
from tfumap.paths import ensure_dir, MODEL_DIR, DATA_DIR
```
#### train
```
metrics_df = pd.DataFrame(
columns=[
"dataset",
"class_",
"dim",
"trustworthiness",
"silhouette_score",
"silhouette_samples",
]
)
metrics_df.loc[len(metrics_df)] = [dataset, 'vae', n_components, tw, ss, sil_samp]
metrics_df
save_loc = DATA_DIR / 'projection_metrics' / 'vae' / 'train' / str(n_components) / (dataset + '.pickle')
ensure_dir(save_loc)
metrics_df.to_pickle(save_loc)
save_loc
```
#### test
```
metrics_df_test = pd.DataFrame(
columns=[
"dataset",
"class_",
"dim",
"trustworthiness",
"silhouette_score",
"silhouette_samples",
]
)
metrics_df_test.loc[len(metrics_df_test)] = [dataset, 'vae', n_components, tw_test, ss_test, sil_samp_test]
metrics_df_test
save_loc = DATA_DIR / 'projection_metrics' / 'vae' / 'test' / str(n_components) / (dataset + '.pickle')
ensure_dir(save_loc)
metrics_df_test.to_pickle(save_loc)
```
#### knn
```
nn_acc_df = pd.DataFrame(columns = ["method_","dimensions","dataset","1NN_acc","5NN_acc"])
nn_acc_df.loc[len(nn_acc_df)] = ['vae', n_components, dataset, score_1nn, score_5nn]
nn_acc_df
save_loc = DATA_DIR / 'knn_classifier' / 'vae' / 'train' / str(n_components) / (dataset + '.pickle')
ensure_dir(save_loc)
nn_acc_df.to_pickle(save_loc)
```
### Reconstruction
```
from sklearn.metrics import mean_squared_error, mean_absolute_error, median_absolute_error, r2_score
X_recon = vae.decoder.predict(vae.encoder.predict(X_test.reshape((len(X_test), 32, 32, 1)))[0])
X_real = X_test.reshape((len(X_test), 32, 32, 1))
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
reconstruction_acc_df = pd.DataFrame(
columns=["method_", "dimensions", "dataset", "MSE", "MAE", "MedAE", "R2"]
)
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['vae', 64, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
save_loc = DATA_DIR / 'reconstruction_acc' / 'vae' / str(n_components) / (dataset + '.pickle')
ensure_dir(save_loc)
reconstruction_acc_df.to_pickle(save_loc)
```
### Compute clustering quality
```
from sklearn.cluster import KMeans
from sklearn.metrics import homogeneity_completeness_v_measure
def get_cluster_metrics(row, n_init=5):
# load cluster information
save_loc = DATA_DIR / 'clustering_metric_df'/ ('_'.join([row.class_, str(row.dim), row.dataset]) + '.pickle')
print(save_loc)
#if save_loc.exists() and save_loc.is_file():
#
# cluster_df = pd.read_pickle(save_loc)
# return cluster_df
# make cluster metric dataframe
cluster_df = pd.DataFrame(
columns=[
"dataset",
"class_",
"dim",
"silhouette",
"homogeneity",
"completeness",
"v_measure",
"init_",
"n_clusters",
"model",
]
)
y = row.train_label
z = row.train_z
n_labels = len(np.unique(y))
for n_clusters in tqdm(np.arange(n_labels - int(n_labels / 2), n_labels + int(n_labels / 2)), leave=False, desc = 'n_clusters'):
for init_ in tqdm(range(n_init), leave=False, desc='init'):
kmeans = KMeans(n_clusters=n_clusters, random_state=init_).fit(z)
clustered_y = kmeans.labels_
homogeneity, completeness, v_measure = homogeneity_completeness_v_measure(
y, clustered_y
)
ss, _ = silhouette_score_block(z, clustered_y)
cluster_df.loc[len(cluster_df)] = [
row.dataset,
row.class_,
row.dim,
ss,
homogeneity,
completeness,
v_measure,
init_,
n_clusters,
kmeans,
]
# save cluster df in case this fails somewhere
ensure_dir(save_loc)
cluster_df.to_pickle(save_loc)
return cluster_df
projection_df = pd.DataFrame(columns = ['dataset', 'class_', 'train_z', 'train_label', 'dim'])
projection_df.loc[len(projection_df)] = [dataset, 'vae', z, Y_train.flatten(), n_components]
projection_df
get_cluster_metrics(projection_df.iloc[0], n_init=5)
```
# High-Performance Pandas: eval() and query()
As we've already seen in previous sections, the power of the PyData stack is built upon the ability of NumPy and Pandas to push basic operations into C via an intuitive syntax: examples are vectorized/broadcasted operations in NumPy, and grouping-type operations in Pandas.
While these abstractions are efficient and effective for many common use cases, they often rely on the creation of temporary intermediate objects, which can cause undue overhead in computational time and memory use.
As of version 0.13 (released January 2014), Pandas includes some experimental tools that allow you to directly access C-speed operations without costly allocation of intermediate arrays.
These are the ``eval()`` and ``query()`` functions, which rely on the [Numexpr](https://github.com/pydata/numexpr) package.
In this notebook we will walk through their use and give some rules-of-thumb about when you might think about using them.
## Motivating ``query()`` and ``eval()``: Compound Expressions
We've seen previously that NumPy and Pandas support fast vectorized operations; for example, when adding the elements of two arrays:
```
import numpy as np
rng = np.random.RandomState(42)
x = rng.rand(1000000)
y = rng.rand(1000000)
%timeit x + y
```
As discussed in [Computation on NumPy Arrays: Universal Functions](02.03-Computation-on-arrays-ufuncs.ipynb), this is much faster than doing the addition via a Python loop or comprehension:
```
%timeit np.fromiter((xi + yi for xi, yi in zip(x, y)), dtype=x.dtype, count=len(x))
```
But this abstraction can become less efficient when computing compound expressions.
For example, consider the following expression:
```
mask = (x > 0.5) & (y < 0.5)
```
Because NumPy evaluates each subexpression, this is roughly equivalent to the following:
```
tmp1 = (x > 0.5)
tmp2 = (y < 0.5)
mask = tmp1 & tmp2
```
In other words, *every intermediate step is explicitly allocated in memory*. If the ``x`` and ``y`` arrays are very large, this can lead to significant memory and computational overhead.
The Numexpr library gives you the ability to compute this type of compound expression element by element, without the need to allocate full intermediate arrays.
The [Numexpr documentation](https://github.com/pydata/numexpr) has more details, but for the time being it is sufficient to say that the library accepts a *string* giving the NumPy-style expression you'd like to compute:
```
import numexpr
mask_numexpr = numexpr.evaluate('(x > 0.5) & (y < 0.5)')
np.allclose(mask, mask_numexpr)
```
The benefit here is that Numexpr evaluates the expression in a way that does not use full-sized temporary arrays, and thus can be much more efficient than NumPy, especially for large arrays.
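The way Numexpr avoids full-size temporaries can be sketched in plain NumPy: evaluate the expression block by block, so only small, chunk-sized temporaries are ever allocated. This is an illustration of the idea, not Numexpr's actual implementation:

```python
import numpy as np

def chunked_mask(x, y, chunk=4096):
    # evaluate (x > 0.5) & (y < 0.5) block by block, numexpr-style,
    # so only chunk-sized temporary arrays are ever allocated
    out = np.empty(len(x), dtype=bool)
    for i in range(0, len(x), chunk):
        s = slice(i, i + chunk)
        out[s] = (x[s] > 0.5) & (y[s] < 0.5)
    return out

rng = np.random.RandomState(0)
x, y = rng.rand(10000), rng.rand(10000)
mask = chunked_mask(x, y)
```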
The Pandas ``eval()`` and ``query()`` tools that we will discuss here are conceptually similar, and depend on the Numexpr package.
## ``pandas.eval()`` for Efficient Operations
The ``eval()`` function in Pandas uses string expressions to efficiently compute operations using ``DataFrame``s.
For example, consider the following ``DataFrame``s:
```
import pandas as pd
nrows, ncols = 100000, 100
rng = np.random.RandomState(42)
df1, df2, df3, df4 = (pd.DataFrame(rng.rand(nrows, ncols))
for i in range(4))
```
To compute the sum of all four ``DataFrame``s using the typical Pandas approach, we can just write the sum:
```
%timeit df1 + df2 + df3 + df4
```
The same result can be computed via ``pd.eval`` by constructing the expression as a string:
```
%timeit pd.eval('df1 + df2 + df3 + df4')
```
The ``eval()`` version of this expression is about 50% faster (and uses much less memory), while giving the same result:
```
np.allclose(df1 + df2 + df3 + df4,
pd.eval('df1 + df2 + df3 + df4'))
```
### Operations supported by ``pd.eval()``
As of Pandas v0.16, ``pd.eval()`` supports a wide range of operations.
To demonstrate these, we'll use the following integer ``DataFrame``s:
```
df1, df2, df3, df4, df5 = (pd.DataFrame(rng.randint(0, 1000, (100, 3)))
for i in range(5))
```
#### Arithmetic operators
``pd.eval()`` supports all arithmetic operators. For example:
```
result1 = -df1 * df2 / (df3 + df4) - df5
result2 = pd.eval('-df1 * df2 / (df3 + df4) - df5')
np.allclose(result1, result2)
```
#### Comparison operators
``pd.eval()`` supports all comparison operators, including chained expressions:
```
result1 = (df1 < df2) & (df2 <= df3) & (df3 != df4)
result2 = pd.eval('df1 < df2 <= df3 != df4')
np.allclose(result1, result2)
```
#### Bitwise operators
``pd.eval()`` supports the ``&`` and ``|`` bitwise operators:
```
result1 = (df1 < 0.5) & (df2 < 0.5) | (df3 < df4)
result2 = pd.eval('(df1 < 0.5) & (df2 < 0.5) | (df3 < df4)')
np.allclose(result1, result2)
```
In addition, it supports the use of the literal ``and`` and ``or`` in Boolean expressions:
```
result3 = pd.eval('(df1 < 0.5) and (df2 < 0.5) or (df3 < df4)')
np.allclose(result1, result3)
```
#### Object attributes and indices
``pd.eval()`` supports access to object attributes via the ``obj.attr`` syntax, and indexes via the ``obj[index]`` syntax:
```
result1 = df2.T[0] + df3.iloc[1]
result2 = pd.eval('df2.T[0] + df3.iloc[1]')
np.allclose(result1, result2)
```
#### Other operations
Other operations such as function calls, conditional statements, loops, and other more involved constructs are currently *not* implemented in ``pd.eval()``.
If you'd like to execute these more complicated types of expressions, you can use the Numexpr library itself.
## ``DataFrame.eval()`` for Column-Wise Operations
Just as Pandas has a top-level ``pd.eval()`` function, ``DataFrame``s have an ``eval()`` method that works in similar ways.
The benefit of the ``eval()`` method is that columns can be referred to *by name*.
We'll use this labeled array as an example:
```
df = pd.DataFrame(rng.rand(1000, 3), columns=['A', 'B', 'C'])
df.head()
```
Using ``pd.eval()`` as above, we can compute expressions with the three columns like this:
```
result1 = (df['A'] + df['B']) / (df['C'] - 1)
result2 = pd.eval("(df.A + df.B) / (df.C - 1)")
np.allclose(result1, result2)
```
The ``DataFrame.eval()`` method allows much more succinct evaluation of expressions with the columns:
```
result3 = df.eval('(A + B) / (C - 1)')
np.allclose(result1, result3)
```
Notice here that we treat *column names as variables* within the evaluated expression, and the result is what we would wish.
### Assignment in DataFrame.eval()
In addition to the options just discussed, ``DataFrame.eval()`` also allows assignment to any column.
Let's use the ``DataFrame`` from before, which has columns ``'A'``, ``'B'``, and ``'C'``:
```
df.head()
```
We can use ``df.eval()`` to create a new column ``'D'`` and assign to it a value computed from the other columns:
```
df.eval('D = (A + B) / C', inplace=True)
df.head()
```
In the same way, any existing column can be modified:
```
df.eval('D = (A - B) / C', inplace=True)
df.head()
```
### Local variables in DataFrame.eval()
The ``DataFrame.eval()`` method supports an additional syntax that lets it work with local Python variables.
Consider the following:
```
column_mean = df.mean(1)
result1 = df['A'] + column_mean
result2 = df.eval('A + @column_mean')
np.allclose(result1, result2)
```
The ``@`` character here marks a *variable name* rather than a *column name*, and lets you efficiently evaluate expressions involving the two "namespaces": the namespace of columns, and the namespace of Python objects.
Notice that this ``@`` character is only supported by the ``DataFrame.eval()`` *method*, not by the ``pandas.eval()`` *function*, because the ``pandas.eval()`` function only has access to the one (Python) namespace.
## DataFrame.query() Method
The ``DataFrame`` has another method based on evaluated strings, called the ``query()`` method.
Consider the following:
```
result1 = df[(df.A < 0.5) & (df.B < 0.5)]
result2 = pd.eval('df[(df.A < 0.5) & (df.B < 0.5)]')
np.allclose(result1, result2)
```
As with the example used in our discussion of ``DataFrame.eval()``, this is an expression involving columns of the ``DataFrame``.
It cannot be expressed using the ``DataFrame.eval()`` syntax, however!
Instead, for this type of filtering operation, you can use the ``query()`` method:
```
result2 = df.query('A < 0.5 and B < 0.5')
np.allclose(result1, result2)
```
In addition to being a more efficient computation, compared to the masking expression this is much easier to read and understand.
Note that the ``query()`` method also accepts the ``@`` flag to mark local variables:
```
Cmean = df['C'].mean()
result1 = df[(df.A < Cmean) & (df.B < Cmean)]
result2 = df.query('A < @Cmean and B < @Cmean')
np.allclose(result1, result2)
```
## Performance: When to Use These Functions
When considering whether to use these functions, there are two considerations: *computation time* and *memory use*.
Memory use is the most predictable aspect. As already mentioned, every compound expression involving NumPy arrays or Pandas ``DataFrame``s will result in implicit creation of temporary arrays:
For example, this:
```
x = df[(df.A < 0.5) & (df.B < 0.5)]
```
Is roughly equivalent to this:
```
tmp1 = df.A < 0.5
tmp2 = df.B < 0.5
tmp3 = tmp1 & tmp2
x = df[tmp3]
```
If the size of the temporary ``DataFrame``s is significant compared to your available system memory (typically several gigabytes) then it's a good idea to use an ``eval()`` or ``query()`` expression.
You can check the approximate size of your array in bytes using this:
```
df.values.nbytes
```
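Related to the performance question: ``pd.eval()`` accepts an ``engine`` argument, so you can compare the default numexpr-backed path against plain Python evaluation (when Numexpr is not installed, the default silently falls back to the Python engine). A small sketch; the timing difference only shows up for much larger frames:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(42)
df = pd.DataFrame(rng.rand(1000, 3), columns=['A', 'B', 'C'])

# engine='python' evaluates with ordinary Python semantics (and temporaries);
# the default engine uses numexpr when it is installed
r1 = pd.eval('df.A + df.B', engine='python')
r2 = pd.eval('df.A + df.B')
```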
# Visualization: Trading Session
```
import pandas as pd
import numpy as np
import altair as alt
import seaborn as sns
```
### 1. Define parameters and Load model
```
from trading_bot.agent import Agent
model_name = 'model_GOOG_50'
test_stock = 'data/GOOG_2019.csv'
window_size = 10
debug = True
agent = Agent(window_size, pretrained=True, model_name=model_name)
```
### 2. Load test data
```
# read csv into dataframe
df = pd.read_csv(test_stock)
# filter out the desired features
df = df[['Date', 'Adj Close']]
# rename feature column names
df = df.rename(columns={'Adj Close': 'actual', 'Date': 'date'})
# convert dates from object to DateTime type
dates = df['date']
dates = pd.to_datetime(dates, infer_datetime_format=True)
df['date'] = dates
df.head()
```
### 3. Running Eval
```
import logging
import coloredlogs
from trading_bot.utils import show_eval_result, switch_k_backend_device, get_stock_data
from trading_bot.methods import evaluate_model
coloredlogs.install(level='DEBUG')
switch_k_backend_device()
test_data = get_stock_data(test_stock)
initial_offset = test_data[1] - test_data[0]
test_result, history = evaluate_model(agent, test_data, window_size, debug)
show_eval_result(model_name, test_result, initial_offset)
```
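The `history` returned by `evaluate_model` is consumed below as (price, action) pairs; under that assumption, here is a dependency-free sketch of tallying a naive profit — one share per BUY, matched FIFO against each SELL. The `trading_bot` package may compute its result differently:

```python
from collections import deque

def naive_profit(history):
    # history: iterable of (price, action) pairs with actions BUY/SELL/HOLD
    inventory, profit = deque(), 0.0
    for price, action in history:
        if action == 'BUY':
            inventory.append(price)
        elif action == 'SELL' and inventory:
            profit += price - inventory.popleft()
    return profit

demo = naive_profit([(10.0, 'BUY'), (12.0, 'SELL'), (11.0, 'HOLD')])
```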
### 4. Visualize
```
def visualize(df, history, title="trading session"):
# add history to dataframe
position = [history[0][0]] + [x[0] for x in history]
actions = ['HOLD'] + [x[1] for x in history]
df['position'] = position
df['action'] = actions
# specify y-axis scale for stock prices
scale = alt.Scale(domain=(min(min(df['actual']), min(df['position'])) - 50, max(max(df['actual']), max(df['position'])) + 50), clamp=True)
# plot a line chart for stock positions
actual = alt.Chart(df).mark_line(
color='green',
opacity=0.5
).encode(
x='date:T',
y=alt.Y('position', axis=alt.Axis(format='$.2f', title='Price'), scale=scale)
).interactive(
bind_y=False
)
# plot the BUY and SELL actions as points
points = alt.Chart(df).transform_filter(
alt.datum.action != 'HOLD'
).mark_point(
filled=True
).encode(
x=alt.X('date:T', axis=alt.Axis(title='Date')),
y=alt.Y('position', axis=alt.Axis(format='$.2f', title='Price'), scale=scale),
color='action'
).interactive(bind_y=False)
# merge the two charts
chart = alt.layer(actual, points, title=title).properties(height=300, width=1000)
return chart
chart = visualize(df, history, title=test_stock)
chart
```
```
cc.VerificationHandler.close_browser()
```
## Time to crack in and find some more mother elements
#### Don't let complexity ruin tempo
```
%run contactsScraper.py
orgsForToday = ['National Association for Multi-Ethnicity In Communications (NAMIC)',
'Association for Women in Science',
'Brain Injury Association of America',
'American Society of Home Inspectors',
'NAADAC, the Association for Addiction Professionals',
'American Public Transportation Association',
'Indiana Soybean Alliance',
'Associated Builders and Contractors (ABC)',
'National Association of Social Workers',
'American Marketing Association (AMA)']
org = orgsForToday[9]
vh = cc.MotherSetVerifier(org)
pointers = vh.verifiedPointers
len(pointers)
cc.VerificationHandler.orgRecords.orgSessionStatusCheck()
import numpy as np
np.matrix([pointers, pointers])
## Grandmother Finding Algorithm
gmElements = []
gmMatrix = []
for i in range(len(pointers)):
igmElements = []
for j in range(i):
## Check whether the two pointers share the same mother element, i.e. a Big Momma or "Bertha" element
if pointers[i].get_mother_element() is pointers[j].get_mother_element():
gm = pointers[i].get_mother_element()
else:
gm = pointers[i].common_parent(pointers[j])
# Append Match to Grand Mother Matrix
igmElements.append(gm)
# Check to see if this is a new grand mother element,
# if so append to the gmElements list of unique grandmother elements
if gm not in gmElements:
gmElements.append(gm)
# Append Matrix Row
gmMatrix.append(igmElements)
grandMotherMatrix = np.matrix(gmMatrix)
grandMotherMatrix
```
## Just what was Expected, 1 grandmother element
```
len(gmElements)
type(gmElements[0])
```
## Find other Mother elements with the same attributes within the found GrandMother
```
a = pointers[1].get_mother_element()
b = pointers[0].get_mother_element()
gm = gmElements[0]
a.parent is gm
a.parent
print(gm.prettify())
b.attrs
a.attrs == b.attrs
a.name
b.name
gm = gmElements[0]
finds = gm.contents
len(finds)
findsSib = gm.find_all("h2")
findsSib
```
## There are verified pointers and there are elements that mimic them
```
gm
mothers = pointers
mothers[0].tom_here()
mothers[0].tom
mothers[0].mary_here()
mothers[0].tom.parent.parent is mothers[0].mary
mothers[0].tom.parent.attrs
mothers[0].tom.parent.contents
mothers[0].tom.parent['toms'] = 0
mothers[0].nathan_here()
mothers[0].nathan
mothers[0].nathan.parent['nathans'] = 0
mothers[0].nathan.parent.parent is mothers[0].get_mother_element()
## Tag elements with attributes up the ancestral chain from tom all the way to the mother element
def tag_nathans(pt):
## Precondition: The name pointer for this verified pointer is a nathan
return parent_cycle_up(pt.get_mother_element(), pt.nathan.parent, 'nathans', 0)
def tag_toms(pt):
return parent_cycle_up(pt.get_mother_element(), pt.tom.parent, 'toms', 0)
def parent_cycle_up(motherElement, element, atr, num):
if element is motherElement:
return
else:
element[atr] = num
return parent_cycle_up(motherElement, element.parent, atr, num + 1)
def get_nathan(fnd, taggedPt):
## Learn fnd from a mother element
## get nathan from the root to the foot
## precondition fnd is a found mother element
return parent_cycle_down(fnd.children, taggedPt.get_mother_element().children, 'nathans')
def get_tom(fnd, taggedPt):
## Learn a find from a mother
## get tom from the root to the foot
## precondition fnd is a found mother element
return parent_cycle_down(fnd.children, taggedPt.get_mother_element().children, 'toms')
def parent_cycle_down(fi, mi, atr):
## Loop across both found and mother iterators
## Precondition: 'atr' is an attribute of at least one element in mi
for f, s in zip(fi, mi):
## look for attr
print('foundTrunk: ' + f.name + str(f.attrs) + ' motherTrunk: ' + s.name + str(s.attrs))
if atr in s.attrs:
if s[atr] == 0: ## Tag enclosing the pointer
## Return the string inside, that's all!
return f.string
else:
return parent_cycle_down(f.children, s.children, atr)
tag_nathans(mothers[1])
tag_toms(mothers[1])
```
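The tag-up/walk-down scheme used by `tag_nathans` and `get_nathan` can be illustrated without BeautifulSoup. The sketch below uses a toy `Node` class standing in for soup tags (with `parent`, `children`, `attrs`, and `string`); it mirrors the logic in a simplified, hypothetical form:

```python
class Node:
    def __init__(self, name, string=None):
        self.name, self.string = name, string
        self.attrs, self.parent, self.children = {}, None, []

    def add(self, child):
        child.parent = self
        self.children.append(child)
        return child

def tag_up(mother, element, attr):
    # label each ancestor of `element` up to (not including) `mother`
    depth = 0
    while element is not mother:
        element.attrs[attr] = depth
        element, depth = element.parent, depth + 1

def walk_down(found, mother, attr):
    # mirror the tagged path inside a structurally similar `found` tree
    for f, m in zip(found.children, mother.children):
        if attr in m.attrs:
            if m.attrs[attr] == 0:      # the tag enclosing the pointer
                return f.string
            return walk_down(f, m, attr)

def make(text):
    # div > h2 > span(text): a stand-in for one "mother element"
    root = Node('div')
    span = root.add(Node('h2')).add(Node('span', string=text))
    return root, span

mother, span = make('Nathan Smith')
found, _ = make('Tom Jones')
tag_up(mother, span, 'nathans')
result = walk_down(found, mother, 'nathans')
```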
## Walking the Tree of a verified pointer
```
mother1 = mothers[1].get_mother_element()
mi = mother1.children
s = next(mi)
s
'nathans' in s.attrs
si = s.children
s = next(si)
s
s.string
mothers[0].get_mother_element
get_tom(mothers[1].get_mother_element(), mothers[0])
get_tom(mothers[0].get_mother_element(), mothers[1])
mothers[1].tom
mothers[0].tom
mothers[1].nathan
get_nathan(mothers[1].get_mother_element(), mothers[0])
mothers[0].nathan
get_nathan(mothers[0].get_mother_element(), mothers[1])
```
## Bring it all together
#### For all verified pointers tag the nathans and toms
#### Test each tagged verified pointer against each found mother element to identify nathans and toms!
#### reunite the estranged family!
```
import pandas as pd
## For all verified pointers tag the nathans and toms
for mother in mothers:
tag_nathans(mother)
tag_toms(mother)
tomSet = pd.DataFrame([{mother.tom:get_tom(find, mother) for mother in mothers} for find in finds])
nathanSet = pd.DataFrame([{mother.nathan:get_nathan(find, mother) for mother in mothers} for find in finds])
len(finds)
tomSet
nathanSet
```
# Lesson 1.2:
# Introduction to GridAPPS-D
This tutorial provides a first look at the GridAPPS-D Platform and ecosystem for data integration and accelerated application development.
__Learning Objectives:__
At the end of the tutorial, the user should be able to
* Explain some advantages of application development using GridAPPS-D
* Describe the layers of the GridAPPS-D architecture
* Summarize how advanced applications can interface with the Platform
* Outline what are GridAPPS-D services and how they are used
* List the GridAPPS-D APIs and their purpose
* Describe the role of the GOSS Message Bus
* Recognize the cores services and managers of the GridAPPS-D Platform
* Understand the role of the GridAPPS-D co-simulation framework
* Recognize the formats used by the internal GridAPPS-D platform databases
---
# Table of Contents
* [1. What is GridAPPS-D?](#1.-What-is-GridAPPS-D?)
* [2. GridAPPS-D Architecture](#2.-GridAPPS-D-Architecture)
* [3. Integration with External Vendor Systems](#3.-Integration-with-External-Vendor-Systems)
* [4. GridAPPS-D Applications](#4.-GridAPPS-D-Applications)
* [5. GridAPPS-D Services](#5.-GridAPPS-D-Services)
* [6. GridAPPS-D Application Programming Interface](#6.-GridAPPS-D-Application-Programming-Interface)
* [7. GOSS Message Bus](#7.-GOSS-Message-Bus)
* [8. GridAPPS-D Core Services](#8.-GridAPPS-D-Core-Services)
* [9. Co-Simulation Framework](#9.-Co-Simulation-Framework)
* [10. Database Structures](#10.-Database-Structures)
---
# 1. What is GridAPPS-D?
GridAPPS-D™ is an open-source platform that accelerates development and deployment of portable applications for advanced distribution management and operations.
The GridAPPS-D™ project is sponsored by the U.S. DOE’s Office of Electricity, Advanced Grid Research. Its purpose is to reduce the time and cost to integrate advanced functionality into distribution operations, to create a more reliable and resilient grid.
GridAPPS-D enables standardization of data models, programming interfaces, and the data exchange interfaces for:
* devices in the field
* distributed apps in the systems
* applications in the control room
The platform provides
* robust testing tools for applications
* distribution system simulation capabilities
* standardized research capability
* reference architecture for the industry
* application development kit
[[Return to Top](#Table-of-Contents)]
---
# 2. GridAPPS-D Architecture
GridAPPS-D offers a standards-based, open-source platform that enables rapid integration of advanced applications and services through a robust application programming interface (API).
The architecture of the development ecosystem is illustrated below.

[[Return to Top](#Table-of-Contents)]
---
# 3. Integration with External Vendor Systems
External vendor systems are able to interface with GridAPPS-D compliant applications and services through two means.
The first is direct integration through the standards-based API and message bus. This enables products that comply with the GridAPPS-D™ platform to
* reduce utility time and cost to integrate new functionality
* give utilities more choice in technology providers
* scale up or down for any size utility
* expand market opportunities for developers and vendors
The second method is through the standards-based services, such as the DNP3 service, IEEE 2030.5 service, etc. that enable communication between GridAPPS-D compliant applications and external vendor systems through SCADA and other control center protocols.
[[Return to Top](#Table-of-Contents)]
---
# 4. GridAPPS-D Applications
The GridAPPS-D platform and API enable rapid development of advanced power applications that are able to operate in a real-time environment and interface with external software and systems. Multiple power applications have already been developed on the platform, including
* Volt-Var Optimization (VVO)
* Fault Location Isolation and Service Restoration (FLISR)
* Distributed Energy Resource Dispatch and Management (DERMS)
* Solar Forecasting, Load Forecasting, etc.
* and more
Applications can be containerized in Docker for direct integration into the platform or interface through the API. Applications can be written in any programming language, but API libraries are currently available in only Python and Java.
[[Return to Top](#Table-of-Contents)]
---
# 5. GridAPPS-D Services
The GridAPPS-D platform can host a multitude of services for processing both real-time simulation and control center data. These services can be called by any application through the GridAPPS-D API.
Some of the available services include
* __State Estimator__
* __Sensor Simulator__
* __Alarm Service__
* __DNP3 Protocol Service__
* __IEEE 2030.5 Protocol Service__
[[Return to Top](#Table-of-Contents)]
---
# 6. GridAPPS-D Application Programming Interface
GridAPPS-D offers a unique standards-based application programming interface (API) that will be the focus of the lessons in this set of tutorials. The API enables any application, service, or external vendor product to interface with each other, access control center data, run a real-time simulation, and issue equipment control commands.
GridAPPS-D has several APIs to serve different needs and objectives, including
* __Powergrid Models API__ -- Allows apps and services to access the power system model data
* __Configuration File API__ -- Allows apps to set equipment statuses and system conditions
* __Simulation API__ -- Allows apps to start a real-time simulation and issue equipment commands
* __Timeseries API__ -- Allows apps to pull real-time and historical data
* __Logging API__ -- Allows apps to access logs and publish log messages
Additional APIs are currently under development to enable communication and control of field devices, as well as cyber-physical network co-simulation.
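As a concrete illustration, requests to these APIs are JSON messages sent to named topics. The sketch below builds a Powergrid Models API request; the topic string and field names follow published GridAPPS-D examples, but treat them as illustrative and version-dependent:

```python
import json

# illustrative Powergrid Models API request; topic and field names are
# taken from public GridAPPS-D examples and may change between releases
topic = 'goss.gridappsd.process.request.data.powergridmodel'
request = {
    'requestType': 'QUERY_MODEL_NAMES',
    'resultFormat': 'JSON',
}

payload = json.dumps(request)   # what would be published on the topic
decoded = json.loads(payload)
```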
---
# 7. GOSS Message Bus
One of the unique features of GridAPPS-D is the GOSS Message Bus, which enables integration and communication between applications, services, and external software on a publish-subscribe basis.
The GridAPPS-D platform publishes SCADA and simulation data, alarms, and other real-time data. Applications subscribe to the types of messages relevant to their objectives and publish equipment commands and control settings.
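The publish-subscribe pattern itself can be sketched in a few lines. This toy bus is purely illustrative and is not the GOSS API:

```python
# a minimal topic-based publish/subscribe bus: topics map to lists of
# subscriber callbacks, and publishing fans a message out to all of them
class Bus:
    def __init__(self):
        self.subs = {}

    def subscribe(self, topic, fn):
        self.subs.setdefault(topic, []).append(fn)

    def publish(self, topic, msg):
        for fn in self.subs.get(topic, []):
            fn(msg)

bus = Bus()
seen = []
bus.subscribe('/topic/simulation.output', seen.append)
bus.publish('/topic/simulation.output', {'measurement': 1.02})
```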
[[Return to Top](#Table-of-Contents)]
---
# 8. GridAPPS-D Core Services
"Under the hood" of the GridAPPS-D platform are the core services and managers.
An application developer should not need a detailed understanding of the core services, as all interaction is performed through the various APIs, which will be discussed in detail in the upcoming tutorial lessons.
The core services provide the key functionality offered by the GridAPPS-D platform, including database access, processing API calls, handling equipment commands, and running simulations.
Some of the core services included in the GridAPPS-D platform are
* __Platform Manager__ -- Coordinates all of the other managers
* __Process Manager__ -- Coordinates platform component interactions
* __Application Manager__ -- Manages application registration, execution, and status reporting
* __Configuration Manager__ -- Manages the setup and configuration of real-time simulations
* __Simulation Manager__ -- Allows users and apps to create, start, stop, and pause co-simulations
* __Data Manager__ -- Coordinates the integrated repository of model, timeseries data, and metadata
* __Model Manager__ -- Loads and checks CIM-based power system models
* __Logging Manager__ -- Supports logging for application development and execution
* __Services Manager__ -- Coordinates all services available for users and apps
* __Test Manager__ -- Enables creation of simulation events, faults, and network outages
[[Return to Top](#Table-of-Contents)]
---
# 9. Co-Simulation Framework
The co-simulation framework serves as the simulation context for the rest of GridAPPS-D. When a simulation is requested through the GridAPPS-D platform, the simulation manager instantiates a FNCS or HELICS co-simulation federation consisting of two applications. The first is a powerflow simulator, either GridLAB-D or OpenDSS, that simulates one or more real-world distribution feeders. The second is a custom application that serves as a bridge between the FNCS/HELICS message bus and the GOSS message bus. The data that travels between the co-simulation federation and the rest of the platform consists of SCADA measurement, SCADA control, and simulation status and control messages. The bridge application subscribes to the simulation input topic to receive SCADA control, simulation control, and simulation event messages. It forwards SCADA control commands and simulation events such as faults and outages to the powerflow simulator, and it publishes the simulator's SCADA measurements on a simulation output topic that GridAPPS-D applications and other parts of the platform subscribe to.
[[Return to Top](#Table-of-Contents)]
---
# 10. Database Structures
A default installation of GridAPPS-D comes with the following data stores:
* __MySQL:__ Used to store log data from the platform, applications, and services. For details, see the Logging API, covered in Lesson 2.7.
* __Blazegraph:__ Used to store power grid model data, covering equipment, properties, and their initial measurement values. It is a triplestore that supports the complex graph representation and class structure of the CIM standard data model.
* __InfluxDB:__ A time series data store used to hold simulation output, simulation input, weather, and load data. It also stores output from services such as the sensor service and the alarm service. For the purposes of the GridAPPS-D project, InfluxDB is managed by Proven, a database software suite supporting disclosure, collection, and management of modeling and simulation data.
For the purpose of developing applications, the data stores should be transparent to the application as long as the data model and standardized API are used.
[[Return to Top](#Table-of-Contents)]
----
# Conclusion
Congratulations! You have completed the first GridAPPS-D tutorial lesson!
You should now be able to recognize and explain the various components of the GridAPPS-D architecture.
[[Return to Top](#Table-of-Contents)]
# Supplementary Notes on the GDAL Library
GDAL is a fundamental library for raster data processing, and quite a few existing programs are written directly against it, so it is worth covering its basics. The official documentation is somewhat dense, and more accessible material is still scarce; the better resources I could find are listed below.
References:
- [Python GDAL course notes](https://www.osgeo.cn/python_gdal_utah_tutorial/)
- [Geoprocessing with Python using Open Source GIS](https://www.gis.usu.edu/~chrisg/python/2009/)
- [Python GDAL/OGR Cookbook](https://pcjericks.github.io/py-gdalogr-cookbook/index.html)
- [Open Source Geoprocessing Tutorial](https://github.com/ceholden/open-geo-tutorial)
- [HeadFirst GDAL](https://headfirst-gdal.readthedocs.io/en/latest/index.html#)
## Introduction to GDAL
**GDAL** (Geospatial Data Abstraction Library) is an open-source library for translating raster spatial data. It uses an **abstract data model** to represent the many file formats it supports, and ships with a set of **command-line tools** for data conversion and processing. **OGR** (OpenGIS Simple Features Reference Implementation) is a sub-project of GDAL that provides support for vector data. The two libraries are usually referred to together as **GDAL/OGR**, or simply **GDAL**.
Many well-known GIS packages use GDAL/OGR, including ESRI's commercial ArcGIS, Google's Google Earth, and the open-source GRASS GIS. It supports over a hundred vector and raster file formats for geospatial data management systems on both Linux and Windows.
GDAL/OGR is written in object-oriented **C++**, which lets the library support a hundred-plus formats while remaining highly efficient. It also provides bindings for several mainstream programming languages, including **Python**.
GDAL supports many raster formats, including **Arc/Info ASCII Grid (asc), GeoTIFF (tiff), Erdas Imagine Images (img), and ASCII DEM (dem)**.
GDAL parses the formats it supports through an abstract data model, which covers the **dataset, coordinate system, affine geotransform, ground control points (GCPs), metadata, raster bands, color table, subdatasets domain, image-structure domain, and XML domains**.
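Of these, the affine geotransform is easy to make concrete: it is six numbers that map pixel (column, row) coordinates to georeferenced (x, y) coordinates. A minimal sketch, with made-up coefficient values:

```python
def pixel_to_geo(gt, col, row):
    # apply a GDAL-style 6-element affine geotransform
    # gt = (origin_x, pixel_width, row_rotation, origin_y, col_rotation, pixel_height)
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# a north-up raster: origin (100, 50), 0.5-unit pixels, no rotation
gt = (100.0, 0.5, 0.0, 50.0, 0.0, -0.5)
corner = pixel_to_geo(gt, 0, 0)
inner = pixel_to_geo(gt, 2, 4)
```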
GDAL consists of the following parts:
- GDALMajorObject: an object that carries metadata.
- GDALDataset: usually the set of associated raster bands extracted from a single raster file, together with their metadata; GDALDataset is also responsible for the georeferencing transform and coordinate-system definition shared by all bands.
- GDALDriver: the file-format driver class; GDAL creates one instance for each supported format to manage files of that format.
- GDALDriverManager: the driver manager class, which registers and manages GDALDriver instances.
OGR provides read and write support for vector formats, including **ESRI Shapefiles, S-57, SDTS, PostGIS, Oracle Spatial, MapInfo mid/mif, and MapInfo TAB**.
OGR consists of the following parts:
- Geometry: the Geometry classes (OGRGeometry and friends) encapsulate the OpenGIS vector data model and provide geometric operations, conversion between WKB (Well-Known Binary) and WKT (Well-Known Text), and spatial reference systems (projections).
- Spatial Reference: OGRSpatialReference encapsulates projection and datum definitions.
- Feature: OGRFeature encapsulates a complete feature, that is, a geometry together with its attributes.
- Feature Definition: OGRFeatureDefn encapsulates a feature's attributes, types, names, and default spatial reference system; an OGRFeatureDefn object usually corresponds to one layer.
- Layer: OGRLayer is an abstract base class representing one layer of features inside an OGRDataSource.
- Data Source: OGRDataSource is an abstract base class representing a file or database that contains OGRLayer objects.
- Drivers: OGRSFDriver corresponds to each supported vector format; OGRSFDriver instances are registered and managed by OGRSFDriverRegistrar.
## Reading and Writing Vector Data with OGR
```
try:
from osgeo import ogr
except:
import ogr
# either import style works
```
To read a given type of data, you must first load its data driver, i.e. instantiate an object that knows that data structure.
```
driver = ogr.GetDriverByName('ESRI Shapefile')
driver
```
As shown above, `driver` is an `osgeo.ogr.Driver` object and a proxy for a SWIG object. The Simplified Wrapper and Interface Generator ([SWIG](https://en.wikipedia.org/wiki/SWIG#:~:text=The%20Simplified%20Wrapper%20and%20Interface,%2C%20Octave%2C%20Scilab%20and%20Scheme.)) is an open-source tool for connecting programs and libraries written in C or C++ to scripting languages such as Python: it lets other languages call functions written in C or C++, pass complex data types to them, prevents memory from being improperly freed, allows classes to be subclassed across languages, and so on.
The driver's `Open()` method returns a data source object and takes two parameters:
```Python
open(<filename>, <update>)
```
`filename` is the file name; `update` is 0 for read-only access and 1 for read-write.
```
import sys
filename = 'ospy_data1/sites.shp'
dataSource = driver.Open(filename,0)
if dataSource is None:
print ('could not open')
sys.exit()
print ('done!')
dataSource
```
Next, let's look at the layers inside the vector file.
```
layer = dataSource.GetLayer(0)
n = layer.GetFeatureCount()
print ('feature count:', n)
```
一般获取shapefile的layer时都填0,不填也可。这里 layer 就是shapefile整个全部feature(就是shpfile中的几何形状图)组成的。我个人理解,为什么是0:因为 shpfile 的所有feature都在同一层上,不像栅格图那样有很多不同的bands。
The code below reads out the extent of the whole shapefile:
```
extent = layer.GetExtent() # x_min, x_max, y_min, y_max
print ('extent:', extent)
print ('ul:', extent[0], extent[3])
print ('lr:', extent[1], extent[2])
```
One thing worth noting about GetExtent: OGR feature (shapefile) extents are ordered differently from GDAL raster extents. GDAL's extent format is (xmin, ymin, xmax, ymax), whereas OGR returns (xmin, xmax, ymin, ymax).
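As a quick illustration (plain Python, no GDAL needed), a helper that reorders an OGR extent tuple into GDAL's convention; the sample numbers are made up:

```python
def ogr_extent_to_gdal(extent):
    """Reorder OGR's (xmin, xmax, ymin, ymax) into GDAL's (xmin, ymin, xmax, ymax)."""
    x_min, x_max, y_min, y_max = extent
    return (x_min, y_min, x_max, y_max)

# hypothetical extent values for illustration
print(ogr_extent_to_gdal((440720.0, 441920.0, 3750120.0, 3751320.0)))
# (440720.0, 3750120.0, 441920.0, 3751320.0)
```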
If you need to read one particular feature:
```
feat = layer.GetFeature(41)
fid = feat.GetField('id')
print (fid)
feat = layer.GetFeature(0)
fid = feat.GetField('id') #should be a different id
print (fid)
feat = layer.GetNextFeature()  # read the next feature
# read features in order, looping over all of them
while feat:
    feat = layer.GetNextFeature()
layer.ResetReading()  # reset the reading cursor
feat = layer.GetNextFeature()
feat.GetField('id')
```
Now look at the feature's geometry:
```
geom = feat.GetGeometryRef()
geom.GetX()
geom.GetY()
print (geom)
```
Next, writing data.
Creating a new file mainly uses:
```Python
driver.CreateDataSource(<filename>)
```
The file given by filename must not already exist, or an error is raised.
After creating the new file, create a new layer in it:
```Python
dataSource.CreateLayer(<name>, geom_type=<OGRwkbGeometryType>)
```
The [wkb](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry) in OGRwkbGeometryType means well-known binary, the binary equivalent of the Well-known text representation of geometry, a text markup language for representing vector geometry. The binary form can store and transfer the same information as WKT in a more compressed form that is convenient for computers to process, though of course much less human-readable.
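To see the byte layout, the sketch below hand-packs a 2D point into little-endian WKB with the standard struct module (1 byte for the byte-order flag + 4 bytes for the geometry type + two 8-byte doubles = 21 bytes). This is only a didactic sketch of the encoding; in practice you would call a geometry's ExportToWkb() instead:

```python
import struct

def point_to_wkb(x, y):
    # '<' little-endian; B: byte-order flag (1 = little-endian);
    # I: geometry type (1 = Point); d, d: the two float64 coordinates
    return struct.pack('<BIdd', 1, 1, x, y)

wkb = point_to_wkb(10.0, 20.0)
print(len(wkb))  # 21 bytes
```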
OGRwkbGeometryType is an enumerated type declared in the OGR headers that lists the well-known binary geometry types. An example makes this clearer:
```
import os
new_file = "ospy_data1/test.shp"
if os.path.isfile(new_file):
    driver.DeleteDataSource(new_file)  # TODO: it cannot work
ds2 = driver.CreateDataSource(new_file)
layer2 = ds2.CreateLayer('test', geom_type=ogr.wkbPoint)
ds2
```
A new field can only be added on the layer, and only while the layer contains no data yet. If the added field is a string field, its width must also be set.
Then set up the geometry:
```
#create point geometry
pointCoord = -124.4577,48.0135
point = ogr.Geometry(ogr.wkbPoint)
point.AddPoint(pointCoord[0],pointCoord[1])
```
To add a new feature, all of its fields must be created first. Note that OFTString is one of the OGRFieldType values under the vector classes.
```
# ogr.FieldDefn(fieldName, fieldType)
fieldName = 'id'
fieldType = ogr.OFTString
fieldDefn = ogr.FieldDefn(fieldName, fieldType)
layer2.CreateField(fieldDefn)
```
Then create the feature and set its values.
```
# Create the feature and set values
fieldValue = 'test'
featureDefn = layer2.GetLayerDefn()
outFeature = ogr.Feature(featureDefn)
outFeature.SetGeometry(point)
outFeature.SetField(fieldName, fieldValue)
layer2.CreateFeature(outFeature)
```
A guess at the relationship between feature, layer and geometry: the geometry and the fields sit at the same level, they are both assigned to the feature, and the feature is then handed to the layer.
## Geometry and projections
Create an empty geometry object with ogr.Geometry.
Different kinds of geometry (point, line, polygon, etc.) are defined with different methods.
To create a point, use AddPoint(<x>, <y>, [<z>]); the z coordinate is usually omitted and defaults to 0.
For example:
```
from osgeo import ogr
point = ogr.Geometry(ogr.wkbPoint)
point.AddPoint(10,20)
```
Create a line. Use AddPoint(<x>, <y>, [<z>]) to add points; use SetPoint(<index>, <x>, <y>, [<z>]) to change a point's coordinates.
```
line = ogr.Geometry(ogr.wkbLineString)
line.AddPoint(10,10)
line.AddPoint(20,20)
line.SetPoint(0,30,30) #(10,10) -> (30,30)
print (line.GetPointCount())
```
Read the x and y coordinates of point number 0:
```
print (line.GetX(0))
print (line.GetY(0))
```
To create a polygon, first create the ring(s), then add the rings to the polygon object.
How to create a ring? Create a ring object, then add the points to it one by one.
```
ring = ogr.Geometry(ogr.wkbLinearRing)
ring.AddPoint(0,0)
ring.AddPoint(100,0)
ring.AddPoint(100,100)
ring.AddPoint(0,100)
# When done, close the ring with CloseRings, or set the last point equal to the first.
ring.CloseRings()
# ring.AddPoint(0,0)
```
The next example creates a box with a hole in it:
```
outring = ogr.Geometry(ogr.wkbLinearRing)
outring.AddPoint(0,0)
outring.AddPoint(100,0)
outring.AddPoint(100,100)
outring.AddPoint(0,100)
outring.AddPoint(0,0)
inring = ogr.Geometry(ogr.wkbLinearRing)
inring.AddPoint(25,25)
inring.AddPoint(75,25)
inring.AddPoint(75,75)
inring.AddPoint(25,75)
inring.CloseRings()
polygon = ogr.Geometry(ogr.wkbPolygon)
polygon.AddGeometry(outring)
polygon.AddGeometry(inring)
polygon
```
In short, create the polygon object first, then add the rings. Count how many rings the polygon has:
```
print (polygon.GetGeometryCount())
```
When reading rings back from a polygon, the index order is the order in which the rings were added:
```
polygon.GetGeometryRef(0)
polygon.GetGeometryRef(1)
```
Compound geometries (multi geometry) include MultiPoint, MultiLineString and MultiPolygon. Use AddGeometry to add ordinary geometries to a compound geometry, for example:
```
multipoint = ogr.Geometry(ogr.wkbMultiPoint)
point = ogr.Geometry(ogr.wkbPoint)
point.AddPoint(10,10)
multipoint.AddGeometry(point)
point.AddPoint(20,20)
multipoint.AddGeometry(point)
```
Reading the geometries inside a MultiGeometry works exactly like reading the rings of a polygon; you could say a polygon is a built-in kind of MultiGeometry.
Do not destroy the geometry of an already existing feature; doing so will crash Python.
Only destroy geometries created during the script run, whether built by hand or returned by some other function. Even if such a geometry has already been used to create another feature, you can still destroy it.
For example: polygon.Destroy()
For projections, use the SpatialReference object.
Many projection encodings exist; GDAL supports WKT, PROJ.4, EPSG, USGS and ESRI .prj.
Projections can be read from a layer or from a geometry, for example:
```Python
spatialRef = layer.GetSpatialRef()
spatialRef = geom.GetSpatialReference()
```
Projection information is usually stored in a .prj file; if that file is missing, the functions above return None.
To build a new Projection:
First import the osr module, then create a SpatialReference object with osr.SpatialReference().
Then import projection information into the SpatialReference object with one of the following:
```Python
ImportFromWkt(<wkt>)
ImportFromEPSG(<epsg>)
ImportFromProj4(<proj4>)
ImportFromESRI(<proj_lines>)
ImportFromPCI(<proj>, <units>, <parms>)
ImportFromUSGS(<proj_code>, <zone>)
ImportFromXML(<xml>)
```
To export a projection, the following methods return it as a string:
```Python
ExportToWkt()
ExportToPrettyWkt()
ExportToProj4()
ExportToPCI()
ExportToUSGS()
ExportToXML()
```
To reproject a geometry, first initialize the two projections, then create a CoordinateTransformation object and use it to perform the transform.
```
from osgeo import osr
sourceSR = osr.SpatialReference()
print (sourceSR) #empty
sourceSR.ImportFromEPSG(32612) #UTM 12N WGS84
print(sourceSR.ExportToWkt())
# print(sourceSR)
targetSR = osr.SpatialReference()
targetSR.ImportFromEPSG(4326) #Geo WGS84
#create coordinate transform to go from UTM to geo
coordTrans = osr.CoordinateTransformation(sourceSR, targetSR)
coordTrans
```
Edit a geometry only at the appropriate time; after a projection transform it is best not to touch it again.
To reproject all geometries in a DataSource, you have to do them one by one. Here is an example:
```
driver = ogr.GetDriverByName('ESRI Shapefile')
ds = driver.Open('ospy_data1/sites.shp')
layer = ds.GetLayer()
sr = layer.GetSpatialRef() #UTM 12N WGS84
print (sr)
from osgeo import osr
sr2 = osr.SpatialReference()
sr2.ImportFromEPSG(4326) #unprojected WGS84
ct = osr.CoordinateTransformation(sr, sr2)
feature = layer.GetFeature(0)
geom = feature.GetGeometryRef()
print (geom) #point coords in UTM
geom.Transform(ct)
print (geom) #unprojected point coords
```
Writing a projection to a .prj file is actually simple: call MorphToESRI() first, export the projection to a string, then open a text file and write the string into it. For example:
```
sr2.MorphToESRI()
file = open('ospy_data1/test.prj', 'w')
file.write(sr2.ExportToWkt())
file.close()
```
## Filters, simple spatial analysis, functions and modules
The Layer object has a method SetAttributeFilter(<where_clause>) that filters out the features of the layer matching a given condition. Once a filter is set, GetNextFeature() returns the matching features one by one. SetAttributeFilter(None) clears the filter.
```
from osgeo import ogr
driver = ogr.GetDriverByName('ESRI Shapefile')
ds = driver.Open('ospy_data1/sites.shp')
layer = ds.GetLayer()
layer.GetFeatureCount()
layer.SetAttributeFilter("cover = 'shrubs'")
layer.GetFeatureCount()
layer.SetAttributeFilter(None)
layer.GetFeatureCount()
```
There are two kinds of spatial filters. SetSpatialFilter(<geom>) keeps only the features that spatially intersect the given geometry; for example, pass a polygon to select the features falling inside that polygon.
There is also SetSpatialFilterRect(<minx>, <miny>, <maxx>, <maxy>), which takes four coordinates and selects the features inside that rectangle.
SetSpatialFilter(None) likewise clears the spatial filter.
```
ptDS = driver.Open('ospy3_data/sites.shp', 0)
ptLayer = ptDS.GetLayer()
polyDS = driver.Open('ospy3_data/cache_towns.shp')
polyLayer = polyDS.GetLayer()
polyFeature = polyLayer.GetFeature(18)
polyFeature.GetField('name')
poly = polyFeature.GetGeometryRef()
ptLayer.SetSpatialFilter(poly)
print(ptLayer.GetFeatureCount())  # should just be one
ptLayer.SetSpatialFilter(None)
print(ptLayer.GetFeatureCount())  # everything is back
```
More content will be added later.
## Reading raster data with GDAL
GDAL natively supports more than 100 raster data types, covering all mainstream GIS and remote-sensing data formats, including
- ArcInfo grids, ArcSDE raster, Imagine, Idrisi, ENVI, GRASS, GeoTIFF
- HDF4, HDF5
- USGS DOQ, USGS DEM
- ECW, MrSID
- TIFF, JPEG, JPEG2000, PNG, GIF, BMP
```
from osgeo import gdal
print("GDAL's version is: " + gdal.__version__)
print(gdal)
```
The data can be downloaded from https://www.gis.usu.edu/~chrisg/python/2009/lectures/ospy_data4.zip (clicking the link starts the download); unzip the file and place it at the path shown below.
```
fn = 'ospy_data4/aster.img'
ds = gdal.Open(fn, 0)
print(ds)
```
Read the raster dataset's pixel counts in the x and y directions and its band count:
```
cols = ds.RasterXSize
rows = ds.RasterYSize
bands = ds.RasterCount
print('Number of bands in image: {n}\n'.format(n=bands))
print('Image size is: {r} rows x {c} columns\n'.format(r=rows, c=cols))
```
The geotransform is a six-element tuple that stores the raster dataset's georeferencing information:
```Python
adfGeoTransform[0]  # top left x coordinate
adfGeoTransform[1]  # w-e pixel resolution
adfGeoTransform[2]  # rotation; 0 if the image is north up
adfGeoTransform[3]  # top left y coordinate
adfGeoTransform[4]  # rotation; 0 if the image is north up
adfGeoTransform[5]  # n-s pixel resolution (negative for north-up images)
```
```
geotransform = ds.GetGeoTransform()
originX = geotransform[0]
originY = geotransform[3]
pixelWidth = geotransform[1]
pixelHeight = geotransform[5]
print(originX)
print(originY)
print(pixelWidth)
print(pixelHeight)
```
To compute the pixel offset of a given coordinate, i.e. its position relative to the top-left pixel measured in whole pixels, use the following formulas:
$$xOffset = int((x - originX) / pixelWidth)$$
$$yOffset = int((y - originY) / pixelHeight)$$
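The two formulas in plain Python, with made-up origin and resolution values (30 m pixels, north-up, so pixelHeight is negative):

```python
def pixel_offset(x, y, originX, originY, pixelWidth, pixelHeight):
    # position of (x, y) relative to the top-left pixel, in whole pixels
    xOffset = int((x - originX) / pixelWidth)
    yOffset = int((y - originY) / pixelHeight)
    return xOffset, yOffset

print(pixel_offset(440800.0, 3751200.0, 440720.0, 3751320.0, 30.0, -30.0))
# (2, 4)
```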
Reading the value of a pixel takes two steps:
1. First fetch a band with GetRasterBand(<index>), where the argument is the band's index.
2. Then call ReadAsArray(<xoff>, <yoff>, <xsize>, <ysize>) to read the matrix of size (xsize, ysize) starting at (xoff, yoff). Setting the size to 1x1 reads a single pixel. Note that this method always puts the data into an array, even when reading a single pixel. For example:
```
band = ds.GetRasterBand(1)
xOffset = 1
yOffset = 2
data = band.ReadAsArray(xOffset, yOffset, 2, 3)
data
```
To read a whole image in one go, simply set both offsets to 0 and the sizes to the full image size.
Note that the 2 above is the xsize and corresponds to the number of columns in the returned data. So when picking a single pixel out of the array, use data[yoff, xoff]; in short, rows map to the y axis and columns to the x axis.
```
data[2, 1]
```
How can raster data be read more efficiently? Reading pixel by pixel is obviously very slow, and stuffing the whole raster dataset into a 2D array is not great either, since it still takes a lot of memory. A better approach is to **read the data block by block**, keeping only the block currently needed in memory.
Tiled means the raster is stored in blocks. Some formats, such as GeoTIFF (by default), are not tiled: each row is one block. The Erdas Imagine format is tiled into 64x64-pixel blocks. When one row is one block, reading row by row is the economical choice; for tiled data, setting the ReadAsArray() arguments so that exactly one block is read at a time is the most efficient approach. For example:
```
rows = 13
cols = 11
xBSize = 5
yBSize = 5
for i in range(0, rows, yBSize):
    if i + yBSize < rows:
        numRows = yBSize
    else:
        numRows = rows - i
    for j in range(0, cols, xBSize):
        if j + xBSize < cols:
            numCols = xBSize
        else:
            numCols = cols - j
        data = band.ReadAsArray(j, i, numCols, numRows)
type(data)
```
numpy is the go-to tool when processing raster data and is now the default array format.
Some numpy features come up all the time with rasters, for example masking: given an array and a condition, return a boolean array. To count the pixels greater than 0, for instance, combine mask and sum:
```
import numpy as np
a = np.array([0, 4, 6, 0, 2])
mask = np.greater(a, 0)
np.sum(mask)
```
## Writing raster data and other common operations
The previous sections covered reading; now for writing.
```
from osgeo import gdal, gdalconst
```
New datasets are created with the following function:
```Python
Create(<filename>, <xsize>, <ysize>, [<bands>], [<GDALDataType>])
```
```
driver = gdal.GetDriverByName('HFA')
ds = driver.Create('ospy_data4/sample1.img', 3, 3, 1, gdalconst.GDT_Float32)
```
When the statement above executes, the storage space is already allocated on disk. Next, grab the band object:
```
band = ds.GetRasterBand(1)
```
The band object supports writing an array directly; the two extra arguments are the x offset and the y offset. First, make up some data.
```
import numpy as np
data2 = np.array([ [0,54,100], [87,230,5], [161,120,24] ])
data3 = np.array([ [0,100,23], [78,29,1], [134,245,0] ])
ndvi = (data3 - data2) / (data3 + data2)
ndvi
```
Note the cases where the denominator is 0: the computation still runs, but it is better to deal with them beforehand.
```
np.seterr(divide='ignore', invalid='ignore')
ndvi = (data3 - data2) / (data3 + data2)
ndvi
band.WriteArray(ndvi, 0, 0)
```
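An alternative to silencing the warnings globally is to mask the zero denominators explicitly; a sketch using np.where that stamps -99 (a typical NoData value) wherever the band sum is zero:

```python
import numpy as np

data2 = np.array([[0, 54, 100], [87, 230, 5], [161, 120, 24]], dtype=float)
data3 = np.array([[0, 100, 23], [78, 29, 1], [134, 245, 0]], dtype=float)

denom = data3 + data2
# divide by a safe denominator, then stamp the zero-sum cells with NoData
ndvi = np.where(denom == 0, -99.0, (data3 - data2) / np.where(denom == 0, 1.0, denom))
print(ndvi)
```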
The NoData value can also be set:
```
band.SetNoDataValue(-99)
ND = band.GetNoDataValue()
ND
```
For raster images, building pyramids (overviews) is still worthwhile:
```
gdal.SetConfigOption('HFA_USE_RRD', 'YES')
ds.BuildOverviews(overviewlist=[2,4, 8,16,32,64,128])
```
Mosaicking, reprojecting, clipping, and converting rasters to vectors are all common operations; they will be added as they come up. Finally, some notes on the GDAL command-line tools and the FWTools toolkit.
## A quick look at the GDAL command-line tools and FWTools
This part draws mainly on the following material:
- [GDAL notes, part 1: getting started with the GDAL command line](https://www.jianshu.com/p/e48d0a17628c)
- [GDAL/OGR Quickstart](https://live.osgeo.org/en/quickstart/gdal_quickstart.html)
- [GDAL command topics: the ogrinfo command](https://www.cnblogs.com/eshinex/p/10301738.html)
- [FWTools](http://wiki.gis.com/wiki/index.php/FWTools)
GDAL tools:
- gdalinfo to inspect an image's metadata
- gdal_translate to convert between formats
- gdalwarp to reproject your data
- gdalwarp or gdal_merge.py to mosaic your data
- gdaltindex to build a shapefile index of your rasters
OGR tools:
- ogrinfo to get information about vector data
- ogr2ogr to convert vector data between formats
The usage of these tools can be checked by running them with --help or --help-general on the command line:
```
! ogrinfo --help-general
! ogrinfo --help
```
Details for every command can be found on the GDAL website: [GDAL documentation » Programs](https://gdal.org/programs/index.html).
A command cheat sheet is available at [dwtkns/gdal-cheat-sheet](https://github.com/dwtkns/gdal-cheat-sheet).
FWTools is an open-source GIS toolkit in which Frank Warmerdam bundled several popular tools:
- OpenEV – A high performance raster/vector desktop data viewer and analysis tool.
- MapServer – A web mapping package.
- GDAL/OGR – A library and set of command line utility applications for reading and writing a variety of geospatial raster (GDAL) and vector (OGR) formats.
- PROJ.4 – A cartographic projections library with command-line utilities.
- OGDI – A multi-format raster and vector reading technology noteworthy for inclusion of support for various military formats including VPF (i.e., VMAP, VITD), RPF (i.e., CADRG, CIB), and ADRG.
- Python programming language
The end result is a single piece of software bundling the tools above for convenience. OpenEV is a desktop application; the others are essentially command-line tools. FWTools itself is not used much any more, so focusing on the GDAL command line is enough.
```
import os
import wandb
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(font_scale=2., style='whitegrid')
def get_metrics(sweep_id, keys=None, config_keys=None):
api = wandb.Api()
sweep = api.sweep(sweep_id)
if isinstance(keys, list):
keys.extend(['_runtime', '_step', '_timestamp'])
keys = list(set(keys))
data = []
for run in sweep.runs:
cfg = {k: run.config[k] for k in config_keys}
for row in run.scan_history(keys=keys):
data.append(dict(run_id=run.id, **cfg, **row))
return sweep, pd.DataFrame(data)
keys = None ## get everything
## KeOps
_, metrics1 = get_metrics('gausspr/simplex-gp/xt1i60t7', keys=keys, config_keys=['method', 'dataset'])
# _, metrics1 = get_metrics('snym/bilateral-gp-experiments/ze4oomx4', keys=keys, config_keys=['method', 'dataset'])
## Simplex-GP
_, metrics2 = get_metrics('gausspr/simplex-gp/wz0yzdqq', keys=keys, config_keys=['method', 'dataset'])
metrics = pd.concat([metrics1, metrics2])
metrics['train/total_cu_ts'] = metrics.groupby(by=['run_id'])['train/total_ts'].cumsum()
metrics['method'] = metrics['method'].apply(lambda n: 'Simplex-GP' if n == 'BiGP' else n)
metrics
```
## Runtime, RMSE, MLL
```
# fig, axes = plt.subplots(figsize=(10, 10), nrows=2, ncols=2)
fig, axes = plt.subplots(figsize=(17, 7), ncols=2, sharex=True)
dataset = '3droad'
plt_metrics = metrics[(metrics.dataset == dataset) & (metrics._step <= 50)].copy()
plt_metrics['train/total_cu_ts_mins'] = plt_metrics['train/total_cu_ts'].apply(lambda x: x / 60)
plt_metrics = plt_metrics.sort_values(by=['method'], ascending=False)
# sns.lineplot(data=plt_metrics, x='_step', y='train/mll', hue='method', ci=None, ax=axes[0])
# axes[0].set_title(r'Train MLL')
# axes[0].set_xlabel('Epochs')
# axes[0].set_ylabel('')
# axes[0].legend(title='Method')
sns.lineplot(data=plt_metrics, x='_step', y='train/total_cu_ts_mins', hue='method', ci=None, ax=axes[0])
axes[0].set_title(r'Training Time (minutes)')
axes[0].set_xlabel('Epochs')
axes[0].set_ylabel('')
axes[0].legend(title='Method')
# sns.lineplot(data=plt_metrics, x='_step', y='val/rmse', hue='method', ci=None, ax=axes[1,0])
sns.lineplot(data=plt_metrics, x='_step', y='test/rmse', hue='method', ci=None, ax=axes[1])
axes[1].set_xlabel('Epochs')
axes[1].set_ylabel('')
axes[1].set_title('Test RMSE')
axes[1].legend(title='Method')
fig.tight_layout()
# fig.savefig(f'{dataset}-train.pdf', bbox_inches='tight')
```
## Lengthscales and Noise
```
def raw2label(v):
l = v.split('/')[-1]
if l == 'outputscale':
return r'$\alpha$'
elif l == 'noise':
return r'$\sigma^2$'
else:
return fr'$\ell_{{{l}}}$'
dataset = 'houseelectric'
# plt_metrics = metrics[(metrics.dataset == dataset) & (metrics._step == step)].dropna(axis=1)
plt_metrics = metrics[(metrics.dataset == dataset)].dropna(axis=1)
param_columns = list(filter(lambda x: 'param/lengthscale' in x, plt_metrics.columns))
plt_metrics = plt_metrics[['run_id', 'method', 'dataset', '_step'] + param_columns]
plt_metrics = plt_metrics.melt(id_vars=['run_id', 'method', 'dataset', '_step'], var_name='param', value_name='param_value')
plt_metrics = plt_metrics.sort_values(by=['method', 'param_value'], ascending=False)
fig, ax = plt.subplots(figsize=(11, 7))
# fig, ax = plt.subplots()
sns.barplot(data=plt_metrics, x='param', y='param_value', hue='method', ax=ax,
palette=[sns.color_palette('hls', 8)[3], sns.color_palette('hls', 8)[5]])
ax.set_xticklabels([raw2label(t.get_text()) for t in ax.get_xticklabels()])
ax.set_xlabel('Lengthscales')
ax.set_ylabel('')
ax.set_title(f'{dataset}')
ax.legend(title='Method');
fig.savefig(f'{dataset}-ls.pdf', bbox_inches='tight')
def raw2label(v):
l = v.split('/')[-1]
if l == 'outputscale' or l == 'noise':
return l
else:
return fr'$\ell_{{{l}}}$'
dataset = 'houseelectric'
plt_metrics = metrics[(metrics.dataset == dataset)].dropna(axis=1)
param_columns = ['param/outputscale', 'param/noise']
plt_metrics = plt_metrics[['run_id', 'method', 'dataset', '_step'] + param_columns]
plt_metrics = plt_metrics.melt(id_vars=['run_id', 'method', 'dataset', '_step'], var_name='param', value_name='param_value')
plt_metrics = plt_metrics.sort_values(by=['method', 'param_value'], ascending=False)
fig, ax = plt.subplots()
sns.barplot(data=plt_metrics, x='param', y='param_value', hue='method', ax=ax)
ax.set_xticklabels([raw2label(t.get_text()) for t in ax.get_xticklabels()])
ax.set_xlabel('')
ax.set_ylabel('')
fig.savefig(f'{dataset}-scale_noise.png', bbox_inches='tight')
```
## CG Truncation
```
## Simplex-GP CG Truncations with noise
sweep, metrics = get_metrics('gausspr/simplex-gp/ovlqyu20',
keys=['train/total_ts', 'train/mll', 'val/rmse', 'test/rmse'],
config_keys=['dataset', 'cg_iter'])
metrics['train/total_cu_ts'] = metrics.groupby(by=['run_id'])['train/total_ts'].cumsum()
metrics
rmse_data = []
for run in sweep.runs:
rmse_data.append({ 'dataset': run.config['dataset'], 'cg_iter': run.config['cg_iter'], 'best_rmse': run.summary['test/best_rmse'] })
rmse_data = pd.DataFrame(rmse_data)
rmse_data[rmse_data.dataset == 'protein']
fig, axes = plt.subplots(figsize=(10, 10), nrows=2, ncols=2)
dataset = 'protein'
plt_metrics = metrics[(metrics.dataset == dataset) & (metrics._step <= 100)].copy()
plt_metrics = plt_metrics.sort_values(by=['cg_iter'])
# plt_metrics = plt_metrics[plt_metrics['train/mll'] != 'NaN']
plt_metrics.loc[:, 'train/mll'] = pd.to_numeric(plt_metrics['train/mll'])
sns.lineplot(data=plt_metrics, x='_step', y='train/mll', hue='cg_iter', ax=axes[0,0])
sns.lineplot(data=plt_metrics, x='_step', y='train/total_cu_ts', hue='cg_iter', ax=axes[0,1])
sns.lineplot(data=plt_metrics, x='_step', y='val/rmse', hue='cg_iter', ax=axes[1,0])
sns.lineplot(data=plt_metrics, x='_step', y='test/rmse', hue='cg_iter', ax=axes[1,1])
fig.tight_layout()
# fig.savefig(f'{dataset}-cg-iter.png', bbox_inches='tight')
```
# Incremental modeling with decision optimization
This tutorial includes everything you need to set up decision optimization engines, build a mathematical programming model, then incrementally modify it.
You will learn how to:
- change coefficients in an expression
- add terms in an expression
- modify constraints and variables bounds
- remove/add constraints
- play with relaxations
Table of contents:
- [Describe the business problem](#Describe-the-business-problem:--Games-Scheduling-in-the-National-Football-League)
* [How decision optimization (prescriptive analytics) can help](#How--decision-optimization-can-help)
* [Use decision optimization](#Use-decision-optimization)
* [Step 1: Set up the prescriptive model](#Step-1:-Set-up-the-prescriptive-model)
* [Step 2: Modify the model](#Step-2:-Modify-the-model)
* [Summary](#Summary)
****
## Describe the business problem: Telephone production
A possible descriptive model of the telephone production problem is as follows:
* Decision variables:
* Number of desk phones produced (DeskProduction)
* Number of cellular phones produced (CellProduction)
Objective: Maximize profit
* Constraints:
* The DeskProduction should be greater than or equal to 100.
* The CellProduction should be greater than or equal to 100.
* The assembly time for DeskProduction plus the assembly time for CellProduction should not exceed 400 hours.
* The painting time for DeskProduction plus the painting time for CellProduction should not exceed 490 hours.
This is a type of discrete optimization problem that can be solved by using either **Integer Programming** (IP) or **Constraint Programming** (CP).
> **Integer Programming** is the class of problems defined as the optimization of a linear function, subject to linear constraints over integer variables.
> **Constraint Programming** problems generally have discrete decision variables, but the constraints can be logical, and the arithmetic expressions are not restricted to being linear.
For the purposes of this tutorial, we will illustrate a solution with mathematical programming (MP).
## How decision optimization can help
* Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes.
* Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
* Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
<u>With prescriptive analytics, you can:</u>
* Automate the complex decisions and trade-offs to better manage your limited resources.
* Take advantage of a future opportunity or mitigate a future risk.
* Proactively update recommendations based on changing events.
* Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
## Use decision optimization
### Step 1: Set up the prescriptive model
#### Writing a mathematical model
Convert the descriptive model into a mathematical model:
* Use the two decision variables DeskProduction and CellProduction
* Use the data given in the problem description (remember to convert minutes to hours where appropriate)
* Write the objective as a mathematical expression
* Write the constraints as mathematical expressions (use “=”, “<=”, or “>=”, and name the constraints to describe their purpose)
* Define the domain for the decision variables
#### Telephone production: a mathematical model
To express the last two constraints, we model assembly time and painting time as linear combinations of the two productions, resulting in the following mathematical model:
```
maximize: 12 desk_production+20 cell_production
subject to:
desk_production>=100
cell_production>=100
0.2 desk_production+0.4 cell_production<=400
0.5 desk_production+0.4 cell_production<=490
```
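Before building the docplex model, the arithmetic can be sanity-checked by hand; a small plain-Python sketch that evaluates the objective and all four constraints for a candidate production plan:

```python
def evaluate(desk, cell):
    profit = 12 * desk + 20 * cell
    feasible = (desk >= 100 and cell >= 100
                and 0.2 * desk + 0.4 * cell <= 400
                and 0.5 * desk + 0.4 * cell <= 490)
    return profit, feasible

# (300, 850) uses the assembly and painting budgets exactly
print(evaluate(300, 850))  # (20600, True)
```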
```
# first import the Model class from docplex.mp
from docplex.mp.model import Model
# create one model instance, with a name
m = Model(name='telephone_production')
```
The integer variable desk represents the production of desk telephones.
The integer variable cell represents the production of cell phones.
```
# by default, all variables in Docplex have a lower bound of 0 and infinite upper bound
desk = m.integer_var(name='desk')
cell = m.integer_var(name='cell')
m.maximize(12 * desk + 20 * cell)
# write constraints
# constraint #1: desk production is greater than 100
m.add_constraint(desk >= 100, "desk")
# constraint #2: cell production is greater than 100
m.add_constraint(cell >= 100, "cell")
# constraint #3: assembly time limit
ct_assembly = m.add_constraint( 0.2 * desk + 0.4 * cell <= 400, "assembly_limit")
# constraint #4: painting time limit
ct_painting = m.add_constraint( 0.5 * desk + 0.4 * cell <= 490, "painting_limit")
```
#### Solve with Decision Optimization
Depending on the size of the problem, the solve stage might fail and require the Commercial Edition of CPLEX engines, which is included in the premium environments in Watson Studio.
You will get the best solution found after ***n*** seconds, because of a time limit parameter.
```
m.print_information()
msol = m.solve()
assert msol is not None, "model can't solve"
m.print_solution()
```
### Step 2: Modify the model
#### Modify constraints and variables bounds
The model object provides getters to retrieve variables and constraints by name:
* get_var_by_name
* get_constraint_by_name
The variable and constraint objects both provide properties to access the right hand side (rhs) and left hand side (lhs).
When you modify a rhs or lhs of a variable, you of course need to give a number.
When you modify a rhs or lhs of a constraint, you can give a number or an expression based on variables.
Imagine that you want to build at most 2000 desks and 1000 cells.
And you want to increase the minimum production of both of them from 100 to 350.
```
# Access by name
m.get_var_by_name("desk").ub = 2000
# access via the object
cell.ub = 1000
m.get_constraint_by_name("desk").rhs = 350
m.get_constraint_by_name("cell").rhs = 350
msol = m.solve()
assert msol is not None, "model can't solve"
m.print_solution()
```
The production plan has been updated according to these small changes.
#### Modify expressions
You now want to introduce a new type of product: the "hybrid" telephone.
```
hybrid = m.integer_var(name='hybrid')
```
You need to:
- introduce it in the objective
- introduce it in the existing painting and assembly time constraints
- add a new constraint for its production to produce at least 350 of them.
```
m.add_constraint(hybrid >= 350)
;
```
The objective will move from
<code>
maximize: 12 desk_production+20 cell_production
</code>
to
<code>
maximize: 12 desk_production+20 cell_production + 10 hybrid_production
</code>
```
m.get_objective_expr().add_term(hybrid, 10)
;
```
The time constraints will be updated from
<code>
0.2 desk_production+0.4 cell_production<=400
0.5 desk_production+0.4 cell_production<=490
</code>
to
<code>
0.2 desk_production+0.4 cell_production + 0.2 hybrid_production<=400
0.5 desk_production+0.4 cell_production + 0.2 hybrid_production<=490
</code>
When you add a constraint to a model, its object is returned to you by the method add_constraint.
If you don't have it, you can access it via its name
```
m.get_constraint_by_name("assembly_limit").lhs.add_term(hybrid, 0.2)
ct_painting.lhs.add_term(hybrid, 0.2)
;
```
You can now compute the new production plan for our 3 products
```
msol = m.solve()
assert msol is not None, "model can't solve"
m.print_solution()
```
Now imagine that you have improved your painting process: the coefficients in the painting limit are no longer [0.5, 0.4, 0.2] but [0.1, 0.1, 0.1].
You can modify the coefficients, variable by variable, with set_coefficient or via a list of (variable, coeff) with set_coefficients
```
ct_painting.lhs.set_coefficients([(desk, 0.1), (cell, 0.1), (hybrid, 0.1)])
msol = m.solve()
assert msol is not None, "model can't solve"
m.print_solution()
```
#### Relaxations
Now introduce a new constraint: polishing time limit.
```
# constraint: polishing time limit
ct_polishing = m.add_constraint( 0.6 * desk + 0.6 * cell + 0.3 * hybrid <= 290, "polishing_limit")
msol = m.solve()
if msol is None:
print("model can't solve")
```
The model is now infeasible. We need to handle it and dig into the infeasibilities.
You can now use the Relaxer object. You can control the way it will relax the constraints or you can use one of the various automatic modes:
- 'all' relaxes all constraints using a MEDIUM priority; this is the default value.
- 'named' relaxes all constraints with a user name but not the others.
- 'match' looks for priority names within constraint names; unnamed constraints are not relaxed.
Use the 'match' mode.
Polishing constraint is mandatory.
Painting constraint is a nice to have.
Assembly constraint has low priority.
```
ct_polishing.name = "high_"+ct_polishing.name
ct_assembly.name = "low_"+ct_assembly.name
ct_painting.name = "medium_"+ct_painting.name
# if a name contains "low", it has priority LOW
# if a ct name contains "medium" it has priority MEDIUM
# same for HIGH
# if a constraint has no name or does not match any, it is not relaxable.
from docplex.mp.relaxer import Relaxer
relaxer = Relaxer(prioritizer='match', verbose=True)
relaxed_sol = relaxer.relax(m)
relaxed_ok = relaxed_sol is not None
assert relaxed_ok, "relaxation failed"
relaxer.print_information()
m.print_solution()
ct_polishing_relax = relaxer.get_relaxation(ct_polishing)
print("* found slack of {0} for polish ct".format(ct_polishing_relax))
ct_polishing.rhs+= ct_polishing_relax
m.solve()
m.report()
m.print_solution()
```
## Summary
You have learned how to set up and use the IBM Decision Optimization CPLEX Modeling for Python to formulate a Mathematical Programming model and modify it in various ways.
#### References
* <a href="https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html" target="_blank" rel="noopener noreferrer">Decision Optimization CPLEX Modeling for Python documentation</a>
* <a href="https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/welcome-main.html" target="_blank" rel="noopener noreferrer">Watson Studio documentation</a>
<hr>
Copyright © 2017-2021. This notebook and its source code are released under the terms of the MIT License.
A notebook to demonstrate the use of the analysis functions for lda dictionaries
```
original_dict_file = '/Users/simon/Dropbox/BioResearch/Meta_clustering/KRD/mzml sylvia/molnet130918/carnegie_lda.dict'
```
Load the dictionary
```
import pickle
with open(original_dict_file, 'rb') as f:
    lda_dict = pickle.load(f)
print(lda_dict.keys())
```
User can set some parameters here
```
overlap_thresh = 0.3
probability_thresh = 0.1
```
Compute motif degrees
```
%load_ext autoreload
%autoreload 2
from lda_analysis_functions import compute_motif_degrees
motif_degree_dict,motif_degree_list = compute_motif_degrees(lda_dict,probability_thresh,overlap_thresh)
```
print the top 10 motifs by degree
```
for m, d in motif_degree_list[:10]:
    print(m, d)
```
Print the degree of any motif
```
motif = 'motif_236'
print(motif_degree_dict[motif])
```
Plot a motif
```
from lda_analysis_functions import plot_motif
import pylab as plt
%matplotlib inline
# plot_motif(lda_dict,'motif_22')
plot_motif(lda_dict,'motif_106',figsize=(20,10))
plot_motif(lda_dict,'motif_125',figsize=(20,10))
```
List the available document metadata fields
```
from lda_analysis_functions import list_metadata_fields
mdf = list_metadata_fields(lda_dict)
print(mdf)
from lda_analysis_functions import print_mols
print_mols(lda_dict,['1'])
```
Print all molecules in a particular motif
```
motif = 'motif_40'
from lda_analysis_functions import get_motif_mols
mols = get_motif_mols(lda_dict,motif,probability_thresh,overlap_thresh)
print_mols(lda_dict,
mols,
fields = ['precursormass','parentrt','scanno'])
```
Plot a document
```
mol = mols[3]
from lda_analysis_functions import plot_mol
plot_mol(lda_dict,mol,color_motifs = True,figsize=(20,10))
# The optional xlim parameter allows us to zoom in
plot_mol(lda_dict,mol,color_motifs = True,xlim = [255,260],figsize=(20,10))
```
Now for some motif matching
Firstly, load motifdb
```
motifdb_path = '/Users/simon/git/motifdb/'
import sys,os
sys.path.append(os.path.join(motifdb_path,'code','utilities'))
from motifdb_loader import load_db,MotifFilter
dbs_to_load = ['massbank_binned_005','gnps_binned_005']
db_spectra,db_metadata = load_db(dbs_to_load,motifdb_path+'motifs')
mf = MotifFilter(db_spectra,db_metadata)
db_spectra,db_metadata = mf.filter()
from lda_analysis_functions import match_motifs
matches = match_motifs(lda_dict,db_spectra,threshold = 0.00)
match_idx = 11
plot_motif(lda_dict,matches[match_idx][0],xlim=[0,500],figsize=(10,5))
from lda_analysis_functions import plot_motif_from_dict
plot_motif_from_dict(db_spectra[matches[match_idx][1]],xlim=[0,500],figsize=(10,5))
```
<a href="https://colab.research.google.com/github/thatgeeman/pybx/blob/master/nbs/pybx_walkthrough.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
>⚠ Note: walkthrough for v0.1.3 ⚠
>
>run `! pip freeze | grep pybx` to see the installed version.
# PyBx
PyBx is a simple Python package to generate anchor boxes (also known as default or prior boxes) for object detection tasks.
```
! pip install pybx # restart runtime if asked
! pip freeze | grep pybx
```
# SSD for Object Detection
This walkthrough is built around the [Single-Shot Detection (SSD)](https://arxiv.org/pdf/1512.02325.pdf) algorithm. The SSD can be imagined as an encoder-decoder architecture, where the input image is fed into a `backbone` (encoder) to generate initial features, which then pass through a series of 2D convolution layers (decoders) that perform further feature extraction/prediction at each layer. For a single image, each layer in the decoder produces a total of `N x (4 + C)` predictions. Here `C` is the number of classes (plus one for the `background` class) in the detection task, and 4 comes from the corners of the rectangular bounding box.
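As a quick sanity check on the `N x (4 + C)` count, with illustrative numbers (20 object classes plus background, one anchor per cell of an 8x8 feature map):

```python
# Illustrative numbers, not part of the walkthrough's model
num_classes = 20 + 1           # C: 20 object classes + 1 background class
box_coords = 4                 # corners of the rectangular bounding box
cells = 8 * 8                  # N: one prediction site per cell of an 8x8 map
preds_per_cell = box_coords + num_classes
print(cells * preds_per_cell)  # 1600
```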
### Usage of the term Feature/Filter/Channel
Channel: the RGB dimension, also called a filter
Feature: (W,H) of a single channel
## Example case
For this example, we assume that our input image is a 3-channel image of shape `[B, 3, 300, 300]`, where `B` is the batch size. Assuming that a pretrained `VGG-16` is our model `backbone`, the output feature shape would be `[B, 512, 37, 37]`, meaning that 512 channels of shape `[37, 37]` were extracted from each image in the batch. In the subsequent decoder layers, for simplicity, we double the channels while halving the feature shape using `3x3` `stride=2` convolutions (except for the first decoder layer, where no convolution is applied). This results in the following shapes:
```python
torch.Size([-1, 512, 37, 37]) # inp from vgg-16 encoder
torch.Size([-1, 1024, 18, 18]) # first layer logits
torch.Size([-1, 2048, 8, 8]) # second layer logits
torch.Size([-1, 4096, 3, 3]) # third layer logits
```
<img src="https://lilianweng.github.io/lil-log/assets/images/SSD-box-scales.png" width="500" />
## Sample image
Image obtained from USC-SIPI Image Database.
The USC-SIPI image database is a collection of digitized images. It is maintained primarily to support research in image processing, image analysis, and machine vision. The first edition of the USC-SIPI image database was distributed in 1977 and many new images have been added since then.
```
! wget -q -O 'image.jpg' 'https://sipi.usc.edu/database/download.php?vol=misc&img=5.1.12'
```
## About anchor Boxes
We are expected to provide our models with "good" anchor (aka default/prior) boxes. Strong opinion: our model is [only as good as the initial anchor boxes](https://towardsdatascience.com/anchor-boxes-the-key-to-quality-object-detection-ddf9d612d4f9) that we generate. In order to improve the coverage of our model, we tend to add additional anchor boxes of different aspect ratios. Now, for a single image, each layer in the decoder produces a total of `N x A x (4 + C)` predictions, where `A` is the number of aspect ratios used to generate additional anchor boxes.
### Task description
Our aim is to find the maximum number of anchor boxes across varying feature sizes `feature_szs` and aspect ratios `asp_ratios` over the entire image. We apply no filtering to get rid of low-IOU anchors.
<img src="https://lilianweng.github.io/lil-log/assets/images/SSD-framework.png" width="600" />
```
feature_szs = [(37,37), (18,18), (8,8), (3,3)]
asp_ratios = [1/2., 1., 2.]
from operator import __mul__
n_boxes = sum([__mul__(*f) for f in feature_szs])
print(f'minimum anchor boxes with 1 aspect ratio: {n_boxes}')
print(f'minimum anchor boxes with {len(asp_ratios)} aspect ratios: {n_boxes*len(asp_ratios)}')
```
# Loading an image
```
from PIL import Image
from matplotlib import pyplot as plt
import numpy as np
import json
im = Image.open("image.jpg").convert('RGB').resize([300,300])
_ = plt.imshow(im)
```
We also make two ground-truth bounding boxes `bbox` for this image, around the clock and the photo frame, in `pascal voc` format:
```
bbox = [dict(x_min=150, y_min=70, x_max=270, y_max=220, label='clock'),
dict(x_min=10, y_min=180, x_max=115, y_max=260, label='frame'),]
bbox
```
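As an aside, converting from the `pascal voc` corner format used above to COCO's `(x, y, width, height)` format is a one-liner; a sketch reusing the same field names (the `voc_to_coco` helper is ours, not part of pybx):

```python
def voc_to_coco(b):
    # (x_min, y_min, x_max, y_max) -> (x, y, w, h), keeping the label
    return dict(x=b['x_min'], y=b['y_min'],
                w=b['x_max'] - b['x_min'], h=b['y_max'] - b['y_min'],
                label=b['label'])

clock = dict(x_min=150, y_min=70, x_max=270, y_max=220, label='clock')
print(voc_to_coco(clock))  # {'x': 150, 'y': 70, 'w': 120, 'h': 150, 'label': 'clock'}
```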
Save annotations as a json file.
```
with open('annots.json', 'w') as f:
f.write(json.dumps(bbox))
```
# Using PyBx
```
from pybx import anchor
image_sz = (300, 300, 3) # W, H, C
feature_sz = (3, 3) # number of features along W, H
asp_ratio = 1. # aspect ratio of the anchor box
anchors, labels = anchor.bx(image_sz, feature_sz, asp_ratio)
```
To visualize the anchors:
```
from pybx import vis
v = vis.VisBx(image_sz)
v.show(anchors, labels)
```
The boxes in white are the anchor boxes. We can highlight them with a different color by looking up specific box labels.
```
anchors.shape, labels
```
We see 16 labels and box coordinates reported by `anchor.bx()`, but we are certain that only 9 anchor boxes are possible for our `feature_sz=3x3` and single `asp_ratio`. Of the 16 calculated by `anchor.bx()`, 7 are considered `invalid` (they are not true anchor boxes) by `pybx` and are not shown or taken into account during further processing. `anchor.bx` in `v0.1.3` preserves them and their labels, but does not use them for calculations or visualisation once instantiated as a `MultiBx`. To wrap a set of coordinates as a `MultiBx`, we can use the `mbx()` method.
```
from pybx.basics import *
b = mbx(anchors, labels) # instantiates MultiBx for us
type(b)
```
We can iterate over a `MultiBx` object using list comprehension to understand the internal checks:
```
[(i, b_.valid()) for i, b_ in enumerate(b)] # only valid boxes shown
```
`b_.valid()` returned `True` meaning that the box is considered valid.
We can also calculate the areas of these boxes.
Each box `b_` of the `MultiBx` b is of type `BaseBx` which has some additional methods.
```
[b_.area() for b_ in b]
```
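For a corner-format box, the area is simply width times height, which also explains the valid/invalid distinction above: a degenerate box has zero area. A hand computation (hypothetical coordinates, not pybx internals):

```python
def box_area(x_min, y_min, x_max, y_max):
    # Clamp at zero so malformed corners don't yield negative areas
    return max(0, x_max - x_min) * max(0, y_max - y_min)

print(box_area(0, 0, 100, 100))  # 10000
print(box_area(50, 50, 50, 80))  # 0 -- zero-width boxes are the 'invalid' kind
```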
Displaying the coordinates of the valid boxes:
```
[b_.coords for b_ in b] # selected boxes only!
```
Displaying the labels of valid boxes
```
[b_.label for b_ in b] # selected boxes only!
```
We can of course see all 16 boxes calculated by `anchor.bx()` from the `MultiBx` as well:
```
b.coords, b.label
```
> The `vis.VisBx` internally converts all coordinates in list/json/ndarray to a `MultiBx` and shows only `valid` boxes.
We can also overlay the features generated by the model on the original image. `logits=True` generates random logits (`np.random.randn`) of the same shape as feature sizes for illustration purposes.
To aid the explainability of the model, actual model logits can also be passed into the same parameter as an array or detached tensor.
```
# ask VisBx to use random logits with logits=True
vis.VisBx(image_sz, logits=True, feature_sz=feature_sz).show(anchors, labels)
# ask VisBx to use passed logits with logits=logits
logits = np.random.randn(3,3) # assuming these are model logits
v = vis.VisBx(image_sz, logits=logits).show(anchors, labels)
```
We can highlight them with a different color if needed. Anchor boxes generated with the `named=True` parameter automatically get a label in the format `{anchor_sfx}_{feature_sz}_{asp_ratio}_{box_number}`. `anchor_sfx` is an optional parameter that can be passed to `anchor.bx()`. Here we change the color of one anchor box and one ground-truth box.
```
labels[4]
v = vis.VisBx(image_sz)
v.show(anchors, labels, color={'a_3x3_0.5_4':'red', 'clock':'orange'})
```
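Since the labels are plain strings, the lookup key can also be built programmatically. A sketch matching the labels seen above (assuming the default suffix is `'a'`, as the printed labels suggest):

```python
def anchor_label(feature_sz, asp_ratio, box_number, anchor_sfx='a'):
    # Reproduces the {anchor_sfx}_{feature_sz}_{asp_ratio}_{box_number} pattern
    w, h = feature_sz
    return f'{anchor_sfx}_{w}x{h}_{asp_ratio}_{box_number}'

print(anchor_label((3, 3), 0.5, 4))  # 'a_3x3_0.5_4'
```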
# Working with multiple feature sizes and aspect ratios
Finally we calculate anchor boxes for multiple feature sizes and aspect ratios.
```
feature_szs = [(3, 3), (2, 2)]
asp_ratios = [1/2., 2.]
anchors, labels = anchor.bxs(image_sz, feature_szs, asp_ratios)
```
This is essentially a wrapper to do list comprehension over the passed feature sizes and aspect ratios (but additionally stacks them together into an ndarray).
```
[anchor.bx(image_sz, f, ar) for f in feature_szs for ar in asp_ratios]
```
```
labels[4], labels[32]
v = vis.VisBx(image_sz)
v.show(anchors, labels, color={'a_3x3_0.5_4':'red', 'a_2x2_0.5_0':'red'})
```
As simple as that! Do leave a star or raise issues and suggestions on the project page if you found this useful!
Project page: [GitHub](https://github.com/thatgeeman/pybx)
PyPi Package: [PyBx](https://pypi.org/project/pybx/)
# Code for capsule_layers.py
```
"""
Some key layers used for constructing a Capsule Network. These layers can be used to construct a CapsNet
on other datasets, not just MNIST.
*NOTE*: Some functions may be implemented in multiple ways; I keep all of them. You can try them for yourself
just by uncommenting them and commenting out their counterparts.
"""
import keras.backend as K
import tensorflow as tf
from keras import initializers, layers
def squash(vectors, axis=-1):
"""
The non-linear activation used in Capsule. It drives the length of a large vector to near 1 and small vector to 0
:param vectors: some vectors to be squashed, N-dim tensor
:param axis: the axis to squash
:return: a Tensor with same shape as input vectors
"""
s_squared_norm = K.sum(K.square(vectors), axis=axis, keepdims=True)
scale = s_squared_norm / (1+s_squared_norm) / K.sqrt(s_squared_norm+K.epsilon())
return scale*vectors
class CapsuleLayer(layers.Layer):
    ...  # routing-by-agreement capsule layer; body omitted in this excerpt

def PrimaryCap(inputs, dim_capsule, n_channels, kernel_size, strides, padding):
"""
Apply Conv2D `n_channels` times and concatenate all capsules
:param inputs: 4D tensor, shape=[None, width, height, channels]
:param dim_capsule: the dim of the output vector of capsule
:param n_channels: the number of types of capsules
:return: output tensor, shape = [None, num_capsule, dim_capsule]
"""
output = layers.Conv2D(filters=dim_capsule*n_channels, kernel_size = kernel_size, strides=strides, padding=padding,
name='primarycap_conv2d')(inputs)
outputs = layers.Reshape(target_shape=[-1, dim_capsule], name='primarycap_reshape')(output)
return layers.Lambda(squash, name='primarycap_squash')(outputs)
```
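As a sanity check, the squash nonlinearity should preserve a vector's direction while pushing its norm strictly below 1 (long vectors toward 1, short ones toward 0). A NumPy sketch of the same formula, independent of the Keras code above:

```python
import numpy as np

def squash_np(v, eps=1e-7):
    # Same formula as squash() above: scale = |v|^2 / (1 + |v|^2) / |v|
    sq_norm = np.sum(np.square(v), axis=-1, keepdims=True)
    scale = sq_norm / (1 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * v

v = np.array([3.0, 4.0])            # norm 5
s = squash_np(v)
print(np.linalg.norm(s))            # 25/26 of a unit vector, i.e. about 0.96
```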
# Code for capsule_net.py
```
import numpy as np
from keras import backend as K
from keras import layers, models, optimizers
from keras.utils import to_categorical
import matplotlib.pyplot as plt
from PIL import Image
K.set_image_data_format("channels_last")
# Length and Mask, used below, are additional custom layers from capsule_layers.py
# (their definitions are omitted from the excerpt above).
def CapsNet(input_shape, n_class, routings):
"""
A capsule network on fashion MNIST
:param input_shape: data shape, 3d, [width, height, channels]
:param n_class: number of classes
:routings: number of routing iterations
:return: Two Keras Models, the first one used for training, and the second one for evaluation.
`eval_model` can also be used for training
"""
x = layers.Input(shape=input_shape)
# Layer 1: just a convolutional Conv2D layer
conv1 = layers.Conv2D(filters=256, kernel_size=9, strides=1, padding='valid', activation='relu', name='conv1')(x)
# Layer 2: Conv2D layer with `squash` activation, then reshape to [None, num_capsule, dim_capsule]
primarycaps = PrimaryCap(conv1, dim_capsule=8, n_channels=32, kernel_size = 9, strides=2, padding='valid')
# Layer 3: Capsule layer. Routing algorithm works here
digitcaps = CapsuleLayer(num_capsule=n_class, dim_capsule=16, routings=routings, name='digitcaps')(primarycaps)
# Layer 4: This is an auxiliary layer to replace each capsule with its length, just to match the true label's shape.
# If using TensorFlow, this will not be necessary. :)
out_caps = Length(name='capsnet')(digitcaps)
# Decoder network.
y = layers.Input(shape=(n_class,))
masked_by_y = Mask()([digitcaps, y]) # The true label is used to mask the output of capsule layer. (for training)
masked = Mask()(digitcaps) # Mask using the capsule with maximum length. (for prediction)
# Shared Decoder Model in training and prediction
decoder = models.Sequential(name='decoder')
decoder.add(layers.Dense(512, activation='relu', input_dim=16*n_class))
decoder.add(layers.Dense(1024, activation='relu'))
decoder.add(layers.Dense(np.prod(input_shape), activation='sigmoid'))
decoder.add(layers.Reshape(target_shape=input_shape, name='out_recon'))
# Models for training and evaluation (prediction)
train_model = models.Model([x,y], [out_caps, decoder(masked_by_y)])
eval_model = models.Model(x, [out_caps, decoder(masked)])
# manipulate model
noise = layers.Input(shape=(n_class, 16))
noised_digitcaps = layers.Add()([digitcaps, noise])
masked_noised_y = Mask()([noised_digitcaps, y])
manipulate_model = models.Model([x, y, noise], decoder(masked_noised_y))
return train_model, eval_model, manipulate_model
def load_fashion_mnist():
# the data, shuffled and split between train and test sets
from keras.datasets import fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0
y_train = to_categorical(y_train.astype('float32'))
y_test = to_categorical(y_test.astype('float32'))
return (x_train, y_train), (x_test, y_test)
import os
import argparse
from keras.preprocessing.image import ImageDataGenerator
from keras import callbacks
# setting the hyper parameters
parser = argparse.ArgumentParser(description="Capsule network on Fashion MNIST")
parser.add_argument('--epochs', default=50, type=int)
parser.add_argument('--batch_size', default=100, type=int)
parser.add_argument('--lr', default=0.001, type=float, help="Initial learning rate")
parser.add_argument('--lr_decay', default=0.9, type=float, help="The value multiplied by lr at each epoch. Set a larger value for larger epochs")
parser.add_argument('--lam_recon', default=0.392, type=float, help="The cofficient for the loss of decoder")
parser.add_argument('-r', '--routings', default=3, type=int, help="Number of iterations used in routing algorithm. Should > 0")
parser.add_argument('--shift_fraction', default=0.1, type=float, help="Fraction of pixels to shift at most in each direction.")
parser.add_argument('--debug', action='store_true', help="Save weights by TensorBoard")
parser.add_argument('--save_dir', default='./result')
parser.add_argument('-t', '--testing', action='store_true', help="Test the trained model on testing dataset")
parser.add_argument('--digit', default=5, type=int, help="Digit to manipulate")
parser.add_argument('-w', '--weights', default=None, help="The path of the saved weights. Should be specified when testing.")
args = parser.parse_args(["--epochs", "2"])
print(args)
if not os.path.exists(args.save_dir):
os.makedirs(args.save_dir)
# load the data
(x_train, y_train), (x_test, y_test) = load_fashion_mnist()
# define the model
model, eval_model, manipulate_model = CapsNet(input_shape=x_train.shape[1:],
n_class=len(np.unique(np.argmax(y_train, 1))),
routings=args.routings)
model.summary()
if args.weights is not None: # init the model weights with provided one
model.load_weights(args.weights)
if not args.testing:
train(model=model, data=((x_train, y_train), (x_test, y_test)), args=args)
else:
if args.weights is None:
print("No weights provided. Will test using random initialized weights.")
manipulate_latent(manipulate_model, (x_test, y_test), args)
test(model=eval_model, data=(x_test, y_test), args=args)
```
# Link Prediction
Build a GNN to predict links in a citation graph of academic papers.
The citation graph we will use for training this GNN is the [CORA Dataset](https://relational.fit.cvut.cz/dataset/CORA) available from the `torch_geometric.datasets.Planetoid` package.
## Setup
The following two cells import Pytorch Geometric (PyG) and a couple of supporting Pytorch packages that are customized against the torch version.
```
import torch
torch.__version__
%%capture
!pip install -q torch-scatter -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html
!pip install -q torch-sparse -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html
!pip install -q torch-geometric
```
## Dataset
The CORA dataset consists of a single graph of 2,708 academic papers. Here is the output of printing `dataset[0]` from the [node classification example solution](02-node-classification.ipynb).
```
Data(x=[2708, 1433], edge_index=[2, 10556], y=[2708], train_mask=[2708], val_mask=[2708], test_mask=[2708])
```
The partitioning into different splits is achieved using the different masks. For edge prediction, however, we want to look at edges, so we will apply a transform to the Planetoid dataset that splits it randomly by edge.
In addition, we will add negative examples, i.e. edges that do not occur in the graph.
* Compose a transform consisting of `torch_geometric.transforms.NormalizeFeatures` and `torch_geometric.transforms.RandomLinkSplit`.
* The `NormalizeFeatures` transform normalizes the node feature vector elements so they add up to 1. This is to normalize the vectors so their dot product is the same as cosine similarity, a value between 0 (negative pair label) and 1 (positive pair label).
* The `RandomLinkSplit` splits the graph into subgraphs by edge. Split the graph such that 85% edges are in the training split, 5% edges are in the validation split and 10% edges in the test split.
* Add negative training samples, i.e. edges that don't exist in the training subgraph and with an `edge_label` of 0. This is to allow the GNN to see negative examples during training. Add an equal number of negative samples as positive examples. You can do so by setting the `add_negative_train_samples` parameter of `RandomLinkSplit` to `True`.
* Set this transform to the `Planetoid` call to download the CORA dataset using the `transform` parameter, this will automatically create 3 edge-oriented data splits, with additional attributes in their corresponding `Data` objects as shown below.
```
(
Data(x=[2708, 1433], edge_index=[2, 8976], y=[2708], train_mask=[2708], val_mask=[2708], test_mask=[2708], edge_label=[8976], edge_label_index=[2, 8976]),
Data(x=[2708, 1433], edge_index=[2, 8976], y=[2708], train_mask=[2708], val_mask=[2708], test_mask=[2708], edge_label=[526], edge_label_index=[2, 526]),
Data(x=[2708, 1433], edge_index=[2, 9502], y=[2708], train_mask=[2708], val_mask=[2708], test_mask=[2708], edge_label=[1054], edge_label_index=[2, 1054])
)
```
Compute and verify the following on the downloaded dataset.
* Number of (node) features (should be 1,433)
* Number of target classes (should be 7)
* The contents of the first element of the dataset should consist of 3 `Data` elements as shown above.
```
import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid
# your code here
```
## DataLoader
The `Data` splits do not lend themselves to batch level training. For this exercise, we will work with each subgraph in its entirety.
Note that you could create `DataLoaders` from the splits by doing this:
`train_loader = DataLoader([train_dataset], shuffle=True, ...)`
but it will result in a single batch.
Separate the splits into a `train_dataset`, `val_dataset` and `test_dataset`.
```
from torch_geometric.loader import DataLoader
# your code here
```
## Model
Conceptually, we generate encodings for each node participating in an edge (positive or negative) and compute the similarity between all pairs of nodes. The network learns to push positive node pairs closer together and negative node pairs further apart using gradient descent.
From an implementation point of view, the node embeddings are learned on the entire subgraph in one shot, then the source and destination node vectors are separated out and the similarities between them computed.
Implement a network with the following layers.
1. _k_ layers of `GCNConv`, the first one with input dimension `input_dim` and output dimension `hidden_dim`, and the other _k-1_ layers with input and output dimension `hidden_dim`.
2. All `GCNConv` layers are followed by a `BatchNorm1d` layer.
3. Except for the last `GCNConv` + `BatchNorm1d` pair, all are followed by a `Dropout` layer and a `ReLU` activation layer.
4. The output of the last `GCNConv` + `BatchNorm1d` is separated into source and destination nodes by calling `torch.index_select` on the output and the first and second rows of the `edge_label_index`.
5. The similarity between all source and destination nodes is computed by computing the dot product of the source and destination nodes.
6. Because our vectors are pre-normalized, the dot product is the same as cosine similarity, and the range of values corresponds to the labels 0 and 1 for negative and positive pair respectively.
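The equivalence in the last step can be checked numerically: for vectors rescaled to unit norm, the plain dot product equals the cosine similarity of the originals. A small NumPy sketch with arbitrary example vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, 1.0, 2.0])

# Normalize, then take the dot product
a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
dot = float(a_n @ b_n)

# Cosine similarity of the original vectors
cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(dot, cos)  # both are about 0.889
```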
Verify that your network design is sound by sending the training dataset through a newly instantiated instance.

```
import torch.nn as nn
import torch.nn.functional as F
import torch_geometric.nn as pyg_nn
# your code here
```
## Training Loop
As in the previous exercises, organize your code into a `train_step`, `eval_step` and `train_loop`.
The target here is also binary, i.e. there may exist an edge between a pair of nodes or not, so once again we will use the ROC-AUC metric rather than accuracy to evaluate our model.
In this case, since the graphs cannot be batched (using a DataLoader will iterate over a single batch), we can compute the ROC-AUC directly in the `train_step` and `eval_step` functions rather than postponing the computation in the `train_loop` function. As before, you should use the [Scikit-Learn ROC-AUC metric](https://scikit-learn.org/stable/modules/model_evaluation.html#roc-metrics) to compute the AUC metric.
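A quick refresher on the metric, using illustrative labels and scores: ROC-AUC measures how well the scores rank positive examples above negative ones, where 0.5 is chance level and 1.0 is a perfect ranking.

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]            # e.g. edge absent / edge present
y_score = [0.1, 0.4, 0.35, 0.8]  # predicted similarities (illustrative values)
print(roc_auc_score(y_true, y_score))  # 0.75
```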
### train_step
1. In the `train_step` generate the predictions by running the `train_dataset` through the model, computing the loss, backpropagating the gradient of the loss and updating the model weights.
2. Compute the AUC score for the entire dataset using the Scikit-Learn function [roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score)
3. Report the loss and AUC score over the entire dataset.
### eval_step
1. Generate the predictions against the validation or test dataset using the current state of the model. As before, compute the loss and AUC score, but do not backpropagate.
### train_loop
1. Run the `train_step` and `eval_step` over the required number of epochs.
```
from sklearn.metrics import roc_auc_score
# your code here
```
## Train
Train the model using the hyperparameters listed below. Use the Adam optimizer. Plot the training plots.
```
# model parameters
INPUT_DIM = dataset.num_features
HIDDEN_DIM = 128
OUTPUT_DIM = dataset.num_classes
NUM_GCN_LAYERS = 3
DROPOUT_PCT = 0.5
# optimizer
LEARNING_RATE = 1e-2
NUM_EPOCHS = 100
import matplotlib.pyplot as plt
import numpy as np
# your code here
```
## Evaluation
Evaluate the model against the `test_dataset` and report the AUC score.
```
# your code here
```
```
checkpoint = "/home/pzhu/data/qa/squad2_model"
predict_file = "data/squad2/dev-v2.0.json"
device = "cuda:0"
from pytorch_transformers import XLNetForQuestionAnswering
model = XLNetForQuestionAnswering.from_pretrained(checkpoint)
model.to(device)
model.eval()
print("loaded")
from xlnet_qa.squad2_reader import SQuAD2Reader
reader = SQuAD2Reader(is_training=False)
examples, features, datasets = reader.squad_data(predict_file)
from tqdm import tqdm
import torch
from torch.utils.data import SequentialSampler, DataLoader
from xlnet_qa.utils_squad import RawResultExtended, write_predictions_extended
sampler = SequentialSampler(datasets)
dataloader = DataLoader(datasets, sampler=sampler, batch_size=1)
def to_list(tensor):
return tensor.detach().cpu().tolist()
data = tuple(t.to(device) for t in next(iter(dataloader)))
example = examples[data[3].item()]
feature = features[data[3].item()]
print(example.question_text)
print(example.doc_tokens)
print(example.orig_answer_text)
print(example.start_position, example.end_position)
outputs = model(input_ids = data[0],
attention_mask = data[1],
token_type_ids = data[2],
cls_index = data[4],
p_mask = data[5]
)
unique_id = int(feature.unique_id)
result = RawResultExtended(unique_id= unique_id,
start_top_log_probs = to_list(outputs[0][0]),
start_top_index = to_list(outputs[1][0]),
end_top_log_probs = to_list(outputs[2][0]),
end_top_index = to_list(outputs[3][0]),
cls_logits = to_list(outputs[4][0])
)
result
import collections
from xlnet_qa.utils_squad import get_final_text, _compute_softmax
def write_predictions_extended(example, feature, result, n_best_size,
max_answer_length, start_n_top, end_n_top, tokenizer):
""" XLNet write prediction logic (more complex than Bert's).
Write final predictions to the json file and log-odds of null if needed.
Requires utils_squad_evaluate.py
"""
_PrelimPrediction = collections.namedtuple( # pylint: disable=invalid-name
"PrelimPrediction",
["start_index", "end_index",
"start_log_prob", "end_log_prob"])
_NbestPrediction = collections.namedtuple( # pylint: disable=invalid-name
"NbestPrediction", ["text", "start_log_prob", "end_log_prob"])
prelim_predictions = []
# keep track of the minimum score of null start+end of position 0
score_null = 1000000 # large and positive
cur_null_score = result.cls_logits
# if we could have irrelevant answers, get the min score of irrelevant
score_null = min(score_null, cur_null_score)
for i in range(start_n_top):
for j in range(end_n_top):
start_log_prob = result.start_top_log_probs[i]
start_index = result.start_top_index[i]
j_index = i * end_n_top + j
end_log_prob = result.end_top_log_probs[j_index]
end_index = result.end_top_index[j_index]
# We could hypothetically create invalid predictions, e.g., predict
# that the start of the span is in the question. We throw out all
# invalid predictions.
if start_index >= feature.paragraph_len - 1:
continue
if end_index >= feature.paragraph_len - 1:
continue
if not feature.token_is_max_context.get(start_index, False):
continue
if end_index < start_index:
continue
length = end_index - start_index + 1
if length > max_answer_length:
continue
prelim_predictions.append(
_PrelimPrediction(
start_index=start_index,
end_index=end_index,
start_log_prob=start_log_prob,
end_log_prob=end_log_prob))
prelim_predictions = sorted(
prelim_predictions,
key=lambda x: (x.start_log_prob + x.end_log_prob),
reverse=True)
seen_predictions = {}
nbest = []
for pred in prelim_predictions:
if len(nbest) >= n_best_size:
break
# XLNet un-tokenizer
# Let's keep it simple for now and see if we need all this later.
#
# tok_start_to_orig_index = feature.tok_start_to_orig_index
# tok_end_to_orig_index = feature.tok_end_to_orig_index
# start_orig_pos = tok_start_to_orig_index[pred.start_index]
# end_orig_pos = tok_end_to_orig_index[pred.end_index]
# paragraph_text = example.paragraph_text
# final_text = paragraph_text[start_orig_pos: end_orig_pos + 1].strip()
# Previously used Bert untokenizer
tok_tokens = feature.tokens[pred.start_index:(pred.end_index + 1)]
orig_doc_start = feature.token_to_orig_map[pred.start_index]
orig_doc_end = feature.token_to_orig_map[pred.end_index]
orig_tokens = example.doc_tokens[orig_doc_start:(orig_doc_end + 1)]
tok_text = tokenizer.convert_tokens_to_string(tok_tokens)
# Clean whitespace
tok_text = tok_text.strip()
tok_text = " ".join(tok_text.split())
orig_text = " ".join(orig_tokens)
final_text = get_final_text(tok_text, orig_text, tokenizer.do_lower_case,
False)
if final_text in seen_predictions:
continue
seen_predictions[final_text] = True
nbest.append(
_NbestPrediction(
text=final_text,
start_log_prob=pred.start_log_prob,
end_log_prob=pred.end_log_prob))
# In very rare edge cases we could have no valid predictions. So we
# just create a nonce prediction in this case to avoid failure.
if not nbest:
nbest.append(
_NbestPrediction(text="", start_log_prob=-1e6,
end_log_prob=-1e6))
total_scores = []
best_non_null_entry = None
for entry in nbest:
total_scores.append(entry.start_log_prob + entry.end_log_prob)
if not best_non_null_entry:
best_non_null_entry = entry
probs = _compute_softmax(total_scores)
nbest_json = []
for (i, entry) in enumerate(nbest):
output = collections.OrderedDict()
output["text"] = entry.text
output["probability"] = probs[i]
output["start_log_prob"] = entry.start_log_prob
output["end_log_prob"] = entry.end_log_prob
nbest_json.append(output)
assert len(nbest_json) >= 1
assert best_non_null_entry is not None
score_diff = score_null
print("="*80)
print(score_diff)
print(best_non_null_entry.text)
print(nbest_json)
return best_non_null_entry.text, score_diff
write_predictions_extended(example, feature, result, 20, 30,
model.config.start_n_top, model.config.end_n_top,
reader.tokenizer)
```
```
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import Sequential, Model, Input
from tensorflow.keras.layers import LSTM, Embedding, Dense, TimeDistributed, Dropout, Bidirectional, Reshape, Concatenate, Activation
from tensorflow.keras.utils import plot_model
from transformers import TFAutoModel
from tensorflow.keras import backend as K
from focal_loss import sparse_categorical_focal_loss
from transformers import AutoModel
from tensorflow.keras.layers import concatenate
from keras_contrib.layers import CRF
# import tensorflow_hub as hub
# import tensorflow_text as text
import pythainlp
import spacy_thai
from nltk.tokenize import RegexpTokenizer
import re
import string
import os
from string import punctuation
def read_raw_text(filename):
with open(filename, 'r', encoding = 'utf-8') as file:
document = file.read()
return document
def read_ann_file(PATH, filename): #filename e.g. 01_nut.a/xxaa.ann
PATH = PATH
document = read_raw_text(PATH + filename[:-4] + '.txt')
df = pd.read_csv(PATH + filename, sep='^([^\s]*)\s', engine='python', header=None).drop(0, axis=1)
token_df = df[df[1].str.contains('T')]
list_tokens = []
seek = 0
for index, row in token_df.iterrows():
text = re.findall('\t.*', row[2])[0][1:]
entityLabel, start, end = re.findall('.*\t', row[2])[0][:-1].split(' ')
start, end = int(start), int(end)
if seek == start:
res = [document[start:end], start, end, entityLabel]
list_tokens.append(res)
else:
# print(seek, start)
res = [document[seek:start], seek, start, 'O']
list_tokens.append(res)
res = [document[start:end], start, end, entityLabel]
list_tokens.append(res)
seek = end
result_text = ''
for t, start, end, ent in list_tokens:
text = f'[{ent}]{t}[/{ent}]'
result_text += text
return result_text, list_tokens
def tokenize(text):
nlp = spacy_thai.load()
pattern = r'\[(.*?)\](.*?)\[\/(.*?)\]'
tokenizer = RegexpTokenizer(pattern)
text = re.sub(r'([ก-๏a-zA-Z\(\)\.\s0-9\-]*)(?=\[\w+\])', r'[O]\1[/O]', text)
text = re.sub(r'([ก-๏a-zA-Z\(\)\.\s0-9\-]+)$', r'[O]\1[/O]', text)
text = re.sub(r'\[O\](\s)*?\[\/O\]', '', text)
t = tokenizer.tokenize(text)
result = []
text_list_ = []
for i in t:
if i[0] == i[2]:
doc = pythainlp.syllable_tokenize(i[1])
token_texts = []
# doc = nlp('สวัสดีค้าบ ท่านผู้เจริญ')
for token in doc:
token_texts.append(token)
# if token.whitespace_: # filter out empty strings
# token_texts.append(token.whitespace_)
if i[0] == 'O' :
for r in range(len(token_texts)):
result.append((token_texts[r], i[0]))
# words.append(r)
else:
for r in range(len(token_texts)):
if r == 0:
result.append((token_texts[r], 'B-' + i[0]))
else:
result.append((token_texts[r], 'I-' + i[0]))
text_list_.append(result)
words = []
tags = []
original_text = []
poss = []
contain_digit = []
contain_punc = []
contain_vowel = []
thai_vowel = 'ะาิีุุึืโเแัำไใฤๅฦ'
def check_condition(condition):
if condition:
return 'True'
else:
return 'False'
for text in text_list_:
w = []
t = []
o = ''
p = []
digit = []
punc = []
vowel = []
for word in text:
w.append(word[0])
t.append(word[1])
# p.append(pythainlp.tag.pos_tag(word[0]))
o += word[0]
digit.append(check_condition(any(char.isdigit() for char in word[0])))
punc.append(check_condition(any(p in word[0] for p in punctuation)))
vowel.append(check_condition(any(p in word[0] for p in thai_vowel)))
words.append(w)
tags.append(t)
contain_digit.append(digit)
contain_punc.append(punc)
contain_vowel.append(vowel)
# poss.append(p)
original_text.append(o)
# dff = pd.DataFrame({'original_text' : original_text,
# 'words' : words,
# # 'pos' : poss,
# 'contain_digit' : contain_digit,
# 'contain_punc' : contain_punc,
# 'contain_vowel' : contain_vowel,
# 'tags' : tags})
return words, tags, original_text, contain_digit, contain_punc, contain_vowel
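# Aside (illustrative only, not part of the original pipeline): the B-/I-
# labelling rule applied in tokenize() above -- the first sub-token of an
# entity span gets a 'B-' prefix and the remaining sub-tokens get 'I-'.
def bio_tags(sub_tokens, entity_label):
    if entity_label == 'O':
        return [(s, 'O') for s in sub_tokens]
    return [(s, ('B-' if i == 0 else 'I-') + entity_label) for i, s in enumerate(sub_tokens)]
# e.g. bio_tags(['กรุง', 'เทพ'], 'LOC') -> [('กรุง', 'B-LOC'), ('เทพ', 'I-LOC')]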
def read_all_file(PATH):
assignee_folder_list = os.listdir(PATH)[3:3+15]
result = {'original_text' : [],
'words' : [],
'tags' : [],
'contain_digit' : [],
'contain_punc' : [],
'contain_vowel' : []}
for assignee_folder in assignee_folder_list:
text_folder_list = sorted(os.listdir(PATH + assignee_folder))
text_folder_list = [i for i in text_folder_list if i[-3:] in ['ann', 'txt']]
text_folder_list = set(map(lambda x : x[:-4], text_folder_list))
for text_folder in text_folder_list:
filename = assignee_folder + '/' + text_folder + '.ann'
try:
text, list_tokens = read_ann_file(PATH, filename)
words, tags, original_text, contain_digit, contain_punc, contain_vowel = tokenize(text)
result['original_text'].append(original_text)
result['words'].append(words)
result['tags'].append(tags)
result['contain_digit'].append(contain_digit)
result['contain_punc'].append(contain_punc)
result['contain_vowel'].append(contain_vowel)
except:
print(filename)
df = pd.DataFrame(result)
return df
def return_train_test(df):
df['pos'] = df['words'].apply(lambda x : [i[1] for i in pythainlp.tag.pos_tag(x)])
max_len = max(df['words'].apply(lambda x: len(x)))
train, test = train_test_split(df, random_state = 42, test_size = 0.2)
word_set = sorted(set([i for sentence in train['words'] for i in sentence]))
pos_set = sorted(set([i for pos in train['pos'] for i in pos]))
tag_set = sorted(set([i for tag in train['tags'] for i in tag]))
word2idx = dict([(v, k) for k, v in enumerate(word_set)])
pos2idx = dict([(v, k) for k, v in enumerate(pos_set)])
tag2idx = dict([(v, k) for k, v in enumerate(tag_set)])
digit2idx = {'True' : 1, 'False' : 0, '<PAD>' : 2}
punc2idx = {'True' : 1, 'False' : 0, '<PAD>' : 2}
vowel2idx = {'True' : 1, 'False' : 0, '<PAD>' : 2}
word2idx['<UNK>'] = len(word2idx)
word2idx['<PAD>'] = len(word2idx)
pos2idx['<UNK>'] = len(pos2idx)
pos2idx['<PAD>'] = len(pos2idx)
tag2idx['<PAD>'] = len(tag2idx)
train['words_idx'] = train['words'].apply(lambda x: [word2idx[i] for i in x])
train['pos_idx'] = train['pos'].apply(lambda x: [pos2idx[i] for i in x])
train['tags_idx'] = train['tags'].apply(lambda x: [tag2idx[i] for i in x])
train['contain_digit_idx'] = train['contain_digit'].apply(lambda x: [digit2idx[i] for i in x])
train['contain_punc_idx'] = train['contain_punc'].apply(lambda x: [punc2idx[i] for i in x])
train['contain_vowel_idx'] = train['contain_vowel'].apply(lambda x: [vowel2idx[i] for i in x])
test_sent = []
test_pos = []
test_tag = []
for sent in test['words']:
t = []
for i in sent:
try:
t.append(word2idx[i])
except:
t.append(word2idx['<UNK>'])
test_sent.append(t)
for sent in test['pos']:
t = []
for i in sent:
try:
t.append(pos2idx[i])
except:
t.append(pos2idx['<UNK>'])
test_pos.append(t)
for sent in test['tags']:
t = []
for i in sent:
t.append(tag2idx[i])
test_tag.append(t)
test['words_idx'] = test_sent
test['pos_idx'] = test_pos
test['tags_idx'] = test_tag
test['contain_digit_idx'] = test['contain_digit'].apply(lambda x: [digit2idx[i] for i in x])
test['contain_punc_idx'] = test['contain_punc'].apply(lambda x: [punc2idx[i] for i in x])
test['contain_vowel_idx'] = test['contain_vowel'].apply(lambda x: [vowel2idx[i] for i in x])
mapping = {'tok2idx' : word2idx,
'pos2idx' : pos2idx,
'tag2idx' : tag2idx}
return train, test, mapping, max_len
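# Aside (illustrative only): unseen test tokens fall back to the '<UNK>'
# index, mirroring the try/except loops in return_train_test() above.
def encode_tokens(tokens, word2idx):
    return [word2idx.get(t, word2idx['<UNK>']) for t in tokens]
# e.g. encode_tokens(['a', 'zzz'], {'a': 0, '<UNK>': 1}) -> [0, 1]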
# df_q = read_all_file(PATH = 'data/csd_rel_data_annotated/')
# df_p = read_all_file(PATH = 'data/csd_rel_data2_annotated/')
# for i in df_q.columns:
# df_q[i] = df_q[i].apply(lambda x: x[0])
# df_p[i] = df_p[i].apply(lambda x: x[0])
# df_p['pos'] = df_p['words'].apply(lambda x : [i[1] for i in pythainlp.tag.pos_tag(x)])
# df_q['pos'] = df_q['words'].apply(lambda x : [i[1] for i in pythainlp.tag.pos_tag(x)])
df_Coraline = pd.read_csv('[NER]Coraline_annotation_prepared_df.csv').drop(columns = 'Unnamed: 0')
for i in ['words', 'contain_digit', 'contain_punc', 'contain_vowel', 'tags', 'pos']:
df_Coraline[i] = df_Coraline[i].str.strip('[]').str.split(', ').apply(lambda x: [i[1:-1] for i in x])
# df = pd.concat([df_Coraline, df], ignore_index = True)
# df.head()
# pd.concat([df_q, df_p, df_Coraline], ignore_index = True)
df = pd.concat([df_q, df_p, df_Coraline], ignore_index = True)
df
train, test, mapping, max_len = return_train_test(df)
train['padded_words_idx'] = list(pad_sequences(train['words_idx'], maxlen = max_len, padding = 'post', value = mapping['tok2idx']['<PAD>']))
train['padded_pos_idx'] = list(pad_sequences(train['pos_idx'], maxlen = max_len, padding = 'post', value = mapping['pos2idx']['<PAD>']))
train['padded_tags_idx'] = list(pad_sequences(train['tags_idx'], maxlen = max_len, padding = 'post', value = mapping['tag2idx']['<PAD>']))
train['padded_contain_digit_idx'] = list(pad_sequences(train['contain_digit_idx'], maxlen = max_len, padding = 'post', value = 2))
train['padded_contain_punc_idx'] = list(pad_sequences(train['contain_punc_idx'], maxlen = max_len, padding = 'post', value = 2))
train['padded_contain_vowel_idx'] = list(pad_sequences(train['contain_vowel_idx'], maxlen = max_len, padding = 'post', value = 2))
test['padded_words_idx'] = list(pad_sequences(test['words_idx'], maxlen = max_len, padding = 'post', value = mapping['tok2idx']['<PAD>']))
test['padded_pos_idx'] = list(pad_sequences(test['pos_idx'], maxlen = max_len, padding = 'post', value = mapping['pos2idx']['<PAD>']))
test['padded_tags_idx'] = list(pad_sequences(test['tags_idx'], maxlen = max_len, padding = 'post', value = mapping['tag2idx']['<PAD>']))
test['padded_contain_digit_idx'] = list(pad_sequences(test['contain_digit_idx'], maxlen = max_len, padding = 'post', value = 2))
test['padded_contain_punc_idx'] = list(pad_sequences(test['contain_punc_idx'], maxlen = max_len, padding = 'post', value = 2))
test['padded_contain_vowel_idx'] = list(pad_sequences(test['contain_vowel_idx'], maxlen = max_len, padding = 'post', value = 2))
train.iloc[0]['padded_words_idx'][-1]
# !pip install sklearn_crfsuite
from tensorflow.keras import backend as K
from focal_loss import sparse_categorical_focal_loss
from transformers import AutoModel
from tensorflow.keras.layers import Concatenate
from keras_contrib.layers import CRF
def focal_loss(y_true, y_pred):
# Loss for imbalanced dataset --> weight more for minor class, weight less for major class
class_weight = [10,10,10,15,15,
10,10,10,10,15,
10,10,10,15,15,
10,10,10,10,10,
1, 0.01
]
loss = sparse_categorical_focal_loss(y_true, y_pred, gamma=2, class_weight = class_weight)
return loss
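# Aside (a simplified numpy illustration, not the library implementation):
# the focal factor (1 - p)**gamma shrinks the loss of confident predictions,
# so hard, uncertain examples dominate the gradient.
import numpy as np
def focal_term(p, gamma=2.0):
    return -((1.0 - p) ** gamma) * np.log(p)
# a confident prediction (p=0.9) contributes far less than an uncertain one (p=0.5)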
def train_model(X, y, model):
loss = list()
# Add class weight
# for i in range(150):
# fit model for one epoch on this sequence
hist = model.fit(X, y, batch_size=64, verbose=1, epochs=60, validation_split=0.2 )
loss.append(hist.history['loss'][0])
return model, loss
label = mapping['tag2idx']
input_dim_long = 6032 + 1
input_len_long = len(train['padded_words_idx'].iloc[0])
n_tags = len(label)
output_dim = 8
model_words = Input(shape = (input_len_long,))
emb_words = Embedding(input_dim=input_dim_long, output_dim=output_dim)(model_words)
# output_words = Reshape(target_shape=(output_dim, input_len_long))(emb_words)
model_pos = Input(shape = (input_len_long,))
emb_pos = Embedding(input_dim=input_dim_long, output_dim=output_dim)(model_pos)
# output_pos = Reshape(target_shape=(output_dim, input_len_long))(emb_pos)
model_digit = Input(shape = (input_len_long,))
emb_digit = Embedding(input_dim=input_dim_long, output_dim=output_dim)(model_digit)
model_punc = Input(shape = (input_len_long,))
emb_punc = Embedding(input_dim=input_dim_long, output_dim=output_dim)(model_punc)
model_vowel = Input(shape = (input_len_long,))
emb_vowel = Embedding(input_dim=input_dim_long, output_dim=output_dim)(model_vowel)
input_model = [model_words, model_pos, model_digit, model_punc, model_vowel]
output_embeddings = [emb_words, emb_pos, emb_digit, emb_punc, emb_vowel]
output_model = Concatenate()(output_embeddings)
output_model = Bidirectional(LSTM(units=output_dim, return_sequences=True, dropout=0.5, recurrent_dropout=0.5))(output_model)
output_model = TimeDistributed(Dense(n_tags, activation="softmax"))(output_model)
model = Model(inputs = input_model, outputs = output_model)
model.compile(loss= [focal_loss],
optimizer=tf.keras.optimizers.Adam(learning_rate=0.01, epsilon=1e-08),
metrics=['accuracy'])
model.summary()
input_dim = 6032 + 1
output_dim = 8
input_length = max_len
n_tags = len(label)
X_tr_words = []
for i in train['padded_words_idx']:
X_tr_words.append(i)
X_tr_words = np.array(X_tr_words)
X_tr_pos = []
for i in train['padded_pos_idx']:
X_tr_pos.append(i)
X_tr_pos = np.array(X_tr_pos)
X_tr_digit = []
for i in train['padded_contain_digit_idx']:
X_tr_digit.append(i)
X_tr_digit = np.array(X_tr_digit)
X_tr_punc = []
for i in train['padded_contain_punc_idx']:
X_tr_punc.append(i)
X_tr_punc = np.array(X_tr_punc)
X_tr_vowel = []
for i in train['padded_contain_vowel_idx']:
X_tr_vowel.append(i)
X_tr_vowel = np.array(X_tr_vowel)
y_train = [i for i in train['padded_tags_idx']]
y_train = np.array(y_train)
model = train_model([X_tr_words, X_tr_pos, X_tr_digit, X_tr_punc, X_tr_vowel], y_train, model)
X_tr_words
X_te_words = []
for i in test['padded_words_idx']:
X_te_words.append(i)
X_te_words = np.array(X_te_words)
X_te_pos = []
for i in test['padded_pos_idx']:
X_te_pos.append(i)
X_te_pos = np.array(X_te_pos)
X_te_digit = []
for i in test['padded_contain_digit_idx']:
X_te_digit.append(i)
X_te_digit = np.array(X_te_digit)
X_te_punc = []
for i in test['padded_contain_punc_idx']:
X_te_punc.append(i)
X_te_punc = np.array(X_te_punc)
X_te_vowel = []
for i in test['padded_contain_vowel_idx']:
X_te_vowel.append(i)
X_te_vowel = np.array(X_te_vowel)
y_pred = model[0].predict([X_te_words, X_te_pos, X_te_digit, X_te_punc, X_te_vowel])
y_pred = np.argmax(y_pred, axis = 2)
y_test = []
for i in test['padded_tags_idx']:
y_test.append(i)
y_test = np.array(y_test)
from sklearn.metrics import classification_report
print(classification_report(y_test.reshape(y_pred.shape[0]*y_pred.shape[1]),
y_pred.reshape(y_pred.shape[0]*y_pred.shape[1]),
target_names = label.keys())
)
model[0].save('NER_model_v2_26_1_2022.h5')
# The two lines below are broken (empty f-string field, undefined idx2word);
# the per-key loop that follows saves each mapping instead.
# with open(f'mapping/{}.pickle', 'wb') as dict_:
#     pickle.dump(idx2word, dict_)
mapping['max_len'] = max_len
for i in mapping.keys():
with open(f'mapping/NER/{i}.pickle', 'wb') as dict_:
pickle.dump(mapping[i], dict_)
```
# Matrix and comparative statistics review
The following notebook is a review of matrices and comparative statistics, with examples in Python.
The equations and examples are from the following book, which I highly recommend for brushing up on the mathematics commonly used in economics coursework:
- Dowling, E. T. (2012). Introduction to mathematical economics. McGraw-Hill.
- [Amazon link](https://www.amazon.com/Schaums-Introduction-Mathematical-Economics-Outlines/dp/0071762515/ref=sr_1_7?dchild=1&keywords=mathematics+economics&qid=1593200726&sr=8-7)
# Table of contents
- [1. Matrix basics](#1.-Matrix-basics)
- [2. Special determinants](#2.-Special-determinants)
- [3. Comparative statistics](#3.-Comparative-statistics)
# 1. Matrix basics
```
import numpy as np
np.random.seed(1)
```
## 1.1 Scalar multiplication
```
A = np.random.randint(20, size=(2,2))
A
A*3
```
## 1.2 Matrix addition
```
A = np.random.randint(20, size=(2,2))
B = np.random.randint(20, size=(2,2))
A+B
```
## 1.3 Matrix multiplication
```
A = np.random.randint(20, size=(2,2))
B = np.random.randint(20, size=(2,2))
A@B
```
## 1.4 Identity and zero matrices
```
np.eye(3)
np.zeros(shape=(2,2))
```
## 1.5 Matrix inversion
```
A = np.random.randint(20, size=(2,2))
A
# Determinant
round(np.linalg.det(A))
#Invert matrix
np.linalg.inv(A).round(3)
```
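As a quick sanity check (a small aside, using a fixed invertible matrix rather than the random one above), multiplying a matrix by its inverse recovers the identity:

```
import numpy as np
A = np.array([[2., 1.], [1., 3.]])
# A @ inv(A) should recover the identity, up to floating-point error
np.round(A @ np.linalg.inv(A), 3)
```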
# 2. Special determinants
## 2.1 Jacobian
```
import sympy as sy
x1, x2 = sy.symbols('x1 x2', integer=True)
y1 = 5*x1+3*x2
y2 = 25*x1**2+30*x1*x2+9*x2**2
y1
y2
independent_variables = [x1, x2]
functions = [y1, y2]
Jacobian = sy.Matrix(np.zeros(shape=(len(functions), len(independent_variables))))
count = 0
for funcs in functions:
for iv in independent_variables:
Jacobian[count] = sy.diff(funcs, iv)
count+=1
Jacobian
if Jacobian.det()==0:
print("Functional dependence")
```
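For reference, sympy's built-in `Matrix.jacobian` computes the same matrix as the manual loop above (a minimal sketch with the same functions):

```
import sympy as sy
x1, x2 = sy.symbols('x1 x2', integer=True)
y1 = 5*x1 + 3*x2
y2 = 25*x1**2 + 30*x1*x2 + 9*x2**2
J = sy.Matrix([y1, y2]).jacobian([x1, x2])
# det is 0 because y2 = y1**2, i.e. the functions are dependent
J.det()
```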
## 2.2 Hessian
### 2.2.1 Sympy example: Definition
```
from sympy import Function, hessian
from sympy.abc import x, y
f = Function('f')(x, y)
hessian(f, (x, y))
```
### 2.2.2 Sympy example: From 2.1
```
count = 0
Hessian = Jacobian.copy()
for _ in range(0,2):
for iv in independent_variables: # differentiate each Jacobian entry again w.r.t. each variable
Hessian[count] = sy.diff(Hessian[count],iv)
count+=1
Hessian
H1 = Hessian[0]
H2 = Hessian.det()
if H1>0 and H2>0:
print('Positive definite')
print('Minimum point')
if H1<0 and H2>0:
print('Negative definite')
print('Max point')
```
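An equivalent numeric test (an aside, assuming a symmetric Hessian): a matrix is positive definite exactly when all of its eigenvalues are positive, and negative definite when all are negative.

```
import numpy as np
H = np.array([[2., 1.], [1., 3.]])  # a symmetric example Hessian
eigs = np.linalg.eigvalsh(H)
# all eigenvalues positive -> positive definite -> minimum point
np.all(eigs > 0)
```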
## 2.3 Discriminant
- Tests for positive or negative definiteness of quadratic equations
```
x, y = sy.symbols('x y', integer=True)
z = 2*x**2 + 5*x*y+8*y**2
z
Discrim = sy.Matrix([[2,(5/2)], [(5/2),8]])
Discrim
d1 = Discrim[0]
d2 = Discrim.det() # determinant
if d1>0 and d2>0:
print("Positive definite")
elif d1<0 and d2>0:
print('Negative definite')
```
## 2.4 Higher order hessian
```
x1, x2, x3 = sy.symbols('x1 x2 x3', integer=True)
z = -5*x1**2+10*x1+x1*x3-2*x2**2+4*x2+2*x2*x3-4*x3**2
z
focs = []
for idx,iv in enumerate([x1,x2,x3]):
foc = sy.diff(z,iv)
focs.append(foc)
for idx, foc in enumerate(focs):
print("FOC %s:" %(idx+1), foc)
A,b = sy.linear_eq_to_matrix(focs, [x1, x2, x3])
Hessian = A
Hessian
H1 = Hessian[0]
H2 = Hessian[0:2,0:2].det()
H3 = Hessian.det()
if H1>0 and H2>0 and H3>0:
print('Positive definite')
print('Minimum point')
elif H1<0 and H2>0 and H3<0:
print('Negative definite')
print('Maximum point')
```
## 2.5 Bordered Hessian
```
from sympy import Function, hessian, pprint
from sympy.abc import x, y
f = Function('f')(x, y)
constraint = Function('g')(x, y)
hessian(f, (x, y), [constraint])
```
## 2.6 Eigenvalues & Eigenvectors
```
def eigen(matrix):
trace = np.trace(matrix)
det = round(np.linalg.det(matrix),0)
eig_values = np.round((np.sort((trace+np.array([+1,-1])*np.sqrt(trace**2-(4*det)))/2)),1)
solu1, solu2 = eig_values
print("Original matrix: \n",matrix)
print("Eigen-values:\n {}".format(eig_values))
#Classification of matrix
if solu1>0 and solu2>0:
print('Pos definite')
if solu1<0 and solu2<0:
print('Neg definite')
if (solu1==0 or solu2==0) and (solu1>=0 and solu2>=0):
print('Pos semi-def')
if (solu1==0 or solu2==0) and (solu1<=0 and solu2<=0):
print('Neg semi-def')
if (solu1<0 and solu2>0) or (solu1>0 and solu2<0):
print('Indefinite')
A = np.random.randint(20, size=(2,2))
eigen(A)
```
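As a cross-check (an aside), the closed-form 2x2 eigenvalue formula used in `eigen()` agrees with `np.linalg.eigvals`:

```
import numpy as np
A = np.array([[4., 2.], [1., 3.]])
trace, det = np.trace(A), np.linalg.det(A)
closed_form = np.sort((trace + np.array([-1., 1.]) * np.sqrt(trace**2 - 4*det)) / 2)
# for this matrix the eigenvalues are 2 and 5
closed_form
```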
# 3. Comparative statistics
## 3.1 One endogenous variable
$$
Q_d = m-nP+kY\\
Q_s = a+bP
$$
### 3.1.1 Explicit function
$$P^* = \frac{m-a+kY}{b+n}$$
### 3.1.2 Implicit function
$$\frac{dP^*}{dY}= - \frac{F_Y}{F_P}$$
```
from sympy.abc import x,n,p,k,y,a,b,m
f = m-n*p+k*y-a-b*p
f
-sy.diff(f,y)/sy.diff(f,p)
```
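As a cross-check (a small sketch using the symbols above), the implicit-function result equals the derivative of the explicit solution $P^*$:

```
import sympy as sy
from sympy.abc import n, p, k, y, a, b, m
P_star = (m - a + k*y) / (b + n)           # explicit equilibrium price
F = m - n*p + k*y - a - b*p                # implicit form F = 0
implicit = -sy.diff(F, y) / sy.diff(F, p)  # dP*/dY by the implicit function rule
sy.simplify(sy.diff(P_star, y) - implicit)
```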
## 3.2 N-endogenous variables
- `Comparative statistics:` requires a unique equilibrium condition for each endogenous variable
- Measuring the effect of an exogenous variable on the endogenous variables involves taking the total derivative of each equilibrium condition
 - w.r.t. the particular exogenous variable, and solving for each of the partial derivatives
$$
F^1(y_1, y_2; x_1, x_2) = 0 \\
F^2(y_1, y_2; x_1, x_2) = 0
$$
#### Note:
- #### Exogenous variables: $x_1$ and $x_2$
- #### Endogenous variables: $y_1$ and $y_2$
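The procedure can be sketched in sympy with a small linear system (my own illustrative example, not from the book): two endogenous variables $y_1, y_2$ and two exogenous variables $x_1, x_2$.

```
import sympy as sy
y1, y2, x1, x2 = sy.symbols('y1 y2 x1 x2')
# F1 = F2 = 0 are the equilibrium conditions
F1 = y1 + 2*y2 - x1
F2 = 3*y1 + y2 - x2
sol = sy.solve([F1, F2], [y1, y2])
# comparative-static effects of the exogenous x1 on each endogenous variable
dy1_dx1 = sy.diff(sol[y1], x1)
dy2_dx1 = sy.diff(sol[y2], x1)
```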
## 3.3 Comparative statistics for optimization problems
- Apply comparative statistics to the first order conditions to determine initial optimal values
```
from sympy.abc import r, K, w, L, P, Q
Q = Function('Q')(K, L)
π = p*Q-r*K-w*L
π
```
### 3.3.1 F.O.C
```
focs = []
for idx,iv in enumerate([K,L]):
foc = sy.diff(π,iv)
focs.append(foc)
focs[0]
```
### 3.3.2 Jacobian
- For optimization of a system, the determinant of the Jacobian must be > 0
```
Jacobian = sy.Matrix([[π.diff(K,K), π.diff(K,L)],[π.diff(L,K), π.diff(L,L)]])
Jacobian
B = []
for foc in focs:
B.append(foc.diff(r))
B
```
### 3.3.3 Find derivatives
```
J = Jacobian.det()
J1 = Jacobian.copy()
J1[0] =1
J1[2] =0
J1
J2 = Jacobian.copy()
J2[1] =1
J2[3] =0
J2
```
#### 3.3.3.1 Find $\frac{\partial \bar{K}}{\partial r}$
```
J1.det()/J
```
#### 3.3.3.2 Find $\frac{\partial \bar{L}}{\partial r}$
```
J2.det()/J
```
## 3.4 Comparative statistics in constrained optimization
- Optimize comparative statistics with constraints
```
from sympy.abc import r, K, w, L, P, Q, B
lamda = sy.symbols('lamda')
Q = Function('Q')(K, L)
π = Q+lamda*(B-r*K-w*L)
π
```
### 3.4.1 F.O.C
```
focs = []
for idx,iv in enumerate([K,L,lamda]):
foc = sy.diff(π,iv)
focs.append(foc)
focs[0]
focs[1]
focs[2]
```
### 3.4.2 Jacobian
```
independent_variables = [K,L,lamda]
functions = focs
Jacobian = sy.Matrix(np.zeros(shape=(len(functions), len(independent_variables))))
count = 0
for funcs in functions:
for iv in independent_variables:
Jacobian[count] = sy.diff(funcs, iv)
count+=1
Jacobian
J_deter = Jacobian.det()
```
### 3.4.3 Find derivatives
```
def deriv_convert(matrix, col=0):
deriv_col = iter([0, 0, -1])
derivs = matrix.copy()
for i in range(1,len(matrix)+1):
if i%3==col:
derivs[i-1] = next(deriv_col)
return derivs
```
#### 3.4.3.1 Find $\frac{\partial \bar{K}}{\partial B}$
```
k_b = deriv_convert(Jacobian, col=1)
k_b
k_b.det()/J_deter
```
#### 3.4.3.2 Find $\frac{\partial \bar{L}}{\partial B}$
```
L_b = deriv_convert(Jacobian, col=2)
L_b
L_b.det()/J_deter
```
#### 3.4.3.3 Find $\frac{\partial \bar{\lambda}}{\partial B}$
```
lamb_b = deriv_convert(Jacobian, col=0)
lamb_b
lamb_b.det()/J_deter
```
## 3.5 Envelope theorem
- `Envelope theorem:` Measures the effect of a change in exogenous variables on the optimal value of the objective function
- This can be achieved by simply taking the derivative of the Lagrangian function w.r.t the desired exogenous variable and evaluating the derivative at the values of the optimal solution
```
B, x, px, y, py, lamb= sy.symbols('B x px y py lamda', integer=True)
u = Function('u')(x, y)
constraint = lamb*(B-px*x-py*y)
U = u+constraint
print('Budget constraint:')
U
focs = []
for idx,iv in enumerate([px, py, B]):
foc = sy.diff(U,iv)
focs.append(foc)
```
- $\lambda$: Marginal utility of money
- The extra utility derived from a change in income
- The first two first-order conditions (w.r.t. the prices) are negative:
 - $\uparrow p \rightarrow$ negative impact on the quantity of the good consumed
```
focs[0]
focs[1]
focs[2]
```
## 3.6 Concave programming
- Optimize comparative statistics with inequality constraints
- Assume that functions are concave
```
B, x, px, y, py, lamb= sy.symbols('B x px y py lamda', integer=True)
u = Function('u')(x, y)
constraint = lamb*(B-px*x-py*y)
U = u+constraint
print('Budget constraint:')
U
#1.A
sy.diff(U, x)
# 1.B
sy.diff(U, y)
#2.A
sy.diff(U,lamb)
```
# HiddenLayer Graph Demo - TensorFlow
```
import os
import tensorflow as tf
import tensorflow.contrib.slim.nets as nets
import hiddenlayer as hl
import hiddenlayer.transforms as ht
# Hide GPUs. Not needed for this demo.
os.environ["CUDA_VISIBLE_DEVICES"] = ""
```
## VGG 16
```
with tf.Session() as sess:
with tf.Graph().as_default() as tf_graph:
# Setup input placeholder
inputs = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
# Build model
predictions, _ = nets.vgg.vgg_16(inputs)
# Build HiddenLayer graph
hl_graph = hl.build_graph(tf_graph)
# Display graph
# Jupyter Notebook renders it automatically
hl_graph
```
# Alexnet v2
```
with tf.Session() as sess:
with tf.Graph().as_default() as tf_graph:
# Setup input placeholder
inputs = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
# Build model
predictions, _ = nets.alexnet.alexnet_v2(inputs)
# Build layout
hl_graph = hl.build_graph(tf_graph)
# Use a different color theme
hl_graph.theme = hl.graph.THEMES["blue"].copy() # Two options: basic and blue
# Display
hl_graph
```
# Inception v1
```
with tf.Session() as sess:
with tf.Graph().as_default() as tf_graph:
# Setup input placeholder
inputs = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
# Build model
predictions, _ = nets.inception.inception_v1(inputs)
# Build layout
hl_graph = hl.build_graph(tf_graph)
# Display
hl_graph
```
## Transforms and Graph Expressions
A Graph Expression is like a regular expression for graphs. It simplifies searching for nodes that fit a particular pattern. For example, the graph expression `Conv > Relu` will find Conv layers that are followed by RELU layers, and the expression `Conv | MaxPool` will match any Conv and MaxPool layers that are in parallel branches (i.e. have the same parent node). See examples of more complex graph expressions below.
Once the graph expression finds the nodes, we use Transforms to modify them. For example, if we want to delete all nodes of type `Const`, we'll use the transform `Prune("Const")`. The graph expression here is simply `Const`, which matches any node with an operation of type Const, and the `Prune()` transform deletes the matched nodes.
See more examples below, and also check `SIMPLICITY_TRANSFORMS` in `transforms.py`.
# Inception v1 with Simplified Inception Modules
```
# Define custom transforms to replace the default ones
transforms = [
# Fold inception blocks into one node
ht.Fold("""
( (MaxPool > Conv > Relu) |
(Conv > Relu > Conv > Relu) |
(Conv > Relu > Conv > Relu) |
(Conv > Relu)
) > Concat
""", "Inception", "Inception Module"),
# Fold Conv and Relu together if they come together
ht.Fold("Conv > Relu", "ConvRelu"),
# Fold repeated nodes
ht.FoldDuplicates(),
]
with tf.Session() as sess:
with tf.Graph().as_default() as tf_graph:
# Setup input placeholder
inputs = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
# Build model
predictions, _ = nets.inception.inception_v1(inputs)
# Build layout
hl_graph = hl.build_graph(tf_graph, transforms=transforms)
# Display
hl_graph.theme = hl.graph.THEMES["blue"].copy()
hl_graph
```
## ResNet v1 50
```
# Custom transforms to group nodes of residual and bottleneck blocks
transforms = [
# Fold Pad into the Conv that follows it
ht.Fold("Pad > Conv", "__last__"),
# Fold Conv/Relu
ht.Fold("Conv > Relu", "ConvRelu"),
# Fold bottleneck blocks
hl.transforms.Fold("""
((ConvRelu > ConvRelu > Conv) | Conv) > Add > Relu
""", "BottleneckBlock", "Bottleneck Block"),
# Fold residual blocks
hl.transforms.Fold("""ConvRelu > ConvRelu > Conv > Add > Relu""",
"ResBlock", "Residual Block"),
]
# Build TensorFlow graph
with tf.Session() as sess:
with tf.Graph().as_default() as tf_graph:
# Setup input placeholder
inputs = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
# Build model
predictions, _ = nets.resnet_v1.resnet_v1_50(inputs)
# Build HiddenLayer graph
hl_graph = hl.build_graph(tf_graph, transforms=transforms)
# Customize the theme. The theme is a simple dict defined in graph.py
hl_graph.theme.update({
"fill_color": "#789263",
"outline_color": "#789263",
"font_color": "#FFFFFF",
})
# Display
hl_graph
```
# Overfeat
```
with tf.Session() as sess:
with tf.Graph().as_default() as tf_graph:
# Setup input placeholder
inputs = tf.placeholder(tf.float32, shape=(1, 231, 231, 3))
# Build model
predictions, _ = nets.overfeat.overfeat(inputs)
# Build layout
hl_graph = hl.build_graph(tf_graph)
# Display
hl_graph
```
# Overlap matrices
This notebook will look at different ways of plotting overlap matrices and making them visually appealing.
One way to check that color choices work for color-blind people is to use this tool: https://davidmathlogic.com/colorblind
```
%pylab inline
import pandas as pd
import seaborn as sbn
sbn.set_style("ticks")
sbn.set_context("notebook", font_scale = 1.5)
data = np.loadtxt('raw_matrices_review.dat')
good = data[:9]
bad = data[-9:]
ugly = data[9:18]
# Your Standard plot
fig = figure(figsize=(8,8))
ax = sbn.heatmap(bad,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True,cmap=sbn.light_palette((210, 90, 60), input="husl") )
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
# Changing the colour map
from matplotlib import colors
from matplotlib.colors import LogNorm
#cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
cmap = colors.ListedColormap(['#117733','#88CCEE', '#FBE8EB'])
bounds=[0.0, 0.025, 0.1, 0.8]
norm = colors.BoundaryNorm(bounds, cmap.N, clip=False)
cbar_kws=dict(ticks=[0.2, 0.4, 0.6, 0.8 ,1.0])
#ax = sbn.heatmap(ugly,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True,cmap=cmap, norm=norm,cbar_kws=cbar_kws )
ax = sbn.heatmap(ugly,annot=True, fmt='.2f', linewidths=0, linecolor='white', annot_kws={"size": 14},square=True,robust=True,cmap='bone_r', vmin=0, vmax=1 )
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
for _, spine in ax.spines.items():
spine.set_visible(True)
show_annot_array = ugly >= 0.0001
for text, show_annot in zip(ax.texts, (element for row in show_annot_array for element in row)):
text.set_visible(show_annot)
# Changing the colour map
from matplotlib import colors
from matplotlib.colors import LogNorm
#cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
cmap = colors.ListedColormap(['#117733','#88CCEE', '#FBE8EB'])
bounds=[0.0, 0.025, 0.1, 0.8]
norm = colors.BoundaryNorm(bounds, cmap.N, clip=False)
cbar_kws=dict(ticks=[0.2, 0.4, 0.6, 0.8 ,1.0])
#ax = sbn.heatmap(ugly,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True,cmap=cmap, norm=norm,cbar_kws=cbar_kws )
ax = sbn.heatmap(good,annot=True, fmt='.2f', linewidths=0, linecolor='black', annot_kws={"size": 14},square=True,robust=True,cmap='bone_r',vmin=0, vmax=1 )
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
for _, spine in ax.spines.items():
spine.set_visible(True)
show_annot_array = good >= 0.001
for text, show_annot in zip(ax.texts, (element for row in show_annot_array for element in row)):
text.set_visible(show_annot)
# Changing the colour map
from matplotlib import colors
from matplotlib.colors import LogNorm
#cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
cmap = colors.ListedColormap(['#117733','#88CCEE', '#FBE8EB'])
bounds=[0.0, 0.025, 0.1, 0.8]
norm = colors.BoundaryNorm(bounds, cmap.N, clip=False)
cbar_kws=dict(ticks=[0.2, 0.4, 0.6, 0.8 ,1.0])
#ax = sbn.heatmap(ugly,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True,cmap=cmap, norm=norm,cbar_kws=cbar_kws )
ax = sbn.heatmap(bad,annot=True, fmt='.2f', linewidths=0, linecolor='black', annot_kws={"size": 14},square=True,robust=True,cmap='bone_r',vmin=0, vmax=1 )
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
for _, spine in ax.spines.items():
spine.set_visible(True)
show_annot_array = bad >= 0.01
for text, show_annot in zip(ax.texts, (element for row in show_annot_array for element in row)):
text.set_visible(show_annot)
# Changing the colour map
from matplotlib import colors
#cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
bounds=[0.0, 0.025, 0.1, 0.3,0.8]
norm = colors.BoundaryNorm(bounds, cmap.N, clip=False)
cbar_kws=dict(ticks=[.025, .1, .3,0.8])
ax = sbn.heatmap(ugly,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True,cmap=cmap, norm=norm,cbar_kws=cbar_kws )
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
bounds=[0.0, 0.025, 0.1, 0.3,0.8]
norm = colors.BoundaryNorm(bounds, cmap.N, clip=False)
cbar_kws=dict(ticks=[.025, .1, .3,0.8])
ax = sbn.heatmap(bad,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True,cmap=cmap, norm=norm, cbar_kws=cbar_kws )
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
bounds=[0.0, 0.025, 0.1, 0.3,0.8]
norm = colors.BoundaryNorm(bounds, cmap.N, clip=False)
cbar_kws=dict(ticks=[.025, .1, .3,0.8])
ax = sbn.heatmap(good,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True, cmap=cmap, norm=norm,vmin=0,vmax=1,cbar_kws=cbar_kws )
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
cbar_kws = {'ticks': [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]}
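# Aside (illustrative only): how BoundaryNorm buckets values -- each value is
# mapped to the index of the boundary interval it falls into.
from matplotlib import colors
demo_bounds = [0.0, 0.025, 0.1, 0.3, 0.8]
demo_norm = colors.BoundaryNorm(demo_bounds, 4)
# 0.05 lies in [0.025, 0.1) -> bucket 1; 0.5 lies in [0.3, 0.8) -> bucket 3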
# Playing with pandas and getting more exotic
df = pd.DataFrame(bad, columns=["1","2","3","4","5","6","7","8","9"])
#https://towardsdatascience.com/better-heatmaps-and-correlation-matrix-plots-in-python-41445d0f2bec
def heatmap(x, y, x1,y1, **kwargs):
if 'color' in kwargs:
color = kwargs['color']
else:
color = [1]*len(x)
if 'palette' in kwargs:
palette = kwargs['palette']
n_colors = len(palette)
else:
n_colors = 256 # Use 256 colors for the diverging color palette
palette = sbn.color_palette("Blues", n_colors)
if 'color_range' in kwargs:
color_min, color_max = kwargs['color_range']
else:
color_min, color_max = min(color), max(color) # Range of values that will be mapped to the palette, i.e. min and max possible correlation
def value_to_color(val):
if color_min == color_max:
return palette[-1]
else:
val_position = float((val - color_min)) / (color_max - color_min) # position of value in the input range, relative to the length of the input range
val_position = min(max(val_position, 0), 1) # bound the position between 0 and 1
ind = int(val_position * (n_colors - 1)) # target index in the color palette
return palette[ind]
if 'size' in kwargs:
size = kwargs['size']
else:
size = [1]*len(x)
if 'size_range' in kwargs:
size_min, size_max = kwargs['size_range'][0], kwargs['size_range'][1]
else:
size_min, size_max = min(size), max(size)
size_scale = kwargs.get('size_scale', 500)
def value_to_size(val):
if size_min == size_max:
return 1 * size_scale
else:
val_position = (val - size_min) * 0.99 / (size_max - size_min) + 0.01 # position of value in the input range, relative to the length of the input range
val_position = min(max(val_position, 0), 1) # bound the position between 0 and 1
return val_position * size_scale
if 'x_order' in kwargs:
x_names = [t for t in kwargs['x_order']]
else:
x_names = [t for t in sorted(set([v for v in x]))]
x_to_num = {p[1]:p[0] for p in enumerate(x_names)}
if 'y_order' in kwargs:
y_names = [t for t in kwargs['y_order']]
else:
y_names = [t for t in sorted(set([v for v in y]))]
y_to_num = {p[1]:p[0] for p in enumerate(y_names)}
plot_grid = plt.GridSpec(1, 15, hspace=0.2, wspace=0.1) # Setup a 1x15 grid
ax = plt.subplot(plot_grid[:,:-1]) # Use the left 14/15ths of the grid for the main plot
marker = kwargs.get('marker', 's')
kwargs_pass_on = {k:v for k,v in kwargs.items() if k not in [
'color', 'palette', 'color_range', 'size', 'size_range', 'size_scale', 'marker', 'x_order', 'y_order'
]}
ax.scatter(
x=x1,
y=y1,
marker=marker,
s=[value_to_size(v) for v in size],
c=[value_to_color(v) for v in color],
**kwargs_pass_on
)
ax.set_xticks([v for k,v in x_to_num.items()])
ax.set_xticklabels([k for k in x_to_num], rotation=45, horizontalalignment='right')
ax.set_yticks([v for k,v in y_to_num.items()])
ax.set_yticklabels([k for k in y_to_num])
ax.grid(False, 'major')
ax.grid(True, 'minor')
ax.set_xticks([t + 0.5 for t in ax.get_xticks()], minor=True)
ax.set_yticks([t + 0.5 for t in ax.get_yticks()], minor=True)
ax.set_xlim([-0.5, max([v for v in x_to_num.values()]) + 0.5])
ax.set_ylim([-0.5, max([v for v in y_to_num.values()]) + 0.5])
ax.set_facecolor('#F1F1F1')
# Add color legend on the right side of the plot
if color_min < color_max:
ax = plt.subplot(plot_grid[:,-1]) # Use the rightmost column of the plot
col_x = [0]*len(palette) # Fixed x coordinate for the bars
bar_y=np.linspace(color_min, color_max, n_colors) # y coordinates for each of the n_colors bars
bar_height = bar_y[1] - bar_y[0]
ax.barh(
y=bar_y,
width=[5]*len(palette), # Make bars 5 units wide
left=col_x, # Make bars start at 0
height=bar_height,
color=palette,
linewidth=0
)
ax.set_xlim(1, 2) # Bars are going from 0 to 5, so let's crop the plot somewhere in the middle
ax.grid(False) # Hide grid
ax.set_facecolor('white') # Make background white
ax.set_xticks([]) # Remove horizontal ticks
ax.set_yticks(np.linspace(min(bar_y), max(bar_y), 3)) # Show vertical ticks for min, middle and max
ax.yaxis.tick_right() # Show vertical ticks on the right
def corrplot(data, size_scale=500, marker='s'):
corr = pd.melt(data.reset_index(), id_vars='index')
corr.columns = ['index', 'variable', 'value']
x_names = [t for t in sorted(set([v for v in corr['index']]))]
x_to_num = {p[1]:p[0] for p in enumerate(x_names)}
x=[x_to_num[v] for v in corr['index']]
    y_names = [t for t in sorted(set([v for v in corr['variable']]))]
    y_to_num = {p[1]:p[0] for p in enumerate(y_names)}
    y=[y_to_num[v] for v in corr['variable']]
    heatmap(
    corr['index'], corr['variable'], x1, y1,
color=corr['value'], color_range=[0, 1],
palette=sbn.diverging_palette(20, 220, n=256),
size=corr['value'].abs(), size_range=[0,1],
marker=marker,
x_order=data.columns,
y_order=data.columns[::-1],
size_scale=size_scale
)
corrplot(df)
corr = pd.melt(df.reset_index(), id_vars='index')
print(corr)
x_names = [t for t in sorted(set([v for v in corr['index']]))]
x_to_num = {p[1]:p[0] for p in enumerate(x_names)}
x1=[x_to_num[v] for v in corr['index']]
y_names = [t for t in sorted(set([v for v in corr['variable']]))]
y_to_num = {p[1]:p[0] for p in enumerate(y_names)}
y1=[y_to_num[v] for v in corr['variable']]
def value_to_size(val):
if size_min == size_max:
return 1 * size_scale
else:
val_position = (val - size_min) * 0.99 / (size_max - size_min) + 0.01 # position of value in the input range, relative to the length of the input range
val_position = min(max(val_position, 0), 1) # bound the position between 0 and 1
return val_position * size_scale
value_names = [t for t in sorted(set([v for v in corr['value']]))]
value = []
for v in corr['value']:
value.append(v)
n_colors = 256 # Use 256 colors for the cubehelix palette
palette = sbn.cubehelix_palette(n_colors)
mapping = linspace(0,1,256)
c_index = np.digitize(value, mapping)
plot_colors =[]
for i in c_index:
plot_colors.append(palette[min(i, n_colors - 1)]) # guard: np.digitize can return n_colors for the max value
s =np.array(value)*4000
fig = figsize(10,10)
plot_grid = plt.GridSpec(1, 15, hspace=0.2, wspace=0.1) # Set up a 1x15 grid
ax = plt.subplot(plot_grid[:,:-1]) # Use the left 14/15ths of the grid for the main plot
ax.scatter(x1,y1,marker='s',s=s,c=plot_colors)
sbn.despine()
ax.grid(False, 'major')
ax.grid(True, 'minor', color='white')
ax.set_xticks([t + 0.5 for t in ax.get_xticks()], minor=True)
ax.set_yticks([t + 0.5 for t in ax.get_yticks()], minor=True)
ax.set_xlim([-0.5, max([v for v in x_to_num.values()]) + 0.5])
ax.set_ylim([-0.5, max([v for v in y_to_num.values()]) + 0.5])
ax.set_facecolor((0,0,0))
plt.gca().invert_yaxis()
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
xlabel(r'$\lambda$ index')
ylabel(r'$\lambda$ index')
def value_to_size(val, value):
size_scale = 500
size = [1]*len(value)
size_min, size_max = min(size), max(size)
if size_min == size_max:
return 1 * size_scale
else:
val_position = (val - size_min) * 0.99 / (size_max - size_min) + 0.01 # position of value in the input range, relative to the length of the input range
val_position = min(max(val_position, 0), 1) # bound the position between 0 and 1
return val_position * size_scale
heatmap2
value_to_size(value[5], value)
from biokit.viz import corrplot
c = corrplot.Corrplot(df)
c.plot()
def plot(index, columns):
values = "bad_status"
vmax = 0.10
cellsize_vmax = 10000
g_ratio = df.pivot_table(index=index, columns=columns, values=values, aggfunc="mean")
g_size = df.pivot_table(index=index, columns=columns, values=values, aggfunc="size")
annot = np.vectorize(lambda x: "" if np.isnan(x) else "{:.1f}%".format(x * 100))(g_ratio)
# adjust visual balance
figsize = (g_ratio.shape[1] * 0.8, g_ratio.shape[0] * 0.8)
cbar_width = 0.05 * 6.0 / figsize[0]
f, ax = plt.subplots(1, 1, figsize=figsize)
cbar_ax = f.add_axes([.91, 0.1, cbar_width, 0.8])
heatmap2(g_ratio, ax=ax, cbar_ax=cbar_ax,
vmax=vmax, cmap="PuRd", annot=annot, fmt="s", annot_kws={"fontsize":"small"},
cellsize=g_size, cellsize_vmax=cellsize_vmax,
square=True, ax_kws={"title": "{} x {}".format(index, columns)})
plt.show()
"""
This script is created by modifying seaborn matrix.py
in https://github.com/mwaskom/seaborn, by Michael L. Waskom
"""
from __future__ import division
import itertools
import matplotlib as mpl
from matplotlib.collections import LineCollection
import matplotlib.pyplot as plt
from matplotlib import gridspec
import matplotlib.patheffects as patheffects
import numpy as np
import pandas as pd
from scipy.cluster import hierarchy
import seaborn as sns
from seaborn import cm
from seaborn.axisgrid import Grid
from seaborn.utils import (despine, axis_ticklabels_overlap, relative_luminance, to_utf8)
from seaborn.external.six import string_types
def _index_to_label(index):
"""Convert a pandas index or multiindex to an axis label."""
if isinstance(index, pd.MultiIndex):
return "-".join(map(to_utf8, index.names))
else:
return index.name
def _index_to_ticklabels(index):
"""Convert a pandas index or multiindex into ticklabels."""
if isinstance(index, pd.MultiIndex):
return ["-".join(map(to_utf8, i)) for i in index.values]
else:
return index.values
def _matrix_mask(data, mask):
"""Ensure that data and mask are compatible and add missing values.
Values will be plotted for cells where ``mask`` is ``False``.
``data`` is expected to be a DataFrame; ``mask`` can be an array or
a DataFrame.
"""
if mask is None:
mask = np.zeros(data.shape, bool)
if isinstance(mask, np.ndarray):
# For array masks, ensure that shape matches data then convert
if mask.shape != data.shape:
raise ValueError("Mask must have the same shape as data.")
mask = pd.DataFrame(mask,
index=data.index,
columns=data.columns,
dtype=bool)
elif isinstance(mask, pd.DataFrame):
# For DataFrame masks, ensure that semantic labels match data
if not (mask.index.equals(data.index)
and mask.columns.equals(data.columns)):
err = "Mask must have the same index and columns as data."
raise ValueError(err)
# Add any cells with missing data to the mask
# This works around an issue where `plt.pcolormesh` doesn't represent
# missing data properly
mask = mask | pd.isnull(data)
return mask
class _HeatMapper2(object):
"""Draw a heatmap plot of a matrix with nice labels and colormaps."""
def __init__(self, data, vmin, vmax, cmap, center, robust, annot, fmt,
annot_kws, cellsize, cellsize_vmax,
cbar, cbar_kws,
xticklabels=True, yticklabels=True, mask=None, ax_kws=None, rect_kws=None):
"""Initialize the plotting object."""
# We always want to have a DataFrame with semantic information
# and an ndarray to pass to matplotlib
if isinstance(data, pd.DataFrame):
plot_data = data.values
else:
plot_data = np.asarray(data)
data = pd.DataFrame(plot_data)
# Validate the mask and convert to DataFrame
mask = _matrix_mask(data, mask)
plot_data = np.ma.masked_where(np.asarray(mask), plot_data)
# Get good names for the rows and columns
xtickevery = 1
if isinstance(xticklabels, int):
xtickevery = xticklabels
xticklabels = _index_to_ticklabels(data.columns)
elif xticklabels is True:
xticklabels = _index_to_ticklabels(data.columns)
elif xticklabels is False:
xticklabels = []
ytickevery = 1
if isinstance(yticklabels, int):
ytickevery = yticklabels
yticklabels = _index_to_ticklabels(data.index)
elif yticklabels is True:
yticklabels = _index_to_ticklabels(data.index)
elif yticklabels is False:
yticklabels = []
# Get the positions and used label for the ticks
nx, ny = data.T.shape
if not len(xticklabels):
self.xticks = []
self.xticklabels = []
elif isinstance(xticklabels, string_types) and xticklabels == "auto":
self.xticks = "auto"
self.xticklabels = _index_to_ticklabels(data.columns)
else:
self.xticks, self.xticklabels = self._skip_ticks(xticklabels,
xtickevery)
if not len(yticklabels):
self.yticks = []
self.yticklabels = []
elif isinstance(yticklabels, string_types) and yticklabels == "auto":
self.yticks = "auto"
self.yticklabels = _index_to_ticklabels(data.index)
else:
self.yticks, self.yticklabels = self._skip_ticks(yticklabels,
ytickevery)
# Get good names for the axis labels
xlabel = _index_to_label(data.columns)
ylabel = _index_to_label(data.index)
self.xlabel = xlabel if xlabel is not None else ""
self.ylabel = ylabel if ylabel is not None else ""
# Determine good default values for the colormapping
self._determine_cmap_params(plot_data, vmin, vmax,
cmap, center, robust)
# Determine good default values for cell size
self._determine_cellsize_params(plot_data, cellsize, cellsize_vmax)
# Sort out the annotations
if annot is None:
annot = False
annot_data = None
elif isinstance(annot, bool):
if annot:
annot_data = plot_data
else:
annot_data = None
else:
try:
annot_data = annot.values
except AttributeError:
annot_data = annot
if annot.shape != plot_data.shape:
raise ValueError('Data supplied to "annot" must be the same '
'shape as the data to plot.')
annot = True
# Save other attributes to the object
self.data = data
self.plot_data = plot_data
self.annot = annot
self.annot_data = annot_data
self.fmt = fmt
self.annot_kws = {} if annot_kws is None else annot_kws
#self.annot_kws.setdefault('color', "black")
self.annot_kws.setdefault('ha', "center")
self.annot_kws.setdefault('va', "center")
self.cbar = cbar
self.cbar_kws = {} if cbar_kws is None else cbar_kws
self.cbar_kws.setdefault('ticks', mpl.ticker.MaxNLocator(6))
self.ax_kws = {} if ax_kws is None else ax_kws
self.rect_kws = {} if rect_kws is None else rect_kws
# self.rect_kws.setdefault('edgecolor', "black")
def _determine_cmap_params(self, plot_data, vmin, vmax,
cmap, center, robust):
"""Use some heuristics to set good defaults for colorbar and range."""
calc_data = plot_data.data[~np.isnan(plot_data.data)]
if vmin is None:
vmin = np.percentile(calc_data, 2) if robust else calc_data.min()
if vmax is None:
vmax = np.percentile(calc_data, 98) if robust else calc_data.max()
self.vmin, self.vmax = vmin, vmax
# Choose default colormaps if not provided
if cmap is None:
if center is None:
self.cmap = cm.rocket
else:
self.cmap = cm.icefire
elif isinstance(cmap, string_types):
self.cmap = mpl.cm.get_cmap(cmap)
elif isinstance(cmap, list):
self.cmap = mpl.colors.ListedColormap(cmap)
else:
self.cmap = cmap
# Recenter a divergent colormap
if center is not None:
vrange = max(vmax - center, center - vmin)
normalize = mpl.colors.Normalize(center - vrange, center + vrange)
cmin, cmax = normalize([vmin, vmax])
cc = np.linspace(cmin, cmax, 256)
self.cmap = mpl.colors.ListedColormap(self.cmap(cc))
def _determine_cellsize_params(self, plot_data, cellsize, cellsize_vmax):
if cellsize is None:
self.cellsize = np.ones(plot_data.shape)
self.cellsize_vmax = 1.0
else:
if isinstance(cellsize, pd.DataFrame):
cellsize = cellsize.values
self.cellsize = cellsize
if cellsize_vmax is None:
cellsize_vmax = cellsize.max()
self.cellsize_vmax = cellsize_vmax
def _skip_ticks(self, labels, tickevery):
"""Return ticks and labels at evenly spaced intervals."""
n = len(labels)
if tickevery == 0:
ticks, labels = [], []
elif tickevery == 1:
ticks, labels = np.arange(n) + .5, labels
else:
start, end, step = 0, n, tickevery
ticks = np.arange(start, end, step) + .5
labels = labels[start:end:step]
return ticks, labels
def _auto_ticks(self, ax, labels, axis):
"""Determine ticks and ticklabels that minimize overlap."""
transform = ax.figure.dpi_scale_trans.inverted()
bbox = ax.get_window_extent().transformed(transform)
size = [bbox.width, bbox.height][axis]
axis = [ax.xaxis, ax.yaxis][axis]
tick, = axis.set_ticks([0])
fontsize = tick.label.get_size()
max_ticks = int(size // (fontsize / 72))
if max_ticks < 1:
return [], []
tick_every = len(labels) // max_ticks + 1
tick_every = 1 if tick_every == 0 else tick_every
ticks, labels = self._skip_ticks(labels, tick_every)
return ticks, labels
def plot(self, ax, cax):
"""Draw the heatmap on the provided Axes."""
# Remove all the Axes spines
#despine(ax=ax, left=True, bottom=True)
# Draw the heatmap and annotate
height, width = self.plot_data.shape
xpos, ypos = np.meshgrid(np.arange(width) + .5, np.arange(height) + .5)
data = self.plot_data.data
cellsize = self.cellsize
mask = self.plot_data.mask
if not isinstance(mask, np.ndarray) and not mask:
mask = np.zeros(self.plot_data.shape, bool)
annot_data = self.annot_data
if not self.annot:
annot_data = np.zeros(self.plot_data.shape)
# Draw rectangles instead of using pcolormesh
# Might be slower than original heatmap
for x, y, m, val, s, an_val in zip(xpos.flat, ypos.flat, mask.flat, data.flat, cellsize.flat, annot_data.flat):
if not m:
vv = (val - self.vmin) / (self.vmax - self.vmin)
size = np.clip(s / self.cellsize_vmax, 0.1, 1.0)
color = self.cmap(vv)
rect = plt.Rectangle([x - size / 2, y - size / 2], size, size, facecolor=color, **self.rect_kws)
ax.add_patch(rect)
if self.annot:
annotation = ("{:" + self.fmt + "}").format(an_val)
text = ax.text(x, y, annotation, **self.annot_kws)
# add edge to text
text_luminance = relative_luminance(text.get_color())
text_edge_color = ".15" if text_luminance > .408 else "w"
text.set_path_effects([mpl.patheffects.withStroke(linewidth=1, foreground=text_edge_color)])
# Set the axis limits
ax.set(xlim=(0, self.data.shape[1]), ylim=(0, self.data.shape[0]))
# Set other attributes
ax.set(**self.ax_kws)
if self.cbar:
norm = mpl.colors.Normalize(vmin=self.vmin, vmax=self.vmax)
scalar_mappable = mpl.cm.ScalarMappable(cmap=self.cmap, norm=norm)
scalar_mappable.set_array(self.plot_data.data)
cb = ax.figure.colorbar(scalar_mappable, cax, ax, **self.cbar_kws)
cb.outline.set_linewidth(0)
# if kws.get('rasterized', False):
# cb.solids.set_rasterized(True)
# Add row and column labels
if isinstance(self.xticks, string_types) and self.xticks == "auto":
xticks, xticklabels = self._auto_ticks(ax, self.xticklabels, 0)
else:
xticks, xticklabels = self.xticks, self.xticklabels
if isinstance(self.yticks, string_types) and self.yticks == "auto":
yticks, yticklabels = self._auto_ticks(ax, self.yticklabels, 1)
else:
yticks, yticklabels = self.yticks, self.yticklabels
ax.set(xticks=xticks, yticks=yticks)
xtl = ax.set_xticklabels(xticklabels)
ytl = ax.set_yticklabels(yticklabels, rotation="vertical")
# Possibly rotate them if they overlap
ax.figure.draw(ax.figure.canvas.get_renderer())
if axis_ticklabels_overlap(xtl):
plt.setp(xtl, rotation="vertical")
if axis_ticklabels_overlap(ytl):
plt.setp(ytl, rotation="horizontal")
# Add the axis labels
ax.set(xlabel=self.xlabel, ylabel=self.ylabel)
# Invert the y axis to show the plot in matrix form
ax.invert_yaxis()
def heatmap2(data, vmin=None, vmax=None, cmap=None, center=None, robust=False,
annot=None, fmt=".2g", annot_kws=None,
cellsize=None, cellsize_vmax=None,
cbar=True, cbar_kws=None, cbar_ax=None,
square=False, xticklabels="auto", yticklabels="auto",
mask=None, ax=None, ax_kws=None, rect_kws=None):
# Initialize the plotter object
plotter = _HeatMapper2(data, vmin, vmax, cmap, center, robust,
annot, fmt, annot_kws,
cellsize, cellsize_vmax,
cbar, cbar_kws, xticklabels,
yticklabels, mask, ax_kws, rect_kws)
# Draw the plot and return the Axes
if ax is None:
ax = plt.gca()
if square:
ax.set_aspect("equal")
# delete grid
ax.grid(False)
plotter.plot(ax, cbar_ax)
return ax
fig =figsize(10,10)
ax = heatmap2(good,annot=True, fmt='.2f',cellsize=np.array(value),cellsize_vmax=1, annot_kws={"size": 13},square=True,robust=True,cmap='PiYG' )
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
ax.grid(False, 'major')
ax.grid(True, 'minor', color='black', alpha=0.3)
ax.set_xticks([t + 0.5 for t in ax.get_xticks()], minor=True)
ax.set_yticks([t + 0.5 for t in ax.get_yticks()], minor=True)
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
fig =figsize(8,8)
ax = sbn.heatmap(good,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},cmap=sbn.light_palette((210, 90, 60), input="husl") )
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
sbn.despine()
ax.grid(False, 'major')
ax.grid(True, 'minor', color='white')
ax.set_xticks([t + 0.5 for t in ax.get_xticks()], minor=True)
ax.set_yticks([t + 0.5 for t in ax.get_yticks()], minor=True)
```
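The `value_to_color` / `value_to_size` helpers in the cells above are linear rescalings followed by clamping; a minimal standalone sketch of the same mapping (the function name here is illustrative, not from the notebook):

```
def value_to_index(val, vmin, vmax, n_colors):
    """Map val in [vmin, vmax] linearly onto a palette index in [0, n_colors - 1]."""
    if vmin == vmax:
        return n_colors - 1  # degenerate range: fall back to the last color
    pos = (val - vmin) / (vmax - vmin)
    pos = min(max(pos, 0.0), 1.0)  # clamp to [0, 1]
    return int(pos * (n_colors - 1))

print(value_to_index(0.5, 0.0, 1.0, 256))  # -> 127
```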
```
!git clone 'https://github.com/kevincong95/cs231n-emotiw.git'
# Switch to TF 1.x and navigate to the directory
%tensorflow_version 1.x
!pwd
import os
os.chdir('cs231n-emotiw')
!pwd
# Install required packages
!pip install -r 'requirements.txt'
!cp '/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/datasets/emotiw/Train.zip' '/content/'
!cp '/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/datasets/emotiw/Val.zip' '/content/'
!cp '/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/datasets/emotiw/Train_labels.txt' '/content/'
!cp '/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/datasets/emotiw/Val_labels.txt' '/content/'
!unzip /content/Train.zip
!unzip /content/Val.zip
from src.preprocessors.audio_preprocessor import AudioPreprocessor
audio_preprocessor_train = AudioPreprocessor(video_folder='Train/' , output_folder='train-full/' , label_path='../Train_labels.txt')
audio_preprocessor_train.preprocess(batch_size=200)
!cp '/content/cs231n-emotiw/train-full/audio-pickle-all-X-openl3.pkl' '/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/notebooks/audio-final'
!cp '/content/cs231n-emotiw/train-full/audio-pickle-all-Y-openl3.pkl' '/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/notebooks/audio-final'
from src.preprocessors.audio_preprocessor import AudioPreprocessor
audio_preprocessor_val = AudioPreprocessor(video_folder='Val/' , output_folder='val-full/' , label_path='../Val_labels.txt')
audio_preprocessor_val.preprocess(batch_size=200)
!cp '/content/cs231n-emotiw/val-full/audio-pickle-all-X-openl3.pkl' '/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/notebooks/audio-final'
!cp '/content/cs231n-emotiw/val-full/audio-pickle-all-Y-openl3.pkl' '/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/notebooks/audio-final'
import numpy as np
X_train = np.load('/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/datasets/emotiw/audio-pickle-all-X-openl3-train-final.pkl', allow_pickle=True)
Y_train = np.load('/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/datasets/emotiw/audio-pickle-all-Y-openl3-train-final.pkl' , allow_pickle=True)
Y_val = np.load('/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/datasets/emotiw/audio-pickle-all-Y-openl3-val-final.pkl' , allow_pickle=True)
X_val = np.load('/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/datasets/emotiw/audio-pickle-all-X-openl3-val-final.pkl' , allow_pickle=True)
def normalize(X_train , X_val):
from sklearn.preprocessing import Normalizer
X_train_copy = X_train
X_val_copy = X_val
scaler = Normalizer()
for i in range(0,X_train_copy.shape[0]):
X_train_copy[i] = scaler.fit_transform(X_train_copy[i])
for i in range(0,X_val_copy.shape[0]):
X_val_copy[i] = scaler.fit_transform(X_val_copy[i])
return X_train_copy , X_val_copy
X_train_norm , X_val_norm = normalize(X_train , X_val)
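# Quick self-contained check of what row-wise L2 normalization does
# (this mirrors sklearn's Normalizer on a hypothetical sample):
import numpy as np
demo = np.array([[3.0, 4.0]])
unit = demo / np.linalg.norm(demo, axis=1, keepdims=True)
print(unit)  # [[0.6 0.8]]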
%tensorflow_version 2.x
def train(X_train, y_train, epochs=4000 , batch_size=32 , X_val=[] , Y_val=[] , val_split=0.1, save_path = None):
"""
Train function with the model architecture
- Outputs
1. Trained model -- saves the model as a .h5 file to the specified path
"""
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.callbacks import ModelCheckpoint
# ******** Keras Functional API ********
inputs = tf.keras.Input(shape=[None,6144])
# CNN Portion
x = tf.keras.layers.Conv1D(64, 2, activation='selu')(inputs) # This convolves over the time dimension, i.e. it is NOT time-distributed.
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.MaxPooling1D(pool_size=2, strides=1, padding='valid')(x)
x = tf.keras.layers.Dropout(0.4)(x)
x = tf.keras.layers.Conv1D(512, 2, activation='selu')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.MaxPooling1D(pool_size=2, strides=1, padding='valid')(x)
x = tf.keras.layers.Dropout(0.2)(x)
x = tf.keras.layers.Conv1D(512, 2, activation='selu')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.MaxPooling1D(pool_size=2, strides=1, padding='valid')(x)
x = tf.keras.layers.Dropout(0.2)(x)
# Recurrent Portion
x = tf.keras.layers.Bidirectional(keras.layers.LSTM(10, return_sequences=True, input_shape=[None, 6144] , dropout=0.2 , activation='selu'))(x)
x = tf.keras.layers.Bidirectional(keras.layers.LSTM(5))(x)
x = tf.keras.layers.Dense(32 , activation='selu')(x)
x = tf.keras.layers.Dropout(0.4)(x)
outputs = tf.keras.layers.Dense(3 , activation='softmax')(x)
# Define Hyperparams and Compile
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate= 4.4274509683752373e-05, #This LR came from the hyperparameter tuning using the HyperAS (modified HyperOpt for Keras) -- randomized search
decay_steps=10000,
decay_rate=0.9)
rnn_ae = keras.Model(inputs=inputs, outputs=outputs)
opt = keras.optimizers.Adam(learning_rate=lr_schedule)
rnn_ae.compile(loss='sparse_categorical_crossentropy', optimizer=opt , metrics=['accuracy'])
history = None
if len(X_val) == 0 or len(Y_val) == 0:
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=300)
mc = ModelCheckpoint(save_path, monitor='val_accuracy', mode='max', verbose=1, save_best_only=True)
history = rnn_ae.fit(X_train , y_train , epochs=epochs , batch_size=batch_size, validation_split=val_split , callbacks=[es, mc])
else:
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=300)
mc = ModelCheckpoint(save_path, monitor='val_accuracy', mode='max', verbose=1, save_best_only=True)
history = rnn_ae.fit(X_train , y_train , epochs=epochs , batch_size=batch_size, validation_data=(X_val, Y_val), callbacks=[es, mc])
return rnn_ae , history
model , history = train(X_train , Y_train , X_val=X_val , Y_val=Y_val , save_path='/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/models/openl3-cnn-lstm-tuned-lr.h5')
import pandas as pd
import matplotlib.pyplot as plt
```
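The `ExponentialDecay` schedule used above computes `initial_learning_rate * decay_rate ** (step / decay_steps)`; a plain-Python sketch of that formula (function name is illustrative):

```
def exp_decay_lr(initial_lr, decay_rate, decay_steps, step):
    """Continuous (non-staircase) exponential decay of the learning rate."""
    return initial_lr * decay_rate ** (step / decay_steps)

lr0 = 4.4274509683752373e-05
print(exp_decay_lr(lr0, 0.9, 10000, 0) == lr0)            # at step 0, lr is unchanged
print(exp_decay_lr(lr0, 0.9, 10000, 10000) == 0.9 * lr0)  # after decay_steps, lr is decayed by decay_rate
```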
```
def plot_history(history):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.plot(hist['epoch'][:25], hist['accuracy'][:25],
label='Train Accuracy')
plt.plot(hist['epoch'][:25], hist['val_accuracy'][:25],
label = 'Val Accuracy')
plt.ylim([0,1])
plt.legend()
plt.show()
plot_history(history)
```
# FOR TEST SET
Concatenate the training and validation sets
```
X_combined = np.concatenate((X_train , X_val))
Y_combined = np.concatenate((Y_train , Y_val))
print(X_combined.shape)
print(Y_combined.shape)
from sklearn.model_selection import train_test_split
X_train , X_val , Y_train , Y_val = train_test_split(X_combined, Y_combined, test_size=0.1, random_state=42)
combined_model , combined_history = train(X_train , Y_train , X_val=X_val , Y_val=Y_val, save_path='/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/models/openl3-cnn-lstm-tuned-lr-train-and-val-v2.h5')
plot_history(combined_history)
```
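`train_test_split` with `test_size=0.1` holds out 10% of the combined rows for validation; a quick self-contained check of the resulting shapes on a hypothetical 50-row array:

```
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)
y = np.arange(50)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.1, random_state=42)
print(X_tr.shape, X_va.shape)  # (45, 2) (5, 2)
```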
# MixedStream objects and thermodynamic equilibrium
MixedStream is an extension of [Stream](https://biosteam.readthedocs.io/en/latest/Stream.html) with 's' (solid), 'l' (liquid), 'L' (LIQUID), and 'g' (gas) flow rates. The upper case 'LIQUID' denotes that it is a distinct phase from 'liquid'.
### Create MixedStream Objects
Before initializing MixedStream objects, first set the species:
```
import biosteam as bst
bst.MixedStream.species = bst.Species('Water', 'Ethanol')
```
Initialize with an ID and optionally T and P. Then you can set flow rates for different phases:
```
ms1 = bst.MixedStream(ID='ms1', T=351, P=101325)
ms1.setflow('l', Ethanol=1, units='kmol/hr')
ms1.setflow('g', Ethanol=2) # Assuming kmol/hr
ms1.getflow('l', 'Ethanol', 'Water')
```
You can **view** flow rates in other units by passing them to `show`:
```
ms1.show(flow='kg/hr', T='degC')
```
### Get and set flow rates
Flow rates are stored in solid_mol, liquid_mol, LIQUID_mol, and vapor_mol arrays:
```
ms1.solid_mol
ms1.liquid_mol
ms1.LIQUID_mol
ms1.vapor_mol
```
Assign flows using these properties:
```
ms1.liquid_mol[:] = [2, 1]
ms1.vapor_mol[1] = 3
ms1
```
Mass and volumetric flow rates are also available as [property_array](https://array-collections.readthedocs.io/en/latest/property_array.html) objects:
```
ms1.liquid_mass
ms1.liquid_vol
```
Assign flows through the mass or vol property arrays:
```
# Set a gas-phase species flow rate by index
ms1.vapor_mass[0] = 10
ms1.show()
# Set liquid phase flow rates assuming same order as in species object
ms1.liquid_vol[:] = [0.1, 0.2]
ms1.show()
```
### Single phase flow properties
Add 'net' to get the net flow rate
```
ms1.liquid_molnet
ms1.vapor_massnet
ms1.solid_volnet
```
Add 'frac' to get the composition
```
ms1.vapor_molfrac
ms1.liquid_massfrac
ms1.solid_volfrac
```
Note: When a phase has no flow rate, all species fractions will be equal.
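That zero-flow convention can be sketched in plain NumPy (an illustration only, not biosteam's internal implementation):

```
import numpy as np

def mol_frac(mol):
    """Mole fractions; returns a uniform composition when total flow is zero."""
    mol = np.asarray(mol, dtype=float)
    total = mol.sum()
    if total == 0:
        return np.full(mol.shape, 1.0 / mol.size)
    return mol / total

print(mol_frac([2.0, 1.0]))  # ~[0.667, 0.333]
print(mol_frac([0.0, 0.0]))  # [0.5 0.5]
```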
### Overall flow properties
```
ms1.mol
ms1.mass
ms1.vol
```
Note that overall flow rate properties 'molnet', 'massnet', 'volnet', 'molfrac', 'massfrac', and 'volfrac' are also available.
### Material and thermodynamic properties
Access the same properties as Stream objects:
```
ms1.H # Enthalpy (kJ/hr)
ms1.rho # Density (kg/m3)
```
A dictionary of units of measure is available:
```
ms1.units # See documentation for more information
```
### Vapor-liquid equilibrium
Set temperature and pressure:
```
ms2 = bst.MixedStream('ms2', T=353.88)
bst.MixedStream.lazy_energy_balance = False
ms2.setflow('g', Water=1, Ethanol=2)
ms2.setflow('l', Water=2, Ethanol=1)
ms2.VLE(T=353.88, P=101325)
ms2
```
Set pressure and duty:
```
ms2.VLE(P=101325, Q=0)
ms2
```
Set vapor fraction and pressure:
```
ms2.VLE(V=0.5, P=101325)
ms2
```
Set vapor fraction and temperature:
```
ms2.VLE(V=0.5, T=353.88)
ms2
```
Set temperature and duty:
```
ms2.VLE(Q=0, T=353.88)
ms2
```
It is also possible to set light and heavy keys that are not used to calculate equilibrium using the `LNK` and `HNK` keyword arguments.
### Liquid-liquid equilibrium
Initialize with MixedStream object with water and octane:
```
# Make stream with hydrophobic species
ms3 = bst.MixedStream('ms3', species=bst.Species('Water', 'Octane'))
ms3.setflow('l', (2, 2))
ms3
```
Adiabatic and isobaric conditions:
```
# Must set liquid-LIQUID guess splits
ms3.LLE()
ms3
```
Note that `LLE` assumes no significant temperature change with phase partitioning, resulting in constant temperature.
Isothermal and isobaric conditions:
```
# Must set liquid-LIQUID guess splits
ms3.LLE(T=340)
ms3
```
```
import random
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import torch,torchvision
from torch.nn import *
from torch.optim import *
# Model Eval
from sklearn.compose import make_column_transformer
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score,train_test_split
from sklearn.metrics import mean_absolute_error,mean_squared_error,accuracy_score,precision_score,f1_score,recall_score
# Models
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier,AdaBoostClassifier,VotingClassifier,BaggingClassifier,RandomForestClassifier
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import ExtraTreesClassifier
from catboost import CatBoost,CatBoostClassifier
from xgboost import XGBClassifier,XGBRFClassifier
from flaml import AutoML
# Other
import pickle
import wandb
PROJECT_NAME = 'Titanic-V6'
device = 'cuda'
np.random.seed(65)
random.seed(65)
torch.manual_seed(65)
pd.read_csv('./data/test.csv')
def save_model(model,name):
pickle.dump(model,open(f'./models/{name}.pkl','wb'))
pickle.dump(model,open(f'./models/{name}.pk','wb'))
def make_submission(model,name):
project_name = name
data = pd.read_csv('./data/test.csv')
ids = data['PassengerId']
new_ticket = []
tickets = data['Ticket']
for ticket in tickets:
ticket = ticket.split(' ')
try:
ticket = int(ticket[0])
except:
try:
ticket = int(ticket[1])
except:
try:
ticket = int(ticket[2])
except:
ticket = 0
new_ticket.append(ticket)
data['Ticket'] = new_ticket
new_names = []
names = data['Name']
for name in names:
name = name.split(' ')[1].replace('.','')
new_names.append(name)
data['Name'] = new_names
cabins = data['Cabin']
new_cabins = []
for cabin in cabins:
try:
cabin = cabin[:1]
new_cabins.append(cabin)
except:
new_cabins.append(5000)
del data['Cabin']
data['Cabins'] = new_cabins
data,_,new_data,idx,labels = object_to_int(data,'Cabins')
data,_,new_data,idx,labels = object_to_int(data,'Name')
data['Cabins'].replace({0:np.nan},inplace=True)
data['Cabins'].fillna(data['Cabins'].median(),inplace=True)
data,_,new_data,idx,labels = object_to_int(data,'Embarked')
data,_,new_data,idx,labels = object_to_int(data,'Age')
data,_,new_data,idx,labels = object_to_int(data,'Sex')
data['Age'].fillna(data['Age'].median(),inplace=True)
data['Fare'].fillna(data['Fare'].median(),inplace=True)
name = project_name
data = data.astype(float)
preds = model.predict(data)
pd.DataFrame({'PassengerId':ids,'Survived':preds.astype(int)}).to_csv(f'./submission/{name}.csv',index=False)
def valid(model,X,y,valid=False):
preds = model.predict(X)
if valid is False:
result = {
'Accuracy':accuracy_score(y_true=y,y_pred=preds),
'Precision':precision_score(y_true=y,y_pred=preds),
'F1':f1_score(y_true=y,y_pred=preds),
'Recall':recall_score(y_true=y,y_pred=preds)
}
else:
result = {
'Val Accuracy':accuracy_score(y_true=y,y_pred=preds),
'Val Precision':precision_score(y_true=y,y_pred=preds),
'Val F1':f1_score(y_true=y,y_pred=preds),
'Val Recall':recall_score(y_true=y,y_pred=preds)
}
return result
def train(model,X_train,X_test,y_train,y_test,name):
wandb.init(project=PROJECT_NAME,name=name)
model.fit(X_train,y_train)
wandb.log(valid(model,X_test,y_test,True))
wandb.log(valid(model,X_train,y_train,False))
make_submission(model,name)
save_model(model,name)
wandb.finish()
def fe(data,col):
max_num = data[col].quantile(0.99)
min_num = data[col].quantile(0.05)
data = data[data[col] < max_num]
data = data[data[col] > min_num]
return data
def object_to_int(data,col):
old_data = data.copy()
data = data[col].tolist()
labels = {}
idx = -1
new_data = []
for data_iter in data:
if data_iter not in list(labels.keys()):
idx += 1
labels[data_iter] = idx
for data_iter in data:
new_data.append(labels[data_iter])
old_data[col] = new_data
return old_data,old_data[col],new_data,idx,labels
# data = pd.read_csv('./data/train.csv')
# data = data.sample(frac=1)
# data.head()
# old_data = data.copy()
# new_ticket = []
# tickets = data['Ticket']
# for ticket in tickets:
# ticket = ticket.split(' ')
# try:
# ticket = int(ticket[0])
# except:
# try:
# ticket = int(ticket[1])
# except:
# try:
# ticket = int(ticket[2])
# except:
# ticket = 0
# new_ticket.append(ticket)
# data['Ticket'] = new_ticket
# data.head()
# new_names = []
# names = data['Name']
# for name in names:
# name = name.split(' ')[1].replace('.','')
# new_names.append(name)
# data['Name'] = new_names
# data,_,new_data,idx,labels = object_to_int(data,'Name')
# cabins = data['Cabin']
# new_cabins = []
# for cabin in cabins:
# try:
# cabin = cabin[:1]
# new_cabins.append(cabin)
# except:
# new_cabins.append(5000)
# del data['Cabin']
# data['Cabins'] = new_cabins
# data,_,new_data,idx,labels = object_to_int(data,'Cabins')
# labels
# data['Cabins'].replace({0:np.nan},inplace=True)
# data.isna().sum()
# data['Cabins'].isna().sum()
# data['Cabins'].fillna(data['Cabins'].median(),inplace=True)
# data.isna().sum()
# data,_,new_data,idx,labels = object_to_int(data,'Embarked')
# data,_,new_data,idx,labels = object_to_int(data,'Age')
# data['Cabins'].fillna(data['Cabins'].median(),inplace=True)
# data.isna().sum()
# data,_,new_data,idx,labels = object_to_int(data,'Embarked')
# data,_,new_data,idx,labels = object_to_int(data,'Age')
# data,_,new_data,idx,labels = object_to_int(data,'Sex')
# data = data.astype(float)
# data.head()
# data = data.astype(float)
# data.head()
# data.to_csv('./data/cleaned-data.csv',index=False)
data = pd.read_csv('./data/cleaned-data.csv')
X = data.drop('Survived',axis=1)
y = data['Survived']
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.0625)
# train(RandomForestClassifier(),X_train,X_test,y_train,y_test,name='baseline')
# Decomposition
# from sklearn.decomposition import PCA
# from sklearn.decomposition import KernelPCA
# pca = KernelPCA(11)
# X_train = pca.fit_transform(X_train)
# X_test = pca.transform(X_test)
# train(RandomForestClassifier(),X_train,X_test,y_train,y_test,name='KernelPCA-decomposition')
# Feature Selection
# from sklearn.feature_selection import VarianceThreshold
# from sklearn.feature_selection import SelectKBest
# from sklearn.feature_selection import RFECV
# from sklearn.feature_selection import SelectFromModel
# fs = SelectFromModel(RandomForestClassifier(),norm_order=11)
# X_train = fs.fit_transform(X_train,y_train)
# train(RandomForestClassifier(),X_train,X_test,y_train,y_test,name='SelectFromModel-decomposition')
# Preprocessing
from sklearn.preprocessing import (
StandardScaler,
RobustScaler,
MinMaxScaler,
MaxAbsScaler,
OneHotEncoder,
Normalizer,
Binarizer
)
preprocessings = [Normalizer,Binarizer] # StandardScaler,RobustScaler,MinMaxScaler,MaxAbsScaler,
X_train_old = X_train.copy()
X_test_old = X_test.copy()
# for preprocessing in preprocessings:
# X_train = X_train_old.copy()
# X_test = X_test_old.copy()
# preprocessing = preprocessing()
# X_train = preprocessing.fit_transform(X_train)
# X_test = preprocessing.transform(X_test)
# train(RandomForestClassifier(),X_train,X_test,y_train,y_test,name=f'preprocessing-{preprocessing}')
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier,AdaBoostClassifier,VotingClassifier,BaggingClassifier,RandomForestClassifier
from sklearn.svm import SVC
from sklearn.ensemble import ExtraTreesClassifier
from catboost import CatBoost,CatBoostClassifier
from xgboost import XGBClassifier,XGBRFClassifier
models = [
['KNeighborsClassifier',KNeighborsClassifier],
['DecisionTreeClassifier',DecisionTreeClassifier],
['GradientBoostingClassifier',GradientBoostingClassifier],
['AdaBoostClassifier',AdaBoostClassifier],
['VotingClassifier',VotingClassifier],
['BaggingClassifier',BaggingClassifier],
['RandomForestClassifier',RandomForestClassifier],
['SVC',SVC],
['ExtraTreesClassifier',ExtraTreesClassifier],
['CatBoost',CatBoost],
['CatBoostClassifier',CatBoostClassifier],
['XGBClassifier',XGBClassifier],
['XGBRFClassifier',XGBRFClassifier],
]
# for model in models:
# try:
# train(model[1](),X_train,X_test,y_train,y_test,name=f'model-{model[0]}')
# except:
# pass
# train(XGBClassifier(),X_train,X_test,y_train,y_test,name=f'XGBClassifier')
param_grid = {
'n_estimators':[25,50,75,100,125,250,375,500,625,750,1000],
'criterion':['gini','entropy'],
'max_depth':[1,2,3,4,5,None],
'min_samples_split':[2,5,10],
'min_samples_leaf':[1,2,5,7,10],
'max_features':['auto','sqrt','log2'],
'bootstrap':[False,True],
'oob_score':[False,True],
'warm_start':[False,True],
'class_weight':['balanced','balanced_subsample']
}
model = ExtraTreesClassifier()
model = GridSearchCV(model,cv=5,verbose=5,param_grid=param_grid).fit(X,y)
```
## Writing Reviews to Postgres from CSV
```
import csv
from time import time
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy import Column, Integer, JSON, String, Text, text, Date
# from models import Review
from credentials import POSTGRESQL_USER, POSTGRESQL_PASSWORD, POSTGRESQL_DB, POSTGRESQL_HOST
from datetime import datetime, timedelta
# import io
import json
import os
import glob
# import base64
def get_paths(pathname):
return sorted([os.path.join(pathname, f) for f in os.listdir(pathname) if f.endswith(".csv")])
DATA_DIR = './data/vacuum_reviews/Stick Vacuums & Electric Brooms'
# DATA_DIR = './data/bed_pillow_reviews/1-Beckham/'
# DATA_DIR = './data/bed_pillow_reviews/2-down alt/'
all_files = get_paths(DATA_DIR)
all_files
def extract_product_category(file_path):
return os.path.normpath(file_path).split(os.sep)[-2]
extract_product_category(all_files[0])
def extract_product_name(file_path):
return ('_').join(os.path.splitext(os.path.split(file_path)[-1])[0].replace(',', '').split(' ')[:-1])
extract_product_name(all_files[1])
Base = declarative_base()
class Review(Base):
__tablename__ = 'reviews'
review_id = Column(Integer, primary_key=True, server_default=text("nextval('reviews_review_id_seq'::regclass)"))
content = Column(Text, nullable=False)
meta_data = Column(JSON)
product_id = Column(String(255), nullable=False)
variation = Column(String(80))
review_date = Column(Date)
rater_id = Column(String(80))
product_category = Column(String(80))
product_rating = Column(Integer)
engine = create_engine(f"postgresql+psycopg2://{POSTGRESQL_USER}:{POSTGRESQL_PASSWORD}@{POSTGRESQL_HOST}:5432/{POSTGRESQL_DB}")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)
s = session()
# PRODUCT_ID = 'BISSELL Crosswave' # Rename according to product ID
try:
for file in all_files:
print(f'processing {file}')
# with open(file) as fh:
product_category = extract_product_category(file)
product_id = extract_product_name(file)
with open(file, encoding='utf-8-sig') as fh:
csvreader = csv.reader(fh, delimiter=',')
headers = []
for idx, row in enumerate(csvreader):
if idx == 0:
headers = row
else:
item = dict(zip(headers, row))
# print(item)
entry_item = {
'product_id': product_id,
'product_category': product_category,
'variation': item['Variation'],
'content': item['Title'] + ' - ' + item['Body'],
'product_rating': item['Rating'],
'rater_id': item['Author'],
'review_date': item['Date'].split(' on ')[1],
'meta_data': {k:d for (k, d) in item.items() if k not in ['Variation', 'Body', 'Rating', 'Author', 'Date']}
}
new_review = Review(**entry_item)
try:
s.add(new_review)
except:
print(entry_item)
s.commit()
except Exception:
s.rollback()
s.close()
```
## Basic Text Analysis
```
import pandas as pd
import numpy as np
# Text preprocessing/analysis
import re
from nltk import word_tokenize, sent_tokenize, FreqDist
from nltk.util import ngrams
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import RegexpTokenizer
from collections import Counter
def read_files(files, separator=','):
"""
Takes a list of pathnames and individually reads then concats them into a single DataFrame which is returned.
Can handle Excel files, csv, or delimiter separated text.
"""
processed_files = []
for file in files:
if file.lower().endswith('.xlsx') or file.lower().endswith('.xls'):
processed_files.append(pd.read_excel(file, index_col=None, header=0))
elif file.lower().endswith('.csv'):
processed_files.append(pd.read_csv(file, index_col=None, header=0))
else:
processed_files.append(pd.read_csv(file, sep=separator, index_col=None, header=0))
completed_df = pd.concat(processed_files, ignore_index=True)
return completed_df
pillow_reviews = read_files(all_files)
pillow_reviews[['ReviewCountry', 'ReviewDate']] = pillow_reviews['Date'].str.split(' on ', n=1, expand=True)
pillow_reviews['ReviewDate'] = pd.to_datetime(pillow_reviews['ReviewDate'])
# Contains Image OR Video
print(f"Reviews with EITHER Image or Video: {len(pillow_reviews[(pillow_reviews.Images!='-') | (pillow_reviews.Videos!='-')])}")
# Contains No Image Nor Video
print(f"Reviews with NO Image or Video: {len(pillow_reviews[(pillow_reviews.Images=='-') & (pillow_reviews.Videos=='-')])}")
# EXPORT FOR TABLEAU USE
# pillow_reviews.to_csv('dataset/pillow_reviews_{}.csv'.format(re.sub(r'(-|:| )', '', str(datetime.now())[:-7])), encoding='utf_8_sig')
prod_1 = pillow_reviews[pillow_reviews['Variation'] == 'B01LYNW421']
prod_1_good = prod_1[prod_1['Rating'] >= 4]
# prod_1_ok = prod_1[prod_1['Rating'] == 3]
prod_1_bad = prod_1[prod_1['Rating'] <= 3]
prod_1_good_reviews = prod_1_good.reset_index()['Body']
# prod_1_ok_reviews = prod_1_ok.reset_index()['Body']
prod_1_bad_reviews = prod_1_bad.reset_index()['Body']
def summarise(pattern, strings, freq):
"""Summarise strings matching a pattern."""
# Find matches
compiled_pattern = re.compile(pattern)
matches = [s for s in strings if compiled_pattern.search(s)]
# Print volume and proportion of matches
print("{} strings, that is {:.2%} of total".format(len(matches), len(matches)/ len(strings)))
# Create list of tuples containing matches and their frequency
output = [(s, freq[s]) for s in set(matches)]
output.sort(key=lambda x:x[1], reverse=True)
return output
def find_outlaw(word):
"""Find words that contain a same character 3+ times in a row."""
is_outlaw = False
for i, letter in enumerate(word):
if i > 1:
if word[i] == word[i-1] == word[i-2] and word[i].isalpha():
is_outlaw = True
break
return is_outlaw
```
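A quick usage check of `find_outlaw` (restated in compact but equivalent form so the snippet is self-contained): it flags elongated words such as "soooo" while leaving ordinary double letters alone.

```python
def find_outlaw(word):
    """Find words that contain the same character 3+ times in a row."""
    for i in range(2, len(word)):
        # Compare each character against the two before it.
        if word[i] == word[i - 1] == word[i - 2] and word[i].isalpha():
            return True
    return False

print(find_outlaw("soooo"))  # True
print(find_outlaw("cool"))   # False
```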
## AWS SDK Testing
```
# from credentials import ACCESS_KEY, SECRET_KEY,
# comprehend = boto3.client(
# service_name='comprehend',
# region_name='us-west-2',
# aws_access_key_id=ACCESS_KEY,
# aws_secret_access_key=SECRET_KEY)
# text = single_review
# print('Calling DetectKeyPhrases')
# print(json.dumps(comprehend.detect_key_phrases(Text=text, LanguageCode='en'), sort_keys=True, indent=4))
# print('End of DetectKeyPhrases\n')
```
```
import numpy as np
import pandas as pd
import tensorflow as tf
import pickle
```
## Data Preprocessing
```
# Loading formatted data
# I format the data into a pandas DataFrame
# See data_formatting.ipynb for details
train_data = pd.read_pickle("../dataset/train.pickle")
validate_data = pd.read_pickle("../dataset/validate.pickle")
test_data = pd.read_pickle("../dataset/test.pickle")
```
### Tokenize the source code
#### BoW
For data batching convenience, the paper trained only on functions with token length $10 \leq l \leq 500$, padded to the maximum length of **500**.
The paper does not mention whether the zero padding goes at the end or at the beginning, so I assume it is appended at the end (in practice, this makes little difference for a CNN).
`text_to_word_sequence` does not work here since it expects a single string.
```
# train_tokenized = tf.keras.preprocessing.text.text_to_word_sequence(train_data[0])
# x_train = tf.keras.preprocessing.sequence.pad_sequences(train_tokenized, maxlen=500, padding="post")
# validate_tokenized = tf.keras.preprocessing.text.text_to_word_sequence(validate_data[0])
# x_validate = tf.keras.preprocessing.sequence.pad_sequences(validate_tokenized, maxlen=500, padding="post")
# test_tokenized = tf.keras.preprocessing.text.text_to_word_sequence(test_data[0])
# x_test = tf.keras.preprocessing.sequence.pad_sequences(test_tokenized, maxlen=500, padding="post")
```
#### Init the Tokenizer
#### BoW
```
# The paper does not specify the vocabulary size to track; I am using 10000 here
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=10000)
# Required before using texts_to_sequences
# Arguments: a list of strings
tokenizer.fit_on_texts(list(train_data[0]))
```
For data batching convenience, the paper trained only on functions with token length $10 \leq l \leq 500$, padded to the maximum length of **500**.
The paper does not mention whether the zero padding goes at the end or at the beginning, so I assume it is appended at the end (in practice, this makes little difference for a CNN).
```
train_tokenized = tokenizer.texts_to_sequences(train_data[0])
x_train = tf.keras.preprocessing.sequence.pad_sequences(train_tokenized, maxlen=500, padding="post")
validate_tokenized = tokenizer.texts_to_sequences(validate_data[0])
x_validate = tf.keras.preprocessing.sequence.pad_sequences(validate_tokenized, maxlen=500, padding="post")
test_tokenized = tokenizer.texts_to_sequences(test_data[0])
x_test = tf.keras.preprocessing.sequence.pad_sequences(test_tokenized, maxlen=500, padding="post")
y_train = train_data[train_data.columns[2]].astype(int)
y_validate = validate_data[validate_data.columns[2]].astype(int)
y_test = test_data[test_data.columns[2]].astype(int)
```
## Model Design
This dataset is highly imbalanced, so I adjust the class weights used in training, following:
https://www.tensorflow.org/tutorials/structured_data/imbalanced_data
```
clear, vulnerable = (train_data[train_data.columns[2]]).value_counts()
total = vulnerable + clear
print("Total: {}\n Vulnerable: {} ({:.2f}% of total)\n".format(total, vulnerable, 100 * vulnerable / total))
weight_for_0 = (1 / clear)*(total)/2.0
weight_for_1 = (1 / vulnerable)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(input_dim=10000, output_dim=13, input_length=500))
model.add(tf.keras.layers.Conv1D(filters=512, kernel_size=9, activation="relu"))
model.add(tf.keras.layers.MaxPool1D(pool_size=4))
model.add(tf.keras.layers.Dropout(rate=0.5))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(units=64, activation="relu"))
model.add(tf.keras.layers.Dense(units=16, activation="relu"))
# I am using the sigmoid rather than the softmax mentioned in the paper
model.add(tf.keras.layers.Dense(units=1, activation="sigmoid"))
# Adam Optimization with smaller learning rate
adam = tf.keras.optimizers.Adam(learning_rate=0.001)
# Define the evaluation metrics
METRICS = [
tf.keras.metrics.TruePositives(name='tp'),
tf.keras.metrics.FalsePositives(name='fp'),
tf.keras.metrics.TrueNegatives(name='tn'),
tf.keras.metrics.FalseNegatives(name='fn'),
tf.keras.metrics.BinaryAccuracy(name='accuracy'),
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.Recall(name='recall'),
tf.keras.metrics.AUC(name='auc'),
]
model.compile(optimizer=adam, loss="binary_crossentropy", metrics=METRICS)
model.summary()
history = model.fit(x=x_train, y=y_train, batch_size=128, epochs=10, verbose=1, class_weight=class_weight, validation_data=(x_validate, y_validate))
with open('CWE120_trainHistory', 'wb') as history_file:
pickle.dump(history.history, history_file)
model.save("Simple_CNN_CWE120")
results = model.evaluate(x_test, y_test, batch_size=128)
```
[](https://colab.research.google.com/github/Rishit-dagli/Android-Stream-Day-2020/blob/master/Rock_Paper_Scissors.ipynb)
# Rock Paper Scissors with TF Model Maker
The Model Maker library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying the model for on-device ML applications.
This notebook shows an end-to-end example that utilizes this Model Maker library to illustrate the adaptation and conversion of a commonly-used image classification model to classify rock-paper-scissors hand gestures on a mobile device.
This is part of an example where I show how easily one can do on-device ML with TensorFlow Lite Model Maker and the ML Model Binding plugin.
## Setup
We need to install several required packages, including the Model Maker package from the GitHub [repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
```
!pip install git+git://github.com/tensorflow/examples.git#egg=tensorflow-examples[model_maker]
```
Import the required packages.
```
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_maker.core.data_util.image_dataloader import ImageClassifierDataLoader
from tensorflow_examples.lite.model_maker.core.task import image_classifier
from tensorflow_examples.lite.model_maker.core.task.model_spec import mobilenet_v2_spec
from tensorflow_examples.lite.model_maker.core.task.model_spec import ImageModelSpec
import matplotlib.pyplot as plt
```
## Training the model
### Get the data path
Let's get some images to play with in this simple end-to-end example. Hundreds of images are a good start for Model Maker, while more data could achieve better accuracy.
```
!wget https://storage.googleapis.com/laurencemoroney-blog.appspot.com/rps.zip
!unzip rps.zip
image_path = "rps"
```
You could replace `image_path` with your own image folders. As for uploading data to Colab, you can find the upload button in the left sidebar, shown in the image below with the red rectangle. Try uploading a zip file and unzipping it. The root file path is the current path.
<img src="http://storage.rishit.tech/storage/Android-Stream-Day-2020/upload-to-colab.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your images to the cloud, you could try to run the library locally following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker) in github.
### Run the example
The example consists of just 4 lines of code, as shown below, each representing one step of the overall process.
1. Load input data specific to an on-device ML app, and split it into training data and testing data.
```
data = ImageClassifierDataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
```
2. Customize the TensorFlow model.
```
model = image_classifier.create(train_data)
```
3. Evaluate the model.
```
loss, accuracy = model.evaluate(test_data)
```
4. Export to TensorFlow Lite model.
You can download it from the left sidebar (the same place as the upload) for your own use.
```
model.export(export_dir='.', with_metadata=True)
```
5. Download the trained model by clicking on the folder icon on the left hand side. Right-click on "model.tflite" and select download. Or run the following code:
```
from google.colab import files
files.download('model.tflite')
```
**PROBLEM STATEMENT**
<br/>Predict the Survival of people from Titanic based on the gender, class, age etc.
<br/>Get Sample data from Source- https://data.world/nrippner/titanic-disaster-dataset
<br/>
<br/>**COLUMN DEFINITION**
<br/>survival - Survival (0 = No; 1 = Yes)
<br/>class - Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
<br/>name - Name
<br/>sex - Sex (Male, Female; the dataset is imbalanced towards Males)
<br/>age - Age
<br/>
<br/>**STEPS IN MODELLING**
<br/>1. Data Acquisition
<br/>2. Data understanding
<br/>3. Data visualisation/EDA
<br/>4. Data cleaning/missing imputation/typecasting
<br/>5. Sampling/bias removal
<br/>6. Anomaly detection
<br/>7. Feature selection/importance
<br/>8. Azure ML Model trigger
<br/>9. Model Interpretation & Error Analysis
<br/>10. Telemetry
<br/>
<br/>**FEATURE ENGINEERING**
<br/>1. Data is imbalanced with more Males, so cluster-oversample by the 'Sex' column and then model. This imbalance can be identified via the Data Plots.
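As a hedged sketch of that idea (plain random oversampling of the minority 'Sex' value; the actual cluster-oversampling logic lives in the master notebook, so everything below is an assumed illustration):

```python
import pandas as pd

def oversample_minority(df: pd.DataFrame, col: str, seed: int = 0) -> pd.DataFrame:
    counts = df[col].value_counts()
    majority_n = counts.max()
    parts = []
    for value, n in counts.items():
        group = df[df[col] == value]
        # Resample each minority group with replacement up to the majority size.
        if n < majority_n:
            group = group.sample(n=majority_n, replace=True, random_state=seed)
        parts.append(group)
    return pd.concat(parts, ignore_index=True)

demo = pd.DataFrame({"Sex": ["male"] * 6 + ["female"] * 2})
balanced = oversample_minority(demo, "Sex")
print(balanced["Sex"].value_counts().to_dict())
```

After balancing, both 'Sex' values appear six times, removing the bias before modelling.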
## Import functions from Master Notebook:
Import the Functions and dependencies from the Master notebook to be used in the Trigger Notebook
```
%run /Users/.../AMLMasterNotebook
```
## 1.Data Acquisition
1. Acquisition of data from the datasource ADLS path in CSV/Parquet/JSON etc. format.
<br/>2. Logical transformations in data.
<br/>3. Transforming columns into required datatypes, converting to a pandas df, persisting the actual dataset, and introducing an 'Index' column that assigns a unique identifier to each row so it can be used to retrieve the original form after any data manipulations.
```
%scala
//<USER INPUT FILEPATH PARQUET OR CSV>
val filepath= "adl://<ADLS name>.azuredatalakestore.net/Temp/ML-PJC/Titanic.csv"
var df=spark.read.format("csv").option("header", "true").option("delimiter", ",").load(filepath)
//val filepath ="abfss:/.../.parquet"
//var df = spark.read.parquet(filepath)
df.createOrReplaceTempView("vw")
%sql
select * from vw
import pandas as pd
import numpy as np
from pyspark.sql.functions import col
input_dataframe= spark.sql("""select * FROM vw""")
#input_dataframe = pd.read_csv("/dbfs/FileStore/Titanic.csv", header='infer')
cols_string=['Name','PClass','Sex']
cols_int=['Age','Survived']
cols_datetime=[]
cols_Float=[]
#Function call: DataTypeConversion(input_dataframe,cols_string,cols_int,cols_datetime,cols_Float)
input_dataframe = DataTypeConversion(input_dataframe,cols_string,cols_int,cols_datetime,cols_Float)
##To assign an Index unique identifier of original record from after data massaging
input_dataframe['Index'] = np.arange(len(input_dataframe))
#Saving data acquired in dbfs for future use
outdir = '/dbfs/FileStore/Titanic.csv'
input_dataframe.to_csv(outdir, index=False)
#input_dataframe = pd.read_csv("/dbfs/FileStore/Dataframe.csv", header='infer')
```
## 2.Data Exploration
1. Exploratory Data Analysis (EDA)- To understand the overall data at hand, analysing each feature independently for its statistics, the correlation and interaction between variables, a data sample, etc.
<br/>2. Data Profiling Plots- To analyse the Categorical and Numerical columns separately for any trend or bias in the data.
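`Data_Profiling_viaPandasProfiling` and `Data_Profiling_Plots` come from the master notebook and are not defined here; as a dependency-light sketch, plain pandas already gives the per-feature statistics and correlations such a report is built from:

```python
import pandas as pd

df = pd.DataFrame({
    "Age": [22, 38, 26, 35, 28],
    "Survived": [0, 1, 1, 1, 0],
})
print(df.describe())     # per-feature count / mean / std / quantiles
print(df.corr())         # pairwise correlation between variables
print(df.isna().sum())   # missing values per column
```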
```
input_dataframe = pd.read_csv("/dbfs/FileStore/Titanic.csv", header='infer')
#Function Call: Data_Profiling_viaPandasProfiling(input_dataframe)
p=Data_Profiling_viaPandasProfiling(input_dataframe)
displayHTML(p)
input_dataframe = pd.read_csv("/dbfs/FileStore/Titanic.csv", header='infer')
#User Inputs
cols_all=['Name','PClass','Sex','Age','Survived']
Categorical_cols=['Name','PClass','Sex']
Numeric_cols=['Age','Survived']
Label_col='Survived'
#Data_Profiling_Plots(input_dataframe,Categorical_cols,Numeric_cols,Label_col)
Data_Profiling_Plots(input_dataframe,Categorical_cols,Numeric_cols,Label_col)
```
## 4.Cleansing
To clean the data of NULL values, fix structural errors in columns, drop empty columns, encode the categorical values, and normalise the data to bring features to the same scale. We also check the data distribution via a correlation heatmap of the original input dataset v/s the cleansed dataset to validate whether or not the transformations hampered the original data trend/density.
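The heavy lifting here is done by the `autodatacleaner` helper from the master notebook, which is not shown in this notebook. Purely as a hedged sketch of what such a pass typically does (assumed behaviour, not the actual implementation), a minimal pandas version might be:

```python
import pandas as pd

def basic_cleanse(df: pd.DataFrame) -> pd.DataFrame:
    # Drop columns that are entirely empty.
    df = df.dropna(axis=1, how="all")
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            # Impute numeric NULLs with the median.
            df[col] = df[col].fillna(df[col].median())
        else:
            # Impute categorical NULLs with the mode, then label-encode.
            df[col] = df[col].fillna(df[col].mode().iloc[0])
            df[col] = df[col].astype("category").cat.codes
    return df

demo = pd.DataFrame({"Sex": ["male", "female", None], "Age": [22.0, None, 30.0]})
print(basic_cleanse(demo))
```

On the toy frame above, 'Sex' becomes the integer codes [1, 0, 0] and the missing Age is replaced by the median 26.0.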
```
subsample_final = pd.read_csv("/dbfs/FileStore/Titanic.csv", header='infer')
filepath="/dbfs/FileStore/Titanic.csv"
#subsample_final=subsample_final.drop(['Index'], axis = 1) # Index is highest variability column hence always imp along PC but has no business value. You can append columns to be dropped by your choice here in the list
inputdf_new=autodatacleaner(subsample_final,filepath,"Titanic","Data Cleanser")
print("Total rows in the new pandas dataframe:",len(inputdf_new.index))
#persist cleansed data sets
filepath1 = '/dbfs/FileStore/Cleansed_Titanic.csv'
inputdf_new.to_csv(filepath1, index=False)
original = pd.read_csv("/dbfs/FileStore/Titanic.csv", header='infer')
display(Data_Profiling_Fin(original))
Cleansed=pd.read_csv("/dbfs/FileStore/Cleansed_Titanic.csv", header='infer')
display(Data_Profiling_Fin(Cleansed))
```
## 4.Sampling
Perform stratified, systematic, random, and cluster sampling over the data, compare each sampled dataset with the original using a null-hypothesis test, and suggest the best sample so obtained. Compare the data densities of the sampled datasets with that of the original input dataset to validate that the sample matches the data trend of the original set.
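The `Sampling` helper comes from the master notebook and is not defined here; as an assumed sketch of just the stratified variant, pandas can draw the same fraction from every stratum:

```python
import pandas as pd

def stratified_sample(df: pd.DataFrame, strat_col: str, frac: float, seed: int = 0) -> pd.DataFrame:
    # Sample the same fraction from every stratum so class proportions are preserved.
    return df.groupby(strat_col, group_keys=False).sample(frac=frac, random_state=seed)

demo = pd.DataFrame({"Sex": ["male"] * 8 + ["female"] * 4, "Survived": range(12)})
sample = stratified_sample(demo, "Sex", frac=0.5)
print(sample["Sex"].value_counts().to_dict())
```

Half of each 'Sex' group is kept (4 males, 2 females), so the male/female ratio of the original frame survives the sampling.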
```
input_dataframe = pd.read_csv("/dbfs/FileStore/Cleansed_Titanic.csv", header='infer') ## Sample after cleansing so that all categorical cols converted to num and hence no chi test. chi test requires the total of observed and tot of original sample to be same in frequency.
filepath="/dbfs/FileStore/Cleansed_Titanic.csv"
subsample_final = pd.DataFrame()
subsample1 = pd.DataFrame()
subsample2 = pd.DataFrame()
subsample3 = pd.DataFrame()
subsample4 = pd.DataFrame()
#Function Call: Sampling(input_dataframe,filepath,task_type,input_appname,cluster_classified_col_ifany(Supervised))
subsample_final,subsample1,subsample2,subsample3,subsample4=Sampling(input_dataframe,filepath,'Sampling','Titanic','Sex')
#persist sampled data sets
filepath1 = '/dbfs/FileStore/StratifiedSampled_Titanic.csv'
subsample1.to_csv(filepath1, index=False)
filepath2 = '/dbfs/FileStore/RandomSampled_Titanic.csv'
subsample2.to_csv(filepath2, index=False)
filepath3 = '/dbfs/FileStore/SystematicSampled_Titanic.csv'
subsample3.to_csv(filepath3, index=False)
filepath4 = '/dbfs/FileStore/ClusterSampled_Titanic.csv'
subsample4.to_csv(filepath4, index=False)
filepath = '/dbfs/FileStore/subsample_final_Titanic.csv'
subsample_final.to_csv(filepath, index=False)
original = pd.read_csv("/dbfs/FileStore/Titanic.csv", header='infer')
display(display_DataDistribution(original,'Survived'))
subsample1 = pd.read_csv("/dbfs/FileStore/StratifiedSampled_Titanic.csv", header='infer')
display(display_DataDistribution(subsample1,'Survived'))
subsample2 = pd.read_csv("/dbfs/FileStore/RandomSampled_Titanic.csv", header='infer')
display(display_DataDistribution(subsample2,'Survived'))
subsample3 = pd.read_csv("/dbfs/FileStore/SystematicSampled_Titanic.csv", header='infer')
display(display_DataDistribution(subsample3,'Survived'))
subsample4 = pd.read_csv("/dbfs/FileStore/ClusterSampled_Titanic.csv", header='infer')
display(display_DataDistribution(subsample4,'Survived'))
```
## 5.Anomaly Detection
Iterate the data over various anomaly-detection techniques and estimate the number of inliers and outliers for each.
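`AnomalyDetection` is defined in the master notebook; as one assumed illustration of this family of techniques (not the actual implementation), scikit-learn's `IsolationForest` labels roughly the chosen `contamination` fraction of points as outliers:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
X = np.vstack([
    rng.normal(0, 1, size=(95, 2)),   # dense cluster of inliers
    rng.uniform(6, 8, size=(5, 2)),   # injected far-away outliers
])

model = IsolationForest(contamination=0.05, random_state=42).fit(X)
labels = model.predict(X)  # +1 = inlier, -1 = outlier
print("outliers flagged:", int((labels == -1).sum()))
```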
```
#Calling the Anamoly Detection Function for identifying outliers
outliers_fraction = 0.05
#df =pd.read_csv("/dbfs/FileStore/subsample_final_Titanic.csv", header='infer')
df =pd.read_csv("/dbfs/FileStore/ClusterSampled_Titanic.csv", header='infer')
target_variable = 'Survived'
variables_to_analyze='Sex'
AnomalyDetection(df,target_variable,variables_to_analyze,outliers_fraction,'anomaly_test','Titanic')
```
## 6.Feature Selection
Perform feature selection on the basis of feature-importance ranking, correlation values, and variance within each column.
Choose features with a high importance score, drop one of any two highly correlated features, and drop features that offer zero variability and thus do not increase the entropy of the dataset.
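`FeatureSelection` is another master-notebook helper; the zero-variance and correlation criteria described above can be sketched in plain pandas (an assumed illustration; the importance-ranking part is omitted):

```python
import pandas as pd

def select_features(X: pd.DataFrame, corr_threshold: float = 0.95) -> pd.DataFrame:
    # Drop zero-variance columns: they add no entropy to the dataset.
    X = X.loc[:, X.nunique() > 1]
    # Drop one of each pair of highly correlated features.
    corr = X.corr().abs()
    cols = list(corr.columns)
    to_drop = set()
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if cols[j] not in to_drop and corr.iloc[i, j] > corr_threshold:
                to_drop.add(cols[j])
    return X.drop(columns=sorted(to_drop))

demo = pd.DataFrame({
    "a": [1, 2, 3, 4],
    "b": [2, 4, 6, 8],   # perfectly correlated with "a" -> dropped
    "c": [5, 5, 5, 5],   # zero variance -> dropped
    "d": [4, 1, 3, 2],
})
print(list(select_features(demo).columns))  # ['a', 'd']
```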
```
import pandas as pd
import numpy as np
#input_dataframe = pd.read_csv("/dbfs/FileStore/RealEstate.csv", header='infer')
#label_col='Y house price of unit area'
#filepath="/dbfs/FileStore/RealEstate.csv"
#input_appname='RealEstate'
#task_type='FeatureSelectionCleansing'
#Y_discrete='Continuous'
input_dataframe = pd.read_csv("/dbfs/FileStore/Cleansed_Titanic.csv", header='infer')
label_col='Survived'
filepath='/dbfs/FileStore/Cleansed_Titanic.csv'
input_appname='Titanic'
task_type='FeatureSelectionCleansing'
Y_discrete='Categorical'
FeatureSelection(input_dataframe,label_col,Y_discrete,filepath,input_appname,task_type)
%pip install ruamel.yaml==0.16.10
%pip install azure-core==1.8.0
%pip install liac-arff==2.4.0
%pip install msal==1.4.3
%pip install msrest==0.6.18
%pip install ruamel.yaml.clib==0.2.0
%pip install tqdm==4.49.0
%pip install zipp==3.2.0
%pip install interpret-community==0.15.0
%pip install azure-identity==1.4.0
%pip install dotnetcore2==2.1.16
%pip install jinja2==2.11.2
%pip install azure-core==1.15.0
%pip install azure-mgmt-containerregistry==8.0.0
%pip install azure-mgmt-core==1.2.2
%pip install distro==1.5.0
%pip install google-api-core==1.30.0
%pip install google-auth==1.32.1
%pip install importlib-metadata==4.6.0
%pip install msal==1.12.0
%pip install packaging==20.9
%pip install pathspec==0.8.1
%pip install requests==2.25.1
%pip install ruamel.yaml.clib==0.2.4
%pip install tqdm==4.61.1
%pip install zipp==3.4.1
%pip install scipy==1.5.2
%pip install charset-normalizer==2.0.3
%pip install websocket-client==1.1.0
%pip install scikit-learn==0.22.1
%pip install interpret-community==0.19.0
%pip install cryptography==3.4.7
%pip install typing-extensions==3.10.0.0
```
## 7.Auto ML Trigger - after preprocessing
Trigger Azure AutoML, pick the best model so obtained, and use it to predict the label column. Calculate the Weighted Absolute Accuracy and push it to telemetry. Also obtain the data back in the original format by using the unique identifier of each row, 'Index', and report Actual v/s Predicted columns. We also provide the direct link to the Azure Portal run for the current experiment for users to follow.
```
import pandas as pd
dfclean = pd.read_csv("/dbfs/FileStore/Cleansed_Titanic.csv", header='infer')
#AutoMLFunc(subscription_id,resource_group,workspace_name,input_dataframe,label_col,task_type,input_appname)
df=AutoMLFunc('<subscription_id>','<resource_group>','<workspace_name>',dfclean,'Survived','classification','Titanic')
##df has just index,y actual, y predicted cols, as rest all cols are encoded after manipulation
for col in df.columns:
if col not in ["y_predict","y_actual","Index"]:
df.drop([col], axis=1, inplace=True)
#dataframe is the actual input dataset
dataframe = pd.read_csv("/dbfs/FileStore/Titanic.csv", header='infer')
#Merging Actual Input dataframe with AML output df using Index column
dataframe_fin = pd.merge(left=dataframe, right=df, left_on='Index', right_on='Index')
dataframe_fin
```
## 9.Model Interpretation, Feature Importance, Error Analysis
We can explore the model by splitting the model metrics over various cohorts and analysing the data and model performance for each subclass. We can also get Global & Local Feature Importance values for the model.
```
df = pd.read_csv("/dbfs/FileStore/Cleansed_Titanic.csv", header='infer')
label_col='Survived'
subscription_id='<subscription_id>'
resource_group='<resource_group>'
workspace_name='<workspace_name>'
run_id='<run_id>'
iteration=1
task='classification'
ModelInterpret(df,label_col,subscription_id,resource_group,workspace_name,run_id,iteration,task)
df = pd.read_csv("/dbfs/FileStore/Cleansed_Titanic.csv", header='infer')
label_col='Survived'
subscription_id='<subscription_id>'
resource_group='<resource_group>'
workspace_name='<workspace_name>'
run_id='<run_id>'
iteration=1
task='classification'
ErrorAnalysisDashboard(df,label_col,subscription_id,resource_group,workspace_name,run_id,iteration,task)
input_dataframe = pd.read_csv("/dbfs/FileStore/Cleansed_Titanic.csv", header='infer')
label_col='Survived'
subscription_id='<subscription_id>'
resource_group='<resource_group>'
workspace_name='<workspace_name>'
run_id='<run_id>'
iteration=1
task='classification'
sensitive_features=['Sex']
FairnessDashboard(input_dataframe,label_col,subscription_id,resource_group,workspace_name,task,sensitive_features)
```
<a href="https://colab.research.google.com/github/pg1992/IA025_2022S1/blob/main/ex05/pedro_moreira/solution.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
nome = "Pedro Guilherme Siqueira Moreira"
print(f'Meu nome é {nome}')
```
This exercise consists of training on MNIST a two-layer model, where the first is a convolutional layer and the second is a linear classification layer.
We are not allowed to use the torch.nn.Conv{1,2,3}d functions.
## Importing the libraries
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import random
import torch
import torchvision
from torchvision.datasets import MNIST
```
## Setting the seeds
```
random.seed(123)
np.random.seed(123)
torch.manual_seed(123)
```
## Defining the initial weights
```
in_channels = 1
out_channels = 2
kernel_size = 5
stride = 3
# Input image size
height_in = 28
width_in = 28
# Image size after the first convolutional layer.
height_out = (height_in - kernel_size) // stride + 1
width_out = (width_in - kernel_size) // stride + 1
initial_conv_weight = torch.FloatTensor(out_channels, in_channels, kernel_size, kernel_size).uniform_(-0.01, 0.01)
initial_conv_bias = torch.FloatTensor(out_channels,).uniform_(-0.01, 0.01)
initial_classification_weight = torch.FloatTensor(10, out_channels * height_out * width_out).uniform_(-0.01, 0.01)
initial_classification_bias = torch.FloatTensor(10,).uniform_(-0.01, 0.01)
```
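The output-size computation above follows the standard formula for a valid (padding-free) strided convolution; a quick sanity check:

```python
# Output spatial size of a convolution with no padding:
# floor((input_size - kernel_size) / stride) + 1
def conv_out_size(n, kernel_size, stride):
    return (n - kernel_size) // stride + 1

# With this notebook's values (28x28 input, kernel 5, stride 3):
print(conv_out_size(28, 5, 3))  # -> 8
```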
## Dataset and dataloader
### Defining the minibatch size
```
batch_size = 50
```
### Loading the data and creating the dataset and dataloader
```
dataset_dir = '../data/'
dataset_train_full = MNIST(dataset_dir, train=True, download=True,
transform=torchvision.transforms.ToTensor())
print(dataset_train_full.data.shape)
print(dataset_train_full.targets.shape)
```
### Using only 1000 MNIST samples
In this exercise we will use 1000 training samples.
```
indices = torch.randperm(len(dataset_train_full))[:1000]
dataset_train = torch.utils.data.Subset(dataset_train_full, indices)
```
## Creating the dataloader and inspecting a minibatch
```
loader_train = torch.utils.data.DataLoader(dataset_train, batch_size=batch_size, shuffle=False)
print('Number of training minibatches:', len(loader_train))
x_train, y_train = next(iter(loader_train))
print("\nDimensions of one minibatch of data:", x_train.size())
print("Minimum and maximum pixel values: ", torch.min(x_train), torch.max(x_train))
print("Type of the image data: ", type(x_train))
print("Type of the image labels: ", type(y_train))
```
## Convolutional layer
```
class MyConv2d(torch.nn.Module):
def __init__(self, in_channels: int, out_channels: int, kernel_size: int, stride: int):
super(MyConv2d, self).__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size # The same for height and width.
self.stride = stride # The same for height and width.
self.weight = torch.nn.Parameter(torch.FloatTensor(out_channels, in_channels, kernel_size, kernel_size).uniform_(-0.01, 0.01))
self.bias = torch.nn.Parameter(torch.FloatTensor(out_channels,).uniform_(-0.01, 0.01))
    def forward(self, x):
        assert x.dim() == 4, f'x must have 4 dimensions: {x.shape}'
        # One possible implementation using unfold (torch.nn.Conv2d itself is not allowed).
        # Extract all sliding k x k patches: (N, in_channels*k*k, n_patches).
        patches = torch.nn.functional.unfold(x, kernel_size=self.kernel_size, stride=self.stride)
        # (out_channels, in_channels*k*k) @ (N, in_channels*k*k, n_patches) -> (N, out_channels, n_patches).
        out = self.weight.view(self.out_channels, -1) @ patches + self.bias.view(1, -1, 1)
        height_out = (x.shape[2] - self.kernel_size) // self.stride + 1
        width_out = (x.shape[3] - self.kernel_size) // self.stride + 1
        return out.view(x.shape[0], self.out_channels, height_out, width_out)
```
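The sliding-window extraction at the heart of a strided convolution can be sketched with `torch.nn.functional.unfold`, which extracts every patch as a column that a weight matrix can then act on:

```python
import torch

# unfold extracts every kernel_size x kernel_size patch as one column.
x = torch.arange(16.0).reshape(1, 1, 4, 4)  # (N, C, H, W)
patches = torch.nn.functional.unfold(x, kernel_size=2, stride=2)
print(patches.shape)  # (N, C*k*k, n_patches) = (1, 4, 4)
# A (out_channels, C*k*k) weight matrix applied to these columns gives the
# convolution output; an all-ones kernel just sums each patch.
w = torch.ones(1, 4)
print((w @ patches).flatten())  # patch sums: 10, 18, 42, 50
```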
## Check that your implementation matches PyTorch's using a simple example
```
in_channels_dummy = 1
out_channels_dummy = 1
kernel_size_dummy = 2
stride_dummy = 1
conv_layer = MyConv2d(in_channels=in_channels_dummy, out_channels=out_channels_dummy, kernel_size=kernel_size_dummy, stride=stride_dummy)
pytorch_conv_layer = torch.nn.Conv2d(in_channels=in_channels_dummy, out_channels=out_channels_dummy, kernel_size=kernel_size_dummy, stride=stride_dummy, padding=0)
# Use the same weights for my implementation and PyTorch's
initial_weights_dummy = torch.arange(in_channels_dummy * out_channels_dummy * kernel_size_dummy * kernel_size_dummy).float()
initial_weights_dummy = initial_weights_dummy.reshape(out_channels_dummy, in_channels_dummy, kernel_size_dummy, kernel_size_dummy)
initial_bias_dummy = torch.arange(out_channels_dummy,).float()
conv_layer.weight.data = initial_weights_dummy
conv_layer.bias.data = initial_bias_dummy
pytorch_conv_layer.load_state_dict(dict(weight=initial_weights_dummy, bias=initial_bias_dummy))
x = torch.arange(30).float().reshape(1, 1, 5, 6)
out = conv_layer(x)
target_out = pytorch_conv_layer(x)
assert torch.allclose(out, target_out, atol=1e-6)
```
## Check that your implementation matches PyTorch's using a random example
```
x = torch.rand(2, in_channels, height_in, width_in)
conv_layer = MyConv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride)
pytorch_conv_layer = torch.nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=0)
# Use the same weights for my implementation and PyTorch's
conv_layer.weight.data = initial_conv_weight
conv_layer.bias.data = initial_conv_bias
pytorch_conv_layer.load_state_dict(dict(weight=initial_conv_weight, bias=initial_conv_bias))
out = conv_layer(x)
target_out = pytorch_conv_layer(x)
assert torch.allclose(out, target_out, atol=1e-6)
```
## Model
```
class Net(torch.nn.Module):
def __init__(self, height_in: int, width_in: int, in_channels: int, out_channels: int, kernel_size: int, stride: int):
super(Net, self).__init__()
self.conv_layer = MyConv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride)
height_out = (height_in - kernel_size) // stride + 1
width_out = (width_in - kernel_size) // stride + 1
self.classification_layer = torch.nn.Linear(out_channels * height_out * width_out, 10)
def forward(self, x):
hidden = self.conv_layer(x)
hidden = torch.nn.functional.relu(hidden)
hidden = hidden.reshape(x.shape[0], -1)
logits = self.classification_layer(hidden)
return logits
```
## Training
### Defining the hyperparameters
```
n_epochs = 50
lr = 0.1
```
### Training loop
```
model = Net(height_in=height_in, width_in=width_in, in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride)
# Use predefined initial weights
model.classification_layer.load_state_dict(dict(weight=initial_classification_weight, bias=initial_classification_bias))
model.conv_layer.weight.data = initial_conv_weight
model.conv_layer.bias.data = initial_conv_bias
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr)
epochs = []
loss_history = []
loss_epoch_end = []
total_trained_samples = 0
for i in range(n_epochs):
for x_train, y_train in loader_train:
        # network forward pass (predict)
outputs = model(x_train)
        # compute the loss
loss = criterion(outputs, y_train)
        # zero the gradients, backpropagate, update parameters via gradient descent
optimizer.zero_grad()
loss.backward()
optimizer.step()
total_trained_samples += x_train.size(0)
epochs.append(total_trained_samples / len(dataset_train))
loss_history.append(loss.item())
loss_epoch_end.append(loss.item())
print(f'Epoch: {i:d}/{n_epochs - 1:d} Loss: {loss.item()}')
```
### Usual loss visualization, sampled once per epoch
```
n_batches_train = len(loader_train)
plt.plot(epochs[::n_batches_train], loss_history[::n_batches_train])
plt.xlabel('epoch')
loss_epoch_end
# Assert on the loss history
target_loss_epoch_end = np.array([
2.303267478942871,
2.227701187133789,
1.0923893451690674,
0.5867354869842529,
0.5144089460372925,
0.45026642084121704,
0.4075140357017517,
0.37713879346847534,
0.3534485101699829,
0.3341451585292816,
0.3181140422821045,
0.30457887053489685,
0.29283496737480164,
0.2827608287334442,
0.2738332152366638,
0.2657742500305176,
0.2583288848400116,
0.25117507576942444,
0.24439716339111328,
0.23789969086647034,
0.23167723417282104,
0.22562651336193085,
0.21984536945819855,
0.2142913043498993,
0.20894232392311096,
0.203872948884964,
0.19903430342674255,
0.19439971446990967,
0.18994088470935822,
0.18563991785049438,
0.18147490918636322,
0.17744913697242737,
0.17347246408462524,
0.16947467625141144,
0.16547319293022156,
0.16150487959384918,
0.1574639081954956,
0.1534043848514557,
0.14926929771900177,
0.1452063024044037,
0.1412365883588791,
0.13712672889232635,
0.1331038922071457,
0.1291467249393463,
0.1251506358385086,
0.12116757035255432,
0.11731722950935364,
0.11364627629518509,
0.11001908034086227,
0.10655981302261353])
assert np.allclose(np.array(loss_epoch_end), target_loss_epoch_end, atol=1e-6)
```
```
import numpy as np
import pandas as pd
import scanpy as sc
import perturbseq as perturb
sc.logging.print_versions()
```
Annotate perturbations
==
Input:
- scanpy object with gene expression
- cell2guide file:
    - file annotating which guide is present in each cell; binary, with 0 when the guide is absent and 1 when it is present. Note that there can be multiple guides per cell.
    - the rows of this file are the cells, and the columns are 'cell' and the set of guides in the experiment.
- guide2gene file (optional):
- file annotating which guides target the same gene.
- 2 columns of this file must be named 'guide' and 'gene'
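As a sketch, a cell2guide table of this shape could be built as follows (the cell barcodes and guide names here are purely illustrative):

```python
import pandas as pd

# Illustrative cell2guide layout: one row per cell, one 0/1 column per guide.
cell2guide = pd.DataFrame({
    "cell": ["AAACCTG-1", "AAACGGG-1", "AAAGATG-1"],
    "guideA_1": [1, 0, 1],
    "guideA_2": [0, 1, 0],
    "guideB_1": [1, 0, 0],
})
# A cell may carry multiple guides, so row sums can exceed 1.
print(cell2guide.set_index("cell").sum(axis=1).tolist())  # [2, 1, 1]
```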
```
datapath='/ahg/regevdata/projects/Cell2CellCommunication/perturbseq_benchmarks/data/2018-11-09'
dataset='dc_3hr'
gsm_number='GSM2396856'
anno=datapath+'/'+dataset+'/'+gsm_number+'_'+dataset+'_cbc_gbc_dict_lenient.csv.gz' #also experiment with the strict
pref=datapath+'/'+dataset+'/'+dataset
expr_file=pref+'raw_counts.h5ad'
cells2guide_file=pref+'.cell2guide.csv.gz'
guide2gene_file=pref+'.guide2gene.csv.gz'
#read in adata
adata=sc.read(expr_file)
adata.var_names_make_unique()
adata
perturb.io.read_perturbations_csv(adata,
cell2guide_csv=cells2guide_file,
guide2gene_csv=guide2gene_file)
adata
#if processing perturbseq data with cellranger, the guides are included in the expression matrix
#however, they are on a different sequencing depth,
#so they should be removed such that they don't affect normalization of the expression data
perturb.pp.remove_guides_from_gene_names(adata)
#annotate cells with specific guides as being controls
perturb.pp.annotate_controls(adata,control_guides=['m_MouseNTC_100_A_67005'])
adata
adata.write(pref+'.perturb.raw.h5ad')
```
Generic data processing of the expression data
==
```
adata=sc.read(pref+'.perturb.raw.h5ad')
adata
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)
adata
adata.var['mt'] = adata.var_names.str.startswith('mt-') # annotate the group of mitochondrial genes as 'mt'
sc.pp.calculate_qc_metrics(adata, qc_vars=['mt'], percent_top=None, log1p=False, inplace=True)
sc.pl.highest_expr_genes(adata, n_top=20, )
sc.pl.violin(adata, ['n_genes_by_counts', 'total_counts', 'pct_counts_mt'],
jitter=0.4, multi_panel=True)
sc.pl.scatter(adata, x='total_counts', y='pct_counts_mt')
sc.pl.scatter(adata, x='total_counts', y='n_genes_by_counts')
adata = adata[adata.obs.n_genes_by_counts < 3000, :]
adata = adata[adata.obs.pct_counts_mt < 2, :]
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, min_mean=0.0125, max_mean=3, min_disp=0.5)
sc.pl.highly_variable_genes(adata)
adata.raw = adata
#for this example, we will restrict to variable genes so that it runs faster
adata=adata[:,adata.var['highly_variable']]
adata
#regress out the batch effects
to_regress=['total_counts', 'pct_counts_mt']
for batch in set(adata.obs['batch']):
to_regress.append(batch)
adata.obs[batch]=1.0*(adata.obs['batch']==batch)
print(to_regress)
sc.pp.regress_out(adata, to_regress)
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, svd_solver='arpack')
sc.pl.pca_variance_ratio(adata, log=True)
sc.pp.neighbors(adata, n_neighbors=15, n_pcs=40)
sc.tl.umap(adata)
sc.tl.louvain(adata)
adata.write(pref+'.perturb.analysis.h5ad')
adata=sc.read(pref+'.perturb.analysis.h5ad')
sc.pl.umap(adata,color=['louvain','pct_counts_mt','total_counts','batch'])
adata
for v in ['perturb.m_Rel_3', 'perturb.m_Spi1_3', 'perturb.m_Nfkb1_4', 'perturb.m_Irf4_4', 'perturb.m_Nfkb1_3', 'perturb.m_Irf4_2', 'perturb.m_Stat3_3', 'perturb.m_Irf2_1', 'perturb.m_E2f4_2', 'perturb.m_Stat1_1', 'perturb.m_Rel_1', 'perturb.m_Rela_2', 'perturb.m_Ets2_4', 'perturb.m_Stat1_3', 'perturb.m_Runx1_2', 'perturb.m_Irf1_1', 'perturb.m_Maff_1', 'perturb.m_Irf4_3', 'perturb.m_Atf3_1', 'perturb.m_Egr2_4', 'perturb.m_Ctcf_2', 'perturb.m_Ahr_1', 'perturb.m_Nfkb1_2', 'perturb.m_E2f1_4', 'perturb.m_Hif1a_4', 'perturb.m_Hif1a_1', 'perturb.m_Maff_4', 'perturb.m_Rel_2', 'perturb.m_Rela_3', 'perturb.m_Ets2_3', 'perturb.m_Cebpb_3', 'perturb.m_Irf1_2', 'perturb.m_E2f1_3', 'perturb.m_Stat2_4', 'perturb.m_Runx1_4', 'perturb.m_Spi1_4', 'perturb.m_Spi1_2', 'perturb.m_Stat2_2', 'perturb.m_Ctcf_1', 'perturb.m_Irf1_4', 'perturb.m_Junb_4', 'perturb.m_Irf2_3', 'perturb.m_Ahr_3', 'perturb.m_Rela_1', 'perturb.m_Irf2_4', 'perturb.m_Relb_1', 'perturb.m_Egr1_4', 'perturb.m_Cebpb_1', 'perturb.m_E2f4_4', 'perturb.m_Atf3_2', 'perturb.m_Irf2_2', 'perturb.m_Stat2_3', 'perturb.m_Stat1_2', 'perturb.m_MouseNTC_100_A_67005', 'perturb.m_Hif1a_3', 'perturb.m_Egr2_2', 'perturb.m_E2f4_3', 'perturb.gene.E2f1', 'perturb.gene.Stat1', 'perturb.gene.Spi1', 'perturb.gene.Ctcf', 'perturb.gene.Egr2', 'perturb.gene.Hif1a', 'perturb.gene.Rela', 'perturb.gene.Cebpb', 'perturb.gene.Irf2', 'perturb.gene.Runx1', 'perturb.gene.Nfkb1', 'perturb.gene.Stat2', 'perturb.gene.MouseNTC', 'perturb.gene.Ets2', 'perturb.gene.Stat3', 'perturb.gene.Maff', 'perturb.gene.Atf3', 'perturb.gene.Irf4', 'perturb.gene.Rel', 'perturb.gene.Ahr', 'perturb.gene.Irf1', 'perturb.gene.E2f4', 'perturb.gene.Relb', 'perturb.gene.Junb', 'perturb.gene.Egr1', 'guide', 'guide.compact', 'gene', 'gene.compact', 'unassigned', 'control']:
del adata.obs[v]
adata.write(pref+'.perturb.analysis.h5ad')
```
<a href="https://colab.research.google.com/github/BrittonWinterrose/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments/blob/master/Drug_Data_NLP_notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Concrete solutions to real problems
## An NLP workshop by Emmanuel Ameisen [(@EmmanuelAmeisen)](https://twitter.com/EmmanuelAmeisen), from Insight AI
While there exists a wealth of elaborate and abstract NLP techniques, clustering and classification should always be in our toolkit as the first techniques to use when dealing with this kind of data. In addition to being amongst the easiest to scale in production, their ease of use can quickly help businesses address a set of applied problems:
- How do you automatically make the distinction between different categories of sentences?
- How can you find sentences in a dataset that are most similar to a given one?
- How can you extract a rich and concise representation that can then be used for a range of other tasks?
- Most importantly, how do you find quickly whether these tasks are possible on your dataset at all?
While there is a vast amount of resources on classical Machine Learning, or Deep Learning applied to images, I've found that there is a lack of clear, simple guides as to what to do when one wants to find a meaningful representation for sentences (in order to classify them or group them together, for example). Here is my attempt below.
## It starts with data
### Our Dataset: Disasters on social media
Contributors looked at over 10,000 tweets retrieved with a variety of searches like “ablaze”, “quarantine”, and “pandemonium”, then noted whether the tweet referred to a disaster event (as opposed to a joke with the word or a movie review or something non-disastrous). Thank you [Crowdflower](https://www.crowdflower.com/data-for-everyone/).
### Why it matters
We will try to correctly predict tweets that are about disasters. This is a very relevant problem, because:
- It is actionable to anybody trying to get signal from noise (such as police departments in this case)
- It is tricky because relying on keywords is harder than in most cases like spam
```
!pip install gensim
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# 1. Authenticate and create the PyDrive client.
#auth.authenticate_user()
#gauth = GoogleAuth()
#gauth.credentials = GoogleCredentials.get_application_default()
#drive = GoogleDrive(gauth)
import keras
import nltk
import pandas as pd
import numpy as np
import re
import codecs
```
### Sanitizing input
Let's make sure our tweets only have characters we want. We remove '#' characters but keep the words after the '#' sign because they might be relevant (e.g. #disaster).
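As a quick illustration of that kind of cleanup (the sample tweet below is made up), pandas string methods with regular expressions can do it in a few lines:

```python
import pandas as pd

# Hypothetical sample tweet; strip urls and mentions, drop '#' but keep the word.
sample = pd.DataFrame({"text": ["Forest #fire near La Ronge http://t.co/x1 @user"]})
sample["text"] = sample["text"].str.replace(r"http\S+", "", regex=True)
sample["text"] = sample["text"].str.replace(r"@\S+", "", regex=True)
sample["text"] = sample["text"].str.replace("#", "", regex=False)
sample["text"] = sample["text"].str.lower().str.strip()
print(sample["text"][0])  # -> forest fire near la ronge
```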
```
#!wget http://archive.ics.uci.edu/ml/machine-learning-databases/00462/drugsCom_raw.zip
#!unzip drugsCom_raw.zip
df_train = pd.read_table('drugsComTrain_raw.tsv')
df_test = pd.read_table('drugsComTest_raw.tsv')
df_main = pd.concat([df_train, df_test], axis=0)
df_main.head()
# Turn rating into new "binned" column
def rank_bin(array):
y_rank = []
for i in array:
if i <= 4: # Negative Rating Cut Off (Inclusive)
y_rank.append(-1)
elif i >= 10: # Positive Rating Cut Off (Inclusive)
y_rank.append(1)
else:
y_rank.append(0)
return y_rank
df_main["rank_bin"] = rank_bin(df_main["rating"])
df_main.rank_bin.value_counts() # Check to see the bin sizes.
# Upload File Manually
#from google.colab import files
#uploaded = files.upload()
#for fn in uploaded.keys():
# print('User uploaded file "{name}" with length {length} bytes'.format(
# name=fn, length=len(uploaded[fn])))
#downloaded = drive.CreateFile({'id': "1m74XhpHHZXfS3mAM8cbBYl-FHlpjZnEi"})
#downloaded.GetContentFile("socialmedia_relevant_cols.csv")
#input_file = codecs.open("socialmedia_relevant_cols.csv", "r",encoding='utf-8', errors='replace')
#output_file = open("socialmedia_relevant_cols_clean.csv", "w")
#def sanitize_characters(raw, clean):
# for line in input_file:
# out = line
# output_file.write(line)
#sanitize_characters(input_file, output_file)
```
### Let's inspect the data
It looks solid, but we don't really need urls, and we would like to have our words all lowercase (Hello and HELLO are pretty similar for our task)
```
questions = df_main[['review','rating','rank_bin']] #pd.read_csv("socialmedia_relevant_cols_clean.csv")
questions.columns=['text', 'rating', 'class_label']
questions.head()
questions.tail()
questions.describe()
```
Let's use a few regular expressions to clean up our data, and save it back to disk for future use.
```
def standardize_text(df, text_field):
df[text_field] = df[text_field].str.replace(r"http\S+", "")
df[text_field] = df[text_field].str.replace(r"http", "")
df[text_field] = df[text_field].str.replace(r"@\S+", "")
df[text_field] = df[text_field].str.replace(r"[^A-Za-z0-9(),!?@\'\`\"\_\n]", " ")
df[text_field] = df[text_field].str.replace(r"@", "at")
df[text_field] = df[text_field].str.lower()
return df
questions = standardize_text(questions, "text")
questions.to_csv("clean_data.csv")
questions.head()
clean_questions = pd.read_csv("clean_data.csv")
clean_questions.tail()
```
### Data Overview
Let's look at our class balance.
```
clean_questions.groupby("class_label").count()
```
We can see our classes are pretty balanced, with a slight oversampling of the "Irrelevant" class.
### Our data is clean, now it needs to be prepared
Now that our inputs are more reasonable, let's transform our inputs in a way our model can understand. This implies:
- Tokenizing sentences to a list of separate words
- Creating a train test split
- Inspecting our data a little more to validate results
```
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
clean_questions["tokens"] = clean_questions["text"].apply(tokenizer.tokenize)
df_main['review_clean']=clean_questions.text
df_main['tokens']=clean_questions.tokens
clean_questions.head()
```
### Inspecting our dataset a little more
```
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
all_words = [word for tokens in clean_questions["tokens"] for word in tokens]
df_main["sentence_length"] = [len(tokens) for tokens in clean_questions["tokens"]]
VOCAB = sorted(list(set(all_words)))
print("%s words total, with a vocabulary size of %s" % (len(all_words), len(VOCAB)))
print("Max sentence length is %s" % max(df_main["sentence_length"]))
print(df_main.loc[df_main['sentence_length'] == 1992].review.values)
_a = df_main.loc[df_main['sentence_length'] >= 1000].review.count()
_b = df_main.loc[df_main['sentence_length'] >= 750].review.count()
_c = df_main.loc[df_main['sentence_length'] >= 250].review.count()
_d = df_main.loc[df_main['sentence_length'] >= 175].review.count()
_e = df_main.loc[df_main['sentence_length'] >= 100].review.count()
_f = df_main.loc[df_main['sentence_length'] < 100].review.count()
print(" # of Reviews by Length \n %s >=1000 words \n %s >=750 words \n %s >=250 words \n %s >=175 words \n %s >=100 words \n %s <100 words\n" % (_a,_b,_c,_d,_e,_f))
df_short = df_main.loc[df_main['sentence_length'] <= 250]
df_short = df_short.sort_values(by='sentence_length', ascending=False)
print("Max sentence length is %s" % max(df_short["sentence_length"]))
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10, 10))
plt.xlabel('Sentence length')
plt.ylabel('Number of sentences')
plt.title('Length of Tokenized Sentences')
plt.hist(df_short["sentence_length"], bins=250)
plt.show()
a_ = 180
b_ = 175
c_ = 170
d_ = 168
e_ = 167
f_ = 166
g_ = 165
h_ = 165
_a = df_main.loc[df_main['sentence_length'] > a_].review.count()
_b = df_main.loc[df_main['sentence_length'] > b_].review.count()
_c = df_main.loc[df_main['sentence_length'] > c_].review.count()
_d = df_main.loc[df_main['sentence_length'] > d_].review.count()
_e = df_main.loc[df_main['sentence_length'] > e_].review.count()
_f = df_main.loc[df_main['sentence_length'] > f_].review.count()
_g = df_main.loc[df_main['sentence_length'] > g_].review.count()
_h = df_main.loc[df_main['sentence_length'] < h_].review.count()
print (" Cumulative # of Reviews by Length\n %s >%s words \n %s >%s words \n %s >%s words \n %s >%s words \n %s >%s words \n %s >%s words\n %s >%s words\n %s <%s words\n" % (_a,a_,_b,b_,_c,c_,_d,d_,_e,e_,_f,f_,_g,g_,_h,h_))
df_shorter = df_main.loc[df_main['sentence_length'] <= 180]
df_shorter = df_shorter.sort_values(by='sentence_length', ascending=False)
print("Max sentence length is %s" % max(df_shorter["sentence_length"]))
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10, 10))
plt.xlabel('sentences length')
plt.ylabel('number of sentences')
plt.title('tokenized sentences')
plt.hist(df_shorter["sentence_length"], bins=181)
plt.show()
```
## On to the Machine Learning
Now that our data is clean and prepared, let's dive in to the machine learning part.
## Enter embeddings
Machine Learning on images can use raw pixels as inputs. Fraud detection algorithms can use customer features. What can NLP use?
A natural way to represent text for computers is to encode each character individually, but this seems quite inadequate to represent and understand language. Our goal is to first create a useful embedding for each sentence (or tweet) in our dataset, and then use these embeddings to accurately predict the relevant category.
The simplest approach we can start with is to use a bag of words model, and apply a logistic regression on top. A bag of words just associates an index to each word in our vocabulary, and embeds each sentence as a list of 0s, with a 1 at each index corresponding to a word present in the sentence.
## Bag of Words Counts
```
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
def cv(data):
count_vectorizer = CountVectorizer()
emb = count_vectorizer.fit_transform(data)
return emb, count_vectorizer
list_corpus = clean_questions["text"].tolist()
list_labels = clean_questions["class_label"].tolist()
X_train, X_test, y_train, y_test = train_test_split(list_corpus, list_labels, test_size=0.2,
random_state=40)
X_train_counts, count_vectorizer = cv(X_train)
X_test_counts = count_vectorizer.transform(X_test)
```
### Visualizing the embeddings
Now that we've created embeddings, let's visualize them and see if we can identify some structure. In a perfect world, our embeddings would be so distinct that our two classes would be perfectly separated. Since visualizing data in 20k dimensions is hard, let's project it down to 2.
```
from sklearn.decomposition import PCA, TruncatedSVD
import matplotlib
import matplotlib.patches as mpatches
def plot_LSA(test_data, test_labels, savepath="PCA_demo.csv", plot=True):
lsa = TruncatedSVD(n_components=2)
lsa.fit(test_data)
lsa_scores = lsa.transform(test_data)
color_mapper = {label:idx for idx,label in enumerate(set(test_labels))}
color_column = [color_mapper[label] for label in test_labels]
colors = ['orange','blue','blue']
if plot:
plt.scatter(lsa_scores[:,0], lsa_scores[:,1], s=8, alpha=.8, c=test_labels, cmap=matplotlib.colors.ListedColormap(colors))
red_patch = mpatches.Patch(color='orange', label='Irrelevant')
green_patch = mpatches.Patch(color='blue', label='Disaster')
plt.legend(handles=[red_patch, green_patch], prop={'size': 30})
fig = plt.figure(figsize=(16, 16))
plot_LSA(X_train_counts, y_train)
plt.show()
```
These embeddings don't look very cleanly separated. Let's see if we can still fit a useful model on them.
### Fitting a classifier
Starting with a logistic regression is a good idea. It is simple, often gets the job done, and is easy to interpret.
```
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=30.0, class_weight='balanced', solver='newton-cg',
multi_class='multinomial', n_jobs=-1, random_state=40)
clf.fit(X_train_counts, y_train)
y_predicted_counts = clf.predict(X_test_counts)
```
### Evaluation
Let's start by looking at some metrics to see if our classifier performed well at all.
```
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, classification_report
def get_metrics(y_test, y_predicted):
# true positives / (true positives+false positives)
precision = precision_score(y_test, y_predicted, pos_label=None,
average='weighted')
# true positives / (true positives + false negatives)
recall = recall_score(y_test, y_predicted, pos_label=None,
average='weighted')
# harmonic mean of precision and recall
f1 = f1_score(y_test, y_predicted, pos_label=None, average='weighted')
# true positives + true negatives/ total
accuracy = accuracy_score(y_test, y_predicted)
return accuracy, precision, recall, f1
accuracy, precision, recall, f1 = get_metrics(y_test, y_predicted_counts)
print("accuracy = %.3f, precision = %.3f, recall = %.3f, f1 = %.3f" % (accuracy, precision, recall, f1))
```
### Inspection
A metric is one thing, but in order to make an actionable decision, we need to actually inspect the kind of mistakes our classifier is making. Let's start by looking at the confusion matrix.
```
import numpy as np
import itertools
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.winter):
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title, fontsize=30)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, fontsize=20)
plt.yticks(tick_marks, classes, fontsize=20)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center",
color="white" if cm[i, j] < thresh else "black", fontsize=40)
plt.tight_layout()
plt.ylabel('True label', fontsize=30)
plt.xlabel('Predicted label', fontsize=30)
return plt
cm = confusion_matrix(y_test, y_predicted_counts)
fig = plt.figure(figsize=(10, 10))
plot = plot_confusion_matrix(cm, classes=['Irrelevant','Disaster','Unsure'], normalize=False, title='Confusion matrix')
plt.show()
print(cm)
```
Our classifier never predicts class 3, which is not surprising, seeing as it is critically undersampled. This is not very important here, as the label is not very meaningful. Our classifier creates more false negatives than false positives (proportionally). Depending on the use case, this seems desirable (a false positive is quite a high cost for law enforcement for example).
### Further inspection
Let's look at the features our classifier is using to make decisions.
```
def get_most_important_features(vectorizer, model, n=5):
index_to_word = {v:k for k,v in vectorizer.vocabulary_.items()}
# loop for each class
classes ={}
for class_index in range(model.coef_.shape[0]):
word_importances = [(el, index_to_word[i]) for i,el in enumerate(model.coef_[class_index])]
sorted_coeff = sorted(word_importances, key = lambda x : x[0], reverse=True)
tops = sorted(sorted_coeff[:n], key = lambda x : x[0])
bottom = sorted_coeff[-n:]
classes[class_index] = {
'tops':tops,
'bottom':bottom
}
return classes
importance = get_most_important_features(count_vectorizer, clf, 10)
def plot_important_words(top_scores, top_words, bottom_scores, bottom_words, name):
y_pos = np.arange(len(top_words))
top_pairs = [(a,b) for a,b in zip(top_words, top_scores)]
top_pairs = sorted(top_pairs, key=lambda x: x[1])
bottom_pairs = [(a,b) for a,b in zip(bottom_words, bottom_scores)]
bottom_pairs = sorted(bottom_pairs, key=lambda x: x[1], reverse=True)
top_words = [a[0] for a in top_pairs]
top_scores = [a[1] for a in top_pairs]
bottom_words = [a[0] for a in bottom_pairs]
bottom_scores = [a[1] for a in bottom_pairs]
fig = plt.figure(figsize=(10, 10))
plt.subplot(121)
plt.barh(y_pos,bottom_scores, align='center', alpha=0.5)
plt.title('Irrelevant', fontsize=20)
plt.yticks(y_pos, bottom_words, fontsize=14)
plt.suptitle('Key words', fontsize=16)
plt.xlabel('Importance', fontsize=20)
plt.subplot(122)
plt.barh(y_pos,top_scores, align='center', alpha=0.5)
plt.title('Disaster', fontsize=20)
plt.yticks(y_pos, top_words, fontsize=14)
plt.suptitle(name, fontsize=16)
plt.xlabel('Importance', fontsize=20)
plt.subplots_adjust(wspace=0.8)
plt.show()
top_scores = [a[0] for a in importance[1]['tops']]
top_words = [a[1] for a in importance[1]['tops']]
bottom_scores = [a[0] for a in importance[1]['bottom']]
bottom_words = [a[1] for a in importance[1]['bottom']]
plot_important_words(top_scores, top_words, bottom_scores, bottom_words, "Most important words for relevance")
```
Our classifier correctly picks up on some patterns (hiroshima, massacre), but clearly seems to be overfitting on some irrelevant terms (heyoo, x1392).
### TFIDF Bag of Words
Let's try a slightly more subtle approach. On top of our bag of words model, we use a TF-IDF (Term Frequency, Inverse Document Frequency) which means weighing words by how frequent they are in our dataset, discounting words that are too frequent, as they just add to the noise.
```
def tfidf(data):
tfidf_vectorizer = TfidfVectorizer()
train = tfidf_vectorizer.fit_transform(data)
return train, tfidf_vectorizer
X_train_tfidf, tfidf_vectorizer = tfidf(X_train)
X_test_tfidf = tfidf_vectorizer.transform(X_test)
fig = plt.figure(figsize=(16, 16))
plot_LSA(X_train_tfidf, y_train)
plt.show()
```
These embeddings look much more separated, let's see if it leads to better performance.
```
clf_tfidf = LogisticRegression(C=30.0, class_weight='balanced', solver='newton-cg',
multi_class='multinomial', n_jobs=-1, random_state=40)
clf_tfidf.fit(X_train_tfidf, y_train)
y_predicted_tfidf = clf_tfidf.predict(X_test_tfidf)
accuracy_tfidf, precision_tfidf, recall_tfidf, f1_tfidf = get_metrics(y_test, y_predicted_tfidf)
print("accuracy = %.3f, precision = %.3f, recall = %.3f, f1 = %.3f" % (accuracy_tfidf, precision_tfidf,
recall_tfidf, f1_tfidf))
```
The results are a little better; let's see if they translate to an actual difference in our use case.
```
cm2 = confusion_matrix(y_test, y_predicted_tfidf)
fig = plt.figure(figsize=(10, 10))
plot = plot_confusion_matrix(cm2, classes=['Irrelevant','Disaster','Unsure'], normalize=False, title='Confusion matrix')
plt.show()
print("TFIDF confusion matrix")
print(cm2)
print("BoW confusion matrix")
print(cm)
```
Our false positives have decreased, as this model is more conservative about choosing the positive class.
# Looking at important coefficients for logistic regression
As before, let's inspect the words our TF-IDF model relies on most to separate the classes.
```
importance_tfidf = get_most_important_features(tfidf_vectorizer, clf_tfidf, 10)
top_scores = [a[0] for a in importance_tfidf[1]['tops']]
top_words = [a[1] for a in importance_tfidf[1]['tops']]
bottom_scores = [a[0] for a in importance_tfidf[1]['bottom']]
bottom_words = [a[1] for a in importance_tfidf[1]['bottom']]
plot_important_words(top_scores, top_words, bottom_scores, bottom_words, "Most important words for relevance")
```
The words it picked up look much more relevant! Although our metrics on our held out validation set haven't increased much, we have much more confidence in the terms our model is using, and thus would feel more comfortable deploying it in a system that would interact with customers.
### Capturing semantic meaning
Our first models have managed to pick up on high-signal words. However, it is unlikely that we will have a training set containing all relevant words. To solve this problem, we need to capture the semantic meaning of words: our model needs to understand that words like 'good' and 'positive' are closer to each other than 'apricot' and 'continent' are.
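In embedding space, "closer" usually means a higher cosine similarity. A minimal sketch with invented 3-dimensional vectors (real word embeddings have hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(u, v):
    # cos(theta) = (u . v) / (|u| * |v|)
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy embeddings, invented purely for illustration.
good = np.array([0.9, 0.8, 0.1])
positive = np.array([0.85, 0.75, 0.2])
apricot = np.array([0.1, 0.2, 0.9])

# Semantically related words should score higher than unrelated ones.
print(cosine_similarity(good, positive))  # close to 1
print(cosine_similarity(good, apricot))   # much lower
```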
### Enter word2vec
Word2vec is a model that was pre-trained on a very large corpus, and provides embeddings that map similar words close to each other. A quick way to get a sentence embedding for our classifier is to average the word2vec scores of all words in our sentence.
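The averaging idea can be sketched with a toy vector store; the dict and 3-dimensional vectors below are invented stand-ins for the real 300-dimensional word2vec model:

```python
import numpy as np

# Hypothetical 3-d "embeddings"; real word2vec vectors have k=300.
toy_vectors = {
    "fire": np.array([1.0, 0.0, 0.0]),
    "downtown": np.array([0.0, 1.0, 0.0]),
}

def average_embedding(tokens, vectors, k=3):
    # Out-of-vocabulary tokens contribute zero vectors,
    # mirroring the generate_missing=False behaviour.
    vecs = [vectors.get(tok, np.zeros(k)) for tok in tokens]
    return np.mean(vecs, axis=0) if vecs else np.zeros(k)

sentence = ["fire", "downtown", "heyoo"]  # "heyoo" is out of vocabulary
print(average_embedding(sentence, toy_vectors))
```

The out-of-vocabulary token pulls the average towards zero, which is the main weakness of this simple pooling strategy.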
```
downloaded = drive.CreateFile({'id': "0B7XkCwpI5KDYNlNUTTlSS21pQmM"})
downloaded.GetContentFile("GoogleNews-vectors-negative300.bin.gz")
import gensim
word2vec_path = "GoogleNews-vectors-negative300.bin.gz"
word2vec = gensim.models.KeyedVectors.load_word2vec_format(word2vec_path, binary=True)
def get_average_word2vec(tokens_list, vector, generate_missing=False, k=300):
if len(tokens_list)<1:
return np.zeros(k)
if generate_missing:
vectorized = [vector[word] if word in vector else np.random.rand(k) for word in tokens_list]
else:
vectorized = [vector[word] if word in vector else np.zeros(k) for word in tokens_list]
length = len(vectorized)
summed = np.sum(vectorized, axis=0)
averaged = np.divide(summed, length)
return averaged
def get_word2vec_embeddings(vectors, clean_questions, generate_missing=False):
embeddings = clean_questions['tokens'].apply(lambda x: get_average_word2vec(x, vectors,
generate_missing=generate_missing))
return list(embeddings)
embeddings = get_word2vec_embeddings(word2vec, clean_questions)
X_train_word2vec, X_test_word2vec, y_train_word2vec, y_test_word2vec = train_test_split(embeddings, list_labels,
test_size=0.2, random_state=40)
fig = plt.figure(figsize=(16, 16))
plot_LSA(embeddings, list_labels)
plt.show()
```
These look much more separated; let's see how our logistic regression does on them!
```
clf_w2v = LogisticRegression(C=30.0, class_weight='balanced', solver='newton-cg',
multi_class='multinomial', random_state=40)
clf_w2v.fit(X_train_word2vec, y_train_word2vec)
y_predicted_word2vec = clf_w2v.predict(X_test_word2vec)
accuracy_word2vec, precision_word2vec, recall_word2vec, f1_word2vec = get_metrics(y_test_word2vec, y_predicted_word2vec)
print("accuracy = %.3f, precision = %.3f, recall = %.3f, f1 = %.3f" % (accuracy_word2vec, precision_word2vec,
recall_word2vec, f1_word2vec))
```
Still getting better; let's plot the confusion matrix.
```
cm_w2v = confusion_matrix(y_test_word2vec, y_predicted_word2vec)
fig = plt.figure(figsize=(10, 10))
plot = plot_confusion_matrix(cm_w2v, classes=['Irrelevant','Disaster','Unsure'], normalize=False, title='Confusion matrix')
plt.show()
print("Word2Vec confusion matrix")
print(cm_w2v)
print("TFIDF confusion matrix")
print(cm2)
print("BoW confusion matrix")
print(cm)
```
Our model is strictly better in all regards than the first two models; this is promising!
### Further inspection
Since our model does not use a vector with one dimension per word, it gets much harder to directly see which words are most relevant to our classification. In order to provide some explainability, we can leverage a black box explainer such as LIME.
```
!pip install lime
from lime import lime_text
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer
X_train_data, X_test_data, y_train_data, y_test_data = train_test_split(list_corpus, list_labels, test_size=0.2,
random_state=40)
vector_store = word2vec
def word2vec_pipeline(examples):
global vector_store
tokenizer = RegexpTokenizer(r'\w+')
tokenized_list = []
for example in examples:
example_tokens = tokenizer.tokenize(example)
vectorized_example = get_average_word2vec(example_tokens, vector_store, generate_missing=False, k=300)
tokenized_list.append(vectorized_example)
return clf_w2v.predict_proba(tokenized_list)
c = make_pipeline(count_vectorizer, clf)
def explain_one_instance(instance, class_names):
explainer = LimeTextExplainer(class_names=class_names)
exp = explainer.explain_instance(instance, word2vec_pipeline, num_features=6)
return exp
def visualize_one_exp(features, labels, index, class_names = ["irrelevant","relevant", "unknown"]):
exp = explain_one_instance(features[index], class_names = class_names)
print('Index: %d' % index)
print('True class: %s' % class_names[labels[index]])
exp.show_in_notebook(text=True)
visualize_one_exp(X_test_data, y_test_data, 65)
visualize_one_exp(X_test_data, y_test_data, 60)
import random
from collections import defaultdict
random.seed(40)
def get_statistical_explanation(test_set, sample_size, word2vec_pipeline, label_dict):
sample_sentences = random.sample(test_set, sample_size)
explainer = LimeTextExplainer()
labels_to_sentences = defaultdict(list)
contributors = defaultdict(dict)
# First, find contributing words to each class
for sentence in sample_sentences:
probabilities = word2vec_pipeline([sentence])
curr_label = probabilities[0].argmax()
labels_to_sentences[curr_label].append(sentence)
exp = explainer.explain_instance(sentence, word2vec_pipeline, num_features=6, labels=[curr_label])
listed_explanation = exp.as_list(label=curr_label)
for word,contributing_weight in listed_explanation:
if word in contributors[curr_label]:
contributors[curr_label][word].append(contributing_weight)
else:
contributors[curr_label][word] = [contributing_weight]
# average each word's contribution to a class, and sort them by impact
average_contributions = {}
sorted_contributions = {}
for label,lexica in contributors.items():
curr_label = label
curr_lexica = lexica
average_contributions[curr_label] = pd.Series(index=curr_lexica.keys())
for word,scores in curr_lexica.items():
average_contributions[curr_label].loc[word] = np.sum(np.array(scores))/sample_size
detractors = average_contributions[curr_label].sort_values()
supporters = average_contributions[curr_label].sort_values(ascending=False)
sorted_contributions[label_dict[curr_label]] = {
'detractors':detractors,
'supporters': supporters
}
return sorted_contributions
label_to_text = {
0: 'Irrelevant',
1: 'Relevant',
2: 'Unsure'
}
sorted_contributions = get_statistical_explanation(X_test_data, 100, word2vec_pipeline, label_to_text)
# First key is the class name ('Relevant' here)
# Second key selects 'detractors' or 'supporters'
# The slice controls how many words we keep
top_words = sorted_contributions['Relevant']['supporters'][:10].index.tolist()
top_scores = sorted_contributions['Relevant']['supporters'][:10].tolist()
bottom_words = sorted_contributions['Relevant']['detractors'][:10].index.tolist()
bottom_scores = sorted_contributions['Relevant']['detractors'][:10].tolist()
plot_important_words(top_scores, top_words, bottom_scores, bottom_words, "Most important words for relevance")
```
Looks like very relevant words are picked up! This model definitely seems to make decisions in a very understandable way.
# Leveraging text structure
Our models have been performing better, but they completely ignore word order and sentence structure. To see whether capturing more of that structure helps, we will try a final, more complex model.
### CNNs for text classification
Here, we will be using a Convolutional Neural Network for sentence classification. While not as popular as RNNs, CNNs have been shown to achieve competitive results (sometimes beating the best models), and are very fast to train, which makes them a perfect choice for this tutorial.
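At its core, a 1D convolution slides a filter over consecutive word vectors, producing one activation per window. A minimal NumPy sketch with toy shapes (the model below uses 300-dimensional embeddings and 128 filters per size):

```python
import numpy as np

def conv1d_valid(sequence, kernel):
    # sequence: (seq_len, embedding_dim); kernel: (k, embedding_dim)
    k = kernel.shape[0]
    n_windows = sequence.shape[0] - k + 1
    # One dot product per window of k consecutive word vectors.
    return np.array([np.sum(sequence[i:i + k] * kernel)
                     for i in range(n_windows)])

rng = np.random.default_rng(0)
sentence = rng.standard_normal((7, 4))           # 7 words, 4-d embeddings
word_trigram_filter = rng.standard_normal((3, 4))  # detects one 3-gram pattern
activations = conv1d_valid(sentence, word_trigram_filter)
print(activations.shape)  # 7 - 3 + 1 = 5 windows
```

Each filter acts as an n-gram detector, which is why combining filter sizes 3, 4 and 5 (as the model below does) captures short phrases regardless of where they occur in the sentence.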
First, let's embed our text!
```
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
EMBEDDING_DIM = 300
MAX_SEQUENCE_LENGTH = 35
VOCAB_SIZE = len(VOCAB)
VALIDATION_SPLIT=.2
tokenizer = Tokenizer(num_words=VOCAB_SIZE)
tokenizer.fit_on_texts(clean_questions["text"].tolist())
sequences = tokenizer.texts_to_sequences(clean_questions["text"].tolist())
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
cnn_data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)
labels = to_categorical(np.asarray(clean_questions["class_label"]))
indices = np.arange(cnn_data.shape[0])
np.random.shuffle(indices)
cnn_data = cnn_data[indices]
labels = labels[indices]
num_validation_samples = int(VALIDATION_SPLIT * cnn_data.shape[0])
embedding_weights = np.zeros((len(word_index)+1, EMBEDDING_DIM))
for word,index in word_index.items():
embedding_weights[index,:] = word2vec[word] if word in word2vec else np.random.rand(EMBEDDING_DIM)
print(embedding_weights.shape)
```
Now, we will define a simple Convolutional Neural Network
```
from keras.layers import Dense, Input, Flatten, Dropout, concatenate
from keras.layers import Conv1D, MaxPooling1D, Embedding
from keras.layers import LSTM, Bidirectional
from keras.models import Model
def ConvNet(embeddings, max_sequence_length, num_words, embedding_dim, labels_index, trainable=False, extra_conv=True):
embedding_layer = Embedding(num_words,
embedding_dim,
weights=[embeddings],
input_length=max_sequence_length,
trainable=trainable)
sequence_input = Input(shape=(max_sequence_length,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
# Yoon Kim model (https://arxiv.org/abs/1408.5882)
convs = []
filter_sizes = [3,4,5]
for filter_size in filter_sizes:
l_conv = Conv1D(filters=128, kernel_size=filter_size, activation='relu')(embedded_sequences)
l_pool = MaxPooling1D(pool_size=3)(l_conv)
convs.append(l_pool)
l_merge = concatenate(convs, axis=1)
# add a 1D convnet with global maxpooling, instead of Yoon Kim model
conv = Conv1D(filters=128, kernel_size=3, activation='relu')(embedded_sequences)
pool = MaxPooling1D(pool_size=3)(conv)
if extra_conv==True:
x = Dropout(0.5)(l_merge)
else:
# Original Yoon Kim model
x = Dropout(0.5)(pool)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
#x = Dropout(0.5)(x)
preds = Dense(labels_index, activation='softmax')(x)
model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['acc'])
return model
```
Now let's train our Neural Network
```
x_train = cnn_data[:-num_validation_samples]
y_train = labels[:-num_validation_samples]
x_val = cnn_data[-num_validation_samples:]
y_val = labels[-num_validation_samples:]
model = ConvNet(embedding_weights, MAX_SEQUENCE_LENGTH, len(word_index)+1, EMBEDDING_DIM,
len(list(clean_questions["class_label"].unique())), False)
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=3, batch_size=128)
```
Our best model yet, at least on the surface. Exploring whether it is really performing as expected using the previous method is left to the reader.
# Takeaways
We now have a solid framework for organizing text data, and training classifiers while efficiently inspecting their results. While we've started to get some interesting results, we are far from having solved NLP!
# Thank you!
Feel free to follow me on [Twitter](https://twitter.com/EmmanuelAmeisen), find out more about Insight on [our website](https://insightdatascience.com), and check out our [blog](https://blog.insightdatascience.com) for more content like this.
# Deep Neural Network for Image Classification: Application
When you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course!
You will use the functions you implemented in the previous assignment to build a deep network, and apply it to cat vs non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation.
**After this assignment you will be able to:**
- Build and apply a deep neural network to supervised learning.
Let's get started!
## 1 - Packages
Let's first import all the packages that you will need during this assignment.
- [numpy](http://www.numpy.org) is the fundamental package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.
- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
- dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
```
import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v3 import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
```
## 2 - Dataset
You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform better!
**Problem Statement**: You are given a dataset ("data.h5") containing:
- a training set of m_train images labelled as cat (1) or non-cat (0)
- a test set of m_test images labelled as cat and non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).
Let's get more familiar with the dataset. Load the data by running the cell below.
```
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
```
The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
```
# Example of a picture
index = 10
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.")
# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]
print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))
```
As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.
<img src="images/imvectorkiank.png" style="width:450px;height:300px;">
<caption><center> <u>Figure 1</u>: Image to vector conversion. <br> </center></caption>
```
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.
print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))
```
$12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector.
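A quick sanity check on this arithmetic and on the reshape above, using dummy data:

```python
import numpy as np

m = 5                              # a handful of dummy images
images = np.zeros((m, 64, 64, 3))  # (m, num_px, num_px, 3)
flattened = images.reshape(images.shape[0], -1).T
print(flattened.shape)  # one column of 12288 features per example
assert 64 * 64 * 3 == 12288
```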
## 3 - Architecture of your model
Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.
You will build two different models:
- A 2-layer neural network
- An L-layer deep neural network
You will then compare the performance of these models, and also try out different values for $L$.
Let's look at the two architectures.
### 3.1 - 2-layer neural network
<img src="images/2layerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 2</u>: 2-layer neural network. <br> The model can be summarized as: ***INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT***. </center></caption>
<u>Detailed Architecture of figure 2</u>:
- The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$.
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.
- You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$.
- You then repeat the same process.
- You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias).
- Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat.
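As a worked numeric example of that final thresholding step (the three z-values are made up):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Final linear unit outputs for three hypothetical images.
z = np.array([-1.2, 0.3, 2.5])
a = sigmoid(z)                        # squashed into (0, 1)
predictions = (a > 0.5).astype(int)   # 1 = cat, 0 = non-cat
print(np.round(a, 3))  # roughly [0.231, 0.574, 0.924]
print(predictions)     # [0 1 1]
```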
### 3.2 - L-layer deep neural network
It is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation:
<img src="images/LlayerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 3</u>: L-layer neural network. <br> The model can be summarized as: ***[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID***</center></caption>
<u>Detailed Architecture of figure 3</u>:
- The input is a (64,64,3) image which is flattened to a vector of size (12288,1).
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit.
- Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.
- Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat.
### 3.3 - General methodology
As usual you will follow the Deep Learning methodology to build the model:
1. Initialize parameters / Define hyperparameters
2. Loop for num_iterations:
a. Forward propagation
b. Compute cost function
c. Backward propagation
d. Update parameters (using parameters, and grads from backprop)
3. Use trained parameters to predict labels
Let's now implement those two models!
## 4 - Two-layer neural network
**Question**: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: *LINEAR -> RELU -> LINEAR -> SIGMOID*. The functions you may need and their inputs are:
```python
def initialize_parameters(n_x, n_h, n_y):
...
return parameters
def linear_activation_forward(A_prev, W, b, activation):
...
return A, cache
def compute_cost(AL, Y):
...
return cost
def linear_activation_backward(dA, cache, activation):
...
return dA_prev, dW, db
def update_parameters(parameters, grads, learning_rate):
...
return parameters
```
```
### CONSTANTS DEFINING THE MODEL ####
n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
# GRADED FUNCTION: two_layer_model
def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
"""
Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (n_x, number of examples)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- dimensions of the layers (n_x, n_h, n_y)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- If set to True, this will print the cost every 100 iterations
Returns:
parameters -- a dictionary containing W1, W2, b1, and b2
"""
np.random.seed(1)
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
(n_x, n_h, n_y) = layers_dims
# Initialize parameters dictionary, by calling one of the functions you'd previously implemented
### START CODE HERE ### (≈ 1 line of code)
parameters = initialize_parameters(n_x, n_h, n_y)
### END CODE HERE ###
# Get W1, b1, W2 and b2 from the dictionary parameters.
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1, W2, b2". Output: "A1, cache1, A2, cache2".
### START CODE HERE ### (≈ 2 lines of code)
A1, cache1 = linear_activation_forward(X, W1, b1, activation = "relu")
A2, cache2 = linear_activation_forward(A1, W2, b2, activation = "sigmoid")
### END CODE HERE ###
# Compute cost
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(A2, Y)
### END CODE HERE ###
# Initializing backward propagation
dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
# Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
### START CODE HERE ### (≈ 2 lines of code)
dA1, dW2, db2 = linear_activation_backward(dA2, cache2, activation = "sigmoid")
dA0, dW1, db1 = linear_activation_backward(dA1, cache1, activation = "relu")
### END CODE HERE ###
# Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
grads['dW1'] = dW1
grads['db1'] = db1
grads['dW2'] = dW2
grads['db2'] = db2
# Update parameters.
### START CODE HERE ### (approx. 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
# Retrieve W1, b1, W2, b2 from parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Print the cost every 100 iterations
if print_cost and i % 100 == 0:
print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
```
Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
```
parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)
```
**Expected Output**:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.6930497356599888 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.6464320953428849 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.048554785628770206 </td>
</tr>
</table>
Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.
Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.
```
predictions_train = predict(train_x, train_y, parameters)
```
**Expected Output**:
<table>
<tr>
<td> **Accuracy**</td>
<td> 1.0 </td>
</tr>
</table>
```
predictions_test = predict(test_x, test_y, parameters)
```
**Expected Output**:
<table>
<tr>
<td> **Accuracy**</td>
<td> 0.72 </td>
</tr>
</table>
**Note**: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting.
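A minimal sketch of the early-stopping idea, using a made-up validation-cost curve (a real implementation would evaluate the cost on a held-out set every few iterations):

```python
def early_stopping_index(val_costs, patience=2):
    """Return the check index at which training would stop: when the
    validation cost has not improved for `patience` consecutive checks."""
    best, best_i, waited = float("inf"), 0, 0
    for i, cost in enumerate(val_costs):
        if cost < best:
            best, best_i, waited = cost, i, 0
        else:
            waited += 1
            if waited >= patience:
                return i  # stop here; keep the parameters from check best_i
    return len(val_costs) - 1

# Validation cost dips, then rises as the model starts to overfit.
costs = [0.70, 0.55, 0.48, 0.47, 0.49, 0.52, 0.56]
print(early_stopping_index(costs))  # stops at index 5; best was index 3
```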
Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model.
## 5 - L-layer Neural Network
**Question**: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: *[LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID*. The functions you may need and their inputs are:
```python
def initialize_parameters_deep(layers_dims):
...
return parameters
def L_model_forward(X, parameters):
...
return AL, caches
def compute_cost(AL, Y):
...
return cost
def L_model_backward(AL, Y, caches):
...
return grads
def update_parameters(parameters, grads, learning_rate):
...
return parameters
```
```
### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] # 4-layer model
# GRADED FUNCTION: L_layer_model
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009
"""
Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
Arguments:
X -- data, numpy array of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
learning_rate -- learning rate of the gradient descent update rule
num_iterations -- number of iterations of the optimization loop
print_cost -- if True, it prints the cost every 100 steps
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(1)
costs = [] # keep track of cost
# Parameters initialization. (≈ 1 line of code)
### START CODE HERE ###
parameters = initialize_parameters_deep(layers_dims)
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
### START CODE HERE ### (≈ 1 line of code)
AL, caches = L_model_forward(X, parameters)
### END CODE HERE ###
# Compute cost.
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(AL, Y)
### END CODE HERE ###
# Backward propagation.
### START CODE HERE ### (≈ 1 line of code)
grads = L_model_backward(AL, Y, caches)
### END CODE HERE ###
# Update parameters.
### START CODE HERE ### (≈ 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
# Print the cost every 100 iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
```
You will now train the model as a 4-layer neural network.
Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
```
parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
```
**Expected Output**:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.771749 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.672053 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.092878 </td>
</tr>
</table>
```
pred_train = predict(train_x, train_y, parameters)
```
<table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.985645933014
</td>
</tr>
</table>
```
pred_test = predict(test_x, test_y, parameters)
```
**Expected Output**:
<table>
<tr>
<td> **Test Accuracy**</td>
<td> 0.8 </td>
</tr>
</table>
Congrats! It seems that your 4-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set.
This is good performance for this task. Nice job!
Though in the next course on "Improving deep neural networks" you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you'll also learn in the next course).
## 6) Results Analysis
First, let's take a look at a few test images the L-layer model labeled incorrectly.
```
print_mislabeled_images(classes, test_x, test_y, pred_test)
```
**A few types of images the model tends to do poorly on include:**
- Cat body in an unusual position
- Cat appears against a background of a similar color
- Unusual cat color and species
- Camera Angle
- Brightness of the picture
- Scale variation (cat is very large or small in image)
## 7) Test with your own image (optional/ungraded exercise) ##
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
```
## START CODE HERE ##
my_image = "my_image.jpg" # change this to the name of your image file
my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))
my_image = my_image/255.
my_predicted_image = predict(my_image, my_label_y, parameters)
plt.imshow(image)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
```
**References**:
- for auto-reloading external module: http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
```
%load_ext autoreload
%autoreload 2
import os
import sys
import pandas as pd
sys.path.append('..')
from data.utils import tables
from data import cro_dataset
df_master.groupby("year").count()
```
# Initial Data descriptive table - All reports
```
df_master = pd.read_csv("/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Data/stoxx_inference/Firm_AnnualReport.csv")
# TODO: Detail, but report here the original dataset or the one that could be labelled/inferred? There are 1-2 reports with issues...
# df_reports_count = df_master.groupby('year')['is_inferred'].sum()
no_of_reports = len(df_master)
df = df_master.groupby("firm_name").first()
df = df[['country','icb_industry']]
df['years'] = df_master.groupby('firm_name')['year'].apply(list).apply(lambda l: f"{min(l)} - {max(l)}")
df['country'] = df['country'].str.upper()
df['icb_industry'] = df['icb_industry'].str.slice(start=3)
df.rename(columns={"country": "Country", "icb_industry": "Industry", "years": "Years"}, inplace=True)
# Add a total column at the end
total_row = df.nunique()
total_row["Years"] = no_of_reports
df.loc[f"Unique firms: {len(df)}"] = total_row
tables.export_to_latex(df, filename="stoxx50_firms.tex", index_names=False, add_midrule_at=1, make_bold_row_at=[2, -3])
```
# Pages
```
import os
import yaml
data_dir = "/Users/david/Projects/fin-disclosures-nlp/input_files/annual_reports"
def get_total_page_no(row):
path = os.path.join(data_dir, f"{row.country}_{row.company}", f"{row.orig_report_type}_{row.year}", row.output_file)
with open(path, 'r') as stream:
content = yaml.safe_load(stream)
return len(content['pages'])
df_master["total_page_no"] = df_master.apply(lambda x: get_total_page_no(x), axis=1)
df_master["total_page_no"].mean()
df_master
```
# Keyword list
```
import pandas as pd
from data import dataframe_preparation
from data.utils import tables
pd.set_option('display.max_colwidth',1000)
vocabulary = dataframe_preparation.get_keywords_from_file("/Users/david/Projects/fin-disclosures-nlp/data/keyword_vocabulary.txt")
unigrams = [word for word in vocabulary if len(word.split(" ")) < 2]
bigrams = [word for word in vocabulary if len(word.split(" ")) > 1]
df_keyword_a = pd.DataFrame(data={"Unigrams": [", ".join(unigrams)]})
df_keyword_b = pd.DataFrame(data={"Bigrams": [", ".join(bigrams)]})
df_keyword = pd.concat([df_keyword_a, df_keyword_b], axis=1, sort=False)
tables.export_to_latex(df_keyword, filename="filter_vocabulary.tex")
print(len(unigrams) + len(bigrams))
```
# Labels
```
data_dir = os.path.join("/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Labelling/annual reports/")
df_train = pd.read_pickle(os.path.join(data_dir, 'Firm_AnnualReport_Labels_Training.pkl'))
df_train.query("cro != 'OP'").groupby("cro_sub_type_combined").count()
t_d, t_l, tt_d, tt_l = cro_dataset.prepare_datasets(task="multi-label", cro_category_level="cro", should_filter_op=True, train_neg_sampling_strategy="all", test_neg_sampling_strategy="all")
t_l.sum()
```
## Status
We now have 933 samples, with additional samples from about 15 of the 100 reports to be added as soon as possible, so more than 1,000 samples are likely.
However, the categories both on main CRO level as well as the sub-categories are unbalanced.
In comparison to the other main categories, physical risks (PR) are often **indirect** (about 55%), i.e. there is no explicit mention of climate change but rather a consequence of it. Often, these examples are also labelled as *vague*, i.e. more of the boilerplate type.
```
fig, axes = plt.subplots(nrows=1, ncols=2)
groups.plot(ax=axes[0], kind='bar')
sub_groups.plot(ax=axes[1], kind='bar')
```
## Experimental setup
For the high-level experimental setup, there are multiple options:
- Step-by-Step: First classify whether a passage is CRO-relevant or not, then in a second step classify the main category, then the sub-category...
- End-To-End: Directly try to infer category
--> I would propose for now that we attempt a step-by-step approach and actually start with the second task of classifying the main CRO categories, to get an idea of how well the different methods perform. This would also have the benefit that obtaining negative samples is relatively straightforward, i.e. just the opposite classes.
As document inputs, I would propose that we use paragraphs for now, with a sentence-level approach as a comparison later. *Challenge: How would the labels be assigned for sentences?*
As evaluation metrics, F1-Scores (and Accuracy) are probably most useful.
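Since the classes are unbalanced, a minimal sketch of how the F1-score combines precision and recall may be useful (plain Python, no particular metrics library assumed):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, computed from
    true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# A classifier with precision 0.8 and recall ~0.67:
print(round(f1_score(8, 2, 4), 3))  # → 0.727
```

Because F1 takes the harmonic mean, a model that simply predicts the majority class scores poorly, which is exactly why it is preferable to plain accuracy here.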
### Preprocessing
Depending on the downstream classification approach, preprocessing such as stop-word removal and stemming/lemmatization is required. *Challenge: How can we make sure that the tense of a sentence is not removed in this step?*
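To illustrate the tense challenge, a hand-rolled sketch (the stop-word list here is hypothetical, not from any library): auxiliary verbs carry the tense, and a naive stop-word filter collapses past and future statements into identical token lists:

```python
STOP_WORDS = {"the", "a", "is", "was", "will", "be", "has", "have", "had"}

def naive_preprocess(text):
    # Lowercase, split on whitespace, drop stop words.
    return [tok for tok in text.lower().split() if tok not in STOP_WORDS]

# Both sentences lose their tense markers and become indistinguishable:
print(naive_preprocess("The flood was a risk"))      # → ['flood', 'risk']
print(naive_preprocess("The flood will be a risk"))  # → ['flood', 'risk']
```

One option would be to whitelist auxiliary verbs, or to skip stop-word removal entirely for approaches (like the transformer models below) that do not need it.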
### Approaches
A set of approaches is selected to compare performance and to see whether the "heavy-weights" are actually necessary.
Embedding options:
- BoW
- TF-IDF
- Word2Vec
- Doc2Vec
- Contextualized Word Embeddings (from BERT for example, i.e. only use the word embeddings from these models) without transfer learning, i.e. then select a classifier from below
Classifiers:
- Logistic regression
- SVM
- Random Forest
- XGBoost
- Neural Net
And then the state of the art NLU models such as BERT/Roberta/...
```
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
sys.path.append('..')
import data
from data.labels_postprocessing import process
df = pd.read_pickle("/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Labelling/annual reports/Firm_AnnualReport_Labels_100_combined.pkl")
df = process(df)
groups = df.groupby(['cro']).size()
print(groups)
groups.plot(kind='barh')
sub_groups = df.groupby(['cro','cro_sub_type_combined']).size()
print(sub_groups)
sub_groups.plot(kind='barh')
indirects = df.groupby(['cro','indirect']).size()
print(indirects)
indirects.plot(kind='barh')
vague = df.groupby(['cro','vague']).size()
print(vague)
vague.plot(kind='barh')
past = df.groupby(['cro','past']).size()
print(past)
past.plot(kind='barh')
len(df.span_id.unique())
# spans = df[df.duplicated(subset=['report_id','span_id'], keep=False)]
# spans
df.keyword.count()
```
# Ray RLlib - Introduction to Reinforcement Learning
© 2019-2021, Anyscale. All Rights Reserved

_Reinforcement Learning_ is the category of machine learning that focuses on training one or more _agents_ to achieve maximal _rewards_ while operating in an environment. This lesson discusses the core concepts of RL, while subsequent lessons explore RLlib in depth. We'll use two examples with exercises to give you a taste of RL. If you already understand RL concepts, you can either skim this lesson or skip to the [next lesson](02-Introduction-to-RLlib.ipynb).
## What Is Reinforcement Learning?
Let's explore the basic concepts of RL, specifically the _Markov Decision Process_ abstraction, and show its use in Python.
Consider the following image:

In RL, one or more **agents** interact with an **environment** to maximize a **reward**. The agents make **observations** about the **state** of the environment and take **actions** that are believed to maximize the long-term reward. However, at any particular moment, the agents can only observe the immediate reward. So, the training process usually involves lots and lots of replay of the game, the robot simulator traversing a virtual space, etc., so the agents can learn from repeated trials what decisions/actions work best to maximize the long-term, cumulative reward.
The trial-and-error search and delayed reward are the distinguishing characteristics of RL vs. other ML methods ([Sutton 2018](06-RL-References.ipynb#Books)).
The way to formalize trial and error is the **exploitation vs. exploration tradeoff**. When an agent finds what appears to be a "rewarding" sequence of actions, the agent may naturally want to continue to **exploit** these actions. However, even better actions may exist. An agent won't know whether alternatives are better or not unless some percentage of actions taken **explore** the alternatives. So, all RL algorithms include a strategy for exploitation and exploration.
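A minimal epsilon-greedy sketch of this tradeoff (a hypothetical helper, not part of RLlib's API): with probability epsilon the agent explores a random action, otherwise it exploits the best-known one:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """Pick a random action with probability epsilon (explore),
    otherwise the highest-valued action (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# With epsilon=0 the agent always exploits the best-known action:
print(epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.0))  # → 1
```

Many RL algorithms use some variant of this idea, often decaying epsilon over time so that early training explores broadly and later training exploits what has been learned.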
## RL Applications
RL has many potential applications. RL became "famous" due to successes such as achieving expert game play, training robots, controlling autonomous vehicles, and directing other simulated agents:






Credits:
* [AlphaGo](https://www.youtube.com/watch?v=l7ngy56GY6k)
* [Breakout](https://towardsdatascience.com/tutorial-double-deep-q-learning-with-dueling-network-architectures-4c1b3fb7f756) ([paper](https://arxiv.org/abs/1312.5602))
* [Stacking Legos with Sawyer](https://robohub.org/soft-actor-critic-deep-reinforcement-learning-with-real-world-robots/)
* [Walking Man](https://openai.com/blog/openai-baselines-ppo/)
* [Autonomous Vehicle](https://www.daimler.com/innovation/case/autonomous/intelligent-drive-2.html)
* ["Cassie": Two-legged Robot](https://mime.oregonstate.edu/research/drl/robots/cassie/) (Uses Ray!)
Recently, other industry applications have emerged, including the following:
* **Process optimization:** industrial processes (factories, pipelines) and other business processes, routing problems, cluster optimization.
* **Ad serving and recommendations:** Some of the traditional methods, including _collaborative filtering_, are hard to scale for very large data sets. RL systems are being developed to do an effective job more efficiently than traditional methods.
* **Finance:** Markets are time-oriented _environments_ where automated trading systems are the _agents_.
## Markov Decision Processes
At its core, Reinforcement learning builds on the concepts of [Markov Decision Process (MDP)](https://en.wikipedia.org/wiki/Markov_decision_process), where the current state, the possible actions that can be taken, and overall goal are the building blocks.
An MDP models sequential interactions with an external environment. It consists of the following:
- a **state space** where the current state of the system is sometimes called the **context**.
- a set of **actions** that can be taken at a particular state $s$ (or sometimes the same set for all states).
- a **transition function** that describes the probability of being in a state $s'$ at time $t+1$ given that the MDP was in state $s$ at time $t$ and action $a$ was taken. The next state is selected stochastically based on these probabilities.
- a **reward function**, which determines the reward received at time $t$ following action $a$, based on the decision of **policy** $\pi$.
The goal of MDP is to develop a **policy** $\pi$ that specifies what action $a$ should be chosen for a given state $s$ so that the cumulative reward is maximized. When it is possible for the policy "trainer" to fully observe all the possible states, actions, and rewards, it can define a deterministic policy, fixing a single action choice for each state. In this scenario, the transition probabilities reduce to the probability of transitioning to state $s'$ given the current state is $s$, independent of actions, because the state now leads to a deterministic action choice. Various algorithms can be used to compute this policy.
Put another way, if the policy isn't deterministic, then the transition probability to state $s'$ at a time $t+1$ when action $a$ is taken for state $s$ at time $t$, is given by:
\begin{equation}
P_a(s',s) = P(s_{t+1} = s'|s_t=s,a)
\end{equation}
When the policy is deterministic, this transition probability reduces to the following, independent of $a$:
\begin{equation}
P(s',s) = P(s_{t+1} = s'|s_t=s)
\end{equation}
To be clear, a deterministic policy means that one and only one action will always be selected for a given state $s$, but the next state $s'$ will still be selected stochastically.
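A tiny sketch of that distinction (toy states and probabilities, invented for illustration): the policy maps each state to exactly one action, while the next state is still sampled from the transition probabilities:

```python
import random

# Deterministic policy: exactly one action per state.
policy = {"s0": "a1", "s1": "a0"}

# Transition function: P(s' | s, a) as (next_state, probability) pairs.
transitions = {
    ("s0", "a1"): [("s0", 0.3), ("s1", 0.7)],
    ("s1", "a0"): [("s0", 1.0)],
}

def step(state):
    action = policy[state]  # deterministic action choice
    next_states, probs = zip(*transitions[(state, action)])
    return random.choices(next_states, weights=probs)[0]  # stochastic next state

print(step("s0"))  # either 's0' or 's1', sampled with probabilities 0.3 / 0.7
```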
In the general case of RL, it isn't possible to fully know all this information, some of which might be hidden and evolving, so it isn't possible to specify a fully-deterministic policy.
Often this cumulative reward is computed using the **discounted sum** over all rewards observed:
\begin{equation}
\arg\max_{\pi} \sum_{t=1}^T \gamma^t R_t(\pi),
\end{equation}
where $T$ is the number of steps taken in the MDP (this is a random variable and may depend on $\pi$), $R_t$ is the reward received at time $t$ (also a random variable which depends on $\pi$), and $\gamma$ is the **discount factor**. The value of $\gamma$ is between 0 and 1, meaning it has the effect of "discounting" earlier rewards vs. more recent rewards.
The [Wikipedia page on MDP](https://en.wikipedia.org/wiki/Markov_decision_process) provides more details. Note what we said in the third bullet, that the new state only depends on the previous state and the action taken. The assumption is that we can simplify our effort by ignoring all the previous states except the last one and still achieve good results. This is known as the [Markov property](https://en.wikipedia.org/wiki/Markov_property). This assumption often works well and it greatly reduces the resources required.
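The discounted sum above can be sketched directly (indexing rewards from t = 0 here for simplicity):

```python
def discounted_return(rewards, gamma=0.99):
    """Cumulative reward with each step's reward discounted by gamma^t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# With gamma = 0.5, three unit rewards are worth 1 + 0.5 + 0.25:
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # → 1.75
```

Setting `gamma` close to 1 values distant rewards almost as much as immediate ones; smaller values make the agent increasingly short-sighted.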
## The Elements of RL
Here are the elements of RL that expand on MDP concepts (see [Sutton 2018](https://mitpress.mit.edu/books/reinforcement-learning-second-edition) for more details):
#### Policies
Unlike MDP, the **transition function** probabilities are often not known in advance, but must be learned. Learning is done through repeated "play", where the agent interacts with the environment.
This makes the **policy** $\pi$ harder to determine. Because the full state space usually can't be known, the choice of action $a$ for a given state $s$ almost always remains a stochastic choice, never deterministic, unlike MDP.
#### Reward Signal
The idea of a **reward signal** encapsulates the desired goal for the system and provides feedback for updating the policy based on how well particular events or actions contribute rewards towards the goal.
#### Value Function
The **value function** encapsulates the maximum cumulative reward likely to be achieved starting from a given state for an **episode**. This is harder to determine than the simple reward returned after taking an action. In fact, much of the research in RL over the decades has focused on finding better and more efficient implementations of value functions. To illustrate the challenge, repeatedly taking one sequence of actions may yield low rewards for a while, but eventually provide large rewards. Conversely, always choosing a different sequence of actions may yield a good reward at each step, but be suboptimal for the cumulative reward.
#### Episode
A sequence of steps by the agent starting in an initial state. At each step, the agent observes the current state, chooses the next action, and receives the new reward. Episodes are used for both training policies and replaying with an existing policy (called _rollout_).
#### Model
An optional feature, some RL algorithms develop or use a **model** of the environment to anticipate the resulting states and rewards for future actions. Hence, they are useful for _planning_ scenarios. Methods for solving RL problems that use models are called _model-based methods_, while methods that learn by trial and error are called _model-free methods_.
## Reinforcement Learning Example
To finish this introduction, let's learn about the popular "hello world" (1) example environment for RL, balancing a pole vertically on a moving cart, called `CartPole`. Then we'll see how to use RLlib to train a policy using a popular RL algorithm, _Proximal Policy Optimization_, again using `CartPole`.
(1) In books and tutorials on programming languages, it is a tradition that the very first program shown prints the message "Hello World!".
### CartPole and OpenAI
The popular [OpenAI "gym" environment](https://gym.openai.com/) provides MDP interfaces to a variety of simulated environments. Perhaps the most popular for learning RL is `CartPole`, a simple environment that simulates the physics of balancing a pole on a moving cart. The `CartPole` problem is described at https://gym.openai.com/envs/CartPole-v1. Here is an image from that website, where the pole is currently falling to the right, which means the cart will need to move to the right to restore balance:

This example fits into the MDP framework as follows:
- The **state** consists of the position and velocity of the cart (moving in one dimension from left to right) as well as the angle and angular velocity of the pole that is balancing on the cart.
- The **actions** are to decrease or increase the cart's velocity by one unit. A negative velocity means it is moving to the left.
- The **transition function** is deterministic and is determined by simulating physical laws. Specifically, for a given **state**, what should we choose as the next velocity value? In the RL context, the correct velocity value to choose has to be learned. Hence, we learn a _policy_ that approximates the optimal transition function that could be calculated from the laws of physics.
- The **reward function** is a constant 1 as long as the pole is upright, and 0 once the pole has fallen over. Therefore, maximizing the reward means balancing the pole for as long as possible.
- The **discount factor** in this case can be taken to be 1, meaning we treat the rewards at all time steps equally and don't discount any of them.
More information about the `gym` Python module is available at https://gym.openai.com/. The list of all the available Gym environments is in [this wiki page](https://github.com/openai/gym/wiki/Table-of-environments). We'll use a few more of them and even create our own in subsequent lessons.
```
import gym
import numpy as np
import pandas as pd
import json
```
The code below illustrates how to create and manipulate MDPs in Python. An MDP can be created by calling `gym.make`. Gym environments are identified by names like `CartPole-v1`. A **catalog of built-in environments** can be found at https://gym.openai.com/envs.
```
env = gym.make("CartPole-v1")
print("Created env:", env)
```
Reset the state of the MDP by calling `env.reset()`. This call returns the initial state of the MDP.
```
state = env.reset()
print("The starting state is:", state)
```
Recall that the state is the position of the cart, its velocity, the angle of the pole, and the angular velocity of the pole.
The `env.step` method takes an action. In the case of the `CartPole` environment, the appropriate actions are 0 or 1, for pushing the cart to the left or right, respectively. `env.step()` returns a tuple of four things:
1. the new state of the environment
2. a reward
3. a boolean indicating whether the simulation has finished
4. a dictionary of miscellaneous extra information
Let's show what happens if we take one step with an action of 0.
```
action = 0
state, reward, done, info = env.step(action)
print(state, reward, done, info)
```
A **rollout** is a simulation of a policy in an environment. It is used both during training and when running simulations with a trained policy.
The code below performs a rollout in a given environment. It takes **random actions** until the simulation has finished and returns the cumulative reward.
```
def random_rollout(env):
state = env.reset()
done = False
cumulative_reward = 0
# Keep looping as long as the simulation has not finished.
while not done:
# Choose a random action (either 0 or 1).
action = np.random.choice([0, 1])
# Take the action in the environment.
state, reward, done, _ = env.step(action)
# Update the cumulative reward.
cumulative_reward += reward
# Return the cumulative reward.
return cumulative_reward
```
Try rerunning the following cell a few times. How much do the answers change? Note that the maximum possible reward for `CartPole-v1` is 500. You'll probably get numbers well under 500.
```
reward = random_rollout(env)
print(reward)
reward = random_rollout(env)
print(reward)
```
### Exercise 1
Choosing actions at random in `random_rollout` is not a very effective policy, as the previous results showed. Finish implementing the `rollout_policy` function below, which takes an environment *and* a policy. Recall that the *policy* is a function that takes in a *state* and returns an *action*. The main difference is that instead of choosing a **random action**, like we just did (with poor results), the action should be chosen **with the policy** (as a function of the state).
> **Note:** Exercise solutions for this tutorial can be found [here](solutions/Ray-RLlib-Solutions.ipynb).
```
def rollout_policy(env, policy):
state = env.reset()
done = False
cumulative_reward = 0
# EXERCISE: Fill out this function by copying the appropriate part of 'random_rollout'
# and modifying it to choose the action using the policy.
raise NotImplementedError
# Return the cumulative reward.
return cumulative_reward
def sample_policy1(state):
return 0 if state[0] < 0 else 1
def sample_policy2(state):
return 1 if state[0] < 0 else 0
reward1 = np.mean([rollout_policy(env, sample_policy1) for _ in range(100)])
reward2 = np.mean([rollout_policy(env, sample_policy2) for _ in range(100)])
print('The first sample policy got an average reward of {}.'.format(reward1))
print('The second sample policy got an average reward of {}.'.format(reward2))
assert 5 < reward1 < 15, ('Make sure that rollout_policy computes the action '
'by applying the policy to the state.')
assert 25 < reward2 < 35, ('Make sure that rollout_policy computes the action '
'by applying the policy to the state.')
```
We'll return to `CartPole` in lesson [01: Application Cart Pole](explore-rllib/01-Application-Cart-Pole.ipynb) in the `explore-rllib` section.
### RLlib Reinforcement Learning Example: Cart Pole with Proximal Policy Optimization
This section demonstrates how to use the _proximal policy optimization_ (PPO) algorithm implemented by [RLlib](http://rllib.io). PPO is a popular way to develop a policy. RLlib also uses [Ray Tune](http://tune.io), the Ray Hyperparameter Tuning framework, which is covered in the [Ray Tune Tutorial](../ray-tune/00-Ray-Tune-Overview.ipynb).
We'll provide relatively little explanation of **RLlib** concepts for now, but explore them in greater depth in subsequent lessons. For more on RLlib, see the documentation at http://rllib.io.
PPO is described in detail in [this paper](https://arxiv.org/abs/1707.06347). It is a variant of _Trust Region Policy Optimization_ (TRPO) described in [this earlier paper](https://arxiv.org/abs/1502.05477). [This OpenAI post](https://openai.com/blog/openai-baselines-ppo/) provides a more accessible introduction to PPO.
PPO works in two phases. In the first phase, a large number of rollouts are performed in parallel. The rollouts are then aggregated on the driver and a surrogate optimization objective is defined based on those rollouts. In the second phase, we use SGD (_stochastic gradient descent_) to find the policy that maximizes that objective with a penalty term for diverging too much from the current policy.

> **NOTE:** The SGD optimization step is best performed in a data-parallel manner over multiple GPUs. This is exposed through the `num_gpus` field of the `config` dictionary. Hence, for normal usage, one or more GPUs is recommended.
(The original version of this example can be found [here](https://raw.githubusercontent.com/ucbrise/risecamp/risecamp2018/ray/tutorial/rllib_exercises/)).
```
import ray
from ray.rllib.agents.ppo import PPOTrainer, DEFAULT_CONFIG
from ray.tune.logger import pretty_print
```
Initialize Ray. If you are running these tutorials on your laptop, then a single-node Ray cluster will be started by the next cell. If you are running in the Anyscale platform, it will connect to the running Ray cluster.
```
info = ray.init(ignore_reinit_error=True, log_to_driver=False)
print(info)
```
> **Tip:** Having trouble starting Ray? See the [Troubleshooting](../reference/Troubleshooting-Tips-Tricks.ipynb) tips.
The next cell prints the URL for the Ray Dashboard. **This is only correct if you are running this tutorial on a laptop.** Click the link to open the dashboard.
If you are running on the Anyscale platform, use the URL provided by your instructor to open the Dashboard.
```
print("Dashboard URL: http://{}".format(info["webui_url"]))
```
Instantiate a PPOTrainer object. We pass in a config object that specifies how the network and training procedure should be configured. Some of the parameters are the following.
- `num_workers` is the number of actors that the agent will create. This determines the degree of parallelism that will be used. In a cluster, these actors will be spread over the available nodes.
- `num_sgd_iter` is the number of epochs of SGD (stochastic gradient descent, i.e., passes through the data) that will be used to optimize the PPO surrogate objective at each iteration of PPO, for each _minibatch_ ("chunk") of training data. Using minibatches is more efficient than training with one record at a time.
- `sgd_minibatch_size` is the SGD minibatch size (batches of data) that will be used to optimize the PPO surrogate objective.
- `model` contains a dictionary of parameters describing the neural net used to parameterize the policy. The `fcnet_hiddens` parameter is a list of the sizes of the hidden layers. Here, we have two hidden layers of size 100, each.
- `num_cpus_per_worker` when set to 0 prevents Ray from pinning a CPU core to each worker, which avoids running out of workers in a constrained environment like a laptop or a cloud VM.
```
config = DEFAULT_CONFIG.copy()
config['num_workers'] = 1
config['num_sgd_iter'] = 30
config['sgd_minibatch_size'] = 128
config['model']['fcnet_hiddens'] = [100, 100]
config['num_cpus_per_worker'] = 0
agent = PPOTrainer(config, 'CartPole-v1')
```
Now let's train the policy on the `CartPole-v1` environment for `N` steps. The JSON object returned by each call to `agent.train()` contains a lot of information we'll inspect below. For now, we'll extract information we'll graph, such as `episode_reward_mean`. The _mean_ values are more useful for determining successful training.
```
N = 10
results = []
episode_data = []
episode_json = []
for n in range(N):
result = agent.train()
results.append(result)
episode = {'n': n,
'episode_reward_min': result['episode_reward_min'],
'episode_reward_mean': result['episode_reward_mean'],
'episode_reward_max': result['episode_reward_max'],
'episode_len_mean': result['episode_len_mean']}
episode_data.append(episode)
episode_json.append(json.dumps(episode))
print(f'{n:3d}: Min/Mean/Max reward: {result["episode_reward_min"]:8.4f}/{result["episode_reward_mean"]:8.4f}/{result["episode_reward_max"]:8.4f}')
```
Now let's convert the episode data to a Pandas `DataFrame` for easy manipulation. The results indicate how much reward the policy is receiving (`episode_reward_*`) and how many time steps of the environment the policy ran (`episode_len_mean`). The maximum possible reward for this problem is `500`. The reward mean and trajectory length are very close because the agent receives a reward of one for every time step that it survives. However, this is specific to this environment and not true in general.
```
df = pd.DataFrame(data=episode_data)
df
df.columns.tolist()
```
Let's plot the data. Since the length and reward means are equal, we'll only plot one line:
```
df.plot(x="n", y=["episode_reward_mean", "episode_reward_min", "episode_reward_max"], secondary_y=True)
```
The model is quickly able to hit the maximum value of 500, but the mean is what's most valuable. After 10 steps, we're more than halfway there.
FYI, here are two views of the full result object for one training iteration. First, a "pretty print" output.
> **Tip:** The output will be long. When this happens for a cell, right click and select _Enable scrolling for outputs_.
```
print(pretty_print(results[-1]))
```
We'll learn about more of these values as we continue the tutorial.
The whole, long JSON blob, which includes the historical stats about episode rewards and lengths:
```
results[-1]
```
Let's plot the `episode_reward` values:
```
episode_rewards = results[-1]['hist_stats']['episode_reward']
df_episode_rewards = pd.DataFrame(data={'episode':range(len(episode_rewards)), 'reward':episode_rewards})
df_episode_rewards.plot(x="episode", y="reward")
```
For a well-trained model, most runs do very well while occasional runs do poorly. Try plotting other results episodes by changing the array index in `results[-1]` to another number between `0` and `9`. (The length of `results` is `10`.)
### Exercise 2
The current network and training configuration are too large and heavy-duty for a simple problem like `CartPole`. Modify the configuration to use a smaller network (the `config['model']['fcnet_hiddens']` setting) and to speed up the optimization of the surrogate objective. (Fewer SGD iterations and a larger batch size should help.)
```
# Make edits here:
config = DEFAULT_CONFIG.copy()
config['num_workers'] = 3
config['num_sgd_iter'] = 30
config['sgd_minibatch_size'] = 128
config['model']['fcnet_hiddens'] = [100, 100]
config['num_cpus_per_worker'] = 0
agent = PPOTrainer(config, 'CartPole-v1')
```
Train the agent and try to get a reward of 500. If it's training too slowly you may need to modify the config above to use fewer hidden units, a larger `sgd_minibatch_size`, a smaller `num_sgd_iter`, or a larger `num_workers`.
This should take around `N` = 20 or 30 training iterations.
```
N = 5
results = []
episode_data = []
episode_json = []
for n in range(N):
result = agent.train()
results.append(result)
episode = {'n': n,
'episode_reward_mean': result['episode_reward_mean'],
'episode_reward_max': result['episode_reward_max'],
'episode_len_mean': result['episode_len_mean']}
episode_data.append(episode)
episode_json.append(json.dumps(episode))
print(f'Max reward: {episode["episode_reward_max"]}')
```
# Using Checkpoints
You can checkpoint the current state of a trainer to save what it has learned. Checkpoints are used for subsequent _rollouts_ and also to continue training later from a known-good state. Calling `agent.save()` creates the checkpoint and returns the path to the checkpoint file, which can be used later to restore the current state to a new trainer. Here we'll load the trained policy into the same process, but often it would be loaded in a new process, for example on a production cluster for serving that is separate from the training cluster.
```
checkpoint_path = agent.save()
print(checkpoint_path)
```
Now load the checkpoint in a new trainer:
```
trained_config = config.copy()
test_agent = PPOTrainer(trained_config, "CartPole-v1")
test_agent.restore(checkpoint_path)
```
Use the previously-trained policy to act in an environment. The key line is the call to `test_agent.compute_action(state)` which uses the trained policy to choose an action. This is an example of _rollout_, which we'll study in a subsequent lesson.
Verify that the cumulative reward received roughly matches up with the reward printed above. It will be at or near 500.
```
env = gym.make("CartPole-v1")
state = env.reset()
done = False
cumulative_reward = 0
while not done:
action = test_agent.compute_action(state) # key line; get the next action
state, reward, done, _ = env.step(action)
cumulative_reward += reward
print(cumulative_reward)
ray.shutdown()
```
The next lesson, [02: Introduction to RLlib](02-Introduction-to-RLlib.ipynb), steps back to introduce RLlib, its goals, and the capabilities it provides.
```
import numpy as np
import pandas as pd
DATA_DIR = '/home/ubuntu/data/patterns'
TMP_DATA_DIR = '../../data'
brands = pd.read_csv(f'{DATA_DIR}/brand_info.csv')
brands.head()
brands.top_category.value_counts().iloc[:20]
```
### May 2021 data for trial workflow
```
tmp_outfile = f'{TMP_DATA_DIR}/cleaned_202105.csv'
```
##### Part 1
```
data_202105 = pd.read_csv(f'{DATA_DIR}/202105/patterns-part1.csv')
data_202105.head()
data_202105.iloc[0]
data_202105.date_range_start.value_counts()
data_202105.date_range_end.value_counts()
data_202105.brands.value_counts()
```
#### Investigate distribution of brands
```
brand_ids_list = data_202105.safegraph_brand_ids.str.split(',')
data_202105['brand_ids_len'] = brand_ids_list.apply(lambda b: b if type(b) == float else len(b))
data_202105.brand_ids_len.value_counts()
data_202105['brands_len'] = data_202105.brands.str.split(',').apply(lambda b: b if type(b) == float else len(b))
data_202105.brands_len.value_counts()
(data_202105.brand_ids_len == data_202105.brands_len).sum()
data_202105.brand_ids_len.isna().sum()
(data_202105.brand_ids_len.isna() | (data_202105.brand_ids_len == data_202105.brands_len)).sum()
data_202105[['safegraph_brand_ids', 'brands']][data_202105.brand_ids_len.notna() & (data_202105.brand_ids_len != data_202105.brands_len)]
```
The data indicates that restricting to entries with a `brand_ids_len` of 1 (a single brand ID) would be reasonable and feasible.
```
data_202105['safegraph_brand_id'] = brand_ids_list.apply(lambda li: li if type(li) == float else li[0])
data_202105 = data_202105.merge(brands[['safegraph_brand_id', 'top_category', 'sub_category']], on='safegraph_brand_id')
keep_cols = ['city', 'region', 'safegraph_brand_id', 'top_category', 'sub_category', 'safegraph_brand_ids', 'brands',
'brand_ids_len', 'brands_len', 'raw_visit_counts', 'raw_visitor_counts', 'visits_by_day', 'poi_cbg', 'median_dwell']
dt_split = data_202105.date_range_start.str.split('-')
data_202105 = data_202105[keep_cols]
data_202105['year'] = dt_split.apply(lambda d: int(d[0]))
data_202105['month'] = dt_split.apply(lambda d: int(d[1]))
data_202105.shape
data_202105.to_csv(f'{TMP_DATA_DIR}/cleaned_202105.csv', index=False, mode='a')
!wc -l $TMP_DATA_DIR/cleaned_202105.csv
```
##### Part 2
```
data_202105_p2 = pd.read_csv(f'{DATA_DIR}/202105/patterns-part2.csv')
```
Sanity checks...
```
data_202105_p2.head()
data_202105_p2.date_range_start.value_counts()
data_202105_p2.date_range_end.value_counts()
brand_ids_list = data_202105_p2.safegraph_brand_ids.str.split(',')
data_202105_p2['brand_ids_len'] = brand_ids_list.apply(lambda b: b if type(b) == float else len(b))
data_202105_p2['brands_len'] = data_202105_p2.brands.str.split(',').apply(lambda b: b if type(b) == float else len(b))
data_202105_p2['safegraph_brand_id'] = brand_ids_list.apply(lambda li: li if type(li) == float else li[0])
data_202105_p2 = data_202105_p2.merge(brands[['safegraph_brand_id', 'top_category', 'sub_category']], on='safegraph_brand_id')
dt_split = data_202105_p2.date_range_start.str.split('-')
data_202105_p2 = data_202105_p2[keep_cols]
data_202105_p2['year'] = dt_split.apply(lambda d: int(d[0]))
data_202105_p2['month'] = dt_split.apply(lambda d: int(d[1]))
data_202105_p2.shape
data_202105_p2.to_csv(tmp_outfile, index=False, mode='a', header=False)
!wc -l $tmp_outfile
```
##### Part 3
```
data_202105 = pd.read_csv(f'{DATA_DIR}/202105/patterns-part3.csv')
```
Minimal date range sanity check
```
data_202105.date_range_end.value_counts()
brand_ids_list = data_202105.safegraph_brand_ids.str.split(',')
data_202105['brand_ids_len'] = brand_ids_list.apply(lambda b: b if type(b) == float else len(b))
data_202105['brands_len'] = data_202105.brands.str.split(',').apply(lambda b: b if type(b) == float else len(b))
data_202105['safegraph_brand_id'] = brand_ids_list.apply(lambda li: li if type(li) == float else li[0])
data_202105 = data_202105.merge(brands[['safegraph_brand_id', 'top_category', 'sub_category']], on='safegraph_brand_id')
dt_split = data_202105.date_range_start.str.split('-')
data_202105 = data_202105[keep_cols]
data_202105['year'] = dt_split.apply(lambda d: int(d[0]))
data_202105['month'] = dt_split.apply(lambda d: int(d[1]))
data_202105.shape
data_202105.to_csv(tmp_outfile, index=False, mode='a', header=False)
!wc -l $tmp_outfile
```
##### Part 4
```
data_202105 = pd.read_csv(f'{DATA_DIR}/202105/patterns-part4.csv')
```
Minimal date range sanity check
```
data_202105.date_range_end.value_counts()
brand_ids_list = data_202105.safegraph_brand_ids.str.split(',')
data_202105['brand_ids_len'] = brand_ids_list.apply(lambda b: b if type(b) == float else len(b))
data_202105['brands_len'] = data_202105.brands.str.split(',').apply(lambda b: b if type(b) == float else len(b))
data_202105['safegraph_brand_id'] = brand_ids_list.apply(lambda li: li if type(li) == float else li[0])
data_202105 = data_202105.merge(brands[['safegraph_brand_id', 'top_category', 'sub_category']], on='safegraph_brand_id')
dt_split = data_202105.date_range_start.str.split('-')
data_202105 = data_202105[keep_cols]
data_202105['year'] = dt_split.apply(lambda d: int(d[0]))
data_202105['month'] = dt_split.apply(lambda d: int(d[1]))
data_202105.shape
data_202105.to_csv(tmp_outfile, index=False, mode='a', header=False)
!sudo mv $tmp_outfile $DATA_DIR
del data_202105, data_202105_p2
```
### Define function based on above and apply to other months
```
def monthly_data_edit_write(infile, brands, outfile, header=False):
data = pd.read_csv(infile)
brand_ids_list = data.safegraph_brand_ids.str.split(',')
data['brand_ids_len'] = brand_ids_list.apply(lambda b: b if type(b) == float else len(b))
data['brands_len'] = data.brands.str.split(',').apply(lambda b: b if type(b) == float else len(b))
data['safegraph_brand_id'] = brand_ids_list.apply(lambda li: li if type(li) == float else li[0])
data = data.merge(brands[['safegraph_brand_id', 'top_category', 'sub_category']], on='safegraph_brand_id')
dt_split = data.date_range_start.str.split('-')
keep_cols = ['city', 'region', 'safegraph_brand_id', 'top_category', 'sub_category', 'safegraph_brand_ids', 'brands',
'brand_ids_len', 'brands_len', 'raw_visit_counts', 'raw_visitor_counts', 'visits_by_day', 'poi_cbg', 'median_dwell']
data = data[keep_cols]
data['year'] = dt_split.apply(lambda d: int(d[0]))
data['month'] = dt_split.apply(lambda d: int(d[1]))
data.to_csv(outfile, index=False, mode='a', header=header)
return data # for ease of sanity checking
```
### April 2021
```
tmp_outfile = f'{TMP_DATA_DIR}/cleaned_202104.csv'
data_202104 = monthly_data_edit_write(f'{DATA_DIR}/202104/patterns-part1.csv', brands, tmp_outfile, True)
data_202104 = monthly_data_edit_write(f'{DATA_DIR}/202104/patterns-part2.csv', brands, tmp_outfile)
data_202104 = monthly_data_edit_write(f'{DATA_DIR}/202104/patterns-part3.csv', brands, tmp_outfile)
data_202104 = monthly_data_edit_write(f'{DATA_DIR}/202104/patterns-part4.csv', brands, tmp_outfile)
!sudo mv $tmp_outfile $DATA_DIR
del data_202104
```
### March 2021
```
tmp_outfile = f'{TMP_DATA_DIR}/cleaned_202103.csv'
data_202103 = monthly_data_edit_write(f'{DATA_DIR}/202103/patterns-part1.csv', brands, tmp_outfile, True)
data_202103 = monthly_data_edit_write(f'{DATA_DIR}/202103/patterns-part2.csv', brands, tmp_outfile)
data_202103 = monthly_data_edit_write(f'{DATA_DIR}/202103/patterns-part3.csv', brands, tmp_outfile)
data_202103 = monthly_data_edit_write(f'{DATA_DIR}/202103/patterns-part4.csv', brands, tmp_outfile)
!sudo mv $tmp_outfile $DATA_DIR
del data_202103
```
### June 2021
```
tmp_outfile = f'{TMP_DATA_DIR}/cleaned_202106.csv'
data_in_dir = f'{DATA_DIR}/202106'
data_202106 = monthly_data_edit_write(f'{data_in_dir}/patterns-part1.csv', brands, tmp_outfile, True)
data_202106 = monthly_data_edit_write(f'{data_in_dir}/patterns-part2.csv', brands, tmp_outfile)
data_202106 = monthly_data_edit_write(f'{data_in_dir}/patterns-part3.csv', brands, tmp_outfile)
data_202106 = monthly_data_edit_write(f'{data_in_dir}/patterns-part4.csv', brands, tmp_outfile)
!sudo mv $tmp_outfile $DATA_DIR
del data_202106
```
### July 2021
```
tmp_outfile = f'{TMP_DATA_DIR}/cleaned_202107.csv'
data_in_dir = f'{DATA_DIR}/202107'
data = monthly_data_edit_write(f'{data_in_dir}/patterns-part1.csv', brands, tmp_outfile, True)
data = monthly_data_edit_write(f'{data_in_dir}/patterns-part2.csv', brands, tmp_outfile)
data = monthly_data_edit_write(f'{data_in_dir}/patterns-part3.csv', brands, tmp_outfile)
data = monthly_data_edit_write(f'{data_in_dir}/patterns-part4.csv', brands, tmp_outfile)
!sudo mv $tmp_outfile $DATA_DIR
```
### August 2021
```
tmp_outfile = f'{TMP_DATA_DIR}/cleaned_202108.csv'
data_in_dir = f'{DATA_DIR}/202108'
data = monthly_data_edit_write(f'{data_in_dir}/patterns-part1.csv', brands, tmp_outfile, True)
data = monthly_data_edit_write(f'{data_in_dir}/patterns-part2.csv', brands, tmp_outfile)
data = monthly_data_edit_write(f'{data_in_dir}/patterns-part3.csv', brands, tmp_outfile)
data = monthly_data_edit_write(f'{data_in_dir}/patterns-part4.csv', brands, tmp_outfile)
!sudo mv $tmp_outfile $DATA_DIR
```
### Snippet for transforming visits_by_day into arrays
```
data_202105['visits_by_day'] = data_202105.visits_by_day.str.strip('[]').str.split(',')
data_202105['visits_by_day'] = data_202105.visits_by_day.apply(lambda li: np.array(li).astype(int))
```
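For clarity, here is the same transformation applied to a couple of raw values (a self-contained toy example; the bracketed string format matches the raw `visits_by_day` column):

```python
import numpy as np
import pandas as pd

# Two raw values as they appear in the visits_by_day column.
s = pd.Series(['[3,0,5]', '[1,2]'])

# Strip the brackets, split on commas, and convert each list to an int array.
arrays = s.str.strip('[]').str.split(',').apply(lambda li: np.array(li).astype(int))
print(arrays[0])  # → [3 0 5]
```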
# Prediction: Beyond Simple Random Walks
The tracking algorithm, at its simplest level, takes each particle in the previous frame and tries to find it in the current frame. This requires knowing where to look for it; if we find an actual particle near that spot, it's probably a match. The basic algorithm (Crocker & Grier) was developed to track particles undergoing Brownian diffusion, which ideally means that a particle's velocity is uncorrelated from one frame to the next. Therefore, the best guess for where a particle is going is that it will be near its most recent location.
Let's formalize this guessing as *prediction*. Consider a function
$$P(t_1, t_0, \vec x(t_0))$$
that takes the particle at position $\vec x(t_0)$ and predicts its future position $\vec x(t_1)$. The optimal predictor for Brownian motion is
$$P(t_1, t_0, \vec x(t_0)) = \vec x(t_0)$$
which happily is also the easiest to implement.
The better our prediction about where to look in the next frame, the more likely we will find the one and only particle we seek. `trackpy` looks for the particle in a small region of radius `search_range`, centered on $P(t_1, t_0, \vec x(t_0))$. So to successfully track particle $i$ puts a limit on the error in our prediction:
$$\|P(t_1, t_0, \vec x_i(t_0)) - \vec x_i(t_1)\| \le \tt{search\_range}$$
This favors a generous `search_range`. However, if `search_range` is too big, then for each particle in the previous frame there will be many possible matches in the current frame, and so matching one frame to the next requires the computer to consider a mind-boggling set of possibilities. Tracking may become impossibly slow, and this causes `trackpy` to halt and raise a `SubnetOversizeException`, rather than keep you waiting forever. So for the Brownian $P$ above, `search_range` must be bigger than the largest particle displacement between frames, but smaller than the typical spacing between particles. If such a value cannot be found among the real numbers, then you have a problem.
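This trade-off can be checked numerically before linking: estimate the largest frame-to-frame displacement and the typical inter-particle spacing, and verify that a window exists between them. A minimal sketch on synthetic positions (`frame0`/`frame1` here are plain coordinate arrays invented for the example, not trackpy objects):

```python
import numpy as np

rng = np.random.default_rng(0)
frame0 = rng.uniform(0, 10, size=(50, 2))            # particle positions at t0
frame1 = frame0 + rng.normal(0, 0.02, size=(50, 2))  # small Brownian steps to t1

# search_range must exceed the largest single-step displacement...
max_step = np.linalg.norm(frame1 - frame0, axis=1).max()

# ...but stay below the typical spacing between distinct particles.
dists = np.linalg.norm(frame0[:, None, :] - frame0[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)
typical_spacing = np.median(dists.min(axis=1))

print(max_step < typical_spacing)  # a workable search_range exists for this data
```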
However, if particle motion is not strictly Brownian, its velocity probably *is* correlated in time. We may be able to improve $P$. We will now do this with `trackpy`.
## Prescribed predictors
Let's start by demonstrating the mechanics of $P$ in `trackpy`. `trackpy`'s various `link_` functions accept a `predictor` argument, which is a Python function that implements $P$.
Before we see how, let's fake some data: a regular array of particles, translating with constant velocity.
```
%matplotlib inline
from pylab import * # not recommended usage, but we use it for brevity here
import numpy as np
import pandas
def fakeframe(t=0, Nside=4):
xg, yg = np.mgrid[:Nside,:Nside]
dx = 1 * t
dy = -1 * t
return pandas.DataFrame(
dict(x=xg.flatten() + dx, y=yg.flatten() + dy, frame=t))
```
Let's visualize 2 frames. In all of the plots below, the blue circles are the particles of the first frame and the green squares are the particles of the last frame.
```
f0 = fakeframe(0)
f1 = fakeframe(0.8)
plot(f0.x, f0.y, 'bo')
plot(f1.x, f1.y, 'gs')
axis('equal'); ylim(ymin=-1.0, ymax=3.5)
```
Track and visualize.
```
import trackpy
tr = pandas.concat(trackpy.link_df_iter((f0, f1), 0.5))
def trshow(tr, first_style='bo', last_style='gs', style='b.'):
frames = list(tr.groupby('frame'))
nframes = len(frames)
for i, (fnum, pts) in enumerate(frames):
if i == 0:
sty = first_style
elif i == nframes - 1:
sty = last_style
else:
sty = style
plot(pts.x, pts.y, sty)
trackpy.plot_traj(tr, colorby='frame', ax=gca())
axis('equal'); ylim(ymin=-1.0, ymax=3.5)
xlabel('x')
ylabel('y')
trshow(tr)
```
Obviously this is not what we wanted at all! Let's give `trackpy.link_df_iter()` a $P$ which reflects this constant velocity.
We define `predict()` for a single particle, and use the `trackpy.predict.predictor` decorator to let it make predictions for many particles at once. Then, we pass it to `link_df_iter()` via the `predictor` argument.
```
import trackpy.predict
@trackpy.predict.predictor
def predict(t1, particle):
velocity = np.array((1, -1)) # See fakeframe()
return particle.pos + velocity * (t1 - particle.t)
tr = pandas.concat(trackpy.link_df_iter((f0, f1), 0.5, predictor=predict))
trshow(tr)
```
Yay! Remember: Our predictor doesn't have to know exactly where the particle will be; it just has to bias the search enough that the correct identification will be made.
## Dynamic predictors
Of course, it's rare that you will know your particles' velocities ahead of time. It would be much better for the predictor to "learn" about the velocities, and allow different particles to have different velocities that can change over time. To accomplish this, we have to do more than just supply $P$: we have to know particles' most recent velocities.
$$P(t_1, t_0, \vec x_i(t_0)) = \vec x_i(t_0) + \frac{\vec x_i(t_0) - \vec x_i(t_{-1})}{t_0 - t_{-1}} (t_1 - t_0)$$
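Written out, this predictor is just a finite-difference velocity estimate plus linear extrapolation; a standalone numpy sketch of the formula (independent of trackpy's actual implementation):

```python
import numpy as np

def predict_position(x_t0, x_tm1, t0, tm1, t1):
    """Constant-velocity extrapolation: x(t0) + v * (t1 - t0),
    with v estimated from the two most recent positions."""
    velocity = (x_t0 - x_tm1) / (t0 - tm1)
    return x_t0 + velocity * (t1 - t0)

# A particle moving at constant velocity (1, -1) is predicted exactly.
p = predict_position(np.array([1.0, -1.0]), np.array([0.0, 0.0]), 1.0, 0.0, 2.0)
print(p)  # → [ 2. -2.]
```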
To implement this kind of prediction in `trackpy`, we use instances of the [`trackpy.predict.NearestVelocityPredict`](https://github.com/soft-matter/trackpy/blob/e468027d7bb6e96cbb9f2048530cbc6e8c7172d8/trackpy/predict.py#L145-L196) class.
There are a few caveats:
- Defining this new $P$ for particle $i$ specifically is problematic, because if a new particle is in frame $t_0$ but wasn't in $t_{-1}$, we won't know its velocity. So newly-appeared particles just borrow the velocity of the closest old particle.
- Velocities are undefined in the first frame of the movie, because there is no previous frame. The code falls back to an initial guess of $\vec v_0 = 0$. However, `NearestVelocityPredict`, and the other classes in `trackpy.predict`, allow one to instead specify an initial velocity profile, field, etc. See the docstring of each class.
- Even though particles may be in motion at the start of the movie, the default of $\vec v_0 = 0$ is not always so bad. In many cases, at least some of the particles are moving slowly enough that they can be tracked and their velocity can be obtained. Because particles with unknown velocity just borrow the nearest known velocity, as we just discussed, this may give the code a foothold to track more particles in later frames. Your mileage may vary.
OK, let's see this in action. We'll make a 3-frame movie that starts with small displacements (because of the $\vec v_0 = 0$ assumption) and then speeds up.
```
frames = (fakeframe(0), fakeframe(0.25), fakeframe(0.65))
```
Without prediction, linking of the particles in the top row can't even make it to the 3rd frame.
```
tr = pandas.concat(trackpy.link_df_iter(frames, 0.5))
trshow(tr)
```
`NearestVelocityPredict` objects work by watching the output of linking as it happens, and updating $P$ to use the latest velocities. These objects provide modified versions of trackpy's two main linking functions, `link_df_iter()` and `link_df()`, that work like their namesakes but add dynamic prediction.
First, we use `link_df_iter()` to link the frames with prediction:
```
pred = trackpy.predict.NearestVelocityPredict()
tr = pandas.concat(pred.link_df_iter(frames, 0.5))
trshow(tr)
```
Alternatively, we can use `link_df()`:
```
pred = trackpy.predict.NearestVelocityPredict()
tr = pred.link_df(pandas.concat(frames), 0.5)
trshow(tr)
```
We'll use `link_df_iter()` for the remaining examples, but `link_df()` is always available as well.
(*Note:* Unlike `link_df_iter()`, this `link_df()` is usually — but not always — a drop-in replacement for `trackpy.link_df()`. Consult the documentation or source code for details.)
### Channel flow prediction
There is one special case that is common enough to deserve a special $P$: channel flow, in which velocities are relatively uniform in one direction. For example, if the channel is in the $x$ (i.e. $\hat i$) direction, particle velocities are very well approximated as
$$\vec v = \hat i v_x(y)$$
where the velocity profile $v_x(y)$ is a smoothly-varying function defined across the channel.
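The profile $v_x(y)$ can be estimated by binning observed x-velocities across the channel, which is essentially what the bin-size argument to the predictor controls. A hedged numpy sketch on synthetic shear data (the binning scheme here is an illustration, not trackpy's exact internals):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.uniform(0, 4, 200)   # particle positions across the channel
vx = 0.45 * y                # shear flow: vx grows linearly with y

bin_size = 0.5
bin_idx = np.floor(y / bin_size).astype(int)
# Mean x-velocity per bin approximates the profile v_x(y).
bin_means = np.array([vx[bin_idx == b].mean() for b in range(8)])
print(np.all(np.diff(bin_means) > 0))  # the profile increases across the channel
```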
This is implemented by the [`trackpy.predict.ChannelPredict`](https://github.com/soft-matter/trackpy/blob/e468027d7bb6e96cbb9f2048530cbc6e8c7172d8/trackpy/predict.py#L228-L328) class. When creating an instance, you must specify the size of the bins used to create the velocity profile. You can also specify the direction of flow; see the class's docstring for details.
Let's create some particles undergoing accelerating shear.
```
def fakeshear(t=0, Nside=4):
xg, yg = np.mgrid[:Nside,:Nside]
dx = 0.45 * t * yg
return pandas.DataFrame(
dict(x=(xg + dx).flatten(), y=yg.flatten(), frame=t))
```
When we attempt to track them, the algorithm fails for the top row of particles.
```
frames = (fakeshear(0), fakeshear(0.25), fakeshear(0.65))
tr = pandas.concat(trackpy.link_df_iter(frames, 0.5))
trshow(tr)
ylim(ymax=3.5);
```
Now, let's try it with prediction:
```
pred = trackpy.predict.ChannelPredict(0.5, 'x', minsamples=3)
tr = pandas.concat(pred.link_df_iter(frames, 0.5))
trshow(tr)
ylim(ymax=3.5);
```
Much better!
### Drift prediction
Finally, the most symmetric prediction class in `trackpy.predict` is [`DriftPredict`](https://github.com/soft-matter/trackpy/blob/e468027d7bb6e96cbb9f2048530cbc6e8c7172d8/trackpy/predict.py#L199-L225). This just makes predictions based on the average velocity of all particles. It is useful when you have some background convective flow. Note that this does *not* remove the flow from your results; to do that, use `trackpy.compute_drift` and `trackpy.subtract_drift`, as in the walkthrough tutorial.
# Least Squares
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://licensebuttons.net/l/by/4.0/80x15.png" /></a><br />This notebook by Xiaozhou Li is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
All code examples are also licensed under the [MIT license](http://opensource.org/licenses/MIT).
The concept of least squares permeates modern statistics and mathematical modeling. Its key techniques of regression and
parameter estimation have become fundamental tools in the sciences and engineering.
```
import numpy as np
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import clear_output, display
```
## Polynomial Fitting
```
def poly_fit(x, y, n):
m = np.size(x)
A = np.zeros([n+1,n+1])
b = np.zeros(n+1)
A_tmp = np.zeros(2*n+1)
for i in range(2*n+1):
for j in range(m):
A_tmp[i] += x[j]**i
if (i < n+1):
b[i] += x[j]**i*y[j]
for i in range(n+1):
A[i] = A_tmp[i:i+n+1]
a = np.linalg.solve(A, b)
return a
def plot_fun(fun, a, b, c='k'):
num = 200
x = np.linspace(a, b, num+1)
y = np.zeros(num+1)
for i in range(num+1):
y[i] = fun(x[i])
plt.plot(x, y, c, linewidth=3)
```
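For reference, `poly_fit` above is assembling the normal equations $A^\top A\,a = A^\top y$ for the Vandermonde matrix $A$ of the data. An equivalent compact version, shown for comparison (a sketch only — for serious work a QR-based solver such as `np.polyfit` is numerically safer than the normal equations):

```python
import numpy as np

def poly_fit_normal(x, y, n):
    """Least-squares polynomial coefficients (ascending order) via the
    normal equations V^T V a = V^T y, with V the Vandermonde matrix."""
    V = np.vander(x, n + 1, increasing=True)
    return np.linalg.solve(V.T @ V, V.T @ y)

# Exactly linear data is recovered exactly: intercept 2, slope 3.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 + 3.0 * x
a = poly_fit_normal(x, y, 1)
print(a)  # approximately [2., 3.]
```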
__Example__ Fitting the points $(1,2),(2,3),(3,5),(4,7),(5,11),(6,13),(7,17),(8,19),(9,23),(10,29)$ with a polynomial
```
x = np.array([1,2,3,4,5,6,7,8,9,10])
y = np.array([2,3,5,7,11,13,17,19,23,29])
plt.plot(x, y, 'ro', markersize=12, linewidth=3)
a = poly_fit(x, y, 2)
print (a)
def fitting_fun(a, x):
n = np.size(a)
y = a[n-1]
for i in range(n-1):
y = y*x + a[n-2-i]
return y
#print (fitting_fun(a,0))
def fun(x):
return fitting_fun(a,x)
plot_fun(fun, 1, 10)
```
__Example__ Linear polynomial fitting: linear function with random perturbation
```
def fun1(x):
#return x**3 - x**2 + x
return 3.5*x
m = 20
x = np.linspace(-1,1,m)
y = np.zeros(m)
for i in range(m):
y[i] = fun1(x[i])
y = y + 0.1*np.random.rand(m)
plt.plot(x, y, 'ro', markersize=12, linewidth=3)
a = poly_fit(x, y, 1)
plot_fun(fun, -1, 1)
```
__Example__ Linear polynomial fitting for quadratic function
```
def fun2(t):
#return x**3 - x**2 + x
return 300*t - 4.9*t*t
m = 20
x = np.linspace(0,2,m)
y = np.zeros(m)
for i in range(m):
y[i] = fun2(x[i])
# y = y + 0.1*np.random.rand(m)
plt.plot(x, y, 'ro', markersize=12, linewidth=3)
a = poly_fit(x, y, 1)
plot_fun(fun, 0, 2)
# longer range
#t = 50
#plot_fun(fun, 0, t)
#x = np.linspace(0,t,200)
#plt.plot(x, fun2(x),'b')
```
__Example__ Fitting the points $(1,2),(2,3),(4,7),(6,13),(7,17),(8,19)$ with a polynomial
```
x = np.array([1,2,4,6,7,8])
y = np.array([2,3,7,13,17,19])
plt.plot(x, y, 'ro', markersize=12, linewidth=3)
a = poly_fit(x, y, 1)
print (a)
plt.plot(x, y, 'ro', markersize=12, linewidth=3)
a = poly_fit(x, y, 2)
print (a)
def fitting_fun(a, x):
n = np.size(a)
y = a[n-1]
for i in range(n-1):
y = y*x + a[n-2-i]
return y
#print (fitting_fun(a,0))
def fun(x):
return fitting_fun(a,x)
plot_fun(fun, 1, 10)
print(np.polyfit(x,y,1))
print(np.polyfit(x,y,2))
```
# Predicting reaction performance in C–N cross-coupling using machine learning
DOI: 10.1126/science.aar5169
Ahneman, D. T.; Estrada, J. G.; Lin, S.; Dreher, S. D.; Doyle, A. G. *Science*, **2018**, *360*, 186-190.
Import schema and helper functions
```
import ord_schema
from datetime import datetime
from ord_schema.proto import reaction_pb2
from ord_schema.units import UnitResolver
from ord_schema import validations
from ord_schema import message_helpers
unit_resolver = UnitResolver()
```
# Define a single reaction
Single reaction from the SI to be used as a template for the remaining entries.
Start by writing a helper function for defining stock solutions.
```
# TODO(ccoley) Replace use of this helper class with the message_helpers.set_solute_moles
class stock_solution:
"""Helper class for defining stock solutions."""
def __init__(self, reaction, stock_name):
self.stock = reaction.inputs[stock_name]
self.concentration = 0.0
self.moles = 0.0
self.volume = 0.0
def add_solute(self, role, name, SMILES=None, is_limiting=False, preparation='NONE',
moles=0.0, volume_liters=0.0):
"""Add solute to solution. Keep track of moles of solute and total volume."""
# Solution volume is sum of solute and solvent volumes
self.moles += float(moles)
self.volume += float(volume_liters)
# Add solute and ID
self.solute = self.stock.components.add()
self.solute.reaction_role = reaction_pb2.ReactionRole.__dict__[role]
self.solute.identifiers.add(value=name, type='NAME')
if SMILES != None:
self.solute.identifiers.add(value=SMILES, type='SMILES')
# Other details
self.solute.preparations.add().type = reaction_pb2.CompoundPreparation.PreparationType.Value(preparation)
self.solute.is_limiting = is_limiting
def add_solvent(self, name, SMILES=None, preparation='NONE', volume_liters=0.0):
"""Add solvent to solution. Keep track of total volume."""
# Solution volume is sum of solute and solvent volumes
self.volume += float(volume_liters)
# Add solute and ID
self.solvent = self.stock.components.add()
self.solvent.reaction_role = reaction_pb2.ReactionRole.SOLVENT
self.solvent.identifiers.add(value=name, type='NAME')
if SMILES != None:
self.solvent.identifiers.add(value=SMILES, type='SMILES')
# Other details
self.solvent.preparations.add().type = reaction_pb2.CompoundPreparation.PreparationType.Value(preparation)
def mix(self, concentration_molar=0):
"""Mix function resolves moles and volume from available information (concentration, moles, volume)"""
self.concentration = concentration_molar
# Resolve concentration
if self.moles > 0 and self.volume > 0:
self.solute.amount.moles.CopyFrom(unit_resolver.resolve(f'{self.moles*(10**6):16f} umol'))
self.solvent.amount.volume.CopyFrom(unit_resolver.resolve(f'{self.volume*(10**6):16f} uL'))
elif self.concentration > 0 and self.volume > 0:
self.moles = self.concentration * self.volume
self.solute.amount.moles.CopyFrom(unit_resolver.resolve(f'{self.moles*(10**6):16f} umol'))
self.solvent.amount.volume.CopyFrom(unit_resolver.resolve(f'{self.volume*(10**6):16f} uL'))
```
**Define reaction inputs**:
- Catalyst in DMSO (0.05 M)
- Electrophile in DMSO (0.50 M)
- Nucleophile in DMSO (0.50 M)
- Additive in DMSO (0.50 M)
- Base in DMSO (0.75 M)
- The SI does not indicate an order of addition
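As a quick sanity check, the moles dispensed per input follow directly from concentration times volume, which is the same arithmetic the `mix()` helper performs when given a molarity. For the catalyst entry above:

```python
# Catalyst stock: 0.05 M solution, 200 nL dispensed per well.
concentration_molar = 0.05
volume_liters = 200e-9

moles = concentration_molar * volume_liters
print(f'{moles * 1e9:.0f} nmol')  # → 10 nmol of Pd precatalyst per well
```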
```
# Define Reaction
reaction = reaction_pb2.Reaction()
reaction.identifiers.add(value=r'Buchwald-Hartwig Amination', type='NAME')
# Catalyst stock solution
catalyst = stock_solution(reaction, r'Pd precatalyst in DMSO')
catalyst.add_solute('CATALYST', r'XPhos', SMILES=r'CC(C)C1=CC(C(C)C)=CC(C(C)C)=C1C2=C(P(C3CCCCC3)C4CCCCC4)C=CC=C2')
catalyst.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
catalyst.mix(concentration_molar=0.05)
# Electrophile stock solution
electrophile = stock_solution(reaction, r'Aryl halide in DMSO')
electrophile.add_solute('REACTANT', r'4-trifluoromethyl chlorobenzene', SMILES=r'ClC1=CC=C(C(F)(F)F)C=C1', is_limiting=True)
electrophile.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
electrophile.mix(concentration_molar=0.50)
# Nucleophile stock solution
nucleophile = stock_solution(reaction, r'Amine in DMSO')
nucleophile.add_solute('REACTANT', r'p-toluidine', SMILES=r'NC1=CC=C(C)C=C1')
nucleophile.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
nucleophile.mix(concentration_molar=0.50)
# Additive stock solution
additive = stock_solution(reaction, r'Additive in DMSO')
additive.add_solute('REAGENT', r'5-phenylisoxazole', SMILES=r'o1nccc1c2ccccc2')
additive.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
additive.mix(concentration_molar=0.50)
# Base stock solution
base = stock_solution(reaction, r'Base in DMSO')
base.add_solute('REAGENT', r'P2Et', SMILES=r'CN(C)P(N(C)C)(N(C)C)=NP(N(C)C)(N(C)C)=NCC')
base.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
base.mix(concentration_molar=0.75)
```
Define reaction setup & conditions
```
# Reactions performed in 1536 well plate
reaction.setup.vessel.CopyFrom(
reaction_pb2.Vessel(
type='WELL_PLATE',
material=dict(type='PLASTIC', details='polypropylene'),
volume=unit_resolver.resolve('12.5 uL')
)
)
reaction.setup.is_automated = True
reaction.setup.environment.type = reaction.setup.environment.GLOVE_BOX
# Heated - not specified how
t_conds = reaction.conditions.temperature
t_conds.setpoint.CopyFrom(reaction_pb2.Temperature(units='CELSIUS', value=60))
# Glove box work
p_conds = reaction.conditions.pressure
p_conds.control.type = p_conds.PressureControl.SEALED
p_conds.atmosphere.type = p_conds.Atmosphere.NITROGEN
p_conds.atmosphere.details = 'dry nitrogen'
# No safety notes
reaction.notes.safety_notes = ''
```
After 16 h, the plate was opened and the Mosquito was used to add internal standard to each well (3 µL of 0.0025 M di-tert-butylbiphenyl solution in DMSO). At that point, aliquots were sampled into 384-well plates and analyzed by UPLC.
```
# Standard stock solution
standard = stock_solution(reaction, r'Internal standard in DMSO')
standard.add_solute('INTERNAL_STANDARD', "4,4'-di-tert-butyl-1,1'-biphenyl", SMILES=r'CC(C)(C)C1=CC=C(C2=CC=C(C(C)(C)C)C=C2)C=C1')
standard.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=3e-6)
standard.mix(concentration_molar=0.0025)
outcome = reaction.outcomes.add()
outcome.reaction_time.CopyFrom(unit_resolver.resolve('16 hrs'))
# Analyses: UPLC
# Note using LCMS because UPLC is not an option
outcome.analyses['UPLC analysis'].type = reaction_pb2.Analysis.LCMS
outcome.analyses['UPLC analysis'].details = ('UPLC using 3 µL of 0.0025 M di-tert-butylbiphenyl solution in DMSO internal standard')
outcome.analyses['UPLC analysis'].instrument_manufacturer = 'Waters Acquity'
# Define product identity
prod_2a = outcome.products.add()
prod_2a.identifiers.add(value=r'FC(C1=CC=C(NC2=CC=C(C)C=C2)C=C1)(F)F', type='SMILES')
prod_2a.is_desired_product = True
prod_2a.reaction_role = reaction_pb2.ReactionRole.PRODUCT
# The UPLC analysis was used to confirm both identity and yield
prod_2a.measurements.add(type='IDENTITY', analysis_key='UPLC analysis')
prod_2a.measurements.add(type='YIELD', analysis_key='UPLC analysis', percentage=dict(value=10.65781182),
uses_internal_standard=True)
# Reaction provenance
reaction.provenance.city = r'Kenilworth, NJ'
reaction.provenance.doi = r'10.1126/science.aar5169'
reaction.provenance.publication_url = r'https://science.sciencemag.org/content/360/6385/186'
reaction.provenance.record_created.time.value = datetime.now().strftime("%m/%d/%Y, %H:%M:%S")
reaction.provenance.record_created.person.CopyFrom(reaction_pb2.Person(
name='Benjamin J. Shields', organization='Princeton University', email='bjs4@princeton.edu'))
```
Validate and examine this final prototypical reaction entry
```
outcome.products
validations.validate_message(reaction)
reaction
```
# Full HTE Data Set
```
# Get full set of reactions: I preprocessed this to have SMILES for each component.
# Note I am only including the data that was used for modeling - there are some
# controls and failed reactions in the SI (if we even want them?).
import pandas as pd
import os
if not os.path.isfile('experiment_index.csv'):
!wget https://github.com/Open-Reaction-Database/ord-schema/raw/main/examples/9_Ahneman_Science_CN_Coupling/experiment_index.csv
index = pd.read_csv('experiment_index.csv')
index
# I happened to have ID tables around so we can give the components names
def match_name(column, list_path):
"""Match names from csv files to SMILES."""
if not os.path.isfile(list_path):
!wget https://github.com/Open-Reaction-Database/ord-schema/raw/main/examples/9_Ahneman_Science_CN_Coupling/{list_path}
component_list = pd.read_csv(list_path)
# Get SMILES column
for col in component_list.columns.values:
if 'SMILES' in col:
smi_col = col
# Get name column
names = index[column].copy()
for i in range(len(component_list)):
names = names.replace(component_list[smi_col][i], component_list['name'][i])
return names.values
index['Aryl_halide_name'] = match_name('Aryl_halide_SMILES', 'aryl_halide-list.csv')
index['Additive_name'] = match_name('Additive_SMILES', 'additive-list.csv')
index['Base_name'] = match_name('Base_SMILES', 'base-list.csv')
index['Ligand_name'] = match_name('Ligand_SMILES', 'ligand-list.csv')
index.head()
# Products aren't listed - Use rdkit to get them
from rdkit import Chem
from rdkit.Chem import AllChem
def amination(aryl_halide):
"""Get product based on aryl halide identity."""
replace_with = Chem.MolFromSmiles('NC1=CC=C(C)C=C1')
pattern = Chem.MolFromSmarts('[Cl,Br,I]')
molecule = Chem.MolFromSmiles(aryl_halide)
product = AllChem.ReplaceSubstructs(molecule, pattern, replace_with)
return Chem.MolToSmiles(product[0])
index['Product_SMILES'] = [amination(aryl_halide) for aryl_halide in index['Aryl_halide_SMILES'].tolist()]
index.head()
# Reorder the dataframe
index = index[['Ligand_SMILES', 'Ligand_name',
'Aryl_halide_SMILES', 'Aryl_halide_name',
'Additive_SMILES', 'Additive_name',
'Base_SMILES', 'Base_name',
'Product_SMILES', 'yield']]
# Gonna time execution
import time
class timer:
"""
Reports wall-clock time between construction and stop().
"""
def __init__(self, name):
self.start = time.time()
self.name = name
def stop(self):
self.end = time.time()
print(self.name + ': ' + str(self.end - self.start) + ' s')
```
The only aspects of reaction data that vary are: (1) ligand, (2) electrophile, (3) additive, and (4) base.
```
t = timer('3955 Entries')
reactions = []
for lig_s, lig_n, elec_s, elec_n, add_s, add_n, base_s, base_n, prod, y in index.values:
# Define Reaction
reaction = reaction_pb2.Reaction()
reaction.identifiers.add(value=r'Buchwald-Hartwig Amination', type='NAME')
# Catalyst stock solution
catalyst = stock_solution(reaction, r'Pd precatalyst in DMSO')
catalyst.add_solute('CATALYST', lig_n, SMILES=lig_s)
catalyst.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
catalyst.mix(concentration_molar=0.05)
# Electrophile stock solution
electrophile = stock_solution(reaction, r'Aryl halide in DMSO')
electrophile.add_solute('REACTANT', elec_n, SMILES=elec_s, is_limiting=True)
electrophile.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
electrophile.mix(concentration_molar=0.50)
# Nucleophile stock solution
nucleophile = stock_solution(reaction, r'Amine in DMSO')
nucleophile.add_solute('REACTANT', r'p-toluidine', SMILES=r'NC1=CC=C(C)C=C1')
nucleophile.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
nucleophile.mix(concentration_molar=0.50)
# Additive stock solution
additive = stock_solution(reaction, r'Additive in DMSO')
additive.add_solute('REAGENT', add_n, SMILES=add_s)
additive.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
additive.mix(concentration_molar=0.50)
# Base stock solution
base = stock_solution(reaction, r'Base in DMSO')
base.add_solute('REAGENT', base_n, SMILES=base_s)
base.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
base.mix(concentration_molar=0.75)
    # Reactions performed in a 1536-well plate
reaction.setup.vessel.CopyFrom(
reaction_pb2.Vessel(
type='WELL_PLATE',
material=dict(type='PLASTIC'),
volume=unit_resolver.resolve('12.5 uL')
)
)
reaction.setup.is_automated = True
reaction.setup.environment.type = reaction_pb2.ReactionSetup.ReactionEnvironment.GLOVE_BOX
# Heated - not specified how
t_conds = reaction.conditions.temperature
t_conds.setpoint.CopyFrom(reaction_pb2.Temperature(units='CELSIUS', value=60))
# Glove box work
p_conds = reaction.conditions.pressure
p_conds.control.type = p_conds.PressureControl.SEALED
p_conds.atmosphere.type = p_conds.Atmosphere.NITROGEN
p_conds.atmosphere.details = 'dry nitrogen'
# Notes
reaction.notes.safety_notes = ''
# TODO(ccoley) Stock solutions can be defined without using this custom function
# Standard stock solution
standard = stock_solution(reaction, r'External standard in DMSO')
    standard.add_solute('INTERNAL_STANDARD', "4,4'-di-tert-butyl-1,1'-biphenyl", SMILES=r'CC(C)(C)C1=CC=C(C2=CC=C(C(C)(C)C)C=C2)C=C1')
standard.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=3e-6)
standard.mix(concentration_molar=0.0025)
outcome = reaction.outcomes.add()
outcome.reaction_time.CopyFrom(unit_resolver.resolve('16 hrs'))
# Analyses: UPLC/MS
outcome.analyses['UPLC analysis'].type = reaction_pb2.Analysis.LCMS
outcome.analyses['UPLC analysis'].details = ('UPLC using 3 µL of 0.0025 M di-tert-butylbiphenyl solution in DMSO external standard')
outcome.analyses['UPLC analysis'].instrument_manufacturer = 'Waters Acquity'
# Define product identity
prod_2a = outcome.products.add()
prod_2a.identifiers.add(value=r'FC(C1=CC=C(NC2=CC=C(C)C=C2)C=C1)(F)F', type='SMILES')
prod_2a.is_desired_product = True
prod_2a.reaction_role = reaction_pb2.ReactionRole.PRODUCT
# The UPLC analysis was used to confirm both identity and yield
prod_2a.measurements.add(type='IDENTITY', analysis_key='UPLC analysis')
prod_2a.measurements.add(type='YIELD', analysis_key='UPLC analysis', percentage=dict(value=y),
uses_internal_standard=True)
# Reaction provenance
reaction.provenance.city = r'Kenilworth, NJ'
reaction.provenance.doi = r'10.1126/science.aar5169'
reaction.provenance.publication_url = r'https://science.sciencemag.org/content/360/6385/186'
reaction.provenance.record_created.time.value = datetime.now().strftime("%m/%d/%Y, %H:%M:%S")
reaction.provenance.record_created.person.CopyFrom(reaction_pb2.Person(
name='Benjamin J. Shields', organization='Princeton University', email='bjs4@princeton.edu')
)
# Validate
output = validations.validate_message(reaction)
for error in output.errors:
print(error)
# Append
reactions.append(reaction)
t.stop()
print(f'Generated {len(reactions)} reactions')
# Inspect random reaction from this set
reactions[15]
```
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision import transforms, datasets, models
import numpy as np
import matplotlib.pyplot as plt
from torch.autograd import Variable
from collections import namedtuple
from IPython.display import Image
%matplotlib inline
np.random.seed(2021)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
batch_size = 1024
transform = transforms.Compose(
[transforms.ToTensor(),
# transforms.Normalize((0.1307,), (0.3081,))
])
train_data = torchvision.datasets.MNIST(root='./data', train=True,
download=True, transform=transform)
test_data = torchvision.datasets.MNIST(root='./data', train=False,
download=True, transform=transform)
train_size = train_data.data.shape[0]
val_size, train_size = int(0.20 * train_size), int(0.80 * train_size) # 80 / 20 train-val split
test_size = test_data.data.shape[0]
Metric = namedtuple('Metric', ['train_loss', 'train_error', 'val_error', 'val_loss'])
trainloader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=0,
sampler=torch.utils.data.sampler.SubsetRandomSampler(np.arange(val_size, val_size+train_size))
)
valloader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=0,
sampler=torch.utils.data.sampler.SubsetRandomSampler(np.arange(0, val_size))
)
testloader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
shuffle=False,
num_workers=0
)
print(train_size)
print(val_size)
print(test_size)
idxs = [0, 5, 7, 30, 214, 3412, 5555, 6666, 7777]
f, a = plt.subplots(2, 4, figsize=(10, 5))
for i in range(8):
X = train_data.data[idxs[i]]
Y = train_data.targets[idxs[i]]
r, c = i // 4, i % 4
    a[r][c].set_title(int(Y))
a[r][c].axis('off')
a[r][c].imshow(X.numpy())
plt.draw()
def one_epoch(epoch, net, loader, optimizer):
net.train()
running_loss = 0.0
n = 0
correct = 0
total = 0
for i, data in enumerate(loader):
        # get the inputs and move them to the target device; data is a list of [inputs, labels]
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = F.cross_entropy(outputs, labels)
running_loss += loss.item()
loss.backward()
optimizer.step()
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
# print statistics
avg_loss = running_loss / total
acc = correct / total
return avg_loss, acc
def train(net, loader, dev_loader, optimizer, epochs):
train_losses = []
valid_losses = []
for epoch in range(epochs): # loop over the dataset multiple times
avg_loss_t, acc_t = one_epoch(epoch, net, loader, optimizer)
avg_loss_v, acc_v = infer(net, dev_loader)
train_losses.append(avg_loss_t)
valid_losses.append(avg_loss_v)
if epoch % 5 == 0:
print('[%d] loss: %.8f, acc: %.4f' %
(epoch + 1, avg_loss_t, acc_t))
print('[valid] loss: %.8f, acc: %.4f' % (avg_loss_v, acc_v))
return train_losses, valid_losses
def train_step(net, loader, dev_loader, optimizer, scheduler, epochs):
net.train()
train_losses = []
valid_losses = []
for epoch in range(epochs): # loop over the dataset multiple times
avg_loss_t, acc_t = one_epoch(epoch, net, loader, optimizer)
avg_loss_v, acc_v = infer(net, dev_loader)
scheduler.step()
if epoch % 5 == 0:
print('[%d] loss: %.8f, acc: %.4f' %
(epoch + 1, avg_loss_t, acc_t))
print('[valid] loss: %.8f, acc: %.4f' % (avg_loss_v, acc_v))
print("lr: {}".format(optimizer.param_groups[0]['lr']))
train_losses.append(avg_loss_t)
valid_losses.append(avg_loss_v)
return train_losses, valid_losses
def infer(net, loader):
net.eval()
running_loss = 0.0
n = 0
correct = 0
total = 0
with torch.no_grad():
for i, data in enumerate(loader):
            # get the inputs and move them to the target device; data is a list of [inputs, labels]
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)
            # forward pass only - no gradients are computed under torch.no_grad()
outputs = net(inputs)
loss = F.cross_entropy(outputs, labels)
running_loss += loss.item()
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
acc = correct / total
avg_loss = running_loss / total
return avg_loss, acc
### VISUALIZATION ###
def training_plot(a, b):
plt.figure(1)
plt.plot(a, 'b', label="train")
plt.plot(b, 'g', label="valid")
plt.title('Training/Valid Loss')
plt.legend()
plt.show()
### LET'S TRAIN the baseline ###
""" baseline """
model1 = nn.Sequential(nn.Flatten(),
nn.Linear(784, 20),
nn.ReLU(),
nn.Linear(20, 20),
nn.ReLU(),
nn.Linear(20, 10),
)
model1 = model1.to(device)
opt1 = torch.optim.SGD(model1.parameters(), lr=0.8)
train_losses, valid_losses = train(model1, trainloader, valloader, opt1, 100)
test_loss, acc_test = infer(model1, testloader)
print("Final TEST SCORE: loss: {} acc: {}".format(test_loss, acc_test))
training_plot(train_losses, valid_losses)
### LET'S TRAIN the baseline ###
""" baseline with scheduler """
model2 = nn.Sequential(nn.Flatten(),
nn.Linear(784, 20),
nn.ReLU(),
nn.Linear(20, 20),
nn.ReLU(),
nn.Linear(20, 10),
)
model2 = model2.to(device)
opt2 = torch.optim.SGD(model2.parameters(), lr=0.8)
scheduler = torch.optim.lr_scheduler.StepLR(opt2, gamma=0.5, step_size=33)
train_losses, valid_losses = train_step(model2, trainloader, valloader, opt2, scheduler, 100)
test_loss, acc_test = infer(model2, testloader)
print("Final TEST SCORE: loss: {} acc: {}".format(test_loss, acc_test))
training_plot(train_losses, valid_losses)
# """ dropout """
# model3 = nn.Sequential(nn.Linear(784, 20),
# nn.ReLU(),
# nn.Linear(20, 20),
# nn.ReLU(),
# nn.Linear(20, 10), ,
# )
# model3 = model3.to(device)
# model3.apply(init_randn)
# opt3 = torch.optim.SGD(model1.parameters(), lr=0.1)
# train_losses, valid_losses = train(model3, trainloader, valloader, opt1, 150)
# test_loss, acc_test = infer(model3, testloader)
# print("Final TEST SCORE: loss: {} acc: {}".format(test_loss, acc_test))
# training_plot(train_losses, valid_losses)
""" batchnorm """
model4 = nn.Sequential(nn.Flatten(),
nn.Linear(784, 20),
nn.ReLU(),
nn.BatchNorm1d(20),
nn.Linear(20, 20),
nn.ReLU(),
nn.BatchNorm1d(20),
nn.Linear(20, 10),
)
model4 = model4.to(device)
opt4 = torch.optim.SGD(model4.parameters(), lr=0.8, weight_decay=1e-4)
loss = nn.CrossEntropyLoss()
train_losses, valid_losses = train(model4, trainloader, valloader, opt4, 100)
test_loss, acc_test = infer(model4, testloader)
print("Final TEST SCORE: loss: {} acc: {}".format(test_loss, acc_test))
training_plot(train_losses, valid_losses)
"""how to know what is the best learning rate? """
""" trial and error """
### LET'S TRAIN the baseline ###
from collections import defaultdict
def find_best_lr(loader, starting_lr=1e-6, gamma=1.4, trials=5):
lr2loss = defaultdict(list)
for i in range(trials):
torch.manual_seed(i)
net = nn.Sequential(nn.Flatten(),
nn.Linear(784, 20),
nn.ReLU(),
nn.Linear(20, 20),
nn.ReLU(),
nn.Linear(20, 10),
)
net = net.to(device)
        optimizer = torch.optim.SGD(net.parameters(), lr=starting_lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=gamma)
last_loss = -1
for x, y in loader:
            x = x.to(device)
            y = y.to(device)
out = net(x)
optimizer.zero_grad()
loss = F.cross_entropy(out, y)
loss.backward()
lr = optimizer.param_groups[0]['lr']
if last_loss != -1: lr2loss[lr].append(last_loss - loss.cpu().item())
optimizer.step()
scheduler.step()
last_loss = loss.cpu().item()
return lr2loss
""" baseline with scheduler """
print("# of batches: {}".format(len(trainloader)))
lr2loss = find_best_lr(trainloader, starting_lr=1e-6, gamma=1.4, trials=20)
lrs = list(lr2loss.keys())
vals = [sum(lr2loss[i]) for i in lrs]
plt.plot(lrs, vals)
lrs[np.argmax(vals)]
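# The summed per-batch loss deltas plotted above are noisy; smoothing them with
# a short moving average before taking the argmax can make the chosen learning
# rate more stable. (A simple sketch, not part of the original experiment.)
window = 5
smoothed = np.convolve(vals, np.ones(window) / window, mode='same')
print("Best lr after smoothing: {}".format(lrs[np.argmax(smoothed)]))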
### LET'S TRAIN the baseline ###
""" baseline """
model1 = nn.Sequential(nn.Flatten(),
nn.Linear(784, 20),
nn.ReLU(),
nn.Linear(20, 20),
nn.ReLU(),
nn.Linear(20, 10),
)
model1 = model1.to(device)
opt1 = torch.optim.SGD(model1.parameters(), lr=0.182)
train_losses, valid_losses = train(model1, trainloader, valloader, opt1, 100)
test_loss, acc_test = infer(model1, testloader)
print("Final TEST SCORE: loss: {} acc: {}".format(test_loss, acc_test))
training_plot(train_losses, valid_losses)
opt1.param_groups[0]['lr'] = 0.09
train_losses, valid_losses = train(model1, trainloader, valloader, opt1, 15)
test_loss, acc_test = infer(model1, testloader)
print("Final TEST SCORE: loss: {} acc: {}".format(test_loss, acc_test))
```
# Investigating the effect of Company Announcements on their Share Price following COVID-19 (using the S&P 500)
A great deal of speculation about company valuations has arisen since the COrona-VIrus Disease 2019 (COVID-19, or COVID for short) started to impact the stock market (estimated to be the 20$^{\text{th}}$ of February 2020, 2020-02-20). Many investors tried to estimate the impact of the outbreak on businesses and to trade accordingly as fast as possible. In this haste, it is possible that they mispriced the effect of COVID on certain stocks. \
This article lays out a framework to investigate whether the Announcement of Financial Statements after COVID (*id est* (*i.e.*): after 2020-02-20) impacted the price of stocks in any specific industry sector. It will proceed simply by producing a graph of the **movement in average daily close prices for each industry - averaged from the time each company produced a Post COVID Announcement** (i.e.: after they first produced a Financial Statement after 2020-02-20). \
From there, one may stipulate that a profitable investment strategy could consist in going long in stocks of companies (i) that have not yet released an announcement since COVID and (ii) that belong to a sector which the framework below suggests will probably increase in price following such an announcement.
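As a toy illustration of the averaging step at the heart of this framework (entirely synthetic prices and made-up sector labels, not Refinitiv data), one could sketch it with [pandas](https://pandas.pydata.org/):

```python
import pandas as pd

# Synthetic normalised close prices (price minus the price on the day of each
# company's first Post-COVID Announcement), one column per hypothetical stock:
trend = pd.DataFrame({'AAA': [0.0, -1.0, -2.0],
                      'BBB': [0.0, 1.0, 3.0],
                      'CCC': [0.0, 2.0, 1.0]})
# Hypothetical stock-to-sector mapping:
sector = {'AAA': 'Energy', 'BBB': 'Technology', 'CCC': 'Technology'}

# Average movement per sector - averaged over days, then over the sector's stocks:
sector_average = trend.mean().groupby(sector).mean()
print(sector_average)
```

A sector whose average comes out positive would, under the stipulation above, be a candidate for the long strategy.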
## Pre-requisites:
Thomson Reuters Eikon with access to the new Eikon Data APIs. \
Required Python packages: [Refinitiv Eikon Python API](https://developers.refinitiv.com/eikon-apis/eikon-data-api), [Numpy](https://numpy.org/), [Pandas](https://pandas.pydata.org/) and [Matplotlib](https://matplotlib.org/). The built-in Python module [datetime](https://docs.python.org/3/library/datetime.html) and the [dateutil](https://dateutil.readthedocs.io/en/stable/) package are also required.
### Supplementary:
[pickle](https://docs.python.org/3/library/pickle.html): If one wishes to copy and adapt this code, 'pickling' data along the way should help ensure that no data is lost in case of kernel issues.
$ \\ $
## Import libraries
First, we can use the ' platform ' library to show which version of Python we are using:
```
# The ' from ... import ' structure here allows us to only import the module ' python_version ' from the library ' platform ':
from platform import python_version
print("This code runs on Python version " + python_version())
```
$$ \\ $$
We use **Refinitiv's [Eikon Python Application Programming Interface (API)](https://developers.refinitiv.com/eikon-apis/eikon-data-api)** to access financial data. We can access it via the Python library "eikon" that can be installed simply by using $\textit{pip install}$.
```
import eikon as ek
# The key is placed in a text file so that it may be used in this code without showing it itself:
eikon_key = open("eikon.txt","r")
ek.set_app_key(str(eikon_key.read()))
# It is best to close the files we opened in order to make sure that we don't stop any other services/programs from accessing them if they need to:
eikon_key.close()
```
$$ \\ $$
The following modules are imported without printing version numbers: `datetime` is built into Python, while `dateutil` is a small utility package installed alongside pandas.
```
# datetime will allow us to manipulate Western World dates
import datetime
# dateutil will allow us to manipulate dates in equations
import dateutil
```
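As a quick, self-contained illustration (the dates here are hypothetical and chosen only for the example), these two modules allow date arithmetic such as:

```python
import datetime
import dateutil.relativedelta

start = datetime.date(2020, 2, 20)
later = datetime.date(2020, 5, 20)

# Difference between two dates, in days:
print((later - start).days)  # 90

# Calendar-aware offsets with dateutil:
print(start + dateutil.relativedelta.relativedelta(months=3))  # 2020-05-20
```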
$$ \\ $$
numpy is needed for datasets' statistical and mathematical manipulations
```
import numpy
print("The numpy library imported in this code is version: " + numpy.__version__)
```
$$ \\ $$
pandas will be needed to manipulate data sets
```
import pandas
# This line will ensure that all columns of our dataframes are always shown:
pandas.set_option('display.max_columns', None)
print("The pandas library imported in this code is version: " + pandas.__version__)
```
$$ \\ $$
matplotlib is needed to plot graphs of all kinds
```
import matplotlib
# the use of ' as ... ' (specifically here: ' as plt ') allows us to create a shorthand for a module (here: ' matplotlib.pyplot ')
import matplotlib.pyplot as plt
print("The matplotlib library imported in this code is version: " + matplotlib.__version__)
```
$$ \\ $$
## Defining Functions
$$ \\ $$
The cell below defines a function to plot data on one y axis (as opposed to two, one on the right and one on the left).
```
# Using an implicitly registered datetime converter for a matplotlib plotting method is no longer supported by matplotlib. Current versions of pandas require explicitly registering matplotlib converters:
pandas.plotting.register_matplotlib_converters()
def plot1ax(dataset, ylabel = "", title = "", xlabel = "Year",
            datasubset = [0], # datasubset needs to be a list of the index of each column within the dataset that is to be plotted and labelled on the left
datarange = False, # If wanting to plot graph from and to a specific point, make datarange a list of start and end date
linescolor = False, # This needs to be a list of the color of each vector to be plotted, in order they are shown in their dataframe from left to right
figuresize = (12,4), # This can be changed to give graphs of different proportions. It is defaulted to a 12 by 4 (ratioed) graph
facecolor="0.25",# This allows the user to change the background color as needed
grid = True, # This allows us to decide whether or not to include a grid in our graphs
time_index = [], time_index_step = 48, # These two variables allow us to dictate the frequency of the ticks on the x-axis of our graph
legend = True):
    # The if statement below allows for manipulation of the date range that we would like to graph:
if datarange == False:
start_date = str(dataset.iloc[:,datasubset].index[0])
end_date = str(dataset.iloc[:,datasubset].index[-1])
else:
start_date = str(datarange[0])
        # The if statement below allows us to graph to the end of the dataframe if wanted, whatever date that may be:
if datarange[-1] == -1:
end_date = str(dataset.iloc[:,datasubset].index[-1])
else:
end_date = str(datarange[-1])
fig, ax1 = plt.subplots(figsize=figuresize, facecolor=facecolor)
ax1.tick_params(axis = 'both', colors = 'w')
ax1.set_facecolor(facecolor)
fig.autofmt_xdate()
plt.ylabel(ylabel, color ='w')
ax1.set_xlabel(str(xlabel), color = 'w')
if linescolor == False:
        for i in datasubset: # Label each line so that matplotlib can create a legend
ax1.plot(dataset.iloc[:, i].loc[start_date : end_date],
label = str(dataset.columns[i]))
else:
        for i in datasubset: # Label each line so that matplotlib can create a legend
ax1.plot(dataset.iloc[:, i].loc[start_date : end_date],
label = str(dataset.columns[i]),
color = linescolor)
ax1.tick_params(axis='y')
if grid == True:
ax1.grid()
else:
pass
if len(time_index) != 0:
# locs, labels = plt.xticks()
plt.xticks(numpy.arange(len(dataset.iloc[:,datasubset]), step = time_index_step), [i for i in time_index[0::time_index_step]])
else:
pass
ax1.set_title(str(title) + " \n", color='w')
if legend == True:
plt.legend()
elif legend == "underneath":
ax1.legend(loc = 'upper center', bbox_to_anchor = (0.5, -0.3), fancybox = True, shadow = True, ncol = 5)
elif legend != False:
plt.legend().get_texts()[0].set_text(legend)
plt.show()
```
$$ \\ $$
The cell below defines a function that adds a series of daily close prices to the dataframe named 'daily_df' and plots it.
```
# Defining the ' daily_df ' variable before the ' Get_Daily_Close ' function
daily_df = pandas.DataFrame()
def Get_Daily_Close(instrument, # Name of the instrument in a list.
days_back, # Number of days from which to collect the data.
plot_title = False, # If ' = True ', then a graph of the data will be shown.
plot_time_index_step = 30 * 3, # This line dictates the index frequency on the graph/plot's x axis.
col = ""): # This can be changed to name the column of the merged dataframe.
# This instructs the function to use a pre-defined ' daily_df ' variable:
global daily_df
if col == "":
# If ' col ' is not defined, then the column name of the data will be replaced with its instrument abbreviated name followed by " Close Price".
col = str(instrument) + " Close Price"
else:
pass
# This allows for the function to programmatically ensure that all instruments' data are collected - regardless of potential server Timeout Errors.
worked = False
while worked != True:
try:
instrument, err = ek.get_data(instruments = instrument,
fields = [str("TR.CLOSEPRICE(SDate=-" + str(days_back) + ",EDate=0,Frq=D,CALCMETHOD=CLOSE).timestamp"),
str("TR.CLOSEPRICE(SDate=-" + str(days_back) + ",EDate=0,Frq=D,CALCMETHOD=CLOSE)")])
instrument.dropna()
worked = True
except:
# Note that this ' except ' is necessary
pass
instrument = pandas.DataFrame(list(instrument.iloc[:,2]), index = list(instrument.iloc[:,1]), columns = [col])
instrument.index = pandas.to_datetime(instrument.index, format = "%Y-%m-%d")
if plot_title != False:
plot1ax(dataset = instrument.dropna(), ylabel = "Close Price", title = str(plot_title), xlabel = "Year", # legend ="Close Price",
linescolor = "#ff9900", time_index_step = plot_time_index_step, time_index = instrument.dropna().index)
daily_df = pandas.merge(daily_df, instrument, how = "outer", left_index = True, right_index = True)
```
$$ \\ $$
The cell below sets up a function that gets Eikon-recorded Company Announcement data through time for any index (or instrument):
```
def Get_Announcement_For_Index(index_instrument, periods_back, show_df = False, show_list = False):
# This allows the function to collect a list of all constituents of the index
index_issuer_rating, err = ek.get_data(index_instrument, ["TR.IssuerRating"])
index_Announcement_list = []
for i in range(len(index_issuer_rating)):
# This allows for the function to programmatically ensure that all instruments' data are collected - regardless of potential server Timeout Errors.
worked = False
while worked != True:
try: # The ' u ' in ' index_issuer_rating_u ' is for 'unique' as it will be for each unique instrument
index_Announcement_u, err = ek.get_data(index_issuer_rating.iloc[i,0],
["TR.JPINCOriginalAnnouncementDate(SDate=-" + str(periods_back) + ",EDate=0,,Period=FI0,Frq=FI)",
"TR.JPCASOriginalAnnouncementDate(SDate=-" + str(periods_back) + ",EDate=0,,Period=FI0,Frq=FI)",
"TR.JPBALOriginalAnnouncementDate(SDate=-" + str(periods_back) + ",EDate=0,,Period=FI0,Frq=FI)"])
worked = True
except:
# Note that this ' except ' is necessary
pass
index_Announcement_list.append(index_Announcement_u)
index_Instrument = []
index_Income_Announcement = []
index_Cash_Announcement = []
index_Balance_Announcement = []
for i in range(len(index_Announcement_list)):
for j in range(len(index_Announcement_list[i])):
index_Instrument.append(index_Announcement_list[i].iloc[j,0])
index_Income_Announcement.append(index_Announcement_list[i].iloc[j,1])
index_Cash_Announcement.append(index_Announcement_list[i].iloc[j,2])
index_Balance_Announcement.append(index_Announcement_list[i].iloc[j,3])
index_Announcement_df = pandas.DataFrame(columns = ["Instrument",
"Income Statement Announcement Date",
"Cash Flos Statement Announcement Date",
"Balance Sheet Announcement Date"])
index_Announcement_df.iloc[:,0] = index_Instrument
index_Announcement_df.iloc[:,1] = pandas.to_datetime(index_Income_Announcement)
index_Announcement_df.iloc[:,2] = pandas.to_datetime(index_Cash_Announcement)
index_Announcement_df.iloc[:,3] = pandas.to_datetime(index_Balance_Announcement)
if show_df == True:
display(index_Announcement_df)
else:
pass
if show_list == True:
for i in range(len(index_Announcement_list)):
display(index_Announcement_list[i])
else:
pass
return index_Announcement_df, index_Announcement_list
```
$$ \\ $$
## Setting Up Dates
Before starting to investigate data pre- or post-COVID, we need to define the specific time when COVID started to affect stock markets; in this instance, we chose "2020-02-20":
```
COVID_start_date = datetime.datetime.strptime("2020-02-20", '%Y-%m-%d').date()
days_since_COVID = (datetime.date.today() - COVID_start_date).days
```
$$ \\ $$
## Announcements
The cell below collects announcements of companies within the index of choice for the past 3 financial periods. In this article, the Standard & Poor's 500 Index (S&P 500, or SPX for short) is used as an example. It can be used with indices such as the FTSE or DJI instead of the SPX.
```
index_Announcement_df, index_Announcement_list = Get_Announcement_For_Index(index_instrument = ["0#.SPX"],
periods_back = 3,
show_df = False,
show_list = False)
```
Now we can choose only announcements post COVID.
```
Announcement_COVID_date = []
for k in (1,2,3):
index_Instruments_COVID_date = []
index_Announcement_post_COVID_list = []
for i in range(len(index_Announcement_list)):
index_Instrument_COVID_date = []
for j in reversed(index_Announcement_list[i].iloc[:,1]):
try: # Note that ' if (index_Announcement_list[i].iloc[1,1] - COVID_start_date).days >= 0: ' would not work
if (datetime.datetime.strptime(index_Announcement_list[i].iloc[:,1].iloc[-1], '%Y-%m-%d').date() - COVID_start_date).days >= 0:
while len(index_Instrument_COVID_date) == 0:
if (datetime.datetime.strptime(j, '%Y-%m-%d').date() - datetime.datetime.strptime("2020-02-20", '%Y-%m-%d').date()).days >= 0:
index_Instrument_COVID_date.append(j)
else:
index_Instrument_COVID_date.append("NaT")
except:
index_Instrument_COVID_date.append("NaT")
index_Instruments_COVID_date.append(index_Instrument_COVID_date[0])
Instruments_Announcement_COVID_date = pandas.DataFrame(index_Instruments_COVID_date, index = index_Announcement_df.Instrument.unique(), columns = ["Date"])
Instruments_Announcement_COVID_date.Date = pandas.to_datetime(Instruments_Announcement_COVID_date.Date)
Announcement_COVID_date.append(Instruments_Announcement_COVID_date)
Instruments_Income_Statement_Announcement_COVID_date = Announcement_COVID_date[0]
Instruments_Income_Statement_Announcement_COVID_date.columns = ["Date of the First Income Statement Announced after COVID"]
Instruments_Cash_Flow_Statement_Announcement_COVID_date = Announcement_COVID_date[1]
Instruments_Cash_Flow_Statement_Announcement_COVID_date.columns = ["Date of the First Cash Flow Statement Announced after COVID"]
Instruments_Balance_Sheet_COVID_date = Announcement_COVID_date[2]
Instruments_Balance_Sheet_COVID_date.columns = ["Date of the First Balance Sheet Announced after COVID"]
```
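The nested loops above are intricate; on toy data (hypothetical dates for a single instrument), the selection rule reduces to "keep the earliest announcement date on or after the COVID start date, else NaT":

```python
import pandas as pd

covid_start = pd.Timestamp("2020-02-20")
# Hypothetical announcement dates for one instrument, oldest first:
dates = pd.to_datetime(["2019-08-01", "2019-11-01", "2020-03-15", "2020-06-15"])

# Keep only dates on or after the COVID start, then take the earliest (or NaT):
post_covid = [d for d in dates if (d - covid_start).days >= 0]
first_post_covid = post_covid[0] if len(post_covid) != 0 else pd.NaT
print(first_post_covid)  # 2020-03-15 00:00:00
```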
$$ \\ $$
## Daily Price
### Post COVID
The cell below collects Daily Close Prices for all relevant instruments in the chosen index.
```
for i in index_Announcement_df.iloc[:,0].unique():
Get_Daily_Close(i, days_back = days_since_COVID)
```
Some instruments might have been added to the index midway through our time period of choice. They are the ones below:
```
removing = [i.split()[0] + " Close Price" for i in daily_df.iloc[0,:][daily_df.iloc[0,:].isna()].index]
print("We will be removing " + ", ".join(removing) + " from our dataframe")
```
The cell below will remove them to make sure that they do not skew our statistics later on in the code.
```
# This line removes instruments that were added midway into the index
daily_df_no_na = daily_df.drop(removing, axis = 1).dropna()
```
Now we can focus on stock price movements alone.
```
daily_df_trend = pandas.DataFrame(columns = daily_df_no_na.columns)
for i in range(len(pandas.DataFrame.transpose(daily_df_no_na))):
daily_df_trend.iloc[:,i] = daily_df_no_na.iloc[:,i] - daily_df_no_na.iloc[0,i]
```
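The column-by-column loop above can equivalently be written as a single vectorised subtraction of the first row, which pandas broadcasts across all columns (a sketch on toy numbers):

```python
import pandas as pd

prices = pd.DataFrame({'AAA Close Price': [10.0, 12.0, 11.0],
                       'BBB Close Price': [50.0, 49.0, 53.0]})

# Subtract each column's first value to get movements relative to day one:
trend = prices - prices.iloc[0]
print(trend)
```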
The following cell displays plots to visualise our data thus far.
```
datasubset_list = []
for i in range(len(daily_df_no_na.columns)):
datasubset_list.append(i)
plot1ax(dataset = daily_df_no_na,
ylabel = "Close Price",
title = "Index Constituents' Close Prices",
xlabel = "Date",
legend = False,
datasubset = datasubset_list)
plot1ax(dataset = daily_df_trend, legend = False,
ylabel = "Normalised Close Price",
title = "Index Constituents' Change in Close Prices",
datasubset = datasubset_list, xlabel = "Date",)
```
The graph above shows the change in constituent companies' close prices since COVID.
$ \\ $
## Saving our data
The cell below saves variables to a 'pickle' file to speed up subsequent runs of this code if they prove necessary.
```
# pip install pickle-mixin
import pickle
pickle_out = open("SPX.pickle","wb")
pickl = (COVID_start_date, days_since_COVID,
index_Announcement_df, index_Announcement_list,
Announcement_COVID_date,
Instruments_Income_Statement_Announcement_COVID_date,
Instruments_Cash_Flow_Statement_Announcement_COVID_date,
Instruments_Balance_Sheet_COVID_date,
daily_df, daily_df_no_na,
daily_df_trend, datasubset_list)
pickle.dump(pickl, pickle_out)
pickle_out.close()
```
The cell below can be run to load these variables back into the kernel:
```
# pickle_in = open("SPX.pickle","rb")
# COVID_start_date, days_since_COVID, index_Announcement_df, index_Announcement_list, Announcement_COVID_date, Instruments_Income_Statement_Announcement_COVID_date, Instruments_Cash_Flow_Statement_Announcement_COVID_date, Instruments_Balance_Sheet_COVID_date, daily_df, daily_df_no_na, daily_df_trend, datasubset_list = pickle.load(pickle_in)
```
$$ \\ $$
## Post-COVID-Announcement Price Insight
Now we can start investigating price changes after the first Post-COVID-Announcement of each company in our dataset.
```
# This is just to demarcate the code before and after this point
daily_df2 = daily_df_no_na
```
The cell below converts the index of our data to date objects so that we can use them in simple date arithmetic.
```
date_in_date_format = []
for k in range(len(daily_df2)):
date_in_date_format.append(daily_df2.index[k].date())
daily_df2.index = date_in_date_format
```
The cell below extracts the abbreviated instrument name from each column of our dataset.
```
daily_df2_instruments = []
for i in daily_df2.columns:
daily_df2_instruments.append(str.split(i)[0])
```
Now we collect daily prices only for dates on or after the first Post-COVID-Announcement of each instrument of interest:
```
daily_df2_post_COVID_announcement = pandas.DataFrame()
for i,j in zip(daily_df2.columns, daily_df2_instruments):
daily_df2_post_COVID_announcement = pandas.merge(daily_df2_post_COVID_announcement,
daily_df2[i][daily_df2.index >= Instruments_Income_Statement_Announcement_COVID_date.loc[j].iloc[0].date()],
how = "outer", left_index = True, right_index = True) # Note that the following would not work: ' daily_df2_post_COVID_announcement[i] = daily_df2[i][daily_df2.index >= Instruments_Income_Statement_Announcement_COVID_date.loc[j].iloc[0].date()] '
```
Now we can focus on the trend/change in those prices
```
daily_df2_post_COVID_announcement_trend = pandas.DataFrame()
for i in daily_df2.columns:
try:
daily_df2_post_COVID_announcement_trend = pandas.merge(daily_df2_post_COVID_announcement_trend,
daily_df2_post_COVID_announcement.reset_index()[i].dropna().reset_index()[i] - daily_df2_post_COVID_announcement.reset_index()[i].dropna().iloc[0],
how = "outer", left_index = True, right_index = True)
except:
daily_df2_post_COVID_announcement_trend[i] = numpy.nan
```
And plot them
```
plot1ax(dataset = daily_df2_post_COVID_announcement_trend,
ylabel = "Normalised Close Price",
title = "Index Constituents' Trend In Close Prices From There First Income Statement Announcement Since COVID\n" +
"Only companies that announced an Income Statement since the start of COVID (i.e.:" + str(COVID_start_date) + ") will show",
xlabel = "Days since first Post-COVID-Announcement",
legend = False, # change to "underneath" to see list of all instruments and their respective colors as per this graph's legend.
datasubset = datasubset_list)
```
Some companies have lost or gained a great deal following their first Post-COVID-Announcement, but most seem to have changed by less than 50 United States Dollars (USD).
$$ \\ $$
### Post COVID Announcement Price Change
The cell below simply gathers all stocks that decreased, increased, or did not change in price since their first Post-COVID-Announcement into an easy-to-digest [pandas](https://pandas.pydata.org/) table. Note that if they haven't had a Post-COVID-Announcement yet, they will show as unchanged.
```
COVID_priced_in = [[],[],[]]
for i in daily_df2_post_COVID_announcement_trend.columns:
if str(sum(daily_df2_post_COVID_announcement_trend[i].dropna())) != "nan":
if numpy.mean(daily_df2_post_COVID_announcement_trend[i].dropna()) < 0:
COVID_priced_in[0].append(str.split(i)[0])
if numpy.mean(daily_df2_post_COVID_announcement_trend[i].dropna()) == 0:
COVID_priced_in[1].append(str.split(i)[0])
if numpy.mean(daily_df2_post_COVID_announcement_trend[i].dropna()) > 0:
COVID_priced_in[2].append(str.split(i)[0])
COVID_priced_in = pandas.DataFrame(COVID_priced_in, index = ["Did not have the negative impact of COVID priced in enough",
"Had the effects of COVID priced in (or didn't have time to react to new company announcements)",
"Had a price that overcompensated the negative impact of COVID"])
COVID_priced_in
```
$$ \\ $$
## Informative Powers of Announcements Per Sector
We will now investigate the insight behind our analysis per industry sector.
The two cells below allow us to see the movement in daily price of companies with Post-COVID-Announcements per sector.
```
ESector, err = ek.get_data(instruments = [i.split()[0] for i in daily_df2_post_COVID_announcement_trend.dropna(axis = "columns", how = "all").columns],
fields = ["TR.TRBCEconSectorCode",
"TR.TRBCBusinessSectorCode",
"TR.TRBCIndustryGroupCode",
"TR.TRBCIndustryCode",
"TR.TRBCActivityCode"])
ESector["TRBC Economic Sector"] = numpy.nan
ESector_list = [[],[],[],[],[],[],[],[],[],[]]
Sectors_list = ["Energy", "Basic Materials", "Industrials", "Consumer Cyclicals",
"Consumer Non-Cyclicals", "Financials", "Healthcare",
"Technology", "Telecommunication Services", "Utilities"]
for i in range(len(ESector["TRBC Economic Sector Code"])):
    for j,k in zip(range(0, 10), Sectors_list):
        if ESector.iloc[i,1] == (50 + j):
            ESector.iloc[i,6] = k
            ESector_list[j].append(ESector.iloc[i,0])
ESector_df = numpy.transpose(pandas.DataFrame(data = [ESector_list[i] for i in range(len(ESector_list))],
index = Sectors_list))
ESector_df_by_Sector = []
for k in Sectors_list:
    ESector_df_by_Sector.append(numpy.average([numpy.average(daily_df2_post_COVID_announcement_trend[i + " Close Price"].dropna()) for i in [j for j in ESector_df[k].dropna()]]))
ESector_average = pandas.DataFrame(data = ESector_df_by_Sector,
columns = ["Average of Close Prices Post COVID Announcement"],
index = Sectors_list)
ESector_average
```
The `ESector_average` table above shows the post-COVID-Announcement Close Prices for each company, averaged per sector.
$$ \\ $$
$$ \\ $$
The cells below now allow us to visualise this trend in a graph on an industry sector basis.
```
Sector_Average = []
for k in ESector_average.index:
    Sector_Average1 = []
    for j in range(len(pandas.DataFrame([daily_df2_post_COVID_announcement_trend[i + " Close Price"].dropna() for i in ESector_df[k].dropna()]).columns)):
        Sector_Average1.append(numpy.average(pandas.DataFrame([daily_df2_post_COVID_announcement_trend[i + " Close Price"].dropna() for i in ESector_df[k].dropna()]).iloc[:,j].dropna()))
    Sector_Average.append(Sector_Average1)
Sector_Average = numpy.transpose(pandas.DataFrame(Sector_Average, index = ESector_average.index))
```
The cell below collects and saves our data before continuing, so that we don't have to request data from Eikon again were we to manipulate the same content later (just in case).
```
pickle_out = open("SPX2.pickle","wb")
pickl = (COVID_start_date, days_since_COVID,
index_Announcement_df, index_Announcement_list,
Announcement_COVID_date,
Instruments_Income_Statement_Announcement_COVID_date,
Instruments_Cash_Flow_Statement_Announcement_COVID_date,
Instruments_Balance_Sheet_COVID_date,
daily_df, daily_df_no_na,
daily_df_trend, datasubset_list)
pickle.dump(pickl, pickle_out)
pickle_out.close()
plot1ax(dataset = Sector_Average, ylabel = "Price Movement",
        title = "Index Constituents' Trend In Close Prices From Their First Income Statement Announcement Since COVID Sorted By Sector\n" +
                "Only companies that announced an Income Statement since the start of COVID (i.e.: " + str(COVID_start_date) + ") will show",
        xlabel = "Trading Day", legend = "underneath",
        datasubset = [i for i in range(len(Sector_Average.columns))])
```
$$ \\ $$
# Conclusion
Using S&P 500 (i.e.: SPX) data, this last graph can provide a holistic picture of industries in the United States of America (USA). We can see a great negative change in instruments’ daily close prices for stocks in the Consumer Cyclical, Utilities, Healthcare and Industrial markets. This is surprising because these are the industries that the media suggested would be most hindered by COVID before their financial statement announcements; investors thus ought to have priced the negative effects of the disease on these market sectors appropriately. \
The graph suggests that it may be profitable to short companies within these sectors just before they are due to release their first post-COVID Financial Statements - but naturally it does not account for future changes, trade costs or other variables external to this investigation. \
Companies in the Financial sector seem to have performed adequately. Reasons for movements in this sector can be complex and numerous due to their exposure to all other sectors. \
Tech companies seem to have had the impact of COVID priced in prior to the release of their financial statements. One may postulate the impact of COVID on their share price was actually positive as people rush to online infrastructures they support during confinement. \
Companies dealing with Basic Material have performed relatively well. This may be an indication that investors are losing confidence in all but sectors that offer physical goods in supply chains (rather than in consumer goods) - a retreat to fundamentals in a time of uncertainty. \
**BUT** one must use both the ESector_average table and the last graph before coming to any conclusion. The ESector_average - though simple - can provide more depth to our analysis. Take the Healthcare sector for example: One may assume – based on the last graph alone – that this sector is performing badly when revealing information via Announcements; but the ESector_average shows a positive ‘Average of Close Prices Post COVID Announcement’. This is because only very few companies within the Healthcare sector published Announcements before May 2020, and the only ones that did performed badly, skewing the data negatively on the graph.
## References
You can find more detail regarding the Eikon Data API and related technologies for this notebook from the following resources:
* [Refinitiv Eikon Data API page](https://developers.refinitiv.com/eikon-apis/eikon-data-api) on the [Refinitiv Developer Community](https://developers.refinitiv.com/) web site.
* [Eikon Data API Quick Start Guide page](https://developers.refinitiv.com/eikon-apis/eikon-data-api/quick-start).
* [Eikon Data API Tutorial page](https://developers.refinitiv.com/eikon-apis/eikon-data-api/learning).
* [Python Quants Video Tutorial Series for Eikon API](https://community.developers.refinitiv.com/questions/37865/announcement-new-python-quants-video-tutorial-seri.html).
* [Eikon Data API Python Reference Guide](https://docs-developers.refinitiv.com/1584688434238/14684/book/en/index.html).
* [Eikon Data API Troubleshooting article](https://developers.refinitiv.com/article/eikon-data-apipython-troubleshooting-refinitiv).
* [Pandas API Reference](https://pandas.pydata.org/docs/reference/index.html).
For any question related to this example or Eikon Data API, please use the Developers Community [Q&A Forum](https://community.developers.refinitiv.com/spaces/92/eikon-scripting-apis.html).
# Neural Network for Hadronic Top Reconstruction
This file creates a feed-forward binary classification neural network for hadronic top reconstruction by classifying quark jet triplets as being from a top quark or not.
```
from __future__ import print_function, division
import pandas as pd
import numpy as np
import torch as th
import torch.nn as nn
from torch.autograd import Variable
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from sklearn.metrics import f1_score, roc_auc_score
from nn_classes import *
import utils
```
## Load the Datasets
Here I load the datasets using my custom <code>Dataset</code> class. This ensures that the data is scaled properly and then the PyTorch <code>DataLoader</code> shuffles and iterates over the dataset in batches.
```
trainset = utils.CollisionDataset("ttH_hadT_cut_raw_train.csv", header=0, target_col=0, index_col=0)
valset = utils.CollisionDataset("ttH_hadT_cut_raw_val.csv", header=0, target_col=0, index_col=0, scaler=trainset.scaler)
testset = utils.CollisionDataset("ttH_hadT_cut_raw_test.csv", header=0, target_col=0, index_col=0, scaler=trainset.scaler)
trainloader = DataLoader(trainset, batch_size=512, shuffle=True, num_workers=5)
testloader = DataLoader(testset, batch_size=512, shuffle=True, num_workers=5)
```
## Initialize the NN, Loss Function, and Optimizer
```
input_dim = trainset.shape[1]
net = DHTTNet(input_dim)
criterion = nn.BCELoss()
optimizer = optim.Adam(net.parameters())
```
## Train the Neural Network
```
train_X = Variable(trainset[:][0])
train_y = trainset[:][1].numpy()
val_X = Variable(valset[:][0])
val_y = valset[:][1].numpy()
train_discriminant = net(train_X).data.numpy()
val_discriminant = net(val_X).data.numpy()
val_curve = [(roc_auc_score(train_y, train_discriminant), roc_auc_score(val_y, val_discriminant))]
for epoch in range(1, 4):
    if epoch % 2 == 0: print(epoch)
    for batch in trainloader:
        inputs, targets = Variable(batch[0]), Variable(batch[1])
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
    # Evaluate the model on the training set
    train_discriminant = net(train_X).data.numpy()
    # Evaluate the model on the validation set
    val_discriminant = net(val_X).data.numpy()
    # Add the ROC AUC to the curve
    val_curve.append((roc_auc_score(train_y, train_discriminant), roc_auc_score(val_y, val_discriminant)))
print("Done")
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
plt.plot(range(1, len(val_curve)+1), val_curve)
ax.set_ylabel("ROC AUC")
ax.set_xlabel("Epochs Finished")
ax.set_title("Validation Curves")
handles, _ = ax.get_legend_handles_labels()
labels = ["Training", "Validation"]
plt.legend(handles, labels, loc='lower right')
fig.set_size_inches(18, 10)
fig.savefig("hello.png")
```
## Evaluate the Model's Accuracy
```
correct = 0
total = 0
# For Binary
for data in testloader:
    images, labels = data['input'].float(), data['target'].long()
    outputs = net(Variable(images))
    predicted = th.round(outputs.data).long()
    total += labels.size(0)
    correct += (predicted.view(-1, 1) == labels.view(-1, 1)).sum()
print('Accuracy of the network on the {} samples: {:f} %'.format(len(testset), (
    100 * correct / total)))
```
## Save the Model
Here we only serialize the model parameters, i.e. the weights and such, to be loaded again later as follows:
```python
model = DHTTNet(<input_dim>) # Should be the same input dimensions as before.
model.load_state_dict(th.load(<Path>))
```
```
th.save(net.state_dict(), "neural_net.torch")
```
# A brief, basic introduction to Python for scientific computing - Chapter 3
## Background/prerequisites
This is part of a brief introduction to Python; please find links to the other chapters and authorship information [here](https://github.com/MobleyLab/drug-computing/blob/master/other-materials/python-intro/README.md) on GitHub. This information will assume you have been through the previous chapters already.
For best results with these notebooks, we recommend using the [Table of Contents nbextension](https://github.com/ipython-contrib/jupyter_contrib_nbextensions/tree/master/src/jupyter_contrib_nbextensions/nbextensions/toc2) which will provide you with a "Navigate" menu in the top menu bar which, if dragged out, will allow you to easily jump between sections in these notebooks. To install, in your command prompt, use:
* `conda install -c conda-forge jupyter_contrib_nbextensions`
* `jupyter contrib nbextension install --user`
* Open `jupyter notebook` and click the `nbextensions` button to enable `Table of Contents`.
(See the [jupyter nbextensions documentation](https://github.com/ipython-contrib/jupyter_contrib_nbextensions) for more information on using these.)
This notebook continues our discussion of basic object types in Python.
## Tuples and immutable versus mutable objects
### Tuples are immutable objects
Tuples are similar to lists but are immutable. That is, once they are created, they cannot be changed. Tuples are created using parenthesis instead of brackets:
```
t = (1, 2, 3)
t[1] = 0
```
Like lists, tuples can contain any object, including other tuples and lists:
```
t = (0., 1, 'two', [3, 4], (5,6) )
```
### Tuples are similar to lists but faster
The advantage of tuples is that they are faster than lists, and Python often uses them behind the scenes to achieve efficient passing of data and function arguments. In fact, one can write a comma separated list without any enclosing characters and Python will, by default, interpret it as a tuple:
```
1, 2, 3
"hello", 5., [1, 2, 3]
```
### Strings are immutable objects too
Tuples aren't the only immutable objects in Python. Strings are also immutable:
```
s = "There are 5 cars."
s[10] = "6"
```
To modify strings in this way, we instead need to use slicing to create a new string and then store it:
```
s = s[:10] + "6" + s[11:]
s
```
Floats, integers, and complex numbers are also immutable; however, this is not obvious to the programmer. For these types, what immutable means is that new numeric values always involve the creation of a new spot in memory for a new variable, rather than the modification of the memory used for an existing variable.
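This can be made visible with the builtin `id`, which reports where an object lives in memory; rebinding a number points the name at a different object:

```python
a = 1
before = id(a)  # identity of the int object 1
a += 1          # does not modify 1; rebinds a to the object 2
after = id(a)
print(before == after)  # False: a now points at a different object
```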
## Assignment and name binding
### Variable assignment in Python is interesting
Python treats variable assignment slightly differently than what you might expect from other programming languages where variables must be declared beforehand so that a corresponding spot in memory is available to manipulate. Consider the assignment:
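The assignment in question is simply:

```python
a = 1
```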
In other programming languages, this statement might be read as "put the value 1 in the spot in memory corresponding to the variable a." In Python, however, this statement says something quite different: "create a spot in memory for an integer variable, give it a value 1, and then point the variable a to it." This behavior is called name binding in Python. It means that most variables act like little roadmaps to spots in memory, rather than designate specific spots themselves.
Consider the following:
```
a = [1, 2, 3]
b = a
a[1] = 0
a
b
```
In the second line, Python bound the variable b to the same spot in memory as the variable `a`. Notice that it did not copy the contents of `a`, and thus any modifications to `a` subsequently affect `b` also. This can sometimes be a convenience and speed execution of a program.
### `copy` can be used when objects need to be copied
If an explicit copy of an object is needed, one can use the copy module:
```
import copy
a = [1, 2, 3]
b = copy.copy(a)
a[1] = 0
a
b
```
Here, the `copy.copy` function makes a new location in memory and copies the contents of `a` to it, and then `b` is pointed to it. Since `a` and `b` now point to separate locations in memory, modifications to one do not affect the other.
Actually, the `copy.copy` function only copies the outermost structure of a list. If a list contains another list, or objects with deeper levels of variables, the `copy.deepcopy` function must be used to make a full copy.
```
import copy
a = [1, 2, [3, 4]]
b = copy.copy(a)
c = copy.deepcopy(a)
a[2][1] = 5
a
b
c
```
The `copy` module should be used with great caution, which is why it is a module and not part of the standard command set. The vast majority of Python programs do not need this function if one programs in a Pythonic style—that is, if one uses Python idioms and ways of doing things. If you find yourself using the `copy` module frequently, chances are that your code could be rewritten to read and operate much cleaner.
### Not all types of objects need copying because of mutability/immutability
The following example may now puzzle you:
```
a = 1
b = a
a = 2
a
b
```
Why did b not also change? The reason has to do with immutable objects. Recall that values are immutable, meaning they cannot be changed once in memory. In the second line, `b` points to the location in memory where the value "1" was created in the first line. In the third line, a new value "2" is created in memory and `a` is pointed to it—the old value "1" is not modified at all because it is immutable. As a result, `a` and `b` then point to different parts of memory. In the previous example using a list, the list was actually modified in memory because it is mutable.
Similarly, consider the following example:
```
a = 1
b = a
a = []
a.append(1)
a
b
```
Here in the third line, a is assigned to point at a new empty list that is created in memory.
### Rules of thumb for assignments in Python
The general rules of thumb for assignments in Python are the following:
* Assignment using the equals sign ("=") means point the variable name on the left hand side to the location in memory on the right hand side.
* If the right hand side is a variable, point the left hand side to the same location in memory that the right hand side points to. If the right hand side is a new object or value, create a new spot in memory for it and point the left hand side to it.
* Modifications to a mutable object will affect the corresponding location in memory and hence any variable pointing to it. Immutable objects cannot be modified and usually involve the creation of new spots in memory.
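A compact demonstration of all three rules:

```python
a = [1, 2]
b = a           # b points at the same list object as a
b.append(3)     # mutating the shared list...
print(a)        # [1, 2, 3]: the change is visible through a as well
x = 7
y = x           # y points where x points
x = 8           # rebinding x points it at a new object
print(y)        # 7: y is unaffected
```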
### `is` tests whether variables point to the same object
It is possible to determine if two variable names in Python are pointing to the same value or object in memory using the is statement:
```
a = [1, 2, 3]
b = a
a is b
b = [1, 2, 3]
a is b
```
In the next to the last line, a new spot in memory is created for a new list and `b` is assigned to it. This spot is distinct from the area in memory to which a points and thus the is statement returns `False` when `a` and `b` are compared, even though their data is identical.
## Garbage collection and memory use in Python
One might wonder if Python is memory-intensive given the frequency with which it must create new spots in memory for new objects and values. Fortunately, Python handles memory management quite transparently and intelligently. In particular, it uses a technique called garbage collection. This means that for every spot in memory that Python creates for a value or object, it keeps track of how many variable names are pointing at it. When no variable name any longer points to a given spot, Python automatically deletes the value or object in memory, freeing its memory for later use. Consider this example:
```
a = [1, 2, 3, 4] #a points to list 1
b = [2, 3, 4, 5] #b points to list 2
c = a #c points to list 1
a = b #a points to list 2
c = b[1] #c points '3'; list 1 deleted in memory
```
In the last line, there are no longer any variables that point to the first list and so Python automatically deletes it from memory. One can explicitly delete a variable using the `del` statement:
```
a = [1, 2, 3, 4]
del a
```
This will delete the variable name a. In general, however, it does not delete the object to which a points unless a is the only variable pointing to it and Python's garbage-collecting routines kick in. Consider:
```
a = [1, 2, 3, 4]
b = a
del a
b
```
## Multiple assignment
### Multiple assignment looks odd at first, but is frequently used.
Lists and tuples enable multiple items to be assigned at the same time. Consider the following example using lists:
```
[a, b, c] = [1, 5, 9]
a
b
c
```
In this example, Python assigned variables by lining up elements in the lists on each side. The lists must be the same length, or an error will be returned.
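For instance, three names cannot be matched to two values:

```python
try:
    [a, b, c] = [1, 5]  # three names, two values
    mismatch_raised = False
except ValueError:
    mismatch_raised = True
print(mismatch_raised)  # True
```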
Tuples are more efficient for this purpose and are usually used instead of lists for multiple assignments:
```
(a, b, c) = (5, "hello", [1, 2])
a
b
c
```
However, since Python will interpret any non-enclosed list of values separated by commas as a tuple it is more common to see the following, equivalent statement:
```
a, b, c = 5, "hello", [1, 2]
```
Here, each side of the equals sign is interpreted as a tuple and the assignment proceeds as before.
### Functions often use multiple assignment
The preceding notation is particularly helpful for functions that return multiple values. We will discuss this in greater detail later, but here is a preview example of a function returning two values:
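A minimal sketch of such a function (the name `min_max` is just an illustration):

```python
def min_max(values):
    # returns two values, which Python packs into one tuple
    return min(values), max(values)

low, high = min_max([4, 1, 9, 2])  # multiple assignment unpacks them
print(low, high)  # 1 9
```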
Technically, the function returns one thing – a tuple containing two values. However, the multiple assignment notation allows us to treat it as two sequential values. Alternatively, one could write this statement as:
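Using the builtin `divmod` (which returns a quotient and a remainder) as a stand-in, the single-assignment form looks like:

```python
returned = divmod(7, 2)  # no unpacking: returned holds the whole tuple
print(returned)          # (3, 1)
print(type(returned))    # <class 'tuple'>
```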
In this case, `returned` would be a tuple containing two values.
### List comprehensions can use multiple assignment
Because of multiple assignment, list comprehensions can also iterate over multiple values:
```
l = [(1,2), (3,4), (5,6)]
[a+b for (a,b) in l]
```
In this example, the tuple (a,b) is assigned to each item in l, in sequence. Since l contains tuples, this amounts to assigning a and b to individual tuple members. We could have done this equivalently in the following, less elegant way:
```
[t[0] + t[1] for t in l]
```
Here, t is assigned to the tuple and we access its elements using bracket indexing. A final alternative would have been:
```
[sum(t) for t in l]
```
A common use of multiple assignment is to swap variable values:
```
a = 1
b = 5
a, b = b, a
a
b
```
## String functions and manipulation
### Python is particularly powerful for working with strings
Python's string processing functions make it enormously powerful and easy to use for processing string and text data, particularly when combined with the utility of lists. Every string in Python (like every other variable) is an object. String functions are member functions of these objects, accessed using dot notation.
Keep in mind two very important points with these functions: (1) strings are immutable, so functions that modify strings actually return new strings that are modified versions of the originals; and (2) all string functions are case sensitive so that `'this'` is recognized as a different string than `'This'`.
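Both points in one small example:

```python
s = "This"
t = s.lower()  # point (1): returns a new string; s itself is unchanged
print(s, t)    # This this
print("this" == "This")  # False; point (2): comparisons are case sensitive
```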
Strings can be sliced just like lists. This makes it easy to extract substrings:
```
s = "This is a string"
s[:4]
"This is a string"[-6:]
```
### Strings can be split or joined
Strings can also be split apart into lists. The split function will automatically split strings wherever it finds whitespace (e.g., a space or a line break):
```
"This is a string.\nHello.".split()
```
Alternatively, one can split a string wherever a particular substring is encountered:
```
"This is a string.".split('is')
```
The opposite of the split function is the join function, which takes a list of strings and joins them together with a common separation string. This function is actually called as a member function of the separation string, not of the list to be joined:
```
l = ['This', 'is', 'a', 'string.', 'Hello.']
" ".join(l)
", ".join(["blue", "red", "orange"])
```
The join function can be used with a zero-length string:
```
"".join(["house", "boat"])
```
To remove extra beginning and ending whitespace, use the strip function:
```
" string ".strip()
"string\n\n ".strip()
```
### String replacement is useful
The replace function will make a new string in which all specified substrings have been replaced:
```
"We code in Python. We like it.".replace("We", "You")
```
It is possible to test if a substring is present in a string and to get the index of the first character in the string where the substring starts:
```
s = "This is a string."
"is" in s
s.index("is")
s.index("not")
```
This last one raises a `ValueError` exception since there is no index for `"not"`.
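When an exception is unwelcome, the related `find` function returns -1 for a missing substring instead of raising:

```python
s = "This is a string."
print(s.find("is"))   # 2: index of the first occurrence
print(s.find("not"))  # -1: substring absent, but no exception
```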
### Justification is handled by special functions, as is capitalization
Sometimes you need to left- or right-justify strings within a certain field width, padding them with extra spaces as necessary. There are two functions for doing that:
```
s = "apple".ljust(10) + "orange".rjust(10) + "\n" \
+ "grape".ljust(10) + "pear".rjust(10)
print(s)
```
There are a number of functions for manipulating capitalization:
```
s = "this is a String."
s.lower()
s.upper()
s.capitalize()
s.title()
```
### Specific functions provide string tests
Finally, there are a number of very helpful utilities for testing strings. One can determine if a string starts or ends with specified substrings:
```
s = "this is a string."
s.startswith("th")
s.startswith("T")
s.endswith(".")
```
You can also test the kind of contents in a string. To see if it contains all alphabetical characters,
```
"string".isalpha()
"string.".isalpha()
```
Similarly, you can test for all numerical characters:
```
"12834".isdigit()
"50 cars".isdigit()
```
## Dictionaries
### Basic dictionaries and keys
Dictionaries are another type in Python that, like lists, are collections of objects. Unlike lists, dictionaries have no ordering. Instead, they associate keys with values similar to that of a database. To create a dictionary, we use braces. The following example creates a dictionary with three items:
```
d = {"city":"Irvine", "state":"CA", "zip":"92697"}
```
Here, each element of a dictionary consists of two parts that are entered in key:value syntax. The keys are like labels that will return the associated value. Values can be obtained by using bracket notation:
```
d["city"]
d["zip"]
d["street"]
```
Notice that a nonexistent key will return an error.
### Dictionary keys are flexible
Dictionary keys do not have to be strings. They can be any immutable object in Python: integers, tuples, or strings. Dictionaries can contain a mixture of these. Values are not restricted at all; they can be any object in Python: numbers, lists, modules, functions, anything.
```
d = {"one" : 80.0, 2 : [0, 1, 1], 3 : (-20,-30), (4, 5) : 60}
d[(4,5)]
d[2]
```
The following example creates an empty dictionary:
```
d = {}
```
Items can be added to dictionaries using assignment and a new key. If the key already exists, its value is replaced:
```
d = {"city":"Irvine", "state":"CA"}
d["city"] = "Costa Mesa"
d["street"] = "Bristol"
d
```
To delete an element from a dictionary, use the del statement:
```
del d["street"]
```
### Additional dictionary operations
One tests if a key is in a dictionary using `in`:
```
d = {"city":"Irvine", "state":"CA"}
"city" in d
```
The size of a dictionary is given by the len function:
```
len(d)
```
To remove all elements from a dictionary, use the clear object function:
```
d = {"city":"Irvine", "state":"CA"}
d.clear()
d
```
One can obtain all keys and values (in no particular order):
```
d = {"city":"Irvine", "state":"CA"}
d.keys()
d.values()
```
Alternatively, one can get (key,value) tuples for the entire dictionary:
```
d.items()
```
Similarly, it is possible to create a dictionary from a list of two-tuples:
```
l = [("street", "Peltason"), ("school", "UCI")]
dict(l)
```
Finally, dictionaries provide a method to return a default value if a given key is not present:
```
d = {"city":"Irvine", "state":"CA"}
d.get("city", "Costa Mesa")
d.get("zip", 92617)
```
## Conditional (`if`) statements
### Basic usage
`if` statements allow conditional execution. Here is an example:
```
x = 2
if x > 3:
    print("greater than three")
elif x > 0:
    print("greater than zero")
else:
    print("less than or equal to zero")
```
Notice that the first testing line begins with `if`, the second `elif` meaning 'else if', and the third with `else`. Each of these is followed by a colon with the corresponding commands to execute. Items after the colon are indented. For `if` statements, both `elif` and `else` are optional.
### Spacing and indentation
A very important concept in Python is that spacing and indentations carry syntactical meaning. That is, they dictate how to execute statements. Colons occur whenever there is a set of sub-commands after an if statement, loop, or function definition. All of the commands that are meant to be grouped together after the colon must be indented by the same amount. Python does not specify how much to indent, but only requires that the commands be indented in the same way. Consider:
```
if 1 < 3:
    print("line one")
    print("line two")
if 1 < 3:
        print("line one")
        print("line two")
```
### How many spaces?
It is typical to indent four spaces after each colon. (Tangent: If you use tabs to indent occasionally, you may wish to set your text editor to automatically convert tabs to spaces to avoid problems down the road.)
Ultimately Python's use of syntactical whitespace helps make its programs look cleaner and more standardized.
Any statement or function returning a Boolean `True` or `False` value can be used in an if statement. The number 0 is also interpreted as `False`, while any other number is considered `True`. Empty lists and objects return `False`, whereas non-empty ones are `True`.
```
d = {}
if d:
    print("Dictionary is not empty.")
else:
    print("Dictionary is empty.")
```
Single `if` statements (without `elif` or `else` constructs) that execute a single command can be written in one line without indentation:
```
if 5 < 10: print("Five is less than ten.")
```
### Nested `if` statements
Finally, `if` statements can be nested using indentation:
```
s = "chocolate chip"
if "mint" in s:
    print("We do not sell mint.")
elif "chocolate" in s:
    if "ripple" in s:
        print("We are all out of chocolate ripple.")
    elif "chip" in s:
        print("Chocolate chip is our most popular.")
```
## `for` loops
### Basic usage
Like other programming languages, Python provides a mechanism for looping over consecutive values. Unlike many languages, however, Python's loops do not intrinsically iterate over integers, but rather elements in sequences, like lists and tuples. The general construct is:
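Schematically (with `element` and `sequence` standing in for any loop variable and iterable):

```
for element in sequence:
    command1
    command2
```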
Notice that anything falling within the loop is indented beneath the first line, similar to `if` statements. Here are some examples that iterate over tuples and lists:
```
for i in [3, "hello", 9.5]:
    print(i)
for i in (2.3, [8, 9, 10], {"city":"Irvine"}):
    print(i)
```
Notice that the items in the iterable do not need to be the same type. In each case, the variable i is given the value of the current list or tuple element, and the loop proceeds over these in sequence. One does not have to use the variable i; any variable name will do, but if an existing variable is used, its value will be overwritten by the loop.
### `for` loops using slicing and dictionaries
It is very easy to loop over a part of a list using slicing:
```
l = [4, 6, 7, 8, 10]
for i in l[2:]:
    print(i)
```
Iteration over a dictionary proceeds over its keys, not its values. Keep in mind, though, that dictionaries will not return these in any particular order. In general, it may be better to iterate explicitly over keys or values using the dictionary functions that return lists of these:
```
d = {"city":"Irvine", "state":"CA"}
for key in d:
    print(key)
for key in d.keys():
    print(key)
for val in d.values():
    print(val)
```
### Iterating over multiple values
Using Python's multiple assignment capabilities, it is possible to iterate over more than one value at a time:
```
l = [(1, 2), (3, 4), (5, 6)]
for (a, b) in l:
    print(a + b)
```
In this example, Python cycles through the list and makes the assignment `(a,b) = element` for each element in the list. Since the list contains two-tuples, it effectively assigns a to the first member of the tuple and b to the second.
Multiple assignment makes it easy to cycle over both keys and values in dictionaries at the same time:
```
d = {"city":"Irvine", "state":"CA"}
d.items()
for (key, val) in d.items():
    print("The key is %s and the value is %s" % (key, val))
```
It is possible to iterate over sequences of numbers using the range function:
```
for i in range(4):
    print(i)
```
### Pythonic iteration through lists
In other programming languages, one might use the following idiom to iterate through items in a list:
```
l = [8, 10, 12]
for i in range(len(l)):
    print(l[i])
```
In Python, however, the following is more natural and efficient, and thus always preferred:
```
l = [8, 10, 12]
for i in l:
    print(i)
```
Notice that the second line could have been written in a single line since there is a single command within the loop, although this is not usually preferred because the loop is less clear upon inspection:
```
for i in l: print(i)
```
### Enumerate and loops
If one desires to have the index of the loop in addition to the iterated element, the `enumerate` command is helpful:
```
l = [8, 10, 12]
for (ind, val) in enumerate(l):
    print("The %ith element in the list is %d" % (ind, val))
```
Notice that enumerate returns indices that always begin at 0, whether or not the loop actually iterates over a slice of a list:
```
l = [4, 6, 7, 8, 10]
for (ind, val) in enumerate(l[2:]):
    print("The %ith element in the list is %d" % (ind, val))
```
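If the indices of the original, unsliced list are needed, `enumerate` accepts an optional `start` argument that offsets the counter:

```python
l = [4, 6, 7, 8, 10]
# start=2 makes the counter match positions in the full list
for (ind, val) in enumerate(l[2:], start=2):
    print("The %ith element in the list is %d" % (ind, val))
```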
### `zip` for working with multiple lists
It is also possible to iterate over two lists simultaneously using the zip function:
```
l1 = [1, 2, 3]
l2 = [0, 6, 8]
for (a, b) in zip(l1, l2):
    print(a, b, a + b)
```
The zip function can be used outside of for loops. It simply takes two or more lists and groups them together, making an iterable (an item which can be iterated over) consisting of tuples of corresponding list elements:
```
res = zip([1, 2, 3], [4, 5, 6])
for elem in res: print(elem)
res= zip([1, 2, 3], [4, 5, 6], [7, 8, 9])
for elem in res: print(elem)
```
This behavior, combined with multiple assignment, is how zip allows simultaneous iteration over multiple lists at once.
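Note that when the lists have different lengths, `zip` stops at the end of the shortest one, and in Python 3 it returns a one-shot iterator, so wrap it in `list()` if the pairs are needed more than once:

```python
pairs = list(zip([1, 2, 3, 4], ["a", "b"]))
print(pairs)  # [(1, 'a'), (2, 'b')]: extra elements of the longer list are dropped
```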
### Like `if` statements, loops can be nested
```
for i in range(3):
    for j in range(0, i):
        print(i, j)
```
### `break` and `continue`
It is possible to skip forward to the next loop iteration immediately, without executing subsequent commands in the same indentation block, using the `continue` statement. The following produces the same output as the previous example using `continue`, but is ultimately less efficient because more loop cycles need to be traversed:
```
for i in range(3):
    for j in range(3):
        if i <= j: continue
        print(i, j)
```
One can also terminate the innermost loop using the `break` statement. Again, the following produces the same result but is almost as efficient as the first example because the inner loop terminates as soon as the `break` statement is encountered:
```
for i in range(3):
    for j in range(3):
        if i <= j: break
        print(i, j)
```
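As an aside, `for` and `while` loops also accept an `else` clause that runs only when the loop finishes without hitting a `break`, a convenient idiom for search loops:

```python
# search for the first even number in the list
for n in [1, 3, 5]:
    if n % 2 == 0:
        print("found", n)
        break
else:
    # the else clause runs only because no break occurred
    print("no even number found")
```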
## `while` loops
Unlike `for` loops, `while` loops do not iterate over a sequence of elements but rather continue so long as some test condition is met. Their syntax follows indentation rules similar to the cases we have seen before: the initial statement takes the form `while condition:`, followed by an indented block that executes repeatedly as long as the condition holds.
The following example computes the first couple of values in the Fibonacci sequence:
```
k1, k2 = 1, 1
while k1 < 20:
    k1, k2 = k2, k1 + k2
    print(k1)
```
Sometimes it is desired to stop the while loop somewhere in the middle of the commands that follow it. For this purpose, the `break` statement can be used with an infinite loop. In the previous example, we might want to print all Fibonacci numbers less than or equal to 20:
```
k1, k2 = 1, 1
while True:
    k1, k2 = k2, k1 + k2
    if k1 > 20: break
    print(k1)
```
Here the infinite while loop is created with the `while True` statement. Keep in mind that, if multiple loops are nested, the `break` statement will stop only the innermost loop.
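Since `break` stops only the innermost loop, a common way to exit several nested loops at once is to wrap them in a function and `return`. A small sketch:

```python
def find_value(rows, target):
    # return exits every enclosing loop at once
    for row in rows:
        for value in row:
            if value == target:
                return value
    return None

print(find_value([[1, 2], [3, 4]], 3))  # prints 3
```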
# Regularization
Welcome to the second assignment of this week. Deep learning models have so much flexibility and capacity that **overfitting can be a serious problem** if the training dataset is not big enough. Such a model does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!
**You will learn to:** Use regularization in your deep learning models.
Let's first import the packages you are going to use.
### <font color='darkblue'> Updates to Assignment </font>
#### If you were working on a previous version
* The current notebook filename is version "2a".
* You can find your work in the file directory as version "2".
* To see the file directory, click on the Coursera logo at the top left of the notebook.
#### List of Updates
* Clarified explanation of 'keep_prob' in the text description.
* Fixed a comment so that keep_prob and 1-keep_prob add up to 100%
* Updated print statements and 'expected output' for easier visual comparisons.
```
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> **Figure 1** </u>: **Football field**<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
```
train_X, train_Y, test_X, test_Y = load_2D_dataset()
```
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
**Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
## 1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python.
- in *dropout mode* -- by setting the `keep_prob` to a value less than one
You will first try the model without any regularization. Then, you will implement:
- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"
- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
```
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
    """
    Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
    learning_rate -- learning rate of the optimization
    num_iterations -- number of iterations of the optimization loop
    print_cost -- If True, print the cost every 10000 iterations
    lambd -- regularization hyperparameter, scalar
    keep_prob -- probability of keeping a neuron active during drop-out, scalar

    Returns:
    parameters -- parameters learned by the model. They can then be used to predict.
    """

    grads = {}
    costs = []                            # to keep track of the cost
    m = X.shape[1]                        # number of examples
    layers_dims = [X.shape[0], 20, 3, 1]

    # Initialize parameters dictionary.
    parameters = initialize_parameters(layers_dims)

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
        if keep_prob == 1:
            a3, cache = forward_propagation(X, parameters)
        elif keep_prob < 1:
            a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)

        # Cost function
        if lambd == 0:
            cost = compute_cost(a3, Y)
        else:
            cost = compute_cost_with_regularization(a3, Y, parameters, lambd)

        # Backward propagation.
        assert(lambd == 0 or keep_prob == 1)   # it is possible to use both L2 regularization and dropout,
                                               # but this assignment will only explore one at a time
        if lambd == 0 and keep_prob == 1:
            grads = backward_propagation(X, Y, cache)
        elif lambd != 0:
            grads = backward_propagation_with_regularization(X, Y, cache, lambd)
        elif keep_prob < 1:
            grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)

        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)

        # Print the loss every 10000 iterations
        if print_cost and i % 10000 == 0:
            print("Cost after iteration {}: {}".format(i, cost))
        if print_cost and i % 1000 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('iterations (x1,000)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
```
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
```
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
```
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting.
## 2 - L2 Regularization
The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
```python
np.sum(np.square(Wl))
```
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
```
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
    """
    Implement the cost function with L2 regularization. See formula (2) above.

    Arguments:
    A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    parameters -- python dictionary containing parameters of the model
    lambd -- regularization hyperparameter, scalar

    Returns:
    cost -- value of the regularized loss function (formula (2))
    """
    m = Y.shape[1]
    W1 = parameters["W1"]
    W2 = parameters["W2"]
    W3 = parameters["W3"]

    cross_entropy_cost = compute_cost(A3, Y)  # This gives you the cross-entropy part of the cost

    ### START CODE HERE ### (approx. 1 line)
    L2_regularization_cost = lambd * (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3))) / (2 * m)
    ### END CODE HERE ###

    cost = cross_entropy_cost + L2_regularization_cost

    return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
**Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
```
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
    """
    Implements the backward propagation of our baseline model to which we added an L2 regularization.

    Arguments:
    X -- input dataset, of shape (input size, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    cache -- cache output from forward_propagation()
    lambd -- regularization hyperparameter, scalar

    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
    """

    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y

    ### START CODE HERE ### (approx. 1 line)
    dW3 = 1. / m * np.dot(dZ3, A2.T) + (lambd * W3) / m
    ### END CODE HERE ###
    db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)

    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    ### START CODE HERE ### (approx. 1 line)
    dW2 = 1. / m * np.dot(dZ2, A1.T) + (lambd * W2) / m
    ### END CODE HERE ###
    db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)

    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    ### START CODE HERE ### (approx. 1 line)
    dW1 = 1. / m * np.dot(dZ1, X.T) + (lambd * W1) / m
    ### END CODE HERE ###
    db1 = 1. / m * np.sum(dZ1, axis=1, keepdims=True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2,
                 "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
                 "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = \n"+ str(grads["dW1"]))
print ("dW2 = \n"+ str(grads["dW2"]))
print ("dW3 = \n"+ str(grads["dW3"]))
```
**Expected Output**:
```
dW1 =
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 =
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 =
[[-1.77691347 -0.11832879 -0.09397446]]
```
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call:
- `compute_cost_with_regularization` instead of `compute_cost`
- `backward_propagation_with_regularization` instead of `backward_propagation`
```
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
```
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Observations**:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
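The shrinking effect of $\lambda$ can be seen even in the simplest one-layer case, ridge regression, whose regularized solution has the closed form $w = (X^TX + \lambda I)^{-1}X^Ty$. A minimal numpy sketch on synthetic data (the data and the candidate $\lambda$ values here are purely illustrative, not part of the assignment):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(50, 3)                                      # synthetic inputs
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.randn(50)  # noisy linear targets

for lambd in [0.0, 1.0, 100.0]:
    # closed-form ridge solution; a larger lambd shrinks the weights
    w = np.linalg.solve(X.T @ X + lambd * np.eye(3), X.T @ y)
    print("lambda = %6.1f  ->  ||w|| = %.4f" % (lambd, np.linalg.norm(w)))
```

The printed weight norms decrease as $\lambda$ grows, which is exactly the smoothing/high-bias trade-off described above.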
**What is L2-regularization actually doing?**:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the squared values of the weights in the cost function, you drive all the weights toward smaller values: large weights simply become too costly. This leads to a smoother model in which the output changes more slowly as the input changes.
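The extra gradient term also explains the name "weight decay": the update $w \leftarrow w - \alpha(dW_{data} + \frac{\lambda}{m}w)$ multiplies each weight by the factor $(1 - \alpha\frac{\lambda}{m}) < 1$ on top of the usual data-driven step. A tiny sketch isolating that effect (the data gradient is set to zero here purely for illustration, and the hyperparameter values are hypothetical):

```python
import numpy as np

w = np.ones(3)
learning_rate, lambd, m = 0.3, 0.7, 200   # hypothetical values
for _ in range(1000):
    dw_data = 0.0                          # isolate the decay term
    w = w - learning_rate * (dw_data + (lambd / m) * w)

# each step multiplied w by (1 - learning_rate * lambd / m), slightly below 1,
# so the norm of w has decayed from its initial value
print(np.linalg.norm(w))
```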
<font color='blue'>
**What you should remember** -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
## 3 - Dropout
Finally, **dropout** is a widely used regularization technique that is specific to deep learning.
**It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other, so they could definitely learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
!-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in either the forward or the backward propagation of that iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
### 3.1 - Forward propagation with dropout
**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
**Instructions**:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 1 with probability (`keep_prob`), and 0 otherwise.
**Hint:** Let's say that keep_prob = 0.8, which means that we want to keep about 80% of the neurons and drop out about 20% of them. We want to generate a vector that has 1's and 0's, where about 80% of them are 1 and about 20% are 0.
This python statement:
`X = (X < keep_prob).astype(int)`
is conceptually the same as this if-else statement (for the simple case of a one-dimensional array) :
```
for i, v in enumerate(x):
    if v < keep_prob:
        x[i] = 1
    else:   # v >= keep_prob
        x[i] = 0
```
Note that the `X = (X < keep_prob).astype(int)` works with multi-dimensional arrays, and the resulting output preserves the dimensions of the input array.
Also note that without using `.astype(int)`, the result is an array of booleans `True` and `False`, which Python automatically converts to 1 and 0 if we multiply it with numbers. (However, it's better practice to convert data into the data type that we intend, so try using `.astype(int)`.)
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
```
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
    """
    Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.

    Arguments:
    X -- input dataset, of shape (2, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
                    W1 -- weight matrix of shape (20, 2)
                    b1 -- bias vector of shape (20, 1)
                    W2 -- weight matrix of shape (3, 20)
                    b2 -- bias vector of shape (3, 1)
                    W3 -- weight matrix of shape (1, 3)
                    b3 -- bias vector of shape (1, 1)
    keep_prob -- probability of keeping a neuron active during drop-out, scalar

    Returns:
    A3 -- last activation value, output of the forward propagation, of shape (1,1)
    cache -- tuple, information stored for computing the backward propagation
    """

    np.random.seed(1)

    # retrieve parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    ### START CODE HERE ### (approx. 4 lines)      # Steps 1-4 below correspond to the Steps 1-4 described above.
    D1 = np.random.rand(A1.shape[0], A1.shape[1])  # Step 1: initialize matrix D1 = np.random.rand(..., ...)
    D1 = D1 < keep_prob                            # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
    A1 = A1 * D1                                   # Step 3: shut down some neurons of A1
    A1 = A1 / keep_prob                            # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    ### START CODE HERE ### (approx. 4 lines)
    D2 = np.random.rand(A2.shape[0], A2.shape[1])  # Step 1: initialize matrix D2 = np.random.rand(..., ...)
    D2 = D2 < keep_prob                            # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
    A2 = A2 * D2                                   # Step 3: shut down some neurons of A2
    A2 = A2 / keep_prob                            # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)

    return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
```
**Expected Output**:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
### 3.2 - Backward propagation with dropout
**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
**Instruction**:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`.
2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
```
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
    """
    Implements the backward propagation of our baseline model to which we added dropout.

    Arguments:
    X -- input dataset, of shape (2, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    cache -- cache output from forward_propagation_with_dropout()
    keep_prob -- probability of keeping a neuron active during drop-out, scalar

    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
    """

    m = X.shape[1]
    (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    dW3 = 1. / m * np.dot(dZ3, A2.T)
    db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)
    dA2 = np.dot(W3.T, dZ3)
    ### START CODE HERE ### (≈ 2 lines of code)
    dA2 = dA2 * D2          # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
    dA2 = dA2 / keep_prob   # Step 2: Scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    dW2 = 1. / m * np.dot(dZ2, A1.T)
    db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)

    dA1 = np.dot(W2.T, dZ2)
    ### START CODE HERE ### (≈ 2 lines of code)
    dA1 = dA1 * D1          # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
    dA1 = dA1 / keep_prob   # Step 2: Scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1. / m * np.dot(dZ1, X.T)
    db1 = 1. / m * np.sum(dZ1, axis=1, keepdims=True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2,
                 "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
                 "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = \n" + str(gradients["dA1"]))
print ("dA2 = \n" + str(gradients["dA2"]))
```
**Expected Output**:
```
dA1 =
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 =
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
```
Let's now run the model with dropout (`keep_prob = 0.86`). It means that at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function `model()` will now call:
- `forward_propagation_with_dropout` instead of `forward_propagation`.
- `backward_propagation_with_dropout` instead of `backward_propagation`.
```
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
```
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Note**:
- A **common mistake** when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training.
- Deep learning frameworks like [tensorflow](https://www.tensorflow.org/api_docs/python/tf/nn/dropout), [PaddlePaddle](http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), [keras](https://keras.io/layers/core/#dropout) or [caffe](http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks.
<font color='blue'>
**What you should remember about dropout:**
- Dropout is a regularization technique.
- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
- Apply dropout both during forward and backward propagation.
- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5.
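The expected-value argument can be checked numerically: masking with probability `keep_prob` and dividing by `keep_prob` leaves the mean activation approximately unchanged. A small sketch with random stand-in activations (illustrative only):

```python
import numpy as np

rng = np.random.RandomState(0)
a = rng.rand(1000, 1000)                          # stand-in activations
keep_prob = 0.5
d = (rng.rand(*a.shape) < keep_prob).astype(int)  # dropout mask
a_dropped = a * d / keep_prob                     # inverted dropout scaling
print(a.mean(), a_dropped.mean())                 # the two means are close
```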
## 4 - Conclusions
**Here are the results of our three models**:
<table>
<tr>
<td>
**model**
</td>
<td>
**train accuracy**
</td>
<td>
**test accuracy**
</td>
</tr>
<td>
3-layer NN without regularization
</td>
<td>
95%
</td>
<td>
91.5%
</td>
<tr>
<td>
3-layer NN with L2-regularization
</td>
<td>
94%
</td>
<td>
93%
</td>
</tr>
<tr>
<td>
3-layer NN with dropout
</td>
<td>
93%
</td>
<td>
95%
</td>
</tr>
</table>
Note that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system.
Congratulations for finishing this assignment! And also for revolutionizing French football. :-)
<font color='blue'>
**What we want you to remember from this notebook**:
- Regularization will help you reduce overfitting.
- Regularization will drive your weights to lower values.
- L2 regularization and Dropout are two very effective regularization techniques.
# Predictive performance comparison
This notebook takes a look at the predictive performance on cell lines for all the drugs. The goal is two-fold:
<ul>
<li> Assessing whether the source top PVs can yield the same predictive performance as a direct ridge regression on the source data. This would mean that the top PVs contain the relevant information for drug response prediction.
<li> Taking a look at which drugs get predicted well using both the PV duos and the consensus representation.
</ul>
Here we use all the cell line data for the domain adaptation. Other settings can be imagined as well.
## Parameters (to change)
```
# None for 'rnaseq', 'fpkm' for FPKM
type_data = 'rnaseq'
normalization = 'TMM'
transformation = 'log'
mean_center = True
std_unit = False
filter_mytochondrial = False
protein_coding_only = True
d_test = [40]
n_factors = 70
same_pv_pca = True
drug_file = 'input/drug_list_small.txt' # To change to drug_list.txt for full-scale analysis
n_jobs=5
import os, sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy
from sklearn.model_selection import GroupKFold, GridSearchCV
from sklearn.linear_model import ElasticNet, Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.externals.joblib import Parallel, delayed
import pickle
plt.style.use('ggplot')
#Import src implementations
os.environ['OMP_NUM_THREADS'] = '1'
os.environ['KMP_DUPLICATE_LIB_OK']='True'
from data_reader.read_data import read_data
from data_reader.read_drug_response import read_drug_response
from data_reader.read_cna_tumors import read_cna_tumors
from normalization_methods.feature_engineering import feature_engineering
import precise
from precise import DrugResponsePredictor, ConsensusRepresentation
```
## Read all the drug from the file and load all the data
```
with open(drug_file, 'r') as drug_file_reader:
    drug_file_content = drug_file_reader.read()
drug_file_content = drug_file_content.split('\n')
drug_file_content = [e.split(',') for e in drug_file_content]

# drug_IDs and tumor tissues are ordered in the same way
drug_IDs = np.array(list(zip(*drug_file_content))[0]).astype(int)
tumor_tissues = np.array(list(zip(*drug_file_content))[1])
unique_tumor_tissues = np.unique(tumor_tissues)
target_raw_data = dict()
source_raw_data = dict()
target_barcodes = dict()
source_names = dict()
target_data = dict()
source_data = dict()
source_data_filtered = dict()
source_response_data = dict()
source_names_filtered = dict()
drug_names = dict()
target_primary_site = dict()
# Load cell line data
# /!\ Due to a mismatch in the genes available in TCGA, cell line data has to be loaded every time
for tissue_name in unique_tumor_tissues:
print(tissue_name)
if tissue_name in target_raw_data:
continue
X_target, X_source, _, s, target_names = read_data('cell_line',
'tumor',
'count',
None,
tissue_name,
filter_mytochondrial)
target_raw_data[tissue_name] = X_target
source_raw_data[tissue_name] = X_source
target_barcodes[tissue_name] = target_names
source_names[tissue_name] = s
# Normalize the data
for tissue_name in unique_tumor_tissues:
print(tissue_name)
if tissue_name in target_data:
continue
target_data[tissue_name] = feature_engineering(target_raw_data[tissue_name],
normalization,
transformation,
mean_center,
std_unit)
# source data is not mean-centered here, as centering is done during the cross-validation procedure.
source_data[tissue_name] = feature_engineering(source_raw_data[tissue_name],
normalization,
transformation,
False,
False)
# Normalize for variance
for tissue_name in unique_tumor_tissues:
print(tissue_name)
if tissue_name in target_data:
continue
target_total_variance = np.sqrt(np.sum(np.var(target_data[tissue_name], 0)))
target_data[tissue_name] = target_data[tissue_name] / target_total_variance * 10**3
source_total_variance = np.sqrt(np.sum(np.var(source_data[tissue_name], 0)))
source_data[tissue_name] = source_data[tissue_name] / source_total_variance * 10**3
# Read drug response
for i, (ID, tissue) in enumerate(zip(drug_IDs, tumor_tissues)):
if (ID, tissue) in source_data_filtered:
continue
x, y, s, name = read_drug_response(ID,
source_data[tissue],
source_names[tissue],
'count')
source_data_filtered[(ID, tissue)] = x
source_response_data[(ID, tissue)] = y
drug_names[(ID, tissue)] = name
source_names_filtered[(ID, tissue)] = s
```
## Principal vector test
Here we compute the predictive performance for several drugs using either the source principal vectors, the target principal vectors, or both. The latter is still biased towards the source.
### Consensus representation
```
l1_ratio = 0
for ID, tissue in zip(drug_IDs, tumor_tissues):
print(ID, tissue)
X_source = source_data_filtered[ID, tissue]
y_source = source_response_data[ID, tissue]
X_target = target_data[tissue]
pickle_file = 'consensus_drug_%s_tissue_%s_l1_ratio_%s_n_factors_%s.pkl'%(ID,
tissue,
l1_ratio,
n_factors)
if pickle_file in os.listdir('./output/pred_performance/'):
print('%s, %s ALREADY COMPUTED'%(ID, tissue))
continue
with open('./output/pred_performance/%s'%(pickle_file), 'wb') as f:
pickle.dump(dict(), f, pickle.HIGHEST_PROTOCOL)
pred_performance = {}
for d in d_test:
print(d)
predictor = DrugResponsePredictor(source_data=source_data[tissue][~np.isin(source_names[tissue], source_names_filtered[(ID, tissue)])],\
method='consensus',\
n_representations = 100,\
target_data=X_target,\
n_pv=d,\
n_factors=n_factors,\
n_jobs=n_jobs,\
mean_center=mean_center,\
std_unit=std_unit,\
l1_ratio=l1_ratio)
predictor.alpha_values = list(np.logspace(-2,10,17))
predictor.verbose = 5
predictor.fit(X_source, y_source, use_data=True)
pred_performance[d] = predictor.compute_predictive_performance(X_source, y_source)
plt.plot(predictor.alpha_values, predictor.regression_model_.cv_results_['mean_test_score'], '+-')
plt.title(pred_performance[d])
plt.xscale('log')
plt.show()
with open('./output/pred_performance/%s'%(pickle_file), 'wb') as f:
pickle.dump(pred_performance, f, pickle.HIGHEST_PROTOCOL)
```
### ElasticNet/Ridge comparison
```
from sklearn.model_selection import GroupKFold
l1_ratio = 0.
elasticnet_perf = dict()  # default when no cached results exist yet
pickle_file = 'elasticnet_drug_l1_ratio_%s_std.pkl'%(l1_ratio)
if pickle_file in os.listdir('./output/pred_performance/'):
with open('./output/pred_performance/%s'%(pickle_file), 'rb') as f:
elasticnet_perf = pickle.load(f)
for ID, tissue in zip(drug_IDs, tumor_tissues):
print(ID, tissue)
pickle_file = 'en_std_drug_%s_tissue_%s_l1_ratio_%s_n_factors_%s.pkl'%(ID,
tissue,
l1_ratio,
n_factors)
if pickle_file in os.listdir('./output/pred_performance/'):
print('%s, %s ALREADY COMPUTED'%(ID, tissue))
continue
if (ID, tissue) in elasticnet_perf:
continue
with open('./output/pred_performance/%s'%(pickle_file), 'wb') as f:
pickle.dump(dict(), f, pickle.HIGHEST_PROTOCOL)
X_source = source_data_filtered[ID, tissue]
y_source = source_response_data[ID, tissue]
X_target = target_data[tissue]
#Parameters for the grid search
alpha_values = np.logspace(-5,10,16)
param_grid ={
'regression__alpha': alpha_values
}
#Grid search setup
k_fold_split = GroupKFold(10)
y_predicted = np.zeros(X_source.shape[0])
for train_index, test_index in k_fold_split.split(X_source, y_source, y_source):
grid_en = GridSearchCV(Pipeline([
('normalization', StandardScaler(with_mean=mean_center, with_std=True)),
('regression', ElasticNet(l1_ratio) if l1_ratio > 0 else Ridge())
]),\
cv=10, n_jobs=30, param_grid=param_grid, verbose=1, scoring='neg_mean_squared_error')
grid_en.fit(X_source[train_index], y_source[train_index])
y_predicted[test_index] = grid_en.predict(X_source[test_index])
#Fit grid search
grid_en.fit(X_source, y_source)
elasticnet_perf[ID, tissue] = scipy.stats.pearsonr(y_predicted, y_source)[0]
print(elasticnet_perf[ID, tissue])
with open('./output/pred_performance/%s'%(pickle_file), 'wb') as f:
pickle.dump(elasticnet_perf[ID, tissue], f, pickle.HIGHEST_PROTOCOL)
```
## Load pickle and look at results
```
l1_ratio = 0
l1_ratio_en = 0.
two_pv_results = dict()
consensus_pv_results = dict()
source_pv_results = dict()
target_pv_results = dict()
en_results_std = dict()
def sort_dictionary(d):
return {e:d[e] for e in sorted(d)}
for ID, tissue in zip(drug_IDs, tumor_tissues):
print(ID, tissue)
# Read results of consensus PVs
pickle_file = 'consensus_drug_%s_tissue_%s_l1_ratio_%s_n_factors_%s.pkl'%(ID,
tissue,
l1_ratio,
n_factors)
with open('./output/pred_performance/%s'%(pickle_file), 'rb') as f:
consensus_pv_results[ID,tissue] = sort_dictionary(pickle.load(f))
# Read results of EN
pickle_file = 'en_std_drug_%s_tissue_%s_l1_ratio_%s_n_factors_%s.pkl'%(ID,
tissue,
'0.0',
n_factors)
with open('./output/pred_performance/%s'%(pickle_file), 'rb') as f:
en_results_std[ID,tissue] = pickle.load(f)
print(en_results_std[ID, tissue])
for ID, tissue in zip(drug_IDs, tumor_tissues):
# Plot for a specific number of PV
plt.plot([e[0] for e in consensus_pv_results[ID,tissue].items()],
[e[1] for e in consensus_pv_results[ID,tissue].items()],
label='consensus', linewidth=3, alpha=0.5, marker='+')
plt.plot([e[0] for e in source_pv_results[ID,tissue].items()],
[e[1] for e in source_pv_results[ID,tissue].items()],
label='source', linewidth=3, alpha=0.5, marker='+')
plt.plot([e[0] for e in target_pv_results[ID,tissue].items()],
[e[1] for e in target_pv_results[ID,tissue].items()],
label='target', linewidth=3, alpha=0.5, marker='+')
plt.plot([e[0] for e in two_pv_results[ID,tissue].items()],
[e[1] for e in two_pv_results[ID,tissue].items()],
label='2 pv', linewidth=3, alpha=0.5, marker='+')
plt.hlines(en_results_std[ID,tissue], xmin=0, xmax=plt.xlim()[1], label='Ridge', linewidth=3, alpha=0.7)
plt.title(drug_names[ID, tissue] + ' '+ tissue)
plt.xlabel('Number of Principal Vectors', fontsize=15)
plt.ylabel('Predictive Performance', fontsize=15)
plt.legend()
plt.show()
n_pv = 40
perf_scatter = []
for ID, tissue in zip(drug_IDs, tumor_tissues):
#print(ID, tissue)
if n_pv not in consensus_pv_results[ID,tissue]:
print(ID, tissue)
continue
plt.scatter(en_results_std[ID,tissue],
consensus_pv_results[ID,tissue][n_pv],
color='blue', marker='x', alpha=0.7)
perf_scatter.append([en_results_std[ID,tissue], consensus_pv_results[ID,tissue][n_pv]])
plt.xlabel('ElasticNet', fontsize=20)
plt.ylabel('Consensus \n representation', fontsize=20)
plt.xticks(fontsize=15, color='black')
plt.yticks(fontsize=15, color='black')
plt.tight_layout()
plt.xlim(0.1,0.8)
plt.ylim(0.1,0.8)
plt.plot(plt.xlim(), plt.xlim(), linewidth=3, alpha=0.5)
#plt.savefig('./figures/fig4_pred_perf_consensus_%s_en_%s.png'%(l1_ratio, l1_ratio_en), dpi=300)
plt.show()
perf_scatter = np.array(perf_scatter)
p = scipy.stats.pearsonr(perf_scatter[:,0], perf_scatter[:,1])
print('Pearson Correlation: %s, %s'%(p[0], p[1]))
plt.scatter(perf_scatter[:,1], (perf_scatter[:,0] - perf_scatter[:,1])/perf_scatter[:,0])
np.median((perf_scatter[:,0] - perf_scatter[:,1])/perf_scatter[:,0])
#for e in en_results:
# print(e, en_results[e], consensus_pv_results[e])
for ID, tissue in zip(drug_IDs, tumor_tissues):
#print(ID, tissue)
if n_pv not in consensus_pv_results[ID,tissue]:
print(ID, tissue)
continue
plt.scatter(en_results[ID,tissue],
en_results_std[ID,tissue],
color='blue', marker='x', alpha=0.7)
#perf_scatter.append([en_results[ID,tissue], consensus_pv_results[ID,tissue][n_pv]])
```
<a href="https://colab.research.google.com/github/vgaurav3011/100-Days-of-ML/blob/master/DCGAN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import glob
import imageio
import os
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import layers
import time
from IPython import display
import PIL
from tensorflow.keras.datasets import mnist
(train_images, train_labels), (_,_) = mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5 # normalize pixel values to [-1, 1]
batch_size = 256
buffer_size = 60000
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(buffer_size).batch(batch_size)
def generator_model():
model = tf.keras.Sequential()
model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 256)))
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
return model
generator = generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap='gray')
def discriminator_model():
model = tf.keras.Sequential()
model.add(layers.Conv2D(64, (5,5), strides=(2,2), padding='same', input_shape=[28,28,1]))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.1))
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.1))
model.add(layers.Flatten())
model.add(layers.Dense(1))
return model
discriminator = discriminator_model()
decision = discriminator(generated_image)
print (decision)
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def discriminator_loss(real_output, fake_output):
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
return total_loss
def generator_loss(fake_output):
return cross_entropy(tf.ones_like(fake_output), fake_output)
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
EPOCHS = 50
noise_dim = 100
num_examples_to_generate = 16
seed = tf.random.normal([num_examples_to_generate, noise_dim])
@tf.function
def train_step(images):
noise = tf.random.normal([batch_size, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
display.clear_output(wait=True)
generate_and_save_images(generator,
epoch + 1,
seed)
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))
display.clear_output(wait=True)
generate_and_save_images(generator,
epochs,
seed)
def generate_and_save_images(model, epoch, test_input):
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4,4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
plt.axis('off')
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
train(train_dataset, EPOCHS)
PIL.Image.open('image_at_epoch_{:04d}.png'.format(EPOCHS))
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
anim_file = 'output.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
last = -1
for i,filename in enumerate(filenames):
frame = 2*(i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import IPython
if IPython.version_info > (6,2,0,''):
display.Image(filename=anim_file)
```
# Python Guide
## Loading Data
The XGBoost python module is able to load data from:
- LibSVM text format file
- Comma-separated values (CSV) file
- NumPy 2D array
- SciPy 2D sparse array
- cuDF DataFrame
- Pandas data frame, and
- XGBoost binary buffer file.
### Loading LibSVM text file
```
import xgboost as xgb  # needed before constructing any DMatrix
dtrain = xgb.DMatrix('train.svm.txt')
dtest = xgb.DMatrix('test.svm.buffer')
```
### Loading a CSV File
Categorical features not supported
Note that XGBoost does not provide specialization for categorical features; if your data contains categorical features, load it as a NumPy array first and then perform corresponding preprocessing steps like one-hot encoding.
```
import xgboost as xgb
dtrain = xgb.DMatrix('train.csv?format=csv&label_column=0')
dtest = xgb.DMatrix('test.csv?format=csv&label_column=0')
```
Use Pandas to load CSV files with headers
Currently, the DMLC data parser cannot parse CSV files with headers. Use Pandas (see below) to read CSV files with headers.
### Loading Numpy Array
```
import numpy as np
data = np.random.rand(5,10)
print(data.shape) # 5 rows and 10 columns
print(data)
label = np.random.randint(2, size=5) # Binary target
dtrain = xgb.DMatrix(data, label = label)
print(dtrain)
```
### Loading Pandas DataFrame
```
import pandas as pd
df = pd.DataFrame(np.arange(12).reshape((4,3)), columns = ['a','b','c'])
df.head()
label = pd.DataFrame(np.random.randint(2, size=4))
label.head()
dtrain = xgb.DMatrix(df, label = label)
```
### Saving into XGBoost Buffer file
```
dtrain.save_binary('train.buffer')
dtrain2 = xgb.DMatrix('train.buffer')
```
## Other Stuff
```
# Missing values can be replaced by a default value in the DMatrix constructor:
dtrain = xgb.DMatrix(data, label=label, missing=-999.0)
# Weights can be set when needed:
w = np.random.rand(5, 1)
dtrain = xgb.DMatrix(data, label=label, missing=-999.0, weight=w)
```
## Setting Parameters, Training, Saving, Re-Loading, Visualization
### Parameters Setting
- XGBoost can use either a list of pairs or a dictionary to set parameters.
For instance, Booster parameters:
```
param = {'max_depth': 2, 'eta': 1, 'objective' : 'binary:logistic'}
param['nthread'] = 4
param['eval_metric'] = 'auc'
# Can set multiple metrics as well
param['eval_metric'] = ['auc','rmse']
# Specify validation to watch performance
evallist = [(dtest, 'eval'), (dtrain, 'train')]
```
### Training example
- Training a model requires a parameter list and data set.
```
df = pd.DataFrame(np.arange(12).reshape((4,3)), columns = ['a','b','c'])
label = pd.DataFrame(np.random.randint(2, size=4))
df.head()
label.head()
dtrain = xgb.DMatrix(df, label = label)
param = {'max_depth' : 2, 'eta' : 0.2, 'objective' : 'binary:logistic', 'eval_metric' : 'error'}
num_round = 10
bst = xgb.train(params = param, dtrain = dtrain, num_boost_round=num_round)
# After training, the model can be saved.
bst.save_model('save_model.model')
# dumping model as text file
bst.dump_model('dump.raw.txt')
# dumping model with feature map
bst.dump_model('dump.raw.txt', 'featmap.txt')
bst = xgb.Booster()
bst.load_model('/content/save_model.model')
print(bst)
```
### Early Stopping
If you have a validation set, you can use early stopping to find the optimal number of boosting rounds. Early stopping requires at least one set in evals. If there’s more than one, it will use the last.
```
# early stopping requires at least one validation set in evals
bst = xgb.train(params = param, dtrain = dtrain, num_boost_round=num_round, evals=evallist, early_stopping_rounds=2)
```
The model will train until the validation score stops improving. Validation error needs to decrease at least every early_stopping_rounds to continue training.
- If early stopping occurs, the model will have three additional fields: bst.best_score, bst.best_iteration and bst.best_ntree_limit. Note that xgboost.train() will return a model from the last iteration, not the best one.
### Predictions from Model
- Trained Model can be used to make predictions on dataset
```
ypred = bst.predict(dtest)
```
- If early stopping is enabled, you can get predictions from the best iteration with bst.best_ntree_limit
```
ypred = bst.predict(dtest, ntree_limit=bst.best_ntree_limit)
```
### Plotting
You can use plotting module to plot importance and output tree.
To plot importance, use xgboost.plot_importance(). This function requires matplotlib to be installed.
```
xgb.plot_importance(bst)
```
To plot the output tree via matplotlib, use xgboost.plot_tree(), specifying the ordinal number of the target tree. This function requires graphviz and matplotlib.
```
xgb.plot_tree(bst, num_trees=2)
```
When you use IPython, you can use the xgboost.to_graphviz() function, which converts the target tree to a graphviz instance. The graphviz instance is automatically rendered in IPython.
```
xgb.to_graphviz(bst, num_trees=2)
```
## Parameter Tuning
- Use the concept of Bias-Variance TradeOff
### Control Overfitting
When you observe high training accuracy but low test accuracy, it is likely that you have encountered an overfitting problem.
There are in general two ways that you can control overfitting in XGBoost:
- The first way is to directly control model complexity.
- This includes max_depth, min_child_weight and gamma.
- The second way is to add randomness to make training robust to noise.
- This includes subsample and colsample_bytree.
- You can also reduce stepsize eta. Remember to increase num_round when you do so.
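Put together, an anti-overfitting parameter set along these lines might look like this (the values are illustrative, not tuned):

```
# Illustrative anti-overfitting settings (not tuned values)
param = {
    'objective': 'binary:logistic',
    # 1) Directly control model complexity
    'max_depth': 4,          # shallower trees
    'min_child_weight': 5,   # require more evidence per leaf
    'gamma': 1.0,            # minimum loss reduction needed to split
    # 2) Add randomness to make training robust to noise
    'subsample': 0.8,        # row subsampling per tree
    'colsample_bytree': 0.8, # column subsampling per tree
    # 3) Shrinkage: lower eta, more rounds
    'eta': 0.05,
}
num_round = 500  # increased to compensate for the smaller eta
```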
### Handle Imbalanced Dataset
For common cases such as ads clickthrough log, the dataset is extremely imbalanced. This can affect the training of XGBoost model, and there are two ways to improve it.
If you care only about the overall performance metric (AUC) of your prediction
- Balance the positive and negative weights via scale_pos_weight
- Use AUC for evaluation
If you care about predicting the right probability
- In such a case, you cannot re-balance the dataset
- Set parameter max_delta_step to a finite number (say 1) to help convergence
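A common heuristic is to set scale_pos_weight to the ratio of negative to positive examples; a sketch on synthetic labels:

```
import numpy as np

y = np.array([0] * 95 + [1] * 5)  # heavily imbalanced labels
neg, pos = np.sum(y == 0), np.sum(y == 1)
param = {'objective': 'binary:logistic',
         'eval_metric': 'auc',
         'scale_pos_weight': neg / pos}  # 95 / 5 = 19.0
print(param['scale_pos_weight'])
```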
Parameter tuning is an art; use the following webpage to master it:
https://xgboost.readthedocs.io/en/latest/parameter.html
### GPUs for XGBoost
- Can be used.
- Specify the tree_method parameter as 'gpu_hist'
Equivalent to the XGBoost fast histogram algorithm. Much faster and uses considerably less memory. NOTE: Will run very slowly on GPUs older than Pascal architecture.
- Faster performance.
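Switching it on is a one-parameter change; a sketch that falls back to the CPU hist method when no GPU is visible (the environment-variable check is just one possible heuristic, not an official API):

```
import os

param = {'objective': 'binary:logistic', 'max_depth': 2}
# Pick the GPU histogram method only when a CUDA device is visible
param['tree_method'] = 'gpu_hist' if os.environ.get('CUDA_VISIBLE_DEVICES') else 'hist'
print(param['tree_method'])
```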
# Setup Machine
```
# @markdown ## Install python 3
!env DEBIAN_FRONTEND=noninteractive apt-get install -y -qq python3 python3-dev python3-venv python3-pip > /dev/null
!python --version
# @markdown ## Upgrade pip
!python -m pip install -qq --upgrade pip
!pip --version
# @markdown ## Install dependencies
!pip install -qq transformers==2.8.0
```
# Hugging Face's Transformers Library
https://github.com/huggingface/transformers
```
# @markdown ## Built-in pretrained models in the library
# @markdown More models available [here](https://huggingface.co/models).
def get_transformers_model_list():
from transformers import CONFIG_MAPPING
from itertools import chain
classes = CONFIG_MAPPING.values()
models_per_class = map(lambda c: c.pretrained_config_archive_map.keys(), classes)
models = sorted(list(chain.from_iterable(models_per_class)))
return models
print("Available pretrained models:")
for model in get_transformers_model_list():
print(" %s" % model)
# @markdown ## Configure the tokenizer
# @markdown Select the model whose tokenizer you want to load.
TOKENIZER_FOR_MODEL = "bert-base-cased" # @param ["albert-base-v1", "albert-base-v2", "albert-large-v1", "albert-large-v2", "albert-xlarge-v1", "albert-xlarge-v2", "albert-xxlarge-v1", "albert-xxlarge-v2", "bart-large", "bart-large-cnn", "bart-large-mnli", "bart-large-xsum", "bert-base-cased", "bert-base-cased-finetuned-mrpc", "bert-base-chinese", "bert-base-dutch-cased", "bert-base-finnish-cased-v1", "bert-base-finnish-uncased-v1", "bert-base-german-cased", "bert-base-german-dbmdz-cased", "bert-base-german-dbmdz-uncased", "bert-base-japanese", "bert-base-japanese-char", "bert-base-japanese-char-whole-word-masking", "bert-base-japanese-whole-word-masking", "bert-base-multilingual-cased", "bert-base-multilingual-uncased", "bert-base-uncased", "bert-large-cased", "bert-large-cased-whole-word-masking", "bert-large-cased-whole-word-masking-finetuned-squad", "bert-large-uncased", "bert-large-uncased-whole-word-masking", "bert-large-uncased-whole-word-masking-finetuned-squad", "camembert-base", "ctrl", "distilbert-base-cased", "distilbert-base-cased-distilled-squad", "distilbert-base-german-cased", "distilbert-base-multilingual-cased", "distilbert-base-uncased", "distilbert-base-uncased-distilled-squad", "distilbert-base-uncased-finetuned-sst-2-english", "distilgpt2", "distilroberta-base", "flaubert-base-cased", "flaubert-base-uncased", "flaubert-large-cased", "flaubert-small-cased", "google/electra-base-discriminator", "google/electra-base-generator", "google/electra-large-discriminator", "google/electra-large-generator", "google/electra-small-discriminator", "google/electra-small-generator", "gpt2", "gpt2-large", "gpt2-medium", "gpt2-xl", "openai-gpt", "roberta-base", "roberta-base-openai-detector", "roberta-large", "roberta-large-mnli", "roberta-large-openai-detector", "t5-11b", "t5-3b", "t5-base", "t5-large", "t5-small", "transfo-xl-wt103", "umberto-commoncrawl-cased-v1", "umberto-wikipedia-uncased-v1", "xlm-clm-ende-1024", "xlm-clm-enfr-1024", "xlm-mlm-100-1280", "xlm-mlm-17-1280", "xlm-mlm-en-2048", "xlm-mlm-ende-1024", "xlm-mlm-enfr-1024", "xlm-mlm-enro-1024", "xlm-mlm-tlm-xnli15-1024", "xlm-mlm-xnli15-1024", "xlm-roberta-base", "xlm-roberta-large", "xlm-roberta-large-finetuned-conll02-dutch", "xlm-roberta-large-finetuned-conll02-spanish", "xlm-roberta-large-finetuned-conll03-english", "xlm-roberta-large-finetuned-conll03-german", "xlnet-base-cased", "xlnet-large-cased"]
# @markdown Use this to provide additional settings to the tokenizer ([documentation](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.from_pretrained)).
TOKENIZER_KWARGS = {"use_fast": False} # @param {type: "raw"}
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_FOR_MODEL, **TOKENIZER_KWARGS)
# @markdown ## Sentences to tokenize
%%writefile sentences.txt
"Always bear in mind that your own resolution to success is more important than any other one thing." -Abraham Lincoln
"In the end, it's not the years in your life that count. It's the life in your years." -Abraham Lincoln
"Only a life lived for others is a life worthwhile." -Albert Einstein
"Try not to become a man of success. Rather become a man of value." -Albert Einstein
"Before anything else, preparation is the key to success." -Alexander Graham Bell
"The most difficult thing is the decision to act, the rest is merely tenacity." -Amelia Earhart
"How wonderful it is that nobody need wait a single moment before starting to improve the world." -Anne Frank
"Whoever is happy will make others happy too." -Anne Frank
"First, have a definite, clear practical ideal; a goal, an objective. Second, have the necessary means to achieve your ends; wisdom, money, materials, and methods. Third, adjust all your means to that end." -Aristotle
"It is during our darkest moments that we must focus to see the light." -Aristotle
"Nothing is impossible, the word itself says, ‘I'm possible!'" -Audrey Hepburn
"The question isn't who is going to let me; it's who is going to stop me." -Ayn Rand
"Never let the fear of striking out keep you from playing the game." -Babe Ruth
"The real test is not whether you avoid this failure, because you won't. It's whether you let it harden or shame you into inaction, or whether you learn from it; whether you choose to persevere." -Barack Obama
"I didn't fail the test. I just found 100 ways to do it wrong." -Benjamin Franklin
"Tell me and I forget. Teach me and I remember. Involve me and I learn." -Benjamin Franklin
"You may be disappointed if you fail, but you are doomed if you don't try." -Beverly Sills
"Love the life you live. Live the life you love." -Bob Marley
"Life is made of ever so many partings welded together." -Charles Dickens
"Life is 10% what happens to me and 90% of how I react to it." -Charles Swindoll
"There are no secrets to success. It is the result of preparation, hard work, and learning from failure." -Colin Powell
"The road to success and the road to failure are almost exactly the same." -Colin R. Davis
"It does not matter how slowly you go as long as you do not stop." -Confucius
"Life is really simple, but we insist on making it complicated." -Confucius
"Success seems to be connected with action. Successful people keep moving. They make mistakes but they don't quit." -Conrad Hilton
"Life is ours to be spent, not to be saved." -D. H. Lawrence
"The purpose of our lives is to be happy." -Dalai Lama
"A successful man is one who can lay a firm foundation with the bricks others have thrown at him." -David Brinkley
"You have brains in your head. You have feet in your shoes. You can steer yourself any direction you choose." -Dr. Seuss
"If life were predictable it would cease to be life and be without flavor." -Eleanor Roosevelt
"The future belongs to those who believe in the beauty of their dreams." -Eleanor Roosevelt
"I never dreamed about success, I worked for it." -Estee Lauder
"I attribute my success to this: I never gave or took any excuse." -Florence Nightingale
"The only limit to our realization of tomorrow will be our doubts of today." -Franklin D. Roosevelt
"When you reach the end of your rope, tie a knot in it and hang on." -Franklin D. Roosevelt
"Everything you've ever wanted is on the other side of fear." -George Addair
"Dreaming, after all, is a form of planning." -Gloria Steinem
"If you genuinely want something, don't wait for it -- teach yourself to be impatient." -Gurbaksh Chahal
"Life itself is the most wonderful fairy tale." -Hans Christian Andersen
"The best and most beautiful things in the world cannot be seen or even touched - they must be felt with the heart." -Helen Keller
"Life is either a daring adventure or nothing at all." -Helen Keller
"Go confidently in the direction of your dreams! Live the life you've imagined." -Henry David Thoreau
"Success usually comes to those who are too busy to be looking for it." -Henry David Thoreau
"When everything seems to be going against you, remember that the airplane takes off against the wind, not with it." -Henry Ford
"Whether you think you can or you think you can't, you're right." -Henry Ford
"It is better to fail in originality than to succeed in imitation." -Herman Melville
"If you set your goals ridiculously high and it's a failure, you will fail above everyone else's success." -James Cameron
"Life is a long lesson in humility." -James M. Barrie
"If you are not willing to risk the usual, you will have to settle for the ordinary." -Jim Rohn
"Successful people do what unsuccessful people are not willing to do. Don't wish it were easier; wish you were better." -Jim Rohn
"Don't be afraid to give up the good to go for the great." -John D. Rockefeller
"The secret of success is to do the common thing uncommonly well." -John D. Rockefeller Jr.
"Life is what happens when you're busy making other plans." -John Lennon
"Do not let making a living prevent you from making a life." -John Wooden
"Things work out best for those who make the best of how things work out." -John Wooden
"May you live all the days of your life." -Jonathan Swift
"Too many of us are not living our dreams because we are living our fears." -Les Brown
"You only live once, but if you do it right, once is enough." -Mae West
"Always remember that you are absolutely unique. Just like everyone else." -Margaret Mead
"Keep smiling, because life is a beautiful thing and there's so much to smile about." -Marilyn Monroe
"Twenty years from now you will be more disappointed by the things that you didn't do than by the ones you did do. So, throw off the bowlines, sail away from safe harbor, catch the trade winds in your sails. Explore, Dream, Discover." -Mark Twain
"I've learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel." -Maya Angelou
"You will face many defeats in life, but never let yourself be defeated." -Maya Angelou
"I alone cannot change the world, but I can cast a stone across the water to create many ripples." -Mother Teresa
"In this life we cannot do great things. We can only do small things with great love." -Mother Teresa
"Spread love everywhere you go. Let no one ever come to you without leaving happier." -Mother Teresa
"Whatever the mind of man can conceive and believe, it can achieve." -Napoleon Hill
"The greatest glory in living lies not in never falling, but in rising every time we fall." -Nelson Mandela
"Dream big and dare to fail." -Norman Vaughan
"If you look at what you have in life, you'll always have more. If you look at what you don't have in life, you'll never have enough." -Oprah Winfrey
"You become what you believe." -Oprah Winfrey
"You know you are on the road to success if you would do your job and not be paid for it." -Oprah Winfrey
"Life is never fair, and perhaps it is a good thing for most of us that it is not." -Oscar Wilde
"Do not go where the path may lead, go instead where there is no path and leave a trail." -Ralph Waldo Emerson
"Life is a succession of lessons which must be lived to be understood." -Ralph Waldo Emerson
"Live in the sunshine, swim the sea, drink the wild air." -Ralph Waldo Emerson
"The only person you are destined to become is the person you decide to be." -Ralph Waldo Emerson
"Life is trying things to see if they work." -Ray Bradbury
"In three words I can sum up everything I've learned about life: it goes on." -Robert Frost
"Don't judge each day by the harvest you reap but by the seeds that you plant." -Robert Louis Stevenson
"I have learned over the years that when one's mind is made up, this diminishes fear." -Rosa Parks
"If you're offered a seat on a rocket ship, don't ask what seat! Just get on." -Sheryl Sandberg
"An unexamined life is not worth living." -Socrates
"If you really look closely, most overnight successes took a long time." -Steve Jobs
"Your time is limited, so don't waste it living someone else's life. Don't be trapped by dogma -- which is living with the results of other people's thinking." -Steve Jobs
"Believe you can and you're halfway there." -Theodore Roosevelt
"Many of life's failures are people who did not realize how close they were to success when they gave up." -Thomas A. Edison
"I failed my way to success." -Thomas Edison
"I find that the harder I work, the more luck I seem to have." -Thomas Jefferson
"The only impossible journey is the one you never begin." -Tony Robbins
"People who succeed have momentum. The more they succeed, the more they want to succeed and the more they find a way to succeed. Similarly, when someone is failing, the tendency is to get on a downward spiral that can even become a self-fulfilling prophecy." -Tony Robbins
"The only place where success comes before work is in the dictionary." -Vidal Sassoon
"Winning isn't everything, but wanting to win is." -Vince Lombardi
"I would rather die of passion than of boredom." -Vincent van Gogh
"The way to get started is to quit talking and begin doing." -Walt Disney
"You miss 100% of the shots you don't take." -Wayne Gretzky
"Success is walking from failure to failure with no loss of enthusiasm." -Winston Churchill
"Success is not final; failure is not fatal: It is the courage to continue that counts." -Winston S. Churchill
"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -Thomas J. Watson
"Don't be distracted by criticism. Remember -- the only taste of success some people get is to take a bite out of you." -Zig Ziglar
# @markdown ## Tokenize sentences
# @markdown Character/String to use as separator when printing tokens to file.
TOKENS_SEPARATOR = "|" # @param {type: "string"}
# @markdown Custom settings for the tokenizer's encode method ([documentation](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.encode)).
ENCODE_KARGS = {"add_special_tokens": False} # @param {type: "raw"}
with open("sentences.txt", "r") as in_file:
with open("tokenized_sentences.txt", "w+") as out_file:
for sentence in in_file:
if sentence.endswith("\n"):
sentence = sentence[:-1]
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(sentence, **ENCODE_KARGS))
out_file.write(TOKENS_SEPARATOR.join(tokens))
out_file.write("\n")
# @markdown ## Tokenized sentences
!cat tokenized_sentences.txt
```
# Contour Plots
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(0)
def f(x, y):
return x**2 + y**2
x = np.arange(-5, 5.0, 0.25)
y = np.arange(-5, 5.0, 0.25)
print(x[:10])
print(y[:10])
```
### Meshgrid
```python
np.meshgrid(
*xi,
copy=True,
sparse=False,
indexing='xy'
)
```
Return coordinate matrices from coordinate vectors.
Make N-D coordinate arrays for vectorized evaluations of N-D scalar/vector fields over N-D grids, given one-dimensional coordinate arrays x1, x2,…, xn.
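For instance, with two short coordinate vectors the default `'xy'` indexing convention is easy to see: `X` repeats `x` along each row and `Y` repeats `y` down each column, so both outputs have shape `(len(y), len(x))`:

```python
import numpy as np

x = np.array([0, 1, 2])
y = np.array([10, 20])
X, Y = np.meshgrid(x, y)  # default indexing='xy'

print(X.shape)  # (2, 3): rows follow y, columns follow x
print(X)        # each row is a copy of x
print(Y)        # each column is a copy of y
```

Evaluating `f(X, Y)` then computes the function on every grid point at once, which is exactly what `contour` needs.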
```
X, Y = np.meshgrid(x, y)
print(X)
print(Y)
plt.scatter(X, Y, s=10);
Z = f(X, Y)
print(Z)
plt.contour(X, Y, Z, colors='black');
```
### Colorbars
'BuGn_r', 'BuPu', 'BuPu_r', 'CMRmap', 'CMRmap_r', 'Dark2', 'Dark2_r', 'GnBu', 'GnBu_r', 'Greens', 'Greens_r', 'Greys', 'Greys_r', 'OrRd', 'OrRd_r', 'Oranges', 'Oranges_r', 'PRGn', 'PRGn_r', 'Paired', 'Paired_r', 'Pastel1', 'Pastel1_r', 'Pastel2', 'Pastel2_r', 'PiYG', 'PiYG_r', 'PuBu', 'PuBuGn', 'PuBuGn_r', 'PuBu_r', 'PuOr', 'PuOr_r', 'PuRd', 'PuRd_r', 'Purples', 'Purples_r', 'RdBu', 'RdBu_r', 'RdGy', 'RdGy_r', 'RdPu', 'RdPu_r', 'RdYlBu', 'RdYlBu_r', 'RdYlGn', 'RdYlGn_r', 'Reds', 'Reds_r', 'Set1', 'Set1_r', 'Set2', 'Set2_r', 'Set3', 'Set3_r', 'Spectral', 'Spectral_r', 'Wistia', 'Wistia_r', 'YlGn', 'YlGnBu', 'YlGnBu_r', 'YlGn_r', 'YlOrBr', 'YlOrBr_r', 'YlOrRd', 'YlOrRd_r', 'afmhot', 'afmhot_r', 'autumn', 'autumn_r', 'binary', 'binary_r', 'bone', 'bone_r', 'brg', 'brg_r', 'bwr', 'bwr_r', 'cividis', 'cividis_r', 'cool', 'cool_r', 'coolwarm', 'coolwarm_r', 'copper', 'copper_r', 'cubehelix', 'cubehelix_r', 'flag', 'flag_r', 'gist_earth', 'gist_earth_r', 'gist_gray', 'gist_gray_r', 'gist_heat', 'gist_heat_r', 'gist_ncar', 'gist_ncar_r', 'gist_rainbow', 'gist_rainbow_r', 'gist_stern', 'gist_stern_r', 'gist_yarg', 'gist_yarg_r', 'gnuplot', 'gnuplot2', 'gnuplot2_r', 'gnuplot_r', 'gray', 'gray_r', 'hot', 'hot_r', 'hsv', 'hsv_r', 'inferno', 'inferno_r', 'jet', 'jet_r', 'magma', 'magma_r', 'nipy_spectral', 'nipy_spectral_r', 'ocean', 'ocean_r', 'pink', 'pink_r', 'plasma', 'plasma_r', 'prism', 'prism_r', 'rainbow', 'rainbow_r', 'seismic', 'seismic_r', 'spring', 'spring_r', 'summer', 'summer_r', 'tab10', 'tab10_r', 'tab20', 'tab20_r', 'tab20b', 'tab20b_r', 'tab20c', 'tab20c_r', 'terrain', 'terrain_r', 'turbo', 'turbo_r', 'twilight', 'twilight_r', 'twilight_shifted', 'twilight_shifted_r', 'viridis', 'viridis_r', 'winter', 'winter_r'
```
plt.contourf(X, Y, Z, 20, cmap='RdGy')
plt.colorbar();
plt.contourf(X, Y, Z, 20, cmap='cool')
plt.colorbar();
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = np.exp(-X**2 - Y**2)
Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2)
Z = (Z1 - Z2) * 2
fig, ax = plt.subplots()
CS = ax.contour(X, Y, Z)
```
# ThreadBuffer Performance
This notebook demonstrates the use of `ThreadBuffer` to generate batches of data asynchronously from the training thread.
Under certain circumstances the main thread can be busy with the training operations (that is, interacting with GPU memory and invoking CUDA operations), which are independent of batch generation. If the time taken to generate a batch is significant compared to the time taken to train the network for one iteration, and assuming the two can proceed in parallel given the limitations of the GIL or other factors, this should speed up the whole training process. The efficiency gain is relative to the proportion of these two times, so if batch generation is lengthy but training is very fast then very little parallel computation is possible.
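The idea can be illustrated with a minimal producer/consumer sketch (a simplified stand-in for `ThreadBuffer`, not MONAI's actual implementation): a worker thread keeps a bounded queue of generated items filled while the consumer processes the previous one.

```python
import queue
import threading

def buffered(iterable, buffer_size=1):
    """Yield items from `iterable`, generated ahead of time in a worker thread.

    Simplified sketch of the ThreadBuffer idea: the producer fills a bounded
    queue while the consumer (e.g. the training loop) works on earlier items.
    """
    q = queue.Queue(maxsize=buffer_size)
    _sentinel = object()  # marks the end of the iterable

    def _producer():
        for item in iterable:
            q.put(item)  # blocks when the buffer is full
        q.put(_sentinel)

    # daemon=True so an abandoned generator does not keep the process alive
    threading.Thread(target=_producer, daemon=True).start()
    while True:
        item = q.get()
        if item is _sentinel:
            return
        yield item
```

Iterating `buffered(loader)` in a training loop lets item `i+1` be generated while item `i` is being consumed, which is the overlap `ThreadBuffer` exploits.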
[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/acceleration/threadbuffer_performance.ipynb)
## Setup Environment
The current MONAI master branch (as of release 0.3.0) must be installed for this feature; skip this step if it is already installed:
```
%pip install git+https://github.com/Project-MONAI/MONAI#egg=MONAI
```
Installing PyTorch 1.6 specifically may be necessary on Colab:
```
%pip install torch==1.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
import numpy as np
import matplotlib.pyplot as plt
import torch
import monai
from monai.data import Dataset, DataLoader, ThreadBuffer, create_test_image_2d
from monai.networks.nets import UNet
from monai.losses import Dice
from monai.transforms import Compose, MapTransform, AddChanneld, ToTensord
monai.utils.set_determinism(seed=0)
monai.config.print_config()
```
The data pipeline is given here which creates random 2D segmentation training pairs. It is artificially slowed by setting the number of worker processes to 0 (often necessary under Windows).
```
class RandomGenerator(MapTransform):
"""Generates a dictionary containing image and segmentation images from a given seed value."""
def __call__(self, seed):
rs = np.random.RandomState(seed)
im, seg = create_test_image_2d(256, 256, num_seg_classes=1, random_state=rs)
return {self.keys[0]: im, self.keys[1]: seg}
data = np.random.randint(0, monai.utils.MAX_SEED, 1000)
trans = Compose(
[
RandomGenerator(keys=("im", "seg")),
AddChanneld(keys=("im", "seg")),
ToTensord(keys=("im", "seg")),
]
)
train_ds = Dataset(data, trans)
train_loader = DataLoader(train_ds, batch_size=20, shuffle=True, num_workers=0)
```
Network, loss, and optimizers defined as normal:
```
device = torch.device("cuda:0")
net = UNet(2, 1, 1, (8, 16, 32), (2, 2, 2), num_res_units=2).to(device)
loss_function = Dice(sigmoid=True)
optimizer = torch.optim.Adam(net.parameters(), 1e-5)
epoch_num = 10
```
A simple training function is defined which only performs step optimization of the network:
```
def train_step(batch):
inputs, labels = batch["im"].to(device), batch["seg"].to(device)
optimizer.zero_grad()
outputs = net(inputs)
loss = loss_function(outputs, labels)
loss.backward()
optimizer.step()
def train(use_buffer):
# wrap the loader in the ThreadBuffer if selected
src = ThreadBuffer(train_loader, 1) if use_buffer else train_loader
for epoch in range(epoch_num):
for batch in src:
train_step(batch)
```
Timing how long it takes to generate a single batch versus the time taken to optimize the network for one step reveals the proportion of time taken by each during each full training iteration:
```
it = iter(train_loader)
batch = next(it)
%timeit -n 1 next(it)
%timeit -n 1 train_step(batch)
```
Without using an asynchronous buffer for batch generation these operations must be sequential:
```
%timeit -n 1 train(False)
```
With overlap we see a significant speedup:
```
%timeit -n 1 train(True)
```
# Analyze a large dataset with Google BigQuery
**Learning Objectives**
In this lab, you use BigQuery to:
- Access an ecommerce dataset
- Look at the dataset metadata
- Remove duplicate entries
- Write and execute queries
___
## Introduction
BigQuery is Google's fully managed, NoOps, low cost analytics database. With BigQuery you can query terabytes and terabytes of data without having any infrastructure to manage or needing a database administrator. BigQuery uses SQL and can take advantage of the pay-as-you-go model. BigQuery allows you to focus on analyzing data to find meaningful insights.
We have a publicly available ecommerce dataset that has millions of Google Analytics records for the Google Merchandise Store loaded into a table in BigQuery. In this lab, you use a copy of that dataset. Sample scenarios are provided, from which you look at the data and ways to remove duplicate information. The lab then steps you through further analysis of the data.
BigQuery can be accessed through its own browser-based interface, Google Data Studio, and many third-party tools. In this lab you will use the BigQuery command-line interface exposed to the JupyterLab notebook via a Python library.
To follow and experiment with the BigQuery queries provided to analyze the data, see Standard SQL Query Syntax.
___
### Set up the notebook environment
__VERY IMPORTANT__: In the cell below you must replace the text 'QWIKLABSPROJECT' with your Qwiklabs Project Name as provided during the setup of your environment. Please leave any surrounding single quotes in place.
```
PROJECT = 'QWIKLABSPROJECT' #TODO Replace with your Qwiklabs PROJECT
import os
os.environ["PROJECT"] = PROJECT
```
## Explore eCommerce data and identify duplicate records
Scenario: Your data analyst team exported the Google Analytics logs for an ecommerce website into BigQuery and created a new table of all the raw ecommerce visitor session data.
Any cell that starts with `%%bigquery` (the BigQuery Magic) will be interpreted as a SQL query that is executed on BigQuery, and the result is printed to our notebook.
BigQuery supports [two flavors](https://cloud.google.com/bigquery/docs/reference/standard-sql/migrating-from-legacy-sql#comparison_of_legacy_and_standard_sql) of SQL syntax: legacy SQL and standard SQL. The preferred is standard SQL because it complies with the official SQL:2011 standard. To instruct BigQuery to interpret our syntax as such we start the query with `#standardSQL`.
Our first query accesses the BigQuery Information Schema, which stores all object-related metadata. In this case we want to see metadata details for the "all_sessions_raw" table.
Tip: To run the current cell you can click the cell and hit **shift enter**
```
%%bigquery --project $PROJECT
#standardSQL
SELECT
* EXCEPT(table_catalog, table_schema, is_generated, generation_expression, is_stored, is_updatable,
is_hidden, is_system_defined, is_partitioning_column, clustering_ordinal_position)
FROM
`data-to-insights.ecommerce.INFORMATION_SCHEMA.COLUMNS`
WHERE
table_name="all_sessions_raw"
```
Let's examine how many rows are in the table.
```
%%bigquery --project $PROJECT
#standardSQL
SELECT
count(*)
FROM
`data-to-insights.ecommerce.all_sessions_raw`
```
Now, let's take a quick look at a few rows of data in the table.
```
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
`data-to-insights.ecommerce.all_sessions_raw`
limit 7
```
### Identify duplicate rows
Seeing a sample of the data may give you greater intuition for what is included in the dataset. But since the table is quite large, a preview is not likely to render meaningful results. As you scan and scroll through the sample rows you see there is no single field that uniquely identifies a row, so you need more advanced logic to identify duplicate rows.
The query below uses the SQL GROUP BY function on every field and counts (COUNT) where there are rows that have the same values across every field.
If every field is unique, the COUNT will return 1 as there are no other groupings of rows with the exact same value for all fields.
If there is a row with the same values for all fields, they will be grouped together and the COUNT will be greater than 1. The last part of the query is an aggregation filter using HAVING to only show the results that have a COUNT of duplicates greater than 1.
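The same GROUP BY / HAVING logic can be sketched in pandas on a toy frame (purely illustrative; the real query runs in BigQuery):

```python
import pandas as pd

# Toy stand-in for the session table, with one duplicated row.
df = pd.DataFrame({
    "fullVisitorId": ["a", "a", "b", "b"],
    "country":       ["US", "US", "DE", "FR"],
})

# GROUP BY every column and COUNT(*) ...
dup = (df.groupby(list(df.columns))
         .size()
         .reset_index(name="num_duplicate_rows"))
# ... HAVING num_duplicate_rows > 1
dup = dup[dup["num_duplicate_rows"] > 1]
print(dup)  # only the ("a", "US") row, which appears twice
```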
Run the following query to find duplicate records across all columns.
```
%%bigquery --project $PROJECT
#standardSQL
SELECT COUNT(*) as num_duplicate_rows, * FROM
`data-to-insights.ecommerce.all_sessions_raw`
GROUP BY
fullVisitorId, channelGrouping, time, country, city, totalTransactionRevenue, transactions,
timeOnSite, pageviews, sessionQualityDim, date, visitId, type, productRefundAmount, productQuantity,
productPrice, productRevenue, productSKU, v2ProductName, v2ProductCategory, productVariant,
currencyCode, itemQuantity, itemRevenue, transactionRevenue, transactionId, pageTitle,
searchKeyword, pagePathLevel1, eCommerceAction_type, eCommerceAction_step, eCommerceAction_option
HAVING num_duplicate_rows > 1;
```
As you can see there are quite a few "duplicate" records (615) when analyzed with these parameters.
In your own datasets, even if you have a unique key, it is still beneficial to confirm the uniqueness of the rows with COUNT, GROUP BY, and HAVING before you begin your analysis.
## Analyze the new all_sessions table
In this section you use a deduplicated table called all_sessions.
Scenario: Your data analyst team has provided you with a relevant query, and your schema experts have identified the key fields that must be unique for each record per your schema.
Run the query to confirm that no duplicates exist, this time against the "all_sessions" table:
```
%%bigquery --project $PROJECT
#standardSQL
SELECT
fullVisitorId, # the unique visitor ID
visitId, # a visitor can have multiple visits
date, # session date stored as string YYYYMMDD
time, # time of the individual site hit (can be 0 to many per visitor session)
v2ProductName, # not unique since a product can have variants like Color
productSKU, # unique for each product
type, # a visitor can visit Pages and/or can trigger Events (even at the same time)
eCommerceAction_type, # maps to 'add to cart', 'completed checkout'
eCommerceAction_step,
eCommerceAction_option,
transactionRevenue, # revenue of the order
transactionId, # unique identifier for revenue bearing transaction
COUNT(*) as row_count
FROM
`data-to-insights.ecommerce.all_sessions`
GROUP BY 1,2,3,4,5,6,7,8,9,10,11,12
HAVING row_count > 1 # find duplicates
```
The query returns zero records.
## Write basic SQL against the eCommerce data
In this section, you query for insights on the ecommerce dataset.
A good first path of analysis is to find the total number of unique visitors.
The query below determines the total views by counting product_views and the number of unique visitors by counting fullVisitorId.
```
%%bigquery --project $PROJECT
#standardSQL
SELECT
COUNT(*) AS product_views,
COUNT(DISTINCT fullVisitorId) AS unique_visitors
FROM `data-to-insights.ecommerce.all_sessions`;
```
The next query shows total unique visitors (fullVisitorId) by the referring site (channelGrouping):
```
%%bigquery --project $PROJECT
#standardSQL
SELECT
COUNT(DISTINCT fullVisitorId) AS unique_visitors,
channelGrouping
FROM `data-to-insights.ecommerce.all_sessions`
GROUP BY 2
ORDER BY 2 DESC;
```
To find deeper insights in the data, the next query lists the five products with the most views (product_views) from unique visitors. The query counts number of times a product (v2ProductName) was viewed (product_views), puts the list in descending order, and lists the top 5 entries:
```
%%bigquery --project $PROJECT
#standardSQL
SELECT
COUNT(*) AS product_views,
(v2ProductName) AS ProductName
FROM `data-to-insights.ecommerce.all_sessions`
WHERE type = 'PAGE'
GROUP BY v2ProductName
ORDER BY product_views DESC
LIMIT 5;
```
Now expand your previous query to include the total number of distinct products ordered and the total number of units ordered (productQuantity):
```
%%bigquery --project $PROJECT
#standardSQL
SELECT
COUNT(*) AS product_views,
COUNT(productQuantity) AS orders,
SUM(productQuantity) AS quantity_product_ordered,
v2ProductName
FROM `data-to-insights.ecommerce.all_sessions`
WHERE type = 'PAGE'
GROUP BY v2ProductName
ORDER BY product_views DESC
LIMIT 5;
```
Lastly, let's expand the query to include the average number of units per order (total units ordered divided by total orders, or SUM(productQuantity)/COUNT(productQuantity)).
```
%%bigquery --project $PROJECT
#standardSQL
SELECT
COUNT(*) AS product_views,
COUNT(productQuantity) AS orders,
SUM(productQuantity) AS quantity_product_ordered,
SUM(productQuantity) / COUNT(productQuantity) AS avg_per_order,
(v2ProductName) AS ProductName
FROM `data-to-insights.ecommerce.all_sessions`
WHERE type = 'PAGE'
GROUP BY v2ProductName
ORDER BY product_views DESC
LIMIT 5;
```
We see that, among these top 5 products by product views, the 22 oz YouTube Bottle Infuser had the highest avg_per_order, with 9.38 units per order.
Copyright 2019 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Papermill
Links and material:
- https://papermill.readthedocs.io/en/latest/
- https://towardsdatascience.com/introduction-to-papermill-2c61f66bea30
- https://medium.com/capital-fund-management/automated-reports-with-jupyter-notebooks-using-jupytext-and-papermill-619e60c37330
- https://medium.com/ai³-theory-practice-business/how-to-build-machine-learning-pipelines-with-airflow-papermill-6baef3832bc6
Jupyter notebooks are the gold standard for exploratory data analysis (EDA) and prototyping, and a great tool for documenting data science projects.
One problem that emerges when using Jupyter notebooks for repetitive tasks and ETLs is that they lack automation and logging features.
Lack of automation means you would have to open the respective notebook and run it manually whenever needed; lack of logging means you would not be able to track possible errors and exceptions during the execution.
Another problem with Jupyter notebooks is the low engineering rigor of notebook-based analyses, which makes the process unstable.
## What is Papermill
**Papermill** is a tool for parameterizing and executing Jupyter Notebooks and lets you:
- **parameterize** notebooks
- **execute** notebooks
This opens up new opportunities for how notebooks can be used. For example:
- Running a financial report with different values on the first day, the last day, or the beginning of the month. Using parameters makes this task easier.
- Do you want to run a notebook and, depending on its results, choose a particular notebook to run next? You can programmatically **execute a workflow** without having to copy and paste from notebook to notebook manually.
Practically, it fills the automation and logging gap by offering a way to execute notebooks as files, and by generating a report for each execution.
## The Process:
1. Data Engineering on the data
2. Data Visualization and calculation
3. PDF Output
Important: to use Papermill, please use `Jupyter Notebook` or `Jupyter Lab`.
The first thing to do after installing the library is to define an IPython kernel in which our analyses and reports will run:
```bash
pip install ipykernel
#or if you are using poetry
poetry add ipykernel
#Generate a new environment
ipython kernel install --user --name=papermill-example
```
Then you can create a workflow of your analysis
### Configure notebook for Papermill
With Papermill it is also very important to configure your `Jupyter Notebook` or `Jupyter Lab` instance to accept incoming parameters.
In the input-parameter cell (the cell whose variables Papermill should override), open the `cell metadata` inside the advanced tools of the Jupyter notebook file and write the following description:
```json
{
"tags": [
"parameters"
]
}
```
Or in Jupyter Lab click on the button `add tag` adding the tag: `parameters`

### Ipython Kernel Help
When you install a new library, be careful: you may be running a kernel in which the library is not installed.
**Create an ipython kernel**
`ipython kernel install --user --name=papermill-example`
**Visualize ipython kernel installed**
`jupyter kernelspec list`
**Delete an ipython kernel**
`jupyter kernelspec uninstall papermill-example`
### Launch a Notebook with papermill
To launch the notebook with papermill use this command on terminal:
`papermill ./notebooks/generate_report.ipynb ./notebooks/generate_report_output.ipynb -p analysis "listings"`
`papermill <input_notebook> <output_notebook> -p <parameter_name> <parameter_value> -p <second_p_n> <second_p_value>`
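Papermill also exposes the same functionality through a Python API. A minimal sketch, reusing the example paths and kernel name from above (the import is deferred so the function can be defined even where papermill is not installed):

```python
def run_report(analysis="listings"):
    """Python-API equivalent of the CLI call above.

    Paths and kernel name are the same illustrative examples used in this
    notebook; requires `pip install papermill`.
    """
    import papermill as pm
    return pm.execute_notebook(
        "./notebooks/generate_report.ipynb",         # input notebook
        "./notebooks/generate_report_output.ipynb",  # executed copy with outputs
        parameters={"analysis": analysis},           # overrides the tagged cell
        kernel_name="papermill-example",
    )
```

Calling `run_report("listings")` produces the same output notebook as the CLI invocation.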
**Warning**: for reasons the author could not determine, the interface with Python files (.py) does not work; it causes a circular import in the library.
# 4 - Hybrid Absorbing Boundary Condition (HABC)
# 4.1 - Introduction
In this notebook we describe absorbing boundary conditions and their use combined with the *Hybrid Absorbing Boundary Condition* (*HABC*). The points in common with the previous notebooks <a href="01_introduction.ipynb">Introduction to Acoustic Problem</a>, <a href="02_damping.ipynb">Damping</a> and <a href="03_pml.ipynb">PML</a> will be used here, with brief descriptions.
# 4.2 - Absorbing Boundary Conditions
We initially describe absorbing boundary conditions: the so-called Clayton A1 and A2 conditions and Higdon's scheme. These methods can be used as pure boundary conditions, designed to reduce reflections, or as part of the Hybrid Absorbing Boundary Condition, in which they are combined with an absorption layer in a manner to be described ahead.
In the presentation of these boundary conditions we initially consider the wave equation to be solved on
the spatial domain $\Omega=\left[x_{I},x_{F}\right] \times\left[z_{I},z_{F}\right]$ as show in the figure bellow. More details about the equation and domain definition can be found in the <a href="01_introduction.ipynb">Introduction to Acoustic Problem</a> notebook.
<img src='domain1.png' width=500>
## 4.2.1 - Clayton's A1 Boundary Condition
Clayton's A1 boundary condition is based on a one-way wave equation (OWWE). This simple condition is such that outgoing waves normal to the border leave without reflection. At the $\partial \Omega_1$ part of the boundary we have
- $\displaystyle\frac{\partial u(x,z,t)}{\partial t}-c(x,z)\displaystyle\frac{\partial u(x,z,t)}{\partial x}=0.$
while at $\partial \Omega_3$ the condition is
- $\displaystyle\frac{\partial u(x,z,t)}{\partial t}+c(x,z)\displaystyle\frac{\partial u(x,z,t)}{\partial x}=0.$
and at $\partial \Omega_2$
- $\displaystyle\frac{\partial u(x,z,t)}{\partial t}+c(x,z)\displaystyle\frac{\partial u(x,z,t)}{\partial z}=0.$
## 4.2.2 - Clayton's A2 Boundary Condition
The A2 boundary condition aims to impose a boundary condition that would make outgoing waves leave the domain without being reflected. This condition is approximated (using a Padé approximation in the wave dispersion relation) by the following equation to be imposed on the boundary part $\partial \Omega_1$
- $\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial t^{2}}+c(x,z)\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial x \partial t}+\frac{c^2(x,z)}{2}\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial z^{2}}=0.$
At $\partial \Omega_3$ we have
- $\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial t^{2}}-c(x,z)\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial z \partial t}+\frac{c^2(x,z)}{2}\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial x^{2}}=0.$
while at $\partial \Omega_2$ the condition is
- $\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial t^{2}}-c(x,z)\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial x \partial t}+\frac{c^2(x,z)}{2}\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial z^{2}}=0.$
At the corner points the condition is
- $\displaystyle\frac{\sqrt{2}\partial u(x,z,t)}{\partial t}+c(x,z)\left(\displaystyle\frac{\partial u(x,z,t)}{\partial x}+\displaystyle\frac{\partial u(x,z,t)}{\partial z}\right)=0.$
## 4.2.3 - Higdon Boundary Condition
The Higdon boundary condition of order $p$ is given at $\partial \Omega_1$ and $\partial \Omega_3$ by:
- $\Pi_{j=1}^{p}\left(\cos(\alpha_j)\displaystyle\frac{\partial }{\partial t}-c(x,z)\displaystyle\frac{\partial }{\partial x}\right)u(x,z,t)=0.$
and at $\partial \Omega_2$
- $\Pi_{j=1}^{p}\left(\cos(\alpha_j)\displaystyle\frac{\partial}{\partial t}-c(x,z)\displaystyle\frac{\partial}{\partial z}\right)u(x,z,t)=0.$
This method makes outgoing waves whose angle of incidence at the boundary equals one of the $\alpha_j$ leave without reflection. The method we use in this notebook employs order 2 ($p=2$) and angles $0$ and $\pi/4$.
Observation: there are similarities between Clayton's A2 and the Higdon condition. If one chooses $p=2$ and both angles equal to zero in Higdon's method, this leads to the condition
$u_{tt}-2cu_{xt}+c^2u_{xx}=0$. But, using the wave equation, we have that $c^2u_{xx}=u_{tt}-c^2u_{zz}$. Replacing this relation in the previous equation, we get $2u_{tt}-2cu_{xt}-c^2u_{zz}=0$, which is Clayton's A2
boundary condition. In this sense, Higdon's method generalizes Clayton's scheme. But the discretizations of the two methods are quite different, since in Higdon's scheme the boundary operators are unidirectional, while in Clayton's A2 they are not.
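The substitution above can be checked symbolically, treating the partial derivatives as plain symbols (a quick sanity check with SymPy, not part of the original notebook):

```python
import sympy as sp

u_tt, u_xt, u_xx, u_zz, c = sp.symbols("u_tt u_xt u_xx u_zz c", positive=True)

# Higdon with p = 2 and both angles zero: (d/dt - c d/dx)^2 u = 0
higdon = u_tt - 2*c*u_xt + c**2*u_xx

# Wave equation: u_tt = c^2 (u_xx + u_zz), i.e. c^2 u_xx = u_tt - c^2 u_zz
clayton_a2 = higdon.subs(u_xx, (u_tt - c**2*u_zz) / c**2)

print(sp.expand(clayton_a2))  # equals 2*u_tt - 2*c*u_xt - c**2*u_zz
```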
# 4.3 - Acoustic Problem with HABC
In the hybrid absorbing boundary condition (HABC) scheme we will also extend the spatial domain as $\Omega=\left[x_{I}-L,x_{F}+L\right] \times\left[z_{I},z_{F}+L\right]$.
We add to the target domain $\Omega_{0}=\left[x_{I},x_{F}\right]\times\left[z_{I},z_{F}\right]$ an extension zone of length $L$ at both ends of the $x$ direction and at the end of the domain in the $z$ direction, as represented in the figure below.
<img src='domain2.png' width=500>
The difference with respect to the previous schemes is that this extended region will now be considered as the union of several gradual extensions. As represented in the next figure, we define a region $A_M=\Omega_{0}$. The regions $A_k, k=M-1,\cdots,1$ are defined as the previous region $A_{k+1}$ with one extra grid line added on its left, right and bottom sides, so that the final region is $A_1=\Omega$ (we thus have $M=L+1$).
<img src='region1.png' width=500>
We now consider the temporal evolution of the solution of the HABC method. Suppose that $u(x,z,t-1)$ is the solution at a given instant $t-1$ on the whole extended domain $\Omega$. We update it to instant $t$, using one of the absorbing boundary conditions described in the previous section (A1, A2 or Higdon), producing a preliminary new function $u(x,z,t)$. Now, call $u_{1}(x,z,t)$ the solution at instant $t$ constructed in the extended region by applying the same absorbing boundary condition at the border of each of the domains $A_k, k=1,\ldots,M$. The HABC solution is constructed as a convex combination of $u(x,z,t)$ and $u_{1}(x,z,t)$:
- $u(x,z,t) = (1-\omega)u(x,z,t)+\omega u_{1}(x,z,t)$.
The function $u_{1}(x,z,t)$ is defined (and used) only in the extension of the domain. The function $\omega$ is a weight function growing from zero at the boundary $\partial\Omega_{0}$ to one at $\partial\Omega$. The weight function could vary linearly, as when the scheme was first proposed by Liu and Sen, but HABC produces better results with a non-linear weight function, described ahead.
The wave equation employed here will be the same as in the previous notebooks, with same velocity model, source term and initial conditions.
## 4.3.1 The weight function $\omega$
One can choose a *linear* weight function as
\begin{equation}
\omega_{k} = \displaystyle\frac{M-k}{M};
\end{equation}
or preferably a *non-linear* one
\begin{equation}
\omega_{k}=\left\{ \begin{array}{ll}
1, & \textrm{if $1\leq k \leq P+1$,} \\ \left(\displaystyle\frac{M-k}{M-P}\right)^{\alpha} , & \textrm{if $P+2 \leq k \leq M-1.$} \\ 0 , & \textrm{if $k=M$.}\end{array}\right.
\label{eq:elo8}
\end{equation}
In general we take $P=2$ and we choose $\alpha$ as follows:
- $\alpha = 1.5 + 0.07(npt-P)$, in the case of A1 and A2;
- $\alpha = 1.0 + 0.15(npt-P)$, in the case of Higdon.
The value *npt* designates the number of discrete points that define the length of the blue band in the direction $x$ and/or $z$.
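As a sketch, the non-linear weights above can be computed as follows ($M$ is the number of nested regions; the defaults, e.g. `npt=20`, are illustrative assumptions matching the layer size used later in this notebook):

```python
import numpy as np

def habc_weights(M, P=2, npt=20, scheme="higdon"):
    """Non-linear weights omega_k, k = 1..M, following the formula above.

    `alpha` uses the choices quoted in the text for A1/A2 vs. Higdon.
    """
    if scheme == "higdon":
        alpha = 1.0 + 0.15*(npt - P)
    else:  # A1 or A2
        alpha = 1.5 + 0.07*(npt - P)
    k = np.arange(1, M + 1)
    # omega_k = 1 for k <= P+1, ((M-k)/(M-P))**alpha otherwise
    w = np.where(k <= P + 1, 1.0, ((M - k) / (M - P))**alpha)
    w[-1] = 0.0  # omega_M = 0 at the edge of the target domain
    return w

w = habc_weights(M=21)
print(w[0], w[-1])  # 1.0 at the outer boundary, 0.0 at the target-domain edge
```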
# 4.4 - Finite Difference Operators and Discretization of Spatial and Temporal Domains
We employ the same methods as in the previous notebooks.
# 4.5 - Standard Problem
Recalling the standard problem definitions discussed in the notebook <a href="01_introduction.ipynb">Introduction to Acoustic Problem</a>, we have:
- $x_{I}$ = 0.0 Km;
- $x_{F}$ = 1.0 Km = 1000 m;
- $z_{I}$ = 0.0 Km;
- $z_{F}$ = 1.0 Km = 1000 m;
The spatial discretization parameters are given by:
- $\Delta x$ = 0.01 km = 10m;
- $\Delta z$ = 0.01 km = 10m;
Let us consider the time domain $I$ with the following limits:
- $t_{I}$ = 0 s = 0 ms;
- $t_{F}$ = 1 s = 1000 ms;
The temporal discretization parameters are given by:
- $\Delta t$ $\approx$ 0.0016 s = 1.6 ms;
- $NT$ = 626.
The source term, velocity model and positioning of receivers will be as in the previous notebooks.
# 4.6 - Numerical Simulations
For the numerical simulations of this notebook we use several of the codes presented in the notebooks <a href="02_damping.ipynb">Damping</a> and <a href="03_pml.ipynb">PML</a>. The new features will be described in more detail.
So, we import the following Python and Devito packages:
```
# NBVAL_IGNORE_OUTPUT
import numpy as np
import matplotlib.pyplot as plot
import math as mt
import matplotlib.ticker as mticker
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib import cm
```
From Devito's library of examples we import the following structures:
```
# NBVAL_IGNORE_OUTPUT
%matplotlib inline
from examples.seismic import TimeAxis
from examples.seismic import RickerSource
from examples.seismic import Receiver
from examples.seismic import plot_velocity
from devito import SubDomain, Grid, NODE, TimeFunction, Function, Eq, solve, Operator
```
The mesh parameters that we choose define the domain $\Omega_{0}$ plus the absorption region. For this, we use the following data:
```
nptx = 101
nptz = 101
x0 = 0.
x1 = 1000.
compx = x1-x0
z0 = 0.
z1 = 1000.
compz = z1-z0;
hxv = (x1-x0)/(nptx-1)
hzv = (z1-z0)/(nptz-1)
```
As we saw previously, HABC has three approach possibilities (A1, A2 and Higdon) and two types of weights (linear and non-linear). So, we insert two control variables. The variable called *habctype* chooses the type of HABC approach and is such that:
- *habctype=1* is equivalent to choosing A1;
- *habctype=2* is equivalent to choosing A2;
- *habctype=3* is equivalent to choosing Higdon;
Regarding the weights, we will introduce the variable *habcw* that chooses the type of weight and is such that:
- *habcw=1* is equivalent to linear weight;
- *habcw=2* is equivalent to non-linear weights;
In this way, we make the following choices:
```
habctype = 3
habcw = 2
```
The number of points in the absorption layer in the directions $x$ and $z$ is given, respectively, by:
```
npmlx = 20
npmlz = 20
```
The lengths $L_{x}$ and $L_{z}$ are given, respectively, by:
```
lx = npmlx*hxv
lz = npmlz*hzv
```
For the construction of the *grid* we have:
```
nptx = nptx + 2*npmlx
nptz = nptz + 1*npmlz
x0 = x0 - hxv*npmlx
x1 = x1 + hxv*npmlx
compx = x1-x0
z0 = z0
z1 = z1 + hzv*npmlz
compz = z1-z0
origin = (x0,z0)
extent = (compx,compz)
shape = (nptx,nptz)
spacing = (hxv,hzv)
```
As in the case of the acoustic equation with Damping and the acoustic equation with PML, we can define specific regions in our domain, since the solution $u_{1}(x,z,t)$ is only calculated in the blue region. We follow a similar scheme for creating *subdomains* as was done in the notebooks <a href="02_damping.ipynb">Damping</a> and <a href="03_pml.ipynb">PML</a>.
First, we define a region corresponding to the entire domain, naming this region *d0*. In the language of *subdomains*, *d0* is written as:
```
class d0domain(SubDomain):
name = 'd0'
def define(self, dimensions):
x, z = dimensions
return {x: x, z: z}
d0_domain = d0domain()
```
The blue region will be built with 3 divisions:
- *d1* represents the left range in the direction *x*, where the pairs $(x,z)$ satisfy: $x\in\{0,npmlx\}$ and $z\in\{0,nptz\}$;
- *d2* represents the right range in the direction *x*, where the pairs $(x,z)$ satisfy: $x\in\{nptx-npmlx,nptx\}$ and $z\in\{0,nptz\}$;
- *d3* represents the bottom range in the direction *z*, where the pairs $(x,z)$ satisfy: $x\in\{npmlx,nptx-npmlx\}$ and $z\in\{nptz-npmlz,nptz\}$;
Thus, the regions *d1*, *d2* and *d3* are described as follows in the language of *subdomains*:
```
class d1domain(SubDomain):
name = 'd1'
def define(self, dimensions):
x, z = dimensions
return {x: ('left',npmlx), z: z}
d1_domain = d1domain()
class d2domain(SubDomain):
name = 'd2'
def define(self, dimensions):
x, z = dimensions
return {x: ('right',npmlx), z: z}
d2_domain = d2domain()
class d3domain(SubDomain):
name = 'd3'
def define(self, dimensions):
x, z = dimensions
if((habctype==3)&(habcw==1)):
return {x: x, z: ('right',npmlz)}
else:
return {x: ('middle', npmlx, npmlx), z: ('right',npmlz)}
d3_domain = d3domain()
```
The figure below represents the division of domains that we did previously:
<img src='domain3.png' width=500>
After defining the spatial parameters and constructing the *subdomains*, we generate the *spatial grid* and set the velocity field:
```
grid = Grid(origin=origin, extent=extent, shape=shape, subdomains=(d0_domain,d1_domain,d2_domain,d3_domain), dtype=np.float64)
v0 = np.zeros((nptx,nptz))
X0 = np.linspace(x0,x1,nptx)
Z0 = np.linspace(z0,z1,nptz)
x10 = x0+lx
x11 = x1-lx
z10 = z0
z11 = z1 - lz
xm = 0.5*(x10+x11)
zm = 0.5*(z10+z11)
pxm = 0
pzm = 0
for i in range(0,nptx):
if(X0[i]==xm): pxm = i
for j in range(0,nptz):
if(Z0[j]==zm): pzm = j
p0 = 0
p1 = pzm
p2 = nptz
v0[0:nptx,p0:p1] = 1.5
v0[0:nptx,p1:p2] = 2.5
```
Previously we introduced the local variables *x10, x11, z10, z11, xm, zm, pxm* and *pzm*, which help us to create a specific velocity field over the whole domain (including the absorption region). Below we include a routine to plot the velocity field.
```
def graph2dvel(vel):
plot.figure()
plot.figure(figsize=(16,8))
fscale = 1/10**(3)
scale = np.amax(vel[npmlx:-npmlx,0:-npmlz])
extent = [fscale*(x0+lx),fscale*(x1-lx), fscale*(z1-lz), fscale*(z0)]
fig = plot.imshow(np.transpose(vel[npmlx:-npmlx,0:-npmlz]), vmin=0.,vmax=scale, cmap=cm.seismic, extent=extent)
plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.title('Velocity Profile')
plot.grid()
ax = plot.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plot.colorbar(fig, cax=cax, format='%.2e')
cbar.set_label('Velocity [km/s]')
plot.show()
```
Below we include the plot of the velocity field.
```
# NBVAL_IGNORE_OUTPUT
graph2dvel(v0)
```
Time parameters are defined and constructed by the following sequence of commands:
```
t0 = 0.
tn = 1000.
CFL = 0.4
vmax = np.amax(v0)
dtmax = np.float64((min(hxv,hzv)*CFL)/(vmax))
ntmax = int((tn-t0)/dtmax)+1
dt0 = np.float64((tn-t0)/ntmax)
```
With the temporal parameters, we generate the time properties with *TimeAxis* as follows:
```
time_range = TimeAxis(start=t0,stop=tn,num=ntmax+1)
nt = time_range.num - 1
```
The symbolic values associated with the spatial and temporal grids that are used in the composition of the equations are given by:
```
(hx,hz) = grid.spacing_map
(x, z) = grid.dimensions
t = grid.stepping_dim
dt = grid.stepping_dim.spacing
```
We set the Ricker source:
```
f0 = 0.01
nsource = 1
xposf = 0.5*(compx-2*npmlx*hxv)
zposf = hzv
src = RickerSource(name='src',grid=grid,f0=f0,npoint=nsource,time_range=time_range,staggered=NODE,dtype=np.float64)
src.coordinates.data[:, 0] = xposf
src.coordinates.data[:, 1] = zposf
```
Below we include the plot of the Ricker source.
```
# NBVAL_IGNORE_OUTPUT
src.show()
```
We set the receivers:
```
nrec = nptx
nxpos = np.linspace(x0,x1,nrec)
nzpos = hzv
rec = Receiver(name='rec',grid=grid,npoint=nrec,time_range=time_range,staggered=NODE,dtype=np.float64)
rec.coordinates.data[:, 0] = nxpos
rec.coordinates.data[:, 1] = nzpos
```
The displacement field *u* and the velocity *vel* are allocated:
```
u = TimeFunction(name="u",grid=grid,time_order=2,space_order=2,staggered=NODE,dtype=np.float64)
vel = Function(name="vel",grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
vel.data[:,:] = v0[:,:]
```
We include the source term as *src_term* using the following command:
```
src_term = src.inject(field=u.forward,expr=src*dt**2*vel**2)
```
The Receivers are again called *rec_term*:
```
rec_term = rec.interpolate(expr=u)
```
The next step is to generate the $\omega$ weights, which are selected using the *habcw* variable. Our construction has two steps: first, we build local vectors *weightsx* and *weightsz* that represent the weights in the directions $x$ and $z$, respectively. Second, we distribute these vectors into two global arrays called *Mweightsx* and *Mweightsz* that represent the distribution of the weights along the *grid* in the directions $x$ and $z$, respectively. The *generateweights* function below performs the operations listed previously:
```
def generateweights():
weightsx = np.zeros(npmlx)
weightsz = np.zeros(npmlz)
Mweightsx = np.zeros((nptx,nptz))
Mweightsz = np.zeros((nptx,nptz))
if(habcw==1):
for i in range(0,npmlx):
weightsx[i] = (npmlx-i)/(npmlx)
for i in range(0,npmlz):
weightsz[i] = (npmlz-i)/(npmlz)
if(habcw==2):
mx = 2
mz = 2
if(habctype==3):
alphax = 1.0 + 0.15*(npmlx-mx)
alphaz = 1.0 + 0.15*(npmlz-mz)
else:
alphax = 1.5 + 0.07*(npmlx-mx)
alphaz = 1.5 + 0.07*(npmlz-mz)
for i in range(0,npmlx):
if(0<=i<=(mx)):
weightsx[i] = 1
elif((mx+1)<=i<=npmlx-1):
weightsx[i] = ((npmlx-i)/(npmlx-mx))**(alphax)
else:
weightsx[i] = 0
for i in range(0,npmlz):
if(0<=i<=(mz)):
weightsz[i] = 1
elif((mz+1)<=i<=npmlz-1):
weightsz[i] = ((npmlz-i)/(npmlz-mz))**(alphaz)
else:
weightsz[i] = 0
for k in range(0,npmlx):
ai = k
af = nptx - k - 1
bi = 0
bf = nptz - k
Mweightsx[ai,bi:bf] = weightsx[k]
Mweightsx[af,bi:bf] = weightsx[k]
for k in range(0,npmlz):
ai = k
af = nptx - k
bf = nptz - k - 1
Mweightsz[ai:af,bf] = weightsz[k]
return Mweightsx,Mweightsz
```
Once the *generateweights* function has been created, we execute it with the following command:
```
Mweightsx,Mweightsz = generateweights();
```
Below we include a routine to plot the weight fields.
```
def graph2dweight(D):
plot.figure()
plot.figure(figsize=(16,8))
fscale = 10**(-3)
scale = np.amax(D)
extent = [fscale*x0,fscale*x1, fscale*z1, fscale*z0]
fig = plot.imshow(np.transpose(D), vmin=0.,vmax=scale, cmap=cm.seismic, extent=extent)
plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.title('Weight Function')
plot.grid()
ax = plot.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plot.colorbar(fig, cax=cax, format='%.2e')
cbar.set_label('Weights')
plot.show()
```
Below we include the plot of the weight field in the $x$ direction.
```
# NBVAL_IGNORE_OUTPUT
graph2dweight(Mweightsx)
```
Below we include the plot of the weight field in the $z$ direction.
```
# NBVAL_IGNORE_OUTPUT
graph2dweight(Mweightsz)
```
Next we create the fields for the weight arrays *weightsx* and *weightsz*:
```
weightsx = Function(name="weightsx",grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
weightsx.data[:,:] = Mweightsx[:,:]
weightsz = Function(name="weightsz",grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
weightsz.data[:,:] = Mweightsz[:,:]
```
For the discretization of the A2 and Higdon boundary conditions (to calculate $u_{1}(x,z,t)$) we need information from three time levels, namely $u(x,z,t-1)$, $u(x,z,t)$ and $u(x,z,t+1)$. So it is convenient to create three fields:
```
u1 = Function(name="u1" ,grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
u2 = Function(name="u2" ,grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
u3 = Function(name="u3" ,grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
```
We will assign to each of them the three time solutions described previously, that is,
- u1(x,z) = u(x,z,t-1);
- u2(x,z) = u(x,z,t);
- u3(x,z) = u(x,z,t+1);
These three assignments can be represented by the *stencil01* given by:
```
stencil01 = [Eq(u1,u.backward),Eq(u2,u),Eq(u3,u.forward)]
```
An update of the term *u3(x,z)* will be necessary after updating *u(x,z,t+1)* in the direction $x$, so that we can continue to apply the HABC method. This update is given by *stencil02* defined as:
```
stencil02 = [Eq(u3,u.forward)]
```
For the acoustic equation with HABC, without the source term, we need in $\Omega$:
- eq1 = u.dt2 - vel * vel * u.laplace;
So the *pde* that represents this equation is given by:
```
pde0 = Eq(u.dt2 - u.laplace*vel**2)
```
And the *stencil* for *pde0* is given by:
```
stencil0 = Eq(u.forward, solve(pde0,u.forward))
```
We divide the blue region into $npmlx$ layers in the $x$ direction and $npmlz$ layers in the $z$ direction. In this case, the representation is a little more complex than shown in the figures that exemplify the regions $A_{k}$, because there are intersections between the layers.
**Observation:** Note that the representation of the $A_{k}$ layers that we present in our text reflects the case where $npmlx=npmlz$. However, our code includes the case illustrated in the figure, as well as situations in which $npmlx\neq npmlz$. The discretizations of the boundary conditions A1, A2 and Higdon follow the bibliographic references at the end. They will not be detailed here, but can be seen in the codes below.
In the sequence of codes below we build the *pdes* that represent the *eqs* of the regions $B_{1}$, $B_{2}$ and $B_{3}$ and/or in the corners (red points in the case of *A2*) as represented in the following figure:
<img src='region2.png' width=500>
In the sequence, we present the *stencils* for each of these *pdes*.
So, for the A1 case we have the following *pdes* and *stencils*:
```
if(habctype==1):
# Region B_{1}
aux1 = ((-vel[x,z]*dt+hx)*u2[x,z] + (vel[x,z]*dt+hx)*u2[x+1,z] + (vel[x,z]*dt-hx)*u3[x+1,z])/(vel[x,z]*dt+hx)
pde1 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux1
stencil1 = Eq(u.forward,pde1,subdomain = grid.subdomains['d1'])
# Region B_{3}
aux2 = ((-vel[x,z]*dt+hx)*u2[x,z] + (vel[x,z]*dt+hx)*u2[x-1,z] + (vel[x,z]*dt-hx)*u3[x-1,z])/(vel[x,z]*dt+hx)
pde2 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux2
stencil2 = Eq(u.forward,pde2,subdomain = grid.subdomains['d2'])
# Region B_{2}
aux3 = ((-vel[x,z]*dt+hz)*u2[x,z] + (vel[x,z]*dt+hz)*u2[x,z-1] + (vel[x,z]*dt-hz)*u3[x,z-1])/(vel[x,z]*dt+hz)
pde3 = (1-weightsz[x,z])*u3[x,z] + weightsz[x,z]*aux3
stencil3 = Eq(u.forward,pde3,subdomain = grid.subdomains['d3'])
```
For the A2 case we have the following *pdes* and *stencils*:
```
if(habctype==2):
# Region B_{1}
cte11 = (1/(2*dt**2)) + (1/(2*dt*hx))*vel[x,z]
cte21 = -(1/(2*dt**2)) + (1/(2*dt*hx))*vel[x,z] - (1/(2*hz**2))*vel[x,z]*vel[x,z]
cte31 = -(1/(2*dt**2)) - (1/(2*dt*hx))*vel[x,z]
cte41 = (1/(dt**2))
cte51 = (1/(4*hz**2))*vel[x,z]**2
aux1 = (cte21*(u3[x+1,z] + u1[x,z]) + cte31*u1[x+1,z] + cte41*(u2[x,z]+u2[x+1,z]) + cte51*(u3[x+1,z+1] + u3[x+1,z-1] + u1[x,z+1] + u1[x,z-1]))/cte11
pde1 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux1
stencil1 = Eq(u.forward,pde1,subdomain = grid.subdomains['d1'])
# Region B_{3}
cte12 = (1/(2*dt**2)) + (1/(2*dt*hx))*vel[x,z]
cte22 = -(1/(2*dt**2)) + (1/(2*dt*hx))*vel[x,z] - (1/(2*hz**2))*vel[x,z]**2
cte32 = -(1/(2*dt**2)) - (1/(2*dt*hx))*vel[x,z]
cte42 = (1/(dt**2))
cte52 = (1/(4*hz**2))*vel[x,z]*vel[x,z]
aux2 = (cte22*(u3[x-1,z] + u1[x,z]) + cte32*u1[x-1,z] + cte42*(u2[x,z]+u2[x-1,z]) + cte52*(u3[x-1,z+1] + u3[x-1,z-1] + u1[x,z+1] + u1[x,z-1]))/cte12
pde2 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux2
stencil2 = Eq(u.forward,pde2,subdomain = grid.subdomains['d2'])
# Region B_{2}
cte13 = (1/(2*dt**2)) + (1/(2*dt*hz))*vel[x,z]
cte23 = -(1/(2*dt**2)) + (1/(2*dt*hz))*vel[x,z] - (1/(2*hx**2))*vel[x,z]**2
cte33 = -(1/(2*dt**2)) - (1/(2*dt*hz))*vel[x,z]
cte43 = (1/(dt**2))
cte53 = (1/(4*hx**2))*vel[x,z]*vel[x,z]
aux3 = (cte23*(u3[x,z-1] + u1[x,z]) + cte33*u1[x,z-1] + cte43*(u2[x,z]+u2[x,z-1]) + cte53*(u3[x+1,z-1] + u3[x-1,z-1] + u1[x+1,z] + u1[x-1,z]))/cte13
pde3 = (1-weightsz[x,z])*u3[x,z] + weightsz[x,z]*aux3
stencil3 = Eq(u.forward,pde3,subdomain = grid.subdomains['d3'])
# Red point right side
stencil4 = [Eq(u[t+1,nptx-1-k,nptz-1-k],(1-weightsz[nptx-1-k,nptz-1-k])*u3[nptx-1-k,nptz-1-k] +
weightsz[nptx-1-k,nptz-1-k]*(((-(1/(4*hx)) + (1/(4*hz)) - (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u3[nptx-1-k,nptz-2-k]
+ ((1/(4*hx)) - (1/(4*hz)) - (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u3[nptx-2-k,nptz-1-k]
+ ((1/(4*hx)) + (1/(4*hz)) - (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u3[nptx-2-k,nptz-2-k]
+ (-(1/(4*hx)) - (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u2[nptx-1-k,nptz-1-k]
+ (-(1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u2[nptx-1-k,nptz-2-k]
+ ((1/(4*hx)) - (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u2[nptx-2-k,nptz-1-k]
+ ((1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u2[nptx-2-k,nptz-2-k])
/ (((1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))))) for k in range(0,npmlz)]
# Red point left side
stencil5 = [Eq(u[t+1,k,nptz-1-k],(1-weightsx[k,nptz-1-k] )*u3[k,nptz-1-k]
+ weightsx[k,nptz-1-k]*(( (-(1/(4*hx)) + (1/(4*hz)) - (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u3[k,nptz-2-k]
+ ((1/(4*hx)) - (1/(4*hz)) - (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u3[k+1,nptz-1-k]
+ ((1/(4*hx)) + (1/(4*hz)) - (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u3[k+1,nptz-2-k]
+ (-(1/(4*hx)) - (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u2[k,nptz-1-k]
+ (-(1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u2[k,nptz-2-k]
+ ((1/(4*hx)) - (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u2[k+1,nptz-1-k]
+ ((1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u2[k+1,nptz-2-k])
/ (((1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))))) for k in range(0,npmlx)]
```
For the Higdon case we have the following *pdes* and *stencils*:
```
if(habctype==3):
alpha1 = 0.0
alpha2 = np.pi/4
a1 = 0.5
b1 = 0.5
a2 = 0.5
b2 = 0.5
# Region B_{1}
gama111 = np.cos(alpha1)*(1-a1)*(1/dt)
gama121 = np.cos(alpha1)*(a1)*(1/dt)
gama131 = np.cos(alpha1)*(1-b1)*(1/hx)*vel[x,z]
gama141 = np.cos(alpha1)*(b1)*(1/hx)*vel[x,z]
gama211 = np.cos(alpha2)*(1-a2)*(1/dt)
gama221 = np.cos(alpha2)*(a2)*(1/dt)
gama231 = np.cos(alpha2)*(1-b2)*(1/hx)*vel[x,z]
gama241 = np.cos(alpha2)*(b2)*(1/hx)*vel[x,z]
c111 = gama111 + gama131
c121 = -gama111 + gama141
c131 = gama121 - gama131
c141 = -gama121 - gama141
c211 = gama211 + gama231
c221 = -gama211 + gama241
c231 = gama221 - gama231
c241 = -gama221 - gama241
aux1 = ( u2[x,z]*(-c111*c221-c121*c211) + u3[x+1,z]*(-c111*c231-c131*c211) + u2[x+1,z]*(-c111*c241-c121*c231-c141*c211-c131*c221)
+ u1[x,z]*(-c121*c221) + u1[x+1,z]*(-c121*c241-c141*c221) + u3[x+2,z]*(-c131*c231) +u2[x+2,z]*(-c131*c241-c141*c231)
+ u1[x+2,z]*(-c141*c241))/(c111*c211)
pde1 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux1
stencil1 = Eq(u.forward,pde1,subdomain = grid.subdomains['d1'])
# Region B_{3}
gama112 = np.cos(alpha1)*(1-a1)*(1/dt)
gama122 = np.cos(alpha1)*(a1)*(1/dt)
gama132 = np.cos(alpha1)*(1-b1)*(1/hx)*vel[x,z]
gama142 = np.cos(alpha1)*(b1)*(1/hx)*vel[x,z]
gama212 = np.cos(alpha2)*(1-a2)*(1/dt)
gama222 = np.cos(alpha2)*(a2)*(1/dt)
gama232 = np.cos(alpha2)*(1-b2)*(1/hx)*vel[x,z]
gama242 = np.cos(alpha2)*(b2)*(1/hx)*vel[x,z]
c112 = gama112 + gama132
c122 = -gama112 + gama142
c132 = gama122 - gama132
c142 = -gama122 - gama142
c212 = gama212 + gama232
c222 = -gama212 + gama242
c232 = gama222 - gama232
c242 = -gama222 - gama242
aux2 = ( u2[x,z]*(-c112*c222-c122*c212) + u3[x-1,z]*(-c112*c232-c132*c212) + u2[x-1,z]*(-c112*c242-c122*c232-c142*c212-c132*c222)
+ u1[x,z]*(-c122*c222) + u1[x-1,z]*(-c122*c242-c142*c222) + u3[x-2,z]*(-c132*c232) +u2[x-2,z]*(-c132*c242-c142*c232)
+ u1[x-2,z]*(-c142*c242))/(c112*c212)
pde2 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux2
stencil2 = Eq(u.forward,pde2,subdomain = grid.subdomains['d2'])
# Region B_{2}
gama113 = np.cos(alpha1)*(1-a1)*(1/dt)
gama123 = np.cos(alpha1)*(a1)*(1/dt)
gama133 = np.cos(alpha1)*(1-b1)*(1/hz)*vel[x,z]
gama143 = np.cos(alpha1)*(b1)*(1/hz)*vel[x,z]
gama213 = np.cos(alpha2)*(1-a2)*(1/dt)
gama223 = np.cos(alpha2)*(a2)*(1/dt)
gama233 = np.cos(alpha2)*(1-b2)*(1/hz)*vel[x,z]
gama243 = np.cos(alpha2)*(b2)*(1/hz)*vel[x,z]
c113 = gama113 + gama133
c123 = -gama113 + gama143
c133 = gama123 - gama133
c143 = -gama123 - gama143
c213 = gama213 + gama233
c223 = -gama213 + gama243
c233 = gama223 - gama233
c243 = -gama223 - gama243
aux3 = ( u2[x,z]*(-c113*c223-c123*c213) + u3[x,z-1]*(-c113*c233-c133*c213) + u2[x,z-1]*(-c113*c243-c123*c233-c143*c213-c133*c223)
+ u1[x,z]*(-c123*c223) + u1[x,z-1]*(-c123*c243-c143*c223) + u3[x,z-2]*(-c133*c233) +u2[x,z-2]*(-c133*c243-c143*c233)
+ u1[x,z-2]*(-c143*c243))/(c113*c213)
pde3 = (1-weightsz[x,z])*u3[x,z] + weightsz[x,z]*aux3
stencil3 = Eq(u.forward,pde3,subdomain = grid.subdomains['d3'])
```
The surface boundary conditions of the problem are the same as in the notebook <a href="01_introduction.ipynb">Introduction to Acoustic Problem</a>. They are placed in the term *bc* and have the following form:
```
bc = [Eq(u[t+1,x,0],u[t+1,x,1])]
```
We will then define the operator (*op*) that will join the acoustic equation, source term, boundary conditions and receivers.
- 1. The acoustic wave equation in the *d0* region: *[stencil0];*
- 2. Source term: *src_term;*
- 3. Updating solutions over time: *[stencil01,stencil02];*
- 4. The acoustic wave equation in the *d1*, *d2* and *d3* regions: *[stencil1,stencil2,stencil3];*
- 5. The equation for red points for the A2 method: *[stencil5,stencil4];*
- 6. Boundary conditions: *bc;*
- 7. Receivers: *rec_term;*
We then define two types of *op*:
- The first *op* is for the cases A1 and Higdon;
- The second *op* is for the case A2;
The *ops* are constructed by the following commands:
```
# NBVAL_IGNORE_OUTPUT
if(habctype!=2):
op = Operator([stencil0] + src_term + [stencil01,stencil3,stencil02,stencil2,stencil1] + bc + rec_term,subs=grid.spacing_map)
else:
op = Operator([stencil0] + src_term + [stencil01,stencil3,stencil02,stencil2,stencil1,stencil02,stencil4,stencil5] + bc + rec_term,subs=grid.spacing_map)
```
Initially, all fields are set to zero:
```
u.data[:] = 0.
u1.data[:] = 0.
u2.data[:] = 0.
u3.data[:] = 0.
```
We assign to *op* the number of time steps it must execute and the size of the time step in the local variables *time* and *dt*, respectively.
```
# NBVAL_IGNORE_OUTPUT
op(time=nt,dt=dt0)
```
We view the result of the displacement field at the end time using the *graph2d* routine given by:
```
def graph2d(U,i):
plot.figure()
plot.figure(figsize=(16,8))
fscale = 1/10**(3)
x0pml = x0 + npmlx*hxv
x1pml = x1 - npmlx*hxv
z0pml = z0
z1pml = z1 - npmlz*hzv
scale = np.amax(U[npmlx:-npmlx,0:-npmlz])/10.
extent = [fscale*x0pml,fscale*x1pml,fscale*z1pml,fscale*z0pml]
fig = plot.imshow(np.transpose(U[npmlx:-npmlx,0:-npmlz]),vmin=-scale, vmax=scale, cmap=cm.seismic, extent=extent)
plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.axis('equal')
if(i==1): plot.title('Map - Acoustic Problem with Devito - HABC A1')
if(i==2): plot.title('Map - Acoustic Problem with Devito - HABC A2')
if(i==3): plot.title('Map - Acoustic Problem with Devito - HABC Higdon')
plot.grid()
ax = plot.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plot.colorbar(fig, cax=cax, format='%.2e')
cbar.set_label('Displacement [km]')
plot.draw()
plot.show()
# NBVAL_IGNORE_OUTPUT
graph2d(u.data[0,:,:],habctype)
```
We plot the Receivers shot records using the *graph2drec* routine.
```
def graph2drec(rec,i):
plot.figure()
plot.figure(figsize=(16,8))
fscaled = 1/10**(3)
fscalet = 1/10**(3)
x0pml = x0 + npmlx*hxv
x1pml = x1 - npmlx*hxv
scale = np.amax(rec[:,npmlx:-npmlx])/10.
extent = [fscaled*x0pml,fscaled*x1pml, fscalet*tn, fscalet*t0]
fig = plot.imshow(rec[:,npmlx:-npmlx], vmin=-scale, vmax=scale, cmap=cm.seismic, extent=extent)
plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f s'))
plot.axis('equal')
if(i==1): plot.title('Receivers Signal Profile - Devito with HABC A1')
if(i==2): plot.title('Receivers Signal Profile - Devito with HABC A2')
if(i==3): plot.title('Receivers Signal Profile - Devito with HABC Higdon')
ax = plot.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plot.colorbar(fig, cax=cax, format='%.2e')
plot.show()
# NBVAL_IGNORE_OUTPUT
graph2drec(rec.data,habctype)
assert np.isclose(np.linalg.norm(rec.data), 990, rtol=1)
```
# 4.7 - Conclusions
We have presented the HABC method for the acoustic wave equation, which can be used with any of the absorbing boundary conditions A1, A2 or Higdon. The notebook also includes the possibility of using these boundary conditions alone, without combining them with the HABC. The user can test several combinations of parameters and observe their effects on the absorption of spurious reflections at the computational boundaries.
The relevant references for the boundary conditions are given next.
## 4.8 - References
- Clayton, R., & Engquist, B. (1977). "Absorbing boundary conditions for acoustic and elastic wave equations", Bulletin of the seismological society of America, 67(6), 1529-1540. <a href="https://pubs.geoscienceworld.org/ssa/bssa/article/67/6/1529/117727?casa_token=4TvjJGJDLQwAAAAA:Wm-3fVLn91tdsdHv9H6Ek7tTQf0jwXVSF10zPQL61lXtYZhaifz7jsHxqTvrHPufARzZC2-lDw">Reference Link.</a>
- Engquist, B., & Majda, A. (1979). "Radiation boundary conditions for acoustic and elastic wave calculations," Communications on pure and applied mathematics, 32(3), 313-357. DOI: 10.1137/0727049. <a href="https://epubs.siam.org/doi/abs/10.1137/0727049">Reference Link.</a>
- Higdon, R. L. (1987). "Absorbing boundary conditions for difference approximations to the multidimensional wave equation," Mathematics of computation, 47(176), 437-459. DOI: 10.1090/S0025-5718-1986-0856696-4. <a href="https://www.ams.org/journals/mcom/1986-47-176/S0025-5718-1986-0856696-4/">Reference Link.</a>
- Higdon, Robert L. "Numerical absorbing boundary conditions for the wave equation," Mathematics of computation, v. 49, n. 179, p. 65-90, 1987. DOI: 10.1090/S0025-5718-1987-0890254-1. <a href="https://www.ams.org/journals/mcom/1987-49-179/S0025-5718-1987-0890254-1/">Reference Link.</a>
- Liu, Y., & Sen, M. K. (2018). "An improved hybrid absorbing boundary condition for wave equation modeling," Journal of Geophysics and Engineering, 15(6), 2602-2613. DOI: 10.1088/1742-2140/aadd31. <a href="https://academic.oup.com/jge/article/15/6/2602/5209803">Reference Link.</a>
### Convert 9-band CRs to 5 bands
```
#==========================================
# Gain to compression ratio (CR) conversion
# Author: Nasim Alamdari
# Date: Dec. 2020
#==========================================
import numpy as np
# Example:
# Audiogram = [10, 10, 20,20,25,30,35,40,40]
# Soft gains = [4.0, 3.0, 11.0, 10.0, 12.0, 20.0, 23.0, 23.0, 20.0]
# Moderate gains = [2.0, 2.0, 10.0, 9.0, 12.0, 21.0, 22.0, 21.0, 18.0]
# Loud gains = [1.0, 0.0, 6.0, 6.0, 7.0, 16.0, 18.0, 15.0, 13.0]
# Hearing aid type: BTE, Foam eartip
DSLv5_S_G = [4.0, 3.0, 11.0, 10.0, 12.0, 20.0, 23.0, 23.0, 20.0]
ModerateG = [2.0, 2.0, 10.0, 9.0, 12.0, 21.0, 22.0, 21.0, 18.0]
LoudG = [1.0, 0.0, 6.0, 6.0, 7.0, 16.0, 18.0, 15.0, 13.0]
CT_m = 60.0
CT_L = 80.0
RelT = 1000e-3; # Release time (sec)
AttT = 1e-2; # Attack time (sec)
def gain_to_compressionRatio(gains_m, gains_L, CT_m, CT_L):
DSLv5_Moderate_Gains = gains_m
DSLv5_Loud_Gains = gains_L
CT_moderate = CT_m
CT_loud = CT_L
y1_b1 = CT_moderate + DSLv5_Moderate_Gains[0]
y1_b2 = CT_moderate + DSLv5_Moderate_Gains[1]
y1_b3 = CT_moderate + DSLv5_Moderate_Gains[2]
y1_b4 = CT_moderate + DSLv5_Moderate_Gains[3]
y1_b5 = CT_moderate + DSLv5_Moderate_Gains[4]
y2_b1 = CT_loud + DSLv5_Loud_Gains[0]
y2_b2 = CT_loud + DSLv5_Loud_Gains[1]
y2_b3 = CT_loud + DSLv5_Loud_Gains[2]
y2_b4 = CT_loud + DSLv5_Loud_Gains[3]
y2_b5 = CT_loud + DSLv5_Loud_Gains[4]
diff_1 = y2_b1 - y1_b1
diff_2 = y2_b2 - y1_b2
diff_3 = y2_b3 - y1_b3
diff_4 = y2_b4 - y1_b4
diff_5 = y2_b5 - y1_b5
CR1 = np.ceil(10* ( CT_loud-CT_moderate)/ ( diff_1 ) )/10
CR2 = np.ceil(10* ( CT_loud-CT_moderate)/ ( diff_2 ) )/10
CR3 = np.ceil(10* ( CT_loud-CT_moderate)/ ( diff_3 ) )/10
CR4 = np.ceil(10* ( CT_loud-CT_moderate)/ ( diff_4 ) )/10
CR5 = np.ceil(10* ( CT_loud-CT_moderate)/ ( diff_5 ) )/10
# Clamp ratios below 1.0 (or infinite, from a zero gain difference) to 1.0
if CR1 < 1.0 or np.isinf(CR1):
CR1 = 1.0
if CR2 < 1.0 or np.isinf(CR2):
CR2 = 1.0
if CR3 < 1.0 or np.isinf(CR3):
CR3 = 1.0
if CR4 < 1.0 or np.isinf(CR4):
CR4 = 1.0
if CR5 < 1.0 or np.isinf(CR5):
CR5 = 1.0
Cr = [CR1, CR2, CR3, CR4, CR5]
return Cr
Cr = gain_to_compressionRatio (np.double(ModerateG), np.double(LoudG), np.double(CT_m), np.double(CT_L))
INITIAL_CRs = Cr
# Compute 5-band soft gains by averaging the 9 bands
soft_Gains = [np.ceil((DSLv5_S_G[0]+DSLv5_S_G[1]+DSLv5_S_G[2])/3),
np.ceil((DSLv5_S_G[3]+DSLv5_S_G[4])/2),
DSLv5_S_G[5],
np.ceil((DSLv5_S_G[6]+DSLv5_S_G[7])/2),
DSLv5_S_G[8] ];
print("INITIAL_CRs = ", INITIAL_CRs)
print("soft_Gains = ", soft_Gains)
```
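As a hedged worked example of the formula inside *gain_to_compressionRatio*: for band 1, the moderate-level output is 60 + 2 = 62 dB and the loud-level output is 80 + 1 = 81 dB, so the compression ratio rounds up to CR1 = ceil(10 * 20 / 19) / 10 = 1.1:
```
import numpy as np

# Hedged worked example: reproducing the band-1 compression ratio by hand.
CT_m, CT_L = 60.0, 80.0          # moderate / loud compression thresholds
g_moderate, g_loud = 2.0, 1.0    # band-1 moderate / loud gains
y_moderate = CT_m + g_moderate   # 62.0 dB output at the moderate level
y_loud = CT_L + g_loud           # 81.0 dB output at the loud level
diff = y_loud - y_moderate       # 19.0
CR1 = np.ceil(10 * (CT_L - CT_m) / diff) / 10   # ceil of 10.52..., then /10
print(CR1)  # 1.1
```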
# TV Script Generation
In this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data.
## Get the Data
The data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text.
>* As a first step, we'll load in this data and look at some samples.
* Then, you'll be tasked with defining and training an RNN to generate a new script!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
```
## Explore the Data
Play around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
```
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
```
---
## Implement Pre-processing Functions
The first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:
- Lookup Table
- Tokenize Punctuation
### Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call `vocab_to_int`
- Dictionary to go from the id to word, we'll call `int_to_vocab`
Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
```
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
# unique words
vocab = set(text)
counts = Counter(text)
# sort so that the most frequently occurring word gets the lowest index
vocab = sorted(counts, key=counts.get, reverse=True)
# print(vocab)
# word_to_ix, ix_to_word
vocab_to_int, int_to_vocab = {}, {}
for i, word in enumerate(vocab):
vocab_to_int[word] = i
int_to_vocab[i] = word
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
```
### Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.
Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( **.** )
- Comma ( **,** )
- Quotation Mark ( **"** )
- Semicolon ( **;** )
- Exclamation mark ( **!** )
- Question mark ( **?** )
- Left Parentheses ( **(** )
- Right Parentheses ( **)** )
- Dash ( **-** )
- Return ( **\n** )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
```
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_mark||',
'?': '||Question_mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'-': '||Dash||',
'\n': '||Return||'
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
```
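As a quick illustration (not part of the graded cells), applying a token map like the one above before splitting on spaces keeps "bye" and "bye!" mapped to the same word, which is exactly the problem described earlier. The two-symbol dict here is a trimmed-down stand-in for the full `token_lookup` result:

```python
# Hypothetical mini-version of the token dict, just to show the effect;
# the notebook itself uses the full token_lookup() mapping.
tokens = {'.': '||Period||', '!': '||Exclamation_mark||'}

sample = "bye! bye."
for symbol, token in tokens.items():
    # surround each symbol with spaces so split() treats it as its own word
    sample = sample.replace(symbol, ' ' + token + ' ')

words = sample.split()
print(words)  # ['bye', '||Exclamation_mark||', 'bye', '||Period||']
```

Both occurrences of "bye" now produce the same vocabulary entry, with the punctuation carried as a separate token.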
## Pre-process all the data and save it
Running the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
```
## Build the Neural Network
In this section, you'll implement the components necessary to build an RNN: the RNN module itself plus the forward and backpropagation functions.
### Check Access to GPU
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
```
## Input
Let's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.
You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.
```
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size)
```
### Batching
Implement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.
>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.
For example, say we have these as input:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
```
Your first `feature_tensor` should contain the values:
```
[1, 2, 3, 4]
```
And the corresponding `target_tensor` should just be the next "word"/tokenized word value:
```
5
```
This should continue with the second `feature_tensor`, `target_tensor` being:
```
[2, 3, 4, 5] # features
6 # target
```
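The sliding-window pairing described above can be sketched in plain Python (no torch needed yet):

```python
# Build (feature, target) pairs by sliding a window of length
# sequence_length over the word ids; the target is the word that
# immediately follows each window.
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

features = [words[i:i + sequence_length]
            for i in range(len(words) - sequence_length)]
targets = [words[i + sequence_length]
           for i in range(len(words) - sequence_length)]

print(features)  # [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
print(targets)   # [5, 6, 7]
```

The implementation below does the same thing, then wraps the resulting tensors in a `TensorDataset` and `DataLoader`.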
```
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = len(words) // batch_size
words = words[:n_batches*batch_size]
features, targets = [], []
for idx in range(0, (len(words) - sequence_length)):
features.append(words[idx : idx+sequence_length])
targets.append(words[idx + sequence_length])
feature_tensors = torch.from_numpy(np.asarray(features))
target_tensors = torch.from_numpy(np.asarray(targets))
data = TensorDataset(feature_tensors, target_tensors)
data_loader = DataLoader(data, batch_size=batch_size)
return data_loader
# return a dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
```
### Test your dataloader
You'll have to modify this code to test a batching function, but it should look fairly similar.
Below, we generate some test text data and define a dataloader using the function you wrote above. Then, we get a sample batch of inputs `sample_x` and targets `sample_y` from the dataloader.
Your code should return something like the following (likely in a different order, if you shuffled your data):
```
torch.Size([10, 5])
tensor([[ 28, 29, 30, 31, 32],
[ 21, 22, 23, 24, 25],
[ 17, 18, 19, 20, 21],
[ 34, 35, 36, 37, 38],
[ 11, 12, 13, 14, 15],
[ 23, 24, 25, 26, 27],
[ 6, 7, 8, 9, 10],
[ 38, 39, 40, 41, 42],
[ 25, 26, 27, 28, 29],
[ 7, 8, 9, 10, 11]])
torch.Size([10])
tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])
```
### Sizes
Your sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10).
### Values
You should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
```
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
```
---
## Build the Neural Network
Implement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class:
- `__init__` - The initialize function.
- `init_hidden` - The initialization function for an LSTM/GRU hidden state
- `forward` - Forward propagation function.
The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.
**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.
### Hints
1. Make sure to stack the outputs of the LSTM to pass to your fully-connected layer; you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`
2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:
```
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
```
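The reshape-and-slice trick in hint 2 can be checked with NumPy alone, since NumPy's `reshape` and indexing mirror the torch `view`/indexing semantics used here. This is just a shape sanity check, not project code:

```python
import numpy as np

# stand-in sizes, chosen arbitrarily for illustration
batch_size, seq_length, output_size = 4, 6, 10

# stand-in for the stacked fully-connected output: one row per time step
fc_out = np.zeros((batch_size * seq_length, output_size))

# reshape into (batch_size, seq_length, output_size), then keep only the
# scores produced after the last word of each sequence
output = fc_out.reshape(batch_size, -1, output_size)
out = output[:, -1]

print(out.shape)  # (4, 10) — one row of word scores per input sequence
```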
```
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param n_layers: The number of LSTM/GRU layers
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
self.output_size = output_size
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.embeds = nn.Embedding(vocab_size, embedding_dim)
# set class variables
# define model layers
self.lstm = nn.LSTM(embedding_dim, hidden_dim,
n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embeds = self.embeds(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs (convert the output of lstm layer (lstm_out) into a single vector)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
output = self.fc(lstm_out)
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
# return one batch of output word scores and the hidden state
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
# initialize hidden state with zero weights, and move to GPU if available
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
```
### Define forward and backpropagation
Use the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:
```
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inp, target, hidden)
```
And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.
**If a GPU is available, you should move your data to that GPU device, here.**
```
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:param hidden: The incoming hidden state
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if(train_on_gpu):
rnn.cuda()
inp, target = inp.cuda(), target.cuda()
rnn.zero_grad()
# detach the hidden state from its history so we don't backprop through the entire dataset
h = tuple([each.data for each in hidden])
# perform the forward pass, then backpropagation and optimization
output, h = rnn(inp, h)
loss = criterion(output, target)
loss.backward()
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
```
## Neural Network Training
With the network structure complete and the data ready to be fed into it, it's time to train.
### Train Loop
The training loop is implemented for you in the `train_rnn` function. This function trains the network over all the batches for the given number of epochs, printing model progress every `show_every_n_batches` batches. You'll set this parameter along with other parameters in the next section.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
```
### Hyperparameters
Set and train the neural network with the following parameters:
- Set `sequence_length` to the length of a sequence.
- Set `batch_size` to the batch size.
- Set `num_epochs` to the number of epochs to train for.
- Set `learning_rate` to the learning rate for an Adam optimizer.
- Set `vocab_size` to the number of unique tokens in our vocabulary.
- Set `output_size` to the desired size of the output.
- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.
- Set `hidden_dim` to the hidden dimension of your RNN.
- Set `n_layers` to the number of layers/cells in your RNN.
- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.
If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
```
# Data params
# Sequence Length
sequence_length = 12  # number of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 15
# Learning Rate
learning_rate = 0.002
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
```
### Train
In the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, consider changing your hyperparameters. In general, a larger hidden_dim and more layers may give better results, but larger models take longer to train.
> **You should aim for a loss less than 3.5.**
You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
```
### Question: How did you decide on your model hyperparameters?
For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?
**Answer:** I tried several hyperparameter combinations, varying the learning rate, embedding_dim, batch_size, and hidden_dim, and got the best results with this configuration.
A lower loss could likely be achieved by exploring other hyperparameters further.
For the embedding, most references suggest dimensions between 100 and 300, so I chose 256 for embedding_dim as well as for hidden_dim.
---
# Checkpoint
After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
```
## Generate TV Script
With the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section.
### Generate Text
To generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses top-k sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
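The top-k sampling idea can be sketched on its own with NumPy (the cell below does the equivalent with torch tensors); the probability values here are made up for illustration:

```python
import numpy as np

# softmax word probabilities for a tiny 5-word vocabulary (made-up values)
scores = np.array([0.05, 0.40, 0.10, 0.30, 0.15])
top_k = 3

top_i = np.argsort(scores)[-top_k:]   # indices of the k most likely words
p = scores[top_i]
word_i = np.random.choice(top_i, p=p / p.sum())  # renormalize, then sample

print(sorted(top_i.tolist()))    # [1, 3, 4]
print(int(word_i) in {1, 3, 4})  # True — the sample always comes from the top k
```

Restricting the choice to the k highest-scoring words keeps the output coherent while still varying between runs.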
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
```
### Generate a New Script
It's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:
- "jerry"
- "elaine"
- "george"
- "kramer"
You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
```
# run the cell multiple times to get different results!
gen_length = 1000 # modify the length to your preference
prime_word = 'george' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
```
#### Save your favorite scripts
Once you have a script that you like (or find interesting), save it to a text file!
```
# save script to a text file
with open("generated_script_1.txt", "w") as f:
    f.write(generated_script)
```
# The TV Script is Not Perfect
It's OK if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue; here is one example of a few generated lines.
### Example generated script
>jerry: what about me?
>
>jerry: i don't have to wait.
>
>kramer:(to the sales table)
>
>elaine:(to jerry) hey, look at this, i'm a good doctor.
>
>newman:(to elaine) you think i have no idea of this...
>
>elaine: oh, you better take the phone, and he was a little nervous.
>
>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.
>
>jerry: oh, yeah. i don't even know, i know.
>
>jerry:(to the phone) oh, i know.
>
>kramer:(laughing) you know...(to jerry) you don't know.
You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally.
# Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission.