# CA Coronavirus Cases and Deaths Trends
CA's [Blueprint for a Safer Economy](https://www.cdph.ca.gov/Programs/CID/DCDC/Pages/COVID-19/COVID19CountyMonitoringOverview.aspx) assigns each county [to a tier](https://www.cdph.ca.gov/Programs/CID/DCDC/Pages/COVID-19/COVID19CountyMonitoringOverview.aspx) based on its case rate and test positivity rate. See what's open or closed [under each tier](https://www.cdph.ca.gov/Programs/CID/DCDC/CDPH%20Document%20Library/COVID-19/Dimmer-Framework-September_2020.pdf).
The tiers, from most severe to least severe, categorize coronavirus spread as <strong><span style='color:#6B1F84'>widespread; </span></strong>
<strong><span style='color:#F3324C'>substantial; </span></strong><strong><span style='color:#F7AE1D'>moderate; </span></strong><strong><span style='color:#D0E700'>or minimal.</span></strong>
**A county must remain in its current tier for at least 3 consecutive weeks, and its metrics for the most recent 2 consecutive weeks must fall into the less restrictive tier, before it can move to that tier.**
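The movement rule above can be sketched as a small helper (hypothetical code, not part of this report's pipeline; the actual tier assignments are made by CDPH):

```python
def may_move_to_less_restrictive_tier(weeks_in_current_tier: int,
                                      weekly_metrics_qualify: list) -> bool:
    """weekly_metrics_qualify: per-week booleans, True when that week's
    metrics fall into the less restrictive tier (most recent week last)."""
    return (weeks_in_current_tier >= 3
            and len(weekly_metrics_qualify) >= 2
            and all(weekly_metrics_qualify[-2:]))

print(may_move_to_less_restrictive_tier(3, [False, True, True]))  # True
print(may_move_to_less_restrictive_tier(2, [True, True]))         # False: not 3 weeks yet
```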
We label *only* the case charts with each county's population-adjusted tier cut-offs.
**Related daily reports:**
1. **[US counties report on cases and deaths for select major cities](https://cityoflosangeles.github.io/covid19-indicators/us-county-trends.html)**
1. **[Los Angeles County, detailed indicators](https://cityoflosangeles.github.io/covid19-indicators/coronavirus-stats.html)**
1. **[Los Angeles County neighborhoods report on cases and deaths](https://cityoflosangeles.github.io/covid19-indicators/la-neighborhoods-trends.html)**
Code available in GitHub: [https://github.com/CityOfLosAngeles/covid19-indicators](https://github.com/CityOfLosAngeles/covid19-indicators)
<br>
Get informed with [public health research](https://github.com/CityOfLosAngeles/covid19-indicators/blob/master/reopening-sources.md)
```
import altair as alt
import altair_saver
import geopandas as gpd
import os
import pandas as pd
from processing_utils import default_parameters
from processing_utils import make_charts
from processing_utils import make_maps
from processing_utils import neighborhood_utils
from processing_utils import us_county_utils
from processing_utils import utils
from datetime import date, datetime, timedelta
from IPython.display import display_html, Markdown, HTML, Image
# For map
import branca.colormap
import ipywidgets
# Suppress a warning about CRS projections
import warnings
warnings.filterwarnings("ignore")
# Default parameters
time_zone = default_parameters.time_zone
start_date = datetime(2021, 3, 1).date()
today_date = default_parameters.today_date
fulldate_format = default_parameters.fulldate_format
#alt.renderers.enable('html')
STATE = "CA"
jhu = us_county_utils.clean_jhu(start_date)
jhu = jhu[jhu.state_abbrev==STATE]
hospitalizations = us_county_utils.clean_hospitalizations(start_date)
vaccinations = utils.clean_vaccines_by_county()
vaccinations_demog = utils.clean_vaccines_by_demographics()
ca_counties = list(jhu[jhu.state_abbrev==STATE].county.unique())
# Put LA county first
ca_counties.remove("Los Angeles")
ca_counties = ["Los Angeles"] + ca_counties
data_through = jhu.date.max()
display(Markdown(
f"Report updated: {default_parameters.today_date.strftime(fulldate_format)}; "
f"data available through {data_through.strftime(fulldate_format)}."
)
)
title_font_size = 9
def plot_charts(cases_df, hospital_df, vaccine_df, vaccine_demog_df, county_name):
    cases_df = cases_df[cases_df.county == county_name]
    hospital_df = hospital_df[hospital_df.county == county_name]
    vaccine_df = vaccine_df[vaccine_df.county == county_name]
    vaccine_df2 = vaccine_demog_df[vaccine_demog_df.county == county_name]
    name = cases_df.county.iloc[0]

    cases_chart, deaths_chart = make_charts.setup_cases_deaths_chart(cases_df, "county", name)
    hospitalizations_chart = make_charts.setup_county_covid_hospital_chart(
        hospital_df.drop(columns="date"), county_name)
    vaccines_type_chart = make_charts.setup_county_vaccination_doses_chart(vaccine_df, county_name)
    vaccines_pop_chart = make_charts.setup_county_vaccinated_population_chart(vaccine_df, county_name)
    vaccines_age_chart = make_charts.setup_county_vaccinated_category(vaccine_df2, county_name, category="Age Group")

    outbreak_chart = (alt.hconcat(
        cases_chart,
        deaths_chart,
        make_charts.add_tooltip(hospitalizations_chart, "hospitalizations")
        ).configure_concat(spacing=50)
    )
    # https://stackoverflow.com/questions/60328943/how-to-display-two-different-legends-in-hconcat-chart-using-altair
    vaccines_chart = (alt.hconcat(
        make_charts.add_tooltip(vaccines_type_chart, "vaccines_type"),
        make_charts.add_tooltip(vaccines_pop_chart, "vaccines_pop"),
        make_charts.add_tooltip(vaccines_age_chart, "vaccines_age"),
        ).resolve_scale(color="independent")
        .configure_view(stroke=None)
        .configure_concat(spacing=0)
    )

    outbreak_chart = (make_charts.configure_chart(outbreak_chart)
                      .configure_title(fontSize=title_font_size)
    )
    vaccines_chart = (make_charts.configure_chart(vaccines_chart)
                      .configure_title(fontSize=title_font_size)
    )

    county_state_name = county_name + f", {STATE}"
    display(Markdown(f"#### {county_state_name}"))
    try:
        us_county_utils.county_caption(cases_df, county_name)
    except Exception:
        # Caption data can be missing for some counties; skip it rather than fail.
        pass
    us_county_utils.ca_hospitalizations_caption(hospital_df, county_name)
    us_county_utils.ca_vaccinations_caption(vaccine_df, county_name)
    make_charts.show_svg(outbreak_chart)
    make_charts.show_svg(vaccines_chart)

display(Markdown("<strong>Cases chart, explained</strong>"))
Image("../notebooks/chart_parts_explained.png", width=700)
```
<a id='counties_by_region'></a>
## Counties by Region
<strong>Superior California Region: </strong> [Butte](#Butte), Colusa,
[El Dorado](#El-Dorado),
Glenn,
[Lassen](#Lassen), Modoc,
[Nevada](#Nevada),
[Placer](#Placer), Plumas,
[Sacramento](#Sacramento),
[Shasta](#Shasta), Sierra, Siskiyou,
[Sutter](#Sutter),
[Tehama](#Tehama),
[Yolo](#Yolo),
[Yuba](#Yuba)
<br>
<strong>North Coast:</strong> [Del Norte](#Del-Norte),
[Humboldt](#Humboldt),
[Lake](#Lake),
[Mendocino](#Mendocino),
[Napa](#Napa),
[Sonoma](#Sonoma), Trinity
<br>
<strong>San Francisco Bay Area:</strong> [Alameda](#Alameda),
[Contra Costa](#Contra-Costa),
[Marin](#Marin),
[San Francisco](#San-Francisco),
[San Mateo](#San-Mateo),
[Santa Clara](#Santa-Clara),
[Solano](#Solano)
<br>
<strong>Northern San Joaquin Valley:</strong> Alpine, Amador, Calaveras,
[Madera](#Madera), Mariposa,
[Merced](#Merced),
Mono,
[San Joaquin](#San-Joaquin),
[Stanislaus](#Stanislaus),
[Tuolumne](#Tuolumne)
<br>
<strong>Central Coast:</strong> [Monterey](#Monterey),
[San Benito](#San-Benito),
[San Luis Obispo](#San-Luis-Obispo),
[Santa Barbara](#Santa-Barbara),
[Santa Cruz](#Santa-Cruz),
[Ventura](#Ventura)
<br>
<strong>Southern San Joaquin Valley:</strong> [Fresno](#Fresno),
Inyo,
[Kern](#Kern),
[Kings](#Kings),
[Tulare](#Tulare)
<br>
<strong>Southern California:</strong> [Los Angeles](#Los-Angeles),
[Orange](#Orange),
[Riverside](#Riverside),
[San Bernardino](#San-Bernardino)
<br>
<strong>San Diego-Imperial:</strong> [Imperial](#Imperial),
[San Diego](#San-Diego)
<br>
<br>
[**Summary of CA County Severity Map**](#summary)
<br>
[**Vaccinations by Zip Code**](#vax_map)
Note on <i>small values</i>: if the 7-day rolling average of new cases or new deaths is under 10, the rolling average for the past week is listed instead of a percent change. Because it is a rolling average, decimals are possible and are rounded to 1 decimal place. The same rule applies to hospitalizations.
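The small-values rule can be sketched as follows (a hypothetical helper; the real captions are produced inside `processing_utils`):

```python
def caption_value(avg7: float, pct_change: float) -> str:
    """Return the display string for a county metric, per the rule above."""
    if avg7 < 10:
        # Small counts: report the rolling average itself, rounded to 1 decimal place.
        return f"{avg7:.1f} (7-day avg)"
    # Otherwise report the week-over-week percent change.
    return f"{pct_change:+.0f}%"

print(caption_value(4.26, -12.0))  # -> '4.3 (7-day avg)'
print(caption_value(52.0, -12.0))  # -> '-12%'
```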
```
for c in ca_counties:
    id_anchor = c.replace(" - ", "-").replace(" ", "-")
    display(HTML(f"<a id={id_anchor}></a>"))
    plot_charts(jhu, hospitalizations, vaccinations, vaccinations_demog, c)
    display(HTML(
        "<br>"
        "<a href=#counties_by_region>Return to top</a><br>"
    ))
```
<a id=summary></a>
## Summary of CA Counties
```
ca_boundary = gpd.read_file(f"{default_parameters.S3_FILE_PATH}ca_counties_boundary.geojson")
def grab_map_stats(df):
    # Let's grab the last available date for each county
    df = (df.sort_values(["county", "fips", "date2"],
                         ascending=[True, True, False])
          .drop_duplicates(subset=["county", "fips"], keep="first")
          .reset_index(drop=True)
    )
    # Calculate its severity metric
    df = df.assign(
        severity=(df.cases_avg7 / df.tier3_case_cutoff).round(1)
    )
    # Make gdf
    gdf = pd.merge(ca_boundary, df,
                   on=["fips", "county"], how="left", validate="1:1")
    gdf = gdf.assign(
        cases_avg7=gdf.cases_avg7.round(1),
        deaths_avg7=gdf.deaths_avg7.round(1),
    )
    return gdf
gdf = grab_map_stats(jhu)
```
#### Severity by County
Severity measured as proportion relative to Tier 1 (minimal) threshold.
<br>*1 = at Tier 1 threshold*
<br>*2 = 2x higher than Tier 1 threshold*
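The severity metric mapped below is a simple ratio (a sketch mirroring the `severity` column computed in the code; the numbers here are hypothetical):

```python
def severity(cases_avg7: float, tier_case_cutoff: float) -> float:
    """Ratio of a county's 7-day average of new cases to its tier case cutoff,
    rounded to 1 decimal place (1.0 = at the threshold, 2.0 = twice it)."""
    return round(cases_avg7 / tier_case_cutoff, 1)

print(severity(14.0, 7.0))  # -> 2.0, twice the threshold
print(severity(3.5, 7.0))   # -> 0.5, below the threshold
```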
```
MAX_SEVERITY = gdf.severity.max()
light_gray = make_charts.light_gray
#https://stackoverflow.com/questions/47846744/create-an-asymmetric-colormap
"""
Against Tier 4 cut-off
If severity = 1 when case_rate = 7 per 100k
If severity = x when case_rate = 4 per 100k
If severity = y when case_rate = 1 per 100k
x = 4/7; y = 1/7
Against Tier 1 cut-off
If severity = 1 when case_rate = 1 per 100k
If severity = x when case_rate = 4 per 100k
If severity = y when case_rate = 7 per 100k
x = 4; y = 7
"""
tier_4_colormap_cutoff = [
(1/7), (4/7), 1, 2.5, 5
]
tier_1_colormap_cutoff = [
1, 4, 7, 10, 15
]
# Note: CA reopening guidelines have diff thresholds based on how many vaccines are administered...
# We don't have vaccine info, so ignore, use original cut-offs
colormap_cutoff = tier_4_colormap_cutoff
colorscale = branca.colormap.StepColormap(
colors=["#D0E700", "#F7AE1D", "#F77889",
"#D59CE8", "#B249D4", "#6B1F84", # purples
],
index=colormap_cutoff,
vmin=0, vmax=MAX_SEVERITY,
)
popup_dict = {
"county": "County",
"severity": "Severity",
}
tooltip_dict = {
"county": "County: ",
"severity": "Severity: ",
"new_cases": "New Cases Yesterday: ",
"cases_avg7": "New Cases (7-day rolling avg): ",
"new_deaths": "New Deaths Yesterday: ",
"deaths_avg7": "New Deaths (7-day rolling avg): ",
"cases": "Cumulative Cases",
"deaths": "Cumulative Deaths",
}
fig = make_maps.make_choropleth_map(gdf.drop(columns = ["date", "date2"]),
plot_col = "severity",
popup_dict = popup_dict,
tooltip_dict = tooltip_dict,
colorscale = colorscale,
fig_width = 570, fig_height = 700,
zoom=6, centroid = [36.2, -119.1])
display(Markdown("Severity Scale"))
display(colorscale)
fig
table = (gdf[gdf.severity.notna()]
[["county", "severity"]]
.sort_values("severity", ascending = False)
.reset_index(drop=True)
)
df1_styler = (table.iloc[:14].style.format({'severity': "{:.1f}"})
              .set_table_attributes("style='display:inline'")
              .hide_index()
)
df2_styler = (table.iloc[14:28].style.format({'severity': "{:.1f}"})
              .set_table_attributes("style='display:inline'")
              .hide_index()
)
df3_styler = (table.iloc[28:].style.format({'severity': "{:.1f}"})
              .set_table_attributes("style='display:inline'")
              .hide_index()
)
display(Markdown("#### Counties (in order of decreasing severity)"))
display_html(df1_styler._repr_html_() +
df2_styler._repr_html_() +
df3_styler._repr_html_(), raw=True)
```
[Return to top](#counties_by_region)
```
# Vaccination data by zip code
def select_latest_date(df):
    df = (df[df.date == df.date.max()]
          .sort_values(["county", "zipcode"])
          .reset_index(drop=True)
    )
    return df
vax_by_zipcode = neighborhood_utils.clean_zipcode_vax_data()
vax_by_zipcode = select_latest_date(vax_by_zipcode)
popup_dict = {
"county": "County",
"zipcode": "Zip Code",
"fully_vaccinated_percent": "% Fully Vax"
}
tooltip_dict = {
"county": "County: ",
"zipcode": "Zip Code",
"at_least_one_dose_percent": "% 1+ dose",
"fully_vaccinated_percent": "% fully vax"
}
colormap_cutoff = [
0, 0.2, 0.4, 0.6, 0.8, 1
]
colorscale = branca.colormap.StepColormap(
colors=["#CDEAF8", "#97BFD6", "#5F84A9",
"#315174", "#17375E",
],
index=colormap_cutoff,
vmin=0, vmax=1,
)
fig = make_maps.make_choropleth_map(vax_by_zipcode.drop(columns = "date"),
plot_col = "fully_vaccinated_percent",
popup_dict = popup_dict,
tooltip_dict = tooltip_dict,
colorscale = colorscale,
fig_width = 570, fig_height = 700,
zoom=6, centroid = [36.2, -119.1])
```
<a id=vax_map></a>
#### Full Vaccination Rates by Zip Code
```
display(Markdown("% Fully Vaccinated by Zip Code"))
display(colorscale)
fig
zipcode_dropdown = ipywidgets.Dropdown(description="Zip Code",
                                       options=sorted(vax_by_zipcode.zipcode.unique()),
                                       value=90012)

def make_map_show_table(x):
    plot_col = "fully_vaccinated_percent"
    popup_dict = {
        "county": "County",
        "zipcode": "Zip Code",
        "fully_vaccinated_percent": "% Fully Vax"
    }
    tooltip_dict = {
        "county": "County: ",
        "zipcode": "Zip Code",
        "at_least_one_dose_percent": "% 1+ dose",
        "fully_vaccinated_percent": "% fully vax"
    }
    colormap_cutoff = [0, 0.2, 0.4, 0.6, 0.8, 1]
    colorscale = branca.colormap.StepColormap(
        colors=["#CDEAF8", "#97BFD6", "#5F84A9",
                "#315174", "#17375E",
        ],
        index=colormap_cutoff,
        vmin=0, vmax=1,
    )
    fig_width = 300
    fig_height = 300
    zoom = 12
    df = vax_by_zipcode.copy()
    subset_df = (df[df.zipcode == x]
                 .assign(
                     # When calculating centroids, use EPSG:2229, but when mapping, put it back into EPSG:4326
                     # https://gis.stackexchange.com/questions/372564/userwarning-when-trying-to-get-centroid-from-a-polygon-geopandas
                     lon=df.geometry.centroid.x,
                     lat=df.geometry.centroid.y,
                     county_partial_vax_avg=neighborhood_utils.calculate_county_avg(
                         df, group_by="county",
                         output_col="at_least_one_dose_percent"),
                     county_full_vax_avg=neighborhood_utils.calculate_county_avg(
                         df, group_by="county",
                         output_col="fully_vaccinated_percent"),
                     at_least_one_dose_percent=round(df.at_least_one_dose_percent * 100, 0),
                     fully_vaccinated_percent=round(df.fully_vaccinated_percent * 100, 0),
                 ).drop(columns="date")
    )
    display_cols = ["county", "zipcode", "population",
                    "% 1+ dose", "% fully vax",
                    "county_partial_vax_avg", "county_full_vax_avg",
    ]
    table = (subset_df.rename(columns={
                 "at_least_one_dose_percent": "% 1+ dose",
                 "fully_vaccinated_percent": "% fully vax"})
             [display_cols].style.format({
                 '% 1+ dose': "{:.0f}%",
                 '% fully vax': "{:.0f}%",
                 'population': '{:,.0f}',
                 'county_partial_vax_avg': '{:.0f}%',
                 'county_full_vax_avg': '{:.0f}%',
             }).set_table_attributes("style='display:inline'")
             .hide_index()
    )
    display_html(table)
    center = [subset_df.lat, subset_df.lon]
    fig = make_maps.make_choropleth_map(subset_df,
                                        plot_col, popup_dict, tooltip_dict,
                                        colorscale, fig_width, fig_height, zoom, center)
    display(fig)
ipywidgets.interact(make_map_show_table, x=zipcode_dropdown)
```
[Return to top](#counties_by_region)
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
np.random.seed(219)
tf.set_random_seed(219)
# Load training and eval data from tf.keras
(train_data, train_labels), (test_data, test_labels) = \
tf.keras.datasets.mnist.load_data()
train_data = train_data[:50]
train_labels = train_labels[:50]
train_data = train_data / 255.
train_labels = np.asarray(train_labels, dtype=np.int32)
test_data = test_data[:50]
test_labels = test_labels[:50]
test_data = test_data / 255.
test_labels = np.asarray(test_labels, dtype=np.int32)
batch_size = 16
# for train
train_dataset = tf.data.Dataset.from_tensor_slices((train_data, train_labels))
#train_dataset = train_dataset.shuffle(buffer_size=10000)
train_dataset = train_dataset.shuffle(buffer_size=10000, seed=None, reshuffle_each_iteration=False)
train_dataset = train_dataset.repeat(count=2)
train_dataset = train_dataset.batch(batch_size=batch_size)
print(train_dataset)
# for test
test_dataset = tf.data.Dataset.from_tensor_slices((test_data, test_labels))
test_dataset = test_dataset.shuffle(buffer_size = 10000)
test_dataset = test_dataset.repeat(count=2)
test_dataset = test_dataset.batch(batch_size = batch_size)
print(test_dataset)
```
## 1. `from_string_handle`
```python
@staticmethod
from_string_handle(
string_handle,
output_types,
output_shapes=None,
output_classes=None
)
```
Creates a new, uninitialized Iterator based on the given handle.
### 1.1 `make_one_shot_iterator()`
Creates an Iterator for enumerating the elements of this dataset.
* Note: The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization.
```
train_iterator = train_dataset.make_one_shot_iterator()
test_iterator = test_dataset.make_one_shot_iterator()
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(
handle, train_iterator.output_types)
x, y = iterator.get_next()
x = tf.cast(x, dtype = tf.float32)
y = tf.cast(y, dtype = tf.int32)
sess = tf.Session(config=sess_config)
train_iterator_handle = sess.run(train_iterator.string_handle())
test_iterator_handle = sess.run(test_iterator.string_handle())
# Train
max_epochs = 2
step = 0
for epoch in range(max_epochs):
    # No need to call sess.run(iterator.initializer) here
    try:
        while True:
            train_labels_ = sess.run(y, feed_dict={handle: train_iterator_handle})
            test_labels_ = sess.run(y, feed_dict={handle: test_iterator_handle})
            print("step: %d labels:" % step)
            print(train_labels_)
            print(test_labels_)
            step += 1
    except tf.errors.OutOfRangeError:
        print("End of dataset")  # ==> "End of dataset"
```
### 1.2 `make_initializable_iterator()`
```python
make_initializable_iterator(shared_name=None)
```
Creates an Iterator for enumerating the elements of this dataset.
Usage:
```python
dataset = ...
iterator = dataset.make_initializable_iterator()
# ...
sess.run(iterator.initializer)
```
```
train_iterator = train_dataset.make_initializable_iterator()
test_iterator = test_dataset.make_initializable_iterator()
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(
handle, train_iterator.output_types, train_iterator.output_shapes)
x, y = iterator.get_next()
x = tf.cast(x, dtype = tf.float32)
y = tf.cast(y, dtype = tf.int32)
sess = tf.Session(config=sess_config)
train_iterator_handle = sess.run(train_iterator.string_handle())
test_iterator_handle = sess.run(test_iterator.string_handle())
train_initializer = iterator.make_initializer(train_dataset)
test_initializer = iterator.make_initializer(test_dataset)
# Train
max_epochs = 2
step = 0
for epoch in range(max_epochs):
    sess.run(train_iterator.initializer)
    sess.run(test_iterator.initializer)
    try:
        while True:
            train_labels_ = sess.run(y, feed_dict={handle: train_iterator_handle})
            test_labels_ = sess.run(y, feed_dict={handle: test_iterator_handle})
            print("step: %d labels:" % step)
            print(train_labels_)
            print(test_labels_)
            step += 1
    except tf.errors.OutOfRangeError:
        print("End of dataset")  # ==> "End of dataset"
```
## 2. `from_structure`
```python
@staticmethod
from_structure(
output_types,
output_shapes=None,
shared_name=None,
output_classes=None
)
```
Creates a new, uninitialized Iterator with the given structure.
### 2.1 `make_one_shot_iterator()`
Creates an Iterator for enumerating the elements of this dataset.
* Note: The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization.
```
train_iterator = train_dataset.make_one_shot_iterator()
test_iterator = test_dataset.make_one_shot_iterator()
iterator = tf.data.Iterator.from_structure(train_iterator.output_types)
train_initializer = iterator.make_initializer(train_dataset)
test_initializer = iterator.make_initializer(test_dataset)
x, y = iterator.get_next()
sess = tf.Session(config=sess_config)
# For each epoch, first iterate over the training dataset,
# then iterate over the test dataset.
for _ in range(2):
    # Initialize the iterator to the training dataset
    print('train')
    sess.run(train_initializer)
    while True:
        try:
            y_ = sess.run(y)
            print(y_)
        except tf.errors.OutOfRangeError:
            break
    print('test')
    sess.run(test_initializer)
    while True:
        try:
            y_ = sess.run(y)
            print(y_)
        except tf.errors.OutOfRangeError:
            break
```
### 2.2 `make_initializable_iterator()`
```python
make_initializable_iterator(shared_name=None)
```
Creates an Iterator for enumerating the elements of this dataset.
Usage:
```python
dataset = ...
iterator = dataset.make_initializable_iterator()
# ...
sess.run(iterator.initializer)
```
```
train_iterator = train_dataset.make_initializable_iterator()
test_iterator = test_dataset.make_initializable_iterator()
iterator = tf.data.Iterator.from_structure(train_iterator.output_types)
train_initializer = iterator.make_initializer(train_dataset)
test_initializer = iterator.make_initializer(test_dataset)
x, y = iterator.get_next()
sess = tf.Session(config=sess_config)
# For each epoch, first iterate over the training dataset,
# then iterate over the test dataset.
for _ in range(2):
    # Initialize the iterator to the training dataset
    print('train')
    sess.run(train_initializer)
    while True:
        try:
            y_ = sess.run(y)
            print(y_)
        except tf.errors.OutOfRangeError:
            break
    print('test')
    sess.run(test_initializer)
    while True:
        try:
            y_ = sess.run(y)
            print(y_)
        except tf.errors.OutOfRangeError:
            break
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import json
import tensorflow as tf
import nltk
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
import pickle
```
Let's load the data and set the column names.
```
col_names=["tweet emotion", "tweet id", "tweet date", "query", "tweet user", "tweet text" ]
data=pd.read_csv("training.1600000.processed.noemoticon.csv",
names=col_names, header=None, encoding='latin1')
```
Let's check the first 5 records.
```
data.head()
```
The tweet text column will be cleaned: it is lowercased, and mentions (`@...`), URLs (`http`/`www`), and extra spaces are removed.
```
data['tweet text'] = data['tweet text'].str.lower()
data['tweet text'] = data['tweet text'].str.replace(r'@\S{1,}', '', regex=True)
data['tweet text'] = data['tweet text'].str.replace(r'http\S{1,}', '', regex=True)
data['tweet text'] = data['tweet text'].str.replace(r'www\.\S{1,}', '', regex=True)
data['tweet text'] = data['tweet text'].str.replace(r'\s{2,}', ' ', regex=True)
data['tweet text'] = data['tweet text'].str.strip()
```
Let's check the first 10 records after the cleaning.
```
data.iloc[:10,5]
```
Stop words carry little meaning on their own and rarely alter the semantics of a sentence, so they will be removed.
```
stop_words=['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've", "you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'nor', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'now', 'd', 'll', 'm', 'o', 're', 've', 'y', 'ma']
print(stop_words)
```
The following will ensure that stop words are removed from Tweet Text.
```
def check_stop_words(x):
    # Filter with a comprehension: deleting items from a list while
    # iterating over it skips elements.
    x_list = [item for item in x.split() if item not in stop_words]
    return " ".join(x_list)

data['tweet text'] = data['tweet text'].apply(check_stop_words)
data['tweet text'] = data['tweet text'].str.replace(r'\s{2,}', ' ', regex=True)
data['tweet text'] = data['tweet text'].str.strip()
data["tweet label"] = np.where(data["tweet emotion"] == 4, 1, 0)
```
We will split the data into 70% training and 30% testing.
```
X_train, X_test, y_train, y_test = train_test_split(data["tweet text"], data["tweet label"], test_size=0.30)
y_train.value_counts()
y_test.value_counts()
```
Let's check the first 20 records of the training data.
```
X_train.iloc[1:20,]
y_train.iloc[1:20,]
```
The following are the hyperparameters of the model. The top 50,000 most common words are chosen to be placed in the tokenizer.
```
vocab_size = 50000
embedding_dim = 16
max_length = 120
trunc_type='post'
padding_type='post'
oov_tok = "<OOV>"
```
The following will set up and train the TensorFlow model. A bidirectional LSTM is particularly useful when the order of words affects the overall tone of a sentence, and the dropout layer helps reduce overfitting.
```
tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(X_train)
word_index = tokenizer.word_index
training_sequences = tokenizer.texts_to_sequences(X_train)
training_padded = pad_sequences(training_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
testing_sequences = tokenizer.texts_to_sequences(X_test)
testing_padded = pad_sequences(testing_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(24, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
num_epochs = 3
training_padded = np.array(training_padded)
training_labels = np.array(y_train)
testing_padded = np.array(testing_padded)
testing_labels = np.array(y_test)
history = model.fit(training_padded, training_labels, epochs=num_epochs, validation_data=(testing_padded, testing_labels),
verbose=1)
```
The following will plot the accuracy and validation accuracy against the number of epochs.
```
def plot(history, metric):
    plt.plot(history.history[metric])
    plt.plot(history.history['val_' + metric])
    plt.xlabel("Epochs")
    plt.ylabel(metric)
    plt.legend([metric, 'val_' + metric])
    plt.show()
plot(history, 'accuracy')
plot(history, 'loss')
```
The following will save the model and tokenizer for use in the Flask app.
```
# Saving the model and tokenizer
with open("tokenizer.pickle", "wb") as handle:
    pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
model.save("tf_model.h5")
```
# Simulators
## Introduction
This notebook shows how to import the *Qiskit Aer* simulator backend and use it to run ideal (noise free) Qiskit Terra circuits.
```
import numpy as np
# Import Qiskit
from qiskit import QuantumCircuit
from qiskit import Aer, transpile
from qiskit.tools.visualization import plot_histogram, plot_state_city
import qiskit.quantum_info as qi
```
## The Aer Provider
The `Aer` provider contains a variety of high-performance simulator backends for several simulation methods. The backends available on the current system can be viewed using `Aer.backends`:
```
Aer.backends()
```
## The Aer Simulator
The main simulator backend of the Aer provider is the `AerSimulator` backend. A new simulator backend can be created using `Aer.get_backend('aer_simulator')`.
```
simulator = Aer.get_backend('aer_simulator')
```
The default behavior of the `AerSimulator` backend is to mimic the execution of an actual device. If a `QuantumCircuit` containing measurements is run, it will return a count dictionary containing the final values of any classical registers in the circuit. The circuit may contain gates, measurements, resets, conditionals, and other custom simulator instructions that are discussed in another notebook.
### Simulating a quantum circuit
The basic operation runs a quantum circuit and returns a counts dictionary of measurement outcomes. Here we run a simple circuit that prepares the 2-qubit Bell state $\left|\psi\right\rangle = \frac{1}{\sqrt{2}}\left(\left|0,0\right\rangle + \left|1,1\right\rangle\right)$ and measures both qubits.
```
# Create circuit
circ = QuantumCircuit(2)
circ.h(0)
circ.cx(0, 1)
circ.measure_all()
# Transpile for simulator
simulator = Aer.get_backend('aer_simulator')
circ = transpile(circ, simulator)
# Run and get counts
result = simulator.run(circ).result()
counts = result.get_counts(circ)
plot_histogram(counts, title='Bell-State counts')
```
### Returning measurement outcomes for each shot
The `AerSimulator` also supports returning a list of measurement outcomes for each individual shot. This is enabled by setting the keyword argument `memory=True` in `run()`.
```
# Run and get memory
result = simulator.run(circ, shots=10, memory=True).result()
memory = result.get_memory(circ)
print(memory)
```
## Aer Simulator Options
The `AerSimulator` backend supports a variety of configurable options which can be updated using the `set_options` method. See the `AerSimulator` API documentation for additional details.
### Simulation Method
The `AerSimulator` supports a variety of simulation methods, each of which supports a different set of instructions. The method can be set manually using the `simulator.set_options(method=value)` option, or a simulator backend with a preconfigured method can be obtained directly from the `Aer` provider using `Aer.get_backend`.
When simulating ideal circuits, changing between the exact simulation methods `stabilizer`, `statevector`, `density_matrix`, and `matrix_product_state` should not change the simulation result (other than the usual variation from sampling measurement outcomes).
```
# Increase shots to reduce sampling variance
shots = 10000
# Stabilizer simulation method
sim_stabilizer = Aer.get_backend('aer_simulator_stabilizer')
job_stabilizer = sim_stabilizer.run(circ, shots=shots)
counts_stabilizer = job_stabilizer.result().get_counts(0)
# Statevector simulation method
sim_statevector = Aer.get_backend('aer_simulator_statevector')
job_statevector = sim_statevector.run(circ, shots=shots)
counts_statevector = job_statevector.result().get_counts(0)
# Density Matrix simulation method
sim_density = Aer.get_backend('aer_simulator_density_matrix')
job_density = sim_density.run(circ, shots=shots)
counts_density = job_density.result().get_counts(0)
# Matrix Product State simulation method
sim_mps = Aer.get_backend('aer_simulator_matrix_product_state')
job_mps = sim_mps.run(circ, shots=shots)
counts_mps = job_mps.result().get_counts(0)
plot_histogram([counts_stabilizer, counts_statevector, counts_density, counts_mps],
title='Counts for different simulation methods',
legend=['stabilizer', 'statevector',
'density_matrix', 'matrix_product_state'])
```
#### Automatic Simulation Method
The default simulation method is `automatic`, which automatically selects one of the other simulation methods for each circuit based on the instructions in that circuit. A fixed simulation method can be specified by adding the method name when getting the backend, or by setting the `method` option on the backend.
### GPU Simulation
The `statevector`, `density_matrix` and `unitary` simulators support running on NVIDIA GPUs. For these methods the simulation device can be manually set to CPU or GPU using the `simulator.set_options(device='GPU')` backend option. If a GPU device is not available, setting this option will raise an exception.
```
from qiskit.providers.aer import AerError
# Initialize a GPU backend
# Note that the cloud instance for tutorials does not have a GPU
# so this will raise an exception.
try:
    simulator_gpu = Aer.get_backend('aer_simulator')
    simulator_gpu.set_options(device='GPU')
except AerError as e:
    print(e)
```
The `Aer` provider will also contain preconfigured GPU simulator backends if Qiskit Aer was installed with GPU support on a compatible system:
* `aer_simulator_statevector_gpu`
* `aer_simulator_density_matrix_gpu`
* `aer_simulator_unitary_gpu`
*Note: The GPU version of Aer can be installed using `pip install qiskit-aer-gpu`.*
### Simulation Precision
One of the available simulator options allows setting the floating-point precision for the `statevector`, `density_matrix`, `unitary` and `superop` methods. This is done using the `precision="single"` or `precision="double"` (default) option:
```
# Configure a single-precision statevector simulator backend
simulator = Aer.get_backend('aer_simulator_statevector')
simulator.set_options(precision='single')
# Run and get counts
result = simulator.run(circ).result()
counts = result.get_counts(circ)
print(counts)
```
Setting the simulation precision applies to both CPU and GPU simulation devices. Single precision will halve the required memory and may provide performance improvements on certain systems.
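To see why single precision halves the memory, note that a statevector over $n$ qubits stores $2^n$ complex amplitudes, at 16 bytes each in double precision (complex128) versus 8 bytes in single precision (complex64). The helper below is a back-of-the-envelope sketch, not part of Qiskit:

```python
def statevector_memory_bytes(num_qubits, precision='double'):
    # one complex amplitude per basis state:
    # complex128 = 16 bytes, complex64 = 8 bytes
    itemsize = 16 if precision == 'double' else 8
    return (2 ** num_qubits) * itemsize

print(statevector_memory_bytes(30))            # 17179869184 bytes (16 GiB)
print(statevector_memory_bytes(30, 'single'))  # 8589934592 bytes (8 GiB)
```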
## Custom Simulator Instructions
### Saving the simulator state
The state of the simulator can be saved in a variety of formats using custom simulator instructions.
| Circuit method | Description |Supported Methods |
|----------------|-------------|------------------|
| `save_state` | Save the simulator state in the native format for the simulation method | All |
| `save_statevector` | Save the simulator state as a statevector | `"automatic"`, `"statevector"`, `"matrix_product_state"`, `"extended_stabilizer"`|
| `save_stabilizer` | Save the simulator state as a Clifford stabilizer | `"automatic"`, `"stabilizer"`|
| `save_density_matrix` | Save the simulator state as a density matrix | `"automatic"`, `"statevector"`, `"matrix_product_state"`, `"density_matrix"` |
| `save_matrix_product_state` | Save the simulator state as a matrix product state tensor | `"automatic"`, `"matrix_product_state"`|
| `save_unitary` | Save the simulator state as unitary matrix of the run circuit | `"automatic"`, `"unitary"`|
| `save_superop` | Save the simulator state as superoperator matrix of the run circuit | `"automatic"`, `"superop"`|
Note that these instructions are only supported by the Aer simulator and will result in an error if a circuit containing them is run on a non-simulator backend such as an IBM Quantum device.
#### Saving the final statevector
To save the final statevector of the simulation we can append the circuit with the `save_statevector` instruction. Note that this instruction should be applied *before* any measurements if we do not want to save the collapsed post-measurement state.
```
# Construct quantum circuit without measure
circ = QuantumCircuit(2)
circ.h(0)
circ.cx(0, 1)
circ.save_statevector()
# Transpile for simulator
simulator = Aer.get_backend('aer_simulator')
circ = transpile(circ, simulator)
# Run and get statevector
result = simulator.run(circ).result()
statevector = result.get_statevector(circ)
plot_state_city(statevector, title='Bell state')
```
#### Saving the circuit unitary
To save the unitary matrix for a `QuantumCircuit` we can append the circuit with the `save_unitary` instruction. Note that this circuit cannot contain any measurements or resets, since these instructions are not supported by the `"unitary"` simulation method.
```
# Construct quantum circuit without measure
circ = QuantumCircuit(2)
circ.h(0)
circ.cx(0, 1)
circ.save_unitary()
# Transpile for simulator
simulator = Aer.get_backend('aer_simulator')
circ = transpile(circ, simulator)
# Run and get unitary
result = simulator.run(circ).result()
unitary = result.get_unitary(circ)
print("Circuit unitary:\n", unitary.round(5))
```
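As a sanity check on any matrix claimed to be unitary, the condition $U^\dagger U = \mathbb{1}$ can be verified with plain NumPy; the sketch below is Qiskit-independent and builds a random unitary via a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(100)
m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
q, _ = np.linalg.qr(m)  # the Q factor of a QR decomposition is unitary

print(np.allclose(q.conj().T @ q, np.eye(4)))  # True
```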
#### Saving multiple states
We can also apply save instructions at multiple locations in a circuit. Note that when doing this we must provide a unique label for each instruction to retrieve them from the results
```
# Construct quantum circuit without measure
steps = 5
circ = QuantumCircuit(1)
for i in range(steps):
circ.save_statevector(label=f'psi_{i}')
circ.rx(i * np.pi / steps, 0)
circ.save_statevector(label=f'psi_{steps}')
# Transpile for simulator
simulator = Aer.get_backend('aer_simulator')
circ = transpile(circ, simulator)
# Run and get saved data
result = simulator.run(circ).result()
data = result.data(0)
data
```
### Setting the simulator to a custom state
The `AerSimulator` allows setting a custom simulator state for several of its simulation methods using custom simulator instructions
| Circuit method | Description |Supported Methods |
|----------------|-------------|------------------|
| `set_statevector` | Set the simulator state to the specified statevector | `"automatic"`, `"statevector"`, `"density_matrix"`|
| `set_stabilizer` | Set the simulator state to the specified Clifford stabilizer | `"automatic"`, `"stabilizer"`|
| `set_density_matrix` | Set the simulator state to the specified density matrix | `"automatic"`, `"density_matrix"` |
| `set_unitary` | Set the simulator state to the specified unitary matrix | `"automatic"`, `"unitary"`, `"superop"`|
| `set_superop` | Set the simulator state to the specified superoperator matrix | `"automatic"`, `"superop"`|
**Notes:**
* These instructions must be applied to all qubits in a circuit, otherwise an exception will be raised.
* The input state must also be a valid state (statevector, density matrix, unitary, etc.), otherwise an exception will be raised.
* These instructions can be applied at any location in a circuit and will override the current state with the specified one. Any classical register values (e.g. from preceding measurements) will be unaffected.
* Set state instructions are only supported by the Aer simulator and will result in an error if a circuit containing them is run on a non-simulator backend such as an IBM Quantum device.
#### Setting a custom statevector
The `set_statevector` instruction can be used to set a custom `Statevector` state. The input statevector must be valid ($|\langle\psi|\psi\rangle|=1$)
```
# Generate a random statevector
num_qubits = 2
psi = qi.random_statevector(2 ** num_qubits, seed=100)
# Set initial state to generated statevector
circ = QuantumCircuit(num_qubits)
circ.set_statevector(psi)
circ.save_state()
# Transpile for simulator
simulator = Aer.get_backend('aer_simulator')
circ = transpile(circ, simulator)
# Run and get saved data
result = simulator.run(circ).result()
result.data(0)
```
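The normalization requirement $|\langle\psi|\psi\rangle|=1$ mentioned above can be checked directly with NumPy; a Qiskit-independent sketch:

```python
import numpy as np

rng = np.random.default_rng(100)
vec = rng.normal(size=4) + 1j * rng.normal(size=4)
psi = vec / np.linalg.norm(vec)  # normalize so <psi|psi> = 1

print(np.isclose(np.vdot(psi, psi).real, 1.0))  # True
```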
#### Using the initialize instruction
It is also possible to initialize the simulator to a custom statevector using the `initialize` instruction. Unlike the `set_statevector` instruction, this instruction is also supported on real device backends, by unrolling to reset and standard gate instructions.
```
# Use the initialize instruction to set the initial state
circ = QuantumCircuit(num_qubits)
circ.initialize(psi, range(num_qubits))
circ.save_state()
# Transpile for simulator
simulator = Aer.get_backend('aer_simulator')
circ = transpile(circ, simulator)
# Run and get result data
result = simulator.run(circ).result()
result.data(0)
```
#### Setting a custom density matrix
The `set_density_matrix` instruction can be used to set a custom `DensityMatrix` state. The input density matrix must be valid ($Tr[\rho]=1, \rho \ge 0$)
```
num_qubits = 2
rho = qi.random_density_matrix(2 ** num_qubits, seed=100)
circ = QuantumCircuit(num_qubits)
circ.set_density_matrix(rho)
circ.save_state()
# Transpile for simulator
simulator = Aer.get_backend('aer_simulator')
circ = transpile(circ, simulator)
# Run and get saved data
result = simulator.run(circ).result()
result.data(0)
```
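The validity conditions $\mathrm{Tr}[\rho]=1$ and $\rho \ge 0$ can likewise be checked with plain NumPy; the sketch below constructs a valid density matrix as a probabilistic mixture of normalized pure states:

```python
import numpy as np

rng = np.random.default_rng(100)
vecs = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # normalized pure states
probs = np.array([0.5, 0.3, 0.2])                    # mixture probabilities
rho = sum(p * np.outer(v, v.conj()) for p, v in zip(probs, vecs))

print(np.isclose(np.trace(rho).real, 1.0))      # True: unit trace
print(np.linalg.eigvalsh(rho).min() >= -1e-12)  # True: positive semidefinite
```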
#### Setting a custom stabilizer state
The `set_stabilizer` instruction can be used to set a custom `Clifford` stabilizer state. The input stabilizer must be a valid `Clifford`.
```
# Generate a random Clifford C
num_qubits = 2
stab = qi.random_clifford(num_qubits, seed=100)
# Set initial state to stabilizer state C|0>
circ = QuantumCircuit(num_qubits)
circ.set_stabilizer(stab)
circ.save_state()
# Transpile for simulator
simulator = Aer.get_backend('aer_simulator')
circ = transpile(circ, simulator)
# Run and get saved data
result = simulator.run(circ).result()
result.data(0)
```
#### Setting a custom unitary
The `set_unitary` instruction can be used to set a custom unitary `Operator` state. The input unitary matrix must be valid ($U^\dagger U=\mathbb{1}$)
```
# Generate a random unitary
num_qubits = 2
unitary = qi.random_unitary(2 ** num_qubits, seed=100)
# Set initial state to unitary
circ = QuantumCircuit(num_qubits)
circ.set_unitary(unitary)
circ.save_state()
# Transpile for simulator
simulator = Aer.get_backend('aer_simulator')
circ = transpile(circ, simulator)
# Run and get saved data
result = simulator.run(circ).result()
result.data(0)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Train and deploy a model
_**Create and deploy a model directly from a notebook**_
---
---
## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Train](#Train)
1. Viewing run results
1. Simple parameter sweep
1. Viewing experiment results
1. Select the best model
1. [Deploy](#Deploy)
1. Register the model
1. Create a scoring file
1. Create the environment configuration (yml file for Conda and pip packages)
1. Deploy the model as a web service on Azure Container Instance
1. Test the Web Service
1. Clean up
---
## Introduction
Azure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. In this notebook we will
* connect to our AML Workspace
* create an experiment that contains multiple runs with tracked metrics
* choose the best model created across all runs
* deploy that model as a service
In the end we will have a model deployed as a web service which we can call from an HTTP endpoint
---
## Setup
Create an Azure Machine Learning service in Azure, and launch the studio.
Create a Workspace, a Compute Instance (VM) and a new Notebook running on that VM as a compute target.
This example was forked from https://github.com/Azure/MachineLearningNotebooks, and further developed to present an end-to-end example.
For this notebook we need the Azure ML SDK and access to our workspace. The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace.
See more detail on [Git Integration](https://docs.microsoft.com/en-us/azure/machine-learning/concept-train-model-git-integration#:~:text=Azure%20Machine%20Learning%20provides%20a%20shared%20file%20system,work%20with%20Git%20via%20the%20Git%20CLI%20experience) if you need to upload this notebook in AML.
```
import azureml.core
from azureml.core import Experiment, Workspace
# Check core SDK version number
print("This notebook was created using version 1.0.2 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
print("")
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
```
---
## Data
We will use the diabetes dataset for this experiment, a well-known small dataset that comes with scikit-learn. The dataset consists of ten baseline variables (age, sex, body mass index, average blood pressure, and six blood serum measurements) obtained for each of n = 442 diabetes patients, as well as a quantitative measure of disease progression one year after baseline, as described on the [scikit-learn.org](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset) website. This cell loads the dataset and splits it into random training and testing sets.
```
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
import joblib
X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
data = {
"train":{"X": X_train, "y": y_train},
"test":{"X": X_test, "y": y_test}
}
print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples")
```
Notice that `load_diabetes` in sklearn standardizes and mean-centers the 10 input variables.
See the log below compared to the [original raw dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.tab.txt)
See more details on the python [load_diabetes](https://python512.blogspot.com/2019/09/diabetes-dataset.html#:~:text=To%20upload%20the%20data%20contained%20in%20this%20dataset%2C,from%20sklearn%20import%20datasets%20...%3A%20diabetes%20%3D%20datasets.load_diabetes%28%29) function in scikit-learn library.
```
for i in range(len(data['train']['X'])):
print("Input Variables=%s, Output Variable=%s" % (data['train']['X'][i],data['train']['y'][i]))
```
---
## Train
Let's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:
* We access an experiment from our AML workspace by name, which will be created if it doesn't exist
* We use `start_logging` to create a new run in this experiment
* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. We will be able to review and compare these measures in the Azure Portal at a later time.
* We store the resulting model in the **working** directory, which is automatically captured by AML when the run is complete.
* We use `run.complete()` to indicate that the run is over and results can be captured and finalized
```
# Get an experiment object from Azure Machine Learning
experiment = Experiment(workspace=ws, name="train-within-notebook-for-powerbi")
# Create a run object in the experiment
run = experiment.start_logging()
# Log the algorithm parameter alpha to the run; where alpha is between 0 and 1
run.log('alpha', 0.03)
# Create, fit, and test the scikit-learn Ridge regression model
regression_model = Ridge(alpha=0.03)
regression_model.fit(data['train']['X'], data['train']['y'])
preds = regression_model.predict(data['test']['X'])
# Output the Mean Squared Error to the notebook and to the run
print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))
run.log('mse', mean_squared_error(data['test']['y'], preds))
# Save the model to the working directory
model_file_name = 'diabetesmodel.pkl'
joblib.dump(value = regression_model, filename = model_file_name)
# upload the model file explicitly into artifacts
run.upload_file(name = model_file_name, path_or_stream = model_file_name)
# Complete the run
run.complete()
```
### Viewing run results
Azure Machine Learning stores all the details about the run in the Azure cloud. Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information.
```
run
```
### Simple parameter sweep
Now let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy.
Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.
This example also uses the **tqdm** library to provide progress bar feedback.
```
import numpy as np
from tqdm import tqdm
# list of numbers from 0 to 1.0 with a 0.10 interval
alphas = np.arange(0.0, 1.0, 0.10)
# try a bunch of alpha values in a Linear Regression (Ridge) model
for alpha in tqdm(alphas):
# create a bunch of runs, each train a model with a different alpha value
with experiment.start_logging() as run:
# Use Ridge algorithm to build a regression model
regression_model = Ridge(alpha=alpha)
regression_model.fit(X=data["train"]["X"], y=data["train"]["y"])
preds = regression_model.predict(X=data["test"]["X"])
mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds)
# log alpha, mean_squared_error and feature names in run history
run.log(name="alpha", value=alpha)
run.log(name="mse", value=mse)
# Save the model to the outputs directory for capture
joblib.dump(value=regression_model, filename='diabetesmodel.pkl')
```
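As an aside, the automatic call to `run.complete()` at the end of each `with` block relies on Python's context-manager protocol; the class below is a minimal illustration of that pattern, not AML's actual implementation:

```python
class FakeRun:
    """Toy stand-in for an AML run object."""
    def __init__(self):
        self.completed = False

    def complete(self):
        self.completed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # called automatically when the 'with' block exits
        self.complete()
        return False

with FakeRun() as run:
    pass  # training and logging would happen here

print(run.completed)  # True
```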
### Viewing experiment results
Similar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model
```
# now let's take a look at the experiment in Azure portal.
experiment
```
### Select the best model
Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`.
Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case **mse**. To find the best run, we create a dictionary mapping the run IDs to the metrics.
Finally, we use the `tag` method to mark the best run to make it easier to find later.
```
runs = {}
run_metrics = {}
# Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric
for r in tqdm(experiment.get_runs()):
metrics = r.get_metrics()
if 'mse' in metrics.keys():
runs[r.id] = r
run_metrics[r.id] = metrics
# Find the run with the best (lowest) mean squared error and display the id and metrics
best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])
best_run = runs[best_run_id]
print('Best run is:', best_run_id)
print('Metrics:', run_metrics[best_run_id])
# Tag the best run for identification later
best_run.tag("Best Run")
```
---
## Deploy
Now that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inference. The process of deploying a model involves
* registering a model in your workspace
* creating a scoring file containing init and run methods
* creating an environment settings file describing packages necessary for your scoring file
* creating a deployment configuration (for ACI Service in this example)
* deploying the model and packages as a web service
### Register a model
We have already identified which run contains the "best model" by our evaluation criteria. Each run has a file structure associated with it that contains various files collected during the run. Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.
When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()`
```
from azureml.core.model import Model
# View the files in the run
for f in best_run.get_file_names():
print(f)
# Register the model with the workspace
model = Model.register(model_path = "diabetesmodel.pkl",
model_name = "diabetesmodel.pkl",
tags = {'area': "diabetes", 'type': "regression"},
description = "Ridge regression model to predict diabetes",
workspace =ws)
```
Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties.
```
# Find all models called "diabetesmodel" and display their version numbers
from azureml.core.model import Model
models = Model.list(ws, name='diabetesmodel.pkl')
for m in models:
print(m.name, m.version)
```
### Create a scoring file
Since your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply it to new data. This script is your 'scoring file'. The scoring file is a Python program containing, at a minimum, two methods: init() and run(). The init() method is called once when your deployment is started, so you can load your model and any other required objects; it uses the get_model_path function to locate the registered model inside the Docker container. The run() method is called interactively when the web service is called with one or more data samples to predict.
Important: the schema decorators for pandas and numpy are required to implement the automatic Swagger schema generation for the input and output variables.
After a successful run of this cell, the score.py file will be created in the working folder.
```
%%writefile score.py
import json
import pickle
import numpy as np
import pandas as pd
import joblib
from azureml.core.model import Model
from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType
from inference_schema.parameter_types.pandas_parameter_type import PandasParameterType
def init():
global model
model_path = Model.get_model_path('diabetesmodel.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
input_sample = pd.DataFrame(data=[{
"input1_age": 57,
"input2_sex": 2,
"input3_bmi": 29.4,
"input4_bp": 109,
"input5_s1": 160,
"input6_s2": 87.6,
"input7_s3": 32,
"input8_s4": 5,
"input9_s5": 5.3,
"input10_s10": 92,
}])
output_sample = np.array([208])
@input_schema('data', PandasParameterType(input_sample))
@output_schema(NumpyParameterType(output_sample))
def run(data):
try:
result = model.predict(data)
return result.tolist()
except Exception as e:
error = str(e)
return error
```
### Create the environment settings
The environment settings will also be exported into a yml file (myenv.yml) so the conda and pip packages can be verified.
The yml file will be placed in the working folder for this deployment (it is not needed for the deployment itself; it is for verification only).
This step creates the Python environment with the required conda and pip packages/dependencies. It then creates the inference configuration that will build the Docker container based on the scoring file and the environment configuration. The Docker image is transparent to the user and will be created and registered behind the scenes by the AzureML SDK.
```
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
env = Environment('deploytocloudenv')
env.python.conda_dependencies = CondaDependencies.create(conda_packages=['numpy','scikit-learn'],pip_packages=['azureml-defaults','inference-schema[numpy-support]'])
inference_config = InferenceConfig(entry_script="score.py", environment=env)
with open ("myenv.yml","w") as f:
f.write(env.python.conda_dependencies.serialize_to_string())
```
Verify the myenv.yml file in the working folder to ensure it contains the following configuration:
```
# DO NOT RUN THIS STEP - for verification only
# Conda environment specification. The dependencies defined in this file will
# be automatically provisioned for runs with userManagedDependencies=False.
# Details about the Conda environment file format:
# https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually
name: project_environment
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- pip:
- azureml-defaults~=1.6.0
- inference-schema[numpy-support]
- numpy
- scikit-learn
channels:
- anaconda
- conda-forge
```
### Create a deployment configuration for Azure Container Instance
```
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "diabetes", 'type': "regression"},
description = 'aci web service with the diabetes regression model',
location = 'Canada Central')
```
### Deploy an ACI web service with the model, inference, and deployment configuration
This step will take a few minutes...
```
%%time
from azureml.core.model import Model
from azureml.core.webservice import Webservice
# Create the webservice using all of the precreated configurations and our best model
aciWebservice = Model.deploy(workspace=ws,
name='aci-webservice-diabetesmodel',
models=[model],
inference_config=inference_config,
deployment_config=aciconfig)
# Wait for the service deployment to complete while displaying log output
aciWebservice.wait_for_deployment(show_output=True)
print(aciWebservice.state)
print(aciWebservice.get_logs())
```
### Obtain the Swagger URL if successfully deployed
```
aciWebservice.swagger_uri
```
### Test web service
Call the web service with some dummy input data to get a prediction.
```
import json
# Raw dataset
test_sample = json.dumps({"data": [{
"input1_age": 57,
"input2_sex": 2,
"input3_bmi": 29.4,
"input4_bp": 109,
"input5_s1": 160,
"input6_s2": 87.6,
"input7_s3": 32,
"input8_s4": 5,
"input9_s5": 5.3,
"input10_s10": 92,}]})
test_sample = bytes(test_sample,encoding = 'utf8')
prediction = aciWebservice.run(input_data=test_sample)
print(prediction)
# Standardized and Mean-centered - see the 'load_diabetes() note in the Data section above
test_sample = json.dumps({"data": [{
"input1_age": 0.03081083,
"input2_sex": 0.05068012,
"input3_bmi": 0.03259528,
"input4_bp": 0.04941532 ,
"input5_s1": -0.04009564,
"input6_s2": -0.04358892,
"input7_s3": -0.06917231,
"input8_s4": 0.03430886,
"input9_s5": 0.06301662,
"input10_s10": 0.00306441,}]})
test_sample = bytes(test_sample,encoding = 'utf8')
prediction = aciWebservice.run(input_data=test_sample)
print(prediction)
# Actual value should be 208, predicted as 151.94
```
### Clean up
Delete the ACI instance to stop the compute and any associated billing.
```
%%time
aciWebservice.delete()
```
<a id='nextsteps'></a>
## Next Steps
In this example, you created a series of models inside the notebook using local data, stored them inside an AML experiment, found the best one and deployed it as a live service! From here you can continue to use Azure Machine Learning in this regard to run your own experiments and deploy your own models, or you can expand into further capabilities of AML!
If you have a model that is difficult to process locally, either because the data is remote or the model is large, try the [train-on-remote-vm](../train-on-remote-vm) notebook to learn about submitting remote jobs.
If you want to take advantage of multiple cloud machines to perform large parameter sweeps try the [train-hyperparameter-tune-deploy-with-pytorch](../../training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch
) sample.
If you want to deploy models to a production cluster try the [production-deploy-to-aks](../../deployment/production-deploy-to-aks
) notebook.
# Setup
```
!pip install git+https://github.com/hafidhrendyanto/gpt2-absa.git
```
# Code Sample
```
from gpt2absa.constant import restaurant_aspect_categories, laptop_aspect_categories
from gpt2absa import aspect_polarity_pair
from transformers import TFAutoModelWithLMHead
model = TFAutoModelWithLMHead.from_pretrained("hafidhrendyanto/gpt2-absa")
```
## Restaurant domain
Aspect Based Sentiment Analysis (ABSA) can be divided into several sub-tasks. Two of these are *sentiment polarity classification* and *aspect category detection*. The samples for the restaurant and laptop domains solve both sub-tasks, while the sample for the hotel domain only solves the former (*sentiment polarity classification*).
```
aspect_polarity_pair(model, "Excellent food, although the interior could use some help.", restaurant_aspect_categories)
aspect_polarity_pair(model, "Price is high but the food is good, so I would come back again.", restaurant_aspect_categories)
aspect_polarity_pair(model, "You can order your drinks however you like, and the bartender is very helpfull", restaurant_aspect_categories)
aspect_polarity_pair(model, "I come here at noon", restaurant_aspect_categories)
```
## Laptop domain
You can influence the number of selected aspects by changing the `treshold` argument of the function. Because there are more aspect candidates in the laptop domain, the optimal value for this argument is `0.35`.
```
aspect_polarity_pair(model, "Looks good, well built, and very good speed.", laptop_aspect_categories)
aspect_polarity_pair(model, "Looks good, well built, and very good speed.", laptop_aspect_categories, treshold=0.35)
aspect_polarity_pair(model, "This is the first laptop I've owned, althougth I used several at my previous job.", laptop_aspect_categories, treshold=0.35)
aspect_polarity_pair(model, "Save your money and get an android: works better, more support", laptop_aspect_categories, treshold=0.35)
aspect_polarity_pair(model, "Very fast and reliable, but the design is rather weird", laptop_aspect_categories, treshold=0.35)
```
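Conceptually, the threshold acts as a cut-off on per-aspect scores. The helper below is a hypothetical sketch of that filtering step, not the library's internal code (note that `treshold`, as spelled, is the argument name actually used by `aspect_polarity_pair` above; the aspect scores here are made up):

```python
def filter_aspects(scores, treshold=0.2):
    # keep only aspect categories whose score reaches the threshold
    return [aspect for aspect, score in scores.items() if score >= treshold]

scores = {'laptop#general': 0.9, 'battery#quality': 0.30, 'display#design': 0.05}
print(filter_aspects(scores))                 # ['laptop#general', 'battery#quality']
print(filter_aspects(scores, treshold=0.35))  # ['laptop#general']
```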
## Hotel domain
The model is only fine-tuned on the SemEval 2016 Task 5 restaurant and laptop domains. The samples for the hotel domain below show the solution's cross-domain generalization capabilities **for the sentiment classification task**. You can use this solution on many other domains by defining the aspect categories that you want to evaluate in that domain.
It must be noted that this domain generalization capability only applies to *sentiment polarity classification*. Currently, the *aspect category detection* module for this solution does not have that capability.
```
from gpt2absa import sentiment_polarity_classification
aspect_categories = ['hotel#room', 'hotel#view', 'hotel#staff', 'hotel#price', 'hotel#general']
text = "The room in this hotel is very beautiful, the view is impeccable but the price is so high and the staff can be grumpy sometimes."
sentiment_polarity_classification(model, text, aspect_categories)
text = "This is the best hotel I've been, all around great!"
sentiment_polarity_classification(model, text, aspect_categories)
```
# Regression using Decision Trees
In this notebook, we will use decision trees to solve regression problems.
The dataset used here originates from a project to build a surrogate model for predicting the band gap of a material from its composition. This surrogate model was used to replace expensive quantum mechanical calculations in virtual high-throughput screening of materials for application as photocatalysts. The paper was published in [Chemistry of Materials](https://pubs.acs.org/doi/abs/10.1021/acs.chemmater.9b01519).
Through this practice, we can learn not only the usage of regression trees but, more importantly, how to tune hyperparameters for best performance.
```
# sklearn
from sklearn import metrics
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.ensemble import GradientBoostingRegressor
import sklearn.datasets
# helpers
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
```
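As a preview of the hyperparameter tuning to come, the cell below sketches a `GridSearchCV` over decision-tree depth on a synthetic toy dataset (the data and parameter grid are illustrative placeholders, not the band-gap dataset):

```python
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_regression

# toy regression problem standing in for the real data
X_toy, y_toy = make_regression(n_samples=200, n_features=4, random_state=0)

# exhaustively cross-validate each candidate max_depth
grid = GridSearchCV(DecisionTreeRegressor(random_state=0),
                    param_grid={'max_depth': [2, 4, 8]},
                    cv=3)
grid.fit(X_toy, y_toy)
print(grid.best_params_)
```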
## Google Cloud Storage Boilerplate
The following two cells have some boilerplate to mount the Google Cloud Storage bucket containing the data used for this notebook to your Google Colab file system. To access the data, you need to:
1. Run the first cell;
2. Follow the link when prompted (you may be asked to log in with your Google account);
3. Copy the Google SDK token back into the prompt and press `Enter`;
4. Run the second cell and wait until the data folder appears.
If everything works correctly, a new folder called `sciml-workshop-data` should appear in the file browser on the left. Depending on the network speed, this may take one or two minutes. Ignore the warning "You do not appear to have access to project ...". If you are running the notebook locally or have already connected to the bucket, these cells will have no effect.
```
# variables passed to bash; do not change
project_id = 'sciml-workshop'
bucket_name = 'sciml-workshop'
colab_data_path = '/content/sciml-workshop-data/'
try:
from google.colab import auth
auth.authenticate_user()
google_colab_env = 'true'
data_path = colab_data_path
except:
google_colab_env = 'false'
###################################################
######## specify your local data path here ########
###################################################
data_path = './sciml-workshop-data/'
%%bash -s {google_colab_env} {colab_data_path} {project_id} {bucket_name}
# running locally
if ! $1; then
echo "Running notebook locally."
exit
fi
# already mounted
if [ -d $2 ]; then
echo "Data already mounted."
exit
fi
# mount the bucket
echo "deb http://packages.cloud.google.com/apt gcsfuse-bionic main" > /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt -qq update
apt -qq install gcsfuse
gcloud config set project $3
mkdir $2
gcsfuse --implicit-dirs --limit-bytes-per-sec -1 --limit-ops-per-sec -1 $4 $2
```
---
# The Dataset
Our data are stored in the pickle file `oxides/training_data.pickle`. We load this file into a `pandas.DataFrame` object, an efficient interface to manage column-wise, heterogeneous tabular data.
```
oxides = pd.read_pickle(data_path + 'dt-data/training_data.pickle')
```
We can check all the columns present in the dataframe:
```
list(oxides.columns)
```
To read data from one of the columns, we use the `values` attribute, as demonstrated in the cells below.
### Description of the dataset
In this practical we are attempting to learn a model that can predict the band gap (the energy separation between occupied and unoccupied orbitals) of a material, so we need to set this value as the property to be predicted, $y$. This data is stored in the dataframe column called `gllbsc_gap`, and we set it as $y$ by running the cell below:
```
# read a single column
y = oxides['gllbsc_gap'].values
```
We can then use the other properties in the dataset, or a combination of them, as *features* ($X$) for our model. For example, we could set $X$ to be defined by two features by running the cell below:
```
# read multiple columns and combine them to a matrix
X = oxides[['MagpieData minimum Number', 'MagpieData maximum Number']].values
print(X.shape)
```
## Regression with the dataset
In regression, we attempt to fit a model, $y = f(x)$, where $x$ and $y$ are multi-dimensional data of rank $M$ and $N$, respectively, and $f: \mathbb{R}^M\rightarrow\mathbb{R}^N$ our regression model. In this notebook, $y$ will always be `gllbsc_gap` (so $N=1$), which represents the band gap, and $x$ a combination of the descriptors (all the other columns), each giving the measurement of a certain physical property.
---
# Linear regression: a starter
Linear regression is the simplest regression algorithm in machine learning. Many people do not even regard it as a machine learning algorithm because it is explicitly programmed. Still, it serves as a good start to learn some basic concepts.
## Univariate regression
In univariate linear regression we have the equation:
$y = mx + c$
and we are attempting to find the best values for $m$ and $c$
In a univariate regression, the input rank $M=1$. For instance, let us try `MagpieData avg_dev Electronegativity` as $x$:
```
# read X
X = oxides['MagpieData avg_dev Electronegativity'].values
# we need to append a dummy dimension to X for univariate regression
# to keep the input dimensions consistent with multivariate regression
X = X.reshape(-1, 1)
# read y
y = oxides['gllbsc_gap'].values
```
Now we can use linear regression to fit the data and make predictions:
```
# fit linear regression model
model = LinearRegression().fit(X, y)
# make predictions
y_pred = model.predict(X)
```
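For the univariate case, the coefficients found by `LinearRegression` have a simple closed form: $m = \mathrm{cov}(x, y)/\mathrm{var}(x)$ and $c = \bar{y} - m\bar{x}$. A minimal sketch on synthetic data (the arrays `x` and `y_syn` here are illustrative, not the oxides dataset):

```python
import numpy as np

# synthetic data generated from y = 2x + 1 with a little noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y_syn = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=200)

# closed-form least-squares estimates of slope and intercept
m = np.cov(x, y_syn, bias=True)[0, 1] / np.var(x)
c = y_syn.mean() - m * x.mean()
print(m, c)  # close to the true values 2 and 1
```

This is the same estimate that `LinearRegression().fit(x.reshape(-1, 1), y_syn)` would return in `coef_[0]` and `intercept_`.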
When we have fitted the model we now want to use some *metrics* to *evaluate* the model performance. Remember the mean squared error and mean absolute error from your lectures. We will now calculate them for the model:
```
# compute some fitting error
print('MSE = %f eV' % metrics.mean_squared_error(y, y_pred))
print('MAE = %f eV' % metrics.mean_absolute_error(y, y_pred))
```
We can also plot the predicted versus the real values to get a visual feel for how well the fitting worked.
```
plt.figure(dpi=100)
plt.scatter(y, y_pred)
plt.xlabel('Eg True (eV)')
plt.ylabel('Eg Predicted (eV)')
plt.show()
```
## Exercise
By changing the feature used for $X$ above, try a number of different features. How does the choice of feature affect the quality of the fit? Report each feature and its MAE and MSE scores in the table below. *Note*: to edit the contents of this cell, simply double-click on it.
| Feature | MAE (eV) | MSE (eV) |
|---------|----------|----------|
| | | |
| | | |
| | | |
| | | |
## Multivariate regression
In a multivariate regression, the input rank $M>1$, so we will choose a few descriptors to form $x$. Here we choose three descriptors ($M=3$):
```
# read X
X = oxides[['MagpieData avg_dev CovalentRadius',
'MagpieData avg_dev Electronegativity',
'MagpieData maximum NsValence']].values
```
And the rest is the same as univariate regression:
```
# fit linear regression model
model = LinearRegression().fit(X, y)
# make predictions
y_pred = model.predict(X)
# compute some fitting error
print('MSE = %f' % metrics.mean_squared_error(y, y_pred))
print('MAE = %f' % metrics.mean_absolute_error(y, y_pred))
# plot the original and predicted data against each other
plt.figure(dpi=100)
plt.scatter(y, y_pred)
plt.xlabel('Eg True (eV)')
plt.ylabel('Eg Predicted (eV)')
plt.show()
```
## Exercise
By changing the features used for $X$ above, try a number of different feature combinations. How does the choice affect the quality of the fit? Report each combination and its MAE and MSE scores in the table below. *Note*: to edit the contents of this cell, simply double-click on it.
| Feature | MAE (eV) | MSE (eV) |
|---------|----------|----------|
| | | |
| | | |
| | | |
| | | |
---
# Gradient Boosting Regression
Gradient boosting is a method for building an ensemble of weak learners to constitute a single strong learner. We build a series of decision trees, each subsequent tree taking in information about the residuals (errors) from the previous trees. In principle, the fitting should improve each time a new tree is added.
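The residual-fitting idea can be sketched from scratch with shallow `sklearn` trees. This is a toy illustration, not the library's implementation: it uses squared-error loss (whose negative gradient is simply the residual) rather than the 'lad' loss used below, and synthetic data rather than the oxides set:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# toy regression problem
rng = np.random.default_rng(0)
X_toy = rng.uniform(-3, 3, size=(300, 1))
y_toy = np.sin(X_toy[:, 0]) + rng.normal(0, 0.1, size=300)

learning_rate = 0.1
trees = []
pred = np.full_like(y_toy, y_toy.mean())  # start from the mean prediction
for _ in range(100):
    residuals = y_toy - pred                      # errors of the current ensemble
    tree = DecisionTreeRegressor(max_depth=2).fit(X_toy, residuals)
    pred += learning_rate * tree.predict(X_toy)   # add a shrunken correction
    trees.append(tree)

print(np.mean((y_toy - pred) ** 2))  # far below the variance of y_toy
```

Each tree corrects what the ensemble so far gets wrong; the learning rate shrinks each correction, which is why lowering it requires more trees.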
## 1. Create the regressor
In `sklearn`, a gradient boosting regressor is created by
```python
GradientBoostingRegressor(loss=<str>, max_depth=<int>, learning_rate=<float>,
min_samples_split=<int>, min_samples_leaf=<int>,
max_features=<int>, subsample=<float>, n_estimators=<int>)
```
The hyperparameters we need to set include:
* `loss`: the loss function to be minimised. We will use 'lad' (least absolute deviation), which corresponds to MAE; note that newer versions of `sklearn` name this loss 'absolute_error'.
* `max_depth`: the maximum depth limits the number of nodes in the trees; its best value depends on the interaction of the input variables; we will start with 10 and can tune it later.
* `learning_rate`: learning rate shrinks the contribution of each tree; there is a trade-off between learning rate and boosting steps; we will start with 0.015 and can tune it later.
* `min_samples_split`: the minimum number of samples required to split an internal node; we will start with 50 and can tune it later.
* `min_samples_leaf`: the minimum number of samples required to be at a leaf node; we set this as 1.
* `max_features`: the number of features to consider when looking for the best split; we will use the number of features in the data.
* `subsample`: the fraction of samples to be used for fitting the individual trees; if smaller than 1.0, this results in Stochastic Gradient Boosting. We will start with 0.9 and can tune it later.
* `n_estimators`: the number of boosting steps or decision trees; we will start with 300 and can tune it later.
**NOTE**: Simply adding more trees can lead to overfitting. Gradient boosting is quite robust against overfitting, but we will have to look out for this.
```
# create the regressor
gbr = GradientBoostingRegressor(loss='lad', max_depth=10, learning_rate=0.015,
min_samples_split=50, min_samples_leaf=1,
max_features=len(oxides.columns)-1, subsample=0.9,
n_estimators=300)
```
## 2. Fit the regressor
Here we combine all the descriptors to form $x$ and fit the model:
```
# combine all the columns into X
cols = [a for a in list(oxides.columns) if a not in ['gllbsc_gap']]
X = oxides[cols].values
print('Shape of X: %s' % str(X.shape))
# fit the model
gbr.fit(X, y)
```
After fitting the model, we can make predictions and plot them against the original data. The fit has shown a significant improvement over linear regression.
```
# make predictions
y_pred = gbr.predict(X)
# plot the original and predicted data against each other
plt.figure(dpi=100)
plt.scatter(y, y_pred)
plt.show()
```
## 3. Cross validation
Cross-validation (CV) allows us to evaluate the out-of-sample goodness of fit of the regressor without setting aside a separate validation set. In the basic approach, known as k-fold CV, the training set is split into $k$ subsets, each serving in turn as the validation set for a model trained on the other $k-1$ subsets. This approach can be computationally expensive but does not waste data (as happens when fixing an arbitrary validation set), which is a major advantage for problems with limited data. Note that since the score reported below is a mean absolute error, a lower CV score means a better fit.
In the following cell, we compute the scores using 5 folds (so 20% of data for each validation) and the negative MAE as the metric:
```
# compute cross validation score
scores = cross_val_score(gbr, X, y, cv=5, scoring='neg_mean_absolute_error')
print('Cross validation score: {}'.format(-1 * np.mean(scores)))
```
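Internally, `cross_val_score` does something like the following (a simplified sketch without shuffling, run on synthetic stand-in data so it is self-contained):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# stand-in data for the sketch
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(100, 3))
y_demo = X_demo @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, size=100)

k = 5
folds = np.array_split(np.arange(len(y_demo)), k)  # k disjoint index blocks
scores = []
for i in range(k):
    val_idx = folds[i]                                     # fold i is held out
    train_idx = np.concatenate(folds[:i] + folds[i + 1:])  # the rest trains
    model = LinearRegression().fit(X_demo[train_idx], y_demo[train_idx])
    scores.append(mean_absolute_error(y_demo[val_idx],
                                      model.predict(X_demo[val_idx])))
print(np.mean(scores))  # the cross-validation score (here, mean MAE)
```

Every sample is used for validation exactly once and for training $k-1$ times, which is why no data is wasted.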
## 4. Boosting rate and overfitting
Let us split the dataset 80:20 into training and test sets. Re-fit the model using the training set only. We can then use some built-in methods of `GradientBoostingRegressor` to get training and test scores at each iteration of boosting. This way, we can check if we have insufficient boosting layers or perhaps we have too many and thus suffer overfitting.
```
# split the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# fit with training set
gbr.fit(X_train, y_train)
# compute test score at each boosting step
test_score = np.zeros((300,), dtype=np.float64)
for i, y_pred in enumerate(gbr.staged_predict(X_test)):
test_score[i] = gbr.loss_(y_test, y_pred)
# plot the scores
plt.figure(dpi=100)
plt.plot(gbr.train_score_, label='Loss on training set')
plt.plot(test_score, label='Loss on test set')
plt.legend()
plt.show()
```
Notice that the losses on both the training and test sets are still decreasing at 300 steps. We can try increasing the boosting steps to 500 and see if we still get improvements. Once the test loss stops decreasing, we are probably at a good place to stop extending the model.
```
# create the regressor with more boosting steps
gbr500 = GradientBoostingRegressor(loss='lad', max_depth=10, learning_rate=0.015,
min_samples_split=50, min_samples_leaf=1,
max_features=len(oxides.columns)-1, subsample=0.9,
n_estimators=500)
# fit with training set
gbr500.fit(X_train, y_train)
# compute test score at each boosting step
test_score = np.zeros((500,), dtype=np.float64)
for i, y_pred in enumerate(gbr500.staged_predict(X_test)):
test_score[i] = gbr500.loss_(y_test, y_pred)
# plot the scores
plt.figure(dpi=100)
plt.plot(gbr500.train_score_, label='Loss on training set')
plt.plot(test_score, label='Loss on test set')
plt.legend()
plt.show()
```
Again, do a 5-fold cross validation at this point. How does the score compare to the earlier one?
```
# compute cross validation score
scores = cross_val_score(gbr500, X, y, cv=5, scoring='neg_mean_absolute_error')
print('Cross validation score: {}'.format(-1 * np.mean(scores)))
```
## 5. Systematic hyperparameter tuning
Hand tuning a large number of hyperparameters is laborious. Luckily, `sklearn` provides a function [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) to automate searches in the hyperparameter space. Even so, a grid search over all of the hyperparameters at once would again lead to a combinatorial explosion. A general strategy for tuning hyperparameters in gradient boosted trees has been suggested [here](https://www.analyticsvidhya.com/blog/2016/02/complete-guide-parameter-tuning-gradient-boosting-gbm-python/).
1. Choose a relatively high learning rate. Generally the default value of 0.1 works, but anywhere between 0.05 and 0.2 should work for different problems.
2. Determine the optimum number of trees for this learning rate. This should be somewhere around 40 to 90. Remember to choose a value for which your system trains fairly fast, because this model will be refit many times while testing scenarios and tuning the tree parameters.
3. Tune tree-specific parameters for decided learning rate and number of trees.
4. Lower the learning rate and increase the estimators proportionally to get more robust models.
We will follow the above process to tune our regressor.
### Step 1 & 2: Optimise `n_estimators` with `learning_rate=0.1`
```
# candidates
param_test_n_est = {'n_estimators': range(40, 90, 10)}
# create the regressor
gbr_n_est = GradientBoostingRegressor(loss='lad', learning_rate=0.1,
max_features=len(cols), max_depth=10,
min_samples_split=50, subsample=0.9,
random_state=0)
# define hyperparameter search
gsearch = GridSearchCV(estimator= gbr_n_est, param_grid = param_test_n_est,
scoring='neg_median_absolute_error', cv=5)
# perform search
gsearch.fit(X, y)
# print best n_estimators
gsearch.best_params_
```
### Step 3: Optimise tree parameters with best `n_estimators`
Here we consider `max_depth` and `min_samples_split`:
```
# candidates
param_test_tree = {'max_depth': range(5, 16, 2),
'min_samples_split': range(10, 100, 20)}
# create the regressor
gbr_tree = GradientBoostingRegressor(loss='lad', learning_rate=0.1,
max_features=len(cols), subsample=0.9,
n_estimators=70, random_state=0)
# define hyperparameter search
gsearch = GridSearchCV(estimator= gbr_tree, param_grid = param_test_tree,
scoring='neg_median_absolute_error', cv=5)
# perform search
gsearch.fit(X, y)
# print best max_depth and min_samples_split
gsearch.best_params_
```
### Step 4: Lower `learning_rate` and increase `n_estimators`
Here we use a factor of 5, so `learning_rate` is lowered to 0.02 and `n_estimators` increased to 350:
```
# create the "optimised" regressor
gbr_opt = GradientBoostingRegressor(loss='lad', learning_rate=0.02,
max_features=len(cols), max_depth=7,
min_samples_split=10, subsample=0.9,
n_estimators=350, random_state=0)
# fit the model
gbr_opt.fit(X, y)
```
Eventually, we can use our "optimised" model to make predictions and compute CV scores:
```
# make predictions
y_pred = gbr_opt.predict(X)
# plot the original and predicted data against each other
plt.figure(dpi=100)
plt.scatter(y, y_pred)
plt.show()
# compute cross validation score
scores = cross_val_score(gbr_opt, X, y, cv=5, scoring='neg_mean_absolute_error')
print('Cross validation score: {}'.format(-1 * np.mean(scores)))
```
**Yes, our efforts pay off**, as shown by the figure and the CV score!
---
## Exercises
Similar to [01_classification_decision_tree.ipynb](01_classification_decision_tree.ipynb), use regression trees to fit one or more of the standard "toy" datasets bundled with `sklearn`, such as `boston-house-prices` and `diabetes`. These datasets are less challenging than our example.
```
# load the Boston house prices dataset
# (note: load_boston was removed in scikit-learn >= 1.2; on newer versions
#  use e.g. sklearn.datasets.fetch_california_housing instead)
boston = sklearn.datasets.load_boston()
print(boston['DESCR'])
```
# Exercise 13. Nonparametric tests, goodness-of-fit tests
## Michal Béreš, Martina Litschmannová, Adéla Vrtková
# Testing whether a discrete random variable (with a finite number of values) follows a given distribution - the goodness-of-fit test
- we test whether the measured data (their relative frequencies) agree with a specific distribution (i.e., with its probabilities)
- we test using the $\chi^2$ goodness-of-fit test
- assumptions of the test (NOTE: these concern the expected frequencies - i.e., the frequencies we would observe if the measured data followed the hypothesized distribution exactly):
    - all expected frequencies ≥ 2,
    - at least 80% of the expected frequencies > 5
- the test statistic (the one with the $\chi^2$ distribution) is $G = \sum_{i = 1}^k (O_i - E_i)^2 / E_i$
- the distribution has $df = k - 1 - h$ degrees of freedom, where
    - $k$ is the number of categories
    - $h$ is the number of estimated parameters (this applies to incompletely specified tests)
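As a worked illustration, take the die-roll data from Example 1 below: observed counts 979, 1002, 1015, 980, 1040, 984 out of 6,000 rolls, so each expected frequency is $E_i = 1000$. Then

$$G = \frac{(979-1000)^2 + (1002-1000)^2 + (1015-1000)^2 + (980-1000)^2 + (1040-1000)^2 + (984-1000)^2}{1000} = 2.926,$$

which is compared with the $\chi^2$ distribution with $df = 6 - 1 - 0 = 5$ degrees of freedom (the test is fully specified, so $h = 0$).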
```
# example see example 1
```
## Goodness-of-fit test for a continuous random variable (or a discrete one with infinitely many values)
- we must first reduce the data to a table with a finite number of values
- for a discrete variable (e.g., Poisson) we group the tail categories, e.g., 4, 5, 6, ... into "4 and more"
- for a continuous variable we construct a series of intervals and count how many values fall within each interval
    - e.g.: $(-\infty, 3)$, $[3, 4)$, ..., $[10, \infty)$
- then for each interval we compute what fraction of the data it should contain under the hypothesized distribution, obtaining a table of expected probabilities
- from there we continue as before
- to test normality of the distribution there is also the pearson.test(data) function from the nortest package
```
# example 2,3,4
```
# Contingency tables
- tables containing data classified by two factors
- one of the factors is, as a rule, an independent variable, for which we examine whether it affects the other (dependent) variable
- the independent variable is usually in the rows
- the dependent variable is usually in the columns
- beware: the test examines association, not causality! Causality can only be assessed by "expert" judgment
- statistical conclusion: there is a statistically significant association between the independent and the dependent variable (correlation)
- expert assessment: the independent variable statistically significantly affects the dependent variable (causality)
## Visualizing a contingency table
- visualization e.g. using the barplot function
- pay attention to which factor is in the rows and which in the columns; we always want the groups of columns to be split by the independent variable (one group of columns per value of the independent variable)
- beside=T determines whether adjacent columns are drawn side by side or stacked into one column
- the preferred visualization is mosaicplot
- as with barplot, the groups of columns must correspond to the independent variable
```
# examples in Examples 5,6,7
```
## Measures of association for a contingency table
- correlation coefficient CC
- corrected correlation coefficient CCcor
- Cramér's coefficient V
    - we will use this last one
    - the cramersV(cont.tab) function from the lsr package
```
# Examples in Examples 5,6,7
```
## Test of independence in a contingency table
- $H_0:$ there is no association between the independent (e.g., being a smoker) and the dependent (e.g., suffering from a disease) variable
- $H_A: \neg H_0$
- the chisq.test(cont.tab) function
- assumptions: all expected frequencies ≥ 2, at least 80% of the expected frequencies > 5
- the expected frequencies can be obtained from chisq.test(cont.tab)$expected
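Under the null hypothesis of independence, the expected frequency in row $i$ and column $j$ is computed from the marginal sums: with row totals $n_{i\cdot}$, column totals $n_{\cdot j}$ and grand total $n$,

$$E_{ij} = \frac{n_{i\cdot}\, n_{\cdot j}}{n},$$

which is exactly what chisq.test(cont.tab)$expected returns.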
# Association tables
- a special case of the contingency table
- always has exactly 2 options for the dependent and exactly 2 options for the independent variable
## Mandatory form of the association table
- the rows denote the options of the independent variable
    - the first row is the so-called exposed part of the population (the one exposed to the phenomenon we study - e.g., smokers when studying the effects of smoking)
    - the second row is the unexposed part of the population
- the columns denote the options of the dependent variable
    - the first column denotes the occurrence of the investigated phenomenon (e.g., occurrence of the disease, a product defect, ...)
    - the second column denotes the remainder - no occurrence of the investigated phenomenon
## Relative risk and odds ratio
- relative risk and odds ratio provide the same information, only in a different format
- all point and interval estimates are calculated by the epi.2by2(associ.tab) function from the epiR package
- the function takes the association table as input, which must be in the mandatory form above!
### Relative risk
- denoted $RR$
- the ratio of risks (probabilities of occurrence of the investigated phenomenon) in the exposed and unexposed populations
- if it equals 1, the probability of occurrence is the same in the exposed and unexposed populations
- if it is greater than 1, the exposed population has a higher probability of occurrence
- if it is less than 1, the exposed population has a lower probability of occurrence
- the point estimate $\hat{RR}$ is calculated as the ratio of the relative frequencies of the studied phenomenon in the exposed and unexposed populations
- epi.2by2 provides interval estimates
- if the interval estimate does not contain the value 1, there is a statistically significant association between the dependent and independent variables
### Odds ratio
- denoted $OR$
- the ratio of the odds (chances of occurrence of the studied phenomenon) in the exposed and unexposed populations
- if it equals 1, the odds of occurrence are the same in the exposed and unexposed populations
- if it is greater than 1, the exposed population has higher odds of occurrence
- if it is less than 1, the exposed population has lower odds of occurrence
- the point estimate $\hat{OR}$ is calculated as the ratio of the (sample) odds of the studied phenomenon in the exposed and unexposed populations
- epi.2by2 provides interval estimates
- if the interval estimate does not contain the value 1, there is a statistically significant association between the dependent and independent variables
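For an association table in the mandatory form above, with exposed counts $(a, b)$ and unexposed counts $(c, d)$ in its two rows, the point estimates are

$$\hat{RR} = \frac{a/(a+b)}{c/(c+d)}, \qquad \hat{OR} = \frac{a/b}{c/d}.$$

With the smoking data from Example 7 below ($a=171$, $b=3264$, $c=117$, $d=4320$), this gives $\hat{RR} \approx 1.89$ and $\hat{OR} \approx 1.93$, matching the epi.2by2 output discussed there.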
```
# example in example 7
```
# Examples (goodness-of-fit tests)
## Example 1.
A die was rolled 6,000 times and the number of pips rolled each time was recorded. [image.png](attachment:64f1169e-6bc1-470a-8afb-b282230c2c9f.png)
```
# H0: The cube is fair.(so all probabilities are 1/6)
# Ha: The cube is not fair.(H0 negation)
x = c(1,2,3,4,5,6)
n.obs = c(979,1002,1015,980,1040,984)
p.exp = c(1/6,1/6,1/6,1/6,1/6,1/6)
n.exp = 6000*p.exp
n.exp # test assumptions must be checked
# All expected frequencies are greater than 5.
x.obs = sum(((n.obs-n.exp)^2)/n.exp)
x.obs
p.hodnota = 1 - pchisq(x.obs,5)
p.hodnota
# At the significance level of 0.05 we do not reject H0 (p-value=0.711,
# Chi-square goodness-of-fit test, df=5).
```
## Example 2.
The manufacturing company estimates the number of failures of a particular device in 100 hours using a Poisson distribution with parameter 1.2. During an inspection, employees recorded the actual number of failures in a total of 150 100-hour intervals (the results are shown in the table). Verify with a pure significance test that the number of failures of the device within 100 hours actually has a Poisson distribution with parameter λt=1.2.
```
# Fully specified test
# H0: The number of faults during 100 operating hours can be modeled
# Poisson distribution with parameter 1.2.
# Ha: The number of faults during 100 operating hours cannot be modeled
# Poisson distribution with parameter 1.2.
x = c(0,1,2,3,4)
n.obs = c(52,48,36,10,4)
p.exp = dpois(x,1.2)
p.exp[5] = 1 - sum(p.exp[1:4])
p.exp
sum(p.exp)
n.exp = 150*p.exp
n.exp # test assumptions must be checked
# 4 of the 5 expected frequencies, ie 80%, are greater than 5.
x.obs = sum(((n.obs-n.exp)^2)/n.exp)
x.obs
p.hodnota = 1-pchisq(x.obs,4)
p.hodnota
# At the significance level of 0.05 we do not reject H0 (p-value=0.590,
# Chi-square goodness-of-fit test, df=4).
```
## Example 3.
During an inspection, employees recorded the number of failures in a total of 150 100-hour intervals (the results are shown in the table). Use a pure significance test to decide whether the number of failures of the device within 100 hours follows a Poisson distribution.<br> </br>

```
# Incompletely specified test
# H0: The number of faults during 100 operating hours can be modeled
# Poisson distribution.
# Ha: The number of faults during 100 operating hours cannot be modeled
# Poisson distribution.
x = c(0,1,2,3,4)
n.obs = c(52,48,36,10,4)
lambda.t = weighted.mean(x,n.obs) # Poisson distribution parameter estimate
lambda.t
p.exp = dpois(x,lambda.t)
p.exp[5] = 1 - sum(p.exp[1:4])
p.exp
sum(p.exp)
n.exp = 150*p.exp
n.exp # test assumptions must be checked
# 4 out of 5 expected frequencies, ie 80%, are greater than 5.
x.obs = sum(((n.obs-n.exp)^2)/n.exp)
x.obs
p.hodnota = 1-pchisq(x.obs,3)
p.hodnota
# At the significance level of 0.05 we do not reject H0 (p-value=0.491,
# Chi-square goodness-of-fit test, df=3).
```
## Example 4.
Time intervals (in seconds) between the passages of individual vehicles were measured on a motorway over a few minutes. The measured values are recorded in the file dalnice.xlsx. Verify that these are data from a normal distribution (use a goodness-of-fit test).
```
# automatic goodness-of-fit test for continuous data
dalnice = readxl::read_excel("data/neparametricke_hypotezy.xlsx", sheet=2)
colnames(dalnice)="hodnoty"
head(dalnice)
mu = mean(dalnice$hodnoty)
sigma = sd(dalnice$hodnoty)
mu
sigma
# Generate values for the x-axis
xfit=seq(from = -20, to = 30, length = 100)
# Generate values for the y-axis
yfit=dnorm(xfit, mean = mu, sd = sigma)
hist(dalnice$hodnoty, freq = FALSE, xlim = c(-20, 30))
# Add a curve to the last graph based on the values generated above
lines(xfit, yfit, col="black", lwd=2)
# install.packages("nortest")
# H0: Spacing between vehicles can be modeled by normal distribution.
# Ha: Spacing between vehicles cannot be modeled by normal distribution.
nortest::pearson.test(dalnice$hodnoty)
# Specify the number of classes
pom = nortest::pearson.test(dalnice$hodnoty, n.classes = 5)
attributes(pom)
pom$method
pom$n.classes
pom$df
pom$p.value
# H0 can be rejected at the significance level of 0.05 (p-value << 0.001,
# Chi-square goodness-of-fit test, df=12).
# test what you already know
shapiro.test(dalnice$hodnoty)
```
# Examples of PivotTable and Association Tables
## Example 5.
Decide, based on the data file experimentovani-s-telem.xls (Dudová, J. - Experimentování s tělem (survey results), 2013. Available online at http://experimentovani-stelem.vyplnto.cz), whether there is an association between the respondents' gender and whether they have a tattoo. Use Cramér's V to assess the strength of the association.
```
tet = readxl::read_excel("data/neparametricke_hypotezy.xlsx", sheet=3)
head(tet)
tet = tet[,c(6,10)]
colnames(tet) = c("pohlavi","tetovani")
head(tet)
# Preprocessing
# The levels of the categorical variables (factors) must be ordered and named
# as they should be ordered and named in the contingency table
kont.tab = table(tet$pohlavi, tet$tetovani)
kont.tab
colnames(kont.tab) = c("má tetování","nemá tetování")
kont.tab
# Exploratory analysis
prop.table(kont.tab) # joint relative frequencies
prop.table(kont.tab,1) # row relative frequencies
prop.table(kont.tab,2) # column relative frequencies
# Visualization in standard R
# Cluster bar graph
# compare graphs, which of the graphs is more suitable for the presentation of the data
options(repr.plot.width = 12) # width of graphs in Jupyter
par(mfrow = c(1, 2)) # matrix of 1x2 graphs
barplot(t(kont.tab),
legend = colnames(kont.tab),
beside = T)
barplot(kont.tab,
legend = rownames(kont.tab),
beside = T)
# Stacked bar graph
options(repr.plot.width = 12) # width of graphs in Jupyter
par(mfrow = c(1, 2)) # matrix of graphs 1x2
barplot(t(kont.tab),
legend = colnames(kont.tab))
barplot(kont.tab,
legend = rownames(kont.tab))
# Mosaic chart
options(repr.plot.width = 8) # width of graphs in Jupyter
mosaicplot(t(kont.tab),
las = 1, # Rotate y-axis labels
color = gray.colors(2))
# compare which of the graphs is more suitable for the presentation of the given data
mosaicplot(kont.tab,
las = 1,
color = gray.colors(2))
# install.packages("lsr")
# Cramer V calculation ####
lsr::cramersV(kont.tab)
# PivotTable independence test
# H0: The data are independent -> whether the individual is male or female
# does not affect the probability of having a tattoo
# HA: negation of H0 (there is an association)
pom = chisq.test(kont.tab)
attributes(pom)
pom$expected # Necessary for verification of assumptions
# All expected frequencies are greater than 5.
pom
# H0 can be rejected at the significance level of 0.05 (p-value=0.003,
# Chi-square test of independence, df=1).
# The observed association can be assessed as weak (Cramér's V=0.121).
```
## Example 6.
For a differentiated approach in personnel policy, the company's management needs to know whether job satisfaction depends on whether the plant is in Prague or outside Prague. The results of the survey are in the following table. Display the data using a mosaic chart and, based on the test of independence in the contingency table, decide whether job satisfaction depends on the company's location. Use Cramér's V to assess the strength of the association.<br> </br>

```
# We do not have a data matrix, i.e., we must enter the contingency table "manually"
kont.tab = matrix(c(10,25,50,15,20,10,130,40),
nrow=2,byrow=T)
rownames(kont.tab) = c("Praha","Venkov")
colnames(kont.tab) = c("velmi nespokojen","spíše nespokojen",
"spíše spokojen","velmi spokojen")
kont.tab = as.table(kont.tab)
kont.tab
# Exploratory Analysis ####
prop.table(kont.tab) # joint relative frequencies
prop.table(kont.tab,1) # row relative frequencies
prop.table(kont.tab,2) # column relative frequencies
# Visualization in standard R
# Mosaic chart
mosaicplot(kont.tab,
las = 1, # rotate y-axis labels by 90 degrees
color = gray.colors(4))
# Cramer V
lsr::cramersV(kont.tab)
# H0: There is no connection between job satisfaction and company location.
# Ha: There is a connection between job satisfaction and the location of the company.
# Chi-square PivotTable Independence Test ####
pom = chisq.test(kont.tab)
pom$expected
# All expected frequencies are greater than 5.
pom
# H0 can be rejected at the significance level of 0.05 (p-value << 0.001,
# Chi-square test of independence, df=3).
# The observed association can be assessed as moderate (Cramér's V=0.296)
```
## Example 7.(Association table)
Between 1965 and 1968, a cohort study of cardiovascular disease under the Honolulu Heart Program began monitoring 8,006 men, of whom 7,872 had no history of stroke at the start of the study. Of this number, 3,435 were smokers and 4,437 were non-smokers. During 12 years of follow-up, 171 men in the smoker group and 117 men in the non-smoker group suffered a stroke.
#### a)
Record the results in the association table.
```
kont.tab = matrix(c(171,3264,117,4320),nrow=2,byrow=T)
rownames(kont.tab) = c("kuřák","nekuřák")
colnames(kont.tab) = c("ano","ne")
kont.tab
# completion of the table of absolute frequencies
kont.tab.full = matrix(rep(0,9), nrow=3, ncol=3)
rownames(kont.tab.full) = c("kuřák", "nekuřák", "sum")
colnames(kont.tab.full) = c("ano", "ne", "sum")
kont.tab.full[1:2, 1:2] = kont.tab
kont.tab.full[1:2, 3] = rowSums(kont.tab)
kont.tab.full[3, 1:2] = colSums(kont.tab)
kont.tab.full[3, 3] = sum(kont.tab)
kont.tab.full
# addition of the table of relative frequencies
kont.tab.rel = matrix(rep(0,9), nrow=3, ncol=3)
rownames(kont.tab.rel) = c("kuřák", "nekuřák", "sum")
colnames(kont.tab.rel) = c("ano", "ne", "sum")
kont.tab.rel[1:2, 1:2] = prop.table(kont.tab)
kont.tab.rel[1:2, 3] = rowSums(kont.tab.rel[1:2, 1:2])
kont.tab.rel[3, 1:2] = colSums(kont.tab.rel[1:2, 1:2])
kont.tab.rel[3, 3] = sum(kont.tab.rel[1:2, 1:2])
kont.tab.rel
```
#### b)
Based on visual assessment, estimate the effect of smoking on the incidence of cardiovascular disease.
```
# Visualization by mosaic graph in basic R
mosaicplot(kont.tab,
color = gray.colors(2))
# Cramer V calculation ####
lsr::cramersV(kont.tab)
# According to the mosaic plot and Cramér's V (0.061), the association between
# smoking and the occurrence of apoplexy can be assessed as very weak.
```
#### c)
Determine the absolute risk of cardiovascular disease in smokers and non-smokers.
```
# risk=probability
kont.tab.full
# Smokers
# Assumptions check
p = 171/3435
p
9/(p*(1-p))
# OK(3,435>190.3)
# Point and 95% interval estimate
prop.test(x = 171, n = 3435)
# A smoker's risk of stroke is about 5.0%; the 95% interval estimate of this
# risk is 4.2% to 5.8% (prop.test gives a Wilson score interval; use
# binom.test(171, 3435) for the exact Clopper-Pearson interval).
# Non-smokers
# Assumption check
p = 117/4437
p
9/p/(1-p)
# OK(4,437>350.6)
# Point and 95% interval estimate
prop.test(117, 4437)
# A non-smoker's risk of stroke is about 2.6%; the 95% interval estimate of
# this risk is 2.1% to 3.2% (prop.test gives a Wilson score interval; use
# binom.test(117, 4437) for the exact Clopper-Pearson interval).
```
#### d)
Determine the relative risk (including its 95% interval estimate) of cardiovascular disease in smokers relative to non-smokers. Explain the practical significance of the results obtained.
```
# install.packages("epiR")
kont.tab
epiR::epi.2by2(kont.tab)
# Smokers have about a 1.89 times higher risk of stroke than non-smokers;
# the 95% interval estimate of this relative risk is 1.50 to 2.38.
# Since this interval excludes 1, smokers have a statistically significantly
# higher risk of stroke than non-smokers at the 0.05 significance level.
```
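The relative risk and its large-sample confidence interval can also be cross-checked by hand. A sketch in Python (outside of R, for illustration), using the counts from the association table above and the standard log-method approximation with the 1.96 normal quantile:

```python
import numpy as np

a, b = 171, 3264   # smokers: stroke / no stroke
c, d = 117, 4320   # non-smokers: stroke / no stroke
rr = (a / (a + b)) / (c / (c + d))
# large-sample standard error of log(RR)
se_log = np.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
ci = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log)
print(round(rr, 2), np.round(ci, 2))   # ≈ 1.89, [1.50, 2.38]
```

This reproduces the epi.2by2 figures quoted above up to rounding.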
#### e)
Determine the absolute odds of cardiovascular disease in smokers and non-smokers.
```
# For a smoker, the odds of stroke are about 52:1,000; among 1,052 smokers,
# about 52 occurrences of stroke can be expected.
# For a non-smoker, the odds of stroke are about 27:1,000; among 1,027
# non-smokers, about 27 occurrences of stroke can be expected.
```
#### f)
Determine the relative odds (odds ratio) of cardiovascular disease for smokers versus non-smokers.
```
# Smokers have about 1.93 (= 0.0524/0.0271) times higher odds of stroke than
# non-smokers; the 95% interval estimate of this odds ratio is 1.52 to 2.46.
# Since the interval excludes 1, smokers have statistically significantly
# higher odds of stroke than non-smokers at the 0.05 significance level.
```
#### g)
Decide at a significance level of 0.05 on the dependence of the incidence of cardiovascular disease on smoking.
```
# Note: the epi.2by2 command does not output the expected frequencies for the
# chi-square test of independence, so the test assumptions cannot be verified
# from its output alone.
# H0: There is no association between smoking and the occurrence of stroke.
# Ha: There is an association between smoking and the occurrence of stroke.
pom = chisq.test(kont.tab)
pom$expected
# All expected frequencies are greater than 5.
pom
lsr::cramersV(kont.tab)
# At the 0.05 significance level, H0 can be rejected (p-value << 0.001,
# chi-square test of independence, df = 1).
# The observed dependence can be assessed as very weak (Cramer's V = 0.061).
```
| github_jupyter |
```
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
import tensorflow_datasets as tfds
print("TensorFlow version:", tf.__version__)
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
(x_train_scale, y_train_scale), (x_test_scale, y_test_scale) = tfds.as_numpy(tfds.load(
'mnist_corrupted/scale',
split=['train', 'test'],
batch_size=-1,
as_supervised=True,
))
x_train_scale, x_test_scale = x_train_scale / 255.0, x_test_scale / 255.0
x_train_scale = np.reshape(x_train_scale, (60000, 28, 28))
x_test_scale = np.reshape(x_test_scale, (10000, 28, 28))
(x_train_shear, y_train_shear), (x_test_shear, y_test_shear) = tfds.as_numpy(tfds.load(
'mnist_corrupted/shear',
split=['train', 'test'],
batch_size=-1,
as_supervised=True,
))
x_train_shear, x_test_shear = x_train_shear / 255.0, x_test_shear / 255.0
x_train_shear = np.reshape(x_train_shear, (60000, 28, 28))
x_test_shear = np.reshape(x_test_shear, (10000, 28, 28))
(x_train_translate, y_train_translate), (x_test_translate, y_test_translate) = tfds.as_numpy(tfds.load(
'mnist_corrupted/translate',
split=['train', 'test'],
batch_size=-1,
as_supervised=True,
))
x_train_translate, x_test_translate = x_train_translate / 255.0, x_test_translate / 255.0
x_train_translate = np.reshape(x_train_translate, (60000, 28, 28))
x_test_translate = np.reshape(x_test_translate, (10000, 28, 28))
(x_train_motion_blur, y_train_motion_blur), (x_test_motion_blur, y_test_motion_blur) = tfds.as_numpy(tfds.load(
'mnist_corrupted/motion_blur',
split=['train', 'test'],
batch_size=-1,
as_supervised=True,
))
x_train_motion_blur, x_test_motion_blur = x_train_motion_blur / 255.0, x_test_motion_blur / 255.0
x_train_motion_blur = np.reshape(x_train_motion_blur, (60000, 28, 28))
x_test_motion_blur = np.reshape(x_test_motion_blur, (10000, 28, 28))
(x_train_glass_blur, y_train_glass_blur), (x_test_glass_blur, y_test_glass_blur) = tfds.as_numpy(tfds.load(
'mnist_corrupted/glass_blur',
split=['train', 'test'],
batch_size=-1,
as_supervised=True,
))
x_train_glass_blur, x_test_glass_blur = x_train_glass_blur / 255.0, x_test_glass_blur / 255.0
x_train_glass_blur = np.reshape(x_train_glass_blur, (60000, 28, 28))
x_test_glass_blur = np.reshape(x_test_glass_blur, (10000, 28, 28))
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
predictions = model(x_train[:1]).numpy()
predictions
tf.nn.softmax(predictions).numpy()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_fn(y_train[:1], predictions).numpy()
model.compile(optimizer='adam',
loss=loss_fn,
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
#Train the model: x_train holds the training images (numpy array), y_train the labels
model.evaluate(x_test, y_test, verbose=2)
#Validation test, on test set
model.evaluate(x_test_scale, y_test_scale, verbose=2)
#Evaluate on scale mnist dataset before training
model.fit(x_train_scale, y_train_scale, epochs=10)
#Train using scale mnist dataset
model.evaluate(x_test_scale, y_test_scale, verbose=2)
#Evaluate on scale mnist dataset after training
model.evaluate(x_test_shear, y_test_shear, verbose=2)
#Evaluate on shear mnist dataset before training
model.fit(x_train_shear, y_train_shear, epochs=10)
#Train using shear mnist dataset
model.evaluate(x_test_shear, y_test_shear, verbose=2)
#Evaluate on shear mnist dataset after training
model.evaluate(x_test_translate, y_test_translate, verbose=2)
#Evaluate on translate mnist dataset before training
model.fit(x_train_translate, y_train_translate, epochs=10)
#Train using translate mnist dataset
model.evaluate(x_test_translate, y_test_translate, verbose=2)
#Evaluate on translate mnist dataset after training
model.evaluate(x_test_motion_blur, y_test_motion_blur, verbose=2)
#Evaluate on motion blur mnist dataset before training
model.fit(x_train_motion_blur, y_train_motion_blur, epochs=10)
#Train using motion blur mnist dataset
model.evaluate(x_test_motion_blur, y_test_motion_blur, verbose=2)
#Evaluate on motion blur mnist dataset after training
model.evaluate(x_test_glass_blur, y_test_glass_blur, verbose=2)
#Evaluate on glass blur mnist dataset before training
model.fit(x_train_glass_blur, y_train_glass_blur, epochs=10)
#Train using glass blur mnist dataset
model.evaluate(x_test_glass_blur, y_test_glass_blur, verbose=2)
#Evaluate on glass blur mnist dataset after training
probability_model = tf.keras.Sequential([
model,
tf.keras.layers.Softmax()
])
probability_model(x_test[:5])
print(x_train[0:1].shape)
pred = model(x_train[0:1], training=False)
print(pred)
print(np.argmax(pred[0]))
#Testing the model on first x_train image
#reference for what the drawn numbers should look like
for x_train_i in range(20):
plt.imshow(x_train[x_train_i])
plt.show()
pred = model(x_train[[x_train_i]], training=False)
print('Guessed number: {}'.format(np.argmax(pred[0])))
from PIL import ImageGrab
#use to check the path (raw string so the backslashes are not treated as escapes)
print(os.getcwd() + r'\images\hand_drawn.png')
import pathlib
drawn_image_path = pathlib.Path(os.getcwd() + r'\images\hand_drawn\user_drawing.png')
def screenshot(widget):
x=root.winfo_rootx()+widget.winfo_x()
y=root.winfo_rooty()+widget.winfo_y()
x1=x+widget.winfo_width()
y1=y+widget.winfo_height()
ImageGrab.grab((x,y,x1,y1)).resize((28,28)).save(drawn_image_path, format='PNG')
from tkinter import *
from tkinter import ttk
class Sketchpad(Canvas):
def __init__(self, parent, **kwargs):
super().__init__(parent, **kwargs)
self.bind("<Button-1>", self.add_oval)
self.bind("<B1-Motion>", self.add_oval)
def add_oval(self, event):
radius = 15
self.create_oval(event.x-radius, event.y+radius, event.x+radius, event.y-radius, fill='black')
def clear(self):
self.delete('all')
root = Tk()
root.columnconfigure(0, weight=1)
root.rowconfigure(0, weight=1)
mainframe = ttk.Frame(root, width=1000, height=500)
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
sketch = Sketchpad(mainframe, width=500, height=500, bg='white', highlightthickness=0)
sketch.grid(column=0, row=0, sticky='W')
button_frame = ttk.Frame(mainframe, width=200, height=500, borderwidth=5, relief='raised')
button_frame.grid(column=1, row=0, sticky='E')
button_frame.grid_propagate(False)
clear_button = ttk.Button(button_frame, text='Clear Drawing', command=sketch.clear)
clear_button.grid(column=1, row=0, padx=20, pady=20)
submit_button = ttk.Button(button_frame, text='Submit', command=lambda: screenshot(sketch))
submit_button.grid(column=1, row=1, padx=20, pady=20)
root.mainloop()
from PIL import ImageOps  # `import PIL` alone does not expose PIL.ImageOps
drawn_image = tf.keras.utils.load_img(
    drawn_image_path, target_size=(28, 28), color_mode='grayscale')
drawn_image = ImageOps.invert(drawn_image)
drawn_image = np.array(drawn_image) / 255.0
drawn_img_array = tf.keras.utils.img_to_array(drawn_image)
drawn_img_array = tf.expand_dims(drawn_img_array, 0)
predictions = model(drawn_img_array, training=False)
print('Guessed number is {} with {}% confidence'.format(np.argmax(predictions[0]), max(tf.nn.softmax(predictions).numpy()[0])*100))
# print(tf.nn.softmax(predictions).numpy())
plt.imshow(drawn_image, cmap=plt.cm.binary)
plt.show()
```
```
repo_directory = '/Users/iaincarmichael/Dropbox/Research/law/law-net/'
data_dir = '/Users/iaincarmichael/Documents/courtlistener/data/'
import numpy as np
import sys
import matplotlib.pyplot as plt
from scipy.stats import rankdata
from collections import Counter
# graph package
import igraph as ig
# our code
sys.path.append(repo_directory + 'code/')
from setup_data_dir import setup_data_dir, make_subnetwork_directory
from pipeline.download_data import download_bulk_resource, download_master_edgelist, download_scdb
from helpful_functions import case_info
sys.path.append(repo_directory + 'vertex_metrics_experiment/code/')
from rankscore_experiment_sort import *
from rankscore_experiment_LR import *
from rankscore_experiment_search import *
from make_tr_edge_df import *
# which network to download data for
network_name = 'scotus' # 'federal', 'ca1', etc
# some sub directories that get used
raw_dir = data_dir + 'raw/'
subnet_dir = data_dir + network_name + '/'
text_dir = subnet_dir + 'textfiles/'
# jupyter notebook settings
%load_ext autoreload
%autoreload 2
%matplotlib inline
# load the scotus network
G = ig.Graph.Read_GraphML(subnet_dir + network_name +'_network.graphml')
# get a small subgraph to work with
np.random.seed(234)
v = G.vs[np.random.choice(range(len(G.vs)))]
subset_ids = G.neighborhood(v.index, order=2)
g = G.subgraph(subset_ids)
# get adjacency matrix
A = np.array(g.get_adjacency().data)
```
# helper functions
```
def get_leading_evector(M, normalized=True):
evals, evecs = np.linalg.eig(M)
# there really has to be a more elegant way to do this
return np.real(evecs[:, np.argmax(evals)].reshape(-1))
```
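The eigendecomposition above works, but for a column-stochastic matrix the leading eigenvector can also be found by power iteration, which avoids the complex-arithmetic cleanup. A sketch (not part of the original code; the toy matrix is made up):

```python
import numpy as np

def leading_evector_power(M, iters=500, tol=1e-12):
    # power iteration: for a column-stochastic M this converges to the
    # stationary distribution (leading eigenvector, eigenvalue 1)
    x = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(iters):
        x_new = M.dot(x)
        x_new /= x_new.sum()
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x

M = np.array([[0.9, 0.5],
              [0.1, 0.5]])       # column-stochastic toy matrix
print(leading_evector_power(M))  # ≈ [5/6, 1/6]
```

For the transition matrices in this notebook, either routine should give the same PageRank vector after normalization.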
# parameters
```
n = len(g.vs)
case_years = np.array(g.vs['year']).astype(int)
Y = case_years - min(case_years) # zero index the years
m = max(Y) + 1
cases_per_year = [0] * m
cases_per_year_counter = Counter(Y)
for k in cases_per_year_counter.keys():
cases_per_year[k] = cases_per_year_counter[k]
p = .85
qtv = .8
qvt = .2
```
# PageRank transition matrix
```
# set up the page rank transition matrix
D = np.diag([0 if d == 0 else 1.0/d for d in g.outdegree()])
z = [1.0/n if d == 0 else (1.0 - p) / n for d in g.outdegree()]
PR = p * np.dot(A.T, D) + np.outer([1] * n, z)
np.allclose(PR.sum(axis=0), [1]*n)
pr = get_leading_evector(PR)
pr = pr/sum(pr) # scale to probability
# check against igraph's PageRank values
# TODO: still a little off
pr_ig = np.array(g.pagerank(damping = p))
print("sum square diff: %f" % sum(np.square(pr_ig - pr)))
print("mean: %f" % np.mean(pr))
plt.figure(figsize=[8, 4])
plt.subplot(1,2,1)
plt.scatter(range(n), pr_ig, color='blue', label='igraph')
plt.scatter(range(n), pr, color='red', label='iain')
plt.xlim([0, n])
plt.ylim([0, 1.2 * max(max(pr_ig), max(pr))])
plt.legend(loc='upper right')
plt.subplot(1,2,2)
diff = pr_ig - pr
plt.scatter(range(n), diff, color='green')
plt.ylabel('diff')
plt.xlim([0, n])
plt.ylim(min(diff), max(diff))
plt.axhline(0, color='black')
```
# time-time transition matrix
ones on the first subdiagonal (the line below the diagonal)
```
TT = np.zeros((m, m))
TT[1:m, :m-1] = np.diag([1] * (m - 1))
```
# vertex - time transition matrix
the i-th column is the Y[i]th basis vector
```
VT = np.zeros((m, n))
# for basis vectors
identity_m = np.eye(m)
for i in range(n):
VT[:, i] = identity_m[:, Y[i]]
np.allclose(VT.sum(axis=0), [1]*n)
```
# time - vertex transition matrix
VT transpose but entries are scaled by number of cases in the year
```
TV = np.zeros((n, m))
n_inv = [0 if cases_per_year[i] == 0 else 1.0/cases_per_year[i] for i in range(m)]
for i in range(n):
TV[i, :] = identity_m[Y[i], :] * n_inv[Y[i]]
qtv_diag = [0 if cases_per_year[i] == 0 else qtv for i in range(m)]
qtv_diag[-1] = 1
Qtv = np.diag(qtv_diag)
```
# Make overall transition matrix
```
print(sum(PR[:, 0]))
print(sum(VT[0, :]))
print(sum(TT[0, :]))
print(sum(TV[0, :]))
P = np.zeros((n + m, n + m))
# upper left
P[:n, :n] = (1 - qvt) * PR
# upper right
P[:n, -m:] = np.dot(TV, Qtv)
# lower left
P[n:, :-m] = qvt * VT
# lower right
P[-m:, -m:] = np.dot(TT, np.eye(m) - Qtv)
np.allclose(P.sum(axis=0), [1]*(n + m))
ta_pr = get_leading_evector(P)
ta_pr = ta_pr/sum(ta_pr)
```
# time aware page rank function
```
def time_aware_pagerank(A, years, p, qtv, qvt):
"""
Computes the time aware PageRank defined by the following random walk
Create bi-partide graph time graph F whose vertices are the original vertices
of G and the vertex years.
- F contains a copy of G
- edge from each vetex to AND from its year
- edges go from year to the following year
When the random walk is at a vertex of G
- probability qvt transitions to the time node
- probability 1 - qvt does a PageRank move
When the random walk is at a time node
- probability qtv transitions to a vertex in G (of the corresponding year)
- probability 1 - qtv moves to the next year
Parameters
----------
A: adjacency matrix of original matrix where Aij = 1 iff there is an edge from i to j
Y: the years assigned to each node
p: PageRank parameter
qtv: probability of transitioning from time to vertex in original graph
qvt: probability of transitioning from vertx to time
Output
------
"""
# number of vertices in the graph
n = A.shape[0]
outdegrees = A.sum(axis=1)
# zero index the years
Y = np.array(years) - min(years)
# number of years in graph
m = max(Y) + 1
# PageRank transition matrix
# (see murphy 17.37)
D = np.diag([0 if d == 0 else 1.0/d for d in outdegrees])
z = [1.0/n if d == 0 else (1.0 - p) / n for d in outdegrees]
PR = p * np.dot(A.T, D) + np.outer([1] * n, z)
# Time-Time transition matrix
# ones below diagonal
TT = np.zeros((m, m))
TT[1:m, :m-1] = np.diag([1] * (m - 1))
# Vertex-Time transition matrix
# i-th column is the Y[i]th basis vector
VT = np.zeros((m, n))
identity_m = np.eye(m) # for basis vectors
for i in range(n):
VT[:, i] = identity_m[:, Y[i]]
# Time-Vertex transition matrix
# VT transpose, with entries scaled by the number of cases in each year
TV = np.zeros((n, m))
# number of cases per year, computed here from Y (otherwise the
# notebook-level cases_per_year would silently be captured as a global)
cases_per_year = [0] * m
for year_idx, count in Counter(Y).items():
    cases_per_year[year_idx] = count
# 1 over number of cases per year
n_inv = [0 if cases_per_year[i] == 0 else 1.0/cases_per_year[i] for i in range(m)]
for i in range(n):
    TV[i, :] = identity_m[Y[i], :] * n_inv[Y[i]]
# normalization matrix for TV
qtv_diag = [0 if cases_per_year[i] == 0 else qtv for i in range(m)]
qtv_diag[-1] = 1  # last column of TT is zeros
Qtv = np.diag(qtv_diag)
# overall transition matrix
P = np.zeros((n + m, n + m))
# upper left
P[:n, :n] = (1 - qvt) * PR
# upper right
P[:n, -m:] = np.dot(TV, Qtv)
# lower left
P[n:, :-m] = qvt * VT
# lower right
P[-m:, -m:] = np.dot(TT, np.eye(m) - Qtv)
# get PageRank values
leading_eig = get_leading_evector(P)
ta_pr = leading_eig[:n]
pr_years = leading_eig[-m:]
return ta_pr/sum(ta_pr), pr_years/sum(pr_years)
```
# test
```
p = .85
qtv = .8
qvt = .2
%%time
A = np.array(G.get_adjacency().data)
years = np.array(G.vs['year']).astype(int)
%%time
ta_pr, pr_years = time_aware_pagerank(A, years, p, qtv, qvt)
plt.figure(figsize=[10, 5])
# plot pr and ta_pr
plt.subplot(1,2,1)
plt.scatter(range(n), pr, color='blue', label='pr')
plt.scatter(range(n), ta_pr[:n], color='red', label='ta pr')
plt.xlim([0, n])
plt.ylim([0, 1.2 * max(max(ta_pr), max(pr))])
plt.legend(loc='upper right')
plt.xlabel('vertex')
plt.ylabel('pr value')
# plot time
plt.subplot(1,2,2)
plt.scatter(range(min(years), max(years) + 1), ta_pr[-m:])
plt.xlim([min(years), max(years) ])
plt.ylim([0, 1.2 * max(ta_pr[-m:])])
plt.ylabel('pr value')
plt.xlabel('year')
```
# Distance Based Statistical Method for Planar Point Patterns
**Authors: Serge Rey <sjsrey@gmail.com> and Wei Kang <weikang9009@gmail.com>**
## Introduction
Distance based methods for point patterns are of three types:
* [Mean Nearest Neighbor Distance Statistics](#Mean-Nearest-Neighbor-Distance-Statistics)
* [Nearest Neighbor Distance Functions](#Nearest-Neighbor-Distance-Functions)
* [Interevent Distance Functions](#Interevent-Distance-Functions)
In addition, we are going to introduce a computational technique [Simulation Envelopes](#Simulation-Envelopes) to aid in making inferences about the data generating process. An [example](#CSR-Example) is used to demonstrate how to use and interpret simulation envelopes.
```
import scipy.spatial
import pysal.lib as ps
import numpy as np
from pysal.explore.pointpats import PointPattern, PoissonPointProcess, as_window, G, F, J, K, L, Genv, Fenv, Jenv, Kenv, Lenv
%matplotlib inline
import matplotlib.pyplot as plt
```
## Mean Nearest Neighbor Distance Statistics
The nearest neighbor(s) for a point $u$ is the point(s) $N(u)$ which meet the condition
$$d_{u,N(u)} \leq d_{u,j} \forall j \in S - u$$
The distance between the nearest neighbor(s) $N(u)$ and the point $u$ is the nearest neighbor distance for $u$. After searching for the nearest neighbor(s) of all the points and calculating the corresponding distances, we obtain the mean nearest neighbor distance by averaging these distances.
Clark and Evans (1954) demonstrated that, under the null hypothesis of CSR, the mean nearest neighbor distance statistic is normally distributed. We can use this test statistic to determine whether the point pattern is the outcome of CSR and, if not, whether it reflects a clustered or a regular spatial process.
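The Clark and Evans test compares the observed mean to its CSR expectation $1/(2\sqrt{\lambda})$ and standardizes by the CSR standard error. A sketch of the arithmetic (the point count, area, and observed mean below are hypothetical, not taken from the example that follows):

```python
import numpy as np

n, area = 100, 1.0
lam = n / area                       # intensity: points per unit area
mean_nnd_obs = 0.06                  # hypothetical observed mean NN distance
expected = 1 / (2 * np.sqrt(lam))    # E[mean NND] under CSR
se = 0.26136 / np.sqrt(n * lam)      # Clark-Evans standard error under CSR
z = (mean_nnd_obs - expected) / se   # compare to the standard normal
```

A large positive z suggests dispersion (observed distances longer than CSR expects); a large negative z suggests clustering.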
Mean nearest neighbor distance statistic
$$\bar{d}_{min}=\frac{1}{n} \sum_{i=1}^n d_{min}(s_i)$$
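Before turning to the `PointPattern` class, the statistic itself is easy to compute directly; a minimal sketch with `scipy.spatial.cKDTree` on four made-up points:

```python
import numpy as np
from scipy.spatial import cKDTree

pts = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.]])
# k=2 because each point's nearest neighbor in the tree is itself (distance 0)
dists, _ = cKDTree(pts).query(pts, k=2)
nnd = dists[:, 1]            # distance to the nearest *other* point
mean_nnd = nnd.mean()        # here (1 + 1 + 1 + sqrt(5)) / 4
```

The same quantity is exposed below as `pp.mean_nnd`.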
```
points = [[66.22, 32.54], [22.52, 22.39], [31.01, 81.21],
[9.47, 31.02], [30.78, 60.10], [75.21, 58.93],
[79.26, 7.68], [8.23, 39.93], [98.73, 77.17],
[89.78, 42.53], [65.19, 92.08], [54.46, 8.48]]
pp = PointPattern(points)
pp.summary()
```
We may call the method **knn** in PointPattern class to find $k$ nearest neighbors for each point in the point pattern *pp*.
```
# one nearest neighbor (default)
pp.knn()
```
The first array contains the ids of each point's nearest neighbor; the second array contains the distance between each point and its nearest neighbor.
```
# two nearest neighbors
pp.knn(2)
pp.max_nnd # Maximum nearest neighbor distance
pp.min_nnd # Minimum nearest neighbor distance
pp.mean_nnd # mean nearest neighbor distance
pp.nnd # Nearest neighbor distances
pp.nnd.sum()/pp.n # same as pp.mean_nnd
pp.plot()
```
## Nearest Neighbor Distance Functions
Nearest neighbour distance distribution functions (including the nearest “event-to-event” and “point-event” distance distribution functions) of a point process are cumulative distribution functions of several kinds -- $G, F, J$. By comparing the distance function of the observed point pattern with that of the point pattern from a CSR process, we are able to infer whether the underlying spatial process of the observed point pattern is CSR or not for a given confidence level.
#### $G$ function - event-to-event
The $G$ function is defined as follows: for a given distance $d$, $G(d)$ is the proportion of nearest neighbor distances that are less than $d$.
$$G(d) = \sum_{i=1}^n \frac{ \phi_i^d}{n}$$
$$
\phi_i^d =
\begin{cases}
1 & \quad \text{if } d_{min}(s_i)<d \\
0 & \quad \text{otherwise } \\
\end{cases}
$$
If the underlying point process is a CSR process, the $G$ function has the expectation:
$$
G(d) = 1-e^{-\lambda \pi d^2}
$$
If the empirical $G$ lies above this expectation, the pattern is clustered; departures below the expectation reflect dispersion (a regular pattern).
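The empirical $G$ is just the ECDF of the nearest neighbor distances; a sketch with made-up distances:

```python
import numpy as np

nnd = np.array([1.0, 1.0, 1.0, 2.2])   # hypothetical nearest neighbor distances
d_grid = np.linspace(0, nnd.max(), 5)  # distance domain
G_hat = np.array([(nnd < d).mean() for d in d_grid])
# G_hat rises monotonically from 0 as d sweeps the distance domain
```

This is the curve that `G(pp, intervals=20)` estimates below, where `intervals` controls the size of `d_grid`.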
```
gp1 = G(pp, intervals=20)
gp1.plot()
```
A slightly different visualization of the empirical function is the quantile-quantile plot:
```
gp1.plot(qq=True)
```
In the Q-Q plot the CSR function is now a diagonal line, which makes visual assessment of departures from CSR easier.
It is obvious that the above $G$ increases very slowly at small distances and the line is below the expected value for a CSR process (green line). We might think that the underlying spatial process is a regular point process. However, this visual inspection is not enough for a final conclusion. In [Simulation Envelopes](#Simulation-Envelopes), we are going to demonstrate how to simulate data under CSR many times and construct the $95\%$ simulation envelope for $G$.
```
gp1.d # distance domain sequence (corresponding to the x-axis)
gp1.G # cumulative nearest neighbor distance distribution over d (corresponding to the y-axis)
```
#### $F$ function - "point-event"
When the number of events in a point pattern is small, the $G$ function is rough (see the $G$ function plot for the 12-point pattern above). One way to get around this is to turn to the $F$ function, in which a given number of randomly distributed points is generated in the domain and the nearest event neighbor distance is calculated for each point. The cumulative distribution of all nearest event neighbor distances is called the $F$ function.
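The construction can be sketched directly: drop random probe points, measure each probe's distance to the nearest event, and take the ECDF (all coordinates below are simulated):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
events = rng.random((50, 2))     # observed events in the unit square
probes = rng.random((200, 2))    # randomly placed probe points
d_probe, _ = cKDTree(events).query(probes)   # point-to-nearest-event distances
d_grid = np.linspace(0, d_probe.max(), 20)
F_hat = np.array([(d_probe < d).mean() for d in d_grid])
# F_hat is a smoother cumulative curve than G when the event count is small
```

Increasing the number of probe points smooths the curve, which is what the `intervals` and sample-size choices below exploit.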
```
fp1 = F(pp, intervals=20) # The default is to randomly generate 100 points.
fp1.plot()
fp1.plot(qq=True)
```
We can increase the number of intervals to make $F$ more smooth.
```
fp1 = F(pp, intervals=50)
fp1.plot()
fp1.plot(qq=True)
```
The $F$ function is smoother than the $G$ function.
#### $J$ function - a combination of "event-event" and "point-event"
$J$ function is defined as follows:
$$J(d) = \frac{1-G(d)}{1-F(d)}$$
If $J(d)<1$, the underlying point process is a cluster point process; if $J(d)=1$, the underlying point process is a random point process; otherwise, it is a regular point process.
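Given empirical $G$ and $F$ on a common distance grid, $J$ follows directly (the values below are made up for illustration):

```python
import numpy as np

G_hat = np.array([0.0, 0.2, 0.5, 0.8])   # hypothetical empirical G
F_hat = np.array([0.0, 0.3, 0.6, 0.9])   # hypothetical empirical F
J_hat = (1 - G_hat) / (1 - F_hat)
# J_hat > 1 everywhere here, which would suggest a regular pattern
```

Note that the ratio becomes unstable as $F(d)$ approaches 1, which is why $J$ plots often blow up at large distances.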
```
jp1 = J(pp, intervals=20)
jp1.plot()
```
From the above figure, we can observe that the $J$ function is clearly above the $J(d)=1$ horizontal line and grows without bound as the nearest neighbor distance increases. We might conclude that the underlying point process is a regular one.
## Interevent Distance Functions
Nearest neighbor distance functions consider only the nearest neighbor distances, whether "event-event", "point-event", or their combination. Thus, distances to higher-order neighbors are ignored, even though they might reveal important information about the point process. Interevent distance functions, including the $K$ and $L$ functions, are proposed to consider distances between all pairs of event points. Similar to the $G$, $F$ and $J$ functions, the $K$ and $L$ functions are also cumulative distribution functions.
#### $K$ function - "interevent"
Given distance $d$, $K(d)$ is defined as:
$$K(d) = \frac{\sum_{i=1}^n \sum_{j=1}^n \psi_{ij}(d)}{n \hat{\lambda}}$$
where
$$
\psi_{ij}(d) =
\begin{cases}
1 & \quad \text{if } d_{ij}<d \\
0 & \quad \text{otherwise } \\
\end{cases}
$$
$\sum_{j=1}^n \psi_{ij}(d)$ is the number of events within a circle of radius $d$ centered on event $s_i$ .
Still, we use CSR as the benchmark (null hypothesis) and see how the $K$ function estimated from the observed point pattern deviates from that under CSR, which is $K(d)=\pi d^2$. $K(d)<\pi d^2$ indicates that the underlying point process is a regular point process, while $K(d)>\pi d^2$ indicates a cluster point process.
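A naive estimator (ignoring edge corrections) just counts ordered pairs within distance $d$; a sketch on the four corner points of the unit square:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
n, area = len(pts), 1.0
lam = n / area                 # intensity
D = squareform(pdist(pts))     # pairwise distance matrix
d = 1.1
pairs = (D < d).sum() - n      # ordered pairs i != j within d (drop the zero diagonal)
K_hat = pairs / (n * lam)      # here the 8 unit-length ordered pairs give 8 / 16 = 0.5
```

Production estimators such as pysal's `K` additionally correct for events near the window boundary.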
```
kp1 = K(pp)
kp1.plot()
```
#### $L$ function - "interevent"
$L$ function is a scaled version of $K$ function, defined as:
$$L(d) = \sqrt{\frac{K(d)}{\pi}}-d$$
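The transform makes the CSR benchmark a flat line at zero: substituting $K(d)=\pi d^2$ gives $L(d)=0$ for every $d$, as a quick check confirms:

```python
import numpy as np

d = np.linspace(0.1, 2.0, 5)
K_csr = np.pi * d ** 2               # K under CSR
L_csr = np.sqrt(K_csr / np.pi) - d   # identically zero under CSR
```

So in the plot below, values above zero indicate clustering and values below zero indicate regularity.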
```
lp1 = L(pp)
lp1.plot()
```
## Simulation Envelopes
A [Simulation envelope](http://www.esajournals.org/doi/pdf/10.1890/13-2042.1) is a computationally intensive technique for inferring whether an observed pattern deviates significantly from what would be expected under a specific process. Here, we always use CSR as the benchmark. To construct a simulation envelope for a given function, we simulate CSR many times, say $1000$ times, and calculate the function for each simulated point pattern. For every distance $d$, we sort the $1000$ simulated function values. Given a confidence level, say $95\%$, we take the $25$th and $975$th smallest values at every distance $d$. Together these bounds form the simulation envelope.
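The envelope construction itself is a pair of pointwise percentiles over the simulated curves; a generic sketch (the simulated values here are random placeholders, not real $G$ or $K$ curves):

```python
import numpy as np

rng = np.random.default_rng(0)
sims = rng.random((1000, 20))            # 1000 simulated curves on a 20-point grid
lo = np.percentile(sims, 2.5, axis=0)    # lower envelope (≈ 25th smallest of 1000)
hi = np.percentile(sims, 97.5, axis=0)   # upper envelope (≈ 975th smallest of 1000)
inside = (lo <= sims[0]) & (sims[0] <= hi)   # pointwise check for one curve
```

The `Genv`, `Fenv`, `Jenv`, `Kenv`, and `Lenv` classes below wrap exactly this idea around their respective functions.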
#### Simulation Envelope for G function
**Genv** class in pysal.
```
realizations = PoissonPointProcess(pp.window, pp.n, 100, asPP=True) # simulate CSR 100 times
genv = Genv(pp, intervals=20, realizations=realizations) # call Genv to generate simulation envelope
genv
genv.observed
genv.plot()
```
In the above figure, **LB** and **UB** comprise the simulation envelope. **CSR** is the mean function calculated from the simulated data. **G** is the function estimated from the observed point pattern. It is well below the simulation envelope. We can infer that the underlying point process is a regular one.
#### Simulation Envelope for F function
**Fenv** class in pysal.
```
fenv = Fenv(pp, intervals=20, realizations=realizations)
fenv.plot()
```
#### Simulation Envelope for J function
**Jenv** class in pysal.
```
jenv = Jenv(pp, intervals=20, realizations=realizations)
jenv.plot()
```
#### Simulation Envelope for K function
**Kenv** class in pysal.
```
kenv = Kenv(pp, intervals=20, realizations=realizations)
kenv.plot()
```
#### Simulation Envelope for L function
**Lenv** class in pysal.
```
lenv = Lenv(pp, intervals=20, realizations=realizations)
lenv.plot()
```
## CSR Example
In this example, we are going to generate a point pattern as the "observed" point pattern. The data generating process is CSR. Then, we will simulate CSR in the same domain 100 times and construct a simulation envelope for each function.
```
from pysal.lib.cg import shapely_ext
from pysal.explore.pointpats import Window
import pysal.lib as ps
va = ps.io.open(ps.examples.get_path("vautm17n.shp"))
polys = [shp for shp in va]
state = shapely_ext.cascaded_union(polys)
```
Generate the point pattern **pp** (size 100) from CSR as the "observed" point pattern.
```
n = 100
samples = 1
pp = PoissonPointProcess(Window(state.parts), n, samples, asPP=True)
pp.realizations[0]
pp.n
```
Simulate CSR in the same domain 100 times; these realizations will be used to construct the simulation envelopes under the null hypothesis of CSR.
```
csrs = PoissonPointProcess(pp.window, 100, 100, asPP=True)
csrs
```
Construct the simulation envelope for $G$ function.
```
genv = Genv(pp.realizations[0], realizations=csrs)
genv.plot()
```
Since the "observed" $G$ is well contained by the simulation envelope, we infer that the underlying point process is a random process.
```
genv.low # lower bound of the simulation envelope for G
genv.high # upper bound of the simulation envelope for G
```
Construct the simulation envelope for $F$ function.
```
fenv = Fenv(pp.realizations[0], realizations=csrs)
fenv.plot()
```
Construct the simulation envelope for $J$ function.
```
jenv = Jenv(pp.realizations[0], realizations=csrs)
jenv.plot()
```
Construct the simulation envelope for $K$ function.
```
kenv = Kenv(pp.realizations[0], realizations=csrs)
kenv.plot()
```
Construct the simulation envelope for $L$ function.
```
lenv = Lenv(pp.realizations[0], realizations=csrs)
lenv.plot()
```
<a href="https://colab.research.google.com/github/Amro-source/Deep-Learning/blob/main/imageprocessing1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
def im2double(im):
    # normalize to [0, 1] using the image's own min and max
    min_val = np.min(im.ravel())
    max_val = np.max(im.ravel())
    out = (im.astype('float') - min_val) / (max_val - min_val)
    return out
def im2double_dtype(im):
    # alternative (renamed so it does not shadow the version above):
    # divide by the largest value representable in the image's dtype
    info = np.iinfo(im.dtype)
    return im.astype(np.float64) / info.max
from google.colab.patches import cv2_imshow
import tensorflow as tf
from matplotlib import pyplot as plt
import cv2
import numpy as np
# load the image in color (BGR); the channel slicing below needs 3 channels
img = cv2.imread('sample_data/Starry.jpeg', cv2.IMREAD_COLOR)
#out = cv2.normalize(img.astype('float'), None, 0.0, 1.0, cv2.NORM_MINMAX) # Convert to normalized
#out=im2double(img)
from IPython.display import Image, display
display(Image('Cristiano_Ronaldo_2018'))
display(Image('cristiano-ronaldo-net-worth-money-endorsements'))
from skimage import io
#io.imshow(img)
#cv2_imshow(img)
#imgae=img/255;
y=0
x=0
h=100
w=200
image=img
r = 100.0 / image.shape[1]
dim = (100, int(image.shape[0] * r))
# perform the actual resizing of the image and show it
resized = cv2.resize(image, dim, interpolation = cv2.INTER_AREA)
crop = img[y:y+h, x:x+w]
# OpenCV uses BGR channel order, so index 0 is blue and index 2 is red
image_slice_blue = img[:,:,0]
image_slice_green = img[:,:,1]
image_slice_red = img[:,:,2]
#plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
#plt.show()
from scipy import misc
from matplotlib import pyplot as plt
import numpy as np
#get face image of panda from misc package
panda = misc.face()
#plot or show image of face
plt.imshow( panda )
plt.show()
from skimage import io
io.use_plugin("pil", "imread")
#img = io.imread("Starry.jpeg")
#io.use_plugin("qt", "imshow")
io.imshow(img)
io.show()
import numpy as np
import matplotlib.pyplot as plt
# imread/imsave are functions, not modules, so they must be imported with `from ... import`
from skimage.io import imread, imsave
# skimage.io.imshow is an alternative for display
image_data = imread('Starry.jpeg').astype(np.float32)
print ('Size: ', image_data.size)
print ('Shape: ', image_data.shape)
scaled_image_data = image_data / 255.
# Save the modified image if you want to
# imsave('test_out.png', scaled_image_data)
plt.imshow(scaled_image_data)
plt.show()
from skimage import io
io.use_plugin("pil", "imread")
img = io.imread("Starry.jpeg")
io.use_plugin("qt", "imshow")
io.imshow(img,fancy=True)
io.show()
```
```
import pandas as pd
from datetime import datetime
from _lib.data_preparation import remove_substandard_trips, df_calc_basic, df_join_generic_with_gps, read_gpx, calc_context
from _lib.data_preparation import get_df_detail_final, get_df_generic_final
from _lib.helper import val2year, val2zip, val2utf8, get_filepaths
from _lib.settings import DATA_AFTER_PREPARATION_DIR
```
# FR Amiens
```
from _lib.settings import DATA_ORIGIN_AMIENS_DIR
SHORT_NAME = 'ami'
```
### 2016
```
df_ami16 = pd.read_csv(f'{DATA_ORIGIN_AMIENS_DIR}/detail_2016.csv', encoding='windows-1250')
print('Shape before: ', df_ami16.shape)
''' Column names normalization '''
df_ami16.columns = [cname.replace(' ', '').lower() for cname in df_ami16.columns]
''' Column data normalization '''
df_ami16['tripid'] = SHORT_NAME + df_ami16['tripid'].astype(str).str.replace(' ', '')
df_ami16['timestamp'] = round(df_ami16['timestamp'].apply(lambda x: datetime.fromtimestamp(float(x)).timestamp()))
df_ami16 = df_ami16.astype({'latitude': 'float', 'longitude': 'float'})
df_ami16 = remove_substandard_trips(df_ami16)
df_ami16 = df_calc_basic(df_ami16)
print('Shape after: ', df_ami16.shape)
df_ami16_generic = pd.read_csv(f'{DATA_ORIGIN_AMIENS_DIR}/generic_2016.csv', encoding='windows-1250')
''' Column names normalization '''
df_ami16_generic.columns = [cname.replace(' ', '').lower() for cname in df_ami16_generic.columns]
df_ami16_generic['tripid'] = SHORT_NAME + df_ami16_generic['tripid'].apply(lambda x: x.replace(' ', ''))
df_ami16_generic['distance'] = df_ami16_generic['distance'].astype(float)
df_ami16_generic['valid'] = df_ami16_generic[df_ami16_generic['ecc'].notna()]['ecc'].apply(lambda x: False if x == 0 else True)
df_ami16_generic['avgspeed'] = df_ami16_generic['avgspeed'].astype(float)
df_ami16_generic['tracktype'] = df_ami16_generic[df_ami16_generic['tracktype'].notna()]['tracktype'].apply(val2utf8)
df_ami16_generic['male'] = df_ami16_generic[df_ami16_generic['sex'].notna()]['sex'].apply(lambda x: True if str(x).lower() == 'm' else (False if str(x).lower() == 'f' else float('nan')))
df_ami16_generic['yearofbirth'] = df_ami16_generic['year'].apply(val2year)
df_ami16_generic['profession'] = df_ami16_generic[df_ami16_generic['profession'].notna()]['profession'].apply(val2utf8)
df_ami16_generic['frequentuser'] = df_ami16_generic[df_ami16_generic['frequentuser'].notna()]['frequentuser'].apply(lambda x: False if x.lower() in ['no', 'non'] else True)
df_ami16_generic['zip'] = df_ami16_generic[df_ami16_generic['zip'].notna()]['zip'].apply(val2zip)
df_ami16_generic['source'] = df_ami16_generic[df_ami16_generic['source'].notna()]['source'].apply(val2utf8)
df_ami16_generic['typeofbike'] = df_ami16_generic[df_ami16_generic['typeofbike'].notna()]['typeofbike'].apply(val2utf8)
df_ami16_generic['typeoftrip'] = df_ami16_generic[df_ami16_generic['tipeoftrip'].notna()]['tipeoftrip'].apply(val2utf8)
df_ami16_generic.drop(['timestamp', 'startdt', 'ecc', 'sex', 'year', 'tipeoftrip', 'distance', 'avgspeed'], axis=1, inplace=True)
''' Joining generic data with gps data '''
print('Shape before: ', df_ami16_generic.shape)
df_ami16_generic = df_join_generic_with_gps(df_ami16_generic, df_ami16)
print('Shape after: ', df_ami16_generic.shape)
```
### 2017
```
df_ami17 = pd.read_csv(f'{DATA_ORIGIN_AMIENS_DIR}/detail_2017.csv', encoding='windows-1250', sep=';')
print('Shape before: ', df_ami17.shape)
''' Column names normalization '''
df_ami17.columns = [cname.replace(' ', '').lower() for cname in df_ami17.columns]
''' Column data normalization '''
df_ami17['tripid'] = SHORT_NAME + df_ami17['tripid'].astype(str).str.replace(' ', '')
df_ami17['timestamp'] = df_ami17['timestamp'].apply(lambda x: round(datetime.fromtimestamp(float(x)).timestamp()))
df_ami17['latitude'] = df_ami17['latitude'].str.replace(',', '.').astype(float)
df_ami17['longitude'] = df_ami17['longitude'].str.replace(',', '.').astype(float)
df_ami17['altitude'] = df_ami17['altitude'].astype(float)
df_ami17 = remove_substandard_trips(df_ami17)
df_ami17 = df_calc_basic(df_ami17)
print('Shape after: ', df_ami17.shape)
df_ami17_generic = pd.read_csv(f'{DATA_ORIGIN_AMIENS_DIR}/generic_2017.csv', encoding='windows-1250', sep=';')
''' Column names normalization '''
df_ami17_generic.columns = [cname.replace(' ', '').lower() for cname in df_ami17_generic.columns]
''' Column data normalization '''
df_ami17_generic['tripid'] = SHORT_NAME + df_ami17_generic['tripid'].astype(str).str.replace(' ', '')
df_ami17_generic['avgspeed'] = df_ami17_generic['avgspeed'].str.replace(',', '.').astype(float)
df_ami17_generic['distance'] = df_ami17_generic['totallength'].str.replace(',', '.').astype(float)
df_ami17_generic['valid'] = df_ami17_generic['valid'].apply(lambda x: False if str(x).lower() == 'no' else True)
df_ami17_generic['male'] = df_ami17_generic[df_ami17_generic['sex'].notna()]['sex'].apply(lambda x: True if str(x).lower() == 'male' else (False if str(x).lower() == 'female' else float('nan')))
df_ami17_generic['yearofbirth'] = df_ami17_generic['yearofbirth'].apply(val2year)
df_ami17_generic['typeofbike'] = df_ami17_generic[df_ami17_generic['typeofbike'].notna()]['typeofbike'].apply(val2utf8)
df_ami17_generic['typeoftrip'] = df_ami17_generic[df_ami17_generic['typeoftrip'].notna()]['typeoftrip'].apply(val2utf8)
df_ami17_generic.drop(['uploaded', 'sex', 'timestamp', 'startdate', 'starttime', 'duration', 'maxspeed', 'totallength', 'lengthvalid', 'avgspeed', 'distance'], axis=1, inplace=True)
''' Joining generic data with gps data '''
print('Shape before: ', df_ami17_generic.shape)
df_ami17_generic = df_join_generic_with_gps(df_ami17_generic, df_ami17)
print('Shape after: ', df_ami17_generic.shape)
```
### Removing overall columns & records
```
''' DETAIL '''
print('Shape before. 2016:', df_ami16.shape, '2017:', df_ami17.shape)
df_ami16 = get_df_detail_final(df_ami16, df_ami16_generic)
df_ami17 = get_df_detail_final(df_ami17, df_ami17_generic)
print('Shape after. 2016:', df_ami16.shape, '2017:', df_ami17.shape)
''' GENERIC '''
print('Shape before. 2016:', df_ami16_generic.shape, '2017:', df_ami17_generic.shape)
df_ami16_generic = get_df_generic_final(df_ami16_generic, ['tracktype', 'source', 'profession', 'male', 'frequentuser', 'zip', 'yearofbirth', 'valid'])
df_ami17_generic = get_df_generic_final(df_ami17_generic, ['typeofbike', 'typeoftrip', 'male', 'yearofbirth', 'valid'])
print('Shape after. 2016:', df_ami16_generic.shape, '2017:', df_ami17_generic.shape)
```
### Datasets concatenation
```
df_ami = pd.concat([df_ami16, df_ami17], ignore_index=True)
df_ami.info()
df_ami_generic = pd.concat([df_ami16_generic, df_ami17_generic], ignore_index=True)
df_ami_generic.info()
```
### Saving operations
```
df_ami.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}.csv', index=False, sep=';')
df_ami_generic.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}_generic.csv', index=False, sep=';')
```
# PL Wroclaw
```
from tqdm import tqdm
from _lib.settings import DATA_ORIGIN_WROCLAW_DIR
SHORT_NAME = 'wro'
```
### 2015
```
df_wro15 = pd.read_csv(f'{DATA_ORIGIN_WROCLAW_DIR}/detail_2015.csv', encoding='windows-1250', skiprows=[9627453])
print('Shape before: ', df_wro15.shape)
''' Column names normalization '''
df_wro15.columns = [cname.replace(' ', '').lower() for cname in df_wro15.columns]
''' Column data normalization '''
df_wro15['tripid'] = SHORT_NAME + df_wro15['tripid'].astype(str).str.replace(' ', '')
df_wro15 = df_wro15.astype({'latitude': 'float', 'longitude': 'float'})
df_wro15 = remove_substandard_trips(df_wro15)
tqdm.pandas(desc='timestamp')
df_wro15['timestamp'] = df_wro15['timestamp'].progress_apply(lambda x: float('nan') if str(x).lower() in ['false', 'nan'] else round(datetime.fromtimestamp(float(x)).timestamp()))
df_wro15 = remove_substandard_trips(df_wro15)
df_wro15 = df_calc_basic(df_wro15)
print('Shape after: ', df_wro15.shape)
df_wro15_generic = pd.read_csv(f'{DATA_ORIGIN_WROCLAW_DIR}/generic_2015.csv')
''' Column names normalization '''
df_wro15_generic.columns = [cname.replace(' ', '').lower() for cname in df_wro15_generic.columns]
''' Column data normalization '''
df_wro15_generic['tripid'] = SHORT_NAME + df_wro15_generic['tripid'].apply(lambda id: id.replace(' ', ''))
df_wro15_generic['distance'] = df_wro15_generic['distance'].astype(float)
df_wro15_generic['valid'] = df_wro15_generic[df_wro15_generic['ecc'].notna()]['ecc'].apply(lambda x: False if x == 0 else True)
df_wro15_generic['avgspeed'] = df_wro15_generic['avgspeed'].astype(float)
df_wro15_generic['tracktype'] = df_wro15_generic[df_wro15_generic['tracktype'].notna()]['tracktype'].apply(val2utf8)
df_wro15_generic['male'] = df_wro15_generic[df_wro15_generic['sex'].notna()]['sex'].apply(lambda x: True if str(x).lower() == 'm' else (False if str(x).lower() == 'f' else float('nan')))
df_wro15_generic['yearofbirth'] = df_wro15_generic['year'].apply(val2year)
df_wro15_generic['profession'] = df_wro15_generic[df_wro15_generic['profession'].notna()]['profession'].apply(val2utf8)
df_wro15_generic['frequentuser'] = df_wro15_generic[df_wro15_generic['frequentuser'].notna()]['frequentuser'].apply(lambda x: False if x.lower() == 'no' else True)
df_wro15_generic['zip'] = df_wro15_generic[df_wro15_generic['zip'].notna()]['zip'].apply(val2zip)
df_wro15_generic['source'] = df_wro15_generic[df_wro15_generic['source'].notna()]['source'].apply(val2utf8)
df_wro15_generic.drop(['timestamp', 'startdt', 'ecc', 'sex', 'year', 'distance', 'avgspeed'], axis=1, inplace=True)
''' Joining generic data with gps data '''
print('Shape before: ', df_wro15_generic.shape)
df_wro15_generic = df_join_generic_with_gps(df_wro15_generic, df_wro15)
print('Shape after: ', df_wro15_generic.shape)
```
### 2016
```
df_wro16 = pd.read_csv(f'{DATA_ORIGIN_WROCLAW_DIR}/detail_2016.csv', encoding='windows-1250', skiprows=[11184484])
print('Shape before: ', df_wro16.shape)
''' Column names normalization '''
df_wro16.columns = [cname.replace(' ', '').lower() for cname in df_wro16.columns]
''' Column data normalization '''
df_wro16['tripid'] = SHORT_NAME + df_wro16['tripid'].astype(str).str.replace(' ', '')
df_wro16 = df_wro16.astype({'latitude': 'float', 'longitude': 'float'})
df_wro16 = remove_substandard_trips(df_wro16)
tqdm.pandas(desc='timestamp')
df_wro16['timestamp'] = df_wro16['timestamp'].progress_apply(lambda x: float('nan') if str(x).lower() in ['false', 'nan'] else round(datetime.fromtimestamp(float(x)).timestamp()))
df_wro16 = remove_substandard_trips(df_wro16)
df_wro16 = df_calc_basic(df_wro16)
print('Shape after: ', df_wro16.shape)
df_wro16_generic = pd.read_csv(f'{DATA_ORIGIN_WROCLAW_DIR}/generic_2016.csv')
''' Column names normalization '''
df_wro16_generic.columns = [cname.replace(' ', '').lower() for cname in df_wro16_generic.columns]
''' Column data normalization '''
df_wro16_generic['tripid'] = SHORT_NAME + df_wro16_generic['tripid'].apply(lambda x: x.replace(' ', ''))
df_wro16_generic['distance'] = df_wro16_generic['distance'].astype(float)
df_wro16_generic['valid'] = df_wro16_generic[df_wro16_generic['ecc'].notna()]['ecc'].apply(lambda x: False if x == 0 else True)
df_wro16_generic['avgspeed'] = df_wro16_generic['avgspeed'].astype(float)
df_wro16_generic['tracktype'] = df_wro16_generic[df_wro16_generic['tracktype'].notna()]['tracktype'].apply(val2utf8)
df_wro16_generic['male'] = df_wro16_generic[df_wro16_generic['sex'].notna()]['sex'].apply(lambda x: True if str(x).lower() == 'm' else (False if str(x).lower() == 'f' else float('nan')))
df_wro16_generic['yearofbirth'] = df_wro16_generic['year'].apply(val2year)
df_wro16_generic['profession'] = df_wro16_generic[df_wro16_generic['profession'].notna()]['profession'].apply(val2utf8)
df_wro16_generic['frequentuser'] = df_wro16_generic[df_wro16_generic['frequentuser'].notna()]['frequentuser'].apply(lambda x: False if x.lower() in ['no', 'nie'] else True)
df_wro16_generic['zip'] = df_wro16_generic[df_wro16_generic['zip'].notna()]['zip'].apply(val2zip)
df_wro16_generic['source'] = df_wro16_generic[df_wro16_generic['source'].notna()]['source'].apply(val2utf8)
df_wro16_generic['typeofbike'] = df_wro16_generic[df_wro16_generic['typeofbike'].notna()]['typeofbike'].apply(val2utf8)
df_wro16_generic['typeoftrip'] = df_wro16_generic[df_wro16_generic['tipeoftrip'].notna()]['tipeoftrip'].apply(val2utf8)
df_wro16_generic.drop(['timestamp', 'startdt', 'ecc', 'sex', 'year', 'distance', 'avgspeed'], axis=1, inplace=True)
''' Joining generic data with gps data '''
print('Shape before: ', df_wro16_generic.shape)
df_wro16_generic = df_join_generic_with_gps(df_wro16_generic, df_wro16)
print('Shape after: ', df_wro16_generic.shape)
```
### Removing overall columns & records
```
''' DETAIL '''
print('Shape before. 2015:', df_wro15.shape, '2016:', df_wro16.shape)
df_wro15 = get_df_detail_final(df_wro15, df_wro15_generic)
df_wro16 = get_df_detail_final(df_wro16, df_wro16_generic)
print('Shape after. 2015:', df_wro15.shape, '2016:', df_wro16.shape)
''' GENERIC '''
print('Shape before. 2015:', df_wro15_generic.shape, '2016:', df_wro16_generic.shape)
df_wro15_generic = get_df_generic_final(df_wro15_generic, ['tracktype', 'source', 'profession', 'male', 'frequentuser', 'zip', 'yearofbirth', 'valid'])
df_wro16_generic = get_df_generic_final(df_wro16_generic, ['tracktype', 'typeofbike', 'typeoftrip', 'source', 'profession', 'male', 'frequentuser', 'zip', 'yearofbirth', 'valid'])
print('Shape after. 2015:', df_wro15_generic.shape, '2016:', df_wro16_generic.shape)
```
### Datasets concatenation
```
df_wro = pd.concat([df_wro15, df_wro16], ignore_index=True)
df_wro.info()
df_wro_generic = pd.concat([df_wro15_generic, df_wro16_generic], ignore_index=True)
df_wro_generic.info()
```
### Saving operations
```
df_wro.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}.csv', index=False, sep=';')
df_wro_generic.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}_generic.csv', index=False, sep=';')
```
# SE Orebro
```
from _lib.settings import DATA_ORIGIN_OREBRO_DIR
SHORT_NAME = 'ore'
```
### 2015
```
df_ore15, df_ore15_generic = read_gpx(f'{DATA_ORIGIN_OREBRO_DIR}/2015', SHORT_NAME)
df_ore15.shape, df_ore15_generic.shape
print('Shape before: ', df_ore15.shape)
df_ore15 = remove_substandard_trips(df_ore15)
df_ore15 = df_calc_basic(df_ore15)
print('Shape after: ', df_ore15.shape)
''' Joining generic data with gps data '''
print('Shape before: ', df_ore15_generic.shape)
df_ore15_generic = df_join_generic_with_gps(df_ore15_generic, df_ore15)
print('Shape after: ', df_ore15_generic.shape)
```
### 2016
```
# df_ore16, df_ore16_generic = read_gpx(f'{DATA_ORIGIN_OREBRO_DIR}/2016', SHORT_NAME)
# df_ore16 = remove_substandard_trips(df_ore16)
# df_ore16.shape, df_ore16_generic.shape
# print('Shape before: ', df_ore16.shape)
# df_ore16 = remove_substandard_trips(df_ore16)
# df_ore16 = df_calc_basic(df_ore16)
# ''' Removing points with 0 distance passed '''
# df_ore16 = df_ore16[(df_ore16['distance'] != 0) | (df_ore16['end']) | (df_ore16['start'])]
# print('Shape after: ', df_ore16.shape)
# ''' Joining generic data with gps data '''
# print('Shape before: ', df_ore16_generic.shape)
# df_ore16_generic = df_join_generic_with_gps(df_ore16_generic, df_ore16)
# print('Shape after: ', df_ore16_generic.shape)
```
### Removing overall columns & records
```
''' DETAIL '''
print('Shape before:', df_ore15.shape)
df_ore15 = get_df_detail_final(df_ore15, df_ore15_generic)
print('Shape after:', df_ore15.shape)
''' GENERIC '''
print('Shape before:', df_ore15_generic.shape)
df_ore15_generic = get_df_generic_final(df_ore15_generic, ['email'])
print('Shape after:', df_ore15_generic.shape)
df_ore15.info()
df_ore15_generic.info()
```
### Saving operations
```
df_ore15.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}.csv', index=False, sep=';')
df_ore15_generic.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}_generic.csv', index=False, sep=';')
```
# DE Oldenburg
```
import os
import numpy as np
from tqdm import tqdm
from _lib.settings import DATA_ORIGIN_OLDENBURG_DIR
SHORT_NAME = 'old'
```
### Reading CSV files 2020
```
fpaths = get_filepaths(f'{DATA_ORIGIN_OLDENBURG_DIR}/2020', '.csv')
id, lat, lon, ts = [], [], [], []
for fpath in tqdm(fpaths):
tripid = SHORT_NAME + fpath[:-4].split('-')[-1]
df_trip = pd.read_csv(fpath, sep=';')
df_trip['timestamp'] = pd.to_datetime(df_trip['measured_date'])
df_trip['timestamp'] = df_trip['timestamp'].apply(lambda x: round(datetime.timestamp(x)))
id = id + [tripid] * df_trip.shape[0]
lat = lat + df_trip['latitude'].tolist()
lon = lon + df_trip['longitude'].tolist()
ts = ts + df_trip['timestamp'].tolist()
df_old = pd.DataFrame(np.array([id, lat, lon, ts]).T, columns=['tripid', 'latitude', 'longitude', 'timestamp'])
df_old = df_old.astype({'latitude': 'float', 'longitude': 'float', 'timestamp': 'float'})
df_old.shape
```
### Processing
```
print('Shape before: ', df_old.shape)
df_old = remove_substandard_trips(df_old)
df_old = df_calc_basic(df_old)
print('Shape after: ', df_old.shape)
df_old_generic = calc_context(df_old)
print('Shape before: ', df_old_generic.shape)
df_old_generic.drop_duplicates(subset=list(set(df_old_generic.columns.tolist()) - set(['startts', 'endts'])), keep='first', inplace=True)
df_old_generic = df_old_generic.reset_index(inplace=False)
print('Shape after: ', df_old_generic.shape)
```
### Removing overall columns & records
```
''' DETAIL '''
print('Shape before:', df_old.shape)
df_old = get_df_detail_final(df_old, df_old_generic)
print('Shape after:', df_old.shape)
''' GENERIC '''
print('Shape before:', df_old_generic.shape)
df_old_generic = get_df_generic_final(df_old_generic, [])
print('Shape after:', df_old_generic.shape)
df_old.info()
df_old_generic.info()
```
### Saving operations
```
df_old.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}.csv', index=False, sep=';')
df_old_generic.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}_generic.csv', index=False, sep=';')
```
# DE Berlin
```
import os
import numpy as np
from tqdm import tqdm
from _lib.settings import DATA_ORIGIN_BERLIN_DIR
SHORT_NAME = 'ber'
```
### Reading files 2020 - 2021
```
fpaths = get_filepaths(f'{DATA_ORIGIN_BERLIN_DIR}/2020_2021', '')
id, lat, lon, ts = [], [], [], []
for fpath in tqdm(fpaths):
tripid = SHORT_NAME + fpath.split('/')[-1].split('-')[-1]
with open(fpath) as fr:
Lines = fr.readlines()
begin = False
for line in Lines:
if not begin:
begin = 'lat,lon,X,Y,Z,timeStamp' in line
else:
lline = line.split(',')
if lline[0] != '':
id.append(tripid)
lat.append(lline[0])
lon.append(lline[1])
ts.append(lline[5][:-3])
df_ber = pd.DataFrame(np.array([id, lat, lon, ts]).T, columns=['tripid', 'latitude', 'longitude', 'timestamp'])
df_ber = df_ber.astype({'latitude': 'float', 'longitude': 'float', 'timestamp': 'float'})
df_ber.shape
print('Shape before: ', df_ber.shape)
df_ber = remove_substandard_trips(df_ber)
df_ber = df_calc_basic(df_ber)
print('Shape after: ', df_ber.shape)
df_ber_generic = calc_context(df_ber)
print('Shape before: ', df_ber_generic.shape)
df_ber_generic.drop_duplicates(subset=list(set(df_ber_generic.columns.tolist()) - set(['startts', 'endts'])), keep='first', inplace=True)
df_ber_generic = df_ber_generic.reset_index(inplace=False)
print('Shape after: ', df_ber_generic.shape)
```
### Removing overall columns & records
```
''' DETAIL '''
print('Shape before:', df_ber.shape)
df_ber = get_df_detail_final(df_ber, df_ber_generic)
print('Shape after:', df_ber.shape)
''' GENERIC '''
print('Shape before:', df_ber_generic.shape)
df_ber_generic = get_df_generic_final(df_ber_generic, [])
print('Shape after:', df_ber_generic.shape)
df_ber.info()
df_ber_generic.info()
```
### Saving operations
```
df_ber.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}.csv', index=False, sep=';')
df_ber_generic.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}_generic.csv', index=False, sep=';')
```
# PL Gdansk
```
from tqdm import tqdm
from _lib.settings import DATA_ORIGIN_GDANSK_DIR
SHORT_NAME = 'gda'
```
### 2015
```
df_gda15 = pd.read_csv(f'{DATA_ORIGIN_GDANSK_DIR}/detail_2015.csv', encoding='windows-1250')
print('Shape before: ', df_gda15.shape)
''' Column names normalization '''
df_gda15.columns = [cname.replace(' ', '').lower() for cname in df_gda15.columns]
''' Column data normalization '''
df_gda15['tripid'] = SHORT_NAME + df_gda15['tripid'].astype(str).str.replace(' ', '')
df_gda15 = df_gda15.astype({'latitude': 'float', 'longitude': 'float'})
df_gda15 = remove_substandard_trips(df_gda15)
tqdm.pandas(desc='timestamp')
df_gda15['timestamp'] = df_gda15['timestamp'].progress_apply(lambda x: float('nan') if str(x).lower() in ['false', 'nan'] else round(datetime.fromtimestamp(float(x)).timestamp()))
df_gda15 = remove_substandard_trips(df_gda15)
df_gda15 = df_calc_basic(df_gda15)
print('Shape after: ', df_gda15.shape)
df_gda15_generic = pd.read_csv(f'{DATA_ORIGIN_GDANSK_DIR}/generic_2015.csv')
''' Column names normalization '''
df_gda15_generic.columns = [cname.replace(' ', '').lower() for cname in df_gda15_generic.columns]
''' Column data normalization '''
df_gda15_generic['tripid'] = SHORT_NAME + df_gda15_generic['tripid'].apply(lambda id: id.replace(' ', ''))
df_gda15_generic['distance'] = df_gda15_generic['distance'].astype(float)
df_gda15_generic['valid'] = df_gda15_generic[df_gda15_generic['ecc'].notna()]['ecc'].apply(lambda x: False if x == 0 else True)
df_gda15_generic['avgspeed'] = df_gda15_generic['avgspeed'].astype(float)
df_gda15_generic['tracktype'] = df_gda15_generic[df_gda15_generic['tracktype'].notna()]['tracktype'].apply(val2utf8)
df_gda15_generic['male'] = df_gda15_generic[df_gda15_generic['sex'].notna()]['sex'].apply(lambda x: True if str(x).lower() == 'm' else (False if str(x).lower() == 'f' else float('nan')))
df_gda15_generic['yearofbirth'] = df_gda15_generic['year'].apply(val2year)
df_gda15_generic['profession'] = df_gda15_generic[df_gda15_generic['profession'].notna()]['profession'].apply(val2utf8)
df_gda15_generic['frequentuser'] = df_gda15_generic[df_gda15_generic['frequentuser'].notna()]['frequentuser'].apply(lambda x: False if x.lower() == 'no' else True)
df_gda15_generic['zip'] = df_gda15_generic[df_gda15_generic['zip'].notna()]['zip'].apply(val2zip)
df_gda15_generic['source'] = df_gda15_generic[df_gda15_generic['source'].notna()]['source'].apply(val2utf8)
df_gda15_generic.drop(['timestamp', 'startdt', 'ecc', 'sex', 'year', 'distance', 'avgspeed'], axis=1, inplace=True)
''' Joining generic data with gps data '''
print('Shape before: ', df_gda15_generic.shape)
df_gda15_generic = df_join_generic_with_gps(df_gda15_generic, df_gda15)
print('Shape after: ', df_gda15_generic.shape)
```
### 2016
```
df_gda16 = pd.read_csv(f'{DATA_ORIGIN_GDANSK_DIR}/detail_2016.csv', encoding='windows-1250', skiprows=[11184484])
print('Shape before: ', df_gda16.shape)
''' Column names normalization '''
df_gda16.columns = [cname.replace(' ', '').lower() for cname in df_gda16.columns]
''' Column data normalization '''
df_gda16['tripid'] = SHORT_NAME + df_gda16['tripid'].astype(str).str.replace(' ', '')
df_gda16 = df_gda16.astype({'latitude': 'float', 'longitude': 'float'})
df_gda16 = remove_substandard_trips(df_gda16)
tqdm.pandas(desc='timestamp')
df_gda16['timestamp'] = df_gda16['timestamp'].progress_apply(lambda x: float('nan') if str(x).lower() in ['false', 'nan'] else round(datetime.fromtimestamp(float(x)).timestamp()))
df_gda16 = remove_substandard_trips(df_gda16)
df_gda16 = df_calc_basic(df_gda16)
print('Shape after: ', df_gda16.shape)
df_gda16_generic = pd.read_csv(f'{DATA_ORIGIN_GDANSK_DIR}/generic_2016.csv')
''' Column names normalization '''
df_gda16_generic.columns = [cname.replace(' ', '').lower() for cname in df_gda16_generic.columns]
''' Column data normalization '''
df_gda16_generic['tripid'] = SHORT_NAME + df_gda16_generic['tripid'].apply(lambda x: x.replace(' ', ''))
df_gda16_generic['distance'] = df_gda16_generic['distance'].astype(float)
df_gda16_generic['valid'] = df_gda16_generic[df_gda16_generic['ecc'].notna()]['ecc'].apply(lambda x: False if x == 0 else True)
df_gda16_generic['avgspeed'] = df_gda16_generic['avgspeed'].astype(float)
df_gda16_generic['tracktype'] = df_gda16_generic[df_gda16_generic['tracktype'].notna()]['tracktype'].apply(val2utf8)
df_gda16_generic['male'] = df_gda16_generic[df_gda16_generic['sex'].notna()]['sex'].apply(lambda x: True if str(x).lower() == 'm' else (False if str(x).lower() == 'f' else float('nan')))
df_gda16_generic['yearofbirth'] = df_gda16_generic['year'].apply(val2year)
df_gda16_generic['profession'] = df_gda16_generic[df_gda16_generic['profession'].notna()]['profession'].apply(val2utf8)
df_gda16_generic['frequentuser'] = df_gda16_generic[df_gda16_generic['frequentuser'].notna()]['frequentuser'].apply(lambda x: False if x.lower() in ['no', 'nie'] else True)
df_gda16_generic['zip'] = df_gda16_generic[df_gda16_generic['zip'].notna()]['zip'].apply(val2zip)
df_gda16_generic['source'] = df_gda16_generic[df_gda16_generic['source'].notna()]['source'].apply(val2utf8)
df_gda16_generic['typeofbike'] = df_gda16_generic[df_gda16_generic['typeofbike'].notna()]['typeofbike'].apply(val2utf8)
df_gda16_generic['typeoftrip'] = df_gda16_generic[df_gda16_generic['tipeoftrip'].notna()]['tipeoftrip'].apply(val2utf8)
df_gda16_generic.drop(['timestamp', 'startdt', 'ecc', 'sex', 'year', 'distance', 'avgspeed'], axis=1, inplace=True)
''' Joining generic data with gps data '''
print('Shape before: ', df_gda16_generic.shape)
df_gda16_generic = df_join_generic_with_gps(df_gda16_generic, df_gda16)
print('Shape after: ', df_gda16_generic.shape)
```
### Removing overall columns & records
```
''' DETAIL '''
print('Shape before. 2015:', df_gda15.shape, '2016:', df_gda16.shape)
df_gda15 = get_df_detail_final(df_gda15, df_gda15_generic)
df_gda16 = get_df_detail_final(df_gda16, df_gda16_generic)
print('Shape after. 2015:', df_gda15.shape, '2016:', df_gda16.shape)
''' GENERIC '''
print('Shape before. 2015:', df_gda15_generic.shape, '2016:', df_gda16_generic.shape)
df_gda15_generic = get_df_generic_final(df_gda15_generic, ['tracktype', 'source', 'profession', 'male', 'frequentuser', 'zip', 'yearofbirth', 'valid'])
df_gda16_generic = get_df_generic_final(df_gda16_generic, ['tracktype', 'typeofbike', 'typeoftrip', 'source', 'profession', 'male', 'frequentuser', 'zip', 'yearofbirth', 'valid'])
print('Shape after. 2015:', df_gda15_generic.shape, '2016:', df_gda16_generic.shape)
```
### Datasets concatenation
```
df_gda = pd.concat([df_gda15, df_gda16], ignore_index=True)
df_gda.info()
df_gda_generic = pd.concat([df_gda15_generic, df_gda16_generic], ignore_index=True)
df_gda_generic.info()
```
### Saving operations
```
df_gda.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}.csv', index=False, sep=';')
df_gda_generic.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}_generic.csv', index=False, sep=';')
```
# SE Sodertalje
```
from tqdm import tqdm
from _lib.settings import DATA_ORIGIN_SODERTALIE_DIR
SHORT_NAME = 'sod'
df_sod = pd.read_csv(f'{DATA_ORIGIN_SODERTALIE_DIR}/sodertalje_detail.csv')
print('Shape before: ', df_sod.shape)
''' Column names normalization '''
df_sod.columns = [cname.replace(' ', '').lower() for cname in df_sod.columns]
''' Column data normalization '''
df_sod['tripid'] = SHORT_NAME + df_sod['tripid'].astype(str).str.replace(' ', '')
tqdm.pandas(desc='timestamp')
df_sod['timestamp'] = df_sod['timestamp'].progress_apply(lambda x: round(datetime.fromtimestamp(float(x)).timestamp()))
df_sod.drop(['altitude', 'distance', 'speed', 'type'], axis=1, inplace=True)
df_sod = df_sod.astype({'latitude': 'float', 'longitude': 'float'})
df_sod = remove_substandard_trips(df_sod)
df_sod = df_calc_basic(df_sod)
print('Shape after: ', df_sod.shape)
df_sod_generic = calc_context(df_sod)
print('Shape before: ', df_sod_generic.shape)
df_sod_generic.drop_duplicates(subset=list(set(df_sod_generic.columns.tolist()) - set(['startts', 'endts'])), keep='first', inplace=True)
df_sod_generic = df_sod_generic.reset_index(inplace=False)
print('Shape after: ', df_sod_generic.shape)
```
### Removing overall columns & records
```
''' DETAIL '''
print('Shape before:', df_sod.shape)
df_sod = get_df_detail_final(df_sod, df_sod_generic)
print('Shape after:', df_sod.shape)
''' GENERIC '''
print('Shape before:', df_sod_generic.shape)
df_sod_generic = get_df_generic_final(df_sod_generic, [])
print('Shape after:', df_sod_generic.shape)
df_sod.info()
df_sod_generic.info()
```
### Saving operations
```
df_sod.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}.csv', index=False, sep=';')
df_sod_generic.to_csv(f'{DATA_AFTER_PREPARATION_DIR}/{SHORT_NAME}_generic.csv', index=False, sep=';')
```
<a href="https://www.bigdatauniversity.com"><img src="https://ibm.box.com/shared/static/cw2c7r3o20w9zn8gkecaeyjhgw3xdgbj.png" width="400" align="center"></a>
<h1 align="center"><font size="5">Classification with Python</font></h1>
In this notebook we try to practice all the classification algorithms that we learned in this course.
We load a dataset using the Pandas library, apply the following algorithms, and find the best one for this specific dataset using accuracy evaluation methods.
Let's first load the required libraries:
```
import itertools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import pandas as pd
import matplotlib.ticker as ticker
from sklearn import preprocessing
%matplotlib inline
```
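The introduction says the models will be compared "by accuracy evaluation methods." As a hedged sketch of what such a comparison can look like (the `evaluate` helper and toy labels below are illustrative, not part of the original lab), scikit-learn's standard metrics can be wrapped like this:

```python
from sklearn.metrics import accuracy_score, f1_score, jaccard_score

def evaluate(y_true, y_pred):
    """Return a dict of common classification accuracy metrics."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "jaccard": jaccard_score(y_true, y_pred, pos_label=1),
        "f1": f1_score(y_true, y_pred, pos_label=1),
    }

# toy illustration with hand-made labels
print(evaluate([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))
```

Each fitted model later in the notebook could be scored with the same helper, making the "find the best one" comparison explicit.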
### About dataset
This dataset is about past loans. The __Loan_train.csv__ data set includes details of 346 customers whose loans are already paid off or defaulted. It includes the following fields:
| Field | Description |
|----------------|---------------------------------------------------------------------------------------|
| Loan_status    | Whether a loan is paid off or in collection                                             |
| Principal      | Basic principal loan amount                                                             |
| Terms          | Origination terms, which can be a weekly (7 days), biweekly, or monthly payoff schedule |
| Effective_date | When the loan was originated and took effect                                            |
| Due_date       | Since it's a one-time payoff schedule, each loan has a single due date                  |
| Age | Age of applicant |
| Education | Education of applicant |
| Gender | The gender of applicant |
Let's download the dataset:
```
!wget -O loan_train.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv
```
### Load Data From CSV File
```
df = pd.read_csv('loan_train.csv')
df.head()
df.shape
```
### Convert to date time object
```
df['due_date'] = pd.to_datetime(df['due_date'])
df['effective_date'] = pd.to_datetime(df['effective_date'])
df.head()
```
# Data visualization and pre-processing
Let's see how many of each class are in our data set:
```
df['loan_status'].value_counts()
```
260 people have paid off the loan on time, while 86 have gone into collection.
Let's plot some columns to understand the data better:
```
# notice: installing seaborn might take a few minutes
# !conda install -c anaconda seaborn -y
import seaborn as sns
bins = np.linspace(df.Principal.min(), df.Principal.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'Principal', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
bins = np.linspace(df.age.min(), df.age.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'age', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
```
# Pre-processing: Feature selection/extraction
### Let's look at the day of the week people get the loan
```
df['dayofweek'] = df['effective_date'].dt.dayofweek
bins = np.linspace(df.dayofweek.min(), df.dayofweek.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'dayofweek', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
```
We see that people who get the loan at the end of the week don't pay it off, so let's use feature binarization with a threshold at day 4 (dayofweek > 3 is flagged as the weekend):
```
df['weekend'] = df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)
df.head()
```
## Convert Categorical features to numerical values
Let's look at gender:
```
df.groupby(['Gender'])['loan_status'].value_counts(normalize=True)
```
86% of females pay their loans, while only 73% of males pay theirs.
Let's convert male to 0 and female to 1:
```
df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
df.head()
```
## One Hot Encoding
#### How about education?
```
df.groupby(['education'])['loan_status'].value_counts(normalize=True)
```
#### Features before One Hot Encoding
```
df[['Principal','terms','age','Gender','education']].head()
```
#### Use the one hot encoding technique to convert categorical variables to binary variables and append them to the feature DataFrame
```
Feature = df[['Principal','terms','age','Gender','weekend']]
Feature = pd.concat([Feature,pd.get_dummies(df['education'])], axis=1)
Feature.drop(['Master or Above'], axis = 1,inplace=True)
Feature.head()
```
### Feature selection
Let's define the feature set, X:
```
X = Feature
X[0:5]
```
What are our labels?
```
y = df['loan_status'].values
y[0:5]
```
## Normalize Data
Data standardization gives the data zero mean and unit variance. (Technically, the scaler should be fit on the training data only, after the train/test split.)
```
from sklearn import preprocessing
X = preprocessing.StandardScaler().fit(X).transform(X)
X[0:5]
```
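The parenthetical above matters in practice: fitting the scaler on the full dataset leaks test-set statistics into training. A minimal sketch of the leakage-free pattern, on synthetic stand-in data rather than the loan features:

```python
import numpy as np
from sklearn import preprocessing
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X_demo = rng.normal(loc=5.0, scale=2.0, size=(200, 3))  # synthetic features
X_tr, X_te = train_test_split(X_demo, test_size=0.3, random_state=42)

# Fit on the training split only, then apply the same transform to both splits
scaler = preprocessing.StandardScaler().fit(X_tr)
X_tr_scaled = scaler.transform(X_tr)
X_te_scaled = scaler.transform(X_te)
```

The training split ends up with zero mean and unit variance; the test split is scaled with the training statistics, so it stays untouched by its own distribution.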
# Classification
Now it is your turn: use the training set to build an accurate model, then use the test set to report the accuracy of the model.
You should use the following algorithms:
- K Nearest Neighbor(KNN)
- Decision Tree
- Support Vector Machine
- Logistic Regression
__Notice:__
- You can go back and change the pre-processing, feature selection, feature extraction, and so on, to build a better model.
- You should use scikit-learn, SciPy, or NumPy libraries for developing the classification algorithms.
- You should include the code of the algorithm in the following cells.
## Split dataset
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
```
# K Nearest Neighbor(KNN)
Notice: you should find the best k to build the model with the best accuracy.
**Warning:** you should not use __loan_test.csv__ for finding the best k; however, you can split your train_loan.csv into train and test sets to find the best __k__.
## Find the best k
```
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score # to find best k
def find_best_k(X, y, cv, steps=20):
    k_scores = []
    for k in range(2, steps):
        knn = KNeighborsClassifier(n_neighbors=k)
        scores = cross_val_score(knn, X, y, cv=cv)  # use the cv argument instead of a hard-coded 10
        score = np.median(scores)
        k_scores.append({'k': k, 'score': score})
    return max(k_scores, key=lambda x: x['score'])
best_k = find_best_k(X_train, y_train, 10, 20)
print(best_k)
k = best_k['k']
neigh = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
neigh
knn_yhat = neigh.predict(X_test)
knn_yhat[0:5]
from sklearn import metrics
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, knn_yhat))
```
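The manual search above can also be expressed with scikit-learn's `GridSearchCV`, which runs the same cross-validated sweep in one call. A sketch on synthetic data (not the loan features), with the search range as an assumption:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X_demo, y_demo = make_classification(n_samples=300, n_features=8, random_state=0)

# Sweep k over 2..19 with 10-fold CV, keeping the best mean accuracy
search = GridSearchCV(KNeighborsClassifier(),
                      param_grid={'n_neighbors': list(range(2, 20))},
                      cv=10)
search.fit(X_demo, y_demo)
print(search.best_params_['n_neighbors'], round(search.best_score_, 3))
```

`best_estimator_` is then already refit on the full training data, so the extra `KNeighborsClassifier(...).fit(...)` step becomes unnecessary.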
# Decision Tree
```
from sklearn.tree import DecisionTreeClassifier
Tree = DecisionTreeClassifier(criterion="entropy", max_depth =5)
Tree # it shows the default parameters
Tree.fit(X_train, y_train)
tree_yhat = Tree.predict(X_test)
from sklearn import metrics
print("Train set Accuracy: ", metrics.accuracy_score(y_train, Tree.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, tree_yhat))
```
# Support Vector Machine
```
from sklearn import svm
svm_clf = svm.SVC(kernel='rbf')
svm_clf.fit(X_train, y_train)
svm_yhat = svm_clf.predict(X_test)
svm_yhat [0:5]
from sklearn import metrics
print("Train set Accuracy: ", metrics.accuracy_score(y_train, svm_clf.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, svm_yhat))
```
# Logistic Regression
```
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
LR = LogisticRegression(C=0.01, solver='liblinear').fit(X_train, y_train)
LR
lr_yhat = LR.predict(X_test)
lr_yhat
from sklearn import metrics
print("Train set Accuracy: ", metrics.accuracy_score(y_train, LR.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, lr_yhat))
```
# Model Evaluation using Test set
```
from sklearn.metrics import jaccard_similarity_score
from sklearn.metrics import f1_score
from sklearn.metrics import log_loss
```
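Note: `jaccard_similarity_score` was removed in scikit-learn 0.23 and replaced by `jaccard_score`, which by default computes the Jaccard index of the positive class (a different quantity than the old per-sample average, so it is not a silent drop-in). If the import above fails, the modern equivalents look like:

```python
from sklearn.metrics import jaccard_score, f1_score

y_true = [1, 1, 0, 0]
y_pred = [1, 0, 0, 0]
print(jaccard_score(y_true, y_pred))  # 0.5
print(f1_score(y_true, y_pred))       # 0.666...
```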
First, download and load the test set:
```
!wget -O loan_test.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv
```
### Load Test set for evaluation
```
test_df = pd.read_csv('loan_test.csv')
print(test_df.shape)
test_df.head()
test_df['due_date'] = pd.to_datetime(test_df['due_date'])
test_df['effective_date'] = pd.to_datetime(test_df['effective_date'])
test_df['dayofweek'] = test_df['effective_date'].dt.dayofweek
test_df['weekend'] = test_df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)
test_df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
Feature = test_df[['Principal','terms','age','Gender','weekend']]
Feature = pd.concat([Feature, pd.get_dummies(test_df['education'])], axis=1)
Feature.drop(['Master or Above'], axis = 1, inplace=True)
X = Feature
y = test_df['loan_status'].values
X = preprocessing.StandardScaler().fit(X).transform(X)
le = preprocessing.LabelEncoder()
le.fit(y)  # fit the encoder on the evaluation labels themselves
y = le.transform(y)
knn_yhat = neigh.predict(X)
tree_yhat = Tree.predict(X)
svm_yhat = svm_clf.predict(X)
lr_yhat = LR.predict(X)
```
# Report
You should be able to report the accuracy of the built model using different evaluation metrics:
| Algorithm | Jaccard | F1-score | LogLoss |
|--------------------|---------|----------|---------|
| KNN | 0.64 | 0.77 | NA |
| Decision Tree | 0.72 | 0.82 | NA |
| SVM | 0.77 | 0.86 | NA |
| LogisticRegression | 0.72 | 0.82 | 9.56 |
```
def calculate_metrics(y_test, y_pred, model=''):
    print('jaccard_similarity_score:', jaccard_similarity_score(y_test, y_pred))
    print('f1_score:', f1_score(y_test, y_pred))
    if model == 'Logistic Regression':
        # Note: log_loss is normally computed on predicted probabilities
        # (LR.predict_proba), not on hard 0/1 predictions
        print('log_loss', log_loss(y_test, y_pred))
y.shape, knn_yhat.shape
print('Knn')
calculate_metrics(y, le.transform(knn_yhat))
print('____________________________________')
print('Decision Tree')
calculate_metrics(y, le.transform(tree_yhat))
print('____________________________________')
print('SVM')
calculate_metrics(y, le.transform(svm_yhat))
print('____________________________________')
print('Logistic Regression')
calculate_metrics(y, le.transform(lr_yhat), model='Logistic Regression')
```
<h2>Want to learn more?</h2>
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="http://cocl.us/ML0101EN-SPSSModeler">SPSS Modeler</a>
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://cocl.us/ML0101EN_DSX">Watson Studio</a>
<h3>Thanks for completing this lesson!</h3>
<h4>Author: <a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a></h4>
<p><a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a>, PhD is a Data Scientist in IBM with a track record of developing enterprise level applications that substantially increases clients’ ability to turn data into actionable knowledge. He is a researcher in data mining field and expert in developing advanced analytic methods like machine learning and statistical modelling on large datasets.</p>
<hr>
<p>Copyright © 2018 <a href="https://cocl.us/DX0108EN_CC">Cognitive Class</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.</p>
```
# Import the data we will use
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases'
'/breast-cancer-wisconsin/wdbc.data', header=None)
from sklearn.preprocessing import LabelEncoder
X = df.loc[:, 2:].values
y = df.loc[:, 1].values
le = LabelEncoder()
y = le.fit_transform(y)  # encode the string labels as integers
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.20, stratify=y, random_state=1)
# Pipelines
## The same transformations can be applied to the training and test sets through a single pipeline
# make_pipeline: chains several transformers (objects supporting fit and transform methods)
# The last element of the pipeline must be an estimator
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
pipe_lr = make_pipeline(StandardScaler(),
PCA(n_components=2),
LogisticRegression(solver='liblinear', random_state=1))
pipe_lr.fit(X_train, y_train)
y_pred = pipe_lr.predict(X_test)
print('Test accuracy: %.3f' % pipe_lr.score(X_test, y_test))
# Cross-validation: holdout cross-validation, k-fold cross-validation
# Holdout cross-validation
## Traditional approach: split the data into train/validation/test sets, select models and
## measure performance on the validation set, and estimate generalization on the test set
# k-fold cross-validation
## Split the training set into k folds, train on k-1 folds and evaluate on the remaining one,
## repeated k times
## Average the k scores for the model's performance; the final estimate still uses the test set
## Empirically, k=10 is a good default (Ron Kohavi, 1995)
# Stratified k-fold cross-validation
## Makes the class proportions in each fold match those of the full training set
## Can be built with StratifiedKFold
import numpy as np
from sklearn.model_selection import StratifiedKFold
kfold = StratifiedKFold(n_splits=10,
random_state=1, shuffle = True).split(X_train, y_train)
scores = []
for k, (train, test) in enumerate(kfold):
    pipe_lr.fit(X_train[train], y_train[train])
    score = pipe_lr.score(X_train[test], y_train[test])
    scores.append(score)
    print('Fold: %2d, class dist.: %s, accuracy: %.3f' % (k+1,
          np.bincount(y_train[train]), score))
print('\nCV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
# A simpler way to run k-fold cross-validation in scikit-learn
from sklearn.model_selection import cross_validate
scores = cross_validate(estimator=pipe_lr, X=X_train, y=y_train,
scoring=['accuracy'], cv=10, n_jobs=-1, return_train_score=False)
print('CV accuracy scores: %s' % scores['test_accuracy'])
print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores['test_accuracy']),
                                      np.std(scores['test_accuracy'])))
# Using scikit-learn's learning_curve function
import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve
pipe_lr = make_pipeline(StandardScaler(),
LogisticRegression(solver='liblinear', penalty='l2', random_state=1))
train_sizes, train_scores, test_scores = learning_curve(estimator=pipe_lr,
X=X_train, y=y_train, train_sizes=np.linspace(0.1, 1.0, 10),
cv=10, n_jobs=-1)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_mean,
color='blue', marker='o', markersize=5, label='training accuracy')
plt.fill_between(train_sizes, train_mean + train_std, train_mean - train_std,
alpha=0.15, color='blue')
plt.plot(train_sizes, test_mean,
color='green', linestyle='--', marker='s', markersize=5, label='validation accuracy')
plt.fill_between(train_sizes, test_mean + test_std, test_mean - test_std,
alpha=0.15, color='green')
plt.grid()
plt.xlabel('Number of training samples')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.ylim([0.8, 1.03])
plt.tight_layout()
plt.show()
# Diagnosing over/underfitting with validation curves
## validation_curve: plots accuracy against parameter values instead of training-set sizes
from sklearn.model_selection import validation_curve
param_range = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
train_scores, test_scores = validation_curve(
estimator=pipe_lr, X=X_train, y=y_train,
param_name='logisticregression__C', param_range=param_range, cv=10)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(param_range, train_mean,
color='blue', marker='o', markersize=5, label='training accuracy')
plt.fill_between(param_range, train_mean + train_std, train_mean - train_std,
alpha=0.15, color='blue')
plt.plot(param_range, test_mean,
color='green', linestyle='--', marker='s', markersize=5, label='validation accuracy')
plt.fill_between(param_range, test_mean + test_std, test_mean - test_std,
alpha=0.15, color='green')
plt.grid()
plt.xscale('log')
plt.legend(loc='lower right')
plt.xlabel('Parameter C')
plt.ylabel('Accuracy')
plt.ylim([0.8, 1.00])
plt.tight_layout()
plt.show()
```
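The `param_name='logisticregression__C'` passed to `validation_curve` above works because `make_pipeline` names each step after its lowercased class name, and the `<step>__<param>` convention routes the parameter to that step. A minimal sketch:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipe = make_pipeline(StandardScaler(), LogisticRegression(solver='liblinear'))
print(list(pipe.named_steps))  # ['standardscaler', 'logisticregression']

# The same naming convention works with set_params:
pipe.set_params(logisticregression__C=10.0)
print(pipe.named_steps['logisticregression'].C)  # 10.0
```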
```
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.3
tf.Session(config=config)
import keras
from keras.models import *
from keras.layers import *
from keras import optimizers
from keras.applications.resnet50 import ResNet50
from keras.applications.vgg16 import VGG16
from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.backend import tf as ktf
from keras.callbacks import EarlyStopping
from tqdm import tqdm
import numpy as np
import pandas as pd
import sys
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from utils import *
%matplotlib inline
from jupyterthemes import jtplot
jtplot.style()
```
### Data pipeline
```
%%time
X_train = np.load('data/processed/X_train.npy')
print(X_train.shape)
Y_train = np.load('data/processed/Y_train.npy')
print(Y_train.shape)
X_test = np.load('data/processed/X_test.npy')
print(X_test.shape)
# train_data = np.load('models/bottleneck_features_train.npy')
# validation_data = np.load('models/bottleneck_features_validation.npy')
# test_data = np.load('models/bottleneck_features_test.npy')
# X_train, X_dev, Y_train, Y_dev = train_test_split(X_train, Y_train, test_size=0.25, random_state=0)
train_datagen = ImageDataGenerator(
rotation_range = 10,
horizontal_flip = True,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range=0.2,
zoom_range = 0.2,
fill_mode='nearest')
dev_datagen = ImageDataGenerator(rescale=1./255)
def aug_data(X_train, Y_train, batch_count):
    X, Y = [], []
    count = 0
    for bx, by in train_datagen.flow(X_train, Y_train, batch_size=64):
        for x, y in zip(bx, by):
            X.append(x)
            Y.append(y)
        count += 1
        print(count, end='\r')
        if count > batch_count:
            break
    X = np.asarray(X)
    Y = np.asarray(Y)
    return X, Y
# X, Y = aug_data(X_train, Y_train, 500)
# X = np.load('data/preprocess/X_aug.npy')
# Y = np.load('data/preprocess/Y_aug.npy')
def top_model(input_shape):
    input_img = Input(input_shape)
    X = GlobalAveragePooling2D()(input_img)
    # X = Flatten(input_shape=input_shape)(input_img)
    X = Dropout(0.2)(X)
    X = Dense(1024, activation='relu')(X)
    X = Dropout(0.5)(X)
    X = Dense(1024, activation='relu')(X)
    X = Dropout(0.5)(X)
    X = Dense(120, activation='softmax')(X)
    model = Model(inputs=input_img, outputs=X)
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])
    return model
```
### VGG
```
vgg_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3), classes=1)
type(vgg_model)
# vgg_train_bf = vgg_model.predict(X_train, verbose=1)
# vgg_test_bf = vgg_model.predict(X_test, verbose=1)
# np.save('data/processed/vgg_test_bf.npy', vgg_test_bf)
# np.save('data/processed/vgg_train_bf.npy', vgg_train_bf)
vgg_train_bf = np.load('data/processed/vgg_train_bf.npy')
vgg_test_bf = np.load('data/processed/vgg_test_bf.npy')
vggtop_model = top_model(vgg_train_bf.shape[1:])
vggtop_history = vggtop_model.fit(vgg_train_bf, Y_train, batch_size=100, epochs=30, validation_split=0.2,
callbacks=[EarlyStopping(monitor='val_acc', patience=3, verbose=1)])
plot_training(vggtop_history)
```
## ResNet
```
# base_model = ResNet50(input_tensor=Input((224, 224, 3)), weights='imagenet', include_top=False)
# train_bf = base_model.predict(X_train, verbose=1)
# test_bf = base_model.predict(X_test, verbose=1)
# np.save('data/processed/res_test_bf.npy', test_bf)
# np.save('data/processed/res_train_bf.npy', train_bf)
res_train_bf = np.load('data/processed/res_train_bf.npy')
res_test_bf = np.load('data/processed/res_test_bf.npy')
restop_model = top_model(res_train_bf.shape[1:])
restop_history = restop_model.fit(res_train_bf, Y_train, batch_size=100, epochs=30, validation_split=0.2,
callbacks=[EarlyStopping(monitor='val_acc', patience=3, verbose=1)])
plot_training(restop_history)
```
## InceptionV3
```
# inception_model = InceptionV3(input_tensor=Input((224, 224, 3)), weights='imagenet', include_top=False)
# inc_train_bf = inception_model.predict(X, verbose=1)
# inc_test_bf = inception_model.predict(X_test, verbose=1)
# np.save('data/processed/inc_test_bf.npy', inc_test_bf)
# np.save('data/processed/inc_train_bf.npy', inc_train_bf)
%%time
inc_train_bf = np.load('data/processed/inc_train_bf.npy')
Y = np.load('data/processed/Y_aug.npy')
inc_test_bf = np.load('data/processed/inc_test_bf.npy')
inctop_model = top_model(inc_train_bf.shape[1:])
inc_history = inctop_model.fit(inc_train_bf, Y, batch_size=100, epochs=25, validation_split=0.2,
callbacks=[EarlyStopping(monitor='val_acc', patience=3, verbose=1)])
plot_training(inc_history)
inctop_model.save_weights('models/weights/inctop1.h5')
inctop_model.load_weights('models/weights/inctop1.h5')
from keras import backend as K
K.set_value(inctop_model.optimizer.lr, 0.1)  # assigning .lr directly does not update the optimizer
inc_history = inctop_model.fit(inc_train_bf, Y, batch_size=100, epochs=20, validation_split=0.2,
callbacks=[EarlyStopping(monitor='val_acc', patience=3, verbose=1)])
plot_training(inc_history)
inctop_model.save_weights('models/weights/inctop2.h5')
inctop_model.load_weights('models/weights/inctop2.h5')
from keras import backend as K
K.set_value(inctop_model.optimizer.lr, 0.01)
inc_history = inctop_model.fit(inc_train_bf, Y, batch_size=100, epochs=5, validation_split=0.2)
plot_training(inc_history)
inctop_model.save_weights('models/weights/inctop3.h5')
from keras import backend as K
K.set_value(inctop_model.optimizer.lr, 0.001)
inc_history = inctop_model.fit(inc_train_bf, Y, batch_size=100, epochs=5, validation_split=0.2)
plot_training(inc_history)
inctop_model.load_weights('models/weights/inctop4.h5')
inctop_model.evaluate(inc_train_bf, Y)
```
## Fine tuning
```
def ft_model(base_model, top_model_weights_path):
    top = top_model(base_model.output_shape[1:])
    top.load_weights(top_model_weights_path)
    # x = base_model.predict(X_train)
    # print(top.evaluate(x, Y_train))
    ft_model = Model(inputs=base_model.inputs, outputs=top(base_model.output))
    ft_model.compile(loss='categorical_crossentropy',
                     optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
                     metrics=['accuracy'])
    return ft_model
inception_model = InceptionV3(input_tensor=Input((224, 224, 3)), weights='imagenet', include_top=False)
for layer in inception_model.layers[:299]:
    layer.trainable = False
# inc_train_bf = inception_model.predict(X_train, verbose=1)
inc_ft_model = ft_model(inception_model, 'models/inctop_model.h5')
# inc_ft_model.evaluate(X_train, Y_train)
# inc_ft_model.summary()
inc_ft_history = inc_ft_model.fit(X_train, Y_train, batch_size=50, epochs=20, validation_split=0.2,
callbacks=[EarlyStopping(monitor='val_acc', patience=3, verbose=1)])
plot_training(inc_ft_history)
batch_size = 50
# generator of augmented training batches for fine-tuning
train_generator = train_datagen.flow(X_train, Y_train, batch_size=batch_size)
inc_ft_model2 = ft_model(inception_model, 'models/inctop_model.h5')
inc_ft_model2.fit_generator(
    train_generator,
    steps_per_epoch=X_train.shape[0] // batch_size,
    epochs=20,
    verbose=1)
```
## Prediction
```
preds = inctop_model.predict(inc_test_bf, verbose=1, batch_size=16)
df_train = pd.read_csv('labels.csv')
df_test = pd.read_csv('sample_submission.csv')
one_hot = pd.get_dummies(df_train['breed'], sparse = True)
sub = pd.DataFrame(preds)
sub.columns = one_hot.columns.values
sub.insert(0, 'id', df_test['id'])
sub.to_csv('sub.csv', index=False)
```
```
import ase
import numpy as np
import cPickle as pck
from ase.visualize import view
import quippy as qp
def qp2ase(qpatoms):
    from ase import Atoms as aseAtoms
    positions = qpatoms.get_positions()
    cell = qpatoms.get_cell()
    numbers = qpatoms.get_atomic_numbers()
    pbc = qpatoms.get_pbc()
    atoms = aseAtoms(numbers=numbers, cell=cell, positions=positions, pbc=pbc)
    for key, item in qpatoms.arrays.iteritems():
        if key in ['positions', 'numbers', 'species', 'map_shift', 'n_neighb']:
            continue
        atoms.set_array(key, item)
    return atoms
def ase2qp(aseatoms):
    from quippy import Atoms as qpAtoms
    positions = aseatoms.get_positions()
    cell = aseatoms.get_cell()
    numbers = aseatoms.get_atomic_numbers()
    pbc = aseatoms.get_pbc()
    return qpAtoms(numbers=numbers, cell=cell, positions=positions, pbc=pbc)
fn = '../structures/partial_input_crystals_sg3-230.pck'
with open(fn, 'rb') as f:
    crystals = pck.load(f)
cc = crystals[190][0]
cell = cc.get_cell()
ir_cell = cc.get_reciprocal_cell()
print np.linalg.norm(cell,axis=1)
print cc.get_cell_lengths_and_angles()
print np.linalg.norm(ir_cell,axis=1)
ir_l = np.linalg.norm(ir_cell,axis=1)
dens = 20
nb = np.array(ir_l * dens,dtype=np.int64)
print nb,nb[0]*nb[1]*nb[2] / cc.get_number_of_atoms()
cc = unskewCell(crystals[206][0])
cell = cc.get_cell()
ir_cell = cc.get_reciprocal_cell()
ir_l = np.linalg.norm(ir_cell,axis=1)
n_kpt = 1000 / cc.get_number_of_atoms()
n_reg = np.power(n_kpt,1./3.).round()
mid = list(set(range(3)).difference([ir_l.argmin(),ir_l.argmax()]))
ir_l = ir_l / ir_l[mid]
nb = np.array(np.ceil(n_reg*ir_l),dtype=np.int64)
print ir_l
print nb
print cc.get_cell_lengths_and_angles()[0:3]
print nb[0]*nb[1]*nb[2],n_kpt
def get_kpts(frame, Nkpt=1000):
    ir_cell = frame.get_reciprocal_cell()
    ir_l = np.linalg.norm(ir_cell, axis=1)
    n_kpt = float(Nkpt) / frame.get_number_of_atoms()
    n_reg = np.power(n_kpt, 1./3.).round()
    # find the middle value to normalize with it
    mid = list(set(range(3)).difference([ir_l.argmin(), ir_l.argmax()]))
    ir_l = ir_l / ir_l[mid]
    # get the number of k-points per direction
    nb = np.array(np.ceil(n_reg * ir_l), dtype=np.int64)
    # make sure there are at least 3 k-points per direction
    nb[nb < 3] = 3
    return nb
np.round?
cc = crystals[11][2]
print cc.get_cell_lengths_and_angles()
view(cc)
dd = ase2qp(cc)
dd.set_cutoff(20.)
dd.unskew_cell()
view(dd)
dd.wrap()
view(dd)
ee = qp2ase(dd)
print ee.get_cell_lengths_and_angles()
np.cos(40*np.pi/180.)
print cc.get_cell_lengths_and_angles()[3:]
np.all(np.abs(np.cos(cc.get_cell_lengths_and_angles()[3:]*np.pi/180.))>=0.5)
def isCellSkewed(frame):
    params = np.abs(np.cos(frame.get_cell_lengths_and_angles()[3:]*np.pi/180.))
    return np.any(params >= 0.5)
def unskewCell(frame):
    if isCellSkewed(frame):
        dd = ase2qp(frame)  # convert the input frame, not a global variable
        dd.set_cutoff(20.)
        dd.unskew_cell()
        dd.wrap()
        return qp2ase(dd)
    else:
        return frame.copy()
cc = crystals[11][2]
view(cc)
ee = unskewCell(cc)
view(ee)
```
```
%matplotlib inline
import csv
import random
import numpy as np
from sklearn.feature_extraction.text import *
import pickle
import tensorflow as tf
import nn_model
from sklearn.metrics import label_ranking_loss
from collections import Counter, defaultdict
import matplotlib.pyplot as plt
from operator import itemgetter
from sklearn.metrics import f1_score
```
## Data
The [MIMIC III dataset](https://mimic.physionet.org/gettingstarted/overview/) was developed by the MIT Lab for Computational Physiology; it contains de-identified health records from about 53,000 patients who stayed in critical care units between 2001 and 2012. MIMIC-III includes several types of clinical notes, including discharge summaries (n = 52,746) labelled with ICD-9 codes (International Classification of Diseases).
The database has 26 tables; we downloaded them and created a local Postgres database for further exploration. We will mostly use two tables: clinical notes (NOTEEVENTS) and the ICD-9 codes assigned to diagnoses (DIAGNOSES_ICD).
Previous research did not work with all ICD-9 codes but only with those used most often in diagnoses. We identified the top 20 ICD-9 codes based on the number of patients with each label (DIAGNOSES_ICD) and filtered the discharge summary reports that have those codes assigned (via a JOIN with NOTEEVENTS and further filtering). This pre-processing was done in the Postgres database (for more details please see the project report) and the results exported to the **dis_notes_icd9.csv** file.
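The Postgres step described above (rank codes by patient count, then keep the records carrying those codes) can be sketched in pandas on toy data. The column names below mirror MIMIC-III's `DIAGNOSES_ICD`, but the records themselves are invented:

```python
import pandas as pd

# Toy stand-in for DIAGNOSES_ICD: one row per (patient, assigned ICD-9 code)
diagnoses = pd.DataFrame({
    'subject_id': [1, 1, 2, 2, 3, 3, 4],
    'icd9_code':  ['4019', '25000', '4019', '496', '4019', '25000', '0389'],
})

# Top-N codes ranked by number of distinct patients
top_n = 2
top_codes = (diagnoses.groupby('icd9_code')['subject_id']
             .nunique()
             .nlargest(top_n)
             .index.tolist())
print(top_codes)  # ['4019', '25000']

# Keep only diagnosis rows whose code is in the top-N set
filtered = diagnoses[diagnoses['icd9_code'].isin(top_codes)]
```

In the real pipeline the same top-20 set is joined against NOTEEVENTS to select the discharge summaries; that join is omitted here.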
```
with open('psql_files/dis_notes_icd9.csv', 'rb') as csvfile:
    discharge_notes_reader = csv.reader(csvfile)
    discharge_notes_list = list(discharge_notes_reader)
random.shuffle(discharge_notes_list)
print "Number of records in the dataset: ", len(discharge_notes_list)
```
### Sample of a discharge summary and its ICD-9 codes
```
print 'Sample of a discharge note:'
print "-" *100
admission_id, subject_id, discharge_date, note_text, icd9_codes = discharge_notes_list[1]
print "Admission id: ", admission_id
print "Subject id:", subject_id
print "Discharge date:", discharge_date
print "ICD9 codes assigned to this discharge summary: ", icd9_codes
print "-" *100
print "Discharge Summary Clinical Note: "
print "-" *100
#print note_text
print "MIMIC data is visible to only authorized users"
```
### Some Data explorations
```
notes= [row[3] for row in discharge_notes_list]
labels = [row[4] for row in discharge_notes_list]
#counts by icd9_codes
icd9_codes = Counter()
for label in labels:
    for icd9_code in label.split():
        icd9_codes[icd9_code] += 1
print icd9_codes
codes_counts =icd9_codes.items()
codes_counts.sort(key=itemgetter(1), reverse=True)
icd9_labels, values = zip(*codes_counts)
indexes = np.arange(len(icd9_labels))
plt.rcdefaults()
fig,ax = plt.subplots()
ax.barh(indexes, values, align='center', color='green', ecolor='black')
ax.set_yticks(indexes)
ax.set_yticklabels(icd9_labels)
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('counts')
ax.set_ylabel('ICD9 code')
ax.set_title('Top 20 ICD9 codes and its counts')
plt.show()
notes_length = [len(note) for note in notes]
print "Discharge Summary Clinical Notes median: %10.2f" %(np.median(notes_length))
print "Discharge Summary Clinical Notes mean: %10.2f " % (np.mean(notes_length))
n, bins, patches = plt.hist(notes_length, 50, normed=1, facecolor='green', alpha=0.75)
plt.xlabel('Discharge Summary Clinical Note length')
plt.title('Histogram of Discharge Summary Clinical Note lengths')
plt.grid(True)
plt.show()
```
# Simple Baseline
This is a multilabel classification problem, since each discharge summary has several ICD-9 codes assigned. The simplest baseline is to predict the same top-N ICD-9 codes for every discharge summary.
### Splitting the data into training, dev and test sets
```
def split_file(data, train_frac=0.7, dev_frac=0.15):
    train_split_idx = int(train_frac * len(data))
    dev_split_idx = int((train_frac + dev_frac) * len(data))
    train_data = data[:train_split_idx]
    dev_data = data[train_split_idx:dev_split_idx]
    test_data = data[dev_split_idx:]
    return train_data, dev_data, test_data
train_data_notes, dev_data_notes, test_data_notes = split_file (notes)
train_data_labels, dev_data_labels, test_data_labels = split_file (labels)
print 'Training set samples:', len (train_data_notes)
print 'Dev set samples:', len (dev_data_notes)
print 'Test set samples:', len (test_data_notes)
```
### Finding the list of unique ICD-9 codes and the top 4
```
# finding out the top icd9 codes
top_4_icd9 = icd9_codes.most_common(4)
print "most common 4 icd9_codes: ", top_4_icd9
top_4_icd9_label = ' '.join(code for code,count in top_4_icd9 )
print 'label for the top 4 icd9 codes: ', top_4_icd9_label
# list of unique icd9_codes and lookups for its index in the vector
unique_icd9_codes = list (icd9_codes)
index_to_icd9 = dict(enumerate(unique_icd9_codes))
icd9_to_id = {v:k for k,v in index_to_icd9.iteritems()}
print 'List of unique icd9 codes from all labels: ', unique_icd9_codes
```
### Converting icd9 labels to vectors
Each discharge note label is a list of icd9 codes, for example:
```
4019 25000 496 4280
```
We will represent it as a binary vector for the multilabel classification process; the vector would look like:
```
[0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
```
```
#transforming list of icd_codes into a vector
def get_icd9_array(icd9_codes):
    icd9_index_array = [0] * len(unique_icd9_codes)
    for icd9_code in icd9_codes.split():
        index = icd9_to_id[icd9_code]
        icd9_index_array[index] = 1
    return icd9_index_array
#top 4 common icd9 to vector
icd9_prediction_vector = get_icd9_array(top_4_icd9_label)
print 'icd9 prediction vector: ', icd9_prediction_vector
# true icd9 codes to vector
train_data_labels_vector= list(map(get_icd9_array, train_data_labels))
dev_data_labels_vector = list(map(get_icd9_array, dev_data_labels))
print 'example of a training label vector: ', train_data_labels_vector[0]
## assign icd9_prediction_vector to every discharge
train_y_hat_baseline = [icd9_prediction_vector]* len (train_data_labels_vector)
dev_y_hat_baseline = [icd9_prediction_vector]* len (dev_data_labels_vector)
```
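The hand-rolled `get_icd9_array` above is equivalent to scikit-learn's `MultiLabelBinarizer`, which also fixes the code-to-column mapping for you (sorted lexicographically). A sketch on toy labels:

```python
from sklearn.preprocessing import MultiLabelBinarizer

labels_demo = [['4019', '25000'], ['496'], ['4019', '4280']]
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels_demo)
print(mlb.classes_)  # column order: codes sorted as strings
print(Y)
```

The same fitted binarizer can then be reused on the dev and test labels, so all splits share one consistent column order.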
### Performance evaluation
There are different types of metrics for evaluating a multilabel classification model.
Here are some of them: http://scikit-learn.org/stable/modules/model_evaluation.html#label-ranking-loss
We will use the label ranking loss to compare with the NN baseline, which in turn is compared with the NN built in the paper "ICD-9 Coding of Discharge Summaries".
```
training_ranking_loss = label_ranking_loss(train_data_labels_vector, train_y_hat_baseline)
print "Training ranking loss: ", training_ranking_loss
dev_ranking_loss = label_ranking_loss(dev_data_labels_vector, dev_y_hat_baseline)
print "Development ranking loss: ", dev_ranking_loss
flat_bb_y_hat = [item for sublist in train_y_hat_baseline for item in sublist]
flat_bb_label = [item for sublist in train_data_labels_vector for item in sublist]
bb_f1_score = f1_score(flat_bb_label, flat_bb_y_hat)
print "Train F1 score: ",bb_f1_score
flat_bb_y_hat = [item for sublist in dev_y_hat_baseline for item in sublist]
flat_bb_label = [item for sublist in dev_data_labels_vector for item in sublist]
bb_f1_score = f1_score(flat_bb_label, flat_bb_y_hat)
print "Dev F1 score: ",bb_f1_score
```
# NN Baseline
This baseline is based on the Neural Network model implemented in this research paper: [ICD-9 Coding of Discharge Summaries](https://www.google.com/url?q=https%3A%2F%2Fcs224d.stanford.edu%2Freports%2Flukelefebure.pdf).
We will use the same input dataset as the one used by the simple baseline, but take only 10,000 records so that performance results are comparable with the NN implemented in the referenced paper.
```
# the full file needs about 8GB of memory, so let's work with the first 10,000 records
#discharge_notes= np.asarray(discharge_notes_list)
discharge_notes_nparray= np.asarray(discharge_notes_list[0:10000])
print 'Number of discharge clinical notes: ', len(discharge_notes_nparray)
discharge_notes= discharge_notes_nparray[:,3]
discharge_labels = discharge_notes_nparray[:,4]
train_notes, dev_notes, test_notes = split_file (discharge_notes)
train_labels, dev_labels, test_labels = split_file (discharge_labels)
print 'Training set samples:', len (train_notes)
print 'Dev set samples:', len (dev_notes)
print 'Test set samples:', len (test_notes)
```
### TF-IDF Representation of discharge clinical notes
Previous research represents these documents/notes as bag-of-words vectors [1].
In particular, it takes the 10,000 tokens with the largest tf-idf scores from the training set.
[1] Diagnosis code assignment: models and evaluation metrics. Journal of the American Medical Informatics Association.
```
max_number_features = 10000
# TfidfVectorizer
# Convert all characters to lowercase before tokenizing (by default)
# tokenization (by default)
# max_features: consider the top max_features ordered by term frequency across the corpus
vectorizer = TfidfVectorizer(max_features=max_number_features,stop_words='english',max_df=0.9 )
train_notes_vector = vectorizer.fit_transform(train_notes)
dev_notes_vector = vectorizer.transform(dev_notes)
```
### Transforming list of ICD codes to vectors
```
train_labels_vector= list(map(get_icd9_array, train_labels))
dev_labels_vector = list(map(get_icd9_array, dev_labels))
test_labels_vector = list(map(get_icd9_array, test_labels))
```
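`get_icd9_array` is defined earlier in the notebook; conceptually it maps a record's list of ICD-9 codes to a multi-hot vector over `unique_icd9_codes`. A minimal sketch under that assumption (the code vocabulary below is illustrative, not taken from the dataset):

```python
import numpy as np

# Hypothetical ICD-9 vocabulary; the notebook builds `unique_icd9_codes`
# from the full label set earlier, so these values are illustrative only.
unique_icd9_codes = ["401.9", "428.0", "427.31", "414.01"]
code_to_idx = {c: i for i, c in enumerate(unique_icd9_codes)}

def get_icd9_array(label_string):
    """Turn a space-separated list of ICD-9 codes into a multi-hot vector."""
    vec = np.zeros(len(unique_icd9_codes), dtype=int)
    for code in label_string.split():
        if code in code_to_idx:
            vec[code_to_idx[code]] = 1
    return vec

get_icd9_array("428.0 401.9")  # multi-hot over the 4-code vocabulary
```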
### Neural Network for Multilabel classification
```
def run_epoch(lm, session, X, y, batch_size):
    for batch in range(0, X.shape[0], batch_size):
        X_batch = X[batch : batch + batch_size]
        y_batch = y[batch : batch + batch_size]
        feed_dict = {lm.x: X_batch, lm.target_y: y_batch}
        loss, train_op_value = session.run([lm.loss, lm.train], feed_dict=feed_dict)

def predict_icd9_codes(lm, session, x_data, y_dim):
    total_y_hat = []
    for batch in range(0, x_data.shape[0], batch_size):
        X_batch = x_data[batch : batch + batch_size]
        y_hat_out = session.run(lm.y_hat, feed_dict={lm.x: X_batch})
        total_y_hat.extend(y_hat_out)
    return total_y_hat
# Build TensorFlow graphs (importlib.reload replaces the Python 2 reload builtin)
import importlib
importlib.reload(nn_model)

# Model parameters
Hidden_dims = [100]
learning_rate = 0.01
y_dim = len(unique_icd9_codes)
model_params = dict(Hidden_dims=Hidden_dims, learning_rate=learning_rate,
                    vocabulary_size=max_number_features, y_dim=y_dim)
lm = nn_model.NNLM(**model_params)
lm.BuildCoreGraph()
lm.BuildTrainGraph()

X = train_notes_vector.todense()
y = train_labels_vector
batch_size = 50
num_epochs = 50

with lm.graph.as_default():
    initializer = tf.global_variables_initializer()

with tf.Session(graph=lm.graph) as session:
    session.run(initializer)
    # Training
    for epoch_num in range(num_epochs):
        run_epoch(lm, session, X, y, batch_size)
    # Prediction using training and dev data
    train_y_hat = predict_icd9_codes(lm, session, train_notes_vector.todense(), y_dim)
    dev_y_hat = predict_icd9_codes(lm, session, dev_notes_vector.todense(), y_dim)
```
### Performance Evaluation
We first use the [ranking loss metric](http://scikit-learn.org/stable/modules/model_evaluation.html#label-ranking-loss) to compare results with the paper referenced below. Our results matched: we obtained the same ranking loss values for a file of 10,000 records using only the top 20 ICD-9 codes.
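For intuition, ranking loss is the average fraction of label pairs that are incorrectly ordered, i.e. an irrelevant label scored above a relevant one. A small self-contained example (this toy data is ours, not the notebook's):

```python
import numpy as np
from sklearn.metrics import label_ranking_loss

y_true = np.array([[1, 0, 0], [0, 0, 1]])
y_score = np.array([[0.75, 0.5, 1.0], [1.0, 0.2, 0.1]])
# Sample 1 mis-orders 1 of 2 pairs, sample 2 mis-orders both pairs
print(label_ranking_loss(y_true, y_score))  # 0.75
```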
```
# Ranking loss
training_ranking_loss = label_ranking_loss(train_labels_vector, train_y_hat)
print("Training ranking loss:", training_ranking_loss)
dev_ranking_loss = label_ranking_loss(dev_labels_vector, dev_y_hat)
print("Development ranking loss:", dev_ranking_loss)
```
The paper "ICD-9 Coding of Discharge Summaries" worked with the MIMIC II database to classify ICD-9 codes; the ranking loss metrics it reports are:
<img src="paper_ranking_loss_scores.png">
```
# Choosing a threshold
def get_hot_vector(probs_list, threshold):
    vector = []
    for prob in probs_list:
        hot = [1 if p > threshold else 0 for p in prob]
        vector.append(hot)
    return vector

# F1 score
threshold = 0.25
hot_y_hat = get_hot_vector(train_y_hat, threshold)
# Flatten predictions and labels before scoring
flat_y_hat = [item for sublist in hot_y_hat for item in sublist]
flat_train_label = [item for sublist in train_labels_vector for item in sublist]
training_f1_score = f1_score(flat_train_label, flat_y_hat)
print("Training F1 score:", training_f1_score)

hot_dev_y_hat = get_hot_vector(dev_y_hat, threshold)
flat_dev_y_hat = [item for sublist in hot_dev_y_hat for item in sublist]
flat_dev_label = [item for sublist in dev_labels_vector for item in sublist]
dev_f1_score = f1_score(flat_dev_label, flat_dev_y_hat)
print("Dev F1 score:", dev_f1_score)
```
# Loss and Regularization
```
%load_ext autoreload
%autoreload 2
import numpy as np
from numpy import linalg as nplin
from cs771 import plotData as pd
from cs771 import optLib as opt
from sklearn import linear_model
from matplotlib import pyplot as plt
from matplotlib.ticker import MaxNLocator
import random
```
**Loading Benchmark Datasets using _sklearn_**: the _sklearn_ library, in addition to providing methods for various ML problems like classification, regression and clustering, also offers facilities to download various datasets. We will use the _Boston Housing_ dataset, which requires us to predict house prices in the city of Boston using 13 features such as crime rates, pollution levels and education facilities. Check this [[link]](https://scikit-learn.org/stable/datasets/index.html#boston-dataset) to learn more.
**Caution**: when executing the dataset download statement for the first time, sklearn will attempt to download this dataset from an internet source, so make sure you have a working internet connection at that point or the statement will fail. Once the dataset has been downloaded, it is cached and you will not have to download it again. Also note that `load_boston` was removed from scikit-learn in version 1.2, so this cell requires an older release.
```
from sklearn.datasets import load_boston
(X, y) = load_boston( return_X_y=True )
(n, d) = X.shape
print( "This dataset has %d data points and %d features" % (n,d) )
print( "The mean value of the (real-valued) labels is %.2f" % np.mean(y) )
```
**Experiments with Ridge Regression**: we first use ridge regression (which uses the least squares loss and $L_2$ regularization) to try and solve this problem. We will try out a variety of regularization parameters ranging across 15 orders of magnitude, from $10^{-4}$ all the way to $10^{11}$. Note that as the regularization parameter increases, the model norm drops significantly, so that at extremely high levels of regularization the learnt model is almost a zero vector. Naturally, such a trivial model offers poor predictions; hence, beyond a point, increasing the regularization parameter decreases prediction performance. We measure prediction performance in terms of _mean absolute error_ (shortened to MAE).
**Regularization Path**: the concept of a regularization path traces the values different coordinates of the model take when the problem is solved using various values of the regularization parameter. Note that initially, when there is very feeble regularization (say $\alpha = 10^{-4}$), model coordinates take large magnitude values, some positive, others negative. However, as regularization increases, all model coordinate values _shrink_ towards zero.
```
alphaVals = np.concatenate( [np.linspace( 1e-4 * 10**i, 1e-4 * 10**(i+1), num = 5 )[:-1] for i in range(15)] )
MAEVals = np.zeros_like( alphaVals )
modelNorms = np.zeros_like( alphaVals )
models = np.zeros( (X.shape[1], len(alphaVals)) )
for i in range( len(alphaVals) ):
reg = linear_model.Ridge( alpha = alphaVals[i] )
reg.fit( X, y )
w = reg.coef_
b = reg.intercept_
MAEVals[i] = np.mean( np.abs( X.dot(w) + b - y ) )
modelNorms[i] = nplin.norm( w, 2 )
models[:,i] = w
bestRRMAENoCorr = min( MAEVals )
fig = pd.getFigure( 7, 7 )
ax = plt.gca()
ax.set_title( "The effect of the strength of L2 regularization on performance" )
ax.set_xlabel( "L2 Regularization Parameter Value" )
ax.set_ylabel( "Mean Absolute Error", color = "r" )
ax.semilogx( alphaVals, MAEVals, color = 'r', linestyle = '-' )
ax2 = ax.twinx()
ax2.set_ylabel( "Model Complexity (L2 Norm)", color = "b" )
ax2.semilogx( alphaVals, modelNorms, color = 'b', linestyle = '-' )
fig2 = pd.getFigure( 7, 7 )
plt.figure( fig2.number )
plt.title( "The Regularization Path for L2 regularization" )
plt.xlabel( "L2 Regularization Parameter Value" )
plt.ylabel( "Value of Various Coordinates of Models" )
for i in range(d):
plt.semilogx( alphaVals, models[i,:] )
```
**Robust Regression**: we will now investigate how to deal with cases where the data is corrupted. We will randomly choose 25% of the data points and significantly change their labels (i.e. $y$ values). We will see that ridge regression fails to offer a decent solution no matter what value of the regularization parameter we choose. The best MAE offered by ridge regression in this case is 8.1, whereas it was around 3.2 when the data was not corrupted. Clearly, $L_2$ regularization is not a good option when data is maliciously or adversarially corrupted.
```
# How many points do we want to corrupt?
k = int( 0.25 * n )
corr = np.zeros_like( y )
idx_corr = np.random.permutation( n )[:k]
# What diff do we want to introduce in the labels of the corrupted data points?
corr[idx_corr] = 30
y_corr = y + corr
MAEVals = np.zeros_like( alphaVals )
modelNorms = np.zeros_like( alphaVals )
for i in range( len(alphaVals) ):
reg = linear_model.Ridge( alpha = alphaVals[i] )
reg.fit( X, y_corr )
w = reg.coef_
b = reg.intercept_
MAEVals[i] = np.mean( np.abs( X.dot(w) + b - y ) )
modelNorms[i] = nplin.norm( w, 2 )
bestRRMAE = min( MAEVals )
fig3 = pd.getFigure( 7, 7 )
ax = plt.gca()
ax.set_title( "L2 regularization on Corrupted Data" )
ax.set_xlabel( "L2 Regularization Parameter Value" )
ax.set_ylabel( "Mean Absolute Error", color = "r" )
ax.semilogx( alphaVals, MAEVals, color = 'r', linestyle = '-' )
ax2 = ax.twinx()
ax2.set_ylabel( "Model Complexity (L2 Norm)", color = "b" )
ax2.semilogx( alphaVals, modelNorms, color = 'b', linestyle = '-' )
```
**Alternating Minimization for Robust Regression**: a simple heuristic that works well in such corrupted-data settings is to learn the model and try to identify the corrupted subset of the data simultaneously. A variant of this heuristic, as presented in the _TORRENT_ algorithm, is implemented below. At each time step, the method takes an existing model, postulates that data points with high residuals with respect to this model may be corrupted, and sets them aside. Ridge regression is then carried out on the remaining data points to update the model.
The results show that this simple heuristic not only offers a much better MAE (of around 3.2, the same that ridge regression offered when executed on clean data) but also identifies most of the data points that were corrupted. The method converges in only a couple of iterations.
**Reference**\
Kush Bhatia, Prateek Jain and P.K., _Robust Regression via Hard Thresholding_ , Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NIPS), 2015.
```
# How many iterations do we wish to run the algorithm
horizon = 10
MAEVals = np.zeros( (horizon,) )
suppErrVals = np.zeros( (horizon,) )
# Initialization
w = np.zeros( (d,) )
b = 0
reg = linear_model.Ridge( alpha = 0.005 )
# Find out how many of the corrupted data points were correctly identified by the algorithm
def getSupportIden( idx, idxAst ):
return len( set(idxAst).intersection( set(idx) ) )
# Implement the TORRENT algorithm
for t in range( horizon ):
MAEVals[t] = np.mean( np.abs( X.dot(w) + b - y ) )
    # Find out the data points with the largest residuals -- these may be the corrupted points
res = np.abs( X.dot(w) + b - y_corr )
idx_sorted = np.argsort( res )
idx_clean_hat = idx_sorted[0:n-k]
idx_corr_hat = idx_sorted[-k:]
suppErrVals[t] = getSupportIden( idx_corr, idx_corr_hat )
# The points with low residuals are used to update the model
XClean = X[idx_clean_hat,:]
yClean = y_corr[idx_clean_hat]
reg.fit( XClean, yClean )
w = reg.coef_
b = reg.intercept_
fig4 = pd.getFigure( 7, 7 )
plt.plot( np.arange( horizon ), bestRRMAE * np.ones_like(suppErrVals), color = 'r', linestyle = ':', label = "Best MAE achieved by Ridge Regression on Corrupted Data" )
plt.plot( np.arange( horizon ), bestRRMAENoCorr * np.ones_like(suppErrVals), color = 'g', linestyle = ':', label = "Best MAE achieved by Ridge Regression on Clean Data" )
plt.legend()
ax = plt.gca()
ax.set_title( "Alternating Minimization on Corrupted Data" )
ax.set_xlabel( "Number of Iterations" )
ax.set_ylabel( "Mean Absolute Error", color = "r" )
ax.plot( np.arange( horizon ), MAEVals, color = 'r', linestyle = '-' )
plt.ylim( np.floor(min(MAEVals)), np.ceil(bestRRMAE) )
ax2 = ax.twinx()
ax2.set_ylabel( "Number of Corrupted Indices (out of %d) Identified Correctly" % k, color = "b" )
ax2.yaxis.set_major_locator( MaxNLocator( integer = True ) )
ax2.plot( np.arange( horizon ), suppErrVals, color = 'b', linestyle = '-' )
plt.ylim( min(suppErrVals)-1, k )
```
**Spurious Features present a Sparse Recovery Problem**: in this experiment we add 500 new features to the dataset (containing nothing but pure random white noise), taking the total number of features to 513, which is greater than the number of data points (506). Upon executing ridge regression on this dataset, we find something very surprising: at low levels of regularization, the method offers almost zero MAE!
The above may seem paradoxical since the new features are white noise and have nothing informative to say about the problem. What happened is that these new features increased the power of the linear model, and since there was not enough data, ridge regression used them to artificially reduce the error. This is clear from the regularization path plot.
Such a model is actually not very useful since it would not perform well on test data. To do well on test data, the only way is to identify the truly informative features (of which there are only 13). Note that in the error plot, the blue curve shows the amount of weight the model puts on the spurious features. Only under heavy regularization (around $\alpha = 10^4$) does the model stop placing large weights on the spurious features, and error levels climb back to around 3.2, where they were when spurious features were not present. Thus, $L_2$ regularization may not be the best option when there are several irrelevant features.
```
X_spurious = np.random.normal( 0, 1, (n, 500) )
X_extend = np.hstack( (X, X_spurious) )
(n,d) = X_extend.shape
MAEVals = np.zeros_like( alphaVals )
spuriousModelNorms = np.zeros_like( alphaVals )
models = np.zeros( (d, len(alphaVals)) )
for i in range( len(alphaVals) ):
reg = linear_model.Ridge( alpha = alphaVals[i] )
reg.fit( X_extend, y )
w = reg.coef_
b = reg.intercept_
MAEVals[i] = np.mean( np.abs( X_extend.dot(w) + b - y ) )
spuriousModelNorms[i] = nplin.norm( w[13:], 2 )
models[:,i] = w
fig5 = pd.getFigure( 7, 7 )
plt.plot( alphaVals, bestRRMAENoCorr * np.ones_like(alphaVals), color = 'g', linestyle = ':', label = "Best MAE achieved by Ridge Regression on Original Data" )
plt.legend()
ax = plt.gca()
ax.set_title( "Effect of L2 regularization with Spurious Features" )
ax.set_xlabel( "L2 Regularization Parameter Value" )
ax.set_ylabel( "Mean Absolute Error", color = "r" )
ax.semilogx( alphaVals, MAEVals, color = 'r', linestyle = '-' )
ax2 = ax.twinx()
ax2.set_ylabel( "Weight on Spurious Features", color = "b" )
ax2.semilogx( alphaVals, spuriousModelNorms, color = 'b', linestyle = '-' )
fig6 = pd.getFigure( 7, 7 )
plt.figure( fig6.number )
plt.title( "The Regularization Path for L2 regularization with Spurious Features" )
plt.xlabel( "L2 Regularization Parameter Value" )
plt.ylabel( "Value of Various Coordinates of Models" )
for i in range(d):
plt.semilogx( alphaVals, models[i,:] )
```
**LASSO for Sparse Recovery**: the LASSO (Least Absolute Shrinkage and Selection Operator) performs regression using the least squares loss and the $L_1$ regularizer instead. The error plot and the regularization path plots show that LASSO offers a far quicker identification of the spurious features. LASSO is indeed a very popular technique for sparse recovery when we have very little data and suspect that there may be irrelevant features.
```
MAEVals = np.zeros_like( alphaVals )
spuriousModelNorms = np.zeros_like( alphaVals )
models = np.zeros( (X_extend.shape[1], len(alphaVals)) )
for i in range( len(alphaVals) ):
reg = linear_model.Lasso( alpha = alphaVals[i] )
reg.fit( X_extend, y )
w = reg.coef_
b = reg.intercept_
MAEVals[i] = np.mean( np.abs( X_extend.dot(w) + b - y ) )
spuriousModelNorms[i] = nplin.norm( w[13:], 2 )
models[:,i] = w
fig5 = pd.getFigure( 7, 7 )
plt.plot( alphaVals, bestRRMAENoCorr * np.ones_like(alphaVals), color = 'g', linestyle = ':', label = "Best MAE achieved by Ridge Regression on Original Data" )
plt.legend()
ax = plt.gca()
ax.set_title( "Examining the effect of the strength of L1 regularization" )
ax.set_xlabel( "L1 Regularization Parameter Value" )
ax.set_ylabel( "Mean Absolute Error", color = "r" )
ax.semilogx( alphaVals, MAEVals, color = 'r', linestyle = '-' )
ax2 = ax.twinx()
ax2.set_ylabel( "Weight on Spurious Features", color = "b" )
ax2.semilogx( alphaVals, spuriousModelNorms, color = 'b', linestyle = '-' )
fig6 = pd.getFigure( 7, 7 )
plt.figure( fig6.number )
plt.title( "Plotting the Regularization Path for L1 regularization" )
plt.xlabel( "L1 Regularization Parameter Value" )
plt.ylabel( "Value of Various Coordinates of Models" )
for i in range(X_extend.shape[1]):
plt.semilogx( alphaVals, models[i,:] )
```
**Proximal Gradient Descent to solve LASSO**: we will now implement the proximal gradient descent method to minimize the LASSO objective. The _ProxGD_ method performs a usual gradient step and then applies the _prox operator_ corresponding to the regularizer. For the $L_1$ regularizer $\lambda\cdot\|\cdot\|_1$, the prox operator $\text{prox}_{\lambda\cdot\|\cdot\|_1}$ is simply the so-called _soft-thresholding_ operator described below. If $\mathbf z = \text{prox}_{\lambda\cdot\|\cdot\|_1}(\mathbf x)$, then for all $i \in [d]$, we have
$$
\mathbf z_i = \begin{cases} \mathbf x_i - \lambda & \mathbf x_i > \lambda \\ 0 & |\mathbf x_i| \leq \lambda \\ \mathbf x_i + \lambda & \mathbf x_i < -\lambda \end{cases}
$$
Applying ProxGD to the LASSO problem is often called _ISTA_ (Iterative Soft Thresholding Algorithm) for this reason. Note that at time $t$, if the step length used for the gradient step is $\eta_t$, then the prox operator corresponding to $\text{prox}_{\lambda_t\cdot\|\cdot\|_1}$ is used where $\lambda_t = \eta_t\cdot\lambda$ and $\lambda$ is the regularization parameter in the LASSO problem we are trying to solve. Thus, ISTA requires shrinkage to be smaller if we are also using small step sizes.
To speed up convergence, _acceleration_ techniques (e.g. NAG, Adam) are helpful. We will use a very straightforward acceleration technique which simply sets
$$
\mathbf w^t = \mathbf w^t + \frac {t}{t+1}\cdot(\mathbf w^t - \mathbf w^{t-1})
$$
In particular, the application of Nesterov's acceleration (i.e. NAG) to ISTA gives us the so-called _FISTA_ (Fast ISTA).
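The soft-thresholding operator above takes two lines of numpy; here is a minimal standalone sketch (the name `soft_threshold` is ours; the notebook applies the same operation inside its own prox step):

```python
import numpy as np

def soft_threshold(x, lam):
    """Prox operator of lam * ||.||_1: shrink every coordinate toward 0 by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Coordinates with |x_i| <= lam are zeroed; the rest move lam closer to 0
soft_threshold(np.array([3.0, -0.5, -2.0, 0.2]), 1.0)  # -> [2., -0., -1., 0.]
```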
```
# Get the MAE and LASSO objective
def getLASSOObj( model ):
w = model[:-1]
b = model[-1]
res = X_extend.dot(w) + b - y
objVal = alpha * nplin.norm( w, 1 ) + 1/(2*n) * ( nplin.norm( res ) ** 2 )
MAEVal = np.mean( np.abs( res ) )
return (objVal, MAEVal)
# Apply the prox operator and also apply acceleration
def doSoftThresholding( model, t ):
global modelPrev
w = model[:-1]
b = model[-1]
# Shrink all model coordinates by the effective value of alpha
idx = w < 0
alphaEff = alpha * stepFunc(t)
w = np.abs(w) - alphaEff
w[w < 0] = 0
w[idx] = w[idx] * -1
model = np.append( w, b )
# Acceleration step improves convergence rate
model = model + (t/(t+1)) * (model - modelPrev)
modelPrev = model
return model
# Get the gradient to the loss function in LASSO (just the least squares part)
# Note that gradients w.r.t the regularizer are not required in proximal gradient
# This is one reason why they are useful with non-differentiable regularizers
def getLASSOGrad( model, t ):
w = model[:-1]
b = model[-1]
samples = random.sample( range(0, n), B )
X_ = X_extend[samples,:]
y_ = y[samples]
res = X_.dot(w) + b - y_
grad = np.append( X_.T.dot(res), np.sum(res) )
return grad/B
# Set hyperparameters and initialize the model
alpha = 1
B = 10
eta = 2e-6
init = np.zeros( (d+1,) )
modelPrev = np.zeros( (d+1,) )
# A constant step length seems to work well here
stepFunc = opt.stepLengthGenerator( "constant", eta )
(modelProxGD, objProxGD, timeProxGD) = opt.doGD( getLASSOGrad, stepFunc, getLASSOObj, init, horizon = 50000, doModelAveraging = True, postGradFunc = doSoftThresholding )
objVals = [objProxGD[i][0] for i in range(len(objProxGD))]
MAEVals = [objProxGD[i][1] for i in range(len(objProxGD))]
fig7 = pd.getFigure( 7, 7 )
ax = plt.gca()
ax.set_title( "An Accelerated ProxGD Solver for LASSO" )
ax.set_xlabel( "Elapsed time (sec)" )
ax.set_ylabel( "Objective Value for LASSO", color = "r" )
ax.plot( timeProxGD, objVals, color = 'r', linestyle = ':' )
ax2 = ax.twinx()
ax2.set_ylabel( "MAE Value for LASSO", color = "b" )
ax2.plot( timeProxGD, MAEVals, color = 'b', linestyle = '--' )
plt.ylim( 2, 10 )
```
**Improving the Performance of ProxGD**: there are several steps one can take to get better performance:
1. Use a line search method to tune the step length instead of using a fixed step length or a regular schedule
1. Perform a better implementation of the acceleration step (which may require additional hyperparameters)
1. The Boston housing problem is what is called _ill-conditioned_ (this was true even before spurious features were added). Advanced methods like conjugate gradient descent (beyond the scope of CS771) perform better for ill-conditioned problems.
1. Use better solvers -- coordinate descent solvers for the Lagrangian dual of the LASSO are known to offer superior performance.
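As an illustration of the first point, a backtracking (Armijo) line search shrinks a trial step length until it guarantees sufficient decrease. This standalone numpy sketch is ours, not part of the `opt` library:

```python
import numpy as np

def backtracking_step(f, grad_f, w, eta0=1.0, beta=0.5, c=1e-4):
    """One gradient step whose length is tuned by Armijo backtracking."""
    g = grad_f(w)
    eta = eta0
    # Shrink eta until f decreases by at least c * eta * ||g||^2
    while f(w - eta * g) > f(w) - c * eta * g.dot(g):
        eta *= beta
    return w - eta * g, eta

# Toy quadratic f(w) = ||w||^2 / 2; the full step eta = 1 already satisfies Armijo
f = lambda w: 0.5 * w.dot(w)
grad_f = lambda w: w
w_new, eta = backtracking_step(f, grad_f, np.array([4.0, -2.0]))
```

On badly scaled problems the loop fires several times, automatically taking smaller steps than a fixed schedule would.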
**Data Normalization to Improve Data Conditioning**: in some cases (and fortunately, this happens to be one of them), the data conditioning can be improved somewhat by normalizing the data features. This does not change the problem (we will see below how) but it definitely makes life easier for the solvers. Professional solvers such as those used within libraries such as sklearn often attempt to normalize data themselves.
The two most common data normalization steps are
1. _Mean centering_ : we calculate the mean/average feature vector from the data set $\mathbf \mu \in \mathbb R^d$ and subtract it from each feature vector to get centered feature vectors. This has an effect of bringing the dataset feature vectors closer to the origin.
1. _Variance normalization_ : we calculate the standard deviation along each feature as a vector $\mathbf \sigma \in \mathbb R^d$ and divide each centered feature vector by this vector (in an element-wise manner). This has an effect of limiting how wildly any feature can vary.
If you are not familiar with concepts such as mean and variance, please refer to the Statistics Refresher material in the course or else consult some other external source of your liking.
Thus, we transform each feature vector as follows (let $\Sigma \in \mathbb R^{d \times d}$ denote a diagonal matrix with entries of the vector $\mathbf \sigma$ along its diagonal):
$$
\tilde{\mathbf x}^i = \Sigma^{-1}(\mathbf x^i - \mathbf \mu)
$$
We then learn our linear model, say $(\tilde{\mathbf w}, \tilde b)$ over the centered data. We will see that our solvers will thank us for normalizing our data. However, it is very easy to transform this linear model to one that works over the original data (we may want to do this since our test data would not be normalized and normalizing test data may take precious time which we may wish to save).
To transform the model to one that works over the original data features, simply notice that we have
$$
\tilde{\mathbf w}^\top\tilde{\mathbf x}^i + \tilde b = \tilde{\mathbf w}^\top\Sigma^{-1}(\mathbf x^i - \mathbf \mu) + \tilde b = \mathbf w^\top\mathbf x^i + b,
$$
where $\mathbf w = \Sigma^{-1}\tilde{\mathbf w}$ and $b = \tilde b - \tilde{\mathbf w}^\top\Sigma^{-1}\mathbf \mu$ (we exploited the fact that $\Sigma$ being a diagonal matrix, is a symmetric matrix)
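This identity is easy to verify numerically; the following sketch uses synthetic data (the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
mu, sg = X.mean(axis=0), X.std(axis=0)
XNorm = (X - mu) / sg

w_tilde = rng.normal(size=3)        # model learnt on normalized features
b_tilde = 0.7
w = w_tilde / sg                    # translated weights for raw features
b = b_tilde - w_tilde.dot(mu / sg)  # translated bias

# Both models give identical predictions on every data point
assert np.allclose(XNorm.dot(w_tilde) + b_tilde, X.dot(w) + b)
```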
```
# Normalize data
mu = np.mean( X_extend, axis = 0 )
sg = np.std( X_extend, axis = 0 )
XNorm = (X_extend - mu)/sg
# The original dataset is still recoverable from the centered data
if np.allclose( X_extend, XNorm * sg + mu, atol = 1e-7 ):
print( "Successfully recovered the original data from the normalized data" )
```
**Running ProxGD on Normalized Data**: we will have to make two simple changes. Firstly, we will need to change the gradient calculator method to perform gradient computations with normalized data. Secondly, we will change the method that calculates the objective values since we want evaluation to be still done on unnormalized data (to demonstrate that the model can be translated to work with unnormalized data).
```
# Get the MAE and LASSO objective on original data by translating the model
def getLASSOObjNorm( model ):
w = model[:-1]
b = model[-1]
# Translate the model to work with original data features
b = b - w.dot(mu / sg)
w = w / sg
res = X_extend.dot(w) + b - y
objVal = alpha * nplin.norm( w, 1 ) + 1/(2*n) * ( nplin.norm( res ) ** 2 )
MAEVal = np.mean( np.abs( res ) )
return (objVal, MAEVal)
# Get the gradient to the loss function in LASSO for normalized data
def getLASSOGradNorm( model, t ):
w = model[:-1]
b = model[-1]
samples = random.sample( range(0, n), B )
X_ = XNorm[samples,:]
y_ = y[samples]
res = X_.dot(w) + b - y_
grad = np.append( X_.T.dot(res), np.sum(res) )
return grad/B
# Set hyperparameters and initialize the model as before
# Since our normalized data is better conditioned, we are able to use a much
# bigger value of the step length parameter which leads to faster progress
alpha = 1
B = 10
eta = 1e-2
init = np.zeros( (d+1,) )
modelPrev = np.zeros( (d+1,) )
# A constant step length seems to work well here
stepFunc = opt.stepLengthGenerator( "constant", eta )
# Notice that we are running the ProxGD method for far fewer iterations (1000)
# than we did (50000) when we had badly conditioned data
(modelProxGD, objProxGD, timeProxGD) = opt.doGD( getLASSOGradNorm, stepFunc, getLASSOObjNorm, init, horizon = 1000, doModelAveraging = True, postGradFunc = doSoftThresholding )
objVals = [objProxGD[i][0] for i in range(len(objProxGD))]
MAEVals = [objProxGD[i][1] for i in range(len(objProxGD))]
fig8 = pd.getFigure( 7, 7 )
ax = plt.gca()
ax.set_title( "The Accelerated ProxGD Solver on Normalized Data" )
ax.set_xlabel( "Elapsed time (sec)" )
ax.set_ylabel( "Objective Value for LASSO", color = "r" )
ax.plot( timeProxGD, objVals, color = 'r', linestyle = ':' )
ax2 = ax.twinx()
ax2.set_ylabel( "MAE Value for LASSO", color = "b" )
ax2.plot( timeProxGD, MAEVals, color = 'b', linestyle = '--' )
plt.ylim( 2, 10 )
```
**Support Recovery**: we note that our accelerated ProxGD offers good support recovery. If we look at the top 13 coordinates (in terms of magnitude) of the model learnt by ProxGD, we find that several of them are actually the non-spurious features. We should note that one of the features of the original data, the fourth coordinate CHAS (Charles River dummy variable), is, as its name suggests, itself known to be a dummy variable (see [[link]](https://scikit-learn.org/stable/datasets/index.html#boston-dataset) to learn more) with nothing to do with the regression problem!
```
idxTop = np.argsort( np.abs(modelProxGD) )[::-1][:13]
print( "The top 13 coordinates in terms of magnitude are \n ", idxTop )
print( "These contain %d of the non-spurious coordinates" % len( set(idxTop).intersection( set(np.arange(13)) ) ) )
```
# Import statements
```
from google.colab import drive
drive.mount('/content/drive')
from my_ml_lib import MetricTools, PlotTools
import os
import numpy as np
import matplotlib.pyplot as plt
import pickle
import pandas as pd
from matplotlib.pyplot import figure
import json
import datetime
import copy
from PIL import Image as im
import joblib
from sklearn.model_selection import train_test_split
# import math as Math
import random
import torch.optim
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
import torchvision
import cv2
```
# Saving and Loading code
```
# Saving and Loading models using joblib
def save(filename, obj):
with open(filename, 'wb') as handle:
joblib.dump(obj, handle, protocol=pickle.HIGHEST_PROTOCOL)
def load(filename):
    with open(filename, 'rb') as handle:
        return joblib.load(handle)
```
# Importing Dataset
```
p = "/content/drive/MyDrive/A3/"
data_path = p + "dataset/train.pkl"
x = load(data_path)
# save_path = "/content/drive/MyDrive/SEM-2/05-DL /Assignments/A3/dataset/"
# # Saving the images and labels arrays
# save(save_path + "data_image.pkl", data_image)
# save(save_path + "data_label.pkl", data_label)
# # Dict with labels as keys and lists of image arrays as values
# save(save_path + "my_dict.pkl", my_dict)
save_path = p + "dataset/"
# Loading the images and labels arrays
data_image = load(save_path + "data_image.pkl")
data_label = load(save_path + "data_label.pkl")
# Dict with labels as keys and lists of image arrays as values
my_dict = load(save_path + "my_dict.pkl")
len(data_image) , len(data_label), my_dict.keys()
```
# Data Class and Data Loaders and Data transforms
```
len(x['names']) ,x['names'][4999] , data_image[0].shape
```
## Splitting the data into train and val
```
X_train, X_test, y_train, y_test = train_test_split(data_image, data_label, test_size=0.10, random_state=42,stratify=data_label )
len(X_train) , len(y_train) , len(X_test) ,len(y_test)
pd.DataFrame(y_test).value_counts()
```
## Data Class
```
class myDataClass(Dataset):
"""Custom dataset class"""
def __init__(self, images, labels , transform=None):
"""
Args:
images : Array of all the images
            labels : Corresponding labels of all the images
"""
self.images = images
self.labels = labels
self.transform = transform
def __len__(self):
return len(self.images)
def __getitem__(self, idx):
# converts image value between 0 and 1 and returns a tensor C,H,W
img = torchvision.transforms.functional.to_tensor(self.images[idx])
target = self.labels[idx]
if self.transform:
img = self.transform(img)
return img,target
```
## Data Loaders
```
batch = 64
train_dataset = myDataClass(X_train, y_train)
test_dataset = myDataClass(X_test, y_test)
train_dataloader = DataLoader(train_dataset, batch_size= batch, shuffle=True)
test_dataloader = DataLoader(test_dataset, batch_size= batch, shuffle=True)
# next(iter(train_dataloader))[0].shape
len(train_dataloader) , len(test_dataloader)
```
# Train and Test functions
```
def load_best(all_models,model_test):
FILE = all_models[-1]
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model_test.parameters(), lr=0)
checkpoint = torch.load(FILE)
model_test.load_state_dict(checkpoint['model_state'])
optimizer.load_state_dict(checkpoint['optim_state'])
epoch = checkpoint['epoch']
model_test.eval()
return model_test
def train(save_path,epochs,train_dataloader,model,test_dataloader,optimizer,criterion,basic_name):
model_no = 1
c = 1
all_models = []
valid_loss_min = np.Inf
train_losses = []
val_losses = []
for e in range(epochs):
train_loss = 0.0
valid_loss = 0.0
model.train()
for idx, (images,labels) in enumerate(train_dataloader):
images, labels = images.to(device) , labels.to(device)
optimizer.zero_grad()
log_ps= model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
train_loss += ((1 / (idx + 1)) * (loss.data - train_loss))
else:
accuracy = 0
correct = 0
model.eval()
with torch.no_grad():
for idx, (images,labels) in enumerate(test_dataloader):
images, labels = images.to(device) , labels.to(device)
log_ps = model(images)
_, predicted = torch.max(log_ps.data, 1)
loss = criterion(log_ps, labels)
# correct += (predicted == labels).sum().item()
equals = predicted == labels.view(*predicted.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
valid_loss += ((1 / (idx + 1)) * (loss.data - valid_loss))
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
e+1,
train_loss,
valid_loss
), "Test Accuracy: {:.3f}".format(accuracy/len(test_dataloader)))
train_losses.append(train_loss)
val_losses.append(valid_loss)
if valid_loss < valid_loss_min:
print('Saving model..' + str(model_no))
valid_loss_min = valid_loss
checkpoint = {
"epoch": e+1,
"model_state": model.state_dict(),
"optim_state": optimizer.state_dict(),
"train_losses": train_losses,
"test_losses": val_losses,
}
FILE = save_path + basic_name +"_epoch_" + str(e+1) + "_model_" + str(model_no)
all_models.append(FILE)
torch.save(checkpoint, FILE)
model_no = model_no + 1
save(save_path + basic_name + "_all_models.pkl", all_models)
return model, train_losses, val_losses, all_models
def plot(train_losses,val_losses,title='Training Validation Loss with CNN'):
plt.plot(train_losses, label='Training loss')
plt.plot(val_losses, label='Validation loss')
plt.xlabel('Iterations')
plt.ylabel('Loss')
plt.legend()
_ = plt.ylim()
plt.title(title)
# plt.savefig('plots/Training Validation Loss with CNN from scratch.png')
plt.show()
def test(loader, model, criterion, device, name):
test_loss = 0.
correct = 0.
total = 0.
y = None
y_hat = None
model.eval()
for batch_idx, (images, labels) in enumerate(loader):
# move to GPU or CPU
images, labels = images.to(device) , labels.to(device)
target = labels
# forward pass: compute predicted outputs by passing inputs to the model
output = model(images)
# calculate the loss
loss = criterion(output,labels)
# update average test loss
test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
# convert output probabilities to predicted class
pred = output.data.max(1, keepdim=True)[1]
if y is None:
y = target.cpu().numpy()
y_hat = pred.data.cpu().view_as(target).numpy()
else:
y = np.append(y, target.cpu().numpy())
y_hat = np.append(y_hat, pred.data.cpu().view_as(target).numpy())
correct += np.sum(pred.view_as(labels).cpu().numpy() == labels.cpu().numpy())
total = total + images.size(0)
# if batch_idx % 20 == 0:
# print("done till batch" , batch_idx+1)
print(name + ' Loss: {:.6f}\n'.format(test_loss))
print(name + ' Accuracy: %2d%% (%2d/%2d)' % (
100. * correct / total, correct, total))
return y, y_hat
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# def train(save_path,epochs,train_dataloader,model,test_dataloader,optimizer,criterion,basic_name)
# def plot(train_losses,val_losses,title='Training Validation Loss with CNN')
# def test(loader, model, criterion, device)
```
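The `train_loss += (1 / (idx + 1)) * (loss - train_loss)` updates in `train()` and `test()` above are an incremental (running) mean of the per-batch losses. A minimal pure-Python sketch of the same arithmetic:

```python
def incremental_mean(values):
    """Running mean via avg += (1/(n+1)) * (x - avg), as used in train()/test() above."""
    avg = 0.0
    for idx, x in enumerate(values):
        avg += (1 / (idx + 1)) * (x - avg)
    return avg

print(incremental_mean([1.0, 2.0, 3.0, 4.0]))  # the plain arithmetic mean, ~2.5
```

This form avoids keeping a sum and a count separately and is numerically stable for long loops.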
# ReLU [ X=2 Y=3 Z=1 ]
## CNN-Block-123
### model
```
cfg3 = {
'B123': [16,16,'M',32,32,32,'M',64,'M'],
}
def make_layers3(cfg, batch_norm=True):
layers = []
in_channels = 3
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
elif v == 'M1':
layers += [nn.MaxPool2d(kernel_size=4, stride=3)]
elif v == 'D':
layers += [nn.Dropout(p=0.5)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size=3)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
else:
layers += [conv2d, nn.ReLU(inplace=True)]
in_channels = v
return nn.Sequential(*layers)
class Model_B123(nn.Module):
'''
Model
'''
def __init__(self, features):
super(Model_B123, self).__init__()
self.features = features
self.classifier = nn.Sequential(
# nn.Linear(1600, 512),
# nn.ReLU(True),
# nn.Linear(512, 256),
# nn.ReLU(True),
# nn.Linear(256, 64),
# nn.ReLU(True),
nn.Linear(64, 10),
)
def forward(self, x):
x = self.features(x)
# print(x.shape)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return x
# m = Model_B123(make_layers3(cfg3['B123']))
# for i,l in train_dataloader:
# o = m(i)
model3 = Model_B123(make_layers3(cfg3['B123'])).to(device)
learning_rate = 0.01
criterion3 = nn.CrossEntropyLoss()
optimizer3 = optim.Adam(model3.parameters(), lr=learning_rate)
print(model3)
```
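As a sanity check on the hard-coded `nn.Linear(64, 10)`: each unpadded 3×3 conv in `make_layers3` shrinks the feature map by 2, and each `'M'` pool halves it. A small sketch tracing the spatial size, assuming 32×32 inputs (e.g. CIFAR-10):

```python
def feature_map_size(size, cfg):
    # mirror make_layers3: a 3x3 conv without padding shrinks the map by 2,
    # 'M' is a 2x2 max-pool with stride 2; integers denote conv layers
    for v in cfg:
        if v == 'M':
            size //= 2
        elif v not in ('M1', 'D'):
            size -= 2
    return size

print(feature_map_size(32, [16, 16, 'M', 32, 32, 32, 'M', 64, 'M']))  # 1 -> 64*1*1 = 64 features
```

With a 1×1 map and 64 channels, the flattened vector has 64 entries, matching the classifier's input dimension.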
### train
```
# !rm '/content/drive/MyDrive/SEM-2/05-DL /Assignments/A3/models_saved_Q1/1_3/bw_blocks/Dropout(0.5)/cnn_block123/'*
# !ls '/content/drive/MyDrive/SEM-2/05-DL /Assignments/A3/models_saved_Q1/1_3/bw_blocks/Dropout(0.5)/cnn_block123/'
save_path3 = p + "models_saved_Q1/1_4/colab_notebooks /Batchnorm_and_pooling/models/"
m, train_losses, val_losses,m_all_models = train(save_path3,30,train_dataloader,model3,test_dataloader,optimizer3,criterion3,"cnn_b123_x2_y3_z1")
```
### Tests and Plots
```
plot(train_losses,val_losses,'Training Validation Loss with CNN-block1')
all_models3 = load(save_path3 + "cnn_b123_x2_y3_z1_all_models.pkl")
FILE = all_models3[-1]
m3 = Model_B123(make_layers3(cfg3['B123'])).to(device)
m3 = load_best(all_models3,m3)
train_y, train_y_hat = test(train_dataloader, m3, criterion3, device, "TRAIN")
cm = MetricTools.confusion_matrix(train_y, train_y_hat, nclasses=10)
PlotTools.confusion_matrix(cm, [i for i in range(10)], title='',
filename='Confusion Matrix with CNN', figsize=(6,6))
test_y, test_y_hat = test(test_dataloader, m3, criterion3, device,"TEST")
cm = MetricTools.confusion_matrix(test_y, test_y_hat, nclasses=10)
PlotTools.confusion_matrix(cm, [i for i in range(10)], title='',
filename='Confusion Matrix with CNN', figsize=(6,6))
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from utils import get_ts
from warnings import simplefilter
simplefilter("ignore")
df = get_ts(coin="nexo",days=500)
df.head(3)
# Set Matplotlib defaults
plt.style.use("seaborn-whitegrid")
plt.rc("figure", autolayout=True, figsize=(11, 4))
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=16,
titlepad=10,
)
plot_params = dict(
color="0.75",
style=".-",
markeredgecolor="0.25",
markerfacecolor="0.25",
)
%config InlineBackend.figure_format = 'retina'
def plot_multistep(y, every=1, ax=None, palette_kwargs=None):
palette_kwargs_ = dict(palette='husl', n_colors=16, desat=None)
if palette_kwargs is not None:
palette_kwargs_.update(palette_kwargs)
palette = sns.color_palette(**palette_kwargs_)
if ax is None:
fig, ax = plt.subplots()
ax.set_prop_cycle(plt.cycler('color', palette))
for date, preds in y[::every].iterrows():
preds.index = pd.period_range(start=date, periods=len(preds))
preds.plot(ax=ax)
return ax
# Testing "direct strategy" (1 step : 1 model) & multiple outputs
def make_lags(ts, lags:int, lead_time=1):
"""Creates Lag-Features"""
return pd.concat(
{
f'y_lag_{i}': ts.shift(i)
for i in range(lead_time, lags + lead_time)
},
axis=1)
# Four days of lag features
y = df.prices.copy()
X = make_lags(y, lags=4).fillna(0.0)
def make_multistep_target(ts, steps):
"""Creates step-ahead targets (multiple outouts)"""
return pd.concat(
{f'y_step_{i + 1}': ts.shift(-i)
for i in range(steps)},
axis=1)
# n-days forecast
y = make_multistep_target(y, steps=5).dropna()
# Shifting has created indexes that don't match. Only keep times for
# which we have both targets and features.
y, X = y.align(X, join='inner', axis=0)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
# Create splits
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, shuffle=False)
model = LinearRegression()
model.fit(X_train, y_train)
y_fit = pd.DataFrame(model.predict(X_train), index=X_train.index, columns=y.columns)
y_pred = pd.DataFrame(model.predict(X_test), index=X_test.index, columns=y.columns)
from sklearn.metrics import mean_squared_error
train_rmse = mean_squared_error(y_train, y_fit, squared=False)
test_rmse = mean_squared_error(y_test, y_pred, squared=False)
print((f"Train RMSE: {train_rmse:.2f}\n" f"Test RMSE: {test_rmse:.2f}"))
```
The longer the forecast horizon, the higher the RMSE.
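What `make_lags` builds can be illustrated without pandas: row *i* of the feature matrix holds the series shifted back by `lead_time..lags` steps. A toy pure-Python sketch (using `None` where pandas would leave `NaN`):

```python
def make_lags_py(ts, lags, lead_time=1):
    # row i holds [ts[i-1], ts[i-2], ...]; None where the lag reaches before the series start
    return [[ts[i - k] if i - k >= 0 else None
             for k in range(lead_time, lags + lead_time)]
            for i in range(len(ts))]

print(make_lags_py([10, 20, 30, 40], lags=2))
# [[None, None], [10, None], [20, 10], [30, 20]]
```

The `fillna(0.0)` call above plays the role of replacing those leading `None`s.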
```
palette = dict(palette='husl', n_colors=64)
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(11, 6))
ax1 = df.prices[y_fit.index].plot(**plot_params, ax=ax1)
ax1 = plot_multistep(y_fit, ax=ax1, palette_kwargs=palette)
_ = ax1.legend(['Prices (train)', 'Forecast'])
ax2 = df.prices[y_pred.index].plot(**plot_params, ax=ax2)
ax2 = plot_multistep(y_pred, ax=ax2, palette_kwargs=palette)
_ = ax2.legend(['Prices (test)', 'Forecast'])
todays_data = make_lags(df.prices.tail(),4).dropna()
predictions = model.predict(todays_data).squeeze()
dates = [i.date() for i in pd.date_range(start="2021-10-27",periods=10)]
print("Prices for the next days\n")
for t in zip(dates,predictions):
print(t[0], round(t[1],2))
from sklearn.multioutput import MultiOutputRegressor, RegressorChain
from xgboost import XGBRegressor
model = MultiOutputRegressor(XGBRegressor(n_jobs=1))
model.fit(X_train, y_train)
y_fit = pd.DataFrame(model.predict(X_train), index=X_train.index, columns=y.columns)
y_pred = pd.DataFrame(model.predict(X_test), index=X_test.index, columns=y.columns)
train_rmse = mean_squared_error(y_train, y_fit, squared=False)
test_rmse = mean_squared_error(y_test, y_pred, squared=False)
print((f"Train RMSE: {train_rmse:.2f}\n" f"Test RMSE: {test_rmse:.2f}"))
palette = dict(palette='husl', n_colors=64)
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(11, 6))
ax1 = df.prices[y_fit.index].plot(**plot_params, ax=ax1)
ax1 = plot_multistep(y_fit, ax=ax1, palette_kwargs=palette)
_ = ax1.legend(['Prices (train)', 'Forecast'])
ax2 = df.prices[y_pred.index].plot(**plot_params, ax=ax2)
ax2 = plot_multistep(y_pred, ax=ax2, palette_kwargs=palette)
_ = ax2.legend(['Prices (test)', 'Forecast'])
```
XGBoost performs much worse here than the linear model; its hyperparameters may need some tuning.
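`RegressorChain`, used next, implements the DirRec idea: step *k*'s regressor sees the original features plus the predictions for steps 1..k−1. A toy pure-Python sketch of the chaining at prediction time (the averaging "models" are hypothetical stand-ins for fitted regressors):

```python
def chain_forecast(history, step_models, lags=4):
    # DirRec-style chaining: each step's model sees the lag features plus earlier predictions
    feats = list(history[-lags:])
    preds = []
    for model in step_models:  # one callable per forecast step
        preds.append(model(feats + preds))
    return preds

toy_models = [lambda f: sum(f) / len(f)] * 3  # stand-in "regressors" that just average
print(chain_forecast([1.0, 2.0, 3.0, 4.0], toy_models))  # [2.5, 2.5, 2.5]
```

At training time, `RegressorChain` fits each step's model on the *true* earlier targets; at prediction time it feeds predictions forward exactly as above.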
```
# Testing DirRec
from sklearn.multioutput import RegressorChain
model = RegressorChain(XGBRegressor(n_jobs=1))
model.fit(X_train, y_train)
y_fit = pd.DataFrame(model.predict(X_train), index=X_train.index, columns=y.columns)
y_pred = pd.DataFrame(model.predict(X_test), index=X_test.index, columns=y.columns)
train_rmse = mean_squared_error(y_train, y_fit, squared=False)
test_rmse = mean_squared_error(y_test, y_pred, squared=False)
print((f"Train RMSE: {train_rmse:.2f}\n" f"Test RMSE: {test_rmse:.2f}"))
palette = dict(palette='husl', n_colors=64)
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(11, 6))
ax1 = df.prices[y_fit.index].plot(**plot_params, ax=ax1)
ax1 = plot_multistep(y_fit, ax=ax1, palette_kwargs=palette)
_ = ax1.legend(['Prices (train)', 'Forecast'])
ax2 = df.prices[y_pred.index].plot(**plot_params, ax=ax2)
ax2 = plot_multistep(y_pred, ax=ax2, palette_kwargs=palette)
_ = ax2.legend(['Prices (test)', 'Forecast'])
```
The chained strategy doesn't improve on the direct one here ;).
# More Sources
https://www.kaggle.com/learn/time-series
https://www.kaggle.com/c/favorita-grocery-sales-forecasting
https://www.kaggle.com/c/rossmann-store-sales/overview
https://www.kaggle.com/c/web-traffic-time-series-forecasting/overview
https://www.kaggle.com/c/walmart-recruiting-store-sales-forecasting
https://www.kaggle.com/c/walmart-recruiting-sales-in-stormy-weather
https://www.kaggle.com/c/m5-forecasting-accuracy
https://www.researchgate.net/publication/339362837_Learnings_from_Kaggle's_Forecasting_Competitions
# Planar data classification with one hidden layer
Welcome to your week 3 programming assignment! It's time to build your first neural network, which will have one hidden layer. Now, you'll notice a big difference between this model and the one you implemented previously using logistic regression.
By the end of this assignment, you'll be able to:
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross entropy loss
- Implement forward and backward propagation
## Table of Contents
- [1 - Packages](#1)
- [2 - Load the Dataset](#2)
- [Exercise 1](#ex-1)
- [3 - Simple Logistic Regression](#3)
- [4 - Neural Network model](#4)
- [4.1 - Defining the neural network structure](#4-1)
- [Exercise 2 - layer_sizes](#ex-2)
- [4.2 - Initialize the model's parameters](#4-2)
- [Exercise 3 - initialize_parameters](#ex-3)
- [4.3 - The Loop](#4-3)
- [Exercise 4 - forward_propagation](#ex-4)
- [4.4 - Compute the Cost](#4-4)
- [Exercise 5 - compute_cost](#ex-5)
- [4.5 - Implement Backpropagation](#4-5)
- [Exercise 6 - backward_propagation](#ex-6)
- [4.6 - Update Parameters](#4-6)
- [Exercise 7 - update_parameters](#ex-7)
- [4.7 - Integration](#4-7)
- [Exercise 8 - nn_model](#ex-8)
- [5 - Test the Model](#5)
- [5.1 - Predict](#5-1)
- [Exercise 9 - predict](#ex-9)
- [5.2 - Test the Model on the Planar Dataset](#5-2)
- [6 - Tuning hidden layer size (optional/ungraded exercise)](#6)
- [7- Performance on other datasets](#7)
<a name='1'></a>
# 1 - Packages
First import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python.
- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis.
- [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.
- testCases provides some test examples to assess the correctness of your functions
- planar_utils provide various useful functions used in this assignment
```
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
from public_tests import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(2) # set a seed so that the results are consistent
%load_ext autoreload
%autoreload 2
```
<a name='2'></a>
# 2 - Load the Dataset
Now, load the dataset you'll be working on. The following code will load a "flower" 2-class dataset into variables X and Y.
```
X, Y = load_planar_dataset()
```
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data. In other words, we want the classifier to define regions as either red or blue.
```
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).
First, get a better sense of what your data is like.
<a name='ex-1'></a>
### Exercise 1
How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`?
**Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
```
# (≈ 3 lines of code)
# shape_X = ...
# shape_Y = ...
# training set size
# m = ...
# YOUR CODE STARTS HERE
shape_X = X.shape
shape_Y = Y.shape
m = X.shape[1]
# YOUR CODE ENDS HERE
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> shape of X </td>
<td> (2, 400) </td>
</tr>
<tr>
<td>shape of Y</td>
<td>(1, 400) </td>
</tr>
<tr>
<td>m</td>
<td> 400 </td>
</tr>
</table>
<a name='3'></a>
## 3 - Simple Logistic Regression
Before building a full neural network, let's check how logistic regression performs on this problem. You can use sklearn's built-in functions for this. Run the code below to train a logistic regression classifier on the dataset.
```
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
```
You can now plot the decision boundary of these models! Run the code below.
```
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>Accuracy</td>
<td> 47% </td>
</tr>
</table>
**Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!
<a name='4'></a>
## 4 - Neural Network model
Logistic regression didn't work well on the flower dataset. Next, you're going to train a Neural Network with a single hidden layer and see how that handles the same problem.
**The model**:
<img src="images/classification_kiank.png" style="width:600px;height:300px;">
**Mathematically**:
For one example $x^{(i)}$:
$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$
**Reminder**: The general methodology to build a Neural Network is to:
1. Define the neural network structure ( # of input units, # of hidden units, etc).
2. Initialize the model's parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
In practice, you'll often build helper functions to compute steps 1-3, then merge them into one function called `nn_model()`. Once you've built `nn_model()` and learned the right parameters, you can make predictions on new data.
<a name='4-1'></a>
### 4.1 - Defining the neural network structure ####
<a name='ex-2'></a>
### Exercise 2 - layer_sizes
Define three variables:
- n_x: the size of the input layer
- n_h: the size of the hidden layer (set this to 4)
- n_y: the size of the output layer
**Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
```
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
#(≈ 3 lines of code)
# n_x = ...
# n_h = ...
# n_y = ...
# YOUR CODE STARTS HERE
n_x = X.shape[0]
n_h = 4
n_y = Y.shape[0]
# YOUR CODE ENDS HERE
return (n_x, n_h, n_y)
t_X, t_Y = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(t_X, t_Y)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
layer_sizes_test(layer_sizes)
```
***Expected output***
```
The size of the input layer is: n_x = 5
The size of the hidden layer is: n_h = 4
The size of the output layer is: n_y = 2
```
<a name='4-2'></a>
### 4.2 - Initialize the model's parameters ####
<a name='ex-3'></a>
### Exercise 3 - initialize_parameters
Implement the function `initialize_parameters()`.
**Instructions**:
- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.
- You will initialize the weights matrices with random values.
- Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).
- You will initialize the bias vectors as zeros.
- Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
#(≈ 4 lines of code)
# W1 = ...
# b1 = ...
# W2 = ...
# b2 = ...
# YOUR CODE STARTS HERE
W1=np.random.randn(n_h,n_x)*0.01
W2=np.random.randn(n_y,n_h)*0.01
b1=np.zeros((n_h,1))
b2=np.zeros((n_y,1))
# YOUR CODE ENDS HERE
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
initialize_parameters_test(initialize_parameters)
```
**Expected output**
```
W1 = [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]]
b1 = [[0.]
[0.]
[0.]
[0.]]
W2 = [[-0.01057952 -0.00909008 0.00551454 0.02292208]]
b2 = [[0.]]
```
<a name='4-3'></a>
### 4.3 - The Loop
<a name='ex-4'></a>
### Exercise 4 - forward_propagation
Implement `forward_propagation()` using the following equations:
$$Z^{[1]} = W^{[1]} X + b^{[1]}\tag{1}$$
$$A^{[1]} = \tanh(Z^{[1]})\tag{2}$$
$$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}\tag{3}$$
$$\hat{Y} = A^{[2]} = \sigma(Z^{[2]})\tag{4}$$
**Instructions**:
- Check the mathematical representation of your classifier in the figure above.
- Use the function `sigmoid()`. It's built into (imported) this notebook.
- Use the function `np.tanh()`. It's part of the numpy library.
- Implement using these steps:
1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`.
2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).
- Values needed in the backpropagation are stored in "cache". The cache will be given as an input to the backpropagation function.
```
# GRADED FUNCTION:forward_propagation
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
#(≈ 4 lines of code)
# W1 = ...
# b1 = ...
# W2 = ...
# b2 = ...
# YOUR CODE STARTS HERE
W1=parameters["W1"]
W2=parameters["W2"]
b1=parameters["b1"]
b2=parameters["b2"]
# YOUR CODE ENDS HERE
# Implement Forward Propagation to calculate A2 (probabilities)
# (≈ 4 lines of code)
# Z1 = ...
# A1 = ...
# Z2 = ...
# A2 = ...
# YOUR CODE STARTS HERE
Z1=np.dot(W1,X)+b1
A1=np.tanh(Z1)
Z2=np.dot(W2,A1)+b2
A2=sigmoid(Z2)
# YOUR CODE ENDS HERE
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
t_X, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(t_X, parameters)
print("A2 = " + str(A2))
forward_propagation_test(forward_propagation)
```
***Expected output***
```
A2 = [[0.21292656 0.21274673 0.21295976]]
```
<a name='4-4'></a>
### 4.4 - Compute the Cost
Now that you've computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for all examples, you can compute the cost function as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$
<a name='ex-5'></a>
### Exercise 5 - compute_cost
Implement `compute_cost()` to compute the value of the cost $J$.
**Instructions**:
- There are many ways to implement the cross-entropy loss. This is one way to implement one part of the equation without for loops:
$- \sum\limits_{i=1}^{m} y^{(i)}\log(a^{[2](i)})$:
```python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs)
```
- Use that to build the whole expression of the cost function.
**Notes**:
- You can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`.
- If you use `np.multiply` followed by `np.sum` the end result will be a type `float`, whereas if you use `np.dot`, the result will be a 2D numpy array.
- You can use `np.squeeze()` to remove redundant dimensions (in the case of single float, this will be reduced to a zero-dimension array).
- You can also cast the array as a type `float` using `float()`.
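A quick illustration of the `np.dot` / `np.squeeze` note (assuming numpy is imported, as in the packages cell):

```python
import numpy as np

cost = np.dot([[0.5]], [[2.0]])   # np.dot of row/column matrices gives a (1, 1) array
print(cost.shape)                 # (1, 1)
print(float(np.squeeze(cost)))    # 1.0 -- a plain Python float
```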
```
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y):
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
cost -- cross-entropy cost given equation (13)
"""
m = Y.shape[1] # number of examples
# Compute the cross-entropy cost
# (≈ 2 lines of code)
# logprobs = ...
# cost = ...
# YOUR CODE STARTS HERE
cost=-(np.dot(Y,np.log(A2).T)+np.dot((1-Y),np.log(1-A2).T))/m
# YOUR CODE ENDS HERE
cost = float(np.squeeze(cost)) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
return cost
A2, t_Y = compute_cost_test_case()
cost = compute_cost(A2, t_Y)
print("cost = " + str(compute_cost(A2, t_Y)))
compute_cost_test(compute_cost)
```
***Expected output***
`cost = 0.6930587610394646`
<a name='4-5'></a>
### 4.5 - Implement Backpropagation
Using the cache computed during forward propagation, you can now implement backward propagation.
<a name='ex-6'></a>
### Exercise 6 - backward_propagation
Implement the function `backward_propagation()`.
**Instructions**:
Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation.
<img src="images/grad_summary.png" style="width:600px;height:300px;">
<caption><center><font color='purple'><b>Figure 1</b>: Backpropagation. Use the six equations on the right.</font></center></caption>
<!--
$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$
$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $
$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$
$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $
$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $
$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$
- Note that $*$ denotes elementwise multiplication.
- The notation you will use is common in deep learning coding:
- dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$
- db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$
- dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$
- db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$
!-->
- Tips:
- To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute
$g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
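A quick finite-difference check of the identity $g^{[1]'}(z) = 1 - \tanh(z)^2$ used for `dZ1`:

```python
import math

z = 0.7
analytic = 1 - math.tanh(z) ** 2  # the "1 - a^2" shortcut
h = 1e-6
numeric = (math.tanh(z + h) - math.tanh(z - h)) / (2 * h)  # central difference
print(abs(analytic - numeric) < 1e-8)  # True
```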
```
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
#(≈ 2 lines of code)
# W1 = ...
# W2 = ...
# YOUR CODE STARTS HERE
W1=parameters["W1"]
W2=parameters["W2"]
# YOUR CODE ENDS HERE
# Retrieve also A1 and A2 from dictionary "cache".
#(≈ 2 lines of code)
# A1 = ...
# A2 = ...
# YOUR CODE STARTS HERE
A1=cache["A1"]
A2=cache["A2"]
Z1=cache["Z1"]
# YOUR CODE ENDS HERE
# Backward propagation: calculate dW1, db1, dW2, db2.
#(≈ 6 lines of code, corresponding to 6 equations on slide above)
# dZ2 = ...
# dW2 = ...
# db2 = ...
# dZ1 = ...
# dW1 = ...
# db1 = ...
# YOUR CODE STARTS HERE
dZ2=A2-Y
dW2=np.dot(dZ2,A1.T)/m
db2=np.sum(dZ2,axis=1,keepdims=True)/m
dZ1=np.dot(W2.T,dZ2)*((1 - np.power(A1, 2)))
dW1=np.dot(dZ1,X.T)/m
db1=np.sum(dZ1,axis=1,keepdims=True)/m
# YOUR CODE ENDS HERE
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, t_X, t_Y = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, t_X, t_Y)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
backward_propagation_test(backward_propagation)
```
***Expected output***
```
dW1 = [[ 0.00301023 -0.00747267]
[ 0.00257968 -0.00641288]
[-0.00156892 0.003893 ]
[-0.00652037 0.01618243]]
db1 = [[ 0.00176201]
[ 0.00150995]
[-0.00091736]
[-0.00381422]]
dW2 = [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]]
db2 = [[-0.16655712]]
```
<a name='4-6'></a>
### 4.6 - Update Parameters
<a name='ex-7'></a>
### Exercise 7 - update_parameters
Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).
**General gradient descent rule**: $\theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.
<img src="images/sgd.gif" style="width:400;height:400;"> <img src="images/sgd_bad.gif" style="width:400;height:400;">
<caption><center><font color='purple'><b>Figure 2</b>: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.</font></center></caption>
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
"""
# Retrieve each parameter from the dictionary "parameters"
#(≈ 4 lines of code)
# W1 = ...
# b1 = ...
# W2 = ...
# b2 = ...
# YOUR CODE STARTS HERE
W1=parameters["W1"]
W2=parameters["W2"]
b1=parameters["b1"]
b2=parameters["b2"]
# YOUR CODE ENDS HERE
# Retrieve each gradient from the dictionary "grads"
#(≈ 4 lines of code)
# dW1 =
# db1 = ...
# dW2 = ...
# db2 = ...
# YOUR CODE STARTS HERE
dW1=grads["dW1"]
dW2=grads["dW2"]
db1=grads["db1"]
db2=grads["db2"]
# YOUR CODE ENDS HERE
# Update rule for each parameter
#(≈ 4 lines of code)
# W1 = ...
# b1 = ...
# W2 = ...
# b2 = ...
# YOUR CODE STARTS HERE
W1=W1-learning_rate*dW1
W2=W2-learning_rate*dW2
b1=b1-learning_rate*db1
b2=b2-learning_rate*db2
# YOUR CODE ENDS HERE
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
update_parameters_test(update_parameters)
```
***Expected output***
```
W1 = [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]
b1 = [[-1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[-3.20136836e-06]]
W2 = [[-0.01041081 -0.04463285 0.01758031 0.04747113]]
b2 = [[0.00010457]]
```
<a name='4-7'></a>
### 4.7 - Integration
Integrate your functions in `nn_model()`
<a name='ex-8'></a>
### Exercise 8 - nn_model
Build your neural network model in `nn_model()`.
**Instructions**: The neural network model has to use the previous functions in the right order.
```
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations=10000, print_cost=False):
    """
    Arguments:
    X -- dataset of shape (2, number of examples)
    Y -- labels of shape (1, number of examples)
    n_h -- size of the hidden layer
    num_iterations -- Number of iterations in gradient descent loop
    print_cost -- if True, print the cost every 1000 iterations

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    np.random.seed(3)
    n_x = layer_sizes(X, Y)[0]
    n_y = layer_sizes(X, Y)[2]

    # Initialize parameters
    #(≈ 1 line of code)
    # parameters = ...
    # YOUR CODE STARTS HERE
    parameters = initialize_parameters(n_x, n_h, n_y)
    # YOUR CODE ENDS HERE

    # Loop (gradient descent)
    for i in range(0, num_iterations):
        #(≈ 4 lines of code)
        # Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
        # A2, cache = ...
        # Cost function. Inputs: "A2, Y". Outputs: "cost".
        # cost = ...
        # Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
        # grads = ...
        # Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
        # parameters = ...
        # YOUR CODE STARTS HERE
        A2, cache = forward_propagation(X, parameters)
        cost = compute_cost(A2, Y)
        grads = backward_propagation(parameters, cache, X, Y)
        parameters = update_parameters(parameters, grads, learning_rate=1.2)
        # YOUR CODE ENDS HERE

        # Print the cost every 1000 iterations
        if print_cost and i % 1000 == 0:
            print("Cost after iteration %i: %f" % (i, cost))

    return parameters
t_X, t_Y = nn_model_test_case()
parameters = nn_model(t_X, t_Y, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
nn_model_test(nn_model)
```
***Expected output***
```
Cost after iteration 0: 0.692739
Cost after iteration 1000: 0.000218
Cost after iteration 2000: 0.000107
...
Cost after iteration 8000: 0.000026
Cost after iteration 9000: 0.000023
W1 = [[-0.65848169 1.21866811]
[-0.76204273 1.39377573]
[ 0.5792005 -1.10397703]
[ 0.76773391 -1.41477129]]
b1 = [[ 0.287592 ]
[ 0.3511264 ]
[-0.2431246 ]
[-0.35772805]]
W2 = [[-2.45566237 -3.27042274 2.00784958 3.36773273]]
b2 = [[0.20459656]]
```
<a name='5'></a>
## 5 - Test the Model
<a name='5-1'></a>
### 5.1 - Predict
<a name='ex-9'></a>
### Exercise 9 - predict
Predict with your model by building `predict()`.
Use forward propagation to predict results.
**Reminder**: predictions = $y_{\rm prediction} = \mathbb{1}\{\text{activation} > 0.5\} = \begin{cases}
1 & \text{if activation} > 0.5 \\
0 & \text{otherwise}
\end{cases}$
As an example, if you would like to set the entries of a matrix `X` to 0 and 1 based on a threshold, you would do: `X_new = (X > threshold)`
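For instance, with a made-up 1×3 matrix in plain numpy:

```python
import numpy as np

X = np.array([[0.2, 0.7, 0.4]])
X_new = (X > 0.5)          # boolean mask: entries above the threshold
print(X_new.astype(int))   # [[0 1 0]]
```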
```
# GRADED FUNCTION: predict
def predict(parameters, X):
    """
    Using the learned parameters, predicts a class for each example in X

    Arguments:
    parameters -- python dictionary containing your parameters
    X -- input data of size (n_x, m)

    Returns
    predictions -- vector of predictions of our model (red: 0 / blue: 1)
    """
    # Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
    #(≈ 2 lines of code)
    # A2, cache = ...
    # predictions = ...
    # YOUR CODE STARTS HERE
    A2, cache = forward_propagation(X, parameters)
    predictions = A2 > 0.5
    # YOUR CODE ENDS HERE
    return predictions
parameters, t_X = predict_test_case()
predictions = predict(parameters, t_X)
print("Predictions: " + str(predictions))
predict_test(predict)
```
***Expected output***
```
Predictions: [[ True False True]]
```
<a name='5-2'></a>
### 5.2 - Test the Model on the Planar Dataset
It's time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units!
```
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y, predictions.T) + np.dot(1 - Y, 1 - predictions.T)) / float(Y.size) * 100) + '%')
```
**Expected Output**:
<table style="width:30%">
<tr>
<td><b>Accuracy</b></td>
<td> 90% </td>
</tr>
</table>
Accuracy is really high compared to Logistic Regression. The model has learned the patterns of the flower's petals! Unlike logistic regression, neural networks are able to learn even highly non-linear decision boundaries.
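The accuracy expression used above counts the (1,1) and (0,0) agreements between `Y` and `predictions`; for 0/1 labels it is equivalent to `np.mean(Y == predictions)`. A quick check with made-up labels:

```python
import numpy as np

Y = np.array([[1, 0, 1, 1]])            # true labels (made up)
predictions = np.array([[1, 1, 1, 0]])  # thresholded model outputs (made up)

# number of (1,1) agreements plus number of (0,0) agreements
matches = float(np.dot(Y, predictions.T) + np.dot(1 - Y, 1 - predictions.T))
accuracy = matches / Y.size * 100

print(accuracy)                          # 50.0
print(np.mean(Y == predictions) * 100)   # 50.0 -- same thing
```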
### Congrats on finishing this Programming Assignment!
Here's a quick recap of all you just accomplished:
- Built a complete 2-class classification neural network with a hidden layer
- Made good use of a non-linear unit
- Computed the cross entropy loss
- Implemented forward and backward propagation
- Seen the impact of varying the hidden layer size, including overfitting.
You've created a neural network that can learn patterns! Excellent work. Below, there are some optional exercises to try out some other hidden layer sizes, and other datasets.
<a name='6'></a>
## 6 - Tuning hidden layer size (optional/ungraded exercise)
Run the following code (it may take 1-2 minutes). Then, observe different behaviors of the model for various hidden layer sizes.
```
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20]
for i, n_h in enumerate(hidden_layer_sizes):
    plt.subplot(5, 2, i + 1)
    plt.title('Hidden Layer of size %d' % n_h)
    parameters = nn_model(X, Y, n_h, num_iterations=5000)
    plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
    predictions = predict(parameters, X)
    accuracy = float((np.dot(Y, predictions.T) + np.dot(1 - Y, 1 - predictions.T)) / float(Y.size) * 100)
    print("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
```
**Interpretation**:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.
- Later, you'll become familiar with regularization, which lets you use very large models (such as n_h = 50) without much overfitting.
**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right.
**Some optional/ungraded questions that you can explore if you wish**:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?
- Play with the learning_rate. What happens?
- What if we change the dataset? (See part 7 below!)
<a name='7'></a>
## 7 - Performance on other datasets
If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
```
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
    Y = Y % 2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
**References**:
- http://scs.ryerson.ca/~aharley/neural-networks/
- http://cs231n.github.io/neural-networks-case-study/
#### From Quarks to Cosmos with AI: Tutorial Day 4
---
# Field-level cosmological inference with IMNN + DELFI
by Lucas Makinen [<img src="https://raw.githubusercontent.com/tlmakinen/FieldIMNNs/master/tutorial/plots/Orcid-ID.png" alt="drawing" width="20"/>](https://orcid.org/0000-0002-3795-6933 "") [<img src="https://raw.githubusercontent.com/tlmakinen/FieldIMNNs/master/tutorial/plots/twitter-graphic.png" alt="drawing" width="20" style="background-color: transparent"/>](https://twitter.com/lucasmakinen?lang=en ""), Tom Charnock [<img src="https://raw.githubusercontent.com/tlmakinen/FieldIMNNs/master/tutorial/plots/Orcid-ID.png" alt="drawing" width="20"/>](https://orcid.org/0000-0002-7416-3107 "Redirect to orcid") [<img src="https://raw.githubusercontent.com/tlmakinen/FieldIMNNs/master/tutorial/plots/twitter-graphic.png" alt="drawing" width="20" style="background-color: transparent"/>](https://twitter.com/t_charnock?lang=en ""), Justin Alsing [<img src="https://raw.githubusercontent.com/tlmakinen/FieldIMNNs/master/tutorial/plots/Orcid-ID.png" alt="drawing" width="20"/>](https://scholar.google.com/citations?user=ICPFL8AAAAAJ&hl=en "Redirect to orcid"), and Ben Wandelt [<img src="https://raw.githubusercontent.com/tlmakinen/FieldIMNNs/master/tutorial/plots/twitter-graphic.png" alt="drawing" width="20" style="background-color: transparent"/>](https://twitter.com/bwandelt?lang=en "")
>read the paper: [on arXiv tomorrow !]
>get the code: [https://github.com/tlmakinen/FieldIMNNs](https://github.com/tlmakinen/FieldIMNNs)

$\quad$
In this tutorial we will demonstrate Implicit Likelihood Inference (ILI) using Density Estimation Likelihood Free Inference (DELFI) with optimal nonlinear summaries obtained from an Information Maximising Neural Network (IMNN). The goal of the exercise will be to build posterior distributions for the cosmological parameters $\Omega_c$ and $\sigma_8$ *directly* from overdensity field simulations.
First we'll install the relevant libraries and walk through the simulation implementation. Then we'll build a neural IMNN compressor to generate two optimal summaries for our simulations. Finally, we'll use these summaries to build and train a Conditional Masked Autoregressive Flow, from which we'll construct our parameter posterior distributions.
### Q: Wait a second -- how do we know this works ?
If you're not convinced by our method by the end of this tutorial, we invite you to take a look at our [benchmarking tutorial with Gaussian fields from power spectra](https://www.aquila-consortium.org/doc/imnn/pages/examples/2d_field_inference/2d_field_inference.html), which is also runnable in-browser on [this Colab notebook](https://colab.research.google.com/drive/1_y_Rgn3vrb2rlk9YUDUtfwDv9hx774ZF#scrollTo=EW4H-R8I0q6n).
---
# HOW TO USE THIS NOTEBOOK
You will (most likely) be running this code using a free version of Google Colab. The code runs just like a Jupyter notebook (`shift` + `enter` or click the play button to run cells). There are some cells with lengthy infrastructure code that you need to run to proceed. These are clearly marked with <font color='lightgreen'>[run me]</font>. When you get to the challenge exercises, you are welcome to code some functions yourself. However, if you want to run the notebook end-to-end, solution code is presented in hidden cells below (again with the marker <font color='lightgreen'>[run me]</font>).
Some cells are not meant to be run here as a part of Quarks to Cosmos, but can be run (with a Colab Pro account) on your own.
---
# step 1: loading packages and setting up environment
1. check that Colab is set to run on a GPU ! Go to `Runtime`>`change runtime type` and select `GPU` from the dropdown menu. Next, enable dark mode by going to `settings`>`Theme` and selecting `dark` (protect your eyes !)
2. install packages. The current code relies on several libraries, namely `jax` and `tensorflow_probability`. However, we require both plain `tensorflow_probability` (`tfp`) and the experimental `tensorflow_probability.substrates.jax` (`tfpj`) packages for different parts of our inference
3. for some Colab sessions, you may need to run the second cell so that `!pip install jax-cosmo` gets the package imported properly.
```
#@title set up environment <font color='lightgreen'>[RUN ME FIRST]</font>
%tensorflow_version 2.x
import tensorflow as tf
print('tf version', tf.__version__)
!pip install -q jax==0.2.11
!pip install -q tensorflow-probability
import tensorflow_probability as tfp
print('tfp version:', tfp.__version__)
tfd = tfp.distributions
tfb = tfp.bijectors
!pip install -q imnn
!python -m pip install -q jax-cosmo
```
note: if the cell below fails when installing jax-cosmo, just run it again: Colab will then pick up the newly installed package.
```
# now import all the required libraries
import jax.numpy as np
from jax import grad, jit, vmap
from jax import random
import jax
print('jax version:', jax.__version__)
# for nn model stuff
import jax.experimental.optimizers as optimizers
import jax.experimental.stax as stax
# tensorflow-prob VANILLA
tfd = tfp.distributions
tfb = tfp.bijectors
# tensorflow-prob-JAX
import tensorflow_probability.substrates.jax as tfpj
tfdj = tfpj.distributions
tfbj = tfpj.bijectors
# for imnn
import imnn
import imnn.lfi
print('IMNN version:', imnn.__version__)
# jax-cosmo module
!python -m pip install -q jax-cosmo
import jax_cosmo as jc
print('jax-cosmo version:', jc.__version__)
# matplotlib stuff
import matplotlib.pyplot as plt
from scipy.linalg import toeplitz
import seaborn as sns
sns.set()
rng = random.PRNGKey(2)
from jax.config import config
config.update('jax_enable_x64', True)
```
make sure we're using 64-bit precision and running on a GPU !
```
from jax.lib import xla_bridge
print(xla_bridge.get_backend().platform)
```
# Cosmological Fields from the Eisenstein-Hu linear matter power spectrum
We're interested in extracting the cosmological parameters $\Omega_c$ and $\sigma_8$ *directly* from cosmological field pixels. To generate our simulations we'll need to install the library `jax-cosmo` to generate our differentiable model power spectra.
## choose fiducial model
To train our neural compression, we first need to choose a fiducial model to train the IMNN.
For example, let's say that our fiducial cosmology has $\Omega_c=0.40$ and $\sigma_8=0.60$. This is *deliberately* far from, say, Planck parameters -- we want to investigate how our compression behaves if we don't know our universe's true parameters.
```
cosmo_params = jc.Planck15(Omega_c=0.40, sigma8=0.60)
θ_fid = np.array(
[cosmo_params.Omega_c,
cosmo_params.sigma8],
dtype=np.float32)
n_params = θ_fid.shape[0]
```
Our power spectrum $P_{\rm LN}(k)$ is the linear matter power spectrum defined as
```
def P(k, A=0.40, B=0.60):
    cosmo_params = jc.Planck15(Omega_c=A, sigma8=B)
    return jc.power.linear_matter_power(cosmo_params, k)
```
and we can visualize it in $k$-space (small $k$ <=> big $r$, big $k$ <=> small $r$) :
```
#@title plot the Eisenstein-Hu $P(k)$ <font color='lightgreen'>[run me]</font>
sns.set()
L = 250.
N = 128.
#kmax = 1.0
#kmin = 0.5 / (N)
kmax = N / L
kmin = 1. / L
kbin = np.linspace(kmin, kmax, num=100)
power_spec = P(kbin, A=cosmo_params.Omega_c, B=cosmo_params.sigma8)
plt.style.use('dark_background')
plt.grid(b=None)
plt.plot(kbin, power_spec, linewidth=2)
plt.xlabel(r'$k\ \rm [h\ Mpc^{-1}]$', fontsize=14)
plt.ylabel(r'$P(k)\ \rm$', fontsize=14)
plt.ylim((1e2, 1e4))
plt.xscale('log')
plt.yscale('log')
```
____
## Lognormal Fields from Power Spectra: how much information is embedded in the field ?
Cosmologists often use lognormal fields as "the poor man's large scale structure" since they're analytically tractable and easy to obtain from Gaussian fields. We'll walk through how to obtain the *theoretical* information content of such fields using the Fisher formalism.
The likelihood for an $N_{\rm pix}\times N_{\rm pix}$ Gaussian field, $\boldsymbol{\delta}$, can be explicitly written down for the Fourier transformed data, $\boldsymbol{\Delta}$ as
$$\mathcal{L}(\boldsymbol{\Delta}|\boldsymbol{\theta}) = \frac{1}{(2\pi)^{N_{\rm pix}^2 / 2} |P_{\rm G}({\bf k}, \boldsymbol{\theta})|^{1/2}}\exp{\left(-\frac{1}{2}\boldsymbol{\Delta}\left(P_{\rm G}({\bf k}, \boldsymbol{\theta})\right)^{-1}\boldsymbol{\Delta}\right)}$$
Since the Fisher information can be calculated from the expectation value of the second derivative of the score, i.e. the log likelihood
$${\bf F}_{\alpha\beta} = - \left.\left\langle\frac{\partial^2\ln\mathcal{L}(\Delta|\boldsymbol{\theta})}{\partial\theta_\alpha\partial\theta_\beta}\right\rangle\right|_{\boldsymbol{\theta}=\boldsymbol{\theta}^\textrm{fid}}$$
then we know that analytically the Fisher information must be
$${\bf F}_{\alpha\beta} = \frac{1}{2} {\rm Tr} \left(\frac{\partial P_{\rm G}({\bf k}, \boldsymbol{\theta})}{\partial\theta_\alpha}\left(P_{\rm G}({\bf k}, \boldsymbol{\theta})\right)^{-1}\frac{\partial P_{\rm G}({\bf k}, \boldsymbol{\theta})}{\partial\theta_\beta}\left(P_{\rm G}({\bf k}, \boldsymbol{\theta})\right)^{-1}\right)$$
where $\alpha$ and $\beta$ label the parameters (for instance $ \Omega_c, \sigma_8$) in the power spectrum. As each $k$-mode is uncoupled for this power law form we require the derivatives
$$\begin{align}
\left(\frac{\partial P_{\rm G}({\bf k}, \boldsymbol{\theta})}{\partial \Omega_c},\
\frac{\partial P_{\rm G}({\bf k}, \boldsymbol{\theta})}{\partial \sigma_8}\right) \\
\end{align}$$
We can set up these derivative functions *so long as our code for $P(k)$ is differentiable*.
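With everything written in `jax`, getting such derivatives is one call to `jax.grad`. A minimal sketch, with a toy power-law spectrum standing in for the Eisenstein-Hu $P(k)$ (the toy `P_toy`, its parameters, and the values here are made up for illustration; the same pattern applies to the differentiable `jax-cosmo` spectrum):

```python
import jax
import jax.numpy as jnp

# toy power-law spectrum standing in for jc.power.linear_matter_power
def P_toy(k, A, B):
    return A * k ** (-B)

k = jnp.array(0.1)
dP_dA = jax.grad(P_toy, argnums=1)(k, 2.0, 1.5)  # analytic: k**(-B)
dP_dB = jax.grad(P_toy, argnums=2)(k, 2.0, 1.5)  # analytic: -A * k**(-B) * log(k)

print(dP_dA, dP_dB)
```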
For *lognormal* fields, this likelihood changes somewhat. Formally, if a random variable $Y$ has a normal distribution, then the exponential function of $Y$, $X = \exp(Y)$, has a log-normal distribution. We will generate our log-normal fields with a power spectrum such that the *lognormal field has the specified $P_{\rm LN}(k)$*. This means that we need to employ the *backwards conversion formula*, presented by [M. Greiner and T.A. Enßlin](https://arxiv.org/pdf/1312.1354.pdf), to obtain the correct form for $P_{\rm G}(k)$ needed for the above Fisher evaluation:
$$ P_{\rm G} = \int d^u x e^{i \textbf{k} \cdot \textbf{x}} \ln \left( \int \frac{d^u q}{(2\pi)^u} e^{i \textbf{q} \cdot \textbf{x}} P_{\rm LN}(\textbf{q}) \right) $$
which we can do numerically (and differentiably !) in `Jax`. If you're curious about the computation, check out [this notebook](https://colab.research.google.com/drive/1beknmt3CwjEDFFnZjXRClzig1sf54aMR?usp=sharing). We performed the computation using a Colab Pro account with increased GPU resources to accommodate such large fields. When the smoke clears, our fields have a fiducial theoretical Fisher information content, $|\textbf{F}|_{(0.4, 0.6)}$, of
`det_F = 656705.6827`
this can be equivalently expressed in terms of the Shannon information (up to a constant, in nats !) of a Gaussian with covariance matrix $\textbf{F}^{-1}$:
`shannon_info = 0.5 * np.log(det_F) = 6.6975  # nats`
When testing our neural IMNN compressor, we used these metrics to verify that we indeed capture the maximal (or close to it) amount of information from our field simulations.
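The backwards conversion above can be sketched in one dimension with plain numpy, mirroring what the 2D simulator code further below does (the target lognormal spectrum here is made up for illustration):

```python
import numpy as np

# 1D sketch: xi_LN(r) = ifft(P_LN), then P_G = |fft(ln(1 + xi_LN))|
k = np.fft.fftfreq(64, d=1.0) * 2 * np.pi
P_LN = 1.0 / (1.0 + k ** 2)           # made-up target lognormal spectrum
xi_LN = np.real(np.fft.ifft(P_LN))    # correlation function of the LN field
P_G = np.abs(np.fft.fft(np.log(1.0 + xi_LN)))  # Gaussianized spectrum

print(P_G.shape)  # (64,)
```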
____
# Simulating the universe with power spectra
We can now set the simulator arguments, i.e. the $k$-modes to evaluate, the length of the side of a box, the shape of the box and whether to normalise via the volume and squeeze the output dimensions
## choose $k$-modes (the size of our universe-in-a-box)
Next, we're going to set our $N$-side to 128 (the size of our data vector), build the $k$-vector, and set the $L$-side (the physical dimensions of the universe-in-a-box):
```
N = 128
shape = (N, N)
k = np.sqrt(
np.sum(
np.array(
np.meshgrid(
*((np.hstack(
(np.arange(0, _shape // 2 + 1),
np.arange(-_shape // 2 + 1, 0)))
* 2 * np.pi / _shape)**2.
for _shape in shape))),
axis=0))
simulator_args = dict(
k=k, # k-vector (grid units)
L=250, # in Mpc h^-1
shape=shape,
vol_norm=True, # whether to normalise P(k) by volume
N_scale=False, # scale field values up or down
squeeze=True,
log_normal=True)
```
___
## Next, we provide you our universe simulator in `jax`. This is how it works:
### 2D random field simulator in jax
To create a 2D lognormal random field we can follow these steps:
1. Generate a $(N_\textrm{pix}\times N_\textrm{pix})$ white noise field $\varphi$ such that $\langle \varphi_k \varphi_{-k} \rangle' = 1$
2. Fourier Transform $\varphi$ to real space: $R_{\rm white}({\bf x}) \rightarrow R_{\rm white}({\bf k})$
Note that NumPy's DFT Fourier convention is:
$$\phi_{ab}^{\bf k} = \sum_{c,d = 0}^{N-1} \exp{(-i x_c k_a - i x_d k_b)}\, \phi^{\bf x}_{cd}$$
$$\phi_{ab}^{\bf x} = \frac{1}{N^2}\sum_{c,d = 0}^{N-1} \exp{(i x_c k_a + i x_d k_b)}\, \phi^{\bf k}_{cd}$$
3. Evaluate the chosen power spectrum over a field of $k$ values and do the lognormal transformation:
$$P_{\rm LN}(k) \gets \ln(1 + P(k)) $$
Here we need to ensure that this array of amplitudes are Hermitian, e.g. $\phi^{* {\bf k}}_{a(N/2 + b)} = \phi^{{\bf k}}_{a(N/2 - b)}$. This is accomplished by choosing indices $k_a = k_b = \frac{2\pi}{N} (0, \dots, N/2, -N/2+1, \dots, -1)$ (as above) and then evaluating the square root of the outer product of the meshgrid between the two: $k = \sqrt{k^2_a + k^2_b}$. We can then evaluate $P_{\rm LN}^{1/2}(k)$.
4. Scale white noise $R_{\rm white}({\bf k})$ by the power spectrum:
$$R_P({\bf k}) = P_{\rm LN}^{1/2}(k) R_{\rm white}({\bf k}) $$
5. Fourier Transform $R_{P}({\bf k})$ to real space: $R_P({\bf x}) = \int d^d \tilde{k} e^{i{\bf k} \cdot {\bf x}} R_p({\bf k})$
$$R_{ab}^{\bf x} = \frac{1}{N^2}\sum_{c,d = 0}^{N-1} \exp{(i x_c k_a + i x_d k_b)}\, R^{\bf k}_{cd}$$
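The steps above can be sketched for a plain Gaussian (not lognormal) field in a few lines of numpy, with a made-up spectrum; generating the white noise in real space and FFT-ing it makes the Hermitian condition automatic:

```python
import numpy as np

N = 32
# step 3 index choice: k_a = 2*pi/N * (0, ..., N/2, -N/2+1, ..., -1)
k1d = 2 * np.pi / N * np.concatenate((np.arange(0, N // 2 + 1),
                                      np.arange(-N // 2 + 1, 0)))
k = np.sqrt(k1d[:, None] ** 2 + k1d[None, :] ** 2)

rng = np.random.default_rng(0)
white_k = np.fft.fftn(rng.normal(size=(N, N)))      # steps 1-2: white noise, Hermitian by construction
P = np.where(k > 0, 1.0 / (1e-3 + k ** 2), 0.0)     # step 3: toy spectrum, zeroed DC mode
field = np.real(np.fft.ifftn(np.sqrt(P) * white_k)) # steps 4-5: scale and transform back

print(field.shape)  # (32, 32)
```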
We are going to use a broadcastable jax simulator that accepts parameter arrays of various shapes and `vmap`s over them until a single parameter pair is passed. This is very efficient for generating many simulations at once, for example for Approximate Bayesian Computation.
```
#@title simulator code <font color='lightgreen'>[RUN ME]</font>
def simulator(rng, θ, simulator_args, foregrounds=None):
    def fn(rng, A, B):
        dim = len(simulator_args["shape"])
        L = simulator_args["L"]
        if np.isscalar(L):
            L = [L] * int(dim)
        Lk = ()
        shape = ()
        for i, _shape in enumerate(simulator_args["shape"]):
            Lk += (_shape / L[i],)
            if _shape % 2 == 0:
                shape += (_shape + 1,)
            else:
                shape += (_shape,)
        k = simulator_args["k"]
        k_shape = k.shape
        k = k.flatten()[1:]
        tpl = ()
        for _d in range(dim):
            tpl += (_d,)
        V = np.prod(np.array(L))
        scale = V**(1. / dim)
        fft_norm = np.prod(np.array(Lk))
        rng, key = jax.random.split(rng)
        mag = jax.random.normal(
            key, shape=shape)
        pha = 2. * np.pi * jax.random.uniform(
            key, shape=shape)
        # now make hermitian field (reality condition)
        revidx = (slice(None, None, -1),) * dim
        mag = (mag + mag[revidx]) / np.sqrt(2)
        pha = (pha - pha[revidx]) / 2 + np.pi
        dk = mag * (np.cos(pha) + 1j * np.sin(pha))
        cutidx = (slice(None, -1),) * dim
        dk = dk[cutidx]
        powers = np.concatenate(
            (np.zeros(1),
             np.sqrt(P(k, A=A, B=B)))).reshape(k_shape)
        if simulator_args['vol_norm']:
            powers /= V
        if simulator_args["log_normal"]:
            powers = np.real(
                np.fft.ifftshift(
                    np.fft.ifftn(
                        powers)
                    * fft_norm) * V)
            powers = np.log(1. + powers)
            powers = np.abs(np.fft.fftn(powers))
        fourier_field = powers * dk
        fourier_field = jax.ops.index_update(
            fourier_field,
            np.zeros(dim, dtype=int),
            np.zeros((1,)))
        if simulator_args["log_normal"]:
            field = np.real(np.fft.ifftn(fourier_field)) * fft_norm * np.sqrt(V)
            sg = np.var(field)
            field = np.exp(field - sg / 2.) - 1.
        else:
            field = np.real(np.fft.ifftn(fourier_field) * fft_norm * np.sqrt(V)**2)
        if simulator_args["N_scale"]:
            field *= scale
        if foregrounds is not None:
            rng, key = jax.random.split(key)
            foreground = foregrounds[
                jax.random.randint(
                    key,
                    minval=0,
                    maxval=foregrounds.shape[0],
                    shape=())]
            field = np.expand_dims(field + foreground, (0,))
        if not simulator_args["squeeze"]:
            field = np.expand_dims(field, (0, -1))
        return np.array(field, dtype='float32')

    if isinstance(θ, tuple):
        A, B = θ
    else:
        A = np.take(θ, 0, axis=-1)
        B = np.take(θ, 1, axis=-1)
    if A.shape == B.shape:
        if len(A.shape) == 0:
            return fn(rng, A, B)
        else:
            keys = jax.random.split(rng, num=A.shape[0] + 1)
            rng = keys[0]
            keys = keys[1:]
            return jax.vmap(
                lambda key, A, B: simulator(
                    key, (A, B), simulator_args=simulator_args))(
                keys, A, B)
    else:
        if len(A.shape) > 0:
            keys = jax.random.split(rng, num=A.shape[0] + 1)
            rng = keys[0]
            keys = keys[1:]
            return jax.vmap(
                lambda key, A: simulator(
                    key, (A, B), simulator_args=simulator_args))(
                keys, A)
        elif len(B.shape) > 0:
            keys = jax.random.split(rng, num=B.shape[0])
            return jax.vmap(
                lambda key, B: simulator(
                    key, (A, B), simulator_args=simulator_args))(
                keys, B)
```
By constructing our random field simulator *and* cosmological power spectrum in `Jax`, we have access to *exact numerical derivatives*, meaning we can simulate a *differentiable* universe. Let's visualize what our universe and derivatives look like at our fiducial model below:
```
#@title visualize a fiducial universe and gradients <font color='lightgreen'>[run me]</font>
from imnn.utils import value_and_jacrev, value_and_jacfwd
def simulator_gradient(rng, θ, simulator_args=simulator_args):
    return value_and_jacrev(simulator, argnums=1, allow_int=True, holomorphic=True)(rng, θ, simulator_args=simulator_args)
simulation, simulation_gradient = value_and_jacfwd(simulator, argnums=1)(rng, θ_fid,
simulator_args=simulator_args)
cmap = 'viridis'
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig,ax = plt.subplots(nrows=1, ncols=3, figsize=(12,15))
im1 = ax[0].imshow(np.squeeze(simulation),
extent=(0,1,0,1), cmap=cmap)
ax[0].title.set_text(r'example fiducial $\rm d$')
divider = make_axes_locatable(ax[0])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im1, cax=cax, orientation='vertical')
im1 = ax[1].imshow(np.squeeze(simulation_gradient).T[0].T,
extent=(0,1,0,1), cmap=cmap)
ax[1].title.set_text(r'$\nabla_{\Omega_m} \rm d$')
divider = make_axes_locatable(ax[1])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im1, cax=cax, orientation='vertical')
im1 = ax[2].imshow(np.squeeze(simulation_gradient).T[1].T,
extent=(0,1,0,1), cmap=cmap)
ax[2].title.set_text(r'$\nabla_{\sigma_8} \rm d$')
divider = make_axes_locatable(ax[2])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im1, cax=cax, orientation='vertical')
for a in ax:
    a.set_xticks([])
    a.set_yticks([])
plt.show()
```
Nice ! Since we can differentiate our universe and power spectrum, we can easily compute gradients of a neural network's outputs with respect to simulation parameters. This will come in handy for compression training.
---
## Training an IMNN
<img src="https://raw.githubusercontent.com/tlmakinen/FieldIMNNs/master/tutorial/plots/imnn-scheme-white.png" alt="drawing" width="700"/>
The details behind the IMNN algorithm [can be found here on arxiv](https://arxiv.org/abs/1802.03537), but we'll summarize the gist briefly:
1. We want to maximise the Fisher information, $\textbf{F}$, of compressed summaries to satisfy the Cramer-Rao bound:
$$ \langle (\vartheta_\alpha - \langle \vartheta_\alpha \rangle ) (\vartheta_\beta - \langle \vartheta_\beta
\rangle) \rangle \geq \textbf{F}^{-1}_{\alpha \beta} $$ which means saturating the Fisher information minimizes the average variance of the parameter estimates.
2. To do this, and without loss of generality (proof coming soon!), we assume a Gaussian likelihood form to compute our Fisher information:
$$ -2 \ln \mathcal{L}(\textbf{x} | \textbf{d}) = (\textbf{x} - \boldsymbol{\mu}_f(\vartheta))^T \textbf{C}_f^{-1}(\textbf{x} - \boldsymbol{\mu}_f(\vartheta)) $$ where $\boldsymbol{\mu}_f$ and $\textbf{C}$ are the mean and covariance of the network output (summaries). The Fisher is then $$ \textbf{F}_{\alpha \beta} = {\rm tr} [\boldsymbol{\mu}_{f,\alpha}^T C^{-1}_f \boldsymbol{\mu}_{f, \beta}] $$
Since we can differentiate through our neural network *and* simulated universe, we have the exact derivatives with respect to the pipeline we need to compute the Fisher matrix of compressed summaries on-the-fly during compression training.
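As a toy numerical instance of that Fisher formula (the derivative matrix and summary covariance below are made up for illustration):

```python
import numpy as np

# dmu = ∂μ_f/∂θ, shape (n_summaries, n_params); C = covariance of the summaries
dmu = np.array([[1.0, 0.5],
                [0.0, 2.0]])
C = np.array([[2.0, 0.0],
              [0.0, 1.0]])
Cinv = np.linalg.inv(C)

# F_ab = μ,_a^T C^{-1} μ,_b
F = dmu.T @ Cinv @ dmu
print(F)  # [[0.5, 0.25], [0.25, 4.125]]
```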
___
### Q: wait -- what if my simulator isn't differentiable ?
We don't *need* to have the exact derivatives for IMNN training ! Having the gradients accessible just means we don't have to tune finite-differencing by hand to estimate the derivatives (as is done in the original IMNN paper).
___
Let's use an IMNN trained on cosmological fields to see how much information we can extract and what sort of constraints we can get. We will use 200 simulations to estimate the covariance of the network outputs, use all of their derivatives, and summarise the whole cosmological field using 2 summaries.
```
n_s = 200 # number of simulations used to estimate covariance of network outputs
n_d = n_s # number of simulations used to estimate the numerical derivative of
# the mean of the network outputs
n_summaries = 2
```
We're going to use a fully convolutional inception network built using stax with some custom designed blocks. The inception block itself is implemented in the following block:
```
#@title nn model stuff <font color='lightgreen'>[RUN ME]</font>
def InceptBlock(filters, strides, do_5x5=True, do_3x3=True):
    """InceptNet convolutional striding block.
    filters: tuple: (f1,f2,f3)
    filters1: for conv1x1
    filters2: for conv1x1,conv3x3
    filters3: for conv1x1,conv5x5"""
    filters1, filters2, filters3 = filters
    conv1x1 = stax.serial(stax.Conv(filters1, (1, 1), strides, padding="SAME"))
    filters4 = filters2
    conv3x3 = stax.serial(stax.Conv(filters2, (1, 1), strides=None, padding="SAME"),
                          stax.Conv(filters4, (3, 3), strides, padding="SAME"))
    filters5 = filters3
    conv5x5 = stax.serial(stax.Conv(filters3, (1, 1), strides=None, padding="SAME"),
                          stax.Conv(filters5, (5, 5), strides, padding="SAME"))
    maxpool = stax.serial(stax.MaxPool((3, 3), padding="SAME"),
                          stax.Conv(filters4, (1, 1), strides, padding="SAME"))
    if do_3x3:
        if do_5x5:
            return stax.serial(
                stax.FanOut(4),
                stax.parallel(conv1x1, conv3x3, conv5x5, maxpool),
                stax.FanInConcat(),
                stax.LeakyRelu)
        else:
            return stax.serial(
                stax.FanOut(3),
                stax.parallel(conv1x1, conv3x3, maxpool),
                stax.FanInConcat(),
                stax.LeakyRelu)
    else:
        return stax.serial(
            stax.FanOut(2),
            stax.parallel(conv1x1, maxpool),
            stax.FanInConcat(),
            stax.LeakyRelu)
```
We'll also want to make sure that the output of the network is the correct shape, for which we'll introduce a Reshaping layer
```
def Reshape(shape):
    """Layer function for a reshape layer."""
    init_fun = lambda rng, input_shape: (shape, ())
    apply_fun = lambda params, inputs, **kwargs: np.reshape(inputs, shape)
    return init_fun, apply_fun
```
Now we can build the network, with 55 filters per branch and strides of 4 in each direction in the first three blocks (the last block strides by 2)
```
fs = 55
layers = [
InceptBlock((fs, fs, fs), strides=(4, 4)),
InceptBlock((fs, fs, fs), strides=(4, 4)),
InceptBlock((fs, fs, fs), strides=(4, 4)),
InceptBlock((fs, fs, fs), strides=(2, 2), do_5x5=False, do_3x3=False),
stax.Conv(n_summaries, (1, 1), strides=(1, 1), padding="SAME"),
stax.Flatten,
Reshape((n_summaries,))
]
model = stax.serial(*layers)
```
We'll also introduce a function to check our model output:
```
def print_model(layers, input_shape, rng):
    print('input_shape: ', input_shape)
    for l in range(len(layers)):
        _m = stax.serial(*layers[:l + 1])
        print('layer %d shape: ' % (l + 1), _m[0](rng, input_shape)[0])

# print model specs
key, rng = jax.random.split(rng)
input_shape = (1,) + shape + (1,)
print_model(layers, input_shape, rng)
```
We'll also grab an Adam optimiser from `jax.experimental.optimizers`
```
optimiser = optimizers.adam(step_size=1e-3)
```
Note that due to the form of the network we'll want to have simulations that have a "channel" dimension, which we can set up by not allowing for squeezing in the simulator.
### Load an IMNN
Finally we can load a pre-trained IMNN and compare its compression efficiency to the theoretical Fisher. We will pull the weights and state from the parent repository and calculate the compressor statistics.
We've used a SimulatorIMNN trained on new simulations on-the-fly, eliminating the need for a validation dataset. If you're interested in the IMNN training, see the [benchmarking Colab notebook](https://colab.research.google.com/drive/1_y_Rgn3vrb2rlk9YUDUtfwDv9hx774ZF#scrollTo=EW4H-R8I0q6n) or the Bonus challenge at the end of this tutorial.
We're not training an IMNN here because this model takes $\approx 50$ minutes and requires elevated Colab Pro resources.
```
!git clone https://github.com/tlmakinen/FieldIMNNs.git
# load IMNN state
import cloudpickle as pickle
import os
def unpickle_me(path):
    file = open(path, 'rb')
    return pickle.load(file)
folder_name = './FieldIMNNs/tutorial/IMNN-aspects/'
loadstate = unpickle_me(os.path.join(folder_name, 'IMNN_state'))
state = jax.experimental.optimizers.pack_optimizer_state(loadstate)
startup_key = np.load(os.path.join(folder_name, 'IMNN_startup_key.npy'), allow_pickle=True)
# load weights to set the IMNN
best_weights = np.load(os.path.join(folder_name, 'best_w.npy'), allow_pickle=True)
# initialize IMNN with pre-trained state
rng, key = jax.random.split(rng)
IMNN = imnn.IMNN(
n_s=n_s,
n_d=n_d,
n_params=n_params,
n_summaries=n_summaries,
input_shape=(1,) + shape + (1,),
θ_fid=θ_fid,
model=model,
optimiser=optimiser,
key_or_state=state, # <---- initialize with state
simulator=lambda rng, θ: simulator(
rng, θ, simulator_args={
**simulator_args,
**{"squeeze": False}}))
# now set weights using the best training weights and startup key (this can take a moment)
IMNN.set_F_statistics(w=best_weights, key=startup_key)
print('det F from IMNN:', np.linalg.det(IMNN.F))
# 656705.6827 is det F for the theoretical (analytic) Fisher matrix of this setup
print('% Fisher information captured by IMNN compared to theory: ', np.linalg.det(IMNN.F) / 656705.6827)
```
### If you want to check out how to train an IMNN, see the end of the tutorial!
---
# Inference on a target cosmological field
Now that we have a trained compression function (albeit at a somewhat arbitrary fiducial model), we can perform simulation-based inference with the optimal summaries.
We'll now pretend to "observe" a cosmological density field at some target parameters, $\theta_{\rm target}$. We'll select $\Omega_c=0.2589$ and $\sigma_8=0.8159$ (the measured Planck 2015 parameters). To get started with this tutorial, we'll load a pre-generated field from GitHub ("field 2" from our paper!), but you can always generate a new realization with the simulator code.
```
θ_target = np.array([jc.Planck15().Omega_c, jc.Planck15().sigma8,])
δ_target = np.load('./FieldIMNNs/tutorial/target_field_planck.npy')
sns.set() # set up plot settings
cmap='viridis'
plt.imshow(δ_target, cmap=cmap)
plt.colorbar()
plt.title('target cosmological field')
plt.show()
```
Now we're going to **forget we ever knew our choice of target parameters** and do inference on this target data as if it were a real observation (minus measurement noise for now, of course!).
## Inference
We can now attempt to do inference of some target data using the IMNN.
First we're going to compress our target field down to parameter estimates using the IMNN method `IMNN.get_estimate(d)`. This returns the score estimator for the parameters, obtained via the transformation
$$ \hat{\theta}_{\alpha} = \theta^{\rm fid}_\alpha + \textbf{F}^{-1}_{\alpha \beta} \frac{\partial \mu_i}{\partial \theta_\beta} \textbf{C}^{-1}_{ij} (\textbf{x}(\textbf{w}, \textbf{d}) - \boldsymbol{\mu})_j $$
where $\textbf{x}(\textbf{w}, \textbf{d})$ are the network summaries.
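To make the score-compression formula concrete, here is a hedged one-parameter NumPy toy; the linear summary model $\mu(\theta) = 2\theta$ and unit covariance are invented for illustration and are not the tutorial's network:

```python
import numpy as np

# Toy score compression: theta_hat = theta_fid + F^{-1} dmu/dtheta C^{-1} (x - mu),
# for an invented linear model mu(theta) = 2*theta with unit summary covariance.
theta_fid = 1.0
dmu_dtheta = np.array([2.0])          # d mu / d theta at the fiducial
C_inv = np.array([[1.0]])             # inverse summary covariance
F = dmu_dtheta @ C_inv @ dmu_dtheta   # Fisher information (a scalar here)
mu_fid = 2.0 * theta_fid              # summary mean at the fiducial

def score_estimate(x):
    return theta_fid + (1.0 / F) * dmu_dtheta @ C_inv @ (x - mu_fid)

# an 'observed' noiseless summary generated at theta = 1.5, i.e. x = 3.0
print(score_estimate(np.array([3.0])))  # recovers 1.5 (linear model, no noise)
```

Because the toy model is linear, the score estimate is exact; for the IMNN the same formula is applied with learned summaries and a numerically estimated Fisher matrix.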
```
estimates = IMNN.get_estimate(np.expand_dims(δ_target, (0, 1, -1)))
print('IMNN parameter estimates:', estimates)
```
The cool thing about training an IMNN is that it *automatically* gives you a simple uncertainty estimate on the parameters of interest via the optimal Fisher matrix. We can make a Gaussian approximation to the likelihood using the inverse of the matrix.
Note that, to demonstrate robustness, the fiducial parameter values are deliberately so far from the target parameters that this estimate of the inverse Fisher matrix as the covariance will likely be misleading.
We'll need to select a prior distribution first. We'll do this in `tfpj`, selecting wide uniform priors for both $\Omega_c$ and $\sigma_8$.
```
prior = tfpj.distributions.Blockwise(
[tfpj.distributions.Uniform(low=low, high=high)
for low, high in zip([0.01, 0.2], [1.0, 1.3])])
prior.low = np.array([0.01, 0.2])
prior.high = np.array([1.0, 1.3])
```
Then we can use the IMNN's built-in Gaussian approximation code:
```
sns.set()
GA = imnn.lfi.GaussianApproximation(
parameter_estimates=estimates,
invF=np.expand_dims(np.linalg.inv(IMNN.F), 0),
prior=prior,
gridsize=100)
ax = GA.marginal_plot(
known=θ_target,
label="Gaussian approximation",
axis_labels=[r"$\Omega_c$", r"$\sigma_8$"],
colours="C1");
```
Even though our fiducial model was trained far away, at $(\Omega_c, \sigma_8) = (0.4, 0.6)$, our score estimates (the center of the ellipse) are very close to the Planck target (crosshairs).
We now have a compression and informative summaries of our target data. Next we'll set up density estimation to construct our posteriors!
___
# Posterior Construction with DELFI
Density Estimation Likelihood-Free Inference (DELFI) is presented formally [here on arxiv](https://arxiv.org/abs/1903.00007), but we'll give you the TLDR here:
Now that we have nonlinear IMNN summaries, $\textbf{x}$, to describe our cosmological fields, we can perform density estimation to model the *summary data likelihood*, $p(\textbf{x} | \boldsymbol{\theta})$. Once we have this, we can obtain the posterior distribution for $\boldsymbol{\theta}$ via Bayes' rule:
$$ p(\boldsymbol{\theta} | \textbf{x}) \propto p(\textbf{x} | \boldsymbol{\theta}) p(\boldsymbol{\theta}) $$.
## What are CMAFs?
DELFI uses Conditional Masked Autoregressive Flows (CMAFs): stacks of neural autoencoders carefully masked to parameterize the summary-parameter likelihood. To start, note that any probability density can be factored as a product of one-dimensional conditional distributions via the chain rule of probability:
\begin{equation}
p(\textbf{x} | \boldsymbol{\theta}) = \prod_{i=1}^{\dim(\textbf{x})} p({\rm x}_i | \textbf{x}_{1:i-1}, \boldsymbol{\theta})
\end{equation}
Masked Autoencoders for Distribution Estimation (MADE) model each of these one-dimensional conditionals as a Gaussian whose mean and variance are parameterized by neural network weights, $\textbf{w}$. The network layers are masked so that the autoregressive property is preserved, i.e. the output nodes for the density $p({\rm x}_i | \textbf{x}_{1:i-1}, \boldsymbol{\theta})$ depend *only* on $\textbf{x}_{1:i-1}$ and $\boldsymbol{\theta}$, satisfying the chain rule.
We can then stack several MADEs to form a neural flow for our posterior!

What we're going to do is:
1. Train a Conditional Masked Autoregressive Flow to parameterize $p(\textbf{x} | \boldsymbol{\theta})$ by minimizing the negative log-probability, $-\ln U$.
2. Use an affine MCMC sampler to draw from the posterior at the target summaries, $\textbf{x}^{\rm target}$.
3. Append training data drawn from the posterior and re-train the CMAFs.
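As a quick sanity check of the chain-rule factorization that MADEs exploit, here is a NumPy example; the correlated 2D Gaussian is an invented stand-in, chosen because its one-dimensional conditionals are known in closed form:

```python
import numpy as np

# Chain-rule check for a correlated 2D Gaussian: p(x1, x2) = p(x1) * p(x2 | x1),
# with each factor a 1D Gaussian -- the same structure a MADE parameterizes with
# masked neural networks instead of closed-form conditionals.
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])

def log_gauss(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def joint_log_prob(x):
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(x) * np.log(2 * np.pi) + logdet + x @ np.linalg.inv(cov) @ x)

x = np.array([0.3, -1.2])
# conditional of a bivariate normal: x2 | x1 ~ N(rho * x1, 1 - rho**2)
lp_chain = log_gauss(x[0], 0.0, 1.0) + log_gauss(x[1], rho * x[0], 1 - rho ** 2)
print(lp_chain, joint_log_prob(x))  # the two factorizations agree
```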
```
!pip install -q getdist
!pip install -q corner
!pip install -q chainconsumer
import keras
import tensorflow.keras.backend as K
import time
from tqdm import tqdm
from chainconsumer import ChainConsumer
```
(ignore the red error message)
We'll set up the same prior as before, this time in regular `tensorflow-probability`. This means that our CMAFs can talk to our prior draws in the form of tensorflow tensors.
```
# set up prior in non-jax tfp
samp_prior = tfp.distributions.Blockwise(
[tfp.distributions.Uniform(low=low, high=high)
for low, high in zip([0.01, 0.2], [1.0, 1.3])])
samp_prior.low = np.array([0.01, 0.2])
samp_prior.high = np.array([1.0, 1.3])
#@title set up the CMAF code <font color='lightgreen'>[RUN ME]</font>
class ConditionalMaskedAutoregressiveFlow(tf.Module):
def __init__(self, n_dimensions=None, n_conditionals=None, n_mades=1, n_hidden=[50,50], input_order="random",
activation=keras.layers.LeakyReLU(0.01),
all_layers=True,
kernel_initializer=keras.initializers.RandomNormal(mean=0.0, stddev=1e-5, seed=None),
bias_initializer=keras.initializers.RandomNormal(mean=0.0, stddev=1e-5, seed=None),
kernel_regularizer=None, bias_regularizer=None, kernel_constraint=None,
bias_constraint=None):
super(ConditionalMaskedAutoregressiveFlow, self).__init__(name='conditional_maf')
# extract init parameters
self.n_dimensions = n_dimensions
self.n_conditionals = n_conditionals
self.n_mades = n_mades
# construct the base (normal) distribution
self.base_distribution = tfd.MultivariateNormalDiag(loc=tf.zeros(self.n_dimensions), scale_diag=tf.ones(self.n_dimensions))
# put the conditional inputs to all layers, or just the first layer?
if all_layers == True:
all_layers = "all_layers"
else:
all_layers = "first_layer"
# construct stack of conditional MADEs
self.MADEs = [tfb.AutoregressiveNetwork(
params=2,
hidden_units=n_hidden,
activation=activation,
event_shape=[n_dimensions],
conditional=True,
conditional_event_shape=[n_conditionals],
conditional_input_layers=all_layers,
input_order=input_order,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
) for i in range(n_mades)
]
# bijector for x | y (chain the conditional MADEs together)
def bijector(self, y):
# start with an empty bijector
MAF = tfb.Identity()
# pass through the MADE layers (passing conditional inputs each time)
for i in range(self.n_mades):
    # bind the current MADE via a default argument: a bare `self.MADEs[i]` closure
    # would late-bind `i`, making every layer in the chain use the last MADE
    MAF = tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=lambda x, made=self.MADEs[i]: made(x, conditional_input=y))(MAF)
return MAF
# construct distribution P(x | y)
def __call__(self, y):
return tfd.TransformedDistribution(
self.base_distribution,
bijector=self.bijector(y))
# log probability ln P(x | y)
def log_prob(self, x, y):
return self.__call__(y).log_prob(x)
# sample n samples from P(x | y)
def sample(self, n, y):
# base samples
base_samples = self.base_distribution.sample(n)
# biject the samples
return self.bijector(y).forward(base_samples)
```
If you're curious about how the MCMC sampler and CMAF code work, feel free to double-click the hidden cells above. We'll walk through the gist of how each module works though:
The `ConditionalMaskedAutoregressiveFlow` API functions similarly to other `tfp` distributions. To set up a model we need to fix a few aspects of the flow, starting with how many MADEs to stack, `n_mades`. To set up a model with three MADEs, two parameters (`n_dimensions`), two conditionals (`n_conditionals`), and two hidden layers of 50 neurons per MADE, we'd call:
```
my_CMAF = ConditionalMaskedAutoregressiveFlow(n_dimensions=2, n_conditionals=2, n_mades=3, n_hidden=[50,50])
```
What's cool is that this module works just like a `tfp.distributions` function, which means that we can evaluate a log-probability, $\ln p(x | y)$, *conditional* on some $y$-value:
```
key,rng = jax.random.split(rng)
n_samples = 1
x = prior.sample(sample_shape=(n_samples,), seed=key)
y = np.array([0.3, 0.4])
logU = my_CMAF.log_prob(x, y)
```
We'll build on this basic syntax to set up DELFI dictionaries that store the pieces we need.
___
# Exercise 0: initialize models for target data
Now we're going to initialize several CMAF models for our piece of target data. Using multiple (and varied) deep learning architectures for the same problem is called the "deep ensemble" technique ([see this paper for an overview](https://papers.nips.cc/paper/2017/file/9ef2ed4b7fd2c810847ffa5fa85bce38-Paper.pdf)).
When setting up DELFI, it's important to remember that each ensemble of CMAFs ought to be generated *per piece of target data*, since we're interested in observing the "slice" of parameter space that gives us each datum's posterior. Since these models are written in Tensorflow, we don't have to worry about specifying a random key or initialization for the model like we do in `Jax`.
1. Declare a `DELFI` dictionary to store the following aspects:
- a list of CMAF models
- a list of optimizers
- a training dataset
- a validation dataset
- the IMNN estimates
2. Initialize `num_models=2` models, each with `n_mades=3` MADEs. Try one set of MADEs with two hidden layers of 50 neurons, and another with three. See if you can set up their respective optimizers (we'll use `tf.keras.optimizers.Adam()` with a learning rate of $10^{-3}$).
## Note: remove all `pass` statements from the functions to make them runnable!
```
DELFI = {
}
#@title Ex. 0 solution <font color='lightgreen'>[run me to proceed]</font>
num_targets = 1
# set up list of dictionaries for the target datum
DELFI = {
'MAFs': None, # list of CMAF models
'opts': [], # list of optimizers
'posts':[], # list of MAF posteriors
'train_dataset': None, # training dataset
'val_dataset': None, # validation dataset
'train_losses' : [], # losses
'val_losses' : [],
'estimates': estimates,
'target_data' : δ_target,
'F_IMNN': IMNN.F,
'θ_target': θ_target,
}
# number of CMAFs per DELFI ensemble
num_models = 2
n_hiddens = [[50,50], [50,50,50]] # two different architectures: two and three hidden layers
DELFI['MAFs'] = [ConditionalMaskedAutoregressiveFlow(n_dimensions=2, n_mades=3,
n_conditionals=2, n_hidden=n_hiddens[i]) for i in range(num_models)]
DELFI['opts'] = [tf.keras.optimizers.Adam(learning_rate=1e-3) for i in range(num_models)]
```
___
# Exercise 1: define train and validation steps
Here we want to define tensorflow function training and validation steps that we'll later call in a loop to train each CMAF model in the DELFI ensemble.
1. set up the log posterior loss: $-\ln U = -\ln p(x | y) - \ln p(y)$ where $y=\theta$ are our parameters.
*hint*: try the `samp_prior.log_prob()` call on a few data
2. obtain gradients, `grads` with respect to the scalar loss
3. update each optimizer with the call `optimizer.apply_gradients(zip(grads, model.trainable_variables))`
```
# define loss function -ln U
def logloss(x, y, model):  # hint: use the global samp_prior for the prior term
pass
#@title Ex. 1 solution <font color='lightgreen'>[run me to proceed]</font>
# define loss function
def logloss(x, y, model):
return - model.log_prob(x,y) - samp_prior.log_prob(y)
```
Now that we have our loss defined, we can use it to train our CMAFs via backpropagation:
```
@tf.function
def train_step(x, y, ensemble, opts):
losses = []
# loop over models in ensemble
for m in range(len(ensemble)):
with tf.GradientTape() as tape:
# get loss across batch using our log-loss function
loss = K.mean(logloss(x, y, ensemble[m]))
losses.append(loss)
grads = tape.gradient(loss, ensemble[m].trainable_variables)
opts[m].apply_gradients(zip(grads, ensemble[m].trainable_variables))
return losses
@tf.function
def val_step(x, y, ensemble):
val_l = []
for m in range(len(ensemble)):
loss = K.mean(logloss(x, y, ensemble[m]))
val_l.append(loss)
return val_l
```
___
# Exercise 2: create some dataset functions
Here we want to create the dataset of $(\textbf{x}, \boldsymbol{\theta})$ pairs to train our CMAFs on. Write a function that:
1. generates simulations (with random keys) from sampled parameter pairs, $\theta$. We've set up the key-splitting and simulator code for you.
2. feeds the simulations through `IMNN.get_estimate()` to get summaries, $\textbf{x}$
3. uses `jax.vmap()` on the above to do this efficiently!
```
#@title hints for vmapping:
# for a function `my_fn(x, a, b)`, you can vmap ("vector map") over an array of x values as follows:
def my_fn(x, a, b):
return a*x**3 - x + b
# define a slope and intercept
a = 0.5
b = 1.0
# define our x-values
x = np.linspace(-10,10, num=100)
# define a mini function that only depends on x
mini_fn = lambda x: my_fn(x, a=a, b=b)
y = jax.vmap(mini_fn)(x)
plt.plot(x, y)
plt.xlabel('$x$')
plt.ylabel('$y$')
def get_params_summaries(key, θ_samp, simulator=simulator):
"""
function for generating (x,θ) pairs from IMNN compression
over the prior range
θ_samp: array of sampled parameters over prior range
simulator: function for simulating data to be compressed
"""
n_samples = θ_samp.shape[0]
# we'll split up the keys for you
keys = np.array(jax.random.split(key, num=n_samples))
# next define a simulator that takes a key as argument
my_simulator = lambda rng, θ: simulator(
rng, θ, simulator_args={
**simulator_args,
**{"squeeze": False}})
# generate data, vmapping over the random keys and parameters:
# d =
# generate summaries
# x =
# return paired training data
pass
#@title Ex. 2 solution <font color='lightgreen'>[run me to proceed]</font>
def get_params_summaries(key, n_samples, θ_samp, simulator=simulator):
keys = np.array(jax.random.split(key, num=n_samples))
sim = lambda rng, θ: simulator(
rng, θ, simulator_args={
**simulator_args,
**{"squeeze": False}})
# generate a bunch of fields over the prior ranges
d = jax.vmap(sim)(keys, θ_samp)
# compress fields to summaries
x = IMNN.get_estimate(d)
return x, θ_samp
def get_dataset(data, batch_size=20, buffer_size=1000, split=0.75):
"""
helper function for creating tensorflow dataset for CMAF training.
data: pair of vectors (x, θ) = (x, y)
batch_size: how many data pairs per gradient descent
buffer_size: what chunk of the dataset to shuffle (default: random)
split: train-validation split
"""
x,y = data
idx = int(len(x)*split)
x_train = x[:idx]
y_train = y[:idx]
x_val = x[idx:]
y_val = y[idx:]
# Prepare the training dataset.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=buffer_size).batch(batch_size)
# Prepare the validation dataset.
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(batch_size)
return train_dataset, val_dataset
```
# Visualize compressed summaries at fiducial model and over the prior
Now that we have a function that takes in parameter vectors, generates simulations, and compresses them into summaries, we can visualize how the IMNN compresses the fields in summary space. We will visualize:
1. compressed simulations run at the fiducial model, $(\Omega_c, \sigma_8) = (0.4, 0.6)$
2. compressed simulations run at the target model, $(\Omega_c, \sigma_8) = (0.2589, 0.8159)$
3. compressed simulations run across the full (uniform) prior range
```
n_samples = 1000
buffer_size = n_samples
key1,key2 = jax.random.split(rng)
# params over the prior range
θ_samp = prior.sample(sample_shape=(n_samples,), seed=key1)
xs, θ_samp = get_params_summaries(key2, n_samples, θ_samp)
# fiducial params
key,rng = jax.random.split(key1)
_θfids = np.repeat(np.expand_dims(θ_fid, 1), 1000, axis=1).T
xs_fid, _ = get_params_summaries(key, n_samples, _θfids)
# target params
_θtargets = np.repeat(np.expand_dims(θ_target, 1), 1000, axis=1).T
xs_target, _ = get_params_summaries(key, n_samples, _θtargets)
plt.scatter(xs.T[0], xs.T[1], label='prior', s=5, alpha=0.7)
plt.scatter(xs_fid.T[0], xs_fid.T[1], label='fiducial', s=5, marker='*', alpha=0.7)
plt.scatter(xs_target.T[0], xs_target.T[1], label='target', s=5, marker='+', alpha=0.7)
plt.title('summary scatter')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.xlim(-1.0, 2.0)
plt.legend()
plt.show()
```
### Q: Wait, why is our prior in summary space not uniform (rectangular)?
Remember, we've passed our parameters through our simulator and our simulations through the IMNN compressor, so our summaries are nonlinear functions of the parameters (hence the weirdly-shaped cloud). The score estimates obtained from the IMNN are quick and convenient, but can be biased and suboptimal if the fiducial model is far from the truth.
Even so, these IMNN score summaries can be used for likelihood-free inference to give consistent posterior estimates, albeit with some information loss (since we haven't compressed near the target).
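A hedged one-dimensional analogy may help here (the map $x = \theta^2$ is an invented stand-in for the simulator-plus-IMNN pipeline): pushing a uniform prior through a nonlinear map produces a non-uniform density in summary space.

```python
import numpy as np

# Uniform prior draws pushed through a nonlinear map pile up where the map is
# flat -- which is why the "prior" cloud in summary space is not rectangular.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 1.0, 100_000)
x = theta ** 2  # invented stand-in for simulate-then-compress
hist, _ = np.histogram(x, bins=[0.0, 0.25, 0.5, 0.75, 1.0])
print(hist / len(x))  # roughly [0.50, 0.21, 0.16, 0.13]: mass concentrates near 0
```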
---
## Now, onto the good bit: CMAF training!
### Generate our training dataset
We're going to call our dataset functions to create a dataset of $(\textbf{x}, \boldsymbol{\theta})$ pairs of shape $((1000, 2), (1000, 2))$.
```
n_samples = 1000
batch_size = 100
buffer_size = n_samples
key1,key2 = jax.random.split(rng)
# sample from the tfpj prior so that we can specify the key
# and stay in jax.numpy:
θ_samp = prior.sample(sample_shape=(n_samples,), seed=key1)
# generate sims and compress to summaries
ts, θ_samp = get_params_summaries(key2, n_samples, θ_samp)
data = (ts, θ_samp)
# use the dataset function
train_dataset, val_dataset = get_dataset(data, batch_size=batch_size, buffer_size=buffer_size)
DELFI['train_dataset'] = train_dataset
DELFI['val_dataset'] = val_dataset
```
Next let's define a training loop for a set number of epochs, calling our training and validation step functions.
___
# Exercise 3: define training loop
We're going to use the `train_step` functions to train our CMAF models for a set number of epochs.
```
def training_loop(delfi, epochs=2000):
"""training loop function that updates optimizers and
stores training history"""
# unpack our dictionary's attributes
ensemble = delfi['MAFs']
opts = delfi['opts']
train_dataset = delfi['train_dataset']
val_dataset = delfi['val_dataset']
for epoch in tqdm(range(epochs)):
# shuffle training data anew every 50th epoch (done for you)
if epoch % 50 == 0:
train_dataset = train_dataset.shuffle(buffer_size=buffer_size)
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
# 1) call train step and capture loss value
pass
# 2) store loss value
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
# 3) call val step and capture loss value
pass
# 4) store validation loss value
pass
#@title Ex. 3 solution <font color='lightgreen'>[run me to proceed]</font>
def training_loop(delfi, epochs=2000):
"""training loop function that updates optimizers and
stores training history"""
# unpack our dictionary's attributes
ensemble = delfi['MAFs']
opts = delfi['opts']
train_dataset = delfi['train_dataset']
val_dataset = delfi['val_dataset']
for epoch in tqdm(range(epochs)):
# shuffle training data anew every 50th epoch
if epoch % 50 == 0:
train_dataset = train_dataset.shuffle(buffer_size=buffer_size)
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
# call train step and capture loss value
loss_values = train_step(x_batch_train, y_batch_train, ensemble, opts)
# store loss value
delfi['train_losses'].append(loss_values)
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
# call val step and capture loss value
val_loss = val_step(x_batch_val, y_batch_val, ensemble)
# store validation loss value
delfi['val_losses'].append(val_loss)
#@title define some useful plotting functions <font color='lightgreen'>[run me]</font>
# visualize training trajectories
def plot_trajectories(delfis, num_models=4, num_targets=4):
"""code for plotting training trajectories. note that num_targets should be
equal to len(delfis)"""
if num_targets > 1:
fig,axs = plt.subplots(ncols=num_models, nrows=num_targets, figsize=(8,8))
for i,d in enumerate(delfis):
for j in range(num_models):
axs[i,j].plot(np.array(d['train_losses']).T[j], label='train')
axs[i,j].plot(np.array(d['val_losses']).T[j], label='val')
if j == 0:
axs[i,j].set_ylabel(r'$p(t\ |\ \vartheta; w)$')
if i == num_targets-1:
axs[i,j].set_xlabel(r'num epochs')
else:
fig,axs = plt.subplots(ncols=num_models, nrows=num_targets, figsize=(7,3))
d = delfis
for j in range(num_models):
axs[j].plot(np.array(d['train_losses']).T[j], label='train')
axs[j].plot(np.array(d['val_losses']).T[j], label='val')
if j == 0:
#axs[j].set_ylabel(r'$p(t\ |\ \vartheta; w)$')
axs[j].set_ylabel(r'$-\ln U$')
axs[j].set_xlabel(r'num epochs')
axs[j].set_title('CMAF model %d'%(j + 1))
# if i == num_models-1:
# axs[j].set_xlabel(r'\# epochs')
plt.legend()
plt.tight_layout()
plt.show()
# then visualize all posteriors
def plot_posts(delfis, params, num_models=4, num_targets=4,
Fisher=None, estimates=estimates, truth=None):
fig,ax = plt.subplots(ncols=num_models, nrows=num_targets, figsize=(7,4))
params = [r'$\Omega_c$', r'$\sigma_8$']
if num_targets > 1:
for i,delfi in enumerate(delfis):
for j in range(num_models):
cs = ChainConsumer()
cs.add_chain(delfi['posts'][j], parameters=params, name='DELFI + IMNN') #, color=corner_colors[0])
#cs.add_covariance(θ_target, -Finv_analytic, parameters=params, name="Analytic Fisher", color=corner_colors[2])
cs.configure(linestyles=["-", "-", "-"], linewidths=[1.0, 1.0, 1.0], usetex=False,
shade=[True, True, False], shade_alpha=[0.7, 0.6, 0.], tick_font_size=8)
cs.plotter.plot_contour(ax[i, j], r"$\Omega_c$", r"$\sigma_8$")
ax[i, j].axvline(θ_target[0], linestyle=':', linewidth=1)
ax[i, j].axhline(θ_target[1], linestyle=':', linewidth=1)
ax[i,j].set_ylim([prior.low[1], prior.high[1]])
ax[i,j].set_xlim([prior.low[0], prior.high[0]])
else:
delfi = delfis
for j in range(num_models):
cs = ChainConsumer()
cs.add_chain(delfi['posts'][j], parameters=params, name='DELFI + IMNN')
if Fisher is not None:
cs.add_covariance(np.squeeze(estimates), np.linalg.inv(Fisher),
parameters=params, name="Fisher", color='k')
cs.configure(linestyles=["-", "-", "-"], linewidths=[1.0, 1.0, 1.0], usetex=False,
shade=[True, False, False], shade_alpha=[0.7, 0.6, 0.], tick_font_size=8)
cs.plotter.plot_contour(ax[j], r"$\Omega_c$", r"$\sigma_8$")
if truth is not None:
ax[j].axvline(truth[0], linestyle=':', linewidth=1, color='k')
ax[j].axhline(truth[1], linestyle=':', linewidth=1, color='k')
ax[j].set_ylim([prior.low[1], prior.high[1]])
ax[j].set_xlim([prior.low[0], prior.high[0]])
ax[j].set_xlabel(params[0])
ax[j].set_ylabel(params[1])
ax[j].set_title('CMAF model %d'%(j+1))
plt.legend()
plt.tight_layout()
plt.show()
return ax
```
### Train our CMAF models!
```
# train both models with the training loop
epochs = 2000
training_loop(DELFI, epochs=epochs)
# visualize training trajectories
import seaborn as sns
%matplotlib inline
sns.set_theme()
plot_trajectories(DELFI, num_models=2, num_targets=1)
```
# Exercise 4: using the affine MCMC sampler
Now that we have trained CMAF models with which to compute $p(x | \theta)$, we need an efficient MCMC sampler to draw from the posterior, $p(x | \theta) \times p(\theta)$. We can do this using the `affine_sample()` sampler, included in the `pydelfi` package. This code is written in Tensorflow, adapted from the [`emcee` package](https://arxiv.org/abs/1202.3665), and can be called with only a few lines of code:
```
# initialize walkers...
walkers1 = tf.random.normal([n_walkers, 2], (a, b), sigma)
walkers2 = tf.random.normal([n_walkers, 2], (a, b), sigma)
# sample using affine
chains = affine_sample(log_prob, n_params, n_walkers, n_steps, walkers1, walkers2)
```
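To see what the sampler is doing under the hood, here is a minimal NumPy sketch of the affine-invariant "stretch move" (Goodman & Weare 2010, the move behind `emcee` and the `affine_sample` code below), targeting a 1D standard normal purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prob(x):
    return -0.5 * x ** 2  # unnormalized standard normal

def stretch_update(group, partners_pool, logp, a=2.0):
    """Stretch-move update of one walker group using partners from the other."""
    n = len(group)
    partners = partners_pool[rng.integers(0, len(partners_pool), n)]
    z = ((a - 1.0) * rng.uniform(size=n) + 1.0) ** 2 / a  # z ~ g(z) on [1/a, a]
    proposal = partners + z * (group - partners)
    logp_prop = log_prob(proposal)
    # one parameter here, so the z**(n_params - 1) Jacobian factor is just 1
    accept = rng.uniform(size=n) < np.minimum(1.0, np.exp(logp_prop - logp))
    return np.where(accept, proposal, group), np.where(accept, logp_prop, logp)

n_walkers, n_steps = 100, 1500
w1 = rng.normal(0.0, 0.5, n_walkers)
w2 = rng.normal(0.0, 0.5, n_walkers)
lp1, lp2 = log_prob(w1), log_prob(w2)
chain = []
for _ in range(n_steps):
    w1, lp1 = stretch_update(w1, w2, lp1)  # move group 1 using group 2 as partners
    w2, lp2 = stretch_update(w2, w1, lp2)  # then group 2 using the updated group 1
    chain.append(np.concatenate([w1, w2]))
samples = np.concatenate(chain[500:])       # drop burn-in
print(samples.mean(), samples.std())        # close to 0 and 1
```

The two-group update mirrors the `walkers1`/`walkers2` structure above; in the tutorial the same scheme runs in TensorFlow over the 2-parameter posterior.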
1. First we'll need to set up our log-probability for the posterior. Write a function `log_posterior()` that returns the log-probability given $x$ and a conditional $y$:
```
#@title set up the affine MCMC sampler <font color='lightgreen'>[run me]</font>
from tqdm import trange
import numpy as onp
def affine_sample(log_prob, n_params, n_walkers, n_steps, walkers1, walkers2):
# initialize current state
current_state1 = tf.Variable(walkers1)
current_state2 = tf.Variable(walkers2)
# initial target log prob for the walkers (and set any nans to -inf)...
logp_current1 = log_prob(current_state1)
logp_current2 = log_prob(current_state2)
logp_current1 = tf.where(tf.math.is_nan(logp_current1), tf.ones_like(logp_current1)*tf.math.log(0.), logp_current1)
logp_current2 = tf.where(tf.math.is_nan(logp_current2), tf.ones_like(logp_current2)*tf.math.log(0.), logp_current2)
# holder for the whole chain
chain = [tf.concat([current_state1, current_state2], axis=0)]
# MCMC loop
with trange(1, n_steps) as t:
for epoch in t:
# first set of walkers:
# proposals
partners1 = tf.gather(current_state2, onp.random.randint(0, n_walkers, n_walkers))
z1 = 0.5*(tf.random.uniform([n_walkers], minval=0, maxval=1)+1)**2
proposed_state1 = partners1 + tf.transpose(z1*tf.transpose(current_state1 - partners1))
# target log prob at proposed points
logp_proposed1 = log_prob(proposed_state1)
logp_proposed1 = tf.where(tf.math.is_nan(logp_proposed1), tf.ones_like(logp_proposed1)*tf.math.log(0.), logp_proposed1)
# acceptance probability
p_accept1 = tf.math.minimum(tf.ones(n_walkers), z1**(n_params-1)*tf.exp(logp_proposed1 - logp_current1) )
# accept or not
accept1_ = (tf.random.uniform([n_walkers], minval=0, maxval=1) <= p_accept1)
accept1 = tf.cast(accept1_, tf.float32)
# update the state
current_state1 = tf.transpose( tf.transpose(current_state1)*(1-accept1) + tf.transpose(proposed_state1)*accept1)
logp_current1 = tf.where(accept1_, logp_proposed1, logp_current1)
# second set of walkers:
# proposals
partners2 = tf.gather(current_state1, onp.random.randint(0, n_walkers, n_walkers))
z2 = 0.5*(tf.random.uniform([n_walkers], minval=0, maxval=1)+1)**2
proposed_state2 = partners2 + tf.transpose(z2*tf.transpose(current_state2 - partners2))
# target log prob at proposed points
logp_proposed2 = log_prob(proposed_state2)
logp_proposed2 = tf.where(tf.math.is_nan(logp_proposed2), tf.ones_like(logp_proposed2)*tf.math.log(0.), logp_proposed2)
# acceptance probability
p_accept2 = tf.math.minimum(tf.ones(n_walkers), z2**(n_params-1)*tf.exp(logp_proposed2 - logp_current2) )
# accept or not
accept2_ = (tf.random.uniform([n_walkers], minval=0, maxval=1) <= p_accept2)
accept2 = tf.cast(accept2_, tf.float32)
# update the state
current_state2 = tf.transpose( tf.transpose(current_state2)*(1-accept2) + tf.transpose(proposed_state2)*accept2)
logp_current2 = tf.where(accept2_, logp_proposed2, logp_current2)
# append to chain
chain.append(tf.concat([current_state1, current_state2], axis=0))
# stack up the chain
chain = tf.stack(chain, axis=0)
return chain
@tf.function
def log_posterior(x, y, cmaf):
# define likelihood p(x|y) with CMAF
# compute prior probability p(y)
# return the log-posterior
pass
#@title Ex. 4.1 solution <font color='lightgreen'>[run me to proceed]</font>
@tf.function
def log_posterior(x, y, cmaf):
# define likelihood p(x|y) with CMAF
like = cmaf.log_prob(x,y)
# compute prior probability p(y)
_prior = samp_prior.log_prob(y)
return like + _prior # the log-posterior
```
2. Now we're going to use the sampler and write a function to obtain our posteriors. To call the sampler, we need to pass it our log-posterior function and specify the number of walkers in parameter space:
```
# define function for getting posteriors
def get_posteriors(delfi, n_params, n_steps=2000, n_walkers=500, burnin_steps=1800, skip=4):
delfi['posts'] = [] # reset posteriors (can save if you want to keep a record)
# center affine sampler walkers on the IMNN estimates
a,b = np.squeeze(delfi['estimates'])
# choose width of proposal distribution
# sigma =
# loop over models in the ensemble
for m,cmaf in enumerate(delfi['MAFs']):
print('getting posterior for target data with model %d'%(m+1))
# wrapper for log_posterior function: freeze at target summary slice, x_target
@tf.function
def my_log_prob(y, x=delfi['estimates']):
return log_posterior(x, y, cmaf)
# initialize walkers...
# walkers1 =
# walkers2 =
# sample using affine. note that this returns a tensorflow tensor
# chain = affine_sample()
# convert chain to numpy and append to dictionary
# delfi['posts'].append(np.stack([chain.numpy()[burnin_steps::skip,:,0].flatten(),
#                                 chain.numpy()[burnin_steps::skip,:,1].flatten()], axis=-1))
pass
#@title Ex. 4.2 solution <font color='lightgreen'>[run me to proceed]</font>
# define function for getting posteriors
def get_posteriors(delfi, n_params, n_steps=2000, n_walkers=500, burnin_steps=1800, skip=4):
delfi['posts'] = [] # reset posteriors (can save if you want to keep a record)
# center affine sampler walkers on the IMNN estimates
a,b = np.squeeze(delfi['estimates'])
# choose width of proposal distribution
sigma = 0.5
# loop over models in the ensemble
for m,cmaf in enumerate(delfi['MAFs']):
print('getting posterior for target data with model %d'%(m+1))
# wrapper for log_posterior function: freeze at target summary slice
@tf.function
def my_log_prob(y, x=delfi['estimates']):
return log_posterior(x, y, cmaf)
# initialize walkers...
walkers1 = tf.random.normal([n_walkers, 2], (a, b), sigma)
walkers2 = tf.random.normal([n_walkers, 2], (a, b), sigma)
# sample using affine
chain = affine_sample(my_log_prob, n_params, n_walkers, n_steps, walkers1, walkers2)
delfi['posts'].append(np.stack([chain.numpy()[burnin_steps::skip,:,0].flatten(),
chain.numpy()[burnin_steps::skip,:,1].flatten()], axis=-1))
# get all intermediate posteriors --> this should be really fast!
get_posteriors(DELFI, n_params)
```
We're going to use our plotting helpers to visualize the posteriors from each model. We'll also plot the IMNN's Fisher Gaussian approximation in black, centered on our estimates. Finally, we'll mark the true Planck parameters with crosshairs:
```
params = [r'$\Omega_c$', r'$\sigma_8$']
plot_posts(DELFI, params, num_models=num_models, num_targets=1,
Fisher=IMNN.F, estimates=np.squeeze(estimates), truth=θ_target)
```
___
# Exercise 5: append new posterior training data to home in on the truth (repeat several times)
Finally, we're going to draw parameters from the posterior, re-simulate cosmological fields, compress, append the new ($x$, $\theta$) pairs to the dataset, and keep training our DELFI ensemble. Within a few iterations, this should shrink our posteriors considerably.
Since we've coded all of our training functions modularly, we can just run them in a loop (once we've drawn and simulated from the prior). First we'll give you a piece of code to draw from the posterior chains:
concat_data(DELFI, key, n_samples=500)
Here, remember to reset your random key before drawing new samples!
Next, write a loop that:
1. draws `n_samples` summary-parameter pairs from *each* existing CMAF model's posteriors
2. continues training the DELFI ensemble members
3. re-samples the posterior
**bonus**: Can you develop a scheme that requires fewer `n_samples` draws each iteration? What about optimizer stability? (hint: try a decaying learning rate)
___
```
#@title `concat_data` function to draw from each posterior and concatenate dataset <font color='lightgreen'>[run me to proceed]</font>
import pandas as pd
def drop_samples(samples, prior=prior):
    """
    helper function for dropping posterior draws outside
    the specified prior range
    """
    mydf = pd.DataFrame(samples)
    mydf = mydf.drop(mydf[mydf[0] < prior.low[0]].index)
    mydf = mydf.drop(mydf[mydf[1] < prior.low[1]].index)
    mydf = mydf.drop(mydf[mydf[0] > prior.high[0]].index)
    mydf = mydf.drop(mydf[mydf[1] > prior.high[1]].index)
    return np.array(mydf.values, dtype='float32')
def concat_data(delfi, key, n_samples=500, prior=prior):
    """
    helper code for concatenating data for each DELFI CMAF model.
    delfi: DELFI dictionary object with 'train_dataset'
           and 'val_dataset' attributes
    key: jax.random.PRNGKey
    n_samples: number of samples to draw from EACH DELFI ensemble model
    """
    # draw n_samples parameter pairs from each model's posterior chain
    key, rng = jax.random.split(key)
    ϑ_samp = []
    for m, _post in enumerate(delfi['posts']):
        # discard the first 45000 (burn-in) samples, then draw at random
        _post = _post[45000:]
        idx = np.arange(len(_post))
        ϑ_samp.append(_post[onp.random.choice(idx, size=n_samples)])
    ϑ_samp = np.concatenate(ϑ_samp, axis=0)
    print(ϑ_samp.shape)
    ϑ_samp = drop_samples(ϑ_samp, prior=prior)
    dropped = n_samples * len(delfi['posts']) - ϑ_samp.shape[0]
    print('I dropped {} parameter pairs that were outside the prior'.format(dropped))
    _n_samples = len(ϑ_samp)
    ts, ϑ_samp = get_params_summaries(rng, _n_samples, ϑ_samp)
    new_data = (ts, ϑ_samp)
    print("I've drawn %d new summary-parameter pairs" % (ts.shape[0]))
    # this should shuffle the dataset
    new_train_dataset, new_val_dataset = get_dataset(new_data, batch_size=batch_size,
                                                     buffer_size=len(new_data[0]))
    # concatenate datasets
    delfi['train_dataset'] = delfi['train_dataset'].concatenate(new_train_dataset)
    delfi['val_dataset'] = delfi['val_dataset'].concatenate(new_val_dataset)
#@title Ex. 5 solution <font color='lightgreen'>[run me to proceed]</font>
for repeat in range(1):
    key, rng = jax.random.split(rng)
    print('doing retraining iteration %d' % (repeat))
    concat_data(DELFI, key, n_samples=500)
    print('retraining on augmented dataset')
    epochs = 500
    training_loop(DELFI, epochs=epochs)
    plot_trajectories(DELFI, num_models=2, num_targets=1)
    get_posteriors(DELFI, n_params)
    plot_posts(DELFI, params, num_models=num_models, num_targets=1,
               Fisher=IMNN.F, estimates=np.squeeze(estimates), truth=θ_target)
```
___
# Exercise 6: create ensemble posterior
Once we're happy with the DELFI training, we can proceed to reporting our ensemble's combined posterior. Using the [`ChainConsumer` API](https://samreay.github.io/ChainConsumer/index.html), concatenate the posterior chains and report a nice corner plot:
```
#@title Exercise 6 solution <font color='lightgreen'>[run me to proceed]</font>
def drop_samples(samples, prior=prior):
    """
    helper function for dropping posterior draws outside
    the specified prior range
    """
    mydf = pd.DataFrame(samples)
    mydf = mydf.drop(mydf[mydf[0] < prior.low[0]].index)
    mydf = mydf.drop(mydf[mydf[1] < prior.low[1]].index)
    mydf = mydf.drop(mydf[mydf[0] > prior.high[0]].index)
    mydf = mydf.drop(mydf[mydf[1] > prior.high[1]].index)
    return np.array(mydf.values, dtype='float32')
super_post = np.concatenate(DELFI['posts'], axis=0)
# assign new dict entry after dropping samples outside the prior
DELFI['super_post'] = drop_samples(super_post)
params = [r"$\Omega_c$", r"$\sigma_8$"]
corner_colors = [None, None, 'k']
c = ChainConsumer()
c.add_chain(DELFI['super_post'][::10], parameters=params, name='DELFI + IMNN', color=corner_colors[0])
c.add_covariance(np.squeeze(estimates), IMNN.invF, parameters=params, name="IMNN F @estimates", color=corner_colors[2])
c.configure(linestyles=["-", "-", "--"], linewidths=[1.0, 1.0, 1.0,],
shade=[True, False, False], shade_alpha=[0.7, 0.6, 0.],
tick_font_size=8, usetex=False,
legend_kwargs={"loc": "upper left", "fontsize": 8},
legend_color_text=False, legend_location=(0, 0))
fig = c.plotter.plot(figsize="column", truth=list(θ_target), filename=None)
```
___
# Congrats!
You've made it through the core of the tutorial: you've trained a DELFI ensemble on IMNN-compressed summaries of mock dark matter fields and obtained cosmological parameter posteriors!
### Now what ?
There are lots of things you can do if you have the time -- for one, you could check out the bonus problems below.
___
# BONUS: Compare IMNN Compressors
For this whole tutorial we've been using an IMNN ***trained deliberately far*** from our Planck parameters, meaning our compression isn't guaranteed to be optimal. In our accompanying paper (to be released on arXiv on July 16, 2021) we re-trained an IMNN on the mean of the score estimates of a set of four cosmological fields. Since this estimate is closer to the true target parameters, our IMNN compression is guaranteed to improve our inference on the target data.
<img src="https://raw.githubusercontent.com/tlmakinen/FieldIMNNs/master/tutorial/plots/new-four-cosmo-field-comparison.png" alt="drawing" width="700"/>
We've included this newly-trained IMNN in the GitHub repository that you've already cloned into this notebook -- as a bonus, repeat the DELFI posterior estimation using the new (more optimal) compressor and see how your inference shapes up! You *should* see tighter Gaussian Approximations *and* DELFI contours:
```
# load IMNN state
import cloudpickle as pickle
import os
def unpickle_me(path):
    # use a context manager so the file handle is closed after loading
    with open(path, 'rb') as file:
        return pickle.load(file)
folder_name = './FieldIMNNs/tutorial/IMNN2-aspects/'
loadstate = unpickle_me(os.path.join(folder_name, 'IMNN_state'))
state2 = jax.experimental.optimizers.pack_optimizer_state(loadstate)
# startup key to get the right state of the weights
startup_key2 = np.load(os.path.join(folder_name, 'IMNN_startup_key.npy'), allow_pickle=True)
# load weights
best_weights2 = np.load(os.path.join(folder_name, 'best_w.npy'), allow_pickle=True)
# load fiducial model that we trained the model at (estimates derived from initial IMNN)
θ_fid_new = np.load(os.path.join(folder_name, 'new_fid_params.npy'), allow_pickle=True)
# initialize IMNN with pre-trained state
IMNN2 = imnn.IMNN(
n_s=n_s,
n_d=n_d,
n_params=n_params,
n_summaries=n_summaries,
input_shape=(1,) + shape + (1,),
θ_fid=θ_fid_new,
model=model,
optimiser=optimiser,
key_or_state=state2, # <---- initialize with state
simulator=lambda rng, θ: simulator(
rng, θ, simulator_args={
**simulator_args,
**{"squeeze": False}}))
# now set weights using the best training weights and startup key (this can take a moment)
IMNN2.set_F_statistics(w=best_weights2, key=startup_key2)
print(np.linalg.det(IMNN2.F))
```
---
# BONUS 2:
Alternatively, train a new IMNN from scratch at the target data `estimates` (try with fewer filters on the free version of Colab). You could also try playing with other `stax` layers like `stax.Dense(num_neurons)`. Feel free to also switch up the simulation parameters -- choosing $N=32$ for instance will dramatically increase training speed for testing, etc.
```
fs = 16
new_layers = [
InceptBlock((fs, fs, fs), strides=(4, 4)),
InceptBlock((fs, fs, fs), strides=(4, 4)),
InceptBlock((fs, fs, fs), strides=(4, 4)),
InceptBlock((fs, fs, fs), strides=(2, 2), do_5x5=False, do_3x3=False),
stax.Conv(n_summaries, (1, 1), strides=(1, 1), padding="SAME"),
stax.Flatten,
Reshape((n_summaries,))
]
new_model = stax.serial(*new_layers)
print_model(new_layers, input_shape, rng)
rng, key = jax.random.split(rng)
IMNN2 = imnn.IMNN(
n_s=n_s,
n_d=n_d,
n_params=n_params,
n_summaries=n_summaries,
input_shape=(1,) + shape + (1,),
θ_fid=np.squeeze(estimates),
model=new_model,
optimiser=optimiser,
key_or_state=key, # <---- initialize with key
simulator=lambda rng, θ: simulator(
rng, θ, simulator_args={
**simulator_args,
**{"squeeze": False}}))
print("now I'm training the IMNN")
rng, key = jax.random.split(rng)
IMNN2.fit(λ=10., ϵ=0.1, rng=key, print_rate=None,
min_iterations=500, patience=100, best=True)
# visualize training trajectory
IMNN2.plot(expected_detF=None);
```
# Task 6: Regularisation
_All credit for this jupyter notebook tutorial goes to the book "Hands-On Machine Learning with Scikit-Learn & TensorFlow" by Aurelien Geron. Modifications were made in preparation for the hands-on sessions._
# Setup
First, let's import a few common modules, ensure Matplotlib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
```
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Function to save a figure. This also decides that all output files
# should be stored in the subdirectory 'images/regularisation'.
PROJECT_ROOT_DIR = "."
EXERCISE = "regularisation"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", EXERCISE)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True):
    path = os.path.join(IMAGES_PATH, fig_id + ".png")
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format='png', dpi=300)
# Ignore useless warnings (see SciPy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", message="^internal gelsd")
```
# Polynomial regression
Even for non-linear data we can make use of linear regression – we simply need to add higher degrees of features to the set of features. Using those 'new' extended features, linear regression can still give us good results. Let's get started by generating some random data, with a maximum degree of 2.
```
m = 100 # number of datapoints
X = 6 * np.random.rand(m, 1) - 3
y = 0.75 * X**2 + X + 2 + np.random.randn(m, 1)
```
Let's have a quick look at how the data is distributed:
```
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_data_plot")
plt.show()
```
Now comes **your** first task.
We would like to create a feature set which also includes all features to the power of 2. Can you create this new feature set and then perform a linear regression as we already did in task 5? One helpful class in Scikit-Learn is the [PolynomialFeatures](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html) class. You will have to create an instance of that class and use it to transform the data into the new, expanded feature array. Then, you can use the [LinearRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) class to fit.
```
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly # To be implemented: polynomial (degree=2) version of our data
# Let's test if that actually worked. If we look at instance 0,
# do we have the value to the power of 2 in there?
print(X[0])
print(X_poly[0])
# Now perform the linear regression.
from sklearn.linear_model import LinearRegression
lin_reg # To be implemented: linear regression instance, can you fit the data X_poly?
# And print the fit results.
print("Fitted values: intercept = %s, coefficient = %s" % (lin_reg.intercept_[0], lin_reg.coef_[0][0]))
```
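In case you get stuck, here is one possible way to fill in the two `To be implemented` lines. This is a sketch that regenerates the same data as above, so it runs standalone:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Regenerate the same quadratic toy data as above.
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 0.75 * X**2 + X + 2 + np.random.randn(m, 1)

# Expand the feature matrix: each row becomes [x, x^2].
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)

# Fit an ordinary linear regression on the expanded features.
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
```

The `fit_transform` call both learns the polynomial expansion and returns the transformed matrix, so `X_poly[0]` now contains the original value and its square, and the fitted coefficients should land close to the true values 1 and 0.75.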
You will see if your implementation worked with the following piece of code. It will again print the dataset created above, and then also plot the fitted model.
```
# Create a set of 100 X values in the interval [-3, 3], which
# is the area we want to plot, and create a 100x1 array from them.
X_new=np.linspace(-3, 3, 100).reshape(100, 1)
# Now use the PolynomialFeatures class to create a feature matrix.
X_new_poly = poly_features.transform(X_new)
# Make predictions based on this new feature matrix.
y_new = lin_reg.predict(X_new_poly)
plt.plot(X, y, "b.")
plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_predictions_plot")
plt.show()
```
# Pre-step: learning curves
Before we can go into detail about the regularisation techniques, we need to come back to another basic performance measure: the learning curve. Learning curves plot the model's performance as a function of the training set size. They are particularly useful for checking the convergence behaviour of the model, but also for comparing the model's performance on the training and validation sets. Reminder: our typical cost function on the y axis is the mean squared error (MSE), or its square root (RMSE).
Can you try to implement the computation of the RMSE as a function of the training set size? The first step will be to split the dataset into training and validation sets; a good function to use for that is [train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html). Maybe a training:validation ratio of 80:20 would be a good start. Then, create a for loop which iterates over the training set size, and – for each size – make a fit with the model. Then, evaluate the model's performance on the training set (only looking at the instances it has already seen!) and the complete validation set. You can then store the error values of the predictions using the [mean_squared_error](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html) function. We will be using the [LinearRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) class for the defined function, so make sure to check out its `fit` and `predict` functions.
```
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
# Define a function which takes a training model, and
# our sets of X and y values.
def plot_learning_curves(model, X, y):
    # First, split into training and validation sets.
    # Implement instances such as X_train, X_val, y_train and y_val
    # Make sure to store the RMSE values for training
    # and validation – we want to plot them later.
    train_errors, val_errors = [], []
    # Then start the loop, where we want to go from 1 to the
    # total number of training instances, but for each iteration,
    # we want to evaluate the instances _up to_ that point!
    for m in range(1, len(X_train)):
        # implement here
        # Store the values into the lists.
        train_errors.append(mean_squared_error())  # to be implemented
        val_errors.append(mean_squared_error())  # to be implemented
    # And the actual plotting commands. Plot the RMSE for the
    # training set in red plus signs, and the RMSE for the
    # validation set in blue.
    plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train")
    plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val")
    plt.legend(loc="upper right", fontsize=14)
    plt.xlabel("Training set size", fontsize=14)
    plt.ylabel("RMSE", fontsize=14)
# Now create an instance of the LinearRegression class, and
# create a plot with the function we just defined.
lin_reg = LinearRegression()
plot_learning_curves(lin_reg, X, y)
plt.axis([0, 80, 0, 6])
save_fig("underfitting_learning_curves_plot")
plt.show()
```
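For reference, the loop body could be completed as follows. This is a sketch: the helper name `learning_curve_rmse` and its return values are our own additions (the notebook's version plots the curves directly inside the function):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def learning_curve_rmse(model, X, y):
    # 80:20 split into training and validation sets.
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, random_state=42)
    train_errors, val_errors = [], []
    # Grow the training set one instance at a time.
    for m in range(1, len(X_train)):
        model.fit(X_train[:m], y_train[:m])
        y_train_predict = model.predict(X_train[:m])  # only instances it has seen
        y_val_predict = model.predict(X_val)          # full validation set
        train_errors.append(mean_squared_error(y_train[:m], y_train_predict))
        val_errors.append(mean_squared_error(y_val, y_val_predict))
    return np.sqrt(train_errors), np.sqrt(val_errors)

# Same quadratic toy data as above, so this runs standalone.
np.random.seed(42)
X = 6 * np.random.rand(100, 1) - 3
y = 0.75 * X**2 + X + 2 + np.random.randn(100, 1)
train_rmse, val_rmse = learning_curve_rmse(LinearRegression(), X, y)
```

Note that the training error starts at (essentially) zero: a linear model fits a single training instance perfectly, and the error only grows as more instances are added.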
Just for the fun of it, the following code creates a polynomial feature set up to the power of 10, which we can then fit using the LinearRegression class. Is the fitting behaviour different?
```
from sklearn.pipeline import Pipeline
polynomial_regression = Pipeline([
("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
("lin_reg", LinearRegression()),
])
plot_learning_curves(polynomial_regression, X, y)
plt.axis([0, 80, 0, 8])
save_fig("learning_curves_plot")
plt.show()
```
# Regularized models
There are two basic types of regularisation, both of which are based on mathematical norms. The first one, Ridge regression, uses the Euclidean L2 norm. This enters the cost function for the training as an additional 'penalty' term, scaled with a parameter alpha. With small alphas, we essentially 'turn off' regularisation. Do you have an idea what happens with large values for alpha? How does the regularisation kick in?
```
# Let's start by generating some random data.
np.random.seed(42)
# Define the number of datapoints
m = 40
X = 3 * np.random.rand(m, 1)
y = 1 + 0.5 * X + np.random.randn(m, 1) / 2.5
# We will need these data points later for our predictions.
X_new = np.linspace(0, 3, 100).reshape(100, 1)
# And plot it.
plt.plot(X, y, "b.", linewidth=3)
plt.xlabel("$x_1$", fontsize=18)
plt.axis([0, 3, 0, 4])
save_fig("ridge_regression_plot_data")
plt.show()
```
This looks like some good test data to try Ridge regularisation with. Below you can find two functions, which still have some functionality missing. Can you complete them? The first one is meant to plot linear regressions for different values of alpha, the latter the same for polynomial regression.
```
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler
# Function to take multiple values of alpha and then
# create instances of Ridge models for each of them.
# - model_class is a flexible parameter and could take
# various models from sklearn.linear_model
# - alphas should be a tuple of alpha values
def plot_model_lin(model_class, alphas):
    # Let's combine the alpha values with different plotting styles.
    alpha_styles = zip(alphas, ("b-", "g--", "r:"))
    # Now we can start our loop over the zipped alphas. What
    # we will need here is to instantiate a model object of
    # the desired class (maybe also give it a fixed random_state
    # value), and then perform the fit on our X, y data. Then,
    # use the X_new to make a prediction, which we then want to
    # plot together with the data. Can you implement that?
    for alpha, style in alpha_styles:
        # Implement a model with parameter alpha and a fixed
        # random state. Fit the data X, y, then make a prediction
        # on the X_new data points created above (which will
        # give us something like y_new_regul).
        # Plot the results.
        plt.plot(X_new, y_new_regul, style, linewidth=2, label=r"$\alpha = {}$".format(alpha))
    # This will also plot the data, create a legend etc.
    plt.plot(X, y, "b.", linewidth=3)
    plt.legend(loc="upper left", fontsize=15)
    plt.xlabel("$x_1$", fontsize=18)
    plt.axis([0, 3, 0, 4])
# Function to take multiple values of alpha and then
# create instances of Ridge models for each of them. This
# will automatically expand the feature set with polynomial
# features up to the power of 10.
# - model_class is a flexible parameter and could take
# various models from sklearn.linear_model
# - alphas should be a tuple of alpha values
# - **model_kargs in case we want to forward arguments to the
# instance of the model class (ignore this for now).
def plot_model_poly(model_class, alphas, **model_kargs):
    # Let's combine the alpha values with different plotting styles.
    alpha_styles = zip(alphas, ("b-", "g--", "r:"))
    # Now we can start our loop over the zipped alphas.
    for alpha, style in alpha_styles:
        # Implement a model with parameter alpha, a fixed random
        # state, and also forward the **model_kargs arguments.
        # This is just to make our life easier. We make a few
        # transformations of our model, by adding polynomial
        # features up to the power of 10, implement a standard
        # scaler, and eventually get back the updated model.
        # All of these could be done in individual steps, but
        # the sklearn.pipeline.Pipeline class makes this a lot
        # easier.
        model = Pipeline([
            ("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
            ("std_scaler", StandardScaler()),
            ("regul_reg", model),
        ])
        # Implement: perform a fit on data X, y and make a
        # prediction on X_new. The result should be something like
        # y_new_regul.
        # Plot the results.
        plt.plot(X_new, y_new_regul, style, linewidth=2, label=r"$\alpha = {}$".format(alpha))
    # This will also plot the data, create a legend etc.
    plt.plot(X, y, "b.", linewidth=3)
    plt.legend(loc="upper left", fontsize=15)
    plt.xlabel("$x_1$", fontsize=18)
    plt.axis([0, 3, 0, 4])
# Now call the above functions and make two comparison plots for
# exemplary values of alpha.
plt.figure(figsize=(8,4))
plt.subplot(121)
plot_model_lin(Ridge, alphas=(0, 10, 100))
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(122)
plot_model_poly(Ridge, alphas=(0, 10**-5, 1))
save_fig("ridge_regression_plot")
plt.show()
```
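If the scaffolded loop bodies gave you trouble, the missing model construction might look like this. A standalone sketch: the helper name `regularised_predictions` is ours, and the fallback to `LinearRegression` avoids the degenerate alpha = 0 case of the regularised solvers:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# Regenerate the toy data from the cell above so this runs standalone.
np.random.seed(42)
m = 40
X = 3 * np.random.rand(m, 1)
y = 1 + 0.5 * X + np.random.randn(m, 1) / 2.5
X_new = np.linspace(0, 3, 100).reshape(100, 1)

def regularised_predictions(model_class, alpha, **model_kargs):
    # Fall back to plain linear regression for alpha = 0, since the
    # regularised solvers can misbehave in that degenerate case.
    if alpha > 0:
        model = model_class(alpha=alpha, random_state=42, **model_kargs)
    else:
        model = LinearRegression()
    model.fit(X, y)
    return model.predict(X_new)  # plays the role of y_new_regul in the scaffold

y_new_regul = regularised_predictions(Ridge, 10)
```

Inside `plot_model_lin` / `plot_model_poly` these three steps (instantiate, fit, predict) replace the `Implement ...` comments.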
Looks good! To get a better idea of the predicted values, let's look at one example point at 1.5. In principle, the Ridge model can be implemented with a closed-form solution (remember the lecture), or a gradient descent method. Can you compare these two approaches? The two classes for this are [linear_model.Ridge](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) and [linear_model.SGDRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDRegressor.html), which we have both used before.
```
from sklearn.linear_model import Ridge
from sklearn.linear_model import SGDRegressor
# Create an instance of Ridge regression following the
# closed-form solution.
ridge_reg # To be implemented
print("Closed form predicts: %s" % ridge_reg.predict([[1.5]])[0][0])
# Create an SGD regressor and implement Ridge regularisation.
sgd_reg # To be implemented
print("SGD regressor predicts: %s" % sgd_reg.predict([[1.5]])[0])
```
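One possible way to fill in the two regressors, as a standalone sketch with the same toy data. `solver="cholesky"` requests the closed-form solution of the regularised normal equations, while an L2 `penalty` makes `SGDRegressor` behave like Ridge:

```python
import numpy as np
from sklearn.linear_model import Ridge, SGDRegressor

# Same toy data as above so this cell runs standalone.
np.random.seed(42)
m = 40
X = 3 * np.random.rand(m, 1)
y = 1 + 0.5 * X + np.random.randn(m, 1) / 2.5

# Closed form: "cholesky" solves the regularised normal equations directly.
ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42)
ridge_reg.fit(X, y)

# Gradient descent: an L2 penalty turns plain SGD into Ridge regression.
sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42)
sgd_reg.fit(X, y.ravel())
```

Both should predict roughly the same value at 1.5; small differences come from SGD's stochastic updates and its default (weak) penalty strength.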
A second regularisation technique is the Lasso regression, which uses the L1 norm (i.e. "least absolute deviations" instead of "least squares"). In case you're not familiar with these two norms, please take a moment to look them up, as it is important to understand their impact on the cost function. Again, Lasso regression is implemented as a 'penalty' term with a parameter alpha to control the impact. What would be the typical behaviour of a regressor with Lasso regression implemented? Do the weights tend to be small/large? Are they equally distributed or far apart?
The following piece of code uses our previously defined functions for linear and polynomial regression, but plots Lasso regression for different values of alpha instead. Play around with those values to see what happens. We also finally get to use what we implemented earlier: we can forward additional arguments to the model. Here the tolerance is set to one (`tol=1`). Can you check what it does? Class reference: [linear_model.Lasso](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html).
**Note**: The Lasso algorithm does not converge well with alpha set to zero. Therefore we only test non-zero values for alpha here.
```
from sklearn.linear_model import Lasso
plt.figure(figsize=(8,4))
plt.subplot(121)
plot_model_lin(Lasso, alphas=(0.1, 1))
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(122)
plot_model_poly(Lasso, alphas=(0.1, 1), tol=1)
save_fig("lasso_regression_plot")
plt.show()
```
Now, as a last step before jumping into the next topic, it would be nice to compare the above predictions for Ridge regression with those of Lasso regression. A third model is [linear_model.ElasticNet](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html) which essentially implements the best of both worlds, i.e. it is a mixed version of L1 and L2 regularisation. Can you try to implement Lasso and ElasticNet and make a prediction for the value 1.5?
```
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
# Make a prediction with Lasso regression.
lasso_reg # To be implemented
print("Lasso predicts: %s" % lasso_reg.predict([[1.5]])[0])
# Make a prediction with ElasticNet.
elastic_net # To be implemented
print("ElasticNet predicts: %s" % elastic_net.predict([[1.5]])[0])
```
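A possible completion, again as a sketch with the same toy data. The `alpha` and `l1_ratio` values are our choices; `l1_ratio` sets the mix between the L1 and L2 penalties:

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

# Same toy data as above so this cell runs standalone.
np.random.seed(42)
m = 40
X = 3 * np.random.rand(m, 1)
y = 1 + 0.5 * X + np.random.randn(m, 1) / 2.5

# Lasso: L1 penalty, tends to zero out unimportant weights.
lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(X, y.ravel())

# ElasticNet: l1_ratio blends the penalties (0 is pure Ridge, 1 is pure Lasso).
elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42)
elastic_net.fit(X, y.ravel())
```

With such similar penalty settings the two predictions at 1.5 should come out close to each other and to the Ridge prediction above.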
# Early stopping
Another important, but completely different aspect of regularisation is 'early stopping'. The idea is to stop the training as soon as the cost function of the validation set reaches its minimum. This is basically the turning point, where the validation cost function will start to increase again and the model starts to overfit the training data. This first bit of code only visualises the idea of early stopping, but doesn't actually implement it. We will do that later. But can you add the missing parts to the code below? You might have to look into the definitions of [train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) and [linear_model.SGDRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDRegressor.html) again for this. For the latter, please look up the `penalty` argument. What does it do? In our case, we probably want to set it to `None`. Other important arguments to look up and include are `max_iter` (how many do we want?), `eta0`, `warm_start` (do we need this?), and `learning_rate` (we want the learning rate to be constant here!).
```
# Generate random data, both for training and validation.
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1)
# As before, we will need to split into training and validation
# set. Can you implement that? This time, maybe a ratio of 50:50
# would be desirable. After splitting we should have arrays
# such as X_train, X_val, y_train and y_val.
# Create a small pipeline which adds polynomial features to our
# feature set and applies a standard scaler (which removes the
# mean and scales to unit variance).
poly_scaler = Pipeline([
("poly_features", PolynomialFeatures(degree=90, include_bias=False)),
("std_scaler", StandardScaler()),
])
# Apply the poly_scaler pipeline to our training and validation sets.
X_train_poly_scaled = poly_scaler.fit_transform(X_train)
X_val_poly_scaled = poly_scaler.transform(X_val)
# Instantiate an SGDRegressor. Make sure to set the iteration
# to 1, because we want to control the number of epochs by hand
# (see below).
sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True,
penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) # To be implemented
# Create arrays to store the mean squared errors on training
# and validation data.
train_errors, val_errors = [], []
# Now let's loop over 500 epochs, fit the training data once
# per epoch, and make predictions on the training and the
# validation dataset. Remember: what is one epoch for the
# stochastic GD method? How many instances does the model see
# per epoch?
for epoch in range(500):
    # Implement fit and predictions here.
    # Store the mean squared errors in the arrays.
    train_errors.append(mean_squared_error())  # To be implemented
    val_errors.append(mean_squared_error())  # To be implemented
# Now let's get some info about the 'best' epoch. We want to
# know the epoch which had the smallest mean squared error on
# the validation set. numpy has a function for that ... Then,
# with the best epoch extracted, we also want to know which
# value was the best.
best_epoch = np.argmin(val_errors)
best_val_rmse = np.sqrt(val_errors[best_epoch])
# Now this is just plotting. Make an annotation where the best
# model was actually located, and point it out in the plot.
plt.annotate('Best model',
xy=(best_epoch, best_val_rmse),
xytext=(best_epoch, best_val_rmse + 1),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize=16,
)
# And do the plotting.
best_val_rmse -= 0.03 # just to make the graph look better
plt.plot([0, 500], [best_val_rmse, best_val_rmse], "k:", linewidth=2)
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set")
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set")
plt.legend(loc="upper right", fontsize=14)
plt.xlabel("Epoch", fontsize=14)
plt.ylabel("RMSE", fontsize=14)
save_fig("early_stopping_plot")
plt.show()
```
Now of course it would be interesting to actually make the model _stop_ after the best epoch. Scikit-Learn provides functionality for that, but it's not really difficult to implement this ourselves. Let's try this in the following piece of code. To be able to save the model state after each epoch, we need to import the [base.clone](https://scikit-learn.org/stable/modules/generated/sklearn.base.clone.html) method, which can create copies of our model instance. Can you implement the rest yourself?
```
from sklearn.base import clone
# Instantiate an SGDRegressor. Make sure to set the iteration
# to 1, because we want to control the number of epochs by hand
# (see below). As above, also check that you set values for
# 'warm_start', 'penalty', 'learning_rate' and 'eta'.
sgd_reg #to be implemented
# As before, we will need to split into training and validation
# set. Can you implement that? This time, maybe a ratio of 50:50
# would be desirable.
# Create a small pipeline which adds polynomial features to our
# feature set and applies a standard scaler (which removes the
# mean and scales to unit variance).
poly_scaler = Pipeline([
("poly_features", PolynomialFeatures(degree=90, include_bias=False)),
("std_scaler", StandardScaler()),
])
# Apply the poly_scaler pipeline to our training and validation sets.
X_train_poly_scaled = poly_scaler.fit_transform(X_train)
X_val_poly_scaled = poly_scaler.transform(X_val)
# Our reference value for the validation error, which we will update
# for every epoch, starting with 'inf'.
minimum_val_error = float("inf")
# Store our best epochs and model instances in these variables.
best_epoch = None
best_model = None
# Now perform the loop. Let's go through 1000 epochs. Make sure
# that the regressor continues where it left off in the epoch
# before (check the class documentation). Then, perform a fit on
# the poly-scaled version of our training data. Afterwards, make
# a prediction on our poly-scaled validation data, and store the
# mean_squared_error value of that. Is it smaller than the
# minimum value observed so far? What should we do in that case?
for epoch in range(1000):
    # Implement fit, prediction and calculation of error.
    # Take appropriate actions if the observed error is smaller
    # than the one previously observed.
    if val_error < minimum_val_error:
        # Implement code here
        pass
```
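For comparison, here is one self-contained way the early-stopping loop can be completed. Two deviations from the scaffold, both ours: we pass `tol=None` instead of `tol=-np.infty` for compatibility with recent scikit-learn/NumPy, and we snapshot the best model with `copy.deepcopy` rather than `base.clone`, because `clone` copies only the hyperparameters and discards the fitted weights:

```python
import warnings
from copy import deepcopy
import numpy as np
from sklearn.exceptions import ConvergenceWarning
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

warnings.filterwarnings("ignore", category=ConvergenceWarning)  # max_iter=1 warns on purpose

# Same quadratic toy data as above, split 50:50.
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = (2 + X + 0.5 * X**2 + np.random.randn(m, 1)).ravel()
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.5, random_state=10)

poly_scaler = Pipeline([
    ("poly_features", PolynomialFeatures(degree=90, include_bias=False)),
    ("std_scaler", StandardScaler()),
])
X_train_poly_scaled = poly_scaler.fit_transform(X_train)
X_val_poly_scaled = poly_scaler.transform(X_val)

sgd_reg = SGDRegressor(max_iter=1, tol=None, warm_start=True, penalty=None,
                       learning_rate="constant", eta0=0.0005, random_state=42)

minimum_val_error = float("inf")
best_epoch, best_model = None, None
for epoch in range(1000):
    sgd_reg.fit(X_train_poly_scaled, y_train)  # warm_start=True: continues training
    y_val_predict = sgd_reg.predict(X_val_poly_scaled)
    val_error = mean_squared_error(y_val, y_val_predict)
    if val_error < minimum_val_error:
        minimum_val_error = val_error
        best_epoch = epoch
        best_model = deepcopy(sgd_reg)  # keep the fitted weights of the best model
```

If you follow the notebook and use `clone(sgd_reg)` instead, `best_model` records the winning hyperparameters but must be refit before it can predict.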
Create the graph:
```
sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True,
penalty=None, learning_rate="constant", eta0=0.0005, random_state=42)
n_epochs = 500
train_errors, val_errors = [], []
for epoch in range(n_epochs):
    sgd_reg.fit(X_train_poly_scaled, y_train)
    y_train_predict = sgd_reg.predict(X_train_poly_scaled)
    y_val_predict = sgd_reg.predict(X_val_poly_scaled)
    train_errors.append(mean_squared_error(y_train, y_train_predict))
    val_errors.append(mean_squared_error(y_val, y_val_predict))
best_epoch = np.argmin(val_errors)
best_val_rmse = np.sqrt(val_errors[best_epoch])
plt.annotate('Best model',
xy=(best_epoch, best_val_rmse),
xytext=(best_epoch, best_val_rmse + 1),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize=16,
)
best_val_rmse -= 0.03 # just to make the graph look better
plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2)
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set")
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set")
plt.legend(loc="upper right", fontsize=14)
plt.xlabel("Epoch", fontsize=14)
plt.ylabel("RMSE", fontsize=14)
save_fig("early_stopping_plot")
plt.show()
```
Did it work? What is the current best_epoch and best_model?
```
print("Best epoch: %d" %best_epoch)
print("Best model: %s" %best_model)
```
The following (large) bit of code is directly taken from Geron's book. You don't need to understand the details of the implementation, but rather take the plots as a nice inspiration. They nicely show the different characteristics of Ridge and Lasso regression. Explanations are found below the plots.
```
t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5
# ignoring bias term
t1s = np.linspace(t1a, t1b, 500)
t2s = np.linspace(t2a, t2b, 500)
t1, t2 = np.meshgrid(t1s, t2s)
T = np.c_[t1.ravel(), t2.ravel()]
Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]])
yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:]
J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape)
N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape)
N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape)
t_min_idx = np.unravel_index(np.argmin(J), J.shape)
t1_min, t2_min = t1[t_min_idx], t2[t_min_idx]
t_init = np.array([[0.25], [-1]])
def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50):
path = [theta]
for iteration in range(n_iterations):
gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta
theta = theta - eta * gradients
path.append(theta)
return np.array(path)
plt.figure(figsize=(12, 8))
for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")):
JR = J + l1 * N1 + l2 * N2**2
tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape)
t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx]
levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J)
levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR)
levelsN=np.linspace(0, np.max(N), 10)
path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0)
path_JR = bgd_path(t_init, Xr, yr, l1, l2)
path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0)
plt.subplot(221 + i * 2)
plt.grid(True)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9)
plt.contour(t1, t2, N, levels=levelsN)
plt.plot(path_J[:, 0], path_J[:, 1], "w-o")
plt.plot(path_N[:, 0], path_N[:, 1], "y-^")
plt.plot(t1_min, t2_min, "rs")
plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16)
plt.axis([t1a, t1b, t2a, t2b])
if i == 1:
plt.xlabel(r"$\theta_1$", fontsize=20)
plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0)
plt.subplot(222 + i * 2)
plt.grid(True)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9)
plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o")
plt.plot(t1r_min, t2r_min, "rs")
plt.title(title, fontsize=16)
plt.axis([t1a, t1b, t2a, t2b])
if i == 1:
plt.xlabel(r"$\theta_1$", fontsize=20)
save_fig("lasso_vs_ridge_plot")
plt.show()
```
Let's start with the top left plot; on the x and y axes are two model parameters. The background ellipses (contours) show the behaviour of the MSE cost function _without_ regularisation. The foreground diamond-shaped contours represent the penalty that the L1 norm applies. The white circles represent the path that an _unregularised_ regressor would take to find the minimum (i.e. with `alpha = 0`). The yellow triangles show the path that a _purely_ penalty-based regressor would take (i.e. with `alpha -> infty`). What's interesting to see is that the regressor first 'walks back' to `theta_1 = 0`, and then walks along the y axis to reach the minimum of the diamond-shaped contours.
The ellipse contours in the top right plot show the cost function with L1 regularisation (`alpha = 0.5`). Compared to the left-hand plot, you will notice how the global minimum is shifted to `theta_2 = 0` and overall sits closer to smaller values of `theta`. The white circles are the path the regressor would take with this regularisation term.
The two plots on the bottom show the same for Ridge regression, i.e. using L2 regularisation. First of all, notice how the contours for `alpha -> infty` on the bottom left are now circular. This also changes the path the regressor would take for a _purely_ penalty-based model: the yellow triangles show a path essentially orthogonal to the circular contour lines.
The plot on the bottom right again shows the regression behaviour, this time for L2 regularisation with `alpha = 0.5`. Notice how the global minimum is now _not_ at `theta_2 = 0`, as it is for L1 regularisation. But still, overall it is closer to smaller values of `theta` than in the unregularised case.
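The practical upshot of this geometry is the well-known sparsity effect: with comparable regularisation strength, Lasso drives coefficients of irrelevant features exactly to zero, while Ridge only shrinks them. A small self-contained check on synthetic data (not the setup used for the plots above):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.RandomState(42)
X = rng.randn(100, 5)
# Only the first two features carry signal; the last three are pure noise
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.randn(100)

lasso = Lasso(alpha=0.5).fit(X, y)
ridge = Ridge(alpha=0.5).fit(X, y)

print("Lasso:", np.round(lasso.coef_, 3))   # noise features driven exactly to 0
print("Ridge:", np.round(ridge.coef_, 3))   # noise features shrunk, but non-zero
```

This is exactly the "walk to the corner of the diamond" behaviour in the L1 plots: corners lie on the axes, where some parameters are exactly zero.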
```
#hide
from perutils.nbutils import simple_export_all_nb,simple_export_one_nb
```
# Personal Utils (perutils)
> Notebook -> module conversion with #export flags and nothing else
**Purpose:** The main use of this module is for ad hoc projects where a full-blown nbdev project is not necessary
**Example Scenario**
Imagine you are working on a Kaggle competition. You may not want the full nbdev setup. For example, you don't need documentation separate from your notebooks and you're never going to release the code to pip or conda. This module simplifies the process so you just run one command and it creates .py files from your notebooks. Maybe you are doing an ensemble and want to export the dataloaders from one notebook so you can import them into separate notebooks for your separate models, or maybe you have a different use case entirely.
That's what this module does. It's just the #export flags from nbdev and exporting to a module folder with no setup (ie settings.ini, \_\_nbdev.py, etc.) for fast, minimal use
## Install
`pip install perutils`
## How to use
```
#hide
from nbdev.showdoc import *
```
### Shelve Experiment Tracking
This module is designed to assist me in tracking experiments when I am working on data science and machine learning tasks, though it is flexible enough to track most things. It allows for easy tracking and plotting of many different types of information and datatypes without requiring a consistent schema, so you can add new things without adjusting your dataframe or table.
General access to a shelve db can be achieved in one of two ways, and it behaves similarly to a dictionary.
```python
with shelve.open('test.shelve') as d:
print(d['exp'])
d = shelve.open('test.shelve')
print(d['exp'])
d.close()
```
This module assumes a certain structure. If we assume: `d = shelve.open('test.shelve')`
```python
assert type(d[key]) == list
assert type(d[key][0]) == dict
```
Additionally:
+ keys in an experiment (`d['exp'][0][key]`) must be strings, but the values can be anything that can be pickled
+ Plotting functions assume the value you want to plot (ie `d['exp'][0]['batch_loss']`) is list-like and the name (for the legend) is a string
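The assumed layout can be illustrated with nothing but the standard library. Note that `append`, `delete`, and the other helpers used elsewhere in this README come from this module; the sketch here only demonstrates the data structure they operate on, using a throwaway temp file:

```python
import os
import shelve
import tempfile

tmpdir = tempfile.mkdtemp()
filename = os.path.join(tmpdir, 'test.shelve')

# One "experiment": a plain dict whose keys are strings
experiment = {'lr': 1e-3, 'batch_loss': [0.9, 0.7, 0.5], 'notes': 'baseline'}

with shelve.open(filename) as d:
    # every top-level key maps to a *list* of experiment dicts
    d['exp'] = d.get('exp', []) + [experiment]

with shelve.open(filename) as d:
    assert type(d['exp']) == list
    assert type(d['exp'][0]) == dict
    print(d['exp'][0]['batch_loss'])  # → [0.9, 0.7, 0.5]
```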
#### Create and Add Data
The process is:
1. Create a dict with all the information
2. Append dict to database
This will create `filename` if it does not exist
```python
append(filename,new_dict)
```
>note: You can write individual elements at a time as well just like you would in a normal dictionary if that is preferred.
#### Delete
`-1` can be replaced with any index location.
```python
delete(filename,-1)
```
#### What keys are available?
```python
print_keys(filename)
```
#### What were the results?
```python
el,ea,bl = get_stats(filename,-1,['epoch_loss','epoch_accuracy','batch_loss'],display=True)
```
#### Find the experiment with the best results.
```python
print_best(filename,'epoch_loss',best='min')
print_best(filename,'epoch_accuracy',best='max')
```
#### Graph some stats and compare results
```python
graph_stats(filename,['batch_loss','epoch_accuracy'],idxs=[-1,-2,-3])
```
### nb -> py
#### Full Directory Conversion
In python run the `simple_export_all_nb` function. This will:
+ Look through all your notebooks in the directory (nbs_path) for any code cells starting with `#export` or `# export`
+ If any export code cells exist, it will take all the code and put it in a .py file located in `lib_path`
+ The .py module will be named the same as the notebook. There is no option to specify a separate .py file name from your notebook name
**Any .py files in your lib_path will be removed and replaced. Do not set lib_path to a folder where you are storing other .py files. I recommend making lib_path its own folder used only for these auto-generated modules.**
```python
simple_export_all_nb(nbs_path=Path('.'), lib_path=Path('test_example'))
```
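Under the hood, the idea is simple: a notebook is just JSON, so the conversion amounts to walking the code cells and keeping those flagged with `#export`. The following is a rough stand-alone sketch of that core idea, not the module's actual implementation:

```python
def extract_exports(nb_json):
    """Collect source from code cells whose first line is an #export flag."""
    chunks = []
    for cell in nb_json.get('cells', []):
        if cell.get('cell_type') != 'code':
            continue
        src = ''.join(cell.get('source', []))
        first_line = src.lstrip().splitlines()[0] if src.strip() else ''
        if first_line.strip() in ('#export', '# export'):
            chunks.append(src)
    return '\n'.join(chunks)

# A real .ipynb would be read with json.load; a tiny in-memory dict stands in here
nb = {'cells': [
    {'cell_type': 'markdown', 'source': ['# notes\n']},
    {'cell_type': 'code', 'source': ['#export\n', 'def f(x):\n', '    return x + 1\n']},
    {'cell_type': 'code', 'source': ['print("scratch work, not exported")\n']},
]}
print(extract_exports(nb))
```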
#### Single Notebook Conversion
In python run the `simple_export_one_nb` function. This will:
+ Look through the specified notebook (nb_path) for any code cells starting with `#export` or `# export`
+ If any export code cells exist, it will take all the code and put it in a .py file located in `lib_path`
+ The .py module will be named the same as the notebook. There is no option to specify a separate .py file name from your notebook name
```python
simple_export_one_nb(nb_path=Path('./00_core.ipynb'), lib_path=Path('test_example'))
```
### py -> nb
#### Full Directory Conversion
In python run the `py_to_nb` function. This will:
+ Look through all your py files in the `py_path`
+ Find the simple breaking points in each file (ie when new functions or classes are defined)
+ Create jupyter notebooks in `nb_path` and put code in seperate cells (with `#export` flag)
**This will overwrite notebooks in the `nb_path` if they have the same name other than extension as a python module**
```python
py_to_nb(py_path=Path('./src/'), nb_path=Path('.'))
```
### kaggle dataset
#### Uploading Libraries
```python
if __name__ == '__main__':
libraries = ['huggingface','timm','torch','torchvision','opencv-python','albumentations','fastcore']
for library in libraries:
print(f'starting {library}')
dataset_path = Path(library)
print("downloading dataset...")
download_dataset(dataset_path,f'isaacflath/library{library}',f'library{library}',content=False,unzip=True)
print("adding library...")
add_library_to_dataset(library,dataset_path)
print("updating dataset...")
update_datset(dataset_path,"UpdateLibrary")
print('+'*30)
```
#### Custom dataset (ie model weights)
```python
dataset_path = Path(library)
dataset_name = 'testdataset'
download_dataset(dataset_path,f'isaacflath/{dataset_name}',f'{dataset_name}',content=False,unzip=True)
# add files (ie model weights) to folder
update_datset(dataset_path,"UpdateLibrary")
```
```
import pandas as pd
```
# Import Data
```
schiz_pre = pd.read_csv('data/schizophrenia_pre_features_tfidf_256.csv')
schiz_post = pd.read_csv('data/schizophrenia_post_features_tfidf_256.csv')
```
# High-Level Look at Datasets
## Preface
These datasets contain a very large number of features. For this project we have selected _one_ feature of interest: `substance_use_total`. In order to accomplish our EDA task for Sunday 21st November, we will have to filter our dataset - and associated EDA tasks - to focus exclusively on this feature.
```
schiz_pre = schiz_pre.loc[:, ['subreddit', 'author', 'date', 'post', 'substance_use_total']]
schiz_post = schiz_post.loc[:, ['subreddit', 'author', 'date', 'post', 'substance_use_total']]
```
## 'Pre' Dataset
```
schiz_pre.head(3)
schiz_pre.tail(3)
print(f'Total number of records in this dataset: {len(schiz_pre)}')
schiz_pre.describe()
```
So it doesn't look like we're missing any values in the `schizophrenia_pre_features_tfidf_256.csv` dataset.
Let's take a quick look at the distribution of number of mentions of substance use per post.
```
schiz_pre.plot.hist(bins=25)
```
## 'Post' Dataset
```
schiz_post.head(3)
schiz_post.tail(3)
print(f'Total number of records in this dataset: {len(schiz_post)}')
schiz_post.describe()
```
So it doesn't look like we're missing any values in the `schizophrenia_post_features_tfidf_256.csv` dataset.
Let's take a quick look at the distribution of number of mentions of substance use per post.
```
schiz_post.plot.hist(bins=25)
```
## High-Level Comparison of Pre-Post Datasets
```
pd.concat([schiz_pre.describe(), schiz_post.describe()], axis=1)
```
As you can see from the table above, there are far fewer records in the 'post' dataset. This is likely because the 'post' dataset covers a far shorter period of time. As such, a simple comparison of the means of these two datasets should be taken with a grain of salt. A Z-test comparing means may be a good place to start with this dataset to understand whether there is any significant difference between the two.
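As a starting point, a two-sample z-test of means is simple to compute by hand. The sketch below uses synthetic stand-ins for the two `substance_use_total` columns, since the CSVs themselves aren't available here:

```python
import math
import numpy as np

def two_sample_z(a, b):
    """Two-sided two-sample z-test for a difference in means (large samples)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    se = math.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    z = (a.mean() - b.mean()) / se
    p = math.erfc(abs(z) / math.sqrt(2))   # 2 * (1 - Phi(|z|))
    return z, p

rng = np.random.RandomState(0)
pre = rng.poisson(2.0, size=1500)    # stand-in for schiz_pre['substance_use_total']
post = rng.poisson(2.2, size=400)    # stand-in for schiz_post['substance_use_total']
z, p = two_sample_z(pre, post)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With the real columns, `pre` and `post` would simply be the two `substance_use_total` series; the unequal sample sizes are handled by the standard-error term.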
```
print(f"Number of unique authors (posters) in 'pre' dataset: {len(schiz_pre.author.unique())}")
print(f"Number of unique authors (posters) in 'post' dataset: {len(schiz_post.author.unique())}")
```
So it looks as though **each** record/observation in each dataset is associated with a unique reddit user - which is great! That way we know we don't have - for example - an individual redditor who is contributing a disproportionate amount to our dataset. *Though* - it does then raise the question of how and why the assemblers of this dataset achieved this author-observation uniqueness.
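That uniqueness claim is easy to verify directly; a quick sketch with a toy frame standing in for the real datasets:

```python
import pandas as pd

# Toy frame standing in for schiz_pre / schiz_post
df_check = pd.DataFrame({'author': ['u1', 'u2', 'u3'],
                         'substance_use_total': [0, 2, 1]})

counts = df_check['author'].value_counts()
print(counts.max() == 1)             # no author contributes more than one post
print(df_check['author'].is_unique)  # equivalent one-liner
```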
<a href="https://colab.research.google.com/github/BrianThomasRoss/DS-Unit-2-Linear-Models/blob/master/module3-ridge-regression/Brian_Ross_LS_DS_213_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 1, Module 3*
---
# Ridge Regression
## Assignment
We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices.
But not just for condos in Tribeca...
- [ ] Use a subset of the data where `BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'` and the sale price was more than 100 thousand and less than 2 million.
- [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.
- [ ] Do one-hot encoding of categorical features.
- [ ] Do feature selection with `SelectKBest`.
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html). Use the scaler's `fit_transform` method with the train set. Use the scaler's `transform` method with the test set.
- [ ] Fit a ridge regression model with multiple features.
- [ ] Get mean absolute error for the test set.
- [ ] As always, commit your notebook to your fork of the GitHub repo.
The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.
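One item on the checklist above deserves emphasis: the scaler is fit on the training set only and then reused on the test set, so test-set statistics never leak into training. A minimal sketch of that pattern with synthetic data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(42)
X_tr = rng.normal(loc=10, scale=3, size=(100, 2))
X_te = rng.normal(loc=10, scale=3, size=(40, 2))

scaler = StandardScaler()
X_tr_scaled = scaler.fit_transform(X_tr)  # statistics learned from train only
X_te_scaled = scaler.transform(X_te)      # same statistics reused on test

print(X_tr_scaled.mean(axis=0).round(6))  # [0, 0] by construction
print(X_te_scaled.mean(axis=0).round(2))  # near 0, but not exactly
```

The test set comes out only approximately standardized, which is expected: the point is to simulate applying the model to data the scaler has never seen.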
## Stretch Goals
Don't worry, you aren't expected to do all these stretch goals! These are just ideas to consider and choose from.
- [ ] Add your own stretch goal(s) !
- [ ] Instead of `Ridge`, try `LinearRegression`. Depending on how many features you select, your errors will probably blow up! 💥
- [ ] Instead of `Ridge`, try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html).
- [ ] Learn more about feature selection:
- ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance)
- [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html)
- [mlxtend](http://rasbt.github.io/mlxtend/) library
- scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection)
- [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson.
- [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients.
- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way.
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
warnings.filterwarnings(action='ignore', category=FutureWarning, module='sklearn')
import pandas as pd
import pandas_profiling
# Read New York City property sales data
df = pd.read_csv(DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv')
# Change column names: replace spaces with underscores
df.columns = [col.replace(' ', '_') for col in df]
# SALE_PRICE was read as strings.
# Remove symbols, convert to integer
df['SALE_PRICE'] = (
df['SALE_PRICE']
.str.replace('$','')
.str.replace('-','')
.str.replace(',','')
.astype(int)
)
# BOROUGH is a numeric column, but arguably should be a categorical feature,
# so convert it from a number to a string
df['BOROUGH'] = df['BOROUGH'].astype(str)
# Reduce cardinality for NEIGHBORHOOD feature
# Get a list of the top 10 neighborhoods
top10 = df['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
df.loc[~df['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
import numpy as np
import plotly.express as px
import category_encoders as ce
from math import factorial
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.preprocessing import StandardScaler, scale
```
#### Use a subset of the data where BUILDING_CLASS_CATEGORY == '01 ONE FAMILY DWELLINGS' and the sale price was more than 100 thousand and less than 2 million
```
building_class_filter = df['BUILDING_CLASS_CATEGORY'] == "01 ONE FAMILY DWELLINGS"
sale_price_filter = (df['SALE_PRICE'] > 100000) & (df['SALE_PRICE'] < 2000000)
df = df.loc[building_class_filter]
df = df.loc[sale_price_filter]
```
#### Cleaning
```
df.drop(columns=['BUILDING_CLASS_CATEGORY', 'APARTMENT_NUMBER'], inplace=True) # Dropping zero-variance features
df['LAND_SQUARE_FEET'] = (df['LAND_SQUARE_FEET'].str.replace(",", "")).astype(int)
df['EASE-MENT'] = df['EASE-MENT'].fillna(0)
```
#### Train / Test Split
```
df['SALE_DATE'] = pd.to_datetime(df['SALE_DATE'], infer_datetime_format=True)
cutoff = pd.to_datetime("2019-04-01")
train = df[df.SALE_DATE < cutoff]
test = df[df.SALE_DATE >= cutoff]
assert len(df) == len(train) + len(test)
```
#### Do one-hot encoding of categorical features.
```
categorical_cols = train.select_dtypes(exclude='number').columns
numerical_cols = train.select_dtypes(include='number').columns
for col in categorical_cols:
print(train[col].value_counts())
print("\n" + "~~"*20 + "\n")
target = 'SALE_PRICE'
high_cardinality_cols = ['ADDRESS',
'SALE_DATE']
features = train.columns.drop([target] + high_cardinality_cols)
# Train
X_train = train[features]
y_train = train[target]
# Test
X_test = test[features]
y_test = test[target]
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)
X_test_encoded.head(1)
```
#### Feature Selection with SelectKBest
```
warnings.filterwarnings("ignore", category=RuntimeWarning)
k_list = []
mae_list = []
for k in range(1, len(X_train_encoded.columns)+1):
if k == 1:
print(f"With {k} feature:")
else:
print(f"With {k} features:")
selector = SelectKBest(score_func=f_regression, k=k)
X_train_selected = selector.fit_transform(X_train_encoded, y_train)
X_test_selected = selector.transform(X_test_encoded)
model = LinearRegression()
model.fit(X_train_selected, y_train)
y_pred = model.predict(X_test_selected)
mae = mean_absolute_error(y_test, y_pred)
k_list.append(k)
mae_list.append(mae)
print(f"Test MAE: ${mae:,.0f} \n")
### Looks like 13 features is within reasonable range of lowest test score
### with the least amount of features so let's go with that
selector = SelectKBest(score_func=f_regression, k=13)
X_train_selected = selector.fit_transform(X_train_encoded, y_train)
X_test_selected = selector.transform(X_test_encoded)
%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(k_list, mae_list);
```
#### Feature Scaling & Ridge Regression
```
from IPython.display import display, HTML
from ipywidgets import interact
def analyze_optimum_alpha(train_df, test_df, train_targ, test_targ):
for alpha in [10**1, 10**2, 10**3, 10**4, 10**5, 10**6, 10**7, 10**8]:
# Feature Scaling
og_train_df = train_df
scaler = StandardScaler()
train_df_scaled = scaler.fit_transform(train_df)
test_df_scaled = scaler.transform(test_df)
# Fit model
display(HTML(f'Ridge Regression, with alpha={alpha}'))
model = Ridge(alpha=alpha)
model.fit(train_df_scaled, train_targ)
# Train MAE
y_pred = model.predict(train_df_scaled)
mae = mean_absolute_error(train_targ, y_pred)
display(HTML(f'Train MAE: {mae:,.0f}'))
# Test MAE
y_pred = model.predict(test_df_scaled)
mae = mean_absolute_error(test_targ, y_pred)
display(HTML(f'Test MAE: {mae:,.0f}'))
# Plot coeffs
coefficients = pd.Series(model.coef_, og_train_df.columns)
plt.figure(figsize=(4,4))
coefficients.sort_values().plot.barh(color='grey')
plt.xlim(-100000, 100000)
plt.show()
analyze_optimum_alpha(X_train_encoded, X_test_encoded, y_train, y_test)
```
#### The other technique
```
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_selected, y_train)
X_test_scaled = scaler.transform(X_test_selected)
model = Ridge(100)
model.fit(X_train_scaled, y_train)
y_pred = model.predict(X_test_scaled)
mae = mean_absolute_error(y_test, y_pred)
mae
print(f"Test MAE: ${mae:,.0f}")
new_features = X_train_encoded.columns[selector.get_support()]
display(HTML(f"Test Error: ${mae:,.0f}"))
coefficients = pd.Series(model.coef_, new_features)
plt.figure(figsize=(4,4))
coefficients.sort_values().plot.barh(color='grey')
plt.xlim(-100000, 100000)
plt.show()
```
# Supervised Stylometric Analysis of the Pentateuch
### Table of Contents
1. [Introduction](#intro)
2. [Preprocess Data](#preprocess)
3. [Embedding Experimentation](#embed)
4. [Results](#results)
<a name='intro'></a>
### 1. Introduction
Modern biblical scholarship holds that the Pentateuch, also known as the Torah, is a multi-author document that was composed over a period of hundreds of years. However, scholars disagree on the number and circumstances of the authors who contributed to the Torah, with some adhering to the older documentary hypothesis (DH) and many others subscribing to the newer supplementary hypothesis (SH). This work aims to shed light on this controversy by using Natural Language Processing (NLP) to identify the authors of the Torah at the sentence level. Computerized stylometric analysis in this piece reveals an intricate story, showing the lack of a strong stylometric signature distinguishing the E source from the J source, and a strong seepage of the P source into sources thought to be independent by the documentary hypothesis.
```
from linear_assignment_ import linear_assignment
import numpy as np
import pandas as pd
import itertools
from scipy.spatial import distance
import fasttext
import xgboost as xgb
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.cluster import SpectralClustering, KMeans, DBSCAN, AgglomerativeClustering
from sklearn.decomposition import TruncatedSVD, PCA
from sklearn import ensemble, linear_model, metrics, model_selection, naive_bayes
from gensim.models import Word2Vec, word2vec, KeyedVectors
import seaborn as sns
import scikitplot as skplt
import matplotlib.pyplot as plt
color = sns.color_palette()
np.set_printoptions(suppress=True)
%matplotlib inline
def word_vector(model, tokens, dim):
"""
Generate a word vector.
model: a completed model
tokens: a list of words (in this case POS)
dim: Number of dimensions. 100 - 300 is good for w2v
"""
i = 0
vec = np.zeros(dim).reshape((1, dim))
for word in tokens:
vec += model[word].reshape((1, dim))
i += 1.
if i != 0:
vec /= i
return vec
def clean_label(y_true, y_pred):
"""
Unsupervised classifiers do not always choose the same labels. For example, on one run the J author may be labled
0, on the next they may be labeled 3. This function will best match the labels and convert the later set of
labels so that all 3's in y_pred become 0's to match up with y_true.
This enables easy comparison and the possibility to run metrics.
Input y_true and y_pred, numpy.arrays containing the true and predicted labels for a model.
Returns y_pred converted to the same numeric key as y_true.
"""
y_true = y_true.astype(np.int64)
assert y_pred.size == y_true.size
d = max(y_pred.max(), y_true.max()) + 1
w = np.zeros((d, d), dtype=np.int64)
for i in range(y_pred.size):
w[y_pred[i], y_true[i]] += 1
ind = linear_assignment(w.max() - w)
key = {}
for l in ind:
key[l[0]] = l[1]
y_pred_clean = []
for label in y_pred:
y_pred_clean.append(key[label])
y_pred_clean = np.array(y_pred_clean)
return y_pred_clean
def flatten_list(unflattened_list):
return [item for sublist in unflattened_list for item in sublist]
def get_core_indices(data, cluster_indices):
cluster_mean = np.mean(data[cluster_indices], axis=0)
angles = [distance.euclidean(data[i, ], cluster_mean) for i in cluster_indices]
return [cluster_indices[i] for i in range(len(cluster_indices)) if np.mean(angles) - 2 * np.std(angles) < angles[i] < np.mean(angles) + 2 * np.std(angles)]
def supervised_improvement(data, cluster_cores):
y = flatten_list([[i] * len(cluster_cores[i]) for i in range(len(cluster_cores))])
matrix_trained = np.vstack([data[core] for core in cluster_cores])
clf = ensemble.RandomForestClassifier(n_estimators=500)
clf.fit(matrix_trained, y)
return clf.predict(data)
```
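One caveat before running this: `linear_assignment` was removed from scikit-learn, which is presumably why it is imported from a local `linear_assignment_` module above. For the small label counts used in this study (k ≤ 4), the same best-match relabelling done by `clean_label` can be sketched as a brute-force search over label permutations using only the standard library:

```python
import itertools
import numpy as np

def match_labels(y_true, y_pred):
    """Relabel y_pred so its cluster ids best agree with y_true (small k only)."""
    labels = sorted(set(y_true) | set(y_pred))
    best_acc, best_map = -1.0, None
    for perm in itertools.permutations(labels):
        mapping = dict(zip(labels, perm))
        acc = np.mean([mapping[p] == t for p, t in zip(y_pred, y_true)])
        if acc > best_acc:
            best_acc, best_map = acc, mapping
    return np.array([best_map[p] for p in y_pred])

y_true_demo = np.array([0, 0, 1, 1, 2, 2])
y_pred_demo = np.array([2, 2, 0, 0, 1, 1])     # identical clustering, permuted ids
print(match_labels(y_true_demo, y_pred_demo))  # → [0 0 1 1 2 2]
```

Brute force is exponential in the number of labels, so for larger k the Hungarian algorithm (as in the imported helper) is the right tool; here it is merely a portability fallback.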
<a name='preprocess'></a>
### 2. Preprocess Data
We omit the book of Deuteronomy in this study because both the DH and SH agree that it is a largely independent source with very minimal intrusion from the sources found in Genesis, Exodus, Leviticus, and Numbers. From a scholarly point of view, it is more closely related to the Deuteronomistic histories such as Joshua, Judges, Samuel, and Kings than it is to the rest of the Torah anyway. Classification performance could potentially decrease with each additional author, k; thus, to give ourselves the best possible chance of success, we remove this book, as there is minimal ongoing debate about its nature.
```
df = pd.read_csv('data.csv')
df = df[df['book'] != 'Deuteronomy']
df.head()
df = shuffle(df, random_state=5780)
cnt_srs = df['dh_author'].value_counts()
plt.figure(figsize=(8,4))
sns.barplot(cnt_srs.index, cnt_srs.values, alpha=0.8)
plt.ylabel('Number of Verses', fontsize=12)
plt.xlabel('Author', fontsize=12)
plt.title('Number of Verses by Author, Documentary Hypothesis')
plt.show()
cnt_srs = df['sh_author'].value_counts()
plt.figure(figsize=(8,4))
sns.barplot(cnt_srs.index, cnt_srs.values, alpha=0.8)
plt.ylabel('Number of Verses', fontsize=12)
plt.xlabel('Author', fontsize=12)
plt.title('Number of Verses by Author, Supplementary Hypothesis')
plt.show()
# Create true labels
pos = df['pos'].tolist()
dh_author = df['dh_author']
sh_author = df['sh_author']
dh_to_int = {
'J': 0,
'E': 1,
'P': 2,
'R': 3,
}
dh_labels = []
for i, label in enumerate(dh_author):
dh_labels.append(dh_to_int[label])
df['dh_labels'] = dh_labels
dh_labels = np.array(dh_labels)
sh_to_int = {
'J': 0,
'P': 1,
}
sh_labels = []
for i, label in enumerate(sh_author):
sh_labels.append(sh_to_int[label])
df['sh_labels'] = sh_labels
sh_labels = np.array(sh_labels)
```
<a name='embed'></a>
### 3. Embedding Experimentation
```
vectorizer = TfidfVectorizer(ngram_range=(2, 2))
posv = vectorizer.fit_transform(pos)
posv_arr = posv.toarray()
```
A note on unsupervised improvement: We calculate k centroids for our dataset, where each datapoint is a verse converted to POS tags and then embedded. Any point within two standard deviations of its respective centroid is kept, under the theory that it is a “core datapoint” that best represents that author's style. All datapoints outside those two standard deviations are reclassified using a supervised classification algorithm (the random forest classifier has proven quite effective for this dataset), in which the points within two standard deviations are used as labeled, true data. This technique and some code are taken from Alon Daks and Aidan Clark's paper, "Unsupervised Authorial Clustering Based on Syntactic Structure."
```
k = 4
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(posv)
print('no supervised enhancement f1 score: ', metrics.f1_score(dh_labels, sc_labels, average='weighted'))
print('no supervised enhancement accuracy: ', metrics.accuracy_score(dh_labels, sc_labels))
print()
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(posv_arr, i) for i in cluster_labels]
predicted_labels = supervised_improvement(posv_arr, cluster_cores)
predicted_labels = clean_label(dh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(dh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(dh_labels, predicted_labels))
k = 2
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(posv)
print('no supervised enhancement f1 score: ', metrics.f1_score(dh_labels, sc_labels, average='weighted'))
print('no supervised enhancement accuracy: ', metrics.accuracy_score(dh_labels, sc_labels))
print()
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(posv_arr, i) for i in cluster_labels]
predicted_labels = supervised_improvement(posv_arr, cluster_cores)
predicted_labels = clean_label(sh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(sh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(sh_labels, predicted_labels))
f = open('fasttext_data.txt', 'w')
for x, y in zip(pos, dh_labels):
line = '__label__' + str(y) + ' ' + x + '\n'
f.write(line)
f.close()
model = fasttext.train_unsupervised(input='fasttext_data.txt')
ft_vecs = []
for p in pos:
vec = model.get_sentence_vector(p)
ft_vecs.append(vec)
ft_vecs = np.array(ft_vecs)
```
PCA visualizations are a great way to see distinctions in data. In a perfect world we would see all authors in neat little clusters with a definitive separation between their groupings. Unfortunately our data appears to be quite conjoined. This doesn't mean that the model cannot find a distinction between them per se, but it may indicate that the distinction is too complex to be expressed in only two dimensions given our current embedding method. This appears to be the case in our experimentation, and the PCA visualizations tend to be somewhat underwhelming.
```
pca = PCA(random_state=5780)
pca.fit(ft_vecs)
skplt.decomposition.plot_pca_2d_projection(pca, ft_vecs, dh_labels, figsize=(8,8))
plt.show()
pca = PCA(random_state=5780)
pca.fit(ft_vecs)
skplt.decomposition.plot_pca_2d_projection(pca, ft_vecs, sh_labels, figsize=(8,8))
plt.show()
k = 4
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(ft_vecs)
print('no supervised improvement f1 score: ', metrics.f1_score(dh_labels, sc_labels, average='weighted'))
print('no supervised improvement accuracy: ', metrics.accuracy_score(dh_labels, sc_labels))
print()
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(ft_vecs, i) for i in cluster_labels]
predicted_labels = supervised_improvement(ft_vecs, cluster_cores)
predicted_labels = clean_label(dh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(dh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(dh_labels, predicted_labels))
```
```
k = 2
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(ft_vecs)
print('no supervised improvement f1 score: ', metrics.f1_score(sh_labels, sc_labels, average='weighted'))
print('no supervised improvement accuracy: ', metrics.accuracy_score(sh_labels, sc_labels))
print()
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(ft_vecs, i) for i in cluster_labels]
predicted_labels = supervised_improvement(ft_vecs, cluster_cores)
predicted_labels = clean_label(sh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(sh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(sh_labels, predicted_labels))
dim = 300
# Note: with both hs=0 and negative=0, gensim has no training objective at all, so we enable hierarchical softmax here
w2v_sg_model = word2vec.Word2Vec(sentences=pos, vector_size=dim, window=100, shrink_windows=True, min_count=5, sg=1, hs=1, negative=0, workers=12, seed=5780)
wordvec_arrs = np.zeros((len(pos), dim))
for i in range(len(pos)):
wordvec_arrs[i,:] = word_vector(w2v_sg_model.wv, pos[i], dim)
k = 4
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(wordvec_arrs)
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(wordvec_arrs, i) for i in cluster_labels]
predicted_labels = supervised_improvement(wordvec_arrs, cluster_cores)
predicted_labels = clean_label(dh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(dh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(dh_labels, predicted_labels))
k = 2
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(wordvec_arrs)
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(wordvec_arrs, i) for i in cluster_labels]
predicted_labels = supervised_improvement(wordvec_arrs, cluster_cores)
predicted_labels = clean_label(sh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(sh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(sh_labels, predicted_labels))
dim = 300
w2v_cbow_model = word2vec.Word2Vec(sentences=pos, vector_size=dim, window=100, shrink_windows=True, min_count=5, sg=0, hs=0, negative=5, workers=12, seed=5780)
wordvec_arrs = np.zeros((len(pos), dim))
for i in range(len(pos)):
wordvec_arrs[i,:] = word_vector(w2v_cbow_model.wv, pos[i], dim)
k = 4
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(wordvec_arrs)
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(wordvec_arrs, i) for i in cluster_labels]
predicted_labels = supervised_improvement(wordvec_arrs, cluster_cores)
predicted_labels = clean_label(dh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(dh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(dh_labels, predicted_labels))
k = 2
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(wordvec_arrs)
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(wordvec_arrs, i) for i in cluster_labels]
predicted_labels = supervised_improvement(wordvec_arrs, cluster_cores)
predicted_labels = clean_label(sh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(sh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(sh_labels, predicted_labels))
vectorizer = CountVectorizer(ngram_range=(1, 3))
posv = vectorizer.fit_transform(pos)
posv_arr = posv.toarray()
k = 4
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(posv)
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(posv_arr, i) for i in cluster_labels]
predicted_labels = supervised_improvement(posv_arr, cluster_cores)
predicted_labels = clean_label(dh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(dh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(dh_labels, predicted_labels))
k = 2
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(posv)
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(posv_arr, i) for i in cluster_labels]
predicted_labels = supervised_improvement(posv_arr, cluster_cores)
predicted_labels = clean_label(sh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(sh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(sh_labels, predicted_labels))
vectorizer = CountVectorizer(ngram_range=(1, 25), analyzer='char')
posv = vectorizer.fit_transform(pos)
posv_arr = posv.toarray()
k = 4
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(posv)
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(posv_arr, i) for i in cluster_labels]
predicted_labels = supervised_improvement(posv_arr, cluster_cores)
predicted_labels = clean_label(dh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(dh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(dh_labels, predicted_labels))
k = 2
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(posv)
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(posv_arr, i) for i in cluster_labels]
predicted_labels = supervised_improvement(posv_arr, cluster_cores)
predicted_labels = clean_label(sh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(sh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(sh_labels, predicted_labels))
vectorizer = TfidfVectorizer(ngram_range=(5, 5), analyzer='char')
posv = vectorizer.fit_transform(pos)
posv_arr = posv.toarray()
pca = PCA(random_state=5780)
pca.fit(posv_arr)
skplt.decomposition.plot_pca_2d_projection(pca, posv_arr, dh_labels, figsize=(8,8))
plt.show()
pca = PCA(random_state=5780)
pca.fit(posv_arr)
skplt.decomposition.plot_pca_2d_projection(pca, posv_arr, sh_labels, figsize=(8,8))
plt.show()
k = 4
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(posv)
print('no supervised improvement f1 score: ', metrics.f1_score(dh_labels, sc_labels, average='weighted'))
print('no supervised improvement accuracy: ', metrics.accuracy_score(dh_labels, sc_labels))
print()
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(posv_arr, i) for i in cluster_labels]
predicted_labels = supervised_improvement(posv_arr, cluster_cores)
predicted_labels = clean_label(dh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(dh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(dh_labels, predicted_labels))
k = 2
c = SpectralClustering(n_clusters=k, affinity='linear')
sc_labels = c.fit_predict(posv)
print('no supervised improvement f1 score: ', metrics.f1_score(sh_labels, sc_labels, average='weighted'))
print('no supervised improvement accuracy: ', metrics.accuracy_score(sh_labels, sc_labels))
print()
cluster_labels = [[i for i, x in enumerate(sc_labels) if x == j] for j in range(k)]
cluster_cores = [get_core_indices(posv_arr, i) for i in cluster_labels]
predicted_labels = supervised_improvement(posv_arr, cluster_cores)
predicted_labels = clean_label(sh_labels, predicted_labels)
print('f1 score: ', metrics.f1_score(sh_labels, predicted_labels, average='weighted'))
print('accuracy: ', metrics.accuracy_score(sh_labels, predicted_labels))
```
<a name='results'></a>
### 4. Results
Our most successful embedding method was the TF-IDF vectorizer with the character-level analyzer, followed by supervised improvement. The DH model returned a weighted F1 score of 0.43 and the SH model a weighted F1 score of 0.68. These results indicate that a certain amount of authorial style has been captured, but we will move to supervised classification to build a more definitive conclusion. The unsupervised learning was included to show how an authorship identification task might need to proceed when using a dataset without pre-curated true labels. Unsupervised authorship identification may even need to pick k, the number of authors, which makes the task still more challenging. Picking k is not done in this journal.
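Although we do not attempt to pick k here, one common sketch is to sweep candidate values of k and score each clustering with the silhouette coefficient. The snippet below is illustrative only: it uses synthetic blobs and KMeans for speed, but the same sweep applies to `SpectralClustering` over embeddings such as `ft_vecs` or `posv_arr`.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for document embeddings, with 3 true "authors".
X, _ = make_blobs(n_samples=300, centers=[[0, 0], [10, 10], [-10, 10]],
                  cluster_std=1.0, random_state=5780)

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=5780).fit_predict(X)
    scores[k] = silhouette_score(X, labels)  # higher = tighter, better-separated clusters

best_k = max(scores, key=scores.get)
print("estimated number of authors:", best_k)  # 3 for these synthetic blobs
```

On real authorship embeddings the silhouette curve is usually far less decisive than on synthetic blobs, so this estimate should be treated as a heuristic rather than ground truth.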
```
import tensorflow as tf
import os
import numpy as np
import ujson as json
from importlib import reload
from scipy import stats
from func import cudnn_gru, native_gru, dot_attention, summ, ptr_net
from prepro import word_tokenize, convert_idx
import inference
# reload(inference.InfModel)
# reload(inference.Inference)
```
# R-NET sample test, with confidence output
```
tf.reset_default_graph()
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
# Must be consistent with training
char_limit = 16
hidden = 75
char_dim = 8
char_hidden = 100
use_cudnn = False
# File path
target_dir = "data"
save_dir = "log/model"
word_emb_file = os.path.join(target_dir, "word_emb.json")
char_emb_file = os.path.join(target_dir, "char_emb.json")
word2idx_file = os.path.join(target_dir, "word2idx.json")
char2idx_file = os.path.join(target_dir, "char2idx.json")
infer = inference.Inference()
context = "... michael crawford , right , who is ailing , will not return to his award-winning " \
"role of count fosco in the andrew lloyd webber musical adaptation of the wilkie_collins " \
"classic , '' the woman in white , '' in london as scheduled on may 2 ."
ques2 = "Where is the birth place of wilkie_collins?"
# ans2 = infer.response(context, ques2)
# print(infer.response(context, ques2))
ans2, confidence1, confidence2 = infer.response(context, ques2)
print("Answer 2: {}".format(ans2))
from scipy import stats
print(stats.entropy(confidence1))
print(stats.entropy(confidence2))
print(stats.entropy(np.ones((10000))/10000))
print(stats.entropy(np.ones((100))/100))
print(stats.entropy(np.ones((10))/10))
print(stats.entropy(np.ones((2))/2))
```
# Validation on the Riedel NYT dataset
```
import pandas as pd
file_path = 'origin_data/train.txt'
df = pd.read_csv(file_path, sep='\t', header=None, names=['e1_encoding', 'e2_encoding', 'e1', 'e2', 'relation', 'content'])
df.content.head(3)
df.relation = df.relation.fillna('none')
relation_series = df.relation.value_counts()
selected_relation_series = relation_series[relation_series.values > 1000]
relation_list = selected_relation_series.index.values.tolist()
relation_list
# Filter to the valid subset of the data
selected_df = df.loc[df['relation'].isin(relation_list)]
selected_df.head(2)
# Hand-crafted question templates for each relation
relation_to_questions = {
'/location/location/contains':[
'Where is <e2> located?',
'Where is <e2>?',
'Which place contains <e2>?'],
'/people/person/nationality':['What\'s the nationality of <e1>?'],
'/location/country/capital':[
'What\'s the capital of <e2>?',
'Where is the capital of <e2>?'],
'/people/person/place_lived': ['Where did <e1> live?'],
'/location/neighborhood/neighborhood_of':[
'What is the neighborhood of <e1>?',
'Where is <e1> next to?',
'What place is <e1> adjacent to?'],
'/location/administrative_division/country':[
'Which country does <e1> belong to?',
'Which country is <e1> located in?'],
'/location/country/administrative_divisions':[
'Which country does <e2> belong to?',
'Which country is <e2> located in?'],
'/business/person/company':[
'Which company does <e1> work for?',
'Which company does <e1> join?',
'Where does <e1> work?',
'What\'s the occupation of <e1>?',
'Which company hires <e1>?'],
'/people/person/place_of_birth':[
'Where is the birth place of <e1>?',
'Where was <e1> born?',
'Where is the hometown of <e1>?'],
'/people/deceased_person/place_of_death':[
'Where did <e1> die?',
'Where is the place of death of <e1>?'],
'/business/company/founders':[
'Who founded <e1>?',
'Who is the founder of <e1>?',
'Who started <e1>?']
}
question_to_relation = {}
# question_to_relation = {q:relation for q in [qlist ]}
for relation, qlist in relation_to_questions.items():
for q in qlist:
question_to_relation[q] = relation
selected_df[selected_df.relation=='/business/person/company'].iloc[0]
selected_df[selected_df.relation=='/people/person/place_lived'].iloc[0].content
```
# Task construction
- Task 1: evaluate the answers for a single, known relation
- Task 2: for each potential relation, test the different responses, thereby recovering any possible relation and its answer
```
# Utility functions
def content_prepro(content):
# We may need to filter out some special tokens
content = content[:-10]
return content
# Task 1
def dprint(s):
# print(s)
pass
def test_single_relation(relation_name):
exact_cnt = 0 # exact match
hit_cnt = 0 # partial hit
total_cnt = 1000
pred_list = []
truth_list = []
for idx, row in selected_df[selected_df.relation==relation_name].reset_index().iterrows():
dprint('=============')
dprint(idx)
dprint(row)
content = content_prepro(row.content)
dprint('Content=\t' + content)
best_loss = 100
best_pred = ''
truth = '' # the truth is actually the same for every question template
for q in relation_to_questions[row.relation]:
# Substitute the entities into the question template
question = q.replace('<e1>', row.e1).replace('<e2>', row.e2)
dprint('Q=\t' + question)
try:
pred, d1, d2 = infer.response(content, question) # d1, d2 are the confidence distributions for the answer's begin and end
c1, c2 = stats.entropy(d1), stats.entropy(d2)
loss = c1*c2
if loss < best_loss:
best_loss = loss
best_pred = pred
truth = str(row.e1 if row.e2 in question else row.e2)
dprint('pred=' + str(pred) +
'\tTruth=' + truth +
'\tc1=' + str(c1) + '\tc2=' + str(c2))
except Exception: # skip examples the model fails to process
continue
if truth !='' and (best_pred in truth or truth in best_pred):
pred_list.append(best_pred)
truth_list.append(truth)
hit_cnt += 1
if idx %10 == 0:
dprint(idx)
if idx > total_cnt:
break
dprint(hit_cnt)
dprint(pred_list[:20])
dprint(truth_list[:20])
return hit_cnt, total_cnt
for relation in [
'/location/location/contains',
'/people/person/nationality',
'/location/country/capital',
'/people/person/place_lived',
'/location/neighborhood/neighborhood_of',
'/location/country/administrative_divisions',
'/location/administrative_division/country',
'/business/person/company',
'/people/person/place_of_birth',
'/people/deceased_person/place_of_death',
'/business/company/founders'
]:
hit_cnt, total_cnt = test_single_relation(relation)
print(relation)
print('Accuracy:' + str(hit_cnt) + ' / ' + str(total_cnt))
```
<a href="https://colab.research.google.com/github/gordeli/textanalysis/blob/master/03_Data_Collection_DS3Text.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Fundamentals of Text Analysis for User Generated Content @ EDHEC, 2021
# Part 3: Data Collection
[<- Previous: Noisy Text Processing](https://colab.research.google.com/github/gordeli/textanalysis/blob/master/colab/02_Noisy_Text_Processing_DS3Text.ipynb)
[-> Next: Content Analysis](https://colab.research.google.com/github/gordeli/textanalysis/blob/master/colab/04_Content_Analysis_DS3Text.ipynb)
Dates: February 8 - 15, 2021
Facilitator: [Ivan Gordeliy](https://www.linkedin.com/in/gordeli/)
---
## Initial Setup
- **Run "Setup" below first.**
- This will load libraries and download some resources that we'll use throughout the tutorial.
- You will see a message reading "Done with setup!" when this process completes.
```
#@title Setup (click the "run" button to the left) {display-mode: "form"}
## Setup ##
# imports
# built-in Python libraries
# -------------------------
# For processing the incoming Twitter data
import json
# os-level utils
import os
# For downloading web data
import requests
# For compressing files
import zipfile
# 3rd party libraries
# -------------------
# beautiful soup for html parsing
!pip install beautifulsoup4
import bs4
# tweepy for using the Twitter API
!pip install tweepy
import tweepy
# allows downloading of files from colab to your computer
from google.colab import files
# get sample reddit data
if not os.path.exists("reddit_2019_05_5K.json"):
!wget https://raw.githubusercontent.com/gordeli/textanalysis/master/data/reddit_2019_05_5K.json
print()
print("Done with setup!")
print("If you'd like, you can click the (X) button to the left to clear this output.")
```
---
## Data Collection
- Here we'll cover a few different sources of user-generated content and provide some examples of how to gather data.
### Web Scraping and HTML parsing
- Lots of text data is available directly from web pages.
- Have a look at the following website: [Quotes to Scrape](http://quotes.toscrape.com/page/1/)
- With the Beautiful Soup library, it's very easy to take some html and extract only the text:
```
html_content = requests.get("http://quotes.toscrape.com/page/1/").content
soup = bs4.BeautifulSoup(html_content,"html.parser")
print(soup.text)
```
- If you want to extract data in a more targeted way, you can navigate the [html document object model](https://www.w3schools.com/whatis/whatis_htmldom.asp) using [Beautiful Soup functions](https://www.crummy.com/software/BeautifulSoup/bs4/doc/), but we won't dive deeply into this for now.
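- As a small taste, here is a sketch of targeted extraction with Beautiful Soup's CSS selectors. It runs on a local HTML snippet shaped like the Quotes to Scrape markup (the class names `quote`, `text`, and `author` mirror that site's structure), so no network access is needed:

```python
import bs4

# Local HTML snippet mimicking the structure of quotes.toscrape.com.
html = """
<div class="quote"><span class="text">“A witty quote.”</span>
<small class="author">Jane Doe</small></div>
<div class="quote"><span class="text">“Another quote.”</span>
<small class="author">John Roe</small></div>
"""
soup = bs4.BeautifulSoup(html, "html.parser")
for quote in soup.select("div.quote"):
    text = quote.select_one("span.text").text
    author = quote.select_one("small.author").text
    print(author + ":", text)
```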
- **Important: You should not use this kind of code to just go collect data from any website!**
- Web scraping tools should always check a site's [`robots.txt` file](https://www.robotstxt.org/robotstxt.html), which describes how crawlers, scrapers, indexers, etc., should use the site.
- For example, see [github's robots.txt](https://github.com/robots.txt)
- You should be able to find any site's robots.txt (if there is one) at http://\<domain\>/robots.txt for any web \<domain\>.
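- Python's standard library can check these rules for you. The sketch below parses a hypothetical robots.txt from a string (so it needs no network access); in practice you would point `RobotFileParser` at `http://<domain>/robots.txt` with `set_url()` followed by `read()`:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt; a real crawler would fetch the site's actual file.
robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /
"""
rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("*", "http://example.com/quotes"))     # True: allowed
print(rp.can_fetch("*", "http://example.com/private/x"))  # False: disallowed
```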
### Reddit Corpus
- Reddit is a great source of publicly available user-generated content.
- We could scrape Reddit ourselves, but why do that if someone has already (generously) done the heavy lifting?
- Reddit user Stuck_in_the_Matrix has compiled and compressed essentially all of Reddit for researchers to download.
- [Original submissions corpus](https://www.reddit.com/r/datasets/comments/3mg812/full_reddit_submission_corpus_now_available_2006/) (up to 2015) and [updates](https://files.pushshift.io/reddit/submissions/) (up to April 2020 at the time of latest update of this notebook).
- For a smaller file to get started with, take a look at the [daily comments files](https://files.pushshift.io/reddit/comments/daily/).
- To explore more files available, see [this top-level directory](https://files.pushshift.io/reddit/).
- Let's explore a small subset of the data from May 2019:
```
# read the data that was downloaded during setup
# this is the exact format as the full corpus, just truncated to the first 5000 lines
sample_reddit_posts_raw = open("reddit_2019_05_5K.json",'r').readlines()
print("Loaded",len(sample_reddit_posts_raw),"reddit posts.")
reddit_json = [json.loads(post) for post in sample_reddit_posts_raw]
print(json.dumps(reddit_json[50], sort_keys=True, indent=4))
```
- Since the posts are in json format, we used the Python json library to process them.
- This library returns Python dict objects, so we can access them just like we would any other dictionary.
- Let's view some of the text content from these posts:
```
for post in reddit_json[:100]:
if post['selftext'].strip() and post['selftext'] not in ["[removed]","[deleted]"]:
print("Subreddit:",post['subreddit'],"\nTitle:",post['title'],"\nContent:", \
post['selftext'],"\n")
```
- Note that we filtered out posts with no text content.
- Many posts have a non-null "media" field, which could contain images, links to youtube, videos, etc.
- These could be worth exploring more, using computer vision to process images/videos and NLP to process linked websites.
- That covers the basics of getting Reddit data.
### The Twitter API
- Twitter is also known for being an abundant source of public text data (perhaps even more so than Reddit).
- Twitter provides several types of API that can be used to collect anything from tweets to user descriptions to follower networks.
- You can [read all about it here](https://developer.twitter.com/).
- For this tutorial, we'll look at using the [standard search API](https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets.html), which allows us to retrieve tweets that contain specific words, phrases, and hashtags.
- In the slides, we talked about how to setup a Twitter App and get a API keys.
- You should add your own keys below and then run the code block to set your keys:
```
twitter_API_key = ""
twitter_API_secret_key = ""
```
- Do not share your credentials with anyone!
- You shouldn't hardcode your API keys in code (like above) if you are going to save the file anywhere that is visible to others (like committing the file to github).
- You can read more about securing your API keys [here](https://developer.twitter.com/en/docs/basics/authentication/guides/securing-keys-and-tokens).
- So, if you plan to save this file in any way, make sure to remove your API keys first.
- If you think your keys have been compromised, you can regenerate them.
- [Apps](https://developer.twitter.com/en/apps) -> Keys and Tokens -> Regenerate
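- One simple pattern, sketched below with hypothetical environment-variable names, is to read the keys from the environment rather than hardcoding them:

```python
import os

# TWITTER_API_KEY / TWITTER_API_SECRET_KEY are hypothetical names; export them
# in your shell (or set them in the notebook environment) before running the
# Twitter cells below.
twitter_API_key = os.environ.get("TWITTER_API_KEY", "")
twitter_API_secret_key = os.environ.get("TWITTER_API_SECRET_KEY", "")
if not (twitter_API_key and twitter_API_secret_key):
    print("Twitter credentials not set; the API cells below will fail to authenticate.")
```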
- Now, let's see how we can use the [tweepy](https://github.com/tweepy/tweepy) library to collect some tweets:
```
# create an auth handler object using the api tokens
auth = tweepy.AppAuthHandler(twitter_API_key, twitter_API_secret_key)
# tweepy automatically takes care of potential rate limiting issues
API = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
# let's look for some tweets
query = "#Apple"
# count: 100 is the max allowed value for this parameter
# though we might get fewer than that
# tweet_mode: Twitter changed the char limit from 140->280, but didn't want
# to break applications expecting 140, so we have to make sure to ask for this.
tweets = API.search(q=query, count=100, tweet_mode="extended")
print("Collected",len(tweets),"tweets.")
```
- Great, hopefully you got some tweets! Let's take a look:
```
print(json.dumps(tweets[0]._json, sort_keys=True, indent=4))
```
- Here is the text portion of the tweets:
```
print("\n\n\n".join([tweet.full_text for tweet in tweets]))
```
- Things are starting to look a bit more like our examples from the noisy text section.
- Note: retweet text is cut off with `…`. A retweet's JSON carries a second `full_text` field (inside the nested retweeted status), which can be used to recover the complete text if needed.
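- A hedged sketch of recovering the untruncated text: in the v1.1 API, a retweet's JSON nests the original status (with its own `full_text`) under `retweeted_status`, so a small helper can prefer that field when present. We use `SimpleNamespace` stand-ins here in place of real tweepy `Status` objects:

```python
import types

def get_full_text(tweet):
    # Retweets carry the complete original text under retweeted_status.
    if hasattr(tweet, "retweeted_status"):
        return tweet.retweeted_status.full_text
    return tweet.full_text

# Stand-in objects mimicking tweepy Status results from tweet_mode="extended":
original = types.SimpleNamespace(full_text="the complete original text")
retweet = types.SimpleNamespace(full_text="RT @user: the complete orig…",
                                retweeted_status=original)
print(get_full_text(retweet))  # the complete original text
```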
- To make it even easier to collect tweets from page to page, we can use the tweepy Cursor object:
```
cursor = tweepy.Cursor(API.search, q="#EDHEC", tweet_mode="extended")
# just get 5 tweets
# if not given, will (in theory) retrieve as many matching tweets as possible
# (standard search only allows search within previous ~1 week)
for tweet in cursor.items(5):
print(tweet.full_text)
print("--------")
```
### Putting it together: building your own corpus
**Exercise 4:** Tweet collection
- Let's write a function to collect a larger set of tweets related to a query
- If you want to collect data using multiple queries, you can just call this function multiple times, changing the query each time.
- Store the tweets in the file however you like
- You will need to write your own parser for this file later on in the tutorial.
- Store whatever information you like about each tweet, but collect the `full_text` at the very least.
- Make sure to check if `limit` is set, and if it is, only collect `limit` tweets.
```
def write_tweets_to_file(API, query, output_filename, limit=5):
# ------------- Exercise 4 -------------- #
# gather tweets here, then write to output_filename
# ---------------- End ------------------ #
# quick test
query = "#twitter"
auth = tweepy.AppAuthHandler(twitter_API_key, twitter_API_secret_key)
API = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
output_filename = "test.txt"
write_tweets_to_file(API, query, output_filename, limit=3)
print("Wrote this to the file:",'\n'+open(output_filename).read())
```
- Now, change the `query` string below to whatever you like, and run the code.
- *Make sure your code above is working before you run this! Otherwise, you may run quite a few queries and hit your rate limit, preventing you from testing your code again for ~15 minutes*
- See [this page](https://developer.twitter.com/en/docs/tweets/search/guides/standard-operators.html) under "standard search operators" for details on what kinds of things you can place here.
```
query = "#Apple"
# call the tweet collection function
auth = tweepy.AppAuthHandler(twitter_API_key, twitter_API_secret_key)
API = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
output_filename = "mytweets.txt"
write_tweets_to_file(API, query, output_filename, 10000)
# zip and download
output_zip = output_filename + '.zip'
with zipfile.ZipFile(output_zip, 'w') as myzip:
myzip.write(output_filename)
files.download(output_zip)
```
- Note: with some web browsers, the `files.download()` command won't correctly open a dialog window to download the files.
- If this happens, check out the "Files" menu on the sidebar
- can be expanded on the left side of this notebook -- click the > button in the top left-corner to unhide the menu.
- You can download your file there (and also upload it when you need it in the next notebook).
```
#@title Sample Solution (double-click to view) {display-mode: "form"}
def write_tweets_to_file(api, query, output_filename, limit=10):
cursor = tweepy.Cursor(api.search, q=query, tweet_mode="extended")
with open(output_filename,'w') as out:
for tweet in cursor.items(limit):
# using tags since tweets may have newlines in them
# you may also want to write other information to this file,
# or even the entire json object.
out.write('<TWEET>' + tweet.full_text + '</TWEET>\n')
# quick test
query = "#twitter"
auth = tweepy.AppAuthHandler(twitter_API_key, twitter_API_secret_key)
API = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
output_filename = "test2.txt"
write_tweets_to_file(API, query, output_filename, limit=3)
print("Wrote this to the file:",'\n'+open(output_filename).read())
```
- You should now have your own file(s) containing Twitter data!
- [-> Next: Content Analysis](https://colab.research.google.com/github/gordeli/textanalysis/blob/master/colab/04_Content_Analysis_DS3Text.ipynb)
```
import boto3
import botocore
import os
import sagemaker
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/ipinsights-tutorial"
execution_role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# check if the bucket exists
try:
boto3.Session().client("s3").head_bucket(Bucket=bucket)
except botocore.exceptions.ParamValidationError as e:
print("Specify your S3 bucket or you gave your bucket an invalid name!")
except botocore.exceptions.ClientError as e:
if e.response["Error"]["Code"] == "403":
print(f"You don't have permission to access the bucket, {bucket}.")
elif e.response["Error"]["Code"] == "404":
print(f"Your bucket, {bucket}, doesn't exist!")
else:
raise
else:
print(f"Training input/output will be stored in: s3://{bucket}/{prefix}")
```
Next, we download the modules needed for synthetic data generation, if they do not already exist.
```
from os import path
tools_bucket = f"jumpstart-cache-prod-{region}" # Bucket containing the data generation module.
tools_prefix = "1p-algorithms-assets/ip-insights" # Prefix for the data generation module
s3 = boto3.client("s3")
data_generation_file = "generate_data.py" # Synthetic data generation module
script_parameters_file = "ip2asn-v4-u32.tsv.gz"
if not path.exists(data_generation_file):
s3.download_file(tools_bucket, f"{tools_prefix}/{data_generation_file}", data_generation_file)
if not path.exists(script_parameters_file):
s3.download_file(tools_bucket, f"{tools_prefix}/{script_parameters_file}", script_parameters_file)
```
### Dataset
Apache Web Server ("httpd") is the most popular web server used on the internet, and luckily for us, it logs all requests processed by the server by default. If a web page requires HTTP authentication, the Apache Web Server will log the IP address and authenticated user name for each requested resource.
The [access logs](https://httpd.apache.org/docs/2.4/logs.html) are typically on the server under the file `/var/log/httpd/access_log`. From the example log output below, we see which IP addresses each user has connected with:
```
192.168.1.100 - user1 [15/Oct/2018:18:58:32 +0000] "GET /login_success?userId=1 HTTP/1.1" 200 476 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
192.168.1.102 - user2 [15/Oct/2018:18:58:35 +0000] "GET /login_success?userId=2 HTTP/1.1" 200 - "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
...
```
If we want to train an algorithm to detect suspicious activity, this dataset is ideal for SageMaker IP Insights.
First, we determine the resource we want to analyze (such as a login page or access to a protected file). Then, we construct a dataset containing the history of all past user interactions with that resource. We extract each 'access event' from the log and store the corresponding user name and IP address in a headerless CSV file with two columns. The first column will contain the user identifier string, and the second will contain the IPv4 address in decimal-dot notation.
```
user1, 192.168.1.100
user2, 192.168.1.102
...
```
As a side note, the dataset should include all access events. That means some `<user_name, ip_address>` pairs will be repeated.
#### User Activity Simulation
For this example, we are going to simulate our own web-traffic logs. We mock up a toy website example and simulate users logging into the website from mobile devices.
The details of the simulation are explained in the script [here](./generate_data.py).
```
from generate_data import generate_dataset
# We simulate traffic for 10,000 users. This should yield about 3 million log lines (~700 MB).
NUM_USERS = 10000
log_file = "ipinsights_web_traffic.log"
generate_dataset(NUM_USERS, log_file)
# Visualize a few log lines
!head $log_file
```
### Prepare the dataset
Now that we have our logs, we need to transform them into a format that IP Insights can use. As we mentioned above, we need to:
1. Choose the resource which we want to analyze users' history for
2. Extract our users' usage history of IP addresses
3. In addition, we want to separate our dataset into a training and test set. This will allow us to check for overfitting by evaluating our model on 'unseen' login events.
For the rest of the notebook, we assume that the Apache Access Logs are in the Common Log Format as defined by the [Apache documentation](https://httpd.apache.org/docs/2.4/logs.html#accesslog). We start with reading the logs into a Pandas DataFrame for easy data exploration and pre-processing.
```
import pandas as pd
df = pd.read_csv(
log_file,
sep=" ",
na_values="-",
header=None,
names=["ip_address","rcf_id","user","timestamp","time_zone","request", "status", "size", "referer", "user_agent"]
)
df.head()
```
We convert the log timestamp strings into Python datetimes so that we can sort and compare the data more easily.
```
# Convert time stamps to DateTime objects
df["timestamp"] = pd.to_datetime(df["timestamp"], format="[%d/%b/%Y:%H:%M:%S")
```
We also verify the time zones of all of the time stamps. If the log contains more than one time zone, we would need to standardize the timestamps.
```
# Check if they are all in the same timezone
num_time_zones = len(df["time_zone"].unique())
num_time_zones
```
As we see above, there is only one value in the entire `time_zone` column. Therefore, all of the timestamps are in the same time zone, and we do not need to standardize them. We can skip the next cell and go to [1. Selecting a Resource](#1.-Select-Resource).
If there is more than one time_zone in your dataset, then we parse the timezone offset and update the corresponding datetime object.
**Note:** The next cell takes about 5-10 minutes to run.
```
from datetime import datetime
import pytz
def apply_timezone(row):
tz = row[1]
sign = -1 if tz.startswith("-") else 1  # handle negative offsets correctly
tz_offset = sign * (int(tz[1:3]) * 60 + int(tz[3:5]))  # offset in minutes
return row[0].replace(tzinfo=pytz.FixedOffset(tz_offset))
if num_time_zones > 1:
df["timestamp"] = df[["timestamp", "time_zone"]].apply(apply_timezone, axis=1)
```
#### 1. Select Resource
Our goal is to train an IP Insights algorithm to analyze the history of user logins such that we can predict how suspicious a login event is.
In our simulated web server, the server logs a `GET` request to the `/login_success` page every time a user successfully logs in. We filter our Apache logs for `GET` requests for `/login_success`. We also filter for requests with `status == 200`, to ensure that the page request was well formed.
**Note:** every web server handles logins differently. For your dataset, determine which resource you need to analyze to frame this problem correctly. Depending on your use case, you may need to do more data exploration and preprocessing.
```
df = df[(df["request"].str.startswith("GET /login_success")) & (df["status"] == 200)]
```
#### 2. Extract Users and IP address
Now that our DataFrame only includes log events for the resource we want to analyze, we extract the relevant fields to construct an IP Insights dataset.
IP Insights takes in a headerless CSV file with two columns: an entity (username) ID string and the IPv4 address in decimal-dot notation. Fortunately, the Apache Web Server Access Logs output IP addresses and authenticated usernames in their own columns.
**Note:** Each website handles user authentication differently. If the Access Log does not output an authenticated user, you could explore the website's query strings or work with your website developers on another solution.
```
df = df[["user", "ip_address", "timestamp"]]
```
#### 3. Create training and test dataset
As part of training a model, we want to evaluate how it generalizes to data it has never seen before.
Typically, you create a test set by reserving a random percentage of your dataset and evaluating the model after training. However, for machine learning models that make future predictions on historical data, we want to use out-of-time testing. Instead of randomly sampling our dataset, we split our dataset into two contiguous time windows. The first window is the training set, and the second is the test set.
We first look at the time range of our dataset to select a date to use as the partition between the training and test set.
```
df["timestamp"].describe()
```
We have login events for 10 days. Let's take the first week (7 days) of data as training and then use the last 3 days for the test set.
```
time_partition = (
datetime(2018, 11, 11, tzinfo=pytz.FixedOffset(0))
if num_time_zones > 1
else datetime(2018, 11, 11)
)
train_df = df[df["timestamp"] <= time_partition]
test_df = df[df["timestamp"] > time_partition]
```
Now that we have our training dataset, we shuffle it.
Shuffling improves the model's performance because SageMaker IP Insights uses stochastic gradient descent: shuffling makes login events for the same user less likely to occur in the same mini-batch, which lets the model update between predictions for the same user and improves training convergence.
```
# Shuffle train data
train_df = train_df.sample(frac=1)
train_df.head()
```
### Store Data on S3
Now that we have simulated (or scraped) our dataset, we have to prepare it and upload it to S3.
Since we will run inference locally, we do not need to upload our test dataset.
```
# Output dataset as headerless CSV
train_data = train_df.to_csv(index=False, header=False, columns=["user", "ip_address"])
# Upload data to S3 key
train_data_file = "train.csv"
key = os.path.join(prefix, "train", train_data_file)
s3_train_data = f"s3://{bucket}/{key}"
print(f"Uploading data to: {s3_train_data}")
boto3.resource("s3").Bucket(bucket).Object(key).put(Body=train_data)
# Configure SageMaker IP Insights Input Channels
input_data = {
"train": sagemaker.session.s3_input(
s3_train_data, distribution="FullyReplicated", content_type="text/csv"
)
}
```
## Training
---
Once the data is preprocessed and available in the necessary format, the next step is to train our model on the data. There are a number of parameters required by the SageMaker IP Insights algorithm to configure the model and define the computational environment in which training will take place. The first of these is to point to a container image which holds the algorithm's training and hosting code:
```
from sagemaker.amazon.amazon_estimator import get_image_uri
image = get_image_uri(boto3.Session().region_name, "ipinsights")
```
Then, we need to determine the training cluster to use. The IP Insights algorithm supports both CPU and GPU training. We recommend using GPU machines as they will train faster. However, when the size of your dataset increases, it can become more economical to use multiple CPU machines running with distributed training.
### Training Job Configuration
- **train_instance_type**: the instance type to train on. We recommend `p3.2xlarge` for single GPU, `p3.8xlarge` for multi-GPU, and `m5.2xlarge` if using distributed training with CPU;
- **train_instance_count**: the number of worker nodes in the training cluster.
We also need to configure SageMaker IP Insights-specific hyperparameters:
### Model Hyperparameters
- **num_entity_vectors**: the total number of embeddings to train. We use an internal hashing mechanism to map the entity ID strings to an embedding index; therefore, using an embedding size larger than the total number of possible values helps reduce the number of hash collisions. We recommend this value to be 2x the total number of unique entities (i.e. user names) in your dataset;
- **vector_dim**: the size of the entity and IP embedding vectors. The larger the value, the more information can be encoded using these representations but using too large vector representations may cause the model to overfit, especially for small training data sets;
- **num_ip_encoder_layers**: the number of layers in the IP encoder network. The larger the number of layers, the higher the model capacity to capture patterns among IP addresses. However, large number of layers increases the chance of overfitting. `num_ip_encoder_layers=1` is a good value to start experimenting with;
- **random_negative_sampling_rate**: the number of randomly generated negative samples to produce per 1 positive sample; `random_negative_sampling_rate=1` is a good value to start experimenting with;
- Random negative samples are produced by drawing each octet from a uniform distribution over [0, 255];
- **shuffled_negative_sampling_rate**: the number of shuffled negative samples to produce per 1 positive sample; `shuffled_negative_sampling_rate=1` is a good value to start experimenting with;
- Shuffled negative samples are produced by shuffling the accounts within a batch;
### Training Hyperparameters
- **epochs**: the number of epochs to train. Increase this value if you continue to see the accuracy and cross entropy improving over the last few epochs;
- **mini_batch_size**: how many examples in each mini_batch. A smaller number improves convergence with stochastic gradient descent. But a larger number is necessary if using shuffled_negative_sampling to avoid sampling a wrong account for a negative sample;
- **learning_rate**: the learning rate for the Adam optimizer (try ranges in [0.001, 0.1]). Too large learning rate may cause the model to diverge since the training would be likely to overshoot minima. On the other hand, too small learning rate slows down the convergence;
- **weight_decay**: L2 regularization coefficient. Regularization is required to prevent the model from overfitting the training data. Too large of a value will prevent the model from learning anything;
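As a concrete example of the `num_entity_vectors` guidance above, the recommended value can be derived directly from the training data — a minimal sketch, where the `users` list stands in for the notebook's `train_df["user"]` column:

```python
# Stand-in for train_df["user"]; in the notebook, train_df["user"].nunique()
# gives the number of unique entities directly.
users = ["alice", "bob", "alice", "carol"]

# Recommended setting: 2x the total number of unique entities (user names).
num_entity_vectors = 2 * len(set(users))
print(num_entity_vectors)  # 6
```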
```
# Set up the estimator with training job configuration
ip_insights = sagemaker.estimator.Estimator(
image,
execution_role,
instance_count=1,
instance_type="ml.p3.2xlarge",
output_path=f"s3://{bucket}/{prefix}/output",
sagemaker_session=sagemaker.Session(),
)
# Configure algorithm-specific hyperparameters
ip_insights.set_hyperparameters(
num_entity_vectors="20000",
random_negative_sampling_rate="5",
vector_dim="128",
mini_batch_size="1000",
epochs="5",
learning_rate="0.01",
)
# Start the training job (should take about ~1.5 minute / epoch to complete)
ip_insights.fit(input_data)
print(f"Training job name: {ip_insights.latest_training_job.job_name}")
```
## Inference
-----
Now that we have trained a SageMaker IP Insights model, we can deploy it to an endpoint and start performing inference on data. In this case, that means providing it a `<user, IP address>` pair and predicting its compatibility score.
We can create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type on which inference will be performed, as well as the initial number of instances to spin up. We recommend the `ml.m5` instance family as it provides the most memory at the lowest cost. Verify how large your model is in S3 and pick the instance type with the appropriate amount of memory.
```
predictor = ip_insights.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(f"Endpoint name: {predictor.endpoint}")
```
### Data Serialization/Deserialization
We can pass data in a variety of formats to our inference endpoint. In this example, we will pass CSV-formatted data. Other available formats are JSON-formatted and JSON Lines-formatted. We make use of the SageMaker Python SDK utilities `csv_serializer` and `json_deserializer` when configuring the inference endpoint.
```
from sagemaker.predictor import csv_serializer, json_deserializer
predictor.serializer = csv_serializer
predictor.deserializer = json_deserializer
```
Now that the predictor is configured, it is as easy as passing in a matrix of inference data.
We can take a few samples from the simulated dataset above, so we can see what the output looks like.
```
inference_data = [(data[0], data[1]) for data in train_df[:5].values]
predictor.predict(
inference_data, initial_args={"ContentType": "text/csv", "Accept": "application/json"}
)
```
By default, the predictor outputs only the `dot_product` between the learned embeddings of the IP address and the online resource (in this case, the user ID). The dot product summarizes the compatibility between the IP address and online resource: the larger the value, the more likely the algorithm thinks the IP address is to be used by the user. This compatibility score is sufficient for most applications, as we can define a threshold for what we consider an anomalous score.
However, more advanced users may want to inspect the learned embeddings and use them in further applications. We can configure the predictor to provide the learned embeddings by adding the `verbose=True` parameter to the Accept header. You should see that each 'prediction' object contains three keys: `ip_embedding`, `entity_embedding`, and `dot_product`.
```
predictor.predict(
inference_data,
initial_args={"ContentType": "text/csv", "Accept": "application/json; verbose=True"},
)
```
## Compute Anomaly Scores
----
The `dot_product` output of the model provides a good measure of how compatible an IP address and online resource are. However, its range is unbounded, so to label an event as anomalous we need to define a threshold: when we score an event (below, we use the negative of the `dot_product`), a score above the threshold flags the behavior as anomalous. Picking a threshold is more of an art, and a good threshold depends on the specifics of your problem and dataset.
In the following section, we show how to pick a simple threshold by comparing the score distributions between known normal and malicious traffic:
1. We construct a test set of 'Normal' traffic;
2. Inject 'Malicious' traffic into the dataset;
3. Plot the distribution of dot_product scores for the model on 'Normal' traffic and the 'Malicious' traffic;
4. Select a threshold value which separates the normal distribution from the malicious traffic. This value is based on your false-positive tolerance.
### 1. Construct 'Normal' Traffic Dataset
We previously [created a test set](#3.-Create-training-and-test-dataset) from our simulated Apache access logs dataset. We use this test dataset as the 'Normal' traffic in the test case.
```
test_df.head()
```
### 2. Inject Malicious Traffic
If we had a dataset with enough real malicious activity, we would use that to determine a good threshold. Those are hard to come by. So instead, we simulate malicious web traffic that mimics a realistic attack scenario.
We take a set of user accounts from the test set and randomly generate IP addresses. The users should not have used these IP addresses during training. This simulates an attacker logging in to a user account without knowledge of their IP history.
```
import numpy as np
from generate_data import draw_ip
def score_ip_insights(predictor, df):
def get_score(result):
"""Return the negative to the dot product of the predictions from the model."""
return [-prediction["dot_product"] for prediction in result["predictions"]]
df = df[["user", "ip_address"]]
result = predictor.predict(df.values)
return get_score(result)
def create_test_case(train_df, test_df, num_samples, attack_freq):
"""Creates a test case from provided train and test data frames.
This generates test case for accounts that are both in training and testing data sets.
:param train_df: (panda.DataFrame with columns ['user', 'ip_address']) training DataFrame
:param test_df: (panda.DataFrame with columns ['user', 'ip_address']) testing DataFrame
:param num_samples: (int) number of test samples to use
:param attack_freq: (float) the ratio of negative_samples:positive_samples to generate for test case
:return: DataFrame with both good and bad traffic, with labels
"""
# Get all possible accounts. The IP Insights model can only make predictions on users it has seen in training
# Therefore, filter the test dataset for unseen accounts, as their results will not mean anything.
valid_accounts = set(train_df["user"])
valid_test_df = test_df[test_df["user"].isin(valid_accounts)]
good_traffic = valid_test_df.sample(num_samples, replace=False)
good_traffic = good_traffic[["user", "ip_address"]]
good_traffic["label"] = 0
# Generate malicious traffic
num_bad_traffic = int(num_samples * attack_freq)
bad_traffic_accounts = np.random.choice(list(valid_accounts), size=num_bad_traffic, replace=True)
bad_traffic_ips = [draw_ip() for i in range(num_bad_traffic)]
bad_traffic = pd.DataFrame({"user": bad_traffic_accounts, "ip_address": bad_traffic_ips})
bad_traffic["label"] = 1
# All traffic labels are: 0 for good traffic; 1 for bad traffic.
all_traffic = pd.concat([good_traffic, bad_traffic])  # DataFrame.append was removed in pandas 2.0
return all_traffic
NUM_SAMPLES = 100000
test_case = create_test_case(train_df, test_df, num_samples=NUM_SAMPLES, attack_freq=1)
test_case.head()
test_case_scores = score_ip_insights(predictor, test_case)
```
### 3. Plot Distribution
Now, we plot the distribution of scores. Looking at this distribution will inform us on where we can set a good threshold, based on our risk tolerance.
```
%matplotlib inline
import matplotlib.pyplot as plt
n, x = np.histogram(test_case_scores[:NUM_SAMPLES], bins=100, density=True)
plt.plot(x[1:], n)
n, x = np.histogram(test_case_scores[NUM_SAMPLES:], bins=100, density=True)
plt.plot(x[1:], n)
plt.legend(["Normal", "Random IP"])
plt.xlabel("IP Insights Score")
plt.ylabel("Frequency")
plt.figure()
```
### 4. Selecting a Good Threshold
As we see in the figure above, there is a clear separation between normal traffic and random traffic.
We could select a threshold depending on the application.
- If we were working with low-impact decisions, such as whether to ask for another factor of authentication during login, we could use a `threshold = 0.0`. This would catch more true positives, at the cost of more false positives.
- If our decision system were more sensitive to false positives, we could choose a larger threshold, such as `threshold = 10.0`. That way, if we were sending the flagged cases to manual investigation, we would have higher confidence that the activity was suspicious.
```
threshold = 0.0
flagged_cases = test_case[np.array(test_case_scores) > threshold]
num_flagged_cases = len(flagged_cases)
num_true_positives = len(flagged_cases[flagged_cases["label"] == 1])
num_false_positives = len(flagged_cases[flagged_cases["label"] == 0])
num_all_positives = len(test_case.loc[test_case["label"] == 1])
print(f"When threshold is set to: {threshold}")
print(f"Total of {num_flagged_cases} flagged cases")
print(f"Total of {num_true_positives} flagged cases are true positives")
print(f"Precision: {num_true_positives / float(num_flagged_cases)}")
print(f"Recall (true positive rate): {num_true_positives / float(num_all_positives)}")
```
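Precision and recall at a single threshold tell only part of the story; the overall separation between the two score distributions can be summarized with the Area Under the ROC Curve (AUC). A dependency-free sketch, assuming score and label sequences shaped like `test_case_scores` and `test_case["label"]` above:

```python
def auc_from_scores(scores, labels):
    """AUC via the rank statistic: the probability that a randomly chosen
    positive example scores higher than a randomly chosen negative one."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy check: perfectly separated scores give an AUC of 1.0.
print(auc_from_scores([5.0, 4.0, -1.0, -2.0], [1, 1, 0, 0]))  # 1.0
```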
### SageMaker Automatic Model Tuning
#### Validation Dataset
Previously, we separated our dataset into a training and a test set to validate the performance of a single IP Insights model. However, when we do model tuning, we train many IP Insights models in parallel. If we were to use the same test dataset to select the best model, we would bias our selection: we would not know whether we picked the best model in general, or just the best model for that particular dataset.
Therefore, we need to split our test set into a validation dataset and a test dataset. The validation dataset is used for model selection; once we pick the model with the best performance, we evaluate the winner on the test set just as before.
#### Validation Metrics
For SageMaker Automatic Model Tuning to work, we need an objective metric which determines the performance of the model we want to optimize. Because SageMaker IP Insights is an unsupervised algorithm, we do not have a clearly defined performance metric (such as the percentage of fraudulent events discovered).
We allow the user to provide a validation set of sample data (same format as the training data above) through the `validation` channel. We then fix the negative sampling strategy to `random_negative_sampling_rate=1` and `shuffled_negative_sampling_rate=0` and generate a validation dataset by assigning the corresponding labels to the real and simulated data. We then calculate the model's `discriminator_auc` metric: we take the model's predicted labels and the 'true' simulated labels and compute the Area Under the ROC Curve (AUC) of the model's performance.
We set up the `HyperParameterTuner` to maximize the `discriminator_auc` on the validation dataset. We also need to set the search space for the hyperparameters.
```
test_df["timestamp"].describe()
```
The test set we constructed above spans 3 days. We reserve the first day as the validation set and the subsequent two days for the test set.
```
time_partition = (
datetime(2018, 11, 13, tzinfo=pytz.FixedOffset(0))
if num_time_zones > 1
else datetime(2018, 11, 13)
)
validation_df = test_df[test_df["timestamp"] < time_partition]
test_df = test_df[test_df["timestamp"] >= time_partition]
valid_data = validation_df.to_csv(index=False, header=False, columns=["user", "ip_address"])
```
We then upload the validation data to S3 and specify it as the validation channel.
```
# Upload data to S3 key
validation_data_file = "valid.csv"
key = os.path.join(prefix, "validation", validation_data_file)
boto3.resource("s3").Bucket(bucket).Object(key).put(Body=valid_data)
s3_valid_data = f"s3://{bucket}/{key}"
print(f"Validation data has been uploaded to: {s3_valid_data}")
# Configure SageMaker IP Insights Input Channels
input_data = {"train": s3_train_data, "validation": s3_valid_data}
from sagemaker.tuner import HyperparameterTuner, IntegerParameter
# Configure HyperparameterTuner
ip_insights_tuner = HyperparameterTuner(
estimator=ip_insights, # previously-configured Estimator object
objective_metric_name="validation:discriminator_auc",
hyperparameter_ranges={"vector_dim": IntegerParameter(64, 1024)},
max_jobs=4,
max_parallel_jobs=2,
)
# Start hyperparameter tuning job
ip_insights_tuner.fit(input_data, include_cls_metadata=False)
# Wait for all the jobs to finish
ip_insights_tuner.wait()
# Visualize training job results
ip_insights_tuner.analytics().dataframe()
# Deploy best model
tuned_predictor = ip_insights_tuner.deploy(
initial_instance_count=1,
instance_type="ml.m4.xlarge",
serializer=csv_serializer,
deserializer=json_deserializer,
)
# Make a prediction against the SageMaker endpoint
tuned_predictor.predict(
inference_data, initial_args={"ContentType": "text/csv", "Accept": "application/json"}
)
```
We should have the best performing model from the training job! Now we can determine thresholds and make predictions just like we did with the inference endpoint [above](#Inference).
### Batch Transform
Suppose we want to score all of the login events at the end of the day and aggregate flagged cases for investigators to look at in the morning. If we store the daily login events in S3, we can use IP Insights with Batch Transform to run inference and store the IP Insights scores back in S3 for further analysis.
Below, we take the training job from before and evaluate it on the validation data we put in S3.
```
transformer = ip_insights.transformer(instance_count=1, instance_type="ml.m4.xlarge")
transformer.transform(s3_valid_data, content_type="text/csv", split_type="Line")
# Wait for Transform Job to finish
transformer.wait()
print(f"Batch Transform output is at: {transformer.output_path}")
```
### Stop and Delete the Endpoint
If you are done with this model, you should delete the endpoints before closing the notebook. Otherwise, you will continue to pay for them while they are running.
```
ip_insights_tuner.delete_endpoint()
sagemaker.Session().delete_endpoint(predictor.endpoint)
```
# [ALGO1 : Introduction à l'algorithmique](https://perso.crans.org/besson/teach/info1_algo1_2019/)
- [Course page](https://perso.crans.org/besson/teach/info1_algo1_2019/): https://perso.crans.org/besson/teach/info1_algo1_2019/
- Magistère d'Informatique de Rennes - ENS Rennes - academic year 2019/2020
- Instructors:
  + Lectures: [Lilian Besson](https://perso.crans.org/besson/)
  + Tutorials: [Raphaël Truffet](http://perso.eleves.ens-rennes.fr/people/Raphael.Truffet/)
- References:
  + [Open Data Structures](http://opendatastructures.org/ods-python.pdf)
# Lecture 6 (Cours Magistral 6)
- This lecture covers greedy algorithms.
- This notebook is concise compared to the previous ones.
## Making change
- See https://en.wikipedia.org/wiki/Change-making_problem or https://fr.wikipedia.org/wiki/Probl%C3%A8me_du_rendu_de_monnaie
```
def binary_coin_change(x, R):
"""Coin change
:param x: table of non negative values
:param R: target value
:returns bool: True if there is a non negative linear combination
of x that has value R
:complexity: O(n*R)
"""
if int(R) != R: # we work with 1/100
R = int(R * 100)
x = [int(xi * 100) for xi in x]
b = [False] * (R + 1)
b[0] = True
for xi in x:
for s in range(xi, R + 1):
b[s] |= b[s - xi]
return b[R]
def constructive_coin_change(values_of_coins, sum_to_find):
"""Greedy coin change.
:param values_of_coins: table of positive coin values
:param sum_to_find: target value
:returns: (number_of_coins, values_of_coins), where number_of_coins[i] counts how many
coins of values_of_coins[i] (sorted in decreasing order) the greedy method uses
:raises ValueError: if the greedy method cannot reach sum_to_find
:complexity: O(n log n) for n coin values
"""
with_cents = False
if int(sum_to_find) != sum_to_find: # we work with 1/100
with_cents = True
sum_to_find = int(sum_to_find * 100)
values_of_coins = [int(pi * 100) for pi in values_of_coins]
n = len(values_of_coins)
number_of_coins = [0] * n
values_of_coins = sorted(values_of_coins, reverse=True)
current_sum = sum_to_find
for i, pi in enumerate(values_of_coins):
assert pi > 0, "Error: a coin with value zero."
if pi > current_sum:
continue # coin is too large, we continue
how_much_pi, rest = divmod(current_sum, pi) # x // y, x % y
number_of_coins[i] = how_much_pi
print("For current sum = {}, coin = {}, was used {} times, now sum = {}.".format(current_sum, pi, how_much_pi, rest))
current_sum = rest
if current_sum != 0:
raise ValueError("Could not write {} in the coin system {} with greedy method.".format(sum_to_find, values_of_coins))
if with_cents:
values_of_coins = [round(pi / 100, 2) for pi in values_of_coins]
return number_of_coins, values_of_coins
```
With the euro notes and coins:
```
billets = [500, 200, 100, 50, 20, 10, 5]
pieces = [2, 1, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01]
euros = billets + pieces
binary_coin_change(euros, 16.12)
constructive_coin_change(euros, 16.12)
billets = [500, 200, 100, 50, 20, 10, 5]
binary_coin_change(billets, 16)
constructive_coin_change(billets, 16)
```
With another coin system:
```
billets = [19, 13, 7]
pieces = [3, 2]
weird = billets + pieces
if binary_coin_change(weird, 47):
constructive_coin_change(weird, 47)
if binary_coin_change(weird, 49):
constructive_coin_change(weird, 49)
if binary_coin_change(weird, 50):
constructive_coin_change(weird, 50)
```
This greedy method does not work for every coin system!
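A classic counterexample: with the (hypothetical) coin system {1, 3, 4} and target 6, the greedy method uses three coins (4 + 1 + 1) while the optimum needs only two (3 + 3). A self-contained sketch comparing greedy with a dynamic-programming optimum:

```python
def greedy_count(coins, amount):
    """Number of coins used by the greedy method, or None if it gets stuck."""
    count = 0
    for c in sorted(coins, reverse=True):
        used, amount = divmod(amount, c)
        count += used
    return count if amount == 0 else None

def optimal_count(coins, amount):
    """Minimal number of coins, by dynamic programming over sub-amounts."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for s in range(1, amount + 1):
        for c in coins:
            if c <= s and best[s - c] + 1 < best[s]:
                best[s] = best[s - c] + 1
    return best[amount] if best[amount] < INF else None

print(greedy_count([1, 3, 4], 6))   # 3 (uses 4 + 1 + 1)
print(optimal_count([1, 3, 4], 6))  # 2 (uses 3 + 3)
```

For canonical systems like the euro, both functions agree; that is exactly what makes the greedy method safe there.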
---
## The "Union-Find" structure
### Naive version
```
class UnionFind:
"""Maintains a partition of {0, ..., n-1}
"""
def __init__(self, n):
self.up_bound = list(range(n))
def find(self, x_index):
"""
:returns: identifier of part containing x_index
:complexity: O(n) worst case, O(log n) in amortized cost.
"""
if self.up_bound[x_index] == x_index:
return x_index
self.up_bound[x_index] = self.find(self.up_bound[x_index])
return self.up_bound[x_index]
def union(self, x_index, y_index):
"""
Merges part that contain x and part containing y
:returns: False if x_index, y_index are already in same part
:complexity: O(n) worst case, O(log n) in amortized cost.
"""
repr_x = self.find(x_index)
repr_y = self.find(y_index)
if repr_x == repr_y: # already in the same component
return False
self.up_bound[repr_x] = repr_y
return True
```
For example, with $S = \{0,1,2,3,4\}$ and the following unions:
```
S = [0,1,2,3,4]
U = UnionFind(len(S))
U.up_bound
U.union(0, 2)
U.up_bound
U.up_bound
U.union(2, 3)
U.up_bound
for i in S:
U.find(i)
```
This represents the partition $\{ \{0,2,3\}, \{1\}, \{4\}\}$.
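The parts can be recovered from the `up_bound` array by grouping each element under its representative — a small self-contained sketch (with a local, iterative `find`):

```python
def partition_from_parents(up_bound):
    """Group {0, ..., n-1} by the representative each element points to."""
    def find(x):
        while up_bound[x] != x:
            x = up_bound[x]
        return x
    parts = {}
    for x in range(len(up_bound)):
        parts.setdefault(find(x), set()).add(x)
    return list(parts.values())

# State reached after union(0, 2) then union(2, 3) in the example above.
print(partition_from_parents([2, 1, 3, 3, 4]))  # [{0, 2, 3}, {1}, {4}]
```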
### With path compression
```
class UnionFind_CompressedPaths:
"""Maintains a partition of {0, ..., n-1}
"""
def __init__(self, n):
self.up_bound = list(range(n))
self.rank = [0] * n
def find(self, x_index):
"""
:returns: identifier of part containing x_index
:complexity: O(inverse_ackermann(n))
"""
if self.up_bound[x_index] == x_index:
return x_index
self.up_bound[x_index] = self.find(self.up_bound[x_index])
return self.up_bound[x_index]
def union(self, x_index, y_index):
"""
Merges part that contain x and part containing y
:returns: False if x_index, y_index are already in same part
:complexity: O(inverse_ackermann(n))
"""
repr_x = self.find(x_index)
repr_y = self.find(y_index)
if repr_x == repr_y: # already in the same component
return False
if self.rank[repr_x] == self.rank[repr_y]:
self.rank[repr_x] += 1
self.up_bound[repr_y] = repr_x
elif self.rank[repr_x] > self.rank[repr_y]:
self.up_bound[repr_y] = repr_x
else:
self.up_bound[repr_x] = repr_y
return True
```
For example, with $S = \{0,1,2,3,4\}$ and the following unions:
```
S = [0,1,2,3,4]
U = UnionFind_CompressedPaths(len(S))
U.up_bound
U.union(0, 2)
U.up_bound
U.up_bound
U.union(2, 3)
U.up_bound
for i in S:
U.find(i)
```
This represents the partition $\{ \{0,2,3\}, \{1\}, \{4\}\}$.
---
## Kruskal's algorithm
We use one of the Union-Find implementations above; the rest of the code is very simple.
```
def kruskal(graph, weight):
"""Minimum spanning tree by Kruskal
:param graph: undirected graph in listlist or listdict format
:param weight: in matrix format or same listdict graph
:returns: list of edges of the tree
:complexity: ``O(|E|log|E|)``
"""
# a UnionFind with n singletons { {0}, {1}, ..., {n-1} }
u_f = UnionFind(len(graph))
edges = [ ]
for u, _ in enumerate(graph):
for v in graph[u]:
# we add the edge (u, v) with weight w(u,v)
edges.append((weight[u][v], u, v))
edges.sort() # sort the edge in increasing order!
min_span_tree = [ ]
for w_idx, u_idx, v_idx in edges: # O(|E|)
if u_f.union(u_idx, v_idx):
# u and v were not in the same connected component
min_span_tree.append((u_idx, v_idx))
# we add the edge (u, v) in the tree, now they are in the same connected component
return min_span_tree
```
---
## Prim's algorithm
### Min-priority queue
We can use the `heappush` and `heappop` operations from the `heapq` module.
Or our home-made heap implementation, which provides an `update` operation to efficiently update the priority of an element.
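If `heap_operations.OurHeap` is not at hand, the `update` operation can be emulated on top of `heapq` with lazy deletion — a minimal sketch (the class name and methods are illustrative, not the actual `OurHeap` API):

```python
import heapq

class LazyHeap:
    """Min-priority queue whose update() marks the old entry as stale
    instead of removing it from the underlying heap."""
    def __init__(self):
        self.heap = []
        self.removed = set()

    def push(self, item):
        heapq.heappush(self.heap, item)

    def update(self, old, new):
        self.removed.add(old)           # lazily invalidate the old entry
        heapq.heappush(self.heap, new)

    def pop(self):
        while True:
            item = heapq.heappop(self.heap)
            if item in self.removed:
                self.removed.discard(item)  # skip stale entries
            else:
                return item

    def __bool__(self):
        return any(item not in self.removed for item in self.heap)

h = LazyHeap()
h.push((3, "a"))
h.push((1, "b"))
h.update((3, "a"), (0, "a"))  # lower the priority of "a"
print(h.pop())  # (0, 'a')
print(h.pop())  # (1, 'b')
```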
```
from heapq import heappop, heappush
from heap_operations import OurHeap
```
### Prim
```
def prim(graph, weight, source=0):
"""Minimum spanning tree by Prim
- param graph: undirected connected graph, in listlist or listdict format
- param weight: in matrix format or same listdict graph
- assumes: weights are non-negative
- param source: source vertex
- returns: list of edges of the minimum spanning tree
- complexity: O(|V| + |E| log |E|)
"""
n = len(graph)
assert all(weight[u][v] >= 0 for u in range(n) for v in graph[u])
prec = [None] * n
cost = [float('inf')] * n
cost[source] = 0
# the difference with Dijkstra is that the heap starts with all the nodes!
heap = OurHeap([])
is_in_the_heap = [False for u in range(n)]
for u in range(n):
heap.push((cost[u], u))
is_in_the_heap[u] = True
while heap:
dist_node, node = heap.pop() # Closest node from source
is_in_the_heap[node] = False
# and there is no color white/gray/black
# the node is always visited!
for neighbor in graph[node]:
if is_in_the_heap[neighbor] and cost[neighbor] >= weight[node][neighbor]:
old_cost = cost[neighbor]
cost[neighbor] = weight[node][neighbor]
prec[neighbor] = node
heap.update((old_cost, neighbor), (cost[neighbor], neighbor))
# now we need to construct the min_spanning_tree
edges = [ ]
for u in range(n):
if u != prec[u]:
edges.append((u, prec[u]))
return edges # cost, prec
```
---
## Illustrations
```
import random
import math
def dist(a, b):
"""
distance between point a and point b
"""
return math.sqrt(sum([(a[i] - b[i]) * (a[i] - b[i]) for i in range(len(a))]))
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (10, 7)
mpl.rcParams['figure.dpi'] = 120
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(context="notebook", style="whitegrid", palette="hls", font="sans-serif", font_scale=1.1)
N = 50
points = [[random.random() * 5, random.random() * 5] for _ in range(N)]
weight = [[dist(points[i], points[j]) for j in range(N)]
for i in range(N)]
graph = [[j for j in range(N) if i != j] for i in range(N)]
min_span_tree_kruskal = kruskal(graph, weight)
min_span_tree_prim = prim(graph, weight)
plt.figure()
for i in range(N):
    for j in range(i + 1, N):  # plot each edge exactly once
        xu, yu = points[i]
        xv, yv = points[j]
        _ = plt.plot([xu, xv], [yu, yv], 'o-')
plt.title("The whole graph")
plt.show()
plt.figure()
val = 0
for u_idx, v_idx in min_span_tree_kruskal:
    val += weight[u_idx][v_idx]
    xu, yu = points[u_idx]
    xv, yv = points[v_idx]
    _ = plt.plot([xu, xv], [yu, yv], 'o-')
print(val)
plt.title("Minimum spanning tree with Kruskal, cost {}".format(round(val, 2)))
plt.show()
plt.figure()
val = 0
for u_idx, v_idx in min_span_tree_prim:
    val += weight[u_idx][v_idx]
    xu, yu = points[u_idx]
    xv, yv = points[v_idx]
    _ = plt.plot([xu, xv], [yu, yv], 'o-')
print(val)
plt.title("Minimum spanning tree with Prim, cost {}".format(round(val, 2)))
plt.show()
```
## Other topics
We will write more later!
## Conclusion
That's all for today!
---
```
import pandas as pd
import numpy as np
```
## Load data from csv file
```
names = ['CRIM','ZN','INDUS','CHAS','NOX','RM','AGE','DIS','RAD','TAX','PTRATIO','B','LSTAT','PRICE']
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data',
header=None, names=names , delim_whitespace = True, na_values='?')
"""
Attribute Information:
1. CRIM per capita crime rate by town
2. ZN proportion of residential land zoned for lots over
25,000 sq.ft.
3. INDUS proportion of non-retail business acres per town
4. CHAS Charles River dummy variable (= 1 if tract bounds
river; 0 otherwise)
5. NOX nitric oxides concentration (parts per 10 million)
6. RM average number of rooms per dwelling
7. AGE proportion of owner-occupied units built prior to 1940
8. DIS weighted distances to five Boston employment centres
9. RAD index of accessibility to radial highways
10. TAX full-value property-tax rate per $10,000
11. PTRATIO pupil-teacher ratio by town
12. B 1000(Bk - 0.63)^2 where Bk is the proportion of blocks by town
13. LSTAT % lower status of the population
14. MEDV Median value of owner-occupied homes in $1000's
"""
print ('df is an object of ', type(df))
print ('\n')
print(df.head(5))
print(df.shape)
```
### Store values in the pandas dataframe as numpy arrays
- we want to use the average number of rooms to predict the housing price
- we need to extract the data from df and convert them to numpy arrays
```
y = df['PRICE'].values
x = df['RM'].values
print ('both x and y are now objects of', type(x))
```
### Plot the housing price against the average number of rooms
```
import matplotlib.pyplot as plt
plt.plot(x,y,'o')
plt.xlabel('Average Number of Rooms')
plt.ylabel('Price')
plt.grid()
```
# Guess a line to fit the data
```
w1 = 9
w0 = -30
xplt = np.linspace(3,9,100)
yplt = w1 * xplt + w0
plt.plot(x,y,'o') # Plot the data points
plt.plot(xplt,yplt,'-',linewidth=3) # Plot the line
plt.xlabel('Average number of rooms in a region')
plt.ylabel('Price')
plt.grid()
```
## Calculate the Mean Squared Error (MSE) and Mean Absolute Error (MAE) to determine goodness of fit
### Reminder :
Given :
- a dataset : $(x_i, y_i)$, $i = 1, 2, 3, ..., N$
- a model : $\hat{y} = w_1x + w_0$
We can compute the following two error functions :
- Mean Squared Error: $\displaystyle MSE = \frac{1}{N}\sum_{i=1}^N (y_i - \hat{y}_i)^2$
- Mean Absolute Error: $\displaystyle MAE = \frac{1}{N}\sum_{i=1}^N |y_i - \hat{y}_i|$
```
## To-do
```
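The to-do above can be filled in with `numpy`. The sketch below uses synthetic data standing in for the notebook's `x`, `y`, `w1`, and `w0` (so the cell runs on its own); in the notebook, the variables from the earlier cells would be used instead:

```python
import numpy as np

# Hypothetical stand-ins for the notebook's x, y, w1, w0
rng = np.random.default_rng(0)
x = rng.uniform(3, 9, size=100)
y = 9 * x - 30 + rng.normal(0, 2, size=100)
w1, w0 = 9, -30

y_hat = w1 * x + w0                # model predictions
mse = np.mean((y - y_hat) ** 2)    # Mean Squared Error
mae = np.mean(np.abs(y - y_hat))   # Mean Absolute Error
print(f"MSE = {mse:.3f}, MAE = {mae:.3f}")
```

Note that MSE penalizes large residuals much more heavily than MAE, which is why the two can rank candidate lines differently.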
---
```
from copy import deepcopy
import json
import pandas as pd
DATA_DIR = 'data'
# Define template payloads
CS_TEMPLATE = {
'resourceType': 'CodeSystem',
'status': 'draft',
'experimental': False,
'hierarchyMeaning': 'is-a',
'compositional': False,
'content': 'fragment',
'concept': []
}
```
# 1. PCGC
## 1.1 Phenotype
### 1.1.1 HP
```
# Copy template
cs_hp = deepcopy(CS_TEMPLATE)
# Set metadata
cs_hp['id'] = 'hp'
cs_hp['url'] = 'http://purl.obolibrary.org/obo/hp.owl'
cs_hp['name'] = 'http://purl.obolibrary.org/obo/hp.owl'
cs_hp['title'] = 'Human Phenotype Ontology'
# Read in phenotype codes
file_path = f'{DATA_DIR}/pcgc_ph_codes.tsv'
ph_codes = pd.read_csv(file_path, sep='\t')
# Populate concept
for i, row in ph_codes.iterrows():
if row.hpo_id_phenotype == 'No Match':
continue
cs_hp['concept'].append({
'code': row.hpo_id_phenotype,
'display': row.source_text_phenotype
})
cs_hp['count'] = len(cs_hp['concept'])
# Output to JSON
with open('CodeSystem-hp.json', 'w') as f:
json.dump(cs_hp, f, indent=2)
```
## 1.2 Diagnosis
```
# Read in phenotype codes
file_path = f'{DATA_DIR}/pcgc_dg_codes.tsv'
dg_codes = pd.read_csv(file_path, sep='\t')
```
### 1.2.1 MONDO
```
# Copy template
cs_mondo = deepcopy(CS_TEMPLATE)
# Set metadata
cs_mondo['id'] = 'mondo'
cs_mondo['url'] = 'http://purl.obolibrary.org/obo/mondo.owl'
cs_mondo['name'] = 'http://purl.obolibrary.org/obo/mondo.owl'
cs_mondo['title'] = 'Mondo Disease Ontology'
# Populate concept
for i, row in dg_codes[[
'source_text_diagnosis',
'mondo_id_diagnosis'
]].iterrows():
if row.mondo_id_diagnosis == 'No Match':
continue
cs_mondo['concept'].append({
'code': row.mondo_id_diagnosis,
'display': row.source_text_diagnosis
})
cs_mondo['count'] = len(cs_mondo['concept'])
# Output to JSON
with open('CodeSystem-mondo.json', 'w') as f:
json.dump(cs_mondo, f, indent=2)
```
### 1.2.2 NCIt
```
# Copy template
cs_ncit = deepcopy(CS_TEMPLATE)
# Set metadata
cs_ncit['id'] = 'ncit'
cs_ncit['url'] = 'http://purl.obolibrary.org/obo/ncit.owl'
cs_ncit['name'] = 'http://purl.obolibrary.org/obo/ncit.owl'
cs_ncit['title'] = 'NCI Thesaurus'
# Populate concept
for i, row in dg_codes[[
'source_text_diagnosis',
'ncit_id_diagnosis'
]].iterrows():
if row.ncit_id_diagnosis == 'No Match':
continue
cs_ncit['concept'].append({
'code': row.ncit_id_diagnosis,
'display': row.source_text_diagnosis
})
cs_ncit['count'] = len(cs_ncit['concept'])
# Output to JSON
with open('CodeSystem-ncit.json', 'w') as f:
json.dump(cs_ncit, f, indent=2)
```
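The HP, MONDO, and NCIt cells above repeat the same copy-template / set-metadata / populate-concept pattern. One way to factor it into a helper — a sketch with a hypothetical `build_code_system` function and example concept pairs, not part of the original notebook:

```python
from copy import deepcopy

# Same template shape as defined at the top of this notebook
CS_TEMPLATE = {
    'resourceType': 'CodeSystem',
    'status': 'draft',
    'experimental': False,
    'hierarchyMeaning': 'is-a',
    'compositional': False,
    'content': 'fragment',
    'concept': []
}

def build_code_system(cs_id, url, title, pairs):
    """Build a CodeSystem fragment from (code, display) pairs,
    skipping 'No Match' rows and setting the concept count."""
    cs = deepcopy(CS_TEMPLATE)
    cs['id'] = cs_id
    cs['url'] = url
    cs['name'] = url
    cs['title'] = title
    cs['concept'] = [{'code': code, 'display': display}
                     for code, display in pairs
                     if code != 'No Match']
    cs['count'] = len(cs['concept'])
    return cs

# hypothetical example pair, mirroring the TSV-driven cells above
cs_hp = build_code_system('hp', 'http://purl.obolibrary.org/obo/hp.owl',
                          'Human Phenotype Ontology',
                          [('HP:0001627', 'Abnormal heart morphology'),
                           ('No Match', 'unmapped source term')])
```

Each of the three cells then reduces to one call plus the `json.dump` of its result.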
## 1.3 Vital Status
### 1.3.1 SNOMED CT
```
# Copy template
cs_sct = deepcopy(CS_TEMPLATE)
# Set metadata
cs_sct['id'] = 'sct'
cs_sct['url'] = 'http://snomed.info/sct'
cs_sct['name'] = 'http://snomed.info/sct'
cs_sct['title'] = 'SNOMED CT'
cs_sct['concept'] = cs_sct['concept'] + [
{
'code': '438949009',
'display': 'Alive'
},
{
'code': '419099009',
'display': 'Dead'
}
]
cs_sct['count'] = len(cs_sct['concept'])
# Output to JSON
with open('CodeSystem-sct.json', 'w') as f:
json.dump(cs_sct, f, indent=2)
```
# 2. Synthea
## 2.1 SNOMED CT
```
with open(f'{DATA_DIR}/sct.json') as f:
concept_sct = json.load(f)
cs_sct['concept'] += concept_sct
cs_sct['count'] = len(cs_sct['concept'])
# Output to JSON
with open('CodeSystem-sct.json', 'w') as f:
json.dump(cs_sct, f, indent=2)
```
## 2.2 LOINC
```
# Copy template
cs_loinc = deepcopy(CS_TEMPLATE)
# Set metadata
cs_loinc['id'] = 'loinc'
cs_loinc['url'] = 'http://loinc.org'
cs_loinc['name'] = 'http://loinc.org'
cs_loinc['title'] = 'LOINC'
with open(f'{DATA_DIR}/loinc.json') as f:
concept_loinc = json.load(f)
cs_loinc['concept'] += concept_loinc
cs_loinc['count'] = len(cs_loinc['concept'])
# Output to JSON
with open('CodeSystem-loinc.json', 'w') as f:
json.dump(cs_loinc, f, indent=2)
```
---
## Caroline's raw material planning
<img align='right' src='https://drive.google.com/uc?export=view&id=1FYTs46ptGHrOaUMEi5BzePH9Gl3YM_2C' width=200>
As we know, BIM produces logic and memory chips using copper, silicon, germanium and plastic.
Each chip has the following consumption of materials:
| chip | copper | silicon | germanium | plastic |
|:-------|-------:|--------:|----------:|--------:|
|Logic | 0.4 | 1 | | 1 |
|Memory | 0.2 | | 1 | 1 |
BIM hired Caroline to manage the acquisition and the inventory of these raw materials.
Caroline conducted a data analysis which led to the following prediction of monthly demands for the chips:
| chip | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|:-------|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|
|Logic | 88 | 125 | 260 | 217 | 238 | 286 | 248 | 238 | 265 | 293 | 259 | 244 |
|Memory | 47 | 62 | 81 | 65 | 95 | 118 | 86 | 89 | 82 | 82 | 84 | 66 |
As you recall, BIM has the following stock at the moment:
|copper|silicon|germanium|plastic|
|-----:|------:|--------:|------:|
| 480| 1000 | 1500| 1750 |
BIM would like to have at least the following stock at the end of the year:
|copper|silicon|germanium|plastic|
|-----:|------:|--------:|------:|
| 200| 500 | 500| 1000 |
Each product can be acquired at each month, but the unit prices vary as follows:
| product | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|:---------|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|
|copper | 1 | 1 | 1 | 2 | 2 | 3 | 3 | 2 | 2 | 1 | 1 | 2 |
|silicon | 4 | 3 | 3 | 3 | 5 | 5 | 6 | 5 | 4 | 3 | 3 | 5 |
|germanium | 5 | 5 | 5 | 3 | 3 | 3 | 3 | 2 | 3 | 4 | 5 | 6 |
|plastic | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
The inventory is limited by a capacity of a total of 9000 units per month, regardless of the composition of products in stock.
The holding costs of the inventory are 0.05 per unit per month regardless of the product.
Caroline cannot spend more than 5000 per month on acquisition.
Note that Caroline aims at minimizing the acquisition and holding costs of the materials while meeting the required quantities for production.
The production is made to order, meaning that no inventory of chips is kept.
Please help Caroline to model the material planning and solve it with the data above.
```
import sys
if 'google.colab' in sys.modules:
import shutil
if not shutil.which('pyomo'):
!pip install -q pyomo
assert(shutil.which('pyomo'))
# cbc
!apt-get install -y -qq coinor-cbc
```
To keep this notebook self-contained, we embed the data as strings below; the alternative is to upload and read a file.
```
demand_data = '''chip,Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec
Logic,88,125,260,217,238,286,248,238,265,293,259,244
Memory,47,62,81,65,95,118,86,89,82,82,84,66'''
from io import StringIO
import pandas as pd
demand_chips = pd.read_csv( StringIO(demand_data), index_col='chip' )
demand_chips
price_data = '''product,Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec
copper,1,1,1,2,2,3,3,2,2,1,1,2
silicon,4,3,3,3,5,5,6,5,4,3,3,5
germanium,5,5,5,3,3,3,3,2,3,4,5,6
plastic,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1'''
price = pd.read_csv( StringIO(price_data), index_col='product' )
price
```
# A possible resolution
In the book we will need some $\LaTeX$ model. I start drafting it here.
Define the variables $x_{pt} \geq 0$ as the amount of product $p$ acquired in period $t$. *Note* that the code calls these variables `buy`. It's still debatable whether they should be `x` in the code! I don't like 'words' as variable names in the $\LaTeX$ models.
Let $s_{pt} \geq 0$ be the amount of product $p$ left in stock at the end of period $t$. Note that this value is an expression of the $x$ variables but we define additional variables to ease modelling.
If $\pi_{pt}$ is the unit price of product $p$ in time $t$ and $h_{pt}$ the unit holding costs (which happen to be constant) we can express the objective as:
\begin{align*}
\min & \sum_{p\in P}\sum_{t \in T}\pi_{pt}x_{pt} + \sum_{p\in P}\sum_{t \in T}h_{pt}s_{pt}
\end{align*}
The constraints are easy to express as well.
This is the budget constraint, if $\beta$ denotes the monthly acquisition budget:
$$
\sum_{p\in P} \pi_{pt}x_{pt} \leq \beta \quad \forall t \in T
$$
Below the storage limit $\ell$.
$$
\sum_{p\in P} s_{pt} \leq \ell \quad \forall t \in T
$$
The constraints below define $s_{pt}$ by balancing the acquired amounts with the previous inventory and the demand $\delta$. Note that $t-1$ is defined as the initial stock when $t$ is the first period. This can be obtained with additional variables $s$ made equal to those values or with a rule that specializes, as in the code.
$$
x_{pt} + s_{p,t-1} = \delta_{pt} + s_{pt} \quad \forall p \in P, t \in T
$$
Finally, to meet the required end quantities, we just need to write:
$$
s_{pt} \geq \Omega_p \quad \forall p \in P
$$
where $t$ is December and $\Omega$ are the desired end inventories.
## A simple dataframe with the consumptions
```
use = dict()
use['Logic'] = { 'silicon' : 1, 'plastic' : 1, 'copper' : 4 }
use['Memory'] = { 'germanium' : 1, 'plastic' : 1, 'copper' : 2 }
use = pd.DataFrame.from_dict( use ).fillna(0).astype( int )
use
```
## A simple matrix multiplication
```
demand = use.dot( demand_chips )
demand
import pyomo.environ as pyo
m = pyo.ConcreteModel()
```
# Add the relevant data to the model
```
m.Time = demand.columns
m.Product = demand.index
m.Demand = demand
m.UnitPrice = price
m.HoldingCost = .05
m.StockLimit = 9000
m.Budget = 5000
m.existing = {'silicon' : 1000, 'germanium': 1500, 'plastic': 1750, 'copper' : 4800 }
m.desired = {'silicon' : 500, 'germanium': 500, 'plastic': 1000, 'copper' : 2000 }
```
# Some care to deal with the `time` index
```
m.first = m.Time[0]
m.last = m.Time[-1]
m.prev = { j : i for i,j in zip(m.Time,m.Time[1:]) }
```
# Variables for the decision (buy) and consequence (stock)
```
m.buy = pyo.Var( m.Product, m.Time, within=pyo.NonNegativeReals )
m.stock = pyo.Var( m.Product, m.Time, within=pyo.NonNegativeReals )
```
# The constraints that balance acquisition with inventory and demand
```
def BalanceRule( m, p, t ):
if t == m.first:
return m.existing[p] + m.buy[p,t] == m.Demand.loc[p,t] + m.stock[p,t]
else:
return m.buy[p,t] + m.stock[p,m.prev[t]] == m.Demand.loc[p,t] + m.stock[p,t]
m.balance = pyo.Constraint( m.Product, m.Time, rule = BalanceRule )
```
# The remaining constraints
Note that these rules are so simple (one-liners) that it is better to just define them 'on the spot' as anonymous (`lambda`) functions.
## Ensure the desired inventory at the end of the horizon
```
m.finish = pyo.Constraint( m.Product, rule = lambda m, p : m.stock[p,m.last] >= m.desired[p] )
```
## Ensure that the inventory fits the capacity
```
m.inventory = pyo.Constraint( m.Time, rule = lambda m, t : sum( m.stock[p,t] for p in m.Product ) <= m.StockLimit )
```
## Ensure that the acquisition fits the budget
```
m.budget = pyo.Constraint( m.Time, rule = lambda m, t : sum( m.UnitPrice.loc[p,t]*m.buy[p,t] for p in m.Product ) <= m.Budget )
m.obj = pyo.Objective( expr = sum( m.UnitPrice.loc[p,t]*m.buy[p,t] for p in m.Product for t in m.Time )
+ sum( m.HoldingCost*m.stock[p,t] for p in m.Product for t in m.Time )
, sense = pyo.minimize )
if 'google.colab' in sys.modules:
cbc_path = '/usr/bin/cbc'
else:
cbc_path = r'D:\joaquimg\Dropbox\Python\solvers\cbc master\bin\cbc.exe' # change accordingly...
pyo.SolverFactory( 'cbc', executable=cbc_path ).solve(m)
def ShowDouble( X, I,J ):
return pd.DataFrame.from_records( [ [ X[i,j].value for j in J ] for i in I ], index=I, columns=J )
ShowDouble( m.buy, m.Product, m.Time )
ShowDouble( m.stock, m.Product, m.Time )
ShowDouble( m.stock, m.Product, m.Time ).T.plot(drawstyle='steps-mid',grid=True, figsize=(20,4))
```
# Notes
* The budget constraint is not binding.
* With the given budget the solution remains integer.
* Lowering the budget to 2000 forces acquiring fractional quantities.
* Lower values of the budget end up making the problem infeasible.
---
## AutoGraph: examples of simple algorithms
This notebook shows how you can use AutoGraph to compile simple algorithms and run them in TensorFlow.
It requires the nightly build of TensorFlow, which is installed below.
```
!pip install -U -q tf-nightly-2.0-preview
import tensorflow as tf
tf = tf.compat.v2
tf.enable_v2_behavior()
```
### Fibonacci numbers
https://en.wikipedia.org/wiki/Fibonacci_number
```
@tf.function
def fib(n):
f1 = 0
f2 = 1
for i in tf.range(n):
tmp = f2
f2 = f2 + f1
f1 = tmp
tf.print(i, ': ', f2)
return f2
_ = fib(tf.constant(10))
```
#### Generated code
```
print(tf.autograph.to_code(fib.python_function))
```
### Fizz Buzz
https://en.wikipedia.org/wiki/Fizz_buzz
```
import tensorflow as tf
@tf.function(experimental_autograph_options=tf.autograph.experimental.Feature.EQUALITY_OPERATORS)
def fizzbuzz(i, n):
while i < n:
msg = ''
if i % 3 == 0:
msg += 'Fizz'
if i % 5 == 0:
msg += 'Buzz'
if msg == '':
msg = tf.as_string(i)
tf.print(msg)
i += 1
return i
_ = fizzbuzz(tf.constant(10), tf.constant(16))
```
#### Generated code
```
print(tf.autograph.to_code(fizzbuzz.python_function))
```
### Conway's Game of Life
https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
#### Testing boilerplate
```
NUM_STEPS = 1
```
#### Game of Life for AutoGraph
Note: the code may take a while to run.
```
#@test {"skip": true}
NUM_STEPS = 75
```
Note: This code uses a non-vectorized algorithm, which is quite slow. For 75 steps, it will take a few minutes to run.
```
import time
import traceback
import sys
from matplotlib import pyplot as plt
from matplotlib import animation as anim
import numpy as np
from IPython import display
@tf.autograph.experimental.do_not_convert
def render(boards):
fig = plt.figure()
ims = []
for b in boards:
im = plt.imshow(b, interpolation='none')
im.axes.get_xaxis().set_visible(False)
im.axes.get_yaxis().set_visible(False)
ims.append([im])
try:
ani = anim.ArtistAnimation(
fig, ims, interval=100, blit=True, repeat_delay=5000)
plt.close()
display.display(display.HTML(ani.to_html5_video()))
except RuntimeError:
print('Could not render animation:')
traceback.print_exc()
return 1
return 0
def gol_episode(board):
new_board = tf.TensorArray(tf.int32, 0, dynamic_size=True)
for i in tf.range(len(board)):
for j in tf.range(len(board[i])):
num_neighbors = tf.reduce_sum(
board[tf.maximum(i-1, 0):tf.minimum(i+2, len(board)),
tf.maximum(j-1, 0):tf.minimum(j+2, len(board[i]))]
) - board[i][j]
if num_neighbors == 2:
new_cell = board[i][j]
elif num_neighbors == 3:
new_cell = 1
else:
new_cell = 0
new_board.append(new_cell)
final_board = new_board.stack()
final_board = tf.reshape(final_board, board.shape)
return final_board
@tf.function(experimental_autograph_options=(
tf.autograph.experimental.Feature.EQUALITY_OPERATORS,
tf.autograph.experimental.Feature.BUILTIN_FUNCTIONS,
tf.autograph.experimental.Feature.LISTS,
))
def gol(initial_board):
board = initial_board
boards = tf.TensorArray(tf.int32, size=0, dynamic_size=True)
i = 0
for i in tf.range(NUM_STEPS):
board = gol_episode(board)
boards.append(board)
boards = boards.stack()
tf.py_function(render, (boards,), (tf.int64,))
return i
# Gosper glider gun
# Adapted from http://www.cplusplus.com/forum/lounge/75168/
_ = 0
initial_board = tf.constant((
( _,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,1,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,1,_,1,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,_,1,1,_,_,_,_,_,_,1,1,_,_,_,_,_,_,_,_,_,_,_,_,1,1,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,1,_,_,_,1,_,_,_,_,1,1,_,_,_,_,_,_,_,_,_,_,_,_,1,1,_ ),
( _,1,1,_,_,_,_,_,_,_,_,1,_,_,_,_,_,1,_,_,_,1,1,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,1,1,_,_,_,_,_,_,_,_,1,_,_,_,1,_,1,1,_,_,_,_,1,_,1,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,1,_,_,_,_,_,1,_,_,_,_,_,_,_,1,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,1,_,_,_,1,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,_,1,1,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_ ),
( _,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_,_ ),
))
initial_board = tf.pad(initial_board, ((0, 10), (0, 5)))
_ = gol(initial_board)
```
#### Generated code
```
print(tf.autograph.to_code(gol.python_function))
```
---
# T81-558: Applications of Deep Neural Networks
**Module 7: Generative Adversarial Networks**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 7 Material
* Part 7.1: Introduction to GANS for Image and Data Generation [[Video]](https://www.youtube.com/watch?v=0QnCH6tlZgc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_1_gan_intro.ipynb)
* Part 7.2: Implementing a GAN in Keras [[Video]](https://www.youtube.com/watch?v=T-MCludVNn4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_2_Keras_gan.ipynb)
* Part 7.3: Face Generation with StyleGAN and Python [[Video]](https://www.youtube.com/watch?v=Wwwyr7cOBlU&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_3_style_gan.ipynb)
* **Part 7.4: GANS for Semi-Supervised Learning in Keras** [[Video]](https://www.youtube.com/watch?v=ZPewmEu7644&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_4_gan_semi_supervised.ipynb)
* Part 7.5: An Overview of GAN Research [[Video]](https://www.youtube.com/watch?v=cvCvZKvlvq4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_5_gan_research.ipynb)
# Part 7.4: GANS for Semi-Supervised Training in Keras
GANs can also be used to implement semi-supervised learning/training. Normally, GANs implement unsupervised training: there are no y-values (expected outcomes) provided in the dataset. The y-values are usually called labels. For the face-generating GANs, there is typically no y-value, only images. This is unsupervised training. Supervised training occurs when we are training a model to predict labels that are provided alongside the data.

The following paper describes the application of GANs to semi-supervised training.
* [Odena, A. (2016). Semi-supervised learning with generative adversarial networks. *arXiv preprint* arXiv:1606.01583.](https://arxiv.org/abs/1606.01583)
As you can see, supervised learning is where all data have labels. Supervised learning attempts to learn the labels from the training data to predict these labels for new data. Un-supervised learning has no labels and usually simply clusters the data or in the case of a GAN, learns to produce new data that resembles the training data. Semi-supervised training has a small number of labels for mostly unlabeled data. Semi-supervised learning is usually similar to supervised learning in that the goal is ultimately to predict labels for new data.
Traditionally, unlabeled data would simply be discarded if the overall goal was to create a supervised model. However, the unlabeled data is not without value. Semi-supervised training attempts to use this unlabeled data to help learn additional insights about what labels we do have. There are limits, however. Even semi-supervised training cannot learn entirely new labels that were not in the training set. This would include new classes for classification or learning to predict values outside of the range of the y-values.
Semi-supervised GANs can perform either classification or regression. Previously, we made use of the generator and discarded the discriminator. We simply wanted to create new photo-realistic faces, so we just needed the generator. Semi-supervised learning flips this, as we now discard the generator and make use of the discriminator as our final model.
### Semi-Supervised Classification Training
The following diagram shows how to apply GANs for semi-supervised classification training.

Semi-supervised classification training is laid out exactly the same as a regular GAN. The only difference is that the discriminator is not a simple true/false classifier, as was the case for image GANs that simply classified whether a generated image was real or fake. The additional classes are also added. Later in this module I will provide a link to an example of [The Street View House Numbers (SVHN) Dataset](http://ufldl.stanford.edu/housenumbers/). This dataset contains house numbers, as seen in the following image.

Perhaps not all of the digits are labeled. The GAN is set up to classify a digit as real or fake, just as we did with the faces. However, we also expand upon the real digits to include the classes 0-9. The GAN discriminator classifies among the 0-9 digits and also fake digits. A semi-supervised GAN classifier always classifies to the number of classes plus one; the additional class indicates a fake sample.
### Semi-Supervised Regression Training
The following diagram shows how to apply GANs for semi-supervised regression training.

Neural networks can perform both classification and regression simultaneously, it is simply a matter of how the output neurons are mapped. A hybrid classification-regression neural network simply maps groups of output neurons to be each of the groups of classes to be predicted, along with individual neurons to perform any regression predictions needed.
A regression semi-supervised GAN is one such hybrid. The discriminator has two output neurons. The first output neuron performs the requested regression prediction. The second predicts the probability that the input was fake.
### Application of Semi-Supervised Regression
An example of using Keras for Semi-Supervised classification is provided here.
* [Semi-supervised learning with Generative Adversarial Networks (GANs)](https://towardsdatascience.com/semi-supervised-learning-with-gans-9f3cb128c5e)
* [Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks](https://arxiv.org/abs/1511.06434)
* [The Street View House Numbers (SVHN) Dataset](http://ufldl.stanford.edu/housenumbers/)
---
#### Copyright 2017 Google LLC.
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Creating and Manipulating Tensors
**Learning Objectives:**
* Initialize and assign TensorFlow `Variable`s
* Create and manipulate tensors
* Refresh your memory about addition and multiplication in linear algebra (consult an introduction to matrix [addition](https://en.wikipedia.org/wiki/Matrix_addition) and [multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication) if these topics are new to you)
* Familiarize yourself with basic TensorFlow math and array operations
```
from __future__ import print_function
import tensorflow as tf
try:
tf.contrib.eager.enable_eager_execution()
print("TF imported with eager execution!")
except ValueError:
print("TF already imported with eager execution!")
```
## Vector Addition
You can perform many typical mathematical operations on tensors ([TF API](https://www.tensorflow.org/api_guides/python/math_ops)). The code below creates the following vectors (1-D tensors), all having exactly six elements:
* A `primes` vector containing prime numbers.
* A `ones` vector containing all `1` values.
* A vector created by performing element-wise addition over the first two vectors.
* A vector created by doubling the elements in the `primes` vector.
```
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
print("primes:", primes)
ones = tf.ones([6], dtype=tf.int32)
print("ones:", ones)
just_beyond_primes = tf.add(primes, ones)
print("just_beyond_primes:", just_beyond_primes)
twos = tf.constant([2, 2, 2, 2, 2, 2], dtype=tf.int32)
primes_doubled = primes * twos
print("primes_doubled:", primes_doubled)
```
Printing a tensor returns not only its value, but also its shape (discussed in the next section) and the type of value stored in the tensor. Calling the `numpy` method of a tensor returns the value of the tensor as a numpy array:
```
some_matrix = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)
print(some_matrix)
print("\nvalue of some_matrix is:\n", some_matrix.numpy())
```
### Tensor Shapes
Shapes are used to characterize the size and number of dimensions of a tensor. The shape of a tensor is expressed as a `list`, with the `i`th element representing the size along dimension `i`. The length of the list then indicates the rank of the tensor (i.e., the number of dimensions).
For more information, see the [TensorFlow documentation](https://www.tensorflow.org/programmers_guide/tensors#shape).
A few basic examples:
```
# A scalar (0-D tensor).
scalar = tf.zeros([])
# A vector with 3 elements.
vector = tf.zeros([3])
# A matrix with 2 rows and 3 columns.
matrix = tf.zeros([2, 3])
print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.numpy())
print('vector has shape', vector.get_shape(), 'and value:\n', vector.numpy())
print('matrix has shape', matrix.get_shape(), 'and value:\n', matrix.numpy())
```
### Broadcasting
In mathematics, you can only perform element-wise operations (e.g. *add* and *equals*) on tensors of the same shape. In TensorFlow, however, you may perform operations on tensors that would traditionally have been incompatible. TensorFlow supports **broadcasting** (a concept borrowed from numpy), where the smaller array in an element-wise operation is enlarged to have the same shape as the larger array. For example, via broadcasting:
* If an operand requires a size `[6]` tensor, a size `[1]` or a size `[]` tensor can serve as an operand.
* If an operation requires a size `[4, 6]` tensor, any of the following sizes can serve as an operand:
* `[1, 6]`
* `[6]`
* `[]`
* If an operation requires a size `[3, 5, 6]` tensor, any of the following sizes can serve as an operand:
* `[1, 5, 6]`
* `[3, 1, 6]`
* `[3, 5, 1]`
* `[1, 1, 1]`
* `[5, 6]`
* `[1, 6]`
* `[6]`
* `[1]`
* `[]`
**NOTE:** When a tensor is broadcast, its entries are conceptually **copied**. (They are not actually copied for performance reasons. Broadcasting was invented as a performance optimization.)
The full broadcasting ruleset is well described in the easy-to-read [numpy broadcasting documentation](http://docs.scipy.org/doc/numpy-1.10.1/user/basics.broadcasting.html).
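Since TensorFlow's broadcasting rules follow numpy's, they are easy to verify there directly; a quick sanity check of the `[4, 6]` cases listed above:

```python
import numpy as np

a = np.zeros((4, 6))
# Each of these shapes broadcasts against (4, 6):
for shape in [(1, 6), (6,), ()]:
    b = np.ones(shape)
    assert (a + b).shape == (4, 6)

# An incompatible shape raises instead:
try:
    a + np.ones((3, 6))
except ValueError:
    print("shape (3, 6) does not broadcast against (4, 6)")
```

The same checks pass with `tf.zeros`/`tf.ones` in place of the numpy calls.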
The following code performs the same tensor arithmetic as before, but instead uses scalar values (instead of vectors containing all `1`s or all `2`s) and broadcasting.
```
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
print("primes:", primes)
one = tf.constant(1, dtype=tf.int32)
print("one:", one)
just_beyond_primes = tf.add(primes, one)
print("just_beyond_primes:", just_beyond_primes)
two = tf.constant(2, dtype=tf.int32)
primes_doubled = primes * two
print("primes_doubled:", primes_doubled)
```
### Exercise #1: Arithmetic over vectors.
Perform vector arithmetic to create a "just_under_primes_squared" vector, where the `i`th element is equal to the `i`th element in `primes` squared, minus 1. For example, the second element would be equal to `3 * 3 - 1 = 8`.
Make use of either the `tf.multiply` or `tf.pow` ops to square the value of each element in the `primes` vector.
```
help(tf.pow)
# Write your code for Task 1 here.
squares = tf.pow(primes, 2)
ans = tf.subtract(squares,one)
print(ans)
```
### Solution
Click below for a solution.
```
# Task: Square each element in the primes vector, then subtract 1.
def solution(primes):
    primes_squared = tf.multiply(primes, primes)
    neg_one = tf.constant(-1, dtype=tf.int32)
    just_under_primes_squared = tf.add(primes_squared, neg_one)
    return just_under_primes_squared

def alternative_solution(primes):
    primes_squared = tf.pow(primes, 2)
    one = tf.constant(1, dtype=tf.int32)
    just_under_primes_squared = tf.subtract(primes_squared, one)
    return just_under_primes_squared

primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
just_under_primes_squared = solution(primes)
print("just_under_primes_squared:", just_under_primes_squared)
```
## Matrix Multiplication
In linear algebra, when multiplying two matrices, the number of *columns* of the first matrix must
equal the number of *rows* in the second matrix.
- It is **_valid_** to multiply a `3x4` matrix by a `4x2` matrix. This will result in a `3x2` matrix.
- It is **_invalid_** to multiply a `4x2` matrix by a `3x4` matrix.
```
# A 3x4 matrix (2-d tensor).
x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]],
dtype=tf.int32)
# A 4x2 matrix (2-d tensor).
y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32)
# Multiply `x` by `y`; result is 3x2 matrix.
matrix_multiply_result = tf.matmul(x, y)
print(matrix_multiply_result)
```
## Tensor Reshaping
With tensor addition and matrix multiplication each imposing constraints
on operands, TensorFlow programmers must frequently reshape tensors.
You can use the `tf.reshape` method to reshape a tensor.
For example, you can reshape an 8x2 tensor into a 2x8 tensor or a 4x4 tensor:
```
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant(
[[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]],
dtype=tf.int32)
reshaped_2x8_matrix = tf.reshape(matrix, [2, 8])
reshaped_4x4_matrix = tf.reshape(matrix, [4, 4])
print("Original matrix (8x2):")
print(matrix.numpy())
print("Reshaped matrix (2x8):")
print(reshaped_2x8_matrix.numpy())
print("Reshaped matrix (4x4):")
print(reshaped_4x4_matrix.numpy())
```
You can also use `tf.reshape` to change the number of dimensions (the "rank") of the tensor.
For example, you could reshape that 8x2 tensor into a 3-D 2x2x4 tensor or a 1-D 16-element tensor.
```
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant(
[[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]],
dtype=tf.int32)
reshaped_2x2x4_tensor = tf.reshape(matrix, [2, 2, 4])
one_dimensional_vector = tf.reshape(matrix, [16])
print("Original matrix (8x2):")
print(matrix.numpy())
print("Reshaped 3-D tensor (2x2x4):")
print(reshaped_2x2x4_tensor.numpy())
print("1-D vector:")
print(one_dimensional_vector.numpy())
```
### Exercise #2: Reshape two tensors in order to multiply them.
The following two vectors are incompatible for matrix multiplication:
* `a = tf.constant([5, 3, 2, 7, 1, 4])`
* `b = tf.constant([4, 6, 3])`
Reshape these vectors into compatible operands for matrix multiplication.
Then, invoke a matrix multiplication operation on the reshaped tensors.
```
# Write your code for Task 2 here.
```
### Solution
Click below for a solution.
Remember, when multiplying two matrices, the number of *columns* of the first matrix must equal the number of *rows* in the second matrix.
One possible solution is to reshape `a` into a 2x3 matrix and reshape `b` into a 3x1 matrix, resulting in a 2x1 matrix after multiplication:
```
# Task: Reshape two tensors in order to multiply them
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
reshaped_a = tf.reshape(a, [2, 3])
reshaped_b = tf.reshape(b, [3, 1])
c = tf.matmul(reshaped_a, reshaped_b)
print("reshaped_a (2x3):")
print(reshaped_a.numpy())
print("reshaped_b (3x1):")
print(reshaped_b.numpy())
print("reshaped_a x reshaped_b (2x1):")
print(c.numpy())
```
An alternative solution would be to reshape `a` into a 6x1 matrix and `b` into a 1x3 matrix, resulting in a 6x3 matrix after multiplication.
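That alternative can be sketched the same way; NumPy is used below for illustration, since its `reshape` and matrix-multiply semantics match TensorFlow's for this case:

```python
import numpy as np

a = np.array([5, 3, 2, 7, 1, 4])
b = np.array([4, 6, 3])

# 6x1 times 1x3 -> 6x3 (an outer product of the two vectors)
c = a.reshape(6, 1) @ b.reshape(1, 3)
print(c.shape)  # (6, 3)
```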
## Variables, Initialization and Assignment
So far, all the operations we performed were on static values (`tf.constant`); calling `numpy()` always returned the same result. TensorFlow allows you to define `Variable` objects, whose values can be changed.
When creating a variable, you can set an initial value explicitly, or you can use an initializer (like a distribution):
```
# Create a shape-[1] vector variable with the initial value 3.
v = tf.contrib.eager.Variable([3])
# Create a vector variable of shape [1, 4], with random initial values,
# sampled from a normal distribution with mean 1 and standard deviation 0.35.
w = tf.contrib.eager.Variable(tf.random_normal([1, 4], mean=1.0, stddev=0.35))
print("v:", v.numpy())
print("w:", w.numpy())
```
To change the value of a variable, use the `assign` op:
```
v = tf.contrib.eager.Variable([3])
print(v.numpy())
tf.assign(v, [7])
print(v.numpy())
v.assign([5])
print(v.numpy())
```
When assigning a new value to a variable, its shape must be equal to its previous shape:
```
v = tf.contrib.eager.Variable([[1, 2, 3], [4, 5, 6]])
print(v.numpy())
try:
    print("Assigning [7, 8, 9] to v")
    v.assign([7, 8, 9])
except ValueError as e:
    print("Exception:", e)
```
There are many more topics about variables that we didn't cover here, such as loading and storing. To learn more, see the [TensorFlow docs](https://www.tensorflow.org/programmers_guide/variables).
### Exercise #3: Simulate 10 rolls of two dice.
Create a dice simulation, which generates a `10x3` 2-D tensor in which:
* Columns `1` and `2` each hold one throw of one six-sided die (with values 1–6).
* Column `3` holds the sum of Columns `1` and `2` on the same row.
For example, the first row might have the following values:
* Column `1` holds `4`
* Column `2` holds `3`
* Column `3` holds `7`
You'll need to explore the [TensorFlow documentation](https://www.tensorflow.org/api_guides/python/array_ops) to solve this task.
```
# Write your code for Task 3 here.
```
### Solution
Click below for a solution.
We're going to place dice throws inside two separate 10x1 matrices, `die1` and `die2`. The summation of the dice rolls will be stored in `dice_sum`, then the resulting 10x3 matrix will be created by *concatenating* the three 10x1 matrices together into a single matrix.
Alternatively, we could have placed dice throws inside a single 10x2 matrix, but adding different columns of the same matrix would be more complicated. We also could have placed dice throws inside two 1-D tensors (vectors), but doing so would require transposing the result.
```
# Task: Simulate 10 throws of two dice. Store the results in a 10x3 matrix.
die1 = tf.contrib.eager.Variable(
tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32))
die2 = tf.contrib.eager.Variable(
tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32))
dice_sum = tf.add(die1, die2)
resulting_matrix = tf.concat(values=[die1, die2, dice_sum], axis=1)
print(resulting_matrix.numpy())
```
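The single-matrix alternative mentioned above can be sketched as follows (NumPy used for illustration; a TensorFlow version would use something like `tf.reduce_sum` along `axis=1` instead of the column-wise `tf.add`):

```python
import numpy as np

rng = np.random.default_rng(42)
throws = rng.integers(1, 7, size=(10, 2))        # both dice in one 10x2 matrix
sums = throws.sum(axis=1, keepdims=True)         # 10x1 column of row sums
resulting_matrix = np.concatenate([throws, sums], axis=1)
print(resulting_matrix.shape)  # (10, 3)
```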
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<a href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20In%20Practice/Course%203%20-%20NLP/Course%203%20-%20Week%202%20-%20Lesson%202.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Run this to ensure TensorFlow 2.x is used
try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass
import json
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
vocab_size = 10000
embedding_dim = 16
max_length = 100
trunc_type='post'
padding_type='post'
oov_tok = "<OOV>"
training_size = 20000
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json \
-O /tmp/sarcasm.json
with open("/tmp/sarcasm.json", 'r') as f:
    datastore = json.load(f)

sentences = []
labels = []
for item in datastore:
    sentences.append(item['headline'])
    labels.append(item['is_sarcastic'])
training_sentences = sentences[0:training_size]
testing_sentences = sentences[training_size:]
training_labels = labels[0:training_size]
testing_labels = labels[training_size:]
tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index
training_sequences = tokenizer.texts_to_sequences(training_sentences)
training_padded = pad_sequences(training_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
testing_sequences = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
# Need this block to get it to work with TensorFlow 2.x
import numpy as np
training_padded = np.array(training_padded)
training_labels = np.array(training_labels)
testing_padded = np.array(testing_padded)
testing_labels = np.array(testing_labels)
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(24, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
num_epochs = 30
history = model.fit(training_padded, training_labels, epochs=num_epochs, validation_data=(testing_padded, testing_labels), verbose=2)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
    plt.plot(history.history[string])
    plt.plot(history.history['val_'+string])
    plt.xlabel("Epochs")
    plt.ylabel(string)
    plt.legend([string, 'val_'+string])
    plt.show()
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_sentence(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
print(decode_sentence(training_padded[0]))
print(training_sentences[2])
print(labels[2])
e = model.layers[0]
weights = e.get_weights()[0]
print(weights.shape) # shape: (vocab_size, embedding_dim)
import io
out_v = io.open('vecs.tsv', 'w', encoding='utf-8')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for word_num in range(1, vocab_size):
    word = reverse_word_index[word_num]
    embeddings = weights[word_num]
    out_m.write(word + "\n")
    out_v.write('\t'.join([str(x) for x in embeddings]) + "\n")
out_v.close()
out_m.close()
try:
    from google.colab import files
except ImportError:
    pass
else:
    files.download('vecs.tsv')
    files.download('meta.tsv')
sentence = ["granny starting to fear spiders in the garden might be real", "game of thrones season finale showing this sunday night"]
sequences = tokenizer.texts_to_sequences(sentence)
padded = pad_sequences(sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
print(model.predict(padded))
```
```
!env | grep -i python
! which python
! which pip
! pip install catboost
from catboost import CatBoostClassifier
```
A fork of `catboost-go-5.0-subset.ipynb` where we exclude run ID and study ID from the features
```
!pip install --user catboost ipywidgets
!conda install -y python-graphviz
!jupyter nbextension enable --py widgetsnbextension
# Imports
import os
import pandas
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from catboost import CatBoostClassifier, Pool, cv
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
source_path = 'go_aggregated_runs_full_removed_duplicates.tsv'
assert os.path.exists(source_path)
import sys
print("Python version")
print (sys.version)
with open(source_path) as fd:
    df = pandas.read_csv(fd, sep='\t')
df
# Check for null values
null_vals = df.isnull().sum(axis=0)
assert len(null_vals[null_vals != 0]) == 0
# Scale integer values to 0-1 floats
scaler = MinMaxScaler()
cols = df.columns[5:]
df[cols] = pandas.DataFrame(scaler.fit_transform(df[cols]), columns=cols)
df
# Split into input and output
y = df.loc[:, 'biome']
X = df[df.columns[4:]]
X
y
# Split into train and test sets
X_train, X_validation, y_train, y_validation = train_test_split(X, y, train_size=0.75, random_state=42)
# Init the model
model = CatBoostClassifier(
custom_loss=['Accuracy'],
random_seed=42,
)
# Init the categorical feature indexes
categorical_features_indices = np.where(X.dtypes != float)[0]  # np.float was removed in NumPy 1.24
categorical_features_indices
# Train
model.fit(
X_train, y_train,
cat_features=categorical_features_indices,
eval_set=(X_validation, y_validation),
logging_level='Verbose',
plot=True,
)
predictions = model.predict(X_validation)
predictions_probs = model.predict_proba(X_validation)
matches = [x[0] == y for x, y in zip(predictions, y_validation)]
print('Correct predictions:', len([m for m in matches if m]))
print('Incorrect predictions:', len([m for m in matches if not m]))
predictions = model.predict(X)
predictions_probs = model.predict_proba(X)
matches = [x[0] == y for x, y in zip(predictions, y)]
print('Correct predictions:', len([m for m in matches if m]))
print('Incorrect predictions:', len([m for m in matches if not m]))
model.score(X_validation, y_validation)
model.score(X, y)
model.get_best_score()
model.get_best_iteration()
pool = Pool(X, y, cat_features=categorical_features_indices, feature_names=list(X.columns))
model.plot_tree(
tree_idx=0,
pool=pool
)
res = model.calc_feature_statistics(X, y, feature=2, plot=True)
from collections.abc import Iterable  # `collections.Iterable` was removed in Python 3.10

def flatten(items):
    """Yield items from any nested iterable; see Reference."""
    for x in items:
        if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):
            for sub_x in flatten(x):
                yield sub_x
        else:
            yield x
y_predict = list(flatten(model.predict(X)))
confusion_matrix_test = confusion_matrix(y, y_predict)
# Confusion matrix comes out in natural sort order of classes
poset = sorted(list(set(y)|set(y_predict)))
ilabels = ['Actual '+ x.split(':')[-1] for x in poset]
plabels = ['Predicted '+ x.split(':')[-1] for x in poset]
confusion_matrix_val = pandas.DataFrame(confusion_matrix_test, index=ilabels, columns=plabels)
confusion_matrix_val
plt.figure(figsize=(8, 4))
sns.heatmap(confusion_matrix_val, annot=True, linewidths=0.1, annot_kws={"fontsize":8}) # , xticklabels=False, yticklabels=False)
# fix for mpl bug that cuts off top/bottom of seaborn viz
b, t = plt.ylim() # discover the values for bottom and top
b += 0.5 # Add 0.5 to the bottom
t -= 0.5 # Subtract 0.5 from the top
plt.ylim(b, t) # update the ylim(bottom, top) values
plt.show() # ta-da!
```
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import os
import gc
import time
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from sklearn.metrics import roc_auc_score
from nltk.tokenize import WordPunctTokenizer
from collections import Counter
from sklearn.model_selection import train_test_split
SEED = 41
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
RAW_DATA_PATH = '../../dl_nlp/data/jigsaw_toxic/raw/'
PROCESSED_DATA_PATH = '../../dl_nlp/data/jigsaw_toxic/processed/'
SEQ_LEN = 512
```
### Helper Methods
```
def tokenize_sentences(sentences):
    tokenizer = WordPunctTokenizer()
    return [tokenizer.tokenize(sentence.lower()) for sentence in sentences]

def get_tokens(tokenized_sentences):
    return [token for tokenized_sentence in tokenized_sentences for token in tokenized_sentence]

def get_chars(tokens):
    return list(set([char for token in tokens for char in token]))

def load_sample():
    return pd.read_csv(os.path.join(PROCESSED_DATA_PATH, 'train_sample.csv'))

def load_full():
    train = pd.read_csv(os.path.join(RAW_DATA_PATH, 'train.csv'))
    test = pd.read_csv(os.path.join(RAW_DATA_PATH, 'test.csv'))
    test_labels = pd.read_csv(os.path.join(RAW_DATA_PATH, 'test_labels.csv'))
    return train, test, test_labels
# %%time
# train = load_sample()
%%time
train, _, _ = load_full()
TARGET_COLS = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
%%time
train_tokenized_comments = tokenize_sentences(train.comment_text)
# comment_len = [len(comment) for comment in train_tokenized_comments]
# pd.Series(comment_len).describe()
# fixed character set anything other than this would be considered as UNK (unknown) symbol
# unique_chars = 'abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:’"/|_#$%ˆ&*˜‘+=<>()[]{}'
# print(unique_chars)
%%time
unique_chars = list(set([char.lower() for comment in train_tokenized_comments for token in comment for char in token]))
print(len(unique_chars))
# token to index
UNK, PAD = 'UNK', 'PAD'
UNK_IX, PAD_IX = 0, 1
char_to_id = {UNK: UNK_IX,
              PAD: PAD_IX
              }
for char in unique_chars:
    char_to_id[char] = len(char_to_id)
# Note: UNK and PAD must keep their reserved indices (0 and 1) -- reassigning
# them here would silently break as_matrix(), which is called with UNK_IX=0
# and PAD_IX=1 below.
# create a batch out of sentences
def as_matrix(sequences, char_to_id, UNK_IX, PAD_IX, max_len=SEQ_LEN):
    """ Convert a list of tokenized sentences into a matrix with padding """
    matrix = np.full((len(sequences), max_len), np.int32(PAD_IX))
    for i, seq in enumerate(sequences):
        # truncate at the character level so every row fits into max_len
        row_ix = [char_to_id.get(char, UNK_IX) for word in seq for char in word][:max_len]
        matrix[i, :len(row_ix)] = row_ix
    return matrix
```
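To make the padding and UNK behavior of `as_matrix` concrete, here is a toy run with a hypothetical two-character vocabulary (a condensed, self-contained copy of the function with `max_len=8`; the vocabulary and inputs are invented for illustration):

```python
import numpy as np

UNK_IX, PAD_IX = 0, 1
char_to_id = {'h': 2, 'i': 3}  # hypothetical mini-vocab; 'o' is unknown

def as_matrix(sequences, char_to_id, UNK_IX, PAD_IX, max_len=8):
    """ Convert a list of tokenized sentences into a padded id matrix """
    matrix = np.full((len(sequences), max_len), np.int32(PAD_IX))
    for i, seq in enumerate(sequences):
        row_ix = [char_to_id.get(char, UNK_IX) for word in seq for char in word][:max_len]
        matrix[i, :len(row_ix)] = row_ix
    return matrix

print(as_matrix([['hi', 'ho']], char_to_id, UNK_IX, PAD_IX))
# 'h'->2, 'i'->3, 'h'->2, 'o'->UNK(0); the remaining positions stay PAD(1)
```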
### Split into train and test set
```
data_train, data_val = train_test_split(train, test_size=0.2, random_state=42)
data_train.index = range(len(data_train))
data_val.index = range(len(data_val))
print("Train size = ", len(data_train))
print("Validation size = ", len(data_val))
def iterate_batches(matrix, labels, batch_size, predict_mode='train'):
    indices = np.arange(len(matrix))
    if predict_mode == 'train':
        np.random.shuffle(indices)
    for start in range(0, len(matrix), batch_size):
        end = min(start + batch_size, len(matrix))
        batch_indices = indices[start: end]
        X = matrix[batch_indices]
        if predict_mode != 'train':
            yield X
        else:
            yield X, labels[batch_indices]
# matrix = as_matrix(data_train.comment_text, char_to_id, UNK_IX=UNK_IX, PAD_IX=PAD_IX)
# labels = data_train.loc[:, TARGET_COLS].values
# X, y = next(iterate_batches(matrix, labels, batch_size=2))
class Flatten(nn.Module):
    def forward(self, input):
        return input.view(input.size(0), -1)
class ConvBlock(nn.Module):
    def __init__(self, num_output_channels, num_feature_maps, upsample=False):
        super(ConvBlock, self).__init__()
        self.num_output_channels = num_output_channels
        self.num_feature_maps = num_feature_maps
        self.kernel_size = 3
        self.conv1 = nn.Conv1d(self.num_output_channels,
                               self.num_feature_maps,
                               kernel_size=self.kernel_size,
                               padding=1,
                               bias=False
                               )
        self.relu = nn.ReLU()
        self.bn = nn.BatchNorm1d(self.num_feature_maps)
        self.upsample = upsample
        self.conv1x1 = nn.Conv1d(self.num_output_channels,
                                 self.num_feature_maps,
                                 kernel_size=1,
                                 bias=False
                                 )

    def forward(self, x):
        identity = x
        out = self.conv1(x)
        out = self.bn(out)
        if self.upsample:
            identity = self.conv1x1(identity)
        # optional shortcut
        out += identity
        out = self.relu(out)
        return out
class KMaxPool(nn.Module):
    def __init__(self):
        super(KMaxPool, self).__init__()

    def forward(self, x, dim, k=8):
        index = x.topk(k, dim=dim)[1].sort(dim=dim)[0]
        return x.gather(dim, index)
class VDCNN(nn.Module):
    def __init__(self, vocab_size):
        super(VDCNN, self).__init__()
        self.hidden_dim = 16
        self.num_feature_maps = 64
        # define embedding space for characters
        self.char_embedding = nn.Embedding(vocab_size, self.hidden_dim)
        # convolutional layer of fixed kernel size
        self.conv1 = nn.Conv1d(self.hidden_dim,
                               self.num_feature_maps,
                               kernel_size=3,
                               padding=1
                               )
        # relu layer
        self.relu = nn.ReLU()
        # conv blocks
        self.convbl_1 = ConvBlock(self.num_feature_maps, 64, upsample=False)
        self.convbl_2 = ConvBlock(64, 128, upsample=True)
        # self.convbl_3 = ConvBlock(128, 256, upsample=True)
        # self.convbl_4 = ConvBlock(256, 512, upsample=True)
        # pooling layer
        self.pool = nn.MaxPool1d(kernel_size=2)
        # k-max pooling layer
        self.kmax_pool = KMaxPool()
        # flatten any layer
        self.flatten = Flatten()
        # dropout layer
        self.dropout = nn.Dropout(0.4)
        # fc: 128 channels out of convbl_2, times the k=8 values kept by k-max pooling
        self.fc = nn.Linear(128 * 8, 6)

    def forward(self, x):
        embed = self.char_embedding(x)
        # raw embedding produces (batch_size, seq_len, channels)
        # but pytorch expects (batch_size, channels, seq_len)
        embed = torch.transpose(embed, 1, 2)
        # first layer of convolutions
        out = self.conv1(embed)
        out = self.relu(out)
        ## ConvBlock followed by pooling
        # (Convolutional Block, 3, 64)
        out = self.convbl_1(out)
        out = self.pool(out)
        # (Convolutional Block, 3, 128)
        out = self.convbl_2(out)
        out = self.pool(out)
        # (Convolutional Block, 3, 256)
        # out = self.convbl_3(out)
        # out = self.pool(out)
        # (Convolutional Block, 3, 512)
        # out = self.convbl_4(out)
        # out = self.pool(out)
        # k-max pooling at the end
        out = self.kmax_pool(out, dim=2)
        # flatten
        out = self.flatten(out)
        # pass it through fully connected layer
        out = self.fc(out)
        return out
# convert input and output into torch tensors
# X = torch.cuda.LongTensor(X)
# y = torch.cuda.LongTensor(y)
# vocab_size = len(char_to_id)
# model = VDCNN(vocab_size).cuda()
# logits = model(X)
# logits.shape
```
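The `KMaxPool` layer above keeps the k largest activations along a dimension while preserving their original order (topk indices, re-sorted, then gathered). A NumPy sketch of the same idea, separate from the model code:

```python
import numpy as np

def k_max_pool(x, k):
    # indices of the k largest values along the last axis...
    idx = np.sort(np.argsort(-x, axis=-1)[..., :k], axis=-1)
    # ...sorted back into original positional order before gathering
    return np.take_along_axis(x, idx, axis=-1)

row = np.array([[3.0, 9.0, 1.0, 7.0, 5.0]])
print(k_max_pool(row, 3))  # [[9. 7. 5.]]
```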
### Training Loop
```
def do_epoch(model, criterion, data, batch_size, optimizer=None):
    epoch_loss, total_size = 0, 0
    per_label_preds = [[], [], [], [], [], []]
    per_label_true = [[], [], [], [], [], []]
    is_train = optimizer is not None
    model.train(is_train)
    data, labels = data
    batchs_count = math.ceil(data.shape[0] / batch_size)
    with torch.autograd.set_grad_enabled(is_train):
        for i, (X_batch, y_batch) in enumerate(iterate_batches(data, labels, batch_size)):
            X_batch, y_batch = torch.cuda.LongTensor(X_batch), torch.cuda.FloatTensor(y_batch)
            logits = model(X_batch)
            loss = criterion(logits, y_batch)
            if is_train:
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()
            # convert true target
            batch_target = y_batch.cpu().detach().numpy()
            logits_cpu = logits.cpu().detach().numpy()
            # per_label_preds
            for j in range(6):
                label_preds = logits_cpu[:, j]
                per_label_preds[j].extend(label_preds)
                per_label_true[j].extend(batch_target[:, j])
            # calculate log loss
            epoch_loss += loss.item()
            print('\r[{} / {}]: Loss = {:.4f}'.format(
                i, batchs_count, loss.item()), end='')
    label_auc = []
    for i in range(6):
        label_auc.append(roc_auc_score(per_label_true[i], per_label_preds[i]))
    return epoch_loss / batchs_count, np.mean(label_auc)
def fit(model, criterion, optimizer, train_data, epochs_count=1,
        batch_size=32, val_data=None, val_batch_size=None):
    if val_data is not None and val_batch_size is None:
        val_batch_size = batch_size
    for epoch in range(epochs_count):
        start_time = time.time()
        train_loss, train_auc = do_epoch(
            model, criterion, train_data, batch_size, optimizer
        )
        output_info = '\rEpoch {} / {}, Epoch Time = {:.2f}s: Train Loss = {:.4f}, Train AUC = {:.4f}'
        if val_data is not None:
            val_loss, val_auc = do_epoch(model, criterion, val_data, val_batch_size, None)
            epoch_time = time.time() - start_time
            output_info += ', Val Loss = {:.4f}, Val AUC = {:.4f}'
            print(output_info.format(epoch+1, epochs_count, epoch_time,
                                     train_loss,
                                     train_auc,
                                     val_loss,
                                     val_auc
                                     ))
        else:
            epoch_time = time.time() - start_time
            print(output_info.format(epoch+1, epochs_count, epoch_time, train_loss, train_auc))
```
### Run on full batch
```
vocab_size = len(char_to_id)
model = VDCNN(vocab_size).cuda()
criterion = nn.BCEWithLogitsLoss().cuda()
# optimizer = optim.Adam([param for param in model.parameters() if param.requires_grad], lr=0.01)
optimizer = optim.SGD([param for param in model.parameters() if param.requires_grad], lr=0.03, momentum=0.9)
X_train = as_matrix(data_train.comment_text, char_to_id, UNK_IX=UNK_IX, PAD_IX=PAD_IX)
train_labels = data_train.loc[:, TARGET_COLS].values
X_test = as_matrix(data_val.comment_text, char_to_id, UNK_IX=UNK_IX, PAD_IX=PAD_IX)
test_labels = data_val.loc[:, TARGET_COLS].values
fit(model, criterion, optimizer, train_data=(X_train, train_labels), epochs_count=7,
batch_size=512, val_data=(X_test, test_labels), val_batch_size=1024)
```
`Epoch 7 / 7, Epoch Time = 36.67s: Train Loss = 0.0696, Train AUC = 0.9440, Val Loss = 0.0702, Val AUC = 0.9403`
```
BEST RUN:
Epoch 3 / 3,
Epoch Time = 161.67s:
Train Loss = 0.0745,
Train AUC = 0.9344,
Val Loss = 0.0728,
Val AUC = 0.9401
=========================================
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
import utils
matplotlib.rcParams['figure.figsize'] = (0.89 * 12, 6)
matplotlib.rcParams['lines.linewidth'] = 10
matplotlib.rcParams['lines.markersize'] = 20
```
# The Dataset
$$y = x^3 + x^2 - 4x$$
```
x, y, X, transform, scale = utils.get_base_data()
utils.plotter(x, y)
```
# The Dataset
```
noise = utils.get_noise()
utils.plotter(x, y + noise)
```
# Machine Learning
$$
y = f(\mathbf{x}, \mathbf{w})
$$
$$
f(x, \mathbf{w}) = w_3 x^3 + w_2x^2 + w_1x + w_0
$$
$$
y = \mathbf{w} \cdot \mathbf{x}
$$
# Transforming Features
<center><img src="images/transform_features.png" style="height: 600px;"></img></center>
# Fitting Data with Scikit-Learn
<center><img src="images/sklearn.png"></img></center>
# Fitting Data with Scikit-Learn
Minimize
$$C(\mathbf{w}) = \sum_j (\mathbf{x}_j^T \mathbf{w} - y_j)^2$$
```
def mean_squared_error(X, y, fit_func):
    return ((fit_func(X).squeeze() - y.squeeze()) ** 2).mean()
```
# Fitting Data with Scikit-Learn
```
from sklearn.linear_model import LinearRegression
reg = LinearRegression(fit_intercept=False).fit(X, y)
print(reg.coef_ / scale)
print(mean_squared_error(X, y, reg.predict))
utils.plotter(x, y, fit_fn=reg.predict, transform=transform)
```
# Fitting Data with Scikit-Learn
```
reg = LinearRegression(fit_intercept=False).fit(X, y + noise)
print(reg.coef_ / scale)
print(mean_squared_error(X, y + noise, reg.predict))
utils.plotter(x, y + noise, fit_fn=reg.predict, transform=transform)
```
# Exercise 1
# Fitting Data with Numpy
<center><img src="images/numpylogoicon.svg" style="height: 400px;"></img></center>
# Linear Algebra
<center><img src="images/linear_tweet.png" style="height: 400px;"></img></center>
# Linear Algebra!
$$X\mathbf{w} = \mathbf{y}$$
<center><img src="images/row_mult.png" style="height: 500px;"></img></center>
# Linear Algebra!
<center><img src="images/row_mult.png" style="height: 200px;"></img></center>
```
(X.dot(reg.coef_.T) == reg.predict(X)).all()
```
# Fitting Data with Numpy
$$X\mathbf{w} = \mathbf{y}$$
$$\mathbf{w} = X^{-1}\mathbf{y}$$
<center><img src="images/pete-4.jpg" style="height: 400px;"></img></center>
# Fitting Data with Numpy
$$X\mathbf{w} = \mathbf{y}$$
$$X^TX\mathbf{w} = X^T\mathbf{y}$$
$$\mathbf{w} = (X^TX)^{-1}X^T\mathbf{y}$$
# Orthogonal Projections!
$$P = X(X^TX)^{-1}X^T$$
Try to show $$P^2 = P$$
and that
$$(\mathbf{y} - P\mathbf{y})^T P\mathbf{y} = 0$$
<center><img src="images/pete-5.jpg" style="height: 350px;"></img></center>
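Both identities can be verified numerically on random data (a sanity check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 4))
y = rng.standard_normal((20, 1))

P = X @ np.linalg.inv(X.T @ X) @ X.T  # projection onto the column space of X

assert np.allclose(P @ P, P)                      # idempotent: P^2 = P
assert np.allclose((y - P @ y).T @ (P @ y), 0.0)  # residual is orthogonal to Py
print("P behaves like an orthogonal projection")
```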
# Fitting Data with Numpy
$$\mathbf{w} = (X^TX)^{-1}X^T\mathbf{y}$$
```
np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y).T / scale
(np.linalg.inv(X.T @ X) @ X.T @ y).T / scale
np.linalg.pinv(X).dot(y).T / scale
```
# Fitting Data with Numpy
```
class NumpyLinearRegression(object):
    def fit(self, X, y):
        self.coef_ = np.linalg.pinv(X).dot(y)
        return self

    def predict(self, X):
        return X.dot(self.coef_)
```
# Fitting Data with Numpy
```
linalg_reg = NumpyLinearRegression().fit(X, y)
print(linalg_reg.coef_.T / scale)
print(mean_squared_error(X, y, linalg_reg.predict))
utils.plotter(x, y, fit_fn=linalg_reg.predict, transform=transform)
```
# Fitting Data with Numpy
```
linalg_reg = NumpyLinearRegression().fit(X, y + noise)
print(linalg_reg.coef_.T / scale)
print(mean_squared_error(X, y + noise, linalg_reg.predict))
utils.plotter(x, y + noise, fit_fn=linalg_reg.predict, transform=transform)
```
# Exercise 2
# Regularization
```
x_train, x_test, y_train, y_test, X_train, X_test, transform, scale = utils.get_overfitting_data()
```
# What does overfitting look like?
```
reg = LinearRegression(fit_intercept=False).fit(X_train, y_train)
print((reg.coef_ / scale))
plt.bar(np.arange(len(reg.coef_.squeeze())), reg.coef_.squeeze() / scale);
```
# What does overfitting look like?
```
mean_squared_error(X_train, y_train, reg.predict)
utils.plotter(x_train, y_train, fit_fn=reg.predict, transform=transform)
```
# What does overfitting look like?
```
mean_squared_error(X_test, y_test, reg.predict)
utils.plotter(x_test, y_test, fit_fn=reg.predict, transform=transform)
```
# Ridge Regression
## "Penalize model complexity"
$$C(\mathbf{w}) = \sum_j (\mathbf{x}_j^T \mathbf{w} - y_j)^2$$
$$C(\mathbf{w}) = \sum_j (\mathbf{x}_j^T \mathbf{w} - y_j)^2 + \alpha \sum_j w_j^2$$
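The penalized cost still has a closed-form minimizer, $\mathbf{w} = (X^TX + \alpha I)^{-1}X^T\mathbf{y}$. A NumPy sketch of that formula (illustrative only; this is not how scikit-learn's `Ridge` is implemented internally, and the data below is random):

```python
import numpy as np

def ridge_fit(X, y, alpha):
    # closed-form ridge solution: w = (X^T X + alpha * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# with alpha = 0 this reduces to ordinary least squares
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = rng.standard_normal((50, 1))
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]
assert np.allclose(ridge_fit(X, y, 0.0), w_ols)
```

Increasing `alpha` shrinks the coefficient vector toward zero, which is exactly the "penalize model complexity" effect described above.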
# Ridge Regression
```
from sklearn.linear_model import Ridge
ridge_reg = Ridge(alpha=0.02).fit(X_train, y_train)
utils.plotter(x_test, y_test, fit_fn=ridge_reg.predict, transform=transform)
```
# Ridge Regression
```
print(mean_squared_error(X_test, y_test, ridge_reg.predict))
plt.bar(np.arange(len(ridge_reg.coef_.squeeze())), ridge_reg.coef_.squeeze() / scale);
```
# Lasso Regression
```
from sklearn.linear_model import Lasso
lasso_reg = Lasso(alpha=0.005, max_iter=100000, fit_intercept=False).fit(X_train, y_train)
utils.plotter(x_test, y_test, fit_fn=lasso_reg.predict, transform=transform)
```
# Lasso Regression
```
print(mean_squared_error(X_test, y_test, lasso_reg.predict))
plt.bar(np.arange(len(lasso_reg.coef_)), lasso_reg.coef_ / scale);
```
# Exercise 3
```
#######################################################
# Script:
# trainPerf.py
# Usage:
# python trainPerf.py <input_file> <output_file>
# Description:
# Build the prediction model based on training data
# Pass 1: prediction based on hours in a week
# Authors:
# Jasmin Nakic, jnakic@salesforce.com
# Samir Pilipovic, spilipovic@salesforce.com
#######################################################
import sys
import numpy as np
from sklearn import linear_model
from sklearn.externals import joblib
# Imports required for visualization (plotly)
import plotly.graph_objs as go
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
# Script debugging flag
debugFlag = False
# Feature lists for different models
simpleCols = ["dateFrac"]
trigCols = ["dateFrac", "weekdaySin", "weekdayCos", "hourSin", "hourCos"]
hourDayCols = ["dateFrac", "isMonday", "isTuesday", "isWednesday", "isThursday", "isFriday", "isSaturday", "isSunday",
"isHour0", "isHour1", "isHour2", "isHour3", "isHour4", "isHour5", "isHour6", "isHour7",
"isHour8", "isHour9", "isHour10", "isHour11", "isHour12", "isHour13", "isHour14", "isHour15",
"isHour16", "isHour17", "isHour18", "isHour19", "isHour20", "isHour21", "isHour22", "isHour23"]
hourWeekCols = ["dateFrac"]
for d in range(0,7):
    for h in range(0,24):
        hourWeekCols.append("H_" + str(d) + "_" + str(h))
# Add columns to the existing array and populate with data
def addColumns(dest, src, colNames):
    # Initialize temporary array
    tmpArr = np.empty(src.shape[0])
    cols = 0
    # Copy column content
    for name in colNames:
        if cols == 0: # first column
            tmpArr = np.copy(src[name])
            tmpArr = np.reshape(tmpArr,(-1,1))
        else:
            tmpCol = np.copy(src[name])
            tmpCol = np.reshape(tmpCol,(-1,1))
            tmpArr = np.append(tmpArr,tmpCol,1)
        cols = cols + 1
    return np.append(dest,tmpArr,1)
#end addColumns
# Generate linear regression model
def genModel(data,colList,modelName):
# Initialize array
X = np.zeros(data.shape[0])
X = np.reshape(X,(-1,1))
# Add columns
X = addColumns(X,data,colList)
if debugFlag:
print("X 0: ", X[0:5])
Y = np.copy(data["cnt"])
if debugFlag:
print("Y 0: ", Y[0:5])
model = linear_model.LinearRegression()
print(model.fit(X, Y))
print("INTERCEPT: ", model.intercept_)
print("COEFFICIENT shape: ", model.coef_.shape)
print("COEFFICIENT values: ", model.coef_)
print("SCORE values: ", model.score(X,Y))
P = model.predict(X)
if debugFlag:
print("P 0-5: ", P[0:5])
joblib.dump(model,modelName)
return P
#end genModel
# Generate ridge regression model
def genRidgeModel(data,colList,modelName,ridgeAlpha):
# Initialize array
X = np.zeros(data.shape[0])
X = np.reshape(X,(-1,1))
# Add columns
X = addColumns(X,data,colList)
if debugFlag:
print("X 0: ", X[0:5])
Y = np.copy(data["cnt"])
if debugFlag:
print("Y 0: ", Y[0:5])
model = linear_model.Ridge(alpha=ridgeAlpha)
print(model.fit(X, Y))
print("INTERCEPT: ", model.intercept_)
print("COEFFICIENT shape: ", model.coef_.shape)
print("COEFFICIENT values: ", model.coef_)
print("SCORE values: ", model.score(X,Y))
P = model.predict(X)
if debugFlag:
print("P 0-5: ", P[0:5])
joblib.dump(model,modelName)
return P
#end genRidgeModel
# Generate lasso regression model
def genLassoModel(data,colList,modelName,lassoAlpha):
# Initialize array
X = np.zeros(data.shape[0])
X = np.reshape(X,(-1,1))
# Add columns
X = addColumns(X,data,colList)
if debugFlag:
print("X 0: ", X[0:5])
Y = np.copy(data["cnt"])
if debugFlag:
print("Y 0: ", Y[0:5])
model = linear_model.Lasso(alpha=lassoAlpha,max_iter=5000)
print(model.fit(X, Y))
print("INTERCEPT: ", model.intercept_)
print("COEFFICIENT shape: ", model.coef_.shape)
print("COEFFICIENT values: ", model.coef_)
print("SCORE values: ", model.score(X,Y))
P = model.predict(X)
if debugFlag:
print("P 0-5: ", P[0:5])
joblib.dump(model,modelName)
return P
#end genLassoModel
# Write predictions to the output file
def writeResult(output,data,p1,p2,p3,p4):
# generate result file
result = np.array(
np.empty(data.shape[0]),
dtype=[
("timeStamp","|U19"),
("dateFrac",float),
("isHoliday",int),
("isSunday",int),
("cnt",int),
("predSimple",int),
("predTrig",int),
("predHourDay",int),
("predHourWeek",int)
]
)
result["timeStamp"] = data["timeStamp"]
result["dateFrac"] = data["dateFrac"]
result["isHoliday"] = data["isHoliday"]
result["isSunday"] = data["isSunday"]
result["cnt"] = data["cnt"]
result["predSimple"] = p1
result["predTrig"] = p2
result["predHourDay"] = p3
result["predHourWeek"] = p4
if debugFlag:
print("R 0-5: ", result[0:5])
hdr = "timeStamp\tdateFrac\tisHoliday\tisSunday\tcnt\tpredSimple\tpredTrig\tpredHourDay\tpredHourWeek"
np.savetxt(output,result,fmt="%s",delimiter="\t",header=hdr,comments="")
#end writeResult
# Start
inputFileName = "train_data.txt"
outputFileName = "train_hourly.txt"
# All input columns - data types are strings, float and int
inputData = np.genfromtxt(
inputFileName,
delimiter='\t',
names=True,
dtype=("|U19","|S10",int,float,int,float,float,int,float,float,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int,
int,int,int,int,int,int,int,int,int,int
)
)
print(inputData[1:5])
# P1 = genRidgeModel(inputData,simpleCols,"modelSimple",0.1)
# P2 = genRidgeModel(inputData,trigCols,"modelTrig",0.1)
# P3 = genRidgeModel(inputData,hourDayCols,"modelHourDay",0.1)
# P4 = genRidgeModel(inputData,hourWeekCols,"modelHourWeek",0.1)
# P1 = genLassoModel(inputData,simpleCols,"modelSimple",0.4)
# P2 = genLassoModel(inputData,trigCols,"modelTrig",0.4)
# P3 = genLassoModel(inputData,hourDayCols,"modelHourDay",0.4)
# P4 = genLassoModel(inputData,hourWeekCols,"modelHourWeek",0.4)
P1 = genModel(inputData,simpleCols,"modelSimple")
P2 = genModel(inputData,trigCols,"modelTrig")
P3 = genModel(inputData,hourDayCols,"modelHourDay")
P4 = genModel(inputData,hourWeekCols,"modelHourWeek")
writeResult(outputFileName,inputData,P1,P2,P3,P4)
# Load the training data from file generated above using correct data types
results = np.genfromtxt(
outputFileName,
dtype=("|U19",float,int,int,int,int,int,int,int),
delimiter='\t',
names=True
)
# Examine training data
print("Shape:", results.shape)
print("Columns:", results.dtype.names)
print(results[1:5])
# Generate chart with predictions based on training data (using plotly)
print("Plotly version", __version__) # requires plotly version >= 1.9.0
init_notebook_mode(connected=True)
set1 = go.Bar(
x=results["dateFrac"],
y=results["cnt"],
# marker=dict(color='blue'),
name='Actual'
)
set2 = go.Bar(
x=results["dateFrac"],
y=results["predTrig"],
# marker=dict(color='crimson'),
opacity=0.6,
name='Prediction (trig)'
)
set3 = go.Bar(
x=results["dateFrac"],
y=results["predHourWeek"],
# marker=dict(color='crimson'),
opacity=0.6,
name='Prediction (hour-week)'
)
barData = [set1, set2, set3]
barLayout = go.Layout(barmode='group', title="Prediction vs. Actual")
fig = go.Figure(data=barData, layout=barLayout)
iplot(fig)
```
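The hour-of-week model above relies on 168 one-hot columns named `H_<weekday>_<hour>`. A minimal sketch of that column layout, with a hypothetical helper (not in the script) that maps a (weekday, hour) pair to its indicator column, assuming the order built in the script ("dateFrac" first, then H_0_0 through H_6_23):

```
# Rebuild the hour-of-week feature names exactly as the script does:
# one "dateFrac" column plus one indicator per (weekday, hour) pair.
hourWeekCols = ["dateFrac"]
for d in range(0, 7):
    for h in range(0, 24):
        hourWeekCols.append("H_" + str(d) + "_" + str(h))

# Hypothetical helper: index of the indicator column for a given slot.
def hour_of_week_index(weekday, hour):
    return 1 + weekday * 24 + hour   # +1 skips the leading dateFrac column

print(len(hourWeekCols))                             # 169 feature names
print(hourWeekCols[hour_of_week_index(6, 23)])       # H_6_23
```

This is why the hour-of-week model has one coefficient per slot of the week: each training row activates exactly one of the 168 indicators.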
# Rotation Transformation
We meta-learn how to rotate images so that we can accurately classify rotated images. We use MNIST.
Import relevant packages
```
from operator import mul
from itertools import cycle
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.backends.cudnn as cudnn
import torch.nn as nn
import torch.nn.functional as F
import torchvision.datasets as datasets
import torchvision.models as models
import torchvision.transforms as transforms
import tqdm
from higher.patch import make_functional
from higher.utils import get_func_params
from sklearn.metrics import accuracy_score
%matplotlib inline
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
```
Define transformations to create standard and rotated images
```
transform_basic = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
transform_rotate = transforms.Compose([
transforms.RandomRotation([30, 30]),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
```
Load the data and split the indices so that we have both standard and rotated images in the various sets. We also keep part of the training data as unrotated test images in case it is useful.
```
train_set = datasets.MNIST(
'data', train=True, transform=transform_basic, target_transform=None, download=True)
train_set_rotated = datasets.MNIST(
'data', train=True, transform=transform_rotate, target_transform=None, download=True)
train_basic_indices = range(40000)
train_test_basic_indices = range(40000, 50000)
val_rotate_indices = range(50000, 60000)
train_basic_set = torch.utils.data.Subset(train_set, train_basic_indices)
train_test_basic_set = torch.utils.data.Subset(train_set, train_test_basic_indices)
val_rotate_set = torch.utils.data.Subset(
train_set_rotated, val_rotate_indices)
test_set = datasets.MNIST(
'data', train=False, transform=transform_rotate, target_transform=None, download=True)
```
Define data loaders
```
batch_size = 128
train_basic_set_loader = torch.utils.data.DataLoader(
train_basic_set, batch_size=batch_size, shuffle=True)
train_test_basic_set_loader = torch.utils.data.DataLoader(
train_test_basic_set, batch_size=batch_size, shuffle=True)
val_rotate_set_loader = torch.utils.data.DataLoader(
val_rotate_set, batch_size=batch_size, shuffle=True)
test_set_loader = torch.utils.data.DataLoader(
test_set, batch_size=batch_size, shuffle=True)
```
Set-up the device to use
```
if torch.cuda.is_available(): # checks whether a cuda gpu is available
device = torch.cuda.current_device()
print("use GPU", device)
print("GPU ID {}".format(torch.cuda.current_device()))
else:
print("use CPU")
device = torch.device('cpu') # sets the device to be CPU
```
Define a function that rotates images by angle theta (in radians). We define it in a way that lets us differentiate with respect to theta.
```
def rot_img(x, theta, device):
rot = torch.cat([torch.cat([torch.cos(theta), -torch.sin(theta), torch.tensor([0.], device=device)]),
torch.cat([torch.sin(theta), torch.cos(theta), torch.tensor([0.], device=device)])])
grid = F.affine_grid(rot.expand([x.size()[0], 6]).view(-1, 2, 3), x.size())
x = F.grid_sample(x, grid)
return x
```
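As a sanity check on the affine matrix that `rot_img` assembles, here is the same 2x3 layout written out with plain math (a standalone sketch, independent of torch):

```
import math

def rotation_affine(theta):
    # Same layout as the `rot` tensor in rot_img:
    # [[cos(theta), -sin(theta), 0], [sin(theta), cos(theta), 0]]
    return [[math.cos(theta), -math.sin(theta), 0.0],
            [math.sin(theta),  math.cos(theta), 0.0]]

m = rotation_affine(math.pi / 6)   # 30 degrees
```

Because every entry is a smooth function of theta, gradients can flow from the sampled image back to the angle, which is what makes theta learnable.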
Define the model that we use: a simple LeNet that allows us to run fast experiments.
```
class LeNet(nn.Module):
def __init__(self):
super(LeNet, self).__init__()
self.in_channels = 1
self.input_size = 28
self.conv1 = nn.Conv2d(self.in_channels, 6, 5,
padding=2 if self.input_size == 28 else 0)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2)
x = x.view(x.size(0), -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
```
A function to test a model on the test set
```
def test_classification_net(data_loader, model, device):
'''
This function reports classification accuracy over a dataset.
'''
model.eval()
labels_list = []
predictions_list = []
with torch.no_grad():
for i, (data, label) in enumerate(data_loader):
data = data.to(device)
label = label.to(device)
logits = model(data)
softmax = F.softmax(logits, dim=1)
_, predictions = torch.max(softmax, dim=1)
labels_list.extend(label.cpu().numpy().tolist())
predictions_list.extend(predictions.cpu().numpy().tolist())
accuracy = accuracy_score(labels_list, predictions_list)
return 100 * accuracy
```
A function to test the model on the test set while doing the rotations manually with a specified angle
```
def test_classification_net_rot(data_loader, model, device, angle=0.0):
'''
This function reports classification accuracy over a dataset.
'''
model.eval()
labels_list = []
predictions_list = []
with torch.no_grad():
for i, (data, label) in enumerate(data_loader):
data = data.to(device)
if angle != 0.0:
data = rot_img(data, angle, device)
label = label.to(device)
logits = model(data)
softmax = F.softmax(logits, dim=1)
_, predictions = torch.max(softmax, dim=1)
labels_list.extend(label.cpu().numpy().tolist())
predictions_list.extend(predictions.cpu().numpy().tolist())
accuracy = accuracy_score(labels_list, predictions_list)
return 100 * accuracy
```
Define a model that performs the rotations: it has a meta-learnable parameter theta that represents the rotation angle in radians.
```
class RotTransformer(nn.Module):
def __init__(self, device):
super(RotTransformer, self).__init__()
self.theta = nn.Parameter(torch.FloatTensor([0.]))
self.device = device
# Rotation transformer network forward function
def rot(self, x):
rot = torch.cat([torch.cat([torch.cos(self.theta), -torch.sin(self.theta), torch.tensor([0.], device=self.device)]),
torch.cat([torch.sin(self.theta), torch.cos(self.theta), torch.tensor([0.], device=self.device)])])
grid = F.affine_grid(rot.expand([x.size()[0], 6]).view(-1, 2, 3), x.size())
x = F.grid_sample(x, grid)
return x
def forward(self, x):
return self.rot(x)
```
We first train a simple model on standard images to see how it performs when applied to rotated images
```
acc_rotate_list = []
acc_basic_list = []
num_repetitions = 5
for e in range(num_repetitions):
print('Repetition ' + str(e + 1))
model = LeNet().to(device=device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss().to(device=device)
num_epochs_meta = 5
with tqdm.tqdm(total=num_epochs_meta) as pbar_epochs:
for epoch in range(0, num_epochs_meta):
for i, batch in enumerate(train_basic_set_loader):
(input_, target) = batch
input_ = input_.to(device=device)
target = target.to(device=device)
logits = model(input_)
loss = criterion(logits, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
pbar_epochs.update(1)
# testing
acc_rotate = test_classification_net(test_set_loader, model, device)
acc_rotate_list.append(acc_rotate)
angle = torch.tensor([-np.pi/6], device=device)
acc_basic = test_classification_net_rot(test_set_loader, model, device, angle)
acc_basic_list.append(acc_basic)
```
Print statistics:
```
print('Accuracy on rotated test images: {:.2f} $\pm$ {:.2f}'.format(np.mean(acc_rotate_list), np.std(acc_rotate_list)))
print('Accuracy on standard position test images: {:.2f} $\pm$ {:.2f}'.format(np.mean(acc_basic_list), np.std(acc_basic_list)))
```
We see there is a large drop in accuracy when we apply the model to rotated images rather than to the same images without rotation.
Now we use EvoGrad and meta-learning to train the model on images rotated by the rotation transformer, which is learned jointly with the base model. We fix random seeds to improve reproducibility, since EvoGrad's random noise perturbations depend on the sampling of random numbers (the precise accuracies may still differ).
```
acc_rotate_list_evo_2mc = []
acc_basic_list_evo_2mc = []
angles_reps_2mc = []
# define the settings
num_repetitions = 5
torch_seeds = [1, 23, 345, 4567, 56789]
sigma = 0.001
temperature = 0.05
n_model_candidates = 2
num_epochs_meta = 5
for e in range(num_repetitions):
print('Repetition ' + str(e + 1))
torch.manual_seed(torch_seeds[e])
model = LeNet().to(device=device)
model_patched = make_functional(model)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss().to(device=device)
feature_transformer = RotTransformer(device=device).to(device=device)
meta_opt = torch.optim.Adam(feature_transformer.parameters(), lr=1e-2)
angles = []
with tqdm.tqdm(total=num_epochs_meta) as pbar_epochs:
for epoch in range(0, num_epochs_meta):
loaders = zip(train_basic_set_loader, cycle(val_rotate_set_loader))
for i, batch in enumerate(loaders):
((input_, target), (input_rot, target_rot)) = batch
input_ = input_.to(device=device)
target = target.to(device=device)
input_rot = input_rot.to(device=device)
target_rot = target_rot.to(device=device)
# base model training with images rotated using the rotation transformer
logits = model(feature_transformer(input_))
loss = criterion(logits, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# update the model parameters used for patching
model_parameter = [i.detach() for i in get_func_params(model)]
input_transformed = feature_transformer(input_)
# create multiple model copies
theta_list = [[j + sigma * torch.sign(torch.randn_like(j)) for j in model_parameter] for i in range(n_model_candidates)]
pred_list = [model_patched(input_transformed, params=theta) for theta in theta_list]
loss_list = [criterion(pred, target) for pred in pred_list]
baseline_loss = criterion(model_patched(input_transformed, params=model_parameter), target)
# calculate weights for the different model copies
weights = torch.softmax(-torch.stack(loss_list)/temperature, 0)
# merge the model copies
theta_updated = [sum(map(mul, theta, weights)) for theta in zip(*theta_list)]
pred_rot = model_patched(input_rot, params=theta_updated)
loss_rot = criterion(pred_rot, target_rot)
# update the meta-knowledge
meta_opt.zero_grad()
loss_rot.backward()
meta_opt.step()
angles.append(180 / np.pi * feature_transformer.theta.item())
pbar_epochs.update(1)
angles_reps_2mc.append(angles)
acc = test_classification_net(test_set_loader, model, device)
acc_rotate_list_evo_2mc.append(acc)
angle = torch.tensor([-np.pi/6], device=device)
acc_basic = test_classification_net_rot(test_set_loader, model, device, angle)
acc_basic_list_evo_2mc.append(acc_basic)
```
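The key EvoGrad step in the loop above is the softmax weighting of the perturbed model copies and their weighted merge. A minimal numeric sketch of just that step, using plain Python lists instead of torch tensors (function names here are illustrative, not from the notebook):

```
import math

def evograd_weights(losses, temperature):
    # Softmax over negative losses: lower-loss candidates get more weight,
    # mirroring torch.softmax(-torch.stack(loss_list)/temperature, 0).
    exps = [math.exp(-l / temperature) for l in losses]
    total = sum(exps)
    return [e / total for e in exps]

def merge_candidates(theta_list, weights):
    # Weighted sum of candidate parameter vectors, as in theta_updated.
    return [sum(w * t[i] for t, w in zip(theta_list, weights))
            for i in range(len(theta_list[0]))]

w = evograd_weights([0.2, 1.0], temperature=0.05)
theta = merge_candidates([[1.0, -1.0], [3.0, 5.0]], w)
```

With a low temperature the merge collapses almost entirely onto the lower-loss candidate; raising the temperature averages the candidates more evenly.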
Print statistics:
```
print('Accuracy on rotated test images: {:.2f} $\pm$ {:.2f}'.format(np.mean(acc_rotate_list_evo_2mc), np.std(acc_rotate_list_evo_2mc)))
print('Accuracy on standard position test images: {:.2f} $\pm$ {:.2f}'.format(np.mean(acc_basic_list_evo_2mc), np.std(acc_basic_list_evo_2mc)))
```
Show what the learned angles look like during training:
```
for angles_list in angles_reps_2mc:
plt.plot(range(len(angles_list)), angles_list, linewidth=2.0)
plt.ylabel('Learned angle', fontsize=14)
plt.xlabel('Number of iterations', fontsize=14)
plt.savefig("RotTransformerLearnedAngles.pdf", bbox_inches='tight')
plt.show()
```
Print the average final meta-learned angle:
```
final_angles_2mc = [angles_list[-1] for angles_list in angles_reps_2mc]
print("{:.2f} $\pm$ {:.2f}".format(np.mean(final_angles_2mc), np.std(final_angles_2mc)))
```
It's great to see the meta-learned angle is typically close to 30 degrees, which is the true value.
# Daily Load Profile Timeseries Clustering Evaluation
```
import pandas as pd
import numpy as np
import datetime as dt
import os
from math import ceil, log
import plotly.plotly as py
import plotly.offline as po
import plotly.graph_objs as go
import plotly.figure_factory as ff
import plotly.tools as tools
import colorlover as cl
#from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import cufflinks as cf
cf.go_offline()
import matplotlib.pyplot as plt
from matplotlib import colors
from matplotlib.colors import LinearSegmentedColormap
import evaluation.eval_clusters as ec
import evaluation.eval_cluster_plot as pc
from support import data_dir, image_dir, results_dir
eval_dir = os.path.join(data_dir,'cluster_evaluation')
experiments = ec.getExperiments()
best_exp = ['exp2_kmeans_unit_norm', 'exp4_kmeans_zero-one', 'exp5_kmeans_unit_norm', 'exp5_kmeans_zero-one',
'exp6_kmeans_unit_norm','exp7_kmeans_unit_norm','exp8_kmeans_unit_norm']
```
## Analyse Cluster Scores
### Davies-Bouldin Index
```
pc.plotClusterIndex('dbi', 'Davies-Bouldin Index', experiments, groupby='algorithm')
```
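For reference, the Davies-Bouldin index is the mean, over clusters, of each cluster's worst-case similarity ratio (s_i + s_j) / d_ij, where s is the average within-cluster distance to the centroid and d_ij the distance between centroids; lower is better. A toy 1-D sketch of the definition (pure Python, not the implementation behind `eval_cluster_plot`):

```
def dbi_1d(clusters):
    # clusters: list of clusters, each a list of 1-D points
    centroids = [sum(c) / len(c) for c in clusters]
    spreads = [sum(abs(p - m) for p in c) / len(c)
               for c, m in zip(clusters, centroids)]
    worst = []
    for i in range(len(clusters)):
        ratios = [(spreads[i] + spreads[j]) / abs(centroids[i] - centroids[j])
                  for j in range(len(clusters)) if j != i]
        worst.append(max(ratios))
    return sum(worst) / len(worst)
```

Tight, well-separated clusters give a small index; overlapping clusters push it up, which is why the charts below treat lower DBI as better.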
### Mean Index Adequacy
```
pc.plotClusterIndex('mia','Mean Index Adequacy', experiments)
```
### Silhouette Score
The best value is 1 and the worst value is -1. Values near 0 indicate overlapping clusters. Negative values generally indicate that a sample has been assigned to the wrong cluster, as a different cluster is more similar.
```
pc.plotClusterIndex('silhouette', 'Silhouette Score', experiments, groupby='experiment')
```
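The description above follows from the per-sample silhouette formula s = (b - a) / max(a, b), where a is the mean distance to the sample's own cluster and b the mean distance to the nearest other cluster. A toy 1-D sketch (not the sklearn implementation used by the plotting code):

```
def silhouette_1d(x, own, other):
    # a: mean distance from x to the other members of its own cluster
    # b: mean distance from x to the members of the nearest other cluster
    a = sum(abs(x - p) for p in own) / len(own)
    b = sum(abs(x - p) for p in other) / len(other)
    return (b - a) / max(a, b)
```

A well-placed sample scores near 1, overlapping clusters pull the score toward 0, and a negative score means the sample sits closer to the other cluster than to its own.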
### Combined Cluster Score
```
pc.plotClusterIndex('score','Ix Score for all Experiments', experiments, groupby='algorithm', ylog=True)
pc.plotClusterIndex('score','Combined Index Score', experiments, groupby='experiment')
```
## Explore Cluster Labels, Centroids and Sizes
### Select best clusters for different algorithms
```
cluster_results = ec.readResults()
selected_clusters = ec.selectClusters(cluster_results, len(cluster_results))
selected_clusters.rename(columns={'experiment':'Experiment','algorithm':'Algorithm','preprocessing':'Norm',
'SOM dimensions':'SOM dim','clusters':'Clusters','dbi':'DBI', 'mia':'MIA',
'silhouette':'Silhouette','score':'CI score','run time':'Run time',
'experiment_name':'Experiment name'}, inplace=True)
top10 = selected_clusters.round({'DBI':4, 'MIA':4, 'Silhouette':4, 'CI score': 6, 'Run time':2}).head(10).set_axis(range(1,11), inplace=False)
top10.reset_index().rename(columns={'index':'Rank'})
#Percentage experiments with CI score below 4
ci4 = selected_clusters.loc[selected_clusters['CI score']<4,'CI score'].count()/len(selected_clusters)
#Percentage experiments with CI score below 6.5
ci65 = selected_clusters.loc[selected_clusters['CI score']<6.5,'CI score'].count()/len(selected_clusters)
#Max CI score
cimax = selected_clusters['CI score'].max()
#Score difference between best and tenth best experiment
(top10.iloc[9,8] - top10.iloc[0,8])/top10.iloc[0,8]
```
### Histograms of algorithm performance
```
data = [go.Histogram(x=selected_clusters['CI score'], nbinsx = 200, histnorm='percent')]
layout = dict(title='Distribution of CI Scores across Clustering Algorithms', titlefont=dict(size=20),
xaxis = dict(title='CI score bins', titlefont=dict(size=16), tickfont=dict(size=16)),
yaxis = dict(title='Percent', titlefont=dict(size=16), tickfont=dict(size=16)),
margin=dict(t=30, l=40, b=40),
height=350, width=1000)
# Plot!
fig0 = go.Figure(data=data, layout=layout)
po.iplot(fig0)
#po.plot(fig0, filename=data_dir+'/cluster_evaluation/plots/clustering_evaluation/DistplotQuantScoresAll'+'.html')
x0 = selected_clusters[selected_clusters.Norm.isna()]['CI score']
x1 = selected_clusters[selected_clusters.Norm=='unit_norm']['CI score']
x2 = selected_clusters[selected_clusters.Norm=='demin']['CI score']
x3 = selected_clusters[selected_clusters.Norm=='zero-one']['CI score']
x4 = selected_clusters[selected_clusters.Norm=='sa_norm']['CI score']
# Group data together
hist_data = [x0, x1, x2, x3, x4]
group_labels = ['no norm', 'unit norm', 'demin', 'zero-one', 'SA norm']
# Create distplot with custom bin_size
fig = ff.create_distplot(hist_data, group_labels, histnorm='percent', bin_size=0.05,
show_curve=False, show_rug=False)
fig['layout'].update(title='Distribution of Quantitative Scores across Normalisation Algorithms', titlefont=dict(size=16),
xaxis = dict(title='CI score bins'),
yaxis = dict(title='Percent'),
margin=dict(t=30, l=30, b=30),
height=250, width=600)
# Plot!
po.iplot(fig)
po.plot(fig, filename=data_dir+'/cluster_evaluation/plots/clustering_evaluation/DistplotQuantScoresNormalisation'+'.html')
y0 = selected_clusters[selected_clusters['Experiment name'].str.contains('exp1|exp2|exp3')]['CI score']
y1 = selected_clusters[selected_clusters['Experiment name'].str.contains('exp4|exp5|exp6')]['CI score']
y2 = selected_clusters[selected_clusters['Experiment name'].str.contains('exp7|exp8')]['CI score']
# Group data together
hist_data2 = [y0, y1, y2]
group_labels2 = ['no pre-binning', 'AMC', 'integral kmeans']
fig2 = ff.create_distplot(hist_data2, group_labels2, histnorm='percent', bin_size=0.05,
show_curve=False, show_rug=False, colors=['#393E46', '#2BCDC1', '#F66095'])
fig2['layout'].update(title='Distribution of Quantitative Scores across Pre-binning Algorithms', titlefont=dict(size=16),
xaxis = dict(title='CI score bins'),
yaxis = dict(title='Percent'),
margin=dict(t=30, l=30, b=30),
height=250, width=600)
# Plot!
po.iplot(fig2)
po.plot(fig2, filename=data_dir+'/cluster_evaluation/plots/clustering_evaluation/DistplotQuantScoresPrebinning'+'.html')
z0 = selected_clusters[selected_clusters.Algorithm=='kmeans']['CI score']
z1 = selected_clusters[selected_clusters.Algorithm=='som']['CI score']
z2 = selected_clusters[selected_clusters.Algorithm=='som+kmeans']['CI score']
# Group data together
hist_data3 = [z0, z1, z2]
group_labels3 = ['kmeans', 'som', 'som+kmeans']
fig3 = ff.create_distplot(hist_data3, group_labels3, histnorm='percent', bin_size=0.05,
show_curve=False, show_rug=False, colors=['#1E90FF','#DC143C', '#800080'])
fig3['layout'].update(title='Distribution of Quantitative Scores across Clustering Algorithms', titlefont=dict(size=16),
xaxis = dict(title='CI score bins'),
yaxis = dict(title='Percent'),
margin=dict(t=30, l=30, b=30),
height=250, width=600)
# Plot!
po.iplot(fig3)
po.plot(fig3, filename=data_dir+'/cluster_evaluation/plots/clustering_evaluation/DistplotQuantScoresClustering'+'.html')
```
### Analyse algorithm run times
```
runtimes = selected_clusters.loc[(selected_clusters.Norm=='unit_norm')].groupby('Algorithm')[['CI score','Run time']].mean()
runtimes.rename(columns={'Run time':'Mean run time (s)','CI score':'Mean CI score'}, inplace=True)
runtimes.round(2)
kmeansruntimes = selected_clusters.loc[(selected_clusters.Algorithm=='kmeans')].groupby('Clusters')['Run time'].mean()
somkmeansruntimes = selected_clusters.loc[(selected_clusters.Algorithm=='som+kmeans')].groupby('SOM dim')['Run time'].mean()
somruntimes = selected_clusters.loc[(selected_clusters.Algorithm=='som')].groupby('SOM dim')['Run time'].mean()
data = [go.Scatter(x=somruntimes.index**2,
y=somruntimes.values,
name='som',
mode='lines'),
go.Scatter(x=kmeansruntimes.index,
y=kmeansruntimes.values,
name='k-means',
mode='lines')
]
layout = dict(title='Run times for som and k-means algorithms', titlefont=dict(size=18),
xaxis = dict(title='number of SOM dimensions or clusters', titlefont=dict(size=16), tickfont=dict(size=16)),
yaxis = dict(title='run time (s)', titlefont=dict(size=16), tickfont=dict(size=16)),
margin=dict(t=30),
height=350, width=600)
# Plot!
fig0 = go.Figure(data=data, layout=layout)
po.iplot(fig0)
```
### Visualise Centroids
#### Get denormalised (real) cluster centroids
```
real_cluster_centroids = dict()
for e in best_exp:
rccpath = os.path.join(eval_dir, 'best_centroids', e +'BEST1_centroids.csv')
centroids = pd.read_csv(rccpath, index_col='k')
real_cluster_centroids[e] = centroids
i = 6
ex = ec.exploreAMDBins(best_exp[i]).reset_index()
mapper = ec.mapBins(real_cluster_centroids[best_exp[i]])
out = pd.merge(ex, mapper, on='elec_bin').sort_values(by='mean_dd')
out.rename(columns={'total_sample':'Members','score':'Ix','n_clust':'Clusters','bin_labels':'Mean daily demand bin'}, inplace=True)
out = out.round({'Ix':3})
o = out.reset_index().drop(columns=['som_dim','elec_bin','mean_dd','index'],axis=0)
pvt = o.pivot(index=o.index, columns='experiment_name').swaplevel(axis=1)  # avoid shadowing plotly.offline (po)
pvt.set_index((best_exp[i], 'Mean daily demand bin'), inplace=True)
pvt.index.rename('Mean daily demand bin', inplace=True)
pvt
for i in range(0,7):
pc.plotClusterCentroids(real_cluster_centroids[best_exp[i]])#, threshold=10490, groupby=None)
```
### Visualise Centroid and Member Profiles
```
best_exp
i = 3
centroids = ec.realCentroids(best_exp[i])
centroids['cluster_size'].plot('bar', figsize=(14,4))
clusters = centroids.nlargest(15, 'cluster_size').sort_index().index.values
clusters
pc.plotMembersSample(best_exp[i], largest=15)
```
## Explore Patterns in Cluster Labels
### Visualise TEMPORAL Cluster Specificity
```
for i in range(0,7):
pc.plotClusterSpecificity(best_exp[i], corr_list=['daytype','weekday'], threshold=10490, relative=[[5,1,1],1])
pc.plotClusterSpecificity(best_exp[i], corr_list=['season','monthly'], threshold=10490, relative=[[8, 4],1])
pc.plotClusterSpecificity(best_exp[i], corr_list=['yearly'], threshold=10490)
```
### Visualise CONTEXTUAL Cluster Specificity (Daily Demand Assignment)
```
experiment = 'exp8_kmeans_unit_norm'
corr_path = os.path.join(data_dir, 'cluster_evaluation', 'k_correlations')
dif = pd.read_csv(os.path.join(corr_path, 'demandi_corr.csv'), index_col=[0,1,2], header=[0]).drop_duplicates()
dif_temp = dif.reset_index(level=[-2,-1])
int100_total = dif_temp[(dif_temp.experiment==experiment+'BEST1')&(dif_temp.compare=='total')].drop(['experiment','compare'],axis=1)
dqf = pd.read_csv(os.path.join(corr_path, 'demandq_corr.csv'), index_col=[0,1,2], header=[0]).drop_duplicates()
dqf_temp = dqf.reset_index(level=[-2,-1])
q100_total = dqf_temp[(dqf_temp.experiment==experiment+'BEST1')&(dqf_temp.compare=='total')].drop(['experiment','compare'],axis=1)
#Equally spaced daily demand intervals
i = int100_total.T.stack().reset_index()
i.columns = ['int100_bins', 'cluster', 'values']
heatmap = go.Heatmap(z = i['values'], x = i['int100_bins'], y = i['cluster'],
colorscale='Reds')
layout = go.Layout(
title= 'Relative likelihood that cluster k is used in particular consumption bin',
xaxis=dict(title = 'total daily demand bins (Amps)',
tickmode='array', tickvals=list(range(0,100,10)), ticktext = list(range(0,1000,100))),
yaxis=dict(title ='k clusters for '+experiment)
)
fig = {'data':[heatmap], 'layout':layout }
po.iplot(fig)
#Equally sized daily demand intervals (quantiles)
rel_q100 = q100_total.T[1::]#.drop(columns=37)/0.01
slatered=['#232c2e', '#ffffe0','#c34513']
label_cmap, label_cs = pc.colorscale_from_list(slatered, 'label_cmap')
colorscl= pc.asymmetric_colorscale(rel_q100, label_cmap, ref_point=1/49)
heatmap = go.Heatmap(z = rel_q100.T.values, x = rel_q100.index, y = rel_q100.columns, name = 'corr',
colorscale=colorscl)
layout = go.Layout(
title= 'Heatmap of relative likelihood of Cluster k being used in consumption quantile',
xaxis=dict(title = 'total daily demand quantiles (Amps) - log scale', type='log'),
yaxis=dict(title ='Cluster k'))
fig = {'data':[heatmap], 'layout':layout }
po.iplot(fig)
```
## Analyse Cluster Representativity and Homogeneity
```
total_consE, peak_consE, peak_coincR, temporal_entropy, demand_entropy, good_clusters = ec.getMeasures(best_exp,
threshold = 10490,
weighted=False)
```
### Consumption Error - total
```
pc.subplotClusterMetrics(total_consE, 'TOTAL consumption error evaluation metrics')
```
### Consumption Error - max
```
pc.subplotClusterMetrics(peak_consE, 'PEAK consumption error evaluation metrics')
```
### Peak Coincidence Ratio
```
pc.plotClusterMetrics(peak_coincR, 'daily peak coincidence ratios', metric='coincidence_ratio', make_area_plot=True)
```
### Cluster Entropy - TEMPORAL
#### weekday, month
```
pc.plotClusterMetrics(temporal_entropy, 'weekday cluster entropy', metric='weekday_entropy')#, make_area_plot=False )
pc.plotClusterMetrics(temporal_entropy, 'monthly cluster entropy', metric='monthly_entropy')
```
### Cluster Entropy - ENERGY DEMAND
#### total daily demand, max daily demand
```
pc.plotClusterMetrics(demand_entropy, 'total demand cluster entropy', metric='total_entropy')
pc.plotClusterMetrics(demand_entropy, 'peak demand cluster entropy', metric='peak_entropy')
```
## Cluster Scoring Matrix
```
ec.saveMeasures(best_exp, 10490, weighted=True)
data = pd.read_csv(os.path.join(eval_dir,'cluster_entropy.csv'), index_col=[0,1], header=[0,1,2])
data.reset_index(level=0, drop=True, inplace=True)
data.rename(dict(zip(data.index, [s.replace('_', ' ', 2) for s in data.index])),inplace=True)
df = data.iloc[:,:-1]
```
### Unweighted Mean Peak Coincidence Ratio
```
myd = pd.DataFrame()
for x in peak_coincR.keys(): #set threshold value to same as data - 10490
myd = myd.append({'experiment': x.replace('_',' ', 2), 'mean peak coincidence ratio': peak_coincR[x]['coincidence_ratio'].mean()}, ignore_index=True)
#myd = myd.set_index('experiment')
rrr = df.loc(axis=1)[:,:,'coincidence_ratio']
rrr.columns = rrr.columns.droplevel().droplevel()
rrr.reset_index(inplace=True)
pcr = pd.merge(myd, rrr, left_on='experiment', right_on='index')
pcr.rename(columns={'mean peak coincidence ratio':'Mean pcr','coincidence_ratio':'Weighted pcr',
'experiment':'Experiment'},inplace=True)
pcr.set_index('Experiment',inplace=True)
pcr.drop(columns=['index'],inplace=True)
pcr.round(3).sort_index()
```
### Ranked Scores
```
rank_coincR = df[['coincR']].rank(ascending=False, method='min').groupby(level=['measure','metric'],axis=1).mean().T
rank_clusters = df[['clusters']].rank(ascending=False, method='min').groupby(level=['measure','metric'],axis=1).mean().T
rank_consE = df[['consE']].rank(method='min').groupby(level=['measure'],axis=1).mean().T
rank_consE.insert(loc=0, column='metric', value='mean_error')
rank_consE.set_index('metric',append=True,inplace=True)
rank_entropy = df['entropy'].rank(method='min').T
conse = df[['consE']].rank(method='min').T
conse.rename(columns={'experiment':'Experiment','algorithm':'Algorithm','preprocessing':'Norm',
'SOM dimensions':'SOM dim','clusters':'Clusters','dbi':'DBI', 'mia':'MIA',
'silhouette':'Silhouette','score':'CI score','run time':'Run time',
'experiment_name':'Experiment name'}, inplace=True)
ranked_results = pd.concat([rank_clusters, rank_coincR, rank_consE, rank_entropy], levels=['measure','metric'])
ranked_results.insert(loc=0, column='weights', value= [2, 3, 6 ,6, 5, 5, 4, 4])#, 2])
score_results = ranked_results.loc[:,ranked_results.columns[1::]].multiply(ranked_results['weights'], axis='index').sum()
score = pd.DataFrame(score_results, columns=['score']).T
score.index = pd.MultiIndex.from_tuples([('', '', 'SCORE')])
ranked_results.set_index('weights',append=True,inplace=True)
score_results = pd.concat([ranked_results, score])
#only run this cell if you want information about additional parameters for experiments
algs = [col.split(' ') for col in score_results.columns]
preb = ['','AMC','AMC','AMC','AMC','integral k-means','integral k-means']
dropz = ['','','','','True','','True']
multic = []
for a in range(0, len(algs)):
multic.append(algs[a]+[preb[a]]+[dropz[a]])
score_results.columns = pd.MultiIndex.from_tuples(multic, names=['Experiment', 'Algorithm','Normalisation',
'Pre-binning','Drop Zeros'])
score_results.index.set_names('weight', level=2, inplace=True)
score_results
```
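The weighted rank-aggregation step above (rank each metric, multiply by a weight, sum) can be sketched generically; the experiment names, metric values, and weights below are hypothetical, and both metrics are assumed to be higher-is-better:

```python
import pandas as pd

# Hypothetical higher-is-better scores for three experiments.
scores = pd.DataFrame({"metric_a": [0.9, 0.5, 0.7],
                       "metric_b": [0.3, 0.6, 0.4]},
                      index=["exp1", "exp2", "exp3"])

# Rank each metric (1 = best), as with df.rank(ascending=False, method='min') above.
ranks = scores.rank(ascending=False, method="min")

# Weight the ranks and sum per experiment: the lowest total wins.
weights = pd.Series({"metric_a": 2, "metric_b": 3})
total = ranks.mul(weights, axis=1).sum(axis=1)
print(total.idxmin())  # exp2
```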
## Archetypes
```
pc.plotClusterCentroids(real_cluster_centroids['exp8_kmeans_unit_norm'].loc[[33, 39, 40, 41, 44, 45, 46, 47, 48, 49, 50, 51]],
groupby=None,
title='Mpumalanga Rural Newly Electrified', threshold=10490)
pc.plotClusterCentroids(real_cluster_centroids['exp8_kmeans_unit_norm'].loc[[39, 44, 45, 46, 49, 50]],
groupby=None,
title='Mpumalanga Informal Settlement Newly Electrified', threshold=10490)
pc.plotClusterCentroids(real_cluster_centroids['exp8_kmeans_unit_norm'].loc[[39, 45, 46, 49, 50, 53]],
groupby=None,
title='Eastern Cape Informal Settlement Newly Electrified', threshold=10490)
pc.plotClusterCentroids(real_cluster_centroids['exp8_kmeans_unit_norm'].loc[[9, 11, 44]],
groupby=None,
title='Limpopo Informal Settlement Medium-term Electrified', threshold=10490)
pc.plotClusterCentroids(real_cluster_centroids['exp8_kmeans_unit_norm'].loc[[3, 4, 6, 7, 24]],
groupby=None,
title='Gauteng Township Longterm Electrified', threshold=10490)
pc.plotClusterCentroids(real_cluster_centroids['exp8_kmeans_unit_norm'].loc[[1, 3, 4, 5, 35, 36, 38]],
groupby=None,
title='KwaZulu Natal Lower Middle Class Long-term Electrified', threshold=10490)
pc.plotClusterCentroids(real_cluster_centroids['exp8_kmeans_unit_norm'].loc[[2, 4, 35, 36, 38, 57]],
groupby=None,
title='KwaZulu Natal Upper Middle Class Long-term Electrified', threshold=10490)
pc.plotClusterCentroids(real_cluster_centroids['exp8_kmeans_unit_norm'].loc[[6, 7, 37, 54, 57]],
groupby=None,
title='Western Cape Upper Middle Class Medium-term Electrified', threshold=10490)
```
```
import pandas as pd
import numpy as np
data = {
'color': [ 'blue', 'green', 'yellow', 'red', 'white' ],
'object': ['ball', 'pen', 'pencil', 'paper', 'mug'],
'price': [ 1.2, 1.0, 0.6, 0.9, 1.7 ]
}
frame = pd.DataFrame(data)
frame
frame2 = pd.DataFrame(data, columns=['object', 'price'])
frame2
frame2 = pd.DataFrame(data, index=['one','two','three','four','five'])
frame2
frame3 = pd.DataFrame(np.arange(16).reshape((4,4)), index=['red','blue','yellow','white'], columns=['ball','pen','pencil','paper'])
frame3
```
## Selecting Elements
```
frame.columns
frame.index
frame.values
frame['price']
frame.price
frame.iloc[2]
frame.iloc[[2,4]]
frame[0:1]
frame[1:3]
frame['object'][3]
frame.index.name = 'id'
frame.columns.name = 'item'
frame
frame['new'] = 12
frame
frame['new'] = [3.0,1.3,2.2,0.8,1.1]
frame
ser = pd.Series(np.arange(5))
ser
frame['new'] = ser
frame
frame.isin([1.0,'pen'])
frame[frame.isin([1.0,'pen'])]
del frame['new']
frame
frame[frame<1.2]
nestdict = {
'red': {
2012: 22,
2013: 33
},
'white': {
2011:13,
2012:22,
2013: 16
},
'blue': {
2011:17,
2012:27,
2013:18
}
}
frame2 = pd.DataFrame(nestdict)
frame2
```
## DataFrame from Nested dict
```
frame2.T
ser = pd.Series([5,0,3,8,4], index=['red','blue','yellow','white','green'])
ser.index
ser.idxmin()
ser.idxmax()
serd = pd.Series(range(6), index=['white','white','blue','green','green','yellow'])
serd
serd.index.is_unique
frame.index.is_unique
ser = pd.Series([2,5,7,4], index = ['one','two','three','four'])
ser
ser.reindex(['three','four','five','one'])
ser3 = pd.Series([1,5,6,3],index=[0,3,5,6])
ser3
ser3.reindex(range(6),method='ffill')
ser3.reindex(range(6),method='bfill')
frame.reindex(range(5), method='ffill',columns=['color','price','new','object'])
ser = pd.Series(np.arange(4,),index=['red','blue','yellow','white'])
ser
```
## Dropping
```
ser.drop('yellow')
ser.drop(['blue','white'])
frame = pd.DataFrame(np.arange(16).reshape((4,4)),
index=['red','blue','yellow','white'],
columns=['ball','pen','pencil','paper'])
frame
frame.drop(['blue','yellow'])
frame.drop(['pen','pencil'],axis=1)
```
## Arithmetic and Data Alignment
```
s1 = pd.Series([3,2,5,1],['white','yellow','green','blue'])
s2 = pd.Series([1,4,7,2,1],['white','yellow','black','blue','brown'])
s1
s2
s1+s2
frame1 = pd.DataFrame(np.arange(16).reshape((4,4)),
index=['red','blue','yellow','white'],
columns=['ball','pen','pencil','paper'])
frame2 = pd.DataFrame(np.arange(12).reshape((4,3)),
index=['blue','green','white','yellow'],
columns=['mug','pen','ball'])
frame1
frame2
frame1 + frame2
frame1.add(frame2)
frame = pd.DataFrame(np.arange(16).reshape((4,4)),
index=['red','blue','yellow','white'],
columns=['ball','pen','pencil','paper'])
frame
ser = pd.Series(np.arange(4),index=['ball','pen','pencil','paper'])
ser
frame - ser
ser['mug']=9
ser
frame - ser
frame = pd.DataFrame(np.arange(16).reshape((4,4)),
index=['red','blue','yellow','white'],
columns=['ball','pen','pencil','paper'])
frame
np.sqrt(frame)
frame.apply(lambda x: x.max() - x.min())
frame.apply(lambda x: x.max() - x.min(), axis=1)
def f(x):
return pd.Series([x.min(),x.max()],index=['min','max'])
frame.apply(f)
frame.sum()
frame.describe()
```
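One refinement to the alignment examples above: `+` yields NaN for every label that appears in only one of the two frames, while the `add` method accepts a `fill_value` that substitutes a default for entries missing from one side (entries missing from both sides stay NaN):

```python
import numpy as np
import pandas as pd

f1 = pd.DataFrame(np.arange(4).reshape(2, 2),
                  index=["red", "blue"], columns=["ball", "pen"])
f2 = pd.DataFrame(np.arange(4).reshape(2, 2),
                  index=["blue", "green"], columns=["pen", "mug"])

f1 + f2                    # only ('blue', 'pen') is non-NaN
f1.add(f2, fill_value=0)   # missing entries on one side treated as 0
```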
## Sorting and Ranking
```
ser = pd.Series([5,0,3,8,4],index=['red','blue','yellow','white','green'])
ser
ser.sort_index()
ser.sort_index(ascending=False)
frame = pd.DataFrame(np.arange(16).reshape((4,4)),
index=['red','blue','yellow','white'],
columns=['ball','pen','pencil','paper'])
frame
frame.sort_index()
frame.sort_index(axis=1)
ser.sort_values()
frame.sort_values(by='pen')
frame.sort_values(by=['pen','pencil'])
ser.rank()
ser.rank(method='first')
ser.rank(ascending=False)
seq2 = pd.Series([3,4,3,4,5,4,3,2],
['2006','2007','2008','2009','2010','2011','2012','2013'])
seq = pd.Series([1,2,3,4,4,3,2,1],
['2006','2007','2008','2009','2010','2011','2012','2013'])
seq.corr(seq2)
```
## Correlation and Covariance
```
seq.cov(seq2)
frame2 = pd.DataFrame([[1,4,3,6],[4,5,6,1],[3,3,1,5],[4,1,6,4]],
index=['red','blue','yellow','white'],
columns=['ball','pen','pencil','paper'])
frame2
frame2.corr()
frame2.cov()
frame2.corrwith(ser)
frame2.corrwith(frame)
```
## NaN Data
```
ser = pd.Series([0,1,2,np.nan,9],index=['red','blue','yellow','white','green'])
ser
ser['white']=None
ser
ser.dropna()
ser[ser.notnull()]
frame3 = pd.DataFrame([[6,np.nan,6],[np.nan,np.nan,np.nan],[2,np.nan,5]],
index=['blue','green','red'],
columns=['ball','mug','pen'])
frame3
frame3.dropna()
frame3.dropna(how='all')
frame3.fillna(0)
frame3.fillna({'ball':1,'mug':0,'pen':99})
```
## Hierarchical Indexing and Leveling
```
mser = pd.Series(np.random.rand(8),index=[['white','white','white','blue','blue','red','red','red'],
['up','down','right','up','down','up','down','left']])
mser
mser.index
mser['white']
mser[:,'up']
mser['white','up']
mser.unstack()
frame
frame.stack()
mframe = pd.DataFrame(np.random.random(16).reshape(4,4),
index = [['white','white','red','red'],
['up','down','up','down']],
columns = [['pen','pen','paper','paper'],
[1,2,1,2]])
mframe
mframe.columns.names = ['objects','id']
mframe.index.names = ['colors','status']
mframe
mframe.swaplevel('colors','status')
mframe.sort_index(level='colors')
mframe.groupby(level='colors').sum()
mframe.T.groupby(level='id').sum().T
```
In an IPython notebook, I can write out mathematical expressions in LaTeX, which helps me understand my code better.
## q_3 word2vec.py
```
import numpy as np
import random
from q1_softmax import softmax
from q2_gradcheck import gradcheck_naive
from q2_sigmoid import sigmoid, sigmoid_grad
def normalizeRows(x):
"""
Row normalization function
Implement a function that normalizes each row of a matrix to have unit length.
"""
### YOUR CODE HERE
# print (x.sum(axis=1).reshape(-1,1))
x = x/np.sqrt((x**2).sum(axis=1)).reshape(-1,1)
# Equivalent Form:
'''
x = x/np.sqrt((x**2).sum(axis=1, keepdims=True))
'''
#raise NotImplementedError
### END YOUR CODE
return x
def test_normalize_rows():
print ("Testing normalizeRows...")
x = normalizeRows(np.array([[3.0,4.0],[1, 2]]))
print (x)
ans = np.array([[0.6,0.8],[0.4472136,0.89442719]])
assert np.allclose(x, ans, rtol=1e-05, atol=1e-06)
print ("test passed")
test_normalize_rows()
```
## For the input arguments of the softmaxCostAndGradient function
- $\hat{y}$ = `prob`, the softmax prediction
- ($\hat{y} - y$) = `prob` after the in-place update `prob[target] -= 1.`
- cost = $-\log(\hat{y}_{target})$ = `-np.log(prob[target])`
- gradPred = $\frac{\partial CE(y, \hat{y})}{\partial v_c}$ = $U^T (\hat{y} - y)$ = `np.dot(prob, outputVectors)`
- grad = $\frac{\partial CE(y, \hat{y})}{\partial u_w}$ = $(\hat{y} - y)\,v_c^T$ = `np.outer(prob, predicted)`
```
def softmaxCostAndGradient(predicted, target, outputVectors, dataset):
""" Softmax cost function for word2vec models
Implement the cost and gradients for one predicted word vector
and one target word vector as a building block for word2vec
models, assuming the softmax prediction function and cross
entropy loss.
Arguments:
predicted -- numpy ndarray, predicted word vector
target -- integer, the index of the target word
outputVectors -- "output" vectors (as rows) for all tokens
what is the meaning of the output vectors?
dataset -- needed for negative sampling, unused here.
Return:
cost -- cross entropy cost for the softmax word prediction
gradPred -- the gradient with respect to the predicted word
vector
grad -- the gradient with respect to all the other word
vectors
We will not provide starter code for this function, but feel
free to reference the code you previously wrote for this
assignment!
"""
#The math expression of the loss function can be found in the slides,
# for clarity, I will use the same notation as the written assignment
### YOUR CODE HERE
# y has the same shape as y_hat, but is all zeros
# except for a 1 at the target index.
# Then compute the cost directly from the expression in the slides.
prob = softmax(np.dot(predicted, outputVectors.T))
cost = -np.log(prob[target])
# this step turns prob into y_hat - y
prob[target] -= 1.
# consistent with the derivation
gradPred = np.dot(prob, outputVectors)
# the following three expressions are equivalent; I use the one I'm most familiar with
#grad = prob[:, np.newaxis] * predicted[np.newaxis, :]
#grad = np.outer(prob, predicted)
grad = np.dot(prob.reshape(-1,1), predicted.reshape(1, -1))
#raise NotImplementedError
### END YOUR CODE
return cost, gradPred, grad
```
`np.outer(a, b)` combines `a` of shape (M,) and `b` of shape (N,) into an (M, N) array, where `out[i][j] = a[i] * b[j]`.
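A quick sketch of that behaviour, alongside the two equivalent formulations that appear (commented out) in the gradient code above:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # shape (M,) = (3,)
b = np.array([10.0, 20.0])      # shape (N,) = (2,)

out = np.outer(a, b)            # shape (3, 2); out[i][j] = a[i] * b[j]

# Equivalent forms:
same1 = a[:, np.newaxis] * b[np.newaxis, :]
same2 = np.dot(a.reshape(-1, 1), b.reshape(1, -1))
```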
```
def getNegativeSamples(target, dataset, K):
""" Samples K indexes which are not the target """
indices = [None] * K
for k in range(K):
newidx = dataset.sampleTokenIdx()
while newidx == target:
newidx = dataset.sampleTokenIdx()
indices[k] = newidx
return indices
```
## This part is designed to execute the part(c) of the assignment problem
$J_{loss}$ = $-log(\sigma(u_O^T v_C)) - \Sigma_{k=1}^K log(\sigma(-u_k^T v_C))$
$\frac{\partial J_{loss}}{\partial v_c}$ = $(\sigma(u_O^T v_C)-1)u_O - \Sigma_{k=1}^K (\sigma(-u_k^T v_C)-1)u_k$
$\frac{\partial J_{loss}}{\partial u_O}$ = $[\sigma(u_O^T v_C) - 1]v_C$
$\frac{\partial J_{loss}}{\partial u_k}$ = $-[\sigma(-u_k^T v_C) - 1]v_C$
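The $\frac{\partial J_{loss}}{\partial v_C}$ expression above can be sanity-checked numerically with toy vectors and central finite differences (a self-contained sketch, separate from the assignment code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
u_o = rng.normal(size=3)          # output vector of the target word
u_neg = rng.normal(size=(4, 3))   # K = 4 negative-sample vectors
v_c = rng.normal(size=3)          # center-word vector

def loss(v):
    return -np.log(sigmoid(u_o @ v)) - np.log(sigmoid(-u_neg @ v)).sum()

# Analytic gradient w.r.t. v_c, from the derivation above.
grad = (sigmoid(u_o @ v_c) - 1.0) * u_o \
       + ((1.0 - sigmoid(-u_neg @ v_c))[:, None] * u_neg).sum(axis=0)

# Central finite differences.
eps = 1e-6
num = np.zeros_like(v_c)
for i in range(len(v_c)):
    step = np.zeros_like(v_c)
    step[i] = eps
    num[i] = (loss(v_c + step) - loss(v_c - step)) / (2 * eps)

assert np.allclose(grad, num, atol=1e-5)
```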
```
def negSamplingCostAndGradient(predicted, target, outputVectors, dataset, K=10):
""" Negative sampling cost function for word2vec models
Implement the cost and gradients for one predicted word vector
and one target word vector as a building block for word2vec
models, using the negative sampling technique. K is the sample
size.
Note: See test_word2vec below for dataset's initialization.
Arguments/Return Specifications: same as softmaxCostAndGradient
"""
# Sampling of indices is done for you. Do not modify this if you
# wish to match the autograder and receive points!
indices = [target]
indices.extend(getNegativeSamples(target, dataset, K))
# so the first slot in indices stores the target
### YOUR CODE HERE
prob = np.dot(outputVectors, predicted)
cost = -np.log(sigmoid(prob[target])) \
- np.log(sigmoid(-prob[indices[1:]])).sum()
# prob and cost follow directly from the loss expression above
# gradPred is the partial of the loss w.r.t. v_c
gradPred = (sigmoid(prob[target]) - 1) * outputVectors[target] \
+ sum(-(sigmoid(-prob[indices[1:]]) - 1).reshape(-1,1) * outputVectors[indices[1:]])
# grad is the partial of the loss function w.r.t. u
grad = np.zeros_like(outputVectors)# to generate np.zeros with same shape as outputVectors
grad[target] = (sigmoid(prob[target]) - 1) * predicted
for k in indices[1:]:
grad[k] += (1.0 - sigmoid(-np.dot(outputVectors[k], predicted))) * predicted
#raise NotImplementedError
### END YOUR CODE
return cost, gradPred, grad
def skipgram(currentWord, C, contextWords, tokens, inputVectors, outputVectors,
dataset, word2vecCostAndGradient=softmaxCostAndGradient):
""" Skip-gram model in word2vec
Implement the skip-gram model in this function.
Arguments:
currentWord -- a string of the current center word
C -- integer, context size
contextWords -- list of no more than 2*C strings, the context words
tokens -- a dictionary that maps words to their indices in
the word vector list
inputVectors -- "input" word vectors (as rows) for all tokens
outputVectors -- "output" word vectors (as rows) for all tokens
word2vecCostAndGradient -- the cost and gradient function for
a prediction vector given the target
word vectors, could be one of the two
cost functions you implemented above.
Return:
cost -- the cost function value for the skip-gram model
grad -- the gradient with respect to the word vectors
"""
cost = 0.0
gradIn = np.zeros(inputVectors.shape)
gradOut = np.zeros(outputVectors.shape)
### YOUR CODE HERE
center_word = tokens[currentWord] # vector representation of the center word
for context_word in contextWords:
# index of target word
target = tokens[context_word] # vector representation of the context word
cost_, gradPred_, gradOut_ = word2vecCostAndGradient(inputVectors[center_word], target, outputVectors, dataset)
#sum all the values together
cost += cost_
gradOut += gradOut_
gradIn[center_word] += gradPred_
### END YOUR CODE
return cost, gradIn, gradOut
def cbow(currentWord, C, contextWords, tokens, inputVectors, outputVectors,
dataset, word2vecCostAndGradient=softmaxCostAndGradient):
"""CBOW model in word2vec
Implement the continuous bag-of-words model in this function.
Arguments/Return specifications: same as the skip-gram model
Extra credit: Implementing CBOW is optional, but the gradient
derivations are not. If you decide not to implement CBOW, remove
the NotImplementedError.
"""
cost = 0.0
gradIn = np.zeros(inputVectors.shape)
gradOut = np.zeros(outputVectors.shape)
### YOUR CODE HERE
target = tokens[currentWord]
# context_word corresponds to the \hat{v} vector
context_word = sum(inputVectors[tokens[w]] for w in contextWords)
cost, gradPred, gradOut = word2vecCostAndGradient(context_word, target, outputVectors, dataset)
gradIn = np.zeros(inputVectors.shape)
for w in contextWords:
gradIn[tokens[w]] += gradPred
### END YOUR CODE
return cost, gradIn, gradOut
#############################################
# Testing functions below. DO NOT MODIFY! #
#############################################
def word2vec_sgd_wrapper(word2vecModel, tokens, wordVectors, dataset, C,
word2vecCostAndGradient=softmaxCostAndGradient):
batchsize = 50
cost = 0.0
grad = np.zeros(wordVectors.shape)
N = wordVectors.shape[0]
inputVectors = wordVectors[:int(N/2),:]
outputVectors = wordVectors[int(N/2):,:]
for i in range(batchsize):
C1 = random.randint(1,C)
centerword, context = dataset.getRandomContext(C1)
if word2vecModel == skipgram:
denom = 1
else:
denom = 1
c, gin, gout = word2vecModel(
centerword, C1, context, tokens, inputVectors, outputVectors,
dataset, word2vecCostAndGradient)
cost += c / batchsize / denom
grad[:int(N/2), :] += gin / batchsize / denom
grad[int(N/2):, :] += gout / batchsize / denom
return cost, grad
def test_word2vec():
""" Interface to the dataset for negative sampling """
dataset = type('dummy', (), {})()
def dummySampleTokenIdx():
return random.randint(0, 4)
def getRandomContext(C):
tokens = ["a", "b", "c", "d", "e"]
return tokens[random.randint(0,4)], \
[tokens[random.randint(0,4)] for i in range(2*C)]
dataset.sampleTokenIdx = dummySampleTokenIdx
dataset.getRandomContext = getRandomContext
random.seed(31415)
np.random.seed(9265)
dummy_vectors = normalizeRows(np.random.randn(10,3))
dummy_tokens = dict([("a",0), ("b",1), ("c",2),("d",3),("e",4)])
print ("==== Gradient check for skip-gram ====")
gradcheck_naive(lambda vec: word2vec_sgd_wrapper(
skipgram, dummy_tokens, vec, dataset, 5, softmaxCostAndGradient),
dummy_vectors)
gradcheck_naive(lambda vec: word2vec_sgd_wrapper(
skipgram, dummy_tokens, vec, dataset, 5, negSamplingCostAndGradient),
dummy_vectors)
print ("\n==== Gradient check for CBOW ====")
gradcheck_naive(lambda vec: word2vec_sgd_wrapper(
cbow, dummy_tokens, vec, dataset, 5, softmaxCostAndGradient),
dummy_vectors)
gradcheck_naive(lambda vec: word2vec_sgd_wrapper(
cbow, dummy_tokens, vec, dataset, 5, negSamplingCostAndGradient),
dummy_vectors)
print ("\n=== Results ===")
print (skipgram("c", 3, ["a", "b", "e", "d", "b", "c"],
dummy_tokens, dummy_vectors[:5,:], dummy_vectors[5:,:], dataset))
print (skipgram("c", 1, ["a", "b"],
dummy_tokens, dummy_vectors[:5,:], dummy_vectors[5:,:], dataset,
negSamplingCostAndGradient))
print (cbow("a", 2, ["a", "b", "c", "a"],
dummy_tokens, dummy_vectors[:5,:], dummy_vectors[5:,:], dataset))
print (cbow("a", 2, ["a", "b", "a", "c"],
dummy_tokens, dummy_vectors[:5,:], dummy_vectors[5:,:], dataset,
negSamplingCostAndGradient))
if __name__ == "__main__":
test_normalize_rows()
test_word2vec()
def sgd(f, x0, step, iterations, postprocessing=None, useSaved=False,
PRINT_EVERY=10):
""" Stochastic Gradient Descent
Implement the stochastic gradient descent method in this function.
Arguments:
f -- the function to optimize, it should take a single
argument and yield two outputs, a cost and the gradient
with respect to the arguments
x0 -- the initial point to start SGD from
step -- the step size for SGD
iterations -- total iterations to run SGD for
postprocessing -- postprocessing function for the parameters
if necessary. In the case of word2vec we will need to
normalize the word vectors to have unit length
PRINT_EVERY -- specifies how many iterations to output loss
Return:
x -- the parameter value after SGD finishes
"""
# Anneal the learning rate every several iterations (annealing, interesting!)
ANNEAL_EVERY = 20000
if useSaved:
start_iter, oldx, state = load_saved_params()
if start_iter > 0:
x0 = oldx
step *= 0.5 ** (start_iter / ANNEAL_EVERY)
if state:
random.setstate(state)
else:
start_iter = 0
x = x0
if not postprocessing:
postprocessing = lambda x: x
expcost = None
for iter in range(start_iter + 1, iterations + 1):
# Don't forget to apply the postprocessing after every iteration!
# You might want to print the progress every few iterations.
cost = None
### YOUR CODE HERE
cost, grad = f(x)
x = x - step * grad
x = postprocessing(x)
### END YOUR CODE
if (iter % PRINT_EVERY == 0):
if not expcost:
expcost = cost
else:
expcost = .95 * expcost + .05 * cost
print ("iter %d: %f" % (iter, expcost))
if iter % SAVE_PARAMS_EVERY == 0 and useSaved:
save_params(iter, x)
if iter % ANNEAL_EVERY == 0:
step *= 0.5
return x
```
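As a self-contained sketch of the update rule the loop above is meant to implement (stripped of the saving, annealing, and printing machinery), plain gradient descent on a simple quadratic converges to the minimum:

```python
import numpy as np

def sgd_minimize(f, x0, step, iterations):
    """Plain gradient descent; f returns (cost, gradient)."""
    x = x0
    for _ in range(iterations):
        cost, grad = f(x)
        x = x - step * grad
    return x

# f(x) = ||x||^2 has gradient 2x and its minimum at the origin.
f = lambda x: (np.sum(x ** 2), 2 * x)
x = sgd_minimize(f, np.array([3.0, -4.0]), step=0.1, iterations=100)
# x shrinks by a factor of 0.8 per step, so it ends up very close to [0, 0]
```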
# Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658))
## Author: Zach Etienne
### Formatting improvements courtesy Brandon Clark
[comment]: <> (Abstract: TODO)
**Notebook Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation).
### NRPy+ Source Code for this module: [BSSN/Enforce_Detgammahat_Constraint.py](../edit/BSSN/Enforce_Detgammahat_Constraint.py)
## Introduction:
[Brown](https://arxiv.org/abs/0902.3652)'s covariant Lagrangian formulation of BSSN, which we adopt, requires that $\partial_t \bar{\gamma} = 0$, where $\bar{\gamma}=\det \bar{\gamma}_{ij}$. Further, all initial data we choose satisfies $\bar{\gamma}=\hat{\gamma}$.
However, numerical errors will cause $\bar{\gamma}$ to deviate from a constant in time. This actually disrupts the hyperbolicity of the PDEs, so to cure this, we adjust $\bar{\gamma}_{ij}$ at the end of each Runge-Kutta timestep, so that its determinant satisfies $\bar{\gamma}=\hat{\gamma}$ at all times. We adopt the following, rather standard prescription (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)):
$$
\bar{\gamma}_{ij} \to \left(\frac{\hat{\gamma}}{\bar{\gamma}}\right)^{1/3} \bar{\gamma}_{ij}.
$$
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows:
1. [Step 1](#initializenrpy): Initialize needed NRPy+ modules
1. [Step 2](#enforcegammaconstraint): Enforce the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint
1. [Step 3](#code_validation): Code Validation against `BSSN.Enforce_Detgammahat_Constraint` NRPy+ module
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Initialize needed NRPy+ modules \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
```
# Step P1: import all needed modules from NRPy+:
from outputC import nrpyAbs,lhrh,outCfunction # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import sympy as sp # SymPy, Python's core symbolic algebra package
import BSSN.BSSN_quantities as Bq # NRPy+: BSSN quantities
import os,shutil,sys # Standard Python modules for multiplatform OS-level functions
# Set spatial dimension (must be 3 for BSSN)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Then we set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","SinhSpherical")
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
```
<a id='enforcegammaconstraint'></a>
# Step 2: Enforce the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](#toc)\]
$$\label{enforcegammaconstraint}$$
Recall that we wish to make the replacement:
$$
\bar{\gamma}_{ij} \to \left(\frac{\hat{\gamma}}{\bar{\gamma}}\right)^{1/3} \bar{\gamma}_{ij}.
$$
Notice the expression on the right is guaranteed to have determinant equal to $\hat{\gamma}$.
$\bar{\gamma}_{ij}$ is not a gridfunction, so we must rewrite the above in terms of $h_{ij}$:
\begin{align}
\left(\frac{\hat{\gamma}}{\bar{\gamma}}\right)^{1/3} \bar{\gamma}_{ij} &= \bar{\gamma}'_{ij} \\
&= \hat{\gamma}_{ij} + \varepsilon'_{ij} \\
&= \hat{\gamma}_{ij} + \text{Re[i][j]} h'_{ij} \\
\implies h'_{ij} &= \left[\left(\frac{\hat{\gamma}}{\bar{\gamma}}\right)^{1/3} \bar{\gamma}_{ij} - \hat{\gamma}_{ij}\right] / \text{Re[i][j]} \\
&= \left(\frac{\hat{\gamma}}{\bar{\gamma}}\right)^{1/3} \frac{\bar{\gamma}_{ij}}{\text{Re[i][j]}} - \delta_{ij}\\
&= \left(\frac{\hat{\gamma}}{\bar{\gamma}}\right)^{1/3} \frac{\hat{\gamma}_{ij} + \text{Re[i][j]} h_{ij}}{\text{Re[i][j]}} - \delta_{ij}\\
&= \left(\frac{\hat{\gamma}}{\bar{\gamma}}\right)^{1/3} \left(\delta_{ij} + h_{ij}\right) - \delta_{ij}
\end{align}
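That the rescaled metric really does satisfy $\det\bar{\gamma}'_{ij}=\hat{\gamma}$ follows from $\det(c\,\bar{\gamma}_{ij}) = c^3 \det\bar{\gamma}_{ij}$ in three dimensions; here is a quick numerical check with a toy symmetric matrix (plain NumPy, not part of the NRPy+ pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.normal(size=(3, 3))
gammabar = A @ A.T + 3 * np.eye(3)   # a random symmetric positive-definite "metric"
detghat = 2.5                        # stand-in value for det(gammahat)

c = (detghat / np.linalg.det(gammabar)) ** (1.0 / 3.0)
rescaled = c * gammabar
assert np.isclose(np.linalg.det(rescaled), detghat)
```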
Upon inspection, when expressing $\hat{\gamma}$ SymPy generates expressions like `(xx0)^{4/3} = pow(xx0, 4./3.)`, which can yield $\text{NaN}$s when `xx0 < 0` (i.e., in the `xx0` ghost zones). To prevent this, we know that $\hat{\gamma}\ge 0$ for all reasonable coordinate systems, so we make the replacement $\hat{\gamma}\to |\hat{\gamma}|$ below:
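The NaN behaviour, and the absolute-value fix, are easy to reproduce with NumPy's analogue of C's `pow`:

```python
import numpy as np

with np.errstate(invalid="ignore"):
    bad = np.power(-8.0, 4.0 / 3.0)       # negative base, fractional exponent -> nan

good = np.power(np.abs(-8.0), 4.0 / 3.0)  # 8**(4/3) = 16
```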
```
# We will need the h_{ij} quantities defined within BSSN_RHSs
# below when we enforce the gammahat=gammabar constraint
# Step 1: All barred quantities are defined in terms of BSSN rescaled gridfunctions,
# which we declare here in case they haven't yet been declared elsewhere.
Bq.declare_BSSN_gridfunctions_if_not_declared_already()
hDD = Bq.hDD
Bq.BSSN_basic_tensors()
gammabarDD = Bq.gammabarDD
# First define the Kronecker delta:
KroneckerDeltaDD = ixp.zerorank2()
for i in range(DIM):
KroneckerDeltaDD[i][i] = sp.sympify(1)
# The detgammabar in BSSN_RHSs is set to detgammahat when BSSN_RHSs::detgbarOverdetghat_equals_one=True (default),
# so we manually compute it here:
dummygammabarUU, detgammabar = ixp.symm_matrix_inverter3x3(gammabarDD)
# Next apply the constraint enforcement equation above.
hprimeDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
# Using nrpyAbs here, as it directly translates to fabs() without additional SymPy processing.
# This acts to simplify the final expression somewhat.
hprimeDD[i][j] = \
(nrpyAbs(rfm.detgammahat)/detgammabar)**(sp.Rational(1,3)) * (KroneckerDeltaDD[i][j] + hDD[i][j]) \
- KroneckerDeltaDD[i][j]
```
<a id='code_validation'></a>
# Step 3: Code Validation against `BSSN.Enforce_Detgammahat_Constraint` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation}$$
Here, as a code validation check, we verify agreement in the C code output between
1. this tutorial and
2. the NRPy+ [BSSN.Enforce_Detgammahat_Constraint](../edit/BSSN/Enforce_Detgammahat_Constraint.py) module.
```
##########
# Step 1: Generate enforce_detgammahat_constraint() using functions in this tutorial notebook:
Ccodesdir = os.path.join("enforce_detgammahat_constraint")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
enforce_detg_constraint_vars = [lhrh(lhs=gri.gfaccess("in_gfs","hDD00"),rhs=hprimeDD[0][0]),
lhrh(lhs=gri.gfaccess("in_gfs","hDD01"),rhs=hprimeDD[0][1]),
lhrh(lhs=gri.gfaccess("in_gfs","hDD02"),rhs=hprimeDD[0][2]),
lhrh(lhs=gri.gfaccess("in_gfs","hDD11"),rhs=hprimeDD[1][1]),
lhrh(lhs=gri.gfaccess("in_gfs","hDD12"),rhs=hprimeDD[1][2]),
lhrh(lhs=gri.gfaccess("in_gfs","hDD22"),rhs=hprimeDD[2][2]) ]
enforce_gammadet_string = fin.FD_outputC("returnstring",enforce_detg_constraint_vars,
params="outCverbose=False,preindent=1,includebraces=False")
desc = "Enforce det(gammabar) = det(gammahat) constraint."
name = "enforce_detgammahat_constraint"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h-validation"), desc=desc, name=name,
params="const rfm_struct *restrict rfmstruct, const paramstruct *restrict params, REAL *restrict in_gfs",
body=enforce_gammadet_string,
loopopts="AllPoints,enable_rfm_precompute")
##########
# Step 2: Generate enforce_detgammahat_constraint() using functions in BSSN.Enforce_Detgammahat_Constraint
gri.glb_gridfcs_list = []
import BSSN.Enforce_Detgammahat_Constraint as EGC
EGC.output_Enforce_Detgammahat_Constraint_Ccode(outdir=Ccodesdir,
exprs=EGC.Enforce_Detgammahat_Constraint_symb_expressions())
import filecmp
for file in [os.path.join(Ccodesdir,"enforce_detgammahat_constraint.h")]:
if filecmp.cmp(file,file+"-validation") == False:
print("VALIDATION TEST FAILED on file: "+file+".")
sys.exit(1)
else:
print("Validation test PASSED on file: "+file)
##########
```
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-BSSN_enforcing_determinant_gammabar_equals_gammahat_constraint.pdf](Tutorial-BSSN_enforcing_determinant_gammabar_equals_gammahat_constraint.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-BSSN_enforcing_determinant_gammabar_equals_gammahat_constraint")
```
<div align="center"><h1>Perspectives on Text</h1>
<h3>_Synthesizing Textual Knowledge through Markup_</h3>
<br/>
<h4>Elli Bleeker, Bram Buitendijk, Ronald Haentjens Dekker, Astrid Kulsdom
<br/>R&D - Dutch Royal Academy of Arts and Science</h4>
<h6>Computational Methods for Literary Historical Textual Scholarship - July 3, 2018</h6>
</div>
This talk is not a simple "me and my project" presentation. Instead I'd like to address the topic of computational text modeling by focusing on one instrument: markup.
Markup is cool! It is an instrument to express our understanding of a text to a computer so we can probe that text further, have others probe it, store it and represent it.
### Theory
Definition of text
Expectations of markup; challenges
### Practice
TAGML
Editorial workflow
### Conclusion
### Discussion
# Theory
Before we go on, let's take a closer look at what we're dealing with, exactly. As this is the Computational Methods for Literary-Historical Textual Scholarship conference, I assume we are primarily working with textual objects.
"Text" has been defined over and over again (see all the publications about "what text is, really") but we propose the following definition that is both very precise and inclusive and takes into account all textual features that textual scholars are interested in.
# What is text?
A multilayered, non-linear object containing information which is at times ordered, partially ordered, or unordered
I'll give three examples of textual features and how they translate informationally.
# Modeling textual features
- Overlapping structures
- Discontinuous elements
- Non-linear elements
# Overlapping structures
<img src="images/Selection-21v.png">
<img src="images/Selection-22v.png">
# Discontinuous elements
<img width="900" height="500" src="images/discontinuity.png">
<img width="300" height="300" src="images/order.jpg">
# Non-linear structures
<img width="500" height="500" src="images/code-nonlinear.png">
<img align="left" width="500" height="300" src="images/order1a.png">
<img align="right" width="500" height="300" src="images/order1b.png">
Now, let's move on to markup. The aims, or "potential", of markup are twofold:
# Markup
- Markup helps us to **make explicit implicit notions and interpretations**
- Markup **unites scholarly knowledge**
I don't have to tell you that these promises are not, at least not entirely, fulfilled.
Current markup technologies do not permit us to do so, or better: they don't facilitate it in a straightforward manner.
- Simple texts do fit into one hierarchy, but the moment one wants to tag more complex phenomena, one has to resort to workarounds, which work, but remain workarounds.
- Transcriptions are usually made on a project basis / idiosyncratic approaches. No shared conception of digital editing.
Let's start with the first point. The moment I mention hierarchies, your mind will automatically spring to "overlap".
Indeed, we can make explicit our understanding of text but the moment we structure a textual object a certain way, there'll be many elements that do not fit into that structure. The easiest example is that of:
Logical structure vs. document structure
<img src="images/Selection-21v.png">
<img src="images/Selection-22v.png">
There are, of course, an infinite number of structures, as illustrated by the long list of analytical perspectives on text by Allen Renear _et al_. Each of these perspectives implies a different structuring and ordering of the text.
"Analytical perspectives on text" (Renear _et al_. 1993):
- dramatic: act, scene, stage directions, speech
- poetic: poem, verse, stanza, quatrain, couplet, line, half-line, foot
- syntactic: sentence, noun phrase, verb phrase, determiner, adjective, noun, verb
- etc...
The second point, bringing together scholarly knowledge in one file that can be used and reused by others, has also been proven quite unfeasible by our diverging scholarly practices.
<div class="quote" align="center">_"Most texts are made for a special use in a specific context for a limited group of people"_</div><br/><div class="source" align="center">(Hillesund 2005)</div>
<div align="center">A "_shared conception of digital editing_" (Hajo 2010) is abandoned in favor of idiosyncratic approaches</div>
Paraphrasing Erwin Panofsky, we can assume that:
<div align="center">A strictly formal observation, and with it a description, of a textual object is a _physical impossibility_</div>
Theoretically, markup is an indispensable instrument:
# Markup allows us to ...
#### ... formally describe our interpretation of text
#### ... create transcriptions that can be shared with others and processed by software
Markup is a powerful technology. But:
# With great power comes great responsibility.
Apart from the fact that we have to be very conscious about the ways in which we identify and tag textual features (during which we also have to take into account how these features may be processed and addressed in later stages), we have a responsibility to keep questioning the model we use.
<div align="center"> The affordances and limitations of a textual model influence our understanding of text</div>
So what happens if we step outside the framework we've all come to know, and start all over again? What if we're no longer compelled to think in terms of monohierarchical structures when modeling text and instead take as point of departure a model that provides *native* support for multiple hierarchies, without complicated hacks and workarounds? How would we then mark up a text?
# Practice
In the second part of my talk, I'll introduce TAGML, the markup language we've been developing over the past few months. TAGML is based on the definition of text as a multilayered, non-linear object and addresses in a straightforward manner complex textual features like those I just described.
Furthermore, we have developed a system to manage TAGML files and address both the issues of compatibility and interoperability.
First, TAGML.
# TAGML
Markup language of Text-as-Graph (TAG) model
Considers **text** to be **a non-linear and multilayered information object**.
A TAGML file can have multiple **layers**.
A layer is, in principle, a set of markup nodes. A layer is hierarchical.
How does that help us to capture various features of text, both simple and complicated?
Let's focus on one of the textual features I just outlined, the most familiar one: overlapping structures.
Imagine transcribing the poetic structure of the text on this document fragment.
<img src="images/Selection-21v.png">
<img src="images/Selection-22v.png">
```
[tagml>
[page>
[p>
[line>2d. Voice from the Springs<line]
[line>Thunderbolts had parched our water<line]
[line>We had been stained with bitter blood<line]
<p]
<page]
[page>
[p>
[line>And had ran mute 'mid shrieks of <|[sic>slaugter<sic]|[corr>slaughter<corr]|><line]
[line>Thro' a city & a solitude!<line]
<p]
<page]
<tagml]
```
Let's take a closer look at that last transcription. One could argue that the paragraph isn't really "closed"; it only needs to be closed to avoid overlap with the page element. If that weren't necessary, this would be a more intuitive transcription:
(The following transcription has been stripped of most tags for readability.)
```
[page>
[p>
[line>2d. Voice from the Springs<line]
[line>Thunderbolts had parched our water<line]
[line>We had been stained with bitter blood<line]
<page]
[page>
[line>And had ran mute 'mid shrieks of slaughter<line]
[line>Thro' a city and a multitude!<line]
<p]
<page]
```
This is where the multilayeredness comes in. The moment structures overlap, the user can create a new layer. A layer can be created locally. The layers may be given any name; in this example they are simply referred to as layer A and layer B.
```
[page|+A>
[p|+B>
[line>2d. Voice from the Springs<line]
[line>Thunderbolts had parched our water<line]
[line>We had been stained with bitter blood<line]
<page|A]
[page|A>
[line>And had ran mute 'mid shrieks of slaughter<line]
[line>Thro' a city and a multitude!<line]
<p|B]
<page|A]
```
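To make the layer mechanism concrete, here is a minimal Python sketch, written for this talk rather than taken from the TAG codebase, that projects a multilayer TAGML string onto a single named layer. Tags bound to other layers (and unlayered tags) are dropped, and the layer suffixes themselves are stripped:

```python
import re

# Open tags like "[page|+A>" or "[line>", close tags like "<page|A]" or "<line]".
TAG_RE = re.compile(
    r"\[(?P<oname>\w+)(?:\|\+?(?P<olayer>\w+))?>"
    r"|<(?P<cname>\w+)(?:\|(?P<clayer>\w+))?\]"
)

def project_layer(tagml, layer):
    """Keep only the markup bound to `layer`; text content is always kept."""
    def repl(m):
        if (m.group("olayer") or m.group("clayer")) == layer:
            # Re-emit the tag without its layer suffix.
            if m.group("oname"):
                return "[{}>".format(m.group("oname"))
            return "<{}]".format(m.group("cname"))
        return ""  # tag belongs to another layer (or no layer): drop it
    return TAG_RE.sub(repl, tagml)

fragment = ("[page|+A>[p|+B>[line>2d. Voice<line]<page|A]"
            "[page|A>[line>Thro'<line]<p|B]<page|A]")
print(project_layer(fragment, "A"))  # [page>2d. Voice<page][page>Thro'<page]
print(project_layer(fragment, "B"))  # [p>2d. VoiceThro'<p]
```

Each projection is a single, well-nested hierarchy, which is exactly what makes per-layer processing with conventional tree tooling possible.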
# _Alexandria_
- Text repository for TAGML files
- Git-like version management
Managing TAGML files with multiple layers is done in a repository called _Alexandria_, which stores the TAGML files.
The workflow is similar to that of Git.
Let's return to the examples I just showed, and let's imagine that the markup is added not by one, but by two editors. We'll name them A and B, or to make it more realistic, Astrid and Bram.
## Astrid
```
[page>
[p>
[line>2d. Voice from the Springs<line]
[line>Thrice three hundred thousand years<line]
[line>We had been stained with bitter blood<line]
<p]
<page]
[page>
[p>
[line>And had ran mute 'mid shrieks of <|[sic>slaugter<sic]|[corr>slaughter<corr]|><line]
[line>Thro' a city and a multitude<line]
<p]
<page]
```
<img width="500" height="400" src="images/astrid-alex-init.png">
<img width="500" height="400" src="images/bram-alexandria-checkout.png">
## View "material"
Includes elements `[page>`, `[line>` and `[corr>`
```
[page>
[line>2d. Voice from the Springs<line]
[line>Thrice three hundred thousand years<line]
[line>We had been stained with bitter blood<line]
<page]
[page>
[line>And had ran mute 'mid shrieks of slaughter<line]
[line>Thro' a city and a multitude<line]
<page]
```
## Bram
```
[page|+A>
[p|+B>
[l>2d. Voice from the Springs<l]
[l>Thrice three hundred thousand years<l]
[l>We had been stained with bitter blood<l]
<page|A]
[page|A>
[l>And had ran mute 'mid shrieks of [corr>slaughter<corr]<l]
[l>Thro' a city & a multitude<l]
<p|B]
<page|A]
```
Both TAGML transcriptions are merged in Alexandria. Usually, the users would not check out the "master file", but if they did, it would look something like this:
# Astrid + Bram
```
[page|+A>
[p|+B>
[p|+C>
[line>[l>2nd. Voice from the Springs.<l]<line]
[line>[l>Thrice three hundred thousand years<l]<line]
[line>[l>We had been stained with bitter blood<l]<line]
<p|C]
<page|A]
[page|A>
[p|C>
[line>[l>And had ran mute 'mid shrieks of <|[sic|C>slaugter<sic|C]|[corr>slaughter<corr]|><l]<line]
[line>[l>Thro' a city and a multitude<l]<line]
<p|B]
<p|C]
<page|A]
```
It should be clear that, in order to manage multiple transcriptions with multiple layers, properly documenting them is key. If we go back to the statement that adding markup is "making explicit what is implicit", we can say that this explicitness exists on several levels: not only within the _text_, but also in the form of metadata and additional documenting files.
# Conclusion
# Text
- Text is a multilayered, non-linear object
- The information can be ordered, partially ordered, or unordered
# Markup
1. Overlap
2. Discontinuity
3. Non-linearity
4. Compatible
    - Interoperable
    - Reusable
"Natural" or idiomatic: the model needs to be close to our understanding of text
# TAGML
... formal description of complex textual features in a straightforward manner
# Alexandria
... stores and manages TAGML files
# Discussion
- How do we handle the merge of TAGML files? Do we consider changes in markup as replacements or additions?
`[line>` to `[l>`
# Option 1: replacements
Changes made by a user replace the existing markup:
<br/>
```[l>2nd. Voice from the Springs.<l]```
# Option 2: additions
New layers are created to identify changes made by different users:
<br/>
```[line|Astrid>[l|Bram>2nd. Voice from the Springs.<l|Bram]<line|Astrid]```
If changes were to be considered replacements, a merge would imply losing a certain amount of information. Perhaps that's not problematic, but users need to be aware of that. In any case, losses wouldn't be forever as they can always be reverted.
- Is the source text part of a perspective or not? In other words, is a perspective only the markup or also the source text?
<img src="images/Selection-22v.png">
View poetic:
`[rhyme>slaughter<rhyme]`
View material:
`<|[sic>slaugter<sic]|[corr>slaughter<corr]|>`
# References
- Alexandria. https://github.com/HuygensING/alexandria-markup. Information about installing and using the Alexandria command line app is available at links on the TAG portal at https://github.com/HuygensING/TAG.
- Gengnagel, T. 2015. "Marking Up Iconography: Scholarly Editions Beyond Text," in: parergon, 06/11/2015, https://parergon.hypotheses.org/40.
- Haentjens Dekker, R. & Birnbaum, D.J. 2017. "It’s more than just overlap: Text As Graph". In _Proceedings of Balisage: The Markup Conference 2017. Balisage Series on Markup Technologies_, vol. 19. doi:10.4242/BalisageVol19.Dekker01. https://www.balisage.net/Proceedings/vol19/html/Dekker01/BalisageVol19-Dekker01.html
- Hajo, C. M. 2010. "The sustainability of the scholarly edition in a digital world". In _Proceedings of the International Symposium on XML for the Long Haul: Issues in the Long-term Preservation of XML_. Balisage Series on Markup Technologies, vol. 6. doi:10.4242/BalisageVol6.Hajo01.
- Hillesund, T. 2005. "Digital Text Cycles: From Medieval Manuscripts to Modern Markup". In _Journal of Digital Information_ 6:1. https://journals.tdl.org/jodi/index.php/jodi/article/view/62/65.
- Panofsky, E. 1932/1964. "Zum Problem der Beschreibung und Inhaltsdeutung von Werken der bildenden Kunst" in _Ikonographie und Ikonologie: Theorien, Entwicklung, Probleme (Bildende Kunst als Zeichensystem; vol. 1)_, ed. by Ekkehard Kaemmerling, Köln 1979, pp.185-206.
- Renear, A. H., Mylonas, E., & Durand, D. 1993. "Refining our notion of what text really is: The problem of overlapping hierarchies". https://www.ideals.illinois.edu/bitstream/handle/2142/9407/RefiningOurNotion.pdf?sequence=2&isAllowed=y
- Sahle, P. 2013. _Digitale Editionsformen-Teil 3: Textbegriffe Und Recodierung_. Norderstedt: Books on Demand. http://kups.ub.uni-koeln.de/5353/
- Shelley, P. B. "Prometheus Unbound, Act I", in The Shelley-Godwin Archive, MS. Shelley e. 1, 21v. Retrieved from http://shelleygodwinarchive.org/sc/oxford/prometheus_unbound/act/i/#/p7
- Shillingsburg, P. 2014. "From physical to digital textuality: Loss and gain in literary projects". In _CEA Critic_ 76:2, pp.158-168.
# Some extra slides
## Just in case ...
<img src="images/cmlhts-18-latest.png">
<img src="images/cmlhts-19-latest.png">
<img src="images/cmlhts-20-latest.png">
# TAG
Data model: non-uniform cyclic property hypergraph of text
- Document Node
- Text Nodes
- Markup Nodes
- Annotation Nodes
<img align="center" width="300" height="200" src="images/hypergraph-general.png">
<img align="center" width="600" height="600" src="images/hypergraph.png">
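As a rough illustration of those node types (my own toy sketch, not the actual TAG implementation), overlap falls out naturally once two markup nodes are allowed to dominate the same text node:

```python
from dataclasses import dataclass, field

@dataclass
class TextNode:
    text: str

@dataclass
class MarkupNode:
    name: str
    layers: set = field(default_factory=set)
    targets: list = field(default_factory=list)  # text/markup nodes it dominates

@dataclass
class Document:
    root: MarkupNode

t1 = TextNode("2d. Voice ")
t2 = TextNode("from the Springs")
line = MarkupNode("line", {"A"}, [t1, t2])
rhyme = MarkupNode("rhyme", {"B"}, [t2])  # overlaps with `line` on t2
doc = Document(MarkupNode("tagml", set(), [line, rhyme]))
print(rhyme.targets[0] is t2)  # True: two markup nodes share one text node
```

In a tree model, `line` and `rhyme` could not both contain `t2`; in the hypergraph, shared text nodes are the normal case.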
Miscellaneous plots of spectra.
```
#first get the python modules we need
import numpy as np
import matplotlib.pyplot as plt
import astropy.io.fits as fits
import os
import glob
from astropy.convolution import convolve, Box1DKernel
from astropy.table import Table
import astropy.units as u
from astropy.modeling import models, fitting
#matplotlib set up
%matplotlib inline
from matplotlib import rcParams
rcParams["figure.figsize"] = (14, 5)
rcParams["font.size"] = 20
```
SDSS v X-shooter
```
sdss = fits.open('spectra/sdss/spec-4848-55955-0338.fits')
sdss.info()
sdss[1].data.names
data = sdss[1].data
plt.plot(10**data['LOGLAM'], data['FLUX'])
plt.plot(10**data['LOGLAM'], (1/data['IVAR'])**0.5)
sw, sf, se = 10**data['LOGLAM'], data['FLUX']*1e-17, ((1/data['IVAR'])**0.5)*1e-17
mask = (sw > 8450) & (sw < 8700)
plt.plot(sw[mask], sf[mask])
#xw, xf, xe = np.loadtxt('spectra/SDSSJ1144_old/SDSS1144_2_SCI_SLIT_FLUX_MERGE1D_VIS_TAC.csv', unpack=True, delimiter=',')
xw, xf, xe = np.loadtxt('stare_extractions/SDSS1144_1_SCI_SLIT_FLUX_MERGE1D_VIS_58189.08029528.csv', unpack=True, delimiter=',')
mask = (xw > 8450) & (xw < 8700)
plt.plot(xw[mask], xf[mask])
xw1, xf1, xe1 = np.loadtxt('stare_extractions/SDSS1144_2_SCI_SLIT_FLUX_MERGE1D_VIS_58168.274911677.csv', unpack=True, delimiter=',')
mask = (xw1 > 8450) & (xw1 < 8700)
plt.plot(xw1[mask], xf1[mask])
fitter = fitting.LevMarLSQFitter()
def get_shifted_lines(x, lines):
#calculates the approximate positions of the shifted lines
rest_lam = lines[0]*u.AA
obs_lam = x*u.AA
dv = obs_lam.to(u.km/u.s, equivalencies=u.doppler_optical(rest_lam))
#print(dv)
l2 = dv.to(u.AA, equivalencies=u.doppler_optical(lines[1]*u.AA))
l3 = dv.to(u.AA, equivalencies=u.doppler_optical(lines[2]*u.AA))
return np.array([x, l2.value, l3.value])
def make_plot_spec(w, f, e, mask1, mask2,smooth=10): #cuts spectrum down to the bit to plot
#mask = (w > 8450) & (w < 8480) | (w > 8520) & (w <8540) | (w > 8560) & (w< 8660) | (w > 8680) & (w < 8700) #mask out emmission lines
w1, f1 = w[mask1], f[mask1]
n_init = models.Polynomial1D(3)
n_fit = fitter(n_init, w1, f1)
#mask = (w > 8450) & (w < 8700)
w1, f1, e1 = w[mask2], f[mask2], e[mask2]
nf = f1/n_fit(w1)
ne = e1/n_fit(w1)
if smooth > 0:
nf = convolve(nf,Box1DKernel(smooth))
ne = convolve(ne,Box1DKernel(smooth))/smooth**0.5
return w1,nf, ne
lines = [8498.02,8542.09,8662.14]
#smask = (sw > 8450) & (sw < 8700)
#plt.plot(sw[smask], sf[smask])
#xmask = (xw > 8450) & (xw < 8700)
#plt.plot(xw[xmask], xf[xmask])
plt.figure(figsize=(8,10))
for i, w, f, e, smooth in zip([0,1, 2],[xw, xw1, sw], [xf, xf1, sf], [xe, xe1, se], [8, 8, 0]):
slines = get_shifted_lines(8505, lines)
mask1 = (w > 8460) & (w < slines[0]-5) | (w > slines[0]+5) & (w <slines[1]-5) | (w > slines[1]+5) & (w< slines[2]-5) | (w > slines[2]+5) & (w < 8700)
mask2 = (w> 8470) & (w < 8690)
w, f, e = make_plot_spec(w, f,e , mask1, mask2, smooth=smooth)
#plt.step(w[2:-2],f[2:-2]+i*0.5, label = label, where='mid')
plt.plot(w[2:-2],f[2:-2]+i*0.5)
#plt.legend()
plt.xlabel('Wavelength (\AA)')
plt.ylabel('Normalised Flux')
[plt.axvline(line, ls='--', c='C3') for line in lines]
plt.xlim(8475, 8685)
plt.ylim(0.78, 2.28)
plt.annotate('X-shooter March 2018', (8570, 1.2), xycoords='data')
plt.annotate('SDSS January 2012', (8570, 2.2), xycoords='data')
plt.annotate('X-shooter February 2018', (8570, 1.7), xycoords='data')
plt.tight_layout()
plt.savefig('plots/sdss_v_xshooter.pdf')
sdss[1].header
visfits = glob.glob('spectra/nicola_2/WDJ114404.76+052951.77/VIS_notell/*VIS*fits')
print(len(visfits))
for v in visfits:
print(fits.getheader(v)['EXPTIME'])
```
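The `get_shifted_lines` helper above leans on astropy's `doppler_optical` equivalency; the underlying arithmetic is simple enough to sketch by hand. A pure-Python illustration (not a replacement for the astropy call, which handles units and conventions properly):

```python
CA_TRIPLET = [8498.02, 8542.09, 8662.14]  # Ca II triplet rest wavelengths, Angstrom

def shifted_lines(observed_first, rest_lines):
    """Predict where the other lines of a multiplet fall, given the observed
    wavelength of the first line. Optical convention: lam_obs = lam_rest*(1 + v/c),
    so the ratio observed/rest is the same for every line of the multiplet."""
    factor = observed_first / rest_lines[0]  # = 1 + v/c
    return [rest * factor for rest in rest_lines]

print(shifted_lines(8505.0, CA_TRIPLET))
```

For the 8505 Å input used in the plotting loop, the second and third Ca II lines land a few Ångströms redward of their rest wavelengths, matching what the masks in `mask1` are built around.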
# Project description
- Beta Bank customers are leaving: little by little, chipping away every month. The bankers figured out it’s cheaper to retain existing customers than to attract new ones.
- We need to predict whether a customer will leave the bank soon. We have data on clients’ past behavior and termination of contracts with the bank.
- Build a model with the maximum possible F1 score.
- To pass the project, you need an F1 score of at least 0.59. Check the F1 for the test set.
- Additionally, measure the AUC-ROC metric and compare it with the F1.
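As a reminder of what the 0.59 target means, F1 is the harmonic mean of precision and recall; a quick hand computation on toy predictions (illustrative only — the project itself uses sklearn's `f1_score`):

```python
def f1(y_true, y_pred):
    # Count true positives, false positives, and false negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1([1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 0, 1, 1, 0]))  # 0.75
```

Unlike accuracy, F1 only rewards the model for getting the positive (churning) class right, which is why it is the right target for imbalanced data like this.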
```
#Import all libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns#visualization
sns.set(style="ticks", color_codes=True)
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.utils import shuffle
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
#define most used variables
F1_REQUIRED=0.59 #F1 = 2 * (precision * recall) / (precision + recall)
RANDOM_STATE=12345
target_names = ['Exited 0', 'Exited 1'] #Used in printing classification report
import sys
print (sys.version)
```
# STEP 1 - DATA PREPROCESSING
- Download and prepare the data.
- Explain the procedure.
Data description
The data can be found in /datasets/Churn.csv file. Download the dataset.
**Features**
- RowNumber — data string index
- CustomerId — unique customer identifier
- Surname — surname
- CreditScore — credit score
- Geography — country of residence
- Gender — gender
- Age — age
- Tenure — period of maturation for a customer’s fixed deposit (years)
- Balance — account balance
- NumOfProducts — number of banking products used by the customer
- HasCrCard — customer has a credit card
- IsActiveMember — customer’s activeness
- EstimatedSalary — estimated salary
- Target
- Exited — customer has left
```
#Import the file and create the dataset
path = '/datasets'
df = pd.read_csv(path+'/Churn.csv')
def display_information(df):
print('Head:')
print()
display(df.head())
print ('-'*100)
print('Info:')
print()
display(df.info())
print ('-'*100)
print('Describe:')
print()
display(df.describe())
print ('-'*100)
display(df.describe(include='object'))
print()
print('Columns with nulls:')
display(get_precent_of_na_df(df,4))
print ('-'*100)
print('Shape:')
print(df.shape)
print ('-'*100)
print('Duplicated:')
print("\033[1m" + 'We have {} duplicated rows.'.format(df.duplicated().sum()) + "\033[0m")
def get_precent_of_na_df(df,num):
df_nulls = pd.DataFrame(df.isna().sum(),columns=['Missing Values'])
df_nulls['Percent of Nulls'] = round(df_nulls['Missing Values'] / df.shape[0],num) *100
return df_nulls
def get_percent_of_na(df,num=2):
count = 0
df = df.copy()
s = (df.isna().sum() / df.shape[0])
for column, percent in zip(s.index,s.values):
num_of_nulls = df[column].isna().sum()
if num_of_nulls == 0:
continue
else:
count += 1
print('Column {} has {:.{}%} percent of Nulls, and {} of nulls'.format(column, percent,num,num_of_nulls))
if count !=0:
print('There are {} columns with NA!'.format(count))
else:
print()
print('There are no columns with NA!')
display_information(df)
```
# CONCLUSION:
I don't see any outliers.
The only data I would consider problematic is rows with zeroes in all of Balance, HasCrCard, NumOfProducts, and IsActiveMember.
```
df_zerotest = df.query('Balance==0 & HasCrCard==0 & NumOfProducts==0')
df_zerotest.shape
```
# CONCLUSION
Customers have at least one of Balance, Credit Card, or NumOfProducts.
```
display(df.head(5),df.tail(5))
#Droping 3 columns which will have no use in the Supervised Learning
df=df.drop(['CustomerId','RowNumber','Surname'],axis=1)
df['Tenure'].value_counts()
```
# CONCLUSION
- Dropped three columns: CustomerId, RowNumber, Surname. Not needed.
- Checked that the 'Tenure' column has NaN values.
- I need to identify if this column will have an impact on the prediction of values.
- I will check which rows have NaN values for Tenure to see if they can be replaced.
```
df1 = df[df.isna().any(axis=1)]
print(df1.shape)
display(df1.pivot_table(index=['Exited','Geography','CreditScore','Gender'],values='Balance',aggfunc=['mean','count']))
valueFrance = df[df['Geography']=='France']['Tenure'].median()
valueSpain = df[df['Geography']=='Spain']['Tenure'].median()
print(valueFrance, valueSpain)
df['Tenure'].fillna(df['Tenure'].median(),inplace=True)
print(df['Tenure'].isnull().sum())
```
# CONCLUSION
- Replaced the null values in Tenure column with median value.
- Reason: Tenure has NaN values for the customers in countries: France, Spain.
- I calculated the median of Tenure for each of these countries; it came out to 5.
- Hence I replaced the NaN values with the median value, which is 5.
- No more NaN values.
# STEP-2: EXPLORATORY DATA ANALYSIS
```
heatmap = sns.heatmap(df.corr()[['Exited']].sort_values(by='Exited', ascending=False), vmin=-1, vmax=1, annot=True, cmap='BrBG')
```
# CONCLUSION
- The heatmap depicts that Churn has positive correlation with Age, Balance and Estimated Salary. However, the correlation doesn't seem to be very strong.
- Churn (Exited) also has negative correlation with HasCrCard, Tenure, CreditScore, NumOfProducts, and IsActiveMember.
```
plt.figure(figsize=(10,4))
df.corr()['Exited'].sort_values(ascending = False).plot(kind='bar')
```
# CONCLUSION
The bar chart also shows that Age, Balance, IsActiveMember, and NumOfProducts have the strongest correlation with churn.
```
#Categorical attributes churn rate
fig, axis = plt.subplots(2, 2, figsize=(12, 8))
sns.countplot(x= df.Geography, hue = 'Exited' ,data=df, ax =axis[0][0])
sns.countplot(x=df.Gender, hue = 'Exited' ,data=df, ax=axis[1][0])
sns.countplot(x=df.HasCrCard, hue = 'Exited' ,data=df, ax=axis[0][1])
sns.countplot(x=df.IsActiveMember, hue = 'Exited' ,data=df, ax=axis[1][1])
plt.ylabel('count')
```
# CONCLUSION
- Geography: France shows a large number of customers with a low churn rate.
- CreditCard: The percentage of customers without a credit card who churn is higher. A much larger share of customers hold a credit card.
- Gender: Female customers are churning more than male customers, although more data is needed to confirm these exploratory findings. There are more male customers overall.
- IsActiveMember: Inactive customers are churning more than active ones. There are more active members than inactive members.
# STEP-3: FEATURE ENGINEERING
- We have two categorical columns: Geography, Gender
- We should one-hot encode (OHE) these columns in the dataset.
```
#Define OHE columns for Supervised Learning
df_ohe=pd.get_dummies(df,drop_first=True)
print(df_ohe.shape)
display(df_ohe.head())
```
# CONCLUSION
- Geography and Gender columns are one-hot encoded.
# STEP-4: DEFINE TRAINING, VALIDATION AND TEST SETS
```
#Set the training, validation and test datasets (features and target)
target = df_ohe['Exited']
features = df_ohe.drop(['Exited'] , axis=1)
#FIRST SPLIT INTO TRAINING(60%) AND VALID_TEST (40%)
features_train, features_validtest, target_train, target_validtest = train_test_split(
features, target, test_size=0.40, random_state=RANDOM_STATE)
#SPLIT VALID_TEST INTO VALIDATION and TEST (20% each)
features_test, features_valid, target_test, target_valid = train_test_split(
features_validtest, target_validtest, test_size=0.50, random_state=RANDOM_STATE)
totsize = len(df_ohe)
print('training set : {0:.0%}'.format(len(features_train) /totsize),features_train.shape, ', training target :',target_train.shape)
print('validation set: {0:.0%}'.format(len(features_valid)/totsize),features_valid.shape,', validation target :',target_valid.shape)
print('test set : {0:.0%}'.format(len(features_test)/totsize),features_test.shape,', test target :',target_test.shape)
```
# CONCLUSION
Training set is 60%, Validation and Test sets are 20% each
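The proportions follow from the two-stage split: the second `train_test_split` halves the 40% hold-out. A quick arithmetic check (assuming a 10,000-row dataset for illustration):

```python
n = 10_000
valid_test = int(n * 0.40)   # first split holds out 40%
test_size = valid_test // 2  # second split takes half of the hold-out
valid_size = valid_test - test_size
train_size = n - valid_test
print(train_size / n, valid_size / n, test_size / n)  # 0.6 0.2 0.2
```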
# STANDARDIZE THE NUMERICAL FEATURES
```
#Standardize the numerical features.
#CreditScore Age Tenure Balance NumOfProducts EstimatedSalary
numeric = ['CreditScore', 'Age', 'Tenure', 'Balance','NumOfProducts', 'EstimatedSalary']
scaler = StandardScaler()
# < transform feature set >
scaler.fit(features_train[numeric])
features_train.loc[:,numeric] = scaler.transform(features_train[numeric])
# < transform validation set >
features_valid.loc[:,numeric] = scaler.transform(features_valid[numeric])
# < transform test set >
features_test.loc[:,numeric] = scaler.transform(features_test[numeric])
print(features_train.shape)
print(features_valid.shape)
print(features_test.shape)
display('TRAINING SET',features_train.head(), 'VALIDATION SET',features_valid.head(),'TEST SET',features_test.head())
```
# CONCLUSION
All data sets are scaled correctly.
# BUILD A RESULT DATA FRAME
```
#Create a dataframe to store the model name and results (accuracy score, AUC-ROC, F1 for valid and test sets) and whether the model met or exceeded the required F1 score
column_names = ["method", "hyperparameters", "accuracy_score","auc_roc","f1_valid","f1_test",'f1_required','above_f1_threshold?']
df_results = pd.DataFrame(columns = column_names)
```
# STEP-5: TEST SUPERVISED LEARNING MODELS
- LogisticRegression
- DecisionTreeClassifier
- RandomForestClassifier
Try each model with these options of supervised learning:
- No Hyperparameters
- balanced weights
- Upsampled
- Downsampled
```
#Function that accepts the model parameter and fits it to the sent data, then updates the results dataframe and returns the results dataframe
def supervisedModel(model,features_train, target_train,method,hyperparam):
global df_results,predicted_test,target_test,predicted_valid, target_valid #Using it in the function, global
model.fit(features_train, target_train)
predicted_valid = model.predict(features_valid)
f1_valid = f1_score(target_valid,predicted_valid)
predicted_test = model.predict(features_test)
f1_test = f1_score(target_test,predicted_test)
auc_roc = roc_auc_score(target_test, predicted_test)
acc=accuracy_score(target_test,predicted_test)
above_threshold = np.where( f1_test> F1_REQUIRED, True, False)
resultRowStr= [method,hyperparam,acc,auc_roc,f1_valid,f1_test,F1_REQUIRED,above_threshold]
    new_row = pd.DataFrame([resultRowStr], columns=df_results.columns)
    # append the row (DataFrame.append was removed in pandas 2.0; concat works in all versions)
    df_results = pd.concat([df_results, new_row], ignore_index=True).round(decimals=4)
# check the rows
display(df_results)
```
# LogisticRegression (No Hyperparameter)
```
method="LogisticRegression"
hyperparam = "none"
model = LogisticRegression(solver='liblinear',random_state=RANDOM_STATE)
supervisedModel(model,features_train, target_train,method, hyperparam)
```
# DecisionTreeClassifier with Hyperparameter: Depth
```
prevScore=0
foundDepth=0
for depth in range(1, 10, 1):
model = DecisionTreeClassifier(max_depth=depth, random_state=RANDOM_STATE)
model.fit(features_train, target_train)
predicted_valid = model.predict(features_valid)
scoreR = f1_score(target_valid,predicted_valid)
print("scoreR:",scoreR,"Depth:",depth)
if (scoreR > prevScore):
prevScore=scoreR
foundDepth=depth
print('maxDepth',foundDepth)
method="DecisionTreeClassifier"
hyperparam = "depth: "+str(foundDepth)
model = DecisionTreeClassifier(max_depth=foundDepth,random_state=RANDOM_STATE)
supervisedModel(model,features_train, target_train,method, hyperparam)
```
# DecisionTreeClassifier with Hyperparameters: balanced weights,Depth
```
prevScore=0
foundDepth=0
for depth in range(1, 10, 1):
model = DecisionTreeClassifier(max_depth=depth, random_state=RANDOM_STATE,class_weight='balanced')
model.fit(features_train, target_train)
predicted_valid = model.predict(features_valid)
scoreR = f1_score(target_valid,predicted_valid)
print("scoreR:",scoreR,"Depth:",depth)
if (scoreR > prevScore):
prevScore=scoreR
foundDepth=depth
print('maxDepth',foundDepth)
method="DecisionTreeClassifier"
hyperparam = "Balanced, depth: "+str(foundDepth)
model = DecisionTreeClassifier(max_depth=foundDepth,class_weight='balanced',random_state=RANDOM_STATE)
supervisedModel(model,features_train, target_train,method, hyperparam)
```
# LogisticRegression with Hyperparameters: balanced weights
```
model = LogisticRegression(solver='liblinear',class_weight='balanced',random_state=RANDOM_STATE)
method="LogisticRegression"
hyperparam="Balanced"
supervisedModel(model,features_train, target_train,method, hyperparam)
```
# RandomForest with Hyperparameters: n_estimators, depth, balanced weights
```
prevScore=0.0
foundDepth=0
for depth in range(1, 16, 1):
model = RandomForestClassifier(n_estimators=20, max_depth=depth, random_state=RANDOM_STATE, class_weight='balanced')
model.fit(features_train, target_train)
predicted_valid = model.predict(features_valid)
f1_scor = f1_score(target_valid,predicted_valid)
print("f1Score:",f1_scor,"Depth:",depth)
if (f1_scor > prevScore):
prevScore=f1_scor
foundDepth=depth
print('maxDepth',foundDepth)
model = RandomForestClassifier(n_estimators=100, max_depth=foundDepth, random_state=RANDOM_STATE, class_weight='balanced')
method="RandomForestClassifier"
hyperparam = "Balanced, depth: "+str(foundDepth)
supervisedModel(model,features_train, target_train,method, hyperparam)
```
# CONFUSION MATRIX
```
#Confusion matrix (use a new name so the imported confusion_matrix function is not shadowed)
cm = confusion_matrix(target_test, predicted_test)
print(cm)
```
- The confusion matrix shows that the classifier produced too many false negatives.
- It also shows a large number of true negatives: most customers did not exit.
- The issue is class imbalance, so we need to fix the imbalance in the data to improve model quality.
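The link between the confusion-matrix counts and the scores can be made explicit. A small sketch with made-up counts (not the notebook's actual matrix) showing how accuracy stays high while recall, and therefore F1, collapses under imbalance:

```python
tn, fp, fn, tp = 1500, 100, 250, 150  # hypothetical imbalanced counts

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)  # hurt badly by the many false negatives
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)  # accuracy 0.825 but F1 only ~0.46
```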
```
print(classification_report(target_test, predicted_test, target_names=target_names))
```
# CONCLUSION
- The Exited=0 class is predicted with an F1 score of 79%, while the Exited=1 class is predicted with an F1 score of only 48%.
- We should test models with upsampling of Exited=1 and downsampling of Exited=0.
# STEP-6: UpSampling, Downsampling
```
features_zeros = features_train[target_train==0]
features_ones = features_train[target_train==1]
target_zeros = target_train[target_train==0]
target_ones = target_train[target_train==1]
print(features_zeros.shape)
print(features_ones.shape)
print(target_zeros.shape)
print(target_ones.shape)
```
- There are fewer positive observations than negative ones.
- Fewer customers have exited (Exited=1 is the minority class).
- We need to do upsampling.
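The repetition-and-shuffle recipe in the next cell can be wrapped in a reusable helper; a sketch along those lines (shuffling by position with a NumPy permutation rather than `sklearn.utils.shuffle`, which avoids any duplicate-index pitfalls after the concat):

```python
import numpy as np
import pandas as pd

def upsample(features, target, repeat, random_state=12345):
    """Repeat the minority-class (target==1) rows `repeat` times, then
    shuffle features and target with the same permutation of positions."""
    feat_up = pd.concat([features[target == 0]] + [features[target == 1]] * repeat)
    targ_up = pd.concat([target[target == 0]] + [target[target == 1]] * repeat)
    order = np.random.RandomState(random_state).permutation(len(feat_up))
    return feat_up.iloc[order], targ_up.iloc[order]

# Tiny toy example: one positive row repeated three times.
toy_features = pd.DataFrame({'x': [1, 2, 3, 4, 5]})
toy_target = pd.Series([0, 0, 0, 0, 1])
f_up, t_up = upsample(toy_features, toy_target, repeat=3)
print(len(f_up), int(t_up.sum()))  # 7 3
```

A `downsample` twin would instead sample a fraction of the majority-class rows, as the cells below do by hand.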
```
#Create Upsample
repeat = 2
features_upsampled = pd.concat([features_zeros] + [features_ones] * repeat)
target_upsampled = pd.concat([target_zeros] + [target_ones] * repeat)
print(features_upsampled.shape)
print(target_upsampled.shape)
features_upsampled, target_upsampled = shuffle(
features_upsampled, target_upsampled, random_state=RANDOM_STATE)
#Check upsample counts (mask with target_upsampled, not target_train, so the indices align)
features_zeros = features_upsampled[target_upsampled==0]
features_ones = features_upsampled[target_upsampled==1]
target_zeros = target_upsampled[target_upsampled==0]
target_ones = target_upsampled[target_upsampled==1]
print(features_zeros.shape)
print(features_ones.shape)
print(target_zeros.shape)
print(target_ones.shape)
#TRYING DOWNSAMPLING, remove some (Exited=0)
print(features_train.shape)
fraction=0.8
features_downsampled = pd.concat(
[features_zeros.sample(frac=fraction, random_state=RANDOM_STATE)] + [features_ones])
target_downsampled = pd.concat(
[target_zeros.sample(frac=fraction,random_state=RANDOM_STATE)] + [target_ones])
features_downsampled, target_downsampled = shuffle(
features_downsampled, target_downsampled, random_state=RANDOM_STATE)
print(features_downsampled.shape,target_downsampled.shape)
#Check downsample counts (mask with target_downsampled so the indices align)
features_zeros = features_downsampled[target_downsampled==0]
features_ones = features_downsampled[target_downsampled==1]
target_zeros = target_downsampled[target_downsampled==0]
target_ones = target_downsampled[target_downsampled==1]
print(features_zeros.shape)
print(features_ones.shape)
print(target_zeros.shape)
print(target_ones.shape)
```
# CONCLUSION
Training dataset: upsampled and downsampled versions created. I tested various values of repeat and frac; the values shown above gave the best F1 scores.
# STEP-7: Evaluating models with Upsampling, Downsampling
# LogisticRegression with upsample, Hyperparameters: balanced weights
```
#Train the LogisticRegression model with the new data
method="LogisticRegression"
hyperparam="Balanced, upsampled"
model = LogisticRegression(solver='liblinear',class_weight='balanced',random_state=RANDOM_STATE)
supervisedModel(model,features_upsampled, target_upsampled,method, hyperparam)
```
# DecisionTree with upsample, Hyperparameters: depth, balanced weights
```
prevScore=0
foundDepth=0
for depth in range(1, 10, 1):
model = DecisionTreeClassifier(max_depth=depth, random_state=RANDOM_STATE,class_weight='balanced')
model.fit(features_upsampled, target_upsampled)
predicted_valid = model.predict(features_valid)
scoreR = f1_score(target_valid,predicted_valid)
print("scoreR:",scoreR,"Depth:",depth)
if (scoreR > prevScore):
prevScore=scoreR
foundDepth=depth
print('maxDepth',foundDepth)
method="DecisionTreeClassifier"
hyperparam = "Balanced, upsampled, depth: "+str(foundDepth)
model = DecisionTreeClassifier(max_depth=foundDepth,class_weight='balanced',random_state=RANDOM_STATE)
supervisedModel(model,features_upsampled, target_upsampled,method, hyperparam)
```
# RandomForest with upsample, Hyperparameters: n_estimators, depth, balanced weights
```
from sklearn.ensemble import RandomForestClassifier
prevScore=0.0
foundDepth=0
for depth in range(1, 16, 1):
model = RandomForestClassifier(n_estimators=20, max_depth=depth, random_state=RANDOM_STATE, class_weight='balanced')
model.fit(features_upsampled, target_upsampled)
predicted_valid = model.predict(features_valid)
f1_scor = f1_score(target_valid,predicted_valid)
print("f1Score:",f1_scor,"Depth:",depth)
if (f1_scor > prevScore):
prevScore=f1_scor
foundDepth=depth
print('maxDepth',foundDepth,"f1Score:",prevScore)
model = RandomForestClassifier(n_estimators=100, max_depth=foundDepth, random_state=RANDOM_STATE, class_weight='balanced')
method="RandomForestClassifier"
hyperparam = "Balanced,upsampled, depth: "+str(foundDepth)
supervisedModel(model,features_upsampled, target_upsampled,method, hyperparam)
```
# LogisticRegression with downsample, Hyperparameters: balanced weights
```
#Train the LogisticRegression model with the new data
method="LogisticRegression"
hyperparam="Balanced, downsampled"
model = LogisticRegression(solver='liblinear',class_weight='balanced',random_state=RANDOM_STATE)
supervisedModel(model,features_downsampled, target_downsampled,method, hyperparam)
```
# DecisionTree with downsample, Hyperparameters: depth, balanced weights
```
prevScore=0
foundDepth=0
for depth in range(1, 10, 1):
model = DecisionTreeClassifier(max_depth=depth, random_state=RANDOM_STATE,class_weight='balanced')
model.fit(features_downsampled, target_downsampled)
predicted_valid = model.predict(features_valid)
scoreR = f1_score(target_valid,predicted_valid)
print("scoreR:",scoreR,"Depth:",depth)
if (scoreR > prevScore):
prevScore=scoreR
foundDepth=depth
print('maxDepth',foundDepth)
method="DecisionTreeClassifier"
hyperparam = "Balanced, downsampled, depth: "+str(foundDepth)
model = DecisionTreeClassifier(max_depth=foundDepth,class_weight='balanced',random_state=RANDOM_STATE)
supervisedModel(model,features_downsampled, target_downsampled,method, hyperparam)
```
# RandomForest with downsample, Hyperparameters: n_estimators, depth, balanced weights
```
from sklearn.ensemble import RandomForestClassifier
prevScore=0.0
foundDepth=0
for depth in range(1, 16, 1):
model = RandomForestClassifier(n_estimators=20, max_depth=depth, random_state=RANDOM_STATE, class_weight='balanced')
model.fit(features_downsampled, target_downsampled)
predicted_valid = model.predict(features_valid)
f1_scor = f1_score(target_valid,predicted_valid)
print("f1Score:",f1_scor,"Depth:",depth)
if (f1_scor > prevScore):
prevScore=f1_scor
foundDepth=depth
print('maxDepth',foundDepth)
model = RandomForestClassifier(n_estimators=100, max_depth=foundDepth, random_state=RANDOM_STATE, class_weight='balanced')
method="RandomForestClassifier"
hyperparam = "Balanced, downsampled, depth: "+str(foundDepth)
supervisedModel(model,features_downsampled, target_downsampled,method, hyperparam)
#SORT all classifiers by the f1_score on test dataset
df_results = df_results.sort_values(by='f1_test',ascending=False)
df_results
df_results.plot(y=['f1_test','accuracy_score','auc_roc'], kind="bar", stacked=True,figsize=(12,6)).legend(loc='best')
plt.title("Methods and Scores")
plt.xlabel("Method")
plt.ylabel("Scores")
```
# STEP-8: OVERALL CONCLUSION
- From the table above, the best classifier I found is:
- RandomForestClassifier, balanced, depth: 9
- Its f1_score is 0.6313, accuracy_score is above 0.84, and auc_roc is 0.77.
- I will plot the ROC curve and precision-recall curve for this classifier.
# ROC curve, Precision-Recall curve for the BEST MODEL
RandomForestClassifier with n_estimators=100, max_depth=9, class_weight='balanced'
```
model = RandomForestClassifier(n_estimators=100, max_depth=9, random_state=RANDOM_STATE, class_weight='balanced')
model.fit(features_train, target_train)
probabilities_test = model.predict_proba(features_test)
probabilities_one_test = probabilities_test[:, 1]
print(probabilities_one_test[:5])
#roc curve
auc_roc = roc_auc_score(target_test, probabilities_one_test)
print(auc_roc)
fpr, tpr, thresholds = roc_curve(target_test,probabilities_one_test)
plt.figure()
# < plot the graph >
plt.plot(fpr,tpr)
# ROC curve for random model (looks like a straight line)
plt.plot([0, 1], [0, 1], linestyle='--')
plt.xlim([0.0,1.0])
plt.ylim([0.0,1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve')
plt.show()
#precision-recall curve
precision, recall, thresholds = precision_recall_curve(target_test, probabilities_test[:, 1])
plt.figure(figsize=(6, 6))
plt.step(recall, precision, where='post')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('Precision-Recall Curve')
plt.show()
```
# CONCLUSION
- We have imbalanced data with more 0 Churn values than 1 values.
- Using an ROC curve with an imbalanced dataset can be deceptive and lead to incorrect interpretations of model skill: ROC curves are built from the true positive rate and false positive rate, each a ratio computed within a single class, so they do not reflect the class distribution at all.
- The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets.
- Our ROC curve and precision-recall curve suggest that the model performs well: the ROC curve sits well above the diagonal baseline, and the precision-recall curve stays high.
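As a side illustration (separate from this project's pipeline), the point that ROC can look optimistic under imbalance can be shown on synthetic data; everything below is local to the example:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split

# Heavily imbalanced synthetic problem: roughly 95% negatives, with label noise.
X, y = make_classification(n_samples=5000, weights=[0.95], flip_y=0.05,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# ROC AUC is built from within-class rates, so it can look strong even when
# precision on the rare class is weak; average precision (the area under the
# precision-recall curve) is more sensitive to that weakness.
auc = roc_auc_score(y_te, proba)
ap = average_precision_score(y_te, proba)
print("ROC AUC:", auc)
print("Average precision:", ap)
```

On data like this, the average precision typically comes out noticeably lower than the ROC AUC, which is exactly the gap the bullet points above describe.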
# SANITY CHECK
- To perform a sanity check, let's see whether the model performs better than random classification, or than always predicting a single class (the majority).
- Let's find the larger of the two classes.
- We can then compare our model against the **accuracy** of that constant classifier (correct predictions / total predictions).
### SANITY CHECK ON ENTIRE DATASET
```
class_frequency = df['Exited'].value_counts(normalize=True)
print(class_frequency)
class_frequency.plot(kind='bar')
```
- Let's choose the retained clients, since that is the larger of the two classes.
- Our best model produces a test accuracy of 81%; let's see if it outperforms this baseline classification.
```
target = df['Exited']
features = df.drop('Exited', axis=1)
target_pred_constant = pd.Series([0 for x in range(len(target))], index=target.index)
print('Accuracy on the entire dataset when always predicting "retained" (Exited=0):', accuracy_score(target,target_pred_constant))
```
### SANITY CHECK ON TEST DATASET
```
stay_target = (target_test == 0)
exit_target = (target_test == 1)
print("Number of retained customer clients", stay_target.sum())
print("Number of churn clients", exit_target.sum())
accuracy_check = stay_target.sum() / target_test.shape[0]
print("Accuracy of the Retained customer classifier", accuracy_check )
print("Accuracy of the Churn customer classifier", exit_target.sum() / target_test.shape[0] )
print(classification_report(target_test, predicted_test, target_names=target_names))
```
# CONCLUSION
- The churn customer base is 20% of the entire dataset, and 21% of the test dataset.
- Our model predicted the retained customers correctly with f1_score of .9, and the churning customers with f1_score of .63.
# FINAL CONCLUSION
- From the above exercise, the best classifier I found for the Churn.csv dataset is:
- RandomForestClassifier, balanced, depth: 9
- Its f1_score is 0.6313, accuracy_score is above 0.84, and auc_roc is 0.77.
- The ROC curve and precision-recall curve for this classifier also look strong.
# Project evaluation
We’ve put together the evaluation criteria for the project. Read this carefully before moving on to the task.
Here’s what the reviewers will look at when reviewing your project:
- [x] How did you prepare the data for training? Have you processed all of the feature types?
- [x] Have you explained the preprocessing steps well enough?
- [x] How did you investigate the balance of classes?
- [x] Did you study the model without taking into account the imbalance of classes?
- [x] What are your findings about the task research?
- [x] Have you correctly split the data into sets?
- [x] How have you worked with the imbalance of classes?
- [x] Did you use at least two techniques for imbalance fixing?
- [x] Have you performed the model training, validation, and final testing correctly?
- [x] How high is your F1 score?
- [x] Did you examine the AUC-ROC values?
- [x] Have you kept to the project structure and kept the code neat?
# Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
```
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
```
## Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
```
rides[:24*10].plot(x='dteday', y='cnt')
```
### Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
```
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
```
### Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
```
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
```
### Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
```
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
```
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
```
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
```
## Time to build the network
Below you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.
> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
2. Implement the forward pass in the `train` method.
3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
4. Implement the forward pass in the `run` method.
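To make the forward/backward passes above concrete, here is a minimal NumPy sketch of a 3-2-1 network with a sigmoid hidden layer and an identity output. All names and shapes here are illustrative only; the actual implementation must go in `my_answers.py` and follow that file's `NeuralNetwork` class structure.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def forward(X, w_in_hidden, w_hidden_out):
    """Forward pass: sigmoid hidden layer, identity (f(x) = x) output node."""
    hidden_out = sigmoid(X @ w_in_hidden)   # hidden layer activations
    final_out = hidden_out @ w_hidden_out   # output activation is f(x) = x
    return hidden_out, final_out

def backward(X, y, hidden_out, final_out, w_hidden_out, lr):
    """One weight update for a batch. Because f(x) = x, f'(x) = 1,
    so the output error term is simply (y - prediction)."""
    error = y - final_out                                        # output error
    hidden_error = error @ w_hidden_out.T                        # error propagated back
    hidden_grad = hidden_error * hidden_out * (1 - hidden_out)   # times sigmoid derivative
    n = X.shape[0]
    d_w_ih = lr * (X.T @ hidden_grad) / n
    d_w_ho = lr * (hidden_out.T @ error) / n
    return d_w_ih, d_w_ho

# Tiny smoke run with the same 3-2-1 shape as the unit tests below
rng = np.random.default_rng(21)
X = np.array([[0.5, -0.2, 0.1]])
y = np.array([[0.4]])
w_ih = rng.normal(0.0, 0.1, (3, 2))
w_ho = rng.normal(0.0, 0.1, (2, 1))
h, out = forward(X, w_ih, w_ho)
d_ih, d_ho = backward(X, y, h, out, w_ho, lr=0.5)
w_ih += d_ih
w_ho += d_ho
```

After one such update, the prediction should have moved toward the target, which is the behavior the unit tests below check on fixed weights.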
```
#############
# In the my_answers.py file, fill out the TODO sections as specified
#############
from my_answers import NeuralNetwork
def MSE(y, Y):
return np.mean((y-Y)**2)
```
## Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
```
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
```
## Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
### Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
### Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
### Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
```
import sys
####################
### Set the hyperparameters in your my_answers.py file ###
####################
from my_answers import iterations, learning_rate, hidden_nodes, output_nodes
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
```
## Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
```
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```
## OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
The model does a decent job of following the trend, but seems unable to adjust to the sudden drop in the second half of the data being predicted.
# Lab 1
The second section of this notebook explores the subject; the proper homework is presented in the last two sections, which show the differences and similarities across parameters (noise and function).
## Simulation preparation
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#import seaborn as sns # makes matplotlib plots prettier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
degrees = list(range(1, 31))
# print(degrees)
def test_function(true_fun, n_samples = 30, make_figure = True, label = ""):
square_errors = []
np.random.seed(0)
X = np.sort(np.random.rand(n_samples))
y = true_fun(X) + np.random.randn(n_samples) * 0.1
for i in range(len(degrees)):
polynomial_features = PolynomialFeatures(degree=degrees[i],
include_bias=False)
linear_regression = LinearRegression()
pipeline = Pipeline([("polynomial_features", polynomial_features),
("linear_regression", linear_regression)])
pipeline.fit(X[:, np.newaxis], y)
# Evaluate the models using crossvalidation
scores = cross_val_score(pipeline, X[:, np.newaxis], y,
scoring="neg_mean_squared_error", cv=10)
        sq_err = -scores.mean()  # scores are negative MSE, so negate to get the mean squared error
        square_errors.append(sq_err)
if make_figure == True :
plt.figure(figsize=(17, 10))
plt.title("samples count: " + str(n_samples))
plt.xlabel("x")
plt.ylabel("y")
plt.plot(degrees, square_errors, 'o', label = str(n_samples) + " samples | " + label)
if make_figure == True:
plt.show()
# return square_errors
# test_function(lambda X: np.cos(1.5 * np.pi * X), 10)
```
## Testing different noise for a cosine function
```
test_function(lambda X: np.cos(1.5 * np.pi * X), 10)
test_function(lambda X: np.cos(1.5 * np.pi * X), 15)
for i in range(10,20,2):
test_function(lambda X: np.cos(1.5 * np.pi * X), i)
for i in range(10,31,5):
test_function(lambda X: np.sin(X), i)
```
## Change of noise for the same function
```
def test_noise_change(func, label = "", step = 5):
plt.figure(figsize=(17, 10))
plt.xlabel("x")
plt.ylabel("y")
for i in range(10,31, step):
test_function(func, i, False, label = label)
plt.legend(loc="best")
plt.show()
test_noise_change(lambda X: X**2, label = "x^2")
test_noise_change(lambda X: np.cos(X), label = "cos(x)")
test_noise_change(lambda X: np.tan(X), label = "tan(x)")
```
## Same noise for different functions
```
def test_noise_for_functions(n_samples, funcs = []):
plt.figure(figsize=(17, 10))
plt.xlabel("x")
plt.ylabel("y")
for (func, label) in funcs:
test_function(func, n_samples, False, label = label)
plt.legend(loc="best")
plt.show()
test_noise_for_functions(10, [(lambda X: X**2, "x^2"), (lambda X: np.cos(X), "cos(x)"), (lambda X: np.tan(X), "tan(x)")])
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
import pickle
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
# functions
def rem_outliers(df, col):
''' Remove outliers which fall outside of 3 standard deviations above and below the mean of the data set
Input
(0) dataframe containing the data
(1) column to remove outliers from
        Output
        rows of df that fall outside the cutoff in the specified column are dropped from df in place;
        returns a string stating the count of outliers removed '''
mean, cutoff = np.mean(df[col]), np.std(df[col]) * 3 # 3 stddev outside the mean
lower, upper = mean - cutoff, mean + cutoff
outliers = [x for x in df[col] if x < lower or x > upper]
df.drop(df[(df[col] > upper) | (df[col] < lower)].index, inplace=True)
return f'{len(outliers)} outliers removed'
```
# Models incorporating weather data
```
# import joined table
df = pd.read_csv('data/flights_for_weather_joined.csv')
# drop useless columns
df.drop(columns=['Unnamed: 0', 'origin_city_name', 'dest_city_name', 'dup'], inplace=True)
# split data
mask = np.random.rand(len(df)) < 0.8
df_train = df[mask].copy()
df_test = df[~mask].copy()
# remove outliers for arr_delay
rem_outliers(df_train, 'arr_delay')
# get mean delays per carrier
carrier_delays = df_train.groupby('mkt_unique_carrier').mean()['arr_delay'].sort_values()
carrier_delays.sort_values()
plt.bar(x=carrier_delays.index, height=carrier_delays.values)
```
There seems to be a significant difference in average delays per carrier, so that's worth factoring in.
```
# create rankings for carriers
rankings = {}
for carr in carrier_delays.index:
if carrier_delays[carr] <= -2:
rankings[carr] = 0
elif carrier_delays[carr] > -2 and carrier_delays[carr] <= 0:
rankings[carr] = 1
elif carrier_delays[carr] > 0:
rankings[carr] = 2
def rank_carrier(carr):
return rankings[carr]
df_train['carrier_speed_rank'] = df_train['mkt_unique_carrier'].apply(rank_carrier)
# check distribution of mean delay per flight number
flight_num_delay = df_train.groupby(['op_unique_carrier','op_carrier_fl_num']).mean().arr_delay.sort_values()
plt.hist(flight_num_delay, bins=50)
plt.xlabel("mean_delay")
plt.title("Mean delay times by flight number")
# bin by mean delay time
ranks = pd.qcut(flight_num_delay, 5, labels = [0,1,2,3,4])
ranks
df_train['flight_num_speed_rank'] = df_train.apply(lambda x: ranks.loc[x.op_unique_carrier, x.op_carrier_fl_num], axis=1)
# add month column
def month(datestring):
date = datetime.strptime(datestring, "%Y-%m-%d")
return date.month
df_train['month'] = df_train.fl_date.apply(month)
# check correlation of month with delay
month_delays = df_train.groupby('month').mean().arr_delay
plt.bar(x=month_delays.index, height=month_delays.values)
plt.xlabel("month")
plt.ylabel("mean arr_delay")
plt.title("mean delays per month")
# bin months by mean delays
bins = pd.qcut(month_delays, 3, labels=[0,1,2])
df_train['month_rank'] = df_train.month.apply(lambda x: bins.loc[x])
# add columns for hour of departure and arrival
def hour(t):
s = str(t)
if len(s) < 3:
return 0
elif len(s) == 3:
return int(s[0])
elif len(s) == 4:
if int(s[:2]) == 24:
return 0
else:
return int(s[:2])
df_train['dep_hour'] = df_train.crs_dep_time.apply(hour)
df_train['arr_hour'] = df_train.crs_arr_time.apply(hour)
# relate hour of departure to arr_delay
dep_hour_delay = df_train.groupby('dep_hour').mean().arr_delay
plt.bar(x=dep_hour_delay.index, height=dep_hour_delay.values)
# 0300 is an outlier because there's only one case of it
ranks = pd.qcut(dep_hour_delay, 3, labels=[0,1,2])
df_train['dep_hour_rank'] = df_train.dep_hour.apply(lambda x: ranks.loc[x])
# same for arrival hour
arr_hour_delay = df_train.groupby('arr_hour').mean().arr_delay
plt.bar(x=arr_hour_delay.index, height=arr_hour_delay.values)
# bin values
ranks = pd.qcut(arr_hour_delay, 3, labels=[0,1,2])
df_train['arr_hour_rank'] = df_train.arr_hour.apply(lambda x: ranks.loc[x])
# save progress
# df_train.to_csv('data/weather_features_save.csv')
```
#### Weather columns
```
df_train.head()
print(len(df_train))
df_train.dropna(inplace=True)
print(len(df_train))
# check distribution of weather columns
plt.hist(df_train.precip, bins=20)
# how well does nonzero precipitation correlate with delays?
df_precip = df_train[df_train.precip != 0]
precip = pd.cut(df_precip.precip, 3, labels = ['light','moderate','heavy'])
df_precip.groupby(precip).mean().arr_delay
# plt.scatter(x=precip.precip, y=precip.arr_delay)
df_precip.precip.sort_values()
precip_bins = pd.cut(df_precip.precip, 3, retbins=True)
precip_bins[1]
precip = pd.cut(df_precip.precip, 3, labels=["light", "moderate", "heavy"])
df_train['precip_cat'] = precip.astype(str)
df_train.precip_cat.fillna(value="None", inplace=True)
precip_means = df_train.groupby('precip_cat').mean().arr_delay.sort_values()
plt.bar(x=precip_means.index, height=precip_means.values)
plt.title("Effect of precipitation on delays")
plt.xlabel("Precipitation level")
plt.ylabel("Mean arr_delay")
```
#### Snow
```
plt.hist(df_train.snow, bins=20)
df_snow = df_train[df_train.snow != 0]
snowcats = pd.qcut(df_snow.snow, 3)
df_snow.groupby(snowcats).mean().arr_delay
# add this into df_train
df_train['snow_cat'] = snowcats.astype(str)
df_train.snow_cat.fillna(value="None", inplace=True)
df_train.snow_cat.value_counts()
snow_means = df_train.groupby('snow_cat').mean().arr_delay.sort_values()
plt.bar(x=snow_means.index, height=snow_means.values)
plt.title("Effect of snow on delays")
plt.xlabel("Snow level")
plt.ylabel("Mean arr_delay")
```
#### Windgust
```
plt.hist(df_train.windgust, bins=20)
# bin only on nonzero values
df_wind = df_train[df_train.windgust != 0]
bins = pd.qcut(df_wind.windgust, 3)
df_wind.groupby(bins).mean().arr_delay.sort_values()
df_train['windgust_cat'] = bins.astype(str)
df_train.windgust_cat.fillna(value="None", inplace=True)
```
### cloudcover
```
plt.hist(df_train.cloudcover)
bins = pd.qcut(df_train.cloudcover, 3)
df_train.groupby(bins).mean().arr_delay.sort_values()
df_train['cloud_cat'] = bins
# df_train.to_csv('data/weather_feature_save.csv')
df_train.columns
feats = df_train[['fl_date', 'mkt_unique_carrier', 'mkt_carrier_fl_num',
'op_unique_carrier', 'tail_num', 'op_carrier_fl_num',
'origin_airport_id', 'origin', 'dest_airport_id', 'dest',
'crs_dep_time', 'dep_time', 'dep_delay', 'taxi_out', 'wheels_off',
'wheels_on', 'taxi_in', 'crs_arr_time', 'arr_time', 'arr_delay',
'diverted', 'crs_elapsed_time', 'actual_elapsed_time', 'distance',
'carrier_delay', 'weather_delay', 'nas_delay', 'security_delay',
'late_aircraft_delay', 'precip', 'snow', 'windgust', 'cloudcover',
'carrier_speed_rank', 'flight_num_speed_rank', 'month', 'month_rank',
'dep_hour', 'arr_hour', 'dep_hour_rank', 'arr_hour_rank', 'precip_cat',
'snow_cat', 'windgust_cat', 'cloud_cat']]
feats
feats = pd.get_dummies(data=feats)
```
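A quick sketch of what `pd.get_dummies` does to the string-typed category columns created above (illustrative values; the real frame has many more columns):

```python
import pandas as pd

df = pd.DataFrame({"precip_cat": ["None", "light", "heavy", "light"]})

# each distinct value becomes an indicator column named <column>_<value>,
# which is where names like precip_cat_None and precip_cat_light come from
dummies = pd.get_dummies(df)
print(dummies.columns.tolist())
```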
# Model 1: LinReg, sample of data, weather only
```
# create sample of data
feats_sample = feats.sample(n=10000, random_state=58)
X = feats_sample[['precip_cat_None',
'precip_cat_heavy', 'precip_cat_light', 'precip_cat_moderate',
'snow_cat_None', 'snow_cat_heavy', 'snow_cat_light',
'windgust_cat_None', 'windgust_cat_light', 'windgust_cat_moderate',
'windgust_cat_strong', 'cloud_cat_0', 'cloud_cat_1', 'cloud_cat_2']].to_numpy()
y = feats_sample.arr_delay
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
lr = LinearRegression()
lr.fit(X_train, y_train)
filename = 'models/lr.sav'
pickle.dump(lr, open(filename, 'wb'))
lr.score(X_test, y_test)
```
# Model 2: Whole set, weather only
```
X = feats[['precip_cat_None',
'precip_cat_heavy', 'precip_cat_light', 'precip_cat_moderate',
'snow_cat_None', 'snow_cat_heavy', 'snow_cat_light',
'windgust_cat_None', 'windgust_cat_light', 'windgust_cat_moderate',
'windgust_cat_strong', 'cloud_cat_0', 'cloud_cat_1', 'cloud_cat_2']].to_numpy()
y = feats.arr_delay
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
lr2 = LinearRegression()
lr2.fit(X_train, y_train)
filename = 'models/lr2.sav'
pickle.dump(lr2, open(filename, 'wb'))
lr2.score(X_test, y_test)
```
# Model 3: Linear Regression, whole set, all features
```
X = feats.drop(columns='arr_delay').to_numpy()
y = feats.arr_delay.to_numpy()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
lr3 = LinearRegression()
lr3.fit(X_train, y_train)
filename = 'models/lr3.sav'
pickle.dump(lr3, open(filename, 'wb'))
lr3.score(X_test, y_test)
```
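The fit/pickle/score pattern repeated in all three models can be sketched end-to-end on synthetic data (illustrative; the `models/*.sav` paths and feature names are the notebook's own):

```python
import io
import pickle

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(58)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

lr = LinearRegression().fit(X, y)

# round-trip through pickle (in memory here; the notebook writes to disk)
buf = io.BytesIO()
pickle.dump(lr, buf)
buf.seek(0)
restored = pickle.load(buf)

# the restored estimator reproduces the original predictions exactly
print(np.allclose(lr.predict(X), restored.predict(X)))
```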
# Import
```
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable
from torch.utils.data import TensorDataset, Dataset, DataLoader, random_split
from torch.nn.utils.rnn import pack_padded_sequence, pack_sequence, pad_packed_sequence, pad_sequence
import os
import sys
import pickle
import logging
import random
from pathlib import Path
from math import log, ceil
from typing import List, Tuple, Set, Dict
import numpy as np
import pandas as pd
from sklearn import metrics
import seaborn as sns
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.datasets import TranslationDataset, Multi30k
from torchtext.data import Field, BucketIterator
import spacy
import random
import math
import os
import time
sys.path.append('..')
from src.data import prepare_data, prepare_seq2seq_data, SOURCE_ASSIST0910_SELF, SOURCE_ASSIST0910_ORIG
from src.utils import sAsMinutes, timeSince
sns.set()
sns.set_style('whitegrid')
sns.set_palette('Set1')
# =========================
# PyTorch version & GPU setup
# =========================
print('PyTorch:', torch.__version__)
dev = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# dev = torch.device('cpu')
print('Using Device:', dev)
dirname = Path().resolve()
dirname
# =========================
# Seed
# =========================
SEED = 0
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
# torch.backends.cudnn.deterministic = True
# torch.backends.cudnn.benchmark = False
# =========================
# Parameters
# =========================
# model_name = 'RNN'
sequence_size = 20
epoch_size = 500
lr = 0.1
batch_size, n_hidden, n_skills, n_layers = 100, 200, 124, 1
n_output = n_skills
PRESERVED_TOKENS = 2 # PAD, SOS
onehot_size = n_skills * 2 + PRESERVED_TOKENS
n_input = ceil(log(2 * n_skills))
# n_input = onehot_size #
NUM_EMBEDDIGNS, ENC_EMB_DIM, ENC_DROPOUT = onehot_size, n_input, 0.6
OUTPUT_DIM, DEC_EMB_DIM, DEC_DROPOUT = onehot_size, n_input, 0.6
# OUTPUT_DIM = n_output = 124 # TODO: this is what we actually want
HID_DIM, N_LAYERS = n_hidden, n_layers
# =========================
# Data
# =========================
train_dl, eval_dl = prepare_seq2seq_data(
SOURCE_ASSIST0910_ORIG, n_skills, PRESERVED_TOKENS, min_n=3, max_n=sequence_size, batch_size=batch_size, device=dev, sliding_window=1)
# adjust for the mismatch <- ???
#OUTPUT_DIM = eval_dl.dataset.tensors[1].shape
# =========================
# Model
# =========================
class Encoder(nn.Module):
def __init__(self, num_embeddings, emb_dim, hid_dim, n_layers, dropout):
# def __init__(self, dev, model_name, n_input, n_hidden, n_output, n_layers, batch_size, dropout=0.6, bidirectional=False):
super().__init__()
self.num_embeddings = num_embeddings
self.emb_dim = emb_dim
self.hid_dim = hid_dim
self.n_layers = n_layers
self.dropout = dropout
self.embedding = nn.Embedding(num_embeddings, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout=dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, input):
#src = [src sent len, batch size]
embedded = self.dropout(self.embedding(input))
#embedded = [src sent len, batch size, emb dim]
outputs, (hidden, cell) = self.rnn(embedded)
#outputs = [src sent len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#outputs are always from the top hidden layer
return hidden, cell
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
# def __init__(self, dev, model_name, n_input, n_hidden, n_output, n_layers, batch_size, dropout=0.6, bidirectional=False):
super().__init__()
self.emb_dim = emb_dim
self.hid_dim = hid_dim
self.output_dim = output_dim
self.n_layers = n_layers
self.dropout = dropout
self.embedding = nn.Embedding(output_dim, emb_dim) # 250->6
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout=dropout) # 6, 100, 1
self.out = nn.Linear(hid_dim, output_dim) # 100, 250
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, cell):
#input = [batch size]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#n directions in the decoder will both always be 1, therefore:
#hidden = [n layers, batch size, hid dim]
#context = [n layers, batch size, hid dim]
#print(input.shape) #torch.Size([21])
input = input.unsqueeze(0)
#print(input.shape) #torch.Size([1, 21])
#input = [1, batch size]
embedded = self.dropout(self.embedding(input))
#embedded = self.dropout(input)
#print(embedded.shape) # torch.Size([1, 15, 6])
#embedded = [1, batch size, emb dim]
output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
#output = [sent len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#sent len and n directions will always be 1 in the decoder, therefore:
#output = [1, batch size, hid dim]
#hidden = [n layers, batch size, hid dim]
#cell = [n layers, batch size, hid dim]
prediction = self.out(output.squeeze(0))
#prediction = [batch size, output dim]
return prediction, hidden, cell
OUTP = None
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, dev):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = dev
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
assert encoder.n_layers == decoder.n_layers, \
"Encoder and decoder must have equal number of layers!"
def forward(self, src, trg, actual_q=None, teacher_forcing_ratio=0.5):
#src = [src sent len, batch size]
#trg = [trg sent len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
batch_size = trg.shape[1]
max_len = trg.shape[0]
# print(f'target shape: {trg.shape}')
# print(f'batch_size: {batch_size}, max_len: {max_len}')
trg_vocab_size = self.decoder.output_dim
# print(f'vocab size: {trg_vocab_size}')
#tensor to store decoder outputs
outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device) # TODO: fix hard coding
outputs_prob = torch.zeros(max_len, batch_size, 124).to(self.device) # TODO: fix hard coding
# print('s2s outputs shape:', outputs.shape)
#last hidden state of the encoder is used as the initial hidden state of the decoder
hidden, cell = self.encoder(src)
# #first input to the decoder is the <sos> tokens
# input = trg[0,:]
#
# # print(actual_q.shape) # 100, 20, 124
# for t in range(1, max_len):
#
# output, hidden, cell = self.decoder(input, hidden, cell)
# # print(output.shape) # 100, 250
# # i.e. batches of 100 are processed at a time, stepping through the length-20 sequence from the front
# #global OUTP
# #OUTP = output
# outputs[t] = output
# o_wro = torch.sigmoid(output[:, 2:2+124])
# o_cor = torch.sigmoid(output[:, 2+124:])
# outputs_prob[t] = o_cor / (o_cor + o_wro)
# teacher_force = random.random() < teacher_forcing_ratio
# top1 = output.max(1)[1]
# flag = torch.zeros(100, 2) # PRESERVED_TAGS = 2
# flag = torch.cat((flag, actual_q[:,t], actual_q[:,t]), dim=1)
# top1 = torch.max(torch.sigmoid(output) * flag, dim=1)[1]
# input = (trg[t] if teacher_force else top1)
# print(actual_q.shape) # 100, 20, 124
input = trg[-2,:]
output, hidden, cell = self.decoder(input, hidden, cell)
# print(output.shape) # 100, 250
# i.e. batches of 100 are processed at a time, stepping through the length-20 sequence from the front
#global OUTP
#OUTP = output
outputs = output.unsqueeze(0)
o_wro = torch.sigmoid(output[:, 2:2+124])
o_cor = torch.sigmoid(output[:, 2+124:])
outputs_prob = (o_cor / (o_cor + o_wro)).unsqueeze(0)
return outputs, outputs_prob
# =========================
# Prepare and Train
# =========================
enc = Encoder(NUM_EMBEDDIGNS, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT).to(dev)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT).to(dev)
model = Seq2Seq(enc, dec, dev).to(dev)
# Load model
# ----------
load_model = None
epoch_start = 1
load_model = '/home/qqhann/qqhann-paper/ECML2019/dkt_neo/models/s2s_2019_0404_2021.100'
if load_model:
epoch_start = int(load_model.split('.')[-1]) + 1
model.load_state_dict(torch.load(load_model))
model = model.to(dev)
# ----------
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
loss_func = nn.BCELoss()
opt = optim.SGD(model.parameters(), lr=lr)
def train():
pass
def evaluate():
pass
PRED = None
def main():
debug = False
logging.basicConfig()
logger = logging.getLogger('dkt log')
logger.setLevel(logging.INFO)
train_loss_list = []
train_auc_list = []
eval_loss_list = []
eval_auc_list = []
eval_recall_list = []
eval_f1_list = []
x = []
start_time = time.time()
for epoch in range(epoch_start, epoch_size + 1):
print_train = epoch % 10 == 0
print_eval = epoch % 10 == 0
print_auc = epoch % 10 == 0
# =====
# TRAIN
# =====
model.train()
val_pred = []
val_actual = []
current_epoch_train_loss = []
for xs, ys, yq, ya, yp in train_dl:
input = xs
target = ys
input = input.permute(1, 0)
target = target.permute(1, 0)
out, out_prob = model(input, target, yq)
out = out.permute(1, 0, 2)
out_prob = out_prob.permute(1, 0, 2)
pred = torch.sigmoid(out) # squash into the [0, 1] range
# _, pred = torch.max(pred, 2)
target = torch.tensor([list(torch.eye(NUM_EMBEDDIGNS)[i]) for i in target.contiguous().view(-1)])\
.contiguous().view(batch_size, -1, NUM_EMBEDDIGNS).to(dev)
# --- data for metric evaluation
# print(out_prob.shape, yq[:,-1,:].unsqueeze(1).shape)
prob = torch.max(out_prob * yq[:,-1,:].unsqueeze(1), 2)[0]
val_pred.append(prob)
val_actual.append(ya[:,-1])
# ---
# print(prob.shape, ya.shape)
loss = loss_func(prob[:,-1], ya[:,-1])
current_epoch_train_loss.append(loss.item())
# backpropagation
opt.zero_grad()
loss.backward()
opt.step()
# stop at first batch if debug
if debug:
break
if print_train:
loss = np.array(current_epoch_train_loss)
logger.log(logging.INFO + (5 if epoch % 100 == 0 else 0),
'TRAIN Epoch: {} Loss: {}'.format(epoch, loss.mean()))
train_loss_list.append(loss.mean())
# # AUC, Recall, F1
# # for TRAIN, tensors carry gradients, so detach before computing metrics
# y = torch.cat(val_targ).cpu().detach().numpy()
# pred = torch.cat(val_prob).cpu().detach().numpy()
# # AUC
# fpr, tpr, thresholds = metrics.roc_curve(y, pred, pos_label=1)
# logger.log(logging.INFO + (5 if epoch % 100 == 0 else 0),
# 'TRAIN Epoch: {} AUC: {}'.format(epoch, metrics.auc(fpr, tpr)))
# train_auc_list.append(metrics.auc(fpr, tpr))
# =====
# EVAL
# =====
if print_eval:
with torch.no_grad():
model.eval()
val_pred = []
val_actual = []
current_eval_loss = []
for xs, ys, yq, ya, yp in eval_dl:
input = xs
target = ys
input = input.permute(1, 0)
target = target.permute(1, 0)
out, out_prob = model(input, target, yq)
out = out.permute(1, 0, 2)
out_prob = out_prob.permute(1, 0, 2)
pred = torch.sigmoid(out) # squash into the [0, 1] range
# _, pred = torch.max(pred, 2)
target = torch.tensor([list(torch.eye(NUM_EMBEDDIGNS)[i]) for i in target.contiguous().view(-1)])\
.contiguous().view(batch_size, -1, NUM_EMBEDDIGNS).to(dev)
# --- data for metric evaluation
prob = torch.max(out_prob * yq[:,-1,:].unsqueeze(1), 2)[0]
val_pred.append(prob)
val_actual.append(ya[:,-1])
# ---
# print(prob.shape, ya.shape)
loss = loss_func(prob[:,-1], ya[:,-1])
current_eval_loss.append(loss.item())
# stop at first batch if debug
if debug:
break
loss = np.array(current_eval_loss)
logger.log(logging.INFO + (5 if epoch % 100 == 0 else 0),
'EVAL Epoch: {} Loss: {}'.format(epoch, loss.mean()))
eval_loss_list.append(loss.mean())
# AUC, Recall, F1
if print_auc:
y = torch.cat(val_actual).view(-1).cpu() # TODO: skip the view? keep only the last element?
pred = torch.cat(val_pred).view(-1).cpu()
# AUC
fpr, tpr, thresholds = metrics.roc_curve(y, pred, pos_label=1)
logger.log(logging.INFO + (5 if epoch % 100 == 0 else 0),
'EVAL Epoch: {} AUC: {}'.format(epoch, metrics.auc(fpr, tpr)))
eval_auc_list.append(metrics.auc(fpr, tpr))
# # Recall
# logger.debug('EVAL Epoch: {} Recall: {}'.format(epoch, metrics.recall_score(y, pred.round())))
# # F1 score
# logger.debug('EVAL Epoch: {} F1 score: {}'.format(epoch, metrics.f1_score(y, pred.round())))
if epoch % 10 == 0:
x.append(epoch)
logger.info(f'{timeSince(start_time, epoch / epoch_size)} ({epoch} {epoch / epoch_size * 100})')
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, train_loss_list, label='train loss')
# ax.plot(x, train_auc_list, label='train auc')
ax.plot(x, eval_loss_list, label='eval loss')
ax.plot(x, eval_auc_list, label='eval auc')
ax.legend()
print(len(train_loss_list), len(eval_loss_list), len(eval_auc_list))
plt.show()
if __name__ == '__main__':
print('Using Device:', dev)
main()
model
import datetime
now = datetime.datetime.now().strftime('%Y_%m%d_%H%M')
torch.save(model.state_dict(), '/home/qqhann/qqhann-paper/ECML2019/dkt_neo/models/s2s_' + now + '.' + str(epoch))
```
# NLP - Hotel review sentiment analysis in python
```
# suppress warnings
import warnings
warnings.filterwarnings('ignore')
import os
dir_Path = 'D:\\01_DATA_SCIENCE_FINAL\\D-00000-NLP\\NLP-CODES\\AMAN-NLP-CODES\\AMAN_NLP_VIMP-CODE\\Project-6_Sentiment_Analysis_Amn\\'
os.chdir(dir_Path)
```
## Data Facts and Import
```
import pandas as pd
# Local directory
Reviewdata = pd.read_csv('train.csv')
#Data Credit - https://www.kaggle.com/anu0012/hotel-review/data
Reviewdata.head()
Reviewdata.shape
Reviewdata.head()
Reviewdata.info()
Reviewdata.describe().transpose()
```
## Data Cleaning / EDA
```
### Checking Missing values in the Data Set and printing the Percentage for Missing Values for Each Columns ###
count = Reviewdata.isnull().sum().sort_values(ascending=False)
percentage = ((Reviewdata.isnull().sum()/len(Reviewdata)*100)).sort_values(ascending=False)
missing_data = pd.concat([count, percentage], axis=1,
keys=['Count','Percentage'])
print('Count and percentage of missing values for the columns:')
missing_data
print("Response value counts:")
print(Reviewdata.Is_Response.value_counts())
print("*"*12)
print("Response value %ge:")
print(round(Reviewdata.Is_Response.value_counts(normalize=True)*100, 2))
print("*"*12)
import seaborn as sns
import matplotlib.pyplot as plt
sns.countplot(Reviewdata.Is_Response)
plt.show()
### Checking for the Distribution of Default ###
import matplotlib.pyplot as plt
%matplotlib inline
print('Percentage for default\n')
print(round(Reviewdata.Is_Response.value_counts(normalize=True)*100,2))
round(Reviewdata.Is_Response.value_counts(normalize=True)*100,2).plot(kind='bar')
plt.title('Percentage Distributions by review type')
plt.show()
#Removing columns
Reviewdata.drop(columns = ['User_ID', 'Browser_Used', 'Device_Used'], inplace = True)
# Apply first level cleaning
import re
import string
#This function converts to lower-case, removes square bracket, removes numbers and punctuation
def text_clean_1(text):
text = text.lower()
text = re.sub(r'\[.*?\]', '', text)
text = re.sub('[%s]' % re.escape(string.punctuation), '', text)
text = re.sub(r'\w*\d\w*', '', text)
return text
cleaned1 = lambda x: text_clean_1(x)
# Let's take a look at the updated text
Reviewdata['cleaned_description'] = pd.DataFrame(Reviewdata.Description.apply(cleaned1))
Reviewdata.head(10)
# Apply a second round of cleaning
def text_clean_2(text):
text = re.sub('[‘’“”…]', '', text)
text = re.sub('\n', '', text)
return text
cleaned2 = lambda x: text_clean_2(x)
# Let's take a look at the updated text
Reviewdata['cleaned_description_new'] = pd.DataFrame(Reviewdata['cleaned_description'].apply(cleaned2))
Reviewdata.head(10)
```
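Applying both cleaning passes to a sample review shows their combined effect (functions restated locally so the snippet runs standalone):

```python
import re
import string

def text_clean_1(text):
    # lower-case, drop [bracketed] spans, punctuation, and digit-bearing words
    text = text.lower()
    text = re.sub(r'\[.*?\]', '', text)
    text = re.sub('[%s]' % re.escape(string.punctuation), '', text)
    text = re.sub(r'\w*\d\w*', '', text)
    return text

def text_clean_2(text):
    # drop curly quotes, ellipses, and newlines
    text = re.sub('[‘’“”…]', '', text)
    text = re.sub('\n', '', text)
    return text

sample = "Great stay! Room [12B] was clean… 10/10"
cleaned = text_clean_2(text_clean_1(sample))
print(' '.join(cleaned.split()))  # great stay room was clean
```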
## Model training
```
from sklearn.model_selection import train_test_split
Independent_var = Reviewdata.cleaned_description_new
Dependent_var = Reviewdata.Is_Response
IV_train, IV_test, DV_train, DV_test = train_test_split(Independent_var, Dependent_var, test_size = 0.1, random_state = 225)
print('IV_train :', len(IV_train))
print('IV_test :', len(IV_test))
print('DV_train :', len(DV_train))
print('DV_test :', len(DV_test))
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
tvec = TfidfVectorizer()
clf2 = LogisticRegression(solver = "lbfgs")
from sklearn.pipeline import Pipeline
model = Pipeline([('vectorizer',tvec),('classifier',clf2)])
model.fit(IV_train, DV_train)
from sklearn.metrics import confusion_matrix
predictions = model.predict(IV_test)
confusion_matrix(DV_test, predictions)
```
## Model prediction
```
from sklearn.metrics import accuracy_score, precision_score, recall_score
print("Accuracy : ", accuracy_score(DV_test, predictions))
print("Precision : ", precision_score(DV_test, predictions, average = 'weighted'))
print("Recall : ", recall_score(DV_test, predictions, average = 'weighted'))
```
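Note that sklearn's metric functions expect `(y_true, y_pred)` in that order; for precision and recall, swapping the arguments swaps the two metrics. A toy illustration:

```python
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 1, 0]
y_pred = [1, 0, 0, 0]

# correct order: the single positive prediction is right, but only
# 1 of the 3 actual positives is found
print(precision_score(y_true, y_pred))  # 1.0
print(recall_score(y_true, y_pred))     # 0.333...

# swapped order: precision and recall trade places
print(precision_score(y_pred, y_true))  # 0.333...
print(recall_score(y_pred, y_true))     # 1.0
```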
## Trying on new reviews
```
example = ["I'm happy"]
result = model.predict(example)
print(result)
example = ["I'm frustrated"]
result = model.predict(example)
print(result)
# Drawback: unigram features miss negation ("not happy")
example = ["I'm not happy"]
result = model.predict(example)
print(result)
```
## TrainingPhase and General scheduler
Creates a scheduler that lets you train a model with following different [`TrainingPhase`](/callbacks.general_sched.html#TrainingPhase).
```
from fastai.gen_doc.nbdoc import *
from fastai.callbacks.general_sched import *
from fastai.vision import *
show_doc(TrainingPhase)
```
You can then schedule any hyper-parameter you want by using the following method.
```
show_doc(TrainingPhase.schedule_hp)
```
The phase will make the hyper-parameter vary from the first value in `vals` to the second, following `anneal`. If an annealing function is specified but `vals` is a float, it will decay to 0. If no annealing function is specified, the default is a linear annealing for a tuple, a constant parameter if it's a float.
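The cosine option can be sketched in plain NumPy (an illustrative re-implementation, not fastai's own code): the value follows half a cosine from `start` to `end` as the phase progresses from 0 to 1.

```python
import numpy as np

def annealing_cos_sketch(start, end, pct):
    # half-cosine interpolation: pct=0 gives start, pct=1 gives end,
    # with a flat slope at both endpoints
    cos_out = np.cos(np.pi * pct) + 1  # goes 2 -> 0 as pct goes 0 -> 1
    return end + (start - end) / 2 * cos_out

print(annealing_cos_sketch(1e-3, 0.0, 0.0))  # 0.001
print(annealing_cos_sketch(1e-3, 0.0, 0.5))  # ~0.0005
print(annealing_cos_sketch(1e-3, 0.0, 1.0))  # ~0.0
```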
```
jekyll_note("""If you want to use discriminative values, you can pass a numpy array in `vals` (or a tuple
of them for start and stop).""")
```
The basic hyper-parameters are named:
- 'lr' for learning rate
- 'mom' for momentum (or beta1 in Adam)
- 'beta' for the beta2 in Adam or the alpha in RMSprop
- 'wd' for weight decay
You can also add any hyper-parameter that is in your optimizer (even if it's custom or a [`GeneralOptimizer`](/general_optimizer.html#GeneralOptimizer)), like 'eps' if you're using Adam.
Let's make an example by using this to code [SGD with warm restarts](https://arxiv.org/abs/1608.03983).
```
def fit_sgd_warm(learn, n_cycles, lr, mom, cycle_len, cycle_mult):
n = len(learn.data.train_dl)
phases = [(TrainingPhase(n * (cycle_len * cycle_mult**i))
.schedule_hp('lr', lr, anneal=annealing_cos)
.schedule_hp('mom', mom)) for i in range(n_cycles)]
sched = GeneralScheduler(learn, phases)
learn.callbacks.append(sched)
if cycle_mult != 1:
total_epochs = int(cycle_len * (1 - (cycle_mult)**n_cycles)/(1-cycle_mult))
else: total_epochs = n_cycles * cycle_len
learn.fit(total_epochs)
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = Learner(data, simple_cnn((3,16,16,2)), metrics=accuracy)
fit_sgd_warm(learn, 3, 1e-3, 0.9, 1, 2)
learn.recorder.plot_lr()
show_doc(GeneralScheduler)
```
### Callback methods
You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.
```
show_doc(GeneralScheduler.on_batch_end, doc_string=False)
```
Takes a step in the current phase and prepares the hyperparameters for the next batch.
```
show_doc(GeneralScheduler.on_train_begin, doc_string=False)
```
Initializes the hyperparameters to the start values of the first phase.
## Undocumented Methods - Methods moved below this line will intentionally be hidden
## Problem Definition
In the following different ways of loading or implementing an optimization problem in our framework are discussed.
### By Class
A very detailed description of defining a problem through a class is already provided in the [Getting Started Guide](../getting_started.ipynb).
The following definition of a simple optimization problem with **one** objective and **two** constraints is considered. The problem has two constants, *const_1* and *const_2*, which can be modified by instantiating the problem with different parameters. By default, it consists of 10 variables, and the lower and upper bounds are within $[-5, 5]$ for all variables.
**Note**: The example below uses the `autograd` library, which calculates the gradients through automatic differentiation.
```
import numpy as np
import autograd.numpy as anp
from pymoo.model.problem import Problem
class MyProblem(Problem):
def __init__(self, const_1=5, const_2=0.1):
# define lower and upper bounds - 1d array with length equal to the number of variables
xl = -5 * anp.ones(10)
xu = 5 * anp.ones(10)
super().__init__(n_var=10, n_obj=1, n_constr=2, xl=xl, xu=xu, evaluation_of="auto")
# store custom variables needed for evaluation
self.const_1 = const_1
self.const_2 = const_2
def _evaluate(self, x, out, *args, **kwargs):
f = anp.sum(anp.power(x, 2) - self.const_1 * anp.cos(2 * anp.pi * x), axis=1)
g1 = (x[:, 0] + x[:, 1]) - self.const_2
g2 = self.const_2 - (x[:, 2] + x[:, 3])
out["F"] = f
out["G"] = anp.column_stack([g1, g2])
```
After creating a problem object, the evaluation function can be called. The `return_values_of` parameter can be overwritten to modify the list of returned parameters. The gradients for the objectives `dF` and constraints `dG` can be obtained as follows:
```
problem = MyProblem()
F, G, CV, feasible, dF, dG = problem.evaluate(np.random.rand(100, 10),
return_values_of=["F", "G", "CV", "feasible", "dF", "dG"])
```
**Elementwise Evaluation**
If the problem cannot be expressed with matrix operations, a serialized evaluation can be requested with the `elementwise_evaluation=True` flag. When the flag is set, the outer loop over solutions is handled internally, and `x` passed to `_evaluate` is a **one**-dimensional array representing a single solution.
```
class MyProblem(Problem):
def __init__(self, **kwargs):
super().__init__(n_var=2, n_obj=1, elementwise_evaluation=True, **kwargs)
def _evaluate(self, x, out, *args, **kwargs):
out["F"] = x.sum()
```
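Conceptually, the flag wraps the scalar evaluation in an outer loop over solutions; a schematic of that behaviour (hypothetical, not pymoo's actual implementation):

```python
import numpy as np

def elementwise_evaluate(evaluate_one, X):
    # call the per-solution evaluation on each row and stack the results,
    # mimicking what elementwise_evaluation=True arranges internally
    return np.array([evaluate_one(x) for x in X])

X = np.arange(10).reshape(5, 2)
F = elementwise_evaluate(lambda x: x.sum(), X)
print(F)  # [ 1  5  9 13 17]
```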
### By Function
Another way of defining a problem is through functions. On the one hand, many function calls need to be performed to evaluate a set of solutions; on the other hand, it is a very intuitive way of defining a problem.
```
import numpy as np
from pymoo.model.problem import FunctionalProblem
objs = [
lambda x: np.sum((x - 2) ** 2),
lambda x: np.sum((x + 2) ** 2)
]
constr_ieq = [
lambda x: np.sum((x - 1) ** 2)
]
problem = FunctionalProblem(10,
objs,
constr_ieq=constr_ieq,
xl=np.array([-10, -5, -10]),
xu=np.array([10, 5, 10])
)
F, CV = problem.evaluate(np.random.rand(3, 10))
print(f"F: {F}\n")
print(f"CV: {CV}")
```
### By String
In our framework, various test problems are already implemented and available by providing the corresponding problem name we have assigned to it. A couple of problems can be further parameterized by providing the number of variables, constraints, or other problem-dependent constants.
```
from pymoo.factory import get_problem
# create a simple test problem from string
p = get_problem("Ackley")
# the input name is not case sensitive
p = get_problem("ackley")
# also input parameter can be provided directly
p = get_problem("dtlz1_-1", n_var=20, n_obj=5)
```
## API
```
import os, platform, pprint, sys
import fastai
import keras
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sn
import sklearn
# from fastai.tabular.data import TabularDataLoaders
# from fastai.tabular.all import FillMissing, Categorify, Normalize, tabular_learner, accuracy, ClassificationInterpretation, ShowGraphCallback
from itertools import cycle
from keras.layers import Dense
from keras.metrics import CategoricalAccuracy, Recall, Precision, AUC
from keras.models import Sequential
from keras.utils import to_categorical, normalize
from math import sqrt
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
seed: int = 14
# set up pretty printer for easier data evaluation
pretty = pprint.PrettyPrinter(indent=4, width=30).pprint
# declare file paths for the data we will be working on
file_path_1: str = '../data/prepared/baseline/Benign_vs_DDoS.csv'
file_path_2: str = '../data/prepared/timebased/Benign_vs_DDoS.csv'
modelPath : str = './models'
# print library and python versions for reproducibility
print(
f'''
python:\t{platform.python_version()}
\tfastai:\t\t{fastai.__version__}
\tkeras:\t\t{keras.__version__}
\tmatplotlib:\t{mpl.__version__}
\tnumpy:\t\t{np.__version__}
\tpandas:\t\t{pd.__version__}
\tseaborn:\t{sn.__version__}
\tsklearn:\t{sklearn.__version__}
'''
)
def load_data(filePath: str) -> pd.DataFrame:
'''
Loads the Dataset from the given filepath and caches it for quick access in the future
Function will only work when filepath is a .csv file
'''
# slice off the ../data/prepared/ prefix (17 characters) from the filePath
if filePath[0] == '.' and filePath[1] == '.':
filePathClean: str = filePath[17::]
pickleDump: str = f'../data/cache/{filePathClean}.pickle'
else:
pickleDump: str = f'../data/cache/{filePath}.pickle'
print(f'Loading Dataset: {filePath}')
print(f'\tTo Dataset Cache: {pickleDump}\n')
# check if data already exists within cache
if os.path.exists(pickleDump):
df = pd.read_pickle(pickleDump)
# if not, load data and cache it
else:
df = pd.read_csv(filePath, low_memory=True)
df.to_pickle(pickleDump)
return df
def show_conf_matrix(model=None, X_test=None, y_test=None, classes=[], file=''):
# Techniques from https://stackoverflow.com/questions/29647749/seaborn-showing-scientific-notation-in-heatmap-for-3-digit-numbers
# and https://stackoverflow.com/questions/35572000/how-can-i-plot-a-confusion-matrix#51163585
predictions = model.predict(X_test)
matrix = [ [ 0 for j in range(len(predictions[0])) ] for i in range(len(predictions[0])) ]
for i in range(len(predictions)):
pred = predictions[i]
test = y_test[i]
guess = np.argmax(pred)
actual = np.argmax(test)
matrix[actual][guess] += 1
df_cm = pd.DataFrame(matrix, range(len(matrix)), range(len(matrix)))
int_cols = df_cm.columns
df_cm.columns = classes
df_cm.index = classes
fig = plt.figure(figsize=(10,7))
sn.set(font_scale=1.5) # for label size
ax = sn.heatmap(df_cm, annot=True, annot_kws={"size": 16}, fmt='g', cmap=sn.color_palette("Blues")) # font size
ax.set_ylabel('Actual')
ax.set_xlabel('Predicted')
plt.tight_layout()
fig.savefig('conf_matrix_{}.png'.format(file))
plt.show()
def show_roc_curve(model=None, X_test=None, y_test=None, classes=[], file=''):
y_score = model.predict(X_test)
n_classes = len(classes)
# Produce ROC curve from https://hackernoon.com/simple-guide-on-how-to-generate-roc-plot-for-keras-classifier-2ecc6c73115a
# Note that I am working through this code and I'm going to clean it up as I learn more about how it works
import numpy as np
from numpy import interp
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc
# Plot linewidth.
lw = 2
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
# Compute macro-average ROC curve and ROC area
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves of all the classes
fig = plt.figure(figsize=(12,12))
colors = cycle(['red', 'blue', 'orange', 'green', 'violet', 'teal', 'turquoise', 'pink'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of {0} (area = {1:0.2f})'.format(classes[i], roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.ylabel('True Positive Rate (Sensitivity)')
plt.xlabel('False Positive Rate (1-Specificity)')
plt.title('Receiver Operating Characteristic of the Classes')
plt.legend(loc="lower right")
fig.savefig('roc_curve_classes_{}.png'.format(file))
plt.show()
# Plot all ROC curves with micro and macro averages
fig = plt.figure(figsize=(12,12))
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.ylabel('True Positive Rate (Sensitivity)')
plt.xlabel('False Positive Rate (1-Specificity)')
plt.title('Receiver Operating Characteristic of the Micro and Macro Averages')
plt.legend(loc="lower right")
fig.savefig('roc_curve_micromacro_{}.png'.format(file))
plt.show()
def get_std(x=[], xbar=0):
o2=0
for xi in x:
o2 += (xi - xbar)**2
o2 /= len(x)-1
return sqrt(o2)
baseline_df : pd.DataFrame = load_data(file_path_1)
timebased_df: pd.DataFrame = load_data(file_path_2)
dep_var = 'Label'
ind_vars_baseline = (baseline_df.columns.difference([dep_var])).tolist()
ind_vars_timebased = (timebased_df.columns.difference([dep_var])).tolist()
baseline_Xy = (baseline_df[ind_vars_baseline], baseline_df[dep_var])
timebased_Xy = (timebased_df[ind_vars_timebased], timebased_df[dep_var])
names: list = ['Benign', 'DDoS']
X = baseline_Xy[0]
x = baseline_Xy[0]
Y = baseline_Xy[1]
num_classes = Y.nunique()
encoder = LabelEncoder()
y = encoder.fit_transform(Y)
# Lists for accuracies collected from models
list_rf = []
list_dt = []
list_knn = []
list_dnn = []
std_rf = []
std_dt = []
std_knn = []
std_dnn = []
# Mean accuracies for each model
mean_rf = 0
mean_dt = 0
mean_knn = 0
mean_dnn = 0
# Keep to calculate std
results_rf = []
results_dt = []
results_knn = []
results_dnn = []
# 10-fold Stratified Cross-Validation
n_splits = 10
skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
for train_idxs, test_idxs in skf.split(X, y):
# Define the training and testing sets
X_train, X_test = X.iloc[train_idxs], X.iloc[test_idxs]
y_train, y_test = y[train_idxs], y[test_idxs]
# Create a different version of the y_train and y_test for the Deep Neural Network
# y_train_dnn = to_categorical(y_train, num_classes=num_classes)
# y_test_dnn = to_categorical(y_test, num_classes=num_classes)
# Initialize the sklearn models
rf = RandomForestClassifier(random_state=seed)
dt = DecisionTreeClassifier(random_state=seed)
knn = KNeighborsClassifier()
# # Deep Neural Network
# dnn = Sequential([
# Dense(256, input_shape=(69,)),
# Dense(128, activation='relu'),
# Dense(64, activation='relu'),
# Dense(32, activation='relu'),
# Dense(2, activation='softmax')
# ])
# dnn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Train the models
rf.fit(X_train, y_train)
dt.fit(X_train, y_train)
knn.fit(X_train, y_train)
# dnn.fit(x=X_train, y=y_train_dnn, batch_size=25, epochs=100, verbose=0, validation_data=(X_test, y_test_dnn))
# Evaluate the models
results_rf.append(rf.score(X_test, y_test))
results_dt.append(dt.score(X_test, y_test))
results_knn.append(knn.score(X_test, y_test))
# results_dnn.append( (dnn.evaluate(X_test, y_test_dnn, verbose=0) )[1] )
# print('Random Forest')
# show_roc_curve(model=rf, X_test=X_test, y_test=y_test, classes=names)
# print('Decision Tree')
# show_roc_curve(model=dt, X_test=X_test, y_test=y_test, classes=names)
# print('k-Nearest Neighbor')
# show_roc_curve(model=knn, X_test=X_test, y_test=y_test, classes=names)
# # print('Deep Learning')
# show_roc_curve(model=dnn, X_test=X_test, y_test=y_test_dnn, classes=names)
print('Random Forest')
show_conf_matrix(model=rf, X_test=X_test, y_test=y_test, classes=names)
print('Decision Tree')
show_conf_matrix(model=dt, X_test=X_test, y_test=y_test, classes=names)
print('k-Nearest Neighbor')
show_conf_matrix(model=knn, X_test=X_test, y_test=y_test, classes=names)
# print('Deep Learning')
# show_conf_matrix(model=dnn, X_test=X_test, y_test=y_test_dnn, classes=names)
#print('Results from DNN: {}'.format(results_dnn))
# Add the results to the running mean
mean_rf += results_rf[-1] / (n_splits * 1.0)
mean_dt += results_dt[-1] / (n_splits * 1.0)
mean_knn += results_knn[-1] / (n_splits * 1.0)
# mean_dnn += results_dnn[-1] / (n_splits * 1.0)
# Push the mean results from all of the splits to the lists
list_rf.append(mean_rf)
list_dt.append(mean_dt)
list_knn.append(mean_knn)
# list_dnn.append(mean_dnn)
std_rf.append(get_std(results_rf, mean_rf))
std_dt.append(get_std(results_dt, mean_dt))
std_knn.append(get_std(results_knn, mean_knn))
# std_dnn.append(get_std(results_dnn, mean_dnn))
print('done')
print('All trainings complete!')
```
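The running-mean and standard-deviation bookkeeping above can be written more directly with numpy once the per-fold scores are collected. A small sketch (the scores here are illustrative values, not results from this notebook):

```
import numpy as np

# per-fold accuracies, as collected in lists like results_rf (illustrative values)
fold_scores = [0.96, 0.94, 0.97, 0.95, 0.96, 0.93, 0.97, 0.95, 0.96, 0.94]

mean_acc = np.mean(fold_scores)
std_acc = np.std(fold_scores, ddof=1)  # sample std, matching get_std above

print(round(mean_acc, 3), round(std_acc, 3))
```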
| github_jupyter |
# Using AWS Lambda and PyWren for Landsat 8 Time Series
This notebook is a simple demonstration of drilling a timeseries of NDVI values from the [Landsat 8 scenes held on AWS](https://landsatonaws.com/)
### Credits
- NDVI PyWren - [Peter Scarth](mailto:p.scarth@uq.edu.au?subject=AWS%20Lambda%20and%20PyWren) (Joint Remote Sensing Research Program)
- [RemotePixel](https://github.com/RemotePixel/remotepixel-api) - Landsat 8 NDVI GeoTIFF parsing function
- [PyWren](https://github.com/pywren/pywren) - Project by BCCI and riselab. Makes it easy to execute massively parallel map queries across [AWS Lambda](https://aws.amazon.com/lambda/)
#### Additional notes
The below remotely executed function will deliver results usually in under a minute for the full timeseries of more than 100 images, and we can simply plot the resulting timeseries or do further analysis. BUT, the points may well be cloud or cloud shadow contaminated. We haven’t done any cloud masking to the imagery, but we do have the scene metadata on the probable amount of cloud across the entire scene. We use this to weight a [smoothing spline](https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.interpolate.UnivariateSpline.html), such that an observation with no reported cloud over the scene has full weight, and an observation with a reported 100% of the scene with cloud has zero weight.
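The cloud-weighting idea can be sketched in isolation (the data and variable names below are illustrative toys, not the notebook's own series):

```
import numpy as np
from scipy.interpolate import UnivariateSpline

# Toy series: times scaled to [0, 1], NDVI observations, per-scene cloud
# fraction in [0, 1].
t = np.linspace(0.0, 1.0, 20)
ndvi = 0.5 + 0.2 * np.sin(2 * np.pi * t)
cloud = np.zeros_like(t)
cloud[5] = 0.95           # one heavily clouded scene...
ndvi[5] = 0.05            # ...with a spuriously low NDVI reading

# Weight each observation by (1 - cloud fraction)^2: a cloud-free scene gets
# full weight, a fully clouded one near-zero weight, so it cannot drag the
# fit down.
spline = UnivariateSpline(t, ndvi, w=(1.0 - cloud) ** 2, k=2, s=0.1)
smoothed = spline(t)      # the cloudy outlier is largely ignored
```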
# Step by Step instructions
### Setup Logging (optional)
Only run the lines below if you want to see all debug messages from PyWren. _Note: the output will be rather chatty and lengthy._
```
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
%env PYWREN_LOGLEVEL=INFO
```
### Setup all the necessary libraries
This will setup all the necessary libraries to properly display our results and it also imports the library that allows us to query Landsat 8 data from the [AWS Public Dataset](https://aws.amazon.com/public-datasets/landsat/):
```
import requests, json, numpy, datetime, os, boto3
from IPython.display import HTML, display, Image
import matplotlib.pyplot as plt
import l8_ndvi
from scipy.interpolate import UnivariateSpline
import pywren
# Function to return a Landsat 8 scene list given a Longitude,Latitude string
# This uses the amazing developmentseed Satellite API
# https://github.com/sat-utils/sat-api
def getSceneList(lonLat):
scenes=[]
url = "https://api.developmentseed.org/satellites/landsat"
params = dict(
contains=lonLat,
satellite_name="landsat-8",
limit="1000")
# Call the API to grab the scene metadata
sceneMetaData = json.loads(requests.get(url=url, params=params).content)
# Parse the metadata
for record in sceneMetaData["results"]:
scene = str(record['aws_index'].split('/')[-2])
# This is a bit of a hack to get around some versioning problem on the API :(
# Related to this issue https://github.com/sat-utils/sat-api/issues/18
if scene[-2:] == '01':
scene = scene[:-2] + '00'
if scene[-2:] == '02':
scene = scene[:-2] + '00'
if scene[-2:] == '03':
scene = scene[:-2] + '02'
scenes.append(scene)
return scenes
# Function to call a AWS Lambda function to drill a single pixel and compute the NDVI
def getNDVI(scene):
return l8_ndvi.point(scene, eval(lonLat))
```
### Run the code locally over a point of interest
Let's have a look at Hong Kong, an urban area with some country parks surrounding the city: [114.1095,22.3964](https://goo.gl/maps/PhDLAdLbiQT2)
First we need to retrieve the available Landsat 8 scenes from the point of interest:
```
lonLat = '114.1095,22.3964'
scenesHK = getSceneList('114.1095,22.3964')
#print(scenesHK)
display(HTML('Total scenes: <b>' + str(len(scenesHK)) + '</b>'))
```
Now let's find out the NDVI and the amount of clouds on a specific scene locally on our machine:
```
lonLat = '114.1095,22.3964'
thumbnail = l8_ndvi.thumb('LC08_L1TP_121045_20170829_20170914_01_T1', eval(lonLat))
display(Image(url=thumbnail, format='jpg'))
result = getNDVI('LC08_L1TP_121045_20170829_20170914_01_T1')
#display(result)
display(HTML('<b>Date:</b> '+result['date']))
display(HTML('<b>Amount of clouds:</b> '+str(result['cloud'])+'%'))
display(HTML('<b>NDVI:</b> '+str(result['ndvi'])))
```
Great, time to try this with an observation on a cloudier day. Note that the NDVI drops too, as we are not able to receive much data from the land surface:
```
lonLat = '114.1095,22.3964'
thumbnail = l8_ndvi.thumb('LC08_L1GT_122044_20171108_20171108_01_RT', eval(lonLat))
display(Image(url=thumbnail, format='jpg'))
result = getNDVI('LC08_L1GT_122044_20171108_20171108_01_RT')
#display(result)
display(HTML('<b>Date:</b> '+result['date']))
display(HTML('<b>Amount of clouds:</b> '+str(result['cloud'])+'%'))
display(HTML('<b>NDVI:</b> '+str(result['ndvi'])))
```
### Massively Parallel calculation with PyWren
Now let's try this with multiple scenes and send it to PyWren. To accomplish this we need to change our PyWren AWS Lambda function to include the necessary libraries such as rasterio and GDAL. Since those libraries are compiled C code, PyWren will not be able to pickle them and send them to the Lambda function. Hence we will update the entire PyWren function to include the necessary binaries, which have been compiled on an Amazon EC2 instance with Amazon Linux. We pre-packaged this and made it available via https://s3-us-west-2.amazonaws.com/pywren-workshop/lambda_function.zip
You can simply push this code to your PyWren AWS Lambda function with the command below, assuming you named the function with the default name pywren_1 and region us-west-2:
```
lambdaclient = boto3.client('lambda', 'us-west-2')
response = lambdaclient.update_function_code(
FunctionName='pywren_1',
Publish=True,
S3Bucket='pywren-workshop',
S3Key='lambda_function.zip'
)
response = lambdaclient.update_function_configuration(
FunctionName='pywren_1',
Environment={
'Variables': {
'GDAL_DATA': '/var/task/lib/gdal'
}
}
)
```
If you look at the list of available scenes, we have a rather large amount. This is a good use-case for PyWren, as it allows us to have AWS Lambda perform the calculation of NDVI and clouds for us - furthermore it has faster connectivity to read and write from Amazon S3. If you want to know more details about the calculation, have a look at [l8_ndvi.py](/edit/Lab-4-Landsat-NDVI/l8_ndvi.py).
Ok let's try this on the latest 200 collected Landsat 8 images GeoTIFFs of Hong Kong:
```
lonLat = '114.1095,22.3964'
pwex = pywren.default_executor()
resultsHK = pywren.get_all_results(pwex.map(getNDVI, scenesHK[:200]))
display(resultsHK)
```
### Display results
Let's try to render our results in a nice HTML table first:
```
#Remove results where we couldn't retrieve data from the scene
results = filter(None, resultsHK)
#Render a nice HTML table to display result
html = '<table><tr><td><b>Date</b></td><td><b>Clouds</b></td><td><b>NDVI</b></td></tr>'
for x in results:
html = html + '<tr>'
html = html + '<td>' + x['date'] + '</td>'
html = html + '<td>' + str(x['cloud']) + '%</td>'
html = html + '<td '
if (x['ndvi'] > 0.5):
html = html + ' bgcolor="#00FF00">'
elif (x['ndvi'] > 0.1):
html = html + ' bgcolor="#FFFF00">'
else:
html = html + ' bgcolor="#FF0000">'
html = html + str(round(abs(x['ndvi']),2)) + '</td>'
html = html + '</tr>'
html = html + '</table>'
display(HTML(html))
```
This provides us a good overview but would quickly become difficult to read as the datapoints expand - let's use [Matplotlib](https://matplotlib.org/) instead to plot this out:
```
timeSeries = list(filter(None, resultsHK))  # materialise: a filter object can only be iterated once in Python 3
# Extract the data from the list of results
timeStamps = [datetime.datetime.strptime(obs['date'],'%Y-%m-%d') for obs in timeSeries if 'date' in obs]
ndviSeries = [obs['ndvi'] for obs in timeSeries if 'ndvi' in obs]
cloudSeries = [obs['cloud']/100 for obs in timeSeries if 'cloud' in obs]
# Create a time variable as the x axis to fit the observations
# First we convert to seconds
timeSecs = numpy.array([(obsTime-datetime.datetime(1970,1,1)).total_seconds() for obsTime in timeStamps])
# And then normalise from 0 to 1 to avoid any numerical issues in the fitting
fitTime = ((timeSecs-numpy.min(timeSecs))/(numpy.max(timeSecs)-numpy.min(timeSecs)))
# Smooth the data by fitting a spline weighted by cloud amount
smoothedNDVI=UnivariateSpline(
fitTime[numpy.argsort(fitTime)],
numpy.array(ndviSeries)[numpy.argsort(fitTime)],
w=(1.0-numpy.array(cloudSeries)[numpy.argsort(fitTime)])**2.0,
k=2,
s=0.1)(fitTime)
fig = plt.figure(figsize=(16,10))
plt.plot(timeStamps,ndviSeries, 'gx',label='Raw NDVI Data')
plt.plot(timeStamps,ndviSeries, 'y:', linewidth=1)
plt.plot(timeStamps,cloudSeries, 'b.', linewidth=1,label='Scene Cloud Percent')
plt.plot(timeStamps,cloudSeries, 'b:', linewidth=1)
#plt.plot(timeStamps,smoothedNDVI, 'r--', linewidth=3,label='Cloudfree Weighted Spline')
plt.xlabel('Date', fontsize=16)
plt.ylabel('NDVI', fontsize=16)
plt.title('AWS Lambda Landsat 8 NDVI Drill (Hong Kong)', fontsize=20)
plt.grid(True)
plt.ylim([-.1,1.0])
plt.legend(fontsize=14)
plt.show()
```
### Run the code over another location
This test site is a cotton farming area in Queensland, Australia [147.870599,-28.744617](https://goo.gl/maps/GF5szf7vZo82)
Let's first acquire some scenes:
```
lonLat = '147.870599,-28.744617'
scenesQLD = getSceneList(lonLat)
#print(scenesQLD)
display(HTML('Total scenes: <b>' + str(len(scenesQLD)) + '</b>'))
```
Let's first have a look at an individual observation on our local machine:
```
thumbnail = l8_ndvi.thumb('LC80920802017118LGN00', eval(lonLat))
display(Image(url=thumbnail, format='jpg'))
result = getNDVI('LC80920802017118LGN00')
#display(result)
display(HTML('<b>Date:</b> '+result['date']))
display(HTML('<b>Amount of clouds:</b> '+str(result['cloud'])+'%'))
display(HTML('<b>NDVI:</b> '+str(result['ndvi'])))
```
### Pywren Time
Let's process this across all of the observations in parallel using AWS Lambda:
```
pwex = pywren.default_executor()
resultsQLD = pywren.get_all_results(pwex.map(getNDVI, scenesQLD))
display(resultsQLD)
```
Now let's plot this out again:
```
timeSeries = list(filter(None, resultsQLD))  # materialise: a filter object can only be iterated once in Python 3
# Extract the data from the list of results
timeStamps = [datetime.datetime.strptime(obs['date'],'%Y-%m-%d') for obs in timeSeries if 'date' in obs]
ndviSeries = [obs['ndvi'] for obs in timeSeries if 'ndvi' in obs]
cloudSeries = [obs['cloud']/100 for obs in timeSeries if 'cloud' in obs]
# Create a time variable as the x axis to fit the observations
# First we convert to seconds
timeSecs = numpy.array([(obsTime-datetime.datetime(1970,1,1)).total_seconds() for obsTime in timeStamps])
# And then normalise from 0 to 1 to avoid any numerical issues in the fitting
fitTime = ((timeSecs-numpy.min(timeSecs))/(numpy.max(timeSecs)-numpy.min(timeSecs)))
# Smooth the data by fitting a spline weighted by cloud amount
smoothedNDVI=UnivariateSpline(
fitTime[numpy.argsort(fitTime)],
numpy.array(ndviSeries)[numpy.argsort(fitTime)],
w=(1.0-numpy.array(cloudSeries)[numpy.argsort(fitTime)])**2.0,
k=2,
s=0.1)(fitTime)
fig = plt.figure(figsize=(16,10))
plt.plot(timeStamps,ndviSeries, 'gx',label='Raw NDVI Data')
plt.plot(timeStamps,ndviSeries, 'g:', linewidth=1)
plt.plot(timeStamps,cloudSeries, 'b.', linewidth=1,label='Scene Cloud Percent')
plt.plot(timeStamps,smoothedNDVI, 'r--', linewidth=3,label='Cloudfree Weighted Spline')
plt.xlabel('Date', fontsize=16)
plt.ylabel('NDVI', fontsize=16)
plt.title('AWS Lambda Landsat 8 NDVI Drill (Cotton Farm QLD, Australia)', fontsize=20)
plt.grid(True)
plt.ylim([-.1,1.0])
plt.legend(fontsize=14)
plt.show()
```
| github_jupyter |
```
import xgboost as xgb
import pandas as pd
# Load the saved data
data = pd.read_pickle('data.pkl')
nomination_onehot = pd.read_pickle('nomination_onehot.pkl')
selected_performers_onehot = pd.read_pickle('selected_performers_onehot.pkl')
selected_directors_onehot = pd.read_pickle('selected_directors_onehot.pkl')
selected_studio_onehot = pd.read_pickle('selected_studio_onehot.pkl')
selected_scriptwriter_onehot = pd.read_pickle('selected_scriptwriter_onehot.pkl')
review_dataframe = pd.read_pickle('review_dataframe.pkl')
tfidf = pd.read_pickle('tfidf.pkl')
table = pd.concat([
data[['prize', 'title', 'year', 'screen_time']],
nomination_onehot,
selected_performers_onehot,
selected_directors_onehot,
selected_studio_onehot,
selected_scriptwriter_onehot
], axis = 1)
for year in range(1978, 2019 + 1):
rg = xgb.XGBRegressor(silent= True)
X = table.query('year != {}'.format(year)).drop(['prize', 'title', 'year'], axis = 1).values
y = table.query('year != {}'.format(year))['prize'].values
rg.fit(X,y)
result = rg.predict(table.query('year == {}'.format(year)).drop(['prize', 'title', 'year'], axis = 1).values)
prize = table.query('year == {}'.format(year))
title = table.query('year == {}'.format(year))['title'].copy()
title[prize['prize'] == 1] = title[prize['prize'] == 1].map(lambda s: '★' + s)
print(year)
print(pd.Series(result, index = title.values).sort_values(ascending=False) )
print('')
frames = [
data.query('year == 2004')[['title', 'production_studio', 'other_nominates']],
review_dataframe
]
def add_review_count(s):
    # record the number of reviews for each film
    s['len'] = len(s['reviews'])
    return s

pd.concat(
    frames,
    axis = 1,
    join = 'inner'
).apply(add_review_count, axis = 1).drop(['reviews'], axis = 1)
from sklearn.decomposition import PCA
pca = PCA(n_components=20)
pca.fit(tfidf.values)
tfidf_df = pd.DataFrame(pca.transform(tfidf.values), index = tfidf.index)
table = pd.concat([
data[['prize', 'title', 'year']],
    tfidf_df  # use the PCA-reduced features computed just above
], axis = 1)
for year in range(1978, 2019 + 1):
rg = xgb.XGBRegressor(silent= True)
X = table.query('year != {}'.format(year)).drop(['prize', 'title', 'year'], axis = 1).values
y = table.query('year != {}'.format(year))['prize'].values
rg.fit(X,y)
result = rg.predict(table.query('year == {}'.format(year)).drop(['prize', 'title', 'year'], axis = 1).values)
prize = table.query('year == {}'.format(year))
title = table.query('year == {}'.format(year))['title'].copy()
title[prize['prize'] == 1] = title[prize['prize'] == 1].map(lambda s: '★' + s)
print(year)
print(pd.Series(result, index = title.values).sort_values(ascending=False) )
print('')
```
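The loops above use leave-one-year-out validation: each year's films are scored by a model trained on every other year. The scheme in isolation (a simplified sketch with a trivial stand-in model, not the notebook's XGBoost setup):

```
import numpy as np

# Labels grouped by year; the "model" here is just the training-set mean,
# standing in for xgb.XGBRegressor purely to show the train/test split.
years = np.array([2000, 2000, 2001, 2001, 2002, 2002])
y = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])

preds = {}
for held_out in np.unique(years):
    train = years != held_out          # every year except the one being scored
    preds[int(held_out)] = float(y[train].mean())

print(preds)  # {2000: 0.5, 2001: 0.25, 2002: 0.75}
```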
| github_jupyter |
```
%tensorflow_version 2.x
import tensorflow as tf
#from tf.keras.models import Sequential
#from tf.keras.layers import Dense
import os
import io
tf.__version__
```
# Download Data
```
# Download the zip file
path_to_zip = tf.keras.utils.get_file("smsspamcollection.zip",
origin="https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip",
extract=True)
# Unzip the file into a folder
!unzip $path_to_zip -d data
# optional step - helps if colab gets disconnected
# from google.colab import drive
# drive.mount('/content/drive')
# Test data reading
# lines = io.open('/content/drive/My Drive/colab-data/SMSSpamCollection').read().strip().split('\n')
lines = io.open('/content/data/SMSSpamCollection').read().strip().split('\n')
lines[0]
```
# Pre-Process Data
```
spam_dataset = []
count = 0
for line in lines:
label, text = line.split('\t')
if label.lower().strip() == 'spam':
spam_dataset.append((1, text.strip()))
count += 1
else:
spam_dataset.append(((0, text.strip())))
print(spam_dataset[0])
print("Spam: ", count)
```
# Data Normalization
```
import pandas as pd
df = pd.DataFrame(spam_dataset, columns=['Spam', 'Message'])
import re
# Normalization functions
def message_length(x):
# returns total number of characters
return len(x)
def num_capitals(x):
_, count = re.subn(r'[A-Z]', '', x) # only works in english
return count
def num_punctuation(x):
    # note: \W matches any non-word character, so spaces are counted here too
    _, count = re.subn(r'\W', '', x)
    return count
df['Capitals'] = df['Message'].apply(num_capitals)
df['Punctuation'] = df['Message'].apply(num_punctuation)
df['Length'] = df['Message'].apply(message_length)
df.describe()
train=df.sample(frac=0.8,random_state=42) #random state is a seed value
test=df.drop(train.index)
train.describe()
test.describe()
```
# Model Building
```
# Basic 1-layer neural network model for evaluation
def make_model(input_dims=3, num_units=12):
model = tf.keras.Sequential()
# Adds a densely-connected layer with 12 units to the model:
model.add(tf.keras.layers.Dense(num_units,
input_dim=input_dims,
activation='relu'))
# Add a sigmoid layer with a binary output unit:
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
return model
x_train = train[['Length', 'Punctuation', 'Capitals']]
y_train = train[['Spam']]
x_test = test[['Length', 'Punctuation', 'Capitals']]
y_test = test[['Spam']]
x_train
model = make_model()
model.fit(x_train, y_train, epochs=10, batch_size=10)
model.evaluate(x_test, y_test)
y_train_pred = model.predict_classes(x_train)  # predict_classes was removed in TF >= 2.6; use (model.predict(x_train) > 0.5).astype("int32") there
# confusion matrix
tf.math.confusion_matrix(tf.constant(y_train.Spam),
y_train_pred)
sum(y_train_pred)
y_test_pred = model.predict_classes(x_test)
tf.math.confusion_matrix(tf.constant(y_test.Spam), y_test_pred)
```
# Tokenization and Stop Word Removal
```
sentence = 'Go until jurong point, crazy.. Available only in bugis n great world'
sentence.split()
!pip install stanza # StanfordNLP has become https://github.com/stanfordnlp/stanza/
import stanza
en = stanza.download('en')
en = stanza.Pipeline(lang='en')
sentence
tokenized = en(sentence)
len(tokenized.sentences)
for snt in tokenized.sentences:
for word in snt.tokens:
print(word.text)
print("<End of Sentence>")
```
## Dependency Parsing Example
```
en2 = stanza.Pipeline(lang='en')
pr2 = en2("Hari went to school")
for snt in pr2.sentences:
for word in snt.tokens:
print(word)
print("<End of Sentence>")
```
## Japanese Tokenization Example
```
jp = stanza.download('ja')
jp = stanza.Pipeline(lang='ja')
jp_line = jp("選挙管理委員会")
for snt in jp_line.sentences:
for word in snt.tokens:
print(word.text)
```
# Adding Word Count Feature
```
def word_counts(x, pipeline=en):
doc = pipeline(x)
count = sum( [ len(sentence.tokens) for sentence in doc.sentences] )
return count
#en = snlp.Pipeline(lang='en', processors='tokenize')
df['Words'] = df['Message'].apply(word_counts)
df.describe()
#train=df.sample(frac=0.8,random_state=42) #random state is a seed value
#test=df.drop(train.index)
train['Words'] = train['Message'].apply(word_counts)
test['Words'] = test['Message'].apply(word_counts)
x_train = train[['Length', 'Punctuation', 'Capitals', 'Words']]
y_train = train[['Spam']]
x_test = test[['Length', 'Punctuation', 'Capitals' , 'Words']]
y_test = test[['Spam']]
model = make_model(input_dims=4)
model.fit(x_train, y_train, epochs=10, batch_size=10)
model.evaluate(x_test, y_test)
```
## Stop Word Removal
```
!pip install stopwordsiso
import stopwordsiso as stopwords
stopwords.langs()
sorted(stopwords.stopwords('en'))
en_sw = stopwords.stopwords('en')
def word_counts(x, pipeline=en):
doc = pipeline(x)
count = 0
for sentence in doc.sentences:
for token in sentence.tokens:
if token.text.lower() not in en_sw:
count += 1
return count
train['Words'] = train['Message'].apply(word_counts)
test['Words'] = test['Message'].apply(word_counts)
x_train = train[['Length', 'Punctuation', 'Capitals', 'Words']]
y_train = train[['Spam']]
x_test = test[['Length', 'Punctuation', 'Capitals' , 'Words']]
y_test = test[['Spam']]
model = make_model(input_dims=4)
#model = make_model(input_dims=3)
model.fit(x_train, y_train, epochs=10, batch_size=10)
```
## POS Based Features
```
en = stanza.Pipeline(lang='en')
txt = "Yo you around? A friend of mine's lookin."
pos = en(txt)
def print_pos(doc):
text = ""
for sentence in doc.sentences:
for token in sentence.tokens:
text += token.words[0].text + "/" + \
token.words[0].upos + " "
text += "\n"
return text
print(print_pos(pos))
en_sw = stopwords.stopwords('en')
def word_counts_v3(x, pipeline=en):
doc = pipeline(x)
count = 0
for sentence in doc.sentences:
for token in sentence.tokens:
if token.text.lower() not in en_sw and \
token.words[0].upos not in ['PUNCT', 'SYM']:
count += 1
return count
print(word_counts(txt), word_counts_v3(txt))
train['Test'] = 0
train.describe()
def word_counts_v3(x, pipeline=en):
doc = pipeline(x)
totals = 0.
count = 0.
non_word = 0.
for sentence in doc.sentences:
totals += len(sentence.tokens) # (1)
for token in sentence.tokens:
if token.text.lower() not in en_sw:
if token.words[0].upos not in ['PUNCT', 'SYM']:
count += 1.
else:
non_word += 1.
non_word = non_word / totals
return pd.Series([count, non_word], index=['Words_NoPunct', 'Punct'])
x = train[:10]
x.describe()
train_tmp = train['Message'].apply(word_counts_v3)
train = pd.concat([train, train_tmp], axis=1)
train.describe()
test_tmp = test['Message'].apply(word_counts_v3)
test = pd.concat([test, test_tmp], axis=1)
test.describe()
z = pd.concat([x, train_tmp], axis=1)
z.describe()
z.loc[z['Spam']==0].describe()
z.loc[z['Spam']==1].describe()
aa = [word_counts_v3(y) for y in x['Message']]
ab = pd.DataFrame(aa)
ab.describe()
```
# Lemmatization
```
text = "Stemming is aimed at reducing vocabulary and aid understanding of" +\
       " morphological processes. This helps people understand the" +\
       " morphology of words and reduce size of corpus."
lemma = en(text)
lemmas = ""
for sentence in lemma.sentences:
for token in sentence.tokens:
lemmas += token.words[0].lemma +"/" + \
token.words[0].upos + " "
lemmas += "\n"
print(lemmas)
```
# TF-IDF Based Model
```
# if not installed already
!pip install scikit-learn  # the actual PyPI package name; "sklearn" is a deprecated alias
corpus = [
"I like fruits. Fruits like bananas",
"I love bananas but eat an apple",
"An apple a day keeps the doctor away"
]
```
## Count Vectorization
```
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)
vectorizer.get_feature_names()
X.toarray()
from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity(X.toarray())
query = vectorizer.transform(["apple and bananas"])
cosine_similarity(X, query)
```
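For intuition, `cosine_similarity` reduces to the dot product of two vectors divided by the product of their norms. A minimal hand-rolled check (standalone, with illustrative vectors rather than the notebook's matrices):

```
import numpy as np

def cosine(u, v):
    # dot product normalised by the vector lengths
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

u = np.array([1.0, 2.0, 0.0])
v = np.array([2.0, 4.0, 0.0])   # same direction as u
w = np.array([0.0, 0.0, 3.0])   # orthogonal to u

print(cosine(u, v))  # ~1.0: identical direction
print(cosine(u, w))  # 0.0: no overlap at all
```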
## TF-IDF Vectorization
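For reference, with `smooth_idf=False` (as in the cell below) scikit-learn computes the term weight as follows, and then L2-normalises each document row by default:

```
% n = number of documents, df(t) = number of documents containing term t
\mathrm{tfidf}(t, d) = \mathrm{tf}(t, d)\left(\ln\frac{n}{\mathrm{df}(t)} + 1\right)
```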
```
import pandas as pd
from sklearn.feature_extraction.text import TfidfTransformer
transformer = TfidfTransformer(smooth_idf=False)
tfidf = transformer.fit_transform(X.toarray())
pd.DataFrame(tfidf.toarray(),
columns=vectorizer.get_feature_names())
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import LabelEncoder
tfidf = TfidfVectorizer(binary=True)
X = tfidf.fit_transform(train['Message']).astype('float32')
X_test = tfidf.transform(test['Message']).astype('float32')
X.shape
from tensorflow.keras.utils import to_categorical  # avoids mixing standalone keras with tf.keras
_, cols = X.shape
model2 = make_model(cols) # to match tf-idf dimensions
lb = LabelEncoder()
y = lb.fit_transform(y_train)
dummy_y_train = to_categorical(y)
model2.fit(X.toarray(), y_train, epochs=10, batch_size=10)
model2.evaluate(X_test.toarray(), y_test)
train.loc[train.Spam == 1].describe()
```
# Word Vectors
```
# memory limit may be exceeded. Try deleting some objects before running this next section
# or copy this section to a different notebook.
!pip install gensim
from gensim.models.word2vec import Word2Vec
import gensim.downloader as api
api.info()
model_w2v = api.load("word2vec-google-news-300")
model_w2v.most_similar("cookies",topn=10)
model_w2v.doesnt_match(["USA","Canada","India","Tokyo"])
king = model_w2v['king']
man = model_w2v['man']
woman = model_w2v['woman']
queen = king - man + woman
model_w2v.similar_by_vector(queen)
```
| github_jupyter |
```
import os
from skimage.filters.rank import median
import numpy as np
import matplotlib.pyplot as plt
import skimage.data as data
import skimage.segmentation as seg
import skimage.filters as filters
import skimage.draw as draw
import skimage.color as color
from scipy.ndimage import convolve  # scipy.ndimage.filters is deprecated
from skimage.filters import threshold_otsu
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.filters import threshold_multiotsu
import skimage
from skimage.restoration import (denoise_tv_chambolle, denoise_bilateral,
denoise_wavelet, estimate_sigma)
import cv2
matrices_bc = []
dir_path_bc = r"D:\Documents\Курсова файли\needed_files\BC\404"
entries_control = os.listdir(dir_path_bc)
i = 0
for file_name in entries_control:
matrices_bc.append([])
with open(dir_path_bc + fr"\{file_name}") as f:
lines = f.readlines()
for line in lines:
t = np.array([int(float(x)) for x in line.split()], dtype=np.uint8)
matrices_bc[i].append(t)
i += 1
I = np.array(matrices_bc[0][:-1], dtype=np.uint8)
np.std(I)
plt.imshow(I,cmap='gray',label="(0,1)")
I_new = median(I, disk(1))
print(disk(2))
plt.imshow(I_new, cmap="gray")
from skimage.filters.rank import mean_bilateral
bilat = mean_bilateral(I.astype(np.uint16), disk(1), s0=10, s1=10)
plt.imshow(bilat, cmap="gray")
denoised = denoise_tv_chambolle(I, weight=0.005,eps=0.001)
plt.imshow(denoised, cmap="gray")
plt.imshow(entropy(denoised, disk(7)), cmap="gray")
bilat_n = entropy(bilat, disk(7))
plt.imshow(bilat_n, cmap="gray")
#plt.imshow(new_matr,cmap='gray_r', vmin=new_matr.min(), vmax=new_matr.max())
```
## Sobel filter (bad)
```
# sacrificial_bridge = np.zeros((50,50))
# sacrificial_bridge[22:30, 0:21] = 1
# sacrificial_bridge[22:30, 30:] = 1
# sacrificial_bridge[25:27, 21:30] = 1
# plt.imshow(sacrificial_bridge, cmap='gray')
# plt.show()
# # Build Sobel filter for the x dimension
# s_x = np.array([[1, 0, -1],
# [2, 0, -2],
# [1, 0, -1]])
# # Build a Sobel filter for the y dimension
# s_y = s_x.T # transposes the matrix
# res_x = convolve(sacrificial_bridge, s_x)
# res_y = convolve(sacrificial_bridge, s_y)
# B = np.sqrt(res_x**2 + res_y**2)
# plt.imshow(B, cmap="gray")
# res_x = convolve(I, s_x)
# res_y = convolve(I, s_y)
# # square the responses, to capture both sides of each edge
# G = np.sqrt(res_x**2 + res_y**2)
# plt.imshow(G)
```
## Gabor filter $ g(x, y ; \lambda, \theta, \psi, \sigma, \gamma)=\exp \left(-\frac{x^{\prime 2}+\gamma^{2} y^{\prime 2}}{2 \sigma^{2}}\right) \exp \left(i\left(2 \pi \frac{x^{\prime}}{\lambda}+\psi\right)\right) $
```
ksize = 45
theta = np.pi / 2
# getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi, ktype)
kernel = cv2.getGaborKernel((ksize, ksize), 5.0, theta, 10.0, 0.9, 0, ktype=cv2.CV_32F)
filtered_image = cv2.filter2D(I, cv2.CV_8UC3, kernel)
plt.imshow(filtered_image, cmap='gray')
```
## Entropy
```
entropy_img = entropy(I, disk(11))
plt.imshow(entropy_img, cmap="gray")
entropy_max = np.amax(entropy_img)
entropy_min = np.amin(entropy_img)
print(entropy_max)
plt.hist(entropy_img.flat, bins=500)
?threshold_otsu
# thresh = threshold_otsu(entropy_img, nbins=500)
# #Now let us binarize the entropy image
# binary = entropy_img <= thresh
# plt.imshow(binary)
# binary.shape
# ?np.reshape
thresholds = threshold_multiotsu(entropy_img, classes=3, nbins=500)
print(thresholds)
regions = np.digitize(entropy_img, bins=thresholds)
print(regions.max(), regions.min())
seg1 = (regions == 0)
seg2 = (regions == 1)
seg3 = (regions == 2)
print(seg3)
plt.imshow(regions)
def p(i, j, matr, d):
    """Count co-occurrences of grey levels (i, j) at offset d = (dx, dy)."""
    n_rows, n_cols = matr.shape
    dx, dy = d
    res = 0
    for x in range(n_rows):
        for y in range(n_cols):
            if x + dx < n_rows and y + dy < n_cols:
                if matr[x][y] == i and matr[x + dx][y + dy] == j:
                    res += 1
    return res
def coincidence_matr(image, d):
    """
    Grey-level co-occurrence matrix (cf. skimage.feature.graycomatrix).
    d -- (dx, dy) offset vector
    image -- N x M matrix
    """
    res_matr = np.zeros((256, 256))
    vmin, vmax = image.min(), image.max()
    # it is enough to scan the grey-level range [vmin, vmax);
    # everything outside it stays zero
    for i in range(vmin, vmax):
        for j in range(vmin, vmax):
            res_matr[i, j] = p(i, j, image, d)
    return res_matr
%%time
coic_entropy = coincidence_matr(I, (0,1))
def t_(x, a, b):
"""[a,b] -> [0, 255]"""
assert b > a
m = 255 / (b - a)
d = -255 * a / (b - a)
return m * x + d
a_min = coic_entropy.min()
b_max = coic_entropy.max()
print(a_min,b_max)
coic_entropy = t_(coic_entropy,a_min, b_max)
bad = coic_entropy < (0.05 * coic_entropy.max())  # 5% of the rescaled maximum (255), matching the control-image thresholding below
print(coic_entropy.min(), coic_entropy.max())
coic_entropy[bad] = 0
print(coic_entropy)
plt.figure(figsize=(10,10))
# plt.axhline(100)
# plt.axhline(150)
# plt.axvline(100)
# plt.axvline(150)
int_image = coic_entropy.astype(np.uint8)
print(int_image)
np.savetxt('test1.out', int_image, delimiter=',')
original_array = np.loadtxt("test1.out",delimiter=',').reshape(256, 256)
plt.imshow(original_array[100:150,100:150], cmap="gray_r")
#plt.savefig(fname="c.png")
nonzero = (coic_entropy != 0)
plt.hist(coic_entropy[nonzero],bins=200)
#plt.hist(coic_entropy[nonzero].flat, bins=100)
thresh_hold = threshold_otsu(coic_entropy[nonzero],nbins=200)
new_img = np.zeros((256, 256))
n,m = new_img.shape
for i in range(n):
for j in range(m):
if coic_entropy[i,j] > 0:
if coic_entropy[i,j] > thresh_hold:
new_img[i,j] = 1
else:
new_img[i,j] = 3
plt.imshow(new_img[110:145,110:145])
# entropy is pretty useless here, to be honest
en_coic = entropy(coic_entropy[110:145,110:145].astype(np.uint8), disk(2))
thresh_hold = threshold_otsu(en_coic,nbins=200)
plt.imshow(en_coic,cmap="gray_r")
# new_img = np.zeros((256, 256))
# n,m = new_img.shape
# for i in range(n):
# for j in range(m):
# if coic_entropy[i,j] > 0:
# if coic_entropy[i,j] > thresh_hold:
# new_img[i,j] = 1
# else:
# new_img[i,j] = 3
# plt.imshow(en_coic[100:150,100:150], cmap="gray_r")
print(I.min(), I.max())
plt.imshow(I)
new_image = np.zeros((159, 160, 3))
new_image[seg1] = (150,0,0)
new_image[seg2] = (0,150,0)
new_image[seg3] = (255,255,255)
plt.imshow(new_image.astype(np.uint8))
matrices_control = []
dir_path = r"D:\Documents\Курсова файли\needed_files\Control\2"
entries_control = os.listdir(dir_path)
i = 0
for file_name in entries_control:
matrices_control.append([])
with open(dir_path + fr"\{file_name}") as f:
lines = f.readlines()
for line in lines:
t = np.array([int(float(x)) for x in line.split()], dtype=np.uint8)
matrices_control[i].append(t)
i += 1
I_control = np.array(matrices_control[1][:-1])
plt.imshow(I_control, cmap="gray")
control_coic = coincidence_matr(I_control, (0,1))
plt.imshow(control_coic, cmap="gray_r")
good_control_coic = control_coic > (0.05 * control_coic.max())
rows, cols = good_control_coic.shape
for i in range(rows):
    for j in range(cols):
        if not good_control_coic[i, j]:
            control_coic[i, j] = 0
plt.figure(figsize=(10,10))
plt.imshow(np.vstack((control_coic, np.full(256, 255))), cmap="gray_r")
I_control_med = median(I_control, disk(3))
plt.imshow(I_control_med, cmap="gray")
entropy_img_control = entropy(I_control_med, disk(12))
plt.imshow(entropy_img_control[100:150,100:150], cmap="gray")
thresholds_control = threshold_multiotsu(entropy_img_control, classes=3, nbins=500)
regions_control = np.digitize(entropy_img_control, bins=thresholds_control)
plt.imshow(regions_control)
```
# 3D Partially coherent ODT forward simulation
This forward simulation is based on the SEAGLE paper ([here](https://ieeexplore.ieee.org/abstract/document/8074742)): <br>
```H.-Y. Liu, D. Liu, H. Mansour, P. T. Boufounos, L. Waller, and U. S. Kamilov, "SEAGLE: Sparsity-Driven Image Reconstruction Under Multiple Scattering," IEEE Trans. Computational Imaging vol.4, pp.73-86 (2018).```<br>
and the 3D PODT paper ([here](https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-25-14-15699&id=368361)): <br>
```J. M. Soto, J. A. Rodrigo, and T. Alieva, "Label-free quantitative 3D tomographic imaging for partially coherent light microscopy," Opt. Express 25, 15699-15712 (2017).```<br>
```
import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft, ifft, fft2, ifft2, fftshift, ifftshift, fftn, ifftn
import pickle
import waveorder as wo
%load_ext autoreload
%autoreload 2
%matplotlib inline
plt.style.use(['dark_background']) # Plotting option for dark background
```
### Experiment parameters
```
N = 256 # number of pixels in y dimension
M = 256 # number of pixels in x dimension
L = 100 # number of layers in z dimension
n_media = 1.46 # refractive index in the media
mag = 63 # magnification
ps = 6.5/mag # effective pixel size
psz = 0.25 # axial pixel size
lambda_illu = 0.532 # wavelength
NA_obj = 1.2 # objective NA
NA_illu = 0.9 # illumination NA
```
### Sample creation
```
radius = 5
blur_size = 2*ps
sphere, _, _ = wo.gen_sphere_target((N,M,L), ps, psz, radius, blur_size)
wo.image_stack_viewer(np.transpose(sphere,(2,0,1)))
# Physical value assignment
n_sample = 1.50
RI_map = np.zeros_like(sphere)
RI_map[sphere > 0] = sphere[sphere > 0]*(n_sample-n_media)
RI_map += n_media
t_obj = np.exp(1j*2*np.pi*psz*(RI_map-n_media))
wo.image_stack_viewer(np.transpose(np.angle(t_obj),(2,0,1)))
```
### Setup acquisition
```
# Subsampled Source pattern
xx, yy, fxx, fyy = wo.gen_coordinate((N, M), ps)
Source_cont = wo.gen_Pupil(fxx, fyy, NA_illu, lambda_illu)
Source_discrete = wo.Source_subsample(Source_cont, lambda_illu*fxx, lambda_illu*fyy, subsampled_NA = 0.1)
plt.figure(figsize=(10,10))
plt.imshow(fftshift(Source_discrete),cmap='gray')
np.sum(Source_discrete)
z_defocus = (np.r_[:L]-L//2)*psz
chi = 0.1*2*np.pi
setup = wo.waveorder_microscopy((N,M), lambda_illu, ps, NA_obj, NA_illu, z_defocus, chi, \
n_media = n_media, phase_deconv='3D', illu_mode='Arbitrary', Source=Source_cont)
simulator = wo.waveorder_microscopy_simulator((N,M), lambda_illu, ps, NA_obj, NA_illu, z_defocus, chi, \
n_media = n_media, illu_mode='Arbitrary', Source=Source_discrete)
plt.figure(figsize=(5,5))
plt.imshow(fftshift(setup.Source), cmap='gray')
plt.colorbar()
H_re_vis = fftshift(setup.H_re)
wo.plot_multicolumn([np.real(H_re_vis)[:,:,L//2], np.transpose(np.real(H_re_vis)[N//2,:,:]), \
np.imag(H_re_vis)[:,:,L//2], np.transpose(np.imag(H_re_vis)[N//2,:,:])], \
num_col=2, size=8, set_title=True, \
titles=['$xy$-slice of Re{$H_{re}$} at $u_z=0$', '$xz$-slice of Re{$H_{re}$} at $u_y=0$', \
'$xy$-slice of Im{$H_{re}$} at $u_z=0$', '$xz$-slice of Im{$H_{re}$} at $u_y=0$'], colormap='jet')
H_im_vis = fftshift(setup.H_im)
wo.plot_multicolumn([np.real(H_im_vis)[:,:,L//2], np.transpose(np.real(H_im_vis)[N//2,:,:]), \
np.imag(H_im_vis)[:,:,L//2], np.transpose(np.imag(H_im_vis)[N//2,:,:])], \
num_col=2, size=8, set_title=True, \
titles=['$xy$-slice of Re{$H_{im}$} at $u_z=0$', '$xz$-slice of Re{$H_{im}$} at $u_y=0$', \
'$xy$-slice of Im{$H_{im}$} at $u_z=0$', '$xz$-slice of Im{$H_{im}$} at $u_y=0$'], colormap='jet')
I_meas = simulator.simulate_3D_scalar_measurements(t_obj)
wo.image_stack_viewer(np.transpose(np.abs(I_meas),(0,1,2)))
# Save simulations
output_file = '3D_PODT_simulation'
np.savez(output_file, I_meas=I_meas, lambda_illu=lambda_illu, \
n_media=n_media, NA_obj=NA_obj, NA_illu=NA_illu, ps=ps, psz=psz, Source_cont=Source_cont)
```
```
#default_exp fastai.dataloader
```
# DataLoader Errors
> Errors and exceptions for any step of the `DataLoader` process
This includes `after_item`, `after_batch`, and collating. Anything in relation to the `Datasets`, or anything before the `DataLoader` process, can be found in `fastdebug.fastai.dataset`.
```
#export
import inflect
from fastcore.basics import patch
from fastai.data.core import TfmdDL
from fastai.data.load import DataLoader, fa_collate, fa_convert
#export
def collate_error(e:Exception, batch):
"""
Raises an explicit error when the batch could not collate, stating
what items in the batch are different sizes and their types
"""
p = inflect.engine()
err = f'Error when trying to collate the data into batches with fa_collate, '
err += 'at least two tensors in the batch are not the same size.\n\n'
# we need to iterate through the entire batch and find a mismatch
length = len(batch[0])
for idx in range(length): # for each type in the batch
for i, item in enumerate(batch):
if i == 0:
shape_a = item[idx].shape
type_a = item[idx].__class__.__name__
elif item[idx].shape != shape_a:
shape_b = item[idx].shape
if shape_a != shape_b:
err += f'Mismatch found within the {p.ordinal(idx)} axis of the batch and is of type {type_a}:\n'
err += f'The first item has shape: {shape_a}\n'
err += f'The {p.number_to_words(p.ordinal(i+1))} item has shape: {shape_b}\n\n'
err += f'Please include a transform in `after_item` that ensures all data of type {type_a} is the same size'
e.args = [err]
raise e
#export
@patch
def create_batch(self:DataLoader, b):
"Collate a list of items into a batch."
func = (fa_collate,fa_convert)[self.prebatched]
try:
return func(b)
except Exception as e:
if not self.prebatched:
collate_error(e, b)
else: raise e
```
`collate_error` is `@patch`'d into `DataLoader`'s `create_batch` function through importing this module, so if there is any possible reason why the data cannot be collated into the batch, it is presented to the user.
An example is below, where we forgot to include an item transform that resizes all our images to the same size:
```
#failing
from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'
dls = ImageDataLoaders.from_name_func(
path, get_image_files(path), valid_pct=0.2,
label_func=lambda x: x[0].isupper())
x,y = dls.train.one_batch()
#export
@patch
def new(self:TfmdDL, dataset=None, cls=None, **kwargs):
res = super(TfmdDL, self).new(dataset, cls, do_setup=False, **kwargs)
if not hasattr(self, '_n_inp') or not hasattr(self, '_types'):
try:
self._one_pass()
res._n_inp,res._types = self._n_inp,self._types
except Exception as e:
print("Could not do one pass in your dataloader, there is something wrong in it")
raise e
else: res._n_inp,res._types = self._n_inp,self._types
return res
```
```
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(5)
y = x
t = x
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.scatter(x, y, c=t, cmap='viridis')
ax2.scatter(x, y, c=t, cmap='viridis_r')
color = "red"
plt.scatter(x, y, c=color)
sequence_of_colors = ["red", "orange", "yellow", "green", "blue"]
plt.scatter(x, y, c=sequence_of_colors)
sample_size = 1000
color_num = 3
X = np.random.normal(0, 1, sample_size)
Y = np.random.normal(0, 1, sample_size)
C = np.random.randint(0, color_num, sample_size)
print("X.shape : {}, \n{}".format(X.shape, X))
print("Y.shape : {}, \n{}".format(Y.shape, Y))
print("C.shape : {}, \n{}".format(C.shape, C))
plt.figure(figsize=(12, 4))
plt.scatter(X, Y, c=C, s=20, cmap=plt.cm.get_cmap('rainbow', color_num), alpha=0.5)
plt.colorbar(ticks=range(color_num), format='color: %d', label='color')
plt.show()
plt.cm.get_cmap('rainbow', color_num)
for a in np.linspace(0, 1.0, 5):
print(plt.cm.rainbow(a))
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import itertools
sample_size = 100
x = np.vstack([
np.random.normal(0, 1, sample_size).reshape(sample_size//2, 2),
np.random.normal(2, 1, sample_size).reshape(sample_size//2, 2),
np.random.normal(4, 1, sample_size).reshape(sample_size//2, 2)
])#50,2
y = np.array(list(itertools.chain.from_iterable([ [i+1 for j in range(0, sample_size//2)] for i in range(0, 3)])))
y = y.reshape(-1, 1)
df = pd.DataFrame(np.hstack([x, y]), columns=['x1', 'x2', 'y'])
print("x : {}, y : {}, df : {}".format(x.shape, y.shape, df.shape))
print(df)
c_lst = [plt.cm.rainbow(a) for a in np.linspace(0.0, 1.0, len(set(df['y'])))]
plt.figure(figsize=(12, 4))
for i, g in enumerate(df.groupby('y')):
plt.scatter(g[1]['x1'], g[1]['x2'], color=c_lst[i], label='group {}'.format(int(g[0])), alpha=0.5)
plt.legend()
plt.show()
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import itertools
from matplotlib import colors
_cmap = ['#1A90F0', '#F93252', '#FEA250', '#276B29', '#362700',
'#2C2572', '#D25ABE', '#4AB836', '#A859EA', '#65C459',
'#C90B18', '#E02FD1', '#5FAFD4', '#DAF779', '#ECEE25',
'#56B390', '#F3BBBE', '#8FC0AE', '#0F16F5', '#8A9EFE',
'#A23965', '#03F70C', '#A8D520', '#952B77', '#2A493C',
'#E8DB82', '#7C01AC', '#1938A3', '#3C4249', '#BC3D92',
'#DEEDB1', '#3C673E', '#65F3D7', '#77110B', '#D16DD6',
'#08EF68', '#CFFD6F', '#DC6B26', '#912D5D', '#8CA6F8',
'#04EE96', '#54B0C1', '#6CBE38', '#24633B', '#DE41DD',
'#5EF270', '#896991', '#E6D381', '#7B0681', '#D66C07'
]
sample_size = 256
x = np.vstack([
np.random.normal(0, 1, sample_size).reshape(sample_size//2, 2),
np.random.normal(2, 1, sample_size).reshape(sample_size//2, 2),
np.random.normal(4, 1, sample_size).reshape(sample_size//2, 2),
np.random.normal(3, 1, sample_size).reshape(sample_size//2, 2)
])#50,2
print(x.shape)
y = np.array(list(itertools.chain.from_iterable([ [i+1 for j in range(0, int(sample_size/4))] for i in range(0, 8)])))
y = y.reshape(-1, 1)
df = pd.DataFrame(np.hstack([x, y]), columns=['x1', 'x2', 'y'])
c_lst = [plt.cm.rainbow(a) for a in np.linspace(0.0, 1.0, len(set(df['y'])))]
plt.figure(figsize=(12, 4))
print("groupby : ", df.groupby('y'))
for i, g in enumerate(df.groupby('y')):
print(i, "g[1]", g[1])
print(i, "g[0]", g[0])
plt.scatter(g[1]['x1'], g[1]['x2'], color=_cmap[i], label='group {}'.format(int(g[0])), alpha=0.5)
plt.legend()
plt.show()
import matplotlib.pyplot as plt
import numpy as np
from struct import unpack
from sklearn import cluster
import datetime
import seaborn as sns
from sklearn.preprocessing import PowerTransformer, normalize, MinMaxScaler, StandardScaler
from struct import pack
from matplotlib import colors
from sklearn.metrics import silhouette_score, silhouette_samples
import matplotlib.cm as cm
import matplotlib
_cmap = colors.ListedColormap(['#1A90F0', '#F93252', '#FEA250', '#276B29', '#362700',
'#2C2572', '#D25ABE', '#4AB836', '#A859EA', '#65C459',
'#C90B18', '#E02FD1', '#5FAFD4', '#DAF779', '#ECEE25',
'#56B390', '#F3BBBE', '#8FC0AE', '#0F16F5', '#8A9EFE',
'#A23965', '#03F70C', '#A8D520', '#952B77', '#2A493C',
'#E8DB82', '#7C01AC', '#1938A3', '#3C4249', '#BC3D92',
'#DEEDB1', '#3C673E', '#65F3D7', '#77110B', '#D16DD6',
'#08EF68', '#CFFD6F', '#DC6B26', '#912D5D', '#8CA6F8',
'#04EE96', '#54B0C1', '#6CBE38', '#24633B', '#DE41DD',
'#5EF270', '#896991', '#E6D381', '#7B0681', '#D66C07'
])
#matplotlib.colors.ListedColormap(colors, name='from_list', N=None)
test = matplotlib.colors.ListedColormap(_cmap.colors[:5])
print(test.colors)
print(_cmap.colors[:5])
```
---
_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-machine-learning/resources/bANLa) course resource._
---
## Assignment 4 - Understanding and Predicting Property Maintenance Fines
This assignment is based on a data challenge from the Michigan Data Science Team ([MDST](http://midas.umich.edu/mdst/)).
The Michigan Data Science Team ([MDST](http://midas.umich.edu/mdst/)) and the Michigan Student Symposium for Interdisciplinary Statistical Sciences ([MSSISS](https://sites.lsa.umich.edu/mssiss/)) have partnered with the City of Detroit to help solve one of the most pressing problems facing Detroit - blight. [Blight violations](http://www.detroitmi.gov/How-Do-I/Report/Blight-Complaint-FAQs) are issued by the city to individuals who allow their properties to remain in a deteriorated condition. Every year, the city of Detroit issues millions of dollars in fines to residents and every year, many of these fines remain unpaid. Enforcing unpaid blight fines is a costly and tedious process, so the city wants to know: how can we increase blight ticket compliance?
The first step in answering this question is understanding when and why a resident might fail to comply with a blight ticket. This is where predictive modeling comes in. For this assignment, your task is to predict whether a given blight ticket will be paid on time.
All data for this assignment has been provided to us through the [Detroit Open Data Portal](https://data.detroitmi.gov/). **Only the data already included in your Coursera directory can be used for training the model for this assignment.** Nonetheless, we encourage you to look into data from other Detroit datasets to help inform feature creation and model selection. We recommend taking a look at the following related datasets:
* [Building Permits](https://data.detroitmi.gov/Property-Parcels/Building-Permits/xw2a-a7tf)
* [Trades Permits](https://data.detroitmi.gov/Property-Parcels/Trades-Permits/635b-dsgv)
* [Improve Detroit: Submitted Issues](https://data.detroitmi.gov/Government/Improve-Detroit-Submitted-Issues/fwz3-w3yn)
* [DPD: Citizen Complaints](https://data.detroitmi.gov/Public-Safety/DPD-Citizen-Complaints-2016/kahe-efs3)
* [Parcel Map](https://data.detroitmi.gov/Property-Parcels/Parcel-Map/fxkw-udwf)
___
We provide you with two data files for use in training and validating your models: train.csv and test.csv. Each row in these two files corresponds to a single blight ticket, and includes information about when, why, and to whom each ticket was issued. The target variable is compliance, which is True if the ticket was paid early, on time, or within one month of the hearing date, False if the ticket was paid after the hearing date or not at all, and Null if the violator was found not responsible. Compliance, as well as a handful of other variables that will not be available at test-time, are only included in train.csv.
Note: All tickets where the violators were found not responsible are not considered during evaluation. They are included in the training set as an additional source of data for visualization, and to enable unsupervised and semi-supervised approaches. However, they are not included in the test set.
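Since the Null rows carry no label, a common first step is to keep only the labeled tickets before supervised training; a minimal sketch of that filter with toy rows:

```
import numpy as np
import pandas as pd

train = pd.DataFrame({'ticket_id': [1, 2, 3, 4],
                      'compliance': [np.nan, 0.0, 1.0, np.nan]})
# drop "not responsible" (Null) rows; only 0/1 labels remain
train = train[train['compliance'].isin([0, 1])]
print(len(train))
```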
<br>
**File descriptions** (Use only this data for training your model!)

* readonly/train.csv - the training set (all tickets issued 2004-2011)
* readonly/test.csv - the test set (all tickets issued 2012-2016)
* readonly/addresses.csv & readonly/latlons.csv - mapping from ticket id to addresses, and from addresses to lat/lon coordinates. Note: misspelled addresses may be incorrectly geolocated.
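The two mapping files chain together on the address column; a sketch with a single made-up row (column names follow the file descriptions above):

```
import pandas as pd

addresses = pd.DataFrame({'ticket_id': [284932], 'address': ['2900 tyler, detroit mi']})
latlons = pd.DataFrame({'address': ['2900 tyler, detroit mi'],
                        'lat': [42.39], 'lon': [-83.12]})

# left merge keeps every ticket even when an address fails to geolocate
ticket_latlons = addresses.merge(latlons, on='address', how='left')
print(ticket_latlons[['ticket_id', 'lat', 'lon']])
```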
<br>
**Data fields**

train.csv & test.csv

* ticket_id - unique identifier for tickets
* agency_name - Agency that issued the ticket
* inspector_name - Name of inspector that issued the ticket
* violator_name - Name of the person/organization that the ticket was issued to
* violation_street_number, violation_street_name, violation_zip_code - Address where the violation occurred
* mailing_address_str_number, mailing_address_str_name, city, state, zip_code, non_us_str_code, country - Mailing address of the violator
* ticket_issued_date - Date and time the ticket was issued
* hearing_date - Date and time the violator's hearing was scheduled
* violation_code, violation_description - Type of violation
* disposition - Judgment and judgment type
* fine_amount - Violation fine amount, excluding fees
* admin_fee - $20 fee assigned to responsible judgments
* state_fee - $10 fee assigned to responsible judgments
* late_fee - 10% fee assigned to responsible judgments
* discount_amount - discount applied, if any
* clean_up_cost - DPW clean-up or graffiti removal cost
* judgment_amount - Sum of all fines and fees
* grafitti_status - Flag for graffiti violations

train.csv only

* payment_amount - Amount paid, if any
* payment_date - Date payment was made, if it was received
* payment_status - Current payment status as of Feb 1 2017
* balance_due - Fines and fees still owed
* collection_status - Flag for payments in collections
* compliance [target variable for prediction]
    * Null = Not responsible
    * 0 = Responsible, non-compliant
    * 1 = Responsible, compliant
* compliance_detail - More information on why each ticket was marked compliant or non-compliant
___
## Evaluation
Your predictions will be given as the probability that the corresponding blight ticket will be paid on time.
The evaluation metric for this assignment is the Area Under the ROC Curve (AUC).
Your grade will be based on the AUC score computed for your classifier. A model with an AUROC of 0.7 passes this assignment; over 0.75 will receive full points.
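Before submitting, the AUC can be sanity-checked locally with `sklearn.metrics.roc_auc_score` on a held-out split. A minimal sketch on synthetic data (the real check would use a split of `readonly/train.csv`):

```
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# toy data whose label depends mostly on the first feature
rng = np.random.RandomState(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
# AUC is computed on predicted probabilities, not hard class labels
val_auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
print(val_auc)
```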
___
For this assignment, create a function that trains a model to predict blight ticket compliance in Detroit using `readonly/train.csv`. Using this model, return a series of length 61001 with the data being the probability that each corresponding ticket from `readonly/test.csv` will be paid, and the index being the ticket_id.
Example:
ticket_id
284932 0.531842
285362 0.401958
285361 0.105928
285338 0.018572
...
376499 0.208567
376500 0.818759
369851 0.018528
Name: compliance, dtype: float32
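The expected return value can be assembled directly from the test tickets and the predicted probabilities; a small sketch with made-up ids and scores (in practice the ids come from `readonly/test.csv` and the scores from `predict_proba`):

```
import numpy as np
import pandas as pd

ticket_ids = [284932, 285362, 285361]
proba = np.array([0.531842, 0.401958, 0.105928], dtype=np.float32)

# a float32 Series named 'compliance', indexed by ticket_id
answer = pd.Series(proba, index=pd.Index(ticket_ids, name='ticket_id'),
                   name='compliance')
print(answer)
```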
### Hints
* Make sure your code is working before submitting it to the autograder.
* Print out your result to see whether there is anything weird (e.g., all probabilities are the same).
* Generally the total runtime should be less than 10 mins. You should NOT use Neural Network related classifiers (e.g., MLPClassifier) in this question.
* Try to avoid global variables. If you have other functions besides blight_model, you should move those functions inside the scope of blight_model.
* Refer to the pinned threads in Week 4's discussion forum when there is something you cannot figure out.
```
import pandas as pd
import numpy as np
def blight_model():
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier
from datetime import datetime
def tg(hearing_date_str, ticket_issued_date_str):
if not hearing_date_str or type(hearing_date_str)!=str: return 73
hearing_date = datetime.strptime(hearing_date_str, "%Y-%m-%d %H:%M:%S")
ticket_issued_date = datetime.strptime(ticket_issued_date_str, "%Y-%m-%d %H:%M:%S")
gap = hearing_date - ticket_issued_date
return gap.days
train_data = pd.read_csv('readonly/train.csv', encoding = 'ISO-8859-1')
test_data = pd.read_csv('readonly/test.csv')
train_data = train_data[(train_data['compliance'] == 0) | (train_data['compliance'] == 1)]
address = pd.read_csv('readonly/addresses.csv')
latlons = pd.read_csv('readonly/latlons.csv')
address = address.set_index('address').join(latlons.set_index('address'), how='left')
train_data = train_data.set_index('ticket_id').join(address.set_index('ticket_id'))
test_data = test_data.set_index('ticket_id').join(address.set_index('ticket_id'))
train_data = train_data[~train_data['hearing_date'].isnull()]
train_data['tg'] = train_data.apply(lambda row: tg(row['hearing_date'], row['ticket_issued_date']), axis=1)
test_data['tg'] = test_data.apply(lambda row: tg(row['hearing_date'], row['ticket_issued_date']), axis=1)
feature_to_be_splitted = ['agency_name', 'state', 'disposition']
train_data.lat.fillna(method='pad', inplace=True)
train_data.lon.fillna(method='pad', inplace=True)
train_data.state.fillna(method='pad', inplace=True)
test_data.lat.fillna(method='pad', inplace=True)
test_data.lon.fillna(method='pad', inplace=True)
test_data.state.fillna(method='pad', inplace=True)
train_data = pd.get_dummies(train_data, columns=feature_to_be_splitted)
test_data = pd.get_dummies(test_data, columns=feature_to_be_splitted)
list_to_remove_train = [
'balance_due',
'collection_status',
'compliance_detail',
'payment_amount',
'payment_date',
'payment_status'
]
list_to_remove_all = ['fine_amount', 'violator_name', 'zip_code', 'country', 'city',
'inspector_name', 'violation_street_number', 'violation_street_name',
'violation_zip_code', 'violation_description',
'mailing_address_str_number', 'mailing_address_str_name',
'non_us_str_code',
'ticket_issued_date', 'hearing_date', 'grafitti_status', 'violation_code']
train_data.drop(list_to_remove_train, axis=1, inplace=True)
train_data.drop(list_to_remove_all, axis=1, inplace=True)
test_data.drop(list_to_remove_all, axis=1, inplace=True)
train_features = train_data.columns.drop('compliance')
train_features_set = set(train_features)
for feature in set(train_features):
if feature not in test_data:
train_features_set.remove(feature)
train_features = list(train_features_set)
X_train = train_data[train_features]
y_train = train_data.compliance
X_test = test_data[train_features]
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
clf = MLPClassifier(hidden_layer_sizes = [100, 10], alpha = 5,
random_state = 0, solver='lbfgs', verbose=0)
clf.fit(X_train_scaled, y_train)
test_proba = clf.predict_proba(X_test_scaled)[:,1]
final_df = pd.read_csv('readonly/test.csv', encoding = "ISO-8859-1")
final_df['compliance'] = test_proba
final_df.set_index('ticket_id', inplace=True)
return final_df.compliance
blight_model()
```
```
# ==============================================================================
# Copyright 2021 Google LLC. This software is provided as-is, without warranty
# or representation for any use or purpose. Your use of it is subject to your
# agreement with Google.
# ==============================================================================
#
# Author: Chanchal Chatterjee
# Email: cchatterjee@google.com
#
# Do these first:
# 1. Create a VM with TF 2.1
# 2. Create the following buckets in your project:
# Root Bucket: BUCKET_NAME = 'tuti_asset' 'gs://$BUCKET_NAME'
# Model Results Directory: FOLDER_RESULTS = 'tf_models' 'gs://$BUCKET_NAME/$FOLDER_RESULTS'
# Data directory: FOLDER_DATA = 'datasets' 'gs://$BUCKET_NAME/$FOLDER_DATA'
# The data: INPUT_FILE_NAME = 'mortgage_structured.csv'
# 3. In your VM create directory called ./model_dir
# Uninstall old packages
#!pip3 uninstall -r requirements-uninstall.txt -y
# Install packages
# https://cloud.google.com/ai-platform/training/docs/runtime-version-list
#!pip3 install -r requirements-rt2.1.txt --user --ignore-installed
# If VM created with TF2.1 Enterprise (no GPUs), all you need to install is cloudml-hypertune
!pip3 install cloudml-hypertune --user --ignore-installed
# Import packages
import warnings
warnings.filterwarnings("ignore")
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
#0 = all messages are logged (default behavior)
#1 = INFO messages are not printed
#2 = INFO and WARNING messages are not printed
#3 = INFO, WARNING, and ERROR messages are not printed
import numpy as np
from google.cloud import storage
import tensorflow as tf
#import matplotlib.pyplot as plt
#from tensorflow.keras import models
print("TF Version= ", tf.__version__)
print("Keras Version= ", tf.keras.__version__)
# Utility functions
#------
def find_best_model_dir(model_dir, offset=1, maxFlag=1):
# Get a list of model directories
all_models = ! gsutil ls $model_dir
print("")
print("All Models = ")
print(*all_models, sep='\n')
# Check if model dirs exist
if (("CommandException" in all_models[0]) or (len(all_models) <= 1)):
print("Create the models first.")
return ""
# Find the best model from checkpoints
import re
best_acc = -np.Inf
if (maxFlag != 1):
best_acc = np.Inf
best_model_dir = ""
tup_list = []
for i in range(1,len(all_models)):
all_floats = re.findall(r"[-+]?\d*\.\d+|\d+", all_models[i]) #Find the floats in the string
cur_acc = -float(all_floats[-offset]) #which item is the model optimization metric
tup_list.append([all_models[i],cur_acc])
if (maxFlag*(cur_acc > best_acc) or (1-maxFlag)*(cur_acc < best_acc)):
best_acc = cur_acc
best_model_dir = all_models[i]
if maxFlag:
tup_list.sort(key=lambda tup: tup[1], reverse=False)
else:
tup_list.sort(key=lambda tup: tup[1], reverse=True)
#for i in range(len(tup_list)):
# print(tup_list[i][0])
print("Best Accuracy from Checkpoints = ", best_acc)
print("Best Model Dir from Checkpoints = ", best_model_dir)
return best_model_dir
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
from googleapiclient import errors
import json
#------
# Python module to get the best hypertuned model parameters
def pyth_get_hypertuned_parameters(project_name, job_name, maxFlag):
# Define the credentials for the service account
#credentials = service_account.Credentials.from_service_account_file(<PATH TO CREDENTIALS JSON>)
credentials = GoogleCredentials.get_application_default()
# Define the project id and the job id and format it for the api request
project_id = 'projects/{}'.format(project_name)
job_id = '{}/jobs/{}'.format(project_id, job_name)
# Build the service
cloudml = discovery.build('ml', 'v1', cache_discovery=False, credentials=credentials)
# Execute the request and pass in the job id
request = cloudml.projects().jobs().get(name=job_id)
try:
response = request.execute()
# Handle a successful request
except errors.HttpError as err:
tf.compat.v1.logging.error('There was an error getting the hyperparameters. Check the details:')
tf.compat.v1.logging.error(err._get_reason())
# Get just the best hp values
if maxFlag:
best_model = response['trainingOutput']['trials'][0]
else:
best_model = response['trainingOutput']['trials'][-1]
#print('Best Hyperparameters:')
#print(json.dumps(best_model, indent=4))
nTrials = len(response['trainingOutput']['trials'])
for i in range(0,nTrials):
state = response['trainingOutput']['trials'][i]['state']
trialId = response['trainingOutput']['trials'][i]['trialId']
objV = -1
if (state == 'SUCCEEDED'):
objV = response['trainingOutput']['trials'][i]['finalMetric']['objectiveValue']
print('objective=', objV, ' trialId=', trialId, state)
d = response['trainingOutput']['trials'][i]['hyperparameters']
for key, value in d.items():
print(' ', key, value)
return best_model
```
# Setup
```
# Get the project id
proj_id = !gcloud config list project --format "value(core.project)"
proj_id[0]
USER = 'cchatterj'
PROJECT_ID = proj_id[0]
BUCKET_NAME = 'tuti_asset' #Use a unique name
FOLDER_RESULTS = 'tf_models'
FOLDER_DATA = 'datasets'
REGION = 'us-central1'
ZONE1 = 'us-central1-a'
RUNTIME_VERSION = 2.1
JOB_DIR = 'gs://' + BUCKET_NAME + '/' + FOLDER_RESULTS + '/jobdir'
MODEL_DIR = 'gs://' + BUCKET_NAME + '/' + FOLDER_RESULTS + '/models'
INPUT_FILE_NAME = 'mortgage_structured.csv'
!gcloud config set project $PROJECT_ID
!gcloud config set compute/zone $ZONE1
!gcloud config set compute/region $REGION
!gcloud config list
#!gcloud config config-helper --format "value(configuration.properties.core.project)"
# Clean old job logs, job packages and models
!gsutil -m -q rm $JOB_DIR/packages/**
!gsutil -m -q rm $MODEL_DIR/model**
```
# ML Model
```
# Create the tf_trainer directory and load the trainer files in it
!mkdir -p trainer
%%writefile ./trainer/inputs.py
# Create the train and label lists
import math
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
#------
def load_data(input_file):
# Read the data
print(input_file)
#try:
table_data = pd.read_csv(input_file)
#except:
# print("Oops! That is invalid filename. Try again...")
# return
print(table_data.shape)
# ---------------------------------------
# Pre-processing
# ---------------------------------------
# Drop useless columns
table_data.drop(['LOAN_SEQUENCE_NUMBER'], axis=1, inplace=True)
    # Inputs to the model must be numeric. One-hot encoding was
    # previously found to yield better results than label encoding
    # for this particular dataset
strcols = [col for col in table_data.columns if table_data[col].dtype == 'object']
table_data = pd.get_dummies(table_data, columns=strcols)
# Train Test Split and write out the train-test files
# Split with a small test size so as to allow our model to train on more data
X_train, X_test, y_train, y_test = \
train_test_split(table_data.drop('TARGET', axis=1),
table_data['TARGET'],
stratify=table_data['TARGET'],
shuffle=True, test_size=0.2
)
# Remove Null and NAN
X_train = X_train.fillna(0)
X_test = X_test.fillna(0)
# Check the shape
print("X_train shape = ", X_train.shape)
print("X_test shape = ", X_test.shape)
y_train_cat = tf.keras.utils.to_categorical(y_train)
y_test_cat = tf.keras.utils.to_categorical(y_test)
print("y_train shape = ", y_train_cat.shape)
print("y_test shape = ", y_test_cat.shape)
# count number of classes
#values, counts = np.unique(y_train, return_counts=True)
#NUM_CLASSES = len(values)
#print("Number of classes ", NUM_CLASSES)
#train_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train))
#train_dataset = train_dataset.shuffle(100).batch(batch_size)
#test_dataset = tf.data.Dataset.from_tensor_slices((X_test, y_test))
#test_dataset = test_dataset.shuffle(100).batch(batch_size)
return [X_train, X_test, y_train_cat, y_test_cat]
%%writefile ./trainer/model.py
import tensorflow as tf
import numpy as np
def tf_model(input_dim, output_dim, model_depth: int = 1, dropout_rate: float = 0.02):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
    decr = max(int((input_dim - output_dim - 16) / model_depth), 1)
model = Sequential()
model.add(Dense(128, input_dim=input_dim, activation=tf.nn.relu))
for i in range(1,model_depth):
model.add(Dense(input_dim-i*decr, activation=tf.nn.relu, kernel_regularizer='l2'))
model.add(Dropout(dropout_rate))
model.add(Dense(output_dim, activation=tf.nn.softmax))
print(model.summary())
return model
def custom_loss(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)

def custom_metric(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)
```
## Package for distributed training
```
%%writefile ./setup.py
# python3
# ==============================================================================
# Copyright 2020 Google LLC. This software is provided as-is, without warranty
# or representation for any use or purpose. Your use of it is subject to your
# agreement with Google.
# ==============================================================================
# https://cloud.google.com/ai-platform/training/docs/runtime-version-list
from setuptools import find_packages
from setuptools import setup
#Runtime 2.1
REQUIRED_PACKAGES = ['tensorflow==2.1.0',
'pandas==0.25.3',
'scikit-learn==0.22',
'google-cloud-storage==1.23.0',
'gcsfs==0.6.1',
'cloudml-hypertune',
]
setup(
name='trainer',
version='0.1',
install_requires=REQUIRED_PACKAGES,
packages=find_packages(),
include_package_data=True,
description='Trainer package for Tensorflow Task'
)
```
## Training functions
```
%%writefile ./trainer/__init__.py
# python3
# ==============================================================================
# Copyright 2020 Google LLC. This software is provided as-is, without warranty
# or representation for any use or purpose. Your use of it is subject to your
# agreement with Google.
# ==============================================================================
%%writefile ./trainer/train.py
# python3
# ==============================================================================
# Copyright 2020 Google LLC. This software is provided as-is, without warranty
# or representation for any use or purpose. Your use of it is subject to your
# agreement with Google.
# ==============================================================================
import os
import json
import tensorflow as tf
import numpy as np
import datetime
from pytz import timezone
import hypertune
import argparse
from trainer import model
from trainer import inputs
import warnings
warnings.filterwarnings("ignore")
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
#0 = all messages are logged (default behavior)
#1 = INFO messages are not printed
#2 = INFO and WARNING messages are not printed
#3 = INFO, WARNING, and ERROR messages are not printed
def parse_arguments():
"""Argument parser.
Returns:
Dictionary of arguments.
"""
parser = argparse.ArgumentParser()
parser.add_argument('--model_depth', default=3, type=int,
help='Hyperparameter: depth of model')
parser.add_argument('--dropout_rate', default=0.02, type=float,
help='Hyperparameter: Drop out rate')
parser.add_argument('--learning_rate', default=0.0001, type=float,
help='Hyperparameter: initial learning rate')
parser.add_argument('--batch_size', default=4, type=int,
help='batch size of the deep network')
parser.add_argument('--epochs', default=1, type=int,
help='number of epochs.')
parser.add_argument('--model_dir', default="",
help='Directory to store model checkpoints and logs.')
parser.add_argument('--input_file', default="",
help='Directory to store model checkpoints and logs.')
parser.add_argument('--verbosity', choices=['DEBUG','ERROR','FATAL','INFO','WARN'],
default='FATAL')
args, _ = parser.parse_known_args()
return args
def get_callbacks(args, early_stop_patience: int = 3):
"""Creates Keras callbacks for model training."""
# Get trialId
trialId = json.loads(os.environ.get("TF_CONFIG", "{}")).get("task", {}).get("trial", "")
if trialId == '':
trialId = '0'
print("trialId=", trialId)
curTime = datetime.datetime.now(timezone('US/Pacific')).strftime('%H%M%S')
# Modify model_dir paths to include trialId
model_dir = args.model_dir + "/checkpoints/cp-"+curTime+"-"+trialId+"-{val_accuracy:.4f}"
log_dir = args.model_dir + "/log_dir"
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir, histogram_freq=1)
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(model_dir, monitor='val_accuracy', mode='max',
verbose=0, save_best_only=True,
save_weights_only=False)
earlystop_cb = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=early_stop_patience)
return [checkpoint_cb, tensorboard_cb, earlystop_cb]
if __name__ == "__main__":
# ---------------------------------------
# Parse Arguments
# ---------------------------------------
args = parse_arguments()
#args.model_dir = MODEL_DIR + datetime.datetime.now(timezone('US/Pacific')).strftime('/model_%m%d%Y_%H%M')
#args.input_file = 'gs://' + BUCKET_NAME + '/' + FOLDER_DATA + '/' + INPUT_FILE_NAME
print(args)
# ---------------------------------------
# Input Data & Preprocessing
# ---------------------------------------
print("Input and pre-process data ...")
# Extract train_seismic, train_label
train_test_data = inputs.load_data(args.input_file)
X_train = train_test_data[0]
X_test = train_test_data[1]
y_train = train_test_data[2]
y_test = train_test_data[3]
# ---------------------------------------
# Train model
# ---------------------------------------
print("Creating model ...")
print("x_train")
print(X_train.shape[1])
print("y_train")
print(y_train.shape[1])
tf_model = model.tf_model(X_train.shape[1], y_train.shape[1],
model_depth=args.model_depth,
dropout_rate=args.dropout_rate)
tf_model.compile(optimizer=tf.keras.optimizers.Adam(lr=args.learning_rate),
loss='mean_squared_error',
metrics=['accuracy'])
print("Fitting model ...")
callbacks = get_callbacks(args, 3)
histy = tf_model.fit(np.array(X_train), y_train,
epochs=args.epochs,
batch_size=args.batch_size,
validation_data=[np.array(X_test),y_test],
callbacks=callbacks)
# TBD save history for visualization
final_epoch_accuracy = histy.history['accuracy'][-1]
final_epoch_count = len(histy.history['accuracy'])
print('final_epoch_accuracy = %.6f' % final_epoch_accuracy)
print('final_epoch_count = %2d' % final_epoch_count)
%%time
# Run the training manually
# Training parameters
from datetime import datetime
from pytz import timezone
MODEL_DEPTH = 2
DROPOUT_RATE = 0.01
LEARNING_RATE = 0.00005
EPOCHS = 1
BATCH_SIZE = 32
MODEL_DIR_PYTH = MODEL_DIR + datetime.now(timezone('US/Pacific')).strftime('/model_%m%d%Y_%H%M')
INPUT_FILE = 'gs://' + BUCKET_NAME + '/' + FOLDER_DATA + '/' + INPUT_FILE_NAME
print('MODEL_DEPTH = %2d' % MODEL_DEPTH)
print('DROPOUT_RATE = %.4f' % DROPOUT_RATE)
print('LEARNING_RATE = %.6f' % LEARNING_RATE)
print('EPOCHS = %2d' % EPOCHS)
print('BATCH_SIZE = %2d' % BATCH_SIZE)
print("MODEL_DIR =", MODEL_DIR_PYTH)
print("INPUT_FILE =", INPUT_FILE)
# Run training
! python3 -m trainer.train --model_depth=$MODEL_DEPTH --dropout_rate=$DROPOUT_RATE \
--learning_rate=$LEARNING_RATE \
--epochs=$EPOCHS \
--batch_size=$BATCH_SIZE \
--model_dir=$MODEL_DIR_PYTH \
--input_file=$INPUT_FILE
# Test with latest saved model
best_model_dir_pyth = find_best_model_dir(MODEL_DIR_PYTH+'/checkpoints', offset=1, maxFlag=1)
#acc = test_saved_model(best_model_dir_pyth, 0)
%%time
#***CREATE model_dir in local VM***
!mkdir -p model_dir
from trainer import model
# Copy the model from storage to local memory
!gsutil -m cp -r $best_model_dir_pyth* ./model_dir
# Load the model
loaded_model = tf.keras.models.load_model('./model_dir', compile=False)#,
#custom_objects={"custom_loss": model.custom_loss, "custom_mse": model.custom_mse})
print("Signature ", loaded_model.signatures)
print("")
# Display model
tf.keras.utils.plot_model(loaded_model, show_shapes=True)
```
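`get_callbacks` pulls the trial id out of the `TF_CONFIG` environment variable that AI Platform sets for each hyperparameter-tuning trial. A standalone sketch of that lookup (the `TF_CONFIG` value below is fabricated for illustration):

```python
import json
import os

# Fabricated TF_CONFIG resembling what AI Platform sets per tuning trial
os.environ["TF_CONFIG"] = json.dumps({"task": {"type": "worker", "index": 0, "trial": "7"}})

trial_id = json.loads(os.environ.get("TF_CONFIG", "{}")).get("task", {}).get("trial", "")
if trial_id == "":
    trial_id = "0"  # fall back when running outside a tuning job
print(trial_id)  # → 7
```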
------
# Training
```
# Create the config directory and load the trainer files in it
!mkdir -p config
%%writefile ./config/config.yaml
# python3
# ==============================================================================
# Copyright 2020 Google LLC. This software is provided as-is, without warranty
# or representation for any use or purpose. Your use of it is subject to your
# agreement with Google.
# ==============================================================================
# https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training#--scale-tier
# https://www.kaggle.com/c/passenger-screening-algorithm-challenge/discussion/37087
# https://cloud.google.com/ai-platform/training/docs/using-gpus
#trainingInput:
# scaleTier: CUSTOM
# masterType: n1-highmem-16
# masterConfig:
# acceleratorConfig:
# count: 2
# type: NVIDIA_TESLA_V100
#trainingInput:
# scaleTier: CUSTOM
# masterType: n1-highmem-8
# masterConfig:
# acceleratorConfig:
# count: 1
# type: NVIDIA_TESLA_T4
# masterType: n1-highcpu-16
# workerType: cloud_tpu
# workerCount: 1
# workerConfig:
# acceleratorConfig:
# type: TPU_V3
# count: 8
#trainingInput:
# scaleTier: CUSTOM
# masterType: complex_model_m
# workerType: complex_model_m
# parameterServerType: large_model
# workerCount: 6
# parameterServerCount: 1
# scheduling:
# maxWaitTime: 3600s
# maxRunningTime: 7200s
#trainingInput:
# runtimeVersion: "2.1"
# scaleTier: CUSTOM
# masterType: standard_gpu
# workerCount: 9
# workerType: standard_gpu
# parameterServerCount: 3
# parameterServerType: standard
#trainingInput:
# scaleTier: BASIC-GPU
#trainingInput:
# region: us-central1
# scaleTier: CUSTOM
# masterType: complex_model_m
# workerType: complex_model_m_gpu
# parameterServerType: large_model
# workerCount: 4
# parameterServerCount: 2
trainingInput:
scaleTier: STANDARD_1
from datetime import datetime
from pytz import timezone
JOBNAME_TRN = 'tf_train_'+ USER + '_' + \
datetime.now(timezone('US/Pacific')).strftime("%m%d%y_%H%M")
JOB_CONFIG = "config/config.yaml"
MODEL_DIR_TRN = MODEL_DIR + datetime.now(timezone('US/Pacific')).strftime('/model_%m%d%Y_%H%M')
INPUT_FILE = 'gs://' + BUCKET_NAME + '/' + FOLDER_DATA + '/' + INPUT_FILE_NAME
print("Job Name = ", JOBNAME_TRN)
print("Job Dir = ", JOB_DIR)
print("MODEL_DIR =", MODEL_DIR_TRN)
print("INPUT_FILE =", INPUT_FILE)
# Training parameters
MODEL_DEPTH = 3
DROPOUT_RATE = 0.02
LEARNING_RATE = 0.0001
EPOCHS = 2
BATCH_SIZE = 32
print('MODEL_DEPTH = %2d' % MODEL_DEPTH)
print('DROPOUT_RATE = %.4f' % DROPOUT_RATE)
print('LEARNING_RATE = %.6f' % LEARNING_RATE)
print('EPOCHS = %2d' % EPOCHS)
print('BATCH_SIZE = %2d' % BATCH_SIZE)
# https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training
TRAIN_LABELS = "mode=train,owner="+USER
# submit the training job
! gcloud ai-platform jobs submit training $JOBNAME_TRN \
--package-path $(pwd)/trainer \
--module-name trainer.train \
--region $REGION \
--python-version 3.7 \
--runtime-version $RUNTIME_VERSION \
--job-dir $JOB_DIR \
--config $JOB_CONFIG \
--labels $TRAIN_LABELS \
-- \
--model_depth=$MODEL_DEPTH \
--dropout_rate=$DROPOUT_RATE \
--learning_rate=$LEARNING_RATE \
--epochs=$EPOCHS \
--batch_size=$BATCH_SIZE \
--model_dir=$MODEL_DIR_TRN \
--input_file=$INPUT_FILE
# check the training job status
! gcloud ai-platform jobs describe $JOBNAME_TRN
# Print Errors
#response = ! gcloud logging read "resource.labels.job_id=$JOBNAME_TRN severity>=ERROR"
#for i in range(0,len(response)):
# if 'message' in response[i]:
# print(response[i])
# Test with latest saved model
best_model_dir_trn = find_best_model_dir(MODEL_DIR_TRN+'/checkpoints', offset=1, maxFlag=1)
#acc = test_saved_model(best_model_dir_trn, 0)
```
------
# Hyper Parameter Tuning
```
# Create the tf directory and load the trainer files in it
!cp ./trainer/train.py ./trainer/train_hpt.py
%%writefile -a ./trainer/train_hpt.py
"""This method updates a CAIP HPTuning Job with a final metric for the job.
In TF2.X the user must either use hypertune or a custom callback with
tf.summary.scalar to update CAIP HP Tuning jobs. This function uses
hypertune, which appears to be the preferred solution. Hypertune also works
with containers, without code change.
Args:
metric_tag: The metric being optimized. This MUST MATCH the
hyperparameterMetricTag specified in the hyperparameter tuning yaml.
metric_value: The value to report at the end of model training.
global_step: An int value to specify the number of training steps completed
at the time the metric was reported.
"""
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=final_epoch_accuracy,
global_step=final_epoch_count
)
%%writefile ./config/hptuning_config.yaml
# python3
# ==============================================================================
# Copyright 2020 Google LLC. This software is provided as-is, without warranty
# or representation for any use or purpose. Your use of it is subject to your
# agreement with Google.
# ==============================================================================
# https://cloud.google.com/ai-platform/training/docs/reference/rest/v1/projects.jobs
# https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training
#trainingInput:
# scaleTier: CUSTOM
# masterType: n1-highmem-8
# masterConfig:
# acceleratorConfig:
# count: 1
# type: NVIDIA_TESLA_T4
#
# masterType: standard_p100
# workerType: standard_p100
# parameterServerType: standard_p100
# workerCount: 8
# parameterServerCount: 1
# runtimeVersion: $RUNTIME_VERSION
# pythonVersion: '3.7'
#trainingInput:
# scaleTier: CUSTOM
# masterType: complex_model_m
# workerType: complex_model_m
# parameterServerType: large_model
# workerCount: 9
# parameterServerCount: 3
# scheduling:
# maxWaitTime: 3600s
# maxRunningTime: 7200s
#trainingInput:
# scaleTier: BASIC-GPU
#trainingInput:
# scaleTier: CUSTOM
# masterType: n1-highmem-16
# masterConfig:
# acceleratorConfig:
# count: 2
# type: NVIDIA_TESLA_V100
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
goal: MAXIMIZE
hyperparameterMetricTag: accuracy
maxTrials: 4
maxParallelTrials: 4
enableTrialEarlyStopping: True
params:
- parameterName: model_depth
type: INTEGER
minValue: 2
maxValue: 4
scaleType: UNIT_LINEAR_SCALE
- parameterName: epochs
type: INTEGER
minValue: 1
maxValue: 3
scaleType: UNIT_LINEAR_SCALE
from datetime import datetime
from pytz import timezone
JOBNAME_HPT = 'tf_hptrn_' + USER + '_' + \
datetime.now(timezone('US/Pacific')).strftime("%m%d%y_%H%M")
JOB_CONFIG = "./config/hptuning_config.yaml"
MODEL_DIR_HPT = MODEL_DIR + datetime.now(timezone('US/Pacific')).strftime('/model_%m%d%Y_%H%M')
INPUT_FILE = 'gs://' + BUCKET_NAME + '/' + FOLDER_DATA + '/' + INPUT_FILE_NAME
print("Job Name = ", JOBNAME_HPT)
print("Job Dir = ", JOB_DIR)
print("MODEL_DIR =", MODEL_DIR_HPT)
print("INPUT_FILE =", INPUT_FILE)
# Training parameters
DROPOUT_RATE = 0.02
LEARNING_RATE = 0.0001
BATCH_SIZE = 32
# submit the training job
HT_LABELS = "mode=hypertrain,owner="+USER
! gcloud ai-platform jobs submit training $JOBNAME_HPT \
--package-path $(pwd)/trainer \
--module-name trainer.train_hpt \
--python-version 3.7 \
--runtime-version $RUNTIME_VERSION \
--region $REGION \
--job-dir $JOB_DIR \
--config $JOB_CONFIG \
--labels $HT_LABELS \
-- \
--dropout_rate=$DROPOUT_RATE \
--learning_rate=$LEARNING_RATE \
--batch_size=$BATCH_SIZE \
--model_dir=$MODEL_DIR_HPT \
--input_file=$INPUT_FILE
# check the hyperparameter training job status
! gcloud ai-platform jobs describe $JOBNAME_HPT
# Print Errors
#response = ! gcloud logging read "resource.labels.job_id=$JOBNAME_HPT severity>=ERROR"
#for i in range(0,len(response)):
# if 'message' in response[i]:
# print(response[i])
# Get the best model parameters from Cloud API
best_model = pyth_get_hypertuned_parameters(PROJECT_ID, JOBNAME_HPT, 1)
MODEL_DEPTH = best_model['hyperparameters']['model_depth']
EPOCHS = best_model['hyperparameters']['epochs']
print('')
print('Objective=', best_model['finalMetric']['objectiveValue'])
print('MODEL_DEPTH =', MODEL_DEPTH)
print('EPOCHS =', EPOCHS)
# Find count of checkpoints
all_models = ! gsutil ls {MODEL_DIR_HPT+'/checkpoints'}
print("Total Hypertrained Models=", len(all_models))
# Test with latest saved model
best_model_dir_hyp = find_best_model_dir(MODEL_DIR_HPT+'/checkpoints', offset=1, maxFlag=1)
#acc = test_saved_model(best_model_dir_hyp, 0)
#import keras.backend as K
#loaded_model = tf.keras.models.load_model(MODEL_DIR_PARAM+'/checkpoints')
#print("learning_rate=", K.eval(loaded_model.optimizer.lr))
#tf.keras.utils.plot_model(loaded_model, show_shapes=True)
```
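`pyth_get_hypertuned_parameters` is a helper defined elsewhere in the notebook; conceptually it selects the trial with the best final metric from the jobs API response. A sketch of that selection over a fabricated trials payload:

```python
# Fabricated trials shaped loosely like the AI Platform jobs.get response
trials = [
    {"trialId": "1", "hyperparameters": {"model_depth": "2", "epochs": "1"},
     "finalMetric": {"objectiveValue": 0.81}},
    {"trialId": "2", "hyperparameters": {"model_depth": "3", "epochs": "2"},
     "finalMetric": {"objectiveValue": 0.87}},
]

# goal is MAXIMIZE, so take the trial with the largest objective value
best = max(trials, key=lambda t: t["finalMetric"]["objectiveValue"])
print(best["trialId"])  # → 2
```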
--------
# Deploy the Model
```
## https://cloud.google.com/ai-platform/prediction/docs/machine-types-online-prediction#available_machine_types
# We need 2 versions of the same model:
# 1. Batch prediction model deployed on a mls1-c1-m2 cluster
# 2. Online prediction model deployed on a n1-standard-16 cluster
# Batch prediction does not support GPU and n1-standard-16 clusters.
# Run the Deploy Model section twice:
# 1. As a BATCH Mode version use MODEL_VERSION = MODEL_VERSION_BATCH
# 2. As a ONLINE Mode version use MODEL_VERSION = MODEL_VERSION_ONLINE
# Regional End points with python
#https://cloud.google.com/ai-platform/prediction/docs/regional-endpoints#python
MODEL_NAME = "loan_model_1"
MODEL_VERSION_BATCH = "batch_v1"
MODEL_VERSION_ONLINE = "online_v1"
#Run this as Batch first then Online
#MODEL_VERSION = MODEL_VERSION_ONLINE
MODEL_VERSION = MODEL_VERSION_BATCH
# List all models
print("\nList of Models in Global Endpoint")
!gcloud ai-platform models list --region=global
# List all versions of model
print("\nList of Versions in Global Endpoint")
!gcloud ai-platform versions list --model $MODEL_NAME --region=global
#!gcloud ai-platform versions delete $MODEL_VERSION_BATCH --model $MODEL_NAME --quiet --region=global
#!gcloud ai-platform models delete $MODEL_NAME --quiet --region=global
# List all models
print("\nList of Models in Global Endpoint")
!gcloud ai-platform models list --region=global
# List all versions of model
print("\nList of Versions in Global Endpoint")
!gcloud ai-platform versions list --model $MODEL_NAME --region=global
# create the model if it doesn't already exist
modelname = !gcloud ai-platform models list | grep -w $MODEL_NAME
print(modelname)
if (len(modelname) <= 1) or ('Listed 0 items.' in modelname[1]):
print("Creating model " + MODEL_NAME)
# Global endpoint
!gcloud ai-platform models create $MODEL_NAME --enable-logging --regions $REGION
else:
print("Model " + MODEL_NAME + " exists")
print("\nList of Models in Global Endpoint")
!gcloud ai-platform models list --region=global
%%time
print("Model Name =", MODEL_NAME)
print("Model Versions =", MODEL_VERSION)
# Get a list of model directories
best_model_dir = best_model_dir_hyp
print("Best Model Dir: ", best_model_dir)
MODEL_FRAMEWORK = "TENSORFLOW"
MODEL_DESCRIPTION = "SEQ_MODEL_1"
MODEL_LABELS="team=ourteam,phase=test,owner="+USER
MACHINE_TYPE = "mls1-c1-m2"
if (MODEL_VERSION == MODEL_VERSION_BATCH):
MACHINE_TYPE = "mls1-c1-m2"
MODEL_LABELS = MODEL_LABELS+",mode=batch"
if (MODEL_VERSION == MODEL_VERSION_ONLINE):
MACHINE_TYPE = "mls1-c1-m2" #"n1-standard-32"
MODEL_LABELS = MODEL_LABELS+",mode=online"
# Deploy the model
! gcloud beta ai-platform versions create $MODEL_VERSION \
--model $MODEL_NAME \
--origin $best_model_dir \
--runtime-version $RUNTIME_VERSION \
--python-version=3.7 \
--description=$MODEL_DESCRIPTION \
--labels $MODEL_LABELS \
--machine-type=$MACHINE_TYPE \
--framework $MODEL_FRAMEWORK \
--region global
# List all models
print("\nList of Models in Global Endpoint")
!gcloud ai-platform models list --region=global
print("\nList of Models in Regional Endpoint")
!gcloud ai-platform models list --region=$REGION
# List all versions of model
print("\nList of Versions in Global Endpoint")
!gcloud ai-platform versions list --model $MODEL_NAME --region=global
#print("\nList of Versions in Regional Endpoint")
#!gcloud ai-platform versions list --model $MODEL_NAME --region=$REGION
```
------
# Predictions with the deployed model
```
%%time
from trainer import model
# Copy the model from storage to local memory
!gsutil -m cp -r $best_model_dir_hyp* ./model_dir
# Load the model
loaded_model = tf.keras.models.load_model('./model_dir', compile=False) #,
#custom_objects={"custom_loss": model.custom_loss,"custom_mse": model.custom_mse})
print("Signature ", loaded_model.signatures)
# Check the model layers
model_layers = [layer.name for layer in loaded_model.layers]
print("")
print("Model Input Layer=", model_layers[0])
print("Model Output Layer=", model_layers[-1])
print("")
from trainer import inputs
input_file = 'gs://' + BUCKET_NAME + '/' + FOLDER_DATA + '/' + INPUT_FILE_NAME
train_test_data = inputs.load_data(input_file)
X_test = train_test_data[1]
y_test = train_test_data[3]
```
## Online Prediction with python
```
%%time
# Online Prediction with Python - works for global end points only
# Use MODEL_VERSION_ONLINE not MODEL_VERSION_BATCH
MODEL_VERSION = MODEL_VERSION_ONLINE
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
from googleapiclient import errors
import json
#tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
#tf.get_logger().setLevel('ERROR')
print("Project ID =", PROJECT_ID)
print("Model Name =", MODEL_NAME)
print("Model Version =", MODEL_VERSION)
model_name = 'projects/{}/models/{}'.format(PROJECT_ID, MODEL_NAME)
if MODEL_VERSION is not None:
model_name += '/versions/{}'.format(MODEL_VERSION)
credentials = GoogleCredentials.get_application_default()
service = discovery.build('ml', 'v1', cache_discovery=False, credentials=credentials)
print("model_name=", model_name)
pprobas_temp = []
batch_size = 32
n_samples = min(1000,X_test.shape[0])
print("batch_size=", batch_size)
print("n_samples=", n_samples)
for i in range(0, n_samples, batch_size):
j = min(i+batch_size, n_samples)
print("Processing samples", i, j)
request = service.projects().predict(name=model_name, \
body={'instances': np.array(X_test)[i:j].tolist()})
try:
response = request.execute()
pprobas_temp += response['predictions']
except errors.HttpError as err:
# Something went wrong, print out some information.
tf.compat.v1.logging.error('There was an error getting the predictions. Check the details:')
tf.compat.v1.logging.error(err._get_reason())
break
# Show the prediction results as an array
nPreds = len(pprobas_temp)
nClasses = y_test.shape[1]
pprobas = np.zeros((nPreds, nClasses))
for i in range(nPreds):
pprobas[i,:] = np.array(pprobas_temp[i][model_layers[-1]])
pprobas = np.round(pprobas, 2)
pprobas
```
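The prediction loop above slices `X_test` into request-sized batches; the index arithmetic on its own:

```python
n_samples, batch_size = 10, 4

# (start, end) pairs covering all samples; the last batch may be short
batches = [(i, min(i + batch_size, n_samples)) for i in range(0, n_samples, batch_size)]
print(batches)  # → [(0, 4), (4, 8), (8, 10)]
```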
## Batch Prediction with GCLOUD
```
# Write batch data to file in GCS
import shutil
# Clean current directory
DATA_DIR = './batch_data'
shutil.rmtree(DATA_DIR, ignore_errors=True)
os.makedirs(DATA_DIR)
n_samples = min(1000,X_test.shape[0])
nFiles = 10
nRecsPerFile = min(1000,n_samples//nFiles)
print("n_samples =", n_samples)
print("nFiles =", nFiles)
print("nRecsPerFile =", nRecsPerFile)
# Create nFiles files with nRecsPerFile records each
for i in range(nFiles):
with open(f'{DATA_DIR}/unkeyed_batch_{i}.json', "w") as file:
for z in range(nRecsPerFile):
print(f'{{"dense_input": {np.array(X_test)[i*nRecsPerFile+z].tolist()}}}', file=file)
#print(f'{{"{model_layers[0]}": {np.array(X_test)[i*nRecsPerFile+z].tolist()}}}', file=file)
#key = f'key_{i}_{z}'
#print(f'{{"image": {X_test_images[z].tolist()}, "key": "{key}"}}', file=file)
# Write batch data to gcs file
!gsutil -m cp -r ./batch_data gs://$BUCKET_NAME/$FOLDER_RESULTS/
# Remove old batch prediction results
!gsutil -m rm -r gs://$BUCKET_NAME/$FOLDER_RESULTS/batch_predictions
from datetime import datetime
from pytz import timezone
DATA_FORMAT="text" # JSON data format
INPUT_PATHS='gs://' + BUCKET_NAME + '/' + FOLDER_RESULTS + '/batch_data/*'
OUTPUT_PATH='gs://' + BUCKET_NAME + '/' + FOLDER_RESULTS + '/batch_predictions'
PRED_LABELS="mode=batch,team=ourteam,phase=test,owner="+USER
SIGNATURE_NAME="serving_default"
JOBNAME_BATCH = 'tf_batch_predict_'+ USER + '_' + \
datetime.now(timezone('US/Pacific')).strftime("%m%d%y_%H%M")
print("INPUT_PATHS = ", INPUT_PATHS)
print("OUTPUT_PATH = ", OUTPUT_PATH)
print("Job Name = ", JOBNAME_BATCH)
# Only works with global endpoint
# Submit batch predict job
# Use MODEL_VERSION_BATCH not MODEL_VERSION_ONLINE
MODEL_VERSION = MODEL_VERSION_BATCH
!gcloud ai-platform jobs submit prediction $JOBNAME_BATCH \
--model=$MODEL_NAME \
--version=$MODEL_VERSION \
--input-paths=$INPUT_PATHS \
--output-path=$OUTPUT_PATH \
--data-format=$DATA_FORMAT \
--labels=$PRED_LABELS \
--signature-name=$SIGNATURE_NAME \
--region=$REGION
# check the batch prediction job status
! gcloud ai-platform jobs describe $JOBNAME_BATCH
# Print Errors
#response = ! gcloud logging read "resource.labels.job_id=$JOBNAME_BATCH severity>=ERROR"
#for i in range(0,len(response)):
# if 'message' in response[i]:
# print(response[i])
print("errors")
!gsutil cat $OUTPUT_PATH/prediction.errors_stats-00000-of-00001
print("batch prediction results")
!gsutil cat $OUTPUT_PATH/prediction.results-00000-of-00010
```
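Each line of a `prediction.results-*` shard is a JSON record keyed by the model's output layer name, which is why the online-prediction code above indexes responses with `model_layers[-1]`. A sketch of parsing one such line (the layer name below is made up):

```python
import json

# A fabricated result line; the real key depends on the output layer's name
line = '{"dense_2": [0.93, 0.07]}'
record = json.loads(line)
probs = next(iter(record.values()))
print(probs)  # → [0.93, 0.07]
```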
# Hertzian contact 1
## Assumptions
When two objects are brought into contact they initially touch along a line or at a single point. If any load is transmitted through the contact, the point or line grows into an area. The size of this area, the pressure distribution inside it, and the resulting stresses in each solid require a theory of contact to describe.
The first satisfactory theory for round bodies was presented by Hertz in 1880, who worked on it during his Christmas holiday at the age of twenty-three. He assumed that:
The bodies can be treated as semi-infinite elastic half-spaces from a stress perspective, since the contact area is normally much smaller than the bodies themselves; strains are also assumed to be small. This means the usual integral equations for surface contact apply.
The contact is also assumed to be frictionless, so the contact equations reduce to:
$\Psi_1=\int_S \int p(\epsilon,\eta)\ln(\rho+z)\ d\epsilon\ d\eta$ [1]
$\Psi=\int_S \int \frac{p(\epsilon,\eta)}{\rho}\ d\epsilon\ d\eta$ [2]
$u_x=-\frac{1+v}{2\pi E}\left((1-2v)\frac{\partial\Psi_1}{\partial x}+z\frac{\partial\Psi}{\partial x}\right) $ [3a]
$u_y=-\frac{1+v}{2\pi E}\left((1-2v)\frac{\partial\Psi_1}{\partial y}+z\frac{\partial\Psi}{\partial y}\right) $ [3b]
$u_z=-\frac{1+v}{2\pi E}\left(2(1-v)\Psi+z\frac{\partial\Psi}{\partial z}\right) $ [3c]
```
from IPython.display import Image
Image("figures/hertz_probelm reduction.png")
```
For the shape of the surfaces, it was assumed that they are smooth on both the micro scale and the macro scale. Smoothness on the micro scale means that small irregularities, which would cause discontinuous contact and local pressure variations, are ignored.
## Geometry
Assuming that the surfaces are smooth on the macro scale implies that the surface profiles are continuous up to their second derivative, so each surface can be described by a polynomial:
$z_1=A_1'x+B_1'y+A_1x^2+B_1y^2+C_1xy+...$ [4]
Higher-order terms are neglected. By choosing the origin at the point of contact and orienting the xy plane in line with the principal radii of the surface, the equation above reduces to:
$z_1=\frac{1}{2R'_1}x_1^2+\frac{1}{2R''_1}y_1^2$ [5]
Where $R'_1$ and $R''_1$ are the principal radii of the first surface at the origin.
### They are the maximum and minimum radii of curvature across all possible cross sections
The following widget allows you to change the principal radii of a surface and the angle between it and the coordinate axes:
```
from matplotlib import pyplot as plt
from matplotlib.lines import Line2D
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
plt.rcParams['figure.figsize'] = [15, 10]
@interact(r1=(-10,10),r2=(-10,10),theta=(0,np.pi),continuous_update=False)
def plot_surface(r1=5,r2=0,theta=0):
"""
Plots a surface given two principal radii and the angle relative to the coordinate axes
Parameters
----------
r1,r2 : float
principal radii
theta : float
Angle between the plane of the first principal radius and the coordinate axes
"""
X,Y=np.meshgrid(np.linspace(-1,1,20),np.linspace(-1,1,20))
X_dash=X*np.cos(theta)-Y*np.sin(theta)
Y_dash=Y*np.cos(theta)+X*np.sin(theta)
r1 = r1 if np.abs(r1)>=1 else float('inf')
r2 = r2 if np.abs(r2)>=1 else float('inf')
Z=0.5/r1*X_dash**2+0.5/r2*Y_dash**2
x1=np.linspace(-1.5,1.5)
y1=np.zeros_like(x1)
z1=0.5/r1*x1**2
y2=np.linspace(-1.5,1.5)
x2=np.zeros_like(y2)
z2=0.5/r2*y2**2
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z)
ax.plot((x1*np.cos(-theta)-y1*np.sin(-theta)),x1*np.sin(-theta)+y1*np.cos(-theta),z1)
ax.plot((x2*np.cos(-theta)-y2*np.sin(-theta)),x2*np.sin(-theta)+y2*np.cos(-theta),z2)
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
ax.set_zlim(-0.5, 0.5)
```
A similar equation defines the second surface:
$z_2=-\left(\frac{1}{2R'_2}x_2^2+\frac{1}{2R''_2}y_2^2\right)$ [6]
The separation between these surfaces is then given as $h=z_1-z_2$.
By writing equation 4 and its counterpart on common axes, it is clear that the gap between the surfaces can be written as:
$h=Ax^2+By^2+Cxy$ [7]
By a suitable choice of orientation of the xy plane, the C term can be made to equal 0. As such, whenever two surfaces of parabolic shape are brought into contact (with no load), the gap between them can be described by a single parabola:
$h=Ax^2+By^2=\frac{1}{2R'_{gap}}x^2+\frac{1}{2R''_{gap}}y^2$ [8]
#### The values $R'_{gap}$ and $R''_{gap}$ are called the principal radii of relative curvature.
These relate to the principal radii of each of the bodies through the equations below:
$(A+B)=\frac{1}{2}\left(\frac{1}{R'_{gap}}+\frac{1}{R''_{gap}}\right)=\frac{1}{2}\left(\frac{1}{R'_1}+\frac{1}{R''_1}+\frac{1}{R'_2}+\frac{1}{R''_2}\right)$
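The relation above is straightforward to evaluate numerically; a flat surface simply contributes zero curvature (infinite radius):

```python
def relative_curvature_sum(r1p, r1pp, r2p, r2pp):
    """A + B = (1/2)(1/R'1 + 1/R''1 + 1/R'2 + 1/R''2)."""
    return 0.5 * (1 / r1p + 1 / r1pp + 1 / r2p + 1 / r2pp)

# Sphere of radius 2 pressed onto a flat plane (infinite radii)
print(relative_curvature_sum(2.0, 2.0, float("inf"), float("inf")))  # → 0.5
```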
The next widget shows the shape of the gap between two bodies in contact, allowing you to set the principal radii of each body and the angle between them:
```
@interact(top_r1=(-10,10),top_r2=(-10,10),
bottom_r1=(-10,10),bottom_r2=(-10,10),
theta=(0,np.pi),continuous_update=False)
def plot_two_surfaces(top_r1=2,top_r2=5,bottom_r1=4,bottom_r2=-9,theta=0.3):
"""
Plots 2 surfaces and the gap between them
Parameters
----------
top_r1,top_r2,bottom_r1,bottom_r2 : float
The principal radii of the top and bottom surface
theta : float
The angle between the first principal radii of the surfaces
"""
X,Y=np.meshgrid(np.linspace(-1,1,20),np.linspace(-1,1,20))
X_dash=X*np.cos(theta)-Y*np.sin(theta)
Y_dash=Y*np.cos(theta)+X*np.sin(theta)
top_r1 = top_r1 if np.abs(top_r1)>=1 else float('inf')
top_r2 = top_r2 if np.abs(top_r2)>=1 else float('inf')
bottom_r1 = bottom_r1 if np.abs(bottom_r1)>=1 else float('inf')
bottom_r2 = bottom_r2 if np.abs(bottom_r2)>=1 else float('inf')
Z_top=0.5/top_r1*X_dash**2+0.5/top_r2*Y_dash**2
Z_bottom=-1*(0.5/bottom_r1*X**2+0.5/bottom_r2*Y**2)
fig = plt.figure()
ax = fig.add_subplot(121, projection='3d')
ax.set_title("Surfaces")
ax2 = fig.add_subplot(122)
ax2.set_title("Gap")
ax2.axis("equal")
ax2.set_adjustable("box")
ax2.set_xlim([-1,1])
ax2.set_ylim([-1,1])
ax.plot_surface(X, Y, Z_top)
ax.plot_surface(X, Y, Z_bottom)
if top_r1==top_r2==bottom_r1==bottom_r2==float('inf'):
ax2.text(s='Flat surfaces, no gap', x=-0.6, y=-0.1)
else:
ax2.contour(X,Y,Z_top-Z_bottom)
div=((1/top_r2)-(1/top_r1))
if div==0:
lam=float('inf')
else:
lam=((1/bottom_r2)-(1/bottom_r1))/div
beta=-1*np.arctan((np.sin(2*theta))/(lam+np.cos(2*theta)))/2
if beta<=(np.pi/4):
x=1
y=np.tan(beta)
else:
x=np.tan(beta)
y=1
ax2.add_line(Line2D([x,-1*x],[y,-1*y]))
beta-=np.pi/2
if beta<=(np.pi/4):
x=1
y=np.tan(beta)
else:
x=np.tan(beta)
y=1
ax2.add_line(Line2D([x,-1*x],[y,-1*y]))
```
From the form of equation 8 it is clear that the contours of constant gap (the contours plotted by the widget) are elliptical in shape, with axes in the ratio $(R'_{gap}/R''_{gap})^{1/2}$. In the special case of equal principal radii for each body (spherical contact) the contours of separation will be circular. From the symmetry of this problem it is clear that this will remain true when a load is applied.
Additionally, when two cylinders are brought into contact with their axes parallel, the contours of separation are straight lines parallel to those axes. When loaded, the cylinders make contact along a narrow strip parallel to their axes.
We might expect, then, that in the general case the contact area under load will follow the same elliptical shape as the contours of separation. This is in fact the case, but the proof will have to wait for the next section.
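As a quick check of the axis-ratio claim, for $R'_{gap}=4$ and $R''_{gap}=1$ the contour ellipse is twice as long in one direction as the other:

```python
import math

r_dash, r_double_dash = 4.0, 1.0
axis_ratio = math.sqrt(r_dash / r_double_dash)
print(axis_ratio)  # → 2.0
```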
```
import matplotlib.pyplot as plt
import numpy
from numpy import genfromtxt
import csv
import pandas as pd
from operator import itemgetter
from datetime import*
from openpyxl import load_workbook,Workbook
from openpyxl.styles import PatternFill, Border, Side, Alignment, Protection, Font
import openpyxl
from win32com import client
print('Libraries Imported Successfully......')
#######################################################################
def nearest(items, pivot):
return min(items, key=lambda x: abs(x - pivot))
#######################################################################
def add_one_month(t):
"""Return a `datetime.date` or `datetime.datetime` (as given) that is
one month later.
Note that the resultant day of the month might change if the following
month has fewer days:
>>> add_one_month(datetime.date(2010, 1, 31))
datetime.date(2010, 2, 28)
"""
import datetime
one_day = datetime.timedelta(days=1)
one_month_later = t + one_day
while one_month_later.month == t.month: # advance to start of next month
one_month_later += one_day
target_month = one_month_later.month
while one_month_later.day < t.day: # advance to appropriate day
one_month_later += one_day
if one_month_later.month != target_month: # gone too far
one_month_later -= one_day
break
return one_month_later
#######################################################################
def subtract_one_month(t):
"""Return a `datetime.date` or `datetime.datetime` (as given) that is
one month earlier.
Note that the resultant day of the month might change if the following
month has fewer days:
>>> subtract_one_month(datetime.date(2010, 3, 31))
datetime.date(2010, 2, 28)
"""
import datetime
one_day = datetime.timedelta(days=1)
one_month_earlier = t - one_day
while one_month_earlier.month == t.month or one_month_earlier.day > t.day:
one_month_earlier -= one_day
return one_month_earlier
#######################################################################
print('Custom Functions Loaded into the Current Path')
values=[]
dates=[]
combine=[]
with open('hyatt.csv', 'r') as csvFile:
reader = csv.reader(csvFile)
for row in reader:
values.append(row[1])
dates.append(row[0])  # column 0 is the timestamp; column 1 is the meter reading
combine.append(row)
csvFile.close()
#print(values)
#print(dates)
#print(combine)
print('Data Loaded into the Program Successfully')
print('The Number of Values are: ',len(values))
print('The Number of Dates are:',len(dates))
combine = sorted(combine, key=itemgetter(0))
"""for i in combine:
print(i)"""
for i in combine:
m2=i[0]
m2=datetime.strptime(m2,'%d/%m/%y %I:%M %p')
i[0]=m2
combine = sorted(combine, key=itemgetter(0))
"""for i in combine:
print(i)"""
ref_min=combine[0][0].date()
min_time=datetime.strptime('0000','%H%M').time()
ref_min=datetime.combine(ref_min, min_time)
print(type(ref_min))
ref_max=combine[-1][0].date()
max_time=datetime.strptime('2359','%H%M').time()
ref_max=datetime.combine(ref_max, max_time)
print(ref_max)
ref_max = add_one_month(ref_max)
dates=[]
for i in combine:
dates.append(i[0])
i=ref_min
indices=[]
while i<ref_max:
k=nearest(dates,i)
print('The corresponding Lowest time related to this reading is: ',k)
index=dates.index(k)
print(index)
indices.append(index)
i = add_one_month(i)
print(i)
print('The Number of Indices are: ',len(indices))
k=ref_min.date()
consump=[]
lower=[]
upper=[]
for i in range(len(indices)-1):
r_min=float(values[indices[i]])
lower.append(r_min)
r_up=float(values[indices[i+1]])
upper.append(r_up)
consumption=r_up-r_min
consump.append(consumption)
print('The Consumption on ',k.strftime('%d-%m-%Y'),' is : ',consumption)
k = add_one_month(k)
rate=float(input('Enter the Rate per KWH consumed for Cost Calculation: '))
k=ref_min.date()
cost=[]
for i in consump:
r=float(i)*rate
print('The Cost of Electricity for ',k.strftime('%d-%m-%Y'),' is :',r)
cost.append(r)
k = add_one_month(k)
print('\n====================Final Output====================\n')
k=ref_min.date()
date_list=[]
write=[]
write2=[]
cust=input('Please Enter The Customer Name: ')
row=['Customer Name: ',cust]
write.append(row)
row=['Address Line 1: ',"3, National Hwy 9, Premnagar, "]
write.append(row)
row=['Address Line 2: ',"Ashok Nagar, Pune, Maharashtra 411016"]
write.append(row)
row=['']
write.append(row)
row=['Electricity Bill Invoice']
write.append(row)
row=['From: ',ref_min.date()]
write.append(row)
row=['To: ',subtract_one_month(ref_max.date())+timedelta(days=1)]
write.append(row)
row=['']
write.append(row)
row=['Reading Date','Previous Reading','Present Reading','Consumption','Cost']
write.append(row)
for i in range(len(indices)-1):
row=[]
row2=[]
print('--------------------------------------')
print('Date:\t\t',k)
row.append(k)
row2.append(k)
date_list.append(k)
k = add_one_month(k)
print('Lower Reading:\t',lower[i])
row.append(lower[i])
row2.append(lower[i])
print('Upper Reading:\t',upper[i])
row.append(upper[i])
row2.append(upper[i])
print('Consumption:\t',consump[i])
row.append(consump[i])
row2.append(consump[i])
print('Cost:\t\t',cost[i])
row.append(cost[i])
row2.append(cost[i])
write.append(row)
write2.append(row2)
plt.plot(date_list,consump)
plt.savefig('graph.png')  # save before show(); show() clears the current figure
plt.show()
row=['Total Consumption: ', sum(consump)]
write.append(row)
row=['Cost Per Unit: ',rate]
write.append(row)
row=['Total Bill Ammount: ',sum(cost)]
write.append(row)
with open('output.csv', 'w') as csvFile:
for row in write:
writer = csv.writer(csvFile,lineterminator='\n')
writer.writerow(row)
csvFile.close()
print('CSV FILE Generated as Output.csv')
###########################################################################
wb=load_workbook('Book1.xlsx')
ws1=wb['Sheet1']  # get_sheet_by_name() is deprecated in openpyxl
# shs is list
ws1['B2']=cust
ws1['B3']='3, National Hwy 9, Premnagar, '
ws1['B4']='Ashok Nagar, Pune, Maharashtra 411016'
ws1['B6']=ref_min.date()
ws1['E6']=subtract_one_month(ref_max.date())+timedelta(days=1)
row=9
column=1
for r in write2:
column=1
for i in r:
ws1.cell(row,column).value=i
column+=1
row+=1
"""
row+=1
column=1
ws1.cell(row,column).value='Total Consumption: '
ws1.cell(row,column).font=Font(bold=True)
column+=1
ws1.cell(row,column).value=sum(consump)
column-=1
row+=1
ws1.cell(row,column).value='Total Cost: '
ws1.cell(row,column).font=Font(bold=True)
column+=1
ws1.cell(row,column).value=sum(cost)
column-=1"""
thick_border_right=Border(right=Side(style='thick'))
ws1['E2'].border=thick_border_right
ws1['E3'].border=thick_border_right
ws1['E4'].border=thick_border_right
thick_border = Border(left=Side(style='thick'), right=Side(style='thick'), top=Side(style='thick'), bottom=Side(style='thick'))
ws1['A15']='Total Consumption'
ws1['A15'].font=Font(bold=True)
ws1['A15'].border=thick_border
ws1['B15'].border=thick_border
ws1['C15'].border=thick_border
ws1['D15'].border=thick_border
ws1['A16']='Total Cost'
ws1['A16'].font=Font(bold=True)
ws1['A16'].border=thick_border
ws1['B16'].border=thick_border
ws1['C16'].border=thick_border
ws1['D16'].border=thick_border
ws1['E15']=sum(consump)
ws1['E16']=sum(cost)
img = openpyxl.drawing.image.Image('logo.jpg')
img.anchor='A1'
ws1.add_image(img)
wb.save('Book1.xlsx')
print('Excel Workbook Generated as Book1.xlsx')
#############################################################################
xlApp = client.Dispatch("Excel.Application")
books = xlApp.Workbooks.Open(r'E:\Internship\Siemens\EMAPP\Book1.xlsx')  # raw string so backslashes are not treated as escapes
ws = books.Worksheets[0]
ws.Visible = 1
ws.ExportAsFixedFormat(0, r'E:\Internship\Siemens\EMAPP\trial.pdf')  # raw string: '\t' would otherwise become a tab character
```
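The `nearest` helper at the top of this script is the piece doing the date matching; a standalone sketch with made-up reading timestamps shows its behaviour:

```python
import datetime as dt

def nearest(items, pivot):
    # same one-liner as in the script above
    return min(items, key=lambda x: abs(x - pivot))

# Made-up meter-reading timestamps around a month boundary
readings = [dt.datetime(2020, 1, 1, 0, 5),
            dt.datetime(2020, 1, 31, 23, 50),
            dt.datetime(2020, 2, 1, 0, 12)]
target = dt.datetime(2020, 2, 1)  # start of the billing month
print(nearest(readings, target))  # 2020-01-31 23:50:00 (10 minutes away)
```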
<center>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# Loops in Python
Estimated time needed: **20** minutes
## Objectives
After completing this lab you will be able to:
- work with the loop statements in Python, including for-loop and while-loop.
<h1>Loops in Python</h1>
<p><strong>Welcome!</strong> This notebook will teach you about the loops in the Python Programming Language. By the end of this lab, you'll know how to use the loop statements in Python, including for loop, and while loop.</p>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li>
<a href="#loop">Loops</a>
<ul>
<li><a href="#range">Range</a></li>
<li><a href="#for">What is <code>for</code> loop?</a></li>
<li><a href="#while">What is <code>while</code> loop?</a></li>
</ul>
</li>
<li>
<a href="#quiz">Quiz on Loops</a>
</li>
</ul>
</div>
<hr>
<h2 id="loop">Loops</h2>
<h3 id="range">Range</h3>
Sometimes, you might want to repeat a given operation many times. Repeated executions like this are performed by <b>loops</b>. We will look at two types of loops, <code>for</code> loops and <code>while</code> loops.
Before we discuss loops, let's discuss the <code>range</code> object. It is helpful to think of the range object as an ordered list. For now, let's look at the simplest case. If we would like to generate an object that contains elements ordered from 0 to 2, we simply use the following command:
```
# Use the range
range(3)
```
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%203/images/range.PNG" width="300" />
**_NOTE: While in Python 2.x it returned a list as seen in video lessons, in 3.x it returns a range object._**
<h3 id="for">What is <code>for</code> loop?</h3>
The <code>for</code> loop enables you to execute a code block multiple times. For example, you would use this if you would like to print out every element in a list.
Let's try to use a <code>for</code> loop to print all the years presented in the list <code>dates</code>:
This can be done as follows:
```
# For loop example
dates = [1982,1980,1973]
N = len(dates)
for i in range(N):
print(dates[i])
```
The indented code is executed <code>N</code> times, and the value of <code>i</code> increases by 1 on each iteration. The statement executed is to <code>print</code> out the value in the list at index <code>i</code> as shown here:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/LoopsForRange.gif" width="800" />
In this example we can print out a sequence of numbers from 0 to 7:
```
# Example of for loop
for i in range(0, 8):
print(i)
```
In Python we can directly access the elements in the list as follows:
```
# Example of for loop, loop through list
for year in dates:
print(year)
```
For each iteration, the value of the variable <code>year</code> behaves like the value of <code>dates[i]</code> in the first example:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/LoopsForList.gif" width="800">
We can change the elements in a list:
```
# Use for loop to change the elements in list
squares = ['red', 'yellow', 'green', 'purple', 'blue']
for i in range(0, 5):
print("Before square ", i, 'is', squares[i])
squares[i] = 'white'
print("After square ", i, 'is', squares[i])
```
We can access the index and the elements of a list as follows:
```
# Loop through the list and iterate on both index and element value
squares=['red', 'yellow', 'green', 'purple', 'blue']
for i, square in enumerate(squares):
print(i, square)
```
<h3 id="while">What is <code>while</code> loop?</h3>
As you can see, the <code>for</code> loop is used for a controlled flow of repetition. However, what if we don't know when we want to stop the loop? What if we want to keep executing a code block until a certain condition is met? The <code>while</code> loop exists as a tool for repeated execution based on a condition. The code block will keep being executed until the given logical condition returns a **False** boolean value.
Let’s say we would like to iterate through list <code>dates</code> and stop at the year 1973, then print out the number of iterations. This can be done with the following block of code:
```
# While Loop Example
dates = [1982, 1980, 1973, 2000]
i = 0
year = dates[0]
while(year != 1973):
print(year)
i = i + 1
year = dates[i]
print("It took ", i ,"repetitions to get out of loop.")
```
A <code>while</code> loop keeps iterating until its condition is no longer met, as shown in the following figure:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/LoopsWhile.gif" width="650" />
<hr>
<h2 id="quiz">Quiz on Loops</h2>
Write a <code>for</code> loop that prints out all the elements between <b>-5</b> and <b>5</b> using the range function.
```
# Write your code below and press Shift+Enter to execute
for i in range(-5,6):
print(i)
```
<details><summary>Click here for the solution</summary>
```python
for i in range(-5, 6):
print(i)
```
</details>
Print the elements of the following list:
<code>Genres=[ 'rock', 'R&B', 'Soundtrack', 'R&B', 'soul', 'pop']</code>
Make sure you follow Python conventions.
```
# Write your code below and press Shift+Enter to execute
Genres=[ 'rock', 'R&B', 'Soundtrack', 'R&B', 'soul', 'pop']
for genre in Genres:
print(genre)
```
<details><summary>Click here for the solution</summary>
```python
Genres = ['rock', 'R&B', 'Soundtrack', 'R&B', 'soul', 'pop']
for Genre in Genres:
print(Genre)
```
</details>
<hr>
Write a for loop that prints out the following list: <code>squares=['red', 'yellow', 'green', 'purple', 'blue']</code>
```
# Write your code below and press Shift+Enter to execute
squares=['red', 'yellow', 'green', 'purple', 'blue']
for square in squares:
print(square)
```
<details><summary>Click here for the solution</summary>
```python
squares=['red', 'yellow', 'green', 'purple', 'blue']
for square in squares:
print(square)
```
</details>
<hr>
Write a while loop to display the values of the Rating of an album playlist stored in the list <code>PlayListRatings</code>. If the score is less than 6, exit the loop. The list <code>PlayListRatings</code> is given by: <code>PlayListRatings = [10, 9.5, 10, 8, 7.5, 5, 10, 10]</code>
```
# Write your code below and press Shift+Enter to execute
PlayListRatings = [10, 9.5, 10, 8, 7.5, 5, 10, 10]
i = 1
Rating = PlayListRatings[0]
while(i < len(PlayListRatings) and Rating >= 6):
print(Rating)
Rating = PlayListRatings[i]
i = i + 1
```
<details><summary>Click here for the solution</summary>
```python
PlayListRatings = [10, 9.5, 10, 8, 7.5, 5, 10, 10]
i = 1
Rating = PlayListRatings[0]
while(i < len(PlayListRatings) and Rating >= 6):
print(Rating)
Rating = PlayListRatings[i]
i = i + 1
```
</details>
<hr>
Write a while loop to copy the string <code>'orange'</code> from the list <code>squares</code> to the list <code>new_squares</code>. Stop and exit the loop if a value in the list is not <code>'orange'</code>:
```
# Write your code below and press Shift+Enter to execute
squares = ['orange', 'orange', 'purple', 'blue ', 'orange']
new_squares = []
i = 0
while(i < len(squares) and squares[i] == 'orange'):
new_squares.append(squares[i])
i = i + 1
print (new_squares)
```
<details><summary>Click here for the solution</summary>
```python
squares = ['orange', 'orange', 'purple', 'blue ', 'orange']
new_squares = []
i = 0
while(i < len(squares) and squares[i] == 'orange'):
new_squares.append(squares[i])
i = i + 1
print (new_squares)
```
</details>
<hr>
<h2>The last exercise!</h2>
<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.</p>
<hr>
## Author
<a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a>
## Other contributors
<a href="https://www.linkedin.com/in/jiahui-mavis-zhou-a4537814a" target="_blank">Mavis Zhou</a>
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ---------- | ---------------------------------- |
| 2020-08-26 | 2.0 | Lavanya | Moved lab to course repo in GitLab |
<hr/>
<h3 align="center">© IBM Corporation 2020. All rights reserved.</h3>
```
import json
import requests
import numpy as np
import pandas as pd
from requests.auth import HTTPBasicAuth
USERNAME = 'damminhtien'
PASSWORD = '**********'
TARGET_USER = 'damminhtien'
authentication = HTTPBasicAuth(USERNAME, PASSWORD)
import uuid
from IPython.display import display_javascript, display_html, display
class printJSON(object):
def __init__(self, json_data):
if isinstance(json_data, dict):
self.json_str = json.dumps(json_data)
else:
self.json_str = json_data
self.uuid = str(uuid.uuid4())
def _ipython_display_(self):
display_html('<div id="{}" style="height: 100%; width:100%; color:red; background: #2f0743;"></div>'.format(self.uuid), raw=True)
display_javascript("""
require(["https://rawgit.com/caldwell/renderjson/master/renderjson.js"], function() {
document.getElementById('%s').appendChild(renderjson(%s))
});
""" % (self.uuid, self.json_str), raw=True)
user_data = requests.get('https://api.github.com/users/' + TARGET_USER,
auth = authentication)
user_data = user_data.json()
printJSON(user_data)
from PIL import Image
from io import BytesIO
from IPython.display import display, HTML
import tabulate
response = requests.get(user_data['avatar_url'])
ava_img = Image.open(BytesIO(response.content))
display(ava_img)
table = [["Name:", user_data['name']],
["Company:", user_data['company']],
["Bio:", user_data['bio']],
["Public_repos:", user_data['public_repos']],
["Number followers:", user_data['followers']],
["Number following users:", user_data['following']],
["Date joined:", user_data['created_at']]]
display(HTML(tabulate.tabulate(table, tablefmt='html')))
url = user_data['repos_url']
page_no = 1
repos_data = []
while (True):
response = requests.get(url, auth = authentication)
response = response.json()
repos_data = repos_data + response
repos_fetched = len(response)
if (repos_fetched == 30):
page_no = page_no + 1
url = str(user_data['repos_url']) + '?page=' + str(page_no)
else:
break
printJSON(repos_data[0])
_LANGUAGE_IGNORE = ['HTML', 'CSS', 'Jupyter Notebook']
LANGUAGE_USED = []
TIMES_USED = []
STAR_COUNT = []
for rd in repos_data:
if rd['fork']: continue
response = requests.get(rd['languages_url'], auth = authentication)
response = response.json()
language_rd = list(response.keys())
for l in language_rd:
if l in _LANGUAGE_IGNORE: continue
if l not in LANGUAGE_USED:
LANGUAGE_USED.append(l)
TIMES_USED.append(response[l])
else:
TIMES_USED[LANGUAGE_USED.index(l)] += response[l]
language_data = {'Languages': LANGUAGE_USED, 'Times': TIMES_USED}
language_df = pd.DataFrame(language_data).sort_values(by=['Times'])
language_df
import plotly.express as px
fig = px.bar(language_df, x='Languages', y='Times',
color='Languages',
labels={'pop':'Statistic languages were used by user'}, height=400)
fig.show()
repos_information = []
for i, repo in enumerate(repos_data):
data = []
data.append(repo['id'])
data.append(repo['name'])
data.append(repo['description'])
data.append(repo['created_at'])
data.append(repo['updated_at'])
data.append(repo['owner']['login'])
data.append(repo['license']['name'] if repo['license'] != None else None)
data.append(repo['has_wiki'])
data.append(repo['fork'])
data.append(repo['forks_count'])
data.append(repo['open_issues_count'])
data.append(repo['stargazers_count'])
data.append(repo['watchers_count'])
data.append(repo['url'])
data.append(repo['commits_url'].split("{")[0])
data.append(repo['url'] + '/languages')
repos_information.append(data)
repos_df = pd.DataFrame(repos_information, columns = ['Id', 'Name', 'Description', 'Created on', 'Updated on',
'Owner', 'License', 'Includes wiki', 'Is Fork','Forks count',
'Issues count', 'Stars count', 'Watchers count',
'Repo URL', 'Commits URL', 'Languages URL'])
repos_df
repos_df.describe()
star_fig = px.bar(repos_df[repos_df['Stars count']>0].sort_values(by=['Stars count']), x='Name', y='Stars count',
color='Forks count', hover_data=['Description', 'License', 'Owner'],
labels={'pop':'Statistic languages were used by user'})
star_fig.show()
url = repos_df.loc[23, 'Commits URL']
response = requests.get(url, auth = authentication)
response = response.json()
printJSON(response[0])
commits_information = []
for i in range(repos_df.shape[0]):
if repos_df.loc[i, 'Is Fork']: continue
url = repos_df.loc[i, 'Commits URL']
page_no = 1
while (True):
try:
response = requests.get(url, auth = authentication)
response = response.json()
for commit in response:
commit_data = []
commit_data.append(repos_df.loc[i, 'Name'])
commit_data.append(repos_df.loc[i, 'Id'])
commit_data.append(commit['commit']['committer']['date'])
commit_data.append(commit['commit']['message'])
commits_information.append(commit_data)
if (len(response) == 30):
page_no = page_no + 1
url = repos_df.loc[i, 'Commits URL'] + '?page=' + str(page_no)
else:
break
except:
print(url + ' fetch failed')
break
commits_df = pd.DataFrame(commits_information, columns = ['Name', 'Repo Id', 'Date', 'Message'])
commits_df
print("Two most common commit messages: {}".format(' and '.join(commits_df['Message'].value_counts().index[:2])))
commit_per_repo_fig = px.bar(commits_df.groupby('Name').count().reset_index(level=['Name']), x='Name', y='Message',
color='Name',
labels={'pop':'Commit per repositories'})
commit_per_repo_fig.show()
commits_df['Year'] = commits_df['Date'].apply(lambda x: x.split('-')[0])
yearly_stats = commits_df.groupby('Year').count()['Repo Id']
yearly_stats_df = yearly_stats.to_frame().reset_index(level=['Year'])
yearly_stats_df
yearly_stats_fig = px.bar(yearly_stats_df, x='Year', y='Repo Id',
color='Year',
labels={'pop':'Commit per Year'})
yearly_stats_fig.show()
commits_df['Month'] = commits_df['Date'].apply(lambda x: x.split('-')[1])
def commits_in_month_arr(year):
n_commits = [0,0,0,0,0,0,0,0,0,0,0,0,0]
commits_in_month_df = commits_df[commits_df['Year'] == str(year)].groupby('Month').count().reset_index(level=['Month']).drop(['Name', 'Date', 'Message', 'Year'], axis=1)
for i, m in enumerate(commits_in_month_df['Month']):
n_commits[int(m)] = n_commits[int(m)] + commits_in_month_df['Repo Id'][i]
return n_commits
import plotly.graph_objects as go
MONTHS = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
# Create traces
fig = go.Figure()
fig.add_trace(go.Scatter(x=MONTHS, y=commits_in_month_arr(2017),
mode='lines+markers',
name='2017'))
fig.add_trace(go.Scatter(x=MONTHS, y=commits_in_month_arr(2018),
mode='lines+markers',
name='2018'))
fig.add_trace(go.Scatter(x=MONTHS, y=commits_in_month_arr(2019),
mode='lines+markers',
name='2019'))
fig.show()
```
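The page-by-page fetching pattern used twice above (for repos, then for commits) can be factored into one generator. A hedged sketch — the fetch function is injected so the logic can be checked without touching the GitHub API:

```python
def fetch_all_pages(fetch, base_url, page_size=30):
    """Yield items from every page; fetch(url) must return one page as a list."""
    page_no = 1
    url = base_url
    while True:
        page = fetch(url)
        yield from page
        if len(page) < page_size:  # a short page means we reached the end
            break
        page_no += 1
        url = base_url + '?page=' + str(page_no)

# Fake fetcher standing in for requests.get(...).json():
# two full pages of 30, then a short page of 5
def fake_fetch(url):
    page = 1 if '?page=' not in url else int(url.split('?page=')[1])
    return list(range(30)) if page <= 2 else list(range(5))

items = list(fetch_all_pages(fake_fetch, 'https://api.example/repos'))
print(len(items))  # 65
```

The hard-coded 30 mirrors the script's assumption of GitHub's default page size; the real API also accepts a `per_page` query parameter.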
<a href="https://colab.research.google.com/github/trucabrac/blob_Jan2022/blob/main/Blob_batch_processing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import cv2
import os
import glob
from skimage.filters import gaussian
from skimage import img_as_ubyte
import random
from google.colab.patches import cv2_imshow
import csv
```
#1. Read images and store them in an array
```
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/MyDrive/blob-tl/
%cd tl-selec/tl10/
#%cd ../tl9/
%ls
#################################################
#Capture all images into an array and then iterate through each image
#Normally used for machine learning workflows.
images_list = []
images_names = []
SIZE = 512
path = "*.*"
#pathOut = "test-clas/"
pathOut = "../tl-contour/" #folder to create beforehand
#path = "tl4-proc/*.*"
#pathOut = "tl4-clas/"
#label = 'tl4-'
#First create a stack array of all images
for file in glob.glob(path):
print(file) #just stop here to see all file names printed
img0= cv2.imread(file, 0) #now, we can read each file since we have the full path
#img = cv2.cvtColor(imgIn, cv2.IMREAD_GRAYSCALE)
#img = cv2.resize(img, (SIZE, SIZE))
images_list.append(img0)
images_names.append(file)
images_list = np.array(images_list)
```
#2. Import Keras ImageNet models
```
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input, decode_predictions
from tensorflow.keras.applications.xception import Xception
from tensorflow.keras.applications.xception import preprocess_input, decode_predictions
from tensorflow.keras.applications.mobilenet import MobileNet
from tensorflow.keras.applications.mobilenet import preprocess_input, decode_predictions
```
#3. Define functions
```
font = cv2.FONT_HERSHEY_SIMPLEX
def getContours(im,word):
(hc, wc) = im.shape[:2]
x1 = wc/2
x2=wc/2
y1=hc/2
y2=hc/2
contours,hierarchy = cv2.findContours(im, cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)
for cnt in contours:
area= cv2.contourArea(cnt)
if area>0:
imgMid=cv2.drawContours(imgContour,cnt,-1,(0,255,0),3)
#peri = cv2.arcLength(cnt,True)
#approx = cv2.approxPolyDP(cnt,)
#draw bounding rectangles
x,y,w,h = cv2.boundingRect(cnt)
if x < x1: x1 = x-20
if x+w > x2: x2 = x+w+20
if y < y1: y1 = y-20
if y+h > y2: y2 = y+h+20
# cast to int: the box bounds start as floats (wc/2) and cv2 requires integer pixel coords
imgFinal = cv2.rectangle(imgMid,(int(x1),int(y1)),(int(x2),int(y2)),(255,0,0),6)
cv2.putText(imgFinal, word, (int(x1), int(y1)-20), font, 2, (255,0,0), 9)
font = cv2.FONT_HERSHEY_SIMPLEX
#alternative without bounding box and word
def getContours2(im):
(hc, wc) = im.shape[:2]
x1 = wc/2
x2=wc/2
y1=hc/2
y2=hc/2
contours,hierarchy = cv2.findContours(im, cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)
for cnt in contours:
area= cv2.contourArea(cnt)
if area>0:
imgMid=cv2.drawContours(imgContour,cnt,-1,(0,255,0),3)
#peri = cv2.arcLength(cnt,True)
#approx = cv2.approxPolyDP(cnt,)
imgFinal = imgMid
#run just once when creating the csv
f = open('/content/drive/MyDrive/blob-tl/blobtl-clas-prob-top3.csv', 'w')
# create the csv writer
writer = csv.writer(f)
# write the header
#header = ['imgpath', 'class', 'probability']
writer.writerow(['imgpath', 'class', 'probability'])  # three columns, matching the rows written below
f.close()
# open the csv file in the write mode to store classifications
f = open('/content/drive/MyDrive/blob-tl/blobtl-clas-prob-top3.csv', 'a')
# create the csv writer
writer = csv.writer(f)
```
#4. Process each image -> classif + text
```
#Process each image in the array
img_number = 0
for image in range(images_list.shape[0]):
inImg = images_list[image,:,:] #Grey images. For color add another dim.
#smoothed_image = img_as_ubyte(gaussian(inImg, sigma=5, mode='constant', cval=0.0))
#preprocess img
dim = (224, 224)
sizImg = cv2.resize(inImg, dim, interpolation=cv2.INTER_LINEAR)
#cv2_imshow(sizImg)
x = cv2.cvtColor(sizImg, cv2.COLOR_GRAY2RGB)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
#Classify with Keras models
#call randomly one of the models
nModel = random.randint(0,2)
if nModel==0:
model_vgg16 = VGG16(weights='imagenet')
preds = model_vgg16.predict(x)
if nModel==1:
model_rn50 = ResNet50(weights='imagenet')
preds = model_rn50.predict(x)
if nModel==2:
model_mobilenet = MobileNet(weights='imagenet')
preds = model_mobilenet.predict(x)
# decode the results into a list of tuples (class, description, probability)
# and choose 1 result randomly from the top20
pred_top = decode_predictions(preds, top=20)[0]
#print('Predicted:', pred_top)
topn = random.randint(0,19)
#print(topn)
pred_dis = pred_top[topn][1]
#print('Predicted:', pred_dis)
#find contours and draw bounding box in image
imgCanny = cv2.Canny(inImg,500,500)
#cv2_imshow(imgCanny)
imgContour = cv2.cvtColor(inImg, cv2.COLOR_GRAY2RGB)
getContours(imgCanny,pred_dis)
#cv2_imshow(imgContour)
imgPath = pathOut+label+str(img_number)  # NOTE: 'label' must be defined first (see the commented-out assignment in step 1)
#save image with contour
cv2.imwrite(pathOut+label+str(img_number)+".jpg", imgContour)
#store img ref and class in csv
#writer.writerow([imgPath, pred_dis])
#increment
img_number +=1
# Process images contours only
#Process each image in the array
img_number = 0
label = 'test'
for image in range(images_list.shape[0]):
inImg = images_list[image,:,:] #Grey images. For color add another dim.
#smoothed_image = img_as_ubyte(gaussian(inImg, sigma=5, mode='constant', cval=0.0))
#find contours and draw them
imgCanny = cv2.Canny(inImg,300,300)
#cv2_imshow(imgCanny)
imgContour = cv2.cvtColor(inImg, cv2.COLOR_GRAY2RGB)
getContours2(imgCanny)
#cv2_imshow(imgContour)
#imgPath = pathOut+images_names[image]+str(img_number)
#save image with contour
#cv2.imwrite(pathOut+label+str(img_number)+".jpg", imgContour)
cv2.imwrite(pathOut+images_names[image], imgContour)
#store img ref and class in csv
#writer.writerow([imgPath, pred_dis])
#increment
img_number +=1
#Process Keras classifications only
img_number = 0
for image in range(images_list.shape[0]):
inImg = images_list[image,:,:] #Grey images. For color add another dim.
#smoothed_image = img_as_ubyte(gaussian(inImg, sigma=5, mode='constant', cval=0.0))
#preprocess img
dim = (224, 224)
sizImg = cv2.resize(inImg, dim, interpolation=cv2.INTER_LINEAR)
#cv2_imshow(sizImg)
x = cv2.cvtColor(sizImg, cv2.COLOR_GRAY2RGB)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
#Classify with Keras models
#call randomly one of the models
nModel = random.randint(0,2)
if nModel==0:
model_vgg16 = VGG16(weights='imagenet')
preds = model_vgg16.predict(x)
if nModel==1:
model_rn50 = ResNet50(weights='imagenet')
preds = model_rn50.predict(x)
if nModel==2:
model_mobilenet = MobileNet(weights='imagenet')
preds = model_mobilenet.predict(x)
# decode the results into a list of tuples (class, description, probability)
# and choose 1 result randomly from the top20
pred_top = decode_predictions(preds, top=3)[0]
#print('Predicted:', pred_top)
topn = random.randint(0,2)
#print(topn)
pred_dis = pred_top[topn][1]
pred_p = pred_top[topn][2] * 100
#print('Predicted:', pred_dis)
imgPath = pathOut+label+str(img_number)
proba = str(pred_p)+'%'
#save image with contour
#cv2.imwrite(pathOut+images_names[image]+".jpg", imgContour)
#store img ref and class in csv
writer.writerow([images_names[image], pred_dis, proba])
#increment
img_number +=1
# close the file
f.close()
```
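The random top-k pick in step 4 is independent of the models themselves; a sketch with a stand-in for the `decode_predictions` output (the labels and probabilities are invented for illustration):

```python
import random

# Stand-in for decode_predictions(preds, top=3)[0]:
# a list of (class_id, label, probability) tuples
pred_top = [('n02123045', 'tabby', 0.42),
            ('n02123159', 'tiger_cat', 0.31),
            ('n02127052', 'lynx', 0.11)]

topn = random.randint(0, len(pred_top) - 1)  # pick one of the top-3 at random
pred_dis = pred_top[topn][1]
proba = str(pred_top[topn][2] * 100) + '%'   # same formatting as in the script
print(pred_dis, proba)
```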
```
import numpy as np
import pandas as pd
import scipy as sp
import scipy.optimize  # so that sp.optimize.minimize below is available
import sklearn as sl
import seaborn as sns; sns.set()
import matplotlib as mpl
from sklearn.linear_model import LinearRegression
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
%matplotlib inline
```
# Task 3: Find the regression
You are given data $x$ and $y$ as shown below. You must answer four questions based on these data. Assume you have a model such that $y=f(x)$; moreover, $f$ is unknown.
```
df = pd.read_pickle('ex1.gz')
sns.scatterplot(x='x',y='y',data=df)
plt.show()
df
```
## (A) Slope and intercept
Determine the slope of the data on the interval $[0,1.5]$ and the value of the $y$-intercept, that is, $f(0)=?$. What is the value of $r^2$?
```
k = df[(df.x >= 0) & (df.x <= 1.5)]
k
x1= k['x'].values.reshape(-1,1)
x2= k['y'].values.reshape(-1,1)
modelo = LinearRegression()
modelo.fit(x1,x2)
intercepto = modelo.intercept_
m = modelo.coef_
r2 = modelo.score(x1,x2)
print("Intercepto: ", intercepto)
print("Pendiente: ", m)
print("R^2: ", r2)
```
## (B) Polynomial regression
Suppose you want to perform the following polynomial regression,
$$y=\beta_1+\beta_2x+\beta_3x^2+\beta_4x^3+\beta_5x^4+\beta_6x^5.$$
Set up the cost function that allows you to compute the coefficients, and compute $\beta_1$ through $\beta_6$. What is the $r^2$?
Compute $f(0)$ and compare with the previous results.
```
def L(x,A,b):
m,n = A.shape
X = np.matrix(x).T
DeltaB=(A*X-b)
return (DeltaB.T*DeltaB)[0,0]/m
Y = df.loc[:, ['y']]
Y
X = df.loc[:, ['x']].rename(columns={'x': 'x1'})
X.insert(0, 'x0', 1)
X['x2'] = X['x1']*X['x1']
X['x3'] = X['x1']**3
X['x4'] = X['x1']**4
X['x5'] = X['x1']**5
Xi = X.to_numpy()
Yi = Y.to_numpy()
op = sp.optimize.minimize(fun=L,x0=np.zeros(Xi.shape[1]), args = (Xi,Yi), tol=1e-10)
print("The coefficient values are:",op['x'])
print("The value of f(0) is:",op['x'][0])
y = df["y"]
def f(a,b,c,d,e,f,x):
    return a*x**5 + b*x**4 + c*x**3 + d*x**2 + e*x + f
# Evaluate the fitted polynomial at the data x-values so that p and y align
p = f(op['x'][5],op['x'][4],op['x'][3],op['x'][2],op['x'][1],op['x'][0],df["x"])
r2 = 1-np.sum((p-y)**2)/np.sum((y-y.mean())**2)
r2
print("The result is very close to that of the exact polynomial method, showing that both methods are quite precise, with only small variations in the decimal places")
```
## (C) Exact polynomial regression
It turns out that any polynomial regression can be done exactly. How? Suppose that, instead of having $1$ variable ($x$), your problem has $n+1$, where $n$ is the order of the polynomial to fit. That is, your new variables will be $\{x_0,\,x_1,\,x_2,\,x_3,\dots,\,x_n\}$, defining $x_j=x^j$. Then, following the same procedure as in the multidimensional linear regression we carried out for the real-estate data exercise, you can find the values of the coefficients $\beta_1$ through $\beta_6$. Find these values and compare with the results of section **(B)**.
Compute $f(0)$ and compare with the previous results.
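In matrix form, minimizing the squared error also has a closed-form solution via the normal equations, which is what the next code cell computes:

$$\hat{\beta}=\left(X^{\top}X\right)^{-1}X^{\top}y.$$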
> If you are wondering whether this is possible, the answer is yes. In fact, it extends to any set of functions, with $x_j=f_j(x)$, that forms a "linearly independent" set (I am getting ahead of *Fourier*!). For those who want to explore some mathematical curiosities: when $n+1$ equals the number of points or $x$ values (all distinct), the matrix is always invertible and turns out to be the inverse of a Vandermonde matrix.
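A quick numerical illustration of the note above, with made-up points rather than the assignment data: when the number of points equals the number of coefficients and all $x$ values are distinct, solving the Vandermonde system fits the data exactly.
```
import numpy as np

# Four distinct x values and four coefficients: n + 1 = number of points
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 0.0, 5.0])

V = np.vander(x, N=4, increasing=True)  # columns are x^0, x^1, x^2, x^3
beta = np.linalg.solve(V, y)            # solvable because all x are distinct

print(np.allclose(V @ beta, y))  # True: the polynomial passes through every point
```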
```
rt = np.linalg.inv(Xi.T @ Xi) @ Xi.T @ Yi
b0, b1, b2, b3, b4, b5 = rt
coefs = ", ".join(str(b) for b in (b0, b1, b2, b3, b4, b5))
print(f"The coefficients are: {coefs}")
print("The value of f(0) is:", rt[0])
print("This confirms that f(0) closely matches the value from the polynomial regression, and agrees with what the plot suggests")
```
## (D) Regression to a theoretical model
Suppose your theoretical model is the following:
$$y=\frac{a}{\left[(x-b)^2+c\right]^\gamma}.$$
Find $a$, $b$, $c$, and $\gamma$.
Compute $f(0)$ and compare with the previous results.
```
def f(i,x):
return (i[0])/((x-i[1])**2 + i[2])**i[3]
def L(i2,x,y):
dy = f(i2,x) - y
return np.dot(dy,dy)/len(y)
x = df["x"]
op = sp.optimize.minimize(fun=L, x0=np.array([0,0,1,0]), args = (x,y), method='L-BFGS-B', tol=1e-8)
print("The values of a, b, c, and gamma are", op['x'])
print("The value of f(0) is:", f(op.x, 0))
print("Compared with the two previous methods, this one returned a value of 0.2987, showing lower precision and accuracy, so it is the least optimal of the three")
```
| github_jupyter |
# Missing Data
Missing values are a common problem within datasets. Data can be missing for a number of reasons, including tool/sensor failure, data vintage, telemetry issues, stick and pull, and omission by choice.
There are a number of tools we can use to identify missing data, some of these methods include:
- Pandas Dataframe summaries
- MissingNo Library
- Visualisations
How to handle missing data is controversial: some argue that data should be filled in using techniques such as mean imputation or regression imputation, whereas others argue that it is best to remove that data to prevent adding further uncertainty to the final results.
In this notebook, we are going to use two removal approaches: variable discarding and listwise deletion.
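Before discarding anything, it is worth seeing what the imputation alternative looks like. A minimal sketch of mean imputation with pandas, using a small hypothetical dataframe rather than the well data:
```
import numpy as np
import pandas as pd

# Hypothetical log curves with gaps
df_demo = pd.DataFrame({'GR': [50.0, np.nan, 75.0, 60.0],
                        'RHOB': [2.3, 2.4, np.nan, 2.5]})

# Replace each missing value with the mean of its column
df_filled = df_demo.fillna(df_demo.mean(numeric_only=True))
print(int(df_filled.isna().sum().sum()))  # 0: no missing values remain
```
Whether this is appropriate depends on the data; for depth-indexed well logs, interpolation is often preferred over a global column mean.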
# Importing Libraries & Data
The first step is to import the libraries that we will require for working with the data.
For this notebook, we will be using:
- pandas for loading and storing the data
- matplotlib and seaborn for visualising the data
- numpy for a number of calculation methods
- missingno to visualise where missing data exists
```
import pandas as pd
import matplotlib.pyplot as plt
import missingno as msno
import numpy as np
```
Next, we will load the data in using the pandas `read_csv` function and assign it to the variable `df`. The data will now be stored within a structured object known as a dataframe.
```
df = pd.read_csv('data/spwla_volve_data.csv')
```
As seen in the previous notebook, we can call upon a few methods to check the data quality.
The `.head()` method allows us to view the first 5 rows of the dataframe.
```
df.head()
```
The `describe()` method provides us with some summary statistics. To identify missing data using this method, we need to look at the count row. If we assume that MD (measured depth) is the most complete column, we have 27,845 data points. If we then look at DT and DTS, we can see we only have 5,493 and 5,420 data points respectively. A number of other columns also have lower counts, namely: RPCELM, PHIF, SW, and VSH.
```
df.describe()
```
To gain a clearer insight, we can call upon the `info()` method to see how many non-null values exist for each column. Right away we can see the ones highlighted previously have lower numbers of non-null values.
```
df.info()
```
## Using missingno to Visualise Data Sparsity
The missingno library is designed to take a dataframe and allow you to visualise where gaps may exist.
We can simply call upon the `.matrix()` method and pass in the dataframe object. When we do, we generate a graphical view of the dataframe.
In the plot below, we can see that there are significant gaps within the DT and DTS columns, with minor gaps in the RPCELM, PHIF, and SW columns.
The sparkline on the right-hand side of the plot provides an indication of data completeness. If the line is at the maximum value (to the right), that data row is complete.
```
msno.matrix(df)
plt.show()
```
Another plot we can call upon is the bar plot, which provides a graphical summary of the number of data points in each column.
```
msno.bar(df)
```
## Using matplotlib
We can generate our own plots to show how the data sparsity varies across each of the wells. In order to do this, we need to manipulate the dataframe.
First we create a copy of the dataframe to work on separately. Each curve column is then replaced with a constant: `num + 1` where a value is present and `num` where it is missing, with `num` increasing by one for each column.
This lets us plot each column as an offset to the previous one, with the fill dropping back to the baseline wherever data is absent.
```
data_nan = df.copy()
for num, col in enumerate(data_nan.columns[2:]):
data_nan[col] = data_nan[col].notnull() * (num + 1)
data_nan[col].replace(0, num, inplace=True)
```
When we view the header of the dataframe we now have a series of columns with increasing values from 1 to 14.
```
data_nan.head()
```
Next, we can group the dataframe by the wellName column.
```
grouped = data_nan.groupby('wellName')
```
We can then create multiple subplots for each well using the new dataframe. Rather than creating subplots within subplots, we can shade from the previous column's max value to the current column's max value if the data is present. If data is absent, it will be displayed as a gap.
```
#Setup the labels we want to display on the x-axis
labels = ['BS', 'CALI', 'DT', 'DTS', 'GR', 'NPHI', 'RACEHM', 'RACELM', 'RHOB', 'RPCEHM', 'RPCELM', 'PHIF', 'SW', 'VSH']
#Setup the figure and the subplots
fig, axs = plt.subplots(3, 2, figsize=(20,20))
#Loop through each well and column in the grouped dataframe
for (name, well), ax in zip(grouped, axs.flat):
    ax.set_xlim(0, len(labels))
#Setup the depth range
ax.set_ylim(well.MD.max() + 50, well.MD.min() - 50)
# Create multiple fill betweens for each curve# This is between
# the number representing null values and the number representing
# actual values
ticks = []
ticks_labels = []
for i, curve in enumerate(labels):
ax.fill_betweenx(well.MD, i, well[curve], facecolor='lightblue')
ticks.append(i)
ticks_labels.append(i+0.5)
# add extra value on to ticks
ticks.append(len(ticks))
#Setup the grid, axis labels and ticks
ax.grid(axis='x', alpha=0.5, color='black')
ax.set_ylabel('DEPTH (m)', fontsize=18, fontweight='bold')
#Position vertical lines at the boundaries between the bars
ax.set_xticks(ticks, minor=False)
#Position the curve names in the centre of each column
ax.set_xticks(ticks_labels, minor=True)
#Setup the x-axis tick labels
ax.set_xticklabels(labels, rotation='vertical', minor=True, verticalalignment='bottom', fontsize=14)
ax.set_xticklabels('', minor=False)
ax.tick_params(axis='x', which='minor', pad=-10)
ax.tick_params(axis='y', labelsize=14 )
#Assign the well name as the title to each subplot
ax.set_title(name, fontsize=16, fontweight='bold')
plt.tight_layout()
plt.subplots_adjust(hspace=0.15, wspace=0.25)
# plt.savefig('missingdata.png', dpi=200)
plt.show()
```
From the plot, we can not only see the data range of each well, but we can also see that 2 of the 5 wells have missing DT and DTS curves, 2 of the wells have missing data within RPCELM, and 2 of the wells have missing values in the PHIF and SW curves.
## Dealing With Missing Data
### Discarding Variables
As DT and DTS are missing in two of the wells, we have the option to remove these wells from the dataset, or we can remove these two columns for all of the wells.
The following is an example of how we remove the two curves from the dataframe. For this, we pass a list of the column names to the `drop()` function, the axis along which we want to drop data (in this case the columns, `axis=1`), and the `inplace=True` argument, which removes the columns from the dataframe in place.
```
df.drop(['DT', 'DTS'], axis=1, inplace=True)
```
If we view the header of the dataframe, we will see that we have removed the required columns.
```
df.head()
```
However, if we call upon the info method, we can see we still have null values within the dataframe.
```
df.info()
```
### Discarding NaNs
We can drop missing values by calling a special function called `dropna()`. This will remove any rows that contain NaN (Not a Number) values from the dataframe. The `inplace=True` argument physically removes these rows from the dataframe.
```
df.dropna(inplace=True)
df.info()
```
# Summary
This short notebook has shown three separate ways to visualise missing data: first by interrogating the dataframe, second by using the missingno library, and third by creating a custom visualisation with matplotlib.
At the end, we covered two ways in which missing data can be removed from the dataframe: the first by discarding variables, and the second by discarding rows that contain missing values.
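If only some columns matter downstream, a middle ground between the two approaches is listwise deletion restricted to a subset of columns. A small sketch with hypothetical column names:
```
import numpy as np
import pandas as pd

df_demo = pd.DataFrame({'MD': [100.0, 110.0, 120.0],
                        'GR': [50.0, np.nan, 60.0],
                        'DT': [np.nan, np.nan, 80.0]})

# Only require GR to be present; rows may still contain missing DT values
cleaned = df_demo.dropna(subset=['GR'])
print(len(cleaned))  # 2
```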
| github_jupyter |
# Applying Chords to 2D and 3D Images
## Importing packages
```
import time
import porespy as ps
ps.visualization.set_mpl_style()
```
Import the usual packages from the Scipy ecosystem:
```
import scipy as sp
import scipy.ndimage as spim
import matplotlib.pyplot as plt
```
## Demonstration on 2D Image
Start by creating an image using the ``blobs`` function in ``generators``. The useful thing about this function is that images can be created with anisotropy. These are exactly the sort of images where chord length distributions are useful, since chords can be drawn in different directions, to probe the anisotropic pore sizes.
```
im = ps.generators.blobs(shape=[400, 400], blobiness=[2, 1])
```
The image can be visualized easily using matplotlib's ``imshow`` function:
```
# NBVAL_IGNORE_OUTPUT
plt.figure(figsize=[6, 6])
fig = plt.imshow(im)
```
Determining chord-length distributions requires first adding chords to the image, which is done using the ``apply_chords`` function. The following code applies chords to the image in the x-direction (along ``axis=0``), then applies them in the y-direction (``axis=1``). The two images are then plotted using ``matplotlib``.
```
# NBVAL_IGNORE_OUTPUT
crds_x = ps.filters.apply_chords(im=im, spacing=4, axis=0)
crds_y = ps.filters.apply_chords(im=im, spacing=4, axis=1)
fig, ax = plt.subplots(1, 2, figsize=[10, 5])
ax[0].imshow(crds_x)
ax[1].imshow(crds_y)
```
Note that none of the chords touch the edge of the image. Chords touching an edge are trimmed by default, since they are artificially shorter than they should be and would skew the results. This trimming is optional, and the edge chords can be kept by setting ``trim_edges=False``.
It is sometimes useful to colorize the chords by their length. PoreSpy includes a function called ``region_size`` which counts the number of voxels in each connected region of an image, and replaces those voxels with the numerical value of the region size. This is illustrated below:
```
# NBVAL_IGNORE_OUTPUT
sz_x = ps.filters.region_size(crds_x)
sz_y = ps.filters.region_size(crds_y)
fig, ax = plt.subplots(1, 2, figsize=[10, 6])
ax[0].imshow(sz_x)
ax[1].imshow(sz_y)
```
Although the above images are useful for quick visualization, they are not quantitative. To get quantitative chord length distributions, pass the chord image(s) to the ``chord_length_distribution`` functions in the ``metrics`` submodule:
```
data_x = ps.metrics.chord_length_distribution(crds_x, bins=25)
data_y = ps.metrics.chord_length_distribution(crds_y, bins=25)
```
This function, like many of the functions in the ``metrics`` module, returns a named tuple containing various arrays. The advantage of the named tuple is that each array can be accessed by name as an attribute, such as ``data_x.pdf``. To see all the available attributes (i.e. arrays), use your IDE's autocomplete, or print the ``_fields`` attribute as follows:
```
print(data_x._fields)
```
Now we can plot the results of the chord-length distribution as bar graphs:
```
# NBVAL_IGNORE_OUTPUT
plt.figure(figsize=[6, 6])
bar = plt.bar(x=data_y.L, height=data_y.cdf, width=data_y.bin_widths, color='b', edgecolor='k', alpha=0.5)
bar = plt.bar(x=data_x.L, height=data_x.cdf, width=data_x.bin_widths, color='r', edgecolor='k', alpha=0.5)
```
The key point to see here is that the blue bars are for the y-direction, which was the elongated direction, and as expected they show a tendency toward longer chords.
## Application to 3D images
Chords can just as easily be applied to 3D images. Let's create an artificial image of fibers, aligned in the YZ plane but oriented randomly in the X direction:
```
# NBVAL_IGNORE_OUTPUT
im = ps.generators.cylinders(shape=[200, 400, 400], radius=8, ncylinders=200, )
plt.imshow(im[:, :, 100])
```
As above, we must apply chords to the image then pass the chord image to the ``chord_length_distribution`` function:
```
# NBVAL_IGNORE_OUTPUT
crds = ps.filters.apply_chords(im=im, axis=0)
plt.imshow(crds[:, :, 100])
```
| github_jupyter |
# Exercise 6
```
# Importing libs
import cv2
import numpy as np
import matplotlib.pyplot as plt
apple = cv2.imread('images/apple.jpg')
apple = cv2.cvtColor(apple, cv2.COLOR_BGR2RGB)
apple = cv2.resize(apple, (512,512))
orange = cv2.imread('images/orange.jpg')
orange = cv2.cvtColor(orange, cv2.COLOR_BGR2RGB)
orange = cv2.resize(orange, (512,512))
plt.figure(figsize=(10,10))
ax1 = plt.subplot(121)
ax1.imshow(apple)
ax2 = plt.subplot(122)
ax2.imshow(orange)
ax1.axis('off')
ax2.axis('off')
ax1.text(0.5,-0.1, "Apple", ha="center", transform=ax1.transAxes)
ax2.text(0.5,-0.1, "Orange", ha="center", transform=ax2.transAxes)
def combine(img1, img2):
    result = np.zeros(img1.shape, dtype='uint8')
    h,w,_ = img1.shape
    result[:,0:w//2,:] = img1[:,0:w//2,:]
    result[:,w//2:,:] = img2[:,w//2:,:]
    return result
apple_orange = combine(apple,orange)
plt.imshow(apple_orange)
plt.axis('off')
plt.figtext(0.5, 0, 'Apple + Orange', horizontalalignment='center')
plt.show()
def buildPyramid(levels, left,right=None):
lresult = left
rresult = right if type(right) is np.ndarray else left
for i in range(levels):
lresult = cv2.pyrDown(lresult)
rresult = cv2.pyrDown(rresult)
for i in range(levels):
lresult = cv2.pyrUp(lresult)
rresult = cv2.pyrUp(rresult)
return combine(lresult,rresult)
apple_orange_pyramid = buildPyramid(3, apple_orange)
plt.figure(figsize=(10,10))
ax1 = plt.subplot(121)
ax1.imshow(apple_orange)
ax2 = plt.subplot(122)
ax2.imshow(apple_orange_pyramid)
ax1.axis('off')
ax2.axis('off')
ax1.text(0.5,-0.1, "Raw", ha="center", transform=ax1.transAxes)
ax2.text(0.5,-0.1, "After Pyramid", ha="center", transform=ax2.transAxes)
apple_orange_pyramid = buildPyramid(3, apple, orange)
plt.figure(figsize=(10,10))
ax1 = plt.subplot(121)
ax1.imshow(apple_orange)
ax2 = plt.subplot(122)
ax2.imshow(apple_orange_pyramid)
ax1.axis('off')
ax2.axis('off')
ax1.text(0.5,-0.1, "Raw", ha="center", transform=ax1.transAxes)
ax2.text(0.5,-0.1, "After Pyramid", ha="center", transform=ax2.transAxes)
```
## Another implementation
```
def buildPyramid2(levels, left,right=None):
lresult = left
rresult = right if type(right) is np.ndarray else left
for i in range(levels):
lresult = cv2.pyrDown(lresult)
rresult = cv2.pyrDown(rresult)
result = combine(lresult,rresult)
for i in range(levels):
result = cv2.pyrUp(result)
return result
apple_orange_pyramid = buildPyramid2(3, apple, orange)
plt.figure(figsize=(10,10))
ax1 = plt.subplot(121)
ax1.imshow(apple_orange)
ax2 = plt.subplot(122)
ax2.imshow(apple_orange_pyramid)
ax1.axis('off')
ax2.axis('off')
ax1.text(0.5,-0.1, "Raw", ha="center", transform=ax1.transAxes)
ax2.text(0.5,-0.1, "After Pyramid", ha="center", transform=ax2.transAxes)
```
| github_jupyter |
```
from bs4 import BeautifulSoup
import os
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import LabelEncoder
from sklearn.datasets import fetch_20newsgroups
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import ComplementNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
filepaths = []
for root, dirs, files in os.walk(os.getcwd() + "/reuters21578/"):
for file in files:
if os.path.splitext(file)[1] == '.sgm':
filepaths.append(os.path.join(root, file))
file_list = [open(file, 'r', encoding='ISO-8859-1') for file in filepaths]
soup_list = [BeautifulSoup(file,'lxml') for file in file_list]
def find_topics(soup):
tuple_topics = [(topic.parent.get('newid'),i) for topic in soup.find_all('topics') for i in topic.strings]
return tuple_topics
def find_texts(soup):
dic_text = {find.parent.get('newid'):find.text.replace(find.title.string if find.parent.title is not None else "","").replace(find.dateline.string if find.dateline is not None else "","").replace("\n","") for find in soup.find_all('text') if find.parent.topics.contents!=[]}
return dic_text
def get_strs(soup):
topics = find_topics(soup)
text = find_texts(soup)
strs = [topic[1] + "_label_" + text.get(topic[0]) for topic in topics]
return strs
def write_to_txt(strs):
file = open('raw_y_X.txt','w',encoding='utf-8')
for i in strs:
file.write(i+'\n')
file.close()
strs_s = []
for soup in soup_list:
strs = get_strs(soup)
for st in strs:
strs_s.append(st)
random.shuffle(strs_s)
write_to_txt(strs_s)
X_raw = []
y_raw = []
with open("raw_y_X.txt", "r") as infile:
lines = infile.readlines()
for line in lines:
y_raw.append(line.split("_label_")[0])
X_raw.append(line.split("_label_")[1])
vectorizer = TfidfVectorizer(ngram_range=(1,2), stop_words="english")
##################20newsgroups########################
newsgroups_train = fetch_20newsgroups(subset="train")
X_news = vectorizer.fit_transform(newsgroups_train.data)
y_news = newsgroups_train.target
##################Reuters###############################
X_reuters = vectorizer.fit_transform(X_raw)
label_encoder = LabelEncoder()
y_reuters = label_encoder.fit_transform(y_raw)
X_news_train, X_news_test, y_news_train, y_news_test = train_test_split(X_news, y_news, test_size=0.25)
lsvc_news = LinearSVC(loss="squared_hinge", penalty="l2", C=1, multi_class="ovr")
lsvc_news.fit(X_news_train, y_news_train)
print(classification_report(y_news_test, lsvc_news.predict(X_news_test)))
X_reuters_train, X_reuters_test, y_reuters_train, y_reuters_test = train_test_split(X_reuters, y_reuters, test_size=0.25)
lsvc_reuters = LinearSVC(loss="squared_hinge", penalty="l2", C=1, multi_class="ovr")
lsvc_reuters.fit(X_reuters_train, y_reuters_train)
print(classification_report(y_reuters_test, lsvc_reuters.predict(X_reuters_test)))
cnb_news = ComplementNB(alpha=1)
cnb_news.fit(X_news_train, y_news_train)
print(classification_report(y_news_test, cnb_news.predict(X_news_test)))
cnb_reuters = ComplementNB(alpha=1)
cnb_reuters.fit(X_reuters_train, y_reuters_train)
print(classification_report(y_reuters_test, cnb_reuters.predict(X_reuters_test)))
```
**Preprocessing**
I used BeautifulSoup rather than regular expressions to parse the Reuters data, which turned out to be more difficult than expected.
When parsing the data, only documents carrying a \<TOPICS\> label were kept, and each text was read in with its title and dateline information stripped.
The "newid" attribute is used to match each text with its topic.
X_raw holds all the texts, and y_raw holds their corresponding topics.
The topic and text of every document were stored in a .txt file with a \_label_ marker separating them.
A TF-IDF vectorizer then encoded both datasets with unigram and bigram features, excluding stop words, and sklearn's LabelEncoder was used to encode the target labels with values between 0 and n_classes-1.
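As a toy illustration of this encoding step (made-up documents and topics, not the Reuters data):
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import LabelEncoder

docs = ["oil prices rise sharply", "wheat harvest falls", "oil output rises again"]
topics = ["crude", "grain", "crude"]

vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
X = vec.fit_transform(docs)   # sparse TF-IDF matrix, one row per document

le = LabelEncoder()
y = le.fit_transform(topics)  # classes sorted alphabetically: crude -> 0, grain -> 1
print(X.shape[0], list(y))    # 3 [0, 1, 0]
```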
**Model Selection**
Two models were implemented: Linear Support Vector Classifier as the non-probabilistic one and Complement Naive Bayes as the probabilistic one.
LinearSVC has more flexibility in the choice of penalties and loss functions, and Complement NB is suitable for imbalanced datasets, which in our case means the Reuters dataset. The inductive bias of an SVM is that distinct classes tend to be separated by wide margins (maximum margin).
The naive Bayes classifier assumes that the input features are independent of each other given the output label.
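Written out, the standard naive Bayes decision rule implied by this conditional-independence assumption is

$$\hat{y}=\arg\max_{y}\;P(y)\prod_{i=1}^{n}P(x_i\mid y),$$

where the $x_i$ are the input features. Complement NB changes how the per-class probabilities are estimated (using each class's complement), but keeps this independence structure.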
The train and test sets were split with sklearn. I tried several test sizes, found that performance did not vary much, and went with 0.25.
For hyperparameters, I chose to use the default values for both classifiers.
**Evaluation**
The evaluation metric I chose was sklearn's classification report. It shows the precision, recall, and F1 score for each label as well as overall averages.
The results I got from the experiments showed that the 20 newsgroups dataset outperformed Reuters a lot on both classifiers.
The overall accuracy for 20 newsgroups dataset was about 0.9, while the overall accuracy for Reuters was around 0.6.
When classifying the Reuters dataset, labels with high frequency were predicted with higher precision and recall, whereas rare labels scored nearly 0.
I think this is because the Reuters dataset is not as well-formed as the other one. There is a lot of "noise" in the texts (e.g. many texts contain runs like \*\*\*\*\*Blah Blah Blah).
Also, the 20 newsgroups dataset only has 20 labels whereas the Reuters dataset has 120 labels, making the classification task harder. With so many labels to classify, more input data is needed for the chosen model to learn.
As for the zero-division warning, some labels have zero precision and recall, so it is natural to see it.
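Assuming a reasonably recent scikit-learn, the warning itself can be silenced with the `zero_division` parameter of `classification_report`; a sketch on toy labels:
```
from sklearn.metrics import classification_report

y_true = [0, 0, 1, 2]
y_pred = [0, 0, 1, 1]  # class 2 is never predicted, so its precision is 0/0

# zero_division=0 reports 0.0 for the undefined cells instead of warning
report = classification_report(y_true, y_pred, zero_division=0)
print(report)
```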
| github_jupyter |
```
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.core.debugger import Pdb; pdb = Pdb()
def get_down_centre_last_low(p_list):
zn_num = len(p_list) - 1
available_num = min(9, (zn_num - 6))
index = len(p_list) - 4
for i in range(0, available_num // 2):
if p_list[index - 2] < p_list[index]:
index = index -2
else:
return index
return index + 2
def get_down_centre_first_high(p_list):
s = max(enumerate(p_list[3:]), key=lambda x: x[1])[0]
return s + 3
def down_centre_expand_spliter(p_list):
lr0 = get_down_centre_last_low(p_list)
hr0 = (max(enumerate(p_list[lr0 - 1:]), key=lambda x: x[1])[0]) + lr0 - 1
hl0 = get_down_centre_first_high(p_list[: lr0 - 2])
if p_list[hr0] > p_list[hl0] and (len(p_list) - hr0) > 5:
hl0 = hr0
lr0 = lr0 + (len(p_list) - hr0) // 2
# lr0 = hr0 + 3
return [0, hl0, lr0, len(p_list) - 1], [p_list[0], p_list[hl0], p_list[lr0], p_list[-1]]
# y = [0, 100, 60, 130, 70, 120, 40, 90, 50, 140, 85, 105]
# y = [0, 100, 60, 110, 70, 72, 61, 143, 77, 91, 82, 100, 83, 124, 89, 99]
# y = [0, 100, 60, 110, 70, 115, 75, 120, 80, 125, 85, 130, 90, 135]
# y = [0, 100, 60, 110, 70, 78, 77, 121, 60, 93, 82, 141, 78, 134]
# x = list(range(0, len(y)))
# gg = [min(y[1], y[3])] * len(y)
# dd = [max(y[2], y[4])] * len(y)
# plt.figure(figsize=(len(y),4))
# plt.grid()
# plt.plot(x, y)
# plt.plot(x, gg, '--')
# plt.plot(x, dd, '--')
# sx, sy = down_centre_expand_spliter(y)
# plt.plot(sx, sy)
# plt.show()
# Centre Expand Prototype
%matplotlib inline
import matplotlib.pyplot as plt
y_base = [0, 100, 60, 130, 70, 120, 40, 90, 50, 140, 85, 105, 55, 80]
for i in range(10, len(y_base)):
y = y_base[:(i + 1)]
x = list(range(0, len(y)))
gg = [min(y[1], y[3])] * len(y)
dd = [max(y[2], y[4])] * len(y)
plt.figure(figsize=(i,4))
plt.grid()
plt.plot(x, y)
plt.plot(x, gg, '--')
plt.plot(x, dd, '--')
if i % 2 == 1:
sx, sy = down_centre_expand_spliter(y)
plt.plot(sx, sy)
plt.show()
# Random Centre Generator
%matplotlib inline
import random
import matplotlib.pyplot as plt
y_max = 150
y_min = 50
num_max = 14
def generate_next(y_list, direction):
if direction == 1:
y_list.append(random.randint(max(y_list[2], y_list[4], y_list[-1]) + 1, y_max))
elif direction == -1:
y_list.append(random.randint(y_min, min(y_list[1], y_list[3], y_list[-1]) - 1))
y_base = [0, 100, 60, 110, 70]
# y_base = [0, 110, 70, 100, 60]
# y_base = [0, 100, 60, 90, 70]
# y_base = [0, 90, 70, 100, 60]
direction = 1
for i in range(5, num_max):
generate_next(y_base, direction)
direction = 0 - direction
print(y_base)
for i in range(11, len(y_base), 2):
y = y_base[:(i + 1)]
x = list(range(0, len(y)))
gg = [min(y[1], y[3])] * len(y)
dd = [max(y[2], y[4])] * len(y)
plt.figure(figsize=(i,4))
plt.title(y)
plt.grid()
plt.plot(x, y)
plt.plot(x, gg, '--')
plt.plot(x, dd, '--')
sx, sy = down_centre_expand_spliter(y)
plt.plot(sx, sy)
plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
# Group 1
# y_base = [0, 100, 60, 110, 70, 99, 66, 121, 91, 141, 57, 111, 69, 111]
# y_base = [0, 100, 60, 110, 70, 105, 58, 102, 74, 137, 87, 142, 55, 128]
y_base = [0, 100, 60, 110, 70, 115, 75, 120, 80, 125, 85, 130, 90, 135]
# y_base = [0, 100, 60, 110, 70, 120, 80, 130, 90, 140, 50, 75]
# y_base = [0, 100, 60, 110, 70, 114, 52, 75, 54, 77, 65, 100, 66, 87, 70, 116]
# y_base = [0, 100, 60, 110, 70, 72, 61, 143, 77, 91, 82, 100, 83, 124, 89, 99, 89, 105]
# Group 2
# y_base = [0, 110, 70, 100, 60, 142, 51, 93, 78, 109, 60, 116, 50, 106]
# y_base = [0, 110, 70, 100, 60, 88, 70, 128, 82, 125, 72, 80, 63, 119]
# y_base = [0, 110, 70, 100, 60, 74, 66, 86, 57, 143, 50, 95, 70, 91]
# y_base = [0, 110, 70, 100, 60, 77, 73, 122, 96, 116, 82, 124, 69, 129]
# y_base = [0, 110, 70, 100, 60, 147, 53, 120, 77, 103, 56, 76, 74, 92]
# y_base = [0, 110, 70, 100, 60, 95, 55, 90, 50, 85, 45, 80, 40, 75]
# Group 3
# y_base = [0, 100, 60, 90, 70, 107, 55, 123, 79, 112, 64, 85, 74, 110]
# y_base = [0, 100, 60, 90, 70, 77, 55, 107, 76, 141, 87, 91, 60, 83]
# y_base = [0, 100, 60, 90, 70, 114, 67, 93, 58, 134, 53, 138, 64, 107]
# y_base = [0, 100, 60, 90, 70, 77, 66, 84, 79, 108, 87, 107, 72, 89]
# y_base = [0, 100, 60, 90, 70, 88, 72, 86, 74, 84, 76, 82, 74, 80]
# Group 4
# y_base = [0, 90, 70, 100, 60, 131, 57, 144, 85, 109, 82, 124, 87, 101]
# y_base = [0, 90, 70, 100, 60, 150, 56, 112, 63, 95, 84, 118, 58, 110]
# y_base = [0, 90, 70, 100, 60, 145, 64, 112, 69, 86, 71, 119, 54, 95]
# y_base = [0, 90, 70, 100, 60, 105, 55, 110, 50, 115, 45, 120, 40, 125]
for i in range(11, len(y_base), 2):
y = y_base[:(i + 1)]
x = list(range(0, len(y)))
gg = [min(y[1], y[3])] * len(y)
dd = [max(y[2], y[4])] * len(y)
plt.figure(figsize=(i,4))
plt.title(y)
plt.grid()
plt.plot(x, y)
plt.plot(x, gg, '--')
plt.plot(x, dd, '--')
sx, sy = down_centre_expand_spliter(y)
plt.plot(sx, sy)
plt.show()
```
| github_jupyter |
```
# default_exp core
```
# module name here
> API details.
```
#hide
from nbdev.showdoc import *
#export
import pandas as pd
from tqdm.notebook import tqdm
import json
import numpy as np
from fastai.vision.all import *
import albumentations as A
import skimage.io as skio
import warnings
warnings.filterwarnings("ignore")
with open('./data/BigEarthNet-S2_19-classes_models/label_indices.json', 'rb') as f:
label_indices = json.load(f)
label_conversion = label_indices['label_conversion']
BigEarthNet_19_label_idx = {v: k for k, v in label_indices['BigEarthNet-19_labels'].items()}
def get_label(patch_json):
original_labels = patch_json['labels']
original_labels_multi_hot = np.zeros(
len(label_indices['original_labels'].keys()), dtype=int)
BigEarthNet_19_labels_multi_hot = np.zeros(len(label_conversion),dtype=int)
for label in original_labels:
original_labels_multi_hot[label_indices['original_labels'][label]] = 1
for i in range(len(label_conversion)):
BigEarthNet_19_labels_multi_hot[i] = (
np.sum(original_labels_multi_hot[label_conversion[i]]) > 0
).astype(int)
BigEarthNet_19_labels = ''
for i in np.where(BigEarthNet_19_labels_multi_hot == 1)[0]:
BigEarthNet_19_labels+=str(i)+' '
return BigEarthNet_19_labels[:-1]
# # create data
# df=pd.read_csv('./data/BigEarthNet-S2_19-classes_models/splits/train.csv',header=None)
# df['Isval']=0
# df2=pd.read_csv('./data/BigEarthNet-S2_19-classes_models/splits/val.csv',header=None)
# df2['Isval']=1
# df=pd.concat([df,df2])
# df=df.rename(columns={0: "fname"})
# df['label']=''
# for i in tqdm(range(len(df))):
# with open('./data/BigEarthNet-v1.0/'+df.iat[i,0]+'/'+df.iat[i,0]+'_labels_metadata.json', 'rb') as f:
# patch_json = json.load(f)
# df.iat[i,2]=get_label(patch_json)
# df.iat[i,0]='./data/BigEarthNet-v1.0/'+df.iat[i,0]+'/'+df.iat[i,0]+'.tif'
#export
def open_tif(fn, cls=torch.Tensor):
im = skio.imread(str(fn))/10000
im = im.transpose(1,2,0).astype('float32')
return cls(im)
class MSTensorImage(TensorImage):
@classmethod
def create(cls, data:(Path,str,ndarray), chnls=None):
if isinstance(data, Path) or isinstance(data, str):
if str(data).endswith('tif'): im = open_tif(fn=data,cls=torch.Tensor)
elif isinstance(data, ndarray):
im = torch.from_numpy(data)
else:
im = data
return cls(im)
df=pd.read_csv('./data/file.csv')
df.head()
db=DataBlock(blocks=(TransformBlock(type_tfms=partial(MSTensorImage.create)), MultiCategoryBlock),
splitter=ColSplitter('Isval'),
get_x=ColReader('fname'),
get_y=ColReader('label', label_delim=' '))
# batch_tfms=aug_transforms(size=224))
# db.summary(source=df)
ds = db.datasets(source=df)
#export
BAND_STATS = {
'S2':{
'mean': {
'B01': 340.76769064,
'B02': 429.9430203,
'B03': 614.21682446,
'B04': 590.23569706,
'B05': 950.68368468,
'B06': 1792.46290469,
'B07': 2075.46795189,
'B08': 2218.94553375,
'B8A': 2266.46036911,
'B09': 2246.0605464,
'B11': 1594.42694882,
'B12': 1009.32729131
},
'std': {
'B01': 554.81258967,
'B02': 572.41639287,
'B03': 582.87945694,
'B04': 675.88746967,
'B05': 729.89827633,
'B06': 1096.01480586,
'B07': 1273.45393088,
'B08': 1365.45589904,
'B8A': 1356.13789355,
'B09': 1302.3292881,
'B11': 1079.19066363,
'B12': 818.86747235
}
},
'S1': {
'mean': {
'VV': -12.619993741972035,
'VH': -19.29044597721542,
'VV/VH': 0.6525036195871579,
},
'std': {
'VV': 5.115911777546365,
'VH': 5.464428464912864,
'VV/VH': 30.75264076801808,
},
'min': {
'VV': -74.33214569091797,
'VH': -75.11137390136719,
'R': 3.21E-2
},
'max': {
'VV': 34.60696029663086,
'VH': 33.59768295288086,
'R': 1.08
}
}
}
#export
bands=['B02','B03', 'B04', 'B05','B06', 'B07', 'B11', 'B08','B8A', 'B12']
#export
means=[BAND_STATS['S2']['mean'][band]/10000 for band in bands]
stds=[BAND_STATS['S2']['std'][band]/10000 for band in bands]
#export
# Now we will create a pipe of transformations
from albumentations.pytorch import ToTensorV2
aug_pipe = A.Compose([A.ShiftScaleRotate(p=.5),
A.HorizontalFlip(),
A.Normalize(mean=means,std=stds,max_pixel_value=1.0),
ToTensorV2()]
)
val_pipe = A.Compose([
A.Normalize(mean=means,std=stds,max_pixel_value=1.0),
ToTensorV2()]
)
class TrainTransform(ItemTransform):
split_idx = 0
def __init__(self, aug,split=0):
self.aug = aug
# self.split_idx = split
def encodes(self, x):
aug = self.aug(image=x[0].numpy())
# print(torch.cat((aug['image0'],aug['image1']),axis=0).shape)
return aug['image'], x[1]
class ValTransform(ItemTransform):
split_idx = 1
def __init__(self, aug,split=0):
self.aug = aug
# self.split_idx = split
def encodes(self, x):
aug = self.aug(image=x[0].numpy())
# print(torch.cat((aug['image0'],aug['image1']),axis=0).shape)
return aug['image'], x[1]
# Create our class with this aug_pipe
aug = TrainTransform(aug_pipe)
aug2=ValTransform(val_pipe)
db = DataBlock(blocks=(TransformBlock(type_tfms=partial(MSTensorImage.create)), MultiCategoryBlock),
splitter=ColSplitter('Isval'),
get_x=ColReader('fname'),
get_y=ColReader('label', label_delim=' '),
item_tfms=[aug,aug2]
)
dls = db.dataloaders(source=df, bs=2, num_workers=0)
aa,bb=first(dls.train)
aa.min()
from nbdev.export import notebook2script
notebook2script(fname='./00_core.ipynb')
```
# Machine Learning and Statistics for Physicists
Material for a [UC Irvine](https://uci.edu/) course offered by the [Department of Physics and Astronomy](https://www.physics.uci.edu/).
Content is maintained on [github](https://github.com/dkirkby/MachineLearningStatistics) and distributed under a [BSD3 license](https://opensource.org/licenses/BSD-3-Clause).
[Table of contents](Contents.ipynb)
```
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
import pandas as pd
from sklearn import neighbors
```
## Markov Chain Monte Carlo
Markov-chain Monte Carlo (MCMC) is an algorithm to generate random samples from an un-normalized probability density. In other words, you want to sample from $P(\vec{z})$ but can only evaluate $f(\vec{z})$ where
$$
P(\vec{z}) = \frac{f(\vec{z})}{\int d\vec{z}\,f(\vec{z})} \; .
$$
Note that $0 \le P(\vec{z}) \le 1$ requires that $f(\vec{z}) \ge 0$ everywhere and that the integral has a non-zero finite value.
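As an aside, the core mechanism can be sketched in a few lines: a random-walk Metropolis sampler only ever evaluates the un-normalized $f$. This is a minimal illustration (the notebook itself uses `MCMC_sample`, which wraps emcee):

```python
import numpy as np

def metropolis(logf, z0, nsamples, step=0.5, seed=123):
    """Random-walk Metropolis: accept a proposal z' with probability
    min(1, f(z')/f(z)), so only the un-normalized f is ever needed."""
    rng = np.random.RandomState(seed)
    z, logfz = z0, logf(z0)
    samples = np.empty(nsamples)
    for i in range(nsamples):
        zp = z + step * rng.normal()        # symmetric proposal
        logfp = logf(zp)
        if np.log(rng.uniform()) < logfp - logfz:
            z, logfz = zp, logfp            # accept; otherwise keep old z
        samples[i] = z
    return samples

# Sample the f(z) = sqrt(1 - z^4) example used in this notebook:
logf = lambda z: 0.5 * np.log(1 - z ** 4) if abs(z) < 1 else -np.inf
samples = metropolis(logf, 0.0, 20000)
```

The step size here (0.5) is an illustrative choice; in practice it trades off acceptance rate against sample autocorrelation.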
### Examples
We will start with some simple motivating examples before diving into the Bayesian applications and the theory of Markov chains.
The function
$$
f(z) = \begin{cases}
\sqrt{1 - z^4} & |z| < 1 \\
0 & |z| \ge 1
\end{cases}
$$
is never negative and has a finite integral:
```
def plotf(zlim=1.2):
z = np.linspace(-zlim, +zlim, 250)
plt.plot(z, np.sqrt(np.maximum(0, 1 - z ** 4)))
plt.xlim(-zlim, +zlim)
plotf()
```
However, the normalization integral cannot be evaluated analytically (it is related to the [complete elliptic integral of the first kind](https://en.wikipedia.org/wiki/Elliptic_integral#Complete_elliptic_integral_of_the_first_kind)), so this is a good candidate for MCMC sampling using the MLS `MCMC_sample` function (which wraps [emcee](http://dfm.io/emcee/)):
```
from mls import MCMC_sample
def logf(z):
return 0.5 * np.log(1 - z ** 4) if np.abs(z) < 1 else -np.inf
gen = np.random.RandomState(seed=123)
samples = MCMC_sample(logf, z=[0], nsamples=20000, random_state=gen)
```
The notation `z=[0]` identifies `z` as the parameter we want to sample (starting at the value 0). The result is a Pandas DataFrame of generated samples:
```
samples[:5]
```
The generated samples are (approximately) drawn from the normalized $P(z)$ corresponding to the $f(z)$ provided:
```
plt.hist(samples['z'], range=(-1,1), bins=25);
```
<span style="color:limegreen">What are MCMC samples good for?</span> They allow us to estimate the expectation value of an arbitrary $g(z)$ using [importance sampling](https://en.wikipedia.org/wiki/Importance_sampling):
$$
\langle g(\vec{z})\rangle_P \equiv \int d\vec{z}\, g(\vec{z})\, P(\vec{z})
\simeq \frac{1}{N} \sum_{i=1}^N g(\vec{z}_i) \; ,
$$
where $\vec{z}_1, \vec{z}_2, \ldots$ are the MCMC samples.
For example, to estimate the expectation value of $g(z) = z^2$ (aka the variance, since the mean is zero) with the samples generated above:
```
np.mean(samples['z'] ** 2)
```
Expectation values of more complex functions are equally easy, for example, $g(z) = \sin(\pi z)^2$,
```
np.mean(np.sin(np.pi * samples['z']) ** 2)
```
Recall that the reason we are using MCMC is because <span style="color:limegreen">we do not know the value of the normalization constant</span>:
$$
\int d\vec{z}\,f(\vec{z}) \; .
$$
However, we can use MCMC samples to estimate its value as follows:
- First, build an empirical estimate of the normalized probability density $P(\vec{z})$ using any density estimation method.
- Second, compare this density estimate (which is noisy, but normalized by construction) with the original un-normalized $f(\vec{z})$: they should have the same shape and their ratio is the unknown normalization constant.
For example, use KDE to estimate the density of our generated samples:
```
fit = neighbors.KernelDensity(kernel='gaussian', bandwidth=0.01).fit(samples)
```
Now take the ratio of the (normalized and noisy) KDE density estimate and the (un-normalized and smooth) $f(z)$ on a grid of $z$ values:
```
def plotfit(zlim=1.2, Pmin=0.1):
z = np.linspace(-zlim, +zlim, 250)
f = np.sqrt(np.maximum(0, 1 - z ** 4))
P = np.exp(fit.score_samples(z.reshape(-1, 1)))
plt.plot(z, f, label='$f(z)$')
plt.fill_between(z, P, alpha=0.5, label='$P(z)$')
ratio = f / P
sel = P > Pmin
plt.plot(z[sel], ratio[sel], '.', label='$f(z)/P(z)$')
mean_ratio = np.mean(ratio[sel])
print('mean f(z)/P(z) = {:.3f}'.format(mean_ratio))
plt.axhline(mean_ratio, ls='--', c='k')
plt.xlim(-zlim, +zlim)
plt.legend(loc='upper left', ncol=3)
plotfit()
```
The estimated $P(z)$ does not look great, but <span style="color:limegreen">the mean</span> of $f(z) / P(z)$ estimates the normalization constant. In practice, we restrict this mean to $z$ values where $P(z)$ is above some minimum to avoid regions where the empirical density estimate is poorly determined.
In the example above, the true value of the integral rounds to 1.748 so our numerical accuracy is roughly 1%.
Note that <span style="color:limegreen">we cannot simply use $g(z) = 1$</span> in the importance sampled integral above to estimate the normalization constant since it gives exactly one! The unknown constant is the integral of $f(z)$, not $P(z)$.
Next, we try a multidimensional example:
$$
f(\vec{z}, \vec{z}_0, r) =
\begin{cases}
\exp\left(-|\vec{z} - \vec{z}_0|^2/2\right) & |\vec{z}| < r \\
0 & |\vec{z}| \ge r
\end{cases}
$$
This function describes an un-normalized Gaussian PDF centered at $\vec{z}_0$ and clipped outside $|\vec{z}| < r$. The normalization integral has no analytic solution except in the limits $\vec{z}_0\rightarrow 0$ or $r\rightarrow\infty$.
To generate MCMC samples in 2D:
```
def logf(x, y, x0, y0, r):
z = np.array([x, y])
z0 = np.array([x0, y0])
return -0.5 * np.sum((z - z0) ** 2) if np.sum(z ** 2) < r ** 2 else -np.inf
```
The variables to sample are assigned initial values in square brackets and all other arguments are treated as fixed hyperparameters:
```
samples = MCMC_sample(logf, x=[0], y=[0], x0=1, y0=-2, r=3, nsamples=10000)
```
The generated samples now have two columns:
```
samples[:5]
```
A scatter plot shows a 2D Gaussian distribution clipped to a circle and offset from its center, as expected:
```
plt.scatter(samples['x'], samples['y'], s=10)
plt.scatter(1, -2, marker='+', s=500, lw=5, c='white')
plt.gca().add_artist(plt.Circle((0, 0), 3, lw=4, ec='red', fc='none'))
plt.gca().set_aspect(1)
```
With multidimensional samples, we can estimate expectation values of <span style="color:limegreen">marginal PDFs</span> just as easily as the full joint PDF. In our 2D example, the marginal PDFs are:
$$
P_X(x) = \int dy\, P(x, y) \quad , \quad P_Y(y) = \int dx\, P(x, y) \; .
$$
For example, the expectation value of $g(x)$ with respect to $P_X$ is:
$$
\langle g\rangle \equiv \int dx\, g(x) P_X(x) = \int dx\, g(x) \int dy\, P(x, y) = \int dx dy\, g(x)\, P(x,y) \; .
$$
In other words, the <span style="color:limegreen">expectation value with respect to a marginal PDF</span> is equal to the <span style="color:limegreen">expectation with respect to the full joint PDF</span>.
For example, the expectation value of $g(x) = x$ (aka the mean) with respect to $P_X(x)$ is:
```
np.mean(samples['x'])
```
We can also estimate the density of a marginal PDF by simply dropping the columns that are integrated out before plugging into a density estimator. For example:
```
fitX = neighbors.KernelDensity(kernel='gaussian', bandwidth=0.1).fit(samples.drop(columns='y'))
fitY = neighbors.KernelDensity(kernel='gaussian', bandwidth=0.1).fit(samples.drop(columns='x'))
def plotfitXY(r=3):
xy = np.linspace(-r, +r, 250)
Px = np.exp(fitX.score_samples(xy.reshape(-1, 1)))
Py = np.exp(fitY.score_samples(xy.reshape(-1, 1)))
plt.plot(xy, Px, label='$P_X(x)$')
plt.plot(xy, Py, label='$P_Y(y)$')
plt.legend()
plotfitXY()
```
### Bayesian Inference with MCMC
We introduced MCMC above as a general purpose algorithm for sampling any un-normalized PDF, without any reference to Bayesian (or frequentist) statistics. We also never specified whether $\vec{z}$ was something observed (data) or latent (parameters and hyperparameters), because it doesn't matter to MCMC.
However, MCMC is an excellent tool for performing numerical inferences using the generalized Bayes' rule we met earlier:
$$
P(\Theta_M\mid D, M) = \frac{\color{orange}{P(D\mid \Theta_M, M)}\,\color{purple}{P(\Theta_M\mid M)}}{P(D\mid M)}
$$
- <span style="color:orange">Likelihood</span>
- <span style="color:purple">Prior</span>
In particular, the normalizing denominator (aka the "evidence"):
$$
P(D\mid M) = \int d\Theta_M' P(D\mid \Theta_M', M)\, P(\Theta_M'\mid M)
$$
is often not practical to calculate, so we can only calculate the un-normalized numerator
$$
P(D\mid \Theta_M, M)\,P(\Theta_M\mid M) \; ,
$$
which combines the *likelihood of the data* and the *prior probability of the model*.
If we treat the observed data $D$ and hyperparameters $M$ as fixed, then the appropriate function to plug into an MCMC is:
$$
\log f(\Theta) = \log P(D\mid \Theta_M, M) + \log P(\Theta_M\mid M) \; .
$$
The machinery described above then enables us to generate samples $\Theta_1, \Theta_2, \ldots$ drawn from the *posterior* distribution, and therefore make interesting statements about probabilities involving model parameters.
The likelihood function depends on the data and model, so could be anything, but we often assume Gaussian errors in the data, which leads to the multivariate Gaussian PDF we met earlier ($d$ is the number of data features):
$$
P(\vec{x}\mid \Theta_M, M) =
\left(2\pi\right)^{-d/2}\,\left| C\right|^{-1/2}\,
\exp\left[ -\frac{1}{2} \left(\vec{x} - \vec{\mu}\right)^T C^{-1} \left(\vec{x} - \vec{\mu}\right) \right]
$$
In the most general case, $\vec{\mu}$ and $C$ are functions of everything: the data $D$, the parameters $\Theta_M$ and the hyperparameters $M$.
When we have $N$ independent observations, $\vec{x}_1, \vec{x}_2, \ldots$, their combined likelihood is the product of each sample's likelihood:
$$
P(\vec{x}_1, \vec{x}_2, \ldots\mid \Theta_M, M) = \prod_{i=1}^N\, P(\vec{x}_i\mid \Theta_M, M)
$$
As an example, consider fitting a straight line $y = m x + b$, with parameters $m$ and $b$, to data with two features $x$ and $y$. The relevant log-likelihood function is:
$$
\log{\cal L}(m, b; D) = -\frac{N}{2}\log(2\pi\sigma_y^2)
-\frac{1}{2\sigma_y^2} \sum_{i=1}^N\, (y_i - m x_i - b)^2 \; ,
$$
where the error in $y$, $\sigma_y$, is a fixed hyperparameter. Note that the first term is the Gaussian PDF normalization factor.
First generate some data on a straight line with measurement errors in $y$ (so our assumed model is correct):
```
gen = np.random.RandomState(seed=123)
N, m_true, b_true, sigy_true = 10, 0.5, -0.2, 0.1
x_data = gen.uniform(-1, +1, size=N)
y_data = m_true * x_data + b_true + gen.normal(scale=sigy_true, size=N)
plt.errorbar(x_data, y_data, sigy_true, fmt='o', markersize=5)
plt.plot([-1, +1], [-m_true+b_true,+m_true+b_true], 'r:')
plt.xlabel('x'); plt.ylabel('y');
```
Next, define the log-likelihood function:
```
def loglike(x, y, m, b, sigy):
N = len(x)
norm = 0.5 * N * np.log(2 * np.pi * sigy ** 2)
return -0.5 * np.sum((y - m * x - b) ** 2) / sigy ** 2 - norm
```
Finally, <span style="color:limegreen">generate some MCMC samples of the posterior $P(m, b\mid D, M)$</span> assuming uniform priors $P(b,m\mid \sigma_y) = 1$:
```
samples = MCMC_sample(loglike, m=[m_true], b=[b_true],
x=x_data, y=y_data, sigy=sigy_true, nsamples=10000, random_state=gen)
sns.jointplot('m', 'b', samples, xlim=(0.2,0.8), ylim=(-0.3,0.0), stat_func=None);
samples.describe(percentiles=[])
```
**EXERCISE:** We always require a starting point to generate MCMC samples. In this example, we used the true parameter values as starting points:
```
m=[m_true], b=[b_true]
```
What happens if you choose different starting points? Try changing the starting values by $\pm 0.1$ and see how this affects the resulting means and standard deviations for $m$ and $b$.
```
samples = MCMC_sample(loglike, m=[m_true+0.1], b=[b_true+0.1],
x=x_data, y=y_data, sigy=sigy_true, nsamples=10000, random_state=gen)
samples.describe(percentiles=[])
samples = MCMC_sample(loglike, m=[m_true-0.1], b=[b_true-0.1],
x=x_data, y=y_data, sigy=sigy_true, nsamples=10000, random_state=gen)
samples.describe(percentiles=[])
```
The changes are small compared with the offsets ($\pm 0.1$) and the standard deviations in each parameter.
```
# Add your solution here...
```
The `MCMC_sample` function can apply independent (i.e., factorized) priors on each parameter:
$$
P(\Theta\mid M) = \prod_j P(\theta_j\mid M)
$$
Define the two most commonly used independent priors:
```
def TopHat(lo, hi):
"""Return un-normalized log(prior) for x in [lo,hi]"""
return lambda x: 0 if (lo <= x <= hi) else -np.inf
def Gauss(mu, sigma):
"""Return un-normalized log(prior) for x ~ N(mu,sigma)"""
return lambda x: -0.5 * ((x - mu) / sigma) ** 2
```
To apply a prior, we replace `z=[value]` with `z=[value,logprior]`. For example, suppose we believe that $0.4 \le m \le 0.7$:
```
samples = MCMC_sample(loglike, m=[m_true,TopHat(0.4,0.7)], b=[b_true],
x=x_data, y=y_data, sigy=sigy_true, nsamples=10000, random_state=gen)
sns.jointplot('m', 'b', samples, xlim=(0.2,0.8), ylim=(-0.3,0.0), stat_func=None);
```
We can also add a prior on $b$. For example, suppose a previous measurement found $b = -0.20 \pm 0.02$ (in which case, the new data is not adding much information about $b$):
```
samples = MCMC_sample(loglike, m=[m_true,TopHat(0.4,0.7)], b=[b_true,Gauss(-0.20,0.02)],
x=x_data, y=y_data, sigy=sigy_true, nsamples=10000, random_state=gen)
sns.jointplot('m', 'b', samples, xlim=(0.2,0.8), ylim=(-0.3,0.0), stat_func=None);
```
**EXERCISE:** Suppose we know that all $y_i$ values have the same error $\sigma_y$ but we do not know its value.
- Generate samples of $(m, b, \sigma_y)$ using `m=[m_true], b=[b_true], sigy=[sigy_true]`.
- Look at the samples with an `sns.pairplot`.
- Which panel shows the marginalized posterior $P(\sigma_y\mid D)$? Do you understand its peculiar shape?
- Add a prior on $\sigma_y$ to fix this peculiar shape.
```
gen = np.random.RandomState(seed=123)
samples = MCMC_sample(loglike, m=[m_true], b=[b_true], sigy=[sigy_true],
x=x_data, y=y_data, nsamples=10000, random_state=gen)
sns.pairplot(samples);
samples = MCMC_sample(loglike, m=[m_true], b=[b_true], sigy=[sigy_true, TopHat(0.01,1)],
x=x_data, y=y_data, nsamples=10000, random_state=gen)
sns.pairplot(samples);
# Add your solution here...
```
For a more in-depth case study of the many subtleties in fitting a straight line, read this 55-page [article by Hogg, Bovy and Lang](https://arxiv.org/abs/1008.4686).
# Offline analysis of a [mindaffectBCI](https://github.com/mindaffect) savefile
So you have successfully run a BCI experiment and want to have a closer look at the data, and try different analysis settings?
Or you have a BCI experiment file from the internet, e.g. MOABB, and want to try it with the mindaffectBCI analysis decoder?
Then you want to do an off-line analysis of this data!
This notebook shows how to do such a quick post-hoc analysis of a previously saved dataset. By the end of this tutorial you will be able to:
* Load a mindaffectBCI savefile
* generate summary plots which show: the per-channel grand average spectrum, the data-summary statistics, per-trial decoding results, the raw stimulus-response ERPs, the model as trained by the decoder, and the per-trial BCI performance plots
* understand how to use these plots to identify problems in the data (such as artifacts or excessive line-noise) or in the BCI operation
* understand how to change analysis parameters and the used classifier to develop improved decoders
```
import numpy as np
from mindaffectBCI.decoder.analyse_datasets import debug_test_dataset
from mindaffectBCI.decoder.offline.load_mindaffectBCI import load_mindaffectBCI
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
plt.rcParams['figure.figsize'] = [12, 8] # bigger default figures
```
## Specify the save file you wish to analyse.
You can either specify:
* the full file name to load, e.g. '~/Downloads/mindaffectBCI_200901_1154.txt'
* a wildcard filename, e.g. '~/Downloads/mindaffectBCI*.txt', in which case the **most recent** matching file will be loaded.
* `None`, or '-', in which case the most recent file from the default `logs` directory will be loaded.
```
# select the file to load
#savefile = '~/../../logs/mindaffectBCI_200901_1154_ssvep.txt'
savefile = None # use the most recent file in the logs directory
savefile = 'mindaffectBCI_exampledata.txt'
```
## Load the *RAW*data
Load, with minimal pre-processing to see what the raw signals look like. Note: we turn off the default filtering and downsampling with `stopband=None, fs_out=None` to get a true raw dataset.
It will then plot the grand average spectrum of this raw data. This plot shows for each EEG channel the signal power across different signal frequencies. This is useful to check for artifacts (seen as peaks in the spectrum at specific frequencies, such as 50Hz) or bad channels (seen as channels with excessively high or low power in general).
During loading the system will print some summary information about the loaded data and preprocessing applied. Including:
* The filter and downsampling applied
* The number of trials in the data and their durations
* The trial data-slice used, measured relative to the trial start event
* The EEG and STIMULUS meta-information, in terms of the array shape, e.g. (13,575,4) and the axis labels, e.g. (trials, time, channels) respectively.
```
X, Y, coords = load_mindaffectBCI(savefile, stopband=None, fs_out=None)
# output is: X=eeg, Y=stimulus, coords=meta-info about dimensions of X and Y
print("EEG: X({}){} @{}Hz".format([c['name'] for c in coords],X.shape,coords[1]['fs']))
print("STIMULUS: Y({}){}".format([c['name'] for c in coords[:-1]]+['output'],Y.shape))
# Plot the grand average spectrum to get idea of the signal quality
from mindaffectBCI.decoder.preprocess import plot_grand_average_spectrum
plot_grand_average_spectrum(X, fs=coords[1]['fs'], ch_names=coords[-1]['coords'], log=True)
```
## Reload the data, with standard preprocessing.
This time, we want to analyse the loaded data for the BCI signal. Whilst we could do this after loading, to keep the analysis as similar as possible to the on-line system (where the decoder only sees pre-processed data), we will reload and apply the pre-processing directly. This also has the benefit of making the loaded data smaller.
To reproduce the pre-processing done in the on-line BCI we will set the pre-processing to:
* temporally filter the data to the BCI relevant range. Temporal filtering is a standard technique to remove signal frequencies which we know only contain noise. For the noise-tag brain response we know it is mainly in the frequency range from 3 to about 25 Hz. Thus, we specify a bandpass filter to only retain these frequencies with:
`stopband=(3,25,'bandpass')`
* The original EEG is sampled 250 times per second. However, the BCI relevant signal changes at most 25 times per second, so the EEG is sampled much more rapidly than needed -- and processing it takes unneeded computational resources. Thus, we downsample the data to save some computation. To avoid signal artifacts, as a general 'rule of thumb' you should downsample to about 3 times your maximum signal frequency. In this case we use an output sample rate of 4 times the maximum, i.e. 100 Hz, with:
`fs_out=100`
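The band-pass plus downsampling described above can be sketched outside the loader with scipy; the filter order and design here are illustrative assumptions, not the loader's exact settings:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

def preprocess(eeg, fs=250.0, band=(3, 25), up=2, down=5):
    """Zero-phase 3-25 Hz band-pass, then polyphase resampling
    from 250 Hz to 250 * up/down = 100 Hz."""
    sos = butter(4, band, btype='bandpass', fs=fs, output='sos')
    filtered = sosfiltfilt(sos, eeg, axis=0)          # time runs along axis 0
    return resample_poly(filtered, up, down, axis=0)  # 250 * 2/5 = 100 Hz

eeg = np.random.randn(1000, 4)   # 4 s of fake 4-channel EEG at 250 Hz
out = preprocess(eeg)
print(out.shape)                 # (400, 4)
```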
```
X, Y, coords = load_mindaffectBCI(savefile, stopband=(3,25,'bandpass'), fs_out=100)
# output is: X=eeg, Y=stimulus, coords=meta-info about dimensions of X and Y
print("EEG: X({}){} @{}Hz".format([c['name'] for c in coords],X.shape,coords[1]['fs']))
print("STIMULUS: Y({}){}".format([c['name'] for c in coords[:-1]]+['output'],Y.shape))
```
## Analyse the data
The following code runs the standard initial analysis and data-set visualization, in one go with some standard analysis parameters:
* tau_ms : the length of the modelled stimulus response (in milliseconds)
* evtlabs : the type of brain features to transform the stimulus information into prior to fitting the model, in this case
* 're' -> rising edge
* 'fe' -> falling edge
see `stim2event.py` for more information on possible transformations
* rank : the rank of the CCA model to fit
* model : the type of model to fit. 'cca' corresponds to the Canonical Correlation Analysis model.
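As a hypothetical illustration of the 're'/'fe' transformation (the real implementation lives in `stim2event.py`): a binary stimulus sequence becomes two event channels marking its rising and falling edges.

```python
import numpy as np

# Binary stimulus sequence -> rising-edge and falling-edge event channels
stim = np.array([0, 0, 1, 1, 0, 1, 0, 0])
d = np.diff(stim, prepend=stim[0])
re = (d > 0).astype(int)   # rising edges:  [0, 0, 1, 0, 0, 1, 0, 0]
fe = (d < 0).astype(int)   # falling edges: [0, 0, 0, 0, 1, 0, 1, 0]
print(re, fe)
```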
This generates many visualizations. The most important are:
1. **Summary Statistics**: Summary statistics for the data.
This has vertically 3 sub-parts:
* row 1: Cxx : the spatial cross-correlation of the EEG channels
* row 2: Cxy : the cross-correlation of the stimulus with the EEG, which for the discrete stimuli used in this BCI is essentially another view of the ERP
* row 3: Cyy : the auto-cross-covariance of the stimulus features with the other (time-delayed) stimulus features
<img src='images/SummaryStatistics.png' width=200>
2. **Grand Average Spectrum** : This shows for each data channel the power over different signal frequencies. This is useful to identify artifacts in the data, which tend to show up as peaks in the spectrum at different frequencies, e.g. high power below 3Hz indicate movement artifacts, high power at 50/60hz indicates excessive line-noise interference.
<img src='images/GrandAverageSpectrum.png' width=200>
3. **ERP** : This plot shows for each EEG channel the averaged measured response over time after the triggering stimulus. This is the conventional plot that you find in many neuroscientific publications.
<img src='images/ERP.png' width=200>
4. **Decoding Curve** + **Yerr** : The decoder accumulates information during a trial to make its predictions better. This pair of plots shows this information as a 'decoding curve', which shows two important things:
a) **Yerr** : the **true** error-rate of the system's predictions, with increasing trial time.
b) **Perr** : the system's own **estimate** of its prediction error. This estimate is used by the system to identify when it is confident enough to make a selection and stop a trial early. Thus, it should ideally be as accurate as possible: near 1 when Yerr is 1 and near 0 when Yerr is 0. In the DecodingCurve plot Perr is shown by colored dots, with red being Yerr=1 and green being Yerr=0. Thus, if the error estimates are good you should see red dots at the top left (wrong with short trials) and green dots at the bottom right (right with more data).
<img src='images/DecodingCurve.png' width=200> <img src='images/Ycorrect.png' width=200>
5. **Trial Summary** : This plot gives a direct trial-by-trial view of the input data and the BCI performance, with each trial plotted individually, running from left to right, top to bottom.
<img src='images/TrialSummary.png' width=400>
Zooming in on a single trial, we see that vertically it has 5 sub-parts:
a) **X** : the pre-processed input EEG data, with time horizontally and channels as different colored lines vertically.
b) **Y** : the raw stimulus information, with time horizontally and outputs vertically.
c) **Fe** : the system's predicted score for each type of stimulus-event (e.g. 're', 'fe'), generated by applying the model to the raw EEG.
d) **Fy** : the system's _accumulated_ predicted score for each output, generated by combining the predicted stimulus scores with the stimulus information. Here the **true** output is in black with the other outputs in grey. Thus, if the system is working correctly, the true output has the highest score and will be the highest line.
e) **Py** : the system's **estimated** target probability for each output, generated by softmaxing the Fy scores. Again, the true target is in black with the others in grey. So if the system is working well, the black line is near 0 while the prediction is incorrect, and then jumps to 1 when it is correct.
<img src='images/TrialSummary_single.png' width=200>
6. **Model** : plot of the fitted model, in two sub-plots with: a) the fitted model's spatial filter, which shows the importance of each EEG channel; b) the model's impulse response, which shows how the brain responds over time to the different types of stimulus event.
<img src='images/ForwardModel.png' width=200>
```
clsfr=debug_test_dataset(X, Y, coords,
model='cca', evtlabs=('re','fe'), rank=1, tau_ms=450)
```
## Alternative Analysis
The basic analysis system has many parameters you can tweak to test different analysis methods. The following code runs the standard initial analysis and data-set visualization, in one go with some standard analysis parameters:
* tau_ms : the length of the modelled stimulus response (in milliseconds)
* evtlabs : the type of brain features to transform the stimulus information into prior to fitting the model, in this case
* 're' -> rising edge
* 'fe' -> falling edge (see `stim2event.py` for more information on possible transformations)
* rank : the rank of the CCA model to fit
* model : the type of model to fit. 'cca' corresponds to the Canonical Correlation Analysis model.
other options include:
* 'ridge' = ridge-regression,
* 'fwd' = Forward Modelling,
* 'bwd' = Backward Modelling,
* 'lr' = logistic-regression,
* 'svc' = support vector machine
See the help for `mindaffectBCI.decoder.model_fitting.BaseSequence2Sequence` or `mindaffectBCI.decoder.analyse_datasets.analyse_dataset` for more details on the other options.
Here we use a Logistic Regression classifier to classify single stimulus-responses into rising-edge (re) or falling-edge (fe) responses.
Note: we also include some additional pre-processing in this case, which consists of:
* **whiten** : this will do a spatial whitening, so that the data input to the classifier is **spatially** decorrelated. This happens automatically with the CCA classifier, and has been found useful to suppress artifacts in the data.
* **whiten_spectrum** : this will approximately decorrelate different frequencies in the data. In effect this flattens the peaks and troughs in the data frequency spectrum. This pre-processing has also been found useful to suppress artifacts in the data.
Further, as this is now a classification problem, we set `ignore_unlabelled=True`. This means that samples which are not either rising edges or falling edges will not be given to the classifier -- so in the end we train a simple binary classifier.
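The spatial whitening step can be sketched as follows; this is an assumption about its spirit (multiply by the inverse matrix square root of the channel covariance, with a small ridge for stability), not mindaffectBCI's exact implementation:

```python
import numpy as np

def spatial_whiten(X, ridge=0.01):
    """Decorrelate channels: X (samples, channels) -> X @ C^{-1/2},
    where C is the ridge-regularized channel covariance."""
    C = np.cov(X, rowvar=False)
    C = C + ridge * np.trace(C) / C.shape[0] * np.eye(C.shape[0])
    evals, evecs = np.linalg.eigh(C)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T   # C^{-1/2}
    return X @ W

rng = np.random.RandomState(0)
mix = np.eye(4) + 0.1 * rng.randn(4, 4)   # mildly correlated fake channels
X = rng.randn(5000, 4) @ mix
Xw = spatial_whiten(X)
print(np.round(np.cov(Xw, rowvar=False), 2))   # approximately the identity
```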
```
# test different inner classifier. Here we use a Logistic Regression classifier to classify single stimulus-responses into rising-edge (re) or falling-edge (fe) responses.
debug_test_dataset(X, Y, coords,
preprocess_args=dict(badChannelThresh=3, badTrialThresh=None, whiten=.01, whiten_spectrum=.1),
model='lr', evtlabs=('re', 'fe'), tau_ms=450, ignore_unlabelled=True)
```
```
import numpy as np
from scipy.integrate import odeint
from TricubicInterpolation import TriCubic
class Fermat(object):
def __init__(self,neTCI=None,frequency = 120e6,type='s',straightLineApprox=True):
'''Fermat principle. type = "s" means arc length is the independent variable;
type = "z" means the z coordinate is the independent variable.'''
self.type = type
self.frequency = frequency#Hz
self.straightLineApprox = straightLineApprox
if neTCI is not None:
self.ne2n(neTCI)
return
def loadFunc(self,file):
'''Load the model given in `file`'''
data = np.load(file)
if 'ne' in data.keys():
ne = data['ne']
xvec = data['xvec']
yvec = data['yvec']
zvec = data['zvec']
self.ne2n(TriCubic(xvec,yvec,zvec,ne,useCache=True))
return
if 'n' in data.keys():
n = data['n']
xvec = data['xvec']
yvec = data['yvec']
zvec = data['zvec']
self.n2ne(TriCubic(xvec,yvec,zvec,n,useCache=True))
return
def saveFunc(self,file):
np.savez(file,xvec=self.nTCI.xvec,yvec=self.nTCI.yvec,zvec=self.nTCI.zvec,n=self.nTCI.m,ne=self.neTCI.m)
def ne2n(self,neTCI):
'''Analytically turn electron density to refractive index. Assume ne in m^-3'''
self.neTCI = neTCI
#copy object
self.nTCI = neTCI.copy(default=1.)
#inplace change to refractive index
self.nTCI.m *= -8.980**2/self.frequency**2
self.nTCI.m += 1.
self.nTCI.m = np.sqrt(self.nTCI.m)
#wp = 5.63e4*np.sqrt(ne/1e6)/2pi#Hz^2 m^3 lightman p 226
return self.nTCI
def n2ne(self,nTCI):
"""Get electron density in m^-3 from refractive index"""
self.nTCI = nTCI
#convert to
self.neTCI = nTCI.copy()
self.neTCI.m *= -self.neTCI.m
self.neTCI.m += 1.
self.neTCI.m *= self.frequency**2/8.980**2
#wp = 5.63e4*np.sqrt(ne/1e6)/2pi#Hz^2 m^3 lightman p 226
return self.neTCI
def eulerODE(self,y,t,*args):
'''return pxdot,pydot,pzdot,xdot,ydot,zdot,sdot'''
#print(y)
px,py,pz,x,y,z,s = y
if self.straightLineApprox:
n,nx,ny,nz = 1.,0,0,0
else:
n,nx,ny,nz,nxy,nxz,nyz,nxyz = self.nTCI.interp(x,y,z,doDiff=True)
#from ne
#ne,nex,ney,nez,nexy,nexz,neyz,nexyz = self.neTCI.interp(x,y,z,doDiff=True)
#A = - 8.98**2/self.frequency**2
#n = math.sqrt(1. + A*ne)
#ndot = A/(2.*n)
#nx = ndot * nex
#ny = ndot * ney
#nz = ndot * nez
if self.type == 'z':
sdot = n / pz
pxdot = nx*n/pz
pydot = ny*n/pz
pzdot = nz*n/pz
xdot = px / pz
ydot = py / pz
zdot = 1.
if self.type == 's':
sdot = 1.
pxdot = nx
pydot = ny
pzdot = nz
xdot = px / n
ydot = py / n
zdot = pz / n
return [pxdot,pydot,pzdot,xdot,ydot,zdot,sdot]
def jacODE(self,y,t,*args):
'''return d ydot / d y, with derivatives down columns for speed'''
px,py,pz,x,y,z,s = y
if self.straightLineApprox:
n,nx,ny,nz,nxy,nxz,nyz = 1.,0,0,0,0,0,0
else:
n,nx,ny,nz,nxy,nxz,nyz,nxyz = self.nTCI.interp(x,y,z,doDiff=True)
#TCI only guarantees C1; C2 information is lost, but it is second order anyway
nxx,nyy,nzz = 0.,0.,0.
#from electron density
#ne,nex,ney,nez,nexy,nexz,neyz,nexyz = self.neTCI.interp(x,y,z,doDiff=True)
#A = - 8.98**2/self.frequency**2
#n = math.sqrt(1. + A*ne)
#ndot = A/(2.*n)
#nx = ndot * nex
#ny = ndot * ney
#nz = ndot * nez
#ndotdot = -(A * ndot)/(2. * n**2)
#nxy = ndotdot * nex*ney + ndot * nexy
#nxz = ndotdot * nex * nez + ndot * nexz
#nyz = ndotdot * ney * nez + ndot * neyz
if self.type == 'z':
x0 = n
x1 = nx
x2 = pz**(-2)
x3 = x0*x2
x4 = 1./pz
x5 = ny
x6 = x4*(x0*nxy + x1*x5)
x7 = nz
x8 = x4*(x0*nxz + x1*x7)
x9 = x4*(x0*nyz + x5*x7)
jac = np.array([[ 0, 0, -x1*x3, x4*(x0*nxx + x1**2),x6, x8, 0.],
[ 0, 0, -x3*x5,x6, x4*(x0*nyy + x5**2), x9, 0.],
[ 0, 0, -x3*x7,x8, x9, x4*(x0*nzz + x7**2), 0.],
[x4, 0, -px*x2, 0, 0, 0, 0.],
[ 0, x4, -py*x2, 0, 0, 0, 0.],
[ 0, 0, 0, 0, 0, 0, 0.],
[ 0, 0,-x3,x1*x4, x4*x5, x4*x7, 0.]])
if self.type == 's':
x0 = n
x1 = nxy
x2 = nxz
x3 = nyz
x4 = 1./x0
x5 = nx
x6 = x0**(-2)
x7 = px*x6
x8 = ny
x9 = nz
x10 = py*x6
x11 = pz*x6
jac = np.array([[ 0, 0, 0, nxx, x1, x2, 0.],
[ 0, 0, 0, x1, nyy, x3, 0.],
[ 0, 0, 0, x2, x3, nzz, 0.],
[x4, 0, 0, -x5*x7, -x7*x8, -x7*x9, 0.],
[ 0, x4, 0, -x10*x5, -x10*x8, -x10*x9, 0.],
[ 0, 0, x4, -x11*x5, -x11*x8, -x11*x9, 0.],
[ 0, 0, 0, 0, 0, 0, 0.]])
return jac
def integrateRay(self,origin,direction,tmax,N=100):
'''Integrate ray defined by the ``origin`` and ``direction`` along the independent variable (s or z)
until tmax.
``N`` - the number of partitions along the ray to save ray trajectory.'''
x0,y0,z0 = origin
xdot0,ydot0,zdot0 = direction
sdot = np.sqrt(xdot0**2 + ydot0**2 + zdot0**2)
#momentum
px0 = xdot0/sdot
py0 = ydot0/sdot
pz0 = zdot0/sdot
#px,py,pz,x,y,z,s
init = [px0,py0,pz0,x0,y0,z0,0]
if self.type == 'z':
tarray = np.linspace(z0,tmax,N)
if self.type == 's':
tarray = np.linspace(0,tmax,N)
Y,info = odeint(self.eulerODE, init, tarray,Dfun = self.jacODE, col_deriv = True, full_output=1)
#print(info['hu'].shape,np.sum(info['hu']),info['hu'])
#print(Y)
x = Y[:,3]
y = Y[:,4]
z = Y[:,5]
s = Y[:,6]
return x,y,z,s
```
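The integration above relies on scipy's `odeint` with an analytic Jacobian supplied through `Dfun` and `col_deriv=True`. A minimal sketch of the same calling pattern on a one-dimensional test problem (exponential decay — a hypothetical stand-in for the ray equations, not the ray system itself):

```python
import numpy as np
from scipy.integrate import odeint

K = 0.5  # decay constant (illustrative)

def rhs(y, t):
    # dy/dt = -K * y
    return [-K * y[0]]

def jac(y, t):
    # d(ydot)/dy; with col_deriv=True, derivatives run down columns
    return [[-K]]

t = np.linspace(0.0, 4.0, 50)
sol, info = odeint(rhs, [1.0], t, Dfun=jac, col_deriv=True, full_output=1)

# compare against the analytic solution exp(-K * t)
err = np.max(np.abs(sol[:, 0] - np.exp(-K * t)))
```

Supplying the Jacobian lets the LSODA stepper avoid finite-difference approximations, which is the same reason `jacODE` exists in the class above.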
## Final Challenge
```
# warning-suppression imports
import sys
import warnings
import matplotlib.cbook
warnings.simplefilter("ignore")
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
warnings.filterwarnings("ignore", category=matplotlib.cbook.mplDeprecation)
# data-manipulation imports
import numpy as np
import pandas as pd
import scipy
import statsmodels.api as sm
import math
import itertools
# data-visualization imports
import matplotlib.pyplot as plt
import matplotlib as m
import matplotlib.dates as mdates
from matplotlib.ticker import MaxNLocator
import seaborn as sns
import plotly as py
import plotly.express as px
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
pd.options.display.max_columns = 2000
pd.options.display.max_rows = 2000
# function to create a distribution plot for each feature of the dataset
def plot_distribution(dataset, cols=5, width=20, height=25, hspace=0.4, wspace=0.5):
fig = plt.figure(figsize=(width, height))
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=wspace, hspace=hspace)
rows = math.ceil(float(dataset.shape[1]) / cols)
for i, column in enumerate(dataset.columns):
ax = fig.add_subplot(rows, cols, i + 1)
ax.set_title(column)
if dataset.dtypes[column] == object:
g = sns.countplot(y=column,
data=dataset,
order=dataset[column].value_counts().index[:10])
substrings = [s.get_text()[:20] for s in g.get_yticklabels()]
g.set(yticklabels=substrings)
plt.xticks(rotation=25)
else:
g = sns.distplot(dataset[column])
plt.xticks(rotation=25)
# function to compute the coefficient of determination (r²) between two variables
def rsquared(x, y):
slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(x, y)
return r_value**2
# load the dataset
df_cars = pd.read_csv('cars.csv')
# show the first 5 rows of the dataset
df_cars.head()
# show the dataset dimensions
df_cars.shape
# check the variable types and whether any null values exist
df_cars.info()
df_cars.dtypes.value_counts()
```
#### After using the pandas library to read the data, it is CORRECT to state about the values read that:
- No null values were found after reading the data.
```
display(df_cars.isna().sum())
display(df_cars.isnull().sum())
# distribution plot for each feature of the dataset
columns = ['mpg', 'cylinders', 'cubicinches', 'hp', 'weightlbs', 'time-to-60', 'year', 'brand']
plot_distribution(df_cars[columns], cols=3, width=30, height=20, hspace=0.45, wspace=0.5)
```
#### Convert the *“cubicinches”* and *“weightlbs”* columns from string to numeric using *pd.to_numeric()* with the parameter *errors='coerce'*. After this transformation, it is CORRECT to state:
- This transformation introduces null values into our dataset.
```
df_cars['cubicinches'] = pd.to_numeric(df_cars['cubicinches'], errors='coerce')
df_cars['weightlbs'] = pd.to_numeric(df_cars['weightlbs'], errors='coerce')
df_cars.isnull().sum()
```
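The effect of `errors='coerce'` can be seen on a toy series: any entry that cannot be parsed as a number becomes `NaN` instead of raising an error, which is exactly how the null values enter the dataset above.

```python
import pandas as pd

# toy series with one unparseable entry (illustrative values)
s = pd.Series(['350', '400', 'abc'])
out = pd.to_numeric(s, errors='coerce')  # 'abc' becomes NaN
n_null = out.isna().sum()
```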
#### Indicate which indices held the values that *“forced”* pandas to interpret the *“cubicinches”* variable as a string.
```
df_cars[df_cars['cubicinches'].isna()]
index_null = df_cars['cubicinches'].isna()
index_null[index_null.isin([True])].index
```
#### After converting the string variables to numeric values, how many null values (cells in the dataframe) now exist in the dataset?
```
df_cars.isna().sum().sum()
```
#### Replace the null values introduced by the transformation with the mean of each column. What is the new mean of the *“weightlbs”* column?
```
df_cars['cubicinches'].fillna(df_cars['cubicinches'].mean(), inplace=True)
df_cars['weightlbs'].fillna(df_cars['weightlbs'].mean(), inplace=True)
df_cars.describe()
df_cars['weightlbs'].mean()
# inspect the 'time-to-60' feature with a boxplot
sns.set_style("whitegrid")
sns.boxplot(y='time-to-60', data=df_cars)
sns.boxplot(x=df_cars['time-to-60'])
```
#### After replacing the null values with the column means, select the columns *“mpg”, “cylinders”, “cubicinches”, “hp”, “weightlbs”, “time-to-60”, “year”*.
#### What is the median of the *“mpg”* feature?
```
df_cars2 = df_cars[['mpg', 'cylinders', 'cubicinches', 'hp', 'weightlbs', 'time-to-60', 'year']]
df_cars2.head()
df_cars2['mpg'].median()
```
#### Which statement about the value 14.00 for the *“time-to-60”* variable is CORRECT?
- 75% of the data is greater than 14.00.
```
df_cars['time-to-60'].describe()
```
#### Regarding the Pearson correlation coefficient between the *“cylinders”* and *“mpg”* variables, every statement below is correct, EXCEPT:
- Even though it is not equal to 1, we can say that as *“cylinders”* increases, *“mpg”* decreases in the opposite direction.
- If the coefficient of determination between these two variables were calculated, its value would be approximately 0.6.
- When a Pearson correlation coefficient equals 1, the coefficient of determination also equals 1.
- **Even though it is not equal to 1, we can say that as *“cylinders”* increases, *“mpg”* also increases in the same direction.**
```
plt.figure(figsize=(10, 5))
matriz_de_correlação = df_cars[['cylinders','mpg']].corr()
sns.heatmap(matriz_de_correlação, annot=True, vmin=-1, vmax=1, center=0)
plt.show()
# plot 'cylinders' against 'mpg' and check for a linear correlation
plt.figure(figsize=(18, 8))
sns.regplot(x='cylinders', y='mpg', data=df_cars, color='b', x_jitter=0.2)
plt.xlabel('cylinders')
plt.ylabel('mpg')
plt.title('Relationship between "cylinders" and "mpg"', fontsize=20)
plt.show()
# compute the coefficient of determination (r²) between 'cylinders' and 'mpg'
rsquared(df_cars['cylinders'], df_cars['mpg'])
```
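The `rsquared` helper defined earlier squares the Pearson r returned by `scipy.stats.linregress`; on perfectly linear data both r and r² equal 1, and a negative slope flips the sign of r but not of r². A quick sketch on synthetic data:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0                       # perfectly linear, positive slope
r2 = stats.linregress(x, y).rvalue ** 2  # coefficient of determination

y_neg = -3.0 * x                        # perfectly linear, negative slope
r_neg = stats.linregress(x, y_neg).rvalue
```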
#### Regarding the boxplot of the *“hp”* variable, every statement below is correct, EXCEPT:
- From the boxplot, we can see that the median lies between 80 and 100.
- **There is greater dispersion in the second quartile than in the third.**
- No potential outliers were identified in the data.
- Each quartile contains the same number of values for the *“hp”* variable.
```
sns.boxplot(x=df_cars['hp'])
# inspect the 'hp' feature with a boxplot
sns.set_style("whitegrid")
sns.boxplot(y='hp', data=df_cars)
```
### Preprocessing
```
# data normalization
from sklearn.preprocessing import StandardScaler
normaliza = StandardScaler()
# select only the numeric columns to be scaled
num_cols = df_cars.columns[df_cars.dtypes.apply(lambda c: np.issubdtype(c, np.number))]
# create a copy of the original dataset (.copy() avoids SettingWithCopyWarning)
df_cars4 = df_cars[num_cols].copy()
# scale the data
df_cars4[num_cols] = normaliza.fit_transform(df_cars4[num_cols])
# show the first rows
df_cars4.head()
```
#### After standardizing with *StandardScaler()*, what is the largest value of the *“hp”* variable?
```
# check the largest value of the 'hp' feature
df_cars4['hp'].max()
```
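As a sanity check on what `StandardScaler` does: each standardized column ends up with mean 0 and unit variance, so the maximum reported above is expressed in standard deviations from the column mean. A sketch with illustrative horsepower values:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[60.0], [95.0], [110.0], [150.0], [225.0]])  # toy 'hp' values
Xs = StandardScaler().fit_transform(X)

col_mean = Xs.mean()
col_std = Xs.std()  # population std (ddof=0), which StandardScaler uses
```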
#### Applying PCA as defined above, what is the explained variance of the first principal component?
```
# create a PCA object with 7 components
from sklearn.decomposition import PCA
pca = PCA(n_components=7)
# fit on the standardized data
principalComponents = pca.fit_transform(df_cars4)
# store the components in a dataframe
PCA_components = pd.DataFrame(principalComponents)
PCA_components.head()
# show the explained variance ratio of each component
print(pca.explained_variance_ratio_)
# plot the variance explained by each component
features = range(pca.n_components_)
fig, aux = plt.subplots(1, 1, figsize=(18, 8))
plt.bar(features, pca.explained_variance_ratio_, color='navy')
plt.xlabel('PCA features')
plt.ylabel('variance %')
plt.xticks(features)
```
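With `n_components` equal to the number of input features, the explained-variance ratios always sum to 1 and come sorted in decreasing order — which is why the first component carries the largest share. A sketch on synthetic data where one direction is given deliberately larger variance:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 7))
X[:, 0] *= 5.0  # one direction with much more variance

pca = PCA(n_components=7).fit(X)
ratios = pca.explained_variance_ratio_
total = ratios.sum()  # ratios sum to 1 when all components are kept
```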
### K-Means Algorithm
#### Use the first three principal components to build K-means with 3 clusters. Regarding the clusters, it is INCORRECT to state that:
- Each cluster has its own characteristics.
- **All clusters contain the same number of elements.**
- There are 3 centroids after the clustering is applied.
- Using only the 3 principal components, the centroids have 3 dimensions.
```
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=3, random_state=42)
# train the model using only the first three principal components
kmeans.fit(PCA_components.iloc[:,:3])
# predict the cluster assignments
x_clustered = kmeans.predict(PCA_components.iloc[:,:3])
# define a color map for each cluster
color_map = {0:'r', 1: 'g', 2: 'b'}
label_color = [color_map[l] for l in x_clustered]
# get the centroids
centers = np.array(kmeans.cluster_centers_)
# show a scatter plot
fig, aux = plt.subplots(1, 1, figsize=(18, 8))
plt.title('K-means with centroids', fontsize=20)
plt.scatter(principalComponents[:,0], principalComponents[:,1], c=label_color, alpha=0.5)
plt.scatter(centers[:,0], centers[:,1], marker="x", color='navy', s=500)
plt.show()
# create a dataframe from our PCA components
df = pd.DataFrame(PCA_components)
# keep only the first 3 components
df = df[[0,1,2]]
df['cluster'] = x_clustered
# visualize the clusters with the PCA data
sns.pairplot(df, hue='cluster', palette='Dark2', diag_kind='kde', height=3)
# count the elements in each cluster
print(df['cluster'].value_counts())
# show the counts as a bar chart
df['cluster'].value_counts().plot(kind ='bar')
plt.ylabel('Count')
```
### Decision Tree
#### After all the processing in the previous items, create a column holding the vehicle's efficiency. Vehicles that travel more than 25 miles on one gallon (*“mpg” > 25*) should be considered efficient. Use the columns *“cylinders”, “cubicinches”, “hp”, “weightlbs”, “time-to-60”* as inputs and the newly created efficiency column as the output.
#### Using the decision tree as shown, what is the model's accuracy?
```
# merge the original dataset with the PCA dataframe into a new dataset
df_final = df_cars.merge(df, left_index=True, right_index=True)
# create the new 'efficiency' feature
df_final['efficiency'] = np.where(df_final['mpg'] > 25, 1, 0)
# show the final dataset
df_final.head()
y = df_final['efficiency']
x = df_final[['cylinders', 'cubicinches', 'hp', 'weightlbs', 'time-to-60']]
normaliza = StandardScaler()
x = normaliza.fit_transform(x)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.30, random_state = 42)
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)
# fit a decision-tree classification model
from sklearn.tree import DecisionTreeClassifier
clf_arvore = DecisionTreeClassifier(random_state = 42)
clf_arvore.fit(x_train, y_train)
# predict on the test data
y_pred_arvore = clf_arvore.predict(x_test)
from sklearn.metrics import accuracy_score
acuracia = accuracy_score(y_test, y_pred_arvore)
print('Decision tree accuracy: ', acuracia)
# plot the confusion matrix with seaborn
from sklearn.metrics import classification_report, confusion_matrix
matriz_confusao = confusion_matrix(y_test, y_pred_arvore)
sns.heatmap(matriz_confusao, annot=True, vmin=0, vmax=40, center=20)
plt.show()
# plot the confusion matrix with mlxtend
from mlxtend.plotting import plot_confusion_matrix
fig, ax = plot_confusion_matrix(conf_mat = matriz_confusao)
plt.show()
print(classification_report(y_test, y_pred_arvore))
```
#### Regarding the confusion matrix obtained after applying the decision tree, as shown above, it is INCORRECT to state:
- The confusion matrix is an even more important strategy when a dataset is not balanced.
- The main diagonal of the matrix shows the instances where the predictions were correct.
- **There are twice as many vehicles considered not efficient as there are instances of efficient vehicles.**
- False positives are instances where the algorithm predicted true when, in reality, the value was false.
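For a binary problem like this one, the four cells of `confusion_matrix` can be unpacked with `.ravel()` into true negatives, false positives, false negatives and true positives; the main diagonal (tn, tp) holds the correct predictions. A small sketch with made-up labels:

```python
from sklearn.metrics import confusion_matrix

# illustrative labels, not the notebook's test set
y_true = [0, 0, 0, 1, 1, 1, 1, 0]
y_pred = [0, 1, 0, 1, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
correct = tn + tp  # the main diagonal: correct predictions
```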
### Logistic Regression
#### Using the same train/test split employed in the previous analysis, apply the logistic regression model as shown in the assignment description.
#### Comparing the results with those of the decision-tree model, it is INCORRECT to state that:
- Since both models achieved accuracy above 80%, the choice between them can and should be made on other criteria, such as model complexity.
- **Logistic regression should not be applied to this problem, because it works only with categorical data.**
- Both models achieved accuracy above 80%.
- Both the decision tree and logistic regression can be used for prediction in regression settings.
```
# fit a logistic regression classification model
from sklearn.linear_model import LogisticRegression
clf_log = LogisticRegression(random_state = 42)
clf_log.fit(x_train, y_train)
# predict on the test data
y_pred_log = clf_log.predict(x_test)
acuracia = accuracy_score(y_test, y_pred_log)
print('Logistic regression accuracy: ', acuracia)
# plot the confusion matrix with seaborn
matriz_confusao = confusion_matrix(y_test, y_pred_log)
sns.heatmap(matriz_confusao, annot=True, vmin=0, vmax=40, center=20)
plt.show()
# plot the confusion matrix with mlxtend
fig, ax = plot_confusion_matrix(conf_mat = matriz_confusao)
plt.show()
print(classification_report(y_test, y_pred_log))
```
# Database engineering
In this section we'll:
+ define table schemas using the SQLAlchemy ORM
+ create a SQLite database
+ load the cleaned Hawaii climate data into pandas dataframes
+ upload the data from the pandas dataframes into the SQLite database
```
# Dependencies
import pandas as pd
import sqlite3
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, create_session
from sqlalchemy import create_engine, MetaData
from sqlalchemy.ext.automap import automap_base
# Define and create a database engine
engine = create_engine('sqlite:///hawaii.sqlite', echo=False)
# Use SQLAlchemy to create a database table schema
Base = declarative_base()
class Station(Base):
    __tablename__ = "station"
    station_id = Column(Integer, primary_key=True)
    # station code must be unique so measurement.station can reference it
    station = Column(String, nullable=False, unique=True)
    name = Column(String, nullable=False)
    latitude = Column(Integer, nullable=False)
    longitude = Column(Integer, nullable=False)
    elevation = Column(Integer, nullable=False)
    # one station has many measurements
    children = relationship("Measurement", back_populates="parent")

class Measurement(Base):
    __tablename__ = "measurement"
    measurement_id = Column(Integer, primary_key=True)
    station = Column(String, ForeignKey("station.station"))
    date = Column(String)
    prcp = Column(Integer)
    tobs = Column(Integer)
    parent = relationship("Station", back_populates="children")
# Generate schema
Base.metadata.create_all(engine)
# Reflect database into a new model
Base = automap_base()
# Reflect tables
Base.prepare(engine)
# Access and reflect metadata
metadata = MetaData(bind=engine)
metadata.reflect()
# Create database session object
session = create_session(bind = engine)
# Check whether classes and tables exist
for mappedclass in Base.classes:
print(mappedclass)
for mdtable in Base.metadata.tables:
print(mdtable)
# Define SQLite connection and cursor
conn = sqlite3.connect("hawaii.sqlite")
cur = conn.cursor()
# Delete any existing table data (for test purposes only)
# https://stackoverflow.com/questions/11233128/how-to-clean-the-database-dropping-all-records-using-sqlalchemy
for tbl in metadata.sorted_tables:
engine.execute(tbl.delete())
conn.commit()
## Compact SQLite file
conn.execute("VACUUM")
# Load clean data
station_df = pd.read_csv("clean_hawaii_stations.csv")
measurement_df = pd.read_csv("clean_hawaii_measurements.csv")
# Append data to SQLAlchemy tables
station_df.to_sql('station', conn, if_exists='append', index=False)
measurement_df.to_sql('measurement', conn, if_exists='append', index=False)
# Close connection
conn.close()
```
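As a sanity check, the uploaded tables can be read back with `pandas.read_sql`. A minimal sketch against a throwaway in-memory database (the station codes and column subset here are illustrative, not the actual Hawaii data):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")  # throwaway database for the sketch
df = pd.DataFrame({"station": ["USC1", "USC2"],
                   "name": ["WAIKIKI", "KANEOHE"]})
df.to_sql("station", conn, index=False)

# read the table back to confirm the upload round-trips
back = pd.read_sql("SELECT * FROM station", conn)
conn.close()
```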
<div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="https://cocl.us/corsera_da0101en_notebook_top">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/Images/TopAd.png" width="750" align="center">
</a>
</div>
<a href="https://www.bigdatauniversity.com"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/Images/CCLog.png" width="300" align="center"></a>
<h1 align=center><font size=5>Data Analysis with Python</font></h1>
<h1>Data Wrangling</h1>
<h3>Welcome!</h3>
By the end of this notebook, you will have learned the basics of Data Wrangling!
<h2>Table of content</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li><a href="#identify_handle_missing_values">Identify and handle missing values</a>
<ul>
<li><a href="#identify_missing_values">Identify missing values</a></li>
<li><a href="#deal_missing_values">Deal with missing values</a></li>
<li><a href="#correct_data_format">Correct data format</a></li>
</ul>
</li>
<li><a href="#data_standardization">Data standardization</a></li>
<li><a href="#data_normalization">Data Normalization (centering/scaling)</a></li>
<li><a href="#binning">Binning</a></li>
<li><a href="#indicator">Indicator variable</a></li>
</ul>
Estimated Time Needed: <strong>30 min</strong>
</div>
<hr>
<h2>What is the purpose of Data Wrangling?</h2>
Data Wrangling is the process of converting data from the initial format to a format that may be better for analysis.
<h3>What is the fuel consumption (L/100km) rate for the diesel car?</h3>
<h3>Import data</h3>
<p>
You can find the "Automobile Data Set" from the following link: <a href="https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data">https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data</a>.
We will be using this data set throughout this course.
</p>
<h4>Import pandas</h4>
```
import pandas as pd
import matplotlib.pylab as plt
```
<h2>Reading the data set from the URL and adding the related headers.</h2>
URL of the dataset:
This dataset is hosted on IBM Cloud Object Storage; click <a href="https://cocl.us/corsera_da0101en_notebook_bottom">HERE</a> for free storage.
```
filename = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/auto.csv"
```
Python list <b>headers</b> containing name of headers
```
headers = ["symboling","normalized-losses","make","fuel-type","aspiration", "num-of-doors","body-style",
"drive-wheels","engine-location","wheel-base", "length","width","height","curb-weight","engine-type",
"num-of-cylinders", "engine-size","fuel-system","bore","stroke","compression-ratio","horsepower",
"peak-rpm","city-mpg","highway-mpg","price"]
```
Use the Pandas method <b>read_csv()</b> to load the data from the web address. Set the parameter "names" equal to the Python list "headers".
```
df = pd.read_csv(filename, names = headers)
```
Use the method <b>head()</b> to display the first five rows of the dataframe.
```
# To see what the data set looks like, we'll use the head() method.
df.head()
```
As we can see, several question marks appeared in the dataframe; those are missing values which may hinder our further analysis.
<div>So, how do we identify all those missing values and deal with them?</div>
<b>How to work with missing data?</b>
Steps for working with missing data:
<ol>
<li>identify missing data</li>
<li>deal with missing data</li>
<li>correct data format</li>
</ol>
<h2 id="identify_handle_missing_values">Identify and handle missing values</h2>
<h3 id="identify_missing_values">Identify missing values</h3>
<h4>Convert "?" to NaN</h4>
In the car dataset, missing data comes with the question mark "?".
We replace "?" with NaN (Not a Number), which is Python's default missing value marker, for reasons of computational speed and convenience. Here we use the function:
<pre>.replace(A, B, inplace = True) </pre>
to replace A by B
```
import numpy as np
# replace "?" to NaN
df.replace("?", np.nan, inplace = True)
df.head(5)
```
<h4>Evaluating for Missing Data</h4>
The missing values are converted to Python's default. We use Python's built-in functions to identify these missing values. There are two methods to detect missing data:
<ol>
<li><b>.isnull()</b></li>
<li><b>.notnull()</b></li>
</ol>
The output is a boolean value indicating whether the value that is passed into the argument is in fact missing data.
```
missing_data = df.isnull()
missing_data.head(5)
```
"True" stands for missing value, while "False" stands for not missing value.
<h4>Count missing values in each column</h4>
<p>
Using a for loop in Python, we can quickly figure out the number of missing values in each column. As mentioned above, "True" represents a missing value, "False" means the value is present in the dataset. In the body of the for loop the method ".value_counts()" counts the number of "True" values.
</p>
```
for column in missing_data.columns.values.tolist():
print(column)
print (missing_data[column].value_counts())
print("")
```
Based on the summary above, each column has 205 rows of data, and seven columns contain missing data:
<ol>
<li>"normalized-losses": 41 missing data</li>
<li>"num-of-doors": 2 missing data</li>
<li>"bore": 4 missing data</li>
<li>"stroke" : 4 missing data</li>
<li>"horsepower": 2 missing data</li>
<li>"peak-rpm": 2 missing data</li>
<li>"price": 4 missing data</li>
</ol>
<h3 id="deal_missing_values">Deal with missing data</h3>
<b>How to deal with missing data?</b>
<ol>
<li>drop data<br>
a. drop the whole row<br>
b. drop the whole column
</li>
<li>replace data<br>
a. replace it by mean<br>
b. replace it by frequency<br>
c. replace it based on other functions
</li>
</ol>
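In pandas, these two families of strategies map directly onto `dropna` (drop) and `fillna` (replace). A toy sketch of both on a single column:

```python
import numpy as np
import pandas as pd

# toy column with two missing entries (illustrative values)
df_toy = pd.DataFrame({"price": [100.0, np.nan, 300.0, np.nan]})

dropped = df_toy.dropna(subset=["price"])                 # drop whole rows
filled = df_toy["price"].fillna(df_toy["price"].mean())   # replace by mean
```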
Whole columns should be dropped only if most entries in the column are empty. In our dataset, none of the columns are empty enough to drop entirely.
We have some freedom in choosing which method to replace data; however, some methods may seem more reasonable than others. We will apply each method to many different columns:
<b>Replace by mean:</b>
<ul>
<li>"normalized-losses": 41 missing data, replace them with mean</li>
<li>"stroke": 4 missing data, replace them with mean</li>
<li>"bore": 4 missing data, replace them with mean</li>
<li>"horsepower": 2 missing data, replace them with mean</li>
<li>"peak-rpm": 2 missing data, replace them with mean</li>
</ul>
<b>Replace by frequency:</b>
<ul>
<li>"num-of-doors": 2 missing data, replace them with "four".
<ul>
<li>Reason: 84% of sedans have four doors. Since four doors is the most frequent value, it is the most likely to occur</li>
</ul>
</li>
</ul>
<b>Drop the whole row:</b>
<ul>
<li>"price": 4 missing data, simply delete the whole row
<ul>
<li>Reason: price is what we want to predict. Any data entry without price data cannot be used for prediction; therefore any row now without price data is not useful to us</li>
</ul>
</li>
</ul>
<h4>Calculate the average of the column </h4>
```
avg_norm_loss = df["normalized-losses"].astype("float").mean(axis=0)
print("Average of normalized-losses:", avg_norm_loss)
```
<h4>Replace "NaN" by mean value in "normalized-losses" column</h4>
```
df["normalized-losses"].replace(np.nan, avg_norm_loss, inplace=True)
```
<h4>Calculate the mean value for 'bore' column</h4>
```
avg_bore=df['bore'].astype('float').mean(axis=0)
print("Average of bore:", avg_bore)
```
<h4>Replace NaN by mean value</h4>
```
df["bore"].replace(np.nan, avg_bore, inplace=True)
```
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #1: </h1>
<b>According to the example above, replace NaN in "stroke" column by mean.</b>
</div>
```
# Write your code below and press Shift+Enter to execute
avg_stroke=df['stroke'].astype('float').mean(axis=0)
df['stroke'].replace(np.nan, avg_stroke, inplace=True)
```
Double-click <b>here</b> for the solution.
<!-- The answer is below:
# calculate the mean value for "stroke" column
avg_stroke = df["stroke"].astype("float").mean(axis = 0)
print("Average of stroke:", avg_stroke)
# replace NaN by mean value in "stroke" column
df["stroke"].replace(np.nan, avg_stroke, inplace = True)
-->
<h4>Calculate the mean value for the 'horsepower' column:</h4>
```
avg_horsepower = df['horsepower'].astype('float').mean(axis=0)
print("Average horsepower:", avg_horsepower)
```
<h4>Replace "NaN" by mean value:</h4>
```
df['horsepower'].replace(np.nan, avg_horsepower, inplace=True)
```
<h4>Calculate the mean value for 'peak-rpm' column:</h4>
```
avg_peakrpm=df['peak-rpm'].astype('float').mean(axis=0)
print("Average peak rpm:", avg_peakrpm)
```
<h4>Replace NaN by mean value:</h4>
```
df['peak-rpm'].replace(np.nan, avg_peakrpm, inplace=True)
```
To see which values are present in a particular column, we can use the ".value_counts()" method:
```
df['num-of-doors'].value_counts()
```
We can see that four doors are the most common type. We can also use the ".idxmax()" method to calculate for us the most common type automatically:
```
df['num-of-doors'].value_counts().idxmax()
```
The replacement procedure is very similar to what we have seen previously:
```
#replace the missing 'num-of-doors' values by the most frequent
df["num-of-doors"].replace(np.nan, "four", inplace=True)
```
Finally, let's drop all rows that do not have price data:
```
# simply drop whole row with NaN in "price" column
df.dropna(subset=["price"], axis=0, inplace=True)
# reset index, because we dropped two rows
df.reset_index(drop=True, inplace=True)
df.head()
```
<b>Good!</b> Now, we obtain the dataset with no missing values.
<h3 id="correct_data_format">Correct data format</h3>
<b>We are almost there!</b>
<p>The last step in data cleaning is checking and making sure that all data is in the correct format (int, float, text or other).</p>
In Pandas, we use
<p><b>.dtypes</b> to check the data type</p>
<p><b>.astype()</b> to change the data type</p>
<h4>Let's list the data types for each column</h4>
```
df.dtypes
```
<p>As we can see above, some columns are not of the correct data type. Numerical variables should have type 'float' or 'int', and variables with strings such as categories should have type 'object'. For example, 'bore' and 'stroke' variables are numerical values that describe the engines, so we should expect them to be of the type 'float' or 'int'; however, they are shown as type 'object'. We have to convert data types into a proper format for each column using the "astype()" method.</p>
<h4>Convert data types to proper format</h4>
```
df[["bore", "stroke"]] = df[["bore", "stroke"]].astype("float")
df[["normalized-losses"]] = df[["normalized-losses"]].astype("int")
df[["price"]] = df[["price"]].astype("float")
df[["peak-rpm"]] = df[["peak-rpm"]].astype("float")
```
<h4>Let us list the columns after the conversion</h4>
```
df.dtypes
```
<b>Wonderful!</b>
Now, we finally obtain the cleaned dataset with no missing values and all data in its proper format.
<h2 id="data_standardization">Data Standardization</h2>
<p>
Data is usually collected from different agencies with different formats.
(Data Standardization is also a term for a particular type of data normalization, where we subtract the mean and divide by the standard deviation)
</p>
<b>What is Standardization?</b>
<p>Standardization is the process of transforming data into a common format, which allows the researcher to make meaningful comparisons.
</p>
<b>Example</b>
<p>Transform mpg to L/100km:</p>
<p>In our dataset, the fuel consumption columns "city-mpg" and "highway-mpg" are expressed in mpg (miles per gallon). Assume we are developing an application for a country that adopts the L/100km standard for fuel consumption.</p>
<p>We will need to apply a <b>data transformation</b> to convert mpg into L/100km.</p>
<p>The formula for unit conversion is</p>
L/100km = 235 / mpg
<p>We can do many mathematical operations directly in Pandas.</p>
```
df.head()
# Convert mpg to L/100km by mathematical operation (235 divided by mpg)
df['city-L/100km'] = 235/df["city-mpg"]
# check your transformed data
df.head()
```
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #2: </h1>
<b>According to the example above, transform mpg to L/100km in the column of "highway-mpg", and change the name of column to "highway-L/100km".</b>
</div>
```
# Write your code below and press Shift+Enter to execute
df['highway-mpg'] = 235/df['highway-mpg']
df.rename(columns={'highway-mpg':'highway-L/100km'}, inplace=True)
df.head()
```
Double-click <b>here</b> for the solution.
<!-- The answer is below:
# transform mpg to L/100km by mathematical operation (235 divided by mpg)
df["highway-mpg"] = 235/df["highway-mpg"]
# rename column name from "highway-mpg" to "highway-L/100km"
df.rename(columns={'highway-mpg':'highway-L/100km'}, inplace=True)
# check your transformed data
df.head()
-->
<h2 id="data_normalization">Data Normalization</h2>
<b>Why normalization?</b>
<p>Normalization is the process of transforming values of several variables into a similar range. Typical normalizations include scaling the variable so the variable average is 0, scaling the variable so the variance is 1, or scaling the variable so the variable values range from 0 to 1.
</p>
<b>Example</b>
<p>To demonstrate normalization, let's say we want to scale the columns "length", "width" and "height" </p>
<p><b>Target:</b> normalize those variables so their values range from 0 to 1.</p>
<p><b>Approach:</b> replace original value by (original value)/(maximum value)</p>
```
# replace (original value) by (original value)/(maximum value)
df['length'] = df['length']/df['length'].max()
df['width'] = df['width']/df['width'].max()
```
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #3: </h1>
<b>According to the example above, normalize the column "height".</b>
</div>
```
# Write your code below and press Shift+Enter to execute
df['height'] = df['height']/df['height'].max()
df[["length","width","height"]].head()
```
Double-click <b>here</b> for the solution.
<!-- The answer is below:
df['height'] = df['height']/df['height'].max()
# show the scaled columns
df[["length","width","height"]].head()
-->
Here we can see, we've normalized "length", "width" and "height" in the range of [0,1].
<h2 id="binning">Binning</h2>
<b>Why binning?</b>
<p>
Binning is a process of transforming continuous numerical variables into discrete categorical 'bins', for grouped analysis.
</p>
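The idea can be sketched on a handful of synthetic values before applying it to the real column (these numbers are illustrative, not from the dataset):

```python
import pandas as pd

# Synthetic horsepower-like values, for illustration only
horsepower = pd.Series([48, 90, 150, 210, 288])

# Three equal-width bins with readable labels
binned = pd.cut(horsepower, bins=3, labels=["Low", "Medium", "High"])
labels = [str(b) for b in binned]
```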
<b>Example: </b>
<p>In our dataset, "horsepower" is a real-valued variable ranging from 48 to 288 with 57 unique values. What if we only care about the price difference between cars with high, medium, and low horsepower (3 types)? Can we rearrange them into three 'bins' to simplify the analysis?</p>
<p>We will use the pandas method 'cut' to segment the 'horsepower' column into 3 bins.</p>
<h3>Example of Binning Data In Pandas</h3>
Convert data to correct format
```
df["horsepower"]=df["horsepower"].astype(int, copy=True)
```
Let's plot the histogram of horsepower to see what its distribution looks like.
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(df["horsepower"])
# set x/y labels and plot title
plt.xlabel("horsepower")
plt.ylabel("count")
plt.title("horsepower bins")
```
<p>We would like 3 bins of equal width, so we use numpy's <code>linspace(start_value, end_value, numbers_generated)</code> function.</p>
<p>Since we want to include the minimum value of horsepower we want to set start_value=min(df["horsepower"]).</p>
<p>Since we want to include the maximum value of horsepower we want to set end_value=max(df["horsepower"]).</p>
<p>Since we are building 3 bins of equal length, there should be 4 dividers, so numbers_generated=4.</p>
We build a bin array spanning from the minimum to the maximum value of "horsepower". These bin edges determine where one bin ends and the next begins.
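For example, with the horsepower range quoted earlier (48 to 288), four equally spaced dividers define three bins of width 80:

```python
import numpy as np

dividers = np.linspace(48, 288, 4)  # 4 edges -> 3 equal-width bins
```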
```
import numpy as np

bins = np.linspace(min(df["horsepower"]), max(df["horsepower"]), 4)
bins
```
We set group names:
```
group_names = ['Low', 'Medium', 'High']
```
We apply the pandas function "cut" to determine which bin each value of "df['horsepower']" belongs to.
```
df['horsepower-binned'] = pd.cut(df['horsepower'], bins, labels=group_names, include_lowest=True )
df[['horsepower','horsepower-binned']].head(20)
```
Let's see the number of vehicles in each bin.
```
df["horsepower-binned"].value_counts()
```
Let's plot the distribution of each bin.
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.bar(group_names, df["horsepower-binned"].value_counts())
# set x/y labels and plot title
plt.xlabel("horsepower")
plt.ylabel("count")
plt.title("horsepower bins")
```
<p>
Check the dataframe above carefully; you will find that the last column provides the bins for "horsepower" with 3 categories ("Low", "Medium" and "High").
</p>
<p>
We successfully narrowed "horsepower" from 57 distinct values down to 3 bins!
</p>
<h3>Bins visualization</h3>
Normally, a histogram is used to visualize the distribution of bins we created above.
```
%matplotlib inline
import matplotlib.pyplot as plt
# draw histogram of attribute "horsepower" with bins = 3
plt.hist(df["horsepower"], bins=3)
# set x/y labels and plot title
plt.xlabel("horsepower")
plt.ylabel("count")
plt.title("horsepower bins")
```
The plot above shows the binning result for attribute "horsepower".
<h2 id="indicator">Indicator variable (or dummy variable)</h2>
<b>What is an indicator variable?</b>
<p>
An indicator variable (or dummy variable) is a numerical variable used to label categories. They are called 'dummies' because the numbers themselves don't have inherent meaning.
</p>
<b>Why do we use indicator variables?</b>
<p>
So we can use categorical variables for regression analysis in the later modules.
</p>
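A minimal sketch of the idea on a tiny synthetic column (illustrative values only):

```python
import pandas as pd

# A tiny categorical column (synthetic values)
fuel = pd.Series(["gas", "diesel", "gas", "gas"], name="fuel-type")

# One 0/1 indicator column per category
dummies = pd.get_dummies(fuel)
```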
<b>Example</b>
<p>
We see the column "fuel-type" has two unique values, "gas" or "diesel". Regression doesn't understand words, only numbers. To use this attribute in regression analysis, we convert "fuel-type" into indicator variables.
</p>
<p>
We will use the pandas method 'get_dummies' to assign numerical values to the different categories of fuel type.
</p>
```
df.columns
```
Get the indicator variables and assign them to the data frame "dummy_variable_1":
```
dummy_variable_1 = pd.get_dummies(df["fuel-type"])
dummy_variable_1.head()
```
Change the column names for clarity:
```
dummy_variable_1.rename(columns={'gas':'fuel-type-gas', 'diesel':'fuel-type-diesel'}, inplace=True)
dummy_variable_1.head()
```
Each indicator column now contains 1 where the observation has that fuel type and 0 otherwise. We will now insert these columns back into our original dataset.
```
# merge data frame "df" and "dummy_variable_1"
df = pd.concat([df, dummy_variable_1], axis=1)
# drop original column "fuel-type" from "df"
df.drop("fuel-type", axis = 1, inplace=True)
df.head()
```
The last two columns are now the indicator-variable representation of the fuel-type variable: all 0s and 1s.
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #4: </h1>
<b>As above, create an indicator variable for the column "aspiration": "std" to 0, "turbo" to 1.</b>
</div>
```
# Write your code below and press Shift+Enter to execute
dummy_variable_2 = pd.get_dummies(df['aspiration'])
dummy_variable_2.head()
```
Double-click <b>here</b> for the solution.
<!-- The answer is below:
# get indicator variables of aspiration and assign it to data frame "dummy_variable_2"
dummy_variable_2 = pd.get_dummies(df['aspiration'])
# change column names for clarity
dummy_variable_2.rename(columns={'std':'aspiration-std', 'turbo': 'aspiration-turbo'}, inplace=True)
# show first 5 instances of data frame "dummy_variable_2"
dummy_variable_2.head()
-->
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #5: </h1>
<b>Merge the new dataframe into the original dataframe, then drop the column 'aspiration'.</b>
</div>
```
# Write your code below and press Shift+Enter to execute
# merge data frame "df" and "dummy_variable_2"
df = pd.concat([df, dummy_variable_2], axis=1)
# drop original column "aspiration" from "df"
df.drop('aspiration', axis = 1, inplace=True)
```
Double-click <b>here</b> for the solution.
<!-- The answer is below:
# merge the new dataframe into the original dataframe
df = pd.concat([df, dummy_variable_2], axis=1)
# drop original column "aspiration" from "df"
df.drop('aspiration', axis = 1, inplace=True)
-->
Save the cleaned dataframe to a new CSV file:
```
df.to_csv('clean_df.csv')
```
<h1>Thank you for completing this notebook</h1>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<p><a href="https://cocl.us/corsera_da0101en_notebook_bottom"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/Images/BottomAd.png" width="750" align="center"></a></p>
</div>
<h3>About the Authors:</h3>
This notebook was written by <a href="https://www.linkedin.com/in/mahdi-noorian-58219234/" target="_blank">Mahdi Noorian PhD</a>, <a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a>, Bahare Talayian, Eric Xiao, Steven Dong, Parizad, Hima Vsudevan and <a href="https://www.linkedin.com/in/fiorellawever/" target="_blank">Fiorella Wenver</a> and <a href=" https://www.linkedin.com/in/yi-leng-yao-84451275/ " target="_blank" >Yi Yao</a>.
<p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>
<hr>
<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
# Lesson 01 - Part 1
## Linear Transformations
In this first part of the lesson we will briefly review linear transformations. Let's start by thinking about transformations in 2D.
### Rotation
Write a function that receives an angle $\theta$ and returns a rotation matrix represented as a *numpy.array*. Points are represented in [homogeneous coordinates](https://en.wikipedia.org/wiki/Homogeneous_coordinates).
```
# Imports
import matplotlib.pyplot as plt
import numpy as np
import math
# Helper functions
def ponto(x, y):
return np.array([x, y, 1]).reshape((3, 1))
def prettypt(pt):
return tuple(pt.flatten())
def testar_funcao(funcao, entradas, parametros, saidas):
EPSILON = 1e-1 # Yes, it will accept a fairly large error...
tudo_ok = True
for entrada, parametro, saida_esperada in zip(entradas, parametros, saidas):
if isinstance(parametro, tuple):
mat = funcao(*parametro)
else:
mat = funcao(parametro)
saida_obtida = mat.dot(entrada)
if not np.allclose(saida_obtida, saida_esperada, atol=EPSILON):
tudo_ok = False
print('Error for input {}. Expected={}, Obtained={}'.format(prettypt(entrada), prettypt(saida_esperada), prettypt(saida_obtida)))
if tudo_ok:
print('All OK :)')
# IMPLEMENT THIS FUNCTION
def rotation_matrix(theta):
m = np.eye(3)
m[0][0] = math.cos(theta)
m[0][1] = -math.sin(theta)
m[1][0] = math.sin(theta)
m[1][1] = math.cos(theta)
return m
rotation_matrix(math.pi) # theta is in radians, not degrees
```
Open the file *Rotacao.ggb* using the [Geogebra](https://www.geogebra.org/download) software. Generate 10 more values to test your function by moving the point $p$ and changing the value of $\theta'$ in the program.
```
# Points are represented as tuples (x, y)
entradas = [
ponto(4, 2),
ponto(6, 3),
ponto(6, 3),
ponto(9, 6),
ponto(2, 1),
ponto(5, 5),
ponto(5, 5),
ponto(0, 10),
ponto(0, 10),
ponto(5, 10)
# ADD OTHER INPUT POINTS HERE...
]
angulos = [
0.64, # In radians
0.64, # In radians
0.91, # In radians
1.57, # In radians
1.57, # In radians
1.57, # In radians
0.79, # In radians
math.pi, # In radians
0, # In radians
math.pi # In radians
# ADD OTHER ANGLES HERE...
]
saidas = [
ponto(2, 4),
ponto(3, 6),
ponto(1.31, 6.58),
ponto(-6, 9),
ponto(-1, 2),
ponto(-5, 5),
ponto(0, 7.07),
ponto(0, -10),
ponto(0, 10),
ponto(-5, -10)
# ADD OTHER EXPECTED OUTPUTS HERE...
]
# Testing the function...
testar_funcao(rotation_matrix, entradas, angulos, saidas)
```
### Scale
Write a function that receives a value $s$ and returns a scale matrix.
```
# IMPLEMENT THIS FUNCTION
def scale_matrix(s):
m = np.eye(3)
m[0][0] = s
m[1][1] = s
return m
# Generating some test values...
n = 10
entradas = [ponto(x, y) for x in range(n) for y in range(n)]
fatores = [i+2 for i in range(n*n)] # Could be other values (the +2 is arbitrary)
saidas = [ponto(p[0,0]*s, p[1,0]*s) for p, s in zip(entradas, fatores)]
# Testing the function...
testar_funcao(scale_matrix, entradas, fatores, saidas)
```
### Translation
Write a function that receives two values $t_x$ and $t_y$ and returns a translation matrix.
```
# IMPLEMENT THIS FUNCTION
def translation_matrix(tx, ty):
m = np.eye(3)
m[0][2] = tx
m[1][2] = ty
return m
# Generating some test values...
n = 10
entradas = [ponto(x, y) for x in range(n) for y in range(n)]
translacoes = [(i+2, i+3) for i in range(n*n)] # Could be other values (the +2 and +3 are arbitrary)
saidas = [ponto(p[0,0]+t[0], p[1,0]+t[1]) for p, t in zip(entradas, translacoes)]
# Testing the function...
testar_funcao(translation_matrix, entradas, translacoes, saidas)
```
## Transformations on images
Write two functions that receive an image, a scale factor $s$, an angle $\theta$ and a translation $(t_x, t_y)$ and return a new image, applying the scale, rotation and translation, in that order. The two functions differ in how they generate the final image:
1) The first function should traverse each pixel of the original image and compute where it should appear in the final image.
2) The second function should traverse each pixel of the final image and compute where it came from in the original image.
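The practical difference between the two strategies shows up even with pure scaling: forward mapping can leave "holes" in the output (pixels no source pixel lands on), while inverse mapping fills every output pixel. A self-contained sketch on a tiny synthetic image:

```python
import numpy as np

def forward_scale(img, s):
    """Forward mapping: push each source pixel to its destination."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=img.dtype)
    for y in range(h):
        for x in range(w):
            X, Y = int(x * s), int(y * s)
            if 0 <= X < w and 0 <= Y < h:
                out[Y, X] = img[y, x]
    return out

def inverse_scale(img, s):
    """Inverse mapping: pull each destination pixel from its source."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=img.dtype)
    for Y in range(h):
        for X in range(w):
            x, y = int(X / s), int(Y / s)
            if 0 <= x < w and 0 <= y < h:
                out[Y, X] = img[y, x]
    return out

img = np.ones((8, 8), dtype=np.uint8)  # a tiny all-white "image"
fwd = forward_scale(img, 2.0)  # only 16 of 64 output pixels get written
inv = inverse_scale(img, 2.0)  # every output pixel gets a value
```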
```
def aplica_transformacao_v1(img, s, theta, tx, ty):
# IMPLEMENT THIS FUNCTION
translation = translation_matrix(tx, ty)
scale = scale_matrix(s)
rotation = rotation_matrix(theta)
transform = translation.dot(rotation).dot(scale)
res = np.zeros_like(img) # res is the image to be returned
h, w = img.shape[:2]
for i in range(w):
for j in range(h):
res_x, res_y, _ = transform.dot(ponto(i, j)).flatten()
res_x, res_y = int(res_x), int(res_y)
if 0 <= res_x and res_x < w and 0 <= res_y and res_y < h:
res[res_y, res_x,:] = img[j,i,:]
return res
def aplica_transformacao_v2(img, s, theta, tx, ty):
# IMPLEMENT THIS FUNCTION
res = np.zeros_like(img) # res is the image to be returned
h,w = img.shape[:2]
scale = scale_matrix(1/s)
rotation = rotation_matrix(-theta)
translation = translation_matrix(-tx, -ty)
transform = scale.dot(rotation).dot(translation)
for res_x in range(w):
for res_y in range(h):
x, y, _ = transform.dot(ponto(res_x, res_y)).flatten()
x, y = int(x), int(y)
if 0 <= x and x < w and 0 <= y and y < h:
res[res_y, res_x,:] = img[y,x,:]
return res
# Loading the test image
img = plt.imread('insper-fachada.jpg')
plt.imshow(img)
# Testing the first version of the function
plt.imshow(aplica_transformacao_v1(img, 1.5, math.pi/3, 500, -450))
# Testing the second version of the function
plt.imshow(aplica_transformacao_v2(img, 1.5, math.pi/3, 500, -450))
```
# Food for thought
1. What is the difference between the generated images? Why does this difference exist?
2. Does the order of the transformations matter? Run a test:
    1. Create a list with 4 points at the corners of a square
    2. Generate a blank image and draw the 4 points
    3. Generate a translation matrix, a rotation matrix and a scale matrix
    4. Apply the 3 transformations to the 4 points in all possible orders (6 in total)
    5. For each combination, draw the 4 transformed points in another color
```
# IMPLEMENT THE TEST FOR EXERCISE 2 HERE
```
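A possible self-contained sketch of the order-of-transformations check described in item 2 above (drawing is omitted and the matrix helpers are re-implemented so the block runs on its own):

```python
import numpy as np

def translation_matrix(tx, ty):
    m = np.eye(3)
    m[0, 2], m[1, 2] = tx, ty
    return m

def scale_matrix(s):
    return np.diag([s, s, 1.0])

# 4 corners of a unit square as columns, in homogeneous coordinates
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1],
                   [1, 1, 1, 1]], dtype=float)

T = translation_matrix(2, 0)
S = scale_matrix(2)

# The rightmost matrix acts first
translate_then_scale = S @ T @ square
scale_then_translate = T @ S @ square
```

The corner at the origin ends up at x=4 in one order and x=2 in the other, so the order matters.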
```
#CREATE CLASS
#CLASS VS INSTANCE
#CREATE CLASS
class SoftwareEngineer:
def __init__(self, name, age, level, salary):
#instance attribute
self.name = name
self.age = age
self.level = level
self.salary = salary
#instance
se1 = SoftwareEngineer("Max", 20, "Junior", 5000)
print(se1.name, se1.age)
class SoftwareEngineer:
#class attribute
alias = "Keyboard Magician"
def __init__(self, name, age, level, salary):
#instance attribute
self.name = name
self.age = age
self.level = level
self.salary = salary
#instance
se1 = SoftwareEngineer("Max", 20, "Junior", 5000)
print(se1.alias)
print(SoftwareEngineer.alias)
#recap
#create a class (blueprint)
#create an instance (object)
#class vs instance
#instance attributes : defined in __init__(self)
#class attribute
class SoftwareEngineer:
#class attribute
alias = "Keyboard Magician"
def __init__(self, name, age, level, salary):
#instance attribute
self.name = name
self.age = age
self.level = level
self.salary = salary
#instance method
def code(self):
print(f"{self.name} is writing code...")
def code_in_language(self, language):
print(f"{self.name} is writing code in {language}...")
def information(self):
information = f"name = {self.name}, age = {self.age}, level = {self.level}"
return information
#re-create the instance from the new class definition so it has the methods
se1 = SoftwareEngineer("Max", 20, "Junior", 5000)
se1.code()
se1.code_in_language("Python")
se1.information()
class SoftwareEngineer:
#class attribute
alias = "Keyboard Magician"
def __init__(self, name, age, level, salary):
#instance attribute
self.name = name
self.age = age
self.level = level
self.salary = salary
#instance method
def code(self):
print(f"{self.name} is writing code...")
def code_in_language(self, language):
print(f"{self.name} is writing code in {language}...")
#dunder method
def __str__(self):
information = f"name = {self.name}, age = {self.age}, level = {self.level}"
return information
def __eq__(self, other):
return self.name == other.name and self.age == other.age
@staticmethod #decorator
def entry_salary(age):
if age < 25:
return 5000
if age < 30:
return 7000
else:
return 9000
#instance
se1 = SoftwareEngineer("Max", 20, "Junior", 5000)
se2 = SoftwareEngineer("Max", 20, "Senior", 5000)
print(se1)
print(se1 == se2)
print(se1.entry_salary(78))
#recap:
#instance method(self)
#can take arguments and can return values
#special "dunder" method (__str__ and __eq__)
#@staticmethod
#inherits, extend, override
class Employee:
def __init__(self, name, age):
self.name = name
self.age = age
class SoftwareEngineer(Employee):
pass
class Designer(Employee):
pass
#!/bin/python3
import math
import os
import random
import re
import sys
#
# Complete the 'fizzBuzz' function below.
#
# The function accepts INTEGER n as parameter.
#
def fizzBuzz(n):
for i in range(1, n + 1):
print("FizzBuzz"[i%-3&4:12&8-(i%-5&4)] or i)
if __name__ == '__main__':
n = int(input().strip())
fizzBuzz(n)
#inherits, extend, override
class Employee:
def __init__(self, name, age):
self.name = name
self.age = age
def work(self):
print(f"{self.name} is working...")
class SoftwareEngineer(Employee):
pass
class Designer(Employee):
pass
#inherits, extend, override
class Employee:
def __init__(self, name, age, salary):
self.name = name
self.age = age
self.salary = salary
def work(self):
print(f"{self.name} is working...")
class SoftwareEngineer(Employee):
#extend
def __init__(self, name, age, salary, level):
#override
super().__init__(name, age, salary)
self.level = level
def work(self):
print(f"{self.name} is coding...")
def debug(self):
print(f"{self.name} is debugging...")
class Designer(Employee):
def draw(self):
print(f"{self.name} is drawing...")
def work(self):
print(f"{self.name} is designing...")
se = SoftwareEngineer("Max", 35, 9000, "senior")
se.name, se.age, se.level
se.work()
#se.draw()  # AttributeError: 'SoftwareEngineer' object has no attribute 'draw'
se.debug()
d = Designer("Max", 35, 9000)
d.draw()
#polymorphism
employees = [SoftwareEngineer("Max", 25, 6000, "Junior"), SoftwareEngineer("Lisa", 30, 9000, "Senior"), Designer("Philip", 27, 7000)]
def motivate_employees(employees):
for employee in employees:
employee.work()
motivate_employees(employees)
#recap
#inheritance : ChildClass(BaseClass)
#inherit, extend, override
#super().__init__()
#polymorphism
#encapsulation
class SoftwareEngineer:
def __init__(self, name, age):
self.name = name
self.age = age
self._salary = None #private syntax
#_x is called a protected attribute
#__x is called a private attribute
self._num_bugs_solved = 0
def code(self):
self._num_bugs_solved += 1
#getter
def get_salary(self):
return self._salary
#setter
def set_salary(self, base_value):
self._salary = self._calculate_salary(base_value)
def _calculate_salary(self, base_value):
if self._num_bugs_solved < 10:
return base_value
if self._num_bugs_solved < 100:
return base_value * 2
return base_value * 3
se = SoftwareEngineer("Max", 25)
se.age, se.name
se.set_salary(9000)
se.get_salary()
for i in range(70):
se.code()
print(se._num_bugs_solved)
se.set_salary(6000)
print(se.get_salary())
#encapsulation: hiding data and internal operations
class SoftwareEngineer:
def __init__(self):
self._salary = None #private syntax
@property
def salary(self):
return self._salary
@salary.setter
def salary(self, value):
self._salary = value
@salary.deleter
def salary(self):
del self._salary
se = SoftwareEngineer()
se.salary = 6000
se.salary
#recap
#getter -> @property
#setter -> @x.setter
a = [i+1 for i in range(5)]
a
a[-2]
x = [5,9,1,1,2,3,7,1]
y = [1,2,2,3,3,2,0,5]
import numpy as np
np.corrcoef(x,y)
def batonPass(friends, time):
# Write your code here
friends = [i+1 for i in range(friends)]
for i in range(time):
if time < len(friends):
return (friends[i], friends[i+1])
else:
return (friends[i-2], friends[i-3])
batonPass(5,3)
for i in range(1,len(ans)):
if abs((ans[i]+1)-ans[i-1]) <= k:
ans[i] += 1
elif abs(ans[i]-ans[i-1]) <= k:
pass
elif abs((ans[i]-1)-ans[i-1]) <= k:
ans[i] -= 1
else:
ans[i] += 1
c += 1
for _ in range(int(input())):
n,k = get_int()
s = str(input())[:-1]
solve(n,k,s)
```
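The inheritance and polymorphism walk-through above condenses to this minimal, runnable sketch:

```python
class Employee:
    def __init__(self, name):
        self.name = name

    def work(self):
        return f"{self.name} is working"

class SoftwareEngineer(Employee):
    def work(self):  # override
        return f"{self.name} is coding"

class Designer(Employee):
    def work(self):  # override
        return f"{self.name} is designing"

# Polymorphism: one loop, and the right work() is dispatched per object
team = [SoftwareEngineer("Max"), Designer("Philip"), Employee("Lisa")]
outputs = [member.work() for member in team]
```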
```
import cv2
cap = cv2.VideoCapture(0)
car_model=cv2.CascadeClassifier('cars.xml')
```
# TO DETECT CAR ON LIVE VIDEO OR PHOTO.....
```
while True:
ret,frame=cap.read()
cars=car_model.detectMultiScale(frame)
gray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
for(x,y,w,h) in cars:
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,0,255),1)
cv2.imshow('car',frame)
if cv2.waitKey(10)==13:
break
cv2.destroyAllWindows()
cap.release()
#main start here
import cv2
from matplotlib import pyplot as plt
import numpy as np
import imutils
import easyocr
#main code
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
#FOR REAL USE CASE and LIVE NUMBER PLATE OF CAR
'''while(cap.isOpened()):
ret, frame = cap.read()
gra = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cv2.imwrite('carpic.jpg',frame)
cv2.imshow('frame',gra)
if cv2.waitKey(10) == 13:
break
cap.release()
cv2.destroyAllWindows()
plt.imshow(cv2.cvtColor(gra, cv2.COLOR_BGR2RGB))'''
#USING A IMAGE FROM GOOGLE FOR REFERENCE USE CASE
img=cv2.imread('car11 test.jpeg')
gray=cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(cv2.cvtColor(gray, cv2.COLOR_BGR2RGB))
bfilter = cv2.bilateralFilter(gray, 11, 17, 17) #Noise reduction
edged = cv2.Canny(bfilter, 30, 200) #Edge detection
plt.imshow(cv2.cvtColor(edged, cv2.COLOR_BGR2RGB))
keypoints = cv2.findContours(edged.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = imutils.grab_contours(keypoints)
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:10]
location = None
for contour in contours:
approx = cv2.approxPolyDP(contour, 10, True)
if len(approx) == 4:
location = approx
break
mask = np.zeros(gray.shape, np.uint8)
new_image = cv2.drawContours(mask, [location], 0,255, -1)
new_image = cv2.bitwise_and(img,img, mask=mask)
plt.imshow(cv2.cvtColor(new_image, cv2.COLOR_BGR2RGB))
(x,y) = np.where(mask==255)
(x1, y1) = (np.min(x), np.min(y))
(x2, y2) = (np.max(x), np.max(y))
cropped_image = gray[x1:x2+1, y1:y2+1]
plt.imshow(cv2.cvtColor(cropped_image, cv2.COLOR_BGR2RGB))
reader = easyocr.Reader(['en'])
result = reader.readtext(cropped_image)
text = result[0][-2]
font = cv2.FONT_HERSHEY_SIMPLEX
res = cv2.putText(img, text=text, org=(approx[0][0][0], approx[1][0][1]+60), fontFace=font, fontScale=1, color=(0,255,0), thickness=2, lineType=cv2.LINE_AA)
res = cv2.rectangle(img, tuple(approx[0][0]), tuple(approx[2][0]), (0,255,0),3)
plt.imshow(cv2.cvtColor(res, cv2.COLOR_BGR2RGB))
#Removing spaces from the detected number
def remove(text):
return text.replace(" ", "")
extracted_number=remove(text)
print(extracted_number)
#SELENIUM TO EXTRACT DATA FROM THE THIRD PARTY WEBSITE HERE i used CARS24.Com (VALID for some number)
#YOU CAN PAY FOR OTHER THIRD PARTY WEBSITES FOR MORE NUMBER PLATES
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
reg_no=extracted_number;
driver = webdriver.Chrome("C:\\chromedriver\\chromedriver.exe")
driver.get("https://www.cars24.com/rto-vehicle-registration-details/")
driver.maximize_window()
time.sleep(5)
#Cross button
driver.find_element_by_xpath("/html/body/div[1]/div[5]/div/div/h3/div/img").click()
time.sleep(3)
#sending value
driver.find_element_by_xpath("/html/body/div[1]/div[2]/div[2]/div/div[1]/div[2]/form/div/input").click()
last=driver.find_element_by_xpath("/html/body/div[1]/div[2]/div[2]/div/div[1]/div[2]/form/div/input")
last.send_keys(reg_no)
time.sleep(2)
#button click
driver.find_element_by_xpath("/html/body/div[1]/div[2]/div[2]/div/div[1]/button").click()
time.sleep(3)
#data of user
data=driver.find_element_by_xpath("/html/body/div[1]/div[2]/div[2]/div[1]/div[1]")
data_in_text=data.text
print(data_in_text)
phone=driver.find_element_by_xpath("/html/body/div[1]/div[2]/div[2]/div[1]/div[1]/div[1]/div/ul/li[4]/span[2]")
phone_number=phone.text
#closing driver
driver.close()
#saving into a file
text_file = open("sample.txt", "w")
n = text_file.write(data_in_text)
text_file.close()
```
# Then you can send an SMS for the rule violation, etc., if you want
```
#Phone Number of user
print(phone_number)
```
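One way to sketch that follow-up step is to compose the notification text from the extracted data and hand it to an SMS gateway. The gateway call below is commented out and uses placeholder credentials (Twilio is just one assumed option; any SMS provider would do):

```python
def compose_violation_sms(plate_number, owner_details):
    # Build the SMS body for a traffic-rule violation notice
    return ("Traffic rule violation recorded for vehicle {}. "
            "Registered details: {}.".format(plate_number, owner_details))

# "KA01AB1234" is an illustrative placeholder plate, not from the notebook run
body = compose_violation_sms("KA01AB1234", "see attached RTO record")

# Example gateway call (assumption, not executed here):
# from twilio.rest import Client
# client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials
# client.messages.create(body=body, from_="+10000000000", to=phone_number)
```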
<h1>datetime library</h1>
<li>Time is linear
<li>progresses as a straight-line trajectory from the big bang
<li>to now and into the future
<li>Official datetime library documentation: https://docs.python.org/3.5/library/datetime.html
<h3>Reasoning about time is important in data analysis</h3>
<li>Analyzing financial timeseries data
<li>Looking at commuter transit passenger flows by time of day
<li>Understanding web traffic by time of day
<li>Examining seasonality in department store purchases
<h3>The datetime library</h3>
<li>understands the relationship between different points of time
<li>understands how to do operations on time
<h3>Example:</h3>
<li>Which is greater? "10/24/2017" or "11/24/2016"
```
d1 = "10/24/2017"
d2 = "11/24/2016"
max(d1,d2)
```
<li>How much time has passed?
```
d1 - d2
```
<h4>Obviously that's not going to work. </h4>
<h4>We can't do date operations on strings</h4>
<h4>Let's see what happens with datetime</h4>
```
import datetime
d1 = datetime.date(2016,11,24)
d2 = datetime.date(2017,10,24)
max(d1,d2)
print(d2 - d1)
```
<li>datetime objects understand time
<h3>The datetime library contains several useful types</h3>
<li>date: stores the date (month,day,year)
<li>time: stores the time (hours,minutes,seconds)
<li>datetime: stores the date as well as the time (month,day,year,hours,minutes,seconds)
<li>timedelta: duration between two datetime or date objects
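The four types can be constructed side by side:

```python
import datetime

d = datetime.date(2017, 10, 24)                  # a calendar date
t = datetime.time(13, 30, 0)                     # a time of day
dt = datetime.datetime(2017, 10, 24, 13, 30, 0)  # date + time combined
delta = dt - datetime.datetime(2016, 11, 24)     # subtraction yields a timedelta
```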
<h3>datetime.date</h3>
```
import datetime
century_start = datetime.date(2000,1,1)
today = datetime.date.today()
print(century_start,today)
print("We are",today-century_start,"days into this century")
print(type(century_start))
print(type(today))
```
<h3>For a cleaner output</h3>
```
print("We are",(today-century_start).days,"days into this century")
```
<h3>datetime.datetime</h3>
```
century_start = datetime.datetime(2000,1,1,0,0,0)
time_now = datetime.datetime.now()
print(century_start,time_now)
print("we are",time_now - century_start,"(days, hours:minutes:seconds) into this century")
```
<h4>datetime objects can check validity</h4>
<li>A ValueError exception is raised if the object is invalid</li>
```
try:
    some_date = datetime.date(2015,2,29) # invalid: 2015 is not a leap year
except ValueError as error:
    print(error)
#some_date = datetime.date(2016,2,29) # valid: 2016 is a leap year
#some_time = datetime.datetime(2015,2,28,23,60,0) # invalid: minute must be in 0..59
```
<h3>datetime.timedelta</h3>
<h4>Used to store the duration between two points in time</h4>
```
century_start = datetime.datetime(2000,1,1,0,0,0)
time_now = datetime.datetime.now()
time_since_century_start = time_now - century_start
print("days since century start",time_since_century_start.days)
print("seconds since century start",time_since_century_start.total_seconds())
print("minutes since century start",time_since_century_start.total_seconds()/60)
print("hours since century start",time_since_century_start.total_seconds()/60/60)
```
<h3>datetime.time</h3>
```
date_and_time_now = datetime.datetime.now()
time_now = date_and_time_now.time()
print(time_now)
```
<h4>You can do arithmetic operations on datetime objects</h4>
<li>You can use timedelta objects to calculate new dates or times from a given date
```
today=datetime.date.today()
five_days_later=today+datetime.timedelta(days=5)
print(five_days_later)
now=datetime.datetime.today()
five_minutes_and_five_seconds_later = now + datetime.timedelta(minutes=5,seconds=5)
print(five_minutes_and_five_seconds_later)
now=datetime.datetime.today()
five_minutes_and_five_seconds_earlier = now+datetime.timedelta(minutes=-5,seconds=-5)
print(five_minutes_and_five_seconds_earlier)
```
<li>But you can't use timedelta on time objects. If you do, you'll get a TypeError exception
```
time_now=datetime.datetime.now().time() #Returns the time component (drops the day)
print(time_now)
thirty_seconds=datetime.timedelta(seconds=30)
try:
    time_later = time_now + thirty_seconds
except TypeError as error:
    print(error) # time + timedelta is not supported
#Bug or feature?
#But this is Python
#And we can always get around something by writing a new function!
#Let's write a small function to get around this problem
def add_to_time(time_object,time_delta):
import datetime
temp_datetime_object = datetime.datetime(500,1,1,time_object.hour,time_object.minute,time_object.second)
print(temp_datetime_object)
return (temp_datetime_object+time_delta).time()
#And test it
time_now=datetime.datetime.now().time()
thirty_seconds=datetime.timedelta(seconds=30)
print(time_now,add_to_time(time_now,thirty_seconds))
```
<h2>datetime and strings</h2>
<h4>datetime.strptime</h4>
<li>datetime.strptime(): grabs time from a string and creates a date or datetime or time object
<li>The programmer needs to tell the function what format the string is using
<li> See http://pubs.opengroup.org/onlinepubs/009695399/functions/strptime.html for how to specify the format
```
date='01-Apr-03'
date_object=datetime.datetime.strptime(date,'%d-%b-%y')
print(date_object)
#Unfortunately, there is no similar thing for time delta
#So we have to be creative!
bus_travel_time='2:15:30'
hours,minutes,seconds=bus_travel_time.split(':')
x=datetime.timedelta(hours=int(hours),minutes=int(minutes),seconds=int(seconds))
print(x)
#Or write a function that will do this for a particular format
def get_timedelta(time_string):
hours,minutes,seconds = time_string.split(':')
import datetime
return datetime.timedelta(hours=int(hours),minutes=int(minutes),seconds=int(seconds))
```
<h4>datetime.strftime</h4>
<li>The strftime function flips the strptime function. It converts a datetime object to a string
<li>with the specified format
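Since strftime and strptime are inverses, a parse/format round trip returns the original string:

```python
import datetime

parsed = datetime.datetime.strptime('01-Apr-03', '%d-%b-%y')
formatted = datetime.datetime.strftime(parsed, '%d-%b-%y')
```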
```
now = datetime.datetime.now()
string_now = datetime.datetime.strftime(now,'%m/%d/%y %H:%M:%S')
print(now,string_now)
print(str(now)) #Or you can use the default conversion
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Algorithms/CloudMasking/landsat457_surface_reflectance.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Algorithms/CloudMasking/landsat457_surface_reflectance.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Algorithms/CloudMasking/landsat457_surface_reflectance.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# This example demonstrates the use of the Landsat 4, 5 or 7
# surface reflectance QA band to mask clouds.
def cloudMaskL457(image):
    qa = image.select('pixel_qa')
    # If the cloud bit (5) is set and the cloud confidence (7) is high
    # or the cloud shadow bit is set (3), then it's a bad pixel.
    cloud = qa.bitwiseAnd(1 << 5) \
        .And(qa.bitwiseAnd(1 << 7)) \
        .Or(qa.bitwiseAnd(1 << 3))
    # Remove edge pixels that don't occur in all bands
    mask2 = image.mask().reduce(ee.Reducer.min())
    return image.updateMask(cloud.Not()).updateMask(mask2)

# Map the function over the collection and take the median.
collection = ee.ImageCollection('LANDSAT/LT05/C01/T1_SR') \
    .filterDate('2010-04-01', '2010-07-30')
composite = collection \
    .map(cloudMaskL457) \
    .median()

# Display the results in a cloudy place.
Map.setCenter(-6.2622, 53.3473, 12)
Map.addLayer(composite, {'bands': ['B3', 'B2', 'B1'], 'min': 0, 'max': 3000})
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
```
%%pyspark
df = spark.read.load('abfss://capture@splacceler5lmevhdeon4ym.dfs.core.windows.net/SeattlePublicLibrary/Library_Collection_Inventory.csv',
                     format='csv',
                     header=True  # the source CSV file has a header row
                     )
display(df.limit(10))
%%pyspark
# Show Schema
df.printSchema()
%%pyspark
from pyspark.sql import SparkSession
from pyspark.sql.types import *
# Primary storage info
capture_account_name = 'splacceler5lmevhdeon4ym' # fill in your primary account name
capture_container_name = 'capture' # fill in your container name
capture_relative_path = 'SeattlePublicLibrary/Library_Collection_Inventory.csv' # fill in your relative folder path
capture_adls_path = 'abfss://%s@%s.dfs.core.windows.net/%s' % (capture_container_name, capture_account_name, capture_relative_path)
print('Primary storage account path: ' + capture_adls_path)
%%pyspark
from pyspark.sql.types import StructType, StructField, IntegerType, StringType, DoubleType, DateType, TimestampType
csvSchema = StructType([
StructField('bibnum', IntegerType(), True),
StructField('title', StringType(), True),
StructField('author', StringType(), True),
StructField('isbn', StringType(), True),
StructField('publication_year', StringType(), True),
StructField('publisher', StringType(), True),
StructField('subjects', StringType(), True),
StructField('item_type', StringType(), True),
StructField('item_collection', StringType(), True),
StructField('floating_item', StringType(), True),
StructField('item_location', StringType(), True),
StructField('reportDate', StringType(), True),
StructField('item_count', IntegerType(), True)
])
CheckByTPI_capture_df = spark.read.format('csv').option('header', 'True').schema(csvSchema).load(capture_adls_path)
display(CheckByTPI_capture_df.limit(10))
%%pyspark
from pyspark.sql.functions import to_date, to_timestamp, col, date_format, current_timestamp
df_final = (CheckByTPI_capture_df.withColumn("report_date", to_date(col("reportDate"),"MM/dd/yyyy")).drop("reportDate")
.withColumn('loadDate', date_format(current_timestamp(), 'MM/dd/yyyy hh:mm:ss aa'))
.withColumn("load_date", to_timestamp(col("loadDate"),"MM/dd/yyyy hh:mm:ss aa")).drop("loadDate")
)
%%pyspark
# Show Schema
df_final.printSchema()
display(df_final.limit(10))
%%pyspark
from pyspark.sql import SparkSession
from pyspark.sql.types import *
# Primary storage info
compose_account_name = 'splacceler5lmevhdeon4ym' # fill in your primary account name
compose_container_name = 'compose' # fill in your container name
compose_relative_path = 'SeattlePublicLibrary/LibraryCollectionInventory/' # fill in your relative folder path
compose_adls_path = 'abfss://%s@%s.dfs.core.windows.net/%s' % (compose_container_name, compose_account_name, compose_relative_path)
print('Primary storage account path: ' + compose_adls_path)
%%pyspark
compose_parquet_path = compose_adls_path + 'CollectionInventory.parquet'
print('parquet file path: ' + compose_parquet_path)
%%pyspark
df_final.write.parquet(compose_parquet_path, mode = 'overwrite')
%%sql
-- Create database SeattlePublicLibrary only if database with same name does not exist
CREATE DATABASE IF NOT EXISTS SeattlePublicLibrary
%%sql
-- Create table CheckoutsByTitlePhysicalItemsschemafinal only if table with same name does not exist
CREATE TABLE IF NOT EXISTS SeattlePublicLibrary.library_collection_inventory
(title STRING
,author STRING
,isbn STRING
,publication_year STRING
,publisher STRING
,subjects STRING
,item_type STRING
,item_collection STRING
,floating_item STRING
,item_location STRING
,report_date DATE
,item_count INTEGER
,load_date TIMESTAMP
)
USING PARQUET OPTIONS (path 'abfss://compose@splacceler5lmevhdeon4ym.dfs.core.windows.net/SeattlePublicLibrary/LibraryCollectionInventory/CollectionInventory.parquet')
%%sql
--DROP TABLE SeattlePublicLibrary.library_collection_inventory
```
# LinearSVR with MinMaxScaler & Power Transformer
This code template is for a regression task using LinearSVR, based on the Support Vector Machine algorithm, with PowerTransformer as the feature transformation technique and MinMaxScaler for feature scaling in a pipeline.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.preprocessing import PowerTransformer, MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVR
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path=""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_values
target=''
```
### Data fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the `head` function to display the initial rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data preprocessing
Since most of the machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values (if any exist) and convert string categorical columns in the dataset by one-hot encoding them.
```
def NullClearner(df):
    if isinstance(df, pd.Series) and df.dtype in ["float64", "int64"]:
        df.fillna(df.mean(), inplace=True)
        return df
    elif isinstance(df, pd.Series):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)#performing datasplitting
```
### Model
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.
A Support Vector Machine is a discriminative classifier formally defined by a separating hyperplane. In other words, given labelled data points, the SVM outputs a hyperplane that classifies new cases. In 2-dimensional space, this hyperplane is a line separating the plane into two segments, with each class occupying one side.
LinearSVR is similar to SVR with kernel=’linear’. It has more flexibility in the choice of tuning parameters and is suited for large samples.
#### Feature Transformation
PowerTransformer applies a power transform featurewise to make data more Gaussian-like.
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
For more information... [click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html)
#### Model Tuning Parameters
1. epsilon : float, default=0.0
> Epsilon parameter in the epsilon-insensitive loss function.
2. loss : {‘epsilon_insensitive’, ‘squared_epsilon_insensitive’}, default=’epsilon_insensitive’
> Specifies the loss function. 'epsilon_insensitive' is the standard epsilon-insensitive (L1) loss, while 'squared_epsilon_insensitive' is the square of that loss (L2 loss).
3. C : float, default=1.0
> Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive.
4. tol : float, default=1e-4
> Tolerance for stopping criteria.
5. dual : bool, default=True
> Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=False when n_samples > n_features.
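To make the role of the `epsilon` parameter concrete, here is a minimal sketch of the epsilon-insensitive loss in plain Python (the helper function name is ours, purely for illustration; it is not part of sklearn):

```
# Hypothetical illustration of the per-sample epsilon-insensitive loss
def epsilon_insensitive_loss(y_true, y_pred, epsilon=0.0):
    # Errors smaller than epsilon fall inside the "tube" and incur no penalty
    return max(0.0, abs(y_true - y_pred) - epsilon)

print(epsilon_insensitive_loss(3.0, 3.4, epsilon=0.5))  # inside the tube -> 0.0
print(epsilon_insensitive_loss(3.0, 4.0, epsilon=0.5))  # outside the tube -> 0.5
```

With `epsilon=0.0` (the default), every deviation is penalized, which reduces to the plain absolute error.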
### Feature Scaling
#### MinMaxScalar:
Transform features by scaling each feature to a given range.
This estimator scales and translates each feature individually such that it is in the given range on the training set, e.g. between zero and one.
```
model=make_pipeline(MinMaxScaler(),PowerTransformer(),LinearSVR())
model.fit(x_train, y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Model score (R2) {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination, i.e. the proportion of the variability in the target that is explained by the model.
> **mae**: The **mean absolute error** function calculates the average absolute difference between the actual and predicted values.
> **mse**: The **mean squared error** function calculates the average squared difference between the actual and predicted values, penalizing the model more heavily for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the actual observations: the record number on the x-axis and the first 20 values of y_test on the y-axis. We then overlay the model's predictions for the same test records, so the two curves can be compared.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Shreepad Nade, GitHub: [Profile](https://github.com/shreepad-nade)
<img src="../images/26-weeks-of-data-science-banner.jpg"/>
# Getting Started with Python
## About Python
<img src="../images/python-logo.png" alt="Python" style="width: 500px;"/>
Python is a
- general purpose programming language
- interpreted, not compiled
- both **dynamically typed** _and_ **strongly typed**
- supports multiple programming paradigms: object oriented, functional
- comes in 2 main versions in use today: 2.7 and 3.x
## Why Python for Data Science?
***
Python is great for data science because:
- general purpose programming language (as opposed to R)
- faster idea to execution to deployment
- battle-tested
- mature ML libraries
<div class="alert alert-block alert-success">And it is easy to learn !</div>
<img src="../images/icon/Concept-Alert.png" alt="Concept-Alert" style="width: 100px;float:left; margin-right:15px"/>
<br />
## Python's Interactive Console : The Interpreter
***
- The Python interpreter is a console that allows interactive development
- We are currently using the Jupyter notebook, which uses an advanced Python interpreter called IPython
- This gives us much more power and flexibility
**Let's try it out !**
```
print("Hello World!")  # As usual with any language, we start with the print function
```
# What are we going to learn today?
***
- CHAPTER 1 - **Python Basics**
- **Strings**
- Creating a String, variable assignments
- String Indexing & Slicing
- String Concatenation & Repetition
- Basic Built-in String Methods
- **Numbers**
- Types of Numbers
- Basic Arithmetic
- CHAPTER 2 - **Data Types & Data Structures**
- Lists
- Dictionaries
- Sets & Booleans
- CHAPTER 3 - **Python Programming Constructs**
- Loops & Iterative Statements
- if,elif,else statements
- for loops, while loops
- Comprehensions
- Exception Handling
- Modules, Packages,
- File I/O operations
# CHAPTER - 1 : Python Basics
***
Let's understand
- Basic data types
- Variables and Scoping
- Modules, Packages and the **`import`** statement
- Operators
<img src="../images/icon/Technical-Stuff.png" alt="Concept-Alert" style="width: 100px;float:left; margin-right:15px"/>
<br />
## Strings
***
Strings are used in Python to record text information, such as a name. Strings in Python are actually a *sequence*, which basically means Python keeps track of every element in the string in order. For example, Python understands the string 'hello' to be a sequence of letters in a specific order. This means we will be able to use indexing to grab particular letters (like the first letter, or the last letter).
This idea of a sequence is an important one in Python and we will touch upon it later on in the future.
In this lecture we'll learn about the following:
1.) Creating Strings
2.) Printing Strings
3.) String Indexing and Slicing
4.) String Properties
5.) String Methods
6.) Print Formatting
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
### Creating a String
***
To create a string in Python you need to use either single quotes or double quotes. For example:
```
# Single word
print('hello World!')
print() # Used to have a line space between two sentences. Try deleting this line & seeing the difference.
# Entire phrase
print('This is also a string')
```
<img src="../images/icon/Technical-Stuff.png" alt="Concept-Alert" style="width: 100px;float:left; margin-right:15px"/>
<br />
## Variables : Store your Value in me!
***
In the code below we begin to explore how we can assign a string to a variable. This can be extremely useful in many cases: you can refer to the variable instead of typing the string every time, which keeps the code clean and less redundant.
Example syntax to assign a value or expression to a variable,
variable_name = value or expression
Now let's get coding!!. With the below block of code showing how to assign a string to variable.
```
s = 'New York'
print(s)
print(type(s))
print(len(s)) # what's the string length
```
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
### String Indexing
***
We know strings are a sequence, which means Python can use indexes to call parts of the sequence. Let's learn how this works.
In Python, we use brackets [] after an object to call its index. We should also note that indexing starts at 0 in Python. Let's create a new object called s and then walk through a few examples of indexing.
```
# Assign s as a string
s = 'Hello World'
# Print the object
print(s)
print()
# Show first element (in this case a letter)
print(s[0])
print()
# Show the second element (also a letter)
print(s[1])
# Show elements from index 0 up to (but not including) index 4
print(s[0:4])
```
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
## String Concatenation and Repetition
***
**String Concatenation** is a process to combine two strings. It is done using the '+' operator.
**String Repetition** is a process of repeating a same string multiple times
The examples of the above concepts is as follows.
```
# concatenation (addition)
s1 = 'Hello'
s2 = "World"
print(s1 + " " + s2)
# repetition (multiplication)
print("Hello_" * 3)
print("-" * 10)
print("=" * 10)
```
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
## String Slicing & Indexing
***
**String Indexing** is used to to select the letter at a particular index/position.
**String Slicing** is a process to select a subset of an entire string
The examples of the above stated are as follows
```
s = "Namaste World"
# print sub strings
print(s[1]) #This is indexing.
print(s[6:11]) #This is known as slicing.
print(s[-5:-1])
# test substring membership
print("Wor" in s)
```
Note the slicing above. Here we're telling Python to grab everything from index 6 up to (but not including) index 11, and from the fifth-last up to (but not including) the last element. You'll notice this a lot in Python, where slices are usually read as "up to, but not including".
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
## Basic Built-in String methods
***
Objects in Python usually have built-in methods. These methods are functions inside the object (we will learn about these in much more depth later) that can perform actions or commands on the object itself.
We call methods with a period and then the method name. Methods are in the form:
object.method(parameters)
Where parameters are extra arguments we can pass into the method. Don't worry if the details don't make 100% sense right now. Later on we will be creating our own objects and functions!
Here are some examples of built-in methods in strings:
```
s = "Hello World"
print(s.upper()) ## Convert all the element of the string to Upper case..!!
print(s.lower()) ## Convert all the element of the string to Lower case..!!
```
## Print Formatting
We can use the .format() method to add formatted objects to printed string statements.
The easiest way to show this is through an example:
```
name = "Bibek"
age = 22
married = False
print("My name is %s, my age is %s, and it is %s that I am married" % (name, age, married))
print("My name is {}, my age is {}, and it is {} that I am married".format(name, age, married))
```
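Since Python 3.6, f-strings offer a third, often more readable, formatting option. A quick sketch using the same variables as above:

```
name = "Bibek"
age = 22
# Expressions inside {} are evaluated and inserted directly into the string
message = f"My name is {name} and my age is {age}"
print(message)
```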
<img src="../images/icon/Concept-Alert.png" alt="Concept-Alert" style="width: 100px;float:left; margin-right:15px"/>
<br />
## Numbers
***
Having worked with string we will turn our attention to numbers
We'll learn about the following topics:
1.) Types of Numbers in Python
2.) Basic Arithmetic
3.) Object Assignment in Python
<img src="../images/icon/Concept-Alert.png" alt="Concept-Alert" style="width: 100px;float:left; margin-right:15px"/>
<br />
## Types of numbers
***
Python has various "types" of numbers (numeric literals). We'll mainly focus on integers and floating point numbers.
Integers are just whole numbers, positive or negative. For example: 2 and -2 are examples of integers.
Floating point numbers in Python are notable because they have a decimal point in them, or use an exponential (e) to define the number. For example 2.0 and -2.1 are examples of floating point numbers. 4E2 (4 times 10 to the power of 2) is also an example of a floating point number in Python.
Throughout this course we will be mainly working with integers or simple float number types.
Here is a table of the two main types we will spend most of our time working with some examples:
<table>
<tr>
<th>Examples</th>
<th>Number "Type"</th>
</tr>
<tr>
<td>1,2,-5,1000</td>
<td>Integers</td>
</tr>
<tr>
<td>1.2,-0.5,2e2,3E2</td>
<td>Floating-point numbers</td>
</tr>
</table>
Now let's start with some basic arithmetic.
## Basic Arithmetic
```
# Addition
print(2+1)
# Subtraction
print(2-1)
# Multiplication
print(2*2)
# Division
print(3/2)
```
## Arithmetic continued
```
# Powers
print(2 ** 3)
print(3 ** 2)

# Order of operations followed in Python
print(2 + 10 * 10 + 3)

# Can use parentheses to specify order
print((2 + 10) * (10 + 3))
```
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
## Variable Assignments
***
Now that we've seen how to use numbers in Python as a calculator let's see how we can assign names and create variables.
We use a single equals sign to assign labels to variables. Let's see a few examples of how we can do this.
```
# Let's create an object called "a" and assign it the number 5
a = 5
```
Now if I call *a* in my Python script, Python will treat it as the number 5.
```
# Adding the objects
a+a
```
What happens on reassignment? Will Python let us write it over?
```
# Reassignment
a = 10
# Check
a
```
<img src="../images/icon/ppt-icons.png" alt="ppt-icons" style="width: 100px;float:left; margin-right:15px"/>
<br />
### Mini Challenge - 1
***
It's your turn now! Store the word `Hello` in `my_string`, then print `my_string + name`.
```
my_string = 'Hello '
name = 'Bibek'
print(my_string + name)
```
<img src="../images/icon/ppt-icons.png" alt="ppt-icons" style="width: 100px;float:left; margin-right:15px"/>
<br />
### Mini Challenge - 2
***
**It's your turn now!** Given the numbers stored in variables `a` and `b`, can you write a simple piece of code to compute the mean of these two numbers and assign it to a variable `mean`?
```
a = 8
b = 6
mean = (a+b)/2
print(mean)
```
<img src="../images/icon/Pratical-Tip.png" alt="Pratical-Tip" style="width: 100px;float:left; margin-right:15px"/>
<br />
The names you use when creating these labels need to follow a few rules:
1. Names can not start with a number.
2. There can be no spaces in the name, use _ instead.
3. Can't use any of these symbols :'",<>/?|\()!@#$%^&*~-+
Using variable names can be a very useful way to keep track of different variables in Python. For example, the following name violates rule 3 and raises a `SyntaxError`:
```
a$ = 9  # invalid: '$' is not allowed in variable names
```
## From Sales to Data Science
***
Discover the story of Sagar Dawda, who made a successful transition from Sales to Data Science. Making a successful switch to Data Science is a game of decision and determination, but it's a long road from decision to determination. To read more, click <a href="https://greyatom.com/blog/2018/03/career-transition-decision-to-determination/">here</a>
# CHAPTER - 2 : Data Types & Data Structures
***
- Everything in Python is an "object", including integers/floats
- Most common and important types (classes)
- "Single value": None, int, float, bool, str, complex
- "Multiple values": list, tuple, set, dict
- Single/Multiple isn't a real distinction, this is for explanation
- There are many others, but these are most frequently used
### Identifying Data Types
```
a = 42
b = 32.30
print(type(a))  # gets the type of a
print(type(b))  # gets the type of b
```
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
### Single Value Types
***
- int: Integers
- float: Floating point numbers
- bool: Boolean values (True, False)
- complex: Complex numbers
- str: String
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
## Lists
***
Lists can be thought of the most general version of a *sequence* in Python. Unlike strings, they are mutable, meaning the elements inside a list can be changed!
In this section we will learn about:
1.) Creating lists
2.) Indexing and Slicing Lists
3.) Basic List Methods
4.) Nesting Lists
5.) Introduction to List Comprehensions
Lists are constructed with brackets [] and commas separating every element in the list.
Let's go ahead and see how we can construct lists!
```
# Assign a list to a variable named my_list
my_list = [1,2,3]
```
We just created a list of integers, but lists can actually hold different object types. For example:
```
my_list = ['A string',23,100.232,'o']
```
Just like strings, the len() function will tell you how many items are in the sequence of the list.
```
len(my_list)
```
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
### Adding New Elements to a list
***
We use two special commands to add new elements to a list. Let's make a new list to remind ourselves of how this works:
```
my_list = ['one','two','three',4,5]
# append a value to the end of the list
l = [1, 2.3, ['a', 'b'], 'New York']
l.append(3.1)
print(l)
# extend a list with another list.
l = [1, 2, 3]
l.extend([4, 5, 6])
print(l)
```
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
## Slicing
***
Slicing is used to access individual elements or a range of elements in a list.
Python supports "slicing" indexable sequences. The syntax for slicing lists is:
- `list_object[start:end:step]` or
- `list_object[start:end]`
start and end are indices (start inclusive, end exclusive). All slicing values are optional.
```
lst = list(range(10)) # create a list containing 10 numbers starting from 0
print(lst)
print("elements from index 4 to 7:", lst[4:7])
print("alternate elements, starting at index 0:", lst[0::2]) # prints elements from index 0 till last index with a step of 2
print("every third element, starting at index 1:", lst[1::3]) # prints elements from index 1 till last index with a step of 3
```
<div class="alert alert-block alert-success">**Other `list` operations**</div>
***
- **`.append`**: add element to end of list
- **`.insert`**: insert element at given index
- **`.extend`**: extend one list with another list
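Of these, `.insert` is the one we haven't demonstrated yet; a quick sketch of all three together:

```
l = [1, 2, 4]
l.insert(2, 3)     # insert the value 3 at index 2
l.append(5)        # add a single element at the end
l.extend([6, 7])   # extend with another list
print(l)           # [1, 2, 3, 4, 5, 6, 7]
```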
# Did you know?
**Did you know that Japanese Anime Naruto is related to Data Science. Find out how**
<img src="https://greyatom.com/blog/wp-content/uploads/2017/06/naruto-1-701x321.png">
Find out here https://medium.com/greyatom/naruto-and-data-science-how-data-science-is-an-art-and-data-scientist-an-artist-c5f16a68d670
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
# Dictionaries
***
Now we're going to switch gears and learn about *mappings* called *dictionaries* in Python. If you're familiar with other languages you can think of these Dictionaries as hash tables.
This section will serve as a brief introduction to dictionaries and consist of:
1.) Constructing a Dictionary
2.) Accessing objects from a dictionary
3.) Nesting Dictionaries
4.) Basic Dictionary Methods
A Python dictionary consists of a key and then an associated value. That value can be almost any Python object.
## Constructing a Dictionary
***
Let's see how we can construct dictionaries to get a better understanding of how they work!
```
# Make a dictionary with {} and : to signify a key and a value
my_dict = {'key1':'value1','key2':'value2'}
# Call values by their key
my_dict['key2']
```
We can also change the value stored at a key. For instance:
```
my_dict['key1']=123
my_dict
# Subtract 123 from the value
my_dict['key1'] = my_dict['key1'] - 123
#Check
my_dict['key1']
```
A quick note, Python has a built-in method of doing a self subtraction or addition (or multiplication or division). We could have also used += or -= for the above statement. For example:
```
# Set the object equal to itself minus 123
my_dict['key1'] -= 123
my_dict['key1']
```
Now it's your turn to get hands-on with dictionaries: create an empty dict, then create a new key called `animal` and assign the value `'Dog'` to it.
```
# Create a new dictionary
d = {}
# Create a new key through assignment
d['animal'] = 'Dog'
```
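The basic dictionary methods mentioned earlier can be sketched briefly: `.keys()`, `.values()`, and `.items()` to inspect a dictionary, and `.get()` for lookups that won't fail on a missing key (the example dictionary here is ours, for illustration):

```
d = {'animal': 'Dog', 'legs': 4}
print(list(d.keys()))             # ['animal', 'legs']
print(list(d.values()))           # ['Dog', 4]
print(list(d.items()))            # [('animal', 'Dog'), ('legs', 4)]
print(d.get('color', 'unknown'))  # .get returns the default when the key is missing
```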
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
# Set and Booleans
***
There are two other object types in Python that we should quickly cover. Sets and Booleans.
## Sets
Sets are an unordered collection of *unique* elements. We can construct them by using the set() function. Let's go ahead and make a set to see how it works.
#### Set Theory
<img src="../images/sets2.png" width="60%"/>
```
x = set()
# We add to sets with the add() method
x.add(1)
#Show
x
```
Note the curly brackets. This does not indicate a dictionary! Although you can draw analogies as a set being a dictionary with only keys.
We know that a set has only unique entries. So what happens when we try to add something that is already in a set?
```
# Add a different element
x.add(2)
#Show
x
# Try to add the same element
x.add(1)
#Show
x
```
Notice how it won't place another 1 there. That's because a set is only concerned with unique elements! We can cast a list with multiple repeat elements to a set to get the unique elements. For example:
```
# Create a list with repeats
l = [1,1,2,2,3,4,5,6,1,1]
# Cast as set to get unique values
set(l)
```
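Booleans, the other type this section covers, hold the values `True` or `False`; comparisons produce them. A brief sketch:

```
a = True
print(type(a))   # <class 'bool'>
print(1 > 2)     # comparisons evaluate to a boolean: False
print(2 == 2)    # True
```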
<img src="../images/icon/ppt-icons.png" alt="ppt-icons" style="width: 100px;float:left; margin-right:15px"/>
<br />
### Mini Challenge - 3
***
Given the list `l`, can you access its last element?
```
l = [10,20,30,40,50]
l[-1]
```
# CHAPTER - 3 : Python Programming Constructs
***
We'll be talking about
- Looping
- Conditional Statements
- Comprehensions
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
## Loops and Iterative Statements
## If,elif,else Statements
***
if statements in Python allow us to tell the computer to perform alternative actions based on certain conditions.
Verbally, we can imagine we are telling the computer:
"Hey if this case happens, perform some action"
We can then expand the idea further with elif and else statements, which allow us to tell the computer:
"Hey if this case happens, perform some action. Else if another case happens, perform some other action. Else-- none of the above cases happened, perform this action"
Let's go ahead and look at the syntax format for if statements to get a better idea of this:
    if case1:
        perform action1
    elif case2:
        perform action2
    else:
        perform action3
```
a = 5
b = 4

if a > b:
    # we are inside the if block
    print("a is greater than b")
elif b > a:
    # we are inside the elif block
    print("b is greater than a")
else:
    # we are inside the else block
    print("a and b are equal")

# Note: Python doesn't have a switch statement
```
<img src="../images/icon/Warning.png" alt="Warning" style="width: 100px;float:left; margin-right:15px"/>
<br />
### Indentation
***
It is important to keep a good understanding of how indentation works in Python to maintain the structure and order of your code. We will touch on this topic again when we start building out functions!
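A quick sketch of why indentation matters: a block that isn't indented under its `if` raises an `IndentationError` before the code even runs. Here we use `exec` on a code string so the error can be caught and shown:

```
bad_code = "if True:\nprint('hi')"   # the body is not indented under the if
try:
    exec(bad_code)
except IndentationError as e:
    # compilation fails before anything executes
    print("IndentationError:", e)
```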
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
# For Loops
***
A **for** loop acts as an iterator in Python: it goes through the items in a *sequence* or any other iterable object. Iterables we've learned about include strings, lists, tuples, and the built-in iterables of dictionaries, such as keys or values.
We've already seen the **for** statement a little bit in past lectures but now lets formalize our understanding.
Here's the general format for a **for** loop in Python:
    for item in object:
        statements to do stuff
The variable name used for the item is completely up to the coder, so use your best judgment and choose a name that makes sense and that you will still understand when revisiting your code. This item name can then be referenced inside your loop, for example if you want to use if statements to perform checks.
Let's go ahead and work through several examples of **for** loops using a variety of data object types.
```
# Simple program to find the even numbers in a list
list_1 = [2, 4, 5, 6, 8, 7, 9, 10]   # initialise the list
for number in list_1:                # take each element of list_1 in turn
    if number % 2 == 0:              # check whether it is even
        print(number, end=' ')       # end=' ' keeps the numbers on one line, separated by spaces

lst1 = [4, 7, 13, 11, 3, 11, 15]
lst2 = []
for index, e in enumerate(lst1):
    if e == 10:
        break                        # would leave the loop early (never triggers for this list)
    if e < 10:
        continue                     # skip values below 10
    lst2.append((index, e * e))
else:
    # the for-else clause runs only when the loop finishes without a break
    print("out of loop without using break statement")
lst2
```
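Dictionaries, mentioned above as iterable through their keys or values, work directly in a `for` loop; a short sketch:

```
prices = {"apple": 3, "banana": 1, "cherry": 5}

# iterating a dict directly yields its keys, in insertion order
for fruit in prices:
    print(fruit, end=' ')
print()

# .items() yields (key, value) pairs, handy for unpacking
total = 0
for fruit, price in prices.items():
    total += price
print("total:", total)   # total: 9
```

Use `.values()` when only the values matter, and `.items()` when you need both.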
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
# While loops
***
The **while** statement in Python is one of the most general ways to perform iteration. A **while** statement will repeatedly execute a single statement or group of statements as long as the condition is true. The reason it is called a 'loop' is because the code statements are looped through over and over again until the condition is no longer met.
The general format of a while loop is:
    while test:
        code statements
    else:
        final code statements
Let’s look at a few simple while loops in action.
```
x = 0
while x < 10:
    print('x is currently:', x, end=' ')   # end=' ' keeps the next print on the same line
    print('x is still less than 10, adding 1 to x')
    x += 1
```
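The `else` clause shown in the general format above runs only when the condition becomes false without hitting a `break`; a small sketch of both ideas:

```
# search for the first multiple of 7 below a limit
n = 1
while n < 50:
    if n % 7 == 0:
        found = n
        print('found', found)   # found 7
        break                   # leaves the loop, skipping the else clause
    n += 1
else:
    # runs only if the condition became false without a break
    print('no multiple of 7 found')
```

Raising the target (say, requiring a multiple of 101) would exhaust the loop and trigger the `else` branch instead.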
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
## Comprehensions
***
- Python provides syntactic sugar to write small loops to generate lists/sets/tuples/dicts in one line
- These are called comprehensions, and can greatly increase development speed and readability
Syntax:
```
sequence = [expression(element) for element in iterable if condition]
```
The brackets used for creating the comprehension define what type of object is created.
Use **[ ]** for lists, **()** for _generators_, **{}** for sets and dicts
### `list` Comprehension
```
names = ["Ravi", "Pooja", "Vijay", "Kiran"]
hello = ["Hello " + name for name in names]
print(hello)
numbers = [55, 32, 87, 99, 10, 54, 32]
even = [num for num in numbers if num % 2 == 0]
print(even)
odd_squares = [(num, num * num) for num in numbers if num % 2 == 1]
print(odd_squares)
```
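Following the bracket rules above, here is a sketch of the set, dict, and generator forms using a similar list of numbers:

```
numbers = [55, 32, 87, 99, 10, 54, 32]

# {expression} builds a set, so duplicate results collapse
remainders = {num % 5 for num in numbers}
print(remainders)                 # {0, 2, 4}

# {key: value} builds a dict
small_squares = {num: num * num for num in numbers if num < 50}
print(small_squares)              # {32: 1024, 10: 100}

# (expression) builds a lazy generator; sum() consumes it
total = sum(num for num in numbers)
print(total)                      # 369
```

The generator form never materialises the intermediate list, which matters for large inputs.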
<img src="../images/icon/Technical-Stuff.png" alt="Technical-Stuff" style="width: 100px;float:left; margin-right:15px"/>
<br />
## Exception Handling
***
#### try and except
The basic terminology and syntax used to handle errors in Python is the **try** and **except** statements. The code which can cause an exception to occur is put in the *try* block, and the handling of the exception is implemented in the *except* block of code. The syntax form is:
    try:
        You do your operations here...
        ...
    except ExceptionI:
        If there is ExceptionI, then execute this block.
    except ExceptionII:
        If there is ExceptionII, then execute this block.
    ...
    else:
        If there is no exception then execute this block.
We can also catch any exception by using a bare `except:`. To get a better understanding of all this, let's check out an example that divides by zero and handles the resulting error:
```
try:
    x = 1 / 0
except ZeroDivisionError:
    print('divided by zero')
    print('executed when exception occurs')
else:
    print('executed only when exception does not occur')
finally:
    print('finally block, always executed')
```
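Beyond naming exception types, you can capture the exception object with `as`, or handle several types in one clause; a brief sketch:

```
def safe_divide(a, b):
    try:
        return a / b
    except (ZeroDivisionError, TypeError) as e:
        # e is the exception instance, carrying its type and message
        print('handled:', type(e).__name__)
        return None

print(safe_divide(10, 2))   # 5.0
print(safe_divide(10, 0))   # handled: ZeroDivisionError, then None
```

The same clause also catches `safe_divide(10, 'x')`, since dividing by a string raises `TypeError`.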
<img src="../images/icon/Concept-Alert.png" alt="Concept-Alert" style="width: 100px;float:left; margin-right:15px"/>
<br />
## Modules, Packages, and `import`
***
A module is a collection of functions and variables that have been bundled together in a single file. Modules help us with:
- code organization, packaging, and reusability
- Module: a Python file
- Package: a folder with an ``__init__.py`` file
- the namespace is based on the file's directory path
Modules are usually organised around a theme. Let's see how to use one. To access a module we import it using Python's `import` statement. The `math` module provides access to mathematical functions.
```
# import the math module
import math
# use the log10 function in the math module
math.log10(123)
```
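Besides `import math`, two other standard forms are importing specific names and aliasing the module; a quick sketch:

```
# import only the names you need
from math import sqrt, pi
print(sqrt(16))        # 4.0
print(round(pi, 2))    # 3.14

# alias the module to a shorter name (a common convention)
import math as m
print(m.log10(1000))   # 3.0
```

Aliasing is the idiom behind familiar lines like `import numpy as np` and `import pandas as pd`.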
<img src="../images/icon/Technical-Stuff.png" alt="Concept-Alert" style="width: 100px;float:left; margin-right:15px"/>
<br />
## File I/O: Reading and Writing Files
***
- Python provides a `file` object to read text/binary files.
- This is similar to the `FileStream` object in other languages.
- Since a `file` is a resource, it must be closed after use. This can be done manually, or using a context manager (**`with`** statement)
<div class="alert alert-block alert-info">Create a file in the current directory</div>
```
with open('myfile.txt', 'w') as f:
    f.write("This is my first file!\n")
    f.write("Second line!\n")
    f.write("Last line!\n")

# let's verify it was really created.
# For that, let's find out which directory we're working from
import os
print(os.path.abspath(os.curdir))
```
<div class="alert alert-block alert-info">Read the newly created file</div>
```
# read the file we just created
with open('myfile.txt', 'r') as f:
    for line in f:
        print(line)
```
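One detail worth knowing: each `line` keeps its trailing newline, which is why `print(line)` above shows a blank line between rows. A sketch of the usual fix with `rstrip` (using a throwaway file name, `demo.txt`):

```
# write then re-read a small file, stripping the trailing newline
with open('demo.txt', 'w') as f:
    f.write("alpha\nbeta\n")

lines = []
with open('demo.txt', 'r') as f:
    for line in f:
        lines.append(line.rstrip('\n'))   # drop the newline print() would otherwise duplicate

print(lines)   # ['alpha', 'beta']
```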
<img src="../images/icon/ppt-icons.png" alt="ppt-icons" style="width: 100px;float:left; margin-right:15px"/>
<br />
### Mini Challenge - 4
***
Can you compute the square of a number assigned to a variable a using the math module?
```
import math
number = 9
square_of_number = math.pow(number, 2)   # math.pow takes the base and the exponent as arguments
print(square_of_number)
```
<img src="../images/icon/ppt-icons.png" alt="ppt-icons" style="width: 100px;float:left; margin-right:15px"/>
<br />
### Mini Challenge - 5
***
Can you create a list of 10 numbers, iterate through the list, and print the square of each number?
```
l = [i for i in range(1, 11)]
for i in l:
    print(i * i, end=' ')
```
# Further Reading
- Official Python Documentation: https://docs.python.org/
<img src="../images/icon/Recap.png" alt="Recap" style="width: 100px;float:left; margin-right:15px"/>
<br />
# In-session Recap Time
***
* Python Basics
    * Variables and Scoping
    * Modules, Packages and Imports
    * Data Types & Data Structures
    * Python Programming Constructs
* Data Types & Data Structures
    * Lists
    * Dictionaries
    * Sets & Booleans
* Python Programming Constructs
    * Loops and Conditional Statements
    * Exception Handling
    * File I/O
# Thank You
***
### Coming up next...
- **Python Functions**: How to write modular functions to enable code reuse
- **NumPy**: Learn the basis of most numeric computation in Python
| github_jupyter |
# Install PyTorch
```
#pip install torch===1.6.0 torchvision===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
```
# Clone the repository to get the dataset
```
!git clone https://github.com/joanby/deeplearning-az.git
from google.colab import drive
drive.mount('/content/drive')
```
# Import the libraries
```
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
import torch.utils.data
from torch.autograd import Variable
```
# Import the dataset
```
movies = pd.read_csv("/content/deeplearning-az/datasets/Part 6 - AutoEncoders (AE)/ml-1m/movies.dat", sep = '::', header = None, engine = 'python', encoding = 'latin-1')
users = pd.read_csv("/content/deeplearning-az/datasets/Part 6 - AutoEncoders (AE)/ml-1m/users.dat", sep = '::', header = None, engine = 'python', encoding = 'latin-1')
ratings = pd.read_csv("/content/deeplearning-az/datasets/Part 6 - AutoEncoders (AE)/ml-1m/ratings.dat", sep = '::', header = None, engine = 'python', encoding = 'latin-1')
```
# Prepare the training set and the test set
```
training_set = pd.read_csv("/content/deeplearning-az/datasets/Part 6 - AutoEncoders (AE)/ml-100k/u1.base", sep = "\t", header = None)
training_set = np.array(training_set, dtype = "int")
test_set = pd.read_csv("/content/deeplearning-az/datasets/Part 6 - AutoEncoders (AE)/ml-100k/u1.test", sep = "\t", header = None)
test_set = np.array(test_set, dtype = "int")
```
# Get the number of users and movies
```
nb_users = int(max(max(training_set[:, 0]), max(test_set[:,0])))
nb_movies = int(max(max(training_set[:, 1]), max(test_set[:, 1])))
```
# Convert the data into an array X[u,i] with users u as rows and movies i as columns
```
def convert(data):
    new_data = []
    for id_user in range(1, nb_users + 1):
        id_movies = data[:, 1][data[:, 0] == id_user]
        id_ratings = data[:, 2][data[:, 0] == id_user]
        ratings = np.zeros(nb_movies)
        ratings[id_movies - 1] = id_ratings
        new_data.append(list(ratings))
    return new_data
training_set = convert(training_set)
test_set = convert(test_set)
```
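To sanity-check the conversion above, here is a standalone toy version (independent of the MovieLens globals, with made-up ratings) that builds the same users-by-movies matrix:

```
import numpy as np

def to_matrix(data, n_users, n_movies):
    # each row of `data` is (user_id, movie_id, rating), with ids starting at 1
    new_data = []
    for user in range(1, n_users + 1):
        rows = data[data[:, 0] == user]
        ratings = np.zeros(n_movies)
        ratings[rows[:, 1] - 1] = rows[:, 2]   # unrated movies stay 0
        new_data.append(ratings.tolist())
    return new_data

toy = np.array([[1, 1, 5], [1, 3, 4], [2, 2, 3]])
print(to_matrix(toy, n_users=2, n_movies=3))
# [[5.0, 0.0, 4.0], [0.0, 3.0, 0.0]]
```

Each user becomes one row; the zeros mark movies that user never rated, which is exactly what the autoencoder later masks out.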
# Convert the data to Torch tensors
```
training_set = torch.FloatTensor(training_set)
test_set = torch.FloatTensor(test_set)
```
# Create the neural network architecture
```
class SAE(nn.Module):
    def __init__(self):
        super(SAE, self).__init__()
        self.fc1 = nn.Linear(nb_movies, 20)
        self.fc2 = nn.Linear(20, 10)
        self.fc3 = nn.Linear(10, 20)
        self.fc4 = nn.Linear(20, nb_movies)
        self.activation = nn.Sigmoid()

    def forward(self, x):
        x = self.activation(self.fc1(x))
        x = self.activation(self.fc2(x))
        x = self.activation(self.fc3(x))
        x = self.fc4(x)
        return x
sae = SAE()
criterion = nn.MSELoss()
optimizer = optim.RMSprop(sae.parameters(), lr = 0.01, weight_decay = 0.5)
```
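The network above compresses each ratings vector from `nb_movies` inputs down to 20 and then 10 units, and decodes it back out. A NumPy sketch with random weights, purely to illustrate the shapes rather than the trained model (1682 is the ml-100k movie count):

```
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
nb_movies = 1682   # movie count in the ml-100k split used above

# small random matrices standing in for the weights of fc1..fc4
W1 = rng.normal(scale=0.01, size=(nb_movies, 20))
W2 = rng.normal(scale=0.01, size=(20, 10))
W3 = rng.normal(scale=0.01, size=(10, 20))
W4 = rng.normal(scale=0.01, size=(20, nb_movies))

x = rng.random(nb_movies)                          # one user's ratings vector
h = sigmoid(sigmoid(sigmoid(x @ W1) @ W2) @ W3)    # encode, then start decoding
reconstruction = h @ W4                            # last layer has no activation, as in SAE.forward
print(h.shape, reconstruction.shape)               # (20,) (1682,)
```

The 10-unit bottleneck in the middle is what forces the network to learn a compressed representation of each user's tastes.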
# Train the SAE
```
nb_epoch = 200
for epoch in range(1, nb_epoch + 1):
    train_loss = 0
    s = 0.
    for id_user in range(nb_users):
        input = Variable(training_set[id_user]).unsqueeze(0)
        target = input.clone()
        if torch.sum(target.data > 0) > 0:
            output = sae.forward(input)
            target.requires_grad = False
            output[target == 0] = 0
            loss = criterion(output, target)
            # average over the movies the user actually rated, not all movies
            mean_corrector = nb_movies / float(torch.sum(target.data > 0) + 1e-10)
            loss.backward()
            train_loss += np.sqrt(loss.data * mean_corrector)   # sum(errors) / n_rated_movies
            s += 1.
            optimizer.step()
    print("Epoch: " + str(epoch) + ", Loss: " + str(train_loss / s))
```
# Evaluate the test set on our SAE
```
test_loss = 0
s = 0.
for id_user in range(nb_users):
    input = Variable(training_set[id_user]).unsqueeze(0)
    target = Variable(test_set[id_user]).unsqueeze(0)
    if torch.sum(target.data > 0) > 0:
        output = sae.forward(input)
        target.requires_grad = False
        output[target == 0] = 0
        loss = criterion(output, target)
        # average over the movies the user actually rated, not all movies
        mean_corrector = nb_movies / float(torch.sum(target.data > 0) + 1e-10)
        test_loss += np.sqrt(loss.data * mean_corrector)   # sum(errors) / n_rated_movies
        s += 1.
print("Test Loss: " + str(test_loss / s))
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("phl_hec_all_confirmed.csv")
# df.head()
sns.heatmap(df.isnull(), yticklabels=False, cbar=False)

# drop columns that are too sparse or redundant to impute
df.drop(['P. Name KOI', 'P. Min Mass (EU)', 'P. Max Mass (EU)'], axis=1, inplace=True)

# fill categorical columns with their mode
for col in ['P. Zone Class', 'P. Mass Class', 'P. Composition Class',
            'P. Habitable Class', 'P. Atmosphere Class', 'S. Type']:
    df[col] = df[col].fillna(df[col].mode()[0])

# fill numeric columns with their mean
for col in ['P. Teq Min (K)', 'P. Teq Mean (K)', 'P. Teq Max (K)',
            'P. Mass (EU)', 'P. Radius (EU)', 'P. Density (EU)',
            'P. Gravity (EU)', 'P. Esc Vel (EU)',
            'P. Ts Min (K)', 'P. Ts Mean (K)', 'P. Ts Max (K)',
            'P. Surf Press (EU)', 'P. Mag', 'P. Appar Size (deg)',
            'P. Period (days)', 'P. Sem Major Axis (AU)', 'P. Eccentricity',
            'P. Mean Distance (AU)', 'P. Omega (deg)',
            'S. Mass (SU)', 'S. Radius (SU)', 'S. Teff (K)',
            'S. Luminosity (SU)', 'S. [Fe/H]', 'S. Age (Gyrs)',
            'S. Appar Mag', 'S. Distance (pc)', 'S. Mag from Planet',
            'S. Size from Planet (deg)', 'S. Hab Zone Min (AU)',
            'S. Hab Zone Max (AU)', 'P. HZD', 'P. HZC', 'P. HZA',
            'P. HZI', 'P. SPH', 'P. ESI']:
    df[col] = df[col].fillna(df[col].mean())

# fall back to the plain planet name where the Kepler name is missing
df['P. Name Kepler'] = df['P. Name Kepler'].fillna(df['P. Name'])
df.drop(['P. Inclination (deg)', 'S. Name HD', 'S. Name HIP'], axis=1, inplace=True)

# recover discovery-year values that ended up in a stray unnamed column
df['P. Disc. Year'] = df['P. Disc. Year'].fillna(df['Unnamed: 68'])
df.drop(['Unnamed: 68'], axis=1, inplace=True)
dset=df
import numpy as np
import pandas as pd
%matplotlib inline
from random import seed
from random import randrange #returns random numbers from a given range
from math import sqrt
import random
import matplotlib.pyplot as plt
random.seed(43)
# from decision_tree_functions import decision_tree_algorithm, decision_tree_predictions
# from helper_functions import train_test_split, calculate_accuracy
```
# Load and Prepare Data
#### Format of the data
- last column of the data frame must contain the label and it must also be called "label"
- there should be no missing values in the data frame
```
# dset=pd.read_csv("Cleaned_data1.csv") ;
from sklearn import preprocessing
df = dset.apply(preprocessing.LabelEncoder().fit_transform)
df['label'] = df['P. Habitable Class']
df.drop("P. Habitable Class", axis=1, inplace=True)
X = df.drop("label", axis=1)   # the feature matrix must not include the label itself
y = df["label"]
```
# Random Forest
```
import random
class Node:
    def __init__(self, data):
        # all the data that is held by this node
        self.data = data
        # left child node
        self.left = None
        # right child node
        self.right = None
        # category if the current node is a leaf node
        self.category = None
        # a tuple: (row, column), representing the point where we split the data
        # into the left/right node
        self.split_point = None
def build_model(train_data, n_trees, max_depth, min_size, n_features, n_sample_rate):
    trees = []
    for i in range(n_trees):
        random.shuffle(train_data)
        n_samples = int(len(train_data) * n_sample_rate)
        tree = build_tree(train_data[:n_samples], 1, max_depth, min_size, n_features)
        trees.append(tree)
    return trees
def predict_with_single_tree(tree, row):
    if tree.category is not None:
        return tree.category
    x, y = tree.split_point
    split_value = tree.data[x][y]
    if row[y] <= split_value:
        return predict_with_single_tree(tree.left, row)
    else:
        return predict_with_single_tree(tree.right, row)

def predict(trees, row):
    prediction = []
    for tree in trees:
        prediction.append(predict_with_single_tree(tree, row))
    return max(set(prediction), key=prediction.count)

def get_most_common_category(data):
    categories = [row[-1] for row in data]
    return max(set(categories), key=categories.count)
def build_tree(train_data, depth, max_depth, min_size, n_features):
    root = Node(train_data)
    x, y = get_split_point(train_data, n_features)
    left_group, right_group = split(train_data, x, y)
    if len(left_group) == 0 or len(right_group) == 0 or depth >= max_depth:
        root.category = get_most_common_category(left_group + right_group)
    else:
        root.split_point = (x, y)
        if len(left_group) < min_size:
            root.left = Node(left_group)
            root.left.category = get_most_common_category(left_group)
        else:
            root.left = build_tree(left_group, depth + 1, max_depth, min_size, n_features)
        if len(right_group) < min_size:
            root.right = Node(right_group)
            root.right.category = get_most_common_category(right_group)
        else:
            root.right = build_tree(right_group, depth + 1, max_depth, min_size, n_features)
    return root
def get_features(n_selected_features, n_total_features):
    features = [i for i in range(n_total_features)]
    random.shuffle(features)
    return features[:n_selected_features]

def get_categories(data):
    return set([row[-1] for row in data])

def get_split_point(data, n_features):
    n_total_features = len(data[0]) - 1
    features = get_features(n_features, n_total_features)
    categories = get_categories(data)
    x, y, gini_index = None, None, None
    for index in range(len(data)):
        for feature in features:
            left, right = split(data, index, feature)
            current_gini_index = get_gini_index(left, right, categories)
            if gini_index is None or current_gini_index < gini_index:
                x, y, gini_index = index, feature, current_gini_index
    return x, y

def get_gini_index(left, right, categories):
    gini_index = 0
    for group in left, right:
        if len(group) == 0:
            continue
        score = 0
        for category in categories:
            p = [row[-1] for row in group].count(category) / len(group)
            score += p * p
        gini_index += (1 - score) * (len(group) / len(left + right))
    return gini_index

def split(data, x, y):
    split_value = data[x][y]
    left, right = [], []
    for row in data:
        if row[y] <= split_value:
            left.append(row)
        else:
            right.append(row)
    return left, right
class CrossValidationSplitter:
    def __init__(self, data, k_fold):
        self.data = data
        self.k_fold = k_fold
        self.n_iteration = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.n_iteration >= self.k_fold:
            raise StopIteration
        self.n_iteration += 1
        return self.__load_data()

    def __load_data(self):
        n_train_data = (1 / self.k_fold) * len(self.data)
        data_copy = self.data[:]
        train_data = []
        while len(train_data) < n_train_data:
            train_data.append(self.__pop_random_row(data_copy))
        test_data = data_copy
        return train_data, test_data

    def __pop_random_row(self, data):
        random.shuffle(data)
        return data.pop(0)   # actually remove the row so train and validation sets don't overlap

def split_data(data, rate):
    random.shuffle(data)
    n_train_data = int(len(data) * rate)
    return data[:n_train_data], data[n_train_data:]
def calculate_accuracy(model, validate_data):
    n_total = 0
    n_correct = 0
    predicted_categories = [predict(model, row[:-1]) for row in validate_data]
    correct_categories = [row[-1] for row in validate_data]
    for predicted_category, correct_category in zip(predicted_categories, correct_categories):
        n_total += 1
        if predicted_category == correct_category:
            n_correct += 1
    return n_correct / n_total
data = df.values.tolist()
train_data_all, test_data = split_data(data, 0.9)
for n_tree in [1, 3, 10]:
    accuracies = []
    accuracies_test = []
    cross_validation_splitter = CrossValidationSplitter(train_data_all, 100)
    model = None
    for train_data, validate_data in cross_validation_splitter:
        n_features = int(sqrt(len(train_data[0]) - 1))
        model = build_model(
            train_data=train_data,
            n_trees=n_tree,
            max_depth=5,
            min_size=1,
            n_features=n_features,
            n_sample_rate=0.9
        )
        a2 = calculate_accuracy(model, test_data)
        a1 = calculate_accuracy(model, validate_data)
        accuracies.append(a1)
        accuracies_test.append(a2)
        # print(a1)
    ax = plt.axes()
    ax.plot(accuracies, label="Training Score")
    ax.plot(accuracies_test, label="Cross-validation Score")
    ax.set(xlim=(0, 100), ylim=(.85, 1),
           xlabel='CrossValidationSet', ylabel='Accuracy', label='CrossValidation Accuracy')
    # plt.show()
    plt.axhline(y=sum(accuracies) / len(accuracies), label='Mean Accuracy', linestyle='--', color='red')
    ax.legend()
    plt.show()
    print("Average cross validation accuracy for {} trees: {}".format(n_tree, np.mean(accuracies)))
    print("Test accuracy for {} trees: {}".format(n_tree, calculate_accuracy(model, test_data)))
# # Model (can also use single decision tree)
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=10)
# import matplotlib.pyplot as plt
model.fit(X, y)
# estimator = model.estimators_[5]
# col = X.columns
# y1 = estimator.feature_importances_
# maxElement = np.amax(y1)
# y1=np.delete(y1,maxElement)
# l1=[]
# cols=[]
# for i in range(len(y1)):
# if(y1[i]!=0):
# l1.append(i)
# for i in range(len(l1)):
# cols.append(col[l1[i]])
# print(cols)
# print(l1)
# print(y1)
# fig, ax = plt.subplots()
# width = 0.4 # the width of the bars
# ind = np.arange(len(l1))# the x locations for the groups
# print(ind)
# ax.barh(ind, l1, width, color="green")
# ax.set_yticks(ind+width/10)
# ax.set_yticklabels(cols, minor=False)
# plt.title('Feature importance in RandomForest Classifier')
# plt.xlabel('Relative importance')
# plt.ylabel('feature')
# plt.figure(figsize=(5,5))
# fig.set_size_inches(6.5, 4.5, forward=True)
from sklearn.model_selection import validation_curve
param_range = np.arange(1, 100, 1)
# Calculate accuracy on training and test set using range of parameter values
train_scores, test_scores = validation_curve(RandomForestClassifier(),
                                             X,
                                             y,
                                             param_name="n_estimators",
                                             param_range=param_range,
                                             cv=3,
                                             scoring="accuracy")
train_mean = np.mean(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
plt.plot(param_range, train_mean, label="Training score", color="black")
plt.plot(param_range, test_mean, label="Cross-validation score")
plt.title("Validation Curve With Random Forest")
plt.xlabel("Number Of Trees")
plt.ylabel("Accuracy Score")
plt.tight_layout()
plt.legend(loc="best")
plt.show()
```
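The weighted Gini impurity driving the splits above can be checked by hand on a tiny example; this standalone sketch re-implements the same formula, assuming each row's label sits in `row[-1]`:

```
def gini_index(left, right, categories):
    # weighted Gini impurity of a two-way split; labels sit in row[-1]
    gini = 0.0
    n_total = len(left) + len(right)
    for group in (left, right):
        if not group:
            continue
        score = 0.0
        for category in categories:
            p = [row[-1] for row in group].count(category) / len(group)
            score += p * p
        gini += (1 - score) * (len(group) / n_total)
    return gini

left = [[1, 'a'], [2, 'a']]    # pure group -> impurity 0
right = [[3, 'a'], [4, 'b']]   # 50/50 group -> impurity 0.5
print(gini_index(left, right, {'a', 'b'}))   # 0.25 = 0 * 0.5 + 0.5 * 0.5
```

A perfectly separating split scores 0, which is why the tree builder keeps the candidate with the smallest value.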
| github_jupyter |
---
title: "Pipes and Filters"
teaching: 25
exercises: 10
questions:
- "How can I combine existing commands to do new things?"
objectives:
- "Redirect a command's output to a file."
- "Process a file instead of keyboard input using redirection."
- "Construct command pipelines with two or more stages."
- "Explain what usually happens if a program or pipeline isn't given any input to process."
- "Explain Unix's 'small pieces, loosely joined' philosophy."
keypoints:
- "`wc` counts lines, words, and characters in its inputs."
- "`cat` displays the contents of its inputs."
- "`sort` sorts its inputs."
- "`head` displays the first 10 lines of its input."
- "`tail` displays the last 10 lines of its input."
- "`command > [file]` redirects a command's output to a file (overwriting any existing content)."
- "`command >> [file]` appends a command's output to a file."
- "`[first] | [second]` is a pipeline: the output of the first command is used as the input to the second."
- "The best way to use the shell is to use pipes to combine simple single-purpose programs (filters)."
---
Now that we know a few basic commands,
we can finally look at the shell's most powerful feature:
the ease with which it lets us combine existing programs in new ways.
We'll start with the directory called `shell-lesson-data/molecules`
that contains six files describing some simple organic molecules.
The `.pdb` extension indicates that these files are in Protein Data Bank format,
a simple text format that specifies the type and position of each atom in the molecule.
```
%%bash
$ ls molecules
```
{: .language-bash}
```
cubane.pdb ethane.pdb methane.pdb
octane.pdb pentane.pdb propane.pdb
```
{: .output}
Let's go into that directory with `cd` and run an example command `wc cubane.pdb`:
```
%%bash
$ cd molecules
$ wc cubane.pdb
```
{: .language-bash}
```
20 156 1158 cubane.pdb
```
{: .output}
`wc` is the 'word count' command:
it counts the number of lines, words, and characters in files (from left to right, in that order).
If we run the command `wc *.pdb`, the `*` in `*.pdb` matches zero or more characters,
so the shell turns `*.pdb` into a list of all `.pdb` files in the current directory:
```
%%bash
$ wc *.pdb
```
{: .language-bash}
```
20 156 1158 cubane.pdb
12 84 622 ethane.pdb
9 57 422 methane.pdb
30 246 1828 octane.pdb
21 165 1226 pentane.pdb
15 111 825 propane.pdb
107 819 6081 total
```
{: .output}
Note that `wc *.pdb` also shows the total number of all lines in the last line of the output.
If we run `wc -l` instead of just `wc`,
the output shows only the number of lines per file:
```
%%bash
$ wc -l *.pdb
```
{: .language-bash}
```
20 cubane.pdb
12 ethane.pdb
9 methane.pdb
30 octane.pdb
21 pentane.pdb
15 propane.pdb
107 total
```
{: .output}
The `-m` and `-w` options can also be used with the `wc` command, to show
only the number of characters or the number of words in the files.
> ## Why Isn't It Doing Anything?
>
> What happens if a command is supposed to process a file, but we
> don't give it a filename? For example, what if we type:
>
```
%%bash
> $ wc -l
```
> {: .language-bash}
>
> but don't type `*.pdb` (or anything else) after the command?
> Since it doesn't have any filenames, `wc` assumes it is supposed to
> process input given at the command prompt, so it just sits there and waits for us to give
> it some data interactively. From the outside, though, all we see is it
> sitting there: the command doesn't appear to do anything.
>
> If you make this kind of mistake, you can escape out of this state by holding down
> the control key (<kbd>Ctrl</kbd>) and typing the letter <kbd>C</kbd> once and
> letting go of the <kbd>Ctrl</kbd> key.
> <kbd>Ctrl</kbd>+<kbd>C</kbd>
{: .callout}
## Capturing output from commands
Which of these files contains the fewest lines?
It's an easy question to answer when there are only six files,
but what if there were 6000?
Our first step toward a solution is to run the command:
```
%%bash
$ wc -l *.pdb > lengths.txt
```
{: .language-bash}
The greater than symbol, `>`, tells the shell to **redirect** the command's output
to a file instead of printing it to the screen. (This is why there is no screen output:
everything that `wc` would have printed has gone into the
file `lengths.txt` instead.) The shell will create
the file if it doesn't exist. If the file exists, it will be
silently overwritten, which may lead to data loss and thus requires
some caution.
`ls lengths.txt` confirms that the file exists:
```
%%bash
$ ls lengths.txt
```
{: .language-bash}
```
lengths.txt
```
{: .output}
We can now send the content of `lengths.txt` to the screen using `cat lengths.txt`.
The `cat` command gets its name from 'concatenate' i.e. join together,
and it prints the contents of files one after another.
There's only one file in this case,
so `cat` just shows us what it contains:
```
%%bash
$ cat lengths.txt
```
{: .language-bash}
```
20 cubane.pdb
12 ethane.pdb
9 methane.pdb
30 octane.pdb
21 pentane.pdb
15 propane.pdb
107 total
```
{: .output}
> ## Output Page by Page
>
> We'll continue to use `cat` in this lesson, for convenience and consistency,
> but it has the disadvantage that it always dumps the whole file onto your screen.
> More useful in practice is the command `less`,
> which you use with `less lengths.txt`.
> This displays a screenful of the file, and then stops.
> You can go forward one screenful by pressing the spacebar,
> or back one by pressing `b`. Press `q` to quit.
{: .callout}
## Filtering output
Next we'll use the `sort` command to sort the contents of the `lengths.txt` file.
But first we'll use an exercise to learn a little about the sort command:
> ## What Does `sort -n` Do?
>
> The file [`shell-lesson-data/numbers.txt`](../shell-lesson-data/numbers.txt)
> contains the following lines:
>
```
> 10
> 2
> 19
> 22
> 6
```
> {: .source}
>
> If we run `sort` on this file, the output is:
>
```
> 10
> 19
> 2
> 22
> 6
```
> {: .output}
>
> If we run `sort -n` on the same file, we get this instead:
>
```
> 2
> 6
> 10
> 19
> 22
```
> {: .output}
>
> Explain why `-n` has this effect.
>
> > ## Solution
> > The `-n` option specifies a numerical rather than an alphanumerical sort.
> {: .solution}
{: .challenge}
We will also use the `-n` option to specify that the sort is
numerical instead of alphanumerical.
This does *not* change the file;
instead, it sends the sorted result to the screen:
```
%%bash
$ sort -n lengths.txt
```
{: .language-bash}
```
9 methane.pdb
12 ethane.pdb
15 propane.pdb
20 cubane.pdb
21 pentane.pdb
30 octane.pdb
107 total
```
{: .output}
We can put the sorted list of lines in another temporary file called `sorted-lengths.txt`
by putting `> sorted-lengths.txt` after the command,
just as we used `> lengths.txt` to put the output of `wc` into `lengths.txt`.
Once we've done that,
we can run another command called `head` to get the first few lines in `sorted-lengths.txt`:
```
%%bash
$ sort -n lengths.txt > sorted-lengths.txt
$ head -n 1 sorted-lengths.txt
```
{: .language-bash}
```
9 methane.pdb
```
{: .output}
Using `-n 1` with `head` tells it that
we only want the first line of the file;
`-n 20` would get the first 20,
and so on.
Since `sorted-lengths.txt` contains the lengths of our files ordered from least to greatest,
the output of `head` must be the file with the fewest lines.
> ## Redirecting to the same file
>
> It's a very bad idea to try redirecting
> the output of a command that operates on a file
> to the same file. For example:
>
```
%%bash
> $ sort -n lengths.txt > lengths.txt
```
> {: .language-bash}
>
> Doing something like this may give you
> incorrect results and/or delete
> the contents of `lengths.txt`.
{: .callout}
> ## What Does `>>` Mean?
>
> We have seen the use of `>`, but there is a similar operator `>>`
> which works slightly differently.
> We'll learn about the differences between these two operators by printing some strings.
> We can use the `echo` command to print strings e.g.
>
```
%%bash
> $ echo The echo command prints text
```
> {: .language-bash}
```
> The echo command prints text
```
> {: .output}
>
> Now test the commands below to reveal the difference between the two operators:
>
```
%%bash
> $ echo hello > testfile01.txt
```
> {: .language-bash}
>
> and:
>
```
%%bash
> $ echo hello >> testfile02.txt
```
> {: .language-bash}
>
> Hint: Try executing each command twice in a row and then examining the output files.
>
> > ## Solution
> > In the first example with `>`, the string 'hello' is written to `testfile01.txt`,
> > but the file gets overwritten each time we run the command.
> >
> > We see from the second example that the `>>` operator also writes 'hello' to a file
> > (in this case `testfile02.txt`),
> > but appends the string to the file if it already exists
> > (i.e. when we run it for the second time).
> {: .solution}
{: .challenge}
> ## Appending Data
>
> We have already met the `head` command, which prints lines from the start of a file.
> `tail` is similar, but prints lines from the end of a file instead.
>
> Consider the file `shell-lesson-data/data/animals.txt`.
> After these commands, select the answer that
> corresponds to the file `animals-subset.txt`:
>
```
%%bash
> $ head -n 3 animals.txt > animals-subset.txt
> $ tail -n 2 animals.txt >> animals-subset.txt
```
> {: .language-bash}
>
> 1. The first three lines of `animals.txt`
> 2. The last two lines of `animals.txt`
> 3. The first three lines and the last two lines of `animals.txt`
> 4. The second and third lines of `animals.txt`
>
> > ## Solution
> > Option 3 is correct.
> > For option 1 to be correct we would only run the `head` command.
> > For option 2 to be correct we would only run the `tail` command.
> > For option 4 to be correct we would have to pipe the output of `head` into `tail -n 2`
> > by doing `head -n 3 animals.txt | tail -n 2 > animals-subset.txt`
> {: .solution}
{: .challenge}
## Passing output to another command
In our example of finding the file with the fewest lines,
we are using two intermediate files `lengths.txt` and `sorted-lengths.txt` to store output.
This is a confusing way to work because
even once you understand what `wc`, `sort`, and `head` do,
those intermediate files make it hard to follow what's going on.
We can make it easier to understand by running `sort` and `head` together:
```
%%bash
$ sort -n lengths.txt | head -n 1
```
{: .language-bash}
```
9 methane.pdb
```
{: .output}
The vertical bar, `|`, between the two commands is called a **pipe**.
It tells the shell that we want to use
the output of the command on the left
as the input to the command on the right.
This has removed the need for the `sorted-lengths.txt` file.
## Combining multiple commands
Nothing prevents us from chaining pipes consecutively.
We can for example send the output of `wc` directly to `sort`,
and then the resulting output to `head`.
This removes the need for any intermediate files.
We'll start by using a pipe to send the output of `wc` to `sort`:
```
%%bash
$ wc -l *.pdb | sort -n
```
{: .language-bash}
```
9 methane.pdb
12 ethane.pdb
15 propane.pdb
20 cubane.pdb
21 pentane.pdb
30 octane.pdb
107 total
```
{: .output}
We can then send that output through another pipe, to `head`, so that the full pipeline becomes:
```
%%bash
$ wc -l *.pdb | sort -n | head -n 1
```
{: .language-bash}
```
9 methane.pdb
```
{: .output}
This is exactly like a mathematician nesting functions like *log(3x)*
and saying 'the log of three times *x*'.
In our case,
the calculation is 'head of sort of line count of `*.pdb`'.
The redirection and pipes used in the last few commands are illustrated below:

> ## Piping Commands Together
>
> In our current directory, we want to find the 3 files which have the least number of
> lines. Which command listed below would work?
>
> 1. `wc -l * > sort -n > head -n 3`
> 2. `wc -l * | sort -n | head -n 1-3`
> 3. `wc -l * | head -n 3 | sort -n`
> 4. `wc -l * | sort -n | head -n 3`
>
> > ## Solution
> > Option 4 is the solution.
> > The pipe character `|` is used to connect the output from one command to
> > the input of another.
> > `>` is used to redirect standard output to a file.
> > Try it in the `shell-lesson-data/molecules` directory!
> {: .solution}
{: .challenge}
## Tools designed to work together
This idea of linking programs together is why Unix has been so successful.
Instead of creating enormous programs that try to do many different things,
Unix programmers focus on creating lots of simple tools that each do one job well,
and that work well with each other.
This programming model is called 'pipes and filters'.
We've already seen pipes;
a **filter** is a program like `wc` or `sort`
that transforms a stream of input into a stream of output.
Almost all of the standard Unix tools can work this way:
unless told to do otherwise,
they read from standard input,
do something with what they've read,
and write to standard output.
The key is that any program that reads lines of text from standard input
and writes lines of text to standard output
can be combined with every other program that behaves this way as well.
You can *and should* write your programs this way
so that you and other people can put those programs into pipes to multiply their power.
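As a minimal sketch of this advice in Python (a hypothetical `shout.py` script, not part of the lesson's data), a program that honors the filter contract just reads lines from standard input and writes lines to standard output:

```python
import sys

def shout(lines):
    """A filter: upper-case every line of text it receives."""
    for line in lines:
        yield line.upper()

def main(stdin=sys.stdin, stdout=sys.stdout):
    # The Unix filter contract: read standard input, write standard output,
    # so this script could sit in a pipeline: cat file | python shout.py | sort
    for out in shout(stdin):
        stdout.write(out)
```

Because the script never opens a named file itself, it combines freely with `wc`, `sort`, `head`, and any other program that follows the same contract.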
> ## Pipe Reading Comprehension
>
> A file called `animals.txt` (in the `shell-lesson-data/data` folder) contains the following data:
>
```
> 2012-11-05,deer
> 2012-11-05,rabbit
> 2012-11-05,raccoon
> 2012-11-06,rabbit
> 2012-11-06,deer
> 2012-11-06,fox
> 2012-11-07,rabbit
> 2012-11-07,bear
```
> {: .source}
>
> What text passes through each of the pipes and the final redirect in the pipeline below?
>
```
%%bash
> $ cat animals.txt | head -n 5 | tail -n 3 | sort -r > final.txt
```
> {: .language-bash}
> Hint: build the pipeline up one command at a time to test your understanding
> > ## Solution
> > The `head` command extracts the first 5 lines from `animals.txt`.
> > Then, the last 3 lines are extracted from the previous 5 by using the `tail` command.
> > With the `sort -r` command those 3 lines are sorted in reverse order and finally,
> > the output is redirected to a file `final.txt`.
> > The content of this file can be checked by executing `cat final.txt`.
> > The file should contain the following lines:
> > ```
> > 2012-11-06,rabbit
> > 2012-11-06,deer
> > 2012-11-05,raccoon
> > ```
> > {: .source}
> {: .solution}
{: .challenge}
> ## Pipe Construction
>
> For the file `animals.txt` from the previous exercise, consider the following command:
>
```
%%bash
> $ cut -d , -f 2 animals.txt
```
> {: .language-bash}
>
> The `cut` command is used to remove or 'cut out' certain sections of each line in the file,
> and `cut` expects the lines to be separated into columns by a <kbd>Tab</kbd> character.
> A character used in this way is called a **delimiter**.
> In the example above we use the `-d` option to specify the comma as our delimiter character.
> We have also used the `-f` option to specify that we want to extract the second field (column).
> This gives the following output:
>
```
> deer
> rabbit
> raccoon
> rabbit
> deer
> fox
> rabbit
> bear
```
> {: .output}
>
> The `uniq` command filters out adjacent matching lines in a file.
> How could you extend this pipeline (using `uniq` and another command) to find
> out what animals the file contains (without any duplicates in their
> names)?
>
> > ## Solution
> > ```
> > $ cut -d , -f 2 animals.txt | sort | uniq
> > ```
> > {: .language-bash}
> {: .solution}
{: .challenge}
> ## Which Pipe?
>
> The file `animals.txt` contains 8 lines of data formatted as follows:
>
```
> 2012-11-05,deer
> 2012-11-05,rabbit
> 2012-11-05,raccoon
> 2012-11-06,rabbit
> ...
```
> {: .output}
>
> The `uniq` command has a `-c` option which gives a count of the
> number of times a line occurs in its input. Assuming your current
> directory is `shell-lesson-data/data/`, what command would you use to produce
> a table that shows the total count of each type of animal in the file?
>
> 1. `sort animals.txt | uniq -c`
> 2. `sort -t, -k2,2 animals.txt | uniq -c`
> 3. `cut -d, -f 2 animals.txt | uniq -c`
> 4. `cut -d, -f 2 animals.txt | sort | uniq -c`
> 5. `cut -d, -f 2 animals.txt | sort | uniq -c | wc -l`
>
> > ## Solution
> > Option 4. is the correct answer.
> > If you have difficulty understanding why, try running the commands, or sub-sections of
> > the pipelines (make sure you are in the `shell-lesson-data/data` directory).
> {: .solution}
{: .challenge}
## Nelle's Pipeline: Checking Files
Nelle has run her samples through the assay machines
and created 17 files in the `north-pacific-gyre/2012-07-03` directory described earlier.
As a quick check, starting from her home directory, Nelle types:
```
%%bash
$ cd north-pacific-gyre/2012-07-03
$ wc -l *.txt
```
{: .language-bash}
The output is 18 lines that look like this:
```
300 NENE01729A.txt
300 NENE01729B.txt
300 NENE01736A.txt
300 NENE01751A.txt
300 NENE01751B.txt
300 NENE01812A.txt
... ...
```
{: .output}
Now she types this:
```
%%bash
$ wc -l *.txt | sort -n | head -n 5
```
{: .language-bash}
```
240 NENE02018B.txt
300 NENE01729A.txt
300 NENE01729B.txt
300 NENE01736A.txt
300 NENE01751A.txt
```
{: .output}
Whoops: one of the files is 60 lines shorter than the others.
When she goes back and checks it,
she sees that she did that assay at 8:00 on a Monday morning --- someone
was probably in using the machine on the weekend,
and she forgot to reset it.
Before re-running that sample,
she checks to see if any files have too much data:
```
%%bash
$ wc -l *.txt | sort -n | tail -n 5
```
{: .language-bash}
```
300 NENE02040B.txt
300 NENE02040Z.txt
300 NENE02043A.txt
300 NENE02043B.txt
5040 total
```
{: .output}
Those numbers look good --- but what's that 'Z' doing there in the third-to-last line?
All of her samples should be marked 'A' or 'B';
by convention,
her lab uses 'Z' to indicate samples with missing information.
To find others like it, she does this:
```
%%bash
$ ls *Z.txt
```
{: .language-bash}
```
%load_ext autoreload
%autoreload 2
import molsysmt as msm
```
# Info
*Printing out summary information of a molecular system*
MolSysMT provides a method to print out a brief overview of a molecular system and its elements. The output of this method can be a `pandas.DataFrame` or a `string`. Let's load a molecular system to illustrate how it works with some simple examples:
```
molecular_system = msm.convert('pdb_id:1tcd', to_form='molsysmt.MolSys')
```
## As a DataFrame
### Summary information on atoms
The method `molsysmt.info()` can be applied over any element of the molecular system. Let's see an example where the summary information is shown for a set of atoms when the input argument `output='dataframe'`:
```
msm.info(molecular_system, element='atom', indices=[9,10,11,12], output='dataframe')
output = msm.info(molecular_system, element='atom', indices=[9,10,11,12], output='dataframe')
output.data.to_dict()
```
The method can also take a selection input argument:
```
msm.info(molecular_system, element='atom', selection='group_index==6')
```
Notice that the default option for `output` is 'dataframe'.
### Summary information on groups
Let's see an example where the summary information is shown for a set of groups:
```
msm.info(molecular_system, element='group', indices=[20,21,22,23])
```
### Summary information on components
Here is an example of how the method `molsysmt.info()` works over components:
```
msm.info(molecular_system, element='component', selection='molecule_type!="water"')
```
### Summary information on chains
If the summary information on all chains in the molecular system needs to be printed out:
```
msm.info(molecular_system, element='chain')
```
### Summary information on molecules
The following is an example of how the method works when the selected element is 'molecule':
```
msm.info(molecular_system, element='molecule', selection='molecule_type!="water"')
```
### Summary information on entities
If the selected element is 'entity', the method prints out the following summary information:
```
msm.info(molecular_system, element='entity')
```
### Summary information on a molecular system
Finally, summary information can be shown for the whole molecular system as follows:
```
msm.info(molecular_system)
topology, structures = msm.convert(molecular_system, to_form=['molsysmt.Topology','molsysmt.Structures'])
msm.info(topology)
msm.info(structures)
msm.info([topology, structures])
```
## As a string
The method `molsysmt.info()` can also return a string, short or long, with key information to identify the selected element.
### Summary information on atoms
If we only need to get a short string encoding the main attributes of an atom, the input argument `output` should take the value 'short_string':
```
msm.info(molecular_system, element='atom', indices=10, output='short_string')
```
The string is nothing but the atom name, the atom id and the atom index, with '-' between the name and the id, and '@' between the id and the index. The input argument `indices` also accepts a list of indices:
```
msm.info(molecular_system, element='atom', indices=[10,11,12,13], output='short_string')
```
The long version of the string includes the short string of the group, chain and molecule the atom belongs to; with the character '/' in between:
```
msm.info(molecular_system, element='atom', indices=10, output='long_string')
```
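The encoding described above can be sketched with a small plain-Python helper (a hypothetical illustration, not part of the MolSysMT API), just to make the separator characters explicit:

```python
def short_string(name, element_id, index):
    # 'name-id@index' encoding: '-' between name and id, '@' before the index
    return f"{name}-{element_id}@{index}"

def long_string(*levels):
    # Long strings join the short strings of the enclosing elements with '/'
    return "/".join(levels)

# e.g. a hypothetical atom 'CA' with id 12 and index 10 inside group 'ALA-2@1'
atom = short_string("CA", 12, 10)
full = long_string(atom, short_string("ALA", 2, 1))
```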
### Summary information on groups
The short string corresponding to a group is composed of its name, id and index. The characters used as separators are the same as with atoms: '-' between name and id, and '@' between id and index.
```
msm.info(molecular_system, element='group', indices=0, output='short_string')
```
The long version of the string includes the short string for the chain and molecule the group belongs to:
```
msm.info(molecular_system, element='group', indices=3, output='long_string')
```
### Summary information on components
The short string with the summary information of a component is its index only:
```
msm.info(molecular_system, element='component', indices=2, output='short_string')
```
The long version of the string includes the chain and molecule the component belongs to with the character '/' as separator.
```
msm.info(molecular_system, element='component', indices=2, output='long_string')
```
### Summary information on chains
Just like with atoms and groups, the short version of the chain string is made up of the sequence of attributes: name, id and index. The character '-' sits between the chain name and the chain id, and '@' precedes the chain index:
```
msm.info(molecular_system, element='chain', indices=2, output='short_string')
```
The long version of the string in this case is the same as the short one:
```
msm.info(molecular_system, element='chain', indices=2, output='long_string')
```
### Summary information on molecules
Molecules have no relevant id attribute, which is why in this case the short string is the molecule name followed by the character '@' and the molecule index:
```
msm.info(molecular_system, element='molecule', indices=0, output='short_string')
```
As well as with chains, the short and long strings are equivalent here:
```
msm.info(molecular_system, element='molecule', indices=0, output='long_string')
```
### Summary information on entities
Entities have only two significant attributes. In this case the string uses the same encoding as before, with the character '@' between the name and the index.
```
msm.info(molecular_system, element='entity', indices=0, output='short_string')
```
The long string is equal to the short string when the element is an entity:
```
msm.info(molecular_system, element='entity', indices=0, output='long_string')
```
# [Detecting the difficulty level of French texts](https://www.kaggle.com/c/detecting-the-difficulty-level-of-french-texts/overview/evaluation)
## Hyper parameters tuning
---
In this notebook, we will use cross-validation to find the best parameters for the models that showed the most promising results in the first approach.
```
# Download the french language model
!python -m spacy download fr_core_news_md
import pandas as pd
import spacy
from spacy import displacy
import string
import numpy as np
import string
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.base import TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder, OneHotEncoder, LabelEncoder
from spacy.lang.en.stop_words import STOP_WORDS
from spacy.lang.en import English
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score, f1_score
from sklearn.utils.multiclass import unique_labels
np.random.seed(0)
def evaluate(y_true, pred):
"""
Calculate the models performance metrics.
Since it is a multi-class classification, we take the weighted average
for the metrics that are calculated for each class.
"""
report = {
'accuracy':accuracy_score(y_true, pred),
'recall':recall_score(y_true, pred, average='weighted'),
'precision':precision_score(y_true, pred, average='weighted'),
'f1_score':f1_score(y_true, pred, average='weighted')
}
return report
def plot_confusion_matrix(y_true, pred):
"""
A function to plot the models confusion matrix.
"""
cf_matrix = confusion_matrix(y_true, pred)
fig, ax = plt.subplots(1,1, figsize=(9, 6))
sns.heatmap(cf_matrix, ax=ax, annot=True,
annot_kws={"size": 16}, fmt='g')
ax.set_xticklabels(y_true.iloc[:6])
ax.set_yticklabels(y_true.iloc[:6])
ax.set_ylabel("Actual")
ax.set_xlabel("Predicted")
ax.set_title("Confusion matrix")
sp = spacy.load('fr_core_news_md')
# Import stopwords from spacy french language
stop_words = spacy.lang.fr.stop_words.STOP_WORDS
# Import punctations characters
punctuations = string.punctuation
df = pd.read_csv("https://raw.githubusercontent.com/LaCrazyTomato/Group-Project-DM-ML-2021/main/data/training_data.csv")
df.head()
```
This time we will optimize the tokenizer, with the aim of reducing dimensionality and improving accuracy. To do this, we will use the PorterStemmer combined with the WordNetLemmatizer from nltk, in order to keep only the root of each word.
We kept stop words, numbers and punctuation because we believed their occurrence counts can be predictors of sentence complexity (and this appears to be the case, since accuracy on the test set drops when we remove them).
```
import re
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer, SnowballStemmer, WordNetLemmatizer
nltk.download('punkt')
nltk.download('wordnet')  # required by WordNetLemmatizer
# Define cleaning function
def nltk_tokenizer(doc):
# Lowercase
doc = doc.lower()
# Tokenize and remove white spaces (strip)
doc = word_tokenize(doc)
doc = [word.lower().strip() for word in doc]
stemmer = PorterStemmer()
doc = [stemmer.stem(word) for word in doc]
lemma = WordNetLemmatizer()
doc = [lemma.lemmatize(word) for word in doc]
return doc
print(nltk_tokenizer(df.loc[2, 'sentence']))
tfidf_vectorizer = TfidfVectorizer(tokenizer=nltk_tokenizer)
X = df['sentence']
y = df['difficulty']
# Train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
X_train
```
# Models tuning
We will use [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) from sklearn to find the best hyperparameters. What is good about this module is that, in addition to the parameters of the classifier, it allows us to optimize the parameters of every preprocessor included in the pipeline (e.g. vectorizer, scaler, ...).
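The key detail is the `<step>__<parameter>` naming convention: grid keys are prefixed with the name of the pipeline step they target, separated by a double underscore. A quick check of this convention, using the same step names as our pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

pipe = Pipeline([('vectorizer', TfidfVectorizer()),
                 ('classifier', LogisticRegression())])

# Every tunable parameter is exposed as '<step>__<param>',
# which is exactly the key format GridSearchCV expects in param_grid.
keys = pipe.get_params().keys()
```

This is why keys like `vectorizer__ngram_range` and `classifier__penalty` appear side by side in the grids below.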
## 1. LogisticRegression
### 1.1 Reminder : accuracy from first approach -> 46.56 %
```
LRCV_model = LogisticRegression()
pipe = Pipeline([('vectorizer', tfidf_vectorizer),
('classifier', LRCV_model)])
pipe.fit(X_train, y_train)
pred = pipe.predict(X_test)
evaluate(y_test, pred)
```
With optimized text cleaning, we managed to **increase accuracy by approximately 160 basis points.**
### 1.2 Tuning
```
from sklearn.datasets import make_classification
from sklearn.model_selection import HalvingGridSearchCV, GridSearchCV
import pandas as pd
from sklearn.preprocessing import StandardScaler
tfidf_vector = TfidfVectorizer()
LR_model = LogisticRegression(random_state=0)
pipe = Pipeline([('vectorizer', tfidf_vector),
('classifier', LR_model)])
# We define here all the parameters we want the CV to do combination with.
param_grid = {'classifier__solver': ['lbfgs'],
'classifier__penalty': ['l2', 'none'],
'classifier__max_iter': [10_000],
'vectorizer__tokenizer':[nltk_tokenizer],
'vectorizer__ngram_range':[(1,3), (1,4), (1,5), (1,6)],
'vectorizer__analyzer':['word', 'char'],
'vectorizer__norm':['l1', 'l2'],
'vectorizer__max_df':[0.7, 0.8, 0.9, 1.0],
'vectorizer__min_df':[0, 1, 2],
}
grid_search_params = dict(estimator=pipe,
param_grid=param_grid,
verbose=10)
LR_cross_validation = GridSearchCV(**grid_search_params)
LR_cross_validation
```
All parameter combinations give **384 possible models**. For each of these, the cross-validation does 5 splits, for a total of 1,920 fits.
It took a long time, so we saved the results to a CSV file.
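The count can be verified directly from the sizes of the parameter lists above:

```python
from math import prod

# Number of values in each list of param_grid above
grid_sizes = {
    'classifier__solver': 1,
    'classifier__penalty': 2,
    'classifier__max_iter': 1,
    'vectorizer__tokenizer': 1,
    'vectorizer__ngram_range': 4,
    'vectorizer__analyzer': 2,
    'vectorizer__norm': 2,
    'vectorizer__max_df': 4,
    'vectorizer__min_df': 3,
}
n_models = prod(grid_sizes.values())  # candidate parameter combinations
n_fits = n_models * 5                 # GridSearchCV defaults to 5-fold CV
```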
```
#%%time
#LR_cross_validation.fit(X, y)
#results = pd.DataFrame(LR_cross_validation.cv_results_)
#results.to_csv("LR_cross_validation.csv")
results = pd.read_csv("LR_cross_validation.csv")
results.head()
sns.histplot(results.mean_test_score, kde=True)
fig, ax = plt.subplots(1,2, figsize=(12, 4))
sns.scatterplot(data=results, x="param_classifier__penalty",
y="mean_test_score",
hue="param_vectorizer__analyzer",
ax=ax[0])
sns.scatterplot(data=results, x="param_vectorizer__ngram_range",
y="mean_test_score",
hue="param_vectorizer__norm",
ax=ax[1])
best = results[results.rank_test_score==1]
display(best)
dict(best.params)
```
### 1.3 Testing accuracy with best parameters found
```
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
import pandas as pd
from sklearn.preprocessing import StandardScaler
vectorizer = TfidfVectorizer(tokenizer=nltk_tokenizer,
ngram_range=(1, 6),
analyzer='char',
min_df=2,
max_df=0.7,
norm='l2')
model = LogisticRegression(max_iter=10_000,
penalty='l2',
solver='lbfgs')
pipe = Pipeline([('vectorizer', vectorizer),
('classifier', model)])
pipe.fit(X_train, y_train)
pipe.score(X_test, y_test)
```
We improved accuracy on the testing set by 220 basis points.
## 2. Random Forest
### 2.1 Reminder : accuracy from first approach -> 39.79 %
For the vectorizer, we will keep the best parameters found previously.
```
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
vectorizer = TfidfVectorizer(tokenizer=nltk_tokenizer,
ngram_range=(1, 6),
analyzer='char',
min_df=2,
max_df=0.7,
norm='l2')
random_forest_model = RandomForestClassifier(random_state=0)
pipe = Pipeline([('vectorizer', vectorizer),
('classifier', random_forest_model)])
# We define here all the parameters we want the CV to do combination with.
param_grid = {'classifier__criterion': ['entropy', 'gini'],
'classifier__min_samples_split': [2, 4, 6],
'classifier__max_features': ["auto", "sqrt", "log2"],
}
grid_search_params = dict(estimator=pipe,
param_grid=param_grid,
verbose=10)
random_forest_cross_validation = GridSearchCV(**grid_search_params)
random_forest_cross_validation
#%%time
#random_forest_cross_validation.fit(X, y)
#results = pd.DataFrame(random_forest_cross_validation.cv_results_)
#results.to_csv("random_forest_cross_validation.csv")
results = pd.read_csv("random_forest_cross_validation.csv")
best = results[results.rank_test_score==1]
display(best)
dict(best.params)
sns.histplot(results.mean_test_score, kde=True)
fig, ax = plt.subplots(1,2, figsize=(12, 4))
sns.scatterplot(data=results, x="param_classifier__min_samples_split",
y="mean_test_score",
hue="param_classifier__criterion",
ax=ax[0])
sns.scatterplot(data=results, x="param_classifier__max_features",
y="mean_test_score",
hue="param_classifier__criterion",
ax=ax[1])
```
### 2.2 Testing accuracy with best parameters found
```
vectorizer = TfidfVectorizer(tokenizer=nltk_tokenizer,
ngram_range=(1, 6),
analyzer='char',
min_df=2,
max_df=0.7,
norm='l2')
model = RandomForestClassifier(criterion='gini',
max_features='auto',
min_samples_split=2)
pipe = Pipeline([('vectorizer', vectorizer),
('classifier', model)])
pipe.fit(X_train, y_train)
pipe.score(X_test, y_test)
```
## 3. Ridge classifier
### 3.1 Reminder : accuracy from first approach -> 46.77 %
```
from sklearn.linear_model import RidgeClassifier
vectorizer = TfidfVectorizer(tokenizer=nltk_tokenizer,
ngram_range=(1, 6),
analyzer='char',
min_df=2,
max_df=0.7,
norm='l2')
ridge_model = RidgeClassifier(random_state=0)
ridge_pipe = Pipeline([('vectorizer', vectorizer),
('classifier', ridge_model)])
param_grid = {'classifier__alpha': [0.8, 1.0, 1.2],
'classifier__max_iter': [10_000],
'classifier__solver':['auto', 'sparse_cg', 'sag']
}
grid_search_params = dict(estimator=ridge_pipe,
param_grid=param_grid,
verbose=10)
ridge_cross_validation = GridSearchCV(**grid_search_params)
ridge_cross_validation
#%%time
#ridge_cross_validation.fit(X, y)
#results = pd.DataFrame(ridge_cross_validation.cv_results_)
#results.to_csv("ridge_cross_validation.csv")
results = pd.read_csv("ridge_cross_validation.csv")
best = results[results.rank_test_score==1]
display(best)
dict(best.params)
```
### 3.2 Testing accuracy with best parameters found
```
model = RidgeClassifier(random_state=0,
max_iter=10_000,
alpha=1.2,
solver='auto')
pipe = Pipeline([('vectorizer', vectorizer),
('classifier', model)])
pipe.fit(X_train, y_train)
pipe.score(X_test, y_test)
```
## 4. Perceptron classifier
### 4.1 Reminder : accuracy from first approach -> 41.04 %
```
from sklearn.linear_model import Perceptron
vectorizer = TfidfVectorizer(tokenizer=nltk_tokenizer,
ngram_range=(1, 6),
analyzer='char',
min_df=2,
max_df=0.7,
norm='l2')
perceptron_model = Perceptron(random_state=0)
perceptron_pipe = Pipeline([('vectorizer', vectorizer),
('classifier', perceptron_model)])
param_grid = {'classifier__alpha': [0.0001, 0.0003, 0.005],
}
grid_search_params = dict(estimator=perceptron_pipe,
param_grid=param_grid,
verbose=10)
perceptron_cross_validation = GridSearchCV(**grid_search_params)
perceptron_cross_validation
#%%time
#perceptron_cross_validation.fit(X, y)
#results = pd.DataFrame(perceptron_cross_validation.cv_results_)
#results.to_csv("perceptron_cross_validation.csv")
results = pd.read_csv("perceptron_cross_validation.csv")
best = results[results.rank_test_score==1]
display(best)
dict(best.params)
```
### 4.2 Testing accuracy with best parameters found
An alpha of 0.0001 works well, and this is the default parameter. Therefore, we won't specify any parameters for the classifier and will use the defaults.
```
model = Perceptron()
pipe = Pipeline([('vectorizer', vectorizer),
('classifier', model)])
pipe.fit(X_train, y_train)
pipe.score(X_test, y_test)
```
Now that we have our models with optimal parameters, we will implement techniques (such as PCA, scaling, stacking, ...) and/or additional features to improve the accuracy.
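As a sketch of where this is heading, scikit-learn's `StackingClassifier` can combine several tuned estimators under a final meta-model. The example below uses synthetic data rather than our TF-IDF pipeline, so the score is only illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.model_selection import train_test_split

# Synthetic multi-class data standing in for our sentence features
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Base estimators feed their predictions to a final logistic regression
stack = StackingClassifier(
    estimators=[('ridge', RidgeClassifier(alpha=1.2)),
                ('lr', LogisticRegression(max_iter=10_000))],
    final_estimator=LogisticRegression(max_iter=10_000))
stack.fit(X_tr, y_tr)
score = stack.score(X_te, y_te)
```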
#### Python Programming Fundamentals for the Social Sciences
## Web-scraping tables. Preparing for the independent assignment
Seminar 7
*Author: Tatyana Rogovich, HSE University*
This notebook will help you figure out how to approach the independent assignment. One of its tasks is scraping a table from Wikipedia. Let's walk through an example of how to do this. From the online course you already know how to use libraries for accessing sites by URL and the BeautifulSoup library for searching tags. Today we'll look at an example of saving a table from the wiki.
**Task 1.**
*5 points*
1. On the wiki page https://en.wikipedia.org/wiki/List_of_nuclear_weapons_tests, find the table titled "Worldwide nuclear test with a yield of 1.4 Mt TNT equivalent and more".
2. Using tag search, save the following columns from the table: 'Date (GMT)', 'Yield (megatons)', 'Country'. Each column must be saved into a separate variable holding a list whose first value is the column name. For example, for the 'Date (GMT)' column the list will look like this:
['Date (GMT)', 'October 31, 1952', ...remaining values..., 'November 17, 1976']
3. Print these three lists with the commands
print(Dates)
print(Yield)
print(Country)
```
# Your solution. If needed, create new cells below this one with the + button
```
**Task 2.**
*5 points (1 point per step)*
1. Write a function that takes a country name as an argument and returns the average explosion yield for that country (sum all the values from the 'Yield (megatons)' column that correspond to the country, e.g. the USA, and divide by the number of those values). Use the lists you extracted in Task 1.
2. From the Country list keep only the unique country values and run your function in a loop over each Country value. Inside the loop print the following: "{country name}: average explosion yield {average yield} megatons"
3. Separately save to a variable and print the average explosion yield (Yield (megatons)) for the bombs tested by the USA.
4. Separately save to a variable and print the average explosion yield (Yield (megatons)) for the bombs tested by the Soviet Union.
5. Compare these values and print the name of the country whose average explosion yield is higher.
A solution that does not use automatically collected data will not be accepted (for example, if you copied all the table values by hand and computed their average).
```
# Your solution. If needed, create new cells below this one with the + button
# Function
# Loop over countries
# Value for the USA
# Value for the USSR
# Comparison of the values
```
# Example solution for Task 1
First we import the `requests` library. It lets us send HTTP/1.1 requests simply and conveniently, without any manual drudgery.
```
import requests
```
Now we specify the address of the page we are going to scrape and save it in the variable `website_url`.
`requests.get(url).text` requests the site and returns its `HTML` code.
```
website_url = requests.get('https://en.wikipedia.org/wiki/List_of_nuclear_weapons_tests').text
website_url
```
As we can see, the whole code is returned as one block of text that is inconvenient to read and parse. So we create a `BeautifulSoup` object with the `BeautifulSoup` function, after importing the library itself. `Beautiful Soup` is a library for parsing `HTML` and `XML` documents. It builds a tree out of the `HTML` code, which is very useful for scraping. The `prettify()` function displays the code in a more convenient form, including indentation by tag.
```
from bs4 import BeautifulSoup
soup = BeautifulSoup(website_url,'lxml')
print(soup.prettify())
```
If you examine the `HTML` code of the target table closely, you'll find that the whole table is in the class `wikitable sortable`. (To display a site's code in your browser, right-click the table and choose *Inspect element*.)

So the first task is to find the *wikitable sortable* class in the `HTML` code. This can be done with the `find_all` function, passing as arguments that we are looking for a `table` tag with the class `wikitable sortable`.
```
My_table = soup.find_all('table',{'class':'wikitable sortable'})
My_table
```
But as you may have noticed, the page has two tables belonging to this class. The `find_all` function returns all matches as a list, so let's check the second element it found.
```
My_table[1]
```
That's right, this is the table we were looking for. Studying the table's contents further, it becomes clear that the `th` tags hold the table header and the `td` tags hold the table cells, and both of these tags sit inside `tr` tags, each of which is in effect one table row. Let's extract all the table rows, again using the `find_all` function.
```
rows = My_table[1].find_all('tr')
rows
```
Let's take a close look at the contents of one row and pull out all its `td` cells. Here is the second row:
```
rows[1].find_all('td')
```
We can see the data we need between `<td></td>` tags, as well as links in `<a>` tags, and even mixed cells containing both. Let's start by extracting just the text. For that we use `get_text()`, which returns everything between the tags.
Take the date, for example (it will be the first element):
```
rows[1].find_all('td')[0].get_text()
```
The only row we need to handle separately is the first one, which holds the column headers (the `th` tags):
```
rows[0].find_all('th')[0].get_text()
```
Great, now let's just get rid of the newline character.
```
rows[0].find_all('th')[0].get_text().strip()
```
In general, it's a good idea to always call `strip()` to remove such characters (it won't raise an error if there is nothing to remove).
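A quick illustration of that safety:

```python
# strip() removes surrounding whitespace, including newlines
print("Date\n".strip())  # -> Date
# With nothing to remove, the string comes back unchanged and no error is raised
print("Date".strip())    # -> Date
```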
Now let's extract all the dates. We'll create a list `Dates` to store them and iterate over all the rows:
```
Dates = []
Dates.append(rows[0].find_all('th')[0].get_text().strip())  # add the header separately
for row in rows[1:]:  # start from the second row, since row 0 was handled above
    r = row.find_all('td')  # find all td tags in this table row
    Dates.append(r[0].get_text().strip())  # save the cell text to our list
Dates
```
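Since we will repeat this pattern for every column, it can be factored into a small helper. The sketch below is our own addition (the name `extract_column` is not from the tutorial) and runs on a tiny hard-coded table rather than the live page:

```python
from bs4 import BeautifulSoup

def extract_column(rows, index):
    """Collect the header (from th) and cell texts (from td) for one column."""
    column = [rows[0].find_all('th')[index].get_text().strip()]  # header row
    for row in rows[1:]:
        cells = row.find_all('td')
        column.append(cells[index].get_text().strip())
    return column

# A minimal stand-in for the Wikipedia table
html = """<table>
<tr><th>Date</th><th>Yield (megatons)</th></tr>
<tr><td>1952</td><td>10.4</td></tr>
<tr><td>1954</td><td>15</td></tr>
</table>"""
rows = BeautifulSoup(html, "html.parser").find_all("tr")
print(extract_column(rows, 0))  # ['Date', '1952', '1954']
```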
OK! The next columns we need are the explosion yield and the country. Let's figure out where to find them.
```
rows[0]
```
We can see that Yield is the second column (index 1) and the country sits at index 3. We'll collect them into separate lists using the same approach as for the dates, but first let's check that we picked the right indices.
```
print(rows[0].find_all('th')[1])
print(rows[0].find_all('th')[3])
```
Looks right. Just remember to store the numbers as floats.
```
Yield = []
Yield.append(rows[0].find_all('th')[1].get_text().strip())  # add the header separately
for row in rows[1:]:  # start from the second row, since row 0 was handled above
    r = row.find_all('td')  # find all td tags in this table row
    Yield.append(float(r[1].get_text().strip()))  # save the value to our list, converted to float

Country = []
Country.append(rows[0].find_all('th')[3].get_text().strip())  # add the header separately
for row in rows[1:]:  # start from the second row, since row 0 was handled above
    r = row.find_all('td')  # find all td tags in this table row
    Country.append(r[3].get_text().strip())  # save the value to our list

print(Dates)
print(Yield)
print(Country)
```
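With the three parallel lists in hand, one convenient next step is to zip them into per-test records. This is a sketch using sample values standing in for the scraped lists (each list starts with its header, as above):

```python
# Sample values standing in for the scraped lists (header first, then data)
Dates = ['Date', '1952', '1961']
Yield = ['Yield (megatons)', 10.4, 50.0]
Country = ['Country', 'USA', 'Soviet Union']

# Use the first elements as keys and zip the rest into row dictionaries
headers = (Dates[0], Yield[0], Country[0])
records = [dict(zip(headers, row)) for row in zip(Dates[1:], Yield[1:], Country[1:])]
print(records[0])  # {'Date': '1952', 'Yield (megatons)': 10.4, 'Country': 'USA'}
```

Such a list of dictionaries can be passed straight to `pd.DataFrame(records)` if you want a table.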
# Sample solution for Task 2
1. Write a function that takes a country name as an argument and returns the average explosion yield for that country (sum all values from the 'Yield (megatons)' column that correspond to the country, e.g. the USA, and divide by the number of those values). For the calculation, use the lists you extracted in Task 1.
2. Keep only the unique country values from the Country list and run your function in a loop over each value. Inside the loop, print the following: "{country name}: average explosion yield {average yield} megatons"
3. Separately store in a variable and print the average explosion yield (Yield (megatons)) for bombs tested by the USA.
4. Separately store in a variable and print the average explosion yield (Yield (megatons)) for bombs tested by the Soviet Union.
5. Compare these values and print the name of the country with the higher average explosion yield.
```
# 1
def average_yield(country):
    yield_sum = 0  # accumulator that we add each test's yield to for the given country
    yield_count = 0  # counter for the number of tests
    for idx in range(len(Country)):  # loop over all indices of the Country list
        if Country[idx] == country:  # does this entry match the country the function was called for?
            yield_sum += Yield[idx]  # if so, add the yield stored at the same index
            yield_count += 1  # and count this test
    return yield_sum / yield_count  # once the loop finishes, return the average yield

# 2
for country in set(Country[1:]):  # a set keeps only the unique values; the slice drops the column header at index [0]
    print(country, ': average explosion yield', average_yield(country), 'megatons')

# 3, 4
yield_ussr = average_yield('Soviet Union')
yield_usa = average_yield('USA')
print(yield_ussr, yield_usa)

# 5
if yield_ussr > yield_usa:
    print('Soviet Union')
else:
    print('USA')
```
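An alternative to calling the function once per country is a single pass that accumulates sums and counts per country in dictionaries. This sketch uses sample lists (headers already stripped), and the helper name `average_yields` is our own:

```python
def average_yields(countries, yields):
    """Compute the average yield per country in one pass over the data."""
    sums, counts = {}, {}
    for country, y in zip(countries, yields):
        sums[country] = sums.get(country, 0) + y
        counts[country] = counts.get(country, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

# Sample data standing in for the scraped lists
countries = ['USA', 'Soviet Union', 'USA']
yields = [10.4, 50.0, 15.0]
print(average_yields(countries, yields))  # USA: (10.4 + 15.0) / 2, i.e. about 12.7
```

For large tables this avoids rescanning the full list once per country.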